How to Leave Your Journal Impact Factor

Online Information Review

ISSN: 1468-4527

Article publication date: 12 April 2013


Citation

Gorman, G. (2013), "How to Leave Your Journal Impact Factor", Online Information Review, Vol. 37 No. 2. https://doi.org/10.1108/oir.2013.26437baa.001

Publisher: Emerald Group Publishing Limited

Copyright © 2013, Emerald Group Publishing Limited



Article Type: Editorial. From: Online Information Review, Volume 37, Issue 2.

The growing cacophony of voices raised in protest at the journal impact factor (JIF), how it is calculated and how it is used and abused in the academic community, suggests that a storming of the Thomson Reuters Bastille is at hand, and that alternatives are looking better than ever.

Until perhaps the mid-1990s most of us were either unaware of, or largely unconcerned about, JIFs, citation counts and other bean-counting measures beloved of those who can’t read but can count. We tended to feel that the voices raised in protest were a minority, and often from a limited range of disciplines, particularly the medical sciences – but no longer.

There is today a significant corpus of literature on the topic of the JIF, and considerable disagreement on most aspects. Some support the JIF as a reasonable way of evaluating journals, research quality and individual researchers; others think the system merely requires tweaking; and still others (the most vocal) believe we should jettison the whole enterprise in favour of some other means of assessment.

In “Sick of impact factors: coda” Stephen Curry (2012a, b), of Imperial College London and an inveterate blogger, notes the range of new literature on the topic of the JIF as a way of consolidating the arguments to date – but this only scratches the surface. Unless one has been following the discourse carefully for many years, it is now almost impossible to grasp adequately what has been happening. Fortunately, two recent books, by Goldfinch and Yamamoto (2012) and by Gould (2013), go some way towards consolidating the literature up to the present. Goldfinch and Yamamoto (37 pages of references) range widely across the field of research assessment, dealing with JIFs to a considerable degree. Gould (two pages of references for each of ten chapters) primarily addresses peer review and its future, but the subtext is that, with a revised or different form of peer review, JIFs will have less of a stranglehold on the scholarly community.

What's the problem?

As Wilcox (2008) reminds us:

What started as an index for evaluating a journal has now morphed into an index for evaluating the papers that are published in the journal – and even for evaluating the authors who write the papers that are published in the journal (p. 373).

To give but one quite unbelievable example, one of my former university employers requires PhD candidates not just to publish before the degree can be awarded, but to publish in two ISI-ranked journals above Tier 4. Further, the supervisor(s) are included as authors because, as a colleague at the time put it, “the PhDs do the work that we supervisors don’t have time to do”. All of this is done to boost the university's ranking based on citations and individuals’ JIFs, and has almost nothing to do with PhD completions. This is a true story. Another example is New Zealand's Performance-Based Research Fund (PBRF), in which qualities such as peer esteem and contribution to the research environment are two of the criteria used in assessing individual academics; the official view notwithstanding, without a high individual impact factor based on journal papers (the principal criterion), these two criteria count for little.

Since JIFs are now used to judge journals, papers, PhD candidates and researchers, as well as institutions with research mandates in many countries, we are living in a tangled web in which the original logic of impact factors seems remote from reality. But, whilst logic does not reign supreme, the answers may be relatively easy to realise in our information-rich age. As just noted, we have a large body of literature to assist us in viewing the JIF issue more clearly. Papers going back 25 years or more have offered a number of perspectives and telling arguments on both sides to help us work our way through the principal issues.
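It also helps to recall how little the original calculation actually contains: the two-year JIF of a journal for a given year is simply the citations received that year by items the journal published in the two preceding years, divided by the number of citable items it published in those years. A minimal sketch, with purely illustrative figures, makes the point:

```python
# Two-year journal impact factor for 2012 (purely illustrative figures):
# citations received in 2012 to items published in 2010 and 2011,
# divided by the number of citable items published in 2010 and 2011.
citations_in_2012_to_2010_2011_items = 480
citable_items_2010_2011 = 160

jif_2012 = citations_in_2012_to_2010_2011_items / citable_items_2010_2011
print(f"2012 JIF: {jif_2012:.3f}")  # a single journal-level average: 3.000
```

Everything else the JIF is now asked to do, ranking papers, people and whole institutions, hangs on that one journal-level average.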

Just walk away from JIFs

“Evaluating research by a single number is embarrassing reductionism, as if we were talking about figure skating rather than science” (Wilcox, 2008, p. 374). Tired of such reductionism and the abuses that arise from it (Wilcox again: “even if your paper is useless, publish it in a journal with a good impact factor and we will forgive you” (p. 373)), a growing group of disaffected researchers and scholars is opting for a somewhat radical solution – let's just walk away completely from JIFs and the culture they represent.

But walking away in the present environment is a lonely and dangerous path for many, and we tend to fall back on some other metric as a way of “proving” our worth – the h-index, for example. And this can be done easily as a self-analysis exercise; Google Scholar citations (Google Scholar, 2013), for instance, allows one to determine the total number of citations, the h-index and i10-index almost instantly. It includes books and book chapters (good for many in the social sciences, and even more in the humanities) and book reviews, editorials, etc. (not so good). Of course this is also very flawed, as my colleague Peter Jacso will almost certainly point out at some time or other. The point here is that walking away is not so simple, and that some other metric may have advantages as well as disadvantages. And already the bean counters have latched onto this – one former employer openly states that they will not hire anyone with an h-index below n (I am unable to recall what n is). So even moving to a new standard may not work.
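For those inclined to try the self-analysis, the two Google Scholar figures mentioned above are easy to reproduce by hand; a minimal sketch, using hypothetical citation counts, is:

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least ten citations (Google Scholar's i10)."""
    return sum(1 for count in citations if count >= 10)

# Hypothetical citation counts for one author's outputs
papers = [52, 31, 18, 12, 9, 7, 4, 2, 0]
print(h_index(papers), i10_index(papers))  # prints: 6 4
```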

Another approach, seemingly favoured by well-known and highly cited researchers with high recognition value in academia, is not to bother with counting or with journals of any kind, and simply to post one's research on a personal or employer homepage, an institutional repository or some similar resource, often poorly managed. Such individuals continue to be cited time and again because of their reputations and because of their personal marketing (especially through blogs and tweets). There are many variations on this theme, which leads to a kind of anarchism to which the web lends itself so easily. But it is not for the bulk of researchers who quietly chip away, unknown to all but a handful of like-minded experts.

Better, it seems to me, is to seek a new and more robust way of quantifying the quality of our research outputs.

Is there a better alternative?

And this suggests the need for a new plan, especially one based on individual, institutional or disciplinary contexts. Instead of forcing everyone into a science-focused mould such as the JIF, we require flexible, transparent, user- and institution-friendly ways of measuring the impact of researchers. As in most evaluations, it is essential that these measures be contextualised rather than imposed on us from New York, London or Berlin. One of the more recent and somewhat exciting innovations along these lines to come to my attention is altmetrics, which expands our understanding both of what constitutes an impact and of what actually makes that impact.

This matters because expressions of scholarship are becoming more diverse. Papers are increasingly joined by:

  • the sharing of “raw science” like data sets, code and experimental designs;

  • semantic publishing or “nanopublication”, where the citeable [sic] unit is an argument or passage rather than entire article; and

  • widespread self-publishing via blogging, microblogging and comments or annotations on existing work.

Because altmetrics are themselves diverse, they’re great for measuring impact in this diverse scholarly ecosystem. In fact, altmetrics will be essential to sift these new forms, since they’re outside the scope of traditional filters. This diversity can also help in measuring the aggregate impact of the research enterprise itself (Altmetrics, 2013).

Note this final paragraph: the diversity built into altmetrics allows us to measure impacts using whatever is most suitable for our particular context, even down to a nano-publication or blog. This philosophy is embedded in ImpactStory (www.impactstory.org), which allows one to build a profile from a number of resources; having now looked at this in terms of my own profile, I find it an exciting and robust way to profile myself – indeed, I look much better rounded through the multidimensional altmetrics practised in ImpactStory than I do through the single-dimensional JIF.
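To make the contrast concrete, a multidimensional profile of this kind can be sketched as follows; the outputs, metric names and figures are entirely hypothetical, and this is not ImpactStory's actual data model or API:

```python
# Hypothetical sketch: each research output carries several kinds of
# "impact" rather than inheriting one journal-level number.
profile = {
    "journal article": {"citations": 42, "downloads": 1800, "blog mentions": 3, "tweets": 27},
    "open data set": {"citations": 5, "downloads": 950, "blog mentions": 1, "tweets": 12},
    "policy report": {"citations": 2, "downloads": 400, "blog mentions": 6, "tweets": 40},
}

# Aggregate each dimension across outputs instead of collapsing to one score.
totals = {}
for metrics in profile.values():
    for name, value in metrics.items():
        totals[name] = totals.get(name, 0) + value

print(totals)  # {'citations': 49, 'downloads': 3150, 'blog mentions': 10, 'tweets': 79}
```

Each dimension can then be weighted however a discipline or an institution sees fit, which is precisely the contextual flexibility argued for below.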

As Jason Priem, one of the founders of ImpactStory, states, “It's troublingly naive to imagine one type of metric could adequately inform evaluations across multiple disciplines, departments, career stages, and job types” (cited in Henning and Gunn, 2012). It is not only troubling but also inaccurate, because it does not permit us to take account of what is most important to us in a specific context. Why, for example, should the University of Malaya be totally wedded to JIFs, which pay scant attention to Malaysian publications in any event, when in the Malaysian context one might want to consider how academics are contributing to the social and economic development of the country? In the traditional evaluation system this is not possible, but with altmetrics one might well include such contributions as an additional dimension of quality.

Take our scholarly image out of the hands of Thomson Reuters, and let our scholarly community, our disciplines and our institutions create a far more flexible means of determining “worth” – this, it seems to me, opens a future that is much more relevant and creative in terms of evaluating scholars and researchers across the disciplines. There are undoubtedly other viable alternatives being tested, and it would be interesting to hear from readers with experience of such approaches.

Gary Gorman

References

Altmetrics (2013), “Altmetrics: a manifesto”, available at: http://altmetrics.org/manifesto

Curry, S. (2012a), “Sick of impact factors”, Occam's Typewriter, available at: http://occamstypewriter.org/scurry/2012/08/13/sick-of-impact-factors/

Curry, S. (2012b), “Sick of impact factors: coda”, Occam's Typewriter, available at: http://occamstypewriter.org/scurry/2012/08/19/sick-of-impact-factors-coda/

Goldfinch, S. and Yamamoto, K. (2012), Prometheus Assessed? Research Measurement, Peer Review and Citation Analysis, Chandos Publishing, Witney

Google Scholar (2013), “Google Scholar citations”, available at: www.google.com/intl/en/scholar/citations.html

Gould, T.H.P. (2013), Do We Still Need Peer Review? An Argument for Change, Scarecrow Press, Lanham, MD

Henning, V. and Gunn, W. (2012), “Impact factor: researchers should define the metrics that matter to them”, Higher Education Network, Guardian Professional

Wilcox, A.J. (2008), “Rise and fall of the Thomson impact factor”, Epidemiology, Vol. 19 No. 3, pp. 373-374
