Last year we “celebrated” the 10th anniversary of the invention of the h-index (also known as the Hirsch factor; Hirsch, 2005), an indicator created by Jorge E. Hirsch that attempts to measure the achievements of a research scientist. However, the h-index has not only taken on a life of its own; its popularity now surpasses the use its inventor originally envisioned. Introduced as a simple characterization of the scientific output of a researcher (Hirsch, 2005), the h-index has come to be uncritically regarded as a “magic tool” for measuring what is unmeasurable: the quality of science. As a result, it has become a “must-have” indicator when applying for funding or a new position. Surprisingly, many decision-makers are apparently not fully aware of what it represents. According to its inventor, the h-index is the number of papers coauthored by an investigator with at least h citations each (Hirsch, 2005). That is, to be the proud owner of a high h-index, it is not enough to have authored many articles, nor for a few of them to have been cited extensively. Both conditions must be met: a considerable number of articles must each be highly cited. In practice, the articles are ranked by the number of citations they have collected, and h is the highest position on that list at which the rank is still less than or equal to the number of citations the article at that position has garnered.

The pros and cons of the h-index

Hirsch's initial idea was to distinguish investigators who are persistently productive from those who experienced a single auspicious moment in their scientific life and who now merely live off that earlier success.
Nevertheless, it implies that researcher A, who published one breakthrough story that was extensively cited, deserves less respect than researcher B, who publishes often and regularly even though the latter's work has not yet contributed to any remarkable discovery. A good example is the inventor of the RNA isolation method, Piotr Chomczynski. He has accumulated over 65,000 citations in total, of which almost all (92.9%) come from a single paper describing the method he introduced in 1987 (Chomczynski and Sacchi, 1987). His current h-index is 23, relatively low for such a prodigious number of citations. Yet could we imagine working with RNA over the past decades without this simple technique, which is now the basis of virtually every commercial RNA-extraction protocol? It might nonetheless be argued, depreciating that discovery, that because the method is so simple, someone else would have discovered it sooner or later. In response, let me cite my former mentor, professor Gunther Schutz, who would rebut such arguments with a simple question: if this is so trivial, why was it not you who discovered PCR? Breakthrough discoveries do not depend solely on sophisticated science.

Another problem with the h-index is the impossibility of comparing investigators at different stages of their careers (even assuming comparisons within the same field, which is itself another confounding factor). There is a clear correlation between an investigator's age and the h-index: articles accumulate citations, and that number can only grow with the time elapsed since publication. Yet even comparisons between investigators at a similar career stage may often be misleading, particularly among young post-docs whose careers have just begun.
We must honestly acknowledge that it is rare for an investigator, right after completing a PhD, to be able to decide independently about his or her own career development. At this stage, scientific achievements are mostly a derivative of the standing of the PhD mentor and the reputation of the host institution.

Another issue contributing to the limitations of the h-index is that research groups apply different rules regarding authorship. In principle, a researcher's name is added to the author list only after a considerable contribution has been made to the published work. In practice, however, being a “middle” author often does not reflect a significant contribution, and, it is worth emphasizing, the h-index does not differentiate between authors holding the most valuable first- and last-author positions and those whose name appears as one among perhaps a hundred authors, as happens with articles reporting vast meta-analyses of clinical data. Although some of these concerns were raised in the discussion section of Hirsch's original paper (Hirsch, 2005), they were overshadowed by the enormous popularity of the tool and its indiscriminate application. Despite many critical yet unofficial discussions about the h-index, its limitations, and its perhaps dangerous influence on science, the topic, when tackled in publications at all, is usually narrowed to the pursuit of ever more sophisticated bibliometrics and proposals of new indicators (Bharathi, 2013; Biswal, 2013; Diaz et al., 2016; Wurtz and Schmidt, 2016).
Critical voices are less well represented, although articles do exist pointing out that a scientist's past achievements are not necessarily correlated with future success, and that all such rankings need context, which means that the best way to gain an impression of quality is still simply to read the papers (Wendl, 2007; von Bohlen Und Halbach, 2011). Perhaps combining the context of a particular paper with the reputation of the journal gives the best approximation, but even this approach can be biased by personal preferences and requires a considerable amount of time.

Last but not least, when examining why the h-index is not trustworthy, we should not neglect fraud. Because the h-index does not exclude self-citations, it is easy to predict that even investigators who are poorly cited by others, but who publish prodigiously while citing mostly themselves, will steadily inflate their h-index in the long run. Moreover, some evidence of misuse has been reported, involving h-indices artificially pumped up by, for example, regular cross-citations between good friends (Kotov, 2010).
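The self-citation loophole can be made concrete with a minimal sketch. It assumes a toy record format (per paper: a set of authors plus a list of citing-author sets) that is not drawn from any real bibliometric database; a citation is kept only if no citing author also authored the cited paper:

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have >= h citations."""
    ranked = sorted(citation_counts, reverse=True)
    return max((rank for rank, c in enumerate(ranked, start=1) if c >= rank),
               default=0)

def h_index_without_self_citations(papers):
    """papers: list of (author_set, list_of_citing_author_sets).
    A citation counts only if no citing author authored the cited paper."""
    counts = [
        sum(1 for citers in citing if not (citers & authors))
        for authors, citing in papers
    ]
    return h_index(counts)

# Hypothetical record: two papers by author A, cited mostly by A herself.
papers = [
    ({"A"}, [{"A"}, {"A"}, {"A"}]),  # three self-citations
    ({"A"}, [{"A"}, {"A"}, {"B"}]),  # two self-citations, one independent
]
print(h_index([len(c) for _, c in papers]))    # raw counts [3, 3] → h = 2
print(h_index_without_self_citations(papers))  # filtered counts [0, 1] → h = 1
```

A filter like this would not catch the cross-citation cartels mentioned above, since friends citing each other are, formally, independent citers; detecting those requires looking at citation patterns across author pairs.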
The Slavery of the h-index—Measuring the Unmeasurable
Published 2016-11-02 in Frontiers in Human Neuroscience