Measuring science

Impact factor, h-index and altmetrics

There are many occasions on which the quality of science, and of scientists, has to be measured. That the work of Nobel Prize winners is outstanding is beyond question, but how can the quality of all other scientists be assessed? Funding agencies have limited resources and want to ensure that these are used in the best possible way, i.e. that only the best scientists receive funding. Universities and research institutions likewise want to hire only the best scientists. What methods do they have at their disposal to evaluate applicants for funding and positions?

For many years, the impact factor (IF) has played a prominent role in evaluating scientific performance. The IF of a journal for a given year indicates how often, on average, the articles it published in the two preceding years were cited in that year. Not all articles are taken into account, only those classified as "citable". Exactly which these are is negotiated between the publisher and the producer of the IF, which makes the IF non-transparent. The IF makes a statement about a journal in a given period, but not about the individual articles in that journal, let alone their authors. Nevertheless, in some disciplines it is common to measure the quality of a scientist by the articles he or she has published in journals with a high IF.
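To make the definition concrete, here is a minimal sketch of the standard two-year IF calculation in Python. All numbers are invented for illustration, and, as noted above, deciding which items count as "citable" is exactly the non-transparent part of the real calculation.

```python
# Hypothetical counts for a journal; real "citable item" counts are
# negotiated between the publisher and the producer of the IF.
citations_2024_to_2022_items = 410  # citations received in 2024 by 2022 articles
citations_2024_to_2023_items = 350  # citations received in 2024 by 2023 articles
citable_items_2022 = 120            # "citable" articles published in 2022
citable_items_2023 = 130            # "citable" articles published in 2023

# Two-year impact factor for 2024: citations in 2024 to the two
# preceding years, divided by the number of citable items in those years.
if_2024 = (citations_2024_to_2022_items + citations_2024_to_2023_items) / (
    citable_items_2022 + citable_items_2023
)
print(f"IF 2024: {if_2024:.2f}")  # -> IF 2024: 3.04
```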

For some years now, the h-index has played an increasingly important role in the evaluation of scientists. The h-index is the largest number h such that h of a person's publications have each been cited at least h times. If a person has, for example, 30 publications of which 7 have been cited at least 7 times (but no 8 that have been cited at least 8 times), the h-index is 7. The researcher's actual output is thus weighted more strongly, and differences in career length are better balanced. However, the h-index is not suitable for comparing researchers from different disciplines, because publication cultures differ greatly and this affects the h-index. It is also problematic that the h-index makes no distinction between the different authors of articles with multiple authors, even though the first- and last-named authors are often of particular importance.
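The calculation itself is straightforward; a minimal sketch in Python, using invented citation counts that reproduce the example above:

```python
def h_index(citations):
    """Return the largest h such that h publications have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:  # this publication still "supports" an h of `rank`
            h = rank
        else:
            break
    return h

# 30 publications, 7 of which have been cited at least 7 times (invented data).
example = [25, 18, 12, 9, 8, 7, 7] + [3] * 23
print(h_index(example))  # -> 7
```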

However, citations in scientific journals often do not capture the full impact an article has on society. Altmetrics were therefore developed, which also draw on various other signals, such as mentions in blogs, on Facebook or Twitter, or on Wikipedia, as well as download counts and saves in reference management programs such as Mendeley.
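How such heterogeneous signals are combined differs from provider to provider and is not standardized; the following sketch, with entirely made-up sources and weights, only illustrates the general idea of a weighted attention score:

```python
# Made-up mention counts for one article (hypothetical data).
mentions = {
    "blogs": 4,
    "twitter": 120,
    "facebook": 15,
    "wikipedia": 2,
    "mendeley_readers": 80,
}

# Hypothetical weights; real altmetrics providers use their own,
# often proprietary, weightings.
weights = {
    "blogs": 5.0,
    "twitter": 0.25,
    "facebook": 0.25,
    "wikipedia": 3.0,
    "mendeley_readers": 0.1,
}

score = sum(weights[source] * count for source, count in mentions.items())
print(f"Attention score: {score:.2f}")  # -> Attention score: 67.75
```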

The importance attached to these metrics, however, invites unethical behavior on the part of those who publish, who try to drive their own scores as high as possible. This starts with authorship. There are, for example, "ghost authors", who helped prepare the text but want to conceal their involvement (e.g. because of lobbying). "Guest authors", by contrast, have contributed nothing to the work in question; their name on the author list is a badge of honor or serves to increase the publication count of the head of the institute. Furthermore, manipulation of the author order can mean that the person mainly responsible for the publication is not named as first author.

There are also tricks involving the number of citations. Since self-citations are relatively easy to detect, a popular method is the citation cartel, in which a group of scientists agrees to cite one another's work.

The DFG (the German Research Foundation) has now recognized these problems: applicants may cite a maximum of ten of their own publications. This also discourages the salami-slicing tactic, in which research results are split into the smallest possible publishable units. And it allows reviewers to assess these articles qualitatively rather than judging purely by quantitative metrics.

To advance in science, it is advisable to keep an eye on these metrics. Presenting one's scientific work well and making it visible helps to recommend oneself to funding agencies and employers.

It does not make sense to "game" the system and artificially inflate one's metrics. On the one hand, this can later be interpreted as unethical behavior; on the other hand, the way metrics are evaluated may change in the coming decades, and these artificial values could then work to one's disadvantage.

It makes more sense to register an ORCID iD so that one's publications can be reliably attributed to one's person. This way, fewer errors occur when the metrics are compiled, and one has more control over which publications are assigned to one's scientific profile. Publishing in open access journals can also help, as these are cited more frequently than publications in closed access journals, even though the IF is often higher for closed access journals than for OA journals. Social media should not be forgotten either. Success in science depends not only on the ability to produce good scientific results; one must also be able to present them. This will not change with other metrics.

Measuring science - impact factor, h-index and altmetrics by Silke Frank is licensed under a Creative Commons Attribution 4.0 International License.