Measuring science

There are many occasions when the quality of science and of scientists should be measured. There is no question that the work of those who have won a Nobel Prize is outstanding, but how can the quality of all other scientists be measured? Funding bodies only have limited funds and would like to ensure that these funds are used as optimally as possible, i.e. that only the best scientists receive funding. Similarly, universities and research institutions only want to employ the best researchers. What methods are available to them to evaluate applicants for funding and positions?

For many years, the impact factor (IF) has played a prominent role in the evaluation of scientific performance. The IF of a journal for a given year is the average number of citations received that year by the articles the journal published in the two preceding years. Not all articles count toward this average, only those classified as "citable"; exactly which articles qualify is negotiated between the publisher and the producer of the IF, which makes the IF non-transparent. The IF describes a journal over a certain period, not the individual articles in that journal, let alone their authors. Nevertheless, in some disciplines it is common practice to judge the quality of a scientist by the articles they have published in journals with a high IF.
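The classic two-year IF can be sketched as a simple ratio. The function and all figures below are invented for illustration, assuming the standard definition (citations this year to articles from the previous two years, divided by the number of "citable" items from those years):

```python
# Simplified sketch of the two-year impact factor.
# All journal figures below are made up for illustration.

def impact_factor(citations_this_year, citable_items_prev_two_years):
    """IF for year Y: citations received in Y to articles published in
    Y-1 and Y-2, divided by the number of 'citable' items from those
    two years."""
    if citable_items_prev_two_years == 0:
        raise ValueError("no citable items in the two-year window")
    return citations_this_year / citable_items_prev_two_years

# A journal whose 2022-2023 articles were cited 900 times in 2024,
# with 300 'citable' items published in 2022-2023:
print(impact_factor(900, 300))  # → 3.0
```

Note that the non-transparency mentioned above lives entirely in the denominator: changing which items count as "citable" changes the IF without any change in actual citations.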

For some years now, the h-index has played an increasingly important role in the evaluation of scientists. The h-index is the largest number h such that the researcher has h publications that have each been cited at least h times. For example, if a person has 30 publications, of which 7 have been cited at least 7 times (but not 8 at least 8 times), the h-index is 7. This means that the actual output of the researcher is taken into account to a greater extent, and differences in career length can be better balanced out. However, the h-index is not suitable for comparing researchers from different disciplines, as publication cultures differ greatly, and this has an impact on the h-index. It is also problematic that the h-index makes no distinction between the different authors of articles with multiple authorship, although the first and last author often have a special significance.
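The definition above translates directly into a few lines of code. A minimal sketch, with invented citation counts:

```python
# Minimal sketch: compute the h-index from a list of per-publication
# citation counts (the example numbers are invented).

def h_index(citations):
    """Largest h such that at least h publications have >= h citations."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

# Six papers cited 10, 8, 5, 4, 3 and 2 times:
print(h_index([10, 8, 5, 4, 3, 2]))  # → 4
```

Sorting in descending order makes the criterion easy to check: walking down the ranked list, h is the last rank at which the citation count still meets or exceeds the rank.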

However, citations in scientific journals often do not capture the full impact an article has on society. This is why altmetrics were developed: they draw on various other signals, e.g. mentions in blogs, on Facebook, Twitter or Wikipedia, but also download figures and saves in reference management programs such as Mendeley.

On the other hand, the importance attached to these metrics leads some researchers to unethical behavior aimed at maximizing their own numbers. This starts with authorship. There are "ghost authors" who helped write the text but want to conceal their involvement (e.g. because of lobbying). "Guest authors", by contrast, have contributed nothing to the actual work; their name on the author list is more of a mark of honor, or serves to inflate the publication count of the head of the institute. Manipulation of the author order can also result in the person primarily responsible for the publication not being named as first author.

There are also tricks when it comes to citation counts. Since self-citations are relatively easy to detect, a popular method is the citation cartel, in which a group of scientists agrees to cite each other.

These problems have now been recognized by the DFG (the German Research Foundation): in funding applications, a maximum of ten publications may be listed per applicant. This also discourages the salami-slicing tactic of publishing research results in the smallest possible publishable units. It further enables reviewers to assess these articles qualitatively instead of judging on purely quantitative metrics.

To make progress in the scientific field, it is advisable to keep an eye on the various metrics. Presenting your own scientific work well and making it visible helps you to recommend yourself to funding bodies and employers.

It does not make sense to "game" the system and artificially inflate your own metrics. On the one hand, this can later be interpreted as unethical behavior; on the other, the way metrics are evaluated may change in the coming decades, and these artificial values may then work to your disadvantage.

It makes more sense to obtain an ORCID so that your publications can be reliably attributed to you. Fewer mistakes are then made when metrics are compiled, and you have more control over which publications are assigned to your academic profile. Publishing in open access journals can also help, as these are cited more frequently than closed access publications, although the IF is often higher for closed access journals than for OA journals. Social media should not be forgotten either. Success in science is not just about the ability to produce good scientific results; you also have to be able to present them. That will not change with other metrics.

Measuring science - impact factor, h-index and altmetrics by Silke Frank is licensed under a Creative Commons Attribution 4.0 International License.