The impact factor: is it a good indicator of the quality of a publication?

A few years ago, the impact factor was considered a major criterion for research assessment. Conditions have changed, but the impact factor remains, for many researchers, an important criterion when choosing the journal in which to publish, and many academic institutions still use it to evaluate research. Other measures, such as the h-index, rest on the same principle.

Is a high impact factor of a scientific journal a guarantee of its quality?
Does a researcher’s h-index reflect their excellence in their field?

First, one must understand how the impact factor and similar measures are calculated.
The impact factor of a scientific journal for a year X is the number of citations received in year X by the articles the journal published during the two preceding years, divided by the total number of articles it published during those two years.
The h-index of a researcher is the largest number h such that at least h of the researcher's articles have been cited at least h times each.
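
To make these two definitions concrete, here is a minimal Python sketch of both calculations. The function names and the citation numbers are invented purely for illustration; real bibliometric databases apply extra rules (for instance, about which document types count as citable items).

```python
def impact_factor(citations_in_year_x: int, articles_in_prev_two_years: int) -> float:
    """Impact factor for year X: citations received in year X by articles
    published in the two preceding years, divided by the number of
    articles published in those two years."""
    return citations_in_year_x / articles_in_prev_two_years

def h_index(citation_counts: list[int]) -> int:
    """Largest h such that at least h articles have >= h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` articles all have at least `rank` citations
        else:
            break
    return h

# Illustrative numbers: 480 citations in year X to the 200 articles the
# journal published in the two previous years gives an impact factor of 2.4.
print(impact_factor(480, 200))    # 2.4

# A researcher whose papers are cited [10, 8, 5, 4, 3] times has h-index 4:
# four papers are each cited at least four times, but not five.
print(h_index([10, 8, 5, 4, 3]))  # 4
```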

So the more a journal's articles are cited, the greater its impact factor. Conversely, if a journal's impact factor is high, then its articles must be arousing the interest of researchers who build on them to develop their own work, or at least that is what most researchers assume. Much the same can be said about the h-index.

In this sense, citations are considered an endorsement of an article and, by extension, of the journal and the author.

So what is the problem with this measure?

  • The impact factor takes into account the number of citations only, not the quality.
  • An article may be cited merely for context.
  • Citing an article does not mean that it has been judged positively.
  • The impact factor does not take into account the dynamism of each scientific field.
  • The impact factor can be manipulated. The same is true for the h-index.

I will detail each of the points above based on my personal experience.

First, the impact factor of a scientific journal is based on the total number of citations to its articles. It does not distinguish citations by their origin: citations from low-quality sources weigh exactly the same as citations from a highly reputable source.

The authors of an article cite references for several reasons: for example, to pick up an important idea or to build on previous work, and sometimes to trace the development of a subject or to set the context of the current work.
During the review process, referees sometimes ask authors to expand this section and cite more articles related to the current work, to show that they have researched the topic sufficiently. Most authors comply, citing multiple articles just to satisfy this reviewer request and sometimes for no other relevant reason.
In some areas of science, the most cited articles are actually review articles. Indeed, a review is cited much more often than an original article. A review can offer genuinely valuable things, in particular by collecting and organizing previous work, providing criticism and evaluation of results, identifying gaps in the literature, and outlining possible directions for research. However, this leads researchers to cite review articles more often than the articles that contain the original ideas.

Citations do not represent an endorsement of the cited reference. We can cite an article to point out that its methodology has shortcomings or that it contains false results. We can cite a study in order to refute or contradict it, or sometimes even to reveal an intentional manipulation. One cannot say, in such cases, that the citation is a positive element contributing to the reputation of the journal or the author.

The impact factor does not take into account the dynamism of each scientific field, which has a great influence on a journal's impact factor. Some fields are more dynamic than others. In computing, a large number of articles appear each month, and an original idea will quickly be taken up and cited by several authors. In pure mathematics, on the other hand, development is slower, which is why the impact factors of journals specialized in this field are low.
Another measure tries to correct this anomaly, but it remains less known and less used than the impact factor.

Finally, the impact factor can be manipulated. During the review process, the editors of several scientific journals ask authors to cite articles from the same journal, supposedly to show the link between the study and the journal. This practice is (very) widespread in certain fields and has been for several years (I received this request from an editor 15 years ago, and I still receive it). This is how some journals have seen their impact factor explode in just a few years.
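
To see how effective this practice can be, here is a small arithmetic illustration in the same spirit as the Python sketch above. Every number here is invented purely to show the mechanism:

```python
# Hypothetical scenario: a journal publishes 200 articles over two years
# and earns 300 citations on merit in year X.
articles = 200
organic_citations = 300

# Suppose the editor asks each of 150 accepted manuscripts to add 2
# citations to recent articles from the same journal.
coerced_citations = 150 * 2

baseline = organic_citations / articles                        # 1.5
inflated = (organic_citations + coerced_citations) / articles  # 3.0
print(baseline, inflated)  # the impact factor doubles
```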
Worse still, a reviewer can ask authors to cite the reviewer's own articles, a practice that is against publication ethics but that some reviewers engage in. Sometimes the reviewer stalls the review process by refusing to render a decision until the authors comply and cite those articles. Some editors do nothing to help the authors in this case, or to prevent the reviewer from using their position to artificially inflate the number of citations to their own articles and therefore their h-index. Inexperienced authors fall into this trap because they want to publish and do not have the means (or do not know how) to face this kind of situation on their own.

The questions that can be asked now are the following. Should the impact factor be completely banned as a criterion for evaluating the quality of scientific journals and the work of researchers? And, as a researcher, should you exclude the impact factor from your criteria for choosing the journal to which you submit your work for publication?

sciencedz
