Farid Zaid at Inquisitive:

The idea that scholarly excellence could be measured with precision is a relatively recent invention. The Journal Impact Factor (JIF), introduced in the mid-twentieth century, marked the beginning of this shift – offering first libraries and then academic publishers, universities, and funding agencies a seemingly neutral way to quantify influence and impose order on the ambiguity of peer judgment. Over time, additional metrics such as citation counts, university and journal rankings, and the h-index amplified this logic, promising to identify scholarly value with algorithmic efficiency. It was a technocratic vision of academic meritocracy: detached, rational, transparent, and fair.

But the illusion of neutrality surrounding citation-based metrics masked the extent to which they can be – and routinely are – manipulated and distorted. Far from offering a transparent window into scholarly merit, these quantifiable measures invite strategic behaviour that rewards gaming over genuine intellectual contribution. Editors seeking to elevate their journal’s standing have been documented pressuring authors to insert irrelevant citations or favouring submissions likely to boost citation tallies, regardless of substance. Researchers, in turn, often engage in excessive self-citation to manufacture the appearance of influence. Most concerning is the recently documented rise of citation cartels: collusive networks that systematically inflate their members’ citation counts without regard for scholarly relevance.
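
To make concrete how easily such tallies are nudged, here is a minimal sketch, in Python, of how the h-index is computed and how a few strategically placed citations can shift it. The publication record and the numbers are invented purely for illustration.

```python
def h_index(citations):
    """Largest h such that at least h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# A hypothetical publication record: citation counts per paper.
record = [24, 11, 9, 8, 6, 6, 5, 4, 2, 1]
print(h_index(record))  # 6 -> six papers with at least six citations each

# A handful of extra citations to the borderline papers (self-citations,
# or a cartel trading references) is enough to move the headline number.
gamed = [c + 2 if c in (5, 6) else c for c in record]
print(h_index(gamed))   # 7 -> the metric rises without any new research
```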

Beyond these intentional distortions, deeper structural shifts in academic publishing have further eroded the reliability of citation-based metrics. As publication rates accelerate, studies are increasingly fragmented into smaller publishable units (aka salami-slicing), author lists lengthen, and the volume of references expands, while the signal-to-noise ratio of scholarly output diminishes. Measures such as citation count, h-index, and journal impact factor are increasingly saturated – amplified not by scholarly impact but by scale, density, and consolidation within elite publication networks. Across disciplines, and even within departments, such metrics no longer offer stable grounds for comparison.

These distortions are not fringe aberrations; they are embedded responses to institutional systems that equate numerical visibility with academic value. In line with Goodhart’s Law – which states that when a measure becomes a target, it ceases to be a good measure – what began as a tool for assessing quality has become a force that reshapes scholarly practice itself, introducing perverse incentives that reward visibility over substance, resonance over rigour, and citation maximization over genuine insight.