
Are researchers building their h-index by plagiarizing work?

Posted by David Rothschild on Mar 29, 2012 4:55:00 PM

There are various ways of measuring a researcher's success in their field of expertise. Tallying the number of published papers is certainly one of them. Counting how many times each of those papers has been cited in credible journals is another. The problem with both methods is that a cumulative total, whether of published works or of citations, is not really an accurate depiction of the quality of work being done by one individual or group. It was for that reason that the h-index, a metric that takes both factors into account, was developed.

The h-index is one measure by which modern scientists gauge their progress. Rather than simply counting publication credits, it balances productivity against impact: a researcher has an h-index of h if h of their papers have each been cited at least h times. Because raw citation counts are too easily skewed by one highly popular paper, the h-index reflects the number of consistently cited works rather than the total. Researchers who have worked hard to earn a high h-index take great pride in it. Unfortunately, there are also individuals being cited for work they did not do, work plagiarized from other scientists. Their h-index is, essentially, fabricated.
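The definition above is easy to make concrete. Here is a short, generic Python sketch (not from the original post) that computes an h-index from a list of citation counts; note how a single blockbuster paper barely moves the score:

```python
def h_index(citations):
    """Return the h-index: the largest h such that h of the papers
    have each been cited at least h times."""
    # Sort citation counts from highest to lowest.
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        # The paper at position `rank` must have at least `rank` citations.
        if count >= rank:
            h = rank
        else:
            break
    return h

# Five papers, three of them cited 3 or more times:
print(h_index([6, 5, 3, 1, 0]))   # h = 3

# One hugely cited paper cannot inflate the score on its own:
print(h_index([100, 1, 0]))       # h = 1
```

This is why the metric rewards a body of consistently cited work, and also why a plagiarist who accumulates citations on stolen papers inflates the score just as effectively as an honest researcher.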

A recent post by Marya Zilberberg on the site Retraction Watch describes how she discovered plagiarism of her own work while reading the Journal of Clinical Monitoring and Computing. One of her papers had been taken word for word and published in the journal under another name, S. Efrati. Upon discovering a single line of plagiarized text, she thought it might simply be an oversight that quotation marks were not used to credit her for the content, but upon reading the whole article it became clear that the entire work, a paper on ventilator-associated pneumonia (VAP), had been plagiarized. She immediately contacted the editor.

Chief Editor Stephen Rees did follow through on the complaint and asked that Dr. Efrati write an erratum to the paper citing Dr. Zilberberg as the original author. He also asked that Efrati contact Zilberberg directly, an action that, as of the writing of this article, has not yet been taken. A retraction was not offered, so the citation crediting Efrati for this work still benefits his h-index. What was Marya Zilberberg's response to the editor's solution? "No offense to this journal, but exactly how many people are going to read the 'erratum' and become aware of these authors' misconduct? And what are the implications for checking their prior and future work?"

Is the h-index an accurate rating system for scientific achievement?

Since the h-index was first developed by physicist Jorge Hirsch back in 2005, it has faced a series of challenges and criticisms regarding its accuracy. According to Hirsch, it is meant to be "an index to quantify an individual's scientific research output," but it does not account for erroneous or negative citations. It is also viewed by many as unfair to newer scientists who have made important discoveries but have fewer total publications and citations. The system has its strengths, but there is definitely room for improvement, especially in the area of detecting scientific plagiarism. Perhaps incidents like the one described above will spark some changes.


Oransky, Ivan. "How does it feel to have your scientific paper plagiarized?" Retraction Watch. March 12, 2012.

Zilberberg MD, Shorr AF, Kollef MH. "Implementing quality improvements in the intensive care unit: ventilator bundle as an example." PubMed. January 2009.

DCU Library Blog. "H-Index." February 29, 2008. Accessed February 14, 2012.