
Research and Retractions Come into Focus

Posted by Jonathan Bailey on Jun 18, 2015 10:00:00 AM

In December 2014, an article published in the journal Science claimed that canvassers could change the minds of conservative voters on same-sex marriage with a brief conversation. The study attracted a great deal of attention in the mainstream press due to its relevance to both an ongoing political divide and a pending Supreme Court case.

However, questions about the study began to arise and, after the authors were unable to produce the raw data to answer them, Science retracted the article.

While this is far from the first study in the public spotlight to be pulled, it generated a great deal of interest in research integrity among the mainstream press.

The New York Times ran not one, but two op-eds about the issue. The first was written by Adam Marcus and Ivan Oransky, best known as the duo behind Retraction Watch, and the second by the paper's own editorial board.

In both works, the message was largely the same: The way research papers are vetted and published needs to be changed and improved.

The problem is fairly straightforward: though anti-plagiarism tools have made it easier to detect plagiarized and duplicative papers before publication, there's nothing comparable for detecting when researchers falsify, fudge or otherwise alter their data.

The problem, as Marcus and Oransky note, isn't a small one. Two percent of scientists admit to manipulating data and, on average, one paper a day is retracted due to misconduct. These retractions can have serious consequences, as with the now-retracted study that linked autism to vaccines.

Unfortunately, there's no easy solution to this within the current peer review process. Some steps, such as requiring authors to provide peer reviewers with all of the original data, could help, but the truth is that with millions of articles published per year, there's almost no way to guarantee bad research won't slip through.

However, Marcus and Oransky point to a potential solution in another piece they wrote for The Verge. In that article, they took a look at PubPeer, a service that provides a form of post-publication peer review, analyzing and commenting on already-published papers.

The comments are collected in a centralized database that you can either search directly or, if you have a compatible browser, view through an extension that surfaces them as you do your research. Either way, PubPeer provides a means for readers to raise concerns about a work and have them seen by other researchers who may be looking to cite or otherwise build upon it.

But whether research is reviewed before or after publication, statistics are going to be key. Statisticians have been critical in blowing the whistle on many of science's recent frauds, including the now-infamous Anil Potti, and it's most likely statistics that will spot future cases of data manipulation.

Unfortunately, since there is no single red flag and no unified way to spot data manipulation, there's no simple way to automate even part of this process. As such, given the pace of publication, those who seek to defraud their way into journals will likely continue to have an edge over those who seek to catch them, until much more radical changes in the scientific publication ecosystem take place.

The views expressed in this blog are my own and not those of iThenticate.