How big of a problem is plagiarism and why? This question was the basis of a conversation I had with Ron Keller, the technical editor for the IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, a non-animal science journal published by the Federation of Animal Science Societies (FASS). Keller shared his perspective on plagiarism, experiences with different plagiarism instances, and insights into self-citation as well as the pressure to publish. Watch the video below for highlights or read below for the full discussion.
iThenticate: How long have you been a technical editor, and how long have you been at FASS?
Keller: I’ve been at FASS for about 15 years, and I’ve been a technical editor for 12 of those. Before that, I worked for other companies and freelanced in other areas of publishing. But yes, I’ve been working as a technical editor for 12 years, mostly with the same journal. We have home journals at FASS; each of us mostly edits the same journal so that we build a good background in the material we’re working on.
iThenticate: We’re here to talk about plagiarism today because iThenticate is plagiarism detection software and FASS currently uses iThenticate through the CrossCheck program with CrossRef. Also, you recently took a survey with us and answered a few questions about plagiarism. One question was: from your perspective, how big a problem do you think plagiarism is, on a scale of 1 to 10, and why?
Keller: I would rate it pretty high. I would say up around an 8 out of 10. I think that aside from the issue of intellectual property, the greater danger is to the integrity of the scientific record. If you have somebody that’s plagiarizing another article, it looks as though there’s a confirmation of the research in that article that hasn’t happened, and that can cause misallocation of funding—where it may go to untested research because it appears that it has more confirmation than it really has.
Also, if somebody is taking material from different articles, that material that they’re taking has been removed from the context that it was presented in, so you don’t get to see the original researcher’s assumptions that they’ve made—any qualifications of the information that they’ve given that would appear later in the paper. And especially if you’re taking it from multiple sources, you can actually be presenting material together that doesn’t have the same assumptions, so that can cause an additional problem.
Another thing that I think is a really serious problem is that by replicating this material that you make it impossible to connect that to corrections or retractions of the original source. If the original source were to be withdrawn or if later research shows that there were problems with it, the duplicated material would still be out there in the record and there would be no indication that it should also have been corrected or withdrawn. So that’s what I see as the bigger problem. That's not to say that the intellectual property issues are not a problem, but I think that, especially when you’re dealing with research that has to do with, say, medicine, you can have an immediate effect on people’s lives and I think that’s the really serious problem that I see with plagiarism.
iThenticate: Since the time that you’ve been in an editing position, have you seen plagiarism increase, decrease, or stay the same?
Keller: Well, my part of the process happens after peer review, so I’m somewhat insulated from how much of it doesn’t make it through the process. I feel like it has increased, because I’m certainly aware of cases that have come up, and there have been a few that have made it through the process, but hopefully as it becomes more prevalent, systems for detecting it and dealing with it are also becoming better. Certainly we see high-profile cases of it showing up in the media, but I’m still not seeing a large number of cases getting through the peer-review process to the point where I start working with it.
iThenticate: That suggests the system is working. Can you give us an example of something that did make it through?
Keller: Sure. The journal that I edit is an international journal, so we have authors with a wide range of English proficiency, and one of the papers that I was working on required fairly heavy editing for grammatical problems. As I was editing the paper I would run across whole paragraphs where there was no need for correction. That rang alarm bells, and so I went to Google and dropped in a couple of sentences from different sections of the paper and pretty quickly came up with the original papers that they had come from.
So at that point I talked to the Editor-in-Chief and we ran iThenticate on it again to check the newest revision of the paper and we found portions of the paper had come from works that ended up not being in the reference list. So it was clear that there was a problem with that paper—that material had been appropriated from different sources. This is still mid-resolution, so I don’t have a tidy answer for how that’s going to turn out at this point.
There was another case that was somewhat different. It was a paper that had started out as a presentation at a conference and had been expanded for inclusion in the journal. This was a case in which the authors had started a company that was marketing the product they had researched, and they had made that clear in the paper. I had a question about the formatting for the name of the product, and when I went to look for it online, I found that their description of the product in the paper was part of a section that had been copied verbatim from their own marketing material. I think this was a simple misunderstanding. This was their research, their product, their materials, and I think they didn’t recognize the problem that was coming up there, in terms of copyright and things like that, when they plugged that description into their paper.
That situation was actually fairly easy to resolve. We were able to cite the original source and make it clear that that section was being reprinted from that source with permission and that let us move on with publication without the authors having to rewrite that whole section of the paper.
iThenticate: Do you want to share your opinion of plagiarism detection software?
Keller: I’m really glad that we have it there to assist our Associate Editors and Editor-in-Chief in safeguarding our peer-review process. There have been cases where an Associate Editor saw something and recognized that it had been published before, because it was something they had read recently or had used in their own research. But certainly having something automated that can look at a much bigger pool of material is a valuable asset. It is effective in finding matches with the material that it has access to.
And I think that there’s definitely leeway for the Associate Editor or the Editor-in-Chief to interpret the results from the software, so that they can say, "OK, this is giving us a hit because there are only so many ways that you can describe this process, so it’s likely that we’re going to have this kind of overlap here," and they can look at where it appears in the document to decide whether it’s that match that they’re seeing or something else. But certainly I think that’s a huge help to them, given the number of manuscripts that they’re seeing. Given the pressure to publish, editors are seeing more and more papers all the time; with the volume of work that they have to deal with, having that automated assistance is really, really useful.
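The overlap detection and interpretation Keller describes can be illustrated with a minimal sketch. This is not how iThenticate works internally (its matching methods aren't described here); it is just a common textbook approach, comparing word n-gram "shingles" with Jaccard similarity, which shows why boilerplate phrasing for a standard procedure can legitimately produce a hit that an editor then has to interpret:

```python
def shingles(text, n=5):
    """Lowercase word n-grams ("shingles") extracted from a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b, n=5):
    """Overlap score in [0, 1]: shared shingles over total distinct shingles."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# A high score flags a passage for an editor to interpret -- it may be
# plagiarism, or simply the standard wording for a common lab method.
score = jaccard(
    "samples were centrifuged at 3000 g for 10 minutes at 4 degrees",
    "all samples were centrifuged at 3000 g for 10 minutes at 4 degrees",
)
```

A real system would also normalize punctuation, search an indexed corpus rather than compare a single pair of texts, and report where in the document each match occurs, which is the locating step Keller mentions.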
That said, I think two things that go along with the software could really help. I think that education is a really valuable tool. Societies could offer a short course at their annual meetings to let people know about these issues and make it clear where the line is—especially when it comes to self-plagiarism, because I think that comes up fairly often. People may not intend to do it; for example, they may not be aware that it’s a problem to reuse the same figure in multiple papers without making it clear that that’s what they’re doing.
I think you should start early and cover these issues—especially with grad students as they prepare to go into a career in research—so they know these are other issues they need to be aware of. That’s a whole separate matter from the regular plagiarism lectures and information they’re going to get going through college.
Another thing is having everybody throughout the process be alert to it and aware that plagiarism is an issue, from the reviewers, through the editors and the publishers, and even to the reader. If something looks odd, have a second look at it and make sure that everything’s okay before proceeding.
iThenticate: You mentioned self-plagiarism, also termed duplication. Given the nature of engineering, there is quite a bit of self-citation happening. Any thoughts on that?
Keller: Part of that I think is just the longevity of your career in some of these fields. I know that when we did a big archiving project, I saw that there were authors that I know—that I’ve met and that I work with all the time—that were publishing before I was born, so yes, there is definitely a potential for that. When you’re very focused on one particular part of a field of research, there may not be a big pool of other researchers working on the same topic, and certainly part of the impact factor is how often you’re citing your own journal and your own work and things like that.
One of the things about self-citation that’s interesting to me from the perspective of a journal is that if your journal is publishing great research, that’s the research that you want people to cite. You want them to be citing the best research that’s out there, and if you’re publishing it, then naturally you would expect there to be a certain amount of self-citation going on. But certainly, you know, there’s no way to tell the difference between that situation and one in which a less-reputable journal is citing its own papers because that’s the only material it can get published. There’s not a good differentiation in the metrics between those two cases.
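For readers unfamiliar with the metric discussed here, the standard two-year impact factor is citations received in a given year to items published in the previous two years, divided by the number of citable items published in those two years. A journal's self-citations count toward the numerator like any other citation, which is why the metric alone cannot distinguish the two cases Keller describes. A small sketch with hypothetical numbers:

```python
def impact_factor(citations_to_prev_two_years, citable_items_prev_two_years):
    # Two-year impact factor: citations received this year to items
    # published in the previous two years, divided by the number of
    # citable items published in those two years.
    return citations_to_prev_two_years / citable_items_prev_two_years

# Hypothetical numbers: 300 incoming citations (90 of them the journal
# citing itself) to 150 citable items published in the two-year window.
# The self-citations are indistinguishable in the headline figure.
print(impact_factor(300, 150))  # 2.0 whether citations are external or self
print(90 / 300)                 # self-citation share: 0.3
```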
iThenticate: You also mentioned the pressure to publish. There’s cutting corners, splitting studies across different publications—all kinds of things going on because people are under this enormous pressure.
Keller: And now that the impact factor is out there, that’s certainly another tool that universities are using to drive publication. We’ve seen people coming up with new ways to game that system, and it’s a catch-22; you have to have the process out there so that it’s transparent, but once the process is out there, it becomes possible to game the system. There was a case recently, I think it was on the Scholarly Kitchen blog, where someone did a survey of the literature and found that there was a group of three journals that were essentially trading review articles to pump up their impact factors. The current system isn’t set up to recognize that, so it’s interesting. I don’t know that there is a good solution.
iThenticate: Do you think that it’s effective when journals clearly state their plagiarism policies on their website and in their author information pages?
Keller: Yes, I think deterrence is certainly a good policy. I think that putting that information out there and letting authors know what your policy is and what you plan to do about it—I think that those are valuable in not only deterring cases where it’s intentional, but also letting people know that something that they’ve been doing, perhaps innocently, is maybe not okay.
iThenticate: That was very insightful! Thank you very much for speaking with us today.