How Should We Deal With Scientific Fraud in Psychology?

April 4, 2012 by Justin Lehmiller

It was recently reported that a Dutch social psychologist, Diederik Stapel, published at least 30 papers in reputable scientific journals based on data he had completely faked. The full scope of Stapel’s academic misconduct is still being investigated and could possibly extend much further than this. How such widespread fraud went undetected for so many years has vexed the entire scientific community. As if that weren’t enough cause for concern, a journal article just came out showing how easy it is for psychologists to manipulate real data in order to show almost any result they want [1]. Consequently, many people are rightly questioning what we can do to get a better handle on unethical research practices. In this article, I offer my own take on what we should do about this issue.
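
To give a rough sense of what Simmons and colleagues are warning about, here is a minimal sketch in Python (my own illustration, not code from their paper) of just one "researcher degree of freedom": measuring several outcomes and reporting whichever one happens to come out significant. The group sizes, number of outcomes, and random seed below are arbitrary choices made purely for illustration.

# Minimal sketch (not the simulations from Simmons et al. [1]): if a researcher
# checks several outcome measures and reports whichever one crosses p < .05,
# the false-positive rate climbs well above the nominal 5%, even when there is
# no real effect at all. All parameter values here are illustrative assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2012)

N_STUDIES = 10_000    # simulated "studies"
N_PER_GROUP = 20      # participants per condition
N_OUTCOMES = 3        # independent outcome measures checked per study
ALPHA = 0.05

false_positives = 0
for _ in range(N_STUDIES):
    # Both conditions are drawn from the same population, so any
    # "significant" difference is pure noise.
    found_significant = False
    for _ in range(N_OUTCOMES):
        group_a = rng.normal(loc=0, scale=1, size=N_PER_GROUP)
        group_b = rng.normal(loc=0, scale=1, size=N_PER_GROUP)
        _, p_value = stats.ttest_ind(group_a, group_b)
        if p_value < ALPHA:
            found_significant = True
    if found_significant:
        false_positives += 1

print(f"Nominal false-positive rate: {ALPHA:.0%}")
print(f"Rate when reporting any significant outcome: "
      f"{false_positives / N_STUDIES:.0%}")  # roughly 14% with 3 independent outcomes

With just three independent outcomes, the chance of finding at least one spurious "effect" roughly triples; stacking additional flexible decisions on top (optional stopping, dropping conditions, trying different covariates) pushes it higher still, which is the core point of the paper cited above.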

One of the most frequent suggestions I have heard for increasing transparency has been to force scientists to make their raw data publicly available upon publication of their research so that it can be subject to verification. Although I do agree that this would make detection of fraud faster and easier when it is suspected, I would argue that this is hardly the panacea some have made it out to be. For one thing, just think about how much data we’re talking about collecting and storing. Using 2010 information from the Social Sciences Citation Index (SSCI), I counted 586 psychology journals that published a combined total of 29,987 articles in that year alone. Keep in mind that many of these papers included more than one study, the SSCI does not index all journals, and psychologists don’t publish exclusively in psychology journals (we also publish in medicine, public health, business, etc.). Thus, the sheer amount of data we’re talking about collecting annually is overwhelmingly large, especially if we apply this standard across science more broadly. Is anyone actually going to vet tens of thousands of datasets for accuracy and potential fraud each year? No. While making raw data more widely available sounds good in theory, in practice, it probably will not change much.

In my view, addressing the problem of unethical research practices requires us to look more deeply into why scientists fake and fudge their data and statistics in the first place. One of the driving forces here is pressure from academic administrators to constantly publish a high volume of research, even when the data do not turn out the way you hoped. But perhaps even more powerful is the pressure that comes from journal editors and reviewers to tell a certain story. In my publishing experience, I’ve found that there’s not just an expectation, but often an explicit demand, for “perfect” data that tell a “clean” story. Editors often do not want to publish papers in which some of the hypotheses were not supported or papers that contain any kind of conflicting results. In some cases, I’ve been told by reviewers and editors to drop certain results entirely (e.g., nonsignificant findings and results that they did not find personally interesting). There have also been cases where I found an effect I didn’t expect, but was told that in order to publish my paper, I should rewrite the manuscript and pretend that I had predicted it all along.

There are two important points to take from this. First, the unfortunate reality is that scientific data on human thought and behavior is rarely “perfect” and “clean.” To use one of my students’ favorite terms, psychological data is often a “hot mess.” In real life, people can be fickle and unpredictable, so why shouldn’t our data reflect this? Why do we pretend that there is always a simple story and an easy explanation for everything? Second, by dropping all of the nonsignificant and conflicting findings and reframing our papers so that every hypothesis ever made appears to be supported, we give the impression that science always works, when nothing could be further from the truth! Editors and reviewers have come to expect perfection, which promotes a culture of (1) manipulating or “massaging” data and statistics until they appear perfect and (2) selectively presenting only the results that “work.” The end result is that we publish a lot of findings that can never be replicated, and nobody ever knows why.

Lastly, one of the other major things that needs to change is the set of journal policies and philosophies that limit publication to only “new” and “groundbreaking” research. More often than not, journals don’t want to publish papers that simply seek to replicate a previously documented effect. However, this is actually one of the most important things a journal can do! We need to know when a finding is a fluke and when it’s the real deal. I want to know more than just the newest and “sexiest” findings (which are often published because they will generate the most media attention, not because they are based on the most sound and rigorous research). In my opinion, a journal stakes its reputation on every finding it publishes. As a result, that journal should be required to publish subsequent studies that replicate or fail to replicate that result. Do these replications need to take up valuable pages in the print publication? No, and that’s not what I’m suggesting. Journals could simply publish an online-only supplement at the end of each year with brief reports of attempts to replicate their previously published findings. Such a system would cost very little in money and resources, but would provide an incredibly valuable service to the field. And by offering a formal outlet for experiments that did not work out, it would likely reduce the pressure on researchers to manipulate and fudge their data, because scientists would be able to publish their results no matter what they find.

Of course, all of this is easier said than done, and there are a number of other excellent ideas that have been proposed for addressing these issues. Although there are no simple solutions to the problem of unethical research practices, the field of psychology would be well served by taking this issue seriously and instituting some meaningful reforms sooner rather than later if it wants to retain its credibility as a science.

[1] Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366. doi:10.1177/0956797611417632
