Sex Research Literacy in an Era of Fake, Biased, and Sensationalized News
February 1, 2017 by Justin Lehmiller
Most popular media reports touting the results of the latest sex study suffer from at least one of the following problems: they’re inaccurate, biased, or highly sensationalized. Unfortunately, too few readers recognize this, which means that too many end up taking what they read at face value. That’s a serious problem, especially given that there’s already so much misinformation about sex out there. Sex research literacy is therefore vital for helping readers to separate good from bad media reports–and never has this been more important than in this current era of “fake news.” To that end, here are a few things to keep in mind the next time you read a popular media article about sex research.
1.) Recognize that there’s no such thing as a “perfect” study—every study (whether about sex or something else) has limitations. Of course, the nature of the limitations will vary quite a lot across studies, but they’ll always be there. For instance, there are often concerns to be raised about the size or representativeness of the sample (see #2 below for more on this). In addition, there may be concerns about the nature or design of the study itself. Did the researchers measure the right outcomes? Was there an appropriate control condition? Any way you look at it, before drawing conclusions based upon the findings of any given study, it’s essential to acknowledge the limitations. To that end, I make an effort to highlight at least some limitations of every study I cover on the blog; however, not all sex writers are so careful about this (indeed, research has found that it’s actually rare for media reports of scientific research to mention any limitations at all!). In short, be very wary of articles that discuss sex research, but make no mention whatsoever of limitations–they’re telling you a story that’s probably too good to be true.
2.) Sex studies rarely have representative samples, so we have to be very cautious about generalizing the results. Representative samples are hard to come by in any line of research, but this is especially true in sex research for a few reasons. For one thing, sex research in general is poorly funded (take it from me–it’s hard to get grant money if you study sex for a living!). Unfortunately, without significant sums of money, it can be very challenging to recruit large and demographically representative samples. And even when sex studies are funded, self-selection is a major issue, meaning that certain kinds of people are more willing to participate in sex studies than others, especially studies of sexual arousal and behavior. As a result, our samples tend to be biased in favor of participants who hold positive attitudes toward sex and who are more sexually experienced than the rest of the population. Studies of sexual minorities also face self-selection issues in that they tend to oversample those who are more comfortable and confident in their sexuality, which means we don’t know as much about those who aren’t “out.” In light of this issue, it is important to pay attention to the nature of any sample and to ask yourself “to whom do these results apply?” Be extra skeptical of sex research reports that don’t tell you anything about the sample or that make sweeping generalizations.
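To see how self-selection can distort a study's results, here is a small simulation. Everything in it is invented for illustration: we assume each person has a "comfort with sex research" score and that only people above some comfort threshold volunteer. The volunteer sample's average then no longer matches the population's.

```python
import random

random.seed(1)

# Invented "comfort with sex research" scores for a population of 10,000,
# drawn from a normal distribution centered at 50.
population = [random.gauss(50, 15) for _ in range(10000)]

# Assumed participation rule (purely hypothetical): only people with a
# comfort score above 60 agree to volunteer for the study.
volunteers = [score for score in population if score > 60]

pop_mean = sum(population) / len(population)
vol_mean = sum(volunteers) / len(volunteers)

print(f"Population mean comfort: {pop_mean:.1f}")  # close to 50
print(f"Volunteer mean comfort:  {vol_mean:.1f}")  # well above 50
```

Nobody lied and nothing was measured incorrectly; the gap comes entirely from who chose to show up. That is why a sample's composition matters as much as its size.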
3.) Correlational studies tell us nothing about cause and effect. Correlational studies are common in sex research. These are studies in which scientists look to see which variables are statistically associated. For instance, a correlation analysis could tell us whether there’s a link between condom use and women’s mood states—and, indeed, at least one study has found support for this idea, such that women who use condoms more frequently during vaginal intercourse tend to report being less happy. Trying to draw conclusions about what associations like this mean can get us into real trouble, though, because correlations don’t tell us why any two variables are linked. Does condom use cause changes in women’s mood (as the authors of this particular study argued)? Or do women’s mood states influence their condom use habits? Or is there perhaps a third variable–like relationship quality or length–that could instead account for this association? As you can see, correlational studies can potentially be explained in a lot of different ways! For this reason, it is important to be cautious about media reports that “hype” and sensationalize correlations (see here and here for a few particularly bad examples of media reports featuring correlational sex research).
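The "third variable" problem is easy to demonstrate with simulated data. In the sketch below (all relationships are made up for illustration, not taken from the study above), relationship length drives both condom use and happiness, and neither of those two variables influences the other. Yet the two end up strongly correlated anyway.

```python
import random

random.seed(42)

n = 1000
# Hypothetical confound: relationship length, 0 to 10 years.
relationship_length = [random.uniform(0, 10) for _ in range(n)]

# Assumed (invented) relationships: longer relationships mean less condom
# use and higher happiness. Note that neither variable below depends on
# the other, only on relationship length plus random noise.
condom_use = [10 - years + random.gauss(0, 2) for years in relationship_length]
happiness = [years + random.gauss(0, 2) for years in relationship_length]

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient, no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Condom use and happiness come out strongly negatively correlated,
# even though, by construction, neither one causes the other.
print(f"r = {pearson_r(condom_use, happiness):.2f}")
```

A journalist looking only at that correlation could write "condom use linked to unhappiness," and the headline would be technically true while the causal story it implies is false. That is exactly the trap to watch for in media coverage of correlational research.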
4.) There are no universal principles of human sexuality—there are always exceptions to the “rules.” When writing up sex studies, journalists often leap to conclusions, such as “men are like ___ and women are like ____.” Or maybe they’ll talk about how porn or masturbation always or never lead to certain outcomes. Statements like this are wholly inaccurate because I have yet to see a sex study in which all people of a given group responded in exactly the same way—there is always some variability present (that’s why researchers report something called the standard deviation alongside any average or mean). Keep in mind that research can only tell us what certain groups of people do on average, not what everyone in a given group does. As such, it is important to avoid stereotyping members of a given group in everyday life based solely on the averages you’ve read about in a research report.
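The point about averages and variability can be made concrete with a toy example. The numbers below are invented purely for illustration: two groups differ in their means, yet their individual scores overlap so much that the group average tells you very little about any one person.

```python
import statistics

# Invented scores for two hypothetical groups. Group B has the higher
# average, but the two distributions overlap heavily.
group_a = [2, 4, 5, 5, 6, 7, 8, 9]
group_b = [3, 4, 5, 6, 6, 7, 8, 10]

mean_a = statistics.mean(group_a)  # 5.75
mean_b = statistics.mean(group_b)  # 6.125
sd_a = statistics.stdev(group_a)   # the variability researchers report
sd_b = statistics.stdev(group_b)

print(f"Group A: mean={mean_a:.2f}, SD={sd_a:.2f}")
print(f"Group B: mean={mean_b:.2f}, SD={sd_b:.2f}")

# Several members of the lower-mean group score above the higher-mean
# group's average: averages describe groups, not individuals.
above_b_mean = sum(score > mean_b for score in group_a)
print(f"{above_b_mean} of {len(group_a)} in group A score above group B's mean")
```

When the standard deviations are large relative to the gap between the means, a statement like "group B scores higher than group A" is true on average and still wrong about many individuals, which is precisely why stereotyping from group averages fails.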
5.) A write-up of the latest sex study is not necessarily an invitation to change your life. Although many sex writers have a tendency to tell you what each study can do for you or how you can apply its results in everyday life, I don’t do this very often, and it’s because of all the reasons I outlined above. It is difficult (if not impossible) to make practical recommendations when you start considering the limitations of a given study and sample–and this is especially true when the research is correlational. In addition, it’s important to keep in mind that if this is the first time a finding has been reported, it’s not necessarily clear how reliable it is. If the current “replication crisis” in science has taught us anything, it’s that we can’t have quite as much confidence in a standalone finding as we’d like and that there are a lot of false positives and conflicting results out there. In light of this, it’s best not to look at any one study and think, “what can I personally do with these results?” When it comes to applying science to your own life, don’t focus too much on single studies–instead, look at the broader literature in a given area and give due consideration to the limitations of the research.
Want to learn more about Sex and Psychology? Click here for previous articles or follow the blog on Facebook (facebook.com/psychologyofsex), Twitter (@JustinLehmiller), or Reddit (reddit.com/r/psychologyofsex) to receive updates.
Plaud, J. J., Gaither, G. A., Hegstad, H. J., Rowan, L., & Devitt, M. K. (1999). Volunteer bias in human psychophysiological sexual research: To whom do our research results apply? Journal of Sex Research, 36, 171-179.
Gallup, G. G., Burch, R. L., & Platek, S. M. (2002). Does semen have antidepressant properties? Archives of Sexual Behavior, 31, 289-293.
Image Source: iStockphoto
Dr. Justin Lehmiller, Founder & Owner of Sex and Psychology
Dr. Justin Lehmiller is a social psychologist and Research Fellow at The Kinsey Institute. He runs the Sex and Psychology blog and podcast and is author of the popular book Tell Me What You Want. Dr. Lehmiller is an award-winning educator and a prolific researcher who has published more than 50 academic works.