This is the second installment of a four-part series on the cultural context of contemporary popular science writing. Part I is here, and Parts III and IV will follow in the next two weeks.
In 2010, Jonah Lehrer wrote a widely read New Yorker piece called “The Truth Wears Off.” It began with a provocative question: “Is there something wrong with the scientific method?”
![](http://www.newyorker.com/images/2010/12/13/p233/101213_r20317_p233.jpg)
Lehrer’s answer, both in the piece and in follow-ups elsewhere, was “yes.” He calls the frightening failure of scientists to reproduce one another’s results (or even their own) the “decline effect”—an old phrase for a new fear.
However, it’s not just science that’s in trouble. In the wake of Lehrer’s recent travails, something seems wrong with science writing, too—big, bold claims seem unable to weather scrutiny. In what follows, I’ll treat the problems facing science and science writing as parallel stories.
So, what is the “decline effect”?
According to Lehrer, the phrase was coined in the 1930s when a Duke psychologist thought he had discovered extrasensory perception (E.S.P.). However, his proof—a student able to predict the identity of hidden cards far better than chance—began to “decline” back to the level statistics would predict.
While this was once seen as a drop in the student’s actual extrasensory powers, we now tend to see the “decline effect” as operating on the data rather than on the phenomena themselves. That is, E.S.P. didn’t decline over time—it never existed in the first place. The apparent evidence for it was simply smoothed out by regression to the mean.
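To see how regression to the mean alone produces an apparent decline, here is a minimal simulation sketch in Python. The setup is illustrative, not a reconstruction of the Duke experiments: it treats each guess as an independent one-in-five chance (a simplification of a true 25-card Zener deck), picks the best performer out of a large pool of students guessing at random, and then retests that student.

```python
import random

random.seed(42)

N_STUDENTS = 1000   # a large pool of guessers (illustrative)
N_CARDS = 25        # guesses per session, as in a 25-card run
P_CHANCE = 0.2      # one-in-five chance per guess (a simplification)

def session_score():
    """Correct guesses in one session, assuming nothing but chance."""
    return sum(random.random() < P_CHANCE for _ in range(N_CARDS))

# Session one: everyone guesses, and we single out the best performer.
first_run = [session_score() for _ in range(N_STUDENTS)]
print(f"Star student's first score: {max(first_run)}/25 (chance predicts 5)")

# Retest the star student alone: the apparent 'power' drifts back to chance.
retests = [session_score() for _ in range(10)]
print(f"Star student's retest average: {sum(retests) / len(retests):.1f}/25")
```

Nothing declined except the fluke: selecting a student because of an extreme first score all but guarantees that later scores will look worse.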
This may seem obvious, but it’s actually where things get interesting.
Jonathan Schooler—a psychologist at UC Santa Barbara—believes we can (and must) use science itself to figure out why well-established results are failing the test of replicability. He’s proposed an open-access database for all scientific results, not just the flashy, positive results that end up in journals.
Source: http://www.nature.com/news/2011/110223/full/470437a.html
Basically, Schooler is pointing out a problem in scientific publication. Going further, though, we might explain publishing patterns in popular science writing in the same way.
To be published in a journal like Nature, it’s essential to have a “positive” rather than a “negative” result. Schooler is a bit hazy on the distinction, but Lehrer clarifies it. Journals don’t want “null results,” especially if they disconfirm “exciting” ideas. To get published, you need either a sexy idea of your own or at least some “confirming data” for someone else’s.
Though this makes a certain amount of sense—why not reward ingenuity?—both Lehrer and Schooler think it blocks the road to inquiry. By encouraging overblown hypotheses and silencing subsequent evidence against them, we ignore how messy and uncertain “the truth” really is.
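As a rough sketch of how that filter alone can manufacture a decline effect (my own illustrative simulation, not anything Lehrer or Schooler present), suppose many small labs study an effect whose true size is zero, journals print only the studies that clear a significance-style cutoff, and replications of the published findings then run without that filter:

```python
import random
import statistics

random.seed(0)

N_STUDIES = 2000   # labs all chasing an effect whose true size is zero
N_SUBJECTS = 20    # small samples make big flukes likely
TRUE_EFFECT = 0.0

def run_study():
    """Observed effect: the mean of one small sample around the true effect."""
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_SUBJECTS)]
    return statistics.mean(sample)

# Journals accept only "positive" results: observed effects large enough to
# clear a nominal one-sided significance cutoff (about 1.645 standard errors).
THRESHOLD = 1.645 / (N_SUBJECTS ** 0.5)
published = [e for e in (run_study() for _ in range(N_STUDIES)) if e > THRESHOLD]

# Replications of the published findings face no such filter.
replications = [run_study() for _ in published]

print(f"Mean published effect:   {statistics.mean(published):.2f}")
print(f"Mean replication effect: {statistics.mean(replications):.2f}")
```

The published studies report a healthy average effect; the replications regress toward the true value of zero, which is exactly the “wearing off” Lehrer describes.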
Lehrer concludes on a note that’s only gotten more interesting since scandal erupted around his own fudged data:
> The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. […] When the experiments are done, we still have to choose what to believe.
> The media is biased, and I mean not in the way that people think it is, but it’s certainly biased towards tension, it’s biased towards surprise. And so, there might be some kind of bias that leads us all towards a result that is counterintuitive and exciting.
![David Bloor](http://easts.dukejournals.org/content/4/3/419/F2.large.jpg)
Thanks for these Lehrer posts, Hank. I've really been enjoying them. I wanted to double back to this second post, which I've been thinking about all week.
I wholeheartedly agree with you that pressures in the publishing industry reproduce some of the same distortions in writing on science as we see in science itself, and I'm all for some kind of reflexivity about and sensitivity towards our own writing and how it might be going amiss.
I wonder how you see this reflexivity best working, however. Do you think that we need a series of (what one of my friends jokingly refers to as) “critical studies of STS”? (I'm lumping our kinds of history in the STS box here.) Or do you think this is an operation best carried out on the fly? Or some combination of those two extremes?
I think that we would find right off the bat, for instance, that academic STS does not at all resemble the forms of democracy that it effusively espouses.
I think the only risk (that I see right now; I'm sure my risk-averse mind could fantasize up many others, paranoid-style) is that such reflexivity would just lead to more STS navel-gazing, already a favorite activity of the field, which I plan on describing in one of my next two blog posts.