This is the second installment of a four-part series on the cultural context of contemporary popular science writing. Part I is here, and Parts III and IV will follow in the next two weeks.
In 2010, Jonah Lehrer wrote a widely read New Yorker piece called “The Truth Wears Off.” It began with a provocative question: “Is there something wrong with the scientific method?”
Lehrer’s answer, both in the piece and in follow-ups elsewhere, was “yes.” He called the frightening failure of scientists to reproduce one another’s results (or even their own) the “decline effect”: an old phrase for a new fear.
However, it’s not just science that’s in trouble. In the wake of Lehrer’s recent travails, something seems wrong with science writing, too—big, bold claims seem unable to weather scrutiny. In what follows, I’ll treat the problems facing science and science writing as parallel stories.
According to Lehrer, the phrase “decline effect” was coined in the 1930s, when a Duke psychologist thought he had discovered extrasensory perception (E.S.P.). However, his proof—a student able to predict the identity of hidden cards far better than chance—began to “decline” back toward the level statistics would predict.
At the time, the drop was read as a loss of the student’s actual extrasensory powers; we now tend to see the “decline effect” as operating on the data rather than on the phenomena themselves. That is, E.S.P. didn’t decline over time; it never existed in the first place. The apparent evidence to the contrary was simply smoothed out by regression to the mean.
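Regression to the mean is easy to see in a toy simulation. The sketch below is not from Lehrer’s piece; the class size, trial count, and chance rate are all invented for illustration. Test many students who are guessing purely at random, single out the best performer, and retest them: the “star” was just the luckiest, so the second score tends to fall back toward chance.

```python
import random

random.seed(42)

N_STUDENTS = 200      # hypothetical class size (made up)
N_TRIALS = 100        # card guesses per session (made up)
P_CHANCE = 0.2        # five symbols, so a 1-in-5 guess rate

def session(n=N_TRIALS, p=P_CHANCE):
    """Correct guesses in one session, assuming no E.S.P. at all."""
    return sum(random.random() < p for _ in range(n))

# Session 1: test everyone and keep the apparent star.
scores = [session() for _ in range(N_STUDENTS)]
star = max(range(N_STUDENTS), key=lambda i: scores[i])
print("Star's first score:", scores[star],
      "(chance predicts about", int(N_TRIALS * P_CHANCE), ")")

# Session 2: retest only the star. With no real effect, the score
# tends to "decline" back toward chance -- regression to the mean.
print("Star's retest score:", session())
```

Nothing about the student changed between sessions; only the selection did. Picking the extreme performer guarantees that a repeat measurement will, on average, look worse.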
This may seem obvious, but it’s actually where things get interesting.
Jonathan Schooler—a psychologist at UC-Santa Barbara—believes we can (and must) use science itself to figure out why well-established results are failing the test of replicability. He’s proposed an open-access database for all scientific results, not just the flashy, positive results that end up in journals.
Basically, Schooler is pointing out a problem in scientific publishing. Going further, though, we might explain publishing patterns in the world of popular science writing the same way.
To be published in a journal like Nature, it’s essential to have a “positive” rather than a “negative” result. Schooler is a bit hazy on the distinction, but Lehrer clarifies it: journals don’t want “null results,” especially ones that disconfirm “exciting” ideas. To get published, you either need your own sexy idea, or at least some “confirming data” for someone else’s.
Though this makes a certain amount of sense (why not reward ingenuity?), both Lehrer and Schooler think it blocks the road to inquiry. By encouraging overblown hypotheses and silencing subsequent evidence against them, we ignore how messy and uncertain “the truth” really is.
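The filter Lehrer and Schooler describe can be sketched with another toy simulation (every number here is invented, not drawn from either author). Suppose a hypothesis is simply false, and many labs each measure its effect once with ordinary sampling noise. If journals accept only large, significant-looking estimates, the published record will show a strong effect where none exists, and later replications will seem to “decline” toward the truth.

```python
import random

random.seed(1)

TRUE_EFFECT = 0.0     # the hypothesis is actually false
N_LABS = 1000         # hypothetical number of labs (made up)
NOISE = 1.0           # standard deviation of each lab's estimate

# Each lab measures the (nonexistent) effect once, with noise.
estimates = [random.gauss(TRUE_EFFECT, NOISE) for _ in range(N_LABS)]

# Journals accept only "positive" results: estimates clearing a
# significance-style threshold of 1.96 standard deviations.
published = [e for e in estimates if e > 1.96 * NOISE]

print("Mean of all estimates:     %+.2f" % (sum(estimates) / len(estimates)))
print("Mean of published results: %+.2f" % (sum(published) / len(published)))
# Replications of a published finding draw fresh noise, so they drift
# back toward zero -- the published effect appears to "wear off."
```

The full set of estimates averages out near zero, but the published subset is, by construction, well above the threshold. The decline effect then needs no mysterious force at all: it is what regression toward the truth looks like after a biased filter.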
Lehrer concludes on a note that’s only gotten more interesting since scandal erupted around his own fudged data:
The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. […] When the experiments are done, we still have to choose what to believe.
In one of those follow-ups elsewhere, Lehrer extended the point to journalism itself: “The media is biased, and I mean not in the way that people think it is, but it’s certainly biased towards tension, it’s biased towards surprise. And so, there might be some kind of bias that leads us all towards a result that is counterintuitive and exciting.”
[Image: David Bloor (http://easts.dukejournals.org/content/4/3/419/F2.large.jpg)]