The Fall of Jonah Lehrer (Part 2 of 4)

This is the second installment of a four-part series on the cultural context of contemporary popular science writing. Part I is here, and Parts III and IV will follow in the next two weeks.

In 2010, Jonah Lehrer wrote a widely-read New Yorker piece called “The Truth Wears Off.” It began with a provocative question: “Is there something wrong with the scientific method?”


Lehrer’s answer, both in the piece and in follow-ups elsewhere, was “yes.” He called the frightening failure of scientists to reproduce one another’s results (or even their own) the “decline effect”: an old phrase for a new fear.

However, it’s not just science that’s in trouble. In the wake of Lehrer’s recent travails, something seems wrong with science writing, too—big, bold claims seem unable to weather scrutiny. In what follows, I’ll treat the problems facing science and science writing as parallel stories.

So, what is the “decline effect”?

According to Lehrer, the phrase was coined in the 1930s when a Duke psychologist thought he had discovered extrasensory perception (E.S.P.). However, his proof—a student able to predict the identity of hidden cards far better than chance—began to “decline” back to the level statistics would predict.

At the time, this was seen as a drop in the student’s actual extrasensory powers; we now tend to see the “decline effect” as operating on the data rather than on the phenomena themselves. That is, E.S.P. didn’t decline over time; it never existed in the first place. The evidence to the contrary was simply smoothed out by regression to the mean.
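Regression to the mean is easy to see in a toy simulation. As a rough sketch (the numbers are purely illustrative, not drawn from the Duke experiments): test many students on a card-guessing task where chance alone yields 20% success, pick the apparent “star,” and retest. The star’s score falls back toward chance, with no change in anyone’s abilities.

```python
import random

random.seed(0)

N_STUDENTS = 1000
N_CARDS = 100
P_CHANCE = 1 / 5  # five card symbols, so blind guessing succeeds 20% of the time

def run_session(n_cards, p):
    """Count correct guesses in one card-guessing session."""
    return sum(random.random() < p for _ in range(n_cards))

# First session: test everyone and keep the apparent "star."
first_scores = [run_session(N_CARDS, P_CHANCE) for _ in range(N_STUDENTS)]
star = max(range(N_STUDENTS), key=lambda i: first_scores[i])
print(f"Star's first-session score: {first_scores[star]} / {N_CARDS}")

# Second session: retest only the star. With no real E.S.P., the score
# falls back toward the chance level of 20 -- the "decline effect."
retest = run_session(N_CARDS, P_CHANCE)
print(f"Star's retest score:        {retest} / {N_CARDS}")
print(f"Chance expectation:         {N_CARDS * P_CHANCE:.0f} / {N_CARDS}")
```

Nothing “declined” here except our selection of an outlier: the star was chosen precisely because of a lucky run, and luck doesn’t replicate.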

This may seem obvious, but it’s actually where things get interesting.

Jonathan Schooler—a psychologist at UC-Santa Barbara—believes we can (and must) use science itself to figure out why well-established results are failing the test of replicability. He’s proposed an open-access database for all scientific results, not just the flashy, positive results that end up in journals.

Basically, Schooler’s pointing out a problem in scientific publication. Going further, though, we might also explain publishing patterns in the world of popular science writing in the same way.

To be published in a journal like Nature, it’s essential to have a “positive” rather than a “negative” result. Schooler is a bit hazy on the distinction, but Lehrer clarifies it: journals don’t want “null results,” especially ones that disconfirm “exciting” ideas. To get published, you either need your own sexy idea, or at least some “confirming data” for someone else’s.

Though this makes a certain amount of sense (why not reward ingenuity?), both Lehrer and Schooler think it blocks the road to inquiry. By encouraging overblown hypotheses and silencing subsequent evidence against them, we ignore how messy and uncertain “the truth” really is.
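The publication filter Lehrer and Schooler describe can itself be sketched in a few lines. Assuming (purely for illustration) an effect that doesn’t exist at all and journals that accept only “statistically exciting” estimates, the published literature reports a large effect where there is none, and later replications inevitably “decline”:

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.0   # assume the hypothesis is actually false
NOISE_SD = 1.0      # measurement noise in each study
N_STUDIES = 10_000
THRESHOLD = 1.64    # roughly a one-sided p < .05 cutoff, in sd units

# Every lab measures the same (nonexistent) effect with noise.
estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_STUDIES)]

# Journals accept only the "positive," exciting results.
published = [e for e in estimates if e > THRESHOLD]

print(f"Mean effect, all studies:       {statistics.mean(estimates):+.3f}")
print(f"Mean effect, published studies: {statistics.mean(published):+.3f}")
print(f"Share of studies published:     {len(published) / N_STUDIES:.1%}")
```

The full set of studies averages out near zero, but the published subset averages an apparent effect of about two standard deviations. This is one reason Schooler’s open-access database of all results, null ones included, would matter: the “decline” lives in the filter, not in the world.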

Lehrer concludes on a note that’s only gotten more interesting since scandal erupted around his own fudged data:

The decline effect is troubling because it reminds us how difficult it is to prove anything. We like to pretend that our experiments define the truth for us. But that’s often not the case. […] When the experiments are done, we still have to choose what to believe. 

You might say the same thing about science writing. A field built on suggestion, hypothesis, and (as recent events suggest) occasional data-fudging is not so far removed from the explanations Lehrer and Schooler provide for scientific “decline.”

This parallel (almost) surfaced in a follow-up to Lehrer’s piece on Radiolab and On the Media. Toward the end of the latter, Jad Abumrad (one of Radiolab’s hosts) confessed to his own show’s possible role in such matters:

The media is biased, and I mean not in the way that people think it is, but it’s certainly biased towards tension, it’s biased towards surprise. And so, there might be some kind of bias that leads us all towards a result that is counterintuitive and exciting.

What’s going on here? One of the world’s premier science journalists is recognizing the pressures he’s under to report results in a certain way: precisely the sort of pressure for which Lehrer faults science publishing.

On the one hand, this is obvious. If Lehrer’s right (and he might be) that, in the end, scientists “still have to choose what to believe,” then no one will be surprised that journalists (and their readers) do, too.

On the other hand, this is an opportunity for reflection. Taking these similarities seriously might let us see the Strong Programme (which Michael Barany mentioned in a recent comment) in a new way.
In David Bloor’s canonical formulation (1976), we should explain knowledge claims causally, impartially, symmetrically, and reflexively. Here, the last two (symmetry and reflexivity) are the most interesting, and we might combine them in Lehrer’s case: if we owe the “decline effect” to publishing patterns, we should be prepared to explain our own work in the same way.
And this seems to hold. Journalists like Lehrer or Malcolm Gladwell, who pitch counterintuitive claims about things like creativity, are as much a product of the marketing for trade books (or TED talks, or the New Yorker) as the “decline effect” is a product of journal bias.

In turn, the same must be true of academic (or blog) attention to Lehrer. On this note, a provocative chapter by Winfried Fluck is instructive. Fluck argues that publishing pressures in the professionalized humanities have produced a different sort of decline: up with originality, down with synthetic vision.

While others have seen our capacity to grasp what’s going on here as an opportunity to change the course of our work (or at least our methods), I’m less certain that there’s a way out of the loop. Some relish Lehrer’s point about choosing our beliefs, but the problem, as I see it, is that it produces both regulation and backlash of the sort I’ll discuss in next week’s post.

One thought on “The Fall of Jonah Lehrer (Part 2 of 4)”

  1. Lee

    Thanks for these Lehrer posts, Hank. I've really been enjoying them. I wanted to double back to this second post, which I've been thinking about all week.

    I wholeheartedly agree with you that pressures in the publishing industry reproduce some of the same distortions in writing on science as we see in science itself, and I'm all for some kind of reflexivity about and sensitivity towards our own writing and how it might be going amiss.

    I wonder how you see this reflexivity best working, however. Do you think that we need a series of (what one of my friends jokingly refers to as) “critical studies of STS”? (I'm lumping our kinds of history in the STS box here.) Or do you think this is an operation best carried out on the fly? Or some combination of those two extremes?

    I think that we would find right off the bat, for instance, that academic STS does not at all resemble the forms of democracy that it effusively espouses.

    I think the only risk (that I see right now; I'm sure my risk averse mind could fantasize up many others, paranoid-style) is that such reflexivity would just lead to more STS navel-gazing, an already favorite activity of the field that I plan on describing in one of my next two blog posts.


