Epimemetics and the "Selfish Gene"

“Die, Selfish Gene, Die.”
That’s the title of a controversial new article by David Dobbs. In it, he argues that the “selfish gene” (coined by Richard Dawkins in 1976) represents an outdated gene-centric approach to evolution. Instead, he says we need to focus on things like gene expression and genetic accommodation — things the “selfish gene” covers up and under-emphasizes.

Dobbs has been criticized for his biology (e.g. here and here and especially here), and he’s responded in an interesting way: “my challenge,” he writes in a follow-up post, “was less to an [sic] technical account of nature than to a metaphor and story used to describe those technicalities.” He’s against the “selfish gene” not for what it represents, but for its power as a “selfish meme.”

Comments on a post by the Editor-in-Chief of io9

While some have pointed out that Dobbs is having it both ways — and, to a certain extent, Dobbs acknowledges this — I want to take seriously his claim that what matters is less the selfish gene itself (or The Selfish Gene the book) and more the “selfish gene” the meme. Below, I’ll suggest why “meme” is an inappropriate way to frame the issue, and how history might provide another path.

First, let’s separate a few meanings of the “selfish gene.” There’s The Selfish Gene, a bestselling book by Richard Dawkins published in 1976. In it, Dawkins insisted that the gene — rather than the individual, the group, or the species — was the unit of selection in evolution. According to Dawkins, whichever unit natural selection “selects” will tend to become, as he put it, “selfish.” 

The Selfish Gene (Wikimedia Commons)

Beyond the title of the book and the ideas behind it, however, is what Dobbs is calling the “selfish meme”: that is, the phrase “selfish gene,” which has taken on a life of its own, both within biology and beyond. It’s this meme, according to Dobbs, that has tended to obscure the complexity of the interactions between genes and environment that many biologists find interesting. 

Dobbs makes this point at the very end of the original article, though in his follow-up he insists that it’s the heart of the matter. Given that the headline says the metaphor is “wrong,” Dobbs does seem to be backpedaling a bit. But headlines often overstate. As someone put it on Twitter today (though not in response to Dobbs, at least as far as I know):

Part of what I want to say is about this “science journalism.” Both Dawkins and Dobbs are, in the words of the comment above, “pop science communicators.” Both are concerned with the public understanding of science (indeed, Dawkins held Oxford’s Simonyi Professorship for the Public Understanding of Science until 2008), and this is, Dobbs insists, what his article was about.

Unlike Dobbs, however, Dawkins is also the author of (much of) the science he is communicating to a wider audience. This dual identity makes The Selfish Gene a hybrid of sorts. It also places a unique sort of pressure on his word choice, as his phrases are liable to be taken up in both scientific research and the wider reception of (and interaction with) that research. We should all be so lucky.

This hybridity is, I think, the source of much of the current confusion (and frustration). Dobbs and his followers insist that he was attacking the “selfish gene” as a meme (and Dawkins-as-communicator), while Dawkins and his followers have pilloried Dobbs for failing to understand (or, in some cases, even read) the book at all — fixating, instead, on its title alone.

While Dobbs no doubt read more, many people haven’t. Dawkins knows this, and admits that the title “might give an inadequate expression of its contents.” After all, “selfish” calls up a lot more than “that which is selected in evolution.” Given that it was in this very book that Dawkins introduced the “meme,” you’d think he’d have taken his title’s memetic potential more seriously.

But here’s where I want to push back. To a certain extent I like Dobbs’s emphasis on the “selfish gene” as a “selfish meme,” since it forces us to think of alternatives that are just as catchy but might better capture the direction of evolutionary research today. But I also want to resist the memetic move: I think it’s an error to use meme-centric history to critique gene-centric biology.

Richard Dawkins (Wikimedia Commons)

Why? Not because I don’t think there are such things as “memes” (I’m agnostic). Rather, just as Dobbs insists that the interesting story in biology today has more to do with epigenetics than with “the gene,” I think a better account of the rise of the “selfish gene” would deal with its interactions with other cultural elements and the environment that enabled its success. Call it “epimemetics.”

A good “epimemetic” account would go beyond the fact that the “selfish gene” emerged at a time when “our understanding of genes was itself relatively simple.” It would uncover what was at work in the wider culture of the 1970s and 1980s that enabled its success. What were the assumptions, the needs, the prior understandings and public debates that contributed to its “going viral”?
After all, if something like “genetic accommodation” (which Dobbs describes in the article) is more widespread in the natural world than we thought, then its equivalent—“memetic accommodation,” in which environmental shifts produce changes that then feed back into the reproduction of our cultural heritage—must be widespread indeed. What would such a study look like?
Age of Fracture (Harvard University Press)
To my mind, it could fit right into Daniel Rodgers’ Age of Fracture. If Rodgers had included a chapter on biology, it almost certainly would’ve been on Dawkins and the disaggregation of the individual. And, given how Dawkins used the “selfish gene” to explain unselfish behavior, it would’ve had the same counterintuitive ring as Rodgers’ analysis of the “communitarianism” of John Rawls.
In one sense, historians already do “epimemetics,” even if they don’t frame it as such (and I’m glad they don’t). After all, Lovejoy himself looked past his infamous “unit-ideas” to the conditions that shaped them in particular periods. There’s something satisfying in seeing how biologists do the same: looking beyond elementary “units” to see connections and feedback at the root of life itself. 
Rather than stop where Dobbs does (insisting that the “selfish gene” had its day but now it’s time to move on), I see this as a call to do more history. Why did it have its day — and how is our day different? If we can integrate that sort of history into the way we teach science, we’ll build a better sense of why certain ideas took off (and slowed down) than we will by calling them “viral.” 

Consuming the Self: One Critique of 23andMe

Last week, the FDA sent a letter to Anne Wojcicki — the CEO of direct-to-consumer (DTC) genetic testing company 23andMe — ordering them to stop marketing their Personal Genome Service (PGS), which the FDA defines as a “medical device” subject to specific forms of regulation.
According to Forbes, Wojcicki and her company have flouted regulatory red-tape despite both efforts by the FDA to work with them and 23andMe’s own statement that their “relationship with the FDA remains critically important.” As a result, the FDA ordered it to stop selling $99 PGS kits.
Historians have noted different aspects of the story, both here at AmericanScience (here and here) and elsewhere (e.g. here and here). I want to take a slightly different tack, one rooted in Sanford Kwinter’s response to an address made by Wojcicki at Harvard in 2012. Here is Kwinter: 

There’s a lot to note about Kwinter’s position, which he calls “a position that is perhaps far too rarely put on the record today.” It’s one of resistance and radical critique, far removed from the usual stance of the historian today. Here, I’d like to explore its implications for our current moment.

“a position that is perhaps far too rarely put on the record today”

Kwinter begins with Spencer Wells’ “Genographic Project,” a non-medical effort to sequence human populations to learn about their history and diversity. Though Kwinter invites his classes to participate, they never do. Why? “Diffidence,” he says (though it sounds more like ambivalence). 
It’s this diffidence Kwinter wants to introduce into the discussion of the descendants of the Genographic Project, including DTC companies. And this is where his critique takes off. We have become, he insists, “sitting ducks” for “predatory” knowledge aggregators—like 23andMe.
This is close to Charles Seife’s comment that the $99 PGS is “a one-way portal into a world where corporations have access to the innermost contents of your cells and where insurers and pharmaceutical firms and marketers might know more about your body than you know yourself.”   
Seife’s is an important critique—though, as Nathaniel Comfort asks: “Is this any more insidious than gmail? If we say yes, we risk running headlong into the genetic determinism this blog rails against. We have to be careful about privileging biological information over social information.”
And it’s true that some of Kwinter’s remarks—about the nature of science, about the power of biological identity—might make science studies scholars cringe. But we shouldn’t let our cringes dull Kwinter’s critique. Rather, we should take it as a call to bring our tools to bear on the present.
“Who said the consumer is always right? The seller. Never anyone but the seller.”
When we do, we’ll see that what Kwinter calls “diffidence” (about the 23andMe project) is similar to the feeling we get when we, as historians, encounter past efforts to “naturalize” scientific progress. In both, we resist claims of inevitability in order to see both power and politics at play. 
In both her address and her response, Wojcicki insists repeatedly that personal genetic data is here to stay. If we don’t pay 23andMe for it now, we are not only putting our lives at risk (from diseases to which we are predisposed) but also ceding the (data) floor to—yes—China. 
That’s part of her pitch. The other has to do with red-tape: the medical bureaucracy, she says, is holding back innovation, and patient-consumers deserve to have things sped up. As Kwinter notes, this amounts to saying that “the customer is always right”—which is said, of course, by the sellers. 
So who is the consumer, and what are they buying? As individuals, we are pulled in with promises of self-knowledge: find family members, know ancestry, learn about disorders we can help prevent through lifestyle adjustments as long as we make them in time. It’s inviting, if not imperative.
But this isn’t (just) a self-knowledge service. Rather, as Kwinter puts it, it’s “a data-mining—and perhaps, I don’t know—a crypto-bio-prospecting” initiative. We are the “consumers” of the direct-to-consumer model, but 23andMe figures us elsewhere as “assets,” “resources,” and “data.”
Which is to say: there are other consumers. Just as the tailored ads on Facebook and Google suggest, your identity—in whatever form—is a valuable commodity. Even if 23andMe (like Google) hangs onto it, there are ways we can’t even imagine to profit from “streamlining” your digital life for you.
“the world’s trusted source of personal genetic information”
This ambiguity—about what “personal” means, about whether I am “consumer” or “consumed,” data-generator or just data—is in 23andMe’s mission statement, which is “to be the world’s trusted source of personal genetic information.” Trusted by whom? Information for whom?
As Evgeny Morozov points out, for some reason we’ve tended to give “Big Data” a pass: “While ‘Big Pharma,’ ‘Big Food’ and ‘Big Oil’ are derogatory terms used to describe the greediness that reigns supreme in those industries, this is not the case with ‘Big Data.’” Why is this?
His point about our collective inability—unwillingness?—to see “data” as just another corruptible commodity applies to DNA as well. Blinded by myths Silicon Valley itself has produced, regulators have failed to see that 2.0 rhetoric often masks 1.0 aims. It’s still a market, after all.

Steven Pinker’s New Scientism

Yesterday, The New Republic published a big article by bestselling Harvard psychologist Steven Pinker. The title says it all: “Science Is Not Your Enemy.” Or does it? After all: whose enemy is science supposed to be? Pinker’s answer is there in his subtitle: the targets of his “impassioned plea” are “neglected novelists, embattled professors, and tenure-less historians.”

Humanists: according to Pinker, science isn’t your enemy—it’s your friend. Or your extremely successful younger sibling. Its methods and results are yours if you want them—all you have to do is ask. The problem is: you don’t want them—you shy away from science, or reject it outright.

Pinker’s got a solution, and he’s calling it “scientism.”

As Pinker points out, “scientism” is a term of abuse. It’s usually hurled at “reductionist” efforts to pose scientific solutions to all sorts of problems. And, as a barb, it’s often hitched to times when bad politics wore a scientific mask—Social Darwinism, say, or eugenics. According to Pinker, this is how some people paper over ignorance and fear of the sciences.

By appropriating the term, Pinker hopes to wipe the slate clean (sorry). He sees the new “scientism” as a campaign to both “export” scientific ideals to “the rest of intellectual life” and add scientific ideas to the stock of existing “tools of humanistic scholarship.” I’ll come back to both this idea of exportability and the metaphor of the toolkit in a bit.

But first: why all the fuss? Pinker’s “scientism” is supposed to help solve the widespread (if perhaps unwarranted) sense that “something is wrong” with the humanities. As Pinker points out, “anti-intellectual trends” and “the commercialization of our universities” are part of the problem. But so is “postmodernism”—in a sense, the humanities have made their own bed.

John Brockman—a self-described “cultural impresario” about whom I’ve written before—shares Pinker’s sense of what’s wrong. In the preamble to a re-posting of Pinker’s piece, Brockman is even more polemical: “the official culture” has “kicked [science] out” and “elite universities” have “nudged science out of the liberal arts undergraduate curriculum.” He sees scientific intellectuals—bestselling authors, MacArthur fellows, TED talkers—as a sort of renegade “subculture.”

Does this sound right? It seems to me that, even within the academy, work that spans “the two cultures” is consistently rewarded—most obviously, with prizes and grants. The cutting edge is often that which is most engaged with the sciences. Say what you want about the digital humanities or experimental philosophy—they seem to be doing alright for themselves.

Interestingly, what Pinker points out as quintessentially humanistic modes of inquiry—”close reading” and “thick description”—stemmed from precisely this sort of engagement. Stefan Collini and John Guillory have revealed the roots of “close reading” in interactions between literary critics and scientific psychologists in the 1910s and ’20s. And we owe “thick description” to Clifford Geertz and the cross-pollination of anthropological field-work and cultural history in the 1960s and ’70s.

It could be that something similar—a new paradigm, even—is emerging from the adoption of digital tools, statistical methods, and fMRI scans by humanists today. Or not. The point is that such engagement is going on, and has a legacy that spans the twentieth century—on either side of C.P. Snow’s “Two Cultures” diagnosis fifty years ago.

But I don’t want to rest on rejecting Pinker’s premise. Whether or not the humanities are in crisis, lots of people think they are—and many agree with Pinker that the sciences might offer a way out. What I want to highlight is the consequences of imagining this interaction in the terms I noted above: the “export” of ideals or the “toolkit” approach to rapprochement.

This view of intellectual life is a common one, well-illustrated by the title of a recent book by the philosopher Daniel Dennett: Intuition Pumps and Other Tools for Thinking. It’s no accident that Dennett is a leading philosopher of evolution: this view of cognition as tool-using is profoundly Darwinian. As a result, it represents, all by itself, the success of a particular scientific “export.”

This model of human agents—as embedded bricoleurs doing their best with the cultural resources (“tools”) at hand—is something we’ve argued about on this blog before. And it might well be the correct view. It’s certainly a very compelling one. Pinker, Dennett, and many of their peers in cognitive science and human evolution adhere to it.

And so do humanists—or at least historians. Limiting ourselves just to the history of science, let’s think over how the human agents at the heart of recent works are characterized. For the most part, I’d argue, they’re painted in a light very similar to the Pinker-Dennett-evolutionary model.

It wasn’t always this way, though. Time was, there were earnest efforts by historians to cast human actors in Marxist or Freudian—rather than a Darwinian—roles. In the last half-century, however, such accounts have gone the way of the Dodo, leaving us with one that’s extremely assimilable to reigning scientific views.

Here’s the rub. Pinker might be right about “two cultures” angst. But in adopting the toolkit model, he’s also put his finger on a prevailing assumption that ties the two sides together. This might explain both the promise and the peril perceived in the sort of “scientism” he’s proposing. Such shared assumptions are essential for bridge-building. But if humanists are uncomfortable with them, then the theory of agency underlying our accounts might merit further scrutiny.

Rule 14-1B: "Science" and "Tradition" in Golf

Yesterday, the United States Golf Association (USGA) announced a rule change. Coming into effect in 2016, Rule 14-1B will prohibit the use of so-called “anchored strokes” in sanctioned play. Rather than try to describe what “anchoring” is, here’s a helpful graphic provided by the USGA:


As a strategy for putting, “anchoring” has become increasingly popular—and controversial—over the last decade or so. According to ESPN, four out of the last six winners in major championships used “anchored strokes,” a rate of success that has fueled speculation about what (if any) competitive advantage such a stroke might confer.

I’m not a golf fan, and I don’t have an opinion one way or the other. What I’m interested in is the way this issue has been both contested within the golf community and portrayed in the media. Specifically, I was struck by how the old clash between “science” and “tradition” is playing out in some interesting ways. Here goes:

A central issue in debates over Rule 14-1B is, in the words of USGA President Glen Nager, “whether those who anchor play the same game” as those who don’t. Nager’s claim rested on the notion of the “traditional stroke” or “traditional free swing,” which he and the USGA aim to defend against the rising tide of “anchoring.”

Paul Azinger, a golf analyst for ESPN, challenged this claim from two directions. First, he thinks the “same game” argument is a specious one. “Who plays the same game as Tiger Woods?” The distinction between “anchoring” and something like driving distance on this score just doesn’t hold up. Second, he thinks Rule 14-1B is an attack on success, and that appeals to “tradition” or the “spirit of the game” are just window-dressing.

Now, one could imagine the USGA countering with scientific evidence: physiological tests about caloric efficiency, say, or anatomical studies of joint wear, or simple physical demonstrations. As one commenter on ESPN put it: “Physics 101: Levers are much easier to control than pendulums.” Or what about statistics? Is there evidence that “anchored” putters actually score better?

But the USGA has declined to conduct any experiments or run any regressions. Indeed, they’ve rejected any appeals to scientific or statistical studies. The 39-page document justifying Rule 14-1B makes the USGA’s position on this issue crystal-clear:

Although we understand that people often look for statistical data when engaged in a factual and policy debate, we believe that these assertions are misplaced in the present context and reflect a misunderstanding of the rationale for the Rule and the principles on which the Rules of Golf are based.

Those principles, the report concludes, rest “on considerations such as tradition, experience and judgment, not on science or statistics.” The prohibition on “anchoring” isn’t about whether or not it actually confers an advantage (on one player or many, in a career or a single putt), but about the fact that it leads to “reducing variables and alleviating inherent obstacles that otherwise exist in the traditional free swinging method of stroke.”

On one level, this makes total sense. As the USGA points out, they never conducted scientific studies to determine the possible advantage of throwing the ball instead of hitting it with a club. And we all recognize the arbitrary distinctions conferred by the rules of something like golf and adhered to out of a sense of tradition.

On another level, though, there are interesting exceptions to the appeal to tradition in the face of science—or, to be more specific, technology. With regard to the material, shape, and size of clubs and balls, the USGA engages in a great deal of technical specificity, including the stipulation of exact protocols for testing things like moment of inertia and initial velocity.

Experimental Set-up for Measuring Moment of Inertia
Remember, clubs like the one pictured above are called “woods” for a reason. And yet, the USGA has allowed driving clubs to go metal (or carbon fiber) – within carefully defined and scientifically tested limits of size, weight, and flexibility. That’s part of what explains the fact that, in 1980, no one hit the ball more than 280 yards; today, 90% of male professionals do.
So, science and statistics are used to police the advantage conferred by “technology” (the construction of implements necessary for the game) but are rejected outright when it comes to “method” (the use to which those implements are put). Putting, to put it another way, is about the putt, not the putter. 
Does this division make sense? It might help to look at cases in other contexts to see how such matters have been adjudicated elsewhere. In baseball, for example, metal bats are prohibited in the Major Leagues—not (only) because wood is traditional, but because the “trampoline effect” of metal means balls travel faster off the bat and endanger fielders (and especially pitchers). 
Another example of the boundary between “science” and “tradition” is so-called “card counting” or “advantage gambling” in card games. While technically legal, many casinos find ways to discourage players from gaining an advantage through the use of probability theory. There’s some sense that the shotgun approach of card-counting is—even when dramatized—somehow not “the same game” as the blackjack the rest of us play. 
The same goes for controversy surrounding the statistical approach to baseball managing popularized as “Moneyball” (and written up on this blog here and here). From Billy Beane to “anchored putting,” science and technology serve somewhat tenuous roles in the evolution and policing of some of our oldest pastimes.

The High Quality Research Act and American Science

Yesterday, President Obama spoke at the National Academy of Sciences to mark its 150th anniversary. Alongside the usual issues, Obama took time to defend “the integrity of our scientific process” and “our rigorous peer review system.” 
Why? Because they’re under attack—from within the halls of Congress. 
Rep. Lamar Smith (R-TX) is preparing legislation that would disrupt peer review at the National Science Foundation (NSF). A draft of the bill—which is called the “High Quality Research Act” (HQRA)—leaked onto the web this week. It includes a new set of criteria for NSF projects:
There are all sorts of reasons these developments should be of interest to readers of this blog—not least, the fact that the NSF funds the history of science through its Science, Technology, and Society (STS) Program. Below, I’ll fill out a few of the details of what’s happened, and suggest some ways HQRA (and its discontents) link up with issues of concern to science studies more generally. 

While Smith was apparently the least of three evils when he was appointed chair of the House Committee on Science, Space, and Technology, his views on climate change and other issues set him apart from the vast majority of practicing scientists like those Obama addressed yesterday. 
Of course, this hasn’t stopped him from joining Tom Coburn and others in an effort to redirect the NSF’s extensive merit review system toward a set of goals defined by Congress. Late last week, Smith took the effort to a new level, though, when he wrote the Acting Director of the NSF about five projects whose “intellectual merit” (an NSF metric) he doubted. Here they are: 
As the Committee’s ranking Democrat pointed out in response, Smith’s decision to question the scientific merits of specific grantees is unprecedented. “By making this request,” she added, “you are sending a chilling message to the entire scientific community that peer review may always be trumped by political review.” While peer review isn’t perfect, she concluded, it’s the best we’ve got. 
Over the next few days, everyone—politicians and lobbyists, scientists and their societies, scholars and citizens—will no doubt be weighing in on what all of this means. Let me just highlight a few things that I think speak directly to some of the issues explored by science studies in general and this blog in particular. 
It’s worth noting that the five studies singled out by Smith all fall under the NSF’s Directorate for Social, Behavioral and Economic Sciences (SBE). Why does this matter? For one, it seems unlikely that Smith would have ventured such a salvo at “harder” sciences like math and physics (John McCain’s 2008 beef with bears notwithstanding).
For another, we’re not far from the so-called “Science Wars” of the 1990s. Then, as now, the politics of knowledge were close to the surface—and the headlines. Though Sokal’s famous “hoax” was perpetrated on behalf of peer review, its afterlife in the media juxtaposed “scientific” and “social” ways of knowing in a way that rendered (and renders) some “social scientists” anxious. 
Another theme of interest to scholars of science studies is the notion of “peer review” itself. Interest goes back at least as far as Robert Merton’s notion of “organized skepticism,” one of the four scientific norms he introduced in “The Normative Structure of Science” in 1942. Merton extended that analysis in 1971, in a famous co-authored piece on “Patterns of Evaluation in Science.” 
More recently, STS scholars like Sheila Jasanoff have built on Merton by attending to the career of peer review in the realms of law and policy—or, more recently, in the contentious political climate around the issue of global warming. Jasanoff and others trace this evolution to the expanding circle of stakeholders as science and technology extend their reach. 
Mario Biagioli provides an alternative perspective on peer review—including a useful overview of both its history and the literature attending to it. For Biagioli, the relevant questions are less those of stakeholders and politics with a capital-P (though they are relevant), and more the philosophical issues connected with questions of authorship and intellectual property.
In either case, science studies has had a great deal to say about peer review, and will no doubt have more to say in the weeks and months ahead. What interests me is how, in response to Smith’s bill, the links between this thing called “the scientific process” and this thing called “peer review”—both contingent, even problematic concepts—tend to be shored up, not least by those (and here I’ll just take myself as a data point) who have participated in historicizing and even criticizing those same concepts in other contexts. 
Amidst calls for open access (including an ambivalent “trial” by Nature in 2006) and what seem like some pretty crucial questions about the merits and possibilities of the “scientific process” as it’s currently practiced (op. cit.), the intrusion of capital-P politics still tends to reinforce old binaries and shore up otherwise unstable categories in the interest of protecting what we’ve got. 
So, when Obama says “fields like psychology and anthropology and economics and political science” are all “sciences because scholars develop and test hypotheses and subject them to peer review,” or when he says “we’ve got to make sure that we are supporting the idea that they’re not subject to politics, that they’re not skewed by an agenda,” my interest is piqued. 
Do we really believe that’s what defines a science? Either way, do we really think whatever science is isn’t “subject to politics” or “skewed by an agenda”? Or do we mean it at particular times, with respect to particular politics and agendas, when particular hypotheses are under attack? 
I’m not sure, but wouldn’t it be fun to see a colleague—a scholar of peer review, say—called up to testify on its history and its relative importance for “the scientific process” as a result of all this?

The Science of Structure and the Apologetics of Agency

What do Jonah Lehrer and Sheryl Sandberg have in common?

I think it’s productive to see their separate moments in the sun through a shared lens. The way they’ve been received recently tells us something interesting about the way ideas of structure and agency play out in the popular press, and specifically how science fits into that picture.
In Lehrer’s plagiarism and Sandberg’s “Leaning In,” critics have fixated on the relative emphasis the two give to structure and agency. Where Lehrer didn’t take enough responsibility for his own agency, Sandberg made too much of hers (or any woman’s), at the cost of structural inequalities. Below, I explore how (and why) the two account for structure and agency the way they do, with special emphasis on the role of science in their accounts.
Let’s start with Lehrer.
Once the boy wonder of popular science, Lehrer saw his world fall apart late last summer amidst allegations (and confessions) that he had both plagiarized (his own work and others’) and, at various points, outright fabricated. In a four-part series (links here), I used the episode to explore structural features of “the house that Gladwell built” and the place of popular science.
Lehrer’s recent apology did something similar—much to the displeasure of his critics. Many felt Lehrer avoided admitting fault by pivoting away from his misdeeds to tell a story about the way the rules we’re forced to follow structure the actions we take. It’s not that Lehrer denied wrongdoing; it’s just that an apology is about your agency, not about what made (or let) you do it.
Critics called it a “meh culpa” and a “Mea Sorta Culpa.” They were outraged that he thought he could “humblebrag his way back into journalism.” The fact that he was paid $20,000 for his time certainly fanned the flames. But what was really at issue, displayed on the wall of live tweets behind him as he spoke, was the fact that he was using science to explain away his agency.
Live tweet by the man who first outed Lehrer
Lehrer says he needs new “standard operating procedures.” In effect, he says that his faults are here to stay—all he can do is contain them, with “a new list of rules, a stricter set of standard operating procedures.” Needless to say, this recourse to rules rankled journalists who see their trade as the pursuit of truth, not a flight from error. 
As Jennifer Schuessler put it for the New York Times: “before too long Mr. Lehrer was surrendering to the higher power of scientific research [and] the kind of scientific terms—‘confirmation bias,’ ‘anchoring’—he helped popularize.” In the end, it was more structure than agency, more science than apology—which no one wanted.
Things are different—opposite, even—with the reception of Sheryl Sandberg’s Lean In. Sandberg’s book, currently the #1 New York Times bestseller, has been persistently (some say unfairly) contrasted with another hugely popular piece on gender and the workplace: Anne-Marie Slaughter’s much-read essay in the Atlantic, entitled “Why Women Still Can’t Have It All.”
Slaughter’s article, which sparked a healthy debate over the summer, was actually framed partly around Sandberg’s take on similar issues, as expressed in a series of lectures (Lean In wasn’t out) like the one above. While applauding many of Sandberg’s points, Slaughter takes strong issue with her charge that the lack of female leaders can be explained by an “ambition gap.”

It’s a critique Slaughter sharpened in her recent review of Lean In for the New York Times Book Review. Though Sandberg recognizes both women’s agency and the structures that constrain it, “she chooses to concentrate only on the ‘internal obstacles,’ the ways in which women hold themselves back. This,” Slaughter adds, “is unfortunate.” Yes, women should lean in; but so should “business.”

Many, including Slaughter, have faulted Sandberg for generalizing from a privileged position within the corporate world. That is, it’s too easy for a woman who seems to have it all (or as close to it as one can get) to emphasize individual agency as the driving force of inequality. Structure, suggests Slaughter, slips through the cracks of Sandberg’s self-help feminism.

Which brings me back to structure vs. agency. Where Lehrer emphasized structures, Sandberg touts agency. And, while both draw extensively on scientific studies, these seem to align much more strongly with the structural side of things. Maybe this is obvious, but it’s helped me clarify some of the issues I was teasing out of Lehrer’s fall last autumn.

For Lehrer, science suggests that his pre-conscious biases require structural constraints. On this view, agency eludes articulation—in fact, it’s hardly there at all. Sandberg, by contrast, uses science (and statistics) to flesh those structural constraints out in full—and then argues for agency as a way to push through them.

Either way, the scientific world “out there” is a structural one. As far as agency goes, it either vanishes entirely (in the case of Lehrer) or exists somewhere outside the studies, a sort of deus ex machina—in Sandberg’s view—to fight back against the structures that constrain it. When all is TED and done, it’s up to us (ironically, perhaps) to decide which it is.

A Short History of Neuro-Everything

Braaaaaaaaains are everywhere these days. In the wake of the big announcement about the Brain Activity Map (BAM) Project, publicity around the mind sciences has been ramping up. This week is “Brain Awareness Week,” meant to raise public awareness about neuroscience. And today, Scientific American MIND announced a new homepage and blog network.

A Portrait of the Author as a Brain Scan 

All of this attention has produced some reflection. Patrick McCray has contextualized BAM in what he calls “the *-omics of everything.” He and others—including Gary Marcus—have highlighted the technological and methodological challenges such dynamic mapping faces, compared to the “static” maps of the Human Genome or Human Connectome Projects.

What’s interesting about all this is how ubiquitous the brain is already. As I noted recently, it’s all over the academy: neuroaesthetics, neuropolitics, neuroeconomics, neurohistory—the list goes on. Pivoting away from the ubiquitous suffix (“-omics”) McCray noted, I want to pay attention to this prefix. With apologies to Bill Bryson, I think we need a short history of neuro-everything.

Again, apologies to Bill Bryson!

Vaughan Bell recently argued that this “everyday brain talk” is beginning to constitute a “folk neuroscience,” a set of popular misconceptions about the brain. Whereas ongoing research is revealing just how complex the brain is, “we live,” he writes, “in a culture where dull biological platitudes make headlines and irritating scientific clichés win arguments.” We’re not far from Lehrer.

Others have noted the same problem. Roger Scruton, for example, suggests that this “neuro-envy” took hold in the wake of Patricia Churchland’s Neurophilosophy (1986). Following Churchland’s lead, various humanistic disciplines were rebranded as “infant sciences,” complete with the “neuro-” prefix and ready to have longstanding questions answered with brain imaging technologies. 
Scruton calls this “Brain Drain,” and there’s a certain truth (and tangibility) to the title. The names he and others have heaped on the trend are telling: “neurononsense,” “neurobabble,” “neurobollocks,” “neurocrackpottery“—like the list of “neuro-disciplines,” the list goes on. The degree of derision has risen right along with the attention and money paid to such efforts. “Brain drain” indeed.
As the Neurocritic recently asked: “What’s in a name?” Well, some—including Vaughan Bell—have pegged the birth of “neuroculture” to around the same time as “neuroscience” was coined (the 1960s). While neither blogger claims to care much about the term’s origins, they both see its emergence as symptomatic of a new era, in which, as Bell puts it, “everyday brain concepts have bubbled up from their scientific roots and integrated themselves into popular consciousness.” 
Here’s where historians of science might come in handy. For example: is “bubbling up” (from science to society) the best way to describe what’s going on? That is, how do terms and concepts cut between fields and across boundaries? How and when do such efforts get described as “pseudoscience,” and what do such charges have to say about the norms we attach to science and its authority?
There’s a lot to unpack here, but let me just focus on two lessons or approaches that historians might take with respect to the “neuro-everything” moment.
The first has to do with this question of “bubbling up” into popular consciousness. How does “brain talk” travel? One place to look might be the law: as a New York Times piece called “The Brain on the Stand” makes clear, the field has had a big impact on everything from evidence to jury selection. Some see neuroscience’s contributions as fundamentally new; others think it’s old concepts with new names. 
But the courts can’t be the primary site for transmission. What about marketing? Remember that controversial op-ed on how people literally love their iPhones? The science took a beating (here’s a summary), but this is the sort of research marketing firms are willing to pay for (in fact, the author commissioned one such firm to do it!). 
We’re still not quite there. And, while the primary locus has got to be the media outlets and bestselling authors who splash neuro-answers on covers and above the fold, that still doesn’t explain why audiences (and academics!) are willing and eager to pay for those sorts of answers. Nor can it just be the growth of neuro-imaging data and technology (though that, too, plays a part). 
Bell’s idea of “folk neuroscience” is too blunt—it seems to assume the sort of one-way transmission most historians and sociologists of science deny. What we need is a better account of the shifting values that attach to these questions and answers, one no doubt rooted in the stories people were already telling (or will tell) about themselves before “fMRI” was even a twinkle in neuro-everyone’s collective eye.
Part of an answer might come from the second approach historians might take, which has to do with the notion of “pseudoscience.” As Michael Gordin has recently shown, the charge of “pseudoscience” tells us more about the scientists using it than about those they’re using it against. Thus, we might see current fights over the “neuro-” prefix as part of an ongoing fight within the cognitive sciences about proper methods and objects of study. 
Which makes sense. But what about the humanists, who object no less strenuously? Sure, they might see fMRI studies of Austen readers as (possibly) pseudoscientific—but they’re actually more likely to react to their failures with respect to disciplinary norms and methods within the humanities. So-called “neuro lit crit” is perilous not because it’s bad science but because it’s bad humanities—”pseudohumanities,” if you will. 
These neuro-fields seem like a special case of the “boundary-work” historians like Gordin—following sociologists like Thomas Gieryn—have done such a careful job unpacking. I say special because, unlike mere “pseudosciences,” something like “neurohistory” brings contestation over the norms and methods of both the sciences and the humanities into the frame. What a mess!
But the payoff could be big: a short history of neuro-everything—or at least a conversation about it—might be just the sort of bridge between “the two cultures” we’ve been waiting for.