Category Archives: Lukas

The Problem of the One and the Many in Gun Control

Over the past several months, I’ve become obsessed by what you might describe as the problem of the one and the many. It surfaces for any collective in which the interests of individuals diverge from those of the whole. I’m currently writing an essay about the way this problem manifested itself in debates at the turn of the 20th century about the evolution of multi-cellularity or super-organisms like insect societies. I also wrote a post on this blog recently arguing that people’s misgivings about the Affordable Care Act ought to be seen as a manifestation of the same problem. Today, on the one-year anniversary of the tragic Sandy Hook school shooting, I would like to suggest that current debates about gun control can be seen in the same light.

As you will recall, the Sandy Hook tragedy last year occasioned a renewed effort to pass more restrictive gun control laws. And although these efforts have met with some success on a state level, supporters of gun control have gotten exactly nowhere in the United States Congress. This is true despite the fact that an overwhelming majority of voters support background checks and other measures that impose some additional limits on our ability to bear arms.

The gridlock on a federal level primarily indexes the enormous power of well-funded pro-gun lobbying groups such as the National Rifle Association to exert pressure on lawmakers. So in a sense what we are witnessing is yet more evidence of the deep flaws that pervade America’s political system. Still, there is also a real and substantive debate about the right to bear arms that rewards further scrutiny. It is this debate, I would like to suggest, that exhibits an interesting structural similarity to the problem of multi-cellularity and eusociality, as well as other major transitions in evolution.

Gun control advocates are basically engaged in a public health campaign. By making it more difficult to purchase a firearm, you reduce the number of guns in circulation. This, in turn, will reduce the number of gun-related fatalities. The statistics are pretty straightforward and don’t leave much room for argument. As a whole, our society would be much better off if there were fewer guns on the streets.

Pro-gun advocates counter with any number of claims. Sometimes they simply insist on their constitutional right to bear arms, regardless of the dangers involved. This pits two (putative) goods against one another: our right to own firearms, and our safety. There are a few things to note here.

Not everyone would agree the right to bear arms is a good.  But let’s leave that issue aside for a moment, and note that the two goods in question are of a fundamentally different kind. One is a good of the individual (rights), whereas the other is a good of the community (public health).

Moreover, another line of argument claims that making it more difficult to purchase a firearm will actually make people less safe. According to the logic of this argument, we would be better off having more guns on the streets rather than fewer. Thus, for example, Wayne LaPierre famously issued an incendiary call to arm our schools' teachers in the wake of the Sandy Hook tragedy.

As a rule, pro-gun advocates tend not to place much trust in the federal government's ability to remove firearms from circulation. Criminals, they claim, will always have access to illegal guns. If that's correct, the main consequence of added regulation would be to keep firearms out of the hands of responsible, law-abiding citizens. Rather than trying to limit the number of guns, the argument therefore goes, we should be doing the opposite, allowing "good" guns to crowd out the "bad."

But this argument only really gains purchase on the strength of our paranoia. Its fundamental premise is that the structure of our society as a collective has already, irredeemably broken down.

As public health advocates never tire of pointing out, there is plenty of evidence that we would all be better off, much better off, if there were fewer guns in circulation. The mere act of purchasing a firearm already increases your likelihood of suffering a gun-related injury.

Still, the opposing viewpoint does make some sense, if only on the margin.  We might indeed be better off, as a collective, if we reduced the number of firearms on the street. At the same time, I might still be better off, as an individual, by refusing to give up my weapon.

The logic here is roughly the same as the reasoning that compels some people to avoid vaccination. If enough people around me vaccinate themselves against a disease, the likelihood that I will be hurt by the vaccine eventually exceeds my likelihood of actually contracting that disease. But of course, the same reasoning applies to everyone else, so they too should forgo vaccination. In that case, what we are left with is precisely the kind of society in which it starts to make sense for us all to arm ourselves to the teeth.
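The threshold at work here can be made concrete with a toy calculation. The sketch below is purely illustrative: the risk numbers and the linear herd-immunity assumption are hypothetical, chosen only to show how a purely self-interested choice flips once enough other people have already cooperated.

```python
# Toy model of the vaccination free-rider dynamic described above.
# All numbers are hypothetical and chosen only to illustrate the threshold logic.

def infection_risk(coverage, base_risk=0.05):
    """An individual's risk of catching the disease, assumed (for
    illustration) to fall linearly as vaccination coverage rises."""
    return base_risk * (1 - coverage)

def should_vaccinate(coverage, vaccine_risk=0.001):
    """The self-interested choice: vaccinate only while the disease
    risk still exceeds the small, fixed risk of the vaccine itself."""
    return infection_risk(coverage) > vaccine_risk

# At low coverage, vaccinating is clearly in my own interest...
assert should_vaccinate(0.0)
# ...but at very high coverage, free-riding looks better to each
# individual, even though universal free-riding makes everyone worse off.
assert not should_vaccinate(0.99)
```

The same structure fits the gun case: each individual's incentive to keep a weapon can survive even when everyone agrees the collective would be safer with fewer of them.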

If that’s right, then what we are witnessing today is far more than the breakdown of America’s political system. It is rather more like the breakdown of our ability to come together and form a meaningful collective.

Amblystoma cells, some of which are undergoing mitosis, from EB Wilson’s The Cell in Development and Inheritance, 1903

Academic Publishing, the AHA, and the Ratchet Effect

On Monday, the American Historical Association published an official statement urging graduate programs and university libraries to "adopt a policy that allows the embargoing of completed history PhD dissertations in digital form for as many as six years." The statement goes on to note that "History has been and remains a book-based discipline." However, the increasingly common practice of requiring that completed dissertations be posted freely online may make it more difficult for recent graduates to secure a publisher. This, in turn, could make it much more difficult for young scholars to earn tenure.

As the comments section that follows the AHA’s online publication of its statement against online publishing indicates, this strikes many as a backwards-looking strategy. As I have argued myself in a previous post on this blog, scholarly publishing is clearly moving online. And as it does so, the nature of how we consume, share, and disseminate knowledge is certain to change. So why not embrace this trend rather than desperately try to hold on to an outdated, 19th-century version of print culture?


The answer, of course, is that although many of us are eager to publish our work freely online, it seems wrong to endanger the tenure prospects of a whole generation of scholars whose only crime was to have finished their PhDs during a time of transition and upheaval. It is laudable for the profession to embrace change. But we should not expect its most vulnerable members to be in the vanguard, leading the charge into an uncertain future.

But does that mean the profession can't embrace change? Couldn't the change we all seek come at the level of hiring and tenure committees instead? Answering these questions is far from straightforward, and it requires a small detour through what might be called the "ratchet effect."

I first heard the term "ratchet effect" in conversation with the philosopher Peter Godfrey-Smith, who described it as one among many potential mechanisms that drive cultural evolution. The ratchet effect takes hold any time cultural change is biased to drift in one direction rather than another. Take, for example, the case of airport security:

On a recent flight from Barcelona to Boston, I was surprised to find passports being checked at the gate of my connection in Zürich, even though the Swiss border control had already inspected my documents when I entered the international terminal. The second check added considerably to the time it took us to board, and to me it seemed ridiculously redundant. But there is nothing in the least bit surprising about it. In the wake of September 11th, there was a huge push to tighten the security around American airspace, and a few minutes of extra wait time seemed like a negligible sacrifice to make.

Of course, a long time has passed without a similar incident of in-flight terrorism, so for most of us the cost-benefit analysis may have changed. But who is going to spearhead the movement to loosen airline security? After all, doing so would mean incurring the risk of being blamed if another disaster did occur in the future. Hence, airline security is subject to the ratchet effect: it is much easier to tighten security than to loosen it, giving us something to think about when we are stuck in what seems like an interminable queue.

Although its outcome is often annoying, the ratchet effect operates all around us, influencing everything from the evolution of the Republican party to the career trajectories of young historians.

At the same time that we have witnessed an upheaval in print culture, historians have also engaged in much hand-wringing about two interrelated and lamentable trends. It is taking PhD students longer and longer to earn their degrees, and they are also having a harder and harder time finding gainful employment. The relationship between these two trends is no less disturbing for being obvious: because jobs are harder to find, it makes sense for people to linger longer in their PhD programs. By taking an extra couple of years to write their dissertations, they not only increase the amount of time they can spend on the market. They are also able to write better and more polished theses, giving themselves a leg up once they actually graduate.

The problem, of course, is that we are all playing the same game. Thus, we are caught up in a ratchet effect. As people spend longer writing their dissertations and produce more polished theses, the basic requirements for securing a tenure-track job go up for the whole profession. For all practical purposes, it is simply no longer possible to land a permanent position with the kind of CV that was perfectly standard a generation ago. Rather than a completed dissertation and good letters of recommendation, you now need one or two published articles and a thesis that is well on its way to becoming a book manuscript. Indeed, as more and more people also spend several years as a post-doc, it is not at all uncommon for recent hires to have a book contract in hand by the time they start their first permanent job. Sometimes, the book has already been published. This is, as they say, the new normal.

I read the AHA’s position on the online publication of PhD theses as a good-faith reaction to the ratcheting up of publication requirements for young scholars. But wouldn’t it be better to try and bring things down a few notches instead?

What I’m about to suggest is pretty draconian, so let me preface this by saying that I mainly put it out there as a contribution to a vitally important conversation.

What if we could use the move to online publishing as an opportunity to address the time-to-degree problem head-on? One way to do so would be to move to a more UK-style model, in which students are expected to write their PhD theses in 2-3 years (after having completed the relevant coursework, which in the US would result in roughly 5-year PhD programs). This would mean lowering expectations on PhD theses somewhat. Rather than a polished first draft of the book manuscript, the thesis would be an academic exercise, freely available on the internet, meant to *prepare* students for the task of writing a book rather than being a version of that book itself.

One virtue of such a move comes from the fact that the stagnant job market in the humanities is unlikely to change, meaning that many qualified people will fail to find a permanent teaching position. Although my proposal would not change that, at least it would mean that most recent PhDs would be about 25-30 years old. My sense is that it is easier, and preferable, to make the difficult choice of leaving the profession at 30 rather than five to ten years down the line.

Another virtue is that it would take some of the pressure off the writing of the PhD itself. It strikes me as foolish to expect people to write a polished book manuscript on their first try. Better to learn your craft in the context of a long-form exercise in which you can experiment and make mistakes. Then, after you have defended, you can decide if you want to have another go at the same topic (this time knowing what you wished you had known the first time around), or you can choose to go with something new (this time knowing much more about how to pick a topic and design an argument).

Although others, including Louis Menand, have proposed similar measures, there are significant drawbacks to going this route.

One major problem with my suggestion about reducing time to degrees is that it does not go far enough to solve the problem of the ratchet effect. Because there are so many more talented historians with a PhD than there are permanent teaching positions, hiring committees would still be free to choose from a pool of remarkably accomplished applicants. That is, even if we suddenly forced students to complete the PhD program in five years, what’s to stop them from spending several years writing articles and polishing their thesis after they graduate? One thing I certainly do not want to do is advocate that the humanities go the way of the sciences, in which it has become standard to spend 5-10 years on the post-doc circuit building up a publication record before entering the tenure track.

Because of the ratchet effect, my proposal would only succeed if senior scholars committed to hiring recent graduates preferentially. And this is where things get really draconian, because doing that would mean telling huge numbers of talented and deserving people who have been on the market for a number of years that all of a sudden they are out of the running for permanent positions. That's a pretty bitter pill to swallow. So bitter, I think, that the AHA's backwards-looking position on online publishing starts to make a lot of sense.

Spies, Whistleblowers, and the Federal Shield Law

Julian Assange: Tinker, Tailor, Newsman, Spy?

The John-le-Carré-esque saga of Edward Snowden's run from the United States government has sparked an interesting conversation about how to distinguish whistle-blowing from espionage. The fact that Snowden has been charged under the Espionage Act of 1917 certainly ought to give us pause. After all, this is a law that was originally passed during the First World War, one that was used, among other things, to silence pacifists and other opponents of American intervention as well as political dissidents in the ensuing Red Scare of the 1920s. No doubt, then, an argument can be made that just as one person's freedom fighter is another's terrorist, so too can a whistleblower be reclassified as a spy, depending on which side of a political argument you happen to find yourself on.


Historians of science and STS scholars have thought a lot about the important work that all manner of classification can do. From Foucault's early archeology of the human sciences to Hacking's foray into historical ontology and Bowker and Star's book Sorting Things Out, we know that how we taxonomize or carve up the world has far-reaching implications for our epistemic, moral, political, and indeed personal engagement with it. So it should come as no surprise that I'm an advocate of taking a second look at how our government goes about classifying its citizens as well as foreign nationals for the purpose of fighting an ill-defined but global war on terror.

Rather than address the question of Snowden’s disputed status as a whistle-blower head-on, though, I wanted to tack in a slightly different direction and ask how our government classifies journalists.

As I suspect many of you know, the Obama administration has been especially enthusiastic in its use of the Espionage Act to prosecute leakers and whistleblowers. Snowden and Pfc. Bradley Manning are only the most well-publicized cases, and there have been several others, including Thomas Drake, John Kiriakou, and Stephen Jin-Woo Kim.

Stephen Kim presents an especially interesting case. A senior national security analyst at Lawrence Livermore National Laboratory, Kim was charged with espionage for allegedly disclosing North Korea's plans to test a nuclear bomb to the Fox News reporter James Rosen. As a result of Rosen's reporting, the US Department of Justice began monitoring his activities. Eventually, the DOJ even named Rosen as Kim's "criminal co-conspirator" to gain access to his personal email and phone records.

Only a few days before we learned the Obama Administration was eavesdropping on Rosen, the Guardian reported that phone records of twenty AP reporters had been seized by the Justice Department during 2012.

In its zeal to plug leaks, then, the Obama administration is not content to go after the leakers themselves. We now know of at least two cases in which they have also gone after journalists with whom the leakers communicated. These actions pose a serious threat to the journalistic profession’s ability to execute its traditional watchdog function, providing the oversight that is necessary for citizens to make informed choices in a democratic society. As I have argued elsewhere in this blog, the only workable solution to the secrecy paradox (we want our government to keep certain things secret, but we also recognize that voters cannot make informed decisions without knowing what the government is doing in their name) is to safeguard the potential for leaks.

It goes without saying that the journalistic profession's traditional status as a "fourth estate" is recognized even within the government's ranks. This is why the Obama administration's decision to go after journalists to whom information has been leaked is such an incendiary topic of discussion.

In an effort to protect journalists from government eavesdropping and allow them to maintain the anonymity of their sources, the Pennsylvania Senator Arlen Specter introduced the Free Flow of Information Act in February of 2009. As its official language explains, the purpose of this bill is to “maintain the free flow of information to the public by providing conditions for the federally compelled disclosure of information by certain persons connected with the news media.”

As is usually the case, the wording is somewhat counterintuitive here. How on earth does compelling the disclosure of information promote the free flow of information? The answer lies in the phrase “providing conditions.” The idea is that by stipulating exactly under which circumstances journalists can be compelled to turn over information about their anonymous sources, the act protects them in all other circumstances. As always, then, the devil is in the details.

(There is an interesting parallel here to the Freedom of Information Act, which is really a secrecy act. See this post for more on that argument.)

I should note that the Free Flow of Information Act, which is often referred to as a Federal Shield Law, has not yet been signed into law. Still, it is worth a slightly closer look for what it tells us about what our public officials think it means to be a journalist.

The proposed Shield Law states that unless certain well-defined conditions are met, “a Federal entity may not compel a covered person to comply with a subpoena, court order, or other compulsory legal process seeking to compel the disclosure of protected information.” This is to say that except under certain specified and extraordinary circumstances, the government cannot force journalists to turn over information about anonymous sources.

However, there is a major sticking point in the proposed bill; indeed, it is one reason this bill has not yet been signed into law. This is the question of who qualifies as a “covered person,” i.e., to whom this law will apply. To take an extreme case, imagine a genuine Russian spy (again of the John-le-Carré variety) who has managed to infiltrate the State Department or some other important government or military agency. Now imagine they obtain some piece of information that is vital to the United States’ national security. How is the spy going to pass this information on to the Russians? What you do not want is for the law to create a situation in which some compatriot of the spy could simply create a public website onto which the spy could upload sensitive information for everyone (including the Russians) to see. You would not want to give the KGB the ability to claim that it has an “investigative reporting” arm which is a legitimate journalistic endeavor and therefore legally exempt from US Government oversight.

That’s obviously a far-fetched idea, but the point remains: any Federal Shield Law will require a taxonomy to distinguish legitimate journalists (“covered persons”) from illegitimate usurpers and impostors. How does the law do this?

Interestingly enough, much of the debate over this bill has centered on exactly this question: who is and who is not going to be a covered person? The original draft introduced by Sen. Specter defined “covered persons” as “a person who is engaged in journalism” where the latter just means “the regular gathering, preparing, collecting, photographing, recording, writing, editing, reporting, or publishing of news or information that concerns local, national, or international events or other matters of public interest for dissemination to the public.”

That’s obviously a very broad definition, one that would allow almost anyone to qualify as a journalist. It is thus no surprise that as the bill was debated, the definition of “covered person” became increasingly restrictive. For example, an early amendment to the bill re-defined a “covered person” as anyone whose “primary intent” is to gather and disseminate news and information of public interest and who “has such intent at the inception of the newsgathering process.”

But as my English professors in college never tired of pointing out, it is very hard to peer into someone’s mind and discern their “primary intent.” Hence, the bill’s language has continued to evolve. For example, once the Senate bill made its way into the House of Representatives, the Committee on the Judiciary released a report showing that further restrictions had been placed on what it means to be a journalist. Now a covered person was anyone “who, for a substantial portion of the person’s livelihood, or for substantial financial gain, is regularly engaged in journalism.” That is, if the House Judiciary Committee has its way, the proposed Federal Shield Law would only apply to professional journalists, thus excluding volunteers, many freelancers, and most bloggers.

So what does that make someone like Julian Assange? Is he a journalist or is he a spy? It is a strange but perhaps not altogether surprising irony that according to the 2009 House Judiciary Committee, the answer to that question would depend on how Assange pays his bills!

Myriad Genetics Patent Struck Down!

As I’m sure most of you have heard, the US Supreme Court issued its ruling on the Myriad Genetics case today. There were no real surprises to speak of in the decision, as the court ruled exactly along the lines the executive branch asked it to. In an amicus curiae brief, lawyers for the US Department of Justice argued that whereas DNA sequences ought not to be eligible for patent protection, modified or so-called “complementary” DNA does not qualify as a product of nature and is therefore patentable. The Supreme Court’s ruling, authored by Justice Thomas, toed exactly this line.

We’ve covered this case previously on this blog (here, here and here) so I won’t go into all of the details now.  But there are a couple of things worth pointing out.

Myriad’s argument that gene sequences are patent eligible because the act of isolating DNA turns a product of nature into an invention is a stretch, to say the least. Still, there was widespread concern that invalidating Myriad’s patents would strike a serious blow to the biotechnology industry, with major repercussions for America’s post-industrial economy. This helps to explain why the court went out of its way to emphasize the distinction between gDNA and cDNA (the latter is basically regular DNA with all the introns cut out). As Eric Lander, the director of the Broad Institute of MIT and Harvard, argued in his own friend-of-the-court brief, the decision to uphold patents on cDNA helps to ensure the future profitability of biotech.

Second, my earlier prediction–that the court’s ruling would hinge on the ontological question of whether genes are physical objects or informational entities–turned out to be right. Writing for the court, Clarence Thomas insists that Myriad’s patent claims “focus on the genetic information encoded in the BRCA1 and BRCA2 genes,” not their “chemical composition.” Hence, the fact that isolating these genes for sequencing required breaking covalent bonds in the DNA molecule is of no legal significance.

In the end, though, there were no real surprises in the court’s legal reasoning. Still, there was at least one major surprise (to me anyway): the fact that the court issued a unanimous ruling today! I had predicted the court would strike down the patentability of gDNA but uphold the patentability of cDNA, but I had no idea which justices would vote which way. And I never would have guessed they would all agree on this issue. (Especially given the many differences of opinion on the circuit and appeals courts.)

Finally, and somewhat ironically, perhaps the most interesting part of the court’s decision turned out to be Justice Scalia’s concurring opinion, which I’ll reproduce here in full.

Is Scalia just being his usual snarky self, insisting that he have the last word? Or is he making a much more substantive argument about how Supreme Court justices ought to approach legal questions whose resolution requires taking a stand on technical issues that hinge on extra-legal expertise? Or, is Scalia actually saying that he’s unwilling to legislate ontology from the bench?

The Curious History of the Paleo-Diet, and its Relationship to Science & Modernity

Joseph Knowles emerging from the woods in his “Wilderness Garb,” Oct. 4th, 1913

Over the past few years, I’ve been following the career of a new fad called the “paleo-diet,” which advises us to adopt the eating habits of the Pleistocene. I first became aware of it from a New York Times article featuring John Durant, a 20-something office worker turned fitness guru from Manhattan who tries to live as our ancestors did before the dawn of agriculture. On his website, Durant explains that when he started working at his first job out of college, he began to notice that he often felt tired, anxious, and stressed out. He also started to put on weight and noticed that his complexion was becoming uneven.

On the lookout for an explanation for what might be going on with his body, Durant came across the UC Irvine economist Art de Vany, who had developed a so-called evolutionary fitness regimen. Durant decided to give it a try, and began to eat a diet that is high in fat and protein, as well as fresh fruits and vegetables, but completely avoids grains and all processed foods. Moreover, Durant began to fast for long periods in between meals to simulate the lean times that hunter-gatherers often had to endure. Indeed, some advocates of the paleo-diet even go so far as to engage in strenuous exercise before breaking a fast, reasoning that early hominids had to hunt down their prey before consuming a large dose of protein.

There’s been a lot of chatter about the relative merits and shortcomings of the paleo-diet recently (including an advice column at the Huffington Post and a hilarious review of Marlene Zuk’s book Paleofantasy: What Evolution Really Tells Us About Sex, Diet and How We Live on Salon). I’m not going to evaluate any of the substantive claims made either for or against this lifestyle. Instead, I want to give these discussions a bit of historical context from the late 19th and early 20th centuries (see the image above!).

Most people who have written about the paleo-diet cite a 1985 article in the New England Journal of Medicine entitled “Paleolithic Nutrition — A Consideration of Its Nature and Current Implications” as the point of origin for the fad. In what follows, I’ll try to push the narrative considerably further back in time. But the NEJM article is worth taking seriously because it makes an important point about not only this fad diet, but indeed every fad diet: they all claim to be grounded in science. What is distinctive about the paleo-diet is that it draws on an unusual branch of science, namely evolutionary theory.

On his website, Art de Vany claims that our evolutionary history did not prepare humans for a modern lifestyle. To see why one might think this, it is worth taking a detour and listening to an excellent TED Talk that Daniel Dennett gave several years ago. In his talk, Dennett used a piece of chocolate cake to explain Darwin’s curious form of “reverse reasoning.” It’s not true that we like the chocolate cake because it is sweet, Dennett explains. Rather, it is sweet because we like it.

There is nothing about cake that is inherently sweet. You can stare at a sugar molecule for as long as you want, and you will never understand why it tastes sweet. To understand that, you have to know something about how our brains are wired. And this wiring, Dennett explains, is a product of evolution. Our brains evolved to give us a psychological reward–the taste of sweetness–whenever we eat something that contains sugar, which, of course, is rich in calories. Something similar holds true for fat, salt, and a number of other foodstuffs.

The claim made by proponents of the paleo-diet is that this was a good thing during the Pleistocene, because humans did not have access to a lot of calorie-rich foods. To survive and have offspring, you had to consume all the calories available. But in today’s world of industrial agriculture and high-fructose corn syrup, that is no longer the case. Differently put: there was no such thing as chocolate cake during the Pleistocene. Probably the sweetest thing anyone would have eaten at that time was a carrot. The chocolate cake is what the ethologist Niko Tinbergen called a super-normal stimulus — what my own behavioral ecology teacher called “the Dolly Parton effect”–something that is way off the scale of what our bodies have evolved to cope with.

Now, advising people to avoid or at least moderate the consumption of processed foods that are high in salt, fat, and sugar is not in the least bit controversial. I am willing to bet that any conventional nutritionist would be on board with the idea that just because something tastes good does not mean it is good for you, and that we should be careful about simply giving in to all of our cravings. But proponents of the paleo-diet want to go several steps further. Beyond advocating that we avoid foods packed with super-normal stimuli, they also counsel us to avoid dairy, grains, and cereals; indeed, anything that was unavailable prior to the development of agriculture. In so doing, they add an extra ingredient to the evolutionary reverse argument, namely an aversion to modernity.

To see why this is the case, it is useful to extend our historical vision beyond modern-day evolutionists such as Dennett and recent proponents of the paleo-diet like Durant and de Vany. In particular, I want to use the example of Joseph Knowles (pictured above) to show that the paleo-diet is rooted in a much older tradition of what constitutes healthy living.

Joseph Knowles was an artist and illustrator who became famous almost overnight for what he described as an “experiment” that consisted of trying to survive for two months alone in the Maine wilderness. His fifteen minutes of fame began when reporters from the Boston Post photographed him gingerly disrobing, discarding his knife and other accoutrements of modern life, demonstrating his ability to make fire by rubbing pieces of wood against one another, and entering the woods, all on the morning of August 10, 1913.

Joseph Knowles demonstrating his wilderness survival skills just before heading off into the forest, August 10th, 1913.

During the two months he allegedly spent in the wilderness, Knowles periodically sent updates about his adventures to the Post, written in charcoal on a piece of tree bark. Among other things, he recounted spending the first few days subsisting on berries before learning how to fish for trout and hunt partridge and deer. He also wove strips of tree bark together to create a kind of textile that he could fashion into clothing and shoes. Then, on August 24th, about two weeks after he entered the forest, a front page story in the Post described how Knowles had successfully killed a bear using nothing but his wits and a club.

When he emerged from the wilderness wearing the bearskin on October 4th, Knowles received a hero’s welcome. He was cheered at every stop along the way from Maine down to Boston, and huge crowds gathered to see him arrive at North Station before he gave a rousing speech about his experiences on the Boston Common. In the months that followed, Knowles wrote a best-selling book about his adventures entitled Alone in the Wilderness and received top billing on the Vaudeville circuit.

There’s lots to be said about Joseph Knowles, including the fact that a rival newspaper published evidence to the effect that he had spent most of his time in the “wilderness” drinking beer in a friend’s cabin. But I want to focus on one piece of the story in particular. One of the first things Knowles did after arriving in Boston was to pay a visit to Dudley Allen Sargent, the Director of the Hemenway Gymnasium at Harvard University.

Dudley Sargent examines Joseph Knowles at Harvard’s Hemenway Gymnasium.

In his autobiographical account of the saga, Knowles quoted Sargent as attesting to the fact that his time in the wilderness had left him in better shape than any of the college’s “football men,” reporting, among other things, that “With his legs alone he lifted more than a thousand pounds.” Sargent also noted a remarkable improvement in Knowles’ complexion: “Subjected to the action and the stimulus of the elements, Mr. Knowles’ skin has [come to serve] him as an overcoat, because it is so healthful that its pores close and shield him from drafts and sudden chills.” Thus, Sargent declared the “experiment” a complete success. “Forced to eat roots and bark at times, and to get whatever he could eat at irregular hours, his digestion is perfect, his health superb.”

Along with this testimonial, Knowles also included a chart comparing some of his vital statistics from before and after the time that he spent in the wilderness. Not only had he lost more than ten pounds, but, remarkably, he had grown slightly taller as well. Moreover, his muscles had all increased in size and girth, and his lung capacity shot up from 245 cubic inches to an astonishing 290 cubic inches!

Joseph Knowles’ vital statistics before and after the wilderness “experiment.”

As historians of science and environmental historians well know, Joseph Knowles was part of a larger cultural movement that Roderick Nash’s classic account describes as a kind of “wilderness cult.” Other notable examples of this movement’s popularity include the founding of the Boone and Crockett Club in 1887, the Sierra Club in 1892, and the Boy Scouts of America in 1910, as well as Theodore Roosevelt’s fierce advocacy on behalf of wilderness preserves such as Yellowstone National Park as a place in which white, urban elites could experience what he called the “strenuous life.”

It is no surprise that the wilderness cult took off when it did. At a time in which America was becoming increasingly urban, industrial, and ethnically diverse, many worried that rather than heading for increasing prosperity, the country was inevitably on the decline. Thus, it seemed natural to harken back to a simpler and more authentic past, one in which people’s communion with nature left them healthier in body, mind, and soul. It was, after all, during this period that the historian Frederick Jackson Turner used a podium at the 1893 Chicago World’s Fair–a celebration devoted to industrial progress in a city that did more than any other to conquer the west–as a platform from which to mourn the official closing of the nation’s western frontier. And it was also during this period that Madison Grant, director of the Bronx Zoo and Trustee of the American Museum of Natural History, published his eugenic masterpiece, The Passing of the Great Race. Envisioning a dark future indeed, Grant counseled his readers to eschew the comforts and luxuries of modern civilization and allow the Darwinian struggle to continue tending the health of the gene pool.

Few things sum up these sentiments as well as the first edition of Ernest Seton’s Handbook for the Boy Scouts of America. “We have lived to see an unfortunate change,” he lamented on the very first page of the Handbook. “Partly through the growth of immense cities,” and “[p]artly through the decay of small farming,” he continued, America entered a period that Seton and so many others described using the word “Degeneracy.”  Thus, it was to “combat a system that has turned such a large proportion of our robust, manly, self-reliant boyhood into a lot of flat-chested cigarette smokers, with shaky nerves and doubtful vitality” that he brought scouting to America. Mindful of the fact that “Consumption” had become “the white man’s plague,” he concluded, “I should like to lead this whole nation into the way of living outdoors for at least a month each year.”

In closing, let me forestall a possible misinterpretation. Of course I do not mean to imply that Durant and other advocates of the paleo-diet are all eugenicists at heart. That is certainly not the lesson I hope people take away from the history that I have tried to present. But I do think that a few striking and salient parallels present themselves.

Perhaps it is a cliche to say that we are living through a time of enormous change, just as people during the American Gilded Age and Progressive Era did, but that does not make it any less true. One thing I would like to suggest we are seeing, not only in the paleo-diet but certainly there as well, is a kind of aversion to modernity. People today, just as a hundred years ago, are looking to the past in search of a simpler, more authentic, and, importantly, more healthful way to live.

But what is so curious about all of this is that so many of these people–from Joseph Knowles to Art de Vany–are also looking to science, a quintessentially modern institution if there ever was one, both for advice on how to get there and for the authority to argue that an earlier period in human history really was healthier and better adapted to our physical, spiritual, and emotional needs.

The Ontology of the Patent Law, Part II

Illustration of “native” DNA in the human cell, from the majority opinion in Ass. for Mol. Path. v. Myriad Genetics, United States Court of Appeals for the Federal Circuit

A few weeks ago, I wrote a post about a case the US Supreme Court will hear on April 15th concerning whether genes can be patented. As we get closer to that date, I want to pick up the thread where it was left off.


As a quick reminder, the case before the court now concerns the validity of a patent that was granted to Myriad Genetics on a pair of genes (BRCA 1/2) whose presence has been shown to confer an increased risk of developing breast cancer. Here, I want to examine how this case turns on a difficult ontological question, namely: what kind of things are genes?

A number of people who support Myriad’s patent argue that human genes ought to be understood as molecules like any other. They are material objects, nothing more and nothing less.

Others, including the co-discoverer of DNA’s molecular structure, Jim Watson, have urged the court to endorse a divergent vision. In a friend of the court brief, Watson argues that although genes are indeed a chemical molecule, they are also something more.

According to Watson, a gene is primarily an informational object. “It is a chemical entity,” he writes, “but DNA’s importance flows from its ability to encode and transmit the instructions for creating humans.” Watson goes on to cite some of the terminology commonly used in molecular biology, such as “transcription” and “translation” as evidence for this claim. He then makes the following, totally fascinating, statement:

“The myopic viewpoint thinks of a human gene as merely another chemical compound, composed of various bases and sugars. But history and science teach us otherwise. … The human genome’s ability to be our instruction book on life distinguishes it from other chemicals covered by the patent laws. No other molecule carries the information to instruct a human zygote to become a boy or a girl, a blonde or brunette, an Asian, African, or Caucasian.”

The reason this distinction between genes-as-molecules versus genes-as-information matters so much is that it speaks directly to the question of whether genes are patentable. According to United States patent law, any “new and useful process, machine, manufacture, or composition of matter” can be patented. That language is extremely broad, and it is designed to encourage innovation. But there is also an important exception, which states that a product of nature is not patentable. So the Myriad Genetics case crucially turns on whether the BRCA 1/2 genes are a product of nature.

In an earlier decision in favor of Myriad Genetics, the US Federal Circuit Court of Appeals argued that isolated genes do not occur in nature. As the majority opinion pointed out, “DNA in the cell … is packaged into twenty-three pairs of chromosomes.” (See the figure above.) That is, the genes on which Myriad Genetics holds a patent are always part of a larger assemblage. But Myriad Genetics did not seek patent protection over whole chromosomes. They only applied for a patent on a section of DNA that had been isolated and purified. As the court’s ruling noted, Myriad “cleaved” the BRCA 1/2 genes “from their native chemical combination with other genetic materials.” This rendered them a human invention, for “an isolated DNA molecule is not a purified form of a natural material, but a distinct chemical entity that is obtained by human intervention.”

(It is worth pointing out that in another friend of the court brief, Eric Lander argues that the Appeals Court’s decision rested on a factually inaccurate assumption. In fact, DNA in the human body is constantly broken and repaired. So much so that it is statistically certain that isolated versions of both the BRCA 1 and 2 genes have appeared in nature.)

Watson’s claim that genes are primarily informational objects throws a wrench in the Appeals Court’s reasoning. It also echoes the argument made by Judge Robert Sweet of the United States District Court of New York in the first hearing of this case. In his ruling to strike down Myriad’s patent, Sweet wrote that although certain chemical differences may distinguish DNA in the human body from DNA that has been isolated and purified in the lab, those differences are irrelevant to the case at hand. That’s because chemical differences alone are not enough: isolated DNA would have to be “markedly different” from the DNA sequences routinely found in nature to qualify as a genuine invention.

But what constitutes a marked difference? Answering this question is tricky and, according to Judge Sweet, requires taking the nature of the object in question into account. In fact, although Judge Sweet did not use the word himself, we might say that it requires taking the essence of the object into account. The question before the court, then, is whether purifying a stretch of DNA by isolating it from the rest of the genome changes its essential nature somehow.

Why would this be?

To see why this is the case, Federal Circuit Court of Appeals Judge Bryson asks us to imagine a baseball bat that has been fashioned out of an ash tree. There is a real sense in which the bat is just a “purification” of the tree because you can fashion a bat simply by taking away the wood around it. The bat has been “extracted” from the ash tree much like the BRCA 1/2 genes have been extracted from the genome. But in fashioning a tree into a bat we have changed its function and thus completely changed its nature. “The result of the process of selection is a product with a function that is entirely different from that of the raw material from which it was obtained.” The same is not true for the BRCA 1/2 genes.

In fact, exactly the opposite is true! The reason that Myriad patented the BRCA 1/2 genes is that they serve as a diagnostic tool. But for them to succeed on this score, it is crucial that the isolated sequences retain their homology to those regions of the genome that confer an increased risk of developing breast cancer. To quote from Bryson’s dissenting opinion again: “Biochemists extract the target genes along lines defined by nature so as to preserve the structure and function that the gene possessed in its natural environment.” For this reason, the process “does not result in the creation of a human invention.”

Let me just close with a couple quick observations. First, much like the case of Diamond v. Chakrabarty that I discussed in my previous post, this case again forces the court to wade into the deep waters of ontological deliberation. As you’ll recall, the Diamond v. Chakrabarty decision saw the court privilege one level of biological organization (whole organisms) over another (circular pieces of DNA) in deciding whether something is an invention or “nature’s handiwork.” This is surprising, and it links up with a controversy within biology about the levels at which evolution operates (usually referred to as the units of selection debate).

Now the court is again being asked to make an ontological decision. But this time, it’s not about whether we should privilege one level of biological organization. Rather, it’s about whether genes are just a chemical molecule or if they are something more; namely, an informational entity.

Of course, historians of science have been thinking about this question for some time. For example, Lily Kay’s book Who Wrote the Book of Life argues that molecular biologists during the 1950s and 60s adopted the DNA-as-code metaphor because many of them had a background in physics and mathematics and because research on computers and information-processing was taking off at the time. Philosophers of biology, too, have debated the utility of thinking about genes in this way. (Here is an excellent review by Peter Godfrey-Smith.)

Despite all the debates, almost every historian and philosopher agrees that when biologists like Watson talk about genes as informational objects they are speaking metaphorically. DNA is not really a set of instructions or a codebook, but it might be heuristically useful to think of it in that way.

The question for most historians and philosophers of science, then, is not whether genes are informational entities, but whether the metaphor has been a useful and productive one. There is a deep irony in the fact that we are about to see the United States Supreme Court grapple with exactly this question, but that it will be doing so in a very literal way.

The Ontology of Patent Law, Part I

On April 15 of 2013, the Supreme Court of the United States will hear a case challenging the practice of patenting DNA sequences, including human genes. With the forbidding title of Association for Molecular Pathology v. Myriad Genetics, this case is all but certain to have a huge impact on the history of biotechnology, the patent law, and interactions between science and capitalism more broadly.

Today, I am posting the first of a two-part piece on the case, with some thoughts on patenting living things and parts thereof.

The case currently before the US Supreme Court concerns a biotech company called Myriad Genetics. During the mid 1990s, Myriad successfully filed for a patent on two genes (BRCA1 and BRCA2) that dramatically increase a woman’s risk of developing breast cancer. Having sequenced both genes, Myriad Genetics developed a diagnostic test, which it currently markets for several thousand dollars. It is worth emphasizing that Myriad’s patent covers the genes themselves, not just the diagnostic procedure. In agreeing to hear the case, the Supreme Court explicitly signaled its willingness to address the question “Are human genes patentable?”

(For more, see the Petition for a Writ of Certiorari. You can also read some commentary as well as download friend of the court briefs here.)

Rather than discuss the case in all its particulars, I’ll focus on what I take to be one of its more interesting dimensions: the extent to which challenges to what are called composition-of-matter patents can force the court to wade into the deep waters of ontological deliberation. At the risk of stating the obvious, I’ll remind everyone that ontology is a branch of metaphysics that studies the nature of being. Ontology is about what there is. In contrast, epistemology concerns how we come to know things.

According to Title 35, Paragraph 101 of the United States Patent Code, any “new and useful process, machine, manufacture, or composition of matter” may be subject to patent protection. However, there is a well-known and longstanding exception to this extremely broad formulation: the so-called product of nature doctrine. It holds that naturally occurring entities such as physical laws or minerals cannot be subject to patent protection. As such, the court has often found itself in the position of having to decide what is and is not a genuine product of nature. In so doing, it has had to specify where nature ends and culture begins.

Patent law is often seen as a kind of bargain between society and individuals. The state agrees to give a monopoly over a new and useful invention in exchange for its disclosure or publication. The granting of a monopoly over an intangible good is not to be taken lightly because it hinders other people’s access to it. But the practice is usually seen as justified by the fact that doing so not only discourages the keeping of trade secrets, it also incentivizes discovery and thus acts as a spur to technological progress.

If the patent law represents a kind of bargain or balancing act between the interests of individuals and society, it makes sense to think carefully about where to draw the line between patentable and non-patentable subject matter. In particular, I think most people would agree that the law would no longer be fulfilling its proper function if it allowed someone to privatize whole swaths of the natural world simply by describing them and thus claiming an ownership right. It can’t be right that whoever discovers coal’s ability to release thermal energy when lit on fire therefore has a right to stop anyone else from digging up and burning hydrocarbons. (Although the latter might not be such a terrible turn of events in this day and age!)

The product of nature doctrine goes back to ex parte Latimer from 1889, in which the United States Patent Office ruled that a new fiber produced from pine needles was not patentable subject matter. Its most canonical expression, however, was articulated by William O. Douglas, Associate Justice of the United States Supreme Court. Writing the majority opinion in Funk Brothers Seed Co. v. Kalo Inoculant Co. (1948), Douglas declared that a mixture of naturally occurring bacteria was not patentable because these, “like the heat of the sun, electricity, or the qualities of metals, are part of the storehouse of knowledge of all men. They are manifestations of laws of nature, free to all men and reserved exclusively to none.”

Justice Douglas’ opinion continues to influence legal arguments about the proper scope and interpretation of 35 USC § 101. Part of its remarkable staying power is due to the fact that Chief Justice Warren Burger relied on it extensively for his 1980 decision in the case of Diamond v. Chakrabarty. As I’m sure most of you are aware, this was the case in which the United States Supreme Court ruled that genetically modified organisms were eligible for patent protection because they are products of human ingenuity rather than nature.

Chief Justice Burger’s argument in Diamond v. Chakrabarty explicitly contrasted the latter’s invention to the one under dispute in Funk Brothers v. Kalo Inoculant. To get Burger’s reasoning straight requires a passing familiarity with the details of both cases.

The Funk Brothers case concerned a patent that had been granted on a mixture of various Rhizobia, bacteria that fix nitrogen after becoming established in the root system of plants. Farmers had long known that inoculating their plants with bacteria helped them to grow, but each plant required its own, specific strain. The patent under dispute in Funk Brothers was for a *mixture* of many bacteria that could successfully inoculate a whole range of plants, which made the mixture more widely applicable and thus more economical than what was then available on the market.

Justice Douglas struck down the patent under dispute in Funk Brothers because making a mixture of existing bacteria, he reasoned, did not qualify as a genuine invention. Rather, it represented “no more than the discovery of some of the handiwork of nature.” “No species acquires a different use,” he went on to argue. “The combination of species produces no new bacteria, no change in the six species of bacteria, and no enlargement of the range of their utility. Each species has the same effect it always had.”

Chakrabarty’s patent claim was in many respects similar to, but in others quite different from, that under dispute in Funk Brothers. Whereas the latter concerned a mixture of pre-existing bacteria, Chakrabarty claimed to have engineered a whole new organism. He did so by introducing several small pieces of naturally occurring circular DNA molecules–called plasmids–into a species of pseudomonas bacteria. It is worth emphasizing that Chakrabarty did not claim to have manufactured any new pieces of DNA. All that he did was to introduce existing DNA molecules into a new organism. In the “Summary of the Invention” section of his original patent application, Chakrabarty wrote, “Having established the existence of (and transmissibility of) plasmid-borne capabilities for [breaking down petroleum molecules into more simple chemical compounds], unique single-cell microbes have been developed containing various stable combinations of [those plasmids].”

Despite their many similarities, Chief Justice Burger went out of his way to draw a stark contrast between Chakrabarty’s patent and the one under dispute in the Funk Brothers case. In Chakrabarty’s case, he wrote, “the patentee has produced a new bacterium with markedly different characteristics from any found in nature and one having the potential for significant utility.” Because the “discovery is not nature’s handiwork, but his own,” Burger concluded, “it is patentable subject matter.”

What I find so remarkable about this case is that in trying to decide whether a genetically engineered organism is patentable subject matter, the United States Supreme Court was not just compelled to decide what is and is not a product of nature. Rather, by way of trying to answer that question, it had to address an antecedent question about the level of biological organization at which nature produces its handiwork. That is to say: different strains of bacteria remain a product of nature even when they are brought into a new mixture with one another (and thereby acquire a new efficacy), whereas a new mixture of circular DNA molecules is a product of human ingenuity.

As an exercise, you can re-read Justice Douglas’ decision above and replace each instance of the words “species” and “bacteria” with the word “plasmid.” I admit the phrasing sounds awkward, but the effect is pretty compelling nonetheless.

What is going on here? The answer, of course, is quite a lot. But let me just close with one thought and then take the issue up again in my next post, where I will examine the legal reasoning in the Myriad Genetics case itself.

The patent law is designed to encourage innovation and it does so by rewarding technological breakthroughs. I suspect that one reason we balk at the idea of patenting products of nature is that we don’t want to reward the mere act of describing something that was previously created via some other means, whether it is evolution or God or what have you. Reading Douglas’ decision, one gets the sense that he took *moral* offense at the notion that someone could receive financial rewards for doing no more than harnessing “nature’s handiwork.”

Of course, nobody thinks that discovery is an easy or straightforward process. But patent law does assume that it is fundamentally different from the act of invention. One way in which the two are kept separate is that each is governed by a different reward system. Whereas a new discovery brings with it an accrual of credit, inventions bestow a more material kind of reward.

My aim here is not to *endorse* a clear-cut distinction between invention and discovery. I am well aware that discoveries often come with financial and other kinds of material rewards. Similarly, it need hardly be pointed out that inventors are routinely given significant credit by the scientific community for the work they have done.

What I would like to suggest, however, is that one reason we are so invested in making a distinction between invention and discovery, products of nature and human ingenuity, is that doing so helps us keep alive an even more fundamental distinction, namely the one with which I began. That is the distinction between ontology and epistemology, between the nature of things in themselves and how we experience, know, or represent them. Seen in this light, it is no surprise that questions about patenting things like live organisms and human genes should elicit such strong emotional and moral reactions. After all, to give up the dream of drawing a hard line between acts of description and intervention would force us to revise a great deal more than just Title 35, Paragraph 101 of the United States Patent Code.

In the next post, I will link these issues up with more fine-grained concerns about giving ontological primacy to certain levels of biological organization and characterizing genes as informational objects.