
The Problem of the One and the Many in Gun Control

Over the past several months, I’ve become obsessed by what you might describe as the problem of the one and the many. It surfaces for any collective in which the interests of individuals diverge from those of the whole. I’m currently writing an essay about the way this problem manifested itself in debates at the turn of the 20th century about the evolution of multi-cellularity or super-organisms like insect societies. I also wrote a post on this blog recently arguing that people’s misgivings about the Affordable Care Act ought to be seen as a manifestation of the same problem. Today, on the one-year anniversary of the tragic Sandy Hook school shooting, I would like to suggest that current debates about gun control can be seen in the same light.

As you will recall, the Sandy Hook tragedy last year occasioned a renewed effort to pass more restrictive gun control laws. And although these efforts have met with some success on a state level, supporters of gun control have gotten exactly nowhere in the United States Congress. This is true despite the fact that an overwhelming majority of voters support background checks and other measures that impose some additional limits on our ability to bear arms.

The gridlock at the federal level primarily reflects the enormous power of well-funded pro-gun lobbying groups such as the National Rifle Association to exert pressure on lawmakers. So in a sense what we are witnessing is yet more evidence of the deep flaws that pervade America’s political system. Still, there is also a real and substantive debate about the right to bear arms that rewards further scrutiny. It is this debate, I would like to suggest, that exhibits an interesting structural similarity to the problem of multi-cellularity and eusociality, as well as other major transitions in evolution.

Gun control advocates are basically engaged in a public health campaign. By making it more difficult to purchase a firearm, you reduce the number of guns in circulation. This, in turn, will reduce the number of gun-related fatalities. The statistics are pretty straightforward and don’t leave much room for argument. As a whole, our society would be much better off if there were fewer guns on the streets.

Pro-gun advocates counter with any number of claims. Sometimes they simply insist on their constitutional right to bear arms, regardless of the dangers involved. This pits two (putative) goods against one another: our right to own firearms, and our safety. There are a few things to note here.

Not everyone would agree that the right to bear arms is a good. But let’s leave that issue aside for a moment, and note that the two goods in question are of a fundamentally different kind. One is a good of the individual (rights), whereas the other is a good of the community (public health).

Moreover, there is another line of argument, which claims that making it more difficult to purchase a firearm will actually make people less safe. According to the logic of this argument, we would be better off having more guns on the streets rather than fewer. Thus, for example, Wayne LaPierre famously issued an incendiary call to arm our schools’ teachers in the wake of the Sandy Hook tragedy.

As a rule, pro-gun advocates tend not to place much trust in the federal government’s ability to remove firearms from circulation. As a result, they claim, criminals will always have access to illegal guns. If that’s correct, the main consequence of added regulation would be to keep firearms out of the hands of responsible, law-abiding citizens. Rather than trying to limit the number of guns, the argument therefore goes, we should be doing the opposite, allowing “good” guns to crowd out the “bad.”

But this argument only really gains purchase on the strength of our paranoia. Its fundamental premise is that the structure of our society as a collective has already, irredeemably, broken down.

As public health advocates never tire of pointing out, there is plenty of evidence that we would all be better off, much better off, if there were fewer guns in circulation. The mere act of purchasing a firearm already increases your likelihood of suffering a gun-related injury.

Still, the opposing viewpoint does make some sense, if only on the margin. We might indeed be better off, as a collective, if we reduced the number of firearms on the street. At the same time, I might still be better off, as an individual, by refusing to give up my weapon.

The logic here is roughly the same as the reasoning that compels people to avoid being vaccinated. If enough people around me vaccinate themselves against a disease, the likelihood that I will be hurt by the vaccine eventually exceeds my likelihood of actually contracting that disease. But of course, the same calculation applies to everyone else, so they should forgo vaccination too. In that case, what we are left with is precisely the kind of society in which it starts to make sense that we all arm ourselves to the teeth.
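To make that threshold logic concrete, here is a minimal sketch in Python. The numbers are invented purely for illustration (they are not epidemiological estimates), and the assumption that infection risk falls linearly with coverage is a simplification; the point is only to show where the individually rational answer flips.

```python
# Toy free-rider calculation for vaccination (illustrative numbers only).
# base_infection_risk: hypothetical risk of catching the disease if nobody vaccinates
# risk_vaccine: hypothetical risk of being harmed by the vaccine itself

def infection_risk(coverage, base_infection_risk=0.30):
    """Assumed (simplistically) to fall linearly as vaccination coverage rises."""
    return base_infection_risk * (1.0 - coverage)

def should_i_vaccinate(coverage, risk_vaccine=0.001):
    """True while my expected risk from the disease still exceeds the vaccine's risk."""
    return infection_risk(coverage) > risk_vaccine

if __name__ == "__main__":
    for coverage in [0.0, 0.5, 0.9, 0.99, 0.999]:
        print(f"coverage={coverage:.3f}  vaccinate? {should_i_vaccinate(coverage)}")
```

Once coverage is high enough, the individually rational answer flips to “no,” which is exactly the defection described above; the same payoff structure applies to holding on to a gun while everyone else disarms.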

If that’s right, then what we are witnessing today is far more than the breakdown of America’s political system. It is rather more like the breakdown of our ability to come together and form a meaningful collective.

Amblystoma cells, some of which are undergoing mitosis, from EB Wilson’s The Cell in Development and Inheritance, 1903

Academic Publishing, the AHA, and the Ratchet Effect

On Monday, the American Historical Association published an official statement urging graduate programs and university libraries “to adopt a policy that allows the embargoing of completed history PhD dissertations in digital form for as many as six years.” The statement goes on to note that “History has been and remains a book-based discipline.” However, the increasingly common practice of requiring that completed dissertations be posted freely online may make it more difficult for recent graduates to secure a publisher. This, in turn, could make it much more difficult for young scholars to earn tenure.

As the comments section that follows the AHA’s online publication of its statement against online publishing indicates, this strikes many as a backwards-looking strategy. As I have argued myself in a previous post on this blog, scholarly publishing is clearly moving online. And as it does so, the nature of how we consume, share, and disseminate knowledge is certain to change. So why not embrace this trend rather than desperately try to hold on to an outdated, 19th-century version of print culture?


The answer, of course, is that although many of us are eager to publish our work freely online, it seems wrong to endanger the tenure prospects of a whole generation of scholars whose only crime was to have finished their PhDs during a time of transition and upheaval. It is laudable for the profession to embrace change. But we should not expect its most vulnerable members to be on the vanguard, leading the charge into an uncertain future.

But does that mean the profession can’t embrace change? Couldn’t the change we all seek come at the level of hiring and tenure committees instead? Answering these questions is far from straightforward, and it requires a small detour through what might be called the “ratchet effect.”

I first heard the term “ratchet effect” in conversation with the philosopher Peter Godfrey-Smith, who described it as one among many potential mechanisms that drive cultural evolution. The ratchet effect takes hold anytime cultural change is biased to drift in one direction rather than another. Take, for example, the case of airport security:

On a recent flight from Barcelona to Boston, I was surprised to find passports being checked at the gate of my connection in Zürich even though the Swiss border control had already inspected my documents when I entered the international terminal. Doing so added considerably to the time that it took us to board, and, to me, it seemed ridiculously redundant. But there is nothing in the least bit surprising about it. In the wake of September 11th, there was a huge push to tighten the security around American airspace, and a few minutes of extra wait time seemed like a negligible sacrifice to make.

Of course, a long time has passed without a similar incident of in-flight terrorism, so, for most of us, the cost-benefit analysis may have changed. But who is going to spearhead the movement to loosen airline security? After all, doing so would mean incurring the risk of being blamed if another disaster did occur in the future. Hence, airline security is subject to the ratchet effect. It is much easier to tighten security than to loosen it, which gives us something to think about when we are stuck in what seems like an interminable queue.
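For readers who like to see the asymmetry spelled out, here is a minimal simulation sketch in Python. All of the parameters are invented for illustration: a “security level” jumps up sharply after rare incidents but relaxes only rarely and timidly, because loosening carries a blame risk that tightening does not.

```python
import random

# Toy ratchet dynamic (illustrative parameters, not a model of real policy).
# After an incident the level jumps up; in quiet years it relaxes only with
# small probability and in small steps, so drift is biased upward over time.

def simulate_ratchet(years=50, incident_prob=0.03, seed=1):
    random.seed(seed)
    level = 1.0
    history = []
    for _ in range(years):
        if random.random() < incident_prob:
            level += 2.0                      # tightening after an incident is easy
        elif random.random() < 0.1:
            level = max(1.0, level - 0.2)     # loosening is rare and timid
        history.append(round(level, 2))
    return history

if __name__ == "__main__":
    print(simulate_ratchet())
```

Even with incidents in only a few percent of years, the level almost never drifts back to where it started, which is the one-way movement the word “ratchet” is meant to capture.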

Although its outcome is often annoying, the ratchet effect operates all around us, influencing everything from the evolution of the Republican party to the career trajectories of young historians.

At the same time that we have witnessed an upheaval in print culture, historians have also engaged in much hand-wringing about two interrelated and lamentable trends. Ironically, while it is taking PhD students longer and longer to earn their degrees, they are also having a harder and harder time finding gainful employment. The relationship between these two trends is no less disturbing for being obvious: because it is harder to find a job, it makes sense for people to spend more time lingering in their PhD programs. By taking an extra couple of years to write their dissertations, they not only increase the amount of time they can spend on the market; they are also able to write better and more polished theses, giving themselves a leg up once they actually graduate.

The problem, of course, is that we are all playing the same game. Thus, we are caught up in a ratchet effect. As people spend longer writing their PhD and produce a more polished thesis, the basic requirements for securing a tenure-track job go up for the whole profession. For all practical purposes, it is simply no longer possible to land a permanent position with the kind of CV that was perfectly standard a generation ago. Rather than a completed dissertation and good letters of recommendation, you now need one or two published articles and a thesis that is well on its way to becoming a book manuscript. Indeed, as more and more people also spend several years as post-docs, it is not at all uncommon for recent hires to have a book contract in hand by the time they start their first permanent job. Sometimes, the book has already been published. This is, as they say, the new normal.

I read the AHA’s position on the online publication of PhD theses as a good-faith reaction to the ratcheting up of publication requirements for young scholars. But wouldn’t it be better to try and bring things down a few notches instead?

What I’m about to suggest is pretty draconian, so let me preface this by saying that I mainly put it out there as a contribution to a vitally important conversation.

What if we could use the move to online publishing as an opportunity to address the time-to-degree problem head-on? One way to do so would be to move to a more UK-style model, in which students are expected to write their PhD theses in 2-3 years (after having completed the relevant coursework, which in the US would result in roughly 5-year PhD programs). This would mean lowering expectations on PhD theses somewhat. Rather than a polished first draft of the book manuscript, the thesis would be an academic exercise, freely available on the internet, meant to *prepare* students for the task of writing a book rather than being a version of that book itself.

One virtue of such a move follows from the fact that the stagnant job market in the humanities is unlikely to change, meaning that many qualified people will fail to find a permanent teaching position. Although my proposal would not change that, at least it would mean that most recent PhDs would be about 25-30 years old. My sense is that it is easier, and preferable, to make the difficult choice of leaving the profession at 30 rather than five to ten years down the line.

Another virtue is that it would take some of the pressure off the writing of the PhD itself. It strikes me as foolish to expect people to write a polished book manuscript on their first try. Better to learn your craft in the context of a long-form exercise in which you can experiment and make mistakes. Then, after you have defended, you can decide if you want to have another go at the same topic (this time knowing what you wished you had known the first time around), or you can choose to go with something new (this time knowing much more about how to pick a topic and design an argument).

Although others, including Louis Menand, have proposed similar measures, there are significant drawbacks to going this route.

One major problem with my suggestion about reducing time to degrees is that it does not go far enough to solve the problem of the ratchet effect. Because there are so many more talented historians with a PhD than there are permanent teaching positions, hiring committees would still be free to choose from a pool of remarkably accomplished applicants. That is, even if we suddenly forced students to complete the PhD program in five years, what’s to stop them from spending several years writing articles and polishing their thesis after they graduate? One thing I certainly do not want to do is advocate that the humanities go the way of the sciences, in which it has become standard to spend 5-10 years on the post-doc circuit building up a publication record before entering the tenure track.

Because of the ratchet effect, my proposal would only succeed if senior scholars committed to preferentially hiring recent graduates. And this is where things get really draconian, because doing so would mean telling huge numbers of talented and deserving people who have been on the market for a number of years that all of a sudden they are out of the running for permanent positions. That’s a pretty bitter pill to swallow. So bitter, I think, that the AHA’s backwards-looking position on online publishing starts to make a lot of sense.

Spies, Whistleblowers, and the Federal Shield Law

Julian Assange: Tinker, Tailor, Newsman, Spy?

The John-le-Carré-esque saga of Edward Snowden’s run from the United States Government has sparked an interesting conversation on how to distinguish whistle-blowing from espionage. The fact that Snowden has been charged under the Espionage Act of 1917 certainly ought to give us pause. After all, this is a law that was originally passed during the First World War, one that was used, among other things, to silence pacifists and other opponents of American intervention as well as political dissidents in the ensuing Red Scare of the 1920s. No doubt, then, an argument can be made that just as one person’s freedom fighter is another’s terrorist, so too can a whistleblower be reclassified as a spy depending on which side of a political argument you happen to find yourself on.


Historians of science and STS scholars have thought a lot about the important work that all manner of classification can do. From Foucault’s early archeology of the human sciences to Hacking’s foray into historical ontology and Bowker and Star’s book Sorting Things Out, we know that how we taxonomize or carve up the world has far-reaching implications for our epistemic, moral, political, and indeed personal engagement with it. So it should come as no surprise that I’m an advocate of taking a second look at how our government goes about classifying its citizens as well as foreign nationals for the purpose of fighting an ill-defined but global war on terror.

Rather than address the question of Snowden’s disputed status as a whistle-blower head-on, though, I wanted to tack in a slightly different direction and ask how our government classifies journalists.

As I suspect many of you know, the Obama administration has been especially enthusiastic in its use of the Espionage Act to prosecute leakers and whistleblowers. Snowden and Pfc. Bradley Manning are only the most well-publicized cases, and there have been several others, including Thomas Drake, John Kiriakou, and Stephen Jin-Woo Kim.

Stephen Kim presents an especially interesting case. A Senior National Security Analyst at Lawrence Livermore National Laboratory, Kim was charged with espionage for allegedly disclosing to the Fox News reporter James Rosen that North Korea planned to test a nuclear bomb. As a result of his reporting, the US Department of Justice began monitoring Rosen’s activities. Eventually, the DOJ even named Rosen as Kim’s “criminal co-conspirator” to gain access to his personal email and phone records.

Only a few days before we learned the Obama Administration was eavesdropping on Rosen, the Guardian reported that phone records of twenty AP reporters had been seized by the Justice Department during 2012.

In its zeal to plug leaks, then, the Obama administration is not content to go after the leakers themselves. We now know of at least two cases in which they have also gone after journalists with whom the leakers communicated. These actions pose a serious threat to the journalistic profession’s ability to execute its traditional watchdog function, providing the oversight that is necessary for citizens to make informed choices in a democratic society. As I have argued elsewhere in this blog, the only workable solution to the secrecy paradox (we want our government to keep certain things secret, but we also recognize that voters cannot make informed decisions without knowing what the government is doing in their name) is to safeguard the potential for leaks.

Now, it goes without saying that the journalistic profession’s traditional status as a “fourth estate” is widely recognized, even within the government’s ranks. This is why the Obama administration’s decision to go after journalists to whom information has been leaked is such an incendiary topic of discussion.

In an effort to protect journalists from government eavesdropping and allow them to maintain the anonymity of their sources, the Pennsylvania Senator Arlen Specter introduced the Free Flow of Information Act in February of 2009. As its official language explains, the purpose of this bill is to “maintain the free flow of information to the public by providing conditions for the federally compelled disclosure of information by certain persons connected with the news media.”

As is usually the case, the wording is somewhat counterintuitive here. How on earth does compelling the disclosure of information promote the free flow of information? The answer lies in the phrase “providing conditions.” The idea is that by stipulating exactly under which circumstances journalists can be compelled to turn over information about their anonymous sources, the act protects them in all other circumstances. As always, then, the devil is in the details.

(There is an interesting parallel here to the Freedom of Information Act, which is really a secrecy act. See this post for more on that argument.)

I should note that the Free Flow of Information Act, which is often referred to as a Federal Shield Law, has not yet been signed into law. Still, it is worth a slightly closer look for what it tells us about what our public officials think it means to be a journalist.

The proposed Shield Law states that unless certain well-defined conditions are met, “a Federal entity may not compel a covered person to comply with a subpoena, court order, or other compulsory legal process seeking to compel the disclosure of protected information.” This is to say that except under certain specified and extraordinary circumstances, the government cannot force journalists to turn over information about anonymous sources.

However, there is a major sticking point in the proposed bill; indeed, it is one reason this bill has not yet been signed into law. This is the question of who qualifies as a “covered person,” i.e., to whom this law will apply. To take an extreme case, imagine a genuine Russian spy (again of the John-le-Carré variety) who has managed to infiltrate the State Department or some other important government or military agency. Now imagine they obtain some piece of information that is vital to the United States’ national security. How is the spy going to pass this information on to the Russians? What you do not want is for the law to create a situation in which some compatriot of the spy could simply create a public website onto which the spy could upload sensitive information for everyone (including the Russians) to see. You would not want to give the KGB the ability to claim that it has an “investigative reporting” arm which is a legitimate journalistic endeavor and therefore legally exempt from US Government oversight.

That’s obviously a far-fetched idea, but the point remains: any Federal Shield Law will require a taxonomy to distinguish legitimate journalists (“covered persons”) from illegitimate usurpers and impostors. How does the law do this?

Interestingly enough, much of the debate over this bill has centered on exactly this question: who is and who is not going to be a covered person? The original draft introduced by Sen. Specter defined “covered persons” as “a person who is engaged in journalism” where the latter just means “the regular gathering, preparing, collecting, photographing, recording, writing, editing, reporting, or publishing of news or information that concerns local, national, or international events or other matters of public interest for dissemination to the public.”

That’s obviously a very broad definition, one that would allow almost anyone to qualify as a journalist. It is thus no surprise that as the bill was debated, the definition of “covered person” became increasingly restrictive. For example, an early amendment to the bill re-defined a “covered person” as anyone whose “primary intent” is to gather and disseminate news and information of public interest and who “has such intent at the inception of the newsgathering process.”

But as my English professors in college never tired of pointing out, it is a very hard job to peer into someone’s mind and discern their “primary intent.” Hence, the bill’s language has continued to evolve. For example, once the Senate Bill made its way into the House of Representatives, the Committee on the Judiciary released a report that shows further restrictions had been placed on what it means to be a journalist. Now a covered person was anyone “who, for a substantial portion of the person’s livelihood, or for substantial financial gain, is regularly engaged in journalism.” That is, if the House Judiciary Committee has its way, the proposed Federal Shield Law would only apply to professional journalists, thus excluding volunteers, many freelancers, and most bloggers.

So what does that make someone like Julian Assange? Is he a journalist or is he a spy? It is a strange but perhaps not altogether surprising irony that according to the 2009 House Judiciary Committee, the answer to that question would depend on how Assange pays his bills!

Myriad Genetics Patent Struck Down!

As I’m sure most of you have heard, the US Supreme Court issued its ruling on the Myriad Genetics case today. There were no real surprises to speak of in the decision, as the court ruled exactly along the lines the executive branch asked it to. In an amicus curiae brief, lawyers for the US Department of Justice argued that whereas DNA sequences ought not to be eligible for patent protection, modified or so-called “complementary” DNA does not qualify as a product of nature and is therefore patentable. The Supreme Court’s ruling, authored by Justice Thomas, toed exactly this line.

We’ve covered this case previously on this blog (here, here and here) so I won’t go into all of the details now.  But there are a couple of things worth pointing out.

First, Myriad’s argument that gene sequences are patent eligible because the act of isolating DNA turns a product of nature into an invention is a stretch, to say the least. Still, there was widespread concern that invalidating Myriad’s patents would strike a serious blow to the biotechnology industry, with major repercussions for America’s post-industrial economy. This helps to explain why the court went out of its way to emphasize the distinction between gDNA and cDNA (the latter is basically regular DNA with all the introns cut out). As Eric Lander, the head of the Broad Institute of MIT and Harvard, argued in his own friend-of-the-court brief, the decision to uphold patents on cDNA helps to ensure the future profitability of biotech.
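For readers who want that distinction in concrete terms, here is a minimal sketch in Python. The sequence and exon coordinates are made up: the point is only that cDNA corresponds to the exons joined together, with the intervening introns removed, which is why the court could treat it as something other than a naturally occurring sequence.

```python
# Toy illustration of the gDNA vs. cDNA distinction (made-up sequence and coordinates).
# Genomic DNA contains exons interrupted by introns; cDNA corresponds to the
# exons joined together with the introns spliced out.

genomic_dna = "ATGGTACCGTTAGCATTTGGCCAATAG"   # hypothetical gene
exons = [(0, 6), (12, 18), (21, 27)]          # hypothetical exon coordinates (start, end)

def splice_to_cdna(sequence, exon_coords):
    """Concatenate the exon segments, dropping the introns in between."""
    return "".join(sequence[start:end] for start, end in exon_coords)

if __name__ == "__main__":
    print("gDNA:", genomic_dna)
    print("cDNA:", splice_to_cdna(genomic_dna, exons))
```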

Second, my earlier prediction that the court’s ruling would hinge on the ontological question of whether genes are physical objects or informational entities turned out to be right. Writing for the court, Clarence Thomas insists that Myriad’s patent claims “focus on the genetic information encoded in the BRCA1 and BRCA2 genes,” not their “chemical composition.” Hence, the fact that isolating these genes for sequencing required breaking covalent bonds in the DNA molecule is of no legal significance.

In the end, though, there were no real surprises in the court’s legal reasoning. Still, there was at least one major surprise (to me anyway): the fact that the court issued a unanimous ruling today! I had predicted the court would strike down the patentability of gDNA but uphold the patentability of cDNA, but I had no idea which judges would vote which way. And I never would have guessed they would all agree on this issue. (Especially given the many differences of opinion in the lower courts.)

Finally, and somewhat ironically, perhaps the most interesting part of the court’s decision turned out to be Justice Scalia’s concurring opinion, which I’ll reproduce here in full.

Is Scalia just being his usual snarky self, insisting that he have the last word? Or is he making a much more substantive argument about how Supreme Court justices ought to approach legal questions whose resolution requires taking a stand on technical issues that hinge on extra-legal expertise? Or, is Scalia actually saying that he’s unwilling to legislate ontology from the bench?

The Curious History of the Paleo-Diet, and its Relationship to Science & Modernity

Joseph Knowles emerging from the woods in his “Wilderness Garb,” Oct. 4th, 1913

Over the past few years, I’ve been following the career of a new fad called the “paleo-diet,” which advises us to adopt the eating habits of the Pleistocene. I first became aware of it from a New York Times article featuring John Durant, a 20-something office worker turned fitness guru from Manhattan who tries to live as our ancestors did before the dawn of agriculture. On his website, Durant explains that when he started working at his first job out of college, he began to notice that he often felt tired, anxious, and stressed out. He also started to put on weight and noticed that his complexion was becoming uneven.

On the lookout for an explanation for what might be going on with his body, Durant came across the UC Irvine economist Art de Vany, who had developed a so-called evolutionary fitness regimen. Durant decided to give it a try, and began to eat a diet that is high in fat and protein, as well as fresh fruits and vegetables, but completely avoids grains and all processed foods. Moreover, Durant began to fast for long periods in between meals to simulate the lean times that hunter-gatherers often had to endure. Indeed, some advocates of the paleo-diet even go so far as to engage in strenuous exercise before breaking a fast, reasoning that early hominids had to hunt down their prey before consuming a large dose of protein.

There’s been a lot of chatter about the relative merits and shortcomings of the paleo-diet recently (including an advice column at the Huffington Post and a hilarious review of Marlene Zuk’s book Paleofantasy: What Evolution Really Tells Us About Sex, Diet and How We Live on Salon). I’m not going to evaluate any of the substantive claims made either for or against this lifestyle. Instead, I want to give a bit of historical context for these discussions from the late 19th and early 20th century (see the image above!).

Most people who have written about the paleo-diet cite a 1985 article in the New England Journal of Medicine entitled “Paleolithic Nutrition — A Consideration of Its Nature and Current Implications” as the point of origin for the fad. In what follows, I’ll try to push the narrative considerably further back into recent history. But the NEJM article is worth taking seriously because it makes an important point about not only this fad diet, but indeed every fad diet: they all claim to be grounded in science. What is unique and special about the paleo-diet is that it draws on an unusual branch of science, namely evolutionary theory.

On his website, Art de Vany claims that our evolutionary history did not prepare humans for a modern lifestyle. To see why one might think this, it is worth taking a detour and listening to an excellent TED Talk that Daniel Dennett gave several years ago. In his talk, Dennett used a piece of chocolate cake to explain Darwin’s curious form of “reverse reasoning.” It’s not true that we like the chocolate cake because it is sweet, Dennett explains. Rather, it is sweet because we like it.

There is nothing about cake that is inherently sweet. You can stare at a sugar molecule for as long as you want, and you will never understand why it tastes sweet. To understand that, you have to know something about how our brains are wired. And this wiring, Dennett explains, is a product of evolution. Our brains evolved to give us a psychological reward–the taste of sweetness–whenever we eat something that contains sugar, which, of course, is rich in calories. Something similar holds true for fat, salt, and a number of other foodstuffs.

The claim made by proponents of the paleo-diet is that this was a good thing during the Pleistocene, because humans did not have access to a lot of calorie-rich foods. To survive and have offspring, you had to consume all the calories available. But in today’s world of industrial agriculture and high-fructose corn syrup, that is no longer the case. Differently put: there was no such thing as chocolate cake during the Pleistocene. Probably the sweetest thing anyone would have eaten at that time was a carrot. The chocolate cake is what the ethologist Niko Tinbergen called a super-normal stimulus, or what my own behavioral ecology teacher called “the Dolly Parton effect”: something that is way off the scale of what our bodies have evolved to cope with.

Now, advising people to avoid or at least moderate the consumption of processed foods that are high in salt, fat, and sugar is not in the least bit controversial. I am willing to bet that any conventional nutritionist would be on board with the idea that just because something tastes good does not mean it is good for you, and that we should be careful about simply giving in to all of our cravings. But proponents of the paleo-diet want to go several steps further. Beyond advocating that we avoid foods packed with super-normal stimuli, they also counsel us to avoid dairy, grains, and cereals; indeed, anything that was unavailable prior to the development of agriculture. In so doing, they add an extra ingredient to the evolutionary reverse argument, namely an aversion to modernity.

To see why this is the case, it is useful to extend our historical vision beyond modern-day evolutionists such as Dennett and recent proponents of the paleo-diet like Durant and de Vany. In particular, I want to use the example of Joseph Knowles (pictured above) to show that the paleo-diet is rooted in a much older tradition of what constitutes healthy living.

Joseph Knowles was an artist and illustrator who became famous almost overnight for what he described as an “experiment” that consisted of trying to survive for two months alone in the Maine wilderness. His fifteen minutes of fame began when reporters from the Boston Post photographed him gingerly disrobing, discarding his knife and other accoutrements of modern life, demonstrating his ability to make fire by rubbing pieces of wood against one another, and entering the woods, all on the morning of August 10, 1913.

Joseph Knowles demonstrating his wilderness survival skills just before heading off into the forest, August 10th, 1913.

During the two months he allegedly spent in the wilderness, Knowles periodically sent updates about his adventures to the Post, written in charcoal on a piece of tree bark. Among other things, he recounted spending the first few days subsisting on berries before learning how to fish for trout and hunt partridge and deer. He also wove strips of tree bark together to create a kind of textile that he could fashion into clothing and shoes. Then, on August 24th, about two weeks after he entered the forest, a front-page story in the Post described how Knowles had successfully killed a bear using nothing but his wits and a club.

When he emerged from the wilderness wearing the bearskin on October 4th, Knowles received a hero’s welcome. He was cheered at every stop along the way from Maine down to Boston, and huge crowds gathered to see him arrive at North Station before he gave a rousing speech about his experiences on the Boston Common. In the months that followed, Knowles wrote a best-selling book about his adventures entitled Alone in the Wilderness and received top billing on the Vaudeville circuit.

There’s lots to be said about Joseph Knowles, including the fact that a rival newspaper published evidence to the effect that he had spent most of his time in the “wilderness” drinking beer in a friend’s cabin. But I want to focus on one piece of the story in particular. One of the first things Knowles did after arriving in Boston was to pay a visit to Dudley Allen Sargent, the Director of the Hemenway Gymnasium at Harvard University.

Dudley Sargent examines Joseph Knowles at Harvard’s Hemenway Gymnasium.

In his autobiographical account of the saga, Knowles quoted Sargent as attesting to the fact that his time in the wilderness had left him in better shape than any of the college’s “football men,” reporting, among other things, that “With his legs alone he lifted more than a thousand pounds.” Sargent also noted a remarkable improvement in Knowles’ complexion: “Subjected to the action and the stimulus of the elements, Mr. Knowles’ skin has [come to serve] him as an overcoat, because it is so healthful that its pores close and shield him from drafts and sudden chills.” Thus, Sargent declared the “experiment” a complete success. “Forced to eat roots and bark at times, and to get whatever he could eat at irregular hours, his digestion is perfect, his health superb.”

Along with this testimonial, Knowles also included a chart comparing some of his vital statistics from before and after the time that he spent in the wilderness. Not only had he lost more than ten pounds, but, remarkably, he had grown slightly taller as well. Moreover, his muscles all increased in size and in girth, and his lung capacity shot up from 245 cubic inches to an astonishing 290 cubic inches!

Joseph Knowles’ vital statistics before and after the wilderness “experiment.”

As historians of science and environmental historians well know, Joseph Knowles was part of a larger cultural movement that Roderick Nash’s classic account describes as a kind of “wilderness cult.” Other notable examples of this movement’s popularity include the founding of the Boone and Crockett Club in 1887, the Sierra Club in 1892, and the Boy Scouts of America in 1910, as well as Theodore Roosevelt’s fierce advocacy on behalf of wilderness preserves such as Yellowstone National Park as places in which white, urban elites could experience what he called the “strenuous life.”

It is no surprise that the wilderness cult took off when it did. At a time in which America was becoming increasingly urban, industrial, and ethnically diverse, many worried that rather than heading for increasing prosperity, the country was inevitably on the decline. Thus, it seemed natural to harken back to a simpler and more authentic past, one in which people’s communion with nature left them healthier in body, mind, and soul. It was, after all, during this period that the historian Frederick Jackson Turner used a podium at the 1893 Chicago World’s Fair–a celebration devoted to industrial progress in a city that did more than any other to conquer the west–as a platform from which to mourn the official closing of the nation’s western frontier. And it was also during this period that Madison Grant, director of the Bronx Zoo and Trustee of the American Museum of Natural History, published his eugenic masterpiece, The Passing of the Great Race. Envisioning a dark future indeed, Grant counseled his readers to eschew the comforts and luxuries of modern civilization and allow the Darwinian struggle to continue tending the health of the gene pool.

Few things sum up these sentiments as well as the first edition of Ernest Seton’s Handbook for the Boy Scouts of America. “We have lived to see an unfortunate change,” he lamented on the very first page of the Handbook. “Partly through the growth of immense cities,” and “[p]artly through the decay of small farming,” he continued, America entered a period that Seton and so many others described using the word “Degeneracy.”  Thus, it was to “combat a system that has turned such a large proportion of our robust, manly, self-reliant boyhood into a lot of flat-chested cigarette smokers, with shaky nerves and doubtful vitality” that he brought scouting to America. Mindful of the fact that “Consumption” had become “the white man’s plague,” he concluded, “I should like to lead this whole nation into the way of living outdoors for at least a month each year.”

In closing, let me forestall a possible misinterpretation. Of course I do not mean to imply that Durant and other advocates of the paleo-diet are all eugenicists at heart. That is certainly not the lesson I hope people take away from the history that I have tried to present. But I do think that a few striking and salient parallels present themselves.

Perhaps it is a cliche to say that we are living through a time of enormous change, just as people during the American Gilded Age and Progressive Era did, but that does not make it any less true. One thing I would like to suggest we are seeing, not just in the paleo-diet but certainly there as well, is a kind of aversion to modernity. People today, like people a hundred years ago, are looking to the past in search of a simpler, more authentic, and, importantly, more healthful way to live.

But what is so curious about all of this is that so many of these people, from Joseph Knowles to Art de Vany, are also looking to science, a quintessentially modern institution if there ever was one, both for advice on how to get there and for the authority to argue that an earlier period in human history really was healthier and better adapted to our physical, spiritual, and emotional needs.