Author Archives: Jenna Healey

About Jenna Healey

PhD Candidate in the History of Science and Medicine at Yale University.

What Can Reddit Tell Us About The Future of Science?

Back in January 2013, twenty-three-year-old Kim Suozzi passed away after her fight with glioblastoma, an aggressive form of brain cancer. Kim, a recent college graduate and neuroscience major, made headlines in the months leading up to her death for her decision to cryopreserve her brain in hopes that she would one day be revived. Unable to pay the high cost of the procedure, Kim and her boyfriend Josh Schisler took her case to the Internet, determined to drum up donations to fund Kim’s dream. Ultimately, their campaign was a huge success: Kim raised the $80,000 she needed to preserve her brain until neuroscientists figure out a way to bring her back to life.



Celebrating 50 Years of JAS-Bio

This past weekend in New Haven, Yale hosted the 50th annual Joint Atlantic Seminar for the History of Biology, known colloquially as JAS-Bio. Since 1965, the seminar has been hosted by institutions up and down the Eastern seaboard. JAS-Bio is a unique setting where historians of biology at all stages of their careers can meet and interact. While all the papers are given by graduate students, the audience is a great mix of senior faculty, early career scholars, and graduate students from many different institutions. This makes JAS-Bio an ideal venue for graduate students to receive feedback on their research, both from their peers and from more established scholars.

Polly Winsor, the esteemed historian of biology, published a short history of the seminar in Isis in 1999. Winsor describes the seminar as a small, friendly, and supportive environment in which students could “try their wings in circumstances less daunting than the annual meeting of HSS.” This year’s meeting was no exception. After each paper, the audience of over fifty people had an abundance of friendly feedback for the speakers. There were so many questions, in fact, that some session chairs had to forbid speakers from responding in order to collect all of the comments. The intellectual exchange didn’t stop there. At coffee breaks, meals, and receptions, I heard speakers field questions about their talks and discuss their larger research projects (including one lively conversation that carried on well after midnight). As compared to larger national meetings, there really is something unique about the type of intellectual engagement that happens at JAS-Bio. The seminar provides the time and space for extended conversation that can get drowned out in a larger conference setting. The regular attendees also take the seminar’s tradition of supporting graduate students very seriously, and it shows.

Reminiscing about fifty years of JAS-Bio. Photo Credit: Daniel Liu.

On Saturday, ten talks were delivered by students from Johns Hopkins, Yale, MIT, University of Pennsylvania, UW Madison, Brown, and Princeton (check out the program here). The papers covered everything from the biochemistry of the cell membrane (Daniel Liu, UW Madison) to the economics of goldfish breeding (Laurel Waycott, Yale). I learned why the disappearing pinky toe has become a ubiquitous symbol of human evolution (Emily Kern, Princeton) and how the discovery of echolocation grew out of military research on sonar and radar during WWII (Richard Nash, Johns Hopkins). I would say there was an even split between intellectual history and social/cultural history, with some speakers dabbling in intellectual frameworks such as disability studies and the history of capitalism. What struck me was not only the high quality of all of the papers, but also the growing awareness among graduate students about what makes a good presentation. Shira Shmuely (MIT), for example, took advantage of PowerPoint to show off her amazing archival find: handwritten laboratory inspection notebooks from late Victorian Britain. Speakers also shared a lot of jokes and light-hearted anecdotes, which was fitting given the relaxed tone of the meeting (also, I just really enjoy puns).

My favorite part of the day was the final session. Henry Cowles, AmericanScience alum and co-organizer of the conference, invited audience members up by “cohort” (the year they first attended the meeting) to reminisce and reflect on their experience. Over the next hour, audience members sketched out an informal history of the seminar and, in the process, the history of the discipline. Three members of the audience – Everett Mendelsohn, Garland Allen, and Ruth Cowan – had attended the original 1965 meeting at Yale. They recalled the department’s hospitality as well as the intellectual generosity of the original gathering. Research in the conference archive (held at the Smithsonian) revealed that two of the graduate students originally slated to give papers that day were unable to do so when time for discussion ran short. They were apparently quite relieved, however, as they had been roped into giving talks by their advisor and didn’t actually have anything prepared!

In her history, Winsor highlights the role of JAS-Bio as a “training ground” for young scholars giving their first academic paper. The tradition continued at this year’s seminar. The program included four first-year graduate students and one incredibly impressive undergraduate (Eliza Cohen, Brown). Indeed, the list of scholars who gave their first academic paper at JAS-Bio is impressive: Ruth Cowan, Gar Allen, Steven Shapin, Bernie Lightman, Robert Kohler, Jane Maienschein, John Harley Warner, Janet Browne, and Jim Secord, among many others. In reminiscing, these scholars recalled the nerve-wracking experience of delivering their first papers (tenured professors – they’re just like us!). Bernie Lightman was afraid that his mother (in attendance) was going to embarrass him by asking a question; John Harley Warner was worried that the smart British girl who presented before him would make him look bad (it was Janet Browne). Pam Henson, scheduled to present her first paper after taking three days of oral exams, tried to escape the lecture hall (but was dragged back in by one of her professors).

Leonard Wilson was the co-founder of JAS-Bio (along with Frederic Holmes). Reflecting on the seminar’s history in 1999, he said: “Clearly the Joint Atlantic Seminar filled a need, unexpected but real, that was not met by national meetings. It was the need of students working in relative isolation to talk about their work, to meet others engaged in similar problems, and to exchange ideas and information. When Frederic Holmes and I were planning the first Joint Atlantic Seminar, we thought that if it were not a good idea, the meeting simply need not be repeated. So far it has been worth repeating.”

This exercise in collective memory also turned up a host of other entertaining anecdotes. When Stony Brook hosted JAS-Bio in 1994, James Watson was invited to give short opening remarks, which ballooned into a forty-minute lecture on the history of biology. As Nathaniel Comfort recalled, Watson’s speech was “interesting to [historians of biology] in ways that he could not even imagine.” Sharon Kingsland described eleven-hour drives from Toronto and groups of graduate students lounging around the seminar room listening to Janis Joplin (before getting busted by their advisors). In 2009, graduate students had to work together to help a fellow speaker who locked himself out of his room wearing only a towel (while his clothes and talk remained inside). At the very end of the day, Luis Campos opened a package containing the very first advance copy of his book, fresh from the publisher. Luis explained that he wanted to share the accomplishment with his JAS-Bio family, who had been there since the beginning of the project. Put together, all of these anecdotes demonstrate the role that JAS-Bio has played not just intellectually, but socially, in the creation of a history of biology community. I can attest to the fact that the meetings are a great venue for forging friendships across institutions. This is true not only of JAS-Bio but its many siblings, including the Joint Atlantic Seminar for the History of Medicine (JAS-Med), the Midwest Junto for the History of Science, and Phun-Day (the Harvard-Princeton-MIT History of Physical Sciences Workshop).

With so much reflection on JAS-Bio’s history, we weren’t left with much time to reflect on its future. Upon studying the list of this year’s participants and their first JAS-Bio appearances, one scholar noted that there was a conspicuous gap between 1965 and 1978. What happened to the scholars who had first presented during those years? Someone suggested that the gap could be explained by the dismal job market during that time, a comment that was not lost on those of us graduate students now facing our own job market crisis. As more historians of biology find employment outside of the academy, or forge hybrid careers, JAS-Bio has the potential to bridge the academic/non-academic divide by bringing together the largest possible number of historians of biology (regardless of academic appointment) at least once a year. And while I don’t pretend to know which new framework will preoccupy us ten, twenty, or thirty years down the line (if I did, I would write a book about it!), this year’s papers indicate that graduate students are not afraid to push the discipline in exciting new directions. Here’s to another fifty years of fun, friendship, and exciting ideas in the history of biology.

Dispatches From The Particle Accelerator

Back in January, I was presented with a unique opportunity: the chance to walk inside of a particle accelerator.

And I was really really excited about it.

The tour was part of Yale’s Science Studies Lunch series. The brainchild of AmericanScience alum Joanna Radin and Bill Rankin, both Assistant Professors in the Program for the History of Science and Medicine at Yale, these monthly events bring together an interdisciplinary crowd of historians, sociologists, scientists, medical practitioners, and artists: anyone remotely interested in the social study of science. In a series of field trips, we have explored scientific collections, museums, labs, farms, and, on one day in January, a particle accelerator.

The installation of the original accelerator, 1965.

Part of Yale’s Wright Lab, the Van de Graaff particle accelerator is in the process of being decommissioned. Originally installed in the mid-1960s, the atom smasher made Yale a national hub for the study of nuclear particle physics. At 100 feet long, the machine is dwarfed by the mammoth particle accelerators in operation today (for comparison, the Large Hadron Collider in Geneva has a circumference of 17 miles). But at the time of the accelerator’s last upgrade in 1987, it was the highest-energy tandem accelerator in the world.

The installation of the new tandem accelerator, 1985.

Note the yellow exterior – the tank was promptly painted blue for purposes of school spirit.

From the outside, the accelerator looks unremarkable. Nestled in the side of a grassy knoll on campus (in an area known as Science Hill), the only outward clue of what lies beneath is a formidable set of double doors plastered with a “Security Notice.” The accelerator is so unremarkable, in fact, that I lived two blocks away from it and walked by it every day for three years without ever noticing it was there. We postulated that the grassy camouflage was a strategy to keep the lab hidden from view, or perhaps served as a shield for radiation. But the lab’s director, Karsten Heeger, assured us that he knew of no particular reason that the accelerator was placed underground other than an apocryphal story that the lab’s original director didn’t want his graduate students to have windows (sigh).

Realizing that I had lived in such close proximity to the lab made it even more exciting to explore the particle accelerator at my own doorstep. Here is what I took away as three of the most interesting themes of our conversation.

The Challenge of Commemoration 

As the accelerator is decommissioned, the Wright Lab is making a concerted effort to commemorate the occasion and to preserve the accelerator’s historical legacy. Back in November, the lab held an open house that attracted over 700 people from across Connecticut. Long lineups of visitors waited out in the cold for their chance to tour the accelerator before it is dismantled. As one employee explained, this kind of public outreach wasn’t possible while the accelerator was still operational, because there was rarely a moment when the facility wasn’t being used to run experiments. But the lab has relished the opportunity to finally let the public in to explore the accelerator, and has proven receptive to alternative uses for the space. For example, local artists are organizing an exhibition that will take advantage of the unique setting. There has also been “at least one” skateboarder who has used the accelerator as a half-pipe.

As a historian, I was impressed by the lab’s sensitivity to issues of historical preservation. The lab is working with history of science Professor Paola Bertucci, who is also the Assistant Curator of the Historical Scientific Instruments Collection, to preserve key artifacts and eventually build an exhibit commemorating the accelerator in conjunction with the Peabody Museum of Natural History. We discussed the challenges inherent in preserving an experimental apparatus that takes up more than 14,000 square feet of space. As I snapped pictures with my phone, I quickly realized that photography was an insufficient medium for capturing the size of the machine, and the sense of awe it inspires. So how do we preserve a sense of scale? One suggestion was to project and then paint an outline of the accelerator onto the wall. This way, future visitors to the lab will be able to see the accelerator’s footprint long after it disappears.

The group inside the accelerator. I’m there too!

Photo Credit: Charlotte Abney

I wondered aloud whether the accelerator might be preserved as is (or at least in part) for future museum goers to walk through and experience just as our group did. I had in mind my visit to the Smithsonian Air and Space Museum last summer, where life-sized models of space shuttles and airplanes hang from the ceiling and can be explored by the museum’s visitors. It was explained, however, that the accelerator and its scientific paraphernalia had to be taken apart because it is being parceled off to various institutions around the country. In fact, because the accelerator was partly funded by the federal government, Yale doesn’t actually own all of the equipment and therefore doesn’t control what happens to it. If the equipment can still be used to conduct “useful science,” it will be recycled and repurposed. I think there is much more to say about the idea of recycling in science, and the ways in which “outdated” technologies can have long and varied scientific careers of their own.

Heeger also explained how the spirit of the lab’s history will be preserved in the new designs for the Wright Laboratory, which will be renovated following the accelerator’s removal. The accelerator’s centrality in the lab meant that generations of Yale students received hands-on training in the design, construction, and maintenance of equipment. The lab hopes to keep this hands-on training as a fundamental part of student life by building a workshop for designing and building detectors for use at other major research sites. Even for theoretical physicists, these mechanical skills are a crucial part of the research endeavor.

Maintenance, Repair, and the Invisible Labor of Big Science

My favorite part of the trip was hearing from Frank Lopez, the lab’s Research Development Technician. He described the day-to-day operations of the accelerator, and the challenges of keeping such a complex piece of machinery working properly. As Lopez quipped, “if it exists, it will break down,” and the accelerator seems to have been a particularly finicky piece of equipment.

Lopez described the mundane task of cleaning the machine, as any dust or hair present in the inner chamber could interfere with the experiment. The accelerator is made up of thousands of individual metal parts, and they all had to be perfectly clean for an experiment to succeed. The image of scientists perched on top of a giant particle accelerator, carefully removing dust and hair from every nook and cranny, contrasts sharply with stereotypically heroic representations of experimental science. Graffiti along the walls and ceiling of the accelerator hints at the long hours the crew must have spent inside of the chamber inspecting, cleaning, and maintaining the equipment.

There are lots and lots of individual parts, and lots of things that can go wrong.

Photo Credit: Bill Rankin

Because visiting teams of scientists would reserve the accelerator months or even years in advance, the crew had to be ready to run the experiment as soon as the group arrived. As each scientific team usually only had a week with the accelerator, there was no time for second chances. Part of the challenge was that in order to insulate the accelerator (which operated at 22 million volts), the inner chamber had to be pumped full of gas. Once the gas was pumped in (a process that in itself took an entire day), no humans could go back inside to tweak the machinery.

If one of the thousands of parts that made up the accelerator came loose and fell on the ground, it would create sparks so massive that the machine “sounded like a monster.” To solve this problem, the crew actually used remote-control cars, and later a robot of their own design, to retrieve errant pieces and save the experiment. If the robot failed to solve the problem, all of the gas needed to be pumped back out (which would take another day), so that the crew could re-enter the chamber and fix the machine. With such tight timelines, there was a lot at stake in making sure the machine worked properly.

Jeffery Ashenfelter, the Associate Director of Operations, reflected on the tacit knowledge required to make the accelerator work. The key to successful science was tricking nature – but nature doesn’t always like to be tricked. He described the process of aligning the ion beam as “ion sorcery,” and admitted that they didn’t always understand how or why a certain alignment worked better than others. It took a lot of trial and error and a deep familiarity with the machine to create a successful experiment.

When we think of “big science,” we (naturally) tend to think about what’s “big”: the exorbitant costs, the challenges of international cooperation, the sheer scale of the required machinery. But the day-to-day operation of a particle accelerator requires a highly knowledgeable team that can carry out the countless small tasks and adjustments that make experiments work.

The Future of Big Science

Lastly, our tour guides reflected on the future of the Wright Lab and of “big science” more generally. The decision to decommission the particle accelerator stemmed in part from the lab’s shift away from particle physics towards the study of neutrinos and dark matter. When the accelerator was first installed in the 1960s, particle physics was an exciting and politically significant area of inquiry. Today, particle physics is what one lab member described as a “mature field.” While the accelerator could still be used to generate new knowledge, research in a mature field holds less appeal for an elite institution like Yale, which strives to be on the cutting edge of new knowledge production.

While Yale had been a central hub for visiting researchers around the world, the massive scale of modern research facilities requires them to be placed in remote locations scattered throughout the globe, away from universities and urban centers. Members of the Wright Lab, for example, conduct research in several facilities around the world including the Gran Sasso National Underground Laboratory in Italy, the Daya Bay Reactor near Hong Kong, and (the coolest of all) the IceCube High-Energy Neutrino Telescope in Antarctica.

A shot of the outside of the accelerator. It looked a little like a submarine, complete with portholes.

Photo Credit: Bill Rankin.

The international nature of such work presents new challenges for physicists. While they know how to write equations and design detectors, members of the lab felt less prepared for the cross-cultural cooperation required to conduct major experiments. Ashenfelter spoke of his experiences in Italy and admitted that at first he didn’t know how to “get science done” in a different cultural setting. At every new site there is a learning curve as scientists adjust to the facility’s unique culture and regulations.

Thank you to the Wright Lab for giving us such an excellent tour and for answering our many questions about the facility and its history. I can now brag about having been inside of a particle accelerator, which I’m sure will be a huge hit at future academic gatherings and nerdy cocktail parties.

The Epistemology of a Podcast

Unless you’ve been living under a rock for the past few months, you’ve probably heard of Serial, the podcast sensation taking the internet by storm. Hosted by Sarah Koenig, the podcast is a serialized account of the 1999 murder of Hae Min Lee, a 17-year-old senior at Baltimore’s Woodlawn High School. In the style of a true crime drama, Serial revolves around the fundamental question: whodunit. But in this case, there is also a possibility of wrongful conviction. Koenig’s entry into the story comes through the family of Adnan Syed, Lee’s ex-boyfriend, who was convicted of first-degree murder in her death. Koenig set out on a year-long investigation of the case, poring over trial records, interrogation transcripts, even the prosecutor’s evidence files. Was Syed wrongfully convicted? If he didn’t do it, then who did?
Serial, a spinoff of the popular NPR podcast This American Life, has attracted an incredible amount of media attention. Time, the New York Times, and The Atlantic have all covered the podcast. Slate began its own meta-podcast to discuss Serial each week. Some of this coverage has focused on allegations that racial prejudice pervades Koenig’s reporting. Because the case focuses on the murder of a Korean-American teenager by her Pakistani-American boyfriend, Koenig (as a white journalist) is reporting on communities in which she is a cultural outsider. Others have criticized Koenig for making herself, as narrator and amateur detective, the protagonist of someone else’s story.
I discovered Serial after a few episodes had already aired. I have listened to all of the available episodes thus far, and for the most part have enjoyed listening to the show. Today, the highly anticipated final episode of Serial will go live. Before I listen to the final episode, I wanted to share a #histstm perspective on the show and its surprising success.

Reasonable Doubt?
At its core, Serial is a reflection on the nature of truth. The show is fueled by uncertainty, as Koenig brings the listener through a series of “buts,” “what-ifs,” and “wtfs” that will make your head spin. Adnan was primarily convicted based on the testimony of Jay, his friend and alleged accomplice in disposing of Hae’s body. Jay’s testimony is riddled with inconsistencies. It doesn’t help that Jay provided his testimony on four separate occasions: two pre-trial interrogations and two times on the stand. Koenig meticulously pokes at these inconsistencies, in hopes that by picking apart Jay’s testimony, the case against Adnan will unravel.
What struck me about this was how little the testimony’s inconsistencies bothered me. It seemed intuitive to me that if you told a story four times, under great pressure and in extraordinary circumstances, it would change a little each time. Yet, it is these inconsistencies that drive Koenig’s uncertainty, as well as the entire Serial narrative. Her discomfort seems to grow out of the fact that testimony used to convict someone of first-degree murder shouldn’t be full of holes, and potentially, full of lies. Such uncertainty appears to be an assault on our standard of justice, and in particular the belief that we must prove guilt beyond reasonable doubt.
The question then becomes: how much doubt is “reasonable” in the context of the criminal justice system? There are piles of scholarly work on this topic, but as a layperson it was not something I had given much thought. In Episode 8, “The Deal With Jay,” Koenig interviews Jim Trainum, a private investigator, former homicide detective, and expert on false confessions. I think that their conversation really gets at the heart of the issue. Trainum, while admitting that the inconsistencies were troubling, also acknowledges that investigators were “better than average” in handling the evidence. He explains that the detectives in the case didn’t push on Jay’s inconsistencies, the way that Koenig is, out of fear of creating “bad evidence.” Jay was the prosecution’s star witness: the entire case hinged on his testimony. If the investigators pushed too hard, there would be nothing left for them to use, and their case would fall apart.
Koenig balks at Trainum’s use of the expression “bad evidence.” “All facts are friendly!” she exclaims, “You can’t pick and choose.” Trainum responds by explaining that for prosecutors, the goal was not to get at absolute truth, but to build a strong case. Koenig doesn’t back down: “How can you build a good case, how can he be a good witness, if there is stuff that is not true or unexplained?” Trainum concludes by suggesting that in any case, there will always be things that are unexplainable. He also points to the possibility of confirmation bias: the prosecutors were looking for facts to support the theory they already believed to be true.
In a weird way, Trainum’s explanation brought me back to Kuhn’s explanation of normal science: that as evidence accumulates, the underlying assumptions of a scientific theory go unquestioned. It is only when enough anomalies accumulate that scientists will begin to question the tenets of their current paradigm. For Koenig, any anomaly or inconsistency should be enough to throw the conviction out the window. But as Trainum points out, there are important consistencies that check out, and give credence to the theory that Syed is guilty. In the absence of an alternative explanation, it becomes a compelling case.

The CSI Effect

The CSI Effect!
Another unsettling feature of the case is the absence of physical evidence. In Episode 7, Koenig interviews lawyers in the “Innocence Project” at the University of Virginia School of Law. The students reviewing the case are unanimously unconvinced of Syed’s guilt. One student claims that there are “mountains of reasonable doubt.”
The sticking point for these students is the absence of physical evidence. Although a liquor bottle found near Hae’s body was scraped for epithelial cells, they were never tested. Fibers found in the soil around her body were only tested against a very small number of fabric samples. There were no DNA tests performed on the body itself. Some of the forensic reports from the case appear to be missing from the records.
Could these students be suffering from the “CSI effect”? This expression is used to describe an increasing demand among jurors and the wider public for forensic evidence, which is attributed to the popularity of television crime shows like CSI. Studies have shown that frequent viewers of CSI may place a lower value on circumstantial evidence, and many lawyers argue that there has been a significant shift in the behavior of juries. Could Koenig’s (as well as the listener’s) uncertainty be a side effect of 15 years of television that glorifies forensic science? Is one person’s testimony enough to put someone away for first-degree murder?
Interestingly, there was actually a cutting-edge technology that was introduced into the trial: the cell phone. Syed’s case was one of the first in Baltimore County to use cell phone records and tower pings as evidence in a criminal trial. Syed himself had only purchased a cell phone three days before the crime occurred, a fact that was used to throw suspicion onto him. The ways in which the cell phone records and cell phone tower pings confirmed Jay’s testimony lent significant strength to the prosecution’s case.

Reddit and the Radio
Lastly, I want to reflect on Serial’s surprising popularity. After all, as a genre serialization is nothing new. Literary serials date back to the 17th century, and surged in popularity during the Victorian era with Dickens’ The Pickwick Papers. More recently, serialized stories were the bread and butter of television networks everywhere. Daytime soaps, prime-time dramas, even some reality shows are versions of the serial form. And not that long ago, before Netflix allowed us to mainline entire seasons of the Gilmore Girls in a weekend (no judgment), viewers had to wait with bated breath for the next installment. Maybe Serial’s charm derives from being an old-school radio show in an era of immediacy and on-demand entertainment.
Another interesting phenomenon to pop up in Serial’s wake is the creation of a subreddit dedicated to the show. As of the time of writing, the subreddit has almost 30,000 followers. Posters propose alternative theories, map cell phone pings, and share detailed timelines of the crime. Important figures in the case, including Adnan’s brother and Jay, are rumored to be posting in the group. The group’s activity reminds me a little of citizen science, where the collection or analysis of large amounts of data is crowd-sourced by non-professionals. Perhaps this collision of an old-school serial and the universe of social media can help explain the show’s popularity. The pleasure of waiting for the next installment is further intensified by discussion and speculation within an online community.

For weeks, listeners have expressed anxiety about the ending of the show (it even inspired a Funny or Die parody). After all, Koenig is investigating a real case, not reciting a script. And in real life, we don’t always succeed in finding the truth. Koenig has acknowledged this pressure, but insists she has no special responsibility to provide listeners with a satisfying conclusion. We will soon find out!

SHOT Recap: Innovation, Risk, and Magic

This past weekend, while many friends from the HSTM world were convening in Chicago for HSS/PSA, I was in Dearborn, MI attending my first meeting of the Society for the History of Technology (SHOT). Following on the heels of Leah and Evan’s great conference recaps, I want to share some of my experiences and highlights from SHOT 2014.

As is always the way with larger national conferences, the program was chock-full of panels that I was excited to attend. But with so many sessions running in parallel, I was forced to make some difficult choices. In his closing address, SHOT President Bruce Seely acknowledged that expanding the program to include more scholars meant that attendees often found themselves wishing that they could be in two places at the same time. I know I only heard a fraction of the exciting work presented last weekend. In the comments, I’d love to hear from my fellow SHOT attendees about their own highlights.

The conference opened with a plenary lecture from historian David E. Nye. Nye, a professor at the University of Southern Denmark, has written several landmark volumes in the history of technology including Electrifying America (1990) and American Technological Sublime (1994). In his lecture, Nye proposed a list of eight defining features of the history of technology. I thought Nye’s lecture was a thoughtful reflection on disciplinary identity in an area of study that is, by his own admission, fundamentally interdisciplinary. I was particularly struck by Nye’s emphasis on labor as a core element of the history of technology: “without workers there can be no technologies.” He called for more engagement between labor historians and historians of technology, as well as a recognition of the role of workers in both modification and meaning-making. You can read Nye’s lecture in full here. The plenary was followed by a reception at the “Car Court” of the lovely Henry Ford Museum. There I learned much about the history of the automobile, and saw many models of the original Ford (from A to T!).

A Ford Model T on display during Thursday evening’s reception at the Henry Ford Museum, Dearborn MI.
On the first full day of the conference, I saw two panels which interrogated the “groups, networks, and systems” (as described by Nye) that shape technologies and drive innovation. The first, Technology in Use, examined the role of users in technology transfer. Joshua Walker (University of Maryland) described how Mexican peasants became masters of repair, cannibalizing some pieces of agricultural machinery to repair others after Mexican economic policy blocked the import of parts or newer models. Carrie Meyer (George Mason University) showed images of “power houses” set up by midwestern farmers in the early 20th century to maximize the utility of their gas engines for farm and domestic chores. Lastly, Aashish Velkar (University of Manchester) gave a glimpse into his fascinating project about the global diffusion of the metric system and the clash between the state and citizens during the transition to a new system of measurement.

The second panel, Who Were the Innovators, featured four great papers that challenged traditional notions about “who drives innovation.” In these four cases, it was popular science writers, mothers, high school students, and government regulators that pushed for the development of new technologies. Gender emerged as a central theme of the panel, as the presenters showed how historically bounded definitions of femininity and masculinity become intertwined with our narratives of progress. Joy Rankin (a fellow student at Yale) provided particularly striking examples of the masculine culture of “personal computing before personal computers” among high school and college students in the early 1970s. My favorite story involved one Dartmouth student using BASIC to send a romantic message to his girlfriend at Vassar: a giant printout that read “I Miss You Girl.” He hoped that his programming prowess would impress his significant other. It must have worked, because they’re still together today!

An engaging panel on Friday morning asked: Who Were The Innovators?
During the second day of the conference, I attended three consecutive panels that dealt with some combination of gender, health, and consumer technologies. This included my own panel on Body Practices, chaired by Projit Mukharji (University of Pennsylvania). After chairing the panel on Rot at HSS, Projit flew to Dearborn so he could comment on Jessica Martucci’s (Mississippi State University) fascinating paper on placentophagy (or, eating one’s own placenta) and my own paper on ovulation detection technologies. Needless to say, it must have been a weird weekend for Projit, but I very much appreciated all his helpful comments as well as the audience’s enthusiastic discussion of embodied technologies.

Another great discussion followed the panel Health, Harm, and Hope: Technological Comprehension and Consumer Health Products. Martha Gardner (MCPHS University) showed us how hexachlorophene became America’s most famous chemical in the postwar period before being banned by the FDA in the 1970s, while Jeffrey Womack (University of Houston) spoke about the history of radium water (yes, that would be radioactive water), an energy drink for the early 20th century. Audience members debated the role of the consumer in assessing risk, especially when it comes to products that may slowly impact health over a long period of time. Can consumers be trusted to weigh quantitative evidence or to understand the hazards of accumulation? Do we always need regulators to remove potentially dangerous products from the market, or are consumers capable of deciding what amount of risk they’ll accept? In the final paper of the panel, Lara Freidenfelds (independent scholar and member of the Princeton Research Forum) spoke of the messy risk calculus of home pregnancy testing. Pregnancy detection is now happening so early that it is creating a new category of “early pregnancy” and, consequently, the concept of a very early miscarriage. Apparently, you can be just a little bit pregnant, and this technologically-enabled state has serious emotional consequences for women who use the tests.

A few other highlights:
  • Pamela O. Long, recent recipient of a MacArthur Genius Grant, was also the recipient of this year’s SHOT Leonardo da Vinci Medal. Audience members were assured that the award committee had decided to recognize Long’s work long before the MacArthur folks made their announcement. Long gave a great speech reflecting on her scholarly career, and called on young scholars to embrace the study of premodern technology.
  • There were several panels reflecting on the importance of public engagement in the history of technology, including a roundtable response to Nicholas Kristof’s “indictment of academia” published last year in the New York Times. I had a few great conversations with academics who are engaged in curating exhibits in public spaces, and many emphasized how the materiality of the history of technology provides perfect opportunities for “hands-on” public engagement.
  • On Friday afternoon, I attended the panel Indistinguishable from Magic: Technology and the Occult in Machine Age America. The title was inspired by Arthur C. Clarke’s third law of prediction (“any sufficiently advanced technology is indistinguishable from magic”), and the papers were a fascinating look into how both spiritualists and magicians adopted modern technologies to create and study magic. Robert MacDougall (University of Western Ontario) capped off the panel with a smart paper on the Keely Motor, which touched on the challenges of writing the history of a failed technology as well as combining biography with the history of technology.
  • Although I wasn’t able to attend, there was lots of buzz about Thursday’s THATcamp, and more generally about the need to bring fresh methodological approaches to the history of technology. I think it is fair to say that there was a lot of excitement and experimentation among the new generation of scholars at the meeting, and I’m looking forward to seeing how a list of “defining features” of the discipline might look very different a decade from now.
If you’re interested in hearing more about the conversations we had in Dearborn, check out this compilation of #SHOT2014 tweets, courtesy of Finn Arne Jorgenson.

Going Global

As Ebola spreads outside the confines of West Africa, public health officials have declared the epidemic to be a crisis on a global scale. Peter Maurer, president of the International Red Cross, described the outbreak as “a global health catastrophe…an epidemic of global dimension and a global threat.” While there is much to be said about the politics of these statements (as well as the public health response to Ebola more generally), it is Maurer’s constant invocation of the “global” that interests me most.

Within the history of science, medicine, and technology, we are experiencing our own turn towards the global. Over the past few years, global history has emerged as a theme in CFPs, at conferences, and in recently published scholarship. Back in 2012, my department introduced a History of Global Health class which attracted a large number of enthusiastic undergraduates. This trend towards global history is also reflected in the job market. Based on a very unscientific survey of recent job postings, the number of advertisements requesting a candidate with a global research focus has jumped from 16% to 29% between 2011 and 2014.*

What is driving this trend? Is global just the newest label for non-Western history (in the tradition of comparative history, international history, transnational history, or “America and the world”)? Or is global history a qualitatively different enterprise?
Image from Kenneth Lu via Flickr

A cynical interpretation of the trend is that “global history” is the product of a troubled job market. As universities tighten their purse strings, the number of permanent academic positions has shrunk considerably. When a department is able to secure the funds for a tenure-track position, it seems prudent to select a candidate who is as versatile as possible. From a budgetary perspective, this means being able to teach a wide variety of classes across a number of different geographic regions. A global historian who can fulfill many of the department’s teaching needs is more useful than a specialist who can only teach a narrow range of classes. Of course, it might also be that universities recognize the growing demand for coursework that provides a global perspective on history, and are orienting their hiring towards that end.

While fiscal politics play a part in shaping the job market, I think that the global turn runs deeper than that. The intellectual energy emanating from the field is palpable. Whenever I hear a talk or read new literature that engages with global histories of science, I begin to think about my own work differently, asking how I can move my research beyond its national confines. Our turn to the global is certainly informed by our own contemporary moment when epidemics become “global catastrophes” and information travels around the world in a single tweet. Living in a globalized world, we seek to historicize it.

The challenge then, is how to go about the process of doing global history. The global history of science, technology, and medicine encompasses topics as diverse as the politics of global health, the development of global technology networks, and the impact of globalization on scientific practice. What do these studies have in common, other than telling stories that transcend national borders? Is there a methodological approach that binds them together?

I am certainly not the first to ask these questions. Several historians of science, medicine, and technology have thoughtfully reflected on the “global turn” and what it means for our discipline. Historian of medicine Warwick Anderson has written extensively on what we could call the history of global health. In an essay published this month, Anderson reviewed the recent historiography in the history of global health and suggested that the most important contributions have been written not by historians, but by anthropologists. This is because anthropologists pay close attention to local contexts – a seemingly strange virtue in a search for the “global.” Anderson goes so far as to say that “the most compelling accounts of global health manage to localize medical interventions: they examine the messy and often confusing, even conflicted, interactions of foreign doctors and aid-workers, domestic and traditional health practitioners, and their patients.”

Anderson’s claim for the importance of locality in global history runs through his other writings on the subject. In an essay published back in February in the Social History of Medicine, Anderson critiqued historians’ emphasis on global “flows,” what he cheekily refers to as the “hydraulic turn.” By embracing the language of “flow,” he argues, historians begin to take globalization for granted, treating it as an inevitable historical narrative instead of a process that requires its own analysis and historicization.

The issue of movement is a common thread running through other methodological discussions of global history. Fa-Ti Fan identified “circulation” and “trade” as the two methodological pillars of global STS. Stuart McCook recently suggested the method of “following” an object – whether it be material, textual, or biological – as it moves around the world. Sujit Sivasundaram, on the other hand, encourages historians to cross-contextualize their sources to better understand how both Western and non-Western subjects approached the natural world at any given moment. Sivasundaram’s call to cross-contextualization strikes me as similar to Anderson’s praise of the anthropologist. Both insist on careful and thorough local study that maps the complexity of encounters as they happen on the ground.

The tension that emerges from these discussions is a battle between flow and focus, between moving and standing still. Even if people, ideas, and technologies are always moving around the world, it is only by taking a snapshot that we will understand how global flows influence local realities. In all likelihood, both kinds of studies will be essential for historians tackling the enormous task of writing the history of the world.

* Based on the job listings on the Academic Jobs Wiki.
* Check out the Isis focus section on “Global Histories of Science,” published in March 2010, as well as the Isis focus section from December 2013, “Global Currents in National Histories of Science: The ‘Global Turn’ and the History of Science in Latin America.”

What Difference Does a Chromosome Make?

This feature is cross-posted on Cosmologics, an online magazine project of the Program for Science, Religion and Culture at the Harvard Divinity School.

Back in May, the NIH announced their intention to draft new policies to address gender bias in preclinical research. The majority of model organisms and cell lines used in preclinical research are male, a bias that obscures potentially significant differences between the sexes. Sex, the NIH argues, should be treated as a fundamental variable in biomedical research. By revamping inclusion and reporting policies, the NIH hopes that sex-based differences will be identified earlier in the research process.

Image from TZU-YEN FU via Flickr Creative Commons.

This policy change is not without precedent. In 1987, the NIH changed its grant guidelines to require equal numbers of women and men in clinical research. Before this moment, clinical trial participants were almost exclusively male. This preference for male bodies was justified by the argument that females’ constantly cycling hormones would add too much noise to experimental data, making it more difficult for researchers to observe the effect of the intervention being studied.

Douglas Fields, a neuroscientist at the NIH, published his critique of the NIH’s new policy in last month’s Scientific American. In his article, Fields laments that the new policy is “about politics, not science.” Twenty years ago, when the inclusion of women in clinical research was first proposed, many scientists made the same complaint. Today, by Fields’ own admission, there is “no debate” about the importance of ensuring diversity among clinical trial participants. As the gradual acceptance of clinical inclusion policies shows, the ideals of “good” scientific practice change over time, a process that is inevitably political.

Fields’ critique is two-fold. First, he argues that implementing the new policies will result in unnecessary and wasteful expenditures. Second, he contends that including both male and female research subjects in every study will produce unacceptable levels of experimental variation.

Let’s look at the issue of variation first. According to Fields, the reason why most preclinical researchers conduct single-sex experiments is that including both sexes would increase variation while simultaneously cutting sample size in half. This would make it more difficult for researchers to detect significant differences between the experimental and control groups. Fields rejects the explanation that researchers favor male animals and cells because they exhibit less hormonal variation. He does not offer an alternative explanation, however, for why a majority of single-sex experiments use only male animals.

The way around the problem of variation, it would seem, would be to increase the overall number of test subjects, so that there is a sufficiently large number of animals or cells of each gender. The larger sample size would smooth out the variation so that researchers could still detect significant differences between experimental conditions. Alternatively, researchers could perform separate statistical analyses on male and female populations, which would be useful in identifying the differences between these populations. Let’s say, for example, that female rats react positively to a candidate drug, but male rats do not. That seems like essential information to have before a drug advances into human clinical trials.
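The dilution worry, and the stratified fix, can be sketched numerically. Below is a minimal stdlib-only Python simulation with invented effect sizes (a hypothetical drug that lowers a symptom score by 2.0 units in female rats and has no effect in males): pooling the sexes roughly halves the apparent effect, while analyzing each sex separately recovers the sex-specific results. The numbers are purely illustrative, not drawn from any actual study.

```python
import random
import statistics

random.seed(0)

N = 500  # animals per sex per arm (large, to keep sampling noise small)


def sample(mean, n, sd=1.0):
    """Draw n simulated symptom scores from a normal distribution."""
    return [random.gauss(mean, sd) for _ in range(n)]


# Invented effect sizes: the drug helps females (10.0 -> 8.0),
# but does nothing for males (10.0 -> 10.0).
control_f, treated_f = sample(10.0, N), sample(8.0, N)
control_m, treated_m = sample(10.0, N), sample(10.0, N)

# Pooled analysis: the female-only effect is diluted by the
# unresponsive males, so the apparent benefit is roughly halved.
pooled_effect = (statistics.mean(control_f + control_m)
                 - statistics.mean(treated_f + treated_m))

# Stratified analysis recovers what actually happened in each sex.
effect_f = statistics.mean(control_f) - statistics.mean(treated_f)
effect_m = statistics.mean(control_m) - statistics.mean(treated_m)

print(f"pooled effect: {pooled_effect:.2f}")
print(f"female effect: {effect_f:.2f}, male effect: {effect_m:.2f}")
```

The stratified numbers are exactly the kind of signal that would flag a sex-specific response before a drug advances to human trials; the pooled number hides it.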

Back to Fields’ primary objection: experiments with larger subject pools cost more money. I think that Fields is probably right, even if Janine Clayton and Francis Collins insist that inclusion “need not be difficult or costly.” The NIH policy of preclinical inclusion may add to the already substantial cost of biomedical research. Is it worth it? That depends on what one thinks the goal of preclinical research should be. If we want experiments that are cheap, clean, and produce unambiguous results, then perhaps we should stick to a single-sex model. But if we hope to gain insight into how a patient’s gender (among other biological factors) might play out in a clinical situation, inclusion seems necessary—even if it comes at a cost. Fields actually suggests that using both male and female animals could cut down on laboratory costs, as labs could breed costly transgenic rats instead of purchasing new ones. And this week, the NIH announced 10 million dollars in “administrative supplements” to encourage gender balance in projects already funded by the agency. It seems that if the scientific community wants to make inclusion a priority, additional resources and solutions are available to ease its integration into preclinical research.

Fields accepts that sex difference is an important object of biomedical inquiry. In lieu of mandatory inclusion policies, he suggests earmarking NIH funds for researchers for whom the study of sex difference is a major focus. This forgets that sex differences often crop up in unexpected places—from processes of cellular death to the effects of daily aspirin on heart disease. The NIH policy need not transform every study into a major investigation of sex differences. But if scientists aren’t on the look-out for sex differences in their own research, how will we know where to look? Inclusion policies will provide new leads for researchers who want to delve deeper into the mechanisms behind sex-based variation.

I do, however, have my own reservations about expanding inclusion policies to include preclinical research. My main fear is that such policies will encourage us to see sex-based difference where none exists, or where such a difference would be irrelevant for clinical practice. I am convinced that biological sex can be an important variable in the study of a variety of biomedical phenomena. But I also imagine that there are many instances in which variation between the sexes is negligible—and that’s ok. Another danger of lionizing biological difference is that we might neglect the cultural factors that influence sex differences at the clinical level. We need to maintain a flexible understanding of the relationship between biological sex markers (either gonadal or chromosomal) and the lived experiences of all genders.

I also admit that I am less convinced about the necessity of inclusion at the cellular level than among laboratory animals. Maybe this is because sex difference at the cellular level boils down to an X or a Y, which bears little resemblance to the complex manifestations of sex or gender at the organismal level. Maybe it is because I am uncomfortable with personifying cells that are bought, sold, or left to divide in perpetuity, even though I know those cells once came from a human being with a gender identity of his or her own. Historian Hannah Landecker’s work on the HeLa cell line has shown how the racial identity of the donor has been projected back onto the cells themselves, often in highly problematic ways. For example, scientists wrote of HeLa’s tendency towards “miscegenation” through the aggressive contamination of other cell lines. By focusing on the sex of the cells (or even animals) we study, will we be able to avoid slipping into gendered language? How might our projection of gender onto cells change the way we think about or study them?

Only time will tell if the NIH’s new policies will enrich our understanding of biological differences between the sexes. There is a fine line, however, between appreciating sex differences where they do exist and expecting that biological sex will shape every aspect of clinical practice. Thinking critically about the ways in which sex factors into preclinical research is a necessary first step in ensuring that both women and men benefit equally from the fruits of biomedical research.