Opinion | We Could Easily Make Risky Virological Research Safer

The 2007 outbreak of foot-and-mouth disease in Britain was traced to a faulty drainage pipe at a research facility. In 2015 the Department of Defense discovered that a germ-warfare program in Utah had mistakenly mailed almost 200 samples of live anthrax over 12 years. In 2018 a burst pipe released as many as 3,000 gallons of wastewater from labs working with Ebola and anthrax at Fort Detrick in Maryland onto a grassy area a few feet from an open storm drain.

Lab accidents happen, and they aren’t especially rare. A 2014 USA Today investigation by Alison Young, whose book “Pandora’s Gamble: Lab Leaks, Pandemics, and a World at Risk” is a shocking accounting of the problem, identified more than a thousand accidents reported to federal regulators from 2008 to 2012. Some were not especially dangerous. But if you’ve read accounts of them at any point during the Covid-19 pandemic, as debate over its origins continued, chances are they’ve shaken you a bit. Many of the touchstone examples have been tied to quotidian causes: sloppy procedures and lax oversight. But lately debate has focused on the dangerousness of the experiments themselves, in part because knowing what is risky suggests what extra precautions might be taken and in part because it raises a more bracing fundamental question: What kind of work is worth this risk?

In January the National Science Advisory Board for Biosecurity issued a series of draft recommendations for tightening regulation and oversight. The proposed framework would expand the list of pathogens that would require rigorous review and close some loopholes that allowed some researchers to avoid that oversight. But for the moment, the recommendations sit in a kind of regulatory limbo, awaiting a green light from the White House and implementation at the National Institutes of Health.

For those who believe it is likely that a lab leak was responsible for this pandemic and the deaths of probably 20 million people, the need for greater scrutiny and regulation appears intuitive and urgent. And even many of those who see the Covid pandemic merely as an example of the sort of pathogenic disaster a lab accident might cause agree that greater safety is needed.

But there are opponents of new oversight who also think the stakes are high. In late January, as the new Republican House leaders announced plans to scrutinize the pandemic’s origins and biosafety, the American Society for Microbiology’s Journal of Virology published a commentary, “Virology Under the Microscope: A Call for Rational Discourse,” signed by over 150 scientists. I’ve heard it mocked as “Virologists Against Regulation.”

“Should such hearings lead to Congress legislating restrictions on scientific research, the outcome could impede our ability to predict, prepare and respond to emerging viral threats,” the scientists wrote. “An equally devastating outcome would be to sow even more public distrust in science, which would limit our ability to confront viruses in general and increase the human burden from viral diseases.”

In an April 27 Energy and Commerce Committee hearing on the biosafety of risky research, it was generally Republicans who pressed harder on matters of research protocols and the blind spots of oversight. The Department of Health and Human Services and the N.I.H. “have persisted in foot-dragging, stonewalling or flat-out refusing to engage in legitimate questions,” the chairwoman, Cathy McMorris Rodgers of Washington, charged in her opening statement.

Democrats tended to echo the concerns raised in the virologists’ letter, warning that dwelling too long on matters of biosafety would jeopardize scientific enterprise. “I remain concerned that basic science has become so politicized that we can’t have a reasoned conversation on how to protect the public from disease without delving into unsupported conspiracies or unfounded allegations about what scientists are doing in America’s labs,” said Representative Paul D. Tonko of New York, flagging the “politicization of science and maligning of scientists” as especially significant worries.

But perhaps progress need not require a cage match between science and safety. Last month, in a keynote address at a conference sponsored by the Bulletin of the Atomic Scientists in Geneva, the Harvard epidemiologist Marc Lipsitch sketched out a levelheaded, unobtrusive approach capable of delivering a much safer research landscape while preserving most of the scientific gains of the old paradigm.

Lipsitch was among the loudest voices raising concerns about so-called gain of function research (defined in different ways by different groups, it generally refers to work manipulating pathogens to make them either more transmissible or more virulent) before the federal government established a moratorium in 2014. That year he and Tom Inglesby published estimates that “a laboratory-year of experimentation on virulent, transmissible influenza virus” introduced a risk equivalent to between 2,000 and 1.6 million deaths over that period.
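To make the logic of such figures concrete, here is a minimal sketch of the expected-value arithmetic that estimates of this kind rest on. The probabilities and fatality count below are purely hypothetical placeholders chosen to illustrate the calculation; they are not the values Lipsitch and Inglesby used.

```python
# Minimal sketch of a per-laboratory-year fatality-risk estimate as an expected value.
# All inputs are hypothetical placeholders for illustration only, not figures from
# the Lipsitch-Inglesby analysis.

p_infection_per_lab_year = 0.002    # hypothetical: chance of a lab-acquired infection per year of work
p_pandemic_given_infection = 0.1    # hypothetical: chance that infection seeds a global outbreak
deaths_if_pandemic = 10_000_000     # hypothetical: fatalities in the resulting pandemic

expected_deaths_per_lab_year = (
    p_infection_per_lab_year * p_pandemic_given_infection * deaths_if_pandemic
)
print(f"Expected fatalities per laboratory-year: {expected_deaths_per_lab_year:,.0f}")
# With these placeholders the expected toll is 2,000 deaths per lab-year; plugging in
# wider ranges for each input is what produces spans as broad as 2,000 to 1.6 million.
```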

Lipsitch had been relatively quiet during the pandemic as heated lab-leak debate crowded out more measured conversations about biosafety and research oversight. And in Geneva his perspective was less alarmist than measured and technocratic. He said that a vast majority of experiments described as gain of function are fairly safe, provide obvious benefits and need no additional review. Regulatory focus, he said, should be on a much narrower category that he called “gain of function research of concern.” The way he sees it, “over 90 percent of work in virology is not even in this category.” And while many virologists have recently argued that further oversight might hamper vaccine development, it would be very easy to avoid that problem, he said, for one simple reason: “The concerning experiments are not the ones that make vaccines.”

As for what experiments are concerning, the science advisory board’s recommendations, Lipsitch went on, offered a pretty good standard. The 2017 rules required special scrutiny only for pathogens judged likely to be both highly virulent and highly transmissible; the proposed framework would broaden that category to include pathogens considered likely to be moderately virulent or moderately transmissible and likely to pose a severe threat to public health or national security. Research involving manipulation of the novel coronavirus, SARS-CoV-2, for instance, would not require additional review under the old framework but would under the new definitions. He said he’d prefer an even more expansive definition, one that included pathogens exhibiting low virulence and transmissibility, but on paper, the science advisory board recommendations offered what he called a pretty good definitional framework.

Implementation matters, too, and may be cause for concern. Since the 2017 standards were instituted, Lipsitch said, only three projects have come under the expanded review. Over the same period, he’s learned of at least five experiments he believed should have been reviewed but weren’t — meaning that just the experiments he happened to know about that slipped through without review outnumber the experiments processed by the relevant ethical authorities.

That is very few experiments over half a decade. Almost surely, there are more. But the small number suggests that regulation need not be all that difficult, bureaucratically cumbersome or even expensive, and that a lot of additional safety could be achieved with relatively little additional review.

What would be the substance of that review? To begin with, Lipsitch said, it should require that research carrying pandemic risk be expected to yield a public-health benefit rather than simply a scientific breakthrough, “so that we’re weighing lives against lives.” Too often, he said, risky experiments skate by on the presumption that any new scientific insight will prove valuable. “The idea has always been that to know what to worry about in nature, we can just study it in the lab or cause it to happen in the lab,” he said, and that’s not necessarily the case. “It’s kind of bad manners in a scientific discussion to say that other people’s claims about the importance of their science are overblown,” he said. “But I’m going to do it.”

This is not to say that no risky research offers benefits but that we should more rigorously scrutinize claims about benefits and whether funds could be better spent. “And that actually is a big point,” Lipsitch said, “because the biosecurity and biosafety is expensive. So the same amount of money could buy you a lot of other kinds of research. And what are the marginal benefits that we get from doing the risky experiment instead of the next best safe experiment?”

The science advisory board recommendations have shortcomings. They don’t fully address what happens when a project’s direction changes after it’s been funded and approved. And they would not directly shape the ethical standards set by institutions or scientific journals, which play a meaningful role in shaping the status incentives of researchers. The recommendations would address a shortcoming of existing protocols by extending their reach to include privately funded research, though that shortcoming may be, at present, less significant than it seems. According to Lipsitch, there appears to be no meaningful profit opportunity in this kind of basic virological research now. Of course, the recommendations don’t apply to other nations or private actors abroad, but, he said, “if the U.S. would stop telling everybody how great risky research is, it would lose a lot of its currency and status around the world.”

These suggestions would not eliminate the risk of lab accidents, but they would reduce the risk — and fairly simply. “Setting scientific norms has to be a positive thing,” Lipsitch said, “even if it’s not a perfectly effective thing.”
