Picture some scientists and philosophers brainstorming end-of-the-world scenarios. This is not an exercise to create new show concepts for the History Channel: the University of Cambridge Centre for the Study of Existential Risk is supporting efforts to identify emerging technological advances with the potential to terminate civilization (1). One goal of the work is to reveal ways to prevent disasters.
Nuclear annihilation, rogue artificial intelligence and synthetic pathogen pandemics, mixed with asteroid collisions and volcanic eruptions, suggest the list of calamities that could harm humankind is long. Add in the reality that not even the most expert scientists will be able to foresee all the implications of their discoveries (2), and the future seems grim. However, even though many potential threats can be envisioned clearly, a proactive program to study them has its critics. One concern is that the events in question are so improbable. Another is that arousing public fears over “Frankensteinian fantasies” might be counterproductive (1).
Are apocalypse-ology programs wasting time worrying about wildly improbable incidents? What is improbable today may not remain so in the future. Estimates of the probabilities of adverse events must take account of the fact that technologies are evolving rapidly. Experience teaches us that activities limited today to expert teams with specialized equipment may be conducted with far less trouble and skill in the future. The distinguished scientist Martin Rees has noted that technological advances have been empowering and will endow individuals with vast destructive powers (1). This idea is alarming because outcomes subject to personal whims and eccentricities will be inherently unpredictable.
One suggestion offered to avoid the disaster of a malevolent AI system is simple: “Don’t build one” (1). A sound strategy, until someone decides to do just such a thing deliberately. Although it is not common, scientists have broken rules, or threatened to violate them, in an apparent fit of pique. One investigator seeking ways to control Dutch elm disease released genetically modified bacteria into the environment without following the necessary evaluation protocols (3). In addition, to facilitate his work he introduced the disease agent into a geographic location in which it had not yet been detected (3). Another scientist, who had conducted gain-of-function experiments with H5N1 avian influenza, threatened to submit his work for publication without the required approvals (4). This particular investigation was controversial because some scientists, concerned the information could provide a model for (unspecified) nefarious actors to create dangerous flu viruses, advised that the study not be published. Ultimately, the scientist did not act on his threat, and a scientific advisory panel consented to publication of the work in full. Will we always be as fortunate in the future if someone empowers themselves to ignore the rules and cut corners? What if technological barriers fall far enough to enable many others to act with true premeditated malice, as was the case with the U.S. anthrax attacks of 2001 (5)?
In addition to the potential for deliberate acts of defiance or malevolence, the possibility of error exists whenever biological agents are handled. Dr. Malcolm Casadaban died after becoming infected while working with a strain of plague bacteria that was thought to be harmless (6). Plague is typically transmitted by flea bite, and the exact circumstances of Dr. Casadaban’s infection remain mysterious, although he had a medical condition that may have made him more susceptible once the bacteria entered his body. Errors, procedural violations and escapes have been documented regularly in laboratories conducting work with high-risk biological agents (7, 8). At least one escape may have had global consequences: genomic analyses indicate a high likelihood that the 1977 Russian influenza pandemic was sparked by the release of the virus from a laboratory (8). However, the exact circumstances that enabled Russian flu viruses to escape control and spread across the world have never been explained.
Is apocalypse-ology a productive use of scientific brainpower, or is it idle ivory-tower fear mongering? Risk analysis is an integral part of the decision-making process for approving research proposals involving recognized high-risk agents or situations. Assessing the benefits of gain-of-function research on potentially dangerous viruses must be balanced by examining hypothetical nightmare scenarios. What happens if this virus gets out of the lab? Could it be stopped? Consider the case of a new, lab-manufactured influenza virus. If this new virus were Tamiflu sensitive, could an epidemic following an accident or release be suppressed by supplying the drug? Now run through the realities: how fast could you recognize an escape, how far would it spread geographically before you could distribute Tamiflu, how much Tamiflu would you need, and how would it be delivered where it was needed to stay ahead of the epidemic? This is not an idle thought experiment but an important means of identifying which actions are feasible and which are futile. Forearmed with this information, reason-based decisions may be reached and effective safeguards put in place before a disaster occurs. It is true that predictive capacities are sometimes limited. However, for those situations in which we can foresee certain possibilities, it would be irresponsible not to explore them.
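To make the Tamiflu questions concrete, here is a minimal back-of-envelope sketch in Python. Every parameter in it (doubling time, detection and distribution delays, contacts treated per case) is an illustrative assumption chosen for this example, not a figure from the article or the literature; the point is only to show how quickly delay compounds the problem.

```python
# Back-of-envelope sketch of the "could Tamiflu stay ahead of it?" question.
# All parameter values below are illustrative assumptions for a flu-like virus.

import math

DOUBLING_TIME_DAYS = 3.0       # assumed epidemic doubling time
DETECTION_DELAY_DAYS = 14.0    # assumed days before the escape is recognized
DISTRIBUTION_DELAY_DAYS = 7.0  # assumed days to move antivirals where needed
CONTACTS_PER_CASE = 10         # assumed contacts given prophylaxis per case

def cases_at(day: float, initial_cases: int = 1) -> int:
    """Simple exponential growth: cases double every DOUBLING_TIME_DAYS."""
    return math.ceil(initial_cases * 2 ** (day / DOUBLING_TIME_DAYS))

def courses_needed(day: float) -> int:
    """One treatment course per case plus prophylaxis for each case's contacts."""
    n = cases_at(day)
    return n + n * CONTACTS_PER_CASE

lag = DETECTION_DELAY_DAYS + DISTRIBUTION_DELAY_DAYS
print(f"Cases when the drug arrives (day {lag:.0f}): {cases_at(lag)}")
print(f"Antiviral courses needed at that point: {courses_needed(lag)}")
# Under these assumptions, each additional week of delay multiplies both
# numbers by roughly 2 ** (7 / 3), i.e. about fivefold.
```

Even this crude model makes the lesson plain: the feasibility of "supply the drug" depends less on the stockpile than on how fast an escape is recognized, because demand grows exponentially while logistics scale linearly.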
Will envisioning “Frankensteinian fantasies” take us anywhere? The key point is that it might help us avoid ending up somewhere we very much wish we had never gone in the first place.
(1) Kai Kupferschmidt. Taming the Monsters of Tomorrow. Science, 11 January 2018. http://www.sciencemag.org/news/2018/01/could-science-destroy-world-these-scholars-want-save-us-modern-day-frankenstein
(2) Heidi Ledford. CRISPR, the Disruptor. Nature, 3 June 2015. https://www.nature.com/news/crispr-the-disruptor-1.17673
(3) Mark Crawford. Researcher Flouts Gene-splicing Rules. Science 237:838-839, 1987. http://science.sciencemag.org/content/237/4817/838.2/tab-pdf
(4) Declan Butler. Mutant-flu Researcher Plans to Publish Even Without Permission. Nature, 17 April 2012. http://www.nature.com/news/mutant-flu-researcher-plans-to-publish-even-without-permission-1.10469
(5) Greg Gordon and Mike Wiser. New Report Casts Doubt on FBI Anthrax Investigation. PBS.org, 19 December 2014. https://www.pbs.org/wgbh/frontline/article/new-report-casts-doubt-on-fbi-anthrax-investigation/
(6) Emma Graves Fitzsimmons. Researcher Had Bacteria for Plague at his Death. The New York Times, 21 September 2009. http://www.nytimes.com/2009/09/22/us/22chicago.html
(7) Jocelyn Kaiser. Accidents Spur Closer Look at Biodefense Labs. Science 317:1852-1854, 28 September 2007. http://science.sciencemag.org/content/317/5846/1852.full
(8) Martin Furmanski. Threatened Pandemics and Laboratory Escapes: Self-Fulfilling Prophecies. Bulletin of the Atomic Scientists, 31 March 2014. https://thebulletin.org/threatened-pandemics-and-laboratory-escapes-self-fulfilling-prophecies7016