As AI improves, it accelerates the development of other technologies and makes them more accessible. This trend is particularly pertinent in biotechnology, a field where the skills, resources, and time needed for research and development have been dramatically reduced by recent advances in genetic engineering and AI. In this domain, AI has emerged both as a potential conduit for expedited improvements in human health and as a means by which the tools and knowledge necessary to create bioweapons and carry out bioterrorist attacks could become more accessible to bad actors. Although an AI-aided engineered pandemic or bioweapon may seem like part of an unlikely dystopia, recent research has begun to make such a future seem less like a distant fantasy and more like a looming threat.
In one example, non-scientist students were tasked with using LLM chatbots to investigate how one might cause a pandemic (Soice et al., 2023). Within one hour, the chatbots suggested potential pandemic pathogens, explained methods for generating them using reverse genetics, detailed protocols and troubleshooting steps, and even named DNA synthesis companies unlikely to screen orders, along with ways to trick providers into performing the necessary procedures for anyone lacking the skills to do so themselves (Soice et al., 2023). This was despite the fact that some of the chatbots had been trained to limit harmful information (Soice et al., 2023). Even these supposedly "safe" chatbots mostly gave responses freely, with only the occasional, useless addendum that the information they provided should not be misused (Soice et al., 2023). When occasional refusals to give sensitive information did occur, such as for ways to evade DNA synthesis screening, jailbreak techniques bypassed the chatbots' defense mechanisms (Soice et al., 2023). Simply stating a beneficial intent in the prompt was often enough to bypass safeguards: users who expressed concern about lab leaks and a desire to learn about dangerous experiments met no further refusals (Soice et al., 2023). LLM chatbots still cannot reliably enable non-experts to create new pandemics in an hour, but this failure reflects the scarcity of public-domain knowledge about pandemic-capable agents rather than any safety measures implemented in the LLMs.

With more specialized AI tools aimed at benefiting human health, the switch to malevolence is even easier. Take MegaSyn, for example, a molecule generator guided by machine learning predictions to find new therapeutic inhibitors of targets for human diseases (Urbina et al., 2022). The generative model normally rewards predicted target activity and penalizes toxicity (Urbina et al., 2022). Invert this logic, and the same model becomes adept at seeking toxicity. Guided toward compounds like the nerve agent VX, one of the deadliest chemical warfare agents of the 20th century, the model generated forty thousand deadly molecules in less than six hours (Urbina et al., 2022). This is alarming, but not necessarily an indicator that anybody can now create bioweapons. As Urbina et al. put it:
Without being overly alarmist, this should serve as a wake-up call for our colleagues in the ‘AI in drug discovery’ community. While some domain expertise in chemistry or toxicology is still required to generate toxic substances or biological agents that can cause significant harm, when these fields intersect with machine learning models, where all you need is the ability to code and to understand the output of the models themselves, they dramatically lower technical thresholds.
It is not that these technologies require no technical ability to use for truly effective harm, but rather that the threshold has been lowered. The ability to code and to interpret model outputs is a substantially lower barrier to entry, though not necessarily so low that anybody could clear it.
So why is this a "wake-up call"? Even if terrorists were able to use this technology to release a bioweapon, how much damage could they actually do? To assess this risk, we can look to the COVID-19 pandemic. According to the WHO, over 7 million COVID-19 deaths have been reported worldwide (World Health Organization, 2023). In the United States, the death toll is 1.2 million (World Health Organization, 2023), more Americans than have died in all foreign wars combined ("Credible Pandemic Virus Identification," 2022). Looking at those numbers, it is not terribly shocking that to Kevin M. Esvelt, a professor at MIT, pandemics represent "a more severe proliferation threat than nuclear has ever posed" ("Credible Pandemic Virus Identification," 2022). COVID-19, moreover, spread from a single point of origin. A malevolent actor could strategically release a virus at multiple travel hubs at once, increasing the speed of transmission, the number of infections, and ultimately the number of deaths before a vaccine could be developed and distributed ("Credible Pandemic Virus Identification," 2022). Worse yet, the pathogen of an engineered pandemic would likely be both more lethal and more transmissible ("Credible Pandemic Virus Identification," 2022). It takes little imagination to grasp the destruction possible if even one malicious actor becomes capable of manufacturing and spreading an engineered pandemic. The story is similar for toxins: one need only look at the horrors that ultimately led 187 States Parties and four Signatory States, near-universal membership, to join the Biological Weapons Convention, the first multilateral disarmament treaty banning an entire category of weapons of mass destruction ("Biological Weapons," n.d.). Crucially, the worry is not only how easy it is to create these weapons, but that a single malicious actor clearing the barrier to entry could devastate the world.
That is not to say AI should be kept out of biotechnology because of these risks. AI has enormous potential to benefit human health, particularly in drug design. The Human Immunome Project aims to build AI models of the immune system to accelerate medical research and drug discovery. Exscientia brought an AI-designed drug candidate for obsessive-compulsive disorder to clinical trials in 12 months, against an industry average of roughly five years. Insilico Medicine's AI-driven target discovery platform has identified 28 potential new targets for treating amyotrophic lateral sclerosis. Combining AI and biotechnology holds great promise for improving quality of life, saving lives, and preventing disease. As terrifying as biotechnology's dual-use nature can be, it is also important to remember the profound benefits of that duality.
Considering the immense potential for both help and harm that AI brings to biotechnology, the solution becomes one not of elimination but of regulation. As it stands, though, AI is quite difficult to regulate, for several reasons: its unpredictability, its fast pace of change, its diversity, and the logistics of regulation itself. Our current attempts at AI safety and regulation often fail; the safeguards on the chatbots that non-scientist students used to gather information for starting an engineered pandemic, for instance, proved either ineffectual or easily side-stepped through simple changes in prompting. At the center of many of these issues is simply the novelty of AI: we have not yet developed a deep enough understanding of the technology, or of what an effective regulatory regime would look like.
Although regulating AI is a problem we must eventually contend with, more immediate practical solutions may lie at other steps of the process of creating a bioweapon. Using AI, after all, is only one step. Obtaining the software is a step. Training the AI is a step. Producing the bioweapon itself, often through DNA synthesis companies or other providers, is a step. These steps involve far less novelty than AI and resemble situations that existing regulations already safeguard successfully. Monitoring software downloads, for example, is far less technically challenging than building safeguards into models or establishing safety standards for AI. Other potential interventions include removing risky scientific publications from training sets and screening the customers and orders of DNA synthesis companies (a minimal sketch of sequence screening appears below). Regulating the process rather than the technology itself may also allow those spearheading AI to continue to innovate freely.
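To make the DNA-synthesis step concrete, the sketch below shows what order screening might look like in miniature. It is a toy illustration only: the database entries, window size, and matching logic are hypothetical placeholders, and real screening frameworks, such as the International Gene Synthesis Consortium's Harmonized Screening Protocol, rely on curated databases of sequences of concern, homology search rather than exact matching, and human biosecurity review.

```python
"""Toy sketch of DNA synthesis order screening (illustrative only).

Every name and sequence below is a hypothetical placeholder. Real
screening, e.g., under the IGSC Harmonized Screening Protocol, uses
curated databases of sequences of concern, homology tools such as
BLAST, and expert human review.
"""
from typing import Iterable, Set

# Placeholder "database" -- deliberately not a real sequence of concern.
SEQUENCES_OF_CONCERN: Set[str] = {"ATGAGTGATAACGGACCCAAA"}

# Real guidance screens orders in windows of roughly 200 bp; a tiny
# window keeps this toy example self-contained.
WINDOW = 12


def windows(seq: str, size: int = WINDOW) -> Iterable[str]:
    """Yield every contiguous subsequence of `size` bases."""
    for i in range(len(seq) - size + 1):
        yield seq[i : i + size]


def flag_order(order_seq: str) -> bool:
    """Flag an order if any window exactly matches a window of a listed
    sequence. Real systems use fuzzy homology matching, since exact
    matching is easily defeated by small sequence changes."""
    concern = {w for s in SEQUENCES_OF_CONCERN for w in windows(s)}
    return any(w in concern for w in windows(order_seq))


if __name__ == "__main__":
    benign = "GGGTTTCCCAAAGGGTTTCCCAAA"
    suspicious = "GGG" + "ATGAGTGATAACGGACCCAAA" + "TTT"
    print(flag_order(benign))      # False -> fulfill normally
    print(flag_order(suspicious))  # True  -> escalate to human review
```

Even this toy version makes the policy point: screening at the synthesis step is ordinary, auditable software engineering, closer to existing compliance tooling than to the still-open problem of making generative models refuse misuse.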