The Looming Threat of Designer Pathogens: How AI is Reshaping Biosecurity
A chilling scenario played out at the Munich Security Conference in February 2024: a simulation involving a rapidly spreading, AI-designed virus causing severe brain inflammation and a 20% mortality rate. While fictional, this exercise, involving security, health, and technology leaders, highlighted a growing and very real threat: the potential for engineered pandemics. The ease with which malicious actors could leverage artificial intelligence to create dangerous pathogens is no longer science fiction.
The Dark Side of Digital Biology
The simulation wasn’t simply about a novel virus; it was about its origin. The source was identified as an anarchist terrorist group that had recruited disgruntled scientists and stolen equipment from high-security laboratories. Crucially, the group didn’t rely on years of traditional research: it used artificial intelligence to design the deadly pathogen. The scenario was developed by FP Analytics, the research arm of Foreign Policy magazine, together with the vaccine alliance CEPI.
How Significant is the Risk?
Reports indicate that defense against biological attacks has been consistently underfunded. Andrew Hebbeler, Biosecurity Director at CEPI, emphasizes that while the world is better prepared for outbreaks than before the COVID-19 pandemic, dangerous gaps remain. The EU Commission’s crisis agency, HERA, shares this concern, acknowledging that while creating artificial pathogens is currently resource-intensive, rapid advancements in AI technology are making the scenario increasingly plausible. Researchers have already demonstrated the ability to use AI to develop novel viruses, suggesting similar approaches could be applied to human-targeting pathogens.
A Virus for $100,000
The accessibility of the tools needed to create dangerous pathogens is alarming. In 2016, a team at the University of Alberta successfully recreated horsepox in a lab within six months, using DNA fragments ordered by mail and a budget of just $100,000. According to the World Health Organization (WHO), this feat didn’t require exceptional expertise, raising concerns that the same techniques could be used to resurrect even more deadly, eradicated diseases like smallpox.
A September 2025 study from Stanford University further underscored this risk. Researchers successfully used specialized AI models to design complete, functional genomes of bacteriophages – viruses that infect bacteria – entirely on a computer. Unlike traditional synthetic biology, which modifies existing organisms, the AI generated entirely new genetic codes not found in nature. Sixteen of these AI-designed phages proved viable and highly efficient in laboratory tests, even outperforming their natural counterparts in replication and overcoming bacterial defenses.
“Language Models of Life”
While the Stanford researchers took safety precautions, such as avoiding training the models on data from viruses that infect humans, the study demonstrated a dramatic reduction in the barrier to designing complex biological systems. What once required decades of evolution or specialized expertise can now be computed in a short time using “language models of life.”
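The phrase "language models of life" describes models that treat genetic sequences the way text models treat sentences: they learn which tokens tend to follow which, then generate new sequences one token at a time. The toy sketch below, a simple bigram model over nucleotide letters, illustrates only the next-token principle; the research systems described above are deep networks trained on vast sequence databases, and the training strings here are made-up placeholders.

```python
import random
from collections import defaultdict, Counter

# Toy illustration of a "language model of life": a bigram model that
# learns which nucleotide tends to follow which, then samples new
# sequences. Training data below is invented for demonstration only.

def train_bigram(sequences):
    """Count, for each nucleotide, how often each other nucleotide follows it."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return counts

def generate(counts, start="A", length=20, seed=0):
    """Sample a new sequence one nucleotide at a time from the learned counts."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = counts[out[-1]]
        letters, weights = zip(*followers.items())
        out.append(rng.choices(letters, weights=weights)[0])
    return "".join(out)

training = ["ATGCGTACGATCG", "ATGCCGTAAGCTA", "ATGGGTACCGTAA"]
model = train_bigram(training)
print(generate(model, start="A", length=20))
```

The generated string mimics local statistics of the training data without copying any of it, which is the same basic idea, scaled up enormously, behind generating whole genomes "not found in nature."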
Early Warning Systems for Dangerous Gene Sequences
One crucial defense is screening nucleic acid synthesis – verifying orders placed with DNA synthesis companies. Manufacturing artificial pathogens requires pre-fabricated DNA fragments, making these companies a key control point for identifying potentially dangerous genetic sequences. HERA and CEPI agree that a broad risk management approach is necessary, avoiding blanket restrictions that could hinder innovation in vaccines, therapeutics, and diagnostics.
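In its simplest form, such screening checks whether an incoming synthesis order shares subsequences with a database of sequences of concern. The sketch below shows the idea using exact k-mer overlap; real screening pipelines use alignment tools and curated, regulated databases, and every sequence in this example is an invented placeholder, not a real pathogen gene.

```python
# Minimal sketch of sequence-of-concern screening, the kind of check a
# DNA synthesis provider can run on incoming orders. Watchlist entries
# here are made-up placeholders, not real pathogen sequences.

def kmers(seq: str, k: int):
    """Yield every overlapping k-mer in a DNA sequence."""
    seq = seq.upper()
    for i in range(len(seq) - k + 1):
        yield seq[i:i + k]

def screen_order(order_seq: str, watchlist: dict, k: int = 12) -> list:
    """Return the names of watchlist entries sharing any length-k
    subsequence with the ordered sequence. Exact k-mer overlap is the
    simplest possible stand-in for real alignment-based screening."""
    order_kmers = set(kmers(order_seq, k))
    hits = []
    for name, concern_seq in watchlist.items():
        if any(km in order_kmers for km in kmers(concern_seq, k)):
            hits.append(name)
    return hits

# Placeholder data for demonstration only.
WATCHLIST = {"example-fragment": "ATGGCGTACGTTAGCCGTAA"}

order = "CCCCATGGCGTACGTTAGCCGTAACCCC"  # contains the placeholder fragment
print(screen_order(order, WATCHLIST))    # → ['example-fragment']
```

The control point works because, as the text notes, assembling an artificial pathogen still depends on ordering pre-fabricated DNA fragments; flagging suspicious orders at the supplier catches the attempt before any material exists.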
More Dangerous Than Nature?
Are AI-developed pathogens more dangerous than those that arise naturally? The question remains open. Regardless of the answer, CEPI urges stronger pandemic preparedness, including its “100-Day Mission” to compress vaccine development into the first 100 days after a new pathogen is identified. The mission relies on an agent-based AI system that analyzes vast datasets.
NATO’s Response: The DIANA Initiative
The threat has also drawn the attention of NATO, which adopted a strategy for the responsible use of biotechnology in February 2024. The alliance is implementing that strategy through its innovation accelerator, DIANA, which scouts technologies with both civilian and defense applications, so-called “dual-use” technologies. Examples include:
- Realnose.ai: A US company using AI to analyze smells, capable of detecting dangerous chemical or biological substances in the air.
- Biocellis: A French firm developing a portable reader that uses bioluminescence to rapidly detect toxic substances or biological warfare agents.
- Aboa Space Research Oy: A Finnish company combining a handheld microscope with AI for real-time identification of antibiotic-resistant bacteria, toxins, or irradiated cells.
These technologies were central to the discussion at the first NATO Biotech Conference in Brussels in late October 2025, highlighting the potential of AI-powered sensors as a defensive line.
Frequently Asked Questions
Q: How likely is a deliberately engineered pandemic?
A: Creating artificial pathogens is currently resource-intensive, but rapid advances in AI are making a deliberately engineered pandemic increasingly plausible.
Q: What is being done to prevent this?
A: Efforts include screening DNA synthesis orders, developing AI-powered detection technologies, and strengthening international collaboration on pandemic preparedness.
Q: Could AI actually create a more dangerous virus than nature?
A: It’s currently unclear, but the ability of AI to design novel genetic codes raises concerns about the potential for pathogens with enhanced virulence or resistance.
Q: What role does NATO play in biosecurity?
A: NATO is investing in dual-use technologies through its DIANA initiative and has adopted a strategy for the responsible use of biotechnology.
Pro Tip: Stay informed about advancements in biotechnology and AI. Understanding the risks is the first step towards mitigating them.
Did you know? The University of Alberta recreated horsepox with a budget of just $100,000, demonstrating the accessibility of dangerous biological engineering.
