Sam Altman has repeatedly warned that advanced AI poses a significant risk of enabling the development of bioweapons, even by individuals without scientific expertise
- Jerry Guinati
He has stated that AI could help non-experts design, synthesize, and deploy dangerous pathogens, citing concerns that future models may reach a "high-risk tier" of misuse potential. Altman emphasized that the world is not taking these risks seriously enough and that "the warning lights are flashing" on biological and cybersecurity threats.
In a 2025 speech at the Federal Reserve, he warned that foreign adversaries could use powerful AI to design bioweapons or disrupt critical infrastructure, noting that such scenarios are now "very possible" with superhuman intelligence. He also highlighted that AI's role in scientific discovery, such as accelerating biotech research, could be weaponized, and that OpenAI itself is preparing for this risk by hiring a Head of Preparedness to anticipate and mitigate such dangers.
Altman has called for a resilience-based approach to AI safety, akin to how society now manages fire risk, built on stronger safeguards, international cooperation, and proactive risk assessment. He stressed that while AI has transformative potential, "if something goes visibly really wrong for AI this year, I think bio would be a reasonable bet for what that could be."