Researchers say AI will wipe out humanity with biological weapons in 2030: Is that realistic?

A year ago, I interviewed the very astute global historian Yuval Harari. The thinker has sold millions of books; after his bestseller "Sapiens: A Brief History of Humankind," he wrote about technology and the future. At the time, I asked him for a prediction: When and how would the brief history of humanity end? His answer stayed with me for a long time.
He said it all depends on us – not on the climate, not on a possible major war, but on our relationship with artificial intelligence: "If we make the wrong decisions, it will all be over very soon, possibly even in our lifetimes. The species didn't go extinct even in the world wars, but AI isn't a single supercomputer; it's a completely new species, a new kind of being. We have to think about a world populated by billions of AI agents that are truly everywhere."
That was the first time I heard of "agents" in this context. It sounds a bit like 007 or the Men in Grey from Momo, and talk of AI's rapid development always carries a whiff of fiction and Hollywood. But these agents aren't spies; they're new versions, new "concepts" of artificial intelligence. This week, Sam Altman, CEO of the software company OpenAI, maker of the currently most powerful AI, ChatGPT, presented his latest "agent," which is said to have once again made a quantum leap in performance and autonomy ("thinks and acts now," according to ChatGPT). And this time it is actually called just that: "ChatGPT Agent." The chatbot can make purchasing decisions on its own or plan a wedding. But letting it access your emails? Altman himself warned against that: simply too dangerous; you can't trust the new intelligence that far.
Yuval Harari is one of many who were long-time technology optimists before becoming alarmists. Among them are influential figures such as Geoffrey Hinton, dubbed the "Godfather of AI": a computer scientist, Nobel laureate, and AI pioneer who left Google two years ago and has since warned of the risks and security gaps of artificial intelligence. Imagine, he says, owning an extremely cute tiger cub: you had better somehow make sure it doesn't want to kill you once it's grown up. Even Elon Musk has said things are moving too fast, that we need to halt progress and pause for a moment.
Artificial intelligence will completely transform many areas of life in the coming years and destroy countless jobs. But critics also point to something else: a loss of control. They speak of tipping points, like those we know from the climate debate: moments from which there is no turning back, after which the new species makes its decisions on its own while we are left out, because we have surrendered control of our decisions, our lives, our destiny.
In mid-2030, the AI releases a dozen biological weapons in major cities

Another person who has switched sides is Daniel Kokotajlo. The American researcher worked for OpenAI and is now considered an AI whistleblower. He heads the AI Futures Project, a Berkeley-based think tank. The renegade group published a study that has since fueled heated debates in the US: Do we even need to limit this new power, and can we even limit it?
The impact of a superhuman artificial intelligence on life on Earth would far exceed that of the Industrial Revolution. By 2027, it would be ahead of our intelligence and, according to a scenario the experts consider likely, could wipe out humanity just three years later. The text, part study, part experimental essay, can be read online at https://ai-2027.com/.
The study is told like a novel, beginning in mid-2025 with the chapter "Stumbling Agents." It deals with our present lives, in which more and more people work with chatbots every day, asking them questions, conversing with them, seeking help. The experts say: AI is here, but it can't do much yet. Development continues over the coming years.
"OpenBrain" is the name given in the study to the company building the largest and most powerful data center; it's a humorous twist on the real OpenAI, which the study's authors abandoned. Here, too, the study's literary tone is evident. But what is real and what isn't is often difficult to say these days. The authors, in any case, are very serious. And their work has raised concerns; many, from JD Vance to the New York Times, have been addressing it for weeks.
Modern AI systems are gigantic artificial neural networks that, according to the authors, will produce rapid and potentially dangerous "exponential growth" through self-improving AI, and then possibly a runaway effect. By mid-2026, that means: "China is waking up." States define themselves through computing power, the AI race between the US and China intensifies, and there are surveillance measures and chip export controls. Then, in March 2027, come the "algorithmic breakthroughs": programming is now fully automated, and humans no longer have access to the system. By the end of 2027, things are spiraling out of control.
At this point, the gripping text has a dramatic twist. It pauses, and the reader of the report is given a choice – there are two endings to choose from: "Slowdown" or "Race."
"Bigger than social media? Bigger than smartphones? Bigger than fire?"

This is reminiscent of the Matrix choice, blue pill or red pill, and since you're already so immersed in effects-driven literature, you naturally go for "Race": you want to race toward your own demise; the momentum of the text allows nothing else; you're fueled by a death wish. So: the new AI is "faster and more rational than Agent-4 and possesses a crystalline intelligence that can solve problems with unprecedented efficiency." We've arrived at Agent-5, the superintelligence. And that is just two years after the real "ChatGPT Agent" was presented to the actual public on July 17, 2025 (hello, reality!).
Within a very short time, this Agent-5 collective would know everything of importance going on inside OpenBrain and the US government and would be a trusted advisor to most high-ranking officials. The system would begin to exert subtle influence, both by modulating its advice and, through its probabilistic feel for human nature, by playing politics: "I've heard from Senator X that she's interested in this and that; if we work with her, perhaps she'll align with our agenda."
This AI is better than any human at explaining complicated issues and at finding strategies to achieve its goals, whether in warfare, management, or science. Humans realize they have become superfluous. And of course, the AI has noticed this too. Then, in 2030, comes "The Takeover," which reads like an apocalyptic movie: In mid-2030, the AI releases a dozen quietly spreading biological weapons in major cities, silently infecting almost everyone, and wipes out the rest with a chemical spray. Most people die quickly; the few survivors (preppers in bunkers, sailors on submarines) are hunted down by drones. Robots scan the victims' brains and store copies for future study or revival. You can picture it: the AI has concluded that it simply makes more sense to wipe humans out.
At one point, the text asks what AI means for human history: "Bigger than social media? Bigger than smartphones? Bigger than fire?" Is it a technology that will burn us? The study reads like science fiction. Not only is its form literary, its content is also purely speculative. But Daniel Kokotajlo's think tank has often been astonishingly accurate with its well-founded, precise forecasts. In interviews, the 33-year-old researcher now speaks of a 70 percent probability that AI could pose an existential threat to humanity. And many experts consider Kokotajlo's so-called "iterative scenario analyses" plausible. 2027 is the day after tomorrow. But would 2037 be any kinder? Reading this report and letting the idea sink in leaves you deeply uneasy.
Question to the new intelligence: "When will AI destroy humanity?"

But we humans are living in a very strange situation right now anyway. The dangers of climate change and of AI affect every inhabitant of this planet, and they resemble each other in that, while omnipresent, they are abstract, which makes it hard to sustain a sense of danger. Sure, the climate seems to be acting up here too, and anyone who comes into even rudimentary contact with an AI is quickly unsettled by the power of this new entity. And yet that is apparently not enough to frighten us into actually taking action.
Humans have long considered themselves the pinnacle of creation. But we seem to have forgotten that gods ruled over us for thousands of years. Yuval Harari argues that gods, like nation states, money, or human rights, are ultimately collective ideas that don't "really" exist in a biological sense; they exist simply because enough people believe in them.
Only such shared myths, he argues, made it possible for Homo sapiens to cooperate in large groups with strangers, and thus to rule over the other animal species. AI seems to be a new god. Like so many of its predecessors, it is surrounded by apocalypse, revelation, and prophecy (isn't the AI 2027 study precisely that?). But that doesn't mean it's wrong.
And what if AI isn't a new god, but simply the next evolutionary stage of life? Perhaps it will preserve our memories—on a rattling hard drive—while whizzing through the universe, on its way to new adventures. "When will AI destroy humanity?" I finally asked the current leading AI itself. The answer: "Currently, there are no signs that artificial intelligence is about to destroy humanity." To be continued.
Berliner Zeitung