The speed at which AI is improving is phenomenal, but with that speed come serious concerns.
Photo by Igor Omilaev on Unsplash
Imagine a laboratory where no one comes in to work. There are no scientists putting on gloves, no technicians handling test tubes, and no coffee cups left behind. The lights could stay off, yet experiments still take place — sometimes thousands at a time — because the work is done by an artificial intelligence system with robotic equipment. This setup never needs a lunch break, takes no sick days, and never goes on vacation. It works constantly, 24 hours a day, seven days a week.
This is no longer science fiction. In early 2026, reports indicated that an AI model had independently designed and completed 36,000 biological experiments in a remote-controlled robotic lab, reducing the cost of producing a specific protein by 40 percent. The AI decided what to test, robots did the work, and the results went straight back to the AI to guide the next steps. A person set the goal; once it was specified, there was no further human input.
This type of setup, often called a self-driving laboratory, is quickly becoming the norm instead of a novelty, and it is changing science faster than the rules can adapt. Given the cost savings and the ability to run thousands of experiments at once, it is easy to see why any corporation or institution would be seduced by this technology. In an age when we are desperately seeking medications and treatments for illnesses, this kind of experimental work becomes even more desirable.
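To make that closed loop concrete, here is a deliberately simplified sketch, assuming a toy setup: `propose_protocols` stands in for the AI planner, `run_robotic_experiment` stands in for the remote robotic lab, and the "experiment" is just a noisy function with a hidden optimum. The names and numbers are illustrative only, not a description of the actual system in the report.

```python
import random

def run_robotic_experiment(protocol):
    """Stand-in for the robotic lab: returns a noisy 'protein yield'."""
    ideal_temp, ideal_ph = 37.0, 7.2  # hidden optimum in this toy example
    penalty = abs(protocol["temp"] - ideal_temp) + 5 * abs(protocol["ph"] - ideal_ph)
    return max(0.0, 100.0 - penalty + random.gauss(0, 1))

def propose_protocols(best, n=8):
    """Toy 'AI planner': sample new conditions near the best result so far."""
    return [
        {"temp": best["temp"] + random.gauss(0, 2),
         "ph": best["ph"] + random.gauss(0, 0.3)}
        for _ in range(n)
    ]

# A human sets the goal once (maximize yield); the loop then runs unattended.
best_protocol, best_yield = {"temp": 30.0, "ph": 6.0}, 0.0
for cycle in range(20):
    for protocol in propose_protocols(best_protocol):
        result = run_robotic_experiment(protocol)  # results feed straight back in
        if result > best_yield:
            best_protocol, best_yield = protocol, result

print(f"Best yield {best_yield:.1f} at {best_protocol}")
```

The point is the shape of the loop, not the details: after the goal is set, each cycle's results flow directly into the next round of proposals with no human in between.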
What These Systems Can Actually Do
The short answer is: a lot. Tasks that once took research teams years of careful trial and error can now be done in weeks or even days. Instead of testing one idea at a time, these AI-powered systems try thousands of variations at once. It’s like an engineer testing every version of a prototype at the same time, instead of building one and hoping it works.
Medicine is one of the most promising fields for this technology. AI is already creating new proteins, the tiny molecular machines behind almost every process in our bodies. Didn't the 2024 Nobel Prize in Chemistry go, in part, to AI researchers for cracking protein structure prediction, one of the most consequential problems in biomedical research?
Researchers use these tools to develop new drugs and make vaccine design faster. The quick vaccine response during COVID-19 could become normal, instead of a rare, heroic effort. Some drugs designed by AI are already being tested in people, and they could cut the usual development time by half or more.
Outside of medicine, self-driving labs are making discoveries in chemistry and materials science that people might never have found. These systems don’t get bored, miss patterns, or give up after many failed tries. In theory, this could speed up scientific discovery in ways that affect almost every part of daily life, from cheaper medicines to new materials for batteries or construction. They could discover new molecules and interactions that we would never have conceived of previously.
There is also an important fairness issue. Today, most advanced research happens at wealthy universities and big companies. Cloud-based robotic labs, which you can access online, could enable smaller organizations to run complex experiments without expensive lab equipment. This kind of access could give more people and places a chance to make scientific discoveries. It is a genuine breakthrough: what matters becomes human brainpower rather than billion-dollar financial backing.
The Part That Should Make Us Pause
There is a problem with such a powerful tool: it does not know who is using it or what they plan to do. Researchers and security experts call this the dual-use problem. Technologies made to help can also be used to cause harm.
With AI-powered lab systems, this concern is real. Studies suggest that these AI tools can help make a virus more transmissible, even without any specialized training. It is unsettling to know that this kind of software is becoming more common each year.
The same system that could help create a life-saving drug could also be used to design something dangerous. Think about the current advances in killer drones that can sense, seek out, and kill people without any human intervention. If you think this is something from “The Terminator,” think again.
Researchers have also found that people with little or no background in biology (not trained scientists, just curious individuals) can use large AI language models to obtain detailed, step-by-step instructions for working with dangerous pathogens, even though filters are supposed to block such information. It is "The Anarchist Cookbook," but on a far larger scale. In one study, about 90 percent of these untrained users said they had little trouble getting the AI to provide risky biological instructions.
This doesn’t mean the technology itself is bad. A kitchen knife isn’t evil, but we don’t hand one to just anyone without context. The real issue is how quickly and widely AI can work. An AI system can create thousands of experimental designs in one night. The distance between a risky idea and making it real is shrinking, and the usual gatekeepers — scientists, review boards, and biosafety officers — weren’t involved when these tools were created.
Even from a scientific point of view, there are challenges to consider. Autonomous systems can speed up experiments, but they don’t replace the deep, intuitive insights that come from years of experience. AI can suggest which experiments to try next based on the data, but it can’t always tell if you are asking the right question.
Anyone who has used these tools knows how much depends on the specific, detailed prompt that sets the computational machinery in motion. And when a drug fails in a clinical trial — as most still do, whether designed by AI or not — the loss of time, money, and patient hope is just as real.
The Governance Gap — and Why It Matters
The frameworks meant to keep biological research safe weren’t written with any of this in mind. International treaties banning biological weapons date back to 1975. U.S. regulations governing biological research don’t account for AI-driven automation. Rules governing AI don’t specifically address its use in biology. The two sets of rules exist in parallel, each with blind spots that the other doesn’t cover.
Some AI companies have adopted their own voluntary safety measures. They might set stricter internal rules before releasing their most powerful models or update guidelines for how much biological risk a model can pose before adding more safeguards. But the key word is voluntary. There is no outside body verifying that these promises are kept, and no universal standard that all companies follow.
There are proposals for new rules, such as matching access to an AI tool with the level of risk it poses, instead of blocking everything or nothing. Some suggest better screening of synthetic DNA, or regulating not just the AI models but also the biological data they use for training. These are sensible suggestions, but so far, they are only ideas — not actual policies. Another sign of how fast this field is moving: Anthropic has reportedly held back capabilities it judged too dangerous to release to the public.
The truth is that science has moved ahead of the rules. This has happened before with nuclear technology, the early internet, and genetic engineering. Sometimes we caught up in time, and sometimes we didn’t. The difference now is how quickly these AI systems can work and how many areas they affect. Biology isn’t a small field — it is the foundation of all life on Earth.
This doesn’t mean we should stop progress. The possible benefits of AI-powered science are too important to ignore out of fear. Faster cures, cheaper treatments, and new discoveries could make a real difference for people with serious illnesses. But the time to set up the right safeguards is short and running out. The labs are already working. The real question is whether anyone is paying attention.