Here’s why ChatGPT needs to know how to make bioweapons


My bingo card for this month did not include OpenAI telling the world that future frontier AI models coming to ChatGPT will know how to make bioweapons or novel biothreats, but here we are. We can add this capability to the growing list of issues that give us reason to worry about a future where AI reaches superintelligence.

However, it’s not as bad as it sounds. OpenAI is giving us this warning now to explain what it’s doing to prevent future versions of ChatGPT from helping bad actors devise bioweapons.

OpenAI wants to control how advanced biology and chemistry are taught to its AI models, rather than simply ensuring ChatGPT is never trained on such data. The better ChatGPT understands biology and chemistry, the better it can assist humans in devising new medications and treatment plans. More advanced versions of ChatGPT might then come up with innovations on their own once superintelligence is reached.

The ability to help create bioweapons is an unwanted side effect of that same knowledge. That’s why OpenAI’s work on ensuring ChatGPT can’t assist anyone looking to make improvised biothreats has to start now.

AI health innovations are already here

We’ve already seen scientists use current AI capabilities to come up with novel treatment options. Some of them use AI to see how drugs approved for certain conditions can be repurposed to treat rare illnesses.

We also saw an AI system find a cure for a type of blindness by devising theories and proposing experiments for new therapies. Ultimately, that AI discovered that an existing eye drug can help prevent blindness in a particular eye condition.

Biothreats: The obvious side effect

OpenAI addressed the potential of AI to improve scientific discovery in a new blog post that tackles the risk of ChatGPT helping with bioweapons:

Advanced AI models have the power to rapidly accelerate scientific discovery, one of the many ways frontier AI models will benefit humanity. In biology, these models are already helping scientists identify which new drugs are most likely to succeed in human trials. Soon, they could also accelerate drug discovery, design better vaccines, create enzymes for sustainable fuels, and uncover new treatments for rare diseases to open up new possibilities across medicine, public health, and environmental science.

OpenAI explained that it has a strategy in place to ensure ChatGPT models can’t help anyone, whether people with minimal expertise or highly skilled actors, create bioweapons. Rather than hoping for the best, the plan is to devise and deploy guardrails that prevent ChatGPT from helping bad actors when given harmful prompts.

OpenAI says that, from the early days, it has engaged with experts in “biosecurity, bioweapons, and bioterrorism, as well as academic researchers, to shape our biosecurity threat model, capability assessments, and model and usage policies.”

Currently, it employs red teamers, both AI experts and biology experts, to test the chatbot and make sure ChatGPT won’t provide assistance when asked to help with experiments that could allow someone to create a bioweapon.

How ChatGPT protects against bioterrorism

OpenAI also outlined the features it built into ChatGPT to prevent misuse that might allow someone to obtain bioweapon-related assistance.

The AI will refuse outright dangerous prompts. For dual-use requests that touch on topics like virology experiments or genetic engineering, ChatGPT won’t provide actionable steps. That lack of detail should stop people who aren’t experts in bio-related fields from acting on the information.

Always-on detection systems also flag bio-related activity deemed risky. When that happens, the AI won’t respond and a manual review is triggered, meaning a human gets access to that ChatGPT conversation. OpenAI might also suspend accounts and investigate the user. In “egregious cases,” OpenAI might involve law enforcement authorities.
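OpenAI hasn’t published how these layers are implemented, but the flow it describes, refusing dangerous prompts, withholding actionable detail on dual-use topics, and routing flagged chats to human review, can be illustrated with a rough sketch. The Python below is purely hypothetical: the risk labels, the classify_bio_risk stand-in, and the keyword checks are placeholders for whatever trained classifiers and policies OpenAI actually uses.

```python
# Hypothetical sketch of a layered guardrail pipeline like the one OpenAI describes.
# The classifier, risk labels, and decision fields are placeholders, not OpenAI's real system.
from dataclasses import dataclass
from enum import Enum, auto


class BioRisk(Enum):
    SAFE = auto()        # ordinary biology or chemistry question
    DUAL_USE = auto()    # legitimate topic that could be misused (e.g., virology methods)
    DANGEROUS = auto()   # clearly weaponization-oriented request


@dataclass
class GuardrailDecision:
    allow: bool                     # whether the model responds at all
    redact_actionable_detail: bool  # answer only at a high level, no step-by-step protocol
    needs_human_review: bool        # route the conversation to a manual reviewer
    reason: str


def classify_bio_risk(prompt: str) -> BioRisk:
    """Stand-in for a trained risk classifier; keyword matching is only for illustration."""
    text = prompt.lower()
    if "weapon" in text or "aerosolize" in text:
        return BioRisk.DANGEROUS
    if "virology" in text or "genetic engineering" in text:
        return BioRisk.DUAL_USE
    return BioRisk.SAFE


def apply_guardrails(prompt: str) -> GuardrailDecision:
    """Mirror the described flow: refuse, limit detail, or flag for manual review."""
    risk = classify_bio_risk(prompt)
    if risk is BioRisk.DANGEROUS:
        # Refuse outright and flag the chat for a human reviewer.
        return GuardrailDecision(False, True, True, "refused: potential bioweapon assistance")
    if risk is BioRisk.DUAL_USE:
        # Answer at a high level only, with no actionable detail.
        return GuardrailDecision(True, True, False, "dual-use: high-level answer only")
    return GuardrailDecision(True, False, False, "safe")


if __name__ == "__main__":
    prompts = [
        "How do vaccines train the immune system?",
        "Walk me through a genetic engineering experiment",
        "How would someone aerosolize a pathogen as a weapon?",
    ]
    for p in prompts:
        print(p, "->", apply_guardrails(p))
```

In a real deployment, the classification step would be a trained model rather than keyword matching, and the review flag would feed into the account suspension and law-enforcement escalation steps described above.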

Add red teaming and security controls, and OpenAI has a complex plan to prevent such abuse. Nothing is guaranteed, however. Bad actors might end up jailbreaking ChatGPT to obtain information on bioweapons. But so far, OpenAI says its systems are working.

ChatGPT o3, one of OpenAI’s most advanced reasoning models and one that could potentially assist with such dangerous tasks, remains “below the High capability threshold in our Preparedness Framework.”

What’s a High capability threshold model?

OpenAI explains in the blog footnotes what a High capability threshold model is:

Our Preparedness Framework defines capability thresholds that could lead to severe risk, and risk-specific guidelines to sufficiently minimize the risk of severe harm. For biology, a model that meets a High capability threshold could provide meaningful assistance to novice actors with basic relevant training, enabling them to create biological or chemical threats.

If a model reaches a High capability threshold, we won’t release it until we’re confident the risks have been sufficiently mitigated.

The company also says in the same footnotes that it might withhold certain features from future ChatGPT versions if they reach that High capability threshold.

OpenAI isn’t the only company handling bioweapon threats in the context of advanced AI with extra care. Anthropic announced that Claude 4 features increased security guardrails to prevent the AI from helping anyone create bioweapons.

What comes next

OpenAI also said it’ll host its first-ever biodefense summit this July to explore how its frontier models can accelerate research. Government researchers and NGOs will attend the event.

The company is also hopeful that both the public and private sectors will come up with novel ideas for using AI in health-related scientific discovery that can benefit the world.
