Scoop: OpenAI plans new product for cybersecurity use

Illustration: Aïda Amer/Axios
OpenAI is finalizing a product with advanced cybersecurity capabilities that it plans to release to a small set of partners, a source familiar with the plans told Axios.
Why it matters: AI capabilities have reached a tipping point, at least in terms of autonomy and hacking capabilities. Model-makers are now so worried about the havoc their own tools could cause that they're reluctant to release them into the wild.
- Anthropic is also planning a limited rollout of Mythos, its new model.
Driving the news: Anthropic announced plans Tuesday to limit access to its new Mythos Preview model to a handpicked group of technology and cybersecurity companies over fears of its advanced hacking capabilities.
- That made it the first AI company to take such an approach with a new model.
Zoom in: OpenAI introduced its "Trusted Access for Cyber" pilot program in February after rolling out GPT-5.3-Codex, the company's most cyber-capable reasoning model.
- Organizations in the invite-only program are given access to "even more cyber capable or permissive models to accelerate legitimate defensive work," according to a blog post.
- At the time, OpenAI committed $10 million in API credits to participants.
The big picture: Former government officials and top security leaders have been ringing alarm bells over the past year about AI models that — in the wrong hands — could one day autonomously disrupt water utilities, the electric grid, or financial systems.
- Those capabilities now appear to be here.
Threat level: Even if AI companies hold back their models for limited releases, top security experts all have the same message: There's no going back.
- "You can't stop models from doing code enumeration or finding flaws in older codebases," said Rob T. Lee, chief AI officer at the SANS Institute. "That capability exists now."
- It's only a matter of weeks or months before there's a new model with similar capabilities out in the wild, Wendi Whitmore, chief security intelligence officer at Palo Alto Networks, told Axios during a panel at the HumanX conference in San Francisco on Tuesday.
- Adam Meyers, senior vice president of counter adversary operations at CrowdStrike, called Mythos' capabilities a "wake-up call" for the entire industry.
Between the lines: Restricting the rollout of a new frontier model makes "more sense" if companies are concerned about models' ability to write new exploits — rather than about their ability to find bugs in the first place, Stanislav Fort, CEO of security firm Aisle, told Axios.
The intrigue: Staggering the release of new AI models looks a lot like how cybersecurity vendors currently handle the disclosure of security flaws in software, Lee added.
- "It's the same debate we've had for decades around responsible vulnerability disclosure," Lee said.
Yes, but: This product is different from OpenAI's upcoming model, Spud.
- It's unclear what cyber capabilities that model will have or how OpenAI plans to roll it out.
Reality check: Widely available AI models are already capable of finding some of the vulnerabilities and exploits that Mythos uncovered, researchers at Aisle found Wednesday.
Go deeper: The wildest things Anthropic's Mythos pulled off in testing
Editor's note: The headline and story have been corrected to clarify that OpenAI is releasing a cybersecurity product, separate from its new model, to select partners (not that it is staggering the release of its new model to select companies).
