“There are people in the world who, for a long time, have wanted to keep AI in the hands of a smaller group of people,” Sam Altman said on the Core Memory podcast last month, taking aim at Anthropic’s decision to restrict its cybersecurity model Claude Mythos to roughly 50 handpicked organizations. “You can justify that in a lot of different ways.”
He likened the approach to selling fear. “We have built a bomb, we are about to drop it on your head. We will sell you a bomb shelter for $100 million.”
Three weeks later, Altman is checking IDs at his own door.
On Thursday, Altman announced on X that OpenAI would begin rolling out GPT-5.5-Cyber “to critical cyber defenders” within days, with access restricted to a vetted group the company describes as trusted defenders working to secure critical infrastructure. The model can penetration-test systems, identify and exploit vulnerabilities, and reverse-engineer malware — capabilities that cut both ways. OpenAI has set up a tiered verification system called Trusted Access for Cyber (TAC) that, according to a company spokesperson cited by TechCrunch, has scaled to “thousands of verified defenders and hundreds of teams responsible for protecting critical software.”
The irony requires no embellishment.
GPT-5.5-Cyber is not marketing fluff. The UK’s AI Security Institute (AISI) said this week that it is “one of the strongest models we have tested on our cyber tasks,” and noted it is only the second system to complete one of its multi-step attack simulations end to end — matching Mythos Preview, the first. On AISI’s 95 Capture the Flag challenges, GPT-5.5-Cyber passed 71.4 percent of the highest-level Expert tasks, compared to Mythos’s 68.6 percent, Ars Technica reported. The gap falls within the margin of error.
The technical capability is not in dispute. The timing of the moral positioning is.
Anthropic restricted Mythos citing safety concerns. OpenAI is restricting GPT-5.5-Cyber citing safety concerns. Anthropic kept its model to a small circle. OpenAI is keeping its model to a larger, but still controlled, circle. The difference is one of degree, not kind — and the rhetorical distance between the two positions is considerably shorter than Altman’s podcast appearance suggested.
When tools can both break and fix systems, the difference between defender and attacker often comes down to who arrives first. Both companies have now reached the same conclusion about the risks of open access. Only one pretended it wouldn’t.