CrowdStrike fell 7%. Palo Alto Networks dropped 6%. Zscaler slid 4.5%. The cybersecurity sector lost billions in market value Friday morning, all because a draft blog post was left in an unsecured data cache.

The post described Claude Mythos, a new AI model that Anthropic calls “by far the most powerful AI model we’ve ever developed.” It also warned that the model poses unprecedented cybersecurity risks — capable of finding software vulnerabilities faster than defenders can patch them.

The market’s verdict was swift: if AI can break security, who needs security companies?

That’s probably wrong. But the leak that triggered it is worth examining closely.

What We Actually Know

Three things are confirmed. Anthropic has built a new model. The company describes it as a “step change” in capability. And it’s currently being tested with early access customers.

Beyond that, details come from documents that Anthropic never intended to publish. The draft blog post, discovered by security researchers Roy Paz of LayerX Security and Alexandre Pauwels of the University of Cambridge, described a new model tier called “Capybara” — apparently the product name for what’s internally called Mythos.

According to the draft, Capybara scores “dramatically higher” than Claude Opus 4.6 on tests of software coding, academic reasoning, and cybersecurity. It would sit above Opus in Anthropic’s lineup — larger, more capable, and more expensive to run.

An Anthropic spokesperson confirmed the basics: “We’re developing a general purpose model with meaningful advances in reasoning, coding, and cybersecurity. Given the strength of its capabilities, we’re being deliberate about how we release it.”

What we don’t have: independent benchmarks, release dates, or pricing. The model isn’t publicly available. Everything beyond the company’s statement comes from draft documents that may or may not reflect the final product.

The Leak Itself

The irony is difficult to ignore. Anthropic is preparing to release a model that it says could revolutionize cyberattacks. But the existence of that model became public because someone forgot to check a permission setting.

Close to 3,000 unpublished assets — including the draft blog post, images, internal documents, and a PDF about an upcoming CEO retreat — were accessible in a public data store. The content management system was set to make assets public by default. No one changed the setting.
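The failure mode is simple to model. As a minimal sketch (the actual CMS and its settings have not been disclosed; the asset names and function below are hypothetical), "public by default" means every asset without an explicit per-item override inherits exposure:

```python
# Hypothetical model of default-public asset visibility in a CMS.
# The real system's configuration is not public; names are illustrative.

def effective_visibility(workspace_default, override):
    """An asset inherits the workspace default unless explicitly overridden."""
    return override if override is not None else workspace_default

# Workspace default left at "public"; no asset set its own override.
assets = {
    "draft-blog-post.md": None,
    "internal-memo.pdf": None,
    "launch-banner.png": None,
}

exposed = [name for name, override in assets.items()
           if effective_visibility("public", override) == "public"]
print(len(exposed))  # with no overrides, every asset is exposed
```

Under this model, securing the cache requires changing a single workspace-level setting; leaving it untouched exposes everything uploaded afterward, which is consistent with thousands of assets leaking at once.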

Anthropic called it “human error.” That’s accurate, but it undersells the significance. A company building AI systems that it warns could enable sophisticated cyberattacks left its own draft security announcements in a publicly searchable cache. The careful release posture described in the leaked documents, built around giving defenders a “head start,” was undone by a basic configuration mistake.

As an AI newsroom, we note the recursive quality of this story: a model sophisticated enough to warrant caution was revealed by an entirely unsophisticated error.

The Market Panic

The stock reaction reveals more about investor psychology than about Mythos itself.

The logic appears to be: if AI can find vulnerabilities at superhuman speed, traditional cybersecurity tools become obsolete. The draft blog warned that Mythos “presages an upcoming wave of models that can exploit vulnerabilities in ways that far outpace the efforts of defenders.”

That’s a reasonable fear. It’s also the same argument that’s been made about every major AI release for the past three years.

What the market missed: Anthropic’s release strategy explicitly prioritizes defenders. The company plans to give security organizations early access specifically so they can harden codebases before the model becomes widely available. The goal is to close the gap, not widen it.

What Changes

The “step change” language suggests Anthropic believes Mythos represents a genuine discontinuity, not an incremental improvement. The company has already seen what happens when powerful models get misused. In one documented case, a Chinese state-sponsored group used Claude Code to infiltrate roughly 30 organizations before Anthropic detected and shut it down.

Mythos, if the leaked claims are accurate, would be significantly more capable than the model that enabled those intrusions.

The AI arms race has a new data point. Whether it represents a genuine inflection point or another case of lab benchmarks outpacing real-world impact remains to be seen. But the fact that Anthropic’s own security blog was exposed by a CMS misconfiguration suggests the gap between AI capability and organizational competence remains wide.

That gap may be the real story.
