The Pentagon's New AI Stack: Eight Vendors and One Conspicuous Gap
On May 1, the Department of Defense published a four-paragraph press release that will define the federal AI vendor landscape for years to come.
It named eight companies authorized to deploy artificial intelligence on the Pentagon's most sensitive classified networks, and Anthropic isn't one of them. That absence is deliberate, and its implications stretch from classified networks to the company's next funding round.
The contracts cover deployment on Impact Level 6 and Impact Level 7 network environments, the two highest security tiers in the Defense Department's infrastructure, where mission planning, intelligence analysis, and weapons targeting actually happen.
The eight companies included are OpenAI, Google, Microsoft, Amazon Web Services, Nvidia, SpaceX, Reflection AI, and Oracle, which was added hours after the initial announcement. All of them agreed to make their models available for any purpose the military deems lawful. That clause is the entire story.
How Claude went from first in to locked out
Anthropic's Claude was the first AI model deployed on Pentagon classified networks. The company held a reported $200 million contract and was deeply embedded in military operations through Palantir's Maven toolkit.
Then the Pentagon's standard contractual language became the sticking point. The department required vendors to allow their models for any use that doesn't violate the law, including fully autonomous weapons and domestic mass surveillance. Anthropic pushed back, wanting explicit guarantees that Claude wouldn't be used for those purposes.
The DOD saw those limits as unacceptable. In March 2026, the Pentagon designated Anthropic a supply chain risk, the first time that label had ever been applied to an American company. The designation isn't just symbolic. It bars Anthropic's products from Pentagon systems and, by extension, from the contractors who build on those systems.
Anthropic sued, and a federal judge in California blocked the broader government ban. Anthropic's CEO Dario Amodei also visited the White House last month for a meeting with Chief of Staff Susie Wiles, which both sides described as productive.
The litigation is still active in two courts. The May 1 contract list is the operational consequence of that legal impasse: eight other vendors stepped in and signed on the Pentagon's terms while the courts sort out whether the blacklist was constitutional retaliation.
What the vendor list actually tells us
The composition of the eight-company group is worth reading carefully. OpenAI and xAI had classified deals in place before this announcement. SpaceX's inclusion reflects its 2026 merger with xAI, which brought Elon Musk's Grok family of models inside the SpaceX entity.
Google signed a Gemini deal on April 28, one day after 600 of its own employees signed an open letter asking CEO Sundar Pichai to refuse.
The most telling name on the list is Reflection AI. Reflection is a two-year-old startup that has yet to release a publicly available model. The firm raised $2 billion in October and counts Nvidia among its backers. It's also supported by 1789 Capital, a venture fund where Donald Trump Jr. is a partner, and is reportedly seeking a $25 billion valuation.
A pre-product company winning an IL7 contract before it ships a model isn't a normal procurement outcome. None of the eight companies issued a public statement defending the contractual language Anthropic refused.
The speed of the pivot
Since the Anthropic fallout, the Pentagon has sharply compressed its onboarding process for new AI providers. Integration onto classified tiers now takes under three months, down from 18 months or longer. That's a meaningful operational change, and it's a direct response to the vulnerability the DOD created by relying on a single primary vendor.
"It's irresponsible to be reliant on any one partner," DOD CTO Emil Michael told CNBC, adding that the Pentagon learned "that one partner didn't really want to work with us in the way we wanted to work with them."
The platform those eight vendors now feed is substantial. After just five months of operation, GenAI.mil has been used by over 1.3 million Defense Department personnel to build agent workflows at scale. The vendor list isn't a pilot program. It's production infrastructure.
The Mythos complication
Here's where the story gets structurally contradictory. While the DOD was formalizing Anthropic's exclusion from its classified vendor stack, a different part of the government was actively using Anthropic's newest model.
Two sources confirmed to Axios that the NSA was using Mythos, while one said the model was also being used more widely within the department.
Mythos, launched April 7, is Anthropic's most capable model and the one that has generated the sharpest national security attention. Anthropic says the model can find and exploit zero-day bugs in every major operating system and web browser, and demonstrated this by identifying a 27-year-old flaw in OpenBSD.
NSA officials studying Mythos have been impressed by its speed and efficiency in searching for potential security flaws, according to Bloomberg sources speaking on background.
Michael addressed this directly on CNBC: "I think the Mythos issue that's being dealt with government-wide, not just at the Department of War, is a separate national security moment where we have to make sure that our networks are hardened up, because that model has capabilities that are particular to finding cyber vulnerabilities and patching them."
That's a precise articulation of the bind. The DOD can't fully cut off a company whose most dangerous model the NSA is actively evaluating for defensive use.
Anthropic hasn't made Mythos generally available, citing cybersecurity concerns. Instead, it's restricted access to a group of over 40 organizations and launched Project Glasswing, an industry consortium to find and fix vulnerabilities in foundational systems. The NSA engagement is happening through that restricted channel, not through any reinstated Pentagon contract.
Where this goes from here
The legal fight is the short-term variable. If Anthropic wins on the constitutional retaliation claim, the supply chain designation likely falls, the contractual standoff restarts, and the "all lawful purposes" language returns to negotiation.
If the Pentagon prevails, Anthropic's exclusion from federal AI infrastructure becomes durable, and the revenue consequence compounds as agencies that follow DOD procurement norms make their own decisions.
Meanwhile, the Mythos situation creates pressure in the other direction. The White House reopened discussions with Anthropic in recent weeks after the company made significant announcements about several technology breakthroughs.
A government that's simultaneously suing a company and soliciting its most sensitive model has a negotiating problem, and both sides know it.
For developers building on any of these platforms, the vendor list matters in concrete terms. Workflows on GenAI.mil are now running against OpenAI, Google, Microsoft, AWS, Nvidia, Oracle, Grok via SpaceX, and Reflection's forthcoming models.
Claude isn't in that stack, and if you've built agent tooling optimized for Claude on the assumption that Anthropic would remain the primary DOD provider, that assumption expired on May 1.
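The vendor churn described above has a practical counterpart in code. Below is a minimal sketch of the defensive pattern it suggests: agent tooling written against a thin provider abstraction survives a vendor swap, while tooling hard-coded to one SDK does not. Everything here is hypothetical; the `ChatProvider` interface, the stub classes, and their behavior are illustrative assumptions, not GenAI.mil's or any vendor's actual API.

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class Completion:
    provider: str
    text: str


class ChatProvider(Protocol):
    """The narrow interface agent tooling codes against,
    instead of importing one vendor's SDK everywhere."""

    name: str

    def complete(self, prompt: str) -> Completion: ...


class StubProviderA:
    """Stand-in for one vendor; a real adapter would wrap that vendor's SDK."""

    name = "provider-a"

    def complete(self, prompt: str) -> Completion:
        return Completion(self.name, f"[{self.name}] {prompt}")


class StubProviderB:
    """Stand-in for a second vendor behind the same interface."""

    name = "provider-b"

    def complete(self, prompt: str) -> Completion:
        return Completion(self.name, f"[{self.name}] {prompt}")


def run_agent_step(provider: ChatProvider, prompt: str) -> Completion:
    # Agent logic depends only on ChatProvider, so swapping vendors
    # is a constructor change, not a rewrite of the workflow.
    return provider.complete(prompt)


if __name__ == "__main__":
    for p in (StubProviderA(), StubProviderB()):
        print(run_agent_step(p, "summarize the brief").provider)
```

The design point is the seam, not the stubs: if the May 1 list taught anything, it's that a primary vendor can exit the stack in months, and tooling that touches a provider SDK in exactly one adapter class is the tooling that absorbs that change.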