Daily Signal: May 6, 2026


Algorithm Times' Daily Signal is a daily sweep of the AI headlines worth reading, with context for why they matter.

Today's theme is the uneven commercial terrain of enterprise AI, as a large workforce commitment and a large deal collapse land in the same news cycle. We're also tracking a concrete supply-chain consequence for Anthropic and the strongest peer-reviewed clinical benchmark for a reasoning model published to date.


EPAM Systems targets 10,000 Claude-certified engineers, with 1,300 already certified and 5,000 targeted by end of Q3

Workforce certification numbers are the most specific signal in the EPAM-Anthropic multi-year partnership announcement: 1,300 engineers already certified across Claude models, Claude Code, and the Claude Agent SDK; 5,000 targeted by end of Q3; a full program target of 10,000; and a specialized cadre of 250 forward-deployed engineer specialists.

EPAM operates primarily as an engineering services firm, not a software vendor, which means these certifications translate to billable delivery capacity for enterprise clients adopting Claude in production environments.

No contract dollar value is disclosed. Certification volume at a single named firm is a different signal from seat counts or API call volume: it represents institutional depth that is harder to reverse quickly, and 10,000 certified engineers at a global delivery firm is a real commitment at a scale most announced AI partnerships have not approached.

Snap and Perplexity have terminated their $400 million annual integration agreement, with the financial impact reflected in Snap's Q1 guidance

The termination is confirmed in Snap's Q1 filing, which states that current revenue guidance assumes no Perplexity contribution following mutual termination. At $400 million annually, this was among the largest announced AI platform distribution deals on record. Neither party has disclosed a cause.

The financial guidance impact is Snap's, not Perplexity's, and is already baked into Snap's disclosed numbers, which limits near-term investor uncertainty on that side. The broader signal is that consumer-scale AI integration deals at this price point are harder to execute to completion than to announce, a pattern worth tracking as similar platform distribution agreements continue to be signed across the industry.

The Pentagon signed classified-network AI agreements with eight companies; Anthropic is the only US frontier lab not included

The DoD's list covers SpaceX, OpenAI, Google, Microsoft, Nvidia, AWS, Oracle, and Reflection for deployment in classified networks. Anthropic's absence is a direct consequence of the Trump administration's supply-chain-risk designation, which Anthropic is contesting in federal court.

Classified network access is the prerequisite for most defense contract work involving frontier AI, making Anthropic the only frontier lab currently excluded from that market segment while litigation proceeds. The court timeline adds uncertainty that enterprise buyers in defense-adjacent sectors have to account for in vendor selection now, independent of how the legal question ultimately resolves.

A blinded study published in Science using 76 real emergency room records finds o1 produces correct diagnoses at a higher rate than attending physicians

The evaluation set comprises 76 actual emergency room admissions from Beth Israel Deaconess. In this Harvard-led study published in Science, graders scored all diagnostic outputs without knowing whether they came from OpenAI's o1 or from attending physicians.

At triage, o1 reached the correct or near-correct diagnosis 67% of the time, versus 55% and 50% for two internal medicine attendings. With richer clinical detail, o1 hit 82% accuracy against a human range of 70-79%, though the authors note the latter difference is not statistically significant.

The study uses real patient records rather than synthetic benchmarks, employs a blinded design, and appears in a peer-reviewed journal, which puts it at a different evidentiary level than vendor-published evaluations.

The authors explicitly call for prospective trials before clinical deployment and acknowledge that the human comparators are internal medicine attendings rather than emergency medicine specialists. For developers building clinical tooling on reasoning models, the methodology and the authors' own stated caveats are as important as the headline accuracy numbers.