The Post-LLM Bet: Three Labs, $2.6 Billion, and a Structural Shift in AI Capital

In roughly seven weeks, three AI research labs founded by some of the most credentialed researchers in the field raised a combined $2.6 billion at the seed stage on a shared thesis: that large language models are not the terminal architecture for artificial intelligence, and that whoever builds the alternative first will define the next decade of the field.

That's a lot of money to spend on a disagreement.

The sequence began on March 10, when AMI Labs (Advanced Machine Intelligence), the Paris-headquartered startup co-founded by Turing Award winner Yann LeCun, following his departure from Meta, and by CEO Alex LeBrun, announced a $1.03 billion seed round at a $3.5 billion pre-money valuation. The round was co-led by Cathay Innovation, Greycroft, Hiro Capital, HV Capital, and Bezos Expeditions, with NVIDIA, Samsung, and Toyota Ventures among the participants.

AMI's technical bet is on world models: AI systems that learn abstract representations of physical reality rather than predicting tokens. The underlying architecture, JEPA (Joint Embedding Predictive Architecture), was proposed by LeCun in 2022. LeBrun has acknowledged that moving from research to commercial application could take years.
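The distinction between predicting tokens and predicting representations can be made concrete. The toy sketch below (plain NumPy, random untrained weights) contrasts where each paradigm computes its loss; it is a minimal illustration of the JEPA idea from LeCun's 2022 position paper, not AMI's implementation.

```python
import numpy as np

# Toy contrast between token-space prediction (LLM-style) and
# embedding-space prediction (JEPA-style). All networks are random
# linear maps; the point is where the loss lives, not learning.

rng = np.random.default_rng(0)

VOCAB, DIM, EMB = 50, 16, 8
context = rng.standard_normal(DIM)        # encoded context x
target = rng.standard_normal(DIM)         # the "future" signal y

# --- LLM-style: predict the raw target token distribution ---
W_lm = rng.standard_normal((VOCAB, DIM))
logits = W_lm @ context
probs = np.exp(logits - logits.max())
probs /= probs.sum()
true_token = 7                            # pretend y discretizes to token 7
token_loss = -np.log(probs[true_token])   # cross-entropy on raw tokens

# --- JEPA-style: predict the target's *embedding*, not the target ---
enc = rng.standard_normal((EMB, DIM))     # shared encoder
pred = rng.standard_normal((EMB, EMB))    # predictor in latent space
s_x = enc @ context                       # context representation
s_y = enc @ target                        # target representation (never decoded back to tokens)
jepa_loss = np.mean((pred @ s_x - s_y) ** 2)  # error measured in embedding space

print(f"token-space loss:     {token_loss:.3f}")
print(f"embedding-space loss: {jepa_loss:.3f}")
```

The JEPA branch never reconstructs the raw target; it only has to get the target's abstract representation right, which is the architectural bet.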

On April 17, the Financial Times reported that Recursive Superintelligence — a London-incorporated company founded on December 31, 2025, by former Salesforce chief scientist Richard Socher and Tim Rocktäschel, UCL professor and until recently director and principal scientist at Google DeepMind — had raised at least $500 million at a $4 billion pre-money valuation. GV led the round; NVIDIA joined. The round was oversubscribed to the point where total capital could reach $1 billion.

The founding team also includes former OpenAI researchers Josh Tobin, Jeff Clune, and Tim Shi. The company's stated goal is to automate the entire frontier AI development pipeline, from evaluation and data selection to training, post-training, and research direction, creating systems that improve themselves without human involvement.

Socher has called this "the third and perhaps final stage of neural networks." Per the FT, recursive self-improvement has not been demonstrated to work over extended periods. The company has roughly 20 staff and no public product; a launch is expected around mid-May 2026.
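In the abstract, "automating the frontier AI pipeline" means handing each stage a human researcher normally drives to the current model, which then produces its own successor. The toy below makes that loop concrete; the "model" is just a scalar skill value and every stage is a hypothetical stand-in, so nothing here reflects Recursive Superintelligence's unreleased system.

```python
import random

# Toy recursive self-improvement loop. The model is a scalar "skill";
# train_successor stands in for the full automated pipeline (evals, data
# selection, training, post-training). Illustration of the concept only.

random.seed(1)

def train_successor(skill: float) -> float:
    """The current model 'runs training' for its successor: a noisy step
    whose quality scales with the current model's own skill."""
    return skill + random.uniform(-0.5, 1.0) * (1 + 0.1 * skill)

def evaluate(skill: float) -> float:
    """The current model also supplies the evaluation (here: identity)."""
    return skill

model = 1.0
for _ in range(20):                        # bounded budget, not an open loop
    candidate = train_successor(model)
    if evaluate(candidate) > evaluate(model):
        model = candidate                  # adopt only measured improvements

print(f"final skill: {model:.2f}")
```

The open question the FT flags sits in exactly this loop: whether the gains compound over long horizons or stall once the model's own evaluations stop tracking real capability.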

Ten days later, Ineffable Intelligence emerged from stealth. Founded by David Silver, the former lead of DeepMind's reinforcement learning team and a UCL professor, the company raised $1.1 billion at a $5.1 billion valuation, the largest seed round in European history at the time of its announcement, overtaking AMI Labs' record set seven weeks earlier.

The round was co-led by Sequoia (Alfred Lin and Sonya Huang) and Lightspeed (Ravi Mhatre and Raviraj Jain), with participation from NVIDIA, DST Global, Index, Google, the UK Sovereign AI Fund, the British Business Bank, and the Wellcome Trust. Ineffable's stated technical direction is a reinforcement learning-based "superlearner" designed to discover knowledge from interactions with the environment without relying on human-generated data.
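"Discovering knowledge from interaction with the environment without human-generated data" has a stripped-down canonical form: a tabular agent whose only signal is the reward the environment returns. The sketch below is that paradigm at its smallest (a two-armed bandit with Q-learning), an illustration of the idea Silver's field is built on, not Ineffable's system.

```python
import random

# Learning purely from environment interaction: a tabular Q-learner on a
# toy 2-armed bandit. No human-generated data appears anywhere in the
# loop; the only teacher is the environment's stochastic reward.

random.seed(0)

def pull(arm: int) -> float:
    """Stochastic environment: arm 1 pays more on average."""
    return random.gauss(mu=0.2 if arm == 0 else 1.0, sigma=0.1)

q = [0.0, 0.0]                 # value estimates, learned from scratch
alpha, epsilon = 0.1, 0.1      # step size and exploration rate

for _ in range(2000):
    # epsilon-greedy: mostly exploit current estimates, sometimes explore
    arm = random.randrange(2) if random.random() < epsilon else q.index(max(q))
    reward = pull(arm)                      # interact with the environment
    q[arm] += alpha * (reward - q[arm])     # incremental value update

print(f"learned values: {q[0]:.2f}, {q[1]:.2f}")
```

The agent ends up valuing arm 1 near its true payout despite starting with no data at all; the research question behind the "superlearner" framing is whether that loop extends from bandits and board games to open-ended knowledge discovery.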

Three things are notable in the aggregate. First, NVIDIA appears as an investor in all three companies. That's not coincidental. NVIDIA's strategic interest in backing alternative AI paradigms alongside its GPU business is a hedge: if any of these approaches eventually require large compute clusters with different training dynamics, NVIDIA wants the relationship.

Second, both Silver and Rocktäschel hold UCL professorships and left Google DeepMind within a short window of each other, a pattern of departures from one of the field's most productive labs that warrants watching.

Third, and most directly relevant to developers and engineers tracking where the field is moving: none of these companies has shipped anything. AMI's CEO has framed the path to product as a multi-year research timeline. Recursive has a 20-person team and a mid-May launch target that, as of this writing, remains unconfirmed. Ineffable's website describes a vision, not a system.

That distinction between thesis and demonstrated capability matters for how to read these raises. LeCun's critique of LLM scaling is well-documented and published, but JEPA has not yet matched frontier LLMs on standard benchmarks.

Silver's reinforcement learning credentials are unimpeachable: he led the work behind AlphaGo and AlphaZero. But RL systems that can discover genuinely novel knowledge at scale, rather than mastering bounded game environments, remain an open research problem. And Recursive's self-improvement thesis is, as the FT noted, untested over extended durations. Investors aren't buying products. They're buying paradigms and pedigrees.

The structural question, then, is whether this cluster of raises represents a genuine regime change in how capital thinks about AI, or a flight to credibility in a market where LLM-adjacent deals have become crowded and expensive.

The answer is probably a bit of both. Sequoia and Lightspeed co-leading a seed at a $5.1 billion valuation is not a casual flyer; those firms have the reserves to absorb a long research arc. And GV, Google's own venture arm, leading Recursive's round at a $4 billion valuation gives Alphabet an option on an alternative to its core AI infrastructure investments. These are considered bets from institutions that model long time horizons.

What isn't yet clear is whether the alternatives work at scale. That's the question that $2.6 billion in seed capital has bought three labs the runway to answer. The research timelines involved and the distance each of these approaches remains from commercial deployment mean the answer won't arrive quickly.

But the direction of the money has already moved.