75% of Google's New Code Is Now AI-Generated
Read the number carefully.
Google CEO Sundar Pichai disclosed at Google Cloud Next last week that three-quarters of all new code written at Google is now generated by AI and subsequently reviewed by human engineers.
But the trajectory is arguably more interesting than the number: 25% in October 2024, 50% by late 2025, 75% now. In roughly a year, it tripled.
Before drawing conclusions from that figure, it's worth being precise about what it actually measures.
"AI-generated" in this context means code suggested by AI and accepted or edited by a human engineer before being committed. It describes the origin of characters in a diff, not autonomous output shipping to production without review.
Every line that gets counted in that 75% still went through a human engineer, code review, and automated testing. Pichai has been consistent about this framing. The 75% measures authorship, not accountability. Those are different things, and conflating them leads to both unwarranted alarm and unwarranted confidence.
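To make the distinction concrete, here is a minimal sketch of how an authorship metric of this kind could be computed. Everything in it is an illustrative assumption, not a description of Google's actual telemetry: the hunk tagging, the field names, and the numbers are all invented for the example.

```python
# Hypothetical sketch of an authorship metric like the one described above.
# Assumes each committed hunk was tagged with its origin when the suggestion
# was accepted; this is an illustration, not Google's actual instrumentation.
from dataclasses import dataclass


@dataclass
class CommittedHunk:
    chars: int           # characters in the hunk as committed
    ai_originated: bool  # True if the hunk began as an accepted AI suggestion


def ai_authorship_share(hunks: list[CommittedHunk]) -> float:
    """Fraction of committed characters that originated as AI suggestions.

    Note what this does NOT measure: by commit time, every hunk,
    AI-originated or not, has already passed human review and testing.
    """
    total = sum(h.chars for h in hunks)
    ai = sum(h.chars for h in hunks if h.ai_originated)
    return ai / total if total else 0.0


# Example: three AI-suggested hunks (600 chars) and one human-typed hunk (200 chars)
hunks = [
    CommittedHunk(250, True),
    CommittedHunk(200, True),
    CommittedHunk(150, True),
    CommittedHunk(200, False),
]
print(f"{ai_authorship_share(hunks):.0%}")  # -> 75%
```

The point of the sketch is that a metric like this counts where characters came from; the review and testing gates that determine accountability sit entirely outside it.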
What makes the number genuinely significant isn't the percentage itself. It's that Google has made AI adoption a managed operational target rather than a passive outcome.
Engineers at Google now have AI-related objectives factored into their performance reviews. The company isn't simply observing that 75% of its code is AI-generated, the same way you observe the weather. It's directing that outcome. That's a meaningful distinction for anyone trying to understand where engineering organizations are headed.
The productivity claim Pichai attached to the figure is also worth examining. He cited a complex internal code migration, completed by agents and engineers working together, that ran six times faster than a comparable migration completed a year earlier by engineers alone.
A 6x figure for a single migration is plausible and interesting, but it's a single data point offered by the company, without controls, and with an obvious interest in the story it tells. Treat it as a directional signal, not a benchmark.
There is, however, a detail buried in the coverage that says more than the headline number. Some engineers at Google DeepMind have reportedly been using Anthropic's Claude Code for development work in recent months, apparently generating internal friction along the way.
That's worth noticing. Inside Google, with full access to Gemini and every internal AI coding tool the company has built, engineers with the technical judgment to evaluate the options are reaching for a competitor's product.
That's not an indictment of Gemini's coding capability, which is genuinely strong. It's a signal that even at the frontier, no single model has an unambiguous advantage across all tasks, and that engineers, given a choice, will optimize for output quality over institutional loyalty.
For developers and engineering organizations watching this from the outside, the operational reality is already arriving regardless of the headline number. The question isn't whether AI will write a significant portion of new code. That transition is underway at Google, at Microsoft, and across engineering organizations that have deployed tools like Claude Code, Cursor, and GitHub Copilot at scale.
The question is what engineering actually looks like when the generation step is largely automated.
The early evidence points toward a restructuring of the skill premium rather than a reduction in the engineering workforce. The work that commands attention and compensation is shifting upstream: systems architecture, code review, and the judgment to evaluate whether AI-generated output is correct, safe, and consistent with business requirements that the model does not fully understand.
Routine implementation, boilerplate, test generation, and refactoring are moving toward AI. What remains distinctly human is the work that requires knowing why the system exists and what it is actually supposed to do.
That shift has one underappreciated consequence. Junior engineers who historically learned by writing a lot of code, shipping it, breaking it, and fixing it, now enter organizations where the first-draft generation step they would have owned is handled by a model.
The mentorship and skill development pathways that depended on that loop need to be deliberately rebuilt, not assumed to continue working in an environment where the volume of human-authored code is shrinking.
Organizations that are thinking carefully about this are building in review-focused onboarding, pairing junior engineers with senior reviewers on AI-generated output, and treating code evaluation as a learnable skill that requires explicit development. Organizations that aren't thinking about this are accumulating a skills debt that won't show up in the metrics for a while.
The 75% figure will keep moving. At the current trajectory, it's not implausible that a year from now Pichai will be citing a number above 90%. What matters more than the percentage is what's being measured, who owns the output, and whether the engineering organizations building on top of this transition are doing it deliberately or just riding the wave.