At Google Cloud Next this week, Sundar Pichai disclosed a number that reframes the entire conversation about AI and software development. Seventy-five percent of all new code written at Google is now generated by AI and subsequently reviewed by human engineers. That figure was roughly 25% in October 2024 and had climbed to 50% by last fall. In twelve months, the share has tripled.
This is not a startup’s claim. This is Google, a company that maintains production systems at a scale most engineers will never encounter, writing in its official blog post that the majority of its new code no longer originates from human keystrokes.
What Pichai Actually Said
In his Cloud Next keynote post, Pichai wrote that Google is shifting to “truly agentic workflows,” where engineers orchestrate AI agents rather than writing code directly. He cited one concrete example: a complex internal code migration completed by agents and engineers working together ran six times faster than a comparable project completed a year earlier with engineers alone.
He gave a second example: the team behind the Gemini app on macOS built the initial release using Google’s internal agentic development platform, Antigravity, going from an idea to a working native Swift app prototype in a matter of days. Both examples point to the same shift — agents compressing the time between intent and working software.
The policy dimension is notable. Google is now factoring AI adoption into employee performance reviews. This means the 75% figure is not a passive outcome of engineers experimenting with useful tools. It is a managed operational target.
The Friction Beneath the Headline
The story has a layer worth understanding. Some employees at Google DeepMind have reportedly been permitted to use Anthropic’s Claude Code for development work in recent months. That decision apparently created internal friction, and the friction signals something real: even inside Google, engineers prefer whichever model works best for the task, not necessarily the one built in-house. It also tells you that Google’s internal AI coding infrastructure, however mature, is not yet unambiguously best-in-class in the eyes of the people who use it daily.
What This Means for Software Engineers
The instinct is to read a number like 75% and ask whether software engineers are being replaced. That is the wrong question. Google has not reduced its engineering headcount in response to AI-generated code. What it has changed is the nature of the job.
Writing code is becoming an output of the pipeline, not the primary skill of the engineer. What the job increasingly demands is the ability to decompose complex systems cleanly, evaluate what an AI-generated function actually does versus what it appears to do, and catch subtle errors that look correct at the commit stage but create problems in production. These are architectural and judgment skills. They take years to build and do not come from learning syntax faster.
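The gap between "looks correct" and "is correct" is easy to illustrate. The snippet below is a hypothetical sketch, not Google's code or any specific model's output: a short Python function of the kind an agent might generate, which reads cleanly at the commit stage but misbehaves in production because of a mutable default argument shared across calls.

```python
def dedupe(items, seen=set()):  # BUG: the default set is created once and shared by every call
    """Return the items not previously seen."""
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out

# The first call behaves exactly as a reviewer would expect.
print(dedupe(["a", "b", "a"]))  # ['a', 'b']
# The second call silently drops 'a', because `seen` persisted between calls.
print(dedupe(["a", "c"]))       # ['c']

# A review-safe version makes the state explicit instead of hiding it in a default:
def dedupe_fixed(items, seen=None):
    seen = set() if seen is None else seen
    out = []
    for item in items:
        if item not in seen:
            seen.add(item)
            out.append(item)
    return out
```

Nothing about the buggy version fails a syntax check, a type check, or a happy-path unit test that calls it once. Spotting it requires knowing how Python evaluates default arguments, which is exactly the kind of judgment the paragraph above describes.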
For engineers early in their careers who built their value around coding speed and recall, the trajectory of this number is a serious signal. For engineers with strong systems thinking, security awareness, and product context, the same trajectory represents an expansion of what one person can actually ship.
The Trajectory Is the Story
25% to 50% to 75% in twelve months. If that rate of change continues, the practical question is not whether AI dominates software development at major technology companies. It already does, at Google’s scale. The question is how fast the same shift reaches mid-market engineering teams, and what the second-order effects look like when the majority of new code everywhere originates from a model.
Google’s disclosure is the clearest benchmark the industry has seen from a company of this complexity. Every CTO reading it is recalibrating their hiring plans. Every engineer reading it should be recalibrating their skill investment.

