Stanford HAI dropped its 2026 AI Index this morning — 423 pages, nine years of independent data, no lab PR budget behind it. If you only read one document this year to understand where AI actually stands, this is it.
Here’s what the report actually says, past the headlines.
Capabilities Are Accelerating, Not Plateauing
Every pundit who called peak AI in 2025 was wrong. Industry produced over 90% of the notable frontier models released in 2025. On SWE-bench Verified, coding performance jumped from 60% to near 100% of the human baseline in a single year. On Humanity's Last Exam, a benchmark of questions designed by subject-matter experts to represent the hardest problems in their fields, the top score in 2025 was 8.8%. It is now 38.3%, and the best models as of April 2026 have crossed 50%.
Organizational adoption reflects this. AI adoption has reached 88% in the tech industry, and 4 in 5 university students now use generative AI.
The Transparency Collapse Nobody Is Talking About
Here’s the number that should be the story: the Foundation Model Transparency Index dropped from 58 to 40 this year, with the most capable models disclosing the least. Google, Anthropic, and OpenAI have all abandoned the practice of disclosing their latest model’s dataset sizes and training duration. Eighty of the 95 most notable models launched last year were released without their training code.
The labs have made a deliberate choice: as the models get more powerful, they get less legible. This isn’t a side effect. It’s a competitive strategy.
The US-China Gap Is Nearly Gone
In early 2023, OpenAI had a clear lead with ChatGPT. As of March 2026, Anthropic leads, trailed closely by xAI, Google, and OpenAI. Chinese models from DeepSeek and Alibaba lag only modestly. The US still outputs more top-tier models and higher-impact patents, but China leads in total patent output, model publication volume, and industrial robot installations.
US private AI investment reached $285.9 billion in 2025 — more than 23 times China’s $12.4 billion. And yet the performance gap is measured in single-digit percentage points. That should alarm every American policymaker.
The Talent Cliff
The number of AI researchers and developers relocating to the US has dropped 89% since 2017, with an 80% decline in the last year alone. The US is spending more on AI than any country in history while making itself less attractive to the people who build it. That’s a structural problem no amount of compute spending fixes.
The Public Is Not Coming Along for the Ride
Only 10% of Americans say they're more excited than concerned about AI in daily life. Meanwhile, 56% of AI experts believe it will have a positive impact on the US over the next 20 years. Americans also reported the lowest trust of any surveyed country in their government's ability to regulate AI, at 31%.
Employment for software developers aged 22 to 25 has fallen nearly 20% since 2022, and a third of organizations expect AI to shrink their workforce. The industry keeps pointing to benchmark scores. The public is looking at their job offers.
The Bottom Line
The 2026 AI Index is not a victory lap. It’s a stress test. The capabilities are real, the investment is real, the adoption is real. But the transparency is gone, the talent is leaving, and the public trust that makes any of this socially sustainable is at a low. Stanford’s data doesn’t editorialize. It doesn’t have to.