In April 2013, Marc Andreessen stood in front of a crowd and said Google Glass was going to change the world. “You put it on and you say yep, that’s the future,” he told TechCrunch. He co-founded the Glass Collective — a fund backed by a16z, Kleiner Perkins, and Google Ventures — specifically to seed startups building Glass apps. He said people would feel “naked and lonely” without it.
Google Glass was discontinued in 2015.
On April 6, 2026, Andreessen posted four words on X: “AGI is already here.” The post crossed 1.5 million views within hours, with YC president Garry Tan amplifying it and skeptics immediately firing back. One user, @TheRealJunto, responded by digging up the Glass quote from 2013. The parallel is uncomfortable — and instructive.
The Declaration Without a Definition
The core problem with Andreessen’s AGI claim is the same problem every such claim has: there is no agreed-upon definition of AGI. The term generally refers to an AI system that can match or exceed human cognitive ability across any task — not just coding or chess or writing, but reasoning, creativity, physical dexterity, and judgment under uncertainty.
By that standard, we are not there. GPT-5.4 scored 75% on OSWorld-V, a benchmark simulating desktop productivity tasks. The human baseline on that same benchmark is 72.4%. That is impressive. It is not general intelligence — it is one model, on one benchmark, slightly outperforming an average person at clicking through software.
A synthesis of over 9,800 expert predictions places the consensus arrival of AGI at approximately 2040. This is not a fringe view. It is the median of the world’s most informed forecasters.
The definition shifts because shifting it is useful. When Nvidia CEO Jensen Huang says “we’re already there,” he’s using a looser definition: AI that delivers economic value at scale. When Andreessen says it’s here, he likely means something similar. But the original academic definition of AGI, a system that can learn and perform any intellectual task a human can, is a far higher bar. The goalpost moves because the people moving it have something to gain each time it moves.
The Pattern of Confident Declarations
Andreessen is not unique in making this kind of claim. The pattern is consistent and worth naming directly.
Elon Musk predicted AGI by the end of 2025. When that didn’t happen, he updated the timeline to 2026. Jensen Huang said at the Financial Times Future of AI Summit in November 2025: “We are already there… it doesn’t matter, because at this point it’s a bit of an academic question.” Sam Altman has put the milestone somewhere between 2029 and 2035 depending on which interview you read.
None of these people are defining the term the same way. All of them have enormous financial stakes in you believing the milestone is close — or already achieved.
Musk sells compute through xAI and needs investment. Huang sells the chips that run AI. Andreessen’s firm has hundreds of millions in AI portfolio companies. When you hear an AGI declaration, it is worth asking: what does this person need you to believe, and why today?
What “Uneven Distribution” Actually Means
Andreessen borrowed William Gibson’s famous line — “the future is already here, it’s just not evenly distributed” — to frame AGI as an access problem rather than a capability problem. This is rhetorically elegant and technically evasive.
The implication is that AGI exists somewhere, for someone, and the rest of us just haven’t caught up. But that’s not how capability works. A model that can outperform a human at coding tasks, or financial modeling, or research summarization, is not AGI — it is narrow intelligence applied at scale. Impressive, economically valuable, and genuinely transformative. But calling it AGI is a category error dressed up as insight.
Why This Matters for the Industry
The AGI declaration game has real consequences. Enterprise buyers accelerate procurement cycles. Boards shift strategy. Governments fast-track regulation. Investors price companies on AGI proximity rather than revenue fundamentals.
For enterprise buyers, the effect is concrete. When a figure like Andreessen declares AGI, boards that were planning 18-month AI roadmaps start asking why they aren’t moving faster. Vendors use the declaration as sales ammunition. Budgets shift. Hiring plans change. That’s not a reason to ignore AI. It’s a reason to separate the signal from the performance.
When someone with Andreessen’s platform declares AGI has arrived, the market moves — even if the claim is unfounded. That’s not nothing. But it’s also not a technical assessment. It’s a narrative.
In 2013, the narrative was that wearable computing was inevitable and imminent. Andreessen was so convinced he created a fund to profit from it. The product was discontinued before most people ever tried it.
The technology wasn’t wrong, exactly. The timeline was. And the conviction was mistaken for evidence.
We’ve seen this before.