
Meta Might Actually Pull Off This AI Comeback. And the Timing Has Nothing to Do With Luck

For most of the past year, writing about Meta’s AI strategy meant writing about disappointment. Llama 4 underwhelmed when it launched in April 2025. The benchmark numbers turned out to be inflated. Developers moved on to OpenAI’s Codex and Anthropic’s Claude Code. Meta spent tens of billions of dollars on infrastructure and talent and had very little to show for it. The dominant question was whether Mark Zuckerberg’s $14.3 billion bet on Alexandr Wang and Scale AI was going to be remembered as a strategic masterstroke or as one of the most expensive corporate mistakes in tech history.

Yesterday, Meta released Muse Spark — the first model out of Meta Superintelligence Labs and Wang’s first deliverable as Chief AI Officer. The model is good. Not the best in the world, and Meta isn’t claiming otherwise, but legitimately competitive with frontier systems from OpenAI, Anthropic, and Google. And it’s free, with no tiered pricing, currently live on the Meta AI app and meta.ai in the US, with rollout to WhatsApp, Instagram, Facebook, and Messenger coming in the next few weeks.

On its own, this would be a solid product launch story. But Muse Spark didn’t arrive in a vacuum. It arrived in the middle of a much larger industry shift, one that’s been building for months, that suddenly makes Meta’s approach look smarter than it had any right to look.

This is an argument for why the $14.3 billion bet might actually work.

The Industry Has Been Quietly Getting More Expensive for a While

Start with what’s happening to the rest of the AI market. The narrative most people carry around — that AI is getting cheaper and more accessible — was true in 2024. It has been getting steadily less true throughout 2025 and into 2026.

Anthropic has been the most visible example. The company has been tightening access to Claude in stages for several months. In February 2026, Anthropic reaffirmed an existing policy forbidding the use of third-party harnesses with Claude subscriptions. In late March, Anthropic changed how subscription usage was calculated so customers burned through their limits faster during peak hours. On April 4, the company moved from policy warnings to billing-based enforcement: subscribers can no longer use their Claude subscription limits for third-party harnesses, including OpenClaw, and instead need to pay through a pay-as-you-go option billed separately from the subscription. The restriction will extend to all third-party harnesses in the coming weeks.

The technical reasoning Anthropic offered is sound. Claude’s first-party tools are optimized for prompt cache reuse. Third-party harnesses bypass those efficiencies, which creates outsized infrastructure strain. Boris Cherny, head of Claude Code at Anthropic, wrote on X that the company’s subscriptions weren’t built for the usage patterns of these third-party tools, and that capacity needs to be managed thoughtfully. From a margin standpoint, this is rational. For one reporter, a $20 monthly Claude subscription enabled about $236 of token usage in March, with others reporting ratios as skewed as 36x when comparing price paid to list price value. Anthropic was bleeding money on heavy users, and it stopped.
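The ratios in that paragraph are easy to sanity-check. Here is a quick back-of-envelope sketch using only the figures reported above; the function name is our own illustration, not anyone's API:

```python
# Back-of-envelope math for the subscription economics described above.
# The $20 price, the ~$236 usage figure, and the 36x ratio all come from
# the article's reporting; this just makes the arithmetic explicit.

def usage_ratio(monthly_price: float, list_price_usage: float) -> float:
    """How many dollars of list-price token usage each subscription
    dollar bought."""
    return list_price_usage / monthly_price

# One reporter's March numbers: $20 subscription, ~$236 of token usage.
print(f"Reporter's ratio: {usage_ratio(20, 236):.1f}x")  # -> 11.8x

# The most skewed reported ratio, 36x, implies ~$720 of usage on the
# same $20 plan.
print(f"Usage implied by a 36x ratio: ${36 * 20}")  # -> $720
```

At roughly 12x to 36x between price paid and list-price usage, the margin pressure behind the policy change is straightforward.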

But from a developer’s standpoint, the practical impact is the same regardless of the reasoning: a workflow that used to cost $20 a month now costs hundreds. And Anthropic has been clear that more restrictions are coming.

OpenAI is moving in the same direction, just with a different tactic. Today, April 9, 2026, OpenAI introduced a new $100 per month ChatGPT tier, sitting between the $20 Plus plan and the existing $200 Pro plan. The new tier offers 5x more Codex usage than the Plus plan, and OpenAI has made no bones about the fact that the pricing is aimed at Anthropic, which has long offered a $100 per month option for Claude. The free ChatGPT tier now includes ads. And alongside the new $100 plan, OpenAI confirmed it is rebalancing Codex usage on the $20 Plus plan to support more sessions throughout the week rather than longer sessions in a single day. In other words, the cheap plan got worse, and a new, more expensive plan appeared to capture users who can no longer function on the cheap plan.

This is not an isolated decision. OpenAI also updated Codex pricing on April 2, 2026, moving from per-message pricing to API token-based pricing for Plus, Pro, Business, and new Enterprise customers. The direction of travel is consistent. AI is becoming more usage-metered, more tier-stratified, and more expensive at the high end.

Google has been doing similar things with Gemini access, layering paid subscriptions over what used to be free features and pushing heavy users toward enterprise contracts.

Step back from any individual change and the pattern is unmistakable. The frontier AI labs are all under pressure to monetize. They have raised enormous amounts of money at enormous valuations, and the path to justifying those valuations runs through extracting more revenue per user. Free tiers are getting worse. Paid tiers are getting more expensive. Power users are getting capped or pushed to higher plans. The all-you-can-eat era of AI is ending, and it’s been ending in slow motion for several months.

Meta Is Walking in the Opposite Direction

This is the context Muse Spark arrived in. And it’s why the launch matters more than the model itself.

Meta is not under the same pressure as OpenAI or Anthropic. It does not need to make money from AI directly. Meta’s advertising business generated over $164 billion in revenue in 2025. Muse Spark is not a product that needs to monetize. It is a feature that makes Meta’s existing products — WhatsApp, Instagram, Facebook, Messenger — more engaging, more useful, and better at capturing the kind of intent and behavior data that feeds ad targeting. Every conversation a user has with Meta AI inside WhatsApp is, from Meta’s perspective, both a user benefit and a data point that improves the actual business.

That structural advantage changes what kinds of decisions Meta can make. Meta can ship a frontier-competitive AI model to billions of users for free, with no usage caps and no tiered pricing, indefinitely, without ever needing to convert any of those users to a paid plan. OpenAI and Anthropic cannot do that. Their entire business depends on making AI users into AI customers.

And Meta can afford the compute. The company is projected to spend $115 billion to $135 billion on AI infrastructure in 2026, roughly double its 2025 spending. That number sounds insane in isolation. It looks more rational when you consider what it actually buys: the ability to run a free, frontier-class model at the scale of WhatsApp, indefinitely, as a feature of the ad business rather than as a product that has to stand on its own.

This asymmetry has existed for a while. What changed yesterday is that Meta finally has a model good enough to make the asymmetry matter.

The Model Itself Is Better Than the Headlines Suggest

Most coverage of Muse Spark has focused on what it can’t do. The model trails GPT-5.4 and Claude Opus 4.6 on coding benchmarks. On ARC AGI 2, the abstract reasoning benchmark, it scores 42.5 against Gemini 3.1 Pro’s 76.5. Meta itself acknowledges these gaps. The headline takeaway from a lot of analysts has been that Muse Spark is a credible second-tier model that doesn’t redefine the frontier.

That framing misses what’s interesting. On the benchmarks where Muse Spark wins, it doesn’t win narrowly; it wins by enormous margins. On HealthBench Hard, Muse Spark scored 42.8, roughly double Gemini 3.1 Pro’s 20.6 and nearly triple Claude Opus 4.6’s 14.8. Meta collaborated with over 1,000 physicians on the training data, and the result is the most capable medical reasoning model anyone has shipped. On CharXiv Reasoning, a figure and chart understanding benchmark, Muse Spark scored 86.4, ahead of GPT-5.4’s 82.8 and Gemini 3.1 Pro’s 80.2. On GPQA Diamond, the PhD-level scientific reasoning benchmark, it scored 89.5, in legitimate frontier territory.

On the independent Artificial Analysis Intelligence Index, Muse Spark scored 52, placing it fourth overall behind only Gemini 3.1 Pro, GPT-5.4, and Claude Opus 4.6. Llama 4 Maverick scored 18 on the same index. That is a roughly 3x improvement in nine months, achieved by a team that rebuilt Meta’s entire AI stack from scratch.

And then there is the efficiency story, which is the part that actually matters for what Meta is about to do. Meta says Muse Spark matches Llama 4 Maverick’s capabilities with over 10x less compute. That single number is what makes the free, mass-distribution strategy economically possible. A model that costs an order of magnitude less to run is a model you can give away to billions of people without bankrupting the ad business that funds it.
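To see why that efficiency number is the load-bearing one, here is a deliberately rough sketch. Every input below (user count, queries per day, cost per query) is an invented placeholder, not a Meta figure; the only thing taken from the article is the 10x efficiency ratio itself:

```python
# Hypothetical unit economics of serving a free assistant at messaging scale.
# All three constants are invented for illustration; only the 10x ratio
# between the "old" and "new" cost comes from Meta's stated claim.

DAILY_USERS = 1_000_000_000       # assumed AI-engaged users (placeholder)
QUERIES_PER_USER_PER_DAY = 3      # assumed (placeholder)
COST_PER_QUERY_OLD = 0.002        # assumed $/query at Llama 4-era efficiency

def annual_serving_cost(cost_per_query: float) -> float:
    """Annual inference bill under the assumptions above."""
    return DAILY_USERS * QUERIES_PER_USER_PER_DAY * cost_per_query * 365

old = annual_serving_cost(COST_PER_QUERY_OLD)        # Llama 4-era cost
new = annual_serving_cost(COST_PER_QUERY_OLD / 10)   # with the claimed 10x gain
print(f"old: ${old/1e9:.2f}B/yr, new: ${new/1e9:.2f}B/yr")
```

Under these made-up inputs, the bill drops from billions per year to hundreds of millions. The absolute numbers are fiction, but the structural point holds at any scale: a 10x efficiency gain is the difference between a free assistant that threatens the ad margin and one that is a rounding error against it.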

Wang’s first model wasn’t designed to win every benchmark. It was designed to be efficient enough to deploy at WhatsApp scale, and good enough at the things Meta’s users actually do — health questions, visual understanding, shopping research, casual reasoning — to be genuinely useful inside the apps people already use. On those terms, it succeeded.

The Distribution Path Is the Real Story

Muse Spark is currently live on the Meta AI app and meta.ai website in the US. In the 24 hours after launch, the Meta AI app jumped from #57 to #5 on the US App Store. Sensor Tower estimated roughly 46,000 US downloads on launch day alone. Those are good numbers for a standalone app launch. They are not the actual play.

The actual play is the rollout to WhatsApp, Instagram, Facebook, Messenger, and Ray-Ban AI glasses, which Meta has confirmed is coming in the next few weeks. When that rollout completes, a frontier-competitive AI model will be sitting inside the apps that roughly 3 billion people open multiple times every day. WhatsApp alone has over 2 billion monthly users. No other AI company has anything remotely close to this distribution path.

For comparison: OpenAI’s ChatGPT has approximately 400 million monthly active users. Anthropic’s Claude has a small fraction of that. Even Google’s Gemini, despite being integrated across Search and Android, doesn’t have the same kind of habitual engagement that WhatsApp commands. Distribution at Meta’s scale isn’t a marketing advantage. It’s a structural feature of the business that the AI labs cannot replicate without somehow building their own consumer messaging platforms first.

The right way to think about this is not “Meta launched a chatbot.” It’s “Meta is about to add a free, frontier-competitive AI to every WhatsApp chat on the planet, in the same window that Anthropic and OpenAI are making their chatbots more expensive to use.”

Why This Is the Right Window for Meta

Pull these threads together. The frontier AI labs have spent the past several months tightening access. Anthropic has been steadily restricting how subscribers can use Claude. OpenAI has been reshuffling pricing tiers and introducing more expensive plans for the use cases that used to be covered by cheaper ones. Free tiers are getting worse across the board. Heavy users are being pushed onto plans that cost five to ten times what they used to pay.

This is happening because OpenAI and Anthropic are running the business they’re supposed to run. They are pure-play AI companies. They have to extract revenue from their models to justify their valuations and fund their compute. None of these decisions are villainous. They are the entirely rational consequences of a business model that depends on monetizing inference.

But they create a gap. A real one. There is now a substantial population of AI users, including casual consumers, students, small businesses, and people in markets where $20 a month is meaningful money, who are about to find their existing tools getting more expensive or more restricted. These are not the highest-value AI customers. They are not the developers running OpenClaw at $1,000 a day in inference costs. But there are billions of them, and they are exactly the population that Meta is structurally positioned to serve.

If you are a WhatsApp user in India, or Brazil, or Indonesia, or any of dozens of markets where Meta dominates messaging, the calculus is about to look very different. A frontier-competitive AI assistant will be sitting inside the app you already use, for free, with no signup, no payment, no usage limit. That is not the same product as ChatGPT Pro for $100 a month. It doesn’t need to be.

Meta has been waiting for a moment when the rest of the industry made distribution and free access look attractive again. The industry just provided that moment.

What Could Still Go Wrong

This is the optimistic case. It is not a guaranteed outcome.

Meta has been caught manipulating benchmark results before. After Llama 4 launched, the company was found to have published benchmark scores from a fine-tuned variant that wasn’t the model actually available to users. Independent verification of Muse Spark’s benchmarks is still in early days. If those numbers don’t hold up under scrutiny, this becomes the second consecutive Meta AI launch the company can’t defend, and the comeback story dies before it starts.

The coding gap is also genuinely a problem. OpenAI’s Codex has over 3 million weekly users and is growing 70% month over month. Anthropic’s Claude Code is the de facto standard for serious developer use. These are the highest-paying customers in the consumer AI market, and Meta does not have a credible product for them. Shipping a free chat assistant to 3 billion casual users is impressive. It is a different business from selling agentic coding tools to developers, and Meta is conceding the more lucrative half of the market.

Meta’s history with consumer AI is also mixed. The Meta AI assistant has existed inside WhatsApp and Instagram for over a year, and adoption has been modest. Distribution is necessary but not sufficient. If users don’t actually engage with Muse Spark inside WhatsApp the way they engage with ChatGPT in a browser, the distribution advantage becomes academic. The Meta AI app is currently #5 on the App Store, but ChatGPT and Gemini are still ahead, and Meta has historically struggled to convert installs into habitual usage for AI products specifically.

And Wang is a single point of failure. Meta’s entire AI strategy is now built around one person who joined nine months ago and has shipped exactly one model. That model is good. The next ones need to be better. AI talent is being poached at hundred-million-dollar pay packages across the industry, and keeping the Meta Superintelligence Labs team intact and productive over the next two years is an open question.

The Bottom Line

Meta’s AI strategy has been a story of expensive disappointment for most of the past year. The release of Muse Spark doesn’t end that story by itself. But it is the first piece of evidence that a comeback is starting to look real. Wang’s team has shipped something credible, the efficiency gains make mass distribution economically viable, and the rest of the industry is helpfully drifting toward the kind of paid, restricted, premium-tiered AI that leaves room for a free competitor.

The $14.3 billion question was never whether Wang could build a good model. It was whether Meta could build a good model in time to matter, in a market that was still hospitable to a free, ad-subsidized approach. Muse Spark suggests the answer to the first question is yes. The behavior of OpenAI and Anthropic over the past few weeks suggests the answer to the second question is also, surprisingly, yes.

Meta might actually pull this off. The interesting thing isn’t that Muse Spark exists. It’s that the timing works.

Rohit Yadav
Rohit is the CEO and editor-in-chief at Analytics Drift.