
Google Cloud AI Revenue Jumps 800%; Gemini Enterprise MAUs Up 40% QoQ

Author: Andrew
Published in: AI

An 800% revenue jump always sounds like the future arriving early. It can also be the oldest trick in business storytelling: make the number big enough that nobody asks what it’s made of.

Alphabet is out saying revenue from products that use its AI models is up 800% year over year. They’re also saying Gemini Enterprise paid monthly active users grew 40% quarter over quarter. And on the Google Cloud side, they reported that the number of cloud deals between $100 million and $1 billion doubled year over year. Based on what’s been shared publicly, the message is clear: enterprises are buying, and they’re buying in serious chunks.

My read: this is real momentum, but it’s also a dangerous moment to confuse “demand” with “durability.”

The big-deal stat is the one that makes me sit up. A $100 million to $1 billion cloud deal isn’t someone casually trying a tool. That’s a board-level decision, a long migration plan, and usually a multi-year bet. If those deals are doubling, that suggests Google Cloud is not just getting invited to the meeting; it’s winning enough to matter.

And the Gemini Enterprise user growth is another signal that this isn’t only demos and pilots. Paid usage moving up quarter to quarter usually means the tool is getting into real workflows: support teams using it to draft replies, analysts using it to summarize docs, developers using it to speed up routine tasks. People don’t keep paying for software that never leaves the “nice experiment” stage.

But “800% revenue growth from AI products” is where I get skeptical. Not because it’s impossible, but because it’s easy to make that number huge when the starting point is small, or when “AI products” includes a wide mix of things that aren’t all the same business. Is this mostly add-ons inside existing contracts? Is it usage-based spikes that could cool off? Is it one or two big customers expanding fast? The number might still be impressive, but the shape of it matters.
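To make the base-effect point concrete, here's a tiny sketch with entirely made-up numbers (none of these figures come from Alphabet's reporting): a product line can post 800% growth and still be a single-digit slice of the segment it sits in.

```python
# Hypothetical numbers to illustrate the base effect: an 800% jump
# from a small starting point can still be a small slice of the whole.
def growth_pct(old: float, new: float) -> float:
    """Year-over-year growth as a percentage."""
    return (new - old) / old * 100

# Assume (invented for illustration) AI-product revenue grows from
# $0.5B to $4.5B inside a segment doing roughly $48B a year.
ai_old, ai_new = 0.5, 4.5    # billions, hypothetical
segment_total = 48.0         # billions, hypothetical

print(f"AI growth: {growth_pct(ai_old, ai_new):.0f}%")
print(f"Share of segment: {ai_new / segment_total:.0%}")
```

The headline number and the share of the business answer different questions, which is exactly why the shape of the 800% matters more than the figure itself.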

There’s also a quieter claim tucked into the announcement: improved infrastructure. That’s not a side detail. That’s the whole fight.

Enterprise AI isn’t just “who has the smartest model.” It’s who can run it cheaply, reliably, and in a way that doesn’t freak out a company’s security team. If Google has genuinely improved the plumbing—faster, more stable, easier to deploy—then this growth is less about hype and more about friction getting removed. And when friction drops, budgets move.

Here’s where the stakes get real. Imagine you run a mid-size bank. Your CEO wants “AI everywhere,” but your compliance team is already nervous. You don’t want ten different teams signing up for random tools with unclear data handling. A big cloud vendor offering an enterprise version, with controls and billing and support, looks like the safe path. If Gemini Enterprise is growing, it might be because it’s becoming the “approved” AI tool inside companies. That’s not sexy, but it’s powerful.

Now imagine you’re a startup selling a specialized AI tool for customer service, or legal review, or internal search. You’re not just competing on features anymore. You’re competing against the gravity of the cloud contract. If a company already has a huge relationship with Google Cloud, and Google can bundle AI capabilities into that relationship, a lot of smaller vendors are going to get squeezed. Not because they’re bad, but because procurement loves consolidation.

That’s the part I find both promising and concerning.

Promising, because standardizing on a few platforms can reduce chaos. People complain about “shadow AI” for a reason. A single enterprise platform can mean clearer rules, better monitoring, and fewer leaks.

Concerning, because once AI becomes a line item inside giant cloud deals, it stops being an open market. It becomes a package. And packages don’t always reward the best product; they reward the strongest sales motion. If this trend holds, we could end up with a world where “enterprise AI” is basically whatever your cloud provider sells you, whether or not it’s the best fit for your team.

There’s another risk hiding in these growth numbers: expectations. When executives hear “800%,” they start planning like that curve continues. They hire, they reorganize, they promise productivity gains. Then reality shows up: models are helpful but messy, errors still happen, and people need training. If the business case depends on perfect adoption, it will disappoint.

On the flip side, maybe the real story is simpler. Maybe enterprises have moved past the debating stage. They’ve accepted that AI is going to be part of work, and now they’re picking vendors they trust to run it at scale. If that’s what’s happening, Google’s position makes sense: enterprise relationships, deep infrastructure, and a product that’s finally getting used enough to show traction.

Still, I can’t shake the feeling that we’re watching two things at once: real adoption and a land grab. The winners will be the companies that turn “AI excitement” into boring, repeatable value—fewer tickets, faster cycles, better decisions—without creating new disasters in privacy, accuracy, or cost.

If these huge cloud-and-AI deals keep doubling, do we end up with a healthier, more controlled enterprise AI world—or do we just lock most companies into a few platforms before we even figure out what good AI use actually looks like?
