This deal is either a smart way to keep AI progress moving, or a clean example of how a few giants quietly turn an “open future” into a company town.
Amazon just doubled down on Anthropic with another $5 billion, taking its total commitment to $13 billion. In return, Anthropic is now basically an anchor tenant for Amazon’s cloud business, and it’s pledged to spend more than $100 billion on AWS cloud services over the next decade. The public story is simple: AI needs huge amounts of computing power, and this secures it. The less comfortable story is also simple: the companies that own the computers get to shape what AI becomes.
On paper, I get why Anthropic would do this. If you’re building frontier AI systems, you don’t just need money. You need access to chips, data centers, and steady capacity when demand spikes. If you can’t get that, you can’t ship. And if you can’t ship, you’re dead—especially while the rest of the AI world is raising eye-watering amounts of funding.
But when one cloud provider becomes this central to one of the most important AI labs, it changes the power balance. This isn’t like renting office space. This is like building your entire factory inside someone else’s gates, then promising you’ll buy your electricity from them for ten years.
A lot of people will argue this is just normal scaling. Fine. Yet the consequences don’t stay inside one company.
Imagine you’re a mid-sized software company trying to build an AI feature that competes with something Anthropic offers. You’re also on AWS, because almost everyone is somewhere. Now your key infrastructure supplier is financially tied to a competitor running on that same cloud. Even if everyone behaves perfectly, that’s a weird place to be. You’re going to wonder about pricing, priority access, and who gets the newest hardware first when supplies are tight. And even if none of that ever turns into blatant favoritism, the fear alone changes behavior. People avoid risk when the floor feels tilted.
Or imagine you’re a hospital system or a bank deciding which AI model to bake into your workflows. If the best model is deeply tied to one cloud, that “choice” quietly becomes a choice about your whole tech stack. That can lock you in for years. Not because you love it, but because unpicking it later is painful and expensive.
That’s why the anchor-tenant detail matters. When a company publicly commits to spending over $100 billion with one cloud provider, that’s not just “we like their service.” It’s a marriage. And marriages shape decisions in ways that don’t show up in press releases.
There’s also a bigger pattern here. AI funding is surging. The money is flowing to whoever can plausibly claim they’ll build the next big model. But the real bottleneck isn’t ideas. It’s compute. So the winners won’t just be the teams with the best researchers. They’ll be the teams with the best access deals. That’s not the kind of competition most people think they’re watching.
And it gets more tangled when you look around at the rest of the AI landscape. OpenAI, from what’s been shared publicly, is also shifting leadership and leaning harder into enterprise monetization. That makes sense. Consumer hype is loud, but businesses pay bills. Still, the more these labs chase big enterprise contracts, the more cautious they’ll become about risk, controversy, and anything that makes procurement teams nervous. In other words: the models might get “safer,” but also more bland, more restricted, more shaped by big customers.
Then there’s the government angle. Public reporting says the NSA is deploying Anthropic’s technology in some form. I’m not going to pretend I know the details, because they aren’t fully public. But it raises the obvious tension: companies building “helpful” AI also want major government customers, and those customers have their own priorities. That doesn’t automatically mean something sinister. It does mean the incentives are not purely about what’s best for everyday users.
To be fair, there’s a strong counter-argument: without deals like this, progress slows, and maybe the US loses ground to other countries that will fund and build anyway. Also, cloud providers investing in AI labs could create stability. It could mean fewer chaotic scrambles for compute, fewer sudden outages, fewer labs cutting corners to survive. If you want reliable tools, you need reliable infrastructure.
But I don’t love the direction. When a few platforms become the gatekeepers of compute, and the top AI labs become financially intertwined with those platforms, “the market” starts to look like a controlled ecosystem. The danger isn’t one dramatic scandal. It’s a slow drift toward a world where real choice disappears, startups can’t afford to compete, and innovation happens mostly where it’s convenient for the biggest balance sheets.
And if you’re an everyday user, you may not notice at first. You’ll just notice prices creeping up, features getting bundled, and policies tightening. If you’re a developer, you’ll notice the ground rules changing mid-game. If you’re a regulator, you’ll be stuck trying to prove harm after the lock-in is already complete.
So here’s the question I can’t shake: as these AI labs and cloud giants tie themselves together with commitments this big, at what point do we stop calling it “competition” and start treating it like essential infrastructure that needs real rules?