VCX AI Fund IPO Jumps 2,700% to $18B, Far Above NAV

This is the kind of “AI boom” story that sounds exciting until you look at what people are actually buying. A fund goes public, slaps “AI” on the label, and in six days the market decides it’s worth $18 billion. Not because the fund suddenly owns $18 billion of great companies. Because people really, really want a piece of the hottest private AI names—and they’ll overpay to get it.

Based on what’s been shared publicly, this newly launched AI-focused fund, VCX, surged about 2,700% after its IPO and now sits around an $18 billion market cap. The pitch is simple: it gives investors access to equity in “leading AI companies” like Anthropic and OpenAI. That’s the dream for a lot of people who missed earlier waves—because most of the juiciest AI equity is private, locked up, and hard to reach.

Here’s the part that makes this less “wild success” and more “warning sign.” The fund’s actual assets are reportedly valued around $650 million. Yet the public market is pricing the wrapper at a level that implies something like magic is happening inside. Retail demand is pushing the share price far above net asset value. In effect, investors are paying close to 30 times net asset value for exposure to companies like Anthropic and OpenAI.

That’s not “confidence.” That’s impatience getting a ticker symbol.
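For a sense of scale, here’s the back-of-the-envelope math using the approximate figures quoted above (a sketch with the reported round numbers, not audited values):

```python
# Rough premium-to-NAV math for the VCX example.
# Figures are the approximate ones reported in the article.

market_cap = 18_000_000_000   # post-IPO market cap (~$18 billion)
nav = 650_000_000             # reported net asset value (~$650 million)

# Premium multiple: dollars paid per dollar of underlying assets.
premium_multiple = market_cap / nav
print(f"Premium to NAV: ~{premium_multiple:.1f}x")  # ~27.7x

# A 2,700% surge means the price is 28x the IPO price (1 + 27).
surge_pct = 2700
price_multiple = 1 + surge_pct / 100
print(f"Price multiple since IPO: {price_multiple:.0f}x")  # 28x
```

The two numbers line up: a 2,700% jump and an $18 billion cap on $650 million of assets both describe the same roughly 28x gap between what buyers pay and what the fund holds.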

If you’re wondering how this happens, it’s not complicated. People want the story more than they want the math. A normal investor can’t easily buy shares in a private AI lab. But they can buy a public fund that says it owns some. The fund becomes a proxy for an entire belief system: “AI will eat the world and I refuse to miss it.” When enough people think that way at the same time, price stops being about underlying value and becomes about access.

And access can be a very expensive drug.

Imagine you’re a regular person who’s been watching AI headlines for a year. You’ve seen friends make money on other trades. You’ve heard that the biggest gains happen before companies go public. Then you see a brand-new IPO that promises a basket of the most famous private AI names. You don’t want to study net asset value. You want exposure. You buy. Thousands of people do the same thing. The price rises. That rise becomes the marketing.

Another scenario: you’re not even especially bullish on AI. You’re just watching a chart go vertical and thinking, “I’ll get in and get out fast.” The problem is everyone thinks that. When the trade is crowded and the asset underneath can’t quickly change to justify the new price, exits get small fast. Someone always ends up holding the bag, and it’s usually not the insiders or the professionals who understand the structure best.

What’s at stake here isn’t just whether some traders get burned. It’s what this kind of frenzy does to behavior. When a fund can trade at an extreme premium simply because it has a little exposure to famous private companies, it creates an incentive to manufacture more “access products” that look like ownership but behave like hype. More wrappers. More fees. More complexity. More people thinking they’re investing, when they’re mostly buying scarcity.

It also quietly changes how the AI companies themselves get treated. If public markets reward the label more than the fundamentals, pressure builds around private valuations too. Employees see headlines and assume their equity is worth a fortune right now. Early investors feel emboldened. New money shows up expecting only up-and-to-the-right outcomes. If the real business results don’t match the market’s emotional pricing, you don’t just get a stock drop. You get layoffs, stalled projects, and a whole lot of cynicism that spills onto the real innovation.

Now, there is a fair counterpoint: sometimes markets are early, not wrong. Maybe investors are paying up because they think the fund will buy more stakes, get better access, and end up holding far more valuable assets later. Maybe this is the public market doing what it does—pricing future demand and future growth, not current balance sheets. And sure, a big premium can persist longer than skeptics expect.

But paying nearly 30 times net asset value for the same underlying exposure is not “future growth.” It’s a tax for being late.

The ugly possibility is that this becomes a template. If it works once, it will be copied. The next product won’t even need to be good; it just needs the right names in the brochure and the right timing on social media. That’s how you get bubbles that don’t pop from one big event, but from slow disappointment—people realizing, one by one, that they bought a symbol of AI rather than AI itself.

I’m not rooting against VCX. I’m rooting against the idea that the easiest, loudest way to invest in AI is automatically the smartest. Because when a fund’s market cap explodes while its underlying assets don’t, the “growth” is mostly coming from other buyers—until it isn’t.

So here’s the real debate: should regular investors be protected from paying extreme premiums for “access” products like this, or is that an acceptable price of open markets where anyone can buy whatever they want?
