On paper, “800,000 times faster” sounds like the kind of claim you’re supposed to cheer for. In practice, it’s the kind of claim that should make you sit up straighter and ask what we’re about to hand over to a model we don’t fully understand.
The news item going around is about a new AI model called Heaviside. The pitch is simple: instead of running slow, traditional simulations to see how electromagnetic fields behave in a chip or device, Heaviside predicts that behavior directly from the geometry. Same goal, radically less time. It was trained on tens of millions of designs. And the people behind it say it can even propose “alien structures” — layouts that work better than what human designers would come up with.
If that’s real, it’s not just a speed upgrade. It’s a shift in who (or what) gets to decide what “good design” looks like.
Here’s my judgment: the speed is not the main story. The main story is that we’re moving from “we understand it because we simulated it” to “we trust it because the model says so.” That trade has consequences. Big ones.
Electromagnetics is one of those areas where tiny design choices can make a device behave totally differently. If you’ve ever dealt with wireless problems, signal noise, weird interference, or a chip that works on Tuesday and fails on Wednesday, you know the vibe: you can do everything “right” and still get surprised. Traditional methods are slow partly because reality is messy. They force you to pay the cost of reality upfront.
Heaviside is promising a shortcut: give me the shape, I’ll tell you the fields. And fast enough that you can iterate like crazy. That changes behavior. When iteration becomes cheap, people iterate more and think less. That’s not a moral statement; it’s just what happens. You don’t meditate on each decision when you can crank out 10,000 options before lunch.
Imagine you’re a small hardware team trying to route signals across a dense chip package. Today, the slow part isn’t ideas — it’s waiting for the simulation queue, squinting at results, arguing about tradeoffs, then trying again. If a model really makes that 800,000 times faster, you can search the design space in a way that was basically impossible before. You might find layouts that reduce interference, lower power use, or pack more performance into the same area. That’s a win for teams who are always behind schedule and boxed in by physics.
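To make that concrete, here’s a minimal sketch of what surrogate-driven search looks like. Everything in it is a stand-in: `random_layout`, `predict_fields`, and `score` are hypothetical placeholders, not Heaviside’s actual interface, and the “physics” is a dummy calculation. The point is the shape of the loop: when each prediction is nearly free, brute-force sampling of ten thousand candidates becomes a reasonable default.

```python
# Hypothetical sketch of "cheap iteration." None of these functions are
# Heaviside's real API; they stand in for (1) a generator of candidate
# geometries, (2) a fast learned field predictor, and (3) a scoring
# function over the predicted fields.
import random

def random_layout(rng):
    """Stand-in: sample a candidate geometry from some parameter space."""
    return [rng.uniform(0.0, 1.0) for _ in range(64)]

def predict_fields(layout):
    """Stand-in for the fast surrogate: geometry in, field summary out."""
    return sum(layout)  # placeholder arithmetic, not physics

def score(field_summary):
    """Stand-in objective: e.g. lower predicted interference is better."""
    return -abs(field_summary - 32.0)

rng = random.Random(0)
candidates = [random_layout(rng) for _ in range(10_000)]  # cheap at surrogate speed
best = max(candidates, key=lambda c: score(predict_fields(c)))
```

Ten thousand calls to a full-wave solver can be months of queue time; ten thousand calls to a fast surrogate is a coffee break. That asymmetry is the behavioral shift.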
But there’s a sharp edge here. If the model suggests an “alien structure” that beats your best human design, what exactly do you do with that? You can build it. You can ship it. You can even brag about it. But can you explain it? Can you debug it when the next constraint shows up — heat, manufacturing limits, durability, odd corner cases? Or do you just keep asking the model for another alien answer and hope the stack of guesses holds?
People will say, reasonably, that this is how progress works. We already rely on tools we don’t fully “understand” at the deepest level. We trust compilers. We trust complex simulation software. We trust measurement gear. All true. And the counterargument has real force: if the model is accurate and well tested, refusing to use it is just ego. “Humans should stay in charge” can turn into “humans should stay slow.”
Still, there’s a difference between a tool that calculates according to known rules and a tool that predicts based on patterns it absorbed from past designs. If it was trained on tens of millions of designs, that’s impressive. It’s also a clue about the limits: the model learns what it has seen. The most dangerous failures are the ones that look plausible while being wrong in a way you don’t notice until much later.
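One cheap defensive habit follows directly from that: before trusting a prediction, check whether the input even resembles the training data. Below is a minimal sketch of such an out-of-distribution guard, assuming you can featurize a geometry into a fixed-length vector. The featurizer, the fake training statistics, and the three-sigma threshold are all illustrative assumptions, not anything published about Heaviside.

```python
# Minimal sketch of an out-of-distribution guard. The featurizer and
# thresholds are illustrative, not from any real Heaviside tooling.
import numpy as np

def featurize(layout):
    """Stand-in: reduce a geometry to a fixed-length feature vector."""
    a = np.asarray(layout, dtype=float)
    return np.array([a.mean(), a.std(), a.min(), a.max()])

# Pretend training-set statistics, computed once over the real corpus.
rng = np.random.default_rng(0)
train_features = np.stack([featurize(rng.random(64)) for _ in range(1_000)])
train_mean = train_features.mean(axis=0)
train_std = train_features.std(axis=0)

def looks_in_distribution(layout, k=3.0):
    """Flag geometries whose features sit more than k sigmas from training."""
    z = np.abs(featurize(layout) - train_mean) / (train_std + 1e-12)
    return bool(np.all(z < k))

# Anything flagged False gets routed to the slow, trusted solver instead.
```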
Now picture a larger company using this to design new interconnects or wireless components. The incentive will be to crank the model, choose the best-looking option, and move on. Time is money, and this thing sells time. The risk is that “best-looking” is defined by whatever the training and evaluation setup rewarded. If the evaluation doesn’t match the real world in some edge case, you don’t get a small error. You get a product recall, a reliability problem, or a system that fails only under specific conditions that are hard to reproduce.
There’s also a quiet power shift baked into this. If one group has a model that can search the design space at insane speed, they don’t just build better chips; they build them faster. That means more experiments, more learning, more advantage, and eventually a gap others can’t close. This won’t just affect chip designers. If it really applies to wireless communication and chip interconnects, it touches everything from phones to data centers.
And then there’s the human side. If the best designs start looking “alien,” what happens to the craft of engineering? Do we train new people to understand fields and tradeoffs, or do we train them to prompt a model, check a few outputs, and hope? If the second path wins, you may get short-term speed and long-term fragility — a world where fewer people can reason from first principles when the model is unavailable, wrong, or simply not trained for the new problem.
I’m not saying “don’t use it.” I’m saying the burden of proof should be high, and the testing should be brutal, because the downside isn’t just one bad prediction. The downside is building an entire design process around something that feels like magic, then discovering the magic has blind spots.
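Here’s one concrete shape that brutal testing could take: keep the slow solver in the loop as an auditor, re-simulating a random slice of everything the surrogate approves and tracking where the two disagree. The names, the audit rate, and the tolerance below are assumptions for illustration, not a known workflow.

```python
# Sketch of a "trust but verify" harness: a random sample of
# surrogate-approved designs gets re-checked by the slow, trusted solver.
import random

AUDIT_RATE = 0.05      # fraction of accepted designs re-simulated
REL_TOLERANCE = 0.02   # max relative disagreement we tolerate

def audit(designs, surrogate, slow_solver, rng=None):
    """Re-check a random sample of surrogate-approved designs.

    Returns the designs (with their relative errors) where the fast
    prediction and the trusted solver disagreed beyond tolerance.
    """
    rng = rng or random.Random(0)
    failures = []
    for design in designs:
        if rng.random() > AUDIT_RATE:
            continue
        fast = surrogate(design)
        slow = slow_solver(design)
        rel_err = abs(fast - slow) / max(abs(slow), 1e-12)
        if rel_err > REL_TOLERANCE:
            failures.append((design, rel_err))
    return failures
```

If the failures cluster in one region of the design space, that region is exactly where the “alien” answers should not be trusted without a full simulation.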
If Heaviside really can predict electromagnetic behavior this fast from geometry, how much of the final design process should we be willing to hand over to it before we demand explanations we can live with?