Move over, Moore. It’s Huang time.
When you live in Austin, you overhear a lot of interesting conversations. Last week at lunch, two Dell sales reps at the next table were shaking their heads about companies building their own AI data centers. One leaned in: “They don’t realize how fast these chips are being upgraded.”
He’s right. And if you’re a telco planning an on-premises AI data center, he’s talking about you.
Telcos are making AI infrastructure decisions using the same playbook they’ve used for decades. Buy the hardware. Own it. Control it. Depreciate it over three to five years. It worked for RAN, core network, even compute and storage.
But for AI? That playbook doesn’t work. It’s a massive waste of money.
Welcome to Huang’s law
You’re used to Moore’s law: computing power doubles roughly every 18–24 months. Your finance team built entire CapEx models around it. Your procurement cycles assume it. Your depreciation schedules depend on it.
Moore’s law made infrastructure ownership rational. But AI doesn’t follow Moore’s law.
AI follows Huang’s law, named after NVIDIA’s Jensen Huang, which says GPU performance MORE than doubles every two years. Not 2x. We’re talking 8–15x improvements.
Look at what’s actually happening:
- NVIDIA: H100 to B200 in under two years—15x performance improvement
- Google TPUs: 275 trillion operations per second (v4) to 4,614 trillion (v7)—16x increase in four years
- AWS Trainium: Trainium2 (December 2024) is 4x faster than Trainium1 (December 2020). Trainium3 (late 2025) will be another 4x faster—16x improvement in five years
To put this in perspective, Moore’s law would predict roughly 32x performance improvement over a decade. AI chips are delivering 1000x. That’s not a rounding error; that’s your entire infrastructure strategy being based on the wrong formula. If you’re building a business case that assumes Moore’s law depreciation schedules, you’re calculating with numbers that are off by more than an order of magnitude. Every assumption about useful life, residual value, and replacement cycles is wrong.
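As a sanity check, the compounding in that comparison works out like this. This is a sketch, not a benchmark: it assumes a clean Moore's-law doubling every two years, and takes the 1000x-per-decade figure cited above as the observed number for AI silicon.

```python
import math

# Moore's law: doubling every 2 years -> 5 doublings in a decade.
moore_decade = 2 ** (10 / 2)        # 32x over 10 years

# Observed AI-chip gain per decade, per the figure cited above.
huang_decade = 1000

gap = huang_decade / moore_decade   # ~31x: "more than an order of magnitude"
doubling_time = 10 / math.log2(huang_decade)  # ~1 year per doubling

print(f"Moore's law, 10 years:  {moore_decade:.0f}x")
print(f"Moore-vs-actual gap:    {gap:.0f}x")
print(f"Implied doubling time:  {doubling_time:.2f} years")
```

In other words, a 1000x decade means performance is doubling roughly every year, not every two, which is exactly the gap that breaks a Moore's-law depreciation schedule.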
Now let’s do the math your CFO needs to see.
Say you spend $50 million in CapEx today to build your own AI infrastructure. In two years, with an 8x performance gain in new chips, that same workload can be run on $6.25 million of new hardware. Your “asset” is now a stranded cost with three years of depreciation left. And you can’t justify replacing it because of sunk costs.
You’re stuck, watching competitors run circles around you with infrastructure that’s 8x faster. While your finance team tells you to “sweat the asset.”
Compare that to the same capacity on the public cloud. Year one: $10 million in OpEx. Year two: still $10 million on current-generation chips. Year three, once that 8x chip generation lands: $1.25 million for the same workload, or 8x the capacity for the same $10 million. Total three-year spend: about $21 million, with costs falling further every generation after that.

That’s right: $50 million in CapEx versus about $21 million in OpEx, and the OpEx buyer is always on the newest chips. That’s the difference between leading and lagging in AI.
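That arithmetic fits in a few lines. This toy model assumes (my assumption, for illustration) that cloud prices for a fixed AI workload drop 8x with each new chip generation; real pricing is far lumpier. It prints the total under a strict two-year generation cadence and under an aggressive yearly one, and both land far below the $50 million build.

```python
# Toy build-vs-rent model under Huang's-law pricing (illustrative only).
def cloud_cost(year_one: float, years: int, gen_years: int) -> float:
    """Total spend when the same workload gets 8x cheaper per chip generation."""
    return sum(year_one / 8 ** (y // gen_years) for y in range(years))

capex = 50_000_000  # build-your-own cluster, paid up front

print(f"CapEx build:            ${capex:>12,.0f}")
print(f"3-yr cloud (2-yr gens): ${cloud_cost(10e6, 3, 2):>12,.0f}")  # $21,250,000
print(f"3-yr cloud (1-yr gens): ${cloud_cost(10e6, 3, 1):>12,.0f}")  # $11,406,250
```

Either way the cadence shakes out, the renter's bill shrinks every generation while the owner's $50 million is locked in on day one.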
It’s not just about the math
Private infrastructure isn’t just more expensive for AI. It’s structurally incapable of delivering competitive AI.
AWS, Google, and Microsoft collectively spend about $150 billion every year on cloud infrastructure R&D. They’re not just buying NVIDIA chips. They’re designing custom silicon. AWS Trainium is purpose-built for transformer models. Google’s TPUs are engineered specifically for machine learning workloads. Microsoft is building AI accelerators directly into Azure.
When you use the public cloud, you get all of this innovation at the cost of your OpEx. When you build your own data center, you get a purchase order for last year’s chips.
And then there’s the ecosystem. Public cloud gives you instant access to pre-trained foundation models: Claude, GPT, Gemini, Llama. You get ML Ops tooling that actually works: SageMaker, Vertex AI, Azure ML. You get seamless integration with data lakes, vector databases, and streaming analytics.
To build it yourself, you’re looking at years of engineering effort. And where exactly is that AI talent coming from? The engineers who can build this infrastructure have better offers from companies that aren’t asking them to reinvent what AWS already solved.
“Telco is different”
I know what you’re thinking: Telco is different. We have real-time requirements. Network performance constraints. Regulatory requirements. We can’t just throw workloads in the cloud.
You’re right that telco is different. You’re wrong that it matters for this decision.
Those real-time requirements? They’re for your network traffic, not your AI training workloads. Your 5G core needs low latency. Your model training doesn’t. In fact, training on hyperscale infrastructure with the latest chips means your models are ready faster, not slower.
And then there’s the sovereign AI argument. I hear this one a lot: “We need to build our own infrastructure to keep national data in-country. Data sovereignty matters.”
It does matter. But you don’t need to own the infrastructure to achieve it.
AWS, Google, and Microsoft already offer regional deployments that meet every data sovereignty requirement. Want your data in Frankfurt? Done. Sydney? No problem. Need to comply with local regulations? They’ve got you covered. The hyperscalers have already invested billions in building compliant, in-country infrastructure across dozens of regions globally.
It’s not about whether you can meet sovereignty requirements with the public cloud—you can. The question is whether you want to spend $50 million to build what AWS already built, just so you can say you “own” it while it depreciates 8x faster than you planned.
Some telcos are positioning themselves as “sovereign AI providers” and talking about trust and national infrastructure. That’s a fine positioning exercise. But be honest about what you’re really providing: access to GPUs that will be obsolete in 18 months, at a price premium, with longer deployment times than your customers could get from a hyperscaler’s regional data center down the street.
Security? Be honest. Who’s more likely to keep your AI infrastructure secure: Amazon’s dedicated security teams, or your overworked IT team that’s also trying to figure out how to install CUDA drivers?
The “telco is different” argument isn’t wrong. You’re just applying it to the wrong problem. What makes you different is your subscriber data, your network insights, your domain expertise. Not your ability to rack servers better than AWS.
“We need to be in control”
You think you’re getting control by building your own AI infrastructure. What you’re actually getting is the privilege of managing obsolescence.
In a Huang’s law world, flexibility IS control.
Real control in AI means being able to switch to Trainium3 the day it drops. Testing your models on TPUs in the morning and H100s in the afternoon to see which performs better. Scaling from 10 to 10,000 GPUs for a weekend training run without filing a purchase order.
The public cloud gives you this flexibility. When AWS releases a new chip generation, you get it immediately. When Google optimizes TPUs for a specific architecture you’re using, you switch with an API call. No committees. No depreciation schedules. No sunk costs.
What actually matters to your business is controlling your data, your models, and your subscriber outcomes. Let your hyperscaler handle the commodity layer—the chips, cooling, maintenance. You focus on the differentiation layer—the AI applications that actually drive revenue.
When the hardware is obsolete in 18 months, ownership isn’t control—it’s being handcuffed to depreciating assets. You can’t pivot. You can’t upgrade. You can’t experiment.
“This is how we’ve always done it”
And there it is. The real reason.
Not security. Not sovereignty. Not control. It’s inertia.
Your procurement process is built for three-year hardware refresh cycles. Your finance team thinks in terms of CapEx and depreciation. Your operations team knows how to manage data centers. This is comfortable. This is known.
AI doesn’t care about your comfort.
The companies winning at AI right now—across every industry—aren’t the ones with the best data centers. They’re the ones with the best models, the best data pipelines, the best applications—none of which require owning infrastructure.
Meanwhile, you’re spending precious AI budget on racking servers instead of hiring data scientists. On managing cooling systems instead of building telco-specific models. On replacing obsolete chips instead of actual innovation.
Every dollar in AI CapEx is a dollar NOT spent on the capabilities that will actually differentiate you in the market.
The tough question
If you build your own data center, two years from now, you’ll have to answer this question: “Why did we spend $50 million on infrastructure that’s now a stranded asset?”
Your answer can’t be “This is how we’ve always done it.” It can’t be “We thought we needed control.” And it definitely can’t be “We didn’t realize chips would improve this fast.”
Because here’s what your board will hear: “We made a massive capital allocation mistake during the most important technology shift in telco’s history.”
Two years from now, your competitors who went with the public cloud will be running on infrastructure that’s 8-16x faster than yours, at a fraction of the cost. They’ll be iterating on models weekly while you’re still trying to justify replacing Year One hardware. They’ll be using their AI budget on data scientists and telco-specific models. You’ll be using yours on depreciation.
When that performance gap translates to subscriber churn, to ARPU stagnation, to missing your guidance—what’s your answer going to be?
The ROI on your AI investment
If your AI infrastructure business case assumes Moore’s law, tear it up. You’re calculating with the wrong formula.
Moore’s law made ownership rational. Huang’s law makes it reckless.
Here’s what to do differently:
Stop thinking about AI infrastructure like network infrastructure. Networks have long refresh cycles. AI doesn’t. The RAN playbook doesn’t work for GPUs.
Reframe the build/buy decision. This isn’t about whether to own AI infrastructure. It’s about whether to buy a depreciating asset in the fastest-moving hardware market in history.
Reallocate your capital. Every dollar you spend on AI CapEx is a dollar you can’t spend on data scientists, telecom-specific models, and actual innovation. Which investment creates competitive advantage?
The bottom line
Which would you rather be two years from now: a telco that doesn’t own the GPUs but can optimize every quarter, riding the wave of performance improvements and cost reductions while building your AI capabilities? Or a telco that owns it all—stuck with old chips, sunk costs, and a depreciation schedule while competitors lap you?
The Dell sales reps had it exactly right. The chip upgrade cycle has changed. Your infrastructure strategy needs to change with it.
It’s time to recalculate your TCO with Huang’s law—and then call your cloud provider.