Think twice before building your own public cloud

The other day I was talking with a telco exec in South America who told me he was planning to build his own public cloud. After my double take, I asked him, “Why on earth would you do this?”

His answer: the government in his country didn’t allow him to take data out of its borders, and none of the hyperscalers had yet built a data center there, so he was left with no other choice but to build his own public cloud.

These two objections (data sovereignty, and “there’s no hyperscaler in my country”) are among the most common reasons telcos cite when they conclude they must build their own public clouds.

Let’s talk about why I think this is a really bad idea.

Are you sure your data can’t leave?

Many telcos believe that data sovereignty is a hard constraint for them, and it’s one of the first objections CxOs raise as a reason why they can’t use the public cloud. Telcos should absolutely be aware of the data privacy and data location laws in the areas where they do business, but too many of them err too far on the side of caution, assuming restrictions exist where in actuality there are none.

For example, in the last week I’ve talked to three different telcos, in three different countries, on three different continents (Bolivia, Serbia, Zimbabwe), who have used this objection with me – and upon further research I discovered that none of the three had the full picture. They are correct that restrictions exist; however, in each case there was a way to work around the restriction and still use the public cloud.

Protecting data and abiding by the law are real obligations, and data sovereignty is certainly a contentious issue. CxOs are often told, incorrectly, by their staff that strict rules govern data sovereignty, and CxOs believe them. If this is what you believe, I suggest you spend some time actually researching the regulations; you might be surprised.

Check out this quick run-down of the whats and wheres of data sovereignty. The Cloud Security Alliance Consensus Assessments Initiative’s site also has a ton of resources, including a questionnaire of things that a cloud customer (i.e., YOU) or auditor should consider asking of their cloud provider. In my experience, very, very few countries have completely blocked your ability to use a BFC (big freaking cloud), even when it’s not in your country. My advice: if you can use a hyperscaler, you should.

No Hyperscaler For Miles and Miles

But for the sake of this blog, let’s pretend you’re unfortunately in one of the few countries where data truly can’t leave, and there’s no hyperscaler data center within your borders. Before you pull the trigger on building your own public cloud, hold the phone.

Data centers are incredibly expensive to build. If you want evidence of just how freaking expensive, look no further than Amazon. The company recently announced that it’s rolling out a new data center in Hyderabad, India. According to this blog by Jeff Barr, Amazon’s Chief Evangelist, it’ll open in mid-2022 with three Availability Zones. This will be Amazon’s second region in India and its eleventh in Asia, and will bring its global total to 29. You can check out all of Amazon’s locations here. Google and Microsoft Azure are busy too: Google has 21 and Azure reportedly has more than 100.

Though you may already manage sophisticated data centers of your own, don’t underestimate how much harder they become to run once you open them up to your enterprise customers.

Barr talks about ‘regions,’ and his use of Amazon lingo is actually pretty apt. Most people probably imagine a data center as a building with a load of servers – albeit on a pretty industrial scale. For a more accurate picture, read Corey Quinn’s blog, which goes into more detail on what a ‘region’ is. In short, building a data center involves:

  • Multiple buildings
  • Redundant power lines between buildings
  • Fiber between buildings, the internet backbone, and your proprietary networks
  • Tens of thousands of servers and other hardware
  • Deploying services into the data center
  • Hiring and deploying staff

ONE BILLION DOLLARS

The bill for Hyderabad is a whopping $2.8 billion.

AWS doesn’t build out a region without at least three availability zones, which works out to roughly $900 million per zone. Ouch. And if anyone has perfected and optimized the process of building a data center, it’s AWS. So, for the purposes of this blog, let’s assume it costs about $900 million to build a data center with a single availability zone. Even if you only build the equivalent of one availability zone, you’re looking at close to $1B of capex just to get this thing off the ground, and we haven’t even gotten to the first hardware refresh yet.
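The per-zone figure is simple division; here’s a quick sketch using the blog’s own numbers (the $2.8 billion total and three Availability Zones come from the Hyderabad announcement; this is back-of-the-envelope math, not AWS pricing):

```python
# Back-of-the-envelope capex math, using the figures cited in this post.
# Not official AWS pricing: just the reported $2.8B build-out divided by
# the three Availability Zones the region launches with.

region_cost_usd = 2.8e9    # reported cost of the Hyderabad region
availability_zones = 3     # AWS regions launch with at least three AZs

cost_per_az = region_cost_usd / availability_zones
print(f"~${cost_per_az / 1e6:,.0f}M per availability zone")  # ~$933M
```

Rounding that down gives the roughly $900 million-per-zone figure used throughout this post.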

When all else fails…

But what if you really, really can’t use the public cloud? Well, in that case, I still have ideas for you.

If you are hell-bent on building your own public cloud, then I recommend using as much software from the public cloud providers as possible. Managed services are available, such as AWS Outposts, which offers:

  • The same AWS infrastructure, AWS services, APIs, and tools to any data center, co-location space, or on-premises facility
  • Access to the full range of AWS services available in the region to build, manage, and scale your on-premises applications using AWS services and tools
  • Local processing for datasets that can’t easily be migrated to the cloud due to cost, size, bandwidth, or timing constraints, while keeping your data lakes and ML training in a region
  • A consistent hybrid cloud architecture: you can set up data processing on premises, easily move data to the cloud for long-term archival, and control where your workloads run and where your data resides, while using local operational tooling for things like monitoring and stability

or GCP Anthos, which enables you to:

  • Run Kubernetes clusters anywhere, in both cloud and on-premises environments
  • Define, automate, and enforce policies across environments to meet your security and compliance requirements
  • Integrate security into each stage of the application lifecycle while offering a comprehensive portfolio of security controls across all of these deployment models
  • Unburden ops and development teams by empowering them to manage and secure traffic between services while monitoring, troubleshooting, and improving application performance
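To make the policy bullet concrete: Anthos includes Policy Controller, a policy engine built on the open-source OPA Gatekeeper project, and the same constraint can be applied to every cluster whether it runs on premises or in the cloud. A minimal sketch (the constraint name and the data-residency label are hypothetical examples; K8sRequiredLabels is a standard Gatekeeper constraint template):

```yaml
# Hypothetical Policy Controller / Gatekeeper constraint: require every
# namespace to declare a data-residency label, enforced identically
# across on-premises and cloud clusters.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredLabels
metadata:
  name: require-data-residency-label
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Namespace"]
  parameters:
    labels:
      - key: "data-residency"
```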

Google Cloud Platform has gone even a step further for analytics with the recently announced BigQuery Omni, which allows you to use its analytics engines and databases while keeping your datastores in place – whether in the public cloud or on premises.

By using as much public cloud software as you can, you reduce your own investment and realize efficiencies in your business operations and workflows. This puts you in a prime position to move to the public cloud later and capitalize on the hyperscalers’ ever-expanding footprints, when either they open a data center in your country OR your country adjusts its data regulations so you can move data out. If you set up your own public cloud in a way that leverages public cloud software, you’ll be able to pivot quickly to the BFCs if and when the time comes, and use a BFC to its maximum advantage.

There’s always a tweet

So before you pull the trigger on almost $1B of spend to build your own public cloud, spend some time reading up on Verizon’s foray into the public cloud: it tried to do exactly this, first acquiring a bunch of data centers, then bailing and selling off the whole business.

In 2011, Verizon thought it was a great idea to spend $1.4 billion to acquire data center provider Terremark. Five years later, it realized it couldn’t compete with the might of the hyperscalers, dumped the business on Equinix (similar to a move made by CenturyLink in 2016), and completely exited the business of building its own cloud. Verizon had clearly spent enough money to do it, so why didn’t it work?

One word: software. Telcos believe hardware infrastructure is the key to building a successful cloud business. While that is partly true, they are missing the BIGGER piece of the puzzle, the thing you need to really attract people to your cloud: KILLER SOFTWARE. The BFCs have invested BILLIONS and BILLIONS of dollars perfecting the SOFTWARE of the public cloud. All of that software – along with the talent required to build it – does not exist in telcos. Not in a real, scalable, commercially-available-to-the-larger-public sort of way. Telcos could try to remake themselves into software shops, but remember: the BFCs have a 30-year head start on them. They started in software and added hardware later, not the other way around. That’s quite a lead.

Remind me again why you think it’s a good idea to build your own public cloud?



Get my FREE insider newsletter, delivered every two weeks, with curated content to help telco execs across the globe move to the public cloud.


More from TelcoDR

Whitepaper

Time to go to the public cloud!

In this paper, I talk about the power of the public cloud for telcos and lay out the four clear steps you need to take to trigger your transformation. As you’ll discover, it’s not about transition…

Blog

Don’t be a server hugger

You’re wasting a massive amount of money, time and effort on maintaining your private on-premise kit – don’t be sentimental about it!

Blog

15 examples of how telcos are moving to the public cloud

Telcos have been taking steps to move applications to the public cloud. Here are examples of telcos using the public cloud to cut costs, grow revenue, and lead in their marketplace.

Podcast

Ep 5 – The BFCs: Putting Microsoft Azure to the test for telcos

Rick Lievano, Worldwide Director of Technology Strategy at Microsoft, builds a good case for why Azure should be telecom’s cloud platform of choice, not the least of which is Azure for Operators – a game changer in the quest for carrier-grade cloud.

Podcast

Ep 6 – The BFCs: Is Google Cloud Platform (GCP) right for telcos?

Neil Bunn, Head of Customer Engineering, Media and Telco, for Google Cloud Canada, reviews the coolest features of Google cloud technology and offers an unexpected reason why GCP should be at the top of telcos’ list (hint: think, Anthos).

Podcast

Ep 70 – Unleashing telco potential with AWS

Chivas Nambiar, director of worldwide solution architecture, AWS for telecom, talks about how the hyperscaler is working with our industry, how cloud software is changing the landscape, and the future of telco in an AI-driven world.