
Let’s take a look at latency

Stop using latency as a reason to not move to the public cloud.

Faster is better, and when it comes to speed and telco services, it’s all about measuring latency.

So what is latency?

Put simply, it’s the measurement of delay – the time it takes for information transmitted over a network to complete the round trip. Measured in milliseconds, it is key to the user experience and critical to telcos for delivering quality of service and keeping customers happy.
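To make that round trip number concrete, here’s a minimal Python sketch of one way to time it: open a TCP connection to a server and see how long the handshake takes. The hostname is just a placeholder, and a real test would use a proper ping tool and many samples:

```python
# A minimal sketch of measuring round-trip latency: time how long a TCP
# handshake to a host takes. The hostname below is only a placeholder.
import socket
import time

def measure_rtt_ms(host: str, port: int = 443) -> float:
    """Return the time (in milliseconds) to open a TCP connection to host:port."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass  # the handshake completing is our "round trip"
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    print(f"RTT to example.com: {measure_rtt_ms('example.com'):.1f} ms")
```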

Latency requirements vary by application. For example, gaming needs near real-time data delivery to ensure the best user experience. Latency can therefore be the difference between a great service and an unusable one.

A big reason why telcos want to retain ownership of their end-to-end networks is so they are in control of service levels and, by extension, the subscriber end-user experience. I get a lot of pushback on moving workloads to the public cloud for this exact reason. So my question is: if telcos were to move their workloads to the public cloud, is the latency low enough?

The short answer: it depends. It depends on where you’re calling from, whether a datacenter is nearby and, of course, how fast you need it to be.

For a telco to use a hyperscaler (like Amazon Web Services (AWS), Google Cloud Platform (GCP) or Microsoft Azure), it means moving workloads that have historically resided on premises (usually at a telco’s headquarters or an offsite, but nearby, datacenter) to a datacenter owned and managed by the hyperscaler. Obviously, when the workload is nearby, latency is relatively low. But when you move the workload to a hyperscaler, what happens to latency? Let’s look at how you measure latency from the perspective of the telco: making a call from the core network, out to an application running in a hyperscaler’s public cloud, and back.

To get your network request to a hyperscaler’s datacenter, you first have to get your traffic to one of its Points of Presence (PoPs). Once you’ve reached a PoP, you have a direct route into the datacenter (which means latency is pretty low there and back). So the first thing to figure out is: how do you get your traffic to a hyperscaler’s PoP?

PoPs are pretty much everywhere, as the hyperscalers have been building them out for several years – and continue to grow them. It may vary from region to region, but a quick survey based on the most recent available information shows that AWS has 220+ PoPs, Google 130, and Microsoft Azure more than 170.

So, how does data get from your core network to a PoP?

An easy analogy I like to use is “hot potato” or “cold potato” routing. The first involves traffic being routed via the public internet – no one owns it, and it gets passed around like a hot potato. Traffic remains on public networks for most of the journey, entering the hyperscaler’s private network only for the ‘last mile.’ Performance and latency are dictated by the ISPs involved, and may change from one call to the next. On the way back, same thing: traffic travels mostly over the public internet, back to the user. This is usually the cheaper option, but it also won’t be the fastest.

On the other hand, the cold potato approach gets traffic onto the hyperscaler’s private network as close to the source as possible and keeps it there. Return traffic follows the same path, staying on the private network for as long as possible to minimize the number of re-routing points. This is great because the path will have deterministic latency, as opposed to the hot potato approach, which may be fast one time and not fast enough the next. Cold potato is usually more expensive, but for those workloads where latency is important, it’s worth the additional cost.

All the hyperscalers have BOTH OPTIONS available. AWS was last to the table in offering its customers the choice, which it started doing only in 2019. In most cases it’s a service offering that you select per workload, so you can run some parts of your footprint with one level of service and configure other workloads with the other.

With that, let’s discuss the offerings.

Azure

Azure gives its users the option of cold or hot potato. The default is routing via the Microsoft Global Network, where traffic does not enter the public internet at all (cold potato); the alternative is routing via the public internet (hot potato). The former is deemed the better option, as it involves the fewest hops and therefore the best performance and lowest latency. The latter depends to some extent on the quality of the ISP and network provider, so there is less control over performance and quality.

Google Cloud Platform

GCP also gives users choice. Its Standard Tier hands off traffic to the public internet early in its journey, while Premium Tier customers benefit from traffic staying on the Google backbone for most of its journey and being handed off to the public internet close to the destination user.

AWS

By default, AWS uses hot potato routing. It has recently introduced its cold potato solution, Global Accelerator, which moves user traffic off the public internet and onto Amazon’s private global network.
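For illustration, here’s a hedged boto3 sketch of what opting a workload into Global Accelerator looks like. The accelerator name is hypothetical, and a real setup also needs listeners and endpoint groups pointing at your actual endpoints, which are configured in separate calls:

```python
# Sketch: opting traffic into AWS Global Accelerator (cold potato) with boto3.
# The name is hypothetical; listeners and endpoint groups that point at your
# real load balancers or instances are created in separate API calls.
import boto3

# The Global Accelerator API is served out of us-west-2, even though the
# accelerator itself fronts traffic globally.
client = boto3.client("globalaccelerator", region_name="us-west-2")

response = client.create_accelerator(
    Name="telco-core-workload",  # hypothetical name
    IpAddressType="IPV4",
    Enabled=True,
)

# Clients resolve this DNS name and enter Amazon's private backbone at the
# nearest edge location instead of riding the public internet end to end.
print(response["Accelerator"]["DnsName"])
```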

Whichever option you choose, be sure to measure both. While it would seem that cold potato is always better, your mileage may vary. To use a 2018 example, GCP in Eastern Europe using the cold potato approach would be slower than the hot potato approach if you were using its Mumbai datacenter. That’s because Google didn’t have a backbone running from Eastern Europe to India. Instead your traffic would route westbound – and therefore, around the world (see page 23 in the Thousand Eyes report, where they discuss this more eloquently than I do).
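To “measure both” yourself, one simple approach is to sample the round trip over each path repeatedly and compare not just the average but also the spread, since the hot potato path is the one more likely to wobble from call to call. A minimal sketch, assuming you have one endpoint reachable over each routing option (the hostnames below are placeholders):

```python
# Sketch: compare two paths (hot potato vs cold potato) by sampling the
# TCP-handshake RTT repeatedly and reporting the mean and the spread.
# The hostnames are placeholders for your own endpoints on each tier.
import socket
import statistics
import time

def sample_rtts_ms(host: str, port: int = 443, samples: int = 20) -> list[float]:
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass
        rtts.append((time.perf_counter() - start) * 1000)
    return rtts

for label, host in [("hot potato", "standard.example.com"),
                    ("cold potato", "premium.example.com")]:
    rtts = sample_rtts_ms(host)
    print(f"{label:>12}: mean {statistics.mean(rtts):.1f} ms, "
          f"stdev {statistics.stdev(rtts):.1f} ms")
```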

How do I know if my path will be fast enough?

The best way to figure out if your location is fast enough for your workloads to move to the public cloud is to just measure it. There are a number of free tools out there to check latency. Two great tools that work on all three major hyperscalers are CloudPingTest and CloudHarmony. There are also proprietary tools you can use: Azure allows users to check latency from an IP location to Azure datacenters; between Azure regions via its backbone network; and between Azure availability zones. GCP outlines the differences between different testing tools here, and AWS’s ping tool is great to use for AWS. Note that some of these tools require you to be located at the IP address where the call out will take place, so get someone at that location to run the tests for you.
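If you’d rather not rely on a browser tool, the same handshake-timing idea from earlier can be put in a small script and sent to whoever sits at the right location. Here’s a sketch that ranks a few candidate regions by median round trip; the AWS regional EC2 API hostnames are just an assumption chosen because they’re publicly reachable, so swap in whichever providers and regions you care about:

```python
# Sketch: rank candidate regions by median TCP-handshake RTT from this machine.
# The AWS regional EC2 API hostnames are assumptions chosen because they are
# publicly reachable on port 443; substitute the endpoints you actually care about.
import socket
import statistics
import time

REGIONS = {
    "us-east-1 (Virginia)": "ec2.us-east-1.amazonaws.com",
    "eu-west-1 (Ireland)": "ec2.eu-west-1.amazonaws.com",
    "ap-south-1 (Mumbai)": "ec2.ap-south-1.amazonaws.com",
}

def handshake_ms(host: str, port: int = 443) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

medians = {
    region: statistics.median(handshake_ms(host) for _ in range(10))
    for region, host in REGIONS.items()
}

for region, rtt in sorted(medians.items(), key=lambda kv: kv[1]):
    print(f"{region:<22} {rtt:6.1f} ms")
```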

Note that not every region is fast enough for all workloads to move to the public cloud – yet (I bet Antarctica has some crappy latency pings!). But as the hyperscalers continue to expand their datacenters and their PoPs, things will continue to improve and get faster. Soon, the latency excuse will be a thing of the past.
