Struggling With Capacity and Congestion in the Digital Video Age
Look at 5G: it will require five times (or more) the antenna density of 4G. Each antenna site will need its radio access network (RAN) transmission and routing kit plugged in and powered on, and, if current movements such as Multi-Access Edge Computing (MEC) architectures come to fruition, those sites will also run servers and other technology in the RAN to make it fly.
In the office or domestic environment, that will, in turn, mean more and more devices will be connected, each requiring power.
While CDNs and others in the upper layers of the network stack preach caching, network function virtualisation (NFV), software-defined networking (SDN), serverless computing, and an ever-growing array of similar solutions to make such technology demands lean and efficient, every one of these devices will still require power.
So where is that power going to come from? In my experience this (and not bandwidth capacity) is actually the question that is on the minds of those in the carrier and operator space.
When planners look at “edge” technologies, our streaming and CDN industry is trying to sell more efficient caching, proxy serving, and stream splitting, perhaps even transcoding further and further (topologically) from the stream origin. Yet each of these resources can be power-hungry, and all too often the vendors assume that power will simply be provided by the network operator or data centre facility.
But that simply is not the case, so those forward-looking network planners are increasingly studying the electricity demand of those solutions, as well as how to cool the devices within their data centres, which means water is becoming more of an issue too.
A growing number of industry leaders believe that pushing function deeper into the “edge” is universally good. However, “edge” is actually a vague term. Some define the location of the edge as the domestic/office facility, some define it to be the regional PoP of a CDN, while others talk about the RAN or Digital Subscriber Line Access Multiplexer (DSLAM) of the carrier/operator networks.
You all know I can be a bit of a pedant when it comes to terminology (put that down to my 20-plus years of industry journalism, or perhaps I am just being a pain in the neck), but for me “edge” means “the point where the owner of the operation hands off responsibility to the ‘next’ network in the workflow.”
This could mean that the “edge” of a software vendor’s scope is its API. It could mean that the edge is the peering or transit interface between the CDN and the last-mile operators’ carrier networks, or it could mean that the edge is the consumers’ devices. So every edge is defined by perspective, meaning that edge solution providers typically share a common requirement: interoperating well with the platforms, technology, or networks on the other side of that edge.
But one thing is certain: as function is moved to the edge, there is usually an increased demand for computing power.
For that reason, edges also have another element that is increasingly coming to the fore, and that is not the edge of one API talking to another, nor the edge of peering or transit interconnects. Instead, it is the edge of where fibre, compute, and power converge.
Just look at bitcoin mining operations: all the very large actors in this space are located right next to diverse power sources, where they can shop for cheap hydroelectric, fossil fuel, or nuclear power and therefore keep the suppliers’ pricing under control.
Today, the same goes for the very largest public and private cloud hosting. These data centres are located near one or more of the following:
- Beaches, so they can reach transoceanic landing stations (and therefore international markets)
- Internet exchange points, so they can reach domestic markets
- Diverse power stations, so they can light up their technology and avoid monopoly charging from the power providers
While the first two on the list are relatively simple geographic choices, the power issue is much more constrained. Power is very political. Being close to available, sustainable power—with a price that can be controlled—is a complex planning decision.
The largest network planners are looking at core power for data centres in the area of 140 megawatts (MW). Regional data centres can require in the region of 50MW of power, and metro PoPs can require 1MW.
This means that capacity planning for scaling up networks is increasingly hitting a ceiling not of fibre or data capacity, but of electricity.
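To make that ceiling concrete, here is a minimal back-of-envelope sketch; every figure in it (the site power envelope, the PUE, the per-server draw and egress) is a hypothetical assumption for illustration, not a measurement from any real network:

```python
# Illustrative sketch: how a site's power envelope, not its fibre,
# caps capacity. Every figure below is an assumption, not real data.

SITE_POWER_KW = 1_000.0   # hypothetical metro PoP envelope (~1 MW)
PUE = 1.5                 # assumed power usage effectiveness (cooling overhead)
SERVER_KW = 0.5           # assumed draw per cache server
SERVER_GBPS = 40.0        # assumed streaming egress per cache server

usable_kw = SITE_POWER_KW / PUE           # power left for IT load after cooling
max_servers = int(usable_kw / SERVER_KW)  # servers the envelope can feed
max_egress_tbps = max_servers * SERVER_GBPS / 1_000

print(f"Servers supported by the power envelope: {max_servers}")
print(f"Ceiling on streaming egress: ~{max_egress_tbps:.1f} Tbps")
```

However much dark fibre terminates at that site, once the envelope is spent, no more servers can be lit.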
Network planners are not looking at CDN caching or stream transcoding to offload or decentralise their data requirements; they are looking at power efficiency. If a fibre DWDM repeater can double a fibre’s capacity for a kilowatt, while a server farm at the remote end of that fibre might reduce the need for the capacity increase but require a megawatt to run, then it makes complete sense for the telco to simply increase the fibre capacity and host the servers in its existing local data centre, serving the streams from there. In this case, “deeper edge caching” is actually more expensive and less efficient, offering the CDN’s customers a more expensive service while in practice providing no other benefit.
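A similar sketch, again with purely illustrative tariff and load figures drawn from the kilowatt-versus-megawatt framing above, shows how lopsided that trade-off can be in electricity costs alone:

```python
# Back-of-envelope comparison of the two options above. The tariff and
# load figures are illustrative assumptions, not measured values.

POWER_PRICE_PER_KWH = 0.12   # assumed tariff in EUR per kWh
HOURS_PER_YEAR = 24 * 365

def annual_power_cost_eur(load_kw: float) -> float:
    """Yearly electricity cost of running a constant load."""
    return load_kw * HOURS_PER_YEAR * POWER_PRICE_PER_KWH

dwdm_upgrade_kw = 1.0    # option A: extra DWDM kit (~1 kW), serve streams centrally
edge_farm_kw = 1_000.0   # option B: deep-edge server farm (~1 MW)

print(f"Option A (fibre upgrade): ~EUR {annual_power_cost_eur(dwdm_upgrade_kw):,.0f}/year")
print(f"Option B (deep-edge farm): ~EUR {annual_power_cost_eur(edge_farm_kw):,.0f}/year")
# With these assumptions the deep-edge option costs roughly 1,000x more
# in electricity alone, before cooling and floor space are counted.
```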
This model continues right through the operator networks, where lower-power graphics processing unit (GPU) decoders in client devices and reduced power requirements in transcoders and demuxers all start to weigh in as key factors in network planning and operational costs.
CDN and streaming demand is now so significant that these effects have an increasingly consequential influence on network planning to meet the demand created by the massive uptake of online video and audio services.
So not every deep edge cache or server placement is a panacea, and it is very important that you address this as you plan to scale your streaming services into this mature phase of the market’s development. While publishers and pure-play CDN customers can, in theory, leave the technical challenges to the CDN and its underlying operators, an understanding of the pressures those networks face will help you navigate pricing and service levels.
For operators and CDNs, though, future expansion is increasingly choked not by bandwidth capacity but by power availability, so don’t be quick to license edge server technology for the sake of it, just because that is what the vendors are telling you to do. Instead, go and run some checks on the power efficiency of these strategies, and talk to your infrastructure teams and partners about the issues.
Understanding exactly what to scale to increase your capacity may make a huge long-term difference to your costs and your scalability.
We are keen to hear from carriers, operators, and networks who are seeing this effect, so please email the author or comment where you find this article online. This is an area we will be trying to cover in more depth over the next months, both in articles and on panel sessions I moderate at various conferences.
Finally, a huge thanks to Microsoft’s Dave Crowley for his unique insights on this topic. He and I will be working on more articles on this topic in the future.
[This article appears in the Winter 2018 issue of Streaming Media Magazine European Edition as "Capacity and Congestion: Hall of Mirrors in the Power House of Telecoms?"]