Dense Broadband . . . for a World of Video

Exploding use of video-dense applications challenges both the foundations of network design and existing models for wholesale and network sharing.

Data intensity in a world of video

PROCESSING POWER

It’s been nearly two decades since IBM’s Deep Blue became the first machine to win a chess game against a reigning world champion (Garry Kasparov, 1996). That event roughly coincided with the start of the Internet era as we know it – about the time that most corporations were establishing their first web site.

Computing power has come a long way since then – the GPU chips in this year's higher-end tablets deliver nearly an order of magnitude more processing power than Deep Blue's eleven gigaFLOPS, and a serious home computer packs a thousand times more. A gigaFLOPS is a unit representing a billion Floating Point Operations Per Second, and the cost of that processing capacity has fallen from about 30,000 dollars in 1997 to less than a dollar today. There have been similar exponential increases in storage and network throughput affordability, resulting in a data explosion on fixed and mobile networks.

Two things that have not changed in that time are the nature of the electromagnetic spectrum and the laws of physics that limit what we can do with it. How those constraints are going to affect the economics of network operators' businesses, in the context of continually exploding data demand, is the focus of this article.

CLAUDE SHANNON: PART ONE

But first, as an aside, you may be wondering what all this means for chess. Would a human today still have a chance of beating a well-programmed computer? In typical chess positions there are around 30 legal moves and a typical game lasts about 40 moves. In a 1950 paper Claude Shannon showed that this results in about 10^120 variations from the initial position – a largish number when compared to, say, the number of atoms in the universe at around 10^80. Let's assume we had an exaFLOPS computer capable of a quintillion (10^18) operations per second (which is likely still half a decade from reality) and that evaluating one move took just one operation (which is nonsensically optimistic); the 10^120 variations would then take 10^102 seconds (or "a hundred googol seconds", since it's not often we get to use the original spelling of the search engine's namesake). That is quite a lot longer than the age of the universe (about 10^17 seconds). In short, a brute-force analysis of all possibilities in chess won't be possible in our lifetimes, if ever. So chess programs use other methods, including huge databases of what moves worked in practice in real games between human beings. There are some great parallels here for big-data analytics, especially in the context of customer experience in complex industries like telecoms, but that is the subject of a future article.
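As a sanity check on those numbers, here is a minimal back-of-the-envelope sketch in Python. It uses the rough figures quoted above – a branching factor of 30 and a 40-move game – together with the same deliberately generous simplifications (an exaFLOPS machine and one operation per variation); nothing in it is more than an order-of-magnitude estimate.

```python
# Rough reconstruction of the game-tree estimate and the brute-force
# timing argument from the text (order-of-magnitude only).

BRANCHING_FACTOR = 30      # typical number of legal moves in a position
GAME_LENGTH_MOVES = 40     # typical game length (80 half-moves in total)

# Shannon's estimate: roughly (30 * 30)^40, i.e. about 10^120 variations
variations = (BRANCHING_FACTOR ** 2) ** GAME_LENGTH_MOVES
print(f"Game-tree size: ~10^{len(str(variations)) - 1}")   # ~10^118; Shannon quoted ~10^120

EXAFLOPS = 10 ** 18              # operations per second
AGE_OF_UNIVERSE_S = 4.3e17       # seconds, roughly

seconds_needed = 10 ** 120 / EXAFLOPS        # one operation per variation (wildly optimistic)
print(f"Brute force: ~10^102 seconds, "
      f"about {seconds_needed / AGE_OF_UNIVERSE_S:.0e} times the age of the universe")
```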

DENSE BROADBAND

For now, let’s get back to the topic of Dense Broadband. In a previous article, we covered the phenomenon of the Nomadic User, who is an intensive data user concentrated on any one of several wirelessly connected devices. We use the term Dense Broadband to describe what is required to wirelessly serve those users’ intense data appetite when there are many of them in the same geographic area. From an engineering perspective it is not just a question of bandwidth – the question of latency is equally important since that determines whether cloud-based applications are sufficiently responsive.

In telecoms there are a couple of ground rules of physics always worth bearing in mind. The first is that our planet is 40,000 km in circumference and the speed of light in a glass fiber of refractive index 1.5 is about 200,000 km/s, so there's a physical minimum of 100 ms for data to travel halfway around the world. We'll come back to this in another article, because that delay is noticeable to humans and so it matters to Cloud architecture.
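As a quick illustration, the sketch below reproduces that arithmetic. It assumes a direct great-circle fiber path with no routing detours, switching or queuing delay, so real-world latency is always higher than this floor.

```python
# One-way propagation delay for data travelling halfway around the Earth
# over fiber, using the figures from the text (illustrative only).

SPEED_OF_LIGHT_KM_S = 300_000          # in vacuum
REFRACTIVE_INDEX = 1.5                 # typical for glass fiber
EARTH_CIRCUMFERENCE_KM = 40_000

speed_in_fiber = SPEED_OF_LIGHT_KM_S / REFRACTIVE_INDEX   # ~200,000 km/s
distance_km = EARTH_CIRCUMFERENCE_KM / 2                  # halfway around: 20,000 km

one_way_delay_ms = distance_km / speed_in_fiber * 1000
print(f"One-way fiber delay: {one_way_delay_ms:.0f} ms")  # ~100 ms
```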

CLAUDE SHANNON: PART TWO

But in this article we'll focus on a second constraint that is more significant. Remember Claude Shannon from a couple of paragraphs ago? It turns out that apart from analyzing the game tree complexity of chess, Claude Shannon is also widely regarded as the father of modern information theory, and the theorem that bears his and Ralph Hartley's names establishes the theoretical maximum information rate that can be sent through a noisy channel. As such, the so-called Shannon Limit describes the maximum data rate achievable through a given amount of copper cable, fiber cable or radio spectrum. If you need more capacity with copper or fiber you can always add more cables – but radio spectrum is like land – the supply is finite.
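For the curious, the Shannon–Hartley formula is C = B·log2(1 + S/N), where B is the channel bandwidth in hertz and S/N is the signal-to-noise ratio. The sketch below evaluates it for a 20 MHz carrier at two SNR levels; these particular numbers are chosen purely for illustration, not taken from any specific network.

```python
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon-Hartley limit: C = B * log2(1 + S/N)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative figures: a 20 MHz carrier at two signal-to-noise ratios,
# showing how the capacity ceiling scales with bandwidth and SNR.
for snr_db in (10, 20):
    capacity = shannon_capacity_bps(20e6, snr_db)
    print(f"20 MHz at {snr_db} dB SNR: ~{capacity / 1e6:.0f} Mbit/s ceiling")
```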

To increase the radio performance experienced by a set of wirelessly connected Nomadic Users in a particular geographic area, there are only four levers:

  1. Increase the amount of Spectrum allocated (from the finite supply). In practical terms this means acquiring additional licenses for additional spectrum, either from existing owners or from the government as spectrum is recovered from legacy uses (e.g. former analog TV UHF or re-farmed GSM). It can, and certainly should, also mean leveraging unlicensed spectrum (such as WiFi), even if such spectrum is increasingly contended.
  2. Increase the spectral efficiency, i.e. get closer to the Shannon Limit. The obvious way to do this is to move to a later generation of radio technology that uses more advanced encoding or antenna techniques – for example, LTE has much better spectral efficiency than the latest 3G HSPA variants (along with numerous other advantages).
  3. Improve the payload efficiency – use the available radio more efficiently through compression or improved application behavior. There is often neither the incentive nor the competence/awareness in the application community to take the network into account, despite the fact that this is often one of the cheapest ways to improve application performance.
  4. Reduce the number of concurrent users per cell. In a city, a traditional mobile network cell tower will cover a roughly circular area of at least 1 km in radius, and often much larger (20-30 km in lower population density areas). The amount of spectrum, spectral efficiency and payload efficiency determine how much capacity is available within a cell. There is some interesting work that has been done on modifying user behavior to spread demand (the telecoms equivalent of yield management), but by and large there are only two ways to reduce the average number of concurrent users in a cell – offloading some users to other networks (usually to fixed networks via WiFi) or reducing the cell size. (A simple sketch combining all four levers follows this list.)
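To see how the four levers interact, here is a deliberately simplified model of average per-user throughput in a cell. The spectrum, spectral efficiency, payload efficiency and user-count figures are illustrative assumptions, not measurements from any real network.

```python
# Toy model: average per-user throughput in a cell is roughly
# spectrum * spectral efficiency * payload efficiency, shared among
# the concurrent active users (illustrative numbers only).

def per_user_mbps(spectrum_mhz: float,
                  spectral_eff_bps_per_hz: float,
                  payload_efficiency: float,
                  concurrent_users: int) -> float:
    cell_capacity_mbps = spectrum_mhz * spectral_eff_bps_per_hz * payload_efficiency
    return cell_capacity_mbps / concurrent_users

# Lever 4 in action: halving the concurrent users per cell (smaller cells
# or WiFi offload) doubles what each user sees, all else being equal.
print(per_user_mbps(20, 1.5, 0.9, 100))   # ~0.27 Mbit/s per user
print(per_user_mbps(20, 1.5, 0.9, 50))    # ~0.54 Mbit/s per user
```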

THE NOMADIC VIDEO DATA EXPLOSION

There is a data explosion under way, driven by video applications and nomadicity, and that explosion is not expected to slow down. The word “explosion” is somewhat emotive, but it is appropriate here because the demand growth is not linear. Long term demand projections vary significantly by market, but recent experience in such planning suggests overall data growth right now is in the order of 65 to 75 percent per year (roughly 70 percent a year compounds to a tripling every two years), with that rate of growth itself increasing.

The challenge is that the first three “levers” to accommodate this growth are rather linear and finite. The first varies significantly by market, but in most markets the options for additional spectrum are limited. The second – spectral efficiency – can yield significant improvements: the better spectral efficiency of LTE probably accommodates two years of growth once the handset population is high enough, but such benefits can only be realized about once a decade. The benefits of improved payload handling are modest by comparison.

What this says is that the fourth lever needs to come into play, and in a significant way. The industry has already accepted WiFi offloading as an essential tool in managing the economic challenge of keeping up with data demand. WiFi offload means encouraging nomadic and mobile users to connect via WiFi networks when they are in their homes or in public spaces well served by WiFi, such as cafés, airports, libraries and shopping centers. That can make a big difference, because those places cover two thirds or more of where people want to use their devices. But there are two outstanding issues – first, the places in between today’s WiFi hotspots are going to need to be served by much smaller cells, and second, public WiFi networks are becoming congested in high-density areas.

SPECTRUM MATHEMATICS

Let’s do some basic maths for an operator that has run out of spectrum options, i.e. doesn’t have spare capacity and can’t buy more. Global mobile data use is tripling every two years, and we have two mechanisms that can each triple capacity – WiFi offload (since two thirds of users are in WiFi coverage) and an upgrade to LTE, which could triple the spectral efficiency. So it looks like we get four years, and maybe even another year by being clever with payload. On the face of it, the current macro network footprint might hold out for another half decade.
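The same arithmetic can be written down explicitly. The sketch below assumes demand triples every two years (as above) and treats WiFi offload and the LTE upgrade as 3x capacity multipliers; the payload figure is an illustrative assumption, roughly what “being clever with payload” would need to deliver to buy the extra year.

```python
import math

# Back-of-the-envelope headroom: demand triples every two years, and each
# lever multiplies the capacity of the existing macro grid. Headroom is
# how long until demand catches up with that multiplied capacity.

def years_of_headroom(capacity_multiplier: float,
                      demand_growth_factor: float = 3.0,
                      growth_period_years: float = 2.0) -> float:
    return growth_period_years * math.log(capacity_multiplier) / math.log(demand_growth_factor)

wifi_offload = 3.0   # two thirds of traffic moved off the macro network
lte_upgrade = 3.0    # roughly triples spectral efficiency
payload_gain = 1.7   # illustrative assumption for compression / app behavior

print(f"Offload + LTE: ~{years_of_headroom(wifi_offload * lte_upgrade):.1f} years")
print(f"...plus payload gains: ~{years_of_headroom(wifi_offload * lte_upgrade * payload_gain):.1f} years")
```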

But this basic mathematics misses a few significant considerations:

  1. Many operators have already been pushing hard on WiFi offload for a few years now and are still running out of spectrum, especially in metro areas.
  2. Spectral efficiency gains don’t just rely on network upgrades, they also rely on device radio capabilities – there is no benefit to LTE networks if the devices remain 3G+WiFi. This means it is important to get new devices adopted quickly, but device lifetimes are still several years.
  3. Growth in mobile data use is not about global averages – the actual data use varies significantly from cell to cell, which is easy to measure and harder to predict. This means that actual data growth is much higher in some cells and much lower in others.

And then beyond all of those considerations is the one question that does not lend itself so easily to basic mathematics – the question of competition. The problem with industry mobile data projections is that they tend to be based on extrapolation – how are device populations and capabilities changing, and what are the data growth trends seen for those different device types? So there is a difference between projected data growth (which is implicitly constrained by industry-wide capacity) and the actual data demand curve (what consumers would use at a particular price point if the capacity existed). The telecoms industry is characterized by high consumer awareness of competing offers and the propensity of a high proportion of customers to change providers in the context of a disruptive price offer. So beyond the steady growth trend there is also the possibility for service providers to face discontinuous changes in demand, or even to create those discontinuities themselves if their network infrastructure has a significant cost/bit advantage for the targeted connectivity modes (fixed, mobile, nomadic).

INDUSTRY CONSEQUENCES

These various considerations need to be analyzed in detail for each market. In nearly every market there will be a spectrum crunch over the next decade, but the urgency of the problem varies significantly based on the starting situation, projected demand growth and competition. What can be said is that in most markets the conclusion will be that cell size needs to become much smaller in a relatively short time frame, and that has profound consequences not only for how mobile networks are architected, but also for the structure of the telecommunications industry, especially in metro areas.

Let’s say that we target an order of magnitude increase in bandwidth by reducing cell size. The good news is that with good radio planning, it is possible to deliver an almost linear relationship between the number of additional cells and the amount of additional bandwidth. So that means ten times the number of cell sites – not the same kind of cell sites (much smaller, lower power) – but nevertheless we need real estate, planning consents and civil works for ten new sites in each cell. More importantly each of those sites needs both power and fiber backhaul, which implies ducting and additional civil works. The radio electronics/antenna part of the equation (whether it is small cells that use ‘cellular’ technology like 3G or LTE or outdoor WiFi Access Points or a combination) is therefore only a small part of the overall cost.
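As a rough geometric footnote (assuming uniform demand and the almost linear cell-count-to-capacity relationship described above), getting ten times the capacity from ten times the cells implies shrinking the cell radius by a factor of about the square root of ten:

```python
import math

# How much smaller do cells need to be for a 10x capacity target?
# Assumes uniform demand and roughly circular cells, so the cell count
# in a fixed area scales with 1/radius^2.

def new_radius_km(current_radius_km: float, cell_multiplier: float) -> float:
    return current_radius_km / math.sqrt(cell_multiplier)

for radius_km in (1.0, 20.0):   # the metro and lower-density radii mentioned in the text
    print(f"{radius_km:.0f} km macro cell -> ~{new_radius_km(radius_km, 10):.2f} km "
          f"cells for 10x capacity")
```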

In traditional mobile data networks, each operator built their own cells – each new operator built an overlay that covered the same geographic area as its competitors. That is not a technically efficient way to cover the needs of the population, but it has by and large worked pretty well until now. It is already changing in traditional macro networks, where the economics of data growth are (finally) driving network sharing agreements to become reality in parts of Western Europe. And the overlay approach is just not going to work for Dense Broadband – first, the economics don’t work, and second, local governments (and their citizens) are not going to stand for the visual clutter or upheaval that overbuild implies.

The significance for industry structure is that the metro backhaul and radio access need to be shared among multiple service providers . . . so it is a natural wholesale business. But it is not necessarily a standalone wholesale business – because in parallel to this evolution of mobile networks, fiber is going deeper into traditional fixed networks. In deep fiber networks (fiber to the home or to the cabinet) there is a similar dynamic – it doesn’t make sense for competing service providers to build separate deep fiber access networks. This is increasingly taking the form of a wholesale model, sometimes helped along by government funding of the new access network. What’s more, it doesn’t make sense for the fixed access network and the new mobile backhaul networks to be built independently, because they have very similar characteristics and the potential to share a great deal of infrastructure and civil works.

A NEW INDUSTRY PLAYER: THE DATA-CO

The industry model that flows from the above is a company that specializes in wholesaling very good data performance – fixed, mobile and nomadic – at a low cost per bit for each connectivity mode. Such a company does not have end-customer relationships – it specializes in data connectivity. Let’s call it a DataCo, as distinct from the term NetCo, which is often used in functional separation scenarios but doesn’t capture the specialized focus on data excellence.

The next question is whether a DataCo is just focused on access networks, or whether it attempts to complete the data delivery chain. The latter makes sense because the data experience of an end user is not just a function of the access network – it is just as much a function of the aggregation networks, the IP network and the location of data centers or content caching (CDN). In many parts of the world, especially islands, it is also a function of the submarine cable paths providing connectivity to cloud-hosted services in different parts of the world. All those moving parts work together to determine both the performance and the cost effectiveness with which data can be delivered to an end user.

There are two important conclusions to draw from this. First, the value of infrastructure assets needs to be considered in the context of how they contribute to the overall data delivery value chain. Second, in many markets there will need to be a new class of operator specialized in all or part of the end-to-end data delivery chain, and that new class of operator does not resemble many that exist today.

 


Comments

  1. Sebastien says:

    Good demonstration!
    It is so obvious that current RAN strategies are heading into a wall, yet very few operators realize it and even fewer are starting to equip themselves with the ammunition they will need (agile, short-cycle radio capacity planning; a tiger-team approach to site acquisition; a small-cell strategy and technology).
    Operators are also very far from considering the network sharing/wholesaler approach that will be needed – too many still see infrastructure as a competitive differentiator.
    It would be a good time for governments in geographies like the EU to tackle this problem and launch some sort of NBN-style initiative in the wireless space. It could even be part of a Marshall Plan of sorts, boosting the economy through public investment.
