Post written by the Founder and CEO of Vapor IO, the world's first true edge computing company.
The last mile is the final hop in our end-to-end telecommunications networks, where data bridges from infrastructure to device. Examples include the coaxial systems owned by cable providers, the wireless networks owned by telecom operators and the fiber optic systems offered by the likes of Fios by Verizon and Sonic. For as long as communication networks have existed, the last mile has presented unique infrastructural challenges and opportunities. Because it is the most distributed portion of the internet, the last mile is often the hardest to build and operate; because it sits closest to the end user or device, it is also the most important part of the network for enabling next-generation applications.
The last mile occupies a strange place in the networking world. Without it, the people and devices that depend on the internet wouldn’t be able to access it. Yet, despite its crucial importance, investment in the last mile has not kept pace with demand. Historically, it has required network operators to make massive investments in fixed infrastructure, and those inflexible investments have left one of the most critical links in our internet transport system the most underserved. For most of its recent life, the last mile has been treated as a “dumb pipe”: a way to get bits to and from the internet, but not a place where any interesting compute occurs or significant value is added.
Over the past few years, we’ve seen exploding demand for low-latency, high-bandwidth connections to the internet. This demand was driven, in part, by the proliferation of smartphones and the popularity of over-the-top (OTT) services such as Hulu and Netflix, but today, the billions of new connected devices and the petabytes of data we expect them to generate are eclipsing those early drivers.
Latency, a measure of how long it takes a piece of data to reach its destination, needs to be lower than it is today for the vast majority of internet users in order to support new types of applications; streaming games and autonomous vehicles, whether cars or drones, are among the most cited examples. At the same time, bandwidth, a measure of how much data a network connection can carry per second, needs to be higher than it is today for the majority of internet users in order to support many of these same emerging applications.
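The distinction matters because the two limits compound rather than substitute for each other. As a back-of-the-envelope sketch (the 100 Mbps link, 8 Mb frame and 20 ms interactivity budget below are illustrative assumptions, not figures from this article):

```python
# Back-of-the-envelope latency vs. bandwidth arithmetic.
# All concrete numbers here are illustrative assumptions.

def transfer_time_ms(payload_bits: float, bandwidth_bps: float) -> float:
    """Time to serialize a payload onto a link of the given bandwidth."""
    return payload_bits * 1000.0 / bandwidth_bps

# Pushing an ~8 Mb video frame over a 100 Mbps last-mile link:
frame_ms = transfer_time_ms(8e6, 100e6)  # 80 ms of serialization alone

def within_budget(rtt_ms: float, serialize_ms: float, budget_ms: float) -> bool:
    """Even infinite bandwidth cannot rescue a blown latency budget."""
    return rtt_ms + serialize_ms <= budget_ms

print(frame_ms)                          # 80.0
print(within_budget(40.0, frame_ms, 20.0))  # False: both limits are blown
```

The point of the sketch is that an interactive application needs both numbers to improve at once: more bandwidth shrinks the serialization term, while only a shorter network path shrinks the round-trip term.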
Network operators have turned to transformative technologies such as network function virtualization (NFV), software-defined networking (SDN) and cloud radio access networking (C-RAN) to meet this new demand. These capabilities will rewrite how we design, build and operate networks by removing the need for fixed appliances in the network topology. Rather than deploying more closed-box devices, operators are replacing them with software-based network functions running in cloud-like environments atop general-purpose servers.
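The architectural shift NFV describes can be sketched in a few lines: network functions that were once fixed appliances become ordinary software composed into a service chain. This is a toy illustration only; the function names, addresses and drop rule are hypothetical, not part of any real NFV framework:

```python
# Toy sketch of the NFV idea: network functions as composable software
# rather than fixed hardware appliances. All names are illustrative.

from typing import Callable, Optional

Packet = dict  # toy packet: {"src": ..., "dst": ..., "port": ...}
NetworkFunction = Callable[[Packet], Optional[Packet]]

def firewall(packet: Packet) -> Optional[Packet]:
    """Drop traffic to a blocked port; pass everything else."""
    return None if packet["port"] == 23 else packet

def nat(packet: Packet) -> Optional[Packet]:
    """Rewrite the private source address to a public one."""
    return {**packet, "src": "203.0.113.1"}

def run_chain(packet: Packet, chain: list[NetworkFunction]) -> Optional[Packet]:
    """Pass a packet through each function in turn; None means dropped."""
    result: Optional[Packet] = packet
    for fn in chain:
        result = fn(result)
        if result is None:
            return None
    return result

chain = [firewall, nat]  # deployed as software on general-purpose servers
print(run_chain({"src": "10.0.0.5", "dst": "198.51.100.7", "port": 443}, chain))
print(run_chain({"src": "10.0.0.5", "dst": "198.51.100.7", "port": 23}, chain))
```

Because each function is just code, an operator can redeploy, reorder or scale the chain without touching hardware, which is the operational flexibility the paragraph above describes.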
Convergence At The Edge
Historically, internet applications have taken data from the edge and transported it to the “cloud,” which was most often instantiated on servers in some far-off data center. However, applications of the near future will demand that the cloud come to them, which means building micro data centers at the edge of the last mile to house cloud servers near the data and devices they support.
These servers will be embedded in micro data centers at the edge of the wireless and wired networks, bringing powerful cloud resources to the edge of the last mile. By turning network functionality into software running at the edge, the historically separate silos of networking and compute will converge to operate seamlessly on the same underlying infrastructure. Compute and storage will fan out across the network, creating a gradient of cloud resources occupying new edge data centers that extend all the way to the last mile.
Fixing The Last Mile
Today, solving the challenges of the last mile is not as simple as it seems. Consider the cellular network: Data sent from one device to another attached to the same cell tower, or onward to the internet, cannot take a straight path to its destination. Instead, due to convoluted legacy network architectures, data in transit often takes a meandering, inefficient path, sometimes “tromboning” (looping out and back) over thousands of miles.
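The cost of tromboning can be estimated from propagation delay alone. A minimal sketch, assuming hypothetical path lengths and using the well-known figure that light in optical fiber travels at roughly two-thirds of its vacuum speed, about 200,000 km per second:

```python
# Estimate the round-trip latency added by a "tromboned" routing path.
# The 1-mile and 2,000-mile paths are hypothetical; the fiber speed
# (~200,000 km/s, i.e. ~200 km per millisecond) is a standard figure.

FIBER_KM_PER_MS = 200.0
MILES_TO_KM = 1.609344

def propagation_ms(path_miles: float) -> float:
    """One-way propagation delay over a fiber path of the given length."""
    return path_miles * MILES_TO_KM / FIBER_KM_PER_MS

# Direct path: two devices on the same tower, ~1 mile of backhaul.
direct = 2 * propagation_ms(1)        # round trip: a few hundredths of a ms

# Tromboned path: traffic hairpins through a core site 1,000 miles away,
# so the round trip covers a 2,000-mile detour each way.
tromboned = 2 * propagation_ms(2000)  # round trip: roughly 32 ms

print(round(direct, 3), round(tromboned, 1))
```

Even before queuing and processing delays, the detour alone consumes tens of milliseconds, which is why rerouting traffic at the edge instead of at a distant core matters for the latency-sensitive applications discussed above.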
Fixing the last mile requires real convergence between the networking and compute layers of the internet. Edge data centers deployed at the infrastructure edge — that is, at locations on the operator side of the last mile, such as near the base of cell towers — become the catalyst for this fundamental rearchitecting of the internet. Deploying thousands of edge data centers, each with its own integrated meet-me room and internet exchange point, makes it possible to fully converge networking and compute, bypassing the awkward and inefficient legacy data routing.
Brilliant transformation is occurring not just in nature, but in our networks as well.
In the winter months, dark, barely alive branches stretch out as far as the roots have managed to grow. Spring brings life again, with new, vibrant leaves appearing at the very tip of every branch. As in nature, the trunk and branches get us there, but the leaves are where the magic happens. By making it possible to converge networking and compute at the edge, infrastructure edge computing becomes the leaves of a next-generation internet, and the way to fix the last mile.