Living On The Edge (Part I): Why Edge Computing Is Such A Big Deal

Post written by

Ravi Mayuram

Ravi is Senior Vice President of Engineering and CTO at Couchbase, overseeing development and delivery of the Couchbase data platform.

This is the first article of a multi-part series on edge computing. In this article, we’ll discuss what edge computing is, why it has emerged and why it is so revolutionary. 

Bigger Than Cloud And The Internet Of Things (IoT)

We are surrounded by smart sensors, smart actuators and local mini-networks of interconnected devices. Self-driving cars, activity and location trackers, personal medical monitors, field inventory controls, transportation monitoring networks, smart cards, smart kiosks, smartphones, smart cameras, information delivery devices, gaming platforms, virtual reality, home and business artificial intelligence (AI) -- the list is endless. Not only are these devices proliferating at an exponential pace, but they are also becoming smarter. All of this serves one goal: capturing and delivering more data, making decisions in real time and interacting with us directly in our daily lives. This fundamental change in how we experience data is based on the newest innovation of the computing revolution: edge computing.

According to Gartner, roughly 10% of today’s enterprise data is being produced and processed outside of centralized data centers, and by 2025, that figure will grow to 75% or more. This creates insurmountable challenges for the traditional model of shipping data back to the data center (on-premises or in the cloud) to be stored and processed. The laws of physics and economics make this approach a non-starter (more on this later). A new approach to computing and the topology of data management is needed -- one that makes compute and storage ubiquitous and much closer to the data’s point of origin.

Edge computing is poised to explode onto the data processing scene, with disruptive consequences. Analysis from Grand View Research estimates that it is projected to be a nearly $29 billion market by 2025, with a CAGR of 54%. It’s disruptive precisely because it provides the opportunity to rethink existing cloud and data center-centric deployment models. 

The good news is that these new devices and platforms have become increasingly sophisticated, adding memory, storage, compute and networking capabilities everywhere. A “dumb” sensor that collects and forwards data to be processed elsewhere is just that -- a dumb sensor. A smart sensor -- coupled with a smart actuator, a processor and local memory -- can store data locally, run decision-making software on the data, and forward only the data and decisions that matter. Even dumb sensors, when connected to a smart platform (like autonomous cars, factory floor management, environmental controls or avionics systems), can become smart due to the high-performance processing and storage available in proximity to the sensor.
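To make the idea concrete, here is a minimal sketch in Python of the store-locally, decide-locally, forward-selectively pattern described above. The class, threshold values and message format are invented for illustration -- they don't come from any particular sensor SDK or edge platform:

```python
from statistics import mean

class EdgeSensorNode:
    """Hypothetical smart sensor node: buffers readings locally, makes
    decisions on-device, and forwards only anomalies and periodic summaries."""

    def __init__(self, threshold, batch_size=10):
        self.threshold = threshold    # readings above this are forwarded immediately
        self.batch_size = batch_size  # summarize once per batch of readings
        self.buffer = []              # local storage at the edge
        self.outbox = []              # messages actually sent upstream

    def ingest(self, reading):
        self.buffer.append(reading)
        if reading > self.threshold:
            # Local decision: this reading matters, forward it right away
            self.outbox.append(("alert", reading))
        if len(self.buffer) >= self.batch_size:
            # Forward one compact summary instead of every raw reading
            self.outbox.append(("summary", mean(self.buffer)))
            self.buffer.clear()

# Example: 10 readings, one spike above the threshold
node = EdgeSensorNode(threshold=90.0)
for r in [20, 21, 19, 95, 22, 20, 18, 21, 19, 20]:
    node.ingest(r)
print(len(node.outbox))  # 2 messages leave the device instead of 10 raw readings
```

The payoff is in the last line: ten raw readings collapse into two upstream messages (one alert, one summary), which is exactly the bandwidth and latency saving that makes processing at the edge attractive.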

To get an idea of the impact of this change, think about the difference between the original iPhone and today’s iPhone X. Today’s iPhone has 32 times the storage, 24 times the memory and a processor that’s 5 times faster than the original model of just 10 years ago. The computing capacity we now carry in our hip pockets would have required significant real estate, power and cooling in a data center just 20 years ago. In the world we live in now, there is exponentially more compute capacity and data generation at the edge (phones, watches, IoT devices) than in the cloud.

As applications have become more modular and data processing has moved onto these new platforms, they have enabled a whole new set of users (business users and consumers) and created new paradigms of business interaction and data processing: business-to-business (B2B), business-to-consumer (B2C) and consumer-to-consumer (C2C). Each successive move away from the data center has created huge business opportunities and irrevocably changed how users interact with their data. It is part of the Fourth Industrial Revolution.

Why Now? 

The concept of edge computing isn’t really new. Computing started in the data center, where the memory, processors and storage were available to manage data. As devices outside the data center became more capable (beginning with desktop PCs), data processing and storage moved out of the data center and onto these new platforms. Further migration occurred with the introduction of laptops, smartphones and other mobile computing platforms.

What’s changed in the last few years is the sheer proliferation of these devices and the dramatic enhancements these new platforms offer. The periphery of our compute and storage environment is being infused with smart sensors, smart actuators and local mini-networks of interconnected devices, each with its own memory, processors, storage and networking capabilities. As these interconnected, intelligent devices have become commonplace, we’ve reached a crossroads with regard to where and how data is processed and accessed -- hence edge computing. It is changing how we interact with our data and is providing new business models and opportunities for the enterprises that can leverage it.

There’s been lots of discussion about what edge computing is and is not. In general terms, it’s a distributed computing paradigm that brings data storage and processing closer to where they are needed. Edge computing moves data, apps, services and computing power away from points of centralization and toward where the data is being generated and consumed (users, digital platforms, etc.). It’s about how we structure applications and data processing in this new world of intelligent, interconnected devices at the periphery. Edge computing is a radical innovation in application topology and a new computational paradigm, but it’s not a simple, easily identifiable technology that you can just purchase and deploy. It requires rethinking how applications are structured, what infrastructure is available at the periphery and how that infrastructure can be leveraged to produce better outcomes.

What’s Next? 

In future articles in this series, I'll be discussing how to get started with edge computing.

Forbes Technology Council is an invitation-only community for world-class CIOs, CTOs and technology executives. Do I qualify?