Edge computing promises better user experiences and greater efficiencies, but without software the edge is just computers. Realizing the full potential of the edge requires a catalyst: software developers.
Editor’s Note: This is a guest post from an industry expert. The State of the Edge blog welcomes diverse opinions from industry practitioners, analysts, and researchers, highlighting thought leadership in all areas of edge computing and adjacent technologies. If you’d like to propose an article, please see our Submission Guidelines.
We Need More Conversations About Software at the Edge
I’ve attended several edge-focused conferences over the past year, and I’ve noticed a dramatic absence of software-related conversations. Recently, at Edge Congress in Austin, I found myself in a session where the speaker polled the crowd to see who in the room represented data centers; half of the hands went up. Then he asked about telco; the other half of the hands went up. His final question asked who was representing software; in a session with more than 200 participants, only 3-4 hands went in the air (including mine and my co-founder’s).
To put this in context, a traditional content delivery network might need to run software in 100 data centers to cover the entire world’s population with better than 40ms round-trip times. A new class of edge computing applications will demand better than 10ms round-trip times, which may require thousands of data centers at the edge, such as at the base of cell towers. As telcos and data center providers deploy edge data centers, few people are talking about how we actually develop software that runs at scale across an exponentially increasing number of locations on this new infrastructure.
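Those round-trip figures are ultimately bounded by physics. A rough back-of-envelope sketch (the fiber speed is a rule-of-thumb assumption, not a measurement):

```python
# Back-of-envelope bound on round-trip time from propagation delay alone.
# Assumes signals travel at roughly 200,000 km/s in fiber (~2/3 the speed
# of light); real RTTs add queuing, processing, and last-mile overhead.
SPEED_IN_FIBER_KM_S = 200_000

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay in milliseconds."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1000

# A site ~2,000 km away can never beat ~20 ms RTT on propagation alone;
# a 10 ms budget forces compute within ~1,000 km of the user before any
# other overhead is even counted.
```

This is why a 10ms target implies orders of magnitude more locations than a 40ms target: the latency budget shrinks the serviceable radius of each site.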
We Must Bring a Software Perspective to the Edge
Every engineer has a unique perspective on what and where the edge is based on their role and the application architecture in which they operate. So, rather than attach a specific definition, the edge is better thought of as a compute continuum. Depending on the scenario, the edge can span from a centralized data center to continental/national/regional data centers, to cell towers, and all the way down to IoT devices (e.g., phones, point-of-sale systems).
Each provider also has their own point of view on what the edge is. Large data center operators often say their network edge is the firewall. If you talk to a CDN provider (e.g., Akamai, Cloudflare), they’ll say that the edge is where their servers are. If you talk to a telco, they’ll likewise say that the edge is where their servers are, whether in a local central office (CO) or at tower-connected antenna hubs. And for large enterprises, the edge may be the walls of their own data centers.
In reality, the edge isn’t any one of these places. It’s all of them. In order to make this complex landscape useful to developers, we must approach it from a software perspective, building abstractions and systems that allow developers to interact with the edge how and where they need.
What’s Holding Us Back?
Aside from a few noteworthy engineering teams, such as those at Netflix and Chick-fil-A, who have taken it upon themselves to build distributed architectures to run innovative workloads at the edge, the extent of most edge computing today is still locked into traditional CDN workloads and systems. As more developers look to leverage the benefits of edge computing, they need more flexibility and control than current CDNs can provide.
While many CDN providers are leaning into edge computing, the legacy systems have many deficiencies that are impeding developers from advancing beyond simple caching and other standard optimization techniques. The problems include:
- Fixed and inflexible networks translate to poor architectural choices.
- Disparate point solutions and “black box” edge software lead to a slow rate of change.
- Lack of integration with developer workflows and support for modern DevOps principles creates poor control of the edge.
Developers have absorbed concepts like centralized cloud, agile, and DevOps, yet most have little experience building highly distributed systems. How can we overcome this deficit by leveraging those common practices for faster edge adoption?
Requirements for Empowering Edge Development
In order to empower developers to move sophisticated parts of application logic out of the centralized infrastructure and into a service running on an unknown number of servers, there are some minimum requirements that must be addressed.
- Local Development. Distributed systems are hard to build. Developers need a true full stack environment that allows them to make and test changes locally before pushing to production. Not only does this reflect standard practices among modern development teams, but it also brings the benefits of faster feedback and risk-free experimentation.
- Immediate Diagnostics. Developers need comprehensive, real-time insights in order to monitor, diagnose, and optimize systems. This includes transaction traces, logging, and aggregated metrics.
- Consistent Behavior. Developers need to have confidence in their toolsets, in both usability and performance. In order to deliver platforms that developers adopt, all decisions must come from a developer-first mindset. The complete system must work in dev the same way it works in prod.
Edge Workloads, Components, and Scheduling
Inline (or In-Band) vs. Out-of-Band Workloads
There are two high-level categories of workloads to consider when thinking about the edge from a developer’s perspective. Inline (or in-band) workloads are the more basic of the two and can be thought of as synchronous or transactional: a client makes a request, and the system blocks on the response. A good example is an HTTP request or static file delivery.
Things get more sophisticated when it comes to the second category, out-of-band workloads, which can be thought of as asynchronous or non-transactional: custom logic at the edge processes data as it is being ingested, outside any request/response cycle. The computing model changes substantially when out-of-band workloads are introduced at the edge, and this is where the true potential of edge computing starts to take shape.
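A minimal sketch contrasting the two categories, following the common convention that in-band work sits in the synchronous request path while out-of-band work is processed asynchronously (the handler names here are hypothetical, not any platform’s API):

```python
import queue

def inline_handler(request: dict) -> dict:
    """Inline (in-band): the client blocks until this returns."""
    return {"status": 200, "body": f"hello, {request['user']}"}

ingest_queue: "queue.Queue[dict]" = queue.Queue()

def ingest(event: dict) -> None:
    """Out-of-band: enqueue and return immediately; processing happens later."""
    ingest_queue.put(event)

def process_backlog() -> int:
    """A background worker drains the queue outside any request/response cycle."""
    processed = 0
    while not ingest_queue.empty():
        ingest_queue.get()
        processed += 1
    return processed
```

The caller of `inline_handler` waits for the body; the caller of `ingest` does not, which is what changes the computing model once out-of-band workloads arrive at the edge.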
Edge Workload Components
Within these workloads, there are several key components that drive decision-making, both for developers and for those building tooling for developers at the edge.
- Web Servers: Traditional CDN workloads have primarily relied on load balancing and reverse proxies. As edge workloads become more sophisticated, software architects are leveraging networks of containerized microservices.
- Other Triggers: What many term ‘serverless functions’ has become a common way to run logic closer to end devices. This, coupled with edge cron jobs that compile and send only the necessary information back to the origin server, has established the foundation for edge computing. However, as the need for more specialized infrastructure arises, developers are looking to a ‘serverless for containers’ model to run their containerized microservices at the edge without having to worry about the allocation and provisioning of servers near end users.
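The edge cron pattern above is easy to picture: aggregate locally, ship only a compact summary upstream. A hypothetical sketch (the event shape and field names are illustrative):

```python
from collections import Counter

def summarize(events: list) -> dict:
    """Reduce raw edge events to the few fields the origin actually needs."""
    status_counts = Counter(e["status"] for e in events)
    return {
        "total": len(events),
        "errors": sum(c for s, c in status_counts.items() if s >= 500),
        "by_status": dict(status_counts),
    }

# A periodic job would run this over locally buffered events and send the
# small summary dict to the origin, instead of forwarding every raw record.
events = [{"status": 200}, {"status": 200}, {"status": 503}]
summary = summarize(events)
```

The bandwidth saving scales with traffic: thousands of raw records per interval collapse into one fixed-size summary per location.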
- State Management: At the moment, there are a few different state management models that people talk about at the edge: ephemeral, persistent, and distributed. Distributed state management presents the most interesting challenges for edge computing. For example, a common use case for distributed state at the edge comes in web application firewalls, where security administrators want to block traffic at every endpoint and as soon as one endpoint detects it, the other endpoints should know about it. For an interesting read on this subject, check out Edge Computing is a Distributed Data Problem.
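The web application firewall example can be modeled in miniature: when one endpoint blocks an IP, the decision replicates to every peer. A real system would use a gossip protocol or a replicated store; this toy sketch only shows the shape of the problem:

```python
class EdgeEndpoint:
    """Toy edge node holding a locally replicated blocklist."""

    def __init__(self, name: str):
        self.name = name
        self.blocklist: set = set()
        self.peers: list = []

    def detect_and_block(self, ip: str) -> None:
        """A local detection immediately propagates to all peers."""
        self.blocklist.add(ip)
        for peer in self.peers:
            peer.blocklist.add(ip)

    def allows(self, ip: str) -> bool:
        return ip not in self.blocklist

a, b, c = EdgeEndpoint("ams"), EdgeEndpoint("nyc"), EdgeEndpoint("syd")
for node in (a, b, c):
    node.peers = [p for p in (a, b, c) if p is not node]

a.detect_and_block("203.0.113.7")  # detected once, blocked everywhere
```

Even this toy exposes the hard parts the article alludes to: propagation delay, conflicting updates, and what happens when a peer is unreachable.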
- Messaging: Every type of workload running on an edge platform must be able to receive messages via low-latency global message delivery. To scale this messaging, API extensibility is key.
- Diagnostics: What a developer fundamentally needs when building these systems is traceability through the entire stack. We need to provide effective mechanisms for developers and operators to go into a system, see what went wrong, and find where they can optimize.
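The core mechanism behind that traceability is simple to sketch: a trace ID minted at the first hop is carried through every layer, so logs from different components can be correlated afterward. The component names below are hypothetical:

```python
import uuid

LOGS: list = []  # (trace_id, component, message) tuples

def log(trace_id: str, component: str, message: str) -> None:
    LOGS.append((trace_id, component, message))

def cache_lookup(trace_id: str) -> None:
    """Downstream layers log under the same trace ID they were handed."""
    log(trace_id, "cache", "miss")
    log(trace_id, "origin", "fetched")

def handle_request() -> str:
    trace_id = uuid.uuid4().hex  # minted once at the edge proxy
    log(trace_id, "edge-proxy", "request received")
    cache_lookup(trace_id)
    log(trace_id, "edge-proxy", "response sent")
    return trace_id

tid = handle_request()
# Filtering on one ID reconstructs the full path of a single request.
trace = [entry for entry in LOGS if entry[0] == tid]
```

In production the same idea is carried in request headers across processes and locations, rather than in an in-memory list.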
Edge Workload Scheduling
One of the biggest topics in edge computing is scheduling. Imagine a future world where every 5G base station has a data center at its base. While there will be a massive amount of compute in these edge data centers, there certainly will not be enough to run every application in the world at every one of those towers in parallel. We need a system that can optimize workload scheduling so that each workload runs in the right place at the right time. This is a very challenging problem that nobody has completely solved yet.
As we work through these challenges, the scheduling models to consider include:
- Static: This is what we have today with content delivery networks—set locations with pre-determined configurations.
- Dynamic: Scheduling based on latency or volume thresholds. This is perhaps where the most opportunity lies when it comes to edge computing.
- Enforcement: Circumstantial scheduling based on geography or data sovereignty requirements, as in the case of GDPR, or compliance regimes such as PCI.
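The three models above can compose into a single placement decision: enforcement filters first (it is non-negotiable), a dynamic latency threshold filters next, and a static preference breaks ties. A minimal sketch, with illustrative location data and thresholds:

```python
from typing import Optional

def schedule(locations: list, user_region: str,
             max_latency_ms: float, data_must_stay_in: Optional[str]) -> Optional[dict]:
    """Pick a location for a workload, or None if no location qualifies."""
    candidates = locations
    # Enforcement: sovereignty/compliance constraints are applied first.
    if data_must_stay_in:
        candidates = [l for l in candidates if l["region"] == data_must_stay_in]
    # Dynamic: keep only locations meeting the latency budget for this user.
    candidates = [l for l in candidates
                  if l["latency_ms"][user_region] <= max_latency_ms]
    # Static tie-break: prefer the cheapest pre-provisioned location.
    return min(candidates, key=lambda l: l["cost"], default=None)

locations = [
    {"name": "fra1", "region": "eu", "cost": 3, "latency_ms": {"eu": 8, "us": 90}},
    {"name": "iad1", "region": "us", "cost": 2, "latency_ms": {"eu": 85, "us": 6}},
]
placed = schedule(locations, "eu", max_latency_ms=10, data_must_stay_in="eu")
```

The ordering of the filters is the design choice: enforcement before optimization guarantees a GDPR or PCI constraint is never traded away for a faster location.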
We Need to Bring DevOps to the Edge
To keep pace with technology, engineers must be able to conduct quick deployments in safe, reliable, and repeatable ways. DevOps and continuous delivery (CD) support a more responsive and flexible software delivery cycle; DevOps accelerates development cycles, which helps organizations achieve a quicker pace of innovation.
As developers gain more control over the provisioning of IT resources and everyday operations, they require more flexibility, transparency and visibility in their technology stack. In keeping with the developer-first mindset, edge compute software must adhere to these same principles in order for us to continue to charge forward in this new paradigm. As companies deploy hardware into edge data centers, we must similarly advance the software that takes advantage of these new capabilities. Hardware and software must evolve together at the edge.
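One CD practice that translates directly to the edge is a staged (canary) rollout across locations: deploy a new version to a small subset first, and continue only if it stays healthy. A minimal sketch, where the location names and health check are hypothetical:

```python
from typing import Callable

def rollout(locations: list, version: str,
            healthy: Callable[[str], bool]) -> dict:
    """Deploy to a canary location first; fan out only if the canary is healthy."""
    deployed: dict = {}
    canary, rest = locations[:1], locations[1:]
    for loc in canary:
        deployed[loc] = version
    if not all(healthy(loc) for loc in canary):
        return deployed  # halt: only the canary runs the new version
    for loc in rest:
        deployed[loc] = version
    return deployed
```

With thousands of edge locations, this kind of staged, automated rollout is not an optimization but a prerequisite: no team can deploy to them one at a time.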
Daniel Bartholomew is Co-Founder & CTO at Section, a developer-centric, multipurpose edge PaaS solution that empowers web application engineers to run any workload, anywhere. Daniel has spent over twenty years in engineering leadership and technical consulting roles. His vision for a developer-friendly edge platform was born long before the term ‘edge computing’ was coined and has evolved into a pioneering technology that is focused on meeting the needs of today’s developers.