3 options for edge computing with Kubernetes

Posted May 25, 2020 · 4 min read

Introduction: Enterprises hold different views on edge computing, but few rule out the possibility of deploying application components to the edge in the future, especially for the Internet of Things (IoT) and other low-latency applications.

Many organizations also see Kubernetes as an ideal mechanism for running containers in edge computing environments, particularly companies that have already adopted the orchestration system for their cloud and data center needs.

The use of Kubernetes in edge computing follows three general rules:

Rule 1: Avoid edge models that lack resource pools, because Kubernetes provides no real benefit in those environments. Kubernetes is designed specifically to manage container deployment across a cluster, and a cluster is, by definition, a resource pool.

Rule 2: Treat edge Kubernetes as a special use case in a wider Kubernetes deployment.

Rule 3: Unless there are a large number of edge resource pools, avoid Kubernetes tools designed specifically for edge hosting.

In addition to these three general rules, there are three main deployment options for running Kubernetes and containers in an edge environment.

Option 1: Public cloud

In this model, the public cloud provider hosts the edge environment, or the environment is an extension of public cloud services.

The typical use case for this option is enhancing the interactivity of a cloud front end. Here, the edge is an extension of the public cloud, and the organization's Kubernetes deployment practices should align with the cloud provider's offerings. The provider's support for edge computing may involve local edge devices integrated with its public cloud service, such as AWS Snowball.

Public cloud edge hosting is almost always supported by extending one of the cloud hosting options (VMs, containers, or serverless functions) to the edge, which means Kubernetes will not treat the edge as a separate cluster. This approach is easy to implement, but it may require Kubernetes scheduling strategies (such as node affinity, taints, and tolerations) to direct edge components onto edge resources. Be careful: if the goal of edge computing is to reduce latency, do not allow edge resources to sit too far from the elements they control.
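To make that concrete, here is a minimal sketch of such a placement policy, assuming edge nodes carry a hypothetical label (node-role.example.com/edge) and taint (edge-only=true:NoSchedule); the affinity and toleration mechanics themselves are standard Kubernetes:

```yaml
# Sketch: steer an application's edge components onto edge nodes.
# The label, taint, names, and image are hypothetical; substitute
# whatever your nodes actually carry.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensor-frontend        # hypothetical edge-facing component
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sensor-frontend
  template:
    metadata:
      labels:
        app: sensor-frontend
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.example.com/edge
                    operator: In
                    values: ["true"]
      tolerations:
        # Permits scheduling onto nodes tainted
        # "edge-only=true:NoSchedule".
        - key: edge-only
          operator: Equal
          value: "true"
          effect: NoSchedule
      containers:
        - name: app
          image: registry.example.com/sensor-frontend:1.0
```

The taint keeps ordinary workloads off the edge hardware, while the affinity rule pulls this component onto it.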

In an edge computing architecture, data is processed on the periphery of the network, as close as possible to its original source.

Option 2: Server facilities outside the data center

This approach involves edge deployment in one or more server facilities outside the organization's own data center.

The main use case for this edge model is the Industrial Internet of Things, where there are substantial edge processing requirements, at least enough to justify placing servers in locations such as factories and warehouses. In this case, there are two options: treat each edge hosting point as a separate cluster, or treat edge hosting as part of the primary data center cluster.

Where edge hosting supports a variety of applications (meaning each edge site is a real resource pool), consider a dedicated Kubernetes distribution optimized for edge-centric tasks, such as KubeEdge. Determine whether your edge applications are tightly integrated with the data center Kubernetes deployment and, in some cases, whether the edge and the data center will back each other up.
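As a rough illustration, KubeEdge-managed edge nodes register with the cluster as ordinary nodes, so a standard manifest with a node selector can pin a workload to them. The label below is the one KubeEdge conventionally applies to edge nodes; confirm it against your own installation, and note that the application name and image are hypothetical:

```yaml
# Sketch: pin a workload to KubeEdge-managed edge nodes. Verify the
# edge-node label on your own cluster before relying on it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: factory-analytics      # hypothetical IIoT component
spec:
  replicas: 1
  selector:
    matchLabels:
      app: factory-analytics
  template:
    metadata:
      labels:
        app: factory-analytics
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""
      containers:
        - name: analytics
          image: registry.example.com/factory-analytics:1.0
```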

In many edge deployments, the edge acts almost as a client, running only dedicated applications rather than serving as a resource pool. In that case, it may not be necessary to integrate the Kubernetes clusters at all. Otherwise, consider Kubernetes federation as a way to unify edge and data center policy deployment.
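A sketch of the federated route, using KubeFed (one implementation of Kubernetes federation; the API group shown is the one used by its v0.x releases): the template is an ordinary Deployment, and the placement list names which member clusters, data center or edge, receive it. The cluster names and workload here are hypothetical:

```yaml
# Sketch using KubeFed's FederatedDeployment: one template, pushed to
# the member clusters named under "placement".
apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: telemetry-collector
  namespace: edge-apps
spec:
  template:                    # an ordinary Deployment body
    metadata:
      labels:
        app: telemetry-collector
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: telemetry-collector
      template:
        metadata:
          labels:
            app: telemetry-collector
        spec:
          containers:
            - name: collector
              image: registry.example.com/telemetry:1.0
  placement:
    clusters:
      - name: datacenter-east  # primary cluster
      - name: edge-site-01     # edge clusters
      - name: edge-site-02
```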

Option 3: Dedicated appliances

In this case, the edge model consists of a set of specialized appliances dedicated to a factory or processing facility.

Many dedicated edge devices are based on ARM microprocessors rather than the server-centric Intel or AMD chips. In many cases, these devices are closely tied to IoT, meaning each edge device has its own community of sensors and controllers to manage. The applications here rarely change, so the benefits of containerization overall, and of Kubernetes in particular, are smaller. The most common use case for this model is smart buildings.

Non-server edge devices are often paired with Kubernetes distributions designed for smaller device footprints, such as K3s. However, some dedicated edge devices may not require orchestration at all. If a device can run multiple applications simultaneously or separately, or hosts a set of cooperating application components, consider using K3s to coordinate deployment. If neither condition holds, simply load the application onto the device from local or network storage as needed.
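If a K3s-class device does warrant light orchestration, one convenient pattern is K3s's auto-deploying manifests directory: manifests placed in /var/lib/rancher/k3s/server/manifests on the server node are applied automatically, so an appliance can boot straight into its workload with no extra tooling. A sketch with a hypothetical application:

```yaml
# Sketch: /var/lib/rancher/k3s/server/manifests/controller.yaml
# K3s applies files in this directory automatically at startup and on
# change. The application name and image are hypothetical.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: building-controller
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: building-controller
  template:
    metadata:
      labels:
        app: building-controller
    spec:
      hostNetwork: true        # talk to local sensors directly
      containers:
        - name: controller
          image: registry.example.com/building-controller:1.0
```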

In some cases, the edge components of an application are tightly integrated with the application components running in the data center. This may require administrators to deploy edge and data center components together and use a common orchestration model across both. Here, either merge the edge elements into the main data center cluster and use policies to direct edge components to the correct hosting locations, or separate the edge into its own clusters and deploy and orchestrate them through federation.

Ultimately, containers and orchestration are tools for making effective use of resource pools. For companies whose edge computing model creates small server farms at the edge, Kubernetes and edge computing are good partners, and a separate, edge-specific Kubernetes strategy, coordinated with the main Kubernetes deployment, is a good idea.

If the edge environment is more specialized but must still be deployed and managed in conjunction with the main application hosting resources (whether in the cloud or the data center), treat the edge as a host type within the existing Kubernetes deployment. If the edge is specialized and largely independent, consider not using containers in edge computing at all.