Latency-free with Edge Computing

26.10.2020 | Raffael Schneider
Cloud · Edge Computing · Internet of Things · Serverless · Headless · PaaS

According to this year’s The Internet of Things Report, Business Insider predicts over 41 billion active IoT devices by 2027. That equates to roughly 5 devices per person that will be connected to the internet, constantly sending data and thus generating traffic. The idea of edge computing is to provide computing power as close as possible to where it is needed, rather than in a geographically distant cloud.

Smaller devices that are not considered servers in today’s understanding are also meant to process data. Devices such as routers, Raspberry Pis or smart refrigerators become potential server units whose proximity to the end device, at the edge of the network, enables short data life cycles.

An infrastructure designed for the edge offers numerous advantages over classic, centralised cloud clusters: increased cost efficiency, native data security and, above all, greatly reduced latency.

Data is thus no longer processed entirely in the cloud, but distributed across different devices close to the customer. In this way, the cost of computing power is passed on to the end devices, and data travels shorter distances, making it both more secure and faster. The proliferation of smart devices and the high density of CPU power in all possible places is what makes edge computing feasible in the first place.

Edge Computing in practice

Edge computing is a paradigm that offers the end customer a more performant and seamless user experience by providing data and content as close to the customer as possible. There is not yet a uniform overall solution that accomplishes all of this, and today’s approaches are not yet sufficiently established; there is still plenty of untapped potential for using edge computing efficiently. Concrete approaches to implementing an edge-friendly infrastructure can be roughly divided into three use cases:

  • Edge cloud hosting combines the concept of classic cloud hosting with the possibility of hosting applications and services in a globally distributed manner, so that the nearest data centre is addressed when the application is accessed.
  • Micro cloud (also called edge cluster) is a small cluster of nodes with local storage and networking. Such a micro cluster is not found in huge data centres but sits, by definition, at the edge of the network, wherever an on-site cluster is necessary.
  • IoT gateway is a node that centralises a decentralised network of IoT devices; it filters, aggregates and visualises their data and makes the results available to further services, as the sketch after this list illustrates.
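
To make the gateway idea concrete, here is a minimal, hypothetical sketch in Python: instead of forwarding every raw reading to the cloud, the gateway filters and aggregates locally, so that only the condensed result leaves the edge.

```python
import statistics
from collections import defaultdict

def aggregate(readings: list) -> dict:
    """Condense raw sensor readings at the IoT gateway.

    Rather than forwarding every single reading to the cloud, the
    gateway aggregates them locally; only the summary leaves the
    edge, which cuts traffic and keeps data life cycles short.
    """
    by_sensor = defaultdict(list)
    for reading in readings:
        by_sensor[reading["sensor"]].append(reading["value"])
    return {
        sensor: {"mean": statistics.mean(values), "count": len(values)}
        for sensor, values in by_sensor.items()
    }

# Example: three raw readings collapse into one aggregate per sensor.
print(aggregate([
    {"sensor": "temp-kitchen", "value": 21.5},
    {"sensor": "temp-kitchen", "value": 21.7},
    {"sensor": "temp-cellar", "value": 14.2},
]))
```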

Just as with cloud technologies, efforts are underway at the edge to standardise interfaces. A variety of open-source projects are pushing the edge paradigm forward with visionary ideas. Below we look together at a few examples of the current state of the edge stack.

Edge cloud hosting

Traditional cloud hosting is done through a datacentre. Such datacentres may be mirrored across several premises, but those premises remain geographically close to one another. Amazon, for example, operates its AWS Frankfurt region across 3 different premises, yet all 3 datacentres are still in relative proximity to each other.

Edge hosting is achieved as soon as the load balancing of an application can draw on a global set of possible locations. This means that the application’s CNAME resolves, via DNS, to whichever edge location is nearest to the requesting user.
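
How requests find the nearest location can be observed at the DNS level. A minimal sketch in Python, assuming a hypothetical edge-hosted hostname (the value below is the placeholder distribution domain from the AWS documentation): resolving the same name from different places on the globe yields the addresses of different, nearby edge locations.

```python
import socket

# Hypothetical CNAME target of an edge-hosted application; this is
# the placeholder distribution domain used in the AWS documentation.
EDGE_HOSTNAME = "d111111abcdef8.cloudfront.net"

def resolve_edge_ips(hostname: str) -> list:
    """Resolve a hostname to its unique IPv4 addresses.

    With DNS-based edge routing, the result depends on where the
    query is made: each resolver is answered with the addresses of
    the nearest edge location.
    """
    infos = socket.getaddrinfo(hostname, 443, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

if __name__ == "__main__":
    for ip in resolve_edge_ips(EDGE_HOSTNAME):
        print(ip)
```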

AWS CloudFront and Lambda@Edge

Just like Appfleet, Fastly or Fly, Amazon brings computing power to the end user with AWS CloudFront. Content is delivered from the edge location closest to the user and persisted there via static asset caching; with Lambda@Edge, custom functions can additionally be executed directly at these edge locations.
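
Lambda@Edge makes this programmable. A minimal sketch of a viewer-request handler in Python (the header name is made up for illustration): CloudFront invokes the function at the edge location closest to the viewer, before the request ever reaches the origin.

```python
def handler(event, context):
    """Sketch of a Lambda@Edge viewer-request function.

    CloudFront passes the incoming request in its event structure;
    whatever is returned here is what CloudFront continues with.
    """
    request = event["Records"][0]["cf"]["request"]

    # Tag the request so the origin can see it was handled at the
    # edge. In the CloudFront event, header keys are lower-cased.
    request["headers"]["x-processed-at-edge"] = [
        {"key": "X-Processed-At-Edge", "value": "true"}
    ]

    # Returning the modified request forwards it; returning a
    # response object instead would answer directly from the edge.
    return request
```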

MicroK8s and K3S

MicroK8s corresponds to the micro cloud approach. Canonical’s MicroK8s is a full-fledged Kubernetes with the smallest possible footprint and leverages best-of-breed technologies of the cloud world such as Ingress, Istio and Prometheus; Rancher’s K3s pursues the same goal as a lightweight, certified Kubernetes distribution. Such slim distributions cope better with the irregular availability of edge nodes: wireless towers, self-driving cars or single-board computers can all serve as nodes. For this reason, micro cloud solutions such as MicroK8s support the ARM processors that are often built into tiny devices.
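
Since MicroK8s is a full-fledged Kubernetes, the standard APIs apply unchanged. A minimal sketch using the official kubernetes Python client (our choice for illustration, not something MicroK8s ships) that lists every node with its CPU architecture, which makes a mixed amd64/ARM edge cluster directly visible:

```python
from kubernetes import client, config

def list_edge_nodes() -> None:
    """Print each cluster node with its architecture and state.

    In a micro cloud this typically surfaces a mix of amd64 and
    arm64 nodes (e.g. Raspberry Pis) with varying availability.
    """
    # Assumes a kubeconfig is reachable, e.g. exported via
    # `microk8s config` on the MicroK8s host.
    config.load_kube_config()
    v1 = client.CoreV1Api()

    for node in v1.list_node().items:
        info = node.status.node_info
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"),
            "Unknown",
        )
        print(f"{node.metadata.name}: arch={info.architecture}, "
              f"kubelet={info.kubelet_version}, ready={ready}")

if __name__ == "__main__":
    list_edge_nodes()
```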

Akri

Akri complements the micro cloud approach. With Akri, a cluster can be made device-aware: Akri provides a Kubernetes interface through which embedded systems can be exposed to the cluster as resources. This allows the aggregated CPU/GPU power of small devices such as sensors, controllers or MCUs to be exploited by the cluster. These devices are then called leaf devices.

Akri’s motto is “Simply put: you name it, Akri finds it, you use it”. The Kubernetes device plugin framework is used for this purpose. Normally, small devices like cameras, controllers or microcontrollers are too small to run Kubernetes themselves, but through this additional layer of abstraction even such devices can be attached to nodes as leaf devices. This is as close to the edge as it gets.
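
Discovered devices appear in the cluster as Akri Instance custom resources. A minimal sketch, assuming Akri’s akri.sh/v0 CRDs and again using the kubernetes Python client, that lists the leaf devices the Akri agents have found:

```python
from kubernetes import client, config

def list_akri_instances() -> None:
    """List the leaf devices Akri has discovered.

    Akri represents every discovered device as an Instance custom
    resource (API group akri.sh, version v0 at the time of writing).
    """
    config.load_kube_config()
    api = client.CustomObjectsApi()

    instances = api.list_cluster_custom_object(
        group="akri.sh", version="v0", plural="instances"
    )
    for item in instances.get("items", []):
        name = item["metadata"]["name"]
        # `shared` marks devices that several nodes may use at once.
        shared = item.get("spec", {}).get("shared", False)
        print(f"{name}: shared={shared}")

if __name__ == "__main__":
    list_akri_instances()
```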

OpenShift Gateways

Established platforms now also offer edge solutions. Red Hat, for example, provides gateways for its container platform OpenShift that are necessary for implementing an edge-capable cluster.

Edge computing is only at the beginning

No matter where the digital journey leads, we at b-nova always see an opportunity to generate added value for our customers through innovation and technology.

Red Hat’s approach to edge computing | Red Hat

What’s the deal with edge computing? | Canonical

From Cloud Computing to Edge Computing | Medium @OpenSourceVoices