We currently host our serverless applications directly on AWS Lambda. However, since we want to remain as provider-independent as possible, we will take a look at Knative in this TechUp. Knative is a Kubernetes-based platform for running serverless applications directly in Kubernetes. This means that a serverless application can be ported to any other provider, as long as Kubernetes is available there.
Knative is an open-source project that was first presented by Google in 2018. However, Google is not the only company working on the serverless framework; numerous other large companies such as IBM and Red Hat are involved as well. The latest version at the time of this blog post is 0.24.
Knative comes with two basic components, “Serving” and “Eventing”. Serving is responsible for running serverless containers in Kubernetes. Eventing offers an interface to react to events such as GitHub hooks or message queues.
In some posts you can still read about a third component, “Build”. However, this component was archived (see the corresponding GitHub issue), since Tekton Pipelines are to be used going forward.
Knative Serving
Knative Serving uses Kubernetes and Istio to deploy serverless applications and functions. The following features are supported:
- Autoscaling, including “scale to zero”
- Support for the most common network layers such as Istio, Kourier, Ambassador, …
- Snapshots of code and configurations
There are four Kubernetes CRDs (Custom Resource Definitions) to define how the serverless applications behave in the cluster.
Service
The Service resource controls the entire lifecycle of an application or function. It also creates all objects that are required for execution (Route, Configuration, Revisions) and regulates which revision should be used.
Route
The route resource links a network endpoint to one or more revisions.
Configuration
The configuration resource ensures the desired state of the deployment. There is a strict separation between code and configuration. A new revision is created every time the configuration is changed.
Revision
The revision resource is a snapshot of the code and the configuration, which is created with every change. A revision can no longer be changed after it has been created. Revisions can be auto-scaled based on traffic.
Knative Eventing
Knative Eventing provides functions for managing events. Applications are executed in an event-driven model. Eventing allows producers to be loosely coupled with consumers. Knative handles the queuing of events and delivers them to the container-based services. HTTP POST requests are used to send and receive events between producer and consumer.
Knative Eventing defines an EventType object that makes it easy to offer consumers the event types they can consume. These event types are stored in the event registry.
Anyone who follows my techups knows that I like to see the frameworks in practice! 😄 So let’s take a look at Knative in a cluster.
In practice
First, we want to install Knative Serving in our Kubernetes cluster. We follow the steps in the Administration Guide with all defaults (Kourier, Magic DNS).
The installation consists of three steps: the Knative Serving custom resources and core components, a network layer (Kourier), and Magic DNS (sslip.io). For Magic DNS, Knative offers a Kubernetes job “default-domain”, which configures Knative Serving so that sslip.io is used as the default DNS suffix.
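Following the Administration Guide, the three steps can be carried out with `kubectl` roughly as follows (the release URLs assume version 0.24.0 and may differ for other versions):

```shell
# 1. Install the Knative Serving CRDs and core components
kubectl apply -f https://github.com/knative/serving/releases/download/v0.24.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/v0.24.0/serving-core.yaml

# 2. Install Kourier and set it as the default ingress
kubectl apply -f https://github.com/knative/net-kourier/releases/download/v0.24.0/kourier.yaml
kubectl patch configmap/config-network -n knative-serving --type merge \
  -p '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'

# 3. Configure Magic DNS via the default-domain job
kubectl apply -f https://github.com/knative/serving/releases/download/v0.24.0/serving-default-domain.yaml
```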
Now that we’ve installed Knative Serving, let’s install Knative Eventing.
Here we install the Knative Eventing custom resources and the Eventing core components.
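Analogous to Serving, this boils down to two `kubectl apply` commands (again assuming release 0.24.0):

```shell
kubectl apply -f https://github.com/knative/eventing/releases/download/v0.24.0/eventing-crds.yaml
kubectl apply -f https://github.com/knative/eventing/releases/download/v0.24.0/eventing-core.yaml
```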
So far so good, Knative is now installed in our cluster.
Knative Serving Example
Now we want to create our first serverless application and deploy it using Knative. To do this, we write a very simple Go server.
Then we create a Dockerfile so that we can build an image from our Go application.
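A multi-stage Dockerfile for such a Go application could look roughly like this (base images and binary name are assumptions):

```dockerfile
# Build stage: compile a static Go binary
FROM golang:1.16 AS build
WORKDIR /app
COPY . .
RUN CGO_ENABLED=0 go build -o server .

# Runtime stage: minimal image containing only the binary
FROM gcr.io/distroless/static
COPY --from=build /app/server /server
EXPOSE 8080
CMD ["/server"]
```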
Then we generate our go.mod manifest with:
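Assuming the module path matches our repository, this might look like:

```shell
go mod init github.com/b-nova-techhub/knative-hello-bnova
go mod tidy
```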
From this we build an image and push it to Docker Hub (docker.io).
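With the Dockerfile in place, build and push could be done like this (`<user>` stands for your Docker Hub account):

```shell
docker build -t <user>/knative-hello-bnova .
docker push <user>/knative-hello-bnova
```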
Our image is now ready to be deployed. Next, we create a Knative Service. To do this, we write the following service.yaml file.
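A minimal service.yaml might look as follows (the service name and image path are assumptions):

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: knative-hello-bnova
spec:
  template:
    spec:
      containers:
        - image: docker.io/<user>/knative-hello-bnova
```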
Now we just need to apply it, and our first serverless application should be available.
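Applying the manifest is a single kubectl command:

```shell
kubectl apply -f service.yaml
```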
After our service is created, Knative will do the following for us:
- A new revision of our application is created.
- A route, an ingress, a service and a load balancer are created for our application.
- Our pods are automatically scaled up and down.
Let’s look at this in detail. First, let’s find out the URL of our service. With Magic DNS we automatically get an sslip.io URL for each service. With the following command we can see this specific URL:
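Assuming the service is called knative-hello-bnova, the URL can be read from the Knative Service resource:

```shell
kubectl get ksvc knative-hello-bnova
```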
We see that there are currently no pods running. So let’s start a watch on the pods and see what happens when a request is made to the URL.
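In one terminal we watch the pods, in another we send a request (the URL shown is a hypothetical Magic DNS address, as printed by `kubectl get ksvc`):

```shell
# Terminal 1: watch pods come and go
kubectl get pods --watch

# Terminal 2: trigger a scale-up with a request
curl http://knative-hello-bnova.default.203.0.113.10.sslip.io
```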
As soon as a request is made, a pod is automatically started and the request is accepted. As soon as the response has been sent, the pod then automatically shuts down again after a certain timeout (60s).
Conclusion
Using a simple example, we have seen how a very simple serverless application can be made available using Knative. We at b-nova have decided to migrate all existing Lambda functions to Knative! 🚀
The reason for this is relatively simple. AWS Lambda needs a handler within the application to receive the serverless requests.
A Go Lambda application looks like this, for example:
The main() function must call lambda.Start() as the entry point. So we have to adapt our application code in order to deploy it as a Lambda function.
With Knative, we can leave our application code unchanged. All that is required is an additional configuration that is independent of the source code.
Next steps
In today’s TechUp we only looked at Knative Serving and only the basics there. There are still many interesting topics in this area, such as traffic splitting.
We currently don’t have a use case for Knative Eventing, but we will definitely keep an eye on this and reserve it for another TechUp.
As always, you can find the complete source code in our TechHub GitHub repo: https://github.com/b-nova-techhub/knative-hello-bnova
Related Links:
This text was automatically translated with our golang markdown translator.