Serverless on Kubernetes with Knative

11.08.2021 Stefan Welsch
Cloud DevOps knative serverless k8s golang howto

We currently host our serverless applications directly on AWS Lambda. However, since we want to stay as provider-independent as possible, we take a look at Knative in this techup. Knative is a Kubernetes-based platform for running serverless applications directly in Kubernetes. This means that a serverless application can be ported to any other provider, as long as Kubernetes is available there.

Knative is an open source project that was first presented by Google in 2018. However, it is not only Google working on the serverless framework: numerous other large companies such as IBM and Red Hat contribute as well. The latest version at the time of this blog post is 0.24.

Knative comes with two basic components, “Serving” and “Eventing”. Serving is responsible for running serverless containers in Kubernetes. Eventing offers an interface for reacting to events such as GitHub hooks or message queues.

In some posts you can still read about a third component, “Build”. However, it has been archived, since Tekton Pipelines is meant to take over its role in the future.

Knative Serving

Knative Serving uses Kubernetes and Istio to deploy serverless applications and functions. The following features are supported:

  • Autoscaling including “scale to zero”

  • Support for the most common network layers such as Istio, Kourier, Ambassador, ...

  • Snapshots of code and configurations

There are four Kubernetes CRDs (Custom Resource Definitions) to define how the serverless applications behave in the cluster.

Service

The Service resource controls the entire lifecycle of an application or function. It also creates all objects required for execution (Route, Configuration, Revisions) and regulates which revision should be used.

Route

The route resource links a network endpoint to one or more revisions.

Configuration

The configuration resource ensures the desired state of the deployment. There is a strict separation between code and configuration. A new revision is created every time the configuration is changed.

Revision

The revision resource is a snapshot of the code and the configuration, which is created with every change. A revision can no longer be changed after it has been created. Revisions can be auto-scaled based on traffic.
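Once a service is deployed, as we do further below, these four resources can be inspected directly with kubectl; a quick sketch, assuming the knative-hello-bnova namespace used later in this techup:

```shell
# A Knative Service fans out into a Route, a Configuration and immutable Revisions.
kubectl get ksvc,route,configuration,revision -n knative-hello-bnova
```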

Knative Eventing

Knative Eventing provides functions for managing events. Applications are executed in an event-driven model. Eventing allows producers to be loosely coupled with consumers. Knative organizes the queuing of events and delivers them to the container-based services, using HTTP POST requests to send and receive events between producer and consumer.

Knative Eventing defines an EventType object so that consumers can easily discover the event types they can consume. These event types are stored in the event registry.

Anyone who follows my techups knows that I like to see the frameworks in practice! 😄 So let's take a look at Knative in a cluster.

In practice

First, we want to install Knative Serving in our Kubernetes cluster. We follow the steps in the Administration Guide with all defaults (Kourier, Magic DNS).

Knative Serving Custom Resources and Knative Serving

# Install the required custom resources by running the command:
kubectl apply -f https://github.com/knative/serving/releases/download/v0.24.0/serving-crds.yaml

# Install the core components of Knative Serving by running the command:
kubectl apply -f https://github.com/knative/serving/releases/download/v0.24.0/serving-core.yaml

Network layer (Kourier)

# Install the Knative Kourier controller by running the command:
kubectl apply -f https://github.com/knative/net-kourier/releases/download/v0.24.0/kourier.yaml

# Configure Knative Serving to use Kourier by default by running the command:
kubectl patch configmap/config-network \
  --namespace knative-serving \
  --type merge \
  --patch '{"data":{"ingress.class":"kourier.ingress.networking.knative.dev"}}'

Magic DNS (sslip.io)

Knative offers a Kubernetes job “default-domain”, which configures Knative Serving to use sslip.io as the default DNS suffix.

kubectl apply -f https://github.com/knative/serving/releases/download/v0.24.0/serving-default-domain.yaml

Now that we've installed Knative Serving, let's install Knative Eventing.

Knative Eventing Custom Resources and Knative Eventing:

# Install the required custom resource definitions (CRDs):
kubectl apply -f https://github.com/knative/eventing/releases/download/v0.24.0/eventing-crds.yaml

# Install the core components of Eventing:
kubectl apply -f https://github.com/knative/eventing/releases/download/v0.24.0/eventing-core.yaml
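Before moving on, it is worth checking that both control planes actually came up; all pods in the two namespaces should reach the Running state:

```shell
kubectl get pods -n knative-serving
kubectl get pods -n knative-eventing
```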

So far so good, Knative is now installed in our cluster.

Knative Serving Example

Now we want to create our first serverless application and deploy it using Knative. To do this, we write a very simple Go server.

package main

import (
	"fmt"
	"log"
	"net/http"
	"os"
)

func handler(w http.ResponseWriter, r *http.Request) {
	log.Print("b-nova: received a request")
	message := os.Getenv("MESSAGE")
	if message == "" {
		message = "World"
	}
	fmt.Fprintf(w, "Hello %s!\n", message)
}

func main() {
	log.Print("b-nova: starting server...")

	http.HandleFunc("/", handler)

	// Knative injects the port to listen on via the PORT environment variable.
	port := os.Getenv("PORT")
	if port == "" {
		port = "8080"
	}

	log.Printf("b-nova: listening on port %s", port)
	log.Fatal(http.ListenAndServe(fmt.Sprintf(":%s", port), nil))
}

Then we create a Dockerfile in order to build an image from our Go application.

# Use the official Golang image to create a build artifact.
# This is based on Debian and sets the GOPATH to /go.
FROM golang:1.16 as builder

# Create and change to the app directory.
WORKDIR /app

# Retrieve application dependencies using go modules.
# Allows container builds to reuse downloaded dependencies.
COPY go.* ./
RUN go mod download

# Copy local code to the container image.
COPY . ./

# Build the binary.
# -mod=readonly ensures immutable go.mod and go.sum in container builds.
RUN CGO_ENABLED=0 GOOS=linux go build -mod=readonly -v -o server

# Use the official Alpine image for a lean production container.
# https://hub.docker.com/_/alpine
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
FROM alpine:3
RUN apk add --no-cache ca-certificates

# Copy the binary to the production image from the builder stage.
COPY --from=builder /app/server /server

# Run the web service on container startup.
CMD ["/server"]

Then we generate our go.mod manifest (outside of GOPATH, go mod init expects a module path as argument):

go mod init knative-hello-bnova

From this we create an image and push it to https://docker.io.

# Build the container on your local machine
docker build -t bnova/knative-hello-bnova .

# Push the container to docker registry
docker push bnova/knative-hello-bnova

Our image is now ready to be deployed. Next, we create a Knative service. To do this, we write the following service.yaml file.

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: knative-hello-bnova
  namespace: knative-hello-bnova
spec:
  template:
    spec:
      containers:
        - image: docker.io/bnova/knative-hello-bnova
          env:
            - name: MESSAGE
              value: "from the whole b-nova team!"
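The template also accepts autoscaling hints as annotations; for example, the following sketch (the concrete values are just illustrative) would keep scale-to-zero enabled while capping the revision at three pods:

```yaml
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/min-scale: "0"  # allow scale to zero
        autoscaling.knative.dev/max-scale: "3"  # never run more than three pods
        autoscaling.knative.dev/target: "10"    # aim for ten concurrent requests per pod
```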

Now we just apply it, and our first serverless application should be available.

kubectl apply --filename service.yaml

After our service is created, Knative will do the following for us:

  • A new revision of our application is created.

  • A route, ingress, service, and load balancer are created for our application.

  • Our pods are automatically scaled up and down (including down to zero).

Let's look at this in detail. First, let's find out the URL of our service. With Magic DNS we automatically get an sslip.io URL for each service. With the following command we can see this specific URL:

$ kubectl get ksvc knative-hello-bnova -n knative-hello-bnova --output=custom-columns=NAME:.metadata.name,URL:.status.url
NAME                  URL
knative-hello-bnova   http://knative-hello-bnova.knative-hello-bnova.157.230.76.188.sslip.io

Currently there are no pods running for our service. So let's start a watch on get pods and see what happens when a request is made to the URL.

$ kubectl get pods -n knative-hello-bnova
No resources found in knative-hello-bnova namespace.

$ kubectl get pods -n knative-hello-bnova -w
NAME                                               READY   STATUS    RESTARTS   AGE
knative-hello-bnova-00001-deployment-5d99758858-lklpt   0/2     Pending   0          0s
knative-hello-bnova-00001-deployment-5d99758858-lklpt   0/2     Pending   0          0s
knative-hello-bnova-00001-deployment-5d99758858-lklpt   0/2     ContainerCreating   0          0s
knative-hello-bnova-00001-deployment-5d99758858-lklpt   1/2     Running             0          2s
knative-hello-bnova-00001-deployment-5d99758858-lklpt   1/2     Running             0          3s
knative-hello-bnova-00001-deployment-5d99758858-lklpt   2/2     Running             0          3s
knative-hello-bnova-00001-deployment-5d99758858-lklpt   2/2     Terminating         0          64s
knative-hello-bnova-00001-deployment-5d99758858-lklpt   1/2     Terminating         0          67s
knative-hello-bnova-00001-deployment-5d99758858-lklpt   0/2     Terminating         0          97s
knative-hello-bnova-00001-deployment-5d99758858-lklpt   0/2     Terminating         0          98s
knative-hello-bnova-00001-deployment-5d99758858-lklpt   0/2     Terminating         0          98s

As soon as a request comes in, a pod is automatically started and the request is accepted. Once the response has been sent, the pod automatically shuts down again after a certain idle timeout (60 seconds by default).
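This scale-from-zero round trip can be triggered with a plain request against the URL we looked up above:

```shell
$ curl http://knative-hello-bnova.knative-hello-bnova.157.230.76.188.sslip.io
Hello from the whole b-nova team!
```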

Conclusion

Using a simple example, we have seen how a very simple serverless application can be made available using Knative. We at b-nova have decided to migrate all existing Lambda functions to Knative! 🚀

The reason for this is relatively simple. AWS Lambda needs a handler within the application to receive the serverless requests.

A Go Lambda application looks like this, for example:

package main

import (
	"context"
	"os"

	"github.com/aws/aws-lambda-go/lambda"
)

// MyEvent is the incoming event payload.
type MyEvent struct {
	Stage string `json:"stage"`
}

func HandleRequest(ctx context.Context, event MyEvent) (string, error) {
	stage := event.Stage
	if stage == "" {
		stage = os.Getenv("stage")
	}

	// here is your logic

	return "ok", nil
}

func main() {
	lambda.Start(HandleRequest)
}

The main() method must call the lambda.Start() function as an entry point. So we would have to change our application code in order to deploy it as a Lambda function.

With Knative we can leave our application code unchanged. All that is required is additional configuration that is independent of the source code.

Next steps

In today's TechUp we only looked at Knative Serving and only the basics there. There are still many interesting topics in this area, such as traffic splitting.

We currently don't have a use case for Knative Eventing, but we will definitely keep an eye on this and reserve it for another TechUp.

As always, you can find the complete source code in our TechHub GitHub repo https://github.com/b-nova-techhub/knative-hello-bnova

Related Links:

https://knative.dev/docs/

https://stackoverflow.com/questions/58860118/how-does-knative-servings-activator-intercept-requests-to-scaled-down-revisions


This text was automatically translated with our golang markdown translator.

Stefan Welsch - pioneer, stuntman, mentor. As the founder of b-nova, Stefan is always looking for new and promising fields of development. He is a pragmatist through and through and therefore prefers to write articles that are as close as possible to real-world scenarios.