Knative is a Kubernetes-based platform for deploying and managing serverless workloads. This guide will cover the following:

  • Knative use cases

  • Setting up Istio for traffic management

  • Installing Knative Serving on Civo

  • Connecting Istio and Knative

  • Deploying the first version of our stateless application

  • Observability in our deployments

  • Updating our Deployment

Knative Overview

Kubernetes is great for running stateless applications. As we move to the cloud and containers to manage our microservice infrastructure, our goal is to spend less time maintaining infrastructure resources and more time developing features and improving our application.

However, one of the main criticisms of Kubernetes is that it again requires us to focus on infrastructure: managing YAML files and spending months learning the technology. The good news is that Knative and similar tools are here to simplify the Kubernetes deployment process.

Knative is divided into a Serving and an Eventing component.

The Serving component is used for deploying serverless applications and functions. The Eventing component is used to bind to and consume cloud-native events. In this guide, we are going to use the Knative Serving component.

The Knative API is built on top of the Kubernetes API. By applying Knative Custom Resource Definitions, you are extending the Kubernetes API, gaining access to Knative features. This also means that we can use existing Kubernetes tools with Knative.
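Because Knative is just a set of CRDs plus controllers, standard Kubernetes tooling can inspect it. For example, once Serving is installed (which we do later in this guide), the new API resources show up alongside the built-in ones:

```shell
# List the API resources contributed by the Knative Serving CRDs
kubectl api-resources --api-group=serving.knative.dev
```

You should see resources such as services (ksvc), revisions, and routes listed under the serving.knative.dev group.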

Let’s get started.

Prerequisites

To follow this guide, you will need a Civo Kubernetes cluster and kubectl installed and configured locally.

Currently, Knative is not on the Civo Marketplace. Thus, we will have to follow the commands provided in the Knative documentation.

You should be connected to your Civo cluster. If you do not have a cluster yet, now is the right time to get started. Make sure you can use kubectl to reach your cluster: you should see your cluster's nodes if you run kubectl get nodes, as below:

$ kubectl get nodes
NAME                                    STATUS   ROLES                  AGE   VERSION
k3s-demo-cluster-2c8a30fd-master-9ef3   Ready    control-plane,master   12m   v1.20.2+k3s1
k3s-demo-cluster-2c8a30fd-node-58c9     Ready    <none>                 11m   v1.20.2+k3s1
k3s-demo-cluster-2c8a30fd-node-3dfc     Ready    <none>                 11m   v1.20.2+k3s1

Istio Installation

First, we make sure our Service Mesh is up and running; it will handle the traffic of our deployments. Our Service Mesh of choice is Istio. If you prefer not to use Istio, the Knative documentation lists several alternatives.

Note that, unless you purchase additional LoadBalancers in the Civo Dashboard, your cluster can provision a single LoadBalancer. When creating the cluster, make sure that none of its deployments claims that LoadBalancer; otherwise, you may have to convert the existing LoadBalancer Service into a NodePort so that your Service Mesh can create one. If the concepts of LoadBalancer and Service Mesh are new to you, please refer to the Kubernetes documentation.
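If something has already claimed the LoadBalancer, the conversion could look like the sketch below. It assumes the claiming Service is Traefik in the kube-system namespace, which is a common default on k3s clusters; your Service name may differ, so check the output of the first command:

```shell
# Find Services of type LoadBalancer across all namespaces
kubectl get svc --all-namespaces | grep LoadBalancer

# Convert the claiming Service to NodePort so Istio can take the LoadBalancer
kubectl patch svc traefik -n kube-system -p '{"spec":{"type":"NodePort"}}'
```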

For convenience, we are going to use istioctl to install Istio. If you have not installed istioctl yet, make sure to follow these steps:

curl -L https://istio.io/downloadIstio | sh -
cd istio-<version>
export PATH=$PWD/bin:$PATH

The download script places Istio in a versioned directory; replace <version> with the version you downloaded. The export line makes istioctl accessible in your current shell. To make istioctl available anywhere on your system, move the binary into a directory that is permanently on your PATH.
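You can confirm the CLI is reachable before continuing:

```shell
# Prints the istioctl client version; the control plane is not installed yet
istioctl version --remote=false
```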

Next, we are using istioctl to install Istio:

istioctl install --set profile=demo --skip-confirmation

Check Istio is running with the following command:

kubectl get all -n istio-system

The important thing to check in the output is that the istio-ingressgateway Service has an External-IP address.
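To see just the ingress gateway Service and its IP:

```shell
# The EXTERNAL-IP column should show the address of your Civo LoadBalancer
kubectl get svc istio-ingressgateway -n istio-system
```

If the EXTERNAL-IP column shows <pending>, the LoadBalancer has not been provisioned yet; see the note about LoadBalancers above.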

Install Knative Serving

Now that we have our Service Mesh running, we can install the Knative Serving Component.

kubectl apply -f https://github.com/knative/serving/releases/download/v0.22.0/serving-crds.yaml
kubectl apply -f https://github.com/knative/serving/releases/download/v0.22.0/serving-core.yaml

Note: For the latest version, please check the Knative documentation. You may need to change the version directory in the URLs above to the version you want to install.

Now that Serving is installed, check the knative-serving namespace to make sure everything is running correctly.

kubectl get all -n knative-serving
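Rather than eyeballing the output, you can also block until every Serving pod reports Ready:

```shell
# Wait up to five minutes for all Knative Serving pods to become Ready
kubectl wait pod --all --for=condition=Ready -n knative-serving --timeout=300s
```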

Connect Knative and Istio

Lastly, we have to apply the Knative Istio controller:

kubectl apply -f https://github.com/knative/net-istio/releases/download/v0.22.0/net-istio.yaml

We will create a new namespace for our stateless application and make sure that Istio can access both our new namespace and the knative-serving namespace.

$ kubectl create ns demo
> namespace/demo created
$ kubectl label namespace demo istio-injection=enabled
> namespace/demo labeled
$ kubectl label namespace knative-serving istio-injection=enabled
> namespace/knative-serving labeled

Awesome! We are nearly there. We just have to make sure that every deployment will get a unique URL. Open an editor and copy in the YAML below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  <cluster url>.xip.io: |

Ideally, store this in a domain-config.yml file. Next, replace <cluster url> in the file with the External-IP of the istio-ingressgateway Service in the istio-system namespace, and save the file. Refer to the Istio installation section above for the command to get your external IP.

Then apply the configuration to your cluster:

kubectl apply -f domain-config.yml
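Alternatively, this whole step can be scripted. A sketch, assuming your LoadBalancer exposes an IP address rather than a hostname:

```shell
# Fetch the ingress gateway's external IP from the Service status
EXTERNAL_IP=$(kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')

# Build the domain key Knative expects and patch the ConfigMap in place
DOMAIN="${EXTERNAL_IP}.xip.io"
kubectl patch configmap config-domain -n knative-serving \
  --type merge -p "{\"data\":{\"${DOMAIN}\":\"\"}}"
```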

Now we can go ahead and apply the first revision of our deployment.

Deploy Stateless Application

To make our first deployment, we have to learn a bit about the way Knative handles deployments. Traditionally, you would need at least the following resources to run a stateless application:

  • a Deployment/ReplicaSet to manage the Pods
  • the Pods running your containers
  • a Pod autoscaler to ensure an adequate number of Pods are running
  • a Service so that other Pods/Services can access the application
  • an Ingress or similar, if the application should be reachable from outside the cluster

As you can imagine, those are all YAML files that all have to be maintained. Knative works quite differently and only requires a few lines of YAML to spin up all of the resources required by your application.

Let’s take a look at a Knative Service deployment. Below is an example YAML file:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: react-example
  namespace: demo
spec:
  template:
    metadata:
      name: react-example-first
    spec:
      containers:
        - image: docker.io/anaisurlichs/knative-demo:current
          ports:
          - containerPort: 80
          imagePullPolicy: Always
          env:
            - name: TARGET
              value: "Knative React v1"
  traffic:
  - tag: current
    latestRevision: true
    percent: 100

As you can see, we are using the Knative API provided by the Knative CRDs that we installed earlier.

Within the spec section, we specify the container image we want to deploy, the port that our container exposes, and environment variables; here, the TARGET variable marks which revision is being served so we can tell the versions apart. Additionally, we define how our traffic should be distributed: this is our latest revision, and we currently want to send 100% of the traffic to it.

Let’s go ahead and apply this YAML file. Save it in your current working directory as release-sample.yaml and run:

kubectl apply -f release-sample.yaml
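You can then inspect the Knative Service that was created (ksvc is the short name for the Serving CRD):

```shell
# READY should become True once the revision is up and routable
kubectl get ksvc -n demo
```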

Note that we have specified the namespace in the YAML itself. If you did not specify the namespace and then checked the resources running in your default namespace, you might see the following:

NAME                                        URL                                                 LATESTCREATED         LATESTREADY           READY     REASON
service.serving.knative.dev/react-example   http://react-example.default.212.2.245.178.xip.io   react-example-first   react-example-first   Unknown   IngressNotConfigured

This tells us that the IngressGateway is not configured for the default namespace.

In our case, however, we gave Istio access to the demo namespace and applied our resource there. If you have been following the tutorial, you should now be able to access the application at http://react-example.demo.<external ip>.xip.io.

Voila, our application is running. Let’s observe Knative’s magic in our cluster.

We have a little script that you can run to generate some traffic to your deployment:

#!/bin/bash
while true; do
    curl -I <domain>
    sleep 0.1
done

Replace <domain> with the URL of your application served through the Istio IngressGateway, save this as call.sh and run it:

./call.sh

If you get a permission denied error, change the script's permissions to executable:

chmod +x call.sh

And then try again to run the script:

./call.sh

While the script runs, let’s watch our resources scale up. Open another terminal and run the following:

watch kubectl get all -n demo

You should see something like the below:

> pod/react-example-first-deployment-75698f678f-pps9s   3/3     Running   0          85s

Once you stop the script (with Ctrl-c), our resources will then scale back down to 0 automatically. It is pure magic! (And really good engineering work :P)
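The scale-to-zero and scale-up behavior can be tuned per revision with annotations on the template's metadata. A minimal sketch using standard Knative autoscaling annotations (the values here are illustrative, not recommendations):

```yaml
spec:
  template:
    metadata:
      annotations:
        # Allow scaling to zero, but never above five pods
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "5"
        # Target 10 concurrent requests per pod before scaling up
        autoscaling.knative.dev/target: "10"
```

Setting minScale to "1" or higher disables scale-to-zero for that revision, trading idle cost for the elimination of cold starts.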

Observability

We want to know what is going on in our cluster. To get insight into our metrics from Istio, we are going to install Prometheus and Grafana. Note that we are using the quick installation from the Istio documentation; if you were running this on a production deployment rather than a demo cluster, you would want to create your own custom set-up.

Install Prometheus:

kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.9/samples/addons/prometheus.yaml

Install Grafana:

kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.9/samples/addons/grafana.yaml

We can then access Prometheus through the following command:

kubectl port-forward -n istio-system service/prometheus 9090:9090

Navigate with your web browser to localhost:9090, then go to Status > Targets to see all the endpoints that Prometheus is scraping.

And now to make it even fancier, we are going to access Grafana for some nice visualisations of those targets:

kubectl port-forward -n istio-system service/grafana 3000:3000

Now, navigate to localhost:3000 and go to Dashboards > Istio > Istio Workloads Dashboard. In a new terminal, run the call.sh script from before again to generate some traffic.

After a few seconds, you should see the Dashboard being populated with fancy data:

Grafana dashboard showing KNative traffic

Make sure to click through the other Dashboards, since they will provide you with more data. If you are curious about how Grafana accesses these metrics, click on the header of any graph and choose Edit. This shows the PromQL query used to fetch the metrics displayed in the panel. PromQL is Prometheus's custom query language; if you would like to learn more about it, have a look at the Prometheus documentation.
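As a sketch, a query like the one below returns the per-second request rate that Istio records for our workload. istio_requests_total is a standard metric exposed by Istio's sidecars; the destination_workload label value assumes the deployment name Knative generated for our first revision, so adjust it to match your cluster:

```promql
sum(rate(istio_requests_total{destination_workload="react-example-first-deployment"}[1m]))
```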

Update deployment

Now we are going to deploy an update to our application. Since we do not want to switch over all at once, we will first split the traffic between our old and our new deployment. Knative Services give us a handy option to do that. Remember this traffic section from release-sample.yaml earlier?

  traffic:
  - tag: current
    latestRevision: true
    percent: 100

Our updated Knative Service YAML will look like the following:

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: react-example
  namespace: demo
spec:
  template:
    metadata:
      name: react-example-second
    spec:
      containers:
        - image: docker.io/anaisurlichs/knative-demo:new
          ports:
          - containerPort: 80
          imagePullPolicy: Always
          env:
            - name: TARGET
              value: "Knative React v2"
  traffic:
  - tag: current
    revisionName: react-example-first
    percent: 50
  - tag: new
    revisionName: react-example-second
    percent: 50
  - tag: latest
    latestRevision: true
    percent: 0

As you can see, we have made the following changes:

  • we are using an updated container image
  • the env section has changed
  • the traffic section is further extended to specify a traffic split between our first and our second deployment.

Save the YAML file as traffic-splitting.yaml in your current directory.

Before we apply the new deployment, we want another dashboard that shows more clearly how the traffic split takes place. For this we are going to use Kiali, which can be installed to your cluster with:

$ kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.9/samples/addons/kiali.yaml

$ istioctl dashboard kiali

Then apply the Knative Service Deployment from above:

$ kubectl apply -f traffic-splitting.yaml

Run the call.sh script from earlier again to generate some traffic.

Go ahead and navigate to the public IP with your browser. If you refresh the page, you should see the background of the application changing. This is because the different versions are being served to you 50/50.

Lastly, navigate to the Kiali dashboard; the URL should be shown in the console. You will then see the traffic split between both revisions:

Kiali Dashboard

Once you are happy with revision two, you can update the deployment to send 100% of the traffic to that revision by changing the traffic section of traffic-splitting.yaml accordingly.
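A sketch of what that final traffic section could look like, using the revision name and tag from our earlier file:

```yaml
  traffic:
  - tag: new
    revisionName: react-example-second
    percent: 100
```

After applying this change, all requests are routed to the second revision, and Knative will scale the first revision's pods down to zero.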

Summarising

Congratulations! We covered a lot in this guide; from setting up Istio, installing Knative Serving, our first Knative deployment and then gathering metrics from Prometheus and observing the traffic in Grafana and Kiali. If you followed along with the entire tutorial, take a screenshot of your Grafana Dashboard and tweet it @civocloud and @urlichsanais. I am sure other community members would love to hear about your experience.