One of the key benefits of Kubernetes lies in its extensibility. The API-driven nature of the container orchestration platform has paved the way for many of the tools we know and love today.

In this tutorial, we will discuss admission controllers: what they are and why they are important. Once we have covered the basics, we will wrap up by creating an admission controller of our own.

What are Admission Controllers?

The Kubernetes documentation does a great job of explaining admission controllers:

An admission controller is a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the object, but only after the request is authenticated and authorized.

In simpler terms, admission controllers can be thought of as middleware that can validate, mutate, or reject requests to the Kubernetes API.

Types of Admission Controllers

Admission controllers generally come in two types:

  • Validating admission controllers: These controllers accept or deny requests based on compliance with custom rules.
  • Mutating admission controllers: These controllers modify or “mutate” resource attributes; this could be anything from changing a deployment’s image tag to adding labels to a deployment.

Admission controllers may also be both validating and mutating.

The Lifecycle of a Kubernetes API Request

To further illustrate the role of admission controllers, let’s take a look at the lifecycle of a request to the Kubernetes API:

The Lifecycle of a Kubernetes API Request

The process begins with an API request sent to the Kubernetes API server, such as creating a new pod or deployment. The API server receives the request and passes it to the appropriate handler for the resource type (e.g., pods). Before proceeding, the request undergoes authentication and authorization checks. If either fails, the request is rejected.

If mutating admission controllers are enabled, the request is then sent to each configured mutating webhook. These webhooks can modify the request object before it's passed to the next stage. The image shows three webhooks being called sequentially, each potentially altering the request.

After any potential mutations, the request object is validated against the JSON schema for the resource type. This ensures the request conforms to the expected structure.

If validating admission controllers are enabled, the now-validated request is sent to each configured validating webhook. These webhooks can inspect the request and either allow or reject it based on custom logic.

If all previous stages succeed, the request is finally persisted to etcd, the distributed key-value store that acts as the central data store for Kubernetes.

Static Admission Controllers

It is also worth mentioning that there are static admission controllers, which are admission controllers that ship with Kubernetes (these may also be validating or mutating). A good example of a static admission controller is NamespaceLifecycle, a validating admission controller that ensures no new objects can be created in a namespace that is being terminated. This admission controller also prevents the deletion of three system-reserved namespaces:

  • default
  • kube-system
  • kube-public
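
You can see NamespaceLifecycle in action by attempting to delete one of these namespaces; the API server should reject the request:

kubectl delete namespace kube-system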

Why do we need Admission Controllers?

There are a number of ways admission controllers could be used; however, the majority of the use cases fall under two categories:

Security: Controllers can validate container images upon deployment to ensure they adhere to predefined standards. This validation can include checks for trusted sources and vulnerability scans.

Compliance: Controllers can validate incoming requests against predefined policies, ensuring that deployments, configurations, and resource allocations comply with organizational standards. Organizations can also leverage admission controllers to implement custom business logic specific to their operational needs.
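
To make the security use case concrete, here is a minimal, hypothetical sketch of the kind of check a validating webhook could run against a Deployment before allowing it. It assumes the same appsv1 package we import later in this tutorial, plus the standard strings and fmt packages; it is not part of the controller we build below.

// Hypothetical example: reject Deployments whose containers pull images
// from outside an approved registry.
// Assumes: appsv1 "k8s.io/api/apps/v1", "strings", "fmt"
func validateImageRegistry(deployment appsv1.Deployment, trustedRegistry string) error {
    for _, container := range deployment.Spec.Template.Spec.Containers {
        if !strings.HasPrefix(container.Image, trustedRegistry) {
            return fmt.Errorf("image %q is not from trusted registry %q", container.Image, trustedRegistry)
        }
    }
    return nil
}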

Writing Your Own Admission Controller

With the what and why of admission controllers covered, let’s shift gears a little and take a look at how we can implement one of our own.

For this demonstration, we will be developing a mutating admission controller that modifies the number of replicas in a deployment. This is a relatively basic example, but as you will see, it showcases some of the core concepts of admission controllers.

Prerequisites

This tutorial assumes a working knowledge of Kubernetes. Knowledge of Golang is helpful but not strictly required. In addition, you will need the following installed in order to follow along:

  • kubectl
  • Go
  • Docker
  • The Civo CLI (optional, only needed if you want to create a cluster as shown below)

Creating a Kubernetes Cluster (Optional)

If you already have a Kubernetes cluster up and running, feel free to skip this step; the important part here is to have admission plugins enabled. You can verify using the following kubectl command:

kubectl api-versions | grep admissionregistration.k8s.io

Output:

admissionregistration.k8s.io/v1

If you do not get the output above, take a look at this section of the Kubernetes docs for how to turn on admission controllers.

To create a cluster using the Civo CLI, run the following command:

civo k3s create --create-firewall --nodes 1 -m --save --switch --wait admissions 

This launches a one-node Kubernetes cluster in your Civo account. The -m flag merges the kubeconfig for the new cluster with your existing kubeconfig, and --switch points your kube-context to the newly created cluster.
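
Once the cluster is ready, you can confirm your kube-context points at it:

kubectl get nodes

You should see a single node in the Ready state.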

Preparing your Development Environment

With a Kubernetes cluster created, let’s shift our focus toward the code. Start by creating a new directory to house all the code. In your terminal, run the following commands:

Create a new directory:

mkdir mutating-replicant && cd mutating-replicant

This creates a new directory called mutating-replicant and changes the current working directory into it.

Next, we need to initialize a Go module; this will help track and manage dependencies as we go along.

Initialize a Go module:

go mod init replicant 

With that out of the way, the next step is to create a file called main.go, which will serve as the entry point for our controller.

Create main.go:

touch main.go 

Controller Code

In your editor of choice, open up main.go and follow along with the code below:

Import relevant packages:

package main

import (
    "crypto/tls"
    "encoding/json"
    "errors"
    "flag"
    "fmt"
    "io"
    "log"
    "net/http"

    "golang.org/x/exp/slog"
    admissionv1 "k8s.io/api/admission/v1"
    appsv1 "k8s.io/api/apps/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/serializer"
)

A few of these imports are worth highlighting:

  1. k8s.io/api/admission/v1: This package contains Kubernetes API types related to admission controllers, specifically the AdmissionReview and AdmissionResponse structs, which are fundamental for handling admission requests and responses.
  2. k8s.io/api/apps/v1: This package holds Kubernetes API types related to apps/v1, such as the Deployment struct.
  3. k8s.io/apimachinery/pkg/runtime: This package provides support for runtime object conversion and manipulation. It's utilized here for creating a scheme and a decoder for handling Kubernetes API objects.
  4. k8s.io/apimachinery/pkg/runtime/serializer: Aids in serializing and deserializing Kubernetes API objects. It's used here to create a decoder for deserializing admission review requests.

Declare a few global variables and a structure:

To do so, update main.go with the code below:

...

var (
    port    int
    tlsKey  string
    tlsCert string
)

type PatchOperation struct {
    Op    string      `json:"op"`
    Path  string      `json:"path"`
    Value interface{} `json:"value,omitempty"`
}

In the code above, we define three new variables:

port: This stores the port number on which the admission controller will listen for incoming requests.

tlsKey: This holds the file path to the TLS private key used for secure communication with the Kubernetes API server.

tlsCert: Similar to tlsKey, this variable stores the file path to the TLS certificate required for the TLS handshake between the admission controller and the Kubernetes API server.

PatchOperations in Kubernetes

In Kubernetes, making alterations or updates to resources often involves sending JSON patches to the Kubernetes API server. These patches conform to the JSON Patch standard (RFC 6902) and represent a sequence of operations to be applied to a target JSON document.

Each PatchOperation within this sequence encapsulates a single modification to the target document. The structure we've defined aligns with this standard, comprising three essential fields:

  • Op: Signifies the operation to be executed, whether it's adding a new field, removing an existing one, or replacing a value.
  • Path: Specifies the JSON path within the target Kubernetes resource where the operation should be applied. This path provides the precise location within the resource's JSON structure that needs modification.
  • Value: Represents the new value to be applied at the specified path.
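
As a concrete example, the single replace operation our controller will send later in this tutorial serializes to the following JSON Patch document:

[
  { "op": "replace", "path": "/spec/replicas", "value": 3 }
]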

Parsing Admission Requests

At the core of our controller lies an HTTP server; upon each request, an AdmissionReview is sent in the request body. Next, let’s write a function to extract the admission review from HTTP requests. Update main.go with the code below:

func httpError(w http.ResponseWriter, err error) {
    slog.Error("unable to complete request", "error", err.Error())
    w.WriteHeader(http.StatusBadRequest)
    w.Write([]byte(err.Error()))
}

func parseAdmissionReview(req *http.Request, deserializer runtime.Decoder) (*admissionv1.AdmissionReview, error) {

    reqData, err := io.ReadAll(req.Body)
    if err != nil {
        slog.Error("error reading request body", err)
        return nil, err
    }

    admissionReviewRequest := &admissionv1.AdmissionReview{}

    _, _, err = deserializer.Decode(reqData, nil, admissionReviewRequest)
    if err != nil {
        slog.Error("unable to desdeserialize request", err)
        return nil, err
    }
    return admissionReviewRequest, nil
}

In the updated code, we introduce two functions:

httpError: Throughout the code, we will need to return an error if any operation fails; this function helps reduce repetition and logs any errors as we go along.

parseAdmissionReview: Reads the entire body of the incoming HTTP request. After this, an empty AdmissionReview object is initialized; this object will be populated with the details extracted from the HTTP request. Using the provided deserializer, the function decodes the request data into the previously initialized AdmissionReview object. If the process is successful, the parsed AdmissionReview object is returned.

Handling Mutations

With our helper functions defined, we can now set up the HTTP handler responsible for handling admission review requests and performing mutations based on the extracted information.

... 

func mutate(w http.ResponseWriter, r *http.Request) {
    slog.Info("recieved new mutate request")

    scheme := runtime.NewScheme()
    codecFactory := serializer.NewCodecFactory(scheme)
    deserializer := codecFactory.UniversalDeserializer()

    admissionReviewRequest, err := parseAdmissionReview(r, deserializer)
    if err != nil {
        httpError(w, err)
        return
    }

    // Define the GroupVersionResource for Deployment objects
    deploymentGVR := metav1.GroupVersionResource{
        Group:    "apps",
        Version:  "v1",
        Resource: "deployments",
    }
    
    // Check if the admission request is for a Deployment object
    if admissionReviewRequest.Request.Resource != deploymentGVR {
        err := errors.New("admission request is not of kind: Deployment")
        httpError(w, err)
        return
    }

    deployment := appsv1.Deployment{}
    
    // Extract the Deployment object from the admission request
    _, _, err = deserializer.Decode(admissionReviewRequest.Request.Object.Raw, nil, &deployment)
    if err != nil {
        err := errors.New("unable to unmarshall request to deployment")
        httpError(w, err)
        return
    }
    var patches []PatchOperation
    
  // Perform mutations or modifications to the Deployment object
    patch := PatchOperation{
        Op:    "replace",
        Path:  "/spec/replicas",
        Value: 3,
    }

    patches = append(patches, patch)

    //marshal the patch into bytes
    patchBytes, err := json.Marshal(patches)
    if err != nil {
        err := errors.New("unable to marshal patch into bytes")
        httpError(w, err)
        return
    }

    // Prepare the AdmissionResponse with the generated patch
    admissionResponse := &admissionv1.AdmissionResponse{}
    patchType := admissionv1.PatchTypeJSONPatch
    admissionResponse.Allowed = true
    admissionResponse.PatchType = &patchType
    admissionResponse.Patch = patchBytes

    var admissionReviewResponse admissionv1.AdmissionReview
    admissionReviewResponse.Response = admissionResponse

    admissionReviewResponse.SetGroupVersionKind(admissionReviewRequest.GroupVersionKind())
    admissionReviewResponse.Response.UID = admissionReviewRequest.Request.UID

    responseBytes, err := json.Marshal(admissionReviewResponse)
    if err != nil {
        err := errors.New("unable to marshal patch response  into bytes")
        httpError(w, err)
        return
    }
    slog.Info("mutation complete", "deployment mutated", deployment.ObjectMeta.Name)
    w.Write(responseBytes)
}

For brevity, we’ll be walking through only the important parts. We begin by using the previously defined parseAdmissionReview function to extract the AdmissionReview object from the incoming HTTP request. If the request is of type Deployment, we extract the Deployment object from the admission request's raw data and unmarshal it into the deployment variable.

To perform a patch, we initialize a slice of PatchOperation structures and create a new patch that performs a replace operation on the deployment’s spec.replicas, setting the value to 3. We then append the patch to the slice of patches and marshal the slice into bytes.

Using the AdmissionResponse, we assemble a response, setting the patch type to JSONPatch and assigning the marshaled patchBytes to the Patch field. By setting the Allowed field to true, we indicate the request is permitted.

The AdmissionResponse is wrapped within an AdmissionReview response object, which includes additional metadata for the API server; the GroupVersionKind and UID are set to match the original request.

The AdmissionReview response is marshaled into a byte slice using json.Marshal and sent back in the HTTP response using w.Write().
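
For reference, the body we write back to the API server looks roughly like this. Note that Patch is a byte slice, so encoding/json encodes it as a base64 string (shown here as a placeholder); it decodes to the JSON Patch document we built above.

{
  "kind": "AdmissionReview",
  "apiVersion": "admission.k8s.io/v1",
  "response": {
    "uid": "<uid copied from the request>",
    "allowed": true,
    "patchType": "JSONPatch",
    "patch": "<base64-encoded JSON Patch>"
  }
}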

Creating an HTTP Server

With an HTTP handler created, let’s tie everything together in the main function. Once again, open up main.go:

...

func main() {
    flag.IntVar(&port, "port", 9093, "Admisson controller port")
    flag.StringVar(&tlsKey, "tls-key", "/etc/webhook/certs/tls.key", "Private key for TLS")
    flag.StringVar(&tlsCert, "tls-crt", "/etc/webhook/certs/tls.crt", "TLS certificate")
    flag.Parse()
    slog.Info("loading certs..")
    certs, err := tls.LoadX509KeyPair(tlsCert, tlsKey)
    if err != nil {
        slog.Error("unable to load certs", "error", err)
        log.Fatal(err)
    }

    http.HandleFunc("/mutate", mutate)

    slog.Info("successfully loaded certs. Starting server...", "port", port)
    server := http.Server{
        Addr: fmt.Sprintf(":%d", port),
        TLSConfig: &tls.Config{
            Certificates: []tls.Certificate{certs},
        },
    }

    if err := server.ListenAndServeTLS("", ""); err != nil {
        log.Panic(err)
    }

}

We begin by defining command-line flags for the port, TLS key, and certificate. After parsing their values, we attempt to load the TLS key pair, exiting if the certificates cannot be loaded. With that in place, we register the mutate function to handle requests sent to the /mutate endpoint. Finally, we create an HTTP server configured for HTTPS on the specified port.
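
If you want to run the controller locally for a quick sanity check (assuming you already have a TLS key and certificate on disk), you can point the flags we just defined at them:

go run main.go --port=9093 --tls-crt=/path/to/tls.crt --tls-key=/path/to/tls.key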

The final code should look like this:

package main

import (
    "crypto/tls"
    "encoding/json"
    "errors"
    "flag"
    "fmt"
    "io"
    "log"
    "net/http"

    "golang.org/x/exp/slog"
    admissionv1 "k8s.io/api/admission/v1"
    appsv1 "k8s.io/api/apps/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/runtime"
    "k8s.io/apimachinery/pkg/runtime/serializer"
)

var (
    port    int
    tlsKey  string
    tlsCert string
)

type PatchOperation struct {
    Op    string      `json:"op"`
    Path  string      `json:"path"`
    Value interface{} `json:"value,omitempty"`
}

// http error handling func
func httpError(w http.ResponseWriter, err error) {
    slog.Error("unable to complete request", "error", err.Error())
    w.WriteHeader(http.StatusBadRequest)
    w.Write([]byte(err.Error()))
}

// Parse Admission Review from requests
func parseAdmissionReview(req *http.Request, deserializer runtime.Decoder) (*admissionv1.AdmissionReview, error) {

    reqData, err := io.ReadAll(req.Body)
    if err != nil {
        slog.Error("error reading request body", err)
        return nil, err
    }

    admissionReviewRequest := &admissionv1.AdmissionReview{}

    _, _, err = deserializer.Decode(reqData, nil, admissionReviewRequest)
    if err != nil {
        slog.Error("unable to desdeserialize request", err)
        return nil, err
    }
    return admissionReviewRequest, nil
}

// mutation handler 
func mutate(w http.ResponseWriter, r *http.Request) {
    slog.Info("recieved new mutate request")

    scheme := runtime.NewScheme()
    codecFactory := serializer.NewCodecFactory(scheme)
    deserializer := codecFactory.UniversalDeserializer()

    admissionReviewRequest, err := parseAdmissionReview(r, deserializer)
    if err != nil {
        httpError(w, err)
        return
    }

    deploymentGVR := metav1.GroupVersionResource{
        Group:    "apps",
        Version:  "v1",
        Resource: "deployments",
    }

    if admissionReviewRequest.Request.Resource != deploymentGVR {
        err := errors.New("admission request is not of kind: Deployment")
        httpError(w, err)
        return
    }

    deployment := appsv1.Deployment{}

    _, _, err = deserializer.Decode(admissionReviewRequest.Request.Object.Raw, nil, &deployment)
    if err != nil {
        err := errors.New("unable to unmarshall request to deployment")
        httpError(w, err)
        return
    }
    var patches []PatchOperation

    patch := PatchOperation{
        Op:    "replace",
        Path:  "/spec/replicas",
        Value: 3,
    }

    patches = append(patches, patch)

    patchBytes, err := json.Marshal(patches)
    if err != nil {
        err := errors.New("unable to marshal patch into bytes")
        httpError(w, err)
        return
    }
    admissionResponse := &admissionv1.AdmissionResponse{}
    patchType := admissionv1.PatchTypeJSONPatch
    admissionResponse.Allowed = true
    admissionResponse.PatchType = &patchType
    admissionResponse.Patch = patchBytes

    var admissionReviewResponse admissionv1.AdmissionReview
    admissionReviewResponse.Response = admissionResponse

    admissionReviewResponse.SetGroupVersionKind(admissionReviewRequest.GroupVersionKind())
    admissionReviewResponse.Response.UID = admissionReviewRequest.Request.UID

    responseBytes, err := json.Marshal(admissionReviewResponse)
    if err != nil {
        err := errors.New("unable to marshal patch response  into bytes")
        httpError(w, err)
        return
    }
    slog.Info("mutation complete", "deployment mutated", deployment.ObjectMeta.Name)
    w.Write(responseBytes)
}

func main() {
    flag.IntVar(&port, "port", 9093, "Admisson controller port")
    flag.StringVar(&tlsKey, "tls-key", "/etc/webhook/certs/tls.key", "Private key for TLS")
    flag.StringVar(&tlsCert, "tls-crt", "/etc/webhook/certs/tls.crt", "TLS certificate")
    flag.Parse()
    slog.Info("loading certs..")
    certs, err := tls.LoadX509KeyPair(tlsCert, tlsKey)
    if err != nil {
        slog.Error("unable to load certs", "error", err)
        log.Fatal(err)
    }

    http.HandleFunc("/mutate", mutate)

    slog.Info("successfully loaded certs. Starting server...", "port", port)
    server := http.Server{
        Addr: fmt.Sprintf(":%d", port),
        TLSConfig: &tls.Config{
            Certificates: []tls.Certificate{certs},
        },
    }

    if err := server.ListenAndServeTLS("", ""); err != nil {
        log.Panic(err)
    }

}
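
Before containerizing the controller, fetch and record the module’s dependencies; this populates go.mod and go.sum so the go build step inside the container can resolve the k8s.io packages we imported:

go mod tidy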

Containerizing the Controller

With our controller code complete, the next step is to create a container image we can deploy to our cluster. Within the current directory, create a Dockerfile and add the following directives using your editor of choice:

FROM golang:latest as builder

WORKDIR /app

COPY . . 

RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o server main.go

FROM alpine 

RUN apk add --no-cache ca-certificates

COPY --from=builder /app/server /server

EXPOSE 9093

ENTRYPOINT ["/server"] 

Next, we need to build and push the image to a container registry. In this demo, we will be using ttl.sh, an ephemeral container registry that doesn’t require authentication, which makes it easy to use in demos such as this one. In production, you’d probably want to use an internal registry or something like Docker Hub to store your images.

Build and Push the Image:

export IMAGE_NAME=mutating-replicant-v1
docker build --push -t ttl.sh/${IMAGE_NAME}:1h .

Notice we used 1h as the image tag; this tells ttl.sh that we want to store our image for an hour.

Deploying the Controller

With a container image published, we can finally turn our attention to getting our controller deployed. Create a new directory to house deployment manifests:

mkdir ./deployments && cd deployments

TLS with cert-manager

To establish a secure connection with the Kubernetes API server, we need to generate TLS certificates. Thankfully, cert-manager allows us to generate self-signed certificates in an automated fashion. If you are unfamiliar with cert-manager, it is an open-source tool that automates the management and issuance of TLS certificates for Kubernetes clusters.

Install cert-manager:

In your terminal, run the following command to install cert-manager:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.13.3/cert-manager.yaml
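
Before moving on, you can confirm the cert-manager components are up; the manifest above installs them into the cert-manager namespace:

kubectl get pods -n cert-manager

The cert-manager, cainjector, and webhook pods should all be in the Running state before you create any certificates.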

Creating a Self-Signed Certificate:

Within the deployments directory, create a file called certs.yaml and add the following code:

ClusterIssuer Definition for Self-Signed Certificate:

---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}

This definition establishes a ClusterIssuer named selfsigned-issuer that signifies it's a self-signed certificate, allowing the cluster to generate certificates on its own.

Certificate Definition for Mutating Replicant:

---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mutating-replicant
  namespace: default 
spec:
  isCA: true
  commonName: mutating-replicant
  secretName: root-secret
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
    group: cert-manager.io
  dnsNames:
    - mutating-replicant.default.svc

This definition sets up a Certificate named mutating-replicant in the default namespace. It specifies that it’s a Certificate Authority (isCA: true); more importantly, using the dnsNames field, we specify the DNS name of our controller’s service (mutating-replicant.default.svc).

Issuer Definition for Mutant Issuer:

---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: mutant-issuer
  namespace: default 
spec:
  ca:
    secretName: root-secret

This definition creates an Issuer named mutant-issuer in the default namespace, utilizing a root secret (secretName: root-secret) for issuing certificates. This aligns with the secret name in the Certificate definition.

The final manifest should look like this:

---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mutating-replicant
  namespace: default 
spec:
  isCA: true
  commonName: mutating-replicant
  secretName: root-secret
  privateKey:
    algorithm: ECDSA
    size: 256
  issuerRef:
    name: selfsigned-issuer
    kind: ClusterIssuer
    group: cert-manager.io
  dnsNames:
    - mutating-replicant.default.svc
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: mutant-issuer
  namespace: default 
spec:
  ca:
    secretName: root-secret

Apply the manifest using Kubectl:

kubectl apply -f certs.yaml 

Verify the certificate was created:

kubectl get certificates 

The output should look like the below image:

Deploying the Controller - TLS with cert-manager

Controller Deployment

Create a new file called controller.yaml and add the following code to create a deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: mutating-replicant
  name: mutating-replicant 
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mutating-replicant
  template:
    metadata:
      labels:
        app: mutating-replicant
    spec:
      containers:
        - image: ttl.sh/mutating-replicant-v1:1h 
          name: mutating-replicant 
          imagePullPolicy: Always
          args:
            - --port=9093
            - --tls-crt=/etc/webhook/certs/tls.crt
            - --tls-key=/etc/webhook/certs/tls.key
          ports:
            - containerPort: 9093 
              name: webhook
              protocol: TCP
          volumeMounts:
            - mountPath: /etc/webhook/certs
              name: certs
      volumes:
        - name: certs
          secret:
            secretName: root-secret
---
apiVersion: v1
kind: Service
metadata:
  name: mutating-replicant 
  namespace: default
spec:
  selector:
    app: mutating-replicant 
  type: ClusterIP
  ports:
  - name: mutating-replicant 
    protocol: TCP
    port: 443
    targetPort: 9093

The manifest above creates a deployment and a service for our controller. Notice we also mount the certificates we created earlier by referencing the root-secret secret.

Apply the manifest using Kubectl:

kubectl apply -f controller.yaml 

Verify the deployment using the following:

kubectl get deployment/mutating-replicant

The output should look like the below image:

Deploying the Controller - Controller Deployment
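
At this point, you can also check the controller’s logs to confirm it loaded the TLS certificates and started the server:

kubectl logs deploy/mutating-replicant

You should see the "successfully loaded certs. Starting server..." message from our main function.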

Deploying the mutating-replicant controller to Kubernetes

Now that we've deployed our mutating-replicant controller and secured its TLS credentials, it's time to integrate it with Kubernetes as a mutating webhook. We do this using the MutatingWebhookConfiguration resource.

Create a file called mutating-webhook.yaml and add the following configuration:

kind: MutatingWebhookConfiguration
apiVersion: admissionregistration.k8s.io/v1
metadata:
  name: mutate-replicas
  annotations:
    cert-manager.io/inject-ca-from: default/mutating-replicant
webhooks:
  - name: mutating-replicant.default.svc
    clientConfig:
      service:
        namespace: default
        name: mutating-replicant
        path: /mutate
    rules:
      - apiGroups:
          - "apps"
        apiVersions:
          - "v1"
        resources:
          - "deployments"
        operations:
          - "CREATE"
        scope: Namespaced
    sideEffects: None
    admissionReviewVersions:
      - "v1"

The MutatingWebhookConfiguration tells Kubernetes about our webhook, including:

  • What resources it should watch for: In this case, we're targeting deployments created under the apps API group and v1 API version.
  • When it should intervene: We only activate it for CREATE operations, meaning it intercepts deployment creations before they're persisted.
  • How to reach it: We point Kubernetes to the webhook service by referencing its namespace, name, and path.

In addition, the cert-manager.io/inject-ca-from annotation tells cert-manager to take the CA certificate from the mutating-replicant Certificate (in the default namespace) and inject it into this webhook configuration’s caBundle, allowing the API server to trust the webhook’s TLS certificate.

Apply the Mutating Webhook Configuration using Kubectl:

kubectl apply -f mutating-webhook.yaml 

Verify the webhook was created:

kubectl get mutatingwebhookconfigurations.admissionregistration.k8s.io

You should see the following results:

Deploying mutating-replicant controller to Kubernetes
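
Optionally, you can confirm that cert-manager injected the CA bundle into the webhook configuration:

kubectl get mutatingwebhookconfiguration mutate-replicas -o jsonpath='{.webhooks[0].clientConfig.caBundle}'

A non-empty base64 string indicates the injection worked.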

Testing the Controller

With our controller registered and deployed, we can finally test it! To do this, we will create a deployment. Recall that our controller watches for deployments specifically when they are created.

Create a file called sample.yaml and add the following code:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - containerPort: 80 

This is a standard deployment, but notice we are deploying just one replica of the whoami service. For our controller to be considered working, we should see 3 replicas when this manifest is applied.

Apply the deployment using Kubectl:

kubectl apply -f sample.yaml  

Verify the deployment:

kubectl get deployment/whoami 

Output:

Testing the Controller

Success! The deployment was modified. We can also verify the controller received the request by looking at its logs:

kubectl logs deploy/mutating-replicant

Output:

Testing the Controller Output
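
You can also confirm the replica count directly:

kubectl get deployment whoami -o jsonpath='{.spec.replicas}'

If the mutation worked, this prints 3 even though the manifest requested a single replica.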

Clean up

Upon completing this tutorial, you might want to clean up some of the resources we just created. To do so, follow these steps:

Uninstalling the controller:

Removing the controller from your cluster is fairly straightforward; simply remove the mutating webhook configuration and the controller deployment using kubectl:

kubectl delete mutatingwebhookconfigurations.admissionregistration.k8s.io/mutate-replicas

Delete the controller deployment:

kubectl delete deploy/mutating-replicant

To avoid unwanted charges, you might want to delete your cluster. Run the following command to do so:

civo k3s delete admissions

This command will delete the admissions cluster from your Civo account.

Summary

In this tutorial, we discussed admission controllers. We started off with what they are and why they are important, covered two big use cases for them, and concluded by creating a mutating admission controller of our own.

If you’re looking to learn more about extending the Kubernetes API or some of the technologies we used in this post, here are some ideas:

All the code used in this guide is available here.