gRPC (gRPC Remote Procedure Calls) is a high-performance RPC framework created by Google. It utilizes HTTP/2 for transport and Protocol Buffers for serialization, making it an efficient and versatile tool for inter-service communication. gRPC is widely adopted for internal microservices architectures, enabling seamless data exchange and service orchestration within a private network.

While gRPC is most often used for internal communication, there are scenarios where exposing gRPC services externally becomes necessary. For instance, you might want to let external clients or partners interact with your application's backend functionality. Alternatively, you might need to communicate with microservices running in a different cloud provider or environment.

In this tutorial, we are going to demonstrate how to expose your gRPC services using the NGINX Ingress controller.

Prerequisites

This article assumes some working knowledge of Kubernetes and Ingress controllers. In addition, you will need the following installed:

  • The Civo CLI, to create the cluster and retrieve its DNS name
  • kubectl, to apply manifests to the cluster
  • grpcurl, to call the gRPC service at the end of the tutorial

Preparing the Kubernetes cluster

We’ll begin by creating a Kubernetes cluster. Feel free to skip this step if you already have a cluster created.

For simplicity, we will be doing it from the CLI:

civo k3s create grpc-nginx --create-firewall --nodes 2 -m --save --switch --wait -r=Traefik

This launches a two-node cluster named grpc-nginx. We also remove the default Traefik Ingress controller using the -r flag, since we will be using NGINX Ingress in this demonstration. The --switch flag points your kube-context at the cluster we just launched.
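Once the command completes, you can confirm that both nodes are up and that your context points at the new cluster:

kubectl get nodes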

Installing NGINX Ingress

With a cluster created, the next step is to install the NGINX Ingress controller. Open up your terminal and run the command below:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.2/deploy/static/provider/cloud/deploy.yaml
⚠️ This will create a LoadBalancer resource within your cluster.
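Before moving on, you can check that the controller pods are running and that the LoadBalancer service has been provisioned. The names below are the defaults created by the manifest above:

kubectl get pods -n ingress-nginx
kubectl get svc ingress-nginx-controller -n ingress-nginx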

Deploying a gRPC Application

For this demonstration, we will be using a sample gRPC application called "randrpc" that generates random numbers. This application is intentionally simple for illustration purposes, but it still adheres to the gRPC protocol and exposes a service for generating random numbers.
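For reference, the service definition for such an application might look like the sketch below. This is an illustrative Protocol Buffers file inferred from the grpcurl call used later in this tutorial (a RandService with a Rand method taking min and max); the actual randrpc proto may differ.

syntax = "proto3";

package randrpc;

// RandService generates random numbers within a requested range.
service RandService {
  // Rand returns a single random number between min and max.
  rpc Rand(RandRequest) returns (RandResponse) {}
}

message RandRequest {
  int64 min = 1;
  int64 max = 2;
}

message RandResponse {
  int64 rand = 1;
}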

Deploy the randrpc Server

Within a directory of your choosing, create a file called deployment.yaml and add the following code using your favorite text editor:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: randrpc-server-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: randrpc-server
  template:
    metadata:
      labels:
        app: randrpc-server
    spec:
      containers:
      - name: randrpc-server
        image: ghcr.io/s1ntaxe770r/randrpc-server:v1.5
        ports:
        - containerPort: 7070
---
apiVersion: v1
kind: Service
metadata:
  name: randrpc-server-svc
spec:
  type: ClusterIP
  selector:
    app: randrpc-server
  ports:
  - port: 80
    targetPort: 7070

In the configuration above, we created a Kubernetes Deployment named randrpc-server-deployment to manage the randrpc application. This Deployment runs two replicas of the randrpc server container, which listens on port 7070. To expose the randrpc server internally within the Kubernetes cluster, we also created a ClusterIP Service that maps port 80 to the container's port 7070.

Apply the manifest using kubectl:

kubectl apply -f deployment.yaml
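You can confirm that both replicas are up before moving on:

kubectl get pods -l app=randrpc-server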

TLS Termination with CertManager

When exposing services over the internet, it's crucial to ensure secure communication by terminating TLS (Transport Layer Security) at the ingress level. This prevents eavesdropping and data tampering, safeguarding sensitive information exchanged between clients and the gRPC service.

CertManager is an open-source tool that automates the management and issuance of TLS certificates for Kubernetes clusters. It simplifies the process of obtaining, renewing, and deploying TLS certificates, ensuring that your gRPC services remain secure and accessible.

Installing CertManager

To install CertManager, execute the following command to apply the CertManager manifest:

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v1.7.1/cert-manager.yaml
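Give the installation a moment to complete, then confirm that the CertManager pods are running:

kubectl get pods -n cert-manager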

Creating a ClusterIssuer

A ClusterIssuer is a CertManager resource that specifies the ACME (Automated Certificate Management Environment) server and provider configuration for issuing TLS certificates. Create a new file named issuer.yaml and add the following configuration:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: <your email address>
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: nginx

This manifest defines the ClusterIssuer with the following key components:

  • server: The URL of the ACME server used to obtain TLS certificates.
  • email: The email address associated with the ACME account.
  • privateKeySecretRef: A reference to a secret named letsencrypt-prod that stores the ACME account private key.
  • solvers: An array of challenge solvers, specifying the HTTP-01 solver that uses NGINX ingress to validate certificate ownership.

Apply the manifest to the cluster using the following:

kubectl apply -f issuer.yaml
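You can verify that the issuer has registered with Let's Encrypt by checking that its READY column reports True:

kubectl get clusterissuer letsencrypt-prod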

Exposing The Application

With the ClusterIssuer in place, we can now create the Ingress resource to expose our gRPC application to the outside world. Before proceeding, we need to retrieve the DNS name for the load balancer. Use the following Civo CLI command to retrieve the DNS name:

civo kubernetes show <cluster name> -o custom -f "DNSEntry"
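If you prefer, you can capture the value in a shell variable for the next steps. Replace <cluster name> with the name of your cluster (grpc-nginx if you followed along earlier):

DNS_NAME=$(civo kubernetes show <cluster name> -o custom -f "DNSEntry")
echo $DNS_NAME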

Once you have the DNS name, create a new file named ingress.yaml and add the following manifest:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  name: randrpc-ingress
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: "<civo-dns-name>"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: randrpc-server-svc
            port:
              number: 80
  tls:
    - secretName: tls-secret 
      hosts:
        - "<civo-dns-name>"

This manifest defines the Ingress resource named randrpc-ingress with the following components:

  • annotations: Specify additional configuration for the Ingress controller.
    • nginx.ingress.kubernetes.io/ssl-redirect: Enables automatic redirection of HTTP traffic to HTTPS.
    • nginx.ingress.kubernetes.io/backend-protocol: Indicates that the backend service uses the GRPC protocol.
    • cert-manager.io/cluster-issuer: Specifies the ClusterIssuer (letsencrypt-prod) to use for obtaining TLS certificates.
  • rules: Defines the Ingress rules that map incoming requests to backend services.
    • host: Specifies the DNS name for the load balancer. Replace <civo-dns-name> with the actual DNS name you retrieved earlier.
    • http: Configures the HTTP routing for the Ingress rule.
      • paths: Defines a path pattern that matches incoming requests. The / path matches any request to the root of the domain.
      • backend: Specifies the backend service to route matching requests to. The randrpc-server-svc service exposes the gRPC application.
  • tls: Configures TLS termination for the Ingress rule.
    • secretName: References the secret (tls-secret) that contains the TLS certificates.
    • hosts: Specifies the hostname for which TLS termination is enabled.

Once you've updated the manifest with the actual DNS name, apply the Ingress resource using the following command:

kubectl apply -f ingress.yaml
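It can take a minute or two for CertManager to complete the HTTP-01 challenge and issue the certificate. You can watch the progress with the commands below; the Certificate resource created by CertManager is typically named after the secretName in the Ingress, tls-secret in this case:

kubectl get ingress randrpc-ingress
kubectl get certificate tls-secret
kubectl describe certificate tls-secret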

Calling the gRPC Service

Now that your gRPC application is exposed through the Ingress resource, you can use grpcurl to invoke its methods. Replace <DNS_NAME> with the actual DNS name you retrieved earlier:

grpcurl -d '{"min": 10, "max": 100}' <DNS_NAME>:443 randrpc.RandService.Rand

You should see a response similar to the following:

{
  "rand": 50
}
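Note that calling the service without a .proto file, as we did above, assumes the randrpc server has gRPC server reflection enabled. If it does, you can also list the services it exposes:

grpcurl <DNS_NAME>:443 list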

Summary

In this tutorial, we explored how to expose a gRPC service and terminate TLS at the Ingress level using the NGINX Ingress and CertManager. If you’re looking to explore the NGINX Ingress further, here are a couple of ideas:

  1. Explore A/B Testing with NGINX Ingress Controller: Learn how to perform A/B testing on your gRPC applications using NGINX Ingress Controller by referring to this link: https://www.civo.com/learn/a-b-testing-using-the-nginx-kubernetes-ingress-controller.
  2. Implement Rate Limiting with NGINX Ingress: Discover how to implement rate limiting on your gRPC applications using NGINX Ingress to manage traffic and protect against overload by checking out this guide: https://www.civo.com/learn/rate-limiting-applications-with-nginx-ingress.