How to deploy DeepSeek-R1 on Civo GPUs

Deploy DeepSeek-R1 on a Civo GPU-powered Kubernetes cluster for efficient AI applications, and learn how to automate setup with Terraform or GitHub Actions.

6 minutes reading time

Written by

Josh Mesout

Chief Innovation Officer @ Civo

DeepSeek, a Chinese AI startup, has recently launched its latest model, DeepSeek-R1, which rivals leading AI models like OpenAI's o1 in performance but at a fraction of the cost. This open-source model has quickly gained attention, topping Apple's App Store and causing significant ripples in the tech industry.

Deploying DeepSeek-R1 on a Civo GPU-powered Kubernetes cluster allows you to harness its advanced capabilities efficiently. This guide walks you through the process, enabling you to put DeepSeek-R1's performance to work in your AI applications.

Simplifying LLM deployment with Civo’s LLM boilerplate

Setting up a GPU-enabled Kubernetes cluster to run LLMs can be complex and time-consuming, especially for those who require seamless integration, data security, and regulatory compliance. To address this challenge, we've created a step-by-step guide to deploying a Kubernetes GPU cluster on Civo using the Civo LLM Boilerplate.


What you'll learn

In this tutorial, you'll learn how to automate the setup of DeepSeek on a Kubernetes GPU cluster on Civo Cloud using Terraform or GitHub Actions, and deploy essential tools such as the Ollama inference server and the Ollama Web UI.

Project goal

The goal of this project is to enable customers to easily use open-source LLMs, providing 1:1 compatibility with DeepSeek, by:

  • Providing access to the latest open-source LLMs made available through Ollama.
  • Providing a user interface so non-technical users can access models.
  • Offering a path to produce insights with LLMs while maintaining sovereignty over the data.
  • Enabling LLMs in regulated use cases where ChatGPT can't be used.

Prerequisites

Before beginning, ensure you have the following:

  • A Civo account and API key (available from the Civo Dashboard).
  • Terraform installed locally.
  • Docker installed, if you plan to build the example application.
  • kubectl, for inspecting the cluster after deployment (optional).

Deploying DeepSeek on Civo using Terraform

Project setup

  1. Obtain your Civo API key from the Civo Dashboard.
  2. Create a file named terraform.tfvars in the project's root directory.
  3. Insert your Civo API key into this file as follows:
civo_token = "YOUR_API_KEY"
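The three setup steps above can be scripted; a minimal sketch (the .gitignore step is a suggested precaution to keep the key out of version control, not part of the original setup):

```shell
# Write the Civo API key into terraform.tfvars (replace the placeholder)
cat > terraform.tfvars <<'EOF'
civo_token = "YOUR_API_KEY"
EOF

# Suggested: make sure the key is never committed
echo "terraform.tfvars" >> .gitignore
```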

Project configuration

Project configurations are managed within the tf/variables.tf file. This file contains definitions and default values for the Terraform variables used in the project.

| Variable | Description | Type | Default value |
| --- | --- | --- | --- |
| cluster_name | The name of the cluster. | string | "llm_boilerplate" |
| cluster_node_size | The GPU node instance to use for the cluster. | string | "g1.l40s.kube.x1" |
| cluster_node_count | The number of nodes to provision in the cluster. | number | 1 |
| civo_token | The Civo API token, set in terraform.tfvars. | string | N/A |
| region | The Civo region to deploy the cluster in. | string | "LON1" |

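Rather than editing tf/variables.tf directly, any of these defaults can be overridden in terraform.tfvars; a sketch with illustrative values (the cluster name and node count here are examples, not recommendations):

```hcl
# terraform.tfvars — illustrative overrides of the defaults above
civo_token         = "YOUR_API_KEY"
cluster_name       = "my-deepseek-cluster"
cluster_node_count = 2
region             = "LON1"
```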
Deployment configuration

Deployment of components is controlled through boolean variables within the tf/variables.tf file. Set these variables to true to enable the deployment of the corresponding component.

| Variable | Description | Type | Default value |
| --- | --- | --- | --- |
| deploy_ollama | Deploy the Ollama inference server. | bool | true |
| deploy_ollama_ui | Deploy the Ollama Web UI. | bool | true |
| deploy_app | Deploy the example application. | bool | false |
| deploy_nv_device_plugin_ds | Deploy the NVIDIA GPU device plugin for enabling GPU support. | bool | true |
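These toggles can likewise be set in terraform.tfvars instead of editing tf/variables.tf; for example (an illustrative sketch mirroring the defaults above, with the example application left disabled):

```hcl
# terraform.tfvars — illustrative component toggles
deploy_ollama              = true
deploy_ollama_ui           = true
deploy_app                 = false
deploy_nv_device_plugin_ds = true
```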

Deploy LLM boilerplate

To deploy, simply run the following commands:

Step 1: Initialize Terraform

terraform init

This command initializes Terraform, installs the required providers, and prepares the environment for deployment.

Step 2: Plan deployment

terraform plan

This command displays the deployment plan, showing what resources will be created or modified.

Step 3: Apply deployment

terraform apply

This command applies the deployment plan. Terraform will prompt for confirmation before proceeding with the creation of resources.
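For repeatable runs, the three steps above can be collected into a small wrapper script; a sketch that assumes terraform is on your PATH (deploy.sh and tfplan are hypothetical names):

```shell
# Write a small wrapper script for the init/plan/apply workflow (sketch)
cat > deploy.sh <<'EOF'
#!/bin/sh
set -e
terraform init
terraform plan -out=tfplan   # save the reviewed plan to a file
terraform apply tfplan       # applying a saved plan skips the confirmation prompt
EOF
chmod +x deploy.sh
```

Applying a saved plan file guarantees that exactly the reviewed changes are made, with no confirmation prompt.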

Building and deploying the example application

Step 1: Build the custom application container

Enter the application folder:

cd app

Build the Docker image:

docker build -t {repo}/{image} .

Push the Docker image to a registry:

docker push {repo}/{image}

Navigate to the Helm chart:

cd ../infra/helm/app

Modify the Helm Values to point to your Docker registry, e.g.

replicaCount: 1
image:
  repository: {repo}/{image}
  pullPolicy: Always
  tag: "latest"
service:
  type: ClusterIP
  port: 80

Step 2: Initialize Terraform

Navigate to the Terraform directory:

cd ../tf

Then:

terraform init

This command initializes Terraform, installs the required providers, and prepares the environment for deployment.


Step 3: Plan deployment

terraform plan

This command displays the deployment plan, showing what resources will be created or modified.


Step 4: Apply deployment

terraform apply

This command applies the deployment plan. Terraform will prompt for confirmation before proceeding with the creation of resources.


Deployment takes around 10 minutes: it stands up the Civo Kubernetes cluster, assigns a GPU node, and deploys the Helm charts and GPU configuration before downloading the models and running them on your NVIDIA GPU.


Troubleshooting

If you experience any issues during the deployment (for example, if you experience a timeout), you can reattempt the deployment by rerunning:

terraform apply

Deploy DeepSeek through GitHub Actions

For those who prefer a fully automated cloud-based approach, GitHub Actions offers a powerful solution. As part of GitHub's CI/CD platform, Actions allows you to automate your software workflows, including deployments. This method simplifies the deployment process, making it repeatable and far less error-prone, which is particularly beneficial for managing and updating large-scale machine learning models like DeepSeek without manual intervention.

First, navigate to the repository: https://github.com/civo-learn/civo-llm-boilerplate, and then use the template to create a new repository.


After doing so, go to the settings of your newly created repository and make sure GitHub Actions are allowed to run.


Create a new repository secret named CIVO_TOKEN through the repository settings and set it to your Civo API token.

Now you can head to the Actions tab and run the deployment workflow.
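The boilerplate repository ships its own workflow definition; purely as an illustration, a manually triggered job takes roughly this shape (the file name, action versions, working directory, and steps here are assumptions, not the repository's actual workflow):

```yaml
# .github/workflows/deploy.yml — illustrative sketch only
name: Deploy LLM boilerplate
on: workflow_dispatch
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init && terraform apply -auto-approve
        working-directory: tf   # assumed location of the Terraform code
        env:
          # The repository secret created above, passed through to the
          # civo_token Terraform variable via the TF_VAR_ convention
          TF_VAR_civo_token: ${{ secrets.CIVO_TOKEN }}
```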


Accessing and managing your deployment

Once you have successfully deployed DeepSeek using either Terraform or GitHub Actions, the next step is to verify and utilize the deployment:

Checking the Load Balancers

After deployment, you can check the load balancers attached to your Kubernetes cluster to locate the Open Web UI endpoint. Navigate to the load balancer section in your Civo Dashboard and find the DNS name labeled “ollama-ui-open-webui.”


Completing the initial Open Web UI setup, which involves registering an initial administrator account and configuring the deployment options, grants you access to a “ChatGPT-like” interface where you can interact with the deployed LLM directly.


From this window, you can further configure your environment, such as setting your security preferences and choosing what newly registered users can access. You can also make other users administrators in addition to the first registered account.

Deploying additional models

If you wish to expand your LLM capabilities, simply navigate to the settings menu found in the top right-hand corner of the Open Web UI screen. Select “models” from the left-hand menu to add or manage additional models. This feature allows for versatile deployment configurations and model management, ensuring that your setup can adapt to various requirements and tasks.

If you would like to change the default models deployed, modify the default_models variable in the variables.tf file in the infra/tf folder. This variable is a list of all the Ollama models you wish to deploy.

variable "default_models" {
  description = "List of default models to use in Ollama Web UI."
  type        = list(string)
  default     = ["llama3.2", "deepseek-r1"] # Include additional models here if required
}

Summary

Congratulations! You have successfully deployed a Kubernetes GPU cluster on Civo Cloud using Terraform and set up various components for running LLMs, including the Ollama inference server and web interface.

With this boilerplate, you now have a scalable and flexible infrastructure for leveraging Open Source LLMs, allowing you to customize deployments, integrate additional tools, or expand your cluster as needed.

If you want to learn more about LLMs, check out some of these resources:

Josh Mesout

Chief Innovation Officer @ Civo

Josh Mesout is Chief Innovation Officer at Civo, where he focuses on exploring emerging technologies and driving innovation across the company’s cloud platform. His work includes identifying opportunities in areas such as artificial intelligence, machine learning, and cloud-native infrastructure.

Before joining Civo, Josh led enterprise machine learning platform initiatives at AstraZeneca, supporting hundreds of machine learning projects across multiple research and business teams. His background spans data science platforms, cloud engineering, and technology innovation programs.
