Argo Workflows is a Kubernetes-native workflow execution engine that allows you to run workflows in a DAG (directed acyclic graph) or step-based manner. In simpler terms, Argo Workflows offers a familiar approach similar to that found in continuous integration platforms like GitHub Actions, but with powerful additional features specifically designed for Kubernetes environments.
This tutorial will explain what Argo Workflows is, how it works, its use cases, and how to get started.
An introduction to Argo Workflows
Before diving into the tutorial, let’s take a look at the unique advantages Argo Workflows offers. As a Kubernetes-native solution, it can leverage your existing compute resources to perform actions that you might otherwise outsource to an external platform, such as your CI/CD workflows. That said, CI/CD is far from the only fit. To put this in perspective, here are a few use cases for Argo Workflows:
| Use Case | Description |
|---|---|
| Machine Learning Pipelines | You can orchestrate complex ML workflows where data preprocessing, model training, evaluation, and deployments must occur in sequence. Unlike external ML platforms, your data never leaves your cluster, and you can leverage GPU nodes or high-memory instances as needed without worrying about egress costs. |
| Data Processing Jobs | ETL pipelines that need to process large datasets can benefit from Argo's ability to spin up multiple parallel workers and coordinate their execution. This is particularly useful when you're already running data processing workloads on Kubernetes and want to avoid managing external orchestration tools, such as Apache Airflow. |
| CI/CD for Kubernetes Applications | While GitHub Actions works well for building applications, Argo Workflows excels at deployment pipelines that require deep integration with Kubernetes resources, such as running database migrations, performing canary deployments, or coordinating multi-service rollouts. |
Argo Workflow concepts
Before diving into a demo, there are a few core concepts you will need to understand:

- Workflow: the central resource in Argo. It both defines the workflow to be executed and stores the workflow's state as it runs, which is why you should treat Workflows as live objects rather than static definitions.
- Template: defines an individual task or step that makes up your workflow. Linking back to the GitHub Actions example, templates are the equivalent of the individual steps in an action.
WorkflowTemplate vs Template
| Term | Description |
|---|---|
| Template | A single task in a workflow, such as running a container, executing a script, or defining step connections. |
| WorkflowTemplate | A reusable, namespaced resource containing a full workflow definition, including all of its templates. It can be shared across teams and run multiple times with different parameters. |
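To make the distinction concrete, here is a minimal sketch of what a WorkflowTemplate might look like (the name greeting and its message parameter are purely illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate
metadata:
  name: greeting          # illustrative name
  namespace: argo
spec:
  entrypoint: main
  templates:
    - name: main
      inputs:
        parameters:
          - name: message
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["echo {{inputs.parameters.message}}"]
```

A Workflow can then reference it via spec.workflowTemplateRef and supply different parameter values on each run.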
Different types of Templates
Container: Allows you to define a container to run. This is the most straightforward type: you specify an image and commands. The image could be a base operating system, e.g., Ubuntu, or a custom image containing the required build tools:
```yaml
- name: hello-world
  container:
    image: alpine:latest
    command: [sh, -c]
    args: ["echo hello world"]
```
Script: Runs a script in a container image. Similar to the container type, but lets you write the script inline instead of using a separate file:
```yaml
- name: date
  script:
    image: python:alpine3.6
    command: [python]
    source: |
      import datetime
      now = datetime.datetime.now()
      print(now)
```
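One handy property of script templates: whatever the script prints to standard output is captured as the template's result output, so downstream steps or DAG tasks can consume it. For example, a DAG task (DAGs are covered below) could pass the printed date along like this; the task and template names here are illustrative:

```yaml
- name: print-date
  dependencies: [generate-date]
  template: echo
  arguments:
    parameters:
      - name: message
        value: "{{tasks.generate-date.outputs.result}}"
```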
Resource: Allows direct interaction with cluster resources. You can perform operations like CREATE, APPLY, or PATCH on Kubernetes resources:
```yaml
- name: create-deployment
  resource:
    action: create
    manifest: |
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-deployment
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
              - name: nginx
                image: nginx:1.21
                ports:
                  - containerPort: 80
```
DAG: Last but not least, DAG defines a directed acyclic graph of other templates. This lets you specify dependencies between tasks and run them in parallel where possible:
```yaml
- name: my-dag
  dag:
    tasks:
      - name: A
        template: echo
        arguments:
          parameters: [{name: message, value: A}]
      - name: B
        dependencies: [A]
        template: echo
        arguments:
          parameters: [{name: message, value: B}]
      - name: C
        dependencies: [A]
        template: echo
        arguments:
          parameters: [{name: message, value: C}]
```
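The DAG above references an echo template that isn't shown. A minimal sketch of what it could look like:

```yaml
- name: echo
  inputs:
    parameters:
      - name: message
  container:
    image: alpine:latest
    command: [echo, "{{inputs.parameters.message}}"]
```

With this in place, task A runs first, and B and C then run in parallel since they only depend on A.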
Installing Argo Workflows
With the high-level concepts covered, here is how to get started with Argo Workflows.
Prerequisites
This tutorial assumes some familiarity with Kubernetes. Additionally, you will need the following installed on your machine:

- kubectl, configured to talk to your cluster
- Helm
- The Civo CLI (only if you want to follow the optional cluster-creation step below)
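You can quickly confirm the tools are available before continuing:

```bash
kubectl version --client
helm version
civo version   # only needed for the optional cluster-creation step
```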
Creating a Kubernetes Cluster (Optional)
If you already have a Kubernetes cluster up and running, feel free to skip this step. To create a cluster using the Civo CLI, run the following command:
```bash
civo k3s create --create-firewall --nodes 1 -m --save --switch --wait argo
```
This launches a one-node Kubernetes cluster in your Civo account. The -m flag merges the kubeconfig for the new cluster with your existing kubeconfig, and --switch points your kube-context at the newly created cluster.
The output is similar to:
```
The cluster argo (dca42473-f079-44a2-8328-5fae315c005b) has been created in 2 min 43 sec
```
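Since --switch was passed, your kube-context should already point at the new cluster; you can verify with:

```bash
kubectl config current-context
kubectl get nodes
```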
Create a namespace for Argo
Begin by creating a namespace to house all the Argo Workflows resources:
```bash
kubectl create ns argo
```
The output is similar to:
```
namespace/argo created
```
Install Argo Workflows
While Argo Workflows provides a quick-start manifest for getting up and running quickly, it is not suitable for production. Fortunately, the recommended approach, installing the official Helm chart, is no more difficult than the quick start.
Add the Argo project helm repository:
```bash
helm repo add argo https://argoproj.github.io/argo-helm
```
Update your local repository:
```bash
helm repo update
```
Install Argo Workflows:
```bash
helm install workflows argo/argo-workflows --version=0.45.19 --namespace=argo
```
This will install Argo Workflows v3.6.10, which is the latest at the time of writing.
Verify that the installation completed:
```bash
kubectl get pods -n argo
```
The output is similar to:
```
NAME                                              READY   STATUS    RESTARTS   AGE
workflows-argo-workflows-workflow-controller      1/1     Running   0          97s
workflows-argo-workflows-server-946f459ff-rfj46   1/1     Running   0          97s
```
Configure access to the Workflows UI
Before you start writing workflows, you’ll likely want a way to view them once they are submitted. Argo Workflows ships with a neat user interface, but it requires some setup.
Argo Workflows provides a few methods for authentication. Server auth is great for testing locally; however, in production environments, client authentication, which takes advantage of Kubernetes bearer tokens, is recommended.
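Auth mode is a setting on the Argo server. If you do want server auth for a quick local test, it can typically be toggled through the Helm chart, for example with something like the following (server.authModes is an assumption here; check the chart's values.yaml for your chart version). The rest of this tutorial sticks with client auth:

```bash
# Assumed chart value; verify against the argo-workflows chart's values.yaml
helm upgrade --install workflows argo/argo-workflows \
  --version=0.45.19 \
  --namespace=argo \
  --set "server.authModes={server}"
```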
Begin by creating a role for the service account that will access the UI:
```bash
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argo-workflows-user
  namespace: argo
rules:
  - apiGroups: [""]
    resources: ["events", "pods", "pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["argoproj.io"]
    resources: ["workflows", "workflowtemplates", "cronworkflows", "workflowtaskresults"]
    verbs: ["create", "delete", "update", "patch", "get", "list", "watch"]
EOF
```
The Role above grants the service account, which you will create next, sufficient permissions to read, write, and watch workflows within the argo namespace.
Create a corresponding service account:
```bash
kubectl create sa argo-ui -n argo
```
Bind the service account to the role and the argo namespace:
```bash
kubectl create rolebinding argo-ui-binding --role=argo-workflows-user --serviceaccount=argo:argo-ui -n argo
```
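You can verify the binding took effect by asking Kubernetes what the service account is allowed to do:

```bash
kubectl auth can-i list workflows.argoproj.io --as=system:serviceaccount:argo:argo-ui -n argo
# Expected output: yes
```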
Finally, create a secret to hold the bearer token:
```bash
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: argo-ui-secret
  namespace: argo
  annotations:
    kubernetes.io/service-account.name: argo-ui
type: kubernetes.io/service-account-token
EOF
```
Retrieve the token:
```bash
kubectl get secret argo-ui-secret -n argo -o=jsonpath='{.data.token}' | base64 --decode
```
Store the token in a secure location before moving on to the next step.
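If you also plan to use the Argo API or CLI, it can be convenient to keep the token in an environment variable with the Bearer prefix the server expects; this is a small convenience sketch, not a required step:

```bash
ARGO_TOKEN="Bearer $(kubectl get secret argo-ui-secret -n argo -o=jsonpath='{.data.token}' | base64 --decode)"
echo "$ARGO_TOKEN"
```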
Expose the workflow UI:
```bash
kubectl -n argo port-forward svc/workflows-argo-workflows-server 2746:2746
```

(The service name is derived from the Helm release name, workflows; adjust it if you used a different release name.)
This will expose the UI on http://localhost:2746:

Paste in the token you copied earlier, and you should be greeted with a second screen like this:

If you run into an error such as “Forbidden: workflowtemplates.argoproj.io is forbidden: User "system:serviceaccount:argo:argo-ui" cannot list resource "workflowtemplates" in API group "argoproj.io" at the cluster scope”, ensure you have the argo namespace selected. You can do this by entering argo in the namespace field on the right-hand side of the interface:

This occurs because we explicitly granted the UI's service account permissions only within the argo namespace. If you need access to more namespaces, create additional roles and role bindings (or use a ClusterRole) as needed.
Writing a Workflow
So far, you have deployed Argo Workflows and exposed the user interface. To write a new workflow, run the following command to create a basic example:
```bash
cat <<EOF > hello-workflow.yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: hello-argo
  namespace: argo
spec:
  entrypoint: hello
  templates:
    - name: hello
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["echo 'Hello Argo Workflows!'"]
EOF
```
This basic workflow demonstrates the core concepts we covered earlier. The Workflow resource defines what should be executed, while the template (named "hello") specifies the actual work - in this case, running a container that prints a message. The entrypoint tells Argo which template to start with, similar to how a main function works in programming.
Submitting Workflows
To submit a workflow, apply the manifest using kubectl:
```bash
kubectl apply -f hello-workflow.yaml
```
Head back to the workflows dashboard, and you should be greeted with:

You can click on the workflow to view logs and details about the run.
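Since a Workflow is a regular Kubernetes resource, you can also check on it from the command line; for example, listing workflows and pulling logs from the pods the workflow created (the label selector below assumes the standard workflows.argoproj.io/workflow label Argo applies to workflow pods):

```bash
kubectl get workflows -n argo
kubectl logs -n argo -l workflows.argoproj.io/workflow=hello-argo
```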
Conclusion
Argo Workflows is a great project that can be used to orchestrate multi-step workflows, and its Kubernetes-first design makes it a natural fit for those already using ArgoCD.
This post covered the fundamentals of Argo Workflows and how to get started. If you’re looking to try something similar, check out this tutorial on FluxCD, and if you’d like to take it a step further, learn how to build a pipeline for deploying Node.js applications with Flux.