Getting started with Argo Workflows on Civo
Get started with Argo Workflows, learning what they are, how they work, their use cases, and how to get started with your Kubernetes environment on Civo.
Written by
Technical Writer @ Civo
Argo Workflows is a Kubernetes-native workflow execution engine that allows you to run workflows in a DAG (directed acyclic graph) or step-based manner. In simpler terms, Argo Workflows offers a familiar approach similar to that found in continuous integration platforms like GitHub Actions, but with powerful additional features specifically designed for Kubernetes environments.
This tutorial will explain what Argo Workflows are, how they work, their use cases, and how to get started.
An introduction to Argo Workflows
Before diving into the tutorial, let’s take a look at the unique advantages Argo Workflows offers. As a Kubernetes-native solution, it can leverage your existing compute resources to perform actions that you might otherwise outsource to an external platform, such as your CI/CD workflows, though its uses extend well beyond CI/CD. To put this in perspective, here are a few use cases for Argo Workflows:
Argo Workflow concepts
Before diving into a demo, there are a couple of core concepts you will need to understand:
A Workflow is the central resource in Argo, defining the work to be executed and storing the workflow's state. Because of these critical functions, you should treat Workflows as live objects.
Templates define the individual tasks or steps that make up your workflow. Linking back to the GitHub Actions example, these would be the regular steps in an action.
WorkflowTemplate vs Template
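The naming is admittedly confusing: a template (lowercase) is a task definition inside a Workflow's spec, while a WorkflowTemplate is a standalone, reusable resource stored in the cluster that other Workflows can reference. Here is a minimal sketch to illustrate the distinction (resource names are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: WorkflowTemplate          # reusable definition stored in the cluster
metadata:
  name: echo-template
  namespace: argo
spec:
  templates:
    - name: echo
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["echo 'from a WorkflowTemplate'"]
---
apiVersion: argoproj.io/v1alpha1
kind: Workflow                  # a one-off run referencing the template above
metadata:
  generateName: use-echo-
  namespace: argo
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: call-echo
            templateRef:
              name: echo-template
              template: echo
```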
Different types of Templates
Container: Allows you to define a container to run. This is the most straightforward type, where you specify an image and commands; the image could be a base operating system, e.g., Ubuntu, or a custom image containing the required build tools:
```yaml
- name: hello-world
  container:
    image: alpine:latest
    command: [sh, -c]
    args: ["echo hello world"]
```
Script: Runs a script in a container image. Similar to the container type, but lets you write the script inline instead of using a separate file:
```yaml
- name: date
  inputs:
    parameters:
      - name: num
  script:
    image: python:alpine3.6
    command: [python]
    source: |
      import datetime
      now = datetime.datetime.now()
      print(now)
```
Resource: Allows direct interaction with cluster resources. You can perform operations like CREATE, APPLY, or PATCH on Kubernetes resources:
```yaml
- name: create-deployment
  resource:
    action: create
    manifest: |
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx-deployment
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
              - name: nginx
                image: nginx:1.21
                ports:
                  - containerPort: 80
```
DAG: Last but not least, DAG defines a directed acyclic graph of other templates. This lets you specify dependencies between tasks and run them in parallel where possible:
```yaml
- name: my-dag
  dag:
    tasks:
      - name: A
        template: echo
        arguments:
          parameters: [{name: message, value: A}]
      - name: B
        dependencies: [A]
        template: echo
        arguments:
          parameters: [{name: message, value: B}]
      - name: C
        dependencies: [A]
        template: echo
        arguments:
          parameters: [{name: message, value: C}]
```
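Alongside DAG, Argo also supports a Steps template type: groups of steps run sequentially, while steps inside the same group run in parallel. A minimal sketch, assuming an `echo` template like the ones above:

```yaml
- name: my-steps
  steps:
    - - name: step-a               # first group runs alone
        template: echo
        arguments:
          parameters: [{name: message, value: A}]
    - - name: step-b               # second group: b and c run in parallel
        template: echo
        arguments:
          parameters: [{name: message, value: B}]
      - name: step-c
        template: echo
        arguments:
          parameters: [{name: message, value: C}]
```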
Installing Argo Workflows
With some of the high-level concepts covered, here is how to get started with Argo Workflows.
Prerequisites
This tutorial assumes some familiarity with Kubernetes. Additionally, you will need the following installed on your machine:
Creating a Kubernetes cluster (optional)
If you already have a Kubernetes cluster up and running, feel free to skip this step. To create a cluster using the Civo CLI, run the following command:
civo k3s create --create-firewall --nodes 1 -m --save --switch --wait argo
This launches a single-node Kubernetes cluster in your Civo account; the -m flag merges the kubeconfig for the new cluster with your existing kubeconfig, and --switch points your kube-context to the newly created cluster.
The output is similar to:
The cluster argo (dca42473-f079-44a2-8328-5fae315c005b) has been created in 2 min 43 sec
Create a namespace for Argo
Begin by creating a namespace to house all the Argo Workflows resources:
kubectl create ns argo
The output is similar to:
namespace/argo created
Install Argo Workflows
While Argo Workflows provides a quick-start manifest for getting up and running quickly, it is not suitable for production use. Fortunately, the recommended Helm-based approach is no more difficult.
Add the Argo project Helm repository:
helm repo add argo https://argoproj.github.io/argo-helm
Update your local repository:
helm repo update
Install Argo Workflows:
helm install workflows argo/argo-workflows --version=0.45.19 --namespace=argo
This will install Argo Workflows v3.6.10, which is the latest at the time of writing.
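Chart defaults can be overridden with a values file passed via `--values`. The keys below are assumptions based on the chart's documented values; verify them against the chart's values.yaml before relying on them:

```yaml
# values.yaml (illustrative sketch; check the argo-workflows chart's values.yaml)
server:
  authModes:
    - client        # require a bearer token to log in to the UI
controller:
  workflowNamespaces:
    - argo          # limit the controller to the argo namespace
```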
Verify that the installation completed:
kubectl get pods -n argo
The output is similar to:
```
NAME                                              READY   STATUS    RESTARTS   AGE
workflows-argo-workflows-workflow-controller      1/1     Running   0          97s
workflows-argo-workflows-server-946f459ff-rfj46   1/1     Running   0          97s
```
Configure access to the Workflows UI
Before you start writing a workflow, you’ll likely want a way to view submitted workflows. Argo Workflows ships with a neat user interface, but it requires some setup.
Argo Workflows provides a few methods for authentication. Server auth is great for testing locally; however, in production environments, client authentication, which takes advantage of Kubernetes bearer tokens, is recommended.
Begin by creating a role for your user's account:
```shell
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: argo-workflows-user
  namespace: argo
rules:
  - apiGroups: [""]
    resources: ["events", "pods", "pods/log"]
    verbs: ["get", "list", "watch"]
  - apiGroups: ["argoproj.io"]
    resources: ["workflows", "workflowtemplates", "cronworkflows", "workflowtaskresults"]
    verbs: ["create", "delete", "update", "patch", "get", "list", "watch"]
EOF
```
The role above grants the service account (created in the next step) sufficient permissions to read, write, and watch workflows within the argo namespace.
Create a corresponding service account:
kubectl create sa argo-ui -n argo
Bind the service account to the role and the Argo namespace:
kubectl create rolebinding argo-ui-binding --role=argo-workflows-user --serviceaccount=argo:argo-ui -n argo
Finally, create a secret to hold the bearer token:
```shell
kubectl apply -f - <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: argo-ui-secret
  namespace: argo
  annotations:
    kubernetes.io/service-account.name: argo-ui
type: kubernetes.io/service-account-token
EOF
```
Retrieve the token:
```shell
kubectl get secret argo-ui-secret -n argo -o=jsonpath='{.data.token}' | base64 --decode
```
Store the token in a secure location before moving on to the next step.
Expose the workflow UI:
```shell
kubectl port-forward svc/workflows-argo-workflows-server -n argo 2746:2746
```
This will expose the UI on http://localhost:2746:

Paste in the token you copied earlier, and you should be greeted with a screen like this:

If you run into an error such as “Forbidden: workflowtemplates.argoproj.io is forbidden: User "system:serviceaccount:argo:argo-ui" cannot list resource "workflowtemplates" in API group "argoproj.io" at the cluster scope”, ensure you have the argo namespace selected. You can set this in the namespace field of the interface:

This occurs because we explicitly granted the service account permissions only within the argo namespace. If you need access to more namespaces, update the service account and role bindings (or use a ClusterRole) as needed.
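For example, granting read-only access across all namespaces could look something like this sketch (a ClusterRole plus ClusterRoleBinding; resource names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argo-workflows-viewer
rules:
  - apiGroups: ["argoproj.io"]
    resources: ["workflows", "workflowtemplates", "cronworkflows"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argo-ui-viewer-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argo-workflows-viewer
subjects:
  - kind: ServiceAccount
    name: argo-ui
    namespace: argo
```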
Writing a Workflow
So far, you have deployed Argo Workflows and exposed the user interface. To write a new workflow, run the following command to create a basic example:
```shell
cat <<EOF > hello-workflow.yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: hello-argo
  namespace: argo
spec:
  entrypoint: hello
  templates:
    - name: hello
      container:
        image: alpine:latest
        command: [sh, -c]
        args: ["echo 'Hello Argo Workflows!'"]
EOF
```
This basic workflow demonstrates the core concepts we covered earlier. The Workflow resource defines what should be executed, while the template (named "hello") specifies the actual work - in this case, running a container that prints a message. The entrypoint tells Argo which template to start with, similar to how a main function works in programming.
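Building on this, templates can also take inputs. A sketch of the same workflow with a message parameter (parameter names here are illustrative):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: hello-argo-params
  namespace: argo
spec:
  entrypoint: hello
  arguments:
    parameters:
      - name: message
        value: "Hello Argo Workflows!"
  templates:
    - name: hello
      inputs:
        parameters:
          - name: message
      container:
        image: alpine:latest
        command: [sh, -c]
        # Argo substitutes the parameter value at runtime
        args: ["echo '{{inputs.parameters.message}}'"]
```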
Submitting Workflows
To submit a workflow, apply the manifest using kubectl:
kubectl apply -f hello-workflow.yaml
Note: The Argo CLI is also capable of submitting workflows, but kubectl is used in this demo for simplicity.
Head back to the workflows dashboard, and you should be greeted with:

You can click on the workflow to view logs and details about the run.
Conclusion
Argo Workflows is a great project that can be used to orchestrate multi-step workflows, and its Kubernetes-first design makes it a natural fit for those already using ArgoCD.
This post covered the fundamentals of Argo Workflows and how to get started - if you’re looking to try something similar, check out this tutorial on FluxCD, and if you’d like to take it a step further, learn how to build a pipeline for deploying Node.js applications with Flux.

Technical Writer @ Civo
Jubril Oyetunji is a DevOps engineer and technical writer with a strong focus on cloud-native technologies and open-source tools. His work centers on creating practical tutorials that help developers better understand platforms such as Kubernetes, NGINX, Rust, and Go.
As a contract technical writer, Jubril authored an extensive library of technical guides covering cloud-native infrastructure and modern development workflows. Many of his tutorials achieved strong search rankings, helping developers around the world learn and adopt emerging technologies.