Docker is a containerisation platform that allows you to deploy software in lightweight, isolated packages called containers, which share the host's kernel rather than being full virtual machines. Multiple computers running Docker can be joined together and act as a single pool of resources. In 2016 Docker released a tool to make this easier, called Docker Swarm. There are two types of nodes in a Docker Swarm, managers and workers. Both types of node can run containers, but managers are responsible for distributing and allocating work.
This guide will show you how to set up a three-node Docker Swarm cluster on Ubuntu 18.04 on Civo's cloud, deploying nginx to handle web requests for reliability and availability. It will then suggest other applications for your new swarm.
Prerequisites
First things first, you need an account on Civo. Civo offers $250 free credit for a month to new users: Sign up here.
As we will be setting up a three-node swarm, you will need three instances set up with Docker. Here is a quick guide to do just that. Once you have three instances set up, you can continue with this guide.
DNS Configuration
If you have a DNS server you can configure A records for these hosts, but for simplicity, I will add the IP addresses assigned to the instances when they were created into my /etc/hosts file so we can resolve names to IPs. To do this, make note of the individual instance IP addresses (marked 'public network' on the Civo dashboard), and open up the hosts file:
$ sudo nano /etc/hosts
In my example, I edited in my IP Addresses as follows underneath the existing mappings:
172.31.1.11 manager
172.31.1.12 worker-1
172.31.1.13 worker-2
Repeat the above steps on the other two servers and make note of the IP addresses of each node. You should be able to ping and reach each of the nodes that were configured. Make sure to allow all traffic between these nodes.
civo@manager:~$ ping worker-2
PING worker-2 (172.31.1.13) 56(84) bytes of data.
64 bytes from worker-2 (172.31.1.13): icmp_seq=1 ttl=64 time=1.21 ms
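If your instances have a firewall enabled, the traffic between the nodes needs to be allowed explicitly. As a rough sketch, assuming ufw is in use and reusing the example addresses above (adjust the subnet or ports to your own setup), something like this on each node would do it:
```
# Allow everything from the other swarm nodes (example subnet)
$ sudo ufw allow from 172.31.1.0/24

# Or, more narrowly, open just the ports swarm mode uses
$ sudo ufw allow 2377/tcp   # cluster management traffic
$ sudo ufw allow 7946       # node-to-node communication (TCP and UDP)
$ sudo ufw allow 4789/udp   # overlay network (VXLAN) traffic
```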
Initializing the Swarm
Now we will initialize the swarm on the manager node. As we have more than one network interface, we will specify the --advertise-addr option:
```
$ docker swarm init --advertise-addr 172.31.1.11
Swarm initialized: current node (siqyf3yricsvjkzvej00a9b8h) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join \
--token SWMTKN-1-0eith07xkcg93lzftuhjmxaxwfa6mbkjsmjzb3d3sx9cobc2zp-97s6xzdt27y2gk3kpm0cgo6y2 \
172.31.1.11:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
```
From the response above, we received the join token that allows the workers to register with the manager node. As you can see, if you have a scenario where you want more than one manager node, you can run `docker swarm join-token manager` to receive the join token for an additional manager.
Let's add the two worker nodes to the manager:
```
$ [worker-1] docker swarm join --token SWMTKN-1-0eith07xkcg93lzftuhjmxaxwfa6mbkjsmjzb3d3sx9cobc2zp-97s6xzdt27y2gk3kpm0cgo6y2 172.31.1.11:2377
This node joined a swarm as a worker.
$ [worker-2] docker swarm join --token SWMTKN-1-0eith07xkcg93lzftuhjmxaxwfa6mbkjsmjzb3d3sx9cobc2zp-97s6xzdt27y2gk3kpm0cgo6y2 172.31.1.11:2377
This node joined a swarm as a worker.
```
To see the node status from the manager node, list all the nodes in the swarm:
```
[manager] $ docker node ls
ID                            HOSTNAME            STATUS              AVAILABILITY        MANAGER STATUS
j14mte3v1jhtbm3pb2qrpgwp6     worker-1            Ready               Active
siqyf3yricsvjkzvej00a9b8h *   manager             Ready               Active              Leader
srl5yzme5hxnzxal2t1efmwje     worker-2            Ready               Active
```
This command lets us check whether the nodes are active and available, and which node is acting as the manager.
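To dig into a single node in more detail, docker node inspect takes a node name (or self for the node you are on); the --pretty flag prints a readable summary of the node's role, availability and resources rather than the raw JSON:
```
[manager] $ docker node inspect --pretty manager
```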
If at any time you lose your join token, you can retrieve it by running the following for a worker token:
$ docker swarm join-token worker -q
And the following for a manager token:
$ docker swarm join-token manager -q
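The -q flag prints just the token itself. If you leave it off, Docker prints the full join command, ready to paste onto a new node, with output along the lines of:
```
$ docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join \
    --token SWMTKN-1-0eith07xkcg93lzftuhjmxaxwfa6mbkjsmjzb3d3sx9cobc2zp-97s6xzdt27y2gk3kpm0cgo6y2 \
    172.31.1.11:2377
```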
Docker Swarm uses a declarative model. This means that you define the desired state of the service, and rely on Docker to maintain this state. More information on this model can be found in the Docker documentation.
While a node may be called a manager, by default it still performs worker roles too, which is why all three nodes will be serving the same application. Bear in mind that with a single manager, the containers keep running if the manager drops off due to a network error, but you will not be able to manage the swarm until it recovers; for manager fault tolerance you need more than one manager node.
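If you do want that extra fault tolerance, you can promote an existing worker from the manager node (shown here against worker-1 purely as an example); an odd number of managers, such as three, lets the swarm tolerate the loss of one of them:
```
[manager] $ docker node promote worker-1
Node worker-1 promoted to a manager in the swarm.
```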
Deploying Our First Service
At this point, we can see that we have no services running in our swarm:
[manager] $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
We will create a standard nginx service with two replicas, meaning there will be two nginx containers running in our swarm.
But first, we need to create an overlay network, which is a network driver that creates a distributed network spanning multiple Docker daemon hosts. Swarm takes care of the routing automatically via published port mappings. In other words, you hit your manager node on a published port, and the request is routed to the desired application's container on whichever node it is running, such as worker-2.
To create an overlay network called mynet:
[manager] $ docker network create --driver overlay mynet
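To confirm the network was created, you can list the overlay networks on the manager; you should see mynet alongside the built-in ingress network (the IDs will differ on your swarm):
```
[manager] $ docker network ls --filter driver=overlay
```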
Now onto creating the service. If any of the containers fail, the manager will reschedule them, spawning new ones so that the running count matches the desired number we set with the replicas option:
[manager] $ docker service create --name my-web --publish 8080:80 --replicas 2 --network mynet nginx
The above command creates a new Docker service called my-web, published on port 8080 and routed to port 80 inside the containers, running 2 replicas of nginx on the overlay network mynet that we just created.
Let’s have a look at our nginx service:
[manager] $ docker service ls
ID                  NAME                MODE                REPLICAS            IMAGE               PORTS
1okycpshfusq        my-web              replicated          2/2                 nginx:latest        *:8080->80/tcp
We can see the two replicas are running the latest nginx image just as we wanted. To see which nodes the containers are running on, we can use docker service ps my-web:
[manager] $ docker service ps my-web
ID                  NAME                IMAGE               NODE                DESIRED STATE       CURRENT STATE
k0qqrh8s0c2d        my-web.1            nginx:latest        worker-1            Running             Running 30 seconds ago
nku9wer6tmll        my-web.2            nginx:latest        worker-2            Running             Running 30 seconds ago
From the above output, we can see that worker-1 and worker-2 are serving the containers for our service. We can also retrieve more information about the service using the inspect option, which returns a detailed JSON description of the service:
[manager] $ docker service inspect my-web
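The JSON output is thorough but dense. If you just want a quick, human-readable summary of the service (image, mode, replicas and published ports), the --pretty flag condenses it:
```
[manager] $ docker service inspect --pretty my-web
```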
Now that we have inspected our service and can see it's running as expected, it's time to test out the nginx service.
Testing Nginx in our Swarm
On your Civo dashboard, find out the public IP address of the manager instance and copy it. On your own computer (not on any of the instance nodes), make a request against this manager node's address on the port that was exposed, in this case 8080:
```
$ curl -I http://docker-manager-public-IP:8080
HTTP/1.1 200 OK
Server: nginx/1.17.2
Date: Mon, 05 Aug 2019 13:59:48 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 23 Jul 2019 11:45:37 GMT
Connection: keep-alive
ETag: "5d36f361-264"
Accept-Ranges: bytes
```
As you can see, the HTTP header shows the server type as `nginx`, which means things are working. That means we have successfully set up a three-node Docker Swarm cluster and deployed a basic nginx service to it. Give yourself a pat on the back!
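One handy property of swarm mode worth noting here: the ingress routing mesh publishes port 8080 on every node in the swarm, not just the manager. If you substitute a worker's public IP address from your dashboard, the same request should return the same nginx headers:
```
$ curl -I http://docker-worker-public-IP:8080
```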
Next Steps
Now that you have successfully deployed a Docker Swarm application, what next? This blog covers more Docker Swarm basics and its history. For another project, you could easily swap out the nginx service for an npm package to create UUIDs and scale your microservice using Swarm, as sketched below. Or, if you're really keen on container orchestration, you could read up on the differences between Docker and Kubernetes.
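As a starting point for the scaling idea above, growing or shrinking a service is a single declarative command; Swarm adds or removes containers until the running count matches the new desired state. For example, to take my-web from two replicas to three and see where the extra container lands:
```
[manager] $ docker service scale my-web=3
[manager] $ docker service ps my-web
```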
✅Follow Civo on Twitter!