Getting Started with Docker Swarm

What is Swarm

From the Docker Docs:

Docker Swarm is native clustering for Docker. It turns a pool of Docker hosts into a single, virtual Docker host. Because Docker Swarm serves the standard Docker API, any tool that already communicates with a Docker daemon can use Swarm to transparently scale to multiple hosts.

Why Swarm

There are other clustering tools out there; Kubernetes and OpenShift are two examples. I’m using Swarm for this post to introduce the concepts of clustering from a Docker perspective. When we create a swarm “cluster” we are creating a pool of nodes that are all running Docker, and since Swarm exposes the Docker API, any tool we develop that communicates with Docker will also work with Swarm.

Getting Started

WARNING: do not do this in production. This is strictly for development/POC. Security becomes a major concern when moving to production, so please read Protect the Docker daemon socket.

Modify The Docker Daemon

In order for Swarm to work, we need to make sure the Docker daemons are configured to listen on a TCP port. Our swarm control node is an Ubuntu machine, so we need to edit the Docker config like so:

 jray@swarm.johnray.io:~
 > sudo vim /etc/default/docker

 # Docker Upstart and SysVinit configuration file
 export http_proxy=http://:8000
 export https_proxy=${http_proxy}
 DOCKER_OPTS="-g /docker -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375"

 jray@swarm.johnray.io:~
 > sudo service docker restart
 docker stop/waiting
 docker start/running, process 13599

This has to be done on every node that will be in the cluster. In this case I have two other nodes, which are systemd based, so we need to edit the daemon.conf drop-in file on those nodes.

 jray@node1.johnray.io:~
 > sudo vim /etc/systemd/system/docker.service.d/daemon.conf

 [Service]
 ExecStart=
 ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375 -g /docker \
 --storage-driver devicemapper --storage-opt dm.fs=xfs \
 --storage-opt dm.thinpooldev=/dev/mapper/dvol-docker--pool \
 --storage-opt dm.use_deferred_removal=true \
 --storage-opt dm.use_deferred_deletion=true

 # Reload the systemd daemon
 jray@node1.johnray.io:~
 > sudo systemctl daemon-reload

 jray@node1.johnray.io:~
 > sudo systemctl restart docker

Just to check that you have set everything up correctly, we can run the docker client with the -H option to point it at a different daemon. From my Ubuntu node, point docker at node1 and get its info:

 jray@swarm.johnray.io:~
 > docker -H tcp://node1.johnray.io:2375 info
 Containers: 0
 Images: 155
 Server Version: 1.9.0
 Storage Driver: devicemapper
 Pool Name: dvol-docker--pool
 Pool Blocksize: 65.54 kB
 Base Device Size: 107.4 GB
 Backing Filesystem: xfs
 Data file:
 Metadata file:
 Data Space Used: 8.973 GB
 Data Space Total: 749 GB
 Data Space Available: 740 GB
 Metadata Space Used: 12.35 MB
 Metadata Space Total: 1.002 GB
 Metadata Space Available: 990.1 MB
 Udev Sync Supported: true
 Deferred Removal Enabled: true
 Deferred Deletion Enabled: true
 Deferred Deleted Device Count: 0
 Library Version: 1.02.93-RHEL7 (2015-01-28)
 Execution Driver: native-0.2
 Logging Driver: json-file
 Kernel Version: 3.10.0-229.4.2.el7.x86_64
 Operating System: Oracle Linux Server 7.1
 CPUs: 48
 Total Memory: 251.7 GiB
 Name: node1
 ID: 6UEV:AF7N:5T5Q:2API:OF7S:6FIX:VB7E:2REM:VFRB:BQA3:6KBA:FSUE
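With more than a couple of nodes it can be handy to confirm that every daemon is actually reachable on TCP before wiring up the cluster. Here is a minimal sketch, assuming bash (for its /dev/tcp feature) and GNU coreutils timeout; the hostnames are the ones used in this post, so substitute your own:

```shell
#!/bin/bash
# Probe each node's Docker TCP port before joining it to the swarm.
for host in swarm.johnray.io node1.johnray.io node2.johnray.io; do
  if timeout 2 bash -c ">/dev/tcp/${host}/2375" 2>/dev/null; then
    echo "${host}: port 2375 open"
  else
    echo "${host}: port 2375 unreachable"
  fi
done
```

Any node that shows up unreachable here will fail to register with the swarm manager later, so it is worth fixing before moving on.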

Get Swarm

Docker Swarm is conveniently designed to be run in a container, so let’s pull it down. NOTE: you will need to do this on all the nodes that will be in the cluster.

 jray@swarm.johnray.io:~
 > docker pull swarm
 Using default tag: latest
 latest: Pulling from library/swarm
 d681c900c6e3: Pull complete
 188de6f24f3f: Pull complete
 90b2ffb8d338: Pull complete
 237af4efea94: Pull complete
 3b3fc6f62107: Pull complete
 7e6c9135b308: Pull complete
 986340ab62f0: Pull complete
 a9975e2cc0a3: Pull complete
 Digest: sha256:c21fd414b0488637b1f05f13a59b032a3f9da5d818d31da1a4ca98a84c0c781b
 Status: Downloaded newer image for swarm:latest

Once the image is pulled down, we need to create a file that lists all the nodes in the cluster. We do this because the swarm create command actually reaches out to Docker’s hosted discovery platform and registers the nodes there; in this case we are going to use a static file instead, which looks like this:

 jray@swarm.johnray.io:~
 > cat /docker/swarm_nodes
 10.135.16.104:2375
 10.207.13.36:2375
 10.207.13.35:2375
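Rather than hand-editing that file, you can generate it from a node list. A small sketch, using the IPs from this cluster; it writes to /tmp so it is harmless to try, but on the real control node you would write to /docker/swarm_nodes:

```shell
#!/bin/sh
# Generate the static discovery file from a list of node IPs.
PORT=2375
NODES="10.135.16.104 10.207.13.36 10.207.13.35"
OUT=/tmp/swarm_nodes   # use /docker/swarm_nodes on the real control node
: > "$OUT"
for ip in $NODES; do
  echo "${ip}:${PORT}" >> "$OUT"
done
cat "$OUT"
```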

Run the following on each node to join the swarm cluster.

 jray@node1.johnray.io:~
 > docker run -it -d -v /docker/swarm_nodes:/swarm_nodes swarm join \
 --addr=10.207.13.35:2375 file:///swarm_nodes

And then run this on the manager, which is also part of the cluster.

 jray@swarm.johnray.io:/docker
 > docker run -it -d -v /docker/swarm_nodes:/swarm_nodes -p 8375:2375 swarm manage file:///swarm_nodes

jray@swarm.johnray.io:/docker
 > docker logs 3572eee1a1
 INFO[0000] Listening for HTTP addr=:2375 proto=tcp
 INFO[0000] Registered Engine swarm at 10.135.16.104:2375
 INFO[0000] Registered Engine node1 at 10.207.13.35:2375
 INFO[0000] Registered Engine node2 at 10.207.13.36:2375

Success!! You now have a small Docker Swarm cluster running. You can see information about the cluster by logging in to the swarm manager and using the -H option like so.

 jray@swarm.johnray.io:/docker
 > docker -H localhost:8375 info
 Containers: 4
 Images: 19
 Role: primary
 Strategy: spread
 Filters: health, port, dependency, affinity, constraint
 Nodes: 3
 node1: 10.207.13.36:2375
 └ Status: Healthy
 └ Containers: 1
 └ Reserved CPUs: 0 / 50
 └ Reserved Memory: 0 B / 264.3 GiB
 └ Labels: executiondriver=native-0.2, kernelversion=3.10.0-229.4.2.el7.x86_64, operatingsystem=Oracle Linux Server 7.1, storagedriver=devicemapper
 node2: 10.207.13.35:2375
 └ Status: Healthy
 └ Containers: 1
 └ Reserved CPUs: 0 / 50
 └ Reserved Memory: 0 B / 264.3 GiB
 └ Labels: executiondriver=native-0.2, kernelversion=3.10.0-229.4.2.el7.x86_64, operatingsystem=Oracle Linux Server 7.1, storagedriver=devicemapper
 swarm: 10.135.16.104:2375
 └ Status: Healthy
 └ Containers: 2
 └ Reserved CPUs: 0 / 4
 └ Reserved Memory: 0 B / 8.221 GiB
 └ Labels: executiondriver=native-0.2, kernelversion=3.16.0-43-generic, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs
 CPUs: 104
 Total Memory: 536.8 GiB
 Name: 3572eee1a1e2

But the really cool part is that we can use the swarm manager to launch containers that will be scheduled across the cluster. I’m going to use the node constraint here to force the container to be launched on node1.

 jray@swarm.johnray.io:/docker
 > docker -H localhost:8375 run -it -d -e constraint:node==node1 --name rvd_tick --net host -e "RH_ENV=drhub-poc" reg.johnray.io/yarnhoj/appA "/apps/appA/bin/start_rvd.sh"

 # First a look at what is running on the local machine
 jray@swarm.johnray.io:/docker
 > docker ps
 CONTAINER ID   IMAGE   COMMAND                  CREATED         STATUS         PORTS                    NAMES
 3572eee1a1e2   swarm   "/swarm manage file:/"   4 minutes ago   Up 4 minutes   0.0.0.0:8375->2375/tcp   thirsty_spence
 499d86d19f9e   swarm   "/swarm join --addr=1"   5 minutes ago   Up 4 minutes   2375/tcp                 elated_wescoff

 # And now what is running in the cluster
 jray@swarm.johnray.io:/docker
 > docker -H localhost:8375 ps
 CONTAINER ID   IMAGE                         COMMAND              CREATED          STATUS          PORTS   NAMES
 1d8e8064d44c   reg.johnray.io/yarnhoj/appA   "./start_appA.sh "   15 seconds ago   Up 14 seconds           node1/rvd_tick
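The node constraint is only one form; classic Swarm can also match constraints against the daemon labels shown in the info output above (storagedriver, operatingsystem, and so on). Here is a dry-run sketch that only prints the command it would run, reusing the image and manager address from this post:

```shell
#!/bin/sh
# Print (not run) a swarm-scheduled command that targets any node whose
# daemon reports the devicemapper storage driver. Drop the echo to run it.
SWARM=localhost:8375
IMAGE=reg.johnray.io/yarnhoj/appA
echo docker -H "$SWARM" run -d \
  -e constraint:storagedriver==devicemapper \
  "$IMAGE" "/apps/appA/bin/start_rvd.sh"
```

Since both node1 and node2 report devicemapper, the scheduler would be free to pick either one under the default spread strategy.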

On node1 you can run docker ps and see that the container is running on that node.

 jray@node1:~
 > docker ps
 CONTAINER ID   IMAGE                         COMMAND                  CREATED          STATUS          PORTS      NAMES
 1d8e8064d44c   reg.johnray.io/yarnhoj/appA   "./start_appA.sh "       22 minutes ago   Up 22 minutes              rvd_tick
 0648f566f1aa   swarm                         "/swarm join --addr=1"   27 minutes ago   Up 27 minutes   2375/tcp   suspicious_yonath

The DOCKER_HOST Variable

If you get tired of typing the -H <host>:<port> option, you can set the DOCKER_HOST variable in your shell.

 jray@swarm.johnray.io:/docker
 > docker ps
 CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS          PORTS                    NAMES
 3572eee1a1e2   swarm   "/swarm manage file:/"   31 minutes ago   Up 31 minutes   0.0.0.0:8375->2375/tcp   thirsty_spence
 499d86d19f9e   swarm   "/swarm join --addr=1"   31 minutes ago   Up 31 minutes   2375/tcp                 elated_wescoff

jray@swarm.johnray.io:/docker
 > export DOCKER_HOST=tcp://localhost:8375

 # Now docker ps only shows the cluster and not the local machine
 jray@swarm.johnray.io:/docker
 > docker ps
 CONTAINER ID   IMAGE                         COMMAND              CREATED          STATUS          PORTS   NAMES
 1d8e8064d44c   reg.johnray.io/yarnhoj/appA   "./start_appA.sh "   28 minutes ago   Up 28 minutes           node1/rvd_tick
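Keep in mind that DOCKER_HOST sticks for the rest of the shell session. A quick sketch of switching to the swarm manager and back, using the port mapped for the manager in this post:

```shell
#!/bin/sh
# Point the client at the swarm manager, then fall back to the local daemon.
export DOCKER_HOST=tcp://localhost:8375
echo "docker now talks to: ${DOCKER_HOST}"
unset DOCKER_HOST
echo "docker now talks to: ${DOCKER_HOST:-the local unix socket}"
```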

Happy Hacking!

Filed under: Docker