Running Services within a Docker Swarm
When Docker released its latest version, Docker Engine v1.12, it included quite a few changes to the capabilities provided by Docker Swarm. In today’s article, we’ll be exploring how to deploy a service using Docker’s Swarm Mode.
Activating Swarm Mode on Ubuntu 16.04
Before we can deploy a service on a Docker Engine Swarm, we will first need to set up a Swarm Cluster. Since we’re showing capabilities added with 1.12, we will also be installing the latest version of Docker Engine.
The following instructions will guide you through installing Docker Engine on Ubuntu 16.04. For other versions and platforms, you can follow Docker’s official installation guide.
Setting up the Docker Apt repository
For this installation, we'll be using the standard installation method for Ubuntu, which relies on the Apt package manager. Since we will be installing the latest version of Docker Engine, we need to configure Apt to pull the docker-engine package from Docker's official Apt repository rather than the system's preconfigured repositories.
Add Docker’s Public Key
The first step in configuring Apt to use a new repository is to add that repository's public key into Apt's cache with the apt-key command.
# apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
The above apt-key command requests the specified key (58118E89F3A912897C070ADBF76221572C52609D) from the p80.pool.sks-keyservers.net key server. This public key will be used to validate all packages downloaded from the new repository.
Specifying Docker’s repository location
With Docker's public key now imported, we can configure Apt to use the Docker Project's repository server. We will do this by adding an entry within the /etc/apt/sources.list.d/ directory.
# echo "deb https://apt.dockerproject.org/repo ubuntu-xenial main" >> /etc/apt/sources.list.d/docker.list
When we refresh Apt's package cache, Apt will look through all files within the sources.list.d/ directory to find new package repositories. The command above creates (if it doesn't already exist) a new file called docker.list with an entry that adds the apt.dockerproject.org repository.
Updating Apt’s package cache
To refresh Apt's package cache, we can run the apt-get command with the update option.
# apt-get update
This will cause Apt to repopulate its list of repositories by rereading all of its configuration files, including the one we just added. It will also query those repositories to cache a list of available packages.
Installing the linux-image-extra prerequisite
Before installing Docker Engine, we need to install a prerequisite package. The linux-image-extra package is a kernel-specific package that's needed for Ubuntu systems to support the aufs storage driver. This driver is used by Docker for mounting volumes within containers.
To install this package, we will use the apt-get command again, but this time with the install option.
# apt-get install linux-image-extra-$(uname -r)
In that apt-get command, the $(uname -r) portion of the command will return the version of the currently running kernel. Any kernel update to this system should also include installing the appropriate linux-image-extra package version that coincides with the new kernel version. If this package is not updated as well, issues may arise with Docker's ability to mount volumes.
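To see what that substitution resolves to on a given system, we can print the running kernel version and list any linux-image-extra packages already installed. This is a quick, hedged check; the exact kernel and package versions will differ from system to system.

# uname -r
# dpkg -l 'linux-image-extra-*'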
Install Docker Engine
With Apt configured and the linux-image-extra prerequisite package installed, we can now move on to installing Docker Engine. To do so, we will once again use the apt-get command with the install option to install the docker-engine package.
# apt-get install docker-engine
At this point, we should have Docker Engine v1.12.0 or newer installed. To verify that we have the correct version, we can execute the docker command with the version option.
# docker version
Client:
 Version:      1.12.0
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   8eab29e
 Built:        Thu Jul 28 22:11:10 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.0
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   8eab29e
 Built:        Thu Jul 28 22:11:10 2016
 OS/Arch:      linux/amd64
With this, we can see that both the Server and Client versions are 1.12.0. From here, we can move on to creating our Swarm.
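Since the linux-image-extra package was installed specifically to support the aufs storage driver, it may also be worth confirming which driver the daemon actually picked up. This is an optional, hedged check; the driver reported can differ depending on the system's configuration.

# docker info | grep 'Storage Driver'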
Creating a Docker Swarm
From this point on within this article, we will be executing tasks from several machines. To help make things a bit more clear, I have included the hostname in the command examples.
We will start our Swarm Cluster with two nodes. At this point, both of these nodes should have Docker Engine installed based on the instructions above.
When creating the Swarm Cluster, we will need to designate a node manager. For this example, we'll be using a host by the name of swarm-01 as a node manager. To make swarm-01 a node manager, we need to create our Swarm Cluster by executing a command on swarm-01 first. The command we will be executing is the docker command with the swarm init options.
root@swarm-01:~# docker swarm init --advertise-addr 10.0.0.1
Swarm initialized: current node (awwiap1z5vtxponawdqndl0e7) is now a manager.

To add a worker to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-51pzs5ax8dmp3h0ic72m9wq9vtagevp1ncrgik115qwo058ie6-3fokbd3onl2i8r7dowtlwh7kb \
    10.0.0.1:2377

To add a manager to this swarm, run the following command:
    docker swarm join \
    --token SWMTKN-1-51pzs5ax8dmp3h0ic72m9wq9vtagevp1ncrgik115qwo058ie6-bwex7fd4u5aov4naa5trcxs34 \
    10.0.0.1:2377
With the above command, in addition to the swarm init options, we also specified the --advertise-addr flag with a value of 10.0.0.1. This is the IP that the Swarm node manager will use to advertise the Swarm Cluster Service. While this address can be a private IP, it's important to note that in order for nodes to join this swarm, those nodes will need to be able to connect to the node manager over this IP on port 2377.
After running the docker swarm init command, we can see that swarm-01 was given a node ID (awwiap1z5vtxponawdqndl0e7) and made the manager of this swarm. The output also supplies two commands: one to add a node worker to the swarm and the other to add another node manager to the swarm.
Docker Swarm Mode can support multiple node managers. It will, however, elect one of them to be the primary node manager which will be responsible for orchestration within the Swarm.
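If those join commands are ever misplaced, they don't need to be copied from the original output; the tokens can be reprinted from a manager at any time. The following is a hedged example and assumes it is run on the current node manager.

root@swarm-01:~# docker swarm join-token worker
root@swarm-01:~# docker swarm join-token manager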
Adding a node worker to the Swarm Cluster
With the Swarm Cluster created, we can now add a new node worker using the docker command provided by the output of the Swarm creation.
root@swarm-02:~# docker swarm join \
>     --token SWMTKN-1-51pzs5ax8dmp3h0ic72m9wq9vtagevp1ncrgik115qwo058ie6-3fokbd3onl2i8r7dowtlwh7kb \
>     10.0.0.1:2377
This node joined a swarm as a worker.
In this example, we added swarm-02 to the swarm as a node worker. A node worker is a member of the Swarm Cluster whose role is to run tasks; in this case, tasks are containers. The node manager, on the other hand, has the role of managing the orchestration of tasks (containers) and maintaining the Swarm Cluster itself.
In addition to its node manager duties, a node manager is also a node worker, which means it will also run tasks for our Swarm Cluster.
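If we preferred that the manager focus only on orchestration and not run service tasks, its availability could be changed. This is a hedged, optional step that is not applied here, since the examples below expect swarm-01 to run tasks.

root@swarm-01:~# docker node update --availability drain swarm-01.example.com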
Viewing the current Swarm nodes
With the previous commands executed, we now have a basic two-node Swarm Cluster. We can verify the status of this cluster by executing the docker command with the node ls options.
root@swarm-01:~# docker node ls
ID                           HOSTNAME              STATUS  AVAILABILITY  MANAGER STATUS
13evr7hmiujjanbnu3n92dphk    swarm-02.example.com  Ready   Active
awwiap1z5vtxponawdqndl0e7 *  swarm-01.example.com  Ready   Active        Leader
From the output of this command, we can see that both swarm-01 and swarm-02 are in a Ready and Active state. With this, we can now move on to deploying services to this Swarm Cluster.
Creating a Service
With Docker Swarm Mode, a service is a long-running Docker container that can be deployed to any node worker. It’s something that either remote systems or other containers within the swarm can connect to and consume.
For this example, we’re going to deploy a Redis service.
Deploying a Replicated Service
A replicated service is a Docker Swarm service that has a specified number of replicas running. These replicas consist of multiple instances of the specified Docker container. In our case, each replica will be a unique Redis instance.
To create our new service, we'll use the docker command while specifying the service create options. The following command will create a service named redis that has 2 replicas and publishes port 6379 across the cluster.
root@swarm-01:~# docker service create --name redis --replicas 2 --publish 6379:6379 redis
er238pvukeqdev10nfmh9q1kr
In addition to specifying the service create options, we also used the --name flag to name the service redis and the --replicas flag to specify that this service should run on 2 different nodes. We can validate that it is in fact running on both nodes by executing the docker command with the service ls options.
root@swarm-01:~# docker service ls
ID            NAME   REPLICAS  IMAGE  COMMAND
er238pvukeqd  redis  2/2       redis
In the output, we can see there are 2 of 2 replicas currently running. If we want to see more details on these tasks, we can run the docker command with the service ps option.
root@swarm-01:~# docker service ps redis
ID                         NAME     IMAGE  NODE                  DESIRED STATE  CURRENT STATE           ERROR
5lr10nbpy91csmc91cew5cul1  redis.1  redis  swarm-02.example.com  Running        Running 40 minutes ago
1t77jsgo1qajxxdekbenl4pgk  redis.2  redis  swarm-01.example.com  Running        Running 40 minutes ago
The service ps option will show the tasks (containers) for the specified service. In this example, we can see the redis service has a task (container) running on both of our swarm nodes.
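For details about the service definition itself rather than its tasks, the service can also be inspected. This is a hedged aside; the --pretty flag simply prints a human-readable summary instead of raw JSON.

root@swarm-01:~# docker service inspect --pretty redis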
Connecting to the Redis service
Since we have validated that the service is running, we can try to connect to this service from a remote system with the redis-cli client.
vagrant@vagrant:~$ redis-cli -h swarm-01.example.com -p 6379
swarm-01.example.com:6379>
From the connection above, we were successful in connecting to the redis service. This means our service is up and available.
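Beyond simply opening a connection, a quick write and read against the service gives a bit more confidence that Redis itself is responding. This is a hedged sanity check using an arbitrary key name; the responses are omitted here.

swarm-01.example.com:6379> SET swarm-test "hello"
swarm-01.example.com:6379> GET swarm-test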
How Docker Swarm publishes services
When we created the redis service, we used the --publish flag with the docker service create command. This flag was used to tell Docker to publish port 6379 as an available port for the redis service.
When Docker publishes a port for a service, it does so by listening on that port across all nodes within the Swarm Cluster. When traffic arrives on that port, it is routed to a container running for that service. While this is fairly straightforward when every node is running a container for the service, it gets interesting when we have more nodes than replicas.
To see how this works, let’s add a third node worker to the Swarm Cluster.
Adding a third node worker into the mix
To add another node worker, we can simply repeat the installation and setup steps in the first part of this article. Since we already covered those steps, we'll skip ahead to the point where we have a three-node Swarm Cluster. We can once again check the status of this cluster by running the docker command.
root@swarm-01:~# docker node ls
ID                           HOSTNAME              STATUS  AVAILABILITY  MANAGER STATUS
13evr7hmiujjanbnu3n92dphk    swarm-02.example.com  Ready   Active
awwiap1z5vtxponawdqndl0e7 *  swarm-01.example.com  Ready   Active        Leader
e4ymm89082ooms0gs3iyn8vtl    swarm-03.example.com  Ready   Active
We can see that the cluster consists of three hosts:
swarm-01
swarm-02
swarm-03
When we created our service with two replicas, it created a task (container) on swarm-01 and swarm-02. Let's see if this is still the case even though we added another node worker.
root@swarm-01:~# docker service ps redis
ID                         NAME     IMAGE  NODE                  DESIRED STATE  CURRENT STATE           ERROR
5lr10nbpy91csmc91cew5cul1  redis.1  redis  swarm-02.example.com  Running        Running 55 minutes ago
1t77jsgo1qajxxdekbenl4pgk  redis.2  redis  swarm-01.example.com  Running        Running 55 minutes ago
With replicated services, Docker Swarm's goal is to ensure that there is a task (container) running for every replica specified. When we created the redis service, we specified that there should be 2 replicas. This means that even though we have a third node, Docker has no reason to start a new task on that node.
At this point, we have an interesting situation: we have a service that's running on 2 of the 3 Swarm nodes. In a non-Swarm world, that would mean the redis service would be unavailable when connecting to our third Swarm node. With Swarm Mode, however, that is not the case.
Connecting to a service on a non-task-running worker
Earlier when I described how Docker publishes a service port, I mentioned that it does so by publishing that port across all nodes within the Swarm. What’s interesting about this is what happens when we connect to a node worker that isn’t running any containers (tasks) associated with our service.
Let's take a look at what happens when we connect to swarm-03 over the redis service's published port.
vagrant@vagrant:~$ redis-cli -h swarm-03.example.com -p 6379
swarm-03.example.com:6379>
What's interesting here is that our connection was successful, despite the fact that swarm-03 is not running any redis containers. This works because, internally, Docker is rerouting our redis service traffic to a node worker that is running a redis container.
Docker calls this ingress load balancing. The way it works is that every node worker listens for connections to published service ports. When a service is called by an external system, the receiving node accepts the traffic and internally load balances it across the nodes that are running the service's tasks.
So even if we scaled out our Swarm Cluster to 100 node workers, end users of our redis service can simply connect to any node worker. They will then be redirected to one of the two Docker hosts running the service's tasks (containers).
All of this rerouting and load balancing is completely transparent to the end user. It all happens within the Swarm Cluster.
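One way to see the routing mesh from the host's perspective is to check that the published port is open even on a node that has no redis task. This is a hedged example; whether, and in what form, the listener shows up this way can vary by system, so treat it as a rough check rather than a definitive one.

root@swarm-03:~# ss -lnt | grep 6379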
Making our service global
At this point, we have the redis service set up to run with 2 replicas, meaning it's running containers on 2 of the 3 nodes.
If we wanted our redis service to consist of an instance on every node worker, we could do that easily by changing the service's number of desired replicas from 2 to 3, as shown below. This would mean, however, that with every node worker we add or remove, we would also need to adjust the number of replicas.
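As a quick sketch of that manual approach, the replica count can be adjusted with the docker service scale command (docker service update --replicas works as well); the output is omitted here.

root@swarm-01:~# docker service scale redis=3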
We could alternatively do this automatically by making our service a Global Service. A Global Service in Docker Swarm Mode is used to create a service that has a task running on every node worker automatically. This is useful for common services such as Redis that may be leveraged internally by other services.
To show this in action, let's go ahead and recreate our redis service as a Global Service.
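Because the replicated service still holds the redis service name and the published port, it needs to be removed before the global version can be created. This removal is a hedged housekeeping step not shown in the original output.

root@swarm-01:~# docker service rm redis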
root@swarm-01:~# docker service create --name redis --mode global --publish 6379:6379 redis
5o8m338zmsped0cmqe0guh2to
The command to create a Global Service is the same docker service create command we used to create a replicated service. The only difference is the --mode flag along with the value of global.
With the service now created, we can see how Docker distributed our tasks for this service by once again executing the docker command with the service ps options.
root@swarm-01:~# docker service ps redis
ID                         NAME      IMAGE  NODE                  DESIRED STATE  CURRENT STATE           ERROR
27s6q5yvmyjvty8jvp5k067ul  redis     redis  swarm-03.example.com  Running        Running 26 seconds ago
2xohhkqvlw7969qj6j0ca70xx  \_ redis  redis  swarm-02.example.com  Running        Running 38 seconds ago
22wrdkun5f5t9lku6sbprqi1k  \_ redis  redis  swarm-01.example.com  Running        Running 38 seconds ago
We can see that when the service was created as a Global Service, a task was then started on every node worker within our Swarm Cluster.
Summary
In this article, we not only installed Docker Engine, we also set up a Swarm Cluster, deployed a replicated service, and then created a Global Service.
In a recent article, I not only installed Kubernetes, I also created a Kubernetes service. Comparing Docker Swarm Mode services with Kubernetes services, I personally found Swarm Mode services easier to set up and create. For someone who simply wishes to use the "services" features of Kubernetes and doesn't need some of its other capabilities, Docker Swarm Mode may be an easier alternative.
Reference: Running Services within a Docker Swarm from our WCG partner Ben Cane at the Codeship Blog.