
Creating an Automation Platform on Docker

1. Introduction

In today’s era of software development, container orchestration has become one of the most important concerns. In agile software development, every build that gets executed should go through a cycle of sanitation before the code can be merged into the production environment. From an automation perspective, it is therefore very useful to have the quality automation process execute within a container, alongside the compilation of the actual code base. This helps the development team spot any breakage of existing functionality early. As we will see, it is quite easy to set up and execute a Selenium workflow within a Docker container.

2. Pre-requisites

To start running your automation tests on a container-based platform, you need Docker. It is a very popular piece of software that helps achieve stable software builds through the deployment of containers. Since I am a Mac user, I will cover the two variations of Docker that can be used on macOS.

  • The first option is Docker Toolbox. The Docker Toolbox environment installs docker, docker-machine and docker-compose in the /usr/local/bin directory on your Mac. It requires Oracle VirtualBox to spin up a virtual machine called default, which runs boot2docker on your machine. With Docker Toolbox you have to explicitly set the Docker environment variables using eval $(docker-machine env default); see the snippet after this list. This lets docker and docker-compose communicate with the Docker Engine running inside the VirtualBox VM.
  • The second option is Docker for Mac. Docker for Mac is a native Mac application and is placed in the /Applications directory. During installation it creates symlinks for docker and docker-compose in the /usr/local/bin directory. Docker for Mac does not use VirtualBox, but rather a native virtualization environment called HyperKit, a lightweight macOS virtualization solution built on the Hypervisor framework available in macOS Yosemite 10.10 and higher. Installing Docker for Mac does not affect machines created using docker-machine; instead, Docker for Mac offers to copy the default docker-machine to the new HyperKit VM during installation. The Docker for Mac application does not use docker-machine to provision its virtual machine; it creates and manages the VM directly. During installation, Docker for Mac provisions the HyperKit virtual machine based on Alpine Linux, which runs the Docker Engine. The engine exposes the Docker API on a Unix socket at /var/run/docker.sock. This is the default location the Docker client looks for when no environment variables are set, so there is no need to set an environment variable when using Docker for Mac.
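
As a quick sanity check, here is how the client wiring differs between the two variants. This is a minimal sketch, assuming the Docker Toolbox machine uses the default name created by the installer:

    # Docker Toolbox: point the client at the engine inside the VirtualBox VM
    eval $(docker-machine env default)
    docker info    # now talks to the engine in the "default" VM

    # Docker for Mac: no environment variables needed; with these unset,
    # the client falls back to the Unix socket at /var/run/docker.sock
    unset DOCKER_HOST DOCKER_TLS_VERIFY DOCKER_CERT_PATH
    docker info    # talks to the engine in the HyperKit VM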

3. System requirements

If you are using Docker for Mac, these are the basic system requirements:

  • Mac must be a 2010 or newer model, with Intel’s hardware support for memory management unit (MMU) virtualization.
  • OS X El Capitan 10.11 and newer macOS releases are supported. At a minimum, Docker for Mac requires macOS Yosemite 10.10.3 or newer.
  • At least 4 GB of RAM.
  • VirtualBox prior to version 4.3.30 must NOT be installed, as it is incompatible with Docker for Mac. A newer version of VirtualBox is fine.

4. Containerization

Containerization technology is broadly used nowadays to ease the process of deployment and to give everyone directly involved with an application the same platform. Containerization gives developers a hassle-free way of developing and deploying an application on a common platform with the same specification. For example, suppose Developer A is developing an application against Java 8 and uses Hibernate version 4.5.5. Wouldn’t it be better if Developer B and Developer C on the team also had the same specifications? This reduces the dependency of the application under development on the host machine. Likewise, deploying the application on the cloud becomes hassle-free, because I know the configuration of the container and that the application will work as per that predefined configuration.

So technically (if I may put it that way), app containerization is an operating-system-level virtualization technique. It allows deploying and running distributed applications without launching an entire virtual machine for each one: multiple isolated systems run on a single host and access the same kernel. An application container holds the components necessary to run the desired software, such as files, environment variables and libraries. Because resources are shared in this way, application containers can be created with less strain on the overall resources available. Portability of the application also becomes much easier; for example, if a variation from the standard image is required, a container can be created that holds the newer library on top of the same image.
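
A quick way to see the shared-kernel point in practice: on a Linux host, a container reports the host’s kernel release even when its userland comes from a completely different distribution. A minimal sketch, assuming Docker is installed and can pull the public alpine image:

    # Both commands print the same kernel release on a Linux host, because
    # the container shares the host kernel instead of booting its own
    # (on Docker for Mac, the "host" kernel is that of the HyperKit VM)
    uname -r
    docker run --rm alpine uname -r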

5. Docker

Docker is a container management software. It helps in developing, running and shipping your application. Docker enables us to separate our application from the infrastructure, so that we can deliver software quickly and reliably. Recently, Docker has become immensely important for handling application deployments and builds, and the main reason is the simplicity of its implementation. The architecture of Docker is very simple and is based on client-server interaction. The brain behind Docker is Solomon Hykes, who was also a co-founder and the CEO of dotCloud, a close competitor of Heroku.

5.1 Docker Platform

Docker provides the ability to package and run an application in a loosely isolated environment called a container. This isolation allows a user to run many containers simultaneously on a given host system. Because of the lightweight nature of containers, which run without the extra load of a hypervisor, you can run more containers on a given hardware combination than if you were using actual virtual machines to run your application.

5.2 Docker Engine

Docker Engine is a client-server application with these major components:

  • A server, which is a type of long-running daemon process.
  • A REST API which specifies interfaces that programs can use to talk to the daemon and instruct it what to do.
  • A command line interface (CLI) client.

The CLI uses the Docker REST API to control or interact with the Docker daemon through scripting or direct CLI commands. Many other Docker applications use the underlying API and CLI. The daemon creates and manages Docker objects, such as images, containers, networks, and data volumes.
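
To make the split between client, API and daemon concrete, the same question ("which containers are running?") can be asked through the CLI or directly against the REST API on the daemon’s Unix socket. A minimal sketch, assuming a local daemon on the default socket and a curl build with Unix-socket support (7.40 or newer):

    # Via the CLI client
    docker ps

    # Via the REST API, bypassing the CLI entirely
    curl --unix-socket /var/run/docker.sock http://localhost/containers/json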

5.3 Docker Architecture

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing Docker containers. The Docker client and the daemon can run on the same system, or a Docker client can connect to a remote Docker daemon. The client and the daemon communicate with each other using the REST API.
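
For example, the same client binary can be pointed at a remote daemon purely through an environment variable. A sketch, where remote-docker-host is a placeholder and the remote daemon is assumed to listen on plain TCP port 2375 (production setups should use TLS on 2376 instead):

    # Talk to the local daemon over the default socket
    docker info

    # Point the same client at a remote daemon over TCP
    export DOCKER_HOST=tcp://remote-docker-host:2375
    docker info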

5.4 Docker Daemon

The Docker daemon runs on a host machine. Users interact with the daemon through the Docker client.

5.5 Docker Client

It is the primary interface for Docker calls and services. It accepts commands from the user and communicates with the Docker daemon.

5.6 Docker Containers

A Docker container is a runnable instance of a Docker image. You can run, start, stop, move, or delete a container using the Docker API or CLI commands. When you run a container, you can provide configuration metadata such as networking information or environment variables.
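
The lifecycle described above maps directly onto CLI commands. A minimal sketch using the public nginx image, with an environment variable and a port mapping as examples of the configuration metadata mentioned above:

    # Create and start a container with some configuration metadata
    docker run -d --name demo-web -e APP_ENV=staging -p 8080:80 nginx

    docker stop demo-web     # stop the running container
    docker start demo-web    # start it again
    docker rm -f demo-web    # force-remove it when done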

5.7 Docker Registries

A Docker registry is a library of images. A registry can be public or private, and can be on the same server as the Docker daemon or Docker client, or on a totally separate server.
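
Moving images between registries follows the same pattern whether the registry is the public Docker Hub or a private one; only the image name prefix changes. A sketch, where my-registry.local:5000 and the qa/pyconauto repository are placeholders:

    # Pull an image from the public Docker Hub registry
    docker pull corefinder/pyconauto

    # Re-tag it for a private registry and push it there
    docker tag corefinder/pyconauto my-registry.local:5000/qa/pyconauto
    docker push my-registry.local:5000/qa/pyconauto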

6. Implementation

  1. Get the project onto your local machine, either from GitHub using the command git clone https://github.com/Corefinder89/pyconauto.git, or by pulling the prebuilt image from Docker Hub using the command docker pull corefinder/pyconauto.
  2. Next, you need to configure your automation project inside the Python-Automation-Project directory.
  3. Once the project is present on your local system, you need to build the corresponding Dockerfile to get the dependencies and packages into your container. The command to build your Docker image is docker build -t <image_tag> . (for example, docker build -t corefinder/pyconauto .), run from the directory containing the Dockerfile.
  4. Last but not least, you need to run the Docker container to execute all your tests, using the command docker run --name <container_name> corefinder/pyconauto. A small Python automation code base is included in the Python-Automation-Project directory. The Dockerfile can be built and executed to understand the flow before getting started with your own automation. By default the tests execute in Firefox using geckodriver. The Selenium version installed is 3.3.3, so geckodriver and chromedriver have to be configured separately. Firefox utilises Xvfb to run headless. You can also configure your tests to run headless on PhantomJS or on headless Chrome. The end-to-end flow is sketched after this list.
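
Putting the steps together, the end-to-end flow looks like the sketch below. The directory name follows the repository name, and the image and container names are just examples:

    # 1. Get the project
    git clone https://github.com/Corefinder89/pyconauto.git
    cd pyconauto

    # 2. Build the image from the Dockerfile in the repository root
    docker build -t corefinder/pyconauto .

    # 3. Run the container; the test suite executes inside it
    docker run --name pyconauto-tests corefinder/pyconauto

    # 4. Inspect the test output and clean up
    docker logs pyconauto-tests
    docker rm pyconauto-tests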

7. References

GitHub: https://github.com/Corefinder89/PyConAuto
DockerHub: https://hub.docker.com/r/corefinder/pyconauto/

Soumyajit Basu

Soumyajit is a QA/DevOps engineer by profession and a technology enthusiast by passion. He loves communicating about technology and is an author on his own blog as well as on DZone and Web Code Geeks.