Elastic web services using Docker

Gaurav Raj
5 min read · Jul 8, 2020

Many web services out there have been deployed using Docker. So, before going into the details of Docker and containers, let us first understand what cloud computing and virtualization are and how they relate to Docker.

Cloud Computing

Cloud computing is an emerging technology that provides on-demand resources for varying requirements on a leased basis, i.e., a pay-as-you-go model. It provides computing services such as servers, storage, databases, networking, and software over the Internet. Cloud providers use a virtualization-based approach to build their stack, where virtualization refers to the act of creating a virtual version of computing resources, including computer hardware platforms, storage devices, and computer network resources. Virtualization makes it possible to run multiple operating systems and multiple applications on the same server at the same time. Virtualization can be implemented with virtual machines and containers.

VMs and Containers

Containers and VMs are similar in their goals, i.e., to isolate an application and its dependencies into a self-contained unit that can run anywhere.

Fig. VMs and containers

But the most important difference between containers and VMs is that containers share the host system's kernel with other containers, and this is what makes containers so lightweight. Also, a VM provides hardware virtualization, whereas a container provides operating-system-level virtualization by abstracting the user space. Because containers are lightweight, have minimal memory requirements, and start up quickly, they are widely used for deploying web services.

So far, we have discussed the cloud, virtualization, virtual machines, and containers, and we have seen why containers are used for deploying a web service. But what does a web service provider actually need? The answer: minimum response time for the requests being served, and an optimized number of web server containers to fulfill those dynamic web requests.

Docker

Docker wraps software components into a complete standardized unit that contains everything required to run, i.e., the runtime environment, tools, and libraries. It provides the facility to run an application in an isolated environment called a container, and it guarantees that the software will always run as expected. Docker has a client-server architecture: the Docker daemon (server) receives commands from the Docker client through the CLI or REST APIs. The Docker client and server can be present on the same host machine or on different hosts.
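To make this concrete, here is a minimal sketch of a Dockerfile for such a web server container. It assumes a simple Python HTTP service; the file names `app.py` and `requirements.txt` are hypothetical placeholders for illustration.

```dockerfile
# Hypothetical example: containerizing a simple Python web service.
# app.py and requirements.txt are placeholder names for illustration.
FROM python:3.9-slim

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .

EXPOSE 8000
CMD ["python", "app.py"]
```

With this file in place, `docker build -t webserver .` builds the image and `docker run -d -p 8000:8000 webserver` starts a container serving on port 8000.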

Generally, a web service provider needs more than one server for serving web requests, because web traffic is highly dynamic and requests should not be dropped whenever a server becomes overloaded. In this case, the best option is to deploy multiple web server containers to serve those requests. This gives us the advantage of elasticity, i.e., the ability of a system to add and remove resources to adapt to load variation in real time. It is the dynamic property of cloud computing that allows an operational system to scale on demand. Scalability is a related term: the ability of a system to sustain increasing workloads by making use of additional resources. Another advantage of this setup, i.e., using multiple web server containers, is that a load balancer such as HAProxy can be used, which improves the response time of the requests served by minimizing the overload on any single server.

Fig. HAProxy load balancer

HAProxy stands for High Availability Proxy; it is popular open-source software used as a TCP/HTTP load balancer and proxying solution. It acts as a server load balancer for TCP connections and HTTP requests. In TCP mode, load-balancing decisions are based on the whole connection, whereas in HTTP mode, the decisions are made per HTTP request. It spreads requests across multiple servers to optimize resource usage, maximize throughput, avoid overloading any single resource, and minimize response time. It consists of a frontend and one or more backends: it listens for user requests on the frontend port and forwards them to the backend servers, and in this way it also acts as a reverse proxy. It distributes the requests using algorithms such as round-robin, least-connections, etc.
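The frontend/backend split described above can be sketched as a minimal `haproxy.cfg`. This is an illustrative assumption, not a production config: the server names and IP addresses are placeholders, and the three backend entries stand in for the web server containers.

```
# Hypothetical haproxy.cfg sketch: the frontend listens on port 80 and
# distributes HTTP requests across three backend web servers round-robin.
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_in
    bind *:80
    default_backend web_servers

backend web_servers
    balance roundrobin
    server web1 10.0.0.11:8000 check   # placeholder addresses
    server web2 10.0.0.12:8000 check
    server web3 10.0.0.13:8000 check
```

The `check` keyword enables health checks on each backend server, so HAProxy stops routing to a server that goes down; `balance roundrobin` selects the distribution algorithm mentioned above.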

Deployment of Elastic web service

In the deployment part, we are going to use the Docker Compose tool, which is used to define and run multi-container Docker applications. It uses a YAML file (docker-compose.yml) to configure the application's services through various configuration options. The docker-compose CLI utility allows users to run commands on multiple containers at once, for example, building images, scaling containers, stopping running containers, and more. Using this, we can set the initial number of server containers that we want to deploy and also set resource constraints such as millicores of CPU used (%CPU) and memory limits (%RAM).

After this, we are going to use Docker Swarm for container orchestration. Docker Swarm performs health checks on every container, ensures containers are up on every system, scales the containers up or down depending on the load, and rolls out updates to the containers. The machines participating in a swarm are called nodes; some nodes act as managers and some as workers, and the manager nodes have total control of the swarm.

Finally, we are going to use an auto-scaling algorithm, such as HPA (Horizontal Pod Autoscaler) or time-series forecasting, which could be a reactive or a proactive scaling model, to automatically scale the number of web server containers based on the workload (number of requests) and hence provide elasticity to the web service. This scaling is made easy by Docker precisely because it provides tools like Docker Compose and Docker Swarm for straightforward deployment of the application along with easy scaling of the initial number of containers.
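The setup above can be sketched as a docker-compose.yml for swarm mode. This is a hedged illustration under assumptions: the image names (`webserver:latest` for the web service), replica count, and resource limits are placeholders chosen for the example.

```yaml
# Hypothetical docker-compose.yml sketch for Docker Swarm deployment.
# Image names, replica counts, and limits are illustrative assumptions.
version: "3.8"

services:
  web:
    image: webserver:latest        # placeholder web server image
    ports:
      - "8000:8000"
    deploy:
      replicas: 3                  # initial number of web server containers
      resources:
        limits:
          cpus: "0.50"             # 500 millicores of CPU per container
          memory: 256M             # memory limit per container

  lb:
    image: haproxy:2.4             # load balancer in front of the replicas
    ports:
      - "80:80"
```

With a swarm initialized via `docker swarm init`, this file can be deployed with `docker stack deploy -c docker-compose.yml webapp`, and the replica count can later be changed with `docker service scale webapp_web=5`; an autoscaler would issue such scale commands automatically based on the observed request load.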
