Dock and Roll: Master Docker Automation and Container Deployment
Docker Automation refers to automating the deployment of your containers using tools such as Docker Compose, Docker Swarm, or Kubernetes. You define the desired state of your application in a configuration file, and the tooling takes care of the rest.
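As a minimal sketch of that declarative style, a docker-compose.yml for a single-service application might look like this (the image name and port mapping are placeholders):
version: '3'
services:
  app:
    image: <your-username>/<your-image-name>
    ports:
      - '8080:8080'
Running docker compose up -d against this file asks Docker to create whatever is needed to reach that state.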
PaaS (Platform as a Service) Options for Deploying Containers
PaaS is a cloud computing model that streamlines the deployment and management of containers by abstracting away the underlying infrastructure, allowing developers to focus on creating and running their applications.
Kubernetes (K8s)
Kubernetes is a system that orchestrates the lifecycle of containers on a cluster of nodes. A node is a physical or virtual machine that runs one or more containers. A container is a lightweight package of software that contains everything it needs to run: code, runtime, libraries, configuration, etc. Docker is one of the most popular tools for creating and running containers.
Kubernetes allows you to define your desired state for your containerized applications using declarative configuration files. For example, you can specify how many replicas of a container you want to run, what ports they should expose, what resources they should consume, etc. Kubernetes then ensures that your cluster matches your desired state by creating, updating, or deleting containers as needed.
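As a minimal sketch, a Deployment manifest along these lines declares three replicas of an nginx container, the port each one exposes, and a cap on the resources each may consume (the names and limits are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - containerPort: 80
          resources:
            limits:
              cpu: 500m
              memory: 128Mi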
Kubernetes consists of two main components:
- control plane: responsible for managing the cluster state and communicating with the worker nodes
- worker nodes: responsible for running the pods that do the work. Pods are usually single instances of the application itself, and containers run inside pods
The control plane consists of several components:
- API server: The main entry point for all the Kubernetes operations. It exposes a RESTful API that allows users and other components to interact with the cluster
- etcd: A distributed key-value store that stores the cluster configuration and state
- scheduler: A component that assigns pods (groups of containers) to nodes based on various factors, such as resource availability, affinity, anti-affinity
- controller manager: A component that runs various controllers that monitor and reconcile the cluster state. For example, the replication controller ensures that the desired number of pods are running for each application
Each worker node, in turn, runs the following components:
- kubelet: An agent that runs on each node and communicates with the control plane. It manages the pods and containers on its node, as well as the node's health and status
- kube-proxy: A network proxy that runs on each node and handles the service discovery and load balancing for the pods
- container runtime: The software that runs the containers on the nodes. Kubernetes supports various container runtimes, such as Docker, containerd, CRI-O
Kubernetes in Action
To deploy Docker containers with Kubernetes, you need a Kubernetes cluster and a Docker registry. A Kubernetes cluster is a set of nodes that run the Kubernetes components and your applications. A Docker registry is a service that stores and distributes your container images.
There are many ways to create a Kubernetes cluster, such as using cloud providers, local tools, or custom solutions. For this tutorial, we will use Minikube, which is a tool that runs a single-node Kubernetes cluster on your local machine.
You also need to have kubectl, which is a command-line tool that interacts with the Kubernetes cluster.
To use a Docker registry, you can either use a public one, such as Docker Hub, or set up your own private one. For this tutorial, we will use Docker Hub, which is a free service that hosts public and private repositories.
Start Minikube
minikube start
This will create and start a virtual machine that runs the Kubernetes cluster.
Build your Docker image by running the following command in the directory where your Dockerfile is located
docker build -t <your-username>/<your-image-name> .
This will create an image from your Dockerfile and tag it with your username and image name.
Push your Docker image to Docker Hub
docker push <your-username>/<your-image-name>
This will upload your image to your repository on Docker Hub.
Create a deployment manifest file
kubectl create deployment <your-deployment-name> --image=<your-username>/<your-image-name> --dry-run=client -o yaml > deployment.yaml
This will generate a YAML file that defines a deployment object with one replica of your pod using your image.
Edit the deployment manifest file as needed. You can change the number of replicas, the image version, the update strategy, and other parameters. You can also add labels, annotations, environment variables, volumes, and other configurations to your pod spec.
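For example, after editing, the spec section of deployment.yaml might look something like this (the replica count and the APP_ENV environment variable are illustrative additions, not required values):
spec:
  replicas: 3
  selector:
    matchLabels:
      app: <your-deployment-name>
  template:
    metadata:
      labels:
        app: <your-deployment-name>
    spec:
      containers:
        - name: <your-deployment-name>
          image: <your-username>/<your-image-name>
          env:
            - name: APP_ENV
              value: production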
Apply the deployment manifest file
kubectl apply -f deployment.yaml
This will create or update the deployment object on your cluster.
Check the status of your deployment
kubectl get deployment <your-deployment-name>
This will show you the current state of your pods and replicas.
Expose your deployment as a service
kubectl expose deployment <your-deployment-name> --type=NodePort --port=<your-port-number>
This will create a service object that maps a port on each node to a port on your pod.
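If you prefer to keep the service definition in source control, a rough YAML equivalent of the command above would be the following (the selector assumes the app label that kubectl create deployment sets by default):
apiVersion: v1
kind: Service
metadata:
  name: <your-deployment-name>
spec:
  type: NodePort
  selector:
    app: <your-deployment-name>
  ports:
    - port: <your-port-number>
      targetPort: <your-port-number>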
Get the URL of your service
minikube service <your-deployment-name> --url
This will show you the URL where you can access your application from your browser.
Docker Swarm
Docker Swarm is a tool that allows you to manage a cluster of Docker containers across multiple nodes. It provides features such as service discovery, load balancing, scaling, and rolling updates.
Docker Swarm in Action
To start using Docker Swarm, you need to have at least one node that acts as a manager and one or more nodes that act as workers.
The manager node is responsible for orchestrating the cluster and maintaining the desired state of the services.
The worker nodes are responsible for running the containers and reporting their status to the manager.
To create a manager node, you can use the command docker swarm init. This will generate a token that you can use to join other nodes to the cluster. For example, if your manager node has the IP address 192.168.0.1, you can run the following command on the manager node:
docker swarm init --advertise-addr 192.168.0.1
To add a worker, take the docker swarm join command that docker swarm init prints and run it on any node that you want to join as a worker. You can also use the docker swarm join-token manager command to generate a token for joining as a manager.
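As a sketch, the join command looks like this (the token is a placeholder; docker swarm init prints the real one, and docker swarm join-token worker reprints it later):
docker swarm join --token <worker-token> 192.168.0.1:2377
Port 2377 is the default port that Swarm managers listen on for cluster management traffic.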
Once you have your nodes joined to the cluster, you can start deploying Docker containers with Docker Swarm. To do this, you need to create a service definition file that specifies the details of your service, such as the image, ports, replicas, and networks.
For example, you can create a file called web.yml with the following content:
version: '3'
services:
  web:
    image: nginx
    ports:
      - '80:80'
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
    networks:
      - webnet
networks:
  webnet:
This file defines a service called web that uses the nginx image and exposes port 80. It also specifies that there should be 3 replicas of the service running at any time, and that they should be updated one by one with a delay of 10 seconds between each update.
To deploy this service, you can use the command
docker stack deploy -c web.yml web
This will create a stack called web that contains the service defined in the web.yml file. You can check the status of your service by using the command docker service ls.
To update your service, you can simply modify the web.yml file and run the same command again:
docker stack deploy -c web.yml web
This will trigger a rolling update of your service according to the update configuration specified in the file.
You can also use the command docker service update to change some parameters of your service without modifying the file.
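For example, the following commands scale the service to five replicas and then roll it to a newer image, one container at a time per the update_config above (the replica count and tag are illustrative):
docker service update --replicas 5 web_web
docker service update --image nginx:1.25 web_web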
To remove your service, you can use the command docker stack rm web. This will stop and remove all the containers belonging to your service. You can also use the command docker service rm web_web to remove only the service without removing the stack.
Nomad
Nomad is a workload orchestration tool from HashiCorp designed to make it easy to deploy, manage, and scale containers and non-containerized applications, both on-premises and in the cloud.
Nomad is a single binary that runs as an agent on each node in the cluster. It can run in two modes: server and client. The servers are responsible for managing the cluster state, scheduling jobs, and coordinating with other servers. The clients are responsible for executing the tasks assigned by the servers, such as running Docker containers.
Nomad supports a wide range of workloads beyond containers, including Windows applications, Java applications, virtual machines, and more. It also integrates seamlessly with other HashiCorp tools, such as Consul for service discovery and Vault for secrets management.
Nomad uses a declarative way of specifying jobs, which are the units of work that Nomad manages. A job consists of one or more task groups, which are collections of tasks that should be co-located on the same node. A task is the smallest unit of work that Nomad executes, such as running a Docker container.
Nomad uses HCL (HashiCorp Configuration Language) to write job specification files (aka jobspecs), which are similar to JSON and YAML.
Here is an example of a simple jobspec that runs a web application using Docker:
job "web" {
group "web" {
task "webapp" {
driver = "docker"
config {
image = "ghcr.io/org/project/app:latest"
}
}
}
}
Nomad in Action
To run Nomad, you need to install the Nomad binary on each node in the cluster. You can download it from the official website or use a package manager such as Homebrew or Chocolatey.
To start a Nomad server, you need to create a configuration file that specifies the server mode and the bootstrap expectation (the number of servers required to form a quorum), for example in a file called server.hcl:
server {
  enabled          = true
  bootstrap_expect = 3
}
Then you can run nomad agent -config server.hcl to start the server agent. You need to repeat this process on at least three nodes to form a cluster.
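Note that a Nomad agent also needs a data directory to persist its state; a slightly fuller server.hcl sketch, with an illustrative path, looks like this:
data_dir = "/opt/nomad/data"

server {
  enabled          = true
  bootstrap_expect = 3
}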
To start a Nomad client, you need to create a configuration file that specifies the client mode and the address of one or more servers, for example in a file called client.hcl:
client {
  enabled = true
  servers = ["server1:4647", "server2:4647", "server3:4647"]
}
Then you can run nomad agent -config client.hcl to start the client agent. You can repeat this process on as many nodes as you want to join the cluster as clients. You can use the nomad server members and nomad node status commands to check the status of the cluster.
To deploy Docker containers with Nomad, you need to write a jobspec that specifies the Docker driver and the image name in the config stanza of the task. You can also specify other options such as ports, environment variables, and volumes:
job "web" {
group "web" {
count = 2
network {
port "http" {
static = 8080
to = 8080
}
}
task "webapp" {
driver = "docker"
config {
image = "ghcr.io/org/project/app:latest"
ports = ["http"]
}
env {
APP_NAME = "webapp"
}
}
}
}
To submit this job to Nomad, you can use the command
nomad job run web.hcl
You can use the nomad job status web and nomad alloc status <ID> commands to check the status of the job and its allocations (the instances of the task group).
You can also use the nomad alloc logs <ID> and nomad alloc exec <ID> <command> commands to access the logs and execute commands inside the container.
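Putting these together, a typical inspection session might look like this (the allocation ID is a placeholder that nomad job status web will list, and the task name matches the jobspec above):
nomad job status web
nomad alloc logs -f <ID>
nomad alloc exec -task webapp <ID> /bin/sh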
Conclusion
Docker Automation is a powerful approach that simplifies the deployment of containers. By automating your deployments with tools like Kubernetes, Docker Swarm, or Nomad, you can:
- Package your applications into reusable and portable units
- Build and push your images to a secure registry
- Deploy and manage your applications on any cluster
- Monitor and upgrade your applications with ease