Pods: The Building Blocks of Kubernetes

Roman Glushach
9 min read · Aug 17, 2023

Kubernetes: Pods

A Pod is the smallest unit of computing that you can create and manage in Kubernetes. A Pod is a group of one or more containers that share resources such as storage and networking, together with a specification for how to run those containers.

A pod can be thought of as a logical host for your application. It contains one or more application containers that are relatively tightly coupled and need to work together. For example, a pod can contain a web server and a sidecar container that updates the web content from a shared volume.

A pod can also contain init containers that run before the application containers and perform some initialization tasks. For example, an init container can copy some configuration files or install some dependencies for the main application.

A pod can also have ephemeral containers that are injected for debugging purposes if your cluster supports this feature. For example, an ephemeral container can run a shell or a diagnostic tool to inspect the state of a pod.
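
For instance, assuming your cluster supports ephemeral containers and you have a running Pod named myapp with a container named app (both names are placeholders), you can attach a temporary debug container with kubectl debug:

kubectl debug -it myapp --image=busybox:1.28 --target=app

The --target flag shares the process namespace of the app container, so you can inspect its processes from the debug shell without restarting the Pod.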

Pods are designed to support distributed systems and microservices architectures.

Pods allow you to:

  • Run multiple containers together as a unit of deployment and scaling
  • Isolate containers from each other while sharing resources and network
  • Simplify container management and orchestration
  • Enhance application performance and reliability

Benefits of Using Pods

Pods are the basic building blocks of Kubernetes applications. By grouping containers into Pods, Kubernetes can manage them as a single unit, and provide them with the following benefits:

  • Resource sharing: Containers in the same Pod can share CPU, memory, network, and storage resources, and communicate with each other via localhost
  • Scheduling: Kubernetes schedules Pods onto Nodes based on their resource requirements and availability. The containers in a Pod are always co-located and co-scheduled on the same Node
  • Lifecycle: Kubernetes controls the lifecycle of Pods, such as creating, starting, stopping, updating, and deleting them. Pods have a restart policy that determines how they are handled when one or more containers exit
  • Replication: Kubernetes can replicate Pods to scale an application up or down, or to provide fault tolerance. Workload controllers, such as Deployments or ReplicaSets, ensure that the desired number of Pods is running at any given time

How Do Pods Work?

Pods are created by using pod manifests, which are YAML or JSON files that describe the desired state of your pods. You can create pods directly by using the kubectl create command or by applying a pod manifest file. However, it is recommended to use higher-level workload resources that manage pods for you, such as deployments, jobs, or statefulsets.

Pods are assigned to nodes by a component called the scheduler, which tries to find the best node for each pod based on some criteria such as resource availability, affinity, anti-affinity, taints, tolerations, etc.
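
As a rough sketch, some of these criteria can be expressed directly in the Pod spec. In the example below, the disktype=ssd node label and the dedicated=gpu taint are assumptions about how your nodes are labeled and tainted; adjust them to match your cluster:

apiVersion: v1
kind: Pod
metadata:
  name: scheduled-pod
spec:
  nodeSelector:
    disktype: ssd            # only schedule onto nodes carrying this label
  tolerations:
  - key: "dedicated"         # tolerate the (assumed) dedicated=gpu taint
    operator: "Equal"
    value: "gpu"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx:1.14.2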

Pods run on nodes by using a component called the kubelet, which is an agent that runs on each node and communicates with the Kubernetes API server. The kubelet is responsible for creating, starting, stopping, and deleting the containers in your pods. The kubelet also reports the status of your pods and nodes to the API server.

Pods communicate with each other by using the cluster network, which is a virtual network that spans all the nodes in the cluster and allows pods to reach each other by their IP addresses. The cluster network is implemented by using various plugins that conform to the Container Network Interface (CNI) specification.

Pods communicate with external services by using services, which are abstractions that define a logical set of pods and a policy to access them. Services have an IP address (called cluster IP) and optionally a DNS name that can be used by other pods or external clients to send requests to the pods. Services can also have different types such as NodePort, LoadBalancer, or ExternalName that expose them outside the cluster.
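
For example, a minimal Service of type NodePort that targets Pods labeled app: nginx (the same label used in the Deployment example later in this article) might look like this; the port numbers are illustrative:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx               # route traffic to Pods carrying this label
  ports:
  - port: 80                 # port on the Service's cluster IP
    targetPort: 80           # container port on the Pods
    nodePort: 30080          # port exposed on every node (must be in the NodePort range)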

How to Design Pods?

There are different ways to design and organize containers in a Pod, depending on the use case and the desired behavior.

Single-container Pod

This is the simplest and most common pattern, where a Pod runs a single container. This is useful for stateless applications that do not require coordination or communication with other containers.

For example, a Pod can run a web server, a database, or a worker process.

Multi-container Pod

This is a more advanced pattern, where a Pod runs multiple containers that work together as a single unit. This is useful for stateful or complex applications that require inter-container communication or coordination.

For example, a Pod can run an application container and a sidecar container that provides additional functionality such as logging, monitoring, or proxying.

Init-container Pod

This is a special pattern, where a Pod runs one or more init containers before running the main application container. Init containers are used to perform initialization tasks such as downloading dependencies, setting up configuration, or waiting for other services.

For example, a Pod can run an init container that waits for a database to be ready before running the application container.
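
A minimal sketch of that scenario follows; mydb is assumed to be the DNS name of a database Service in the same namespace, and myapp:1.0 is a placeholder image:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.28
    # block until the database Service name resolves
    command: ['sh', '-c', 'until nslookup mydb; do echo waiting for mydb; sleep 2; done']
  containers:
  - name: myapp
    image: myapp:1.0
    ports:
    - containerPort: 8080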

How to Create and Manage Pods?

Using Pod manifests

You can create Pods directly using the kubectl command or the Kubernetes API. This is useful for testing or debugging purposes, but not recommended for production use, as Pods created this way are not managed by any workload resource and will not be rescheduled if they fail or get deleted.

To create a Pod directly, you can use the following command:

kubectl apply -f pod.yaml

where pod.yaml is a file that contains the Pod manifest:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80

Using workload resources

You can use higher-level Kubernetes resources that create and manage pods for you. For example, you can use a deployment resource to create a set of pods that run your application and update them automatically when you change the image or configuration. You can also use a statefulset resource to create a set of pods that have persistent identities and storage.

To create a Pod indirectly using a Deployment resource, you can use the following command:

kubectl apply -f deployment.yaml

where deployment.yaml is a file that contains the Deployment specification:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
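
A StatefulSet manifest looks similar but adds a stable network identity and, typically, per-Pod storage through volumeClaimTemplates. The sketch below assumes a headless Service named nginx already exists and that your cluster has a default StorageClass:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"       # headless Service assumed to exist
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi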

To manage pods, you can use various kubectl commands or the Kubernetes dashboard:

  • kubectl get pods — list all the pods in your cluster or namespace
  • kubectl describe pod <pod-name> — get detailed information about a specific pod
  • kubectl logs <pod-name> — view the logs of a pod’s containers
  • kubectl exec <pod-name> -- <command> — execute a command in a pod’s container
  • kubectl delete pod <pod-name> — delete a pod from the cluster

Best Practices for Kubernetes Pods

A good way to structure multi-container Pods is to follow a few well-known patterns.

Kubernetes Patterns for Pods

Sidecar Pattern

The sidecar pattern is a way of adding additional functionality to a main container without modifying it. A sidecar container runs alongside the main container in the same Pod, and can perform tasks such as logging, monitoring, proxying, or synchronization. The sidecar container can access the same resources as the main container, such as files, network, or environment variables.

The benefits of the sidecar pattern are:

  • decouples the main container from the auxiliary tasks, making it easier to maintain and update
  • allows you to reuse common sidecar containers across different applications
  • simplifies the configuration and communication between the main container and the sidecar container, since they share the same network and storage

To implement the sidecar pattern, you need to define two containers in your Pod spec: one for the main application and one for the sidecar. You can use the same or different images for the containers, depending on your needs. You also need to ensure that the containers have compatible resource requirements and readiness probes.

An example of the sidecar pattern is using a fluentd container to collect and forward logs from an nginx container. The fluentd container can be configured to send logs to a central logging system, such as Elasticsearch or Splunk. The nginx container does not need to know anything about the logging mechanism, and can focus on serving web requests.
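
A minimal sketch of that setup shares the nginx log directory through an emptyDir volume; the fluentd image tag is an assumption, and the fluentd configuration that actually forwards logs to Elasticsearch or Splunk is omitted here:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-logging
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx      # nginx writes its access and error logs here
  - name: fluentd
    image: fluent/fluentd:v1.16      # assumed tag; mount a fluentd config for real forwarding
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
      readOnly: true
  volumes:
  - name: logs
    emptyDir: {}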

Ambassador Pattern

The ambassador pattern is a way of abstracting the access to external services from a main container. An ambassador container acts as a proxy or adapter for the main container, and can handle tasks such as authentication, encryption, load balancing, or caching. The ambassador container exposes a local interface to the main container, and transparently forwards requests to the external service.

The benefits of the ambassador pattern are:

  • simplifies the configuration and discovery of external services for the main container
  • isolates the main container from the network issues and failures of the external service
  • allows you to change or swap the external service without affecting the main container

To implement the ambassador pattern, you need to define two containers in your Pod spec: one for the main application and one for the ambassador. The ambassador container should expose the same interface as the external service, and forward the requests from the main container to the external service. You also need to configure the environment variables or DNS settings for the main container to point to the ambassador container.

An example of the ambassador pattern is using an envoy container to communicate with a cloud service, such as AWS S3 or Google Cloud Storage. The envoy container can handle the authentication and authorization with the cloud provider, and provide a consistent interface to the main container. The main container does not need to know anything about the cloud service, and can use a simple HTTP client to access it.
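
A rough sketch of this layout follows. The myapp:1.0 image, the STORAGE_ENDPOINT variable, and the envoy-config ConfigMap (which would hold the Envoy routing and credential configuration) are all assumptions for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-ambassador
spec:
  containers:
  - name: app
    image: myapp:1.0                 # placeholder; talks to the local ambassador instead of the cloud API
    env:
    - name: STORAGE_ENDPOINT
      value: "http://localhost:9000"
  - name: ambassador
    image: envoyproxy/envoy:v1.27.0  # assumed tag
    ports:
    - containerPort: 9000
    volumeMounts:
    - name: envoy-config             # ConfigMap with the actual proxy configuration, assumed to exist
      mountPath: /etc/envoy
  volumes:
  - name: envoy-config
    configMap:
      name: envoy-config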

Adapter Pattern

The adapter pattern is a way of standardizing the output of a main container for consumption by other components. An adapter container transforms or enriches the output of the main container, and exposes it in a uniform way. The adapter container can perform tasks such as formatting, filtering, aggregating, or converting data.

The benefits of the adapter pattern are:

  • enables interoperability and integration between different applications and systems
  • reduces the complexity and duplication of code for handling different output formats
  • allows you to apply consistent policies and rules to the output data

To implement the adapter pattern, you need to define two containers in your Pod spec: one for the main application and one for the adapter. The adapter container should read the output from a shared volume or a pipe, and write the transformed output to another destination. You also need to ensure that the containers run in a sequential order, using init containers or postStart hooks.

An example of the adapter pattern is using a prometheus exporter container to expose metrics from an application container. The prometheus exporter container can scrape metrics from the application container via HTTP or other protocols, and expose them in a format that prometheus can understand. The application container does not need to know anything about the Prometheus format, and can use any instrumentation library to generate metrics.
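
A minimal sketch of this pairing follows; it assumes the nginx configuration exposes a stub_status endpoint at /stub_status, and the exporter image tag and flag syntax may vary by version:

apiVersion: v1
kind: Pod
metadata:
  name: nginx-with-exporter
spec:
  containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
    - containerPort: 80
  - name: exporter
    image: nginx/nginx-prometheus-exporter:0.11.0   # assumed tag
    args:
    - "-nginx.scrape-uri=http://localhost:80/stub_status"
    ports:
    - containerPort: 9113            # Prometheus scrapes /metrics on this port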

Conclusion

Kubernetes Pods are the smallest deployable units of computing that you can create and manage in Kubernetes. They provide benefits such as horizontal scaling, fault tolerance, resource efficiency, isolation, portability, and compatibility. You can create and manage Pods with kubectl commands, YAML manifests, workload resources, or Helm charts, and you can tune their scheduling and behavior with features such as labels, annotations, init containers, sidecar containers, ephemeral containers, resource requests and limits, node and Pod affinity and anti-affinity, taints and tolerations, and Pod Security admission.
