Ways to Provision Kubernetes Resources: The Science Behind the Magic

Roman Glushach
13 min read · Sep 22, 2023



Provisioning Kubernetes resources is an essential aspect of deploying and managing applications in a Kubernetes environment.

Kubernetes resource provisioning is the process of creating and managing the resources that your applications need to run on a Kubernetes cluster. These resources include Pods, the smallest deployable units in Kubernetes; Services, which provide network access to your Pods; Volumes, which provide persistent storage for your Pods; and other objects such as ConfigMaps, Secrets, Ingresses, and more.

Kubernetes resource provisioning is driven by two main concepts:

  • Declarative configuration: you specify the desired state of your resources using YAML or JSON files, called manifests. For example, you can define a Deployment manifest that describes the name, image, ports, and environment variables of your workload. You can then apply this manifest to your cluster using the kubectl command-line tool or the Kubernetes API (a minimal example follows this list)
  • Reconciliation: Kubernetes constantly compares the desired state of your resources with the actual state of your cluster. If there is any difference between the two states, Kubernetes resolves it by creating, updating, or deleting resources as needed. For example, if you apply a Deployment manifest that specifies two replicas, but only one replica is running on your cluster, Kubernetes will create another replica to match your desired state. This process is performed by various controllers that run inside the Kubernetes control plane
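
To make both ideas concrete, here is a minimal sketch of a Deployment manifest and the commands to apply it; the names and image are placeholders. Once applied, the Deployment controller keeps reconciling the cluster toward the two requested replicas:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: demo-app                  # placeholder name
    spec:
      replicas: 2                     # desired state: two replicas
      selector:
        matchLabels:
          app: demo-app
      template:
        metadata:
          labels:
            app: demo-app
        spec:
          containers:
            - name: demo-app
              image: nginx:1.25       # placeholder image
              ports:
                - containerPort: 80

    kubectl apply -f deployment.yaml    # declare the desired state
    kubectl get deployment demo-app     # watch reconciliation bring it to 2/2 replicas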

Cluster API

Cluster API is a Kubernetes sub-project that allows users to provision and manage Kubernetes clusters themselves using a declarative configuration format. It provides a set of APIs, implemented as custom resources such as Cluster, Machine, MachineSet, MachineDeployment, and MachineHealthCheck, that enable users to create, update, and manage the lifecycle of clusters and the machines they run on.

The Cluster API is designed to be extensible: infrastructure, bootstrap, and control-plane providers plug into the same declarative API, so the same workflow can target AWS, Azure, GCP, vSphere, bare metal, and other platforms. This makes it well suited to managing complex, custom Kubernetes deployments.

How Does It Work

Cluster API works by using a declarative configuration format, which means that users define the desired state of their Kubernetes cluster, and the Cluster API controller works to ensure that the cluster matches that desired state.

High-level overview of how Cluster API works:

  • Users write Cluster API manifests (typically YAML), such as Cluster, KubeadmControlPlane, and MachineDeployment resources, that define the desired state of a workload cluster, and apply them to a management cluster (a minimal example follows this list)
  • The Cluster API controllers running in the management cluster read these resources and compare them to the current state of the cluster
  • The controller identifies the differences between the desired state and the current state and creates a plan to bring the cluster into the desired state
  • The controller executes the plan and updates the cluster with the necessary changes
  • The controller continues to monitor the cluster and makes adjustments as needed to ensure that it remains in the desired state
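
The sketch below shows what such a manifest can look like, using the Docker infrastructure provider (CAPD) and the v1beta1 API; the cluster name and pod CIDR are placeholders, and a real deployment would also include the referenced KubeadmControlPlane and DockerCluster objects:

    apiVersion: cluster.x-k8s.io/v1beta1
    kind: Cluster
    metadata:
      name: demo-cluster                        # placeholder name
    spec:
      clusterNetwork:
        pods:
          cidrBlocks: ["192.168.0.0/16"]        # placeholder CIDR
      controlPlaneRef:
        apiVersion: controlplane.cluster.x-k8s.io/v1beta1
        kind: KubeadmControlPlane
        name: demo-cluster-control-plane
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
        kind: DockerCluster
        name: demo-cluster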

Features

  • Declarative Configuration: Cluster API uses a declarative configuration format, which means that users define the desired state of their cluster, and the controller works to ensure that the cluster matches that desired state. This makes it easier to manage complex, custom Kubernetes deployments
  • Extensibility: Cluster API is built around pluggable providers, so users can add support for new infrastructure platforms or define their own custom resources and manage them through the same API
  • Automatic Rollouts: Cluster API provides automatic rollouts, which means that the controller automatically applies changes to the cluster without requiring manual intervention. This reduces the risk of human error and ensures that the cluster is always up-to-date
  • Rollbacks: Cluster API provides rollback capabilities, which means that users can easily roll back changes to a previous state in case of errors or issues
  • Dry-Run: because Cluster API objects are ordinary Kubernetes resources, changes can be previewed with kubectl's server-side dry-run without actually applying them, which is useful for testing and validation (see the commands after this list)
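
As a small illustration of the dry-run and rollback points above, assuming your Cluster API manifests live in a cluster.yaml file kept under version control:

    kubectl apply --dry-run=server -f cluster.yaml   # preview the changes without applying them
    kubectl apply -f cluster.yaml                    # apply the new desired state
    git checkout HEAD~1 -- cluster.yaml              # restore the previous revision of the manifest
    kubectl apply -f cluster.yaml                    # re-apply it to roll the cluster back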

Use Cases

  • Provisioning new Kubernetes clusters: Cluster API can be used to create new Kubernetes clusters from scratch, including configuring the underlying infrastructure, networking, and storage (a typical clusterctl workflow is sketched after this list)
  • Upgrading existing Kubernetes clusters: Cluster API can be used to upgrade existing Kubernetes clusters to the latest version, including upgrading the control plane and worker nodes
  • Migrating Kubernetes clusters: Cluster API can be used to migrate Kubernetes clusters between different environments, such as on-premises to cloud or cloud to cloud
  • Managing cluster resources declaratively: Cluster API can be used to manage the resources that make up a cluster, such as MachineDeployments, MachineSets, and MachineHealthChecks, using the same declarative configuration format
  • Automating cluster management tasks: Cluster API can be used to automate repetitive tasks such as scaling node pools, replacing unhealthy machines, and rolling out Kubernetes version upgrades
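
In practice, the clusterctl CLI drives much of this workflow. The sketch below assumes an AWS infrastructure provider and uses placeholder names and versions:

    # install the Cluster API components and the AWS infrastructure provider
    clusterctl init --infrastructure aws

    # generate manifests for a new workload cluster
    clusterctl generate cluster demo-cluster \
      --kubernetes-version v1.28.3 \
      --control-plane-machine-count 3 \
      --worker-machine-count 3 > demo-cluster.yaml

    # apply them to the management cluster and let the controllers do the provisioning
    kubectl apply -f demo-cluster.yaml

    # fetch the kubeconfig of the new cluster once it is ready
    clusterctl get kubeconfig demo-cluster > demo-cluster.kubeconfig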

Terraform

Terraform is an open-source IaC tool developed by HashiCorp, a company known for its suite of DevOps tools. Terraform allows developers and operators to define infrastructure as code, which can then be used to provision and manage resources across various environments.

Terraform supports a wide range of cloud and on-premises environments, including AWS, Azure, Google Cloud, and OpenStack, as well as container orchestration platforms like Kubernetes.

How Does It Work

Terraform works by converting your infrastructure configuration file into a set of API calls that are sent to the respective cloud or on-premises environment. The tool uses a declarative syntax, which means that you describe what you want your infrastructure to look like, rather than how to create it.

High-level overview of how Terraform works:

  • Configuration File: You create a configuration file that defines your infrastructure using Terraform’s syntax. This file includes resources, such as VMs, networks, and storage, that are required for your application (a minimal example follows this list)
  • Provider: Terraform uses a provider to interact with the cloud or on-premises environment. A provider is a plugin that knows how to communicate with a specific cloud or environment, such as AWS, Azure, or Google Cloud. The provider translates the Terraform configuration file into API calls that can be understood by the target environment
  • Resource: In Terraform, resources are the building blocks of your infrastructure. A resource can be a VM, a network, a storage bucket, or any other infrastructure component that you want to manage. Each resource has a unique ID, and Terraform uses this ID to identify and manage the resource
  • State: Terraform maintains a state file that keeps track of the resources created by Terraform. The state file is used to compare the current state of the infrastructure with the desired state defined in the configuration file. This helps Terraform to determine what changes need to be made to the infrastructure
  • Plan: Before making any changes to the infrastructure, Terraform creates a plan that outlines the necessary changes. The plan includes a list of resources that need to be created, updated, or deleted
  • Apply: Once you’ve reviewed and approved the plan, Terraform applies the changes to the infrastructure. The tool uses the provider to make the necessary API calls to create, update, or delete resources
  • Refresh: After applying the changes, Terraform refreshes the state file to reflect the new state of the infrastructure. This ensures that the state file is always up-to-date and accurate
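
Here is a minimal sketch of that workflow using the hashicorp/kubernetes provider to manage Kubernetes resources; the kubeconfig path, namespace, and image are placeholders:

    # main.tf
    terraform {
      required_providers {
        kubernetes = {
          source = "hashicorp/kubernetes"
        }
      }
    }

    provider "kubernetes" {
      config_path = "~/.kube/config"    # assumes a local kubeconfig
    }

    resource "kubernetes_namespace" "demo" {
      metadata {
        name = "demo"
      }
    }

    resource "kubernetes_deployment" "nginx" {
      metadata {
        name      = "nginx"
        namespace = kubernetes_namespace.demo.metadata[0].name   # implicit dependency on the namespace
      }
      spec {
        replicas = 2
        selector {
          match_labels = { app = "nginx" }
        }
        template {
          metadata {
            labels = { app = "nginx" }
          }
          spec {
            container {
              name  = "nginx"
              image = "nginx:1.25"      # placeholder image
            }
          }
        }
      }
    }

The standard commands then map directly onto the steps above:

    terraform init     # download the provider
    terraform plan     # show the changes Terraform intends to make
    terraform apply    # execute the plan and update the state file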

Features

  • Declarative Syntax: Terraform uses a declarative syntax, which means that you describe what you want your infrastructure to look like, rather than how to create it. This makes it easier to manage complex infrastructure and reduces the risk of errors
  • Immutable Infrastructure: Terraform encourages immutable infrastructure, which means that you never modify existing resources. Instead, you create new resources and version them appropriately. This approach makes it easier to manage changes and maintain a consistent state of the infrastructure
  • Versioning: Terraform tracks your infrastructure in a state file whose serial number increments with every change, and remote backends such as Terraform Cloud keep historical state versions. Combined with keeping the configuration itself in source control, this makes it possible to review the history of changes and return to an earlier configuration if necessary
  • Modules: Terraform modules are reusable infrastructure components that you can share across different projects. Modules can include resources, providers, and configuration files. This allows you to create a library of commonly used infrastructure components that you can reuse across different projects (a brief example follows this list)
  • Dependencies: Terraform allows you to define dependencies between resources. This means that Terraform will create resources in a specific order, ensuring that dependent resources are created before others
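
As a brief sketch of how modules and dependencies fit together, the block below consumes the public terraform-aws-modules/eks module to provision an EKS cluster; the version constraint and input values are placeholders, and the exact set of required inputs depends on the module version:

    module "eks" {
      source  = "terraform-aws-modules/eks/aws"
      version = "~> 20.0"                        # assumed version constraint

      cluster_name    = "demo-eks"               # placeholder
      cluster_version = "1.29"                   # placeholder
      vpc_id          = module.vpc.vpc_id        # depends on a VPC module defined elsewhere
      subnet_ids      = module.vpc.private_subnets
    }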

Use Cases

  • Multi-Cloud Deployment: Terraform enables infrastructure provisioning across multiple clouds, increasing fault-tolerance, resilience, and flexibility while reducing vendor lock-in and costs. It can deploy a federated Kubernetes cluster across AWS and Azure, and handle cross-cloud dependencies
  • Application Infrastructure Deployment, Scaling, and Monitoring: Terraform efficiently deploys, scales, and monitors application infrastructure. It manages resources for each tier of your application architecture and uses modules to reuse and share common configurations. It can deploy a demo Nginx application to a Kubernetes cluster with Helm (a sketch follows this list), install the Datadog agent for monitoring, and implement blue-green or canary deployments for load balancers
  • Self-Service Clusters: Terraform helps create a self-service infrastructure model, empowering product teams to manage their own infrastructure. It uses Terraform Cloud or Enterprise for collaboration, governance, and automation. It codifies standards and policies for deploying and managing services in your organization. It can create a self-service portal for developers to request and provision AWS EC2 instances or Azure VMs, and integrate with tools like Vault or Consul for secure access and configuration management
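
For the Nginx-with-Helm case, a minimal sketch using the hashicorp/helm provider (2.x block syntax) might look like the following; the release name, repository, chart, and value are placeholders based on the public Bitnami chart:

    provider "helm" {
      kubernetes {
        config_path = "~/.kube/config"   # assumes a local kubeconfig
      }
    }

    resource "helm_release" "nginx" {
      name             = "demo-nginx"
      repository       = "https://charts.bitnami.com/bitnami"
      chart            = "nginx"
      namespace        = "web"
      create_namespace = true

      set {
        name  = "replicaCount"
        value = "2"
      }
    }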

Helm

Helm is a tool for managing Kubernetes applications using charts, which are packages of pre-configured Kubernetes resources.

How Does It Work

Helm (since version 3) has a client-only architecture: it interacts directly with the Kubernetes API server without relying on an in-cluster server component such as Tiller, which Helm 2 required. This simplifies the installation and security of Helm and reduces resource consumption on the cluster.

To install a Helm chart, the Helm client communicates with the Helm repository to retrieve the chart and its dependencies. The Helm client then creates a Kubernetes deployment, service, and other resources required by the application.

The deployment process includes the following steps:

  • Chart Validation: The Helm client validates the Helm chart to ensure it meets the requirements of the Kubernetes cluster. The validation process includes checking the chart’s dependencies, configuration, and other requirements
  • Dependency Resolution: The Helm client resolves the dependencies of the Helm chart. It retrieves the dependencies from the Helm repository and installs them on the Kubernetes cluster
  • Resource Creation: The Helm client creates the necessary Kubernetes resources, such as deployments, services, and config maps, to run the application. It uses the information in the Helm chart to create the resources
  • Installation: The Helm client installs the application as a named release on the Kubernetes cluster. It renders the chart’s templates with the supplied values, creates the resulting resources, and records the release so it can later be upgraded or rolled back (a typical CLI workflow is shown after this list)
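
A typical end-to-end workflow looks like this; the repository, chart, release name, and value are placeholders based on the public Bitnami Nginx chart:

    # add a chart repository and refresh the local index
    helm repo add bitnami https://charts.bitnami.com/bitnami
    helm repo update

    # install the chart as a release named "my-nginx"
    helm install my-nginx bitnami/nginx --namespace web --create-namespace --set replicaCount=2

    # upgrade the release with new values, then inspect its history
    helm upgrade my-nginx bitnami/nginx -n web --set replicaCount=3
    helm history my-nginx -n web

    # roll back to the first revision if something goes wrong
    helm rollback my-nginx 1 -n web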

Features

  • Package Management: Helm provides a package manager for Kubernetes that allows you to easily install, manage, and upgrade applications. It uses a package format called a Helm chart to install an application
  • Dependency Management: Helm allows you to manage dependencies between charts. It ensures that the dependencies are installed and configured correctly before installing the application
  • Configuration Management: Helm provides a way to manage the configuration of an application. It allows you to define the configuration in a chart’s values and manage it across different environments
  • Rollouts and Rollbacks: Helm provides a way to manage the rollout and rollback of applications. It allows you to create a new release and roll it out to the cluster, and also allows you to roll back to a previous revision of the release if necessary
  • Support for Multiple Environments: Helm supports multiple environments, including development, staging, and production. It allows you to manage the configuration of an application across different environments, typically by supplying a different values file per environment
  • Integration with Kubernetes Features: Helm integrates with Kubernetes features such as Deployments, Services, and Pods. It uses these resources to manage the lifecycle of an application
  • Extensibility: Helm is extensible and allows you to create custom plugins. It also supports third-party plugins and chart repositories

Use Cases

  • Loading secrets needed to pull an image from a private registry before the main service is deployed, using a pre-install chart hook
  • Performing DB migrations before updating the service (see the hook sketch after this list)
  • Cleaning up external resources after the service is deleted, using a post-delete hook
  • Checking for the prerequisites of a service before the service is deployed
  • Standardizing deployments across different microservices by using a common chart template
  • Simplifying deployments of applications that require multiple resources such as deployments, services, configmaps, and ingresses by using a single chart
  • Reusing configurations that someone has already made for common applications such as Grafana, Prometheus, or Nginx by installing charts from public or private repositories
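
The lifecycle-related use cases above rely on Helm’s hook annotations. Below is a minimal sketch of a pre-upgrade migration Job template; the image, command, and values key are hypothetical:

    # templates/db-migrate-job.yaml
    apiVersion: batch/v1
    kind: Job
    metadata:
      name: "{{ .Release.Name }}-db-migrate"
      annotations:
        "helm.sh/hook": pre-install,pre-upgrade
        "helm.sh/hook-weight": "0"
        "helm.sh/hook-delete-policy": hook-succeeded
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: migrate
              image: "{{ .Values.migrations.image }}"   # hypothetical value
              command: ["./run-migrations.sh"]          # hypothetical entrypoint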

Kustomize

Kustomize is an open-source tool created by the Kubernetes community to address the need for a simple, template-free way to customize Kubernetes manifests. It is built into kubectl (as kubectl apply -k) and works with the Kubernetes API, providing a powerful and flexible way to automate the creation, update, and deletion of resources in a Kubernetes cluster.

Kustomize is driven by a simple, declarative YAML file called a kustomization (kustomization.yaml). It lists the plain Kubernetes manifests to include and the customizations to layer on top of them: patches, name prefixes and suffixes, common labels and annotations, generated ConfigMaps and Secrets, and image overrides. Because the customizations are kept separate, the original manifests never have to be edited.

How Does It Work

  • Create a kustomization: a kustomization.yaml file lists the resources (ordinary Kubernetes YAML files) you want to manage, along with the patches, generators, and metadata to apply to them (a minimal base/overlay sketch follows this list)
  • Build and apply: you render and apply the kustomization with kubectl apply -k <dir>, or with kustomize build <dir> piped into kubectl apply -f -. The tool merges the base manifests with the customizations and sends the result to the Kubernetes API
  • Bases and overlays: a kustomization can reference other kustomizations as bases. Overlays inherit everything from the base and add their own changes, which keeps shared configuration in one place
  • Generators: configMapGenerator and secretGenerator create ConfigMaps and Secrets from files or literals and append a content hash to their names, so workloads that reference them are rolled out automatically when the data changes
  • Patches: strategic-merge and JSON 6902 patches modify specific fields of the base resources, such as replica counts, image tags, or resource limits, without copying or editing the base files
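
A minimal sketch of a base plus a production overlay; the file names, labels, and image tag are placeholders:

    # base/kustomization.yaml
    resources:
      - deployment.yaml
      - service.yaml

    # overlays/production/kustomization.yaml
    resources:
      - ../../base
    namePrefix: prod-
    commonLabels:
      env: production
    images:
      - name: my-app             # placeholder image name used in the base Deployment
        newTag: "1.4.2"
    patches:
      - path: replica-patch.yaml
        target:
          kind: Deployment
          name: my-app

Running kubectl apply -k overlays/production renders the base with the production customizations applied and sends the result to the cluster.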

Features

  • Declarative, Template-Free Configuration: Kustomize uses a declarative approach; you describe what the final manifests should look like rather than scripting how to produce them. This makes custom configuration easy to manage and maintain, because there is no templating language to learn
  • Composition and Reuse: kustomizations can include other kustomizations as bases, so related resources are managed together and shared configuration is defined once rather than duplicated across environments
  • Built-In Generators: configMapGenerator and secretGenerator produce ConfigMaps and Secrets with content-hashed names, which helps to prevent workloads from silently running against stale configuration
  • Rollouts on Configuration Change: because generated ConfigMap and Secret names change when their content changes, workloads that reference them are rolled out automatically, which helps you update configuration without downtime
  • Support for Different Resource Types: Kustomize works with any Kubernetes resource type, including Deployments, Services, Pods, and custom resources, so it can manage a wide range of objects in a cluster
  • Integration with kubectl and the Kubernetes API: Kustomize is built into kubectl (kubectl apply -k), and its output is plain YAML that can be sent to the Kubernetes API or consumed by other tools. This makes it easy to integrate Kustomize with the rest of the Kubernetes ecosystem

Use Cases

  • Customizing Off-the-Shelf Applications: Kustomize allows you to customize third-party manifests, such as upstream YAML or rendered Helm charts, by overlaying patches on the base configuration. This avoids the need to alter the original files
  • Managing Multiple Environments: Kustomize can manage different environments (development, staging, production) with varying configurations. It creates overlays for each environment that inherit from a common base and override specific settings (a typical layout is shown after this list)
  • Composing Multiple Applications: Kustomize can combine multiple applications into a single one. This is useful for deploying sets of applications that form a complete solution, such as a microservice architecture or a machine learning pipeline
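
A typical repository layout for the multi-environment case looks something like this; the directory names are only a convention:

    app/
    ├── base/
    │   ├── kustomization.yaml
    │   ├── deployment.yaml
    │   └── service.yaml
    └── overlays/
        ├── dev/
        │   └── kustomization.yaml
        ├── staging/
        │   └── kustomization.yaml
        └── production/
            └── kustomization.yaml

Each overlay’s kustomization.yaml references ../../base and adds only the settings that differ for that environment.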

Tips for Making a Decision

Cluster API

  • Cluster API is a Kubernetes-native API for provisioning and managing the lifecycle of Kubernetes clusters themselves
  • It provides a unified way to manage and orchestrate multiple Kubernetes clusters, making it easier to operate clusters at scale
  • Cluster API is designed to work seamlessly with Kubernetes and takes advantage of its strengths, such as its scalability, reliability, and security features
  • Cluster API supports a wide range of use cases, including managing multi-cluster fleets, creating clusters for different environments, and automating cluster lifecycle tasks such as upgrades and scaling

Terraform

  • Terraform is an infrastructure as code (IaC) tool that allows you to define and manage your infrastructure using human-readable configuration files
  • Terraform supports a wide range of cloud and on-premises providers, including AWS, Azure, Google Cloud, and more
  • Terraform provides a declarative way of defining infrastructure, which means that you describe what you want to create and Terraform takes care of creating it for you
  • Terraform can be used to manage Kubernetes clusters, but it is not specifically designed for Kubernetes and may not take full advantage of Kubernetes’ features

Helm

  • Helm is a package manager for Kubernetes that allows you to easily install and manage applications on your Kubernetes cluster
  • Helm provides a way to package applications and their dependencies into a single package, making it easy to install and manage applications across multiple clusters
  • Helm supports a wide range of application types, including web applications, databases, and messaging systems
  • Helm is designed to work seamlessly with Kubernetes and takes advantage of its strengths, such as its scalability, reliability, and security features

Kustomize

  • Kustomize is a tool for customizing and deploying Kubernetes manifests without templates, layering environment-specific changes over a shared base
  • Kustomize provides a simple, declarative syntax (kustomization.yaml) for defining those customizations, making it easy to manage configuration across multiple clusters and environments
  • Kustomize works with any Kubernetes resource type, including Deployments, CronJobs, and CRDs
  • Kustomize is designed to work seamlessly with Kubernetes and takes advantage of its strengths, such as its scalability, reliability, and security features

Conclusion

Kustomize is a good choice when you need to customize and manage Kubernetes resources; Helm when you want to package and deploy applications using a package manager; Terraform when you need to provision and manage infrastructure resources; and Cluster API when you want to provision and manage Kubernetes clusters themselves.

It’s important to note that these are just general guidelines, and the best tool for the job will depend on your specific use case and requirements. It’s also worth noting that these tools are not mutually exclusive, and you may find that using a combination of them is the best approach for your needs.
