Orchestrating Connectivity: An Introductory Guide to Kubernetes Services and Networking
What are Kubernetes Services?
Kubernetes Services are abstractions that define a logical set of pods and a policy to access them. Services allow pods to communicate with each other and with external clients, regardless of where they are deployed or how they are scaled. Services also provide load balancing, service discovery, and name resolution for the pods they target.
There are several types of Services in Kubernetes:
- ClusterIP: the default type, which exposes a service on a cluster-internal IP address. It creates a virtual IP address (VIP) within the cluster that other pods or services in the same cluster can use
- NodePort: allows external clients to access a service through a port opened on each node. The service forwards traffic from the node port to a pod port
- LoadBalancer: creates an external load balancer that routes traffic to a service
- ExternalName: maps a service to an external DNS name. This is useful for integrating with external services that are not managed by Kubernetes
- Headless: does not assign a cluster IP to the service; DNS queries return the pod IPs directly
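As a concrete illustration, a headless service is declared by setting clusterIP: None in an otherwise ordinary Service manifest; a minimal sketch (the names and labels are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None   # headless: DNS returns the pod IPs directly instead of a VIP
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```

A DNS lookup for my-headless-service then returns one A record per ready pod, which is useful for stateful workloads that need to address individual replicas.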
Kubernetes Networking Fundamentals
Kubernetes Networking is based on a few principles and requirements:
- Every pod should have a unique IP address across the cluster
- Pods on different nodes should be able to communicate with each other without NAT
- Nodes should be able to communicate with pods without NAT
- The IP address that a pod sees for itself should be the same address that other pods and nodes use to reach it
This means that Kubernetes treats pods as first-class network entities, and does not impose any restrictions on how they communicate with each other. Pods can use any protocol or port, and can even have multiple IP addresses or interfaces.
However, this also means that Kubernetes itself does not implement the pod network. It relies on pluggable components to provide the network layer:
- Container runtime: creates and configures the pod network namespace and interfaces
- Network plugin: connects the pod network to the node network and provides cross-node communication
- Service proxy: implements the service abstraction and load balancing
- DNS server: provides name resolution for services and pods
How does Kubernetes Networking Work?
Kubernetes uses a flat network model, which means that every pod can communicate with every other pod in the cluster, regardless of which node they are running on. This simplifies the application development and deployment, as there is no need to configure complex network policies or routing rules.
To achieve this, Kubernetes relies on a network plugin that implements the Container Network Interface (CNI) specification. The CNI plugin is responsible for allocating IP addresses to pods and setting up the network routes and rules on each node. There are many CNI plugins available, such as Calico, Flannel, Weave Net, etc.
Each pod has its own IP address, which is assigned by the CNI plugin when the pod is created. The pod IP address is ephemeral, which means that it can change when the pod is restarted or moved to another node. Therefore, it is not recommended to use pod IP addresses for communication between pods or services. Instead, it is better to use service names or DNS names, which are stable and resolvable within the cluster.
On top of this model, Kubernetes provides several networking features:
- DNS: Kubernetes runs a DNS service (CoreDNS) that provides name resolution for services and pods within the cluster. Pods can use the DNS service to discover and communicate with other services and pods by their names
- Ingress: Kubernetes supports ingress resources that define rules for routing external traffic to services within the cluster. Ingress resources require an ingress controller that implements the rules and provides load balancing, SSL termination, authentication, etc. There are many ingress controllers available for Kubernetes, such as Nginx, Traefik, Istio, etc.
- Network Policy: Kubernetes supports network policy resources that define rules for allowing or denying traffic between pods or namespaces within the cluster. Network policy resources require a network plugin that supports them and enforces them at the pod level
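To make the Ingress feature concrete, here is a sketch of an Ingress resource that routes traffic for one host to a backend service (the host name app.example.com, the service name my-service, and the nginx ingress class are assumptions for illustration; an ingress controller must be installed for this to take effect):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx        # assumes an nginx ingress controller is installed
  rules:
  - host: app.example.com        # external hostname to match
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service     # cluster service receiving the traffic
            port:
              number: 80
```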
Service Discovery and Management
Service discovery is the process of locating the appropriate service instance for a given request.
Kubernetes provides various mechanisms for service discovery:
- Environment Variables: Kubernetes injects environment variables containing service information into each pod at startup, so applications can discover services that existed when the pod was created
- DNS Lookups: Kubernetes services can be resolved via DNS, which returns the cluster IP of the service (DNS SRV records additionally expose the ports of named ports). Applications can then use this information to establish connections with the service
- Service Meshes: A service mesh is a configurable infrastructure layer for microservices applications that makes communication between services flexible, reliable, and fast. It provides features like load balancing, circuit breaking, and service discovery. Popular service meshes include Istio and Linkerd; Envoy is a proxy that meshes such as Istio build on, rather than a mesh itself
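The environment-variable mechanism above follows a fixed naming convention: for a Service named my-service, the kubelet injects variables such as MY_SERVICE_SERVICE_HOST and MY_SERVICE_SERVICE_PORT into pods started after the Service exists. A minimal sketch of how an application might use them (the values are simulated here, since the real ones only exist inside a cluster):

```shell
# Kubernetes derives the variable names from the Service name:
# "my-service" -> MY_SERVICE_SERVICE_HOST / MY_SERVICE_SERVICE_PORT
# Simulated values for illustration; in a pod, kubelet sets these automatically.
MY_SERVICE_SERVICE_HOST=10.96.0.10
MY_SERVICE_SERVICE_PORT=80

# Build the service URL from the injected variables
echo "http://${MY_SERVICE_SERVICE_HOST}:${MY_SERVICE_SERVICE_PORT}"
# -> http://10.96.0.10:80
```

Note that these variables are only set for services that already exist when the pod starts, which is one reason DNS-based discovery is usually preferred.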
Service Mesh and Its Role in Service Communication
A service mesh is a dedicated intermediary layer between services that enables more sophisticated communication patterns, improved reliability, and better performance.
Here are some key benefits of using a service mesh:
- Proxying: Service meshes act as proxies between services, allowing for more advanced routing, filtering, and observability capabilities
- Circuit Breaking: Service meshes can detect failures and automatically open or close circuits to prevent cascading failures and reduce latency
- Observability: Service meshes collect metrics and logs, providing insights into service behavior, performance, and health
- Security: Service meshes can encrypt communication between services, ensuring data privacy and security
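As one example of how circuit breaking is configured in practice, here is a hedged sketch using an Istio DestinationRule (this assumes Istio is installed in the cluster; the host name and thresholds are illustrative, and other meshes expose similar settings through their own resources):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: my-service-circuit-breaker
spec:
  host: my-service.default.svc.cluster.local   # illustrative service name
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100        # cap concurrent connections to the backend
    outlierDetection:              # circuit-breaking / outlier ejection settings
      consecutive5xxErrors: 5      # eject a pod after 5 consecutive 5xx errors
      interval: 30s                # how often hosts are scanned for errors
      baseEjectionTime: 60s        # how long an ejected pod stays out of rotation
```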
Network Models
Host Network Model
Pods run directly in the node's network namespace and share the node's IP address, communicating with other pods and services over the host network itself. This model is simple to implement but lacks some advanced networking features.
Pros:
- Easy to set up and manage
- No additional software required
Cons:
- Limited scalability
- No support for advanced networking features like service discovery and load balancing
- Reduces the port availability on the node, as pods have to use unique ports to avoid conflicts
- Exposes the pods to the external network, which may pose security risks
- Does not support pod mobility, as pods cannot move across nodes without changing their IP address
Overlay Network Model
A separate network fabric is created above the underlying physical network. Each pod gets a unique IP address within the overlay network, which allows it to communicate with other pods and services.
Pros:
- Supports advanced networking features like service discovery and load balancing
- Allows for better isolation between apps and environments
Cons:
- Requires additional software and infrastructure
- Can introduce additional latency
- Adds overhead and complexity to the network stack, as packets have to be encapsulated and decapsulated at each hop
- May cause performance degradation and packet loss due to MTU (Maximum Transmission Unit) issues or congestion
- May require additional configuration or coordination with the underlying network infrastructure to support multicast or broadcast traffic
Underlay Network Model
The physical network infrastructure is used to provide connectivity between pods and services. Each pod gets a unique IP address within the underlay network, which allows it to communicate with other pods and services.
Pros:
- Utilizes existing network infrastructure
- Provides better performance compared to overlay models
Cons:
- Does not support advanced networking features like service discovery and load balancing
- Can be complex to set up and manage
- Requires integration and cooperation with the underlying network infrastructure, which may not be feasible or desirable in some environments
- May consume a large number of IP addresses from the cluster or external networks
Network Policies and Their Role in Controlling Traffic Flow
Network policies define what traffic is allowed to flow between different parts of a cluster. Policies can be defined at various levels, including pod level, namespace level, and cluster level. Network policies allow administrators to control traffic flow based on criteria such as source and destination IP addresses, ports, and protocols.
Policies are enforced by the cluster's network plugin on each node, typically through iptables or eBPF rules. When a packet is sent between two endpoints, these rules determine whether the traffic is allowed or blocked.
Kubernetes NetworkPolicy rules are additive allow rules, and the effective behavior falls into three cases:
- Allow: traffic that matches a policy's ingress or egress rules is permitted
- Deny: once a pod is selected by any policy, traffic that no policy explicitly allows is blocked
- Default: pods not selected by any policy accept all traffic
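As an illustration, here is a sketch of a NetworkPolicy that allows only frontend pods to reach an application on port 8080 (the labels and names are assumptions for illustration; enforcement requires a CNI plugin that supports NetworkPolicy, such as Calico):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: my-app          # the pods this policy protects
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend   # only pods with this label may connect
    ports:
    - protocol: TCP
      port: 8080
```

Because the policy selects the app: my-app pods, any ingress traffic not matched by the from clause is implicitly denied.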
Kubernetes Services in Action
ClusterIP
A ClusterIP service is the default type of service in Kubernetes. It creates a virtual IP address (VIP) within the cluster that can be used to access the pods selected by the service. A ClusterIP service is only accessible from within the cluster, not from the outside.
To create a ClusterIP service, we can use the following YAML manifest:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
The service exposes port 80 and forwards the traffic to port 8080 on the pods. The service is also assigned a VIP, which can be resolved through the DNS name my-service.default.svc.cluster.local, where default is the namespace of the service.
To access the service from within the cluster, we can use the VIP or the DNS name:
curl http://my-service.default.svc.cluster.local
curl http://10.96.0.10 # assuming this is the VIP assigned to the service
NodePort
A NodePort service is a type of service that exposes a port on each node of the cluster. This allows external clients to access the pods selected by the service by using any node’s IP address and the allocated port.
To create a NodePort service, we can use the following YAML manifest:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
    nodePort: 30080
The service exposes port 80 and forwards the traffic to port 8080 on the pods. The service also allocates a port on each node of the cluster, which can be specified with the nodePort field or left blank for Kubernetes to choose a random port in the range 30000–32767.
To access the service from outside the cluster, we can use any node’s IP address and the allocated port:
curl http://node1.example.com:30080 # assuming node1.example.com is one of the nodes in the cluster
curl http://node2.example.com:30080 # assuming node2.example.com is another node in the cluster
LoadBalancer
A LoadBalancer service is a type of service that creates an external load balancer for the pods selected by the service. This allows external clients to access the pods via a single IP address and port, which are managed by the load balancer.
To create a LoadBalancer service, we can use the following YAML manifest:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 8080
The service exposes port 80 and forwards the traffic to port 8080 on the pods. The service also requests a load balancer from the cloud provider or an external controller, which assigns an external IP address and port to the service.
To access the service from outside the cluster, we can use the external IP address and port assigned by the load balancer:
curl http://35.192.168.10 # assuming this is the external IP address assigned by the load balancer
ExternalName
An ExternalName service is a type of service that maps a service name to an external DNS name. This allows clients within the cluster to access external services by using a familiar DNS name.
To create an ExternalName service, we can use the following YAML manifest:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ExternalName
  externalName: my-external-service.example.com
This manifest defines a service named my-service that maps to the external DNS name my-external-service.example.com. The service does not have any selectors or ports, as it does not manage any pods.
To access the service from within the cluster, we can use the DNS name my-service.default.svc.cluster.local, which will resolve to the external DNS name:
curl http://my-service.default.svc.cluster.local # this will resolve to http://my-external-service.example.com
Conclusion
Kubernetes networking is a vital part of any Kubernetes cluster. It enables pods to communicate with each other and with the outside world. It also provides abstractions and features that make it easier to manage and scale your applications.