The Evolution of Kafka Architecture: From ZooKeeper to KRaft

Roman Glushach
Jul 5, 2023 · 7 min read



Kafka’s architecture has recently shifted from ZooKeeper to a quorum-based controller that uses a new consensus protocol called Kafka Raft (KRaft). The shift from ZooKeeper to KRaft has been a major architectural overhaul, resulting in simplified deployment, enhanced scalability, and improved performance.

ZooKeeper

In ZooKeeper (ZK) mode, a ZK ensemble serves as the coordination service, with one ZK node acting as leader and the others as followers. A controller is elected from among the Kafka brokers through ZooKeeper, and the Kafka cluster’s metadata is stored in ZooKeeper and replicated from the ZK leader to its followers.

High-level overview of the ZooKeeper architecture

ZooKeeper is a distributed system that organizes data in a hierarchical namespace, similar to a standard file system. The namespace consists of data registers called znodes. A znode is identified by a path, which is a sequence of path elements separated by a slash.

ZooKeeper supports three types of znodes in its namespace:

  • Persistent: the default type; persistent znodes remain in ZooKeeper until they are explicitly deleted
  • Ephemeral: deleted automatically when the session that created them disconnects; ephemeral znodes cannot have children
  • Sequential: created with a monotonically increasing counter appended to their name, which can be used to generate sequential numbers like IDs (see the sketch after this list)
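
The snippet below is a minimal sketch of the three znode types using the standard ZooKeeper Java client. The connection string localhost:2181 and the /demo paths are assumptions chosen for illustration.

```java
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

// Hedged sketch of the three znode types, assuming a local ZooKeeper
// server on localhost:2181 and the example path /demo.
public class ZnodeTypesDemo {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10_000, event -> {});

        // Persistent: survives until explicitly deleted.
        zk.create("/demo", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);

        // Ephemeral: removed automatically when this session ends.
        zk.create("/demo/ephemeral", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

        // Sequential: ZooKeeper appends a monotonically increasing
        // counter, e.g. /demo/seq-0000000000, /demo/seq-0000000001, ...
        String path = zk.create("/demo/seq-", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT_SEQUENTIAL);
        System.out.println("Created sequential znode: " + path);

        zk.close();
    }
}
```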

ZooKeeper is designed to be reliable, fast, and scalable. It is replicated over a set of servers called an ensemble. Each server maintains an in-memory image of the state, along with a transaction log and snapshots in a persistent store.

ZooKeeper clients connect to exactly one server but can switch to another if that server becomes unavailable. Read requests are served from each server’s local replica of the database. Write requests go through an agreement protocol: they are forwarded to the leader server, which coordinates them using the ZooKeeper Atomic Broadcast (ZAB) protocol.

The ZAB protocol is the core of ZooKeeper: it keeps all of the servers in sync, ensures the reliable delivery of messages, and ensures that messages are delivered in total and causal order. The protocol uses TCP for communication and establishes point-to-point FIFO channels between servers.

Role in Kafka architecture

Kafka is a distributed system that provides high availability and fault tolerance. However, it needs a way to coordinate among the active brokers and to keep a consistent view of the cluster and its configurations.

For a long time, Kafka has used ZooKeeper as its metadata management tool to perform several key functions (a short sketch after the list shows how some of this metadata can be inspected):

  • Controller Election: ZooKeeper is essential for electing the controller. Each broker tries to create an ephemeral znode in ZooKeeper; the first broker to succeed becomes the controller and receives a controller epoch
  • Cluster Membership: ZooKeeper also helps to manage the membership of brokers in a cluster. When a broker connects to the ZooKeeper ensemble, an ephemeral znode is created under a group znode; if the broker fails, the znode is removed
  • Topic Configuration: Kafka stores a set of configurations for each topic in ZooKeeper. These configurations can be specific to each topic or global. ZooKeeper also stores information like the list of existing topics, the number of partitions for each topic, and the location of replicas
  • Access Control Lists (ACLs): Kafka maintains the ACLs for all topics in ZooKeeper, which determine who or what can read from or write to each topic. ZooKeeper also stores the list of consumer groups and the members of each consumer group
  • Quotas: Kafka brokers can limit the broker resources that a client may use. These limits are stored in ZooKeeper as quotas. There are two types: a network bandwidth quota defined by a byte-rate threshold, and a request rate quota defined by a CPU-utilization threshold
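
As an illustration, the following hedged sketch uses the ZooKeeper Java client to read a few of the znodes Kafka maintains in ZooKeeper mode, such as /controller and /brokers/ids. The connection string is an assumption, and the exact znode layout and payload format can vary across Kafka versions.

```java
import java.util.List;
import org.apache.zookeeper.ZooKeeper;

// Hedged sketch: inspect the metadata znodes a ZooKeeper-mode Kafka
// cluster maintains, assuming ZooKeeper is reachable at localhost:2181.
public class KafkaZnodeBrowser {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10_000, event -> {});

        // Controller election: the winning broker registers itself here.
        byte[] controller = zk.getData("/controller", false, null);
        System.out.println("Current controller: " + new String(controller));

        // Cluster membership: one ephemeral child znode per live broker.
        List<String> brokers = zk.getChildren("/brokers/ids", false);
        System.out.println("Live broker ids: " + brokers);

        // Topic metadata: existing topics and their partition assignments.
        List<String> topics = zk.getChildren("/brokers/topics", false);
        System.out.println("Topics: " + topics);

        zk.close();
    }
}
```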

Difficulties

  • Scalability: ZooKeeper is not designed to handle a large number of clients or requests. As Kafka clusters grow in size and traffic, ZooKeeper can become a bottleneck and affect the performance and availability of Kafka
  • Complexity: ZooKeeper adds another layer of complexity and dependency to Kafka’s architecture, making the system harder to manage. Kafka users and operators have to install, configure, monitor, and troubleshoot ZooKeeper separately from Kafka. Moreover, ZooKeeper has its own configuration parameters, failure scenarios, and security mechanisms that are different from Kafka’s
  • Consistency: ZooKeeper guarantees strong consistency, which means that all the nodes in the cluster see the same view of the data at any given time. However, this also means that ZooKeeper requires a quorum (majority) of nodes to be available to process any request. If a quorum is not available, ZooKeeper cannot serve any request and becomes unavailable. This can affect Kafka’s availability as well, especially in scenarios such as network partitions or data center failures
  • Bottleneck: the architecture introduces a bottleneck between the active Kafka controller and the ZooKeeper leader, since every metadata change has to flow through both
  • Slow failover: metadata changes and controller failovers are slow, in part because a newly elected controller has to load the full cluster state from ZooKeeper before it can serve requests

KRaft

KRaft stands for Kafka Raft Metadata mode, which means that Kafka uses the Raft consensus protocol to manage its own metadata instead of relying on ZooKeeper.

High-level overview of the KRaft architecture

The original Kafka architecture relied on ZooKeeper to manage the metadata of the cluster. However, ZooKeeper introduced some complexity and limitations for Kafka. Therefore, a series of Kafka Improvement Proposals (KIPs) were proposed to replace ZooKeeper with a self-managed metadata quorum. The first version of this self-managed mode was released as an early access feature in Kafka 2.8.

The self-managed mode simplifies metadata management by delegating it to a new quorum controller service in Kafka. The quorum controller adopts an event-sourced storage model and uses Kafka Raft (KRaft) as the consensus protocol to ensure the consistency and durability of the metadata across the quorum.

KRaft is an event-driven implementation of the Raft consensus protocol, similar in role to the ZAB protocol used by ZooKeeper. The quorum controller maintains an event log to store state transitions, which is periodically compacted into snapshots to avoid unbounded growth.

The quorum controller operates in a leader-follower mode: the leader controller appends events to an internal metadata topic in Kafka, and the follower controllers consume and apply these events. If a broker fails or disconnects, it can resume from the last committed event when it rejoins, which minimizes downtime.
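
To make the event-sourced model concrete, here is an illustrative sketch (not Kafka’s actual classes; MetadataRecord and Snapshot are hypothetical types) of a state machine that applies committed metadata events in order, takes snapshots for compaction, and restores from a snapshot on restart.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch only: an event-sourced metadata state machine in
// the spirit of the quorum controller. All types here are hypothetical.
public class MetadataStateMachine {
    // Hypothetical record type: one metadata change event from the log.
    record MetadataRecord(long offset, String key, String value) {}

    record Snapshot(long offset, Map<String, String> state) {}

    private final Map<String, String> state = new HashMap<>();
    private long lastAppliedOffset = -1;

    // Followers (and rejoining nodes) apply committed events in order.
    public void apply(MetadataRecord record) {
        if (record.offset() <= lastAppliedOffset) {
            return; // already applied; replay is idempotent
        }
        state.put(record.key(), record.value());
        lastAppliedOffset = record.offset();
    }

    // Periodic compaction: capture the full state plus the offset it
    // reflects, so the log before this point can be truncated.
    public Snapshot takeSnapshot() {
        return new Snapshot(lastAppliedOffset, Map.copyOf(state));
    }

    // On restart, load the snapshot and replay only the log suffix.
    public void restore(Snapshot snapshot) {
        state.clear();
        state.putAll(snapshot.state());
        lastAppliedOffset = snapshot.offset();
    }
}
```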

The quorum controller also has some advantages over the ZooKeeper-based controller, such as not needing to load the state from an external service, and having all the committed metadata records in memory. Furthermore, the same event-driven mechanism is used to propagate and synchronize all metadata changes across the cluster.

Benefits

  • Simplicity: KRaft simplifies Kafka’s architecture by removing the need for a separate coordination service. Kafka users and operators only have to deal with one system instead of two. Moreover, KRaft uses the same configuration parameters, failure scenarios, and security mechanisms as Kafka’s data plane, which reduces the learning curve and operational overhead
  • Scalability: KRaft improves Kafka’s scalability by reducing the load on the metadata store. In KRaft mode, only the controller quorum (a subset of brokers) participates in the Raft protocol, while the rest of the brokers only communicate with the controller quorum. This reduces the number of connections and requests that have to go through the metadata store, and allows Kafka to handle more brokers and topics without affecting the performance and availability of the metadata store
  • Availability: KRaft enhances Kafka’s availability by allowing partial failures in the metadata store. In KRaft mode, only a quorum of controllers is required to process any request. If some controllers are unavailable or partitioned from the rest of the cluster, the remaining controllers can still serve requests and maintain the cluster state. This increases Kafka’s resilience to network partitions and data center failures
  • Simplified deployment and management: Kafka users no longer need to run and maintain a separate ZooKeeper cluster, which reduces the operational complexity and cost of Kafka deployments. Moreover, Kafka users can leverage the existing tools and APIs for managing Kafka clusters, such as the Admin API, the Config API, and the kafka-reassign-partitions tool (see the sketch after this list)
  • Increased security: KRaft supports encryption and authentication of client-server communication using SSL/TLS or SASL mechanisms, which protects Kafka metadata from unauthorized access or modification
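
As a quick illustration of the point about existing tooling, the hedged sketch below uses the standard Admin API against a KRaft cluster exactly as it would be used against a ZooKeeper-backed one; the bootstrap address localhost:9092 is an assumption.

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;

// Hedged sketch: the Admin API is unchanged in KRaft mode; cluster
// metadata now comes from the controller quorum rather than ZooKeeper.
public class KraftAdminDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (Admin admin = Admin.create(props)) {
            // Same calls work whether the cluster runs in ZK or KRaft mode.
            String clusterId = admin.describeCluster().clusterId().get();
            System.out.println("Cluster id: " + clusterId);
            System.out.println("Topics: " + admin.listTopics().names().get());
        }
    }
}
```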

Trade-offs

  • Compatibility: KRaft is not compatible with older versions of Kafka or ZooKeeper. Users who want to migrate from ZooKeeper to KRaft have to follow a specific procedure that involves upgrading their brokers and clients to support KRaft mode, converting their metadata from ZooKeeper format to KRaft format, and switching their brokers from ZooKeeper mode to KRaft mode. This can be a complex and risky process that requires careful planning and testing
  • Consistency: KRaft relaxes ZooKeeper’s strong consistency by allowing eventual consistency in some cases. For example, if a controller fails or leaves the cluster, it may take some time for another controller to take over its role and update its view of the cluster state. During this time, some brokers or clients may see stale or inconsistent metadata until they refresh their caches or reconnect to the new controller. This can cause temporary errors or inconsistencies in operations such as creating or deleting topics or reassigning partitions

Transition from ZooKeeper to KRaft

Transitioning from ZooKeeper to KRaft involves upgrading to a Kafka version that supports KRaft mode and following the documented migration procedure. The effort began in August 2019 with Colin McCabe’s KIP-500, which removes the dependency on ZooKeeper for managing metadata and leader election. This resulted in the development of KRaft mode, which shipped as an early access feature in Kafka 2.8 and has since been marked as production-ready. As of Kafka 3.5, ZooKeeper mode is deprecated, and it will be completely removed from Kafka with the release of Kafka 4.0.

Conclusion

Kafka’s shift from ZooKeeper to KRaft is a major architectural overhaul that simplifies deployment, enhances scalability, and improves performance. KRaft mode eliminates the need for a separate ZooKeeper ensemble, making it easier to deploy and manage Kafka clusters.

KRaft mode is also more scalable than ZooKeeper mode, allowing Kafka clusters to handle more traffic and data, and it is faster, resulting in lower latency and higher throughput.
