How does kafka handle network partitions?

Submitted by 情到浓时终转凉″ on 2019-12-08 17:14:41

Question


Kafka has the concept of an in-sync replica set, which is the set of nodes that aren't too far behind the leader.

What happens if the network cleanly partitions so that a minority containing the leader is on one side, and a majority containing the other in-sync nodes is on the other side?

The minority/leader-side presumably thinks that it lost a bunch of nodes, reduces the ISR size accordingly, and happily carries on.

The other side probably thinks that it lost the leader, so it elects a new one and happily carries on.

Now we have two leaders in the same cluster, accepting writes independently. In a system that requires a majority of nodes to proceed after a partition, the old leader would step down and stop accepting writes.

What happens in this situation in Kafka? Does it require majority vote to change the ISR set? If so, is there a brief data loss until the leader side detects the outages?


Answer 1:


I haven't tested this, but I think the accepted answer is wrong and Lars Francke is correct about the possibility of split-brain.

Zookeeper quorum requires a majority, so if ZK ensemble partitions, at most one side will have a quorum.

Being a controller requires having an active session with ZK (ephemeral znode registration). If the current controller is partitioned away from ZK quorum, it should voluntarily stop considering itself a controller. This should take at most zookeeper.session.timeout.ms = 6000. Brokers still connected to ZK quorum should elect a new controller among themselves. (based on this: https://stackoverflow.com/a/52426734)

Being a topic-partition leader also requires an active session with ZK. A leader that loses its connection to the ZK quorum should voluntarily stop being one. The elected controller will detect that some ex-leaders are missing and will assign new leaders from the replicas that are in the ISR and still connected to the ZK quorum.

Now, what happens to producer requests received by the partitioned ex-leader during ZK timeout window? There are some possibilities.

If the producer's acks = all and the topic's min.insync.replicas = replication.factor, then all in-sync replicas should have exactly the same data. The ex-leader will eventually reject in-progress writes, and producers will retry them. The newly elected leader will not have lost any data. On the other hand, it won't be able to serve any write requests until the partition heals. It will be up to producers to decide whether to reject client requests or keep retrying in the background for a while.
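As a minimal sketch, the producer side of this setup could be expressed as a plain `java.util.Properties` map of the standard producer settings named above (the `delivery.timeout.ms` value here is an arbitrary example of "keep retrying in the background for a while", not a recommendation):

```java
import java.util.Properties;

public class DurableProducerConfig {
    public static Properties build() {
        Properties props = new Properties();
        // acks=all: the leader waits for the full ISR before acknowledging a write
        props.put("acks", "all");
        // retry in-progress writes that the ex-leader eventually rejects
        props.put("retries", String.valueOf(Integer.MAX_VALUE));
        // bound how long the producer keeps retrying in the background (example value)
        props.put("delivery.timeout.ms", "120000");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(build().getProperty("acks")); // prints "all"
    }
}
```

These properties would then be passed to a `KafkaProducer` constructor in a real client; whether to fail fast or block client requests during a partition remains an application-level choice.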

Otherwise, it is very probable that the new leader will be missing up to zookeeper.session.timeout.ms + replica.lag.time.max.ms = 16000 ms worth of records, and they will be truncated from the ex-leader after the partition heals.
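The 16000 figure is just the sum of the two default timeouts mentioned above:

```java
public class LossWindow {
    public static void main(String[] args) {
        long zkSessionTimeoutMs = 6_000;   // zookeeper.session.timeout.ms default
        long replicaLagMaxMs    = 10_000;  // replica.lag.time.max.ms default
        // worst-case window of records the partitioned ex-leader may have
        // accepted that the new leader never saw
        long worstCaseMs = zkSessionTimeoutMs + replicaLagMaxMs;
        System.out.println(worstCaseMs); // prints 16000
    }
}
```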

Let's say you expect network partitions that last longer than you are comfortable being read-only for.

Something like this can work:

  • you have 3 availability zones and expect that at most 1 zone will be partitioned from the other 2
  • in each zone you have a Zookeeper node (or a few), so that 2 zones combined can always form a majority
  • in each zone you have a bunch of Kafka brokers
  • each topic has replication.factor = 3, one replica in each availability zone, min.insync.replicas = 2
  • producers' acks = all

This way there should be two in-sync replicas on the ZK-quorum side of the network partition, at least one of them fully up to date with the ex-leader. So there is no data loss on the brokers, and the partition stays available for writes from any producers that can still connect to the winning side.
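The availability argument for this 3-zone layout can be sketched as a toy check (the helper and its parameters are hypothetical, assuming one ZK node and one replica per zone):

```java
public class PartitionTolerance {
    // Can the majority side keep accepting acks=all writes?
    static boolean writable(int zkNodesReachable, int zkEnsembleSize,
                            int isrReachable, int minInsyncReplicas) {
        boolean zkQuorum = zkNodesReachable > zkEnsembleSize / 2; // strict majority
        return zkQuorum && isrReachable >= minInsyncReplicas;
    }

    public static void main(String[] args) {
        // one of 3 zones partitioned away: 2 ZK nodes and 2 replicas remain
        System.out.println(writable(2, 3, 2, 2)); // prints true
        // two zones lost: no ZK quorum, so not writable
        System.out.println(writable(1, 3, 1, 2)); // prints false
    }
}
```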




Answer 2:


In a Kafka cluster, one of the brokers is elected to serve as the controller.

Among other things, the controller is responsible for electing new leaders. The Replica Management section covers this briefly: http://kafka.apache.org/documentation/#design_replicamanagment

Kafka uses Zookeeper to try to ensure there's only 1 controller at a time. However, the situation you described could still happen, splitting both the Zookeeper ensemble (assuming both sides can still have quorum) and the Kafka cluster in 2, resulting in 2 controllers.

In that case, Kafka has a number of configurations to limit the impact:

  • unclean.leader.election.enable: False by default, this is used to prevent replicas that were not in-sync from ever becoming leaders. If no available replicas are in-sync, Kafka marks the partition as offline, preventing data loss
  • replication.factor and min.insync.replicas: For example, if you set them to 3 and 2 respectively, in case of a "split-brain" you can prevent producers from sending records to the minority side if they use acks=all
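The leader-election rule behind the first setting can be sketched as follows (a simplified model with a hypothetical helper, not the controller's actual code):

```java
import java.util.List;

public class LeaderElectionSketch {
    // Pick a new leader, mirroring the unclean.leader.election.enable behavior
    static String electLeader(List<String> isr, List<String> aliveReplicas,
                              boolean uncleanElectionEnabled) {
        for (String replica : aliveReplicas) {
            if (isr.contains(replica)) return replica; // prefer an in-sync replica
        }
        if (uncleanElectionEnabled && !aliveReplicas.isEmpty()) {
            return aliveReplicas.get(0); // out-of-sync leader: may lose data
        }
        return null; // partition goes offline instead, preventing data loss
    }

    public static void main(String[] args) {
        // the only ISR member is unreachable and unclean election is disabled:
        // the partition stays offline
        System.out.println(electLeader(List.of("broker1"),
                List.of("broker2", "broker3"), false)); // prints null
    }
}
```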

See also KIP-101 for the details about handling logs that have diverged once the cluster is back together.



Source: https://stackoverflow.com/questions/48825755/how-does-kafka-handle-network-partitions
