Whole cluster failing if one kafka node goes down?

Submitted by 两盒软妹~ on 2021-01-29 22:54:54

Question


I have a 3-node Kafka cluster, with each node running both ZooKeeper and Kafka. If I explicitly kill the leader node (both its ZooKeeper and Kafka processes), the whole cluster stops accepting any incoming data and waits for that node to come back.

kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 3 --config min.insync.replicas=2 --partitions 6 --topic logs

The topic was created using the above command.

Node 1

server.properties

broker.id=0
listeners=PLAINTEXT://:9092
advertised.listeners=PLAINTEXT://10.0.2.4:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=localhost:2181,10.0.2.5:2181,10.0.14.7:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0

zookeeper.properties

tickTime=2000 
dataDir=/tmp/zookeeper/ 
initLimit=5 
syncLimit=2 
server.0=0.0.0.0:2888:3888
server.1=analyzer1:2888:3888
server.2=10.0.14.4:2888:3888
clientPort=2181

The Kafka and ZooKeeper configurations of the other two nodes follow the same format, with the broker IDs, server IDs, and addresses adjusted accordingly.

When I check the ZooKeeper status on the remaining nodes, I can see that a new leader has been elected. But the producer still fails to send data, and the two remaining Kafka nodes keep reporting the error below.

WARN Client session timed out, have not heard from server in 30004ms for sessionid 0x0 (org.apache.zookeeper.ClientCnxn)

Can anyone help me with this?

Here are the Kafka logs from one of the available nodes:

[2020-10-08 19:40:13,607] WARN Client session timed out, have not heard from server in 12002ms for sessionid 0x2acefe00000 (org.apache.zookeeper.ClientCnxn)
[2020-10-08 19:40:13,608] INFO Client session timed out, have not heard from server in 12002ms for sessionid 0x2acefe00000, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
[2020-10-08 19:40:13,709] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2020-10-08 19:40:13,709] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2020-10-08 19:40:13,709] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2020-10-08 19:40:13,709] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2020-10-08 19:40:13,866] INFO Opening socket connection to server 10.0.14.7/10.0.14.7:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-10-08 19:40:13,867] INFO Socket error occurred: 10.0.14.7/10.0.14.7:2181: Connection refused (org.apache.zookeeper.ClientCnxn)
[2020-10-08 19:40:13,968] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2020-10-08 19:40:13,968] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2020-10-08 19:40:14,093] WARN [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Connection to node 1 (/10.0.2.5:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2020-10-08 19:40:14,093] INFO [ReplicaFetcher replicaId=0, leaderId=1, fetcherId=0] Error sending fetch request (sessionId=205463854, epoch=INITIAL) to node 1: {}. (org.apache.kafka.clients.FetchSessionHandler)
java.io.IOException: Connection to 10.0.2.5:9092 (id: 1 rack: null) failed.
        at org.apache.kafka.clients.NetworkClientUtils.awaitReady(NetworkClientUtils.java:71)
        at kafka.server.ReplicaFetcherBlockingSend.sendRequest(ReplicaFetcherBlockingSend.scala:103)
        at kafka.server.ReplicaFetcherThread.fetchFromLeader(ReplicaFetcherThread.scala:206)
        at kafka.server.AbstractFetcherThread.processFetchRequest(AbstractFetcherThread.scala:300)
        at kafka.server.AbstractFetcherThread.$anonfun$maybeFetch$3(AbstractFetcherThread.scala:135)
        at kafka.server.AbstractFetcherThread.maybeFetch(AbstractFetcherThread.scala:134)
        at kafka.server.AbstractFetcherThread.doWork(AbstractFetcherThread.scala:117)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:96)

Answer 1:


This typically happens when you only list one of the brokers in the KafkaProducer property bootstrap_servers. My assumption is that this property is set only to the broker 10.0.2.5:9092 instead of listing all three nodes.

Although mentioning only one broker is sufficient to communicate with the entire cluster, it is recommended to list at least two broker addresses (as a comma-separated list) precisely to handle the scenario you are facing.

In case of an individual broker failure, the cluster moves partition leadership to the remaining active brokers, as you have seen in the logs. Even though you do not list all of the brokers in bootstrap_servers, the producer only needs to reach one of the listed brokers to fetch the full cluster metadata and figure out which broker (partition leader) it needs to send the data to. But if the only broker you listed is the one that went down, that initial bootstrap step fails and the producer cannot send anything.
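As a sketch, a producer configuration that survives a single broker failure would list all three brokers. The addresses below are inferred from the server.properties and zookeeper.connect settings in the question and may not match the actual deployment exactly:

```properties
# Hypothetical producer.properties for the 3-broker cluster in this question.
# Listing all brokers means bootstrapping still works if any one of them is down.
bootstrap.servers=10.0.2.4:9092,10.0.2.5:9092,10.0.14.7:9092
# With replication-factor 3 and min.insync.replicas=2, acks=all allows writes
# to succeed as long as two in-sync replicas remain available.
acks=all
```

The same idea applies to any client library: whatever the property is called (bootstrap_servers, bootstrap.servers, etc.), it should name more than one broker.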



Source: https://stackoverflow.com/questions/64264379/whole-cluster-failing-if-one-kafka-node-goes-down
