When I write to a topic in Kafka, there is an error: Offset commit failed:
2016-10-29 14:52:56.387 INFO [nioEventLoopGroup-3-1][org.apache.kafka.commo
Looking at your logs, the problem is that the cluster probably has no connection to the node which is the only known replica of the given topic in ZooKeeper.
You can check it using the following command:
kafka-topics.sh --describe --zookeeper localhost:2181 --topic test1
or using kafkacat:
kafkacat -L -b localhost:9092
Example result:
Metadata for all topics (from broker 1003: localhost:9092/1003):
 1 brokers:
  broker 1003 at localhost:9092
 1 topics:
  topic "topic1" with 1 partitions:
    partition 0, leader -1, replicas: 1001, isrs: , Broker: Leader not available
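For comparison, the kafka-topics.sh --describe output for the same broken state would look roughly like this (illustrative values mirroring the kafkacat output above, not copied from your cluster):
Topic: topic1   PartitionCount: 1   ReplicationFactor: 1   Configs:
    Topic: topic1   Partition: 0   Leader: -1   Replicas: 1001   Isr: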
If you have a single-node cluster then the broker id (1001) should match the leader of the topic1 partition.
But as you can see, the only known replica of topic1 was 1001, which is not available now, so there is no possibility to recreate the topic on a different node.
The source of the problem can be automatic generation of the broker id (if you have not specified broker.id
or it is set to -1
).
Then on starting the broker (the same single broker) you probably receive a broker id different from the previous one and different from the one recorded in ZooKeeper (this is the reason why partition deletion can help, but it is not a production solution).
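A quick way to confirm which ids are actually registered is to query ZooKeeper directly; the paths below are the standard Kafka layout, adjust the host and topic name to your setup:
zookeeper-shell.sh localhost:2181 ls /brokers/ids
zookeeper-shell.sh localhost:2181 get /brokers/topics/topic1
If the id listed under /brokers/ids does not match the replica stored for the topic, you are hitting exactly this mismatch.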
The solution may be setting the broker.id value in the node config to a fixed value; according to the documentation this should be done in production environments:
broker.id=1
If everything is alright you should receive something like this:
Metadata for all topics (from broker 1: localhost:9092/1001):
 1 brokers:
  broker 1 at localhost:9092
 1 topics:
  topic "topic1" with 1 partitions:
    partition 0, leader 1, replicas: 1, isrs: 1
Kafka Documentation: https://kafka.apache.org/documentation/#prodconfig
The solution for me was that I had to make sure KAFKA_ADVERTISED_HOST_NAME
was the correct IP address of the server.
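For a Dockerized broker (this variable is used e.g. by the wurstmeister/kafka image) that looks roughly like the snippet below in docker-compose; the IP is just a placeholder for whatever address your clients can actually reach:
environment:
  KAFKA_ADVERTISED_HOST_NAME: 192.168.1.10   # must be reachable from clients, not localhost inside the container
  KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181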
You have to keep the number of Kafka replicas (brokers) and the replication factor used in your code the same.
In my case, I had 3 replicas and used a replication factor of 3; see the example below.
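In other words, the replication factor passed when creating the topic must not exceed the number of brokers. For a three-broker cluster that could look like this (topic name is just an example):
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 3 --topic test1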
We faced the same issue in production too. The code had been working fine for a long time when we suddenly got this exception.
We verified that there was no issue in the code, so we asked the deployment team to restart ZooKeeper. Restarting it solved the issue.
I faced a similar issue. The problem I had was that when you start your Kafka broker there is a property associated with it, "KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR". If you are working with a single-node cluster, make sure you set this property to '1', as its default value is 3. This change resolved my problem. (You can check the value in the kafka.properties file.) Note: I was using the base image of Confluent Kafka version 4.0.0 (confluentinc/cp-kafka:4.0.0).
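For reference, that environment variable maps (as far as I know) to the broker property offsets.topic.replication.factor, so on a plain non-Docker single-node install the equivalent fix would be this line in the broker config:
offsets.topic.replication.factor=1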