What is the meaning of this Kafka error?
I think my problem is that I was running 2 broker instances without configuring anything special for replication (so probably no replication). Then I removed a broker, and some topics stopped working.
[2018-08-22 11:40:49,429] WARN [Consumer clientId=consumer-1, groupId=console-consumer-62114] 1 partitions have leader brokers without a matching listener, including [topicname-0] (org.apache.kafka.clients.NetworkClient)
This error also happens if you run multiple consumers against a Kafka topic that contains only one partition. Generally, each consumer should map to one partition; if you are using two consumers, the topic should have 2 partitions.
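For illustration, assuming a local broker on localhost:9092 and the topic name from the question, you can check the partition count and, if needed, raise it with the stock kafka-topics tool:

```shell
# show how many partitions the topic currently has
kafka-topics.sh --describe --topic topicname --bootstrap-server localhost:9092

# increase the partition count to match the number of consumers
kafka-topics.sh --alter --topic topicname --partitions 2 --bootstrap-server localhost:9092
```

Note that partitions can only be increased, never decreased, with --alter.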
I had this issue while running Kafka in a docker container.
The following solution helped to resolve the issue.
As mentioned by @coldkreap in a comment on this answer: https://stackoverflow.com/a/58067985/327862
The Kafka broker information is kept between restarts because the wurstmeister/kafka image creates a volume named 'kafka'. If you run docker volume ls you will see a kafka volume. Remove that volume and you will be able to recreate topics, etc.
If using docker-compose, you can run one of the following commands to remove containers along with their associated volumes:

docker-compose down -v

OR

docker-compose rm -sv
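If you are not using docker-compose, the same cleanup can be done by hand. This is a sketch; the container name "kafka" is an assumption, so substitute whatever name docker ps shows:

```shell
# list volumes -- the wurstmeister/kafka image creates one named "kafka"
docker volume ls

# stop and remove the container first (a volume in use cannot be removed),
# then remove the stale volume so the broker starts fresh
docker rm -f kafka
docker volume rm kafka
```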
In my case, I was getting this error while upgrading my Kafka cluster from v2.0 to v2.4. The cause was a wrong log.dirs setting in the server.properties file: I hadn't noticed that the disks were named differently on different nodes, so I mismatched the disk names in the log.dirs settings.
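To illustrate the mismatch, each broker's server.properties must point log.dirs at a path that actually exists on that node. The mount names below are hypothetical:

```
# broker 1 server.properties -- this node's disk is mounted at /data1
log.dirs=/data1/kafka-logs

# broker 2 server.properties -- this node's disk is mounted at /data2;
# copying broker 1's value here would point at a non-existent path
log.dirs=/data2/kafka-logs
```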
In my case, I was getting this error when I was testing Kafka fail-over. I brought down one Kafka broker and expected the messages to be written to the other broker.

The issue was that the topic replication-factor was set to 1, when I needed it set to 2 (for 2 Kafka instances).
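For example, the topic would need to be created with a replication factor matching the broker count. The topic name and port here are assumptions:

```shell
# create the topic with one replica per broker
kafka-topics.sh --create --topic mytopic \
  --partitions 1 --replication-factor 2 \
  --bootstrap-server localhost:9092

# verify: each partition should list 2 replicas and 2 in-sync replicas
kafka-topics.sh --describe --topic mytopic --bootstrap-server localhost:9092
```

With replication-factor 2, the surviving broker can take over leadership when the other goes down.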
Bonus: check out the directories where the topics are created (in my case: kafka-logs-xx) on both Kafka brokers, and you will understand why :-)
jumping_monkey's bonus tip about checking the directories is helpful. For me, I was deploying with Bitnami Kafka. On first deploy I had not set config in the helm values. I wanted to change retention down to minutes, and set that:
config: |-
  log.retention.minutes=10
This caused the log.dirs directory to switch from /bitnami/kafka/data to /tmp/logs.

Essentially, the change in where the data was being stored on the Kafka brokers is what caused the error to show up.
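A sketch of the fix, assuming the Bitnami chart's config block passes these properties straight through to server.properties: keep log.dirs pinned to the chart's default data directory while overriding retention, so the data path does not silently change.

```
config: |-
  log.retention.minutes=10
  log.dirs=/bitnami/kafka/data
```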