"Leader brokers without a matching listener" error in Kafka

北荒 2021-02-05 15:38

What is the meaning of this Kafka error?

[2018-08-22 11:40:49,429] WARN [Consumer clientId=consumer-1, groupId=console-consumer-62114] 1 partitions have leader brokers without a matching listener, including [topicname-0] (org.apache.kafka.clients.NetworkClient)

7 Answers
  • 2021-02-05 15:51

    I think my problem was that I was running 2 instances without setting anything special for replication (so probably no replication), and then I removed a broker. Some topics stopped working.
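
    A quick way to confirm which partitions are affected (a sketch, assuming a recent Kafka CLI and a broker reachable at localhost:9092) is to ask kafka-topics.sh for partitions whose leader is currently unavailable:

    # List partitions with no available leader (e.g. because the broker
    # hosting their only replica was removed); empty output means none.
    kafka-topics.sh --describe --unavailable-partitions \
      --bootstrap-server localhost:9092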

  • 2021-02-05 16:03

    [2018-08-22 11:40:49,429] WARN [Consumer clientId=consumer-1, groupId=console-consumer-62114] 1 partitions have leader brokers without a matching listener, including [topicname-0] (org.apache.kafka.clients.NetworkClient)

    This error also happens if you try to run multiple consumers while the Kafka topic contains only one partition. Generally, one consumer should map to one partition; if you are using two consumers, you should have 2 partitions in the Kafka topic.
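
    If that is the case, you can check the partition count and grow it (a sketch; the topic name topicname and the address localhost:9092 are assumptions):

    # Show the partition count and the leader of each partition
    kafka-topics.sh --describe --topic topicname --bootstrap-server localhost:9092

    # Increase to 2 partitions so each consumer can own one
    # (partition counts can only be increased, never decreased)
    kafka-topics.sh --alter --topic topicname --partitions 2 \
      --bootstrap-server localhost:9092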

  • 2021-02-05 16:07

    I had this issue while running Kafka in a docker container.

    The following solution helped to resolve the issue.

    As mentioned by @coldkreap in a comment on this answer: https://stackoverflow.com/a/58067985/327862

    The Kafka broker information is kept between restarts because the wurstmeister/kafka image creates a volume named 'kafka'. If you run docker volume ls, you will see a kafka volume. Remove that volume and you will be able to recreate topics, etc.
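
    For example (note: with docker-compose the volume name may be prefixed with your project name, so check the ls output first):

    docker volume ls          # look for the stale kafka volume
    docker volume rm kafka    # remove it so the broker starts with fresh metadata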

    If using docker-compose, you can run the following command to remove containers along with their associated volumes:

    docker-compose down -v
    

    OR

    docker-compose rm -sv
    
  • 2021-02-05 16:08

    In my case, I was getting this error while upgrading my Kafka cluster from v2.0 to v2.4. The cause was a wrong log.dirs setting in the server.properties file: I had not noticed that the disk names differed between nodes, so the disk names in my log.dirs settings were mismatched.
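
    A quick per-node sanity check (a sketch; the paths shown are hypothetical examples):

    # Print the data directory this broker is configured to use
    grep '^log.dirs' /opt/kafka/config/server.properties

    # Confirm that directory actually exists on this host
    ls -ld /data1/kafka-logs    # replace with the value printed above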

  • 2021-02-05 16:10

    In my case, I was getting this error while testing Kafka fail-over. I brought down one Kafka broker and expected messages to be written to the other.

    The issue was that the topic's replication-factor was set to 1, when I needed it set to 2 (for the 2 Kafka instances).
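
    A sketch of creating the topic that way (the topic name and broker address are assumptions):

    # One replica on each of the 2 brokers, so either can take over as leader
    kafka-topics.sh --create --topic mytopic --partitions 1 \
      --replication-factor 2 --bootstrap-server localhost:9092

    # Verify: the Replicas column should list both broker ids
    kafka-topics.sh --describe --topic mytopic --bootstrap-server localhost:9092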

    Bonus:
    Check out the directories where the topics are created (in my case: kafka-logs-xx) on both Kafka brokers, and you will understand why :-)

  • 2021-02-05 16:11

    jumping_monkey's bonus about checking the directories is helpful. For me, I was deploying with Bitnami Kafka. On the first deploy I had not set anything under config in the Helm values. I then wanted to lower retention down to minutes, and set that:

    config: |-
      log.retention.minutes=10
    

    This caused the log.dirs directory to switch from /bitnami/kafka/data to /tmp/logs.

    Essentially, the change in where the data is stored on the Kafka brokers is what caused the error to show up.
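
    If you do need to override config in the Bitnami chart, one untested way to avoid this (extending the snippet above) is to pin log.dirs back to the chart's data volume alongside the retention override:

    config: |-
      log.retention.minutes=10
      log.dirs=/bitnami/kafka/data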
