Consumer not receiving messages, kafka console, new consumer api, Kafka 0.9

一个人的身影 2021-02-01 02:41

I am doing the Kafka Quickstart for Kafka 0.9.0.0.

I have zookeeper listening at localhost:2181 because I ran

bin/zookeeper-server-start.sh config/zookeeper.properties

16 Answers
  • 2021-02-01 03:03

    Use this:

    $ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
    

    Note: remove --new-consumer from your command; with --bootstrap-server, the new consumer API is used automatically.

    For reference see here: https://kafka.apache.org/quickstart
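
    To verify end to end, you can publish a few messages from a console producer in a second terminal; a minimal sketch, assuming the same localhost:9092 broker and the test topic from the quickstart:

    $ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
    >hello
    >world

    Anything typed at the > prompt should appear in the consumer above.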

  • 2021-02-01 03:08

    This problem also affects ingesting data from Kafka with Flume and sinking it to HDFS.

    To fix the above issue:

    1. Stop the Kafka brokers
    2. Connect to the ZooKeeper cluster and remove the /brokers znode (see the sketch after this list)
    3. Restart the Kafka brokers
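
    A minimal sketch of step 2, assuming the zookeeper-shell.sh bundled with Kafka and a ZooKeeper 3.4-era CLI (rmr was replaced by deleteall in ZooKeeper 3.5+). Removing /brokers wipes all broker registrations and topic metadata, so only do this on a cluster you can rebuild:

    $ bin/zookeeper-shell.sh localhost:2181
    ls /brokers/ids   # inspect the (possibly stale) broker registrations first
    rmr /brokers      # remove the znode and everything under it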

    This is not an issue with the Kafka client or Scala version used by the cluster; ZooKeeper may simply be holding stale information about the broker hosts.

    To verify the fix:

    Create a topic in Kafka, for example:

    $ kafka-topics --zookeeper slavenode01.cdh.com:2181 --create --topic rkkrishnaa3210 --partitions 1 --replication-factor 1
    

    Open a producer channel and feed some messages to it.

    $ kafka-console-producer --broker-list slavenode03.cdh.com:9092 --topic rkkrishnaa3210
    

    Open a consumer channel to consume messages from the topic.

    $ kafka-console-consumer --bootstrap-server slavenode01.cdh.com:9092 --topic rkkrishnaa3210 --from-beginning
    

    To test this from Flume:

    Flume agent config:

    # Agent "rk": one Kafka source, one in-memory channel, one HDFS sink
    rk.sources  = source1
    rk.channels = channel1
    rk.sinks = sink1

    # Kafka source: discovers brokers via ZooKeeper and reads the topic as consumer group flume1
    rk.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
    rk.sources.source1.zookeeperConnect = ip-20-0-21-161.ec2.internal:2181
    rk.sources.source1.topic = rkkrishnaa3210
    rk.sources.source1.groupId = flume1
    rk.sources.source1.channels = channel1
    # Stamp each event so the HDFS sink can expand %y-%m-%d in its path
    rk.sources.source1.interceptors = i1
    rk.sources.source1.interceptors.i1.type = timestamp
    rk.sources.source1.kafka.consumer.timeout.ms = 100

    # Memory channel: buffers up to 10000 events, 1000 per transaction
    rk.channels.channel1.type = memory
    rk.channels.channel1.capacity = 10000
    rk.channels.channel1.transactionCapacity = 1000

    # HDFS sink: plain-text files bucketed by topic and date, rolled every 5 seconds
    rk.sinks.sink1.type = hdfs
    rk.sinks.sink1.hdfs.path = /user/ce_rk/kafka/%{topic}/%y-%m-%d
    rk.sinks.sink1.hdfs.rollInterval = 5
    rk.sinks.sink1.hdfs.rollSize = 0
    rk.sinks.sink1.hdfs.rollCount = 0
    rk.sinks.sink1.hdfs.fileType = DataStream
    rk.sinks.sink1.channel = channel1
    

    Run the Flume agent:

    flume-ng agent --conf . -f flume.conf -Dflume.root.logger=DEBUG,console -n rk
    

    Observe in the agent logs that messages from the topic are written to HDFS:

    18/02/16 05:21:14 INFO internals.AbstractCoordinator: Successfully joined group flume1 with generation 1
    18/02/16 05:21:14 INFO internals.ConsumerCoordinator: Setting newly assigned partitions [rkkrishnaa3210-0] for group flume1
    18/02/16 05:21:14 INFO kafka.SourceRebalanceListener: topic rkkrishnaa3210 - partition 0 assigned.
    18/02/16 05:21:14 INFO kafka.KafkaSource: Kafka source source1 started.
    18/02/16 05:21:14 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: source1: Successfully registered new MBean.
    18/02/16 05:21:14 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: source1 started
    18/02/16 05:21:41 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
    18/02/16 05:21:42 INFO hdfs.BucketWriter: Creating /user/ce_rk/kafka/rkkrishnaa3210/18-02-16/FlumeData.1518758501920.tmp
    18/02/16 05:21:48 INFO hdfs.BucketWriter: Closing /user/ce_rk/kafka/rkkrishnaa3210/18-02-16/FlumeData.1518758501920.tmp
    18/02/16 05:21:48 INFO hdfs.BucketWriter: Renaming /user/ce_rk/kafka/rkkrishnaa3210/18-02-16/FlumeData.1518758501920.tmp to /user/ce_rk/kafka/rkkrishnaa3210/18-02-16/FlumeData.1518758501920
    18/02/16 05:21:48 INFO hdfs.HDFSEventSink: Writer callback called.
    
  • 2021-02-01 03:09

    I had this problem with kafka_2.12-2.3.0.tgz: the consumer finished executing without consuming anything.

    I tried debugging, but no logs were printed.

    The same setup runs fine with kafka_2.12-2.2.2.

    Also make sure to start ZooKeeper and Kafka as described in the quickstart guide; a sketch of those commands follows.
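
    For reference, the quickstart starts the two services in separate terminals, from the Kafka installation directory:

    $ bin/zookeeper-server-start.sh config/zookeeper.properties
    $ bin/kafka-server-start.sh config/server.properties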

  • In my case, broker.id=1 in server.properties was the problem.

    It should be broker.id=0 when you run only a single Kafka server for development.
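
    The relevant line, as this answer suggests for a one-broker development setup:

    # config/server.properties
    broker.id=0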

    Don't forget to remove all logs and restart ZooKeeper and Kafka (a sketch of these steps follows the list):

    • Remove /tmp/kafka-logs (the log.dirs location defined in server.properties)
    • Remove [your_kafka_home]/logs
    • Restart ZooKeeper and Kafka
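
    A minimal sketch of those steps, assuming the stop/start scripts bundled with Kafka and the default log locations ([your_kafka_home] stands for your installation directory):

    $ bin/kafka-server-stop.sh
    $ bin/zookeeper-server-stop.sh
    $ rm -rf /tmp/kafka-logs          # log.dirs from server.properties
    $ rm -rf [your_kafka_home]/logs   # broker application logs
    $ bin/zookeeper-server-start.sh config/zookeeper.properties
    $ bin/kafka-server-start.sh config/server.properties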