How to get data from old offset point in Kafka?

Backend · unresolved · 7 answers · 2069 views
Asked by 别跟我提以往 on 2020-12-12 18:12

I am using ZooKeeper to get data from Kafka, and I always receive data from the latest offset. Is there any way to specify a time (offset) from which to read older data?


7 Answers
  • 2020-12-12 18:42

    For Now

    The Kafka FAQ gives an answer to this problem:

    How do I accurately get offsets of messages for a certain timestamp using OffsetRequest?

    Kafka allows querying offsets of messages by time and it does so at segment granularity. The timestamp parameter is the unix timestamp and querying the offset by timestamp returns the latest possible offset of the message that is appended no later than the given timestamp. There are 2 special values of the timestamp - latest and earliest. For any other value of the unix timestamp, Kafka will get the starting offset of the log segment that is created no later than the given timestamp. Due to this, and since the offset request is served only at segment granularity, the offset fetch request returns less accurate results for larger segment sizes.

    For more accurate results, you may configure the log segment size based on time (log.roll.ms) instead of size (log.segment.bytes). However care should be taken since doing so might increase the number of file handlers due to frequent log segment rolling.
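    The segment-granularity semantics quoted above can be modeled with a toy lookup table. This is only a sketch of the behavior described in the FAQ, not the broker's actual implementation; all names here are made up:

```java
import java.util.Map;
import java.util.TreeMap;

// Toy model of segment-granularity offset lookup: each log segment is
// recorded as (creation timestamp -> starting offset). A query by
// timestamp returns the starting offset of the latest segment created
// no later than the given timestamp.
class SegmentOffsetLookup {
    private final TreeMap<Long, Long> segmentStarts = new TreeMap<>();

    public void addSegment(long createdAtMs, long startOffset) {
        segmentStarts.put(createdAtMs, startOffset);
    }

    // Starting offset of the segment created no later than timestampMs,
    // or -1 if every segment is newer than the timestamp.
    public long offsetBefore(long timestampMs) {
        Map.Entry<Long, Long> entry = segmentStarts.floorEntry(timestampMs);
        return entry == null ? -1 : entry.getValue();
    }

    public static void main(String[] args) {
        SegmentOffsetLookup lookup = new SegmentOffsetLookup();
        lookup.addSegment(1000L, 0L);    // segment created at t=1000 starts at offset 0
        lookup.addSegment(2000L, 500L);  // segment created at t=2000 starts at offset 500
        lookup.addSegment(3000L, 900L);  // segment created at t=3000 starts at offset 900
        // t=2500 falls inside the second segment, so we get its start: 500
        System.out.println(lookup.offsetBefore(2500L));
    }
}
```

    The coarseness is visible here: any timestamp between 2000 and 2999 maps to offset 500, even if the message you actually want is hundreds of offsets later in that segment. That is why smaller, time-rolled segments (log.roll.ms) give more accurate answers.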


    Future Plan

    Kafka will add timestamp to message format. Refer to

    https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Enriched+Message+Metadata

  • 2020-12-12 18:53

    Refer to the Kafka configuration doc (http://kafka.apache.org/08/configuration.html) for your question about the smallest and largest values of the offset parameter (auto.offset.reset in the 0.8 consumer config).

    By the way, while exploring Kafka I wondered how to replay all messages for a consumer — that is, how a consumer group that has already polled all the messages can fetch them again.

    One way to achieve this is to delete the group's offset data from ZooKeeper, using the kafka.utils.ZkUtils class to delete a node. Usage:

    ZkUtils.maybeDeletePath("${zkhost:zkport}", "/consumers/${group.id}");
    
  • 2020-12-12 18:55

    Using the KafkaConsumer, you can use seek(), seekToBeginning() and seekToEnd() to move around in the stream.

    https://kafka.apache.org/0100/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#seekToBeginning(java.util.Collection)

    Also, if no partitions are provided, seekToBeginning will seek to the first offset for all of the currently assigned partitions.

  • 2020-12-12 18:58

    Consumers always belong to a group, and for each partition, ZooKeeper keeps track of that consumer group's progress (the committed offset) in the partition.

    To fetch from the beginning, you can delete all the data associated with that progress, as Hussain described:

    ZkUtils.maybeDeletePath("${zkhost:zkport}", "/consumers/${group.id}");
    

    You can also specify the offset of the partition you want, as shown in core/src/main/scala/kafka/tools/UpdateOffsetsInZK.scala:

    ZkUtils.updatePersistentPath(zkClient, topicDirs.consumerOffsetDir + "/" + partition, offset.toString)
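    For reference, the old high-level consumer keeps each group's committed offset under a per-partition ZooKeeper node. A small helper to build that path might look like this (the /consumers/&lt;group&gt;/offsets/&lt;topic&gt;/&lt;partition&gt; layout is the pre-0.9 convention; the helper class itself is hypothetical):

```java
// Builds the ZooKeeper path where the pre-0.9 high-level consumer keeps
// the committed offset for one partition of one topic.
class ConsumerOffsetPath {
    public static String of(String groupId, String topic, int partition) {
        return "/consumers/" + groupId + "/offsets/" + topic + "/" + partition;
    }
}
```

    For example, ConsumerOffsetPath.of("my-group", "test", 2) yields the node you would update or delete for partition 2 of topic "test".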
    

    However, the offset is not indexed by time; it is simply a monotonically increasing sequence within each partition.

    If your messages contain a timestamp (and beware: this timestamp has nothing to do with the moment Kafka received the message), you can build a rough index by stepping through the partition, incrementing the offset by N each time, and storing tuples like (topic X, partition 2, offset 100, timestamp) somewhere.

    When you want to retrieve entries from a specified moment in time, you can apply a binary search to your rough index until you find the entry you want and fetch from there.
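    The rough-index idea above can be sketched as a plain binary search over sampled (offset, timestamp) pairs. The names and structure here are illustrative, not from any Kafka API:

```java
import java.util.ArrayList;
import java.util.List;

// A coarse time index built by sampling every N-th message of a partition:
// entries are {offset, timestampMs} pairs, appended in increasing order.
class RoughTimeIndex {
    private final List<long[]> samples = new ArrayList<>(); // {offset, timestampMs}

    public void addSample(long offset, long timestampMs) {
        samples.add(new long[] {offset, timestampMs});
    }

    // Binary-search for the largest sampled offset whose timestamp is
    // <= targetMs. Fetching then starts at that offset and scans forward.
    // Returns 0 when the target predates all samples.
    public long startOffsetFor(long targetMs) {
        int lo = 0, hi = samples.size() - 1;
        long best = 0;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            if (samples.get(mid)[1] <= targetMs) {
                best = samples.get(mid)[0]; // candidate; look for a later one
                lo = mid + 1;
            } else {
                hi = mid - 1;
            }
        }
        return best;
    }
}
```

    Because the index is coarse, the returned offset is only a starting point; the consumer still has to scan forward from it, comparing message timestamps, until it reaches the desired moment.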

  • 2020-12-12 18:58

    Have you tried this?

    bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

    It would print out all the messages for the given topic, "test" in this example.

    More details in the quickstart: https://kafka.apache.org/quickstart

  • 2020-12-12 19:01

    The Kafka protocol doc is a great resource for playing with requests, responses, offsets and messages: https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol. You can use the SimpleConsumer example, where the following code demonstrates fetching from a given offset:

    FetchRequest req = new FetchRequestBuilder()
            .clientId(clientName)
            .addFetch(a_topic, a_partition, readOffset, 100000)
            .build();
    FetchResponse fetchResponse = simpleConsumer.fetch(req);

    Set readOffset to the initial offset you want to start from, but check the maximum offset as well: the code above returns a limited number of messages, bounded by the fetch size passed as the last parameter of addFetch.
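    The bookkeeping this implies — advancing readOffset past what each fetch returned until you reach the partition's end offset — can be sketched independently of the SimpleConsumer API. The fetch here is simulated with plain offsets; in real code each iteration would be a simpleConsumer.fetch call:

```java
import java.util.ArrayList;
import java.util.List;

// Simulates paging through a partition in bounded fetches: each "fetch"
// returns at most maxBatch messages starting at readOffset, and the
// caller advances readOffset past the last message it consumed.
class FetchPager {
    // Drain offsets [startOffset, logEndOffset) in batches of maxBatch.
    public static List<Long> drain(long startOffset, long logEndOffset, int maxBatch) {
        List<Long> consumed = new ArrayList<>();
        long readOffset = startOffset;
        while (readOffset < logEndOffset) {
            long batchEnd = Math.min(readOffset + maxBatch, logEndOffset);
            for (long o = readOffset; o < batchEnd; o++) {
                consumed.add(o); // in real code: process the fetched message
            }
            readOffset = batchEnd; // next fetch starts after the last message
        }
        return consumed;
    }
}
```

    The key invariant is that readOffset always moves to one past the last consumed message, so no message is skipped or re-read regardless of how small the fetch size is.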
