Kafka consumer offsets out of range with no configured reset policy for partitions

暗喜 2020-12-17 14:26

I'm receiving an exception when starting my Kafka consumer:

org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions: {test-0=29898318}

2 Answers
  • 2020-12-17 15:06

    So you are trying to access offset 29898318 in topic test, partition 0, which is not available right now.

    There are two possible causes:

    1. Partition 0 of your topic may never have had that many messages
    2. The message at offset 29898318 might already have been deleted by the retention policy

    To avoid this, you can do one of the following:

    1. Set the auto.offset.reset config to either earliest or latest; you can find more info on it in the Kafka consumer configuration docs (a consumer config sketch follows the command below)
    2. Get the smallest offset still available for a topic partition by running the following Kafka command-line tool

    command (--time -2 requests the earliest available offset):

    bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list <broker-ip:9092> --topic <topic-name> --time -2
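
    If you go with option 1, a minimal sketch of the consumer setup could look like the following (Java client; the broker address, group id and topic name are placeholders, not values from the original question):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class ResetPolicyConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-ip:9092"); // placeholder broker
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder group id
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // Fall back to the earliest valid offset when the committed offset no longer exists;
            // use "latest" instead if you would rather skip ahead to new messages.
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("test"));
                consumer.poll(Duration.ofMillis(500)); // no OffsetOutOfRangeException here anymore
            }
        }
    }

    With auto.offset.reset set, the consumer jumps to the earliest (or latest) valid offset instead of throwing OffsetOutOfRangeException when the requested offset has been deleted.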
    

    Hope this helps!

  • 2020-12-17 15:08

    I hit this SO question when running a Kafka Streams state store with a specific changelog topic config:

    • cleanup.policy=compact,delete
    • retention of 4 days

    If Kafka Streams still has a snapshot file pointing to an offset that no longer exists, the restore consumer is configured to fail rather than fall back to the earliest offset. This scenario can happen when very little data comes in, or when the application is down for a while. In both cases, if there is no commit within the changelog retention period, the snapshot file won't be updated. (This happens on a per-partition basis.)
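
    For reference, a minimal sketch of how such a changelog config can be attached to a state store through the Streams DSL (the store and topic names are illustrative, not taken from the setup above):

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.common.utils.Bytes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.state.KeyValueStore;

    public class ChangelogConfigExample {
        public static void main(String[] args) {
            // Changelog topic settings matching the scenario above: compact+delete with 4-day retention.
            Map<String, String> changelogConfig = new HashMap<>();
            changelogConfig.put("cleanup.policy", "compact,delete");
            changelogConfig.put("retention.ms", String.valueOf(4L * 24 * 60 * 60 * 1000));

            StreamsBuilder builder = new StreamsBuilder();
            builder.table(
                    "input-topic", // hypothetical source topic
                    Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("my-store")
                            .withKeySerde(Serdes.String())
                            .withValueSerde(Serdes.String())
                            .withLoggingEnabled(changelogConfig)); // applied to the store's changelog topic
        }
    }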

    The easiest way to resolve this issue is to stop your Kafka Streams application, remove its local state directory, and restart the application.
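
    If you can change the application code, a sketch of the same recovery done programmatically is to call KafkaStreams#cleanUp() before start(), which deletes the instance's local state directory so the stores are rebuilt from the changelog topics; the application id, broker address and topology below are placeholders:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;

    public class CleanRestart {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");     // placeholder app id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-ip:9092");  // placeholder broker
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

            StreamsBuilder builder = new StreamsBuilder();
            builder.stream("input-topic").to("output-topic"); // hypothetical topology; use your own

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.cleanUp(); // wipes local state; must be called while the instance is not running
            streams.start();

            Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        }
    }

    Note that an unconditional cleanUp() forces a full state restore on every start, so in practice you would only run it once (or guard it behind a flag) to recover from this situation.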
