I'm receiving an exception when starting a Kafka consumer.
org.apache.kafka.clients.consumer.OffsetOutOfRangeException: Offsets out of range with no configured reset policy for partitions: {test-0=29898318}
So you are trying to access offset 29898318 in topic test, partition 0, which is not available right now.
There could be two cases for this: partition 0 may not have that many messages, or offset 29898318 might have already been deleted by the retention period.

To avoid this you can do one of the following:
1. Set the auto.offset.reset config to either earliest or latest. You can find more info regarding this here (see the consumer sketch after the command below).
2. You can get the smallest offset available for a topic partition by running the following Kafka command-line tool:
bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list <broker-ip:9092> --topic <topic-name> --time -2
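
For option 1, here is a minimal Java consumer sketch; the broker address and group id are placeholders I made up, and the topic is the "test" topic from the question:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class OffsetResetExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-ip:9092"); // placeholder broker
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-consumer-group");       // placeholder group id
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Without a reset policy the consumer throws OffsetOutOfRangeException when
        // its stored offset no longer exists; "earliest" jumps to the smallest
        // available offset, "latest" to the newest one.
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test"));
            consumer.poll(Duration.ofSeconds(5)).forEach(record ->
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value()));
        }
    }
}
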
Hope this helps!
I hit this SO question when running a Kafka Streams state store with a specific changelog topic config:
cleanup.policy=compact,delete
If Kafka Streams still has a snapshot file pointing to an offset that no longer exists, the restore consumer is configured to fail; it doesn't fall back to the earliest offset. This scenario can happen when very little data comes in or when the application is down. In both cases, when there's no commit within the changelog retention period, the snapshot file won't be updated. (This happens on a per-partition basis.)
The easiest way to resolve this issue is to stop your Kafka Streams application, remove its local state directory, and restart the application.
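
If you prefer to let the application wipe the state itself, KafkaStreams#cleanUp() deletes the local state directory when called while the instance is not running, and the stores are rebuilt from the changelog topics on the next start. A minimal sketch, assuming a simple pass-through topology; the application id, broker address, and output topic are placeholders:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class CleanUpExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");    // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-ip:9092"); // placeholder
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.StringSerde.class);
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.StringSerde.class);

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("test").to("test-copy"); // placeholder pass-through topology

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        // cleanUp() deletes this instance's local state directory; it may only be
        // called while the application is not running (before start() or after close()).
        // On the next start the state stores are restored from their changelog topics.
        streams.cleanUp();
        streams.start();

        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}

Note that calling cleanUp() unconditionally on every start forces a full restore from the changelog each time, so in practice you would guard it behind a flag or run it only once.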