Can anyone tell me how to read messages from the beginning of a topic every time I run my consumer, using the Kafka Consumer API?
This works with the 0.9.x consumer. Basically, when you create a consumer you need to assign it a consumer group id using the property ConsumerConfig.GROUP_ID_CONFIG. Generate the consumer group id randomly every time you start the consumer, doing something like this:
properties.put(ConsumerConfig.GROUP_ID_CONFIG, UUID.randomUUID().toString());
(properties is an instance of java.util.Properties that you will pass to the constructor new KafkaConsumer(properties).)
Generating the group id randomly means that the new consumer group doesn't have any offset associated with it in Kafka. So what we have to do after this is to set a policy for this scenario. As the documentation for the auto.offset.reset property says:
What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted):
- earliest: automatically reset the offset to the earliest offset
- latest: automatically reset the offset to the latest offset
- none: throw exception to the consumer if no previous offset is found for the consumer's group
- anything else: throw exception to the consumer.
So from the options listed above we need to choose the earliest policy, so that the new consumer group starts from the beginning every time.
Your Java code will look something like this:
properties.put(ConsumerConfig.GROUP_ID_CONFIG, UUID.randomUUID().toString());
properties.put(ConsumerConfig.CLIENT_ID_CONFIG, "your_client_id");
properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
consumer = new KafkaConsumer(properties);
The only thing you still need to figure out is this: when you have multiple consumer instances that belong to the same consumer group but are distributed, how do you generate one random id and share it between those instances so they all join the same consumer group?
Hope it helps!
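Putting the pieces together, here is a minimal self-contained sketch of this approach; the broker address, topic name, and String deserializers are assumptions for illustration:
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class FromBeginningConsumer {
    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker address
        properties.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        properties.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Fresh group id on every run, so no committed offsets exist for this group
        properties.put(ConsumerConfig.GROUP_ID_CONFIG, UUID.randomUUID().toString());
        // With no committed offsets, fall back to the earliest available offset
        properties.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // assumed topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}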
So for me, what worked was a combination of the suggestions above. The key change was to include
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
and to have a randomly generated group id each time. But this alone didn't work for me: for some reason, the first time I polled the consumer it never got any records. I had to hack it to get it to work -
consumer.poll(0); // without this the below statement never got any records
final ConsumerRecords<Long, String> consumerRecords = consumer.poll(Duration.ofMillis(100));
I'm new to Kafka and have no idea why this is happening, but for anyone else still trying to get this to work, I hope this helps.
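For what it's worth, the likely cause is that the first poll() also drives the group join and partition assignment, so it can return before any records are fetched. A hedged alternative to the deprecated poll(0) is to poll briefly until the assignment appears; the topic name and Long/String types below just mirror the snippet above:
consumer.subscribe(Collections.singletonList("my-topic")); // assumed topic name
// poll() both fetches records and drives the join/rebalance protocol;
// loop until this consumer actually owns some partitions.
while (consumer.assignment().isEmpty()) {
    consumer.poll(Duration.ofMillis(100));
}
final ConsumerRecords<Long, String> consumerRecords = consumer.poll(Duration.ofMillis(100));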
To always read from offset 0 without creating a new groupId every time:
// ... Assuming the props have been set properly,
// ... with enable.auto.commit and auto.offset.reset left at their defaults.
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList(topic));
consumer.poll(0); // without this, the assignment will be empty.
consumer.assignment().forEach(t -> {
    System.out.printf("Set %s to offset 0%n", t.toString());
    consumer.seek(t, 0);
});
while (true) {
    // ... consumer polls messages as usual, e.g.:
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    records.forEach(r -> System.out.println(r.value()));
}
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
If you simply avoid committing any offsets, the consumer will always reset to the beginning.
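A minimal sketch of that idea; the fixed group id, broker address, topic name, and String deserializers are all placeholders:
Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-fixed-group");          // same id on every run
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");   // never auto-commit...
props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // ...so each run falls back to earliest

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("my-topic")); // assumed topic
// As long as commitSync()/commitAsync() is never called, each restart finds no
// committed offset for the group and starts from the earliest available offset.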
One possible solution is to use an implementation of ConsumerRebalanceListener while subscribing to one or more topics. The ConsumerRebalanceListener provides callback methods that are invoked when partitions are assigned to or revoked from a consumer. The following code sample illustrates this:
public class SkillsConsumer {

    private String topic;
    private KafkaConsumer<String, String> consumer;
    private static final int POLL_TIMEOUT = 5000;

    public SkillsConsumer(String topic) {
        this.topic = topic;
        Properties properties = ConsumerUtil.getConsumerProperties();
        properties.put("group.id", "consumer-skills");
        this.consumer = new KafkaConsumer<>(properties);
        this.consumer.subscribe(Collections.singletonList(this.topic),
                new PartitionOffsetAssignerListener(this.consumer));
    }
}
public class PartitionOffsetAssignerListener implements ConsumerRebalanceListener {

    private final KafkaConsumer<String, String> consumer;

    public PartitionOffsetAssignerListener(KafkaConsumer<String, String> kafkaConsumer) {
        this.consumer = kafkaConsumer;
    }

    @Override
    public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
    }

    @Override
    public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
        // Read all newly assigned partitions from the beginning.
        // Note: seekToBeginning takes a Collection<TopicPartition>, not a single partition.
        consumer.seekToBeginning(partitions);
    }
}
Now whenever the partitions are assigned to the consumer, each partition will be read from the beginning.
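A brief usage sketch (the topic name here is just an illustration):
SkillsConsumer skillsConsumer = new SkillsConsumer("skills"); // assumed topic name
// Every time a rebalance assigns partitions to this consumer,
// the listener rewinds them to the beginning before records are fetched.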
If you are using the Java consumer API, more specifically org.apache.kafka.clients.consumer.Consumer, you can try the seek* methods:
consumer.seekToBeginning(consumer.assignment());
Here, consumer.assignment() returns all the partitions currently assigned to this consumer, and seekToBeginning will start from the earliest offset for the given collection of partitions.
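One caveat, as noted in other answers: with subscribe(), assignment() stays empty until a first poll() has joined the group, so a hedged sketch would seek only after an initial poll (topic name assumed):
consumer.subscribe(Collections.singletonList("my-topic")); // assumed topic
consumer.poll(Duration.ofMillis(100));           // triggers partition assignment
consumer.seekToBeginning(consumer.assignment()); // now assignment() is non-empty
ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));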