We are using Kafka as a strictly ordered queue, so a single-topic/single-partition/single-consumer-group combination is in use. I should be able to use multiple
Changing offsets.retention.minutes will not help. That setting frees the space used by offsets of consumer groups that have gone inactive; assuming you do not have too many inactive group ids, you don't need it.
Change the log.retention.bytes config for the offsets topic and set it to a lower value, as needed. You can change this config using kafka-configs.sh or any other way you are aware of.
Once you limit the topic size, Kafka's log cleaner will kick in when the topic size reaches the threshold and clean it up for you.
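A minimal sketch of that change, assuming a broker reachable at localhost:9092 (at the topic level the property is retention.bytes, the per-topic counterpart of the broker-level log.retention.bytes; the value here is an illustrative 100 MB):

```shell
kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name __consumer_offsets \
  --alter --add-config retention.bytes=104857600
```

Older Kafka versions take --zookeeper instead of --bootstrap-server for this command.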
The offsets.retention.minutes and log.retention.XXX properties will lead to physical removal of records/messages/log segments only if an offsets segment file roll occurs.
In general, the offsets.retention.minutes property dictates that a broker should forget about your consumer group if it has been gone for the specified amount of time, and it can do that even without removing log files from disk.
If you set this value to a relatively low number and check your __consumer_offsets topic while there are no active consumers, over time you will notice something like:
[group,topic,7]::OffsetAndMetadata(offset=7, leaderEpoch=Optional.empty, metadata=, commitTimestamp=1557475923142, expireTimestamp=None)
[group,topic,8]::OffsetAndMetadata(offset=6, leaderEpoch=Optional.empty, metadata=, commitTimestamp=1557475923142, expireTimestamp=None)
[group,topic,6]::OffsetAndMetadata(offset=7, leaderEpoch=Optional.empty, metadata=, commitTimestamp=1557475923142, expireTimestamp=None)
[group,topic,19]::NULL
[group,topic,5]::NULL
[group,topic,22]::NULL
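Entries like the above can be dumped with the console consumer and the offsets message formatter (a sketch, assuming a broker on localhost:9092; the formatter class name has moved between Kafka versions):

```shell
kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic __consumer_offsets --from-beginning \
  --formatter "kafka.coordinator.group.GroupMetadataManager\$OffsetsMessageFormatter"
```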
This illustrates how event-store systems like Kafka work in general: they record new events instead of changing existing ones. The NULL entries are tombstone records appended after the expired offsets, not in-place deletions.
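The compaction semantics behind those tombstones can be sketched with a toy model (plain Python, not Kafka code): a compacted log keeps only the latest value per key, and a None value acts as a tombstone that removes the key.

```python
def compact(log):
    """Toy model of Kafka log compaction: latest value per key wins,
    and a None value (tombstone) removes the key entirely."""
    latest = {}
    for key, value in log:          # later records overwrite earlier ones
        latest[key] = value
    return {k: v for k, v in latest.items() if v is not None}

log = [
    ("group,topic,7", {"offset": 7}),
    ("group,topic,8", {"offset": 6}),
    ("group,topic,7", None),        # tombstone: this offset expired
]
print(compact(log))                 # -> {'group,topic,8': {'offset': 6}}
```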
I am not aware of any Kafka version in which topics are deleted or cleaned up every 60 minutes by default, and I suspect you misinterpreted something in the documentation.
It seems that __consumer_offsets is managed very differently from regular topics. The only way to get __consumer_offsets data deleted is to force its segment files to roll, and that doesn't happen the same way it does for regular log files. While the log files of your regular data topics are rolled automatically as segments fill up, regardless of the log.roll.XXX properties, __consumer_offsets segments don't do that. And if they are never rolled and stay at the initial ...00000 segment, they are not deleted at all. So, it seems the way to reduce your __consumer_offsets files is a combination of:
- a log.roll.XXX property, to force segment rolling;
- offsets.retention.minutes, if you can afford to disconnect your consumers;
- a log.retention.XXX property.
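Pulling those together, a broker-side sketch (these are standard broker config names, but the values are illustrative assumptions, not recommendations):

```properties
# server.properties (illustrative values)
log.roll.hours=1                # force segment files to roll hourly
offsets.retention.minutes=1440  # forget offsets of groups inactive for a day
log.retention.bytes=104857600   # cap retained log size per partition
```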