```java
KeyedMessage<String, byte[]> keyedMessage = new KeyedMessage<>(request.getRequestTopicName(), SerializationUtils.serialize(message));
```
Keys are mostly useful/necessary if you require strong ordering for a key and are developing something like a state machine. If you require that messages with the same key (for instance, a unique id) are always seen in the correct order, attaching a key to messages will ensure messages with the same key always go to the same partition in a topic. Kafka guarantees order within a partition, but not across partitions in a topic, so not providing a key, which results in round-robin distribution across partitions, will not maintain such order.
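To make this concrete, here is a minimal sketch using the current Java producer client; the topic name, key, and broker address are illustrative placeholders:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedProducerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Keyed records: everything for "user-42" hashes to the same
            // partition, so the relative order of these two events is preserved.
            producer.send(new ProducerRecord<>("events", "user-42", "login"));
            producer.send(new ProducerRecord<>("events", "user-42", "logout"));

            // Null-key record: the producer spreads these across partitions,
            // so no ordering relative to other records is guaranteed.
            producer.send(new ProducerRecord<>("events", "some-event"));
        }
    }
}
```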
In the case of a state machine, keys can be used with log.cleaner.enable to deduplicate entries with the same key. In that case, Kafka assumes that your application only cares about the most recent instance of a given key, and the log cleaner deletes older duplicates of a given key only if the key is not null. This form of log compaction is controlled by the log.cleaner.delete.retention.ms property and requires keys.
Alternatively, the more common property log.retention.hours, which is enabled by default, works by deleting complete segments of the log that are out of date. In this case keys do not have to be provided. Kafka will simply delete chunks of the log that are older than the given retention period.
That's all to say, if you've enabled log compaction or require strict ordering for messages with the same key, then you should definitely be using keys. Otherwise, null keys may provide better distribution and prevent potential hot-spotting issues in cases where some keys appear more often than others.
tl;dr No, a key is not required as part of sending messages to Kafka. But...
In addition to the very helpful accepted answer, I would like to add a few more details.
By default, Kafka uses the key of the message to select the partition of the topic it writes to. This is done in the DefaultPartitioner by

```java
org.apache.kafka.common.utils.Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
```
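As a rough illustration of that computation (the key string and partition count below are made-up values), the same public helpers from Utils can be called directly:

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.utils.Utils;

public class DefaultPartitionDemo {
    public static void main(String[] args) {
        byte[] keyBytes = "customer-1".getBytes(StandardCharsets.UTF_8); // hypothetical key
        int numPartitions = 6; // assumed partition count of the topic

        // The same computation the DefaultPartitioner applies to keyed records:
        int partition = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        System.out.println("key 'customer-1' -> partition " + partition);
    }
}
```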
If no key is provided, Kafka distributes the data across partitions in a round-robin fashion.
In Kafka, it is possible to create your own partitioner by implementing the Partitioner interface. For this, you need to implement the partition method, which has the signature:

```java
int partition(String topic,
              Object key,
              byte[] keyBytes,
              Object value,
              byte[] valueBytes,
              Cluster cluster)
```
Usually, the key of a Kafka message is used to select the partition, and the return value (of type int) is the partition number. Without a key, you need to rely on the value, which might be much more complex to process.
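As a sketch of what such an implementation could look like (the routing rule and class name are invented for illustration, and the code assumes keyed records on an existing topic with at least two partitions):

```java
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

// Hypothetical partitioner: reserves partition 0 for "VIP" keys and
// hashes all other keys across the remaining partitions.
public class VipPartitioner implements Partitioner {

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionCountForTopic(topic);
        if ("VIP".equals(key)) {
            return 0; // reserved partition for VIP traffic
        }
        // Fall back to the same murmur2 hashing the DefaultPartitioner uses,
        // shifted past the reserved partition.
        return 1 + Utils.toPositive(Utils.murmur2(keyBytes)) % (numPartitions - 1);
    }

    @Override
    public void configure(Map<String, ?> configs) {}

    @Override
    public void close() {}
}
```

A producer would pick this up via the partitioner.class configuration property.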
As stated in the given answer, Kafka guarantees ordering of messages only at the partition level.
Let's say you want to store financial transactions for your customers in a Kafka topic with two partitions. The messages could look like this (key:value):

```
null:{"customerId": 1, "changeInBankAccount": +200}
null:{"customerId": 2, "changeInBankAccount": +100}
null:{"customerId": 1, "changeInBankAccount": +200}
null:{"customerId": 1, "changeInBankAccount": -1337}
null:{"customerId": 1, "changeInBankAccount": +200}
```
As we have not defined a key, the two partitions will presumably look like this:
```
// partition 0
null:{"customerId": 1, "changeInBankAccount": +200}
null:{"customerId": 1, "changeInBankAccount": +200}
null:{"customerId": 1, "changeInBankAccount": +200}

// partition 1
null:{"customerId": 2, "changeInBankAccount": +100}
null:{"customerId": 1, "changeInBankAccount": -1337}
```
Your consumer reading that topic could end up telling you that the balance on the account is 600 at a particular time, although that was never the case, simply because it read all messages in partition 0 prior to the messages in partition 1.
With a sensible key (like the customerId), this could be avoided, as the partitioning would look like this:
```
// partition 0
1:{"customerId": 1, "changeInBankAccount": +200}
1:{"customerId": 1, "changeInBankAccount": +200}
1:{"customerId": 1, "changeInBankAccount": -1337}
1:{"customerId": 1, "changeInBankAccount": +200}

// partition 1
2:{"customerId": 2, "changeInBankAccount": +100}
```
Without a key as part of your messages, you will not be able to make use of the topic configuration cleanup.policy set to compact, as log compaction requires every message to have a key. According to the documentation, "log compaction ensures that Kafka will always retain at least the last known value for each message key within the log of data for a single topic partition." This nice and helpful setting will not be available without keys.
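For instance, a compacted topic could be created programmatically with the admin client; the topic name, partition count, and replication factor here are placeholders:

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CreateCompactedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address

        try (AdminClient admin = AdminClient.create(props)) {
            // cleanup.policy=compact keeps the latest value per key,
            // which only makes sense when every message carries a key.
            NewTopic topic = new NewTopic("customer-balances", 2, (short) 1)
                    .configs(Map.of(TopicConfig.CLEANUP_POLICY_CONFIG,
                                    TopicConfig.CLEANUP_POLICY_COMPACT));
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```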
In real-life use cases, the key of a Kafka message can have a huge influence on your performance and on the clarity of your business logic.
A key can, for example, be used naturally for partitioning your data. As you can control your consumers to read from particular partitions, this could serve as an efficient filter. Also, the key can include some metadata on the actual value of the message that helps you control the subsequent processing. Keys are usually smaller than values, so it is more convenient to parse a key than the whole value. At the same time, you can apply the same serializations and schema registration to the key as you do to the value.
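The partition-as-filter idea could look roughly like this on the consumer side (topic name and partition number are assumptions; a consumer that assigns partitions manually does not need a consumer group):

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class PartitionFilterConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Read only partition 0; with key-based partitioning this acts
            // as a coarse filter for the keys that hash to this partition.
            consumer.assign(List.of(new TopicPartition("transactions", 0)));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("key=%s value=%s%n", record.key(), record.value());
            }
        }
    }
}
```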
As a note, there is also the concept of headers that can be used to store information; see the documentation.
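A minimal sketch of attaching a header (the header name and value are invented for illustration):

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.producer.ProducerRecord;

public class HeaderExample {
    public static void main(String[] args) {
        ProducerRecord<String, String> record =
                new ProducerRecord<>("transactions", "1", "{\"customerId\": 1, \"changeInBankAccount\": +200}");
        // Headers carry metadata alongside the record without affecting
        // partitioning or compaction, which are driven by the key.
        record.headers().add("source-system", "web-frontend".getBytes(StandardCharsets.UTF_8));
        System.out.println(record.headers());
    }
}
```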
The key of a message is basically used to obtain message ordering for a specific field: messages with the same key go to the same partition, where Kafka guarantees order.