Question
I wrote a Kafka Streams application using the Kafka 2.4 client against a Kafka 2.2 broker. My topic and internal topics have 50 partitions.
My Kafka Streams code has a selectKey() DSL operation, and I have 2 million records that all use the same KEY. In the streams configuration I set
props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, RoundRobinPartitioner.class);
so that I can use different partitions even though the keys are exactly the same. Without the round-robin partitioner, as expected, all my messages go to the same partition.
Everything was fine until I realized that with the RoundRobinPartitioner class my messages only reach about 40 partitions; 10 partitions stay idle. What am I missing? With about 2 million records it should use all 50 of them, right?
final KStream<String, IdListExportMessage> exportedDeviceIdsStream =
        builder.stream("deviceIds");

// k: appId::deviceId, v: device
final KTable<String, Device> deviceTable = builder.table(
        "device",
        Consumed.with(Serdes.String(), deviceSerde)
);

// Some DSL operations
.join(
        deviceTable,
        (exportedDevice, device) -> {
            exportedDevice.setDevice(device);
            return exportedDevice;
        },
        Joined.with(Serdes.String(), exportedDeviceSerde, deviceSerde)
)
.selectKey((deviceId, exportedDevice) -> exportedDevice.getDevice().getId())
.to("bulk_consumer");
And
props.put(StreamsConfig.STATE_DIR_CONFIG, "/tmp/kafka-streams");
props.put(StreamsConfig.REPLICATION_FACTOR_CONFIG, 3);
props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 2);
props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 100);
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
props.put("num.stream.threads", 10);
props.put("application.id", applicationId);
RoundRobinPartitioner.java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.utils.Utils;

public class RoundRobinPartitioner implements Partitioner {

    // One counter per topic, incremented for every record to cycle through partitions.
    private final ConcurrentMap<String, AtomicInteger> topicCounterMap = new ConcurrentHashMap<>();

    @Override
    public void configure(Map<String, ?> configs) {
    }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
        List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
        int numPartitions = partitions.size();
        int nextValue = nextValue(topic);
        // Prefer partitions that currently have a leader; otherwise fall back to all partitions.
        List<PartitionInfo> availablePartitions = cluster.availablePartitionsForTopic(topic);
        if (!availablePartitions.isEmpty()) {
            int part = Utils.toPositive(nextValue) % availablePartitions.size();
            return availablePartitions.get(part).partition();
        } else {
            return Utils.toPositive(nextValue) % numPartitions;
        }
    }

    private int nextValue(String topic) {
        AtomicInteger counter = topicCounterMap.computeIfAbsent(topic, k -> new AtomicInteger(0));
        return counter.getAndIncrement();
    }

    @Override
    public void close() {
    }
}
Answer 1:
You cannot change the partitioning with the ProducerConfig.PARTITIONER_CLASS_CONFIG configuration -- this only works for the plain producer.
In Kafka Streams, you need to implement the StreamPartitioner interface and pass your implementation into the corresponding operators, e.g., to("topic", Produced.streamPartitioner(new MyPartitioner())).
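For example, here is a minimal sketch of such a key-agnostic round-robin StreamPartitioner wired into the question's to("bulk_consumer") sink. The class name RoundRobinStreamPartitioner and the variable joinedStream are illustrative; exportedDeviceSerde and IdListExportMessage are assumed to be the types from the question.

import java.util.concurrent.atomic.AtomicInteger;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.processor.StreamPartitioner;

// Sketch: a round-robin partitioner for Kafka Streams sinks that ignores the record key.
public class RoundRobinStreamPartitioner<K, V> implements StreamPartitioner<K, V> {

    private final AtomicInteger counter = new AtomicInteger(0);

    @Override
    public Integer partition(final String topic, final K key, final V value, final int numPartitions) {
        // Ignore the key and cycle through all partitions of the sink topic.
        return (counter.getAndIncrement() & Integer.MAX_VALUE) % numPartitions;
    }
}

// Hypothetical wiring, assuming joinedStream is the stream produced by the join in the question:
joinedStream
        .selectKey((deviceId, exportedDevice) -> exportedDevice.getDevice().getId())
        .to("bulk_consumer",
                Produced.with(Serdes.String(), exportedDeviceSerde,
                        new RoundRobinStreamPartitioner<String, IdListExportMessage>()));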
Source: https://stackoverflow.com/questions/59645127/kafka-streams-roundrobinpartitioner