I recently profiled my Kafka producer Spring Boot application and found many "kafka-producer-network-thread" threads running (47 in total), which never stopped running, even when the application was idle.
When using transactions, the producer cache grows on demand and is not reduced.
If you are producing messages on a listener container (consumer) thread, there is a producer for each topic/partition/consumer group. This is required to solve the zombie fencing problem: if a rebalance occurs and the partition moves to a different instance, the transactional id remains the same, so the broker can properly fence the old (zombie) producer.
If you don't care about zombie fencing (and you can handle duplicate deliveries), set the producerPerConsumerPartition property to false on the DefaultKafkaProducerFactory and the number of producers will be much smaller.
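A minimal sketch of what that configuration might look like. The bean name, bootstrap server address, serializers, and transaction id prefix below are assumptions for illustration; setProducerPerConsumerPartition is the actual DefaultKafkaProducerFactory setter:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public DefaultKafkaProducerFactory<String, String> producerFactory() {
        // Hypothetical producer properties; adjust for your environment.
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        DefaultKafkaProducerFactory<String, String> factory =
                new DefaultKafkaProducerFactory<>(props);
        factory.setTransactionIdPrefix("tx-"); // enables transactions (assumed prefix)
        // Use one producer per thread instead of one per topic/partition/group.
        // Only do this if duplicate deliveries after a rebalance are acceptable.
        factory.setProducerPerConsumerPartition(false);
        return factory;
    }
}
```

With this setting, the factory caches a single transactional producer per thread rather than one per topic/partition/consumer group, so the number of kafka-producer-network-thread instances stays bounded by your thread count.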