Java Apache Kafka Producer Metadata Updater & Retry Logic

Posted by 会有一股神秘感 on 2020-05-24 00:05:55

Question


I am using Spring for Apache Kafka and have created a service that uses a Kafka Producer (org.apache.kafka.clients.producer) via Spring's KafkaTemplate to send messages to a topic. On the target Kafka cluster I have disabled auto topic creation. Using a combination of the producer configurations listed at https://kafka.apache.org/documentation/#producerconfigs, I am successfully controlling how many times a request is retried, the time between retries, etc.

If I provide a topic that does not exist, the request times out when I expect it to (upon reaching the value of max.block.ms). However, after the timeout I continue to get log entries (such as the one below) at the interval set by retry.backoff.ms until 300000 ms / 5 minutes has elapsed.

I've been unable to determine which configuration property on the producer or the brokers can be changed to stop the producer from checking for 5 minutes to see if the topic has been created.

Can someone point me to the correct setting that will allow me to reduce this or have it stop checking once the request has timed out?

Log Entry Example:

WARN  [kafka-producer-network-thread | producer-1] org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater: [Producer clientId=producer-1] Error while fetching metadata with correlation id 9 : {<specified_topic>=UNKNOWN_TOPIC_OR_PARTITION}

Producer Configs Used:

  • delivery.timeout.ms = 5000
  • linger.ms = 1000
  • max.block.ms = 8000
  • request.timeout.ms= 4000
  • max.retry.count = 0
  • retry.backoff.ms = 2000
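
For reference, here is a minimal sketch of how these settings might be applied to the KafkaTemplate through a DefaultKafkaProducerFactory. The class name, bootstrap address, and serializers are assumptions for illustration, and the question's max.retry.count is mapped to the standard retries property, since the Kafka producer has no config with that exact name:

  import java.util.HashMap;
  import java.util.Map;

  import org.apache.kafka.clients.producer.ProducerConfig;
  import org.apache.kafka.common.serialization.StringSerializer;
  import org.springframework.kafka.core.DefaultKafkaProducerFactory;
  import org.springframework.kafka.core.KafkaTemplate;

  public class ProducerSetupSketch {

      public KafkaTemplate<String, String> kafkaTemplate() {
          Map<String, Object> props = new HashMap<>();

          // Placeholder broker address and serializers, for illustration only.
          props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
          props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
          props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

          // The settings listed in the question.
          props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 5000);
          props.put(ProducerConfig.LINGER_MS_CONFIG, 1000);
          props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 8000);
          props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 4000);
          props.put(ProducerConfig.RETRIES_CONFIG, 0);            // assumed intent of "max.retry.count"
          props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 2000);

          DefaultKafkaProducerFactory<String, String> factory =
                  new DefaultKafkaProducerFactory<>(props);
          return new KafkaTemplate<>(factory);
      }
  }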

Answer 1:


The Kafka producer retrieves and caches topic/partition metadata before the first send. It then periodically tries to refresh this metadata: every metadata.max.age.ms (default = 5 minutes) for "good" topics and every retry.backoff.ms for "invalid" ones. These metadata refresh attempts are what you're observing in the log.
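
Both of those intervals are ordinary producer configuration properties. As a minimal sketch, extending the props map from the sketch above (the values and broker address are illustrative assumptions, and note this does not change the 5-minute topic expiry described below):

  // Refresh interval for metadata of topics the producer already knows about
  // (default 300000 ms = 5 minutes).
  props.put(ProducerConfig.METADATA_MAX_AGE_CONFIG, 30000);

  // Backoff between metadata retries for topics that came back UNKNOWN_TOPIC_OR_PARTITION.
  props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 2000);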

To prevent the cache from growing uncontrollably, unused topics are dropped from it after a certain period of time, according to the comments in the source. Currently, this expiry period is hard-coded in ProducerMetadata.java to be 5 minutes.

  public class ProducerMetadata extends Metadata {
      private static final long TOPIC_EXPIRY_NEEDS_UPDATE = -1L;
      // Topics that have gone unused for this long (5 minutes) are expired from the cache.
      static final long TOPIC_EXPIRY_MS = 5 * 60 * 1000;
        ...

You can actually observe all of this activity by setting the producer's log level to DEBUG.



Source: https://stackoverflow.com/questions/60270645/java-apache-kafka-producer-metadata-updater-retry-logic
