KAFKA - What is the reason for getting ProducerFencedException during producer.send?

Asked by 执笔经年 on 2021-01-13 19:56

Trying to load around 50K messages into a Kafka topic. During the first few runs I get the exception below, but not every time.

org.apache.kafka.common.Kafka         


        
4 Answers
  • 2021-01-13 20:33

    There was a race condition in my producer initialization code. I fixed it by changing the producer map to a ConcurrentHashMap to ensure thread safety.
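    A minimal sketch of that fix, assuming producers are keyed by transactional id (the names here are hypothetical, and a plain String stands in for a KafkaProducer so the example is self-contained):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: ConcurrentHashMap.computeIfAbsent runs the factory at most once
// per key, even under concurrent access, so two threads can never create
// two producers with the same transactional id (which would fence one).
public class ProducerRegistry {
    private static final Map<String, String> PRODUCERS = new ConcurrentHashMap<>();
    static final AtomicInteger CREATED = new AtomicInteger();

    static String producerFor(String txId) {
        return PRODUCERS.computeIfAbsent(txId, id -> {
            CREATED.incrementAndGet();
            return "producer-for-" + id; // stand-in for new KafkaProducer<>(...)
        });
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> producerFor("tx-1"));
        Thread t2 = new Thread(() -> producerFor("tx-1"));
        t1.start(); t2.start(); t1.join(); t2.join();
        System.out.println(CREATED.get()); // prints 1: only one producer was created
    }
}
```

    With a plain HashMap and a check-then-put, both threads could pass the "absent" check and each build a producer with the same id; computeIfAbsent removes that window.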

  • 2021-01-13 20:33

    When running multiple instances of the application, transactional.id must be the same on all instances to satisfy fencing zombies when producing records on a listener container thread. However, when producing records using transactions that are not started by a listener container, the prefix has to be different on each instance.

    https://docs.spring.io/spring-kafka/reference/html/#transaction-id-prefix
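    As a sketch, in a Spring Boot application the prefix can be set per instance via configuration (the property name comes from the linked Spring Kafka reference; the `${my.instance-id}` placeholder is an assumed property you would define per deployment):

```properties
# Hypothetical example: give each instance its own prefix when transactions
# are started outside a listener container, so their transactional.ids
# never collide and fence each other.
spring.kafka.producer.transaction-id-prefix=tx-${my.instance-id}-
```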

  • 2021-01-13 20:50

    Caused by: org.apache.kafka.common.errors.ProducerFencedException: Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.

    This exception message is not very helpful. I believe that it is trying to say that the broker no longer has any record of the transaction-id that is being sent by the client. This can either be because:

    • Someone else was using the same transaction-id and committed it already. In my experience, this is less likely unless you are sharing transaction-ids between clients. We ensure that our ids are unique using UUID.randomUUID().
    • The transaction timed out and was aborted and forgotten by the broker.

    In our case, we occasionally hit transaction timeouts, which generated this exception. Two properties govern how long the broker remembers a transaction before aborting it and forgetting about it.

    • transaction.max.timeout.ms -- A broker property that specifies the maximum number of milliseconds until a transaction is aborted and forgotten. Default in many Kafka versions seems to be 900000 (15 minutes). Documentation from Kafka says:

      The maximum allowed timeout for transactions. If a client’s requested transaction time exceeds this, then the broker will return an error in InitProducerIdRequest. This prevents a client from too large of a timeout, which can stall consumers reading from topics included in the transaction.

    • transaction.timeout.ms -- A producer client property that sets the timeout in milliseconds when a transaction is created. Default in many Kafka versions seems to be 60000 (1 minute). Documentation from Kafka says:

      The maximum amount of time in ms that the transaction coordinator will wait for a transaction status update from the producer before proactively aborting the ongoing transaction.

    If the transaction.timeout.ms property set in the client exceeds the transaction.max.timeout.ms property in the broker, the producer will immediately throw something like the following exception:

    org.apache.kafka.common.KafkaException: Unexpected error in InitProducerIdResponse
    The transaction timeout is larger than the maximum value allowed by the broker 
    (as configured by transaction.max.timeout.ms).
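    The relationship between the two settings can be sketched in plain Java (the property names `transaction.timeout.ms` and `transaction.max.timeout.ms` are real Kafka configs; the helper method and values here are illustrative, not a Kafka API):

```java
import java.util.Properties;
import java.util.UUID;

// Sketch: the producer's transaction.timeout.ms must not exceed the
// broker's transaction.max.timeout.ms, or initTransactions() fails
// with an error in InitProducerIdResponse.
public class TxTimeoutCheck {
    public static boolean fitsBrokerLimit(long clientTimeoutMs, long brokerMaxMs) {
        return clientTimeoutMs <= brokerMaxMs;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        // unique transactional id per producer, as suggested above
        props.put("transactional.id", "my-tx-" + UUID.randomUUID());
        props.put("transaction.timeout.ms", "60000"); // client default: 1 minute

        long brokerMax = 900_000L; // broker default: 15 minutes
        long clientTimeout = Long.parseLong(props.getProperty("transaction.timeout.ms"));
        System.out.println(fitsBrokerLimit(clientTimeout, brokerMax)); // prints true
    }
}
```

    Raising the client timeout above the broker maximum (e.g. to 1,000,000 ms against the 900,000 ms default) is exactly the misconfiguration that produces the KafkaException shown above.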
    
  • 2021-01-13 20:55

    I wrote a unit test to reproduce this. From this piece of Java code you can easily see how it happens with two producers sharing the same transactional id.

      @Test
      public void SendOffset_TwoProducerDuplicateTrxId_ThrowException() {
        // create two producers with the same transactional id
        Producer<String, String> producer1 = KafkaBuilder.buildProducer(trxId, servers);
        Producer<String, String> producer2 = KafkaBuilder.buildProducer(trxId, servers);
    
        offsetMap.put(new TopicPartition(topic, 0), new OffsetAndMetadata(1000));
    
        // initialize and start a transaction on each producer;
        // producer2's initTransactions() bumps the epoch and fences producer1
        sendOffsetBegin(producer1);
        sendOffsetBegin(producer2);
    
        try {
          // committing via the fenced producer is expected to throw
          sendOffsetEnd(producer1);
          Assert.fail("expected ProducerFencedException");
        } catch (ProducerFencedException expected) {
          // producer1 was fenced by producer2 reusing the transactional id
        }
      }
    
      private void sendOffsetBegin(Producer<String, String> producer) {
        producer.initTransactions();
        producer.beginTransaction();
        producer.sendOffsetsToTransaction(offsetMap, consumerGroup);
      }
    
      private void sendOffsetEnd(Producer<String, String> producer) {
        producer.commitTransaction();
      }
    
    