Kafka Failed to update metadata

Posted by 醉酒当歌 on 2019-12-14 03:53:14

Question


I am using Kafka v0.10.1.1 with Spring Boot.

I am trying to produce a message to a Kafka topic named mobile-user using the producer code below.

The topic mobile-user has 5 partitions and a replication factor of 2. My Kafka settings are attached at the end of the question.

package com.service;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.stereotype.Service;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

import com.shephertz.karma.constant.Constants;
import com.shephertz.karma.exception.KarmaException;
import com.shephertz.karma.util.Utils;

/**
 * @author Prakash Pandey
 */
@Service
public class NotificationSender {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    private static final Logger LOGGER = LoggerFactory.getLogger(NotificationSender.class);

    // Send a message asynchronously and log the outcome via a callback
    public void sendMessage(String topicName, String message) throws KarmaException {
        LOGGER.debug("========topic Name===== " + topicName + "=========message=======" + message);
        ListenableFuture<SendResult<String, String>> result = kafkaTemplate.send(topicName, message);
        result.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onSuccess(SendResult<String, String> result) {
                LOGGER.info("sent message='{}' to partition={} with offset={}", message,
                        result.getRecordMetadata().partition(), result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(Throwable ex) {
                LOGGER.error(Constants.PRODUCER_MESSAGE_EXCEPTION.getValue() + Utils.getStackTrace(ex));

            }
        });

        LOGGER.debug("Payload sent to kafka");
        LOGGER.debug("topic: " + topicName + ", payload: " + message);
    }
}

Problem:

I can successfully send messages to Kafka, but I sometimes receive this error:

org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 5000 ms.
2017-10-25 06:21:48, [ERROR] [karma-unified-notification-dispatcher - NotificationDispatcherSender - onFailure:43] Exception in sending message to kafka for query
org.springframework.kafka.core.KafkaProducerException: Failed to send; nested exception is org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 5000 ms.
        at org.springframework.kafka.core.KafkaTemplate$1.onCompletion(KafkaTemplate.java:255)
        at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:486)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:436)
        at org.springframework.kafka.core.DefaultKafkaProducerFactory$CloseSafeProducer.send(DefaultKafkaProducerFactory.java:156)
        at org.springframework.kafka.core.KafkaTemplate.doSend(KafkaTemplate.java:241)
        at org.springframework.kafka.core.KafkaTemplate.send(KafkaTemplate.java:151)
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 5000 ms.    

Kafka properties:

spring.kafka.producer.retries=5
spring.kafka.producer.batch-size=1000
spring.kafka.producer.request.timeout.ms=60000
spring.kafka.producer.linger.ms=10
spring.kafka.producer.acks=1
spring.kafka.producer.buffer-memory=33554432
spring.kafka.producer.max.block.ms=5000
spring.kafka.topic.retention=86400000

spring.zookeeper.hosts=192.20.1.19:2181,10.20.1.20:2181,10.20.1.26:2181
spring.kafka.session.timeout=30000
spring.kafka.connection.timeout=10000
spring.kafka.topic.partition=5
spring.kafka.message.replication=2

spring.kafka.listener.concurrency=1
spring.kafka.listener.poll-timeout=3000
spring.kafka.consumer.auto-commit-interval=1000
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.max-poll-records=200
spring.kafka.consumer.max-poll-interval-ms=300000

It would be very helpful if you could help me solve this problem. Thanks.

Please note: I am not receiving this error every time. I am usually able to produce a message to the Kafka topic and consume it successfully on the consumer side. The error above appears roughly once per 1000 successfully produced messages.


Answer 1:


Change the default bootstrap-servers property, which Spring Boot's KafkaProperties initializes to localhost:9092:

private List<String> bootstrapServers = new ArrayList<String>(
            Collections.singletonList("localhost:9092"));

to point at your own brokers:

spring.kafka.bootstrap-servers: ${kafka.binder.broker}:${kafka.binder.defaultBrokerPort}
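In other words, set spring.kafka.bootstrap-servers in application.properties so the producer fetches cluster metadata from the real brokers instead of localhost. A minimal sketch, where the broker host:port values are placeholders (the question only lists ZooKeeper hosts, so the actual broker addresses are an assumption you must substitute):

```properties
# Point the Spring Boot Kafka client at the actual brokers.
# The host:port values below are hypothetical -- replace them with yours.
spring.kafka.bootstrap-servers=10.20.1.19:9092,10.20.1.20:9092,10.20.1.26:9092

# Optional: give metadata fetches more headroom than the 5000 ms above.
# max.block.ms bounds how long send() waits for metadata before throwing
# "TimeoutException: Failed to update metadata".
spring.kafka.properties.max.block.ms=60000
```

Raising max.block.ms only buys time for slow metadata fetches; the root fix is a bootstrap-servers list that is reachable from the client.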


Source: https://stackoverflow.com/questions/46932127/kafka-failed-to-update-metadata
