Spring Kafka Producer not sending to Kafka 1.0.0 (Magic v1 does not support record headers)


I am using this docker-compose setup for setting up Kafka locally: https://github.com/wurstmeister/kafka-docker/

docker-compose up works fine, and creating topics works fine.

4 Answers
  • 2021-02-14 04:24

    I had a similar issue. Spring Kafka's JsonSerializer (and JsonSerde) adds type information headers by default when serializing values. To prevent this issue, we need to disable adding the type info headers.

    If you are fine with the default JSON serialization, use the following (the key point here is ADD_TYPE_INFO_HEADERS):

    Map<String, Object> props = new HashMap<>(defaultSettings);
    props.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    ProducerFactory<String, Object> producerFactory = new DefaultKafkaProducerFactory<>(props);
    

    But if you need a custom JsonSerializer with a specific ObjectMapper (for example with PropertyNamingStrategy.SNAKE_CASE), you should disable adding the type info headers explicitly on the JsonSerializer itself, because Spring Kafka ignores DefaultKafkaProducerFactory's ADD_TYPE_INFO_HEADERS property when you supply a serializer instance (in my opinion, a bad design decision in Spring Kafka):

    JsonSerializer<Object> valueSerializer = new JsonSerializer<>(customObjectMapper);
    valueSerializer.setAddTypeInfo(false);
    ProducerFactory<String, Object> producerFactory = new DefaultKafkaProducerFactory<>(props, Serdes.String().serializer(), valueSerializer);
    

    Or, if you use JsonSerde:

    Map<String, Object> jsonSerdeProperties = new HashMap<>();
    jsonSerdeProperties.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
    JsonSerde<T> jsonSerde = new JsonSerde<>(serdeClass);
    jsonSerde.configure(jsonSerdeProperties, false); // false = configuring the value serde (isKey = false)
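
    On the consumer side, with the type info headers disabled, JsonDeserializer can no longer read the target type from the record headers, so it has to be told the type explicitly. A minimal sketch, assuming a hypothetical Foo value class and otherwise default consumer settings:

    Map<String, Object> consumerProps = new HashMap<>();
    consumerProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption: local broker
    consumerProps.put(ConsumerConfig.GROUP_ID_CONFIG, "foo-group");               // hypothetical group id
    // Give the deserializer the target type up front instead of relying on type headers
    JsonDeserializer<Foo> valueDeserializer = new JsonDeserializer<>(Foo.class);
    ConsumerFactory<String, Foo> consumerFactory =
            new DefaultKafkaConsumerFactory<>(consumerProps, new StringDeserializer(), valueDeserializer);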
    
  • 2021-02-14 04:25

    If you are using a Kafka version <= 0.10.x.x, you must set JsonSerializer.ADD_TYPE_INFO_HEADERS to false, as below, because record headers are not supported there.

    Map<String, Object> props = new HashMap<>(defaultSettings);
    props.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false);
    props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
    ProducerFactory<String, Object> producerFactory = new DefaultKafkaProducerFactory<>(props);
    

    for your producer factory properties.

    In case you are using a Kafka version > 0.10.x.x, it should just work fine.
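
    For completeness, a minimal sketch of how such a producer factory is typically wired into a KafkaTemplate bean (the bootstrap address below is an assumption for a local broker):

    @Bean
    public KafkaTemplate<String, Object> kafkaTemplate() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumption: local broker
        props.put(JsonSerializer.ADD_TYPE_INFO_HEADERS, false); // keep type headers off for old message formats
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        DefaultKafkaProducerFactory<String, Object> producerFactory = new DefaultKafkaProducerFactory<>(props);
        return new KafkaTemplate<>(producerFactory);
    }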

  • 2021-02-14 04:36

    I just ran a test against that docker image with no problems...

    $ docker ps
    
    CONTAINER ID        IMAGE                    COMMAND                  CREATED             STATUS              PORTS                                                NAMES
    f093b3f2475c        kafkadocker_kafka        "start-kafka.sh"         33 minutes ago      Up 2 minutes        0.0.0.0:32768->9092/tcp                              kafkadocker_kafka_1
    319365849e48        wurstmeister/zookeeper   "/bin/sh -c '/usr/sb…"   33 minutes ago      Up 2 minutes        22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp   kafkadocker_zookeeper_1
    


    import org.springframework.boot.ApplicationRunner;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.context.annotation.Bean;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.kafka.core.KafkaTemplate;
    
    @SpringBootApplication
    public class So47953901Application {
    
        public static void main(String[] args) {
            SpringApplication.run(So47953901Application.class, args);
        }
    
        @Bean
        public ApplicationRunner runner(KafkaTemplate<Object, Object> template) {
            return args -> template.send("foo", "bar", "baz");
        }
    
        @KafkaListener(id = "foo", topics = "foo")
        public void listen(String in) {
            System.out.println(in);
        }
    
    }
    


    spring.kafka.bootstrap-servers=192.168.177.135:32768
    spring.kafka.consumer.auto-offset-reset=earliest
    spring.kafka.consumer.enable-auto-commit=false
    


    2017-12-23 13:27:27.990  INFO 21305 --- [      foo-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned: [foo-0]
    baz
    

    EDIT

    Still works for me...

    spring.kafka.bootstrap-servers=192.168.177.135:32768
    spring.kafka.consumer.auto-offset-reset=earliest
    spring.kafka.consumer.enable-auto-commit=false
    spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
    spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
    


    2017-12-23 15:27:59.997  INFO 44079 --- [           main] o.a.k.clients.producer.ProducerConfig    : ProducerConfig values: 
        acks = 1
        ...
        value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
    
    ...
    
    2017-12-23 15:28:00.071  INFO 44079 --- [      foo-0-C-1] o.s.k.l.KafkaMessageListenerContainer    : partitions assigned: [foo-0]
    baz
    
  • 2021-02-14 04:37

    Solved. The problem was neither the broker, nor some Docker cache, nor the Spring app.

    The problem was a console consumer that I used in parallel for debugging. It was an "old" consumer started with kafka-console-consumer.sh --topic=topic --zookeeper=...

    It actually prints a warning when started: Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].

    A "new" consumer started with the --bootstrap-server option (kafka-console-consumer.sh --topic=topic --bootstrap-server ...) should be used instead, especially when using Kafka 1.0 with JsonSerializer. Note: using an old consumer here can indeed affect the producer.
