kafka-producer-api

How does @SendTo send the message to the related topic?

瘦欲@ submitted on 2019-12-11 07:28:15
Question: I am using ReplyingKafkaTemplate in my REST controller to return a synchronous response. I am also setting the REPLY_TOPIC header. For the listener microservice part:

    @KafkaListener(topics = "${kafka.topic.request-topic}")
    @SendTo
    public Model listen(Model<SumModel, SumResp> request) throws InterruptedException {
        SumModel model = request.getRequest();
        int sum = model.getNumber1() + model.getNumber2();
        SumResp resp = new SumResp(sum);
        request.setReply(resp);
        request.setAdditionalProperty("sum", sum);
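As a rough answer to the title question: spring-kafka routes the listener's return value to the topic named in the REPLY_TOPIC header that ReplyingKafkaTemplate stamps on the request; the @SendTo value only acts as a fallback when that header is absent. Below is a minimal plain-Java sketch of that routing decision (no Spring involved; the class and method names are illustrative, only the "kafka_replyTopic" header key comes from spring-kafka):

```java
import java.util.Map;

// Illustrative model of how a @SendTo reply is routed: the REPLY_TOPIC header
// set by ReplyingKafkaTemplate wins; the @SendTo value is only a fallback.
public class ReplyRouting {
    static final String REPLY_TOPIC_HEADER = "kafka_replyTopic";

    // Returns the topic the framework would send the listener's return value to.
    static String chooseReplyTopic(Map<String, String> headers, String sendToValue) {
        String fromHeader = headers.get(REPLY_TOPIC_HEADER);
        return (fromHeader != null) ? fromHeader : sendToValue;
    }

    public static void main(String[] args) {
        // Header present -> reply goes to "replies", not the @SendTo fallback.
        System.out.println(chooseReplyTopic(Map.of(REPLY_TOPIC_HEADER, "replies"), "fallback"));
    }
}
```

This is also why a bare @SendTo (with no value, as in the question) still works: the header carries the destination.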

Implement filtering for Kafka messages

别来无恙 submitted on 2019-12-11 06:56:42
Question: I have started using Kafka recently and am evaluating it for a few use cases. If we wanted to give consumers (subscribers) the ability to filter messages based on message content, what is the best approach? Say a producer exposes a topic named "Trades" carrying trade details such as market name, creation date, price, etc. Some consumers are interested in trades for specific markets, and others are interested in trades after a certain date, etc. (content
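Since brokers do not filter by message content, the usual pattern is for each consumer to apply its own predicate after polling (or to use Kafka Streams `filter()` to materialize a filtered topic). A minimal plain-Java sketch of consumer-side filtering; the Trade record and its field names are invented for illustration, not taken from the question:

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Hypothetical trade record and a consumer-side content filter. Kafka itself
// does not filter by message content, so each consumer applies its own
// predicate to the batch it polled.
public class TradeFilter {
    record Trade(String market, int year, double price) {}

    static List<Trade> filter(List<Trade> polled, Predicate<Trade> interest) {
        return polled.stream().filter(interest).collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Trade> batch = List.of(new Trade("NYSE", 2019, 10.0), new Trade("LSE", 2018, 7.5));
        // A consumer interested only in NYSE trades keeps one of the two records.
        System.out.println(filter(batch, t -> t.market().equals("NYSE")).size()); // prints 1
    }
}
```

If many consumers share the same filter, keying messages by market (so filters align with partitions) or splitting into per-market topics avoids every consumer reading the full stream.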

Apache Kafka Producer inside php-fpm - too many producer connections

喜你入骨 submitted on 2019-12-11 06:34:45
Question: The use case: 8 servers with 300 concurrent php-fpm child processes each, producing records to Apache Kafka. Each process produces 1 Kafka record, 1000 records per second overall. Why do we need so many connections? We have a web API that gets 60K calls per minute. Those requests do many things and are processed by thousands of php-fpm web workers (unfortunately). As part of request handling, we produce events to Kafka. The problem: I cannot find a way to persist connections between php-fpm web
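A common workaround for short-lived workers (like php-fpm children) is to hand records to one long-lived relay process that owns a single broker connection, rather than each worker opening its own. The sketch below models that pattern in plain Java with a shared queue and a single drainer; the class names are invented, and the counter stands in for a real producer.send():

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of the relay pattern: many short-lived workers enqueue records,
// one long-lived sender owns the single broker connection and drains them.
public class ProducerRelay {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    final AtomicInteger sent = new AtomicInteger();

    void enqueue(String record) { queue.add(record); }  // called by many workers

    void drainOnce() {            // the single sender thread's loop body
        String rec;
        while ((rec = queue.poll()) != null) {
            sent.incrementAndGet();   // stand-in for producer.send(rec)
        }
    }

    public static void main(String[] args) {
        ProducerRelay relay = new ProducerRelay();
        for (int i = 0; i < 1000; i++) relay.enqueue("event-" + i);  // 1000 workers, 1 record each
        relay.drainOnce();
        System.out.println(relay.sent.get()); // prints 1000
    }
}
```

In a PHP deployment the relay is typically a local daemon or a Kafka REST proxy reached over a unix socket or localhost HTTP, so the 2400 workers share a handful of broker connections instead of one each.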

.NET Kerberos from Windows to Linux (different realms)

半世苍凉 submitted on 2019-12-11 06:10:45
Question: If I have different Kerberos realms, the broker sits on Linux, and the producer sits on Windows, how do I enable connectivity using Kerberos? I have a valid keytab, and here is the krb5. Please see the marked answer to this question in this link: Connect to Kafka on Unix from Windows with Kerberos. The question below is a continuation of the 3rd scenario explained by @Samson. Answering some of Samson's suggestions: 1. The default realm is added in krb5. 2. There is one-way trust; the broker domain trusts my domain.
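For a cross-realm setup like this, the client's krb5 config generally needs both realms, a domain_realm mapping for the broker's hosts, and (with one-way trust) a capaths entry describing the trust path. The fragment below is a hedged illustration only; every realm and host name is an invented placeholder, not taken from the question:

```
[libdefaults]
  default_realm = WINDOWS.EXAMPLE.COM

[realms]
  WINDOWS.EXAMPLE.COM = { kdc = win-kdc.example.com }
  LINUX.EXAMPLE.COM   = { kdc = linux-kdc.example.com }

[domain_realm]
  # map the broker's hostnames to the Linux realm
  .broker.example.com = LINUX.EXAMPLE.COM

[capaths]
  # direct one-way trust: the Linux realm trusts the Windows realm
  WINDOWS.EXAMPLE.COM = {
    LINUX.EXAMPLE.COM = .
  }
```

The broker's sasl.kerberos.service.name on the client must also match the service principal the broker runs under.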

How to define multiple serializers in Kafka?

柔情痞子 submitted on 2019-12-11 05:34:43
Question: Say I publish and consume different types of Java objects. For each I have to define my own serializer implementations. How can we provide all implementations in the Kafka consumer/producer properties file under the "serializer.class" property? Answer 1: We have a similar setup with different objects in different topics, but always the same object type within one topic. We use the ByteArrayDeserializer that comes with the Java API 0.9.0.1, which means our message consumers only ever get a byte[] as the
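Besides the byte[]-plus-manual-deserialization approach in the answer, another common pattern is a single delegating serializer that picks a per-topic delegate, which is how one "serializer.class" entry can cover several object types (one type per topic). A plain-Java sketch of the dispatch logic; the class name and the table-based design are illustrative, not a real Kafka API:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.function.Function;

// Sketch of a per-topic delegating serializer: one top-level "serializer"
// selects a delegate by topic name before converting the value to bytes.
public class DelegatingSerializer {
    private final Map<String, Function<Object, byte[]>> byTopic;

    DelegatingSerializer(Map<String, Function<Object, byte[]>> byTopic) {
        this.byTopic = byTopic;
    }

    byte[] serialize(String topic, Object value) {
        Function<Object, byte[]> delegate = byTopic.get(topic);
        if (delegate == null) throw new IllegalArgumentException("no serializer for " + topic);
        return delegate.apply(value);
    }

    public static void main(String[] args) {
        Map<String, Function<Object, byte[]>> table = Map.of(
            "strings", v -> ((String) v).getBytes(StandardCharsets.UTF_8),
            "ints", v -> ByteBuffer.allocate(4).putInt((Integer) v).array());
        DelegatingSerializer s = new DelegatingSerializer(table);
        System.out.println(s.serialize("strings", "hi").length); // prints 2
    }
}
```

The real Kafka Serializer interface passes the topic to serialize(topic, data) for exactly this reason: the serializer may branch on it.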

Error serializing message when sending to Kafka topic

[亡魂溺海] submitted on 2019-12-11 02:06:12
Question: I need to test a message that contains headers, so I need to use MessageBuilder, but I cannot serialize it. I tried adding the serialization settings to the producer props, but it did not work. Can someone help me? This is the error:

    org.apache.kafka.common.errors.SerializationException: Can't convert value of class org.springframework.messaging.support.GenericMessage to class org.apache.kafka.common.serialization.StringSerializer specified in value.serializer

My test class: public class
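The error says the configured StringSerializer received the whole GenericMessage wrapper instead of a String: a MessageBuilder message (payload + headers) must be unwrapped by a message converter (or serialized as JSON) before the value serializer sees it. A plain-Java model of the mismatch; GenericMessageLike and the method names are invented stand-ins, not Spring or Kafka classes:

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;

// Model of the failure: a StringSerializer-style step accepts only Strings,
// so a MessageBuilder-style wrapper must be unwrapped first.
public class PayloadUnwrap {
    record GenericMessageLike(Object payload, Map<String, Object> headers) {}

    static byte[] stringSerialize(Object value) {
        if (!(value instanceof String s))
            throw new ClassCastException("Can't convert " + value.getClass() + " to String");
        return s.getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        GenericMessageLike msg = new GenericMessageLike("hello", Map.of("myHeader", "h1"));
        // Passing msg itself would throw, mirroring the SerializationException;
        // passing the unwrapped payload works:
        System.out.println(stringSerialize(msg.payload()).length); // prints 5
    }
}
```

With spring-kafka the usual fixes are sending via KafkaTemplate.send(Message<?>) so its converter unwraps the payload and maps the headers, or configuring a serializer that understands the full object (e.g. a JSON serializer).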

Kafka streams.allMetadata() method returns empty list

倖福魔咒の submitted on 2019-12-10 13:38:30
Question: So I am trying to get interactive queries working with Kafka Streams. I have Zookeeper and Kafka running locally (on Windows), using C:\temp as the storage folder for both Zookeeper and Kafka. I have set up the topics like this:

    kafka-topics.bat --zookeeper localhost:2181 --create --replication-factor 1 --partitions 1 --topic rating-submit-topic
    kafka-topics.bat --zookeeper localhost:2181 --create --replication-factor 1 --partitions 1 --topic rating-output-topic

Reading I have Done
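A frequent cause of an empty allMetadata() list is querying before the Streams instance reaches the RUNNING state (and forgetting to set application.server, without which instances cannot advertise themselves). The plain-Java sketch below models the usual fix of waiting for RUNNING before querying; the enum and method names are illustrative, not the real KafkaStreams API:

```java
import java.util.Iterator;
import java.util.List;

// Model of "wait until RUNNING before interactive queries": metadata lookups
// made while the instance is still starting or rebalancing come back empty.
public class WaitForRunning {
    enum State { CREATED, REBALANCING, RUNNING }

    // Scans observed states; only once RUNNING is seen is it safe to query.
    static boolean readyForQueries(Iterator<State> observedStates) {
        while (observedStates.hasNext()) {
            if (observedStates.next() == State.RUNNING) return true;
        }
        return false; // gave up without reaching RUNNING
    }

    public static void main(String[] args) {
        List<State> seen = List.of(State.CREATED, State.REBALANCING, State.RUNNING);
        System.out.println(readyForQueries(seen.iterator())); // prints true
    }
}
```

With the real API this corresponds to registering a StateListener (or polling streams.state()) and calling allMetadata() only after the transition to RUNNING.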

Kafka Offset after retention period

梦想的初衷 submitted on 2019-12-10 13:01:06
Question: I have a Kafka topic with 1 partition. If it had 100 messages in it, the offsets would range from 0 to 99. According to the Kafka retention policy, all of the messages will be wiped out after the specified period. I then send 100 new messages to the topic once all have been wiped out (after the retention period). Now, where would the new message offsets start from: 100 or 0? I am trying to understand whether the new offsets will be 100-199 or 0-99. Answer 1: Kafka honors the log
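The short answer is 100-199: a partition's offsets are never reused, because the log-end offset survives retention-based deletion. A tiny plain-Java model of a partition's offset assignment (the class is illustrative, not broker code):

```java
// Offsets are monotonically increasing per partition: retention deletes data
// but never rewinds the log-end offset, so new records continue from it.
public class OffsetCounter {
    private long logEndOffset = 0;

    long append() { return logEndOffset++; }          // returns the record's offset
    void retentionWipe() { /* deletes records, keeps logEndOffset */ }

    public static void main(String[] args) {
        OffsetCounter p = new OffsetCounter();
        for (int i = 0; i < 100; i++) p.append();     // first batch: offsets 0..99
        p.retentionWipe();                            // all 100 records expire
        System.out.println(p.append());               // prints 100
    }
}
```

This is also why consumer offsets stay meaningful across retention: a committed offset of 100 still points just past the last message the consumer saw, even after the data itself is gone.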

How to find the offset range for a topic-partition in Kafka 0.10?

故事扮演 submitted on 2019-12-10 10:37:47
Question: I'm using Kafka 0.10.0. Before processing, I want to know the number of records in a partition. In version 0.9.0.1, I used to find the difference between the latest and earliest offsets for a partition using the code below. In the new version, it gets stuck when calling the consumer#position method.

    package org.apache.kafka.example.utils;

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util
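From 0.10.1 onward the consumer offers beginningOffsets() and endOffsets(), so the per-partition record count is simply end minus beginning, with no seek/position dance that can hang. A plain-Java sketch of that arithmetic (the maps stand in for the consumer's return values; the method name is invented):

```java
import java.util.Map;

// Model of the 0.10.1+ approach: count = endOffsets - beginningOffsets,
// keyed by partition number.
public class OffsetRange {
    static long recordCount(Map<Integer, Long> beginning, Map<Integer, Long> end, int partition) {
        return end.get(partition) - beginning.get(partition);
    }

    public static void main(String[] args) {
        // e.g. earliest offset 42, log-end offset 100 -> 58 records retained.
        System.out.println(recordCount(Map.of(0, 42L), Map.of(0, 100L), 0)); // prints 58
    }
}
```

Note this counts retained records, not all records ever written, since the beginning offset advances as retention deletes old segments.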

If my producer is producing, why can't the consumer consume? It is stuck at poll()

大憨熊 submitted on 2019-12-08 12:30:43
Question: I'm publishing to a remote Kafka server and trying to consume messages from that remote server (Kafka v 0.90.1). Publishing works fine, but not the consuming. Publisher:

    package org.test;

    import java.io.IOException;
    import java.util.Properties;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class Producer {
        private void generateMessgaes() throws IOException {
            String topic = "MY_TOPIC";
            Properties props = new Properties();
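A consumer stuck in poll() against a remote broker usually means it cannot reach the address the cluster advertises (advertised.host.name / advertised.listeners), not that no data exists; the producer can still succeed because its metadata path differs. One diagnostic habit is to poll with a bounded timeout so the hang is visible instead of silent. A plain-Java model of that pattern (the queue stands in for the broker; class and message strings are invented):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Model of bounded polling: return a record if one arrives within the
// timeout, otherwise surface a timeout marker instead of blocking forever.
public class BoundedPoll {
    static String pollOnce(BlockingQueue<String> records, long timeoutMs) {
        try {
            String rec = records.poll(timeoutMs, TimeUnit.MILLISECONDS);
            return (rec != null) ? rec : "TIMED_OUT: check advertised listeners";
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return "INTERRUPTED";
        }
    }

    public static void main(String[] args) {
        BlockingQueue<String> q = new LinkedBlockingQueue<>();
        q.add("msg-1");
        System.out.println(pollOnce(q, 100));  // prints msg-1
        System.out.println(pollOnce(q, 100));  // times out: queue is empty
    }
}
```

With the real client, repeated empty returns from poll(timeout) while the producer succeeds points strongly at the advertised-listener or firewall configuration on the broker side.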