apache-kafka

Why is this KStream/KTable topology propagating records that don't pass the filter?

隐身守侯 submitted on 2021-02-09 09:13:07
Question: I have the following topology that: creates a state store; filters records based on SOME_CONDITION; maps its values to a new entity; and finally publishes these records to another topic, STATIONS_LOW_CAPACITY_TOPIC. However, I am seeing this on STATIONS_LOW_CAPACITY_TOPIC:

� null
� null
� null
� {"id":140,"latitude":"40.4592351","longitude":"-3.6915330",...}
� {"id":137,"latitude":"40.4591366","longitude":"-3.6894151",...}
� null

That is to say, it's as if it were also publishing to the
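A likely explanation for the nulls: in Kafka Streams, KTable#filter does not silently drop non-matching records the way KStream#filter does; it forwards a tombstone (the key with a null value) so that downstream state can delete the previous entry for that key. The two semantics can be simulated in plain Java (hypothetical helper names, not the Kafka Streams API):

```java
import java.util.Optional;
import java.util.function.Predicate;

// Hypothetical simulation of the two filter semantics in Kafka Streams.
public class FilterSemantics {

    // KStream#filter: a non-matching record is simply dropped -> no output at all.
    static Optional<String> streamFilter(String value, Predicate<String> pred) {
        return pred.test(value) ? Optional.of(value) : Optional.empty();
    }

    // KTable#filter: a non-matching record is forwarded as a tombstone
    // (null value), so downstream consumers see the key with a null payload.
    static String tableFilter(String value, Predicate<String> pred) {
        return pred.test(value) ? value : null; // null = tombstone
    }
}
```

If the topology builds a KTable before filtering, converting it to a stream first (or ignoring null values downstream) avoids publishing the tombstones.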

DeadLetterPublishingRecoverer - Dead-letter publication failed with InvalidTopicException when the topic name ends with _ERR

不打扰是莪最后的温柔 submitted on 2021-02-08 11:56:22
Question: I identified an error when I changed the DeadLetterPublishingRecoverer destinationResolver. When I use:

private static final BiFunction<ConsumerRecord<?, ?>, Exception, TopicPartition> DESTINATION_RESOLVER = (cr, e) -> new TopicPartition(cr.topic() + ".ERR", cr.partition());

it works perfectly. However, if I use _ERR instead of .ERR, an error occurs:

2020-08-05 12:53:10,277 [kafka-producer-network-thread | producer-kafka-tx-group1.ABC_TEST_XPTO.0] WARN o.apache.kafka.clients.NetworkClient
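Note that both `.ERR` and `_ERR` suffixes produce legal Kafka topic names, so the InvalidTopicException is unlikely to be caused by the characters themselves; a common cause is that the `_ERR` topic simply does not exist on the broker and auto-creation is disabled. A sketch of the naming rules Kafka enforces (letters, digits, `.`, `_`, `-`, at most 249 characters, not `.` or `..`), reimplemented here for illustration:

```java
import java.util.regex.Pattern;

// Illustration of Kafka's topic-name validation rules (reimplemented,
// not the broker's actual validation code).
public class TopicNameCheck {
    private static final Pattern LEGAL = Pattern.compile("[a-zA-Z0-9._-]+");

    static boolean isValidTopicName(String name) {
        return name != null
                && !name.isEmpty()
                && !name.equals(".")
                && !name.equals("..")
                && name.length() <= 249
                && LEGAL.matcher(name).matches();
    }
}
```

Since both candidate names pass validation, checking that the `_ERR` topic actually exists (or enabling auto-creation) is the first thing to verify.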

How to get messages from Kafka Consumer one by one in Java?

狂风中的少年 submitted on 2021-02-08 11:43:46
Question: I'm using the Apache Kafka API and trying to get only one message at a time. I'm only writing to one topic. I can send and receive messages via a pop-up UI screen with a textbox: I input a string in the textbox and click "Send", and I can send as many messages as I want. Say I send 3 messages: "hi", "lol", "bye". There is also a "Receive" button. Right now, using the traditional code found on TutorialsPoint, I get all 3 messages (hi, lol, bye) at once, printed on the
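One way to get a single record per poll is the consumer setting max.poll.records=1. A minimal config sketch using plain java.util.Properties (the broker address and group id are assumptions for illustration; the actual KafkaConsumer construction and poll loop are omitted so the fragment stays self-contained):

```java
import java.util.Properties;

public class OneAtATimeConfig {
    static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "one-at-a-time");           // hypothetical group id
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Cap each poll() at a single record, so "hi", "lol" and "bye"
        // arrive one per "Receive" click instead of all at once.
        props.put("max.poll.records", "1");
        return props;
    }
}
```

With this setting, each call to consumer.poll(...) returns at most one record; the UI's "Receive" handler can then poll once per click.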

Kafka on Multiple Servers

好久不见. submitted on 2021-02-08 11:41:48
Question: I followed this link to install Kafka + ZooKeeper. It all works well, yet I am setting up Kafka + ZooKeeper on 2 servers. I have set up kafka/config/server.properties as follows:

Server 1: broker.id=0
Server 1: zookeeper.connect=localhost:2181,99.99.99.91:2181
Server 2: broker.id=1
Server 2: zookeeper.connect=localhost:2181,99.99.99.92:2181

I am wondering the following: when I publish a topic, does it go to both instances, or just the server it's loaded on? In order to use multiple
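For a cluster, both brokers should normally point at the same full ZooKeeper connection string rather than each listing itself plus the other node. A sketch, assuming the two IPs from the question and that a ZooKeeper instance runs on each server:

```properties
# Server 1 (kafka/config/server.properties)
broker.id=0
zookeeper.connect=99.99.99.91:2181,99.99.99.92:2181

# Server 2 (kafka/config/server.properties)
broker.id=1
zookeeper.connect=99.99.99.91:2181,99.99.99.92:2181
```

Note that whether a topic's data lands on one or both brokers is decided by the topic's partition count and replication factor at creation time, not by which broker the producer happens to connect to.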

How to commit offsets thread-safely using camel-kafka?

情到浓时终转凉″ submitted on 2021-02-08 10:51:12
Question: As asked in "How to manually control the offset commit with camel-kafka?", I want to commit offsets manually using camel-kafka. My route: .from(kafka:topic1).aggregate(new GroupByExchangeStrategy()).to(kafka:topic2).process(new ManualCommitProcessor()), where ManualCommitProcessor performs the commit after sending the message to the other topic. The problem is that the aggregator and the Kafka producer run in separate threads from the Kafka consumer, which is responsible for offset
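The underlying constraint is that a Kafka consumer is not thread-safe, so offsets must be committed from the consumer's own thread. A common pattern, sketched here in plain Java (hypothetical names, not the camel-kafka API), is for downstream threads to hand completed offsets back via a queue, and for the polling thread to drain that queue and commit:

```java
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch: downstream threads enqueue finished offsets instead
// of committing directly; only the consumer thread ever commits.
public class OffsetHandoff {
    private final ConcurrentLinkedQueue<Long> completed = new ConcurrentLinkedQueue<>();
    private long highestCompleted = -1L;

    // Called by the aggregator/producer thread once a record is fully processed.
    void markProcessed(long offset) {
        completed.add(offset);
    }

    // Called by the consumer thread inside its poll loop; returns the highest
    // offset safe to commit, or -1 if nothing has completed yet.
    long drainAndGetCommitOffset() {
        Long offset;
        while ((offset = completed.poll()) != null) {
            highestCompleted = Math.max(highestCompleted, offset);
        }
        return highestCompleted;
    }
}
```

The same idea underlies camel-kafka's manual-commit support: the commit action is captured on the consumer side and should be invoked where the consumer thread can see it, not from an arbitrary downstream processor thread.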

Kafka 0.9.0 New Java Consumer API fetching duplicate records

爱⌒轻易说出口 submitted on 2021-02-08 10:35:30
Question: I am new to Kafka and I am trying to prototype a simple consumer-producer message queue (traditional queue) model using the Apache Kafka 0.9.0 Java clients. From the producer process, I push 100 random messages to a topic configured with 3 partitions. This looks fine. I created 3 consumer threads with the same group id, subscribed to the same topic, with auto-commit enabled. Since all 3 consumer threads are subscribed to the same topic, I assume that each consumer will get a partition to consume and will
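With auto-commit, Kafka only guarantees at-least-once delivery: a rebalance (or a crash between processing and the next auto-commit) can redeliver records whose offsets were not yet committed, which is a common source of apparent duplicates. Consumers should therefore be idempotent. A minimal sketch (illustrative, not the Kafka API) of per-partition offset tracking that skips redelivered records:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative dedup helper: remembers the highest offset processed per
// partition and treats anything at or below it as a redelivery.
public class SeenOffsets {
    private final Map<Integer, Long> lastSeen = new HashMap<>();

    // Returns true if this (partition, offset) is new and should be processed.
    boolean firstTime(int partition, long offset) {
        Long prev = lastSeen.get(partition);
        if (prev != null && offset <= prev) {
            return false; // duplicate from a rebalance/uncommitted window
        }
        lastSeen.put(partition, offset);
        return true;
    }
}
```

In a real application the "last seen" state would need to survive restarts (e.g. be stored alongside the processing results) for the dedup to be reliable.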

After integrating Kafka in ReactJS, some issues are happening

点点圈 submitted on 2021-02-08 10:30:45
Question: After integrating Kafka in ReactJS, the following issue occurs:

kafkaClient.js:728 Uncaught TypeError: net.createConnection is not a function
at KafkaClient.push../node_modules/kafka-node/lib/kafkaClient.js.KafkaClient.createBroker (kafkaClient.js:728)
at KafkaClient.push../node_modules/kafka-node/lib/kafkaClient.js.KafkaClient.setupBroker (kafkaClient.js:314)
at KafkaClient.push../node_modules/kafka-node/lib/kafkaClient.js.KafkaClient.connectToBroker (kafkaClient.js:253)
at

How do I view the full TCP packet that Apache Kafka produces?

て烟熏妆下的殇ゞ submitted on 2021-02-08 10:01:32
Question: I am using Apache Kafka. I use KafkaProducer to produce data and KafkaConsumer to consume data. My config is:

Properties props = new Properties();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(ProducerConfig.CLIENT_ID_CONFIG, "DemoProducer");
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.IntegerSerializer");
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization
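To inspect the actual TCP packets that the client and broker exchange, a packet capture tool is the usual approach rather than anything in the Kafka client itself. A sketch, assuming the broker listens on localhost:9092 (interface name and port are assumptions; root privileges are typically required):

```
# Capture all traffic to/from the broker port into a pcap file
tcpdump -i lo port 9092 -w kafka.pcap

# Then open kafka.pcap in Wireshark, which ships a Kafka protocol
# dissector ("Decode As... -> Kafka" if port 9092 is not auto-detected).
```

Wireshark then decodes produce/fetch requests field by field, which is generally more useful than the raw bytes.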