apache-kafka-streams

ProducerFencedException Processing Kafka Stream

让人想犯罪 __ submitted on 2020-08-09 09:43:27
Question: I'm using Kafka 1.1.0. A Kafka Streams application consistently throws this exception (albeit with different messages): WARN o.a.k.s.p.i.RecordCollectorImpl@onCompletion:166 - task [0_0] Error sending record (key KEY value VALUE timestamp TIMESTAMP) to topic OUTPUT_TOPIC due to Producer attempted an operation with an old epoch. Either there is a newer producer with the same transactionalId, or the producer's transaction has been expired by the broker.; No more records will be sent and no more offsets will …
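Fencing happens when a second producer registers the same transactional.id with a newer epoch, typically after a rebalance or an expired transaction under exactly-once processing. Below is a minimal, hedged sketch of the configuration knobs usually tuned to reduce it; the application id, bootstrap servers, and all values are illustrative assumptions, not recommendations:

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    public class FencingConfigSketch {
        public static Properties props() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");     // assumed
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed
            // exactly_once creates the transactional producers that can get fenced
            props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
            // give slow batches more headroom before the group coordinator evicts the
            // member; the resulting rebalance creates a new producer that fences the old one
            props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG), 600_000);
            props.put(StreamsConfig.consumerPrefix(ConsumerConfig.MAX_POLL_RECORDS_CONFIG), 100);
            // keep the transaction timeout below the broker's transaction.max.timeout.ms
            props.put(StreamsConfig.producerPrefix(ProducerConfig.TRANSACTION_TIMEOUT_CONFIG), 60_000);
            return props;
        }
    }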

KStream to KTable Inner Join producing different number of records every time processed with same data

谁说我不能喝 submitted on 2020-07-23 06:07:21
Question: I want to do a KStream-to-KTable join, using the KTable as just a lookup table. The steps below show the sequence in which the code is executed: 1) construct KTable, 2) re-key KTable, 3) construct KStream, 4) re-key KStream, 5) join KStream with KTable. Let's say there are 8000 records in the KStream and 14 records in the KTable, and assume that for each key in the KStream there is a record in the KTable, so the expected output would be 8000 records. Every time I do the join for the first time, or when I start the application, the expected output is 8000 …
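A KStream-KTable join is not synchronized on startup: stream records processed before the table side has caught up for their key find no match, and an inner join silently drops them, so the output count can vary between runs on the same data. A sketch of the topology as described, with assumed topic names, serdes, and key extraction (toTable() needs Kafka Streams 2.5+):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.KTable;

    public class RekeyJoinSketch {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();

            // 1) construct the KTable, 2) re-key it
            KTable<String, String> lookup = builder
                    .table("table-topic", Consumed.with(Serdes.String(), Serdes.String()))
                    .toStream()
                    .selectKey((k, v) -> v.split(",")[0])   // assumed key extraction
                    .toTable();

            // 3) construct the KStream, 4) re-key it
            KStream<String, String> stream = builder
                    .stream("stream-topic", Consumed.with(Serdes.String(), Serdes.String()))
                    .selectKey((k, v) -> v.split(",")[0]);  // assumed key extraction

            // 5) inner join: stream records whose table entry has not arrived yet
            // are dropped, which is why the count can differ between runs
            stream.join(lookup, (streamVal, tableVal) -> streamVal + "|" + tableVal)
                  .to("output-topic");

            System.out.println(builder.build().describe());
        }
    }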

Kafka GlobalKTable Latency Issue

本小妞迷上赌 submitted on 2020-07-17 05:56:40
Question: I have a topic that is read as a GlobalKTable and materialized in a store. The issue is that if I update a key on the topic and then read from the store, for a while (~0.5 s) I get the old value. What could be the reason for this? Is it that the GlobalKTable stores its data in RocksDB per application instance, so when a key on another partition is updated it takes some time to pull the data from all partitions and update the local RocksDB? If not, please explain how the GlobalKTable store maintains its …
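A GlobalKTable is populated by a dedicated global-state thread that consumes all partitions of the backing topic, so a write to the topic becomes visible in the local RocksDB store only after that thread has polled and applied it; the sub-second lag is that consume-and-apply delay, not replication between instances. A minimal sketch of materializing and querying the store, with assumed names (the query API shown is Kafka Streams 2.5+):

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StoreQueryParameters;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.state.QueryableStoreTypes;
    import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

    public class GlobalTableLookup {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            builder.globalTable("lookup-topic",                      // assumed topic
                    Consumed.with(Serdes.String(), Serdes.String()),
                    Materialized.as("lookup-store"));                // assumed store name

            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "lookup-app");         // assumed
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();

            // This reads the local RocksDB copy (once the store is ready). A record
            // produced to lookup-topic shows up here only after the global-state
            // thread has consumed and applied it, which is the observed lag.
            ReadOnlyKeyValueStore<String, String> store = streams.store(
                    StoreQueryParameters.fromNameAndType("lookup-store",
                            QueryableStoreTypes.keyValueStore()));
            System.out.println(store.get("some-key"));
        }
    }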

Kafka UNKNOWN_PRODUCER_ID exception

你离开我真会死。 submitted on 2020-07-09 05:37:19
Question: I sometimes get an UNKNOWN_PRODUCER_ID exception when using Kafka Streams. 2018-06-25 10:31:38.329 WARN 1 --- [-1-1_0-producer] o.a.k.clients.producer.internals.Sender : [Producer clientId=default-groupz-7bd94946-3bc0-4400-8e73-7126b9b9c0d4-StreamThread-1-1_0-producer, transactionalId=default-groupz-1_0] Got error produce response with correlation id 1996 on topic-partition default-groupz-mplat-five-minute-stat-urlCount-counts-store-changelog-0, retrying (2147483646 attempts left). Error: …
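UNKNOWN_PRODUCER_ID generally means the broker has dropped the producer's metadata, either because transactional.id.expiration.ms elapsed for an idle transactional producer or because the last records it wrote were removed; the WARN in the log is retriable and the producer re-registers on its own. A hedged sketch of the client-side setup, with the broker-side settings that are usually raised noted in comments (ids and addresses are assumptions):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    public class UnknownProducerIdNotes {
        public static Properties streamsProps() {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "default-groupz");     // from the log
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed
            // exactly_once is what makes the producers transactional in the first place
            props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
            // To make the warning rarer, raise these *broker-side* settings
            // (they are not client configs, so they are shown only as comments):
            //   transactional.id.expiration.ms (default 7 days) - how long the broker
            //       keeps metadata for an idle transactional producer before forgetting it
            //   retention on low-traffic topics, so the last segment a producer wrote
            //       is not deleted while its producer id is still in use
            return props;
        }
    }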

Kafka Streams join by key with complex condition

若如初见. submitted on 2020-06-29 06:51:56
Question: I'm trying to join a KStream with a GlobalKTable by key, but with specific logic.

    StreamsBuilder builder = new StreamsBuilder();
    KStream<String, Integer> stream = builder.stream(inputTopic1);           // key = "ABC"
    GlobalKTable<String, Integer> table = builder.globalTable(inputTopic2);  // key = "ABC"
    stream.join(table,
            // join first by "ABC" = "ABC", then by "AB" = "AB", then by "A" = "A"
            (key, value) -> key,
            (valueLeft, valueRight) -> {/* identify by which condition the join was performed */});

For …
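A DSL join against a GlobalKTable derives exactly one lookup key per record through the KeyValueMapper, so a try-"ABC"-then-"AB"-then-"A" cascade does not fit a single join(). One hedged alternative, assuming the global table is materialized as a store named "lookup-store", is to read that store directly from a value transformer (global stores are readable from any processor) and probe progressively shorter prefixes:

    import org.apache.kafka.streams.kstream.ValueTransformerWithKey;
    import org.apache.kafka.streams.processor.ProcessorContext;
    import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;

    class PrefixJoin implements ValueTransformerWithKey<String, Integer, String> {
        private ReadOnlyKeyValueStore<String, Integer> store;

        @Override
        @SuppressWarnings("unchecked")
        public void init(ProcessorContext context) {
            // global state stores need not be connected to this processor
            store = (ReadOnlyKeyValueStore<String, Integer>) context.getStateStore("lookup-store");
        }

        @Override
        public String transform(String key, Integer value) {
            // probe "ABC", then "AB", then "A"; report which condition matched
            for (int len = key.length(); len >= 1; len--) {
                Integer match = store.get(key.substring(0, len));
                if (match != null) {
                    return "matched on '" + key.substring(0, len) + "' -> " + match;
                }
            }
            return null;   // no join condition satisfied
        }

        @Override
        public void close() {}
    }

    // usage: stream.transformValues(PrefixJoin::new).to(outputTopic);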

Write to GlobalStateStore on Kafka Streams

偶尔善良 submitted on 2020-06-28 06:22:38
Question: I am trying to use addGlobalStore on the Kafka Streams DSL, where I need to store a few values to which all my threads/instances need global access. My problem is that I need to periodically update these values inside my topology and make all running threads aware of the new values. I initialized the global store through builder.addGlobalStore, using the init() function of the Processor that is passed as the last argument to that method, but I cannot find a way to update the values inside the …
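A global store is meant to be written only by the processor registered in addGlobalStore, which consumes the store's source topic; every instance's global thread applies the same records, which is what keeps the copies consistent. A hedged sketch of that wiring using the Processor API of Kafka Streams 2.7+ (store, topic, and serde choices are assumptions); "updating the values" then means producing a record to the source topic, not writing to the store from your own topology:

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.processor.api.Processor;
    import org.apache.kafka.streams.processor.api.ProcessorContext;
    import org.apache.kafka.streams.processor.api.ProcessorSupplier;
    import org.apache.kafka.streams.processor.api.Record;
    import org.apache.kafka.streams.state.KeyValueStore;
    import org.apache.kafka.streams.state.StoreBuilder;
    import org.apache.kafka.streams.state.Stores;

    public class GlobalStoreWiring {
        public static void wire(StreamsBuilder builder) {
            StoreBuilder<KeyValueStore<String, String>> storeBuilder = Stores
                    .keyValueStoreBuilder(
                            Stores.inMemoryKeyValueStore("global-config"),  // assumed store name
                            Serdes.String(), Serdes.String())
                    .withLoggingDisabled();   // global stores must have logging disabled

            ProcessorSupplier<String, String, Void, Void> supplier =
                    () -> new Processor<String, String, Void, Void>() {
                        private KeyValueStore<String, String> store;

                        @Override
                        public void init(ProcessorContext<Void, Void> context) {
                            store = context.getStateStore("global-config");
                        }

                        @Override
                        public void process(Record<String, String> record) {
                            // the only place a global store should be written:
                            // every instance's global thread runs this for each record
                            store.put(record.key(), record.value());
                        }
                    };

            builder.addGlobalStore(storeBuilder,
                    "config-topic",                                  // assumed source topic
                    Consumed.with(Serdes.String(), Serdes.String()),
                    supplier);
            // To "update the values", produce a record to config-topic (for example
            // with a plain KafkaProducer); all instances observe the change here.
        }
    }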
