apache-kafka-streams

Kafka Streaming tasks and management of Internal state stores

Submitted by 空扰寡人 on 2020-12-15 06:06:00

Question: Let's say we have launched two streaming tasks on two different machines (instances) with the following properties:

```java
public final static String applicationID = "StreamsPOC";
public final static String bootstrapServers = "10.21.22.56:9093";
public final static String topicname = "TestTransaction";
public final static String shipmentTopicName = "TestShipment";
public final static String RECORD_COUNT_STORE_NAME = "ProcessorONEStore";
```

Using these properties, here is how the stream task's …
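For context, a minimal sketch of how such constants are typically wired into a Kafka Streams configuration. The plain-string keys `"application.id"` and `"bootstrap.servers"` below are the standard Streams config names; `buildConfig` is an illustrative helper, not the asker's actual code:

```java
import java.util.Properties;

public class StreamsPocConfig {
    public final static String applicationID = "StreamsPOC";
    public final static String bootstrapServers = "10.21.22.56:9093";

    // Builds the Properties a KafkaStreams instance would be started with.
    // Both machines must share the same application.id so they join one
    // consumer group and split the input partitions (and state stores)
    // between them instead of each processing everything.
    public static Properties buildConfig() {
        Properties props = new Properties();
        props.put("application.id", applicationID);       // StreamsConfig.APPLICATION_ID_CONFIG
        props.put("bootstrap.servers", bootstrapServers); // StreamsConfig.BOOTSTRAP_SERVERS_CONFIG
        return props;
    }
}
```

Because the two instances share one `application.id`, each instance hosts only the shards of `ProcessorONEStore` for the partitions assigned to it.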

i have a kafka pipeline (json problem update) for kafka connect

Submitted by 谁都会走 on 2020-12-15 05:51:53

Question: I updated the code according to some suggestions, but the Streams application still terminates after some time without doing anything. The IDE shows no errors in the code below. I am sending data to the topic with a String key and a JSON object as the value, but it still doesn't work. I suspect a single line is at fault, but I'm not sure. The error screenshot is attached below.

```java
Serializer<JsonNode> jsonSerializer = new JsonSerializer();
Deserializer<JsonNode> jsonDeserializer = new JsonDeserializer();
```
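Whatever serde is plugged in, Kafka only moves byte arrays on the wire, so a JSON value serde ultimately just converts JSON text to and from UTF-8 bytes. A stdlib-only sketch of that round trip (a stand-in for what a `JsonSerializer`/`JsonDeserializer` pair does with a `JsonNode`, shown here with plain strings; `JsonTextSerde` is a hypothetical name):

```java
import java.nio.charset.StandardCharsets;

public class JsonTextSerde {
    // Serialize: JSON text -> UTF-8 bytes (the byte[] a Kafka Serializer returns).
    public static byte[] serialize(String json) {
        return json == null ? null : json.getBytes(StandardCharsets.UTF_8);
    }

    // Deserialize: UTF-8 bytes -> JSON text (what a Kafka Deserializer rebuilds).
    public static String deserialize(byte[] data) {
        return data == null ? null : new String(data, StandardCharsets.UTF_8);
    }
}
```

A mismatch between how the producer serialized the value and the serde the Streams app configures for that topic is a common reason such an application dies shortly after starting.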

Kafka Stream: How to trigger event based on hopping windows and how to trigger event based on combined set of windows that are part of hopping window

Submitted by 烂漫一生 on 2020-12-13 03:14:58

Question: Here is our topology:

test-events → groupByKey → Window Aggregation → Suppress → Filter → rekey

We have a hopping window with a total window size of 5 minutes and a hop (advance) of 1 minute. How do we get the count for each 1-minute hop?

1. How do we trigger an event based on each 1-minute hop?
2. How do we trigger an event based on the total 5-minute window?
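The arithmetic behind hopping windows explains why a single record is counted in several windows at once: with size 5 min and advance 1 min, a record at timestamp `ts` belongs to every window whose start is a multiple of the advance and satisfies `start <= ts < start + size`. A plain-Java sketch of that bucketing (an illustration of the semantics, not Kafka's internal code):

```java
import java.util.ArrayList;
import java.util.List;

public class HoppingWindows {
    // Returns the start timestamps (ms) of every hopping window containing ts.
    // Window starts are aligned to multiples of advanceMs (epoch-aligned),
    // and a window [start, start + sizeMs) contains ts when
    // start <= ts < start + sizeMs.
    public static List<Long> windowStartsFor(long ts, long sizeMs, long advanceMs) {
        List<Long> starts = new ArrayList<>();
        // Earliest aligned start whose window still covers ts.
        long start = Math.max(0, ts - sizeMs + advanceMs) / advanceMs * advanceMs;
        while (start <= ts) {
            starts.add(start);
            start += advanceMs;
        }
        return starts;
    }
}
```

For example, a record at about 4.2 minutes lands in five overlapping windows (starts 0, 1, 2, 3 and 4 min), so every 1-minute hop updates up to five aggregates; the 1-minute hop result is the emission for each individual window, while the "total 5-minute window" is just each window taken whole once it closes.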

How to introduce delay in rebalancing in case of kafka consumer group?

Submitted by 别等时光非礼了梦想. on 2020-12-12 18:53:25

Question: I want to give my consumer some time to restart so that an unnecessary rebalance doesn't happen. How can I do that? On shutdown, I want replication to take over, and a rebalance should occur only if the consumer is not back up after some time.

Answer 1: There is a broker-level config called group.initial.rebalance.delay.ms you can tweak. It is the amount of time the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means …
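As a config fragment, the setting from the answer is applied on the broker side, e.g. in server.properties. The value below is an illustrative increase, not a recommendation; note this delay applies only to the first rebalance of a new (empty) group:

```properties
# Broker-side: how long the group coordinator waits for more members to join
# a NEW (empty) group before performing the first rebalance. Default: 3000 ms.
group.initial.rebalance.delay.ms=10000
```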

Kafka Streams - Processor context commit

Submitted by 梦想的初衷 on 2020-12-08 06:30:48

Question: Should we ever invoke processorContext.commit() ourselves in a Processor implementation? I mean calling commit() inside a scheduled Punctuator or inside the process() method. In which use cases should we do that, and do we need it at all? The question applies both to the Kafka DSL with transform() and to the Processor API. It seems Kafka Streams handles committing by itself, and invoking processorContext.commit() does not guarantee that the commit happens immediately.

Answer 1: It is OK to call commit() …
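The "not immediately" semantics can be illustrated with a small stdlib-only model (an illustration of the documented contract, not Kafka's internals; `CommitGate` is a hypothetical name): processorContext.commit() only sets a request flag, and the runtime actually commits at the next safe point, either because a commit was requested or because commit.interval.ms has elapsed.

```java
public class CommitGate {
    private final long commitIntervalMs; // models the commit.interval.ms config
    private long lastCommitMs;
    private boolean commitRequested = false;

    public CommitGate(long commitIntervalMs, long nowMs) {
        this.commitIntervalMs = commitIntervalMs;
        this.lastCommitMs = nowMs;
    }

    // What processorContext.commit() conceptually does: request, don't perform.
    public void requestCommit() {
        commitRequested = true;
    }

    // Checked by the runtime between records/punctuations: commit if one was
    // requested explicitly or if the commit interval has elapsed.
    public boolean maybeCommit(long nowMs) {
        if (commitRequested || nowMs - lastCommitMs >= commitIntervalMs) {
            lastCommitMs = nowMs;
            commitRequested = false;
            return true; // here the runtime would flush state and commit offsets
        }
        return false;
    }
}
```

This is why calling commit() from process() or a Punctuator is safe but rarely necessary: it only moves the next automatic commit earlier.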
