Is Kafka Stream StateStore global over all instances or just local?

Backend · Open · 3 answers · 1969 views

眼角桃花 asked on 2021-01-11 12:35

In the Kafka Streams WordCount example, a StateStore is used to store the word counts. If there are multiple instances in the same consumer group, is that StateStore global across all instances, or is it local to each one?

3 Answers
  • 2021-01-11 12:46

    Whenever there is a use case of looking up data from a GlobalStateStore, use a Processor instead of a Transformer for the transformations you want to perform on the input topic. Use context.forward(key, value, childName) to send data to the downstream nodes; it may be called multiple times within process() and punctuate() to send multiple records downstream. If there is a requirement to update the GlobalStateStore, do so only in the Processor passed to addGlobalStore(..), because a GlobalStreamThread is associated with the GlobalStateStore and keeps the store's state consistent across all running KafkaStreams instances.
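
    For illustration, here is a minimal sketch of such a read-only lookup Processor, written against the older Processor API this answer refers to. The store name "global-lookup-store" and the enrichment logic are assumptions, not part of the original answer; the store must have been registered beforehand via addGlobalStore(..) or globalTable(..).

        import org.apache.kafka.streams.processor.AbstractProcessor;
        import org.apache.kafka.streams.processor.ProcessorContext;
        import org.apache.kafka.streams.state.KeyValueStore;

        // Hypothetical processor that enriches each record with a value looked up
        // from a global state store registered elsewhere in the topology.
        public class GlobalLookupProcessor extends AbstractProcessor<String, String> {

            private KeyValueStore<String, String> globalStore;

            @Override
            @SuppressWarnings("unchecked")
            public void init(ProcessorContext context) {
                super.init(context);
                // The store name is a placeholder; it must match the registered global store.
                globalStore = (KeyValueStore<String, String>) context.getStateStore("global-lookup-store");
            }

            @Override
            public void process(String key, String value) {
                // Read-only lookup against the fully replicated global store.
                String enrichment = globalStore.get(key);
                // Forward downstream; the older API also allowed targeting a named
                // child node, e.g. context().forward(key, value, "child-node-name").
                context().forward(key, value + "|" + enrichment);
            }
        }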

  • 2021-01-11 13:05

    This depends on how you look at a state store.

    1. In Kafka Streams, state is sharded, and thus each instance holds only part of the overall application state. For example, stateful DSL operators use a local RocksDB instance to hold their shard of the state. In this regard, the state is local.

    2. On the other hand, all changes to the state are written into a Kafka topic. This changelog topic does not "live" on the application host but in the Kafka cluster; it consists of multiple partitions and can be replicated. In case of an error, the changelog topic is used to recreate the state of the failed instance on another, still-running instance. Thus, as the changelog is accessible by all application instances, it can be considered global, too.

    Keep in mind that the changelog is the source of truth for the application state, and the local stores are basically caches of shards of that state.

    Moreover, in the WordCount example, the record stream (the data stream) gets partitioned by word, such that the count of one word is maintained by a single instance (and different instances maintain the counts for different words).
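
    To make the sharding concrete, here is a minimal WordCount sketch with an explicitly named store; the topic and store names ("text-input", "counts-output", "counts-store") are placeholders, not taken from the original example.

        import java.util.Arrays;
        import java.util.Properties;

        import org.apache.kafka.common.serialization.Serdes;
        import org.apache.kafka.streams.KafkaStreams;
        import org.apache.kafka.streams.StreamsBuilder;
        import org.apache.kafka.streams.StreamsConfig;
        import org.apache.kafka.streams.kstream.KStream;
        import org.apache.kafka.streams.kstream.KTable;
        import org.apache.kafka.streams.kstream.Materialized;
        import org.apache.kafka.streams.kstream.Produced;

        public class WordCountSketch {
            public static void main(String[] args) {
                Properties props = new Properties();
                props.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-sketch");
                props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
                props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
                props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

                StreamsBuilder builder = new StreamsBuilder();
                KStream<String, String> lines = builder.stream("text-input");

                // groupBy re-partitions the stream by word, so each word is counted
                // on exactly one partition and therefore by exactly one instance.
                KTable<String, Long> counts = lines
                        .flatMapValues(line -> Arrays.asList(line.toLowerCase().split("\\W+")))
                        .groupBy((key, word) -> word)
                        // The local RocksDB shard behind "counts-store" is backed by a
                        // changelog topic, which is what enables recovery on failure.
                        .count(Materialized.as("counts-store"));

                counts.toStream().to("counts-output", Produced.with(Serdes.String(), Serdes.Long()));

                new KafkaStreams(builder.build(), props).start();
            }
        }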

    For an architectural overview, I recommend http://docs.confluent.io/current/streams/architecture.html

    Also this blog post should be interesting http://www.confluent.io/blog/unifying-stream-processing-and-interactive-queries-in-apache-kafka/

  • 2021-01-11 13:08

    It is worth mentioning that there is a GlobalKTable improvement proposal (a usage sketch is included at the end of this answer):

    GlobalKTable will be fully replicated once per KafkaStreams instance. That is, each KafkaStreams instance will consume all partitions of the corresponding topic.

    From the Confluent Platform's mailing list, I got this information:

    You could start prototyping using Kafka 0.10.2 (or trunk) branch...

    0.10.2-rc0 already has GlobalKTable!

    Here's the actual PR.

    And the person who told me that was Matthias J. Sax ;)
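
    For reference, here is a minimal sketch of declaring and joining a GlobalKTable with the current DSL (the 0.10.2-era API differed slightly; the topic names are placeholders):

        import org.apache.kafka.common.serialization.Serdes;
        import org.apache.kafka.streams.StreamsBuilder;
        import org.apache.kafka.streams.Topology;
        import org.apache.kafka.streams.kstream.Consumed;
        import org.apache.kafka.streams.kstream.GlobalKTable;
        import org.apache.kafka.streams.kstream.KStream;
        import org.apache.kafka.streams.kstream.Produced;

        public class GlobalKTableSketch {

            // Builds a topology that enriches an event stream from a fully
            // replicated GlobalKTable.
            public static Topology build() {
                StreamsBuilder builder = new StreamsBuilder();

                // Every application instance consumes ALL partitions of
                // "reference-data" and keeps a complete local copy of the table.
                GlobalKTable<String, String> reference =
                        builder.globalTable("reference-data",
                                Consumed.with(Serdes.String(), Serdes.String()));

                KStream<String, String> events =
                        builder.stream("events", Consumed.with(Serdes.String(), Serdes.String()));

                // Because the table is fully replicated, the join needs no
                // co-partitioning; the mapper selects the lookup key per record.
                events.join(reference,
                            (eventKey, eventValue) -> eventKey,
                            (eventValue, refValue) -> eventValue + "|" + refValue)
                      .to("enriched-events", Produced.with(Serdes.String(), Serdes.String()));

                return builder.build();
            }
        }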
