Question
While investigating the new features in Apache Kafka 0.9 and 0.10, we used KStreams and KTables. An interesting fact is that Kafka Streams uses RocksDB internally; see Introducing Kafka Streams: Stream Processing Made Simple. RocksDB is not written in a JVM-compatible language, so it needs careful handling during deployment, as it requires an extra shared library (OS-dependent).
This leads to two simple questions:
- Why does Apache Kafka Streams use RocksDB?
- How is it possible to change it (if at all)?
I have tried to find the answer, but I see only the implicit reason that RocksDB is very fast, handling on the order of millions of operations per second.
On the other hand, there are some databases written in Java that might perform just as well end to end, since they do not have to go through JNI.
Answer 1:
RocksDB is used for several (internal) reasons (as you already mentioned, for example its performance). Conceptually, Kafka Streams does not need RocksDB: it is used as an internal key-value store, and any other store offering similar functionality would work, too.
Comment from @miguno below (rephrased):
One important advantage of RocksDB in contrast to pure in-memory key-value stores is its ability to write to disk. Thus, state larger than the available main memory can be supported by Kafka Streams.
Comment from @miguno above:
FYI:
"RocksDB is not written in JVN compatible language, so it needs careful handling of the deployment, as it needs extra shared library (OS dependent)."
As a user of Kafka Streams you don't need to install anything.
Using the Kafka Streams DSL, as of the 0.10.2 release (KAFKA-3825) it is possible to plug in custom state stores and to use a different key-value store.
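For illustration, here is a minimal sketch of swapping the default RocksDB-backed store for an in-memory store in the DSL. Note that it uses the Materialized/Stores API of later Kafka Streams releases (1.0+), not the 0.10.2 StateStoreSupplier API referenced above; the topic name "words" and store name "word-counts" are made up for the example.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

public class InMemoryStoreExample {
    public static Topology buildTopology() {
        StreamsBuilder builder = new StreamsBuilder();

        // Count records per key, but back the resulting KTable with an
        // in-memory store instead of the default (persistent RocksDB) store.
        KTable<String, Long> counts = builder
                .stream("words", Consumed.with(Serdes.String(), Serdes.String()))
                .groupByKey()
                .count(Materialized.<String, Long>as(
                                Stores.inMemoryKeyValueStore("word-counts"))
                        .withKeySerde(Serdes.String())
                        .withValueSerde(Serdes.Long()));

        return builder.build();
    }
}
```

The only change compared to the default behavior is the store supplier passed to Materialized; the rest of the topology stays the same.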
Using the Kafka Streams Processor API, you can implement your own store via the StateStore interface and connect it to a processor node in your topology, as sketched below.
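As a rough sketch of the Processor API route, the example below attaches a store to a processor node using the newer Topology/Processor API (rather than the TopologyBuilder API of the 0.10.x era). It uses a built-in in-memory store for brevity, but any StoreBuilder wrapping a custom StateStore implementation could be connected the same way; the store name "counts" and topic name "input-topic" are hypothetical.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.StoreBuilder;
import org.apache.kafka.streams.state.Stores;

public class ProcessorApiStoreExample {

    public static Topology buildTopology() {
        // Any StoreBuilder works here, including one wrapping a custom StateStore.
        StoreBuilder<KeyValueStore<String, Long>> storeBuilder =
                Stores.keyValueStoreBuilder(
                        Stores.inMemoryKeyValueStore("counts"),
                        Serdes.String(), Serdes.Long());

        Topology topology = new Topology();
        topology.addSource("source", "input-topic")
                .addProcessor("counter", CountingProcessor::new, "source")
                .addStateStore(storeBuilder, "counter"); // connect the store to the processor node
        return topology;
    }

    // A processor that counts occurrences per key using the attached store.
    static class CountingProcessor implements Processor<String, String, String, Long> {
        private KeyValueStore<String, Long> store;

        @Override
        public void init(ProcessorContext<String, Long> context) {
            store = context.getStateStore("counts");
        }

        @Override
        public void process(Record<String, String> record) {
            Long current = store.get(record.key());
            store.put(record.key(), current == null ? 1L : current + 1L);
        }
    }
}
```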
Source: https://stackoverflow.com/questions/40110511/why-apache-kafka-streams-uses-rocksdb-and-if-how-is-it-possible-to-change-it