Can I use the Flink RocksDB state backend with a local file system?

Submitted by 梦想与她 on 2021-01-27 22:13:41

Question


I am exploring the Flink RocksDB state backend. The documentation seems to imply that I can use a regular file system, such as file:///data/flink/checkpoints, but the Javadoc in the code only mentions HDFS or S3 as options.

I am wondering whether it is possible to use a local file system with the Flink RocksDB backend. Thanks!

Flink docs: https://ci.apache.org/projects/flink/flink-docs-stable/ops/state/state_backends.html#the-rocksdbstatebackend

Flink code: https://github.com/apache/flink/blob/master/flink-state-backends/flink-statebackend-rocksdb/src/main/java/org/apache/flink/contrib/streaming/state/RocksDBStateBackend.java#L175
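For context, here is a minimal sketch of the configuration the question is about, using the RocksDBStateBackend constructor from flink-statebackend-rocksdb that takes a checkpoint URI. The path /data/flink/checkpoints and the 60-second checkpoint interval are illustrative placeholders, not values from the question.

```java
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbFileUriExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // The constructor accepts a file system URI; it will also take a
        // file:// path -- the question is whether that is safe to use.
        RocksDBStateBackend backend =
                new RocksDBStateBackend("file:///data/flink/checkpoints");

        env.setStateBackend(backend);
        env.enableCheckpointing(60_000); // checkpoint every 60 seconds

        // ... define sources, operators, and sinks, then env.execute(...)
    }
}
```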


Answer 1:


No, you should not do that!

With this path you configure the directory into which Flink writes checkpoints. A checkpoint is a copy of your application state that is used to restore the application after a failure, such as a machine failure. The path must point to persistent, remote storage so that the checkpoint can still be read after a process was killed or a machine died. If a checkpoint were written to the local file system of a machine that failed, you would not be able to recover the job and restore its state.

However, you can write checkpoints to a local path if that path is a mount point of an NFS share (or any other remote storage) that can also be mounted from the other machines, as sketched below.
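As a rough sketch of the recommended setup, the same backend can simply be pointed at durable shared storage instead. The hdfs:// and NFS paths below are placeholders, not paths from the answer.

```java
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbDurableCheckpointExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Durable, remote checkpoint storage that the JobManager and every
        // TaskManager can reach, e.g. HDFS ...
        RocksDBStateBackend backend =
                new RocksDBStateBackend("hdfs:///flink/checkpoints");

        // ... or a file:// path, provided it is an NFS (or similar) mount
        // that is available under the same path on all machines:
        // new RocksDBStateBackend("file:///mnt/nfs/flink/checkpoints");

        env.setStateBackend(backend);
        env.enableCheckpointing(60_000);
    }
}
```

The same choice can also be made cluster-wide in flink-conf.yaml via the state.backend and state.checkpoints.dir options, rather than in code.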



Source: https://stackoverflow.com/questions/58614739/can-i-use-flink-rocksdb-state-backend-with-local-file-system
