Question
Our current HDFS cluster has a replication factor of 1. To improve performance and reliability (against node failure), we want to increase the replication factor of the Hive intermediate files (hive.exec.scratchdir) alone to 5. Is it possible to implement that?
Regards, Selva
Answer 1:
See if -setrep helps you.
setrep
Usage:
hadoop fs -setrep [-R] [-w] <numReplicas> <path>
Changes the replication factor of a file. If path is a directory, then the command recursively changes the replication factor of all files under the directory tree rooted at path.
Options:
The -w flag requests that the command wait for the replication to complete. This can potentially take a very long time.
The -R flag is accepted for backwards compatibility. It has no effect.
Example:
hadoop fs -setrep -w 3 /user/hadoop/dir1
hadoop fs -setrep -R -w 100 /path/to/hive/warehouse
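For the scenario in the question, a possible command, assuming the scratch directory is at the common default of /tmp/hive (verify the actual value of hive.exec.scratchdir in your hive-site.xml first, since the path may differ), would be:

hadoop fs -setrep -w 5 /tmp/hive

Note that -setrep only changes files that already exist; intermediate files written afterwards are still created with the writing client's dfs.replication setting, so you may also need to raise that for Hive sessions, for example with set dfs.replication=5; at the start of the session (a hedged suggestion, since dfs.replication is a client-side property).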
Reference: -setrep
Source: https://stackoverflow.com/questions/33292277/how-to-change-hdfs-replication-factor-for-hive-alone