Consider a Hadoop cluster where the default block size is 64MB in hdfs-site.xml. However, later on the team decides to change this to 128MB. Here are my questions for this scenario.
On point 1: on Hadoop 1.2.1, a restart is not required after a change to dfs.block.size in the hdfs-site.xml file. The file block size can easily be verified by checking the Hadoop administration page at http://namenode:50070/dfshealth.jsp
Ensure that you change dfs.block.size on all the data nodes.
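For reference, a minimal hdfs-site.xml snippet for this change might look like the following (the value is 128MB expressed in bytes; dfs.block.size is the Hadoop 1.x property name, later releases renamed it dfs.blocksize):

<property>
  <name>dfs.block.size</name>
  <value>134217728</value> <!-- 128MB = 128 * 1024 * 1024 bytes -->
</property>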
Taking your questions one at a time:
Will this change require a restart of the cluster, or will it be picked up automatically so that all new files get the default block size of 128MB?
A restart of the cluster will be required for this property change to take effect.
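If you do restart, on a Hadoop 1.x installation that is typically done with the scripts shipped in the bin directory (assuming $HADOOP_HOME points at your install):

$HADOOP_HOME/bin/stop-dfs.sh    # stops the NameNode, SecondaryNameNode and DataNodes
$HADOOP_HOME/bin/start-dfs.sh   # starts them again with the edited hdfs-site.xml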
What will happen to the existing files which have a block size of 64MB? Will the change in the configuration apply to existing files automatically?
No. Existing files will not change their block size; only files written after the change pick up the new 128MB value.
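To check which block size an existing file actually has, you can query it from the command line (the path below is just an example):

hadoop fs -stat %o /path/to/existing/file             # prints the file's block size in bytes
hadoop fsck /path/to/existing/file -files -blocks     # lists every block of the file and its length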
If it is not done automatically, how can the block size of existing files be changed manually?
To change the existing files you can use distcp: it copies the files over and writes them with the new block size. However, you will have to delete the old files with the older block size manually. Here's a command you can use:
hadoop distcp -Ddfs.block.size=XX /path/to/old/files /path/to/new/files/with/larger/block/sizes
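As a concrete, purely illustrative run (hypothetical paths, with 128MB written out in bytes), the copy plus the manual cleanup could look like this:

hadoop distcp -Ddfs.block.size=134217728 /data/logs /data/logs_128m
hadoop fs -rmr /data/logs                  # remove the old copies that still have 64MB blocks
hadoop fs -mv /data/logs_128m /data/logs   # move the re-written copies into place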