Change Block size of existing files in Hadoop

情话喂你 2020-12-30 11:22

Consider a Hadoop cluster where the default block size is 64 MB in hdfs-site.xml. However, later on the team decides to change this to 128 MB. Here are my questions:

1. Will this change require a restart of the cluster, or will it be taken up automatically so that all new files have the default block size of 128 MB?
2. What will happen to the existing files which have a block size of 64 MB? Will the change in the configuration apply to existing files automatically?
3. If it is not done automatically, how can this block-size change be made manually?

3 Answers
  • 2020-12-30 11:43

    On point 1: on Hadoop 1.2.1, a restart is not required after changing dfs.block.size in the hdfs-site.xml file. A file's block size can easily be verified on the Hadoop administration page at http://namenode:50070/dfshealth.jsp

    Make sure to change dfs.block.size on all the data nodes.
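
    If you prefer the command line to the web UI, a quick check like the following can confirm a file's block size. The path below is just a placeholder, and on Hadoop 1.x fsck is invoked as hadoop fsck rather than hdfs fsck:

    hadoop fs -stat "name=%n length=%b blocksize=%o" /user/me/data.txt   # block size is reported in bytes
    hdfs fsck /user/me/data.txt -files -blocks                           # lists each block and its size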

  • 2020-12-30 11:55

    As mentioned here, regarding your points:

    1. Whenever you change a configuration, you need to restart the NameNode and DataNodes in order for them to change their behavior.
    2. No, it will not. It will keep the old block size on the old files. For the data to take on the new block size, you need to rewrite it: do either a hadoop fs -cp or a distcp on your data (see the sketch after this list). The new copy will have the new block size, and you can then delete your old data.
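
    As a minimal sketch of the rewrite-and-swap approach, assuming a placeholder directory /data/logs and a 128 MB target (134217728 bytes); the exact fs shell flags differ slightly between Hadoop 1.x and 2.x:

    hadoop fs -Ddfs.block.size=134217728 -cp /data/logs /data/logs_tmp   # rewrite with 128 MB blocks
    hadoop fs -rm -r /data/logs                                          # use -rmr on Hadoop 1.x
    hadoop fs -mv /data/logs_tmp /data/logs                              # swap the new copy into place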

    Check the link for more information.

  • 2020-12-30 11:56

    Will this change require a restart of the cluster, or will it be taken up automatically so that all new files have the default block size of 128 MB?

    A restart of the cluster will be required for this property change to take effect.
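
    On a plain Apache install, the restart could be as simple as the following; the script location is an assumption (sbin/ on Hadoop 2.x+, bin/ on 1.x), and this assumes you can tolerate a brief HDFS outage:

    $HADOOP_HOME/sbin/stop-dfs.sh    # stops the NameNode, DataNodes and SecondaryNameNode
    # push the updated hdfs-site.xml to every node before restarting
    $HADOOP_HOME/sbin/start-dfs.sh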

    What will happen to the existing files which have a block size of 64 MB? Will the change in the configuration apply to existing files automatically?

    Existing blocks will not change their block size.

    If it is not done automatically, how can this block-size change be made manually?

    To change the existing files you can use distcp. It will copy the files over with the new block size. However, you will have to manually delete the old files with the older block size. Here's a command that you can use:

    hadoop distcp -Ddfs.block.size=XX /path/to/old/files /path/to/new/files/with/larger/block/sizes
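
    For example, with 128 MB spelled out in bytes and placeholder paths (Hadoop 2.x+ prefers the name dfs.blocksize, though the old name is still accepted):

    hadoop distcp -Ddfs.block.size=134217728 /path/to/old/files /path/to/new/files   # 134217728 bytes = 128 MB
    hadoop fs -stat "blocksize=%o name=%n" /path/to/new/files/*                      # verify the new block size
    hadoop fs -rm -r /path/to/old/files                                              # then delete the old copies (-rmr on 1.x)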
    