Hadoop fs lookup for block size?

悲哀的现实 2021-02-05 12:22

In Hadoop fs how to lookup the block size for a particular file?

I was primarily interested in a command line, something like:

hadoop fs ... hdfs://fs1.d         


        
5 Answers
  • 2021-02-05 13:03

    The fsck commands in the other answers list the blocks and allow you to see the number of blocks. However, to see the actual block size in bytes with no extra cruft do:

    hadoop fs -stat %o /filename
    

    Default block size is:

    hdfs getconf -confKey dfs.blocksize
    

    Details about units

    The units for the block size are not documented in the hadoop fs -stat command; however, looking at the source and the docs for the method it calls, we can see it uses bytes and cannot report block sizes over about 9 exabytes.

    The units for the hdfs getconf command may not be bytes. It returns whatever string is being used for dfs.blocksize in the configuration file. (This can be seen in the source for the final function and its indirect caller.)
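
    Putting the two together, here is a minimal sketch (the path is a placeholder) that prints a file's block size in bytes next to whatever value dfs.blocksize currently holds in the configuration:

    f=/filename   # placeholder path
    echo "file block size:    $(hadoop fs -stat %o "$f")"
    echo "default block size: $(hdfs getconf -confKey dfs.blocksize)"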

  • 2021-02-05 13:10

    Try the code below:

    path=hdfs://a/b/c

    # the third column of `hdfs dfs -count` output is the content size in bytes
    size=`hdfs dfs -count ${path} | awk '{print $3}'`
    echo $size
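
    For reference, `hdfs dfs -count` prints the columns DIR_COUNT, FILE_COUNT, CONTENT_SIZE and PATHNAME, so the $3 above is the file's total content size in bytes, not its block size. A made-up example of what the output looks like for a single file:

    $ hdfs dfs -count hdfs://a/b/c
               0            1          134217728 hdfs://a/b/c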
    
  • 2021-02-05 13:11

    To display the actual block size of an existing file in HDFS I used:

    [pety@master1 ~]$ hdfs dfs -stat %o /tmp/testfile_64
    67108864
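
    As a quick sanity check (plain shell arithmetic, nothing HDFS-specific), 67108864 bytes works out to 64 MiB, which matches the file's name:

    echo $((67108864 / 1024 / 1024))   # prints 64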
    
  • 2021-02-05 13:17

    I think it should be doable with:

    hadoop fsck /filename -blocks
    

    but I get a Connection refused error
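
    Connection refused here usually means the client cannot reach the NameNode at the configured address. A minimal check, assuming a standard client setup (no flags beyond those already shown in this thread):

    # show which NameNode URI the client is configured to talk to
    hdfs getconf -confKey fs.defaultFS
    # then retry fsck against that filesystem
    hdfs fsck /filename -blocks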

  • 2021-02-05 13:19

    It seems hadoop fs doesn't have an option to do this.

    But hadoop fsck does.

    You can try this:

    $HADOOP_HOME/bin/hadoop fsck /path/to/file -files -blocks
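
    In recent Hadoop versions the per-block lines of the fsck output carry a len= field with the block length in bytes; the exact output format varies by release, so treat this grep as a sketch rather than a guaranteed parse:

    $HADOOP_HOME/bin/hadoop fsck /path/to/file -files -blocks | grep -o 'len=[0-9]*'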
    