Split size vs Block size in Hadoop

Asked 2020-12-01 03:00

What is the relationship between split size and block size in Hadoop? As I read in this, the split size must be n times the block size (n is an integer and n > 0). Is this correct?

3 Answers
  • 2020-12-01 03:45

    In the HDFS architecture there is a concept of blocks. A typical block size used by HDFS is 64 MB. When we place a large file into HDFS, it is chopped up into 64 MB chunks (based on the default block configuration). Suppose you have a 1 GB file and you want to place it in HDFS; then there will be 1 GB / 64 MB = 16 blocks, and these blocks will be distributed across the DataNodes. Each block may reside on a different DataNode, depending on your cluster configuration.

    Data splitting happens based on file offsets. The goal of splitting a file and storing it in different blocks is parallel processing and failover of the data.

    Difference between block size and split size.

    A split is a logical split of the data, used during data processing by a MapReduce program or other data-processing techniques in the Hadoop ecosystem. The split size is a user-defined value, and you can choose your own split size based on the volume of data you are processing.

    Splits are basically used to control the number of mappers in a MapReduce program. If you have not defined an input split size in your MapReduce program, then the default HDFS block size is used as the input split size.

    Example:

    Suppose you have a 100 MB file and the HDFS default block size is 64 MB; then the file will be chopped into 2 blocks. If you now run a MapReduce program over this data without specifying an input split size, the number of blocks (2) determines the number of input splits, and 2 mappers are assigned to the job.

    But suppose you specify a split size of, say, 100 MB in your MapReduce program; then both blocks (2 blocks) will be treated as a single split, and 1 mapper will be assigned to the job.

    Suppose instead you specify a split size of, say, 25 MB; then there will be 4 input splits, and 4 mappers will be assigned to the job.
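
    As a concrete sketch of how this is done (assuming Hadoop's newer org.apache.hadoop.mapreduce API; the class name, job name, and input path below are placeholders), the split size can be bounded from the driver:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.mapreduce.Job;
        import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

        public class SplitSizeDriver {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                Job job = Job.getInstance(conf, "split-size-demo");

                // Raise the minimum split size to 100 MB: the 100 MB file from
                // the example above then becomes a single split -> 1 mapper.
                FileInputFormat.setMinInputSplitSize(job, 100L * 1024 * 1024);

                // Alternatively, cap the split size at 25 MB: the same file
                // becomes 4 splits -> 4 mappers.
                // FileInputFormat.setMaxInputSplitSize(job, 25L * 1024 * 1024);

                FileInputFormat.addInputPath(job, new Path(args[0]));
                // ... set mapper/reducer classes and the output path as usual ...
            }
        }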

    Conclusion:

    1. A split is a logical division of the input data, while a block is a physical division of the data.
    2. The HDFS default block size is the default split size if no input split size is specified.
    3. The split size is user defined, and you can control it in your MapReduce program.
    4. One split can map to multiple blocks, and there can be multiple splits of one block.
    5. The number of map tasks (mappers) is equal to the number of splits.
  • 2020-12-01 03:58
    • Assume we have a 400 MB file that consists of 4 records (e.g., a 400 MB CSV file with 4 rows, 100 MB each).

    • If the HDFS block size is configured as 128 MB, then the 4 records will not be distributed among the blocks evenly. It will look like this:

    • Block 1 contains the entire first record and a 28 MB chunk of the second record.
    • If a mapper were run on Block 1 alone, it could not proceed, since it would not have the entire second record.

    • This is the exact problem that input splits solve. Input splits respect logical record boundaries.

    • Let's assume the input split size is 200 MB.

    • Therefore input split 1 contains both record 1 and record 2, and input split 2 does not start with record 2, since record 2 has already been assigned to input split 1. Input split 2 starts with record 3.

    • This is why an input split is only a logical chunk of data. It points to start and end locations within blocks.

    • If the input split size is n times the block size, an input split can span multiple blocks, so fewer mappers are needed for the whole job, which means less parallelism. (The number of mappers is the number of input splits.)

    • Input split size = block size is the ideal configuration.
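
    To make the arithmetic above concrete, here is a small self-contained toy (plain Java, not Hadoop code; all numbers are taken from this answer's example) that maps each 100 MB record onto physical blocks and logical splits:

        public class SplitVsBlockDemo {
            public static void main(String[] args) {
                final long record = 100, block = 128, split = 200, file = 400; // all in MB

                for (long start = 0; start < file; start += record) {
                    long end = start + record;
                    long firstBlock = start / block;      // block holding the record's first byte
                    long lastBlock = (end - 1) / block;   // block holding the record's last byte
                    long owningSplit = start / split;     // a record belongs to the split it starts in
                    System.out.printf("record %d: blocks %d..%d, processed by split %d%n",
                            start / record + 1, firstBlock, lastBlock, owningSplit);
                }
            }
        }

    Records 2, 3, and 4 each straddle two physical blocks, yet each one is processed by exactly one logical split.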

    Hope this helps.

  • 2020-12-01 03:59

    How splits are created depends on the InputFormat being used. FileInputFormat's getSplits() method decides the splits for each input file; note the role played by the split slop factor (SPLIT_SLOP = 1.1).

    The corresponding Java source that does the split is:
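
    (Condensed and paraphrased from org.apache.hadoop.mapreduce.lib.input.FileInputFormat; exact details vary across Hadoop versions.)

        private static final double SPLIT_SLOP = 1.1;   // 10% slop

        protected long computeSplitSize(long blockSize, long minSize, long maxSize) {
            return Math.max(minSize, Math.min(maxSize, blockSize));
        }

        // Inside getSplits(), for each file of the given length:
        long blockSize = file.getBlockSize();
        long splitSize = computeSplitSize(blockSize, minSize, maxSize);

        long bytesRemaining = length;
        // Carve off full-sized splits while the remainder is more than
        // SPLIT_SLOP (10%) larger than the split size ...
        while (((double) bytesRemaining) / splitSize > SPLIT_SLOP) {
            int blkIndex = getBlockIndex(blkLocations, length - bytesRemaining);
            splits.add(makeSplit(path, length - bytesRemaining, splitSize,
                                 blkLocations[blkIndex].getHosts()));
            bytesRemaining -= splitSize;
        }
        // ... then the tail, if any, becomes one final (possibly smaller) split.
        if (bytesRemaining != 0) {
            int blkIndex = getBlockIndex(blkLocations, length - bytesRemaining);
            splits.add(makeSplit(path, length - bytesRemaining, bytesRemaining,
                                 blkLocations[blkIndex].getHosts()));
        }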


    The method computeSplitSize() above expands to max(minSize, min(maxSize, blockSize)), where minSize and maxSize can be configured by setting mapreduce.input.fileinputformat.split.minsize and mapreduce.input.fileinputformat.split.maxsize. With the defaults (an effective minSize of 1 byte and a maxSize of Long.MAX_VALUE), the formula reduces to the block size, which is why the split size equals the block size unless you override these properties.
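
    Equivalently (a minimal sketch; the 200 MB figure is just the example value from the earlier answer), the same bounds can be set directly as configuration properties:

        Configuration conf = new Configuration();
        // Force splits of at least 200 MB, even if the block size is 128 MB:
        conf.setLong("mapreduce.input.fileinputformat.split.minsize", 200L * 1024 * 1024);
        // Or cap splits below the block size:
        // conf.setLong("mapreduce.input.fileinputformat.split.maxsize", 64L * 1024 * 1024);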
