How does Hadoop perform input splits?

礼貌的吻别 2020-11-30 23:18

This is a conceptual question involving Hadoop/HDFS. Let's say you have a file containing 1 billion lines. And for the sake of simplicity, let's consider that each line is of

10 Answers
  • 2020-12-01 00:16

    FileInputFormat.addInputPath(job, new Path(args[0])); or

    conf.setInputFormat(TextInputFormat.class);

    The FileInputFormat methods addInputPath and setInputFormat take care of the input split, and this code also determines how many mappers get created. We can say the number of input splits, and therefore the number of mappers, is directly proportional to the number of blocks used to store the input file on HDFS.

    Ex. if we have an input file of size 74 MB, it is stored on HDFS in two blocks (64 MB and 10 MB). So the input split count for this file is two, and two mapper instances get created to read it.
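    The arithmetic in the example above can be sketched in a few lines of plain Java (an assumption on my part: this is simple ceiling division, ignoring the small slop tolerance FileInputFormat actually applies to the last chunk):

    ```java
    // Sketch of the block-count arithmetic from the 74 MB example
    // (assumption: plain ceiling division; the class name is mine).
    public class BlockCount {
        static long countBlocks(long fileSizeMb, long blockSizeMb) {
            // Ceiling division: a partially filled last block still counts
            return (fileSizeMb + blockSizeMb - 1) / blockSizeMb;
        }

        public static void main(String[] args) {
            // 74 MB file on HDFS with 64 MB blocks -> 2 blocks (64 MB + 10 MB),
            // hence 2 input splits and 2 mapper instances
            System.out.println(countBlocks(74, 64)); // prints 2
        }
    }
    ```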

  • 2020-12-01 00:17

    Difference between block size and input split size.

    An input split is a logical split of your data, used during data processing in a MapReduce program or other processing techniques. The input split size is a user-defined value, and a Hadoop developer can choose it based on the size of the data being processed.

    Input splits are basically used to control the number of mappers in a MapReduce program. If you have not defined an input split size in your MapReduce program, the default HDFS block size is used as the input split size during processing.

    Example:

    Suppose you have a 100 MB file and the default HDFS block size is 64 MB; the file is then chopped into two splits and occupies two HDFS blocks. Now if your MapReduce program processing this data has not specified an input split size, the number of blocks (2) is taken as the number of input splits, and two mappers are assigned to the job. But suppose you specify a split size of, say, 100 MB in your MapReduce program: then both blocks are treated as a single split, and only one mapper is assigned to the job.

    Now suppose you specify a split size of, say, 25 MB: then there will be 4 input splits for the MapReduce program, and 4 mappers will be assigned to the job.

    Conclusion:

    1. An input split is a logical division of the input data, while an HDFS block is a physical division of the data.
    2. The default HDFS block size is the default split size if no input split size is specified in code.
    3. Split size is user defined, and the user can control it in the MapReduce program.
    4. One split can map to multiple blocks, and there can be multiple splits of one block.
    5. The number of map tasks (mappers) equals the number of input splits.

    Source : https://hadoopjournal.wordpress.com/2015/06/30/mapreduce-input-split-versus-hdfs-blocks/
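    The scenarios above can be sketched in plain Java (an assumption on my part: this simplifies Hadoop's FileInputFormat.computeSplitSize and getSplits logic, including the 10% slop tolerance on the last chunk; class and method names here are mine):

    ```java
    // Sketch of the split-size arithmetic from the 100 MB examples above.
    public class SplitMath {
        static final double SPLIT_SLOP = 1.1; // last split may be up to 10% oversized

        // splitSize = max(minSize, min(maxSize, blockSize))
        static long computeSplitSize(long blockSize, long minSize, long maxSize) {
            return Math.max(minSize, Math.min(maxSize, blockSize));
        }

        static int countSplits(long fileSize, long splitSize) {
            int splits = 0;
            long remaining = fileSize;
            while ((double) remaining / splitSize > SPLIT_SLOP) {
                splits++;
                remaining -= splitSize;
            }
            return remaining > 0 ? splits + 1 : splits;
        }

        public static void main(String[] args) {
            long mb = 1024L * 1024L;
            long block = 64 * mb;
            // No split size specified: split size falls back to block size -> 2 splits
            System.out.println(countSplits(100 * mb, computeSplitSize(block, 1, Long.MAX_VALUE)));
            // Minimum split size raised to 100 MB: both blocks become one split -> 1
            System.out.println(countSplits(100 * mb, computeSplitSize(block, 100 * mb, Long.MAX_VALUE)));
            // Maximum split size capped at 25 MB: -> 4 splits
            System.out.println(countSplits(100 * mb, computeSplitSize(block, 1, 25 * mb)));
        }
    }
    ```

    Note how the one user knob, split size, is derived from three values: the block size and the configured minimum and maximum.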

  • 2020-12-01 00:18

    Files are split into HDFS blocks, and the blocks are replicated. Hadoop assigns a node to a split based on the data-locality principle: it will try to execute the mapper on a node where the block resides. Because of replication, there are multiple such nodes hosting the same block.

    If those nodes are not available, Hadoop will try to pick a node that is closest to the one hosting the data block; it could pick another node in the same rack, for example. A node may be unavailable for various reasons: all its map slots may be in use, or the node may simply be down.

  • 2020-12-01 00:20

    The short answer is that the InputFormat takes care of splitting the file.

    The way I approach this question is by looking at the default TextInputFormat class:

    All file-based InputFormat classes are subclasses of FileInputFormat, which takes care of the split.

    Specifically, FileInputFormat's getSplits function generates a List of InputSplit from the list of input files defined in the JobContext. The split is based on byte size; the minimum and maximum split sizes can be set arbitrarily in the project's xml configuration files.
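    For reference, the minimum and maximum split sizes mentioned above correspond to these Hadoop configuration properties (the 128 MB maximum below is an example value I chose, not a default):

    ```xml
    <!-- mapred-site.xml, or set per job on the Configuration object -->
    <property>
      <name>mapreduce.input.fileinputformat.split.minsize</name>
      <value>0</value> <!-- minimum split size in bytes -->
    </property>
    <property>
      <name>mapreduce.input.fileinputformat.split.maxsize</name>
      <value>134217728</value> <!-- example: cap splits at 128 MB -->
    </property>
    ```

    The same values can also be set programmatically via FileInputFormat.setMinInputSplitSize and FileInputFormat.setMaxInputSplitSize in the driver.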
