This is a conceptual question involving Hadoop/HDFS. Let's say you have a file containing 1 billion lines. And for the sake of simplicity, let's consider that each line is of
The InputFormat is responsible for providing the splits.
In general, if you have n nodes, HDFS will distribute the file over all these n nodes. If you start a job, there will by default be one mapper per input split, which with the default split size is roughly one mapper per HDFS block. Thanks to Hadoop, the mapper on a machine will preferably process the part of the data that is stored on that node; this is called data locality (rack awareness is the related notion the scheduler falls back on when no node holding the data is available).
So to make a long story short: upload the data into HDFS and start an MR job. Hadoop will take care of the optimized execution.
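To make that concrete, here is a minimal driver sketch using the newer org.apache.hadoop.mapreduce API (the class name and HDFS paths are hypothetical): you upload the file to HDFS, point the job at it, and Hadoop works out the splits and schedules the mappers.

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class LineCountDriver {

        // Trivial mapper: the framework calls map() once per line of the split,
        // with the line's byte offset as the key and the line text as the value.
        public static class LineMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
            @Override
            protected void map(LongWritable offset, Text line, Context context)
                    throws IOException, InterruptedException {
                context.write(new Text("lines"), new LongWritable(1));
            }
        }

        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "line-count");
            job.setJarByClass(LineCountDriver.class);
            job.setMapperClass(LineMapper.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(LongWritable.class);

            // TextInputFormat is the default; shown here for clarity.
            job.setInputFormatClass(TextInputFormat.class);

            // Hypothetical HDFS paths: adjust to your cluster.
            FileInputFormat.addInputPath(job, new Path("/data/big-file.txt"));
            FileOutputFormat.setOutputPath(job, new Path("/data/out"));

            // Hadoop computes the input splits and schedules one map task per split,
            // preferring nodes that already hold the corresponding HDFS block.
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }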
When a Hadoop job is run, it splits the input files into chunks and assigns each split to a mapper to process; each such chunk is called an InputSplit.
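As a small illustration (newer mapreduce API, hypothetical input path), you can ask the InputFormat itself for the splits it would create; each InputSplit in the returned list corresponds to one map task.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    public class SplitInspector {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration());
            // Hypothetical input path: replace with your own file or directory.
            FileInputFormat.addInputPath(job, new Path("/data/big-file.txt"));

            // Ask the InputFormat for the logical splits it would hand to mappers.
            TextInputFormat inputFormat = new TextInputFormat();
            for (InputSplit split : inputFormat.getSplits(job)) {
                // Each split reports its length and the hosts storing that data,
                // which the scheduler uses for data locality.
                System.out.println(split + " length=" + split.getLength()
                        + " locations=" + String.join(",", split.getLocations()));
            }
        }
    }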
I think what Deepak was asking was more about how the input for each call of the map function is determined, rather than the data present on each map node. I am saying this based on the second part of the question: More specifically, each time the map() function is called what are its Key key and Value val parameters?
Actually, the same question brought me here, and had I been an experienced Hadoop developer, I might have interpreted it like the answers above did.
To answer the question: the file at a given map node is split based on the InputFormat we set (in Java this is done with setInputFormat() on the JobConf in the old API).
An example:
conf.setInputFormat(TextInputFormat.class); Here, by passing TextInputFormat to setInputFormat, we are telling Hadoop to treat each line of the input file at the map node as one input to the map function. A line feed or carriage return is used to signal the end of a line. More info at TextInputFormat.
In this example: the keys are the byte positions of the lines within the file, and the values are the lines of text.
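A minimal sketch against the older org.apache.hadoop.mapred API that setInputFormat() belongs to (the class name is made up, everything else is standard Hadoop); on every map() call the framework passes the byte offset as the key and the line's text as the value.

    import java.io.IOException;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapred.JobConf;
    import org.apache.hadoop.mapred.MapReduceBase;
    import org.apache.hadoop.mapred.Mapper;
    import org.apache.hadoop.mapred.OutputCollector;
    import org.apache.hadoop.mapred.Reporter;
    import org.apache.hadoop.mapred.TextInputFormat;

    public class OldApiLineMapper extends MapReduceBase
            implements Mapper<LongWritable, Text, Text, LongWritable> {

        // Called once per line of the split: key = byte offset of the line
        // within the file, value = the text of the line (without the newline).
        @Override
        public void map(LongWritable offset, Text line,
                        OutputCollector<Text, LongWritable> output, Reporter reporter)
                throws IOException {
            output.collect(line, offset);
        }

        // Hypothetical helper showing where setInputFormat() fits in.
        public static JobConf createJobConf() {
            JobConf conf = new JobConf(OldApiLineMapper.class);
            conf.setInputFormat(TextInputFormat.class); // one (offset, line) pair per map() call
            conf.setMapperClass(OldApiLineMapper.class);
            return conf;
        }
    }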
Hope this helps.
FileInputFormat is the abstract class which defines how the input files are read and split up. FileInputFormat provides the following functionality:
1. Selects the files/objects that should be used as input.
2. Defines the InputSplits that break a file up into tasks.
As per Hadoop's basic functionality, if there are n splits then there will be n mappers.
Fortunately, everything is taken care of by the framework.
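As a rough sketch of those two responsibilities with the newer mapreduce API (the paths are hypothetical): you select the input files, and FileInputFormat defines the splits when the job is submitted.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class InputSelectionExample {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "input-selection");

            // 1. Select the files/objects that should be used as input
            //    (hypothetical paths; globs are allowed).
            FileInputFormat.setInputPaths(job, new Path("/data/logs/2015/*.log"));
            FileInputFormat.addInputPath(job, new Path("/data/extra-input"));

            // 2. Define the InputSplits: the framework calls FileInputFormat.getSplits()
            //    at submission time and starts one map task per split.
        }
    }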
MapReduce data processing is driven by this concept of input splits. The number of input splits that are calculated for a specific application determines the number of mapper tasks.
The number of maps is usually driven by the number of DFS blocks in the input files.
Each of these mapper tasks is assigned, where possible, to a slave node where the input split is stored. The Resource Manager (or JobTracker, if you’re in Hadoop 1) does its best to ensure that input splits are processed locally.
If data locality can't be achieved because an input split crosses data node boundaries, some data will be transferred from one data node to another.
Assume the block size is 128 MB and the last record did not fit into Block a but spills over into Block b; then the part of that record stored in Block b is read over the network by the node holding Block a.
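If you need to influence how many splits (and therefore mappers) you get, you can bound the split size. A small sketch using the helpers on the newer FileInputFormat (the sizes are arbitrary; the underlying property names differ between Hadoop 1 and 2):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;

    public class SplitSizeTuning {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "split-size-tuning");

            // Cap each split at 64 MB and require at least 32 MB per split.
            // Together with the HDFS block size, these bounds determine how many
            // splits (and therefore map tasks) the job gets.
            FileInputFormat.setMaxInputSplitSize(job, 64L * 1024 * 1024);
            FileInputFormat.setMinInputSplitSize(job, 32L * 1024 * 1024);
        }
    }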
Have a look at these related questions:
About Hadoop/HDFS file splitting
How does Hadoop process records split across block boundaries?
Files are not split up by a separate MapReduce job: HDFS splits a file into blocks when it is written, and the InputFormat computes the logical splits when the job is submitted. Use FileInputFormat for large files and CombineFileInputFormat for many smaller ones. You can also control whether an individual file may be split at all by overriding the isSplitable() method. Each split is then fed to a mapper, which preferably runs on a data node holding the corresponding block. The split size depends on values such as the mapred.max.split.size parameter and the HDFS block size.
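For example, a sketch of overriding isSplitable() with the newer mapreduce API (the format name is made up): returning false forces each file into a single split, which is also what happens automatically for non-splittable codecs such as gzip.

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;

    // Hypothetical format: behaves like TextInputFormat but never splits a file,
    // so each input file is processed by exactly one mapper.
    public class WholeFileTextInputFormat extends TextInputFormat {
        @Override
        protected boolean isSplitable(JobContext context, Path file) {
            return false;
        }
    }

You would then enable it with job.setInputFormatClass(WholeFileTextInputFormat.class).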