Hadoop: read multiple lines at a time

你的背包 · 2021-02-04 17:54

I have a file in which every set of four lines represents one record.

For example, the first four lines represent record 1, the next four represent record 2, and so on.

How can I read all four lines at a time, so that a complete record is always handed to the same map task?

2 Answers

  •  小蘑菇 (OP)
     2021-02-04 18:35

    A few approaches, some dirtier than others:


    The right way

    You may have to define your own RecordReader, InputSplit, and InputFormat. Depending on exactly what you are trying to do, you may be able to reuse some of the existing implementations of those three. You will most likely have to write your own RecordReader to define the key/value pair, and you may also have to write your own InputSplit to help define the record boundaries.
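
    As a rough illustration of that idea, here is a minimal sketch of a custom InputFormat whose RecordReader glues four consecutive lines into one value. The class names (FourLineInputFormat, FourLineRecordReader) and the ';' delimiter are made up for this example; it wraps Hadoop's stock LineRecordReader and simply disables splitting, which sidesteps the boundary problem at the cost of one mapper per file:

    import java.io.IOException;

    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.InputSplit;
    import org.apache.hadoop.mapreduce.JobContext;
    import org.apache.hadoop.mapreduce.RecordReader;
    import org.apache.hadoop.mapreduce.TaskAttemptContext;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

    // Hypothetical names for illustration; not part of Hadoop itself.
    public class FourLineInputFormat extends FileInputFormat<LongWritable, Text> {

        @Override
        public RecordReader<LongWritable, Text> createRecordReader(
                InputSplit split, TaskAttemptContext context) {
            return new FourLineRecordReader();
        }

        @Override
        protected boolean isSplitable(JobContext context, Path file) {
            // Never split a file, so a 4-line record can never be cut in half.
            return false;
        }

        public static class FourLineRecordReader
                extends RecordReader<LongWritable, Text> {

            private final LineRecordReader lines = new LineRecordReader();
            private final LongWritable key = new LongWritable();
            private final Text value = new Text();

            @Override
            public void initialize(InputSplit split, TaskAttemptContext context)
                    throws IOException {
                lines.initialize(split, context);
            }

            @Override
            public boolean nextKeyValue() throws IOException {
                StringBuilder record = new StringBuilder();
                for (int i = 0; i < 4; i++) {
                    if (!lines.nextKeyValue()) {
                        return false;  // EOF, or a trailing partial record
                    }
                    if (i == 0) {
                        // Key the record by the byte offset of its first line.
                        key.set(lines.getCurrentKey().get());
                    } else {
                        record.append(';');
                    }
                    record.append(lines.getCurrentValue().toString());
                }
                value.set(record.toString());
                return true;
            }

            @Override public LongWritable getCurrentKey() { return key; }
            @Override public Text getCurrentValue() { return value; }
            @Override public float getProgress() throws IOException { return lines.getProgress(); }
            @Override public void close() throws IOException { lines.close(); }
        }
    }

    You would then wire it in with job.setInputFormatClass(FourLineInputFormat.class).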


    Another right way, which may not be possible

    The above task is quite daunting. Do you have any control over your data set? Can you preprocess it in some way (either while it is coming in or at rest)? If so, you should strongly consider trying to transform your dataset into something that is easier to read out of the box in Hadoop.

    Something like:

    ALine1
    ALine2            ALine1;ALine2;ALine3;ALine4
    ALine3
    ALine4        ->
    BLine1
    BLine2            BLine1;BLine2;BLine3;BLine4
    BLine3
    BLine4
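
    If you do have that control, even a small standalone program run before ingestion can do the flattening. A sketch of that transformation follows (the file names and the ';' delimiter are illustrative, and leftover partial records are simply dropped):

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.ArrayList;
    import java.util.List;

    public class FlattenRecords {
        public static void main(String[] args) throws IOException {
            try (BufferedReader in = Files.newBufferedReader(
                     Paths.get("records.txt"), StandardCharsets.UTF_8);
                 PrintWriter out = new PrintWriter(Files.newBufferedWriter(
                     Paths.get("records-flat.txt"), StandardCharsets.UTF_8))) {
                List<String> buffer = new ArrayList<>(4);
                String line;
                while ((line = in.readLine()) != null) {
                    buffer.add(line);
                    if (buffer.size() == 4) {                  // one complete record
                        out.println(String.join(";", buffer)); // ALine1;ALine2;...
                        buffer.clear();
                    }
                }
                // Any lines left in the buffer form an incomplete record;
                // here they are dropped, but you might prefer to log or fail.
            }
        }
    }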
    

    Down and Dirty

    Do you have any control over the file sizes of your data? If you manually split your data at block boundaries, you can force Hadoop not to care about records spanning splits. For example, if your block size is 64MB, write your files out in 60MB chunks so that each file fits within a single block, and therefore a single split.

    Without worrying about input splits, you could do something dirty: in your map function, add each new key/value pair to a list. When the list has four items in it, do your processing, emit something, then clear the list. Otherwise, emit nothing and move on.

    The reason you have to manually split the data is that otherwise you are not guaranteed that an entire 4-row record will be given to the same map task.
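
    A sketch of that buffering trick, assuming the files have been sized as above so that no 4-line record straddles a split (the class name and the choice of what to emit are illustrative):

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Hypothetical class name; what you emit per record is up to you.
    public class FourLineBufferingMapper
            extends Mapper<LongWritable, Text, Text, Text> {

        private final List<String> buffer = new ArrayList<>(4);
        private final Text outKey = new Text();
        private final Text outValue = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            buffer.add(value.toString());
            if (buffer.size() == 4) {
                // Illustrative choice: key on the record's first line,
                // join the remaining three lines as the value.
                outKey.set(buffer.get(0));
                outValue.set(String.join(";", buffer.subList(1, 4)));
                context.write(outKey, outValue);
                buffer.clear();
            }
            // Fewer than four buffered lines: emit nothing and move on.
        }
    }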
