Hadoop gzip compressed files

攒了一身酷 2020-12-09 10:37

I am new to Hadoop and trying to process the Wikipedia dump. It's a 6.7 GB gzip compressed XML file. I read that Hadoop supports gzip compressed files, but that they can only be processed by a single mapper because gzip is not splittable. Is there a way around this?

4 answers
  • 2020-12-09 10:47

    GZIP files cannot be split, due to a limitation of the codec. 6.7 GB really isn't that big, so just decompress it on a single machine (it will take less than an hour) and copy the uncompressed XML up to HDFS. Then you can process the Wikipedia XML in Hadoop.

    Cloud9 contains a WikipediaPageInputFormat class that you can use to read the XML in Hadoop.
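    A minimal sketch of the decompress-and-upload step, assuming a local dump named enwiki.xml.gz and an HDFS target path /wiki/enwiki.xml (both names are made up for illustration):

    ```java
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.zip.GZIPInputStream;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DecompressAndUpload {
        public static void main(String[] args) throws Exception {
            // 1. Decompress the dump locally (gzip has to be read as one stream anyway).
            try (InputStream in = new GZIPInputStream(
                    Files.newInputStream(Paths.get("enwiki.xml.gz")))) {
                Files.copy(in, Paths.get("enwiki.xml"));
            }

            // 2. Copy the uncompressed XML into HDFS, where it can be split
            //    across many mappers.
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            fs.copyFromLocalFile(new Path("enwiki.xml"), new Path("/wiki/enwiki.xml"));
        }
    }
    ```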

  • 2020-12-09 10:49

    A file compressed with the GZIP codec cannot be split because of the way this codec works. The whole file therefore ends up as a single input split, and a single split in Hadoop can only be processed by a single mapper; so a single GZIP file can only ever be processed by one mapper.

    There are at least three ways of working around that limitation:

    1. As a preprocessing step: uncompress the file and recompress it with a splittable codec (LZO, for example).
    2. As a preprocessing step: uncompress the file, split it into smaller parts, and recompress them (see this; a rough sketch of this step follows the list).
    3. Use this patch for Hadoop (which I wrote) that allows a way around this: Splittable Gzip
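    A minimal sketch of option 2, assuming the dump can simply be cut into fixed-size chunks; a real preprocessor would cut at <page> boundaries so each part is usable on its own, and the file names here are made up for illustration:

    ```java
    import java.io.InputStream;
    import java.io.OutputStream;
    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.util.zip.GZIPInputStream;
    import java.util.zip.GZIPOutputStream;

    public class SplitAndRecompress {
        // Roughly 256 MB of uncompressed data per output part.
        private static final long CHUNK_BYTES = 256L * 1024 * 1024;

        public static void main(String[] args) throws Exception {
            byte[] buf = new byte[64 * 1024];
            int part = 0;
            long written = CHUNK_BYTES;   // forces the first part to be opened
            OutputStream out = null;
            try (InputStream in = new GZIPInputStream(
                    Files.newInputStream(Paths.get("enwiki.xml.gz")))) {
                int n;
                while ((n = in.read(buf)) != -1) {
                    if (written >= CHUNK_BYTES) {
                        // Start a new, independently gzipped part.
                        if (out != null) out.close();
                        out = new GZIPOutputStream(Files.newOutputStream(
                                Paths.get(String.format("enwiki-part-%04d.xml.gz", part++))));
                        written = 0;
                    }
                    out.write(buf, 0, n);
                    written += n;
                }
            } finally {
                if (out != null) out.close();
            }
        }
    }
    ```

    Each part is then a separate (still unsplittable) gzip file, but with enough parts you get one mapper per part, which is usually parallelism enough.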

    HTH

  • 2020-12-09 10:50

    This is one of the biggest misunderstandings about HDFS.

    Yes, files compressed with gzip are not splittable by MapReduce, but that does not mean that gzip as a codec has no value in HDFS or that it cannot be used in a splittable way.

    The gzip codec can be used with RCFiles, SequenceFiles, Avro files, and many more file formats. When the gzip codec is used within these splittable container formats, you get gzip's great compression and pretty good speed plus the splittable component.
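    For example, a minimal sketch of writing a block-compressed SequenceFile with the gzip codec; the HDFS path and the LongWritable/Text key-value types are just illustrative assumptions:

    ```java
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.SequenceFile;
    import org.apache.hadoop.io.SequenceFile.CompressionType;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.compress.GzipCodec;
    import org.apache.hadoop.util.ReflectionUtils;

    public class WriteGzipSequenceFile {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            GzipCodec codec = ReflectionUtils.newInstance(GzipCodec.class, conf);

            // Block-compress records with gzip; the SequenceFile container keeps
            // sync markers between blocks, so MapReduce can still split the file.
            try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
                    SequenceFile.Writer.file(new Path("/wiki/pages.seq")),
                    SequenceFile.Writer.keyClass(LongWritable.class),
                    SequenceFile.Writer.valueClass(Text.class),
                    SequenceFile.Writer.compression(CompressionType.BLOCK, codec))) {
                writer.append(new LongWritable(1L), new Text("<page>...</page>"));
            }
        }
    }
    ```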

  • 2020-12-09 10:53

    Why not ungzip it and use splittable LZO compression instead?

    http://blog.cloudera.com/blog/2009/11/hadoop-at-twitter-part-1-splittable-lzo-compression/
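    If you go the LZO route, the job-side wiring might look roughly like the sketch below. It assumes the hadoop-lzo library is installed and the input has already been recompressed as .lzo and indexed; the codec and input-format class names are the ones published by that project, and the paths are made up for illustration.

    ```java
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class LzoDumpDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Register the LZO codecs provided by the hadoop-lzo library (assumed installed).
            conf.set("io.compression.codecs",
                    "org.apache.hadoop.io.compress.DefaultCodec,"
                  + "com.hadoop.compression.lzo.LzoCodec,"
                  + "com.hadoop.compression.lzo.LzopCodec");

            Job job = Job.getInstance(conf, "process lzo-compressed dump");
            job.setJarByClass(LzoDumpDriver.class);

            // LzoTextInputFormat (from hadoop-lzo) generates one split per indexed
            // block, so many mappers can read the same .lzo file in parallel.
            job.setInputFormatClass(com.hadoop.mapreduce.LzoTextInputFormat.class);
            job.setMapperClass(Mapper.class);   // identity mapper, just for illustration
            job.setNumReduceTasks(0);
            job.setOutputKeyClass(LongWritable.class);
            job.setOutputValueClass(Text.class);

            FileInputFormat.addInputPath(job, new Path("/wiki/enwiki.xml.lzo"));
            FileOutputFormat.setOutputPath(job, new Path("/wiki/out"));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }
    ```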
