hadoop mapreduce: java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z

暖寄归人 2020-12-16 04:27

I am trying to write a Snappy block-compressed SequenceFile from a map-reduce job. I am using Hadoop 2.0.0-cdh4.5.0 and snappy-java 1.0.4.1.

Here is my code:

6 Answers
  • 2020-12-16 04:59

    In my case, check the Hive conf file mapred-site.xml and verify the value of the key mapreduce.admin.user.env.

    I tested this on a new datanode and got the buildSupportsSnappy UnsatisfiedLinkError on the machine that had no native dependencies (libsnappy.so, etc.).
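
    A quick way to see which native codecs a node can actually load is the checknative command (assuming your Hadoop build ships it; most 2.x releases do):

    # run on the node in question; reports native hadoop, zlib, snappy, and lz4 support
    hadoop checknative -a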

  • 2020-12-16 05:00

    You need all files, not only the *.so ones. Ideally you would add the folder to your path instead of copying the libs from it; a sketch is below. You need to restart the MapReduce service afterwards so that the new libraries are picked up and can be used.
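
    For example, in hadoop-env.sh (a sketch; the /usr/lib/hadoop/lib/native path is an assumption and should match your install):

    # make the existing native libs visible instead of copying them around
    export JAVA_LIBRARY_PATH=${JAVA_LIBRARY_PATH}:/usr/lib/hadoop/lib/native
    export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/lib/hadoop/lib/native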

    Niko

  • 2020-12-16 05:07

    My problem was that my JRE did not contain the appropriate native libraries. This may or may not be because I switched the JDK from Cloudera's pre-built VM to JDK 1.7. The Snappy .so files are in your hadoop/lib/native directory, and the JRE needs to have them. Adding them to the classpath did not seem to resolve my issue, so I resolved it like this:

    $ cd /usr/lib/hadoop/lib/native
    $ sudo cp *.so /usr/java/latest/jre/lib/amd64/
    

    Then I was able to use the SnappyCodec class. Your paths may be different though.

    That seemed to get me to the next problem:

    Caused by: java.lang.RuntimeException: native snappy library not available: SnappyCompressor has not been loaded.

    Still trying to resolve that.
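
    An alternative that avoids copying anything into the JRE is to point the task JVMs at the native directory via java.library.path (a sketch, assuming the driver uses ToolRunner; the jar, class, and path names are illustrative):

    # pass the native lib dir to the map and reduce task JVMs
    hadoop jar myjob.jar MyDriver \
      -Dmapreduce.map.java.opts="-Djava.library.path=/usr/lib/hadoop/lib/native" \
      -Dmapreduce.reduce.java.opts="-Djava.library.path=/usr/lib/hadoop/lib/native" \
      input output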

  • 2020-12-16 05:09

    Found the following information in the Cloudera Community forums:

    1. Ensure that LD_LIBRARY_PATH and JAVA_LIBRARY_PATH contain the native directory path holding the libsnappy.so* files.
    2. Ensure that LD_LIBRARY_PATH and JAVA_LIBRARY_PATH have been exported in the Spark environment (spark-env.sh).

    For example, I use Hortonworks HDP and have the following configuration in my spark-env.sh:

    export JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH:/usr/hdp/2.2.0.0-2041/hadoop/lib/native
    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/hdp/2.2.0.0-2041/hadoop/lib/native
    export SPARK_YARN_USER_ENV="JAVA_LIBRARY_PATH=$JAVA_LIBRARY_PATH,LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
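
    Equivalently, the library path can be supplied per job via spark-submit (a sketch; the HDP native path matches the exports above and may differ on your cluster, and your-app.jar is a placeholder):

    # per-job alternative to editing spark-env.sh
    spark-submit \
      --conf spark.driver.extraLibraryPath=/usr/hdp/2.2.0.0-2041/hadoop/lib/native \
      --conf spark.executor.extraLibraryPath=/usr/hdp/2.2.0.0-2041/hadoop/lib/native \
      your-app.jar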
    
  • 2020-12-16 05:13

    After removing hadoop.dll (which I had copied there manually) from windows\system32 and setting HADOOP_HOME=\hadoop-2.6.4, it works!

  • 2020-12-16 05:24

    Check your core-site.xml and mapred-site.xml: they should contain the correct properties and the path of the folder with the native libraries.

    core-site.xml

    <property>
      <name>io.compression.codecs</name>
      <value>org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.SnappyCodec</value>
    </property>
    

    mapred-site.xml

    <property>
      <name>mapreduce.map.output.compress</name>
      <value>true</value>
    </property>

    <property>
      <name>mapreduce.map.output.compress.codec</name>
      <value>org.apache.hadoop.io.compress.SnappyCodec</value>
    </property>

    <property>
      <name>mapreduce.admin.user.env</name>
      <value>LD_LIBRARY_PATH=/usr/hdp/2.2.0.0-1084/hadoop/lib/native</value>
    </property>
    

    LD_LIBRARY_PATH has to contain the path of the directory that holds libsnappy.so.
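
    To confirm that the configured directory actually contains the library (the path is taken from the mapreduce.admin.user.env value above; adjust it for your install):

    # verify the native Snappy library is present in the configured directory
    ls -l /usr/hdp/2.2.0.0-1084/hadoop/lib/native | grep -i snappy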
