Spark Unable to load native-hadoop library for your platform

鱼传尺愫 2020-12-28 13:03

I'm a dummy on Ubuntu 16.04, desperately attempting to make Spark work. I've tried to fix my problem using the answers found here on Stack Overflow, but I couldn't resolve it.

2 Answers
  • 2020-12-28 13:38

    Steps to fix:

    • download Hadoop binaries
    • unpack to directory of your choice
    • set HADOOP_HOME to point to that directory.
    • add $HADOOP_HOME/lib/native to LD_LIBRARY_PATH.
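The steps above boil down to two lines in your shell profile. A minimal sketch, assuming the binaries were unpacked to ~/hadoop-2.8.0 (adjust the path to wherever and whichever version you unpacked):

```shell
# Append to ~/.bashrc (or run in the current shell), then open a new terminal.
# The hadoop-2.8.0 directory name is an assumption; use your actual unpack location.
export HADOOP_HOME="$HOME/hadoop-2.8.0"
# Let the dynamic linker find libhadoop.so and friends:
export LD_LIBRARY_PATH="$HADOOP_HOME/lib/native:$LD_LIBRARY_PATH"
```

With these set, Spark's JVM can locate the native Hadoop shared libraries instead of falling back to the built-in Java classes.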
  • 2020-12-28 13:43
    1. Download the Hadoop binary (link) and put it in your home directory (you can choose a different Hadoop version if you like; adjust the next steps accordingly).
    2. Unpack the archive in your home directory with: tar -zxvf hadoop_file_name
    3. Add export HADOOP_HOME=~/hadoop-2.8.0 to your .bashrc file. Open a new terminal and try again.

    Source: Install PySpark on ubuntu
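A runnable illustration of steps 2-3, using a throwaway tarball in a temp directory to stand in for the real Hadoop download (the directory name hadoop-2.8.0 is just the placeholder from the steps above; for the real thing, substitute the archive you downloaded):

```shell
# Work in a scratch directory so nothing in $HOME is touched.
workdir=$(mktemp -d)
cd "$workdir"

# Fabricate a stand-in archive with the layout the real tarball has
# (a top-level hadoop-2.8.0/ directory containing lib/native).
mkdir -p hadoop-2.8.0/lib/native
tar -zcf hadoop-2.8.0.tar.gz hadoop-2.8.0
rm -r hadoop-2.8.0

# Step 2: unpack the archive.
tar -zxvf hadoop-2.8.0.tar.gz

# Step 3: what the .bashrc line does, for the current shell.
export HADOOP_HOME="$workdir/hadoop-2.8.0"
```

Putting the export in ~/.bashrc (rather than running it by hand) is what makes the setting survive into new terminals.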
