spark submit “Service 'Driver' could not bind on port” error

醉酒成梦 2021-01-04 20:49

I used the following command to run the Spark Java WordCount example:

time spark-submit --deploy-mode cluster --master spark://192.168.0.7:6066 --class o

9 Answers
  • 2021-01-04 21:15

    I had the same issue when trying to run the shell, and was able to get this working by setting the SPARK_LOCAL_IP environment variable. You can assign this from the command line when running the shell:

    SPARK_LOCAL_IP=127.0.0.1 ./bin/spark-shell

    For a more permanent solution, create a spark-env.sh file in the conf directory of your Spark root. Add the following line:

    SPARK_LOCAL_IP=127.0.0.1

    Give the script execute permission with chmod +x ./conf/spark-env.sh; Spark will then pick up this environment variable by default.
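
    The original question uses spark-submit rather than the shell; the same environment variable should work there as well when the driver runs locally. A minimal sketch (the class and jar names are placeholders, not taken from the question):

    SPARK_LOCAL_IP=127.0.0.1 spark-submit --master local[*] --class com.example.WordCount wordcount.jar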

  • 2021-01-04 21:15

    With the DataFrame API, set the bind address when building the SparkSession:

    val spark = SparkSession.builder
      .appName("BinarizerExample")
      .master("local[*]")
      .config("spark.driver.bindAddress", "127.0.0.1")
      .getOrCreate()

  • 2021-01-04 21:19

    You need to add your machine's hostname to the /etc/hosts file, mapped to the loopback address. Something like:

    127.0.0.1   localhost <hostname>
    
  • 2021-01-04 21:21

    I solved this problem by modifying the slaves file (spark-2.4.0-bin-hadoop2.7/conf/slaves); please check your configuration.
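
    For reference, that file (created by copying conf/slaves.template) is just a list of worker hostnames, one per line; the entries below are placeholders:

    # conf/slaves: one worker hostname or IP per line
    sparknode01
    sparknode02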

  • 2021-01-04 21:36

    I am using Maven/SBT to manage dependencies and the Spark core is contained in a jar file.

    You can override SPARK_LOCAL_IP at runtime by setting the "spark.driver.bindAddress" property (here in Scala):

    import org.apache.spark.{SparkConf, SparkContext}

    val config = new SparkConf()
    config.setMaster("local[*]")
    config.setAppName("Test App")
    config.set("spark.driver.bindAddress", "127.0.0.1")
    val sc = new SparkContext(config)
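
    The same property can also be passed on the command line at submit time; a minimal sketch (the class and jar names are placeholders, not taken from the question):

    spark-submit --conf spark.driver.bindAddress=127.0.0.1 --class com.example.WordCount wordcount.jar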
    
  • 2021-01-04 21:36

    I also had this issue.

    The reason (for me) was that the IP of my local system was not reachable from my local system. I know that statement makes no sense, but please read the following.

    My hostname (uname -n) shows that my system is named "sparkmaster". In my /etc/hosts file, I had assigned the fixed IP address 192.168.1.70 to the sparkmaster system, with additional fixed addresses for sparknode01 and sparknode02 at 192.168.1.71 and 192.168.1.72 respectively.

    Due to some other problems I had, I needed to change all of my network adapters to DHCP. This meant that they were getting addresses like 192.168.90.123. The DHCP addresses were not in the same network as the ...1.70 range and there was no route configured.

    When Spark starts, it seems to try to connect to the host named by uname (i.e. sparkmaster in my case). This was the IP 192.168.1.70, but there was no way to connect to it because that address was on an unreachable network.

    My solution was to change one of my Ethernet adapters back to a fixed static address (i.e. 192.168.1.70) and voila - problem solved.

    So the issue seems to be that when Spark starts in "local mode" it attempts to connect to a host with your system's hostname (rather than localhost). I guess this makes sense if you want to set up a cluster (like I did), but it can result in the confusing message above. Possibly putting your system's hostname on the 127.0.0.1 entry in /etc/hosts would also solve this problem, but I did not try it.
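
    For what it is worth, that untested suggestion would look something like this in /etc/hosts (sparkmaster being the hostname in my case):

    127.0.0.1   localhost sparkmaster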
