spark-submit “Service 'Driver' could not bind on port” error

Submitted by 若如初见 on 2019-12-01 04:35:10

I had the same issue when trying to run the shell, and was able to get this working by setting the SPARK_LOCAL_IP environment variable. You can assign this from the command line when running the shell:

SPARK_LOCAL_IP=127.0.0.1 ./bin/spark-shell
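The same environment variable should also work when launching a job through spark-submit itself; a hedged example, where the class name and jar path are placeholders rather than anything from the original question:

SPARK_LOCAL_IP=127.0.0.1 ./bin/spark-submit --master "local[*]" --class com.example.MyApp path/to/my-app.jar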

For a more permanent solution, create a spark-env.sh file in the conf directory of your Spark root. Add the following line:

SPARK_LOCAL_IP=127.0.0.1

Give the script execute permissions with chmod +x ./conf/spark-env.sh, and this environment variable will be set by default.
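A minimal sketch of those steps, run from the Spark root directory (assuming the conf/spark-env.sh.template that ships with Spark is present; copying it first is optional):

cp conf/spark-env.sh.template conf/spark-env.sh
echo "SPARK_LOCAL_IP=127.0.0.1" >> conf/spark-env.sh
chmod +x conf/spark-env.sh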

I am using Maven/SBT to manage dependencies and the Spark core is contained in a jar file.

You can override SPARK_LOCAL_IP at runtime by setting "spark.driver.bindAddress" (here in Scala):

import org.apache.spark.{SparkConf, SparkContext}

val config = new SparkConf()
config.setMaster("local[*]")                        // run locally, using all available cores
config.setAppName("Test App")
config.set("spark.driver.bindAddress", "127.0.0.1") // bind the driver to the loopback address
val sc = new SparkContext(config)
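If you launch through spark-submit instead of building the SparkConf in code, the same property can be passed on the command line on recent Spark versions; again, the class name and jar path below are placeholders:

./bin/spark-submit --conf spark.driver.bindAddress=127.0.0.1 --class com.example.MyApp path/to/my-app.jar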

I also had this issue.

The reason (for me) was that the IP of my local system was not reachable from my local system. I know that statement makes no sense, but please read the following.

My system's hostname (uname -n) shows that my system is named "sparkmaster". In my /etc/hosts file, I have assigned a fixed IP address for the sparkmaster system as "192.168.1.70". There were additional fixed IP addresses for sparknode01 and sparknode02 at ...1.71 & ...1.72 respectively.
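For context, the relevant /etc/hosts entries would have looked roughly like this (reconstructed from the description above, not copied from the original post):

192.168.1.70   sparkmaster
192.168.1.71   sparknode01
192.168.1.72   sparknode02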

Due to some other problems I had, I needed to change all of my network adapters to DHCP. This meant that they were getting addresses like 192.168.90.123. The DHCP addresses were not in the same network as the ...1.70 range and there was no route configured.

When Spark starts, it seems to try to connect to the host named by the system's hostname (i.e. sparkmaster in my case). That resolved to 192.168.1.70, but there was no way to connect to that address because it was on an unreachable network.

My solution was to change one of my Ethernet adapters back to a fixed static address (i.e. 192.168.1.70) and voila - problem solved.

So the issue seems to be that when Spark starts in "local mode" it attempts to connect to a system named after your machine's hostname (rather than localhost). I guess this makes sense if you want to set up a cluster (like I did), but it can result in the confusing message above. Possibly putting your system's hostname on the 127.0.0.1 entry in /etc/hosts would also solve this problem, but I did not try it.

You need to enter the hostname in your /etc/hosts file. Something like:

127.0.0.1   localhost <your-hostname>
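For example, if the machine's hostname is sparkmaster (the hypothetical name from the answer above), the entry would look like:

127.0.0.1   localhost sparkmaster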

This is possibly a duplicate of "Spark 1.2.1 standalone cluster mode spark-submit is not working".

I have tried the same steps but was able to run the job. Kindly post the full spark-env.sh and spark-defaults.conf if possible.

I had this problem too, and it was because the real IP address had been replaced with my IP in /etc/hosts.

This issue is related to the IP address configuration alone; the error messages in the log file are not informative. Check with the following 3 steps:

  1. Check your IP address with the ifconfig or ip command. If your service is not a public service, addresses in the 192.168 range should be good enough. 127.0.0.1 cannot be used if you are planning a cluster.

  2. Check the environment variable SPARK_MASTER_HOST; make sure there are no typos in the name of the variable or in the actual IP address (see the export example after this list).

    env | grep SPARK_

  3. Check that the port you plan to use for the Spark master is free, using netstat. Do not use a port below 1024. For example:

    netstat -a | grep 9123
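If step 2 shows that SPARK_MASTER_HOST is missing or misspelled, one way to set it is to export it before starting the standalone master; 192.168.1.70 is just the example address from the answer above, not a recommendation:

    export SPARK_MASTER_HOST=192.168.1.70
    ./sbin/start-master.sh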

After your Spark master starts running, if you are not able to see the web UI from a different machine, open the web UI port with the iptables command.
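A hedged example that opens the standalone master's default web UI port of 8080 (adjust the port if your setup uses a different one):

sudo iptables -A INPUT -p tcp --dport 8080 -j ACCEPT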

I solved this problem by modifying the slaves file, at spark-2.4.0-bin-hadoop2.7/conf/slaves. Please check your configuration.
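For reference, the slaves file simply lists one worker host per line; a hypothetical example using the node names from an earlier answer:

sparknode01
sparknode02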
