Failed to bind to: spark-master, using a remote cluster with two workers

Backend · Unresolved · 4 answers · 1320 views
滥情空心 2020-12-14 17:53

I have managed to get everything working with the local master and two remote workers. Now, I want to connect to a remote master that has the same remote workers. I have tried…

4 Answers
  • 2020-12-14 18:38

    I had this problem when my /etc/hosts file was mapping the wrong IP address to my local hostname.

    The BindException in your logs complains about the IP address 192.168.0.191. I assume that resolves to the hostname of your machine and it's not the actual IP address that your network interface is using. It should work fine once you fix that.
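    If that is the case here, a minimal sketch of the fix on Linux (the hostname and addresses below are placeholders, not taken from the question) is to correct the /etc/hosts mapping so the machine's hostname resolves to the address of the interface Spark should bind to:

    # /etc/hosts -- map the hostname to the real LAN address of the interface,
    # not to a stale or loopback-only entry (values are placeholders)
    127.0.0.1       localhost
    192.168.0.10    my-spark-host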

  • 2020-12-14 18:40

    Setting the environment variable SPARK_LOCAL_IP=127.0.0.1 solved this for me.
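    To make this persistent rather than set per shell session, one sketch (assuming a standard standalone install under $SPARK_HOME) is to put it in conf/spark-env.sh, which the Spark launch scripts source on startup:

    # $SPARK_HOME/conf/spark-env.sh (copy from spark-env.sh.template if it does not exist)
    # Force Spark to bind to the loopback address
    export SPARK_LOCAL_IP=127.0.0.1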

  • 2020-12-14 18:41

    I had Spark working on my EC2 instance. I started a new web server, and to meet its requirements I had to change the hostname to the EC2 public DNS name, i.e.

    hostname ec2-54-xxx-xxx-xxx.compute-1.amazonaws.com
    

    After that, Spark could not start and showed the error below:

    16/09/20 21:02:22 WARN Utils: Service 'sparkDriver' could not bind on port 0. Attempting port 1.
    16/09/20 21:02:22 ERROR SparkContext: Error initializing SparkContext.

    I solved it by setting SPARK_LOCAL_IP as below:

    export SPARK_LOCAL_IP="localhost"
    

    and then launched the Spark shell as below:

    $SPARK_HOME/bin/spark-shell
    
  • 2020-12-14 18:46

    Possibly your master is running on a non-default port. Can you post your submit command? Have a look at https://spark.apache.org/docs/latest/spark-standalone.html#connecting-an-application-to-the-cluster
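    For reference, a standalone master listens on port 7077 by default, and the application must use exactly the spark://HOST:PORT URL the master advertises. A hedged sketch of a submit command (the host name, class, and jar are placeholders):

    # Point the application at the remote standalone master (default port 7077);
    # the URL should match what the master's web UI shows at the top.
    $SPARK_HOME/bin/spark-submit \
      --master spark://remote-master-host:7077 \
      --class com.example.MyApp \
      myapp.jar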
