Running Spark on a cluster: Initial job has not accepted any resources

盖世英雄少女心 2020-12-07 04:22
  1. I have a remote Ubuntu server on linode.com with 4 cores and 8 GB of RAM.
  2. I have a Spark 2 cluster consisting of 1 master and 1 slave running on that remote Ubuntu server.
1 Answer
  • 2020-12-07 04:43

    You are submitting the application in client mode. This means the driver process is started on your local machine.

    When executing Spark applications, all machines have to be able to communicate with each other. Most likely your driver process is not reachable from the executors (for example, it is using a private IP or is hidden behind a firewall). If that is the case, you can confirm it by checking the executor logs: go to the application in the master web UI, select one of the workers with status EXITED, and check its stderr. You should see that the executor is failing with an org.apache.spark.rpc.RpcTimeoutException.
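
    The original answer does not show a command, but a rough sketch of what a reachable client-mode submission can look like on a standalone cluster is below. spark.driver.host, spark.driver.bindAddress, spark.driver.port, and spark.blockManager.port are real Spark settings; <master-ip>, <driver-ip>, the port numbers, and the example jar are placeholders for your own setup.

    ```bash
    # A minimal sketch, not the poster's actual command: pin the driver address
    # and ports so executors on the remote workers can connect back to the
    # driver, then open those ports in the firewall.
    spark-submit \
      --master spark://<master-ip>:7077 \
      --deploy-mode client \
      --conf spark.driver.host=<driver-ip> \
      --conf spark.driver.bindAddress=0.0.0.0 \
      --conf spark.driver.port=40000 \
      --conf spark.blockManager.port=40001 \
      --class org.apache.spark.examples.SparkPi \
      "$SPARK_HOME"/examples/jars/spark-examples_*.jar 100
    ```

    If the driver sits behind NAT or a firewall, the driver and block manager ports must be reachable from the workers; otherwise the executors fail with the RpcTimeoutException described above.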

    There are two possible solutions:

    • Submit the application from a machine that can be reached from your cluster.
    • Submit the application in cluster mode (see the sketch after this list). This uses cluster resources to start the driver process, so you have to account for that.
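
    A minimal sketch of the cluster-mode variant, assuming a standalone master at <master-ip>:7077; com.example.MyApp and the jar path are hypothetical stand-ins for your application, which must be accessible from the cluster:

    ```bash
    # In cluster mode the driver is launched on one of the workers, so its
    # memory and cores come out of the cluster's resources as well.
    spark-submit \
      --master spark://<master-ip>:7077 \
      --deploy-mode cluster \
      --driver-memory 1g \
      --executor-memory 2g \
      --total-executor-cores 2 \
      --class com.example.MyApp \
      /path/on/cluster/my-app.jar
    ```

    With 4 cores and 8 GB of RAM in total, the driver and executors together must fit within those limits, or the application will again report that no resources were accepted.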