TaskSchedulerImpl: Initial job has not accepted any resources;

旧时难觅i 2020-12-03 06:14

Here is what I am trying to do.

I have created a two-node DataStax Enterprise cluster, on top of which I have written a Java program to get the count of one table (

5 Answers
  • 2020-12-03 06:23

    My problem was that I was requesting more memory than my slaves had available. Try reducing the memory requested at spark-submit time, something like the following:

    ~/spark-1.5.0/bin/spark-submit --master spark://my-pc:7077 --total-executor-cores 2 --executor-memory 512m
    

    with my ~/spark-1.5.0/conf/spark-env.sh being:

    SPARK_WORKER_INSTANCES=4
    SPARK_WORKER_MEMORY=1000m
    SPARK_WORKER_CORES=2
    
  • 2020-12-03 06:31

    I faced a similar issue, and after some online research and trial-and-error I narrowed it down to three causes (apart from the first, the other two are not even hinted at by the error message):

    1. As the error indicates, you may be allocating more resources than are available. => This was not my issue.
    2. Hostname and IP address mishaps: I took care of this by specifying SPARK_MASTER_IP and SPARK_LOCAL_IP in spark-env.sh (see the sketch after this list).
    3. Disable the firewall on the client: this was the solution that worked for me. Since I was working on an in-house prototype, I disabled the firewall on the client node; for some reason the worker nodes were not able to talk back to the client. For production you would instead open only the specific ports required.
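
    As a rough sketch of what point 2 looked like for me, the relevant lines in conf/spark-env.sh go something like this (the hostname and address below are placeholders, not values from my cluster):

    # conf/spark-env.sh on every node -- placeholder values, adjust to your own hosts
    SPARK_MASTER_IP=spark-master.example.internal   # address the workers connect to
    SPARK_LOCAL_IP=192.168.1.10                     # address this particular node binds to and advertises

    Restart the master and the workers after changing this so they re-register under the new addresses.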
  • 2020-12-03 06:32

    Please look at Russ's post

    Specifically this section:

    This is by far the most common first error that a new Spark user will see when attempting to run a new application. Our new and excited Spark user will attempt to start the shell or run their own application and be met with the following message:

    ...

    The short term solution to this problem is to make sure you aren’t requesting more resources from your cluster than exist or to shut down any apps that are unnecessarily using resources. If you need to run multiple Spark apps simultaneously then you’ll need to adjust the amount of cores being used by each app.
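    As a concrete sketch of what that adjustment can look like on a standalone cluster (the jar name and the numbers here are only illustrative, not from the question), cap each application's share when you submit it:

    # Let two apps share a cluster with 4 cores by capping each one at 2 cores
    ~/spark-1.5.0/bin/spark-submit --master spark://my-pc:7077 \
        --conf spark.cores.max=2 \
        --executor-memory 512m \
        my-app.jar

    The master web UI shows whether the application actually received cores and memory, or whether it is still sitting in the WAITING state.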

  • 2020-12-03 06:33

    In my case, the problem was that I had the following line in $SPARK_HOME/conf/spark-env.sh of each worker:

    SPARK_EXECUTOR_MEMORY=3g

    and the following line in $SPARK_HOME/conf/spark-defaults.conf on the "master" node:

    spark.executor.memory 4g

    The problem went away once I changed 4g to 3g. I hope that this will help someone with the same issue. The other answers helped me spot this.
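
    For reference, a consistent pair after the fix looks something like this (3g simply matched what my workers could offer; use your own value):

    # $SPARK_HOME/conf/spark-env.sh on each worker
    SPARK_EXECUTOR_MEMORY=3g

    # $SPARK_HOME/conf/spark-defaults.conf on the node you submit from
    spark.executor.memory 3g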

  • 2020-12-03 06:47

    I have faced this issue a few times even though the resource allocation was correct.

    The fix was to restart the Mesos services:

    sudo service mesos-slave restart
    sudo service mesos-master restart
    