YARN not preempting resources based on fair shares when running a Spark job

Asked by 说谎 on 2021-02-07 03:01 · 2 answers · 1501 views

I have a problem with re-balancing Apache Spark job resources on YARN Fair Scheduler queues.

For the tests I've configured Hadoop 2.6 (also tried 2.7) to run in pseudo-distributed mode.

2 Answers
  •  不思量自难忘°
     2021-02-07 03:52

    You need to set one of the preemption timeouts in your allocation XML: one for the minimum share and one for the fair share, both in seconds. By default, neither timeout is set, so no preemption ever happens.

    From Hadoop: The Definitive Guide, 4th Edition:

    If a queue waits for as long as its minimum share preemption timeout without receiving its minimum guaranteed share, then the scheduler may preempt other containers. The default timeout is set for all queues via the defaultMinSharePreemptionTimeout top-level element in the allocation file, and on a per-queue basis by setting the minSharePreemptionTimeout element for a queue.

    Likewise, if a queue remains below half of its fair share for as long as the fair share preemption timeout, then the scheduler may preempt other containers. The default timeout is set for all queues via the defaultFairSharePreemptionTimeout top-level element in the allocation file, and on a per-queue basis by setting fairSharePreemptionTimeout on a queue. The threshold may also be changed from its default of 0.5 by setting defaultFairSharePreemptionThreshold and fairSharePreemptionThreshold (per-queue).
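
    Here is a minimal sketch of what that allocation file might look like. The element names come straight from the passage above; the queue name spark and the concrete timeout and resource values are hypothetical placeholders you would tune for your cluster:

        <?xml version="1.0"?>
        <allocations>
          <!-- Cluster-wide defaults; illustrative values, in seconds -->
          <defaultMinSharePreemptionTimeout>60</defaultMinSharePreemptionTimeout>
          <defaultFairSharePreemptionTimeout>120</defaultFairSharePreemptionTimeout>
          <!-- Preempt when a queue sits below 0.5 * fair share (the default) -->
          <defaultFairSharePreemptionThreshold>0.5</defaultFairSharePreemptionThreshold>

          <!-- Hypothetical queue; per-queue settings override the defaults -->
          <queue name="spark">
            <minResources>2048 mb,2 vcores</minResources>
            <minSharePreemptionTimeout>30</minSharePreemptionTimeout>
            <fairSharePreemptionTimeout>60</fairSharePreemptionTimeout>
          </queue>
        </allocations>

    Note that preemption as a whole is disabled unless yarn.scheduler.fair.preemption is set to true in yarn-site.xml:

        <property>
          <name>yarn.scheduler.fair.preemption</name>
          <value>true</value>
        </property>

    With settings like these, the scheduler should start reclaiming containers for the spark queue once it has waited 30 seconds without its minimum share, or 60 seconds below half of its fair share.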
