Why does a worker node not see updates to an accumulator made on other worker nodes?

闹比i 2020-12-20 23:04

I'm using a LongAccumulator as a shared counter in map operations. But it seems that I'm not using it correctly, because the state of the counter on a worker node does not reflect updates made on other worker nodes.
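
A reconstructed sketch of what I'm doing (the data and the map logic are placeholders, not my actual code):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("accumulator-demo").getOrCreate()
    val sc = spark.sparkContext

    // Registered on the driver; I expected it to behave like a shared counter.
    val counter = sc.longAccumulator("shared-counter")

    sc.parallelize(1 to 1000, numSlices = 8).map { x =>
      counter.add(1L)
      // This read sees only the task-local copy, not a cluster-wide total,
      // which is why the counter on one worker never reflects the others.
      val seenSoFar = counter.value
      x -> seenSoFar
    }.count()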

2 Answers
  • 2020-12-20 23:38

    From the answer by @user6910411 (emphasis mine):

    Each task has its own accumulator, which is updated locally, and merged with the "shared" copy on the driver, once the task has finished and the result has been reported.

    The emphasized part (that the merge happens only once a task has finished) is not 100% correct.

    The current values of both internal and external accumulators are sent to the driver with every executor heartbeat, which has to happen at regular intervals or the driver assumes the executor is lost.

    That interval is controlled by the spark.executor.heartbeatInterval property, which is 10s by default:

    Interval between each executor's heartbeats to the driver. Heartbeats let the driver know that the executor is still alive and update it with metrics for in-progress tasks. spark.executor.heartbeatInterval should be significantly less than spark.network.timeout.

    As quoted above, the heartbeat is the "transport layer" that passes the partial accumulator updates (from executors) to the driver.
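
    If you want the driver to receive those partial updates more often, a sketch (the 5s value is arbitrary and only illustrative; keep it well below spark.network.timeout):

        import org.apache.spark.SparkConf
        import org.apache.spark.sql.SparkSession

        // Shorter heartbeats mean more frequent partial accumulator updates
        // reaching the driver, at the cost of more RPC traffic.
        // 5s is an arbitrary example value.
        val conf = new SparkConf().set("spark.executor.heartbeatInterval", "5s")

        val spark = SparkSession.builder()
          .config(conf)
          .appName("heartbeat-interval-demo")
          .getOrCreate()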

    There are two kinds of accumulators -- internal and non-internal (for lack of a more proper name, I'm going to call the non-internal accumulators non-internal).

    The internal accumulators carry the task metrics that Spark uses to let an administrator/operator know what happens under the covers.

    Spark uses the same mechanism to send partial updates to non-internal accumulators, so the local updates to the accumulators (on the executors where tasks run) become visible to the driver with every executor heartbeat.

    I'm not sure whether the driver hands those partial values over to user code (i.e. to the outside world), but the main point is that the driver knows the current value of an accumulator, delayed by up to one executor heartbeat.


    BTW, the question frames worker nodes as the boundary of accumulator updates, but in reality it is the task alone that creates the visibility boundary for accumulator updates. It does not really matter whether you have one or two worker nodes (with one or more executors), as you won't see accumulator updates across tasks even on a single executor.

    Accumulator updates are local to a task, and it is only the driver (and the task itself) that ever gets informed about them.
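
    A small sketch to see that task boundary in action (the dataset and the mapPartitionsWithIndex call are my own illustration, not code from the question):

        import org.apache.spark.sql.SparkSession

        val spark = SparkSession.builder().appName("task-boundary-demo").getOrCreate()
        val sc = spark.sparkContext
        val counter = sc.longAccumulator("counter")

        sc.parallelize(1 to 100, numSlices = 4).mapPartitionsWithIndex { (part, it) =>
          it.map { x =>
            counter.add(1L)
            // Each task works on its own local copy, so this value never exceeds
            // the partition size (25 here), no matter how many workers or
            // executors process the other 75 elements.
            println(s"task for partition $part sees ${counter.value}")
            x
          }
        }.count()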

  • 2020-12-20 23:52

    It shouldn't work:

    Tasks running on a cluster can then add to it using the add method. However, they cannot read its value. Only the driver program can read the accumulator’s value, using its value method.

    Each task has its own accumulator, which is updated locally, and merged with the "shared" copy on the driver, once the task has finished and the result has been reported.

    The old Accumulator API (now wrapping AccumulatorV2) actually threw an exception when value was used from within a task, but for some reason this check has been omitted in AccumulatorV2.

    What you are experiencing is actually similar to the old behavior described in How to print accumulator variable from within task (seem to "work" without calling value method)?
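
    A minimal sketch of the supported pattern (the file name and parsing logic are placeholders): tasks only call add, and value is read on the driver once the action has finished.

        import org.apache.spark.sql.SparkSession

        val spark = SparkSession.builder().appName("accumulator-usage").getOrCreate()
        val sc = spark.sparkContext

        val badRecords = sc.longAccumulator("bad-records")

        val parsed = sc.textFile("data.txt").flatMap { line =>   // placeholder input
          val fields = line.split(",")
          if (fields.length == 3) Some(fields)
          else { badRecords.add(1L); None }        // tasks may only add, never read
        }

        parsed.count()                                   // run an action first
        println(s"bad records = ${badRecords.value}")    // then read on the driver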
