How does Hive choose the number of reducers for a job?

Asked by 独厮守ぢ on 2020-12-16 13:29

Several places say the default number of reducers in a Hadoop job is 1. You can use the mapred.reduce.tasks property to manually set the number of reducers.

When I run a Hive query, however, the number of reducers is often different. How does Hive choose the number of reducers for a job?

1 Answer
  • 2020-12-16 14:01

    The default of 1 is probably what a vanilla Hadoop install uses; Hive overrides it.

    In open-source Hive (and likely on EMR):

    # reducers = (# bytes of input to mappers)
                 / (hive.exec.reducers.bytes.per.reducer)
    

    The default hive.exec.reducers.bytes.per.reducer is 1G (1,000,000,000 bytes).
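
    As a worked example of the heuristic, assume the 1G default and a query whose mappers read roughly 25 GB of input (the input size is hypothetical):

    # reducers = 25,000,000,000 / 1,000,000,000
               = 25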

    You can limit the number of reducers produced by this heuristic using hive.exec.reducers.max.
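
    For example, to cap a session at 32 reducers (32 is an arbitrary illustrative value, not a recommendation):

    -- Even if input_bytes / bytes_per_reducer exceeds 32,
    -- Hive will launch at most 32 reducers in this session.
    set hive.exec.reducers.max=32;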

    If you know exactly the number of reducers you want, you can set mapred.reduce.tasks, and this will override all heuristics. (By default this is set to -1, indicating Hive should use its heuristics.)
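
    For example (8 is an arbitrary illustrative count):

    -- Force exactly 8 reducers, bypassing the size-based heuristic:
    set mapred.reduce.tasks=8;

    -- Restore the default and let the heuristic decide again:
    set mapred.reduce.tasks=-1;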

    In some cases - say 'select count(1) from T' - Hive will set the number of reducers to 1, irrespective of the size of the input data. These are called 'full aggregates' - and if the only thing the query does is a full aggregate, the compiler knows that the data from the mappers will be reduced to a trivial amount, so there's no point running multiple reducers (see the sketch below).
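
    A sketch of the contrast, assuming a hypothetical table T with a grouping column k:

    -- Full aggregate: the map output collapses to a single row,
    -- so Hive plans exactly 1 reducer regardless of input size.
    select count(1) from T;

    -- Not a full aggregate: rows are partitioned by k across reducers,
    -- so the size-based heuristic above applies.
    select k, count(1) from T group by k;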
