I read somewhere that if you choose a batch size that is a power of 2, training will be faster. What is this rule? Does it apply to other applications? Can you provide a reference paper?
Algorithmically speaking, using larger mini-batches allows you to reduce the variance of your stochastic gradient updates (by taking the average of the gradients in the mini-batch), and this in turn allows you to take bigger step-sizes, which means the optimization algorithm will make progress faster.
However, the total amount of work (in terms of the number of gradient computations) needed to reach a given accuracy in the objective stays the same: with a mini-batch size of n, the variance of the update direction is reduced by a factor of n, so the theory allows you to take step-sizes that are n times larger, and a single step then takes you roughly as far as n steps of SGD with a mini-batch size of 1.
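As a rough illustration of the variance argument, here is a toy sketch (the noisy-gradient model and the numbers are made up purely for illustration, not taken from any paper):

```python
import numpy as np

# Toy model: each per-sample "gradient" is a noisy estimate of the true gradient.
rng = np.random.default_rng(0)
true_grad = 1.0
noise_std = 2.0  # per-sample variance is noise_std**2 = 4.0

def minibatch_grad_variance(n, trials=100_000):
    """Average n noisy per-sample gradients; return the empirical variance."""
    samples = true_grad + noise_std * rng.standard_normal((trials, n))
    return samples.mean(axis=1).var()

for n in (1, 4, 16, 64):
    print(f"batch size {n:3d}: variance ~ {minibatch_grad_variance(n):.4f}")
# The variance shrinks roughly by a factor of n: ~4.0, ~1.0, ~0.25, ~0.0625.
```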
As for TensorFlow, I found no evidence supporting this claim, and it's a question that has been closed on GitHub: https://github.com/tensorflow/tensorflow/issues/4132
Note that resizing images to powers of two makes sense (because pooling is generally done in 2x2 windows), but that's a different thing altogether.
The notion comes from aligning computations (C) onto the physical processors (PP) of the GPU. Since the number of PP is often a power of 2, using a number of C that is not a power of 2 leads to poor performance.

You can see the mapping of the C onto the PP as a pile of slices whose size is the number of PP. Say you've got 16 PP. You can map 16 C onto them: 1 C is mapped onto 1 PP. You can map 32 C onto them: 2 slices of 16 C, so 1 PP is responsible for 2 C.

This is due to the SIMD paradigm used by GPUs. This is often called Data Parallelism: all the PP do the same thing at the same time, but on different data. A small sketch of the slicing arithmetic follows below.
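To make the slicing argument concrete, here is a toy sketch (the processor count and the `slices_needed` helper are illustrative only, not a model of any real GPU):

```python
import math

def slices_needed(num_computations, num_processors=16):
    """Number of SIMD 'slices' needed to cover all computations, and the
    fraction of processor lanes that sit idle across those slices."""
    slices = math.ceil(num_computations / num_processors)
    idle = slices * num_processors - num_computations
    return slices, idle / (slices * num_processors)

for c in (16, 17, 32, 33):
    s, waste = slices_needed(c)
    print(f"{c:3d} computations -> {s} slices, {waste:.0%} of lanes idle")
# e.g. 17 computations still need 2 slices of 16, leaving 15 of 32 lanes idle.
```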
I've heard this, too. Here's a white paper about training on CIFAR-10 where some Intel researchers make the claim:
In general, the performance of processors is better if the batch size is a power of 2.
However, it's unclear just how big the advantage may be because the authors don't provide any training duration data :/
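If you want to check the claim on your own hardware, a crude benchmark sketch like the following can compare wall-clock training time across batch sizes (the random data, toy model, and batch sizes are arbitrary; real results depend heavily on your GPU and TensorFlow version):

```python
import time
import numpy as np
import tensorflow as tf

# Random stand-in data shaped like CIFAR-10.
x = np.random.rand(10_000, 32, 32, 3).astype("float32")
y = np.random.randint(0, 10, size=(10_000,))

def make_model():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(32, 32, 3)),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

# Compare batch sizes just below, at, and above a power of 2.
for batch_size in (63, 64, 65, 128):
    model = make_model()
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    start = time.perf_counter()
    model.fit(x, y, batch_size=batch_size, epochs=2, verbose=0)
    print(f"batch size {batch_size:4d}: {time.perf_counter() - start:.1f}s")
```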
Source: https://stackoverflow.com/questions/44483233/does-using-batch-size-as-powers-of-2-is-faster-on-tensorflow