Instance Normalisation vs Batch Normalisation

Asked by 攒了一身酷 on 2021-01-29 19:25

I understand that Batch Normalisation helps in faster training by pushing the activations toward a unit Gaussian distribution and thus tackling the vanishing-gradient problem.

4 Answers
  • 2021-01-29 20:06

    Definition

    Let's begin with the strict definition of both:

    Batch normalization

    Instance normalization
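
    The formula images did not survive extraction; a standard reconstruction for a CNN activation tensor $x$ of shape $(N, C, H, W)$ (batch, channels, height, width) is:

    Batch normalization pools statistics per channel, over the batch and spatial dimensions:

    $$\mu_c = \frac{1}{NHW}\sum_{n,h,w} x_{nchw}, \qquad \sigma_c^2 = \frac{1}{NHW}\sum_{n,h,w}\left(x_{nchw}-\mu_c\right)^2$$

    Instance normalization pools over the spatial dimensions only, per image and channel:

    $$\mu_{nc} = \frac{1}{HW}\sum_{h,w} x_{nchw}, \qquad \sigma_{nc}^2 = \frac{1}{HW}\sum_{h,w}\left(x_{nchw}-\mu_{nc}\right)^2$$

    In both cases the activation is then normalized as $y = (x-\mu)/\sqrt{\sigma^2+\epsilon}$.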

    As you can notice, they do the same thing, except for the number of input tensors that are normalized jointly. The batch version normalizes all images across the batch and spatial locations (in the CNN case; in the ordinary case it's different); the instance version normalizes each image independently, i.e., across spatial locations only.

    In other words, where batch norm computes one mean and one standard deviation (thus making the distribution of the whole layer Gaussian), instance norm computes one per instance, making each individual image's distribution look Gaussian, but not jointly.
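
    A concrete sketch of how many statistics each variant computes (a NumPy toy example, with a 4-image, 3-channel batch):

    ```python
    import numpy as np

    x = np.random.randn(4, 3, 8, 8)  # (N, C, H, W): 4 images, 3 channels

    # Batch norm: statistics pooled over batch and spatial dims -> one per channel
    bn_mean = x.mean(axis=(0, 2, 3))   # shape (3,)
    bn_std = x.std(axis=(0, 2, 3))

    # Instance norm: statistics pooled over spatial dims only -> one per (image, channel)
    in_mean = x.mean(axis=(2, 3))      # shape (4, 3)
    in_std = x.std(axis=(2, 3))

    print(bn_mean.shape, in_mean.shape)  # (3,) (4, 3)
    ```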

    A simple analogy: during data pre-processing step, it's possible to normalize the data on per-image basis or normalize the whole data set.
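
    To see what the analogy implies, here is a minimal sketch (mean-centering only, with two constant toy "images" of very different brightness):

    ```python
    import numpy as np

    imgs = np.stack([np.full((4, 4), 10.0),    # a bright image
                     np.full((4, 4), -10.0)])  # a dark image

    # Per-image normalization: each image is centered by its own mean;
    # the brightness difference between the two images disappears
    per_image = imgs - imgs.mean(axis=(1, 2), keepdims=True)

    # Whole-dataset normalization: one shared mean;
    # the relative brightness difference is preserved
    whole_set = imgs - imgs.mean()

    print(per_image.max(), whole_set.max())  # 0.0 10.0
    ```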

    Credit: the formulas are from here.

    Which normalization is better?

    The answer depends on the network architecture, in particular on what is done after the normalization layer. Image classification networks usually stack the feature maps together and wire them to the FC layer, which shares weights across the batch (the modern way is to use a CONV layer instead of FC, but the argument still applies).

    This is where the distribution nuances start to matter: the same neuron is going to receive the input from all images. If the variance across the batch is high, the gradient from the small activations will be completely suppressed by the high activations, which is exactly the problem that batch norm tries to solve. That's why it's quite possible that per-instance normalization won't improve network convergence at all.

    On the other hand, batch normalization adds extra noise to the training, because the result for a particular instance depends on the neighboring instances. As it turns out, this kind of noise may be either good or bad for the network. This is well explained in the "Weight Normalization" paper by Tim Salimans et al., which names recurrent neural networks and reinforcement-learning DQNs as noise-sensitive applications. I'm not entirely sure, but I think that the same noise-sensitivity was the main issue in the stylization task, which instance norm tried to fight. It would be interesting to check whether weight norm performs better for this particular task.

    Can you combine batch and instance normalization?

    Though it makes a valid neural network, there's no practical use for it. Batch normalization noise is either helping the learning process (in this case it's preferable) or hurting it (in this case it's better to omit it). In both cases, leaving the network with one type of normalization is likely to improve the performance.

  • 2021-01-29 20:20

    IN provides visual and appearance invariance, while BN accelerates training and preserves discriminative features. IN is preferred in shallow layers (the starting layers of a CNN) to remove appearance variation, while BN is preferred in deep layers (the last CNN layers) in order to maintain discrimination.

  • 2021-01-29 20:21

    Great question, and already answered nicely. Just to add: I found this visualisation from Kaiming He's Group Norm paper helpful.

    Source: link to article on Medium contrasting the Norms

  • 2021-01-29 20:21

    I wanted to add more information to this question since there is some more recent work in this area. Your intuition:

    use instance normalisation for image classification where class label should not depend on the contrast of input image

    is partly correct. I would say that a pig in broad daylight is still a pig when the image is taken at night or at dawn. However, this does not mean using instance normalization across the whole network will give you a better result. Here are some reasons:

    1. Color distribution still plays a role. A fruit with a lot of red is more likely to be an apple than an orange.
    2. At later layers, you can no longer think of instance normalization as contrast normalization. Class-specific details emerge in deeper layers, and normalizing them per instance will hurt the model's performance greatly.

    IBN-Net uses both batch normalization and instance normalization in its model. They only put instance normalization in the early layers and achieved improvements in both accuracy and the ability to generalize. They have open-sourced the code here.
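
    A minimal sketch of such a mixed block in PyTorch (the 50/50 channel split and the layer sizes here are illustrative, not IBN-Net's exact configuration): half the channels go through instance norm, the rest through batch norm.

    ```python
    import torch
    import torch.nn as nn

    class IBN(nn.Module):
        """Normalize the first half of the channels with InstanceNorm,
        the second half with BatchNorm, then concatenate."""
        def __init__(self, channels):
            super().__init__()
            self.half = channels // 2
            self.IN = nn.InstanceNorm2d(self.half, affine=True)
            self.BN = nn.BatchNorm2d(channels - self.half)

        def forward(self, x):
            a, b = torch.split(x, [self.half, x.size(1) - self.half], dim=1)
            return torch.cat([self.IN(a), self.BN(b)], dim=1)

    x = torch.randn(2, 64, 16, 16)
    out = IBN(64)(x)
    print(out.shape)  # torch.Size([2, 64, 16, 16])
    ```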
