Why are 1x1 convolutions used in deep neural networks?

再見小時候 2021-01-30 07:21

I'm looking at the InceptionV3 (GoogLeNet) architecture and cannot understand why we need conv1x1 layers.

I know how convolution works, but I do not see the benefit of a 1x1 patch size.

2 Answers
  • 2021-01-30 07:50

    You can think of a 1x1xD convolution as a dimensionality reduction technique when it's placed somewhere in a network.

    If you have an input volume of 100x100x512 and you convolve it with a set of D filters each one with size 1x1x512 you reduce the number of features from 512 to D. The output volume is, therefore, 100x100xD.
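    A minimal sketch of that reduction (PyTorch is just an illustrative choice here, and D = 64 is an arbitrary example value):

        import torch
        import torch.nn as nn

        D = 64                                  # example target depth
        reduce = nn.Conv2d(in_channels=512, out_channels=D, kernel_size=1)

        x = torch.randn(1, 512, 100, 100)       # NCHW: one 100x100x512 volume
        y = reduce(x)
        print(y.shape)                          # torch.Size([1, 64, 100, 100]) -> 100x100xD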

    As you can see, this (1x1x512)xD convolution is mathematically equivalent to a fully connected layer. The main difference is that, while an FC layer requires the input to have a fixed size, the convolutional layer can accept as input any volume with a spatial extent greater than or equal to 100x100.

    Because of this equivalence, a 1x1xD convolution can substitute for any fully connected layer.
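    A small sketch (again PyTorch, with illustrative sizes) that checks the equivalence numerically by copying an FC layer's weights into a 1x1 convolution:

        import torch
        import torch.nn as nn

        fc   = nn.Linear(512, 10)
        conv = nn.Conv2d(512, 10, kernel_size=1)

        # Reuse the FC weights as 1x1 conv kernels: (10, 512) -> (10, 512, 1, 1)
        with torch.no_grad():
            conv.weight.copy_(fc.weight.view(10, 512, 1, 1))
            conv.bias.copy_(fc.bias)

        v = torch.randn(1, 512)                          # a single 512-d feature vector
        out_fc   = fc(v)
        out_conv = conv(v.view(1, 512, 1, 1)).flatten(1)
        print(torch.allclose(out_fc, out_conv, atol=1e-6))  # True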

    In addition, 1x1xD convolutions not only reduce the number of features passed to the next layer, they also introduce new parameters and a new non-linearity into the network, which can help increase model accuracy.

    When the 1x1xD convolution is placed at the end of a classification network, it acts exactly like an FC layer, but instead of thinking of it as a dimensionality reduction technique it's more intuitive to think of it as a layer that outputs a tensor of shape WxHxnum_classes.

    The spatial extent of the output tensor (identified by W and H) is dynamic and is determined by the locations of the input image that the network analyzed.

    If the network has been defined with an input of 200x200x3 and we feed it an image of that size, the output will be a map with W = H = 1 and depth = num_classes. But if the input image has a spatial extent greater than 200x200, then the convolutional network will analyze different locations of the input image (just like a standard convolution does) and will produce a tensor with W > 1 and H > 1. This is not possible with an FC layer, which constrains the network to accept a fixed-size input and produce a fixed-size output.
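    A toy sketch of that behaviour (the backbone layer sizes below are made up; the point is only the 1x1 "FC" head and the variable output size):

        import torch
        import torch.nn as nn

        num_classes = 10
        net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=5),   # 200 -> 40
            nn.Conv2d(16, 32, kernel_size=5, stride=5),  # 40  -> 8
            nn.Conv2d(32, 64, kernel_size=8),            # 8   -> 1
            nn.Conv2d(64, num_classes, kernel_size=1),   # "FC" head as a 1x1 convolution
        )

        print(net(torch.randn(1, 3, 200, 200)).shape)    # [1, 10, 1, 1]  -> W = H = 1
        print(net(torch.randn(1, 3, 256, 256)).shape)    # [1, 10, 3, 3]  -> W, H > 1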

  • 2021-01-30 08:12

    A 1x1 convolution simply maps an input pixel (with all of its channels) to an output pixel, without looking at anything around it. It is often used to reduce the number of depth channels, since multiplying volumes with extremely large depths is often very slow.

    input (256 depth) -> 1x1 convolution (64 depth) -> 4x4 convolution (256 depth)
    
    input (256 depth) -> 4x4 convolution (256 depth)
    

    The bottom one is roughly 3.7x slower.
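    The rough arithmetic behind that number (multiplications per output position, assuming both paths run at the same spatial resolution):

        bottleneck = 256 * 64 + 4 * 4 * 64 * 256   # 1x1 down to 64 channels, then 4x4 back to 256
        direct     = 4 * 4 * 256 * 256             # single 4x4 convolution at 256 channels
        print(bottleneck, direct, direct / bottleneck)   # 278528 1048576 ~3.76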

    Theoretically, the neural network can 'choose' which input 'colors' to look at with this, instead of brute-force multiplying everything.
