What is the difference between Flatten() and GlobalAveragePooling2D() in Keras?

情深已故 2020-12-24 14:15

I want to pass the output of ConvLSTM and Conv2D to a Dense layer in Keras. What is the difference between using global average pooling and flatten? Both are working in my case.

3 Answers
  • 2020-12-24 14:47

    You can test the difference between Flatten and global pooling yourself by comparing the Keras layers with NumPy, if that makes you more confident.

    We demonstrate using, as input, a batch of images with shape (batch_dim, height, width, n_channels):

    import numpy as np
    from tensorflow.keras.layers import *
    
    batch_dim, H, W, n_channels = 32, 5, 5, 3
    X = np.random.uniform(0,1, (batch_dim,H,W,n_channels)).astype('float32')
    
    • Flatten reshapes the input into 2D with the format (batch_dim, all the rest), leaving the batch dimension untouched. In our 4D case, it reshapes to (batch_dim, H*W*n_channels).

      np_flatten = X.reshape(batch_dim, -1) # (batch_dim, H*W*n_channels)
      tf_flatten = Flatten()(X).numpy() # (batch_dim, H*W*n_channels)
      
      (tf_flatten == np_flatten).all() # True
      
    • GlobalAveragePooling2D accepts a 4D input tensor. It takes the mean over the height and width dimensions, separately for each channel. The result is 2D: (batch_dim, n_channels). GlobalMaxPooling2D does the same but with the max operation.

      np_GlobalAvgPool2D = X.mean(axis=(1,2)) # (batch_dim, n_channels)
      tf_GlobalAvgPool2D = GlobalAveragePooling2D()(X).numpy() # (batch_dim, n_channels)
      
      np.allclose(tf_GlobalAvgPool2D, np_GlobalAvgPool2D) # True (allclose, since float summation order may differ slightly)
      
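A practical consequence for the Dense layer the question asks about (a minimal sketch reusing the shapes above; `units = 8` is an arbitrary choice for illustration): Flatten hands the Dense layer H*W*n_channels features per sample, while global pooling hands it only n_channels, so the Dense weight matrices differ accordingly.

```python
import numpy as np
from tensorflow.keras.layers import Dense, Flatten, GlobalAveragePooling2D

batch_dim, H, W, n_channels = 32, 5, 5, 3
X = np.random.uniform(0, 1, (batch_dim, H, W, n_channels)).astype('float32')

units = 8  # arbitrary Dense width for the comparison
flat = Flatten()(X)                   # shape (32, 75)
pooled = GlobalAveragePooling2D()(X)  # shape (32, 3)

dense_flat, dense_pool = Dense(units), Dense(units)
dense_flat(flat)    # builds the layer: kernel (75, 8) + bias (8,)
dense_pool(pooled)  # builds the layer: kernel (3, 8) + bias (8,)

print(dense_flat.count_params())  # 75 * 8 + 8 = 608
print(dense_pool.count_params())  # 3 * 8 + 8 = 32
```

So global pooling drastically shrinks the following Dense layer, which is one reason it is popular as the last step before a classifier head.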
  • 2020-12-24 14:55

    Flattening is a no-brainer: it simply converts a multi-dimensional tensor into a one-dimensional one (per sample) by rearranging the elements.

    GlobalAveragePooling, on the other hand, produces a more compact representation of your tensor, and it comes in 1D/2D/3D variants. Unlike the local pooling layers (e.g. AveragePooling2D, MaxPooling2D), it does not slide a window across the input and needs no padding: it pools over the entire spatial extent at once, either averaging the values (GlobalAveragePooling) or taking the maximum (GlobalMaxPooling).

    Both are common ways to reduce the output of convolutional layers to a fixed-length vector that a Dense layer can consume.
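To see that no window or padding is involved, here is a quick NumPy cross-check of the 1D variants (the toy shapes are assumed for illustration): the whole steps axis is reduced in one go.

```python
import numpy as np
from tensorflow.keras.layers import GlobalAveragePooling1D, GlobalMaxPooling1D

seq = np.random.uniform(0, 1, (2, 7, 4)).astype('float32')  # (batch, steps, features)

avg = GlobalAveragePooling1D()(seq).numpy()  # (2, 4): mean over all 7 steps
mx = GlobalMaxPooling1D()(seq).numpy()       # (2, 4): max over all 7 steps

print(np.allclose(avg, seq.mean(axis=1)))  # True
print((mx == seq.max(axis=1)).all())       # True
```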

  • 2020-12-24 14:58

    The fact that both seem to work doesn't mean they do the same thing.

    Flatten will take a tensor of any shape and transform it into a one-dimensional tensor (plus the samples dimension), keeping all values. For example, a tensor of shape (samples, 10, 20, 1) will be flattened to (samples, 10 * 20 * 1) = (samples, 200).

    GlobalAveragePooling2D does something different. It averages over the spatial dimensions, reducing each of them to a single value per channel, so values are not kept: they are averaged. For example, a tensor of shape (samples, 10, 20, 1) would be output as (samples, 1), assuming the 2nd and 3rd dimensions were spatial (channels last).
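A sketch of the shapes (by default the spatial dimensions are dropped, giving (samples, channels); the `keepdims` argument, assumed available as in recent TensorFlow/Keras versions, retains them as size 1):

```python
import numpy as np
from tensorflow.keras.layers import Flatten, GlobalAveragePooling2D

x = np.zeros((4, 10, 20, 1), dtype='float32')  # (samples, H, W, channels), channels last

print(Flatten()(x).shape)                              # (4, 200)
print(GlobalAveragePooling2D()(x).shape)               # (4, 1)
print(GlobalAveragePooling2D(keepdims=True)(x).shape)  # (4, 1, 1, 1)
```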
