How does input image size influence size and shape of fully connected layer?


I am reading a lot of tutorials that state two things.

  1. \"[Replacing fully connected layers with convolutional layers] casts them into fully convolutional networks
2 Answers

    1. Images have to be of a pre-defined size during training and testing. The fully connected layer itself can have as many nodes as you want; that number does not depend on the input image size or on the convolutional layers' output dimensions.
    2. The input image size and the convolution/pooling settings determine the output shape of each convolutional layer and hence the length of the final flattened vector, which is fed to the fully connected layer. The fully connected layer can have any number of units, and that number is independent of the input image; its weight matrix, however, connects the flattened vector to those units, so the matrix's size does depend on the input image size, which is why the image size must be fixed (point 1). Below is sample code.
        from tensorflow.keras.models import Sequential
        from tensorflow.keras.layers import Conv2D, BatchNormalization, MaxPooling2D, Flatten, Dense

        input_shape = (224, 224, 3)  # example fixed image size; any pre-defined size works

        model = Sequential()
        model.add(Conv2D(32, (3,3), activation='relu', input_shape=input_shape))
        model.add(BatchNormalization())
        model.add(Conv2D(64, (3,3), activation='relu'))
        model.add(BatchNormalization())
        model.add(Conv2D(128, (3,3), activation='relu'))
        model.add(BatchNormalization())
        model.add(Conv2D(256, (3,3), activation='relu'))
        model.add(BatchNormalization())
        model.add(Conv2D(256, (3,3), activation='relu'))
        model.add(MaxPooling2D())
        model.add(BatchNormalization())
        model.add(Flatten())
        model.add(Dense(512, activation='sigmoid'))  # fully connected layer; its 512 units are independent of the previous layers
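
    To make the dependence concrete, here is a small sketch of my own (not part of the original answer; the 32x32 and 64x64 input sizes are arbitrary examples) that builds the same architecture for two image sizes. The flattened vector, and therefore the Dense layer's weight matrix, changes with the image size, while the number of Dense units stays at 512.

        from tensorflow.keras.models import Sequential
        from tensorflow.keras.layers import (Conv2D, BatchNormalization,
                                             MaxPooling2D, Flatten, Dense)

        def build(input_shape):
            # Same architecture as above; only the input size varies.
            m = Sequential()
            m.add(Conv2D(32, (3,3), activation='relu', input_shape=input_shape))
            m.add(BatchNormalization())
            m.add(Conv2D(64, (3,3), activation='relu'))
            m.add(BatchNormalization())
            m.add(Conv2D(128, (3,3), activation='relu'))
            m.add(BatchNormalization())
            m.add(Conv2D(256, (3,3), activation='relu'))
            m.add(BatchNormalization())
            m.add(Conv2D(256, (3,3), activation='relu'))
            m.add(MaxPooling2D())
            m.add(BatchNormalization())
            m.add(Flatten())
            m.add(Dense(512, activation='sigmoid'))
            return m

        for shape in [(32, 32, 3), (64, 64, 3)]:
            m = build(shape)
            flat_size = m.layers[-2].output.shape[-1]    # length of the flattened vector
            dense_params = m.layers[-1].count_params()   # flat_size * 512 weights + 512 biases
            print(shape, flat_size, dense_params)
        # Both models end in Dense(512), but the layer's weight matrix grows with the
        # input image, which is why the image size must be fixed before training.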
    
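    On the claim quoted in the question: replacing the Flatten + Dense head with 1x1 convolutions plus a global pooling is what "casts" a classifier into a fully convolutional network. A minimal sketch of my own below (the layer sizes and the 10-class head are arbitrary examples) shows that such a model accepts any input resolution:

        from tensorflow import keras
        from tensorflow.keras import layers

        fcn = keras.Sequential([
            keras.Input(shape=(None, None, 3)),      # height and width left unspecified
            layers.Conv2D(32, (3,3), activation='relu'),
            layers.Conv2D(64, (3,3), activation='relu'),
            layers.Conv2D(10, (1,1)),                # plays the role of a Dense(10) classifier
            layers.GlobalAveragePooling2D(),         # one score per class, whatever the input size
        ])

        print(fcn.output_shape)  # (None, 10) for any input resolution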
