Question
I am trying to do transfer learning with VGG16. My main idea is to take the first few layers of VGG16, add my own layers after them, then append the rest of the VGG16 layers, and finally add my own output layer at the end. To do this I follow this sequence: (1) load the first layers and freeze them, (2) add my own layers, (3) load the rest of the layers except the output layer [THIS IS WHERE I ENCOUNTER THE FOLLOWING ERROR] and freeze them, (4) add the output layer. Is my approach OK? If not, where am I going wrong? Here's the error:
ValueError: Input 0 is incompatible with layer block3_conv1: expected axis -1 of input shape to have value 128 but got shape (None, 64, 56, 64)
The full code is here for better understanding:
from tensorflow.keras.models import load_model, Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Dense

vgg16_model = load_model('Fetched_VGG.h5')
vgg16_model.summary()

model = Sequential()

# add VGG16 layers (input layer, block1, block2 convs)
for layer in vgg16_model.layers[0:6]:
    model.add(layer)

# freeze the layers (prevent their weights from being updated)
for layer in model.layers:
    layer.trainable = False

# add custom layers
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', name='block66_conv1_m'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', name='block66_conv2_m'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same', name='block66_conv3_m'))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block66_pool_m'))

# add VGG16 layers (block3 up to, but not including, the output dense layer)
for layer in vgg16_model.layers[7:-1]:
    model.add(layer)

# freeze the layers (prevent their weights from being updated)
for layer in model.layers:
    layer.trainable = False

# add output layer
model.add(Dense(2, activation='softmax', name='predictions'))

model.summary()
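For reference, the slice indices used above can be checked against the loaded model; a minimal sketch, assuming 'Fetched_VGG.h5' contains the standard Keras VGG16 architecture:

# Sketch: list each layer's index, name and output shape to see where the slices fall
# (assumes the standard Keras VGG16 layer ordering).
for i, layer in enumerate(vgg16_model.layers):
    print(i, layer.name, layer.output_shape)
# With the standard VGG16, index 6 is block2_pool and index 7 is block3_conv1,
# whose kernels expect 128 input channels (block2 uses 128 filters).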
Answer 1:
Because the first VGG16 layer you re-attach (block3_conv1, i.e. vgg16_model.layers[7]) expects 128 input channels, your final custom Conv2D needs 128 filters to match:

model.add(Conv2D(128, (3, 3), activation='relu', padding='same', name='block66_conv3_m'))

Once the dimensions match you should be able to build the model, but it's not clear what you're trying to achieve. Inserting new layers into the middle of VGG16 means all the downstream layers will need to be retrained, since their inputs now come from untrained layers.
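A minimal sketch of that fix in context, assuming the rest of the question's code stays the same; only the last custom Conv2D changes, and the re-attached VGG16 layers are left trainable to reflect the retraining point above:

# Last custom conv now outputs 128 channels, matching block3_conv1's expectation.
model.add(Conv2D(128, (3, 3), activation='relu', padding='same', name='block66_conv3_m'))
model.add(MaxPooling2D((2, 2), strides=(2, 2), name='block66_pool_m'))

# Re-attach block3 onward; these downstream layers are left trainable because
# their inputs now come from the new, untrained custom block.
for layer in vgg16_model.layers[7:-1]:
    layer.trainable = True
    model.add(layer)

model.add(Dense(2, activation='softmax', name='predictions'))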
Source: https://stackoverflow.com/questions/57915233/how-to-add-customm-layers-inside-vgg16-when-doing-transfer-learning