I am combining two VGG nets in Keras into a single classification model. When I run the program, it shows this error:
RuntimeError: The name "predictions" is used 2 times in the model. All layer names should be unique.
Note: you can change a layer's name in keras; don't use 'tensorflow.python.keras' for this.
Here is my sample code:
from keras.layers import Dense, concatenate
from keras.models import Model
from keras.applications import vgg16

num_classes = 10

# First VGG16 branch (no top, global average pooling)
model = vgg16.VGG16(include_top=False, weights='imagenet', input_tensor=None,
                    input_shape=(64, 64, 3), pooling='avg')
inp = model.input
out = model.output

# Second VGG16 branch; rename its layers to avoid name clashes
model2 = vgg16.VGG16(include_top=False, weights='imagenet', input_tensor=None,
                     input_shape=(64, 64, 3), pooling='avg')
for layer in model2.layers:
    layer.name = layer.name + str("_2")
inp2 = model2.input
out2 = model2.output

# Fuse the two branches and add the classifier head
merged = concatenate([out, out2])
merged = Dense(1024, activation='relu')(merged)
merged = Dense(num_classes, activation='softmax')(merged)

model_fusion = Model([inp, inp2], merged)
model_fusion.summary()
First, based on the code you posted, you have no layer with the name 'predictions', so this error has nothing to do with your Dense layer 'prediction', i.e.:
prediction = Dense(1, activation='sigmoid',
                   name='main_output')(combineFeatureLayer)
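To see exactly which names collide, a minimal sketch (assuming the model and model2 built in the question code above) is to collect the layer names of both branches and flag the duplicates:

# Collect the layer names of both branches and print any that appear more than once.
names = [l.name for l in model.layers] + [l.name for l in model2.layers]
duplicates = sorted({n for n in names if names.count(n) > 1})
print(duplicates)   # whatever is printed here is what the RuntimeError complains about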
The VGG16 model has a Dense layer with the name 'predictions'. In particular, this line:
x = Dense(classes, activation='softmax', name='predictions')(x)
And since you're using two of these models you have layers with duplicate names.
What you could do is rename the layer in the second model to something other than 'predictions', maybe 'predictions_1', like so:
import keras

model2 = keras.applications.vgg16.VGG16(include_top=True, weights='imagenet',
                                        input_tensor=None, input_shape=None,
                                        pooling=None, classes=1000)

# Now change the name of the layer in place.
model2.get_layer(name='predictions').name = 'predictions_1'
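As a quick sanity check (a minimal sketch reusing the model2 just created), you can confirm the rename actually took effect before wiring the two models together:

# Confirm the in-place rename worked on the model2 defined above.
renamed = model2.get_layer(name='predictions_1')
print(renamed.name)                                       # -> 'predictions_1'
print('predictions' in [l.name for l in model2.layers])   # -> False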
A fuller example, which renames every layer of two MobileNet backbones before combining them:
# `config` (IMAGE_H, IMAGE_W, N_CHANNELS) and the custom loss `mae_loss_masked`
# are assumed to be defined elsewhere in the project.
from keras.applications import MobileNet
from keras.layers import Input, MaxPooling2D, add
from keras.models import Model
from keras.optimizers import Adadelta

# Network for affine transform estimation
affine_transform_estimator = MobileNet(
    input_tensor=None,
    input_shape=(config.IMAGE_H // 2, config.IMAGE_W // 2, config.N_CHANNELS),
    alpha=1.0,
    depth_multiplier=1,
    include_top=False,
    weights='imagenet'
)
affine_transform_estimator.name = 'affine_transform_estimator'

# Give every layer of the first backbone a unique suffix
for layer in affine_transform_estimator.layers:
    layer.name = layer.name + str("_1")

# Network for landmarks regression
landmarks_regressor = MobileNet(
    input_tensor=None,
    input_shape=(config.IMAGE_H // 2, config.IMAGE_W // 2, config.N_CHANNELS),
    alpha=1.0,
    depth_multiplier=1,
    include_top=False,
    weights='imagenet'
)
landmarks_regressor.name = 'landmarks_regressor'

# Same for the second backbone, with a different suffix
for layer in landmarks_regressor.layers:
    layer.name = layer.name + str("_2")

# Shared input: downsample once, then feed both backbones
input_image = Input(shape=(config.IMAGE_H, config.IMAGE_W, config.N_CHANNELS))
downsampled_image = MaxPooling2D(pool_size=(2, 2))(input_image)
x1 = affine_transform_estimator(downsampled_image)
x2 = landmarks_regressor(downsampled_image)
x3 = add([x1, x2])

model = Model(inputs=input_image, outputs=x3)
optimizer = Adadelta()
model.compile(optimizer=optimizer, loss=mae_loss_masked)
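Both answers rely on the same trick: make every layer name unique before two copies of the same architecture end up in one graph. A small helper like the one below (hypothetical, not part of Keras; it assumes the standalone keras package, where layer.name is a writable attribute, which is exactly why the question warns against 'tensorflow.python.keras') wraps that pattern:

from keras.applications import vgg16

def add_name_suffix(model, suffix):
    # Hypothetical helper: append `suffix` to every layer name of `model`.
    # Only works with the standalone keras package, where layer.name is a
    # plain attribute rather than a read-only property.
    for layer in model.layers:
        layer.name = layer.name + suffix
    return model

# Usage sketch, mirroring the question's two VGG16 branches:
branch_a = vgg16.VGG16(include_top=False, weights='imagenet',
                       input_shape=(64, 64, 3), pooling='avg')
branch_b = add_name_suffix(
    vgg16.VGG16(include_top=False, weights='imagenet',
                input_shape=(64, 64, 3), pooling='avg'),
    '_2')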