I'm trying to follow the Deep Autoencoder Keras example. I'm getting a dimension mismatch exception, but for the life of me, I can't figure out why. It works when I use
Thanks for the hint from Marcin. Turns out all the decoder layers need to be unrolled in order to get it to work.
# retrieve the last 3 layers of the autoencoder model
decoder_layer1 = autoencoder.layers[-3]
decoder_layer2 = autoencoder.layers[-2]
decoder_layer3 = autoencoder.layers[-1]
# create the decoder model
decoder = Model(encoded_input, decoder_layer3(decoder_layer2(decoder_layer1(encoded_input))))
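For reference, here is a minimal sketch of how the rebuilt decoder can be used, assuming you have also built a standalone encoder model (encoder = Model(input_img, encoded)) and have x_test prepared as in the Keras blog post; those names come from the tutorial, not from the code above:
# encode the test images, then reconstruct them with the standalone decoder
encoded_imgs = encoder.predict(x_test)
decoded_imgs = decoder.predict(encoded_imgs)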
The problem lies in:
# retrieve the last layer of the autoencoder model
decoder_layer = autoencoder.layers[-1]
In the previous model, the last layer was the only decoder layer, so its input was also the input to the decoder. Now you have 3 decoder layers, so you have to go back three layers to reach the first one. So changing this line to:
# retrieve the first decoder layer of the autoencoder model
decoder_layer = autoencoder.layers[-3]
should do the trick. Note that this only gives you the first decoder layer; as shown in the question's update, you still need to apply the remaining decoder layers on top of it to build the full decoder.
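If you are unsure which indices the decoder layers occupy, a quick sketch like the following (assuming the autoencoder model from the tutorial is already built) prints each layer with its index and output shape, so you can confirm that layers[-3] is the first decoder layer:
# list each layer's index, name and output shape to locate the decoder layers
for i, layer in enumerate(autoencoder.layers):
    print(i, layer.name, layer.output_shape)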
You need to apply each decoder layer to the output of the previous one. You can manually unroll and hard-code these as in the accepted answer, or the following loop will take care of it:
# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))
# apply each decoder layer in turn, starting from the encoded input
num_decoder_layers = 3
decoder_layer = encoded_input
for i in range(-num_decoder_layers, 0):
    decoder_layer = autoencoder.layers[i](decoder_layer)
# create the decoder model
decoder = Model(encoded_input, decoder_layer)
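Putting it together, here is a self-contained sketch assuming the three-decoder-layer architecture from the Keras blog post (784 -> 128 -> 64 -> 32 -> 64 -> 128 -> 784); the layer sizes are illustrative and can be swapped for your own:
from keras.layers import Input, Dense
from keras.models import Model

encoding_dim = 32

# build the deep autoencoder: 3 encoder layers followed by 3 decoder layers
input_img = Input(shape=(784,))
encoded = Dense(128, activation='relu')(input_img)
encoded = Dense(64, activation='relu')(encoded)
encoded = Dense(encoding_dim, activation='relu')(encoded)
decoded = Dense(64, activation='relu')(encoded)
decoded = Dense(128, activation='relu')(decoded)
decoded = Dense(784, activation='sigmoid')(decoded)
autoencoder = Model(input_img, decoded)

# standalone encoder
encoder = Model(input_img, encoded)

# standalone decoder: chain the last 3 layers of the autoencoder
encoded_input = Input(shape=(encoding_dim,))
decoder_output = encoded_input
for layer in autoencoder.layers[-3:]:
    decoder_output = layer(decoder_output)
decoder = Model(encoded_input, decoder_output)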