I'm trying to follow the Deep Autoencoder Keras example. I'm getting a dimension mismatch exception, but for the life of me, I can't figure out why. It works when I use
You need to apply each decoder layer's transformation to the output of the previous one. You can unroll these manually and hard-code them as in the accepted answer, or the following loop takes care of it:
from keras.layers import Input
from keras.models import Model

# create a placeholder for an encoded (32-dimensional) input
encoded_input = Input(shape=(encoding_dim,))

# retrieve the last num_decoder_layers layers from the trained autoencoder
# and apply each one to the output of the previous
num_decoder_layers = 3
decoder_layer = encoded_input
for i in range(-num_decoder_layers, 0):
    decoder_layer = autoencoder.layers[i](decoder_layer)

# create the decoder model
decoder = Model(encoded_input, decoder_layer)
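
For reference, here is a rough sketch of where that loop fits relative to the rest of the model. The layer sizes (784 → 128 → 64 → 32 and back), the separate encoder model, and the x_test usage at the end are assumptions based on the Keras blog's deep autoencoder example, not something taken from your code:

from keras.layers import Input, Dense
from keras.models import Model

encoding_dim = 32

# assumed deep autoencoder architecture from the Keras blog example
input_img = Input(shape=(784,))
encoded = Dense(128, activation='relu')(input_img)
encoded = Dense(64, activation='relu')(encoded)
encoded = Dense(encoding_dim, activation='relu')(encoded)
decoded = Dense(64, activation='relu')(encoded)
decoded = Dense(128, activation='relu')(decoded)
decoded = Dense(784, activation='sigmoid')(decoded)
autoencoder = Model(input_img, decoded)

# separate encoder model: maps an input image to its 32-dim code
encoder = Model(input_img, encoded)

# ... compile and fit autoencoder here, then build `decoder` with the loop above ...

# example usage (assumes MNIST-style x_test flattened to 784 features):
# encoded_imgs = encoder.predict(x_test)
# decoded_imgs = decoder.predict(encoded_imgs)

With this architecture the decoding half consists of exactly three Dense layers, which is why num_decoder_layers = 3 and the negative indices in the loop pick up precisely those layers. If your network has a different number of decoding layers, adjust num_decoder_layers to match.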