I am brand new to deep learning, so I'm reading through Deep Learning with Keras by Antonio Gulli and learning a lot. I want to start using some of the concepts.
You don't need to restructure anything at all to get the output of a `Conv1D` layer into an `LSTM` layer. The problem is simply the presence of the `Flatten` layer, which destroys the shape.
These are the shapes used by `Conv1D` and `LSTM`:

- `Conv1D`: `(batch, length, channels)`
- `LSTM`: `(batch, timeSteps, features)`

`length` is the same as `timeSteps`, and `channels` is the same as `features`.
Using the `Bidirectional` wrapper won't change a thing either; it will only double your output features.
If you're going to classify the entire sequence as a whole, your last LSTM must use `return_sequences=False`. (Or you may use `Flatten` + `Dense` instead, afterwards.)
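A sketch of the whole-sequence case, assuming a hypothetical 5-class problem: the final LSTM returns only its last state, and a `Dense` head produces one label per sequence:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(100, 1)),
    layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    layers.LSTM(16, return_sequences=False),   # last time step only -> (batch, 16)
    layers.Dense(5, activation="softmax"),     # one prediction for the whole sequence
])
print(model.output_shape)  # (None, 5)
```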
If you're going to classify each step of the sequence, all your LSTMs should have `return_sequences=True`, and you should not flatten the data after them.
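And the per-step case, under the same assumed dimensions: with `return_sequences=True` the LSTM emits an output at every time step, and the `Dense` layer is applied independently to each step:

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(100, 1)),
    layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    layers.LSTM(16, return_sequences=True),    # -> (batch, 100, 16)
    layers.Dense(5, activation="softmax"),     # applied per step -> (batch, 100, 5)
])
print(model.output_shape)  # (None, 100, 5): one prediction per time step
```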