How to use Bidirectional RNN and Conv1D in keras when shapes are not matching?

Backend · Open · 1 answer · 1087 views
孤独总比滥情好 · 2021-01-27 00:01

I am brand new to deep learning, so I'm reading through Deep Learning with Keras by Antonio Gulli and learning a lot. I want to start using some of the concepts. I

1 Answer
  • 2021-01-27 00:15

    You don't need to restructure anything at all to get the output of a Conv1D layer into an LSTM layer.

    So, the problem is simply the presence of the Flatten layer, which destroys the shape.

    These are the shapes used by Conv1D and LSTM:

    • Conv1D: (batch, length, channels)
    • LSTM: (batch, timeSteps, features)

    Length is the same as timeSteps, and channels is the same as features.
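    As a minimal sketch of this (the 100-step, 1-channel input and the layer sizes are illustrative assumptions, not from the original question), a Conv1D output can feed an LSTM with no reshaping in between:

```python
# Sketch: Conv1D output goes straight into an LSTM.
# Input shape (100, 1) and layer sizes are illustrative assumptions.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Conv1D, LSTM

model = Sequential([
    Input(shape=(100, 1)),                       # (batch, length, channels)
    Conv1D(32, kernel_size=3, padding="same"),   # -> (batch, 100, 32)
    LSTM(16),                                    # reads (batch, timeSteps, features)
])
print(model.output_shape)  # (None, 16)
```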

    Using the Bidirectional wrapper doesn't change these shape requirements either; it only doubles the number of output features.
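    A small sketch of that doubling (the input shape and unit count are assumed for illustration):

```python
# Bidirectional doubles the feature dimension, nothing else about the
# (batch, timeSteps, features) layout changes. Sizes are illustrative.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, LSTM, Bidirectional

model = Sequential([
    Input(shape=(100, 32)),
    Bidirectional(LSTM(16, return_sequences=True)),  # 16 units per direction
])
print(model.output_shape)  # (None, 100, 32) -- 16 features doubled to 32
```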


    Classifying.

    If you're going to classify the entire sequence as a whole, your last LSTM must use return_sequences=False (or you can use a Flatten + Dense combination after it instead).
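    A sketch of the whole-sequence case, assuming 100 time steps, 1 input channel, and 5 classes (all of these numbers are illustrative):

```python
# Whole-sequence classification: the final LSTM returns a single vector.
# Input shape, layer sizes, and 5 classes are illustrative assumptions.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Conv1D, Bidirectional, LSTM, Dense

model = Sequential([
    Input(shape=(100, 1)),
    Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    Bidirectional(LSTM(16)),          # return_sequences=False (the default)
    Dense(5, activation="softmax"),   # one label for the whole sequence
])
print(model.output_shape)  # (None, 5)
```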

    If you're going to classify each step of the sequence, all your LSTMs should have return_sequences=True. You should not flatten the data after them.
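    And the per-step case, under the same illustrative assumptions; note that Dense is applied independently at each time step when it receives a 3D input:

```python
# Per-step classification: return_sequences=True keeps all time steps,
# and Dense then acts on each step. Sizes are illustrative assumptions.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Input, Conv1D, Bidirectional, LSTM, Dense

model = Sequential([
    Input(shape=(100, 1)),
    Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    Bidirectional(LSTM(16, return_sequences=True)),  # keep all 100 steps
    Dense(5, activation="softmax"),   # one label per time step
])
print(model.output_shape)  # (None, 100, 5)
```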
