LSTM/GRU autoencoder convergence

臣服心动 2021-01-03 15:53

Goal

I have run into a strange situation while trying to create an efficient autoencoder for my time-series dataset:
X_train (200, 23, 178) X_val (100, 23

1 Answer
  • 2021-01-03 16:29

    The two models you have above do not seem comparable in any meaningful way. The first model is attempting to compress your vectors of 178 values. It is quite possible that these vectors contain some redundant information, so it is reasonable to assume that you will be able to compress them.
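    For concreteness, a minimal sketch of that first kind of model, a plain dense autoencoder over the 178-dimensional feature vectors (Keras is assumed here, since the discussion mentions RepeatVector/TimeDistributed; the bottleneck size is an arbitrary choice):

        from tensorflow.keras import layers, models

        n_features = 178          # length of each feature vector
        bottleneck = 32           # arbitrary compressed size, tune as needed

        inp = layers.Input(shape=(n_features,))
        encoded = layers.Dense(bottleneck, activation="relu")(inp)         # compress
        decoded = layers.Dense(n_features, activation="linear")(encoded)   # reconstruct

        vector_ae = models.Model(inp, decoded)
        vector_ae.compile(optimizer="adam", loss="mse")
        # train on individual vectors, e.g.:
        # vector_ae.fit(X_train.reshape(-1, n_features), X_train.reshape(-1, n_features), ...)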

    The second model is attempting to compress a sequence of 23 x 178 vectors via a single GRU layer. This is a task with a significantly higher number of parameters. The repeat vector simply takes the output of the 1st GRU layer (the encoder) and makes it the input of the 2nd GRU layer (the decoder). But then you take only a single value from the decoder's output. Instead of the TimeDistributed layer, I'd recommend that you use return_sequences=True in the 2nd GRU (the decoder). Otherwise you are saying that you expect the 23x178 sequence to consist of elements that all share the same value; that is bound to lead to a very high error / no solution.
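    A minimal sketch of what the sequence autoencoder could look like with return_sequences=True in the decoder (again assuming Keras; the latent size is arbitrary, and the per-step Dense simply maps the decoder output back to 178 features):

        from tensorflow.keras import layers, models

        timesteps, n_features = 23, 178
        latent = 64               # arbitrary size of the sequence embedding

        inp = layers.Input(shape=(timesteps, n_features))
        # encoder: a single vector summarising the whole sequence
        z = layers.GRU(latent)(inp)
        # decoder: repeat the summary for every timestep, then unroll it back
        x = layers.RepeatVector(timesteps)(z)
        x = layers.GRU(latent, return_sequences=True)(x)   # key point: keep one output per step
        out = layers.Dense(n_features)(x)                  # applied per timestep

        seq_ae = models.Model(inp, out)
        seq_ae.compile(optimizer="adam", loss="mse")
        # seq_ae.fit(X_train, X_train, validation_data=(X_val, X_val), ...)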

    I'd recommend you take a step back. Is your goal to find similarity between the sequences? Or to be able to make predictions? An autoencoder approach is preferable for a similarity task. In order to make predictions, I'd recommend that you move towards an approach where you apply a Dense(1) layer to the output of each sequence step.
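    A minimal sketch of that predictive variant, assuming you want one predicted value per timestep (the hidden size and target shape are illustrative):

        from tensorflow.keras import layers, models

        timesteps, n_features = 23, 178

        inp = layers.Input(shape=(timesteps, n_features))
        x = layers.GRU(64, return_sequences=True)(inp)   # keep every step's output
        out = layers.Dense(1)(x)                         # one predicted value per step

        predictor = models.Model(inp, out)
        predictor.compile(optimizer="adam", loss="mse")
        # predictor.fit(X_train, y_train, ...)   # y_train shape: (200, 23, 1)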

    Is your dataset open / available? I'd be curious to take it for a spin if that's possible.
