LSTM/GRU autoencoder convergence
Goal

I have a strange situation trying to create an efficient autoencoder over my time series dataset:

X_train (200, 23, 178)
X_val (100, 23, 178)
X_test (100, 23, 178)

Current situation

With a simple autoencoder I get better results than with my simple LSTM AE over this time series dataset. I have some concerns about my use of the RepeatVector wrapper layer, which, as far as I understand, is supposed to repeat the last state of the LSTM a number of times equal to the sequence length.
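To check that my understanding of RepeatVector is right, here is a minimal NumPy sketch of what I believe the layer does: it takes the encoder's final state vector and tiles it along a new time axis. The batch and latent sizes below are made up for illustration; only the 23 time steps mirror my dataset.

```python
import numpy as np

# Hypothetical shapes: batch of 4 samples, latent dimension 16,
# 23 time steps (matching my sequences). Values are random.
batch, latent_dim, timesteps = 4, 16, 23
last_state = np.random.rand(batch, latent_dim)

# What I understand Keras' RepeatVector(timesteps) to do, in NumPy:
# insert a time axis, then copy the vector `timesteps` times along it.
repeated = np.repeat(last_state[:, np.newaxis, :], timesteps, axis=1)

print(repeated.shape)  # (4, 23, 16)
# Every time step holds an identical copy of the encoder state:
print(np.allclose(repeated[:, 0, :], repeated[:, -1, :]))  # True
```

So the decoder LSTM receives the same latent vector at every step, and must reconstruct the full (23, 178) sequence from it.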