How to apply an LSTM autoencoder to variable-length time-series data?

面向向阳花 2021-01-14 01:32

I read about the LSTM autoencoder in this tutorial: https://blog.keras.io/building-autoencoders-in-keras.html, and have pasted the corresponding Keras implementation below:

    from keras.layers import Input, LSTM, RepeatVector
    from keras.models import Model

    inputs = Input(shape=(timesteps, input_dim))
    encoded = LSTM(latent_dim)(inputs)

    decoded = RepeatVector(timesteps)(encoded)
    decoded = LSTM(input_dim, return_sequences=True)(decoded)

    sequence_autoencoder = Model(inputs, decoded)
    encoder = Model(inputs, encoded)
1 Answer
  • 2021-01-14 02:09

    You can use shape=(None, input_dim), so the number of timesteps is left unspecified.

    But RepeatVector will need some hacking, taking the number of steps directly from the input tensor. (The code below works with the TensorFlow backend; I am not sure about Theano.)

    import keras.backend as K
    from keras.layers import Lambda, LSTM

    def repeat(x):
        # x[0] is the input sequence, x[1] is the latent vector produced by the encoder
        stepMatrix = K.ones_like(x[0][:, :, :1])      # ones, shaped (batch, steps, 1)
        latentMatrix = K.expand_dims(x[1], axis=1)    # latent vars, shaped (batch, 1, latent_dim)
        return K.batch_dot(stepMatrix, latentMatrix)  # latent repeated to (batch, steps, latent_dim)

    decoded = Lambda(repeat)([inputs, encoded])
    decoded = LSTM(input_dim, return_sequences=True)(decoded)
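
    To make this self-contained, here is a minimal end-to-end sketch of how the pieces fit together: the Input declares the timestep axis as None, the Lambda replaces RepeatVector, and train_on_batch is used so every batch can have its own length. The values of input_dim and latent_dim and the random batches are just placeholders for illustration.

    import numpy as np
    import keras.backend as K
    from keras.layers import Input, LSTM, Lambda
    from keras.models import Model

    input_dim = 3     # assumed number of features per timestep
    latent_dim = 16   # assumed size of the encoding

    inputs = Input(shape=(None, input_dim))   # None = variable number of timesteps
    encoded = LSTM(latent_dim)(inputs)

    def repeat(x):
        # tile the latent vector across as many steps as the current batch has
        stepMatrix = K.ones_like(x[0][:, :, :1])      # (batch, steps, 1)
        latentMatrix = K.expand_dims(x[1], axis=1)    # (batch, 1, latent_dim)
        return K.batch_dot(stepMatrix, latentMatrix)  # (batch, steps, latent_dim)

    decoded = Lambda(repeat)([inputs, encoded])
    decoded = LSTM(input_dim, return_sequences=True)(decoded)

    autoencoder = Model(inputs, decoded)
    autoencoder.compile(optimizer='adam', loss='mse')

    # Every batch may have a different length; lengths only need to agree within a batch.
    batch_a = np.random.rand(8, 20, input_dim)
    batch_b = np.random.rand(8, 35, input_dim)
    autoencoder.train_on_batch(batch_a, batch_a)
    autoencoder.train_on_batch(batch_b, batch_b)

    The ones-matrix trick works because batch_dot broadcasts the latent vector over however many steps the incoming batch actually has, which RepeatVector(timesteps) cannot do when timesteps is unknown in advance.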
    