Question
I am trying to train an LSTM model to classify long time-series sequences with 700 time steps each.
[Figure: sample time series belonging to class 1]
[Figure: sample time series belonging to class 2]
Model
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.optimizers import SGD

# Single LSTM layer feeding a sigmoid output for binary classification.
model = Sequential()
model.add(LSTM(100, input_shape=(700, 1), return_sequences=False))
model.add(Dense(1, activation='sigmoid'))

sgd = SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy', optimizer=sgd, metrics=['accuracy'])

history = model.fit(xtrain, y,
                    batch_size=10,
                    epochs=20,
                    validation_split=0.2, shuffle=True)
The training loss remains constant and the validation loss increases from the first epoch onward. Any ideas on what the issue is here?
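For reference, one thing I have not ruled out is input scaling; unscaled raw series can stall an LSTM. Below is a minimal per-sample standardization sketch (an assumption, not part of the original pipeline; it assumes xtrain has shape (n_samples, 700, 1)):

import numpy as np

# Hypothetical preprocessing check: standardize each series to zero mean
# and unit variance before feeding it to the LSTM.
mean = xtrain.mean(axis=1, keepdims=True)
std = xtrain.std(axis=1, keepdims=True) + 1e-8  # small epsilon avoids division by zero
xtrain = (xtrain - mean) / std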
I tried adding convolutional layers alongside the LSTM layer. This improved the model's accuracy considerably, but the test loss and accuracy fluctuate drastically.
New model architecture
from keras.models import Model
from keras.layers import (Input, LSTM, Dropout, Permute, Conv1D,
                          BatchNormalization, Activation,
                          GlobalAveragePooling1D, Dense, concatenate)

def generate_model():
    ip = Input(shape=(700, 1))

    # LSTM branch with heavy dropout.
    x = LSTM(8)(ip)
    x = Dropout(0.8)(x)

    # Convolutional branch. Note: Permute((2, 1)) turns the (700, 1) input
    # into shape (1, 700), so the Conv1D layers see a single time step
    # with 700 channels.
    y = Permute((2, 1))(ip)
    y = Conv1D(128, 8, padding='same', kernel_initializer='he_uniform')(y)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv1D(256, 5, padding='same', kernel_initializer='he_uniform')(y)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = Conv1D(128, 3, padding='same', kernel_initializer='he_uniform')(y)
    y = BatchNormalization()(y)
    y = Activation('relu')(y)
    y = GlobalAveragePooling1D()(y)

    # Merge both branches and classify.
    x = concatenate([x, y])
    out = Dense(1, activation='sigmoid')(x)

    model = Model(ip, out)
    model.summary()
    return model
model = generate_model()
sgd = SGD(lr=0.0001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='binary_crossentropy',
              optimizer=sgd,
              metrics=['accuracy'])
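The callbacks variable used in the fit call below is not defined in the snippet; a minimal sketch of what it could contain, using the standard Keras ModelCheckpoint and ReduceLROnPlateau callbacks (the filename, factor, and patience values here are placeholder assumptions):

from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau

# Hypothetical callbacks list: save the best weights by validation loss and
# shrink the learning rate when validation loss plateaus.
callbacks = [
    ModelCheckpoint('best_model.h5', monitor='val_loss', save_best_only=True),
    ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=3, min_lr=1e-6),
]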
history1 = model.fit(xtrain, y,
                     batch_size=10,
                     epochs=20,
                     validation_split=0.2, shuffle=True, callbacks=callbacks)
Any ideas on why this is happening?
Source: https://stackoverflow.com/questions/51064649/lstm-to-classify-long-time-series-sequences