What to do next when a deep learning neural network stops improving in terms of validation accuracy?

Submitted by 余生长醉 on 2021-01-29 18:31:31

Question


I ran into an issue where my model converges very fast, after only about 20 or 30 epochs. My data set contains 7000 samples, and my neural network has 3 hidden layers, each with 18 neurons, batch normalization, and dropout of 0.2.

My task is multi-label classification, where my labels are [0 0 1], [0 1 0], [1 0 0] and [0 0 0].

from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization

num_neuron = 18
model = Sequential()
model.add(Dense(num_neuron, input_shape=(input_size,), activation='elu'))
model.add(Dropout(0.2))
model.add(BatchNormalization())

model.add(Dense(num_neuron, activation='elu'))
model.add(Dropout(0.2))
model.add(BatchNormalization())

# Integer division: Dense expects an integer unit count, so num_neuron // 3
model.add(Dense(num_neuron // 3, activation='elu'))
model.add(Dropout(0.2))
model.add(BatchNormalization())

# sigmoid + binary_crossentropy: each of the 3 labels is predicted independently,
# which also allows the all-zero label [0 0 0]
model.add(Dense(3, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
              optimizer='nadam',
              metrics=['accuracy'])
history = model.fit(X_train, Y_train, batch_size=512, epochs=1000,
                    validation_data=(X_test, Y_test), verbose=2)

I was wondering if there is anything I can do to improve further, because even after I let it run for 1000 epochs, nothing really changes.


Answer 1:


This is the expected behaviour when training a neural network: after a while, the training process is said to have converged, which means that further training doesn't lead to any further progress. In fact, training for too long may even hurt the model's generalization capacity, since it may lead to overfitting to the training set; early stopping was created to tackle exactly this issue.
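Early stopping is available in Keras as a built-in callback. Here is a minimal sketch (the monitored metric and patience value are illustrative choices, not tuned for your task):

```python
# Stop training when validation loss has not improved for `patience` epochs.
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(
    monitor='val_loss',          # watch the validation loss
    patience=30,                 # allow 30 stagnant epochs before stopping
    restore_best_weights=True,   # roll back to the best epoch's weights
)
# Then pass it to fit:
# model.fit(X_train, Y_train, epochs=1000, callbacks=[early_stop], ...)
```

With this in place you can keep `epochs=1000` as an upper bound and let the callback decide when to stop.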

In your case, since training has already converged and neither the training nor the validation loss are decreasing anymore, it is safe to say that you have achieved the highest possible accuracy for this specific task, with this specific training procedure and this specific model architecture (3 hidden layers with 18 neurons).

It is still possible to make improvements, however, by experimenting with these properties. In your case it is hard to say without knowing the task you're training for, but since your training loss is almost the same as the validation loss, your model is probably underfitting. It will likely get better if you use a more capable model (more neurons per layer, or more layers) or reduce the regularization (e.g., lower the dropout rate).
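As a concrete illustration of that suggestion, here is a hedged sketch of a higher-capacity variant of the model above: wider layers and lighter dropout. The specific widths and dropout rate are illustrative guesses, not verified improvements, so treat them as starting points for experimentation:

```python
# Higher-capacity variant: more neurons per layer, less dropout.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout, BatchNormalization

def build_wider_model(input_size, num_neuron=64, dropout_rate=0.1):
    model = Sequential([
        Dense(num_neuron, input_shape=(input_size,), activation='elu'),
        Dropout(dropout_rate),
        BatchNormalization(),
        Dense(num_neuron, activation='elu'),
        Dropout(dropout_rate),
        BatchNormalization(),
        Dense(3, activation='sigmoid'),  # same multi-label output head
    ])
    model.compile(loss='binary_crossentropy',
                  optimizer='nadam',
                  metrics=['accuracy'])
    return model

model = build_wider_model(input_size=20)  # input_size=20 is a placeholder
```

If the wider model's training loss drops clearly below the validation loss, you've moved from underfitting toward overfitting, and can then dial the regularization back up.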



Source: https://stackoverflow.com/questions/62776084/what-to-do-next-when-deep-learning-neural-network-stop-improving-in-term-of-vali
