I have a dataset that I used to build a neural-network model in Keras. I took 2000 rows from that dataset to use as validation data; those 2000 rows should be passed to `.predict`.
The training data you posted gives high validation accuracy, so I'm a bit confused as to where you get that 65% from, but in general when your model performs much better on training data than on unseen data, that means you're overfitting. This is a big and recurring problem in machine learning, and there is no method guaranteed to prevent it, but there are a couple of things you can try, such as adding dropout layers, applying regularization, stopping training early, or simply gathering more data.
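As a minimal sketch of two of those countermeasures in Keras (the layer sizes, dropout rate, and input shape below are illustrative assumptions, not taken from your model):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(20,)),
    layers.Dropout(0.3),  # randomly zero 30% of units during training
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Stop training once validation loss stops improving,
# and roll back to the best weights seen so far.
early_stop = keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=5, restore_best_weights=True)

# Placeholder random data just to make the sketch runnable.
x = np.random.rand(200, 20).astype("float32")
y = np.random.randint(0, 2, size=(200, 1))
model.fit(x, y, epochs=3, validation_split=0.2,
          callbacks=[early_stop], verbose=0)
```

Dropout fights overfitting during training, while early stopping prevents you from training past the point where the validation loss starts rising.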
I will list the problems/recommendations that I see in your model.
You used a `sigmoid` activation function in the last layer, which suggests binary classification, but for your loss function you used `mse`, which seems strange. You can try `binary_crossentropy` instead of `mse` as the loss function for your model, and the `adam` optimizer instead of `sgd`.
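Putting those two suggestions together looks like this (the hidden-layer size and input shape are assumptions for the sketch):

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Dense(32, activation="relu", input_shape=(10,)),
    layers.Dense(1, activation="sigmoid"),  # binary classification output
])

# binary_crossentropy matches a sigmoid output for binary labels;
# adam usually converges faster than plain sgd with default settings.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

`mse` will still optimize something here, but `binary_crossentropy` is the loss that actually corresponds to a sigmoid output over 0/1 labels.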
You have 57849 samples; you can use 47000 of them for training+validation and keep the rest as your test set. If you pass `validation_split` to `model.fit`, it will automatically carve a validation set out of your training set.
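A sketch of that split, using random placeholder data with an assumed 10 features (only the 57849/47000 numbers come from the discussion above):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder data standing in for the real dataset.
x = np.random.rand(57849, 10).astype("float32")
y = np.random.randint(0, 2, size=(57849, 1))

# 47000 rows for training+validation, the remainder held out as a test set.
x_trainval, y_trainval = x[:47000], y[:47000]
x_test, y_test = x[47000:], y[47000:]

model = keras.Sequential([
    layers.Dense(16, activation="relu", input_shape=(10,)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# validation_split=0.2 carves the last 20% of x_trainval off as
# validation data, so you never touch x_test until final evaluation.
model.fit(x_trainval, y_trainval, epochs=1,
          validation_split=0.2, verbose=0)
```

Note that `validation_split` takes the split from the *end* of the arrays without shuffling first, so shuffle your data beforehand if it is ordered.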