I am training a neural network for a regression problem in Keras. Why, when the output is only one dimension, does the accuracy in each epoch always show acc: 0.0000e+00?
Just a quick add-on to the excellent answers already posted.
The following snippet is a custom metric that will display the average percentage difference between your NN's predictions and the actual values.
    from keras import backend as K

    def percentage_difference(y_true, y_pred):
        # Mean absolute percentage difference between predictions and targets
        return K.mean(K.abs(y_pred / y_true - 1) * 100)
To use it, simply add it to the "metrics" option when compiling your model, i.e.
    model.compile(loss='mean_squared_error',
                  optimizer='Adam',
                  metrics=['accuracy', percentage_difference])
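One caveat worth noting: the metric above divides by y_true, so any sample whose target is exactly zero will produce inf or NaN. A guarded variant (my own sketch, not part of the original answer) clamps the denominator with K.epsilon():

    def percentage_difference_safe(y_true, y_pred):
        # Clamp the denominator so samples with y_true == 0 don't yield inf/NaN.
        denom = K.maximum(K.abs(y_true), K.epsilon())
        return K.mean(K.abs(y_pred - y_true) / denom * 100)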
There can be a few issues with your model; check for them and fix what you find.
I ran into a similar problem. After trying all the suggestions and finding that none of them worked, I figured something must be wrong somewhere else.
After looking at my data distribution, I realized that I was not shuffling my data. As a result, my training data was mostly one class and my testing data was 100% another class. After shuffling the data, the accuracy was no longer 0.0000e+00; it was something more meaningful. A sketch of the shuffling step follows below.
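As a minimal sketch (X and y are hypothetical NumPy arrays standing in for your own features and labels), shuffling before the train/test split might look like this:

    import numpy as np

    # Hypothetical data; substitute your own features and labels.
    X = np.random.rand(1000, 15)
    y = np.random.randint(0, 2, size=1000)

    # Shuffle features and labels with the same permutation
    # so each sample stays paired with its label.
    perm = np.random.permutation(len(X))
    X, y = X[perm], y[perm]

    # Split after shuffling so both sets draw from the same distribution.
    split = int(0.8 * len(X))
    X_train, X_test = X[:split], X[split:]
    y_train, y_test = y[:split], y[split:]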
The problem is that your final model output has a linear activation, making the model a regression, not a classification problem. "Accuracy" is defined when the model classifies data correctly according to class, but it is effectively undefined for a regression problem because the output is continuous.
Either get rid of accuracy as a metric and switch over fully to regression, or make your problem into a classification problem, using loss='categorical_crossentropy' and activation='softmax'.
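A minimal sketch of both options, assuming the model from the question (n_classes is a placeholder for however many classes you would bin your targets into):

    # Option 1: stay with regression and swap 'accuracy' for a
    # regression-appropriate metric such as mean absolute error.
    model.compile(loss='mean_squared_error',
                  optimizer='Adam',
                  metrics=['mae'])

    # Option 2: recast the task as classification (requires one-hot
    # encoded targets and a softmax output layer of n_classes units).
    # model.add(Dense(n_classes, activation='softmax'))
    # model.compile(loss='categorical_crossentropy',
    #               optimizer='Adam',
    #               metrics=['accuracy'])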
This is a similar problem to yours: Link
For more information see: StackExchange
I am not sure what your problem is, but your model looks a little weird to me.
This is your model:
    from keras.models import Sequential
    from keras.layers import Dense, Activation, LeakyReLU

    lrelu = LeakyReLU(alpha=0.1)

    model = Sequential()
    model.add(Dense(126, input_dim=15))  # Dense(output_dim (hidden width), input_dim=input_dim)
    model.add(lrelu)  # Activation
    model.add(Dense(252))
    model.add(lrelu)  # The same LeakyReLU instance is reused here
    model.add(Dense(1))
    model.add(Activation('linear'))
When the model is visualized (plot image omitted here), you can see that reusing the same LeakyReLU instance makes Keras treat it as a single shared layer: the graph branches, so there are two layers which could be the output layer of your model, and you didn't decide which one is your actual output layer. I guess that's the reason you cannot make the correct prediction.
If you want to implement your model like this, you should add each activation layer independently, rather than reusing the same instance. For example:
    model = Sequential()
    model.add(Dense(126, input_dim=15))  # Dense(output_dim (hidden width), input_dim=input_dim)
    model.add(LeakyReLU(alpha=0.1))  # A fresh activation layer instance
    model.add(Dense(252))
    model.add(LeakyReLU(alpha=0.1))  # Another fresh instance, not shared
    model.add(Dense(1))
    model.add(Activation('linear'))
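Each call to LeakyReLU(alpha=0.1) constructs a new layer object with its own node in the graph, so the model stays strictly sequential and ends in a single, unambiguous output layer.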