I am trying to use a deep neural network architecture to classify against a binary label value - -1 and +1. Here is my code to do it in TensorFlow.
From this: "a binary label value - -1 and +1" ... I am assuming your values in `train_y` and `test_y` are actually -1.0 and +1.0.
This is not going to work very well with your chosen loss function, `sigmoid_cross_entropy_with_logits`, which assumes labels of 0.0 and 1.0. The negative `y` values are causing mayhem! However, the loss function choice is good for binary classification. I suggest changing your `y` values to 0 and 1.
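As a minimal sketch of that conversion, assuming `train_y` and `test_y` are NumPy arrays containing only -1.0 and +1.0:

```python
import numpy as np

# Map -1.0 -> 0.0 and +1.0 -> 1.0
# (assumes the arrays contain only those two values)
train_y = (train_y + 1.0) / 2.0
test_y = (test_y + 1.0) / 2.0
```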
In addition, technically the output of your network is not the final prediction. The loss function `sigmoid_cross_entropy_with_logits` is designed for a network whose output layer would use a sigmoid transfer function, but it applies that sigmoid internally - and you have got it right that the loss is given the raw output, before the sigmoid is applied. So your training code appears correct.
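For reference, a sketch of how that loss is typically wired up in TensorFlow 1.x, assuming `output` is the raw logit tensor, `y` holds the 0/1 labels, and an Adam optimizer is used (your optimizer choice may differ):

```python
import tensorflow as tf

# The op applies the sigmoid internally, so `output` must be the raw logits
cost = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=output))
optimizer = tf.train.AdamOptimizer().minimize(cost)
```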
I'm not 100% sure about the `tf.transpose` though - I would see what happens if you remove that, personally. I.e.
output = tf.add(tf.matmul(l3, output_layer['weights']), output_layer['biases'])
Either way, this is the "logit" output, but not your prediction. The value of `output` can get high for very confident predictions, which probably explains the very high values you saw later, due to the missing sigmoid function. So add a prediction tensor (this represents the probability/confidence that the example is in the positive class):
prediction = tf.sigmoid(output)
You can use that to calculate accuracy. Your accuracy calculation should not be based on L2 error, but on the count of correct predictions - closer to the code you had commented out (which appears to be from a multiclass classification). For a true/false comparison in binary classification, you need to threshold the predictions and compare them with the true labels. Something like this:
predicted_class = tf.greater(prediction, 0.5)
correct = tf.equal(predicted_class, tf.equal(y, 1.0))
accuracy = tf.reduce_mean(tf.cast(correct, 'float'))
The accuracy value should be between 0.0 and 1.0. If you want it as a percentage, just multiply by 100, of course.
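To illustrate, a hypothetical evaluation snippet, assuming your graph feeds placeholders named `x` and `y` and that `test_x`/`test_y` hold the (converted) test data:

```python
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... run your training loop here ...
    acc = sess.run(accuracy, feed_dict={x: test_x, y: test_y})
    print('Test accuracy: {:.1f}%'.format(acc * 100))
```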