Binary classification, xentropy mismatch, invalid argument (Received a label value of 1 which is outside the valid range of [0, 1))

Submitted by 邮差的信 on 2019-12-11 12:11:37

Question


I'm working on a deep neural network for text classification, but I have a problem with my xentropy. I'm following a course on multiclass classification and I'm trying to adapt it to my binary classification problem.

The course used softmax for the multiclass case as follows:

   xentropy = tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y,
                                                             logits=logits)

but doing so, I got this error:

 InvalidArgumentError (see above for traceback): Received a label value of 1 which is outside the valid range of [0, 1).  Label values: 0 0 1 0 1 0 1 0 0 1 1 1 1 1 1 1 0 1 1 0 1 0 1 1 0 0 1 1 0 0 0 1 0 0 1 1 0 0 1 0 1 1 1 1 0 0 0 0 1 0 0 0 1
     [[node loss/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits (defined at <ipython-input-84-5748b847be41>:2) ]]
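The range in the error message comes from the shape of the logits: `sparse_softmax_cross_entropy_with_logits` treats the size of the last `logits` dimension as the number of classes, so with `n_outputs = 1` the only valid label value is 0. A minimal NumPy sketch of that behaviour (a hypothetical helper, not the TF internals):

```python
import numpy as np

def sparse_softmax_xentropy(logits, labels):
    """Sketch of the op's label check: labels must lie in [0, n_classes),
    where n_classes is the size of the last logits dimension."""
    n_classes = logits.shape[1]
    if np.any((labels < 0) | (labels >= n_classes)):
        raise ValueError(
            f"Received a label value outside the valid range of [0, {n_classes})")
    # numerically stable log-softmax, then pick the true class's log-probability
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels]

two_col = np.array([[2.0, 0.5], [0.1, 0.3]])   # n_outputs = 2: labels 0 and 1 are valid
sparse_softmax_xentropy(two_col, np.array([0, 1]))

one_col = np.array([[2.0], [0.5]])             # n_outputs = 1, as in the question
# sparse_softmax_xentropy(one_col, np.array([0, 1]))  # raises: 1 is outside [0, 1)
```

So with the sparse-softmax loss, a binary problem would need `n_outputs = 2`, one logit per class.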

I don't understand it, because my labels clearly only take the values 0 and 1.

Here's a link to a similar problem: Received a label value of 1 which is outside the valid range of [0, 1) - Python, Keras. I tried to find a solution elsewhere but didn't find a good one.

I tried to replace it with a binary classification xentropy, as suggested in the link:

  xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)

but then I got problems with the dtype of my X and/or y. I tried to follow the error messages, switching between int32 and float32 for X and y, but that didn't solve the problem; it only moved the dtype error somewhere else.

I guess that y must be int32, because

 correct = tf.nn.in_top_k(logits, y, 1)

gave me the following error (while switching the dtypes of my X and y): `Value passed to parameter 'targets' has DataType float32 not in list of allowed values: int32, int64`
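The dtype conflict is real: `in_top_k` wants integer class ids, while `sigmoid_cross_entropy_with_logits` wants float labels with the same shape as the logits. A NumPy sketch of the sigmoid loss's requirements (an illustration of the documented formula, not the TF source) suggests keeping `y` as int32 and casting/reshaping only where the loss needs it:

```python
import numpy as np

def sigmoid_xentropy(logits, labels):
    """Sketch of tf.nn.sigmoid_cross_entropy_with_logits:
    labels must be floats with the SAME shape as logits."""
    labels = np.asarray(labels, dtype=np.float64)
    assert labels.shape == logits.shape, "labels must match logits shape"
    # numerically stable form: max(x, 0) - x*z + log(1 + exp(-|x|))
    x, z = logits, labels
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

logits = np.array([[2.0], [-1.0]])         # shape (batch, 1), one logit per example
y_int = np.array([1, 0], dtype=np.int32)   # int labels, usable with in_top_k
# cast and reshape at the loss, not in the placeholder:
loss = sigmoid_xentropy(logits, y_int.reshape(-1, 1).astype(np.float64))
```

In the TF graph the analogous move would be something like `tf.cast(tf.reshape(y, (-1, 1)), tf.float32)` inside the loss expression, so the int32 `y` placeholder can still feed `in_top_k`.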

Here's my full code :

import numpy as np
import tensorflow as tf

n_inputs = 28
n_hidden1 = 15
n_hidden2 = 5
n_outputs = 1

def reset_graph(seed=42):
    # helper from the course notebooks: clear the default graph and fix the seeds
    tf.reset_default_graph()
    tf.set_random_seed(seed)
    np.random.seed(seed)

reset_graph()

X = tf.placeholder(tf.float32, shape=(None, n_inputs), name="X") 
y = tf.placeholder(tf.int32, shape=(None), name="y")  

def neuron_layer(X, n_neurons, name, activation=None):
    with tf.name_scope(name):
        n_inputs = int(X.shape[1])
        stddev = 2 / np.sqrt(n_inputs)
        print(n_inputs,stddev) 
        init = tf.truncated_normal((n_inputs, n_neurons), stddev=stddev)
        W = tf.Variable(init, name="kernel")  # random weight initialization
        b = tf.Variable(tf.zeros([n_neurons]), name="bias")
        Z = tf.matmul(X, W) + b
        if activation is not None:
            return activation(Z)
        else:
            return Z

hidden1 = neuron_layer(X, n_hidden1, name="hidden1",
                           activation=tf.nn.relu)
hidden2 = neuron_layer(hidden1, n_hidden2, name="hidden2",
                           activation=tf.nn.relu)
logits = neuron_layer(hidden2, n_outputs, name="outputs")
learning_rate = 0.01

xentropy = tf.nn.sigmoid_cross_entropy_with_logits(labels=y, logits=logits)
loss = tf.reduce_mean(xentropy, name="loss")
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
training_op = optimizer.minimize(loss)
correct = tf.nn.in_top_k(logits, y, 1)
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

Once this problem is resolved, I will train it.

I know it's a bit long, but I've been stuck on this for hours now... any suggestions?

Source: https://stackoverflow.com/questions/57077979/binary-classification-xentropy-mismatch-invalid-argument-received-a-label-v
