cross-entropy

How to implement Weighted Binary CrossEntropy on theano?

China☆狼群 submitted on 2019-12-04 03:15:37
How to implement weighted binary cross-entropy in Theano? My convolutional neural network only predicts values between 0 and 1 (sigmoid output). I want to penalize my predictions in this way: basically, I want to penalize MORE when the model predicts 0 but the truth was 1. Question: how can I create this weighted binary cross-entropy function using Theano and Lasagne? I tried the code below: prediction = lasagne.layers.get_output(model) import theano.tensor as T def weighted_crossentropy(predictions, targets): # Copy the tensor tgt = targets.copy("tgt") # Make it a vector # tgt = tgt.flatten() # tgt = tgt.reshape(3000) # …
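A minimal sketch of what such a loss could look like with Theano tensors; the weights w_pos and w_neg, the clipping constant, and the variable names in the usage comment are illustrative assumptions, not the asker's actual setup.

```python
import theano.tensor as T

def weighted_binary_crossentropy(predictions, targets, w_pos=2.0, w_neg=1.0):
    # Clip to avoid log(0); predictions come from a sigmoid, so they lie in (0, 1).
    predictions = T.clip(predictions, 1e-7, 1.0 - 1e-7)
    # Penalize false negatives (target 1, prediction near 0) more heavily via w_pos.
    loss = -(w_pos * targets * T.log(predictions)
             + w_neg * (1.0 - targets) * T.log(1.0 - predictions))
    return loss

# Usage sketch, assuming `model` is a Lasagne network and `target_var` a Theano matrix:
# prediction = lasagne.layers.get_output(model)
# target_var = T.matrix('targets')
# loss = weighted_binary_crossentropy(prediction, target_var).mean()
```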

What is the difference between cross-entropy and log loss error?

邮差的信 submitted on 2019-12-03 15:10:28
What is the difference between cross-entropy and log loss error? The formulae for both seem to be very similar. They are essentially the same: usually we use the term log loss for binary classification problems, and the more general cross-entropy (loss) for the general case of multi-class classification, but even this distinction is not consistent, and you'll often find the terms used interchangeably as synonyms. From the Wikipedia entry for cross-entropy: "The logistic loss is sometimes called cross-entropy loss. It is also known as log loss." From the fast.ai wiki entry on log loss: "Log loss…"
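As a quick numeric check (my own illustration, not from either quoted source): the binary log loss of a prediction p for a positive example is the same number as the general cross-entropy computed over the two-class distribution [1 - p, p].

```python
import numpy as np

p = 0.8            # predicted probability of the positive class
y = 1              # true label

# Binary log loss
log_loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))

# General cross-entropy between the one-hot truth and the two-class prediction
truth = np.array([0.0, 1.0])       # one-hot encoding of the positive class
pred = np.array([1.0 - p, p])
cross_entropy = -np.sum(truth * np.log(pred))

print(log_loss, cross_entropy)     # both ~0.2231
```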

Cross entropy function (python)

a 夏天 submitted on 2019-12-03 03:32:03
I am learning about neural networks and I want to write a function cross_entropy in Python. It is defined as -(1/N) * Σ_i Σ_j t_{i,j} * log(p_{i,j}), where N is the number of samples, k is the number of classes, log is the natural logarithm, t_{i,j} is 1 if sample i is in class j and 0 otherwise, and p_{i,j} is the predicted probability that sample i is in class j. To avoid numerical issues with the logarithm, clip the predictions to the [10^{-12}, 1 - 10^{-12}] range. Following the above description, I wrote the code by clipping the predictions to the [epsilon, 1 - epsilon] range and then computing the cross_entropy based on the above formula…
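A minimal NumPy sketch of the function described above; the epsilon value and the averaging over N follow the question's description, and the example inputs are made up for illustration.

```python
import numpy as np

def cross_entropy(predictions, targets, epsilon=1e-12):
    """predictions, targets: arrays of shape (N, k)."""
    # Clip to [epsilon, 1 - epsilon] to avoid log(0)
    predictions = np.clip(predictions, epsilon, 1.0 - epsilon)
    N = predictions.shape[0]
    # -1/N * sum_i sum_j t_{i,j} * log(p_{i,j})
    return -np.sum(targets * np.log(predictions)) / N

# Example inputs (illustrative)
predictions = np.array([[0.25, 0.25, 0.25, 0.25],
                        [0.01, 0.01, 0.01, 0.97]])
targets = np.array([[0, 0, 0, 1],
                    [0, 0, 0, 1]])
print(cross_entropy(predictions, targets))  # ~0.7084
```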

Why “softmax_cross_entropy_with_logits_v2” backprops into labels

别说谁变了你拦得住时间么 submitted on 2019-12-01 17:00:56
I am wondering why, in TensorFlow version 1.5.0 and later, softmax_cross_entropy_with_logits_v2 defaults to backpropagating into both labels and logits. What are some applications/scenarios where you would want to backprop into the labels? I saw the GitHub issue below asking the same question; you might want to follow it for future updates. https://github.com/tensorflow/minigo/issues/37 I don't speak for the developers who made this decision, but I would surmise that they made this the default because it is indeed used often, and for most applications where you aren't backpropagating into the…
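For readers who want the old behaviour (gradients flowing only into the logits), the usual approach is to wrap the labels in tf.stop_gradient. A minimal sketch against the TF 1.x API the question refers to; the placeholder shapes are illustrative assumptions.

```python
import tensorflow as tf

# Placeholders stand in for whatever produces the logits and (possibly soft) labels.
logits = tf.placeholder(tf.float32, shape=[None, 10])
labels = tf.placeholder(tf.float32, shape=[None, 10])

# v2 backpropagates into both inputs by default; stopping the gradient on the
# labels recovers the old softmax_cross_entropy_with_logits behaviour.
loss = tf.nn.softmax_cross_entropy_with_logits_v2(
    labels=tf.stop_gradient(labels), logits=logits)
```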

What is cross-entropy?

可紊 submitted on 2019-11-28 15:01:32
I know that there are a lot of explanations of what cross-entropy is, but I'm still confused. Is it only a method to describe the loss function? Can we use the gradient descent algorithm to find the minimum of that loss function? stackoverflowuser2010: Cross-entropy is commonly used to quantify the difference between two probability distributions. Usually the "true" distribution (the one that your machine learning algorithm is trying to match) is expressed in terms of a one-hot distribution. For example, suppose that for a specific training instance, the label is B (out of the possible labels A, B,…
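Continuing the excerpt's example in code (the predicted probabilities here are made up for illustration): with labels A, B, C and true label B, the one-hot "true" distribution is [0, 1, 0], so the cross-entropy only scores the probability the model assigns to B.

```python
import numpy as np

truth = np.array([0.0, 1.0, 0.0])        # one-hot: the true label is B
predicted = np.array([0.2, 0.7, 0.1])    # hypothetical model output over A, B, C

# H(truth, predicted) = -sum_i truth_i * log(predicted_i)
ce = -np.sum(truth * np.log(predicted))
print(ce)  # -log(0.7) ~= 0.357
```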

How can I implement a weighted cross entropy loss in tensorflow using sparse_softmax_cross_entropy_with_logits

只谈情不闲聊 submitted on 2019-11-28 05:33:50
I am starting to use TensorFlow (coming from Caffe), and I am using the loss sparse_softmax_cross_entropy_with_logits. The function accepts labels like 0, 1, ..., C-1 instead of one-hot encodings. Now I want to use a weighting that depends on the class label; I know this could perhaps be done with a matrix multiplication if I used softmax_cross_entropy_with_logits (one-hot encoding). Is there any way to do the same with sparse_softmax_cross_entropy_with_logits? import tensorflow as tf import numpy as np np.random.seed(123) sess = tf.InteractiveSession() # let's say we have the logits and labels…
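One common way to do this (a sketch under the question's TF 1.x setup; the class weights and example shapes are illustrative) is to compute the unweighted per-example loss and then scale each example by a weight looked up from its integer label with tf.gather.

```python
import tensorflow as tf
import numpy as np

np.random.seed(123)
num_classes = 5
logits = tf.constant(np.random.randn(4, num_classes), dtype=tf.float32)
labels = tf.constant([0, 3, 1, 4], dtype=tf.int32)

# Per-class weights (illustrative values), indexed by the integer label.
class_weights = tf.constant([1.0, 2.0, 1.0, 0.5, 3.0])

# Unweighted per-example loss, then scale each example by its class weight.
per_example = tf.nn.sparse_softmax_cross_entropy_with_logits(
    labels=labels, logits=logits)
weights = tf.gather(class_weights, labels)
loss = tf.reduce_mean(per_example * weights)

with tf.Session() as sess:
    print(sess.run(loss))
```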

Why is the Cross Entropy method preferred over Mean Squared Error? In what cases does this not hold up? [closed]

末鹿安然 submitted on 2019-11-28 04:22:23
Although both of the above methods give a better score the closer the prediction is to the target, cross-entropy is still preferred. Is that the case everywhere, or are there particular scenarios where we prefer cross-entropy over MSE? Cross-entropy is preferred for classification, while mean squared error is one of the best choices for regression. This follows directly from the statement of the problems themselves: in classification you work with a very particular set of possible output values, so MSE is badly defined (it does not have this kind of knowledge and thus penalizes errors in an incompatible way). To…
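A small numeric illustration of that point (the numbers are made up): for a confidently wrong classification, cross-entropy grows without bound while squared error stays bounded, so the training signal from MSE is comparatively weak.

```python
import numpy as np

truth = np.array([0.0, 1.0])   # one-hot: class 2 is correct

for pred in (np.array([0.4, 0.6]),      # mildly right
             np.array([0.99, 0.01])):   # confidently wrong
    ce = -np.sum(truth * np.log(pred))
    mse = np.mean((truth - pred) ** 2)
    print(f"pred={pred}, cross-entropy={ce:.3f}, mse={mse:.3f}")
# Cross-entropy jumps from ~0.511 to ~4.605; MSE only moves from 0.16 to ~0.98.
```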

What is the meaning of the word logits in TensorFlow?

若如初见. submitted on 2019-11-27 09:57:44
In the following TensorFlow function, we must feed the activations of the artificial neurons in the final layer. That I understand. But I don't understand why it is called logits. Isn't a logit a mathematical function? loss_function = tf.nn.softmax_cross_entropy_with_logits( logits = last_layer, labels = target_output ) Salvador Dali: Logits is an overloaded term which can mean many different things. In math, the logit is a function that maps probabilities ([0, 1]) to R ((-inf, inf)). A probability of 0.5 corresponds to a logit of 0. Negative logits correspond to probabilities less than 0.5, positive to >…
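To make the quoted mathematical definition concrete (a small illustration of my own, not part of the answer): the logit maps probabilities to the whole real line, and the sigmoid maps them back.

```python
import numpy as np

def logit(p):
    # Maps a probability in (0, 1) to the real line.
    return np.log(p / (1.0 - p))

def sigmoid(x):
    # Inverse of the logit: maps the real line back to (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

print(logit(0.5))           # 0.0
print(logit(0.25))          # negative, since 0.25 < 0.5
print(sigmoid(logit(0.8)))  # 0.8 -- sigmoid inverts the logit
```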
