When using RMSE loss in TensorFlow I receive very small loss values smaller than 1 [closed]

久未见 Submitted on 2020-01-05 07:42:37

Question


Hello, I have a network that produces logits / outputs like this:

import tensorflow as tf

logits = tf.placeholder(tf.float32, [None, 128, 64, 64])  # outputs
y = tf.placeholder(tf.float32, [None, 128, 64, 64])       # ground truth / targets

--> The ground-truth values y are downscaled from [0, 255] to [0, 1] in order to improve performance, as I have read that it is better to use the range [0, 1].

Now I want to calculate the RMSE / EuclideanLoss like this:

loss = tf.reduce_mean(tf.square(logits - y))

or

loss = tf.sqrt(tf.reduce_mean(tf.square(tf.subtract(y, logits))))

I'm not sure which one is better.
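For reference, here is a minimal, self-contained sketch that wires both variants into a TensorFlow 1.x session (matching the placeholder style above; the batch size and the random dummy data are illustrative assumptions, not values from the question):

import numpy as np
import tensorflow as tf  # TensorFlow 1.x API, as in the snippet above

logits = tf.placeholder(tf.float32, [None, 128, 64, 64])  # network outputs
y = tf.placeholder(tf.float32, [None, 128, 64, 64])       # targets in [0, 1]

mse = tf.reduce_mean(tf.square(logits - y))  # mean squared error
rmse = tf.sqrt(mse)                          # root mean squared error

with tf.Session() as sess:
    # Illustrative dummy batch of 2 samples with values in [0, 1]
    feed = {
        logits: np.random.rand(2, 128, 64, 64).astype(np.float32),
        y: np.random.rand(2, 128, 64, 64).astype(np.float32),
    }
    print(sess.run([mse, rmse], feed_dict=feed))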

When doing so, my loss values start at roughly 1 and then quickly go down to 2.5e-4. When I use the EuclideanLoss in Caffe for the same network, my loss values start at roughly 1000 and go down to 200. Am I doing anything wrong in TensorFlow, or why are the loss values that small? I can't really track the loss values in TensorBoard since they are so small. Can anyone help me?


Answer 1:


The first loss you propose is the better one (the second introduces an unnecessary sqrt; since the square root is monotonic, minimizing the MSE and the RMSE leads to the same optimum).

Values equal to or smaller than 1 are exactly what you should expect, since the values you feed in lie between 0 and 1 - so the biggest possible per-pixel error is 1, and the mean squared error can never exceed 1.
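To make the scale difference concrete, here is a small hedged example (again TensorFlow 1.x; the constant error of 0.02 per pixel is an illustrative assumption, not a value from the question): the same prediction error yields a tiny loss in the [0, 1] range and a much larger one in the original [0, 255] range, which is roughly the regime the Caffe numbers live in.

import numpy as np
import tensorflow as tf

# Hypothetical constant error of 0.02 per pixel, expressed in the [0, 1] range
pred = np.full((1, 128, 64, 64), 0.50, dtype=np.float32)
true = np.full((1, 128, 64, 64), 0.52, dtype=np.float32)

mse_01 = tf.reduce_mean(tf.square(pred - true))             # (0.02)^2  = 4e-4
mse_255 = tf.reduce_mean(tf.square(255.0 * (pred - true)))  # (5.1)^2  ~= 26

with tf.Session() as sess:
    print(sess.run([mse_01, mse_255]))  # roughly [0.0004, 26.01]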

If you have trouble visualising the loss in TensorBoard, try displaying the graphs on a log scale (one of the two buttons under the graphs).



Source: https://stackoverflow.com/questions/44717224/when-using-rmse-loss-in-tensorflow-i-receive-very-small-loss-values-smalerl-than
