Printing the loss during TensorFlow training

Backend · Open · 2 answers · 984 views
Asked by 無奈伤痛, 2020-12-23 16:49

I am looking at the TensorFlow "MNIST For ML Beginners" tutorial, and I want to print out the training loss after every training step.

My training loop currently runs only the train_step at each iteration, and I don't see how to retrieve the loss value that was computed during that step.

2 Answers
  • 2020-12-23 17:27

    Instead of just running the train_step, also run the cross_entropy node so that its value is returned to you. Remember that:

    var_as_a_python_value = sess.run(tensorflow_variable)
    

    will give you what you want, so you can do this:

    [_, cross_entropy_py] = sess.run([train_step, cross_entropy],
                                     feed_dict={x: batch_xs, y_: batch_ys})
    

    to both run the training and pull out the value of the cross entropy as it was computed during the iteration. Note that I turned both the arguments to sess.run and the return values into lists so that both happen.
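To make the fetch pattern concrete, here is a minimal self-contained sketch. It uses a toy linear-regression graph rather than the MNIST model from the question, and assumes TF 1.x graph-mode semantics (available in TF 2.x as tf.compat.v1):

```python
# Sketch of the "fetch several nodes in one sess.run" pattern.
# Toy regression graph, NOT the question's MNIST model; assumes
# TF 1.x graph mode (tf.compat.v1 under TensorFlow 2.x).
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=[None, 1])
y_ = tf.placeholder(tf.float32, shape=[None, 1])
W = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
y = tf.matmul(x, W) + b

# Simple MSE loss as a stand-in for the tutorial's cross-entropy.
loss = tf.reduce_mean(tf.square(y - y_))
train_step = tf.train.GradientDescentOptimizer(0.1).minimize(loss)

losses = []
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(5):
        # One call both applies the update and returns the loss
        # computed for this batch.
        _, loss_val = sess.run([train_step, loss],
                               feed_dict={x: [[1.0]], y_: [[2.0]]})
        losses.append(loss_val)
        print('step %d: loss = %s' % (i, loss_val))
```

Because the loss is fetched from the same run call, it is the value computed on that iteration's batch; no extra forward pass is needed.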

  • 2020-12-23 17:29

    You can fetch the value of cross_entropy by adding it to the list of arguments to sess.run(...). For example, your for-loop could be rewritten as follows:

    # Define the loss node once, outside the loop, so the graph
    # doesn't grow on every iteration.
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y))

    for i in range(100):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        _, loss_val = sess.run([train_step, cross_entropy],
                               feed_dict={x: batch_xs, y_: batch_ys})
        print('loss = %s' % loss_val)
    

    The same approach can be used to print the current value of a variable. Say that, in addition to the value of cross_entropy, you want to print the value of a tf.Variable called W. You could do the following:

    # Again, create the graph nodes once, outside the loop.
    cross_entropy = -tf.reduce_sum(y_ * tf.log(y))

    for i in range(100):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        _, loss_val, W_val = sess.run([train_step, cross_entropy, W],
                                      feed_dict={x: batch_xs, y_: batch_ys})
        print('loss = %s' % loss_val)
        print('W = %s' % W_val)
    
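A self-contained toy version of this variable-fetching pattern (again assuming TF 1.x graph mode via tf.compat.v1, not the actual MNIST graph):

```python
# Sketch: fetch a tf.Variable's current value in the same sess.run
# call as the training op.  Toy graph; assumes TF 1.x semantics
# (tf.compat.v1 under TensorFlow 2.x).
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=[None, 1])
y_ = tf.placeholder(tf.float32, shape=[None, 1])
W = tf.Variable(tf.zeros([1, 1]))
y = tf.matmul(x, W)
loss = tf.reduce_mean(tf.square(y - y_))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    _, loss_val, W_val = sess.run([train_step, loss, W],
                                  feed_dict={x: [[1.0]], y_: [[3.0]]})
    print('loss = %s' % loss_val)  # loss from the pre-update forward pass
    print('W = %s' % W_val)
```

One subtlety: when a variable and an op that updates it are fetched in the same run call, TensorFlow does not guarantee whether the fetched value is from before or after the update; if the exact value matters, fetch W in a separate sess.run(W).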