TensorFlow: optimizer gives nan as output

最后都变了 - Submitted on 2019-12-24 17:07:50

Question


I am running a very simple TensorFlow program:

import tensorflow as tf

W = tf.Variable([.3], tf.float32)
b = tf.Variable([-.3], tf.float32)
x = tf.placeholder(tf.float32)
linear_model = W * x + b
y = tf.placeholder(tf.float32)
squared_error = tf.square(linear_model - y)
loss = tf.reduce_sum(squared_error)
optimizer = tf.train.GradientDescentOptimizer(0.1)
train = optimizer.minimize(loss)
init = tf.global_variables_initializer()

with tf.Session() as s:
    file_writer = tf.summary.FileWriter('../../tfLogs/graph', s.graph)
    s.run(init)
    for i in range(1000):
        s.run(train, {x: [1, 2, 3, 4], y: [0, -1, -2, -3]})
    print(s.run([W, b]))

This gives me:

[array([ nan], dtype=float32), array([ nan], dtype=float32)]

What am I doing wrong?


Answer 1:


You're using loss = tf.reduce_sum(squared_error) instead of tf.reduce_mean. With reduce_sum your loss grows with the amount of data, and even with this small example the gradient is large enough to make the model diverge.

A learning rate that is too large causes the same kind of problem. In this case you can also fix the divergence by lowering the learning rate from 0.1 to 0.01, but if you keep reduce_sum it will break again as soon as you add more data points.
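To see the scaling effect concretely, here is a minimal NumPy sketch of the same gradient-descent loop (the `fit` helper and its `scale` parameter are my own names, not from the question). With scale=1 the loss matches reduce_sum and the update overshoots at lr=0.1; with scale=1/4 it matches reduce_mean and converges toward the true solution W=-1, b=1.

```python
import numpy as np

x = np.array([1., 2., 3., 4.])
y = np.array([0., -1., -2., -3.])

def fit(scale, lr=0.1, steps=1000):
    """Plain gradient descent on loss = scale * sum((W*x + b - y)**2).

    scale=1.0 mimics tf.reduce_sum; scale=1/len(x) mimics tf.reduce_mean.
    """
    W, b = 0.3, -0.3
    with np.errstate(over='ignore', invalid='ignore'):
        for _ in range(steps):
            residual = W * x + b - y
            # Gradient of the scaled squared-error loss w.r.t. W and b.
            W -= lr * 2 * scale * np.sum(residual * x)
            b -= lr * 2 * scale * np.sum(residual)
    return W, b

print(fit(scale=1.0))         # reduce_sum analogue: diverges to (nan, nan)
print(fit(scale=1/len(x)))    # reduce_mean analogue: approaches (-1.0, 1.0)
```

The sum-based gradient is four times larger than the mean-based one here, which pushes the effective step size past the stability limit for lr=0.1; dividing by the number of points keeps the step size independent of dataset size.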



Source: https://stackoverflow.com/questions/47103581/tensorflow-optimizer-gives-nan-as-ouput
