TensorFlow apply_gradients remotely

Asked by 执笔经年 on 2021-01-13 13:25

I'm trying to split up the minimize() function over two machines. On one machine I call "compute_gradients"; on the other I call "apply_gradients" with gradients that were sent over from the first machine.

1 Answer
  • Answered 2021-01-13 14:11

    Assuming that each gradients[i] is a NumPy array that you've fetched using some out-of-band mechanism, the fix is simply to remove the tf.convert_to_tensor() invocation when building feed_dict:

    # `compute_gradients` is the list of (gradient, variable) pairs returned
    # by optimizer.compute_gradients(); `gradients[i]` is the NumPy array
    # fetched for the i-th gradient.
    feed_dict = {}
    for i, grad_var in enumerate(compute_gradients):
        # Feed the raw NumPy array directly -- no tf.convert_to_tensor().
        feed_dict[placeholder_gradients[i][0]] = gradients[i]
    apply_gradients.run(feed_dict=feed_dict)
    

    Each value in a feed_dict should be a NumPy array (or some other object that is trivially convertible to a NumPy array). In particular, a tf.Tensor is not a valid value for a feed_dict.
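    To make the whole pattern concrete, here is a minimal self-contained sketch of the split described in the question, using the TF 1.x-style graph API (via tf.compat.v1 so it also runs on TF 2.x installs). The variable names (`grads_and_vars`, `placeholder_gradients`, `grad_values`) and the toy loss are illustrative choices, not from the original post; the point is that the fetched gradients cross the "network" as plain NumPy arrays and are fed back through placeholders:

    ```python
    import tensorflow.compat.v1 as tf

    tf.disable_eager_execution()

    # "Machine A": build the graph and compute symbolic gradients.
    x = tf.Variable(3.0, name="x")
    loss = tf.square(x)  # d(loss)/dx = 2x
    opt = tf.train.GradientDescentOptimizer(learning_rate=0.1)
    grads_and_vars = opt.compute_gradients(loss)  # list of (gradient, variable)

    # "Machine B": placeholders stand in for the remotely computed gradients.
    placeholder_gradients = [
        (tf.placeholder(tf.float32, shape=v.get_shape()), v)
        for g, v in grads_and_vars
    ]
    apply_op = opt.apply_gradients(placeholder_gradients)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # Fetch the gradients as NumPy arrays -- this is what would be
        # shipped between machines by some out-of-band mechanism.
        grad_values = sess.run([g for g, v in grads_and_vars])
        # Feed the raw NumPy arrays; no tf.convert_to_tensor() anywhere.
        feed_dict = {
            placeholder_gradients[i][0]: grad_values[i]
            for i in range(len(grad_values))
        }
        sess.run(apply_op, feed_dict=feed_dict)
        updated = sess.run(x)  # 3.0 - 0.1 * 6.0, i.e. about 2.4
    print(updated)
    ```

    In a real two-machine setup the fetch and the apply would run in separate processes against separate graphs; the feed_dict mechanics are identical.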
