I'm trying to split up the minimize function over two machines. On one machine, I'm calling "compute_gradients"; on another, I call "apply_gradients" with gradients that were transferred over the network.
Assuming that each gradients[i] is a NumPy array that you've fetched using some out-of-band mechanism, the fix is simply to remove the tf.convert_to_tensor() invocation when building feed_dict:
    feed_dict = {}
    for i, grad_var in enumerate(compute_gradients):
        feed_dict[placeholder_gradients[i][0]] = gradients[i]
    apply_gradients.run(feed_dict=feed_dict)
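For context, here is a minimal end-to-end sketch of the whole setup, assuming the TensorFlow 1.x graph API. The toy model (w, loss), the optimizer choice, and the single-process simulation of the "two machines" are illustrative; the out-of-band transfer between machines is elided:

    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    # Toy model standing in for the real one.
    w = tf.Variable([1.0, 2.0])
    loss = tf.reduce_sum(tf.square(w))
    optimizer = tf.train.GradientDescentOptimizer(0.01)

    # "Machine A": symbolic (gradient, variable) pairs.
    compute_gradients = optimizer.compute_gradients(loss)

    # "Machine B": one placeholder per gradient, paired with its variable.
    placeholder_gradients = [
        (tf.placeholder(tf.float32, shape=var.get_shape()), var)
        for _, var in compute_gradients
    ]
    apply_gradients = optimizer.apply_gradients(placeholder_gradients)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())

        # Machine A: fetch the gradients; sess.run() returns NumPy arrays.
        gradients = sess.run([grad for grad, _ in compute_gradients])

        # ... ship `gradients` to machine B out of band ...

        # Machine B: feed the NumPy arrays straight into the placeholders,
        # with no tf.convert_to_tensor() in between.
        feed_dict = {}
        for i, grad_var in enumerate(compute_gradients):
            feed_dict[placeholder_gradients[i][0]] = gradients[i]
        sess.run(apply_gradients, feed_dict=feed_dict)

Note that sess.run() already returns plain NumPy arrays for the fetched gradients, which is exactly the form the placeholders expect on the other side.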
Each value in a feed_dict should be a NumPy array (or some other object that is trivially convertible to a NumPy array). In particular, a tf.Tensor is not a valid value for a feed_dict.
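To illustrate the rule, here is a small hypothetical example, again assuming the TensorFlow 1.x graph API:

    import numpy as np
    import tensorflow.compat.v1 as tf
    tf.disable_v2_behavior()

    x = tf.placeholder(tf.float32, shape=[2])
    y = x * 2.0

    with tf.Session() as sess:
        # OK: a NumPy array.
        print(sess.run(y, feed_dict={x: np.array([1.0, 2.0])}))
        # OK: a plain list, trivially convertible to a NumPy array.
        print(sess.run(y, feed_dict={x: [1.0, 2.0]}))
        # Raises TypeError: a tf.Tensor is not a valid feed value.
        # sess.run(y, feed_dict={x: tf.constant([1.0, 2.0])})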