What is wrong with the following code? The tf.assign op works just fine when applied to a slice of a tf.Variable, as long as the assignment happens outside of a loop.
From a CUDA perspective it makes sense to disallow assignment to individual indices inside a loop, since it negates the performance benefits of heterogeneous parallel computing.
The workaround below avoids sliced assignment by adding a one-hot vector instead; I know this adds a bit of computational overhead, but it works.
import tensorflow as tf  # TF 1.x graph-mode API (in TF 2.x, available under tf.compat.v1)

v = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
n = len(v)
a = tf.Variable(v, name='a', dtype=tf.float32)

def cond(i, a):
    return i < n

def body(i, a1):
    # One-hot row of the identity matrix: 1 at index i, 0 elsewhere.
    # Adding e * x writes x into slot i without an in-place assignment.
    e = tf.eye(n, n)[i]
    a1 = a1 + e * (a1[i - 1] + a1[i - 2])
    return i + 1, a1

i, b = tf.while_loop(cond, body, [2, a])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # Fetch both tensors in a single run so the loop executes only once.
    i_val, b_val = sess.run([i, b])
    print('i: ', i_val)
    print('b: ', b_val)
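The one-hot trick the loop body relies on is framework-agnostic, so it can be sanity-checked in plain NumPy (an illustration of what the graph computes, not part of the TensorFlow code):

```python
import numpy as np

v = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=np.float32)
n = len(v)
a = v.copy()
for i in range(2, n):
    e = np.eye(n)[i]                    # one-hot row: 1 at index i, 0 elsewhere
    a = a + e * (a[i - 1] + a[i - 2])   # writes a[i-1] + a[i-2] into slot i
print(a.tolist())  # [1.0, 1.0, 2.0, 3.0, 5.0, 8.0, 13.0, 21.0, 34.0, 55.0, 89.0]
```

Each iteration leaves every slot except index i untouched, which is exactly why the TensorFlow version needs no sliced assignment.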