How to handle different queue batch size and feed value batch size in TensorFlow?

Submitted by China☆狼群 on 2020-01-25 03:13:07

Question


My code used to work on TensorFlow 0.6, but it no longer works on the latest TensorFlow.

I would like to perform inference every few training iterations. My training data is pulled from a queue, while my inference data comes from feed_dict. The training batch size is 128 and the inference batch size is 1. What should I do to make the network accept the two different batch sizes?

import tensorflow as tf

batch_size = 128
x_batch = tf.placeholder(tf.float32, [None, 100])
q = tf.FIFOQueue(10, [tf.float32], shapes=[[batch_size, 100]])
enqueue_op = q.enqueue([x_batch])

# during training
x = q.dequeue()  # dequeue operation

# network definition: takes x as input and outputs y
......

# during inference
x_array_of_batch_size_1 = ..  # a 1x100 numpy array
sess.run([y], feed_dict={x: x_array_of_batch_size_1})

I got the following error:

ValueError: Cannot feed value of shape (1, 100) for Tensor u'fifo_queue_Dequeue:0', which has shape '(128, 100)'

Answer 1:


We added this check recently to prevent errors (and to enable a few optimization opportunities). You can make your program work again by changing the declaration of x to use the new tf.placeholder_with_default() op:

x = tf.placeholder_with_default(q.dequeue(), shape=[None, 100])
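
For context, here is a minimal end-to-end sketch of that pattern, assuming the TF 1.x graph/session API; the single-layer "network" (W, y) is a toy stand-in for the asker's real model:

import numpy as np
import tensorflow as tf  # TF 1.x-style graph/session API assumed

batch_size = 128
x_batch = tf.placeholder(tf.float32, [None, 100])
q = tf.FIFOQueue(10, [tf.float32], shapes=[[batch_size, 100]])
enqueue_op = q.enqueue([x_batch])

# x evaluates to q.dequeue() when nothing is fed, but its static shape
# is [None, 100], so a feed of any batch size is also accepted.
x = tf.placeholder_with_default(q.dequeue(), shape=[None, 100])

# toy "network": a single linear layer (illustrative only)
W = tf.Variable(tf.zeros([100, 10]))
y = tf.matmul(x, W)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # training: enqueue a full batch, then run y without feeding x,
    # so the default input (the dequeue op) supplies a [128, 100] batch
    sess.run(enqueue_op,
             feed_dict={x_batch: np.zeros([batch_size, 100], np.float32)})
    print(sess.run(y).shape)  # (128, 10)

    # inference: feed a single example; the queue is bypassed entirely
    x1 = np.zeros([1, 100], np.float32)
    print(sess.run(y, feed_dict={x: x1}).shape)  # (1, 10)

During training, running y without feeding x lets the default input pull a full [128, 100] batch from the queue; at inference time, feeding x overrides the default, and the [None, 100] shape accepts a batch of 1.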


Source: https://stackoverflow.com/questions/36105763/how-to-handle-different-queue-batch-size-and-feed-value-batch-size-in-tensorflow
