TensorFlow shuffle_batch() blocks at end of epoch

醉话见心 2021-01-23 00:57

I'm using tf.train.shuffle_batch() to create batches of input images. It includes a min_after_dequeue parameter that makes sure there's a specified number of elements inside the internal queue, but this seems to make the dequeue block at the end of an epoch, when fewer than min_after_dequeue elements remain. How can I stop it from blocking?

3 Answers
  •  攒了一身酷
    2021-01-23 01:48

    You are correct that running the RandomShuffleQueue.close() operation will stop the dequeuing threads from blocking when there are fewer than min_after_dequeue elements in the queue.
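The blocking behavior can be illustrated with a plain-Python sketch (not TensorFlow code, just an analogue): the dequeue side refuses to return a batch unless at least `min_after_dequeue` elements would remain afterwards, and closing the queue lifts that requirement so readers can drain what is left. The class and names below are hypothetical, for illustration only.

```python
import threading

MIN_AFTER_DEQUEUE = 3

class ShuffleLikeQueue:
    """Toy analogue of a RandomShuffleQueue's blocking rule (not real TF)."""

    def __init__(self):
        self._items = []
        self._cond = threading.Condition()
        self._closed = False

    def enqueue(self, item):
        with self._cond:
            self._items.append(item)
            self._cond.notify_all()

    def close(self):
        # Analogue of RandomShuffleQueue.close(): lift the
        # min_after_dequeue requirement so pending dequeues can proceed.
        with self._cond:
            self._closed = True
            self._cond.notify_all()

    def dequeue_batch(self, n):
        with self._cond:
            # Block until a batch of n can be taken while still leaving
            # MIN_AFTER_DEQUEUE elements behind -- unless the queue is closed.
            while not self._closed and len(self._items) < n + MIN_AFTER_DEQUEUE:
                self._cond.wait()
            batch, self._items = self._items[:n], self._items[n:]
            return batch

q = ShuffleLikeQueue()
for i in range(5):
    q.enqueue(i)

print(q.dequeue_batch(2))  # 5 >= 2 + 3, so this returns [0, 1]
q.close()                  # without this, the next call would block forever
print(q.dequeue_batch(2))  # after close: returns [2, 3]
```

Once closed, a further dequeue returns whatever remains (here `[4]`), which mirrors how closing the real queue lets the final partial epoch drain.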

    The tf.train.shuffle_batch() function creates a tf.train.QueueRunner that performs operations on the queue in a background thread. If you start it as follows, passing a tf.train.Coordinator, you will be able to close the queue cleanly (based on the example here):

    import tensorflow as tf  # TF 1.x API

    sess = tf.Session()
    coord = tf.train.Coordinator()
    # start_queue_runners() returns the started threads so we can join them later.
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)

    # The coordinator requests a stop when a queue runner hits an error
    # (e.g. OutOfRangeError after the queue is closed).
    while not coord.should_stop():
      sess.run(train_op)
    # When done, ask the threads to stop.
    coord.request_stop()
    # And wait for them to actually do it.
    coord.join(threads)
    
