I'm reading a batch of images, following the idea here, from TFRecords (converted by this).
My images are CIFAR images, [32, 32, 3], and as you can see while reading and pas
I had exactly the same issue today, and later I found it was the input data file I downloaded from a "famous data set" source (such as https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data) that caused the error: it has some empty lines at the end of the file. Remove the empty lines and the error is gone!
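If you suspect the same issue, here is a minimal sketch (plain Python, with a placeholder file path) for stripping trailing blank lines before feeding the file to your reader:
data_path = 'iris.data'  # placeholder: point this at your downloaded data file

with open(data_path) as f:
    lines = f.read().splitlines()

# Drop any empty lines at the end of the file
while lines and not lines[-1].strip():
    lines.pop()

with open(data_path, 'w') as f:
    f.write('\n'.join(lines) + '\n')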
You're likely processing the parsed TFRecord example incorrectly, e.g. trying to reshape a tensor to an incompatible size. You can debug with a tf_record_iterator to confirm the data you're reading is stored the way you think it is:
import tensorflow as tf
import numpy as np

tfrecords_filename = '/path/to/some.tfrecord'
record_iterator = tf.python_io.tf_record_iterator(path=tfrecords_filename)

for string_record in record_iterator:
    # Parse the next example
    example = tf.train.Example()
    example.ParseFromString(string_record)

    # Get the features you stored (change to match your tfrecord writing code)
    height = int(example.features.feature['height']
                 .int64_list
                 .value[0])
    width = int(example.features.feature['width']
                .int64_list
                .value[0])
    img_string = (example.features.feature['image_raw']
                  .bytes_list
                  .value[0])

    # Convert to a numpy array (change dtype to the datatype you stored)
    img_1d = np.fromstring(img_string, dtype=np.float32)

    # Print the image shape; does it match your expectations?
    print(img_1d.shape)
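    # Optional sanity check, assuming the 3-channel [height, width, 3] layout
    # from the question: the reshape your input pipeline performs should also
    # succeed here; a ValueError means the stored data and the expected shape
    # disagree.
    img = img_1d.reshape((height, width, 3))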
To summarize the comments, the error
Compute status: Out of range: RandomShuffleQueue '_2_input/shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 100, current size 0)
was caused by the queue running out of data. This often happens when you think you have enough data for N iterations but actually only have enough for M iterations, where M < N (for example, with 1,000 examples, a batch size of 100, and a single epoch, the queue can only serve 10 batches before it closes).
One suggestion for figuring out how much data you actually have is to count how many times you can read data from the queue before it throws an OutOfRangeError, as in the sketch below.
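For example, here is a minimal sketch (TF 1.x queue API, with a placeholder file path) that reads raw records through a queue until OutOfRangeError and reports the count:
import tensorflow as tf

tfrecords_filename = '/path/to/some.tfrecord'  # placeholder

# One pass over the file; num_epochs creates a local variable, hence the
# local_variables_initializer below.
filename_queue = tf.train.string_input_producer([tfrecords_filename], num_epochs=1)
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)

count = 0
with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())
    sess.run(tf.global_variables_initializer())
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)
    try:
        while True:
            sess.run(serialized_example)
            count += 1
    except tf.errors.OutOfRangeError:
        print('Read %d examples before the queue closed' % count)
    finally:
        coord.request_stop()
        coord.join(threads)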
This could also be caused by a TFRecord file name that doesn't exist at all. Make sure you have the correct file paths specified before doing any other checks.
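A quick sanity check, again with a placeholder path:
import os

tfrecords_filename = '/path/to/some.tfrecord'  # placeholder
assert os.path.exists(tfrecords_filename), 'TFRecord file not found: %s' % tfrecords_filename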
I had a similar problem. Digging around the web, it turned out that if you use a num_epochs argument, you have to initialize all the local variables, so your code should end up looking like:
with tf.Session() as sess:
    sess.run(tf.local_variables_initializer())
    sess.run(tf.global_variables_initializer())

    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(coord=coord)

    # do your stuff here

    coord.request_stop()
    coord.join(threads)
If you post some more code, maybe I could take a deeper look into it. In the meantime, HTH.
I had this same problem and none of the previous answers seemed to solve it, so I will also chime in.
For me the problem ended up being the features dict I was passing to parse_single_example. For whatever reason (perhaps because I am using a float_list?), in my tfrecords file I needed to specify the length of the array in my features dict, or use tf.VarLenFeature, i.e.:
feature_structure = {'features': tf.FixedLenFeature([FEATURE_SIZE], tf.float32),
                     'outputs': tf.FixedLenFeature([OUTPUT_SIZE], tf.float32)}

d_features = tf.parse_single_example(serialized_example, features=feature_structure)
Without this I kept getting the "random_shuffle_queue is closed and has insufficient elements" error, which I am guessing is because my parsed example had no data in it.
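For completeness, a sketch of the tf.VarLenFeature alternative mentioned above, continuing the snippet and assuming the same 'features'/'outputs' keys and the serialized_example from the reading pipeline; VarLenFeature yields a SparseTensor, so convert it back to a dense tensor before reshaping:
feature_structure = {'features': tf.VarLenFeature(tf.float32),
                     'outputs': tf.VarLenFeature(tf.float32)}
d_features = tf.parse_single_example(serialized_example, features=feature_structure)

# parse_single_example returns SparseTensors for VarLenFeature entries
dense_features = tf.sparse_tensor_to_dense(d_features['features'])
dense_outputs = tf.sparse_tensor_to_dense(d_features['outputs'])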