I have 5 tfrecord files, one for each object. While training I want to read data equally from all 5 tfrecords, i.e. if my batch size is 50, I should get 10 samples from the 1st tfrecord file, 10 samples from the 2nd tfrecord file, and so on. Currently it just reads the files sequentially, so I get all 50 samples from the same record file. Is there a way to sample from different tfrecord files?
I advise you to read the tutorial by @mrry on tf.data. On slide 42 he explains how to use tf.data.Dataset.interleave() to read multiple tfrecord files at the same time.
For instance, if you have 5 files containing:
file0.tfrecord: [0, 1]
file1.tfrecord: [2, 3]
file2.tfrecord: [4, 5]
file3.tfrecord: [6, 7]
file4.tfrecord: [8, 9]
You can write the dataset like this:
files = ["file{}.tfrecord".format(i) for i in range(5)]
files = tf.data.Dataset.from_tensor_slices(files)
dataset = files.interleave(lambda x: tf.data.TFRecordDataset(x),
cycle_length=5, block_length=1)
dataset = dataset.map(_parse_function) # parse the record
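The _parse_function used above is not shown in the answer; what it looks like depends on how your records were written. Below is a minimal sketch, assuming each record is a serialized tf.train.Example holding a raw image byte string and an integer label (the feature names image_raw and label are assumptions for illustration, not part of the original code):

def _parse_function(serialized_example):
    # Declare the features expected in each tf.train.Example
    features = {
        "image_raw": tf.FixedLenFeature([], tf.string),
        "label": tf.FixedLenFeature([], tf.int64),
    }
    parsed = tf.parse_single_example(serialized_example, features)
    image = tf.decode_raw(parsed["image_raw"], tf.uint8)  # decode the raw bytes
    label = tf.cast(parsed["label"], tf.int32)
    return image, label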
The parameters of interleave are:
- cycle_length: the number of files to read from concurrently. If you want every batch to contain samples from all your files, set this to the number of files (in your case this is what you should do, since each file contains one type of label). See the sketch after this list for how to combine it with batching.
- block_length: the number of consecutive elements to read from a file each time we visit it.
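To get exactly the split asked for in the question (batches of 50 with 10 samples per file), one option is to read 10 consecutive elements from each file per cycle and then batch. A minimal sketch of that idea, reusing the same (hypothetical) _parse_function:

files = tf.data.Dataset.from_tensor_slices(
    ["file{}.tfrecord".format(i) for i in range(5)])
dataset = files.interleave(tf.data.TFRecordDataset,
                           cycle_length=5, block_length=10)
dataset = dataset.map(_parse_function)
dataset = dataset.batch(50)  # each batch holds 10 consecutive records per file

Note that inserting a large shuffle between interleave and batch would break the exact 10-per-file split; if you need shuffling, shuffle inside each per-file dataset (in the interleave lambda) or shuffle the finished batches instead.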
We can test that it works as expected:
iterator = dataset.make_one_shot_iterator()
x = iterator.get_next()

with tf.Session() as sess:
    for _ in range(10):  # 10 records in total across the 5 files
        print(sess.run(x))
which will print:
0
2
4
6
8
1
3
5
7
9
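As a side note, make_one_shot_iterator() and tf.Session are TensorFlow 1.x APIs. On TensorFlow 2.x the same pipeline can be consumed by iterating the dataset directly under eager execution; a rough equivalent of the test above (without the map step, so it prints serialized record strings):

import tensorflow as tf

files = tf.data.Dataset.from_tensor_slices(
    ["file{}.tfrecord".format(i) for i in range(5)])
dataset = files.interleave(tf.data.TFRecordDataset,
                           cycle_length=5, block_length=1)
for x in dataset.take(10):  # 10 records in total across the 5 files
    print(x)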
Source: https://stackoverflow.com/questions/48692402/shuffling-tfrecords-files