I am adapting the cifar10 convolution example to my problem. I'd like to change the data input from a design that reads images one-at-a-time from a file to a design that operates on a batch of images already held in memory.
There are a few answers - none quite as elegant as a map function. Which one is best depends a bit on how much you care about memory efficiency.
(a) You can use enqueue_many to throw them into a tf.FIFOQueue, then dequeue and tf.image.resize_image_with_crop_or_pad an image at a time, and concat it all back into one big smoosh. This is probably slow, since it requires N calls to run for N images.
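A minimal sketch of option (a), assuming a small in-memory batch of 32x32x3 images and a 24x24 target size (both made up for illustration):

```python
import numpy as np
import tensorflow as tf

# Assumed: the batch already sits in host memory as one numpy array.
images = np.random.rand(8, 32, 32, 3).astype(np.float32)

# Push the whole batch into a FIFOQueue, then pull images back out one at a
# time, crop/pad each, and stitch the results together on the host side.
queue = tf.FIFOQueue(capacity=8, dtypes=tf.float32, shapes=[32, 32, 3])
enqueue_op = queue.enqueue_many(images)
single_image = queue.dequeue()
resized = tf.image.resize_image_with_crop_or_pad(single_image, 24, 24)

with tf.Session() as sess:
    sess.run(enqueue_op)
    # N calls to run for N images, then one stack at the end.
    batch = np.stack([sess.run(resized) for _ in range(len(images))])
```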
(b) You could use a single placeholder feed and resize and crop each image on its way in from your original data source. This is possibly the best option from a memory perspective, because you never have to store the unresized data in memory.
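A sketch of option (b); the in-memory list of raw images and the sizes are assumptions for the example:

```python
import numpy as np
import tensorflow as tf

# Assumed: images arrive one at a time from some existing data source.
raw_images = [np.random.rand(32, 32, 3).astype(np.float32) for _ in range(8)]

# A single placeholder; each image is cropped/padded as it is fed in, so the
# unresized batch never has to live inside TensorFlow.
image_in = tf.placeholder(tf.float32, shape=[None, None, 3])
resized = tf.image.resize_image_with_crop_or_pad(image_in, 24, 24)

with tf.Session() as sess:
    batch = np.stack([sess.run(resized, feed_dict={image_in: img})
                      for img in raw_images])
```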
(c) You could use the tf.control_flow_ops.While op to iterate through the full batch and build up the result in a tf.Variable. Particularly if you take advantage of the parallel execution permitted by While, this is likely to be the fastest approach.
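A sketch of option (c) using tf.while_loop (the public wrapper around the While control-flow op); the answer suggests accumulating into a tf.Variable, but this version collects per-iteration results in a TensorArray instead, and the batch and target sizes are assumed:

```python
import numpy as np
import tensorflow as tf

# Assumed: the batch is already a tensor in the graph.
images = tf.constant(np.random.rand(8, 32, 32, 3).astype(np.float32))
n = tf.shape(images)[0]
ta = tf.TensorArray(dtype=images.dtype, size=n)

def body(i, ta):
    # Crop/pad one image per loop iteration and append it to the accumulator.
    cropped = tf.image.resize_image_with_crop_or_pad(images[i], 24, 24)
    return i + 1, ta.write(i, cropped)

_, ta = tf.while_loop(lambda i, _: i < n, body, [0, ta],
                      parallel_iterations=10)  # iterations may run in parallel
batch = ta.stack()  # shape (8, 24, 24, 3)

with tf.Session() as sess:
    print(sess.run(batch).shape)
```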
I'd probably go for option (c), unless you want to minimize memory use, in which case filtering on the way in (option (b)) would be a better choice.