Question
The CNN CIFAR-10 tutorial (from the TensorFlow tutorials) gives an example of using the low-level API to read data in separate threads while training a model (with multiple GPUs). Is it possible to use the high-level Estimator API together with low-level threading support and multi-/single-GPU training?
I am looking for a way to combine both:
A custom Estimator from the high-level API, as described at https://www.tensorflow.org/extend/estimators
An input_fn implemented as a queue, giving the same functionality as the Coordinator-based reading described at https://www.tensorflow.org/programmers_guide/reading_data
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
It is not obvious to me how to combine them!
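For reference, here is a minimal sketch of that low-level pattern from the reading_data guide (TF 1.x; the file pattern data/train-*.tfrecord and the feature spec are made up for illustration):

import tensorflow as tf

# Queue-based reading: these ops register QueueRunner objects in the graph.
filename_queue = tf.train.string_input_producer(
    tf.train.match_filenames_once("data/train-*.tfrecord"))
reader = tf.TFRecordReader()
_, serialized = reader.read(filename_queue)
example = tf.parse_single_example(
    serialized,
    features={"feature": tf.FixedLenFeature([10], tf.float32),
              "label": tf.FixedLenFeature([], tf.int64)})
features, labels = tf.train.shuffle_batch(
    [example["feature"], example["label"]],
    batch_size=32, capacity=1000, min_after_dequeue=500)

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(),
              tf.local_variables_initializer()])
    # This is the part I want to move inside an Estimator's input_fn:
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        for _ in range(1000):
            sess.run([features, labels])  # a real training step would go here
    finally:
        coord.request_stop()
        coord.join(threads)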
Answer 1:
I have pushed some code here. It supports input_fn as a queue when using an Estimator: the high-level Estimator API with low-level threading support and multi-/single-GPU training. It is also easy to customize the code to whatever you need.
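As a rough sketch of how the pieces fit together (this is not the code from the repository above; the file pattern, feature spec, and model_fn below are hypothetical): the queue ops created inside input_fn register QueueRunners in the graph, and Estimator.train() runs under a MonitoredSession that starts those runners and handles the Coordinator for you, so no explicit tf.train.start_queue_runners() call is needed.

import tensorflow as tf

def input_fn():
    # Queue-based pipeline; string_input_producer and shuffle_batch both
    # add QueueRunners that the Estimator's MonitoredSession will start.
    filename_queue = tf.train.string_input_producer(
        tf.train.match_filenames_once("data/train-*.tfrecord"))
    reader = tf.TFRecordReader()
    _, serialized = reader.read(filename_queue)
    example = tf.parse_single_example(
        serialized,
        features={"feature": tf.FixedLenFeature([10], tf.float32),
                  "label": tf.FixedLenFeature([], tf.int64)})
    features, labels = tf.train.shuffle_batch(
        [example["feature"], example["label"]],
        batch_size=32, capacity=1000, min_after_dequeue=500)
    return {"x": features}, labels

def model_fn(features, labels, mode):
    # Toy training-only model, just to show the wiring.
    logits = tf.layers.dense(features["x"], 10)
    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
    train_op = tf.train.AdamOptimizer().minimize(
        loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

estimator = tf.estimator.Estimator(model_fn=model_fn,
                                   model_dir="/tmp/queue_estimator")
estimator.train(input_fn=input_fn, steps=1000)

For multi-GPU training, the same kind of input_fn can feed a model_fn that splits each batch across GPU towers, as the CIFAR-10 tutorial does.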
Source: https://stackoverflow.com/questions/42529598/estimator-with-coordinator-as-an-input-function-for-reading-input-data-in-distri