I have a standard TensorFlow Estimator with some model and want to run it on multiple GPUs instead of just one. How can this be done using data parallelism?
I searched around, and I think this is all you need:
Video: https://www.youtube.com/watch?v=bRMGoPqsn20
API docs: https://www.tensorflow.org/api_docs/python/tf/distribute/Strategy
Explained: https://medium.com/tensorflow/multi-gpu-training-with-estimators-tf-keras-and-tf-data-ba584c3134db
import tensorflow as tf

NUM_GPUS = 8
# MirroredStrategy does in-graph data parallelism: the model is replicated
# on each GPU and gradients are aggregated across the replicas.
dist_strategy = tf.contrib.distribute.MirroredStrategy(num_gpus=NUM_GPUS)
# Passing the strategy via RunConfig makes Estimator training distributed.
config = tf.estimator.RunConfig(train_distribute=dist_strategy)
estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir=model_dir, config=config)
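You then train the estimator as usual; the strategy handles the device placement. Here is a minimal sketch, where model_fn is your existing model function and input_fn is a hypothetical placeholder pipeline (replace it with your real data):

import numpy as np

def input_fn():
    # Toy data just to make the sketch runnable.
    features = np.random.rand(1000, 10).astype(np.float32)
    labels = np.random.randint(2, size=1000).astype(np.int32)
    dataset = tf.data.Dataset.from_tensor_slices((features, labels))
    return dataset.shuffle(1000).batch(64).repeat()

estimator.train(input_fn=input_fn, steps=1000)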
UPDATED
With TF 2.x and Keras you can use tf.distribute.MirroredStrategy directly (see https://www.tensorflow.org/tutorials/distribute/keras).
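A minimal sketch of the TF 2.x approach from that tutorial; the model and data below are placeholders, the key point is building the model inside strategy.scope():

import numpy as np
import tensorflow as tf

# By default, MirroredStrategy uses all GPUs visible to the process.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Create and compile the model inside the scope so its variables
# are mirrored across the devices.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# model.fit splits each (global) batch across the GPUs automatically.
x = np.random.rand(1024, 10).astype("float32")
y = np.random.rand(1024, 1).astype("float32")
model.fit(x, y, batch_size=256, epochs=2)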