How to run Tensorflow Estimator on multiple GPUs with data parallelism

青春惊慌失措 2021-01-31 06:17

I have a standard TensorFlow Estimator with some model and want to run it on multiple GPUs instead of just one. How can this be done with data parallelism?

I searched

5 Answers
  •  借酒劲吻你
    2021-01-31 06:49

    I think this is all you need.

    Link: https://www.youtube.com/watch?v=bRMGoPqsn20

    More Details: https://www.tensorflow.org/api_docs/python/tf/distribute/Strategy

    Explained: https://medium.com/tensorflow/multi-gpu-training-with-estimators-tf-keras-and-tf-data-ba584c3134db

    import tensorflow as tf

    NUM_GPUS = 8
    # MirroredStrategy copies the model onto each GPU and splits every batch across them (TF 1.x contrib API)
    dist_strategy = tf.contrib.distribute.MirroredStrategy(num_gpus=NUM_GPUS)
    config = tf.estimator.RunConfig(train_distribute=dist_strategy)
    estimator = tf.estimator.Estimator(model_fn=model_fn, model_dir=model_dir, config=config)
    

    UPDATED

    With TF 2.0 and Keras you can use tf.distribute.MirroredStrategy directly, as described in https://www.tensorflow.org/tutorials/distribute/keras (a rough sketch follows below).
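
    Something along these lines should work; this is only a minimal sketch of that tutorial's approach, with a toy Keras model and random data standing in for whatever you actually train:

    import numpy as np
    import tensorflow as tf

    # MirroredStrategy replicates the model across all visible GPUs
    strategy = tf.distribute.MirroredStrategy()
    print("Number of devices:", strategy.num_replicas_in_sync)

    # Build and compile the model inside the strategy scope so its
    # variables are mirrored onto every GPU
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mse")

    # model.fit splits each batch across the replicas (toy random data for illustration)
    x = np.random.random((256, 10)).astype("float32")
    y = np.random.random((256, 1)).astype("float32")
    model.fit(x, y, batch_size=64, epochs=1)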
