How to run Tensorflow Estimator on multiple GPUs with data parallelism

青春惊慌失措 2021-01-31 06:17

I have a standard TensorFlow Estimator with some model, and I want to run it on multiple GPUs instead of just one. How can this be done using data parallelism?

I searched but could not find a clear example.

5 Answers
  •  心在旅途
    2021-01-31 07:14

    The standard example is: https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/contrib/learn/python/learn/estimators/estimator.py

    One way to run it data-parallel is to loop over the available GPU devices, send a chunk of your batch to a copy of your model on each device (all done within your model_fn), and then merge the results, e.g. by averaging the per-tower losses or gradients.
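    The split-and-merge pattern described above can be sketched framework-agnostically; this is a minimal illustration, not TensorFlow API code. In a real model_fn, each replica call would run under `tf.device('/gpu:%d' % i)` and the merge step would average per-tower gradients before applying them. The `model_replica` function here is a hypothetical stand-in for one copy of the model.

    ```python
    def split_batch(batch, num_shards):
        """Split a batch (a list of examples) into num_shards roughly equal chunks."""
        shard_size = (len(batch) + num_shards - 1) // num_shards
        return [batch[i * shard_size:(i + 1) * shard_size]
                for i in range(num_shards)]

    def model_replica(chunk):
        """Hypothetical stand-in for one model copy: returns a per-chunk loss."""
        return sum(chunk) / len(chunk)

    def data_parallel_loss(batch, num_gpus):
        """Send each chunk to a replica, then merge by averaging the tower losses."""
        shards = split_batch(batch, num_gpus)
        tower_losses = [model_replica(s) for s in shards if s]
        return sum(tower_losses) / len(tower_losses)
    ```

    For example, `data_parallel_loss([1, 2, 3, 4], num_gpus=2)` splits the batch into `[1, 2]` and `[3, 4]`, computes a loss on each "device", and averages the two results.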
