Simple TensorFlow example loading one copy of a model onto each GPU that's available


Question


I'm looking at porting from a different production machine learning framework to TensorFlow. In our current system, for both training and inference, we load a copy of our model onto every GPU on the machine.

I would like to keep this way of load-balancing for now. Where can I find a simple example of loading one copy of a TF model onto each GPU that's available on a machine?


Answer 1:


Here's an example from https://github.com/rafaljozefowicz/lm/blob/master/language_model.py#L21

You wrap your model-creation code in a _forward function and then call it once for each GPU:

    losses, tower_grads = [], []  # accumulated across towers (initialized earlier in the repo)
    for i in range(hps.num_gpus):
        # Pin this tower's ops to GPU i (assign_to_gpu keeps variables on
        # ps_device) and reuse the variables after the first tower, so every
        # copy shares a single set of weights.
        with tf.device(assign_to_gpu(i, ps_device)), \
             tf.variable_scope(tf.get_variable_scope(), reuse=True if i > 0 else None):
            loss = self._forward(i, xs[i], ys[i], ws[i])
            losses += [loss]
            if mode == "train":
                # Only the last tower emits summaries.
                cur_grads = self._backward(loss, summaries=(i == hps.num_gpus - 1))
                tower_grads += [cur_grads]

    # The model's loss is the average of the per-tower losses.
    self.loss = tf.add_n(losses) / len(losses)
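
For a more self-contained starting point, here is a minimal sketch of the same tower pattern, assuming the TF 1.x graph API (tf.compat.v1 under TF 2). The toy build_tower model, the NUM_GPUS constant, and the synthetic batches are hypothetical stand-ins for your own code, not part of the linked repository:

    import tensorflow as tf  # assumes TensorFlow 1.x

    NUM_GPUS = 2  # hypothetical; set to the number of GPUs on your machine

    def build_tower(x, y):
        # Toy linear model standing in for your real model-creation code.
        w = tf.get_variable("w", shape=[4, 1])
        b = tf.get_variable("b", shape=[1], initializer=tf.zeros_initializer())
        pred = tf.matmul(x, w) + b
        return tf.reduce_mean(tf.square(pred - y))

    tower_losses = []
    for i in range(NUM_GPUS):
        # One copy of the graph per GPU; reuse variables after the first
        # tower so all copies share the same weights.
        with tf.device("/gpu:%d" % i), \
             tf.variable_scope(tf.get_variable_scope(), reuse=True if i > 0 else None):
            x = tf.random_normal([32, 4])  # synthetic per-tower batch
            y = tf.random_normal([32, 1])
            tower_losses.append(build_tower(x, y))

    total_loss = tf.add_n(tower_losses) / len(tower_losses)

    # allow_soft_placement lets the graph fall back to CPU on machines
    # with fewer GPUs than NUM_GPUS.
    with tf.Session(config=tf.ConfigProto(allow_soft_placement=True)) as sess:
        sess.run(tf.global_variables_initializer())
        print(sess.run(total_loss))

Note that reusing variables across towers means the weights exist only once. If you instead want fully independent copies per GPU (e.g., for the inference load-balancing described in the question), give each tower its own variable scope (say, "tower_%d" % i) and skip the reuse.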


Source: https://stackoverflow.com/questions/46776824/simple-tensorflow-example-loading-one-copy-of-a-model-onto-each-gpu-thats-avail
