Train multiple Keras/TensorFlow models on different GPUs simultaneously


Question


I would like to train multiple models on multiple GPUs simultaneously from within a Jupyter notebook. I am working on a node with 4 GPUs and would like to assign one GPU to each model, training 4 different models at the same time. Right now, I select a GPU for a notebook like this (e.g.):

import os
os.environ['CUDA_VISIBLE_DEVICES'] = '1'  # expose only GPU 1 to this notebook

def model(...):
    ....

model.fit(...)

in four different notebooks. The results and the output of the fitting procedure are then spread across four different notebooks, whereas running the models sequentially in one notebook takes a lot of time. How do you assign GPUs to individual functions and run them in parallel?
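What I have in mind is roughly the following sketch (untested; build_model and load_data are placeholders for my actual model and data code), where each worker process masks the GPUs before TensorFlow is imported:

import multiprocessing as mp
import os

def train_worker(gpu_id, config):
    # Hide all but one GPU before TensorFlow is imported in this process.
    os.environ['CUDA_VISIBLE_DEVICES'] = str(gpu_id)
    import tensorflow as tf  # imported only after the GPUs have been masked

    model = build_model(config)   # placeholder: build and compile the model
    x, y = load_data(config)      # placeholder: load this model's training data
    model.fit(x, y, epochs=10)
    model.save(f'model_gpu{gpu_id}.h5')

if __name__ == '__main__':
    configs = [...]  # one configuration per model
    workers = [mp.Process(target=train_worker, args=(i, cfg))
               for i, cfg in enumerate(configs)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

But I am not sure whether this is the intended way to do it, or whether it plays well with Jupyter.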


Answer 1:


I recommend using TensorFlow device scopes, like so:

import tensorflow as tf

with tf.device('/gpu:0'):
  model1.fit()
with tf.device('/gpu:1'):
  model2.fit()
with tf.device('/gpu:2'):
  model3.fit()
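Note that model.fit() blocks until training finishes, so the calls above still run one after another. To actually overlap them from a single process, one option (sketched here under the assumption that models, datasets and labels are lists of already compiled Keras models and their training arrays) is to run each fit in its own thread, each pinned to a different GPU:

import threading
import tensorflow as tf

def fit_on_gpu(model, data, targets, gpu_id):
    # Pin this model's ops to a single GPU; fit() blocks, so each call
    # gets its own thread to let the trainings overlap.
    with tf.device('/gpu:%d' % gpu_id):
        model.fit(data, targets, epochs=10)

threads = [threading.Thread(target=fit_on_gpu, args=(m, x, y, i))
           for i, (m, x, y) in enumerate(zip(models, datasets, labels))]
for t in threads:
    t.start()
for t in threads:
    t.join()

Alternatively, each model can be trained in its own process with CUDA_VISIBLE_DEVICES set per process, as in the original per-notebook setup, which avoids sharing one TensorFlow runtime across all GPUs.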


Source: https://stackoverflow.com/questions/50992771/train-multiple-keras-tensorflow-models-on-different-gpus-simultaneously
