How to do multi GPU training with Keras?

Submitted by 只愿长相守 on 2019-12-13 11:58:20

Question


I want my model to run on multiple GPUs, sharing parameters but training on different batches of data.

Can I do something like that with model.fit()? Is there any other alternative?


Answer 1:


Keras now has (as of v2.0.9) built-in support for data parallelism across multiple GPUs, using keras.utils.multi_gpu_model.

Currently it supports only the TensorFlow backend.

A good example is in the Keras FAQ (docs): https://keras.io/getting-started/faq/#how-can-i-run-a-keras-model-on-multiple-gpus

It is also covered here: https://datascience.stackexchange.com/a/25737
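For illustration, here is a minimal sketch of the pattern from the linked FAQ, assuming two GPUs are available and the TensorFlow backend is in use; the gpus=2 value and the toy model are placeholders:

    import numpy as np
    import tensorflow as tf
    from keras.models import Sequential
    from keras.layers import Dense
    from keras.utils import multi_gpu_model

    # Build the template model on the CPU so its weights live in host memory.
    with tf.device('/cpu:0'):
        model = Sequential([
            Dense(64, activation='relu', input_shape=(100,)),
            Dense(10, activation='softmax'),
        ])

    # Replicate the model on 2 GPUs. Each incoming batch is split into
    # 2 sub-batches that run in parallel, and the results are merged on
    # the CPU, so all replicas share the same parameters.
    parallel_model = multi_gpu_model(model, gpus=2)
    parallel_model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

    # Dummy data; with batch_size=128, each GPU processes 64 samples per step.
    x = np.random.random((1024, 100))
    y = np.random.random((1024, 10))
    parallel_model.fit(x, y, epochs=2, batch_size=128)

Note that you compile and fit the returned parallel model, not the original template, which answers the question directly: model.fit() works unchanged once the model has been wrapped.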




Answer 2:


Try the make_parallel function from https://github.com/kuza55/keras-extras/blob/master/utils/multi_gpu.py (it works only with the TensorFlow backend).
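A rough usage sketch, assuming you have saved multi_gpu.py from that repo next to your script so it can be imported (the toy model and the GPU count of 2 are placeholders):

    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense
    # Assumes multi_gpu.py from the linked repo is on the import path.
    from multi_gpu import make_parallel

    model = Sequential([
        Dense(64, activation='relu', input_shape=(100,)),
        Dense(10, activation='softmax'),
    ])

    # make_parallel replicates the model on 2 GPUs and splits each batch
    # between them; keep batch_size divisible by the GPU count.
    model = make_parallel(model, 2)
    model.compile(loss='categorical_crossentropy', optimizer='sgd')

    x = np.random.random((1024, 100))
    y = np.random.random((1024, 10))
    model.fit(x, y, epochs=2, batch_size=128)

This predates keras.utils.multi_gpu_model and follows the same data-parallel idea; if you are on Keras v2.0.9 or later, the built-in function in Answer 1 is the simpler choice.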



Source: https://stackoverflow.com/questions/45166247/how-to-do-multi-gpu-training-with-keras
