I want my model to run on multiple GPUs, sharing parameters but with different batches of data. Can I do something like that with model.fit()? Is there any other alternative?
Keras now has (as of v2.0.9) built-in support for data parallelism across multiple GPUs, using keras.utils.multi_gpu_model: the model's parameters are replicated on each device, and each replica processes a different slice of every batch. Currently, it only supports the TensorFlow backend.
Good example in the docs: https://keras.io/getting-started/faq/#how-can-i-run-a-keras-model-on-multiple-gpus
Also covered here: https://datascience.stackexchange.com/a/25737
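For reference, a minimal sketch along the lines of the linked FAQ example, assuming 2 GPUs are available (the Xception model and the random data are just stand-ins):

```python
import numpy as np
import tensorflow as tf
from keras.applications import Xception
from keras.utils import multi_gpu_model

# Build the template model on the CPU so its weights live in host
# memory and can be shared by every GPU replica.
with tf.device('/cpu:0'):
    model = Xception(weights=None, input_shape=(299, 299, 3), classes=10)

# Replicate the model on 2 GPUs; each replica processes a slice of
# every batch, and the outputs are merged back on the CPU.
parallel_model = multi_gpu_model(model, gpus=2)
parallel_model.compile(loss='categorical_crossentropy', optimizer='rmsprop')

# Dummy data just to show the fit() call; with batch_size=64 and
# gpus=2, each GPU sees 32 samples per step.
x = np.random.random((256, 299, 299, 3))
y = np.random.random((256, 10))
parallel_model.fit(x, y, epochs=2, batch_size=64)
```

Note that for checkpointing you should call save()/save_weights() on the template model (the one you passed to multi_gpu_model), not on the parallel model.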
Alternatively, try the make_parallel function in https://github.com/kuza55/keras-extras/blob/master/utils/multi_gpu.py (it works only with the TensorFlow backend).
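A rough usage sketch, assuming you have copied multi_gpu.py from that repo onto your import path and have 2 GPUs (the tiny Sequential model is just a placeholder):

```python
from keras.models import Sequential
from keras.layers import Dense
from multi_gpu import make_parallel  # from kuza55/keras-extras

model = Sequential([Dense(10, activation='softmax', input_shape=(784,))])

# make_parallel splits each incoming batch across the given number of
# GPUs and concatenates the outputs, so compile()/fit() work as usual.
# The batch size should be divisible by gpu_count.
model = make_parallel(model, gpu_count=2)
model.compile(loss='categorical_crossentropy', optimizer='sgd')
```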
Source: https://stackoverflow.com/questions/45166247/how-to-do-multi-gpu-training-with-keras