Configuration/Flags for TF-Slim across multiple GPU/Machines

Submitted by 心不动则不痛 on 2020-01-04 04:44:10

Question


I am curious whether there are any examples of running TF-Slim models (slim) using deployment/model_deploy.py across multiple GPUs on multiple machines. The documentation is pretty good, but I am missing a couple of pieces: specifically, what needs to go into worker_device and ps_device, and what additionally needs to be run on each machine?
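Not an authoritative answer, but for reference: TensorFlow device strings in a parameter-server setup conventionally follow the `/job:<name>/task:<index>/device:<type>:<n>` format, so `worker_device` and `ps_device` typically end up looking like the strings built below. This is a minimal sketch; the helper function names are my own, and the exact job names and indices depend on your cluster spec.

```python
# Hedged sketch: build TensorFlow device strings in the standard
# /job:<name>/task:<index>/device:<type>:<n> convention. The helper
# names below are illustrative, not part of any TF-Slim API.

def worker_device(task_index, gpu=0):
    """Device string for one worker replica, pinned to one GPU."""
    return "/job:worker/task:%d/device:GPU:%d" % (task_index, gpu)

def ps_device(task_index=0):
    """Device string for a parameter-server task (variables on CPU)."""
    return "/job:ps/task:%d/device:CPU:0" % task_index

# e.g. worker_device(1) -> "/job:worker/task:1/device:GPU:0"
# e.g. ps_device()      -> "/job:ps/task:0/device:CPU:0"
```

These strings are what a `tf.device(...)` scope (or a deployment config) would consume to place ops on workers and variables on parameter servers.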

An example like the one at the bottom of the distributed page would be awesome. https://www.tensorflow.org/how_tos/distributed/
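For reference, the pattern at the bottom of that how-to boils down to: every machine shares one cluster spec and starts a `tf.train.Server` with its own job name and task index. A minimal sketch of that setup follows; the host names and ports are placeholders I made up, not values from the question.

```python
# Hedged sketch of the distributed-TensorFlow (TF 1.x) pattern from the
# linked tutorial. Host names/ports below are placeholders.
cluster = {
    "ps": ["ps0.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
}

def server_args(job_name, task_index):
    """Arguments each machine would pass to tf.train.Server: the shared
    cluster spec plus this machine's own job name and task index."""
    return {"cluster": cluster, "job_name": job_name, "task_index": task_index}

# On each machine you would then start the server, e.g. on the
# first worker:
#   server = tf.train.Server(**server_args("worker", 0))
# and on the parameter server:
#   server = tf.train.Server(**server_args("ps", 0))
#   server.join()
```

The key point is that the same script runs on every machine; only `job_name` and `task_index` differ per host.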

Source: https://stackoverflow.com/questions/41229450/configuration-flags-for-tf-slim-across-multiple-gpu-machines
