I hit a problem when running TensorFlow inference on multiple-GPU setups.
Environment: Python 3.6.4; TensorFlow 1.8.0; Centos 7.3; 2 Nvidia Tesla P4
Here is a way to check.
The device names can differ depending on your setup, so first list the devices TensorFlow actually sees. Execute:
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
Then use the device name for your second GPU exactly as it appears in that output.
Each entry in the output has a name field, typically of the form "/device:GPU:0" or "/device:GPU:1".