TensorFlow multi-GPU parallel usage
Question: I want to use 8 GPUs in parallel, not sequentially. For example, when I execute this code:

```python
import tensorflow as tf

with tf.device('/gpu:0'):
    for i in range(10):
        print(i)

with tf.device('/gpu:1'):
    for i in range(10, 20):
        print(i)
```

I tried setting `CUDA_VISIBLE_DEVICES='0,1'` on the command line, but the result is the same. I want to see interleaved output like "0 10 1 11 2 3 12 ...", but the actual result is sequential: "0 1 2 3 4 5 ... 10 11 12 13 ...". How can I get the desired result?

Answer 1: ** I see an edit with the question so adding this
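One point worth noting about the question's code: the `print(i)` calls are plain Python statements that run on the host, one after another, regardless of `tf.device`. The `tf.device` context only controls where TensorFlow *operations* are placed; it does not parallelize ordinary Python loops. To actually interleave the two loops, the Python side itself must run concurrently. Below is a minimal sketch of that idea using the standard `threading` module (not TensorFlow itself; the `worker` function and the shared `results` list are illustrative names, not part of any TF API):

```python
import threading

results = []            # record what each "device loop" produced
lock = threading.Lock() # serialize access to results and stdout

def worker(start, stop):
    # stand-in for per-device work; in real code this would be
    # a session/graph launch or a tf.function call per GPU
    for i in range(start, stop):
        with lock:
            results.append(i)
            print(i)

# run both loops concurrently so their output can interleave
threads = [threading.Thread(target=worker, args=(0, 10)),
           threading.Thread(target=worker, args=(10, 20))]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The exact interleaving depends on the scheduler, so the output order is not deterministic, but both loops run concurrently rather than one after the other.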