Verifying that the GPU is actually used in Keras/TensorFlow, not just verified as present

北荒 · 2021-01-15 23:33

I've just built a deep learning rig (AMD 12-core Threadripper; GeForce RTX 2080 Ti; 64 GB RAM). I originally wanted to install cuDNN and CUDA on Ubuntu 19.0, but the install…

2 Answers
  •  小鲜肉 (OP) · 2021-01-16 00:30

    Based on the TensorFlow documentation on GPU device placement:

    If a TensorFlow operation has both CPU and GPU implementations, 
    by default, the GPU devices will be given priority when the operation is assigned to a device.
    For example, tf.matmul has both CPU and GPU kernels. 
    On a system with devices CPU:0 and GPU:0, the GPU:0 device will be selected to run tf.matmul unless you explicitly request running it on another device.
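
    Before checking where individual ops run, it can help to confirm that this TensorFlow build can see the GPU at all. A minimal sketch, assuming a TF 2.x install:

    import tensorflow as tf

    # List the GPUs visible to TensorFlow; an empty list means the
    # CUDA/cuDNN setup is not usable by this TensorFlow build.
    print("Visible GPUs:", tf.config.list_physical_devices('GPU'))

    # True only if this TensorFlow binary was compiled with CUDA support.
    print("Built with CUDA:", tf.test.is_built_with_cuda())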
    

    Logging device placement

    import tensorflow as tf

    tf.debugging.set_log_device_placement(True)
    
    # Create some tensors
    a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
    b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    c = tf.matmul(a, b)
    
    print(c)
    
    Example Result
    Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0
    tf.Tensor(
    [[22. 28.]
     [49. 64.]], shape=(2, 2), dtype=float32)
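
    Since the question is about Keras, the same placement log also works when fitting a model. A minimal sketch, assuming a TF 2.x / tf.keras setup; the tiny Dense model and the random data are arbitrary placeholders for illustration:

    import numpy as np
    import tensorflow as tf

    tf.debugging.set_log_device_placement(True)

    # A toy model on random data; with a working GPU setup the placement
    # log should show ops such as MatMul on .../device:GPU:0 during fit().
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation='relu', input_shape=(10,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer='adam', loss='mse')

    x = np.random.rand(256, 10).astype('float32')
    y = np.random.rand(256, 1).astype('float32')
    model.fit(x, y, epochs=1, batch_size=32, verbose=0)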
    

    Manual device placement

    import tensorflow as tf

    tf.debugging.set_log_device_placement(True)

    # Explicitly place the input tensors on the GPU
    with tf.device('/GPU:0'):
      a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
      b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    
    c = tf.matmul(a, b)
    print(c)
    
    Example Result: 
    Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0
    tf.Tensor(
    [[22. 28.]
     [49. 64.]], shape=(2, 2), dtype=float32)
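
    As a rougher sanity check, you can time the same large matmul on the CPU and on the GPU; on an RTX 2080 Ti the GPU run should be far faster. A sketch, assuming both '/CPU:0' and '/GPU:0' exist on the machine (the 4000x4000 size is arbitrary):

    import time
    import tensorflow as tf

    def timed_matmul(device):
        with tf.device(device):
            a = tf.random.uniform((4000, 4000))
            b = tf.random.uniform((4000, 4000))
            start = time.time()
            c = tf.matmul(a, b)
            c.numpy()  # copy back to host, forcing the op to finish
        return time.time() - start

    # The first GPU call includes one-time initialisation, so run it twice
    # and look at the second number.
    print("CPU:", timed_matmul('/CPU:0'), "s")
    print("GPU (warm-up):", timed_matmul('/GPU:0'), "s")
    print("GPU:", timed_matmul('/GPU:0'), "s")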
    
