Tensorflow - GPU dedicated vs shared memory

Asked by 清酒与你 on 2021-01-22 01:48

Does Tensorflow use only dedicated GPU memory, or can it also use shared memory?

I also ran this:

from tensorflow.python.client import device_lib

print(device_lib.list_local_devices())

1 Answer
  • 2021-01-22 02:01

    In my experience, Tensorflow uses only the dedicated GPU memory, as described below. At that time, memory_limit = max dedicated memory - current dedicated memory usage (as observed in the Win10 Task Manager):

    from tensorflow.python.client import device_lib
    print(device_lib.list_local_devices())
    

    Output:

    physical_device_desc: "device: XLA_CPU device"
    , name: "/device:GPU:0"
    device_type: "GPU"
    memory_limit: 2196032718
    
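    As a quick sanity check on that number (my own arithmetic, not part of the original output): memory_limit is reported in bytes, and it comes out well under the card's 3 GiB of dedicated memory, consistent with memory_limit = max dedicated memory - current usage.

    # memory_limit in bytes, taken from the device_lib output above
    memory_limit = 2196032718
    print(memory_limit / 2**30)  # ~2.05 GiB, noticeably less than the card's 3 GiB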

    To verify this, I ran a single large task (the Tensorflow 2 benchmark from https://github.com/aime-team/tf2-benchmarks) on a GTX 1060 3GB with Tensorflow 2.3.0; it fails with the "Resource exhausted" error below.

    2021-01-20 01:50:53.738987: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties: 
    pciBusID: 0000:01:00.0 name: GeForce GTX 1060 3GB computeCapability: 6.1
    coreClock: 1.7085GHz coreCount: 9 deviceMemorySize: 3.00GiB deviceMemoryBandwidth: 178.99GiB/s
    
    Limit:                      2196032718
    InUse:                      1997814016
    MaxInUse:                   2155556352
    NumAllocs:                        1943
    MaxAllocSize:                551863552
    Reserved:                            0
    PeakReserved:                        0
    LargestFreeBlock:                    0
    
    2021-01-20 01:51:21.393175: W tensorflow/core/framework/op_kernel.cc:1767] OP_REQUIRES failed at conv_ops.cc:539 : Resource exhausted: OOM when allocating tensor with shape[64,256,56,56] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
    Traceback (most recent call last):
    
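    A rough calculation (mine, not from the benchmark log) shows why this particular allocation fails: the tensor in the OOM message alone needs about 206 MB, slightly more than the headroom left between InUse and Limit in the allocator statistics above.

    # Tensor from the OOM message: shape [64, 256, 56, 56], float32 (4 bytes/element)
    tensor_bytes = 64 * 256 * 56 * 56 * 4
    print(tensor_bytes)             # 205520896 bytes, ~196 MiB
    print(2196032718 - 1997814016)  # 198218702 bytes of headroom under the limit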

    I have also tried the same with multiple small tasks. When multiple tasks run in different Jupyter kernels, it tries to use the shared GPU memory, but the newer task ultimately fails (one way to avoid this, capping each process's memory, is sketched at the end of this answer).

    For example, with two similar Xception models:

    Task 1: runs without an error

    Task 2: fails with the error below

    UnknownError:  Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above.
         [[node xception/block1_conv1/Conv2D (defined at <ipython-input-25-0c5fe80db9f1>:3) ]] [Op:__inference_predict_function_5303]
    
    Function call stack:
    predict_function
    

    GPU memory usage at the time of the failure (note the use of shared memory at the start of Task 2; screenshot omitted)
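    For completeness: a common way to let several processes share one card is to cap each process's slice of the dedicated memory when it starts. The snippet below is a minimal sketch using the tf.config.experimental API as it exists in Tensorflow 2.3; the 1024 MiB cap is an arbitrary example value, not something I tested on this setup.

    import tensorflow as tf

    # Cap this process at 1024 MiB of dedicated GPU memory so that a
    # second process can still allocate its own slice of the same card.
    gpus = tf.config.experimental.list_physical_devices('GPU')
    if gpus:
        tf.config.experimental.set_virtual_device_configuration(
            gpus[0],
            [tf.config.experimental.VirtualDeviceConfiguration(memory_limit=1024)])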
