How to check if PyTorch is using the GPU?

既然无缘 2020-12-04 05:01

I would like to know if PyTorch is using my GPU. It's possible to detect with nvidia-smi if there is any activity from the GPU during the process, but I want something written in a Python script.

10 Answers
  • 2020-12-04 05:22

    As it hasn't been proposed here, I'm adding a method using torch.device, which is quite handy, also when initializing tensors on the correct device.

    import torch

    # setting device on GPU if available, else CPU
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    print('Using device:', device)
    print()

    # additional info when using CUDA
    if device.type == 'cuda':
        print(torch.cuda.get_device_name(0))
        print('Memory Usage:')
        print('Allocated:', round(torch.cuda.memory_allocated(0)/1024**3, 1), 'GB')
        print('Cached:   ', round(torch.cuda.memory_reserved(0)/1024**3, 1), 'GB')
    

    Edit: torch.cuda.memory_cached has been renamed to torch.cuda.memory_reserved, so use memory_cached on older versions.

    Output:

    Using device: cuda
    
    Tesla K80
    Memory Usage:
    Allocated: 0.3 GB
    Cached:    0.6 GB
    

    As mentioned above, using device it is possible to:

    • move tensors to the respective device:

        torch.rand(10).to(device)

    • create a tensor directly on the device:

        torch.rand(10, device=device)

    This makes switching between CPU and GPU comfortable without changing the actual code.
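
    For example, a minimal device-agnostic sketch (the nn.Linear model here is just illustrative):

    import torch
    import torch.nn as nn

    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

    model = nn.Linear(10, 2).to(device)   # move the model's parameters to the device
    x = torch.rand(4, 10, device=device)  # create the input directly on the same device
    y = model(x)                          # runs on the GPU if available, otherwise on the CPU
    print(y.device)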


    Edit:

    As there have been some questions and confusion about cached and allocated memory, I'm adding some additional information about it:

    • torch.cuda.max_memory_cached(device=None) (renamed to torch.cuda.max_memory_reserved in newer versions)

      Returns the maximum GPU memory managed by the caching allocator in bytes for a given device.

    • torch.cuda.memory_allocated(device=None)

      Returns the current GPU memory usage by tensors in bytes for a given device.


    You can either hand over a device directly, as specified further above in the post, or leave it None and it will use the current_device().
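
    For illustration, both calls below report the same number when cuda:0 is the current device (a minimal sketch):

    import torch

    if torch.cuda.is_available():
        print(torch.cuda.memory_allocated(torch.device('cuda:0')))  # explicit device
        print(torch.cuda.memory_allocated())                        # device=None uses the current device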


    Additional note: Old graphics cards with CUDA compute capability 3.0 or lower may be visible but cannot be used by PyTorch!
    Thanks to hekimgil for pointing this out! - "Found GPU0 GeForce GT 750M which is of cuda capability 3.0. PyTorch no longer supports this GPU because it is too old. The minimum cuda capability that we support is 3.5."
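
    You can check the compute capability of your card programmatically; a minimal sketch:

    import torch

    if torch.cuda.is_available():
        major, minor = torch.cuda.get_device_capability(0)  # e.g. (3, 0) for a GT 750M
        print(f'Compute capability: {major}.{minor}')
        if (major, minor) < (3, 5):
            print('This GPU is too old for recent PyTorch builds')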

  • 2020-12-04 05:23

    To check if there is a GPU available:

    torch.cuda.is_available()
    

    If the above function returns False,

    1. you either have no GPU,
    2. or the Nvidia drivers have not been installed, so the OS does not see the GPU,
    3. or the GPU is being hidden by the environment variable CUDA_VISIBLE_DEVICES. When the value of CUDA_VISIBLE_DEVICES is -1, all your devices are being hidden. You can check that value in code with os.environ['CUDA_VISIBLE_DEVICES'], as shown in the sketch after this list.
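
    A minimal sketch of that check (the variable may be unset, hence .get()):

    import os
    import torch

    if not torch.cuda.is_available():
        print('CUDA_VISIBLE_DEVICES:', os.environ.get('CUDA_VISIBLE_DEVICES'))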

    If the above function returns True, that does not necessarily mean that you are using the GPU. In PyTorch you can allocate tensors to devices when you create them. By default, tensors get allocated to the CPU. To check where your tensor is allocated, do:

    # assuming that 'a' is a tensor created somewhere else
    a.device  # returns the device where the tensor is allocated
    

    Note that you cannot operate on tensors allocated on different devices. To see how to allocate a tensor to the GPU, see here: https://pytorch.org/docs/stable/notes/cuda.html
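
    A minimal sketch of moving a tensor to the GPU and why mixing devices fails:

    import torch

    if torch.cuda.is_available():
        a_cpu = torch.rand(3)
        a_gpu = a_cpu.to('cuda')  # copies the tensor to the GPU
        print(a_gpu.device)       # e.g. cuda:0
        # a_cpu + a_gpu           # would raise a RuntimeError: tensors are on different devices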

  • 2020-12-04 05:26

    Create a tensor on the GPU as follows:

    $ python
    >>> import torch
    >>> print(torch.rand(3,3).cuda()) 
    

    Do not quit; open another terminal and check whether the Python process is using the GPU with:

    $ nvidia-smi
    
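    If the script exits immediately, the process may disappear from nvidia-smi before you can check; a small sketch that keeps the tensor alive for a while (the 60-second sleep is arbitrary):

    import time
    import torch

    x = torch.rand(3, 3).cuda()  # allocates memory on the GPU
    print(x)
    time.sleep(60)               # keep the process alive so nvidia-smi can observe it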
  • 2020-12-04 05:30

    This is going to work:

    In [1]: import torch
    
    In [2]: torch.cuda.current_device()
    Out[2]: 0
    
    In [3]: torch.cuda.device(0)
    Out[3]: <torch.cuda.device at 0x7efce0b03be0>
    
    In [4]: torch.cuda.device_count()
    Out[4]: 1
    
    In [5]: torch.cuda.get_device_name(0)
    Out[5]: 'GeForce GTX 950M'
    
    In [6]: torch.cuda.is_available()
    Out[6]: True
    

    This tells me the GPU GeForce GTX 950M is being used by PyTorch.
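
    The same checks can be wrapped into a small helper; a sketch (the name print_gpu_info is illustrative):

    import torch

    def print_gpu_info():
        if not torch.cuda.is_available():
            print('CUDA is not available')
            return
        print('Device count:  ', torch.cuda.device_count())
        print('Current device:', torch.cuda.current_device())
        for i in range(torch.cuda.device_count()):
            print(f'Device {i}:', torch.cuda.get_device_name(i))

    print_gpu_info()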
