Valgrind and CUDA: Are reported leaks real?

Asked by 庸人自扰, 2020-12-15 22:23

I have a very simple CUDA component in my application. Valgrind reports a lot of leaks and still-reachables, all related to the cudaMalloc calls.

Are these leaks real?

5 Answers
  • 2020-12-15 23:01

    To add to scarl3tt's answer, this may be overly general for some applications, but if you want to use valgrind while ignoring most of the CUDA issues, use the option --suppressions=valgrind-cuda.supp, where valgrind-cuda.supp is a file containing the following rules:

    {
       alloc_libcuda
       Memcheck:Leak
       match-leak-kinds: reachable,possible
       fun:*alloc
       ...
       obj:*libcuda.so*
       ...
    }
    
    {
       alloc_libcufft
       Memcheck:Leak
       match-leak-kinds: reachable,possible
       fun:*alloc
       ...
       obj:*libcufft.so*
       ...
    }
    
    {
       alloc_libcudaart
       Memcheck:Leak
       match-leak-kinds: reachable,possible
       fun:*alloc
       ...
       obj:*libcudart.so*
       ...
    }
    
  • 2020-12-15 23:04

    Try using cuda-memcheck --leak-check full. cuda-memcheck is a suite of tools that provides Valgrind-like functionality for CUDA applications, and it is installed as part of the CUDA toolkit. The documentation is here: http://docs.nvidia.com/cuda/cuda-memcheck/

    Note that cuda-memcheck is not a direct replacement for valgrind and can't be used to detect host side memory leaks or buffer overflows.
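    To illustrate that limitation, here is a minimal, purely host-side example (my own sketch, not from the question): valgrind flags the unfreed malloc as "definitely lost", while it is entirely outside cuda-memcheck's scope:

    ```cpp
    #include <cstdlib>
    #include <cstring>
    #include <iostream>

    int main() {
        // Plain host allocation -- no CUDA involved.
        char* buf = static_cast<char*>(std::malloc(64));
        std::strcpy(buf, "host allocation");
        std::cout << buf << std::endl;
        // std::free(buf) intentionally omitted: valgrind reports this as
        // "definitely lost"; cuda-memcheck does not track host allocations.
        return 0;
    }
    ```

    Running valgrind on this should show the 64-byte leak in its summary; running cuda-memcheck on it should not.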

  • 2020-12-15 23:08

    Since I don't have 50 reputation, I can't leave a comment on @Vyas's answer.

    I find it strange that cuda-memcheck does not observe the CUDA memory leak here.

    I wrote a very simple program with a deliberate device-memory leak, but when using cuda-memcheck --leak-check full it reports no leak. Here it is:

    #include <cstdlib>
    #include <iostream>
    #include <cuda_runtime.h>
    
    using namespace std;
    
    int main(){
        float* cpu_data;
        float* gpu_data;
        int buf_size = 10 * sizeof(float);
    
        cpu_data = (float*)malloc(buf_size);
        for(int i=0; i<10; i++){
            cpu_data[i] = 1.0f * i;
        }
    
        cudaError_t cudaStatus = cudaMalloc(&gpu_data, buf_size);
    
        cudaMemcpy(gpu_data, cpu_data, buf_size, cudaMemcpyHostToDevice);
    
        free(cpu_data);
        //cudaFree(gpu_data);
    
        return 0;
    }
    

    Note the commented-out cudaFree, which, I believe, makes this program leak device memory. However, executing cuda-memcheck ./a.out gives:

    ========= CUDA-MEMCHECK
    ========= ERROR SUMMARY: 0 errors
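
    A likely explanation, going by the cuda-memcheck documentation: leak checking must be enabled with --leak-check full (the run above used plain cuda-memcheck ./a.out), and device allocations are only reported as leaked if they are still live when the CUDA context is destroyed, so the application needs to call cudaDeviceReset() before exiting. A sketch of the adjusted ending of the program above:

    ```cuda
        free(cpu_data);
        // gpu_data intentionally not freed

        // Destroy the CUDA context explicitly so that cuda-memcheck can
        // attribute any still-live device allocations as leaks:
        cudaDeviceReset();
        return 0;
    ```

    With that change, cuda-memcheck --leak-check full ./a.out should report the unfreed cudaMalloc allocation.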
    
  • 2020-12-15 23:13

    I wouldn't trust valgrind or any other leak detector (like VLD) with CUDA; I'm sure they weren't designed with GPU allocations in mind. I don't know whether Nvidia's Nsight has this capability these days (I haven't done GPU programming for almost six months), but it was the best tool I used for CUDA debugging and, to be honest, it was buggy as hell.

    The code you've posted shouldn't create a leak.

  • 2020-12-15 23:15

    It's a known issue that valgrind reports false positives for a lot of CUDA calls. The best way to avoid them is to use valgrind suppressions, which you can read all about here: http://valgrind.org/docs/manual/manual-core.html#manual-core.suppress

    For a jumpstart on something closer to your specific issue, there is an interesting post on the Nvidia dev forums with a link to a sample suppression-rule file: https://devtalk.nvidia.com/default/topic/404607/valgrind-3-4-suppressions-a-little-howto/
