Include a static CUDA library into a C++ project

难免孤独 2021-01-17 05:51

I have a templated static CUDA library which I want to include in a common C++ project. When I include the headers of the library, the compiler crashes and says it cannot r

1 Answer
  •  不知归路
    2021-01-17 06:47

    Here's a set of instructions that should help:

    A. Create library project:

    1. Select File...New...CUDA C/C++ Project
    2. Select Static Library...Empty Project and give the project a name (test8)
    3. Next...Next...Finish to finish creating the project
    4. Right-click the project name in the Project Explorer window, select New...Header File, and give it a name (test8lib.h)
    5. Edit test8lib.h (with the contents below) and save it
    6. Create another new header file for the CUDA template kernel (test8.cuh)
    7. Edit test8.cuh (with the contents below) and save it
    8. Create a new source file (test8.cu)
    9. Edit test8.cu (with the contents below) and save it
    10. Select Project...Build Project (libtest8.a is now built; an equivalent command line is sketched after the file listings below)

    test8lib.h:

    #ifndef TEST8LIB_H_
    #define TEST8LIB_H_
    
    void calc_square_vec_float(float *in_data, float *out_data, int size);
    
    
    #endif /* TEST8LIB_H_ */
    

    test8.cuh:

    #ifndef TEST8_CUH_
    #define TEST8_CUH_
    
    template <typename T> __global__ void squareVector(T *input, T *output, int size) {
        // one thread per element: square each input value
        int idx = threadIdx.x+blockDim.x*blockIdx.x;
        if (idx < size) output[idx]=input[idx]*input[idx];
    }
    
    
    #endif /* TEST8_CUH_ */
    

    test8.cu:

    #include "test8lib.h"
    #include "test8.cuh"
    #define nTPB 256
    
    // host-callable wrapper that instantiates the template kernel for float
    void calc_square_vec_float(float *in_data, float *out_data, int size){
        float *d_in_data, *d_out_data;
        // allocate device buffers and copy the input to the device
        cudaMalloc(&d_in_data,  size*sizeof(float));
        cudaMalloc(&d_out_data, size*sizeof(float));
        cudaMemcpy(d_in_data, in_data, size*sizeof(float), cudaMemcpyHostToDevice);
        // launch enough blocks of nTPB threads to cover all elements
        squareVector<<<(size+nTPB-1)/nTPB, nTPB>>>(d_in_data, d_out_data, size);
        // copy the result back and release the device buffers
        cudaMemcpy(out_data, d_out_data, size*sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(d_in_data);
        cudaFree(d_out_data);
    }
    

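    For reference, the equivalent library build can also be done outside the IDE on the command line. This is only a sketch of what the IDE does in step A.10; you may need to add an -arch option for your GPU:

    # compile test8.cu with nvcc and package the object into a static library
    nvcc -lib test8.cu -o libtest8.a
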
    B. Create main project:

    1. Select File...New...C++ Project...Empty Project...Linux GCC toolchain, and give it a name (test9)
    2. Next...Finish to finish creating the project
    3. Select File...New Source File...Default C++ source template, and give it a name (test9.cpp)
    4. Edit the file with the contents below and save it
    5. Add the include path: Project...Properties...Build...Settings...Tool Settings...GCC C++ Compiler...Includes...Include Paths...Add, and add the directory where test8lib.h is located
    6. Add the lib: Tool Settings...GCC C++ Linker...Libraries...Libraries...Add, and add the name of the previously built library (test8)
    7. Also add the CUDA runtime library (cudart)
    8. Add the lib path: Tool Settings...GCC C++ Linker...Libraries...Library Paths...Add, and add the path to the previously built library (e.g. /path/to/cuda-workspace/test8/Debug)
    9. Also add the path to cudart (e.g. /usr/local/cuda/lib64)
    10. Build Project (a command-line equivalent of these compile/link settings is sketched after test9.cpp below)
    11. Run Project

    test9.cpp:

    #include <iostream>
    #include <stdlib.h>
    #include "test8lib.h"
    #define DSIZE 4
    #define TEST_VAL 2.0f
    
    int main(){
        float *in, *out;
        in = (float *)malloc(DSIZE*sizeof(float));
        out = (float *)malloc(DSIZE*sizeof(float));
        // fill the input, run it through the library, and verify the squared result
        for (int i=0; i<DSIZE; i++) in[i] = TEST_VAL;
        calc_square_vec_float(in, out, DSIZE);
        for (int i=0; i<DSIZE; i++)
            if (out[i] != TEST_VAL*TEST_VAL) {
                std::cout << "mismatch at " << i << ": " << out[i] << std::endl;
                return 1;
            }
        std::cout << "Success!" << std::endl;
        return 0;
    }

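    For reference, steps 5 through 10 in part B correspond roughly to the following compile/link command. This is only a sketch; the include and library paths are the example paths from above and will differ on your machine:

    # compile test9.cpp, telling the compiler where test8lib.h lives and the linker where libtest8.a and libcudart live
    g++ test9.cpp -o test9 -I/path/to/cuda-workspace/test8 \
        -L/path/to/cuda-workspace/test8/Debug -ltest8 \
        -L/usr/local/cuda/lib64 -lcudart
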