How to deal with “DNN module was not built with CUDA backend; switching to CPU” warning in C++?


Question


I am trying to run YOLOv3 in Visual Studio 2019 using CUDA 10.2 with cuDNN v7.6.5 on Windows 10, on an NVIDIA GeForce 930M. Here is part of the code I used.

#include <fstream>
#include <sstream>
#include <iostream>
#include <vector>
#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>

using namespace cv;
using namespace dnn;
using namespace std;

int main()
{
    // Network input size expected by YOLOv3
    int inpWidth = 416;
    int inpHeight = 416;

    // Load names of classes
    vector<string> classes;
    string classesFile = "coco.names";
    ifstream ifs(classesFile.c_str());
    string line;
    while (getline(ifs, line)) classes.push_back(line);

    // Give the configuration and weight files for the model
    String modelConfiguration = "yolov3.cfg";
    String modelWeights = "yolov3.weights";

    // Load the network and request the CUDA backend/target
    Net net = readNetFromDarknet(modelConfiguration, modelWeights);
    net.setPreferableBackend(DNN_BACKEND_CUDA);
    net.setPreferableTarget(DNN_TARGET_CUDA);

    // Open the video file
    string inputFile = "vid.mp4";
    VideoCapture cap;
    cap.open(inputFile);

    // Get a frame from the video
    Mat frame, blob;
    cap >> frame;

    // Create a 4D blob from the frame
    blobFromImage(frame, blob, 1 / 255.0, Size(inpWidth, inpHeight), Scalar(0, 0, 0), true, false);

    // Set the input to the network
    net.setInput(blob);

    // Run the forward pass on the unconnected output layers
    vector<Mat> outs;
    net.forward(outs, net.getUnconnectedOutLayersNames());

    return 0;
}

Although I added $(CUDNN)\include;$(cudnn)\include; to Additional Include Directories in both C/C++ and Linker, added CUDNN_HALF;CUDNN; to C/C++ > Preprocessor Definitions, and added cudnn.lib; to Linker > Input, I still get this warning:

DNN module was not built with CUDA backend; switching to CPU

and it runs on the CPU instead of the GPU. Can anyone help me with this problem?
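
For reference, a minimal sketch like the following (not part of the project above) can confirm whether the OpenCV binaries actually being linked were built with CUDA support, which is what the warning refers to:

// Minimal sketch: check whether the linked OpenCV build was compiled with CUDA.
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/core/cuda.hpp>

int main()
{
    // The build summary lists "NVIDIA CUDA" and "cuDNN" sections when the
    // DNN CUDA backend was compiled in.
    std::cout << cv::getBuildInformation() << std::endl;

    // Returns 0 if OpenCV was built without CUDA support or no device is visible.
    std::cout << "CUDA-enabled devices: "
              << cv::cuda::getCudaEnabledDeviceCount() << std::endl;
    return 0;
}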


Answer 1:


I solved it by rebuilding OpenCV with CMake: first add the opencv_contrib modules, then rebuild it using Visual Studio. Make sure that WITH_CUDA, WITH_CUBLAS, WITH_CUDNN, OPENCV_DNN_CUDA, and BUILD_opencv_world are checked in CMake.
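
After rebuilding, a quick way to confirm that the new build really exposes the CUDA backend is to query the DNN module directly. A minimal sketch, assuming OpenCV 4.2 or newer where cv::dnn::getAvailableTargets is available:

// Minimal sketch: list the targets the DNN module reports for the CUDA backend.
#include <iostream>
#include <vector>
#include <opencv2/dnn.hpp>

int main()
{
    std::vector<cv::dnn::Target> targets =
        cv::dnn::getAvailableTargets(cv::dnn::DNN_BACKEND_CUDA);

    if (targets.empty())
    {
        std::cout << "CUDA backend not available in this OpenCV build." << std::endl;
    }
    else
    {
        // Typically DNN_TARGET_CUDA and, on capable GPUs, DNN_TARGET_CUDA_FP16.
        for (cv::dnn::Target t : targets)
            std::cout << "Available CUDA target id: " << static_cast<int>(t) << std::endl;
    }
    return 0;
}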




Answer 2:


I had a similar issue about a week ago, but I was using Python and TensorFlow. Although the language is different from C++, I got the same error. To fix it, I uninstalled CUDA 10.2 and downgraded to CUDA 10.1. From what I have found, there might be a dependency issue with CUDA, or, in your case, OpenCV may not yet support the latest version of CUDA.

EDIT

After some further research, it seems to be an issue with OpenCV rather than CUDA. Referencing this GitHub thread: if you installed OpenCV with CMake, remove the arch bin versions below 7 in the config file, then rebuild/reinstall OpenCV. However, if that doesn't work, another option is to remove the CUDA arch bin versions below 5.3 and rebuild.
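
If you adjust the arch bin list, a minimal sketch such as the following (assuming an OpenCV build with the CUDA module available) prints the compute capability of the local GPU, so you know which value needs to stay in CUDA_ARCH_BIN:

// Minimal sketch: query the compute capability of each visible CUDA device.
#include <iostream>
#include <opencv2/core/cuda.hpp>

int main()
{
    int count = cv::cuda::getCudaEnabledDeviceCount();
    for (int i = 0; i < count; ++i)
    {
        cv::cuda::DeviceInfo info(i);
        std::cout << info.name() << ": compute capability "
                  << info.majorVersion() << "." << info.minorVersion() << std::endl;
    }
    if (count == 0)
        std::cout << "No CUDA-enabled device visible to this OpenCV build." << std::endl;
    return 0;
}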



Source: https://stackoverflow.com/questions/61095996/how-to-deal-with-dnn-module-was-not-built-with-cuda-backend-switching-to-cpu
