How do I capture and process each and every frame of an image using the CImg library?

小蘑菇 2021-01-07 00:36

I'm working on a project based on real-time image processing using the CImg library on a Raspberry Pi.

I need to capture images at higher frame rates (say at least 30 fps),

1 Answer
  • 2021-01-07 00:49

    Updated Answer

    I have updated my original answer here to show how to copy the acquired data into a CImg structure, and also to show 2 worker threads that can then process the image while the main thread continues to acquire frames at full speed. It achieves 60 frames per second.

    I have not done any processing inside the worker threads because I don't know what you want to do. All I did was save the last frame to disk to show that the acquisition into a CImg is working. You could have 3 worker threads and pass one frame to each on a round-robin basis, or have each of 2 threads process half of every frame, or each of 3 threads process a third of every frame. You could also change the polled wakeups to use condition variables - see the sketch after the code below.

    #include <cstring>   // for std::memcpy()
    #include <ctime>
    #include <fstream>
    #include <iostream>
    #include <thread>
    #include <mutex>
    #include <raspicam/raspicam.h>
    
    // Don't want any X11 display by CImg
    #define cimg_display 0
    
    #include <CImg.h>
    
    using namespace cimg_library;
    using namespace std;
    
    #define NFRAMES     1000
    #define NTHREADS    2
    #define WIDTH       1280
    #define HEIGHT      960
    
    // Commands/status for the worker threads
    #define WAIT    0
    #define GO      1
    #define GOING   2
    #define EXIT    3
    #define EXITED  4
    volatile int command[NTHREADS];
    
    // Serialize access to cout
    std::mutex cout_mutex;
    
    // CImg initialisation
    // Create a 1280x960 greyscale (Y channel of YUV) image
    // Create a globally-accessible CImg for main and workers to access
    CImg<unsigned char> img(WIDTH,HEIGHT,1,1,128);
    
    ////////////////////////////////////////////////////////////////////////////////
    // worker thread - There will be 2 or more of these running in parallel with the
    //                 main thread. Do any image processing in here.
    ////////////////////////////////////////////////////////////////////////////////
    void worker (int id) {
    
       // If you need a "results" image of type CImg, create it here before entering
       // ... the main processing loop below - you don't want to do malloc()s in the
       // ... high-speed loop
       // CImg results...
    
       int wakeups=0;
    
       // Create a white colour for annotating the final image
       unsigned char white[] = { 255,255,255 };
    
       while(true){
          // Busy wait with 500us sleep - at worst we only lose 500us of processing time per frame
          while((command[id]!=GO)&&(command[id]!=EXIT)){
             std::this_thread::sleep_for(std::chrono::microseconds(500));
          }
          if(command[id]==EXIT){command[id]=EXITED;break;}
          wakeups++;
    
          // Process frame of data - access CImg structure here
          command[id]=GOING;
    
          // You need to add your processing in HERE - everything from
          // ... 9 PIXELS MATRIX GRAYSCALE VALUES to
          // ... THRESHOLDING CONDITION
    
          // Pretend to do some processing.
          // You need to delete the following "sleep_for" and "if(id==0...){...}"
          std::this_thread::sleep_for(std::chrono::milliseconds(2));
    
          if((id==0)&&(wakeups==NFRAMES)){
             // Annotate final image and save as PNG
             img.draw_text(100,100,"Hello World",white);
             img.save_png("result.png");
          }
       }
    
       cout_mutex.lock();
       std::cout << "Thread[" << id << "]: Received " << wakeups << " wakeups" << std::endl;
       cout_mutex.unlock();
    }
    
    int main ( int argc,char **argv ) {
    
       raspicam::RaspiCam Camera;
       // Allowable values: RASPICAM_FORMAT_GRAY,RASPICAM_FORMAT_RGB,RASPICAM_FORMAT_BGR,RASPICAM_FORMAT_YUV420
       Camera.setFormat(raspicam::RASPICAM_FORMAT_YUV420);
    
       // Allowable widths: 320, 640, 1280
       // Allowable heights: 240, 480, 960
       // setCaptureSize(width,height)
       Camera.setCaptureSize(WIDTH,HEIGHT);
    
       std::cout << "Main: Starting"  << std::endl;
       std::cout << "Main: NTHREADS:" << NTHREADS << std::endl;
       std::cout << "Main: NFRAMES:"  << NFRAMES  << std::endl;
       std::cout << "Main: Width: "   << Camera.getWidth()  << std::endl;
       std::cout << "Main: Height: "  << Camera.getHeight() << std::endl;
    
       // Spawn worker threads - making sure they are initially in WAIT state
       std::thread threads[NTHREADS];
       for(int i=0; i<NTHREADS; ++i){
          command[i]=WAIT;
          threads[i] = std::thread(worker,i);
       }
    
       // Open camera
       cout<<"Opening Camera..."<<endl;
       if ( !Camera.open()) {cerr<<"Error opening camera"<<endl;return -1;}
    
       // Wait until camera stabilizes
       std::cout<<"Sleeping for 3 secs"<<endl;
       std::this_thread::sleep_for(std::chrono::seconds(3));
    
       for(int frame=0;frame<NFRAMES;frame++){
          // Capture frame
          Camera.grab();
    
          // Copy just the Y component to our mono CImg
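          // ... (with RASPICAM_FORMAT_YUV420 the buffer is planar - the Y plane is the
          // ... first WIDTH*HEIGHT bytes, followed by the quarter-size U and V planes)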
          std::memcpy(img._data,Camera.getImageBufferData(),WIDTH*HEIGHT);
    
          // Notify worker threads that data is ready for processing
          for(int i=0; i<NTHREADS; ++i){
             command[i]=GO;
          }
       }
    
       // Let workers process the final frame, then tell them to exit
       std::this_thread::sleep_for(std::chrono::milliseconds(50));
    
       // Notify worker threads to exit
       for(int i=0; i<NTHREADS; ++i){
          command[i]=EXIT;
       }
    
       // Wait for all threads to finish
       for(auto& th : threads) th.join();
    }
    

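    As mentioned above, the polled wakeups could be replaced with a condition variable. Here is a minimal, stripped-down sketch of that idea - it is plain C++11 threading, not raspicam or CImg code; frameReady, done and lastSeen are names I have made up, and the comments mark where the Camera.grab()/memcpy and the per-frame processing from the program above would go.

    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    int main() {
       std::mutex m;
       std::condition_variable cv;
       int frameReady = -1;          // index of the latest acquired frame, -1 = none yet
       bool done = false;

       const int NFRAMES  = 100;
       const int NTHREADS = 2;

       // Spawn workers that sleep until notified, rather than polling a flag
       std::vector<std::thread> workers;
       for (int id = 0; id < NTHREADS; ++id) {
          workers.emplace_back([&, id] {
             int lastSeen = -1;
             while (true) {
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [&] { return done || frameReady != lastSeen; });
                if (done) break;
                lastSeen = frameReady;   // note: a slow worker simply skips to the latest frame
                lock.unlock();
                // ... process the shared CImg frame here ...
             }
             std::cout << "worker " << id << " exiting" << std::endl;
          });
       }

       for (int frame = 0; frame < NFRAMES; ++frame) {
          // ... Camera.grab() and the memcpy into the CImg would go here ...
          {
             std::lock_guard<std::mutex> lock(m);
             frameReady = frame;
          }
          cv.notify_all();              // wake the workers instead of having them poll
       }

       {
          std::lock_guard<std::mutex> lock(m);
          done = true;
       }
       cv.notify_all();
       for (auto &t : workers) t.join();
    }

    The advantage over polling is that a sleeping worker wakes as soon as notify_all() is called rather than up to 500us later, and the workers burn no CPU while waiting. Remember to add -pthread to the g++ command line when compiling any of the threaded versions.
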
    Note on timing

    You can time code like this:

    #include <chrono>
    
    typedef std::chrono::high_resolution_clock hrclock;
    
    hrclock::time_point t1,t2;
    
    t1 = hrclock::now();
    // do something that needs timing
    t2 = hrclock::now();
    
    std::chrono::nanoseconds elapsed = t2-t1;
    long long nanoseconds=elapsed.count();
    
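    For example, to measure the capture rate you could time the whole grab loop and divide - a small illustrative calculation, not part of the original code; the loop body is just a placeholder for the raspicam calls:

    #include <chrono>
    #include <iostream>

    typedef std::chrono::high_resolution_clock hrclock;

    int main() {
       const int NFRAMES = 1000;

       hrclock::time_point t1 = hrclock::now();
       for (int frame = 0; frame < NFRAMES; ++frame) {
          // Camera.grab() etc. goes here
       }
       hrclock::time_point t2 = hrclock::now();

       double seconds = std::chrono::duration<double>(t2 - t1).count();
       std::cout << NFRAMES << " frames in " << seconds << " s = "
                 << NFRAMES / seconds << " fps" << std::endl;
    }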

    Original Answer

    I have been doing some experiments with Raspicam. I downloaded their code from SourceForge and modified it slightly to do some simple, capture-only tests. The code I ended up using looks like this:

    #include <ctime>
    #include <fstream>
    #include <iostream>
    #include <raspicam/raspicam.h>
    #include <unistd.h> // for usleep()
    using namespace std;
    
    #define NFRAMES 1000
    
    int main ( int argc,char **argv ) {
    
        raspicam::RaspiCam Camera;
        // Allowable values: RASPICAM_FORMAT_GRAY,RASPICAM_FORMAT_RGB,RASPICAM_FORMAT_BGR,RASPICAM_FORMAT_YUV420
        Camera.setFormat(raspicam::RASPICAM_FORMAT_YUV420);
    
        // Allowable widths: 320, 640, 1280
        // Allowable heights: 240, 480, 960
        // setCaptureSize(width,height)
        Camera.setCaptureSize(1280,960);
    
        // Open camera 
        cout<<"Opening Camera..."<<endl;
        if ( !Camera.open()) {cerr<<"Error opening camera"<<endl;return -1;}
    
        // Wait until camera stabilizes
        cout<<"Sleeping for 3 secs"<<endl;
        usleep(3000000);
        cout << "Grabbing " << NFRAMES << " frames" << endl;
    
        // Allocate memory
        unsigned long bytes=Camera.getImageBufferSize();
        cout << "Width: "  << Camera.getWidth() << endl;
        cout << "Height: " << Camera.getHeight() << endl;
        cout << "ImageBufferSize: " << bytes << endl;
        unsigned char *data=new unsigned char[bytes];
    
        for(int frame=0;frame<NFRAMES;frame++){
           // Capture frame
           Camera.grab();
    
           // Extract the image
           Camera.retrieve ( data,raspicam::RASPICAM_FORMAT_IGNORE );
    
           // Wake up a thread here to process the frame with CImg
        }
        return 0;
    }
    

    I dislike cmake so I just compiled like this:

    g++ -std=c++11 simpletest.c -o simpletest -I. -I/usr/local/include -L /opt/vc/lib -L /usr/local/lib -lraspicam -lmmal -lmmal_core -lmmal_util
    

    I found that, regardless of the dimensions of the image, and more or less regardless of the encoding (RGB, BGR, GRAY), it achieves 30 fps (frames per second).

    The only way I could get better than that was by making the following changes:

    • using RASPICAM_FORMAT_YUV420 in the code above rather than any other format

    • editing the file private_impl.cpp and changing line 71 to set the framerate to 90.

    If I do that, I can achieve 66 fps.

    As the Raspberry Pi has only a fairly lowly 900MHz CPU, albeit with 4 cores, I would guess you would want to start 1-3 extra threads at the beginning, outside the loop, and then wake one or more of them where I have noted in the code to process the data. The first thing they would do is copy the data out of the acquisition buffer before the next frame starts - or you could have multiple buffers and use them in a round-robin fashion, as sketched below.
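
    A minimal sketch of that multiple-buffer idea (my own illustration - NBUFS and the vector of buffers are made-up names, and the raspicam calls are only indicated in comments): the acquiring thread writes each new frame into buffers[frame % NBUFS], so the slot a worker is reading is not overwritten again for another NBUFS frames, which is how long that worker has to finish with it.

    #include <cstring>
    #include <vector>

    #define NBUFS  3
    #define WIDTH  1280
    #define HEIGHT 960

    int main() {
       // One greyscale buffer per slot, reused in round-robin order
       std::vector<std::vector<unsigned char> > buffers(
          NBUFS, std::vector<unsigned char>(WIDTH * HEIGHT));

       const int NFRAMES = 1000;
       for (int frame = 0; frame < NFRAMES; ++frame) {
          unsigned char *slot = buffers[frame % NBUFS].data();

          // Camera.grab();
          // std::memcpy(slot, Camera.getImageBufferData(), WIDTH*HEIGHT);

          // Hand "slot" (or just the frame number) to a worker thread here,
          // e.g. by setting its command flag to GO as in the updated answer
          (void)slot;   // silences "unused variable" while the raspicam calls are commented out
       }
    }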

    Notes on threading

    Think of the acquisition loop as a timeline in which green represents the Camera.grab() where you acquire the image, and red represents the processing you do after the image is acquired. At the moment you acquire the data (green) and then process it (red) before you can acquire the next frame, so 3 of your 4 CPU cores do nothing.

    What I am suggesting is that you offload the processing (red) to the other cores/threads and keep acquiring new data (green) as fast as possible. That way you get more frames (green) acquired per second.
