Question
I'm working on a project based on real-time image processing with the CImg library on a Raspberry Pi.
I need to capture images at higher frame rates (say at least 30 fps). When I use the built-in raspistill commands, such as
sudo raspistill -o img_%d.jpg -tl 5 -t 1000 -a 512
/* -tl : time-lapse interval in msec; -t : total duration (1000 msec = 1 sec); -a : annotate frames with the frame number */
With this command, although it reports 34 frames per second, I could only capture a maximum of 4 frames/images (the rest of the frames were skipped). I then tried:
sudo raspistill -o img_%d.jpg -tl 5 -t 1000 -q 5 -md 7 -w 640 -h 480 -a 512
With this command I could capture a maximum of 7-8 images per second, but only by reducing the resolution and quality of the images.
I don't want to compromise on image quality, since I will be capturing an image, processing it immediately, and deleting it to save memory.
Later I tried the V4L2 (Video for Linux 2) drivers to get the best performance out of the camera, but tutorials covering V4L2 together with CImg are quite scarce on the internet; I couldn't find one.
I have been using the following commands:
# Capture a JPEG image
v4l2-ctl --set-fmt-video=width=2592,height=1944,pixelformat=3
v4l2-ctl --stream-mmap=3 --stream-count=1 --stream-to=somefile.jpg
(source : http://www.geeetech.com/wiki/index.php/Raspberry_Pi_Camera_Module)
but I couldn't find enough information about parameters such as --stream-mmap and --stream-count: what exactly do they do, and how do these commands help me capture 30 frames/images per second?
CONDITIONS:
Most importantly, I don't want to use OpenCV, MATLAB or any other image-processing package, since my image-processing task is very simple (i.e. detecting an LED blink), and my objective is to have a lightweight tool that performs these operations with high performance.
My code should be in C or C++, not Python or Java (since processing speed matters!).
Please note that my aim is not to record a video but to capture as many frames as possible and to process each individual image.
For using CImg I searched through a few docs in the reference manual, but I couldn't clearly understand how to use it for my purpose.
The class cimg_library::CImgList represents lists of cimg_library::CImg images. It can be used, for instance, to store different frames of an image sequence. (source: http://cimg.eu/reference/group__cimg__overview.html)
- I found the following example, but I'm not quite sure whether it suits my task:
Load a list from a YUV image sequence file:

CImg<T>& load_yuv(
    const char *const filename,
    const unsigned int size_x,
    const unsigned int size_y,
    const unsigned int first_frame = 0,
    const unsigned int last_frame = ~0U,
    const unsigned int step_frame = 1,
    const bool yuv2rgb = true
);

Parameters:
- filename : Filename to read data from.
- size_x : Width of the images.
- size_y : Height of the images.
- first_frame : Index of the first image frame to read.
- last_frame : Index of the last image frame to read.
- step_frame : Step applied between frames.
- yuv2rgb : Apply the YUV-to-RGB transformation during reading.
But here, I need the RGB values of the image frames directly, without compression.
Now I have the following code in OpenCV which performs my task; I request your help implementing the same using the CImg library (which is in C++), any other lightweight library, or something with V4L2:
#include <iostream>
#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;

int main (){
    VideoCapture capture(0);   // device at /dev/video0
    /* You can edit the capture properties with "capture.set(property, value);"
       or in the driver with "v4l2-ctl --set-ctrl=auto_exposure=1" */
    waitKey(200);              // wait 200 ms to ensure the device is open
    Mat frame;                 // matrix where each new frame will be stored
    if (capture.isOpened()){
        while (true){
            capture >> frame;             // put the new image in the matrix
            imshow("Image", frame);       // show the image on the screen
            if (waitKey(1) == 27) break;  // let HighGUI refresh; Esc quits
        }
    }
    return 0;
}
- I'm a beginner to programming and the Raspberry Pi, so please excuse any mistakes in the above problem statement.
With some of your recommendations, I slightly modified the Raspicam C++ API code and combined it with CImg image-processing functionality:
#include "CImg.h"
#include <iostream>
#include <cstdlib>
#include <fstream>
#include <sstream>
#include <sys/timeb.h>
#include "raspicam.h"
using namespace std;
using namespace cimg_library;
bool doTestSpeedOnly=false;
size_t nFramesCaptured=100;
//parse command line
//returns the index of a command line param in argv. If not found, return -1
int findParam ( string param,int argc,char **argv ) {
int idx=-1;
for ( int i=0; i<argc && idx==-1; i++ )
if ( string ( argv[i] ) ==param ) idx=i;
return idx;
}
//parse command line
//returns the value of a command line param. If not found, defvalue is returned
float getParamVal ( string param,int argc,char **argv,float defvalue=-1 ) {
int idx=-1;
for ( int i=0; i<argc && idx==-1; i++ )
if ( string ( argv[i] ) ==param ) idx=i;
if ( idx==-1 ) return defvalue;
else return atof ( argv[ idx+1] );
}
raspicam::RASPICAM_EXPOSURE getExposureFromString ( string str ) {
if ( str=="OFF" ) return raspicam::RASPICAM_EXPOSURE_OFF;
if ( str=="AUTO" ) return raspicam::RASPICAM_EXPOSURE_AUTO;
if ( str=="NIGHT" ) return raspicam::RASPICAM_EXPOSURE_NIGHT;
if ( str=="NIGHTPREVIEW" ) return raspicam::RASPICAM_EXPOSURE_NIGHTPREVIEW;
if ( str=="BACKLIGHT" ) return raspicam::RASPICAM_EXPOSURE_BACKLIGHT;
if ( str=="SPOTLIGHT" ) return raspicam::RASPICAM_EXPOSURE_SPOTLIGHT;
if ( str=="SPORTS" ) return raspicam::RASPICAM_EXPOSURE_SPORTS;
if ( str=="SNOW" ) return raspicam::RASPICAM_EXPOSURE_SNOW;
if ( str=="BEACH" ) return raspicam::RASPICAM_EXPOSURE_BEACH;
if ( str=="VERYLONG" ) return raspicam::RASPICAM_EXPOSURE_VERYLONG;
if ( str=="FIXEDFPS" ) return raspicam::RASPICAM_EXPOSURE_FIXEDFPS;
if ( str=="ANTISHAKE" ) return raspicam::RASPICAM_EXPOSURE_ANTISHAKE;
if ( str=="FIREWORKS" ) return raspicam::RASPICAM_EXPOSURE_FIREWORKS;
return raspicam::RASPICAM_EXPOSURE_AUTO;
}
raspicam::RASPICAM_AWB getAwbFromString ( string str ) {
if ( str=="OFF" ) return raspicam::RASPICAM_AWB_OFF;
if ( str=="AUTO" ) return raspicam::RASPICAM_AWB_AUTO;
if ( str=="SUNLIGHT" ) return raspicam::RASPICAM_AWB_SUNLIGHT;
if ( str=="CLOUDY" ) return raspicam::RASPICAM_AWB_CLOUDY;
if ( str=="SHADE" ) return raspicam::RASPICAM_AWB_SHADE;
if ( str=="TUNGSTEN" ) return raspicam::RASPICAM_AWB_TUNGSTEN;
if ( str=="FLUORESCENT" ) return raspicam::RASPICAM_AWB_FLUORESCENT;
if ( str=="INCANDESCENT" ) return raspicam::RASPICAM_AWB_INCANDESCENT;
if ( str=="FLASH" ) return raspicam::RASPICAM_AWB_FLASH;
if ( str=="HORIZON" ) return raspicam::RASPICAM_AWB_HORIZON;
return raspicam::RASPICAM_AWB_AUTO;
}
void processCommandLine ( int argc,char **argv,raspicam::RaspiCam &Camera ) {
Camera.setWidth ( getParamVal ( "-w",argc,argv,640 ) );
Camera.setHeight ( getParamVal ( "-h",argc,argv,480 ) );
Camera.setBrightness ( getParamVal ( "-br",argc,argv,50 ) );
Camera.setSharpness ( getParamVal ( "-sh",argc,argv,0 ) );
Camera.setContrast ( getParamVal ( "-co",argc,argv,0 ) );
Camera.setSaturation ( getParamVal ( "-sa",argc,argv,0 ) );
Camera.setShutterSpeed( getParamVal ( "-ss",argc,argv,0 ) );
Camera.setISO ( getParamVal ( "-iso",argc,argv ,400 ) );
if ( findParam ( "-vs",argc,argv ) !=-1 )
Camera.setVideoStabilization ( true );
Camera.setExposureCompensation ( getParamVal ( "-ec",argc,argv ,0 ) );
if ( findParam ( "-gr",argc,argv ) !=-1 )
Camera.setFormat(raspicam::RASPICAM_FORMAT_GRAY);
if ( findParam ( "-yuv",argc,argv ) !=-1 )
Camera.setFormat(raspicam::RASPICAM_FORMAT_YUV420);
if ( findParam ( "-test_speed",argc,argv ) !=-1 )
doTestSpeedOnly=true;
int idx;
if ( ( idx=findParam ( "-ex",argc,argv ) ) !=-1 )
Camera.setExposure ( getExposureFromString ( argv[idx+1] ) );
if ( ( idx=findParam ( "-awb",argc,argv ) ) !=-1 )
Camera.setAWB( getAwbFromString ( argv[idx+1] ) );
nFramesCaptured=getParamVal("-nframes",argc,argv,100);
Camera.setAWB_RB(getParamVal("-awb_b",argc,argv ,1), getParamVal("-awb_g",argc,argv ,1));
}
//timer functions
#include <sys/time.h>
#include <unistd.h>
class Timer{
private:
struct timeval _start, _end;
public:
Timer(){}
void start(){
gettimeofday(&_start, NULL);
}
void end(){
gettimeofday(&_end, NULL);
}
double getSecs(){
return double(((_end.tv_sec - _start.tv_sec) * 1000 + (_end.tv_usec - _start.tv_usec)/1000.0) + 0.5)/1000.;
}
};
void saveImage ( string filepath,unsigned char *data,raspicam::RaspiCam &Camera ) {
std::ofstream outFile ( filepath.c_str(),std::ios::binary );
if ( Camera.getFormat()==raspicam::RASPICAM_FORMAT_BGR || Camera.getFormat()==raspicam::RASPICAM_FORMAT_RGB ) {
outFile<<"P6\n";
} else if ( Camera.getFormat()==raspicam::RASPICAM_FORMAT_GRAY ) {
outFile<<"P5\n";
} else if ( Camera.getFormat()==raspicam::RASPICAM_FORMAT_YUV420 ) { //made up format
outFile<<"P7\n";
}
outFile<<Camera.getWidth() <<" "<<Camera.getHeight() <<" 255\n";
outFile.write ( ( char* ) data,Camera.getImageBufferSize() );
}
int main ( int argc,char **argv ) {
int a=1,b=0,c;
int x=444,y=129; //pixel coordinates
raspicam::RaspiCam Camera;
processCommandLine ( argc,argv,Camera );
cout<<"Connecting to camera"<<endl;
if ( !Camera.open() ) {
cerr<<"Error opening camera"<<endl;
return -1;
}
// cout<<"Connected to camera ="<<Camera.getId() <<" bufs="<<Camera.getImageBufferSize( )<<endl;
unsigned char *data=new unsigned char[ Camera.getImageBufferSize( )];
Timer timer;
// cout<<"Capturing...."<<endl;
// size_t i=0;
timer.start();
for (size_t i=0;i<nFramesCaptured;i++)
{
Camera.grab();
Camera.retrieve ( data );
std::stringstream fn;
fn<<"/run/shm/image.ppm"; // RAM disk; saveImage() writes PNM data, so use a matching extension
saveImage ( fn.str(),data,Camera );
// cerr<<"Saving "<<fn.str()<<endl;
CImg<float> Img(fn.str().c_str());
//Img.display("Window Title");
// 9 PIXELS MATRIX GRAYSCALE VALUES
float pixvalR1 = Img(x-1,y-1);
float pixvalR2 = Img(x,y-1);
float pixvalR3 = Img(x+1,y-1);
float pixvalR4 = Img(x-1,y);
float pixvalR5 = Img(x,y);
float pixvalR6 = Img(x+1,y);
float pixvalR7 = Img(x-1,y+1);
float pixvalR8 = Img(x,y+1);
float pixvalR9 = Img(x+1,y+1);
// std::cout<<"coordinate value :"<<pixvalR5 << endl;
// MEAN VALUES OF RGB PIXELS
float light = (pixvalR1+pixvalR2+pixvalR3+pixvalR4+pixvalR5+pixvalR6+pixvalR7+pixvalR8+pixvalR9)/9 ;
// DISPLAYING MEAN RGB VALUES OF 9 PIXELS
// std::cout<<"Lightness value :"<<light << endl;
// THRESHOLDING CONDITION
c = (light > 130 ) ? a : b;
// cout<<"Data is " << c <<endl;
ofstream fout("c.txt", ios::app);
fout<<c;
fout.close();
}
timer.end();
cerr<< timer.getSecs()<< " seconds for "<< nFramesCaptured << " frames : FPS " << ( ( float ) ( nFramesCaptured ) / timer.getSecs() ) <<endl;
Camera.release();
std::cin.ignore();
}
- From this code, I would like to know how we can get the data directly from Camera.retrieve(data), without storing it as an image file: that is, access the data from the image buffer directly, process the image, and then discard it.
As per the recommendations of Mark Setchell, I made slight changes to the code and I'm getting good results, but is there any way to improve the processing performance to get a higher frame rate? With this code I'm able to get a maximum of 10 FPS.
#include <ctime>
#include <cstring>   // for std::memcpy
#include <fstream>
#include <iostream>
#include <thread>
#include <mutex>
#include <raspicam/raspicam.h>
// Don't want any X11 display by CImg
#define cimg_display 0
#include <CImg.h>
using namespace cimg_library;
using namespace std;
#define NFRAMES 1000
#define NTHREADS 2
#define WIDTH 640
#define HEIGHT 480
// Commands/status for the worker threads
#define WAIT 0
#define GO 1
#define GOING 2
#define EXIT 3
#define EXITED 4
volatile int command[NTHREADS];
// Serialize access to cout
std::mutex cout_mutex;
// CImg initialisation
// Create a 1280x960 greyscale (Y channel of YUV) image
// Create a globally-accessible CImg for main and workers to access
CImg<unsigned char> img(WIDTH,HEIGHT,1,1,128);
////////////////////////////////////////////////////////////////////////////////
// worker thread - There will be 2 or more of these running in parallel with the
// main thread. Do any image processing in here.
////////////////////////////////////////////////////////////////////////////////
void worker (int id) {
// If you need a "results" image of type CImg, create it here before entering
// ... the main processing loop below - you don't want to do malloc()s in the
// ... high-speed loop
// CImg results...
int wakeups=0;
// Create a white for annotating
unsigned char white[] = { 255,255,255 };
while(true){
// Busy wait with 500us sleep - at worst we only miss 50us of processing time per frame
while((command[id]!=GO)&&(command[id]!=EXIT)){
std::this_thread::sleep_for(std::chrono::microseconds(500));
}
if(command[id]==EXIT){command[id]=EXITED;break;}
wakeups++;
// Process frame of data - access CImg structure here
command[id]=GOING;
// You need to add your processing in HERE - everything from
// ... 9 PIXELS MATRIX GRAYSCALE VALUES to
// ... THRESHOLDING CONDITION
int a=1,b=0,c;
int x=330,y=84;
// CImg<float> Img("/run/shm/result.png");
float pixvalR1 = img(x-1,y-1);
float pixvalR2 = img(x,y-1);
float pixvalR3 = img(x+1,y-1);
float pixvalR4 = img(x-1,y);
float pixvalR5 = img(x,y);
float pixvalR6 = img(x+1,y);
float pixvalR7 = img(x-1,y+1);
float pixvalR8 = img(x,y+1);
float pixvalR9 = img(x+1,y+1);
// MEAN VALUES OF RGB PIXELS
float light = (pixvalR1+pixvalR2+pixvalR3+pixvalR4+pixvalR5+pixvalR6+pixvalR7+pixvalR8+pixvalR9)/9 ;
// DISPLAYING MEAN RGB VALUES OF 9 PIXELS
// std::cout<<"Lightness value :"<<light << endl;
// THRESHOLDING CONDITION
c = (light > 130 ) ? a : b;
// cout<<"Data is " << c <<endl;
ofstream fout("c.txt", ios::app);
fout<<c;
fout.close();
// Pretend to do some processing.
// You need to delete the following "sleep_for" and "if(id==0...){...}"
// std::this_thread::sleep_for(std::chrono::milliseconds(2));
/* if((id==0)&&(wakeups==NFRAMES)){
// Annotate final image and save as PNG
img.draw_text(100,100,"Hello World",white);
img.save_png("result.png");
} */
}
cout_mutex.lock();
std::cout << "Thread[" << id << "]: Received " << wakeups << " wakeups" << std::endl;
cout_mutex.unlock();
}
//timer functions
#include <sys/time.h>
#include <unistd.h>
class Timer{
private:
struct timeval _start, _end;
public:
Timer(){}
void start(){
gettimeofday(&_start, NULL);
}
void end(){
gettimeofday(&_end, NULL);
}
double getSecs(){
return double(((_end.tv_sec - _start.tv_sec) * 1000 + (_end.tv_usec - _start.tv_usec)/1000.0) + 0.5)/1000.;
}
};
int main ( int argc,char **argv ) {
Timer timer;
raspicam::RaspiCam Camera;
// Allowable values: RASPICAM_FORMAT_GRAY,RASPICAM_FORMAT_RGB,RASPICAM_FORMAT_BGR,RASPICAM_FORMAT_YUV420
Camera.setFormat(raspicam::RASPICAM_FORMAT_YUV420);
// Allowable widths: 320, 640, 1280
// Allowable heights: 240, 480, 960
// setCaptureSize(width,height)
Camera.setCaptureSize(WIDTH,HEIGHT);
std::cout << "Main: Starting" << std::endl;
std::cout << "Main: NTHREADS:" << NTHREADS << std::endl;
std::cout << "Main: NFRAMES:" << NFRAMES << std::endl;
std::cout << "Main: Width: " << Camera.getWidth() << std::endl;
std::cout << "Main: Height: " << Camera.getHeight() << std::endl;
// Spawn worker threads - making sure they are initially in WAIT state
std::thread threads[NTHREADS];
for(int i=0; i<NTHREADS; ++i){
command[i]=WAIT;
threads[i] = std::thread(worker,i);
}
// Open camera
cout<<"Opening Camera..."<<endl;
if ( !Camera.open()) {cerr<<"Error opening camera"<<endl;return -1;}
// Wait until camera stabilizes
std::cout<<"Sleeping for 3 secs"<<endl;
std::this_thread::sleep_for(std::chrono::seconds(3));
timer.start();
for(int frame=0;frame<NFRAMES;frame++){
// Capture frame
Camera.grab();
// Copy just the Y component to our mono CImg
std::memcpy(img._data,Camera.getImageBufferData(),WIDTH*HEIGHT);
// Notify worker threads that data is ready for processing
for(int i=0; i<NTHREADS; ++i){
command[i]=GO;
}
}
timer.end();
cerr<< timer.getSecs()<< " seconds for "<< NFRAMES << " frames : FPS " << ( ( float ) ( NFRAMES ) / timer.getSecs() ) << endl;
// Let workers process final frame, then tell to exit
// std::this_thread::sleep_for(std::chrono::milliseconds(50));
// Notify worker threads to exit
for(int i=0; i<NTHREADS; ++i){
command[i]=EXIT;
}
// Wait for all threads to finish
for(auto& th : threads) th.join();
}
COMPILED COMMAND FOR EXECUTION OF THE CODE :
g++ -std=c++11 /home/pi/raspicam/src/raspicimgthread.cpp -o threadraspicimg -I. -I/usr/local/include -L /opt/vc/lib -L /usr/local/lib -lraspicam -lmmal -lmmal_core -lmmal_util -O2 -L/usr/X11R6/lib -lm -lpthread -lX11
**RESULTS:**
Main: Starting
Main: NTHREADS:2
Main: NFRAMES:1000
Main: Width: 640
Main: Height: 480
Opening Camera...
Sleeping for 3 secs
99.9194 seconds for 1000 frames : FPS 10.0081
Thread[1]: Received 1000 wakeups
Thread[0]: Received 1000 wakeups
real 1m43.198s
user 0m2.060s
sys 0m5.850s
One more query: when I used the plain Raspicam C++ API code to perform the same tasks (the code I mentioned before this one), I got almost the same results, with only a very slight improvement in performance (my frame rate increased from 9.4 FPS to 10 FPS).
But in code 1:
I was saving images to a RAM disk for processing and then deleting them, and I didn't use any threads for parallel processing.
In code 2:
We are not saving any images to disk but processing directly from the buffer, and we are also using threads to improve the processing speed.
Unfortunately, although we made these changes in code 2 relative to code 1, I'm not able to get the desired result (which is to run at 30 FPS).
Awaiting your favorable suggestions; any help is really appreciated.
Thanks in advance.
Best regards, BLV Lohith Kumar
Answer 1:
Updated Answer
I have updated my original answer here to show how to copy the acquired data into a CImg structure, and also to show 2 worker threads that can process the image while the main thread continues to acquire frames at full speed. It achieves 60 frames per second.
I have not done any processing inside the worker threads because I don't know what you want to do. All I did was save the last frame to disk to show that acquisition into a CImg is working. You could have 3 worker threads: pass one frame to each thread on a round-robin basis, or have each of 2 threads process half of each frame, or each of 3 threads process one third of a frame. You could also change the polled wakeups to use condition variables.
#include <ctime>
#include <cstring>   // for std::memcpy
#include <fstream>
#include <iostream>
#include <thread>
#include <mutex>
#include <raspicam/raspicam.h>
// Don't want any X11 display by CImg
#define cimg_display 0
#include <CImg.h>
using namespace cimg_library;
using namespace std;
#define NFRAMES 1000
#define NTHREADS 2
#define WIDTH 1280
#define HEIGHT 960
// Commands/status for the worker threads
#define WAIT 0
#define GO 1
#define GOING 2
#define EXIT 3
#define EXITED 4
volatile int command[NTHREADS];
// Serialize access to cout
std::mutex cout_mutex;
// CImg initialisation
// Create a 1280x960 greyscale (Y channel of YUV) image
// Create a globally-accessible CImg for main and workers to access
CImg<unsigned char> img(WIDTH,HEIGHT,1,1,128);
////////////////////////////////////////////////////////////////////////////////
// worker thread - There will be 2 or more of these running in parallel with the
// main thread. Do any image processing in here.
////////////////////////////////////////////////////////////////////////////////
void worker (int id) {
// If you need a "results" image of type CImg, create it here before entering
// ... the main processing loop below - you don't want to do malloc()s in the
// ... high-speed loop
// CImg results...
int wakeups=0;
// Create a white for annotating
unsigned char white[] = { 255,255,255 };
while(true){
// Busy wait with 500us sleep - at worst we only miss 50us of processing time per frame
while((command[id]!=GO)&&(command[id]!=EXIT)){
std::this_thread::sleep_for(std::chrono::microseconds(500));
}
if(command[id]==EXIT){command[id]=EXITED;break;}
wakeups++;
// Process frame of data - access CImg structure here
command[id]=GOING;
// You need to add your processing in HERE - everything from
// ... 9 PIXELS MATRIX GRAYSCALE VALUES to
// ... THRESHOLDING CONDITION
// Pretend to do some processing.
// You need to delete the following "sleep_for" and "if(id==0...){...}"
std::this_thread::sleep_for(std::chrono::milliseconds(2));
if((id==0)&&(wakeups==NFRAMES)){
// Annotate final image and save as PNG
img.draw_text(100,100,"Hello World",white);
img.save_png("result.png");
}
}
cout_mutex.lock();
std::cout << "Thread[" << id << "]: Received " << wakeups << " wakeups" << std::endl;
cout_mutex.unlock();
}
int main ( int argc,char **argv ) {
raspicam::RaspiCam Camera;
// Allowable values: RASPICAM_FORMAT_GRAY,RASPICAM_FORMAT_RGB,RASPICAM_FORMAT_BGR,RASPICAM_FORMAT_YUV420
Camera.setFormat(raspicam::RASPICAM_FORMAT_YUV420);
// Allowable widths: 320, 640, 1280
// Allowable heights: 240, 480, 960
// setCaptureSize(width,height)
Camera.setCaptureSize(WIDTH,HEIGHT);
std::cout << "Main: Starting" << std::endl;
std::cout << "Main: NTHREADS:" << NTHREADS << std::endl;
std::cout << "Main: NFRAMES:" << NFRAMES << std::endl;
std::cout << "Main: Width: " << Camera.getWidth() << std::endl;
std::cout << "Main: Height: " << Camera.getHeight() << std::endl;
// Spawn worker threads - making sure they are initially in WAIT state
std::thread threads[NTHREADS];
for(int i=0; i<NTHREADS; ++i){
command[i]=WAIT;
threads[i] = std::thread(worker,i);
}
// Open camera
cout<<"Opening Camera..."<<endl;
if ( !Camera.open()) {cerr<<"Error opening camera"<<endl;return -1;}
// Wait until camera stabilizes
std::cout<<"Sleeping for 3 secs"<<endl;
std::this_thread::sleep_for(std::chrono::seconds(3));
for(int frame=0;frame<NFRAMES;frame++){
// Capture frame
Camera.grab();
// Copy just the Y component to our mono CImg
std::memcpy(img._data,Camera.getImageBufferData(),WIDTH*HEIGHT);
// Notify worker threads that data is ready for processing
for(int i=0; i<NTHREADS; ++i){
command[i]=GO;
}
}
// Let workers process final frame, then tell to exit
std::this_thread::sleep_for(std::chrono::milliseconds(50));
// Notify worker threads to exit
for(int i=0; i<NTHREADS; ++i){
command[i]=EXIT;
}
// Wait for all threads to finish
for(auto& th : threads) th.join();
}
Note on timing
You can time code like this:
#include <chrono>
typedef std::chrono::high_resolution_clock hrclock;
hrclock::time_point t1,t2;
t1 = hrclock::now();
// do something that needs timing
t2 = hrclock::now();
std::chrono::nanoseconds elapsed = std::chrono::duration_cast<std::chrono::nanoseconds>(t2-t1);
long long nanoseconds = elapsed.count();
Original Answer
I have been doing some experiments with Raspicam. I downloaded their code from SourceForge and modified it slightly to do some simple, capture-only tests. The code I ended up using looks like this:
#include <ctime>
#include <fstream>
#include <iostream>
#include <raspicam/raspicam.h>
#include <unistd.h> // for usleep()
using namespace std;
#define NFRAMES 1000
int main ( int argc,char **argv ) {
raspicam::RaspiCam Camera;
// Allowable values: RASPICAM_FORMAT_GRAY,RASPICAM_FORMAT_RGB,RASPICAM_FORMAT_BGR,RASPICAM_FORMAT_YUV420
Camera.setFormat(raspicam::RASPICAM_FORMAT_YUV420);
// Allowable widths: 320, 640, 1280
// Allowable heights: 240, 480, 960
// setCaptureSize(width,height)
Camera.setCaptureSize(1280,960);
// Open camera
cout<<"Opening Camera..."<<endl;
if ( !Camera.open()) {cerr<<"Error opening camera"<<endl;return -1;}
// Wait until camera stabilizes
cout<<"Sleeping for 3 secs"<<endl;
usleep(3000000);
cout << "Grabbing " << NFRAMES << " frames" << endl;
// Allocate memory
unsigned long bytes=Camera.getImageBufferSize();
cout << "Width: " << Camera.getWidth() << endl;
cout << "Height: " << Camera.getHeight() << endl;
cout << "ImageBufferSize: " << bytes << endl;;
unsigned char *data=new unsigned char[bytes];
for(int frame=0;frame<NFRAMES;frame++){
// Capture frame
Camera.grab();
// Extract the image
Camera.retrieve ( data,raspicam::RASPICAM_FORMAT_IGNORE );
// Wake up a thread here to process the frame with CImg
}
return 0;
}
I dislike cmake, so I just compiled like this:
g++ -std=c++11 simpletest.c -o simpletest -I. -I/usr/local/include -L /opt/vc/lib -L /usr/local/lib -lraspicam -lmmal -lmmal_core -lmmal_util
I found that, regardless of the dimensions of the image, and more or less regardless of the encoding (RGB, BGR, GRAY) it achieves 30 fps (frames per second).
The only way I could get better than that was by making the following changes:
- in the code above, use RASPICAM_FORMAT_YUV420 rather than anything else
- edit the file private_impl.cpp and change line 71 to set the framerate to 90
If I do that, I can achieve 66 fps.
As the Raspberry Pi has only a fairly lowly 900 MHz CPU, but with 4 cores, I would guess you would want to start 1-3 extra threads outside the loop at the beginning, and then wake one or more of them up where I have noted in the code to process the data. The first thing they would do is copy the data out of the acquisition buffer before the next frame starts - or have multiple buffers and use them in round-robin fashion.
Notes on threading
In the following diagram, green represents Camera.grab(), where you acquire the image, and red represents the processing you do after the image is acquired. At the moment, you acquire the data (green) and then process it (red) before you can acquire the next frame. Note that 3 of your 4 CPUs do nothing.
What I am suggesting is that you offload the processing (red) to the other CPUs/threads and keep acquiring new data (green) as fast as possible. Like this:
Now you see you get more frames (green) per second.
Source: https://stackoverflow.com/questions/37013039/how-do-i-capture-and-process-each-and-every-frame-of-an-image-using-cimg-library