I've been working with OpenCV and Apple's Accelerate framework and find the performance of Accelerate to be slow and Apple's documentation limited. Let's take for example:
To use vImage with OpenCV, pass a reference to your OpenCV matrix to a method like this one:
long contrastStretch_Accelerate(const Mat& src, Mat& dst) {
    vImagePixelCount rows = static_cast<vImagePixelCount>(src.rows);
    vImagePixelCount cols = static_cast<vImagePixelCount>(src.cols);
    // vImage_Buffer fields are { data, height, width, rowBytes };
    // these wrap the Mat's existing pixel data without copying it.
    vImage_Buffer _src = { src.data, rows, cols, src.step };
    vImage_Buffer _dst = { dst.data, rows, cols, dst.step };
    vImage_Error err = vImageContrastStretch_ARGB8888( &_src, &_dst, kvImageNoFlags );
    return err;
}
The call to this method, from your OpenCV code block, looks like this:
- (void)processImage:(Mat&)image;
{
    contrastStretch_Accelerate(image, image);
}
It's that simple: since these are all pointer references, there's no "deep copying" of any kind. It's as fast and efficient as it can possibly be, all questions of context and other related performance considerations aside (I can help you with those, too).
SIDENOTE: Did you know that you have to permute the channels when mixing OpenCV with vImage? OpenCV matrices store pixels in BGRA order, while vImage's ARGB8888 functions expect ARGB. If not, prior to calling any vImage functions on an OpenCV matrix, call:
const uint8_t map[4] = { 3, 2, 1, 0 }; // destination channel i takes source channel map[i]
err = vImagePermuteChannels_ARGB8888(&_img, &_img, map, kvImageNoFlags);
if (err != kvImageNoError)
    NSLog(@"vImagePermuteChannels_ARGB8888 error: %ld", err);
Perform the same call, map and all, to restore the channel order OpenCV expects; because the map { 3, 2, 1, 0 } is its own inverse, applying it twice is a round trip.