I am using OpenCV 2.2 on the iPhone to detect faces. I'm using iOS 4's AVCaptureSession to get access to the camera stream, as seen in the code that follows.
vImage is a pretty fast way to do it. It requires iOS 5, though. The call says ARGB, but it works for the BGRA you get from the buffer.
This also has the advantage that you can cut out a part of the buffer and rotate that. See my answer here.
- (unsigned char *)rotateBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    size_t currSize = bytesPerRow * height * sizeof(unsigned char);
    // After a 90/270 degree rotation, rows of the output are `height` pixels wide
    size_t bytesPerRowOut = 4 * height * sizeof(unsigned char);

    void *srcBuff = CVPixelBufferGetBaseAddress(imageBuffer);
    unsigned char *outBuff = (unsigned char *)malloc(currSize); // caller must free()

    vImage_Buffer ibuff = { srcBuff, height, width, bytesPerRow };
    vImage_Buffer ubuff = { outBuff, width, height, bytesPerRowOut };

    uint8_t rotConst = 1; // 0, 1, 2, 3 is equal to 0, 90, 180, 270 degrees rotation
    Pixel_8888 backColor = {0, 0, 0, 0};
    vImage_Error err = vImageRotate90_ARGB8888(&ibuff, &ubuff, rotConst, backColor, 0);
    if (err != kvImageNoError) NSLog(@"%ld", err);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return outBuff;
}
Maybe it's easier to just set the video orientation the way you want:
connection.videoOrientation = AVCaptureVideoOrientationPortrait;
This way you don't need to do that rotation gimmick at all.
If you rotate in 90 degree steps, then you can just do it in memory. Here is example code that simply copies the data to a new pixel buffer. Doing a brute force rotation should be straightforward.
- (CVPixelBufferRef)rotateBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];

    CVPixelBufferRef pxbuffer = NULL;
    //CVReturn status = CVPixelBufferPoolCreatePixelBuffer (NULL, _pixelWriter.pixelBufferPool, &pxbuffer);
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                          kCVPixelFormatType_32BGRA, (CFDictionaryRef)options,
                                          &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *dest_buff = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(dest_buff != NULL);

    int *src = (int *)src_buff;
    int *dest = (int *)dest_buff;
    size_t count = (bytesPerRow * height) / 4;
    while (count--) {
        *dest++ = *src++;
    }

    //Test straight copy.
    //memcpy(pxdata, baseAddress, width * height * 4);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return pxbuffer;
}
You can then use AVAssetWriterInputPixelBufferAdaptor if you are writing this back out to an AVAssetWriterInput.
The above is not optimized. You may want to look for a more efficient copy algorithm. A good place to start is with in-place matrix transpose. You would also want to use a pixel buffer pool rather than creating a new buffer each time.
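To give a flavor of what a "more efficient copy" could look like, here is a tiled out-of-place transpose in plain C. This is only a sketch under my own assumptions (the function name, the tile size, and treating pixels as bare uint32_t values are mine, not from the answer); a transpose plus a row or column flip gives you the 90 degree rotations, so the same access pattern applies.

```c
#include <stdint.h>
#include <stddef.h>

/* Cache-friendlier out-of-place transpose of 32-bit pixels: process the
   matrix in small square tiles so both the reads and the writes stay
   within a few cache lines. TILE is a tuning knob; 8-32 is typical. */
#define TILE 8

static void transpose_tiled(const uint32_t *src, uint32_t *dst,
                            size_t width, size_t height)
{
    for (size_t by = 0; by < height; by += TILE)
        for (size_t bx = 0; bx < width; bx += TILE)
            for (size_t y = by; y < by + TILE && y < height; y++)
                for (size_t x = bx; x < bx + TILE && x < width; x++)
                    dst[x * height + y] = src[y * width + x];
}
```

The tiling changes nothing about the result, only the memory access order; for large frames that is usually the difference between being bandwidth-bound and being stall-bound.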
Edit: You could use the GPU to do this. This sounds like a lot of data being pushed around. In CVPixelBufferRef there is the key kCVPixelBufferOpenGLCompatibilityKey. I assume you could create an OpenGL-compatible image from the CVImageBufferRef (which is just a pixel buffer ref) and push it through a shader. Again, overkill IMO. You might see if BLAS or LAPACK has 'out of place' transpose methods. If they do, then you can be assured they are highly optimized.
90° CW, where new_width = height and new_height = width of the source buffer. This will get you a portrait-oriented image from a landscape frame.
for (int i = 0; i < new_height; i++) {          // i = source column
    for (int j = new_width - 1; j >= 0; j--) {  // j = source row
        *dest++ = *(src + (j * width) + i);
    }
}
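That index walk can be checked on a tiny matrix. Below is a self-contained C sketch (the function name is mine, and plain int stands in for a 32-bit BGRA pixel). One thing to watch: the source column index must run from 0, otherwise the first column is skipped and the final read walks one row past the end of the buffer.

```c
#include <assert.h>

/* Rotate a width x height pixel array 90 degrees clockwise into dst,
   which must hold height x width pixels (new_width = height,
   new_height = width). `int` stands in for one 32-bit BGRA pixel. */
static void rotate90cw(const int *src, int *dst, int width, int height)
{
    int new_width  = height;
    int new_height = width;
    for (int i = 0; i < new_height; i++) {          /* i = source column */
        for (int j = new_width - 1; j >= 0; j--) {  /* j = source row   */
            *dst++ = *(src + (j * width) + i);
        }
    }
}
```

For a 3x2 source {1,2,3; 4,5,6} this produces the 2x3 result {4,1; 5,2; 6,3}, i.e. the top-left pixel ends up in the top-right corner, which is exactly a clockwise quarter turn.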
I know this is quite an old question, but I've been solving a similar problem recently, and maybe someone will find my solution useful.
I needed to extract raw image data from an image buffer in YCbCr format delivered by the iPhone camera (obtained from [AVCaptureVideoDataOutput.availableVideoCVPixelFormatTypes firstObject]), dropping information such as headers and metadata, to pass it on for further processing.
Also, I needed to extract only a small area in the center of the captured video frame, so some cropping was needed.
My conditions allowed capturing video only in landscape orientation, but when the device is positioned in landscape-left orientation, the image is delivered upside down, so I needed to flip it on both axes. When the image is flipped, my idea was to copy the data from the source image buffer in reverse row order and reverse the bytes in each copied row, flipping the image on both axes. That idea really works, and since I need to copy the data from the source buffer anyway, there seems to be little performance penalty whether I read from the start or the end (of course, a bigger image means longer processing, but I deal with really small numbers).
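The copy-backwards-and-reverse idea can be sketched in plain C for a single 8-bit plane (the function name and signature are my own illustration, not code from this answer):

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/* Flip one 8-bit plane on both axes (equivalent to a 180° rotation):
   walk the source rows bottom-up, and mirror each copied row in place.
   bytesPerRow may be larger than width to account for row padding. */
static void flip180(const uint8_t *src, uint8_t *dst,
                    size_t width, size_t height, size_t bytesPerRow)
{
    const uint8_t *row = src + (height - 1) * bytesPerRow; /* last source row */
    for (size_t y = 0; y < height; y++) {
        memcpy(dst, row, width);
        /* reverse the copied row -> horizontal flip */
        for (size_t i = 0, j = width - 1; i < j; i++, j--) {
            uint8_t t = dst[i]; dst[i] = dst[j]; dst[j] = t;
        }
        dst += width;        /* output is packed, no padding */
        row -= bytesPerRow;  /* move one source row up */
    }
}
```

Vertical flip comes from the bottom-up row walk, horizontal flip from the in-row reversal; together they are the 180° turn an upside-down landscape frame needs.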
I'd like to know what others think about this solution, and of course I'd welcome some hints on how to improve the code:
/// Lock pixel buffer
CVPixelBufferLockBaseAddress(imageBuffer, 0);
/// Address where image buffer starts
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
/// Read image parameters
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
/// See whether image is flipped upside down
BOOL isFlipped = (_previewLayer.connection.videoOrientation == AVCaptureVideoOrientationLandscapeLeft);
/// Calculate cropping frame. Crop to scanAreaSize (defined as CGSize constant elsewhere) from the center of an image
CGRect cropFrame = CGRectZero;
cropFrame.size = scanAreaSize;
cropFrame.origin.x = (width / 2.0f) - (scanAreaSize.width / 2.0f);
cropFrame.origin.y = (height / 2.0f) - (scanAreaSize.height / 2.0f);
/// Update proportions to cropped size
width = (size_t)cropFrame.size.width;
height = (size_t)cropFrame.size.height;
/// Allocate memory for output image data. W*H for Y component, W*H/2 for CbCr component
size_t bytes = width * height + (width * height / 2);
uint8_t *outputDataBaseAddress = (uint8_t *)malloc(bytes);
if(outputDataBaseAddress == NULL) {
/// Memory allocation failed, unlock buffer and give up
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
return NULL;
}
/// Get parameters of YCbCr pixel format
CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;
NSUInteger bytesPerRowY = EndianU32_BtoN(bufferInfo->componentInfoY.rowBytes);
NSUInteger offsetY = EndianU32_BtoN(bufferInfo->componentInfoY.offset);
NSUInteger bytesPerRowCbCr = EndianU32_BtoN(bufferInfo->componentInfoCbCr.rowBytes);
NSUInteger offsetCbCr = EndianU32_BtoN(bufferInfo->componentInfoCbCr.offset);
/// Copy image data only, skipping headers and metadata. Create single buffer which will contain Y component data
/// followed by CbCr component data.
/// Process Y component
/// Pointer to the source buffer
uint8_t *src;
/// Pointer to the destination buffer
uint8_t *destAddress;
/// Calculate crop rect offset. Crop offset is number of rows (y * bytesPerRow) + x offset.
/// If image is flipped, then read buffer from the end to flip image vertically. End address is height-1!
int flipOffset = (isFlipped) ? (int)((height - 1) * bytesPerRowY) : 0;
int cropOffset = (int)((cropFrame.origin.y * bytesPerRowY) + flipOffset + cropFrame.origin.x);
/// Set source pointer to Y component buffer start address plus crop rect offset
src = baseAddress + offsetY + cropOffset;
for(int y = 0; y < height; y++) {
    /// Copy one row of pixel data from source into the output buffer.
    destAddress = (outputDataBaseAddress + y * width);
    memcpy(destAddress, src, width);

    if(isFlipped) {
        /// Reverse bytes in row to flip image horizontally
        [self reverseBytes:destAddress bytesSize:(int)width];

        /// Move one row up
        src -= bytesPerRowY;
    }
    else {
        /// Move to the next row
        src += bytesPerRowY;
    }
}
/// Calculate crop offset for CbCr component. The CbCr plane has half as many
/// rows as the Y plane, so the vertical crop origin must be halved too.
flipOffset = (isFlipped) ? (int)(((height - 1) / 2) * bytesPerRowCbCr) : 0;
cropOffset = (int)(((cropFrame.origin.y / 2) * bytesPerRowCbCr) + flipOffset + cropFrame.origin.x);

/// Set source pointer to the CbCr component offset + crop offset
src = (baseAddress + offsetCbCr + cropOffset);
for(int y = 0; y < (height / 2); y++) {
    /// Copy one row of pixel data from source into the output buffer.
    destAddress = (outputDataBaseAddress + (width * height) + y * width);
    memcpy(destAddress, src, width);

    if(isFlipped) {
        /// Reverse bytes in row to flip image horizontally.
        /// NOTE: in the interleaved CbCr plane this should really reverse
        /// 2-byte (Cb,Cr) pairs as units, or the chroma channels get swapped.
        [self reverseBytes:destAddress bytesSize:(int)width];

        /// Move one row up
        src -= bytesPerRowCbCr;
    }
    else {
        src += bytesPerRowCbCr;
    }
}
/// Unlock pixel buffer
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
/// Continue with image data in outputDataBaseAddress;
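The reverseBytes:bytesSize: helper isn't shown above; a plain-C equivalent of what it presumably does might look like this (my sketch, not the author's code). Note that for the interleaved CbCr plane you would want a variant that swaps 2-byte (Cb,Cr) pairs as units rather than single bytes, otherwise the chroma channels end up exchanged after the horizontal flip.

```c
#include <stdint.h>

/* In-place byte reversal of one row; mirrors the row horizontally.
   Correct as-is for the single-byte Y plane; the interleaved CbCr
   plane needs pair-wise swapping instead. */
static void reverseBytes(uint8_t *bytes, int size)
{
    for (int i = 0, j = size - 1; i < j; i++, j--) {
        uint8_t tmp = bytes[i];
        bytes[i] = bytes[j];
        bytes[j] = tmp;
    }
}
```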