I am writing an iPhone (iOS 4) program that captures live video from the camera and processes it in real time.
I would prefer to capture in kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format.
Answering my own question: this solved the problem I had (which was to grab YUV output, display it, and process it), although it's not exactly the answer to the original question.
To grab YUV output from the camera:
AVCaptureVideoDataOutput *videoOut = [[AVCaptureVideoDataOutput alloc] init];
// Drop frames that arrive while the delegate is still busy processing.
[videoOut setAlwaysDiscardsLateVideoFrames:YES];
// Ask for bi-planar YUV (4:2:0, video range) instead of the default BGRA.
[videoOut setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
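For completeness, the output then has to be attached to a capture session along with a sample-buffer delegate. A minimal sketch, assuming self implements AVCaptureVideoDataOutputSampleBufferDelegate (the variable names here are mine):

AVCaptureSession *session = [[AVCaptureSession alloc] init];
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
[session addInput:videoIn];
// Deliver frames to the delegate on a dedicated serial queue.
dispatch_queue_t queue = dispatch_queue_create("videoQueue", NULL);
[videoOut setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
[session addOutput:videoOut];
[session startRunning];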
To display it as is, use AVCaptureVideoPreviewLayer; it does not require much code. (You can see the FindMyiCon sample in the WWDC samples pack, for example.)
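Roughly like this, assuming session is the AVCaptureSession from above and the layer is added inside a view controller:

AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];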
To process the YUV Y channel (bi-planar in this case, so the luma sits in its own plane; note that rows may be padded, so copy row by row with memcpy rather than byte by byte):
- (void)processPixelBuffer: (CVImageBufferRef)pixelBuffer {
    CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
    int bufferHeight = CVPixelBufferGetHeight(pixelBuffer);
    int bufferWidth = CVPixelBufferGetWidth(pixelBuffer);
    // Allocate space for the y channel, reallocating when the frame size changes.
    if (bufferWidth != y_channel.width || bufferHeight != y_channel.height)
    {
        if (y_channel.data) free(y_channel.data);
        y_channel.width = bufferWidth;
        y_channel.height = bufferHeight;
        y_channel.data = malloc(y_channel.width * y_channel.height);
    }
    uint8_t *yc = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);
    // Copy the y plane one row at a time: each row may be padded,
    // so bytesPerRow can be larger than bufferWidth.
    for (int row = 0; row < bufferHeight; row++)
    {
        memcpy(y_channel.data + row * bufferWidth, yc + row * bytesPerRow, bufferWidth);
    }
    CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
}
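For reference, processPixelBuffer: gets called from the sample-buffer delegate callback; the CVImageBufferRef is pulled out of the CMSampleBufferRef like this (a sketch, assuming the delegate is the same object that owns processPixelBuffer:):

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // The sample buffer wraps the pixel buffer we want to process.
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    [self processPixelBuffer:pixelBuffer];
}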