I'm reading this tutorial on getting pixel data from the iPhone camera.
While I have no issue running and using this code, I need to take the output of the camera data (which comes in BGRA) and convert it to ARGB so that I can use it with an external library. How do I do this?
If you're on iOS 5.0 or later, you can use vImage in the Accelerate framework to do a NEON-optimized color component swap, using code like the following (adapted from Apple's WebCore source code):
// Requires the Accelerate framework:
// #import <Accelerate/Accelerate.h>
vImage_Buffer src;
src.height = height;
src.width = width;
src.rowBytes = srcBytesPerRow;
src.data = srcRows;
vImage_Buffer dest;
dest.height = height;
dest.width = width;
dest.rowBytes = destBytesPerRow;
dest.data = destRows;
// Each destination byte takes the source byte at map[i]: with BGRA
// input, { 3, 2, 1, 0 } produces ARGB ({ 2, 1, 0, 3 } would give RGBA).
const uint8_t map[4] = { 3, 2, 1, 0 };
vImagePermuteChannels_ARGB8888(&src, &dest, map, kvImageNoFlags);
where width, height, and srcBytesPerRow are obtained from your pixel buffer via CVPixelBufferGetWidth(), CVPixelBufferGetHeight(), and CVPixelBufferGetBytesPerRow(). srcRows would be the pointer to the base address of the bytes in the pixel buffer, and destRows would be memory you allocated to hold the output ARGB image.
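Putting that together, here's a minimal sketch of a complete conversion function, assuming pixelBuffer is the kCVPixelFormatType_32BGRA buffer delivered by your AVCaptureVideoDataOutput callback (the function name ConvertBGRAToARGB is just for illustration):

#import <CoreVideo/CoreVideo.h>
#import <Accelerate/Accelerate.h>

void ConvertBGRAToARGB(CVPixelBufferRef pixelBuffer, void *destRows, size_t destBytesPerRow)
{
    // Lock the buffer so its base address is valid while we read it.
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);

    vImage_Buffer src;
    src.width = CVPixelBufferGetWidth(pixelBuffer);
    src.height = CVPixelBufferGetHeight(pixelBuffer);
    src.rowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer);
    src.data = CVPixelBufferGetBaseAddress(pixelBuffer);

    vImage_Buffer dest;
    dest.width = src.width;
    dest.height = src.height;
    dest.rowBytes = destBytesPerRow;
    dest.data = destRows;

    // BGRA -> ARGB channel permutation.
    const uint8_t map[4] = { 3, 2, 1, 0 };
    vImagePermuteChannels_ARGB8888(&src, &dest, map, kvImageNoFlags);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
}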
This should be much faster than simply iterating over the bytes and swapping the color components.
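For reference, that naive loop would look something like this (a sketch, assuming the rows are tightly packed with no padding):

uint8_t *srcBytes = (uint8_t *)srcRows;
uint8_t *destBytes = (uint8_t *)destRows;
for (size_t i = 0; i < (size_t)width * height * 4; i += 4)
{
    // BGRA in, ARGB out, one pixel at a time.
    destBytes[i]     = srcBytes[i + 3]; // A
    destBytes[i + 1] = srcBytes[i + 2]; // R
    destBytes[i + 2] = srcBytes[i + 1]; // G
    destBytes[i + 3] = srcBytes[i];     // B
}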
Depending on the image size, an even faster solution would be to upload the frame to OpenGL ES, render a simple rectangle with this as a texture, and use glReadPixels() to pull down the RGBA values. Even better would be to use iOS 5.0's texture caches for both upload and download, where this process only takes 1-3 ms for a 720p frame on an iPhone 4. Of course, using OpenGL ES means a lot more supporting code to pull this off.
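If you do go the OpenGL ES route, the readback step itself is a single call; here's a minimal sketch of just that step, assuming a current EAGLContext and a framebuffer that already contains the rendered camera frame (the texture upload and quad rendering are omitted, and the function name is hypothetical):

#import <OpenGLES/ES2/gl.h>

// Reads the currently bound framebuffer back as RGBA bytes.
// outPixels must point to at least width * height * 4 bytes.
static void ReadBackRGBA(GLsizei width, GLsizei height, GLubyte *outPixels)
{
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, outPixels);
}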
Source: https://stackoverflow.com/questions/10654909/converting-bgra-to-argb