Question
I'm loading this (very small) image using:
UIImage* image = [UIImage imageNamed:@"someFile.png"];
The image is 4x1 and it contains a red, green, blue and white pixel from left to right, in that order.
Next, I get the pixel data out of the underlying CGImage:
NSData* data = (NSData*)CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
Now, for some reason, the pixel data is laid out differently depending on the iOS device.
When I run the app in the simulator or on my iPhone 4, the pixel data looks like this:
(255,0,0),(0,255,0),(0,0,255),(255,255,255)
So, the pixels are 3 bytes per pixel, with blue as the most significant byte and red as the least significant. So I guess you call that BGR?
When I check the CGBitmapInfo, I can see that the kCGBitmapByteOrderMask is kCGBitmapByteOrderDefault. I can't find anywhere that explains what "default" is.
On the other hand, when I run it on my first gen iPhone, the pixel data looks like this:
(0,0,255,255),(0,255,0,255),(255,0,0,255),(255,255,255,255)
So 4 bytes per pixel, alpha as the most significant byte, and blue as the least significant. So... that's called ARGB?
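If it helps, the way I'm reading this is: treat the four bytes as one 32-bit value, and on a little-endian CPU the last byte in memory becomes the most significant byte of that value. A tiny illustration (just my own reasoning, using the red pixel's bytes from above):
uint8_t bytes[4] = {0, 0, 255, 255};   // B, G, R, A in memory order for the red pixel
uint32_t value;
memcpy(&value, bytes, sizeof(value));  // reinterpret the 4 bytes as one 32-bit integer
NSLog(@"0x%08X", value);               // little-endian CPU: 0xFFFF0000, i.e. A=FF, R=FF, G=00, B=00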
I've been looking at the CGBitmapInfo for clues on how to detect the layout. On the first gen iPhone, the kCGBitmapAlphaInfoMask is kCGImageAlphaNoneSkipFirst. That means that the most significant bits are ignored. So that makes sense. On the first gen iPhone the kCGBitmapByteOrderMask is kCGBitmapByteOrder32Little. I don't know what that means or how to relate it back to how the R, G and B components are laid out in memory. Can anyone shed some light on this?
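For reference, here is roughly how I'm reading the bitmap info (a sketch of the check described above, using the image loaded earlier):
CGBitmapInfo info = CGImageGetBitmapInfo(image.CGImage);
CGImageAlphaInfo alphaInfo = (CGImageAlphaInfo)(info & kCGBitmapAlphaInfoMask);
CGBitmapInfo byteOrder = info & kCGBitmapByteOrderMask;
NSLog(@"bitsPerPixel=%zu bitsPerComponent=%zu alphaInfo=%u byteOrder=%u",
      CGImageGetBitsPerPixel(image.CGImage),
      CGImageGetBitsPerComponent(image.CGImage),
      (unsigned)alphaInfo, (unsigned)byteOrder);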
Thanks.
Answer 1:
To ensure device independence, it may be better to use a CGBitmapContext to populate the data for you.
Something like this should work:
// Get the CGImageRef
CGImageRef imageRef = [theImage CGImage];
// Find width and height
NSUInteger width = CGImageGetWidth(imageRef);
NSUInteger height = CGImageGetHeight(imageRef);
// Setup color space
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Alloc data that the image data will be put into
unsigned char *rawData = malloc(height * width * 4);
// Create a CGBitmapContext to draw an image into
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * width;
NSUInteger bitsPerComponent = 8;
CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                                             bitsPerComponent, bytesPerRow, colorSpace,
                                             kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
CGColorSpaceRelease(colorSpace);
// Draw the image which will populate rawData
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
for (NSUInteger y = 0; y < height; y++) {
    for (NSUInteger x = 0; x < width; x++) {
        NSUInteger byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
        CGFloat red = rawData[byteIndex];
        CGFloat green = rawData[byteIndex + 1];
        CGFloat blue = rawData[byteIndex + 2];
        CGFloat alpha = rawData[byteIndex + 3];
    }
}
free(rawData);
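With the 4x1 test image from the question, that loop should see (255,0,0,255), (0,255,0,255), (0,0,255,255), (255,255,255,255) on every device, because drawing into your own context converts the source image into the format you asked for (8 bits per component, RGBA order, big-endian byte order). One caveat: kCGImageAlphaPremultipliedLast means the red, green and blue values come back premultiplied by alpha, so pixels that aren't fully opaque won't match the original component values exactly.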
Answer 2:
I'm sure in 5+ years you've found a solution, but this is still a shady area of Core Graphics, so I wanted to drop in my two cents.
Different devices and file formats may use different byte order for various reasons, mostly because they can and because of performance. There's plenty of information around on this, including RGBA color space representation on Wikipedia.
Core Graphics often uses kCGBitmapByteOrderDefault, which is rather useless, but it also defines host-endian bitmap formats, which you can use for cross-reference:
#ifdef __BIG_ENDIAN__
#define kCGBitmapByteOrder16Host kCGBitmapByteOrder16Big
#define kCGBitmapByteOrder32Host kCGBitmapByteOrder32Big
#else
#define kCGBitmapByteOrder16Host kCGBitmapByteOrder16Little
#define kCGBitmapByteOrder32Host kCGBitmapByteOrder32Little
#endif
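As a sketch of how those host constants can be put to use (my addition, not part of the original answer; it assumes theImage as in Answer 1): draw into a context that combines kCGImageAlphaPremultipliedFirst with kCGBitmapByteOrder32Host, and each pixel can then be read as a plain 32-bit value laid out as ARGB, regardless of the device's endianness.
CGImageRef imageRef = [theImage CGImage];
size_t width = CGImageGetWidth(imageRef);
size_t height = CGImageGetHeight(imageRef);
uint32_t *pixels = calloc(width * height, sizeof(uint32_t));
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Host byte order + alpha first: the bytes in memory are BGRA on little-endian
// devices and ARGB on big-endian ones, but a uint32_t read is always 0xAARRGGBB.
CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8,
                                             width * sizeof(uint32_t), colorSpace,
                                             kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Host);
CGColorSpaceRelease(colorSpace);
CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CGContextRelease(context);
for (size_t i = 0; i < width * height; i++) {
    uint32_t p = pixels[i];
    uint8_t alpha = (p >> 24) & 0xFF;
    uint8_t red   = (p >> 16) & 0xFF;
    uint8_t green = (p >>  8) & 0xFF;
    uint8_t blue  =  p        & 0xFF;
    // ... use alpha/red/green/blue here ...
}
free(pixels);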
When used with Swift, this is also useless, because those #defines aren't available as-is. One way to work around this is to create a bridging header and an equivalent implementation, and redefine those constants there.
// Bridge.h
extern const int CGBitmapByteOrder16Host;
extern const int CGBitmapByteOrder32Host;
// Bridge.m
#import "Bridge.h"
const int CGBitmapByteOrder16Host = kCGBitmapByteOrder16Host;
const int CGBitmapByteOrder32Host = kCGBitmapByteOrder32Host;
Now the CGBitmapByteOrder16Host and CGBitmapByteOrder32Host constants should be available from Swift.
Source: https://stackoverflow.com/questions/7300591/how-to-determine-and-interpret-the-pixel-format-of-a-cgimage