Question
For the past 4 to 5 hours I've been wrestling with a very bizarre issue. I have an array of bytes containing pixel values, from which I'd like to make an image. The array holds 32-bit-per-component values. There is no alpha channel, so the image is 96 bits/pixel.
I have specified all of this to the CGImageCreate function as follows:
CGImageRef img = CGImageCreate(width, height, 32, 96, bytesPerRow, space, kCGImageAlphaNone, provider, NULL, NO, kCGRenderingIntentDefault);
bytesPerRow is 3*width*4 because there are 3 components per pixel and each component takes 4 bytes (32 bits), so the total bytes per row is 3*4*width. The data provider is defined as follows:
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bitmapData, 3*4*width*height, NULL);
This is where things get bizarre. In my array I am explicitly setting the values to 0x000000FF (for all 3 channels), and yet the image comes out completely white. If I set the value to 0xFFFFFF00, the image comes out black. This tells me that the program is, for some reason, not reading all 4 bytes of each component and is instead reading only the least significant byte. I have tried all sorts of combinations, even including an alpha channel, but it has made no difference.
The program is blind to this: 0xAAAAAA00. It simply reads it as 0. When I explicitly specify that there are 32 bits per component, shouldn't the function take this into account and actually read 4 bytes per component from the array?
The byte array is defined as: bitmapData = (char*)malloc(bytesPerRow*height);
And I am assigning values to the array as follows:
for(i=0;i<width*height;i++)
{
    *((unsigned int *)(bitmapData + 12*i + 0)) = 0xFFFFFF00; /* red   */
    *((unsigned int *)(bitmapData + 12*i + 4)) = 0xFFFFFF00; /* green */
    *((unsigned int *)(bitmapData + 12*i + 8)) = 0xFFFFFF00; /* blue  */
}
Note that I cast through unsigned int * so that each assignment writes 4 bytes of memory. i is multiplied by 12 because there are 12 bytes per pixel, and the offsets of 4 and 8 address the green and blue channels. I have inspected the array's memory in the debugger and it looks perfectly OK: the loop really is writing all 4 bytes of each component. Any pointers would be most helpful. My ultimate goal is to be able to read 32-bit FITS files, for which I already have the program written; I am only testing the code above with this array.
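For what it's worth, the same fill can be written with a typed pointer, which makes the 4-byte stores explicit. This is just a restatement of the loop above, not a fix for the rendering problem:

unsigned int *components = (unsigned int *)bitmapData;
for(i = 0; i < 3*width*height; i++)
    components[i] = 0xFFFFFF00; /* every R, G and B component in turn */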
Here is the code in its entirety, in case it matters. This is in the drawRect:(NSRect)dirtyRect method of my custom view:
int width, height, bytesPerRow;
int i;
width = 256;
height = 256;
bytesPerRow = 3*width*4; /* 3 components per pixel, 4 bytes per component */
char *bitmapData;
bitmapData = (char*)malloc(bytesPerRow*height);
for(i=0;i<width*height;i++)
{
    *((unsigned int *)(bitmapData + 12*i + 0)) = 0xFFFFFF00; /* red   */
    *((unsigned int *)(bitmapData + 12*i + 4)) = 0xFFFFFF00; /* green */
    *((unsigned int *)(bitmapData + 12*i + 8)) = 0xFFFFFF00; /* blue  */
}
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, bitmapData, 3*4*width*height, NULL);
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGImageRef img = CGImageCreate(width, height, 32, 96, bytesPerRow, space, kCGImageAlphaNone, provider, NULL, NO, kCGRenderingIntentDefault);
CGColorSpaceRelease(space);
CGDataProviderRelease(provider);
CGContextRef theContext = [[NSGraphicsContext currentContext] graphicsPort];
CGContextDrawImage(theContext, CGRectMake(0,0,width,height), img);
Answer 1:
I see a few things worth pointing out:
First, the Quartz 2D Programming Guide doesn't list 96-bpp RGB as a supported format. You might try 128-bpp RGB.
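For illustration only (this sketch is mine, not part of the original answer): the guide's pixel-format table pairs 32 bits per component with 128 bpp, float components, and a skipped fourth component, so an adaptation of the code above to that layout might look roughly like this:

size_t floatBytesPerRow = width * 4 * sizeof(float); /* R, G, B + one skipped pad component */
float *floatData = (float *)malloc(floatBytesPerRow * height);
for(i = 0; i < width*height; i++)
{
    floatData[4*i + 0] = 1.0f; /* red   */
    floatData[4*i + 1] = 1.0f; /* green */
    floatData[4*i + 2] = 1.0f; /* blue  */
    floatData[4*i + 3] = 0.0f; /* padding component, skipped by Quartz */
}
CGDataProviderRef floatProvider = CGDataProviderCreateWithData(NULL, floatData, floatBytesPerRow*height, NULL);
CGImageRef floatImg = CGImageCreate(width, height, 32, 128, floatBytesPerRow, space,
                                    kCGImageAlphaNoneSkipLast | kCGBitmapFloatComponents,
                                    floatProvider, NULL, NO, kCGRenderingIntentDefault);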
Second, you're working on a little-endian system*, which means the least significant byte comes first in memory. Change the value you assign to each component to 0x330000EE and you will see a light grey (EE), not a dark grey (33).
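A quick way to see that byte order for yourself (a hypothetical check, not from the original answer):

#include <stdio.h>

int main(void)
{
    unsigned int v = 0x330000EE;
    unsigned char *b = (unsigned char *)&v;
    /* On a little-endian machine this prints "EE 00 00 33":
       the least significant byte sits at the lowest address,
       which is why a renderer reading one byte per component sees EE. */
    printf("%02X %02X %02X %02X\n", b[0], b[1], b[2], b[3]);
    return 0;
}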
Most importantly, bbum is absolutely right when he points out that your display can't render that range of color**. It's getting squashed down to 8-bpc just for display. If it's correct in memory, then it's correct in memory.
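If you'd rather verify that than take the display's word for it, one option (a sketch, assuming the float-component format above is also accepted for bitmap contexts, as the same table suggests) is to draw the image into a 128-bpp context and read a pixel back:

CGContextRef check = CGBitmapContextCreate(NULL, width, height, 32,
                                           width * 4 * sizeof(float), space,
                                           kCGImageAlphaNoneSkipLast | kCGBitmapFloatComponents);
CGContextDrawImage(check, CGRectMake(0, 0, width, height), img);
const float *px = (const float *)CGBitmapContextGetData(check);
/* px[0], px[1], px[2] are the first pixel's R, G, B at full float precision. */
CGContextRelease(check);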
*: More's the pity. R.I.P., PPC.
**: Maybe NASA has one that can?
Source: https://stackoverflow.com/questions/5494535/32-bit-component-images-with-cgimagecreate-are-actually-only-8-bit-component