Question
I have a large 1D dynamic array in my program that represents a FITS image on disk i.e. it holds all the pixel values of the image. The type of the array is double. At the moment, I am only concerned with monochrome images.
Since Cocoa does not support the FITS format directly, I am reading in the images using the CFITSIO library. This works - I can manipulate the array as I wish and save the result to disk using the library.
However, I now want to display the image. I presume this is something NSImage or NSView can do. But the class references don't seem to list a method which could take a C array and ultimately return an NSImage object. The closest I found was -initWithData:(NSData*). But I'm not 100% sure if this is what I need.
Am I barking up the wrong tree here? Any pointers to a class or method which could handle this would be appreciated.
EDIT:
Here's the updated code. Note that I'm setting every pixel to 0xFFFF, which only results in a grey image. This is of course just a test; when loading the actual FITS file, I replace 0xFFFF with imageArray[i * width + j]. This works perfectly in 8 bits (of course, I divide every pixel value by 256 to represent it in 8 bits).
NSBitmapImageRep *greyRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:nil
pixelsWide:width
pixelsHigh:height
bitsPerSample:16
samplesPerPixel:1
hasAlpha:NO
isPlanar:NO
colorSpaceName:NSCalibratedWhiteColorSpace
bytesPerRow:0
bitsPerPixel:16];
NSInteger rowBytes = [greyRep bytesPerRow];
unsigned short *pix = (unsigned short *)[greyRep bitmapData];
NSLog(@"Row Bytes: %ld", (long)rowBytes);
if(temp.bitPix == 16) // 16 bit image
{
for(i=0;i<height;i++)
{
for(j=0;j<width;j++)
{
pix[i * rowBytes + j] = 0xFFFF;
}
}
}
I also tried using Quartz2D directly. That does produce a proper image, even in 16 bits. But bizarrely, the data array takes 0xFF as white and not 0xFFFF. So I still have to divide everything by 0xFF - losing data in the process. Quartz2D code:
short *grey = (short *)malloc(width * height * sizeof(short));
for(int i = 0; i < width * height; i++)
{
    grey[i] = imageArray[i];
}
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceGray();
CGContextRef bitmapContext = CGBitmapContextCreate(grey, width, height, 16, width * 2, colorSpace, kCGImageAlphaNone);
CGColorSpaceRelease(colorSpace);
CGImageRef cgImage = CGBitmapContextCreateImage(bitmapContext);
NSImage *greyImage = [[NSImage alloc] initWithCGImage:cgImage size:NSMakeSize(width, height)];
CGImageRelease(cgImage);          // the NSImage retains what it needs
CGContextRelease(bitmapContext);
Any suggestions?
Answer 1:
initWithData: only works for image types that the system already knows about. For unknown types -- and raw pixel data -- you need to construct the image representation yourself. You can do this via Core Graphics as suggested in the answer that Kirby links to. Alternatively, you can use NSImage by creating and adding an NSBitmapImageRep.
The exact details will depend on the format of your pixel data, but here's an example of the process for a greyscale image where the source data (the samples array) is represented as double in the range [0,1]:
/* generate a greyscale image representation */
NSBitmapImageRep *greyRep =
[[NSBitmapImageRep alloc]
initWithBitmapDataPlanes: nil // allocate the pixel buffer for us
pixelsWide: xDim
pixelsHigh: yDim
bitsPerSample: 8
samplesPerPixel: 1
hasAlpha: NO
isPlanar: NO
colorSpaceName: NSCalibratedWhiteColorSpace // 0 = black, 1 = white in this color space
bytesPerRow: 0 // passing 0 means "you figure it out"
bitsPerPixel: 8]; // this must agree with bitsPerSample and samplesPerPixel
NSInteger rowBytes = [greyRep bytesPerRow];
unsigned char* pix = [greyRep bitmapData];
for ( NSInteger i = 0; i < yDim; ++i )
{
    for ( NSInteger j = 0; j < xDim; ++j )
    {
        pix[i * rowBytes + j] = (unsigned char)(255 * samples[i * xDim + j]);
    }
}
NSImage* greyscale = [[NSImage alloc] initWithSize:NSMakeSize(xDim,yDim)];
[greyscale addRepresentation:greyRep];
[greyRep release];
EDIT (in response to comment)
I didn't know for sure whether 16 bit samples were supported, but you seem to have confirmed that they are.
What you're seeing stems from still treating the pixels as unsigned char, which is 8 bits. So you're only setting half of each row, and you're setting each of those pixels, one byte at a time, to the two byte value 0xFF00 -- not quite true white, but very close. The other half of the image is not touched, but would have been initialised to 0, so it stays black.
You need instead to work in 16 bit, by first casting the value you get back from the rep:
unsigned short * pix = (unsigned short*) [greyRep bitmapData];
And then assigning 16 bit values to the pixels:
if ( j % 2 )
{
pix[i * rowBytes + j] = 0xFFFF;
}
else
{
pix[i * rowBytes + j] = 0;
}
Scratch that: rowBytes is in bytes, so we need to stick with unsigned char for pix and cast when assigning, which is a bit uglier:
if ( j % 2 )
{
*((unsigned short*) (pix + i * rowBytes + j * 2)) = 0xFFFF;
}
else
{
*((unsigned short*) (pix + i * rowBytes + j * 2)) = 0;
}
(I've switched the order of the clauses because the == 0 test seemed redundant. Actually, for something like this it would be much neater to use the ?: syntax, but enough of this C futzing.)
Answer 2:
Here's the solution based on your code, extended to support all 3 channels, each 16 bits. Note that I assign imageArray[i] to each channel. This is only because I haven't yet written the code to read colour FITS files, so to test things out I'm just assigning the same image to each channel. The result is, of course, a greyscale image on screen, but it can easily be modified so that red is assigned to red, and so on.
NSBitmapImageRep *colorRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:nil
pixelsWide:width
pixelsHigh:height
bitsPerSample:16
samplesPerPixel:3
hasAlpha:NO
isPlanar:NO
colorSpaceName:NSCalibratedRGBColorSpace
bytesPerRow:(3*2*width)
bitsPerPixel:48];
rowBytes = [colorRep bytesPerRow];
NSLog(@"Row Bytes: %ld", (long)rowBytes);
pix = [colorRep bitmapData];
for(i=0;i<height*width;++i)
{
*((unsigned short*)(pix + 6*i)) = imageArray[i];
*((unsigned short*)(pix + 6*i + 2)) = imageArray[i];
*((unsigned short*)(pix + 6*i + 4)) = imageArray[i];
}
NSImage *theImage = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[theImage addRepresentation:colorRep];
[myimageView setImage:theImage];
Answer 3:
Adapt the answer from Converting RGB data into a bitmap in Objective-C++ Cocoa to your data.
Source: https://stackoverflow.com/questions/5458770/nsimage-from-a-1d-pixel-array