I have a large 1D dynamic array of type double in my program that represents a FITS image on disk, i.e. it holds all the pixel values of the image.
Here's the solution based on your code, extended to support all 3 channels, each 16 bits. Note that I assign imageArray[i] to every channel. This is only because I haven't yet written the code that reads colour FITS files, so to test things out I'm assigning the same image to each channel. The result is, of course, a greyscale image on screen, but it can easily be modified so that red is assigned to red and so on.
NSBitmapImageRep *colorRep = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:nil
pixelsWide:width
pixelsHigh:height
bitsPerSample:16
samplesPerPixel:3
hasAlpha:NO
isPlanar:NO
colorSpaceName:NSCalibratedRGBColorSpace
bytesPerRow:(3*2*width)
bitsPerPixel:48];
rowBytes = [colorRep bytesPerRow];
NSLog(@"Row Bytes: %ld", (long)rowBytes);
unsigned char *pix = [colorRep bitmapData];
for ( NSInteger i = 0; i < height * width; ++i )
{
    *((unsigned short*)(pix + 6*i))     = (unsigned short)imageArray[i];
    *((unsigned short*)(pix + 6*i + 2)) = (unsigned short)imageArray[i];
    *((unsigned short*)(pix + 6*i + 4)) = (unsigned short)imageArray[i];
}
NSImage *theImage = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[theImage addRepresentation:colorRep];
[myimageView setImage:theImage];
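One thing to watch: imageArray holds doubles, and raw FITS pixel values are rarely already in the 0..65535 range, so assigning them straight into a 16-bit channel will truncate or wrap. Here's a minimal C sketch of one possible fix; the function name and the min/max scaling choice are mine, not part of the original code:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical helper (not in the original code): linearly rescale raw
   double pixel values into 0..65535 before storing them in a 16-bit
   channel. A straight cast of an arbitrary FITS value would truncate
   or wrap; min/max scaling is one simple stretch among many. */
static void scaleTo16Bit(const double *imageArray, unsigned short *out, size_t n)
{
    double lo = imageArray[0], hi = imageArray[0];
    for (size_t i = 1; i < n; ++i) {
        if (imageArray[i] < lo) lo = imageArray[i];
        if (imageArray[i] > hi) hi = imageArray[i];
    }
    double range = (hi > lo) ? (hi - lo) : 1.0; /* avoid divide-by-zero */
    for (size_t i = 0; i < n; ++i)
        out[i] = (unsigned short)(65535.0 * (imageArray[i] - lo) / range);
}
```

FITS viewers often apply other stretches (log, asinh, percentile clips), but a linear stretch is enough to get something sensible on screen.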
Adapt the answer from Converting RGB data into a bitmap in Objective-C++ Cocoa to your data.
initWithData: only works for image types that the system already knows about. For unknown types -- and raw pixel data -- you need to construct the image representation yourself. You can do this via Core Graphics, as suggested in the answer that Kirby links to. Alternatively, you can use NSImage by creating and adding an NSBitmapImageRep.
The exact details will depend on the format of your pixel data, but here's an example of the process for a greyscale image, where the source data (the samples array) is represented as doubles in the range [0,1]:
/* generate a greyscale image representation */
NSBitmapImageRep *greyRep =
[[NSBitmapImageRep alloc]
initWithBitmapDataPlanes: nil // allocate the pixel buffer for us
pixelsWide: xDim
pixelsHigh: yDim
bitsPerSample: 8
samplesPerPixel: 1
hasAlpha: NO
isPlanar: NO
colorSpaceName: NSCalibratedWhiteColorSpace // 0 = black, 1 = white in this color space
bytesPerRow: 0 // passing 0 means "you figure it out"
bitsPerPixel: 8]; // this must agree with bitsPerSample and samplesPerPixel
NSInteger rowBytes = [greyRep bytesPerRow];
unsigned char* pix = [greyRep bitmapData];
for ( NSInteger i = 0; i < yDim; ++i )
{
    for ( NSInteger j = 0; j < xDim; ++j )
    {
        pix[i * rowBytes + j] = (unsigned char)(255 * samples[i * xDim + j]);
    }
}
NSImage* greyscale = [[NSImage alloc] initWithSize:NSMakeSize(xDim,yDim)];
[greyscale addRepresentation:greyRep];
[greyRep release];
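The pix[i * rowBytes + j] indexing matters because the rep may pad each row past xDim bytes, which is why the destination index isn't i * xDim + j. Here's a small standalone C sketch of the same addressing with a deliberately padded row (the buffer dimensions are invented for illustration):

```c
#include <assert.h>
#include <string.h>

/* Invented dimensions for illustration: 5 pixels per row, but each row
   occupies 8 bytes, i.e. 3 bytes of padding at the end of every row. */
enum { kWidth = 5, kHeight = 3, kRowBytes = 8 };

/* Fill an 8-bit greyscale buffer with white, skipping the padding by
   indexing with i * rowBytes + j rather than i * width + j. */
static void fillWhite(unsigned char *pix)
{
    memset(pix, 0, kHeight * kRowBytes);
    for (int i = 0; i < kHeight; ++i)
        for (int j = 0; j < kWidth; ++j)
            pix[i * kRowBytes + j] = 255;
}
```

With i * width + j instead, the rows would drift left by the padding amount, producing the classic "sheared" image.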
EDIT (in response to comment)
I didn't know for sure whether 16 bit samples were supported, but you seem to have confirmed that they are.
What you're seeing stems from still treating the pixels as unsigned char, which is 8 bits. So you're only setting half of each row, and you're setting each of those pixels, one byte at a time, to the two-byte value 0xFF00 -- not quite true white, but very close. The other half of the image is never touched, but it was initialised to 0, so it stays black.
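To see the half-row effect in isolation, here's a small standalone C sketch (dimensions invented): a row that holds xDim 16-bit samples occupies 2 * xDim bytes, so byte-indexed writes for j < xDim only ever reach the first half of it.

```c
#include <assert.h>
#include <string.h>

/* Invented dimensions: a row of kXDim 16-bit samples spans 2 * kXDim bytes. */
enum { kXDim = 4, kBugRowBytes = 2 * kXDim };

/* Reproduce the bug described above: indexing the row as bytes while
   intending 16-bit pixels writes one byte per loop step, so only the
   first half of the row is ever touched. */
static void buggyFill(unsigned char *row)
{
    memset(row, 0, kBugRowBytes);
    for (int j = 0; j < kXDim; ++j)
        row[j] = 0xFF; /* meant to be the 16-bit pixel 0xFFFF */
}
```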
You need instead to work in 16 bit, by first casting the value you get back from the rep:
unsigned short * pix = (unsigned short*) [greyRep bitmapData];
And then assigning 16 bit values to the pixels:
if ( j % 2 )
{
pix[i * rowBytes + j] = 0xFFFF;
}
else
{
pix[i * rowBytes + j] = 0;
}
Scratch that -- rowBytes is in bytes, so we need to stick with unsigned char for pix and cast when assigning, which is a bit uglier:
if ( j % 2 )
{
*((unsigned short*) (pix + i * rowBytes + j * 2)) = 0xFFFF;
}
else
{
*((unsigned short*) (pix + i * rowBytes + j * 2)) = 0;
}
(I've switched the order of the clauses because the == 0 test seemed redundant. Actually, for something like this it would be much neater to use the ?: syntax, but enough of this C futzing.)
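For completeness, the corrected addressing can be exercised outside Cocoa. This standalone C sketch (names and dimensions are mine) fills alternating 0xFFFF and 0 pixels exactly as above, and this time every pixel in every row gets written:

```c
#include <assert.h>
#include <string.h>

/* Invented dimensions: 4 pixels wide, 2 rows, 16 bits per pixel, no padding. */
enum { kW = 4, kH = 2, kStripeRowBytes = kW * 2 };

/* The corrected addressing: offsets stay in bytes (i * rowBytes + j * 2)
   and the cast makes each store 16 bits wide, so the full width of every
   row is covered. Produces alternating 0 / 0xFFFF vertical stripes. */
static void stripeFill(unsigned char *pix)
{
    for (int i = 0; i < kH; ++i)
        for (int j = 0; j < kW; ++j)
            *((unsigned short *)(pix + i * kStripeRowBytes + j * 2)) =
                (j % 2) ? 0xFFFF : 0;
}
```

(Strictly speaking, casting an arbitrary byte pointer to unsigned short* assumes suitable alignment; bitmapData buffers are allocated with adequate alignment in practice.)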