I am trying to process an image in Core Graphics and then return the processed image back to an NSImage for saving and displaying. There are ample resources on how to do this in iOS, but the corresponding helper methods seem to be missing from NSImage. In iOS the class method is imageWithCGImage: — how can you do this on Mac OS?
The matching method in NSImage is initWithCGImage:size:.
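For example, a minimal sketch in Objective-C (ARC assumed; the helper name NSImageFromCGImage is mine) that wraps an existing CGImageRef, using its pixel dimensions as the point size:

    #import <Cocoa/Cocoa.h>

    // Wrap a CGImageRef in an NSImage, using its pixel size as the point size (1x).
    NSImage *NSImageFromCGImage(CGImageRef cgImage) {
        NSSize pointSize = NSMakeSize(CGImageGetWidth(cgImage), CGImageGetHeight(cgImage));
        return [[NSImage alloc] initWithCGImage:cgImage size:pointSize];
    }

Passing the pixel size keeps one point per pixel, which is the simplest choice when you don't need Retina variants.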
The second argument is the image's size in points. The ratio between the CGImage's size in pixels and the NSImage's size in points is the scale factor. So, for example, if you have a 100×100 px CGImage and pass a size of (NSSize){ 50.0, 50.0 }, the image will be 50×50 points and double-resolution (2x).
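A sketch of that case, assuming the CGImage was rendered at twice the intended point size (the helper name is hypothetical):

    #import <Cocoa/Cocoa.h>

    // Assuming the CGImage holds 2x pixels (e.g. drawn in a Retina context),
    // passing half the pixel dimensions yields a double-resolution NSImage.
    NSImage *RetinaImageFromCGImage(CGImageRef cgImage) {
        NSSize pointSize = NSMakeSize(CGImageGetWidth(cgImage) / 2.0,
                                      CGImageGetHeight(cgImage) / 2.0);
        return [[NSImage alloc] initWithCGImage:cgImage size:pointSize];
    }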
Usually you should just pass the CGImage's size in pixels as the size in points. For handling multiple scale factors, it's better to use a single NSImage containing multiple NSImageReps, like what you get from -[NSWorkspace iconForFileType:] for most types, or from creating an NSImage from a typical .icns file; a sketch of building one follows.
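If you already have 1x and 2x renderings as CGImages, one way to assemble such a multi-rep image might look like this (helper name and parameters are assumptions; the 2x image is assumed to have exactly twice the pixel dimensions of the 1x image; ARC assumed):

    #import <Cocoa/Cocoa.h>

    // Build one NSImage holding 1x and 2x representations of the same content.
    NSImage *MultiRepImage(CGImageRef cgImage1x, CGImageRef cgImage2x) {
        NSSize pointSize = NSMakeSize(CGImageGetWidth(cgImage1x), CGImageGetHeight(cgImage1x));
        NSImage *image = [[NSImage alloc] initWithSize:pointSize];

        NSBitmapImageRep *rep1x = [[NSBitmapImageRep alloc] initWithCGImage:cgImage1x];
        rep1x.size = pointSize;   // 1x: point size equals pixel size
        [image addRepresentation:rep1x];

        NSBitmapImageRep *rep2x = [[NSBitmapImageRep alloc] initWithCGImage:cgImage2x];
        rep2x.size = pointSize;   // 2x: same point size, double the pixels
        [image addRepresentation:rep2x];

        return image;
    }

AppKit then picks the representation that best matches the backing scale of the screen it draws on.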
Source: https://stackoverflow.com/questions/9098388/getting-nsimage-from-cgimageref