Question
I have been programming on iOS for two years and never on the Mac. I am working on a little utility to handle some simple image needs I have in my iOS development. Anyway, I have working code on iOS that runs perfectly, but I have absolutely no idea what the equivalents are on the Mac.
I've tried a bunch of different things, but I really don't understand how to start a graphics context on the Mac outside of a "drawRect:" method. On the iPhone I would just use UIGraphicsBeginImageContext(). I know other posts have said to use lockFocus/unlockFocus, but I'm not sure exactly how to make that work for my needs. Oh, and I really miss UIImage's "CGImage" property. I don't understand why NSImage can't have one, though it sounds a bit trickier than just that.
Here is my working code on iOS; basically, it creates a reflected copy of an image through a mask and composites the two together:
UIImage *mask = [UIImage imageWithContentsOfFile:[[NSBundle mainBundle]pathForResource:@"Mask_Image.jpg" ofType:nil]];
UIImage *image = [UIImage imageNamed:@"Test_Image1.jpg"];
UIGraphicsBeginImageContextWithOptions(mask.size, NO, [[UIScreen mainScreen]scale]);
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextTranslateCTM(ctx, 0.0, mask.size.height);
CGContextScaleCTM(ctx, 1.f, -1.f);
[image drawInRect:CGRectMake(0.f, -mask.size.height, image.size.width, image.size.height)];
UIImage *flippedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRef maskRef = mask.CGImage;
CGImageRef maskCreate = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                          CGImageGetHeight(maskRef),
                                          CGImageGetBitsPerComponent(maskRef),
                                          CGImageGetBitsPerPixel(maskRef),
                                          CGImageGetBytesPerRow(maskRef),
                                          CGImageGetDataProvider(maskRef), NULL, false);
CGImageRef masked = CGImageCreateWithMask([flippedImage CGImage], maskCreate);
CGImageRelease(maskCreate);
UIImage *maskedImage = [UIImage imageWithCGImage:masked];
CGImageRelease(masked);
UIGraphicsBeginImageContextWithOptions(CGSizeMake(image.size.width, image.size.height + (image.size.height * .5)), NO, [[UIScreen mainScreen]scale]);
[image drawInRect:CGRectMake(0,0, image.size.width, image.size.height)];
[maskedImage drawInRect:CGRectMake(0, image.size.height, maskedImage.size.width, maskedImage.size.height)];
UIImage *anotherImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
//do something with anotherImage
Any suggestions for achieving this (simply) on the Mac?
Answer 1:
Here's a simple example that draws a blue circle into an NSImage (I'm using ARC in this example; add retains/releases to taste):
NSSize size = NSMakeSize(50, 50);
NSImage* im = [[NSImage alloc] initWithSize:size];
NSBitmapImageRep* rep = [[NSBitmapImageRep alloc]
    initWithBitmapDataPlanes:NULL
                  pixelsWide:size.width
                  pixelsHigh:size.height
               bitsPerSample:8
             samplesPerPixel:4
                    hasAlpha:YES
                    isPlanar:NO
              colorSpaceName:NSCalibratedRGBColorSpace
                 bytesPerRow:0
                bitsPerPixel:0];
[im addRepresentation:rep];
[im lockFocus];
CGContextRef ctx = [[NSGraphicsContext currentContext] graphicsPort];
CGContextClearRect(ctx, NSMakeRect(0, 0, size.width, size.height));
CGContextSetFillColorWithColor(ctx, [[NSColor blueColor] CGColor]);
CGContextFillEllipseInRect(ctx, NSMakeRect(0, 0, size.width, size.height));
[im unlockFocus];
[[im TIFFRepresentation] writeToFile:@"/Users/USERNAME/Desktop/foo.tiff" atomically:NO];
The main difference is that on OS X you first have to create the image, then you can begin drawing into it; on iOS you create the context, then extract the image from it.
Basically, lockFocus makes the image the current drawing destination; you draw directly into it, then use the image.
I'm not completely sure if this answers all of your question, but I think it's at least one part of it.
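As a rough, untested sketch of how that same lockFocus pattern might cover the flip step from the question (the image name comes from the question, and CGImageForProposedRect:context:hints: is used here as an approximate stand-in for UIImage's CGImage property):
NSImage *image = [NSImage imageNamed:@"Test_Image1"];
NSImage *flipped = [[NSImage alloc] initWithSize:image.size];
[flipped lockFocus];
CGContextRef ctx = [[NSGraphicsContext currentContext] graphicsPort];
// Flip the coordinate system, then draw the source image into it.
CGContextTranslateCTM(ctx, 0.0, image.size.height);
CGContextScaleCTM(ctx, 1.f, -1.f);
[image drawInRect:NSMakeRect(0, 0, image.size.width, image.size.height)
         fromRect:NSZeroRect
        operation:NSCompositeSourceOver
         fraction:1.0];
[flipped unlockFocus];
// Approximate equivalent of UIImage's CGImage property; the returned
// CGImage is tied to the autorelease pool, so it is not released here.
CGImageRef flippedCG = [flipped CGImageForProposedRect:NULL context:nil hints:nil];
That CGImage could then be handed to CGImageCreateWithMask just as in the iOS code.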
Answer 2:
Well, here's the documentation note on UIGraphicsBeginImageContextWithOptions:
"Creates a bitmap-based graphics context with the specified options."
The OS X equivalent, which is also available on iOS (and which UIGraphicsBeginImageContextWithOptions is possibly a wrapper around), is CGBitmapContextCreate:
Declared as:
CGContextRef CGBitmapContextCreate (
    void *data,
    size_t width,
    size_t height,
    size_t bitsPerComponent,
    size_t bytesPerRow,
    CGColorSpaceRef colorspace,
    CGBitmapInfo bitmapInfo
);
Although it's a C API, you could think of CGBitmapContext as a subclass of CGContext. It renders to a pixel buffer, whereas a CGContext renders to an abstract destination.
For UIGraphicsGetImageFromCurrentImageContext, you can use CGBitmapContextCreateImage and pass your bitmap context to create a CGImage.
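A minimal sketch of that route (the size, color, and alpha format here are just illustrative choices):
size_t width = 200;
size_t height = 100;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// NULL data and 0 bytesPerRow let Quartz allocate and lay out the buffer.
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, colorSpace,
                                         (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(colorSpace);
CGContextSetRGBFillColor(ctx, 0.f, 0.f, 1.f, 1.f);
CGContextFillEllipseInRect(ctx, CGRectMake(0, 0, width, height));
// Equivalent of UIGraphicsGetImageFromCurrentImageContext:
CGImageRef image = CGBitmapContextCreateImage(ctx);
CGContextRelease(ctx);
// ...use the CGImage, then CGImageRelease(image);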
Answer 3:
Here is a Swift (2.1 / 10.11 API-compliant) version of Cobbal's answer:
let size = NSMakeSize(50, 50);
let im = NSImage.init(size: size)
let rep = NSBitmapImageRep.init(bitmapDataPlanes: nil,
                                pixelsWide: Int(size.width),
                                pixelsHigh: Int(size.height),
                                bitsPerSample: 8,
                                samplesPerPixel: 4,
                                hasAlpha: true,
                                isPlanar: false,
                                colorSpaceName: NSCalibratedRGBColorSpace,
                                bytesPerRow: 0,
                                bitsPerPixel: 0)
im.addRepresentation(rep!)
im.lockFocus()
let rect = NSMakeRect(0, 0, size.width, size.height)
let ctx = NSGraphicsContext.currentContext()?.CGContext
CGContextClearRect(ctx, rect)
CGContextSetFillColorWithColor(ctx, NSColor.blackColor().CGColor)
CGContextFillRect(ctx, rect)
im.unlockFocus()
Answer 4:
Swift 3 version of Cobbal's answer:
let size = NSMakeSize(50, 50);
let im = NSImage.init(size: size)
let rep = NSBitmapImageRep.init(bitmapDataPlanes: nil,
                                pixelsWide: Int(size.width),
                                pixelsHigh: Int(size.height),
                                bitsPerSample: 8,
                                samplesPerPixel: 4,
                                hasAlpha: true,
                                isPlanar: false,
                                colorSpaceName: NSCalibratedRGBColorSpace,
                                bytesPerRow: 0,
                                bitsPerPixel: 0)
im.addRepresentation(rep!)
im.lockFocus()
let rect = NSMakeRect(0, 0, size.width, size.height)
let ctx = NSGraphicsContext.current()?.cgContext
ctx!.clear(rect)
ctx!.setFillColor(NSColor.black.cgColor)
ctx!.fill(rect)
im.unlockFocus()
Source: https://stackoverflow.com/questions/12223739/ios-to-mac-graphiccontext-explanation-conversion