NSImage

How to composite several NSImages into one big image?

Submitted by 给你一囗甜甜゛ on 2019-12-03 10:27:17
I have a collection of objects, each describing an image name, its size, and its X/Y location. The collection is sorted by "layers", so I can composite the images with a sort of painter's algorithm. From this, I can determine the rectangle necessary to hold all of the images, so now what I want to do is:

1. Create some sort of buffer to hold the result (the NS equivalent of what iPhone OS calls a UIGraphicsContext).
2. Draw all the images into the buffer.
3. Snag a new NSImage out of the composited result of the buffer.

In iPhone OS, this is the code that does what I want:

    UIGraphicsBeginImageContext(woSize);
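Under AppKit, lockFocus / unlockFocus give you exactly such a buffer. A minimal sketch of the three steps above, assuming a hypothetical `layers` array whose elements expose an `image` and a `frame` (neither name is from the original post), and a precomputed `totalSize`:

```objc
// Sketch: composite back-to-front "layers" into one NSImage.
// `LayerRecord`, `layers`, and `totalSize` are illustrative assumptions.
NSImage *result = [[NSImage alloc] initWithSize:totalSize];
[result lockFocus];
for (LayerRecord *layer in layers) {
    [layer.image drawInRect:layer.frame
                   fromRect:NSZeroRect   // use the whole source image
                  operation:NSCompositingOperationSourceOver
                   fraction:1.0];
}
[result unlockFocus];
// `result` now holds the composite and can be used like any NSImage.
```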

Get the correct image width and height of an NSImage

Submitted by 坚强是说给别人听的谎言 on 2019-12-03 08:42:44
Question: I use the code below to get the width and height of an NSImage:

    NSImage *image = [[[NSImage alloc] initWithContentsOfFile:[NSString stringWithFormat:s]] autorelease];
    imageWidth = [image size].width;
    imageHeight = [image size].height;
    NSLog(@"%f:%f", imageWidth, imageHeight);

But sometimes imageWidth, imageHeight do not return the correct values. For example, when I read an image, the EXIF info displays:

    PixelXDimension = 2272;
    PixelYDimension = 1704;

but imageWidth, imageHeight output 521:390.

Answer 1:

Resize and Save NSImage?

Submitted by 荒凉一梦 on 2019-12-03 07:57:48
I have an NSImageView whose image I get from an NSOpenPanel. That works great. Now, how can I take that NSImage, halve its size, and save it in the same format and the same directory as the original? If you can help at all with anything I'd appreciate it, thanks. Check the ImageCrop sample project from Matt Gemmell: http://mattgemmell.com/source/ — a nice example of how to resize / crop images. Finally, you can use something like this to save the result (dirty sample):

    // Write to TIF
    [[resultImg TIFFRepresentation] writeToFile:@"/Users/Anne/Desktop/Result.tif" atomically:YES];
    // Write to
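Putting the two halves together, a hedged sketch of the halve-then-save step — `sourceImage` and `outputPath` are illustrative names, and saving always as TIFF is an assumption (matching the original format would need an extra check):

```objc
// Sketch: draw the image at half size into a fresh NSImage, then save it.
// `sourceImage` and `outputPath` are hypothetical inputs.
NSSize halfSize = NSMakeSize(sourceImage.size.width / 2.0,
                             sourceImage.size.height / 2.0);
NSImage *resized = [[NSImage alloc] initWithSize:halfSize];
[resized lockFocus];
[sourceImage drawInRect:NSMakeRect(0, 0, halfSize.width, halfSize.height)
               fromRect:NSZeroRect          // whole source image
              operation:NSCompositingOperationCopy
               fraction:1.0];
[resized unlockFocus];
[[resized TIFFRepresentation] writeToFile:outputPath atomically:YES];
```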

Get pixels and colours from NSImage

Submitted by 懵懂的女人 on 2019-12-03 06:25:26
I have created an NSImage object and would ideally like to determine how many pixels of each colour it contains. Is this possible? I suggest creating your own bitmap context, wrapping it in a graphics context and setting that as the current context, telling the image to draw itself, and then accessing the pixel data behind the bitmap context directly. This will be more code, but will save you both a trip through a TIFF representation and the creation of thousands or millions of NSColor objects. If you're working with images of any appreciable size, these expenses will add up quickly. This
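A sketch of that suggested pipeline — own bitmap context, draw, read raw bytes — counting each packed RGBA value with an NSCountedSet (`image` is the source NSImage; the packing scheme is one of several reasonable choices):

```objc
// Sketch: render `image` into an RGBA bitmap context and tally colours
// without going through TIFF data or per-pixel NSColor objects.
size_t width = (size_t)image.size.width;
size_t height = (size_t)image.size.height;
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, width * 4,
                                         space, kCGImageAlphaPremultipliedLast);
CGColorSpaceRelease(space);

// Wrap the bitmap context, make it current, and let the image draw itself.
NSGraphicsContext *gc =
    [NSGraphicsContext graphicsContextWithCGContext:ctx flipped:NO];
[NSGraphicsContext saveGraphicsState];
[NSGraphicsContext setCurrentContext:gc];
[image drawInRect:NSMakeRect(0, 0, width, height)
         fromRect:NSZeroRect
        operation:NSCompositingOperationCopy
         fraction:1.0];
[NSGraphicsContext restoreGraphicsState];

// Access the pixel data behind the context directly and count colours.
const uint8_t *pixels = CGBitmapContextGetData(ctx);
NSCountedSet *counts = [NSCountedSet set];
for (size_t i = 0; i < width * height; i++) {
    uint32_t rgba;
    memcpy(&rgba, pixels + i * 4, sizeof(rgba));  // pack R,G,B,A into one key
    [counts addObject:@(rgba)];
}
CGContextRelease(ctx);
// [counts countForObject:someColourKey] now gives that colour's pixel count.
```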

NSImage to NSBitmapImageRep

Submitted by 跟風遠走 on 2019-12-03 06:21:48
How do I convert an NSImage to an NSBitmapImageRep? I have this code:

    - (NSBitmapImageRep *)bitmapImageRepresentation {
        NSBitmapImageRep *ret = (NSBitmapImageRep *)[self representations];
        if (![ret isKindOfClass:[NSBitmapImageRep class]]) {
            ret = nil;
            for (NSBitmapImageRep *rep in [self representations])
                if ([rep isKindOfClass:[NSBitmapImageRep class]]) {
                    ret = rep;
                    break;
                }
        }
        if (ret == nil) {
            NSSize size = [self size];
            size_t width = size.width;
            size_t height = size.height;
            size_t bitsPerComp = 32;
            size_t bytesPerPixel = (bitsPerComp / CHAR_BIT) * 4;
            size_t bytesPerRow = bytesPerPixel * width;
            size_t
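For comparison, a much shorter sketch of the same idea (not necessarily the answer the original post received): reuse an existing bitmap representation if the image already has one, otherwise rasterise through the TIFF data.

```objc
// Sketch: obtain an NSBitmapImageRep from an NSImage (`image`).
NSBitmapImageRep *rep = nil;
for (NSImageRep *r in image.representations) {
    if ([r isKindOfClass:[NSBitmapImageRep class]]) {
        rep = (NSBitmapImageRep *)r;   // already bitmap-backed, reuse it
        break;
    }
}
if (rep == nil) {
    // Fall back: render to TIFF data and build a bitmap rep from that.
    rep = [NSBitmapImageRep imageRepWithData:[image TIFFRepresentation]];
}
```

The TIFF round-trip is the expensive path, which is why checking the existing representations first is worthwhile.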

UIImage vs NSImage: Drawing to an off screen image in iOS

Submitted by [亡魂溺海] on 2019-12-03 06:20:17
In Mac OS X (Cocoa), it is very easy to make a blank image of a specific size and draw to it off screen:

    NSImage* image = [[NSImage alloc] initWithSize:NSMakeSize(64,64)];
    [image lockFocus];
    /* drawing code here */
    [image unlockFocus];

However, in iOS (Cocoa Touch) there do not seem to be equivalent calls for UIImage. I want to use UIImage (or some other equivalent class) to do the same thing. That is, I want to make an explicitly sized, initially empty image to which I can draw using calls like UIRectFill(...) and [UIBezierPath stroke]. How would I do this? CoreGraphics is needed here, as
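The usual UIKit counterpart (a sketch) wraps the drawing in an image context, which plays the role of lockFocus / unlockFocus:

```objc
// Sketch: off-screen drawing into a UIImage on iOS.
// Scale 0.0 means "use the device's screen scale".
UIGraphicsBeginImageContextWithOptions(CGSizeMake(64, 64), NO, 0.0);
/* drawing code here, e.g.: */
UIRectFill(CGRectMake(8, 8, 48, 48));
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```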

Repeating background image in an NSView

Submitted by 荒凉一梦 on 2019-12-03 05:18:09
Question: I am trying to draw a repeating background image in my NSView. This is what I have so far:

    // INIT
    - (id)initWithFrame:(NSRect)frame {
        if (self = [super initWithFrame:frame]) {
            self.backgroundImage = [NSImage imageNamed:@"progressBackground.pdf"];
        }
        return self;
    }

    // DRAW
    - (void)drawRect:(NSRect)dirtyRect {
        // Draw the background
        [backgroundImage drawInRect:[self bounds]
                           fromRect:NSMakeRect(0.0f, 0.0f, backgroundImage.size.width, backgroundImage.size.height)
                          operation:NSCompositeSourceAtop
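One common approach (a sketch, not necessarily the accepted answer) is to let NSColor tile the image as a pattern instead of stretching a single copy with drawInRect:

```objc
// Sketch: tile `backgroundImage` across the view with a pattern colour.
- (void)drawRect:(NSRect)dirtyRect {
    [NSGraphicsContext saveGraphicsState];
    [[NSColor colorWithPatternImage:self.backgroundImage] set];
    NSRectFill(self.bounds);   // the pattern repeats to fill the rect
    [NSGraphicsContext restoreGraphicsState];
}
```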

NSImage from a 1D pixel array?

Submitted by 有些话、适合烂在心里 on 2019-12-03 00:51:07
I have a large 1D dynamic array in my program that represents a FITS image on disk i.e. it holds all the pixel values of the image. The type of the array is double . At the moment, I am only concerned with monochrome images. Since Cocoa does not support the FITS format directly, I am reading in the images using the CFITSIO library. This works - I can manipulate the array as I wish and save the result to disk using the library. However, I now want to display the image. I presume this is something NSImage or NSView can do. But the class references don't seem to list a method which could take a C
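One way to bridge the gap (a sketch under stated assumptions): build an 8-bit greyscale NSBitmapImageRep, copy the scaled pixel values into it, and wrap it in an NSImage. Here `pixels`, `width`, and `height` are hypothetical names for the CFITSIO-backed buffer and its dimensions, and the values are assumed to be normalised to 0..1.

```objc
// Sketch: turn a double[] of 0..1 grey values into a displayable NSImage.
NSBitmapImageRep *rep =
    [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                            pixelsWide:width
                                            pixelsHigh:height
                                         bitsPerSample:8
                                       samplesPerPixel:1
                                              hasAlpha:NO
                                              isPlanar:NO
                                        colorSpaceName:NSCalibratedWhiteColorSpace
                                           bytesPerRow:width
                                          bitsPerPixel:8];
unsigned char *dst = [rep bitmapData];
for (NSInteger i = 0; i < width * height; i++) {
    dst[i] = (unsigned char)(pixels[i] * 255.0);  // scale double -> byte
}
NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
[image addRepresentation:rep];
// `image` can now be handed to an NSImageView for display.
```

A real FITS image would also need its pixel range normalised (and possibly stretched) before this conversion.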

Get the correct image width and height of an NSImage

Submitted by 自作多情 on 2019-12-03 00:13:28
I use the code below to get the width and height of an NSImage:

    NSImage *image = [[[NSImage alloc] initWithContentsOfFile:[NSString stringWithFormat:s]] autorelease];
    imageWidth = [image size].width;
    imageHeight = [image size].height;
    NSLog(@"%f:%f", imageWidth, imageHeight);

But sometimes imageWidth, imageHeight do not return the correct values. For example, when I read an image, the EXIF info displays:

    PixelXDimension = 2272;
    PixelYDimension = 1704;

but imageWidth, imageHeight output 521:390. The dimensions of your image in pixels are stored in the NSImageRep of your image. If your file contains only one
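The key distinction is that [NSImage size] reports points, which the image's DPI metadata can scale away from the pixel dimensions (hence 521:390 instead of 2272x1704 here). A sketch of reading the pixel dimensions from the representations instead (`path` is a hypothetical file path):

```objc
// Sketch: read true pixel dimensions from the image representations,
// not from [NSImage size], which is in points and DPI-dependent.
NSImage *image = [[NSImage alloc] initWithContentsOfFile:path];
for (NSImageRep *rep in image.representations) {
    NSLog(@"%ld x %ld pixels", (long)rep.pixelsWide, (long)rep.pixelsHigh);
}
```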

Resizing Large Resolution Images Produces a 1000x1000 Pixel Result when Size Is Set to 500x500 Pixels

Submitted by 左心房为你撑大大i on 2019-12-02 13:24:23
I'm using the following extension method to resize an image. When it comes to large-resolution images, the output size remains 1000x1000 pixels even when I set the output size to 500x500 pixels:

    extension NSImage {
        func resizeImage(width: CGFloat, _ height: CGFloat) -> NSImage {
            let img = NSImage(size: CGSize(width: width, height: height))
            img.lockFocus()
            let ctx = NSGraphicsContext.current
            ctx?.imageInterpolation = .high
            self.draw(in: NSMakeRect(0, 0, width, height),
                      from: NSMakeRect(0, 0, size.width, size.height),
                      operation: .copy, fraction: 1)
            img.unlockFocus()
            return img
        }
    }

What am I doing wrong?
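The likely explanation: NSImage(size:) is measured in points, and on a 2x (Retina) display lockFocus draws into a backing store with twice the pixel dimensions, so a 500x500-point image comes out 1000x1000 pixels. A sketch of one fix (the method name `resized(toPixels:_:)` is hypothetical): draw into an NSBitmapImageRep whose pixel size is pinned explicitly.

```swift
// Sketch: resize with an explicit pixel size, independent of screen scale.
extension NSImage {
    func resized(toPixels width: Int, _ height: Int) -> NSImage? {
        guard let rep = NSBitmapImageRep(bitmapDataPlanes: nil,
                                         pixelsWide: width,
                                         pixelsHigh: height,
                                         bitsPerSample: 8,
                                         samplesPerPixel: 4,
                                         hasAlpha: true,
                                         isPlanar: false,
                                         colorSpaceName: .deviceRGB,
                                         bytesPerRow: 0,
                                         bitsPerPixel: 0) else { return nil }
        rep.size = NSSize(width: width, height: height)
        NSGraphicsContext.saveGraphicsState()
        NSGraphicsContext.current = NSGraphicsContext(bitmapImageRep: rep)
        draw(in: NSRect(x: 0, y: 0, width: width, height: height),
             from: .zero, operation: .copy, fraction: 1.0)
        NSGraphicsContext.restoreGraphicsState()
        let result = NSImage(size: rep.size)
        result.addRepresentation(rep)   // exactly width x height pixels
        return result
    }
}
```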