I've got an NSImageView that takes up the full extent of a window. There's no border to the image view, and it's set to display the image in the lower left. This means that the origin of the view matches the origin of the actual image, no matter how the window is resized.
Also, the image is much larger than what I can reasonably fit at full scale on the screen, so I also have the image view set to scale the image down proportionally. However, I can't seem to find this scale factor anywhere.
My ultimate goal is to map a mouse down event into actual image coordinates. To do this, I think I need one more piece of information: how big the displayed NSImage actually is.
If I look at [imageView bounds], I get the bounding rectangle of the image view, which will generally be larger than the image.
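For reference, a minimal sketch of the configuration described above; the property values are an assumption of the setup, not taken verbatim from the question:
// Assumed NSImageView configuration matching the description above
imageView.imageFrameStyle = NSImageFrameNone;                 // no border
imageView.imageAlignment  = NSImageAlignBottomLeft;           // view origin matches image origin
imageView.imageScaling    = NSImageScaleProportionallyDown;   // large images are scaled down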
I think that this gives you what you need:
NSRect imageRect = [imageView.cell drawingRectForBounds: imageView.bounds];
which returns the offset of the origin of the image within the view, and its size.
And for your end goal of remapping the mouse coordinates, something like this in your custom view class should work:
- (void)mouseUp:(NSEvent *)event
{
    NSPoint eventLocation = [event locationInWindow];
    NSPoint location = [self convertPoint:eventLocation fromView:nil];

    NSRect drawingRect = [self.cell drawingRectForBounds:self.bounds];
    location.x -= drawingRect.origin.x;
    location.y -= drawingRect.origin.y;

    NSSize frameSize = drawingRect.size;
    float frameAspect = frameSize.width / frameSize.height;

    NSSize imageSize = self.image.size;
    float imageAspect = imageSize.width / imageSize.height;

    float scaleFactor = 1.0f;

    if (imageAspect > frameAspect) {
        // in this case image.width == frame.width
        scaleFactor = imageSize.width / frameSize.width;

        float imageHeightinFrame = imageSize.height / scaleFactor;
        float imageOffsetInFrame = (frameSize.height - imageHeightinFrame) / 2;
        location.y -= imageOffsetInFrame;
    } else {
        // in this case image.height == frame.height
        scaleFactor = imageSize.height / frameSize.height;

        float imageWidthinFrame = imageSize.width / scaleFactor;
        float imageOffsetInFrame = (frameSize.width - imageWidthinFrame) / 2;
        location.x -= imageOffsetInFrame;
    }

    location.x *= scaleFactor;
    location.y *= scaleFactor;

    // do something with your newly calculated mouse location
}
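As a quick sanity check of the math (my own numbers, not from the question): a 4000x3000 image displayed proportionally in an 800x800 drawing rect has imageAspect (1.33) greater than frameAspect (1.0), so scaleFactor = 4000/800 = 5, the displayed image is 800x600, and the vertical offset is 100 points. A click at view point (400, 400) becomes (400, 300) after removing the offset and (2000, 1500) after multiplying by the scale factor, i.e. the center of the original image. Note that the division by two when computing imageOffsetInFrame assumes the default centered alignment; with the bottom-left alignment described in the question the vertical offset would be zero.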
Since I haven't found any solution to get the real image frame inside the NSImageView yet, I did the image calculation manually, respecting all of the view's properties (scaling, alignment and border). This might not be the most efficient code, and there may be minor deviations of 0.5-1 pixels from the real image, but it comes pretty close (I know this question is quite old, but the solution might help others):
@implementation NSImageView (ImageFrame)
// -------------------------------------------------------------------------
// -imageFrame
// -------------------------------------------------------------------------
- (NSRect)imageFrame
{
    // Find the content frame of the image without any borders first
    NSRect contentFrame = self.bounds;
    NSSize imageSize = self.image.size;
    NSImageFrameStyle imageFrameStyle = self.imageFrameStyle;

    if (imageFrameStyle == NSImageFrameButton ||
        imageFrameStyle == NSImageFrameGroove)
    {
        contentFrame = NSInsetRect(self.bounds, 2, 2);
    }
    else if (imageFrameStyle == NSImageFramePhoto)
    {
        contentFrame = NSMakeRect(contentFrame.origin.x + 1,
                                  contentFrame.origin.y + 2,
                                  contentFrame.size.width - 3,
                                  contentFrame.size.height - 3);
    }
    else if (imageFrameStyle == NSImageFrameGrayBezel)
    {
        contentFrame = NSInsetRect(self.bounds, 8, 8);
    }

    // Now find the right image size for the current imageScaling
    NSImageScaling imageScaling = self.imageScaling;
    NSSize drawingSize = imageSize;

    // Proportional scaling
    if (imageScaling == NSImageScaleProportionallyDown ||
        imageScaling == NSImageScaleProportionallyUpOrDown)
    {
        NSSize targetScaleSize = contentFrame.size;
        if (imageScaling == NSImageScaleProportionallyDown)
        {
            if (targetScaleSize.width > imageSize.width) targetScaleSize.width = imageSize.width;
            if (targetScaleSize.height > imageSize.height) targetScaleSize.height = imageSize.height;
        }

        NSSize scaledSize = [self sizeByScalingProportionallyToSize:targetScaleSize fromSize:imageSize];
        drawingSize = NSMakeSize(scaledSize.width, scaledSize.height);
    }
    // Axes-independent scaling
    else if (imageScaling == NSImageScaleAxesIndependently)
    {
        drawingSize = contentFrame.size;
    }

    // Now get the image position inside the content frame (center is the default)
    // from the current imageAlignment
    NSImageAlignment imageAlignment = self.imageAlignment;
    NSPoint drawingPosition = NSMakePoint(contentFrame.origin.x + contentFrame.size.width / 2.0 - drawingSize.width / 2.0,
                                          contentFrame.origin.y + contentFrame.size.height / 2.0 - drawingSize.height / 2.0);

    // NSImageAlignTop / NSImageAlignTopLeft / NSImageAlignTopRight
    if (imageAlignment == NSImageAlignTop ||
        imageAlignment == NSImageAlignTopLeft ||
        imageAlignment == NSImageAlignTopRight)
    {
        drawingPosition.y = contentFrame.origin.y + contentFrame.size.height - drawingSize.height;

        if (imageAlignment == NSImageAlignTopLeft)
            drawingPosition.x = contentFrame.origin.x;
        else if (imageAlignment == NSImageAlignTopRight)
            drawingPosition.x = contentFrame.origin.x + contentFrame.size.width - drawingSize.width;
    }
    // NSImageAlignBottom / NSImageAlignBottomLeft / NSImageAlignBottomRight
    else if (imageAlignment == NSImageAlignBottom ||
             imageAlignment == NSImageAlignBottomLeft ||
             imageAlignment == NSImageAlignBottomRight)
    {
        drawingPosition.y = contentFrame.origin.y;

        if (imageAlignment == NSImageAlignBottomLeft)
            drawingPosition.x = contentFrame.origin.x;
        else if (imageAlignment == NSImageAlignBottomRight)
            drawingPosition.x = contentFrame.origin.x + contentFrame.size.width - drawingSize.width;
    }
    // NSImageAlignLeft
    else if (imageAlignment == NSImageAlignLeft)
        drawingPosition.x = contentFrame.origin.x;
    // NSImageAlignRight
    else if (imageAlignment == NSImageAlignRight)
        drawingPosition.x = contentFrame.origin.x + contentFrame.size.width - drawingSize.width;

    return NSMakeRect(round(drawingPosition.x),
                      round(drawingPosition.y),
                      ceil(drawingSize.width),
                      ceil(drawingSize.height));
}
// -------------------------------------------------------------------------
// -sizeByScalingProportionallyToSize:fromSize:
// -------------------------------------------------------------------------
- (NSSize)sizeByScalingProportionallyToSize:(NSSize)newSize fromSize:(NSSize)oldSize
{
    CGFloat widthHeightDivision = oldSize.width / oldSize.height;
    CGFloat heightWidthDivision = oldSize.height / oldSize.width;

    NSSize scaledSize = NSZeroSize;
    if (oldSize.width > oldSize.height)
    {
        if ((widthHeightDivision * newSize.height) >= newSize.width)
        {
            scaledSize = NSMakeSize(newSize.width, heightWidthDivision * newSize.width);
        } else {
            scaledSize = NSMakeSize(widthHeightDivision * newSize.height, newSize.height);
        }
    } else {
        if ((heightWidthDivision * newSize.width) >= newSize.height)
        {
            scaledSize = NSMakeSize(widthHeightDivision * newSize.height, newSize.height);
        } else {
            scaledSize = NSMakeSize(newSize.width, heightWidthDivision * newSize.width);
        }
    }
    return scaledSize;
}
@end
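A rough sketch (my own, not part of the category above) of how the computed image frame could then be used to turn a click in the image view into pixel coordinates of the underlying NSImage:
// Hypothetical helper using the -imageFrame category above; the method name
// and placement are illustrative only.
- (NSPoint)imagePointForEvent:(NSEvent *)event inImageView:(NSImageView *)imageView
{
    NSPoint viewPoint = [imageView convertPoint:event.locationInWindow fromView:nil];
    NSRect imageFrame = [imageView imageFrame];
    NSSize imageSize = imageView.image.size;

    // Normalize within the displayed image rect, then scale up to the full image size
    CGFloat x = (viewPoint.x - NSMinX(imageFrame)) / NSWidth(imageFrame) * imageSize.width;
    CGFloat y = (viewPoint.y - NSMinY(imageFrame)) / NSHeight(imageFrame) * imageSize.height;
    return NSMakePoint(x, y);
}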
As I indicated in a comment above, here's the approach I took:
// The view that mouseUp: is part of doesn't draw anything. I'm layering it
// in the window hierarchy to intercept mouse events. I suppose I could have
// subclassed NSImageView instead, but I went this route. isDragging is
// an ivar; it's cleared in mouseDown: and set in mouseDragged:.
// This view has no idea what the original unscaled image size is, so
// rescaling is done by the caller.
- (void)mouseUp:(NSEvent *)theEvent
{
    if (!isDragging)
    {
        NSPoint rawPoint = [theEvent locationInWindow];
        NSImageView *view = self.subviews.lastObject;

        NSPoint point = [self convertPoint:rawPoint fromView:view];
        point.x /= view.bounds.size.width;
        point.y /= view.bounds.size.height;

        [owner mouseClick:point];
    }
}
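The caller then rescales the normalized point itself. A minimal sketch of what that receiver might look like (mouseClick: and the image property are placeholder names, not taken from the code above):
// Hypothetical receiver of the normalized click; scales the 0..1 point back
// up to pixel coordinates of the original, unscaled image. This assumes the
// view's bounds coincide with the displayed image; otherwise the drawingRect
// correction from the first answer still applies.
- (void)mouseClick:(NSPoint)normalizedPoint
{
    NSSize imageSize = self.image.size;   // the full-resolution image, assumed property
    NSPoint imagePoint = NSMakePoint(normalizedPoint.x * imageSize.width,
                                     normalizedPoint.y * imageSize.height);
    NSLog(@"Clicked image point: %@", NSStringFromPoint(imagePoint));
}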
And in my NSWindowController, which is the delegate of the window containing the mouse view, I have:
static int resizeMode = -1;

- (void)windowDidEndLiveResize:(NSNotification *)notification
{
    if ([notification object] == frameWindow)
        self.resizeFrameSelection = 0;
    resizeMode = -1;
}

- (NSSize)windowWillResize:(NSWindow *)sender toSize:(NSSize)frameSize
{
    if (sender == frameWindow)
    {
        float imageAspectRatio = (float)movie.movieSize.width / (float)movie.movieSize.height;

        float newH = frameSize.height;
        float newW = frameSize.width;
        float currH = sender.frame.size.height;
        float currW = sender.frame.size.width;
        float deltaH = fabsf(newH - currH);
        float deltaW = fabsf(newW - currW);

        // lock onto one dimension to key off of, per drag
        if (resizeMode == 1 || (resizeMode == -1 && deltaW < deltaH))
        {
            // adjust width to match aspect ratio
            frameSize.width = frameSize.height * imageAspectRatio;
            resizeMode = 1;
        }
        else
        {
            // adjust height to match aspect ratio
            frameSize.height = frameSize.width / imageAspectRatio;
            resizeMode = 2;
        }
    }
    return frameSize;
}
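For completeness, a sketch of how this delegate might be wired up (frameWindow is an ivar implied by the code above; the windowDidLoad placement is an assumption):
// Hypothetical setup in the same NSWindowController subclass, so that
// windowWillResize:toSize: and windowDidEndLiveResize: are actually called.
- (void)windowDidLoad
{
    [super windowDidLoad];
    frameWindow.delegate = self;   // frameWindow is the window being constrained
}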
Source: https://stackoverflow.com/questions/11711913/getting-bounds-of-an-nsimage-within-an-nsimageview