Given a UIImage and a CGRect, what is the most efficient way (in memory and time) to draw the part of the image corresponding to the CGRect?
The quickest way is to use an image mask: an image that is the same size as the image to mask, with a pixel pattern indicating which portion of the image to mask out when rendering.
// maskImage is used to block off the portion that you do not want rendered.
// Note that rect is not actually used, because the image mask defines the area that is rendered.
- (void)drawRect:(CGRect)rect maskImage:(UIImage *)maskImage
{
    // Redraw the mask at the size of the image being masked.
    UIGraphicsBeginImageContext(image_.size);
    [maskImage drawInRect:CGRectMake(0, 0, image_.size.width, image_.size.height)];
    maskImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Build a CGImage mask with the same geometry as the redrawn mask.
    CGImageRef maskRef = maskImage.CGImage;
    CGImageRef mask = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                        CGImageGetHeight(maskRef),
                                        CGImageGetBitsPerComponent(maskRef),
                                        CGImageGetBitsPerPixel(maskRef),
                                        CGImageGetBytesPerRow(maskRef),
                                        CGImageGetDataProvider(maskRef), NULL, false);

    // Apply the mask to the image and keep the result.
    CGImageRef maskedImageRef = CGImageCreateWithMask([image_ CGImage], mask);
    image_ = [UIImage imageWithCGImage:maskedImageRef scale:1.0f orientation:image_.imageOrientation];
    CGImageRelease(mask);
    CGImageRelease(maskedImageRef);
}
Rather than creating a new image (which is costly because it allocates memory), how about using CGContextClipToRect?
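A minimal sketch of that idea, assuming the view keeps the full image in an image_ ivar as in the snippet above, and a hypothetical cropRect_ ivar holding the CGRect of interest:

// Sketch: draw only the part of image_ described by cropRect_, without
// allocating an intermediate cropped image.
- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSaveGState(context);

    // Clip to the area we actually want to show...
    CGContextClipToRect(context, CGRectMake(0, 0, cropRect_.size.width, cropRect_.size.height));

    // ...then draw the whole image shifted so the desired sub-rect lands
    // inside the clip. Pixels outside the clip are never rendered, so no
    // extra image is created.
    [image_ drawAtPoint:CGPointMake(-cropRect_.origin.x, -cropRect_.origin.y)];

    CGContextRestoreGState(context);
}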
I guess you are doing this to display part of an image on the screen, because you mentioned UIImageView. Optimization problems always need to be defined specifically.
Actually, UIImageView with clipsToBounds is one of the fastest and simplest ways to achieve your goal if your goal is just clipping a rectangular region of an image (not too big a one). Also, you don't need to send a setNeedsDisplay message.
Or you can try putting the UIImageView inside an empty UIView and setting clipping on the container view. With this technique, you can transform your image freely by setting the transform property in 2D (scaling, rotation, translation), as in the sketch below.
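A minimal sketch of that container-view approach (the image name, sizes, and offsets here are made-up examples):

// Container view does the clipping.
UIView *container = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
container.clipsToBounds = YES;

// The image view holds the full image, shifted so the wanted
// region lines up with the container's bounds.
UIImageView *imageView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"photo"]];
imageView.frame = CGRectMake(-50, -80, imageView.image.size.width, imageView.image.size.height);
[container addSubview:imageView];

// Any 2D transform (scale, rotate, translate) still works on the clipped content.
imageView.transform = CGAffineTransformMakeRotation(M_PI / 8.0);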
If you need a 3D transformation, you can still use CALayer with the masksToBounds property, but using CALayer directly will usually give you only a negligible amount of extra performance.
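A sketch of the CALayer variant with masksToBounds and a 3D transform (again with made-up geometry; someView is whatever view hosts the layers):

#import <QuartzCore/QuartzCore.h>

// Container layer clips its sublayers.
CALayer *containerLayer = [CALayer layer];
containerLayer.frame = CGRectMake(0, 0, 100, 100);
containerLayer.masksToBounds = YES;

// Content layer shows the whole image, offset to the region of interest.
UIImage *image = [UIImage imageNamed:@"photo"];
CALayer *imageLayer = [CALayer layer];
imageLayer.contents = (id)image.CGImage;   // use (__bridge id) under ARC
imageLayer.frame = CGRectMake(-50, -80, image.size.width, image.size.height);
[containerLayer addSublayer:imageLayer];

// A simple 3D rotation around the y axis.
imageLayer.transform = CATransform3DMakeRotation(M_PI / 6.0, 0.0, 1.0, 0.0);

[someView.layer addSublayer:containerLayer];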
Anyway, you need to know all of the low-level details to use them properly for optimization.
UIView is just a thin layer on top of CALayer, which is implemented on top of OpenGL, which is a virtually direct interface to the GPU. This means UIKit is accelerated by the GPU.
So if you use them properly (I mean, within their designed limitations), they will perform as well as a plain OpenGL implementation. If you use just a few images to display, you'll get acceptable performance with a UIView implementation because it gets full acceleration from the underlying OpenGL (which means GPU acceleration).
Anyway, if you need extreme optimization for hundreds of animated sprites with finely tuned pixel shaders, as in a game app, you should use OpenGL directly, because CALayer lacks many options for optimization at the lower levels. Anyway, at least for optimization of UI stuff, it's incredibly hard to beat Apple.
What you should know here is all about GPU acceleration. On all recent computers, fast graphics performance is achieved only with the GPU. The point, then, is whether the method you're using is implemented on top of the GPU or not.
IMO, CGImage drawing methods are not implemented with the GPU.
I think I read a mention of this in Apple's documentation, but I can't remember where, so I'm not sure about this. Anyway, I believe CGImage drawing is done on the CPU, because its API looks designed around CPU-side pixel buffers. Graphics operations done on the CPU are a lot slower than on the GPU.
Simply clipping an image and compositing image layers are very simple and cheap operations for a GPU (compared to a CPU), so you can expect the UIKit library to take advantage of this, because the whole of UIKit is implemented on top of OpenGL.
Because optimization is a kind of micro-management, specific numbers and small facts are very important. What counts as a "medium" size? OpenGL on iOS usually limits the maximum texture size to 1024x1024 pixels (maybe larger in recent releases). If your image is larger than this, it will not work, or performance will degrade greatly (I think UIImageView is optimized for images within these limits).
If you need to display huge images with clipping, you have to use another optimization such as CATiledLayer, and that's a totally different story.
And don't go to OpenGL unless you want to know every detail of OpenGL. It needs a full understanding of low-level graphics and at least 100 times more code.
Though it is not very likely to happen, CGImage stuff (or anything else) doesn't have to stay stuck on the CPU forever. Don't forget to check the base technology of the API you're using. Still, GPU stuff is a very different monster from the CPU, so API authors usually mention it explicitly and clearly.
It would ultimately be faster, with a lot less image creation from sprite atlases, if you could set not only the image for a UIImageView, but also the top-left offset to display within that UIImage. Maybe this is possible.
Meanwhile, I created these useful functions in a utility class that I use in my apps. They create a UIImage from part of another UIImage, with options to rotate, scale, and flip, specified using the standard UIImageOrientation values.
My app creates a lot of UIImages during initialization, and this necessarily takes time. But some images aren't needed until a certain tab is selected. To give the appearance of a quicker load, I could create them in a separate thread spawned at startup, then just wait until it's done if that tab is selected (see the GCD sketch after the code below).
+ (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture {
    return [ChordCalcController imageByCropping:imageToCrop toRect:aperture withOrientation:UIImageOrientationUp];
}

// Draw a full image into a crop-sized area and offset to produce a cropped, rotated image
+ (UIImage*)imageByCropping:(UIImage *)imageToCrop toRect:(CGRect)aperture withOrientation:(UIImageOrientation)orientation {
    // convert y coordinate to origin bottom-left
    CGFloat orgY = aperture.origin.y + aperture.size.height - imageToCrop.size.height,
            orgX = -aperture.origin.x,
            scaleX = 1.0,
            scaleY = 1.0,
            rot = 0.0;
    CGSize size;

    switch (orientation) {
        case UIImageOrientationRight:
        case UIImageOrientationRightMirrored:
        case UIImageOrientationLeft:
        case UIImageOrientationLeftMirrored:
            size = CGSizeMake(aperture.size.height, aperture.size.width);
            break;
        case UIImageOrientationDown:
        case UIImageOrientationDownMirrored:
        case UIImageOrientationUp:
        case UIImageOrientationUpMirrored:
            size = aperture.size;
            break;
        default:
            assert(NO);
            return nil;
    }

    switch (orientation) {
        case UIImageOrientationRight:
            rot = 1.0 * M_PI / 2.0;
            orgY -= aperture.size.height;
            break;
        case UIImageOrientationRightMirrored:
            rot = 1.0 * M_PI / 2.0;
            scaleY = -1.0;
            break;
        case UIImageOrientationDown:
            scaleX = scaleY = -1.0;
            orgX -= aperture.size.width;
            orgY -= aperture.size.height;
            break;
        case UIImageOrientationDownMirrored:
            orgY -= aperture.size.height;
            scaleY = -1.0;
            break;
        case UIImageOrientationLeft:
            rot = 3.0 * M_PI / 2.0;
            orgX -= aperture.size.height;
            break;
        case UIImageOrientationLeftMirrored:
            rot = 3.0 * M_PI / 2.0;
            orgY -= aperture.size.height;
            orgX -= aperture.size.width;
            scaleY = -1.0;
            break;
        case UIImageOrientationUp:
            break;
        case UIImageOrientationUpMirrored:
            orgX -= aperture.size.width;
            scaleX = -1.0;
            break;
    }

    // set the draw rect to pan the image to the right spot
    CGRect drawRect = CGRectMake(orgX, orgY, imageToCrop.size.width, imageToCrop.size.height);

    // create a context for the new image
    UIGraphicsBeginImageContextWithOptions(size, NO, imageToCrop.scale);
    CGContextRef gc = UIGraphicsGetCurrentContext();

    // apply rotation and scaling
    CGContextRotateCTM(gc, rot);
    CGContextScaleCTM(gc, scaleX, scaleY);

    // draw the image to our clipped context using the offset rect
    CGContextDrawImage(gc, drawRect, imageToCrop.CGImage);

    // pull the image from our cropped context
    UIImage *cropped = UIGraphicsGetImageFromCurrentImageContext();

    // pop the context to get back to the default
    UIGraphicsEndImageContext();

    // Note: this is autoreleased
    return cropped;
}
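For the background-loading idea mentioned above, a minimal GCD sketch might look like this (buildDeferredImages is a hypothetical method standing in for whatever actually creates the deferred images):

// At startup, build the images that are only needed on the other tab
// in the background.
dispatch_group_t imageGroup = dispatch_group_create();
dispatch_group_async(imageGroup,
                     dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [self buildDeferredImages];
});

// Later, when that tab is selected, block only if the work
// hasn't finished yet.
dispatch_group_wait(imageGroup, DISPATCH_TIME_FOREVER);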
A very simple way to move a big image inside a UIImageView is as follows.
Suppose we have an image of size (100, 400) representing 4 states of some picture, one below another. We want to show the 2nd picture, which has offsetY = 100, in a square UIImageView of size (100, 100). The solution is:
UIImageView *iView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
CGRect contentFrame = CGRectMake(0, 0.25, 1, 0.25);
iView.layer.contentsRect = contentFrame;
iView.image = [UIImage imageNamed:@"NAME"];
Here contentFrame is a normalized frame relative to the real UIImage size. So, "0" means that we start the visible part of the image from the left border, "0.25" means that we have a vertical offset of 100, "1" means that we want to show the full width of the image, and finally, "0.25" means that we want to show only 1/4 of the image's height.
Thus, in local image coordinates, we show the following frame:
CGRect visibleAbsoluteFrame = CGRectMake(0*100, 0.25*400, 1*100, 0.25*400)
or, equivalently, CGRectMake(0, 100, 100, 100).
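To tie this back to the original question (an arbitrary CGRect within a UIImage), the normalized contentsRect can be computed from the image size; a hypothetical helper might look like this:

// Show the part of `image` described by `rectInImage` (in the image's own
// coordinate space, points) inside `imageView`, without creating a new image.
static void ShowImageRegion(UIImageView *imageView, UIImage *image, CGRect rectInImage)
{
    CGFloat w = image.size.width;
    CGFloat h = image.size.height;
    imageView.image = image;
    imageView.layer.contentsRect = CGRectMake(rectInImage.origin.x / w,
                                              rectInImage.origin.y / h,
                                              rectInImage.size.width / w,
                                              rectInImage.size.height / h);
}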