I have a photo app that is using AV Foundation. I have set up a preview layer using AVCaptureVideoPreviewLayer that takes up the top half of the screen. So when the user is taking a photo, they can only see what that top half shows, and I want the captured image cropped to exactly what is visible in the preview.
Have a look at AVCaptureVideoPreviewLayer's
-(CGRect)metadataOutputRectOfInterestForRect:(CGRect)layerRect
This method lets you easily convert the visible CGRect of your layer into the corresponding rectangle in the coordinate space of the camera output (as a fractional rect with components between 0 and 1).
One caveat: the physical camera is not mounted "top side up", but rather rotated 90 degrees clockwise. (So if you hold your iPhone with the Home button on the right, the camera is actually top side up.)
Keeping this in mind, you have to convert the CGRect the above method gives you so that the crop matches exactly what is on screen.
Example:
// The part of the preview layer that is actually visible on screen,
// in the layer's coordinate system.
CGRect visibleLayerFrame = ...;
// Convert it to a fractional rect in the camera output's coordinate space.
// (self.previewView.layer is assumed to be the AVCaptureVideoPreviewLayer.)
CGRect metaRect = [self.previewView.layer metadataOutputRectOfInterestForRect:visibleLayerFrame];
// originalImage is the full-resolution UIImage captured from the camera.
CGSize originalSize = [originalImage size];
if (UIInterfaceOrientationIsPortrait(_snapInterfaceOrientation)) {
// For portrait images, swap the size of the image, because
// here the output image is actually rotated relative to what you see on screen.
CGFloat temp = originalSize.width;
originalSize.width = originalSize.height;
originalSize.height = temp;
}
// metaRect is fractional, that's why we multiply here
CGRect cropRect;
cropRect.origin.x = metaRect.origin.x * originalSize.width;
cropRect.origin.y = metaRect.origin.y * originalSize.height;
cropRect.size.width = metaRect.size.width * originalSize.width;
cropRect.size.height = metaRect.size.height * originalSize.height;
cropRect = CGRectIntegral(cropRect);
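From here you can apply cropRect to the captured image. One way (a minimal sketch, assuming originalImage is the full-resolution UIImage from the capture output) is to crop its underlying CGImage, which is stored in the sensor's orientation and therefore matches cropRect directly, and then re-wrap it with the original scale and orientation:
// Crop the underlying CGImage. Because the CGImage is stored in the
// sensor's (landscape) orientation, cropRect can be used as-is.
CGImageRef croppedCGImage = CGImageCreateWithImageInRect([originalImage CGImage], cropRect);
// Re-wrap it as a UIImage, keeping the original scale and orientation
// so it displays the same way the full image did.
UIImage *croppedImage = [UIImage imageWithCGImage:croppedCGImage
                                            scale:originalImage.scale
                                      orientation:originalImage.imageOrientation];
CGImageRelease(croppedCGImage);
Because the orientation is preserved, croppedImage will appear upright on screen just like the preview did.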
This may be a bit confusing, but what made me really understand it is this: hold your device "Home button right" and you'll see that the x-axis actually lies along the "height" of your iPhone, while the y-axis lies along its "width". That's why for portrait images you have to swap the size ;)