I am trying to design a barcode scanner for an app I am currently working on. I want the scanner preview to fill the whole screen of the device and provide a smaller frame that marks the actual scanning area.
As soon as I got it, it was absolutely clear:
The rectOfInterest of AVCaptureMetadataOutput is expressed in the output's own normalized, unrotated coordinate space rather than in the coordinates of the display, so to map the on-screen frame into that space I had to use metadataOutputRectOfInterestForRect:
From AVCaptureOutput.h:
/*!
@method metadataOutputRectOfInterestForRect:
@abstract
Converts a rectangle in the receiver's coordinate space to a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput
whose capture device is providing input to the receiver.
@param rectInOutputCoordinates
A CGRect in the receiver's coordinates.
@result
A CGRect in the coordinate space of the metadata output whose capture device is providing input to the receiver.
@discussion
AVCaptureMetadataOutput rectOfInterest is expressed as a CGRect where {0,0} represents the top left of the picture area,
and {1,1} represents the bottom right on an unrotated picture. This convenience method converts a rectangle in
the coordinate space of the receiver to a rectangle of interest in the coordinate space of an AVCaptureMetadataOutput
whose AVCaptureDevice is providing input to the receiver. The conversion takes orientation, mirroring, and scaling into
consideration. See -transformedMetadataObjectForMetadataObject:connection: for a full discussion of how orientation and mirroring
are applied to sample buffers passing through the output.
*/
- (CGRect)metadataOutputRectOfInterestForRect:(CGRect)rectInOutputCoordinates NS_AVAILABLE_IOS(7_0);
After using this method to compute and set the rectOfInterest, scanning worked as intended.
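For completeness, here is a minimal sketch of how the conversion can be wired up. It assumes a view controller that already owns a running AVCaptureSession, an AVCaptureMetadataOutput (metadataOutput), and an AVCaptureVideoPreviewLayer (previewLayer); the property names and the scanFrame rectangle are illustrative, not taken from the original code. It uses the AVCaptureVideoPreviewLayer counterpart of the method quoted above, which takes a rect in layer (screen) coordinates:

#import <AVFoundation/AVFoundation.h>

// Illustrative names: previewLayer, metadataOutput and scanFrame are assumptions.
- (void)updateRectOfInterest
{
    // The smaller on-screen frame (in the preview layer's coordinate space)
    // inside which barcodes should be detected.
    CGRect scanFrame = CGRectMake(40.0, 200.0, 240.0, 240.0);

    // Convert from layer coordinates to the metadata output's normalized,
    // unrotated coordinate space ({0,0} top left, {1,1} bottom right).
    CGRect rectOfInterest =
        [self.previewLayer metadataOutputRectOfInterestForRect:scanFrame];

    self.metadataOutput.rectOfInterest = rectOfInterest;
}

One caveat worth noting: the conversion reportedly only returns a sensible rect once the session is actually running, so calling it after startRunning (or from an AVCaptureSessionDidStartRunningNotification observer) seems to be the safer choice.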