Question
I'm using Google's Text detection API from MLKit to detect text from images. It seems to work perfectly on screenshots, but when I try to use it on images taken in the app (using AVFoundation) or on photos uploaded from the camera roll, it spits out a small number of seemingly random characters.
This is my code for running the actual text detection:
func runTextRecognition(with image: UIImage) {
    let visionImage = VisionImage(image: image)
    textRecognizer.process(visionImage) { features, error in
        self.processResult(from: features, error: error)
    }
}
func processResult(from text: VisionText?, error: Error?) {
    guard error == nil, let text = text else {
        print("oops")
        return
    }
    let detectedText = text.text
    let okAlert = UIAlertAction(title: "OK", style: .default) { (action) in
        // handle user input
    }
    let alert = UIAlertController(title: "Detected text", message: detectedText, preferredStyle: .alert)
    alert.addAction(okAlert)
    self.present(alert, animated: true) {
        print("alert was presented")
    }
}
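The textRecognizer used above is a property created elsewhere; for reference, a typical setup with Firebase ML Kit's on-device text recognizer looks like this (a sketch assuming the Firebase/MLVision pod):

import FirebaseMLVision

// Typical recognizer setup; the code above assumes a property like this.
let vision = Vision.vision()
let textRecognizer = vision.onDeviceTextRecognizer()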
This is my code for using images from the camera roll (works for screenshots, not for images taken by the camera):
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey : Any]) {
    if let image = info[.originalImage] as? UIImage {
        self.runTextRecognition(with: image)
        uploadView.image = image
    } else {
        print("error")
    }
    self.dismiss(animated: true, completion: nil)
}
This is my code for using photos taken on the camera inside the app (never works, results are always nonsense):
func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photo: AVCapturePhoto,
                 error: Error?) {
    PHPhotoLibrary.shared().performChanges({
        let creationRequest = PHAssetCreationRequest.forAsset()
        creationRequest.addResource(with: PHAssetResourceType.photo, data: photo.fileDataRepresentation()!, options: nil)
    }, completionHandler: nil)
    let testImage = UIImage(data: photo.fileDataRepresentation()!)
    self.runTextRecognition(with: testImage!)
}
And this is what I did for using test images that I put in Assets.xcassets (this is the only one that consistently works well):
let uiimage = UIImage(named: "testImage")
self.runTextRecognition(with: uiimage!)
I'm thinking my issues may lie in the orientation of the UIImage, but I'm not sure. Any help would be much appreciated!
Answer 1:
If your image picker is working fine, the problem is likely the image orientation. As a quick test, capture several images in different orientations and see whether recognition works.
In my case, text recognition worked for images picked from the gallery but not for photos taken with the camera; that turned out to be an orientation issue.
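One quick sanity check (a hypothetical helper, not required for the fix): print the orientation of each image before running recognition. Photos straight from the camera usually carry an EXIF orientation other than .up, while screenshots and asset-catalog images report .up:

// Quick diagnostic: camera photos usually have a non-.up orientation,
// while screenshots and asset-catalog images are .up.
func logOrientation(of image: UIImage) {
    print("imageOrientation raw value:", image.imageOrientation.rawValue)
    // 0 = .up; portrait photos from the back camera commonly report 3 (.right).
}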
Solution 1
Before converting to a VisionImage, fix the image orientation as follows.
let fixedImage = pickedImage.fixImageOrientation()
Add this extension.
extension UIImage {
    // Redraws the image into a new graphics context so the pixel data
    // matches the .up orientation, discarding the EXIF orientation flag.
    func fixImageOrientation() -> UIImage {
        UIGraphicsBeginImageContext(self.size)
        self.draw(at: .zero)
        let fixedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return fixedImage ?? self
    }
}
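Applied to your photoOutput callback, it would look roughly like this (a sketch of your posted code, with the force-unwraps guarded):

func photoOutput(_ output: AVCapturePhotoOutput,
                 didFinishProcessingPhoto photo: AVCapturePhoto,
                 error: Error?) {
    guard error == nil,
          let data = photo.fileDataRepresentation(),
          let image = UIImage(data: data) else { return }
    // Normalize the orientation before handing the image to ML Kit.
    let fixedImage = image.fixImageOrientation()
    self.runTextRecognition(with: fixedImage)
}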
Solution 2
The Firebase documentation provides a helper that maps every device orientation to the correct detector orientation.
func imageOrientation(
    deviceOrientation: UIDeviceOrientation,
    cameraPosition: AVCaptureDevice.Position
) -> VisionDetectorImageOrientation {
    switch deviceOrientation {
    case .portrait:
        return cameraPosition == .front ? .leftTop : .rightTop
    case .landscapeLeft:
        return cameraPosition == .front ? .bottomLeft : .topLeft
    case .portraitUpsideDown:
        return cameraPosition == .front ? .rightBottom : .leftBottom
    case .landscapeRight:
        return cameraPosition == .front ? .topRight : .bottomRight
    case .faceDown, .faceUp, .unknown:
        return .leftTop
    }
}
Create the metadata:
let cameraPosition = AVCaptureDevice.Position.back // Set to the capture device you used.
let metadata = VisionImageMetadata()
metadata.orientation = imageOrientation(
    deviceOrientation: UIDevice.current.orientation,
    cameraPosition: cameraPosition
)
Set the metadata on the vision image.
let image = VisionImage(buffer: sampleBuffer)
image.metadata = metadata
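Putting it together for a live capture stream (a sketch assuming an AVCaptureVideoDataOutput delegate, since the buffer-based initializer takes video frames rather than an AVCapturePhoto):

func captureOutput(_ output: AVCaptureOutput,
                   didOutput sampleBuffer: CMSampleBuffer,
                   from connection: AVCaptureConnection) {
    let metadata = VisionImageMetadata()
    metadata.orientation = imageOrientation(
        deviceOrientation: UIDevice.current.orientation,
        cameraPosition: .back // match the device feeding this output
    )
    let image = VisionImage(buffer: sampleBuffer)
    image.metadata = metadata
    textRecognizer.process(image) { text, error in
        guard error == nil, let text = text else { return }
        print(text.text)
    }
}

For still photos from AVCapturePhotoOutput, Solution 1's redraw approach is the simpler fix.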
Source: https://stackoverflow.com/questions/53163291/mlkit-text-detection-on-ios-working-for-photos-taken-from-assets-xcassets-but-n