I am trying to combine Core ML and ARKit in my project, using the Inception v3 model provided on Apple's website.
I am starting from the standard Augmented Reality App template in Xcode.
Don't process images yourself to feed them to Core ML; use the Vision framework. Vision takes an ML model and any of several image types (including CVPixelBuffer), automatically scales and converts the image to the size, aspect ratio, and pixel format the model expects, and then gives you the model's results.
Here's a rough skeleton of the code you'd need:
import ARKit
import Vision

var request: VNRequest!

func setup() {
    // "MyCoreMLGeneratedModelClass" is the Swift class Xcode generates from your .mlmodel
    // (e.g. Inceptionv3 for the model from Apple's site).
    let model = try! VNCoreMLModel(for: MyCoreMLGeneratedModelClass().model)
    request = VNCoreMLRequest(model: model, completionHandler: myResultsMethod)
}

func classifyARFrame() {
    // `session` is your ARSession (e.g. sceneView.session in the AR template).
    guard let frame = session.currentFrame else { return }
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                        orientation: .up) // fix based on your UI orientation; see below
    try? handler.perform([request])
}

func myResultsMethod(request: VNRequest, error: Error?) {
    guard let results = request.results as? [VNClassificationObservation]
        else { fatalError("huh") }
    for classification in results {
        print(classification.identifier, // the scene label
              classification.confidence)
    }
}
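Not shown above is how to drive that classification from ARKit, or what to pass for the orientation. One common approach, and this is just a sketch continuing the skeleton above (assuming the template's ARSCNView outlet named sceneView and sceneView.session.delegate = self set in viewDidLoad), is to run the request from the session's didUpdate callback on a background queue, mapping the device orientation to the orientation of the captured pixel buffer:

// ARSessionDelegate callback: classify each new camera frame.
func session(_ session: ARSession, didUpdate frame: ARFrame) {
    // Vision + Core ML evaluation isn't free; keep it off the main/render thread.
    DispatchQueue.global(qos: .userInitiated).async {
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: self.imageOrientation())
        try? handler.perform([self.request])
    }
}

// Rough mapping from device orientation to the orientation of capturedImage,
// which ARKit delivers in the camera's native landscape orientation.
func imageOrientation() -> CGImagePropertyOrientation {
    switch UIDevice.current.orientation {
    case .portraitUpsideDown: return .left
    case .landscapeLeft:      return .up
    case .landscapeRight:     return .down
    default:                  return .right   // portrait and unknown
    }
}

ARKit hands you frames at camera rate, so in practice you'd probably also want to throttle this (for example, skip frames while a Vision request is still in flight) rather than kicking off a new request for every single frame.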
See this answer to another question for some more pointers.