Combining CoreML and ARKit

Asked 2021-02-06 13:52 · 1 answer · 1,473 views

I am trying to combine Core ML and ARKit in my project, using the Inception v3 model provided on Apple's website.

I am starting from the standard Augmented Reality app template in Xcode (ARKit).

1 Answer
  • 2021-02-06 14:11

    Don't process images yourself to feed them to Core ML. Use the Vision framework. Vision takes an ML model and any of several image types (including CVPixelBuffer), automatically scales and crops the image to the size, aspect ratio, and pixel format the model expects, then gives you the model's results.

    Here's a rough skeleton of the code you'd need:

    import ARKit
    import Vision
    
    var request: VNCoreMLRequest!
    
    func setup() {
        // VNCoreMLModel(for:) throws if the model can't be loaded.
        guard let model = try? VNCoreMLModel(for: MyCoreMLGeneratedModelClass().model) else {
            fatalError("can't load Core ML model")
        }
        request = VNCoreMLRequest(model: model, completionHandler: myResultsMethod)
    }
    
    func classifyARFrame() {
        // currentFrame is nil until the session has produced its first frame.
        guard let frame = session.currentFrame else { return }
        let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                            orientation: .up) // placeholder; see the orientation note below
        try? handler.perform([request])
    }
    
    func myResultsMethod(request: VNRequest, error: Error?) {
        guard let results = request.results as? [VNClassificationObservation]
            else { fatalError("huh") }
        for classification in results {
            print(classification.identifier, // the scene label
                  classification.confidence)
        }
    }
    
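    About that hard-coded .up: ARKit delivers capturedImage in the camera sensor's native landscape orientation, so you need to translate the current device orientation into a CGImagePropertyOrientation before handing the buffer to Vision. Here's a minimal sketch of that mapping; the convenience initializer below is something I'm adding for illustration, not part of the SDK, though it mirrors the mapping Apple uses in its ARKit + Vision sample code:

    extension CGImagePropertyOrientation {
        init(_ deviceOrientation: UIDeviceOrientation) {
            // The sensor image is landscape; rotate it according to how
            // the device is currently held.
            switch deviceOrientation {
            case .portraitUpsideDown: self = .left
            case .landscapeLeft:      self = .up
            case .landscapeRight:     self = .down
            default:                  self = .right // portrait, faceUp/Down, unknown
            }
        }
    }

    With that in place, you'd pass CGImagePropertyOrientation(UIDevice.current.orientation) instead of .up.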

    See this answer to another question for some more pointers.
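    For completeness, one natural place to trigger classifyARFrame is the ARSessionDelegate callback that fires for each new frame (assuming your class conforms to ARSessionDelegate and is set as the session's delegate). VNImageRequestHandler.perform runs synchronously, and evaluating a model on all 60 frames per second will stall the session, so throttle the requests and move them off the main thread. A rough sketch under those assumptions; the queue label and the 0.5 s interval are arbitrary choices, not requirements:

    let visionQueue = DispatchQueue(label: "vision") // illustrative label
    var lastClassification = Date.distantPast
    
    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        // Skip frames that arrive while we're still inside the throttle window.
        guard Date().timeIntervalSince(lastClassification) > 0.5 else { return }
        lastClassification = Date()
        // perform(_:) is synchronous, so run it off the main thread.
        visionQueue.async { self.classifyARFrame() }
    }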
