Multi-face detection in RealityKit


Question


I have added content to the face anchor in Reality Composer. Later on, after loading the Experience that I created in Reality Composer, I create a face tracking session like this:

guard ARFaceTrackingConfiguration.isSupported else { return }
let configuration = ARFaceTrackingConfiguration()
configuration.maximumNumberOfTrackedFaces = ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
configuration.isLightEstimationEnabled = true

arView.session.delegate = self
arView.session.run(configuration, options: [.resetTracking, .removeExistingAnchors])

The content is not being added to all of the faces it detects, and I know it is detecting more than one face, because the other faces occlude the content attached to the first face. Is this a limitation of RealityKit, or am I missing something in Reality Composer? It is hard to miss anything there, since the setup is so basic and simple.

Thanks.


Answer 1:


You can't achieve multi-face tracking in RealityKit if you use models with an embedded Face Anchor, i.e. models that come from Reality Composer's Face Tracking preset (you can use just one model with a .face anchor, not three). You MAY still use such models, but you need to delete their embedded AnchorEntity(.face) anchors. A better approach, though, is to simply load models in .usdz format.

Let's see what Apple's documentation says about embedded anchors:

You can manually load and anchor Reality Composer scenes using code, like you do with other ARKit content. When you anchor a scene in code, RealityKit ignores the scene's anchoring information.

Reality Composer supports 5 anchor types: Horizontal, Vertical, Image, Face & Object. It displays a different set of guides for each anchor type to help you place your content. You can change the anchor type later if you choose the wrong option or change your mind about how to anchor your scene.

There are two options:

  1. In a new Reality Composer project, deselect the Create with default content checkbox at the bottom left of the action sheet you see at startup.

  2. In RealityKit code, delete the existing Face Anchor and assign a new one. This option is not great because you need to recreate the objects' positions from scratch (see the sketch below this list):

     boxAnchor.removeFromParent()
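
A minimal sketch of that second option, written as a hypothetical helper: Experience.loadBox() and the entity name "Steel Box" are placeholders for whatever your own Reality Composer project generates, so adjust them to your scene.

import ARKit
import RealityKit

// "Experience", loadBox() and "Steel Box" stand in for the names
// generated from your own Reality Composer project.
func reanchor(_ faceAnchor: ARFaceAnchor, in arView: ARView) {
    guard let scene = try? Experience.loadBox(),            // scene root carries the embedded anchoring
          let model = scene.findEntity(named: "Steel Box")  // the content you authored in the scene
    else { return }

    model.removeFromParent()                                // detach the model from the embedded Face Anchor

    let newAnchor = AnchorEntity(anchor: faceAnchor)        // a fresh anchor built from a detected ARFaceAnchor
    newAnchor.addChild(model)
    arView.scene.anchors.append(newAnchor)                  // only the new anchor is added to the scene
}

With this route the model's transform relative to the face has to be set up again in code, which is why simply loading a plain .usdz file (as in the code further below) is usually less work.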
    

Nevertheless, I've achieved multi-face tracking using AnchorEntity() with the ARAnchor initializer inside the session(_:didUpdate:) instance method (much like SceneKit's renderer() instance method).

Here's my code (the models are cloned so that each tracked face gets its own copy, and content is added only once per detected face):

import ARKit
import RealityKit

extension ViewController: ARSessionDelegate {

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {

        // Attach content to every tracked face, but only once per face –
        // didUpdate fires every frame, so an unguarded append would flood the scene.
        for case let faceAnchor as ARFaceAnchor in anchors
        where anchoredFaces.insert(faceAnchor.identifier).inserted {

            let anchor1 = AnchorEntity(anchor: faceAnchor)
            let anchor2 = AnchorEntity(anchor: faceAnchor)

            // Clone the models so each face gets its own copy.
            anchor1.addChild(model01.clone(recursive: true))
            anchor2.addChild(model02.clone(recursive: true))
            arView.scene.anchors.append(anchor1)
            arView.scene.anchors.append(anchor2)
        }
    }
}

class ViewController: UIViewController {

    @IBOutlet var arView: ARView!

    var anchoredFaces = Set<UUID>()                         // faces that already have content
    let model01 = try! Entity.load(named: "angryFace")      // USDZ file
    let model02 = try! FacialExpression.loadSmilingFace()   // RC scene

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.session.delegate = self

        guard ARFaceTrackingConfiguration.isSupported else {
            fatalError("Alas, Face Tracking isn't supported")
        }
    }

    override func viewWillAppear(_ animated: Bool) {
        super.viewWillAppear(animated)
        let config = ARFaceTrackingConfiguration()
        config.maximumNumberOfTrackedFaces = 2
        arView.session.run(config)
    }
}


Source: https://stackoverflow.com/questions/59618102/multi-face-detection-in-realitykit
