Question
I'm trying to understand and use ARKit. But there is one thing that I cannot fully understand. Apple said:
A real-world position and orientation that can be used for placing objects in an AR scene
...but that's not enough.
What is ARAnchor
exactly?
What are the differences between anchors and feature points?
Is ARAnchor
just part of feature points?
And how does ARKit determine its anchors?
Answer 1:
Simply put, an ARAnchor is an invisible null-object that can hold 3D content at the anchor's position in world space. Think of an ARAnchor as a local axis for your 3D object. Every 3D object has a pivot point, right? So this pivot point must match the ARAnchor.
As Apple documentation says:
ARAnchor is a real-world position and orientation that can be used for placing objects in an AR scene. Adding an anchor to the session helps ARKit to optimize world-tracking accuracy in the area around that anchor, so that virtual objects appear to stay in place relative to the real world. If a virtual object moves, remove the corresponding anchor from the old position and add one at the new position.
ARAnchor is the parent class for all other anchor types in the ARKit framework, hence all these subclasses inherit from the ARAnchor class.
Here's an image with a visual representation of a plane anchor. But keep in mind: by default, you can see neither a detected plane nor its ARPlaneAnchor.
In ARKit 3.0 you can add ARAnchors to your scene using different scenarios:
ARPlaneAnchor
- If the horizontal and/or vertical planeDetection instance property is enabled, ARKit is able to add ARPlaneAnchors to the session.
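As a minimal sketch (assuming sceneView is an existing ARSCNView), plane detection can be switched on like this:

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()
// Detect both horizontal and vertical planes (vertical requires ARKit 1.5+)
configuration.planeDetection = [.horizontal, .vertical]
sceneView.session.run(configuration)
```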
ARImageAnchor
- This type of anchor contains information about the position and orientation of a detected image (the anchor is located at the image center) in an AR world-tracking session. For activation, use the detectionImages instance property. In ARKit 2.0 you can track up to 25 images in total; in ARKit 3.0, up to 100. In both cases, no more than 4 images simultaneously.
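A sketch of activating image detection, assuming your reference images live in an asset catalog group named "AR Resources" (the group name is an assumption) and sceneView is an ARSCNView:

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()
if let referenceImages = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                          bundle: nil) {
    configuration.detectionImages = referenceImages
    // No more than 4 images can be tracked at the same time
    configuration.maximumNumberOfTrackedImages = 4
}
sceneView.session.run(configuration)
```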
ARBodyAnchor
- In the latest release of ARKit you can enable body tracking by running your session with ARBodyTrackingConfiguration(). You'll get an ARBodyAnchor at the Root Joint of the 3D skeleton.
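A sketch of running a body-tracking session (arView stands for an existing ARView; supported only on A12 devices and newer):

```swift
import ARKit

if ARBodyTrackingConfiguration.isSupported {
    let configuration = ARBodyTrackingConfiguration()
    arView.session.run(configuration)
}

// In your ARSessionDelegate you then receive the body anchor:
func session(_ session: ARSession, didAdd anchors: [ARAnchor]) {
    for anchor in anchors {
        guard let bodyAnchor = anchor as? ARBodyAnchor else { continue }
        // bodyAnchor.transform is placed at the skeleton's root joint
        print(bodyAnchor.skeleton.jointModelTransforms.count)
    }
}
```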
ARFaceAnchor
- A face anchor stores information about the topology and pose, as well as the facial expression, that you can detect with the front TrueDepth camera. When a face is detected, the face anchor is attached slightly behind the nose, at the center of the face. In ARKit 2.0 you can track just one face; in ARKit 3.0, up to 3 faces simultaneously.
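A sketch of a face-tracking configuration (sceneView is an assumed ARSCNView on a device with a TrueDepth front camera):

```swift
import ARKit

let configuration = ARFaceTrackingConfiguration()
// On ARKit 3.0 hardware this is up to 3 faces; earlier devices report 1
configuration.maximumNumberOfTrackedFaces =
    ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces
sceneView.session.run(configuration)
```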
ARObjectAnchor
- This type of anchor holds information about the six degrees of freedom (position and orientation) of a real-world 3D object detected in a world-tracking session.
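A sketch of activating 3D-object detection, assuming you previously scanned objects into an asset catalog group named "AR Objects" (the group name is an assumption):

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()
if let referenceObjects = ARReferenceObject.referenceObjects(inGroupNamed: "AR Objects",
                                                             bundle: nil) {
    configuration.detectionObjects = referenceObjects
}
sceneView.session.run(configuration)
```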
AREnvironmentProbeAnchor
- A probe anchor provides environmental lighting information for a specific area of space in a world-tracking session. ARKit uses it to supply reflective shaders with environmental reflections.
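You normally don't create probe anchors by hand; a sketch of letting ARKit generate them automatically:

```swift
import ARKit

let configuration = ARWorldTrackingConfiguration()
// ARKit adds AREnvironmentProbeAnchors to the session as needed
configuration.environmentTexturing = .automatic
sceneView.session.run(configuration)
```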
ARParticipantAnchor
- This is an indispensable anchor type for multiuser AR experiences. To employ it, set the isCollaborationEnabled instance property to true and use the Multipeer Connectivity framework.
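A sketch of enabling collaboration and forwarding the session's collaboration data to peers (mcSession stands for an already-connected MCSession from the Multipeer Connectivity framework):

```swift
import ARKit
import MultipeerConnectivity

let configuration = ARWorldTrackingConfiguration()
configuration.isCollaborationEnabled = true   // iOS 13+
session.run(configuration)

// ARSessionDelegate callback: broadcast collaboration data to all peers
func session(_ session: ARSession, didOutputCollaborationData data: ARSession.CollaborationData) {
    guard let encoded = try? NSKeyedArchiver.archivedData(withRootObject: data,
                                                          requiringSecureCoding: true) else { return }
    try? mcSession.send(encoded, toPeers: mcSession.connectedPeers, with: .reliable)
}
```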
There are also other regular approaches to create anchors in AR session:
Hit-testing method
- Tapping on the screen projects a point onto a hidden detected plane, placing an ARAnchor there.
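A sketch of the hit-testing approach in a tap handler (sceneView is an assumed ARSCNView):

```swift
import ARKit

@objc func handleTap(_ gesture: UITapGestureRecognizer) {
    let point = gesture.location(in: sceneView)
    // Project the 2D touch point onto a detected plane
    if let result = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first {
        let anchor = ARAnchor(transform: result.worldTransform)
        sceneView.session.add(anchor: anchor)
    }
}
```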
Feature points
- Special yellow points that ARKit automatically generates on the high-contrast edges of real-world objects can give you a place to put an ARAnchor. This approach is also made possible by the hit-testing method.
ARCamera's transform
- The iPhone camera's position and orientation (a 4x4 matrix) can easily be used as a place for an ARAnchor.
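A sketch of anchoring to the camera's current pose (sceneView is an assumed ARSCNView):

```swift
import ARKit

if let frame = sceneView.session.currentFrame {
    // Reuse the camera's 4x4 transform as the anchor's position and orientation
    let cameraAnchor = ARAnchor(transform: frame.camera.transform)
    sceneView.session.add(anchor: cameraAnchor)
}
```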
And below is a code snippet showing how to implement anchors inside a renderer method:
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    // React only to plane anchors
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    // Grid is a custom SCNNode subclass that visualizes the detected plane
    let grid = Grid(anchor: planeAnchor)
    node.addChildNode(grid)
}
Since the release of RealityKit in 2019 you can use its new class AnchorEntity. Use the AnchorEntity class as the root point of an entity hierarchy, and add it to the anchors collection of a Scene instance. This enables ARKit to place the anchor entity, along with all of its hierarchical descendants, into the real world.
AnchorEntity stores three components inside itself:
- Transform component
- Synchronization component
- Anchoring component
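A minimal RealityKit sketch (arView is an assumed ARView):

```swift
import RealityKit

// The anchoring component targets the first horizontal plane ARKit finds
let planeAnchor = AnchorEntity(plane: .horizontal)
// Any descendants of the anchor entity are placed with it in the real world
let box = ModelEntity(mesh: .generateBox(size: 0.1))
planeAnchor.addChild(box)
arView.scene.addAnchor(planeAnchor)
```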
To find out the difference between ARAnchor and AnchorEntity, look at this post.
Source: https://stackoverflow.com/questions/52893075/what-is-aranchor-exactly