Marker-based initial positioning with ARCore/ARKit?


Question


Problem situation: creating AR visualizations always at the same place (on a table) in a convenient way. We don't want the customer to place the objects themselves, as in countless ARCore/ARKit examples.

I'm wondering if there is a way to implement these steps:

  1. Detect a marker on the table
  2. Use the marker's position as the initial position of the AR visualization, and continue with SLAM tracking

I know there is something like a Marker-Detection API included in the latest build of the TangoSDK. But that technology is limited to a small number of devices (two, to be exact...).

Best regards, and thanks in advance for any ideas.


Answer 1:


I am also interested in this topic. I think the true power of AR can only be unleashed when it is paired with environment understanding.

I think you have two options:

  1. Wait for the new Vuforia 7 to be released; supposedly it is going to support visual markers on top of ARCore and ARKit.
  2. Use Core ML / computer vision – in theory it is possible, but I haven't seen many examples, and I think it might be a bit difficult to start with (e.g. building and calibrating the model). A rough sketch of this route follows below.
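
A very rough sketch of what option 2 could look like using the Vision framework alone (no trained Core ML model yet; the function name and the way the frame is obtained are my own assumptions, not tested code):

import ARKit
import Vision

// Hypothetical helper: run a rectangle detector on the current ARKit camera
// frame to find a candidate marker; its corners could then be hit-tested
// against the scene to obtain a world-space position.
func detectMarker(in frame: ARFrame) {
    let request = VNDetectRectanglesRequest { request, _ in
        guard let rect = request.results?.first as? VNRectangleObservation else { return }
        // rect.boundingBox is in normalized image coordinates.
        print("Marker candidate at \(rect.boundingBox)")
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage, options: [:])
    try? handler.perform([request])
}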

However, Apple have got it sorted: https://youtu.be/E2fd8igVQcU?t=2m58s




Answer 2:


If you are using Google Tango, you can implement this with the built-in Area Description File (ADF) system. The system shows a holding screen and you are told to "walk around". Within a few seconds, the device can relocalise to an area it has previously seen (or pull the information from a server, etc.).

Google's VPS (Visual Positioning Service) is a similar idea (still in closed beta), which should come to ARCore. As far as I understand, it will let you localise a specific location using the camera feed, matched against a global shared map of all scanned locations. When released, I think it will try to fill the gap of an AR-Cloud-type system, which would solve these problems for regular developers.

See https://developers.google.com/tango/overview/concepts#visual_positioning_service_overview

The general problem of relocalising to a space using only pre-knowledge of the space and the camera feed is solved in academia and in other AR offerings (HoloLens, etc.); markers/tags aren't required. I'm unsure, however, which other commercial systems provide this feature.




Answer 3:


This is what I have got so far for ARKit.

// Assumes stored properties on the containing class:
// var first, second, third: SCNVector3?   // the three tapped points A, B, C
// `yourNode` is the node to be placed (defined elsewhere).

@objc func tap(_ sender: UITapGestureRecognizer) {
    let touchLocation = sender.location(in: sceneView)
    let hitTestResult = sceneView.hitTest(touchLocation, types: .featurePoint)

    if let hitResult = hitTestResult.first {
        // World position of the tapped feature point.
        let position = SCNVector3Make(hitResult.worldTransform.columns.3.x,
                                      hitResult.worldTransform.columns.3.y,
                                      hitResult.worldTransform.columns.3.z)

        if first == nil {
            first = position          // point A
        } else if second == nil {
            second = position         // point B
        } else {
            third = position          // point C

            let x2 = first!.x
            let z2 = -first!.z
            let x1 = second!.x
            let z1 = -second!.z
            let z3 = -third!.z

            // Slope of AB in the x–z plane gives the yaw angle for the node.
            let m = (z1 - z2) / (x1 - x2)
            var a = atan(m)

            // Quadrant correction for the arctangent.
            if x1 < 0 && z1 < 0 {
                a += Float.pi * 2
            } else if x1 > 0 && z1 < 0 {
                a -= Float.pi * 2
            }

            sceneView.scene.rootNode.addChildNode(yourNode)
            let rotate = SCNAction.rotateBy(x: 0, y: CGFloat(a), z: 0, duration: 0.1)
            yourNode.runAction(rotate)
            yourNode.position = first!   // place the node at point A

            // Use C to decide whether the node needs to be flipped 180°.
            if z3 - z1 < 0 {
                let rotate = SCNAction.rotateBy(x: 0, y: CGFloat.pi, z: 0, duration: 0.1)
                yourNode.runAction(rotate)
            }
        }
    }
}

The theory is:
Mark three points A, B, C such that AB is perpendicular to AC, and tap them in the order A–B–C.
The angle of AB in the x–z plane of the ARSCNView gives the required rotation for the node.
Any one of the points can be used as the reference for calculating where to place the node.
From C, determine whether the node needs to be flipped.

I am still working on some edge cases that need to be handled.




Answer 4:


At the moment, both ARKit 3.0 and ARCore 1.12 have all the necessary API tools to fulfil almost any marker-based task for precise positioning of a 3D model.

ARKit

Right out of the box, ARKit can detect 3D objects and place ARObjectAnchors in a scene, as well as detect images and use ARImageAnchors for accurate positioning (see the sketch after the list below). The main ARWorldTrackingConfiguration class exposes both instance properties – detectionImages and detectionObjects. It is also worth saying that ARKit has indispensable built-in features from several frameworks:

  • CoreMotion
  • SceneKit
  • SpriteKit
  • UIKit
  • CoreML
  • Metal
  • AVFoundation
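
As a rough illustration of the detectionImages property mentioned above, here is a minimal sketch of marker-based placement via image detection; the "AR Resources" asset-catalog group name and the sceneView outlet are assumptions, not part of the original answer:

import ARKit

// Minimal sketch: run a world-tracking session that looks for reference images.
// Assumes an ARSCNView named `sceneView` and an asset-catalog group
// "AR Resources" containing the marker image (both hypothetical names).
func runMarkerDetection() {
    let configuration = ARWorldTrackingConfiguration()
    if let markers = ARReferenceImage.referenceImages(inGroupNamed: "AR Resources",
                                                      bundle: nil) {
        configuration.detectionImages = markers
    }
    sceneView.session.run(configuration)
}

// ARSCNViewDelegate: ARKit adds an ARImageAnchor at the detected marker's pose,
// which can serve as the initial position for the AR visualization.
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard anchor is ARImageAnchor else { return }
    let model = SCNNode(geometry: SCNBox(width: 0.1, height: 0.1,
                                         length: 0.1, chamferRadius: 0))
    node.addChildNode(model)   // `node` already sits at the marker's world transform
}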

In addition to the above, ARKit 3.0 is tightly integrated with the brand-new RealityKit module, which helps implement multiuser connectivity, lists of ARAnchors, and shared sessions.
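
A hedged example of the shared-session support mentioned above (assuming iOS 13+; transporting the resulting collaboration data between peers, e.g. over MultipeerConnectivity, is left to the app):

// Sketch: enable ARKit 3 collaborative sessions on a world-tracking configuration.
let config = ARWorldTrackingConfiguration()
config.isCollaborationEnabled = true   // peers exchange ARCollaborationData and share anchors
sceneView.session.run(config)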

ARCore

Although ARCore has a feature called Augmented Images, the framework has no built-in machine-learning algorithms to help detect real-world 3D objects – but Google's ML Kit framework does. So, as an Android developer, you can use both frameworks at the same time to precisely auto-composite a 3D model over a real object in an AR scene.

It is worth recognizing that ARKit 3.0 has a more robust and advanced toolkit than ARCore 1.12.



Source: https://stackoverflow.com/questions/47810898/marker-based-initial-positioning-with-arcore-arkit
