Question
I would like to create a GPS drawing program in Argon and A-Frame which draws lines based upon people's movements.
Lines can be drawn in A-Frame with, for example, the meshline component which uses Cartesian points:
<a-entity meshline="lineWidth: 20; path: -2 -1 0, 0 -2 0"></a-entity>
If I were to do this with a GPS device, I would take the GPS coordinates and map them directly to something like Google maps. Does Argon have any similar functionality such that I can use the GPS coordinates directly as the path like so:
<a-entity meshline="lineWidth: 20; path: 37.32299 -122.04185 0, 37.32298 -122.03224 0"></a-entity>
Since one can specify an LLA point for a reference frame, I suppose one way to do this would be to conceive of the center LLA point as "0, 0, 0" and then use a function to map the LLA domain to a Cartesian range.
It would be preferable, however, to use the geo-coordinates directly. Is this possible in Argon?
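To illustrate the fallback I mean above, the mapping function could be a rough equirectangular approximation like the following (just a sketch; the helper name and constants are placeholders of mine, not anything from Argon):

// Rough sketch: treat a chosen center LLA point as the local origin and map
// degrees of longitude/latitude to meters (equirectangular approximation).
const EARTH_RADIUS = 6378137; // WGS84 equatorial radius, meters

function llaToLocal(lat, lon, alt, origin) {
  const d2r = Math.PI / 180;
  return {
    x: (lon - origin.lon) * d2r * EARTH_RADIUS * Math.cos(origin.lat * d2r), // east
    y: (alt || 0) - (origin.alt || 0),                                       // up
    z: -(lat - origin.lat) * d2r * EARTH_RADIUS  // north mapped to -z (A-Frame is right-handed, Y up)
  };
}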
Answer 1:
To understand the answer, you need to first understand the various frames of reference used by Argon.
First, Argon makes use of cesiumjs.org's geospatial math libraries and Entities, so that all "locations" in Argon must either be expressed geospatially OR be relative to a geospatial entity. These are rooted at the center of the earth, in what Cesium calls FIXED coordinates, also known as ECEF or ECF coordinates. In that system, coordinates are in meters, with up/down going through the poles and east/west going through the meridian (I believe). Any point on the surface of the earth is represented with pretty large numbers.
This coordinate system is nice because we can represent anything on or near the earth precisely using it. Cesium also supports INERTIAL coordinates, which are used to represent near-earth orbital objects, and can convert between the two frames.
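As a quick illustration of what those FIXED (ECEF) values look like, here is a small sketch using Cesium's math (Argon exposes Cesium to applications; in the samples it appears as Argon.Cesium, but double-check against your version; the sample point is arbitrary):

const Cesium = Argon.Cesium;  // Argon bundles Cesium's geospatial math

// A point on the earth's surface, expressed in FIXED (ECEF) coordinates:
const ecef = Cesium.Cartesian3.fromDegrees(-84.398881, 33.778463, 300);
console.log(ecef.x, ecef.y, ecef.z);
// each component is on the order of 10^5 to 10^6 meters from the earth's center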
But, it is inconvenient when doing AR for a few reasons:
First, the numbers used to represent the position of the viewer and the objects near them are quite large, even if those objects are very close to the viewer, which can lead to mathematical accuracy issues, especially in the 3D graphics system.
Second, the coordinates we "think about" when we think about the world around us treat the ground as "flat" and "up" as pointing ... well, up. So, in 3D graphics, an object directly above another object typically has the same X and Z values, but a larger Y. In ECEF coordinates, all the numbers change, because what we perceive as "up" is really a vector from the center of the earth through us, and it only lines up with the ECEF "up" axis if we're at the north (or south, depending on your +/-) pole. Most 3D graphics libraries you might want to use (physics libraries, for example) assume a world in which the ground is one plane (typically the XZ plane) and Y is up (some aeronautics and other engineering applications use Z as up and have XY as the ground, but the issue is the same).
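To make that "up" point concrete, here is a small sketch using the same Cesium handle as above (the point is arbitrary), showing that the ECEF "up" direction is just the ellipsoid surface normal at your location and therefore changes as you move:

// "Up" in ECEF is the ellipsoid surface normal at the viewer's position,
// so its x/y/z components differ for every longitude/latitude:
const here = Cesium.Cartesian3.fromDegrees(-84.398881, 33.778463);
const up = Cesium.Ellipsoid.WGS84.geodeticSurfaceNormal(here, new Cesium.Cartesian3());
// 'up' is a unit vector; only at the poles does it align with the ECEF z axis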
Argon deals with this, as do many geospatial AR systems, by creating a local coordinate system for the graphics and application to use. There are really three options for this:
1. Pick some arbitrary (but fixed) local place as the origin. Some systems, which are built to work in one place, have this hard-coded. Others let the application set it. We don't do this because it would encourage applications to take the easy path and only work in one place (we've seen this happen in the past).
2. Set the local origin to the camera. This has the advantage that the math is the most "accurate" because all points are expressed relative to the camera. But it causes two issues. First, the camera tends to move continuously (even if only due to sensor noise) in AR apps. Second, many libraries (again, like physics libraries) assume that the origin of the system is stable and on the earth, with the camera/user moving through it. These issues can be worked around, but they are tedious for application developers to deal with.
3. Set the origin of the local coordinates to an arbitrary location near the user, and if the user moves far from it, recenter automatically. The advantage is that the program doesn't have to do much to deal with it, and it meshes nicely with 3D graphics libraries. The disadvantage is that the local coordinates are arbitrary and might be different each time the program is run, and the application developer may have to pay attention to when the origin is recentered.
Argon uses option 3. When the app starts, we create a new local coordinate frame at the user's location, on the plane tangent to the earth. If the user moves far from that location, we update the origin and emit an event to the application (currently, we recenter if you are 5 km away from the origin). In many simple apps, with only a few frames of reference expressed in geospatial coordinates (and the rest of the application data expressed relative to known geospatial locations), the conversion from geospatial to local can just be done each frame, allowing the app developer to ignore the recentering problem. The programmer is free to use either ENU (east-north-up) or EUS (east-up-south) as their coordinate system; we tend to use EUS because it's similar to what most 3D graphics systems use (Y is up, Z points south, and X is east).
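Roughly, an application sees this in argon.js like the sketch below (treat it as an outline and check the exact names against the version of argon.js you're using; someGeospatialEntity is a placeholder):

const app = Argon.init();
// Use the EUS (east-up-south) local frame described above
app.context.setDefaultReferenceFrame(app.context.localOriginEastUpSouth);

app.updateEvent.addEventListener(() => {
  // Re-query geospatial entities each frame; the returned positions are in
  // the current local frame, so recentering is handled for you.
  const pose = app.context.getEntityPose(someGeospatialEntity); // placeholder entity
  if (pose.poseStatus & Argon.PoseStatus.KNOWN) {
    // pose.position / pose.orientation are local (EUS) coordinates
  }
});
// Argon also emits an event when the local origin is recentered; listen for it
// if you cache anything in local coordinates.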
One of the reasons we chose this approach is that we've found in the past that if we had predictable local coordinates, application developers would store data using those coordinates, even though that's not a good idea (your data is now tied to some relatively arbitrary application-specific coordinate system, and will only work in that location).
So, now to your question. Your issue is that you want to use geospatial coordinates (Cesium's coordinates, which Argon uses) in A-Frame. The short answer is that you can't use them directly, since A-Frame is built assuming a local 3D graphics coordinate system. The argon-aframe package binds A-Frame to Argon by letting you specify referenceframe components that position an a-entity at an Argon/Cesium geospatial location, and it takes care of all the internal conversions for you.
The assumption when I wrote that code was that authors would create their content using the local 3D graphics coordinates, and attach those hunks of graphics to a-entity's that were located in the world with referenceframe's.
In order to have individual coordinates in A-Frame correspond to geospatial places, you will need to manage that yourself, perhaps by creating a component to do it for you, or (if the data is known at the start) by converting it up front.
Here's what I'd do.
Assuming you have a list of geospatial coordinates (expressed as LLA), I'd convert each one to local coordinates by first converting from LLA to Cesium's FIXED (ECEF) coordinates and creating a Cesium Entity, and then calling Argon's context.getEntityPose() on that entity (which will return its local coordinates). I would pick one geospatial location in the set (perhaps the first one?) and subtract its local coordinates from each of the others, so that they are all expressed in local coordinates relative to that known geospatial location.
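A sketch of that conversion might look like the following (the gpsPoints input and function name are placeholders; the Cesium and Argon calls are the ones named above, but double-check them against your argon.js version, and run this once geolocation is available, e.g. inside an update event):

const Cesium = Argon.Cesium;

// gpsPoints: [{ lon, lat, alt }, ...] recorded from the GPS trace
function toLocalPath(app, gpsPoints) {
  // 1. LLA -> FIXED (ECEF), each point wrapped in a Cesium Entity
  const entities = gpsPoints.map(p => new Cesium.Entity({
    position: new Cesium.ConstantPositionProperty(
      Cesium.Cartesian3.fromDegrees(p.lon, p.lat, p.alt || 0),
      Cesium.ReferenceFrame.FIXED)
  }));

  // 2. FIXED -> Argon's local frame
  const localPositions = entities.map(e => app.context.getEntityPose(e).position);

  // 3. Express every point relative to the first one
  const origin = localPositions[0];
  return localPositions.map(p =>
    Cesium.Cartesian3.subtract(p, origin, new Cesium.Cartesian3()));
}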
Then, I'd create an A-Frame entity attached to the referenceframe of that unique geospatial entity, and create your graphics content inside of it, using the local coordinates that are expressed relative to it. For example, let's say the geospatial location is LongLat = "-84.398881 33.778463" and you stored those points (local coordinates, relative to LongLat) in userPath; you could do something like this:
<ar-scene>
  <ar-geopose id="GT" lla="-84.398881 33.778463" userotation="false">
    <a-entity meshline="lineWidth: 20; path: userPath; color: #E20049"></a-entity>
  </ar-geopose>
</ar-scene>
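One practical note: the meshline path attribute expects a literal "x y z, x y z, ..." string, so the userPath placeholder above would be filled in from script, something like this:

// Turn the local-coordinate points into the string the meshline component
// expects, and set it on the entity inside the ar-geopose above.
const pathString = userPath.map(p => `${p.x} ${p.y} ${p.z}`).join(', ');
document.querySelector('#GT > a-entity').setAttribute('meshline', 'path', pathString);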
Source: https://stackoverflow.com/questions/40048673/using-geo-coordintes-instead-of-cartesian-to-draw-in-argon-and-a-frame