hololens

Introduction to AR (Augmented Reality) Development

Submitted by 随声附和 on 2020-01-13 04:18:25
Introduction to AR (Augmented Reality) Development --- Theory. I've recently been working on some AR content and would like to share some general, introductory material.
I. What AR technology is: a new technology that "seamlessly" integrates real-world and virtual-world information. Physical information that is hard to experience within a given time and space in the real world (visuals, sound, taste, touch, and so on) is simulated by computers and other technology and then superimposed, so that virtual information is applied to the real world and perceived by the human senses, producing a sensory experience that goes beyond reality.
II. Distinguishing features of AR: 1) integration of real-world and virtual-world information; 2) real-time interactivity; 3) positioning and adding virtual objects in three-dimensional space.
III. The difference between VR and AR: in virtual reality (VR), the scenes and people you see are entirely artificial; your awareness is taken into a virtual world. In augmented reality (AR), part of what you see is real and part is virtual; virtual information is brought into the real world.
IV. Scope of application: AR is widely used not only in fields it shares with VR, such as advanced weapons, aircraft research and development, visualization of data models, virtual training, entertainment, and art; because it can augment what is displayed of the real environment, it also has clearer advantages than VR in areas such as medical research and anatomy training, precision instrument manufacturing and maintenance, military aircraft navigation, engineering design, and remote robot control. PS: Google believes augmented reality is the real trend of the future, because it gives people more interactive experiences rather than the isolation of virtual reality.
V. Application areas of AR: 1:

SLAM in AR (Part 1)

Submitted by 好久不见. on 2020-01-12 15:12:21
Foreword: In this series I plan to share a few of my own shallow views on the AR industry and on SLAM algorithms in AR. My knowledge is limited and there are bound to be many omissions and shortcomings; I hope we can discuss them together. Today I'll start with my understanding of AR.
A bit of understanding about AR
What AR is: AR is humanity's third eye; it lets people see virtual objects in the real world and interact with them. VR, by contrast, is a vehicle for human dreaming.
What AR can do: AR can turn two-dimensional interaction into three-dimensional interaction and make virtual objects look real. A few simple scenarios: You can own a virtual pet. You can observe a virtual kitten from every direction and walk around it; you'll notice it looks larger up close and smaller from far away, and if you like you can scratch it and play with it. You can have a virtual TV that stays fixed anywhere (even as you walk around) or can be moved at will. When calling your family remotely, you can see a holographic projection of them in your own room. You can turn your room into a virtual escape room. And of course, you can launch any app anywhere. I think that if AR manages to grow further, it will have a chance to replace the phone in the consumer market, and then, some years later, degrade into yet another tool for selling ads. In industry, AR will also get to hop around for a while before being replaced by robots.
How AR is implemented: At present AR relies on external devices. On a phone, for example, the camera image is captured and virtual objects are overlaid on it, while dedicated AR devices can form the image of virtual objects in front of the eyes or project it into the eyeball.
What AR needs: To achieve the three-dimensional interaction described above, an AR device needs the following three capabilities: self-localization, meaning determining its own position in space

Some Thoughts on Once Again Feeling Lost

Submitted by 假如想象 on 2020-01-12 07:11:36
Lately I've been grinding algorithm problems, and the more I grind, the more I feel like a code drone and the more lost I become. The reason for the grinding is simple: I'm planning to change jobs. It has been a little over three months since I moved to Beijing, and summer isn't even over, yet the initial excitement has already turned into discomfort. Three years ago I never imagined I would become a programmer; everything changed because of my interest in the Windows Phone platform. As someone who couldn't program, the early learning was hard: I imitated the documentation step by step, and although I didn't understand how much of the code worked, I vaguely knew what it did. Interest really is the best teacher. That year I was a sophomore, and I spent a summer cramming my way through the basic syntax and features of C#. Looking back, the enthusiasm of those half-understood days far exceeded what I have today; discovering that I had learned something new each day was a genuinely happy thing. I was also lucky that year: at the end of it I entered a Nokia competition and actually won a 20,000-euro prize. That was my first trip to Beijing and the first time I earned a fairly substantial income by writing code. An embarrassing truth, though, is that many features of the app I wrote at the time were copied from official sample projects, such as the camera and voice features; my own code was more like glue holding my ideas together. Even at that point I could only write basic functionality, something like a porter of APIs. But I had already realized how much foundational knowledge I was missing, and that a castle in the air would not last, so from then on I started learning the basics I lacked: computer organization, data structures, design patterns, algorithms, and so on. It was genuinely hard and much of it was difficult to understand, but because I kept developing WP apps along the way, I got more out of data structures and design patterns. Half a year later I was lucky again and caught the chance to develop the WP version of 脸萌 (FaceQ); because I was a bit bolder than most, I proactively contacted the 脸萌 team and, after some effort, was fortunate enough to get their authorization

Missing System.Media.Capture.Frames namespace

Submitted by 孤者浪人 on 2020-01-07 05:53:05
Question: I'm trying to develop a barcode reader app for the Microsoft HoloLens (using this tutorial), but I'm running into a problem. The code in this tutorial references the System.Media.Capture.Frames namespace, and it seems I'm missing that namespace. The Windows universal samples from Microsoft also reference it, yet I cannot import it. I'm running Windows 10 Version 1607, Build 14393.693, and I have .NET Framework 4.6
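The question is cut off above, so purely as a point of reference: in UWP the frame-source API usually lives under Windows.Media.Capture.Frames (rather than System.*), and it is generally only available when the project targets the Windows 10 SDK for build 14393 or later. The snippet below is a minimal, hypothetical sketch of enumerating frame sources with that namespace; it is not code from the tutorial the asker references, and the class and method names are mine.

```csharp
// Minimal UWP sketch (assumption: Windows.Media.Capture.Frames, target SDK 14393+).
using System;
using System.Linq;
using System.Threading.Tasks;
using Windows.Media.Capture;
using Windows.Media.Capture.Frames;

public static class FrameSourceProbe
{
    public static async Task<MediaFrameReader> CreateColorFrameReaderAsync()
    {
        // List every camera frame-source group exposed by the device.
        var groups = await MediaFrameSourceGroup.FindAllAsync();
        var group = groups.FirstOrDefault(); // picks the first group for brevity
        if (group == null)
            throw new InvalidOperationException("No frame source groups found.");

        var capture = new MediaCapture();
        await capture.InitializeAsync(new MediaCaptureInitializationSettings
        {
            SourceGroup = group,
            MemoryPreference = MediaCaptureMemoryPreference.Cpu,
            StreamingCaptureMode = StreamingCaptureMode.Video
        });

        // Pick a color source from the group and create a frame reader for it.
        var source = capture.FrameSources.Values.First(
            s => s.Info.SourceKind == MediaFrameSourceKind.Color);
        return await capture.CreateFrameReaderAsync(source);
    }
}
```

If the namespace still fails to resolve even with these usings, the project's target and minimum Windows SDK versions in Visual Studio are the usual place to check.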

WebView.CapturePreviewToStreamAsync() fails, but only on HoloLens with Unity3D

Submitted by 廉价感情. on 2020-01-05 08:47:06
Question: Currently, we're trying to combine the use of Unity3D holograms and HTML rendering in one app. The idea works as follows: 1) start a XAML app whose main window contains a WebView component (using the XAML app template); 2) switch to Holographic (DirectX) mode (essentially copying over the code from the Holographic app template); 3) the WebView still runs in the background; 4) we can call the WebView.Navigate() method; 5) render the contents of the WebView via the CapturePreviewToStreamAsync() API to
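The question is truncated at step 5. For orientation only, here is a minimal sketch (not the original app's code) of what a CapturePreviewToStreamAsync call typically looks like when capturing the background WebView into an in-memory stream; the helper name and the assumption that it runs on the XAML UI thread that owns the WebView are mine.

```csharp
// A minimal sketch of step 5: capture what the background WebView has rendered
// into an in-memory stream that can then be decoded into a texture.
// Must be called on the XAML UI thread that owns the WebView.
using System.Threading.Tasks;
using Windows.Storage.Streams;
using Windows.UI.Xaml.Controls;

public static class WebViewCaptureHelper
{
    public static async Task<IRandomAccessStream> CaptureAsync(WebView webView)
    {
        var stream = new InMemoryRandomAccessStream();

        // Renders the WebView's current content into the stream as a bitmap.
        await webView.CapturePreviewToStreamAsync(stream);

        // Rewind so the caller can read/decode the captured image.
        stream.Seek(0);
        return stream;
    }
}
```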

Hololens Apps Will No Longer Build - Cites Metadata file missing and c-Sharp.firstpass not found

Submitted by 故事扮演 on 2020-01-02 03:35:08
Question: I have followed the exact steps Microsoft lists under multiple projects in their Microsoft Holographic Academy tutorials. I completed them all, and every one worked fine from creation through export and testing. This uses Unity3D / C#, which gets compiled into a Visual Studio Solution (.sln) file. After completing those, I went on to build my own app, which built just fine as well. No problem! Now when I try to build, I get a strange error that the c-Sharp.firstpass file (the

WWW/UnityWebRequest POST/GET request won't return the latest data from server/url

Submitted by 江枫思渺然 on 2019-12-25 07:30:00
Question: I am creating a HoloLens app in Unity that has to fetch data from a REST API and display it. I am currently using the WWW type to get the data, with a yield return statement inside a coroutine that is started from the Update() function. When I run the code I get the latest data from the API, but when someone pushes new data to the API, the app does not pick it up in real time; I have to restart the app to see the latest data. My code: using UnityEngine; using
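The poster's code is cut off after the using directives. As a hedged sketch rather than the original code, one common approach is to poll on a timer with UnityWebRequest and defeat any intermediate HTTP caching with a no-cache header and a timestamp query parameter; the URL, field names, and interval below are placeholders.

```csharp
// Sketch: poll a REST endpoint with cache-busting instead of firing a request
// every Update() frame. The endpoint URL is a placeholder.
using System;
using System.Collections;
using UnityEngine;
using UnityEngine.Networking;

public class RestPoller : MonoBehaviour
{
    [SerializeField] private string apiUrl = "https://example.com/api/data"; // placeholder
    [SerializeField] private float pollIntervalSeconds = 5f;

    private void Start()
    {
        StartCoroutine(PollLoop());
    }

    private IEnumerator PollLoop()
    {
        while (true)
        {
            // Append a timestamp so each request URL is unique and cannot be served from cache.
            string url = apiUrl + "?t=" + DateTime.UtcNow.Ticks;
            using (UnityWebRequest request = UnityWebRequest.Get(url))
            {
                request.SetRequestHeader("Cache-Control", "no-cache");
                yield return request.SendWebRequest();

                if (!request.isNetworkError && !request.isHttpError)
                {
                    Debug.Log("Latest payload: " + request.downloadHandler.text);
                }
                else
                {
                    Debug.LogWarning("Request failed: " + request.error);
                }
            }
            yield return new WaitForSeconds(pollIntervalSeconds);
        }
    }
}
```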

Custom Vision on HoloLens

Submitted by 允我心安 on 2019-12-24 21:33:27
Question: I'm using Custom Vision ( https://www.customvision.ai ) to train a model for object recognition. After 10 iterations of training it suddenly stopped loading. I always export it as ONNX and load it on the HoloLens (following this tutorial: https://mtaulty.com/2018/03/29/third-experiment-with-image-classification-on-windows-ml-from-uwp-on-hololens-in-unity/ ). It worked for quite some time (though the results weren't perfect), but after I continued training the model to detect things better, it
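The question is truncated, but as a point of reference, loading an exported ONNX file on device looks roughly like the sketch below. This assumes the non-preview Windows.AI.MachineLearning API and a packaged file named model.onnx, both assumptions on my part; the linked tutorial may use the older preview namespace instead.

```csharp
// Hedged sketch (assumptions: UWP/Windows ML on HoloLens, an ONNX file packaged
// under Assets/model.onnx). Not the asker's code or the tutorial's code verbatim.
using System;
using System.Threading.Tasks;
using Windows.AI.MachineLearning;
using Windows.Storage;

public static class OnnxModelLoader
{
    public static async Task<LearningModel> LoadAsync()
    {
        // File name is an assumption; exported Custom Vision model names vary.
        var file = await StorageFile.GetFileFromApplicationUriAsync(
            new Uri("ms-appx:///Assets/model.onnx"));

        // If a newer iteration fails at this point, the exported ONNX may use
        // operators or an opset the on-device Windows ML runtime does not support.
        return await LearningModel.LoadFromStorageFileAsync(file);
    }
}
```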

How to launch Skype in HoloLens from Unity through script

Submitted by ◇◆丶佛笑我妖孽 on 2019-12-24 19:56:29
Question: I have read a question similar to this one, but it is years old. I just want to know whether there is now an option or some code I can put in my Unity app to launch Skype from the HoloLens. I have tried invoking the Skype URI from code, but it only works on my PC, not on my HoloLens: it just exits my app and I have to open Skype manually, which is not what I want. Does anyone know what to do? Answer 1: You cannot open up or run Skype from within your own app. It works on your desktop because that Skype is the full
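For context on what "invoking the Skype URI from code" typically means, here is an illustrative sketch (not the asker's code) of handing a protocol URI to the OS from a Unity/UWP build. As the answer indicates, on HoloLens this at best switches away from the current app rather than embedding Skype in it.

```csharp
// Illustrative sketch of launching a protocol URI from a Unity app.
using UnityEngine;

public class SkypeLauncher : MonoBehaviour
{
    public void LaunchSkypeUri()
    {
        const string uri = "skype:"; // protocol URI; a contact or call could be appended

#if ENABLE_WINMD_SUPPORT
        // UWP build: hand the URI to the system launcher on the UI thread (fire-and-forget).
        UnityEngine.WSA.Application.InvokeOnUIThread(() =>
        {
            Windows.System.Launcher.LaunchUriAsync(new System.Uri(uri));
        }, false);
#else
        // Editor / desktop: fall back to the generic URL opener.
        Application.OpenURL(uri);
#endif
    }
}
```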

How do I discard pixels based on vertex color and turn that on or off in a MonoBehaviour?

Submitted by 白昼怎懂夜的黑 on 2019-12-24 19:05:32
Question: I've written a shader that uses a mesh's vertex colors and has a function that clips all vertices whose interpolated vertex color has a blue channel greater than 0.5 (it discards all blue vertices). I'm trying to create a voice command that lets the user trigger that function when they are ready. However, Microsoft's Mixed Reality Toolkit Speech Input Handler only lets me call functions on components (Mesh Renderer, Mesh Filter, Mesh Collider, etc.) of the GameObject
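The question is cut off, but the usual workaround is to expose the toggle as a public method on a MonoBehaviour attached to the same GameObject, since the Speech Input Handler can invoke public methods on any attached component. The sketch below assumes a shader float property named _ClipBlue that gates the shader's clip() test; the property name and the script itself are hypothetical.

```csharp
// Hedged sketch: wrap the shader toggle in a MonoBehaviour so the MRTK Speech
// Input Handler can call it. "_ClipBlue" is an assumed property that the custom
// shader would declare and read in its clipping logic.
using UnityEngine;

[RequireComponent(typeof(Renderer))]
public class VertexColorClipToggle : MonoBehaviour
{
    // Assumed float property declared in the custom shader, e.g. float _ClipBlue;
    private static readonly int ClipBlueId = Shader.PropertyToID("_ClipBlue");

    private Renderer meshRenderer;
    private bool clipping;

    private void Awake()
    {
        meshRenderer = GetComponent<Renderer>();
    }

    // Hook this public method up in the Speech Input Handler's response list.
    public void ToggleClipping()
    {
        clipping = !clipping;
        // .material instantiates a per-object material copy, so only this mesh changes.
        meshRenderer.material.SetFloat(ClipBlueId, clipping ? 1f : 0f);
    }
}
```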