stereoscopy

NV_STEREO_IMAGE_SIGNATURE and DirectX 10/11 (nVidia 3D Vision)

生来就可爱ヽ(ⅴ<●) posted on 2019-12-18 05:11:13
Question: I'm trying to use SlimDX and DirectX 10 or 11 to control the stereoization process on the nVidia 3D Vision Kit. Thanks to this question I've been able to make it work in DirectX 9. However, due to some missing methods I've been unable to make it work under DirectX 10 or 11. The algorithm goes like this: render the left-eye image, render the right-eye image, create a texture able to contain them both plus an extra row (so the texture size would be 2 * width, height + 1), write this NV_STEREO_IMAGE…
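The extra row the question describes carries a small fixed-layout header that the driver scans for. As a minimal sketch of how that signature row can be packed (the struct layout and constants below follow NVIDIA's `nvstereo.h` sample header and should be verified against your SDK copy; the helper name is illustrative):

```python
import struct
import numpy as np

# Constants from NVIDIA's nvstereo.h sample header (verify against your SDK copy).
NVSTEREO_IMAGE_SIGNATURE = 0x4433564E  # reads as ASCII "NV3D" in little-endian
SIH_SWAP_EYES = 0x00000001
SIH_SCALE_TO_FIT = 0x00000002

def make_signature_row(width, height, bpp=32, flags=0):
    """Build the extra pixel row appended below the side-by-side image.

    width/height describe the full staging surface (2 * eye_width, eye_height);
    the 20-byte header occupies the start of the row, the rest stays zero.
    """
    row = np.zeros(width * (bpp // 8), dtype=np.uint8)
    header = struct.pack('<5I', NVSTEREO_IMAGE_SIGNATURE,
                         width, height, bpp, flags)
    row[:len(header)] = np.frombuffer(header, dtype=np.uint8)
    return row

# Example: 1920x1080 per eye -> a 3840x1081 staging texture whose
# last row is this signature row.
row = make_signature_row(width=2 * 1920, height=1080, bpp=32)
```

The same bytes are what the D3D10/11 version has to write into the last row of the staging resource before copying it to the back buffer.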

Three.js StereoEffect cannot be applied to CSS3DRenderer

╄→гoц情女王★ posted on 2019-12-13 19:47:23
Question: I'm in the process of developing a Chrome VR web app. Now I'm desperately trying to figure out how to render a website into my stereoscopic scene, which has some meshes in it. So I have my renderer for the meshes, which works well. The following code is only the relevant snippets: var renderer = new THREE.WebGLRenderer(); Then I have my StereoEffect renderer, which receives the WebGL renderer: var effect = new THREE.StereoEffect(renderer); Next, I create the website renderer, and…

ReprojectImageTo3D corresponding pixel in image (Stereo Vision)

守給你的承諾、 posted on 2019-12-11 07:55:14
Question: I have a disparity map. Based on the disparity map, hovering on the left image displays the x and y of the image, so if I hover on the top-left-most pixel it will display x: 0, y: 0. The next step is to display the distance of the specific pixel. To make my life easy, I will try to do it with reprojectImageTo3D(disp, Q); I got Q from stereoRectify. Now, reprojectImageTo3D in Python returns an n-by-3 matrix, so I can see it is a row of x, y, z coordinates. I am wondering how I can know which pixels these…
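The correspondence is positional: reprojectImageTo3D returns an array shaped like the disparity image (H x W x 3), so the entry at [y, x] is the 3D point for pixel (x, y); if the output has been flattened to n x 3 rows, row i maps back to y = i // W, x = i % W (row-major order). The underlying computation is just Q applied to (x, y, disparity, 1), which can be sketched with NumPy alone — the Q matrix below is hypothetical, in the typical form produced by stereoRectify:

```python
import numpy as np

def reproject_pixel(x, y, disparity, Q):
    """One pixel of the reprojectImageTo3D computation:
    [X Y Z W]^T = Q @ [x y d 1]^T, then divide by W."""
    X, Y, Z, W = Q @ np.array([x, y, disparity, 1.0])
    return np.array([X, Y, Z]) / W

# Hypothetical Q: f = 700 px, principal point (320, 240), baseline Tx = 0.1 m.
f, cx, cy, Tx = 700.0, 320.0, 240.0, 0.1
Q = np.array([[1.0, 0.0, 0.0,    -cx],
              [0.0, 1.0, 0.0,    -cy],
              [0.0, 0.0, 0.0,      f],
              [0.0, 0.0, 1.0 / Tx, 0.0]])

pt = reproject_pixel(x=400, y=300, disparity=35.0, Q=Q)
# Sanity check against the textbook depth formula: Z = f * Tx / d
```

With these numbers Z comes out to 700 * 0.1 / 35 = 2.0 scene units, which is the distance the question wants to display on hover.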

True stereoscopic quad buffering in XNA

孤街醉人 posted on 2019-12-11 07:34:27
Question: We are trying to make 3D stereoscopy work within XNA for Windows PC games using NVidia 3D Vision. We really have no idea how this would be achieved and are just now skimming through the XNA documentation. While we found some examples for anaglyph 3D, we were wondering if there is any way to make it work with the active glasses that NVidia bundles with its 3D Vision package. We would also love to hear any alternatives as to how we could make this work on Xbox 360, without the glasses of course…

how to capture video from webcam in MJPG opencv

自古美人都是妖i posted on 2019-12-10 10:47:40
Question: I have bought two Genius FaceCam 1000X cameras and am trying to set up a stereo camera. The v4l2-ctl output for the cameras is as follows: ioctl: VIDIOC_ENUM_FMT Index: 0 Type: Video Capture Pixel Format: 'YUYV' Name: YUYV 4:2:2 Index: 1 Type: Video Capture Pixel Format: 'MJPG' (compressed) Name: Motion-JPEG. As you can see, the pixel format MJPG is supported, and from this and this, this pixel format is needed. But when I try to capture video from both webcams, the VIDIOC_STREAMON: No space left on device error still happens for the second camera; I can only get stereo video at 320x240…
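The "No space left on device" error with two cameras on one USB bus is usually a bandwidth problem: raw YUYV streams can exceed the bus's isochronous budget, while compressed MJPG fits. A sketch, assuming OpenCV's V4L2 backend (the fourcc helper shows what the MJPG code actually is; the camera-opening part needs real hardware and is deferred behind a function):

```python
def fourcc(c1, c2, c3, c4):
    """Pack a four-character code the same way cv2.VideoWriter_fourcc does."""
    return ord(c1) | (ord(c2) << 8) | (ord(c3) << 16) | (ord(c4) << 24)

MJPG = fourcc(*'MJPG')

def open_mjpg_camera(index, width=640, height=480):
    """Open a V4L2 camera and request MJPG *before* the first grab, so the
    driver negotiates the compressed (low-bandwidth) format. Requires OpenCV;
    the import is deferred so the pure helper above stays importable anywhere.
    """
    import cv2
    cap = cv2.VideoCapture(index, cv2.CAP_V4L2)
    cap.set(cv2.CAP_PROP_FOURCC, MJPG)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    return cap
```

Setting CAP_PROP_FOURCC before the resolution matters with some V4L2 driver/OpenCV combinations; if the property is set after streaming has started it may be silently ignored.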

Get textured pointcloud with Block-Matching-Algorithm

老子叫甜甜 posted on 2019-12-08 09:25:32
Question: I want to texture a generated point cloud with the original image color from two images. For this I calculated the disparity map with block matching and did the reconstruction. Writing an export function for .ply files wasn't a big deal either. My problem is: how do I get the color from the block-matching algorithm? It looks for similar pixels on rectified images, but according to the API there is no variable which saves the position of a found match. Afterwards it is not possible to recover the…
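No extra variable is needed, because the match position is implicit in the disparity map itself: the value at (y, x) belongs to pixel (y, x) of the left rectified image (and the matched right-image pixel sits at (y, x - d)). So the left image's color at the same coordinates textures the reconstructed point. A minimal NumPy sketch with synthetic data:

```python
import numpy as np

def colored_points(disparity, left_rgb, min_disp=1):
    """Pair each valid disparity pixel with the left-image color at the
    same (y, x) location. Returns (N, 2) pixel coords and (N, 3) colors."""
    ys, xs = np.nonzero(disparity >= min_disp)
    coords = np.stack([ys, xs], axis=1)
    colors = left_rgb[ys, xs]  # identical indices -> identical pixels
    return coords, colors

# Synthetic 2x3 example with valid disparities at (0, 1) and (1, 2).
disp = np.array([[0, 5, 0],
                 [0, 0, 7]], dtype=np.float32)
left = np.arange(2 * 3 * 3, dtype=np.uint8).reshape(2, 3, 3)
coords, colors = colored_points(disp, left)
```

Passing the same (ys, xs) index arrays into the reprojected 3D array then yields matching point/color pairs for the .ply export.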

Image Processing: Bad Quality of Disparity Image with OpenCV

不羁的心 posted on 2019-12-06 10:23:45
Question: I want to create a disparity image using two images from low-resolution USB cameras. I am using OpenCV 4.0.0. The frames I use are taken from a video. The results I am currently getting are very bad (see below). Both cameras were calibrated and the calibration data was used to undistort the images. Is it because of the low resolution of the left and right images? Left: Right: To have a better guess, there is also an overlay of both images. Overlay: The values for cv2.StereoSGBM_create()…
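Bad disparity maps from SGBM are often a tuning problem before they are a resolution problem. A commonly used starting point, in the style of OpenCV's own stereo_match sample, ties the smoothness penalties P1/P2 to the block size; the sketch below builds such a parameter set, with the actual cv2 call deferred so the numbers themselves stay checkable:

```python
def sgbm_params(block_size=5, channels=1, num_disparities=64):
    """Starting-point SGBM parameters in the style of OpenCV's
    stereo_match.py sample: P2 > P1, both scaled by channel count
    and block area. Tune from here, not from all-zero defaults."""
    assert num_disparities % 16 == 0, "SGBM needs a multiple of 16"
    return dict(
        minDisparity=0,
        numDisparities=num_disparities,
        blockSize=block_size,
        P1=8 * channels * block_size ** 2,
        P2=32 * channels * block_size ** 2,
        disp12MaxDiff=1,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )

def make_matcher(**overrides):
    """Build the matcher itself (requires OpenCV; deferred import)."""
    import cv2
    return cv2.StereoSGBM_create(**sgbm_params(**overrides))
```

Two frequent sources of "very bad" results beyond the parameters: compute() returns fixed-point disparities that must be divided by 16.0 before use, and the inputs must be rectified (not merely undistorted) so that matches lie on the same scanline.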

Stereoscopic 3D on WPF

[亡魂溺海] posted on 2019-12-06 02:13:29
Question: I have to show stereoscopic 3D graphics in a WPF control. I already have the code which creates two DirectX 9 textures to show, one texture for each eye. I want to use 3D Vision (not anaglyph). I considered the following ways to show the two pictures as stereo 3D: using OpenGL or the DirectX 11.1 stereo API; using NvAPI_Stereo_SetActiveEye as described here: http://www.nvidia.com/docs/IO/40505/WP-05482-001_v01-final.pdf; using the NVidia stereo signature as described here: NV_STEREO_IMAGE_SIGNATURE and…

Can OpenGL rendering be used for 3D monitors?

和自甴很熟 posted on 2019-12-05 23:58:18
Question: We have been considering buying a 3D-ready LCD monitor along with a machine with a stereoscopic-3D-capable graphics card (ATI Radeon 5870 or better). This is for displaying some scientific data that we are rendering in 3D using OpenGL. Can we expect the GPU, monitor, and shutter glasses to take care of the stereoscopic display, or do we need to modify the rendering program? If there are specific techniques for graphics programming for 3D stereoscopic displays, some tutorial links…
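With a quad-buffer-capable card and driver, the rendering program does need changes: the context must be created with a stereo pixel format, and each frame is drawn twice (glDrawBuffer(GL_BACK_LEFT), render the left eye; glDrawBuffer(GL_BACK_RIGHT), render the right eye), each pass with its own camera offset and off-axis projection. The projection math is GL-independent and can be sketched directly; this follows the widely used parallel-axis asymmetric-frustum method, with illustrative parameter names:

```python
import math

def stereo_frustum(eye, fov_y_deg, aspect, near,
                   eye_sep=0.065, convergence=2.0):
    """Near-plane frustum bounds (left, right, bottom, top) for one eye,
    per the parallel-axis asymmetric-frustum method. eye = -1 selects the
    left eye, +1 the right; eye_sep and convergence are in scene units.
    The camera itself is also translated by eye * eye_sep / 2 along its
    right vector before rendering that eye's pass."""
    top = near * math.tan(math.radians(fov_y_deg) / 2)
    half_w = top * aspect
    # Horizontal shift that makes both frusta meet at the convergence plane.
    shift = 0.5 * eye_sep * near / convergence
    return (-half_w - eye * shift, half_w - eye * shift, -top, top)
```

The four returned values feed straight into glFrustum (plus the near/far planes). Objects at the convergence distance appear in the screen plane; nearer objects pop out, farther ones recede.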