stereoscopy

Is stereoscopy (3D stereo) making a comeback?

一世执手 submitted on 2019-12-04 17:56:24
I'm working on a stereoscopy application in C++ and OpenGL (for medical image visualization). From what I understand, the technology was quite big news about 10 years ago, but it seems to have died down since. Now, many companies seem to be investing in the technology... including nVidia, it would seem. Stereoscopy is also known as "3D Stereo", a term used primarily by nVidia (I think). Does anyone see stereoscopy becoming a major technology in terms of how we visualize things? I'm asking in both a recreational and a professional capacity. With nVidia's 3D kit you don't need to "make a stereoscopy application",

Image Processing: Bad Quality of Disparity Image with OpenCV

巧了我就是萌 submitted on 2019-12-04 15:18:37
I want to create a disparity image using two images from low-resolution USB cameras. I am using OpenCV 4.0.0. The frames I use are taken from a video. The results I am currently getting are very bad (see below). Both cameras were calibrated, and the calibration data was used to undistort the images. Is it because of the low resolution of the left and right images? Left: Right: To make this easier to judge, there is also an overlay of both images. Overlay: The values for the cv2.StereoSGBM_create() function are based on those of the example code that comes with OpenCV (located in OpenCV/samples
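For reference, below is a minimal sketch of the StereoSGBM workflow, assuming an already rectified pair saved under the placeholder names "left.png" and "right.png"; the tuning values are illustrative rather than the ones from the question. Note that SGBM searches for matches along image rows, so the pair needs to be properly rectified (epipolar lines horizontal), not merely undistorted; missing rectification is a common cause of noisy disparity maps.

```python
# A minimal sketch of the StereoSGBM workflow, assuming an already rectified pair
# saved as "left.png" / "right.png" (placeholder names). The tuning values below
# are illustrative, not the ones used in the question.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

block_size = 5
num_disp = 64  # must be a multiple of 16
matcher = cv2.StereoSGBM_create(
    minDisparity=0,
    numDisparities=num_disp,
    blockSize=block_size,
    P1=8 * block_size ** 2,    # penalty for small disparity changes (smoothness)
    P2=32 * block_size ** 2,   # larger penalty for big disparity jumps
    disp12MaxDiff=1,
    uniquenessRatio=10,
    speckleWindowSize=100,
    speckleRange=2,
)

# compute() returns disparities as 16.4 fixed point, so divide by 16
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Normalize to 0..255 purely for visualization
vis = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("disparity.png", vis)
```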

Can OpenGL rendering be used for 3D monitors?

空扰寡人 submitted on 2019-12-04 04:45:40
We have been considering buying a 3D-ready LCD monitor along with a machine that has a graphics card capable of stereoscopic 3D (ATI Radeon 5870 or better). This is for displaying some scientific data that we are rendering in 3D using OpenGL. Now, can we expect the GPU, monitor, and shutter glasses to take care of the stereoscopic display, or do we need to modify the rendering program? If there are specific techniques for graphics programming for 3D stereoscopic displays, some tutorial links would be much appreciated. Technically, OpenGL provides for stereoscopic display. The keyword is "Quad
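For what it's worth, the stereo-capable path in OpenGL amounts to requesting a stereo pixel format and drawing the scene once into GL_BACK_LEFT and once into GL_BACK_RIGHT. Below is a minimal PyOpenGL/GLUT sketch of that idea; it assumes the driver actually exposes a stereo visual (consumer Radeon/GeForce cards typically do not, which is why the GLUT_STEREO window may fail to open), and the teapot and eye offset are placeholders.

```python
# A minimal PyOpenGL/GLUT sketch of quad-buffered stereo: request a stereo pixel
# format and draw the scene once into GL_BACK_LEFT and once into GL_BACK_RIGHT.
# Assumes the driver exposes a stereo visual; teapot and eye offset are placeholders.
from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GLUT import *

EYE_SEP = 0.06  # illustrative eye separation in scene units

def display():
    for buf, eye_x in ((GL_BACK_LEFT, -EYE_SEP / 2), (GL_BACK_RIGHT, +EYE_SEP / 2)):
        glDrawBuffer(buf)  # select the left or right back buffer
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()
        # Simple per-eye camera offset ("toe-in"); an asymmetric frustum per eye
        # is the more correct projection, omitted here for brevity.
        gluLookAt(eye_x, 0.0, 3.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0)
        glutWireTeapot(0.5)  # stand-in for the actual scientific visualization
    glutSwapBuffers()

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_STEREO)
glutCreateWindow(b"quad-buffer stereo sketch")
glEnable(GL_DEPTH_TEST)
glMatrixMode(GL_PROJECTION)
gluPerspective(45.0, 1.0, 0.1, 100.0)
glutDisplayFunc(display)
glutMainLoop()
```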

How can I output an HDMI 1.4a-compatible stereoscopic signal from an OpenGL application to a 3DTV?

喜你入骨 submitted on 2019-12-03 12:53:56
I have an OpenGL application that outputs stereoscopic 3D video to off-the-shelf TVs via HDMI, but it currently requires the display to support the pre-1.4a methods of manually choosing the right format (side-by-side, top-bottom, etc.). However, I now have a device that I need to support which ONLY accepts HDMI 1.4a 3D signals, which, as I understand it, involve some kind of packet sent to the display that tells it what format the 3D video is in. I'm using an NVIDIA Quadro 4000 and I would like to know whether it's possible to output my video (or tell the video card how to) in a way that a standard 3DTV

Stereo vision: Depth estimation

和自甴很熟 submitted on 2019-12-03 09:55:56
I am working on a stereo vision task and I would like to get the distance between the stereo cameras and an object. I am using Matlab with the Computer Vision System Toolbox. I have calibrated the cameras using the "Camera Calibration Toolbox for Matlab", so I have the intrinsic parameters of the left and right cameras and the extrinsic parameters (the position of the right camera with respect to the left camera). I also have a pair of rectified pictures and their disparity map. For estimation of disparity I have used Matlab
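For a rectified pair, the depth of a pixel follows directly from its disparity via Z = f * B / d (focal length in pixels times baseline, divided by disparity in pixels). A minimal NumPy sketch of that step follows; the focal length, baseline, and disparity file name are assumed, illustrative inputs.

```python
# Depth from disparity for a rectified pair: Z = f * B / d. All input values
# and names below are illustrative; take them from your own calibration.
import numpy as np

focal_px = 700.0     # focal length in pixels (from the intrinsics)
baseline_m = 0.12    # distance between the two camera centers, in metres
disparity = np.load("disparity.npy").astype(np.float32)  # disparity map in pixels

valid = disparity > 0                 # zero or negative disparity means no match
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]  # depth in metres
```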

Convert between MATLAB stereoParameters and OpenCV stereoRectify stereo calibration

回眸只為那壹抹淺笑 submitted on 2019-12-03 03:52:13
I wish to convert a MATLAB stereoParameters structure into the intrinsic and extrinsic matrices used by OpenCV's stereoRectify. If I have understood http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html and http://mathworks.com/help/vision/ref/stereoparameters-class.html correctly, stereoParameters.CameraParameters1 and stereoParameters.CameraParameters2 store the intrinsic matrices, and the other members of stereoParameters store the extrinsic ones. I think I have this mapping. Intrinsics: cameraMatrix1 = stereoParameters.CameraParameters1.IntrinsicMatrix' cameraMatrix2 =
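As a cross-check, here is a hedged Python/NumPy sketch of one plausible mapping onto cv2.stereoRectify. It assumes the stereoParameters fields were flattened and exported to a .mat file under the illustrative names used below, and it leans on MATLAB's row-vector convention, which makes its matrices the transposes of OpenCV's; the distortion ordering and the transposes should be verified against your own calibration data.

```python
# A hedged sketch of one plausible stereoParameters -> cv2.stereoRectify mapping.
# Assumptions: the MATLAB fields were flattened and saved to a .mat file under the
# illustrative names below, and MATLAB's row-vector convention makes its matrices
# the transposes of OpenCV's. Verify signs and ordering against your calibration.
import cv2
import numpy as np
import scipy.io

p = scipy.io.loadmat("stereoParams.mat")  # hypothetical export of the struct

K1 = np.asarray(p["IntrinsicMatrix1"], dtype=np.float64).T  # MATLAB K is OpenCV K transposed
K2 = np.asarray(p["IntrinsicMatrix2"], dtype=np.float64).T

# OpenCV expects distCoeffs = [k1, k2, p1, p2, k3]; MATLAB splits these into
# RadialDistortion = [k1 k2 (k3)] and TangentialDistortion = [p1 p2].
# The trailing 0.0 assumes only two radial coefficients were estimated.
d1 = np.hstack([p["RadialDistortion1"].ravel()[:2], p["TangentialDistortion1"].ravel(), 0.0])
d2 = np.hstack([p["RadialDistortion2"].ravel()[:2], p["TangentialDistortion2"].ravel(), 0.0])

R = np.asarray(p["RotationOfCamera2"], dtype=np.float64).T  # transposed for the same reason
T = np.asarray(p["TranslationOfCamera2"], dtype=np.float64).reshape(3, 1)

image_size = (1280, 720)  # (width, height) of the calibrated images
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, image_size, R, T)
```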

How do I output 3D images to my 3D TV?

隐身守侯 submitted on 2019-12-03 03:27:34
I have a 3D TV and feel that I would be shirking my responsibilities (as a geek) if I didn't at least try to make it display pretty 3D images of my own creation! I've done a very basic amount of OpenGL programming before, so I understand the concepts involved. Assume that I can render a simple tetrahedron or cube and make it spin around a bit; how can I get my 3D TV to display this image in, well, 3D? Note that I understand the basics of how 3D works (render the same image twice from two different angles, one for each eye); my question is about the logistics of actually doing this (do
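If the TV accepts it, the lowest-friction route is usually the side-by-side format: render the scene once per eye into the left and right halves of an ordinary full-screen window and switch the TV into its side-by-side 3D mode, with no special GPU features needed. A minimal PyOpenGL/GLUT sketch of that idea follows; the geometry and eye separation are purely illustrative.

```python
# A minimal PyOpenGL/GLUT sketch of the side-by-side route: render the scene once
# per eye into the left and right halves of a full-resolution window, then switch
# the TV into its side-by-side 3D mode. Geometry and eye separation are illustrative.
from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GLUT import *

WIDTH, HEIGHT = 1920, 1080
EYE_SEP = 0.06

def display():
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    # (viewport x offset, camera x offset) per eye
    for vp_x, eye_x in ((0, -EYE_SEP / 2), (WIDTH // 2, +EYE_SEP / 2)):
        glViewport(vp_x, 0, WIDTH // 2, HEIGHT)  # half the screen per eye
        glMatrixMode(GL_PROJECTION)
        glLoadIdentity()
        # Keep the full 16:9 aspect; the TV stretches each squeezed half back out
        gluPerspective(45.0, WIDTH / HEIGHT, 0.1, 100.0)
        glMatrixMode(GL_MODELVIEW)
        glLoadIdentity()
        gluLookAt(eye_x, 0.0, 3.0,  0.0, 0.0, 0.0,  0.0, 1.0, 0.0)
        glutWireCube(1.0)  # stand-in for the spinning tetrahedron/cube
    glutSwapBuffers()

glutInit()
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH)
glutInitWindowSize(WIDTH, HEIGHT)
glutCreateWindow(b"side-by-side stereo sketch")
glEnable(GL_DEPTH_TEST)
glutDisplayFunc(display)
glutMainLoop()
```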

Using OpenCV descriptor matches with findFundamentalMat

久未见 submitted on 2019-12-02 20:50:38
I posted earlier with a problem regarding the same program but received no answers. I've since corrected the issue I was experiencing at that point, only to face a new problem. Basically, I am auto-correcting stereo image pairs for rotation and translation using an uncalibrated approach. I use feature detection algorithms such as SURF to find points in two images, a left and right stereo pair, and then, using SURF again, I match the points between the two images. I then need to use these matched points to find the fundamental matrix, which I can use to correct the images. My issue is this.
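For the match-to-fundamental-matrix step, the key is simply turning the DMatch list plus the two KeyPoint lists into two N x 2 arrays of pixel coordinates. A minimal sketch follows; SURF lives in OpenCV's non-free xfeatures2d module, so ORB is used here as a freely available stand-in, the file names are placeholders, and the uncalibrated rectification at the end is one common way to "correct" the pair from F.

```python
# A minimal sketch of going from descriptor matches to cv2.findFundamentalMat.
# SURF lives in OpenCV's non-free xfeatures2d module, so ORB is used here as a
# freely available stand-in; the match handling is identical. File names are placeholders.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

detector = cv2.ORB_create(2000)
kp1, des1 = detector.detectAndCompute(left, None)
kp2, des2 = detector.detectAndCompute(right, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = matcher.match(des1, des2)

# findFundamentalMat wants two Nx2 arrays of corresponding pixel coordinates,
# so pull the keypoint positions out of each DMatch
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)

# Keep only the RANSAC inliers before rectifying
pts1 = pts1[inlier_mask.ravel() == 1]
pts2 = pts2[inlier_mask.ravel() == 1]

# One common way to "correct" the pair from F: uncalibrated rectification
h, w = left.shape
ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1, pts2, F, (w, h))
left_rect = cv2.warpPerspective(left, H1, (w, h))
right_rect = cv2.warpPerspective(right, H2, (w, h))
```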