stereo-3d

OpenCV Stereo Matching/Calibration

╄→гoц情女王★ submitted on 2019-12-04 08:33:20
Question: I'd initially posted this on the OpenCV forums, but unfortunately I didn't get many views/replies, so I'm posting here in the hope that someone might be able to suggest a direction. I am using the Bumblebee XB3 Stereo Camera, which has 3 lenses. I've spent about three weeks reading forums, tutorials, the Learning OpenCV book and the actual OpenCV documentation on using the stereo calibration and stereo matching functionality. In summary, my issue is that I have a good disparity map
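
For reference, a minimal sketch of the pipeline the question refers to: calibrate each camera on chessboard views, run stereoCalibrate for the extrinsics, rectify, then compute a disparity map with semi-global block matching. The image names, board size, square size and SGBM parameters below are placeholder assumptions, not the asker's Bumblebee XB3 setup.

    import cv2
    import numpy as np

    # Placeholder calibration inputs (assumptions, not the asker's data)
    board_size = (9, 6)        # inner chessboard corners
    square_size = 0.025        # square edge in metres
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size

    obj_pts, left_pts, right_pts = [], [], []
    for i in range(20):        # assumed 20 image pairs named left_00.png / right_00.png
        imL = cv2.imread('left_%02d.png' % i, cv2.IMREAD_GRAYSCALE)
        imR = cv2.imread('right_%02d.png' % i, cv2.IMREAD_GRAYSCALE)
        okL, cornersL = cv2.findChessboardCorners(imL, board_size)
        okR, cornersR = cv2.findChessboardCorners(imR, board_size)
        if okL and okR:
            obj_pts.append(objp); left_pts.append(cornersL); right_pts.append(cornersR)

    size = imL.shape[::-1]     # (width, height)

    # Per-camera intrinsics first, then the stereo extrinsics with intrinsics held fixed
    _, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, left_pts, size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, right_pts, size, None, None)
    _, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K1, d1, K2, d2, size,
        flags=cv2.CALIB_FIX_INTRINSIC)

    # Rectify and compute a disparity map
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
    mapLx, mapLy = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
    mapRx, mapRy = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)
    rectL = cv2.remap(imL, mapLx, mapLy, cv2.INTER_LINEAR)
    rectR = cv2.remap(imR, mapRx, mapRy, cv2.INTER_LINEAR)
    sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
    disparity = sgbm.compute(rectL, rectR).astype(np.float32) / 16.0   # SGBM output is fixed-point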

OpenCV - Tilted camera and triangulation landmark for stereo vision

给你一囗甜甜゛ submitted on 2019-12-04 07:32:08
I am using a stereo system, so I am trying to get the world coordinates of some points by triangulation. My cameras are mounted at an angle, so the Z axis direction (the depth direction) is not normal to my surface. That is why, when I observe a flat surface, I get not a constant depth but a "linear" variation, correct? And I want the depth along the baseline direction... How can I re-project? A piece of my code with my projection matrices and triangulation function: #C1 and C2 are the camera matrices (left and right) #R_0 and T_0 are the transformation between cameras #Coord1 and Coord2 are the corresponding
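
Since the question's snippet is cut off, here is a hedged sketch of the triangulation step it describes, reusing the question's variable names (C1, C2, R_0, T_0, Coord1, Coord2) but with placeholder values; the final tilt correction is an assumed angle, not the asker's measured geometry.

    import cv2
    import numpy as np

    # Placeholder inputs, following the question's naming (all values are assumptions):
    C1 = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])   # left intrinsics
    C2 = C1.copy()                                                    # right intrinsics
    R_0 = np.eye(3)                                                   # rotation cam1 -> cam2
    T_0 = np.array([[-0.1], [0.0], [0.0]])                            # translation (baseline)
    Coord1 = np.array([[600.0, 650.0], [300.0, 320.0]])               # 2xN pixels, left image
    Coord2 = np.array([[580.0, 630.0], [300.0, 320.0]])               # 2xN pixels, right image

    # Projection matrices with the left camera as the world origin
    P1 = C1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = C2 @ np.hstack([R_0, T_0])

    pts4d = cv2.triangulatePoints(P1, P2, Coord1, Coord2)   # homogeneous, 4xN
    pts3d = (pts4d[:3] / pts4d[3]).T                         # Nx3, in the left camera frame

    # Depth here is measured along the left camera's optical (Z) axis. If the camera is
    # tilted with respect to the surface, rotate the points into a frame whose Z axis is
    # normal to the surface before reading off depth, e.g. with an assumed tilt about X:
    theta = np.deg2rad(20.0)                                 # assumed tilt, not measured
    R_tilt = np.array([[1, 0, 0],
                       [0, np.cos(theta), -np.sin(theta)],
                       [0, np.sin(theta),  np.cos(theta)]])
    pts_level = pts3d @ R_tilt.T                             # depth is now pts_level[:, 2]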

AttributeError: 'module' object has no attribute

妖精的绣舞 submitted on 2019-12-04 03:55:01
I am trying to get the depth map of two stereo images. I have taken the code from http://docs.opencv.org/trunk/doc/py_tutorials/py_calib3d/py_depthmap/py_depthmap.html I get the following error: Traceback (most recent call last): File "depth.py", line 9, in <module> stereo = cv2.createStereoBM(numDisparities=16, blockSize=15) AttributeError: 'module' object has no attribute 'createStereoBM' My code is: import numpy as np import cv2 from matplotlib import pyplot as plt imgL = cv2.imread('tsukuba_l.png',0) imgR = cv2.imread('tsukuba_r.png',0) stereo = cv2.createStereoBM(numDisparities=16,
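
For what it's worth, this particular error is usually an API-naming mismatch: the factory the trunk tutorial calls cv2.createStereoBM is exposed as cv2.StereoBM_create in OpenCV 3.x and later. A sketch of the corrected tutorial code, using the same Tsukuba images as the question:

    import numpy as np
    import cv2
    from matplotlib import pyplot as plt

    imgL = cv2.imread('tsukuba_l.png', 0)   # left image, grayscale
    imgR = cv2.imread('tsukuba_r.png', 0)   # right image, grayscale

    # In OpenCV 3.x/4.x the factory function is StereoBM_create, not createStereoBM
    stereo = cv2.StereoBM_create(numDisparities=16, blockSize=15)
    disparity = stereo.compute(imgL, imgR)

    plt.imshow(disparity, 'gray')
    plt.show()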

How do you use Processing for Android to display a stereoscopic image in a Google Cardboard device?

Deadly submitted on 2019-12-03 20:44:05
Processing was designed to make drawing with Java much easier. Processing for Android has the power of its desktop sibling plus information from sensors. Putting these things together, shouldn't it be easy to display a stereoscopic image and move around it like Oculus Rift or Google Cardboard? The code below displays an image in two viewports - one for the left eye and one for the right eye. The result is that the image looks 3D when viewed from a Google Cardboard device. Accelerometer and gyroscope data are used to move the 3D image as the head is moved around. The only bug is that of

OpenCV 3D from points in stereo pair

放肆的年华 submitted on 2019-12-03 20:11:06
Is there a simple function in OpenCV to get the 3D position and pose of an object from a stereo camera pair? I have the cameras and baseline calibrated with the chessboard. I now want to take a known object, like the same chessboard with known 3D points in its own coordinates, and find the real-world position (in the camera coordinates). There are functions to do this for a single camera (POSIT) and functions to find the 3D disparity image for the entire scene. It must be simple to do almost the same process as for the camera calibration and find the chessboard in the camera pair - but I can
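
There does not appear to be a single ready-made stereo-pair pose function, but one common route, sketched below with an assumed image name, intrinsics and board geometry, is to detect the chessboard in one (undistorted) view and recover its pose with solvePnP; the stereo extrinsics from calibration then relate that pose to the other camera.

    import cv2
    import numpy as np

    # Placeholder board geometry and intrinsics (assumptions, not the asker's calibration)
    board_size = (9, 6)
    square_size = 0.025  # metres
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_size

    img = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)             # assumed file name
    K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])  # assumed intrinsics
    dist = np.zeros(5)                                             # assumed no distortion

    found, corners = cv2.findChessboardCorners(img, board_size)
    if found:
        ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
        R, _ = cv2.Rodrigues(rvec)
        # R, tvec map board coordinates into the left camera frame;
        # the R, T from stereoCalibrate then carry that pose into the right camera frame.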

Rotation and Translation from Essential Matrix incorrect

ぃ、小莉子 submitted on 2019-12-03 16:09:12
I currently have a stereo camera setup. I have calibrated both cameras and have the intrinsic matrices K1 and K2 for both cameras. K1 = [2297.311, 0, 319.498; 0, 2297.313, 239.499; 0, 0, 1]; K2 = [2297.304, 0, 319.508; 0, 2297.301, 239.514; 0, 0, 1]; I have also determined the fundamental matrix F between the two cameras using findFundamentalMat() from OpenCV. I have tested the epipolar constraint using a pair of corresponding points x1 and x2 (in pixel coordinates) and it is very close to 0. F = [5.672563368940768e-10, 6.265600996978877e-06, -0.00150188302445251; 6.766518121363063e-06, 4
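
A sketch of the usual recovery chain from F to a relative pose, using the question's K1 and K2 but synthetic matched points in place of the asker's correspondences; note that cv2.recoverPose assumes a single camera matrix, which is a reasonable approximation here since K1 and K2 are nearly identical.

    import cv2
    import numpy as np

    # The question's intrinsics; the matched points below are synthetic placeholders.
    K1 = np.array([[2297.311, 0, 319.498], [0, 2297.313, 239.499], [0, 0, 1]])
    K2 = np.array([[2297.304, 0, 319.508], [0, 2297.301, 239.514], [0, 0, 1]])

    # Synthetic scene: 50 random 3D points seen by two cameras about 10 cm apart
    rng = np.random.default_rng(0)
    X = rng.uniform([-1, -1, 4], [1, 1, 8], (50, 3))
    R_true = cv2.Rodrigues(np.array([0.0, 0.05, 0.0]))[0]
    t_true = np.array([[-0.1], [0.0], [0.0]])
    x1 = cv2.projectPoints(X, np.zeros(3), np.zeros(3), K1, np.zeros(5))[0].reshape(-1, 2)
    x2 = cv2.projectPoints(X, cv2.Rodrigues(R_true)[0], t_true, K2, np.zeros(5))[0].reshape(-1, 2)

    F, _ = cv2.findFundamentalMat(x1, x2, cv2.FM_8POINT)
    E = K2.T @ F @ K1            # essential matrix from F and the intrinsics

    # decomposeEssentialMat yields two rotations and a translation direction (4 candidate
    # poses); recoverPose picks the one that puts the points in front of both cameras.
    _, R, t, mask = cv2.recoverPose(E, x1, x2, K1)
    print(R)                     # should be close to R_true
    print(t.ravel())             # unit-length translation direction, ~ t_true / |t_true|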

Creating synchronized stereo videos using webcams

人盡茶涼 submitted on 2019-12-03 13:54:58
Question: I am using OpenCV to capture video streams from two USB webcams (Microsoft LifeCam Studio) in Ubuntu 14.04. I am using very simple VideoCapture code (source here) and am trying to at least view two videos that are synchronized against each other. I used Android stopwatch apps (UltraChron Stopwatch Lite and Stopwatch Timer) on my Samsung Galaxy S3 mini and found that my viewed images are out of sync (they show a different time on the stopwatch). The frames are in sync maybe 50% of the time. The frame
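
A common software-only mitigation, sketched below with assumed device indices 0 and 1: call grab() on both captures back to back and only then retrieve(), so the two frames are latched as close together as possible. This narrows, but does not eliminate, the skew; frame-accurate synchronization generally needs hardware-triggered cameras.

    import cv2

    capL = cv2.VideoCapture(0)   # assumed device indices
    capR = cv2.VideoCapture(1)

    while True:
        # grab() only latches a frame; the (slower) decode happens in retrieve()
        okL = capL.grab()
        okR = capR.grab()
        if not (okL and okR):
            break
        _, frameL = capL.retrieve()
        _, frameR = capR.retrieve()
        cv2.imshow('left', frameL)
        cv2.imshow('right', frameR)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    capL.release()
    capR.release()
    cv2.destroyAllWindows()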

Python - Perspective transform for OpenCV from a rotation angle

别来无恙 submitted on 2019-12-03 13:14:07
I'm working on a depth map with OpenCV. I can obtain it, but it is reconstructed from the left camera origin, and since that camera is slightly tilted, as you can see in the figure, the depth is "shifted" (the depth should be close to constant, with no horizontal gradient): I would like to express it as if seen from a zero angle; I tried the warpPerspective function as you can see below, but I obtain a null field... P = np.dot(cam,np.dot(Transl,np.dot(Rot,A1))) dst = cv2.warpPerspective(depth, P, (2048, 2048)) with : #Projection 2D -> 3D matrix A1 = np.zeros((4,3)) A1[0,0] = 1 A1[0,2] = -1024 A1[1,1] = 1 A1[1,2
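
For context, a sketch of the standard "warp by a 3D rotation" homography the question's P is building, with placeholder image size, focal length and tilt angle. One caveat worth noting: warpPerspective only re-samples pixel positions, so it moves depth values around without correcting them; to truly level a tilted depth map, the 3D points themselves need to be rotated.

    import cv2
    import numpy as np

    # Placeholder geometry (assumptions, not the asker's calibration)
    w, h = 2048, 2048
    f = 2000.0                      # assumed focal length in pixels
    theta = np.deg2rad(10.0)        # assumed tilt about the X axis

    # 2D -> 3D (homogeneous), centred on the image
    A1 = np.array([[1, 0, -w / 2],
                   [0, 1, -h / 2],
                   [0, 0, 0],
                   [0, 0, 1]], dtype=np.float64)

    # Rotation about X, as a 4x4
    Rot = np.eye(4)
    Rot[1, 1], Rot[1, 2] = np.cos(theta), -np.sin(theta)
    Rot[2, 1], Rot[2, 2] = np.sin(theta), np.cos(theta)

    # Push the plane back along Z so it stays in front of the virtual camera
    Transl = np.eye(4)
    Transl[2, 3] = f

    # 3D -> 2D projection
    cam = np.array([[f, 0, w / 2, 0],
                    [0, f, h / 2, 0],
                    [0, 0, 1, 0]], dtype=np.float64)

    P = cam @ Transl @ Rot @ A1              # final 3x3 homography
    depth = np.zeros((h, w), np.float32)     # placeholder for the real depth map
    dst = cv2.warpPerspective(depth, P, (w, h))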

How can I output a HDMI 1.4a-compatible stereoscopic signal from an OpenGL application to a 3DTV?

喜你入骨 submitted on 2019-12-03 12:53:56
I have an OpenGL application that outputs stereoscopic 3D video to off-the-shelf TVs via HDMI, but it currently requires the display to support the pre-1.4a methods of manually choosing the right format (side-by-side, top-bottom, etc.). However, I now need to support a device that ONLY accepts HDMI 1.4a 3D signals, which as I understand it is some kind of packet sent to the display that tells it what format the 3D video is in. I'm using an NVIDIA Quadro 4000 and I would like to know if it's possible to output my video (or tell the video card how to) in a way that a standard 3DTV

stereoCalibrate() changes focal lengths even when it was not supposed to

青春壹個敷衍的年華 submitted on 2019-12-03 08:38:49
I noticed that OpenCV's stereoCalibrate() changes the focal lengths in the camera matrices even though I've set the appropriate flag (i.e. CV_CALIB_FIX_FOCAL_LENGTH). I'm using two identical cameras with the same focal length set mechanically on the lens, and furthermore I know the sensor size, so I can compute the intrinsic camera matrix manually, which is what I actually do. Here is some output from the stereo calibration program - the camera matrices before and after stereoCalibrate(). std::cout << "Before calibration: " << std::endl; std::cout << "C1: " << _cameraMatrixA << std::endl; std::cout << "C2: " <<
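
A self-contained sketch (synthetic chessboard data, placeholder intrinsics) of how the fix-intrinsics flags are passed; one frequently cited gotcha is that CALIB_FIX_FOCAL_LENGTH only pins the supplied fx, fy when CALIB_USE_INTRINSIC_GUESS is also set, and CALIB_FIX_INTRINSIC keeps the whole camera matrices untouched.

    import cv2
    import numpy as np

    # Synthetic data: project a chessboard through two known cameras, then check that
    # stereoCalibrate leaves fx, fy alone. All numbers here are placeholder assumptions.
    board, square = (9, 6), 0.025
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

    K = np.array([[1200.0, 0, 640], [0, 1200.0, 480], [0, 0, 1]])
    dist = np.zeros(5)
    R_lr = cv2.Rodrigues(np.array([0.0, 0.02, 0.0]))[0]   # assumed left-to-right rotation
    T_lr = np.array([[-0.12], [0.0], [0.0]])              # assumed 12 cm baseline

    obj_pts, left_pts, right_pts = [], [], []
    rng = np.random.default_rng(1)
    for _ in range(15):                                   # 15 synthetic board poses
        rvec = rng.uniform(-0.3, 0.3, 3)
        tvec = rng.uniform([-0.2, -0.2, 0.8], [0.2, 0.2, 1.5])
        pl = cv2.projectPoints(objp, rvec, tvec, K, dist)[0].astype(np.float32)
        Rb, _ = cv2.Rodrigues(rvec)
        rvec_r = cv2.Rodrigues(R_lr @ Rb)[0]
        tvec_r = R_lr @ tvec.reshape(3, 1) + T_lr
        pr = cv2.projectPoints(objp, rvec_r, tvec_r, K, dist)[0].astype(np.float32)
        obj_pts.append(objp); left_pts.append(pl); right_pts.append(pr)

    flags = cv2.CALIB_USE_INTRINSIC_GUESS | cv2.CALIB_FIX_FOCAL_LENGTH
    # (cv2.CALIB_FIX_INTRINSIC would instead keep K and dist for both cameras untouched
    #  and estimate only R, T, E, F.)
    rms, K1o, d1, K2o, d2, R, T, E, F = cv2.stereoCalibrate(
        obj_pts, left_pts, right_pts, K.copy(), dist.copy(), K.copy(), dist.copy(),
        (1280, 960), flags=flags)
    print('fx after:', K1o[0, 0], K2o[0, 0])   # should still be 1200 with these flags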