perspectivecamera

Trying to understand the math behind the perspective matrix in WebGL

我的梦境 · Submitted on 2019-11-27 20:34:58
Question: All matrix libraries for WebGL have some sort of perspective function that you call to get the perspective matrix for the scene. For example, the perspective method within the mat4.js file that's part of gl-matrix is coded as such:

    mat4.perspective = function (out, fovy, aspect, near, far) {
        var f = 1.0 / Math.tan(fovy / 2),
            nf = 1 / (near - far);
        out[0] = f / aspect;
        out[1] = 0;
        out[2] = 0;
        out[3] = 0;
        out[4] = 0;
        out[5] = f;
        out[6] = 0;
        out[7] = 0;
        out[8] = 0;
        out[9] = 0;
        out[10] = (far + near) * nf;
        out[11] = -1;
        out[12] = 0;
        out[13] = 0;
        out[14] = (2 * far * near) * nf;
        out[15] = 0;
        return out;
    };
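For reference, the column-major matrix this function assembles, with $f = 1/\tan(\mathit{fovy}/2)$, is the standard OpenGL-style perspective matrix:

$$
P = \begin{pmatrix}
f/\mathit{aspect} & 0 & 0 & 0\\
0 & f & 0 & 0\\
0 & 0 & \dfrac{\mathit{far}+\mathit{near}}{\mathit{near}-\mathit{far}} & \dfrac{2\,\mathit{far}\,\mathit{near}}{\mathit{near}-\mathit{far}}\\
0 & 0 & -1 & 0
\end{pmatrix}
$$

The $-1$ in the bottom row copies $-z_{view}$ into clip-space $w$, so the hardware divide by $w$ produces the perspective foreshortening, while the third row maps view-space $z \in [-\mathit{near}, -\mathit{far}]$ to clip $z \in [-1, 1]$.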

Perspective projection and view matrix: Both depth buffer and triangle face orientation are reversed in OpenGL

跟風遠走 · Submitted on 2019-11-27 16:23:52
Question: I am having trouble with my scene in OpenGL. Objects that are supposed to be further away are drawn closer, and front-facing triangles are being culled instead of back-facing ones. The models are drawn in the correct orientation, as they come from a package I have used before, so I am convinced the problem is in my projection or viewModel matrix. I cannot see anything wrong with these, though!

    AV4X4FLOAT formProjMatrix(float FOVangle, float aspect, float nearz, float farz) {
        AV4X4FLOAT A;
        A.m[0]  = 1 / (aspect * tanf(FOVangle / 2));
        A.m[5]  = 1 / tanf(FOVangle / 2);
        A.m[10] = farz / (farz - nearz);
        A.m[11] = …
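A plausible cause, though the excerpt does not confirm it: A.m[10] = farz/(farz-nearz) is the left-handed Direct3D convention, in which view-space z grows into the screen. Handing such a matrix to OpenGL, which looks down -z, mirrors the scene along the z-axis, and a single mirroring reverses both the depth ordering and the apparent triangle winding at once. A minimal sketch of the right-handed OpenGL variant for comparison (buildGLProjection and the row-major m[16] layout are illustrative, not the questioner's API):

    #include <cmath>

    // Right-handed OpenGL perspective, row-major m[row*4 + col].
    // Maps view-space z in [-nearz, -farz] to clip z in [-1, 1].
    void buildGLProjection(float fov, float aspect, float nearz, float farz, float m[16]) {
        const float f = 1.0f / std::tan(fov / 2.0f);
        for (int i = 0; i < 16; ++i) m[i] = 0.0f;
        m[0]  = f / aspect;
        m[5]  = f;
        m[10] = (farz + nearz) / (nearz - farz);  // sign differs from farz/(farz-nearz)
        m[11] = 2.0f * farz * nearz / (nearz - farz);
        m[14] = -1.0f;                            // w_clip = -z_view, not +z_view
    }

If the rest of the pipeline genuinely expects the left-handed matrix, the alternative fix is to tell OpenGL that clockwise triangles are front-facing with glFrontFace(GL_CW).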

OpenCV extrinsic camera from feature points

两盒软妹~` · Submitted on 2019-11-27 12:42:35
Question: How do I retrieve the rotation matrix, the translation vector, and maybe some scaling factors of each camera using OpenCV, when I have pictures of an object from the view of each of these cameras? For every picture I have the image coordinates of several feature points, but not all feature points are visible in all of the pictures. I want to map the computed 3D coordinates of the feature points of the object onto a slightly different object, to align the shape of the second object with the first.
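The excerpt does not say which OpenCV routines were tried; one standard route, sketched under the assumption that the cameras are intrinsically calibrated (K known) and that matched points pts1/pts2 are available for a camera pair, is the essential-matrix decomposition. The recovered translation is defined only up to scale, which is where the per-pair scaling factor comes from:

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // pts1, pts2: matching pixel coordinates of the shared feature points
    // in two views; K: the 3x3 intrinsic matrix (assumed known).
    void relativePose(const std::vector<cv::Point2f>& pts1,
                      const std::vector<cv::Point2f>& pts2,
                      const cv::Mat& K, cv::Mat& R, cv::Mat& t) {
        cv::Mat inliers;
        cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, inliers);
        // recoverPose cheirality-checks the four possible (R, t) decompositions
        // and returns the one that puts the points in front of both cameras.
        cv::recoverPose(E, pts1, pts2, K, R, t, inliers);
        // t has unit norm: the true baseline (the missing scale) must come
        // from outside information, e.g. a known distance in the scene.
    }

With the pairwise poses, the shared points can be triangulated with cv::triangulatePoints and the poses chained into one world frame; feature points missing from some views simply drop out of the pairs in which they are absent.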

How to calculate perspective transform for OpenCV from rotation angles?

£可爱£侵袭症+ · Submitted on 2019-11-26 19:31:00
Question: I want to calculate a perspective transform (a matrix for the warpPerspective function) starting from angles of rotation and distance to the object. How do I do that? I found the code somewhere on OE. The sample program is below:

    #include <opencv2/objdetect/objdetect.hpp>
    #include <opencv2/highgui/highgui.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <iostream>
    #include <math.h>

    using namespace std;
    using namespace cv;

    Mat frame;
    int alpha_int;
    int dist_int;
    int f_int;
    double w;
    double h;
    double …
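For a camera that only rotates about its own center (no parallax), the warp between the original and rotated views is the homography H = K·R·K⁻¹, where K is the intrinsic matrix built from the focal length and image size. A minimal sketch under that assumption, for a pitch of alpha radians about the x-axis (the names rotationHomography, K, R, H are illustrative, not from the excerpt):

    #include <opencv2/imgproc.hpp>
    #include <cmath>

    // Homography that re-renders the image as if the camera pitched by
    // alpha radians about its center; f is the focal length in pixels.
    cv::Mat rotationHomography(double alpha, double f, double w, double h) {
        cv::Mat K = (cv::Mat_<double>(3, 3) <<
            f, 0, w / 2,
            0, f, h / 2,
            0, 0, 1);
        cv::Mat R = (cv::Mat_<double>(3, 3) <<
            1, 0,                0,
            0, std::cos(alpha), -std::sin(alpha),
            0, std::sin(alpha),  std::cos(alpha));
        return K * R * K.inv();  // H = K R K^-1
    }
    // Usage: cv::warpPerspective(frame, out, rotationHomography(a, f, w, h), frame.size());

Changing the distance moves the camera center and introduces parallax, so a single homography then stays exact only for a planar object; for a plane at depth d with normal n, the distance enters through H = K (R - t nᵀ/d) K⁻¹.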

Transformation of 3D objects related to vanishing points and horizon line

隐身守侯 · Submitted on 2019-11-26 12:48:33
Question: I'm trying to compute the exact perspective transformation of a 3D object starting from the vanishing points and horizon line of a picture. That is, with the vanishing points and horizon line of a picture fixed, I want to rotate and skew a 3D object so that it agrees with the vanishing points and horizon line I set from the picture. Below is the final result that I expect. How can I obtain this result? What kind of transformation can I use? In this video it is possible to see the result …
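One standard link between the two, assuming a calibrated pinhole camera with intrinsic matrix $K$: the vanishing point of a 3D direction $d$ is its image $v \simeq K R\, d$, so the vanishing points $v_1, v_2$ of two orthogonal scene directions determine the rotation with which the object must be rendered:

$$
r_1 = \frac{K^{-1} v_1}{\lVert K^{-1} v_1 \rVert},\qquad
r_2 = \frac{K^{-1} v_2}{\lVert K^{-1} v_2 \rVert},\qquad
r_3 = r_1 \times r_2,\qquad
R = \begin{pmatrix} r_1 & r_2 & r_3 \end{pmatrix}
$$

When both vanishing points belong to ground-plane directions, the horizon is simply the line through them (the vanishing line of the ground plane), so it is consistent with, rather than additional to, this construction; rendering the 3D object with this $R$ under the same $K$ reproduces the picture's perspective.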

How to recover view space position given view space depth value and ndc xy

六眼飞鱼酱① · Submitted on 2019-11-26 05:36:55
Question: I am writing a deferred shader and am trying to pack my g-buffer more tightly. However, I can't seem to compute the view-space position from the view-space depth correctly:

    // depth -> (gl_ModelViewMatrix * vec4(pos.xyz, 1)).z; where pos is the model-space position
    // fov   -> field of view in radians (0.62831855, 0.47123888)
    // p     -> ndc position, x, y in [-1, 1]
    vec3 getPosition(float depth, vec2 fov, vec2 p) {
        vec3 pos;
        pos.x = -depth * tan( HALF_PI - fov.x/2.0 ) * (p.x);
        pos.y = -depth * tan( HALF_PI - fov.y/2.0 ) * (p.y);
        pos.z = depth;
        return pos;
    }
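Working the projection backwards shows the likely bug, assuming a symmetric frustum looking down $-z$: the NDC coordinate is $p_x = x_{view} / (-z_{view}\tan(fov_x/2))$, so inverting it multiplies by $\tan(fov_x/2)$, whereas the code above multiplies by $\tan(\text{HALF\_PI} - fov_x/2) = \cot(fov_x/2)$, the reciprocal of the needed factor:

$$
x_{view} = -z_{view}\,\tan\!\left(\tfrac{fov_x}{2}\right) p_x,\qquad
y_{view} = -z_{view}\,\tan\!\left(\tfrac{fov_y}{2}\right) p_y,\qquad
z_{view} = depth
$$

With depth stored as the (negative) view-space z, replacing tan( HALF_PI - fov/2.0 ) by tan( fov/2.0 ) in both lines gives the correct reconstruction.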