Coordinate transformations and projection issues


Question


I've been trying for a while to get my mouse coordinates converted into 3D space coordinates in an OpenGL scene.

Currently, my projections are a bit of a mess (I think), and the result doesn't seem to fully take my "camera" into account when I move around the scene. To check this, I draw a line.

My resize function:

    void oglWidget::resizeGL(int width, int height)
    {
        if (height == 0) {
            height = 1;
        }
        pMatrix.setToIdentity();
        pMatrix.perspective(fov, (float) width / (float) height, -1, 1);
        glViewport(0, 0, width, height);
    }

My rendering function, paintGL(), goes as follows:

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);
    QMatrix4x4 mMatrix;
    QMatrix4x4 vMatrix;

    QMatrix4x4 cameraTransformation;
    cameraTransformation.rotate(alpha, 0, 1, 0);
    cameraTransformation.rotate(beta, 1, 0, 0);

    QVector3D cameraPosition = cameraTransformation * QVector3D(camX, camY, distance);
    QVector3D cameraUpDirection = cameraTransformation * QVector3D(0, 1, 0);
    vMatrix.lookAt(cameraPosition, QVector3D(camX, camY, 0), cameraUpDirection);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(cameraPosition.x(), cameraPosition.y(), cameraPosition.z(), camX, camY, 0, cameraUpDirection.x(), cameraUpDirection.y(), cameraUpDirection.z());

    shaderProgram.bind();
    shaderProgram.setUniformValue("mvpMatrix", pMatrix * vMatrix * mMatrix);
    shaderProgram.setUniformValue("texture", 0);

    for (int x = 0; x < tileCount; x++)
    {
        shaderProgram.setAttributeArray("vertex", tiles[x]->vertices.constData());
        shaderProgram.enableAttributeArray("vertex");
        shaderProgram.setAttributeArray("textureCoordinate", textureCoordinates.constData());
        shaderProgram.enableAttributeArray("textureCoordinate");
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, tiles[x]->image.width(), tiles[x]->image.height(), 0, GL_RGBA, GL_UNSIGNED_BYTE, tiles[x]->image.bits());
        glDrawArrays(GL_TRIANGLES, 0, tiles[x]->vertices.size());
    }
    shaderProgram.release();

And to create my Ray:

    GLdouble modelViewMatrix[16];
    GLdouble projectionMatrix[16];
    GLint viewport[4];
    GLfloat winX, winY, winZ;
    glGetDoublev(GL_MODELVIEW_MATRIX, modelViewMatrix);
    glGetDoublev(GL_PROJECTION_MATRIX, projectionMatrix);
    glGetIntegerv(GL_VIEWPORT, viewport);

    winX = (float)x;
    winY = (float)viewport[3] - (float)y;
    glReadPixels(winX, winY, 1, 1, GL_DEPTH_COMPONENT, GL_FLOAT, &winZ);

    GLdouble nearPlaneLocation[3];
    gluUnProject(winX, winY, 0, modelViewMatrix, projectionMatrix, viewport,
                 &nearPlaneLocation[0], &nearPlaneLocation[1], &nearPlaneLocation[2]);

    GLdouble farPlaneLocation[3];
    gluUnProject(winX, winY, 1, modelViewMatrix, projectionMatrix, viewport,
                 &farPlaneLocation[0], &farPlaneLocation[1], &farPlaneLocation[2]);

    QVector3D nearP = QVector3D(nearPlaneLocation[0], nearPlaneLocation[1], nearPlaneLocation[2]);
    QVector3D farP = QVector3D(farPlaneLocation[0], farPlaneLocation[1], farPlaneLocation[2]);

I feel like I'm using conflicting systems or something.

Should I be using different variables to manage my camera? I see talk of projection and model-view matrices, etc., but I don't see how I would use those and also use the shader program. I'm still a novice when it comes to OpenGL.

So to clarify: I'm attempting to convert my mouse coordinates into 3D space coordinates. So far it appears to semi-work; it just doesn't take camera rotation into account. I've confirmed that my problem lies in either the ray creation or the unprojection of coordinates, not in my actual ray-picking logic.


Answer 1:


It very much looks like you're being tripped up by mixing two different generations of OpenGL functionality. In some places you are using the legacy fixed-function matrix stack. For example, in your drawing function:

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(cameraPosition.x(), cameraPosition.y(), cameraPosition.z(), camX, camY, 0, cameraUpDirection.x(), cameraUpDirection.y(), cameraUpDirection.z());

Or in your un-projection function:

    glGetDoublev(GL_MODELVIEW_MATRIX, modelViewMatrix);
    glGetDoublev(GL_PROJECTION_MATRIX, projectionMatrix);

But in other places, you're building your own matrices, and passing them into the shader as uniforms:

    vMatrix.lookAt(cameraPosition, QVector3D(camX, camY, 0), cameraUpDirection);
    ...
    shaderProgram.setUniformValue("mvpMatrix", pMatrix * vMatrix * mMatrix);

These are two different ways of implementing the same functionality. But you can't just freely mix and match pieces of them. If you want to keep things clean and predictable, you should choose one, and stick with it. Since the first one is legacy functionality that is deprecated, I suggest you stick with building your own matrices, and using uniforms defined in your shader code.

As an example to illustrate why your current mix of features is a recipe for disappointment: You're getting the current projection matrix from the fixed function matrix stack with glGetDoublev(GL_PROJECTION_MATRIX, ...). But at least in the code shown here, you're never specifying a projection matrix using the matrix stack. So this is just going to give you the identity matrix. As far as OpenGL is concerned, the pMatrix matrix you pass as a uniform to your shader program is completely unrelated.

Rewriting your whole code would be a little much for this answer, so here are just pointers for the critical parts:

  • Get rid of all calls that refer to the matrix stack. This includes calls like glMatrixMode(), glLoadIdentity(), gluLookAt(), and the glGetDoublev() calls to get the current matrices.
  • Use shaders for all your rendering (if you aren't already), and define all matrices you need as uniforms in your GLSL code.
  • Calculate and manage matrices yourself, either using your own code, or one of the widely used matrix/vector libraries.
  • Pass those matrices as uniforms to the shader (a sketch of what this can look like follows below).
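
To illustrate, here is a minimal sketch of what paintGL() could look like once the matrix-stack calls are gone. It reuses the names from the question (pMatrix, alpha, beta, camX, camY, distance) and omits the texture and attribute setup; treat it as an outline, not a drop-in replacement:

    // Sketch only: matrices are built by hand and handed to the shader as a
    // uniform; nothing touches glMatrixMode()/glLoadIdentity()/gluLookAt().
    void oglWidget::paintGL()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        QMatrix4x4 mMatrix;                      // model matrix (identity for now)

        QMatrix4x4 cameraTransformation;
        cameraTransformation.rotate(alpha, 0, 1, 0);
        cameraTransformation.rotate(beta, 1, 0, 0);

        QVector3D cameraPosition    = cameraTransformation * QVector3D(camX, camY, distance);
        QVector3D cameraUpDirection = cameraTransformation * QVector3D(0, 1, 0);

        QMatrix4x4 vMatrix;                      // view matrix built by hand
        vMatrix.lookAt(cameraPosition, QVector3D(camX, camY, 0), cameraUpDirection);

        shaderProgram.bind();
        // pMatrix was set up in resizeGL(); the shader only ever sees this uniform.
        shaderProgram.setUniformValue("mvpMatrix", pMatrix * vMatrix * mMatrix);
        // ... set vertex/textureCoordinate attribute arrays and issue the draw calls ...
        shaderProgram.release();
    }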

Now, for the un-projection, it looks like you already have your model, view, and projection matrices (pMatrix, vMatrix, mMatrix). So there is no need for any glGet*() calls to retrieve them from OpenGL. Multiply vMatrix with mMatrix to get the modelViewMatrix, and use pMatrix directly as the projection matrix you pass to gluUnProject().
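
If you keep gluUnProject() for the moment, a minimal sketch of feeding it your own matrices could look like this. Here pMatrix, vMatrix and mMatrix are the question's QMatrix4x4 objects; the toDouble() helper and the viewport handling are assumptions added for illustration:

    // Sketch: pass your own QMatrix4x4 matrices to gluUnProject() instead of
    // reading them back from the (unused) fixed-function matrix stack.
    // QMatrix4x4 stores its elements in column-major order, which matches what
    // GLU expects, so only a float -> double copy is needed.
    static void toDouble(const QMatrix4x4 &m, GLdouble out[16])
    {
        const float *f = m.constData();          // 16 values, column-major
        for (int i = 0; i < 16; ++i)
            out[i] = f[i];
    }

    GLdouble modelView[16], projection[16];
    toDouble(vMatrix * mMatrix, modelView);      // model-view = view * model
    toDouble(pMatrix, projection);

    GLint viewport[4] = { 0, 0, width(), height() };   // or glGetIntegerv(GL_VIEWPORT, ...)

    GLdouble nearPt[3], farPt[3];
    gluUnProject(winX, winY, 0.0, modelView, projection, viewport,
                 &nearPt[0], &nearPt[1], &nearPt[2]);
    gluUnProject(winX, winY, 1.0, modelView, projection, viewport,
                 &farPt[0], &farPt[1], &farPt[2]);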

Strictly speaking, I would also consider GLU deprecated. But if you're comfortable still using gluUnProject(), that might be easiest for now. Otherwise, commonly used matrix libraries are likely to have an implementation of it. Or if you're not afraid to get your hands dirty, it shouldn't be hard to implement if you look up some specs/documentation that explain the underlying calculations.
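
GLM, for instance, provides glm::unProject() for exactly this. A minimal sketch, assuming your matrices are already available as glm::mat4 (the variable names below are placeholders, not code from the question):

    // Sketch: glm::unProject() as a replacement for gluUnProject().
    #include <glm/glm.hpp>
    #include <glm/gtc/matrix_transform.hpp>   // glm::unProject

    // winX/winY are window coordinates (y already flipped), width/height the viewport size.
    glm::vec4 viewport(0.0f, 0.0f, float(width), float(height));
    glm::mat4 modelView  = viewMatrix * modelMatrix;   // your own matrices, not glGet*()
    glm::mat4 projection = projMatrix;

    // Window-space depth 0.0 = near plane, 1.0 = far plane.
    glm::vec3 nearP = glm::unProject(glm::vec3(winX, winY, 0.0f), modelView, projection, viewport);
    glm::vec3 farP  = glm::unProject(glm::vec3(winX, winY, 1.0f), modelView, projection, viewport);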




Answer 2:


Here, I think I had a series of problems.

  1. I should have used -1 for my near plane's z value, not 0 (in NDC the near plane sits at z = -1).
  2. My matrices had differing data types (Qt ones specifically), so I couldn't plug them directly into the unproject function.

I solved these by doing the following:

  1. I installed the GLM library so that all my matrices and vectors use the same data types
  2. I performed the matrix calculations myself
    • I pulled the matrices into my ray-creation function, then multiplied the inverse view matrix by the inverse model matrix, and then by the inverse projection matrix.
    • That combined matrix is then multiplied with two different versions of the screen coordinate (which has to be in NDC space): one with a z of -1 and the other with +1, corresponding to the near and far planes in window space. Both vectors also need a "w" value of 1, so that each matrix * vector result can be divided by its own w component (a sketch follows below).
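
Expressed as code, the steps above might look roughly like the following GLM-based sketch. Everything here is illustrative: the function and variable names are made up, and the separate inverses are written as one combined inverse of projection * view * model:

    // Sketch of the GLM-based un-projection described above; names are placeholders.
    #include <glm/glm.hpp>

    // ndcZ = -1.0f -> point on the near plane, ndcZ = +1.0f -> point on the far plane.
    glm::vec3 unprojectNdc(float ndcX, float ndcY, float ndcZ,
                           const glm::mat4 &projection,
                           const glm::mat4 &view,
                           const glm::mat4 &model)
    {
        // Single combined inverse of the full MVP chain.
        glm::mat4 invMvp = glm::inverse(projection * view * model);

        // The input vector carries w = 1 so the result can be divided by its own w.
        glm::vec4 p = invMvp * glm::vec4(ndcX, ndcY, ndcZ, 1.0f);
        return glm::vec3(p) / p.w;
    }

    // The two results define the picking ray:
    // glm::vec3 nearP = unprojectNdc(ndcX, ndcY, -1.0f, proj, view, model);
    // glm::vec3 farP  = unprojectNdc(ndcX, ndcY, +1.0f, proj, view, model);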

I opened a series of questions because my initial ray-picking problem ended up revealing a whole series of errors in my code. If you've run into the problem posted here, it may be worth checking out the other questions I opened, since all of my problems stem from projection issues:

  • In one of them, I learned that I actually need to do some calculations of my own for ray creation.
  • In another, I learned that unProject wasn't working because I was trying to pull the model and view matrices back with OpenGL functions even though I had never set them in the first place, since I built the matrices by hand. I solved that in two ways: I did the math manually, and I made all the matrices the same data type (they had been mixed data types earlier, which caused issues as well).
  • And lastly, I learned that my order of operations was slightly off (you multiply the matrix by the vector, not the reverse), that my near plane needs to be -1, not 0, and that the last component of the vector being multiplied with the matrices (the "w" value) needs to be 1.

Since my problem stretched over quite a long time and took up several questions, I owe credit to a few individuals who helped me along the way:

  • srobins of facepunch, in this thread
  • derhass from here, in this question, and this discussion


Source: https://stackoverflow.com/questions/28421319/coordinate-transformations-and-projection-issues
