Skewed frustum/off-axis projection for head tracking in OpenGL

Submitted by 半世苍凉 on 2019-12-18 10:47:47

Question


I am trying to do an off-axis projection in my application, changing the perspective of the scene according to the user's head position. Normally, to draw a box on the screen, I would write:

ofBox(350,250,0,50); //ofBox(x, y, z, size); where x, y and z used here are the screen coordinates

To do an off-axis projection here, I am aware that I would have to change the perspective projection as follows:

vertFov = 0.5; near = 0.5; aspRatio = 1.33;
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glFrustum(near * (-vertFov * aspRatio + headX),
          near * (vertFov * aspRatio + headX),
          near * (-vertFov + headY),
          near * (vertFov + headY),
          near, far); //frustum changes as per the position of headX and headY
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(headX * headZ, headY * headZ, 0, headX * headZ, headY * headZ, -1, 0, 1, 0); // eye, center, up
glTranslatef(0, 0, headZ);

For a symmetric frustum in the above case (where headX and headY are zero), the left, right parameters come out to -0.33, 0.33 and the bottom, top parameters come out to -0.25, 0.25, establishing my clipping volume along those coordinates. To test, I simulated the off-axis projection using the mouse and did the following:

double mouseXPosition = (double)ofGetMouseX();
double mouseYPosition = (double)ofGetMouseY();
double scrWidth = (double)ofGetWidth();
double scrHeight = (double)ofGetHeight();

headX = ((scrWidth - mouseXPosition) / scrWidth) - 0.5;
headY = (mouseYPosition / scrHeight) - 0.5;
headZ = -0.5; //taken z constant for this mouse test

However, I intend to use a Kinect, which gives me head coordinates of the order of (200, 400, 1000), (-250, 600, 1400), (400, 100, 1400), etc., and I cannot work out how to change the frustum parameters for those head positions. For example, taking 0 to be at the center of the Kinect, if the user moves so that his position is (200, 400, 1000), how would the frustum parameters change?
How do the objects have to be drawn when the z-distance obtained from the Kinect also has to be taken into account? Objects have to become smaller as z increases, which could happen via the glTranslate() call inside the off-axis code above, but the two coordinate systems have different scales (glFrustum now sets the clipping volume to roughly [-0.33, 0.33] in x and [-0.25, 0.25] in y, whereas the Kinect values are in the hundreds, e.g. (400, 200, 1000)). How do I apply the z values to glFrustum/gluLookAt then?


Answer 1:


First, you don't want to use gluLookAt. gluLookAt rotates the camera, but the physical screen the user looks at doesn't rotate. It would only be correct if the screen rotated so that its normal kept pointing at the user. The perspective distortion of the off-axis projection will take care of all the rotation we need.

What you need to factor into your model is the position of the screen within the frustum. Consider the following image. The red points are the screen borders. What you need to achieve is that these positions remain constant in the 3D WCS, since the physical screen in the real world also (hopefully) doesn't move. I think this is the key insight to virtual reality and stereoscopy. The screen is something like a window into the virtual reality, and to align the real world with the virtual reality, you need to align the frustum with that window.

To do that you have to determine the position of the screen in the coordinate system of the Kinect. Assuming the Kinect is on top of the screen, that +y points downwards, and that the unit you're using is millimeters, I would expect these coordinates to be something along the lines of (+-300, 200, 0), (+-300, 500, 0).

Now there are two possibilities for the far plane. You could either choose to use a fixed distance from the camera to the far plane. That would mean the far plane would move backwards if the user moved backwards, possibly clipping objects you'd like to draw. Or you could keep the far plane at a fixed position in the WCS, as shown in the image. I believe the latter is more useful. For the near plane, I think a fixed distance from the camera is ok though.

The inputs are the 3D positions of the screen wcsPtTopLeftScreen and wcsPtBottomRightScreen, the tracked position of the head wcsPtHead, the z value of the far plane wcsZFar (all in the WCS), and the z value of the near plane camZNear (in camera coordinates). We need to compute the frustum parameters in camera coordinates.

camPtTopLeftScreen = wcsPtTopLeftScreen - wcsPtHead;
camPtTopLeftNear = camPtTopLeftScreen / camPtTopLeftScreen.z * camZNear;

and the same with the bottom right point. Also:

camZFar = wcsZFar - wcsPtHead.z

Now the only problem is that the Kinect and OpenGL use different coordinate systems. In the Kinect CS, +y points down, +z points from the user towards the Kinect. In OpenGL, +y points up, +z points towards the viewer. That means we have to multiply y and z by -1:

glFrustum(camPtTopLeftNear.x, camPtBottomRightNear.x,
  -camPtBottomRightNear.y, -camPtTopLeftNear.y, camZNear, camZFar);

If you want a better explanation that also covers stereoscopy, check out this video; I found it insightful and well done.

Quick demo, you might have to adjust wcsWidth, pxWidth, and wcsPtHead.z.

#include <glm/glm.hpp>
#include <glm/ext.hpp>
#include <GL/glut.h>
#include <functional>

float heightFromWidth;
glm::vec3 camPtTopLeftNear, camPtBottomRightNear;
float camZNear, camZFar;
glm::vec3 wcsPtHead(0, 0, -700);

void moveCameraXY(int pxPosX, int pxPosY)
{
  // Width of the screen in mm and in pixels.
  float wcsWidth = 520.0;
  float pxWidth = 1920.0f;

  float wcsHeight = heightFromWidth * wcsWidth;
  float pxHeight = heightFromWidth * pxWidth;
  float wcsFromPx = wcsWidth / pxWidth;

  glm::vec3 wcsPtTopLeftScreen(-wcsWidth/2.f, -wcsHeight/2.f, 0);
  glm::vec3 wcsPtBottomRightScreen(wcsWidth/2.f, wcsHeight/2.f, 0);
  wcsPtHead = glm::vec3(wcsFromPx * float(pxPosX - pxWidth / 2), wcsFromPx * float(pxPosY - pxHeight * 0.5f), wcsPtHead.z);
  camZNear = 1.0;
  float wcsZFar = 500;

  glm::vec3 camPtTopLeftScreen = wcsPtTopLeftScreen - wcsPtHead;
  camPtTopLeftNear = camZNear / camPtTopLeftScreen.z * camPtTopLeftScreen;
  glm::vec3 camPtBottomRightScreen = wcsPtBottomRightScreen - wcsPtHead;
  camPtBottomRightNear = camPtBottomRightScreen / camPtBottomRightScreen.z * camZNear;
  camZFar = wcsZFar - wcsPtHead.z;

  glutPostRedisplay();
}

void moveCameraZ(int button, int state, int x, int y)
{
  // No mouse wheel in GLUT. :(
  if ((button == 0) || (button == 2))
  {
    if (state == GLUT_DOWN)
      return;
    wcsPtHead.z += (button == 0 ? -1 : 1) * 100;
    glutPostRedisplay();
  }
}

void reshape(int w, int h)
{
  heightFromWidth = float(h) / float(w);
  glViewport(0, 0, w, h);
}

void drawObject(std::function<void(GLdouble)> drawSolid, std::function<void(GLdouble)> drawWireframe, GLdouble size)
{
  glPushAttrib(GL_ALL_ATTRIB_BITS);
  glDisable(GL_LIGHTING);
  glColor4f(1, 1, 1, 1);
  drawSolid(size);
  glColor4f(0.8, 0.8, 0.8, 1);
  glDisable(GL_DEPTH_TEST);
  glLineWidth(1);
  drawWireframe(size);

  glColor4f(0, 0, 0, 1);
  glEnable(GL_DEPTH_TEST);
  glLineWidth(3);
  drawWireframe(size);
  glPopAttrib();
}

void display(void)
{
  glPushAttrib(GL_ALL_ATTRIB_BITS);
  glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
  glEnable(GL_DEPTH_TEST);

  // In the Kinect CS, +y points down, +z points from the user towards the Kinect.
  // In OpenGL, +y points up, +z points towards the viewer.
  glm::mat4 mvpCube;
  mvpCube = glm::frustum(camPtTopLeftNear.x, camPtBottomRightNear.x,
    -camPtBottomRightNear.y, -camPtTopLeftNear.y, camZNear, camZFar);
  mvpCube = glm::scale(mvpCube, glm::vec3(1, -1, -1));
  mvpCube = glm::translate(mvpCube, -wcsPtHead);
  glMatrixMode(GL_MODELVIEW); glLoadMatrixf(glm::value_ptr(mvpCube));

  drawObject(glutSolidCube, glutWireCube, 140);

  glm::mat4 mvpTeapot = glm::translate(mvpCube, glm::vec3(100, 0, 200));
  mvpTeapot = glm::scale(mvpTeapot, glm::vec3(1, -1, -1)); // teapots are in OpenGL coordinates
  glLoadMatrixf(glm::value_ptr(mvpTeapot));
  glColor4f(1, 1, 1, 1);
  drawObject(glutSolidTeapot, glutWireTeapot, 50);

  glFlush();
  glPopAttrib();
}

void leave(unsigned char, int, int)
{
  exit(0);
}

int main(int argc, char **argv)
{
  glutInit(&argc, argv);
  glutCreateWindow("glut test");
  glutDisplayFunc(display);
  glutReshapeFunc(reshape);
  moveCameraXY(0,0);
  glutPassiveMotionFunc(moveCameraXY);
  glutMouseFunc(moveCameraZ);
  glutKeyboardFunc(leave);
  glutFullScreen();
  glutMainLoop();
  return 0;
}

The following images should be viewed from a distance equal to 135% of their width on screen (70 cm on my 52 cm wide screen in fullscreen).




Answer 2:


The best explanation of how to use glFrustum for head-tracking applications can be found in Robert Kooima's paper "Generalized Perspective Projection":

http://csc.lsu.edu/~kooima/pdfs/gen-perspective.pdf

It also makes stereo projection simple: you just have to switch between the left and right cameras!



Source: https://stackoverflow.com/questions/16723674/skewed-frustum-off-axis-projection-for-head-tracking-in-opengl
