Question
I am currently working on ray-tracing techniques and I think I've done a pretty good job so far, but I haven't covered the camera yet.
Until now, I used a plane fragment as the view plane, located between (-width/2, height/2, 200) and (width/2, -height/2, 200) [200 is just a fixed z value, which can be changed].
In addition to that, I mostly place the camera (eye) at e(0, 0, 1000), and I use a perspective projection. I send rays from point e through the pixels, and after calculating each pixel's color I write it to the corresponding pixel of the image.
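Here is a simplified sketch of my current ray generation (Vector3, Normalize, and Ray are stand-ins for my actual helpers):
// Simplified sketch of my current, fixed-camera ray generation.
// Vector3, Normalize, and Ray are stand-ins for my actual helpers.
Ray CurrentCameraRay(int i, int j, int width, int height) {
    Vector3 e = Vector3(0, 0, 1000);                  // fixed eye position
    Vector3 pixel = Vector3(-width / 2.0 + i + 0.5,   // view plane spans
                             height / 2.0 - j - 0.5,  // (-w/2, h/2)..(w/2, -h/2)
                             200.0);                  // fixed view-plane z
    return Ray(e, Normalize(pixel - e));
}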
Here is an image I created. Hopefully you can guess where the eye and the view plane are by looking at it.
My question starts here. It's time to move my camera around, but I don't know how to map the 2D view-plane coordinates to the canonical coordinates. Is there a transformation matrix for that?
The method I have in mind requires knowing the 3D coordinates of the pixels on the view plane, but I am not sure it's the right method to use. So, what do you suggest?
Answer 1:
There are a variety of ways to do it. Here's what I do:
- Choose a point to represent the camera location (camera_position).
- Choose a vector that indicates the direction the camera is looking (camera_direction). (If you know a point the camera is looking at, you can compute this direction vector by subtracting camera_position from that point.) You probably want to normalize camera_direction, in which case it's also the normal vector of the image plane.
- Choose another normalized vector that's (approximately) "up" from the camera's point of view (camera_up).
- Compute camera_right = Cross(camera_direction, camera_up), and then recompute camera_up = Cross(camera_right, camera_direction). (This corrects for any slop in the choice of "up".) A sketch of this setup follows below.
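A minimal sketch of wrapping those steps in a look-at style setup (the Camera struct and MakeLookAtCamera name are illustrative assumptions, not part of the answer; Vector3, Cross, and Normalize are assumed vector helpers, as in the code further down):
// Illustrative sketch of the basis construction described above.
struct Camera {
    Vector3 position, direction, up, right;
};

Camera MakeLookAtCamera(Vector3 eye, Vector3 target, Vector3 approx_up) {
    Camera cam;
    cam.position  = eye;
    cam.direction = Normalize(target - eye);          // where the camera looks
    cam.right     = Normalize(Cross(cam.direction, approx_up));
    cam.up        = Cross(cam.right, cam.direction);  // corrects slop in approx_up
    return cam;
}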
Visualize the "center" of the image plane at camera_position + camera_direction. The up and right vectors lie in the image plane.
You can choose a rectangular section of the image plane to correspond to your screen. The ratio of the width (or height) of this rectangular section to the length of camera_direction determines the field of view. To zoom in, you can lengthen camera_direction or shrink the width and height; do the opposite to zoom out.
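As a concrete convention (an assumption here, not stated in the answer): if camera_direction is kept at unit length, picking a vertical field-of-view angle fixes the half-height of the rectangle. The function and parameter names below are hypothetical:
#include <cmath>

// Sketch: deriving the image-plane rectangle from a vertical field-of-view
// angle, assuming camera_direction has unit length.
double PlaneHalfHeight(double fov_y_radians) {
    // tan(fov/2) = half_height / |camera_direction|, with |camera_direction| = 1.
    return std::tan(fov_y_radians / 2.0);
}

double PlaneHalfWidth(double fov_y_radians, double aspect) {
    // aspect = screen width / screen height.
    return PlaneHalfHeight(fov_y_radians) * aspect;
}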
So given a pixel position (i, j), you want the (x, y, z) of that pixel on the image plane. From that you can subtract camera_position to get a ray vector (which then needs to be normalized).
Ray ComputeCameraRay(int i, int j) {
    const float width = 512.0;   // pixels across
    const float height = 512.0;  // pixels high

    // Map the pixel indices into [-0.5, 0.5] on each axis.
    double normalized_i = (i / width) - 0.5;
    double normalized_j = (j / height) - 0.5;

    // Walk from the center of the image plane (camera_position +
    // camera_direction) along the right and up vectors.
    Vector3 image_point = normalized_i * camera_right +
                          normalized_j * camera_up +
                          camera_position + camera_direction;

    // Remember to normalize ray_direction before using it.
    Vector3 ray_direction = image_point - camera_position;
    return Ray(camera_position, ray_direction);
}
This is meant to be illustrative, so it is not optimized.
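As a usage sketch, the outer per-pixel loop might look like this (Trace, Color, and WritePixel are hypothetical stand-ins for the rest of the ray tracer):
// Usage sketch: the outer per-pixel render loop.
void Render() {
    for (int j = 0; j < 512; ++j) {
        for (int i = 0; i < 512; ++i) {
            Ray ray = ComputeCameraRay(i, j);
            Color color = Trace(ray);   // shade whatever the ray hits
            WritePixel(i, j, color);    // store the result in the image
        }
    }
}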
Answer 2:
For rasterising renderers, you tend to need a transformation matrix because that's how you map directly from 3D coordinates to screen 2D coordinates.
For ray tracing, it's not necessary because you're typically starting from a known pixel coordinate in 2D space.
Given the eye position, a point in 3-space at the center of the screen, and vectors for "up" and "right", it's quite easy to calculate the 3D ray that starts at the eye position and passes through the specified pixel.
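A minimal sketch of that calculation, assuming the same kind of Vector3 helpers as in the first answer (all names here are illustrative, not taken from the linked sample code):
// Sketch: ray from the eye through pixel offsets (u, v), where u and v are
// normalized offsets in [-0.5, 0.5] across the screen rectangle, and the
// up/right vectors are scaled to the full height/width of that rectangle.
Ray RayThroughPixel(Vector3 eye, Vector3 screen_center,
                    Vector3 up, Vector3 right,
                    double u, double v) {
    Vector3 point_on_screen = screen_center + right * u + up * v;
    return Ray(eye, Normalize(point_on_screen - eye));
}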
I've previously posted some sample code from my own ray tracer at https://stackoverflow.com/a/12892966/6782
Source: https://stackoverflow.com/questions/13078243/how-to-move-a-camera-using-in-a-ray-tracer