Help me solve this bug with my ray tracer

Submitted by 不羁的心 on 2019-12-23 21:50:25

Question


I'm not going to post any code for this question because it would require way too much context, but I shall explain conceptually what I'm doing.

I'm building a simple ray tracer that uses affine transformations. What I mean is that rays are generated in camera coordinates, while every shape is stored as a generic (unit) shape with an associated affine transformation; each ray is multiplied by the inverse of that transformation before the intersection test against the generic shape.
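As a sketch of that inverse-transform step (the function and variable names below are mine, not the asker's code), the key detail with 4x4 homogeneous matrices is that a ray's origin and direction transform differently: only points pick up the translation.

```python
import numpy as np

def transform_ray(origin, direction, inverse_matrix):
    # Map a world-space ray into the object's generic space.
    # Points carry the translation (w = 1); directions do not (w = 0).
    o = inverse_matrix @ np.append(origin, 1.0)
    d = inverse_matrix @ np.append(direction, 0.0)
    return o[:3], d[:3]
```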

So for example, say I wanted a sphere of radius 3 positioned at (10,10,10). I create the generic sphere and give it a transformation matrix representing that scaling and translation.

I create a ray in camera coordinates, multiply it by the inverse of the sphere's transformation matrix, and intersect it with the generic sphere (r = 1 at (0,0,0)). I take the distance t along this generic ray at the intersection point, use it to find the generic normal and the point along the original ray, and save these into a Transformation object (along with t and the actual transformation).
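For reference, a minimal generic-sphere intersection in the same spirit (again my own sketch, not the asker's code):

```python
import numpy as np

def intersect_unit_sphere(o, d):
    # Ray o + t*d against the unit sphere at the origin.
    # Returns the smallest positive t, or None on a miss.
    a = np.dot(d, d)
    b = 2.0 * np.dot(o, d)
    c = np.dot(o, o) - 1.0
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / (2.0 * a)
    return t if t > 0.0 else None
```

One caveat that matters later: the t found here can be reused directly on the original world-space ray only if the object-space direction is left unnormalized after the inverse transform; renormalizing it rescales t.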

When it comes time to figure out the colour of this intersection point, I take the transformation's inverse transpose and multiply it by the generic normal to find the world-space normal. The point of intersection is just the point along the original, non-transformed ray when I use the t value from the intersection of the inverse-transformed ray.
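A sketch of the normal step as described (hypothetical names, assuming the same 4x4 conventions as above). Renormalizing at the end matters whenever the transformation includes a scale, as with the radius-3 sphere:

```python
import numpy as np

def world_normal(generic_normal, object_matrix):
    # Normals transform by the inverse transpose of the object's matrix
    # (w = 0, since a normal is a direction), then must be renormalized.
    inv_T = np.linalg.inv(object_matrix).T
    n = (inv_T @ np.append(generic_normal, 0.0))[:3]
    return n / np.linalg.norm(n)
```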

The problem is that when I do things this way, the transformations have weird effects. The main one is that transformations seem to drag the scene's lights along with them: if I render a sequence of frames and apply a slightly larger rotation to the sphere in each one, the lights in the scene appear to rotate around with it. Here's an example.

I honestly cannot figure out what I'm doing wrong here, and I'm tearing my hair out. I can't think of any good reason whatsoever for this to be happening. Any help would be hugely appreciated.


Answer 1:


DISCLAIMER: I'm not an expert in ray tracing, and I also mixed up "transpose" and "translation" when reading the problem description.

When you calculate the normal at the intersection point, you are in the transformed (object) coordinate space, right? So the normal will be in that coordinate space as well. If you later only translate (move) that vector to the real intersection point, the normal is still in the rotated frame.

Assume you have a generic sphere that is red on positive x and blue on negative x. Say the camera is at (20,0,0) and the unit sphere is only rotated by 180 degrees around the y axis (no translation). Then the world ray with direction (-1,0,0) becomes, after the inverse transform, a ray from (-20,0,0) with direction (1,0,0); it hits the sphere from negative x at (-1,0,0) with t = 19. The object-space normal there is (-1,0,0). If you merely move that normal to the real intersection point, it is still (-1,0,0). So by following that normal you get the correct colour, but also light from the "backside" of the sphere.
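To make that worked example concrete, here is the same setup in numbers (my own illustration following the answer's reasoning):

```python
import numpy as np

# 180-degree rotation about y; orthogonal, so it is its own inverse and
# its inverse transpose equals the matrix itself.
R = np.array([[-1.0, 0.0,  0.0],
              [ 0.0, 1.0,  0.0],
              [ 0.0, 0.0, -1.0]])

n_object = np.array([-1.0, 0.0, 0.0])  # object-space normal at the hit
print(R @ n_object)                    # [1. 0. 0.]: rotated into world space
# Merely moving n_object to the world hit point leaves it at (-1, 0, 0),
# so shading sees a surface facing away from the camera, and the lights
# appear to come from the wrong side.
```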




Answer 2:


You have made the decision to do intersections in object coordinates rather than world coordinates. IMHO that is an error (unless you're doing lots of instancing). Given that choice, though, you should compute the point of intersection in object space as well as the normal in object space. These then need to be converted back to world coordinates using the object's transformation, NOT its inverse. That is how the object itself gets to world space, and how everything in object space gets to world space. Offhand I'm not certain how to transform the t parameter, so I'd start by transforming the intersection point until you get correct results.
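A sketch of that conversion under the same 4x4 conventions as above (illustrative names; note that the point goes back through the forward matrix, while the normal, as the asker already does, goes through the inverse transpose, since the two coincide only when the linear part is a pure rotation):

```python
import numpy as np

def object_hit_to_world(M, hit_obj, normal_obj):
    # The hit point returns to world space via the forward matrix M;
    # the normal via M's inverse transpose, then renormalized.
    p = (M @ np.append(hit_obj, 1.0))[:3]
    n = (np.linalg.inv(M).T @ np.append(normal_obj, 0.0))[:3]
    return p, n / np.linalg.norm(n)
```

On the t question: if the object-space ray keeps the raw inverse-transformed direction (not renormalized), the t found in object space is the same t on the world ray; if the direction was renormalized, t must be divided by the length of the transformed direction. That is likely why transforming the intersection point directly is the safer first step.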



Source: https://stackoverflow.com/questions/5647265/help-me-solve-this-bug-with-my-ray-tracer
