raytracing

Camera Pitch/Yaw to Direction Vector

馋奶兔 submitted on 2019-12-05 02:13:24
What I'm trying to do is cast a ray from my camera. I know the camera's x, y, and z coordinates, as well as its pitch and yaw, and I need to calculate its direction vector so I can pass it to my ray-tracing algorithm. The camera's up vector is (0, 1, 0), and "pitch", from the camera's perspective, means looking up and down. (I would prefer not to use matrices, but I will if I have to.)

Answer: Assuming that your coordinate system is set up such that (pitch, yaw) -> (x, y, z) satisfies:

(0, 0) -> (1, 0, 0)
(pi/2, 0) -> (0, 1, 0)
(0, -pi/2) -> (0, 0, 1)

this will calculate (x, y, z): xzLen
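A minimal sketch of the conversion the truncated answer begins to describe, under the coordinate convention listed above (the `xz_len` name mirrors the answer's `xzLen`; everything else is assumed):

```python
import math

def pitch_yaw_to_direction(pitch, yaw):
    """Convert camera pitch/yaw (radians) to a unit direction vector,
    using the convention (0, 0) -> (1, 0, 0), (pi/2, 0) -> (0, 1, 0),
    (0, -pi/2) -> (0, 0, 1)."""
    xz_len = math.cos(pitch)       # length of the direction's projection onto the XZ plane
    x = xz_len * math.cos(yaw)
    y = math.sin(pitch)
    z = -xz_len * math.sin(yaw)    # sign chosen so yaw = -pi/2 maps to +Z
    return (x, y, z)
```

The result is already unit length, so it can be passed straight to the ray tracer as the ray direction.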

3D coordinate of 2D point given camera and view plane

丶灬走出姿态 submitted on 2019-12-04 21:33:27
Question: I wish to generate rays from the camera through the viewing plane. To do this, I need my camera position ("eye"); the up, right, and towards vectors (where towards is the vector from the camera toward the object the camera is looking at); and P, the point on the viewing plane. Once I have these, the generated ray is: ray = camera_eye + t*(P-camera_eye); where t is the distance along the ray (assume t = 1 for now). My question is, how do I obtain the 3D
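A sketch of one common way to get P for a pixel (i, j): place the plane centre at `plane_dist` along the towards vector, then offset along the right and up vectors. The `plane_dist`, `plane_w`, and `plane_h` parameters are my assumptions, not from the question:

```python
def viewplane_point(eye, towards, up, right, i, j, width, height,
                    plane_dist=1.0, plane_w=2.0, plane_h=2.0):
    """3D position of pixel (i, j) on the viewing plane.

    Assumes (0, 0) is the upper-left pixel, the towards/up/right vectors are
    unit length and mutually orthogonal, and rays pass through pixel centres.
    """
    center = [eye[k] + plane_dist * towards[k] for k in range(3)]
    # fractional offsets from the plane centre, scaled by the plane extents
    u = ((i + 0.5) / width - 0.5) * plane_w
    v = (0.5 - (j + 0.5) / height) * plane_h   # flip so j grows downward
    return [center[k] + u * right[k] + v * up[k] for k in range(3)]
```

With P in hand, the ray from the question is simply `eye + t*(P - eye)`.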

Updating OpenGL ES Touch Detection (Ray Tracing) for iPad Retina?

旧街凉风 submitted on 2019-12-04 20:51:28
I have the code below, which I am using for ray tracing. It works successfully on non-Retina iPads, but does not function correctly on Retina iPads: the touch is detected, but the converted point is off to the left of and below where it should be. Can anyone suggest how I can update the code to accommodate the Retina screen? - (void)handleTap: (UITapGestureRecognizer *)recognizer { CGPoint tapLoc = [recognizer locationInView:self.view]; bool testResult; GLint viewport[4]; glGetIntegerv(GL_VIEWPORT, viewport); float uiKitOffset = 113; //Need to factor in the height of the nav bar + the

DirectX 11 compute shader for ray/mesh intersect

风格不统一 submitted on 2019-12-04 16:36:42
I recently converted a DirectX 9 application that was using D3DXIntersect to find ray/mesh intersections to DirectX 11. Since D3DXIntersect is not available in DX11, I wrote my own code to find the intersection, which just loops over all the triangles in the mesh and tests them, keeping track of the closest hit to the origin. This is done on the CPU side and works fine for picking via the GUI, but I have another part of the application that creates a new mesh from an existing one based on several different viewpoints, and I need to check line of sight for every triangle in the mesh many times.
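The per-triangle test the questioner describes is typically the Möller–Trumbore algorithm; here is a sketch of it in Python for clarity (the actual application would run this in C++ or port it to an HLSL compute shader):

```python
def ray_triangle_intersect(orig, direction, v0, v1, v2, eps=1e-9):
    """Möller–Trumbore ray/triangle intersection.

    Returns the distance t along the ray to the hit, or None on a miss."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def cross(a, b): return [a[1] * b[2] - a[2] * b[1],
                             a[2] * b[0] - a[0] * b[2],
                             a[0] * b[1] - a[1] * b[0]]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:            # ray is parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv_det
    return t if t > eps else None  # reject hits behind the ray origin
```

Looping this over every triangle while keeping the smallest t reproduces the closest-hit search; for many line-of-sight queries, a BVH on the CPU or a one-thread-per-triangle compute shader is the usual speedup.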

Detect&find intersection ray vs. cubic bezier triangle

╄→гoц情女王★ submitted on 2019-12-04 14:53:30
While writing a model editor, besides enabling ray tracing, I can think of a couple of operations where I'd like to find a very good approximation of the intersection point between a ray and a triangular Bézier patch. How do I do this? I know a couple of ways, but there are likely better ones. Exact use cases: I might want to use one Bézier triangle patch as a reference surface for drawing detailed shapes with the mouse. I might also want to pinpoint a splitting point on such a patch. If there's C source code for it, I'd like to see that too; perhaps even use it instead of rolling my own code. I'd
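One building block for any such intersector is evaluating the patch itself. A cubic Bézier triangle has ten control points indexed by (i, j, k) with i + j + k = 3, and can be evaluated by repeated de Casteljau steps; a common intersection strategy (my assumption, not from the question) is to tessellate via this evaluator and run ray/triangle tests, refining with Newton iteration if needed:

```python
def eval_cubic_bezier_triangle(ctrl, u, v, w):
    """Evaluate a cubic Bézier triangle at barycentric (u, v, w), u + v + w = 1.

    `ctrl` maps each index triple (i, j, k), i + j + k = 3, to a 3D point.
    Each de Casteljau pass lowers the degree by one until a single point remains."""
    pts = dict(ctrl)
    for degree in (3, 2, 1):
        nxt = {}
        for i in range(degree):
            for j in range(degree - i):
                k = degree - 1 - i - j
                nxt[(i, j, k)] = [
                    u * pts[(i + 1, j, k)][c]
                    + v * pts[(i, j + 1, k)][c]
                    + w * pts[(i, j, k + 1)][c]
                    for c in range(3)
                ]
        pts = nxt
    return pts[(0, 0, 0)]
```

Because de Casteljau is numerically stable, the same recursion also supports the subdivision step needed for a bisection-style intersection search.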

How to quickly find if a point is obscured in a complex scene?

不羁岁月 submitted on 2019-12-03 16:57:54
Question: I have a complex 3D scene on top of which I need to display HTML elements, positioned based on a 3D coordinate. (I'm simply overlaying a div tag and positioning it with CSS.) However, I also need to partially hide it (e.g., by making it transparent) when the 3D coordinate is obscured by a model (or, phrased another way, when it's not visible to the camera). These models may have many hundreds of thousands of faces, and I need a way to find out if it's obscured that's fast enough to be run many
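One fast approach (an assumption on my part, not stated in the excerpt): instead of raycasting against every face per frame, render the scene once and compare the point's projected depth against the depth buffer at its pixel. A minimal sketch:

```python
def is_obscured(depth_buffer, px, py, point_depth, bias=1e-3):
    """True if the tracked 3D point is hidden behind rendered geometry.

    depth_buffer: 2D array of camera-space depths (smaller = closer), indexed
    [row][column]; (px, py) is the point's pixel; point_depth is its own
    camera-space depth. The bias avoids self-occlusion from depth precision.
    """
    return depth_buffer[py][px] < point_depth - bias
```

This is O(1) per query regardless of face count; the cost is one depth readback (or occlusion query) per frame, which GPU APIs can do asynchronously.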

Textured spheres without strong distortion

会有一股神秘感。 submitted on 2019-12-03 14:17:11
I've seen well-textured balls, planets, and other spherical objects in a couple of games, most recently in UFO: Aftermath. If you just splat a texture on using latitude/longitude as the u and v coordinates, you get lots of ugly texture distortion at the poles. I can think of one way to implement a spherical map with minimal distortion: mapping in triangles instead of squares. But I don't know any algorithms. How do you produce vertices and texture coordinates for such spheres? Also, I don't see a way to generate a complete spherical map from a simple flat square map. Is there some intuitive way on
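One standard triangle-based construction (an assumed approach, not from the excerpt): subdivide an octahedron and push every vertex onto the unit sphere. Each of the eight faces can then carry its own triangular texture region, avoiding the polar pinching of a latitude/longitude mapping:

```python
import math

def make_octasphere(subdivisions=2):
    """Return a list of triangles (each a 3-tuple of unit-length vertices)
    approximating the unit sphere, built by recursively subdividing an
    octahedron and normalizing edge midpoints onto the sphere."""
    verts = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    faces = [(0, 2, 4), (2, 1, 4), (1, 3, 4), (3, 0, 4),
             (2, 0, 5), (1, 2, 5), (3, 1, 5), (0, 3, 5)]

    def normalize(p):
        length = math.sqrt(sum(c * c for c in p))
        return tuple(c / length for c in p)

    tris = [tuple(verts[i] for i in f) for f in faces]
    for _ in range(subdivisions):
        nxt = []
        for a, b, c in tris:
            # midpoints of each edge, projected back onto the sphere
            ab = normalize([(a[i] + b[i]) / 2 for i in range(3)])
            bc = normalize([(b[i] + c[i]) / 2 for i in range(3)])
            ca = normalize([(c[i] + a[i]) / 2 for i in range(3)])
            nxt += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
        tris = nxt
    return tris
```

Starting from an icosahedron instead gives even more uniform triangles; the subdivision logic is identical.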

Is there a really good book about ray tracing? [closed]

回眸只為那壹抹淺笑 submitted on 2019-12-03 13:46:24
Question: I need to do some research on ray tracing and create my own ray tracer. Are there any good books on the subject?

Answer:
- Pharr and Humphreys, "Physically Based Rendering", Morgan Kaufmann, 2004
- Wann Jensen, "Realistic Image Synthesis Using Photon Mapping", AK Peters, 2001
- Dutré, Bala, and Bekaert, "Advanced Global Illumination", AK Peters, 2006
- Glassner et al., "An Introduction to Ray Tracing"

I can recommend Physically Based Rendering by Pharr and Humphreys, which includes a full renderer written in C++ using literate programming methods.

MrTelly: Not a book, but the place where I learnt all about ray

Trouble with Phong Shading

痞子三分冷 submitted on 2019-12-03 08:49:29
I am writing a shader according to the Phong model. I am trying to implement the per-light equation I = k_d (l · n) i_d + k_s (r · v)^α i_s, where n is the normal, l is the direction to the light, v is the direction to the camera, and r is the reflection of the light direction about the normal. The equations are described in more detail in the Wikipedia article. As of right now I am only testing with directional light sources, so there is no r^2 falloff. The ambient term is added outside the function below, and it works well. The function maxDot3 returns 0 if the dot product is negative, as is usually done in the Phong model. Here's my code implementing the above equation: #include
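The per-light Phong computation can be sketched as follows (a Python sketch, not the questioner's shader; vector names follow the question, parameter names are assumed, and `max_dot3` mirrors the clamping helper mentioned above):

```python
def phong(n, l, v, kd, ks, alpha, light_color):
    """Diffuse + specular terms of the Phong model for one directional light.

    All direction vectors must be normalized: n is the surface normal,
    l points toward the light, v points toward the camera. kd/ks are RGB
    reflectances, alpha is the shininess exponent."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def max_dot3(a, b): return max(dot(a, b), 0.0)  # clamp negative dot products

    n_dot_l = max_dot3(n, l)
    # r = 2(n·l)n - l: the light direction reflected about the normal
    r = [2 * dot(n, l) * n[i] - l[i] for i in range(3)]
    spec = max_dot3(r, v) ** alpha
    return [(kd[i] * n_dot_l + ks[i] * spec) * light_color[i] for i in range(3)]
```

A frequent bug in this computation is forgetting to clamp the specular dot product before exponentiation, which lets lights "shine through" surfaces facing away from them; the clamp above handles that.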