raytracing

Textured spheres without strong distortion

血红的双手 · Submitted on 2019-12-09 12:44:15
Question: I've seen well-textured balls, planets, and other spherical objects in a couple of games, most recently in UFO: Aftermath. If you just splatter a texture onto the sphere using latitude/longitude as the u and v coordinates, you get a lot of ugly texture distortion towards the poles. I can think of one way to implement a spherical map with minimal distortion: mapping in triangles instead of squares. But I don't know any algorithms. How do I produce vertices and texture coordinates for such spheres? Also, I don't see a
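For reference, one common low-distortion alternative (not from the original post) is a "spherified cube": subdivide each cube face into a grid, project the grid points onto the unit sphere, and texture each face with its own map. A minimal C++ sketch for one face, assuming a simple interleaved vertex layout; the struct and function names are illustrative:

    #include <cmath>
    #include <vector>

    struct Vertex { float x, y, z, u, v; };

    // Build one face of a "spherified cube": subdivide the +Z face of a unit cube,
    // project each grid point onto the unit sphere, and keep the face-local (u, v)
    // as texture coordinates. Repeating this for all six faces (each with its own
    // texture) avoids the pinching a latitude/longitude mapping shows at the poles.
    std::vector<Vertex> buildSphereFacePosZ(int n)       // n = grid cells per side
    {
        std::vector<Vertex> verts;
        verts.reserve((n + 1) * (n + 1));
        for (int j = 0; j <= n; ++j) {
            for (int i = 0; i <= n; ++i) {
                float u = float(i) / n;                  // face-local texture coords
                float v = float(j) / n;
                float x = 2.0f * u - 1.0f;               // point on the cube face
                float y = 2.0f * v - 1.0f;
                float z = 1.0f;
                float len = std::sqrt(x * x + y * y + z * z);
                verts.push_back({ x / len, y / len, z / len, u, v });  // project onto the sphere
            }
        }
        return verts;   // triangulate each grid cell as two triangles
    }

The seams between faces line up because neighbouring faces share the same edge points on the cube before projection.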

Is there a really good book about ray tracing? [closed]

倾然丶 夕夏残阳落幕 · Submitted on 2019-12-09 10:02:04
Question: I need to do some research on ray tracing and create my own ray tracer. Are there any good books on the subject? Answer 1: Pharr and

Trouble with Phong Shading

試著忘記壹切 · Submitted on 2019-12-09 06:38:16
Question: I am writing a shader based on the Phong model. I am trying to implement the Phong equation, where n is the normal, l is the direction to the light, v is the direction to the camera, and r is the reflection of the light direction. The equations are described in more detail in the Wikipedia article. Right now I am only testing with directional light sources, so there is no r^2 falloff. The ambient term is added outside the function below and it works well. The function maxDot3 returns 0 if the dot product is
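A minimal C++ sketch of the diffuse and specular terms being described (the Vec3 type, the operator helpers and the coefficient names are assumptions, not the asker's code). One frequent source of artifacts with this equation is letting the specular term contribute when the surface faces away from the light:

    #include <algorithm>
    #include <cmath>

    struct Vec3 { double x, y, z; };

    double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3   operator*(double s, const Vec3& v) { return { s*v.x, s*v.y, s*v.z }; }
    Vec3   operator-(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }

    // Clamped dot product, in the spirit of the maxDot3 the question mentions.
    double maxDot3(const Vec3& a, const Vec3& b) { return std::max(0.0, dot(a, b)); }

    // Diffuse + specular part of the Phong model for one directional light.
    // n, l and v must be normalized; r is the reflection of l about n.
    double phongDiffuseSpecular(const Vec3& n, const Vec3& l, const Vec3& v,
                                double kd, double ks, double shininess)
    {
        Vec3 r = 2.0 * dot(n, l) * n - l;                  // reflect l about n
        double diffuse  = kd * maxDot3(n, l);
        double specular = ks * std::pow(maxDot3(r, v), shininess);
        // Only add the specular lobe when the point actually faces the light,
        // otherwise back-facing points pick up highlights.
        return (dot(n, l) > 0.0) ? diffuse + specular : 0.0;
    }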

Sampling a hemisphere using an arbitrary distribution

情到浓时终转凉″ · Submitted on 2019-12-08 11:45:09
Question: I am writing a ray tracer and I wish to fire rays from a point p into the hemisphere above that point according to some distribution. 1) I have derived a method to uniformly sample within a solid angle (defined by theta) above p:

phi = 2*pi*X_1
alpha = arccos(1 - (1 - cos(theta))*X_2)
x = sin(alpha)*cos(phi)
y = sin(alpha)*sin(phi)
z = -cos(alpha)

where X_1 and X_2 are uniform random numbers. That works and I'm pretty happy with it. But my question is: what happens if I do not want a uniform distribution
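The usual answer is inverse-transform sampling: choose a pdf over the hemisphere, integrate it to a CDF, and invert the CDF to map uniform random numbers onto directions. A minimal C++ sketch for the common cosine-weighted case, oriented around +z rather than the -z convention used above (names and conventions are illustrative):

    #include <cmath>
    #include <random>

    struct Vec3 { double x, y, z; };

    const double kPi = 3.14159265358979323846;

    // Cosine-weighted hemisphere sample built by inverse-transform sampling:
    // with the pdf proportional to cos(alpha),
    //   phi        = 2*pi*X_1
    //   sin(alpha) = sqrt(X_2),  cos(alpha) = sqrt(1 - X_2)
    // The same recipe works for other analytically invertible pdfs, e.g. a
    // Phong lobe proportional to cos(alpha)^n gives cos(alpha) = X_2^(1/(n+1)).
    Vec3 sampleCosineHemisphere(std::mt19937& rng)
    {
        std::uniform_real_distribution<double> uni(0.0, 1.0);
        double x1 = uni(rng), x2 = uni(rng);

        double phi      = 2.0 * kPi * x1;
        double sinAlpha = std::sqrt(x2);
        double cosAlpha = std::sqrt(1.0 - x2);

        return { sinAlpha * std::cos(phi),
                 sinAlpha * std::sin(phi),
                 cosAlpha };                      // hemisphere oriented along +z
    }

If the pdf cannot be inverted in closed form, rejection sampling against the uniform sampler above is a workable fallback.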

Refraction Vector (Ray tracing)

五迷三道 · Submitted on 2019-12-08 06:02:06
Question: I am doing ray tracing, and I refract the ray using a relation I got from a PDF called "Reflections and Refractions in Ray Tracing". But I have seen it written differently in another PDF. Could you please explain why? And how can I check that the refraction vector I calculated is correct? Thanks. Answer 1: Assuming that your vectors are actually xyz triplets: float3 reflect( float3 i, float3 n ) { return i - 2.0 * n * dot(n,i); } Source: https://stackoverflow.com
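The answer above gives the reflection formula; for the refraction direction itself, here is a minimal C++ sketch of Snell's law in vector form, under the conventions that i is the unit incident direction pointing toward the surface, n is the unit normal pointing against i, and eta = n1/n2 (all names here are illustrative):

    #include <cmath>

    struct Vec3 { double x, y, z; };

    double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    Vec3   operator*(double s, const Vec3& v) { return { s*v.x, s*v.y, s*v.z }; }
    Vec3   operator+(const Vec3& a, const Vec3& b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }

    // Returns false on total internal reflection, otherwise writes the
    // refracted direction into t.
    bool refract(const Vec3& i, const Vec3& n, double eta, Vec3& t)
    {
        double cosI  = -dot(n, i);                     // cosine of the incident angle
        double sin2T = eta * eta * (1.0 - cosI * cosI);
        if (sin2T > 1.0)
            return false;                              // total internal reflection
        double cosT = std::sqrt(1.0 - sin2T);
        t = eta * i + (eta * cosI - cosT) * n;         // Snell's law in vector form
        return true;
    }

Two cheap sanity checks for a computed refraction vector: it should have unit length, and the sine of its angle with the normal should equal eta times the sine of the incident angle. Differences between references usually come down to the direction conventions for i and n and whether eta means n1/n2 or n2/n1.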

What are we trying to achieve by creating right and down vectors in this ray tracing depiction?

风格不统一 · Submitted on 2019-12-08 03:24:05
Question: Often we see the following picture when talking about ray tracing. Here, I see the Z axis as the direction the camera points when looking straight ahead, and the XY grid as the grid the camera is seeing. From the camera's point of view, we see the usual Cartesian grid that my classmates and I are used to. Recently I was examining code that simulates this. One thing that is not obvious to me from this picture is the need for the "right" and "down" vectors. Obviously we have look
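In short, "right" and "down" are the camera's own x and y axes expressed in world space; together with the forward (look) direction they let pixel offsets on the image plane be turned into world-space ray directions. A minimal C++ sketch of how they are typically built and used; the handedness and the field-of-view convention are assumptions for illustration:

    #include <cmath>

    struct Vec3 { double x, y, z; };

    Vec3 operator-(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
    Vec3 operator+(const Vec3& a, const Vec3& b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
    Vec3 operator*(double s, const Vec3& v)      { return { s*v.x, s*v.y, s*v.z }; }

    Vec3 cross(const Vec3& a, const Vec3& b) {
        return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    }
    Vec3 normalize(const Vec3& v) {
        double len = std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z);
        return { v.x/len, v.y/len, v.z/len };
    }

    // Primary ray direction for pixel (px, py) of a width x height image.
    Vec3 primaryRayDir(const Vec3& eye, const Vec3& lookAt, const Vec3& worldUp,
                       int px, int py, int width, int height, double fovY)
    {
        Vec3 forward = normalize(lookAt - eye);             // the "look" vector
        Vec3 right   = normalize(cross(forward, worldUp));  // camera x axis
        Vec3 down    = cross(forward, right);               // camera y axis (image y grows down)

        double halfH = std::tan(0.5 * fovY);
        double halfW = halfH * double(width) / double(height);

        // Pixel centre mapped to [-1, 1] on the image plane.
        double sx = ((px + 0.5) / width  * 2.0 - 1.0) * halfW;
        double sy = ((py + 0.5) / height * 2.0 - 1.0) * halfH;

        // Step away from the view axis using right and down.
        return normalize(forward + sx * right + sy * down);
    }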

Is this plane-ray intersection code correct?

て烟熏妆下的殇ゞ · Submitted on 2019-12-07 18:18:40
Question: My Plane class has two fields:

    public Vector3 Norm;   // normal vector
    public double Offset;  // signed distance to the origin

This is the code I use for intersection, and I don't know if it is correct. I double-checked my equations and everything, but I would like to get feedback from people more experienced with this.

    public override Intersection Intersect(Ray ray)
    {
        // Create Intersection.
        Intersection result = new Intersection();
        // Find t.
        double t = - (Vector3.Dot(Norm,ray.Start) + Offset) /
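For comparison, a minimal C++ sketch of the same formula with the two checks that are easy to forget: a near-zero denominator when the ray is parallel to the plane, and a negative t when the plane lies behind the ray origin. The sign convention assumed here is dot(Norm, P) + Offset == 0, which matches the t expression quoted above; if Offset is instead defined so that dot(Norm, P) == Offset, the sign in the numerator flips, which is a common source of bugs:

    #include <cmath>
    #include <optional>

    struct Vec3 { double x, y, z; };
    double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    struct Ray   { Vec3 start, dir; };           // dir assumed normalized
    struct Plane { Vec3 norm; double offset; };  // plane: dot(norm, p) + offset == 0

    // Returns the ray parameter t of the hit, or nothing if the ray is parallel
    // to the plane or the hit lies behind the ray origin.
    std::optional<double> intersect(const Plane& plane, const Ray& ray)
    {
        double denom = dot(plane.norm, ray.dir);
        if (std::fabs(denom) < 1e-9)
            return std::nullopt;                 // parallel: no intersection
        double t = -(dot(plane.norm, ray.start) + plane.offset) / denom;
        if (t < 0.0)
            return std::nullopt;                 // plane is behind the ray
        return t;                                // hit point = start + t * dir
    }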

Unexpected result when computing shadows in a ray tracer

穿精又带淫゛_ · Submitted on 2019-12-07 13:23:42
Question: I'm working on a ray tracer in C++, and so far I've been able to compute a lighting model based on diffuse, specular and ambient components. My problem appeared when I tried to add shadows to my scenes: the scenes get really messed up. My code is organized as follows: I have a base class "SceneObject", which has a virtual method intersect() that takes a ray (defined by an origin and a direction) and returns a boolean, as well as output arguments for the calculated t value, a hit point and the
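Broken shadows like this very often come from the shadow ray immediately re-hitting the surface it starts on ("shadow acne"), or from counting occluders that lie beyond the light. A minimal C++ sketch of a shadow test that handles both; intersectScene is passed in as a stand-in for the scene-wide query over the SceneObject list the question describes:

    #include <cmath>
    #include <functional>

    struct Vec3 { double x, y, z; };
    Vec3 operator-(const Vec3& a, const Vec3& b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
    Vec3 operator+(const Vec3& a, const Vec3& b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
    Vec3 operator*(double s, const Vec3& v)      { return { s*v.x, s*v.y, s*v.z }; }
    double length(const Vec3& v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

    struct Ray { Vec3 origin, dir; };

    // Is the point `hit` with surface normal `n` shadowed with respect to a
    // point light at `lightPos`?  `intersectScene` returns true and fills in t
    // when any object intersects the given ray.
    bool inShadow(const Vec3& hit, const Vec3& n, const Vec3& lightPos,
                  const std::function<bool(const Ray&, double&)>& intersectScene)
    {
        Vec3 toLight = lightPos - hit;
        double distToLight = length(toLight);
        Vec3 dir = (1.0 / distToLight) * toLight;

        // Offset the origin along the normal so the shadow ray does not
        // immediately re-intersect the surface it starts on.
        const double eps = 1e-4;
        Ray shadowRay{ hit + eps * n, dir };

        double t = 0.0;
        // Only occluders strictly between the point and the light count.
        return intersectScene(shadowRay, t) && t < distToLight - eps;
    }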

What type of geometry definition file format is best for ray tracing when it also needs to specify the type of material?

瘦欲@ · Submitted on 2019-12-07 03:21:07
Question: I wanted to use the .obj format, but I noticed that it doesn't have a representation for the type of material, i.e. opaque, transparent or reflective. Is there a common file format that includes that information as well, or should I just take the known .obj format and extend it so that it includes that info? Answer 1: You might want to check .mtl files. I haven't used them myself yet, though ;) http://people.sc.fsu.edu/~jburkardt/data/mtl/mtl.html and http://people.sc.fsu.edu/~jburkardt/data/obj/obj.html
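For a rough idea of what such a material entry might carry (the material name and the specific values below are made up for illustration, not taken from the links above), an .mtl file describes each material that the .obj references via usemtl, including the properties a ray tracer cares about:

    # hypothetical material illustrating the fields relevant to ray tracing
    newmtl glass
    Ka 0.0 0.0 0.0
    Kd 0.1 0.1 0.1
    Ks 0.9 0.9 0.9
    Ns 200
    # Ni is the index of refraction, d the dissolve (1.0 = fully opaque)
    Ni 1.5
    d 0.2
    # illum selects the illumination model; 7 enables refraction and
    # ray-traced reflection
    illum 7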

WebGL: alternative to writing to gl_FragDepth

℡╲_俬逩灬. · Submitted on 2019-12-07 02:21:54
Question: In WebGL, is it possible to write to the fragment's depth value, or to control the fragment's depth value in some other way? As far as I could find, gl_FragDepth is not present in WebGL 1.x, but I am wondering if there is any other way (extensions, browser-specific support, etc.) to do it. What I want to achieve is to have a ray-traced object play along with other elements drawn using the usual model, view and projection matrices. Answer 1: There is the extension EXT_frag_depth. Because it's an extension it might