Question
I need to improve the precision of a function in one of my atmospheric scattering GLSL fragment shaders; it computes the intersection between a single ray and an axis-aligned ellipsoid. This is the core function of my atmospheric scattering shader. The original shader used floats and was fine for normal rendering, but after adding zoom I found that the precision gets lost even at relatively small distances. With floats the usable distance for Earth was only up to 0.005 AU (astronomical units). So I ported the critical function to double, which helps: the usable distance is now around 1.0 AU (with small artifacts).

This is the double version of the function inside the fragment shader (the old-style source uses deprecated features!):
#extension GL_ARB_gpu_shader_fp64 : enable
double abs(double x) { if (x<0.0) x=-x; return x; }
// compute length of ray(p0,dp) to intersection with ellipsoid((0,0,0),r) -> view_depth_l0,1
// where r.x = rx^-2, r.y = ry^-2 and r.z = rz^-2
float view_depth_l0=-1.0,view_depth_l1=-1.0;
bool _view_depth(vec3 _p0,vec3 _dp,vec3 _r)
{
    double a,b,c,d,l0,l1;
    dvec3 p0,dp,r;
    p0=dvec3(_p0);
    dp=dvec3(_dp);
    r =dvec3(_r );
    view_depth_l0=-1.0;
    view_depth_l1=-1.0;
    a=(dp.x*dp.x*r.x)
     +(dp.y*dp.y*r.y)
     +(dp.z*dp.z*r.z); a*=2.0;
    b=(p0.x*dp.x*r.x)
     +(p0.y*dp.y*r.y)
     +(p0.z*dp.z*r.z); b*=2.0;
    c=(p0.x*p0.x*r.x)
     +(p0.y*p0.y*r.y)
     +(p0.z*p0.z*r.z)-1.0;
    d=((b*b)-(2.0*a*c));    // a already holds 2*A, so this is B*B - 4*A*C
    if (d<0.0) return false;
    d=sqrt(d);
    l0=(-b+d)/a;
    l1=(-b-d)/a;
    if (abs(l0)>abs(l1)) { a=l0; l0=l1; l1=a; }  // sort so |l0| <= |l1|
    if (l0<0.0)          { a=l0; l0=l1; l1=a; }  // prefer a non-negative first hit
    if (l0<0.0) return false;                    // both hits behind the ray
    view_depth_l0=float(l0);
    view_depth_l1=float(l1);
    return true;
}
- input: the ray and the reciprocal squared radii (r^-2) of the ellipsoid
- output: the distances from p0 to the intersections
- the precision of the input and output variables is float (that is sufficient)
This is how it looks after porting to double. So the question is:

Q1. How can I improve the accuracy of this function?

The target accuracy for view_depth_l0, view_depth_l1 is +/- 20 m at a distance |p0| = 100 AU. That would be ideal; right now it seems to be about +/- 5 km at 10 AU, which is bad. Even a 10x more accurate computation would be a huge step forward. Any ideas?
[edit1] l0,l1 range

I was wrong: the float conversion of view_depth_l0, view_depth_l1 is the cause of the artifacts. After shifting them to relative distances the accuracy improved a lot. I just added this:

// relative shift to preserve accuracy
const double m0=1000000000.0; // >= max view depth !!!
if (l0>m0){ a=floor(l0/m0)*m0; a-=m0; if (l1>l0) l1-=a; l0-=a; }
before this:
view_depth_l0=float(l0);
view_depth_l1=float(l1);
return true;
}
The rest of the shader handles l0,l1 as relative values anyway, so now the result is this: for distances up to 10.0 AU it is fine (artifacts are noticeable only at very high zoom). The remaining artifacts are most likely caused elsewhere, so I will have to investigate further when I find the time and motivation.
[edit2] shifting p0 along dp closer towards (0,0,0)

The implementation needs relatively expensive normalize and length calls. Without the range shift (edit1) the result was slightly better than the raw function, but the improvement was not big; with the range shift (edit1) the result is the same as before, so this is not the way. My conclusion is that the remaining artifacts are not caused by the view-depth function itself. I will try to port the whole shader to #version 400 + fp64 to check whether the input data is being rounded to float too aggressively.
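For reference, this is what the edit2 idea looks like in isolation (a hypothetical CPU-side C sketch, assuming dp is already normalized; this is the variant the post reports as not helping much): move p0 along dp to the point closest to the ellipsoid centre, solve the quadratic there where |p0| is small, and add the travelled distance back afterwards.

```c
/* Shift the ray origin along the unit direction dp to the point nearest
   the origin (the ellipsoid centre).  The quadratic is then solved with a
   small |p0|, and the shift is added back to the resulting distances. */
static double dot3(const double a[3], const double b[3])
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

static void shift_origin(double p0[3], const double dp[3], double *lshift)
{
    double t = -dot3(p0, dp);               /* distance to the closest point */
    for (int i = 0; i < 3; ++i)
        p0[i] += t * dp[i];                 /* p0 is now nearest the centre */
    *lshift = t;                            /* l_true = l_solved + lshift */
}
```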
[edit3] actual source code
#extension GL_ARB_gpu_shader_fp64 : enable
double abs(double x) { if (x<0.0) x=-x; return x; }
// compute length of ray(p0,dp) to intersection with ellipsoid((0,0,0),r) -> view_depth_l0,1
// where r.x = rx^-2, r.y = ry^-2 and r.z = rz^-2
float view_depth_ll= 0.0,  // shift to boost accuracy
      view_depth_l0=-1.0,  // view_depth_ll+view_depth_l0 first hit
      view_depth_l1=-1.0;  // view_depth_ll+view_depth_l1 second hit
const double view_depth_max=100000000.0; // > max view depth
bool _view_depth(vec3 _p0,vec3 _dp,vec3 _r)
{
    dvec3 p0,dp,r;
    double a,b,c,d,l0,l1;
    view_depth_ll= 0.0;
    view_depth_l0=-1.0;
    view_depth_l1=-1.0;
    // conversion to double
    p0=dvec3(_p0);
    dp=dvec3(_dp);
    r =dvec3(_r );
    // quadratic equation a.l.l + b.l + c = 0; l0,l1 = ?
    a=(dp.x*dp.x*r.x)
     +(dp.y*dp.y*r.y)
     +(dp.z*dp.z*r.z);
    b=(p0.x*dp.x*r.x)
     +(p0.y*dp.y*r.y)
     +(p0.z*dp.z*r.z); b*=2.0;
    c=(p0.x*p0.x*r.x)
     +(p0.y*p0.y*r.y)
     +(p0.z*p0.z*r.z)-1.0;
    // discriminant d = sqrt(b.b - 4.a.c)
    d=((b*b)-(4.0*a*c));
    if (d<0.0) return false;
    d=sqrt(d);
    // standard solution l0,l1 = (-b +/- d)/2.a
    a*=2.0;
    l0=(-b+d)/a;
    l1=(-b-d)/a;
    // alternative solution q=-0.5*(b+sign(b).d); l0=q/a; l1=c/q; (should be more accurate sometimes)
    // if (b<0.0) d=-d; d=-0.5*(b+d);
    // l0=d/a;
    // l1=c/d;
    // sort l0,l1 asc
    if (abs(l0)>abs(l1)) { a=l0; l0=l1; l1=a; }
    if (l0<0.0)          { a=l0; l0=l1; l1=a; }
    if (l0<0.0) return false;
    // relative shift to preserve accuracy after conversion back to float
    if (l0>view_depth_max)
    {
        a=floor(l0/view_depth_max)*view_depth_max;
        a-=view_depth_max;
        view_depth_ll=float(a);
        if (l1>l0) l1-=a;
        l0-=a;
    }
    // conversion back to float
    view_depth_l0=float(l0);
    view_depth_l1=float(l1);
    return true;
}
Porting the rest of the shader to double has no effect. The only thing that could still improve this is double input data (the input is double, but GL converts it to float), but my current GLSL hardware does not allow 64-bit interpolators.

Q2. Is there a way to pass double interpolators from the vertex to the fragment shader? Something like varying dvec4 pixel_pos; in old-style GLSL, or out smooth dvec4 pixel_pos; in the core profile.
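One workaround worth noting when 64-bit interpolators are unsupported is the float-float (double-single) trick: split each double into a high float plus a low residual float on the vertex side, interpolate both as ordinary floats, and re-add them in double in the fragment shader. A minimal CPU-side C sketch of the split/join (hypothetical helper names; this only illustrates the representation, not the GL plumbing):

```c
/* Emulate sending a double through 32-bit channels: the high float is the
   value rounded to float, the low float is the residual that rounding
   threw away.  Their double-precision sum recovers the original value. */
static void split_double(double v, float *hi, float *lo)
{
    *hi = (float)v;                 /* nearest float: loses low bits */
    *lo = (float)(v - (double)*hi); /* residual fits easily in a float */
}

static double join_double(float hi, float lo)
{
    return (double)hi + (double)lo; /* exact for a freshly split pair */
}
```

Caveat: after perspective-correct interpolation the hi/lo pair is no longer an exact split, so the reconstructed value only approximates the interpolated double; whether that is accurate enough depends on the triangle size.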
Source: https://stackoverflow.com/questions/25470493/ray-and-ellipsoid-intersection-accuracy-improvement