Shader Time Uniform - clock_gettime being truncated

Submitted by 时光怂恿深爱的人放手 on 2019-12-13 16:13:09

Question


Take this function getting time as a double using clock_gettime:

#include <time.h>

// return current time in milliseconds
static double time_get_ms(void)
{
    struct timespec res;

#ifdef ANDROID
    clock_gettime(CLOCK_REALTIME_HR, &res);
#else
    clock_gettime(CLOCK_REALTIME, &res);
#endif
    /* seconds scaled up to ms, nanoseconds scaled down to ms */
    return 1000.0 * res.tv_sec + res.tv_nsec / 1e6;
}

Sending it to a shader requires converting it to a float. The value exceeds the precision of the float's mantissa and is truncated on the way to the shader.

Example:

As a double = 1330579093642.441895

As a float = 1330579111936.000000

The float value gets stuck at a single number for a long period of time due to the truncation.

It also seems that even the seconds value in res.tv_sec is too large for a float; it too is truncated on the way to the GPU.
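A few lines of C reproduce the effect (using the example value above; the exact float output may vary slightly with the rounding mode):

#include <stdio.h>

int main(void)
{
    double ms = 1330579093642.441895; /* milliseconds since the epoch */
    float  f  = (float)ms;            /* what reaches a float uniform */

    /* At ~2^40 ms, a 24-bit mantissa leaves a step size of roughly
       2^17 ms (~2 minutes), so the float sits on one value for ages. */
    printf("as double = %f\n", ms);
    printf("as float  = %f\n", (double)f);
    return 0;
}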

I'm trying to measure the time since application launch, and I run into the same problem rather quickly.

So what is the best way to get a running time value into a shader? Something cross-platform in the Linux world (so iOS, Android, Linux).


Answer 1:


You have run out of precision in a 32-bit float's mantissa. The shader will be no better able to deal with this lack of precision, because unless you're using GL 4.x with double-precision support, shaders can only handle 32-bit floats too.

If you are running an application so long that milliseconds overflow floating-point precision (which would require ~10^7 milliseconds, or about 2.7 hours or so), then you have to find a way for it to gracefully handle this situation. Exactly how you do that depends on exactly what your shader is using this time value for.

Most shaders don't need an actual time in milliseconds. The vast majority of shader processes that need a time (or something like a time) are cyclical, in which case you can simply pass them a value saying how far they are through a particular cycle. This value, on the range [0, 1), is 0 at the beginning of the cycle, 0.5 in the middle, and approaches 1 at the end.
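For instance, a sketch of the cyclical approach in C (the period length and the uniform location name are assumptions for illustration):

#include <math.h>

/* Map an absolute millisecond count onto a cycle phase in [0, 1). */
static float cycle_phase(double time_ms, double period_ms)
{
    return (float)(fmod(time_ms, period_ms) / period_ms);
}

/* e.g. a 4-second cycle:
   glUniform1f(u_phase_location, cycle_phase(time_get_ms(), 4000.0)); */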

If your shader process cannot be parameterized this way, if it really does need an absolute time, then your next best bet is to pass two floating-point parameters. Basically, you need to keep your overflow bits. This will complicate all of your math involving the time on the GPU, since you have to use two values and know how to graft them together to do time updates. Again, how you do this depends on what you're doing in your shader.
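A minimal sketch of such a two-float split, assuming a 65536 ms block size (an arbitrary choice) and a time measured from application launch so the high part itself stays small:

#include <math.h>

/* Split an absolute millisecond count into a coarse part that changes
   every ~65.5 s and a fine part that keeps full float precision.
   The shader can rebuild (hi * 65536.0 + lo) or, often better, work
   with lo alone. */
static void time_split(double time_ms, float *hi, float *lo)
{
    double h = floor(time_ms / 65536.0);
    *hi = (float)h;
    *lo = (float)(time_ms - h * 65536.0); /* in [0, 65536) */
}

/* glUniform2f(u_time_location, hi, lo); */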

Alternatively, you could send your time as a 32-bit unsigned integer, if your hardware supports GL 3.x+. This gives you a few more bits of precision, so it holds off the problem for longer. If you send your time as tenths of a millisecond, you should get about 5 days' worth of time before it overflows. Again, this complicates your GPU math, since you can't just do an int-to-float conversion.
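A sketch of the integer route, assuming a GL 3.x context; I've swapped in CLOCK_MONOTONIC, which suits a since-launch timer better than CLOCK_REALTIME:

#include <stdint.h>
#include <time.h>

/* Current time in tenths of a millisecond, wrapping at 2^32
   (about five days). */
static uint32_t time_get_tenths_ms(void)
{
    struct timespec res;
    clock_gettime(CLOCK_MONOTONIC, &res);
    return (uint32_t)(res.tv_sec * 10000 + res.tv_nsec / 100000);
}

/* glUniform1ui(u_time_location, time_get_tenths_ms()); */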



Source: https://stackoverflow.com/questions/9511233/shader-time-uniform-clock-gettime-being-truncated
