Sprite sequence control through DeltaTime

無奈伤痛 2020-12-22 06:11

Previously, in my game's main loop, timing was managed at 60 FPS with a corresponding Delay call for the frame delay.

The Sprite sequence was animated as follows:

1 Answer
  • 2020-12-22 06:53

    A delay in the main loop is not really a good way to do this, as it does not account for the time the other work in your main loop takes. When you removed the delay, the speed became higher and varied more, because the timing of the rest of your main loop became more significant, and it is usually non-constant for many reasons, such as:

    • OS granularity
    • synchronization with gfx card/driver
    • non constant processing times

    There are several ways to handle this:

    1. measure time

      <pre>
      t1=get_actual_time();          // current time
      while (t1-t0>=animation_T)     // one frame step per elapsed period
       {
       siguienteSprite++;
       t0+=animation_T;
       }
      // t0=t1; // optional; changes the timing properties a bit
      </pre>
      

      where t0 is a global variable holding the "last" measured time of a sprite change, t1 is the actual time, and animation_T is the time constant between animation changes. To measure time you need to use an OS API like PerformanceCounter on Windows, or RDTSC in asm, or any other you have at hand, as long as its resolution is small enough.
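      The measured-time approach above can be sketched in portable C++ with std::chrono instead of an OS-specific counter (siguienteSprite and animation_T follow the answer; step_animation and the 100 ms period are assumptions for illustration):

      ```cpp
      #include <cassert>
      #include <chrono>

      using Clock = std::chrono::steady_clock;

      int siguienteSprite = 0;                 // current animation frame index
      Clock::time_point t0 = Clock::now();     // "last" sprite-change time
      const auto animation_T = std::chrono::milliseconds(100); // assumed frame period

      // Call once per main-loop iteration; advances as many frames as the
      // elapsed time covers, so animation speed is independent of FPS.
      void step_animation()
      {
          Clock::time_point t1 = Clock::now();
          while (t1 - t0 >= animation_T)
          {
              siguienteSprite++;
              t0 += animation_T;
          }
      }
      ```

      Keeping the remainder in t0 (rather than doing t0 = t1) preserves fractional periods, so frames are not lost when the main loop runs slower than animation_T.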

    2. OS timer

      simply increment siguienteSprite in some timer with an animation_T interval. This is simple, but OS timers are not precise, usually around 1 ms or more plus OS granularity (similar to Sleep accuracy).

    3. Thread timer

      you can create a single thread for timing purposes, for example something like this:

      for (;!threads_stop;)
       {
       Delay(animation_T); // or Sleep(); returns CPU to the OS
       siguienteSprite++;  // advance one frame per period
       }
      

      Do not forget that siguienteSprite must be volatile, and must be buffered during rendering to avoid flickering and/or access-violation errors. This approach is a bit more precise (unless you have a single-core CPU).
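      A modern C++ sketch of this thread-timer idea, assuming the same names as above; note that std::atomic is used here instead of plain volatile, since in C++ volatile alone does not guarantee thread-safe access:

      ```cpp
      #include <atomic>
      #include <cassert>
      #include <chrono>
      #include <thread>

      std::atomic<int>  siguienteSprite{0};   // shared frame counter
      std::atomic<bool> threads_stop{false};  // shutdown flag
      const auto animation_T = std::chrono::milliseconds(10); // assumed frame period

      // Timing thread: one frame advance per animation_T.
      void timer_thread()
      {
          while (!threads_stop)
          {
              std::this_thread::sleep_for(animation_T); // releases the CPU
              siguienteSprite++;
          }
      }

      // The renderer reads the counter once per frame into a local copy
      // ("buffering"), so the frame index cannot change mid-draw.
      int buffered_frame() { return siguienteSprite.load(); }
      ```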

      You can also increment some time variable instead and use that as the actual time in your app, with any resolution you want. But beware: if the delay does not return CPU control to the OS, this approach will utilize your CPU at 100%/CPU_cores. The remedy for this is to replace your delay with this:

      Sleep(0.9*animation_T); // sleep most of the period, releasing the CPU
      for (;;)                // busy-wait the small remainder for precision
       {
       t1=get_actual_time();
       if (t1-t0>=animation_T)
        {
        siguienteSprite++;
        t0=t1;
        break;
        }
       }

    If you are using measured time, then you should handle overflows (t1 < t0), because any counter will overflow after some time. For example, the lower 32 bits of RDTSC on a 3.2 GHz CPU core overflow every 2^32/3.2e9 = 1.342 sec, so it is a real possibility. If my memory serves well, performance counters on Windows usually run at around 3.5 MHz on older OS versions and around 60-120 MHz on newer ones (at least the last time I checked), and they are 64-bit, so overflows are not that big of a problem (unless you run 24/7). Also, in case you use RDTSC, you should set the process/thread affinity to a single CPU core to avoid timing problems on multi-core CPUs.
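    One common way to cope with a single wrap-around, sketched here as an assumption rather than taken from the answer, is to keep the counter in an unsigned type: modular subtraction then yields the correct elapsed ticks even when t1 < t0, as long as the real elapsed time is below one full counter period.

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Elapsed ticks between two 32-bit counter samples; unsigned modular
    // arithmetic makes t1 - t0 correct across one overflow of the counter.
    uint32_t elapsed_ticks(uint32_t t0, uint32_t t1)
    {
        return t1 - t0;
    }
    ```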

    I have done my share of benchmarking and advanced high-resolution timing at a low level over the years, so here are a few related QAs of mine:


    • wrong clock cycle measurements with rdtsc - OS granularity
    • Measuring Cache Latencies - measuring CPU frequency
    • Cache size estimation on your system? - PerformanceCounter example
    • Questions on Measuring Time Using the CPU Clock - PIT as an alternative timing source
