Fastest timing resolution system

死守一世寂寞 2020-12-03 06:11

What is the fastest timing system a C/C++ programmer can use?

For example:
time() will give the seconds since Jan 01 1970 00:00.
GetTickCount() on Windows will give the elapsed time, in milliseconds, since the system was started.
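
For reference, a minimal sketch showing both of those calls (the GetTickCount() part is Windows-only):

#include <ctime>
#ifdef _WIN32
#  include <windows.h>
#endif

int main() {
    time_t secs = std::time(nullptr);   // whole seconds since 1970-01-01 00:00 UTC
    (void)secs;
#ifdef _WIN32
    DWORD ticks = GetTickCount();       // milliseconds since the system started (~10-16 ms granularity)
    (void)ticks;
#endif
    return 0;
}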

10 Answers
  • 2020-12-03 06:32

    I'd suggest that you use the GetSystemTimeAsFileTime API if you're specifically targeting Windows. It's generally faster than GetSystemTime and has the same precision (around 10-15 milliseconds - don't be misled by the 100 ns resolution of the FILETIME format); when I benchmarked it some years ago under Windows XP it was somewhere in the range of 50-100 times faster.

    The only disadvantage is that you might have to convert the returned FILETIME structures to a clock time using e.g. FileTimeToSystemTime if you need to access the returned times in a more human-friendly format. On the other hand, as long as you don't need those converted times in real-time you can always do this off-line or in a "lazy" fashion (e.g. only convert the time stamps you need to display/process, and only when you actually need them).
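
    A minimal sketch of that pattern (grab the raw FILETIME cheaply, convert only when you actually need a human-readable stamp; the variable names are just for illustration):

    #include <windows.h>
    #include <cstdio>

    int main() {
        FILETIME ft;
        GetSystemTimeAsFileTime(&ft);   // cheap: 100-ns units since 1601-01-01 (UTC)

        // Convert lazily, e.g. only for the stamps you actually display.
        SYSTEMTIME st;
        if (FileTimeToSystemTime(&ft, &st)) {
            std::printf("%04u-%02u-%02u %02u:%02u:%02u.%03u\n",
                        st.wYear, st.wMonth, st.wDay,
                        st.wHour, st.wMinute, st.wSecond, st.wMilliseconds);
        }
        return 0;
    }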

    QueryPerformanceCounter can be a good choice as others have mentioned, but the overhead can be rather large depending on the underlying hardware support. In the benchmark I mention above, QueryPerformanceCounter calls were 25-200 times slower than calls to GetSystemTimeAsFileTime. Also, there are some reliability problems, as e.g. reported here.

    So, in summary: If you can cope with a precision of 10-15 milliseconds I'd recommend you to use GetSystemTimeAsFileTime. If you need anything better than that I'd go for QueryPerformanceCounter.

    Small disclaimer: I haven't performed any benchmarking under Windows versions later than XP SP3. I'd recommend you do some benchmarking on your own.

  • 2020-12-03 06:32

    If you are targeting a late enough version of the OS then you could use GetTickCount64(), which has a much higher wrap-around point than GetTickCount(). You could also simply build a version of GetTickCount64() yourself on top of GetTickCount(), as sketched below.
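
    A minimal sketch of such a wrapper, assuming it is polled more often than the ~49.7-day wrap interval of GetTickCount() (not thread-safe; the function name is just for illustration):

    #include <windows.h>

    // Accumulates 32-bit wrap-arounds into a 64-bit millisecond count.
    ULONGLONG MyTickCount64()
    {
        static DWORD     last = 0;   // previous 32-bit reading
        static ULONGLONG high = 0;   // upper bits accumulated across wraps

        DWORD now = GetTickCount();
        if (now < last)              // the 32-bit counter wrapped around
            high += 0x100000000ULL;
        last = now;
        return high + now;
    }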

  • 2020-12-03 06:37

    I recently had this question and did some research. The good news is that all three of the major operating systems provide some sort of high-resolution timer. The bad news is that it is a different API call on each system. For POSIX operating systems you want to use clock_gettime(). If you're on Mac OS X, however, this is not supported; you have to use mach_absolute_time() instead. For Windows, use QueryPerformanceCounter. Alternatively, with compilers that support OpenMP, you can use omp_get_wtime(), but it may not provide the resolution that you are looking for.
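
    For completeness, a minimal sketch of the OpenMP variant (assumes a compiler with OpenMP enabled, e.g. -fopenmp; omp_get_wtick() reports the timer's tick size):

    #include <omp.h>
    #include <cstdio>

    int main() {
        double t0 = omp_get_wtime();   // wall-clock time in seconds
        // ... the operation being timed ...
        double t1 = omp_get_wtime();
        std::printf("elapsed: %g s (timer tick: %g s)\n", t1 - t0, omp_get_wtick());
        return 0;
    }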

    I also found cycle.h from fftw.org (www.fftw.org/cycle.h) to be useful.

    Here is some code that calls a timer on each OS, using some ugly #ifdef statements. Usage is very simple: Timer t; t.tic(); SomeOperation(); t.toc("Message"); and it will print the elapsed time in seconds.

    #ifndef TIMER_H
    #define TIMER_H
    
    #include <cstdint>   // uint64_t
    #include <iostream>
    #include <string>
    
    # if  (defined(__MACH__) && defined(__APPLE__))
    #   define _MAC
    # elif (defined(_WIN32) || defined(WIN32) || defined(__CYGWIN__) || defined(__MINGW32__) || defined(_WIN64))
    #   define _WINDOWS
    #   ifndef WIN32_LEAN_AND_MEAN
    #     define WIN32_LEAN_AND_MEAN
    #   endif
    #endif
    
    # if defined(_MAC)
    #    include <mach/mach_time.h>
    # elif defined(_WINDOWS)
    #    include <windows.h>
    # else
    #    include <time.h>
    # endif
    
    
    // Note: the name 'timer_t' is already claimed by POSIX <time.h>,
    // so a different name is used here for the raw counter type.
    #if defined(_MAC)
      typedef uint64_t timer_rep;   // raw mach_absolute_time() ticks
      typedef double   timer_c;     // unused on this platform
    
    #elif defined(_WINDOWS)
      typedef LONGLONG      timer_rep;  // QueryPerformanceCounter ticks
      typedef LARGE_INTEGER timer_c;
    
    #else
      typedef double   timer_rep;   // seconds, assembled from a timespec
      typedef timespec timer_c;
    #endif
    
      //==============================================================================
      // Timer
      // A quick class to do benchmarking.
      // Example: Timer t;  t.tic();  SomeSlowOp(); t.toc("Some Message");
    
      class Timer {
      public:
        Timer();
    
        inline void tic();
        inline void toc();
        inline void toc(const std::string &msg);
    
        void print(const std::string &msg);
        void print();
        void reset();
        double getTime();
    
      private:
        timer_rep start;   // value captured by tic()
        double duration;
        timer_c ts;
        double conv_factor;
        double elapsed_time;
      };
    
    
    
      Timer::Timer() {
    
    #if defined(_MAC)
        mach_timebase_info_data_t info;
        mach_timebase_info(&info);
    
        conv_factor = (static_cast<double>(info.numer))/
                      (static_cast<double>(info.denom));
        conv_factor = conv_factor*1.0e-9;
    
    #elif defined(_WINDOWS)
        timer_c freq;
        QueryPerformanceFrequency(&freq);
        conv_factor = 1.0/(static_cast<double>(freq.QuadPart));
    
    #else
        conv_factor = 1.0;
    #endif
    
        reset();
      }
    
      inline void Timer::tic() {
    
    #if defined(_MAC)
        start = mach_absolute_time();
    
    #elif defined(_WINDOWS)
        QueryPerformanceCounter(&ts);
        start = ts.QuadPart;
    
    #else
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);  // CPU time of this process; use CLOCK_MONOTONIC for wall-clock time
        start = static_cast<double>(ts.tv_sec) + 1.0e-9 *
                static_cast<double>(ts.tv_nsec);
    
    #endif
      }
    
      inline void Timer::toc() {
    #if defined(_MAC)
        duration =  static_cast<double>(mach_absolute_time() - start);
    
    #elif defined(_WINDOWS)
        QueryPerformanceCounter(&ts);
        duration = static_cast<double>(ts.QuadPart - start);
    
    #else
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);
        duration = (static_cast<double>(ts.tv_sec) + 1.0e-9 *
                    static_cast<double>(ts.tv_nsec)) - start;
    
    #endif
    
        elapsed_time = duration*conv_factor;
      }
    
      inline void Timer::toc(const std::string &msg) { toc(); print(msg); };
    
      void Timer::print(const std::string &msg) {
        std::cout << msg << " "; print();
      }
    
      void Timer::print() {
        if(elapsed_time) {
          std::cout << "elapsed time: " << elapsed_time << " seconds\n";
        }
      }
    
      void Timer::reset() { start = 0; duration = 0; elapsed_time = 0; }
      double Timer::getTime() { return elapsed_time; }
    
    
    #if defined(_WINDOWS)
    # undef WIN32_LEAN_AND_MEAN
    #endif
    
    #endif // TIMER_H
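
    A short usage example (assuming the header above is saved as timer.h):

    #include "timer.h"

    int main() {
        Timer t;
        t.tic();

        // Some operation worth timing.
        double sum = 0.0;
        for (int i = 0; i < 10000000; ++i) sum += i * 0.5;

        t.toc("Summation loop:");        // prints "Summation loop: elapsed time: ... seconds"
        return (sum > 0) ? 0 : 1;        // keeps the loop from being optimized away
    }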
    
  • 2020-12-03 06:39

    On Linux you get microseconds:

    #include <sys/time.h>   // gettimeofday
    
    struct timeval tv;
    int res = gettimeofday(&tv, NULL);
    double tmp = (double) tv.tv_sec + 1e-6 * (double) tv.tv_usec;
    

    On Windows, only milliseconds are available:

    SYSTEMTIME st;
    GetSystemTime(&st);
    // 'tmp' already holds the whole seconds (that part was edited out for brevity)
    tmp += 1e-3 * st.wMilliseconds;
    
    return tmp;
    

    This came from R's datetime.c (and was edited down for brevity).

    Then there is of course Boost's Date_Time which can have nanosecond resolution on some systems (details here and here).
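
    A minimal sketch using Boost.Date_Time's microsecond clock (assumes the Boost headers are installed; the achievable resolution depends on the platform):

    #include <boost/date_time/posix_time/posix_time.hpp>
    #include <iostream>

    int main() {
        using boost::posix_time::microsec_clock;
        using boost::posix_time::ptime;

        ptime t0 = microsec_clock::universal_time();
        // ... the operation being timed ...
        ptime t1 = microsec_clock::universal_time();

        std::cout << "elapsed: " << (t1 - t0).total_microseconds() << " us\n";
        return 0;
    }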

  • 2020-12-03 06:46

    POSIX supports clock_gettime() which uses a struct timespec which has nanosecond resolution. Whether your system really supports that fine-grained a resolution is more debatable, but I believe that's the standard call with the highest resolution. Not all systems support it, and it is sometimes well hidden (library '-lposix4' on Solaris, IIRC).
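
    A minimal sketch that reports both the advertised resolution and a reading of the monotonic clock (link with -lrt on older glibc, or -lposix4 on old Solaris):

    #include <ctime>
    #include <cstdio>

    int main() {
        timespec res, now;

        clock_getres(CLOCK_MONOTONIC, &res);    // advertised resolution
        clock_gettime(CLOCK_MONOTONIC, &now);   // current reading

        std::printf("resolution: %ld ns\n", res.tv_nsec);
        std::printf("now: %lld.%09ld s\n",
                    static_cast<long long>(now.tv_sec), now.tv_nsec);
        return 0;
    }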


    Update (2016-09-20):

    • Mac OS X 10.6.4 did not support clock_gettime(), and neither did any other version of Mac OS X up to and including Mac OS X 10.11.6 (El Capitan). However, starting with macOS Sierra 10.12 (released September 2016), macOS does have the function clock_gettime() and manual pages for it at long last. The actual resolution (on CLOCK_MONOTONIC) is still microseconds; the smaller units are all zeros. This is confirmed by clock_getres(), which reports that the resolution is 1000 nanoseconds, i.e. 1 µs.

    The manual page for clock_gettime() on macOS Sierra mentions mach_absolute_time() as a way to get high-resolution timing. For more information, amongst other places, see Technical Q&A QA1398: Mach Absolute Time Units and (on SO) What is mach_absolute_time() based on on iPhone?

  • 2020-12-03 06:47

    GetSystemTimeAsFileTime is the fastest source. Its granularity can be obtained by a call to GetSystemTimeAdjustment, which fills lpTimeIncrement. The system time as FILETIME has 100 ns units and increments by TimeIncrement. TimeIncrement can vary, and it depends on the setting of the multimedia timer interface.

    A call to timeGetDevCaps will disclose the capabilities of the time services. It returns a value wPeriodMin for the minimum supported interrupt period. A call to timeBeginPeriod with wPeriodMin as its argument will set the system to operate at the highest possible interrupt frequency (typically ~1 ms). This also forces the time increment of the system FILETIME returned by GetSystemTimeAsFileTime to become smaller; its granularity will then be in the range of 1 ms (10000 100-ns units).

    For your purpose, I'd suggest going with this approach.
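
    A minimal sketch of the approach described above: raise the interrupt frequency with timeBeginPeriod(), then read the system file time (link with winmm.lib for the multimedia timer calls):

    #include <windows.h>
    #include <mmsystem.h>
    #include <cstdio>

    int main() {
        TIMECAPS tc;
        timeGetDevCaps(&tc, sizeof(tc));
        timeBeginPeriod(tc.wPeriodMin);      // highest supported interrupt frequency, typically ~1 ms

        FILETIME ft;
        GetSystemTimeAsFileTime(&ft);        // 100-ns units, now incrementing roughly every 1 ms
        ULARGE_INTEGER t;
        t.LowPart  = ft.dwLowDateTime;
        t.HighPart = ft.dwHighDateTime;
        std::printf("file time: %llu (100-ns ticks)\n",
                    static_cast<unsigned long long>(t.QuadPart));

        timeEndPeriod(tc.wPeriodMin);        // restore the previous timer period
        return 0;
    }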

    The QueryPerformanceCounter choice is questionable since its frequency is not accurate in two ways: firstly, it deviates from the value given by QueryPerformanceFrequency by a hardware-specific offset. This offset can easily be several hundred ppm, which means that a conversion into time will contain an error of several hundred microseconds per second. Secondly, it has thermal drift. The drift of such devices can easily be several ppm. In this way another, heat-dependent, error of several µs/s is added.

    So as long as a resolution of ~1 ms is sufficient and the main concern is overhead, GetSystemTimeAsFileTime is by far the best solution.

    When microseconds matter, you'd have to go a longer way and look into more detail. Sub-millisecond time services are described at the Windows Timestamp Project.
