gettimeofday() C++ Inconsistency

醉梦人生 2021-01-06 13:04

I'm doing a project that involves comparing programming languages. I'm computing the Ackermann function. I tested Java, Python, and Ruby, and got responses between 10 an

1 Answer
  • Assuming you're talking about the resolution of the data returned, the POSIX specification for gettimeofday states:

    The resolution of the system clock is unspecified.

    This is because systems vary widely in their capacity for tracking small time periods. Even the ISO standard clock() function comes with similar caveats.

    If you're talking about how long it takes to call it, the standard makes no performance guarantees along those lines. An implementation is perfectly free to wait 125 minutes before giving you the time, although I doubt such an implementation would have much market success :-)


    As an example of the limited resolution, I typed in the following code to check it on my system:

    #include <stdio.h>
    #include <sys/time.h>
    
    #define NUMBER 30
    
    int main (void) {
        struct timeval tv[NUMBER];
        int count[NUMBER], i;
        long diff;
    
        gettimeofday (&tv[0], NULL);
    
        for (i = 1; i < NUMBER; i++) {
            gettimeofday (&tv[i], NULL);
            count[i] = 1;
            while ((tv[i].tv_sec == tv[i-1].tv_sec) &&
                (tv[i].tv_usec == tv[i-1].tv_usec))
            {
                count[i]++;
                gettimeofday (&tv[i], NULL);
            }
        }
    
        /* tv_sec is a time_t and tv_usec a suseconds_t, so cast both
           to long for a portable printf format. */
        printf ("%2d: secs = %ld, usecs = %6ld\n",
            0, (long) tv[0].tv_sec, (long) tv[0].tv_usec);
        for (i = 1; i < NUMBER; i++) {
            diff = (long) (tv[i].tv_sec - tv[i-1].tv_sec) * 1000000L;
            diff += tv[i].tv_usec - tv[i-1].tv_usec;
    
            printf ("%2d: secs = %ld, usecs = %6ld, count = %5d, diff = %ld\n",
                i, (long) tv[i].tv_sec, (long) tv[i].tv_usec, count[i], diff);
        }
    
        return 0;
    }
    
    

    The code basically records the changes in the underlying time, keeping a count of how many calls to gettimeofday() it took for the time to actually change. This was run on a reasonably powerful machine, so it's not short on processing power: the count shows how often it was able to call gettimeofday() within each time quantum, around the 5,800 mark (the first entry is ignored, since we don't know where in that quantum the measurements started).

    The output was:

     0: secs = 1318554836, usecs = 990820
     1: secs = 1318554836, usecs = 991820, count =  5129, diff = 1000
     2: secs = 1318554836, usecs = 992820, count =  5807, diff = 1000
     3: secs = 1318554836, usecs = 993820, count =  5901, diff = 1000
     4: secs = 1318554836, usecs = 994820, count =  5916, diff = 1000
     5: secs = 1318554836, usecs = 995820, count =  5925, diff = 1000
     6: secs = 1318554836, usecs = 996820, count =  5814, diff = 1000
     7: secs = 1318554836, usecs = 997820, count =  5814, diff = 1000
     8: secs = 1318554836, usecs = 998820, count =  5819, diff = 1000
     9: secs = 1318554836, usecs = 999820, count =  5901, diff = 1000
    10: secs = 1318554837, usecs =    820, count =  5815, diff = 1000
    11: secs = 1318554837, usecs =   1820, count =  5866, diff = 1000
    12: secs = 1318554837, usecs =   2820, count =  5849, diff = 1000
    13: secs = 1318554837, usecs =   3820, count =  5857, diff = 1000
    14: secs = 1318554837, usecs =   4820, count =  5867, diff = 1000
    15: secs = 1318554837, usecs =   5820, count =  5852, diff = 1000
    16: secs = 1318554837, usecs =   6820, count =  5865, diff = 1000
    17: secs = 1318554837, usecs =   7820, count =  5867, diff = 1000
    18: secs = 1318554837, usecs =   8820, count =  5885, diff = 1000
    19: secs = 1318554837, usecs =   9820, count =  5864, diff = 1000
    20: secs = 1318554837, usecs =  10820, count =  5918, diff = 1000
    21: secs = 1318554837, usecs =  11820, count =  5869, diff = 1000
    22: secs = 1318554837, usecs =  12820, count =  5866, diff = 1000
    23: secs = 1318554837, usecs =  13820, count =  5875, diff = 1000
    24: secs = 1318554837, usecs =  14820, count =  5925, diff = 1000
    25: secs = 1318554837, usecs =  15820, count =  5870, diff = 1000
    26: secs = 1318554837, usecs =  16820, count =  5877, diff = 1000
    27: secs = 1318554837, usecs =  17820, count =  5868, diff = 1000
    28: secs = 1318554837, usecs =  18820, count =  5874, diff = 1000
    29: secs = 1318554837, usecs =  19820, count =  5862, diff = 1000
    

    showing that the resolution seems to be limited to no better than one thousand microseconds. Of course, your system may differ; the bottom line is that it depends on your implementation and/or environment.


    One way to get around this type of limitation is not to do something once, but to do it N times and then divide the elapsed time by N.

    For example, say you call your function and the timer says it took 125 milliseconds, which you suspect is a little high. I would suggest calling it a thousand times in a loop and measuring the time for the entire thousand.

    If that turns out to be 125 seconds then, yes, it's probably slow. However, if it takes only 27 seconds, that would indicate your timer resolution is what's causing the seemingly large times, since that would equate to 27 milliseconds per iteration, on par with what you're seeing from the other results.

    Modifying your code to take this into account would be along the lines of:

    #include <sys/time.h>
    #include <iostream>
    using std::cout;
    
    int a(int m, int n);   // your Ackermann function
    
    int main() {
        const int count = 1000;
        timeval tim;
    
        gettimeofday(&tim, NULL);
        double t1 = 1.0e6 * tim.tv_sec + tim.tv_usec;
    
        int v;
        for (int i = 0; i < count; ++i)
            v = a(3, 4);
    
        gettimeofday(&tim, NULL);
        double t2 = 1.0e6 * tim.tv_sec + tim.tv_usec;
    
        // v, then the average microseconds per call
        cout << v << '\n' << ((t2 - t1) / count) << '\n';
    
        return 0;
    }
    