Why does JavaScript appear to be 4 times faster than C++?

Front-end · unresolved · 5 answers · 1807 views
长发绾君心 2021-01-29 22:42

For a long time, I had thought of C++ being faster than JavaScript. However, today I made a benchmark script to compare the speed of floating point calculations in the two languages, and JavaScript appeared to be about four times faster.

5 Answers
  •  再見小時候
    2021-01-29 23:37

    I may have some bad news for you if you're on a Linux system (which complies with POSIX at least in this situation). The clock() call returns the number of clock ticks consumed by the program, scaled by CLOCKS_PER_SEC, which is 1,000,000.

    That means, if you're on such a system, you're talking in microseconds for C and milliseconds for JavaScript (as per the JS online docs). So, rather than JS being four times faster, C++ is actually 250 times faster.
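
    The practical fix is to scale clock() by CLOCKS_PER_SEC before comparing against JavaScript's millisecond timers. Here's a minimal sketch of that conversion; the floating-point loop is just a placeholder standing in for whatever your actual benchmark body does:

    #include <stdio.h>
    #include <time.h>

    int main (void) {
        clock_t start = clock();

        /* stand-in workload; substitute your own floating point loop here */
        volatile double a = 0.0;
        long i;
        for (i = 1; i <= 10000000L; i++)
            a += 1.0 / i;

        clock_t end = clock();

        /* clock() returns ticks, not milliseconds: divide by CLOCKS_PER_SEC
           for seconds, then multiply by 1000 to match JavaScript's timers */
        double ms = (double) (end - start) * 1000.0 / CLOCKS_PER_SEC;
        printf ("Time Cost: %.0fms (a = %f)\n", ms, a);
        return 0;
    }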

    Now, it may be that you're on a system where CLOCKS_PER_SEC is something other than a million; you can run the following program on your system to see if it's scaled by the same value:

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define MILLION * 1000000

    /* Print an integer with comma separators, followed by the character c. */
    static void commaOut (int n, char c) {
        if (n < 1000) {
            printf ("%d%c", n, c);
            return;
        }

        commaOut (n / 1000, ',');
        printf ("%03d%c", n % 1000, c);
    }

    int main (int argc, char *argv[]) {
        int i;

        system("date");
        clock_t start = clock();
        clock_t end = start;

        /* Burn CPU until clock() reports 30 million ticks have elapsed,
           then compare against the wall-clock times printed by date. */
        while (end - start < 30 MILLION) {
            for (i = 10 MILLION; i > 0; i--) {};
            end = clock();
        }

        system("date");
        commaOut (end - start, '\n');

        return 0;
    }
    

    The output on my box is:

    Tuesday 17 November  11:53:01 AWST 2015
    Tuesday 17 November  11:53:31 AWST 2015
    30,001,946
    

    showing that the scaling factor is a million. If you run that program (or inspect CLOCKS_PER_SEC directly) and the value is not one million, you need to look at some other things.
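
    If you just want to see the constant itself, a one-liner sketch is enough, since CLOCKS_PER_SEC is defined in <time.h>:

    #include <stdio.h>
    #include <time.h>

    int main (void) {
        /* the number of clock() ticks per second on this implementation */
        printf ("%ld\n", (long) CLOCKS_PER_SEC);
        return 0;
    }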


    The first step is to ensure your code is actually being optimised by the compiler. That means, for example, setting -O2 or -O3 for gcc.
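
    For example, assuming your benchmark lives in a file called bench.cpp (the name here is just a placeholder):

    g++ -O2 -o bench bench.cpp    # or -O3 for more aggressive optimisation
    ./bench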

    On my system with unoptimised code, I see:

    Time Cost: 320ms
    Time Cost: 300ms
    Time Cost: 300ms
    Time Cost: 300ms
    Time Cost: 300ms
    Time Cost: 300ms
    Time Cost: 300ms
    Time Cost: 300ms
    Time Cost: 300ms
    Time Cost: 300ms
    a = 2717999973.760710
    

    and it's three times faster with -O2, albeit with a slightly different answer, though only by about one millionth of a percent:

    Time Cost: 140ms
    Time Cost: 110ms
    Time Cost: 100ms
    Time Cost: 100ms
    Time Cost: 100ms
    Time Cost: 100ms
    Time Cost: 100ms
    Time Cost: 100ms
    Time Cost: 100ms
    Time Cost: 100ms
    a = 2718000003.159864
    

    That would bring the two situations back on par with each other, something I'd expect since JavaScript is not some interpreted beast like in the old days, where each token is interpreted whenever it's seen.

    Modern JavaScript engines (V8, Rhino, etc.) can compile the code to an intermediate form (or even to machine language), which may allow performance roughly equal to compiled languages like C.

    But, to be honest, you don't tend to choose JavaScript or C++ for its speed; you choose them for their areas of strength. There aren't many C compilers floating around inside browsers, and I've not noticed many operating systems or embedded apps written in JavaScript.
