Why is my python/numpy example faster than pure C implementation?

既然无缘 2020-12-30 14:42

I have pretty much the same code in python and C. Python example:

import numpy
nbr_values = 8192
n_iter = 100000

a = numpy.ones(nbr_values).astype(numpy.float32)

for i in range(n_iter):
    a = numpy.sin(a)
3 Answers
  • 2020-12-30 15:27

First, turn on optimization. Second, subtleties matter: your C code is definitely not 'basically the same'.

    Here is equivalent C code:

    sinary2.c:

    #include <math.h>
    #include <stdlib.h>
    
    float *sin_array(const float *input, size_t elements)
    {
        int i = 0;
        float *output = malloc(sizeof(float) * elements);
        for (i = 0; i < elements; ++i) {
            output[i] = sin(input[i]);
        }
        return output;
    }
    

    sinary.c:

    #include <math.h>
    #include <stdlib.h>
    
    extern float *sin_array(const float *input, size_t elements);
    
    int main(void)
    {
        int i;
        int nbr_values = 8192;
        int n_iter = 100000;
        float *x = malloc(sizeof(float) * nbr_values);  
        for (i = 0; i < nbr_values; ++i) {
            x[i] = 1;
        }
        for (i=0; i<n_iter; i++) {
            float *newary = sin_array(x, nbr_values);
            free(x);
            x = newary;
        }
        return 0;
    }
    

    Results:

    $ time python foo.py 
    
    real    0m5.986s
    user    0m5.783s
    sys 0m0.050s
    $ gcc -O3 -ffast-math sinary.c sinary2.c -lm
    $ time ./a.out 
    
    real    0m5.204s
    user    0m4.995s
    sys 0m0.208s
    

    The reason the program has to be split in two is to fool the optimizer a bit. Otherwise it will realize that the whole loop has no effect at all and optimize it out. Putting things in two files doesn't give the compiler visibility into the possible side-effects of sin_array when it's compiling main and so it has to assume that it actually has some and repeatedly call it.

    Your original program is not at all equivalent for several reasons. One is that you have nested loops in the C version and you don't in Python. Another is that you are working with arrays of values in the Python version and not in the C version. Another is that you are creating and discarding arrays in the Python version and not in the C version. And lastly you are using float in the Python version and double in the C version.

    Simply calling the sin function the appropriate number of times does not make for an equivalent test.

    Also, the optimizer is a really big deal for C. Comparing C code on which the optimizer hasn't been used to anything else when you're wondering about a speed comparison is the wrong thing to do. Of course, you also need to be mindful. The C optimizer is very sophisticated and if you're testing something that really doesn't do anything, the C optimizer might well notice this fact and simply not do anything at all, resulting in a program that's ridiculously fast.

  • 2020-12-30 15:40

    You seem to be doing the same operation in C 8192 x 10000 times, but only 10000 times in Python (I haven't used numpy before, so I may be misunderstanding the code). Why are you using an array in the Python case? (Again, I'm not used to numpy, so perhaps the dereferencing is implicit.) If you do use an array, be careful: doubles take a performance hit in terms of caching and optimised vectorisation. You're also using different types between the two implementations (float vs double), but given the algorithm I don't think it matters.
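
    To clarify the numpy semantics this answer is unsure about: numpy.sin is elementwise, so a single call evaluates sin for every element of the array in compiled code, and a loop of n_iter calls really does perform n_iter * nbr_values evaluations. A minimal sketch (array size is illustrative):

    ```python
    import numpy as np

    # One vectorized call evaluates sin for every element, so looping
    # n_iter times over an nbr_values-long array performs
    # n_iter * nbr_values sin evaluations in total.
    a = np.ones(4, dtype=np.float32)
    b = np.sin(a)            # elementwise; returns a new array
    print(b.shape)           # (4,)
    ```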

    The main reason for a lot of the anomalous performance benchmarks surrounding C vs Pythis, Pythat... is simply that the C implementation is often poor.

    https://www.ibm.com/developerworks/community/blogs/jfp/entry/A_Comparison_Of_C_Julia_Python_Numba_Cython_Scipy_and_BLAS_on_LU_Factorization?lang=en

    If you notice, the guy writes C to process an array of doubles (without using the restrict or const keywords where he could have), builds with optimisation, and then forces the compiler to use SSE rather than AVX. In short, the compiler is using an inefficient instruction set for doubles, and the wrong type of registers too, if he wanted performance. You can be sure that numba and numpy will be using as many bells and whistles as possible, and are shipped with very efficient C and C++ libraries to begin with.

    In short, if you want speed with C you have to think about it; you may even have to disassemble the code, and perhaps disable optimisation and use compiler intrinsics instead. C gives you the tools to do it, so don't expect the compiler to do all the work for you. If you don't want that degree of work, use Cython, Numba, Numpy, Scipy etc. They're very fast, but you won't be able to eke out every last bit of performance from the machine; to do that, use C, C++ or newer versions of FORTRAN.
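
    To illustrate the point that numpy ships efficient compiled inner loops, here is a small sketch (sizes are illustrative) showing that the vectorized call computes the same elementwise result as an explicit interpreted loop, just inside compiled code:

    ```python
    import math
    import numpy as np

    a = np.ones(1000, dtype=np.float64)

    # Explicit Python loop: one interpreted math.sin call per element.
    loop_result = np.array([math.sin(v) for v in a])

    # Vectorized call: the same work done in numpy's compiled inner loop.
    vec_result = np.sin(a)

    print(np.allclose(loop_result, vec_result))  # True
    ```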

    Here is a very good article on these very points (I'd use SciPy):

    https://www.scipy.org/scipylib/faq.html

  • 2020-12-30 15:43

    Because "numpy" is a dedicated math library implemented for speed. C's standard functions for sin/cos are generally designed for accuracy.

    You are also not comparing apples with apples, as you are using double in C and float32 (float) in Python. If we change the Python code to calculate float64 instead, the time increases by about 2.5 seconds on my machine, making it roughly match the correctly optimized C version.
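
    The dtype difference is easy to verify: numpy preserves the input precision, so a float32 array is processed entirely in single precision, while the C version's sin() works in double. A sketch:

    ```python
    import numpy as np

    # numpy.sin keeps the input dtype: float32 in, float32 out.
    a32 = np.ones(8192, dtype=np.float32)
    a64 = np.ones(8192, dtype=np.float64)
    print(np.sin(a32).dtype)  # float32
    print(np.sin(a64).dtype)  # float64
    ```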

    If the whole test were made to do something more complicated that requires more control structures (if/else, do/while, etc.), then you would probably see even less difference between C and Python, because the C compiler can't really compute "sin" any faster - unless you implement a better "sin" function.

    Never mind the fact that your code isn't quite the same on both sides... ;)
