Why is MATLAB so fast in matrix multiplication?

Asked 2020-11-22 00:29 by 無奈伤痛

I am making some benchmarks with CUDA, C++, C#, and Java, using MATLAB for verification and matrix generation. When I perform matrix multiplication with MATLAB, 2048x2048 and even bigger matrices are multiplied almost instantly, while my own implementations take far longer. Why is MATLAB so much faster?

12 Answers
  • 2020-11-22 01:06

    The general answer to "why is MATLAB faster at doing X than other programs" is that MATLAB has a lot of built-in, optimized functions.

    The other programs often do not have these functions, so people apply their own creative solutions, which are surprisingly slower than professionally optimized code.

    This can be interpreted in two ways:

    1) The common/theoretical way: MATLAB is not significantly faster, you are just doing the benchmark wrong.

    2) The realistic way: for this kind of work MATLAB is faster in practice, because languages such as C++ are too easily used in inefficient ways.

  • 2020-11-22 01:07

    The answer is that the LAPACK and BLAS libraries make MATLAB blindingly fast at matrix operations, not any proprietary code by the folks at MathWorks.

    Use the LAPACK and/or BLAS libraries in your C++ code for matrix operations and you should get similar performance to MATLAB. These libraries are freely available on any modern system, and parts were developed over decades in academia. Note that there are multiple implementations, including some closed-source ones such as Intel MKL.

    A discussion of how BLAS gets high performance is available here.


    BTW, in my experience it's a serious pain to call the LAPACK libraries directly from C (but worth it). You need to read the documentation VERY precisely.

  • 2020-11-22 01:08

    This kind of question is recurring and should be answered more clearly than "MATLAB uses highly optimized libraries" or "MATLAB uses the MKL" for once on Stack Overflow.

    History:

    Matrix multiplication (together with matrix-vector and vector-vector multiplication and many of the matrix decompositions) is among the most important problems in linear algebra. Engineers have been solving these problems with computers since the early days.

    I'm not an expert on the history, but apparently back then everybody just rewrote their own FORTRAN version with simple loops. Some standardization then came along, with the identification of "kernels" (basic routines) that most linear algebra problems needed in order to be solved. These basic operations were then standardized in a specification called Basic Linear Algebra Subprograms (BLAS). Engineers could then call these standard, well-tested BLAS routines in their code, making their work much easier.

    BLAS:

    BLAS evolved from level 1 (the first version, which defined scalar-vector and vector-vector operations) to level 2 (matrix-vector operations) to level 3 (matrix-matrix operations), providing more and more "kernels" and thus standardizing more and more of the fundamental linear algebra operations. The original FORTRAN 77 implementations are still available on Netlib's website.

    Towards better performance:

    So over the years (notably between the BLAS level 1 and level 2 releases, in the early 80s), hardware changed, with the advent of vector operations and cache hierarchies. These evolutions made it possible to increase the performance of the BLAS subroutines substantially. Different vendors then came along with implementations of the BLAS routines that were more and more efficient.

    I don't know all the historical implementations (I was either not yet born or a kid back then), but two of the most notable ones came out in the early 2000s: Intel MKL and GotoBLAS. Your MATLAB uses Intel MKL, which is a very good, optimized BLAS, and that explains the great performance you see.

    Technical details on Matrix multiplication:

    So why is Matlab (the MKL) so fast at dgemm (double-precision general matrix-matrix multiplication)? In simple terms: because it uses vectorization and good caching of data. In more complex terms: see the article provided by Jonathan Moore.

    Basically, when you perform the multiplication in the C++ code you provided, you are not at all cache-friendly. Since I suspect you created an array of pointers to row arrays, your inner-loop accesses to the k-th column of matice2, i.e. matice2[m][k], are very slow. Indeed, when you access matice2[0][k], you get the k-th element of row array 0 of your matrix. Then in the next iteration you must access matice2[1][k], which is the k-th element of another row array (row 1). Then in the next iteration you access yet another row array, and so on... Since the entire matrix matice2 can't fit in the highest cache levels (it's 8*1024*1024 bytes large), the program must fetch the desired element from main memory, losing a lot of time.

    If you just transposed the matrix, so that accesses hit contiguous memory addresses, your code would already run much faster, because now the processor can load entire rows into the cache at once. Just try this modified version:

    timer.start();
    float temp = 0;
    // rozmer is the matrix dimension; matice1, matice2, matice3 and tempmat
    // are rozmer x rozmer arrays. First transpose matice2 so the inner loop
    // below reads contiguous memory:
    for (int p = 0; p < rozmer; p++)
    {
        for (int q = 0; q < rozmer; q++)
        {
            tempmat[p][q] = matice2[q][p];
        }
    }
    for(int j = 0; j < rozmer; j++)
    {
        for (int k = 0; k < rozmer; k++)
        {
            temp = 0;
            for (int m = 0; m < rozmer; m++)
            {
                temp = temp + matice1[j][m] * tempmat[k][m];
            }
            matice3[j][k] = temp;
        }
    }
    timer.stop();
    

    So you can see how cache locality alone increased your code's performance quite substantially. Now, real dgemm implementations exploit that to a very extensive level: they perform the multiplication on blocks of the matrix sized by the TLB (translation lookaside buffer; long story short: what can effectively be cached), so that they stream to the processor exactly the amount of data it can process. The other aspect is vectorization: they use the processor's vectorized (SIMD) instructions for optimal instruction throughput, which you can't really do from portable, cross-platform C++ code.

    Finally, people claiming that it's because of Strassen's or the Coppersmith–Winograd algorithm are wrong; neither algorithm is practical here, because of the hardware considerations mentioned above.

  • 2020-11-22 01:08

    MATLAB uses a highly optimized implementation of BLAS/LAPACK from Intel known as the Intel Math Kernel Library (Intel MKL) - specifically the dgemm function. This library takes advantage of processor features including SIMD instructions and multiple cores. They don't document which specific algorithm they use. If you were to call Intel MKL from C++ you should see similar performance.

    I am not sure which library MATLAB uses for GPU multiplication, but probably something like NVIDIA cuBLAS.

  • 2020-11-22 01:09

    Here's my results using MATLAB R2011a + Parallel Computing Toolbox on a machine with a Tesla C2070:

    >> A = rand(1024); gA = gpuArray(A);
    % warm up by executing the operations a couple of times, and then:
    >> tic, C = A * A; toc
    Elapsed time is 0.075396 seconds.
    >> tic, gC = gA * gA; toc
    Elapsed time is 0.008621 seconds.
    

    MATLAB uses highly optimized libraries for matrix multiplication, which is why plain MATLAB matrix multiplication is so fast. The gpuArray version uses MAGMA.

    Update using R2014a on a machine with a Tesla K20c, and the new timeit and gputimeit functions:

    >> A = rand(1024); gA = gpuArray(A);
    >> timeit(@()A*A)
    ans =
        0.0324
    >> gputimeit(@()gA*gA)
    ans =
        0.0022
    

    Update using R2018b on a WIN64 machine with 16 physical cores and a Tesla V100:

    >> timeit(@()A*A)
    ans =
        0.0229
    >> gputimeit(@()gA*gA)
    ans =
       4.8019e-04
    

    (NB: at some point (I forget when exactly) gpuArray switched from MAGMA to cuBLAS - MAGMA is still used for some gpuArray operations though)

  • 2020-11-22 01:10

    MATLAB incorporated LAPACK some time ago, so I assume their matrix multiplication uses something at least that fast. LAPACK source code and documentation are readily available.

    You might also look at Goto and Van De Geijn's paper "Anatomy of High-Performance Matrix Multiplication" at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.140.1785&rep=rep1&type=pdf
