Why is MATLAB so fast in matrix multiplication?

無奈伤痛 2020-11-22 00:29

I am making some benchmarks with CUDA, C++, C#, Java, and using MATLAB for verification and matrix generation. When I perform matrix multiplication with MATLAB, 2048x

12 Answers
  •  你的背包
    2020-11-22 00:48

    The sharp contrast is due not only to MATLAB's impressive optimization (as many other answers have already discussed), but also to the way you represented the matrix as an object.

    It seems like you made the matrix a list of lists? A list of lists contains pointers to lists, which in turn contain your matrix elements. The contained lists can end up anywhere in memory, so as you loop over your first index (the row number?), memory access time becomes very significant. In comparison, why don't you try implementing the matrix as a single contiguous list/vector, along the following lines?

    #include <vector>
    
    struct matrix {
        matrix(int x, int y) : n_row(x), n_col(y), M(x * y) {}
        int n_row;
        int n_col;
        std::vector<double> M;  // all elements stored in one contiguous block
        double &operator()(int i, int j);
    };
    

    And

    double &matrix::operator()(int i, int j) {
        return M[n_col * i + j];  // row-major: row i occupies a contiguous range
    }
    

    The same multiplication algorithm should be used so that the number of floating-point operations is the same (on the order of n^3 for square matrices of size n).
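
    Here is a rough sketch of the straightforward triple-loop multiplication over this struct (the function name multiply, and the assumption that your original code used the naive algorithm, are mine, not from your post):

    matrix multiply(matrix &A, matrix &B) {
        // Naive O(n^3) multiplication; assumes A.n_col == B.n_row.
        matrix C(A.n_row, B.n_col);
        for (int i = 0; i < A.n_row; ++i)
            for (int j = 0; j < B.n_col; ++j) {
                double sum = 0.0;
                for (int k = 0; k < A.n_col; ++k)
                    sum += A(i, k) * B(k, j);  // walks contiguously along row i of A
                C(i, j) = sum;
            }
        return C;
    }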

    I'm asking you to time it so that the result is comparable to what you measured earlier (on the same machine). The comparison will show exactly how significant memory access time can be!
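
    A minimal timing sketch, assuming the multiply function above and using std::chrono (the size n is a placeholder; set it to whatever sizes you benchmarked before):

    #include <chrono>
    #include <cstdio>

    int main() {
        int n = 2048;  // placeholder: use the same size as your earlier runs
        matrix A(n, n), B(n, n);
        // ... fill A and B with the same test data as before ...
        auto t0 = std::chrono::steady_clock::now();
        matrix C = multiply(A, B);
        auto t1 = std::chrono::steady_clock::now();
        // print one element so the compiler cannot discard the result
        std::printf("C(0,0) = %f, time = %.3f s\n",
                    C(0, 0), std::chrono::duration<double>(t1 - t0).count());
    }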
