When should I be using `sparse`?

渐次进展 2021-01-04 03:10

I've been looking through Matlab's sparse documentation trying to find whether there are any guidelines for when it makes sense to use a sparse representation rather than a full one.

3 Answers
  • 2021-01-04 03:45

    If you have a matrix of a fixed dimension, then the best way to establish a reliable answer is just trial and error. However, if you do not know the dimensions of your matrices/vectors, then the rules of thumb are

    Your sparse vectors should have an effectively constant number of nonzero entries

    which for matrices will imply

    Your N x N sparse matrix should have <= c * N nonzero entries, where c is a constant "much less" than N.
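
    For the fixed-dimension case, a hedged sketch of that trial and error in MATLAB could look like the following; the sizes and densities are arbitrary placeholders, so substitute your own:

    N = 5000;
    for density = [0.001 0.01 0.1]
        A = sprand(N, N, density);          % random sparse test matrix
        x = rand(N, 1);
        tic; y = A * x;  ts = toc;          % sparse matrix-vector product
        Af = full(A);
        tic; y = Af * x; tf = toc;          % same product on the full copy
        fprintf('density %.3f: sparse %.4fs, full %.4fs\n', density, ts, tf);
    end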

    Let's give a pseudo-theoretical explanation for this rule. We shall consider the fairly easy task of computing a scalar (or dot) product of two vectors with double-valued coordinates. Now, if you have two dense vectors of the same length N, your code will look like

    //define vectors vector, wector as double arrays of length N 
    double sum = 0;
    for (int i = 0; i < N; i++)
    {
        sum += vector[i] * wector[i];
    }
    

    This amounts to N additions, N multiplications and N conditional branches (loop iterations). The conditional branch is by far the most costly operation here, so much so that we may neglect the multiplications, and even more so the additions. The reason why a mispredicted branch is so expensive is explained in an answer to this question.

    UPD: In fact, in a for loop you risk a mispredicted branch only once, at the end of the loop, since the predictor will by default assume the loop continues. This amounts to at most one pipeline restart per scalar product.

    Let's now have a look at how sparse vectors are realized in BLAS. There, each vector is encoded by two arrays: one of values and one of corresponding indices, something like

    1.7    -0.8    3.6
    171     83     215
    

    (plus one integer giving the nominal length N). The BLAS documentation indicates that the ordering of the indices plays no role here, so that the data

    -0.8    3.6    1.7
     83     215    171
    

    encodes the same vector. This remark gives enough information to reconstruct the algorithm for the scalar product. Given two sparse vectors encoded by the data int[] indices, double[] values and int[] jndices, double[] walues, one would calculate their scalar product along the lines of this "code":

    double sum = 0;
    for (int i = 0; i < indices.length; i++)
    {
        for (int j = 0; j < jndices.length; j++)
        {
            // only coordinates present in both vectors contribute
            if (indices[i] == jndices[j])
            {
                sum += values[i] * walues[j];
            }
        }
    }
    

    which gives us a total of about indices.length * jndices.length * 2 + indices.length conditional branches. This means that just to keep up with the dense algorithm, your vectors should have at most on the order of sqrt(N) nonzero entries. The point here is that the dependency on N is already nonlinear, so there is no point in asking whether you need 1%, 10% or 25% filling: 10% is perfect for vectors of length 10, still sort of OK for length 50, and already a total ruin for length 100.

    UPD. In this code snippet you also have an if branch, and the probability of taking the wrong path there is roughly 50%. Thus, a scalar product of two sparse vectors will cost roughly (0.5 to 1 times the average number of nonzero entries per sparse vector) pipeline restarts, depending on how sparse your vectors are. The numbers would need adjusting: in an if statement without an else, the predicted path is typically the short "do nothing" one, but still.

    Note that the most efficient operation is a scalar product of a sparse and a dense vector. Given a sparse vector of indices and values and a dense vector dense, your code will look like

    double sum = 0;
    for (int i = 0; i < indices.length; i++)
    {
        // values[i] is the coordinate stored at position indices[i]
        sum += values[i] * dense[indices[i]];
    }
    

    i.e. you'll have indices.length conditional branches, which is good.

    UPD. Once again, I bet you'll have at most one pipeline restart per operation. Note also that modern processors execute speculatively past a predicted branch, so a correctly predicted branch costs close to nothing and only a misprediction forces a pipeline flush.

    Now, when multiplying a matrix by a vector, you basically take #rows scalar products of vectors. Multiplying a matrix by a matrix amounts to taking #(nonzero columns in the second matrix) matrix-by-vector multiplications. You are welcome to work out the complexity yourself; a row-by-row sketch follows below.
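
    To make the "one scalar product per row" picture concrete, here is a deliberately naive MATLAB sketch (the matrix and vector are placeholders; MATLAB's built-in A*x does the same job far more efficiently in compiled code):

    A = sprand(3000, 3000, 0.001);         % placeholder sparse matrix
    x = rand(3000, 1);                     % placeholder dense vector
    y = zeros(size(A, 1), 1);
    for r = 1:size(A, 1)
        [~, cols, vals] = find(A(r, :));   % nonzero columns and values of row r
        y(r) = vals(:).' * x(cols(:));     % one sparse-by-dense scalar product per row
    end
    norm(y - A*x)                          % agrees with the built-in product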

    And this is where all the black-magic deep theory of different sparse matrix storage schemes begins. You may store your sparse matrix as a dense array of sparse rows, as a sparse array of dense rows, or as a sparse array of sparse rows; the same goes for columns. All the funny abbreviations from Scipy cited in the question have to do with that.

    You will "always" have an advantage in speed if you multiply a matrix built of sparse rows with a dense matrix, or a matrix of dense columns. You may want to store your sparse matrix data as dense vectors of diagonals - so in the case of convolution neural networks - and then you'll need completely different algorithms. You may want to make your matrix a block matrix - so does BLAS - and get a reasonable computation boost. You may want to store your data as two matrices - say, a diagonal and a sparse, which is the case for finite element method. You could make use of sparsity for general neural networks (like. fast forward, extreme learning machine or echo state network) if you always multiply a row stored matrix by a column vector, but avoid multiplying matrices. And, you will "always" get an advantage by using sparse matrices if you follow the rule of thumb - it holds for finite element and convolution networks, but fails for reservoir computing.

  • 2021-01-04 03:50

    I am not an expert in using sparse matrices; however, MathWorks does have some documentation on the operations and their computational efficiency.

    Their computational complexity description:

    The computational complexity of sparse operations is proportional to nnz, the number of nonzero elements in the matrix. Computational complexity also depends linearly on the row size m and column size n of the matrix, but is independent of the product m*n, the total number of zero and nonzero elements.

    The complexity of fairly complicated operations, such as the solution of sparse linear equations, involves factors like ordering and fill-in, which are discussed in the previous section. In general, however, the computer time required for a sparse matrix operation is proportional to the number of arithmetic operations on nonzero quantities.
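
    A hedged way to see that nnz dependence in practice is to compare two sparse matrices of very different dimensions but a similar number of nonzeros; the sizes below are arbitrary, and timings will vary by machine:

    A1 = sprand(1e4, 1e4, 1e-3);  x1 = rand(1e4, 1);   % ~1e5 nonzeros
    A2 = sprand(1e5, 1e5, 1e-5);  x2 = rand(1e5, 1);   % ~1e5 nonzeros, 100x larger m*n
    tic; y1 = A1 * x1; toc                             % both products should take
    tic; y2 = A2 * x2; toc                             % broadly comparable time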

    Without boring you with the algorithmic details, another answer suggests you shouldn't bother with sparse for an array that is only 25% nonzero. They offer some code for you to test; see their post for details.

    A = sprand(2000,2000,0.25);
    tic,B = A*A;toc
    Elapsed time is 1.771668 seconds.
    
    Af = full(A);
    tic,B = Af*Af;toc
    Elapsed time is 0.499045 seconds.
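
    For comparison, at much lower densities the picture typically flips; a sketch you can run yourself (actual timings depend on your machine and MATLAB version):

    A = sprand(2000,2000,0.01);    % ~1% nonzero instead of 25%
    tic,B = A*A;toc                % sparse times sparse

    Af = full(A);
    tic,B = Af*Af;toc              % full times full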
    
  • 2021-01-04 03:55

    Many operations on full matrices use BLAS/LAPACK library calls that are insanely optimized and tough to beat. In practice, operations on sparse matrices will only outperform those on full matrices in specialized situations that can sufficiently exploit (i) sparsity and (ii) special matrix structure.

    Just randomly using sparse will probably make you worse off. Example: which is faster, adding a 10000x10000 full matrix to a 10000x10000 full matrix, or adding a 10000x10000 full matrix to an entirely sparse (i.e. all-zero) 10000x10000 matrix? Try it! On my system, full + full is faster!
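
    If you want to reproduce that comparison, here is a hedged sketch (note that each 10000x10000 full double matrix takes roughly 800 MB, so shrink n if memory is tight):

    n = 10000;
    F = rand(n);                  % full matrix
    Z = sparse(n, n);             % entirely sparse (all-zero) matrix
    tic; S1 = F + F; toc          % full + full
    tic; S2 = F + Z; toc          % full + sparse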

    What are some examples of situations where sparse CRUSHES full?

    Example 1: solving the linear system A*x = b where A is 5000x5000 but is a block-diagonal matrix made up of 1000 5x5 blocks. Setup code:

    As = sparse(rand(5, 5));
    for i = 1:999
       As = blkdiag(As, sparse(rand(5, 5)));
    end                          % As is made up of 1000 5x5 blocks along the diagonal
    Af = full(As); b = rand(5000, 1);
    

    Then you can test the speed difference:

    As \ b % operation on sparse As takes .0012 seconds
    Af \ b % solving with full Af takes about 2.3 seconds
    

    In general, a 5000-variable linear system is somewhat difficult, but 1000 separate 5-variable linear systems are trivial. The latter is essentially what gets solved in the sparse case; the sketch below illustrates the point.
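
    As a hedged illustration of that point, reusing As, Af and b from the setup above: solving the 1000 diagonal blocks one at a time is essentially the work the sparse solve reduces to here.

    x = zeros(5000, 1);
    for k = 1:1000
        idx = (5*k-4):(5*k);              % rows/columns of the k-th 5x5 block
        x(idx) = Af(idx, idx) \ b(idx);   % tiny independent dense solve
    end
    norm(As*x - b)                        % residual should be near machine precision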

    The overall story is that if you have special matrix structure and can cleverly exploit sparsity, it's possible to solve insanely large problems that otherwise would be intractable. If you have a specialized problem that is sufficiently large, have a matrix that is sufficiently sparse, and are clever with linear algebra (so as to preserve sparsity), a sparse typed matrix can be extremely powerful.

    On the other hand, randomly throwing in sparse without deep, careful thought is almost certainly going to make your code slower.
