MATLAB: need some help with a seemingly simple vectorization of an operation

Asked 2020-12-19 21:12

I would like to optimize this piece of MATLAB code, but so far I have failed. I have tried different combinations of repmat, sum, and cumsum, but all my attempts seem to …
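
The question's own code is not preserved in this copy. Judging from the answers below, it appears to have been a double loop over pairs of time indices of roughly this shape (a reconstruction, not the asker's exact code; X is assumed to be a T-by-S matrix, with T and S defined beforehand):

    % Reconstruction inferred from the answers below -- not the asker's exact code.
    % X is assumed to be T-by-S; T and S are assumed to be defined already.
    Result = zeros(S,1);
    for c = 1:T-1
       for cc = c+1:T
          d = (X(cc,:) - X(c,:)) - (cc-c)/T;   % row difference minus (cc-c)/T
          Result = Result + abs(d)';           % accumulate |d| per column of X
       end
    end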

3 Answers
  • 2020-12-19 21:35

    The nchoosek(v,k) function generates all combinations of the elements in v taken k at a time. We can use it to generate all possible pairs of indices and then vectorize both loops. It appears that in this case the vectorization doesn't actually improve performance (at least on my machine with R2017a). Maybe someone will come up with a more efficient approach.

    idx = nchoosek(1:T,2);       % all index pairs (c,cc) with c < cc
    d = bsxfun(@minus,(X(idx(:,2),:) - X(idx(:,1),:)), (idx(:,2)-idx(:,1))/T);
    Result = sum(abs(d),1)';     % sum |d| over all pairs, one value per column of X
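
    As an illustration (my addition, not part of the original answer), this is what the index matrix looks like for a small T:

    idx = nchoosek(1:4,2)
    % idx =
    %      1     2
    %      1     3
    %      1     4
    %      2     3
    %      2     4
    %      3     4
    % idx(:,1) plays the role of c and idx(:,2) the role of cc, with cc > c.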
    
  • 2020-12-19 21:50

    It is at least easy to vectorize your inner loop:

    Result=zeros(S,1);
    for c=1:T-1
       d=(X(c+1:T,:)-X(c,:))-((c+1:T)'-c)./T;   % handle all cc=c+1:T at once
       Result=Result+sum(abs(d),1)';            % sum over the cc dimension
    end
    

    Here, I'm using the new automatic singleton expansion. If you have an older version of MATLAB you'll need to use bsxfun for two of the subtraction operations. For example, X(c+1:T,:)-X(c,:) is the same as bsxfun(@minus,X(c+1:T,:),X(c,:)).
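
    For completeness, here is a sketch of the same loop written with bsxfun for pre-R2016b releases (my addition, not part of the original answer):

    Result=zeros(S,1);
    for c=1:T-1
       % both remaining subtractions need explicit singleton expansion on old releases
       d=bsxfun(@minus, bsxfun(@minus, X(c+1:T,:), X(c,:)), ((c+1:T)'-c)./T);
       Result=Result+sum(abs(d),1)';
    end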

    What is happening in this bit of code is that instead of looping over cc=c+1:T, we take all of those indices at once, so I simply replaced cc with c+1:T. d is then a matrix with multiple rows (9 in the first iteration, and one fewer in each subsequent iteration).

    Surprisingly, this is slower than the double loop, and similar in speed to Jodag's answer.

    Next, we can try to improve the indexing. Note that the code above extracts data row-wise from the matrix, but MATLAB stores data column-wise, so it's more efficient to extract a column than a row. Let's transpose X:

    X=X';                                       % work on the transposed, S-by-T matrix
    Result=zeros(S,1);
    for c=1:T-1
       d=(X(:,c+1:T)-X(:,c))-((c+1:T)-c)./T;    % column-wise access is cache-friendly
       Result=Result+sum(abs(d),2);             % now sum along the 2nd dimension
    end
    

    This is more than twice as fast as the code that indexes row-wise.

    But of course the same trick can be applied to the code in the question, speeding it up by about 50%:

    X=X';                          % again work on the transposed, S-by-T matrix
    Result=zeros(S,1);
    for c=1:T-1
       for cc=c+1:T
          d=(X(:,cc)-X(:,c))-(cc-c)/T;
          Result=Result+abs(d);
       end
    end
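
    As a quick sanity check (my addition; the small random example is an assumption), the transposed double loop agrees with the nchoosek-based version from the first answer to within round-off:

    T = 10; S = 5;                 % small sizes, just for the check
    X = rand(T,S);

    idx = nchoosek(1:T,2);         % vectorized version (R2016b+ for implicit expansion)
    R1  = sum(abs((X(idx(:,2),:) - X(idx(:,1),:)) - (idx(:,2)-idx(:,1))/T), 1)';

    Xt = X'; R2 = zeros(S,1);      % transposed double loop
    for c = 1:T-1
       for cc = c+1:T
          R2 = R2 + abs((Xt(:,cc)-Xt(:,c)) - (cc-c)/T);
       end
    end

    max(abs(R1-R2))                % should be on the order of eps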
    

    My takeaway message from this exercise is that MATLAB's JIT compiler has improved things a lot. Back in the day, any sort of loop would grind code to a halt. Today loops are not necessarily the worst approach, especially if all you do inside them is call built-in functions.

  • 2020-12-19 21:52

    Update: here are the running times for the different proposals (10^5 trials):

    So it looks like transposing the matrix is the most effective intervention, and my original double-loop implementation is, amazingly, faster than the vectorized versions. However, in my hands (R2017a) the improvement is only 16.6% compared to the original, using the mean (18.2% using the median).

    Maybe there is still room for improvement?
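
    For reference, here is a minimal sketch of how the variants can be timed with timeit (my addition; the sizes and random X are assumptions, not the data behind the timings quoted above, and it needs R2016b+ because the script uses a local function):

    T = 10; S = 1e5;               % assumed sizes, chosen only for timing
    X = rand(T,S);

    t = timeit(@() transposed_double_loop(X', T, S));
    fprintf('transposed double loop: %.4f s\n', t);

    function Result = transposed_double_loop(X, T, S)
       % X is expected to be S-by-T here (already transposed)
       Result = zeros(S,1);
       for c = 1:T-1
          for cc = c+1:T
             Result = Result + abs((X(:,cc)-X(:,c)) - (cc-c)/T);
          end
       end
    end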
