I have a huge matrix; as an example, take one of size (1*1000000). I am currently using a simple loop, but I would prefer not to use a loop.
If you want to vectorise it, you need to know which indices of a you would be using at each iteration. For example, the term a(ii+1), with ii iterating from 2 to 999999, means you're using the elements of a from index 3 to the last; work that out similarly for the other terms. Then just do the element-wise division with ./. A 0 is manually placed at the start since, in your code, you didn't explicitly store anything at the first index, and zero is what automatically gets stored when you skip an index.
k = [0 abs(a(3:end)-2*a(2:end-1)+a(1:end-2)) ./ (a(3:end)+2*a(2:end-1)+a(1:end-2))];
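As a quick sanity check, here is a minimal sketch on a small vector comparing the vectorised expression against a loop; the loop body is an assumption reconstructed from the vectorised formula above, not the exact code from the original post:

% Assumed reconstruction of the original loop, grown without preallocation as in the post
a = rand(1, 10);
clear k_loop
for ii = 2:numel(a) - 1
    k_loop(ii) = abs(a(ii+1) - 2*a(ii) + a(ii-1)) / (a(ii+1) + 2*a(ii) + a(ii-1));
end

% Vectorised version from above
k_vec = [0 abs(a(3:end) - 2*a(2:end-1) + a(1:end-2)) ./ (a(3:end) + 2*a(2:end-1) + a(1:end-2))];

isequal(k_loop, k_vec)   % expected: logical 1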
Performance, timed with timeit on my system running R2017a, with a = rand(1,1e8):
Orig_Post = 14.3219
Orig_Post_with_Preallocation = 1.7764
Vectorised = 5.3292
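For reference, a rough sketch of how such timings can be collected with timeit; orig_loop, prealloc_loop and vectorised are hypothetical wrapper functions around the three variants, each taking a and returning k:

a = rand(1, 1e8);
t1 = timeit(@() orig_loop(a));       % loop without preallocation
t2 = timeit(@() prealloc_loop(a));   % loop with k pre-allocated
t3 = timeit(@() vectorised(a));      % the one-liner from above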
So it can be seen that loops have improved significantly in the newer MATLAB versions. It turns out that the loop solution with properly pre-allocated memory for k is much faster than the vectorised one. The reduced performance you're experiencing is due to the lack of preallocation (as Cris Luengo already suggested). To pre-allocate, write k = zeros(1, size(a,2)-1); before the loop.
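Put together, the pre-allocated loop would then look roughly like this (again assuming the loop body matches the expression vectorised above):

k = zeros(1, size(a,2) - 1);   % pre-allocate once, before the loop
for ii = 2:numel(a) - 1
    k(ii) = abs(a(ii+1) - 2*a(ii) + a(ii-1)) / (a(ii+1) + 2*a(ii) + a(ii-1));
end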