This question pops up quite often in one form or another (see for example here or here). So I thought I'd present it in a general form, and provide an answer which might serve as a reference in the future.
I've done some benchmarking on the two proposed solutions. The benchmarking code is based on the timeit function, and is included at the end of this post.
I consider two cases: three vectors of size n, and three vectors of sizes n/10, n and n*10 respectively (both cases give the same number of combinations). n is varied up to a maximum of 240 (I chose this value to avoid the use of virtual memory in my laptop computer).
The results are given in the following figure. The ndgrid-based solution is seen to consistently take less time than combvec. It's also interesting to note that the time taken by combvec varies a little less regularly in the different-size case.
Benchmarking code
Function for the ndgrid-based solution:
function combs = f1(vectors)
n = numel(vectors); %// number of vectors
combs = cell(1,n); %// pre-define to generate comma-separated list
[combs{end:-1:1}] = ndgrid(vectors{end:-1:1}); %// the reverse order in these two
%// comma-separated lists is needed to produce the rows of the result matrix in
%// lexicographical order
combs = cat(n+1, combs{:}); %// concat the n n-dim arrays along dimension n+1
combs = reshape(combs,[],n);
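As a quick sanity check, here is a small illustrative call (the input vectors are arbitrary, chosen by me just to show the output shape and row order):

```matlab
vectors = {1:2, 0:1};  %// two small example vectors
combs = f1(vectors)
%// combs =
%//      1     0
%//      1     1
%//      2     0
%//      2     1
```

Each row is one combination, and the rows appear in lexicographical order, as the comments in f1 indicate.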
Function for the combvec solution:
function combs = f2(vectors)
combs = combvec(vectors{:}).'; %// combvec requires the Neural Network Toolbox
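A small illustrative call with the same kind of input. Note that combvec enumerates combinations with its first input varying fastest, so f2 returns the same rows as the ndgrid-based version but in a different row order:

```matlab
combs = f2({1:2, 0:1})
%// combs =
%//      1     0
%//      2     0
%//      1     1
%//      2     1
```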
Script to measure time by calling timeit on these functions:
nn = 20:20:240;
t1 = [];
t2 = [];
for n = nn
    %// vectors = {1:n, 1:n, 1:n}; %// equal-size case
    vectors = {1:n/10, 1:n, 1:n*10}; %// different-size case
    t = timeit(@() f1(vectors));
    t1 = [t1; t];
    t = timeit(@() f2(vectors));
    t2 = [t2; t];
end
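The timing figure can then be generated along these lines (the plot styling here is my own choice, not prescribed by the benchmark):

```matlab
figure
plot(nn, t1, 'o-', nn, t2, 's-')  %// one curve per solution
xlabel('n')
ylabel('time (s)')
legend('ndgrid-based (f1)', 'combvec (f2)', 'Location', 'northwest')
```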