Parallel computing in Octave on a single machine — package and example

Submitted by Deadly on 2019-12-20 19:05:51

Question


I would like to parallelize a for loop in Octave on a single machine (as opposed to a cluster). I asked a question about a parallel version of Octave a while ago: parallel computing in octave

The answer suggested that I download a parallel computing package, which I did. The package seems largely geared toward cluster computing; it does mention single-machine parallel computing, but it is not clear on how to run even a simple parallel loop.

I also found another question on SO about this, but it does not give a good answer for parallelizing loops in Octave: Running portions of a loop in parallel with Octave?

Does anyone know where I can find an example of running a for loop in parallel in Octave?


Answer 1:


I am computing a large number of RGB histograms and need explicit loops to do it, so computing each histogram takes noticeable time. For this reason, running the computations in parallel makes sense. Octave has an (experimental) function parcellfun, written by Jaroslav Hajek, that can be used to do this.

My original loop

histograms = zeros(size(files,2), bins^3);
% calculate histogram for each image
for c = 1 : size(files,2)
  I = imread(fullfile(dir, files{c}));
  h = myhistRGB(I, bins);
  histograms(c, :) = h(:); % change to 1D vector
end

To use parcellfun, I need to refactor the body of my loop into a separate function.

function histogram = loadhistogramp(file)
  I = imread(fullfile('.', file));
  h = myhistRGB(I, 8);
  histogram = h(:); % change to 1D vector
end

Then I can call it like this:

histograms = parcellfun(8, @loadhistogramp, files);
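
If you'd rather not hard-code the number of subprocesses, one variation (a minimal sketch, assuming an Octave recent enough to provide nproc) is to ask Octave how many cores are available:

nworkers = nproc ();   % number of available cores
histograms = parcellfun (nworkers, @loadhistogramp, files);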

I did a small benchmark on my computer, which has 4 physical cores with Intel Hyper-Threading enabled.

My original code

tic(); histograms2 = loadhistograms('images.txt', 8); toc();
warning: your version of GraphicsMagick limits images to 8 bits per pixel
Elapsed time is 107.515 seconds.

With parcellfun

octave:1> pkg load general; tic(); histograms = loadhistogramsp('images.txt', 8); toc();
parcellfun: 0/178 jobs done
warning: your version of GraphicsMagick limits images to 8 bits per pixel
warning: your version of GraphicsMagick limits images to 8 bits per pixel
warning: your version of GraphicsMagick limits images to 8 bits per pixel
warning: your version of GraphicsMagick limits images to 8 bits per pixel
warning: your version of GraphicsMagick limits images to 8 bits per pixel
warning: your version of GraphicsMagick limits images to 8 bits per pixel
warning: your version of GraphicsMagick limits images to 8 bits per pixel
warning: your version of GraphicsMagick limits images to 8 bits per pixel
parcellfun: 178/178 jobs done
Elapsed time is 29.02 seconds.

The results from the parallel and serial versions were the same (only transposed):

octave:6> sum(sum((histograms'.-histograms2).^2))
ans = 0

When I repeated this several times, the running times were pretty much the same. The parallel version ran in around 30 seconds (± approx. 2 s) with 4, 8, and also 16 subprocesses.




Answer 2:


Octave loops are slow, slow, slow, and you're far better off expressing things in terms of array-wise operations. Let's take the example of evaluating a simple trig function over a 2D domain, as in this 3D Octave graphics example (but with a more realistic number of points for computation, as opposed to plotting):

vectorized.m:

tic()
x = -2:0.01:2;
y = -2:0.01:2;
[xx,yy] = meshgrid(x,y);
z = sin(xx.^2-yy.^2);
toc()

Converting it to for loops gives us forloops.m:

tic()
x = -2:0.01:2;
y = -2:0.01:2;
z = zeros(401,401);
for i=1:401
    for j=1:401
        lx = x(i);
        ly = y(j);
        z(i,j) = sin(lx^2 - ly^2);
    endfor        
endfor
toc()

Note that the vectorized version already "wins" by being simpler and clearer to read, but there's another important advantage, too: the timings are dramatically different:

$ octave --quiet vectorized.m 
Elapsed time is 0.02057 seconds.

$ octave --quiet forloops.m 
Elapsed time is 2.45772 seconds.

So if you were using for loops and you had perfect parallelism with no overhead, you'd have to spread this over about 119 processors (2.45772 s / 0.02057 s ≈ 119) just to break even with the non-loop version!

Don't get me wrong, parallelism is great, but first get things working efficiently in serial.

Almost all of Octave's built-in functions are already vectorized in the sense that they operate equally well on scalars or entire arrays, so it's often easy to convert things to array operations instead of doing things element by element. For those times when it's not so easy, you'll generally find that utility functions (like meshgrid, which generates a 2D grid from the Cartesian product of two vectors) already exist to help you.
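
As a minimal sketch of what that conversion looks like (with a made-up vector v), an explicit element-by-element loop over a built-in function can usually be replaced by a single array expression:

v = rand(1, 1e6);            % example data

% element-by-element (slow):
w1 = zeros(size(v));
for k = 1:numel(v)
  w1(k) = exp(v(k));
end

% vectorized equivalent (fast) -- exp operates on the whole array at once:
w2 = exp(v);                 % w1 and w2 hold the same values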




Answer 3:


Examples of pararrayfun usage can now be found here: http://wiki.octave.org/Parallel_package
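
For a quick taste (a minimal sketch, assuming the parallel package is installed; pararrayfun mirrors arrayfun with the number of worker processes as its first argument, so see the wiki page above for authoritative details):

pkg load parallel                                    % provides pararrayfun
nworkers = nproc ();                                 % one worker per available core
squares = pararrayfun (nworkers, @(k) k^2, 1:1000);  % runs the anonymous function in parallel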



Source: https://stackoverflow.com/questions/10520495/parallel-computing-in-octave-on-a-single-machine-package-and-example
