Question
I have two for loops running in my MATLAB code. The inner loop is parallelized using matlabpool with 12 workers (the maximum MATLAB allows on a single machine).
I don't have a Distributed Computing license. Please help me do this using Octave or Scilab. I just want to parallelize the 'for' loop ONLY.
The links I found while searching on Google are broken.
Answer 1:
parfor is not really implemented in Octave yet. The keyword is accepted, but it is merely a synonym for for (http://octave.1599824.n4.nabble.com/Parfor-td4630575.html).
The pararrayfun and parcellfun functions of the parallel package are handy on multicore machines. They are often a good replacement for a parfor loop. For examples, see http://wiki.octave.org/Parallel_package. To install the package, issue (just once)

pkg install -forge parallel

and then, once per session, run

pkg load parallel

before using the functions.
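
A minimal sketch of the pararrayfun route, assuming the parallel package has been installed and loaded as above (the squaring function and the worker count of 4 are just placeholders, not anything prescribed by the package):

# Each call of fun must be independent of the others, which is the same
# assumption a parfor body has to satisfy.
fun = @(i) i^2;               # stand-in for the work done in one loop iteration
x = 1:100;
y = pararrayfun (4, fun, x);  # 4 = number of worker processes

parcellfun works the same way but operates on cell arrays, like cellfun does.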
Answer 2:
In Scilab you can use parallel_run:
function a=g(arg1)
a=arg1*arg1
endfunction
res=parallel_run(1:10, g);
Limitations:
- Uses only one core on Windows platforms.
- For now, parallel_run only handles arguments and results that are matrices of real scalar values; the types argument is not used.
- One should not rely on side effects such as modifying variables from the outer scope: only the data stored in the result variables is copied back into the calling environment.
- Macros called by parallel_run are not allowed to use the JVM.
- No stack resizing (via gstacksize() or stacksize()) should take place during a call to parallel_run.
Answer 3:
In GNU Octave you can use the parfor construct:

parfor i = 1:10
  # do stuff that may run in parallel
endparfor

For more info: help parfor
Answer 4:
To see a list of free and open-source alternatives to MATLAB-SIMULINK, check its AlternativeTo page or my answer here. Specifically for SIMULINK alternatives, see this post.

Something you should consider is the difference between vectorized, parallel, concurrent, asynchronous and multithreaded computing. Without going much into the details, vectorized programming is a way to avoid ugly for-loops. For example, the map function and list comprehensions in Python are vectorized computation. It is about the way you write the code, not necessarily how the computer handles it. Parallel computation, mostly used for GPU computing (data parallelism), is when you run a massive amount of arithmetic on big arrays using GPU computational units. There is also task parallelism, which mostly refers to running a task on multiple threads, each processed by a separate CPU core. Concurrent or asynchronous computing is when you have just one computational unit, but it does multiple jobs at the same time without blocking the processor unconditionally. Basically like a mom cooking, cleaning and taking care of her kid at the same time, but doing only one job at a time :)

Given the above description, there is a lot in the FOSS world for each one of these. For Scilab specifically, check this page. There is an MPI interface for distributed computation (multithreading/parallelism across multiple computers), OpenCL interfaces for GPU/data-parallel computation, and an OpenMP interface for multithreading/task parallelism. The feval function is not parallelism but a way to vectorize a conventional function. Scilab matrix arithmetic and parallel_run are vectorized or parallel depending on the platform, hardware and version of Scilab.
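
To make the vectorized-versus-loop distinction concrete, here is a minimal Octave sketch (the squaring operation is just a made-up illustration):

# A for-loop spells out each scalar operation explicitly and runs serially:
x = 1:10;
y1 = zeros (size (x));
for i = 1:numel (x)
  y1(i) = x(i)^2;
endfor

# The vectorized form expresses the same computation as one array expression;
# whether it runs as SIMD, threaded BLAS, or plain serial code is up to the
# interpreter and the platform.
y2 = x .^ 2;

Both produce identical results; the difference is in how the work is expressed and, potentially, how the runtime can schedule it.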
Source: https://stackoverflow.com/questions/24970519/how-to-use-parallel-for-loop-in-octave-or-scilab