Question
I am implementing a parallel processing system which will eventually be deployed on a cluster, but I'm having trouble working out how the various methods of parallel processing interact.
I need to use a for loop to run a big block of code that contains several operations on large lists of matrices. To speed this up, I want to parallelise the for loop with foreach() and parallelise the list operations with mclapply().
example pseudocode:
library(doParallel)  # also loads foreach and parallel

cl <- makeCluster(2)
registerDoParallel(cl)

outputs <- foreach(k = 1:2, .packages = "various packages") %dopar% {
  # fun, l_input1 and l_input2 stand in for the real function and inputs
  l_output1 <- mclapply(l_input1, fun, mc.cores = 2)
  l_output2 <- mclapply(l_input2, fun, mc.cores = 2)
  mapply(cbind, l_output1, l_output2, SIMPLIFY = FALSE)
}

stopCluster(cl)
This seems to work. My questions are:
1) Is this a reasonable approach? The two seem to work together in my small-scale tests, but it feels a bit kludgy.
2) How many cores/processors will it use at any given time? When I scale this up to a cluster, I will need to understand how far I can push it (the foreach only loops 7 times, but the mclapply lists contain up to 70 or so big matrices). It appears to create 6 "cores" as written (presumably 2 for the foreach and 2 for each mclapply).
Answer 1:
I think it's a very reasonable approach on a cluster because it allows you to use multiple nodes while still using the more efficient mclapply across the cores of the individual nodes. It also allows you to do some of the post-processing on the workers (calling cbind in this case), which can significantly improve performance.
On a single machine, your example will create a total of 10 additional processes: two by makeCluster, each of which calls mclapply twice (2 + 2(2 + 2)). However, only four of them should use any significant CPU time at once. You could reduce that to eight processes by restructuring the functions called by mclapply so that you only need to call mclapply once in the foreach loop, which may be more efficient.
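One possible shape for that restructuring (a sketch only; fun1, fun2 and the pairing step are illustrative placeholders, not the poster's actual code) is to zip the two input lists together so that each fork handles a whole pair, including the cbind, in a single mclapply call:

outputs <- foreach(k = 1:2, .packages = "various packages") %dopar% {
  # pair up corresponding elements of the two input lists
  pairs <- mapply(list, l_input1, l_input2, SIMPLIFY = FALSE)
  # one mclapply call per foreach iteration; each fork processes one pair
  mclapply(pairs, function(p) cbind(fun1(p[[1]]), fun2(p[[2]])), mc.cores = 2)
}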
On multiple machines, you will create the same number of processes, but only two processes per node will use much CPU time at a time. Since they are spread out across multiple machines it should scale well.
Be aware that mclapply may not play nicely if you use an MPI cluster. MPI doesn't like you to fork processes, as mclapply does. It may just issue some stern warnings, but I've also seen other problems, so I'd suggest using a PSOCK cluster, which uses ssh to launch the workers on the remote nodes, rather than using MPI.
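As a sketch of what that could look like (the node names here are placeholders, not from the original post), a PSOCK cluster is started by passing the host names to makeCluster and letting it launch one worker per name over ssh:

library(doParallel)

# hypothetical host names; one PSOCK worker is launched on each via ssh
cl <- makeCluster(c("node1", "node2"), type = "PSOCK")
registerDoParallel(cl)

# ... run the foreach/mclapply code as before ...

stopCluster(cl)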
Update
It looks like there is a problem calling mclapply from cluster workers created by the "parallel" and "snow" packages. For more information, see my answer to a problem report.
Source: https://stackoverflow.com/questions/34704733/parallel-r-with-foreach-and-mclapply-at-the-same-time