Problems using foreach parallelization

花落未央 2020-12-13 16:21

I'm trying to compare parallelization options. Specifically, I'm comparing the standard SNOW and multicore implementations to those using d

2 Answers
  • 2020-12-13 16:40

    To follow on something Joris said, foreach() is best when the number of jobs does not hugely exceed the number of processors you will be using. Or more generally, when each job takes a significant amount of time on its own (seconds or minutes, say). There is a lot of overhead in creating the threads, so you really don't want to use it for lots of small jobs. If you were doing 10 million sims rather than 10 thousand, and you structured your code like this:

    nSims <- 1e7
    nBatch <- 1e6
    # Hand each worker one big batch instead of one tiny job
    foreach(i = 1:(nSims / nBatch), .combine = c) %dopar% {
      replicate(nBatch, mean(rnorm(n = size, mean = mu, sd = sigma)))
    }
    
    

    I bet you would find that foreach was doing pretty well.

    Also note the use of replicate() for this kind of application rather than sapply(). Actually, the foreach package has a similar convenience function, times(), which could be applied in this case. Of course, if your code is not doing a simple simulation with identical parameters every time, you will need sapply() and foreach().
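    As a small sketch of that times() convenience (assuming the foreach package is installed; %do% is used here so no parallel backend needs to be registered):

```r
library(foreach)

# times(n) behaves roughly like foreach(icount(n), .combine = c):
# evaluate the expression n times and combine the results.
size <- 100; mu <- 0; sigma <- 1
res <- times(1000) %do% mean(rnorm(n = size, mean = mu, sd = sigma))
length(res)  # 1000 simulated means
```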

  • 2020-12-13 16:51

    To start with, you could write your foreach code a bit more concisely:

    FECltSim <- function(nSims=1000, size=100, mu=0, sigma=1) {
      foreach(i=1:nSims, .combine=c) %dopar% {
        mean(rnorm(n=size, mean=mu, sd=sigma))
      }
    }
    

    This gives you a vector, so there is no need to build it explicitly within the loop. There is also no need for cbind, as each result is just a single number, so .combine=c will do.
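    To see what .combine buys you, a quick sketch (assuming foreach is available; %do% keeps it sequential):

```r
library(foreach)

# Without .combine, foreach collects the results in a list;
# .combine = c flattens them into a plain vector instead.
as_list <- foreach(i = 1:3) %do% i
as_vec  <- foreach(i = 1:3, .combine = c) %do% i
str(as_list)  # List of 3
str(as_vec)   # int [1:3] 1 2 3
```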

    The thing with foreach is that it creates quite a lot of overhead to communicate between the cores and to fit the results from the different cores together. A quick look at the profile shows this pretty clearly:

    $by.self
                             self.time self.pct total.time total.pct
    $                             5.46    41.30       5.46     41.30
    $<-                           0.76     5.75       0.76      5.75
    .Call                         0.76     5.75       0.76      5.75
    ...
    

    More than 40% of the time is spent in `$`, i.e. busy selecting things, and a lot of other functions are involved in the whole operation. foreach is really only advisable if you have relatively few iterations through very time-consuming functions.
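    A profile like the one above can be reproduced with base R's Rprof. A minimal sketch, where sim() is a hypothetical stand-in for the FECltSim function above, run sequentially with %do%:

```r
library(foreach)

# Stand-in simulation: enough iterations for the sampling
# profiler to collect samples of foreach's bookkeeping.
sim <- function(nSims = 2000, size = 100) {
  foreach(i = 1:nSims, .combine = c) %do% mean(rnorm(size))
}

Rprof("foreach_profile.out")   # start the sampling profiler
invisible(sim())
Rprof(NULL)                    # stop profiling
summaryRprof("foreach_profile.out")$by.self
```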

    The other two solutions are built on a different technology, and do far less work in R. On a side note, snow was originally developed to work on clusters rather than on single workstations, which is what multicore targets.
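    Both snow and multicore now live on in base R's parallel package. A sketch of the same simulation with mclapply, which forks the R process, so mc.cores > 1 only works on Unix-alikes:

```r
library(parallel)

size <- 100; mu <- 0; sigma <- 1
cores <- if (.Platform$OS.type == "unix") 2L else 1L  # forking is Unix-only

# One function call per simulation, with far less per-job
# bookkeeping in R than the foreach version.
sims <- mclapply(1:1000,
                 function(i) mean(rnorm(n = size, mean = mu, sd = sigma)),
                 mc.cores = cores)
sims <- unlist(sims)
length(sims)  # 1000
```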
