Hi, I am trying to use ddply from the plyr library in R, together with the doMC package. It doesn't seem to be speeding up the computation.
The mean
function operates too quickly relative to the communication costs required to distribute the split sections to each core and retrieve the results.
This is a common "problem" people run into with distributed computing. They expect it to make everything run faster because they forget there are costs (communication between the nodes) as well as benefits (using multiple cores).
Something specific to parallel processing in plyr: only the function is run on multiple cores. The splitting and combining is still done on a single core, so the function you're applying would have to be very computationally intensive to see a benefit when using plyr functions in parallel.
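As a rough illustration (a hypothetical dataset and a deliberately slow helper function, not code from the question), you can see the overhead by timing a cheap per-group summary against an expensive one:

# Illustrative timings (made-up data; not the code from the question)
library(plyr)
library(doMC)
registerDoMC(4)

df <- data.frame(x = rnorm(1e6), g = rep(1:20, each = 5e4))

# Cheap per-group function: the single-core split/combine and the
# communication overhead dominate, so .parallel = TRUE rarely helps
system.time(ddply(df, "g", summarise, m = mean(x)))
system.time(ddply(df, "g", summarise, m = mean(x), .parallel = TRUE))

# Expensive per-group function: enough work per group for extra cores to pay off
slow_summary <- function(d) {
  for (i in 1:200) fit <- lm(x ~ 1, data = d)   # artificially heavy work
  data.frame(m = coef(fit)[[1]])
}
system.time(ddply(df, "g", slow_summary))
system.time(ddply(df, "g", slow_summary, .parallel = TRUE))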
To build on Joshua's answer, there is a way to speed this operation up. It is inspired by the map-reduce approach, and I did a proof of concept on a sample dataset a while back.
I used the snowfall library; I believe you can work with doMC as well.
# On my phone, please pardon typos/bugs
library(snowfall)
library(plyr)

test <- data.frame(x = 1:1000000, y = rep(1:20, 50000))

testList <- list()
testList[[1]] <- test[1:250000, ]
testList[[2]] <- test[250001:500000, ]
testList[[3]] <- test[500001:750000, ]
testList[[4]] <- test[750001:1000000, ]
# TODO: write a function for the splitting above - need to find the optimum number of splits

sfInit(parallel = TRUE, cpus = 4)
sfLibrary(plyr)   # load plyr on each worker
meanList <- sfClusterApplyLB(testList, function(d) ddply(d, "y", summarise, x = mean(x)))
sfStop()

# Combine the per-chunk results and average them
combined <- do.call(rbind, meanList)
aggregate(x ~ y, data = combined, FUN = mean)
This might help you, given that we are now doing the split-combine routine in a distributed fashion. It works for means as long as the splits are the same size, and it works for sums, min/max, counts, etc., but there are some operations (medians and other quantiles, for example) that can't be combined this way.
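To see why equal-sized splits matter for means, here is a small illustration (plain R, separate from the snowfall code above):

x <- rnorm(1000)
equal   <- split(x, rep(1:4, each = 250))    # four chunks of 250
unequal <- split(x, rep(1:2, c(900, 100)))   # chunks of 900 and 100

mean(x)                        # true overall mean
mean(sapply(equal, mean))      # matches: chunks are the same size
mean(sapply(unequal, mean))    # does not match in general
weighted.mean(sapply(unequal, mean), sapply(unequal, length))  # weighting by chunk size fixes it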