Speed up the loop operation in R

说谎 2020-11-22 00:04

I have a big performance problem in R. I wrote a function that iterates over a data.frame object. It simply adds a new column to a data.frame and a…

10 Answers
  •  渐次进展
    2020-11-22 00:24

    General strategies for speeding up R code

    First, figure out where the slow part really is. There's no need to optimize code that isn't running slowly. For small amounts of code, simply thinking it through can work. If that fails, Rprof and similar profiling tools can be helpful.
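
    As a minimal sketch (slow_function and its input are hypothetical stand-ins for your own code):

        Rprof("profile.out")              # start sampling the call stack
        result <- slow_function(input)    # the code you suspect is slow (placeholder)
        Rprof(NULL)                       # stop profiling
        summaryRprof("profile.out")       # report which functions dominate run time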

    Once you figure out the bottleneck, think about more efficient algorithms for doing what you want. Calculations should only be run once if possible, so (a short sketch follows this list):

    • Store the results and access them rather than repeatedly recalculating
    • Take non-loop-dependent calculations out of loops
    • Avoid calculations which aren't necessary (e.g. don't use regular expressions where fixed searches will do)
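
    As a sketch of the first two points, here a loop-invariant mean() is hoisted out of the loop and computed once:

        x   <- runif(1e5)
        out <- numeric(length(x))
        # Slow: mean(x) is recomputed on every pass through the loop
        for (i in seq_along(x)) out[i] <- x[i] / mean(x)
        # Faster: compute the invariant value once, outside the loop
        mx <- mean(x)
        for (i in seq_along(x)) out[i] <- x[i] / mx
        # (Fully vectorized, of course: out <- x / mx)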

    Using more efficient functions can produce moderate or large speed gains. For instance, paste0 produces a small efficiency gain but .colSums() and its relatives produce somewhat more pronounced gains. mean is particularly slow.
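
    A small sketch of such substitutions (the data here is made up for illustration):

        mat <- matrix(runif(1e6), nrow = 1000)
        # .colSums() skips the argument checking and dispatch of colSums()
        cs <- .colSums(mat, m = nrow(mat), n = ncol(mat))

        x  <- runif(1e6)
        m1 <- mean(x)             # pays S3 dispatch overhead
        m2 <- sum(x) / length(x)  # same value, faster in tight loops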

    Then you can avoid some particularly common troubles:

    • cbind will slow you down really quickly.
    • Initialize your data structures, then fill them in, rather than expanding them each time (see the sketch after this list).
    • Even with pre-allocation, you could switch to a pass-by-reference approach rather than a pass-by-value approach, but it may not be worth the hassle.
    • Take a look at the R Inferno for more pitfalls to avoid.
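
    A sketch of the pre-allocation point:

        n <- 1e4
        # Slow: each c() call copies everything accumulated so far
        res <- c()
        for (i in 1:n) res <- c(res, i^2)
        # Fast: allocate the full vector once, then fill it in place
        res <- numeric(n)
        for (i in 1:n) res[i] <- i^2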

    Try for better vectorization, which can often but not always help. In this regard, inherently vectorized commands like ifelse, diff, and the like will provide more improvement than the apply family of commands (which provide little to no speed boost over a well-written loop).
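
    For instance, a minimal sketch replacing an element-by-element loop with ifelse:

        x <- rnorm(1e6)
        # Loop version: interpreter overhead on every element
        y <- numeric(length(x))
        for (i in seq_along(x)) y[i] <- if (x[i] > 0) x[i] else 0
        # Vectorized version: a single call into compiled code
        y2 <- ifelse(x > 0, x, 0)
        # (pmax(x, 0) would be faster still for this particular case)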

    You can also try to provide more information to R functions. For instance, use vapply rather than sapply, and specify colClasses when reading in text-based data. Speed gains will be variable depending on how much guessing you eliminate.
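
    A sketch of both ideas ("data.csv" and its column layout are made up for illustration):

        vals <- list(a = c(1.5, 2.5), b = c(3.5, 4.5))
        s1 <- sapply(vals, sum)                          # must guess the result type
        s2 <- vapply(vals, sum, FUN.VALUE = numeric(1))  # type declared: faster, safer
        # Declaring column types up front skips type-guessing on read
        df <- read.csv("data.csv",
                       colClasses = c("integer", "numeric", "character"))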

    Next, consider optimized packages: The data.table package can produce massive speed gains where its use is possible, in data manipulation and in reading large amounts of data (fread).
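
    A minimal sketch, assuming a file "big.csv" with columns named group and value:

        library(data.table)
        dt <- fread("big.csv")   # typically far faster than read.csv
        # Grouped aggregation runs in optimized C, not interpreted R loops
        dt[, .(total = sum(value)), by = group]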

    Next, try for speed gains through more efficient means of calling R:

    • Compile your R script, or use the Ra and jit packages in concert for just-in-time compilation (Dirk has an example in his presentation; a byte-compilation sketch follows this list).
    • Make sure you're using an optimized BLAS. These provide across-the-board speed gains. Honestly, it's a shame that R doesn't automatically use the most efficient library on install. Hopefully Revolution R will contribute the work that they've done here back to the overall community.
    • Radford Neal has done a bunch of optimizations, some of which were adopted into R Core, and many others which were forked off into pqR.
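
    A byte-compilation sketch using the base compiler package:

        library(compiler)
        f <- function(x) {
          s <- 0
          for (xi in x) s <- s + xi
          s
        }
        fc <- cmpfun(f)    # byte-compiled version of f
        fc(runif(1e6))     # same result as f(), often faster on older R
        # (Since R 3.4, functions are JIT byte-compiled by default,
        # so the gain from an explicit cmpfun() is now usually small.)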

    And lastly, if all of the above still doesn't get you quite as fast as you need, you may need to move to a faster language for the slow code snippet. The combination of Rcpp and inline makes replacing only the slowest part of the algorithm with C++ code particularly easy. Here, for instance, is my first attempt at doing so, and it blows away even highly optimized R solutions.
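
    A minimal sketch of the idea, using Rcpp's cppFunction rather than the older inline workflow (the function shown is an illustrative stand-in, not the linked attempt):

        library(Rcpp)
        cppFunction('
          double sumSquares(NumericVector x) {
            double total = 0;
            for (int i = 0; i < x.size(); ++i) total += x[i] * x[i];
            return total;
          }')
        sumSquares(rnorm(1e5))   # callable like any R function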

    If you're still left with troubles after all of this, you just need more computing power. Look into parallelization (http://cran.r-project.org/web/views/HighPerformanceComputing.html) or even GPU-based solutions (gputools).
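
    A parallelization sketch with the base parallel package (slow_task is a stand-in workload):

        library(parallel)
        slow_task <- function(i) { Sys.sleep(0.01); i^2 }
        n_cores <- max(1L, detectCores() - 1L)
        # mclapply forks on Unix-alikes; on Windows use makeCluster()/parLapply()
        res <- mclapply(1:100, slow_task, mc.cores = n_cores)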

    Links to other guidance

    • http://www.noamross.net/blog/2013/4/25/faster-talk.html
