Vectorize a product calculation which depends on previous elements?

小鲜肉 asked on 2020-12-04 22:23

I'm trying to speed up/vectorize some calculations in a time series. Can I vectorize a calculation in a for loop which can depend on results from an earlier iteration? For example:
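
(The loop below is a reconstruction based on the fun1/fun2 code in the answers, so treat the exact vector and values as assumptions rather than the original example.)

    z  <- c(1, 1, 0, 0, 0, 0)
    zi <- 2:length(z)
    for (i in zi) {
      # each element depends on the previous, already-updated element
      z[i] <- ifelse(z[i-1] == 1, 1, 0)
    }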

7 Answers
  • 2020-12-04 23:06

    It's also possible to do this with "apply" using the original vector and a lagged version of the vector as the constituent columns of a data frame.
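
    A rough sketch of that construction (the column names, the NA handling for the first element, and the one-pass lagged copy are my own assumptions, not code from the answer):

    z <- c(1, 1, 0, 0, 0, 0)
    d <- data.frame(current = z, previous = c(NA, head(z, -1)))  # lag-1 copy of z
    apply(d, 1, function(row) {
      if (is.na(row[["previous"]])) row[["current"]]   # first element has nothing to lag
      else if (row[["previous"]] == 1) 1 else 0
    })

    Note that this applies the rule to the original lagged values in a single pass rather than feeding updated values forward.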

  • 2020-12-04 23:07

    Sometimes you just need to think about it totally differently. What you're doing here is building a vector where every element equals the first element if that first element is 1, and is 0 otherwise.

    z <- c(1,1,0,0,0,0)
    if (z[1] != 1) z[1] <- 0   # normalise the first element to either 1 or 0
    z[2:length(z)] <- z[1]     # every later element simply copies the first
    
  • 2020-12-04 23:13

    This is a nice and simple example where Rcpp can shine.

    So let us first set up functions 1 and 2 along with their byte-compiled counterparts:

    library(inline)
    library(rbenchmark)
    library(compiler)
    
    fun1 <- function(z) {
      for(i in 2:NROW(z)) {
        z[i] <- ifelse(z[i-1]==1, 1, 0)
      }
      z
    }
    fun1c <- cmpfun(fun1)
    
    
    fun2 <- function(z) {
      for(i in 2:NROW(z)) {
        z[i] <- if(z[i-1]==1) 1 else 0
      }
      z
    }
    fun2c <- cmpfun(fun2)
    

    We can write an Rcpp variant very easily:

    funRcpp <- cxxfunction(signature(zs="numeric"), plugin="Rcpp", body="
      // for a numeric input, this wraps zs without copying (z is a proxy, not a copy)
      Rcpp::NumericVector z = Rcpp::NumericVector(zs);
      int n = z.size();
      for (int i=1; i<n; i++) {
        // same recursion as the R loop: carry a 1 forward, otherwise write 0
        z[i] = (z[i-1]==1.0 ? 1.0 : 0.0);
      }
      return(z);
    ")
    

    This uses the inline package to compile, load and link the five-liner on the fly.
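
    With a current Rcpp, an essentially equivalent kernel can also be compiled via Rcpp::cppFunction instead of inline (the function name funRcpp2 below is my own, and a reasonably recent Rcpp installation is assumed):

    library(Rcpp)
    cppFunction('
      NumericVector funRcpp2(NumericVector z) {
        int n = z.size();
        for (int i = 1; i < n; i++) {
          z[i] = (z[i-1] == 1.0 ? 1.0 : 0.0);  // same carry-the-one-forward recursion
        }
        return z;
      }
    ')

    As with the inline version, for a double-precision input the NumericVector is a proxy rather than a copy, so the caller's vector is modified in place.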

    Now we can define our test data, which we make a little longer than the original (as running the original vector only a few times results in unmeasurably short times):

    R> z <- rep(c(1,1,0,0,0,0), 100)
    R> identical(fun1(z), fun2(z)) && identical(fun2(z), fun1c(z)) &&
    +    identical(fun1c(z), fun2c(z)) && identical(fun2c(z), funRcpp(z))
    [1] TRUE
    R> 
    

    All five variants are seen to return identical results.

    Finally, we can benchmark:

    R> res <- benchmark(fun1(z), fun2(z),
    +                  fun1c(z), fun2c(z),
    +                  funRcpp(z),
    +                  columns=c("test", "replications", "elapsed", 
    +                            "relative", "user.self", "sys.self"),
    +                  order="relative",
    +                  replications=1000)
    R> print(res)
            test replications elapsed relative user.self sys.self
    5 funRcpp(z)         1000   0.005      1.0      0.01        0
    4   fun2c(z)         1000   0.466     93.2      0.46        0
    2    fun2(z)         1000   1.918    383.6      1.92        0
    3   fun1c(z)         1000  10.865   2173.0     10.86        0
    1    fun1(z)         1000  12.480   2496.0     12.47        0
    

    The compiled version wins by a factor of almost 400 over the best plain R version (fun2), and by almost 100 over its byte-compiled variant (fun2c). For function 1, byte compilation matters much less and both variants trail the C++ version by a factor of well over two thousand.

    It took about one minute to write the C++ version. The speed gain suggests it was a minute well spent.

    For comparison, here is the result for the original short vector called more often:

    R> z <- c(1,1,0,0,0,0)
    R> res2 <- benchmark(fun1(z), fun2(z),
    +                  fun1c(z), fun2c(z),
    +                  funRcpp(z),
    +                  columns=c("test", "replications", 
    +                            "elapsed", "relative", "user.self", "sys.self"),
    +                  order="relative",
    +                  replications=10000)
    R> print(res2)
            test replications elapsed  relative user.self sys.self
    5 funRcpp(z)        10000   0.046  1.000000      0.04        0
    4   fun2c(z)        10000   0.132  2.869565      0.13        0
    2    fun2(z)        10000   0.271  5.891304      0.27        0
    3   fun1c(z)        10000   1.045 22.717391      1.05        0
    1    fun1(z)        10000   1.202 26.130435      1.20        0
    

    The qualitative ranking is unchanged: the Rcpp version dominates and function 2 is second-best, with its byte-compiled version about twice as fast as the plain R variant but still almost three times slower than the C++ version. The relative differences are smaller here, though: on the longer vectors the function-call overhead matters less and the actual looping matters more, and that is where C++ gets its bigger advantage. This is an important result, as it suggests that on more realistically sized data the compiled version may reap an even larger benefit.

    Edited to correct two small oversights in the code examples. And edited again, with thanks to Josh for catching a setup error relative to fun2c.

  • 2020-12-04 23:16

    Check out the rollapply function in zoo.

    I'm not super familiar with it, but I think this does what you want:

    > c( 1, rollapply(z,2,function(x) x[1]) )
    [1] 1 1 1 1 1 1
    

    I'm sort of kludging it by using a window of 2 and then only using the first element of that window.

    For more complicated examples you could perform some calculation on x[1] and return that instead.
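
    For instance, a variation of the same kludge that applies a (purely illustrative) calculation to x[1] in each window:

    library(zoo)
    z <- c(1, 1, 0, 0, 0, 0)
    # width-2 windows, but only the first element of each window is used
    c(z[1], rollapply(z, 2, function(x) as.numeric(x[1] == 1)))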

  • 2020-12-04 23:17

    There is a function that does this particular calculation: cumprod (cumulative product)

    > cumprod(z[zi])
    [1] 1 0 0 0 0
    
    > cumprod(c(1,2,3,4,0,5))
    [1]  1  2  6 24  0  0
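
    To spell the pattern out: a recursion of the form y[i] <- y[i-1] * x[i] is exactly what cumprod computes, so the whole loop collapses into one vectorized call (the names x and y below are illustrative):

    x <- c(1, 2, 3, 4, 0, 5)
    y <- numeric(length(x))
    y[1] <- x[1]
    for (i in 2:length(x)) y[i] <- y[i-1] * x[i]  # explicit product recursion
    identical(y, cumprod(x))                      # TRUE: cumprod is the vectorized form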
    

    Otherwise, vectorize with Rcpp as other answers have shown.

  • 2020-12-04 23:21

    ifelse is vectorized and there's a bit of a penalty if you're using it on one element at a time in a for-loop. In your example, you can get a pretty good speedup by using if instead of ifelse.

    fun1 <- function(z) {
      for(i in 2:NROW(z)) {
        z[i] <- ifelse(z[i-1]==1, 1, 0)
      }
      z
    }
    
    fun2 <- function(z) {
      for(i in 2:NROW(z)) {
        z[i] <- if(z[i-1]==1) 1 else 0
      }
      z
    }
    
    z <- c(1,1,0,0,0,0)
    identical(fun1(z),fun2(z))
    # [1] TRUE
    system.time(replicate(10000, fun1(z)))
    #   user  system elapsed 
    #   1.13    0.00    1.32
    system.time(replicate(10000, fun2(z)))
    #   user  system elapsed 
    #   0.27    0.00    0.26 
    

    You can get some additional speed gains out of fun2 by compiling it.

    library(compiler)
    cfun2 <- cmpfun(fun2)
    system.time(replicate(10000, cfun2(z)))
    #   user  system elapsed 
    #   0.11    0.00    0.11
    

    So there's a 10x speedup without vectorization. As others have said (and some have illustrated) there are ways to vectorize your example, but that may not translate to your actual problem. Hopefully this is general enough to be applicable.

    The filter function may be useful to you as well if you can figure out how to express your problem in terms of an autoregressive or moving-average process.
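
    For example, a linear recursion of the form y[i] <- a * y[i-1] + x[i] maps directly onto filter() with method = "recursive" (the coefficient and input below are illustrative assumptions):

    x <- c(1, 2, 3, 4, 5)
    a <- 0.5                                    # illustrative AR(1) coefficient
    y_filter <- stats::filter(x, filter = a, method = "recursive")

    # the equivalent explicit loop, for comparison
    y_loop <- numeric(length(x))
    y_loop[1] <- x[1]
    for (i in 2:length(x)) y_loop[i] <- a * y_loop[i-1] + x[i]

    all.equal(as.numeric(y_filter), y_loop)     # TRUE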
