Out of memory when using `outer` in solving my big normal equation for least squares estimation


This is a least squares problem with 100,000 responses. Your `bigmatrix` is the response matrix, `beta` is the coefficient matrix, and `w1` is the residual matrix.
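To spell out what is being solved (a standard least squares identity, not stated explicitly in the original post): with design matrix G (the `g` below) and the j-th response column y_j of `bigmatrix`, the normal equations give

$$\hat{\beta}_j = (G^\top G)^{-1} G^\top y_j$$

which is exactly what `solve(gg, Gw)` computes column by column in the code further down, without ever forming the inverse explicitly.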

`bigmatrix`, as well as `w1`, if formed explicitly, will each cost

(100,000 * 100,000 * 8) / (1024 ^ 3) = 74.5 GB

This is far too large to fit in RAM.

As the estimation for each response is independent, there is really no need to form `bigmatrix` in one go and store it in RAM. We can form it tile by tile and use an iterative procedure: form a tile, use that tile, then discard it. For example, the code below uses tiles of dimension 100,000 * 2,000, each with memory size:

(100,000 * 2,000 * 8) / (1024 ^ 3) = 1.5 GB

With such an iterative procedure, memory usage is kept well under control.

x1 <- rnorm(100000)
x2 <- rnorm(100000)
g <- cbind(x1, x2, x1^2, x2^2)
gg <- crossprod(g)    ## don't use `t(g) %*% g`
## we also don't explicitly form `gg` inverse

## initialize `beta` matrix (4 coefficients for each of 100,000 responses)
beta <- matrix(0, 4, 100000)

## we split 100,000 columns into 50 tiles, each with 2000 columns
for (i in 1:50) {
   start <- 2000 * (i-1) + 1    ## chunk start
   end <- 2000 * i    ## chunk end
   bigmatrix <- outer(x1, x2[start:end], "<=")
   Gw <- crossprod(g, bigmatrix)    ## don't use `t(g) %*% bigmatrix`
   beta[, start:end] <- solve(gg, Gw)
}
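As a quick sanity check (my own sketch, not part of the original answer; `j`, `yj`, and `fit` are names introduced here purely for illustration), you can refit a single response column directly with `lm.fit` and compare it against the corresponding column of `beta`:

j <- 123
yj <- as.numeric(x1 <= x2[j])    ## the j-th column of `bigmatrix`
fit <- lm.fit(g, yj)             ## QR-based least squares on the same design matrix
all.equal(unname(fit$coefficients), beta[, j])    ## should be TRUE up to numerical tolerance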

Note: don't try to compute the full residual matrix `w1`, as it would also cost 74.5 GB. If you need residuals in later work, break them into tiles as well and process the tiles one at a time, as in the sketch below.
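Here is a minimal sketch of that tile-by-tile residual computation, assuming (purely for illustration) that what you ultimately need is a per-response residual sum of squares rather than the full `w1`; the names `rss`, `tile`, and `resid` are mine, not from the original post:

rss <- numeric(100000)    ## one residual sum of squares per response
for (i in 1:50) {
   start <- 2000 * (i - 1) + 1
   end <- 2000 * i
   tile <- outer(x1, x2[start:end], "<=")       ## re-form this tile of `bigmatrix`
   resid <- tile - g %*% beta[, start:end]      ## residuals for this tile only
   rss[start:end] <- colSums(resid ^ 2)         ## keep the summary, discard the tile
}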

You don't need to worry about using an explicit loop here: the computation inside each iteration is costly enough to amortize the looping overhead.
