Adding lagged variables to an lm model?


Question


I'm using lm on a time series, which works quite well actually, and it's super super fast.

Let's say my model is:

> formula <- y ~ x

I train this on a training set:

> train <- data.frame( x = seq(1,3), y = c(2,1,4) )
> model <- lm( formula, train )

... and I can make predictions for new data:

> test <- data.frame( x = seq(4,6) )
> test$y <- predict( model, newdata = test )
> test
  x        y
1 4 4.333333
2 5 5.333333
3 6 6.333333

This works super nicely, and it's really speedy.

I want to add lagged variables to the model. Now, I could do this by augmenting my original training set:

> train$y_1 <- c(0, head(train$y, -1))
> train
  x y y_1
1 1 2   0
2 2 1   2
3 3 4   1

update the formula:

formula <- y ~ x * y_1

... and training will work just fine:

> model <- lm( formula, train )
> # no errors here

However, the problem is that there is no way to use 'predict', because y_1 cannot be populated in the test set in a batch manner: each test row's y_1 is the previous row's y, which is itself a prediction.
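To make the recursive dependency concrete, here is a hedged sketch of the row-by-row loop that batch prediction would have to replace (it assumes `model` was fitted on the lagged formula above; with only three training rows the fit is rank-deficient, so this is purely to illustrate the mechanics):

```r
# Each test row's y_1 is the previous row's y, which is itself a prediction,
# so predict() has to be called one row at a time, feeding each result forward.
test <- data.frame(x = seq(4, 6), y = NA, y_1 = NA)
prev_y <- train$y[nrow(train)]              # last observed y
for (i in seq_len(nrow(test))) {
  test$y_1[i] <- prev_y
  prev_y <- predict(model, newdata = test[i, ])
  test$y[i] <- prev_y
}
```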

Now, for lots of other regression things, there are very convenient ways to express them in the formula, such as poly(x,2) and so on, and these work directly using the unmodified training and test data.

So, I'm wondering if there is some way of expressing lagged variables in the formula, so that predict can be used? Ideally:

formula <- y ~ x * lag(y,-1)
model <- lm( formula, train )
test$y <- predict( model, newdata = test )

... without having to augment (not sure if that's the right word) the training and test datasets, and just being able to use predict directly?


Answer 1:


Have a look at e.g. the dynlm package which gives you lag operators. More generally the Task Views on Econometrics and Time Series will have lots more for you to look at.

Here is the beginning of its examples -- a one and twelve month lag:

R>      data("UKDriverDeaths", package = "datasets")
R>      uk <- log10(UKDriverDeaths)
R>      dfm <- dynlm(uk ~ L(uk, 1) + L(uk, 12))
R>      dfm

Time series regression with "ts" data:
Start = 1970(1), End = 1984(12)

Call:
dynlm(formula = uk ~ L(uk, 1) + L(uk, 12))

Coefficients:
(Intercept)     L(uk, 1)    L(uk, 12)  
      0.183        0.431        0.511  

R> 



Answer 2:


Following Dirk's suggestion of dynlm, I couldn't quite figure out how to predict with it, but searching for that led me to the dyn package via https://stats.stackexchange.com/questions/6758/1-step-ahead-predictions-with-dynlm-r-package

Then, after several hours of experimentation, I came up with the following function to handle the prediction. There were quite a few gotchas along the way, e.g. you can't seem to rbind time series, and the result of predict is offset by start, and a whole bunch of things like that, so I feel this answer adds significantly compared to just naming a package, though I have upvoted Dirk's answer.

So, a solution that works is:

  • use the dyn package
  • use the following method for prediction

predictDyn method:

# pass in training data, test data,
# it will step through one by one
# need to give dependent var name, so that it can make this into a timeseries
predictDyn <- function( model, train, test, dependentvarname ) {
    Ntrain <- nrow(train)
    Ntest <- nrow(test)
    # can't rbind ts's apparently, so convert to numeric first
    train[,dependentvarname] <- as.numeric(train[,dependentvarname])
    test[,dependentvarname] <- as.numeric(test[,dependentvarname])
    testtraindata <- rbind( train, test )
    testtraindata[,dependentvarname] <- ts( as.numeric( testtraindata[,dependentvarname] ) )
    for( i in 1:Ntest ) {
       result <- predict(model,newdata=testtraindata,subset=1:(Ntrain+i-1))
       testtraindata[Ntrain+i,dependentvarname] <- result[Ntrain + i + 1 - start(result)][1]
    }
    return( testtraindata[(Ntrain+1):(Ntrain + Ntest),] )
}

Example usage:

library("dyn")

# size of training and test data
N <- 6
predictN <- 10

# create training data, which we can get exact fit on, so we can check the results easily
traindata <- c(1,2)
for( i in 3:N ) { traindata[i] <- 0.5 + 1.3 * traindata[i-2] + 1.7 * traindata[i-1] }
train <- data.frame( y = ts( traindata ), foo = 1)

# create testing data, bunch of NAs
test <- data.frame( y = ts( rep(NA,predictN) ), foo = 1)

# fit a model
model <- dyn$lm( y ~ lag(y,-1) + lag(y,-2), train )
# look at the model, it's a perfect fit. Nice!
print(model)

test <- predictDyn( model, train, test, "y" )
print(test)

# nice plot
plot(test$y, type='l')

Output:

> model

Call:
lm(formula = dyn(y ~ lag(y, -1) + lag(y, -2)), data = train)

Coefficients:
(Intercept)   lag(y, -1)   lag(y, -2)  
        0.5          1.7          1.3  

> test
             y foo
7     143.2054   1
8     325.6810   1
9     740.3247   1
10   1682.4373   1
11   3823.0656   1
12   8686.8801   1
13  19738.1816   1
14  44848.3528   1
15 101902.3358   1
16 231537.3296   1

Edit: hmmm, this is super slow though. Even if I limit the data in the subset to a constant few rows of the dataset, it takes about 24 milliseconds per prediction, or, for my task, 0.024*7*24*8*20*10/60/60 = 1.792 hours :-O




Answer 3:


Try the arima function. The AR order is the auto-regressive part, which means lagged y. The xreg argument allows you to add other X variables, and you can get forecasts with predict.
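A hedged sketch of that approach, using simulated data purely for illustration:

```r
# Fit an AR(1) with arima() and forecast ahead.
set.seed(1)
x <- rnorm(100)
y <- arima.sim(model = list(ar = 0.6), n = 100) + 0.5 * x

# order = c(1, 0, 0): one autoregressive lag, no differencing, no MA term;
# xreg supplies the exogenous regressor.
fit <- arima(y, order = c(1, 0, 0), xreg = x)

# Forecast 5 steps ahead, supplying future values of x.
predict(fit, n.ahead = 5, newxreg = rnorm(5))$pred
```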




Answer 4:


Here's a thought:

Why don't you create a new data frame? Fill a data frame with the regressors you need. You could have columns like L1, L2, ..., Lp for all lags of any variable you want and, then, you get to use your functions exactly like you would for a cross-section type of regression.

Because you will not have to transform your data every time you call the fitting and prediction functions, but only once up front, it will be considerably faster. I know that EViews and Stata provide lag operators, and it is true that there is some convenience to that. But it is also inefficient if you do not need everything that functions like 'lm' compute. If you have a few hundred thousand iterations to perform and you just need the forecast, or the forecast plus the value of an information criterion like BIC or AIC, you can beat 'lm' on speed by avoiding computations you will not use -- just write an OLS estimator in a function and you're good to go.
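A minimal sketch of the pre-computed approach, assuming two lags of y (the `make_lagged` helper and the example series are my own illustration, not from the answer):

```r
# embed(y, p + 1) returns a matrix whose first column is y[t] and whose
# remaining columns are y[t-1], ..., y[t-p], so the data are transformed once.
make_lagged <- function(y, p) {
  m <- embed(y, p + 1)
  colnames(m) <- c("y", paste0("L", 1:p))
  as.data.frame(m)
}

y <- c(2, 1, 4, 3, 5, 4, 6)
d <- make_lagged(y, 2)
fit <- lm(y ~ L1 + L2, data = d)

# For raw speed over many iterations, skip lm's overhead entirely and call
# the bare-bones QR solver on the design matrix:
X <- cbind(1, as.matrix(d[, c("L1", "L2")]))
coefs <- .lm.fit(X, d$y)$coefficients
```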



Source: https://stackoverflow.com/questions/13096787/adding-lagged-variables-to-an-lm-model
