I am trying to understand the forecasts produced by an ARIMA model applied to a time-series dataset. The data are from the M1 Competition; the series is MNB65. I am fitting an ARIMA(1,0,0) model and generating forecasts in R. Here are some output snippets:
> arima(x, order = c(1,0,0))
Series: x
ARIMA(1,0,0) with non-zero mean
Call: arima(x = x, order = c(1, 0, 0))
Coefficients:
         ar1  intercept
      0.9421  12260.298
s.e.  0.0474    202.717
> predict(arima(x, order = c(1,0,0)), n.ahead=12)
$pred
Time Series:
Start = 53
End = 64
Frequency = 1
[1] 11757.39 11786.50 11813.92 11839.75 11864.09 11887.02 11908.62 11928.97 11948.15 11966.21 11983.23 11999.27
I have a few questions:
(1) How do I explain that, although the dataset shows a clear downward trend, the forecasts from this model trend upward? The same thing happens with ARIMA(2,0,0), which is the best ARIMA fit for the data according to auto.arima() (forecast package), and with an ARIMA(1,0,1) model.
(2) The intercept value for the ARIMA(1,0,0) model is 12260.298. Shouldn't the intercept satisfy C = mean * (1 - sum(AR coefficients)), in which case the value should be 715.52? I must be missing something basic here.
(3) This is clearly a series with a non-stationary mean. Why is an AR(2) model still selected as the best model by auto.arima()? Could there be an intuitive explanation?
Thanks.
No ARIMA(p,0,q) model will allow for a trend because the model is stationary. If you really want to include a trend, use ARIMA(p,1,q) with a drift term, or ARIMA(p,2,q). The fact that auto.arima() is suggesting 0 differences would usually indicate there is no clear trend.
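For instance, a minimal sketch of fitting a differenced model with a drift term via the forecast package (assuming x already holds the MNB65 series):

library(forecast)
fit <- Arima(x, order = c(1, 1, 0), include.drift = TRUE)  # ARIMA(1,1,0) plus a drift term
forecast(fit, h = 12)                                      # forecasts now follow the estimated drift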
The help file for arima() shows that the intercept is actually the mean. That is, the AR(1) model is
(Y_t - c) = ϕ(Y_{t-1} - c) + e_t
rather than
Y_t = c + ϕ Y_{t-1} + e_t
as you might expect.
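To see the difference, you can pull the estimates out of the fit and compute the implied constant yourself (a sketch using the numbers reported above):

fit   <- arima(x, order = c(1, 0, 0))
mu    <- coef(fit)["intercept"]   # the reported "intercept" is really the mean, about 12260.3
phi   <- coef(fit)["ar1"]         # AR(1) coefficient, about 0.9421
const <- mu * (1 - phi)           # implied constant: roughly 12260.3 * (1 - 0.9421), i.e. about 710

The forecasts from a stationary AR(1) converge to mu, which is why predict() drifts back up towards 12260 rather than following the downward trend in the data.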
auto.arima() uses a unit root test to determine the number of differences required, so check the results of that test to see what's going on. You can always specify the required number of differences in auto.arima() if you think the unit root tests are not leading to a sensible model.
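For example (d is an argument of auto.arima(); a sketch, again assuming the forecast package is loaded):

fit <- auto.arima(x, d = 1)   # override the unit root test and force one difference
forecast(fit, h = 12)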
Here are the results from two tests for your data:
R> adf.test(x)
Augmented Dickey-Fuller Test
data: x
Dickey-Fuller = -1.031, Lag order = 3, p-value = 0.9249
alternative hypothesis: stationary
R> kpss.test(x)
KPSS Test for Level Stationarity
data: x
KPSS Level = 0.3491, Truncation lag parameter = 1, p-value = 0.09909
So the ADF test says strongly non-stationary (non-stationarity being the null hypothesis of that test), while the KPSS test doesn't quite reject stationarity (the null hypothesis of that test). auto.arima() uses the latter by default. You could use auto.arima(x, test = "adf") if you wanted the first test; in that case, it suggests the model ARIMA(0,2,1), which does have a trend.
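A short sketch of that alternative (assuming x is the series and the forecast package is loaded):

fit <- auto.arima(x, test = "adf")  # choose the number of differences with the ADF test instead of KPSS
fc  <- forecast(fit, h = 12)
plot(fc)                            # the ARIMA(0,2,1) forecasts extrapolate the local (downward) trend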
Source: https://stackoverflow.com/questions/2684479/explaining-the-forecasts-from-an-arima-model