Unit Root Tests

There is a wide range of tests for a unit root available. The main reasons for this are that, unlike one-sided t statistics in asymptotically normal problems, the t statistic is not uniformly most powerful for testing the unit root hypothesis, and that many nonnegative functions of the data tend to result in consistent tests for a unit root.

The upshot is that many tests have been proposed even in the most straightforward testing problems (where perhaps there is serial correlation but heteroskedasticity is weak enough not to matter), before we consider the problems of removing so-called deterministic terms and allowing for stronger forms of heteroskedasticity. The programs here are limited primarily to those from my papers.

The programs are for the DF-GLS tests (Elliott, Rothenberg and Stock 1996, denoted ERS here) as well as programs from my later papers that allow for different assumptions on the initial condition. This is currently a work in progress, so not everything is available yet.

Some intuition is useful for understanding these tests and testing for unit roots in general. First, as noted, nonnegative functions of the data, suitably scaled, converge under the null hypothesis to functions of Brownian motions and typically converge to zero under the alternative hypothesis. This results in consistent tests using lower-tail critical values, which need to be computed on a statistic-by-statistic basis since the limiting distribution is a function of Brownian motions and is typically not a normal distribution. What is happening is that the data diverges under the null hypothesis, so the 'suitable scaling' involves taming this divergence to obtain a limiting distribution. But under the alternative the data does not diverge, and this scaling is overkill, sending the statistic to zero.
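As a rough illustration of this scaling argument, the simulation sketch below (not part of the distributed programs; the sample sizes, the AR coefficient 0.9, and the number of replications are illustrative choices) computes the nonnegative statistic sum(y_t^2)/T^2 for a random walk and for a stationary AR(1). It settles at a nondegenerate limit under the null and collapses toward zero under the alternative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ar1(rho, T):
    """Generate y_t = rho * y_{t-1} + e_t with y_0 = 0 and standard normal errors."""
    e = rng.standard_normal(T)
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = rho * y[t - 1] + e[t]
    return y

def scaled_stat(y):
    """The nonnegative statistic sum(y_t^2) / T^2 discussed above."""
    return np.sum(y ** 2) / len(y) ** 2

for T in (100, 500, 2500):
    null_avg = np.mean([scaled_stat(simulate_ar1(1.0, T)) for _ in range(300)])
    alt_avg = np.mean([scaled_stat(simulate_ar1(0.9, T)) for _ in range(300)])
    print(f"T={T:5d}  null (rho=1.0): {null_avg:7.3f}   alternative (rho=0.9): {alt_avg:8.5f}")
```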

Problems arise if there are other components that might also account for trending behavior in the data; these need to be removed so that the distinction between data drifting off or mean reverting is preserved. One addition to the model that is nearly always appropriate is a nonzero mean. Under the alternative a nonzero mean seems reasonable, since we do not expect mean-reverting data to necessarily revert to a mean of zero. Many tests use OLS to remove the mean, which changes the asymptotic distribution of the tests and reduces power. Since a mean does not affect the 'drifting off' property, it seems strange that asymptotic power is reduced. In ERS we show that GLS detrending does not affect the limit distribution, and hence does not reduce power (this is the intuition for using the DF-GLS test instead of the usual DF test, which uses OLS detrending).
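A minimal sketch of the GLS (quasi-difference) demeaning step behind the DF-GLS test, assuming the constant-only case with the noncentrality parameter c_bar = -7 used in ERS; this is an illustration of the idea, not the distributed program.

```python
import numpy as np

def gls_demean(y, c_bar=-7.0):
    """GLS-demean a series by quasi-differencing with alpha_bar = 1 + c_bar/T (ERS, constant case)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    a = 1.0 + c_bar / T
    # Quasi-differenced data and constant regressor; the first observation is kept in levels.
    ya = np.concatenate(([y[0]], y[1:] - a * y[:-1]))
    za = np.concatenate(([1.0], np.full(T - 1, 1.0 - a)))
    mu_hat = np.dot(za, ya) / np.dot(za, za)  # OLS on the quasi-differenced data
    return y - mu_hat
```

The DF-GLS statistic is then the usual (augmented) Dickey-Fuller t statistic computed from the demeaned series, with no deterministic terms included in that regression.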

When there is potentially a time trend in the data along with the stochastic component that may or may not have a unit root, the removal of the trend has stronger effects. Under the alternative hypothesis we have mean reversion around a linear trend, and removing the trend results in mean reversion as before. Under the null hypothesis, however, if the variable has a unit root then it is diverging from any value; removing a fitted trend stops this behavior over the full sample, so the data cannot get 'too far' from the trend (estimates of the trend will adapt to make sure this happens), and a major element of the distinction between the data under the null and alternative is removed when allowing for a trend. The data still behaves differently (wandering further from the fitted trend within the sample), and unit root tests pick up on this to continue to have power.
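For completeness, a corresponding sketch of GLS detrending when a linear trend is allowed, assuming c_bar = -13.5 as in the ERS trend case; again this is illustrative rather than the authors' code.

```python
import numpy as np

def gls_detrend(y, c_bar=-13.5):
    """GLS-detrend a series on (1, t) by quasi-differencing with alpha_bar = 1 + c_bar/T (ERS, trend case)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    a = 1.0 + c_bar / T
    z = np.column_stack((np.ones(T), np.arange(1, T + 1)))  # constant and linear trend
    ya = np.concatenate(([y[0]], y[1:] - a * y[:-1]))
    za = np.vstack((z[0], z[1:] - a * z[:-1]))
    beta_hat = np.linalg.lstsq(za, ya, rcond=None)[0]  # OLS on the quasi-differenced data
    return y - z @ beta_hat
```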

More complicated trends make the distinction even harder because they are better at adapting to remove the non-mean-reverting behavior of the data, hence reducing the difference in behavior of the detrended data when it has a unit root and when it is mean reverting. For example, a trend that breaks at some point in the sample, where this break point is estimated, will further reduce the variation of the data around the broken trend, so power will be even worse.

All tests need to be scaled by some estimate of the variation of the data. If this variation (the variance of the innovations to the series) changes greatly over the data set, then this also creates major problems for unit root tests.
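One common way of estimating this variation is a long-run variance estimator. The sketch below uses a Bartlett (Newey-West) kernel with a rule-of-thumb bandwidth; both choices are illustrative and are not recommendations from the papers.

```python
import numpy as np

def long_run_variance(u, bandwidth=None):
    """Bartlett (Newey-West) kernel estimate of the long-run variance of a mean-zero series u."""
    u = np.asarray(u, dtype=float)
    T = len(u)
    if bandwidth is None:
        bandwidth = int(np.floor(4 * (T / 100.0) ** (2.0 / 9.0)))  # common rule of thumb
    lrv = np.dot(u, u) / T  # lag-0 autocovariance (sample variance)
    for j in range(1, bandwidth + 1):
        gamma_j = np.dot(u[j:], u[:-j]) / T
        lrv += 2.0 * (1.0 - j / (bandwidth + 1.0)) * gamma_j
    return lrv
```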


