Detecting Heteroskedasticity: Breusch-Pagan Test
The Breusch–Pagan test addresses a violation of one of the classical linear regression assumptions:
1- There is a linear relationship between the dependent and independent variables.
2- The independent variable X is not random.
3- The expected value of the error term is zero.
4- The variance of the error term is the same for all observations (homoskedasticity).
5- The error term is uncorrelated across observations.
6- The error term is normally distributed.
Heteroskedasticity occurs when the variance of the errors differs across observations. When that is the case, standard errors and test statistics in a regression output will be incorrect unless they are adjusted for heteroskedasticity.
There are two broad kinds of heteroskedasticity:
When the heteroskedasticity of the error variance is not correlated with the independent variables, it is called unconditional heteroskedasticity; it creates no major problems for statistical inference.
When the heteroskedasticity of the error variance is correlated with the independent variables, it is called conditional heteroskedasticity; it causes the most problems for statistical inference.
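The distinction above can be made concrete with a small simulation. The sketch below, with made-up coefficients and a fixed random seed chosen only for illustration, builds errors whose variance grows with X (conditional heteroskedasticity) and checks that the squared errors are correlated with X:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
x = rng.uniform(1, 10, n)

# Unconditional heteroskedasticity: error variance varies,
# but independently of x
u_uncond = rng.normal(0, 1, n) * rng.choice([0.5, 2.0], size=n)

# Conditional heteroskedasticity: error standard deviation grows with x
u_cond = rng.normal(0, 1, n) * x

# Illustrative regression model (coefficients are arbitrary assumptions)
y = 3.0 + 1.5 * x + u_cond

# In the conditional case, squared errors are correlated with x
corr = np.corrcoef(x, u_cond**2)[0, 1]
print(f"corr(x, squared conditional errors) = {corr:.2f}")
```

It is exactly this correlation between squared errors and the independent variables that the Breusch–Pagan test looks for.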
The Breusch–Pagan (1979) test is widely used to diagnose conditional heteroskedasticity. The test statistic is distributed as a chi-square (χ²) random variable with the number of degrees of freedom equal to the number of independent variables (K) in the regression.
The Breusch–Pagan test is a one-tailed test because we are concerned about heteroskedasticity only for large values of the test statistic.
BP Statistic = n × R²
where R² comes from regressing the squared residuals from the original regression on the independent variables. If no conditional heteroskedasticity exists, this R² will be very low.
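The two-step procedure can be sketched in plain numpy. The simulated data below (one independent variable, arbitrary coefficients, errors that scale with x) is an assumption for illustration; the mechanics of the auxiliary regression are what matters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(0, 1, n) * x   # errors scale with x

# Step 1: fit the original regression and collect residuals
X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b

# Step 2: regress the squared residuals on the same independent variables
u2 = resid**2
g, *_ = np.linalg.lstsq(X, u2, rcond=None)
fitted = X @ g
r2 = 1 - np.sum((u2 - fitted) ** 2) / np.sum((u2 - u2.mean()) ** 2)

# Step 3: BP statistic = n * R^2, chi-square with K = 1 degree of freedom
bp = n * r2
print(f"R^2 of auxiliary regression = {r2:.3f}, BP = {bp:.2f}")
```

Because the simulated errors are conditionally heteroskedastic, the auxiliary R² is noticeably above zero and the BP statistic is large.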
The calculated BP test statistic is then compared with the appropriate rejection (critical) point to decide whether or not to reject the null hypothesis.
The comparison values we choose are based on the level of significance selected. The level of significance reflects how much sample evidence we require to reject the null. We can use three conventional significance levels to conduct hypothesis tests: 0.10, 0.05, and 0.01.
We can formulate the following set of hypotheses.
H0: No conditional heteroskedasticity versus Ha: Conditional heteroskedasticity
If we find that the calculated value of the test statistic exceeds the critical value, we reject the null hypothesis.
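The decision rule at each conventional significance level can be illustrated with scipy's chi-square distribution. The BP statistic of 7.8 and K = 2 independent variables below are hypothetical numbers chosen so that the conclusion differs across significance levels:

```python
from scipy.stats import chi2

bp_stat = 7.8   # hypothetical BP statistic from an auxiliary regression
k = 2           # hypothetical number of independent variables

for alpha in (0.10, 0.05, 0.01):
    crit = chi2.ppf(1 - alpha, df=k)
    decision = "reject H0" if bp_stat > crit else "fail to reject H0"
    print(f"alpha={alpha:.2f}: critical value={crit:.3f} -> {decision}")
```

Here we would reject the null of no conditional heteroskedasticity at the 0.10 and 0.05 levels (critical values 4.605 and 5.991) but not at the 0.01 level (critical value 9.210), showing how the chosen significance level drives the conclusion.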
We can use two different methods to correct the regression coefficients’ standard errors for conditional heteroskedasticity, and such a correction may reverse the conclusion of a particular hypothesis test.
1- Robust standard errors (correct the standard errors), also known as heteroskedasticity-consistent or White-corrected standard errors.
2- Generalized least squares (modifies the original regression equation), which requires econometric expertise to implement correctly on financial data.
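The first correction, White (HC0) robust standard errors, can be sketched directly from the sandwich formula (X'X)⁻¹ X' diag(e²) X (X'X)⁻¹. The simulated heteroskedastic data below is an illustrative assumption; the point is the contrast between conventional and robust standard errors:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.uniform(0, 10, n)
y = 1.0 + 2.0 * x + rng.normal(0, 1, n) * x   # heteroskedastic errors

X = np.column_stack([np.ones(n), x])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b

XtX_inv = np.linalg.inv(X.T @ X)

# Conventional OLS standard errors (assume constant error variance)
s2 = resid @ resid / (n - X.shape[1])
se_ols = np.sqrt(np.diag(s2 * XtX_inv))

# White (HC0) heteroskedasticity-consistent standard errors:
# sandwich estimator (X'X)^-1 X' diag(e^2) X (X'X)^-1
meat = (X * resid[:, None] ** 2).T @ X
cov_white = XtX_inv @ meat @ XtX_inv
se_white = np.sqrt(np.diag(cov_white))

print("OLS SEs:  ", se_ols)
print("White SEs:", se_white)
```

When conditional heteroskedasticity is present, the two sets of standard errors differ, which is why t-statistics (and hence hypothesis-test conclusions) can change after the correction.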