(Some of the computer output has been suppressed)
ECONOMETRICS SOFTWARE LIBRARY (ESL) PROGRAM -- VERSION 4.33
Copyright (C) by Ramu Ramanathan --- All rights reserved
Dept. of Econ., UC San Diego, La Jolla, CA 92093-0508, PH. (619)534-6787
FAX (619)534-7040, Email address: ramu@weber.ucsd.edu
Reading header file C:\ESL\DATA7-8.hdr
List of variables
0) const 1) grth 2) y60 3) inv 4) pop
5) school 6) dn 7) di 8) doecd
period: 1, maxobs: 104, obs range: full 1-104, current 1-104
Reading datafile C:\ESL\DATA7-8 BY OBSERVATIONS
?square y60 inv pop school dn di doecd ;
Created sq_y60 = y60 squared as var no. 9
Created sq_inv = inv squared as var no. 10
Created sq_pop = pop squared as var no. 11
Created sq_schoo = school squared as var no. 12

(estimate the original model with OLS)
?ols grth const y60 inv pop school dn di doecd ;
(This will be referred to as the original regression in part 2.)
OLS ESTIMATES USING THE 104 OBSERVATIONS 1-104
Dependent variable - grth

VARIABLE COEFFICIENT STDERROR T STAT 2Prob(t > |T|)
0) constant 1.285178 1.041575 1.233879 0.22026
2) y60 -0.387226 0.053676 -7.214105 < 0.0001 ***
3) inv 0.502818 0.081867 6.141907 < 0.0001 ***
4) pop -0.327083 0.335362 -0.975314 0.331856
5) school 0.173185 0.057805 2.996043 0.00348 ***
6) dn -0.66846 0.1594 -4.193594 < 0.0001 ***
7) di 0.305689 0.091109 3.355218 0.001137 ***
8) doecd 0.241716 0.137948 1.752224 0.082928 *

(save the residuals, the absolute values, the squares, and the natural logs of the squared residuals)
?genr ut = uhat
Generated var. no. 13 (ut)
?genr absut = abs (ut)
Generated var. no. 14 (absut)
?genr usq = ut * ut
Generated var. no. 15 (usq)
?genr lnusq = ln (usq)
Generated var. no. 16 (lnusq)
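
(These four series are simple transforms of the OLS residual vector. A compact numpy sketch, assuming y holds grth and X holds the original regressors including the constant; the function name is a placeholder, not an ESL command:)

    import numpy as np

    def residual_transforms(y, X):
        """Return the OLS residuals of y on X along with their absolute
        values, squares, and log-squares (ut, absut, usq, lnusq)."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        ut = y - X @ beta
        return ut, np.abs(ut), ut ** 2, np.log(ut ** 2)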

(Testing via the Glejser approach)
(Regress the absolute value of the residuals on a constant, inv, y60, pop, and doecd.)
?ols absut const inv y60 pop doecd ;

OLS ESTIMATES USING THE 104 OBSERVATIONS 1-104
Dependent variable - absut
VARIABLE COEFFICIENT STDERROR T STAT 2Prob(t > |T|)
0) constant 0.850277 0.580477 1.46479 0.146147
3) inv 0.082607 0.042177 1.95855 0.052979 *
2) y60 -0.030725 0.024674 -1.245259 0.215976
4) pop 0.224227 0.189616 1.182529 0.239828
8) doecd -0.102235 0.078484 -1.302625 0.195725
Unadjusted R-squared 0.147 Adjusted R-squared 0.113

(Compute the LM test statistic for the Glejser approach)
?genr lm1 = $nrsq
Generated var. no. 17 (lm1)
?print lm1 ;
Varname: lm1, period: 1, maxobs: 104, obs range: full 1-104, current 1-104
15.2919203
(This value equals the number of observations (104) times the unadjusted R-squared of the auxiliary regression (0.147): 104 x 0.147 is roughly 15.29.)
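
(This n-times-R-squared calculation is the same for all four tests; only the auxiliary regression changes. Below is a minimal numpy sketch of the computation; the function name and the arguments dep and regressors are placeholders, not objects from the ESL session.)

    import numpy as np

    def lm_statistic(dep, regressors):
        """LM = n * R^2 from an auxiliary OLS regression of dep on a
        constant plus the columns of `regressors`."""
        n = len(dep)
        X = np.column_stack([np.ones(n), regressors])    # add the constant
        beta, *_ = np.linalg.lstsq(X, dep, rcond=None)   # OLS coefficients
        resid = dep - X @ beta
        tss = (dep - dep.mean()) @ (dep - dep.mean())
        r_squared = 1.0 - (resid @ resid) / tss
        return n * r_squared

    # Glejser case: dep = |uhat| and regressors = [inv, y60, pop, doecd],
    # which reproduces roughly 104 * 0.147 = 15.29.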

(Testing via the Breusch-Pagan approach)
(Regress the squared residuals from the original model on a constant, inv, y60, pop, their squares, and doecd.)
?ols usq const inv sq_inv y60 sq_y60 pop sq_pop doecd ;

OLS ESTIMATES USING THE 104 OBSERVATIONS 1-104
Dependent variable - usq
VARIABLE COEFFICIENT STDERROR T STAT 2Prob(t > |T|)
0) constant 0.99719 4.782794 0.208495 0.835284
3) inv -0.680457 0.287535 -2.366516 0.019965 **
10) sq_inv 0.148438 0.055465 2.676254 0.008755 ***
2) y60 0.065315 0.221122 0.295379 0.768342
9) sq_y60 -0.004815 0.01429 -0.336963 0.73688
4) pop 0.148717 3.850167 0.038626 0.969269
11) sq_pop 0.001082 0.727888 0.001487 0.998817
8) doecd -0.12615 0.067746 -1.862116 0.065645 *
Unadjusted R-squared 0.174 Adjusted R-squared 0.113

(Computing the LM statistic for the Breusch-Pagan approach)
?genr lm2 = $nrsq
Generated var. no. 18 (lm2)
?print lm2 ;
Varname: lm2, period: 1, maxobs: 104, obs range: full 1-104, current 1-104
18.06311522

(The Harvey-Godfrey approach)
(Regress the natural log of the squared residuals from the original model on the same regressors as in the Breusch-Pagan auxiliary regression.)
?ols lnusq const inv sq_inv y60 sq_y60 pop sq_pop doecd ;

OLS ESTIMATES USING THE 104 OBSERVATIONS 1-104
Dependent variable - lnusq
VARIABLE COEFFICIENT STDERROR T STAT 2Prob(t > |T|)
0) constant -44.233223 66.370465 -0.666459 0.506716
3) inv -6.011087 3.990107 -1.506498 0.135223
10) sq_inv 1.212004 0.769682 1.574682 0.11862
2) y60 -2.185465 3.0685 -0.712226 0.478052
9) sq_y60 0.123268 0.198305 0.621609 0.535672
4) pop -46.462678 53.42847 -0.869624 0.386675
11) sq_pop -9.389342 10.100846 -0.92956 0.354931
8) doecd -0.675005 0.940099 -0.718014 0.474492
Unadjusted R-squared 0.150 Adjusted R-squared 0.088

(Computing the LM statistic for the Harvey-Godfrey approach)
?genr lm3 = $nrsq
Generated var. no. 19 (lm3)
?print lm3 ;
Varname: lm3, period: 1, maxobs: 104, obs range: full 1-104, current 1-104
15.59492674

(The White approach)
(Regress the squared residuals from the original model on all of the original regressors and on the squares of y60, inv, pop, and school.)
?ols usq const inv sq_inv y60 sq_y60 pop sq_pop school sq_schoo dn di
doecd ;

OLS ESTIMATES USING THE 104 OBSERVATIONS 1-104
Dependent variable - usq
VARIABLE COEFFICIENT STDERROR T STAT 2Prob(t > |T|)
0) constant 1.289845 5.006732 0.257622 0.797274
3) inv -0.671534 0.302027 -2.223421 0.028636 **
10) sq_inv 0.146579 0.057798 2.536054 0.012897 **
2) y60 0.065842 0.251645 0.261647 0.794178
9) sq_y60 -0.004929 0.016015 -0.307754 0.758965
4) pop 0.385014 4.023424 0.095693 0.923973
11) sq_pop 0.045081 0.760048 0.059313 0.952831
5) school -0.00181 0.042402 -0.042689 0.966042
12) sq_schoo 0.002908 0.018737 0.155218 0.87699
6) dn 0.016891 0.082146 0.205619 0.837543
7) di -0.00615 0.044907 -0.136944 0.891374
8) doecd -0.128785 0.07047 -1.827525 0.070863 *
Unadjusted R-squared 0.174 Adjusted R-squared 0.076

(Calculating the LM statistic for the White approach)
?genr lm4 = $nrsq
Generated var. no. 20 (lm4)
?print lm4 ;
Varname: lm4, period: 1, maxobs: 104, obs range: full 1-104, current 1-104
18.1420736

(Conducting the tests)
(For each test, the null hypothesis is that there is no heteroskedasticity)
(The null hypothesis can also be stated as: all of the coefficients in the auxiliary regression, except the constant, are zero.)
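
(Each LM statistic is compared with a chi-square distribution whose degrees of freedom equal the number of regressors in the auxiliary regression excluding the constant: 4 for Glejser, 7 for Breusch-Pagan and Harvey-Godfrey, and 11 for White. A small Python sketch of the p-value calculation, using the LM values printed above and scipy's chi-square survival function:)

    from scipy.stats import chi2

    tests = {
        "Glejser":        (15.2919203, 4),
        "Breusch-Pagan":  (18.06311522, 7),
        "Harvey-Godfrey": (15.59492674, 7),
        "White":          (18.1420736, 11),
    }

    for name, (lm, df) in tests.items():
        # area to the right of the LM statistic under a chi-square(df) density
        print(f"{name}: p-value = {chi2.sf(lm, df):.6f}")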

(The Glejser test)
?pvalue 3 4 lm1
For Chi-square (4), area to the right of 15.29192 is 0.004133

(The Breusch-Pagan test)
?pvalue 3 7 lm2
For Chi-square (7), area to the right of 18.063115 is 0.011689

(The Harvey-Godfrey test)
?pvalue 3 7 lm3
For Chi-square (7), area to the right of 15.594927 is 0.029086

(The White test)
?pvalue 3 11 lm4
For Chi-square (11), area to the right of 18.142074 is 0.078342

(All of the p-values are below 0.10, so at the 10 percent level we reject the null hypothesis in each case and conclude that heteroskedasticity is present.)

?store dataherb ;

Data for obs 1-104 stored by observations as dataherb

------------------------------------------------------------------------------------------------------------------------------------
Part 2

Reading header file C:\ESL\USER\DATAHERB.hdr
List of variables
0) const 1) grth 2) y60 3) inv 4) pop
5) school 6) dn 7) di 8) doecd 9) sq_y60
10) sq_inv 11) sq_pop 12) sq_schoo 13) ut 14) absut
15) usq 16) lnusq 17) lm1 18) lm2 19) lm3
20) lm4

period: 1, maxobs: 104, obs range: full 1-104, current 1-104
Reading datafile C:\ESL\USER\DATAHERB BY OBSERVATIONS

(Auxiliary regression for the Glejser approach - the dependent variable is the absolute value of the residual from the original regression)
?ols absut const inv y60 pop doecd ;

OLS ESTIMATES USING THE 104 OBSERVATIONS 1-104
Dependent variable - absut
VARIABLE COEFFICIENT STDERROR T STAT 2Prob(t > |T|)
0) constant 0.850277 0.580477 1.46479 0.146147
3) inv 0.082607 0.042177 1.95855 0.052979 *
2) y60 -0.030725 0.024674 -1.245259 0.215976
4) pop 0.224227 0.189616 1.182529 0.239828
8) doecd -0.102235 0.078484 -1.302625 0.195725

(Computing the predicted value of sigma, the standard deviation of the error term in the original regression. The fitted value equals the actual value of absut minus the residual from the auxiliary regression, which is what the next command computes.)
?genr absuhat = absut - uhat
Generated var. no. 21 (absuhat)
?print absuhat ;
Varname: absuhat, period: 1, maxobs: 104, obs range: full 1-104, current 1-104
0.29510107 0.17593918 0.24741198 0.35466455 0.23294215 0.17515668
0.25915434 0.23160517 0.20105285 0.33153188 0.29079558 0.20980801
0.26915324 0.23330975 0.3034103 0.3203731 0.2736911 0.32966742
0.20454318 0.29155569 0.22166208 0.32368635 0.27384472 0.23113616
0.20173119 0.27183744 0.25784257 0.2606176 0.22784148 0.24651991
0.29450253 0.25700134 0.26638624 0.33712999 0.29139165 0.2561534
0.20668518 0.22484505 0.33808653 0.31264507 0.22368637 0.25323809
0.28413859 0.28796596 0.28381046 0.25836628 0.29453753 0.27604895
0.31188352 0.21100399 0.3133872 0.19399442 0.27605479 0.27912932
0.25260317 0.31544664 0.25885462 0.27354743 0.30499857 0.25990527
0.25726479 0.16990253 0.08703169 0.08703613 0.09457468 0.13387967
0.11401292 0.09947468 0.14745601 0.13188674 0.10614589 0.12525838
0.10824936 0.12146947 0.10159099 0.08244336 0.1058078 0.17803975
0.05526881 0.12799161 0.15897791 0.1176518 0.09753303 0.2700894
0.28305976 0.22979348 0.22621443 0.17723308 0.28030979 0.24766061
0.2810247 0.26516592 0.31396429 0.21927239 0.24349933 0.25319961
0.30983777 0.280708 0.28026512 0.30571826 0.24577202 0.23737177
0.14543108 0.2222478

(Notice these values are all positive. We can use these estimates to calculate the weights.)
?genr wt1 = 1/absuhat
Generated var. no. 22 (wt1)

(Estimate the original model by weighted least squares.)
?wls wt1 grth const y60 inv pop school dn di doecd ;

WEIGHTED LEAST SQUARES ESTIMATES USING THE 104 OBSERVATIONS 1-104
Dependent variable - grth, Variable used as weight - wt1
VARIABLE COEFFICIENT STDERROR T STAT 2Prob(t > |T|)
0) constant 1.088161 0.740056 1.470378 0.14473
2) y60 -0.389279 0.043229 -9.004972 < 0.0001 ***
3) inv 0.471285 0.073395 6.421233 < 0.0001 ***
4) pop -0.448851 0.232378 -1.931553 0.056364 *
5) school 0.158332 0.05382 2.94187 0.004089 ***
6) dn -0.678826 0.162827 -4.169 < 0.0001 ***
7) di 0.3006 0.091177 3.296893 0.001371 ***
8) doecd 0.281998 0.10295 2.739178 0.007343 ***

STATISTICS BASED ON RESIDUALS FOR THE WEIGHTED MODEL
R-squared is suppressed because it is not meaningful. F-statistic tests the
hypothesis that each coefficient (including the constant term) is zero.
Error Sum of Sq (ESS) 148.884871 Std Err of Resid. (sgmahat) 1.245345
F-statistic (8, 96) 104.386131 pvalue = Prob(F > 104.386) is < 0.0001
Durbin-Watson Stat. 2.070633 First-order auto corr coeff -0.037

STATISTICS BASED ON RESIDUALS FOR THE ORIGINAL MODEL
R-squared is computed as the square of the corr. between observed and
predicted dep. var.
Mean of dep. var. 0.451759 S.D. of dep. variable 0.477426
Error Sum of Sq (ESS) 9.314841 Std Err of Resid. (sgmahat) 0.311496
Unadjusted R-squared 0.603 Adjusted R-squared 0.574

MODEL SELECTION STATISTICS
SGMASQ 0.09703 AIC 0.104462 FPE 0.104493
HQ 0.113435 SCHWARZ 0.128026 SHIBATA 0.103345
GCV 0.105115 RICE 0.10585

Residuals for the unweighted model are saved as uhat. Type:
genr newname = uhat to use it in the future
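
(To summarize the Glejser correction in one place: the fitted values from the auxiliary regression are the estimates of sigma for each observation, and weighted least squares divides every observation of the original model by its own sigma before running OLS, mirroring the way wt1 is used by the wls command above. A hedged numpy sketch of that transformation, assuming y is grth, X is the original regressor matrix including the constant, and sigma_hat is the vector of fitted absolute residuals:)

    import numpy as np

    def feasible_wls(y, X, sigma_hat):
        """Divide each observation by its estimated standard deviation,
        then run OLS on the transformed data."""
        w = 1.0 / sigma_hat            # weight = 1 / sigma_t (like wt1)
        yw = y * w                     # transformed dependent variable
        Xw = X * w[:, None]            # transform every column, incl. constant
        beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
        return beta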

(The Breusch-Pagan approach)
(The auxiliary regression. Dependent variable is the square of the error from the original regression)
?ols usq const inv sq_inv y60 sq_y60 pop sq_pop doecd ;

OLS ESTIMATES USING THE 104 OBSERVATIONS 1-104
Dependent variable - usq
VARIABLE COEFFICIENT STDERROR T STAT 2Prob(t > |T|)
0) constant 0.99719 4.782794 0.208495 0.835284
3) inv -0.680457 0.287535 -2.366516 0.019965 **
10) sq_inv 0.148438 0.055465 2.676254 0.008755 ***
2) y60 0.065315 0.221122 0.295379 0.768342
9) sq_y60 -0.004815 0.01429 -0.336963 0.73688
4) pop 0.148717 3.850167 0.038626 0.969269
11) sq_pop 0.001082 0.727888 0.001487 0.998817
8) doecd -0.12615 0.067746 -1.862116 0.065645 *

(Computing the predicted value for sigma squared)
?genr usqhat = usq - uhat
Generated var. no. 23 (usqhat)
?print usqhat ;
Varname: usqhat, period: 1, maxobs: 104, obs range: full 1-104, current 1-104
0.17524201 0.09374921 0.05984416 0.23754412 0.03460434 0.10986369
0.06288879 0.04527691 0.06826127 0.22830021 0.09786902 0.11025788
0.13331777 0.05798304 0.09830352 0.12439247 0.05516936 0.15951876
0.07097112 0.06919184 0.068635 0.1890943 0.1040863 0.06536731
0.09830935 0.06186269 0.06434076 0.07199834 0.0561614 0.04257329
0.08833557 0.13377095 0.07452031 0.118123 0.09071474 0.07360505
0.18643835 0.08443932 0.26480994 0.15119585 0.0836722 0.04636837
0.13504222 0.10051291 0.12469163 0.09552952 0.21878822 0.10992772
0.16197964 0.02953918 0.17760969 0.09070367 0.07645132 0.09304811
0.07855273 0.26304498 0.08060272 0.09876932 0.12430564 0.06611273
0.08674734 0.14599569 -0.01970779 -0.01963762 0.01332658 0.13339311
0.02232553 0.03397448 0.06621578 0.03016866 0.00523132 0.02581542
0.04501755 -0.0111176 -0.05708832 -0.01302687 0.04823664 0.00601882
-0.08012873 0.00468672 0.09933749 -0.00848013 -0.03416962 0.09346165
0.10976291 0.07872695 0.0688068 0.05210696 0.08721271 0.11629121
0.13188382 0.089103 0.20484738 0.10005875 0.15724923 0.07018115
0.17349796 0.22121714 0.11735698 0.18340477 0.06579285 0.06543139
0.00888735 0.05753114

(Some of these values are negative. The following steps replace the negative values with the actual values of usq computed from the original regression.)
?genr d1 = (usqhat>0)
Generated var. no. 24 (d1)
?genr sgmasq = (d1*usqhat) + ((1-d1)*usq)
Generated var. no. 25 (sgmasq)
?print sgmasq ;
Varname: sgmasq, period: 1, maxobs: 104, obs range: full 1-104, current 1-104
0.17524201 0.09374921 0.05984416 0.23754412 0.03460434 0.10986369
0.06288879 0.04527691 0.06826127 0.22830021 0.09786902 0.11025788
0.13331777 0.05798304 0.09830352 0.12439247 0.05516936 0.15951876
0.07097112 0.06919184 0.068635 0.1890943 0.1040863 0.06536731
0.09830935 0.06186269 0.06434076 0.07199834 0.0561614 0.04257329
0.08833557 0.13377095 0.07452031 0.118123 0.09071474 0.07360505
0.18643835 0.08443932 0.26480994 0.15119585 0.0836722 0.04636837
0.13504222 0.10051291 0.12469163 0.09552952 0.21878822 0.10992772
0.16197964 0.02953918 0.17760969 0.09070367 0.07645132 0.09304811
0.07855273 0.26304498 0.08060272 0.09876932 0.12430564 0.06611273
0.08674734 0.14599569 0.01183374 0.00585367 0.01332658 0.13339311
0.02232553 0.03397448 0.06621578 0.03016866 0.00523132 0.02581542
0.04501755 0.00179727 0.0706486 0.00298215 0.04823664 0.00601882
0.00265494 0.00468672 0.09933749 0.05692447 0.00330347 0.09346165
0.10976291 0.07872695 0.0688068 0.05210696 0.08721271 0.11629121
0.13188382 0.089103 0.20484738 0.10005875 0.15724923 0.07018115
0.17349796 0.22121714 0.11735698 0.18340477 0.06579285 0.06543139
0.00888735 0.05753114

(Notice that all the values are now positive, so we can calculate the weights.)
?genr wt2 = 1/sqrt(sgmasq)
(We must take the square root because the weight needs to be in the form of 1/sigma.)
Generated var. no. 26 (wt2)
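
(The dummy-variable trick above keeps the fitted variance when it is positive and falls back on the squared residual otherwise, then inverts the square root to obtain a weight of the form 1/sigma. A numpy equivalent, with placeholder argument names, is sketched below.)

    import numpy as np

    def variance_weights(fitted_usq, usq):
        """Keep the fitted variance where it is positive, otherwise use the
        squared residual; return weights of the form 1/sigma."""
        sigma_sq = np.where(fitted_usq > 0.0, fitted_usq, usq)
        return 1.0 / np.sqrt(sigma_sq)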

(Estimate the original model by weighted least squares.)
?wls wt2 grth const y60 inv pop school dn di doecd ;

WEIGHTED LEAST SQUARES ESTIMATES USING THE 104 OBSERVATIONS 1-104
Dependent variable - grth, Variable used as weight - wt2
VARIABLE COEFFICIENT STDERROR T STAT 2Prob(t > |T|)
0) constant 0.276825 0.495737 0.55841 0.577864
2) y60 -0.32681 0.031951 -10.228382 < 0.0001 ***
3) inv 0.56275 0.077408 7.269935 < 0.0001 ***
4) pop -0.474675 0.142996 -3.319493 0.001276 ***
5) school 0.109112 0.05102 2.138602 0.035006 **
6) dn -0.581255 0.134591 -4.318695 < 0.0001 ***
7) di 0.298075 0.082114 3.629996 0.000457 ***
8) doecd 0.195264 0.071617 2.726503 0.00761 ***

STATISTICS BASED ON RESIDUALS FOR THE WEIGHTED MODEL
R-squared is suppressed because it is not meaningful. F-statistic tests the
hypothesis that each coefficient (including the constant term) is zero.
Error Sum of Sq (ESS) 95.355589 Std Err of Resid. (sgmahat) 0.996638
F-statistic (8, 96) 206.409935 pvalue = Prob(F > 206.410) is < 0.0001
Durbin-Watson Stat. 2.09488 First-order auto corr coeff -0.052

STATISTICS BASED ON RESIDUALS FOR THE ORIGINAL MODEL
R-squared is computed as the square of the corr. between observed and
predicted dep. var.
Mean of dep. var. 0.451759 S.D. of dep. variable 0.477426
Error Sum of Sq (ESS) 9.481344 Std Err of Resid. (sgmahat) 0.314267
Unadjusted R-squared 0.597 Adjusted R-squared 0.568

MODEL SELECTION STATISTICS
SGMASQ 0.098764 AIC 0.106329 FPE 0.106361
HQ 0.115463 SCHWARZ 0.130315 SHIBATA 0.105192
GCV 0.106994 RICE 0.107743

Residuals for the unweighted model are saved as uhat. Type:
genr newname = uhat to use it in the future

(The White approach)
(Run the auxiliary regression. Dependent variable is square of the error from the original regression.)
?ols usq const inv sq_inv y60 sq_y60 pop sq_pop school sq_schoo dn di
doecd ;

OLS ESTIMATES USING THE 104 OBSERVATIONS 1-104
Dependent variable - usq
VARIABLE COEFFICIENT STDERROR T STAT 2Prob(t > |T|)
0) constant 1.289845 5.006732 0.257622 0.797274
3) inv -0.671534 0.302027 -2.223421 0.028636 **
10) sq_inv 0.146579 0.057798 2.536054 0.012897 **
2) y60 0.065842 0.251645 0.261647 0.794178
9) sq_y60 -0.004929 0.016015 -0.307754 0.758965
4) pop 0.385014 4.023424 0.095693 0.923973
11) sq_pop 0.045081 0.760048 0.059313 0.952831
5) school -0.00181 0.042402 -0.042689 0.966042
12) sq_schoo 0.002908 0.018737 0.155218 0.87699
6) dn 0.016891 0.082146 0.205619 0.837543
7) di -0.00615 0.044907 -0.136944 0.891374
8) doecd -0.128785 0.07047 -1.827525 0.070863 *

(Compute the predicted value of sigma squared. I called the variable white here.)
?genr white = usq - uhat
Generated var. no. 27 (white)
?print white ;
Varname: white, period: 1, maxobs: 104, obs range: full 1-104, current 1-104
0.17250649 0.0937091 0.06118187 0.23386911 0.0414292 0.11437733
0.06025042 0.0464075 0.07327924 0.23085684 0.10679302 0.10504883
0.11756526 0.06349301 0.09773792 0.12168963 0.04041008 0.16196387
0.06654862 0.06692789 0.063826 0.18937995 0.11259869 0.06310774
0.1001629 0.06684202 0.06014481 0.07864518 0.05090151 0.04434699
0.09072454 0.12782298 0.07609723 0.11724359 0.09374153 0.07189149
0.18749554 0.08839836 0.25872786 0.14979643 0.08082458 0.04438492
0.13702814 0.10034044 0.11572292 0.0871593 0.22172726 0.11651776
0.16798941 0.03114771 0.18017353 0.09192368 0.07406656 0.10044421
0.06577735 0.26537031 0.08435223 0.10354016 0.12364531 0.06445796
0.08686912 0.14702005 -0.01932901 -0.01820181 0.01532212 0.13510758
0.02153179 0.03349384 0.06605224 0.03282077 0.00382394 0.02653838
0.0457267 -0.01304385 -0.05711496 -0.01331224 0.04127257 0.00260433
-0.07802578 0.00500487 0.09803616 -0.00654444 -0.03239613 0.09657317
0.11049164 0.07742527 0.06468901 0.04703476 0.08571397 0.12212656
0.13325017 0.09017319 0.21168211 0.10137207 0.15383209 0.06942081
0.1719757 0.22106678 0.11820486 0.18496509 0.06446051 0.06884226
0.01114725 0.05989262

(Since some of the predicted values are negative, we will replace them with the error term squared from the original regression (usq).)
?genr d2 = (white>0)
Generated var. no. 28 (d2)
?genr white2 = (d2*white)+((1-d2)*usq)
Generated var. no. 29 (white2)
?print white2 ;
Varname: white2, period: 1, maxobs: 104, obs range: full 1-104, current 1-104
0.17250649 0.0937091 0.06118187 0.23386911 0.0414292 0.11437733
0.06025042 0.0464075 0.07327924 0.23085684 0.10679302 0.10504883
0.11756526 0.06349301 0.09773792 0.12168963 0.04041008 0.16196387
0.06654862 0.06692789 0.063826 0.18937995 0.11259869 0.06310774
0.1001629 0.06684202 0.06014481 0.07864518 0.05090151 0.04434699
0.09072454 0.12782298 0.07609723 0.11724359 0.09374153 0.07189149
0.18749554 0.08839836 0.25872786 0.14979643 0.08082458 0.04438492
0.13702814 0.10034044 0.11572292 0.0871593 0.22172726 0.11651776
0.16798941 0.03114771 0.18017353 0.09192368 0.07406656 0.10044421
0.06577735 0.26537031 0.08435223 0.10354016 0.12364531 0.06445796
0.08686912 0.14702005 0.01183374 0.00585367 0.01532212 0.13510758
0.02153179 0.03349384 0.06605224 0.03282077 0.00382394 0.02653838
0.0457267 0.00179727 0.0706486 0.00298215 0.04127257 0.00260433
0.00265494 0.00500487 0.09803616 0.05692447 0.00330347 0.09657317
0.11049164 0.07742527 0.06468901 0.04703476 0.08571397 0.12212656
0.13325017 0.09017319 0.21168211 0.10137207 0.15383209 0.06942081
0.1719757 0.22106678 0.11820486 0.18496509 0.06446051 0.06884226
0.01114725 0.05989262

(Since these values are all positive, we can calculate the weights.)
(We must take the square root because white2 contains the predicted values of sigma squared, while the weight must be of the form 1/sigma.)
?genr wt3 = 1/sqrt(white2)
Generated var. no. 30 (wt3)

(Use weighted least squares for the original model.)
?wls wt3 grth const y60 inv pop school dn di doecd ;
WEIGHTED LEAST SQUARES ESTIMATES USING THE 104 OBSERVATIONS 1-104
Dependent variable - grth, Variable used as weight - wt3
VARIABLE COEFFICIENT STDERROR T STAT 2Prob(t > |T|)
0) constant -0.002658 0.436893 -0.006084 0.995158
2) y60 -0.321149 0.031089 -10.330012 < 0.0001 ***
3) inv 0.563459 0.077063 7.311662 < 0.0001 ***
4) pop -0.574039 0.129112 -4.446071 < 0.0001 ***
5) school 0.117407 0.0518 2.266527 0.025664 **
6) dn -0.608984 0.12859 -4.735859 < 0.0001 ***
7) di 0.291644 0.082775 3.52336 0.000654 ***
8) doecd 0.153463 0.066682 2.30142 0.02353 **

STATISTICS BASED ON RESIDUALS FOR THE WEIGHTED MODEL
R-squared is suppressed because it is not meaningful. F-statistic tests the
hypothesis that each coefficient (including the constant term) is zero.
Error Sum of Sq (ESS) 95.938836 Std Err of Resid. (sgmahat) 0.999681
F-statistic (8, 96) 223.125699 pvalue = Prob(F > 223.126) is < 0.0001
Durbin-Watson Stat. 2.095378 First-order auto corr coeff -0.052

STATISTICS BASED ON RESIDUALS FOR THE ORIGINAL MODEL
R-squared is computed as the square of the corr. between observed and
predicted dep. var.
Mean of dep. var. 0.451759 S.D. of dep. variable 0.477426
Error Sum of Sq (ESS) 9.494213 Std Err of Resid. (sgmahat) 0.314481
Unadjusted R-squared 0.597 Adjusted R-squared 0.567

MODEL SELECTION STATISTICS
SGMASQ 0.098898 AIC 0.106473 FPE 0.106506
HQ 0.115619 SCHWARZ 0.130491 SHIBATA 0.105335
GCV 0.10714 RICE 0.107889

Residuals for the unweighted model are saved as uhat. Type:
genr newname = uhat to use it in the future

(The Harvey-Godfrey approach)
(The auxiliary regression. The dependent variable is the natural log of the error squared from the original regression.)
?ols lnusq const inv sq_inv y60 sq_y60 pop sq_pop doecd ;

OLS ESTIMATES USING THE 104 OBSERVATIONS 1-104
Dependent variable - lnusq
VARIABLE COEFFICIENT STDERROR T STAT 2Prob(t > |T|)
0) constant -44.233223 66.370465 -0.666459 0.506716
3) inv -6.011087 3.990107 -1.506498 0.135223
10) sq_inv 1.212004 0.769682 1.574682 0.11862
2) y60 -2.185465 3.0685 -0.712226 0.478052
9) sq_y60 0.123268 0.198305 0.621609 0.535672
4) pop -46.462678 53.42847 -0.869624 0.386675
11) sq_pop -9.389342 10.100846 -0.92956 0.354931
8) doecd -0.675005 0.940099 -0.718014 0.474492

(hg is the predicted value of the natural log of sigma squared.)
?genr hg = lnusq - uhat
Generated var. no. 31 (hg)

(hg2 converts the predicted value into the form of sigma squared.)
?genr hg2 = exp (hg)
Generated var. no. 32 (hg2)
(Since we used the exponential function, the values for hg2 are all positive.)
 
(Again we take the square root to get the weights into the form of 1/sigma.)
?genr wt4 = 1/sqrt(hg2)
Generated var. no. 33 (wt4)
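
(Because the Harvey-Godfrey auxiliary regression models the log of sigma squared, exponentiating the fitted values always yields positive variance estimates, so no sign correction is needed before forming the weights. A minimal numpy sketch, where ln_sigma_sq_hat stands for the fitted values from the log-variance regression:)

    import numpy as np

    def harvey_godfrey_weights(ln_sigma_sq_hat):
        """Convert fitted values of ln(sigma^2) into 1/sigma weights."""
        sigma_sq = np.exp(ln_sigma_sq_hat)   # exp() guarantees positive values
        return 1.0 / np.sqrt(sigma_sq)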

(Use weighted least squares on the original model.)
?wls wt4 grth const y60 inv pop school dn di doecd ;

WEIGHTED LEAST SQUARES ESTIMATES USING THE 104 OBSERVATIONS 1-104
Dependent variable - grth, Variable used as weight - wt4
VARIABLE COEFFICIENT STDERROR T STAT 2Prob(t > |T|)
0) constant 1.067268 0.677709 1.574818 0.118588
2) y60 -0.389503 0.040642 -9.583738 < 0.0001 ***
3) inv 0.516414 0.081006 6.375041 < 0.0001 ***
4) pop -0.426705 0.210079 -2.031166 0.045003 **
5) school 0.13914 0.055822 2.492593 0.014394 **
6) dn -0.726031 0.148047 -4.904055 < 0.0001 ***
7) di 0.342198 0.095076 3.599197 0.000507 ***
8) doecd 0.276137 0.091657 3.012708 0.003311 ***

STATISTICS BASED ON RESIDUALS FOR THE WEIGHTED MODEL
R-squared is suppressed because it is not meaningful. F-statistic tests the
hypothesis that each coefficient (including the constant term) is zero.
Error Sum of Sq (ESS) 296.745366 Std Err of Resid. (sgmahat) 1.758152
F-statistic (8, 96) 116.389909 pvalue = Prob(F > 116.390) is < 0.0001
Durbin-Watson Stat. 2.125912 First-order auto corr coeff -0.065

STATISTICS BASED ON RESIDUALS FOR THE ORIGINAL MODEL
R-squared is computed as the square of the corr. between observed and
predicted dep. var.
Mean of dep. var. 0.451759 S.D. of dep. variable 0.477426
Error Sum of Sq (ESS) 9.308478 Std Err of Resid. (sgmahat) 0.311389
Unadjusted R-squared 0.604 Adjusted R-squared 0.575

MODEL SELECTION STATISTICS
SGMASQ 0.096963 AIC 0.10439 FPE 0.104422
HQ 0.113357 SCHWARZ 0.127939 SHIBATA 0.103275
GCV 0.105044 RICE 0.105778

Residuals for the unweighted model are saved as uhat. Type:
genr newname = uhat to use it in the future

Table for model comparison
(t-statistics are in parentheses)

              Glejser       B-P       White        H-G
constant       1.088        0.277     -0.003       1.067
              (1.47)       (0.56)    (-0.01)      (1.57)
y60           -0.389       -0.327     -0.321      -0.390
             (-9.00)     (-10.23)   (-10.33)     (-9.58)
inv            0.471        0.563      0.563       0.516
              (6.42)       (7.27)     (7.31)      (6.38)
pop           -0.449       -0.475     -0.574      -0.427
             (-1.93)      (-3.32)    (-4.45)     (-2.03)
school         0.158        0.109      0.117       0.139
              (2.94)       (2.14)     (2.27)      (2.49)
dn            -0.679       -0.581     -0.609      -0.726
             (-4.17)      (-4.32)    (-4.74)     (-4.90)
di             0.301        0.298      0.292       0.342
              (3.30)       (3.63)     (3.52)      (3.60)
doecd          0.282        0.195      0.153       0.276
              (2.74)       (2.73)     (2.30)      (3.01)
unadj R-sq     0.603        0.597      0.597       0.604
SGMASQ         0.097        0.099      0.099       0.097 *
HQ             0.113        0.115      0.116       0.113 *
GCV            0.105        0.107      0.107       0.105 *
AIC            0.104        0.106      0.106       0.104 *
SCHWARZ        0.128        0.130      0.130       0.128 *
RICE           0.106        0.108      0.108       0.106 *
FPE            0.104        0.106      0.107       0.104 *
SHIBATA        0.103        0.105      0.105       0.103 *

(* marks the lowest value of each model selection criterion.)
The Harvey-Godfrey approach has the lowest value for all eight model selection criteria, although the differences from the Glejser approach appear only beyond the third decimal place.

The signs of the coefficients all make sense.

The coefficient on y60 is negative. Countries with lower income in 1960 were presumably less developed, and they can grow faster than more developed countries because they can adopt the technologies of the more developed countries without having to discover them on their own.

The coefficient on inv is positive: countries that invest more tend to grow faster.

The coefficient on pop is negative. This makes sense because a country with a larger population must devote more of its resources to supporting its people. Remember that the marginal effect is the effect of that variable holding all other variables constant.

The coefficient on school is positive: countries with more educated people tend to have higher income growth than other countries.

The coefficient on dn is negative, because non-oil countries generally have less wealth than oil countries.

The coefficient on di is positive, because industrialized countries tend to grow faster than other countries.

The coefficient on doecd is positive. Members of the Organisation for Economic Co-operation and Development (OECD) probably have fewer trade restrictions than other countries, so they are able to grow faster.