2. Technology serving science - introduction to econometric software



#1: Stealing gretl

Luckily, gretl is one of those open-source boons: it can be downloaded for free from http://gretl.sourceforge.net/


#2:  Creating a model    
                                                                              
Once you have downloaded it, you need to import your data. The data file should be saved as an Excel 97-2003 document; more up-to-date formats are beyond gretl’s reach.

       
                      
If the import succeeds, you’ll be able to take a look at each variable separately (double-click a variable’s name with the left mouse button). It’s worth checking each one just to make sure everything went as planned.


After this initial familiarization with the new environment, select:
Model > Ordinary Least Squares, then arrange the variables as shown below and click OK.


[screenshot: the gretl model specification window with the variables arranged]

In less than a second, the program produces exactly the same parameters we’ve already calculated in Excel. Not very motivating, perhaps, except that gretl did it faster and more accurately, and produced plenty of additional indicators along the way.
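
If you prefer working from code, here is a minimal sketch of the same estimation in Python with pandas and statsmodels. The file name cats.xls and the column names MOOD, F, H, S and C are assumptions made up for this example; substitute whatever your own spreadsheet uses.

    # A hedged sketch: fit the same OLS model outside gretl.
    # "cats.xls" and the column names below are hypothetical.
    import pandas as pd
    import statsmodels.api as sm

    data = pd.read_excel("cats.xls")                  # the Excel 97-2003 file from above
    y = data["MOOD"]                                  # dependent variable: the cat's mood
    X = sm.add_constant(data[["F", "H", "S", "C"]])   # explanatory variables plus a constant

    model = sm.OLS(y, X).fit()
    print(model.summary())   # coefficients, standard errors, R-squared, F statistic, criteria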

 
#3: Interpreting the results

The numbers are a pure work of fiction, but we can still interpret them to show how it works. We’ll omit some of the more complicated indicators, since at this level of knowledge they would at best clutter our minds.


Coefficients (the estimated parameters of the equation)
                                                       
We already know that these are the consecutive elements of the equation, and we’ve also mastered how to put them in the right order. Still, we haven’t yet discussed how exactly they should be understood. For example:

If the coefficient on F equals 22.15, it means that, ceteris paribus[1], each additional gram of food makes the cat (on average) 22.15 units happier.

The H coefficient, the negative one, indicates that increasing human disturbance by 1 unit lowers the cat’s mood by 14.7 units on average (ceteris paribus).

S, the only zero-one variable in the model, can be interpreted as follows: the cat’s mood is, ceteris paribus, on average better by 7.25 units if the cat has slept (that is, if S equals 1).

The last question concerning these parameters is what the constant means. The idea is to put 0 in place of all the remaining variables, meaning the cat got no food, no caressing and so on. The default mood would then be 13, so the imaginary graph of the cat’s mood originates from the point (0, 13). Sometimes the constant cannot be interpreted, because the variables would never actually equal zero in the real world.
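
To see the whole equation in action, here is a toy calculation that plugs the coefficients quoted above into the fitted line for one imaginary cat. The input values for F, H and S are made up, and the C term is left out just to keep the example short.

    # Toy prediction using the coefficients from the text (C omitted for brevity).
    const, b_F, b_H, b_S = 13, 22.15, -14.7, 7.25

    F, H, S = 2.0, 1.0, 1   # 2 grams of food, 1 unit of disturbance, the cat has slept
    mood = const + b_F * F + b_H * H + b_S * S
    print(mood)             # 13 + 44.30 - 14.70 + 7.25 = 49.85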


Standard errors

These estimate the standard deviation of each coefficient’s sampling distribution: they tell you how much the estimated parameters would sway from one sample of cats to another around the calculated values. The bigger the standard error, the weaker the model, but as long as it doesn’t exceed half of the coefficient it’s fine. As a rule of thumb:

(standard error/coefficient) x 100 < 50


Calculating that for variable C we’ll get:

(0.0235 / 5) x 100 < 50

0.47 < 50
                           
This means that the variable’s parameter is good; too good, I would say. Since the numbers in the example are quite random, I expect the indicators to keep hitting extremes.
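
If you want to run the rule of thumb over the output yourself, it is a one-liner; the values below are simply the ones from the calculation above.

    # Relative standard error in per cent; the rule of thumb wants it below 50.
    std_err, coef = 0.0235, 5
    relative = std_err / coef * 100
    print(relative, relative < 50)   # 0.47 True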


R-squared (R2)

It’s the ratio of the variation explained by the model to the total variation of the dependent variable. It tells you how much of the change in the cat’s mood is explained by changes in the quantity of food, caressing and so on; in other words, how well the model we created explains the mood’s fluctuations.

R2 ranges from 0 to 1, 0 being the worst and 1 the best possible model. There is no fixed number separating good from bad: in big cross-sectional models based on international statistics, R2 is usually expected to fall within 0.30–0.70, while for small ones concerning individual households, companies or cats, acceptable values range from about 0.05 to 0.40. The limits get stricter in time-series models, where R2 is expected to exceed 0.70.

Our little cat-based model, with its R2 of 0.1010, seems all right.
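
For the curious, here is a sketch of what the number actually measures, assuming `model` and `y` from the earlier Python snippet: one minus the unexplained variation divided by the total variation of the dependent variable.

    # R-squared by hand, assuming `model` and `y` from the earlier snippet.
    import numpy as np

    y_hat = model.fittedvalues
    ss_res = np.sum((y - y_hat) ** 2)      # variation the model fails to explain
    ss_tot = np.sum((y - y.mean()) ** 2)   # total variation of the cat's mood
    print(1 - ss_res / ss_tot, model.rsquared)   # the two numbers should match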


Adjusted R2


The tricky thing about R2 is that its value increases as we put more and more variables into the model, so the temptation arises to throw in as many variables as possible. Adjusted R2 puts an end to this sick fantasy. It measures the same thing as plain R2; the only difference is that it punishes us for each additional variable added to the model, deducting a little from the original indicator. That’s why adjusted R2 is always a bit smaller than R2, and with a poor enough model it can even dip below zero.
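
The adjustment itself is a small formula, sketched below: n stands for the number of observations and k for the number of explanatory variables (without the constant). The n = 40 used in the example is hypothetical, since the text never states the sample size.

    # Adjusted R-squared; with a weak R2 and several variables it can even go negative.
    def adjusted_r2(r2, n, k):
        return 1 - (1 - r2) * (n - 1) / (n - k - 1)

    print(adjusted_r2(0.1010, 40, 4))   # roughly -0.002 with these hypothetical numbers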


p-value of t-ratio  (empirical significance value)

To interpret this one we have to switch to more abstract thinking. There is a hypothesis that a variable (let’s say F) lacks empirical significance. The term empirical significance comes from statistics and tells you whether something matters to the model or not. So, one more time:

hypothesis 0: F is not important to the model
and its alternative, hypothesis 1: F is important to the model

We are also given a so-called significance level (α) of 0.05. If:

p-value > α  then we keep hypothesis 0: F is not important to the model
p-value < α  then we accept the alternative hypothesis 1: F is important to the model

It all comes down to a very simple thing: you look at the p-value column. Anything bigger than 0.05 – bad, not important.

In our model only the constant is empirically significant (the constant almost always is), so as you can see this is not a very good model.

The t-ratio column is there so that you can check for yourself which hypothesis to accept and which to reject, using a statistical table of Student’s t-distribution. Since we already have the p-value, doing it by hand seems rather pointless.
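
If you ever do want to reproduce a p-value yourself without the table, scipy can do it; the t-ratio and degrees of freedom below are hypothetical, just to show the mechanics of a two-sided test.

    # Two-sided p-value from a t-ratio; the numbers here are made up.
    from scipy import stats

    t_ratio, df = 1.3, 35                      # df = observations minus estimated parameters
    p_value = 2 * stats.t.sf(abs(t_ratio), df)
    print(p_value, p_value < 0.05)             # large p-value: this one would not be significant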


Wald test (F statistic)

It answers the same kind of question as the t-ratio significance test, with one difference: the t-test treats each variable separately, while the Wald test checks whether the variables make sense all together. Instead of Student’s t it is based on the F distribution. Once more we are given both the F value (0.927766), for our own calculations, and the p-value (0.459739), which makes things much easier.

H0 says that all the variables taken together are unimportant to the model.
The alternative H1 states the opposite.
The significance level is the usual α = 0.05.

p-value 0.459739 > α = 0.05

Therefore we have to accept H0 and admit that the model is totally wrong. Not that it comes as a surprise.
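
The same verdict can be reproduced in code. The degrees of freedom below are assumptions (4 explanatory variables and a guessed residual count); if you fitted the model with statsmodels, you get the statistic directly as model.fvalue and model.f_pvalue.

    # Joint (Wald / F) test by hand; the degrees of freedom are hypothetical.
    from scipy import stats

    F_stat = 0.927766
    df_model, df_resid = 4, 35                # guesses; gretl reports the real ones
    p_value = stats.f.sf(F_stat, df_model, df_resid)
    print(p_value, p_value > 0.05)            # p above 0.05: the variables are jointly insignificant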



Information criteria

Schwarz, Akaike and Hannan-Quinn are all information criteria. Their values serve comparison purposes only: if you have two models and wonder which to choose, you pick the one with the lower values of the information criteria. The values themselves tell you nothing more.
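
As a sketch of how that comparison looks in practice, assuming `model`, `X` and `y` from the first snippet: fit a second, smaller model and put the criteria side by side (statsmodels exposes Akaike and Schwarz as aic and bic).

    # Compare two candidate models by their information criteria (lower is better).
    import statsmodels.api as sm

    X_small = X.drop(columns=["C"])          # hypothetical: drop one variable
    model_small = sm.OLS(y, X_small).fit()

    print(model.aic, model_small.aic)        # Akaike
    print(model.bic, model_small.bic)        # Schwarz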
                           
That’s all for the most basic analysis of a model. Not much, but as you can see it is already enough to tell whether a model makes sense or not. This one clearly makes none.

              




[1] ceteris paribus – ‘with other things the same’. It’s mostly a formal requirement: each time you interpret a coefficient you have to add ‘ceteris paribus’ somewhere in the sentence. I’m never quite sure where exactly to put it; just make sure it’s there, anywhere.