# What is econometrics 2?

There is in fact a theoretical answer to this, and it may be of importance to you in the long run. There are three main schools of statistical thought and several minor schools. The three main ones are, in order of discovery, the Bayesian, the Likelihoodist and the Frequentist. The differences matter because they can provide different answers from the exact same data, even when the same formulas are run.

From the language of your posting, you are not using the Bayesian methodology. Bayesian methods do not have a null hypothesis, so they cannot have a p-value. Rather than computing the probability of observing a result as extreme as, or more extreme than, the one seen if the null were true, Bayesian methods compute a positive probability that each separate hypothesis is true. They do not falsify a null; they assign a positive probability to each competing hypothesis. Nor are they limited to two hypotheses, though you can have two.

The second is the Likelihoodist model, and this is where the language in your posting seems to point. The Likelihoodist model has p-values but no cut-off value, $\alpha$, while the Frequentist school has a cut-off value $\alpha$ but no such thing as a p-value. Both schools are part of a broader scheme called the null hypothesis method.

In the null hypothesis method, you choose a null and grant it 100% probability of being true, ex ante. Statistics is a branch of rhetoric and not a branch of mathematics. Like physics, it uses mathematics intensively, but it is not part of the field of mathematics. Rather, it is about how to decide arguments. The ordinary null used in statistics is the "no effect" hypothesis. That is, you argue that all $\beta=0$: the independent variables have no effect at all on the dependent variable. If you falsify that null, then you are arguing that there is an effect.

In the case of the Likelihoodist school, there is no magical cut-off value for the p-value. You just report it. It is the weight of the evidence against the null. A p-value of less than .15 leaves a lot of room for chance, but if you are okay with that level, then it is significant to you. This is an either-or discussion. It either is good enough, or it is not good enough.
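To make the Likelihoodist workflow concrete, here is a minimal sketch with simulated data (the sample size, regressors and effect size are all made up for illustration, not part of the answer): it fits OLS and simply reports the p-value of the joint null that every slope is zero, with no cut-off applied.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 50, 2                              # hypothetical sample size and regressor count
X = rng.normal(size=(n, p))
y = 0.5 * X[:, 0] + rng.normal(size=n)    # illustrative: one real effect, one null effect

# OLS with an intercept
Xc = np.column_stack([np.ones(n), X])
beta, rss, *_ = np.linalg.lstsq(Xc, y, rcond=None)
rss = float(rss[0])
tss = float(np.sum((y - y.mean()) ** 2))

# F statistic for the joint null that every slope is zero
F = ((tss - rss) / p) / (rss / (n - p - 1))
p_value = stats.f.sf(F, p, n - p - 1)

# A Likelihoodist simply reports this as the weight of evidence against the null.
print(f"F = {F:.2f}, p-value = {p_value:.4f}")
```

Whether that p-value is "good enough" is left to the reader; no $\alpha$ is involved.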

If you believe it is good enough, then you use the estimates from the OLS equation because they are the most probable values given your data. From that point onward, the adjusted $R^2$ gives the best measure of the amount of variability that your model explains.
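For reference, the adjusted $R^2$ penalizes the raw $R^2$ for the number of regressors. A small sketch of the standard formula, with hypothetical numbers plugged in:

```python
def adjusted_r2(r2: float, n: int, p: int) -> float:
    """Adjusted R^2 for n observations and p regressors (excluding the intercept)."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# Hypothetical values: a raw R^2 of 0.40 from 50 observations and 3 regressors.
print(adjusted_r2(0.40, n=50, p=3))  # → ~0.361, always at or below the raw R^2
```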

If you do not believe that the p-value is good enough, then you are faced with a peculiar problem. You need to throw away your OLS equation, according to theory. If the null is not falsified, then no new information exists and you ignore your results. It is as if you had never performed the analysis in the first place. The Likelihoodist school is epistemological; that is, it is a knowledge-seeking tool. If the p-value is not adequate for your purposes, then there is no added knowledge and you need to spend your life looking at something else rather than waste your time on this topic.

On the other hand, if you use the Frequentist school, the results have a different interpretation. In the Frequentist school, the OLS equations are not the maximum likelihood estimator (MLE), as they are with Likelihoodists; instead, the equations are the minimum variance unbiased estimators (MVUE). It happens to work out nicely that the two schools produce the same answers for simple problems, such as regression.

The Frequentist school is behavioral. It tells you how to behave. For example, if you were doing quality assurance on batches coming out of a factory and you got results indicating poor quality, falsifying the null of good quality, then you destroy the batch, otherwise you accept the batch. You do not know the batch is bad just because it tests bad. You do not know if a batch is good, just because it tests good. This is not about knowledge, it is about how you should behave. Do you destroy the batch when you reject the null and accept the batch when you accept the null? Yes.

In order to do this, before you collect the data, you set a value called $\alpha$, which is your cut-off. This value is set with respect to both the importance of false positives and the importance of false negatives. If the test statistic is in the rejection region, then you behave as if the OLS parameters were the true parameters. The adjusted $R^2$ then provides the degree of variability explained by the model. If the test statistic is in the acceptance region, then you have accepted the null. The null was that all $\beta=0$, so you are obligated to behave as if there were no relationship between any of the independent variables and the dependent variable, and the $R^2$ does not matter because it is the $R^2$ of a model that you are not using.
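The behavioral rule above can be sketched as code. The $\alpha$, sample size and observed F statistic below are hypothetical placeholders; the point is only that $\alpha$ is fixed before the data arrive and the outcome dictates behavior, not belief.

```python
from scipy import stats

alpha = 0.05       # chosen before any data, weighing false-positive vs false-negative costs
p_reg = 3          # hypothetical number of regressors
n = 50             # hypothetical sample size
F_observed = 4.2   # hypothetical F statistic from the fitted regression

critical_value = stats.f.isf(alpha, p_reg, n - p_reg - 1)

if F_observed > critical_value:
    # rejection region: behave as if the OLS estimates are the true parameters
    print("reject the null; use the fitted model and its adjusted R^2")
else:
    # acceptance region: behave as if all betas are zero; the R^2 is irrelevant
    print("accept the null; act as if there is no relationship")
```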

It is difficult to determine what is going on with your model using null hypothesis methods. It could be spurious correlations that explain a lot of the variability of your sample, but not the variability that you would find out of sample. It could be that the effects are close to zero but not zero, and the effect size is too small to register clearly as significant; that is to say, your sample lacks the power to detect the effect. The problem with null hypothesis methods, in general, is that there is no way to distinguish a null that is truly false from one where the result was due to a weird sample. It is impossible to distinguish truth from chance effects.

If you have a professor around who uses Bayesian methods, and they are uncommon in economics, you could have them run it as a Bayesian model. Bayesian hypotheses are combinatoric, so if you have three regressors, $x_1, x_2, x_3$, the researcher has to run all possible combinations of the independent variables and test each submodel for the probability that it is the true model. This may help distinguish between the various parts and the things that can go wrong with MLE and MVUE models, such as multicollinearity, small effects and so forth.
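A rough sketch of that combinatoric comparison, using a BIC approximation to posterior model probabilities under equal priors as a stand-in for a full Bayesian computation (the data, effect sizes and the BIC shortcut itself are assumptions for illustration, not part of the answer above): all $2^3$ submodels of three regressors are fitted and weighted.

```python
from itertools import combinations
import numpy as np

rng = np.random.default_rng(1)
n = 100
X = rng.normal(size=(n, 3))              # hypothetical regressors x1, x2, x3
y = 1.0 * X[:, 0] + rng.normal(size=n)   # illustrative: only x1 truly matters

def bic(cols):
    """BIC of the OLS submodel using the given regressor columns plus an intercept."""
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    k = Z.shape[1]
    return n * np.log(resid @ resid / n) + k * np.log(n)

# All 2^3 submodels, from the intercept-only model to the full model.
models = [cols for r in range(4) for cols in combinations(range(3), r)]
bics = np.array([bic(m) for m in models])

# exp(-BIC/2), normalized: an approximation to posterior model probabilities
# under equal prior weight on each submodel.
w = np.exp(-0.5 * (bics - bics.min()))
probs = w / w.sum()

for m, pr in zip(models, probs):
    print(m, round(float(pr), 3))
```

Each submodel gets a positive probability of being the true model, rather than a verdict of rejected or not rejected.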


answered Dec 23 '16 at 18:19

Dave Harris