
The F Ratio

PRE alone, therefore, is not a sufficient guide in our quest to reduce error. Yes, it tells us whether we are reducing error. But it does not take into account the cost of that reduction. Just as the sum of squares needed to be divided by n-1 (degrees of freedom) to get a comparable measure of variation (variance), we need a way to correct PRE to take into account the number of parameters it takes to realize the gains in PRE. This brings us to the F ratio.

Let’s go back to our analysis of variance table (reprinted below) and focus on the three columns that come between SS and PRE: df, MS, and F. Just a note: df stands for degrees of freedom, MS stands for Mean Square, and F, well, that stands for the F ratio.

Let’s start by looking at the last row of the table.

If you take the SS and divide it by the degrees of freedom (which is n-1 for the empty model), you end up with the Mean Square, also called variance or \(s^2\).

Try it out. We’ve written the code to calculate the variance of Thumb length using the function var(). Use R as a calculator and try taking the sum of squares for the empty model and dividing by the degrees of freedom.

require(mosaic)
require(ggformula)
require(supernova)
require(lsr)

Fingers <- read.csv(file = "https://raw.githubusercontent.com/UCLATALL/intro-stats-modeling/master/datasets/fingers.csv", header = TRUE, sep = ",")
Fingers$Height2Group <- ntile(Fingers$Height, 2)
Fingers$Height2Group <- factor(Fingers$Height2Group, levels = c(1, 2), labels = c("short", "tall"))
Fingers$Height3Group <- ntile(Fingers$Height, 3)
Fingers$Height3Group <- factor(Fingers$Height3Group, levels = c(1, 2, 3), labels = c("short", "medium", "tall"))
Height2Group.model <- lm(Thumb ~ Height2Group, data = Fingers)
Height3Group.model <- lm(Thumb ~ Height3Group, data = Fingers)

# this calculates the variance of Thumb
var(Fingers$Thumb)

# try taking the SS from the empty model and dividing it by df
# (you can get these numbers from the supernova table)
11880.21 / 156
Use supernova() on Height2Group.model to find the values of SS and df
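To see where those numbers come from, a sketch like the one below (using the supernova() function from the supernova package loaded above) prints the full ANOVA table; its Total row contains the SS (11880.21) and df (156) used in this section.

# print the supernova (ANOVA) table for the two-group model;
# the Total row holds the SS and df for the empty model
supernova(Height2Group.model)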

Partitioning Degrees of Freedom

Now look at the degrees of freedom (df) column in the table. Just as SS Total can be partitioned into SS Model and SS Error, the df for model and the df for error add up to the total degrees of freedom. Let’s see why the degrees of freedom are partitioned as they are between model and error.

It’s helpful to think about degrees of freedom as a budget. The more data (represented by n) you have, the more degrees of freedom you have. Whenever you estimate a parameter, you spend (or lose) a degree of freedom, because each estimated parameter places a restriction on the data points.

Let’s say we have a distribution with three scores and the mean is 5. If we know the mean is 5, do we really have three data points that could be any value? Not really. If we know the mean is 5, then if we know any two of the three numbers, we can figure out the third. So even though two of the data points can be any value, the third is restricted because the mean has to be 5. So, we say there are only two (or n-1) degrees of freedom.
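Here is that idea as a quick R sketch (the two known scores are made-up numbers, just for illustration):

# suppose three scores have a mean of 5, and we know two of them
known <- c(2, 6)

# the third score is forced: the three must sum to 3 * 5 = 15
third <- 3 * 5 - sum(known)
third                   # 7
mean(c(known, third))   # 5, as required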

Let’s think about the degrees of freedom we have with thumb lengths. We start with a maximum budget of n = 157, because we have 157 thumb lengths in our Fingers data frame. When we created an empty model, we estimated one parameter (the Grand Mean). We used that mean to calculate our SS and eventually MS. So we could say we spent a degree of freedom in order to get our MS for the empty model, which leaves us with n - 1, or 156, df.
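As a quick check (assuming the Fingers data frame loaded above; the Empty.model name is just for illustration):

# the empty model estimates a single parameter, the Grand Mean
Empty.model <- lm(Thumb ~ NULL, data = Fingers)

n <- nrow(Fingers)   # 157 thumb lengths
n - 1                # 156 df left after spending one on the Grand Mean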


The two-group model (Height2Group.model) costs two degrees of freedom in total because it estimates two parameters, one more than the empty model. Looking at the df and SS for the row labeled “Model (error reduced)” reveals that, for the price of just one more degree of freedom, we have reduced our squared error by 830.88 (the \(SS_{Model}\)).

Let’s think about the df we have left over. We started with a budget of n=157 and then spent two degrees of freedom on the Height2Group model, which means we have n-2, or 155 df, left over. A general rule of thumb (no pun intended) is this: every time you add a parameter to a model you “spend” one degree of freedom.
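The budget arithmetic, using the numbers from this section, looks like this:

df_total <- 157 - 1   # one df spent on the Grand Mean
df_model <- 2 - 1     # the one extra parameter in the two-group model
df_error <- 157 - 2   # the 155 df left over
df_model + df_error   # 156, which matches df_total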


Varieties of MS

If you take an SS and divide it by its degrees of freedom, you end up with a Mean Square. But there are a few different kinds of MS, as shown in the three rows of our supernova table.

The last row is the variance, or average squared error, from our empty model. We call that MS Total and could write it like this, \(MS_{Total}\), to indicate the MS from the empty model depicted in the row labeled Total. When you look at that variance (76.16), think back to the idea of the average blue square based on the deviations from the data points to the Grand Mean. (The figure below depicts the blue squares for TinyFingers, but it would be similar for the larger Fingers data set.)

We can calculate \(MS_{Total}\) like this:

\[MS_{Total} = SS_{Total}/df_{Total}\]
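You can check this formula against var() with the numbers from the supernova table:

# MS Total is SS Total divided by df Total
11880.21 / 156        # roughly 76.16
var(Fingers$Thumb)    # the same value, computed directly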


The F Ratio

Now let’s get to the F ratio. In our table, we have produced two different estimates of variance under the Height2Group model: MS Model and MS Error.

MS Model tells us what the variance would be if every student were assumed to have the thumb length predicted by the Height2Group model; MS Error tells us what the leftover variance would be after subtracting out the model. The F ratio is calculated as MS Model divided by MS Error.
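Written as a formula, using the same SS-divided-by-df idea as before:

\[F = \frac{MS_{Model}}{MS_{Error}} = \frac{SS_{Model}/df_{Model}}{SS_{Error}/df_{Error}}\]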

This ratio turns out to be a very useful statistic. If there were no effect of Height2Group on thumb length we would expect the variance estimated using the group means to be approximately the same as the variance within the groups, resulting in an F ratio of approximately 1. But if the variation across groups is more than the variation within groups, the F ratio would rise above 1. A larger F ratio means a stronger effect of our model.
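As a rough check with the numbers from this section (SS Total of 11880.21, SS Model of 830.88, 1 df for model, and 155 df for error), the F ratio for Height2Group.model works out to:

MS_model <- 830.88 / 1                  # SS Model / df Model
MS_error <- (11880.21 - 830.88) / 155   # SS Error / df Error
MS_model / MS_error                     # F ratio, roughly 11.7

That value is well above 1, consistent with Height2Group reducing more error than we would expect from the one degree of freedom it cost.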


Just as variance provided a way to account for sample size when using SS to measure error from our model, the F ratio provides a way to take degrees of freedom into account when judging the fit of a model. The F ratio gives us a sense of whether the degrees of freedom we spent to make our model more complicated were “worth it”.

