Randomness
Let’s take a little detour into the notion of randomness. First let’s make a distinction between what we mean by “random” in regular life and what we mean by “random” in statistics.
To start, let’s look at what humans consider “random”. Students were asked to think of a random number between 1 and 20 and enter it into a survey.
The results of 211 students entering a random number between 1 and 20 are provided for you in a data frame called Survey. The variable Any1_20 holds the number each student entered. Take a look at a few rows of this data frame and make a histogram of Any1_20. Note—you probably want to have 20 bins for the 20 possible values of this variable.
require(mosaic)
require(ggformula)
require(haven)
Survey <- read.csv(file = "https://raw.githubusercontent.com/UCLATALL/introstatsmodeling/master/datasets/studentsurvey.csv", header = TRUE, sep = ",")
# take a look at a few lines of Survey
head(Survey)
# make a histogram of Any1_20, with one bin per possible value
gf_histogram(~ Any1_20, data = Survey, bins = 20)
Students have many ideas of randomness, including “unpredictable,” “unexpected,” “unlikely,” or “weird.” To the students in this survey, some particular numbers sound more random than others. They seem to think, for example, that 17 and 7 sound more random than 10 or 15.
The mathematical concept of random is different. Whereas we often think that random means unpredictable, random processes, the way statisticians think of them, are actually highly predictable, governed by a probability distribution.
If each of the numbers 1 to 20 had an equal likelihood of being selected, we could model that as a random process, just like we did with die rolls. Although it is hard to predict which number would be generated on a single trial (we’d be wrong, on average, 19 out of 20 times), it is highly predictable that we would have a uniform distribution in the long run.
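To put numbers on that single-trial unpredictability, here is a quick sketch of the arithmetic (plain R, no packages needed):

# under a uniform model, every number from 1 to 20 has the same probability
1/20   # chance of predicting a single trial correctly: .05
19/20  # chance of being wrong on a single trial: .95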
Exploring Randomness
Let’s use resample() like we did before to explore what the results of a random process can look like. We will start with the task we just discussed: 211 students asked to generate a random number between 1 and 20. But this time we will simulate the data being generated by a random process in which each number has an equal probability of being selected.
We can do this in two ways. We could create a vector to represent a 20-sided die and then resample from it 211 times.
side20 <- c(1:20)
resample(side20, 211)
We could also skip the extra step of creating the R object side20 and just resample directly from the numbers 1:20 a bunch of times.
resample(1:20, 211)
Of course, if we don’t save the results of this resample into a vector, we’ll have done this for nothing. So modify the code below to save the 211 random numbers into Any1_20. The function gf_histogram() needs a data frame, so we’ll create a data frame called Computer and put Any1_20 in it. Then create a histogram of the computer-generated Any1_20.
require(mosaic)
require(tidyverse)
set.seed(13)
# create a random sample of 211 numbers between 1 and 20
Any1_20 <- resample(1:20, 211)
# put Any1_20 into a new data frame called Computer
Computer <- data.frame(Any1_20)
# make a histogram of Any1_20 from Computer
gf_histogram(~ Any1_20, data = Computer)
Just a note: the n in the title of a histogram stands for how many values are in the distribution.
The computer-generated random numbers are much more uniform than the human-generated ones; the distribution is definitely more rectangular. It’s not perfectly rectangular, though.
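If you want to quantify that unevenness rather than just eyeball it, one option, sketched here assuming Computer still holds the 211 simulated numbers, is mosaic’s tally(). Under a perfectly rectangular distribution, each number would come up about 211/20, or roughly 10.5, times.

# count how many times each of the 20 numbers came up in the 211 simulated draws
tally(~ Any1_20, data = Computer)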
Make a prediction. What would the histogram look like if the computer generated 10,000 numbers instead of 211? Change the code below to try that out. Make sure to change the title as well!
require(mosaic)
require(tidyverse)
set.seed(13)
# generate a random sample of 10,000 numbers between 1 and 20
Any1_20 <- resample(1:20, 10000)
# put Any1_20 into a new data frame called Computer
Computer <- data.frame(Any1_20)
# make a histogram of Any1_20 from Computer
gf_histogram(~ Any1_20, data = Computer, fill = "dodgerblue", color = "gray", bins = 20) %>%
  gf_labs(title = "Computer generated random numbers (n = 10000)")
test_object("Any1_20")
test_data_frame("Computer")
test_function("gf_histogram")
test_function("gf_labs", args="title")
test_error()
success_msg("You're doing a fantastic job!")
We can see that even the distribution of 10,000 randomly generated numbers from a simulated 20-sided die is not perfectly even. But it is more even than the smaller sample of 211 numbers in the previous histogram.
This results from what we previously called the Law of Large Numbers. Provided each of the 20 numbers truly has an equal probability of coming up, the distribution will be perfectly rectangular in the long run. As you can see, the long run would need to be pretty long—more than 10,000 rolls of the die in this case.
This leads us to a critical feature of random processes. Even though they are very unpredictable in the short run—for example, if we asked you to predict what the next roll would be of a 20-sided die you would only have a 1 in 20 chance of predicting correctly—they are actually very predictable in the long run.
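Here is a rough sketch of that long-run predictability (the sample sizes are arbitrary choices for illustration). As the number of simulated rolls grows, the proportion of any particular number, say 17, should settle toward 1/20 = .05. This assumes mosaic is loaded so resample() is available.

set.seed(13)
# the proportion of 17s should settle toward 1/20 = .05 as the number of rolls grows
for (n in c(211, 10000, 1000000)) {
  rolls <- resample(1:20, n)
  cat(sprintf("n = %7d: proportion of 17s = %.4f\n", n, mean(rolls == 17)))
}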
In fact, a truly random DGP is one of the easiest kinds of DGPs for us to model as statisticians. Although we only consider an example of a uniform random process here, there are lots of other models of randomness. These are called probability distributions. We will learn about other models of randomness—such as the normal distribution—as we go.
Statisticians tend to model unexplained variation, whether real or induced by data collection, as though it were generated by a random process (e.g., uniform, or normal, or some other probability distribution). They do this because it gives them traction: it is easy to predict what unexplained variation should look like if the DGP is random. Then they can compare what they predicted, assuming a random process, with what the data distribution actually looks like.
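As a small sketch of this compare-to-prediction strategy (assuming the Survey data frame from earlier is still loaded), we could set the observed counts of the human-generated numbers against what a uniform DGP would predict, about 211/20, or roughly 10.5, students per number:

# observed: how often the 211 students actually chose each number
tally(~ Any1_20, data = Survey)
# predicted under a uniform DGP: about the same count for every number
211/20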
There are actually even more reasons to model unexplained variation as random! This will turn out to be a very useful strategy, an idea we will continue to explore in later chapters.