Statistics and Sampling Distributions
Before addressing how to estimate model parameters and their uncertainty from data, we need to define some statistical terminology. In this section we introduce the concepts of a 'population' and a 'sample', explain how they differ, and show how we can use the statistics of a sample to make inferences about a population.
In statistics, a population consists of all members of a defined group from which we may collect data. Populations are often very large, or even infinite, so it is usually infeasible to conduct a census, that is, to collect data for every member of the population. Therefore, we can study only a subset of the population, which we refer to as a sample. Statistical inference is the process of deducing properties of the whole population (which we cannot measure directly) from a sample (whose properties we can measure directly).
In statistical theory, the distinction between sample and population is fundamental. The sample properties (e.g. mean, mode, median, range, standard deviation, interquartile range, etc.) are called statistics, whereas the true (i.e. population) mean, mode, etc. are called parameters.
We compute the mean, mode, median, and other sample statistics directly from the data in the sample. For each sample statistic, we assume the existence of a corresponding true parameter of the population, which we cannot measure. If we could somehow obtain a data point for every member of the population, or in other words sample the whole population (which we cannot), these sample statistics would become equal to the true population parameters. Since this is generally not possible, we need to establish a set of tools that relate sample statistics to true population parameters, which is the main topic of this chapter.
Parameters:
Population parameters are generally denoted as $\theta$ and represent some characteristic of the population (e.g. mean, variance, proportion). In this chapter, we will be specifically referring to the parameters $\theta$ in the model $f(x; \theta)$. Below are some examples of model parameters for distributions already considered and summarised in Table 1: for instance, the rate $\lambda$ of the exponential distribution $\text{Exp}(\lambda)$, and the mean $\mu$ and variance $\sigma^2$ of the normal distribution $N(\mu, \sigma^2)$.
Thus, in this chapter we will examine methods for computing an estimate $\hat{\theta}$ of $\theta$ given some sample data. In particular, Point Estimation involves obtaining a single value for $\theta$ (the point estimate $\hat{\theta}$), and Interval Estimation involves defining an interval $[\hat{\theta}_L, \hat{\theta}_U]$ in which we are 'confident' $\theta$ lies.
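As a concrete illustration of the two approaches (using the population mean $\mu$ as the parameter, and assuming, purely for illustration, a normally distributed sample mean with known population standard deviation $\sigma$): a natural point estimate is the sample mean, $\hat{\mu} = \bar{x}$, while a typical $95\%$ interval estimate, of the kind constructed later in this chapter, takes the form
$$\mu \in \left[\bar{x} - 1.96\,\frac{\sigma}{\sqrt{n}},\;\; \bar{x} + 1.96\,\frac{\sigma}{\sqrt{n}}\right].$$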
Sample Statistics:
In a general sense, a statistic is any quantity (e.g. mean, variance, etc.) calculated from sample data. At this point we will make a slight notation change to distinguish between a random variable given by some population distribution and sample data drawn from that distribution:
$X$: the random variable associated with the underlying population distribution.
$X_1, X_2, \ldots, X_n$: a random sample of $n$ data points from the distribution for $X$.
Note that each $X_i$ is also a random variable, as it represents a sample data point before it is measured.
So we will say that we take a random sample of data from some underlying population distribution for $X$. The sample data consists of $X_1, X_2, \ldots, X_n$ for a sample size $n$. Please note again that each $X_i$ is also a random variable and represents a sample measurement before it is recorded from the population distribution. From a sample we can compute a statistic, such as the sample mean:
$$\bar{X} = \frac{1}{n} \sum_{i=1}^{n} X_i$$
As an example, imagine going out and asking the first $n$ people you meet on campus what their height is; this gives a set of measured values $x_1, x_2, \ldots, x_n$ and a sample mean $\bar{x}$ for today. If you were to repeat this same process tomorrow, it is highly unlikely that you would randomly sample the same people from the campus population, and therefore your $\bar{x}$ will be different, even though the population (and its properties) will be the same (i.e. students on campus and their heights). Therefore $\bar{X}$ is a random variable, and it is expected that for each sampling trial the value of $\bar{x}$ will be different.
NOTE: Here we have made the important assumption that the sampled data points are independent and identically distributed ('i.i.d.'). This property will be assumed throughout the rest of the chapter and will also prove useful in estimating model parameters.
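To make the randomness of $\bar{X}$ concrete, here is a minimal Python sketch of the campus-height thought experiment. The population distribution, its parameters, the sample size, and the seed are all illustrative assumptions, not values from the lecture data:

```python
import numpy as np

rng = np.random.default_rng(seed=1)  # reproducible random number generator

# Hypothetical population of campus heights: X ~ N(1.75 m, 0.10 m).
# These parameters are purely illustrative.
mu_true, sigma_true = 1.75, 0.10
n = 10  # sample size

# Two independent i.i.d. sampling trials from the same population
sample_today = rng.normal(mu_true, sigma_true, size=n)
sample_tomorrow = rng.normal(mu_true, sigma_true, size=n)

# The sample mean is itself a random variable: each trial gives a new value,
# even though the population (and its true mean) has not changed.
print(f"x-bar (today):    {sample_today.mean():.3f}")
print(f"x-bar (tomorrow): {sample_tomorrow.mean():.3f}")
```

Running this prints two different values of $\bar{x}$ from the same unchanged population, which is exactly the behaviour described above.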
Sampling Distributions:
As just mentioned above, we can repeatedly sample from some underlying population distribution to compute a sample statistic (e.g. mean, variance), and we refer to this process as conducting a number of sampling trials. Once we conduct a sufficient number of sampling trials, we end up generating a probability distribution for our sample statistic, which we call a sampling distribution:
The probability distribution of a statistic (such as the mean or standard deviation) is known as its Sampling Distribution. The standard deviation of a sampling distribution (e.g. of the sample means) is called the Standard Error (SE).
At this point, we have introduced a lot of terminology, and it is easy to confuse the distribution of a population with the distribution of a statistic (i.e. a sampling distribution). So let us consider an illustrative example of the sampling distribution of a sample mean, using data we have previously analysed in lectures.
Example: Sampling Distribution of the Sample Mean
As an illustration, let's take our population distribution from which we are sampling to be the exponential distribution shown in Figure 16. In lecture we showed that when taking random samples of size $n$ from this distribution, the sampling distribution of the sample mean was well modelled by a normal distribution for large enough sample sizes $n$. Indeed, this was the main result of the Central Limit Theorem (end of Ch 3), which states that the mean of a random sample is distributed as:
$$\bar{X} \sim N\!\left(\mu, \frac{\sigma^2}{n}\right)$$
This can equivalently be written as $\frac{\bar{X} - \mu}{\sigma/\sqrt{n}} \sim N(0, 1)$. In this example, since we know $X$ is exponentially distributed as $X \sim \text{Exp}(\lambda)$, we can compute the CLT normal approximation using $\mu = 1/\lambda$ and $\sigma = 1/\lambda$:
$$\bar{X} \sim N\!\left(\frac{1}{\lambda}, \frac{1}{\lambda^2 n}\right)$$
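To see the arithmetic once with concrete numbers (the values of $\lambda$ and $n$ here are purely illustrative): taking $\lambda = 0.5$ and $n = 100$, the exponential population has $\mu = 1/\lambda = 2$ and $\sigma = 1/\lambda = 2$, so the CLT approximation becomes
$$\bar{X} \sim N\!\left(\frac{1}{\lambda}, \frac{1}{\lambda^2 n}\right) = N\!\left(2, \frac{4}{100}\right) = N(2, 0.04),$$
i.e. the sampling distribution of $\bar{X}$ has standard deviation $\sigma/\sqrt{n} = 2/\sqrt{100} = 0.2$.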
The end result is that the CLT states that the sampling distribution of the sample mean is normally distributed for large sample sizes $n$, and its standard deviation, $\sigma/\sqrt{n}$, decreases with increasing sample size $n$. This implies that as the sample size becomes larger, the dispersion of the sampling distribution of the sample means becomes smaller. To avoid confusion, we refer to the standard deviation of the sampling distribution of the sample mean as the standard error:
$$\text{SE} = \frac{\sigma}{\sqrt{n}}$$
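One practical consequence of this $1/\sqrt{n}$ scaling is worth stating explicitly: to halve the standard error, the sample size must be quadrupled, since
$$\frac{\sigma}{\sqrt{4n}} = \frac{1}{2} \cdot \frac{\sigma}{\sqrt{n}}.$$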
Computing a Sampling Distribution of Sample Means:
We have just asserted that the sampling distribution of the sample mean is normally distributed, with a standard error that decreases with increasing sample size $n$. Here we compute several sampling distributions of the sample mean for different values of $n$, to examine whether this is indeed true.
By conducting 1,000 sampling trials, each consisting of $n$ data points randomly drawn from a population given by the exponential distribution $\text{Exp}(\lambda)$, we can compute the sampling distribution of the sample mean $\bar{X}$. The sampling distribution of the sample mean is shown in Figure 29 (as blue histograms) for a range of sample sizes up to $n = 100$, where we do see that they are roughly bell-shaped and contain the true population mean of $\mu = 1/\lambda$. For reference, we also present the CLT normal approximation (given by the red curves in Figure 29), since in this hypothetical experiment we actually know the underlying population distribution for $X$ and can compute $\mu$ and $\sigma$.
Figure 29: Sampling distribution of sample means for 1,000 sampling trials of size $n$ from the exponential distribution $\text{Exp}(\lambda)$. As the sample size $n$ increases, the distribution of the sample means (given by the blue histogram) becomes more concentrated around the true value of the expected mean $\mu = 1/\lambda$. The red curve shows the normal approximation given by the CLT, which states that $\bar{X} \sim N(\mu, \sigma^2/n)$.
Visually we can see in Figure 29 that the CLT normal approximation is indeed a very good representation of our sampling distribution of the sample mean $\bar{X}$, particularly for large $n$ (as the CLT asserts). Likewise, we see that as the sample size $n$ increases, the sampling distribution of the sample mean becomes more tightly distributed around $\mu$, meaning that our estimates of the expected value based on random samples become more accurate. This latter point is connected to the fact that the standard error of our sampling distribution is given by $\text{SE} = \sigma/\sqrt{n}$.
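The experiment behind Figure 29 is straightforward to reproduce. The following Python sketch conducts 1,000 sampling trials from $\text{Exp}(\lambda)$ for each of several sample sizes, histograms the resulting sample means, and overlays the CLT approximation $N(\mu, \sigma^2/n)$. The rate $\lambda = 0.5$, the sample sizes, and the seed are illustrative assumptions, since the exact lecture values are not restated here:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

rng = np.random.default_rng(seed=42)

lam = 0.5                      # illustrative rate parameter of Exp(lambda)
mu, sigma = 1 / lam, 1 / lam   # exponential: mean = std dev = 1/lambda
n_trials = 1000                # number of sampling trials
sample_sizes = [5, 25, 100]    # illustrative sample sizes

fig, axes = plt.subplots(1, len(sample_sizes), figsize=(12, 3.5))
for ax, n in zip(axes, sample_sizes):
    # For each trial, draw an i.i.d. sample of size n and record its mean
    means = rng.exponential(scale=1 / lam, size=(n_trials, n)).mean(axis=1)

    # Empirical sampling distribution of the sample mean (blue histogram)
    ax.hist(means, bins=30, density=True, alpha=0.6, label="sample means")

    # CLT approximation: X-bar ~ N(mu, sigma^2 / n) (red curve)
    x = np.linspace(means.min(), means.max(), 200)
    ax.plot(x, norm.pdf(x, loc=mu, scale=sigma / np.sqrt(n)), "r", label="CLT")

    ax.axvline(mu, linestyle="--", color="k")  # true population mean 1/lambda
    ax.set_title(f"n = {n}")
    ax.set_xlabel("sample mean")
    ax.legend()
plt.tight_layout()
plt.show()
```

As $n$ grows, the histograms should tighten around $\mu = 1/\lambda$ and match the red normal curve increasingly well, mirroring the behaviour described for Figure 29.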