Estimators and Point Estimation
This section investigates methods for estimating values of model parameters using sample observations. Specifically, a statistic is referred to as an estimator of a parameter $\theta$ if we use it as a value for $\theta$; we shall denote this estimator by $\hat{\theta}$. It is important to note here that $\hat{\theta}$ is a random variable.
Bias-Variance Decomposition
At this point, we are going to digress a bit and discuss some important properties of point estimators. First, we define the Mean Squared Error (MSE) of an estimator $\hat{\theta}$ of a parameter $\theta$ to be:
$$\operatorname{MSE}(\hat{\theta}) = E\big[(\hat{\theta} - \theta)^2\big]$$
Starting from the definition of MSE, we can expand and contract the quadratic term to derive a useful expression (the cross term vanishes because $E[\hat{\theta} - E[\hat{\theta}]] = 0$):
$$\operatorname{MSE}(\hat{\theta}) = E\big[(\hat{\theta} - E[\hat{\theta}] + E[\hat{\theta}] - \theta)^2\big] = \operatorname{Var}(\hat{\theta}) + \big(E[\hat{\theta}] - \theta\big)^2 = \operatorname{Var}(\hat{\theta}) + \operatorname{Bias}(\hat{\theta})^2$$
We can interpret each of the terms as follows: the variance term $\operatorname{Var}(\hat{\theta})$ measures how much the estimator fluctuates around its own expected value from sample to sample, while the squared bias term $\big(E[\hat{\theta}] - \theta\big)^2$ measures how far the estimator lies, on average, from the true parameter value.
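As a quick numerical illustration of this decomposition (a sketch of our own, not part of the derivation above), the following Python snippet estimates the bias, variance, and MSE of the plug-in variance estimator $\frac{1}{n}\sum_i (x_i - \bar{x})^2$ by simulation and checks that the two sides of the decomposition agree. The choice of estimator and simulation settings are ours, picked only for illustration.

```python
import numpy as np

# Monte Carlo check that MSE = Var + Bias^2 for a simple estimator.
# Here the estimator is the "plug-in" variance (dividing by n, not n - 1),
# which is known to be biased; the true parameter is sigma^2 = 4.
rng = np.random.default_rng(0)
n, sigma2, trials = 10, 4.0, 200_000

samples = rng.normal(loc=0.0, scale=np.sqrt(sigma2), size=(trials, n))
estimates = samples.var(axis=1)          # divides by n by default (ddof=0)

bias = estimates.mean() - sigma2
variance = estimates.var()
mse = np.mean((estimates - sigma2) ** 2)

print(f"bias^2 + variance = {bias**2 + variance:.4f}")
print(f"MSE               = {mse:.4f}")   # the two numbers should agree
```

The plug-in variance estimator is deliberately chosen here because it is biased, so both terms of the decomposition contribute to the MSE.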
An estimator $\hat{\theta}_n$ of a parameter $\theta$ is said to be consistent if
$$\hat{\theta}_n \xrightarrow{P} \theta \quad \text{as } n \to \infty, \qquad \text{i.e.} \quad P\big(|\hat{\theta}_n - \theta| > \varepsilon\big) \to 0 \text{ for every } \varepsilon > 0.$$
In other words, a consistent estimator converges (in probability) to the true parameter value as the sample size increases.
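To make this concrete, here is a small simulated illustration (our own sketch, with an arbitrarily chosen Bernoulli model): the sample mean of i.i.d. draws typically lands closer to the true parameter as $n$ grows.

```python
import numpy as np

# Illustration of consistency: the sample mean of i.i.d. Bernoulli(0.5) draws
# typically gets closer to the true parameter 0.5 as the sample size n grows.
rng = np.random.default_rng(1)
true_p = 0.5

for n in (10, 100, 1_000, 10_000, 100_000):
    flips = rng.binomial(1, true_p, size=n)
    print(f"n = {n:>6}: estimate = {flips.mean():.4f}, "
          f"error = {abs(flips.mean() - true_p):.4f}")
```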
Maximum-Likelihood Estimation (MLE)
Maximum-Likelihood Estimation, or 'MLE' for short, is an extremely important technique in probabilistic modelling, as it provides a framework for estimating the parameters $\theta$ of any model from a given sample $X = \{x_1, x_2, \dots, x_n\}$. The main idea behind this approach is that even though we cannot recover the true population parameter exactly (because we cannot sample the whole population), the best we can do is to find the value of the parameter under which the observed sample is most likely to have been generated. Given some sample dataset $X$, we can define a likelihood function $L(\theta)$ as the probability that the sample data was generated given some value of the model parameter $\theta$:
$$L(\theta) = P(X \mid \theta) = P(x_1, x_2, \dots, x_n \mid \theta)$$
Since we assume our samples are independent and identically distributed (i.i.d.), this greatly simplifies to:
$$L(\theta) = \prod_{i=1}^{n} P(x_i \mid \theta)$$
We define the Maximum-Likelihood Estimate (MLE) of a parameter $\theta$ to be given by:
$$\hat{\theta}_{\mathrm{MLE}} = \arg\max_{\theta} L(\theta)$$
Note: In order to be succinct, we will use $\hat{\theta}$ to refer to $\hat{\theta}_{\mathrm{MLE}}$ throughout this section.
Obtaining $\hat{\theta}$ is often a straightforward process that involves writing down the likelihood function $L(\theta)$ and then finding the value of $\theta$ that maximises it. It is often also beneficial to verify that the estimate is indeed a maximum. In this course we will only be concerned with likelihood functions that can be maximised directly using methods from calculus; however, we note that more advanced techniques (e.g. nonlinear optimisation) may be required to obtain MLEs in general.
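As a brief aside, when no closed-form maximiser is available (or simply as a sanity check), the log-likelihood can be maximised numerically. The sketch below is our own illustration under an assumed exponential model: it minimises the negative log-likelihood with scipy and compares the result with the known closed-form MLE $1/\bar{x}$.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Numerically maximise a log-likelihood by minimising its negative.
# Example model: i.i.d. exponential data with unknown rate lam,
# whose closed-form MLE is 1 / sample mean.
rng = np.random.default_rng(2)
data = rng.exponential(scale=1 / 2.5, size=500)   # true rate = 2.5

def neg_log_likelihood(lam):
    # log L(lam) = n * log(lam) - lam * sum(x_i)
    return -(len(data) * np.log(lam) - lam * data.sum())

result = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 100.0), method="bounded")
print(f"numerical MLE : {result.x:.4f}")
print(f"closed form   : {1 / data.mean():.4f}")   # should match closely
```

Minimising the negative log-likelihood is equivalent to maximising the log-likelihood, and it is the form most optimisation libraries expect.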
Example: Bernoulli MLE
We will highlight the steps behind the process of obtaining a Maximum-Likelihood Estimate of $p$ using the Bernoulli model of flipping a coin; here the model parameter we are estimating is $p$ (i.e. the probability that a coin flip comes up heads), as seen in Table 1.
- Writing Down the Likelihood Function: Here we seek to estimate the probability that flipping a coin comes up heads (model parameter $p$). Let's say we toss the coin twenty times and produce the following sample $X$:
We know that each coin flip is a Bernoulli trial, with the probability of heads equal to $p$ (we shall casually refer to this model parameter as $\theta$ in the subsequent steps, to illustrate the general process). Formally, we can express this as $x_i \sim \operatorname{Bernoulli}(\theta)$, where $x_i = 1$ implies a 'heads' is observed in the $i$-th coin flip in the sequence.
It is often beneficial to apply a zero/one encoding to our Bernoulli RV; that is, use $x_i = 1$ if the $i$-th coin flip was heads and $x_i = 0$ if it was tails. The complete probability mass function for the Bernoulli distribution can then be written as:
$$P(x_i \mid \theta) = \theta^{x_i}(1 - \theta)^{1 - x_i}, \qquad x_i \in \{0, 1\}$$
Since we perform each coin flip independently (i.e. the flips are i.i.d.), the total probability of observing $X$ (given that the true parameter of the coin is $\theta$) can be expressed as a product over the individual trials:
$$L(\theta) = P(X \mid \theta) = \prod_{i=1}^{n} P(x_i \mid \theta)$$
Thus, the individual terms in the likelihood function are simply the PMF or PDF under consideration evaluated at each $x_i$. Note that this is always true for independent, identically distributed (i.i.d.) experiments by definition, and it simplifies the overall calculations significantly. After we substitute in our expression for $P(x_i \mid \theta)$ and re-arrange the equation, the expression for the likelihood of $\theta$ given $X$ for this coin-flip example becomes:
$$L(\theta) = \prod_{i=1}^{n} \theta^{x_i}(1 - \theta)^{1 - x_i} = \theta^{n_H}(1 - \theta)^{n_T}$$
where $n_H$ and $n_T$ are the total number of heads and tails observed in $X$ respectively, and $n = n_H + n_T$.
- Maximising the Likelihood Function: We can now optimise the expression for $L(\theta)$ to obtain an MLE for $\theta$. In practice, however, it is advantageous to optimise the logarithm of this expression instead, i.e. the so-called log-likelihood function $\ell(\theta) = \ln L(\theta)$, as it transforms the product into a summation, which results in a much easier function to optimise:
$$\ell(\theta) = n_H \ln \theta + n_T \ln(1 - \theta)$$
From calculus we know that a differentiable function has a zero derivative at any interior minimum, maximum, or saddle point. We can use this to find the maximum of the log-likelihood function $\ell(\theta)$. However, we need to note that our maximisation is constrained in this example by the fact that $0 \le \theta \le 1$.
We can perform the maximisation to obtain the MLE in the coin-flip case as follows:
$$\frac{d\ell}{d\theta} = \frac{n_H}{\theta} - \frac{n_T}{1 - \theta} = 0 \quad \Longrightarrow \quad n_H(1 - \theta) = n_T\,\theta$$
Finally, we get that:
$$\hat{\theta} = \frac{n_H}{n_H + n_T} = \frac{n_H}{n} = \frac{1}{n}\sum_{i=1}^{n} x_i$$
That is, our maximum-likelihood estimate for the probability of the coin coming up heads is equal to the total number of heads observed in our sample data divided by the sample size $n$, which is a relatively intuitive way to estimate this probability. For the particular sample in this example, we find that $\hat{\theta} = 0.4$. We should also note that the last expression above is the sample mean, which makes sense since $E[x_i] = \theta$ for the Bernoulli distribution, and we have just seen that this estimate improves with increasing sample size (see Figure 29).
- Verifying the Solution: We can further use calculus to verify that the solution $\hat{\theta} = n_H/n$ we obtained is indeed a maximum, by computing the second derivative of the log-likelihood function. If the parameter $\hat{\theta}$ is a local maximum, then the second derivative of $\ell(\theta)$ at that point will be negative. We therefore want to verify:
$$\frac{d^2\ell}{d\theta^2}\bigg|_{\theta = \hat{\theta}} = -\frac{n_H}{\hat{\theta}^2} - \frac{n_T}{(1 - \hat{\theta})^2} < 0$$
which is true in our case, as $-\frac{n_H}{\theta^2} - \frac{n_T}{(1 - \theta)^2}$ is negative for any $0 < \theta < 1$.
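The closed-form result $\hat{\theta} = n_H/n$ is easy to check numerically. The sketch below is our own illustration (the simulated flips are not the actual sample used above): it computes the closed-form estimate for a simulated set of 20 tosses and confirms that it coincides with the maximiser of the log-likelihood found by a simple grid search.

```python
import numpy as np

# Bernoulli MLE check: theta_hat = n_H / n should maximise the log-likelihood
# ell(theta) = n_H * log(theta) + n_T * log(1 - theta).
rng = np.random.default_rng(3)
flips = rng.binomial(1, 0.5, size=20)       # 20 simulated tosses of a fair coin
n_heads, n = flips.sum(), flips.size

theta_hat = n_heads / n                      # closed-form MLE

grid = np.linspace(0.01, 0.99, 999)
log_lik = n_heads * np.log(grid) + (n - n_heads) * np.log(1 - grid)
theta_grid = grid[np.argmax(log_lik)]        # grid-search maximiser

print(f"closed-form MLE: {theta_hat:.3f}")
print(f"grid-search MLE: {theta_grid:.3f}")  # agrees up to grid resolution
```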
Thus, the procedure for computing the maximum-likelihood estimate of a model parameter $\theta$ can be summarised as:
Maximum-Likelihood Estimation (MLE) procedure for an i.i.d. sample $X = \{x_1, x_2, \dots, x_n\}$:
- Write down the likelihood function $L(\theta) = \prod_{i=1}^{n} P(x_i \mid \theta)$, and take the logarithm of this function: $\ell(\theta) = \ln L(\theta) = \sum_{i=1}^{n} \ln P(x_i \mid \theta)$.
- Maximise the log-likelihood function $\ell(\theta)$ with respect to $\theta$ to obtain $\hat{\theta} = \arg\max_{\theta} \ell(\theta)$.
- Verify that the $\hat{\theta}$ obtained is indeed a maximum and lies within the correct range for $\theta$.
Let's see how this MLE procedure for estimating model parameters from sample data applies to another distribution from Table 1: The Poisson distribution.
Example: Maximum Likelihood Estimate for a Poisson Random Variable
Given a dataset $X = \{x_1, x_2, \dots, x_n\}$ of $n$ i.i.d. samples drawn from the Poisson($\lambda$) distribution, derive the expression for the Maximum-Likelihood Estimate of $\lambda$.
Solution:
We approach this problem using the three-step procedure described above. The first step is to write down the likelihood function. By the definition of a Poisson random variable, each of the samples $x_i$ in the dataset has probability:
$$P(x_i \mid \lambda) = \frac{\lambda^{x_i} e^{-\lambda}}{x_i!}$$
The likelihood for the entire dataset is therefore:
$$L(\lambda) = \prod_{i=1}^{n} \frac{\lambda^{x_i} e^{-\lambda}}{x_i!}$$
We can factor and rearrange the likelihood above as follows:
$$L(\lambda) = \frac{\lambda^{\sum_{i=1}^{n} x_i}\, e^{-n\lambda}}{\prod_{i=1}^{n} x_i!}$$
Taking the natural logarithm of the above, we obtain the following expression for the log-likelihood:
$$\ell(\lambda) = \ln L(\lambda) = \left(\sum_{i=1}^{n} x_i\right) \ln \lambda - n\lambda - \sum_{i=1}^{n} \ln(x_i!)$$
We can now directly maximise $\ell(\lambda)$ for $\lambda > 0$:
$$\frac{d\ell}{d\lambda} = \frac{1}{\lambda}\sum_{i=1}^{n} x_i - n = 0 \quad \Longrightarrow \quad \hat{\lambda} = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar{x}$$
Therefore our MLE of $\lambda$ is the sample mean once again!
We can verify that this is indeed the maximum by taking the second derivative of $\ell(\lambda)$ and checking that it is negative at $\lambda = \hat{\lambda}$:
$$\frac{d^2\ell}{d\lambda^2} = -\frac{1}{\lambda^2}\sum_{i=1}^{n} x_i$$
which is clearly negative for all values of $\lambda > 0$.
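As with the Bernoulli example, this result is straightforward to check numerically. The sketch below is our own illustration with an arbitrarily chosen true rate: it computes $\hat{\lambda} = \bar{x}$ for a simulated Poisson sample and confirms that it matches the maximiser of the log-likelihood found by grid search.

```python
import numpy as np

# Poisson MLE check: lambda_hat = sample mean should maximise
# ell(lam) = sum(x_i) * log(lam) - n * lam  (dropping the constant -sum(log x_i!)).
rng = np.random.default_rng(4)
data = rng.poisson(lam=3.2, size=1_000)      # true lambda = 3.2

lam_hat = data.mean()                         # closed-form MLE

grid = np.linspace(0.1, 10.0, 2_000)
log_lik = data.sum() * np.log(grid) - data.size * grid
lam_grid = grid[np.argmax(log_lik)]           # grid-search maximiser

print(f"closed-form MLE: {lam_hat:.3f}")
print(f"grid-search MLE: {lam_grid:.3f}")     # agrees up to grid resolution
```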
Remarks on MLE
Maximum-likelihood estimation is a standard illustration of frequentist statistics. While the estimated parameter approaches the correct population parameter as the sample size approaches infinity, the estimates might deviate from the actual population parameter value for smaller sample sizes. Nonetheless, an important advantage of maximum-likelihood estimation is that it produces consistent estimators (see Section 5.2.1; proof omitted here).
For instance, we estimated the probability of heads to be equal to 0.4 for the coin-flip dataset used in this chapter, even though we used a fair coin to generate it; we were just as likely to have obtained the value 0.6. In order to use statistics correctly, we need to be aware that parameter estimates are themselves stochastic, and be prepared to deal with the underlying uncertainty of these estimations. We therefore also need to consider interval estimation and statistical hypothesis testing as potential ways of dealing with this inherent randomness.
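To make this stochasticity visible, the following sketch (our own illustration, mirroring the 20-flip fair-coin setup described above) repeats the 20-flip experiment many times and looks at how the resulting estimates spread around the true value of 0.5.

```python
import numpy as np

# The MLE from 20 flips of a fair coin is itself a random variable:
# repeating the experiment gives a spread of estimates around 0.5.
rng = np.random.default_rng(5)
estimates = rng.binomial(20, 0.5, size=100_000) / 20   # MLE = heads / 20 per experiment

print(f"mean of estimates : {estimates.mean():.3f}")           # close to the true 0.5
print(f"std of estimates  : {estimates.std():.3f}")            # spread across experiments
print(f"P(estimate <= 0.4): {(estimates <= 0.4).mean():.3f}")  # values like 0.4 are common
```

Estimates at or below 0.4 occur in a sizeable fraction of repetitions, which is exactly the kind of sampling variability that interval estimation and hypothesis testing are designed to quantify.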