Diagonalisation of matrices
Diagonal matrices are particularly convenient for eigenvalue problems, since the eigenvalues of a diagonal matrix coincide with its diagonal entries and the corresponding eigenvectors are simply the standard coordinate vectors. Diagonalised matrices are useful in determining matrix powers and exponentials, which are in turn useful in describing solutions to linear systems of differential equations.
Consider a matrix $A$ with three distinct eigenvalues $\lambda_1$, $\lambda_2$ and $\lambda_3$, and with eigenvectors $\mathbf{v}_1$, $\mathbf{v}_2$ and $\mathbf{v}_3$, respectively. We write a matrix $P$ whose columns are the eigenvectors,
\[ P = \begin{pmatrix} \mathbf{v}_1 & \mathbf{v}_2 & \mathbf{v}_3 \end{pmatrix}. \]
What happens when we pre-multiply $P$ by $A$? We have
\[ AP = \begin{pmatrix} A\mathbf{v}_1 & A\mathbf{v}_2 & A\mathbf{v}_3 \end{pmatrix}, \]
which implies, using $A\mathbf{v}_i = \lambda_i\mathbf{v}_i$,
\[ AP = \begin{pmatrix} \lambda_1\mathbf{v}_1 & \lambda_2\mathbf{v}_2 & \lambda_3\mathbf{v}_3 \end{pmatrix}. \]
We define
\[ D = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}. \]
This gives us the equation
\[ AP = PD. \]
Since the eigenvectors are linearly independent, $P$ is invertible, and we can multiply by the inverse of $P$ on the right on both sides of the equation to get
\[ A = PDP^{-1}. \]
This is called the diagonalisation of the matrix $A$.
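This construction is easy to check numerically. The sketch below uses a small hypothetical matrix (not one from the text) and NumPy's `numpy.linalg.eig`, which returns the eigenvalues together with a matrix whose columns are the corresponding eigenvectors, playing the role of the eigenvector matrix above.

```python
import numpy as np

# Hypothetical 3x3 matrix with distinct eigenvalues, chosen only to
# illustrate the construction (not the matrix from the text).
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Column k of P is the eigenvector belonging to eigenvalue eigvals[k].
eigvals, P = np.linalg.eig(A)

# D is the diagonal matrix of eigenvalues.
D = np.diag(eigvals)

# Verify the diagonalisation: P D P^{-1} reconstructs A.
print(np.allclose(P @ D @ np.linalg.inv(P), A))  # True
```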
We consider the matrix
We calculate the characteristic equation,
which, after some algebraic manipulation, gives us
First we compute the eigenvector with the first eigenvalue. We need to solve the homogeneous linear system
Note that we have multiplied the first two rows by 2 to ease computations. Since the matrix is singular, we know that one of the equations must be redundant. We choose the following two equations to proceed,
We simplify the equations,
The second row immediately gives that and the first row that so the eigenvector is
Following the same steps with the other two eigenvalues we obtain
for and , respectively. We can now form the matrix in Eq. (10.100),
Then, is given by
Equation (10.106) is equal to ,
where is given by,
Applications of Diagonalisation
Let us consider a diagonal matrix,
\[ D = \begin{pmatrix} \lambda_1 & 0 & 0 \\ 0 & \lambda_2 & 0 \\ 0 & 0 & \lambda_3 \end{pmatrix}. \]
What happens when we square the matrix? We obtain
\[ D^2 = \begin{pmatrix} \lambda_1^2 & 0 & 0 \\ 0 & \lambda_2^2 & 0 \\ 0 & 0 & \lambda_3^2 \end{pmatrix}. \]
We observe that matrix powers of diagonal matrices correspond simply to powers of the diagonal entries. Let us consider what happens when we square both sides of Eq. (10.102),
\[ A^2 = PDP^{-1}PDP^{-1} = PD^2P^{-1}. \]
We see that we can repeatedly multiply by the same expression, with $P^{-1}$ and $P$ cancelling in the middle of the product, such that
\[ A^n = PD^nP^{-1}. \]
If we let $D = P^{-1}AP$, then $D^n$ is easy to compute, and we can put it between $P$ and $P^{-1}$ to ease the calculation of the $n$th power of $A$.
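The cancellation above means the $n$th power can be computed as $PD^nP^{-1}$, where only the diagonal entries need to be raised to the power $n$. A minimal sketch, with a hypothetical matrix for illustration:

```python
import numpy as np

# Hypothetical symmetric matrix with eigenvalues 1 and 3
# (not the matrix from the text).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, P = np.linalg.eig(A)

# D^5 is computed entrywise on the diagonal: no repeated matrix
# multiplication is needed.
D5 = np.diag(eigvals ** 5)
A5 = P @ D5 @ np.linalg.inv(P)

# Compare with direct repeated multiplication.
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))  # True
```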
In addition to powers of $A$, we can also use matrix diagonalisation to easily calculate matrix exponentials. Recall the power series expansion of the exponential given by Eq. (6.20),
\[ e^x = 1 + x + \frac{x^2}{2!} + \frac{x^3}{3!} + \cdots \]
We define the expression $e^A$ through the series expansion,
\[ e^A = I + A + \frac{A^2}{2!} + \frac{A^3}{3!} + \cdots \]
We show how diagonalisation helps us compute the expression $e^A$. We can replace $A$ with $PDP^{-1}$ in Eq. (10.111), which gives
\[ e^A = I + PDP^{-1} + \frac{(PDP^{-1})^2}{2!} + \frac{(PDP^{-1})^3}{3!} + \cdots \]
In this case, noting that $(PDP^{-1})^k = PD^kP^{-1}$, the distributive law of matrix multiplication allows us to write,
\[ e^A = P\left(I + D + \frac{D^2}{2!} + \frac{D^3}{3!} + \cdots\right)P^{-1}. \]
We observe that we have the series for $e^D$ in between $P$ and $P^{-1}$, and so we obtain
\[ e^A = Pe^DP^{-1}. \]
Finally, for the diagonal matrix $D$, we have
\[ e^D = \begin{pmatrix} e^{\lambda_1} & 0 & 0 \\ 0 & e^{\lambda_2} & 0 \\ 0 & 0 & e^{\lambda_3} \end{pmatrix}. \]
Note that the derivation is not rigorous, since we need to consider convergence properties of these series and define what it means for a matrix sum to converge. The interested reader may wish to think how to extend the concept of partial sums and the convergence of series to the convergence of a series of matrices.
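The identity $e^A = Pe^DP^{-1}$ can nevertheless be checked numerically against partial sums of the defining series. A sketch with a hypothetical $2\times 2$ matrix (not one from the text):

```python
import numpy as np

# Hypothetical matrix with eigenvalues -1 and -2.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

# Diagonalisation route: e^A = P e^D P^{-1}, where e^D simply
# exponentiates the diagonal entries.
eigvals, P = np.linalg.eig(A)
expA = P @ np.diag(np.exp(eigvals)) @ np.linalg.inv(P)

# Partial sums of the series I + A + A^2/2! + A^3/3! + ...
S = np.zeros((2, 2))
term = np.eye(2)                 # current term A^k / k!, starting at k = 0
for k in range(25):
    S = S + term
    term = term @ A / (k + 1)    # next term: multiply by A, divide by k+1

print(np.allclose(expA, S))  # True
```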
Example
Let
diagonalise and hence calculate .
Solution We need to express $A$ in the diagonalised form $A = PDP^{-1}$ in order to calculate matrix powers. To compute $D$ we need the eigenvalues, and for $P$ we need the corresponding eigenvectors. Once we have $P$ we need to compute its inverse. The eigenvalues are obtained from the characteristic equation,
which gives the eigenvalues and as
To find the eigenvector for the first eigenvalue $\lambda_1$, we solve $(A - \lambda_1 I)\mathbf{v} = \mathbf{0}$,
as always, one equation is redundant leaving us with one degree of freedom. The eigenvector is computed as . Similarly, we obtain the other two eigenvectors corresponding to the eigenvalues . For , we obtain and for . Using the eigenvectors as the columns of we have,
Next, we compute the inverse of $P$ using Gauss–Jordan elimination on the augmented matrix $[P \mid I]$.
By performing the following row transformations, , , we obtain the identity on the left and the inverse $P^{-1}$ on the right,
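The Gauss–Jordan procedure itself can be sketched in code: augment the matrix with the identity, row-reduce until the left half becomes the identity, and read off the inverse on the right. The matrix below is hypothetical, and the routine omits pivoting safeguards.

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by row-reducing the augmented matrix [A | I].

    A minimal sketch: assumes A is square and that no zero pivot is
    encountered (no row swaps are performed).
    """
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for i in range(n):
        M[i] = M[i] / M[i, i]              # scale row so the pivot is 1
        for j in range(n):
            if j != i:                     # clear the rest of column i
                M[j] = M[j] - M[j, i] * M[i]
    return M[:, n:]                        # right half is now the inverse

# Hypothetical 2x2 example.
P = np.array([[2.0, 1.0],
              [1.0, 1.0]])
print(np.allclose(gauss_jordan_inverse(P), np.linalg.inv(P)))  # True
```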
Now, to calculate the power of $A$ we raise $D$ to that power and multiply by $P$ on the left and $P^{-1}$ on the right,
From the procedure followed in Example 10.11, we can also calculate matrix exponentials. For instance, to compute $e^A$ for the matrix $A$ given in Example 10.11, we compute $e^D$ as,
The matrix exponential is then calculated from $e^A = Pe^DP^{-1}$.
Geometric and algebraic multiplicities
Consider the matrix,
Its characteristic equation gives
which gives the eigenvalue 1 repeated twice. We cannot diagonalise this matrix: if it were diagonalisable, the diagonal matrix $D$ would be the identity matrix, for which
\[ A = PIP^{-1} = PP^{-1} = I, \]
and this results in a contradiction. To find eigenvectors, we solve $(A - I)\mathbf{v} = \mathbf{0}$, which gives
The solution is and so the eigenvector is . This motivates the following definition.
Algebraic and geometric multiplicities
The algebraic multiplicity of an eigenvalue is the number of times it is repeated as a root of the characteristic equation. In the example above, the eigenvalue 1 has algebraic multiplicity two. The geometric multiplicity of an eigenvalue is the number of linearly independent eigenvectors associated with the eigenvalue. In the example above, the geometric multiplicity of 1 is one.
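Both multiplicities can be checked numerically. The sketch below uses the standard shear matrix, a hypothetical stand-in for the example above, whose characteristic equation has 1 as a double root; the geometric multiplicity is the dimension of the nullspace of $A - I$.

```python
import numpy as np

# Classic non-diagonalisable example: eigenvalue 1 is a double root
# of the characteristic equation, so algebraic multiplicity is 2.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])

# Geometric multiplicity = dim null(A - 1*I) = n - rank(A - I).
geom_mult = A.shape[0] - np.linalg.matrix_rank(A - np.eye(2))
print(geom_mult)  # 1, so there is only one independent eigenvector
```

Since the geometric multiplicity (1) is smaller than the algebraic multiplicity (2), there are not enough independent eigenvectors to form an invertible $P$, which is exactly why the matrix cannot be diagonalised.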
Exercises
- Diagonalise and hence find and .
- Diagonalise and hence find and .