2D linear systems
In Section 12.6 we considered 1D systems, where we discussed that the behaviour of trajectories on the phase line is very limited: trajectories can approach a fixed point, move away from a fixed point and tend to $\pm\infty$, or, if we start on a fixed point, stay there forever. In Subsec. 14.1.1 we saw that higher-order linear differential equations can be expressed in terms of a system of linear, first-order ODEs. This section is concerned with two-dimensional (2D) linear systems (or, equivalently, second-order LODEs).
Mathematical setup
We consider a vector $\mathbf{x} = (x, y)$, where $x$ and $y$ are functions of time, $t$. Note that we consider autonomous systems, i.e. there is no explicit dependence on the independent variable, $t$. Our state variables are $x$ and $y$ [or $x_1$ and $x_2$, where $(x_1, x_2) = (x, y)$]. This is analogous to the 1D systems we discussed in Subsec. 12.6. We consider the system
$$\dot{x} = f(x, y), \qquad \dot{y} = g(x, y), \qquad (14.16)$$
where $\dot{x} = \mathrm{d}x/\mathrm{d}t$ and $\dot{y} = \mathrm{d}y/\mathrm{d}t$. Note that in this chapter we use a single dot to denote first-order derivatives. Again, as done throughout these notes, we consider our variables to be real, so $(x, y) \in \mathbb{R}^2$. The system given by Eq. (14.16) can be expressed as,
$$\dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}), \qquad \mathbf{f} = (f, g),$$
where $f$ and $g$ are functions of both variables, $x$ and $y$. In 1D systems, we consider $x$ to be a point on the real line at a particular point in time and $f(x)$ to be the vector field, representing the velocity of the point at that same time. If we now consider $(x, y)$ to be a point on the $x$-$y$ plane at a particular point in time, then $(f(x, y), g(x, y))$ is the vector field representing the velocity of the point at that time.
In terms of linear systems, Eq. (14.18) is
$$\dot{x} = ax + by, \qquad \dot{y} = cx + dy,$$
or, equivalently,
$$\dot{\mathbf{x}} = A\mathbf{x},$$
where $A$ is the real constant-coefficient matrix
$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$
The systems that we consider here are homogeneous.
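As a brief aside from the notes, the following Python sketch shows what the homogeneous system $\dot{\mathbf{x}} = A\mathbf{x}$ looks like computationally. The matrix entries and the initial condition are arbitrary placeholders (they are not taken from any example in this chapter), and scipy's general-purpose integrator is used only to illustrate the setup.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder constant-coefficient matrix A and initial condition x(0);
# any real 2x2 matrix could be substituted here.
A = np.array([[1.0, 2.0],
              [3.0, -4.0]])
x0 = np.array([1.0, 0.5])

def rhs(t, x):
    """Right-hand side of the autonomous linear system x' = A x."""
    return A @ x

sol = solve_ivp(rhs, t_span=(0.0, 2.0), y0=x0, dense_output=True)
print(sol.y[:, -1])   # state (x, y) at t = 2
```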
Qualitative behaviour
A simple example is given by the system (14.20). This system is decoupled, since the equivalent scalar equations, (14.21), are such that the $\dot{x}$ equation does not depend on $y$ and the $\dot{y}$ equation does not depend on $x$; $a$ is a real constant and it can be thought of as a system parameter. Of course, Eqs. (14.21) are easy to solve (you can solve each one separately) to obtain explicit exponential solutions, where $x_0$ and $y_0$ denote the initial conditions for $x$ and $y$, respectively, at $t = 0$. Now, if we wanted to visualise what these solutions look like, we would probably plot $x$ and $y$ against $t$. Of course, to plot these we would need to know the values of $x_0$ and $y_0$ and, worse yet, if the initial conditions varied (it is very likely to have varying initial conditions in a real physical situation), then we would need a different plot for each pair of initial conditions. Further, this is a decoupled system, so it is easy to solve for the explicit solution; in most cases, solving the differential equations explicitly is not possible, and hence visualising the behaviour in such a way is not going to provide much intuition about the general qualitative features of the system. For this reason, our main focus will be to draw and analyse the phase portrait, where now the phase space is not the phase line (as in Section 12.6) but the phase plane, i.e. the $x$-$y$ plane.
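To make the idea of a phase portrait concrete, here is a minimal plotting sketch. It assumes, purely for illustration, the decoupled form $\dot{x} = a x$, $\dot{y} = -y$ with $a = -2$; the actual right-hand sides and parameter value of system (14.20) in these notes may differ.

```python
import numpy as np
import matplotlib.pyplot as plt

a = -2.0                      # hypothetical value of the system parameter
x, y = np.meshgrid(np.linspace(-2, 2, 20), np.linspace(-2, 2, 20))

# Velocity field of the assumed decoupled system: xdot = a*x, ydot = -y.
xdot, ydot = a * x, -y

plt.streamplot(x, y, xdot, ydot, density=1.2)
plt.xlabel("x"); plt.ylabel("y")
plt.title("Phase portrait of a decoupled linear system (a = -2)")
plt.show()
```

Each streamline is a trajectory; varying the initial condition simply selects a different streamline, which is exactly the information a single phase portrait conveys.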
General solution
A nice thing about linear systems is that if we can find two solutions, $\mathbf{x}_1(t)$ and $\mathbf{x}_2(t)$, then, by the principle of superposition, any linear combination of them,
$$\mathbf{x}(t) = c_1\mathbf{x}_1(t) + c_2\mathbf{x}_2(t),$$
is also a solution, where $c_1$ and $c_2$ are arbitrary constants. Here, we discuss the method for obtaining a general solution to the homogeneous linear system.
Eigenvalues & eigenvectors
Suppose we want to solve system (14.18) with $A$ a real, constant $2\times 2$ matrix. To solve this, we assume a trial solution for the vector $\mathbf{x}(t)$ as follows:
$$\mathbf{x}(t) = \mathbf{v}\,e^{\lambda t}, \qquad (14.24)$$
where
$$\mathbf{v} = \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}.$$
The motivation for the trial solution comes from the fact that $A$ [which multiplies the vector $\mathbf{x}$ in (14.18)] is a constant-coefficient matrix. Equation (14.18) tells us that we are looking for a solution vector which, when differentiated once (i.e. $\dot{\mathbf{x}}$), gives us the function itself multiplied by a constant factor. This is analogous to our choice of exponential solutions for constant-coefficient, second-order ODEs (covered in Chapter 13, Section 13.2). In trial solution (14.24) we have three unknowns: $v_1$, $v_2$ and $\lambda$. We proceed by substituting (14.24) in the system (14.18), which yields,
$$\lambda v_1 e^{\lambda t} = (a v_1 + b v_2)\,e^{\lambda t}, \qquad \lambda v_2 e^{\lambda t} = (c v_1 + d v_2)\,e^{\lambda t}. \qquad (14.25)$$
Since $e^{\lambda t} \neq 0$, we can simplify (14.25) by dividing through by $e^{\lambda t}$:
$$\lambda v_1 = a v_1 + b v_2, \qquad \lambda v_2 = c v_1 + d v_2. \qquad (14.26)$$
In vector form, (14.26) is expressed as follows:
$$\lambda\mathbf{v} = A\mathbf{v}. \qquad (14.27)$$
This is the eigenvalue problem we considered in Chapter 10, Section 10.7; we briefly review some of the details here. We note that the eigenvector, $\mathbf{v}$, appears on both sides of Eq. (14.27), but at this point we cannot combine the two sides since the scalar $\lambda$ on the LHS cannot be subtracted from the square matrix $A$ on the RHS. What we need to do, therefore, is make sure the LHS has the same dimensions as the RHS: we replace $\lambda$ by the diagonal matrix $\lambda I$, where $I$ is the identity matrix (note this does not change the equation in any way; this is analogous to multiplying a number or a function by the number 1). Equation (14.27) is then expressed as:
$$\lambda I\,\mathbf{v} = A\mathbf{v}. \qquad (14.28)$$
This then simplifies to:
$$A\mathbf{v} - \lambda I\,\mathbf{v} = \mathbf{0}. \qquad (14.29)$$
In a more compact form, Eq. (14.29) is:
$$(A - \lambda I)\,\mathbf{v} = \mathbf{0}. \qquad (14.30)$$
Equation (14.24) gives us the form of the trial solution to system (14.18). It is obvious that, if Eq. (14.30) gives $\mathbf{v} = \mathbf{0}$, then the solution to the linear system will be $\mathbf{x}(t) = \mathbf{0}$; this is the trivial solution, which always satisfies a linear, homogeneous system like (14.18). We seek all nonzero solutions, i.e. we need $\mathbf{v} \neq \mathbf{0}$. From linear algebra, we know that for nonzero $\mathbf{v}$,
$$\det(A - \lambda I) = 0. \qquad (14.31)$$
Using Eq. (14.31), we get the following result
$$\lambda^2 - (a + d)\lambda + (ad - bc) = 0, \qquad (14.32)$$
which we recognise as the characteristic equation of matrix $A$. It is a quadratic equation because $A$ is a $2\times 2$ matrix. From (14.32) we solve for $\lambda$. The roots, say $\lambda_1$ and $\lambda_2$, are the eigenvalues of matrix $A$.
As we saw in Chapter 10, for a $2\times 2$ matrix, the characteristic equation may be expressed as:
$$\lambda^2 - \tau\lambda + \Delta = 0, \qquad (14.33)$$
where $\tau$ is the trace of $A$ [this is the sum of the diagonal entries of $A$, i.e. $\tau = a + d$] and $\Delta = ad - bc$ is the determinant of $A$.
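The trace and determinant shortcut (14.33) is easy to check numerically. The sketch below uses a placeholder matrix (not one of the worked examples in these notes) and compares the roots of $\lambda^2 - \tau\lambda + \Delta = 0$ with the eigenvalues returned by numpy.

```python
import numpy as np

# Placeholder 2x2 matrix; not one of the worked examples from the notes.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

tau = np.trace(A)            # tau = a + d
delta = np.linalg.det(A)     # Delta = ad - bc

# Roots of lambda^2 - tau*lambda + Delta = 0, i.e. Eq. (14.33)
lam = np.roots([1.0, -tau, delta])
print(lam)                          # characteristic-equation roots
print(np.linalg.eigvals(A))         # agrees with numpy's eigenvalue routine
```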
If we now go back to Eq. (14.29) [equivalently, (14.30)], for each root we have two equations.
For $\lambda = \lambda_1$:
$$(a - \lambda_1)v_1 + b v_2 = 0, \qquad c v_1 + (d - \lambda_1)v_2 = 0, \qquad (14.34)$$
and, for $\lambda = \lambda_2$,
$$(a - \lambda_2)v_1 + b v_2 = 0, \qquad c v_1 + (d - \lambda_2)v_2 = 0. \qquad (14.35)$$
Now, we define $\mathbf{v}_1 = (v_1, v_2)^{T}$ to be a nonzero solution of Eqs. (14.34) and $\mathbf{v}_2$ to be a nonzero solution of Eqs. (14.35).
The vectors $\mathbf{v}_1$ and $\mathbf{v}_2$ are the eigenvectors; $\mathbf{v}_1$ is the eigenvector corresponding to $\lambda_1$ and $\mathbf{v}_2$ is the eigenvector corresponding to $\lambda_2$.
Note that if the work has been done correctly, in each of the two sets of equations in (14.34) and (14.35), the two equations will be dependent, i.e. one will be a constant multiple of the other. If this were not the case, the only solutions would be $\mathbf{v}_1 = \mathbf{v}_2 = \mathbf{0}$, which lead to trivial solutions. From (14.33) (and from what we know about second-order, linear ODEs), it is obvious that the roots of the characteristic equation can be:
- real and distinct;
- complex conjugates;
- real and repeated.
To find solutions to linear systems of the form (14.18), the very first step is to write down the characteristic equation [for $2\times 2$ systems, a shortcut way of obtaining this is given by Eq. (14.33)] and solve for the roots, say $\lambda_1$ and $\lambda_2$.
Real eigenvalues
This is the easy case and most of the work for it has already been explained in Subsec. 14.2.1. We will work our way through it using an example: consider the linear system (14.18) for a given constant matrix $A$.
Using (14.33), we write down and solve the characteristic equation, which gives two real, distinct eigenvalues, $\lambda_1$ and $\lambda_2$. Next, we need to find the two eigenvectors, $\mathbf{v}_1$ corresponding to $\lambda_1$ and $\mathbf{v}_2$ corresponding to $\lambda_2$.
Find $\mathbf{v}_1$ corresponding to $\lambda_1$
Substituting $\lambda = \lambda_1$ in Eqs. (14.34), we obtain two equations, (14.38), relating the components $v_1$ and $v_2$ of the eigenvector. We can easily see that the equations in (14.38) are a constant multiple of each other, from which we obtain a single relationship, Eq. (14.39), between $v_1$ and $v_2$. By fixing one of the components to a convenient value, the relation (14.39) gives the other component and hence the first eigenvector, $\mathbf{v}_1$.
Find $\mathbf{v}_2$ corresponding to $\lambda_2$
The second eigenvector is $\mathbf{v}_2$. Substituting $\lambda = \lambda_2$ in Eqs. (14.35), we have two equations, (14.40), which will help us find a relationship between the components of $\mathbf{v}_2$. Equations (14.40) yield a single relationship, Eq. (14.41), between the two components; fixing one component, Eq. (14.41) then gives the other, and hence the second eigenvector, $\mathbf{v}_2$. Once we have the eigenvalues and the corresponding eigenvectors, we have two independent solutions to system (14.18), shown in Eq. (14.42),
$$\mathbf{x}_1(t) = \mathbf{v}_1\,e^{\lambda_1 t}, \qquad \mathbf{x}_2(t) = \mathbf{v}_2\,e^{\lambda_2 t}. \qquad (14.42)$$
For the example at hand, Eqs. (14.42) become Eqs. (14.43), and the general solution is then constructed using the linear combination of the two solutions in Eqs. (14.43).
If the eigenvalues $\lambda_1$ and $\lambda_2$ of the square matrix $A$ are real and distinct, then the linear system has the following general solution:
$$\mathbf{x}(t) = c_1 e^{\lambda_1 t}\,\mathbf{v}_1 + c_2 e^{\lambda_2 t}\,\mathbf{v}_2,$$
where $c_1$ and $c_2$ are arbitrary constants and $\mathbf{v}_1$ and $\mathbf{v}_2$ are two linearly independent eigenvectors corresponding to $\lambda_1$ and $\lambda_2$, respectively.
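A quick numerical sanity check of the boxed result: for a placeholder matrix with real, distinct eigenvalues (an illustrative choice, not the example used above), the sketch below builds the general solution, fits the constants to an initial condition and verifies by finite differences that it satisfies $\dot{\mathbf{x}} = A\mathbf{x}$.

```python
import numpy as np

# Placeholder matrix with real, distinct eigenvalues (not the notes' example).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
lam, V = np.linalg.eig(A)          # columns of V are eigenvectors v1, v2

# Fit the constants c1, c2 to an initial condition x(0) = x0:
# x(0) = c1*v1 + c2*v2  =>  solve V c = x0.
x0 = np.array([1.0, -1.0])
c = np.linalg.solve(V, x0)

def x(t):
    """General solution c1*exp(lam1*t)*v1 + c2*exp(lam2*t)*v2."""
    return V @ (c * np.exp(lam * t))

# Numerical check: d/dt x(t) should equal A x(t).
t, h = 0.7, 1e-6
deriv = (x(t + h) - x(t - h)) / (2 * h)
print(np.allclose(deriv, A @ x(t)))   # expect True
```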
Complex eigenvalues
In this case, the characteristic equation for the $2\times 2$ matrix $A$ gives two complex (and distinct) eigenvalues, $\lambda_1$ and $\lambda_2 = \bar{\lambda}_1$. We will see that, by using only one eigenvalue (either $\lambda_1$ or $\lambda_2$), it is possible to obtain two linearly independent solutions which are, in turn, used to construct the general solution of the system. We therefore start with $\lambda_1$ and proceed to find the corresponding eigenvector. Consider a linear system like (14.18) with a given real matrix $A$.
Using the characteristic equation, (14.33), we find that the eigenvalues $\lambda_1$ and $\lambda_2$ form a complex-conjugate pair. From this point onward, we continue with any one of the eigenvalues, e.g. $\lambda_1$.
Find $\mathbf{v}_1$ corresponding to $\lambda_1$
We determine $\mathbf{v}_1$ in the usual way [using (14.30)]. This gives a complex eigenvector, $\mathbf{v}_1$, corresponding to $\lambda_1$.
Substituting $\lambda_1$ and $\mathbf{v}_1$ in the trial solution (14.24) gives the (complex) solution
$$\mathbf{x}(t) = \mathbf{v}_1\,e^{\lambda_1 t}. \qquad (14.46)$$
Now, since the system we are trying to solve is real (this is because matrix $A$ is real), our solution for $\mathbf{x}(t)$ should also be real. Equation (14.46) is complex; further, we need two linearly independent solutions, say $\mathbf{x}_1(t)$ and $\mathbf{x}_2(t)$, to make up the general solution to the system.
The next step involves separating Eq. (14.46) into real and imaginary parts. The real part gives the first solution,
$$\mathbf{x}_1(t) = \mathrm{Re}\!\left[\mathbf{v}_1\,e^{\lambda_1 t}\right], \qquad (14.47)$$
and the imaginary part gives the second,
$$\mathbf{x}_2(t) = \mathrm{Im}\!\left[\mathbf{v}_1\,e^{\lambda_1 t}\right]. \qquad (14.48)$$
There is no need to include the imaginary unit $i$ in Eq. (14.48); this is implied by the fact that the solution in (14.48) is the coefficient of $i$ in (14.46).
Note: The two linearly independent solutions we seek are exactly those given by Eqs. (14.47) and (14.48). Recall that any constant multiple of (14.47) and (14.48) is also a solution. By using the principle of superposition, we write down the general solution as follows:
$$\mathbf{x}(t) = c_1\mathbf{x}_1(t) + c_2\mathbf{x}_2(t).$$
If the eigenvalues $\lambda_1$ and $\lambda_2$ of the square matrix $A$ are complex, then the linear system has the solution
$$\mathbf{x}(t) = \mathbf{v}_1\,e^{\lambda_1 t}, \qquad (14.50)$$
where $\mathbf{v}_1$ is the complex eigenvector corresponding to $\lambda_1$. The general solution is:
$$\mathbf{x}(t) = c_1\,\mathrm{Re}\!\left[\mathbf{v}_1\,e^{\lambda_1 t}\right] + c_2\,\mathrm{Im}\!\left[\mathbf{v}_1\,e^{\lambda_1 t}\right],$$
where $c_1$ and $c_2$ are constants and $\mathrm{Re}[\cdot]$ and $\mathrm{Im}[\cdot]$ denote the real and imaginary parts of Eq. (14.50), respectively. (You may write down an equation similar to (14.50) using $\lambda_2$ and the corresponding eigenvector $\mathbf{v}_2$; however, only one such equation is required to get all the linearly independent solutions we need.)
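The real and imaginary parts of $\mathbf{v}_1 e^{\lambda_1 t}$ really are two real solutions, and this can be checked numerically. The sketch below uses a simple rotation-like matrix as a stand-in (it is not the example treated in these notes).

```python
import numpy as np

# Placeholder matrix with complex eigenvalues (a rotation-like matrix);
# not the specific example used in the notes.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])
lam, V = np.linalg.eig(A)
l1, v1 = lam[0], V[:, 0]            # keep one eigenvalue/eigenvector pair

def x1(t):
    """Real part of v1*exp(l1*t)."""
    return np.real(v1 * np.exp(l1 * t))

def x2(t):
    """Imaginary part of v1*exp(l1*t)."""
    return np.imag(v1 * np.exp(l1 * t))

# Both real solutions should satisfy xdot = A x (checked by finite differences).
t, h = 0.3, 1e-6
for sol in (x1, x2):
    deriv = (sol(t + h) - sol(t - h)) / (2 * h)
    print(np.allclose(deriv, A @ sol(t)))   # expect True, True
```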
Repeated eigenvalues
In this final case, the characteristic equation gives a single real eigenvalue, $\lambda = \lambda_1 = \lambda_2$. Using the usual method, we can write down one solution to the linear system (14.18), of the form $\mathbf{x}_1(t) = \mathbf{v}\,e^{\lambda t}$, where $\mathbf{v}$ is the eigenvector corresponding to $\lambda$. For a 2D system, however, we need a pair of fundamental solutions (these make up a linearly independent set) to construct the general solution.
The question is, if the characteristic equation only gives us one eigenvalue, then what do we do about the second solution? The answer to this depends on the type of repeated eigenvalues; these can be:
(i) complete;
(ii) defective.
Complete
This is the easier of the two cases, the reason being that we can actually find two linearly independent eigenvectors, $\mathbf{v}_1$ and $\mathbf{v}_2$, such that the general solution to the linear system may be expressed as:
$$\mathbf{x}(t) = c_1\mathbf{v}_1\,e^{\lambda t} + c_2\mathbf{v}_2\,e^{\lambda t}. \qquad (14.53)$$
Note that the exponential terms in Eq. (14.53) have the same exponent, $\lambda$, which is the only root obtained from the characteristic equation.
As an example for this case, we look at the linear system (14.18) with $A$ a scalar multiple of the identity.
Such a matrix can be expressed as $A = \lambda I$, where $I$ is the identity matrix. A matrix that may be expressed as the product of a scalar and the identity matrix is known as a scalar matrix. It is easy to see that the only eigenvalue of $A$ is the repeated diagonal entry, $\lambda$ (this is a double root). If we now attempt to find an eigenvector $\mathbf{v}$ corresponding to $\lambda$ in the usual way [i.e. using Eq. (14.30)], we get two equations, (14.54) and (14.55), in which every coefficient vanishes (since $A - \lambda I = 0$).
What Eqs. (14.54) and (14.55) therefore imply is that every (nonzero) vector is an eigenvector. Our objective here is to find two vectors, $\mathbf{v}_1$ and $\mathbf{v}_2$, which are linearly independent (i.e. not a constant multiple of each other). The usual choice is the following,
$$\mathbf{v}_1 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \qquad \mathbf{v}_2 = \begin{pmatrix} 0 \\ 1 \end{pmatrix}. \qquad (14.56)$$
Then, by substituting (14.56) and the eigenvalue $\lambda$ in (14.53), we have the general solution for complete eigenvalues,
$$\mathbf{x}(t) = c_1 e^{\lambda t}\begin{pmatrix} 1 \\ 0 \end{pmatrix} + c_2 e^{\lambda t}\begin{pmatrix} 0 \\ 1 \end{pmatrix}.$$
If the eigenvalues $\lambda_1$ and $\lambda_2$ of the square matrix $A$ are real, repeated and complete, then the linear system has only one eigenvalue, $\lambda = \lambda_1 = \lambda_2$, and has the following general solution
$$\mathbf{x}(t) = c_1 e^{\lambda t}\,\mathbf{v}_1 + c_2 e^{\lambda t}\,\mathbf{v}_2,$$
where $c_1$ and $c_2$ are constants and $\mathbf{v}_1$ and $\mathbf{v}_2$ are two linearly independent eigenvectors.
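For the complete (scalar-matrix) case, a short sketch makes the point that every vector satisfies the eigenvector condition, so the standard basis is a valid choice; the eigenvalue used here is an arbitrary placeholder.

```python
import numpy as np

lam = 3.0                           # hypothetical repeated eigenvalue
A = lam * np.eye(2)                 # scalar matrix: A = lam * I

# (A - lam*I) v = 0 holds for *every* v, so any two independent vectors do;
# the usual choice is the standard basis.
v1, v2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(np.allclose((A - lam * np.eye(2)) @ v1, 0),
      np.allclose((A - lam * np.eye(2)) @ v2, 0))

# General solution x(t) = exp(lam*t) * (c1*v1 + c2*v2) for constants c1, c2.
c1, c2 = 2.0, -1.0
t = 0.5
x = np.exp(lam * t) * (c1 * v1 + c2 * v2)
print(x)
```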
Defective
An eigenvalue is defective if we can only find one nonzero eigenvector (up to a constant multiple) for the linear system we are trying to solve. Again, we can write down the first solution as
$$\mathbf{x}_1(t) = \mathbf{v}\,e^{\lambda t}, \qquad (14.59)$$
where we have used
$$(A - \lambda I)\,\mathbf{v} = \mathbf{0} \qquad (14.60)$$
to find $\mathbf{v}$. Since a 2D system requires two linearly independent solutions, we need to find the second one some other way.
From our experience with second-order, constant-coefficient ODEs (see Subsec. 13.2.4), we know that multiplying our first solution [given by Eq. (14.59)] by the independent variable, $t$, might not be a terrible idea. We can easily show, however, that simply doing that does not offer a viable second solution. Instead, the correct form of the second trial solution is the following,
$$\mathbf{x}_2(t) = \left(t\,\mathbf{v} + \mathbf{w}\right)e^{\lambda t}. \qquad (14.61)$$
In Eq. (14.61), $\lambda$ is the eigenvalue obtained from the characteristic equation and $\mathbf{v}$ is the eigenvector corresponding to $\lambda$, which we obtain using the condition given by (14.60). The only unknown in Eq. (14.61), therefore, is the vector $\mathbf{w}$.
How to find $\mathbf{w}$.
Now, we need a condition, similar to the one given by Eq. (14.60), which allows us to determine $\mathbf{w}$. We derive this condition in a similar way to the one we used to derive the condition for $\mathbf{v}$ in Subsec. 14.2.1.
Since the suggested solution given by (14.61) must satisfy the linear system (14.18), we differentiate (14.61) to obtain $\dot{\mathbf{x}}_2$ and substitute back in Eq. (14.18). Doing this gives,
$$\mathbf{v}\,e^{\lambda t} + \lambda\left(t\,\mathbf{v} + \mathbf{w}\right)e^{\lambda t} = A\left(t\,\mathbf{v} + \mathbf{w}\right)e^{\lambda t}. \qquad (14.62)$$
Dividing (14.62) by $e^{\lambda t}$ and rearranging, we get,
$$\mathbf{v} + \lambda t\,\mathbf{v} + \lambda\mathbf{w} = t\,A\mathbf{v} + A\mathbf{w}. \qquad (14.63)$$
Using (14.60) in (14.63), the terms proportional to $t$ cancel [since $A\mathbf{v} = \lambda\mathbf{v}$] and we are left with
$$(A - \lambda I)\,\mathbf{w} = \mathbf{v}, \qquad (14.64)$$
where $\mathbf{v}$ is already known from using (14.60) and $\mathbf{w}$ is the unknown vector. Equation (14.64) gives the condition we need to determine $\mathbf{w}$.
The equation for $\mathbf{w}$ [given by (14.64)] is guaranteed to have a solution provided that the eigenvalue $\lambda$ is defective. Note that, when solving for $\mathbf{w} = (w_1, w_2)$, you can set either $w_1$ or $w_2$ to zero and then solve for the other entry to get a suitable vector (see the example below). Finally, once $\mathbf{v}$ and $\mathbf{w}$ are determined, we construct the general solution as follows
$$\mathbf{x}(t) = c_1\mathbf{x}_1(t) + c_2\mathbf{x}_2(t),$$
where $\mathbf{x}_1(t)$ and $\mathbf{x}_2(t)$ are given by Eqs. (14.59) and (14.61), respectively. Next, we briefly look at an example of how to solve linear systems with defective matrices. We also show why arbitrarily setting one of the entries of $\mathbf{w}$ to zero while using (14.64) to solve for the other is appropriate.
Consider the linear system (14.18) with a given matrix $A$ whose characteristic equation yields a single root.
Using the characteristic equation, we find that $\lambda_1 = \lambda_2 = \lambda$ (real, repeated). Since $A \neq \lambda I$ (i.e. $A$ is not a scalar matrix), the eigenvalue is said to be defective. Using (14.60), we find the first eigenvector, $\mathbf{v}$, which allows us to write down the first solution, Eq. (14.66), in the form $\mathbf{x}_1(t) = \mathbf{v}\,e^{\lambda t}$.
Now, for $\mathbf{w} = (w_1, w_2)$, we use (14.64). This gives us two equations, (14.67) and (14.68).
Of course, Eqs. (14.67) and (14.68) are dependent, so all we can get from them is a relationship between $w_1$ and $w_2$. We could set $w_1$ to zero [either in (14.67) or (14.68)] and solve for $w_2$, or the other way around. However, to see why this works, let us instead keep one of the entries arbitrary, calling it $\alpha$, and use Eq. (14.67) to express the other entry in terms of $\alpha$, Eq. (14.69).
The resulting vector $\mathbf{w}$, Eq. (14.70), therefore contains the arbitrary constant $\alpha$.
Now, if we substitute (14.70) in the second solution [given by (14.61)], we get Eq. (14.71). The last term in Eq. (14.71) is simply a constant multiple of the first solution, which is given by Eq. (14.66), and therefore it can be ignored (by setting $\alpha = 0$). The first two terms on the RHS of (14.71), however, constitute a new solution, Eq. (14.72).
Note that the vector appearing in (14.72) is exactly what we would have obtained had we set the arbitrary entry to zero in Eq. (14.67) [or Eq. (14.68)]. The solutions given by (14.66) and (14.72) form a fundamental set (it can easily be shown that their Wronskian determinant is never zero). The general solution we are looking for is then given by a linear combination of (14.66) and (14.72).
If the eigenvalues $\lambda_1$ and $\lambda_2$ of the square matrix $A$ are real, repeated, and defective, then the linear system only has one eigenvalue, $\lambda = \lambda_1 = \lambda_2$, and has the following general solution
$$\mathbf{x}(t) = c_1\,\mathbf{v}\,e^{\lambda t} + c_2\left(t\,\mathbf{v} + \mathbf{w}\right)e^{\lambda t},$$
where:
- $c_1$ and $c_2$ are constants;
- $\mathbf{v}$ satisfies $(A - \lambda I)\,\mathbf{v} = \mathbf{0}$;
- $\mathbf{w}$ satisfies $(A - \lambda I)\,\mathbf{w} = \mathbf{v}$.
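To close the defective case, the sketch below runs through the recipe numerically for a stand-in defective matrix (a Jordan-type block, not the example above): it checks the conditions $(A - \lambda I)\mathbf{v} = \mathbf{0}$ and $(A - \lambda I)\mathbf{w} = \mathbf{v}$ and verifies that $\mathbf{x}_2(t) = (t\mathbf{v} + \mathbf{w})e^{\lambda t}$ solves the system.

```python
import numpy as np

# Placeholder defective matrix (a Jordan-type block); not the notes' example.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0                                  # repeated, defective eigenvalue

# Eigenvector v from (A - lam*I) v = 0: here v = (1, 0) works.
v = np.array([1.0, 0.0])

# Generalised vector w from (A - lam*I) w = v, i.e. Eq. (14.64).
# The system is singular, so we fix one entry (w1 = 0) and solve for the other.
B = A - lam * np.eye(2)
w = np.array([0.0, 1.0])                   # indeed B @ w = v
print(np.allclose(B @ v, 0))               # expect True
print(np.allclose(B @ w, v))               # expect True

def x2(t):
    """Second solution (t*v + w) * exp(lam*t)."""
    return (t * v + w) * np.exp(lam * t)

t, h = 0.4, 1e-6
deriv = (x2(t + h) - x2(t - h)) / (2 * h)
print(np.allclose(deriv, A @ x2(t)))       # expect True
```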