2D linear systems

In Section 12.6, we considered 1D systems and discussed that the behaviour of trajectories on the phase line is very limited: a trajectory can approach a fixed point, move away from a fixed point towards $\pm\infty$, or, if it starts on a fixed point, stay there forever. In Subsec. 14.1.1 we saw that $n^{\text{th}}$ order linear differential equations can be expressed in terms of a system of $n$ linear, first-order ODEs. This section is concerned with two-dimensional (2D) linear systems (or, equivalently, second-order LODEs).

Mathematical setup

We consider a vector $\boldsymbol{x}=(x, y)$, where $x$ and $y$ are functions of time, $t$. Note that we consider autonomous systems, i.e. there is no explicit dependence on the independent variable, $t$. Our state variables are $x$ and $y$ [or $\boldsymbol{x}=(x, y)$], where $\boldsymbol{x} \in \mathbb{R}^{2}$. This is analogous to the 1D systems we discussed in Subsec. 12.6. We consider the system

$$
\dot{\boldsymbol{x}}=\boldsymbol{f}(\boldsymbol{x}), \tag{14.16}
$$

where $\dot{\boldsymbol{x}}=(\dot{x}, \dot{y})$ and $\boldsymbol{f}=(f, g)$. Note that in this chapter we use a single dot to denote first-order time derivatives. Again, as done throughout these notes, we consider our variables to be real, and so $\boldsymbol{x} \in \mathbb{R}^{2}$. The $2 \times 2$ system given by Eq. (14.16) can be expressed as

$$
\begin{aligned}
\dot{x}&=f(\boldsymbol{x}), \\
\dot{y}&=g(\boldsymbol{x});
\end{aligned} \tag{14.17}
$$

where $f$ and $g$ are functions of both variables, $x$ and $y$. In 1D systems, we consider $x(t)$ to be a point on the real line at a particular point in time and $f(x(t))$ to be the vector field, representing the velocity of the point at that same time. If we now consider $(x(t), y(t))$ to be a point on the $x$-$y$ plane at a particular point in time, then the pair $(f, g)$ defines a vector field on the plane, representing the velocity of the point at that time.
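As a minimal illustration of this viewpoint (the particular right-hand side below is our own arbitrary choice, not one from the notes), the following Python sketch evaluates the vector field at a point in the phase plane:

```python
import numpy as np

# A hypothetical right-hand side f(x) = (f(x, y), g(x, y));
# any smooth choice of component functions would do here.
def f(state):
    x, y = state
    return np.array([x + y,   # f(x, y)
                     x * y])  # g(x, y)

point = np.array([1.0, 2.0])  # a point (x, y) in the phase plane
velocity = f(point)           # velocity of the point at this instant
print(velocity)               # -> [3. 2.]
```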

For linear systems, Eq. (14.17) takes the form

$$
\dot{\boldsymbol{x}}=A \boldsymbol{x}, \tag{14.18}
$$

or, equivalently,

$$
\begin{aligned}
\dot{x}&=a x+b y, \\
\dot{y}&=c x+d y,
\end{aligned} \tag{14.19}
$$

where $A$ is a real, constant-coefficient matrix,

$$
A=\left(\begin{array}{ll}
a & b \\
c & d
\end{array}\right).
$$

The systems that we consider here are homogeneous.
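Before developing the analytical machinery, note that a system like (14.18) can always be explored numerically. The sketch below is illustrative only: the matrix and initial condition are arbitrary choices of ours, and SciPy's general-purpose integrator stands in for the exact solution.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[4.0, 7.0],
              [-2.0, -5.0]])  # an arbitrary constant-coefficient matrix

def rhs(t, x):
    # Autonomous linear system: A @ x does not depend on t explicitly.
    return A @ x

sol = solve_ivp(rhs, t_span=(0.0, 2.0), y0=[1.0, 0.0])
print(sol.y[:, -1])           # the state (x, y) at t = 2
```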

Qualitative behaviour

A simple example is given as follows:

$$
\dot{\boldsymbol{x}}=\left(\begin{array}{cc}
\mu & 0 \\
0 & 1
\end{array}\right) \boldsymbol{x}, \quad \boldsymbol{x} \in \mathbb{R}^{2}. \tag{14.20}
$$

The system (14.20) is decoupled since we have the equivalent scalar equations

$$
\dot{x}=\mu x, \quad \text{and} \quad \dot{y}=y; \tag{14.21}
$$

where the $\dot{x}$ equation does not depend on $y$ and the $\dot{y}$ equation does not depend on $x$; here $\mu$ is a real constant which can be thought of as a system parameter. Of course, Eqs. (14.21) are easy to solve (you can solve each one separately) to obtain

$$
x(t)=x_{0} e^{\mu t}, \quad \text{and} \quad y(t)=y_{0} e^{t}, \tag{14.22}
$$

where $x_{0}$ and $y_{0}$ are the initial conditions for $x$ and $y$, respectively, at $t=0$. Now, if we wanted to visualise what these solutions look like, we might plot $x$ and $y$ against $t$. To do so we would need the values of $x_{0}$ and $y_{0}$ and, worse yet, if the initial conditions varied (very likely in a real physical situation), we would need a different plot for each pair of initial conditions. Furthermore, this is a decoupled system, so it is easy to solve explicitly; in most cases, solving the differential equations explicitly is not possible, and hence visualising the behaviour in such a way is not going to provide much intuition about the general qualitative features of the system. For this reason, our main focus will be to draw and analyse the phase portrait, where the phase space is no longer the phase line (as in 1D) but the phase plane, i.e. $\mathbb{R} \times \mathbb{R}$.
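As a quick sketch of what a phase portrait looks like in practice (an illustrative script of our own; the value $\mu=-1$ and the grid of initial conditions are arbitrary choices), one can plot the explicit solutions (14.22) for several initial conditions at once:

```python
import numpy as np
import matplotlib.pyplot as plt

mu = -1.0                     # system parameter (arbitrary choice)
t = np.linspace(0.0, 3.0, 200)

# One trajectory per initial condition (x0, y0), using the explicit
# solutions x(t) = x0 * exp(mu * t) and y(t) = y0 * exp(t).
for x0 in (-1.0, -0.5, 0.5, 1.0):
    for y0 in (-1.0, 1.0):
        plt.plot(x0 * np.exp(mu * t), y0 * np.exp(t))

plt.xlabel("x")
plt.ylabel("y")
plt.title("Phase portrait of the decoupled system (14.20)")
plt.show()
```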

General solution

A nice thing about linear systems is that if we can find two solutions $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$ then, by the principle of superposition, any linear combination of them is also a solution:

$$
\boldsymbol{x}=c_{1} \boldsymbol{x}_{1}+c_{2} \boldsymbol{x}_{2}, \tag{14.23}
$$

where $c_{1}$ and $c_{2}$ are arbitrary constants. Here, we discuss the method for obtaining the general solution to the homogeneous $2 \times 2$ linear system.

Eigenvalues & eigenvectors

Suppose we want to solve system (14.18) with $A=\left(\begin{array}{ll}a & b \\ c & d\end{array}\right)$. To solve this, we assume a trial solution for the vector $\boldsymbol{x}$ as follows:

$$
\boldsymbol{x}=\boldsymbol{v} e^{\lambda t}, \tag{14.24}
$$

where $\boldsymbol{v}=\left(\begin{array}{ll}a_{1} & b_{1}\end{array}\right)^{\top}$.

The motivation for the trial solution comes from the fact that $A$ [which multiplies the vector $\boldsymbol{x}$ in (14.18)] is a constant-coefficient matrix. Equation (14.18) tells us that we are looking for a solution vector $\boldsymbol{x}$ which, when differentiated once (i.e. $\dot{\boldsymbol{x}}$), gives back the function itself multiplied by a constant factor. This is analogous to our choice of exponential solutions for constant-coefficient, second-order ODEs (covered in Chapter 13, Section 13.2). In the trial solution (14.24) we have three unknowns: $\lambda$, $a_{1}$ and $b_{1}$. We proceed by substituting (14.24) into the system (14.18), which yields

$$
\lambda\left(\begin{array}{l}
a_{1} \\
b_{1}
\end{array}\right) e^{\lambda t}=\left(\begin{array}{ll}
a & b \\
c & d
\end{array}\right)\left(\begin{array}{l}
a_{1} \\
b_{1}
\end{array}\right) e^{\lambda t}. \tag{14.25}
$$

Since $e^{\lambda t} \neq 0$, we can simplify (14.25) by dividing through by $e^{\lambda t}$:

$$
\lambda\left(\begin{array}{l}
a_{1} \\
b_{1}
\end{array}\right)=\left(\begin{array}{ll}
a & b \\
c & d
\end{array}\right)\left(\begin{array}{l}
a_{1} \\
b_{1}
\end{array}\right). \tag{14.26}
$$

In vector form, (14.26) is expressed as follows:

$$
\lambda \boldsymbol{v}=A \boldsymbol{v}. \tag{14.27}
$$

This is the eigenvalue problem we considered in Chapter 10, Section 10.7; we briefly review some of the details here. We note that the eigenvector $\boldsymbol{v}$ appears on both sides of Eq. (14.27), but at this point we cannot combine the two sides, since the scalar $\lambda$ on the LHS cannot be subtracted from the square matrix $A$ on the RHS. What we need to do, therefore, is make the LHS have the same dimensions as the RHS: we replace $\lambda$ by the diagonal matrix $\lambda I$ (note this does not change the equation in any way; it is analogous to multiplying a number or a function by 1). Equation (14.26) then becomes:

$$
\left(\begin{array}{ll}
\lambda & 0 \\
0 & \lambda
\end{array}\right)\left(\begin{array}{l}
a_{1} \\
b_{1}
\end{array}\right)=\left(\begin{array}{ll}
a & b \\
c & d
\end{array}\right)\left(\begin{array}{l}
a_{1} \\
b_{1}
\end{array}\right). \tag{14.28}
$$

This then simplifies to:

$$
\left(\begin{array}{cc}
a-\lambda & b \\
c & d-\lambda
\end{array}\right)\left(\begin{array}{l}
a_{1} \\
b_{1}
\end{array}\right)=\left(\begin{array}{l}
0 \\
0
\end{array}\right). \tag{14.29}
$$

In a more compact form, Eq. (14.29) is:

$$
(A-\lambda I) \boldsymbol{v}=\mathbf{0}. \tag{14.30}
$$

Equation (14.24) gives us the form of the trial solution to system (14.18). Clearly, if Eq. (14.30) gives $\boldsymbol{v}=\mathbf{0}$, then the solution to the linear system is $\boldsymbol{x}=\mathbf{0}$; this is the trivial solution, which always satisfies a linear, homogeneous system like (14.18). We seek all nonzero solutions, i.e. we need $\boldsymbol{v} \neq \mathbf{0}$. From linear algebra, we know that nonzero solutions $\boldsymbol{v}$ exist only if the matrix $A-\lambda I$ is singular, i.e.

$$
\operatorname{det}(A-\lambda I)=0. \tag{14.31}
$$

Expanding the determinant in Eq. (14.31), $\operatorname{det}(A-\lambda I)=(a-\lambda)(d-\lambda)-b c$, we get the following result:

$$
\lambda^{2}-(a+d) \lambda+(a d-b c)=0, \tag{14.32}
$$

which we recognise as the characteristic equation of the matrix $A$. It is a quadratic equation because $A$ is a $2 \times 2$ matrix. From (14.32) we solve for $\lambda$. The roots, say $\lambda_{1}$ and $\lambda_{2}$, are the eigenvalues of the matrix $A$.

As we saw in Chapter 10, for a $2 \times 2$ matrix $A$, the characteristic equation may be expressed as:

$$
\lambda^{2}-\tau \lambda+\Delta=0, \tag{14.33}
$$

where $\tau$ is the trace of $A$ [the sum of the diagonal entries of $A$, i.e. $(a+d)$] and $\Delta$ is the determinant of $A$.
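The shortcut (14.33) is easy to verify numerically. The sketch below (the helper name `char_roots` and the test matrix are our own choices) computes the roots from the trace and determinant and compares them with a library eigenvalue routine:

```python
import numpy as np

def char_roots(A):
    """Roots of lambda^2 - tau*lambda + Delta = 0 for a 2x2 matrix A."""
    tau = np.trace(A)                       # tau = a + d
    delta = np.linalg.det(A)                # Delta = ad - bc
    disc = np.emath.sqrt(tau**2 - 4*delta)  # complex-safe square root
    return (tau + disc) / 2, (tau - disc) / 2

A = np.array([[4.0, 7.0], [-2.0, -5.0]])
print(char_roots(A))            # -> (2.0, -3.0)
print(np.linalg.eigvals(A))     # the same eigenvalues, via LAPACK
```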

If we now go back to Eq. (14.29) [equivalently, (14.30)], for each root $\lambda$ we have two equations.

For $\lambda=\lambda_{1}$:

$$
\begin{aligned}
\left(a-\lambda_{1}\right) a_{1}+b b_{1}&=0, \\
c a_{1}+\left(d-\lambda_{1}\right) b_{1}&=0;
\end{aligned} \tag{14.34}
$$

and, for $\lambda=\lambda_{2}$,

$$
\begin{aligned}
\left(a-\lambda_{2}\right) a_{2}+b b_{2}&=0, \\
c a_{2}+\left(d-\lambda_{2}\right) b_{2}&=0.
\end{aligned} \tag{14.35}
$$

Now, we define $\boldsymbol{v}_{1}=\left(\begin{array}{ll}a_{1} & b_{1}\end{array}\right)^{\top}$ and $\boldsymbol{v}_{2}=\left(\begin{array}{ll}a_{2} & b_{2}\end{array}\right)^{\top}$.

The vectors $\boldsymbol{v}_{1}$ and $\boldsymbol{v}_{2}$ are the eigenvectors; $\boldsymbol{v}_{1}$ is the eigenvector corresponding to $\lambda_{1}$ and $\boldsymbol{v}_{2}$ is the eigenvector corresponding to $\lambda_{2}$.

Note that if the work has been done correctly, in each of the two sets of equations (14.34) and (14.35) the two equations will be dependent, i.e. one will be a constant multiple of the other. If this were not the case, the only solutions would be $\boldsymbol{v}_{1}=\boldsymbol{v}_{2}=\mathbf{0}$, which lead to trivial solutions. From (14.33) (and from what we know about second-order, linear ODEs), the roots of the characteristic equation can be:

  1. real and distinct;
  2. complex conjugates;
  3. real and repeated.

To find solutions to linear systems of the form (14.18), the very first step is to write down the characteristic equation [for $2 \times 2$ systems, a shortcut for obtaining this is given by Eq. (14.33)] and solve for the roots, say $\lambda_{1}$ and $\lambda_{2}$.

Real eigenvalues

This is the easiest case, and most of the work for it has already been explained in Subsec. 14.2.1. We will work our way through it using an example. Consider the linear system $\dot{\boldsymbol{x}}=A \boldsymbol{x}$ where the matrix $A$ is given by:

$$
A=\left(\begin{array}{cc}
4 & 7 \\
-2 & -5
\end{array}\right). \tag{14.36}
$$

Using (14.33), the characteristic equation is

$$
\lambda^{2}+\lambda-6=0, \tag{14.37}
$$

which gives $\lambda_{1}=-3$ and $\lambda_{2}=2$ as the eigenvalues. Next, we need to find the two eigenvectors corresponding to $\lambda_{1}$ and $\lambda_{2}$.

Find $\boldsymbol{v}_{1}$ corresponding to $\lambda_{1}=-3$

Substituting $\lambda_{1}=-3$ into Eqs. (14.34), we have the following two equations:

$$
\begin{aligned}
7 a_{1}+7 b_{1} &=0, \\
-2 a_{1}-2 b_{1} &=0.
\end{aligned} \tag{14.38}
$$

We can easily see that the equations in (14.38) are constant multiples of each other, from which we obtain a relationship between $a_{1}$ and $b_{1}$ as follows,

$$
b_{1}=-a_{1}. \tag{14.39}
$$

By setting $a_{1}=1$, the relation (14.39) gives $b_{1}=-1$ and hence the first eigenvector is $\boldsymbol{v}_{1}=\left(\begin{array}{ll}1 & -1\end{array}\right)^{\top}$.

Find $\boldsymbol{v}_{2}$ corresponding to $\lambda_{2}=2$

The second eigenvector is $\boldsymbol{v}_{2}=\left(\begin{array}{ll}a_{2} & b_{2}\end{array}\right)^{\top}$. Substituting $\lambda_{2}=2$ into Eqs. (14.35), we have the following two equations, which will help us find a relationship between $a_{2}$ and $b_{2}$:

$$
\begin{aligned}
2 a_{2}+7 b_{2} &=0, \\
-2 a_{2}-7 b_{2} &=0.
\end{aligned} \tag{14.40}
$$

Equations (14.40) yield the following relationship between $a_{2}$ and $b_{2}$,

$$
b_{2}=-\frac{2 a_{2}}{7}. \tag{14.41}
$$

So, if $a_{2}=1$, then Eq. (14.41) gives $b_{2}=-2 / 7$; the second eigenvector is therefore $\boldsymbol{v}_{2}=\left(\begin{array}{ll}1 & -2 / 7\end{array}\right)^{\top}$. Once we have the eigenvalues and the corresponding eigenvectors, we have two linearly independent solutions to system (14.18), shown in Eq. (14.42):

$$
\boldsymbol{x}_{1}=\boldsymbol{v}_{1} e^{\lambda_{1} t}, \quad \boldsymbol{x}_{2}=\boldsymbol{v}_{2} e^{\lambda_{2} t}. \tag{14.42}
$$

Equations (14.42) become

$$
\boldsymbol{x}_{1}=\left(\begin{array}{c}
1 \\
-1
\end{array}\right) e^{-3 t}, \quad \boldsymbol{x}_{2}=\left(\begin{array}{c}
1 \\
-\frac{2}{7}
\end{array}\right) e^{2 t}. \tag{14.43}
$$

The general solution is then constructed as a linear combination of the two solutions in Eq. (14.43):

$$
\boldsymbol{x}=c_{1}\left(\begin{array}{c}
1 \\
-1
\end{array}\right) e^{-3 t}+c_{2}\left(\begin{array}{c}
1 \\
-\frac{2}{7}
\end{array}\right) e^{2 t}. \tag{14.44}
$$

If the eigenvalues $\lambda_{1}$ and $\lambda_{2}$ of the square matrix $A$ are real and distinct, then the linear system $\dot{\boldsymbol{x}}=A \boldsymbol{x}$ has the following general solution:

$$
\boldsymbol{x}=c_{1} \boldsymbol{v}_{1} e^{\lambda_{1} t}+c_{2} \boldsymbol{v}_{2} e^{\lambda_{2} t}, \tag{14.45}
$$

where $c_{1}$ and $c_{2}$ are arbitrary constants and $\boldsymbol{v}_{1}$ and $\boldsymbol{v}_{2}$ are two linearly independent eigenvectors corresponding to $\lambda_{1}$ and $\lambda_{2}$, respectively.
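A quick numerical cross-check of this worked example (a sketch only; note that eigenvector scaling is arbitrary, so a library routine may return scaled versions of $\boldsymbol{v}_{1}$ and $\boldsymbol{v}_{2}$):

```python
import numpy as np

A = np.array([[4.0, 7.0], [-2.0, -5.0]])
lams, V = np.linalg.eig(A)         # columns of V are eigenvectors
print(lams)                        # -3 and 2, in some order

# Verify A v = lambda v for each eigenpair the library returned.
for lam, v in zip(lams, V.T):
    assert np.allclose(A @ v, lam * v)

# Our hand-computed eigenvectors, up to scaling:
v1 = np.array([1.0, -1.0])         # for lambda_1 = -3
v2 = np.array([1.0, -2.0 / 7.0])   # for lambda_2 = 2
assert np.allclose(A @ v1, -3 * v1)
assert np.allclose(A @ v2, 2 * v2)
```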

Complex eigenvalues

In this case, the characteristic equation for the $2 \times 2$ matrix gives two distinct, complex-conjugate eigenvalues, $\lambda_{1}=\mu+\omega i$ and $\lambda_{2}=\mu-\omega i$. We will see that, by using only one eigenvalue (either $\lambda_{1}$ or $\lambda_{2}$), it is possible to obtain two linearly independent solutions which are, in turn, used to construct the general solution of the $2 \times 2$ system. We therefore start with $\lambda_{1}=\mu+\omega i$ and proceed to find the corresponding eigenvector. Consider a linear system like (14.18) with $A=\left(\begin{array}{cc}-\frac{1}{2} & 1 \\ -1 & -\frac{1}{2}\end{array}\right)$.

Using the characteristic equation (14.33), we find that $\lambda_{1}=-0.5+i$ and $\lambda_{2}=-0.5-i$. From this point onward, we continue with either one of the eigenvalues, e.g. $\lambda_{1}=-0.5+i$.

Find $\boldsymbol{v}_{1}$ corresponding to $\lambda_{1}=-0.5+i$

We determine $\boldsymbol{v}_{1}$ in the usual way [using (14.30)]. This gives a complex eigenvector, $\boldsymbol{v}_{1}=\left(\begin{array}{ll}1 & i\end{array}\right)^{\top}$, corresponding to $\lambda_{1}=-0.5+i$.

Substituting $\lambda_{1}$ and $\boldsymbol{v}_{1}$ into the trial solution (14.24), and using Euler's formula, $e^{i t}=\cos t+i \sin t$, we have

$$
\begin{aligned}
\boldsymbol{x} & =\left(\begin{array}{c}
1 \\
i
\end{array}\right) e^{(-0.5+i) t} \\
& =e^{-0.5 t}\left[\left(\begin{array}{l}
1 \\
0
\end{array}\right)+i\left(\begin{array}{l}
0 \\
1
\end{array}\right)\right](\cos t+i \sin t) \\
& =e^{-0.5 t}\left[\left(\begin{array}{l}
1 \\
0
\end{array}\right) \cos t+i\left(\begin{array}{l}
0 \\
1
\end{array}\right) \cos t+i\left(\begin{array}{l}
1 \\
0
\end{array}\right) \sin t-\left(\begin{array}{l}
0 \\
1
\end{array}\right) \sin t\right].
\end{aligned} \tag{14.46}
$$

Now, since the system we are trying to solve is real (because the matrix $A$ is real), our solution for $\boldsymbol{x}$ should also be real. Equation (14.46) is complex; furthermore, we need two linearly independent solutions, say $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$, to make up the general solution of the $2 \times 2$ system.

The next step involves separating Eq. (14.46) into real and imaginary parts. The real part is given by $\operatorname{Re}(\boldsymbol{x})$:

$$
\operatorname{Re}(\boldsymbol{x})=e^{-0.5 t}\left[\left(\begin{array}{l}
1 \\
0
\end{array}\right) \cos t-\left(\begin{array}{l}
0 \\
1
\end{array}\right) \sin t\right]; \tag{14.47}
$$

and the imaginary part by $\operatorname{Im}(\boldsymbol{x})$:

$$
\operatorname{Im}(\boldsymbol{x})=e^{-0.5 t}\left[\left(\begin{array}{l}
0 \\
1
\end{array}\right) \cos t+\left(\begin{array}{l}
1 \\
0
\end{array}\right) \sin t\right]. \tag{14.48}
$$

There is no need to include the imaginary unit $i$ in Eq. (14.48); it is implied by the fact that the solution in (14.48) is the coefficient of $i$ in (14.46).

Note: the two linearly independent solutions we seek are exactly those given by Eqs. (14.47) and (14.48). Recall that any constant multiple of (14.47) and (14.48) is also a solution. By the principle of superposition, we write down the general solution as follows:

$$
\boldsymbol{x}=c_{1} e^{-0.5 t}\left(\begin{array}{c}
\cos t \\
-\sin t
\end{array}\right)+c_{2} e^{-0.5 t}\left(\begin{array}{c}
\sin t \\
\cos t
\end{array}\right). \tag{14.49}
$$

If the eigenvalues $\lambda_{1}$ and $\lambda_{2}$ of the square matrix $A$ are complex, then the linear system $\dot{\boldsymbol{x}}=A \boldsymbol{x}$ has the solution

$$
\boldsymbol{x}=\boldsymbol{v}_{1} e^{\lambda_{1} t}, \tag{14.50}
$$

where $\boldsymbol{v}_{1}$ is the complex eigenvector corresponding to $\lambda_{1}$.$^{26}$ The general solution is:

$$
\boldsymbol{x}=c_{1} \operatorname{Re}(\boldsymbol{x})+c_{2} \operatorname{Im}(\boldsymbol{x}), \tag{14.51}
$$

where $c_{1}$ and $c_{2}$ are constants and $\operatorname{Re}(\boldsymbol{x})$ and $\operatorname{Im}(\boldsymbol{x})$ are the real and imaginary parts of Eq. (14.50), respectively.
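The same computation can be checked in code. The sketch below (illustrative only; a finite-difference quotient stands in for the exact derivative) confirms the eigenpair and verifies that the real part (14.47) satisfies $\dot{\boldsymbol{x}}=A \boldsymbol{x}$:

```python
import numpy as np

A = np.array([[-0.5, 1.0], [-1.0, -0.5]])
lam = -0.5 + 1.0j                   # the eigenvalue lambda_1
v = np.array([1.0, 1.0j])           # the eigenvector v_1 = (1, i)^T
assert np.allclose(A @ v, lam * v)  # confirm the eigenpair

def x_real(t):
    # Real part of v_1 * exp(lambda_1 * t), i.e. Eq. (14.47).
    return np.real(v * np.exp(lam * t))

# Finite-difference check that d/dt x_real = A x_real at a sample time.
t, h = 0.7, 1e-6
deriv = (x_real(t + h) - x_real(t - h)) / (2 * h)
assert np.allclose(deriv, A @ x_real(t), atol=1e-5)
print("Re(x) satisfies the system at t =", t)
```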

Repeated eigenvalues

In this final case, the characteristic equation gives a single real eigenvalue, $\lambda$ (a double root). Using the usual method, we can write down a solution to the linear system (14.18) as follows:

$$
\boldsymbol{x}_{1}=\boldsymbol{v}_{1} e^{\lambda t}, \tag{14.52}
$$

where $\boldsymbol{v}_{1}$ is the eigenvector corresponding to $\lambda$. For a 2D system, however, we need a pair of fundamental solutions (these make up a linearly independent set) to construct the general solution.

$^{26}$ You may write down an equation similar to (14.50) using $\lambda_{2}$ and the corresponding eigenvector $\boldsymbol{v}_{2}$; however, only one such equation is required to obtain all the linearly independent solutions we need.

The question is: if the characteristic equation only gives us one eigenvalue, what do we do about the second solution? The answer depends on the type of repeated eigenvalue; it can be:

(i) complete;

(ii) defective.

Complete

This is the easier of the two cases, because we can actually find two linearly independent eigenvectors, so that the general solution to the linear system may be expressed as:

$$
\boldsymbol{x}=c_{1} \boldsymbol{v}_{1} e^{\lambda t}+c_{2} \boldsymbol{v}_{2} e^{\lambda t}; \tag{14.53}
$$

note that the exponential terms in Eq. (14.53) have the same exponent $\lambda$, which is the only root obtained from the characteristic equation.

As an example of this case, we look at the linear system (14.18) with the matrix $A$ given as $A=\left(\begin{array}{ll}3 & 0 \\ 0 & 3\end{array}\right)$.

The matrix $A$ can also be expressed as $A=3 I$, where $I$ is the $2 \times 2$ identity matrix. A matrix that may be expressed as the product of a scalar and the identity matrix is known as a scalar matrix. It is easy to see that the eigenvalues of $A$ are the diagonal entries, in this case $\lambda=3$ (a double root). If we now attempt to find an eigenvector $\boldsymbol{v}_{1}=\left(\begin{array}{ll}a_{1} & b_{1}\end{array}\right)^{\top}$ corresponding to $\lambda=3$ in the usual way [i.e. using Eq. (14.30)], we get the following two equations:

$$
(3-\lambda) a_{1}=0, \tag{14.54}
$$
$$
(3-\lambda) b_{1}=0. \tag{14.55}
$$

However, since $\lambda=3$, Eqs. (14.54) and (14.55) imply that every vector is an eigenvector. Our objective here is to find two vectors, $\boldsymbol{v}_{1}$ and $\boldsymbol{v}_{2}$, which are linearly independent (i.e. not constant multiples of each other). The usual choice is the following:

$$
\boldsymbol{v}_{1}=\left(\begin{array}{l}
1 \\
0
\end{array}\right), \quad \text{and} \quad \boldsymbol{v}_{2}=\left(\begin{array}{l}
0 \\
1
\end{array}\right). \tag{14.56}
$$

Then, by substituting (14.56) and $\lambda=3$ into (14.53), we obtain the general solution for this complete (repeated) eigenvalue:

$$
\boldsymbol{x}=c_{1}\left(\begin{array}{l}
1 \\
0
\end{array}\right) e^{3 t}+c_{2}\left(\begin{array}{l}
0 \\
1
\end{array}\right) e^{3 t}. \tag{14.57}
$$

If the eigenvalues $\lambda_{1}$ and $\lambda_{2}$ of the square matrix $A$ are real, repeated and complete, then the linear system $\dot{\boldsymbol{x}}=A \boldsymbol{x}$ has only one eigenvalue, $\lambda_{1}=\lambda_{2}=\lambda$, and has the following general solution:

$$
\boldsymbol{x}=c_{1} \boldsymbol{v}_{1} e^{\lambda t}+c_{2} \boldsymbol{v}_{2} e^{\lambda t}, \tag{14.58}
$$

where $c_{1}$ and $c_{2}$ are constants and $\boldsymbol{v}_{1}=\left(\begin{array}{l}1 \\ 0\end{array}\right)$ and $\boldsymbol{v}_{2}=\left(\begin{array}{l}0 \\ 1\end{array}\right)$ are two linearly independent eigenvectors.
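Completeness is easy to check numerically: for a complete repeated eigenvalue, the matrix of eigenvectors has full rank. A minimal sketch (our own check, using the scalar matrix from the example above):

```python
import numpy as np

A = 3 * np.eye(2)          # the scalar matrix A = 3I from the example
lams, V = np.linalg.eig(A)
print(lams)                # -> [3. 3.], a repeated eigenvalue

# Two linearly independent eigenvectors exist, so the eigenvalue
# is complete: the eigenvector matrix has full rank.
assert np.linalg.matrix_rank(V) == 2
```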

Defective

An eigenvalue is defective if we can only find one nonzero eigenvector (up to a constant multiple) for the $2 \times 2$ linear system we are trying to solve. Again, we can write down the first solution as

$$
\boldsymbol{x}_{1}=\boldsymbol{v}_{1} e^{\lambda t}, \tag{14.59}
$$

where we have used

$$
(A-\lambda I) \boldsymbol{v}_{1}=\mathbf{0} \tag{14.60}
$$

to find $\boldsymbol{v}_{1}$. Since a $2 \times 2$ system requires two linearly independent solutions, we need to find the second one some other way.

From our experience with second-order, constant-coefficient ODEs (see Subsec. 13.2.4), multiplying our first solution [given by Eq. (14.59)] by the independent variable might seem like a good idea. However, it is easy to show that doing only that does not give a viable second solution. Instead, the correct form of the second trial solution is the following:

$$
\boldsymbol{x}_{2}=e^{\lambda t}\left(\boldsymbol{v}_{1} t+\boldsymbol{v}_{2}\right). \tag{14.61}
$$

In Eq. (14.61), $\lambda$ is the eigenvalue obtained from the characteristic equation and $\boldsymbol{v}_{1}$ is the eigenvector corresponding to $\lambda$, which we obtain using the condition given by (14.60). The only unknown in Eq. (14.61), therefore, is $\boldsymbol{v}_{2}$.

How to find $\boldsymbol{v}_{2}$

Now we need a condition, similar to the one given by Eq. (14.60), that allows us to determine $\boldsymbol{v}_{2}$. We derive this condition in a way analogous to the derivation of the condition for $\boldsymbol{v}_{1}$ in Subsec. 14.2.1.

Since the suggested solution (14.61) must satisfy the linear system (14.18), we differentiate (14.61) to obtain $\dot{\boldsymbol{x}}_{2}$ and substitute back into Eq. (14.18). Doing this gives

$$
e^{\lambda t}\left[\lambda\left(\boldsymbol{v}_{1} t+\boldsymbol{v}_{2}\right)+\boldsymbol{v}_{1}\right]=A e^{\lambda t}\left(\boldsymbol{v}_{1} t+\boldsymbol{v}_{2}\right). \tag{14.62}
$$

Dividing (14.62) by $e^{\lambda t}$ and rearranging, we get

$$
\boldsymbol{v}_{1}=(A-\lambda I) t \boldsymbol{v}_{1}+(A-\lambda I) \boldsymbol{v}_{2}. \tag{14.63}
$$

Using (14.60) in (14.63),

$$
\boldsymbol{v}_{1}=(A-\lambda I) \boldsymbol{v}_{2}, \tag{14.64}
$$

where $\boldsymbol{v}_{1}=\left(\begin{array}{ll}a_{1} & b_{1}\end{array}\right)^{\top}$ is already known [from (14.60)] and $\boldsymbol{v}_{2}=\left(\begin{array}{ll}a_{2} & b_{2}\end{array}\right)^{\top}$ is the unknown vector. Equation (14.64) is the condition we need to determine $\boldsymbol{v}_{2}$.

The equation for $\boldsymbol{v}_{2}$ [given by (14.64)] is guaranteed to have a solution provided that the eigenvalue is defective. Note that when solving for $\boldsymbol{v}_{2}$, you can set either $a_{2}$ or $b_{2}$ to zero and then solve for the other entry to get a suitable vector (see the example below). Finally, once $\lambda$, $\boldsymbol{v}_{1}$ and $\boldsymbol{v}_{2}$ are determined, we construct the general solution as follows:

$$
\boldsymbol{x}=c_{1} \boldsymbol{x}_{1}+c_{2} \boldsymbol{x}_{2}, \tag{14.65}
$$

where $\boldsymbol{x}_{1}$ and $\boldsymbol{x}_{2}$ are given by Eqs. (14.59) and (14.61), respectively. Next, we briefly look at an example of how to solve linear systems with defective matrices. We also show why arbitrarily setting one of the entries of $\boldsymbol{v}_{2}$ to zero while using (14.64) to solve for the other one is appropriate.

Consider the linear system (14.18) with the matrix $A$ given as

$$
A=\left(\begin{array}{cc}
12 & 4 \\
-16 & -4
\end{array}\right).
$$

Using the characteristic equation, we find that $\lambda=4$ (real, repeated). Since $A \neq \lambda I$ (i.e. $A$ is not a scalar matrix), the eigenvalue is defective. Using (14.60), we find the first eigenvector to be $\boldsymbol{v}_{1}=\left(\begin{array}{ll}1 & -2\end{array}\right)^{\top}$, which allows us to write down the first solution as

$$
\boldsymbol{x}_{1}=\left(\begin{array}{c}
1 \\
-2
\end{array}\right) e^{4 t}. \tag{14.66}
$$

Now, for $\boldsymbol{v}_{2}$, we use (14.64). This gives us the following two equations:

$$
1 =8 a_{2}+4 b_{2}, \tag{14.67}
$$
$$
-2 =-16 a_{2}-8 b_{2}; \tag{14.68}
$$

of course, Eqs. (14.67) and (14.68) are dependent, so all we can get from them is a relationship between $a_{2}$ and $b_{2}$. We could set $a_{2}$ to zero [either in (14.67) or (14.68)] and solve for $b_{2}$, or the other way around. However, to see why this works, let us instead set $a_{2}=k$ (where $k$ is arbitrary) and rewrite Eq. (14.67) as follows:

$$
b_{2}=\frac{1-8 k}{4}. \tag{14.69}
$$

The vector v2\boldsymbol{v}_{\mathbf{2}} is therefore:

$$
\boldsymbol{v}_{2}=\left(\begin{array}{c}
k \\
\frac{1}{4}(1-8 k)
\end{array}\right)=\left(\begin{array}{c}
0 \\
\frac{1}{4}
\end{array}\right)+k\left(\begin{array}{c}
1 \\
-2
\end{array}\right). \tag{14.70}
$$

Now, if we substitute (14.70) in the second solution [given by (14.61)], we get,

$$
\boldsymbol{x}_{2}=e^{4 t}\left(\begin{array}{c}
1 \\
-2
\end{array}\right) t+e^{4 t}\left(\begin{array}{c}
0 \\
\frac{1}{4}
\end{array}\right)+k e^{4 t}\left(\begin{array}{c}
1 \\
-2
\end{array}\right). \tag{14.71}
$$

The last term in Eq. (14.71) is simply a constant multiple of the first solution, given by Eq. (14.66), and can therefore be dropped (by setting $k=0$). The first two terms on the RHS of (14.71), however, constitute a new solution:

$$
\boldsymbol{x}_{2}=e^{4 t}\left(\begin{array}{c}
1 \\
-2
\end{array}\right) t+e^{4 t}\left(\begin{array}{c}
0 \\
\frac{1}{4}
\end{array}\right). \tag{14.72}
$$

Note that the vector $\boldsymbol{v}_{2}=\left(\begin{array}{ll}0 & 1 / 4\end{array}\right)^{\top}$ is exactly what we would have got had we set $a_{2}=0$ in Eq. (14.67) [or Eq. (14.68)]. The solutions given by (14.66) and (14.72) form a fundamental set (it can easily be shown that their Wronskian determinant is never zero). The general solution we are looking for is then given by

$$
\begin{aligned}
\boldsymbol{x} & =c_{1} \boldsymbol{x}_{1}+c_{2} \boldsymbol{x}_{2} \\
& =c_{1}\left(\begin{array}{c}
1 \\
-2
\end{array}\right) e^{4 t}+c_{2}\left[\left(\begin{array}{c}
1 \\
-2
\end{array}\right) t+\left(\begin{array}{c}
0 \\
\frac{1}{4}
\end{array}\right)\right] e^{4 t}.
\end{aligned} \tag{14.73}
$$

If the eigenvalues $\lambda_{1}$ and $\lambda_{2}$ of the square matrix $A$ are real, repeated, and defective, then the linear system $\dot{\boldsymbol{x}}=A \boldsymbol{x}$ has only one eigenvalue, $\lambda_{1}=\lambda_{2}=\lambda$, and has the following general solution:

$$
\boldsymbol{x}=c_{1} \boldsymbol{v}_{1} e^{\lambda t}+c_{2}\left(\boldsymbol{v}_{1} t+\boldsymbol{v}_{2}\right) e^{\lambda t}, \tag{14.74}
$$

where:

  • $c_{1}$ and $c_{2}$ are constants;
  • $\boldsymbol{v}_{1}$ satisfies $(A-\lambda I) \boldsymbol{v}_{1}=\mathbf{0}$;
  • $\boldsymbol{v}_{2}$ satisfies $(A-\lambda I) \boldsymbol{v}_{2}=\boldsymbol{v}_{1}$.
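As a final numerical cross-check of the defective example above (a sketch; using `np.linalg.lstsq` to solve the singular but consistent system (14.64) is our own choice, and it returns just one of the valid vectors $\boldsymbol{v}_{2}$):

```python
import numpy as np

A = np.array([[12.0, 4.0], [-16.0, -4.0]])
lam = 4.0                        # the repeated (defective) eigenvalue
M = A - lam * np.eye(2)          # the singular matrix A - lambda*I

v1 = np.array([1.0, -2.0])       # eigenvector: M @ v1 = 0
assert np.allclose(M @ v1, 0.0)

# Solve (A - lambda*I) v2 = v1 for a generalized eigenvector v2.
# The system is singular but consistent; lstsq picks one solution.
v2, *_ = np.linalg.lstsq(M, v1, rcond=None)
assert np.allclose(M @ v2, v1)
print(v2)  # one valid choice; v2 + k*v1 also works for any k, cf. (14.70)
```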