Taylor/Maclaurin series
A power series centred at a point $x=a$ is defined to be an expression of the form,
\[
\sum_{n=0}^{\infty} c_n (x-a)^n = c_0 + c_1(x-a) + c_2(x-a)^2 + \dots ,
\]
where the centre $a$ and the coefficients $c_n$ are constants. A power series centred at the origin (i.e. when $a=0$) can be thought of as an 'infinite polynomial' and has the form,
\[
\sum_{n=0}^{\infty} c_n x^n = c_0 + c_1 x + c_2 x^2 + \dots .
\]
In what follows, we discuss a general method for writing a power series representation of a function. We first assume that a function $f$ has a power series representation about $x=a$,
\[
f(x) = \sum_{n=0}^{\infty} c_n (x-a)^n = c_0 + c_1(x-a) + c_2(x-a)^2 + \dots ,
\]
where the first three terms are given by Eq. (6.1). The next assumption is that $f$ has derivatives of every order and that we can find them. The latter allows us to determine the unknown coefficients $c_n$. Evaluated at $x=a$, we have $f(a) = c_0$. Using differentiation, we can obtain formulae for the other coefficients. For instance, differentiating once with respect to $x$ yields
\[
f'(x) = c_1 + 2c_2(x-a) + 3c_3(x-a)^2 + \dots ,
\]
which, evaluated again at $x=a$, gives $f'(a) = c_1$. Continuing with this pattern, we can write down the following formula for the coefficients,
\[
c_n = \frac{f^{(n)}(a)}{n!},
\]
where $f^{(n)}$ represents the $n$th derivative. Note that the formula works for $n=0$ since $f^{(0)} = f$ and $0! = 1$, by definition. Provided a power series representation for the function $f$ about $x=a$ exists, then the Taylor series for $f$ about $x=a$ is given by,
\[
f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n . \tag{6.5}
\]
If we use $a=0$, we have a Taylor series about the origin, sometimes referred to as the Maclaurin series for $f$, given by the following expression,
\[
f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(0)}{n!}\,x^n .
\]
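To make the coefficient formula concrete, here is a minimal computational sketch (added for illustration, not part of the original text) that evaluates $c_n = f^{(n)}(a)/n!$ symbolically for the example choice $f(x)=e^x$ about $a=0$, assuming the Python library sympy is available:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x)          # example function (an assumption for illustration)
a = 0                  # centre of the expansion; a = 0 gives the Maclaurin series

# c_n = f^(n)(a) / n!  -- the coefficient formula derived above
coeffs = [sp.diff(f, x, n).subs(x, a) / sp.factorial(n) for n in range(6)]
print(coeffs)                 # [1, 1, 1/2, 1/6, 1/24, 1/120]
print(sp.series(f, x, a, 6))  # the built-in expansion agrees term by term
\end{verbatim}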
Remainder
At this point we have not yet discussed where (i.e. for what values of $x$) the Taylor series for $f$ converges. This is covered later in Section 6.2. What we are interested in here is another important question: in the case where the Taylor series for $f$ does converge, does it converge to $f(x)$? To answer this, we refer to the remainder, a quantity which represents the difference between $f(x)$ and the Taylor polynomial of degree $n$ for $f$ centred at $x=a$. Define the degree-$n$ Taylor polynomial of $f$ as,
\[
T_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x-a)^k . \tag{6.7}
\]
With the full Taylor series given by (6.5), the sum $T_n(x)$ is the $n$th partial sum of the series. The remainder is defined to be,
\[
R_n(x) = f(x) - T_n(x), \tag{6.8}
\]
which represents the error between the function and the degree-$n$ polynomial. Note that the remainder depends on $n$ (and on $x$). Rearranging (6.8) and using (6.7), we can write the function as,
\[
f(x) = T_n(x) + R_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x-a)^k + R_n(x). \tag{6.9}
\]
The sum in (6.7) can be used as an approximation to $f$ if, at $x=a$, $T_n$ and its first $n$ derivatives agree with those of $f$. This gives the $n+1$ conditions:
\[
T_n(a) = f(a), \quad T_n'(a) = f'(a), \quad \dots, \quad T_n^{(n)}(a) = f^{(n)}(a).
\]
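The following short numerical sketch (an added illustration, not from the original text) builds $T_n$ from a supplied list of derivative values at $a$ and prints the remainder $R_n(x) = f(x) - T_n(x)$, for the example choice $f = \exp$, $a = 0$, $x = 0.5$:
\begin{verbatim}
import math

def taylor_poly(derivs_at_a, a, x):
    """Evaluate T_n(x) from the list [f(a), f'(a), ..., f^(n)(a)]."""
    return sum(d / math.factorial(k) * (x - a) ** k
               for k, d in enumerate(derivs_at_a))

a, x = 0.0, 0.5
for n in range(1, 6):
    derivs = [math.exp(a)] * (n + 1)   # every derivative of exp equals exp itself
    Tn = taylor_poly(derivs, a, x)
    print(n, math.exp(x) - Tn)         # the remainder R_n(0.5) shrinks rapidly with n
\end{verbatim}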
We now state the following theorem.
Taylor's theorem
Let $f$ be an $(n+1)$-times differentiable function on an open interval containing the points $a$ and $x$. Then,
\[
f(x) = \sum_{k=0}^{n} \frac{f^{(k)}(a)}{k!}(x-a)^k + R_n(x),
\]
where
\[
R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!}\,(x-a)^{n+1} \tag{6.10}
\]
for some number $c$ between $a$ and $x$. Equation (6.10) gives the Lagrange form of the remainder (see also Subsec. 6.1.3 for more details on remainders). Note that there are several formulae for the remainder, for instance the integral form given by
\[
R_n(x) = \frac{1}{n!}\int_a^x (x-t)^n f^{(n+1)}(t)\,\mathrm{d}t .
\]
If we can show that $R_n(x) \to 0$ as $n \to \infty$, then we get a sequence of increasingly better approximations to $f$, leading to the Taylor series as given by Eq. (6.5). In general, the series will converge only for certain values of $x$, determined by the radius of convergence of the series (see Subsec. 6.2.3). Note that taking $a=0$ in Taylor's theorem gives a similar result for the Maclaurin series.
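As an added illustrative check (not part of the original text), the snippet below compares the actual error $|f(x)-T_n(x)|$ with the bound implied by the Lagrange form, $\max|f^{(n+1)}|\,|x-a|^{n+1}/(n+1)!$, for the example choice $f=\exp$, $a=0$, $x=1$:
\begin{verbatim}
import math

x = 1.0
for n in range(1, 8):
    Tn = sum(x ** k / math.factorial(k) for k in range(n + 1))
    actual = abs(math.exp(x) - Tn)                              # |R_n(x)|
    bound = math.exp(x) * x ** (n + 1) / math.factorial(n + 1)  # Lagrange bound
    print(n, actual, bound, actual <= bound)
\end{verbatim}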
Example 6.1 Compute the Maclaurin series for $f(x) = \sin x$ with $n=3$.
Solution The degree-3 Maclaurin series is given by the sum,
\[
T_3(x) = \sum_{k=0}^{3} \frac{f^{(k)}(0)}{k!}\,x^k . \tag{6.11}
\]
We then proceed to evaluate the function and its derivatives at $x=0$: $f(0)=0$, $f'(0)=1$, $f''(0)=0$ and $f'''(0)=-1$. Back in (6.11), we have
\[
T_3(x) = x - \frac{x^3}{3!} = x - \frac{x^3}{6}, \tag{6.12}
\]
which approximates $\sin x$ near $x=0$. Now, how good is this approximation? To show this, we need to determine that the remainder term, which gives the difference between $\sin x$ and its approximation, is small. Taking $n=3$ in Eq. (6.9) and using Eq. (6.12), we write
\[
\sin x = x - \frac{x^3}{6} + R_3(x), \tag{6.13}
\]
where, using Eq. (6.10),
\[
R_3(x) = \frac{f^{(4)}(c)}{4!}\,x^4
\]
for some $c$ between 0 and $x$. Since $f^{(4)}(c) = \sin c$, the remainder term is $R_3(x) = \frac{\sin c}{4!}\,x^4$. Further, $|\sin c| \le 1$, which gives $|R_3(x)| \le \frac{|x|^4}{4!}$. Finally, using Eq. (6.13),
\[
\sin x = x - \frac{x^3}{6} + R_3(x), \qquad |R_3(x)| \le \frac{|x|^4}{4!}.
\]
The term $|x|^4/4!$ decays rapidly in the vicinity of $x=0$, so the approximation is very accurate there.
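As a quick numerical check of Example 6.1 as reconstructed above (an added illustration, not part of the original text), the snippet below compares $\sin x$ with $x - x^3/6$ and with the bound $|x|^4/4!$ for a few small values of $x$:
\begin{verbatim}
import math

for x in (0.1, 0.3, 0.5):
    approx = x - x ** 3 / 6               # degree-3 Maclaurin polynomial of sin
    error = abs(math.sin(x) - approx)
    bound = x ** 4 / math.factorial(4)    # |x|^4 / 4!
    print(x, error, bound, error <= bound)
\end{verbatim}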
Example 6.2 Compute the Maclaurin series for $\sin x$, as in Example 6.1, but now for arbitrary $n$.
Solution The degree-$n$ Maclaurin series is given by the sum,
\[
T_n(x) = \sum_{k=0}^{n} \frac{f^{(k)}(0)}{k!}\,x^k .
\]
We evaluate the function and its derivatives at 0: we find that $f^{(k)}(0)$ is 0 if $k$ is even and alternates between 1 and $-1$ if $k$ is odd, as seen above for Example 6.1. For arbitrary $n$, we have
\[
\sin x = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \dots + R_n(x).
\]
The remainder term is given as
\[
R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!}\,x^{n+1}
\]
for some $c$ between 0 and $x$. Since the derivatives of $\sin x$ only take values between $-1$ and 1, we have that
\[
|R_n(x)| \le \frac{|x|^{n+1}}{(n+1)!}.
\]
We need to show that the Maclaurin series converges to $\sin x$ for all $x$, which is equivalent to showing that, for any fixed value of $x$, the remainders satisfy $R_n(x) \to 0$ as $n \to \infty$. Although $|x|^{n+1}$ grows fast as $n$ increases, for fixed $x$ the denominator $(n+1)!$ increases faster. See below for a rough sketch of the proof.
Limit of $x^n/n!$ as $n \to \infty$
We show that $x^n/n! \to 0$ as $n \to \infty$ for a fixed value of $x$ by making use of the Sandwich theorem. This brief section aims to add to the intuition in Example 6.2 that $R_n(x) \to 0$ as $n \to \infty$ for fixed $x$. We assume the following inequality holds,
\[
b_n \le a_n \le c_n .
\]
The Sandwich theorem states that if
\[
\lim_{n\to\infty} b_n = \lim_{n\to\infty} c_n = L
\]
and there exists an integer $N$ such that the inequality above holds for all $n > N$, then
\[
\lim_{n\to\infty} a_n = L .
\]
Here, $a_n$, $b_n$ and $c_n$ are assumed to be sequences. To simplify the discussion, we fix the value of $x$ to 3 and let $a_n = 3^n/n!$, where clearly $a_n > 0$. We have
\[
a_n = \frac{3^n}{n!} = \frac{3 \cdot 3 \cdot 3 \cdots 3}{1 \cdot 2 \cdot 3 \cdots n},
\]
which can be expressed as individual fractions,
\[
a_n = \frac{3}{1}\cdot\frac{3}{2}\cdot\frac{3}{3}\cdot\frac{3}{4}\cdots\frac{3}{n}.
\]
We can already see that the product approaches 0 as $n$ increases, since we are multiplying by smaller and smaller fractions. Now, apart from the fractions $\frac{3}{1}$, $\frac{3}{2}$ and $\frac{3}{3}$, the rest of the fractions in the product above are at most equal to $\frac{3}{4}$, which means we can write
\[
0 \le a_n \le \frac{3}{1}\cdot\frac{3}{2}\cdot\frac{3}{3}\cdot\left(\frac{3}{4}\right)^{n-3} = \frac{9}{2}\left(\frac{3}{4}\right)^{n-3}.
\]
Finally, let $b_n = 0$ and $c_n = \frac{9}{2}\left(\frac{3}{4}\right)^{n-3}$; the limits of both $b_n$ and $c_n$ are 0 as $n\to\infty$ and, since $b_n \le a_n \le c_n$, it follows, by the Sandwich theorem, that $a_n = 3^n/n! \to 0$ as $n \to \infty$.
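A short numerical illustration (added here, not in the original text) of the squeeze: $a_n = 3^n/n!$ indeed stays between $b_n = 0$ and $c_n = \frac{9}{2}(3/4)^{n-3}$, and both $a_n$ and $c_n$ shrink rapidly:
\begin{verbatim}
import math

for n in (5, 10, 20, 40):
    a_n = 3 ** n / math.factorial(n)
    c_n = 4.5 * (3 / 4) ** (n - 3)    # the upper sequence (9/2)(3/4)^(n-3)
    print(n, a_n, c_n, 0 <= a_n <= c_n)
\end{verbatim}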
Common power series
Using the series definitions given earlier, we can write down the Maclaurin series for $\sin x$ and $\cos x$, valid for any real $x$:
\[
\sin x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!}\,x^{2n+1}, \qquad
\cos x = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!}\,x^{2n}.
\]
As a last example, consider the series of the binomial function given by $f(x) = (1+x)^p$,
\[
(1+x)^p = 1 + px + \frac{p(p-1)}{2!}x^2 + \frac{p(p-1)(p-2)}{3!}x^3 + \dots .
\]
The series is valid in $|x| < 1$. If $p$ is not a positive integer, then the above series has infinitely many nonzero terms. If $p$ is a positive integer, the series terminates; the binomial function is a polynomial for positive integer $p$, with only the first $p+1$ terms nonzero. For instance, with $p=2$,
\[
f(x) = (1+x)^2 = 1 + 2x + x^2,
\]
and its Maclaurin series, say $g(x)$, is given by
\[
g(x) = f(0) + f'(0)\,x + \frac{f''(0)}{2!}\,x^2 = 1 + 2x + x^2,
\]
i.e. $g(x) = f(x)$, since all derivatives of third order and higher vanish.
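The following brief sympy sketch (added for illustration; not part of the original text) shows the contrast: for the integer exponent $p=2$ the binomial expansion terminates, while for the non-integer exponent $p=1/2$ it does not:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
print(sp.series((1 + x) ** 2, x, 0, 6))                  # terminates: a polynomial
print(sp.series((1 + x) ** sp.Rational(1, 2), x, 0, 4))  # infinite series, truncated here
\end{verbatim}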
Taylor's formula with integral remainder
Here, we derive Taylor's formula with integral remainder by reconstructing a function through repeated integration (note that notation, definitions, and techniques pertaining to integration are covered in Chapter 8). Through the derivation in this subsection we arrive at the definitions of the remainder discussed in Subsec. 6.1.1. Starting with the identity,
\[
\int_a^x f'(t)\,\mathrm{d}t = f(x) - f(a),
\]
we have
\[
f(x) = f(a) + \int_a^x f'(t)\,\mathrm{d}t . \tag{6.22}
\]
Since $f$ is arbitrary, (6.22) should also hold with $f$ replaced by the function $f'$, such that
\[
f'(t) = f'(a) + \int_a^t f''(s)\,\mathrm{d}s . \tag{6.23}
\]
Using the above expression for $f'$ in the integral in (6.22), we obtain,
\[
f(x) = f(a) + f'(a)(x-a) + \int_a^x\!\!\int_a^t f''(s)\,\mathrm{d}s\,\mathrm{d}t . \tag{6.24}
\]
Next, just as we obtained (6.23) from (6.22) by replacing $f$ with $f'$, we can replace $f'$ in (6.23) by $f''$ so that
\[
f''(s) = f''(a) + \int_a^s f'''(u)\,\mathrm{d}u .
\]
Using this result in (6.24) yields,
\[
f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \int_a^x\!\!\int_a^t\!\!\int_a^s f'''(u)\,\mathrm{d}u\,\mathrm{d}s\,\mathrm{d}t .
\]
Repeating this process, under the assumption that $f$ is sufficiently differentiable of course, we find
\[
f(x) = f(a) + f'(a)(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \dots + \frac{f^{(n)}(a)}{n!}(x-a)^n + R_n(x), \tag{6.27}
\]
where
\[
R_n(x) = \int_a^x\!\!\int_a^{t_1}\!\!\cdots\int_a^{t_n} f^{(n+1)}(t_{n+1})\,\mathrm{d}t_{n+1}\cdots\mathrm{d}t_1 , \tag{6.28}
\]
and $\int_a^x\!\int_a^{t_1}\!\cdots\int_a^{t_n}$ means integrating $n+1$ times. Equation (6.27) is known as Taylor's formula with remainder, where the remainder is expressed in integral form by (6.28).
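As an added sanity check (not part of the original text), the iterated-integral remainder can be verified symbolically in a simple case; the sketch below, assuming sympy is available, confirms that $R_1(x) = \int_0^x\!\int_0^t f''(s)\,\mathrm{d}s\,\mathrm{d}t$ equals $f(x) - T_1(x)$ for the example choice $f = \exp$ and $a = 0$:
\begin{verbatim}
import sympy as sp

x, t, s = sp.symbols('x t s')
f = sp.exp
# R_1(x) = int_0^x int_0^t f''(s) ds dt  (iterated-integral remainder for n = 1)
R1 = sp.integrate(sp.integrate(sp.diff(f(s), s, 2), (s, 0, t)), (t, 0, x))
T1 = 1 + x                             # degree-1 Taylor polynomial of exp about a = 0
print(sp.simplify(R1 - (f(x) - T1)))   # prints 0: the two expressions agree
\end{verbatim}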
Suppose that $m \le f^{(n+1)}(t) \le M$ over $a \le t \le x$, where $m$ and $M$ are constants. Then
\[
\int_a^x\!\!\cdots\!\int_a^{t_n} m\,\mathrm{d}t_{n+1}\cdots\mathrm{d}t_1 \;\le\; R_n(x) \;\le\; \int_a^x\!\!\cdots\!\int_a^{t_n} M\,\mathrm{d}t_{n+1}\cdots\mathrm{d}t_1 .
\]
Integrating, we have
\[
\frac{m\,(x-a)^{n+1}}{(n+1)!} \;\le\; R_n(x) \;\le\; \frac{M\,(x-a)^{n+1}}{(n+1)!}. \tag{6.29}
\]
It follows from (6.29) that we must be able to express $R_n(x)$ as
\[
R_n(x) = \frac{f^{(n+1)}(c)}{(n+1)!}\,(x-a)^{n+1}, \tag{6.30}
\]
where $c$ is some suitable point in $[a, x]$; this is the Lagrange form of $R_n(x)$ which we saw in Eq. (6.10).
In fact, this result for the remainder can also be obtained with the mean value theorem of the integral calculus:
if $m \le g(t) \le M$ for $a \le t \le b$, then
\[
m\,(b-a) \;\le\; \int_a^b g(t)\,\mathrm{d}t \;\le\; M\,(b-a).
\]
If $m$ and $M$ are the minimum and maximum values of $g$ over $[a, b]$ and $g$ is continuous, then there must be some point in $[a, b]$, say $t=c$, such that
\[
\int_a^b g(t)\,\mathrm{d}t = g(c)\,(b-a),
\]
and this is known as the mean value theorem of the integral calculus. Using this theorem in (6.28), where the integrand is now the derivative $f^{(n+1)}$, yields (6.30).
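To close with an added numerical illustration (not part of the original text): for the example choice $f = \exp$, $a = 0$, $x = 1$ and $n = 2$, one can solve $R_2(1) = e^{c}/3!$ for $c$ and confirm that the resulting $c$ does lie between $a$ and $x$, as the Lagrange form requires:
\begin{verbatim}
import math

x, n = 1.0, 2
T_n = sum(x ** k / math.factorial(k) for k in range(n + 1))
R_n = math.exp(x) - T_n                                   # actual remainder R_2(1)
c = math.log(R_n * math.factorial(n + 1) / x ** (n + 1))  # solve exp(c) x^(n+1)/(n+1)! = R_n
print(R_n, c, 0 < c < x)                                  # c is roughly 0.27, inside (0, 1)
\end{verbatim}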