Rules of vector algebra
Here we start by defining some rules of vector arithmetic using two vectors, $\mathbf{a}$ and $\mathbf{b}$. Note that we define $-\mathbf{a}$ to be the vector of the same magnitude as $\mathbf{a}$ but in the opposite direction.
Addition and subtraction
Given the vectors $\mathbf{a} = (a_1, a_2, a_3)$ and $\mathbf{b} = (b_1, b_2, b_3)$,
$$\mathbf{a} \pm \mathbf{b} = (a_1 \pm b_1,\; a_2 \pm b_2,\; a_3 \pm b_3).$$
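For instance, with the illustrative values $\mathbf{a} = (1, 2, 3)$ and $\mathbf{b} = (4, -1, 0)$,
$$\mathbf{a} + \mathbf{b} = (1+4,\; 2-1,\; 3+0) = (5, 1, 3), \qquad \mathbf{a} - \mathbf{b} = (1-4,\; 2+1,\; 3-0) = (-3, 3, 3).$$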
The diagram below shows the geometric interpretation of the addition of two vectors; it is known as the parallelogram law and it shows that vector addition is commutative, i.e.
$$\mathbf{a} + \mathbf{b} = \mathbf{b} + \mathbf{a}.$$
The geometric interpretation of the subtraction of two vectors, $\mathbf{a} - \mathbf{b}$, is given below. The subtraction can be thought of as the addition of the vectors $\mathbf{a}$ and $-\mathbf{b}$, since $-\mathbf{b}$ has the same magnitude as $\mathbf{b}$ but points in the opposite direction.
Multiplication by a scalar
For a vector $\mathbf{a}$ and a nonzero scalar $\lambda$, the vector $\lambda\mathbf{a}$ is the vector of magnitude $|\lambda||\mathbf{a}|$ in the direction of $\mathbf{a}$ if $\lambda > 0$, or in the opposite direction if $\lambda < 0$. For $\mathbf{a} = (a_1, a_2, a_3)$, the scalar multiplication gives
$$\lambda\mathbf{a} = (\lambda a_1,\; \lambda a_2,\; \lambda a_3).$$
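As a quick check with illustrative values, let $\lambda = -2$ and $\mathbf{a} = (1, 2, 3)$. Then
$$\lambda\mathbf{a} = (-2, -4, -6), \qquad |\lambda\mathbf{a}| = \sqrt{4 + 16 + 36} = 2\sqrt{14} = |\lambda||\mathbf{a}|,$$
and $\lambda\mathbf{a}$ points in the direction opposite to $\mathbf{a}$, as expected since $\lambda < 0$.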
Using the two properties above, we can show what was stated in Definition 9.4: any 3D vector is a linear combination of the standard basis vectors in 3D. Starting from the vector $\mathbf{a} = (a_1, a_2, a_3)$, we can use the addition rule to rewrite $\mathbf{a}$ as
$$\mathbf{a} = (a_1, 0, 0) + (0, a_2, 0) + (0, 0, a_3),$$
and the scalar multiplication rule to express $\mathbf{a}$ as
$$\mathbf{a} = a_1(1, 0, 0) + a_2(0, 1, 0) + a_3(0, 0, 1) = a_1\mathbf{i} + a_2\mathbf{j} + a_3\mathbf{k}.$$
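For example, the vector $(2, -1, 3)$ decomposes as
$$(2, -1, 3) = 2(1, 0, 0) + (-1)(0, 1, 0) + 3(0, 0, 1) = 2\mathbf{i} - \mathbf{j} + 3\mathbf{k}.$$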
Figure 9.2: Three-dimensional Cartesian coordinate system with axes $x$, $y$ and $z$.
This shows that any vector can be written in terms of the standard basis vectors. Consider the Cartesian coordinates shown in Fig. 9.2. The unit vectors $\mathbf{i}$, $\mathbf{j}$ and $\mathbf{k}$ are the standard basis vectors in Eq. (9.1). The vectors $\mathbf{i}$, $\mathbf{j}$ and $\mathbf{k}$ are parallel to the $x$-, $y$- and $z$-axes, respectively.
The vector that starts at the origin $O$ and ends at a point $P$ is called a position vector. The vector which has all its components equal to zero is called the zero vector, sometimes denoted by $\mathbf{0}$, $(0, 0, 0)$, etc.
The position vector $\mathbf{r}$ with general coordinates $(x, y, z)$ is written as
$$\mathbf{r} = x\mathbf{i} + y\mathbf{j} + z\mathbf{k},$$
where $x$, $y$ and $z$ are the components of $\mathbf{r}$. The magnitude of $\mathbf{r}$ is given by
$$|\mathbf{r}| = \sqrt{x^2 + y^2 + z^2}.$$
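For instance, the point with coordinates $(3, 4, 12)$ has position vector $\mathbf{r} = 3\mathbf{i} + 4\mathbf{j} + 12\mathbf{k}$, with magnitude
$$|\mathbf{r}| = \sqrt{3^2 + 4^2 + 12^2} = \sqrt{169} = 13.$$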
Example 9.1 Prove that the line joining a vertex of a parallelogram to the midpoint of the opposite side divides the diagonal in the ratio 1:2.
Solution Figure 9.3 shows a diagram of the parallelogram. Take one vertex $O$ as the origin and write $\vec{OA} = \mathbf{a}$ and $\vec{OB} = \mathbf{b}$ for the two sides meeting at $O$, so that the fourth vertex $C$ satisfies $\vec{OC} = \mathbf{a} + \mathbf{b}$. Let $M$ be the midpoint of the side $OA$, and let $P$ be the point where the line $BM$ meets the diagonal $OC$. We have
$$\vec{OP} = \lambda\,\vec{OC}$$
for some scalar $\lambda$, and we want to show that $\lambda = \tfrac{1}{3}$, i.e. that $P$ divides the diagonal $OC$ in the ratio $1:2$.
We define
$$\vec{BP} = \mu\,\vec{BM}$$
for some scalar $\mu$. Next, we consider
$$\vec{OP} = \vec{OB} + \vec{BP}, \tag{9.2}$$
and we choose to express Eq. (9.2) in terms of a basis; we choose the position vectors $\mathbf{a}$ and $\mathbf{b}$ as the basis. By definition, we have
$$\vec{OP} = \lambda\,\vec{OC} = \lambda(\mathbf{a} + \mathbf{b}); \tag{9.3}$$
it follows that in Eq. (9.2), we are left to express $\vec{OB}$ and $\vec{BP}$ in terms of $\mathbf{a}$ and $\mathbf{b}$. We start with $\vec{OB}$:
$$\vec{OB} = \mathbf{b}. \tag{9.4}$$
For $\vec{BP}$:
$$\vec{BP} = \mu\,\vec{BM} = \mu\left(\vec{BO} + \vec{OM}\right) = \mu\left(\tfrac{1}{2}\mathbf{a} - \mathbf{b}\right). \tag{9.5}$$
Substituting Eqs. (9.3)-(9.5) in (9.2) and rearranging, yields
$$\lambda\,\mathbf{a} + \lambda\,\mathbf{b} = \tfrac{\mu}{2}\,\mathbf{a} + (1 - \mu)\,\mathbf{b}.$$
Comparing coefficients of $\mathbf{a}$ and $\mathbf{b}$ on both sides, we have $\lambda = \tfrac{\mu}{2}$ and $\lambda = 1 - \mu$, so $\mu = \tfrac{2}{3}$ and $\lambda = \tfrac{1}{3}$. Hence $\vec{OP} = \tfrac{1}{3}\vec{OC}$, and $P$ divides the diagonal in the ratio $1:2$, as required.
Figure 9.3: Diagram for question in Example 9.1.
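As a concrete check, with the labelling used above and coordinates chosen purely for illustration, take $O = (0, 0)$, $A = (1, 0)$, $B = (0, 1)$ and $C = (1, 1)$, so that $M = (\tfrac{1}{2}, 0)$. The line $BM$ can be parametrised as $(\tfrac{t}{2},\, 1 - t)$ and the diagonal $OC$ as $(s, s)$; equating the two gives
$$s = \tfrac{t}{2}, \qquad s = 1 - t \quad\Longrightarrow\quad t = \tfrac{2}{3},\; s = \tfrac{1}{3},$$
so the intersection point is $(\tfrac{1}{3}, \tfrac{1}{3})$, confirming that it divides $OC$ in the ratio $1:2$.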
Vector spaces
A vector is an element of a vector space. In broad terms, a vector space is a set of vectors together with rules for vector addition and scalar multiplication. These operations must produce vectors in the space and satisfy certain conditions, which are listed in Definition 9.5.
A vector space $V$ is a set closed under finite vector addition and scalar multiplication. The scalars may come from any field $F$, e.g. $F = \mathbb{R}$ or $F = \mathbb{C}$ for real or complex vector spaces. For all $\mathbf{u}, \mathbf{v}, \mathbf{w} \in V$ and $\lambda, \mu \in F$, we have the following requirements:
(i) Commutativity
$$\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}.$$
(ii) Associativity of vector addition
$$(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w}).$$
(iii) Additive identity
There exists the zero vector $\mathbf{0} \in V$ such that for all $\mathbf{u} \in V$,
$$\mathbf{0} + \mathbf{u} = \mathbf{u} + \mathbf{0} = \mathbf{u}.$$
(iv) Existence of additive inverse
For any $\mathbf{u} \in V$ there exists $-\mathbf{u} \in V$, such that
$$\mathbf{u} + (-\mathbf{u}) = \mathbf{0}.$$
(v) Associativity of scalar multiplication
$$\lambda(\mu\mathbf{u}) = (\lambda\mu)\mathbf{u}.$$
(vi) Distributivity of scalar sums
$$(\lambda + \mu)\mathbf{u} = \lambda\mathbf{u} + \mu\mathbf{u}.$$
(vii) Distributivity of vector sums
$$\lambda(\mathbf{u} + \mathbf{v}) = \lambda\mathbf{u} + \lambda\mathbf{v}.$$
(viii) Scalar multiplication identity
$$1\mathbf{u} = \mathbf{u}.$$
A fun exercise is to use the rules given in Definition 9.5 to prove that, for all $\mathbf{u} \in V$,
$$0\mathbf{u} = \mathbf{0}.$$
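As a sketch of one possible argument using only the axioms above: by (vi), $0\mathbf{u} = (0 + 0)\mathbf{u} = 0\mathbf{u} + 0\mathbf{u}$; adding the additive inverse $-(0\mathbf{u})$ guaranteed by (iv) to both sides and using (ii), (iv) and (iii) gives $\mathbf{0} = 0\mathbf{u}$.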