

















Find a vector perpendicular to $(1, 2, -3)$ and $(2, -1, 3)$, and another vector perpendicular to $(-1, 3, 2)$ and $(2, 1, 1)$. Find a parametric representation for the line of intersection of the planes of Exercises 10 and ... Find the point of intersection of the line through P in the direction of N and the plane through Q perpendicular to N. Find the distance between the indicated point and plane.

You have learned to solve such equations by the successive elimination of the variables. In this chapter, we shall review the theory of such equations, dealing with equations in n variables and interpreting our results from the point of view of vectors. Several geometric interpretations for the solutions of the equations will be given. Chapter I is used very little here, and can be omitted entirely provided you know the definition of the dot product between two n-tuples.

The multiplication of matrices will be formulated in terms of such a product. One geometric interpretation for the solutions of homogeneous equations will, however, rely on the fact that the dot product between two vectors is 0 if and only if the vectors are perpendicular, so if you are interested in this interpretation, you should refer to the section in Chapter I where this is explained.

Matrices. We consider a new kind of object, matrices. An array of numbers

$$\begin{pmatrix}
a_{11} & a_{12} & a_{13} & \cdots & a_{1n} \\
a_{21} & a_{22} & a_{23} & \cdots & a_{2n} \\
\vdots & \vdots & \vdots & & \vdots \\
a_{m1} & a_{m2} & a_{m3} & \cdots & a_{mn}
\end{pmatrix}$$

is called a matrix. The matrix has m rows and n columns.

For instance, the first column is the vertical m-tuple $(a_{11}, a_{21}, \ldots, a_{m1})$, and the second row is $(a_{21}, a_{22}, \ldots, a_{2n})$. We call $a_{ij}$ the ij-entry or ij-component of the matrix. The following is a $2 \times 3$ matrix: its rows are $(1, 1, -2)$ and $(-1, 4, \ldots)$, and its columns are the three vertical 2-tuples formed from the corresponding entries. Thus the rows of a matrix may be viewed as n-tuples, and the columns may be viewed as vertical m-tuples.

A vertical m-tuple is also called a column vector; a column vector with n components is an $n \times 1$ matrix. When we write a matrix in the form $(a_{ij})$, then i denotes the row and j denotes the column. A single number a may be viewed as a $1 \times 1$ matrix.
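As a brief illustrative sketch (Python with numpy, which is of course not part of the text; the particular matrix is an arbitrary choice), here is how the rows, columns, and ij-entries of a matrix can be inspected:

```python
import numpy as np

# An arbitrary 2 x 3 matrix: m = 2 rows, n = 3 columns.
A = np.array([[1, 1, -2],
              [-1, 4, 0]])

print(A.shape)    # (2, 3): m rows, n columns
print(A[1])       # the second row, a horizontal 3-tuple (indices start at 0)
print(A[:, 0])    # the first column, a vertical 2-tuple (column vector)
print(A[0, 2])    # the (1,3)-entry, here -2
```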

We note that we have met so far three zero objects: the number zero, the zero vector, and the zero matrix. We shall now define addition of matrices and multiplication of matrices by numbers.

We define addition of matrices only when they have the same size. If $A = (a_{ij})$ and $B = (b_{ij})$ are both $m \times n$ matrices, then $A + B$ is the $m \times n$ matrix whose ij-component is $a_{ij} + b_{ij}$. In other words, we add matrices of the same size componentwise. Addition of matrices is then commutative and associative, because the corresponding properties hold for the addition of numbers; this is trivially verified.

We shall now define the multiplication of a matrix by a number. Let c be a number, and let $A = (a_{ij})$ be a matrix. We define cA to be the matrix whose ij-component is $ca_{ij}$. Thus we multiply each component of A by c. The matrix $(-1)A$ is written $-A$ and is also called the additive inverse of A, since $A + (-A)$ is the zero matrix. We define one more notion related to a matrix, the transpose. If $A = (a_{ij})$ is an $m \times n$ matrix, its transpose ${}^{t}A$ is the $n \times m$ matrix whose ji-component is $a_{ij}$; taking the transpose of a matrix amounts to changing rows into columns and vice versa. A matrix which is equal to its own transpose is called symmetric; such a matrix is necessarily a square matrix. Remark on notation.
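As a small sketch (again Python with numpy, an illustration rather than anything from the text), componentwise addition, multiplication by a number, the additive inverse, and the transpose look as follows:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])

print(A + B)      # componentwise sum: both matrices must have the same size
print(3 * A)      # cA: every component multiplied by 3
print(-A)         # additive inverse: A + (-A) is the zero matrix
print(A.T)        # transpose: rows and columns interchanged
```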

I have written the transpose sign on the left, because in many situations one considers the inverse of a matrix, written $A^{-1}$, and then it is easier to write ${}^{t}A^{-1}$ rather than $({}^{t}A)^{-1}$ or ${}^{t}(A^{-1})$, which are in fact equal. The mathematical community has no consensus as to whether the transpose sign should be placed on the right or on the left.

Exercises. How do the diagonal elements of A and ${}^{t}A$ compare? Show that for any square matrix A, the matrix $A - {}^{t}A$ is skew-symmetric.

Multiplication of Matrices. We shall now define the product of matrices. Let $A = (a_{ij})$ be an $m \times n$ matrix and let $B = (b_{jk})$ be an $n \times s$ matrix. Their product AB is the $m \times s$ matrix whose ik-component is the dot product of the i-th row of A with the k-th column of B, that is, $\sum_{j=1}^{n} a_{ij}b_{jk}$.

Linear equations. Matrices give a convenient way of writing linear equations. You should already have considered systems of linear equations; such a system can be written as a single matrix equation $AX = B$, where A is the matrix of coefficients, X is the column vector of the unknowns, and B is the column vector of the constant terms. We shall see later how to solve such systems. We say that there are m equations and n unknowns, or n variables.
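To make the row-times-column rule concrete, here is a minimal Python sketch of the product; the explicit loops mirror the definition (the ik-component is the dot product of the i-th row of A with the k-th column of B), and the example matrices are made up:

```python
import numpy as np

def matmul(A, B):
    """Product of an m x n matrix A with an n x s matrix B."""
    m, n = A.shape
    n2, s = B.shape
    assert n == n2, "sizes must match"
    C = np.zeros((m, s))
    for i in range(m):
        for k in range(s):
            # ik-component: dot product of row i of A with column k of B
            C[i, k] = sum(A[i, j] * B[j, k] for j in range(n))
    return C

A = np.array([[1., 1., -2.], [-1., 4., 0.]])
B = np.array([[1., 0.], [0., 1.], [2., -1.]])
print(matmul(A, B))   # agrees with the built-in product A @ B
```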

Markov matrices. A matrix can often be used to represent a practical situation. Suppose we consider three cities, say Los Angeles (LA), Chicago (Ch), and Boston (Bo); in any given year, some people leave each one of these cities to go to one of the others.

The percentages of people leaving and going are given for each year; for instance, a certain fraction of Ch goes to LA and another fraction of Ch goes to Bo. The square matrix whose entries record these fractions is called a Markov matrix. If A is a square matrix, then we can form the product AA, which will be a square matrix of the same size as A. It is denoted by $A^2$. Similarly, we can form $A^3$, $A^4$, and in general $A^n$ for any positive integer n.

Thus $A^n$ is the product of A with itself n times. We define the unit $n \times n$ matrix, denoted by I, to be the matrix having diagonal components all equal to 1 and all other components equal to 0.
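Combining the two notions just introduced, one can watch the powers $A^n$ of a Markov matrix stabilize. In the sketch below (Python with numpy; the cities' migration fractions are invented for the illustration, not taken from the text), column j lists the fractions of city j's population going to each city, so every column sums to 1:

```python
import numpy as np

# Hypothetical yearly migration fractions between LA, Chicago, Boston.
A = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.7, 0.2],
              [0.1, 0.2, 0.7]])

print(A.sum(axis=0))                    # each column sums to 1
print(np.linalg.matrix_power(A, 2))     # A^2: distribution after two years
print(np.linalg.matrix_power(A, 50))    # high powers approach a limit
```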

If you compute AB and BA for typical square matrices A and B, you will find two different values. This is expressed by saying that multiplication of matrices is not necessarily commutative. Certain special matrices do commute; for instance, powers of A commute with each other, i.e. $A^{r}A^{s} = A^{s}A^{r} = A^{r+s}$.
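For instance (an illustrative pair of matrices, easily checked by the row-times-column rule):

$$A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}, \qquad B = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}, \qquad AB = \begin{pmatrix} 2 & 1 \\ 1 & 1 \end{pmatrix} \neq \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} = BA.$$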

We now prove other basic properties of multiplication. Distributive law. Let A, B, C be matrices such that the products below are defined, and let x be a number. Then $A(B + C) = AB + AC$ and $A(xB) = x(AB)$. Since the dot product of a row of A with a column is distributive over the addition of columns, and the k-th column of B + C is $B^{k} + C^{k}$, our first assertion follows. As for the second, observe that the k-th column of xB is $xB^{k}$; since the dot product of a row of A with $xB^{k}$ is x times the dot product with $B^{k}$, our second assertion follows. Associative law. Let A, B, C be matrices such that A, B can be multiplied and B, C can be multiplied. Then A, BC can be multiplied, and so can AB, C, and $(AB)C = A(BC)$. The il-component of (AB)C is the double sum $\sum_{k}\sum_{j} a_{ij}b_{jk}c_{kl}$; if we had started with the jl-component of BC and then computed the il-component of A(BC), we would have found exactly the same sum, thereby proving the desired property. The above properties are very similar to those of multiplication of numbers, except that the commutative law does not hold.

We can also relate multiplication with the transpose: if A, B are matrices that can be multiplied, then ${}^{t}B$, ${}^{t}A$ can be multiplied and ${}^{t}(AB) = {}^{t}B\,{}^{t}A$. It is occasionally convenient to rewrite a system of equations in this fashion. Unlike division by non-zero numbers, we cannot divide by a matrix, any more than we could divide by a vector n-tuple.

Instead of division we use the notion of an inverse, and we do this only for square matrices. Let A be an $n \times n$ matrix. An inverse for A is a matrix B such that $AB = BA = I$. Since we multiply A with B on both sides, the only way this can make sense is if B is also an $n \times n$ matrix. Some matrices do not have inverses. However, if an inverse exists, then there is only one: we say that the inverse is unique, or uniquely determined by A. This is easy to prove: if B and C are both inverses of A, then $B = BI = B(AC) = (BA)C = IC = C$. It is in fact true that if B is a right inverse for A, i.e. $AB = I$, then it is also a left inverse.

You may assume this for the time being. Thus in verifying that a matrix is the inverse of another, you need only do so on one side. We shall also find later a way of computing the inverse when it exists; it can be a tedious matter. Example. Let c be a number. Then the matrix

$$\begin{pmatrix}
c & 0 & \cdots & 0 \\
0 & c & \cdots & 0 \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & c
\end{pmatrix}$$

has diagonal components equal to c and all other components equal to 0. We can also write it as cI, where I is the unit $n \times n$ matrix; if $c \neq 0$, its inverse is $c^{-1}I$.

Exercise 6 asks you to show that if A is invertible, then so is ${}^{t}A$. Indeed, by the rule for the transpose of a product, we get ${}^{t}(A^{-1})\,{}^{t}A = {}^{t}(AA^{-1}) = {}^{t}I = I$, because I is equal to its own transpose. In light of this result, it is customary to omit the parentheses, and to write ${}^{t}A^{-1}$ for the inverse of the transpose, which we have seen is equal to the transpose of the inverse. We end this section with an important example of multiplication of matrices: a special type of $2 \times 2$ matrix represents rotations. For each number $\theta$, let $R(\theta)$ be the matrix

$$R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}.$$

Thus rotation by an angle $\theta$ can be represented by the matrix $R(\theta)$.
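A short sketch of the rotation matrices (Python with numpy; illustrative, not from the text): build $R(\theta)$, rotate a column vector, and check numerically that $R(\theta_1)R(\theta_2) = R(\theta_1 + \theta_2)$, which reflects the addition formulas for sine and cosine.

```python
import numpy as np

def R(theta):
    """The 2 x 2 rotation matrix for the angle theta."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

X = np.array([1.0, 0.0])            # the unit vector E1
print(R(np.pi / 2) @ X)             # rotated by 90 degrees: (0, 1)

t1, t2 = 0.3, 1.1
print(np.allclose(R(t1) @ R(t2), R(t1 + t2)))   # True
```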

Note how we multiply the column vector on the left with the matrix $R(\theta)$. For rotation by the angle $-\theta$, the minus sign is now in the lower left-hand corner. The exercises below give practice in the multiplication of matrices; however, they also illustrate some more theoretical aspects of this multiplication. Therefore they should all be worked out.

Specifically: Exercises 7 through 12 illustrate multiplication by the standard unit vectors. Exercises 14 through 19 illustrate multiplication of triangular matrices. Exercises 24 through 27 illustrate how addition of numbers is transformed into multiplication of matrices. Exercises 27 through 32 illustrate rotations. Let I be the unit $n \times n$ matrix, and let A be an $n \times r$ matrix. What is IA? If A is an $m \times n$ matrix, what is AI? Let O be the matrix all of whose components are 0.

Let A be a matrix of a size such that the product AO is defined. What is AO? Let A, B be as in Exercise 5. State the general rule including this exercise as a special case. How would you describe XA? Generalize to similar statements concerning $n \times n$ matrices and their products with unit vectors.

Find AX for each of the following values of X. Let X be a column vector having all its components equal to 0 except the j-th component, which is equal to 1, and let A be an arbitrary matrix whose size is such that we can form the product AX. What is AX? Let X be the indicated column vector, and A the indicated matrix. Find AX as a column vector. Describe in words the effect on A of this product.
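The answer to the exercise above can be checked numerically. The sketch below (illustrative matrices, Python with numpy) shows what AX turns out to be when X has 1 in the j-th place and 0 elsewhere:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])
X = np.array([0, 1, 0])     # 1 in the second place, 0 elsewhere

print(A @ X)                # (2, 5): the second column of A
```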

Let A be the indicated matrix. Find the product SA for each one of the following matrices S. Describe in words the effect of this product on A.

Generalize to $4 \times 4$ matrices. Let A be a square matrix satisfying the indicated condition; show that A is invertible. Let A, B be two square matrices of the same size, and suppose that the indicated relation holds between them. Prove: (a) B is similar to A. A square matrix whose components below the diagonal are all 0 is called upper triangular. If A, B are upper triangular matrices of the same size, what can you say about the diagonal elements of AB?

Exercises 24 through 27 give examples where addition of numbers is transformed into multiplication of matrices. What is $A^n$, where n is a positive integer? Show that the matrix A in Exercise 24 has an inverse. What is this inverse? (a) Show that for any two numbers $\theta_1$, $\theta_2$ we have $R(\theta_1)R(\theta_2) = R(\theta_1 + \theta_2)$. [You will have to use the addition formulas for sine and cosine.] (b) Use induction to extend this formula to a product of n rotation matrices. Find the matrix $R(\theta)$ associated with the rotation for each of the following values of $\theta$.

What is the matrix associated with the rotation by an angle $-\theta$, i.e. the matrix $R(-\theta)$? Elementary matrices. In each case find UA. Let E be the matrix as shown, and find EA, where A is the same matrix as in the preceding exercise. Find EA, where A is the same matrix as in the preceding exercise and E is as in the exercise before it. Let $I_{rs}$ be the matrix whose rs-component is 1 and such that all other components are equal to 0.

Let $I_{jj}$ be the matrix whose jj-component is 1 and such that all other components are 0. What is EA? The rest of the chapter will be mostly concerned with linear equations, and especially homogeneous ones. We shall find three ways of interpreting such equations, illustrating three different ways of thinking about matrices and vectors.

Homogeneous Linear Equations and Elimination. In this section we look at linear equations by one method, elimination. In the next section, we shall discuss another method. We shall be interested in the case when the number of unknowns is greater than the number of equations, and we shall see that in that case there always exists a non-trivial solution. Before dealing with the general case, we shall study examples. Example. We wish to find a solution with not all of x, y, z equal to 0.

We then solve for x. We reduce the problem of solving these simultaneous equations to the preceding case of one equation by eliminating one variable. Then we meet one equation in more than one variable. The values which we have obtained for x, y, z are also solutions of the first equation, because the first equation is, in an obvious sense, the sum of equation (2) multiplied by 2 and equation (3).

Again we use the elimination method. Multiply the second equation by 2 and subtract it from the third. We eliminate y from these two equations as follows: Multiply the top one by 5, multiply the bottom one by 4, and subtract them.

Note that we had three equations in four unknowns. By a successive elimination of variables, we reduced these equations to two equations in three unknowns, and then one equation in two unknowns. Using precisely the same method, suppose that we start with three equations in five unknowns. Eliminating one variable will yield two equations in four unknowns. Eliminating another variable will yield one equation in three unknowns. We can then solve this equation, and proceed backwards to get values for the previous variables, just as we have shown in the examples.

We eliminate one of the variables, say $x_1$, and obtain a system of m − 1 equations in n − 1 unknowns. We eliminate a second variable, say $x_2$, and obtain a system of m − 2 equations in n − 2 unknowns.

We then give arbitrary values, not all zero, to all the remaining variables but one, solve for this last variable, and then proceed backwards to solve successively for each one of the eliminated variables, as we did in our examples. Thus we have an effective way of finding a non-trivial solution for the original system.
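The elimination procedure just described can be sketched in code. The following Python function is an illustrative implementation under the stated hypothesis n > m (more unknowns than equations), not the book's own algorithm; it row-reduces the coefficient matrix, gives one free variable the value 1, and back-substitutes for the pivot variables. The example system is made up.

```python
import numpy as np

def nontrivial_solution(A, tol=1e-12):
    """One non-trivial solution X of the homogeneous system AX = 0,
    assuming more unknowns than equations (n > m)."""
    A = A.astype(float).copy()
    m, n = A.shape
    pivots = []                              # (row, column) of each pivot
    r = 0
    for c in range(n):
        rows = [i for i in range(r, m) if abs(A[i, c]) > tol]
        if not rows:
            continue                         # no pivot in this column
        A[[r, rows[0]]] = A[[rows[0], r]]    # interchange rows
        A[r] /= A[r, c]                      # scale the pivot to 1
        for i in range(m):
            if i != r:
                A[i] -= A[i, c] * A[r]       # eliminate column c elsewhere
        pivots.append((r, c))
        r += 1
    free = [c for c in range(n) if c not in [p[1] for p in pivots]]
    X = np.zeros(n)
    X[free[0]] = 1.0                         # arbitrary value for a free variable
    for i, c in pivots:
        X[c] = -(A[i] @ X)                   # row i reads x_c + (free terms) = 0
    return X

A = np.array([[1., 2., -3., 1.],
              [2., -1., 3., 0.],
              [1., 1., 1., 1.]])
X = nontrivial_solution(A)
print(X, A @ X)                              # A @ X is (numerically) zero
```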

We shall phrase this in terms of induction in a precise manner. Consider a system of m linear equations in n unknowns:

$$\begin{aligned}
a_{11}x_1 + \cdots + a_{1n}x_n &= b_1\\
&\;\;\vdots\\
a_{m1}x_1 + \cdots + a_{mn}x_n &= b_m.
\end{aligned}$$

The number n is called the number of unknowns, and m is the number of equations. The system is called homogeneous if all the numbers $b_1, \ldots, b_m$ are equal to 0. In that case $x_1 = \cdots = x_n = 0$ is always a solution; this solution will be called the trivial solution. A solution $(x_1, \ldots, x_n)$ in which not all $x_i$ are 0 is called non-trivial. A solution of the homogeneous system has dot product 0 with each row of the coefficient matrix A; therefore the set of solutions of the system of homogeneous linear equations can be interpreted as the set of all n-tuples X which are perpendicular to the row vectors of the matrix A.

Theorem. Suppose that the system is homogeneous, and that n > m, i.e. there are more unknowns than equations. Then the system has a non-trivial solution. Proof. The proof will be carried out by induction. Consider first the case of one equation in n unknowns, n > 1: $a_1x_1 + \cdots + a_nx_n = 0$. If all coefficients are equal to 0, we can give any non-zero values to our variables to get a solution. If some coefficient is not 0, then after renumbering the variables and the coefficients, we may assume that it is $a_1$. Then we give $x_2, \ldots, x_n$ arbitrary values, not all 0, and solve for $x_1$. Let us now assume that our theorem is true for a system of m − 1 equations in more than m − 1 unknowns. If all coefficients $a_{ij}$ are equal to 0, we can give any non-zero value to our variables to get a solution.

If some coefficient is not equal to 0, then after renumbering the equations and the variables, we may assume that it is $a_{11}$. We subtract a suitable multiple of the first equation from each of the others to eliminate $x_1$, obtaining a homogeneous system of m − 1 equations in the n − 1 unknowns $x_2, \ldots, x_n$. According to our assumption, we can find a non-trivial solution $(x_2, \ldots, x_n)$ of this smaller system. We can then solve for $x_1$ in the first equation, namely

$$x_1 = -\frac{1}{a_{11}}\,(a_{12}x_2 + \cdots + a_{1n}x_n).$$

In that way, we find a solution of the original system. The argument we have just given allows us to proceed stepwise from one equation to two equations, then from two to three, and so forth.

This concludes the proof. Exercises. Let X be an n-tuple which is a solution of a homogeneous system of linear equations. If c is a number, show that cX is a solution. In Exercise 2, suppose that X is perpendicular to each one of the vectors $A_1, \ldots, A_m$. A vector of the form $c_1A_1 + \cdots + c_mA_m$ is called a linear combination of $A_1, \ldots, A_m$; show that X is perpendicular to every such vector. Find at least one non-trivial solution for each one of the following systems of equations.

Since there are many choices involved, we don't give answers. Show that the only solutions of the following systems of equations are trivial. Row operations. We can operate on the equations of a system in three ways: multiply one equation by a non-zero number; add one equation to another; interchange two equations. These operations are reflected in operations on the augmented matrix of coefficients, which are also called elementary row operations: multiply one row by a non-zero number; add one row to another; interchange two rows.

Suppose that a system of linear equations is changed by an elementary row operation. Then the new system has exactly the same solutions as the old one, because each operation can be undone by an operation of the same type. By making row operations, we can hope to simplify the shape of the system so that it is easier to find the solutions. Let us define two matrices to be row equivalent if one can be obtained from the other by a succession of elementary row operations.

To obtain an equivalent system (A′, B′) as simple as possible, we use a method which we first illustrate in a concrete case. Consider the augmented matrix in the above example. We perform a succession of row equivalences: first subtract 3 times the second row from the first row, then subtract 2 times the second row from the third row, and so on, until the matrix reaches the staircase shape described below.

This makes it very simple to solve the equations. The system is now in a form where we can give w an arbitrary value, solve for z from the third equation, then solve for y from the second, and x from the first.

We can give w any value to start with, and then determine values for x, y, z. Thus we see that the solutions depend on one free parameter.

Later we shall express this property by saying that the set of solutions has dimension 1. For the moment, we give a general name to the above procedure. Let M be a matrix. We shall say that M is in row echelon form if it has the following properties: Whenever two successive rows do not consist entirely of zeros, then the second row starts with a non-zero entry at least one step further to the right than the first row.

All the rows consisting entirely of zeros are at the bottom of the matrix. In the previous example we transformed a matrix into another which is in row echelon form. The first non-zero entry of a row is called its leading coefficient; in the above example, the leading coefficients are 1, 15, and so on. One may perform one more change by dividing each row by its leading coefficient, so that in the final matrix the leading coefficient of each row is equal to 1.
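For instance, the following matrix (an illustrative example) is in row echelon form:

$$\begin{pmatrix} 1 & 2 & 0 & 3 \\ 0 & 0 & 5 & 1 \\ 0 & 0 & 0 & 4 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$

Each leading coefficient (1, 5, 4) starts at least one step further to the right than the leading coefficient of the row above it, and the zero row is at the bottom.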

Theorem. Every matrix is row equivalent to a matrix in row echelon form. Proof. Select a non-zero entry furthest to the left in the matrix. If this entry is not in the first column, this means that the matrix consists entirely of zeros to the left of this entry, and we can forget about those columns. So suppose this non-zero entry is in the first column. After an interchange of rows, we can find an equivalent matrix such that the upper left-hand corner entry $a_{11}$ is not 0. Subtracting a suitable multiple of the first row from each of the other rows, we obtain a matrix which has zeros in the first column except for $a_{11}$. Thus the original matrix is row equivalent to a matrix whose first column is zero below $a_{11}$. We then repeat the procedure with the smaller matrix formed by the remaining rows and columns, and we can continue until the matrix is in row echelon form (formally, by induction).

We give another proof of the fundamental theorem. Theorem 4. Suppose the homogeneous system has more unknowns than equations. Then there exists a non-trivial solution. Proof. Row reduce the matrix of coefficients to row echelon form; let r be the number of non-zero rows, and let the leading coefficients occur in the columns $k_1, \ldots, k_r$. Since there are at most as many non-zero rows as equations, we have $r \le m < n$. Hence there are n − r variables other than $x_{k_1}, \ldots, x_{k_r}$. We give these variables arbitrary values, which we can of course select not all equal to 0, and then solve successively for $x_{k_r}, \ldots, x_{k_1}$, proceeding from the bottom row upward. This gives us the non-trivial solution, and proves the theorem. Observe that the pattern follows exactly that of the examples, but with a notation dealing with the general case.

Solve the linear equations in each case by this method. Elementary matrices. The row operations which we used to solve linear equations can be represented by matrix operations. Let $I_{rs}$ again be the matrix which has rs-component 1 and all other components 0. Multiplication of a matrix A on the left by $I_{rr}$ leaves the r-th row fixed and replaces all the other rows by zeros. Multiplication by $I_{sr}$ puts the r-th row of A in the s-th place; all other rows are replaced by zero. Thus the sum $J_{rs} = I_{rs} + I_{sr}$ interchanges the r-th row and the s-th row, and replaces all other rows by zero. We can express an elementary matrix E as a sum of such matrices $I_{rs}$.

Observe that E is obtained from the unit matrix by interchanging the first two rows, and leaving the third row unchanged. Thus the operation of interchanging the first two rows of A is carried out by multiplication with the matrix E obtained by performing this operation on the unit matrix. This is a special case of the following general fact.
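A quick numerical check of this fact (Python with numpy; the matrix A is an arbitrary choice): build E by performing the row interchange on the unit matrix, and verify that EA performs the same interchange on A.

```python
import numpy as np

E = np.eye(3)
E[[0, 1]] = E[[1, 0]]       # interchange the first two rows of the unit matrix

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 9.]])
print(E @ A)                # A with its first two rows interchanged
```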

Theorem 5. Let E be the matrix obtained from the unit $n \times n$ matrix by interchanging two rows. Then EA is the matrix obtained from A by interchanging these two rows. Proof. Suppose that we interchange the r-th and s-th row. Then we can write $E = I_{rs} + I_{sr} + \sum_{i \neq r,s} I_{ii}$; thus E differs from the unit matrix by interchanging the r-th and s-th rows.

By the previous discussion, this is precisely the matrix obtained by interchanging the r-th and s-th rows of A, and leaving all the other rows unchanged.

The same type of discussion also yields the next result. We know that $I_{sr}A$ puts the r-th row of A in the s-th place, and multiplication by c multiplies this row by c. All rows besides the s-th row in $cI_{sr}A$ are equal to 0. Example. Take any $4 \times n$ matrix A and compute EA, where E is obtained from the unit matrix by adding 4 times the third row to the first row. You will find that EA is obtained by multiplying the third row of A by 4 and adding it to the first row of A.

These three types reflect the row operations discussed in the preceding section. Multiplication by a matrix of type (a) multiplies the r-th row by the number c. Multiplication by a matrix of type (b) interchanges the r-th and s-th rows. Multiplication by a matrix of type (c) adds c times the s-th row to the r-th row.

Proposition 5. An elementary matrix is invertible. Proof. For type (a), the inverse matrix has r-th diagonal component $c^{-1}$, because multiplying a row first by c and then by $c^{-1}$ leaves the row unchanged.

For type (b), we note that by interchanging the r-th and s-th rows twice, we return to the matrix we started with. For type (c), the inverse is the elementary matrix which subtracts c times the s-th row from the r-th row, by an argument as in Theorem 5. This is based on the following properties. If A, B are square matrices of the same size and have inverses, then so does the product AB, and $(AB)^{-1} = B^{-1}A^{-1}$. This is immediate, because $(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AA^{-1} = I$, and similarly on the other side. The same holds for any number of factors: Proposition 5. If $A_1, \ldots, A_k$ are invertible, then so is their product, and $(A_1 \cdots A_k)^{-1} = A_k^{-1} \cdots A_1^{-1}$. Since an elementary matrix has an inverse, we conclude that any product of elementary matrices has an inverse.

Let A be a square matrix, and let A′ be row equivalent to A. Then A has an inverse if and only if A′ has an inverse. Proof. There exist elementary matrices $E_1, \ldots, E_k$ such that $A' = E_k \cdots E_1 A$. If A has an inverse, then the right-hand side has an inverse by Proposition 5, being a product of invertible matrices; hence A′ has an inverse. Since A is also row equivalent to A′, the converse holds by the same argument. This proves the proposition.

By Theorem 4, we know that A is row equivalent to a matrix A′ in row echelon form. If one row of A′ is zero, then by the definition of echelon form the last row must be zero; but then the last row of A′B is zero for every B, so A′ is not invertible, and hence A is not invertible. If all the rows of A′ are non-zero, then A′ is a triangular matrix with non-zero diagonal components. It now suffices to find an inverse for such a matrix. In fact, we prove: Theorem 5.

A square matrix A is invertible if and only if A is row equivalent to the unit matrix. Moreover, any upper triangular matrix with non-zero diagonal elements is invertible.

Proof. Suppose that A is row equivalent to the unit matrix. Then A is invertible by Proposition 5. Conversely, suppose that A is invertible. We have just seen that A is row equivalent to an upper triangular matrix with non-zero elements on the diagonal. We multiply the i-th row by $a_{ii}^{-1}$, and obtain a triangular matrix such that all the diagonal components are equal to 1. Subtracting suitable multiples of the last row from the rows above it makes all the elements of the last column equal to 0, except for the lower right-hand corner, which is 1.

We repeat this procedure with the next to the last row, and continue upward. In this way, by row equivalences, we can replace all the components which lie strictly above the diagonal by 0. We then terminate with the unit matrix, which is therefore row equivalent to the original matrix.

This proves the theorem. Corollary 5. Let A be an invertible matrix. Then A can be expressed as a product of elementary matrices. Proof. There exist elementary matrices $E_1, \ldots, E_k$ such that $E_k \cdots E_1 A = I$, whence $A = E_1^{-1} \cdots E_k^{-1}$; since the inverse of an elementary matrix is elementary, this proves the corollary. Example. We perform the following row operations, corresponding to multiplication by the elementary matrices shown: interchange the first two rows; subtract 2 times the first row from the third row. Finally, suppose we wish to solve a system AX = B in which the matrix A is invertible. In this case, we multiply both sides on the left by $A^{-1}$ and we find $X = A^{-1}B$. This also proves: Proposition 5. If the matrix of coefficients A is invertible, then the system AX = B has exactly one solution, namely $X = A^{-1}B$.
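The inversion procedure described above (reduce A to the unit matrix while applying the same row operations to I) can be sketched as follows; this is an illustrative Python implementation, not the book's, and the test matrix is made up.

```python
import numpy as np

def inverse_by_row_reduction(A, tol=1e-12):
    """Gauss-Jordan: row-reduce (A | I) until the left block is the
    unit matrix; the right block is then the inverse of A."""
    A = A.astype(float)
    n = A.shape[0]
    M = np.hstack([A, np.eye(n)])               # augmented matrix (A | I)
    for c in range(n):
        piv = c + np.argmax(np.abs(M[c:, c]))   # row with the largest pivot
        if abs(M[piv, c]) < tol:
            raise ValueError("matrix is not invertible")
        M[[c, piv]] = M[[piv, c]]               # interchange rows
        M[c] /= M[c, c]                         # scale so the pivot is 1
        for i in range(n):
            if i != c:
                M[i] -= M[i, c] * M[c]          # clear the rest of column c
    return M[:, n:]

A = np.array([[2., 1.], [1., 1.]])
print(inverse_by_row_reduction(A))              # compare with np.linalg.inv(A)
```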

Exercises. Using elementary row operations, find inverses for the following matrices. Linear Combinations. Let $A_1, \ldots, A_n$ be vectors, and let $x_1, \ldots, x_n$ be numbers. Then we call $x_1A_1 + \cdots + x_nA_n$ a linear combination of $A_1, \ldots, A_n$. A similar definition applies to a linear combination of row vectors. The linear combination is called non-trivial if not all the coefficients $x_1, \ldots, x_n$ are equal to 0. Vectors $A_1, \ldots, A_n$ are called linearly dependent if some non-trivial linear combination of them is equal to 0, and linearly independent otherwise. We may thus summarize the description of the set of solutions of the system of homogeneous linear equations in a table.

Indeed, let $x_1, \ldots, x_n$ be numbers such that $x_1E_1 + \cdots + x_nE_n = O$; looking at each component shows that every $x_i = 0$, and this proves that $E_1, \ldots, E_n$ are linearly independent. We shall study the notions of linear dependence and independence more systematically in the next chapter. They were mentioned here just to have a complete table for the three basic interpretations of a system of linear equations, and to introduce the notion in a concrete special case before giving the general definitions in vector spaces. Exercise. Let $C^k$ be the k-th column of C = AB. Express $C^k$ as a linear combination of the columns of A. Describe precisely which are the coefficients, coming from the matrix B.

Which column is it? Sets. A set is a collection of objects; a member of the collection is also called an element of the set. It is useful in practice to use short symbols to denote certain sets.

For instance, we denote by R the set of all numbers. To say that "x is a number" or that "x is an element of R" amounts to the same thing. The set of n-tuples of numbers will be denoted by $R^n$. Thus "X is an element of $R^n$" and "X is an n-tuple" mean the same thing. Instead of saying that u is an element of a set S, we shall also frequently say that u lies in S, and we write $u \in S$. If every element of a set S is also an element of a set S′, we say that S is a subset of S′. Thus the set of rational numbers is a subset of the set of real numbers.

To say that S is a subset of S′ is to say that S is part of S′. To denote the fact that S is a subset of S′, we write $S \subset S'$. Definitions. In mathematics, we meet several types of objects which can be added and multiplied by numbers.

Among these are vectors of the same dimension, and functions. It is now convenient to define in general a notion which includes these as a special case. A vector space V is a set of objects which can be added and multiplied by numbers, in such a way that the sum of two elements of V is again an element of V, the product of an element of V by a number is again an element of V, and the following properties are satisfied:

VS 1. Addition is associative: (u + v) + w = u + (v + w).

VS 2. There is an element of V, denoted by O, such that O + u = u + O = u for all elements u of V.

VS 3. For every element u of V there is an element, denoted by −u, such that u + (−u) = O.

VS 4. Addition is commutative: u + v = v + u.

VS 5. If c is a number, then c(u + v) = cu + cv.

VS 6. If a, b are numbers, then (a + b)v = av + bv.

VS 7. If a, b are numbers, then (ab)v = a(bv).

VS 8. For all elements u of V, we have 1 · u = u.

We have used all these rules when dealing with vectors, or with functions, but we wish to be more systematic from now on, and hence have made a list of them.

Further properties which can easily be deduced from these are given in the exercises and will be assumed from now on. The algebraic properties of elements of an arbitrary vector space are very similar to those of elements of $R^2$, $R^3$, or $R^n$. Consequently it is customary to call elements of an arbitrary vector space also vectors. If u, v are vectors (i.e. elements of an arbitrary vector space), then u − v denotes u + (−v), and we also write −v instead of (−1)v. Example. Fix two positive integers m, n.

Let V be the set of all m x n matrices. It is easy to verify that all properties VS 1 through VS 8 are satisfied by our rules for addition of matrices and multiplication of matrices by numbers. The main thing to observe here is that addition of matrices is defined in terms of the components, and for the addition of components, the conditions analogous to VS 1 through VS 4 are satisfied.

They are standard properties of numbers. Similarly, VS 5 through VS 8 are true for multiplication of matrices by numbers, because the corresponding properties for the multiplication of numbers are true. Example. Let V be the set of all functions defined for all numbers.

If f, g are two functions, we know how to form their sum f + g: it is the function whose value at a number t is f(t) + g(t). We also know how to multiply f by a number c: it is the function cf whose value at a number t is cf(t). In dealing with functions, we have used properties VS 1 through VS 8 many times; we now realize that the set of functions is a vector space. The zero element is the zero function, whose value at every number t is 0; we emphasize the condition "for all t".
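As a small illustration (Python, with functions represented as ordinary callables; nothing here is from the text), the vector-space operations on functions are defined pointwise:

```python
import math

def add(f, g):
    return lambda t: f(t) + g(t)      # (f + g)(t) = f(t) + g(t)

def scale(c, f):
    return lambda t: c * f(t)         # (cf)(t) = c * f(t)

zero = lambda t: 0                    # the zero function: 0 for all t

h = add(math.sin, scale(3.0, math.cos))   # h(t) = sin t + 3 cos t
print(h(0.0))                             # 3.0
```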

If a function has some of its values equal to zero, but other values not equal to 0, then it is not the zero function. In practice, a number of elementary properties concerning addition of elements in a vector space are obvious because of the concrete way the vector space is given in terms of numbers, as for instance in the previous two examples.

We shall now see briefly how to prove such properties just from the axioms. It is possible to add several elements of a vector space. Suppose we wish to add four elements, say u, v, w, z. We first add any two of them, then a third, and finally a fourth. Using the rules VS 1 and VS 4, we see that it does not matter in which order we perform the additions. This is exactly the same situation as we had with vectors. The same remark applies to the sum of any number n of elements of V.

We shall use 0 to denote the number zero, and O to denote the element of any vector space V satisfying property VS 2. We observe that this zero element O is uniquely determined by condition VS 2: if O′ also satisfies it, then O′ = O′ + O = O. Subspaces. Let V be a vector space, and let W be a subset of V.

Assume that W satisfies the following conditions: (i) if v, w are elements of W, then v + w is also an element of W; (ii) if v is an element of W and c a number, then cv is an element of W; (iii) the element O of V is also an element of W. Then W itself is a vector space. Indeed, properties VS 1 through VS 8, being satisfied for all elements of V, are satisfied also for the elements of W. We shall call W a subspace of V. Example. Let W be the set of vectors in $R^n$ whose last coordinate is equal to 0. Then W is a subspace of $R^n$, which we could identify with $R^{n-1}$. Example. Let A be a vector in $R^3$, and let W be the set of all elements X of $R^3$ which are perpendicular to A, i.e. such that $X \cdot A = 0$. Then W is a subspace of $R^3$: if $X \cdot A = 0$ and $Y \cdot A = 0$, then $(X + Y) \cdot A = X \cdot A + Y \cdot A = 0$; if c is a number, then $(cX) \cdot A = c(X \cdot A) = 0$; and $O \cdot A = 0$. This proves that W is a subspace of $R^3$. Example 5. Let Sym(n × n) be the set of all symmetric $n \times n$ matrices.

Then Sym(n × n) is a subspace of the space of all $n \times n$ matrices: if A, B are symmetric, then so is A + B; if c is a number, then cA is symmetric; also the zero matrix is symmetric. Example. Consider the set of continuous functions. If f, g are continuous, then f + g is continuous; if c is a number, then cf is continuous; and the zero function is continuous. Hence the continuous functions form a subspace of the vector space of all functions. Similarly, if f, g are differentiable, then f + g is differentiable; if c is a number, then cf is differentiable; and the zero function is differentiable.

Hence the differentiable functions form a subspace of the vector space of all functions. Furthermore, every differentiable function is continuous. Hence the differentiable functions form a subspace of the vector space of continuous functions.

Let V be a vector space and let U, W be subspaces. We denote by $U \cap W$ the intersection of U and W, i.e. the set of elements which lie in both U and W. Then $U \cap W$ is a subspace. For instance, if U, W are two planes in 3-space passing through the origin, then in general their intersection will be a straight line passing through the origin, as shown in the figure.

Exercises. Let U, W be subspaces of a vector space V, and prove the indicated assertions about them. Show that the indicated set W is a subspace of $R^n$. Show that the following sets of elements in $R^2$ form subspaces.

Show that the following sets of elements in $R^3$ form subspaces. Let V be a subspace of $R^n$, and let W be the set of elements of $R^n$ which are perpendicular to every element of V; show that W is a subspace (it is often denoted by $V^{\perp}$). Linear combinations. Let V be an arbitrary vector space, and let $v_1, \ldots, v_n$ be elements of V. Let $x_1, \ldots, x_n$ be numbers. An expression of type $x_1v_1 + \cdots + x_nv_n$ is called a linear combination of $v_1, \ldots, v_n$. Theorem. The set of all linear combinations of $v_1, \ldots, v_n$ is a subspace of V. Proof. Let W be the set of all such linear combinations.

The sum of two linear combinations of $v_1, \ldots, v_n$ is again a linear combination of $v_1, \ldots, v_n$. Furthermore, if c is a number, then $c(x_1v_1 + \cdots + x_nv_n) = cx_1v_1 + \cdots + cx_nv_n$ is a linear combination of $v_1, \ldots, v_n$; and O is the linear combination with all coefficients equal to 0. Hence W is a subspace. The subspace W consisting of all linear combinations of $v_1, \ldots, v_n$ is called the subspace generated by $v_1, \ldots, v_n$. Example. Let $v_1$ be a non-zero element of a vector space V, and let w be any element of V.

Figure 2 shows a plane passing through the origin. We obtain the most general notion of a plane by the following operation. Let S be an arbitrary subset of V, and let P be an element of V. If we add P to all elements of S, then we obtain what is called the translation of S by P. Let $v_1$, $v_2$ be elements of a vector space V such that neither is a scalar multiple of the other.

We define the plane passing through P, parallel to $v_1$, $v_2$, to be the set of all elements $P + t_1v_1 + t_2v_2$, where $t_1$, $t_2$ are arbitrary numbers. This notion of plane is the analogue, with two elements $v_1$, $v_2$, of the notion of parametrized line considered in Chapter I. Usually such a plane does not pass through the origin, as shown in the figure; thus such a plane is not a subspace of V. We give a number of examples below. Example. Let V be a vector space and let v, u be elements of V.

The line segment between v and v + u is the set of all points v + tu with $0 \le t \le 1$; this line segment is illustrated in the following picture. Example. Let v, w be elements of a vector space V, and assume that neither is a scalar multiple of the other. The parallelogram spanned by v and w is defined to be the set of all points $t_1v + t_2w$ with $0 \le t_1 \le 1$ and $0 \le t_2 \le 1$. This definition is clearly justified, since $t_1v$ is a point of the segment between O and v (see the figure), and similarly for $t_2w$. Exercise. Let $A_1, \ldots, A_m$ be vectors in $R^n$, and let W be the set of all elements of $R^n$ which are perpendicular to $A_1, \ldots, A_m$. Let V be the space generated by $A_1, \ldots, A_m$; show that the vectors of W are perpendicular to every element of V.

Draw the parallelogram spanned by the vectors (1, 2) and (−1, 1) in $R^2$. Draw the parallelogram spanned by the vectors (2, −1) and (1, 3) in $R^2$. Convex Sets. Let S be a subset of a vector space V. We shall say that S is convex if, given any two points P, Q in S, the line segment between P and Q is contained in S.

The set on the right is not convex, since the line segment between P and Q is not entirely contained in S. This gives us a simple test to determine whether a set is convex or not. Example. Let S be the parallelogram spanned by two vectors $v_1$, $v_2$, so S is the set of linear combinations $t_1v_1 + t_2v_2$ with $0 \le t_1 \le 1$ and $0 \le t_2 \le 1$. We wish to prove that S is convex. Let $P = t_1v_1 + t_2v_2$ and $Q = s_1v_1 + s_2v_2$ be points in S. A point on the segment between them has the form $(1 - t)P + tQ = ((1 - t)t_1 + ts_1)v_1 + ((1 - t)t_2 + ts_2)v_2$ with $0 \le t \le 1$, and each coefficient again lies between 0 and 1; hence the segment is contained in S, and S is convex. Half planes. Consider the equation $A \cdot X = c$, where A is a non-zero vector and c a number. In $R^2$, this is the equation of a line, as shown in the figure.

Prove as Exercise 2 that each half plane is convex. This is clear intuitively from the picture, at least in $R^2$, but your proof should be valid for the analogous situation in $R^n$. Theorem 3. Let $P_1, \ldots, P_n$ be points of a vector space V, and let S be the set of all linear combinations $t_1P_1 + \cdots + t_nP_n$ with $t_i \ge 0$ for all i and $t_1 + \cdots + t_n = 1$. Then S is convex. This proves our theorem. In the next theorem, we shall prove that the set of all linear combinations with these conditions is the smallest convex set containing $P_1, \ldots, P_n$. Example. Let $P_1$, $P_2$, $P_3$ be three points in the plane, not lying on a line. Then it is geometrically clear that the smallest convex set containing these three points is the triangle having these points as vertices.

Thus it is natural to take as the definition of a triangle the following property, valid in any vector space (Figure 13). Let $P_1$, $P_2$, $P_3$ be three points in a vector space V, not lying on a line.

Then the triangle spanned by these points is defined to be the set of all combinations $t_1P_1 + t_2P_2 + t_3P_3$ with $t_i \ge 0$ and $t_1 + t_2 + t_3 = 1$. When we deal with more than three points, the set of linear combinations as in Theorem 3 is called the convex set spanned by the points. Although we shall not need the next result, it shows that this convex set is the smallest convex set containing all the points $P_1, \ldots, P_n$. Omit the proof if you can't handle the argument by induction.

Theorem. Any convex set which contains $P_1, \ldots, P_n$ also contains all linear combinations $t_1P_1 + \cdots + t_nP_n$ with $t_i \ge 0$ and $t_1 + \cdots + t_n = 1$. Proof. We prove this by induction. For n = 1 the statement is clear. Assume it for n − 1; we shall prove it for n. Let S′ be a convex set containing $P_1, \ldots, P_n$, and let $t_1P_1 + \cdots + t_nP_n$ be a combination as above. If $t_n = 1$, the combination is $P_n$ itself, which lies in S′. Otherwise put $s = 1 - t_n > 0$, and let $Q = (t_1/s)P_1 + \cdots + (t_{n-1}/s)P_{n-1}$; the coefficients of Q are $\ge 0$ and add up to 1, so Q lies in S′ by induction. Our combination equals $(1 - t_n)Q + t_nP_n$, a point of the segment between Q and $P_n$. But then it lies in S′ by definition of a convex set, as was to be shown. Exercises. Prove that the indicated set S is convex. Let A be a non-zero vector in $R^n$ and let c be a fixed number.

Let S be a convex set in a vector space. If c is a number, denote by cS the set of all elements cv with v in S. Show that cS is convex.

Let $S_1$ and $S_2$ be convex sets. Show that the intersection $S_1 \cap S_2$ is convex. Let S be a convex set in a vector space V and let w be an arbitrary element of V; show that the translation w + S is convex. Linear Independence. Let V be a vector space, and let $v_1, \ldots, v_n$ be elements of V. We shall say that $v_1, \ldots, v_n$ are linearly dependent if there exist numbers $a_1, \ldots, a_n$, not all equal to 0, such that $a_1v_1 + \cdots + a_nv_n = O$. In other words, vectors $v_1, \ldots, v_n$ are linearly independent if the only numbers $a_1, \ldots, a_n$ satisfying this relation are $a_1 = \cdots = a_n = 0$. Example 1. The standard unit vectors $E_1, \ldots, E_n$ of $R^n$ are linearly independent: if $a_1E_1 + \cdots + a_nE_n = O$, then looking at the i-th component shows $a_i = 0$ for every i. Example 2. Show that the vectors (1, 1) and (−3, 2) are linearly independent. Indeed, let a, b be numbers such that $a(1, 1) + b(-3, 2) = (0, 0)$. This gives the system of two equations $a - 3b = 0$ and $a + 2b = 0$, which we solve for a and b: subtracting yields $5b = 0$, so b = 0, and then a = 0. Hence a, b are both 0, and our vectors are linearly independent.

The vectors $E_1, \ldots, E_n$ form what is called a basis of $R^n$. To prove this we have to prove that they are linearly independent, which was already done in Example 1, and that they generate $R^n$, which is clear since any $X = (x_1, \ldots, x_n)$ can be written $X = x_1E_1 + \cdots + x_nE_n$. Hence they form a basis. However, there are many other bases. We shall find out that any two vectors of $R^2$ which are not parallel form a basis of $R^2$. Let us first consider an example. If $v_1$, $v_2$ are as drawn in the figure, they form a basis of $R^2$.

Example. Show that the vectors (1, 1) and (−1, 2) form a basis of $R^2$. We have to show that they are linearly independent and that they generate $R^2$. For the independence, suppose $a(1, 1) + b(-1, 2) = (0, 0)$; then $a - b = 0$ and $a + 2b = 0$, and subtracting the first equation from the second gives $3b = 0$, so b = 0 and then a = 0.

Next, we must show that (1, 1) and (−1, 2) generate $R^2$. Let (s, t) be an arbitrary element of $R^2$. We must find numbers x, y such that $x(1, 1) + y(-1, 2) = (s, t)$, that is, $x - y = s$ and $x + 2y = t$. Again subtract the first equation from the second: $3y = t - s$, so $y = (t - s)/3$ and then $x = s + y$. This proves that (1, 1) and (−1, 2) generate $R^2$, and concludes the proof that they form a basis of $R^2$. The general story for $R^2$ is expressed in the following theorem. Theorem. Let (a, b) and (c, d) be two vectors in $R^2$. They form a basis of $R^2$ if and only if $ad - bc \neq 0$. Try to prove this yourself; if you can't do it, you will find the proof in the answer section. It parallels closely the procedure of Example 4.

Suppose now that $v_1, \ldots, v_n$ form a basis of a vector space V. The elements of V can be represented by n-tuples relative to this basis, as follows. If an element v of V is written as a linear combination $v = x_1v_1 + \cdots + x_nv_n$ of the basis elements, then we call $(x_1, \ldots, x_n)$ the coordinates of v with respect to the basis, and we call $x_i$ the i-th coordinate. The coordinates with respect to the usual basis $E_1, \ldots, E_n$ of $R^n$ are just the components of a vector. The following theorem shows that there can only be one set of coordinates for a given vector. Theorem. Let V be a vector space, and let $v_1, \ldots, v_n$ be linearly independent elements of V. If $x_1, \ldots, x_n$ and $y_1, \ldots, y_n$ are numbers such that $x_1v_1 + \cdots + x_nv_n = y_1v_1 + \cdots + y_nv_n$, then $x_i = y_i$ for all i. The theorem expresses the fact that when an element is written as a linear combination of $v_1, \ldots, v_n$, then the coefficients are uniquely determined; this is true only when $v_1, \ldots, v_n$ are linearly independent. Exercise. Find the coordinates of (1, 0) with respect to the two vectors (1, 1) and (−1, 2).
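Finding coordinates with respect to a basis amounts to solving a system of linear equations. Here is a minimal numpy sketch for the exercise above; the call to np.linalg.solve assumes the basis vectors are linearly independent (i.e. the coefficient matrix is invertible):

```python
import numpy as np

# The basis vectors (1, 1) and (-1, 2), written as the columns of a matrix.
B = np.array([[1., -1.],
              [1.,  2.]])
v = np.array([1., 0.])

x = np.linalg.solve(B, v)   # coordinates (x1, x2) with x1*(1,1) + x2*(-1,2) = v
print(x)                    # [ 2/3  -1/3 ]
```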

Example. The two functions $e^t$, $e^{2t}$ are linearly independent. To prove this, suppose a, b are numbers such that $ae^t + be^{2t} = 0$ for all t. Differentiate this relation: $ae^t + 2be^{2t} = 0$. Subtract the first from the second relation: $be^{2t} = 0$, whence b = 0, and then a = 0 as well. Hence $e^t$, $e^{2t}$ are linearly independent. Let V be the vector space of all functions of a variable t.

We emphasize that linear dependence for functions means that the defining relation holds for all values of t.




