
System of linear equations

From Wikipedia, the free encyclopedia



[Figure: A linear system in three variables determines a collection of planes; the intersection point is the solution.]

In mathematics, a system of linear equations (or linear system) is a collection of linear equations involving the same set of variables. For example,

3x + 2y − z = 1
2x − 2y + 4z = −2
−x + (1/2)y − z = 0

is a system of three equations in the three variables x, y, z. A solution to a linear system is an assignment of numbers to the variables such that all the equations are simultaneously satisfied. A solution to the system above is given by

x = 1,  y = −2,  z = −2,

since it makes all three equations valid.[1]

In mathematics, the theory of linear systems is a branch of linear algebra, a subject which is fundamental to modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra, and such methods play a prominent role in engineering, physics, chemistry, computer science, and economics. A system of non-linear equations can often be approximated by a linear system (see linearization), a helpful technique when making a mathematical model or computer simulation of a relatively complex system.

Contents

1 Elementary example
2 General form
   2.1 Vector equation
   2.2 Matrix equation
3 Solution set
   3.1 Geometric interpretation
   3.2 General behavior
4 Properties
   4.1 Independence
   4.2 Consistency
   4.3 Equivalence
5 Solving a linear system
   5.1 Describing the solution
   5.2 Elimination of variables
   5.3 Row reduction
   5.4 Cramer's rule
   5.5 Other methods
6 Homogeneous systems
   6.1 Solution set
   6.2 Relation to nonhomogeneous systems
7 See also
8 Notes
9 References
   9.1 Textbooks
10 External links

Elementary example


The simplest kind of linear system involves two equations and two variables:

2x + 3y = 6
4x + 9y = 15

One method for solving such a system is as follows. First, solve the top equation for x in terms of y:

x = 3 − (3/2)y

Now substitute this expression for x into the bottom equation:

4(3 − (3/2)y) + 9y = 15

This results in a single equation involving only the variable y. Solving gives y = 1, and substituting this back into the equation for x yields x = 3/2. This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra).
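This substitution is easy to mirror in a few lines of Python (a sketch added for illustration; the numbers are the ones from the system above):

# Solve 2x + 3y = 6 and 4x + 9y = 15 by substitution.
# From the first equation: x = 3 - (3/2)y.
# Substituting into the second: 4*(3 - 1.5*y) + 9*y = 15, i.e. 12 + 3*y = 15.
y = (15 - 12) / 3      # y = 1.0
x = 3 - 1.5 * y        # x = 1.5
assert 2*x + 3*y == 6 and 4*x + 9*y == 15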

General form


A general system of m linear equations with n unknowns can be written as

a11 x1 + a12 x2 + ... + a1n xn = b1
a21 x1 + a22 x2 + ... + a2n xn = b2
. . .
am1 x1 + am2 x2 + ... + amn xn = bm

Here x1, x2, ..., xn are the unknowns, a11, a12, ..., amn are the coefficients of the system, and b1, b2, ..., bm are the constant terms.

Often the coefficients and unknowns are real or complex numbers, but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure.

Vector equation


One extremely helpful view is that each unknown is a weight for a column vector in a linear combination:

x1 a1 + x2 a2 + ... + xn an = b,

where aj = (a1j, a2j, ..., amj) is the column of coefficients of xj and b = (b1, b2, ..., bm).

This allows all the language and theory of vector spaces (or more generally, modules) to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side is called their span, and the equations have a solution just when the right-hand vector is within that span. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a basis of linearly independent vectors that do guarantee exactly one expression; and the number of vectors in that basis (its dimension) cannot be larger than m or n, but it can be smaller. This is important because if we have m independent vectors a solution is guaranteed regardless of the right-hand side, and otherwise not guaranteed.

Matrix equation


The vector equation is equivalent to a matrix equation of the form

Ax = b

where A is an m × n matrix, x is a column vector with n entries, and b is a column vector with m entries.

The number of vectors in a basis for the span is now expressed as the rank of the matrix.
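As a concrete sketch (using NumPy; the matrix is the one from the elimination example later in this article), the rank and the solution of Ax = b can be computed as follows:

import numpy as np

A = np.array([[1.0, 3.0, -2.0],
              [3.0, 5.0,  6.0],
              [2.0, 4.0,  3.0]])
b = np.array([5.0, 7.0, 8.0])

print(np.linalg.matrix_rank(A))  # 3: full rank, so the solution is unique
x = np.linalg.solve(A, b)        # array([-15., 8., 2.])
print(np.allclose(A @ x, b))     # True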

Solution set

[Figure: The solution set for the equations x − y = −1 and 3x + y = 9 is the single point (2, 3).]

A solution of a linear system is an assignment of values to the variables x1, x2, ..., xn such that each of the equations is satisfied. The set of all possible solutions is called the solution set. A linear system may behave in any one of three possible ways:

1. The system has infinitely many solutions.
2. The system has a single unique solution.
3. The system has no solution.

Geometric interpretation


For a system involving two variables (x and y), each linear equation determines a line on the xy-plane. Because a solution to a linear system must satisfy all of the equations, the solution set is the intersection of these lines, and is hence either a line, a single point, or the empty set. For three variables, each linear equation determines a plane in three-dimensional space, and the solution set is the intersection of these planes. Thus the solution set may be a plane, a line, a single point, or the empty set. For n variables, each linear equation determines a hyperplane in n-dimensional space. The solution set is the intersection of these hyperplanes, which may be a flat of any dimension.

General behavior

[Figure: The solution set for two equations in three variables is usually a line.]

In general, the behavior of a linear system is determined by the relationship between the number of equations and the number of unknowns:

1. Usually, a system with fewer equations than unknowns has infinitely many solutions. Such a system is also known as an underdetermined system.
2. Usually, a system with the same number of equations and unknowns has a single unique solution.
3. Usually, a system with more equations than unknowns has no solution. Such a system is also known as an overdetermined system.

In the first case, the dimension of the solution set is usually equal to n − m, where n is the number of variables and m is the number of equations.

The following pictures illustrate this trichotomy in the case of two variables:

[Pictures: One Equation / Two Equations / Three Equations]

The first system has infinitely many solutions, namely all of the points on the blue line. The second system has a single unique solution, namely the intersection of the two lines. The third system has no solutions, since the three lines share no common point. Keep in mind that the pictures above show only the most common case. It is possible for a system of two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of three equations and two unknowns to be solvable (if the three lines intersect at a single point). In general, a system of linear equations may behave differently than expected if the equations are linearly dependent, or if two or more of the equations are inconsistent.
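The trichotomy can also be observed numerically. A hedged sketch using NumPy's least-squares routine (the particular numbers are mine, chosen so that three lines share no common point):

import numpy as np

# Overdetermined: three equations, two unknowns.
A = np.array([[1.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
b = np.array([2.0, 0.0, 2.0])

x, residual, rank, _ = np.linalg.lstsq(A, b, rcond=None)
print(residual)  # nonzero: no exact solution exists, only a best fit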

Properties

Independence
The equations of a linear system are independent if none of the equations can be derived algebraically from the others. When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set. For linear equations, logical independence is the same as linear independence.

[Figure: The equations x − 2y = −1, 3x + 5y = 8, and 4x + 3y = 7 are not linearly independent.]

For example, the equations

3x + 2y = 6 and 6x + 4y = 12

are not independent: they are the same equation when scaled by a factor of two, and they would produce identical graphs. This is an example of equivalence in a system of linear equations. For a more complicated example, the equations

x − 2y = −1
3x + 5y = 8
4x + 3y = 7

are not independent, because the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set; what remains is a pair of equations that cannot be derived algebraically from one another. The graphs of these equations are three lines that intersect at a single point.

Consistency

[Figure: The equations 3x + 2y = 6 and 3x + 2y = 12 are inconsistent.]

The equations of a linear system are consistent if they possess a common solution, and inconsistent otherwise. When the equations are inconsistent, it is possible to derive a contradiction from the equations, such as the statement that 0 = 1. For example, the equations

3x + 2y = 6 and 3x + 2y = 12

are inconsistent. In attempting to find a solution, we tacitly assume that there is a solution; that is, we assume that the value of x in the first equation must be the same as the value of x in the second equation (and likewise for the value of y in both equations). Applying the substitution property (for 3x + 2y) yields the equation 6 = 12, which is a false statement. This contradicts our assumption that the system had a solution, so we conclude that the assumption was false; that is, the system in fact has no solution. The graphs of these equations on the xy-plane are a pair of parallel lines.

It is possible for three linear equations to be inconsistent, even though any two of them are consistent together. For example, the equations

x + y = 1
2x + y = 1
3x + 2y = 3

are inconsistent. Adding the first two equations together gives 3x + 2y = 2, which can be subtracted from the third equation to yield 0 = 1. Note that any two of these equations have a common solution. The same phenomenon can occur for any number of equations.

In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly dependent and the constant terms do not satisfy the dependence relation. A system of equations whose left-hand sides are linearly independent is always consistent.
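Consistency can be tested mechanically by comparing ranks (the Rouche-Capelli criterion): the system is consistent exactly when rank(A) equals the rank of the augmented matrix [A | b]. A sketch using the three inconsistent equations above:

import numpy as np

A = np.array([[1.0, 1.0], [2.0, 1.0], [3.0, 2.0]])
b = np.array([1.0, 1.0, 3.0])

aug = np.column_stack([A, b])
print(np.linalg.matrix_rank(A))    # 2
print(np.linalg.matrix_rank(aug))  # 3: ranks differ, so the system is inconsistent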

Equivalence

Two linear systems using the same set of variables are equivalent if each of the equations in the second system can be derived algebraically from the equations in the first system, and vice versa. Equivalent systems convey precisely the same information about the values of the variables. In particular, two linear systems are equivalent if and only if they have the same solution set.

Solving a linear system


There are several algorithms for solving a system of linear equations.

Describing the solution


When the solution set is finite, it is usually described in set notation. For example, the solution x = 2, y = 3, z = 4 would be written as the set {(2, 3, 4)}.

It can be difficult to describe a solution set that is infinite. Typically, some of the variables are designated as free (or independent, or as parameters), meaning that they are allowed to take any value, while the remaining variables are dependent on the values of the free variables. For example, consider the following system:

x + 3y − 2z = 5
3x + 5y + 6z = 7

The solution set to this system can be described by the following equations:

x = −1 − 7z
y = 2 + 3z

Here z is the free variable, while x and y are dependent on z. Any point in the solution set can be obtained by first choosing a value for z, and then computing the corresponding values for x and y. Each free variable gives the solution space one degree of freedom, and the number of degrees of freedom is equal to the dimension of the solution set. For example, the solution set for the above system is a line, since a point in the solution set can be chosen by specifying the value of the single parameter z. A solution set with more free variables describes a plane, or a higher-dimensional flat. Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows:

y = 11/7 − (3/7)x
z = −1/7 − (1/7)x

Here x is the free variable, and y and z are dependent.
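A computer algebra system can produce such a parametric description directly. A sketch with SymPy (equations are entered as expressions equal to zero):

from sympy import symbols, linsolve

x, y, z = symbols('x y z')
system = [x + 3*y - 2*z - 5, 3*x + 5*y + 6*z - 7]
print(linsolve(system, x, y, z))   # {(-7*z - 1, 3*z + 2, z)}: z is free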

Elimination of variables


The simplest method for solving a system of linear equations is to repeatedly eliminate variables. This method can be described as follows:

1. In the first equation, solve for one of the variables in terms of the others.
2. Substitute this expression into the remaining equations. This yields a system of equations with one fewer equation and one fewer unknown.
3. Continue until you have reduced the system to a single linear equation.
4. Solve this equation, and then back-substitute until the entire solution is found.

For example, consider the following system:

x + 3y − 2z = 5
3x + 5y + 6z = 7
2x + 4y + 3z = 8

Solving the first equation for x gives x = 5 + 2z − 3y, and substituting this into the second and third equations yields

−4y + 12z = −8
−2y + 7z = −2

Solving the first of these equations for y yields y = 2 + 3z, and substituting this into the second equation yields z = 2. We now have:

x = 5 + 2z − 3y
y = 2 + 3z
z = 2

Substituting z = 2 into the second equation gives y = 8, and substituting z = 2 and y = 8 into the first equation yields x = −15. Therefore, the solution set is the single point (x, y, z) = (−15, 8, 2).

Row reduction


Main article: Gaussian elimination

In row reduction, the linear system is represented as an augmented matrix:

[ 1  3 -2 | 5 ]
[ 3  5  6 | 7 ]
[ 2  4  3 | 8 ]

This matrix is then modified using elementary row operations until it reaches reduced row echelon form. There are three types of elementary row operations:

Type 1: Swap the positions of two rows.
Type 2: Multiply a row by a nonzero scalar.
Type 3: Add to one row a scalar multiple of another.

Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original. There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss-Jordan elimination. The following computation shows Gauss-Jordan elimination applied to the matrix above:

The last matrix is in reduced row echelon form, and represents the system x = −15, y = 8, z = 2. A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down.
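For illustration, here is a short, hedged Python sketch of Gauss-Jordan elimination (with partial pivoting added for numerical safety; the function name is mine):

import numpy as np

def rref(M):
    """Return the reduced row echelon form of M, computed in floating point."""
    A = M.astype(float).copy()
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        if r == rows:
            break
        p = r + np.argmax(np.abs(A[r:, c]))  # choose the largest pivot candidate
        if np.isclose(A[p, c], 0.0):
            continue                          # no pivot in this column
        A[[r, p]] = A[[p, r]]                 # Type 1: swap two rows
        A[r] /= A[r, c]                       # Type 2: scale the pivot row to 1
        for i in range(rows):                 # Type 3: add multiples of the pivot row
            if i != r:
                A[i] -= A[i, c] * A[r]
        r += 1
    return A

aug = np.array([[1, 3, -2, 5], [3, 5, 6, 7], [2, 4, 3, 8]])
print(rref(aug))  # last column reads off x = -15, y = 8, z = 2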

Cramer's rule


Main article: Cramer's rule

Cramer's rule is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants. For example, the solution to the system

ax + by = e
cx + dy = f

is given by

x = |e b; f d| / |a b; c d|,   y = |a e; c f| / |a b; c d|,

where |p q; r s| denotes the 2 × 2 determinant ps − qr.

For each variable, the denominator is the determinant of the matrix of coefficients, while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms.

Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. (Indeed, large determinants are most easily computed using row reduction.) Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision.
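In code, Cramer's rule is a direct translation of the recipe above; a sketch with NumPy determinants (fine for tiny systems but, as just noted, not a practical method for large ones):

import numpy as np

def cramer(A, b):
    D = np.linalg.det(A)
    if np.isclose(D, 0.0):
        raise ValueError("D = 0: Cramer's rule does not apply")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                  # replace column i with the constants
        x[i] = np.linalg.det(Ai) / D
    return x

A = np.array([[1.0, 3.0, -2.0], [3.0, 5.0, 6.0], [2.0, 4.0, 3.0]])
b = np.array([5.0, 7.0, 8.0])
print(cramer(A, b))                   # [-15.  8.  2.]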

Other methods


While systems of three or four equations can be readily solved by hand, computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as pivoting. Secondly, the algorithm does not exactly do Gaussian elimination, but computes the LU decomposition of the matrix A. This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix A but different vectors b.

If the matrix A has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite matrix can be solved twice as fast with the Cholesky decomposition. Levinson recursion is a fast method for Toeplitz matrices. Special methods also exist for matrices with many zero elements (so-called sparse matrices), which appear often in applications.

A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods.
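For example, with SciPy the LU factorization can be computed once and reused for several right-hand sides (a sketch; the 2x2 matrix is arbitrary):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0], [6.0, 3.0]])
lu, piv = lu_factor(A)               # one O(n^3) factorization, with pivoting

for b in (np.array([1.0, 0.0]), np.array([0.0, 1.0])):
    x = lu_solve((lu, piv), b)       # each extra solve costs only O(n^2)
    print(x, np.allclose(A @ x, b))  # True for both right-hand sides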

Homogeneous systems


A system of linear equations is homogeneous if all of the constant terms are zero:

a11 x1 + a12 x2 + ... + a1n xn = 0
a21 x1 + a22 x2 + ... + a2n xn = 0
. . .
am1 x1 + am2 x2 + ... + amn xn = 0

A homogeneous system is equivalent to a matrix equation of the form

Ax = 0

where A is an m × n matrix, x is a column vector with n entries, and 0 is the zero vector with m entries.

Solution set


Every homogeneous system has at least one solution, known as the zero solution (or trivial solution), which is obtained by assigning the value of zero to each of the variables. The solution set has the following additional properties:

1. If u and v are two vectors representing solutions to a homogeneous system, then the vector sum u + v is also a solution to the system.
2. If u is a vector representing a solution to a homogeneous system, and r is any scalar, then ru is also a solution to the system.

These are exactly the properties required for the solution set to be a linear subspace of Rn. In particular, the solution set to a homogeneous system is the same as the null space of the corresponding matrix A.
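Numerically, a basis for this null space can be computed directly; a sketch using SciPy (the 2x3 matrix is illustrative):

import numpy as np
from scipy.linalg import null_space

# Two independent equations in three unknowns: the solutions of Ax = 0 form a line.
A = np.array([[1.0, 3.0, -2.0], [3.0, 5.0, 6.0]])
N = null_space(A)             # orthonormal basis for {x : Ax = 0}
print(N.shape)                # (3, 1): a one-dimensional null space
print(np.allclose(A @ N, 0))  # True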

Relation to nonhomogeneous systems


There is a close relationship between the solutions to a linear system and the solutions to the corresponding homogeneous system. Specifically, if p is any specific solution to the linear system Ax = b, then the entire solution set can be described as

{ p + v : v is any solution to Ax = 0 }.

Geometrically, this says that the solution set for Ax = b is a translation of the solution set for Ax = 0. Specifically, the flat for the first system can be obtained by translating the linear subspace for the homogeneous system by the vector p. This reasoning only applies if the system Ax = b has at least one solution. This occurs if and only if the vector b lies in the image of the linear transformation A.
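This translation picture can be reproduced numerically by combining one particular solution with null-space vectors (a sketch reusing the matrix above):

import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 3.0, -2.0], [3.0, 5.0, 6.0]])
b = np.array([5.0, 7.0])

p, *_ = np.linalg.lstsq(A, b, rcond=None)  # one particular solution p
N = null_space(A)                          # solutions of Ax = 0

x = p + 2.0 * N[:, 0]                      # p translated along the null space
print(np.allclose(A @ x, b))               # True: still solves Ax = b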

Cramer's Rule

Determinants can be used to solve a linear system of equations using Cramer's Rule.

Cramer's Rule for Two Equations in Two Variables

Given the system

a1x + b1y = c1
a2x + b2y = c2

this system has the unique solution

x = Dx / D,   y = Dy / D,

where D = a1b2 − a2b1, Dx = c1b2 − c2b1, and Dy = a1c2 − a2c1.

When solving a system of equations using Cramer's Rule, remember the following:

1. Three different determinants are used to find x and y. The determinants in the denominators are identical.
2. The elements of D, the determinant in the denominator, are the coefficients of the variables in the system: coefficients of x in the first column and coefficients of y in the second column.

3. Dx, the determinant in the numerator of x, is obtained by replacing the x-coefficients, a1 and a2, in D with the constants from the right sides of the equations, c1 and c2.
4. Dy, the determinant in the numerator for y, is obtained by replacing the y-coefficients, b1 and b2, in D with the constants from the right sides of the equations, c1 and c2.

Example. Use Cramer's Rule to solve the system:

5x − 4y = 2
6x − 5y = 1

Solution. We begin by setting up and evaluating the three determinants:

D = (5)(−5) − (−4)(6) = −25 + 24 = −1
Dx = (2)(−5) − (−4)(1) = −10 + 4 = −6
Dy = (5)(1) − (2)(6) = 5 − 12 = −7

From Cramer's Rule, we have

x = Dx / D = −6 / −1 = 6   and   y = Dy / D = −7 / −1 = 7.

The solution is (6, 7).

Cramer's Rule does not apply if D = 0. When D = 0, the system is either inconsistent or dependent, and another method must be used to solve it.

Example. Solve the system:

3x + 6y = −1
2x + 4y = 3

Solution. We begin by finding D:

D = (3)(4) − (6)(2) = 12 − 12 = 0

Since D = 0, Cramer's Rule does not apply. We will use elimination to solve the system.

3x + 6y = −1
2x + 4y = 3

Multiply both sides of equation 1 by 2, and both sides of equation 2 by −3, to eliminate x:

2(3x + 6y) = 2(−1)   →   6x + 12y = −2
−3(2x + 4y) = −3(3)   →   −6x − 12y = −9

Adding the equations and simplifying gives 0 = −11.

The false statement, 0 = −11, indicates that the system is inconsistent and has no solution.

Cramer's Rule can be generalized to systems of linear equations with more than two variables. Suppose we are given a system whose coefficient matrix has determinant D. Let Dn denote the determinant of the matrix obtained by replacing the column containing the coefficients of the variable "n" with the constants from the right sides of the equations. Then we have the following result: if D ≠ 0, a linear system of equations with variables x, y, z, . . . has a unique solution given by the formulas

x = Dx / D,   y = Dy / D,   z = Dz / D,   . . .

Example. Use Cramer's Rule to solve the system:

4x − y + z = −5
2x + 2y + 3z = 10
5x − 2y + 6z = 1

Solution. We begin by setting up four determinants. D consists of the coefficients of x, y, and z from the three equations:

D = | 4  -1   1 |
    | 2   2   3 |
    | 5  -2   6 |

Dx is obtained by replacing the x-coefficients in the first column of D with the constants from the right sides of the equations:

Dx = | -5  -1   1 |
     | 10   2   3 |
     |  1  -2   6 |

Dy is obtained by replacing the y-coefficients in the second column of D with the constants from the right sides of the equations:

Dy = | 4  -5   1 |
     | 2  10   3 |
     | 5   1   6 |

Dz is obtained by replacing the z-coefficients in the third column of D with the constants from the right sides of the equations:

Dz = | 4  -1  -5 |
     | 2   2  10 |
     | 5  -2   1 |

Next, we evaluate the four determinants:

D = 4(12 − (−6)) + 1(12 − 15) + 1(−4 − 10) = 4(18) + 1(−3) + 1(−14) = 72 − 3 − 14 = 55

Dx = −5(12 − (−6)) + 1(60 − 3) + 1(−20 − 2) = −5(18) + 1(57) + 1(−22) = −90 + 57 − 22 = −55

Dy = 4(60 − 3) + 5(12 − 15) + 1(2 − 50) = 4(57) + 5(−3) + 1(−48) = 228 − 15 − 48 = 165

Dz = 4(2 − (−20)) + 1(2 − 50) − 5(−4 − 10) = 4(22) + 1(−48) − 5(−14) = 88 − 48 + 70 = 110

Substituting these four values into the formulas from Cramer's Rule gives

x = Dx / D = −55/55 = −1,   y = Dy / D = 165/55 = 3,   z = Dz / D = 110/55 = 2

The solution is (-1, 3, 2).
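The arithmetic above is easy to double-check in NumPy (a quick sketch, not part of the original lesson):

import numpy as np

A = np.array([[4.0, -1.0, 1.0], [2.0, 2.0, 3.0], [5.0, -2.0, 6.0]])
b = np.array([-5.0, 10.0, 1.0])

print(round(np.linalg.det(A)))  # 55, matching D
print(np.linalg.solve(A, b))    # [-1.  3.  2.], matching (Dx/D, Dy/D, Dz/D)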


Matrix Row Operations (page 1 of 2)


"Operations" is mathematician-ese for "procedures". The four "basic operations" on numbers are addition, subtraction, multiplication, and division. For matrices, there are three basic row operations; that is, there are three procedures that you can do with the rows of a matrix.

The first operation is row-switching. For instance, given the matrix:

...you can switch the rows around to put the matrix into a nicer row arrangement, like this:

Row-switching is often indicated by drawing arrows, like this:

When switching rows around, be careful to copy the entries correctly.

The second operation is row multiplication. For instance, given the following matrix:

...you can multiply the first row by −1 to get a positive leading value in the first row:

This row multiplication is often indicated by using an arrow with multiplication listed on top of it, like this:

The "1R1" indicates the actual operation. The "1" says that we multiplied by negative one; the "R1" says that we were working with the first row. Note that the second and third rows were copied down, unchanged, into the second matrix. The multiplication only applied to the first row, so the entries for the other two rows were just carried along unchanged. You can multiply by anything you like. For instance, to get a leading matrix, you can multiply the third row by a negative one-half:

1 in the third row of the previous

Since you weren't doing anything with the first and second rows, those entries were just copied over unchanged into the new matrix. You can do more than one row multiplication within the same step, so you could have done the two above steps in just one step, like this:

It is a good idea to use some form of notation (such as the arrows and subscripts above) so you can keep track of your work. Matrices are very messy, especially if you're doing them by hand, and notes can make it easier to check your work later. It'll also impress your teacher.

The last row operation is row addition. Row addition is similar to the "addition" method for solving systems of linear equations. Suppose you have the following system of equations:

x + 3y = 1
−x + y = 3


You could start solving this system by adding down the columns to get 4y = 4:

You can do something similar with matrices. For instance, given the following matrix:

...you can "reduce" (get more leading zeroes in) the second row by adding the first row to it (the general goal with matrices at this stage being to get a "1" or "0's" and then a "1" at the beginning of each matrix row). When you were reducing the two-equation linear system by adding, you drew an "equals" bar across the bottom and added down. When you are using addition on a matrix, you'll need to grab some scratch paper, because you don't want to try to do the work inside the matrix. So add the two rows on your scratch paper:

Scratch work: don't hand this in!

This is your new second row; you will write it in place of the old second row. The result will look like this:

In this case, the "R1 + R2" on the arrow means "I added row one to row two, and this is the result I got". Since row one didn't actually change, and since we didn't do anything with row three, these rows get copied into the new matrix unchanged.

Matrix Row Operations: Examples (page 2 of 2)


In practice, the most common procedure is a combination of row multiplication and row addition. Thinking back to solving two-equation linear systems by addition, you most often had to multiply one row by some number before you added it to the other row. For instance, given:

...you would multiply the first row by −2 before adding it to the second row:

Using matrices, the above system looks like this:

Grab some scratch paper for the row calculation:

...and do the row operation:

The "2R1 + R2" means "I multiplied row one by 2, and then added the result to row two". Note that row one itself is actually unchanged, so it is copied over to the new matrix. Only row two was actually adjusted. Copyright Elizabeth Stapel 1999-2009 All Rights Reserved

By the way, for doing operations like this, you'll probably want to use a lot of scratch paper, so you can be careful with your calculations. Or else you'll want to figure out how to have your graphing calculator do the messy parts. For instance, my calculator can do the "−2R1 + R2" operation like this:

Check your manual for the instructions for your model of calculator.

Returning to that last matrix above, this is what the full calculation would look like (hand-in work on the left, scratch-work on the right):

Divide the second row by 7:

Remember to do the scratch-work on scratch paper, not in the margins of your homework. You would hand in the following:

The above matrix calculations correspond to solving the linear system "x − 2y = −1, 2x + 3y = 5" to get the solution "x = 1, y = 1".

It's fairly simple to learn the three matrix row operations, but actually doing the operations can be frustrating. It is amazingly easy to make little arithmetic errors that mess up all your calculations. So do your work very clearly, plainly denoting the row operations you are doing. The neater you are, the more likely you'll be to get the right answer in the end, but even if you get a wrong answer, neat work is a lot easier to check. If you are required to do the work by hand but you have a graphing calculator, ask your instructor if it's okay to do the operations in the calculator (instead of on scratch paper), just writing down the steps you took and copying down the results, because this can really cut down on the errors. Matrices are quite powerful and versatile, but good golly! are they annoying to do by hand!

Gaussian Elimination

Gaussian elimination is a method for solving matrix equations of the form

Ax = b.   (1)

To perform Gaussian elimination starting with the system of equations

a11 x1 + a12 x2 + ... + a1k xk = b1
a21 x1 + a22 x2 + ... + a2k xk = b2
. . .
ak1 x1 + ak2 x2 + ... + akk xk = bk,   (2)

compose the "augmented matrix equation"

[ a11  a12  ...  a1k | b1 ]
[ a21  a22  ...  a2k | b2 ]
[ ...  ...  ...  ... | .. ]
[ ak1  ak2  ...  akk | bk ].   (3)

Here, the column vector in the variables x is carried along for labeling the matrix rows. Now, perform elementary row operations to put the augmented matrix into the upper triangular form

[ a'11  a'12  ...  a'1k | b'1 ]
[  0    a'22  ...  a'2k | b'2 ]
[  ...   ...  ...   ... | ... ]
[  0     0   ...   a'kk | b'k ].   (4)

Solve the equation of the kth row for xk, then substitute back into the equation of the (k−1)st row to obtain a solution for x(k−1), etc., according to the formula

xi = (1/a'ii) (b'i − sum from j = i+1 to k of a'ij xj).   (5)

In Mathematica, RowReduce performs a version of Gaussian elimination, with the equation being solved by
GaussianElimination[m_?MatrixQ, v_?VectorQ] := Last /@ RowReduce[Flatten /@ Transpose[{m, v}]]
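For comparison, formula (5) translates directly into Python as a back-substitution loop (a hedged sketch; the function name and the triangular test system are mine, not MathWorld's):

import numpy as np

def back_substitute(U, c):
    """Solve Ux = c for upper triangular U, per formula (5)."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]
    return x

U = np.array([[1.0, 1.0, 1.0], [0.0, 2.0, 2.0], [0.0, 0.0, 3.0]])
c = np.array([6.0, 10.0, 9.0])
print(back_substitute(U, c))  # [1. 2. 3.]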

LU decomposition of a matrix is frequently used as part of a Gaussian elimination process for solving a matrix equation. A matrix that has undergone Gaussian elimination is said to be in echelon form. For example, consider the matrix equation

[ 9  3  4 ] [ x ]   [ 7 ]
[ 4  3  4 ] [ y ] = [ 8 ]
[ 1  1  1 ] [ z ]   [ 3 ]   (6)

In augmented form, this becomes

[ 9  3  4 | 7 ]
[ 4  3  4 | 8 ]
[ 1  1  1 | 3 ]   (7)

Switching the first and third rows (without switching the elements in the right-hand column vector) gives

[ 1  1  1 | 3 ]
[ 4  3  4 | 8 ]
[ 9  3  4 | 7 ]   (8)

Subtracting 9 times the first row from the third row gives

[ 1  1  1 |   3 ]
[ 4  3  4 |   8 ]
[ 0 -6 -5 | -20 ]   (9)

Subtracting 4 times the first row from the second row gives

[ 1  1  1 |   3 ]
[ 0 -1  0 |  -4 ]
[ 0 -6 -5 | -20 ]   (10)

Finally, adding −6 times the second row to the third row gives

[ 1  1  1 |  3 ]
[ 0 -1  0 | -4 ]
[ 0  0 -5 |  4 ]   (11)

Restoring the transformed matrix equation gives

x + y + z = 3
−y = −4
−5z = 4,   (12)

which can be solved immediately to give z = −4/5, back-substituting to obtain y = 4 (which actually follows trivially in this example), and then again back-substituting to find x = −1/5.

[1] Write the given system as an augmented matrix. Examples of this step are below, or in specialized example "b", or in specialized example "c", or in our text Rolf (Pg 88).

[system] ===> [ A | B ]

[2] Convert [ A | B ] to REDUCED FORM: pivot on matrix elements in positions 1-1, 2-2, 3-3, and so forth, as far as is possible and in that order, with the objective of creating the biggest possible identity matrix I in the left portion of the augmented matrix. If one of these pivoting elements is zero, then first interchange its row with a lower row. For all problems you will see this semester, this step [2] is equivalent to steps 1 through 7 on Pg 95-96 of Rolf, resulting in REDUCED FORM.

[ A | B ] ===> [ I | C ]

[3] When [2] is done, re-write the final matrix [ I | C ] as equations. C will be a (vertical) list of variable values which solve the system, as in the example below.

Note 1: It is possible to vary the GAUSS/JORDAN method and still arrive at correct solutions to problems. For example, the pivot elements in step [2] might be different from 1-1, 2-2, 3-3, etc. Also, it is possible to use row operations which are not strictly part of the pivoting process. Students are nevertheless encouraged to use the above steps [1], [2], [3].

Note 2: Professor McFarland names row operations a bit differently from our text; follow Professor McFarland's naming style.

Note 3: Compare the steps of G/J with those for finding matrix inverses.

The worked example (whose matrices were shown as images in the original) proceeds as follows:

1. Below is a system of equations which we will solve using G/J.
2. Below is the 1st augmented matrix: pivot on the "1" encircled in red. The row operations for the 1st pivoting are named with that step.
3. Next we pivot on the number "5" in the 2-2 position, encircled below. Using P1, we change "5" to "1"; then we perform the row operations of P2.
4. The result of the 2nd pivoting is below. Now pivot on the "-7" encircled in red in the 3-3 position: first perform P1 on "-7" to change it to "1", then perform the row operations of P2.
5. The result of the third (and last) pivoting is below, with the 3x3 identity matrix shown in blue in the left portion.
6. Step [3] of G/J: re-writing the final matrix as equations gives the solution to the original system.



Reduced Row Echelon Form


Let's take row echelon form and push it a little further into a form that is unique: reduced row echelon form.

When we were doing Gaussian elimination we saved some steps by leaving pivoted rows alone. This was fine for the purposes of solving equations, where the point is just to eliminate progressively more and more variables in each equation. We also saved steps by leaving the pivots as whatever values they started with. Both of these led to a lot of variation in the possibilities for row echelon forms. We'll tweak the method of Gaussian elimination to give a new algorithm called Gauss-Jordan elimination, which will take more steps but will remedy these problems.

First of all, when we pick a pivot and swap it into place, we scale that row by the inverse of the pivot value. This will set the pivot value itself to 1 (which, incidentally, helps with the shears that come next). Then we shear to eliminate not only the nonzero entries below the pivot, but the nonzero entries above the pivot as well. That leaves the pivot as the only nonzero entry in its column. Let's see what this looks like in terms of our usual example:

We use the entry in the upper-left as a pivot and scale the top row by its inverse.

Next, we clear out the first column with shears. Note that the shear value is now just the negative of the entry in the first column.

So far it's almost the same as before, except for normalizing the pivot. But now when we choose the entry in the second column as the pivot, we scale its row by its inverse

and use shears to clear the column above and below this pivot

Now the only choice for a pivot in the third column is the remaining nonzero entry. Again we normalize it and use shears to clear the column.

This matrix is in reduced row echelon form, and I say that it is unique. Indeed, we are not allowed to alter the basis of the input space (since that would involve elementary column operations), so we can view this as a process of creating a basis for the output space in terms of the given basis of the input space.

What we do is walk down the input basis, vector by vector. At each step, we ask if the image of this basis vector is linearly independent of the images of those that came before. If so, this image is a new basis vector. If not, then we can write the image (uniquely) in terms of the output basis vectors we've already written down. In our example above, each of the first three input basis vectors gives a new output basis vector, and the image of the last input basis vector can be written as the column vector in the last column.

The only possibility of nonuniqueness is if we run out of input basis vectors before spanning the output space. But in that case the last rows of the matrix will consist of all zeroes anyway, and so it doesn't matter what output basis vectors we choose for those rows! And so the reduced row echelon form of the matrix is uniquely determined by the input basis we started with.

September 3, 2009 - Posted by John Armstrong | Algebra, Linear Algebra | 7 Comments

7 Comments
1. [...] Linear Group Okay, so we can use elementary row operations to put any matrix into its (unique) reduced row echelon form. As we stated last time, this consists of building up a basis for the image of the transformation [...]
Pingback by Elementary Matrices Generate the General Linear Group | The Unapologetic Mathematician | September 4, 2009 | Reply

2. Do we get the same matrix in reduced row echelon form for A^T (that is, A transposed)? Will it have the same number of non-zero rows?
Comment by Kuldeep | September 7, 2009 | Reply

3. Well, you obviously can't get the same matrix, because the reduced row echelon form must be the same shape as the original matrix, and the transpose has the transposed shape. So could the reduced row echelon form of A^T be the transpose of the reduced row echelon form of A? Well, clearly this is impossible as well. Again, looking at the example above, the transpose of the reduced row echelon form is a matrix which is most definitely not the reduced row echelon form of anything. Now, what you could do is define something like the reduced column echelon form, replacing elementary row operations everywhere with elementary column operations. Then it would be the case that the reduced row echelon form of A^T is the transpose of the reduced column echelon form of A.
Comment by John Armstrong | September 7, 2009 | Reply

4. Hi, I am looking at the construction of a holomorphic atlas on a Grassmannian of a complex vector space. Essentially this involves associating a unique k x (n-k) matrix with a k x n matrix of rank k. Books seem to assume that the k x n matrix can always be transformed

to reduced row echelon form (by Gauss-Jordan elimination?) by premultiplication by a non-singular k x k matrix (presumably corresponding to elementary row operations). Is this always true, and is the transforming matrix always non-singular? (Necessary if the result represents the same k-plane.) If the k x n matrix is of rank k, does this ensure that the transformed matrix starts with the k x k identity matrix? There are perfectly good proofs that you can transform the k x n matrix in this way to have a k x k minor equal to the identity, but to change this to the RRE form would involve column operations, which would correspond to a change of basis in the initial space.
Comment by Noel Robinson | November 3, 2009 | Reply

5. Yes, that's exactly the point. The elementary row operations correspond to elementary matrices, which are nonsingular (unless you try to scale a row by zero). The result may not start with the identity matrix unless you also allow elementary column operations. For instance, a matrix can be in reduced row echelon form and have rank k, but not start with the k x k identity matrix.
Comment by John Armstrong | November 3, 2009 | Reply

6. Thank you for the response and for confirming my suspicion that a change of basis would be required. Actually, I do not think that this change of basis is necessary to construct a chart on a Grassmannian, as you can just pick out the k x (n-k) minor to determine the coordinate without needing the representative to be in the form (I|A).
Comment by Noel Robinson | November 3, 2009 | Reply

Matrix Multiplication
by Jin Tao, Mccoy Jen, and Erica Kim

Section 1: Reading a Matrix

Given a matrix A that has m rows and n columns, the entry in the ith row and jth column of A is aij, and is called the (i, j) entry of A.

[Diagram: the (i, j) entry sits at row i, column j of A.]

Section 2: Matrix addition and scalar multiplication

Two matrices can only be added if they are of equal size, meaning both matrix A and matrix B have m rows and n columns. Example:

A + C = Error, because the 3rd column of A cannot be matched to anything in C.

Addition and Scalar Multiplication Theorems: If A, B, and C are matrices of the same size, and r and s are scalars, then:

1. A + B = B + A
2. (A + B) + C = A + (B + C)
3. A + 0 = A
4. r(A + B) = rA + rB
5. (r + s)A = rA + sA
6. r(sA) = (rs)A

Section 3: Matrix multiplication

Definition: If A is an m x n matrix, and if B is an n x p matrix with columns b1, . . . , bp, then the product AB is the m x p matrix whose columns are Ab1, . . . , Abp; that is,

AB = A[b1 b2 . . . bp] = [Ab1 Ab2 . . . Abp]

Row-Column Rule for Computing AB: If the product AB is defined, then the entry in row i and column j of AB is the sum of the products of corresponding entries from row i of A and column j of B. If (AB)ij denotes the (i, j)-entry in AB, and if A is an m x n matrix, then:

(AB)ij = ai1 b1j + ai2 b2j + . . . + ain bnj
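The row-column rule is a literal triple loop. A plain-Python sketch (NumPy's @ operator computes the same thing far more efficiently):

def matmul(A, B):
    """(i, j) entry of AB = sum over k of A[i][k] * B[k][j]."""
    m, n, p = len(A), len(B), len(B[0])
    assert len(A[0]) == n, "columns of A must equal rows of B"
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]] -- AB and BA differ in general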


Matrix Multiplication Theorems: If A is a matrix with m rows and n columns, and matrices B and C have sizes for which the following sums and products are defined, then:

1. A(BC) = (AB)C
2. A(B + C) = AB + AC
3. (B + C)A = BA + CA
4. r(AB) = (rA)B = A(rB)
5. Im A = A = A In

IMPORTANT: For a matrix A to be multiplied with matrix B, the number of columns in A MUST be equal to the number of rows in matrix B. This means that:

1. AB may not always be equal to BA.
2. Cancellation laws are not true for matrix multiplication: if AB = AC, then it is not always true that B = C.
3. If the product AB is a zero matrix, it is not always true that A = 0 or B = 0.

Section 4: Matrix Powers

If A is a square matrix with dimensions n x n, then Ak is the product of k copies of A. If k = 0, then A0 is the n x n identity matrix.

Section 5: Transpose of a Matrix

If A is an m x n matrix, the transpose of A is an n x m matrix, denoted by AT, whose columns are formed by the rows of A. Example:

Transpose of Matrix Theorems:

1. (AT)T = A
2. (A + B)T = AT + BT
3. For any scalar r, (rA)T = rAT
4. (AB)T = BT AT

Section 6: Partitioned Matrices

One of the key skills in linear algebra is the ability to split up a bigger matrix into smaller subsections, also known as partitioning a matrix. For instance, a matrix A can be partitioned into subsections at the user's discretion. Here is a partitioned matrix:

These subsections make up the matrix:

Section 7: Addition and Scalar Multiplication of Partitioned Matrices

Addition works the same way with partitioned matrices. As long as the matrices are partitioned in the same manner and have the same size, the laws of addition for matrices still hold true.

Scalar multiplication on a partitioned matrix also works the same way as multiplication of a scalar on a regular matrix.

Section 8: Multiplication of Partitioned Matrices

The purpose of partitioning a matrix for multiplication is to split a bigger matrix into smaller chunks, which makes it easier to compute. For instance, take this example:

or

becomes:

and

The multiplication still works the same way, just within the partitions themselves:


The key is that when you substitute back in, entire matrices are being substituted, not just single numbers.

Substituting the equations back in gets you:

And

Which leads to:

Column-Row Expansion of AB: If A is m x n and B is n x p, then

AB = col1(A) row1(B) + col2(A) row2(B) + . . . + coln(A) rown(B)

Each term in this expansion is itself a matrix (the outer product of a column of A with a row of B); multiplying out each term and then adding up all the individual pieces gives the full answer.
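Blockwise multiplication is easy to verify numerically; a sketch with NumPy and an arbitrary 2x2 partitioning of random 4x4 matrices:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Partition each matrix into four 2x2 blocks.
A11, A12, A21, A22 = A[:2, :2], A[:2, 2:], A[2:, :2], A[2:, 2:]
B11, B12, B21, B22 = B[:2, :2], B[:2, 2:], B[2:, :2], B[2:, 2:]

# Row-column rule with blocks in place of single numbers.
top    = np.hstack([A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22])
bottom = np.hstack([A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22])
print(np.allclose(np.vstack([top, bottom]), A @ B))  # True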

(C) 2004 Jin Tao
