
TOPIC: EXPLAIN THE GAUSSIAN ELIMINATION METHOD AND GAUSS-JORDAN ELIMINATION METHOD FOR SOLVING LINEAR EQUATIONS

STUDENT’S NAME:

ROLL NO:

SECTION:

REGISTRATION NO.:

COURSE NAME: MATHEMATICS

COURSE CODE: MTH101

DATE OF SUBMISSION: 15-NOV-2010

SUBMITTED TO: Mrs. Gurpreet Kaur

ACKNOWLEDGEMENT
I take this opportunity to present my vote of thanks to all those guideposts who really acted as lighting pillars to enlighten my way throughout this project, leading to its successful and satisfactory completion.

I am highly thankful to Mrs. Gurpreet Kaur for her active support, valuable time and advice, whole-hearted guidance, sincere cooperation and painstaking involvement during the study and in completing the assignment of preparing this project within the stipulated time.

Lastly, I am thankful to all those, particularly my various friends, who have been instrumental in creating a proper, healthy and conducive environment and in contributing new and fresh innovative ideas during the project; without their help, it would have been extremely difficult for me to prepare the project in a time-bound framework.

NISHA SHARMA

TABLE OF CONTENTS
1) INTRODUCTION
   A) GAUSSIAN ELIMINATION METHOD
2) LINEAR EQUATIONS
   - EXAMPLES
3) MATRIX REPRESENTATION OF A LINEAR SYSTEM
4) GAUSSIAN ELIMINATION
   - EXAMPLES
5) GAUSSIAN ELIMINATION: STEPS INVOLVED
6) GAUSS-JORDAN METHOD
7) GAUSS-JORDAN METHOD: STEPS INVOLVED
8) DEFINITIONS
9) REFERENCES

GAUSSIAN ELIMINATION METHOD:-


The Gaussian elimination method is an exact method which solves a given system of equations in n unknowns by transforming the coefficient matrix into an upper triangular matrix and then solving for the unknowns by back substitution.

Consider a system of n equations in n unknowns:

a11x1 + a12x2 + a13x3 + ... + a1nxn = a1,n+1          ...(1)
a21x1 + a22x2 + a23x3 + ... + a2nxn = a2,n+1          ...(2)
......................................................................
an1x1 + an2x2 + an3x3 + ... + annxn = an,n+1          ...(n)

First eliminate the unknown x1 from the (n-1) equations (2), (3), ..., (n) by subtracting the multiple ai1/a11 of the first equation from the ith equation, for i = 2, 3, ..., n. Then eliminate x2 from the last (n-2) equations of the resulting system in the same way, and so on. The system is finally reduced to the upper triangular form

a11x1 + a12x2 + a13x3 + ... + a1nxn = a1,n+1
        a22(1)x2 + a23(1)x3 + ... + a2n(1)xn = a2,n+1(1)
                a33(2)x3 + ... + a3n(2)xn = a3,n+1(2)
......................................................................
                        ann(n-1)xn = an,n+1(n-1)

where a superscript (k) marks a coefficient that has been modified during the kth elimination step. The unknowns xn, xn-1, ..., x1 are then found by back substitution.
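As a rough illustration of this procedure, here is a minimal Python sketch (my own, assuming NumPy is available and that every pivot met along the way is non-zero, so no row interchanges are needed); the function name gaussian_eliminate is chosen only for this example.

import numpy as np

def gaussian_eliminate(aug):
    """Solve a system given its n x (n+1) augmented matrix.

    A minimal sketch of the method described above: forward
    elimination to upper triangular form, then back substitution.
    Assumes every pivot encountered is non-zero (no row swaps).
    """
    a = aug.astype(float).copy()
    n = a.shape[0]

    # Forward elimination: create zeros below each pivot a[k, k].
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = a[i, k] / a[k, k]
            a[i, k:] -= factor * a[k, k:]

    # Back substitution: solve for xn, x(n-1), ..., x1.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (a[i, n] - a[i, i + 1:n] @ x[i + 1:n]) / a[i, i]
    return x

# A 3 x 3 system (the one used in Example-1 later in this report).
aug = np.array([[2, 1, -1, 8],
                [-3, -1, 2, -11],
                [-2, 1, 2, -3]])
print(gaussian_eliminate(aug))   # [ 2.  3. -1.]

Row interchanges, which the general method needs when a pivot turns out to be zero, are deliberately left out here to keep the sketch close to the formulas above.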

Linear equations

Definition :- The equation ax+by+cz+dw=h

Where a, b, c, d, and h are known numbers, while x, y, z, and w are unknown


numbers, is called a linear equation. If h = 0, the linear equation is said to be homogeneous. A linear system is a set of linear equations, and a homogeneous linear system is a set of homogeneous linear equations. For example,
2x -3y = 1
x + 3y = 2
And
x+y–z=1
x + 3y + 3z = -2
Are linear systems, while
2x – 3y² = -1
x + y + z = 1
is a nonlinear system (because of the y² term). The system
2x -3y – 3z + w =0
x + 3y =0
x–y + w =0
is a homogeneous linear system.

Matrix Representation of a Linear System

Matrices are helpful in rewriting a linear system in a very simple form. The algebraic
properties of matrices may then be used to solve systems. First, consider the linear
system

ax + by + cz + dw =e
fx + gy + hz + iw =j
kx + ly + mz + nw =p
qx + ry + sz + tw =u

Set the matrices

A = the 4 × 4 matrix of coefficients (first row a, b, c, d; second row f, g, h, i; third row k, l, m, n; fourth row q, r, s, t), X = the column matrix of unknowns (x, y, z, w), and C = the column matrix of right-hand sides (e, j, p, u).

Using matrix multiplications, we can rewrite the linear system above as the matrix
equation

A.X =C

The matrix A is called the matrix coefficient of the linear system. The matrix C is called the nonhomogeneous term. When C = 0, the linear system is homogeneous. The matrix X is the unknown matrix; its entries are the unknowns of the linear system. The augmented matrix associated with the system is the matrix [A|C].

In general, if the linear system has n equations with m unknowns, then the coefficient matrix will be an n × m matrix and the augmented matrix an n × (m+1) matrix.
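To make this notation concrete, the small sketch below (my own, with hypothetical numeric values standing in for the coefficients a, b, ..., u, and assuming NumPy) assembles A, C and the augmented matrix [A|C]:

import numpy as np

# Hypothetical coefficients for the four equations above.
A = np.array([[1.0, 2.0, -1.0, 3.0],    # a, b, c, d
              [2.0, 0.0, 4.0, -1.0],    # f, g, h, i
              [0.0, 1.0, 1.0, 2.0],     # k, l, m, n
              [3.0, -2.0, 0.0, 1.0]])   # q, r, s, t
C = np.array([[5.0], [1.0], [0.0], [2.0]])   # e, j, p, u

# The augmented matrix [A|C]: an n x (m+1) matrix, here 4 x 5.
augmented = np.hstack([A, C])
print(augmented.shape)   # (4, 5)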

Definition. Two linear systems with n unknowns are said to be equivalent if and only
if they have the same set of solutions.
This definition is important since the idea behind solving a system is to find an
equivalent system which is easy to solve. Indeed, it is clear that if we interchange two
equations, the new system is still equivalent to the old one. If we multiply an equation by a nonzero number, we obtain a new system still equivalent to the old one. And finally, replacing one equation with the sum of two equations, we again obtain an equivalent system. These operations are called elementary operations on systems.
Let us see how it works in a particular case.

Example-1

Suppose the goal is to find and describe the solution(s), if any, of the following
system of linear equations:

2x + y – z = 8 (L1)
- 3x – y +2z = -11 (L2)
- 2x + y + 2z = -3 (L3)

The algorithm is as follows: eliminate x from all equations below L1, and then
eliminate y from all equations below L2. This will put the system into triangular form.
Then, using back-substitution, each unknown can be solved for.

In the example, x is eliminated from L2 by adding (3/2)L1 to L2. x is then eliminated from L3 by adding L1 to L3. Formally:

L2 + (3/2)L1 → L2

L3 + L1 → L3

The result is:

2x + y – z = 8
½y+½z=1
2y + z = 5

Now y is eliminated from L3 by adding − 4L2 to L3:


L3 + (-4L2) l3

The result is:

2x + y – z = 8
½y+½z=1
-z =1

This result is a system of linear equations in triangular form, and so the first part of
the algorithm is complete.
The last part, back-substitution, consists of solving for the unknowns in reverse order. It can thus be seen that

z = -1
Then, z can be substituted into L2, which can then be solved to obtain
y=3 (L2)
Next, z and y can be substituted into L1, which can be solved to obtain
x=2 (L1)

The system is solved.
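As a quick check of this result (my own addition, assuming NumPy), the built-in solver returns the same values:

import numpy as np

# Coefficient matrix and right-hand side of L1, L2, L3.
A = np.array([[2.0, 1.0, -1.0],
              [-3.0, -1.0, 2.0],
              [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])

print(np.linalg.solve(A, b))   # [ 2.  3. -1.]  i.e. x = 2, y = 3, z = -1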

Some systems cannot be reduced to triangular form, yet still have at least one valid

solution: for example, if y had not occurred in L2 and L3 after the first step above, the
algorithm would have been unable to reduce the system to triangular form. However,
it would still have reduced the system to echelon form. In this case, the system does
not have a unique solution, as it contains at least one free variable. The solution set
can then be expressed parametrically (that is, in terms of the free variables, so that if
values for the free variables are chosen, a solution will be generated).

In practice, one does not usually deal with the systems in terms of equations but
instead makes use of the augmented matrix (which is also suitable for computer
manipulations). For example:

2x+y–z =8 (L1)
- 3x - y +2z = -11 (L2)
- 2x + y + 2z = -3 (L3)

Therefore, the Gaussian elimination algorithm applied to the augmented matrix begins with:

[  2   1  -1 |   8 ]
[ -3  -1   2 | -11 ]
[ -2   1   2 |  -3 ]

which, at the end of the first part of the algorithm (Gaussian elimination, zeros only below the leading coefficients), looks like this:

[  2    1    -1  |  8 ]
[  0   1/2   1/2 |  1 ]
[  0    0    -1  |  1 ]

That is, it is in row echelon form.

At the end of the algorithm, if Gauss–Jordan elimination (zeros below and above the leading 1) is applied:

[  1   0   0 |  2 ]
[  0   1   0 |  3 ]
[  0   0   1 | -1 ]

That is, it is in reduced row echelon form, or row canonical form.

Example-2 . Consider the linear system

x+y+z=0
x – 2y + 2z = 4
x + 2y – z = 2

The idea is to keep the first equation and work on the last two. In doing that, we will
try to kill one of the unknowns and solve for the other two. For example, if we keep
the first and second equation, and subtract the first one from the last one, we get the
equivalent system
x + y + z = 0
x – 2y + 2z = 4
y – 2z = 2

Next we keep the first and the last equation, and we subtract the first from the second.
We get the equivalent system

x+y+z=0
– 3y + z = 4
y - 2z = 2
Now we focus on the second and the third equation. We repeat the same procedure.
Try to kill one of the two unknowns (y or z). Indeed, we keep the first and second
equation, and we add the second to the third after multiplying it by 3. We get
x+y+z=0
– 3y + z = 4
-5z = 10
This obviously implies z = -2. From the second equation, we get y = -2, and finally
from the first equation we get x = 4. Therefore the linear system has one solution
x = 4 , y = -2 , z = -2

Going from the last equation to the first while solving for the unknowns is called backsolving.
Linear systems for which the matrix coefficient is upper-triangular are easy to solve.
This is particularly true, if the matrix is in echelon form. So the trick is to perform
elementary operations to transform the initial linear system into another one for which
the coefficient matrix is in echelon form.
Consider the augmented matrix

[ 1   1   1 | 0 ]
[ 1  -2   2 | 4 ]
[ 1   2  -1 | 2 ]

Let us perform some elementary row operations on this matrix. Indeed, if we keep the first and second rows, and subtract the first one from the last one, we get

[ 1   1   1 | 0 ]
[ 1  -2   2 | 4 ]
[ 0   1  -2 | 2 ]

Next we keep the first and the last rows, and we subtract the first from the second. We get

[ 1   1   1 | 0 ]
[ 0  -3   1 | 4 ]
[ 0   1  -2 | 2 ]

Then we keep the first and second rows, and we add the second to the third after multiplying the third by 3, to get

[ 1   1   1 |  0 ]
[ 0  -3   1 |  4 ]
[ 0   0  -5 | 10 ]

This is a triangular matrix. The linear system for which this matrix is the augmented one is

x+y+z=0
– 3y + z = 4
-5z = 10

In every step the new matrix was exactly the augmented matrix associated to the new
system. This shows that instead of writing the systems over and over again, it is easy
to play around with the elementary row operations and once we obtain a triangular
matrix, write the associated linear system and then solve it. This is known as

Gaussian Elimination
Consider a linear system.

1. Construct the augmented matrix for the system;


2. Use elementary row operations to transform the augmented matrix into a triangular
one;
3. Write down the new linear system for which the triangular matrix is the associated
augmented matrix;
4. Solve the new system. You may need to assign some parametric values to some
unknowns, and then apply the method of back substitution to solve the new system.
Example. Solve the following system via Gaussian elimination

The augmented matrix is


We use elementary row operations to transform this matrix into a triangular one. We
keep the first row and use it to produce all zeros elsewhere in the first column. We
have

Next we keep the first and second row and try to have zeros in the second column. We
get

Next we keep the first three rows. We add the last one to the third to get

This is a triangular matrix. Its associated system is

2x – 3y –z + 2w + 3v = 4
2y + z + 5v = -4
v=1

Clearly we have v = 1. Set z = s and w = t. Then the second equation gives

y = -9/2 - (1/2)s

The first equation implies

x = 2 + (3/2)y + (1/2)z - w - (3/2)v

Using algebraic manipulations, we get

x = -25/4 - (1/4)s - t

Putting all the stuff together, we have

x = -25/4 - (1/4)s - t,   y = -9/2 - (1/2)s,   z = s,   w = t,   v = 1,

where s and t are arbitrary parameters.
Example. Use Gaussian elimination to solve the linear system

x–y=4
2x – 2y = -4

The associated augmented matrix is

[ 1  -1 |  4 ]
[ 2  -2 | -4 ]

We keep the first row and subtract the first row multiplied by 2 from the second row. We get

[ 1  -1 |   4 ]
[ 0   0 | -12 ]
This is a triangular matrix. The associated system is

x–y=4
0 = -12

Clearly the second equation implies that this system has no solution. Therefore this
linear system has no solution.

Definition. A linear system is called inconsistent if it does not have a solution. In


other words, the set of solutions is empty. Otherwise the linear system is called
consistent.
Following the example above, we see that if we perform elementary row operations on the augmented matrix of the system and get a matrix with one of its rows equal to

( 0  0  ...  0 | c ),  where c ≠ 0,

then the system is inconsistent.
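A small sketch (my own, assuming NumPy and a simple numerical tolerance) of how such a row can be detected in a reduced augmented matrix:

import numpy as np

def has_inconsistent_row(aug, tol=1e-12):
    """Return True if some row of [A|b] reads 0 = c with c != 0,
    which means the system is inconsistent."""
    coeffs, rhs = aug[:, :-1], aug[:, -1]
    zero_coeff_rows = np.all(np.abs(coeffs) < tol, axis=1)
    return bool(np.any(zero_coeff_rows & (np.abs(rhs) > tol)))

# The reduced matrix from the example above: x - y = 4, 0 = -12.
U = np.array([[1.0, -1.0, 4.0],
              [0.0, 0.0, -12.0]])
print(has_inconsistent_row(U))   # True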

GAUSSIAN ELIMINATION:

STEPS IN GAUSSIAN ELIMINATION

The steps in Gaussian elimination can be summarized as follows:


Stage 1: (Forward Elimination Phase)
1. Search the first column of [A|b] from the top to the bottom for the first non-zero entry, then, if necessary, the second column (the case where all the coefficients corresponding to the first variable are zero), then the third column, and so on. The entry thus found is called the current pivot.
2. Interchange, if necessary, the row containing the current pivot with the first row.
3. Keeping the row containing the pivot (that is, the first row) untouched, subtract appropriate multiples of the first row from all the other rows to obtain all zeros below the current pivot in its column.
4. Repeat the preceding steps on the submatrix consisting of all those elements which are below and to the right of the current pivot.
5. Stop when no further pivot can be found.

A minimal code sketch of this phase is given below.
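Here is a minimal Python sketch of this forward elimination phase (my own, assuming NumPy; for numerical safety it picks the largest entry of the working column as the pivot rather than literally the first non-zero one, which still respects the interchange step). It returns the row echelon matrix together with the number r of pivots, which is used in the discussion that follows.

import numpy as np

def row_echelon(aug, tol=1e-12):
    """Reduce the augmented matrix [A|b] to row echelon form.

    Searches each column for a pivot, swaps it up to the top of the
    working submatrix, clears the entries below it, and moves on.
    Returns the echelon matrix and the number r of pivots found.
    """
    a = aug.astype(float).copy()
    rows, cols = a.shape
    r = 0                                   # index of the next pivot row
    for col in range(cols - 1):             # the last column is the RHS
        pivot = r + np.argmax(np.abs(a[r:, col]))
        if abs(a[pivot, col]) < tol:
            continue                        # no pivot in this column
        a[[r, pivot]] = a[[pivot, r]]       # interchange rows
        for i in range(r + 1, rows):        # zeros below the pivot
            a[i] -= (a[i, col] / a[r, col]) * a[r]
        r += 1
        if r == rows:
            break                           # no further pivot can be found
    return a, r

U, r = row_echelon(np.array([[2, 1, -1, 8],
                             [-3, -1, 2, -11],
                             [-2, 1, 2, -3]]))
print(r)   # 3 pivots: a consistent system with a unique solution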

Remark: The forward elimination phase of the Gauss elimination method leads to the "row echelon form" of a matrix, which can be defined as follows: a matrix is said to be in row echelon form (or to be a row echelon matrix) if it has a staircase-like pattern characterized by the following properties:
(a) The all-zero rows (if any) are at the bottom.
(b) If we call the left-most non-zero entry of a non-zero row its leading entry, then the leading entry of each non-zero row is to the right of the leading entry of the preceding row.
In the schematic row echelon matrix [U|c] produced by this phase, the entries denoted by ∗ and the ci's are real numbers; they may or may not be zero. The pi's denote the pivots; they are non-zero. Note that there is exactly one pivot in each of the first r rows of U and that any column of U has at most one pivot. Hence r ≤ m and r ≤ n.
The number r of non-zero rows and the real numbers cr+1, . . . , cm, if any, appearing in the row echelon matrix [U|c] obtained by the forward elimination phase of the Gaussian elimination algorithm applied to the matrix A determine the character of the solutions of the system Ax = b.
1: If r < m (the number of non-zero rows is less than the number of equations) and cr+k ≠ 0 for some k ≥ 1, then the (r + k)th row corresponds to the self-contradictory equation 0 = cr+k and so the system has no solutions (inconsistent system).
2: If (i) r = m, or (ii) r < m and cr+k = 0 for all k ≥ 1, then there exists a solution of the system (consistent system). If the jth column of U contains a pivot, then xj is called a basic variable; otherwise xj is called a free variable. In fact, there are n − r free variables, where n is the number of columns of A (and hence of U).
We can summarize these results as follows: Consider the m × n linear system Ax =
b of m equations in n variables. Suppose the matrix [A|b] is reduced by the Gauss
elimination algorithm to the row echelon matrix [U|c] with r non-zero rows in the part
U corresponding to A.
------ If r = m, that is, U has no zero rows, then the system Ax = b is consistent.

------ If r < m, then the system is consistent if and only if cr+k = 0 for all k ≥ 1.

------For a consistent system,


if r = n, then there is a unique solution and
if r < n, then there is an infinite set of solutions (with n − r parameters).

Stage 2: (Back Substitution Phase)

In the case of a consistent system, if xj is a free variable, then it can be set equal to a parameter sj which can assume arbitrary values. If xj is a basic variable, then we solve for xj in terms of xj+1, . . . , xn, starting from the last basic variable and working our way up row by row.
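Since the number r of non-zero rows of U equals the rank of A, this classification can also be sketched in code (my own sketch, assuming NumPy; matrix_rank here plays the role of counting the pivots):

import numpy as np

def classify_system(A, b):
    """Classify Ax = b along the lines of the r, m, n discussion above."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    n = A.shape[1]                                    # number of unknowns
    r = np.linalg.matrix_rank(A)                      # rank of A (number of pivots)
    r_aug = np.linalg.matrix_rank(np.hstack([A, b]))  # rank of [A|b]

    if r < r_aug:
        return "inconsistent"
    if r == n:
        return "unique solution"
    return f"consistent, {n - r} free variable(s)"

print(classify_system([[1, -1], [2, -2]], [4, 8]))    # consistent, 1 free variable(s)
print(classify_system([[1, -1], [2, -2]], [4, -4]))   # inconsistent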
Gauss-Jordan Elimination:
Gauss-Jordan Elimination is a method for solving a linear system of equations. The
method is named after Carl Friedrich Gauss and Wilhelm Jordan.
Gauss-Jordan Elimination is a variant of Gaussian Elimination. The system
represented by the new augmented matrix has the same solution set as the original
system of linear equations. In Gauss-Jordan Elimination, the goal is to transform the
coefficient matrix into a diagonal matrix, and the zeros are introduced into the matrix
one column at a time. Eliminate the elements both above and below the diagonal
element of a given column in one pass through the matrix.
The general procedure for Gauss-Jordan Elimination can be summarized in the
following steps:

Gauss-Jordan Elimination Steps

1. Write the augmented matrix for the system of linear equations.


2. Use elementary row operations on the augmented matrix [A|b] to transform A into
diagonal form. If a zero is located on the diagonal, switch the rows until a nonzero is
in that place. If you are unable to do so, stop; the system has either infinite or no
solutions.
3. By dividing the diagonal element and the right-hand-side element in each row by
the diagonal element in that row, make each diagonal element equal to one.
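The following Python sketch (my own, assuming NumPy and a square system with a unique solution) follows these three steps; it is only an illustration, not a robust implementation. The system of Example 2 further below is used as a test.

import numpy as np

def gauss_jordan(aug):
    """Gauss-Jordan sketch for an n x (n+1) augmented matrix [A|b].

    Eliminates the entries below and above each pivot, swapping rows
    when a zero pivot is met (step 2), then scales each row so the
    diagonal element becomes one (step 3).
    """
    a = aug.astype(float).copy()
    n = a.shape[0]
    for k in range(n):
        # Step 2: if the diagonal element is zero, switch rows.
        if abs(a[k, k]) < 1e-12:
            swap = k + np.argmax(np.abs(a[k:, k]))
            if abs(a[swap, k]) < 1e-12:
                raise ValueError("no unique solution")
            a[[k, swap]] = a[[swap, k]]
        # Eliminate the pivot column everywhere except in row k.
        for i in range(n):
            if i != k:
                a[i] -= (a[i, k] / a[k, k]) * a[k]
        # Step 3: make the diagonal element equal to one.
        a[k] /= a[k, k]
    return a[:, -1]   # the solution column

aug = np.array([[1, -2, 4, 12],
                [2, -1, 5, 18],
                [-1, 3, -3, -8]])
print(gauss_jordan(aug))   # [2. 1. 3.]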

Consider the following system of linear equations:

x1 - x2 + 2x3 = 3

2x1 - 2x2 + 5x3 = 4

x1 + 2x2 - x3 = -3

2x2 + 2x3 = 1

To use Gauss-Jordan Elimination, we start by representing the given system of linear


equations as an augmented matrix.
Definition 1: Augmented Matrix Form

An augmented matrix is a matrix representation of a system of linear equations in which each row of the matrix contains the coefficients of the corresponding equation together with that equation's right-hand side.

Example 1:

The following system of linear equations:

x1 - x2 + 2x3 = 3

2x1 - 2x2 + 5x3 = 4

x1 + 2x2 - x3 = -3

2x2 + 2x3 = 1
would be represented as:

[ 1  -1   2 |  3 ]
[ 2  -2   5 |  4 ]
[ 1   2  -1 | -3 ]
[ 0   2   2 |  1 ]
The goal of Gauss-Jordan elimination is to transform the matrix into reduced echelon
form.

Definition 2: Reduced Echelon Form

A matrix is said to be in reduced echelon form if:

(1) Any rows consisting entirely of zeros are grouped at the bottom of the matrix.

(2) The first nonzero element of any row is 1. This element is called a leading 1.

(3) The leading 1 of each row after the first is positioned to the right of the leading 1
of the previous row.

(4) If a column contains a leading 1, then all other elements in that column are 0.

Here are some examples of matrices that are not in reduced echelon form.

The matrix below violates (1): the row consisting entirely of 0's is not at the bottom.

[ 1  0  2 ]
[ 0  0  0 ]
[ 0  1  3 ]

The matrix below violates (2): in row 2, the first nonzero element is not a 1.

[ 1  0  2 ]
[ 0  3  1 ]
[ 0  0  0 ]

The matrix below violates (3): in row 3, the leading 1 is not to the right of the leading 1 in row 2.

[ 1  0  0 ]
[ 0  0  1 ]
[ 0  1  0 ]

The matrix below violates (4): the column associated with the leading 1 in row 2 contains a nonzero value.

[ 1  2  0 ]
[ 0  1  3 ]
[ 0  0  0 ]

Below are some examples of matrices that are in reduced echelon form:

[ 1  0  0 ]      [ 1  0  2 ]
[ 0  1  0 ]      [ 0  1  3 ]
[ 0  0  1 ]      [ 0  0  0 ]
Next, we need to use elementary row operations to bring the matrix into reduced echelon form. Elementary row operations are operations under which the resulting matrix represents the same system of linear equations; in other words, the two matrices are row equivalent.

Definition 3: Row Equivalence (Equivalent Systems)

Two matrices are row equivalent if they represent the same system of linear
equations. They are also called equivalent systems.

The following three elementary operations preserve row equivalence. That is, the
resulting matrix is row equivalent to the matrix before the operation.

(1) Interchanging two rows


(2) Multiplying the elements of a row by a nonzero constant
(3) Adding the elements of one row to the corresponding elements of another row.

(1) is clear: changing the order of the linear equations does not change the values that solve them. (2) is equivalent to multiplying both sides of an equation by the same nonzero value. (3) is the idea that two linear equations imply that their sum is also a valid linear equation. In actual practice, we will combine (2) and (3) to get: (4) Add a multiple of the elements of one row to the corresponding elements of another row.

So, the following elementary operations result in a matrix that is row equivalent to the
previous matrix:

Definition 4: Elementary Operations

(1) Row Exchange: The exchange of two rows.


(2) Row Scaling: Multiply the elements of a row by a nonzero constant.
(3) Row Replacement: Add a multiple of the elements of one row to the
corresponding elements of another row.
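For illustration (my own sketch, assuming NumPy and an arbitrarily chosen matrix), these elementary operations translate directly into array operations:

import numpy as np

M = np.array([[2.0, 1.0, -1.0],
              [0.0, 3.0, 6.0],
              [4.0, -2.0, 2.0]])

M[[0, 2]] = M[[2, 0]]      # (1) Row exchange: swap row 1 and row 3
M[1] *= 1 / 3              # (2) Row scaling: multiply row 2 by 1/3
M[2] += -0.5 * M[0]        # (3) Row replacement: add (-1/2) x row 1 to row 3
print(M)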

Once the matrix is in reduced echelon form, we can convert the matrix back into a
linear system of equations to see the solution.
Here's the full algorithm:

Definition 5: Gauss-Jordan Elimination

(1) Represent the linear system of equations as a matrix in augmented matrix form

(2) Use elementary row operations to derive a matrix in reduced echelon form

(3) Write the system of linear equations corresponding to the reduced echelon form.

Here's an example:

Example 2: Using Gauss-Jordan Elimination

(1) We start with a system of linear equations:

x1 - 2x2 + 4x3 = 12

2x1 - x2 + 5x3 = 18

-x1 + 3x2 - 3x3 = -8

(2) We represent them in augmented matrix form:

[  1  -2   4 |  12 ]
[  2  -1   5 |  18 ]
[ -1   3  -3 |  -8 ]
(3) We use elementary row operations to put this matrix into reduced echelon form (I
will use R1, R2, R3 for Row 1, Row 2, and Row 3):

1. Let R2 = R2 + (-2)R1
2. Let R3 = R3 + R1
3. Let R2 = (1/3)R2
4. Let R1 = R1 + 2R2
5. Let R3 = R3 + (-1)R2
6. Let R3 = (1/2)R3
7. Let R1 = R1 + (-2)R3
8. Let R2 = R2 + R3

(4) We now write down the system of linear equations corresponding to the reduced
echelon form:
x1 = 2
x2 = 1
x3 = 3
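Replaying the eight row operations listed above in code (my own check, assuming NumPy) reproduces this result:

import numpy as np

# Augmented matrix of Example 2, followed by the eight operations above.
M = np.array([[1.0, -2.0, 4.0, 12.0],
              [2.0, -1.0, 5.0, 18.0],
              [-1.0, 3.0, -3.0, -8.0]])
R1, R2, R3 = 0, 1, 2          # row indices

M[R2] += -2 * M[R1]   # 1. R2 = R2 + (-2)R1
M[R3] += M[R1]        # 2. R3 = R3 + R1
M[R2] *= 1 / 3        # 3. R2 = (1/3)R2
M[R1] += 2 * M[R2]    # 4. R1 = R1 + 2R2
M[R3] += -1 * M[R2]   # 5. R3 = R3 + (-1)R2
M[R3] *= 1 / 2        # 6. R3 = (1/2)R3
M[R1] += -2 * M[R3]   # 7. R1 = R1 + (-2)R3
M[R2] += M[R3]        # 8. R2 = R2 + R3

print(M)   # the last column reads x1 = 2, x2 = 1, x3 = 3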
Of course, we need to be sure that we can always get to reduced echelon form. Here is
the theorem that guarantees this:

Theorem 1: Every matrix is row equivalent to a reduced echelon form matrix

Example 3:

We will apply Gauss-Jordan Elimination to the same example that was used to
demonstrate Gaussian Elimination. Remember, in Gauss-Jordan Elimination we want
to introduce zeros both below and above the diagonal.

1. Write the augmented matrix for the system of linear equations.

As before, we use the symbol → to indicate that the matrix preceding the arrow is being changed by the specified operation; the matrix following the arrow displays the result of that change.

2. Use elementary row operations on the augmented matrix [A|b] to transform A into
diagonal form.
At this point we have a diagonal coefficient matrix. The final step in Gauss-Jordan
Elimination is to make each diagonal element equal to one. To do this, we divide each
row of the augmented matrix by the diagonal element in that row.

3. By dividing the diagonal element and the right-hand-side element in each row by the diagonal element in that row, make each diagonal element equal to one.

Hence,

Our solution is simply the right-hand side of the augmented matrix. Notice that the
coefficient matrix is now a diagonal matrix with ones on the diagonal. This is a
special matrix called the identity matrix.

REFERENCES:
1. www.google.com

2. www.sosmath.com

3. www.mathworld.com

4. www.blogspot.com

5. www.wikipedia.com
