
NUMERICAL SOLUTIONS OF LINEAR ALGEBRAIC SYSTEMS USING MATLAB

BY

ODEH FRIDAY ODEH REG. NO. : 04153024

BEING A PROJECT SUBMITTED TO THE MATHEMATICS, STATISTICS AND COMPUTER SCIENCE DEPARTMENT, UNIVERSITY OF ABUJA, IN PARTIAL FULFILLMENT OF THE REQUIREMENT FOR THE AWARD OF BACHELOR OF SCIENCE (BSc.) DEGREE IN MATHEMATICS.

NOVEMBER 2008

DEDICATION
I dedicate this work (the first of its kind by me) to GOD, my Creator. Also, to my parents, Mr. & Mrs. S. U. ODEH, my sisters, and my one and only late Brother Emmanuel.

ACKNOWLEDGEMENT
My profound gratitude goes to God, whose report I have always believed, for giving me grace to complete this race. Also, to my parents, Mr. & Mrs. Odeh, who have also made this race possible by their words of advice. Also, to my wonderful sisters, Aunty Mercy (big sis), Sister Joy (#2), Alice (babydoll), and Esther (Agbani), thanks for the motivation and inspiration. The merit from this research is due to the information I tapped from the fountain of one of our own scholars, my humble supervisor, a professor in the making, Dr. F. O. Ogunfiditimi. I am grateful to you for your time and energy; despite your tight schedule, you still took out time for me. Thank you Sir! Also, many thanks to the men of the department, Professor M. O. Adewale and Dr. E. O. Ukpong, for always making us keep our heads up. Also, to all the lecturers in the department. My appreciation also goes to my vision helpers, Rev. (Dr.) and Mrs. F. Ugbong, for their prayers. My crew, Bulus Sidi, George Ehiagwina, Kefas Bako, John Ukhai, Gwaz Akut, Peterson Okoro, Efe Bestman, Emeka Okorie, thanks for the brotherly love. Adeoye Oreoluwa David (LasB), Odion Irene (the only female friend I have in the department), Igwe Chima, Ade (Class rep), how could I have survived without you guys in the department. Thanks to my teams (MACOSSA, University of Abuja Basketball Team, ELOHIM Choir & Instrumentalists, ELOHIM FYB 2008); you brought out the leader in me by believing in me. To Jigga Banty, Gene, A. Y., keep the music playing. Gift, Gloria, Joy, Esther, Aderonke, Ibimina, thank you for always being there. To my guy, Saleh Agyo B., four years and still kicking, big up bro! Last but not the least, words cannot express you, I LOVE YOU Seyi Soyemi! Finally, to my roommates, housemates, street mates, course mates, department mates, and all whom I cannot mention, I appreciate you all.

ABSTRACT
This project is not a comprehensive introduction to MATLAB. Instead, it focuses on the specific features of MATLAB that are useful for analyzing linear equations. The two basic goals of this work are: first, to deepen students' understanding of linear equations by giving them a new tool, a mathematical software system, for analyzing linear equations; second, to bring students to a level of expertise in the mathematical software system that would allow them to use it in other mathematics, engineering, or science courses. The emphasis of this research is to show how the solution of Ax = b is arrived at using MATLAB, where A is the coefficient matrix, x is the vector of unknowns, and b is the right-hand-side vector.

TABLE OF CONTENTS
TITLE PAGE
CERTIFICATION PAGE
DEDICATION PAGE
ACKNOWLEDGEMENT PAGE
ABSTRACT PAGE

CHAPTER ONE: INTRODUCTION
1.1 INTRODUCTION TO LINEAR ALGEBRAIC SYSTEMS
1.2 OVERVIEW OF NUMERICAL SOLUTIONS
1.3 MATLAB
1.4 DEFINITION OF TERMS
1.4.1 REVIEW OF VECTOR, SCALAR AND MATRICES
1.4.2 MATRIX OPERATION
1.4.3 DETERMINANT
1.4.4 MATRIX TRANSPOSE, MATRIX INVERSE, AND PSEUDO-INVERSE
1.4.5 VECTOR AND MATRIX NORMS
1.4.6 EIGENVALUE
1.4.7 STABILITY AND CONDITIONING
1.4.8 ERRORS
1.5 SCOPE OF STUDY
1.6 LIMITATIONS
1.7 METHODOLOGY

CHAPTER TWO: LITERATURE REVIEW
2.1 HISTORICAL BACKGROUND OF LINEAR ALGEBRA
2.2 MATRIX THEORY
2.3 MATLAB HISTORY

CHAPTER THREE: METHODOLOGY
3.1 INTRODUCTION
3.2 FACTORIZATIONS (DECOMPOSITION)
3.2.1 LU FACTORIZATION
3.2.2 CHOLESKY FACTORIZATION
3.3 ITERATIVE METHOD
3.3.1 GAUSS-SEIDEL ITERATION METHOD
3.4 MATRIX EIGENVALUE PROBLEM
3.5 ERROR AND ACCURACY

CHAPTER FOUR: APPLICATIONS
4.1 APPLICATIONS OF LINEAR ALGEBRAIC SYSTEMS
4.1.1 ECONOMICS
4.1.2 MARKOV CHAINS
4.1.3 ELECTRICAL NETWORK

CHAPTER FIVE: SUMMARY, CONCLUSION, AND RECOMMENDATIONS
5.1 SUMMARY
5.2 CONCLUSION
5.3 RECOMMENDATIONS

REFERENCES

CHAPTER ONE INTRODUCTION

1.1 INTRODUCTION TO LINEAR ALGEBRAIC SYSTEMS
Linear algebraic systems can be said to be sets of linear equations. A linear equation is an equation in which each unknown appears only to the first power, with no products or powers of the unknowns. Therefore, a linear system, or system of linear equations, is a collection of linear equations involving the same set of variables. Linear systems are always of the form

a11 x1 + a12 x2 + . . . + a1n xn = b1
a21 x1 + a22 x2 + . . . + a2n xn = b2
.
.
am1 x1 + am2 x2 + . . . + amn xn = bm

This is a general system of m linear equations in n unknowns. The system can also be expressed in the more compact form

ai1 x1 + ai2 x2 + . . . + ain xn = bi,   i = 1, 2, . . . , m,

or we use the matrix equation

 a11  a12 . . . a1n     x1     b1
 a21  a22 . . . a2n     x2     b2
  .    .         .       .  =   .
 am1  am2 . . . amn     xn     bm

which may be written as AX = B, where A = (aij), X = (xj), and B = (bi). A is called the coefficient matrix and X the matrix (vector) of variables. In the double subscript notation, the first subscript denotes the row and the second the column in which the entry stands; thus a22 is the entry in the second row and second column. Therefore, in order to solve a set of equations, we express the equations in matrix form. A system of equations that has a common solution is said to be consistent, that is, there exist values of the variables satisfying every equation at once. When no such common solution exists, the system is inconsistent.
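As a brief illustration (a minimal sketch; this small 2-by-2 system is only an assumed example), such a system can be entered and solved in the MATLAB environment as follows:

% assumed example system:  2x1 + 3x2 = 8,  x1 - x2 = -1
A = [2 3; 1 -1];   % coefficient matrix
b = [8; -1];       % right-hand side column vector
x = A\b            % the backslash operator solves Ax = b, giving x = [1; 2]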

1.2 OVERVIEW OF NUMERICAL SOLUTIONS
Numerical solution involves the development and evaluation of methods for solving a mathematical problem numerically, rather than by the methods of algebraic analysis and theorems. It involves computing numerical results from given mathematical data; the data are known as the input, while the obtained result is the output. Recent trends emphasize the use of computers to carry out the computation rather than manual processes. The increasing power of computer systems and of scientific computing is a big boost to the use and development of numerical solutions. Scientific computing is the field of study concerned with constructing mathematical models and numerical solution techniques and using computers to analyze and solve scientific and engineering problems.

1.3 MATLAB
Jack Little and Cleve Moler, the founders of The MathWorks, recognized the need among engineers and scientists for more powerful and productive computation environments beyond those provided by languages such as FORTRAN (FORmula TRANslator) and C. In response to that need, they combined their expertise in mathematics, engineering, and computer science to develop MATLAB, a high-performance technical computing environment. MATLAB combines comprehensive math and graphics functions with a powerful high-level language. The name MATLAB stands for MATrix LABoratory; it is a program that has been purpose-designed to make computations easy, fast, and reliable. Basic MATLAB is good for:
a) Computations, including linear algebra, data analysis, signal processing, polynomials and interpolation, numerical integration, and numerical solution of differential equations.
b) Graphics, in 2-Dimension and 3-Dimension, including colour, lighting, and animation.

1.4 DEFINITION OF TERMS
The convention that will be used is that a lower case letter denotes a scalar or a vector and a capital letter denotes a matrix. Typically, the MATLAB variables are named with a capital letter if they will be used as matrices and with lower case letters for scalars and vectors.


The MATLAB environment uses the term matrix to indicate a variable containing real or complex numbers arranged in a two-dimensional grid. An array is, more generally, a vector, matrix, or higher-dimensional grid of numbers. MATLAB has many functions that create different types of matrices, row vectors and column vectors.

1.4.1 REVIEW OF VECTOR, SCALAR AND MATRICES
A vector is a matrix. A column vector is an m-by-1 matrix, given as

a11
a21
 .
 .
am1

A row vector is a 1-by-n matrix, given as

[a11  a12  . . .  a1n]

A diagonal matrix is a matrix that has non-zero entries only on the main diagonal, i.e. a11, a22, . . . , ann.


A diagonal matrix whose diagonal entries are all equal, i.e. a11 = a22 = . . . = ann = d, is called a scalar matrix, because multiplication by it has the same effect as multiplication by the scalar d, i.e. the 1-by-1 matrix [d]:

D =
 d  0  0
 0  d  0
 0  0  d

When these entries equal 1, the matrix is known as an identity matrix. Generally accepted mathematical notation uses I to denote the identity matrices of different sizes. MATLAB could not use I for this purpose because it did not distinguish between uppercase and lowercase letters and i already served as a subscript and as the complex unit. So, an English-language pun was introduced: the function eye(m,n) returns an m-by-n identity matrix (if m = n, eye(n) returns a square identity matrix). There are also other special matrices, each created with its own command, e.g.
a) Pascal matrix, which consists of the rows of Pascal's triangle: the apex is 1, each row starts and ends with 1, and the other entries are obtained by adding together the two numbers on either side in the row above, i.e.


P = pascal(3) =
 1  1  1
 1  2  3
 1  3  6

b) Magic matrix (magic square): a matrix in which the entries of every row, every column, and both main diagonals add up to the same number, e.g.

M = magic(4) =
 16   2   3  13
  5  11  10   8
  9   7   6  12
  4  14  15   1

c) Hilbert matrix, which has entries aij = 1/(i + j - 1):

sym(hilb(3)) =
 [  1, 1/2, 1/3]
 [1/2, 1/3, 1/4]
 [1/3, 1/4, 1/5]

hilb(3) =
 1.0000  0.5000  0.3333
 0.5000  0.3333  0.2500
 0.3333  0.2500  0.2000

sym(hilb(3)) is the exact (symbolic) form of the numeric matrix hilb(3) shown above.
d) The ones matrix has all its entries equal to 1:


ones(3) =
 1  1  1
 1  1  1
 1  1  1

e) Random matrix: rand(n) returns an n-by-n matrix of uniformly distributed random numbers between 0 and 1; a matrix of random integers can be obtained, for example, as

fix(10*rand(3)) =
 6  0  4
 3  8  0
 7  9  9
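For reference, the special matrices described above can all be generated directly at the MATLAB prompt (a small sketch; the sizes chosen are arbitrary):

>> I = eye(3)           % 3-by-3 identity matrix
>> P = pascal(3)        % Pascal matrix
>> M = magic(4)         % 4-by-4 magic square
>> H = hilb(3)          % Hilbert matrix, entries 1/(i+j-1)
>> J = ones(3)          % matrix of all ones
>> R = fix(10*rand(3))  % random integer entries between 0 and 9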

A symmetric matrix is a square matrix whose transpose equals the matrix itself, i.e. AT = A, e.g.

A =
 -4   1   8
  1   0  -3
  8  -3   5

A skew-symmetric matrix is a matrix whose transpose equals minus the matrix, i.e. AT = -A. Every skew-symmetric matrix has all the entries of its main diagonal equal to 0, e.g.

A =
  0  -4  -5
  4   0 -10
  5  10   0

An orthogonal matrix is a matrix whose transpose is also its inverse, i.e. AT = A-1 (equivalently, AT A = I), e.g.

A =
  2/3   1/3   2/3
 -2/3   2/3   1/3
  1/3   2/3  -2/3

A triangular matrix is a matrix in which all entries above the main diagonal equal 0 (lower triangular matrix) or all entries below the main diagonal equal 0 (upper triangular matrix). A sparse matrix is a matrix with relatively few non-zero entries. Sparse matrix algorithms can tackle huge sparse matrices that are utterly impractical for dense matrix algorithms.

1.4.2 MATRIX OPERATION
Addition and subtraction of matrices are defined just as they are for arrays, element by element. Addition and subtraction require both matrices to have the same dimensions, unless one of them is a scalar. If the dimensions are incompatible, an error results in the MATLAB environment. Two row vectors or two column vectors can be added only if they have the same number of components. For example, the addition of a scalar s and a row vector v, w = s + v:

s = [8],  v = [2  0  -1]

w = s + v = [2+8  0+8  -1+8] = [10  8  7]

A row vector and a column vector of the same length can be multiplied in either order. The result is either a scalar (the inner product) or a matrix (the outer product). Multiplication of matrices is defined in a way that reflects composition of the underlying linear transformations and allows compact representation of systems of simultaneous linear equations. The matrix product C = AB is defined when the column dimension of A is equal to the row dimension of B, or when one of them is a scalar. If A is m-by-p and B is p-by-n, their product C is an m-by-n matrix. For example, multiplying a row vector and a column vector:

u = [3; 1; 4] is the column vector
 3
 1
 4

v = [2  0  -1] is a row vector, and

x = v*u = [3*2 + 1*0 + 4*(-1)] = [6 + 0 - 4] = [2]
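The same products can be checked at the MATLAB prompt (a small sketch using the vectors above):

>> u = [3; 1; 4];   % column vector
>> v = [2 0 -1];    % row vector
>> x = v*u          % inner product: a 1-by-1 result, here 2
>> Y = u*v          % outer product: a 3-by-3 rank-one matrix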


It should be noted that a 3-by-2 matrix cannot be multiplied with a 3-by-3 matrix. Also, for the multiplication of two matrices A and B, in general AB is not equal to BA.

1.4.3 DETERMINANT
Determinants were originally introduced for solving linear systems. Every square matrix A has associated with it a number called the determinant of A, denoted by |A|. The function det computes the determinant of a square matrix. A determinant of order three is a scalar associated with a 3-by-3 matrix A such that

              | a11  a12  a13 |
det A = |A| = | a21  a22  a23 |
              | a31  a32  a33 |

      = a11 |a22  a23|  -  a12 |a21  a23|  +  a13 |a21  a22|
            |a32  a33|         |a31  a33|         |a31  a32|

      = a11(a22a33 - a23a32) - a12(a21a33 - a23a31) + a13(a21a32 - a22a31)

1.4.4 MATRIX TRANSPOSE, MATRIX INVERSE, AND PSEUDO-INVERSE
For real matrices, the transpose operation interchanges aij and aji. MATLAB uses the apostrophe operator (') to perform a complex conjugate transpose, while the general mathematical notation is AT, e.g.

v = [2  0  -1]

v' = vT =
  2
  0
 -1

If A is a square matrix and nonsingular (det A ≠ 0), the equations AX = I and XA = I have the same solution, X. This solution is called the inverse of A; it is denoted by A^-1 and computed by the function inv(A) in MATLAB. It has the property that

A^-1 A = A A^-1 = I   (I = identity matrix)
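A quick check of this property in MATLAB (a minimal sketch; the 2-by-2 matrix here is an arbitrary assumed nonsingular example):

>> A = [4 3; 6 3];   % assumed example with det(A) = -6, so A is nonsingular
>> X = inv(A);       % computes the inverse of A
>> A*X               % returns the 2-by-2 identity matrix (up to round-off)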

It should be noted that rectangular matrices do not have inverses, and if A has no inverse it is a singular matrix, i.e. det A = 0. A partial replacement for the inverse is provided by the Moore-Penrose pseudo-inverse, which is computed by the pinv function. This shows that rectangular matrices have pseudo-inverses in MATLAB.

1.4.5 VECTOR AND MATRIX NORMS
A vector norm for a column vector x = [xj], j = 1, 2, . . . , n, is a generalized length; it is denoted by ||x||. The most important in computation is the p-norm of a vector x, given by

||x||p = (sum of |xi|^p)^(1/p) = (|x1|^p + |x2|^p + . . . + |xn|^p)^(1/p)

It is computed by norm(x,p). This is defined for any value of p >= 1, but the most common values of p are 1, 2, and infinity. The default value is p = 2, which corresponds to the Euclidean length

||x||2 = (x1^2 + x2^2 + . . . + xn^2)^(1/2)

The p-norm of a matrix A,

||A||p = max ||Ax||p / ||x||p   (maximum taken over x not equal to 0),

can be computed for p = 1, 2, and infinity by norm(A,p) in MATLAB. Again, the default value is p = 2.

1.4.6 EIGENVALUE
Matrix eigenvalue problems deal with vector equations of the form Ax = λx, where A is a given square matrix, x is an unknown vector and λ an unknown scalar. For non-trivial solutions, i.e. x ≠ 0, the values of λ are called the eigenvalues of the matrix A and the corresponding solutions of Ax = λx are called the eigenvectors. Written out, Ax = λx is

 a11  a12 . . . a1n     x1        x1
 a21  a22 . . . a2n     x2        x2
  .    .         .       .   = λ   .
 an1  an2 . . . ann     xn        xn

Ax = λx  =>  Ax - λx = 0  =>  (A - λI)x = 0

The identity matrix I is introduced because we cannot subtract the scalar λ from the matrix A.

1.4.7 STABILITY AND CONDITIONING
In numerical analysis, the condition number associated with a problem is a measure of that problem's amenability to computation, i.e. how numerically well-conditioned the problem is. A problem with a low condition number is well conditioned, while a problem with a high condition number is ill conditioned. The condition number associated with a linear system gives a bound on how accurate the computed solution will be; it measures how much the solution x changes with respect to changes in b. This leads to the notion of numerical stability. An algorithm is numerically stable if an error, once generated, does not grow too much during the computation. This is possible only when the problem is well conditioned. The art of numerical computation is to find a stable algorithm for solving a well-posed problem. For example, computing √2 = 1.41421... is a well-conditioned problem. For the function f(x) = 1/(x - 1), we have f(1.1) = 10 and f(1.001) = 1000, so evaluating f(x) near x = 1 is an ill-conditioned problem.

1.4.8 ERRORS
Errors are deviations from the expected or exact value. Assessing the accuracy of the output results of computations is a paramount goal in numerical solutions.


There are different types of errors that affect the accuracy of the results obtained. The various types of errors that may limit accuracy are:
a. Experimental errors: errors in the input mathematical data; these arise not only from the use of wrong data but also from inherent imperfections of physical measurements and quantities.
b. Round-off errors: errors that arise because calculations are performed with approximate representations of the actual numbers, e.g. 3.142517 represented as 3.143.
c. Approximation errors: these occur as a result of truncating values and are a consequence of the computation method used.
In MATLAB, decimal-point representations of individual entries are the signal to use variable-precision arithmetic (vpa). By default, the result of each arithmetic operation in MATLAB is rounded to 16 significant decimal digits.

1.5 SCOPE OF STUDY
The main focus of this study is to show that Ax = b has a unique solution. Therefore, we want to show how MATLAB works to give the solution to linear algebraic systems.

A little familiarity with MATLAB is required; also, an understanding of the principle behind an algorithm for a problem is needed. Linear algebra is one of the cornerstones of modern computational mathematics. One of the most important problems in technical computing is the solution of systems of linear equations. In matrix notation, the general problem which is the scope of our study takes the following form: given two matrices A and b, does there exist a unique solution or matrix x such that Ax = b or xA = b?

1.6 LIMITATIONS
The concept of a matrix was historically introduced to simplify the solution of linear systems, although matrices today have much greater and broader-reaching applications. We know that linear equations in two or three variables can be solved using techniques such as the elimination method and the substitution method, but these techniques are not practical for dealing with large systems involving large numbers of variables. Therefore, our restriction will be to methods of solving linear equations using MATLAB. Also, every matrix in our study will be a square, non-singular matrix, so that the system has a unique solution.

1.7 METHODOLOGY
Computational difficulties are encountered with theoretical linear algebra methods such as Cramer's rule, the least squares method, explicitly finding the inverse of a matrix, etc. For example, using Cramer's rule for Ax = b, it might take years to solve a 20-by-20 matrix problem using the definition of the determinant. Therefore, we introduce the numerical methods relevant to this study. They are:
a) Iterative methods
b) Factorization (decomposition): LU factorization, Cholesky factorization
c) Matrix inverse
d) Eigenvalues
The German mathematician Carl Friedrich Gauss proved that every polynomial equation has at least one root, a result known as the fundamental theorem of algebra. It was first stated as: every polynomial f(x) with complex coefficients can be factored into linear factors over the complex numbers. A polynomial is in general defined as

f(z) = z^n + c(n-1) z^(n-1) + c(n-2) z^(n-2) + . . . + c0

where z is a complex variable and

c0, c1, . . . , c(n-1) are complex constants. Carl Friedrich Gauss reformulated this theorem to avoid the notion of complex numbers, stating: every polynomial f(x) with real coefficients can be factored into linear and quadratic factors. The name "fundamental theorem of algebra" was given at a time when algebra was basically about solving polynomial equations with complex or real coefficients. In spite of its name, there is no known proof of the fundamental theorem of algebra which is purely algebraic, and many mathematicians believe such a proof does not exist; besides, it is not fundamental for modern algebra. It suffices to prove that every polynomial has at least one root, because by ordinary algebraic division we can then divide by the corresponding linear factor to give another polynomial whose degree is reduced by 1. Repeating the whole process, we get all n of the roots. A simple illustration is x^2 = 4. Solving:

x = ±√4  =>  x = ±2

This shows x = 2 is a solution. But when x^2 = -4 is solved,

x = ±√(-4)  =>  x = ±2√(-1)

What is the physical meaning of √(-1)? Gauss in his dissertation showed that x^2 = -4 has a solution:


x = ±√(-4)  =>  x = ±2√(-1)  =>  x = ±2i

The following formulas were also introduced for solving equations:
i) ax + b = 0   =>   x = -b/a
ii) ax^2 + bx + c = 0   =>   x = (-b ± √(b^2 - 4ac)) / (2a)

Therefore, in our methods of solving systems of linear equations, there exists a unique solution of Ax = b.
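As an aside, such polynomial equations can be checked numerically in MATLAB with the built-in roots function (a small sketch; the coefficient vectors are assumed examples):

>> roots([1 0 4])     % roots of x^2 + 4 = 0, namely 2i and -2i
>> roots([2 3 -2])    % roots of 2x^2 + 3x - 2 = 0, namely 0.5 and -2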


CHAPTER TWO LITERATURE REVIEW

2.1 HISTORICAL BACKGROUND OF LINEAR ALGEBRA
The history of modern linear algebra followed the introduction and development of the notion of a matrix. The subject of linear algebra grew out of the development of determinants, which arose from the study of the coefficients of systems of linear equations. Gottfried Wilhelm Leibnitz (or Leibniz), born at Leipzig on June 21 (O.S.), 1646, and died in Hanover on November 14, 1716, one of the two founders of calculus, used determinants in 1693, and Cramer presented his determinant-based formula for solving systems of linear equations (today known as Cramer's rule) in 1750. Cramer's rule is a theorem in linear algebra which gives the solution of a system of linear equations in terms of determinants. It is named after Gabriel Cramer (1704 - 1752), who published the rule in his 1750 Introduction à l'analyse des lignes courbes algébriques. Let a system be represented in matrix multiplication form as Ax = b; then the theorem states that

xi = det Ai / det A   (i = 1, 2, . . . , n)

where Ai is the matrix formed by replacing the ith column of A by the column vector b.

Considering the linear system

ax1 + bx2 = e
cx1 + dx2 = f

or, in matrix form,

 a  b     x1     e
 c  d     x2  =  f

then by Cramer's rule:

x1 = |e  b| / |a  b| = (ed - bf) / (ad - bc)
     |f  d|   |c  d|

x2 = |a  e| / |a  b| = (af - ec) / (ad - bc)
     |c  f|   |c  d|

The rules for n-by-n matrices are similar.


Johann Carl Friedrich Gauss (30 April 1777 - 23 February 1855) was a German mathematician and scientist who contributed significantly to many fields, including number theory, statistics, analysis, differential geometry, geodesy, electrostatics, astronomy, and optics. He was sometimes known as the princeps mathematicorum (Latin, usually translated as "the Prince of Mathematicians") and the "greatest mathematician since antiquity". Gauss had a remarkable influence in many fields of mathematics and science and is ranked as one of history's most influential mathematicians. He developed Gaussian elimination around 1800 and used it to solve least squares problems in celestial computations. Even though Gauss' name is associated with this technique for successively eliminating variables from systems of linear equations, Chinese manuscripts from several centuries earlier have been found that explain how to solve a system of three equations in three unknowns by Gaussian elimination. For matrix algebra to develop fruitfully, one needed both proper notation and a definition of matrix multiplication. James Joseph Sylvester (September 3, 1814, London - March 15, 1897, Oxford), an English mathematician, made fundamental contributions to matrix theory, invariant theory, number theory, partition theory and combinatorics. In 1848, he first introduced the term "matrix", which was the Latin word for womb, as a name for an array of numbers. Matrix algebra was nurtured by the work of Arthur Cayley (August 16, 1821 - January 26, 1895). He helped found the modern British school of pure mathematics in 1855. Cayley studied compositions of linear transformations and was led to define matrix multiplication so that the matrix of coefficients for the composite transformation ST is the product of the matrix S times the matrix T. He went on to study the algebra of these compositions, including matrix inverses. Cayley, in his 1858 Memoir on the Theory of Matrices, gave the famous Cayley-Hamilton theorem, which asserts that a square matrix is a root of its characteristic polynomial. The use of a single letter A to represent a matrix was crucial to the development of matrix algebra. Early in the development, the formula det(AB) = det(A)det(B) provided a connection between matrix algebra and determinants. Cayley wrote, "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants." Mathematicians also attempted to develop an algebra of vectors, but there was no natural definition of the product of two vectors that held in arbitrary dimensions. Hermann Günther Grassmann (April 15, 1809, Stettin (Szczecin) - September 26, 1877, Stettin) was a German polymath, renowned in his day as a linguist and now admired as a mathematician. He was also a physicist, neohumanist, general scholar, and publisher. He proposed the first vector algebra that involved a non-commutative vector product (that is, v x w need not equal w x v) in his book Ausdehnungslehre (1844). Grassmann's text also introduced the product of a column matrix and a row matrix, which resulted in what is now called a simple or rank-one matrix. In the late 19th century, the American mathematical physicist Josiah Willard Gibbs (February 11, 1839 - April 28, 1903) published his famous treatise on vector analysis. In that treatise, Gibbs represented general matrices, which he called dyadics, as sums of simple matrices, which he called dyads. Later the theoretical physicist Paul Adrien Maurice Dirac (August 8, 1902 - October 20, 1984) introduced the term "bra-ket" for what we now call the scalar product of a "bra" (row) vector times a "ket" (column) vector and the term "ket-bra" for the product of a ket times a bra, resulting in what we now call a simple matrix, as above. There was renewed interest in matrices, particularly in the numerical analysis of matrices, after World War II with the development of modern digital computers. John von Neumann (December 28, 1903 - February 8, 1957), a Hungarian-American mathematician, and Herman Heine Goldstine (September 13, 1913 - June 16, 2004) introduced condition numbers in analyzing round-off errors in 1947. Alan Mathison Turing (23 June 1912 - 7 June 1954) and von Neumann were the 20th-century giants in the development of stored-program computers.

Turing introduced the LU decomposition of a matrix in 1948. The L is a lower triangular matrix with 1's on the diagonal and the U is an echelon matrix. The benefit of the QR decomposition was realized a decade later. The Q is a matrix whose columns are orthonormal vectors and R is a square upper triangular invertible matrix with positive entries on its diagonal. The QR factorization is used in computer algorithms for various computations, such as solving equations and finding eigenvalues.

2.2 MATRIX THEORY
A matrix, which is a rectangular array of numbers, has one of its principal uses in representing systems of equations of the first degree in several unknowns. Each matrix row represents one equation, and the entries in a row are the coefficients of the variables in that equation, in some fixed order. Given the general form of a system of equations below:

a11 x1 + a12 x2 + . . . + a1n xn = b1     . . . Equation (1)
a21 x1 + a22 x2 + . . . + a2n xn = b2     . . . Equation (2)
.
.
am1 x1 + am2 x2 + . . . + amn xn = bm     . . . Equation (m)

This is given in matrix form as

 a11  a12 . . . a1n     x1     b1
 a21  a22 . . . a2n     x2     b2
  .    .         .       .  =   .
 am1  am2 . . . amn     xn     bm

and x1, x2, . . . , xn are the unknowns. For example, given the linear system in three unknowns:

3x1 + 2x2 - x3 = 1       . . . equation (1)
2x1 - 2x2 + 4x3 = -2     . . . equation (2)
-x1 + (1/2)x2 - x3 = 0   . . . equation (3)

The matrix form is

  3    2   -1     x1      1
  2   -2    4     x2  =  -2
 -1   1/2  -1     x3      0

The unknowns are given in vector form as x = [x1; x2; x3].

Addition and multiplication can be defined so that certain sets of matrices form algebraic systems. The sum of two matrices is defined only if they are of the same size. If A = [aij] and B = [bij] are of the same size, then C = A + B is defined by cij = aij + bij; this is done simply by adding corresponding elements. The set of all matrices of a fixed size has the property that addition is closed, associative, and commutative. A unique matrix 0 exists such that for any matrix A,

A + 0 = 0 + A = A

and corresponding to any matrix A there exists a unique matrix B such that

A + B = B + A = 0

The product AB of two matrices A and B is defined only if the number of columns of the left factor A is the same as the number of rows of the right factor B. Note that in general AB ≠ BA.

FORTRAN subroutines for solving linear equations, while EISPACK contains subroutines for solving eigenvalue problems. Together, LINPACK and EISPACK represented the state of the art in software for matrix computation. In the late 1970s, Cleve, who was then chairman of the computer science department at the University of New Mexico, wanted to be able to teach students in his linear algebra courses using the LINPACK and EISPACK software. However, he didn't want them to have to program in FORTRAN, because this wasn't the purpose of the course. So, as a "hobby" on his own time, he started to write a program that would provide simple interactive access to LINPACK and EISPACK. He named his program MATLAB, for MATrix LABoratory. Over the next several years, when Cleve would visit another university to give a talk, or as a visiting professor, he would end up leaving a copy of his MATLAB on the university machines. Within a year or two, MATLAB started to catch on by word of mouth within the applied mathematics community as a cult phenomenon. In early 1983, John Little was exposed to MATLAB because of a visit Cleve made to Stanford University. Little, an engineer, recognized the potential application of MATLAB to engineering applications. So in 1983, Little teamed up with Moler and Steve Bangert to develop a second-generation, professional version of MATLAB written in C and integrated with graphics. The MathWorks, Inc. was founded in 1984 to market and

continue development of MATLAB. The very first version was written in the late 1970s for use in courses in matrix theory, linear algebra, and numerical analysis. The basic MATLAB data element is a matrix. The department of Scientific Computing at Uppsala University, Sweden, started to use MATLAB in 1986 as a teaching tool in numerical analysis, especially in numerical linear algebra. The MATLAB software at that time was an academic version, written in FORTRAN code. It has since then been updated and extended. Solving problems in MATLAB is, therefore, generally much quicker than programming in a high-level language such as C or FORTRAN. The focus in MATLAB is on computation, not mathematics. The heart and soul of MATLAB is linear algebra. Now, there are two basic versions of the software, the professional version, and the student edition. Since then, the software has evolved into an interactive system and programming language for general scientific and technical computation and visualization. There are hundreds of built-in functions that come with the basic MATLAB and there are optional "toolboxes" of functions for specific purposes such as Controls, Signal Processing, and Optimization. Most of the functions in MATLAB and the Toolboxes are written in the MATLAB language and the source code is


readable. MATLAB commands are expressed in a form very similar to that used in mathematics and engineering. Some commonly used MATLAB commands are given below for any matrix A:

Command    Meaning                                        Syntax/Usage
chol       Cholesky factorization of A                    C=chol(A)
condest    1-norm condition number estimate               C=condest(A)
det        Determinant of matrix A                        det(A)
diag       Diagonal matrices and diagonals of a matrix    diag(A)
eig        Eigenvalues and eigenvectors of A              [V,E]=eig(A)
eye        Identity matrix                                eye(n)
hilb       Hilbert n-by-n matrix                          hilb(n)
inv        Matrix inverse of A                            inv(A)
lu         LU factorization of A                          [L,U]=lu(A)
linsolve   Solve a system of linear equations             x = linsolve(A,b)
magic      Magic n-by-n matrix                            magic(n)
norm       Vector and matrix norms                        [norm(v,1) norm(v) norm(v,inf)]
normest    2-norm estimate                                nrm = normest(S)
ones       Create array of all ones                       ones(n)
pascal     Pascal n-by-n matrix                           pascal(n)
pinv       Moore-Penrose pseudoinverse of a matrix        pinv(A)
poly       Characteristic polynomial of A                 P=poly(A)
rand       Uniformly distributed pseudorandom numbers     rand(n)
rref       Reduced row echelon form of A                  rref(A)
trace      Sum of diagonal elements of A                  b = trace(A)
tril       Lower triangular part of a matrix              tril(A)
triu       Upper triangular part of a matrix              triu(A)
zeros      Create array of all zeros                      zeros(n)

Linear algebra functions are located in the MATLAB matfun directory. For a complete list, brief descriptions, and links to reference pages, type: help matfun


CHAPTER THREE METHODOLOGY

3.1 INTRODUCTION
One of the most important problems in technical computing is the solution of systems of simultaneous linear equations. Consider the equation 9x = 27, which is of the form Ax = b; that is, a 1-by-1 example. Does the equation 9x = 27 have a unique solution? The answer is definitely yes. The equation has the unique solution x = 3, and the solution is easily obtained by division:

9x = 27  =>  9x/9 = 27/9  =>  x = 3

The solution is not obtained by computing the inverse of 9, that is, 9^-1 = 0.1111... (using Ax = b => x = A^-1 b), and then multiplying 9^-1 by 27. This would be more work and, if 9^-1 is represented to a finite number of digits, less accurate. Similar considerations apply to sets of linear equations with more than one unknown; the MATLAB software solves such equations without computing the inverse of the matrix.


The two division symbols used are the slash, /, and the backslash, \. They describe the solution of a general system of linear equations in the two situations where the unknown matrix appears on the left or on the right of the coefficient matrix:

x = b/A = bA^-1    denotes the solution to the matrix equation xA = b
x = A\b = A^-1 b   denotes the solution to the matrix equation Ax = b
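A small illustration of the two operators (a minimal sketch; the 2-by-2 system is an assumed example):

>> A = [3 1; 1 2];   % assumed coefficient matrix
>> b = [9; 8];       % right-hand side as a column vector
>> x = A\b           % backslash solves Ax = b, giving x = [2; 3]
>> c = [9 8];        % the same right-hand side written as a row vector
>> y = c/A           % slash solves yA = c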

The coefficient matrix A is always in the "denominator". The dimension compatibility conditions for x = A\b require the two matrices A and b to have the same number of rows. The solution x then has the same number of columns as b, and its row dimension is equal to the column dimension of A. For x = b/A, the roles of rows and columns are interchanged. In practice, linear equations of the form Ax = b occur more frequently than those of the form xA = b, so we concentrate on the backslash operator in MATLAB. The most common situation involves a square coefficient matrix A and a single right-hand-side column vector b.

3.2 FACTORIZATIONS (DECOMPOSITION)
The core of solving a system of linear algebraic equations is decomposing the coefficient matrix. Through the decomposition process, the coupled equations are decoupled and the solution can be obtained with much less effort.

Matrix factorizations, which will be discussed in this section, make use of triangular matrices, where all the entries either above or below the main diagonal are 0. Systems of linear equations involving triangular matrices are easily and quickly solved using either forward or back substitution.

3.2.1 LU FACTORIZATION
This is one of the methods used to solve systems of linear algebraic equations. It expresses any square matrix A as the product of a (permutation of a) lower triangular matrix and an upper triangular matrix:

A = LU

L = lower triangular matrix
U = upper triangular matrix

The LU factorization of a matrix A allows the linear system Ax = b to be solved quickly with x = U\(L\b) in MATLAB:

Ax = b  =>  LUx = b;  setting y = Ux, we first solve Ly = b and then Ux = y.

Example 3.1: Determine an LU factorization of the matrix

A =
  3   6  -9
  2   5  -3
 -4   1  10

Solution: We first reduce A to an upper triangular matrix by row operations:

  3   6  -9              1   2  -3                1   2  -3
  2   5  -3  --R1/3-->   2   5  -3  --R2-2R1-->   0   1   3
 -4   1  10             -4   1  10               -4   1  10

              1   2  -3               1   2  -3                 1   2  -3
--R3+4R1-->   0   1   3  --R3-9R2-->  0   1   3  --(-R3/29)-->  0   1   3   = U
              0   9  -2               0   0 -29                 0   0   1

Now we need to get L. We will need the elementary matrix for each of these row operations, or more precisely their inverses. Each inverse can be found by applying the inverse operation to the identity matrix, and

L = E1^-1 E2^-1 . . . Ek^-1     (E = elementary matrix)

R1/3      =>  E1 = [1/3 0 0; 0 1 0; 0 0 1]       E1^-1 = [3 0 0; 0 1 0; 0 0 1]
R2 - 2R1  =>  E2 = [1 0 0; -2 1 0; 0 0 1]        E2^-1 = [1 0 0; 2 1 0; 0 0 1]
R3 + 4R1  =>  E3 = [1 0 0; 0 1 0; 4 0 1]         E3^-1 = [1 0 0; 0 1 0; -4 0 1]
R3 - 9R2  =>  E4 = [1 0 0; 0 1 0; 0 -9 1]        E4^-1 = [1 0 0; 0 1 0; 0 9 1]
-R3/29    =>  E5 = [1 0 0; 0 1 0; 0 0 -1/29]     E5^-1 = [1 0 0; 0 1 0; 0 0 -29]

L = E1^-1 E2^-1 E3^-1 E4^-1 E5^-1 =
  3   0    0
  2   1    0
 -4   9  -29

Finally, we can verify the LU factorization:

       3   0    0      1   2  -3
LU =   2   1    0      0   1   3
      -4   9  -29      0   0   1

       3(1)+0+0     3(2)+0+0     3(-3)+0+0          3   6  -9
   =   2(1)+0+0     2(2)+1+0     2(-3)+3+0      =   2   5  -3   = A
      -4(1)+0+0    -4(2)+9+0    -4(-3)+27-29       -4   1  10

In the MATLAB environment:

>> A=[3 6 -9;2 5 -3;-4 1 10]
A =
     3     6    -9
     2     5    -3
    -4     1    10

>> [L,U]=lu(A)
L =
   -0.7500    1.0000         0
   -0.5000    0.8148    1.0000
    1.0000         0         0
U =
   -4.0000    1.0000   10.0000
         0    6.7500   -1.5000
         0         0    3.2222

The theoretical computation of the LU factorization of A is different from the MATLAB computation (MATLAB uses partial pivoting), but when verified:

>> L*U
ans =
     3     6    -9
     2     5    -3
    -4     1    10

Example 3.2: Using the LU factorization above, find the solution of the following system of equations: 3x1 + 6x2 -9x3 = 0

2x1 + 5x2 - 3x3 = -4
-4x1 + x2 + 10x3 = 3

Solution: First we write the matrix form of the system:

  3   6  -9     x1      0
  2   5  -3     x2  =  -4
 -4   1  10     x3      3

From Example 3.1, we found the LU decomposition

       3   0    0      1   2  -3       3   6  -9
LU =   2   1    0      0   1   3   =   2   5  -3   = A
      -4   9  -29      0   0   1      -4   1  10

From our definition, Ly = b:

  3   0    0     y1      0
  2   1    0     y2  =  -4
 -4   9  -29     y3      3

Solving using forward substitution:

3y1 = 0                 =>  y1 = 0
2y1 + y2 = -4           =>  y2 = -4
-4y1 + 9y2 - 29y3 = 3   =>  y3 = -39/29

The second system, Ux = y:

  1   2  -3     x1        0
  0   1   3     x2  =    -4
  0   0   1     x3     -39/29

Solving using backward substitution:

x3 = -39/29
x2 + 3x3 = -4           =>  x2 = 1/29
x1 + 2x2 - 3x3 = 0      =>  x1 = -119/29

In the MATLAB environment:

>> A=[3 6 -9;2 5 -3;-4 1 10]
A =
     3     6    -9
     2     5    -3
    -4     1    10

>> [L,U]=lu(A)
L =
   -0.7500    1.0000         0
   -0.5000    0.8148    1.0000
    1.0000         0         0
U =
   -4.0000    1.0000   10.0000
         0    6.7500   -1.5000
         0         0    3.2222

>> b=[0;-4;3]
b =
     0
    -4
     3

>> x=U\(L\b)
x =
   -4.1034
    0.0345
   -1.3448

This computed solution using MATLAB equals the theoretical computation (-119/29 = -4.1034, 1/29 = 0.0345, -39/29 = -1.3448).

Example 3.3: Solve the system of equations using MATLAB:

2x1 - 5x2 + 2x3 = 7
x1 + 2x2 - 4x3 = 3
3x1 - 4x2 - 6x3 = 5


Solution:
>> A=[2 -5 2;1 2 -4;3 -4 -6]
A =
     2    -5     2
     1     2    -4
     3    -4    -6

>> [L,U]=lu(A)
L =
    0.6667   -0.7000    1.0000
    0.3333    1.0000         0
    1.0000         0         0
U =
    3.0000   -4.0000   -6.0000
         0    3.3333   -2.0000
         0         0    4.6000

>> b=[7;3;5]


b =
     7
     3
     5

>> x=U\(L\b)
x =
    5.0000
    1.0000
    1.0000

The solution to the given system is x1 = 5, x2 = 1, x3 = 1.

3.2.2 CHOLESKY FACTORIZATION
The Cholesky factorization requires the system matrix to be positive definite, i.e. xT A x > 0 for all x ≠ 0. All the diagonal elements of the matrix A are then positive and the off-diagonal elements are not too big. The Cholesky factorization expresses a symmetric matrix as the product of a lower triangular matrix and its transpose:

             c11   0    0      c11  c21  c31
A = C CT =   c21  c22   0       0   c22  c32
             c31  c32  c33      0    0   c33

C = lower triangular matrix
CT = upper triangular matrix = transpose of C

The Cholesky factorization allows the linear system Ax = b to be written as C CT x = b, which is then solved by forward and back substitution. The formulas for factoring the matrix A to get the lower triangular matrix C are:

c11 = sqrt(a11)
cj1 = aj1 / c11                                                   j = 2, . . . , n
cjj = sqrt( ajj - (cj1^2 + . . . + cj,j-1^2) )
cpj = ( apj - (cj1 cp1 + . . . + cj,j-1 cp,j-1) ) / cjj           p = j+1, . . . , n

Example 3.4: Solve the linear system by the Cholesky method:

x1 - 3x2 + 2x3 = 27
-3x1 + 10x2 - 5x3 = -78
2x1 - 5x2 + 6x3 = 64

Solution: The above system is rewritten in the form Ax = b as

       1  -3   2     x1      27
A =   -3  10  -5     x2  =  -78
       2  -5   6     x3      64

and we seek C such that

  1  -3   2      c11   0    0      c11  c21  c31
 -3  10  -5   =  c21  c22   0       0   c22  c32
  2  -5   6      c31  c32  c33      0    0   c33

Calculating the values of c11, c21, c31, c22, c32, c33:

c11 = sqrt(a11) = sqrt(1) = 1
c21 = a21/c11 = -3/1 = -3
c31 = a31/c11 = 2/1 = 2
c22 = sqrt(a22 - c21^2) = sqrt(10 - 9) = 1
c32 = (a32 - c31 c21)/c22 = (-5 - (2)(-3))/1 = -5 + 6 = 1
c33 = sqrt(a33 - c31^2 - c32^2) = sqrt(6 - 4 - 1) = 1

       1  -3   2      1   0   0     1  -3   2
A =   -3  10  -5  =  -3   1   0     0   1   1
       2  -5   6      2   1   1     0   0   1

Using Cy = b (forward substitution):

  1   0   0     y1      27
 -3   1   0     y2  =  -78
  2   1   1     y3      64

y1 = 27              =>  y1 = 27
-3y1 + y2 = -78      =>  y2 = 3
2y1 + y2 + y3 = 64   =>  y3 = 7

Then CT x = y:


  1  -3   2     x1     27
  0   1   1     x2  =   3
  0   0   1     x3      7

Solving by back substitution:

x3 = 7
x2 + x3 = 3             =>  x2 = -4
x1 - 3x2 + 2x3 = 27     =>  x1 = 1

In the MATLAB environment, the above example is given as:

>> A=[1 -3 2;-3 10 -5;2 -5 6]
A =
     1    -3     2
    -3    10    -5
     2    -5     6

>> C=chol(A)
C =
     1    -3     2
     0     1     1
     0     0     1

>> chol(A)'
ans =
     1     0     0
    -3     1     0
     2     1     1

Note that MATLAB's chol(A) returns the upper triangular factor, i.e. the transpose of the lower triangular matrix C obtained in the theoretical computation, so that A = (chol(A))' * chol(A).

>> b=[27;-78;64]
b =
    27
   -78
    64

>> x=C\(C'\b)
x =
     1
    -4
     7

so x1 = 1, x2 = -4, x3 = 7. It can be seen that the theoretical and MATLAB solutions give the same result. It should be noted that if the system is not positive definite, MATLAB generates an error.

Example 3.5: Solve the system below using MATLAB:

-2x3 + 3 = x2 + 3x1
x1 - 3x3 = -2x2 + 1
3x2 + x3 = 2 + 2x1

Solution: Rearranging the system, we get:

3x1 + x2 + 2x3 = 3
x1 + 2x2 - 3x3 = 1
-2x1 + 3x2 + x3 = 2

>> A=[3 1 2;1 2 -3;-2 3 1]
A =
     3     1     2
     1     2    -3
    -2     3     1

>> C=chol(A)
??? Error using ==> chol
Matrix must be positive definite.

3.3 ITERATIVE METHOD
This is a step-by-step process of obtaining the solution of Ax = b by repeating a sequence of steps successively until the result is achieved to the desired accuracy. We use this method when the entries on the main diagonal of the matrix are large in magnitude compared with the off-diagonal entries.

3.3.1 GAUSS-SEIDEL ITERATION METHOD
Assuming that ajj = 1 for j = 1, 2, . . . , n (which can always be arranged, provided ajj ≠ 0, by dividing each equation through by its diagonal coefficient), the matrix A can be written as

A = I + L + U

I = n-by-n identity matrix
L = lower triangular matrix with zero main diagonal
U = upper triangular matrix with zero main diagonal

Ax = b  =>  (I + L + U)x = b
        =>  x + Lx + Ux = b      (Ix = x)
        =>  x = b - Lx - Ux

The iterative formula for the (m+1)st iteration then becomes

x^(m+1) = b - L x^(m+1) - U x^(m)

x^(m) = mth approximation
x^(m+1) = (m+1)st approximation

(the components of x^(m+1) on the right-hand side are the ones already computed in the current step).

Example 3.6: Apply Gauss-Seidel iteration to the system below. Do 5 steps, starting from x1 = x2 = x3 = 1, using 5 significant digits in the computation.

10x1 + x2 + x3 = 6
x1 + 10x2 + x3 = 6

x1 + x2 + 10x3 = 6

Solution: Dividing each equation of the system by 10 gives

x1 + 0.1x2 + 0.1x3 = 0.6
0.1x1 + x2 + 0.1x3 = 0.6
0.1x1 + 0.1x2 + x3 = 0.6

Putting the system in the form x = b - Lx - Ux:

x1 = 0.6 - 0.1x2 - 0.1x3
x2 = 0.6 - 0.1x1 - 0.1x3
x3 = 0.6 - 0.1x1 - 0.1x2

Iterating from x1 = x2 = x3 = 1:

1st iteration
x1(1) = 0.6 - 0.1(1) - 0.1(1)             = 0.40000
x2(1) = 0.6 - 0.1(0.4) - 0.1(1)           = 0.46000
x3(1) = 0.6 - 0.1(0.4) - 0.1(0.46)        = 0.51400

2nd iteration
x1(2) = 0.6 - 0.1(0.46) - 0.1(0.514)      = 0.50260
x2(2) = 0.6 - 0.1(0.5026) - 0.1(0.514)    = 0.49834
x3(2) = 0.6 - 0.1(0.5026) - 0.1(0.49834)  = 0.49990

3rd iteration
x1(3) = 0.6 - 0.1(0.49834) - 0.1(0.49990) = 0.50017
x2(3) = 0.6 - 0.1(0.50017) - 0.1(0.49990) = 0.49999
x3(3) = 0.6 - 0.1(0.50017) - 0.1(0.49999) = 0.49998

4th iteration
x1(4) = 0.6 - 0.1(0.49999) - 0.1(0.49998) = 0.50000
x2(4) = 0.6 - 0.1(0.50000) - 0.1(0.49998) = 0.50000
x3(4) = 0.6 - 0.1(0.50000) - 0.1(0.50000) = 0.50000

5th iteration
x1(5) = 0.6 - 0.1(0.50000) - 0.1(0.50000) = 0.50000
x2(5) = 0.6 - 0.1(0.50000) - 0.1(0.50000) = 0.50000
x3(5) = 0.6 - 0.1(0.50000) - 0.1(0.50000) = 0.50000

Therefore,

x = [x1; x2; x3] = [0.5; 0.5; 0.5]

3.4 MATRIX EIGENVALUE PROBLEM
In this section we look at the situation where, for a given square matrix A and vector x, the product Ax is the same as the scalar multiple λx for some scalar λ. λ is an eigenvalue of A, and x the corresponding eigenvector of A. We will often call x the eigenvector corresponding to (or associated with) λ, and λ the eigenvalue corresponding to (or associated with) x.

Ax = λx has a non-trivial solution x ≠ 0
Ax - λx = 0  =>  (A - λI)x = 0     (I = n-by-n identity matrix)

The characteristic determinant is given by det(A - λI) = 0:

              | a11-λ   a12   . . .   a1n  |
              |  a21   a22-λ  . . .   a2n  |
det(A - λI) = |   .      .              .  |  = 0
              |  an1    an2   . . .  ann-λ |

We also obtain the characteristic polynomial of A from the characteristic determinant; it is of nth degree in λ and of the form

P(λ) = λ^n + c(n-1) λ^(n-1) + . . . + c1 λ + c0

The eigenvalues are denoted by λ1, λ2, . . . , λn. When an eigenvalue occurs exactly once, it is called a simple eigenvalue.

The sum of the n eigenvalues equals the sum of the main diagonal entries of the matrix, which is called the TRACE of the matrix:

trace A = a11 + a22 + . . . + ann = λ1 + λ2 + . . . + λn

The product of the eigenvalues equals the determinant of A:

det(A) = λ1 λ2 . . . λn

For a square matrix which is triangular, the eigenvalues are the diagonal entries a11, a22, . . . , ann. The set of all solutions of (A - λI)x = 0 is called the EIGENSPACE of A corresponding to λ. In MATLAB, the command [V,E]=eig(A) gives the eigenvectors and eigenvalues of the matrix A. With the eigenvalues on the diagonal of a diagonal matrix E and the corresponding eigenvectors forming the columns of a matrix V, we have

AV = VE

If V is nonsingular, this becomes the eigenvalue decomposition

A = V E V^-1

Example 3.7: Find all the eigenvalues of

A =
  5   8   16
  4   1    8
 -4  -4  -11

Solution: The determinant of A - λI gives us the characteristic polynomial:

          5-λ    8     16
A - λI =   4    1-λ     8
          -4    -4   -11-λ

det(A - λI) = (5-λ) |1-λ    8  |  -  8 | 4     8  |  +  16 | 4   1-λ |
                    | -4  -11-λ|       |-4   -11-λ|        |-4   -4  |

= (5-λ)[(1-λ)(-11-λ) + 32] - 8[4(-11-λ) + 32] + 16[-16 + 4(1-λ)] = 0
= (5-λ)(λ^2 + 10λ + 21) - 8(-4λ - 12) + 16(-4λ - 12) = 0
= -λ^3 - 10λ^2 - 21λ + 5λ^2 + 50λ + 105 + 32λ + 96 - 64λ - 192 = 0
= -λ^3 - 5λ^2 - 3λ + 9 = 0
=> λ^3 + 5λ^2 + 3λ - 9 = 0

Putting λ = 1:
(1)^3 + 5(1)^2 + 3(1) - 9 = 1 + 5 + 3 - 9 = 0, so λ = 1 is an eigenvalue.

Putting λ = -3:

(-3)^3 + 5(-3)^2 + 3(-3) - 9 = -27 + 45 - 9 - 9 = 0, so λ = -3 is also an eigenvalue (a double eigenvalue):

(λ + 3)^2 (λ - 1) = 0

In the MATLAB environment, the solution to our problem is given as:

>> A=[5 8 16;4 1 8;-4 -4 -11]
A =
     5     8    16
     4     1     8
    -4    -4   -11

>> [T,E]=eig(A)
T =
    0.8165   -0.5774   -0.7573
    0.4082   -0.5774    0.6509
   -0.4082    0.5774    0.0532
E =
    1.0000         0         0
         0   -3.0000         0
         0         0   -3.0000

It can be seen that the entries on the main diagonal of the matrix E correspond to our eigenvalues and the columns of the matrix T are the eigenvectors of A.

The characteristic polynomial is given by the function poly(A): >> P=poly(A) P= 1.0000 5.0000 3.0000 -9.0000

It can also be seen that the coefficients of our characteristic polynomial calculated theoretically equal our MATLAB solution.

Example 3.8: Find the eigenvalues and eigenvectors of the matrix below using MATLAB:

A =
 -3   1  -3
 -8   3  -6
  2  -1   2

Solution:
>> A=[-3 1 -3;-8 3 -6; 2 -1 2]
A =
    -3     1    -3
    -8     3    -6
     2    -1     2

>> poly(A)
ans =
    1.0000   -2.0000   -1.0000    2.0000


This vector p = [1 -2 -1 2] contains the coefficients of the characteristic polynomial. Our characteristic polynomial is therefore

λ^3 - 2λ^2 - λ + 2 = 0

To find the eigenvalues we must find the zeros of this polynomial:

>> E=eig(A)
E =
   -1.0000
    2.0000
    1.0000

λ = -1, λ = 2, λ = 1  =>  (λ + 1)(λ - 2)(λ - 1) = 0

Using the [V,D]=eig(A) command, MATLAB gives the eigenvalues, eigenvectors and the diagonal form of the matrix in one step:

>> [V,D]=eig(A)
V =
    0.4472   -0.4082   -0.5774
    0.8944   -0.8165   -0.5774
         0    0.4082    0.5774
D =
   -1.0000         0         0
         0    2.0000         0
         0         0    1.0000


3.5 ERROR AND ACCURACY
As indicated in chapter one, matrix and vector norms are needed to measure errors in approximate solutions to linear systems. The three common vector norms, each of which has an associated matrix norm, are:

1-norm:        ||x||1 = |x1| + |x2| + . . . + |xn|
2-norm:        ||x||2 = (|x1|^2 + |x2|^2 + . . . + |xn|^2)^(1/2)
infinity-norm: ||x||inf = max { |xi| : 1 <= i <= n }

For any vector norm, the associated matrix norm is given as

||A|| = max { ||Ax|| : ||x|| = 1 }

For the 1- and infinity-norms, the associated matrix norms take the form

||A||1   = maximum column sum = max over j of ( |a1j| + |a2j| + . . . + |anj| ),   1 <= j <= n
||A||inf = maximum row sum    = max over i of ( |ai1| + |ai2| + . . . + |ain| ),   1 <= i <= n

A measure of the sensitivity of a matrix and of numerical problems to round-off error is obtained from the condition number of the matrix, defined as

c(A) = ||A|| ||A^-1||

The residual vector is given by r = b - Ax, where x is the computed solution of the system Ax = b and the true solution is denoted by x*. Then

||r|| / ( c(A) ||b|| )  <=  ||x* - x|| / ||x*||  <=  c(A) ||r|| / ||b||

Example 3.9: Find the 1- and infinity-norms of the matrix A given by


A=

1 2 3 4 5

2 3 3 4 5

3 4 4 4 5

4 5 5 5 5

7.0001 9.0002 9.0004 9.0004 10.0003

Solution: ||A||1 is calculated from the maximum column sum ||A||1 = max { |aij|: 1 j n } ||A||1 = 7.0001 + 9.0002 + 9.0004 + 9.0004 + 10.0003 ||A||1 = 44.0014 ||A|| is the maximum row sum ||A|| = max { |aij|: 1 i n} ||A|| = 5 + 5 + 5 + 5 + 10.0003 ||A|| = 30.0003 In MATLAB environment, we create the matrix A by: >> A=[1 2 3 4 7-0001;2 3 4 5 9.0002;3 3 4 5 9.0004;4 4 4 5 9.0004;5 5 5 5 10.0003]


A =
    1.0000    2.0000    3.0000    4.0000    7.0001
    2.0000    3.0000    4.0000    5.0000    9.0002
    3.0000    3.0000    4.0000    5.0000    9.0004
    4.0000    4.0000    4.0000    5.0000    9.0004
    5.0000    5.0000    5.0000    5.0000   10.0003

>> N=[norm(A,1) norm(A,inf)]
N =
   44.0014   30.0003

It can clearly be seen that the results are equal to our hand computation:

norm(A,1)   = ||A||1   = 44.0014   (maximum column sum)
norm(A,inf) = ||A||inf = 30.0003   (maximum row sum)

Example 3.10: Consider the Hilbert matrix and its inverse given from MATLAB:

H = sym(hilb(3)) =
 [  1, 1/2, 1/3]
 [1/2, 1/3, 1/4]
 [1/3, 1/4, 1/5]

H^-1 = inv(sym(hilb(3))) =
 [  9,  -36,   30]
 [-36,  192, -180]
 [ 30, -180,  180]

Calculate the infinity norms of both matrices and find the condition number.

Solution:

Infinity norm (maximum row sum) of H:      ||H||inf = 1 + 1/2 + 1/3 = 11/6
Infinity norm (maximum row sum) of H^-1:   ||H^-1||inf = 36 + 192 + 180 = 408

c(H) = ||H||inf ||H^-1||inf = (11/6)(408) = 748

Example 3.11: If E is the perturbation of the matrix H in the example above due to internal round-offs, calculate the round-off error estimate given by

||x* - x|| / ||x*||  <=  c(H) ||E|| / ||H||

where ||E|| = 0.003, c(H) = 748, ||H|| = 1.8333.

Solution:

||x* - x|| / ||x*||  <=  748 (0.003 / 1.8333) = 1.224

This is an error of more than 100%. Therefore, it can be seen that more digits are required in the calculation to reduce floating-point errors.
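The same condition number can be computed directly in MATLAB (a small sketch; note that cond(H) with no second argument uses the 2-norm, so the infinity norm must be requested explicitly):

>> H = hilb(3);
>> c = cond(H, inf)      % norm(H,inf)*norm(inv(H),inf) = (11/6)*408 = 748
>> norm(H, inf)          % 1.8333
>> norm(inv(H), inf)     % 408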


CHAPTER FOUR APPLICATIONS

4.1 APPLICATIONS OF LINEAR ALGEBRAIC SYSTEMS
The solution of linear systems and the calculation of eigenvalues and eigenvectors are two linear algebra problems of special importance in applications, but because of the sizes of the matrices involved they must be done on a computer; it is only for very small matrices, or matrices of special type, that these two problems can be solved by hand. Applications of linear algebra to science and real-life problems are numerous. The solutions to many problems in physics, chemistry, biology, engineering, medicine, economics and computer graphics require tools from linear algebra, as do all branches of modern mathematics. We will discuss a few of these in this chapter and see how they are applied to real-life problems.

4.1.1 ECONOMICS
In order to manipulate and understand the economy of a country or region, one needs to come up with a model based on the various sections or sectors of that economy.


The Leontief model is an attempt in this direction. Wassily W. Leontief (1906 - 1999), a Russian economist, was cited for his creation, the input-output technique. Input was defined by him as the goods and services that each industry buys from all the other industries, and output as the products each industry sells to the others. By this method, the variation in the flow of goods from one industry to another can be graphically demonstrated. Economists use his method to analyze, plan, and predict economic changes. The Leontief model represents the economy as a system of linear equations. There are two types:
a) The Leontief closed model: Consider an economy consisting of n interdependent industries (sectors) s1, . . . , sn, meaning each industry consumes some of the goods produced by the other industries, including itself. For example, the Power Holding Company of Nigeria (PHCN) consumes some of its own power. We say such an economy is closed if it satisfies its own needs. Let mij be the number of units produced by industry si that are necessary to produce one unit of industry sj. If pk is the production level of industry sk, then mij pj represents the number of units produced by si and consumed by industry sj. Then the total number of units produced by industry si is given by

p1 mi1 + p2 mi2 + . . . + pn min


In order to have a balanced economy, the total production of each industry must be equal to its total consumption. This gives the linear system

m11 p1 + m12 p2 + . . . + m1n pn = p1
m21 p1 + m22 p2 + . . . + m2n pn = p2
.
.
mn1 p1 + mn2 p2 + . . . + mnn pn = pn

If

     m11  m12 . . . m1n            p1
M =  m21  m22 . . . m2n    and p = p2
      .    .         .              .
     mn1  mn2 . . . mnn            pn

the system can be rewritten as Mp = p, where M is called the input-output matrix.


Example 4.1: Suppose the economy of Gwagwalada depends on three industries: service, electricity, and oil production. Monitoring these three industries over a period of one year, the following observations are drawn:
1. To produce one unit worth of service, the service industry must consume 0.3 units of its own production, 0.3 units of electricity, and 0.3 units of oil to run its operation.
2. To produce one unit of electricity, the power generating plant must buy 0.4 units of service, 0.1 unit of its own production, and 0.5 units of oil.
3. Finally, the oil production company requires 0.3 units of service, 0.6 units of electricity and 0.2 units of its own production to produce one unit of oil.
Find the production level of each of these industries in order to satisfy the external and the internal demands, assuming that the model is closed.
Solution: Consider the following variables:
1. p1 = production level of the service industry
2. p2 = production level of the power generating plant
3. p3 = production level of the oil production company
Since the model is closed, the total consumption of each industry must equal its total production.

0.3p1 + 0.3p2 + 0.3p3 = p1
0.4p1 + 0.1p2 + 0.5p3 = p2
0.3p1 + 0.6p2 + 0.2p3 = p3

so that

     0.3  0.3  0.3            p1
M =  0.4  0.1  0.5    and p = p2
     0.3  0.6  0.2            p3

Since each column sums to one, we write the above system as (M - I)p = 0; that is,

Mp = p  =>  Mp - p = 0  =>  Mp - Ip = 0  =>  (M - I)p = 0

        -0.7   0.3   0.3
M - I =  0.4  -0.9   0.5
         0.3   0.6  -0.8

Using the MATLAB rref command:

>> rref(M-I)
ans =
    1.0000         0   -0.8235
         0    1.0000   -0.9216
         0         0         0

To solve the system, we let p3 = t; the solution is

p1 = 0.8235t
p2 = 0.9216t
p3 = t

The values must be non-negative for the model to make sense, i.e. t ≥ 0. Taking t = 100,

p1 = 82.35 units
p2 = 92.16 units
p3 = 100 units

b) The Leontief open model: The first model treats the case where no goods leave or enter the economy, but this does not often happen in reality. For example, let di be the demand from the ith outside industry, and let pi and mij be as in the closed model discussed above; then

pi = mi1 p1 + mi2 p2 + . . . + min pn + di     (for i = 1, . . . , n)

p = Mp + d     (M and p are the same as in the closed model)

d = [d1; d2; . . . ; dn]     (d = demand vector)

To solve the linear system p = Mp + d:

p = Mp + d  =>  d = p - Mp  =>  d = Ip - Mp  =>  d = (I - M)p  =>  p = (I - M)^-1 d

The matrix (I - M) is required to be invertible, which might not always be the case.

Example 4.2: Consider an open economy with three industries: a coal mining operation, an electricity generating plant, and an auto-manufacturing plant. To produce one naira of coal, the mining operation must purchase 10 kobo of its own production, 30 kobo of electricity and 10 kobo worth of automobile for its transportation. To produce one naira of electricity, it takes 25 kobo of coal, 40 kobo of electricity and 15 kobo of automobile. Finally, to produce one naira worth of automobile, the auto-manufacturing plant must purchase 20 kobo of coal, 50 kobo of electricity, and consume 10 kobo of automobile. Assume that during a period of one week, the economy has an exterior demand for 50,000 naira worth of coal, 75,000 naira worth of electricity, and 125,000 naira worth of autos. Find the production level of each of the three industries in that period of one week that will exactly satisfy both the internal and the external demands.
Solution: The input-output matrix of the economy is


M = [ 0.10  0.25  0.20 ]
    [ 0.30  0.40  0.50 ]
    [ 0.10  0.15  0.10 ]

and the demand vector is

d = [  50,000 ]
    [  75,000 ]
    [ 125,000 ]

Using p = (I - M)^(-1) d:

I - M = [ 1-0.1   -0.25   -0.2  ]   [  0.90  -0.25  -0.20 ]
        [ -0.3    1-0.4   -0.5  ] = [ -0.30   0.60  -0.50 ]
        [ -0.1    -0.15   1-0.1 ]   [ -0.10  -0.15   0.90 ]

In MATLAB, (I - M)^(-1) = inv(I - M) =

[ 1.4646  0.8031  0.7717 ]
[ 1.0079  2.4882  1.6063 ]
[ 0.3307  0.5039  1.4646 ]

Multiplying by the demand vector d to get p:

p = [ 1.4646  0.8031  0.7717 ] [  50,000 ]
    [ 1.0079  2.4882  1.6063 ] [  75,000 ]
    [ 0.3307  0.5039  1.4646 ] [ 125,000 ]

p =
   1.0e+005 *
    2.2992
    4.3780
    2.3740
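The same production vector can be obtained in MATLAB without forming the inverse explicitly; the backslash operator is usually preferred because it solves the system by factorization rather than by explicit inversion. A minimal sketch (the variable names are illustrative):

% Open Leontief model of Example 4.2: solve (I - M)p = d
M = [0.10 0.25 0.20;
     0.30 0.40 0.50;
     0.10 0.15 0.10];
d = [50000; 75000; 125000];

p = (eye(3) - M) \ d    % approximately [229925; 437797.5; 237402.5]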

This agrees, to display precision, with the theoretical result p1 = 229,925, p2 = 437,797.5, p3 = 237,402.5. The total output of the coal-mining operation is 229,925 naira, the total output of the electricity-generating plant is 437,797.50 naira, and the total output of the auto-manufacturing plant is 237,402.50 naira.

4.1.2 MARKOV CHAINS
A Markov chain describes a random process in which events are discrete rather than continuous, and the future development of each event is either independent of all historical events or dependent only on the immediately preceding event. Suppose there is a physical quantity or mathematical system that has n possible states and, at any time, the system is in one and only one of its n states. Assume also that at a given observation period, say the kth period, the probability of the system being in a particular state depends only on its state at the (k-1)st period. Such a system is called a Markov chain or Markov process. It is described by a square matrix with non-negative entries whose row sums all equal 1; this type of matrix is known as a stochastic matrix or transition matrix.
Example 4.3: Suppose that the 2003 state of land use in a local government area of 50 square miles is:
I.   Residential use = 30%
II.  Commercial use = 20%
III. Industrial use = 50%

Find the states in 2009, 2014, and 2019, assuming the transition probabilities for 5-year intervals are given by the matrix

S = [ 0.8  0.1  0.1 ]
    [ 0.1  0.7  0.2 ]
    [ 0    0.1  0.9 ]

Solution: From the matrix S and the 2003 state of land use, we compute the 2009 state of land use. If a is the land-use vector, then

aT = [a1  a2  a3] = [30  20  50]

Multiplying aT by S:

aT S = [30  20  50] [ 0.8  0.1  0.1 ]
                    [ 0.1  0.7  0.2 ]
                    [ 0    0.1  0.9 ]

Using MATLAB, aT*S = bT = [26  22  52]. Therefore, in 2009:
Residential use = 26% of 50 = 13 square miles
Commercial use = 22% of 50 = 11 square miles
Industrial use = 52% of 50 = 26 square miles

Multiplying again by S,

bT S = cT = [23.0  23.2  53.8]

so in 2014:
Residential use = 23.0% of 50 = 11.5 square miles
Commercial use = 23.2% of 50 = 11.6 square miles
Industrial use = 53.8% of 50 = 26.9 square miles

One more step gives

cT S = dT = [20.72  23.92  55.36]

so in 2019:
Residential use = 20.72% of 50 = 10.36 square miles
Commercial use = 23.92% of 50 = 11.96 square miles
Industrial use = 55.36% of 50 = 27.68 square miles

From these projections we can see that residential land use decreases with time, commercial land use grows slightly every five years, and industrial land use increases steadily.
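The year-by-year projection can also be generated with a short MATLAB loop. The sketch below assumes the transition matrix S and the 2003 land-use row vector are entered as in the example:

% Transition matrix for 5-year intervals (each row sums to 1)
S = [0.8 0.1 0.1;
     0.1 0.7 0.2;
     0.0 0.1 0.9];

a = [30 20 50];      % 2003 distribution: residential, commercial, industrial (%)

for k = 1:3          % three 5-year steps: 2009, 2014, 2019
    a = a*S;
    disp(a)
end
% Displays [26 22 52], [23.0 23.2 53.8], [20.72 23.92 55.36]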


4.1.3 ELECTRICAL NETWORK
A network consists of branches and nodes. A typical example is a street network, where the branches are the streets and the nodes are the intersections. Many network problems can be modelled by systems of linear equations. The basic laws governing this process are:
a) Ohm's Law: The voltage drop E across a resistor is proportional to the current I flowing through it, E = IR, where the constant of proportionality R is called the resistance.
b) Kirchhoff's Voltage Law: The algebraic sum of all the instantaneous voltage drops around any closed loop is zero; equivalently, the voltage impressed on a closed loop equals the sum of the voltage drops in the rest of the loop (a closed loop is a complete circuit for an electrical current).
c) Kirchhoff's Current Law: At any node of a circuit, the sum of the inflowing currents equals the sum of the outflowing currents.
The linearity of this model appears in the matrix equation Ri = v. The simplest sort of electrical network involves only batteries and resistors; examples of resistors include light bulbs, motors, and heaters.
Example 4.4: Determine the branch currents I1, I2, I3 in the network shown below:


Solution: Kirchhoff's current law provides one equation for each node.

Node    Current in    Current out
A       I1            I2 + I3
B       I2 + I3       I1

From Kirchhoff's current law:
Node A => I1 = I2 + I3
Node B => I2 + I3 = I1
The voltage law provides one equation for each loop. In the loop at the top, we choose a counterclockwise direction, which coincides with the directions shown for I1 and I3 and with the current flow from the battery.

For the top loop:
Ri voltage drops: 20I1 + 10I3        Voltage sources: 60
so Kirchhoff's voltage law gives 20I1 + 10I3 = 60.

For the bottom loop, the counterclockwise direction coincides with the direction of I2 but is opposite to I3:
Ri voltage drops: 5I2 - 10I3         Voltage sources: 50
so 5I2 - 10I3 = 50.

For the third (outer) loop, Kirchhoff's voltage law gives 20I1 + 5I2 = 60 + 50, which is simply the sum of the two loop equations and adds no new information.

Rearranging, we have:
I1 - I2 - I3 = 0
20I1 + 10I3 = 60
5I2 - 10I3 = 50

Putting the above equations in the form Ri = v:

[  1  -1   -1 ] [ I1 ]   [  0 ]
[ 20   0   10 ] [ I2 ] = [ 60 ]
[  0   5  -10 ] [ I3 ]   [ 50 ]

Solving by LU factorization, Ri = LUi = v, where

R = [  1  -1   -1 ]   [ 1    0    0 ] [ u11  u12  u13 ]
    [ 20   0   10 ] = [ l21  1    0 ] [  0   u22  u23 ]
    [  0   5  -10 ]   [ l31  l32  1 ] [  0    0   u33 ]

Column 1: r11 = u11 = 1;  r21 = l21u11 = 20, so l21 = 20/1 = 20;  r31 = l31u11 = 0, so l31 = 0/1 = 0.

Column 2: r12 = u12 = -1;  r22 = l21u12 + u22 = 0, so u22 = 0 - 20(-1) = 20;  r32 = l31u12 + l32u22 = 5, so l32 = 5/20 = 0.25.

Column 3: r13 = u13 = -1;  r23 = l21u13 + u23 = 10, so u23 = 10 - 20(-1) = 30;  r33 = l31u13 + l32u23 + u33 = -10, so u33 = -10 - 0 - (0.25)(30) = -17.5.


R = LU:

[  1  -1   -1 ]   [  1   0     0 ] [ 1  -1    -1   ]
[ 20   0   10 ] = [ 20   1     0 ] [ 0  20    30   ]
[  0   5  -10 ]   [  0   0.25  1 ] [ 0   0  -17.5  ]
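For larger systems the same factorization can be generated by a short MATLAB loop. The sketch below implements Doolittle's method without pivoting, so it reproduces the hand-computed factors above; MATLAB's built-in lu command, used later in this example, applies partial pivoting and therefore returns different factors:

% LU factorization without pivoting (Doolittle's method) - a sketch
R = [1 -1 -1; 20 0 10; 0 5 -10];
n = size(R, 1);
L = eye(n);
U = zeros(n);
for k = 1:n
    % k-th row of U, then the k-th column of L below the diagonal
    U(k, k:n)   = R(k, k:n) - L(k, 1:k-1)*U(1:k-1, k:n);
    L(k+1:n, k) = (R(k+1:n, k) - L(k+1:n, 1:k-1)*U(1:k-1, k)) / U(k, k);
end
% Result: L = [1 0 0; 20 1 0; 0 0.25 1], U = [1 -1 -1; 0 20 30; 0 0 -17.5]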

First, we solve Ly = v:

[  1   0     0 ] [ y1 ]   [  0 ]
[ 20   1     0 ] [ y2 ] = [ 60 ]
[  0   0.25  1 ] [ y3 ]   [ 50 ]

y1 = 0                  => y1 = 0
20y1 + y2 = 60          => y2 = 60
0.25y2 + y3 = 50        => y3 = 50 - 15 = 35

Then we solve Ui = y:

[ 1  -1    -1   ] [ I1 ]   [  0 ]
[ 0  20    30   ] [ I2 ] = [ 60 ]
[ 0   0  -17.5  ] [ I3 ]   [ 35 ]

-17.5 I3 = 35           => I3 = 35/(-17.5) = -2
20I2 + 30I3 = 60        => I2 = (60 + 60)/20 = 6
I1 - I2 - I3 = 0        => I1 = 6 + (-2) = 4

The overall power delivered by an electric current depends on the amount of current flowing. Electric currents flow because atoms and molecules carry two types of electrical charge, positive and negative, and opposite charges attract each other. Direct current (DC) flows in one direction, while alternating current (AC) periodically reverses direction because of the way it is generated. The network in this example contains only batteries and resistors, so all the branch currents are direct currents; a negative value simply means that the current flows opposite to the reference direction assumed when the equations were set up. Thus I1 = 4 amps and I2 = 6 amps flow in the assumed directions, while I3 = -2 amps is a 2-amp current flowing against the direction assumed for I3.
With MATLAB,

>> A = [1 -1 -1; 20 0 10; 0 5 -10]
A =
     1    -1    -1
    20     0    10
     0     5   -10

>> b = [0; 60; 50]
b =
     0
    60
    50

>> [L, U] = lu(A)



L =
    0.0500   -0.2000    1.0000
    1.0000         0         0
         0    1.0000         0

U =
   20.0000         0   10.0000
         0    5.0000  -10.0000
         0         0   -3.5000

>> i = U\(L\b)
i =
     4
     6
    -2

MATLAB's lu command uses partial pivoting, so the factors L and U it returns differ from those computed by hand above, but the solution of the system is the same: I1 = 4 amps, I2 = 6 amps, and I3 = -2 amps. As before, all three branch currents are direct currents; the negative sign on I3 only indicates that 2 amps flow opposite to the assumed direction.
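As a quick cross-check, the whole system can also be solved in one step with the backslash operator, which performs a pivoted LU factorization and both triangular solves internally (this sketch assumes A and b are still in the workspace from the session above):

i = A\b            % returns [4; 6; -2], the same branch currents
norm(A*i - b)      % residual; a value of (near) zero confirms the solution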


CHAPTER FIVE
SUMMARY, CONCLUSION, RECOMMENDATIONS

5.1 SUMMARY

One of the most important problems in technical computing is the solution of systems of simultaneous linear equations. The objective of this research is to show that numerical computation is the best practical way to deal with large systems; that is, to solve large systems of algebraic equations, numerical computing with MATLAB is one of the best alternatives. Several methods for solving systems of algebraic equations have been used, such as LU factorization, Cholesky factorization, iterative methods, and the eigenvalue method. In each method, problems were solved theoretically, and the same answers were obtained when the problems were solved using MATLAB. Chapter one deals with the introduction, scope, limitations, and methodology of solutions of linear algebraic systems. Chapter two discusses the history of the subject and reviews the literature, identifying mathematical scholars who contributed immensely to linear algebra and to the solution of linear systems; it also identifies MATLAB as a tool for enhancing the solution of linear systems. Chapter three presents the methods, compatible with MATLAB, for solving linear systems to obtain a unique solution. At the end of each problem, MATLAB commands are given to simplify the theoretical method.


Chapter four shows how real-life problems are reduced to linear systems and how their solutions are obtained using the methods of solving linear systems; the solutions are then interpreted.

5.2 CONCLUSION

Numerical linear algebra deals with numerical solutions of linear algebra problems. It has grown into an independent topic for research and teaching in recent years, for an obvious reason: numerical linear algebra techniques are essential ingredients of scientific computing and are routinely used to solve real-life problems in signal processing, control theory, heat transfer, fluid dynamics, biomedical engineering, vibration analysis, statistics, bioscience, and economics. As can be seen, the theoretical approach to solving large systems of linear equations is lengthy and tedious; numerical methods are therefore the best approach for dealing with large systems of linear equations. Although problems arise in computation, such as errors from wrongly entered data and errors from floating-point arithmetic, the numerical approach is still regarded as the best in accuracy and speed, and it is considered reliable. Therefore, computers can be used with minimum effort to solve systems of linear equations to a reasonable degree of accuracy and efficiency using computing programs such as MATLAB.

5.3 RECOMMENDATIONS

The MATLAB high-performance language for technical computing integrates computation, visualization, and programming in an easy-to-use environment where problems and solutions are expressed in familiar mathematical notation. It is a very useful mathematical computing package, recommended for use at the tertiary level of study and at all levels of computing involving large data. A little familiarity is all that is needed to start using the program. The guiding principle behind MATLAB is "Do the Right Thing": doing what is best for everyone in the long term, and believing that "right" answers exist.


REFERENCES
Bronson, Richard (1988). Schaum's Outline of Matrix Operations. McGraw-Hill, New York.

Heath, Michael T. (2002). Scientific Computing: An Introductory Survey. McGraw-Hill Companies, Inc., New York.

Kreyszig, Erwin (1999). Advanced Engineering Mathematics, 8th Edition (International Edition). John Wiley & Sons.

Lay, David C. (2005). Linear Algebra and Its Applications. Addison Wesley Publishing Company.

Ogunfiditimi, F. O. (2006). MTH 206 (Numerical Analysis) Lecture Notes.

Recktenwald, Gerald W. (2000). Introduction to Numerical Methods with MATLAB: Implementations and Applications. Prentice-Hall, Upper Saddle River, NJ.

Scheid, Francis (1989). Schaum's Outline of Numerical Analysis. McGraw-Hill, New York.

Turner, Peter R. (1994). Numerical Analysis. Scholium International, Inc.

Ukpong, E. O. (2006). MTH 204 (Linear Algebra II) Lecture Notes.

Dawkins, Paul (2000). Linear Algebra. http://tutorial.math.lamar.edu/terms.aspx

http://en.wikipedia.org/wiki

http://web.mit.edu/18.06/www/ (18.06 Linear Algebra, Spring 2008)

Microsoft Encarta (Student Edition), 2008.

www.mathworks.com

