Appendix A: Matrix Mathematics
A.1 DEFINITIONS
The mathematical description of many physical problems is often simplified by
the use of rectangular arrays of scalar quantities of the form
$$
[A] = \begin{bmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \cdots & a_{mn}
\end{bmatrix} \tag{A.1}
$$
Such an array is known as a matrix, and the scalar values that compose the array are the elements of the matrix. The position of each element $a_{ij}$ is identified by the row subscript i and the column subscript j.
The numbers of rows and columns determine the order of a matrix. A matrix having m rows and n columns is said to be of order m by n (usually denoted m × n). If the number of rows equals the number of columns, the matrix is a square matrix and is said to be of order n. A matrix having only one row is called a row matrix or row vector. Similarly, a matrix with a single column is a column matrix or column vector.
If the rows and columns of a matrix [A] are interchanged, the resulting matrix is known as the transpose of [A], denoted by $[A]^T$. For the matrix defined in Equation A.1, the transpose is
$$
[A]^T = \begin{bmatrix}
a_{11} & a_{21} & \cdots & a_{m1} \\
a_{12} & a_{22} & \cdots & a_{m2} \\
\vdots & \vdots & \ddots & \vdots \\
a_{1n} & a_{2n} & \cdots & a_{mn}
\end{bmatrix} \tag{A.2}
$$
and we observe that, if [A] is of order m by n, then $[A]^T$ is of order n by m. For
example, if [A] is given by

$$
[A] = \begin{bmatrix} 2 & 1 & 3 \\ 4 & 0 & 2 \end{bmatrix}
$$

the transpose of [A] is

$$
[A]^T = \begin{bmatrix} 2 & 4 \\ 1 & 0 \\ 3 & 2 \end{bmatrix}
$$
A diagonal matrix is a square matrix in which all elements off the main diagonal are 0; that is, $a_{ij} = 0$ for $i \neq j$. For example,

$$
[A] = \begin{bmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 3 \end{bmatrix}
$$

is a diagonal matrix.
An identity matrix (denoted [I]) is a diagonal matrix in which the value of the nonzero terms is unity. Hence,

$$
[A] = [I] = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
$$

is an identity matrix.
A null matrix (also known as a zero matrix, denoted [0]) is a matrix of any order in which the value of all elements is 0.
A symmetric matrix is a square matrix composed of elements such that the nondiagonal values are symmetric about the main diagonal. Mathematically, symmetry is expressed as $a_{ij} = a_{ji}$ for $i \neq j$. For example, the matrix

$$
[A] = \begin{bmatrix} 2 & 2 & 0 \\ 2 & 4 & 3 \\ 0 & 3 & 1 \end{bmatrix}
$$

is a symmetric matrix. Note that the transpose of a symmetric matrix is the same as the original matrix.
A skew symmetric matrix is a square matrix in which the diagonal terms $a_{ii}$ have a value of 0 and the off-diagonal terms have values such that $a_{ij} = -a_{ji}$. An example of a skew symmetric matrix is

$$
[A] = \begin{bmatrix} 0 & 2 & 0 \\ -2 & 0 & 3 \\ 0 & -3 & 0 \end{bmatrix}
$$

For a skew symmetric matrix, we observe that the transpose is obtained by changing the algebraic sign of each element of the matrix.
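As a quick illustrative check (added here, not part of the original text), the defining properties above can be verified numerically. The following Python sketch uses numpy and the example matrices from this section.

```python
import numpy as np

# Symmetric example from this section: the transpose equals the matrix.
A_sym = np.array([[2, 2, 0],
                  [2, 4, 3],
                  [0, 3, 1]])
assert (A_sym.T == A_sym).all()

# Skew symmetric example: zero diagonal and a_ij = -a_ji, so the
# transpose equals the negative of the matrix.
A_skew = np.array([[0, 2, 0],
                   [-2, 0, 3],
                   [0, -3, 0]])
assert (A_skew.T == -A_skew).all()

# An m x n matrix has an n x m transpose (Equation A.2).
A = np.array([[2, 1, 3],
              [4, 0, 2]])      # order 2 x 3
assert A.T.shape == (3, 2)     # order 3 x 2
```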
A.2 ALGEBRAIC OPERATIONS

Addition (and subtraction) of matrices is defined only for matrices of the same order and is accomplished by adding (or subtracting) the corresponding elements. The sum of two m × n matrices [A] and [B] is the m × n matrix
$$
[C] = [A] + [B] \tag{A.3}
$$

where

$$
c_{ij} = a_{ij} + b_{ij} \qquad i = 1, \ldots, m; \quad j = 1, \ldots, n \tag{A.4}
$$
The operation of matrix subtraction is similarly defined. Matrix addition and subtraction are commutative and associative; that is,

$$
[A] + [B] = [B] + [A] \tag{A.5}
$$

$$
[A] + ([B] + [C]) = ([A] + [B]) + [C] \tag{A.6}
$$
Multiplication of a matrix by a scalar u results in the matrix

$$
[B] = u[A] \tag{A.7}
$$

in which each element of [A] is multiplied by the scalar:

$$
b_{ij} = u\,a_{ij} \qquad i = 1, \ldots, m; \quad j = 1, \ldots, n \tag{A.8}
$$

The matrix product

$$
[C] = [A][B] \tag{A.9}
$$
exists only if the number of columns in [A] is equal to the number of rows in [B]. If this condition is satisfied, the matrices are said to be conformable for multiplication. If [A] is of order m × p and [B] is of order p × n, the matrix product [C] = [A][B] is an m × n matrix having elements defined by
$$
c_{ij} = \sum_{k=1}^{p} a_{ik} b_{kj} \tag{A.10}
$$
Thus, each element $c_{ij}$ is the sum of products of the elements in the ith row of [A] and the corresponding elements in the jth column of [B]. When referring to the matrix product [A][B], matrix [A] is called the premultiplier and matrix [B] is the postmultiplier.
In general, matrix multiplication is not commutative; that is,

$$
[A][B] \neq [B][A] \tag{A.11}
$$
Matrix multiplication does satisfy the associative and distributive laws, and we can therefore write

$$
\begin{aligned}
([A][B])[C] &= [A]([B][C]) \\
[A]([B] + [C]) &= [A][B] + [A][C] \\
([A] + [B])[C] &= [A][C] + [B][C]
\end{aligned} \tag{A.12}
$$
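Equation A.10 translates directly into code. The following Python sketch (an illustration added here, not part of the original text) implements the triple-index product and makes the non-commutativity of Equation A.11 concrete; the 2 × 2 matrices are chosen arbitrarily.

```python
def matmul(A, B):
    """Matrix product per Equation A.10: c_ij = sum over k of a_ik * b_kj.
    Requires the number of columns of A to equal the number of rows of B."""
    m, p = len(A), len(A[0])
    assert p == len(B), "matrices not conformable for multiplication"
    n = len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))   # [[2, 1], [4, 3]]
print(matmul(B, A))   # [[3, 4], [1, 2]] -- [A][B] != [B][A] (Equation A.11)
```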
A.3 DETERMINANTS
The determinant of a square matrix is a scalar value that is unique for a given matrix. The determinant of an n × n matrix is represented symbolically as
$$
\det[A] = |A| = \begin{vmatrix}
a_{11} & a_{12} & \cdots & a_{1n} \\
a_{21} & a_{22} & \cdots & a_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{n1} & a_{n2} & \cdots & a_{nn}
\end{vmatrix} \tag{A.13}
$$
The determinant of a 2 × 2 matrix is defined as

$$
|A| = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{vmatrix} = a_{11}a_{22} - a_{12}a_{21} \tag{A.15}
$$

Given the definition of Equation A.15, the determinant of a square matrix of any order can be determined. Next, consider the determinant of a 3 × 3 matrix

$$
|A| = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix} \tag{A.16}
$$

defined as

$$
|A| = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31}) \tag{A.17}
$$

Note that the expressions in parentheses are the determinants of the second-order matrices obtained by striking out the first row and the first, second, and third columns, respectively. These are known as minors. A minor of a determinant is
the determinant of the square array that remains after striking out a row and a column; the minor $|M_{ij}|$ is obtained by deleting row i and column j. In this notation, Equation A.17 can be written

$$
|A| = a_{11}|M_{11}| - a_{12}|M_{12}| + a_{13}|M_{13}| \tag{A.18}
$$
and the determinant is said to be expanded in terms of the cofactors of the first row. The cofactor of an element $a_{ij}$ is obtained by applying the appropriate algebraic sign to the minor $|M_{ij}|$ as follows. If the sum of row number i and column number j is even, the sign of the cofactor is positive; if i + j is odd, the sign of the cofactor is negative. Denoting the cofactor as $C_{ij}$, we can write

$$
C_{ij} = (-1)^{i+j}|M_{ij}| \tag{A.19}
$$
The determinant given in Equation A.18 can then be expressed in terms of cofactors as

$$
|A| = a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13} \tag{A.20}
$$
The determinant of a square matrix of any order can be obtained by expanding the determinant in terms of the cofactors of any row i as

$$
|A| = \sum_{j=1}^{n} a_{ij}C_{ij} \tag{A.21}
$$

or any column j as

$$
|A| = \sum_{i=1}^{n} a_{ij}C_{ij} \tag{A.22}
$$
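Equation A.21 leads naturally to a recursive routine. The following Python sketch (an added illustration, not part of the original text) expands along the first row and uses the symmetric matrix from Section A.1 as a test case.

```python
def det(A):
    """Determinant by cofactor expansion along the first row
    (Equations A.18 through A.21)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor |M_1j|: strike out row 1 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        # Cofactor sign (-1)^(i+j) per Equation A.19 (here i = 1, and
        # j is zero-based, so the sign alternates as (-1)**j).
        total += (-1) ** j * A[0][j] * det(minor)
    return total

A = [[2, 2, 0], [2, 4, 3], [0, 3, 1]]
print(det(A))   # 2*(4 - 9) - 2*(2 - 0) + 0 = -14
```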
A.4 MATRIX INVERSION

The inverse of a square matrix [A] of order n, denoted $[A]^{-1}$, is the square matrix satisfying

$$
[A]^{-1}[A] = [A][A]^{-1} = [I] \tag{A.23}
$$

that is, the product of a square matrix and its inverse is the identity matrix of order n. The concept of the inverse of a matrix is of prime importance in solving simultaneous linear equations by matrix methods. Consider the algebraic system
$$
\begin{aligned}
a_{11}x_1 + a_{12}x_2 + a_{13}x_3 &= y_1 \\
a_{21}x_1 + a_{22}x_2 + a_{23}x_3 &= y_2 \\
a_{31}x_1 + a_{32}x_2 + a_{33}x_3 &= y_3
\end{aligned} \tag{A.24}
$$

which can be written in the matrix form

$$
[A]\{x\} = \{y\} \tag{A.25}
$$
where

$$
[A] = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \tag{A.26}
$$

is the 3 × 3 coefficient matrix,

$$
\{x\} = \begin{Bmatrix} x_1 \\ x_2 \\ x_3 \end{Bmatrix} \tag{A.27}
$$

is the 3 × 1 column matrix (vector) of unknowns, and

$$
\{y\} = \begin{Bmatrix} y_1 \\ y_2 \\ y_3 \end{Bmatrix} \tag{A.28}
$$

is the 3 × 1 column matrix (vector) representing the right-hand sides of the equations (the forcing functions).
If the inverse of matrix [A] can be determined, we can multiply both sides of Equation A.25 by the inverse to obtain

$$
[A]^{-1}[A]\{x\} = [A]^{-1}\{y\} \tag{A.29}
$$

Noting that

$$
[A]^{-1}[A]\{x\} = [I]\{x\} = \{x\} \tag{A.30}
$$

the solution for the simultaneous equations is given by Equation A.29 directly as

$$
\{x\} = [A]^{-1}\{y\} \tag{A.31}
$$
While presented in the context of a system of three equations, the result represented by Equation A.31 is applicable to any number of simultaneous algebraic
equations and gives the unique solution for the system of equations.
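As a concrete illustration (added here; the coefficient values are invented for the example), Equation A.31 can be applied numerically with numpy.

```python
import numpy as np

# An arbitrary nonsingular 3 x 3 system [A]{x} = {y}.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 2.0]])
y = np.array([7.0, 13.0, 7.0])

x = np.linalg.inv(A) @ y        # {x} = [A]^-1 {y}, Equation A.31
print(x)                        # [1. 2. 3.]
print(np.allclose(A @ x, y))    # True: the solution satisfies Equation A.25
```

In production code, np.linalg.solve(A, y) is preferred over forming the inverse explicitly, since it is cheaper and numerically more stable.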
The inverse of matrix [A] can be determined in terms of its cofactors and determinant as follows. Let the cofactor matrix [C] be the square matrix having as elements the cofactors defined in Equation A.19. The adjoint of [A] is defined as

$$
\operatorname{adj}[A] = [C]^T \tag{A.32}
$$

The inverse of [A] is then given by

$$
[A]^{-1} = \frac{\operatorname{adj}[A]}{|A|} \tag{A.33}
$$
If the determinant of [A] is 0, Equation A.33 shows that the inverse does not exist. In this case, the matrix is said to be singular and Equation A.31 provides no solution for the system of equations. Singularity of the coefficient matrix indicates one of two possibilities: (1) no solution exists or (2) multiple (nonunique) solutions exist. In the latter case, the algebraic equations are not linearly independent.
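Equations A.32 and A.33 can be implemented directly. The following Python sketch (an added illustration, not from the original text) builds the cofactor matrix, transposes it to form the adjoint, and divides by the determinant, using exact Fraction arithmetic; the det routine repeats the cofactor expansion of Equation A.21.

```python
from fractions import Fraction

def minor(A, i, j):
    # Strike out row i and column j to form the minor array.
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    # Cofactor expansion along the first row (Equation A.21).
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

def inverse(A):
    """[A]^-1 = adj[A] / |A| (Equation A.33), with adj[A] = [C]^T
    (Equation A.32) and C_ij = (-1)^(i+j) |M_ij| (Equation A.19)."""
    n, d = len(A), det(A)
    if d == 0:
        raise ValueError("matrix is singular; no inverse exists")
    # Element (j, i) of the inverse is the cofactor C_ij over |A|;
    # the swapped indices perform the transpose.
    return [[Fraction((-1) ** (i + j) * det(minor(A, i, j)), d)
             for i in range(n)] for j in range(n)]

A = [[2, 1, 1], [1, 3, 2], [1, 0, 2]]
Ainv = inverse(A)
# Check Equation A.23: [A]^-1 [A] = [I].
print([[sum(Ainv[i][k] * A[k][j] for k in range(3)) for j in range(3)]
       for i in range(3)])   # identity matrix (entries are Fractions)
```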
The inverse of a matrix can also be obtained by the Gauss-Jordan reduction procedure, in which a sequence of elementary row operations, each equivalent to premultiplication by a simple matrix, reduces [A] to the identity matrix. To illustrate, consider a 2 × 2 matrix

$$
[A] = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
$$

First, divide the first row by $a_{11}$, which is equivalent to the matrix multiplication

$$
[B_1][A] = \begin{bmatrix} \dfrac{1}{a_{11}} & 0 \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}
= \begin{bmatrix} 1 & \dfrac{a_{12}}{a_{11}} \\ a_{21} & a_{22} \end{bmatrix} \tag{A.35}
$$
Next, multiply the first row by $a_{21}$ and subtract from the second row, which is equivalent to the matrix multiplication

$$
[B_2][B_1][A] = \begin{bmatrix} 1 & 0 \\ -a_{21} & 1 \end{bmatrix}
\begin{bmatrix} 1 & \dfrac{a_{12}}{a_{11}} \\ a_{21} & a_{22} \end{bmatrix}
= \begin{bmatrix} 1 & \dfrac{a_{12}}{a_{11}} \\ 0 & a_{22} - \dfrac{a_{21}a_{12}}{a_{11}} \end{bmatrix}
= \begin{bmatrix} 1 & \dfrac{a_{12}}{a_{11}} \\ 0 & \dfrac{|A|}{a_{11}} \end{bmatrix} \tag{A.36}
$$
Then multiply the second row by $a_{11}/|A|$:

$$
[B_3][B_2][B_1][A] = \begin{bmatrix} 1 & 0 \\ 0 & \dfrac{a_{11}}{|A|} \end{bmatrix}
\begin{bmatrix} 1 & \dfrac{a_{12}}{a_{11}} \\ 0 & \dfrac{|A|}{a_{11}} \end{bmatrix}
= \begin{bmatrix} 1 & \dfrac{a_{12}}{a_{11}} \\ 0 & 1 \end{bmatrix} \tag{A.37}
$$
Finally, multiply the second row by $a_{12}/a_{11}$ and subtract from the first row:

$$
[B_4][B_3][B_2][B_1][A] = \begin{bmatrix} 1 & -\dfrac{a_{12}}{a_{11}} \\ 0 & 1 \end{bmatrix}
\begin{bmatrix} 1 & \dfrac{a_{12}}{a_{11}} \\ 0 & 1 \end{bmatrix}
= \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = [I] \tag{A.38}
$$
Considering Equation A.23, we see that

$$
[A]^{-1} = [B_4][B_3][B_2][B_1] \tag{A.39}
$$
This application of the Gauss-Jordan procedure may appear cumbersome, but the
procedure is quite amenable to computer implementation.
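A minimal Python sketch of the procedure (an added illustration, without the row interchanges a robust implementation would need for zero pivots) applies the same elementary operations simultaneously to [A] and to an identity matrix; when [A] has been reduced to [I], the companion matrix has accumulated the product of Equation A.39 and is therefore $[A]^{-1}$.

```python
def gauss_jordan_inverse(A):
    """Invert [A] by Gauss-Jordan reduction (Equations A.35 through A.39)."""
    n = len(A)
    A = [row[:] for row in A]                        # work on a copy
    inv = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(n):
        pivot = A[k][k]
        if pivot == 0:
            raise ZeroDivisionError("zero pivot; row interchange required")
        # Divide row k by the pivot (the [B1]- and [B3]-type operations).
        A[k] = [v / pivot for v in A[k]]
        inv[k] = [v / pivot for v in inv[k]]
        # Subtract multiples of row k from every other row
        # (the [B2]- and [B4]-type operations).
        for i in range(n):
            if i != k:
                f = A[i][k]
                A[i] = [a - f * b for a, b in zip(A[i], A[k])]
                inv[i] = [a - f * b for a, b in zip(inv[i], inv[k])]
    return inv

print(gauss_jordan_inverse([[2.0, 1.0], [4.0, 3.0]]))
# [[1.5, -0.5], [-2.0, 1.0]]   (here |A| = 2)
```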
A.5 MATRIX PARTITIONING

Consider a system of n simultaneous equations written in matrix form as

$$
[A]\{X\} = \{F\} \tag{A.41}
$$

in which we want to eliminate the first p unknowns. The matrix equation can be written in partitioned form as
$$
\begin{bmatrix} [A_{11}] & [A_{12}] \\ [A_{21}] & [A_{22}] \end{bmatrix}
\begin{Bmatrix} \{X_1\} \\ \{X_2\} \end{Bmatrix}
= \begin{Bmatrix} \{F_1\} \\ \{F_2\} \end{Bmatrix} \tag{A.42}
$$
where the orders of the submatrices are as follows:

$$
\begin{aligned}
[A_{11}] &: \; p \times p \\
[A_{12}] &: \; p \times (n - p) \\
[A_{21}] &: \; (n - p) \times p \\
[A_{22}] &: \; (n - p) \times (n - p) \\
\{X_1\}, \{F_1\} &: \; p \times 1 \\
\{X_2\}, \{F_2\} &: \; (n - p) \times 1
\end{aligned} \tag{A.43}
$$
The complete set of equations can now be written in terms of the matrix partitions as

$$
[A_{11}]\{X_1\} + [A_{12}]\{X_2\} = \{F_1\} \tag{A.44}
$$

$$
[A_{21}]\{X_1\} + [A_{22}]\{X_2\} = \{F_2\} \tag{A.45}
$$
If the values of $\{X_1\}$ are known while those of $\{F_1\}$ are unknown (with $\{F_2\}$ known), Equation A.45 can be rearranged as

$$
[A_{22}]\{X_2\} = \{F_2\} - [A_{21}]\{X_1\} \tag{A.46}
$$

and solved for the remaining unknowns:

$$
\{X_2\} = [A_{22}]^{-1}(\{F_2\} - [A_{21}]\{X_1\}) \tag{A.47}
$$

The unknown values of $\{F_1\}$ can then be calculated directly using the equations of the upper partition.
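In finite element practice, this partitioning commonly corresponds to prescribed displacements $\{X_1\}$ with unknown reactions $\{F_1\}$. The Python sketch below (an added illustration; all numerical values are invented) carries out the partitioned solution and verifies it against the full system.

```python
import numpy as np

# An arbitrary symmetric 3 x 3 system partitioned with p = 1.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
p = 1
A11, A12 = A[:p, :p], A[:p, p:]
A21, A22 = A[p:, :p], A[p:, p:]

X1 = np.array([0.5])             # prescribed values {X1}
F2 = np.array([2.0, 1.0])        # known right-hand sides {F2}

# Lower partition, Equation A.47: {X2} = [A22]^-1 ({F2} - [A21]{X1}).
X2 = np.linalg.solve(A22, F2 - A21 @ X1)

# Upper partition, Equation A.44: {F1} = [A11]{X1} + [A12]{X2}.
F1 = A11 @ X1 + A12 @ X2

# Verify against the unpartitioned system [A]{X} = {F}.
X = np.concatenate([X1, X2])
F = np.concatenate([F1, F2])
print(np.allclose(A @ X, F))     # True
```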
Appendix B: Equations of Elasticity
B.1 STRAIN-DISPLACEMENT RELATIONS
In general, the concept of normal strain is introduced and defined in the context of a uniaxial tension test. The elongated length L of a portion of the test specimen having original length $L_0$ (the gauge length) is measured and the corresponding normal strain defined as

$$
\epsilon = \frac{\Delta L}{L_0} = \frac{L - L_0}{L_0} \tag{B.1}
$$
which is simply interpreted as change in length per unit original length and is
observed to be a dimensionless quantity. Similarly, the idea of shear strain is often
introduced in terms of a simple torsion test of a bar having a circular cross section. In each case, the test geometry and applied loads are designed to produce a
simple, uniform state of strain dominated by one major component.
In real structures subjected to routine operating loads, strain is generally neither uniform nor limited to a single component. Instead, strain varies throughout the geometry and can be composed of up to six independent components, including both normal and shearing strains. Therefore, we are led to examine the appropriate definitions of strain at a point. For the general case, we denote u = u(x, y, z), v = v(x, y, z), and w = w(x, y, z) as the displacements in the x, y, and z coordinate directions, respectively. (The displacements may also vary with time; for now, we consider only the static case.) Figure B.1(a) depicts an infinitesimal element having undeformed edge lengths dx, dy, dz located at an arbitrary point (x, y, z) in a solid body. For simplicity, we first assume that this element is loaded in tension in the x direction only and examine the resulting deformation as shown (greatly exaggerated) in Figure B.1(b). Displacement of point P is u while that of point Q is $u + (\partial u/\partial x)\,dx$, such that the deformed length in the x direction is
given by
$$
dx' = dx + u_Q - u_P = dx + \left(u + \frac{\partial u}{\partial x}\,dx\right) - u = dx + \frac{\partial u}{\partial x}\,dx \tag{B.2}
$$
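Applying the definition of Equation B.1 to this deformed element gives the normal strain in the x direction as $\epsilon_x = (dx' - dx)/dx = \partial u/\partial x$. The short Python sketch below (an added illustration; the displacement field is invented for the example) checks this interpretation numerically.

```python
# Displacement field assumed for illustration: u(x) = 0.001 * x^2.
def u(x):
    return 0.001 * x**2

x, dx = 2.0, 1e-6                       # point of interest, element length
# Deformed length per Equation B.2: dx' = dx + (du/dx) dx.
dx_deformed = dx + (u(x + dx) - u(x))
strain = (dx_deformed - dx) / dx        # (dx' - dx)/dx
print(strain)                           # ~0.004
print(0.002 * x)                        # analytic du/dx at x = 2: 0.004
```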