
Ruhr-University Bochum Faculty of Civil and Environmental Engineering Institute of Mechanics

A Small Compendium on Vector and Tensor Algebra and Calculus


Klaus Hackl Mehdi Goodarzi

2010


Contents

Foreword

1 Vectors and tensors algebra
  1.1 Scalars and vectors
  1.2 Geometrical operations on vectors
  1.3 Fundamental properties
  1.4 Vector decomposition
  1.5 Dot product of vectors
  1.6 Index notation
  1.7 Tensors of second order
  1.8 Dyadic product
  1.9 Tensors of higher order
  1.10 Coordinate transformation
  1.11 Eigenvalues and eigenvectors
  1.12 Invariants
  1.13 Singular value decomposition
  1.14 Cross product
  1.15 Triple product
  1.16 Dot and cross product of vectors: miscellaneous formulas

2 Tensor calculus
  2.1 Tensor functions & tensor fields
  2.2 Derivative of tensor functions
  2.3 Derivatives of tensor fields, Gradient, Nabla operator
  2.4 Integrals of tensor functions
  2.5 Line integrals
  2.6 Path independence
  2.7 Surface integrals
  2.8 Divergence and curl operators
  2.9 Laplace's operator
  2.10 Gauss' and Stokes' Theorems
  2.11 Miscellaneous formulas and theorems involving nabla
  2.12 Orthogonal curvilinear coordinate systems
  2.13 Differentials
  2.14 Differential transformation
  2.15 Differential operators in curvilinear coordinates

Foreword
A quick review of vector and tensor algebra, geometry and analysis is presented. The reader is assumed to have sufficient familiarity with the subject; the material is included as an entry point as well as a reference for what follows.


Chapter 1

Vectors and tensors algebra


Algebra is concerned with operations defined on sets with certain properties. Tensor and vector algebra deals with properties of, and operations on, the set of tensors and vectors. Throughout this section, together with algebraic aspects, we also consider the geometry of tensors to obtain further insight.

1.1 Scalars and vectors


There are physical quantities that can be specified by a single real number, like temperature and speed. These are called scalars. Other quantities whose specification requires a magnitude (a positive real number) and a direction, for instance velocity and force, are called vectors. A typical illustration of a vector is an arrow, i.e. a directed line segment. Given an appropriate unit, the length of the arrow reflects the magnitude of the vector. In the present context the precise terminology would be Euclidean vector or geometric vector, to acknowledge that in mathematics "vector" has a broader meaning with which we are not concerned here.

It is instructive to have a closer look at the meaning of the direction of a vector. Given a vector v and a directed line L (Fig. 1.1: Projection), the projection of the vector onto that direction is the scalar vL = v cos θ, in which θ is the angle between v and L. Therefore, a vector can be thought of as a function that assigns a scalar to each direction, v(L) = vL. This notion can be naturally extended to understand tensors. A tensor (of second order) is a function that assigns vectors to directions, T(L) = TL, in the sense of projection. In other words, the projection of a tensor T on a direction L is a vector TL. This looks rather abstract, but its meaning will become clear in the sequel when we explain Cauchy's formula, in which the dot product of stress (tensor) and area (vector) yields the traction force (vector).

Notation 1. Vectors and tensors are denoted by bold-face letters like A and v, and the magnitude of a vector by |v| or simply by its normal-face v. In handwriting an underbar is used to denote tensors and vectors.


1.2 Geometrical operations on vectors


Definition 2 (Equality of vectors). Two vectors are equal if they have the same magnitude and direction.

Definition 3 (Summation of vectors). The sum of any two vectors v and w is a vector u = v + w formed by placing the initial point of w on the terminal point of v and joining the initial point of v to the terminal point of w (Fig. 1.2(a,b)). This is equivalent to the parallelogram law for vector addition, as shown in Fig. 1.2(c). Subtraction of vectors, v − w, is defined as v + (−w).

Fig. 1.2: Vector summation.

Definition 4 (Multiplication of a vector by a scalar). For any real number (scalar) α and vector v, their multiplication αv is a vector whose magnitude is |α| times the magnitude of v and whose direction is the same as v if α > 0 and opposite to v if α < 0. If α = 0 then αv = 0.

1.3 Fundamental properties


The set of geometric vectors V, together with the just-defined summation and multiplication operations, has the following properties.

Property 5 (Vector summation).
1. v + 0 = 0 + v = v   (existence of additive identity)
2. v + (−v) = (−v) + v = 0   (existence of additive inverse)
3. v + w = w + v   (commutative law)
4. u + (v + w) = (u + v) + w   (associative law)

Property 6 (Multiplication with scalars).
1. α (v + w) = α v + α w   (distributive law)
2. (α + β) v = α v + β v   (distributive law)
3. α (β v) = (α β) v   (associative law)
4. 1 v = v   (multiplication with the scalar identity)

A vector can be rigorously defined as an element of a vector space.

Definition 7 (Real vector space). The mathematical system (V, +, ℝ, ·) is called a real vector space, or simply a vector space, where V is the set of geometrical vectors, + is vector summation with the four properties mentioned, ℝ is the set of real numbers furnished with summation and multiplication of numbers, and · stands for multiplication of a vector with a real number (scalar), having the above-mentioned properties.

The reader may wonder why this definition is needed. Remember that a quantity whose expression requires magnitude and direction is not necessarily a vector(!). A well-known example is finite rotation, which has both magnitude and direction but is not a vector, because it does not follow the properties of vector summation as defined before. Therefore, having magnitude and direction is not sufficient for a quantity to be identified as a vector; it must also obey the rules of summation and multiplication with scalars.

Definition 8 (Unit vector). A unit vector is a vector of unit magnitude. For any vector v, its unit vector, referred to as ev or v̂, is equal to v̂ = v / v. One would say that the unit vector carries the information about direction. Therefore magnitude and direction, as the constituents of a vector, are multiplicatively decomposed as v = v v̂.

1.4 Vector decomposition

Writing a vector v as the sum of its components is called decomposition. The components of a vector are its projections onto the coordinate axes (Fig. 1.3: Components)

v = v1 e1 + v2 e2 + v3 e3 ,   (1.1)

where {e1, e2, e3} are the unit basis vectors of the coordinate system, {v1, v2, v3} are the components of v, and {v1 e1, v2 e2, v3 e3} are the component vectors of v. Since a vector is specified by its components relative to a given coordinate system, it can alternatively be denoted in array notation,

v ≙ (v1 v2 v3)^T .   (1.2)

Using the Pythagorean theorem, the norm (magnitude) of a vector v in three-dimensional space can be written as

v = (v1² + v2² + v3²)^(1/2) .   (1.3)

Remark 9. Representation of a geometrical vector by its equivalent array (or tuple) is the underlying idea of analytic geometry, which paves the way for an algebraic treatment of geometry.

1.5 Dot product of vectors


The dot product (also called inner or scalar product) of two vectors v and w is defined as

v · w = v w cos θ ,   (1.4)

where θ is the angle between v and w.


Property 10 (Dot product).
1. v · w = w · v   (commutativity)
2. u · (v + w) = u · v + u · w   (linearity)
3. v · v ≥ 0, and v · v = 0 ⇔ v = 0   (positivity)

Geometrical interpretation of the dot product

The definition of the dot product (1.4) can be restated as

v · w = (v v̂) · (w ŵ) = v (v̂ · w) = w (ŵ · v) = v w cos θ ,   (1.5)

which says that the dot product of vectors v and w equals the projection of w onto the direction of v times the magnitude of v (see Fig. 1.1), or the other way around. Also, dividing the leftmost and rightmost terms of (1.5) by vw gives

v̂ · ŵ = cos θ ,   (1.6)

which says that the dot product of unit vectors (standing for directions) determines the angle between them. A simple and yet interesting result is obtained for the components of a vector v:

v1 = v · e1 ,   v2 = v · e2 ,   v3 = v · e3 .   (1.7)

Important results

1. In any orthonormal coordinate system, including Cartesian coordinates, it holds that

e1 · e1 = 1 ,  e1 · e2 = 0 ,  e1 · e3 = 0
e2 · e1 = 0 ,  e2 · e2 = 1 ,  e2 · e3 = 0
e3 · e1 = 0 ,  e3 · e2 = 0 ,  e3 · e3 = 1 ,   (1.8)

2. therefore

v · w = v1 w1 + v2 w2 + v3 w3 .   (1.9)

3. For any two nonzero vectors a and b, a · b = 0 iff a is normal to b.

4. The norm of a vector v can be obtained by

v = (v · v)^(1/2) .   (1.10)


1.6 Index notation

A typical tensorial calculation demands component-wise arithmetic derivations, which is usually a tedious task. As an example, let us expand the left-hand side of equation (1.9):

v · w = (v1 e1 + v2 e2 + v3 e3) · (w1 e1 + w2 e2 + w3 e3)
      = v1 w1 e1 · e1 + v1 w2 e1 · e2 + v1 w3 e1 · e3
      + v2 w1 e2 · e1 + v2 w2 e2 · e2 + v2 w3 e2 · e3
      + v3 w1 e3 · e1 + v3 w2 e3 · e2 + v3 w3 e3 · e3 ,   (1.11)

which together with equation (1.8) yields the right-hand side of (1.9). As can be seen, considerable effort is required even for a very basic expansion. The so-called index notation was developed for simplification. Index notation uses a parametric index instead of an explicit index. If, for example, the letter i is used as the index in vi, it addresses any of the possible values i = 1, 2, 3 in three-dimensional space, therefore representing any of the components of the vector v. Then {vi} is equivalent to the set of all vi's, namely {v1, v2, v3}.

Notation 11. When an index appears twice in one term, that term is summed over all possible values of the index. This is called Einstein's convention.

To clarify this, let us look at the trace of a 3 × 3 matrix, tr(A) = Σ_{i=1}^{3} Aii = A11 + A22 + A33. Based on Einstein's convention one could simply write tr(A) = Aii, because the index i appears twice and is therefore summed over all possible values i = 1, 2, 3, so that Aii = A11 + A22 + A33. As another example, a vector v can be written as v = vi ei, because the index i appears twice and vi ei = v1 e1 + v2 e2 + v3 e3.

Remark 12. A repeated index (which is summed over) can be freely renamed. For example, vi wi Amn can be rewritten as vk wk Amn, because in fact vi wi Amn = Bmn and the index i does not appear among the free indices, which can vary in {1, 2, 3}. Note that renaming i to m or n would not be possible, as it would change the meaning.
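Einstein's convention maps directly onto NumPy's `einsum`, whose subscript strings are index notation with the summation convention built in (repeated letters are summed, free letters survive). A minimal sketch with our own sample arrays; note that NumPy indices run 0, 1, 2 rather than 1, 2, 3:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])
v = np.array([1., 2., 3.])
w = np.array([4., 5., 6.])

# Repeated index i in A_ii means summation: tr(A) = A_ii.
tr_A = np.einsum('ii->', A)
assert tr_A == A[0, 0] + A[1, 1] + A[2, 2]

# v_i w_i: the repeated index sums over components -- the dot product (1.9).
assert np.isclose(np.einsum('i,i->', v, w), v @ w)

# A free index survives: u_j = A_ji v_i is a matrix-vector product.
u = np.einsum('ji,i->j', A, v)
assert np.allclose(u, A @ v)
```

The subscript string is a faithful transcription of the index expression, which makes `einsum` a convenient way to check hand-derived index manipulations.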

1.7 Tensors of second order

Based on the definition of the inner product, length and angle, which are the building blocks of Euclidean geometry, become completely general concepts (Eqs. (1.10) and (1.6)). That is why an n-dimensional vector space (Def. 7) together with an inner product operation (Prop. 10) is called a Euclidean space, denoted by E^n. Since we are basically unable to draw geometrical scenes in more than three dimensions (or maybe two, in fact!), it may seem too abstract to generalize classical geometry to higher dimensions. If you remember that physical space-time is best expressed in a four-dimensional geometrical framework, then you might agree that this limitation of dimensions is more likely a restriction of our perception than a part of physical reality.


Another generalization concerns projection. Projection can be seen as a geometrical transformation that can be defined based on the inner product. Up to three dimensions, vector projection is easily understood. Now if we allow vectors to be entities belonging to an arbitrary vector space endowed with an inner product, one can think of the inner product as the means of projection of a vector in the space onto another vector or direction. Note that we have put no restrictions on the vector space, such as its dimension. One may ask: what do we need all these cumbersome generalizations for? The answer is: to understand more complex constructs based on the basic understanding we already have of the components of a geometrical vector.

So far we have seen that a scalar can be expressed by a single real number. This can be seen as follows: scalars have no components on the three coordinate axes, so they require 3^0 = 1 real number to be specified. On the other hand, a vector v has three components, which are its projections onto the three coordinate axes Xi, each obtained by vi = v · ei. Therefore, v requires 3^1 = 3 scalars to be specified. There are more complex constructs, called second-order tensors. The three projections of a second-order tensor A onto the coordinate axes are obtained by the inner product of A with the basis vectors, Ai = A · ei. These projections are vectors (not scalars!), and since each vector is determined by three scalars (its components), a second-order tensor requires 3 × 3 = 3^2 real numbers to be specified. In array form the three components of a tensor A are vectors, denoted by

A = (A1 A2 A3) ,   (1.12)

where each vector Ai has three components,

Ai = (A1i A2i A3i)^T ,   (1.13)

therefore A is written in matrix form as

    [ A11 A12 A13 ]
A = [ A21 A22 A23 ] ,   (1.14)
    [ A31 A32 A33 ]

where each vector Ai is written component-wise in one column of the matrix. In analogy to the notation

v = vi ei   (1.15)

for vectors (based on Einstein's summation convention), a tensor can be written as

A = Aij ei ej   or   A = Aij ei ⊗ ej ,   (1.16)

with ⊗ being the dyadic product. Note that we usually use the first notation for brevity.


1.8 Dyadic product

The dyadic product of two vectors v and w is a tensor A = v ⊗ w such that

Aij = vi wj .   (1.17)

Property 13. The dyadic product has the following properties:
1. v ⊗ w ≠ w ⊗ v   (in general)
2. u ⊗ (v + w) = u ⊗ v + u ⊗ w
3. (u + v) ⊗ w = u ⊗ w + v ⊗ w
4. (u ⊗ v) · w = (v · w) u
5. u · (v ⊗ w) = (u · v) w

From equation (1.16), considering the above properties, we obtain

Aij = ei · A · ej ,   (1.18)

which is the analog of equation (1.7). In equation (1.16) the dyads ei ⊗ ej are basis tensors, given in matrix notation as

e1 ⊗ e1 = [1 0 0; 0 0 0; 0 0 0] ,  e1 ⊗ e2 = [0 1 0; 0 0 0; 0 0 0] ,  e1 ⊗ e3 = [0 0 1; 0 0 0; 0 0 0] ,   (1.19)
e2 ⊗ e1 = [0 0 0; 1 0 0; 0 0 0] ,  e2 ⊗ e2 = [0 0 0; 0 1 0; 0 0 0] ,  e2 ⊗ e3 = [0 0 0; 0 0 1; 0 0 0] ,   (1.20)
e3 ⊗ e1 = [0 0 0; 0 0 0; 1 0 0] ,  e3 ⊗ e2 = [0 0 0; 0 0 0; 0 1 0] ,  e3 ⊗ e3 = [0 0 0; 0 0 0; 0 0 1] ,   (1.21)

i.e. ei ⊗ ej has a single 1 in row i, column j, and zeros elsewhere.

Note that not every second-order tensor can be expressed as a dyadic product of two vectors; however, every second-order tensor can be written as a linear combination of dyadic products of vectors, in its most common form given by the decomposition onto a given coordinate system, A = Aij ei ⊗ ej.
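Numerically, the dyadic product of two vectors is the outer product of their component arrays. A short sketch (our own sample vectors, 0-based indices) checking the component formula (1.17), the contraction rule of Property 13, and the projection formula (1.18):

```python
import numpy as np

v = np.array([1., 2., 3.])
w = np.array([4., 5., 6.])

# Dyadic product (1.17): A_ij = v_i w_j.
A = np.outer(v, w)
assert A[0, 1] == v[0] * w[1]

# (v ⊗ w) · u = (w · u) v, cf. Property 13.
u = np.array([1., 1., 1.])
assert np.allclose(A @ u, (w @ u) * v)

# Components are recovered by projection (1.18): A_ij = e_i · A · e_j.
e = np.eye(3)  # rows are the basis vectors e_1, e_2, e_3
for i in range(3):
    for j in range(3):
        assert A[i, j] == e[i] @ A @ e[j]
```

Since `np.outer(v, w)` has rank one, this also illustrates why a single dyad cannot represent a general second-order tensor, while a linear combination of the nine basis dyads can.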

1.9 Tensors of higher order

The dyadic product provides a convenient passage to the definition of higher-order tensors. We know that the dyadic product of two vectors (first-order tensors) yields a second-order tensor. The generalization of the idea is immediate: for a set of n given vectors v1, …, vn,

T = v1 ⊗ v2 ⊗ ⋯ ⊗ vn   (1.22)

is a tensor of the nth order. To make sense of it, let us take a look at the inner product of the above tensor with a vector w:

T · w = (v1 ⊗ v2 ⊗ ⋯ ⊗ vn) · w = (vn · w) (v1 ⊗ v2 ⊗ ⋯ ⊗ v_{n−1}) ,   (1.23)

where (vn · w) is of course a scalar and v1 ⊗ ⋯ ⊗ v_{n−1} a tensor of (n−1)th order. This means the tensor T maps a first-order tensor (a vector) to a tensor of (n−1)th order. We re-emphasize that not all tensors can be written as dyadic products of vectors; however, tensors can be decomposed on the basis of a coordinate system. The decomposition of an nth-order tensor T is written as

T = T_{i1 i2 … in} e_{i1} ⊗ e_{i2} ⊗ ⋯ ⊗ e_{in} ,   (1.24)

where i1, …, in ∈ {1, 2, 3} and e_{i1} ⊗ e_{i2} ⊗ ⋯ ⊗ e_{in} is a basis tensor of the nth order. It should be clear that visualization of, e.g., third-order tensors requires three-dimensional arrays, and so on.

Remark 14. A vector is a first-order tensor and a scalar is a zeroth-order tensor. The order of a tensor is equal to the number of its indices in index notation. Furthermore, a tensor of order n requires 3^n scalars to be specified. We usually address a second-order tensor by simply calling it a tensor; the meaning should be clear from the context.

If an mth-order tensor A is multiplied (dyadically) with an nth-order tensor B, we get an (m+n)th-order tensor C such that

C = A ⊗ B = (A_{i1…im} e_{i1} ⊗ ⋯ ⊗ e_{im}) ⊗ (B_{j1…jn} e_{j1} ⊗ ⋯ ⊗ e_{jn})
          = A_{i1…im} B_{j1…jn} e_{i1} ⊗ ⋯ ⊗ e_{im} ⊗ e_{j1} ⊗ ⋯ ⊗ e_{jn}
          = C_{k1…k_{m+n}} e_{k1} ⊗ ⋯ ⊗ e_{k_{m+n}} .   (1.25)

Algebra and properties of higher-order tensors

On the set of nth-order tensors a summation operation can be defined as

A + B = (A_{i1…in} + B_{i1…in}) e_{i1} ⊗ ⋯ ⊗ e_{in} ,   (1.26)

and multiplication with a scalar as

α A = (α A_{i1…in}) e_{i1} ⊗ ⋯ ⊗ e_{in} ,   (1.27)

having properties similar to 5 and 6. In fact, tensors of order n are elements of a vector space of dimension 3^n.

Exercise 1. Show that the dimension of the vector space consisting of all nth-order tensors is 3^n. (Hint: find 3^n independent tensors that span all elements of the space.)

Generalization of the inner product

The inner (dot) product of two tensors A = A_{i1…im} e_{i1} ⊗ ⋯ ⊗ e_{im} and B = B_{j1…jn} e_{j1} ⊗ ⋯ ⊗ e_{jn} is given as

A · B = A_{i1…im} B_{j1…jn} (e_{i1} ⊗ ⋯ ⊗ e_{im}) · (e_{j1} ⊗ ⋯ ⊗ e_{jn})
      = A_{i1…i_{m−1} k} B_{k j2…jn} e_{i1} ⊗ ⋯ ⊗ e_{i_{m−1}} ⊗ e_{j2} ⊗ ⋯ ⊗ e_{jn} ,   (1.28)

which is a tensor of (m+n−2)th order. Note that the innermost index k is common and summed over. The generalization of this idea follows.

Definition 15 (Contraction product). For any two tensors A of the mth order and B of the nth order, their r-contraction product for r ≤ min{m, n}, written here as C = A ∘r B, yields a tensor of order (m+n−2r) such that

C = A_{i1…i_{m−r} k1…kr} B_{k1…kr j_{r+1}…jn} e_{i1} ⊗ ⋯ ⊗ e_{i_{m−r}} ⊗ e_{j_{r+1}} ⊗ ⋯ ⊗ e_{jn} .   (1.29)

Note that the r innermost indices are common. The frequently used case of the contraction product is the so-called double contraction, denoted by A : B = A ∘2 B.

Exercise 2. Write the identity σ = C : ε in index notation, where C is a fourth-order tensor and σ and ε are second-order tensors. Then expand the components σ12 and σ11 for the case of two-dimensional Cartesian coordinates.

Remark 16. The contraction product of two tensors is not symmetric in general, i.e. A ∘r B ≠ B ∘r A, except when r = 1 and A and B are first-order tensors (vectors), or when A and B have certain symmetry properties.
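Contraction products are again one-liners in `einsum`: shared subscript letters are the contracted index pairs. A sketch with randomly generated components, including the σ = C : ε identity of Exercise 2 (variable names are ours; index pairing follows (1.29)):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))        # second-order tensor
B = rng.standard_normal((3, 3))        # second-order tensor
C = rng.standard_normal((3, 3, 3, 3))  # fourth-order tensor

# Single contraction (r = 1): (A . B)_ij = A_ik B_kj -- the matrix product.
assert np.allclose(np.einsum('ik,kj->ij', A, B), A @ B)

# Double contraction (r = 2): A : B = A_ij B_ij, a scalar.
assert np.isclose(np.einsum('ij,ij->', A, B), np.tensordot(A, B, axes=2))

# sigma = C : eps -- a fourth-order tensor maps second order to second order.
eps = rng.standard_normal((3, 3))
sigma = np.einsum('ijkl,kl->ij', C, eps)
assert sigma.shape == (3, 3)
```

The order bookkeeping of Definition 15 is visible in the subscript strings: each contracted pair removes two indices, so `'ijkl,kl->ij'` drops from order 4 + 2 to order 2.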

Kronecker delta

The second-order unity tensor or identity tensor, denoted by I, is defined by the following fundamental property.

Property 17. For every vector v it holds that

I · v = v · I = v .   (1.30)

The representation of the unity tensor in index notation,

δij = 1 if i = j ,  δij = 0 if i ≠ j ,   (1.31)

is called Kronecker's delta. Its expansion for i, j ∈ {1, 2, 3} gives, in matrix form,

        [ 1 0 0 ]
(δij) = [ 0 1 0 ] .   (1.32)
        [ 0 0 1 ]

Some important properties of the Kronecker delta are:

1. The orthonormality of the basis vectors {ei} in equation (1.8) can be abbreviated as

ei · ej = δij .   (1.33)

2. The so-called index exchange rule is given as

ui δij = uj ,  or in general  A_{ij…m…z} δ_{mn} = A_{ij…n…z} .   (1.34)

3. The trace of the Kronecker delta is

δii = 3 .   (1.35)

Exercise 3. Proofs are left to the reader as practice in index notation.


Totally antisymmetric tensor

The so-called totally antisymmetric tensor or permutation symbol, also named after the Italian mathematician Tullio Levi-Civita as the Levi-Civita symbol, is a third-order tensor defined as

ε_ijk = +1 , if (i, j, k) is an even permutation of (1, 2, 3)
ε_ijk = −1 , if (i, j, k) is an odd permutation of (1, 2, 3)   (1.36)
ε_ijk =  0 , otherwise, i.e. if i = j or j = k or k = i.

There are important properties of and relations for the permutation symbol, as follows:

1. It follows immediately from the definition that

ε_ijk = ε_kij = ε_jki = −ε_jik = −ε_ikj = −ε_kji .   (1.37)

2. The determinant of a 3 × 3 matrix A can be expressed by

det(A) = ε_ijk A_{1i} A_{2j} A_{3k} .   (1.38)

3. The general relationship between the Kronecker delta and the permutation symbol reads

                  [ δ_il δ_im δ_in ]
ε_ijk ε_lmn = det [ δ_jl δ_jm δ_jn ]   (1.39)
                  [ δ_kl δ_km δ_kn ]

and as special cases

ε_jkl ε_jmn = δ_km δ_ln − δ_kn δ_lm ,   (1.40)
ε_ijk ε_ijm = 2 δ_km .   (1.41)

Exercise 4. Verify the identities in equations (1.40) and (1.41) based on (1.39).
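The ε–δ identities lend themselves to a brute-force numerical check. The sketch below builds the permutation symbol explicitly (0-based indices) and verifies (1.40), (1.41) and the determinant formula (1.38); the sample matrix is ours:

```python
import numpy as np
from itertools import permutations

# Build the permutation symbol eps_ijk of (1.36).
eps = np.zeros((3, 3, 3))
for i, j, k in permutations(range(3)):
    # sign of the permutation (i, j, k) of (0, 1, 2)
    eps[i, j, k] = np.sign((j - i) * (k - i) * (k - j))

delta = np.eye(3)

# eps-delta identity (1.40): eps_jkl eps_jmn = d_km d_ln - d_kn d_lm.
lhs = np.einsum('jkl,jmn->klmn', eps, eps)
rhs = (np.einsum('km,ln->klmn', delta, delta)
       - np.einsum('kn,lm->klmn', delta, delta))
assert np.allclose(lhs, rhs)

# (1.41): eps_ijk eps_ijm = 2 d_km.
assert np.allclose(np.einsum('ijk,ijm->km', eps, eps), 2 * delta)

# Determinant formula (1.38): det(A) = eps_ijk A_1i A_2j A_3k.
A = np.array([[2., 1., 0.], [0., 3., 1.], [1., 0., 4.]])
det = np.einsum('ijk,i,j,k->', eps, A[0], A[1], A[2])
assert np.isclose(det, np.linalg.det(A))
```

Such checks do not replace the index-notation proofs asked for in Exercise 4, but they are a quick sanity test of a hand-derived identity.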

1.10 Coordinate transformation

Either seen as an empirical fact or as a mathematical postulate, the following is well accepted.

Definition 18 (Principle of frame indifference). A mathematical model of any physical phenomenon must be formulated without reference to any particular coordinate system.

It is a merit of vectors and tensors that they provide mathematical models with such generality. However, in solving real-world problems everything must finally be expressed in terms of scalars for computation, and therefore the introduction of a coordinate system is inevitable. The immediate question is how to transform vectorial and tensorial quantities from one coordinate system to another.

Suppose that the vector v is decomposed in Cartesian coordinates X1X2X3 with basis {e1, e2, e3} (Fig. 1.4: Transformation of coordinates),

v = v1 e1 + v2 e2 + v3 e3 .   (1.42)

Since a vector is completely determined by its components, we should be able to find the decomposition of v in a second coordinate system X1′X2′X3′ with basis {e1′, e2′, e3′} in terms of {v1, v2, v3}. Using equation (1.7) we have

v1′ = v · e1′ ,   v2′ = v · e2′ ,   v3′ = v · e3′ .   (1.43)

Substitution from (1.42) into the latter equation gives

v1′ = (v1 e1 + v2 e2 + v3 e3) · e1′ = (e1′ · e1) v1 + (e1′ · e2) v2 + (e1′ · e3) v3
v2′ = (v1 e1 + v2 e2 + v3 e3) · e2′ = (e2′ · e1) v1 + (e2′ · e2) v2 + (e2′ · e3) v3   (1.44)
v3′ = (v1 e1 + v2 e2 + v3 e3) · e3′ = (e3′ · e1) v1 + (e3′ · e2) v2 + (e3′ · e3) v3 .

Restated in array notation,

[ v1′ ]   [ e1′·e1  e1′·e2  e1′·e3 ] [ v1 ]
[ v2′ ] = [ e2′·e1  e2′·e2  e2′·e3 ] [ v2 ] ,   (1.45)
[ v3′ ]   [ e3′·e1  e3′·e2  e3′·e3 ] [ v3 ]

and in index notation

vj′ = (ej′ · ei) vi .   (1.46)

Then again, introducing the transformation tensor Q with components Qji = ej′ · ei, we can rewrite the transformation rule as

v′ = Q v .   (1.47)

Property 19 (Orthonormal tensor). The transformation Q is geometrically a rotation map with the property

Q^T Q = I  or  Q^{−1} = Q^T ,   (1.48)

which is called orthonormality.

Transformation of tensors

Having the decomposition of an nth-order tensor A in X1X2X3 coordinates as

A = A_{i1…in} e_{i1} ⊗ ⋯ ⊗ e_{in} ,   (1.49)

we look for its components in another coordinate system X1′X2′X3′. Following the same approach as for vectors, using equation (1.29),

A′_{j1…jn} = A ∘n (e_{j1}′ ⊗ ⋯ ⊗ e_{jn}′)
           = (A_{i1…in} e_{i1} ⊗ ⋯ ⊗ e_{in}) ∘n (e_{j1}′ ⊗ ⋯ ⊗ e_{jn}′)
           = (e_{i1} · e_{j1}′) ⋯ (e_{in} · e_{jn}′) A_{i1…in} .   (1.50)

For the special case when A is a second-order tensor, we can write in matrix notation

A′ = Q A Q^T ,   Qji = ej′ · ei .   (1.51)


Remark 20. The above transformation rules are substantial. They are so fundamental that, as an alternative, we could have defined vectors and tensors based on their transformation properties instead of the geometric and algebraic approach.

Exercise 5. Find the transformation matrix for a rotation by angle θ around X1 in three-dimensional Cartesian coordinates X1X2X3.

Exercise 6. For a symmetric second-order tensor A, derive the transformation formula for the transformation given as

        [  cos φ  sin φ  0 ]
(Qij) = [ −sin φ  cos φ  0 ] .
        [  0      0      1 ]
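A numerical sketch of the transformation rules (1.47), (1.48) and (1.51), using a rotation about the X3 axis of the form given in Exercise 6 (the angle value and sample tensor are our own). Invariance of the length and of the trace under the rotation previews the next sections:

```python
import numpy as np

# Rotation by angle phi about the X3 axis.
phi = 0.3
c, s = np.cos(phi), np.sin(phi)
Q = np.array([[ c,  s, 0.],
              [-s,  c, 0.],
              [0., 0., 1.]])

# Orthonormality (1.48): Q^T Q = I.
assert np.allclose(Q.T @ Q, np.eye(3))

# Vector transformation (1.47): the components change, the length does not.
v = np.array([1., 2., 3.])
v_new = Q @ v
assert np.isclose(np.linalg.norm(v_new), np.linalg.norm(v))

# Second-order tensor transformation (1.51): A' = Q A Q^T.
A = np.array([[1., 2., 0.],
              [2., 5., 1.],
              [0., 1., 3.]])
A_new = Q @ A @ Q.T
assert np.isclose(np.trace(A_new), np.trace(A))
```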

1.11 Eigenvalues and eigenvectors

The question of transformation from one coordinate system to another has been answered. In practical situations we face a second question, which is the choice of coordinate system that simplifies the representation of the formulation at hand. A very basic example: suppose that we want to write a vector v in components. With an arbitrary choice of reference the answer is v = (v1 v2 v3)^T, with v1, v2, v3 ∈ ℝ. However, with the choice of reference coordinates such that, e.g., the X1 axis coincides with the vector v, we come up with v = (v1 0 0)^T. This second representation of the vector is simplified to one component only, which is in fact a scalar, and therefore one-dimensional. Of course, in more complicated situations an appropriate choice of coordinate system can make a much greater difference. We will see that the consequences of this rather practical question turn out to be theoretically subtle. The next step in complexity is how to simplify tensors with a proper choice of coordinate system.

Definition 21. Given a second-order tensor A, every nonzero vector n is called an eigenvector of A if there exists a real number λ such that

A · n = λ n ,   (1.52)

where λ is the corresponding eigenvalue of n.

In general, a tensor multiplied by a vector gives another vector with a different magnitude and direction; interestingly, however, there are vectors on which the tensor acts like a scalar, as laid down by equation (1.52). Now suppose that a coordinate system is chosen so that the X1 axis coincides with n. It is clear that

A · e1 = A11 e1 + A21 e2 + A31 e3 ,

and also, back to (1.52),

A · e1 = λ e1 ;   (1.53)

comparing both equations gives

A11 = λ , A21 = 0 , A31 = 0 .   (1.54)

It is possible to show that for non-degenerate symmetric tensors in ℝ³ there are three orthonormal eigenvectors, which establish an orthonormal coordinate system called principal coordinates. Following equation (1.54), a symmetric tensor in principal coordinates takes the diagonal form

    [ λ1  0   0  ]
Λ = [ 0   λ2  0  ] ,   (1.55)
    [ 0   0   λ3 ]

where the diagonal elements are the eigenvalues corresponding to the principal directions, which are parallel to the eigenvectors. The following theorem generalizes these ideas to n dimensions.
It is possible to show that for non-degenerate symmetric tensors in R3 there are three orthonormal eigenvectors which establish an orthonormal coordinate system which is called principal coordinates. Following equation (1.54) a symmetric tensor in principal coordinates takes the diagonal form 11 0 0 (1.55) = 0 22 0 , 0 0 33 where its diagonal elements are eigenvalues corresponding to principal directions which are parallel to eigenvectors. The following theorem generalizes these ideas to n dimensions.

Theorem 22 (Spectral decomposition). Let A be a second-order tensor in n-dimensional space with n1, …, nn being its n linearly independent eigenvectors and λ1, …, λn their corresponding eigenvalues. Then A can be written as

A = N Λ N^{−1} ,   (1.56)

where N is the n × n matrix having the eigenvectors as its columns, N = (n1, …, nn), and Λ is the n × n diagonal matrix with ith diagonal element λi.

Remark 23. In the case of symmetric tensors, equation (1.56) takes the form

A = N Λ N^T ,   (1.57)

because for orthonormal eigenvectors the matrix N is an orthonormal matrix, which is in fact a rotation transformation.

Remark 24. Λ is the representation of A in principal coordinates. Since Λ is diagonal, one can equivalently write

A = Σ_{i=1}^{n} λi ni ⊗ ni .   (1.58)
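The spectral decomposition of a symmetric tensor can be checked directly with NumPy's symmetric eigensolver (the sample tensor is ours):

```python
import numpy as np

# A symmetric second-order tensor (values are arbitrary).
A = np.array([[4., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])

# eigh returns eigenvalues and orthonormal eigenvectors (columns of N).
lam, N = np.linalg.eigh(A)

# Spectral decomposition for symmetric tensors (1.57): A = N Lam N^T.
assert np.allclose(A, N @ np.diag(lam) @ N.T)

# Equivalently, cf. (1.58): A = sum_i lam_i n_i (x) n_i.
A_rebuilt = sum(lam[i] * np.outer(N[:, i], N[:, i]) for i in range(3))
assert np.allclose(A, A_rebuilt)

# N is orthonormal, i.e. a rotation into principal coordinates.
assert np.allclose(N.T @ N, np.eye(3))
```

The last assertion makes Remark 23 concrete: the eigenvector matrix is exactly the orthonormal transformation Q that rotates the tensor into its principal coordinates.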

Suppose that we want to derive some relationship or develop a model based on tensorial expressions. If the task is accomplished in principal coordinates it takes much less effort, and at the same time the results are completely general, i.e. they hold in any other coordinate system. This is often done in material modeling, and in other branches of science as well.

As the final step we would like to solve equation (1.52). It can be recast in the form

(A − λ I) · n = 0 .   (1.59)

This equation has nonzero solutions if

det(A − λ I) = 0 ,   (1.60)

which is called the characteristic equation of the tensor A. Let us have a look at its expansion,

      [ A11−λ  A12    A13   ]
det   [ A21    A22−λ  A23   ] = 0 ,   (1.61)
      [ A31    A32    A33−λ ]

which gives

−λ³ + (A11 + A22 + A33) λ² − ( det[A22 A23; A32 A33] + det[A11 A13; A31 A33] + det[A11 A12; A21 A22] ) λ + det[A11 A12 A13; A21 A22 A23; A31 A32 A33] = 0 ,   (1.62)

a cubic equation in λ with at most three distinct roots.

1.12 Invariants

The components of a vector v change from one coordinate system to another; however, its length remains the same in all coordinates, because length is a scalar. This is interesting, because one can write the length of a vector in terms of its components in any coordinate system as v = (vi vi)^(1/2), which means there is a combination of the components that does not change under coordinate transformation, while the components themselves do. Any function of the components f(v1, v2, v3) that does not change under coordinate transformation is called an invariant. This independence of the coordinate system makes invariants physically important; the reason should become clear soon.

Tensors of higher order also have invariants. For a second-order tensor A, look back at equation (1.62): the values λ, λ² and λ³ are scalars and stay unchanged under coordinate transformation, therefore the coefficients of the equation must remain unchanged as well, and they are all invariants of the tensor A, denoted by

I_A = A11 + A22 + A33 ,   (1.63)

II_A = det[A22 A23; A32 A33] + det[A11 A13; A31 A33] + det[A11 A12; A21 A22] ,   (1.64)

III_A = det[A11 A12 A13; A21 A22 A23; A31 A32 A33] ,   (1.65)

where I_A, II_A and III_A are called the first, second and third principal invariants of A. The above formulas look familiar: the first invariant is the trace, the second is the sum of the principal minors, and the third is the determinant. Of course, we can also express the invariants in terms of the eigenvalues of the tensor:

I_A = tr(A) = λ1 + λ2 + λ3 ,   (1.66)

II_A = (1/2) ( (tr(A))² − tr(A²) ) = λ1 λ2 + λ2 λ3 + λ3 λ1 ,   (1.67)

III_A = det(A) = λ1 λ2 λ3 .   (1.68)


In material modeling we often look for a scalar-valued function of a tensor, in the form Φ(A). Since the function value is a scalar, it must be invariant under transformation of coordinates. To fulfill this requirement the function is formulated in terms of the invariants of A; then it is invariant itself. Therefore, the most natural form of such a formulation is

Φ = Φ(I_A, II_A, III_A) .   (1.69)

This will be used in the sequel when hyper-elastic material models are explained.
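A quick numerical illustration of invariance: compute the principal invariants (1.66)–(1.68) of a sample tensor, rotate the tensor with A′ = Q A Qᵀ, and observe that the components change while the invariants do not (sample values and helper name are ours):

```python
import numpy as np

def invariants(A):
    """Principal invariants (1.66)-(1.68) of a second-order tensor."""
    I = np.trace(A)
    II = 0.5 * (np.trace(A) ** 2 - np.trace(A @ A))
    III = np.linalg.det(A)
    return I, II, III

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])

# Rotate A into another coordinate system: A' = Q A Q^T.
phi = 0.7
Q = np.array([[ np.cos(phi), np.sin(phi), 0.],
              [-np.sin(phi), np.cos(phi), 0.],
              [0., 0., 1.]])
A_rot = Q @ A @ Q.T

# The components change, the invariants do not.
assert not np.allclose(A, A_rot)
assert np.allclose(invariants(A), invariants(A_rot))

# Cross-check against the eigenvalue expressions (1.66) and (1.68).
lam = np.linalg.eigvalsh(A)
assert np.isclose(invariants(A)[0], lam.sum())
assert np.isclose(invariants(A)[2], lam.prod())
```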

1.13 Singular value decomposition

The Euclidean norm of a vector is a non-negative scalar that represents its magnitude. It is also possible to talk about the norm of a tensor. Avoiding the details, we only mention that the question of the norm of a tensor A comes down to finding the maximum eigenvalue of A^T A. Since the matrix A^T A is positive semidefinite, its eigenvalues are non-negative.

Definition 25. If σi² is an eigenvalue of A^T A, then σi ≥ 0 is called a singular value of A.

When A is a symmetric tensor, the singular values of A equal the absolute values of its eigenvalues.
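Both statements of this section can be verified numerically: the squared singular values are the eigenvalues of AᵀA, and for a symmetric tensor the singular values are the absolute eigenvalues (2 × 2 sample tensors of our choosing):

```python
import numpy as np

A = np.array([[1., 2.],
              [0., 3.]])  # a generic (non-symmetric) tensor

# Definition 25: sigma_i^2 are the eigenvalues of A^T A.
sigma = np.linalg.svd(A, compute_uv=False)           # descending order
eig_AtA = np.sort(np.linalg.eigvalsh(A.T @ A))[::-1]  # descending order
assert np.allclose(sigma ** 2, eig_AtA)

# For a symmetric tensor the singular values are |eigenvalues|.
S = np.array([[2., 1.],
              [1., -3.]])
sig_S = np.sort(np.linalg.svd(S, compute_uv=False))
assert np.allclose(sig_S, np.sort(np.abs(np.linalg.eigvalsh(S))))
```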

1.14 Cross product

The cross product of two vectors v and w is a vector u = v × w such that u is perpendicular to both v and w, in the direction making the triple (v, w, u) right-handed (Fig. 1.5: Cross product). The magnitude of u is given by

u = v w sin θ ,   0 ≤ θ ≤ π .   (1.70)

The cross product of vectors has the following properties.

Property 26 (Cross product).
1. v × w = −w × v   (anti-symmetry)
2. u × (v + w) = u × v + u × w   (linearity)

Important results

1. In any orthonormal coordinate system, including Cartesian coordinates, it holds that

e1 × e1 = 0 ,   e1 × e2 = e3 ,   e1 × e3 = −e2
e2 × e1 = −e3 ,  e2 × e2 = 0 ,   e2 × e3 = e1
e3 × e1 = e2 ,   e3 × e2 = −e1 ,  e3 × e3 = 0 ,   (1.71)

2. therefore

        | e1  e2  e3 |
v × w = | v1  v2  v3 | ,                                             (1.72)
        | w1  w2  w3 |

3. which in index notation reads

v × w = ε_ijk v_j w_k e_i   or   [v × w]_i = ε_ijk v_j w_k .         (1.73)

Exercise 7. Using equation (1.71) prove the identity (1.72).
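The index-notation formula (1.73) can be checked numerically. The sketch below builds the Levi-Civita symbol explicitly and contracts it with two arbitrarily chosen vectors:

```python
import numpy as np

# Levi-Civita symbol eps_ijk built explicitly from its even permutations.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

v = np.array([1.0, -2.0, 0.5])
w = np.array([3.0,  0.0, 1.0])

# [v x w]_i = eps_ijk v_j w_k, as in (1.73).
u = np.einsum('ijk,j,k->i', eps, v, w)
assert np.allclose(u, np.cross(v, w))
assert np.isclose(u @ v, 0.0) and np.isclose(u @ w, 0.0)  # u is perpendicular to v and w
```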

Geometrical interpretation of the cross product

The norm of the vector v × w equals the area of the parallelogram with sides v and w, and since v × w is normal to the plane of the parallelogram, we consider it to be the area vector. It is then clear that the area A of a triangle with sides v and w is given by

A = (1/2) ‖v × w‖ .                                                  (1.74)

Fig. 1.6: Area of triangle.

Cross product for tensors

The cross product can be naturally extended to the case of a tensor A and a vector v as

A × v = ε_jmn A_im v_n e_i ⊗ e_j   and   v × A = ε_imn v_m A_nj e_i ⊗ e_j .   (1.75)

1.15 Triple product


For every three vectors u, v and w, the combination of dot and cross products gives

u · (v × w) ,                                                        (1.76)

the so-called triple product. Based on equation (1.72) it can be written in component form

              | u1  u2  u3 |
u · (v × w) = | v1  v2  v3 | ,                                       (1.77)
              | w1  w2  w3 |

or in index notation

u · (v × w) = ε_ijk u_i v_j w_k .                                    (1.78)

Fig. 1.7: Parallelepiped.

Geometrically, the triple product of the three vectors equals the volume of the parallelepiped with sides u, v and w (Fig. 1.7).
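A small numerical illustration of (1.77) and of the volume interpretation, with arbitrarily chosen vectors:

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([1.0, 2.0, 0.0])
w = np.array([0.0, 1.0, 3.0])

# u . (v x w) equals the determinant with u, v, w as rows, cf. (1.77).
triple = u @ np.cross(v, w)
assert np.isclose(triple, np.linalg.det(np.array([u, v, w])))

# Its absolute value is the volume of the parallelepiped spanned by u, v, w.
assert np.isclose(abs(triple), 6.0)
```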


1.16 Dot and cross product of vectors: miscellaneous formulas


u × (v × w) = (u · w) v − (u · v) w                                  (1.79)

(u × v) × w = (u · w) v − (v · w) u                                  (1.80)

(u × v) · (w × x) = (u · w)(v · x) − (u · x)(v · w)                  (1.81)

(u × v) × (w × x) = (u · (v × x)) w − (u · (v × w)) x
                  = (u · (w × x)) v − (v · (w × x)) u                (1.82)
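As a quick sanity check, the first and third identities can be verified numerically with randomly chosen vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
u, v, w, x = rng.standard_normal((4, 3))

# (1.79): u x (v x w) = (u.w) v - (u.v) w
assert np.allclose(np.cross(u, np.cross(v, w)), (u @ w) * v - (u @ v) * w)

# (1.81): (u x v).(w x x) = (u.w)(v.x) - (u.x)(v.w)
lhs = np.cross(u, v) @ np.cross(w, x)
rhs = (u @ w) * (v @ x) - (u @ x) * (v @ w)
assert np.isclose(lhs, rhs)
```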

Exercise 8. Using index notation verify the above identities.

Exercise 9. For a tensor given as T = I + v ⊗ w with v and w being orthogonal vectors, i.e. v · w = 0, calculate T², T³, …, Tⁿ and finally e^T (Hint: e^T = Σ_{n=0}^∞ (1/n!) Tⁿ).


Chapter 2

Tensor calculus
So far, vectors and tensors have been considered based on the operations by which we can combine them. In this chapter we study vector- and tensor-valued functions in three-dimensional space, together with their derivatives and integrals. It is important to keep in mind that we address both vectors and tensors by the term tensor; the reader should already know that a vector is a first-order tensor and a scalar a zeroth-order tensor.

2.1 Tensor functions & tensor elds


A tensor function A(u) assigns a tensor A to a real number u. In formal mathematical notation

A : R → T ,   u ↦ A(u)                                               (2.1)

where T is the space of tensors of the same order, for instance second-order tensors T = R^{3×3}, vectors T = R³, or perhaps fourth-order tensors T = R^{3×3×3×3}. A typical example is the position vector of a moving particle given as a function of time, x(t), where t is a real number and x is a vector with initial point at the origin and terminal point at the moving particle (Fig. 2.1). On the other hand, a tensor field A(x) assigns a tensor A to a point x in space, say three-dimensional. Again one could formally write

A : R³ → T ,   x ↦ A(x) .                                            (2.2)

Here R³ is the usual three-dimensional Euclidean space. For instance, we can think of the temperature distribution in a given domain, T(x), which is a scalar field that assigns scalars (temperatures) to points x in the domain. Another example is the strain field ε(x) in a solid body under deformation, which assigns second-order tensors to points x in the body.

Fig. 2.1: Position vector.



2.2 Derivative of tensor functions


The derivative of a differentiable tensor function A(u) is defined as

dA/du = lim_{Δu→0} [A(u + Δu) − A(u)] / Δu .                         (2.3)

We assume that all derivatives exist unless otherwise stated. For the case of a vector function A(u) = A_i(u) e_i in Cartesian coordinates the above definition becomes

dA/du = (dA_i/du) e_i = (dA_1/du) e1 + (dA_2/du) e2 + (dA_3/du) e3 , (2.4)

because the e_i remain unchanged in Cartesian coordinates, so their derivatives are zero.

Exercise 10. For a given second-order tensor function A(u) = A_ij(u) e_i ⊗ e_j in Cartesian coordinates write down the derivative.

Property 27. For tensor functions A(u), B(u), C(u), and a vector function a(u) we have

1. d/du (A · B) = A · dB/du + dA/du · B                              (2.5)

2. d/du (A ⊗ B) = A ⊗ dB/du + dA/du ⊗ B                              (2.6)

3. d/du [A · (B × C)] = dA/du · (B × C) + A · (dB/du × C) + A · (B × dC/du)   (2.7)

4. a · da/du = ‖a‖ d‖a‖/du                                           (2.8)

5. a · da/du = 0   iff   ‖a‖ = const                                 (2.9)

Exercise 11. Verify the 4th and the 5th properties above.
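Property 4 can also be checked numerically. The sketch below uses an arbitrarily chosen vector function a(u) and a central finite difference for the derivative of the norm:

```python
import numpy as np

# A sample vector function a(u) and its exact derivative, chosen for illustration.
def a(u):
    return np.array([np.cos(u), np.sin(u), u])

def da(u):
    return np.array([-np.sin(u), np.cos(u), 1.0])

u0, h = 0.7, 1e-6
norm = lambda u: np.linalg.norm(a(u))

# Property 4: a . da/du equals |a| d|a|/du.
lhs = a(u0) @ da(u0)
rhs = norm(u0) * (norm(u0 + h) - norm(u0 - h)) / (2 * h)  # central difference
assert np.isclose(lhs, rhs, atol=1e-6)
```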

2.3 Derivatives of tensor fields, Gradient, Nabla operator


Since a tensor field A(x) is a multi-variable function, its differential dA depends on the direction of the differential of the independent variable, the vector dx = dx_i e_i. When dx is parallel to one of the coordinate axes, the partial derivatives of the tensor field A(x) = A(x1, x2, x3) are obtained:

∂A/∂x1 = lim_{Δx1→0} [A(x1 + Δx1, x2, x3) − A(x1, x2, x3)] / Δx1
∂A/∂x2 = lim_{Δx2→0} [A(x1, x2 + Δx2, x3) − A(x1, x2, x3)] / Δx2     (2.10)
∂A/∂x3 = lim_{Δx3→0} [A(x1, x2, x3 + Δx3) − A(x1, x2, x3)] / Δx3


or in compact form

∂A/∂x_i = lim_{Δx_i→0} [A(x + Δx_i e_i) − A(x)] / Δx_i .             (2.11)

In general, dx is not parallel to the coordinate axes. Therefore, based on the chain rule of differentiation,

dA(x1, x2, x3) = (∂A/∂x_i) dx_i = (∂A/∂x1) dx1 + (∂A/∂x2) dx2 + (∂A/∂x3) dx3 .   (2.12)

Remembering the calculus of multi-variable functions, this can also be written as

dA = grad(A) · dx   with   grad(A) = (∂A/∂x_i) e_i .                 (2.13)

In the latter formula the gradient of the tensor field, grad(A), appears, which is a tensor of one order higher than A itself. To highlight this fact, we rewrite the above equation as

grad(A) = (∂A/∂x_i) ⊗ e_i .                                          (2.14)

A proper choice of notation in mathematics is essential, and the introduction of operator notation in calculus is no exception. Since linear operators follow algebraic rules somewhat similar to the arithmetic of real numbers, writing relations in operator notation is invaluable. Here we introduce the so-called nabla operator as

∇ = e_i ∂/∂x_i = e1 ∂/∂x1 + e2 ∂/∂x2 + e3 ∂/∂x3 .                    (2.15)

Then the gradient of the tensor field in equation (2.13) can be written as

grad(A) = A∇ = A ⊗ ∇ .                                               (2.16)

2.4 Integrals of tensor functions


If A(u) and B(u) are tensor functions such that A(u) = dB(u)/du, the indefinite integral of A is

∫ A(u) du = B(u) + C                                                 (2.17)

with C = const, and the definite integral of A over the interval [a, b] is

∫_a^b A(u) du = B(b) − B(a) .                                        (2.18)

Decomposition of equation (2.17) for (say) a second-order tensor gives

∫ A_ij(u) du = B_ij(u) + C_ij ,                                      (2.19)

where A_ij is a real-valued function of a real variable. Therefore, integration of a tensor function comes down to integration of its components.

In the recent equations the dyadic product is sometimes written explicitly for pedagogical reasons.


2.5 Line integrals


Consider a space curve C joining two points P1(a1, a2, a3) and P2(b1, b2, b3) as shown in Fig. 2.2. If the curve is divided into N parts by subdivision points x_1, …, x_{N−1}, then the line integral of a tensor field A(x) along C is given by

∫_C A · dx = ∫_{P1}^{P2} A · dx = lim_{N→∞} Σ_{i=1}^{N} A(x_i) · Δx_i   (2.20)

where Δx_i = x_i − x_{i−1}, and as N → ∞ the largest of the subdivisions tends to zero,

max_i ‖Δx_i‖ → 0 .

Fig. 2.2: Space curve.

In Cartesian coordinates the line integral can be written as

∫_C A · dx = ∫_C (A1 dx1 + A2 dx2 + A3 dx3) .                        (2.21)

The line integral has the following properties.

Property 28.

1. ∫_{P1}^{P2} A · dx = −∫_{P2}^{P1} A · dx                          (2.22)

2. ∫_{P1}^{P2} A · dx = ∫_{P1}^{P3} A · dx + ∫_{P3}^{P2} A · dx ,   P3 between P1 and P2   (2.23)

Parameterization

A curve in space is a one-dimensional entity which can be specified by a parameterization of the form

x = x(s) ,                                                           (2.24)

which means

x1 = x1(s) ,   x2 = x2(s) ,   x3 = x3(s) .

Therefore a path integral can be parametrized in terms of s by

∫_{x1}^{x2} A · dx = ∫_{s1}^{s2} A · (dx/ds) ds .                    (2.25)


2.6 Path independence


A line integral as introduced above depends on its limit points P1 and P2, on the particular choice of the path C, and on its integrand A(x). However, there are cases where the line integral is independent of the path C. Then, having specified the tensor field A(x) and the initial point P1, the line integral is a function of the position of P2 only. If we denote the position of P2 by x, then

φ(x) = ∫_{P1}^{x} A · dx .                                           (2.26)

Now differentiating the tensor field φ(x) yields

dφ = A · dx ,                                                        (2.27)

which according to the definition of the gradient (2.13) means

A = grad(φ) = φ∇ .                                                   (2.28)

This argument can be reversed, which leads to the following theorem.

Theorem 29. The line integral ∫ A · dx is path independent if and only if there is a function φ(x) such that A = φ∇.

Exercise 12. Prove the reverse argument of the above theorem. That is, starting from the existence of φ(x) such that A = φ∇, prove that the line integral is path independent.

Remark 30. Note that in the particular case of C being a closed path, the line integral is path independent if and only if

∮_C A · dx = 0 ,                                                     (2.29)

for an arbitrary closed path C. The circle on the integral sign shows that C is closed. This is another statement of path independence, equivalent to the ones mentioned before.

2.7 Surface integrals


Consider a smooth connected surface S in space (Fig. 2.3), subdivided into N elements ΔS_i, i = 1, …, N, at positions x_i with unit normal vectors n_i. Let the tensor field A(x) be defined over the domain S. Then the surface integral of the normal component of A over S is defined as

∫_S A · n dS = lim_{N→∞} Σ_{i=1}^{N} A(x_i) · n_i ΔS_i               (2.30)

where it holds that

max_i ΔS_i → 0 .

Fig. 2.3: Surface integral.


The surface integral of the tangential component of A over S is defined as

∫_S A × n dS = lim_{N→∞} Σ_{i=1}^{N} A(x_i) × n_i ΔS_i .             (2.31)

Note that the surface integral (2.30) is a tensor of one order lower than A due to the appearance of A · n in the integrand, while the surface integral (2.31) is a tensor of the same order as A due to the term A × n.

Surface integrals in terms of double integrals

Surface integrals can be calculated in Cartesian coordinates. This is usually the case when the surface S does not have any symmetries, such as rotational symmetry. For this, the surface S is projected onto the Cartesian plane X1X2, which gives a domain S′ (Fig. 2.3). Then

∫_S A · n dS = ∫∫_{S′} A · n (dx1 dx2)/(n · e3) .                    (2.32)

The transformation between the area element dS and its projection dx1 dx2 is given by

dx1 dx2 = (n dS) · e3 = (n · e3) dS ,

where n is the unit normal vector to dS and e3 is the unit normal vector to dx1 dx2.

Exercise 13. Calculate the surface integral of the normal component of the vector field v = (1/r²) e_r over the unit sphere centered at the origin.

2.8 Divergence and curl operators


In the surface integrals introduced above, the case where the surface S is closed has special importance. In a physical context, the integral of the normal component of a tensor field over a closed surface expresses the overall flux of the field through the surface, which is a measure of the intensity of sources. (A source, as the word literally suggests, is what causes the tensor field.) On the other hand, the closed surface integral of the tangential component represents the so-called rotation of the field around the surface.

Assume that S is a closed surface with outward unit normal vector n, occupying the domain Ω with volume V (Fig. 2.4). An integral over a closed surface is usually denoted by ∮. We can define surface integrals of a given tensor field A(x) over S as in equations (2.30) and (2.31). As mentioned before, in the case of closed surfaces these have special physical meanings. A question that naturally arises is: what are the average intensity of sources and rotations over the domain Ω? The answer is

D = (1/V) ∮_S A · n dS ,   C = (1/V) ∮_S A × n dS ,                  (2.33)

Fig. 2.4: Closed surface.


where the integrals are divided by the volume V. Now if the domain shrinks, that is V → 0, these average values reflect the local intensity or concentration of sources and rotations of the tensor field. This is the physical analogue of density, which is the concentration of mass. These local values are called divergence and curl of the tensor field A, denoted and formally defined by

div(A) = lim_{V→0} (1/V) ∮_S A · n dS                                (2.34)

curl(A) = lim_{V→0} (1/V) ∮_S A × n dS .                             (2.35)

It can be shown that

div(A) = A · ∇                                                       (2.36)
curl(A) = A × ∇                                                      (2.37)

in operator notation. The proof is straightforward in Cartesian coordinates and can be found in most calculus books.

Exercise 14. Using a good calculus book, starting from (2.34) and (2.35), derive equations (2.36) and (2.37) in Cartesian coordinates for a vector field u(x) as

div(u) = u · ∇ = u_{i,i}   and   curl(u) = u × ∇ = ε_kij u_{i,j} e_k .
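The limit definition (2.34) can be checked numerically: the flux of a field through a small cube, divided by the cube's volume, should approach the pointwise divergence u_{i,i}. The field below is chosen arbitrarily for illustration:

```python
import numpy as np

# Sample field u(x) = (x1^2, x1*x2, x3); its divergence is u_{i,i} = 3*x1 + 1.
u = lambda x: np.array([x[0]**2, x[0]*x[1], x[2]])

def flux_per_volume(field, x0, a, n=20):
    """(1/V) * closed surface integral of field . n over a cube of side a centered
    at x0, using midpoint quadrature on each face; cf. definition (2.34)."""
    total = 0.0
    mids = (np.arange(n) + 0.5) / n * a - a / 2   # midpoints of surface cells
    dA = (a / n) ** 2
    for axis in range(3):
        others = [i for i in range(3) if i != axis]
        for sign in (+1.0, -1.0):
            for p in mids:
                for q in mids:
                    x = np.array(x0, dtype=float)
                    x[axis] += sign * a / 2        # move onto the face
                    x[others[0]] += p
                    x[others[1]] += q
                    total += sign * field(x)[axis] * dA   # field . n on this cell
    return total / a**3

x0 = [0.5, -0.3, 1.0]
assert np.isclose(flux_per_volume(u, x0, a=1e-3), 3 * x0[0] + 1, atol=1e-6)
```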

Remark 31. So far the gradient, divergence and curl operators have been applied to their operand from the right-hand side, i.e.

grad(A) = A ⊗ ∇ ,   div(A) = A · ∇ ,   curl(A) = A × ∇ .

This is not a strict requirement of their definition. In fact, the direction from which the operator is applied depends on the context. If we exchange the order of the dyadic, inner and outer products in equations (2.16), (2.34) and (2.35) respectively, the direction of application of nabla is reversed, to wit

lim_{V→0} (1/V) ∮_S n ⊗ A dS = ∇ ⊗ A ,   lim_{V→0} (1/V) ∮_S n × A dS = ∇ × A ,   (2.38)

which are equally valid. However, keeping a consistent convention should not be overlooked.

Exercise 15. Calculate ∇ × (∇ × v) for a vector field v.

2.9 Laplace's operator


We remember from equation (2.28) that a tensor field A whose line integrals are path-independent can be written as the gradient of a lower-order tensor field, A = ∇φ. Now if we are interested in the concentration of the sources that generate A, then

div(A) = div(grad(φ)) = ∇ · ∇φ .                                     (2.39)

This combination of gradient and divergence operators appears so often that it is given a distinct name, the so-called Laplace operator or Laplacian, denoted by

Δ = ∇ · ∇ = ∂_k ∂_k .                                                (2.40)


The Laplacian of a field A is sometimes denoted by ∇²A. The Laplace operator is also called the harmonic operator. Accordingly, two consecutive applications of the Laplacian to a field constitute the biharmonic operator, denoted by

∇⁴A = ΔΔA = Δ(ΔA) ,                                                  (2.41)

which expands in Cartesian coordinates as

∇⁴ = ∂⁴/∂x1⁴ + ∂⁴/∂x2⁴ + ∂⁴/∂x3⁴ + 2 ∂⁴/(∂x1²∂x2²) + 2 ∂⁴/(∂x2²∂x3²) + 2 ∂⁴/(∂x3²∂x1²) .   (2.42)

2.10 Gauss' and Stokes' Theorems


Theorem 32 (Gauss' theorem). Let S be a closed surface bounding a region of volume V with outward unit normal vector n, and A(x) a tensor field (Fig. 2.4). Then we have

∫_V A · ∇ dV = ∮_S A · n dS .                                        (2.43)

This result is also called Green's theorem or the divergence theorem. If we remember the physical interpretation of divergence, the Gauss theorem means that the flux of a tensor field through a closed surface equals the intensity of the sources bounded by the surface, which physically makes perfect sense.
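A minimal numerical illustration of the divergence theorem on the unit cube, with a sample field chosen so both sides are easy to evaluate:

```python
import numpy as np

# Field v(x) = (x1*x2, 0, 0) on the unit cube [0,1]^3; div v = x2.
g = (np.arange(400) + 0.5) / 400        # midpoint quadrature nodes on [0, 1]

# Volume integral of div v = x2 over the cube (integrand depends on x2 only).
vol_int = np.mean(g)                    # = int_0^1 x2 dx2 = 1/2

# Surface integral of v . n: only the face x1 = 1 contributes, where v.n = x2;
# on x1 = 0 we have v1 = 0, and v.n = 0 on all remaining faces.
surf_int = np.mean(g)                   # average of x2 over that unit face

assert np.isclose(vol_int, surf_int)
assert np.isclose(vol_int, 0.5, atol=1e-8)
```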

Theorem 33 (Stokes' theorem). Let S be an open two-sided surface bounded by a closed non-self-intersecting curve C, as in Fig. 2.5. Then

∮_C A · dx = ∫_S (A × ∇) · n dS ,                                    (2.44)

where n is directed so that the path direction appears counter-clockwise when viewed from the side of n, as indicated in the figure.

Fig. 2.5: Stokes' theorem.

The physical interpretation of Stokes' theorem is much like that of Gauss' theorem, except that it involves a closed line integral and the enclosed surface, instead of a closed surface integral and the enclosed volume. Namely, the overall rotation of the tensor field along a closed path equals the intensity of the rotations enclosed by the path.


2.11 Miscellaneous formulas and theorems involving nabla


For given vector fields v and w, tensor fields A and B, and scalar fields φ and ψ we have

∇(A + B) = ∇A + ∇B                                                   (2.45)
∇ · (A + B) = ∇ · A + ∇ · B                                          (2.46)
∇ × (A + B) = ∇ × A + ∇ × B                                          (2.47)
∇ · (φA) = (∇φ) · A + φ(∇ · A)                                       (2.48)
∇ × (φA) = (∇φ) × A + φ(∇ × A)                                       (2.49)
∇ × (∇ × A) = ∇(∇ · A) − ΔA                                          (2.50)
∇ · (v × w) = (∇ × v) · w − v · (∇ × w)                              (2.51)
∇ × (v × w) = (w · ∇)v − (∇ · v)w − (v · ∇)w + (∇ · w)v              (2.52)
∇(v · w) = (w · ∇)v + (v · ∇)w + w × (∇ × v) + v × (∇ × w)           (2.53)
Theorem 34. For a tensor field A

∇ × (∇A) = 0 ,   (A∇) × ∇ = 0                                        (2.54)
∇ · (∇ × A) = 0 ,   (A × ∇) · ∇ = 0                                  (2.55)

given that A is smooth enough.

Let us rephrase the above theorem as follows: the gradient of a tensor field is curl-free, i.e. its curl vanishes; the curl of a tensor field is divergence-free, i.e. its divergence vanishes. Compared with Gauss' and Stokes' theorems, these mean

∮_S (A × ∇) · n dS = ∫_V (A × ∇) · ∇ dV = 0                          (2.56)

and

∮_C (A ⊗ ∇) · dx = ∫_S ((A ⊗ ∇) × ∇) · n dS = 0 .                    (2.57)

Remark 35. The last equation, according to (2.29), is equivalent to path independence.

Exercise 16. Show that, in general, ∇(∇ · A) ≠ 0 and ∇ · (∇A) ≠ 0.

Theorem 36 (Green's identities). For two scalar fields φ and ψ the first Green's identity is

∫_V [φ Δψ + (∇φ) · (∇ψ)] dV = ∮_S φ (∇ψ) · n dS                      (2.58)

and the second Green's identity

∫_V [φ Δψ − (Δφ) ψ] dV = ∮_S [φ (∇ψ) − (∇φ) ψ] · n dS .              (2.59)


Alternative forms of Gauss' and Stokes' theorems

For a tensor field A and a scalar field φ we have

∫_V A ⊗ ∇ dV = ∮_S A ⊗ n dS ,   ∮_C φ dx = ∫_S (∇φ) × n dS ,         (2.60)

which can also be stated with nabla applied from the left as

∫_V ∇ ⊗ A dV = ∮_S n ⊗ A dS ,   ∮_C φ dx = ∫_S n × (∇φ) dS .         (2.61)

Fig. 2.6: Curvilinear coordinates.

2.12 Orthogonal curvilinear coordinate systems


A point P in space (Fig. 2.6) can be specified by its rectangular coordinates (another name for Cartesian coordinates) (x1, x2, x3), or by curvilinear coordinates (u1, u2, u3). Then there must be transformation rules of the form

x = x(u)   and   u = u(x) ,                                          (2.62)

where the transformation functions are smooth and one-to-one. Smoothness means differentiability, and being one-to-one requires the Jacobian determinant to be non-zero:

                                 | ∂u1/∂x1  ∂u1/∂x2  ∂u1/∂x3 |
det (∂(u1, u2, u3)/∂(x1, x2, x3)) = | ∂u2/∂x1  ∂u2/∂x2  ∂u2/∂x3 | ≠ 0 .   (2.63)
                                 | ∂u3/∂x1  ∂u3/∂x2  ∂u3/∂x3 |

Definition 37 (Jacobian). For a differentiable function f(x) with f : Rⁿ → Rᵐ, its Jacobian is an m × n matrix defined as J_ij = ∂f_i/∂x_j with i = 1, …, m and j = 1, …, n.

At the point P there are three surfaces described by

u1 = const ,   u2 = const ,   u3 = const                             (2.64)

passing through P, called coordinate surfaces. There are also three curves, each the intersection of two coordinate surfaces; these are called coordinate curves. On each


coordinate curve only one of the three coordinates u_i varies while the other two are constant. If the position vector is denoted by r, then the vector ∂r/∂u_i is tangent to the u_i coordinate curve, and the unit tangent vectors (Fig. 2.6) are obtained from

e1 = (∂r/∂u1)/‖∂r/∂u1‖ ,   e2 = (∂r/∂u2)/‖∂r/∂u2‖ ,   e3 = (∂r/∂u3)/‖∂r/∂u3‖ .   (2.65)

Introducing the scale factors h_i = ‖∂r/∂u_i‖ we have

∂r/∂u1 = h1 e1 ,   ∂r/∂u2 = h2 e2 ,   ∂r/∂u3 = h3 e3 .               (2.66)

If e1, e2 and e3 are mutually orthogonal, we call (u1, u2, u3) an orthogonal curvilinear coordinate system.

Remark 38. In rectangular coordinates the coordinate surfaces are flat planes and the coordinate curves are straight lines. Also, the scale factors h_i are equal to one due to the straight coordinate curves. Furthermore, the unit vectors e_i are the same at all points in space. These properties do not generally hold for curvilinear coordinate systems.
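The scale factors h_i = ‖∂r/∂u_i‖ of (2.66) can be computed numerically from the transformation x(u). The sketch below does this for cylindrical coordinates (treated in detail in Section 2.15), where the expected values are (1, r, 1):

```python
import numpy as np

# Cylindrical transformation (r, theta, z) -> (r cos t, r sin t, z).
def pos(u):
    r, t, z = u
    return np.array([r * np.cos(t), r * np.sin(t), z])

u0, step = np.array([2.0, 0.9, -1.0]), 1e-6
scale = []
for i in range(3):
    e = np.zeros(3); e[i] = step
    # h_i = |dr/du_i| by a central difference.
    scale.append(np.linalg.norm((pos(u0 + e) - pos(u0 - e)) / (2 * step)))

assert np.allclose(scale, [1.0, 2.0, 1.0], atol=1e-6)   # (h_r, h_theta, h_z) = (1, r, 1)
```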

2.13 Differentials

In three-dimensional space there are three types of geometrical objects (other than points) regarding dimension:

one-dimensional objects: lines
two-dimensional objects: surfaces
three-dimensional objects: volumes.

Each type has its own differential element, namely line elements, surface elements and volume elements. We assume that the reader has already dealt with the related ideas in rectangular (Cartesian) coordinates. Now we want to generalize those ideas to curvilinear coordinates.

Line elements

An infinitesimal line segment with an arbitrary direction in space can be projected onto the coordinate axes. Consider the position vector r expressed in a curvilinear coordinate system as r(u1, u2, u3). A line element is the differential of the position vector, obtained by the chain rule and using equation (2.66):

dr = (∂r/∂u1) du1 + (∂r/∂u2) du2 + (∂r/∂u3) du3 = h1 du1 e1 + h2 du2 e2 + h3 du3 e3 ,   (2.67)

where

dr1 = h1 du1 ,   dr2 = h2 du2 ,   dr3 = h3 du3                       (2.68)


are the basis line elements. If the position vector sweeps a curve (typical, for example, in kinematics), the arc length element, denoted ds, is obtained from

(ds)² = dr · dr = h1²(du1)² + h2²(du2)² + h3²(du3)² .                (2.69)

Note that the above formulas are completely general. For instance, in Cartesian coordinates, where h_i = 1, they reduce to the familiar forms

dr = dx1 e1 + dx2 e2 + dx3 e3   and   (ds)² = (dx1)² + (dx2)² + (dx3)² .

Area elements

An area element is a vector described by its magnitude and direction,

dS = dS n ,                                                          (2.70)

whose decomposition is obtained by

dS = (dS · e1) e1 + (dS · e2) e2 + (dS · e3) e3 ,                    (2.71)

where dS · e_i is in fact the projection of dS on the coordinate surface normal to e_i. On the other hand, having the coordinate line elements (2.68), the area elements on each coordinate surface are obtained by

dS1 e1 = dr2 e2 × dr3 e3 = h2 h3 du2 du3 e1
dS2 e2 = dr3 e3 × dr1 e1 = h3 h1 du3 du1 e2                          (2.72)
dS3 e3 = dr1 e1 × dr2 e2 = h1 h2 du1 du2 e3

which are called basis area elements. Note that an area element is a parallelogram with line elements as its sides, and its area is obtained by the cross product of the line element vectors. Comparing the two equations above gives

dS = (h2 h3 du2 du3) e1 + (h3 h1 du3 du1) e2 + (h1 h2 du1 du2) e3 .  (2.73)

As the simplest case, in Cartesian coordinates the latter formula gives the familiar form

dS = dx2 dx3 e1 + dx3 dx1 e2 + dx1 dx2 e3 .

Volume element

A volume element dV is a parallelepiped built on the three coordinate line elements dr1 e1, dr2 e2 and dr3 e3. Since the volume of a parallelepiped is obtained by the triple product of its side vectors, as in equation (1.77), we have

dV = (dr1 e1) · (dr2 e2 × dr3 e3) = h1 h2 h3 du1 du2 du3 .           (2.74)


2.14 Differential transformation


For two given orthogonal curvilinear coordinate systems (u1, u2, u3) and (q1, q2, q3), we are interested in the transformation of differentials between them. Here we adopt the index notation. There are two types of expressions in the aforementioned formulas to be transformed: the differentials du_i, and the factors of the form ∂r/∂u_i or their norms h_i. We consider the transformation of each separately. Starting from transformations of the form u(q), applying the chain rule we have

du_i = (∂u_i/∂q_j) dq_j   and   ∂r/∂u_i = (∂r/∂q_j)(∂q_j/∂u_i) ,     (2.75)

and because of the orthogonality of the coordinates

h_i = ‖∂r/∂u_i‖ = [ Σ_j ( h_j^{(q)} ∂q_j/∂u_i )² ]^{1/2} .           (2.76)

The area element transforms from one curvilinear coordinate system to another according to

dS^{(q)} = det(∂(q1, q2, q3)/∂(u1, u2, u3)) [∂(q1, q2, q3)/∂(u1, u2, u3)]^{−T} dS^{(u)} ,   (2.77)

and the volume element transforms according to

dV^{(q)} = det(∂(q1, q2, q3)/∂(u1, u2, u3)) dV^{(u)} .               (2.78)

2.15 Differential operators in curvilinear coordinates

Here we briefly address the general formulations of gradient, divergence, curl and Laplacian in orthogonal curvilinear coordinates. The formulations are given in concrete form for a scalar field φ and a vector field a = a1 e1 + a2 e2 + a3 e3; however, the reader should be able to use them for tensor fields of any order.

∇φ = (1/h1)(∂φ/∂u1) e1 + (1/h2)(∂φ/∂u2) e2 + (1/h3)(∂φ/∂u3) e3      (2.79)

∇ · a = [1/(h1 h2 h3)] [ ∂(h2 h3 a1)/∂u1 + ∂(h3 h1 a2)/∂u2 + ∂(h1 h2 a3)/∂u3 ]   (2.80)

            1      | h1 e1   h2 e2   h3 e3 |
∇ × a = --------   | ∂/∂u1   ∂/∂u2   ∂/∂u3 |                         (2.81)
        h1 h2 h3   | h1 a1   h2 a2   h3 a3 |

Δφ = ∇ · ∇φ = [1/(h1 h2 h3)] [ ∂/∂u1((h2 h3/h1) ∂φ/∂u1) + ∂/∂u2((h3 h1/h2) ∂φ/∂u2) + ∂/∂u3((h1 h2/h3) ∂φ/∂u3) ] .   (2.82)

Let us apply these formulas to the two most commonly used curvilinear coordinate systems, namely cylindrical and spherical coordinates.


Fig. 2.7: Cylindrical coordinates.

Cylindrical coordinates

In cylindrical coordinates a point is determined by (r, θ, z) (Fig. 2.7). The transformations to and from rectangular coordinates are given by

x1 = r cos θ ,   x2 = r sin θ ,   x3 = z                             (2.83)

r = (x1² + x2²)^{1/2} ,   θ = arctan(x2/x1) ,   z = x3 .             (2.84)

The scale factors are

h_r = 1 ,   h_θ = r ,   h_z = 1 ,                                    (2.85)

and the differential elements

dr = dr e_r + r dθ e_θ + dz e_z                                      (2.86)
dS = r dθ dz e_r + dr dz e_θ + r dr dθ e_z                           (2.87)
dV = r dr dθ dz .                                                    (2.88)

Finally, the differential operators can be listed as

∇φ = (∂φ/∂r) e_r + (1/r)(∂φ/∂θ) e_θ + (∂φ/∂z) e_z                    (2.89)

∇ · a = (1/r) ∂(r a_r)/∂r + (1/r) ∂a_θ/∂θ + ∂a_z/∂z                  (2.90)

Δφ = (1/r) ∂/∂r(r ∂φ/∂r) + (1/r²) ∂²φ/∂θ² + ∂²φ/∂z²
   = ∂²φ/∂r² + (1/r) ∂φ/∂r + (1/r²) ∂²φ/∂θ² + ∂²φ/∂z²               (2.91)

∇ × a = [ (1/r) ∂a_z/∂θ − ∂a_θ/∂z ] e_r + [ ∂a_r/∂z − ∂a_z/∂r ] e_θ
      + (1/r) [ ∂(r a_θ)/∂r − ∂a_r/∂θ ] e_z                          (2.92)


Fig. 2.8: Spherical coordinates.

Spherical coordinates

In spherical coordinates a point is determined by (r, θ, φ) (Fig. 2.8). The transformations to and from rectangular coordinates are given by

x1 = r sin θ cos φ ,   x2 = r sin θ sin φ ,   x3 = r cos θ           (2.93)

r = (x1² + x2² + x3²)^{1/2} ,   θ = arccos(x3/r) ,   φ = arctan(x2/x1) .   (2.94)

The scale factors are

h_r = 1 ,   h_θ = r ,   h_φ = r sin θ ,                              (2.95)

and the differential elements

dr = dr e_r + r dθ e_θ + r sin θ dφ e_φ                              (2.96)
dS = r² sin θ dθ dφ e_r + r sin θ dr dφ e_θ + r dr dθ e_φ            (2.97)
dV = r² sin θ dr dθ dφ .                                             (2.98)

Finally, the differential operators can be listed as (ψ denotes a scalar field)

∇ψ = (∂ψ/∂r) e_r + (1/r)(∂ψ/∂θ) e_θ + (1/(r sin θ))(∂ψ/∂φ) e_φ       (2.99)

∇ · a = (1/r²) ∂(r² a_r)/∂r + (1/(r sin θ)) ∂(sin θ a_θ)/∂θ + (1/(r sin θ)) ∂a_φ/∂φ   (2.100)

Δψ = ∂²ψ/∂r² + (2/r) ∂ψ/∂r + (1/r²) ∂²ψ/∂θ² + (cos θ/(r² sin θ)) ∂ψ/∂θ + (1/(r² sin²θ)) ∂²ψ/∂φ²   (2.101)

∇ × a = (1/(r sin θ)) [ ∂(sin θ a_φ)/∂θ − ∂a_θ/∂φ ] e_r
      + [ (1/(r sin θ)) ∂a_r/∂φ − (1/r) ∂(r a_φ)/∂r ] e_θ
      + (1/r) [ ∂(r a_θ)/∂r − ∂a_r/∂θ ] e_φ                          (2.102)


Bibliography

[1] A.I. Borisenko and I.E. Tarapov. Vector and Tensor Analysis with Applications. Dover Publications, 1968.
[2] P.C. Chou and N.J. Pagano. Elasticity: Tensor, Dyadic, and Engineering Approaches. Dover Publications, 1992.
[3] R.W. Ogden. Non-Linear Elastic Deformations. Dover Publications, 1997.
[4] M.R. Spiegel. Mathematical Handbook of Formulas and Tables. McGraw-Hill (Schaum's Outline), 1979.

