
An Introduction to Nonlinear Solid Mechanics

Anna Pandolfi
Doctoral School, Politecnico di Milano
January - February 2012
Course Outline
Continuum mechanics successfully explains various phenomena without
describing the complexity of the internal microstructure:
the solid is seen as a macroscopic system.
Mathematics preliminaries
Kinematics of deformations
Statics
Balance principles
Objectivity
Thermodynamics of materials
Hyperelasticity
Plasticity
Viscosity
Other behaviors
Lecture slides: http://www.stru.polimi.it/people/pandolfi/WebPage/nlsm.plp
Reference Textbooks
G. A. Holzapfel, Nonlinear Solid Mechanics, Wiley, Chichester, 2000
R. W. Ogden, Nonlinear Elastic Deformations, Constable Company, London, 1984 (Dover, New York, 1997)
J. E. Marsden and T. J. R. Hughes, Mathematical Foundations of Elasticity, Prentice-Hall, Englewood Cliffs, N.J., 1984 (Dover, New York, 1994)
A. J. M. Spencer, Continuum Mechanics, Longman, London, 1980 (Dover, New York, 2004)
P. Chadwick, Continuum Mechanics: Concise Theory and Problems, Allen & Unwin Ltd, London, 1976 (Dover, New York, 1999)
L. E. Malvern, Introduction to the Mechanics of a Continuous Medium, Prentice-Hall, Englewood Cliffs, New Jersey, 1969
C. Truesdell and W. Noll, The Nonlinear Field Theories of Mechanics, Springer-Verlag, Berlin, 1965 (1992, 2004)
1. Mathematical preliminaries
Notation
Scalars:
Vectors:
2-nd order tensors:
3-rd order tensors:
4-th order tensors:
Dot product:
Vector product:
Dyadic product:
Norm:
Vector or Nabla operator:
Divergence:
Curl:
Laplace operator:
A repeated (dummy) index implies summation over 1, 2, 3 (3D space), as in the example below.
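With the summation convention, a repeated index is summed over:
\[ \mathbf{u} = u_i\,\mathbf{e}_i = \sum_{i=1}^{3} u_i\,\mathbf{e}_i , \qquad \mathbf{u}\cdot\mathbf{v} = u_i v_i = u_1 v_1 + u_2 v_2 + u_3 v_3 . \]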
Orthonormal Basis Vectors
3D Euclidean space with a right-handed orthonormal system (basis).
Fixed set of three unit vectors with the property:
Representation of vectors in index notation:
Permutation tensor:
Properties of orthonormal basis vectors:
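In standard index notation (with δ the Kronecker delta and ε the permutation symbol), the relations named above read:
\[ \mathbf{e}_i \cdot \mathbf{e}_j = \delta_{ij}, \qquad \mathbf{u} = u_i\,\mathbf{e}_i, \quad u_i = \mathbf{u}\cdot\mathbf{e}_i , \]
\[ \varepsilon_{ijk} = \begin{cases} +1 & \text{for even permutations of } (1,2,3) \\ -1 & \text{for odd permutations} \\ 0 & \text{if any two indices coincide,} \end{cases} \qquad \mathbf{e}_i \times \mathbf{e}_j = \varepsilon_{ijk}\,\mathbf{e}_k . \]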
Basic Vector Algebra
Scalar product (contraction of indices):
j-th component of a vector:
Norm of a vector:
Vector (or cross) product:
Triple scalar (or box) product:
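In index notation, these operations take the standard forms:
\[ \mathbf{u}\cdot\mathbf{v} = u_i v_i, \qquad u_j = \mathbf{u}\cdot\mathbf{e}_j, \qquad |\mathbf{u}| = \sqrt{\mathbf{u}\cdot\mathbf{u}} = \sqrt{u_i u_i}, \]
\[ (\mathbf{u}\times\mathbf{v})_i = \varepsilon_{ijk}\,u_j v_k, \qquad (\mathbf{u}\times\mathbf{v})\cdot\mathbf{w} = \varepsilon_{ijk}\,u_i v_j w_k . \]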
Groups of Linear Transformations
A group is a set X together with a binary operation s.t.
1. The operation is associative
2. The identity element e is defined
3. The inverse element x^{-1} is defined
By fixing a basis in R^n, identify the set of linear mappings with the set
of all square n x n matrices. Matrix multiplication can be taken as the
corresponding binary operation.
Matrix multiplication represents the composition of linear mappings
from R^n to R^n, but the set of all such mappings fails to be a group, since not all
linear mappings have an inverse: a restriction is required.
The General Linear Group GL(n) of dimension n is the group of invertible linear
mappings over R^n:
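Identifying linear mappings with their matrices, GL(n) can be written as:
\[ GL(n) = \{\, \mathbf{A} \in \mathbb{R}^{n\times n} \;:\; \det \mathbf{A} \neq 0 \,\} . \]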
Second Order Tensors
A second order tensor A is a linear transformation (linear mapping) that acts on u
to generate v:
How can a tensor be generated? Through combinations of dyads, i.e. dyadic
(or tensor, or outer) products of vectors. A dyadic product results in an
expansion of indices (as opposed to the inner or dot product, which is a
contraction of indices).
A dyadic W is a linear combination of dyads with scalar coefficients (in general
a tensor cannot be expressed as a single dyad):
Any second order tensor A may be expressed as a dyadic formed by the basis
vectors (proved after the example):
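In symbols, the action of A, the definition of the dyad, and the dyadic representation of A in the basis e_i can be written as:
\[ \mathbf{v} = \mathbf{A}\mathbf{u}, \qquad (\mathbf{u}\otimes\mathbf{v})\,\mathbf{w} = (\mathbf{v}\cdot\mathbf{w})\,\mathbf{u}, \qquad \mathbf{A} = A_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j . \]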
Interpretation of Dyad
The dyad is a second order tensor which linearly transforms a vector w into a
vector n in the direction of u with the rule:
Simple example in Cartesian coordinates. Assume:
Now compute:
The vector w transforms into a vector in the direction of u. The figure shows the
special case of unit vectors.
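For instance, taking the illustrative values u = e_1, v = e_1 + e_2 and w = 2 e_2 + e_3 (values chosen here, not the ones of the original example), the dyad acts as:
\[ (\mathbf{u}\otimes\mathbf{v})\,\mathbf{w} = (\mathbf{v}\cdot\mathbf{w})\,\mathbf{u} = \big[(1)(0)+(1)(2)+(0)(1)\big]\,\mathbf{e}_1 = 2\,\mathbf{e}_1 , \]
so w is mapped onto a vector of length 2 in the direction of u.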
Cartesian Tensors
When a tensor A is resolved along orthonormal basis vectors, it is called a
Cartesian tensor. In particular, define the unit tensor I:
A dyad and a tensor A can be expressed in matrix notation by entering
their nine Cartesian components into a table:
The Cartesian components of A can be computed by using the dyad definition:
Using the following steps:
The expression shows that the vector A e_j has three components in the basis e_i.
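In a standard form, the unit tensor and the Cartesian components of A read:
\[ \mathbf{I} = \delta_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j, \qquad A_{ij} = \mathbf{e}_i\cdot\mathbf{A}\,\mathbf{e}_j, \qquad \mathbf{A}\,\mathbf{e}_j = A_{ij}\,\mathbf{e}_i . \]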
Positive and Negative Definiteness
Trace and Transpose
The tensor A is said to be positive semi-definite (negative semi-definite) if, for any
non-zero vector v:
The tensor A is said to be positive definite (negative definite) if, for any non-zero
vector v:
Positive or negative definiteness is related to the associated eigenvalue
problem.
Trace of a dyad or of a tensor:
Transpose A^T of a tensor:
In particular:
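In symbols, the definitions above (including the particular case of the transpose of a dyad) can be written as:
\[ \mathbf{v}\cdot\mathbf{A}\mathbf{v} \ge 0 \;(\le 0) \;\text{semi-definite}, \qquad \mathbf{v}\cdot\mathbf{A}\mathbf{v} > 0 \;(< 0) \;\text{definite}, \]
\[ \operatorname{tr}(\mathbf{u}\otimes\mathbf{v}) = \mathbf{u}\cdot\mathbf{v}, \quad \operatorname{tr}\mathbf{A} = A_{ii}, \qquad \mathbf{u}\cdot\mathbf{A}\mathbf{v} = \mathbf{v}\cdot\mathbf{A}^T\mathbf{u}, \quad (\mathbf{A}^T)_{ij} = A_{ji}, \qquad (\mathbf{u}\otimes\mathbf{v})^T = \mathbf{v}\otimes\mathbf{u} . \]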
Contraction Products
Contraction of an index: a familiar example is the scalar product.
A single-index contraction product (or dot product) can be defined between two
tensors, or between a tensor and a vector:
yielding a tensor or a vector, respectively.
Components of a dot product along an orthonormal basis:
A double-index contraction product between two tensors yields a scalar
(example: the internal work in the WVP):
Application to the calculation of the norm of a tensor:
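In components, with respect to an orthonormal basis:
\[ (\mathbf{A}\mathbf{B})_{ij} = A_{ik}B_{kj}, \qquad (\mathbf{A}\mathbf{v})_i = A_{ij}v_j, \qquad \mathbf{A}:\mathbf{B} = A_{ij}B_{ij}, \qquad \|\mathbf{A}\| = \sqrt{\mathbf{A}:\mathbf{A}} = \sqrt{A_{ij}A_{ij}} . \]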
Determinant and Inverse of a Tensor
Determinant of a tensor:
Properties:
If the determinant is null, the tensor is said to be singular. For a nonsingular
tensor, a unique inverse A^{-1} exists, such that:
For brevity, denote:
If the unique inverse A^{-1} exists, the tensor A is said to be invertible.
The General Linear Group GL(n) is the group of the invertible linear mappings
over R^n.
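Standard properties of the determinant and of the inverse include:
\[ \det\mathbf{A} = \varepsilon_{ijk}\,A_{1i}A_{2j}A_{3k}, \qquad \det(\mathbf{A}\mathbf{B}) = \det\mathbf{A}\,\det\mathbf{B}, \qquad \det\mathbf{A}^T = \det\mathbf{A}, \]
\[ \mathbf{A}\mathbf{A}^{-1} = \mathbf{A}^{-1}\mathbf{A} = \mathbf{I}, \qquad \det(\mathbf{A}^{-1}) = (\det\mathbf{A})^{-1} . \]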
Orthogonal Tensors
A tensor is said to be orthogonal if it preserves the lengths of vectors and the angles between them:
which implies:
By computing the determinants:
If Q and R are orthogonal, then the product QR is also orthogonal. The set of
orthogonal mappings is a subgroup of GL(n), closed under multiplication,
called the Orthogonal Group O(n).
The subgroup of the orthogonal group that preserves the orientation, i.e. with
determinant equal to 1, is called the Special Orthogonal Group SO(n). The
remaining orthogonal tensors, with determinant equal to -1, correspond to reflections.
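In symbols, orthogonality and its consequences read:
\[ \mathbf{Q}\mathbf{u}\cdot\mathbf{Q}\mathbf{v} = \mathbf{u}\cdot\mathbf{v}, \qquad \mathbf{Q}^T\mathbf{Q} = \mathbf{Q}\mathbf{Q}^T = \mathbf{I}, \quad \mathbf{Q}^{-1} = \mathbf{Q}^T, \qquad (\det\mathbf{Q})^2 = 1 \;\Rightarrow\; \det\mathbf{Q} = \pm 1 . \]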
Symmetric and Skew Tensors
Any second order tensor A can always be uniquely decomposed into a
symmetric tensor S and a skew tensor W:
The meaning and the importance of a symmetric tensor will be discussed later.
Any skew tensor W behaves like a three-component vector. By introducing the
axial (dual) vector of W, for any vector u the following relation holds:
The components of the skew tensor and of the axial vector are related as:
Example:
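Writing ω for the axial vector (a symbol chosen here for illustration), the standard relations are:
\[ \mathbf{A} = \mathbf{S} + \mathbf{W}, \qquad \mathbf{S} = \tfrac{1}{2}(\mathbf{A}+\mathbf{A}^T), \qquad \mathbf{W} = \tfrac{1}{2}(\mathbf{A}-\mathbf{A}^T), \]
\[ \mathbf{W}\mathbf{u} = \boldsymbol{\omega}\times\mathbf{u}, \qquad W_{ij} = -\varepsilon_{ijk}\,\omega_k, \qquad \omega_i = -\tfrac{1}{2}\,\varepsilon_{ijk}\,W_{jk} . \]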
Projection Tensors
Projections: given a unit vector e, a vector u decomposes into a vector in the
direction of e and a vector lying in the plane normal to e:
where the second-order projection tensors P_e are introduced.
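One standard way to write the decomposition and the associated projection tensors (labels chosen here) is:
\[ \mathbf{u} = \mathbf{u}_\parallel + \mathbf{u}_\perp, \qquad \mathbf{u}_\parallel = (\mathbf{u}\cdot\mathbf{e})\,\mathbf{e} = (\mathbf{e}\otimes\mathbf{e})\,\mathbf{u}, \qquad \mathbf{u}_\perp = \mathbf{u} - \mathbf{u}_\parallel = (\mathbf{I} - \mathbf{e}\otimes\mathbf{e})\,\mathbf{u} , \]
so that \( \mathbf{P}_e^{\parallel} = \mathbf{e}\otimes\mathbf{e} \) and \( \mathbf{P}_e^{\perp} = \mathbf{I} - \mathbf{e}\otimes\mathbf{e} \).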
Spherical and Deviatoric Tensors
Any second order tensor A can be decomposed into the sum of a spherical part
and a deviatoric part:
Deviatoric operator:
Example: the von Mises theory of plasticity, where the deviatoric part of the stress is
used.
The spherical part of a second order tensor is also said to be isotropic. The elements
outside the principal diagonal of a spherical tensor are null.
The deviatoric part of a second order tensor is also said to be isochoric. The trace of
a deviatoric tensor is null.
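In symbols, the decomposition and the deviatoric operator read:
\[ \mathbf{A} = \tfrac{1}{3}(\operatorname{tr}\mathbf{A})\,\mathbf{I} + \operatorname{dev}\mathbf{A}, \qquad \operatorname{dev}\mathbf{A} = \mathbf{A} - \tfrac{1}{3}(\operatorname{tr}\mathbf{A})\,\mathbf{I}, \qquad \operatorname{tr}(\operatorname{dev}\mathbf{A}) = 0 . \]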
Higher Order Tensors
Third-order tensors (27 components):
The double contraction between a third-order and a second-order tensor gives a vector:
Fourth-order tensors (81 components):
The double contraction between a fourth-order and a second-order tensor gives a second-order
tensor:
A fourth-order tensor can also be obtained as the dyadic product of two second-order tensors:
This is the typical tensor used in constitutive relations, to link one second order tensor
(stress) to another (strain).
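In components (with the calligraphic and blackboard symbols chosen here for the higher-order tensors):
\[ \mathcal{A} = \mathcal{A}_{ijk}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k, \qquad (\mathcal{A}:\mathbf{B})_i = \mathcal{A}_{ijk}\,B_{jk}, \]
\[ \mathbb{C} = \mathbb{C}_{ijkl}\,\mathbf{e}_i\otimes\mathbf{e}_j\otimes\mathbf{e}_k\otimes\mathbf{e}_l, \qquad (\mathbb{C}:\mathbf{B})_{ij} = \mathbb{C}_{ijkl}\,B_{kl}, \qquad (\mathbf{A}\otimes\mathbf{B})_{ijkl} = A_{ij}\,B_{kl} . \]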
Transpose and Identity Fourth-Order Tensors
Unique transpose of a fourth-order tensor: for all second order tensors B and C
Two (distinct) identity fourth-order tensors:
The deviatoric part of a second order tensor A may be written alternatively by
introducing a fourth-order projection tensor:
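One common way to write these objects (the symbols used here are chosen for illustration) is:
\[ \mathbf{B} : \mathbb{C}^T : \mathbf{C} = \mathbf{C} : \mathbb{C} : \mathbf{B} \;\; \text{for all } \mathbf{B},\mathbf{C}, \qquad \mathbb{I}_{ijkl} = \delta_{ik}\delta_{jl} \;(\mathbb{I}:\mathbf{A}=\mathbf{A}), \qquad \bar{\mathbb{I}}_{ijkl} = \delta_{il}\delta_{jk} \;(\bar{\mathbb{I}}:\mathbf{A}=\mathbf{A}^T), \]
\[ \operatorname{dev}\mathbf{A} = \mathbb{P}:\mathbf{A}, \qquad \mathbb{P} = \mathbb{I} - \tfrac{1}{3}\,\mathbf{I}\otimes\mathbf{I} . \]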
Eigenvalues of Tensors
An eigenvalue λ of a tensor A is a complex number such that A - λI is not
invertible. The spectrum is the collection of all eigenvalues of A.
The characteristic polynomial is invariant under similarity
transformations; its coefficients are also invariants:
Eigenvalues are the roots of the characteristic polynomial:
Eigenvalues characterize the physical nature of a tensor and do not depend on
the choice of coordinates.
Since A is real, the complex eigenvalues come in conjugate pairs.
A and A^T have the same eigenvalues. If A is invertible, the eigenvalues of A^{-1}
are the reciprocals of the eigenvalues of A.
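For a second order tensor in three dimensions, the characteristic polynomial and its invariant coefficients take the standard form:
\[ \det(\mathbf{A} - \lambda\mathbf{I}) = -\lambda^3 + I_1\,\lambda^2 - I_2\,\lambda + I_3 = 0, \]
\[ I_1 = \operatorname{tr}\mathbf{A}, \qquad I_2 = \tfrac{1}{2}\big[(\operatorname{tr}\mathbf{A})^2 - \operatorname{tr}(\mathbf{A}^2)\big], \qquad I_3 = \det\mathbf{A} . \]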
Eigenvectors of General Tensors
A right (or left) eigenvector associated with an eigenvalue of a
second order tensor A satisfies the equation:
A right eigenvector u_1 and a left eigenvector v_2 are orthogonal if and only if:
A is said to be diagonalizable if it has n linearly independent eigenvectors. In
such a case it is possible to define a right and a left eigenbasis (dual basis) with
normalized eigenvectors, so that:
Then A and A^T admit the so-called spectral decomposition:
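With λ_i the eigenvalues, u_i the right eigenvectors and v_i the left eigenvectors, normalized so that the two bases are dual to each other, the standard relations are:
\[ \mathbf{A}\mathbf{u}_i = \lambda_i\,\mathbf{u}_i, \qquad \mathbf{A}^T\mathbf{v}_i = \lambda_i\,\mathbf{v}_i, \qquad \mathbf{v}_i\cdot\mathbf{u}_j = \delta_{ij}, \]
\[ \mathbf{A} = \sum_{i=1}^{n} \lambda_i\,\mathbf{u}_i\otimes\mathbf{v}_i, \qquad \mathbf{A}^T = \sum_{i=1}^{n} \lambda_i\,\mathbf{v}_i\otimes\mathbf{u}_i . \]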
Symmetric Tensors, Spectral Decomposition
In the case of symmetric tensors, eigenvalues are real numbers, and right and
left eigenvectors coincide. The eigenvectors form an orthonormal basis and the
tensor is diagonalizable.
By using the definition of identity second order tensor in the principal reference
system, write any diagonalizable tensor with distinct eigenvalues as:
For two equal eigenvalues (e.g. in 2 dimensions), i.e. λ_1 = λ_2 = λ:
For three equal eigenvalues:
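Writing n_a for the orthonormal eigenvectors (a symbol chosen here), the three cases take the standard forms:
\[ \mathbf{A} = \sum_{a=1}^{3} \lambda_a\,\mathbf{n}_a\otimes\mathbf{n}_a \quad (\text{distinct eigenvalues}), \]
\[ \mathbf{A} = \lambda\,(\mathbf{I} - \mathbf{n}_3\otimes\mathbf{n}_3) + \lambda_3\,\mathbf{n}_3\otimes\mathbf{n}_3 \quad (\lambda_1=\lambda_2=\lambda), \qquad \mathbf{A} = \lambda\,\mathbf{I} \quad (\lambda_1=\lambda_2=\lambda_3=\lambda) . \]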
Transformation Laws for Basis Vectors
Vectors and tensors remain invariant upon a change of basis, but their
components depend upon the coordinate system introduced.
Given two sets of basis vectors, denote:
Using the dyadic representation of the tensor Q in the basis:
[Q] is an orthogonal matrix which collects the components Q_ij of the proper
orthogonal tensor Q; it is referred to as the transformation matrix.
In the following, we use [u] to denote the column matrix representation of a
vector and [A] the matrix representation of a tensor.
In different bases the same vector (tensor) is described by different components.
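Under one common convention (an assumption here; the index placement may differ from the one adopted in the course), writing ẽ_i for the second basis, the components of Q relate the two bases as:
\[ Q_{ij} = \mathbf{e}_i\cdot\tilde{\mathbf{e}}_j, \qquad \tilde{\mathbf{e}}_j = Q_{ij}\,\mathbf{e}_i, \qquad \mathbf{Q} = Q_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j . \]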
Vector and Tensor Transformation Laws
Vector transformation law. Describe the same vector in two different systems:
Tensor transformation law. Describe the same tensor in two different systems:
A tensor is said to be isotropic if its components are the same under arbitrary
rotations of the basis vectors.
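Consistently with the convention Q_ij = e_i · ẽ_j introduced above (so this is a sketch under that assumption), the component transformation laws take the form:
\[ \tilde{u}_i = \mathbf{u}\cdot\tilde{\mathbf{e}}_i = Q_{ji}\,u_j, \qquad [\tilde{\mathbf{u}}] = [\mathbf{Q}]^T[\mathbf{u}], \]
\[ \tilde{A}_{ij} = \tilde{\mathbf{e}}_i\cdot\mathbf{A}\,\tilde{\mathbf{e}}_j = Q_{ki}\,A_{kl}\,Q_{lj}, \qquad [\tilde{\mathbf{A}}] = [\mathbf{Q}]^T[\mathbf{A}]\,[\mathbf{Q}] . \]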
Scalar, Vector and Tensor Functions
Scalar, vector and tensor functions: the arguments are scalar, vector, or tensor
variables, respectively. The returned values may be of any type. For vector and tensor
values we assume that the components, in a basis system e_i, are in turn
functions of the scalar, vector, or tensor variables:
First (and n-th) derivatives with respect to a scalar argument (e.g. time t), taken as derivatives of the
components:
Derivative of the inverse:
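For a fixed basis the derivative acts on the components; the derivative of the inverse follows from differentiating A A^{-1} = I:
\[ \dot{\mathbf{A}}(t) = \dot{A}_{ij}(t)\,\mathbf{e}_i\otimes\mathbf{e}_j, \qquad \frac{d}{dt}\big(\mathbf{A}^{-1}\big) = -\,\mathbf{A}^{-1}\,\dot{\mathbf{A}}\,\mathbf{A}^{-1} . \]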
Gradient of a Scalar valued Tensor Function
The nonlinear scalar-valued function returns a scalar for each A. We want to
approximate the nonlinear function u at A with a linear function: use a first-order
Taylor expansion.
The Landau order symbol o(·) denotes a small error that tends to zero faster
than its argument:
The second order tensor called the gradient (or derivative) of u at A:
The gradient is derived as the first variation of u.
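In a standard form, the expansion and the componentwise definition of the gradient read:
\[ u(\mathbf{A} + d\mathbf{A}) = u(\mathbf{A}) + \frac{\partial u}{\partial \mathbf{A}} : d\mathbf{A} + o(d\mathbf{A}), \qquad \left[\frac{\partial u}{\partial \mathbf{A}}\right]_{ij} = \frac{\partial u}{\partial A_{ij}} . \]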
Gradient of a Tensor valued Tensor Function
The nonlinear tensor-valued tensor function returns a tensor for each B.
We want to approximate the nonlinear function A at B by a linear function: use a
first-order Taylor expansion.
The fourth-order tensor called the gradient (or derivative) of the function A at B:
In particular:
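In a standard form (the last relation is a simple particular case, the derivative of a tensor with respect to itself, not necessarily the one meant on the original slide):
\[ \mathbf{A}(\mathbf{B} + d\mathbf{B}) = \mathbf{A}(\mathbf{B}) + \frac{\partial \mathbf{A}}{\partial \mathbf{B}} : d\mathbf{B} + o(d\mathbf{B}), \qquad \left[\frac{\partial \mathbf{A}}{\partial \mathbf{B}}\right]_{ijkl} = \frac{\partial A_{ij}}{\partial B_{kl}}, \qquad \frac{\partial \mathbf{B}}{\partial \mathbf{B}} = \mathbb{I} . \]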
Gradient of a Scalar Field u (x)
A scalar field u(x), a vector field u(x), or a tensor field A(x) assigns a scalar,
a vector, or a tensor, respectively, to each material point x over a domain Ω.
The Taylor expansion of a scalar field introduces the gradient:
Total differential of u:
Introduce the vector operator (nabla operator):
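In a standard form, with x_i the Cartesian coordinates:
\[ u(\mathbf{x} + d\mathbf{x}) = u(\mathbf{x}) + \operatorname{grad} u \cdot d\mathbf{x} + o(d\mathbf{x}), \qquad du = \operatorname{grad} u \cdot d\mathbf{x}, \qquad \operatorname{grad} u = \nabla u = \frac{\partial u}{\partial x_i}\,\mathbf{e}_i . \]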
Directional Derivative
Level surface of a scalar field u at x:
Since grad u is a vector normal to the level surface, the normal to the surface n is:
For any vector u forming an angle with n, the directional derivative (Gâteaux
derivative) at x in the direction of u is the scalar product:
It is maximum in the direction of n and minimum in the direction of -n.
The normal derivative is the maximum of all the directional derivatives at x over
unit vectors u:
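Writing v for a unit direction vector and θ for its angle with n (symbols chosen here to avoid clashing with the scalar field u), these statements take the form:
\[ \mathbf{n} = \frac{\operatorname{grad} u}{|\operatorname{grad} u|}, \qquad \frac{\partial u}{\partial \mathbf{v}} = \operatorname{grad} u \cdot \mathbf{v} = |\operatorname{grad} u|\,\cos\theta, \qquad \frac{\partial u}{\partial \mathbf{n}} = \operatorname{grad} u \cdot \mathbf{n} = |\operatorname{grad} u| . \]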
Alternative Definitions of Directional Derivative
Gâteaux derivative: for any vector u forming an angle with n, the directional
derivative at x in the direction of u is the scalar product:
The directional derivative in x is also the rate of change of u along the straight
line through x in the direction of u:
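In the limit form (again writing v for the direction), the rate-of-change definition reads:
\[ Du(\mathbf{x})[\mathbf{v}] = \left.\frac{d}{d\varepsilon}\,u(\mathbf{x} + \varepsilon\,\mathbf{v})\right|_{\varepsilon = 0} = \operatorname{grad} u(\mathbf{x})\cdot\mathbf{v} . \]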
Divergence and Curl
Divergence: the scalar field produced by the vector operator dotted into a vector field.
When div u = 0, the field is said to be solenoidal (or divergence-free).
Curl: the vector field produced by the vector operator crossed into a vector field.
When curl u = 0, the field is said to be irrotational (or conservative, curl-free). It holds:
A vector field u such that u = grad u, for some scalar field u, is automatically irrotational;
the scalar field u is then called the potential of u.
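In index notation, together with the identity that makes gradients irrotational:
\[ \operatorname{div}\mathbf{u} = \nabla\cdot\mathbf{u} = \frac{\partial u_i}{\partial x_i}, \qquad (\operatorname{curl}\mathbf{u})_i = (\nabla\times\mathbf{u})_i = \varepsilon_{ijk}\,\frac{\partial u_k}{\partial x_j}, \qquad \operatorname{curl}(\operatorname{grad} u) = \mathbf{0} . \]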
Gradient of a Vector Field
The gradient of a smooth vector field u(x) is a second order tensor field:
in matrix notation:
Note that:
Transpose of the gradient:
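In components, the gradient of a vector field and its transpose read (note also the link to the divergence):
\[ (\operatorname{grad}\mathbf{u})_{ij} = \frac{\partial u_i}{\partial x_j}, \qquad \big[(\operatorname{grad}\mathbf{u})^T\big]_{ij} = \frac{\partial u_j}{\partial x_i}, \qquad \operatorname{tr}(\operatorname{grad}\mathbf{u}) = \operatorname{div}\mathbf{u} . \]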
Divergence and Gradient of a Tensor Field
Divergence of a tensor field: the vector field produced by the vector operator dotted
into a tensor field.
Alternative definition (differential equilibrium equation)
The gradient of a smooth tensor field A(x) is a third-order tensor field:
Note that:
Laplace operator: the vector operator dotted into itself
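In components (with one common convention for the divergence of a tensor):
\[ (\operatorname{div}\mathbf{A})_i = \frac{\partial A_{ij}}{\partial x_j}, \qquad (\operatorname{grad}\mathbf{A})_{ijk} = \frac{\partial A_{ij}}{\partial x_k}, \qquad \Delta = \nabla\cdot\nabla = \frac{\partial^2}{\partial x_i\,\partial x_i} . \]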
Special Differential Equations and Hessian
The Laplace operator applied to a scalar field u yields another scalar field,
as in Laplace's equation (Poisson's equation):
Hessian operator (the gradient of a gradient, leading to a second order tensor):
Properties (to remember in the calculation of the tangent stiffness):
If a vector field is both solenoidal and irrotational, it is said to be harmonic:
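In a standard form, together with an identity that is useful for the properties mentioned above:
\[ \Delta u = \operatorname{div}(\operatorname{grad} u) = \frac{\partial^2 u}{\partial x_i\,\partial x_i}, \qquad (\operatorname{grad}\operatorname{grad} u)_{ij} = \frac{\partial^2 u}{\partial x_i\,\partial x_j}, \]
\[ \Delta\mathbf{u} = \operatorname{grad}(\operatorname{div}\mathbf{u}) - \operatorname{curl}(\operatorname{curl}\mathbf{u}), \qquad \text{so a solenoidal and irrotational field satisfies } \Delta\mathbf{u} = \mathbf{0} . \]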
Integral Theorems
Divergence Theorem: for u and A smooth fields defined over a 3D region with
volume V and bounding closed surface S:
Gauss divergence theorem: the first integral is called the total (outward normal) flux
of u out of the boundary surface S enclosing V.
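In a standard form, with n the outward unit normal to S:
\[ \int_S \mathbf{u}\cdot\mathbf{n}\;dS = \int_V \operatorname{div}\mathbf{u}\;dV, \qquad \int_S \mathbf{A}\,\mathbf{n}\;dS = \int_V \operatorname{div}\mathbf{A}\;dV . \]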