
Random Fields

Efficient Analysis and Simulation

Christian Bucher & Sebastian Wolff

Vienna University of Technology


& DYNARDO Austria GmbH, Vienna

Overview

Introduction
Elementary properties
Conditional random fields
Computational aspects
Example
Concluding remarks


© Christian Bucher 2010–2014

Random field
Real-valued function $H(\mathbf{x})$ defined in an $n$-dimensional space:

$H \in \mathbb{R}; \qquad \mathbf{x} = [x_1, x_2, \ldots, x_n]^T \in D \subset \mathbb{R}^n$

Ensemble of all possible realisations
Describe statistics in terms of mean and variance
Need to consider the correlation structure between values of $H$ at different locations $\mathbf{x}$ and $\mathbf{y}$

[Figure: one realisation $H(\mathbf{x}, \omega)$, indicating two locations $\mathbf{x}$ and $\mathbf{y}$ and the characteristic length $L$]


Second order statistics of random field


Mean value function:

$\bar{H}(\mathbf{x}) = E[H(\mathbf{x})]$

Autocovariance function:

$C_{HH}(\mathbf{x}, \mathbf{y}) = E\left[\{H(\mathbf{x}) - \bar{H}(\mathbf{x})\}\{H(\mathbf{y}) - \bar{H}(\mathbf{y})\}\right]$

A random field $H(\mathbf{x})$ is called weakly homogeneous if

$\bar{H}(\mathbf{x}) = \text{const.} \;\; \forall \mathbf{x} \in D; \qquad C_{HH}(\mathbf{x}, \mathbf{x} + \boldsymbol{\xi}) = C_{HH}(\boldsymbol{\xi}) \;\; \forall \mathbf{x} \in D$

A homogeneous random field $H(\mathbf{x})$ is called isotropic if

$C_{HH}(\boldsymbol{\xi}) = C_{HH}(\|\boldsymbol{\xi}\|)$

Correlation distance (characteristic length $L_c$)



Example: Random field in a square plate


Simulated random samples of isotropic field


Conditional Random Fields 1


Assume that the values of the random field $H(\mathbf{x})$ are known at the locations $\mathbf{x}_k$, $k = 1 \ldots m$.
Stochastic interpolation for the conditional random field:

$\hat{H}(\mathbf{x}) = a(\mathbf{x}) + \sum_{k=1}^{m} b_k(\mathbf{x}) H(\mathbf{x}_k)$

$a(\mathbf{x})$ and $b_k(\mathbf{x})$ are interpolating functions determined from two conditions.
Make the mean value of the difference between the random field and the conditional field zero:

$E[\hat{H}(\mathbf{x}) - H(\mathbf{x})] = 0$

Minimize the variance of the difference:

$E[(\hat{H}(\mathbf{x}) - H(\mathbf{x}))^2] \to \text{Min.}$

Conditional Random Fields 2


Mean value of the conditional random field:

$\bar{\hat{H}}(\mathbf{x}) = \left[ C_{HH}(\mathbf{x}, \mathbf{x}_1) \; \ldots \; C_{HH}(\mathbf{x}, \mathbf{x}_m) \right] \mathbf{C}_{HH}^{-1} \begin{bmatrix} H(\mathbf{x}_1) \\ \vdots \\ H(\mathbf{x}_m) \end{bmatrix}$

$\mathbf{C}_{HH}$ denotes the covariance matrix of the random field $H(\mathbf{x})$ at the locations of the measurements.
Covariance matrix of the conditional random field:

$\hat{C}(\mathbf{x}, \mathbf{y}) = C(\mathbf{x}, \mathbf{y}) - \left[ C_{HH}(\mathbf{x}, \mathbf{x}_1) \; \ldots \; C_{HH}(\mathbf{x}, \mathbf{x}_m) \right] \mathbf{C}_{HH}^{-1} \begin{bmatrix} C_{HH}(\mathbf{y}, \mathbf{x}_1) \\ \vdots \\ C_{HH}(\mathbf{y}, \mathbf{x}_m) \end{bmatrix}$

Zero at the measurement points.
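The conditional mean and covariance above can be sketched numerically. A minimal 1D example, assuming a Gaussian covariance function and hypothetical measurement locations and values:

```python
import numpy as np

def cov(x, y, l=1.0):
    """Gaussian covariance C(x, y) = exp(-|x - y|^2 / (2 l^2)) -- an assumption here."""
    d = np.linalg.norm(np.atleast_1d(x) - np.atleast_1d(y))
    return np.exp(-d**2 / (2.0 * l**2))

# Hypothetical measurement locations x_k and observed values H(x_k)
xm = np.array([0.0, 1.0, 2.0])
hm = np.array([0.5, -0.2, 0.8])

# Covariance matrix C_HH at the measurement locations
C = np.array([[cov(a, b) for b in xm] for a in xm])

def cond_mean(x):
    """Conditional mean: covariance row times C_HH^{-1} times measurements."""
    c = np.array([cov(x, b) for b in xm])
    return c @ np.linalg.solve(C, hm)

def cond_cov(x, y):
    """Conditional covariance: C(x, y) minus the part explained by the data."""
    cx = np.array([cov(x, b) for b in xm])
    cy = np.array([cov(y, b) for b in xm])
    return cov(x, y) - cx @ np.linalg.solve(C, cy)

print(cond_mean(1.0))      # -0.2: reproduces the measurement at x = 1.0
print(cond_cov(1.0, 1.0))  # ~0: zero variance at a measurement point
```

As the slide states, the conditional field reproduces the measurements exactly and its variance vanishes at the measurement points.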



Spectral decomposition
Perform a Fourier-type series expansion using deterministic basis functions $\phi_k$ and random coefficients $c_k$:

$H(\mathbf{x}) = \sum_{k=1}^{\infty} c_k \phi_k(\mathbf{x}), \qquad c_k \in \mathbb{R}, \; \phi_k \in \mathbb{R}, \; \mathbf{x} \in D$

The optimal choice of the basis functions is given by an eigenvalue (spectral) decomposition of the auto-covariance function (Karhunen-Loève expansion):

$C_{HH}(\mathbf{x}, \mathbf{y}) = \sum_{k=1}^{\infty} \lambda_k \phi_k(\mathbf{x}) \phi_k(\mathbf{y}), \qquad \int_D C_{HH}(\mathbf{x}, \mathbf{y}) \phi_k(\mathbf{x}) \, \mathrm{d}\mathbf{x} = \lambda_k \phi_k(\mathbf{y})$

The basis functions $\phi_k$ are orthogonal and the coefficients $c_k$ are uncorrelated.

Discrete version
Discrete random field:

$H_i = H(\mathbf{x}_i), \qquad i = 1 \ldots N$

The spectral decomposition is given by

$H_i - E[H_i] = \sum_{k=1}^{N} \phi_k(\mathbf{x}_i) c_k = \sum_{k=1}^{N} \Phi_{ik} c_k$

In matrix-vector notation:

$\mathbf{H} = \boldsymbol{\Phi} \mathbf{c} + \bar{\mathbf{H}}$

Computation of the basis vectors by solving for the eigenvalues $\lambda_k$ of the covariance matrix $\mathbf{C}_{HH}$:

$\mathbf{C}_{HH} \boldsymbol{\phi}_k = \lambda_k \boldsymbol{\phi}_k; \qquad \lambda_k \geq 0; \qquad k = 1 \ldots N$
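A minimal numerical sketch of this discrete Karhunen-Loève decomposition, assuming a Gaussian covariance function on a 1D grid (grid size and correlation length are illustrative):

```python
import numpy as np

# Discretise the field at N points and assemble C_HH from a
# Gaussian covariance function C(d) = exp(-d^2 / (2 l^2))
N, l = 100, 0.2
x = np.linspace(0.0, 1.0, N)
D = np.abs(x[:, None] - x[None, :])   # pairwise distances
C = np.exp(-D**2 / (2.0 * l**2))      # covariance matrix C_HH

# Eigenvalue problem C_HH phi_k = lambda_k phi_k
lam, Phi = np.linalg.eigh(C)          # eigh returns ascending eigenvalues
lam, Phi = lam[::-1], Phi[:, ::-1]    # sort descending

# One zero-mean sample: H = Phi diag(sqrt(lambda)) u, with u ~ N(0, I)
u = np.random.default_rng(0).standard_normal(N)
H = Phi @ (np.sqrt(np.clip(lam, 0.0, None)) * u)  # clip tiny negative round-off

# The eigenvalues sum to the total variance: trace(C_HH) = N (unit variance)
print(lam.sum())
```

The eigenvectors returned by `eigh` are orthonormal, matching the orthogonality of the basis functions $\phi_k$ stated above.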

Example: Random field in a square plate


Basis vectors


Modeling random fields

Most important: the correlation structure; its most significant parameter is the correlation length $L_c$.
An estimate of the correlation length can be obtained by applying statistical methods to observed data.
Type of probability distribution of the material/geometrical parameters: statistical methods can be applied to infer distribution information from observed measurements.
It is helpful to identify the exact type of correlation (or covariance) function, and to check for homogeneity. This will be feasible only if a fairly large set of experimental data is available.


Computational aspects

Need to set up the covariance matrix from the covariance function of the field.
Storage requirements of $O(M^2)$, since the covariance matrix is fully populated.
The Karhunen-Loève expansion is realised using numerical methods from linear algebra (eigenvalue analysis).
Numerical complexity of $O(M^3)$.


Simulation for small correlation length


Assemble a sparse covariance matrix (e.g. based on piecewise polynomial covariance functions):

$C_{l,p}(d) = (1 - d/l)_+^p, \qquad p > 1$

Perform a decomposition of the covariance matrix, $\mathbf{C} = \mathbf{L}\mathbf{L}^T$, e.g. by a sparse Cholesky factorization.
Simulate N field vectors $\mathbf{u}_k$ of statistically independent standard-normal random variables, one number for each node.
Apply the correlation in standard normal space for each sample k: $\mathbf{z}_k = \mathbf{L}\mathbf{u}_k$.
Transform the correlated field samples into the space of the desired random field: $x_{k,i} = F^{-1}(N(z_{k,i}))$, where $N$ is the standard normal CDF and $F$ the target marginal distribution.
Does not reduce the number of variables.
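The steps above can be sketched as follows. For brevity a dense Cholesky factorization is used where the slide suggests a sparse one; in practice a sparse solver would exploit the compact support. Grid size, correlation length, and the choice of a uniform target marginal are all illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm

# Nodes on a 1D grid; compactly supported covariance C(d) = (1 - d/l)_+^p
M, l, p = 200, 0.1, 2
x = np.linspace(0.0, 1.0, M)
d = np.abs(x[:, None] - x[None, :])
C = np.clip(1.0 - d / l, 0.0, None) ** p   # zero wherever d >= l (sparse pattern)

# C = L L^T; small jitter on the diagonal for numerical stability
J = C + 1e-8 * np.eye(M)
L = np.linalg.cholesky(J)

# One sample: correlate independent N(0,1) variables, then map the marginals
rng = np.random.default_rng(1)
u = rng.standard_normal(M)   # one independent standard-normal number per node
z = L @ u                    # correlated standard-normal field z = L u
sample = norm.cdf(z)         # F^{-1}(N(z)) with F uniform on [0,1], so F^{-1} = id
```

With a different target marginal $F$ (e.g. lognormal), only the last line changes, to the corresponding inverse-CDF transform.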

Simulation for large correlation length 1


Typical covariance function:

$C_l(d) = \exp\left(-\frac{d^2}{2 l^2}\right)$

A spectral decomposition is used to factorize the covariance matrix by $\mathbf{C} = \boldsymbol{\Phi} \, \mathrm{diag}(\lambda_i) \, \boldsymbol{\Phi}^T$ with eigenvalues $\lambda_i$ and orthogonal eigenvectors $\boldsymbol{\Phi} = [\boldsymbol{\phi}_i]$.
This decomposition is used to reduce the number of random variables. Given a moderately large correlation length, only a few (e.g. 3-5) eigenvectors are required to represent more than 90% of the total variability.
Perform a decomposition of the covariance matrix $\mathbf{C}_{HH} = \boldsymbol{\Phi} \, \mathrm{diag}(\lambda_i) \, \boldsymbol{\Phi}^T$ and choose m basis vectors $\boldsymbol{\phi}_i$ associated with the largest eigenvalues.

Simulation for large correlation length 2

Simulate N vectors $\mathbf{u}_k$ of statistically independent standard-normal random variables, each vector of dimension m.
Apply the (decomposed) covariance in standard normal space for each sample k:

$\mathbf{z}_k = \sum_{i=1}^{m} \sqrt{\lambda_i} \, \boldsymbol{\phi}_i \, u_{k,i}$

Transform the correlated field samples into the space of the desired random field: $x_{k,j} = F^{-1}(N(z_{k,j}))$.
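A minimal sketch of this truncated simulation (1D grid with a Gaussian covariance function; the sizes are illustrative):

```python
import numpy as np

# Gaussian covariance with a large correlation length on a 1D grid
N, l, m = 200, 0.5, 5
x = np.linspace(0.0, 1.0, N)
d = np.abs(x[:, None] - x[None, :])
C = np.exp(-d**2 / (2.0 * l**2))

# Keep only the m eigenvectors with the largest eigenvalues
lam, Phi = np.linalg.eigh(C)
lam, Phi = lam[::-1][:m], Phi[:, ::-1][:, :m]

# z_k = sum_i sqrt(lambda_i) phi_i u_{k,i}, for Ns samples at once
rng = np.random.default_rng(2)
Ns = 1000
U = rng.standard_normal((m, Ns))         # independent N(0,1), dimension m
Z = Phi @ (np.sqrt(lam)[:, None] * U)    # (N, Ns) correlated field samples

# Fraction of the total variability captured by the m modes
print(lam.sum() / np.trace(C))           # close to 1 for a large correlation length
```

With $m = 5$ random variables per sample instead of $N = 200$, this illustrates the variable reduction mentioned on the previous slide.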


Simulation for large correlation length 3


A global error measure may be based on the total variability explained by the selected eigenvalues, i.e.

$\varepsilon = 1 - \frac{\sum_{i=1}^{m} \lambda_i}{\sum_{i=1}^{n} \lambda_i}$

wherein n is the number of discrete points.
This procedure allows the generation of random field samples with relatively large correlation length parameters.
It is based on a model order reduction, i.e. only a portion of the desired variability can be retained.
The covariance matrix is stored as a dense matrix. Hence, the size of the FEM mesh is effectively limited to about 30,000 nodes (the covariance matrix has $9 \times 10^8$ entries, i.e. > 7 GB).
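The error measure suggests a simple rule for choosing m: take the smallest m whose cumulative eigenvalue sum brings $\varepsilon$ below a tolerance. A sketch, assuming the eigenvalues are sorted in descending order (the eigenvalue list is illustrative):

```python
import numpy as np

def choose_modes(lam, tol=0.1):
    """Smallest m with eps = 1 - sum_{i<=m} lam_i / sum_i lam_i <= tol.

    lam: eigenvalues of the covariance matrix, sorted descending.
    """
    lam = np.asarray(lam, dtype=float)
    explained = np.cumsum(lam) / lam.sum()          # ascending in [0, 1]
    return int(np.searchsorted(explained, 1.0 - tol) + 1)

# Illustrative eigenvalue decay for a smooth covariance function
lam = np.array([5.0, 2.0, 1.0, 0.5, 0.3, 0.2])
print(choose_modes(lam, tol=0.1))  # -> 4 (eps = 0.056 <= 0.1; m = 3 gives 0.111)
```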


Efficient simulation strategy

Randomly select M support points from the finite element mesh.
Assemble the covariance matrix for the selected sub-space.
Perform a decomposition of the covariance matrix $\mathbf{C} = \boldsymbol{\Psi} \, \mathrm{diag}(\lambda_i) \, \boldsymbol{\Psi}^T$ and choose m basis vectors $\boldsymbol{\psi}_i$.
Create basis vectors $\boldsymbol{\phi}_i$ by interpolating the values of $\boldsymbol{\psi}_i$ on the FEM mesh.
Simulate N vectors $\mathbf{u}_k$ of statistically independent standard-normal random variables, each vector of dimension m.
Apply the (decomposed) covariance in standard normal space for each sample k:

$\mathbf{z}_k = \sum_{i=1}^{m} \sqrt{\lambda_i} \, \boldsymbol{\phi}_i \, u_{k,i}$

Transform the correlated field samples into the space of the desired random field: $x_{k,j} = F^{-1}(N(z_{k,j}))$.

Expansion Optimal Linear Estimator 1

Expansion Optimal Linear Estimation (EOLE) is an extension of Kriging.
Kriging interpolates a random field based on samples measured at a sub-set of mesh points.
Assume that the sub-space is described by the field values

$\mathbf{y}_k = \{z_{k,1}, \ldots, z_{k,M}\} = \left\{ \sum_{i=1}^{m} \sqrt{\lambda_i} \, \psi_{i,1} u_{k,i}, \; \ldots, \; \sum_{i=1}^{m} \sqrt{\lambda_i} \, \psi_{i,M} u_{k,i} \right\}$


Expansion Optimal Linear Estimator 2

Minimization of the variance between the target random field and its approximation, under the constraint of equal mean values of both, results in:

$\boldsymbol{\phi}_i = \mathbf{C}_{zy}^T \mathbf{C}_{yy}^{-1} \boldsymbol{\psi}_i$

with $\mathbf{C}_{yy}$ denoting the correlation matrix between the sub-space points and $\mathbf{C}_{zy}$ denoting the (rectangular) covariance matrix between the sub-space points and the nodes in full space.
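The full strategy from the previous slides, with the EOLE interpolation step, can be sketched on a 1D grid. The Gaussian covariance function and all sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

# Full mesh (n nodes) and a random sub-set of M support points
n, M, m, l = 1000, 40, 5, 0.3
x = np.linspace(0.0, 1.0, n)
idx = np.sort(rng.choice(n, size=M, replace=False))
xs = x[idx]

def gauss_cov(a, b, l):
    """Gaussian covariance matrix between point sets a and b."""
    return np.exp(-(a[:, None] - b[None, :])**2 / (2.0 * l**2))

# Decompose the sub-space covariance C_yy = Psi diag(lambda) Psi^T
Cyy = gauss_cov(xs, xs, l) + 1e-8 * np.eye(M)   # small jitter for stability
lam, Psi = np.linalg.eigh(Cyy)
lam, Psi = lam[::-1][:m], Psi[:, ::-1][:, :m]   # m largest modes

# EOLE interpolation onto the full mesh:
# phi_i = C_zy^T C_yy^{-1} psi_i, C_zy between sub-space points and all nodes
Czy = gauss_cov(xs, x, l)                       # (M, n)
Phi = Czy.T @ np.linalg.solve(Cyy, Psi)         # (n, m) interpolated basis

# One sample on the full mesh: z = sum_i sqrt(lambda_i) phi_i u_i
u = rng.standard_normal(m)
z = Phi @ (np.sqrt(lam) * u)
```

Only the M×M sub-space matrix is decomposed, so the dense-storage limit discussed earlier applies to M rather than to the full mesh size n.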


Example

Sheet metal forming application


Modelled by 4-node shell elements using 8786 finite element nodes.
Homogeneous field, exponential correlation function.
Maximum dimension is 540 mm; the correlation length parameter is chosen to be 100 mm.
Truncated Gaussian distribution with mean value 5, standard deviation 15, lower bound −20 and upper bound 30.
Sub-space dimension is chosen to be small (between 50 and 1000 points).


Example
[Figures: simulated field samples for M = 50, M = 200, and M = 500 support points]


Example - Basis vectors (M = 50)


Example - Basis vectors (full)


Errors

MAC values of various shapes (reference of comparison: full model) for different numbers of support points n.

n      MAC 1   MAC 2   MAC 5   MAC 10
50     0.999   0.999   0.949   0.393
100    0.999   0.999   0.999   0.986
200    0.999   0.999   0.999   0.999
400    0.999   0.999   0.999   0.999
800    1       1       0.999   0.999
8786   1       1       1       1


Concluding Remarks
Karhunen-Loève expansion is very useful for reducing the number of variables.
Solution of the eigenvalue problem may run into computational problems (storage, time).
Suitable reduction methods reduce storage and time requirements drastically.
Software: Statistics on Structures (SoS) by DYNARDO.
