
UMEÅ UNIVERSITY                                          October 7, 2011


Computational Science and Engineering
Modeling and Simulation
Stochastic simulations
Peter Olsson
0 Introduction

In these lecture notes some sections are marked with a ∗. They will not be
covered in the exam since the content usually presumes more knowledge of
physics than is required for participating in this course. (The physics
students should of course nevertheless diligently work through these
sections.)
0.1 The Boltzmann distribution

Physical properties are often discussed for systems at constant temperature,
the canonical ensemble. When the temperature is constant the energy of the
system may vary, and this could at first seem to complicate things a bit. To
discuss the properties of the canonical ensemble one considers the system,
S, to be in thermal contact with a reservoir, R, such that both energies
fluctuate but the total energy E = E_R + E_S is a constant. The reservoir is
taken to be much bigger than the system, such that the relative fluctuations
in the energy of the reservoir are negligible. The temperature of the
reservoir may therefore be considered constant.
An important property in the microcanonical ensemble is the number of states
with a certain energy, Ω(E). The entropy is related to this through

    S(E) = k_B ln Ω(E),

where k_B is Boltzmann's constant. The temperature is in turn related to the
entropy through

    1/T = ∂S/∂E.
Making use of the above in an expansion of ln Ω for the reservoir gives

    ln Ω(E − E_ν) ≈ ln Ω(E) − E_ν ∂(ln Ω)/∂E = ln Ω(E) − E_ν/(k_B T).
We now consider the probability for a certain state with energy E_ν of the
system. This is related to the number of states of the reservoir with energy
E − E_ν:

    P_ν ∝ Ω(E − E_ν) = e^{ln Ω(E − E_ν)} ∝ e^{−E_ν/(k_B T)}.        (1)

This expression, e^{−E_ν/(k_B T)}, is the Boltzmann factor.
0.2 The ideal gas

The simplest description of a gas is the ideal gas law,

    pV = N k_B T,

which relates pressure times volume to the number of particles, Boltzmann's
constant, and the temperature (measured in kelvin) in a gas where the
interactions between the molecules may be neglected. This is often a good
approximation, e.g. at room temperature and a pressure of 1 atm. This
equation is sometimes written pV = nRT, where n = N/N_A is the number of
moles of the substance (N_A is Avogadro's constant) and R = N_A k_B is the
gas constant.
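As a feeling for the numbers involved, the law directly gives the number
density at room temperature and atmospheric pressure:

    N/V = p/(k_B T) ≈ 1.01 × 10^5 Pa / (1.38 × 10^{−23} J/K × 300 K)
        ≈ 2.4 × 10^{25} m^{−3}.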
0.3 A gas of interacting particles

The properties of a non-interacting gas may be calculated analytically, but
that is no longer possible when interactions between particles are included.
(The effects of the interactions are however captured in an approximate way
in the van der Waals gas.)
We now approximate the interaction energy by a sum of pair interactions:

    U(r) = Σ_{j>i} V_LJ(|r_i − r_j|).
The pair interaction is attractive, V_LJ(r) ∼ −r^{−6}, at large distances
and repulsive at short distances. A common choice is to let the short
distance repulsion be ∼ r^{−12}, which gives the Lennard-Jones potential

    V_LJ(r) = 4ε [ (σ/r)^{12} − (σ/r)^6 ].        (2)
The minimum of the potential is

    V_LJ(r_min) = −ε,    r_min = 2^{1/6} σ.        (3)
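As a minimal sketch in C (with illustrative names, not the course code), the
potential (2) and its minimum (3) may be checked numerically in units where
ε = σ = 1:

    #include <math.h>
    #include <stdio.h>

    /* Lennard-Jones potential, Eq. (2), in units with epsilon = sigma = 1. */
    double v_lj(double r)
    {
        double ir6 = pow(1.0 / r, 6.0);      /* (sigma/r)^6 */
        return 4.0 * (ir6 * ir6 - ir6);      /* 4[(s/r)^12 - (s/r)^6] */
    }

    int main(void)
    {
        double rmin = pow(2.0, 1.0 / 6.0);   /* Eq. (3): minimum position */
        printf("V(r_min) = %.6f\n", v_lj(rmin));  /* prints -1, i.e. -eps */
        return 0;
    }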
0.4 Length and time scales

In the following we will use simulation units and will not be very concerned
with realistic values of energies, distances, and velocities. The point of
this section is therefore only to give a feeling for the characteristic
length scale, time scale, and energy of a real gas. The parameters of the LJ
interaction for neon atoms are

    ε = 4.18 × 10^{−22} J,    and    σ = 3.12 × 10^{−10} m.
The typical velocity may be obtained from (see Sec. 1.1)

    m⟨v²⟩ = 3 k_B T,

where 3 is the dimensionality. With k_B ≈ 1.38 × 10^{−23} J/K,
m ≈ 28 × 1.67 × 10^{−27} kg ≈ 4.7 × 10^{−26} kg for N₂ molecules (typical of
air), and T = 300 K (which is 27 °C), this gives the typical velocity

    √⟨v²⟩ ≈ 10³ m/s.

With a potential energy that changes on scales of 10^{−10} m this gives
relevant times of 10^{−13} s = 0.1 ps. This is then the order of the size of
the time step that should be used when integrating the equations of motion
for a gas at room temperature.
0.5 Simulation units

To use simulation units we let

    r/(σ/m) → r    and    V_LJ(r)/(ε/J) → V_LJ(r),

where σ/m (meter) and ε/J (joule) are pure numbers. This means that r and
V_LJ still have the dimension of length and energy, respectively, but that
r = σ in ordinary units is now r = 1. We also often set the mass m = 1. For
the temperature we let k_B T → T, which means that temperature now has the
dimension of energy.
0.6 Different kinds of simulations

This section summarizes five different methods that may be used for
simulating a gas. There are differences both in the way to describe the
system and in the kind of dynamics to use in the simulation. The numbers 1
through 5 refer to the section numbers in these notes.

i) Describe the gas in terms of r and v.

   1) Molecular dynamics: ODE, deterministic.

   2) Langevin dynamics: ODE with a stochastic term.

ii) Describe the gas with r, only.

   3) Brownian dynamics: ṙ_i = cF_i + noise.

   4) Monte Carlo: No real dynamics. Iterate the following for each i:
      - suggest a change of r_i at random,
      - calculate the change in energy, ∆E_i,
      - accept this change with a probability that depends on ∆E_i/T.

iii) Use a discrete space with positions r = (0, 0, 0), (0, 0, 1), ...,
     (0, 0, L − 1), ..., (L − 1, L − 1, L − 1).

   5) The lattice gas: Each position can be either occupied or empty,
      n = 0, 1. The number of particles fluctuates in the simulation.

Going upwards in this list one finds a more realistic simulation; going
down, the efficiency is higher.
1 Considerations for molecular dynamics

We have stressed in the earlier lectures that a given system usually may be
simulated in a number of different ways. First, there is the possibility of
choosing different models (as in the predator-prey system) and, second, it
is often possible to attack a certain model with different kinds of
simulation techniques. In these lectures the focus will be on stochastic
simulations: stochastic differential equations and Monte Carlo simulations.
Molecular dynamics is only considered briefly as a background to the other
methods.
1.1 Elementary physics

The behavior of a classical gas is fully specified by the Hamiltonian (the
total energy), which is the sum of the potential and kinetic energies,

    H(r, v) = U(r) + K(v),

which in turn are given by

    U = (1/2) Σ_{i≠j} V(r_i − r_j),

and

    K = Σ_i m_i v_i²/2.

The dynamics is given by Newton's second law:

    m_i v̇_i = F_i ≡ −∇_i U(r),

where ∇_i denotes differentiation with respect to r_i.
The collisions in a gas mean that the molecules interchange momentum (mv),
which after a short relaxation time leads (through the central limit
theorem) to a normal distribution of the velocities. For each velocity
component the probability distribution is given by

    P(v) ∝ e^{−mv²/(2T)},

and accordingly

    m⟨v²⟩ = T.        (4)

(From now on we are using T ≡ k_B T_real.) The total energy is a constant of
the motion, but E_pot and E_kin fluctuate (in opposite directions) during
the simulation.
1.2 Cutoff in the interaction potential

If we were to do an exact simulation of a Lennard-Jones gas we would use the
potential out to arbitrary distances. This is however very inefficient, and
it is therefore customary to introduce a cutoff r_c such that the
interaction is zero for r > r_c. To get a continuous potential, and thereby
a well-behaved force, the potential is then shifted,

    V_s(r) = V_LJ(r) − V_LJ(r_c)    for r < r_c,
    V_s(r) = 0                      for r > r_c.

This is the expression we will use in the computer lab. In cases when one
also needs a well-behaved second derivative the above expression may be
generalized to

    V_s(r) = V_LJ(r) − V_LJ(r_c) + (r_c − r) dV_LJ/dr|_{r_c}    for r < r_c,
    V_s(r) = 0                                                  for r > r_c.
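A sketch of the shifted potential in C (self-contained; the names are
illustrative, and the units are ε = σ = 1):

    #include <math.h>

    /* Lennard-Jones potential, Eq. (2), with epsilon = sigma = 1. */
    static double v_lj(double r)
    {
        double ir6 = pow(1.0 / r, 6.0);
        return 4.0 * (ir6 * ir6 - ir6);
    }

    /* Shifted potential: V_LJ(r) - V_LJ(r_c) below the cutoff, zero
       above. This makes V_s continuous at r = r_c. */
    double v_shifted(double r, double rc)
    {
        if (r >= rc)
            return 0.0;
        return v_lj(r) - v_lj(rc);
    }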
1.3 Periodic boundary conditions

The simulations always take place in a simulation cell of finite extension.
The question then arises what to do with the walls. To just let the
particles disappear when they reach the boundary would be disastrous, and to
let the particles bounce off the wall isn't any ideal method either. The
best method is to use periodic boundary conditions. To visualize how that
works one can think of several identical simulation cells placed after one
another with the same particles. This is however only a mental picture; in
the simulations each particle only exists once, but associated with each
particle is an image in each cell.

There are then a few consequences of this arrangement:

• When a particle leaves the system through a wall it simultaneously enters
  the system through the opposite wall. Each time a particle coordinate
  changes one should therefore check if the new position is outside the
  cell, x < 0 or x > L. If that is the case, the particle should be
  translated, x + L → x or x − L → x, to the interval [0, L).

• We only consider the interaction of a certain particle i with a single
  image of each other particle j; it is the nearest image that should be
  used. (This applies to short range interactions. For long range
  interactions there are different techniques.) To find the nearest image
  one applies the following method for each of the directions x, y, and z:
  calculate x_ij = x_i − x_j; if |x_ij| > L/2, either add or subtract L such
  that |x_ij| < L/2. The distance vector is then r_ij = (x_ij, y_ij, z_ij).
  (Both operations are sketched in the code after this list.)

• The nearest image convention also has implications for the smallest
  possible size of the simulation cell. The method may only be used
  consistently if each given particle has at most one image within the
  range of the interaction, which means that the linear size always should
  be L ≥ 2r_c.
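A minimal sketch of the two operations in C (one direction shown; the
function names are illustrative, and a particle is assumed to move less than
L in a single step):

    /* Put a coordinate back into the primary cell [0, L) after a move. */
    double pbc_wrap(double x, double L)
    {
        if (x < 0.0)
            x += L;
        else if (x >= L)
            x -= L;
        return x;
    }

    /* Nearest-image separation: fold x_i - x_j into (-L/2, L/2). */
    double pbc_nearest(double xi, double xj, double L)
    {
        double dx = xi - xj;
        if (dx > 0.5 * L)
            dx -= L;
        else if (dx < -0.5 * L)
            dx += L;
        return dx;
    }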
1.4 Integration method and time step

The basic equations we want to integrate numerically are

    ṙ = v,
    m v̇ = F.

A good integration method should have time reversal symmetry. One
possibility is velocity Verlet and another one is the leap-frog method,
where positions and forces are defined at times t = n∆t whereas velocities
are defined at the intermediate times, t = (n + 1/2)∆t. With the notation
r_n ≡ r(n∆t), and similarly for force and velocity, we have

    v_{n+1/2} = v_{n−1/2} + (1/m) F_n ∆t,
    r_{n+1} = r_n + v_{n+1/2} ∆t.
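A leap-frog step for a single particle might look as follows in C (a sketch;
the particle type and the force array are illustrative):

    typedef struct { double r[3], v[3]; } particle;

    /* One leap-frog step: v lives at the half steps, r at the full steps. */
    void leapfrog_step(particle *p, const double F[3], double m, double dt)
    {
        for (int k = 0; k < 3; k++) {
            p->v[k] += F[k] / m * dt;  /* v_{n+1/2} = v_{n-1/2} + (F_n/m) dt */
            p->r[k] += p->v[k] * dt;   /* r_{n+1} = r_n + v_{n+1/2} dt */
        }
    }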
As discussed before, a given simulation method is only stable for a certain
range of ∆t. Besides this one should consider criteria based on the physical
system in consideration. In the present case this means that we can only
expect a good precision if the steps are such that the change in interaction
potential in a single step is small. If the interaction potential changes on
the scale σ, this implies that the time step ∆t should be chosen such that

    √⟨v²⟩ ∆t ≪ σ,    or    ∆t ≪ σ √(m/T).

One should test the simulations with different time steps and choose a time
step in the range where the measured results only depend very weakly on ∆t.
2 Stochastic differential equation: Langevin dynamics

Langevin dynamics was first invented to simulate larger molecules in a
solvent with a large number of lighter molecules. Since the lighter
molecules typically have larger velocities one needs a small time step when
they are included in the simulation. Instead of keeping track of all the
collisions one can include the effect of these smaller molecules in an
average way. The lighter molecules give two effects: a random force and a
friction term.

The equation of motion then becomes

    m v̇ = F − γv + ξ,        (5)

where ξ is a vector noise.
2.1 Properties of the noise

We will now examine how the magnitude of ξ should be chosen to give a
simulation at a desired temperature.

Each component of the noise is an independent random process with
⟨ξ(t)⟩ = 0 and

    ⟨ξ(t)ξ(t′)⟩ = A δ(t − t′).

It is clear that this noise process has very unusual properties. To try to
remedy the uneasy feeling we consider the average noise during a time
interval τ,

    ξ_τ(t) = (1/τ) ∫_t^{t+τ} dt′ ξ(t′).

It is then possible to calculate the expectation value

    ⟨ξ_τ²⟩ = (1/τ²) ∫_t^{t+τ} dt′ ∫_t^{t+τ} dt″ ⟨ξ(t′)ξ(t″)⟩
           = (1/τ²) ∫_t^{t+τ} dt′ ∫_t^{t+τ} dt″ A δ(t′ − t″) = A/τ.
2.2 Magnitude of the noise

The equation of motion for a single diffusing particle in one dimension is

    m v(t + ∆t) = m v(t) − γ∆t v(t) + ∆t ξ.

We need to demand that the expectation value of v² at time t + ∆t should be
the same as the expectation value at t:

    m² ⟨v²(t + ∆t)⟩ = (m − γ∆t)² ⟨v²(t)⟩ + ∆t² ⟨ξ²⟩.        (6)

To first order in ∆t (we need to include ∆t²⟨ξ²⟩ since ⟨ξ²⟩ ∼ 1/∆t) this
leads to the requirement

    0 = −2mγ∆t ⟨v²(t)⟩ + ∆t² ⟨ξ²⟩,

which gives

    ⟨ξ²⟩ = 2mγ⟨v²(t)⟩/∆t = 2γT/∆t.
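In a simulation this result is used by drawing, in each step, an independent
noise value of variance 2γT/∆t for each velocity component. A one-particle,
one-dimensional sketch in C (gaussian_rand() is a placeholder for any
generator of normal random numbers with zero mean and unit variance):

    #include <math.h>

    double gaussian_rand(void);   /* placeholder: N(0,1) random numbers */

    /* One Euler step of the Langevin equation m dv/dt = F - gamma v + xi,
       with the noise variance <xi^2> = 2 gamma T / dt derived above. */
    void langevin_step(double *v, double F, double m,
                       double gamma, double T, double dt)
    {
        double xi = sqrt(2.0 * gamma * T / dt) * gaussian_rand();
        *v += (F - gamma * (*v) + xi) * dt / m;
    }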
2.3 Effects of the damping

We are now interested in how a change of γ affects the system. It turns out
that the static, i.e. time-independent, quantities remain the same whereas
the time-dependent quantities are affected. However, before looking into
this, we examine how the acceptable value of ∆t actually depends directly on
the size of the damping γ.
2.3.1 The time step and the damping

It turns out that we have to decrease the time step when increasing γ to
avoid introducing errors in the results. To show this we keep terms
proportional to ∆t² in Eq. (6). This gives

    m² ⟨v²(t + ∆t)⟩ = (m² − 2mγ∆t + γ²∆t²) ⟨v²(t)⟩ + ∆t² ⟨ξ²⟩.

Assuming a time independent kinetic energy, i.e. ⟨v²(t + ∆t)⟩ = ⟨v²(t)⟩, we
find

    ⟨v²⟩ (2mγ∆t − γ²∆t²) = 2γT∆t,

which gives

    ⟨v²⟩ = T / (m − γ∆t/2).

This estimate shows that the error depends on γ∆t, which means that one has
to take a smaller time step when the damping is larger.
2.3.2 Time-dependent correlations

One example of a time-dependent correlation function is the
velocity-velocity correlation function, which essentially measures for how
long a particle moves in the same direction. The definition is

    g_v(t) = ⟨v_i(t′ + t) · v_i(t′)⟩,

where the expectation value is meant as an average over all particles i and
all initial times t′.
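In a simulation this double average is accumulated from stored velocities. A
sketch in C (assuming the velocities of all N particles have been saved at
nt regularly spaced times in an array v; all names are illustrative):

    #define N 128   /* number of particles (illustrative) */

    /* Velocity autocorrelation at time separation t, averaged over
       particles i and time origins t0. */
    double gv(double v[][N][3], int nt, int t)
    {
        double sum = 0.0;
        long count = 0;
        for (int t0 = 0; t0 + t < nt; t0++)
            for (int i = 0; i < N; i++) {
                for (int k = 0; k < 3; k++)
                    sum += v[t0 + t][i][k] * v[t0][i][k];
                count++;
            }
        return sum / count;
    }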
2.3.3 Static quantities: the pair correlation function

A useful quantity for probing the static properties is the potential energy,
which should be independent of γ for sufficiently small time steps. More
detailed information about the behavior of the gas may be found in the pair
correlation function, g(r). The definition of g(r) is

    the probability to find a particle at r, granted that there is a
    particle at the origin.

To calculate g(r) one collects a histogram over particle separations, h(r),
with resolution ∆r. In two dimensions the relation between h(r) and the pair
correlation function is

    h(r) = 2πr g(r) ∆r,

which is easily inverted to get g(r). At large distances g(r) becomes equal
to the density, whereas the behavior at short distances shows a peak at
r ≈ r_min (cf. Eq. (3)), and possibly some more peaks at larger distances.
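A sketch of the histogram step in C (two dimensions; the separations are
assumed to come from the nearest-image convention of Sec. 1.3, and the
normalization per sample and particle is left out):

    #define NBINS 200   /* number of histogram bins (illustrative) */

    /* Accumulate one pair separation r_pair into the histogram h with bin
       width dr. Afterwards g(r_k) follows from inverting
       h(r) = 2 pi r g(r) dr at r_k = (k + 0.5) dr. */
    void accumulate_h(double r_pair, double dr, long h[NBINS])
    {
        int k = (int)(r_pair / dr);
        if (k < NBINS)
            h[k]++;
    }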
3 Brownian dynamics

In the Langevin equation of motion, Eq. (5), the friction term introduces a
time scale, m/γ. This follows since m v̇ = −γv gives v(t) = v(0) e^{−(γ/m)t}.
If m/γ ≪ ∆t the velocity decays quickly to zero and the dynamics is as if
there were no inertia. In the limit m → 0 one gets overdamped dynamics,

    0 = F − γv + ξ,    v = (1/γ)F + ξ/γ.

In a description with only the position coordinates, this becomes

    ṙ = (1/γ)F + η,

where the components of the noise variable η are now characterized by

    ⟨η²⟩ = 2T/(γ∆t).
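The corresponding update step in C (a one-dimensional sketch, with
gaussian_rand() again a placeholder for normal random numbers):

    #include <math.h>

    double gaussian_rand(void);   /* placeholder: N(0,1) random numbers */

    /* One Brownian dynamics step: dx/dt = F/gamma + eta,
       with <eta^2> = 2T/(gamma dt). */
    void brownian_step(double *x, double F, double gamma, double T, double dt)
    {
        double eta = sqrt(2.0 * T / (gamma * dt)) * gaussian_rand();
        *x += (F / gamma + eta) * dt;
    }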
3.1 The Fokker-Planck equation

From the above sketchy motivation, Brownian dynamics is only expected to be
valid in somewhat special cases. We now want to demonstrate that Brownian
dynamics actually is of more general validity and always is expected to give
the correct equilibrium distribution, Eq. (1), in the limit of small ∆t.

To simplify the notation we consider a particle in a one dimensional
potential U(x) at temperature T. We then expect to find the distribution

    P(x) ∝ e^{−U(x)/T}.
3.1.1 Derivation of the Fokker-Planck equation

Consider starting from some probability distribution P(x, 0) at time t = 0.
The change to this distribution comes from two terms:

    P(x, ∆t) = P(x, 0) − ∫ dx′ P(x, 0) P(x′, ∆t | x, 0)
                       + ∫ dx′ P(x′, 0) P(x, ∆t | x′, 0).

The first integral contains P(x′, ∆t | x, 0), which is the probability that
the particle which was at x at t = 0 is at x′ at time ∆t. The second
integral is the probability that the particle originally at x′ will be found
at x at time ∆t. The first integral is simplified with the use of

    ∫ dx′ P(x′, ∆t | x, 0) = 1,        (7)

which is just a statement that the particle originally at x has to be
somewhere a time ∆t later. Introducing ∆x = x − x′ in the second integral
gives

    (∂P(x, 0)/∂t) ∆t = −P(x, 0) + ∫ d∆x P(x − ∆x, 0) P(x, ∆t | x − ∆x, 0),

which may be expanded with

    f(x − ∆x) = Σ_{n=0}^∞ ((−∆x)^n / n!) (∂^n/∂x^n) f(x).

Note that what corresponds to f(x) in the present case is
P(x, 0) P(x + ∆x, ∆t | x, 0). The equation then becomes

    ∂P(x, 0)/∂t = −P(x, 0)/∆t
        + Σ_n ((−1)^n/n!) (∂^n/∂x^n) [ P(x, 0) (1/∆t) ∫ d∆x (∆x)^n
                                        P(x + ∆x, ∆t | x, 0) ]
        ≈ −(∂/∂x) [P(x, 0) M_1] + (1/2) (∂²/∂x²) [P(x, 0) M_2],        (8)

where

    M_n = (1/∆t) ∫ d∆x (∆x)^n P(x + ∆x, ∆t | x, 0),        (9)

and the n = 0 term in the sum cancels the P(x, 0)/∆t term. Eq. (8) is the
Fokker-Planck equation.
3.1.2 Application of the Fokker-Planck equation

We now want to use Eq. (8) to find the stationary probability distribution
for a particle in a one dimensional potential U(x) at temperature T,
governed by Brownian dynamics. To that end we need to evaluate the integrals
over P(x + ∆x, ∆t | x, 0), which is related to the dynamics,

    x(∆t) = x(0) + (∆t/γ) F + ∆t η.

The dynamics may be written as a delta function,

    P(x + ∆x, ∆t | x, 0) = δ( x + (∆t/γ)F + ∆t η − (x + ∆x) )
                         = δ( (∆t/γ)F + ∆t η − ∆x ).

The relevant quantities are M_1 and M_2 averaged over the random noise:

    M_n = (1/∆t) ⟨ ∫ d∆x (∆x)^n δ( (∆t/γ)F + ∆t η − ∆x ) ⟩
        = (1/∆t) ⟨ ( (∆t/γ)F + ∆t η )^n ⟩.

Using F = −∂U/∂x, ⟨η⟩ = 0, and ⟨η²⟩ = 2T/(γ∆t) we get, to lowest order in
∆t,

    M_1 = −(1/γ) ∂U/∂x    and    M_2 = 2T/γ.
3.1.3 Stationary solution

We now want to demonstrate that P ∝ e^{−U/T} is the stationary solution to
the Fokker-Planck equation, i.e. that it gives ∂P/∂t = 0. We then plug

    P ∝ e^{−U/T},
    ∂P/∂x = −(1/T)(∂U/∂x) P,
    ∂²P/∂x² = [ (1/T²)(∂U/∂x)² − (1/T)(∂²U/∂x²) ] P,

into the right hand side of Eq. (8). It is seen from inspection that the
terms cancel one another out,

    −[ (1/T)(∂U/∂x) P ][ (1/γ)(∂U/∂x) ] + P [ (1/γ)(∂²U/∂x²) ]
        + (T/γ) [ (1/T²)(∂U/∂x)² − (1/T)(∂²U/∂x²) ] P = 0,

and the conclusion is that P ∝ e^{−U/T} is the stationary solution to the
Fokker-Planck equation with Brownian dynamics.
3.1.4 Stochastic differential equations and Monte Carlo

The next section is about Markov chain Monte Carlo, which is a method to
generate configurations from arbitrary distributions (e.g. ∝ e^{−U/T}), but
we already state the important conclusion that a simulation of a stochastic
differential equation is equivalent to a Monte Carlo simulation as far as
the static quantities are concerned.
4 Markov chain Monte Carlo

4.1 Markov chains

Suppose that there is a finite number of possible configurations or
multidimensional variables x^{(i)}. A Markov chain is a random chain of
these variables, x_1, x_2, ..., produced by means of a transition matrix
p_ij with the following properties:

1. p_ij ≥ 0,

2. p_ii ≠ 1,

3. Σ_j p_ij = 1 for all i.

Note that (1) is necessary since probabilities have to be non-negative, (2)
means that the chain may never come to a halt, and (3) means that the total
probability to go to some state must always be equal to unity.

If the system is left to perform a random walk according to these transition
probabilities this will lead to a certain probability distribution
π_i ≡ π(x^{(i)}). This is a profound result which makes Markov chains very
useful. We have here for simplicity assumed a discrete set of possible
configurations, but Markov chains may also be defined for the more general
case of a continuous configuration space.
4.2 Constructing the transition matrix

The task is now to choose the transition matrix for the Markov chain such
that a certain probability distribution π(x) is obtained. A sufficient (but
not necessary) condition is to require detailed balance, i.e.

    π_i p_ij = π_j p_ji.        (10)

That this gives the required probability distribution may be seen by
considering the probability to be in state j after a given step, which is

    Σ_i π_i p_ij = π_j Σ_i p_ji = π_j,

where we have made use of the detailed balance condition followed by
condition (3) above. The transition probability usually consists of two
parts. We write

    p_ij = q_ij α_ij,

where

    q_ij = probability for the transition to be suggested,
    α_ij = probability for the transition to be accepted.

Detailed balance may e.g. be fulfilled with the following choice for α_ij:

    α_ij = min( 1, (π_j q_ji)/(π_i q_ij) ).        (11)

The Metropolis algorithm

One often has a symmetric targeting probability, q_ij = q_ji. The acceptance
probability then simplifies to

    α_ij = min(1, π_j/π_i),    i ≠ j.        (12)

This is the choice in the Metropolis algorithm.
We will now explicitly check that the choice of α_ij according to Eq. (12)
actually does fulfill detailed balance. This has to be examined for two
cases, π_j > π_i and π_j < π_i:

                  α_ij       α_ji      p_ij        p_ji        π_i p_ij   π_j p_ji
    π_j > π_i :   1          π_i/π_j   q           q π_i/π_j   π_i q      π_i q
    π_j < π_i :   π_j/π_i    1         q π_j/π_i   q           π_j q      π_j q

The first two columns are the acceptance probabilities from Eq. (12), the
next two columns are the transition probabilities (with q = q_ij = q_ji),
and the last two columns are π_i p_ij and π_j p_ji, respectively. The
equality of the last two columns demonstrates that Eq. (10) actually is
fulfilled.
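In code, Eq. (12) with π ∝ e^{−E/T} reduces to a few lines. A sketch in C
(uniform_rand() is a placeholder for a uniform random number on [0, 1)):

    #include <math.h>

    double uniform_rand(void);   /* placeholder: uniform on [0,1) */

    /* Metropolis test: accept a suggested change with energy difference dE
       with probability min(1, exp(-dE/T)). Returns 1 if accepted. */
    int metropolis_accept(double dE, double T)
    {
        if (dE <= 0.0)
            return 1;                   /* pi_j >= pi_i: always accept */
        return uniform_rand() < exp(-dE / T);
    }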
4.2.1 Requirements for a valid Monte Carlo method

Many different methods have been proposed for doing Monte Carlo in all kinds
of models. To prove that a method is correct one needs to show that two
conditions are fulfilled:

1. The simulation is ergodic, i.e. it is possible to reach all the different
   states through a finite number of steps.

2. The Monte Carlo steps fulfill detailed balance, Eq. (10), e.g. through
   the choice in Eq. (11).
4.3 Monte Carlo for an interacting gas

We now consider the numerical calculation of expectation values. The
definition of the expectation value of a quantity A is

    ⟨A⟩ = Σ_ν P_ν A_ν,

where P_ν is the probability for the system to be in state ν. For an
interacting gas this kind of formula cannot be used directly since there is
a continuum of states. (In systems where the states constitute a discrete
set, they are typically far too many for this sum to be performed on a
computer.) We are then bound to generating a limited number of states, and
this may mainly be done in two different ways:

• Generate n configurations, r^{(ν)}, for ν = 1, ..., n, randomly, with the
  r_i^{(ν)} independent of one another. The expectation value is then

      ⟨A⟩ = Σ_{ν=1}^n A_ν e^{−E_ν/T} / Σ_{ν=1}^n e^{−E_ν/T}.        (13)

  However, attempting to do this, one will find that most configurations
  have a rather large energy and therefore a very small Boltzmann factor,
  e^{−E_ν/T}. This means that most of the configurations contribute a
  negligible amount to the sums. The reason for this is that there is a high
  probability for at least one pair of these randomly generated particles to
  be very close to one another, which is enough to give a configuration a
  very high energy. That the configurations typical of low temperatures will
  almost never show up has disastrous consequences for the possibility to
  calculate the properties at most temperatures of interest. Except for very
  small systems this is therefore an entirely impracticable method.

• Use Markov chain Monte Carlo to generate the configurations with a
  probability π_ν ∝ e^{−E_ν/T}. The expectation values should then be
  calculated as a direct average,

      ⟨A⟩ = (1/n) Σ_{ν=1}^n A_ν.        (14)

  This is a useful method that allows for a simple calculation of many
  quantities that would otherwise be very difficult, if not altogether
  impossible, to calculate. The only drawback is that the configurations,
  and thereby the A_ν, are not independent of one another. One therefore
  needs to perform simulations for times that are much longer than the
  memory of the system to get a number of independent configurations.
4.4 Monte Carlo in practice

4.4.1 One sweep

In the standard implementation one loops over the particles, i = 1, ..., N:

1. Suggest a change of the position of particle i:

       r′_i = r_i + δ_i,

   where the components δ_i^(x), δ_i^(y), and δ_i^(z) usually are drawn from
   a rectangular distribution centered around zero. Note that this means
   that the targeting probability is symmetric.

2. Calculate the energy difference ∆U = U′_i − U_i and the acceptance
   probability, from Eq. (12),

       α = min( e^{−∆U/T}, 1 ).

3. Accept this new position with probability α, i.e. generate a random
   number uniformly on [0, 1) and accept the change if it is smaller
   than α.

To get a reasonably efficient simulation the range of the suggested changes
δ_i should be chosen such that the acceptance ratio is not too far from 50%,
say between 30 and 70%.
Note also that there is no need to calculate the full potential energy (from
Σ_{i<j} V(r_ij), which is a computation of order N²) to compute the change
in energy due to the change in position of particle i. One may instead use

    ∆U = U′_i − U_i = Σ_{j≠i} [ V(r′_i − r_j) − V(r_i − r_j) ],

which is a computation of order N and therefore is much faster.
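A sketch of this order-N energy difference in C (the particle number N, the
position array x, and the pair potential are assumed to exist globally; all
names are illustrative, and the nearest-image convention of Sec. 1.3 is
taken to be handled inside pair_potential()):

    #define N 128                    /* number of particles (illustrative) */
    extern double x[N][3];           /* particle positions */
    double pair_potential(const double a[3], const double b[3]);

    /* Energy change when particle i is moved to rnew: the O(N) sum over
       all other particles j of V(r'_ij) - V(r_ij). */
    double delta_u(int i, const double rnew[3])
    {
        double dU = 0.0;
        for (int j = 0; j < N; j++) {
            if (j == i)
                continue;
            dU += pair_potential(rnew, x[j]) - pair_potential(x[i], x[j]);
        }
        return dU;
    }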
4.4.2 Program structure

The program usually consists of a few different parts:

1. Initialization: Assign initial values to the positions in some way.
   Common alternatives are just random values or to read in the positions
   of an earlier run from a file.

2. Equilibration: We have to allow for some time before the Markov chain
   starts to produce configurations according to the correct probability
   distribution. This is achieved by calling the sweep routine a (often
   fairly large) number of times.

3. Production: The actual run when data is collected. Results could also be
   written to a data file for later analysis. (It is also good to write the
   configuration to file at the same time; if the run is killed for some
   reason it can then be continued without the need for another
   equilibration.) The below is a good structure for the simulation:

   for iblock = ... {                  /* loop over blocks */
       for isamp = 1 ... nsamp {       /* nsamp sweeps per block */
           sweep(conf);                /* one sweep, Sec. 4.4.1 */
           measure(..., conf, v);      /* accumulate observables in v */
       }
       write_data_to_file(v);          /* block data for later analysis */
       write_conf_to_file(conf);       /* restart point (see above) */
       vsum[...] += v[...];            /* running sums of the data ... */
       v2sum[...] += v[...] * v[...];  /* ... and of their squares */
   }

4. Print out results: The averages are obtained from vsum; with v2sum it
   also becomes possible to determine the standard error.
5 The lattice gas and the Ising model

As already mentioned, it is common with models that are defined on a lattice
instead of on the continuum. An important reason for such a step is that the
computations then may become much simpler and thereby also considerably
faster. It is immediately clear that a lattice model of a gas cannot
reproduce all results from the continuum, but it turns out that certain
features are not sensitive to the details of the model, and if the goal is
to examine such properties the lattice model is superior to the continuum
gas.
5.1 Coexistence region and the critical point ∗

The gas changes to a liquid at temperatures and densities where the
attractive interaction becomes important. Roughly speaking, it prefers the
liquid state for temperatures T < ε (the energy at the minimum). One way to
examine this transition is to start from the gas phase and then decrease the
volume at a fixed temperature. At a certain specific volume (volume per
particle or volume per mole) there appears a small region with a higher
density, a liquid. We say that we have coexistence between liquid and gas;
the region where this is possible is called the coexistence region. The
specific volume of the gas, v_gas, is then higher than the specific volume
of the liquid, v_liq. As the total volume decreases further, the liquid
fraction increases further and the gas phase eventually disappears. This
process takes place at a given constant pressure.
If the same process is repeated at different temperatures (and pressures)
one finds that the difference in specific volume,

    ∆v = v_gas − v_liq,

decreases when the temperature increases and actually vanishes at a given
temperature, the critical temperature, T_c. This is one of the coordinates
that describe the critical point; the others are v_c and p_c. When
approaching the critical temperature the volume difference vanishes with a
power law,

    ∆v ∼ (T_c − T)^β,

where β ≈ 0.3 is a critical exponent. In the region close to the critical
point there are large regions with a lower density coexisting with other
regions with a higher density. This is a behavior that is both difficult to
analyze with analytical methods and difficult to study with high precision
in simulations. It turns out that one may capture this behavior much more
efficiently by using a different model for simulation of the gas: the
lattice gas.
5.2 The lattice gas

In the lattice gas we give up the continuum and instead let the particles
sit on a lattice. This is a good idea if we want to capture the behavior at
large length scales, though it of course doesn't reproduce the properties at
small scales.

The variables in the lattice gas are n_i = 0, 1, corresponding to the
presence or absence of a particle at lattice point i. The repulsion is given
by the fact that two particles cannot be on the same lattice point, and the
attraction is given by assigning the energy −ε for each neighboring pair of
particles. The energy for a certain configuration is then

    E = −ε Σ_⟨ij⟩ n_i n_j,

where Σ_⟨ij⟩ denotes a sum over all nearest neighbors. Unlike the models we
have discussed so far, the number of particles is not a constant in this
model. The large collection of states which is possible when both energy and
number of particles vary is called the grand canonical ensemble, and one
also introduces a chemical potential μ that controls the particle number,
much as the temperature controls the fluctuating energy. The Boltzmann
factor is then complemented by another term, e^{μN/T}, with the effect of
keeping the particle number down.
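A grand canonical Monte Carlo move for this model might look as follows in C
(a sketch on a two-dimensional square lattice, assuming the weight
e^{−(E − μN)/T}; uniform_rand() and the array names are placeholders):

    #include <math.h>

    #define L 32                 /* linear lattice size (illustrative) */
    int n[L][L];                 /* occupation numbers, 0 or 1 */

    double uniform_rand(void);   /* placeholder: uniform on [0,1) */

    /* Try to toggle the occupation of site (i,j), i.e. insert or remove a
       particle. The energy change comes from the occupied neighbours, and
       the chemical potential enters through exp(-(dE - mu*dN)/T). */
    void try_toggle(int i, int j, double eps, double mu, double T)
    {
        int nb = n[(i+1)%L][j] + n[(i+L-1)%L][j]
               + n[i][(j+1)%L] + n[i][(j+L-1)%L];
        int dN = 1 - 2*n[i][j];          /* +1 on insertion, -1 on removal */
        double dE = -eps * nb * dN;
        if (uniform_rand() < exp(-(dE - mu*dN) / T))
            n[i][j] = 1 - n[i][j];       /* accept: toggle occupation */
    }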
5.3 The Ising model ∗

The Ising model is usually thought of as a model of a magnet. It is in many
ways similar to the lattice gas; the variables (the spins) are now
s_i = ±1, corresponding to spin up and spin down. The energy for the Ising
model is

    E = −Σ_⟨ij⟩ s_i s_j.

(It is also common to consider the effect of an applied magnetic field; this
is included by adding the term −h Σ_i s_i to this expression for the
energy.)
The Ising model has been called the Drosophila of statistical mechanics (a
fruit fly that has been thoroughly studied in genetics by innumerable
experiments with different kinds of mutations) because it has been so
thoroughly studied and manipulated in all conceivable ways. One can put this
model on lattices in different dimensions, D = 1, 2, 3, ..., and it turns
out that the behavior changes dramatically with the dimensionality. The
behavior in one dimension is easy to calculate analytically. The properties
of the phase transition in 2D are terribly more difficult to calculate. This
problem was solved by the Norwegian physicist Lars Onsager in 1944, an
impressive tour de force, which by some is considered one of the greatest
scientific achievements of the 20th century. The behavior in 3D is, in turn,
immensely more complicated and continues to attract researchers from both
physics and mathematics.
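A single-spin-flip Metropolis move for the two-dimensional Ising model, as a
minimal sketch in C (uniform_rand() is again a placeholder; temperature is
measured in units of the coupling):

    #include <math.h>

    #define L 32                 /* linear lattice size (illustrative) */
    int s[L][L];                 /* spins, +1 or -1 */

    double uniform_rand(void);   /* placeholder: uniform on [0,1) */

    /* Metropolis update of spin (i,j) for E = -sum_<ij> s_i s_j: flipping
       the spin changes the energy by dE = 2 s_ij (sum of neighbours). */
    void try_spin_flip(int i, int j, double T)
    {
        int nb = s[(i+1)%L][j] + s[(i+L-1)%L][j]
               + s[i][(j+1)%L] + s[i][(j+L-1)%L];
        double dE = 2.0 * s[i][j] * nb;
        if (dE <= 0.0 || uniform_rand() < exp(-dE / T))
            s[i][j] = -s[i][j];
    }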
Literature

• An Introduction to Computer Simulation Methods, Gould & Tobochnik,
  Addison-Wesley, 1996.

• Stochastic Simulation in Physics, P. Kevin MacKeown, Springer, 1997.

• A Guide to Monte Carlo Simulations in Statistical Physics, Landau &
  Binder, Cambridge University Press, 2000.

• Monte Carlo Methods in Statistical Physics, Newman & Barkema, Oxford
  University Press, 1999.