March 8, 2013
Quantum mechanics uses a wide range of mathematics including complex numbers, linear algebra, calculus, and differential equations. In this handout, we give a crash course on some of the basic mathematics used. (See also: http://simple.wikipedia.org/wiki/Quantum_mechanics)
Complex Numbers
Complex numbers[1] give us a way of saying that the square of a number can be negative. To that end, we have to introduce the imaginary number $i$ such that

$$ i = \sqrt{-1}. \qquad (1) $$

[1] For another simple explanation, see also: http://simple.wikipedia.org/wiki/Complex_numbers
A complex number then has the form $z = a + ib$ where $a$ and $b$ are the usual real numbers we use every day. (The number $z$ can be treated just as any number with a variable $i$; you just have to remember that $i^2 = -1$.)
Every complex number also has a complex conjugate. This number is made by going through a complex number and replacing $i$ with $-i$, and is represented in physics by $z^*$:

$$ z^* z = (a - ib)(a + ib) = a^2 + iab - iab - i^2 b^2 = a^2 + b^2. \qquad (2) $$
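Eq. (2) is easy to check numerically; a quick sketch using Python's built-in complex type, where conjugation is `z.conjugate()`:

```python
# Check Eq. (2): z* z = a^2 + b^2 for z = a + ib.
z = 3 + 4j                    # a = 3, b = 4
prod = z.conjugate() * z
print(prod)                   # (25+0j): a^2 + b^2 = 9 + 16
```

Note that the imaginary part cancels exactly, as the $+iab - iab$ terms in Eq. (2) promise.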
Since $a$ and $b$ are the usual real numbers, $z^* z = a^2 + b^2$ is real and non-negative.

Another function we will use constantly is the exponential, which can be defined by its series:

$$ e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!} = 1 + \frac{x}{1} + \frac{x^2}{2\cdot 1} + \frac{x^3}{3\cdot 2\cdot 1} + \cdots. \qquad (3) $$
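The series in Eq. (3) converges quickly; a short Python sketch comparing a partial sum against `math.exp` (the choice of 20 terms is arbitrary):

```python
import math

def exp_series(x, terms=20):
    """Partial sum of Eq. (3): sum over n of x^n / n!."""
    return sum(x**n / math.factorial(n) for n in range(terms))

print(exp_series(1.0))   # close to e = 2.71828...
print(math.exp(1.0))
```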
It has the important property that if you take its derivative you get back the same function[3]:

$$ \frac{d}{dx} e^x = e^x. \qquad (4) $$

[3] We will see this a lot in the form $\frac{d}{dt} e^{i\omega t} = i\omega\, e^{i\omega t}$ or $\frac{d}{dx} e^{ipx} = ip\, e^{ipx}$.
Now, this function has the particular property that if instead of $x$ you use an imaginary number, you obtain

$$ e^{i\theta} = \cos\theta + i\sin\theta. \qquad (5) $$

(This property gives the famous $e^{i\pi} = -1$.)
And in fact, any complex number can be represented by $z = r\, e^{i\theta}$ where $r$ and $\theta$ are both real numbers. We can see from this that $z^* z = r^2$, so if we look at Eq. (2), we can see that $r = \sqrt{a^2 + b^2}$. (Remember to change $i$ to $-i$, so $z^* = r\, e^{-i\theta}$, and also remember that $e^x e^y = e^{x+y}$.)
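Both Eq. (5) and the polar form $z = r\,e^{i\theta}$ can be verified with Python's `cmath` module:

```python
import cmath, math

theta = 0.7
# Euler's formula, Eq. (5): e^{i theta} = cos(theta) + i sin(theta)
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
print(abs(lhs - rhs))               # ~0

# Polar form: r = sqrt(a^2 + b^2)
z = 3 + 4j
r, phi = cmath.polar(z)             # z = r e^{i phi}
print(r, math.sqrt(3**2 + 4**2))    # both 5.0
```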
crash course: the math of quantum mechanics
Linear algebra
Linear algebra is at the heart of quantum mechanics. It deals with
vectors and matrices in many dimensions.
Vectors
A column vector denoted by $|v\rangle$ is defined as just an ordered set of numbers in a column:

$$ |v\rangle = \begin{pmatrix} z_1 \\ z_2 \\ \vdots \\ z_N \end{pmatrix}. \qquad (6) $$

(The $|\,\cdot\,\rangle$ is called a ket in quantum mechanics.)
The number $N$ is called the dimension. We can define the corresponding row vector by $\langle v|$, and it looks like

$$ \langle v| = \begin{pmatrix} z_1^* & z_2^* & \cdots & z_N^* \end{pmatrix}. \qquad (7) $$

(The $\langle\,\cdot\,|$ is called a bra in quantum mechanics.)
Notice how in making the column vector into a row vector we took the complex conjugate of each entry. This is very important since now we will define the inner product of two vectors $|v\rangle$ and $|w\rangle$ as $\langle v|w\rangle$ (just replace $z$ with $w$ to obtain the entries of $|w\rangle$), where for the above we get

$$ \langle v|w\rangle = z_1^* w_1 + z_2^* w_2 + \cdots + z_N^* w_N. \qquad (8) $$
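Eq. (8) translates directly into code; a minimal sketch with plain Python lists of complex numbers (note the conjugate on the bra entries):

```python
def inner(v, w):
    """<v|w> = sum_i v_i^* w_i, as in Eq. (8)."""
    return sum(vi.conjugate() * wi for vi, wi in zip(v, w))

v = [1 + 1j, 2 - 1j]
w = [3 + 0j, 0 + 1j]
print(inner(v, w))     # (2-1j)
print(inner(v, v))     # (7+0j): <v|v> is always real and non-negative
```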
Matrices
Now, what makes linear algebra linear is that we act on vectors with matrices that have the property that if $M$ is a matrix, $|v\rangle$ and $|w\rangle$ are vectors, and $a, b$ are just numbers, then

$$ M(a\,|v\rangle + b\,|w\rangle) = a M|v\rangle + b M|w\rangle. $$

(When we write $a\,|v\rangle$, it means multiply each entry of $|v\rangle$ by the number $a$.)
A matrix looks like[4]

$$ M = \begin{pmatrix} m_{11} & m_{12} & \cdots & m_{1N} \\ m_{21} & m_{22} & \cdots & m_{2N} \\ \vdots & \vdots & \ddots & \vdots \\ m_{N1} & m_{N2} & \cdots & m_{NN} \end{pmatrix}. \qquad (9) $$

[4] Strictly speaking, this is a square matrix; in general, the rows and columns need not be the same, but most of our use will be with square matrices.
Matrices can be multiplied, but the procedure for this is similar to that of the inner product and not how one might guess to multiply matrices. To develop this, consider two ways of thinking of a matrix: as a row vector of column vectors, or as a column vector of row vectors:
$$ M = \begin{pmatrix} |m_1\rangle & |m_2\rangle & \cdots & |m_N\rangle \end{pmatrix} = \begin{pmatrix} \langle m_1| \\ \langle m_2| \\ \vdots \\ \langle m_N| \end{pmatrix}. \qquad (10) $$

Here we have defined, for example,

$$ |m_1\rangle = \begin{pmatrix} m_{11} \\ m_{21} \\ \vdots \\ m_{N1} \end{pmatrix} \quad\text{and}\quad \langle m_1| = \begin{pmatrix} m_{11} & m_{12} & \cdots & m_{1N} \end{pmatrix}. $$
And like this we can define matrix multiplication:

$$ MN = \begin{pmatrix} \langle m_1| \\ \langle m_2| \\ \vdots \\ \langle m_N| \end{pmatrix} \begin{pmatrix} |n_1\rangle & |n_2\rangle & \cdots & |n_N\rangle \end{pmatrix} = \begin{pmatrix} \langle m_1|n_1\rangle & \langle m_1|n_2\rangle & \cdots & \langle m_1|n_N\rangle \\ \langle m_2|n_1\rangle & \langle m_2|n_2\rangle & \cdots & \langle m_2|n_N\rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle m_N|n_1\rangle & \langle m_N|n_2\rangle & \cdots & \langle m_N|n_N\rangle \end{pmatrix}. \qquad (11) $$
For a real example of this, see[5].

[5] http://simple.wikipedia.org/wiki/Matrix_(mathematics)
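Eq. (11) says each entry of $MN$ is a row of $M$ dotted with a column of $N$; a pure-Python sketch (for square matrices, without the conjugation, matching how the rows $\langle m_i|$ are defined above):

```python
def matmul(M, N):
    """(MN)_{ij} = row i of M dotted with column j of N, as in Eq. (11)."""
    dim = len(M)
    return [[sum(M[i][k] * N[k][j] for k in range(dim)) for j in range(dim)]
            for i in range(dim)]

M = [[1, 2], [3, 4]]
N = [[0, 1], [1, 0]]
print(matmul(M, N))   # [[2, 1], [4, 3]]: multiplying by N swaps the columns of M
```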
Multiplication by a vector is similar. In fact we can define it for column and row vectors:

$$ M|v\rangle = \begin{pmatrix} \langle m_1| \\ \langle m_2| \\ \vdots \\ \langle m_N| \end{pmatrix} |v\rangle = \begin{pmatrix} \langle m_1|v\rangle \\ \langle m_2|v\rangle \\ \vdots \\ \langle m_N|v\rangle \end{pmatrix}, \qquad (12) $$
and

$$ \langle v| M = \langle v| \begin{pmatrix} |m_1\rangle & |m_2\rangle & \cdots & |m_N\rangle \end{pmatrix} = \begin{pmatrix} \langle v|m_1\rangle & \langle v|m_2\rangle & \cdots & \langle v|m_N\rangle \end{pmatrix}. \qquad (13) $$
Now, we can also construct the hermitian conjugate of a matrix, $M^\dagger$, by just writing

$$ M^\dagger = \begin{pmatrix} m_{11}^* & m_{21}^* & \cdots & m_{N1}^* \\ m_{12}^* & m_{22}^* & \cdots & m_{N2}^* \\ \vdots & \vdots & \ddots & \vdots \\ m_{1N}^* & m_{2N}^* & \cdots & m_{NN}^* \end{pmatrix}. \qquad (14) $$

Notice how every entry is the complex conjugate and flipped across the diagonal. A matrix is hermitian if $M = M^\dagger$, i.e. if $m_{ij} = m_{ji}^*$.
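Eq. (14) is a conjugate transpose; a small Python sketch with a hermiticity check (the matrix `H` is a hypothetical example, not one from the handout):

```python
def dagger(M):
    """M^dagger: transpose and complex-conjugate every entry, as in Eq. (14)."""
    dim = len(M)
    return [[M[j][i].conjugate() for j in range(dim)] for i in range(dim)]

# A hermitian example: real diagonal, off-diagonal entries conjugates of each other.
H = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]
print(dagger(H) == H)   # True: H is hermitian
```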
There is also a special matrix called the identity matrix, which has 1 on the diagonal and zero off the diagonal:

$$ I = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}. \qquad (15) $$

This has the property that $MI = M$ and $IM = M$ for any matrix $M$ (check this as an exercise).
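The exercise $MI = IM = M$ can be checked quickly in code (the matrix product is redefined here so the sketch is self-contained):

```python
def matmul(M, N):
    dim = len(M)
    return [[sum(M[i][k] * N[k][j] for k in range(dim)) for j in range(dim)]
            for i in range(dim)]

def identity(dim):
    """Eq. (15): 1 on the diagonal, 0 elsewhere."""
    return [[1 if i == j else 0 for j in range(dim)] for i in range(dim)]

M = [[1, 2], [3, 4]]
I = identity(2)
print(matmul(M, I) == M and matmul(I, M) == M)   # True
```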
Eigenvalues and Eigenvectors
Matrices can have special vectors called eigenvectors, which have the property that if $|E\rangle$ is an eigenvector of $M$,

$$ M|E\rangle = E\,|E\rangle. \qquad (16) $$

We sometimes identify eigenvectors by their eigenvalues, calling $|E\rangle$ the vector with eigenvalue $E$. If $M$ is hermitian, as it usually is in our applications, then the eigenvalues $E$ are real numbers (not imaginary or complex). (Note that if $|E\rangle$ is an eigenvector, so is $a\,|E\rangle$. This ambiguity oftentimes lets us normalize the vectors by using the eigenvector with $\langle E|E\rangle = 1$. This condition is also related to probabilities in quantum mechanics.)
And if $M$ is an $N \times N$ matrix, there are in fact $N$ eigenvectors associated with it. One of the astounding properties of these vectors is that any vector can be written as a sum of them, i.e. they are a basis. For example, if we have a three-dimensional matrix $M$ with eigenvectors $|E_1\rangle$, $|E_2\rangle$, and $|E_3\rangle$, then any vector can be written

$$ |v\rangle = v_1 |E_1\rangle + v_2 |E_2\rangle + v_3 |E_3\rangle. \qquad (17) $$

In arbitrary dimension $N$ this takes the form

$$ |v\rangle = \sum_{i=1}^{N} v_i\, |E_i\rangle. \qquad (18) $$
Lastly, these vectors are orthogonal, which means $\langle E_i|E_j\rangle = 0$ if $i$ and $j$ are different. The proof of this is simple. There are two ways to evaluate

$$ \langle E_i| M |E_j\rangle = E_i\, \langle E_i|E_j\rangle = E_j\, \langle E_i|E_j\rangle \qquad (19) $$

(acting with $M$ to the right uses Eq. (16); acting to the left uses $M = M^\dagger$ and the fact that $E_i$ is real).
Thus, rewriting the expressions,

$$ (E_i - E_j)\, \langle E_i|E_j\rangle = 0, \qquad (20) $$

so either $E_i = E_j$ or $\langle E_i|E_j\rangle = 0$, and since we have assumed $E_i \neq E_j$, we get that $\langle E_i|E_j\rangle = 0$ necessarily. This proves they're orthogonal.[7]
[7] Sometimes $E_i = E_j$, and this is called a "degeneracy". A modified version of this property holds in that case.
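For a concrete $2 \times 2$ hermitian example (a hypothetical test matrix, not one from the handout), Eq. (16) and the orthogonality property can be checked by hand in Python:

```python
# M = [[2, 1], [1, 2]] is real symmetric (hence hermitian).
# Its eigenvectors are |E1> = (1, 1) with E1 = 3 and |E2> = (1, -1) with E2 = 1.
M = [[2, 1], [1, 2]]
E1, v1 = 3, [1, 1]
E2, v2 = 1, [1, -1]

def apply(M, v):
    """M|v>, as in Eq. (12)."""
    return [sum(M[i][k] * v[k] for k in range(len(v))) for i in range(len(v))]

print(apply(M, v1), [E1 * x for x in v1])    # both [3, 3]
print(apply(M, v2), [E2 * x for x in v2])    # both [1, -1]
# Orthogonality: <E1|E2> = 0 since E1 != E2
print(sum(a * b for a, b in zip(v1, v2)))    # 0
```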
Functions and Fourier transforms
Sometimes instead of the usual vectors and matrices we are working with functions and differentiation[8]. For many purposes functions actually behave like vectors, and differentiation like a matrix. For instance, take the differential equation

$$ -i \frac{d}{dx} f(x) = k\, f(x). \qquad (21) $$

This looks very much like the eigenvalue equation seen in Eq. (16) if we change $M$ to $-i\,\frac{d}{dx}$, $|E\rangle$ to $f(x)$, and $E$ to $k$. The solution to it (which you can check using Eq. (4)) is

$$ f(x) = e^{ikx}. \qquad (22) $$

[8] More generally, functions and linear operators. In fact, all of linear algebra can be applied to functions and linear operators defined by differentials and integrals. This section is a taste of that.
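Eq. (22) can be checked numerically with a finite-difference derivative; a sketch (the step size and evaluation point are arbitrary choices):

```python
import cmath

k = 2.0
f = lambda x: cmath.exp(1j * k * x)   # the eigenfunction of Eq. (22)

# Central finite difference for df/dx at x = 0.3
h, x = 1e-6, 0.3
df = (f(x + h) - f(x - h)) / (2 * h)

print(-1j * df)   # approximately k * f(x), as Eq. (21) requires
print(k * f(x))
```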
The relation to eigenvectors goes even further. Any function can be written as an integral of these functions[9]. Then any $g(x)$ can be written as

$$ g(x) = \int \tilde g(k)\, e^{ikx}\, \frac{dk}{2\pi}. \qquad (23) $$

[9] When going from vectors to functions, sums are (sometimes) replaced by integrals.
This decomposition into basis functions is called a Fourier decomposition or an inverse Fourier transform. (The introduction of $2\pi$ is necessary and is sometimes defined elsewhere. In much of quantum mechanics, the $1/2\pi$ is placed with the inverse Fourier transform.) The reason it is an inverse Fourier transform is because we can actually find the function $\tilde g(k)$ (called the Fourier transform) by taking

$$ \tilde g(k) = \int g(x)\, e^{-ikx}\, dx. \qquad (24) $$
To check this is the case, we can substitute this into Eq. (23) while changing the dummy variable $x$ to $y$ to see

$$ g(x) = \int\!\!\int g(y)\, e^{ik(x-y)}\, dy\, \frac{dk}{2\pi}. \qquad (25) $$

Now the $k$-integral can be done, and it gives a new function we call the $\delta$-function:

$$ \delta(x - y) = \int e^{ik(x-y)}\, \frac{dk}{2\pi}. \qquad (26) $$
This function has the property that if we do the $k$-integral in Eq. (25) we see

$$ g(x) = \int \delta(x - y)\, g(y)\, dy. \qquad (27) $$

The inner product also carries over from vectors to functions as an integral,

$$ \langle f | g \rangle = \int f^*(x)\, g(x)\, dx, \qquad (28) $$

so that, for example, with the $\delta$-function as the bra,

$$ \Big\langle \delta \Big|\, {-i}\frac{d}{dx} \,\Big| g \Big\rangle = -i \int \delta(x)\, \frac{dg(x)}{dx}\, dx. \qquad (29) $$
As an exercise, use the Fourier transforms and the $\delta$-function from the last section to show that

$$ \Big\langle f \Big|\, {-i}\frac{d}{dx} \,\Big| g \Big\rangle = \int k\, \tilde f^*(k)\, \tilde g(k)\, \frac{dk}{2\pi}. \qquad (30) $$

One more identity we will need is the Gaussian integral

$$ \int e^{-a x^2 + b x}\, dx = \sqrt{\frac{\pi}{a}}\; e^{b^2/(4a)}. \qquad (31) $$
With this identity we can actually see that the Fourier transform of a Gaussian is a Gaussian. Take the function $g(x) = e^{-ax^2}$; then we can write

$$ \tilde g(k) = \int e^{-ax^2}\, e^{-ikx}\, dx. \qquad (32) $$

Thus, if we let $b = -ik$, the right-hand side of Eq. (32) can be evaluated to be

$$ \tilde g(k) = \sqrt{\frac{\pi}{a}}\; e^{-k^2/(4a)}. \qquad (33) $$

As an exercise, perform the inverse Fourier transform to recover $g(x) = e^{-ax^2}$.
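Eqs. (32) and (33) can be checked by doing the Fourier integral numerically; a sketch using a simple Riemann sum (the grid limits and spacing are arbitrary choices, wide enough that the Gaussian has decayed):

```python
import math, cmath

a, k = 1.0, 1.5

# Riemann-sum approximation of Eq. (32): g~(k) = integral of e^{-a x^2} e^{-i k x} dx
dx = 0.01
xs = [n * dx for n in range(-1000, 1001)]    # integrate over [-10, 10]
gk = sum(math.exp(-a * x * x) * cmath.exp(-1j * k * x) for x in xs) * dx

exact = math.sqrt(math.pi / a) * math.exp(-k * k / (4 * a))   # Eq. (33)
print(abs(gk - exact))   # ~0: the transform of a Gaussian is a Gaussian
```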