
Physics 139B

Solutions to Homework Set 2

Fall 2009

1. Liboff, problem 9.32 on page 395.


(a) The key equation is given in Table 9.4 on p. 379 of Liboff:

$$L_\pm\, |\ell , m\rangle = \hbar\,[(\ell \mp m)(\ell \pm m + 1)]^{1/2}\, |\ell , m \pm 1\rangle\,. \qquad (1)$$

We will also need to use:

$$\vec{L}^2\, |\ell , m\rangle = \hbar^2\, \ell(\ell+1)\, |\ell , m\rangle\,. \qquad (2)$$

When coupling two p states, there are nine possible states in the product basis:
|1 , m1 i1 |1 , m2i2 , where m1 , m2 = +1 , 0 , 1. Using
~2 = L
~ 21 + L
~ 22 + 2L1z L2z + L1+ L2 + L1 L2+ ,
L
~ 2 on the nine states of the product basis:
we can operate with L
~ 2 |1 , 1i |1 , 1i = 6~2 |1 , 1i |1 , 1i ,
L
1
2
1
2
2
2
~
L |1 , 1i1 |1 , 0i2 = 4~ |1 , 1i1 |1 , 0i2 + 2~2 |1 , 0i1 |1 , 1i2 ,

~ 2 |1 , 1i |1 , 1i = 2~2 |1 , 1i |1 , 1i + 2~2 |1 , 0i |1 , 0i ,
L
1
2
1
2
1
2
2
2
2
~
L |1 , 0i1 |1 , 1i2 = 4~ |1 , 0i1 |1 , 1i2 + 2~ |1 , 1i1 |1 , 0i2 ,
~ 2 |1 , 0i |1 , 0i = 4~2 |1 , 0i |1 , 0i + 2~2 |1 , 1i |1 , 1i + 2~2 |1 , 1i |1 , 1i ,
L
1
2
1
2
1
2
1
2
2
2
2
~
L |1 , 0i |1 , 1i = 4~ |1 , 0i |1 , 1i + 2~ |1 , 1i |1 , 0i ,
1

~ 2 |1 , 1i |1 , 1i = 2~2 |1 , 1i |1 , 1i + 2~2 |1 , 0i |1 , 0i ,
L
1
2
1
2
1
2
2
2
2
~
L |1 , 1i1 |1 , 0i2 = 4~ |1 , 1i1 |1 , 0i2 + 2~ |1 , 0i1 |1 , 1i2 ,

~ 2 |1 , 1i |1 , 1i = 6~2 |1 , 1i |1 , 1i .
L
1
2
1
2

~ 2 with L
~ 21 + L
~ 22 + 2L1z L2z + L1+ L2 + L1 L2+ ,
In evaluating the above, I replaced L
and then used eqs. (1) and (2) repeatedly.
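As a numerical cross-check of these nine relations, here is a minimal Python sketch (assuming numpy is available, with $\hbar$ set to 1); the matrices are the standard $\ell = 1$ angular momentum matrices and the helper names are my own.

```python
import numpy as np

hbar = 1.0
# l = 1 angular momentum matrices in the basis {|1,1>, |1,0>, |1,-1>}
Lz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)
Lp = hbar * np.sqrt(2) * np.array([[0, 1, 0],
                                   [0, 0, 1],
                                   [0, 0, 0]], dtype=complex)   # L_+
Lm = Lp.conj().T                                                # L_-
I3 = np.eye(3, dtype=complex)

# Operators acting on particle 1 or 2 of the product space (Kronecker products)
L1z, L2z = np.kron(Lz, I3), np.kron(I3, Lz)
L1p, L2p = np.kron(Lp, I3), np.kron(I3, Lp)
L1m, L2m = np.kron(Lm, I3), np.kron(I3, Lm)
L1sq = 2 * hbar**2 * np.eye(9)   # L_1^2 = l(l+1) hbar^2 = 2 hbar^2 on l = 1 states
L2sq = 2 * hbar**2 * np.eye(9)

# Total L^2 = L_1^2 + L_2^2 + 2 L_1z L_2z + L_1+ L_2- + L_1- L_2+
Lsq = L1sq + L2sq + 2 * L1z @ L2z + L1p @ L2m + L1m @ L2p

def ket(m1, m2):
    """Product state |1,m1>_1 |1,m2>_2 as a 9-component vector."""
    e = {1: [1, 0, 0], 0: [0, 1, 0], -1: [0, 0, 1]}
    return np.kron(np.array(e[m1], dtype=complex), np.array(e[m2], dtype=complex))

# Example: L^2 |1,1>_1 |1,-1>_2 = 2 hbar^2 |1,1>_1 |1,-1>_2 + 2 hbar^2 |1,0>_1 |1,0>_2
print(np.allclose(Lsq @ ket(1, -1),
                  2*hbar**2*ket(1, -1) + 2*hbar**2*ket(0, 0)))   # True
```

The remaining eight relations can be checked the same way by changing the arguments of `ket`.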
Likewise, we can operate with $\vec{L}^2$ on the nine states of the total angular momentum basis, $|\ell , m \,;\, 1\, 1\rangle$:

$$\vec{L}^2\, |2 , m \,;\, 1\, 1\rangle = 6\hbar^2\, |2 , m \,;\, 1\, 1\rangle\,, \qquad m = 2 , 1 , 0 , -1 , -2\,,$$
$$\vec{L}^2\, |1 , m \,;\, 1\, 1\rangle = 2\hbar^2\, |1 , m \,;\, 1\, 1\rangle\,, \qquad m = 1 , 0 , -1\,,$$
$$\vec{L}^2\, |0 , 0 \,;\, 1\, 1\rangle = 0\,.$$

It is now straightforward to verify Table 9.5 on p. 394 of Liboff. First, by operating with $L_z = L_{1z} + L_{2z}$, it follows that in the expansion

$$|\ell\, m \,;\, \ell_1\, \ell_2\rangle = \sum_{m_1 , m_2} C_{m_1 m_2}\, |\ell_1\, m_1\rangle_1\, |\ell_2\, m_2\rangle_2\,,$$

only terms with $m = m_1 + m_2$ appear on the right-hand side above. Second, by operating with $\vec{L}^2$ and using the results above, we can verify the entries of Table 9.5. We give two examples. To check that

$$|2 , 1 \,;\, 1 , 1\rangle = \sqrt{\tfrac{1}{2}}\,\bigl(\, |1 , 1\rangle_1 |1 , 0\rangle_2 + |1 , 0\rangle_1 |1 , 1\rangle_2\,\bigr)\,,$$

we note that $\vec{L}^2\, |2 , 1 \,;\, 1\, 1\rangle = 6\hbar^2\, |2 , 1 \,;\, 1\, 1\rangle$, and

$$\vec{L}^2\,\bigl(\, |1 , 1\rangle_1 |1 , 0\rangle_2 + |1 , 0\rangle_1 |1 , 1\rangle_2\,\bigr) = 6\hbar^2\,\bigl(\, |1 , 1\rangle_1 |1 , 0\rangle_2 + |1 , 0\rangle_1 |1 , 1\rangle_2\,\bigr)\,,$$
as expected. To check that

$$|0\, 0 \,;\, 1\, 1\rangle = \sqrt{\tfrac{1}{3}}\,\bigl(\, |1 , 1\rangle_1 |1 , -1\rangle_2 - |1 , 0\rangle_1 |1 , 0\rangle_2 + |1 , -1\rangle_1 |1 , 1\rangle_2\,\bigr)\,,$$

we note that $\vec{L}^2\, |0\, 0 \,;\, 1\, 1\rangle = 0$, and

$$\vec{L}^2\,\bigl[\, |1 , 1\rangle_1 |1 , -1\rangle_2 - |1 , 0\rangle_1 |1 , 0\rangle_2 + |1 , -1\rangle_1 |1 , 1\rangle_2\,\bigr]$$
$$\qquad = (2 - 2)\hbar^2\, |1 , 1\rangle_1 |1 , -1\rangle_2 + (2 + 2 - 4)\hbar^2\, |1 , 0\rangle_1 |1 , 0\rangle_2 + (2 - 2)\hbar^2\, |1 , -1\rangle_1 |1 , 1\rangle_2 = 0\,.$$
The other seven entries of Table 9.5 are easily checked in the same way using the
results obtained above.
(b) The Clebsch-Gordan coefficients involved in the expansion of the state $|0\, 0 \,;\, 1\, 1\rangle$ are:

$$C_{1,-1} = -C_{0,0} = C_{-1,1} = \sqrt{\tfrac{1}{3}}\,.$$
(c) The inner product $\langle 2\, 0 \,;\, 1\, 1\, |\, 0\, 0 \,;\, 1\, 1\rangle = 0$, since the total angular momentum basis states are orthonormal. This can also be checked by expanding in terms of the direct product basis, and using the fact that the direct product basis states are orthonormal.
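For parts (b) and (c), the same numbers can be reproduced with SymPy's Clebsch-Gordan module (assuming `sympy.physics.quantum.cg` is available); this is only a sketch of the check, not part of the assigned solution method.

```python
from sympy import simplify
from sympy.physics.quantum.cg import CG

# Clebsch-Gordan coefficients <1, m1; 1, m2 | 0, 0> for the state |0 0; 1 1>
c_singlet = {(m1, -m1): CG(1, m1, 1, -m1, 0, 0).doit() for m1 in (1, 0, -1)}
print(c_singlet)   # {(1, -1): sqrt(3)/3, (0, 0): -sqrt(3)/3, (-1, 1): sqrt(3)/3}

# Part (c): <2 0; 1 1 | 0 0; 1 1> as a sum over the common product-basis states
overlap = sum(CG(1, m1, 1, -m1, 2, 0).doit() * CG(1, m1, 1, -m1, 0, 0).doit()
              for m1 in (1, 0, -1))
print(simplify(overlap))   # 0
```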

2. Liboff, problem 9.46 on page 400.


We are asked to evaluate $Y_\ell^m(\pi - \theta\,,\, \phi + \pi)$. Consider the formula,

$$Y_\ell^m(\theta , \phi) = \left[\frac{2\ell + 1}{4\pi}\,\frac{(\ell - m)!}{(\ell + m)!}\right]^{1/2} P_\ell^m(\cos\theta)\, e^{im\phi}\,,$$

which follows from eqs. (9.70) and (9.77) of Liboff. First, we note that:

$$\cos(\pi - \theta) = -\cos\theta\,.$$

The last line of Table 9.2 indicates that $P_\ell(-\cos\theta) = (-1)^\ell P_\ell(\cos\theta)$. The definition of $P_\ell^m(\cos\theta)$ given in terms of $P_\ell(\cos\theta)$ in Table 9.3 implies that

$$P_\ell^m(-\cos\theta) = (-1)^{\ell + m}\, P_\ell^m(\cos\theta)\,.$$
Finally, using the definition of $Y_\ell^m(\theta , \phi)$ given above, it follows that:

$$Y_\ell^m(\pi - \theta\,,\, \phi + \pi) = (-1)^{\ell + m}\, e^{im\pi}\, Y_\ell^m(\theta , \phi) = (-1)^{\ell + m}\, (-1)^m\, Y_\ell^m(\theta , \phi) = (-1)^\ell\, Y_\ell^m(\theta , \phi)\,,$$

after noting that $e^{im\pi} = (e^{i\pi})^m = (-1)^m$ and $(-1)^{2m} = +1$ for integer $m$. Thus, we have proven that the parity of the state $|\ell\, m\rangle$ (odd or even) is the same as that of $\ell$.
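A quick numerical spot-check of this parity relation can be done with SciPy's spherical harmonics (a sketch assuming `scipy.special.sph_harm`, whose argument order is `(m, l, azimuthal, polar)`):

```python
import numpy as np
from scipy.special import sph_harm

# Check Y_l^m(pi - theta, phi + pi) = (-1)^l Y_l^m(theta, phi) for l = 0,...,3.
# Note: sph_harm(m, l, az, pol) takes the azimuthal angle first and the polar
# angle second, so Y_l^m(theta, phi) in the notation above is sph_harm(m, l, phi, theta).
theta, phi = 0.7, 1.9          # arbitrary test angles (polar, azimuthal)
for l in range(4):
    for m in range(-l, l + 1):
        lhs = sph_harm(m, l, phi + np.pi, np.pi - theta)
        rhs = (-1)**l * sph_harm(m, l, phi, theta)
        assert np.isclose(lhs, rhs), (l, m)
print("parity relation verified for l = 0, 1, 2, 3")
```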

3. Liboff, problem 11.61 on page 541.


The spin-one matrices are given by eq. (11.66) of Liboff:

$$S_x = \frac{\hbar}{\sqrt{2}}\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}, \qquad S_y = \frac{\hbar}{\sqrt{2}}\begin{pmatrix} 0 & -i & 0 \\ i & 0 & -i \\ 0 & i & 0 \end{pmatrix}, \qquad S_z = \hbar\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix}.$$

We can compute the expectation values of $S_x$, $S_y$ and $S_z$ with respect to the state

$$\psi = \begin{pmatrix} a \\ b \\ c \end{pmatrix},$$

as follows:

$$\langle S_x\rangle = \langle\psi| S_x |\psi\rangle = \psi^\dagger S_x\, \psi = \frac{\hbar}{\sqrt{2}}\,(a^*\ \ b^*\ \ c^*)\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \sqrt{2}\,\hbar\,\mathrm{Re}\,[b^*(a + c)]\,,$$

$$\langle S_y\rangle = \langle\psi| S_y |\psi\rangle = \psi^\dagger S_y\, \psi = \frac{\hbar}{\sqrt{2}}\,(a^*\ \ b^*\ \ c^*)\begin{pmatrix} 0 & -i & 0 \\ i & 0 & -i \\ 0 & i & 0 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = -\sqrt{2}\,\hbar\,\mathrm{Im}\,[b^*(a - c)]\,,$$

$$\langle S_z\rangle = \langle\psi| S_z |\psi\rangle = \psi^\dagger S_z\, \psi = \hbar\,(a^*\ \ b^*\ \ c^*)\begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & -1 \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = \hbar\,\bigl(|a|^2 - |c|^2\bigr)\,.$$

We shall also impose the normalization condition,

$$|a|^2 + |b|^2 + |c|^2 = 1\,.$$
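The closed-form expressions above can be compared against a direct matrix computation; the following is a minimal sketch (numpy assumed, $\hbar = 1$, with an arbitrary test state of my own choosing):

```python
import numpy as np

hbar = 1.0
Sx = hbar/np.sqrt(2) * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = hbar/np.sqrt(2) * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Sz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)

# An arbitrary normalized test state (a, b, c)
a, b, c = 0.3 + 0.4j, 0.5 - 0.2j, -0.1 + 0.6j
norm = np.sqrt(abs(a)**2 + abs(b)**2 + abs(c)**2)
a, b, c = a/norm, b/norm, c/norm
psi = np.array([a, b, c])

# Direct matrix expectation values <psi| S_i |psi>
print([np.real(psi.conj() @ S @ psi) for S in (Sx, Sy, Sz)])

# Closed-form expressions derived above
print(np.sqrt(2)*hbar*np.real(np.conj(b)*(a + c)),    # <S_x>
      -np.sqrt(2)*hbar*np.imag(np.conj(b)*(a - c)),   # <S_y>
      hbar*(abs(a)**2 - abs(c)**2))                   # <S_z>
```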



To determine whether a state exists such that $\langle S_x\rangle = \langle S_y\rangle = \langle S_z\rangle = 0$, we must solve the simultaneous equations:

$$\mathrm{Re}\,[b^*(a + c)] = 0\,, \qquad \mathrm{Im}\,[b^*(a - c)] = 0\,, \qquad |a|^2 - |c|^2 = 0\,, \qquad |a|^2 + |b|^2 + |c|^2 = 1\,. \qquad (3)$$

There are numerous solutions. For example, $b = 0$ and $|a| = |c| = \sqrt{\tfrac{1}{2}}$ is clearly a solution. Another possible solution is $a = c = 0$ and $|b| = 1$. To understand the physical meaning of these solutions, note that the vectors

$$\frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}, \qquad \frac{1}{\sqrt{2}}\begin{pmatrix} 1 \\ 0 \\ 1 \end{pmatrix}, \qquad \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix},$$

are eigenstates, respectively, of $S_x$, $S_y$ and $S_z$ with eigenvalue zero in each case.


Clearly, any eigenstate of Sx with eigenvalue zero will have hSx i = 0. But it is
easy to see that eigenstates of Sy and Sz with eigenvalue zero will also satisfy
hSx i = 0. For example, using eq. (11.68) of Liboff, we see that

1
q 

1

0 = 12 x(1) + x(1) ,
2 1

0
q 

1

1 = 12 x(1) x(1) ,
2 0

(1)

where the x
are eigenstates of Sx with eigenvalues ~. That is, these states
are superpositions of states of opposite sign Sx values with amplitudes of equal
magnitude. Hence the Sx value averages to zero, i.e., hSx i = 0. A similar argument
can be made to show that hSy i = hSz i = 0.
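The decomposition quoted above is easy to confirm numerically; a short sketch (numpy assumed, $\hbar = 1$, with the $S_x$ eigenvectors written out explicitly):

```python
import numpy as np

# S_x eigenvectors for eigenvalues +hbar and -hbar (cf. Liboff eq. 11.68)
x_plus  = np.array([0.5,  1/np.sqrt(2), 0.5], dtype=complex)
x_minus = np.array([0.5, -1/np.sqrt(2), 0.5], dtype=complex)

sy_null = np.array([1, 0, 1], dtype=complex)/np.sqrt(2)   # S_y eigenvector, eigenvalue 0
sz_null = np.array([0, 1, 0], dtype=complex)              # S_z eigenvector, eigenvalue 0

print(np.allclose(sy_null, np.sqrt(0.5)*(x_plus + x_minus)))   # True
print(np.allclose(sz_null, np.sqrt(0.5)*(x_plus - x_minus)))   # True

# Equal-magnitude amplitudes on S_x = +hbar and S_x = -hbar, so <S_x> averages to zero
print(abs(x_plus.conj() @ sy_null), abs(x_minus.conj() @ sy_null))   # 1/sqrt(2) each
```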
NOTE ADDED: It is easy to prove the following general result. Let $\psi$ be an eigenstate of $\vec{S}\cdot\hat{n}$ with eigenvalue zero. Then $\langle S_x\rangle = \langle S_y\rangle = \langle S_z\rangle = 0$, where the expectation values are defined by $\langle S_i\rangle \equiv \langle\psi|\, S_i\, |\psi\rangle$.

To prove this, we exploit the commutation relations satisfied by the $S_i$,

$$[S_i\,,\, S_j] = i\hbar \sum_{k=1}^{3} \epsilon_{ijk}\, S_k\,.$$

It follows that

$$[\vec{S}\cdot\hat{n}_1\,,\, \vec{S}\cdot\hat{n}_2] = \sum_{i,j} n_{1i}\, n_{2j}\, [S_i\,,\, S_j] = i\hbar \sum_{i,j,k} \epsilon_{ijk}\, n_{1i}\, n_{2j}\, S_k = i\hbar\, \vec{S}\cdot(\hat{n}_1 \times \hat{n}_2)\,.$$

Suppose that we choose $\hat{n}_1 = \hat{n}$, where $\vec{S}\cdot\hat{n}\, |\psi\rangle = 0$. Then

$$\bigl\langle\, [\vec{S}\cdot\hat{n}\,,\, \vec{S}\cdot\hat{n}_2]\, \bigr\rangle = i\hbar\, \bigl\langle\, \vec{S}\cdot(\hat{n} \times \hat{n}_2)\, \bigr\rangle\,.$$

But,

$$\bigl\langle\, [\vec{S}\cdot\hat{n}\,,\, \vec{S}\cdot\hat{n}_2]\, \bigr\rangle = \langle\psi|\, [\vec{S}\cdot\hat{n}\,,\, \vec{S}\cdot\hat{n}_2]\, |\psi\rangle = \langle\psi|\, \bigl(\vec{S}\cdot\hat{n}\ \vec{S}\cdot\hat{n}_2 - \vec{S}\cdot\hat{n}_2\ \vec{S}\cdot\hat{n}\bigr) |\psi\rangle = 0\,,$$

after applying $\vec{S}\cdot\hat{n}\, |\psi\rangle = 0$ and $\langle\psi|\, \vec{S}\cdot\hat{n} = 0$. Hence, it follows that:

$$\bigl\langle\, \vec{S}\cdot(\hat{n} \times \hat{n}_2)\, \bigr\rangle \equiv \langle\psi|\, \vec{S}\cdot(\hat{n} \times \hat{n}_2)\, |\psi\rangle = 0\,. \qquad (4)$$

If $\hat{n}$ points in some direction other than $\hat{x}$, $\hat{y}$ or $\hat{z}$, then it is easy to show that one can always choose $\hat{n}_2$ such that $\hat{n} \times \hat{n}_2$ points in either the $\hat{x}$, $\hat{y}$ or $\hat{z}$ direction. Choosing the appropriate $\hat{n}_2$, it then follows from eq. (4) that

$$\langle S_x\rangle = \langle S_y\rangle = \langle S_z\rangle = 0\,. \qquad (5)$$

Next, suppose $\hat{n} = \hat{z}$. Then $\langle S_z\rangle = \langle\psi|\, S_z\, |\psi\rangle = 0$, since $|\psi\rangle$ is an eigenstate of $S_z$ with zero eigenvalue. In this case, if $\hat{n}_2 = \hat{x}$, then $\hat{n} \times \hat{n}_2 = \hat{y}$. Similarly, if $\hat{n}_2 = -\hat{y}$, then $\hat{n} \times \hat{n}_2 = \hat{x}$. Consequently, eq. (4) implies that $\langle S_x\rangle = \langle S_y\rangle = 0$. A similar argument can be made if $\hat{n} = \hat{x}$ or $\hat{n} = \hat{y}$. In all possible cases, one arrives at eq. (5). Hence, we conclude that if $\psi$ is an eigenstate of $\vec{S}\cdot\hat{n}$ with eigenvalue zero, then $\langle S_x\rangle = \langle S_y\rangle = \langle S_z\rangle = 0$.
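The general result (and the commutator identity used to derive it) can be spot-checked numerically for a randomly chosen direction; this is a sketch only, with $\hbar = 1$ and numpy assumed:

```python
import numpy as np

hbar = 1.0
Sx = hbar/np.sqrt(2) * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = hbar/np.sqrt(2) * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Sz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)
S = np.array([Sx, Sy, Sz])

rng = np.random.default_rng(0)
n1, n2 = rng.normal(size=3), rng.normal(size=3)
n1, n2 = n1/np.linalg.norm(n1), n2/np.linalg.norm(n2)

Sn1 = np.tensordot(n1, S, axes=1)      # S . n1
Sn2 = np.tensordot(n2, S, axes=1)      # S . n2

# Commutator identity [S.n1, S.n2] = i hbar S.(n1 x n2)
lhs = Sn1 @ Sn2 - Sn2 @ Sn1
rhs = 1j*hbar*np.tensordot(np.cross(n1, n2), S, axes=1)
print(np.allclose(lhs, rhs))           # True

# The zero-eigenvalue eigenvector of S.n1 has vanishing <S_x>, <S_y>, <S_z>
vals, vecs = np.linalg.eigh(Sn1)
psi = vecs[:, np.argmin(np.abs(vals))]
print([np.real(psi.conj() @ Si @ psi) for Si in S])   # ~ [0, 0, 0]
```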
Applying this general result to problem 3, we conclude that the most general $\psi$ for which $\langle S_x\rangle = \langle S_y\rangle = \langle S_z\rangle = 0$ is an eigenstate of $\vec{S}\cdot\hat{n}$ with eigenvalue zero, where $\hat{n}$ may point in any possible direction. I will now demonstrate that

$$\psi = \begin{pmatrix} a \\ b \\ c \end{pmatrix}$$

is an eigenstate of $\vec{S}\cdot\hat{n}$ with eigenvalue zero (for some $\hat{n}$) if and only if $a$, $b$ and $c$ satisfy the simultaneous equations given in eq. (3). Using the spin-one matrices $\vec{S}$ given above,

$$\vec{S}\cdot\hat{n} = \hbar\begin{pmatrix} \cos\theta & \tfrac{1}{\sqrt{2}}\sin\theta\, e^{-i\phi} & 0 \\[2pt] \tfrac{1}{\sqrt{2}}\sin\theta\, e^{i\phi} & 0 & \tfrac{1}{\sqrt{2}}\sin\theta\, e^{-i\phi} \\[2pt] 0 & \tfrac{1}{\sqrt{2}}\sin\theta\, e^{i\phi} & -\cos\theta \end{pmatrix},$$

where $\hat{n} = (\sin\theta\cos\phi\,,\, \sin\theta\sin\phi\,,\, \cos\theta)$. Solving the equation,

$$\hbar\begin{pmatrix} \cos\theta & \tfrac{1}{\sqrt{2}}\sin\theta\, e^{-i\phi} & 0 \\[2pt] \tfrac{1}{\sqrt{2}}\sin\theta\, e^{i\phi} & 0 & \tfrac{1}{\sqrt{2}}\sin\theta\, e^{-i\phi} \\[2pt] 0 & \tfrac{1}{\sqrt{2}}\sin\theta\, e^{i\phi} & -\cos\theta \end{pmatrix}\begin{pmatrix} a \\ b \\ c \end{pmatrix} = 0\,,$$

we find:

$$\sqrt{2}\, a\cos\theta + b\sin\theta\, e^{-i\phi} = 0\,,$$
$$\sin\theta\,\bigl(e^{i\phi} a + e^{-i\phi} c\bigr) = 0\,,$$
$$b\sin\theta\, e^{i\phi} - \sqrt{2}\, c\cos\theta = 0\,,$$

where $a$, $b$ and $c$ satisfy the normalization condition $|a|^2 + |b|^2 + |c|^2 = 1$. I will now show that these equations are equivalent to eq. (3). For example, if $\sin\theta = 0$, then $a = c = 0$ and $|b| = 1$ [which is one of the solutions of eq. (3)]. If $\sin\theta \neq 0$, then $c = -e^{2i\phi}\, a$, which is equivalent to $|a| = |c|$. Finally, adding and subtracting the first and third equations above yield:

$$\sqrt{2}\,(a - c)\cos\theta + 2b\sin\theta\cos\phi = 0\,,$$
$$\sqrt{2}\,(a + c)\cos\theta - 2ib\sin\theta\sin\phi = 0\,.$$
Multiply these equations by $b^*$ to obtain:

$$\sqrt{2}\, b^*(a - c)\cos\theta = -2|b|^2\sin\theta\cos\phi\,,$$
$$\sqrt{2}\, b^*(a + c)\cos\theta = 2i\,|b|^2\sin\theta\sin\phi\,.$$

The right-hand side of the first equation is real and that of the second is purely imaginary, so $\mathrm{Im}\,[b^*(a - c)]\cos\theta = \mathrm{Re}\,[b^*(a + c)]\cos\theta = 0$; and if $\cos\theta = 0$ (with $\sin\theta \neq 0$), the first of the three equations above gives $b = 0$. In either case, these two equations imply that $\mathrm{Re}\,[b^*(a + c)] = \mathrm{Im}\,[b^*(a - c)] = 0$, which again reproduces the results of eq. (3). Thus, if $\psi$ is an eigenstate of $\vec{S}\cdot\hat{n}$ with eigenvalue zero, then the components of $\psi$ satisfy eq. (3), which of course implies that $\langle S_x\rangle = \langle S_y\rangle = \langle S_z\rangle = 0$. The converse can also be proven. Namely, if the components of $\psi$ satisfy eq. (3), then $\psi$ is an eigenstate of $\vec{S}\cdot\hat{n}$ with eigenvalue zero. Simply use the above equations to show that $\theta$ and $\phi$ (which determine $\hat{n}$) can be determined from $a$, $b$ and $c$. One finds:

$$e^{2i\phi} = -\frac{c}{a}\,, \qquad \tan\theta = -\sqrt{2}\, e^{i\phi}\,\frac{a}{b}\,.$$

Note that the case of $a = 0$ implies that $c = 0$ and $\sin\theta = 0$, in which case $\phi$ is not well defined. There is one overall phase of $a$, $b$ and $c$ which is arbitrary (since it is not fixed by the normalization condition).
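The reconstruction of $(\theta, \phi)$ from $(a, b, c)$ can also be exercised numerically; the sketch below (numpy assumed, $\hbar = 1$) recovers a direction from the components alone and checks that it (or equivalently its antipode) annihilates $\psi$:

```python
import numpy as np

hbar = 1.0
Sx = hbar/np.sqrt(2) * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Sy = hbar/np.sqrt(2) * np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]])
Sz = hbar * np.diag([1.0, 0.0, -1.0]).astype(complex)

def S_dot_n(theta, phi):
    n = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])
    return n[0]*Sx + n[1]*Sy + n[2]*Sz

# Take the zero-eigenvalue eigenvector of S.n for some direction, then reconstruct
# (theta, phi) from its components (a, b, c) alone using the formulas above.
vals, vecs = np.linalg.eigh(S_dot_n(1.1, 0.4))
a, b, c = vecs[:, np.argmin(np.abs(vals))]

phi = 0.5*np.angle(-c/a)                        # from e^{2 i phi} = -c/a  (mod pi)
t = np.real(-np.sqrt(2)*np.exp(1j*phi)*a/b)     # tan(theta), real up to round-off
theta = np.arctan(t) if t >= 0 else np.arctan(t) + np.pi   # place theta in (0, pi)

psi = np.array([a, b, c])
print(np.allclose(S_dot_n(theta, phi) @ psi, 0))   # True (the branch ambiguity only
                                                   # flips n -> -n, which also works)
```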

4. Start with the time-dependent Schrödinger equation for a charged particle in an external electromagnetic field in the Coulomb gauge. Denote the electromagnetic scalar and vector potentials by $\phi$ and $\vec{A}$. Then,

$$i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\vec{\nabla}^2\psi + \frac{ie\hbar}{mc}\,\vec{A}\cdot\vec{\nabla}\psi + \frac{e^2}{2mc^2}\,\vec{A}^2\psi + (V + e\phi)\,\psi\,. \qquad (6)$$

The complex conjugate of this equation is:

$$-i\hbar\,\frac{\partial\psi^*}{\partial t} = -\frac{\hbar^2}{2m}\vec{\nabla}^2\psi^* - \frac{ie\hbar}{mc}\,\vec{A}\cdot\vec{\nabla}\psi^* + \frac{e^2}{2mc^2}\,\vec{A}^2\psi^* + (V + e\phi)\,\psi^*\,. \qquad (7)$$

Multiply eq. (6) by $\psi^*$ and eq. (7) by $\psi$ and subtract the two resulting equations. The end result is:

$$i\hbar\left[\psi^*\frac{\partial\psi}{\partial t} + \psi\frac{\partial\psi^*}{\partial t}\right] = -\frac{\hbar^2}{2m}\Bigl[\psi^*\vec{\nabla}^2\psi - \psi\vec{\nabla}^2\psi^*\Bigr] + \frac{ie\hbar}{mc}\Bigl[\psi^*\vec{A}\cdot\vec{\nabla}\psi + \psi\vec{A}\cdot\vec{\nabla}\psi^*\Bigr]\,. \qquad (8)$$

The above result can be simplified by applying the following three identities:

$$\psi^*\frac{\partial\psi}{\partial t} + \psi\frac{\partial\psi^*}{\partial t} = \frac{\partial}{\partial t}\bigl(\psi^*\psi\bigr)\,, \qquad (9)$$

$$\psi^*\vec{\nabla}^2\psi - \psi\vec{\nabla}^2\psi^* = \vec{\nabla}\cdot\bigl(\psi^*\vec{\nabla}\psi - \psi\vec{\nabla}\psi^*\bigr)\,, \qquad (10)$$

$$\psi^*\vec{A}\cdot\vec{\nabla}\psi + \psi\vec{A}\cdot\vec{\nabla}\psi^* = \vec{\nabla}\cdot\bigl(\vec{A}\,\psi^*\psi\bigr) - \bigl(\vec{\nabla}\cdot\vec{A}\bigr)\,\psi^*\psi\,. \qquad (11)$$

For example, in deriving eq. (11) above, we used the following properties of $\vec{\nabla}$:

$$\vec{\nabla}\cdot\bigl(\vec{A}\,\psi^*\psi\bigr) = \vec{A}\cdot\vec{\nabla}\bigl(\psi^*\psi\bigr) + \bigl(\vec{\nabla}\cdot\vec{A}\bigr)\,\psi^*\psi\,, \qquad \vec{\nabla}\bigl(\psi^*\psi\bigr) = \psi^*\vec{\nabla}\psi + \psi\vec{\nabla}\psi^*\,.$$

However, in the Coulomb gauge, $\vec{\nabla}\cdot\vec{A} = 0$, in which case eq. (11) implies that:

$$\psi^*\vec{A}\cdot\vec{\nabla}\psi + \psi\vec{A}\cdot\vec{\nabla}\psi^* = \vec{\nabla}\cdot\bigl(\vec{A}\,\psi^*\psi\bigr)\,. \qquad (12)$$
Applying eqs. (9), (10) and (12) to eq. (8), one obtains:

$$\frac{\partial}{\partial t}\bigl(e\,\psi^*\psi\bigr) = -\vec{\nabla}\cdot\left[\frac{e\hbar}{2im}\bigl(\psi^*\vec{\nabla}\psi - \psi\vec{\nabla}\psi^*\bigr) - \frac{e^2}{mc}\,\vec{A}\,\psi^*\psi\right]. \qquad (13)$$

If we identify

$$\vec{J} \equiv \frac{e\hbar}{2im}\Bigl[\psi^*\vec{\nabla}\psi - \psi\vec{\nabla}\psi^*\Bigr] - \frac{e^2}{mc}\,\vec{A}\,\psi^*\psi\,, \qquad (14)$$

$$\rho \equiv e\,\psi^*\psi\,, \qquad (15)$$

then eq. (13) can be rewritten as:

$$\vec{\nabla}\cdot\vec{J} + \frac{\partial\rho}{\partial t} = 0\,,$$
which is the continuity equation. Note that if we employ the momentum operator $\vec{p} = -i\hbar\vec{\nabla}$, then we can rewrite $\vec{J}$ as

$$\vec{J} = \frac{e}{m}\,\mathrm{Re}\left[\psi^*\left(\vec{p} - \frac{e\vec{A}}{c}\right)\psi\right]. \qquad (16)$$

In order to verify the above result, note that $\mathrm{Re}\, z \equiv \tfrac{1}{2}(z + z^*)$. Hence, eq. (16) yields:

$$\vec{J} = \frac{e}{2m}\left[\psi^*\left(\frac{\hbar}{i}\vec{\nabla} - \frac{e\vec{A}}{c}\right)\psi + \psi\left(-\frac{\hbar}{i}\vec{\nabla} - \frac{e\vec{A}}{c}\right)\psi^*\right] = \frac{e\hbar}{2im}\Bigl[\psi^*\vec{\nabla}\psi - \psi\vec{\nabla}\psi^*\Bigr] - \frac{e^2}{mc}\,\vec{A}\,\psi^*\psi\,,$$

which is identical to the result of eq. (14).
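The algebra leading from eq. (16) back to eq. (14) can also be verified symbolically; the following one-dimensional SymPy sketch (the names and the 1-D restriction are mine) checks that the two expressions agree identically:

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar, e, m, c = sp.symbols('hbar e m c', positive=True)
u = sp.Function('u', real=True)(x)    # real part of psi (one spatial dimension)
v = sp.Function('v', real=True)(x)    # imaginary part of psi
A = sp.Function('A', real=True)(x)    # x-component of the vector potential

psi  = u + sp.I*v
psic = u - sp.I*v                     # complex conjugate, written out explicitly

# Eq. (14), restricted to one dimension
J14 = e*hbar/(2*sp.I*m)*(psic*sp.diff(psi, x) - psi*sp.diff(psic, x)) \
      - e**2/(m*c)*A*psic*psi

# Eq. (16): J = (e/m) Re[ psi* (p - e A/c) psi ], using Re z = (z + z*)/2
z  = psic*(-sp.I*hbar*sp.diff(psi, x) - e*A/c*psi)
zc = psi*(sp.I*hbar*sp.diff(psic, x) - e*A/c*psic)
J16 = e/m * sp.Rational(1, 2)*(z + zc)

print(sp.simplify(sp.expand(J16 - J14)))   # 0
```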


~ = = 0, we recover the results of Liboff,
Finally, we note that it we set A
eqs. (7.97) and (7.107) [apart from the overall factor of e, which is conventional].
In particular, the definition of is unmodified by the presence of the external
electromagnetic field.
ADDED NOTE: To verify that the expression obtained above for $\vec{J}$ makes physical sense, recall that the canonical momentum $\vec{p}$ is related to the mechanical momentum $m\vec{v}$ by

$$m\vec{v} = \vec{p} - \frac{e\vec{A}}{c}\,.$$

Hence, we can write:

$$\vec{J} = \frac{e}{m}\,\mathrm{Re}\bigl(\psi^*\, m\vec{v}\,\psi\bigr) = e\,\mathrm{Re}\bigl(\psi^*\,\vec{v}\,\psi\bigr)\,.$$

Classically, the current is related to the charge density and the velocity field by $\vec{J} = \rho\,\vec{v}$, so the quantum mechanical result derived above is sensible.

5. The time-dependent Schrödinger equation for a charged particle in an external electromagnetic field (before fixing a choice of gauge) is given by:

$$i\hbar\,\frac{\partial\psi}{\partial t} = -\frac{\hbar^2}{2m}\vec{\nabla}^2\psi + \frac{ie\hbar}{mc}\,\vec{A}\cdot\vec{\nabla}\psi + \frac{ie\hbar}{2mc}\,\bigl(\vec{\nabla}\cdot\vec{A}\bigr)\,\psi + \frac{e^2}{2mc^2}\,\vec{A}^2\psi + e\phi\,\psi\,, \qquad (17)$$

where I have set the external potential $V = 0$, as it plays no role in this problem. Note that I have not imposed a particular gauge such as the Coulomb gauge as we did in problem 4. This is important, since the goal of this problem is to show that the Schrödinger equation is form invariant under a generalized gauge transformation, in which the electromagnetic potentials $\phi$ and $\vec{A}$ are gauge-transformed and the wave function is transformed by an appropriate phase factor.
(a) Consider a new wave function $\psi_1(\vec{r}, t)$ defined by:

$$\psi_1(\vec{r}, t) = \exp\left[\frac{ieX(\vec{r}, t)}{\hbar c}\right]\psi(\vec{r}, t)\,. \qquad (18)$$

Substituting $\psi = \psi_1 \exp\bigl[-ieX(\vec{r}, t)/(\hbar c)\bigr]$ into eq. (17), one needs to compute:


$$\frac{\partial\psi}{\partial t} = e^{-ieX(\vec{r},t)/(\hbar c)}\left[\frac{\partial\psi_1}{\partial t} - \frac{ie}{\hbar c}\,\frac{\partial X}{\partial t}\,\psi_1\right],$$

$$\vec{\nabla}\psi = e^{-ieX(\vec{r},t)/(\hbar c)}\left[\vec{\nabla}\psi_1 - \frac{ie}{\hbar c}\,\psi_1\vec{\nabla}X\right],$$

$$\vec{\nabla}^2\psi = e^{-ieX(\vec{r},t)/(\hbar c)}\left[\vec{\nabla}^2\psi_1 - \frac{2ie}{\hbar c}\,\bigl(\vec{\nabla}\psi_1\bigr)\cdot\bigl(\vec{\nabla}X\bigr) - \frac{ie}{\hbar c}\,\psi_1\vec{\nabla}^2 X - \frac{e^2}{\hbar^2 c^2}\,\psi_1\bigl(\vec{\nabla}X\bigr)\cdot\bigl(\vec{\nabla}X\bigr)\right].$$

Inserting these results into eq. (17), the common factor of $\exp\bigl[-ieX(\vec{r}, t)/(\hbar c)\bigr]$ cancels, and we end up with:

$$i\hbar\,\frac{\partial\psi_1}{\partial t} + \frac{e}{c}\,\frac{\partial X}{\partial t}\,\psi_1 = -\frac{\hbar^2}{2m}\vec{\nabla}^2\psi_1 + \frac{ie\hbar}{mc}\,\vec{A}\cdot\vec{\nabla}\psi_1 + \frac{ie\hbar}{2mc}\,\bigl(\vec{\nabla}\cdot\vec{A}\bigr)\,\psi_1 + \frac{e^2}{2mc^2}\,\vec{A}^2\psi_1$$
$$\qquad + \frac{ie\hbar}{mc}\,\bigl(\vec{\nabla}\psi_1\bigr)\cdot\bigl(\vec{\nabla}X\bigr) + \frac{ie\hbar}{2mc}\,\psi_1\vec{\nabla}^2 X + \frac{e^2}{2mc^2}\,\psi_1\bigl(\vec{\nabla}X\bigr)\cdot\bigl(\vec{\nabla}X\bigr) + \frac{e^2}{mc^2}\,\psi_1\,\vec{A}\cdot\vec{\nabla}X + e\phi\,\psi_1\,.$$
mc2

Collecting and rearranging terms, the above equation takes the following form:

$$i\hbar\,\frac{\partial\psi_1}{\partial t} = -\frac{\hbar^2}{2m}\vec{\nabla}^2\psi_1 + \frac{ie\hbar}{mc}\,\bigl(\vec{A} + \vec{\nabla}X\bigr)\cdot\vec{\nabla}\psi_1 + \frac{ie\hbar}{2mc}\,\psi_1\,\vec{\nabla}\cdot\bigl(\vec{A} + \vec{\nabla}X\bigr)$$
$$\qquad + \frac{e^2}{2mc^2}\,\psi_1\,\bigl(\vec{A} + \vec{\nabla}X\bigr)^2 + e\left(\phi - \frac{1}{c}\,\frac{\partial X}{\partial t}\right)\psi_1\,. \qquad (19)$$

That is, eq. (19) is the time-dependent Schrödinger equation satisfied by $\psi_1(\vec{r}, t)$.
(b) If we perform a gauge transformation of the electromagnetic scalar and vector potentials,

$$\vec{A}\,' = \vec{A} + \vec{\nabla}X\,, \qquad \phi' = \phi - \frac{1}{c}\,\frac{\partial X}{\partial t}\,, \qquad (20)$$

then eq. (19) can be rewritten as:

$$i\hbar\,\frac{\partial\psi_1}{\partial t} = -\frac{\hbar^2}{2m}\vec{\nabla}^2\psi_1 + \frac{ie\hbar}{mc}\,\vec{A}\,'\cdot\vec{\nabla}\psi_1 + \frac{ie\hbar}{2mc}\,\bigl(\vec{\nabla}\cdot\vec{A}\,'\bigr)\,\psi_1 + \frac{e^2}{2mc^2}\,\vec{A}\,'^{\,2}\psi_1 + e\phi'\,\psi_1\,. \qquad (21)$$

Note that eq. (21) has exactly the same form as eq. (17) if we replace

$$\vec{A} \to \vec{A}\,'\,, \qquad \phi \to \phi'\,, \qquad \text{and} \qquad \psi \to \psi_1\,.$$

That is, the Schrödinger equation is form invariant with respect to generalized gauge transformations in which $\vec{A}$, $\phi$ and the wave function are simultaneously transformed, as specified in eqs. (18) and (20).
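As a final check, the form invariance argued above can be confirmed symbolically in one dimension; the sketch below (SymPy assumed, with my own residual helper) verifies that the Schrödinger operator applied to $\psi_1$ with the primed potentials equals the phase factor times the same operator applied to $\psi$ with the unprimed potentials:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
hbar, e, m, c = sp.symbols('hbar e m c', positive=True)
psi = sp.Function('psi')(x, t)
A   = sp.Function('A', real=True)(x, t)
phi = sp.Function('phi', real=True)(x, t)
X   = sp.Function('X', real=True)(x, t)

def residual(f, Avec, scal):
    """Eq. (17) in one dimension, written as LHS - RHS (vanishes on solutions)."""
    return (sp.I*hbar*sp.diff(f, t)
            + hbar**2/(2*m)*sp.diff(f, x, 2)
            - sp.I*e*hbar/(m*c)*Avec*sp.diff(f, x)
            - sp.I*e*hbar/(2*m*c)*sp.diff(Avec, x)*f
            - e**2/(2*m*c**2)*Avec**2*f
            - e*scal*f)

psi1 = sp.exp(sp.I*e*X/(hbar*c))*psi          # eq. (18)
A1   = A + sp.diff(X, x)                      # eq. (20)
phi1 = phi - sp.diff(X, t)/c

# Form invariance: the primed residual equals the original residual times the phase
gap = residual(psi1, A1, phi1) - sp.exp(sp.I*e*X/(hbar*c))*residual(psi, A, phi)
print(sp.simplify(sp.expand(gap)))            # 0
```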
