You are on page 1of 63

INTRODUCTION TO LIE ALGEBRAS AND THEIR

REPRESENTATIONS
PART III
MICHAELMAS 2010

IAN GROJNOWSKI
DPMMS, UNIVERSITY OF CAMBRIDGE
I.GROJNOWSKI@DPMMS.CAM.AC.UK
TYPESET BY ELENA YUDOVINA
STATISTICAL LABORATORY, UNIVERSITY OF CAMBRIDGE
E.YUDOVINA@STATSLAB.CAM.AC.UK

1. Introduction, motivation, etc.


Lecture 1 For most of this lecture, we will be motivating why we care about Lie algebras
and why they are defined the way they are. If there are sections of it that dont make sense,
they can be skipped (although if none of it makes sense, you should be worried).
Connected Lie groups are groups which are also differentiable manifolds. The simple
algebraic groups over algebraically closed fields (which is a nice class of Lie groups) are:
SL(n) = {A Mat n | det A = 1}
SO(n) = {A SLn |AAT = I}
SP (n) = {A SLn |some symplectic form...}
and exactly 5 others (as we will actually see!)
(We are ignoring finite centers, so Spin(n), the double cover of SO(n), is missing.)
Remark. Each of these groups has a maximal compact subgroup. For example, SU (n) is the
maximal compact subgroup of SL(n), while SOn (R) is the maximal compact subgroup of
SOn (C). These are themselves Lie groups, of course. The representations of the maximal
compact subgroup are the same as the algebraic representations of the simple algebraic group
are the same as the finite-dimensional representations of its (the simple algebraic groups)
Lie algebra which are what we study.
Lie groups complicated objects: for example, SU2 (C) is homotopic to a 3-sphere, and SUn
is homotopic to a twisted product of a 3-sphere, a 5-sphere, a 7-sphere, etc. Thus, studying
Lie groups requires some amount of algebraic topology and a lot of algebraic geometry. We
want to replace these complicated disciplines by the easy discipline of linear algebra.
Therefore, instead of a Lie group G we will be considering g = TI G, the tangent space to
G at the identity.
Definition. A linear algebraic group is a subgroup of some GLn defined by polynomial
equations in the matrix coefficients.
We see that the examples above are linear algebraic groups.
Date: Michaelmas 2010.
1
2 IAN GROJNOWSKI

Remark. It is not altogether obvious that this is an intrinsic characterisation (it would
be nice not to depend on the embedding into GLn ). The intrinsic characterisation is as
affine algebraic groups, that is, groups which are affine algebraic varieties and for which
multiplication and inverse are morphisms of affine algebraic varieties. One direction of this
identification is relatively easy (we just need to check that multiplication and inverse really
are morphisms); in the other direction, we need to show that any affine algebraic group has
a faithful finite-dimensional representation, i.e. embeds in GL(V ). This involves looking at
the ring of functions and doing something with it.
We will now explain a tangent space if you havent met with it before.
Example 1.1. Lets pretend were physicists, since were in their building anyway. Let
G = SL2 , and let ||  1. Then for
   
1 a b
g= + + higher-order terms
1 c d
to lie in SL2 , we must have det g = 1, or
 
1 + a b
1 = det + h.o.t = 1 + (a + d) + 2 (junk) + h.o.t
c 1 + d
Therefore, to have g G we need to have a + d = 0.
To do this formally, we define the dual numbers
E = C[]/2 = {a + b|a, b C}
For a linear algebraic group G, we consider
G(E) = {A Mat n (E)|A satisfies the polynomial equations defining G GLn }
For example,  

SL2 (E) = { |, , , E, = 1}

The natural map E C,  7 0 gives a projection : G(E) G. We define the Lie algebra
of G as
g
= 1 (I)
= {X Mat n (C)|I + X G(E)}.
In particular,  
a b
sl2 = { Mat 2 (C)|a + d = 0}.
c d
Remark. I + X represents an infinitesimal change at I in the direction X. Equivalently,
the germ of a curve Spec C[[]] G.
Exercise 1. (Do this only if you already know what a tangent space and tangent bundle
are.) Show that G(E) = T G is the tangent bundle to G, and g = T1 G is the tangent space
to G at 1.
Example 1.2. Let G = GLn = {A Mat n |A1 exists}.
Claim:
G(E) := {A Mat n (E)|A1 exists}
= {A + B|A, B Mat n C, A1 exists}
Indeed, (A + B)(A1 A1 BA1 ) = I.
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 20103

Remark. It is not obvious that GLn is itself a linear algebraic group (what are the polynomial
 for determinant is non-zero?). However, we can think of GLn Mat n+1 as
equations

A
{ | det(A) = 1}.

Example 1.3. G = SLn C.
Exercise 2. det(I + X) = 1 +  trace (X)
As a corollary to the exercise, sln = {X Mat n |trace (X) = 0}.
Example 1.4. G = On C = {AAT = I}. Then
g := {X Mat n C|(I + X)(I + X)T = I}
= {X Mat n C|I + (X + X T ) = I}
= {X Mat n C|X + X T = 0}
the antisymmetric matrices. Now, if X + X T = 0 then 2 trace (X) = 0, and since were
working over C and not in characteristic 2, we conclude trace (X) = 0. Therefore, this is
also the Lie algebra of SOn .
This is not terribly surprising, because topologically On has two connected components
corresponding to determinant +1 and 1 (they are images of each other via a reflection).
Since g = T1 G, it cannot see the det = 1 component, so this is expected.
The above example propmts the question: what exactly is it in the structure of g that we
get from G being a group?
The first thing to note is that g does not get a multiplication. Indeed, (I + A)(I + B) =
I + (A + B), which has nothing to do with multiplication.
The bilinear operation that turns out to generalize nicely is the commutator, (P, Q) 7
P QP 1 Q1 . Taken as a map G G G that sends (I, I) 7 I, this should give a map
T1 G T1 G T1 G by differentiation.
Remark. Generally, differentiation gives a linear map, whereas what we will get is a bilinear
map. This is because we will in fact differentiate each coordinate: i.e. first differentiate the
map fP : G G, Q 7 P QP 1 Q1 with respect to Q to get a map fP : g g (with P G
still) and then differentiate again to get a map g g g.
What is this map?
Let P = I + A, Q = I + B where 2 = 2 = 0 but  6= 0. Then
P QP 1 Q1 := (I + A)(I + B)(I A)(I B)
= I + (AB BA).
Thus, the binary operation on g should be [A, B] = AB BA.
Definition. The bracket of A and B is [A, B] = AB BA.
Exercise 3. Show that (P QP 1 Q1 )1 = QP Q1 P 1 implies [A, B] = [B, A], i.e. the
bracket is skew-symmetric.
Exercise 4. Show that the associativity of multiplication in G implies the Jacobi identity
0 = [[X, Y ], Z] + [[Y, Z], X] + [[Z, X], Y ]
4 IAN GROJNOWSKI

Remark. The meaning of implies in the above exercises is as follows: we want to think of the
bracket as the derivative of the commutator, not as the explicit formula [A, B] = AB BA
(which makes the skew-symmetry obvious, and the Jacobi identity only slightly less so). For
example, we could have started working in a different category.
We will now define our object of study.
Definition. Let K be a field, char K 6= 2, 3. A Lie algebra g is a vector space over K
equipped with a bilinear map (Lie bracket) [, ] : g g g with the following properties:
(1) [X, Y ] = [Y, X] (skew-symmetry);
(2) [[X, Y ], Z] + [[Y, Z], X] + [[Z, X], Y ] = 0 (Jacobi identity).
Lecture 2 Examples of the Lie algebra definition from last time:
Example 1.5. (1) gln = Mat n with [A, B] = AB BA.
(2) son = {A + AT = 0}
(3) sln = {trace (A) = 0} (note that while trace (AB) 6= 0 for A, B sln , we do have
trace (AB) = trace (BA), so [A, B] sln even though AB 6 sln )
1
..
.


1

(4) sp2n = {A gl2n |JAT J 1 +A = 0} where J is the symplectic form

1


..
.
1
...

. . .
(5) b = { . . .. gln } the upper triangular matrices (the name is b for Borel,
. .

but we dont need to worry about that yet)
0 ...

0 . . .
(6) n = { . . . .. gln } the strictly upper triangular matrices (the name is n
.
0
for nilpotent, but we dont need to worry about that yet)
(7) For any vector space V , let [, ] : V V V be the zero map. This is the abelian
Lie algebra.
Exercise 5. (1) Check directly that gln is a Lie algebra.
(2) Check that the other examples above are Lie subalgebras of gln , that is, vector
subspaces closed under [, ].
 

Example 1.6. is not a Lie subalgebra of gl2 .
0
Exercise 6. Find the algebraic groups whose Lie algebras are given above.
Exercise 7. Classify all Lie algebras of dimension 3. (You might want to start with di-
mension 2. Ill do dimension 1 for you on the board: skew symmetry of bracket means that
bracket is zero, so there is only the abelian one.)
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 20105

Definition. A representation of a Lie algebra g on a vector space V is a homomorphism of


Lie algebras : g glV . We say that g acts on V .
Observe that, above, our Lie algebras were defined with a (faithful) representation in tow.
There is always a representation coming from the Lie algebra acting on itself (it is the
derivative of the group acting on itself by conjugation):
Definition. For x g define ad (x) : g g as y 7 [x, y].
Lemma 1.1. ad : g End g is a representation (called the adjoint representation).
Proof. We need to check whether ad ([x, y]) = ad (x)ad (y) ad (y)ad (x), i.e. whether the
equality will hold when we apply it to a third element z. But:
ad ([x, y])(z) = [[x, y], z]
and
ad (x)ad (y)(z) ad (y)ad (x)(z) = [x, [y, z]] [y, [x, z]] = [[y, z], x] [[z, x], y].
These are equal by the Jacobi identity. 
Definition. The center of g is {x g : [x, y] = 0, y g} = ker(ad : g End g).
Thus, if g has no center, then ad is an embedding of g into End g (and conversely, of
course).
Is it true that every finite-dimensional Lie algebra embeds into some glV , i.e. is lin-
ear? (Equivalently, is it true that every finite-dimensional Lie algebra has a faithful finite-
dimensional representation?) We see that if g has no center, then the adjoint
 representation
0
makes this true. On the other hand, if we look inside gl2 , then n = is abelian, so
0 0
maps to zero in gln , despite the fact that we started with n gl2 . That is, we cant always
just take ad .
Theorem 1.2 (Ados theorem; we will not prove it). Any finite-dimensional Lie algebra
over K is a subalgebra of gln , i.e. admits a faithful finite-dimensional representation.
Remark. This is the Lie algebra equivalent of the statement we made last time about algebraic
groups embedding into GL(V ). That theorem was easy because there was a natural
representation to look at (functions on the algebraic group). There isnt such a natural
object for Lie groups, so Ados theorem is actually hard.
     
0 1 1 0 0 0
Example 1.7. g = sl2 with basis e = ,h= ,f = (these are the
0 0 0 1 1 0
standard names). Then [e, f ] = h, [h, e] = 2e, and [h, f ] = 2f .
Exercise 8. Check this!
A representation of sl2 is a triple E, F, H Mat n satisfying the same bracket relations.
Where might we find such a thing?
In this lecture and the next, we will get them as derivatives of representations of SL2 . We
will then rederive them from just the linear algebra.
Definition. If G is an algebraic group, an algebraic representation of G on V is a homomor-
phism of groups : G GL(V ) defined by polynomial equations in the matrix coefficients.
6 IAN GROJNOWSKI

(This can be done so that it is invariant of the embedding into GLn . Think about it.)
To get a representation of the Lie algebra out of this, we again use the dual numbers E. If
we use E = K[]/2 instead of K, we will get a homomorphism of groups G(E) GLV (E).
Moreover, since (I) = I and it commutes with the projection map, we get
(I + A) = I +  (some function of A) = I + d(A)
(this is to be taken as the definition of d(A)).
Exercise 9. d is the derivative of evaluated at I (i.e., d : TI G TI GLV ).
Exercise 10. The fact that : G GLV was a group homomorphism means that d : g
glV is a Lie algebra homomorphism, i.e. V is a representation of g.
Example 1.8. G = SL2 .
Let L(n) be the space of homogeneous polynomials of degree n in two variables x, y, with
basis xn , xn1 y, . . . , xy n1 n
 , y (so dim L(n) = n + 1). Then GL2 acts on L(n) by change of
a b
coordinates: for g = and f L(n) we have
c d
((n g)f )(x, y) = f (ax + cy, bx + dy)
In particular, 0 is the trivial representation of GL2 , 1 is the usual 2-dimensional represen-
tation K 2 , and 2
  a ab b2
a b
2 = 2ac ad + bc 2bd
c d
c2 cd d2
Since GL2 acts on L(n), we see that SL2 acts on L(n).
Remark. The proper way to think of this is as follows. GL2 acts on P1 and on O(n) on P1 ,
hence on the global sections (O(n), P1 ) = S n K 2 . Thats the source of these representations.
We can do this sort of thing for all the other algebraic groups we listed previously, using flag
varieties instead of P1 and possibly higher (co)homologies instead of the global sections (this
is a theorem of Borel, Weil, and Bott). That, however, requires algebraic geometry, and gets
harder to do in infinitely many dimensions.
 
0 1
Differentiating the above representations, e.g. for e = , we see
0 0
(I + e)xi y j = xi (x + y)j = xi y j + jxi+1 y j1
Therefore, d(e)xi y j = jxi+1 y j1 ,
Exercise 11. (1) In the action of the Lie algebra,
e(xi y j ) = jxi+1 y j1
f (xi y j ) = ixi1 y j+1
h(xi y j ) = (i j)xi y j .
(2) Check directly that these formulae give a representation of sl2 .
(3) Check that L(2) is the adjoint representation.

(4) Show that the formulae e = x y , f = y x , h = x x y y give an (infinite-
dimensional!) representation of sl2 on k[x, y].
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 20107

(5) Let char (k) = 0. Show that L(n) is an irreducible representation of sl2 , hence of
SL2 .
Lecture 3 Last time we defined a functor

(algebraic representations of a linear algebraic group G)


(Lie algebra representations of g = Lie (G))
via 7 d.
This functor is not as nice as you would like to believe, except for the Lie algebras we care
about.
Example 1.9. G = C = g = Lie (G) = C with [x, y] = 0.
A representation of g = C on V is a matrix A End (V ) (to : C End (V ) corresponds
A = (1)).
W V is a submodule iff AW W , and is isomorphic to 0 : g End (V 0 ) iff A and
A0 are conjugate as matrices.
Therefore, representations of g correspond to the Jordan normal form of matrices.
As any linear transformation over C has an eigenvector, theres always a 1D subrep of V .
Therefore, V is irreducible iff dim V = 1. Also, V is completely decomposable (a direct sum
of irreducible representations)
iff A is diagonalizable.

0 1
0 1
.. ..

For example, if A =
. . then the associated representation is indecom-

. .

. 1
0
posable, but not irreducible. Invariant subspaces are he1 i, he1 , e2 i, . . . , he1 , e2 , . . . , en i, and
none of them has an invariant orthogonal complement.
Now lets look at the representations of G = C .
The irreducible representations are n : G GL1 = Aut(C) via z 7 (multiplication by) z n
for n Z. Every finite-dimensional representation is a direct sum of these.
Remark. Proving these statements about representations of G takes some theory of algebraic
groups. However, you probably know this for S 1 , which is homotopic to G. Recall that to
get complete reducibility in the finite group setting you took an average over the group; you
can do the same thing here, because S 1 is compact and so has finite volume. In fact, for the
theory that we study, our linear algebraic groups have maximal compact subgroups, which
have the same representations, which for this reason are completely reducible. We will not
prove that representations of the linear algebraic groups are the same as the representations
of their maximal compact subgroups.
Observe that the representations of G and of g are not the same. Indeed, 7 d will
send n 7 n C (prove it!). Therefore, irreducible representations of G embed into ir-
reducible representations of g, but the map is not remotely surjective (not to mention the
decomposability issue).
The above example isnt very surprising, since g is also the Lie algebra Lie (C, +), and we
should not expect representations of g to coincide with those of C . The surprising result is
8 IAN GROJNOWSKI

Theorem 1.3 (Lie). 7 d is an equivalence of categories Rep G Rep g if G is a simply


connected simple algebraic group.
Remark. A simple algebraic group is not simple in the usual sense: e.g. SLn has a center,
which is obviously a normal subgroup. However, if G is simply connected and simple in the
above sense, then the center of G is finite (e.g. ZSLn = n , the nth roots of unity), and the
normal subgroups are subgroups of the center.
Exercise 12. If G is an algebraic group, and Z is a finite central subgroup of G, then
Lie (G/Z) = Lie (G). (Morally, Z identifies points far away, and therefore does not affect
the tangent space at I.)
We have now seen that the map (algebraic groups) (Lie algebras) is not injective (see
above exercise, or more shockingly G = C, C ). In fact,
Exercise 13. Let Gn = C n C where C acts on C via t = tn (so (t, )(t0 , 0 ) =
(tt0 , t0n + 0 )). Show that Gn
= Gm iff n = m. Show that Lie (Gn )
= Cx + Cy with
[x, y] = y independently of n.
The map also isnt surjective (its image are the algebraic Lie algebras). This is easily
seen in characteristic p; for example, slp /center cannot be the image of an algebraic group.
In general, algebraic groups have a Jordan decomposition every element can be written as
(semisimple) (nilpotent), and therefore the algebraic Lie algebras should have a Jordan
decomposition as well. In Bourbaki, you can find an example of a 5-dimensional Lie algebra,
for which the semisimple and the nilpotent elements lie only in the ambient glV and not in
the algebra as well.
2. Representations of sl2
From now on, all algebras and representations are over C (an algebraically closed field of
characteristic 0). Periodically well mention
 which
 results
 dont need
 this.

0 1 1 0 0 0
Recall the basis of sl2 was e = ,h= , and f = with commutation
0 0 0 1 1 0
relations [e, f ] = h, [h, e] = 2e, [h, f ] = 2f .
For the next while, we will be proving the following
Theorem 2.1.
(1) For all n 0 there exists a unique irreducible representation of g = sl2 of dimension
n + 1. (Recall L(n) from the previuos lecture; these are the only ones.)
(2) Every finite-dimensional representation of sl2 is a direct sum of irreducible represen-
tations.
Let V be a representation of sl2 .
Definition. The -weight space for V is V = {v V : hv = v}, the eigenvectors of h with
eigenvalue .
Example 2.1. In L(n), we had L(n) = Cxi y j for = i j.
Let v V , and consider ev. We have
hev = (he eh + eh)v = [h, e]v + e(hv) = 2ev + ev = ( + 2)ev.
That is, if v V then ev V+2 and similarly f v V2 . (These are clearly iff.)
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 20109

We will think of this pictorially as a chain


e
. . .
V2
V
V+2
. . .
f

E.g., in L(n) we had xi y j sitting in the place of the V s, and the chain went from = n to
= n:
Vn Vn2 Vn4 . . . V(n4) V(n2) Vn
n n1 n2 2
hx i
hx yi
hx y i
. . .
hx2 y n2 i
hxy n1 i
hy n i
(Note that the string for L(n) has n + 1 elements.)
Definition. If v V ker e, i.e. hv = v and ev = 0, then we say that v is a highest-weight
vector of weight .
Lemma 2.2. Let V be a representation of sl2 and v V a highest-weight vector of weight
. Then W = hv, f v, f 2 v, . . . i is an sl2 -invariant subspace, i.e. a subrepresentation.
Proof. We need to show hW W , f W W , and eW W .
Note that f W W by construction.
Further, since v V , we saw above that f k v V2k , and so hW W as well.
Finally,
ev = 0 W
ef v = ([e, f ] + f e)v = hv + f (0) = v W
ef 2 v = ([e, f ] + f e)f v = ( 2)f v + f (v) = (2 2)f v W
ef 3 v = ([e, f ] + f e)f 2 v = ( 4)f 2 v + f (2 2)f v = (3 6)f 2 v W
and so on. 
Exercise 14. ef n v = n( n + 1)f n1 v for v a highest-weight vector of weight .
Lemma 2.3. Let V be a finite-dimensional representation of sl2 and v V a highest-weight
vector of weight . Then Z0 .
Remark. Somehow, the integrality condition is the Lie algebra remembering something about
the topology of the group. (Saying what this something is more precisely is difficult.)
Proof. Note that f k v all lie in different eigenspaces of h, so if they are all nonzero then they
are linearly independent. Since dim V < , we conclude that there exists a k such that
f k v 6= 0 and f k+r v = 0 for all r 1. Then by the above exercise,
0 = ef k+1 v = (k + 1)( k)f k v
from which = k, a nonnegative integer. 
Proposition 2.4. If V is a finite-dimensional representation of sl2 , it has a highest-weight
vector.
Proof. Were over C, so we can pick some eigenvector of h. Now apply e to it repeatedly:
v, ev, e2 v, . . . belong to different eigenspaces of h, so if they are nonzero, they are linearly
independent. Therefore, there must be some k such that ek v 6= 0 but ek+r v = 0 for all r 1.
Then ek v is a highest-weight vector of weight ( + 2k). 
10 IAN GROJNOWSKI

Corollary 2.5. If V is an irreducible representation of sl2 then dim V = n + 1, and V has


a basis v0 , v1 , . . . , vn on which sl2 acts as hvi = (n 2i)vi ; f vi = vi+1 (with f vn = 0); and
evi = i(n i + 1)vi1 . In particular, there exists a unique (n + 1)-dimensional irreducible
representation, which must be isomorphic to L(n).
Exercise 15. Work out how this basis is related to the basis we previously had for L(n).
Lecture 4
We will now show that every finite-dimensional representation of sl2 C is a direct sum of ir-
reducible representations, or (equivalently) the category of finite-dimensional representations
of sl2 C is semisimple, or (equivalently) every representation of sl2 C is completely reducible.
Morally, weve shown that every such representation consists of strings descending from
the highest weight, and well now show that these strings dont interact. Well first show
that they dont interact when they have different highest weights (easier) and then that they
also dont interact when they have the same highest weight (harder).
We will also show that h acts diagonalisably on every finite-dimensional representation,
while e and f are nilpotent. In fact,
Remark.    
a t
Span h = { } = Lie { } Lie (SL2 )
a t1
 
t
where { } is the maximal torus inside SL2 . Like for C , the representations here
t1
should correspond to integers. In some hazy way, the fact that we are picking out the
representations of a circle and not any other 1-dimensional Lie algebra is a sign of the Lie
algebra remembering something about the group. (Saying what it is remembering and how
is a hard question.)
Example 2.2. Recall that C[x, y] is a representation of sl2 via the action by differentail
operators, and is a direct sum of the representations L(n).

Exercise 16. Show that the formulae e = x y , f = y x , h = x x y y give a representation

of sl2 on x y C[x/y, y/x] for any , C, and describe the submodule structure.
Definition. Let V be a finite-dimensional representation of sl2 . Define = ef + f e + 12 h2
End (V ) to be the Casimir of sl2 .
Lemma 2.6. is central; that is, e = e, f = f , and h = h.
Exercise 17. Prove it. For example,
1 1 1
e = e(ef + f e + h2 ) = e(ef f e) + 2(ef e) + (eh he)h + heh
2 2 2
1 1 1
= 2ef e + heh = (f e ef )e + 2ef e + h(he eh) + heh = e
2 2 2
Observe that we can write = (ef f e) + 2f e + 12 h2 = ( 21 h2 + h) + 2f e. This will be
useful later.
Corollary 2.7. If V is an irreducible representation of sl2 , then acts on V by a scalar.
Proof. Schurs lemma. 
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
11

Lemma 2.8. Let L(n) be the irreducible representation of sl2 with highest weight n. Then
acts on L(n) by 12 n2 + n.
Proof. It suffices to check this on the highest-weight vector v (so hv = nv and ev = 0). Then
1 1
v = ( h2 + h + 2f e)v = ( n2 + n)v.
2 2
i
Since commutes with f and L(n) is spanned by f v, we could conclude directly (without
using Schurs lemma) that acts by this scalar. 
Observe that if L(n) and L(m) are irreducible representations with different highest
weights, then acts by a different scalar (since 21 n(n + 2) is increasing in n).
Definition. Let V be a finite-dimensional representation of sl2 . Set
V = {v V : ( )dim V v = 0}.
This is the generalised eigenspace of with eigenvalue .
Claim. Each V is a subrepresentation for sl2 .
Proof. Let x sl2 and v V . Because is central, we have
( )dim V (xv) = x( )dim V v = 0,
so xv V as well. 
Now, if V 6= 0, then = 12 n2 + n, and V is somehow glued together from copies of
L(n). More precisely:
Definition. Let W be a finite-dimensional g-module. A composition series for W is a
sequence of submodules
0 = W0 W1 . . . Wr = W
such that Wi /Wi1 is an irreducible module.

0 1
0 1
... ...

Example 2.3. (1) g = C, W = Cr , where 1 C acts as . Then


. . . 1

0
there exists a unique composition series for W , namely,
0 he1 i he1 , e2 i . . . he1 , e2 , . . . , er i.
(2) On the other hand, if g = C and W = C with the abelian action (i.e. 1 C acts by
0), then any chain 0 W1 . . . Wr with dim Wi = i will be a composition series.
Lemma 2.9. Composition series always exist.
Proof. We induct on the dimension of W . Take an irreducible submodule W1 W ; then
W/W1 has smaller dimension, so has a composition series. Taking its preimage in W and
sticking W1 in front will do the trick. 
Remark. The factors Wi /Wi1 are defined up to order.
12 IAN GROJNOWSKI

So, the precise statement is that V has composition series with quotients L(n) for some
fixed n. Indeed, take an irreducible submodule L(n) V ; note that acts on L(n) by
1 2
2
n + n, so we must have = 12 n2 + n, or in other words n is uniquely determined by ; and
moreover, acts on V /L(n), and its only generalised eigenvalue there is still , so we can
repeat the argument.
Claim. h acts on V with generalised eigenvalues in the set {n, n 2, . . . , n}.
Proof. This is a general fact about composition series. Let h act on W and let W 0 W be
invariant under this action, i.e. hW 0 W 0 . Then
{generalised eigenvalues of h on W } = {generalised eigenvalues of h on W 0 }
{generalised eigenvalues of h on W/W 0 }.
(You can see this by looking at the upper triangular matrix decomposition of h.) Now since
V is composed of L(n), the generalised eigenvalues of h must lie in that set. 
Note also that on the kernel of e : V V the only generalised eigenvalue of h is n; that

is, (h n)dim V x = 0 for x V ker e. (This follows by applying the above observation to
the composition series intersected with ker e.)
Lecture 5
Lemma 2.10. (1) hf k = f k (h 2k)
(2) ef n+1 = f n+1 e + (n + 1)f n (h n).
Proof. We saw (1) already.
Exercise 18. Prove (2) (by induction, e.g. for n = 0 the claim is ef = f e + h.)

Proposition 2.11. h acts diagonalisably on ker e : V V ; in fact, it acts as multiplica-
tion by n. That is,
ker e : V V = (V )n = {x V : hx = nx}.
Recall we know that ker e is (in) the generalised eigenspace of h with eigenvalue n (and
after the first line of the proof, we will have equality rather than just containment). We are
showing that we can drop the word generalised.
Proof. If hx = nx then ex (V )n+2 = 0 (as generalised eigenvalues of h on V are
n, n 2, . . . , n) so x ker e.

Conversely, let x ker e; we know (h n)dim V x = 0. By the lemma above,

(h n + 2k)dim V f k x = f k (h n)dim V x = 0.
That is, f k x belongs to the generalised eigenspace of h with eigenvalue h n + 2k.
On the other hand, for any y ker e, y 6= 0 = f n y 6= 0.
Remark. This should be an obvious property of upper-triangular matrices, but well work it
out in full detail.
Take 0 = W0 W1 . . . Wr = V the composition series for V with quotients L(n).
There is an i such that y Wi , y 6 Wi1 . Let y = y + Wi1 Wi /Wi1 = L(n). Then y is
n n
the highest-weight vector in L(n), and so f y 6= 0 in L(n). Therefore, f y 6= 0.
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
13

Now, f n+1 x belongs to the generalised eigenspace of h with eigenvalue n 2, so is equal


to 0 (as generalised eigenvalues of h on V are n, n 2, . . . , n). Therefore,
0 = ef n+1 x = (n + 1)f n (h n)x + f n+1 ex = (n + 1)f n (h n)x.
Now, (h n)x ker e (its still in the generalised eigenspace of h with eigenvalue n), so
if (h n)x 6= 0 then f n (h n)x 6= 0, which in characteristic 0 poses certain problems.
Therefore, hx = nx. 
We can now finish the proof of complete reducibility. Indeed, choose a basis w1 , w2 , . . . , wk
of ker e : V V , and consider the string generated by each wi ; that is, consider
hwi , f wi , f 2 wi , . . . , f n wi i.
Exercise 19. Convince yourself that these strings constitute a direct sum decomposition
of V ; that is, these are subrepresentations, nonintersecting, linearly independent, and span
everything.
Therefore, h acts semisimply and diagonalisably on V , and therefore on all of V .
Exercise 20. Show that this is false in characteristic p. That is, show that irreducible
representations of sl2 (Fp ) are parametrised by n N, and find a representation of sl2 (Fp )
that does not decompose into a direct sum of irreducibles.
3. Consequences
Let V , W be representations of a Lie algebra g.
Claim. g End (V W ) = End (V ) End (W ) via x 7 x 1 + 1 x is a homomorphism
of Lie algebras.
Exercise 21. Prove it. (This comes from differentiating the group homomorphism G
G G via g 7 (g, g).)
Corollary 3.1. If V , W are representations of g, then so is V W .
Remark. If A is an algebra, V and W representations of A, then V W is naturally a
representation of AA. To make it a representation of A, we need an algebra homomorphism
A A A. (An object with such a structure plus some other properties is called a
Hopf algebra.) In some essential way, g is a Hopf algebra. (Or, rather, a deformation of the
universal enveloping algebra of g is a Hopf algebra.)
Now, take g = sl2 . What is L(n) L(m)? That is, we know L(n) L(m) = k ak L(k);
what are the ak ?
One method of doing this would be to compute all the highest-weight vectors.
Exercise 22. Do this for L(1) L(n). Then do it for L(2) L(n).
As a start on the exercise, let va denote the highest-weight vector in L(a). Then vn vm
is a highest-weight vector in L(n) L(m). Indeed,
h(vn vm ) = (hvn ) vm + vn (hvm ) = (n + m)vn vm
e(vn vm ) = (evn ) vm + vn (evm ) = 0 + 0 = 0
Therefore,
L(n) L(m) = L(n + m) + other stuff
14 IAN GROJNOWSKI

However, counting dimensions, we get


(n + 1)(m + 1) = (n + m + 1) + dim(other stuff),
which shows that there is quite a lot of this other stuff in there.
One can write down explicit formulae for all the highest-weight vectors; these are compli-
cated but mildly interesting. However, we dont have to do it to determine the summands
of L(n) L(m).
Let V be a finite-dimensional representation of sl2 .
Definition. The character of V is ch V = nZ dim Vn z n N[z, z 1 ].
P

Properties 3.2. (1) ch V |z=1 = dim V .


This is equivalent to the claim that h is diagonalisable with eigenvalues in Z, so
V = nZ Vn .
n+1 +z (n+1)
(2) ch L(n) = z n + z n2 + . . . + z n = z z+z 1 . Sometimes this is written as [n + 1]z .
(3) ch V = ch W iff V = W.
Note that ch L(0) = 1, ch L(1) = z +z 1 , ch L(2) = z 2 +1+z 2 , . . . form a Z-basis for
the space of symmetric Laurent polynomials with integer coefficients. On the other
hand, by complete reducibility,
V = n0 an L(n), an 0
W = n0 bn L(n), bn 0

V = W an = bn for all n
P
But since ch L(n) form a basis, ch V = an ch L(n) determines an .
Remark. Note that for the representations, we only need nonnegative coefficients
an 0. On the other hand, if instead of looking at modules we were looking at chain
complexes, the idea of Z-linear combinations would become meaningful.
(4) ch (V W ) = ch V ch W

P
Exercise 23 (Essential!). Vn Vm (V W )n+m . Therefore, (V W )p = n+m=p Vn
Vm . This is exactly how we multiply polynomials.
Example 3.1. We can now compute L(1) L(3):

ch L(1) ch L(3) = (z + z 1 )(z 3 + z + z 1 + z 3 ) =


(z 4 + z 2 + 1 + z 2 + z 4 ) + (z 2 + 1 + z 2 ) = ch L(4) + ch L(2),
so L(1) L(3) = L(4) L(2).
(5) The Clebsch-Gordan rule:
n+m
M
L(n) L(m) = L(k).
k=|nm|
knm mod 2

There is a picture that goes with this rule (so you dont actually need to remember
it):
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
15

m + 1 dots

n + 1 dots

Figure 1. Crystal for sl2 .

The individual strings are the L(a) in the decomposition.


One of the points of this course is that this sort of picture-drawing (called crystals) lets
us decompose tensor products of representations of all other semisimple Lie algebras. (This
fact was discovered about 20 years ago by Kashiwara and Lusztig.)
What we will actually show for the other semisimple Lie algebras:
The category of finite-dimensional representations is completely reducible (i.e. all
finite-dimensional representations are semisimple)
We will parameterise the irreducible representations
We will compute the character of the irreducible representations
We will show how to take tensor products of irreducible representations by drawing
pictures (crystals)

4. Structure and classification of simple Lie algebras


Lecture 6
We begin with some linear algebra preliminaries.
Definition. A Lie algebra g is called simple if the only ideals in g are 0 and g, and dim g > 0
(i.e., g is not abelian). A Lie algebra g is called semisimple if it is a direct sum of simple Lie
algebras.
Remark. Recall that C behaved quite differently from sl2 , so it is sensible to exclude the
abelian Lie algebras.
Definition. The derived algebra of g is [g, g] = Span [x, y] : x, y g.
Exercise 24. (1) [g, g] is an ideal;
(2) g/[g, g] is abelian.
Definition. The central series of g is given by g0 = g, gn = [gn1 , g], i.e.
g [g, g] [[g, g], g] . . .
The derived series of g is given by g(0) = g, g(n) = [g(n1) , g(n1) ], i.e.
g [g, g] [[g, g], [g, g]] . . .
Remark. Clearly, g(n) gn .
Definition. g is nilpotent if gn = 0 for some n > 0, i.e. the central series terminates.
g is solvable if g(n) = 0 for some n > 0, i.e. the derived series terminates.
16 IAN GROJNOWSKI

By the above remark, a nilpotent Lie algebra is necessarily solvable.


Remark. The term solvable comes from Galois theory (if the Galois group is solvable, the
associated polynomial is solvable by radicals). We will see shortly where the term nilpotent
comes from.
0 ...

0 . . .
Example 4.1. (1) n = { . . . .. gln } the strictly upper triangular matrices
.
0
are a
nilpotent Lie algebra.
...

. . .
(2) b = { . . . .. gln } the upper triangular matrices are a solvable Lie algebra.
.

Exercise 25 (Essential). (a) Compute the central and derived series to check that
these algebras are nilpotent and solvable respectively.
(b) Compute the center of these algebras.
(3) Let W be a symplectic vector space with inner product h, i (recall that a symplectic
vector space is equipped with a nondegenerate bilinear antisymmetric form, and there
is essentially one example: given a finite-dimensional vector space L let W = L + L
with inner product hL, Li = hL , L i = 0 and hv, v i = hv , vi = v (v)). Define the
Heisenberg Lie algebra as
HW = W Cc [w, w0 ] = hw, w0 ic, [c, w] = 0.
Exercise 26. Show this is a Lie algebra, and that it is nilpotent.
This is the most important nilpotent Lie algebra occurring in nature. For example,
let L = C, then HW = Cp + Cq + Cc with [p, q] = c, [p, c] = [q, c] = 0 (as we know
from classifying the 3-dimensional Lie algebras!) Show that this has a representation
on C[x] via q 7 multiplication by x, p 7 /x, c 7 1.
(For a general vector space L, the basis vectors v1 , . . . , vn map to multiplication
by x1 , . . . , xn , and their duals map to /x1 , . . . , /xn .)
Properties 4.1. (1) Subalgebras and quotients of solvable Lie algebras are solvable.
Subalgebras and quotients of nilpotent Lie algebras are nilpotent.
(2) Let g be a Lie algebra, and let h be an ideal of g. Then g is solvable if and only if h
and g/h are both solvable. (That is, solvable Lie algebras are built out of abelian Lie
algebras, i.e. there is a refinement of the derived series such that the subquotients
are 1-dimensional and therefore abelian.)
(3) g is nilpotent iff the center of g is nonzero, and g/(center of g) is nilpotent.
Indeed, if g is nilpotent, then the central series is
g ) g1 ) . . . ) gn1 ) gn = 0,
and since gn = [gn1 , g] we must have gn1 contained in the center of g.
(4) g is nilpotent iff ad g gl(g) is nilpotent, as we had an exact sequence
0 center of g , g  ad g 0.
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
17

Exercise 27. Prove the properties. (The proofs should all be easy and tedious.)
We now state a nontrivial result, which we will not prove.
Theorem 4.2 (Lies Theorem). Let g gl(V ) be a solvable Lie algebra, and let the base field
k be algebraically closed and have characteristic zero. Then there exists a basis v1 , . . . , vn of
V with respect to which g b(V ), i.e. the matrices of all elements of g are upper triangular.
Equivalently, there exists a linear function : g k and an element v V , such that
xv = (x)v for all x g, i.e. v is a common eigenvector for all of g, i.e. a one-dimensional
subrepresentation.
In particular, the only irreducible finite-dimensional representations of g are one-dimensional.
Exercise 28. Show that these are actually equivalent. (One direction should be obvious; in
the other, quotient by v1 and repeat.)
Exercise 29. Show that we need k to be algebraically closed.
Show that we need k to have characteristic 0. [Hint: let g be the 3-dimensional Heisen-
berg, and show that k[x]/xp is a finite-dimensional irreducible representation of dimension
markedly bigger than 1.]
Corollary 4.3. In characteristic 0, if g is a solvable finite-dimensional Lie algebra, then
[g, g] is nilpotent.
Exercise 30. Find a counterexample in characteristic p.
Proof. Apply Lies theorem to the adjoint representation g End (g). With respect to
some basis, ad g b(g). Since [b, b] n (note the diagonal entries cancel out when we
take the bracket), we see that [ad g, ad g] is nilpotent. Since ad [g, g] = [ad g, ad g] (its a
representation!), we see that ad [g, g] is nilpotent, and therefore so is [g, g]. 
Theorem 4.4 (Engels Theorem). Let the base field k be arbitrary. g is nilpotent iff ad g
consists of nilpotent endomorphisms of g (i.e., for all x g, ad x is nilpotent).
Equivalently, if V is a finite-dimensional representation of g such that all elements of g
act on V by nilpotent endomorphisms, then there exists v V such that x(v) = 0 for all
x g, i.e. V has the trivial representation as a subrep.
Equivalently, we can write x as a strictly upper triangular matrix.
Exercise 31. Show that these are actually equivalent.
Lecture 7
Definition. A symmetric bilinear form (, ) : g g k is invariant if ([x, y], z) = (x, [y, z]).
Exercise 32. If a g is an ideal and (, ) is an invariant bilinear form, then a is an ideal.
Definition. If V is a finite-dimensional representation of g, i.e. if : g gl(V ) is a
homomorphism, define the trace form to be (x, y)V = trace ((x)(y) : V V ).
Exercise 33. Check that the trace form is symmetric, bilinear, and invariant. (They should
all be obvious.)
Example 4.2. (, )ad is the Killing form (named after a person, not after an action). This
is the trace form attached to the adjoint representation. That is, (x, y)ad = trace (ad x, ad y :
g g).
18 IAN GROJNOWSKI

Theorem 4.5 (Cartans criteria). Let g gl(V ) and let char k = 0. Then g is solvable if
and only if for every x g and y [g, g] the trace form (x, y)V = 0. (That is, [g, g] g .)
Exercise 34. Lies theorem gives us one direction. Indeed, if g is solvable then we can take
a basis in which x is an upper triangular matrix, y is strictly upper triangular, and then
xy and yx both have zeros on the diagonal (and thus trace 0). That is, if g is solvable and
nonabelian, all trace forms are degenerate. (The exercise is to convince yourself that this is
true.)
Corollary 4.6. g is solvable iff (g, [g, g])ad = 0.
Proof. If g is solvable, Lies theorem gives the trace to be zero. Conversely, Cartans criteria
tell us that ad g = g/(center of g) is solvable, and therefore so is g. 
Exercise 35. Not every invariant form is a trace form.
= Chp, q, c, di where [c, H]
Let H = 0, [p, q] = c, [d, p] = p, [d, q] = q.
(1) Construct a nondegenerate invariant form on H.

(2) Show H is solvable.

(3) Extend the representation of hc, p, qi = H on k[x] to a representation of H.
Definition. The radical of g, R(g), is the maximal solvable ideal in g.
Exercise 36. (1) Show R(g) is the sum of all solvable ideals in g (i.e., show that the
sum of solvable ideals is solvable).
(2) Show R(g/R(g)) = 0.
Theorem 4.7. In characteristic 0, the following are equivalent:
(1) g is semisimple
(2) R(g) = 0
(3) The Killing form (, )ad is nondegenerate (the Killing criterion).
Moreover, if g is semisimple, then every derivation D : g g is inner, but not conversely
(where we define the terms derivation and inner below).
Definition. A derivation D : g g is a linear map satisfying D[x, y] = [Dx, y] + [x, Dy].
Example 4.3. ad (x) is a derivation for any x. Derivations of the form ad (x) for some x g
are called inner.
Remark. If g is any Lie algebra, we have the exact sequence
0 R(g) , g  g/R(g) 0
where R(g) is a solvable ideals, and g/R(g) is semisimple. That is, any Lie algebra has
a maximal semisimple quotient, and the kernel is a solvable ideal. This shows you how
much nicer the theory of Lie algebras is than the corresponding theory for finite groups.
In particular, the corresponding statement for finite groups is essentially equivalent to the
classification of the finite simple groups.
A theorem that we will neither prove nor use, but which is pretty:
Theorem 4.8 (Levis theorem). In characterstic 0, a stronger result is true. Namely, the
exact sequence splits, i.e. there exists a subalgebra g such that = g/R(g). (This
subalgebra is not canonical; in particular, it is not an ideal of g.) That is, g can be written
as g = n R(g).
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
19

Exercise 37. Show that this fails in characteristic p. Let g = slp (Fp ); show R(g) = Fp I
(note that the identity matrix in dimension p has trace 0), but that there is no complement
to R(g) that is an algebra.
Proof of Theorem 4.7. First, notice that R(g) = 0 if and only if g has no nonzero abelian
ideals. (In one direction, an abelian ideal is solvable; in the other, notice that the last term
of the derived series is abelian.)
From now on, (, ) refers to the Killing form (, )ad .
(3) = (2): we show that if a is an abelian ideal of g, then a g . (Since the Killing
form is nondegenerate, this means a = 0, and therefore R(g) = 0.)
Write g = a + h where h is avector  space complement to a (not necessarily
 anideal). For
0
a a, ad a has block matrix . For x g, ad x has block matrix . (This is
0 0 0
because a is an ideal, so [a, x] a for all x g, and moreover ad a acts by 0 on a.) Therefore,
 
0
trace (ad a ad x) = trace ( ) = 0,
0 0
i.e. (a, g)ad = 0. Since the Killing form is nondegenerate, a = 0.
(2) = (3): Let r g be an ideal (e.g. r = g ), and suppose r 6= 0. Then r gl(g) via
ad , and (x, y)ad = 0 for all x, y r. By Cartans criteria, ad r = r/(center of r) is solvable,
so r is solvable, contradicting R(g) = 0.
Exercise 38. Show that R(g) g [R(g), R(g)].
(2),(3) = (1): Let (, )ad be nondegenerate, and let a g be a minimal nonzero ideal.
We claim that (, )ad |a is either 0 or nondegenerate.
Indeed, the kernel of (, )ad |a = {x a : (x, a)ad = 0} = a a is an ideal!
Cartans criteria imply that a is solvable if (, )ad |a is 0. Since R(g) = 0, we conclude that
(, )ad |a is nondegenerate.
Therefore, g = a a , where a is a minimal ideal, i.e. simple. (Note that R(g) = 0 so a
cannot be abelian.)
But now ideals of a are ideals of g, so we can apply the same argument to a since
R(g) = 0 = R(a ) = 0. Therefore, g = ai where the ai are simple Lie algebras.
Exercise 39. Show that if g is semisimple, then g is the direct sum of minimal ideals in a
unique manner. That is, if g = ai where ai are minimal, and b is a minimal ideal of g,
then in fact b = ai for some i. (Consider b aj .) Conclude that (1) = (2).
Lecture 8 Finally, we show that all derivations on semisimple Lie algebras are inner.
Let g be a semisimple Lie algebra, and let D : g g be a derivation. Cosnider the linear
functional l : g k via x 7 trace (D ad x). Since g is semisimple, the Killing form is
a nondegenerate inner product giving an isomorphism with the dual, so y g such that
l(x) = (y, x)ad . Our task is to show that D = ad y, i.e. that E := D ad y = 0.
This is equivalent to showing that Ex = 0 for all x g, or equivalently that (Ex, z)ad = 0
for all z g. Now,
ad (Ex) = E ad x ad x E = [E, ad x] : g g
since ad (Ex)(z) = [Ex, z] = E[x, z] [x, Ez]. Therefore,
(Ex, z)ad = trace g (ad (Ex) ad z) = trace g ([E, ad x] ad z) = trace g (E, [ad x, ad z]).
20 IAN GROJNOWSKI

But by definition of E, trace g (E, ad a) = trace g (D, ad a) (y, a)ad = 0. 


Exercise 40. (1) A nilpotent Lie algebra always has non-inner derivations.
(2) g = ha, bi with [a, b] = b only has inner derivations. Thus, this condition doesnt
characterise semisimplicity.
Exercise 41. (1) Let g be a simple Lie algebra, (, )1 and (, )2 two non-degenerate invari-
ant bilinear forms. Show such that (, )1 = (, )2 .
(2) Let g = sln C. We will shortly show that this is a simple Lie algebra. We have two
nondegenerate bilinear forms: the Killing form (, )ad and (A, B) = trace (AB) in the
standard representation. Compute .

5. Structure theory
Definition. A torus t g is an abelian subalgebra s.t. t t the adjoint ad (t) : g g is
a diagonalisable linear map. A maximal torus is a torus that is not contained in any bigger
torus.
Example 5.1. Let T = (S 1 )r G where G is a compact Lie group (or T = (C )r G
where G is a reductive algebraic group). Then t = Lie (T ) g = Lie (G) is a torus, maximal
if T is. (This comes from knowing that representations of t are diagonalisable by an averaging
argument, bur we dont really know this.)
Exercise 42. (1) g = sln or g = gln , t the diagonal matrices (of trace 0 if inside sln ) is
a maximal
 torus.
0
(2) is not a torus. (We should know this is not diagonalisable!)
0 0
For a vector space V , let t1 , . . . , tr : V V be pairwise commuting diagonalisable linear
maps; let = (1 , . . . , r ) Cn . Set V = {v V : ti v = i v} the simultaneous eigenspace.
L
Lemma 5.1. V = (C )r V .
Proof. Induct on r. For r = 1 follows from the requirement that t1 be diagonalisable.
For r > 1, look at t1 , . . . , tr1 and decompose
V = V(1 ,...,r1 ) .
Since tr commutes with t1 , . . . , tr1 , it preserves the decomposition. Now decompose each
V(1 ,...,r1 ) into eigenspaces for tr . 
Let t be the r-dimensional abelian Lie algebra with basis (t1 , . . . , tr ). The lemma asserts
that V is a semisimple (i.e. completely reducible) representation of t. V = V is the
decomposition into isotypic representations. Namely, letting C be the 1-dimensional repre-
sentation wherein ti (w) = i w, we have 6= = C 6= C , and V is the direct sum of
dim V copies of C .
Exercise 43. Show that every irreducible representation of t is one-dimensional.
Another way of saying this is as follows. is a linear map t C, i.e. an element of the dual
space t (sending (ti ) = i ). Therefore, one-dimensional representations of t (which are
all the irreducible representations of t) correspond to elements of t = Homvector spaces (t, C).
The decomposition
V = t V , V = {v V : t(v) = (t)v}
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
21

is called the weight space decomposition of V .


Now, let g be a Lie algebra and t a maximal torus. The weight space decomposition of g
is
M
g = g0 g
t
6=0

where g0 = {x g|[t, x] = 0} and g = {x G : [t, x] = (t)x}.


Definition. R = { t |g 6= 0, 6= 0} is the set of roots of g.
We will now compute the root space decomposition for sln .
Example 5.2. g = sln , t = diagonal matrices in sln .
t1 0
Let t = .. and Eij the matrix with 1 in the (i, j)th position and 0 elsewhere.
.
0 tn
Then [t, Eij ] = (ti tj )Eij .
Let i t , i (t) = ti . Then 1 , . . . , n span t , but 1 + . . . + n = 0.
Remark. t is a subalgebra of Cn , so t should be a quotient of (Cn ) , and here is the relation
by which it is a quotient.
Therefore, we can rewrite the above as [t, Eij ] = (i j )(t) Eij .
Therefore, the roots are R = {i j , i 6= j} and g0 = t. (Note that weve shown t is a
maximal torus in the process.) The root space gi j = CEij is one-dimensional.
That is, the root space decomposition of sln is:
M
sln = t gi j
i j R

Exercise 44. This is (a) an essential exercise, and (b) guaranteed to be on the exam. The
first should get you to do it, the second is irrelevant.
Compute the root space decomposition for sln (as we just did), so2n , so2n+1 , sp2n where t
is the diagonal matrices in g, and we take the conjugate son defined by

0 1
...
son = {A gln |JA + AT J = 0}, J = ...


1 0
(this is conjugate to the usual son since weve defined a nondegenerate inner product; in this
version, the diagonal matrices are a maximal torus). The symplectic matrices are given by
the same embedding as before.
In particular, show that t is a maximal torus and the root spaces are one-dimensional.
Lecture 9 The Lie algebras we were working with last time are called the classical Lie
algebras (they are automorphisms of vector spaces). The rest of the course will be concerned
with explaining how their roots control their representations.
Proposition 5.2. sln C is simple.
22 IAN GROJNOWSKI

Proof. Recall M
sln C = t g
R
where R = {i j |i 6= j} and gi j = CEij . P
Suppose r sln is a nonzero ideal. Choose r r, r 6= 0, s.t. when we write r = t+ R e
with e g , the number of nonzero terms is minimal.
First, suppose t 6= 0. Choose P t0 t such that (t0 ) 6= 0 for all R (i.e., t0 has distinct
eigenvalues). Consider [t0 , r] = r (t0 )e . Note that this lies in r, and if it is nonzero, it
has fewer nonzero terms than r, a contradiction. Thus, [t0 , r] = 0, i.e. r = t t.
Since t 6= 0, there exists R with (t) 6= 0 (i.e. it cant have the same eigenvalue, since
its trace 0). Then
[t, e ] = (t)e = e r
Letting = i j , we have Eij r. Now,
[Eij , Ejk ] = Eik for i 6= k [Esi , Eij ] = Esj for s 6= j
and therefore Eab r for a 6= b. Finally,
[Ei,i+1 , Ei+1,i ] = Eii Ei+1,i+1 r,
so in fact the entire basis for sln lies in r and r = sln .
Remark. This is actually a combinatorial statement about root spaces; well see more about
this later.
That leaves us with the case t = 0. If r = cEij has only one nonzero term, we are done
exactly as above; therefore,
X
r = e + e + e , 6=
R{,}

Choose t0 t such that (t0 ) 6= (t0 ); then some linear combination of [t0 , r] and r will have
fewer nonzero terms than r, a contradiction. 
Proposition 5.3. Let g be a semisimple Lie algebra. Then maximal tori exist (i.e. are
nonzero). Moreover, g0 = {x g|[t, x] = 0} = t. (Recall that t is not defined to be
the maximal abelian subalgebra, but as the maximal abelian subalgebra whose adjoint acts
semisimply.)
We wont prove this; the proof involves introducing the Cartan subalgebra and showing
that its the same as a maximal torus. As a result, we will sometimes be calling t the Cartan
subalgebra.
This means that the root space decomposition of a semisimple Lie algebra is
M
g=t g
R

as we have, of course, already seen for the classical Lie algebras.


Theorem 5.4 (Structure theorem for semisimple Lie algebras, part I).LLet g be a semisimple
Lie algebra over C, and let t g be a maximal torus. Write g = t R g . Then
(1) CR = t , i.e. the roots span t .
(2) dim g = 1.
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
23

(3) If , R and + R then [g , g ] = g+ . If + 6 R and 6= then


[g , g ] = 0.
(4) [g , g ] t is one-dimensional, and g [g , g ] g is a Lie subalgebra of g
isomorphic to sl2 . (In particular, R = R.
Exercise 45. Check this for the classical Lie algebras.
Proof. (1) Suppose not, then there exists t t such that (t) = 0 for all R. But
then [t, g ] = 0; as [t, t] = 0 by definition, we see that t is central in g. However, g is
semisimple, so it has no abelian ideals, in particular no center.
We will now show a sequence of statements.
(a) [g , g ] g+ if , t .
Indeed, for x g and y g we have
[t, [x, y]] = [[t, x], y] + [x, [t, y]] = ((t) + (t))[x, y].
This shows that [g , g ] g+ if + R, is 0 if + 6 R, and [g , g ] t.
(b) (g , g )ad = 0 if 6= ; (, )ad |g g is nondegenerate.

Proof. Let x g and y g . Recall (x, y)ad = trace g (ad x ad y). To show
that its zero, we show that ad x ad y is nilpotent.
Indeed, (ad x ad y)N g g+N (+) , so if 6= then for N  0 this is zero.
On the other hand, (, )ad is nondegenerate and g = t g g , so (, )ad |g g
must be nondegenerate. 
(c) In particular, (, )ad |t is nondegenerate, as t = g0 .
Remark. (, )ad |t 6= (, )ad t (which is zero since t is abelian).
Therefore, the Killing form defines an isomorphism : t t with (t)(t0 ) =
(t, t0 )ad . It also defines an induced inner product (well, symmetric bilinear form
to be precise we dont know its positive) on t , via ((t), (t0 )) = (t, t0 )ad .
(d) R = R, since (, )ad is nondegenerate on g g , but (g , g )ad = 0
if 6= 0. In particular, g = g via the Killing form.
(e) x g , y g = [x, y] = (x, y)ad 1 ().

Proof.
(t, [x, y])ad = ([t, x], y)ad = (t)(x, y)ad ,
which is exactly what we want. 
(f) Let e g be nonzero, and pick e g such that (e , e )ad 6= 0. Then
[e , e ] = (e , e )ad 1 (). To show that e , e , and their bracket give a
copy of sl2 , we need to compute
[ 1 (), e ] = ( 1 ())e = (, )e
We will be done if we show that (, ) 6= 0 (then we get a copy of sl2 after
renormalising).
Proposition 5.5. (, ) 6= 0 for all R.
Proof. Next time. 
24 IAN GROJNOWSKI


Lecture 10 We now prove the last proposition, (, ) 6= 0 for all R.
Proof. Suppose otherwise. Let m = he , e , 1 ()i. If (, ) = 0, i.e. if [ 1 (), e ] = 0,
then [m , m ] = C 1 and m is solvable. However, by Lies theorem this implies that
ad [m , m ] acts by nilpotents on g, i.e. ad 1 () is nilpotent. Since 1 () t, it is also
diagonalisable, and the only diagonalisable nilpotent element is 0. However, R =
6= 0. 
Now, define
2
h = 1 ()
(, )
and rescale e such that (e , e )ad = 2/(, ).
Exercise 46. The map (e , h , e ) 7 (e, h, f ) gives an isomorphism m
= sl2 .
Remark. In sln , the root spaces are spanned by Eij , so we are saying that the subalgebra of
matrices of the form
a




a

is a copy of sl2 , which is obvious. The cool thing is that something like this happens for all
the semisimple Lie algebras, and that this lets us know just about everything about them.
We will now use our knowledge of the representation theory of sl2 . We show
Claim. dim g = 1 for all R.
Proof. Pick a copy of m = he , h , e i
= sl2 . Suppose dim g > 1, then the map
1
g C (), x 7 [e , x] has a kernel. That is, there is a v g such that ad e v = 0.
Then v is a highest-weight vector (it comes from a root space g , so its an eigenvector of
ad h ), and its weight is determined by
ad h v = (h )v = 2v.
That is, v is a highest-weight vector of weight 2. However, we know that in finite-
dimensional representations of sl2 (which g is, since g acts on it) the highest weights are
nonnegative integers. This contradiction shows the claim. 
Before we finish proving the theorem we had (we still need to show [g , g ] = g+ when
, , + R), we show further combinatorial properties of the roots.
Theorem 5.6 (Structure theorem, continued). (1)
2(, )
Z for all , R
(, )
(2) If R and k R, then k = 1.
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
25

(3) M
g+k
kZ
is an irreducible module for (sl2 ) = he , h , e i. In particular,
{k + |k + R, k Z} {0}
2(,)
is of the form p, (p 1), . . . , , . . . , + (q 1), + q where p q = (,)
.
We call this set the string through .
(1): let q = max{k : + k R} and let v g+q be nonzero. Then ad e v
g+(q+1) = 0, and
 
2(, )
ad h v = ( + q)(h ) v = + 2q v.
(, )
That is, v is a highest weight vector for (sl2 ) with weight 2(, )/(, ) + 2q. Since the
weight in a finite-dimensional representation is an integer, we see that 2(, )/(, ) Z.
Proof. (3): Structure of sl2 -modules tells us that (ad e )r v 6= 0 for 0 r 2(,)
(,)
+ 2q = N ,
N +1
and that (ad e ) v = 0. Therefore, { + q k, 0 k N } are all roots (or possibly
equal to zero in any case, they have nonzero root space). This certainly is an irreducible
(sl2 ) module; we now need to show that there are no other roots in the string through .
We do this by repeating the same construction from the bottom up:
Let p = max{k : k r} and let w gp be nonzero. then ad e w = 0 and
ad h w = (2(, )/(, )2p)w, so w is a lowest-weight vector of weight 2(, )/(, )2p.
By applying ad e repeatedly, we get an irreducible sl2 module with roots p, (p
1), . . . , + (p 2(, )/(, )).
Now by construction we have p 2(, )/(, ) q and also q + 2(, )/(, ) p, which
means that in fact p q = 2(, )/(, ) and the two submodules we get coincide.
(2): By (1) we have 2(, k)/(k, k) = 2/k Z and 2(k, )/(, ) = 2k Z. Thus, it
suffices to show that R = 2 6 R.
But indeed, suppose 2 R and let v g2 be nonzero. Then [e , v] g has the
property that
([e , v], e ) = (v, [e , e ]) = 0,
and since g is one-dimensional and spanned by e , we find ([e , v], g ) = 0. Since (, )|g g
is nondegenerate, this means [e , v] = 0, i.e. v g2 is a highest-weight vector for (sl2 )
with highest weight 4. This doesnt happen in finite-dimensional representations, so g2 =
0 and 2 is not a root.
(In fact, we could derive this from (3), since we know that there is a 3-dimensional sl2 -
module running through 0.) 
We finally show [g , g ] = g+ when , , + R. Indeed, we have just shown
that g+k is an irreducible representation of m , and ad e : g+k g+(k+1) is an
isomorphism for k < q. Since q 1, we see that ad e g = g+ .
Remark. The combinatorial statement of (3) is rather clunky. We now unclunk it:
For t , define s : t t by s (v) = v 2(,v)
(,)
(a reflection of v about ).
Claim. (3) implies (and in fact, is equivalent to) s () R whenever , R.
26 IAN GROJNOWSKI

Proof. Let r = 2(, )/(, ). If r 0, then p = q + r r; if r 0 then q = p r r.


Either way, r is in the string through . 
Over the next several lectures, we will classify the root systems as combinatorial objects.
Before we do that, we state some further properties of the roots:
Proposition 5.7. (1) , R = (, ) Q.
(2) If we pick a basis 1 , . . . , l of t , with i R, then any R expands as
P
qi i
with qi Q, i.e. dim QR = dimC t.
(3) (, ) is positive definite on QR.
Lecture 11
Proof of the Proposition at the end of last lecture. (1) It suffices to show (, ) Q for
all R.
Let h, h0 t, then
X
(h, h0 )ad = trace g (ad had h0 ) = (h)(h0 )
R
L
since g = t + R g by the structure theorem.
Therefore, for , t ,
X X
(, ) = ( 1 (), 1 ())ad = ( 1 )( 1 ) = (, )(, ).
R R
2
Multiplying by 4/(, )2 , we get
P
In particular, (, ) = R (, ) .

4 X  2(, ) 2
= Z
(, ) R (, )

and therefore (, ) Q.
(2) Let B be the Gram matrix of the basis, Bij = (i , j ).
Exercise 47. (, ) is a nondegenerate symmetric bilinear form = det B 6= 0.
P P
Let = ci i R. Then (, i ) = cj (i , j ). Therefore, we can recover the
vector of ci as
(c1 , . . . , cl ) = ((, 1 ), . . . , (, l ))(B T )1
Since the inner productsP of roots are all rational, we see that ci Q as well.
(3) Let QR.
P Then = ci i with ci Q, and (, ) Q for all R. Moreover,
(, ) = R (, )2 0, and if (, ) = 0 then (, ) = 0 for all R. Since R
spans t and (, ) is nondegenerate, this implies that = 0.


6. Root systems
Let V be a vector space over R, (, ) an inner-product (i.e. a positive-definite symmetric
bilinear form). For V , 6= 0, write = (,)
2
, so that (, ) = 2.
Define s : V V by s (v) = v (v, ).
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
27

Lemma 6.1. s is the reflection in the hyperplane orthogonal to . In particular, all but
one of the eigenvalues of s are 1, and the remaining one is 1 (with eigenvector ). Thus,
s2 = 1, or (s + 1)(s 1) = 0, and s O(V, (, )), the orthogonal group of V defined by
the inner product (, ).
Proof. Write V = R . Then s () = (, ) = , and s fixes all v . 
Definition. A root system R in V is a finite set R V such that
(1) 0 6 R, RR = V

(2) For all , R, the inner product (, ) Z.


(3) For all R, s (R) R. (In particular, s () = R.)
Moreover, R is called reduced if
(4) , k R = k = 1.
L
Example 6.1. Let g be a semisimple Lie algebra, g = t + R g . Then R RR is a
reduced root system.
Definition. Let W GL(V ) be the group generated by reflections s , R; W is called
the Weyl group of R.
Observe that |W | < : W acts on R by permutations, and since 0 6 R, the action is
faithful (since s W for all R and s sends to , the only possible W -invariant
element is 0, but it is not in R). Thus, W injects into Sym(R), so |R| < = |W | < .
Definition. The rank of R is the dimension of V .
Definition. An isomorphism of root systems (R, V ) (R0 , V 0 ) is a linear bijection : V
V 0 such that (R) = R0 . Note that is not required to be an isometry.
Definition. If (R, V ) and (R0 , V 0 ) are root systems, so is their direct sum (R t R0 , V V 0 ).
A root system which is not isomorphic to a direct sum is called irreducible.
Example 6.2. (1) Rank 1: V = R, (x, y) = xy. R = {, } with 6= 0. W = Z/2.

Exercise 48. This is the only rank 1 root system.
(2) Rank 2: V = R2 with the usual inner product.
e2

e1

(a) This is A1 A1 (or A1 +A1 ), and is not irreducible. The Weyl group is Z/2Z/2.
(b) A2 : = , = , (, ) = 1.
W = S3 is the dihedral group of order 6.
This is the root system of sl3 , as you should be able to establish (and we will
shortly see).
28 IAN GROJNOWSKI

+ + 2

(c) B2 : = e1 , = e2 e1 : (, ) = 1 and (, ) = 2. and + are short roots;


and 2 + are long roots.
W is the dihedral group of order 8.
This is the root system of sp4 and so5 .
(d) G2 :
At this point, we clearly should look at all the dihedral groups, right? Unfor-
tunately, no: the condition (, ) Z rules out all but the dihedral group of
order 12. This is the root system of a Lie group, but not one of the classical

ones (symmetries of octonions).


Exercise 49. All of these are indeed root systems. They are the only rank 2 root systems,
and A2 , B2 , and G2 are irreducible.
Lemma 6.2. If R is a root system, so is R = { | R}.
Exercise 50. Prove it.
Definition. R is simply laced if all the roots have the same length.
Exercise 51. If R is simply laced, then (R, V ) = (R0 , V 0 ) where (, ) = 2 for all R0

(i.e. = ). [Hint: If irreducible, rescale; if not, break up into a direct sum of irreducibles.]
We would like to look at roots as coming from a lattice. We will see why length-2 is so
useful.
Definition. A lattice L is a finitely-generated free abelian group (= Zl for some l) with
bilinear form L L Z, such that (L Z R, (, )) is an inner product space.
A root of L is L such that (, ) = 2. We write RL = {l L|(l, l) = 2} for the roots of
L.
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
29

Note that RL = s (L) L.


Lemma 6.3. RL is a simply laced root system in RRL .
Proof. The only non-obvious
claim is |RL | < . However, RL is the intersection of a compact
set (sphere of radius 2) with a discrete set (the lattice), so is finite. 
Definition. We say that L is generated by roots if ZRL = L. Note that then L is an even
lattice, i.e. (l, l) 2Z for all l L.
k2
Example 6.3. L = Z with (, ) = . If = 2 then RL = {} and L = ZRL ; if 2
6= 1
for all k then RL = .
We now characterise the simply laced root systems.
(1) An . Consider Zn+1 = n+1
L
i=1 Zei with (ei , ej ) = ij (the square lattice). Define

ai = 0}
X
L = {l Zn+1 |(l, e1 + . . . + en ) = 0} = {ai ei | = Zn .
Then RL = {ei ej |i 6= j}, #RL = n(n + 1), ZRL = L.
If = ei ej then s applied to a vector with coordinates x1 , . . . , xn+1 (in basis
e1 , . . . , en+1 ) swaps the i and j coordinate; therefore, W = hsei ej i = Sn+1 .
(RL , RL) is the root system of type An . Note that n is the rank of the root system,
not the index of the associated Lie algebra.
Exercise 52. (a) You should not believe a word I say: Check these statements!
(b) Draw L Zn+1 and RL for n = 1, 2. Check that A1 and A2 agree with the
previous pictures.
(c) Show that the root system of sln+1 has type An .
(2) Dn . Consider the square lattice Zn . The roots are RZn = {ei ej |i 6= j}. Set
X X
L = ZRL = {l = ai ei |ai Z, ai even}
sei ej swaps the ith and jth coordinate as before; sei +ej flips the sign of both the ith
and the jth coordinate.
(RL , ZRL ) is called Dn ; #RL = 2n(n 1). The Weyl group is
W = (Z/2)n1 o Sn
where (Z/2)n1 is the subgroup of even number of sign changes. (Possibly I mean
(Z/2)n1 n Sn .)
Exercise 53. (a) Check these statements!
(b) Dn is irreducible if n 3.
(c) RD3 = RA3 , RD2 = RA1 RA1
(d) The root system of so2n has type Dn .
Lecture 12 We continue our classification of the simply laced root systems:
(1) E8 . Let
1 X
n = {(k1 , . . . , kn )|either all ki Z or all ki Z + and ki 2Z}.
2
Consider = ( 12 , 12 , . . . , 21 ) n , and note that (, ) = n/4. Thus, if n is an even
lattice, we must have 8|n.
30 IAN GROJNOWSKI

Exercise 54. (a) 8n is a lattice.


(b) For n > 1 the roots of 8n are a root system of type Dn
(c) The roots in 8 are {ei ej , i 6= j; 12 (e1 e2 . . .e8 ), with an even number of signs}.
The root system of 8 is called the root system of type E8 . The number of roots is
 
8
#R8 = 4 + 128 = 240.
2
To check that youre awake, the dimension of the associated Lie algebra is 280 +
rank(R8 ) = 248.
Remark. E8 is weird. Its smallest representation is the adjoint representation of
dimension 248. This is quite unlike son , sln , etc., which had dimension O(n2 ) and
an n-dimensional representation. There isnt a good understanding of why E8 is like
that.
Exercise 55. Can you compute #W , the order of the Weyl group of E8 ? [Answer:
214 35 52 7]
Exercise 56. If R is a root system and R, then R is a root system.
We will now apply this to 8 . Let = 21 (1, 1, . . . , 1) and = e7 + e8 .
(2)
Definition. R8 is a root system of type E7 .
h, i R8 is a root system of type E6 .
Exercise 57. Show #RE7 = 126 and #RE6 = 72. Describe the corresponding root
lattices.
Remark. These lattices are reasonably natural objects. Indeed, if we took the as-
sociated algebraic group and its maximal torus, there is a natural lattice associated
with it coming from the fundamental group. Equivalently, we can think of the group
homomorphisms to (or from) C . This isnt quite the root lattice, but its closely
related (and we might say more about it later).
Theorem 6.4. (1) (ADE classification) The complete list of irreducible simply laced
root systems is
An , n 1, Dn , n 4, E6 , E7 , E8 .
No two root systems in this list are isomorphic.
(2) The remaining irreducible reduced root systems are denoted by
B2 = C2 , Bn , Cn , n 3, F4 , G2
where
RBn = {ei , ei ej , i 6= j} Zn
RCn = {2ei , ei ej , i 6= j} Zn
with RCn = RBn . The root system of type Bn corresponds to so2n+1 , and the root
system of type Cn corresponds to sp2n . The Weyl groups are
WBn = WCn = (Z/2)n o Sn .
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
31

The root system F4 is defined as follows: let Q = {(k1 , . . . , k4 )|all ki Z or all ki Z + 12 }


and take
1
RF4 = { Q|(, ) = 2 or (, ) = 1} = {ei , ei ej , i 6= j, (e1 . . . e4 )}.
2
Remark. The duality between RBn and RCn suggests some form of duality between so2n+1
and sp2n . We will see it later.
To prove this theorem, we will first choose a good basis for V (the space spanned by the
root lattice). Choose f : V R linear such that f () 6= 0 for all R.
Definition. A root R is positive, R+ , if f () > 0.
A root R is negative, R , if f () < 0. (Equivalently, R+ .)
A root R+ is simple, , if is not the sum of two positive roots.
Example 6.4. (1) An : the roots are R = {ei ej |i 6= j}. Choose
f (e1 ) = n + 1, f (e2 ) = n, . . . , f (en+1 ) = 1.
Then R+ = {ei ej |i < j}.
Since f (R) Z, if f () = 1 then must be simple. Therefore, {e1
e2 , . . . , en en+1 }. On the other hand, it is clear that the positive integer span of
these is R+ , so in fact we have equality.
(2) Bn : R = {ei , ei ej , i 6= j}. Take f (e1 ) = n, . . . , f (en ) = 1. Then R+ =
{ei , ei ej |i < j} and = {e1 e2 , . . . , en1 en , en }.
(3) Cn : R = {2ei , ei ej , i 6= j}. R+ = {2ei , ei ej , i < j} and = {e1 e2 , . . . , en1
en , 2en }.
(4) Dn : R = {ei ej , i 6= j}, = {e1 e2 , . . . , en1 en , en1 + en }.
(5) E8 : set f (e1 ) = 28 = 1 + 2 + . . . + 7, and f (ei ) = 9 i for i = 2 . . . 8. Then
1
R+ = {ei ej , i < j, (e1 e2 . . . e8 ) with an even number of signs}
2
and
1
= {e2 e3 , . . . , e7 e8 , (e1 + e8 e2 . . . e7 ), e7 + e8 }
| {z } |2 | {z }
} f =3
f =1
{z
f =2

Exercise 58. Check all of these (by finding a suitable function f where one is not listed).
Also find the simple roots for E6 , E7 , F4 , and G2 .
Remark. So far we have made two choices. We have chosen a maximal torus, and we have
chosen a function f . Neither of these matters. We may or may not prove this about the
maximal torus later; as for f , note that the space of functionals f is partitioned into cells
within which we get the same notions of positive roots. The Weyl group acts transitively on
these cells, which should show that the choice of f does not affect the structure of the root
system we are deriving
Proposition 6.5. (1) , = 6 R.
(2) , , 6= = (, ) 0 (i.e.Pthe angle between and is obtuse).
(3) Every R+ can be written as = ki i with i and ki Z0 .
(4) Simple roots are linearly independent.
(5) R+ , 6 = i such that i R+ .
32 IAN GROJNOWSKI

(6) R is irreducible iff is indecomposable (i.e. 6= 1 t 2 with (1 , 2 ) = 0).

Exercise 59. Prove this! (Either by a case-by-case analysis which you should do in a few
cases anyway or by finding a uniform proof from the axioms of a root system.)

We will now find another way to represent the root system. Let = {1 , . . . , l }, let
aij = (i , j ).

Definition. A = (aij ) is the Cartan matrix.

Properties 6.6. (1) aij Z, aii = 2, aij 0 for i 6= j


(2) aij = 0 aji = 0
(3) det A > 0
(4) All principal subminors of A have positive determinant.

Proof. The first two should be obvious from the above. To prove the determinant, note that
2
(1 ,1 )
0
A=
.. (i , j )

.
2
0 (l ,l )

The first matrix is diagonal with positive entries, the second one is the Gram matrix of a
positive definite bilinear form and so has positive determinant.
The last property follows because any principal subminor of A (i.e. obtained by removing
the same set of rows and columns) has the same form as A. 

We will represent A by a Dynkin diagram. The vertices correspond to simple roots.


Between vertices i and j there are aij aji lines (note that this value can be 0 or 1 in the
simply laced diagrams, may be 2 in Bn or Cn , and may be 3 in G2 ). If there are 2 or 3 lines,
we put an arrow towards the short root.

Exercise 60. Show the Dynkin diagrams are as we claim.

Lecture 13

Exercise 61. If (R, V ) is an irreducible root system with positive roots R+ , then there is a
unique root R+ such that , + 6 R+ . This is called the highest root.
At the moment, the easiest way to prove this is to examine all the root systems we
have. Later well give a uniform proof (and show the relationship between and the adjoint
representation).

Example 6.5. In An , the highest root is e1 en+1 .

Define the extended Cartan matrix A by setting 0 = (and then A = (aij )0i,jl
2( , )
with aij = (i , j ) = (ii,ij) ).
The associated diagram is called the extended or affine Dynkin diagram.
 
2 2
Example 6.6. A1 has A = (2) and A = .
2 2
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
33

e1 e2 e3 e4
An
e2 e3 en en+1

e1 e2 e3 e4
en1 + en
Dn
e2 e3
en1 en

E8

E7

E6

en1 en
Bn en

en1 en
Cn 2en

G2

F4

Figure 2. Dynkin diagrams (simply laced first).

An has
2 1 0 1
2 1

1 2 1
...

1 2 ...

A= A = 0 1 2 0

... ...
1 ... ...
1
1 2
1 0 1 2
Notice that A satisfies almost the same conditions as the Cartan matrix, i.e.
(1) aij Z, aij 0 for i 6= j, aii = 2
(2) aij = 0 aji = 0
34 IAN GROJNOWSKI

(3) det A = 0, and every principal subminor of A has positive determinant.


The third property follows since A is related to the Gram matrix for 0 , . . . , l , which are
not linearly independent.
Exercise 62. Write down A and the extended Dynkin diagram for all types. To get you
started, in Bn we have 0 = e1 e2 , in Cn we have 0 = 2e1 , and in Dn we have
0 = e1 e2 .
(2)
Exercise 63. Show that An (twisted An ) also has determinant 0. (See Figure 3.)
Exercise 64. The Dynkin diagram of AT is the Dynkin diagram of A with the arrows
reversed.
These diagrams (plus their transposes) are (almost) all the non-garbage Lie algebras out
there. (Well be slightly more precise about the meaning of non-garbage later.)
Theorem 6.7. An irreducible (i.e., connected) Dynkin diagram is one of An , Bn , Cn , Dn ,
E6 , E7 , E8 , F4 , and G2 .
Proof. (1) First, we classify the rank-2 Dynkin diagrams, i.e. their Cartan matrices:
 
2 a
A=
b 2
with det A = 4 ab > 0. This leaves the options (a, b) = (0, 0) (A1 A1 ), (1, 1) (A2 ),
(2, 1) or (1, 2) (B2 ), and (3, 1) or (1, 3) (G2 ).
(2) Observe any subdiagram of a Dynkin diagram is a Dynkin diagram. In particular,
since the affine Dynkin diagrams have determinant 0, they are not subdiagrams of a
Dynkin diagram.
(3) A Dynkin diagram contains p no cycles. Indeed, let 1 , . . . , n be distinct simple roots,
P
and consider = i / (i , i ). Then
X 2(i , j ) X
0 < (, ) = n + p =n aij aji .
i<j
(i , i )(j , j ) i<j
P
Therefore, i<j aij aji < n. On the other hand, if we had a cycle on n simple roots,
P
it would have at least n edges, giving i<j aij aji n, a contradiction.
(4) If a diagram is simply laced, it is A, D, or E:
Suppose its not of type A. Since D 4 is not a subdiagram, any branchings are at
most 3-way; and since D n is not a subdiagram for n > 4, there is exactly one branch
point. Therefore, the diagram is T-shaped, with p, q, and r vertices in each leg. We
call this shape Tp,q,r ; e.g., E8 = T5,3,2 .
Exercise 65. Finish the proof in one of two ways:
(a) By using the fact that E6 , E7 , E8 are not subdiagrams of a Dynkin diagram; or
(b) By showing det Tp,q,r = pq + pr + qr pqr, and concluding that Dn , E6 , E7 , and
E8 are the only possibilities. (E.g.: det E8 = 15 + 10 + 6 30 = 1.)
(5) We now deal with the non-simply-laced diagrams.
Exercise 66. If there is a triple bond in the diagram, then the Dynkin diagram is
2 ; you need to check the
G2 . (Note that we took care of one of the possibilities in G
possibility of a double and a triple bond, and of two triple bonds.)
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
35

1
0
0
1

An

Dn
1
0
0
1

E8 1
0
0
1

E7 1
0
0
1

E6

11
00
00
11
00
11
Bn
1
0
0
1

Cn 1
0
0
1

G2 1
0
0
1

F4 11
00
00
11

An
(2)
1
0
0
1
Figure 3. Affine Dynkin diagrams.

(2)
Finally, if there is a double bond in the diagram, then by Cn and An theres only
one of them, and the diagram does not branch by B n . If the double bond is in the
middle, we have F4 (since we dont have F4 ); otherwise, we have Bn or Cn .

36 IAN GROJNOWSKI

Exercise 67. Compute the determinant of all the Cartan matrices.


Example 6.7.
 
2 1
A1 : det(2) = 2 A2 : det = 3 A3 : det = 4 ...
1 2
This comes out of the following: SLn+1 = {X : det X = 1} has center consisting of (n + 1)th
roots of unity (i.e. a cyclic group of order n + 1). Note that the order of the center is equal
to the determinant of the Cartan matrix. This is true in general: the order of the center of
a simply connected algebraic group with Lie algebra g and associated Cartan matrix A is
equal to det A.
In particular, this means that the algebraic group attached to E8 has no center.

7. Existence and uniqueness


Lecture 14
A. Independence of choices
Theorem 7.1. All maximal tori are conjugate. (Recall that we are in the setting of char k =
0, k = k.) That is, if t and t0 are two maximal tori of g,
g Aut (g)0 = {g GL(g)|g : g g is a Lie algebra homomorphism}0
such that gt = t0 . (Note: Aut (g) is itself an algebraic group; Aut (g)0 is the connected
component of 1. Also, Lie (Aut g) = g.)
The proof is not very hard and involves a little bit of algebraic geometry; we wont give
it.
Theorem 7.2. All positive root systems R+ are conjugate. That is, let (V, R) be a root
+ +
system. Let f1 , f2 : V R satisfy fi () 6= 0 for all R, and define R(1) , E(2) . There
+ +
exists a unique w W such that wR(1) = R(2) . (Consequently, w1 = 2 and we get the
same Cartan matrix out of the two choices.)
The proof is quite easy, but we still wont give it.
Corollary 7.3. g determines the Cartan matrix, i.e. the Dynkin diagram, regardless of the
choices we made.
B. Uniqueness
Theorem 7.4. For i = 1, 2, let gi be semisimple Lie algebras with Cartan subalgebras ti ,
root systems Ri , positive root systems Ri+ , simple roots i and Cartan matrices Ai . Suppose
that, after reordering indices, A1 = A2 . Then there exists an isomorphism of Lie algebras
: g1 g2 such that (t1 ) = t2 , (R1 ) = R2 , etc.
We will not prove this theorem, but we will say more about it.
C. Existence
Theorem 7.5. Let A be a Cartan matrix. There exists a semisimple Lie algebra whose
Cartan matrix is A.
Remark. We already know this for the infinite families, but a nice proof would not be case-
by-case.
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
37

We will now talk about existence and uniqueness.


L
Let g be a semisimple Lie algebra, g = t + R g , and let the simple roots be =
{1 , . . . , l }. Choose
2 2 1 (i )
0 6= Ei gi , Fi gi such that (Ei , Fi )ad = , Hi = .
(i , i ) (i , i )
2( , )
Define aij = (i , j ) = (ii,ij) .
We have the following commutation relations:
[Hi , Hj ] = 0 (the torus commutes)
[Hi , Ej ] = aij Ej ([Hi , Ej ] = j (Hi )Ej = j (2 1 (i ))/(i , i )Ej = 2(i , j )/(i , i )Ej )
[Hi , Fj ] = aij Fj
[Ei , Fj ] = 0 for i 6= j, since [Ei , Fj ] gi j and i j 6 R for i , j
[Ei , Fi ] = Hi
Depending on your mood, these are either some of the relations or all of the relations.
We will see what
L this means.
Let n = R+ g , n = R+ g . Then g = n+ + t + n . (Note that n+ and n are
+
L
not ideals of g.)
Lemma 7.6. Ei generate n+ ; Fi generate n ; hence Ei , Fi generate g as Lie algebras.
+
P
Proof. For a root R write = ki i with ki Z0 , and define the height of to be
ki . We now induct on the height of the root. Recall that if R+ and is not
P
h() =
simple, then there exists i such that i = R+ . Therefore, g is generated by
the Ei . However, then g = [g , gi ] is also generated by the Ei .
The corresponding claim about Fi follows immediately by taking R+ as the positive
roots.
Finally, [Ei , Fi ] = Hi and {Hi |i = 1, . . . , l} are a basis for t (recall the simple roots are a
basis for t ). 
Let A be a generalised Cartan matrix. That is, aii = 2, aij = 0 whenever aji = 0, and
aij Z0 for i 6= j (but we do not include requirements of positive definiteness). Let
g = hEi , Fi , Hi |[Hi , Hj ] = 0, [Hi , Ej ] = aij Ej , [Hi , Fj ] = aij Fj , [Ei , Fj ] = 0 for i 6= j, [Ei , Fi ] = Hi i
Clearly, if A is the Cartan matrix of g, then g  g. Are these all the relations? Not quite.
Let
g = g/h(ad Ei )1aij Ej = 0, (ad Fi )1aij Fj = 0i
(e.g., if aij = 0 then Ei and Ej commute, and if aij = 1 then [Ei , [Ei , Ej ]] = 0. These are
called the Serre relations.
Remark. They are called the Serre relations because they were discovered by Harish-Chandra
and Chevalley.
Exercise 68. Check that the Serre relations hold in the classical groups.
Theorem 7.7. (1) If A is indecomposable, g has a unique maximal ideal, and g is
the quotient of g by its maximal ideal, so is simple. (But not necessarily finite-
dimensional!)
38 IAN GROJNOWSKI

(2) Hence, if g is a finite-dimensional semisimple Lie algebra with Cartan matrix A, then
the map g g via Ei 7 Ei , Fi 7 Fi , Hi 7 Hi (where on the left-hand side we
have the canonical generators of g and on the right hand side we have the elements
of the root spaces we found above) factors through g and is surjective, so gives an
isomorphism g = g.
The second part of the theorem clearly implies the uniqueness. We also see that existence
is equivalent to the statement
g is finite-dimensional A is a Cartan matrix
Definition. g (for any A) is called a Kac-Moody algebra (this time because they were
independently discovered by Kac and Moody).
Most of the representation theory we will be talking about holds verbatim for the Kac-
Moody algebras. However, note that all we have given for them is a finite presentation,
which is not a very nice object. In particular, the root spaces of these algebras are not
1-dimensional. The dimension of the root space for any given root is computable but hor-
rible, unless the matrix A is an affine Dynkin diagram. In that case, the Lie algebra is
approximately Cc + g C[t, t1 ].
The Weyl group has a corresponding presentation:
Theorem 7.8 (Presentation of the Weyl group). Write si = si , then
W = hs1 , . . . , sl |s2i = 1, (si sj )mij = 1i,
where mij are defined in terms of the Cartan matrix as follows:
aij aji 0 1 2 3
mij 2 3 4 6
E.g., if i and j are not connected in the Dynkin diagram then si sj = sj si , and if they are
connected by a single edge, then si sj si = sj si sj . These are known as the braid relations.
In An1 they look like this:

Figure 4. Braid relations in An1 . We are asserting that the permutations


are equal.

Lecture 15
Exercise 69. Check, for each root system, that the relations claimed for the Weyl group
do hold. (This requires checking it for the rank-2 root systems.) We will not prove the
isomorphism.
L
Remark. To show existence, we know that g = t + R g , so in principle existence should
only require picking a basis E and computing [E , E ] = c E+ . How hard is it to write
down c ?
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
39

For a long time it was thought to be quite hard; however, with the discovery of the affine
Lie algebras, it turned out to be quite easy. We might even get to doing it for the ADE
algebras.
Remark. So far the status of existence of a finite-dimensional simple Lie algebra correspond-
ing to an abstract root system for us is as fikkiws: we have shown that there exists a
simple Lie algebra corresponding to each generalised Cartan matrix, but notL that it is finite-
dimensional. Suppose that for the g that we have defined we write g = t + R g (where
the torus comes from the root system). We can ask two questions: (1) what is the relation-
ship between R and the root system we started with?; (2) What is the dimension of dim g ?
While it is true that W R, in general there is not an equality between them. We call
the roots in W real roots Rre ; for the real roots, dim g = 1. For finite-dimensional
Lie algebras, all roots are real. If A is an affine Cartan matrix, then the semidefinite nature
of it means that there exists a unique (up to scaling) root of length 0, called the imaginary
root. The imaginary roots are pleasant and describable objects. For a general, indefinite,
Cartan matrix, there exists an algorithm for computing dim g , but the numbers are not
very meaningful.
8. Representations
Let g be a semisimple Lie algebra, g = t + R g = t + n+ + n . Let V be a finite-
L
dimensional representation of g. (We dont quite need finite-dimensional; well see exactly
what we need a bit later.)
L
Proposition 8.1. (1) V = t V , where V = {v V |t(v) = (t)v t t} (this is
the weight space decomposition with respect to t)
(2) If V 6= 0, then (h ) Z for all R. (Here, h is the element of the (sl2 ) ,
h = 1 ( ).)
Proof. Since V is a finite-dimensional representation of g, it is a finite-dimensional represen-
tation of (sl2 ) . Therefore, h acts diagonalisably on V with (h ) Z. The first assertion
follows because h span t. 
L
Definition. Q = ZR = i Zi is the lattice of roots.
P = { QR|(, ) Z R} (the dual lattice) is the lattice of weights. Note that the
condition can be checked by computing (, i ) for i .
Since, for , R we have (, ) Z, we see Q P . Also, if V is a finite-dimensional
representation of g and V 6= 0, then P , as (h ) = (, ).
Exercise 70. (1) |P/Q| < . In fact, |P/Q| = det A where A is the Cartan matrix.
(2) W (P ) P , where W is the Weyl group.
Example 8.1. In sl2 , R = , Q = Z, (, ) = 2, so P = Z/2, and |P/Q| = 2 = det(2).
Definition. For V a finite-dimensional representation of g, define
X
ch V = dim V e Z[P ].
P
+
Here, e is a formal symbol (e e = e ) forming a basis for Z[P ].
People are looking at me as if Im speaking some language I dont speak, let alone you
dont speak.
40 IAN GROJNOWSKI

Example 8.2. For sl2 , P = Z/2. Write z = e/2 , then ch L(n) = z n + . . . + z n =


z n+1 z (n+1)
zz 1
.
Example 8.3. Let g = sl3 , and V = g the adjoint representation. Then
ch V = 2 + z + w + zw + z 1 + w1 + z 1 w1 ,
where w = e1 , z = e2 .

1 1
2 1 + 2
1 2 1
1
1 1

Figure 5. Root lattice for sl3 , with dimensions of weight spaces. Note that
the outer numbers are 1, and the picture is W -invariant (W = S3 here).

Proposition 8.2. If V is a finite-dimensional representation of g, then dim V = dim Vw


for all w W , i.e. ch V is W -invariant, ch V Z[P ]W . (In particular, for sln this means
that ch V are symmetric functions.)
We will sketch three proofs, of which the third one is the easiest, but the first two reveal
what is actually going on.
Sketch 1. If G is an algebraic group with Lie algebra g then (by a theorem) W = N (T )/T
(where T is the maximal torus in G whose Lie algebra is t, and N (T ) is its normaliser).
Example 8.4. If G = SLn with T the diagonal matrices, then N (T ) are matrices with only
one nonzero entry in each row and column, and N (T )/T = Sn the permutation matrices.
That is, w N (T ) such that wT
= w.
If G acts on V (as it always does when G is simply connected), then w(V
) = Vw , since
tw(v)
w 1 tw)v
= w( w 1 tw)v).
= w((
That is, W is almost a subgroup
 of G. However,  it is definitely
 not actually a subgroup.
1 1
E.g., in SL2 , we take = s, and then s 2 = T is not equal to 1. That
1 1
is, W does not embed into G and does not act on V . (There is a small 2-group, (Z/2)l ,
that intervenes. This 2-group is also involved in determining the coefficients c .) 
Sketch 2. We can mimic the above argument in g.
     
0 1 1 0 1 1 1 0
= = exp(f ) exp(e) exp(f ).
1 0 1 1 0 1 1 1
Therefore, for each root we define s = exp(f ) exp(e ) exp(f ), where the matrix expo-
nential is defined in the usual power series way.
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
41

Exercise 71. (1) If V is a finite-dimensional representation of g, then e , f act nilpo-


tently on g, so s : V V is a well-defined finite sum.
(2) s 2 =  : V V , where  2 = 1 and  acts by a constant. Determine this constant
(in terms of ).
(3) s (V ) = Vs .

Remark. We did not really need V to be finite-dimensional; what we need instead is for each
e and f to act locally nilpotently. We say x : V V acts locally nilpotently if for all v V
we have xn v = 0 for some n.
Exercise 72. This is equivalent to saying that as an sl2 -module, V splits into a (possibly
infinite) direct sum of finite-dimensional sl2 -modules.
Definition. Such a V is called integrable (because you can integrate the Lie algebra action
to a group action).
In the rest of the course, you can replace all instances of finite-dimensional by inte-
grable, and then all theorems and proofs will work for the Kac-Moody algebras as well.
Sketch 3. If were just working with sl2 anyway, we shouldnt need all the other stuff.
Exercise 73. Consider V as a representation of (sl2 ) t. Then V splits up into a direct
sum of strings, each of which is of the form
2 ... m
where m = (h ).
Such a string is obviously s -invariant, since s will just flip the string from left to right
(as the reflection did in sl2 ). 
Lecture 16
Definition. For , t write to mean =
P
ki i with ki N. The points
{ P | } are the lattice points in an obtuse cone.

11111
00000

00000
11111
00000
11111
00000
11111
00000
11111
00000
11111
Figure 6. Cone for sl3 .

Definition. If V is a representation of g, we say that P is a highest weight if V 6= 0


and V 6= 0 = .
We say v V is a singular vector if v 6= 0 and e v = 0 for all R+ . (Note that
e v V+ , so if is a highest weight, then all nonzero elements of V are singular vectors.)
We say that is an extremal weight if w is a highest weight for some w W .
Example 8.5. In the adjoint rep of sl3 , the highest weight is 1 + 2 , and the entire outer
hexagon are extremal weights.
42 IAN GROJNOWSKI

Theorem 8.3. g is a semisimple Lie algebra over C.


(1) Complete reducibility: if V is a finite-dimensional representation of g, then V is a
direct sum of irreducible representations of g.
Set P + = { P : (, ) > 0 R+ } (of course, its enough to check this on
). This is the cone of dominant weights. The assertion of the next parts of the
theorem is that P + parametrises the finite-dimensional irreducible representations of
g, with 7 L(). More precisely,
(2) Let V be a finite-dimensional irreducible representation of g, and v V a singular
vector. Then
(a) dim V = 1
(b) V 6= 0 = , so is a highest weight (and v a highest weight vector)
(c) (hi ) N for all i, i.e. P + .
Moreover, if W is another finite-dimensional irreducible representation of g with
highest weight , and w W , w 6= 0, then there exists a unique isomorphism
V W mapping v 7 w. (Of course, given they are irreducible representations,
if there is an isomorphism, there is a one-dimensional family of them, so we would
pin down the exact one by sending v 7 w. That is, uniqueness is not the nontrivial
statement here.)
(3) Given P + , there exists a finite-dimensional irreducible representation with highest
weight , called L().
(4) A formula for ch L() (the Weyl character formula), to be stated later.
Remark. For Kac-Moody algebras, we need a slight modification, since it is no longer obvious
that we will have highest-weight vectors. We need to work in the category of integrable
highest-weight representations instead.
Corollary 8.4. X
ch L() = e + a e Z[P ]
<
+
hence {ch L()| P } are linearly independent (they have different leading terms). On the
other hand, X X
ch L() = m + m Z[P ]W , where m =
a e .
< W

Since {m } clearly form a basis for Z[P ] , ch L() are a basis of Z[P ]W .
W

Corollary 8.5. If V and W are finite-dimensional representations of g, V


= W ch V =
ch W .
Proof. Complete reducibility + characters of irreducibles are a basis. 
Definition. Define i P to be the dual basis to simple (co)roots, (i , j ) = ij for
i = 1, . . . , l. These are called the fundamental weights.
Then P + = Z0 i = { ni i |ni 0}. This is a fundamental domain for the action
P
of W . Each irreducible representation will have a unique highest weight lying in this cone,
which is what parametrises the representation.
Exercise 74. Compute i for the simple Lie algebras sln , so2n , so2n+1 , sp2n , etc., and draw
the picture for A2 , B2 , and G2 .
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
43
11111
00000
00000
11111
00000
11111
00000
11111
00000
11111
00000
11111
00000
11111
00000
11111
00000
11111
2 1 + 2
00000
11111
00000
11111
00000
11111
1

Figure 7. Cone of positive weights for sl3 . The adjoint representations high-
est weight is 1 + 2 .

Exercise 75. P = = (hi )i , where hi = 1 (i ).


P

Example 8.6. For any g, the trivial representation is L(0).

For any
L simple g, the adjoint representation V = g is irreducible. The decomposition
g = t + R g means that a highest weight is a root such that + i 6 R for all i, i.e.
= , the highest root in R+ . (Therefore, the theorem of the highest weight implies that
is unique, as promised.)
Example 8.7. In An1 , the highest root is = e1 en (recall i = ei ei+1 and hi is a
diagonal matrix with +1 in the ith position, 1 in the (i + 1)th position, and 0 elsewhere).
We have (h1 ) = 1, (h2 ) = 0, . . . , (hn2 ) = 0, (hn1 ) = 1. We can label the Dynkin
diagram by (hi ):
1 0 0 1
...

Figure 8. Adjoint representation of sln ; labels are (hi ), giving a decompo-


sition of into fundamental weights.

Exercise 76. Compute (hi ) for all the simple Lie algebras.
Example 8.8. Other representations of sln : we havePthe standard representation, Cn , with
weight basis v1 , . . . , vn and weights e1 , . . . , en (recall ei = 0). The highest weight is e1 (as
e1 > e2 > . . . > en , since e1 e2 , e2 e3 , . . . R+ ). Therefore, Cn = L(1 ), and
ch Cn = ee1 + . . . + een = z1 + . . . + zn
where zi = exp(ei ) Z[P ] = Z[z11 , . . . , zn1 ]/(z1 . . . zn = 1).
Now, if V and W are representations of g, then so is V W (via x g acts by x1+1x).
In particular, V V is a representation. Note that : V V V V , a b 7 b a,
commutes with the g-action, so eigenspaces of are g-modules. That is, S 2 V and 2 V are
g-modules. In general, these may not be irreducible, but they are for sln (provided V is
irreducible, of course!), as we check below:
Consider s Cn (for s n 1). This has weight basis vi1 . . . vis for i1 < . . . < is , and
the weight of this is ei1 + . . . + eis as is clear from the action. Therefore,
Ei (vi1 . . . vis ) = 0 for all i vi1 . . . vis = v1 . . . vs ,
44 IAN GROJNOWSKI

i.e. this is the only singular vector. Note that its weight is = e1 + . . . + es and is the sth
fundamental weight, as (, ei ei+1 ) = is . Therefore,
s Cn = L(s ) is a fundamental representation.
In particular, n1 Cn = (Cn ) = L(n1 ).
Consider S m Cn . It has weight basis vi1 . . . vim with i1 . . . im and weight ei1 + . . . + eim .
We have
Ei (vi1 . . . vim ) = 0 for all i vi1 . . . vis = v1 . . . v1 = v1m ,
i.e. S m Cn is an irreducible representation isomorphic to L(m1 ).
Exercise 77. Check that the above is true. Compute ch s Cn , ch S m Cn . Find a closed
formula for the generating functions
X X
(ch S m Cn )q m , (ch m Cn )q m .
m0 m0

Lecture 17
Exercise 78. (1) If V and W are g-modules, V finite-dimensional, then V W
=
Hom (V, W ) as g-modules, via v w 7 (v 7 v (v) w).
Remark. If V is a g-module, V is a g-module with action (x )(v) = (xv). The
sign comes from differentiating the group action, (x g)(v) = g 1 (xv).
(2) Show that if V = Cn and g = sln , then V V = sln C, adjoint + trivial
representations (where the trivial representation comes from Id Hom g (V, V )). In
contrast, V V = S 2 V 2 V always.
Exercise 79. Let g = son or sp2l for 2l = n, and V = Cn .
(1) Compute the highest weight of this representation.
(2) V = V via the form defining g, so V V has at least three components (since it
must have the trivial subrepresentation). Show that it has exactly three, describe
them, and find the highest weights.
We will now devote ourselves to proving Theorem 8.3 classifying all the representations of
simple Lie algebras.
Let g be any algebra with a non-degenerate invariant bilinear form (, ), e.g., g semisimple
and (, ) = (, )ad . Let x1 , . . . , xN be a basis of g, and x1 , . . . , xN the dual basis (so (xi , xj ) =
ij ).
Definition. = xi xi is the Casimir of g.
P

Remark. For the moment, is an operator on any representation of g. However, we will in a


moment define the universal enveloping algebra of g, and that is where naturally lives.
Lemma 8.6. For all x g, we have [, x] = 0.
[, x] = [ xi xi , x]. Since ad x is a derivation, we rewrite this as
P
Proof 1, coordinate-based.
xi [xi , x] + [xP i
P P
i , x]x .
Write [x , x] = aij xj and [xi , x] = bij xj . Then
i
P

aij = ([xi , x], xj ) = ([xj , xi ], x)


bij = ([xi , x], xj ) = ([xj , xi ], x) = aji
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
45

Thus,
X X
[, x] = xi xj aij + xj xi bij = 0.

Proof 2, without coordinates. We have maps P of g-modules C , End (g)
= g g = gg
i
(the first map is via 7 Id , i.e. 1 7 xi x ; the last isomorphism is via the bilinear
form). Since g acts on V , we have a map of g-modules g End (V ). Finally, we have a
map of g-modules End (V ) End (V ) End (V ) by multiplication. This finally gives
C , g g
= g g End (V ) End (V ) End (V )
sending 1 7 . Therefore, generates a trivial g-submodule of End (V ), which is equivalent
to saying [, x] = 0 for all x g. 
L
Now let g be semisimple, g = t R g , and (, )ad the Killing form. Choose a
basis u1 , . . . , ul of t, x of g ; choose the dual basis u1 , . . . , ul of t and x of g (so
that (x , x ) = 1). Normalise x so that (x , x )ad = 1, so that x = x . Then
[x , x ] = 1 () by definition. Now,
X X X X X
= ui ui + (x x + x x ) = ui ui + 2 x x + 1 ().
R+ R+ R+

1
P
Definition. = 2 R+ .
Remark. Of course, there will be an exercise to compute for all the simple Lie algebras.
However, we will state this exercise later, when its more obvious what is.
Remark. If you like algebraic geometry, this is the square root of the canonical bundle on
the flag variety.
Then X X
= ui ui + 2 1 () + 2 x x
R+

Lemma 8.7. Let V be a g-module, v V a singular vector with weight (i.e. n+ v = 0 and
tv = (t)v). Then v = (| + |2 ||2 )v.
Proof. Apply the most recent version of :
X  X
v = (ui )(ui ) + (2 1 ()) v + 2 x x v = ((, ) + 2(, ))v.
R+


In particular, if V is irreducible, acts on V by (, ) + 2(, ) by Schurs lemma.

9. PBW theorem
Let g be any Lie algebra over a field k.
Definition. The universal enveloping algebra of g Ug is the associative algebra over k gen-
erated by g with relations xy yx = [x, y] for all x, y g.
46 IAN GROJNOWSKI

More formally, if V is a vector space over k, then


M
T V = k + V + V V + V 3 + . . . = V n
n0

is the tensor algebra of V , and is the free associative algebra generated by V . Let J be the
two-sided ideal in T g generated by x y y x = [x, y] for all x, y g; then Ug = T g/J.
Exercise 80. An enveloping algebra for g is a linear map i : g A, where A is an associative
algebra, and i(x)i(y) i(y)i(x) = i([x, y]). (For example, if V is a representation of g, we
can take A = End (V ) and i the action map.) Show that Ug is initial in the category of
enveloping algebras, i.e. any such map i factors uniquely through Ug.
Remark. This is old terminology anywhere other than the above exercise and old textbooks,
enveloping algebra almost certainly means universal enveloping algebra.
The Casimir operator naturally lives in Ug; in fact, as weve shown, in the center Z(Ug).
Observe that T g is a graded algebra, but that the relations we are imposing on it are not
homogeneous (x y and y x have degree 2, while [x, y] has degree 1). Consequently, Ug
is not graded. It is, however, filtered.
Define (Ug)n to be the span of products of n elements of g, then Un Um for n m,
and Un Um Un+m . (In particular, U0 k, U1 k + g, etc.)
Exercise 81. Show that if x Um and y Un , then [x, y] Un+m1 (we clearly have
[x, y] Un+m , and are asserting that in fact the degree is one lower). By abuse of notation,
for x, y Ug we write [x, y] to mean xy yx.
Definition. For a filtration F0 F1 . . . , we write gr (F ) = Fi /Fi1 . (We probably need
Fi to be a filtration of an algebra for this to make sense...) This is the associated graded
algebra.
Theorem 9.1 (Poincare-Birkhoff-Witt).
gr Ug = (Ug)n /(Ug)n1
= Sg
Equivalently, if {x1 , . . . , xn } is a basis for g, then {xa11 xa22 . . . xann |ai Z0 } is a basis for Ug.
In particular, g , Ug.
Exercise 82. (1) Show that the previous exercise, [Un , Um ] Un+m1 , implies that there
is a well-defined map S(g) gr Ug extending the map Q g 7 g.
(2) Show that the map is surjective, or equivalently that { xai i } span Ug.
Therefore, the hard part of PBW theorem is injectivity. We will not prove the theorem.
Remark. The theorem should imply that Ug is a deformation of Sg (i.e. if we consider
xy yx = t[x, y] then for t = 0 we get Sg whereas for t = 1 and, in fact, for generic t
by renormalising we get Ug). The shortest proof of the theorem is indeed via deformation
theory.

10. Back to representations


Exercise 83. If V is a representation of g and v V , then the g-submodule of V generated
by v is Ug v.
Lecture 18
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
47

Definition. V is a highest weight module for g if v V singular (i.e. n+ v = 0 and


t v = (t)v for all t t and some t ) such that V = Ug v.
Note that in fact Un v = V .
Proof. PBW theorem (the easy direction) implies that if x1 , . . . , xN is a basis of g then
xa11 . . . xaNN spans Ug. Taking a basis for g ordered n , then t, then n+ , we see Ug
=
Un Ut Un+ , and Un+ v = Ut v = Cv Un v (since n+ v = 0). 
Remark. If V is irreducible and finite-dimensional, it is a highest weight module (as it has a
singular vector v, and Ug v is a submodule).
Proposition 10.1. Let V be a highest weight module for g (not necessarily finite-dimensional),
and let v be a highest weight vector with highest weight t .
L
(1) t acts diagonalisably on V , so V = D() V , where
X
D() = { ki i |ki Z0 } = { t | }
(the descent of ).
(2) V = Cv , and all other weight spaces are finite-dimensional.
(3) V is irreducible if and only if all singular vectors are in V .
(4) acts on V as | + |2 ||2
(5) If v is a singular vector, then | + |2 = | + |2
(6) If RR, then there exist only finitely many such that V contains a singular
vector.
(7) V contains a unique maximal proper submodule I. I is graded by t, I = (I V ),
and I is the sum of all proper submodules of V .
Remark. The assumption in (6) is in fact unnecessary, but there isnt quite a one-line proof
in that case. Note that if V is finite-dimensional, then we in fact know that all weights lie
in QR, but we are not assuming finite dimensionality here.
Proof. As V = Un v , expressions e1 e2 . . . er v span V (where i R+ and ei
gi ). The weight of such an expression is 1 2 . . . r .
Exercise 84. Prove it!
This implies (1) immediately, and (2) upon noticing that there are finitely many ways of
writing = 1 . . . r for i R+ .
(3): if v V is a singular vector, then N = Ug v = Un v is a submodule of V with
weights in D(). Since 6= , we have D() ( D(), whence N is a proper submodule.
Thus, if there is a singular vector outside V , then V is not irreducible.
Conversely, if N V is a properP submodule, then tN N , so N is gradedPby t, and its
weights lie in D(). Let = ki i be Pa weight in N , with i and ki minimal.
Since N is a proper submodule, 6= so ki > 0, and in that case any nonzero vector in
N is singular (since + i is not a weight of N for any simple i ).
(7): Any proper submodule of V is t-graded and does not contain v . The sum of all
proper submodules still does not contain v , so it is the maximal proper submodule.
(4): for the singular vector v we have v = (| + |2 ||2 )v . Since V = Ug v and
commutes with Ug, we see that acts by the same constant on all of V .
(5) is now immediate (apply the same observation to any other singular vector).
48 IAN GROJNOWSKI

(6): if V contains a signular vector, then | + |2 = | + |2 . This means that lies in


the intersection of a sphere in RR (compact) and D() (discrete), so there are only finitely
many such . 
Definition. Let t . A Verma module M () with highest weight and highest weight
vector v is a universal module with highest weight . That is, given any other highest
weight module W with highest weight and highest weight vector w, there exists a unique
map M () W sending v 7 w.
Remark. Verma modules were also discovered by Harish-Chandra.
Proposition 10.2. Let t . Then
(1) There exists a unique Verma module M ().
(2) There exists a unique irreducible highest-weight module of weight , called L().
Proof. Uniqueness of M () is clear from universality. For existence, take
M () = Ug Ub C
where b = n + t and C is the b-module on which n+ v = 0 and t v = (t)v.
+

Remark. You should think of this as the induced representation.


More concretely,
M () = Ug/(left ideal generated by u (u) for all u Ub)
(where is extended to act on b by 0 on n+ ).
That is, M () is the module generated by g acting on 1 with relations {n+ 1 = 0, t 1 =
(t)1 for all t t} = J , and only the relations these imply.
If W is any other highest weight module of highest weight , then W = Ug/J for some
J J , i.e. M ()  W .
In particular, an irreducible highest-weight module must be of the form M ()/I() where
I() is a maximal proper submodule of M (). Since we showed that there exists a unique
maximal proper submodule, L() is unique. 
Proposition 10.3. Let R+ = {1 , . . . , r }. Then ek
1
1
. . . ek
r
v is a basis for M ().
r

Proof. Immediate from the difficult direction of PBW theorem. 


Corollary 10.4. Any irreducible finite-dimensional g-module is of the form L() for some
P +.
Proof. We know that the highest weight must lie in P + by sl2 theory. 
Example 10.1. g = sl2 . The Verma module M () looks like this:
...
v F v F 2 v
i.e. an infinite string.
Exercise 85. (1) Show that V () = L() unless Z0 .
(2) If Z0 , show that V () contains a unique proper submodule, spanned by F +1 v ,
F +2 v , . . . (which is itself a Verma module). Recover the finite-dimensional irre-
ducible representations.
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
49
k2
1111
0000
0000000
1111111
F
0000
1111
0000000
1111111
0000
1111
0000000
1111111
0000
1111
0000000
1111111
0000
1111
0000000
1111111
0000
1111
0000000
1111111
00000
11111 F k1
0000
1111
0000000
1111111
00000
11111
0000
1111
0000000
1111111
00000
11111
0000
1111
0000000
1111111
00000
11111
Figure 9. Some of the singular vectors in a representation of sl3 note that
the quotient isnt W -invariant, so there must be others.

Exercise 86. (1) If g is an arbitrary simple Lie algebra and (Hi ) Z0 , show that
(Hi )+1
Fi v is a singular vector of M (). (There will be others!)
(2) Compute ch M () for all Verma modules.
Lecture 19 We will now try to show that L() is finite-dimensional if P + .
Proposition 10.5. If P + , L() is integrable, i.e. Ei and Fi act locally nilpotently.
Proof. If V is any highest-weight module, Ei acts locally nilpotently (as Ei V V+i and
weights of V lie in D()). We must show that Fi acts locally nilpotently.
(H )+1
By the exercise, Fi i v is a singular vector. Since L() is irreducible, it has no
(H )+1
singular vectors other than v , so Fi i v = 0.
(1) ak b = ki=0 ki ((ad a)i b)aki
P 
Exercise 87.
(2) Using the above and the Serre relations (ad e )4 e = 0, show FiN e1 . . . er v = 0
for N  0 by induction on r.

Remark. In a general Kac-Moody algebra, the Serre relations are no longer (ad e )4 e = 0,
but the exponents are still bounded above by aij for the relevant i, j in the generalised
Cartan matrix.
Corollary 10.6. dim L() = dim L()w for all w W .
Proof. When we were proving Proposition 8.2, we only needed local nilpotence of Ei and
Fi to conclude invariance under w = si = si . Since W is generated by si , we conclude
invariance under all of W . 
Theorem 10.7 (Cartans theorem). If g is finite-dimensional and P + , then L() is
finite-dimensional.
Remark. This is clear pictorially L() is W -invariant and sits in the cone D(). However,
its faster to work through it formally.
Proof. First, we show that if R+ then e acts locally nilpotently on L(), i.e. all (not
just the simple) local sl2 s act integrably. In fact, en v = 0 for n = (, ) + 1, since
otherwise we have L()n 6= 0 and hence s ( n) is a weight in L(). However,
s ( n) = s () + n = (, ) + ((, ) + 1) = + >
and so cannot be a weight in L() as is the highest weight. Applying Exercise 87, we see
that e acts locally nilpotently on all of L().
50 IAN GROJNOWSKI

Therefore,
L() = U(n )v = {ek
1
1
. . . ek
r
v },
r
R+ = {1 , . . . , r }
is finite. 
Lemma 10.8. si (R+ \ {i }) = R+ \ {i }.
Proof. Let R+ \ {i }, = j6=i kj j + ki i with all kj 0 and some kj 6= 0. Note
P

that si () has the same coefficients on j for j 6= i. Since R = R+ t R+ , we see si () 6


R+ = si () R+ . 
Lemma 10.9. Recall = 21 R+ . Then (Hi ) = 1 for all i, i.e. = 1 + . . . + l .
P

Proof. Observe

1 1 X 1 1 X
si = si +
2 i 2 = i + = i .
6=
2 2 6=
i i
R+ R+

On the other hand, si = (, i )i , so (, i ) = 1 for all i. 


Lemma 10.10 (Key lemma). Let P + , + P + , . If | + | = | + | then
= .
Remark. This, again, is geometrically obvious from the fact that both D() and a sphere
are convex.
Proof.
0 = ( + , + ) ( + , + ) = ( , + ( + ) + )
Write = ki i , then since + ( + ) P + and (, i ) > 0 for all i, we see that we
P
must have ki = 0 for all i. 
Theorem 10.11 (Weyl complete reducibility). Let g be a semisimple Lie algebra, char k = 0,
k = k. Then every finite-dimensional g-module V is a direct sum of irreducibles.
Proof. Recall that V is completely reducible as an (sl2 ) -module, V = V .
+ +
Consider V n = {x V : n+ x = 0}. By Engels theorem, V n 6= 0.
+ + +
Since [t, n+ ] n+ , t acts on V n , so V n = P Vn .
L

+
Claim. For every v Vn , the module L = Ug v is irreducible.
Proof. L is a highest weight module with highest weight , so we must only show that it
has no other singular vectors. If is the weight of a singular vector in L, then , but
| + | = | + | by considering the action of the Casimir. Since V , and therefore L, is
finite-dimensional, we must have , P + . By the key lemma, = . 
+
Consequently, V 0 = Ug V n is completely reducible: if {v1 , . . . , vr } is a weight basis
+
for V n with weights 1 , . . . , r , then V 0 = L(1 ) . . . L(r ). It remains to show that
N = V /V 0 = 0.
+ +
Suppose not. N is finite-dimensional, so N n 6= 0. Let v Nn be a singular vector with
P + . Lift v to v V . Since v 6= 0 in N , we must have Ei v 6= 0 V+i for some i.
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
51
2
Note that Ei v = 0, so Ei v V 0 . Consequently, the Casimir acts on Ei v by +
||2 for some {1 , . . . , r }. On the other hand, since v was
singular in N , acts on v
2 2
by | + | || . Since commutes with Ei , we find + = | + |. Moreover, + i

P
is a weight in L(), so + i = kj j , i.e. < . This, however, contradicts the key
lemma. 
Remark. The usual proof (in particular, Weyls own proof) of this theorem involves lifting
the integrable action to the group, and using the maximal compact subgroup (for which we
have complete reducibility because it has positive, finite measure). Somehow, the key lemma
is the equivalent of a positive-definite metric.
Lemma 10.12. Let t and M () the Verma module. Then
e
ch M () = Q )
.
R+ (1 e

Lecture 20
Proof. Let R+ = {1 , . . . , r }. By PBW, thePbasis for M () is {ek
1
1
. . . ek
r
v } for ki Z0 ,
r
and the weight of such anPelement is ki i . Therefore, dim M () is the number
of
Q ways of writing as ki i over i R. This is precisely the coefficient of e in
1
R+ (1 ) . 
Write = R+ (1 e ). We have shown that ch M () = e /.
Q

Lemma 10.13. For all w W , w(e ) = det w e . Here, det : W Z/2 gives the
determinant of w acting on t (1).
Proof. Since W is generated by simple reflections, it suffices to check that si (e ) = e .
Indeed,

Y Y
i
(1 e ) = e .

si (e ) = si
e (1 e ) (1 e ) = ei
(1 + e+i
)
6=i 6=i
R+ R+


Lemma 10.14. (1) For any highest weight module V () with highest weight there
exist coefficients a 0, , such that
X
ch V () = a ch L()

|+|=|+|

with a = 1.
(2) There exist coefficients b Z with b = 1 such that
X
ch L() = b ch M ().

|+|=|+|

We write B() = { | | + | = | + |}. Recall that B() is a finite set.


Remark. We have shown that B() is finite when RR, but it is true in general.
52 IAN GROJNOWSKI

Proof. First, note that (1) implies (2). Indeed, totally order B() so that i j = i j.
Applying (1) to the Verma modules gives a system of equations relating ch M () and ch L()
which is upper-triangular with 1s on the diagonal. Inverting this system gives (2). (Note in
particular that the coefficients in (2) may be negative.)
We now prove (1). RecallPthat the weight spaces of a highest weight module are finite-
dimensional. We induct on B() dim V () .
If V () is irreducible, we clearly have (1). Otherwise, there exists B() with a
singular
P P v V () . Pick with the largest height of (recall the height of
vector
ki i is ki ). Then Ug v V () has no singular vectors, and is therefore irreducible,
L(). Setting V () = V ()/L(), we see that V () is a highest-weight module with a
P
smaller value of B() dim V () , and ch V () = ch V () + ch L(). 

We will now compute ch L() for P + .


We know
X X
ch L() = b ch M () = b e /.
B() B()

Now, w(ch L()) = ch L() for all w W , and w(e ) = det w e . Therefore,
X
e ch L() = b e+
B()

is W -antiinvariant. We can therefore rewrite


X X X
b e+ = b det w ew(+) .
B() orbits of W on B() + wW

Now, if RR (which is true, since P + ), then W (+) intersects {x RR|(x, i ) 0}


in exactly one point (this set is a fundamental domain for the W -action, and we wont have
boundary problems because (, i ) = 1 > 0). Therefore, we can take a dominant weight as
the orbit representative. Note, however, that there is only one dominant weight in B(),
namely, ! (This was the Key Lemma 10.10.) Therefore, we derive
Theorem 10.15 (Weyl Character Formula).

det w ew(+)
P
X
ch L() = wW
Q )
= det w ch M (w( + ) ).
e R+ (1 e wW

Example 10.2. Let g = sl2 , and write z = e/2 . Then C[P ] = C[z, z 1 ] and = z. We
have
ch L(m/2) = (z m+1 z (m+1) )/(z z 1 )
as before.
Remark. The second expression for the character of L() looks like an Euler characteristic.
In fact, it is: there is a complex of Verma modules whose cohomology are the irreducible
modules. This is known as the Bernstein-Gelfand-Gelfand resolution.
The first expression comes from something like a cohomology of coherent line bundles,
and is variously known as Euler-Grothendieck, Attiyah-Bott, and Attiyah-Singer.
INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS PART III MICHAELMAS 2010
53

Corollary 10.16. Since L(0) = C, we must have ch L(0) = 1, so


Y X
e (1 e ) = det w ew .
R+ wW

This is known as the Weyl denominator identity.


Exercise 88. Let g = sln . Show that the Weyl denominator identity is equivalent to the
Vandermonde determinant
1 1 ... 1

z1 z2 . . . zn Y
det .. . .. ... .. = (zj zi ),
. . i<j
z1n1 z2n1 . . . znn1
where we write zi = ei .
Corollary 10.17 (Weyl dimension formula).
Y
dim L() = (, + )/(, ).
R+

Example 10.3. g = sl3 of type A2 , R+ = {, , + } with = + = 1 + 2 . Let


= m1 1 + m2 2 , then
+
(, + ) m1 + 1 m2 + 1 m1 + m2 + 2
(, ) 1 1 2
Therefore, the dimension of L() is 1/2(m1 + 1)(m2 + 1)(m1 + m2 + 2).
Just to check that you too certainly know the dimensions of the irreducible representa-
tions, let me state the obvious exercise.
Exercise 89. Compute the dimensions of all the finite-dimensional irreducible representa-
tions of B2 and G2 .
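Since the dimension formula only needs the list of positive roots and the pairings with the fundamental weights, it is easy to evaluate by computer. The sketch below is our own code (not from the notes): it tabulates root data for A_2 and B_2 only, using (ω_i, α_j) = d_j δ_{ij} with d_j = (α_j, α_j)/2, and can be used to check Example 10.3 and the B_2 part of Exercise 89.

```python
# A sketch of the Weyl dimension formula
#   dim L(lambda) = prod_{alpha > 0} (alpha, lambda + rho) / (alpha, rho),
# evaluated from explicit root data.  lambda = sum_i m_i omega_i; each positive root
# is listed by its coefficients on the simple roots; d_j = (alpha_j, alpha_j)/2.
from fractions import Fraction

ROOT_DATA = {
    "A2": {"positive_roots": [(1, 0), (0, 1), (1, 1)], "d": (1, 1)},
    "B2": {"positive_roots": [(1, 0), (0, 1), (1, 1), (1, 2)], "d": (2, 1)},  # alpha_1 long, alpha_2 short
}

def weyl_dimension(root_system, m):
    """dim L(m_1 omega_1 + ... + m_l omega_l) via the Weyl dimension formula."""
    data = ROOT_DATA[root_system]
    dim = Fraction(1)
    for coeffs in data["positive_roots"]:
        numerator = sum(c * d * (mi + 1) for c, d, mi in zip(coeffs, data["d"], m))
        denominator = sum(c * d for c, d in zip(coeffs, data["d"]))
        dim *= Fraction(numerator, denominator)
    assert dim.denominator == 1
    return dim.numerator

print(weyl_dimension("A2", (1, 1)))   # adjoint of sl_3: 8
print(weyl_dimension("A2", (2, 1)))   # 15, as in Example 10.3
print(weyl_dimension("B2", (1, 0)))   # standard representation of so_5: 5
print(weyl_dimension("B2", (0, 1)))   # spin representation of so_5: 4
```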
Lecture 21
Remark. Let w ∈ W be written as w = s_{i_1} s_{i_2} ··· s_{i_r}, where the s_{i_k} are simple reflections. Then det w = (−1)^r. The minimal r such that w can be written in this form is called the length of w, denoted l(w). The Monoid Lemma asserts that you can get from one minimal-length expression for w to another by repeatedly applying the braid relations.
Exercise 90. l(w) = #{α ∈ R⁺ : w^{−1}α ∈ −R⁺} = l(w^{−1}).
We now go back to our proof.

Proof. We know ch L(λ) = Σ_μ dim L(λ)_μ e^μ ∈ C[P]. We would like to set e^μ ↦ 1, but that leads to a 0/0 expression in the Weyl character formula, which is hard to evaluate. Therefore, we will start by defining the homomorphisms F_ν : C[P] → C(q), e^μ ↦ q^{(μ,ν)}. We would like to evaluate F_0(ch L(λ)), since F_0(e^μ) = 1 for all μ.
We apply F_ν to the Weyl denominator identity:

  q^{(ρ,ν)} ∏_{α∈R⁺} (1 − q^{−(α,ν)}) = Σ_{w∈W} det(w) q^{(wρ,ν)} = Σ_{w∈W} det(w) q^{(ρ,wν)}

since det w = det w^{−1} and (x, wy) = (w^{−1}x, y) (i.e. the Weyl group is a subgroup of the orthogonal group of the inner product).
We now apply F_ν to the Weyl character formula:

  F_ν(ch L(λ)) = ( Σ_{w∈W} det(w) q^{(w(λ+ρ),ν)} ) / ( q^{(ρ,ν)} ∏_{α∈R⁺} (1 − q^{−(α,ν)}) ),

provided (ν, α) ≠ 0 for all α ∈ R⁺.
Take ν = ρ (recall that (ρ, α_i) > 0 for all simple roots α_i, so (ρ, α) > 0 for all α ∈ R⁺).
Then

  F_ρ(ch L(λ)) = Σ_μ dim L(λ)_μ q^{(μ,ρ)} = ( q^{(ρ,λ+ρ)} ∏_{α∈R⁺} (1 − q^{−(α,λ+ρ)}) ) / ( q^{(ρ,ρ)} ∏_{α∈R⁺} (1 − q^{−(α,ρ)}) ),

where we used our expression for the Weyl denominator identity (now with ν = λ+ρ) and applied it to the numerator.
Setting q = 1 and applying L'Hôpital's rule,

  dim L(λ) = ∏_{α∈R⁺} (λ+ρ, α) / (ρ, α).  □


Remark. You are now in a position to answer any question you choose to answer, if necessary by brute force. For example, let us decompose L(λ) ⊗ L(μ) = ⊕_ν m_{λμ}^ν L(ν); we want to compute the Littlewood-Richardson coefficients m_{λμ}^ν (recall that we had the Clebsch-Gordan rule for them in sl_2).
Define an involution f ↦ f̄ of Z[P] by e^λ ↦ e^{−λ}, and CT : Z[P] → Z with e^λ ↦ 0 unless λ = 0, and e^0 ↦ 1. Define (·,·) : Z[P] × Z[P] → Z by (f, g) = |W|^{−1} CT(f ḡ ∂ ∂̄), where ∂ = ∏_{α∈R⁺} (1 − e^{−α}) and ∂̄ = ∏_{α∈R⁺} (1 − e^{α}).

Claim. Let χ_λ = ch L(λ), χ_μ = ch L(μ); then (χ_λ, χ_μ) = δ_{λμ}. (If our Lie algebra has type A, then the χ_λ are the Schur functions, and this is their orthogonality.)
Given this claim, m_{λμ}^ν = (χ_λ χ_μ, χ_ν) is algorithmically computable. It was a great surprise to everyone when it was discovered that actually there is a simpler and more beautiful way of doing this.
Proof of claim.

  (χ_λ, χ_μ) = (1/|W|) CT( Σ_{x,w∈W} det(wx) e^{w(λ+ρ)} e^{−x(μ+ρ)} ).

However, CT(e^{w(λ+ρ)−x(μ+ρ)}) = δ_{wx} δ_{λμ}, since for λ, μ ∈ P⁺ we have x^{−1}w(λ+ρ) = μ+ρ ⟹ x = w, λ = μ. The |W| surviving pairs (w, w) each contribute det(w)² = 1. □
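For sl_2 the brute-force algorithm of the remark is easy to carry out by machine. The following sketch (our own code) encodes characters as Laurent polynomials in z = e^ω, computes the pairing (f, g) exactly as defined above, and recovers the Clebsch-Gordan decomposition of L(2) ⊗ L(3).

```python
# Brute-force tensor product decomposition for sl_2, following the remark above:
# characters are Laurent polynomials in z = e^{omega}, stored as dicts
# {exponent: coefficient}, and (f, g) = (1/|W|) CT(f * conj(g) * d * conj(d)),
# where d = 1 - e^{-alpha} = 1 - z^{-2} and |W| = 2.
from collections import defaultdict

def mul(f, g):
    """Multiply two Laurent polynomials given as {exponent: coefficient} dicts."""
    h = defaultdict(int)
    for a, ca in f.items():
        for b, cb in g.items():
            h[a + b] += ca * cb
    return dict(h)

def conj(f):
    """The involution e^mu -> e^{-mu}, i.e. z^k -> z^{-k}."""
    return {-a: c for a, c in f.items()}

def character(n):
    """ch L(n) = z^n + z^{n-2} + ... + z^{-n} for sl_2."""
    return {n - 2 * k: 1 for k in range(n + 1)}

DELTA = {0: 1, -2: -1}          # 1 - z^{-2}

def inner_product(f, g):
    """(f, g) = (1/|W|) CT(f * conj(g) * DELTA * conj(DELTA))."""
    product = mul(mul(f, conj(g)), mul(DELTA, conj(DELTA)))
    constant_term = product.get(0, 0)
    assert constant_term % 2 == 0
    return constant_term // 2

# Decompose L(2) (x) L(3): expect the Clebsch-Gordan answer L(5) + L(3) + L(1).
product_character = mul(character(2), character(3))
multiplicities = {n: inner_product(product_character, character(n)) for n in range(6)}
print(multiplicities)   # {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}
```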

11. Principal sl2

Define ρ^L ∈ t* by (ρ^L, α_i) = 1 for all simple roots α_i. (Compare with ρ, which had (ρ, α_i^∨) = 1.)

Exercise 91. Show ρ^L = (1/2) Σ_{α∈R⁺} α^∨. In particular, if R is simply laced, ρ = ρ^L.

Then (ρ^L, β) = ht(β) = Σ k_i if we write β = Σ k_i α_i.

Exercise 92.

  F_{ρ^L}(ch L(λ)) = q^{(λ,ρ^L)} ∏_{α∈R⁺} (1 − q^{−(λ+ρ, α^∨)}) / (1 − q^{−(ρ, α^∨)}).

[Hint: apply F to the Weyl denominator identity for the Langlands dual Lie algebra, with root system R^∨.]
Note that (λ+ρ, α^∨)/(ρ, α^∨) = (λ+ρ, α)/(ρ, α) as α^∨ = 2α/(α,α), so we recover the same formula for the Weyl dimension.
Definition. We call F_{ρ^L}(ch L(λ)) the q-dimension of L(λ), denoted dim_q L(λ).
Proposition 11.1. dim_q L(λ) is a unimodal polynomial. More specifically, it lies in N[q^2, q^{−2}]^{Z/2} or q·N[q^2, q^{−2}]^{Z/2} (depending on its degree), where Z/2 acts by q ↦ q^{−1}, and the coefficients decrease as the absolute value of the degree gets bigger.
Proof. We will show that dim_q L(λ) is the character of an sl_2-module in which all strings have the same parity.
Set H = 2ν^{−1}(ρ^L) ∈ t ⊂ g, and set E = Σ_i E_i.

Exercise 93. Check that [H, E] = 2E.

Writing H = Σ c_i H_i with c_i ∈ C, set F = Σ c_i F_i.

Exercise 94. (1) Show that E, F, H generate a copy of sl_2. This is called the principal sl_2.
(2) Show that if μ is a weight of L(λ), then (μ, 2ρ^L) ≡ (λ, 2ρ^L) mod 2.
(3) Derive the proposition.
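For g = sl_3 (with the usual trace-form normalisation) the construction in Exercises 93-94 can be checked with explicit 3×3 matrices: E = E_1 + E_2, H = diag(2, 0, −2), and the coefficients work out to c_1 = c_2 = 2. The numpy check below is our own verification, not part of the notes.

```python
# Quick matrix check (ours) of the principal sl_2 inside sl_3:
# E = E_1 + E_2, H = 2 nu^{-1}(rho^L) = diag(2, 0, -2), F = 2 F_1 + 2 F_2.
import numpy as np

def elementary(i, j, size=3):
    m = np.zeros((size, size))
    m[i, j] = 1.0
    return m

E = elementary(0, 1) + elementary(1, 2)          # sum of simple raising operators
F = 2 * elementary(1, 0) + 2 * elementary(2, 1)  # c_i chosen so that [E, F] = H
H = np.diag([2.0, 0.0, -2.0])

def bracket(a, b):
    return a @ b - b @ a

assert np.allclose(bracket(H, E), 2 * E)     # [H, E] = 2E
assert np.allclose(bracket(H, F), -2 * F)    # [H, F] = -2F
assert np.allclose(bracket(E, F), H)         # [E, F] = H
print("E, F, H span a principal sl_2 inside sl_3")
```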

Exercise 95. Write [n] = (q^n − 1)/(q − 1), and [n]! = [n][n−1]···[1]. Show that the following polynomials are unimodal:

  [n choose k]_q = [n]! / ([k]! [n−k]!)    and    (1 + q)(1 + q^2) ··· (1 + q^n).

[Hint: for the first, apply the above arguments to g = sl_n and V = S^k C^n or Λ^k C^{n+k}. For the second, apply it to the spin representation of B_n = so_{2n+1}; we will define the spin representation next time.]
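The first family is easy to test numerically (a sanity check, of course, not a proof). The short script below (ours) computes the Gaussian binomials via the q-Pascal rule [n, k]_q = [n−1, k−1]_q + q^k [n−1, k]_q and checks symmetry and unimodality for all n ≤ 8.

```python
# Check (ours) that the Gaussian binomials [n choose k]_q are symmetric and unimodal.
def q_binomial(n, k):
    """[n choose k]_q as a list of coefficients (constant term first),
    computed via the q-Pascal rule [n,k] = [n-1,k-1] + q^k [n-1,k]."""
    if k < 0 or k > n:
        return [0]
    if k == 0 or k == n:
        return [1]
    left = q_binomial(n - 1, k - 1)
    right = q_binomial(n - 1, k)
    coeffs = [0] * max(len(left), len(right) + k)
    for exponent, c in enumerate(left):
        coeffs[exponent] += c
    for exponent, c in enumerate(right):
        coeffs[exponent + k] += c
    return coeffs

def is_symmetric_unimodal(coeffs):
    half = coeffs[: (len(coeffs) + 1) // 2]
    return coeffs == coeffs[::-1] and all(
        half[i] <= half[i + 1] for i in range(len(half) - 1))

for n in range(9):
    for k in range(n + 1):
        assert is_symmetric_unimodal(q_binomial(n, k)), (n, k)
print("Gaussian binomials [n choose k]_q with n <= 8 are symmetric and unimodal")
```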
Remark. When is L(λ) ≅ L(λ)*? Precisely when the lowest weight of L(λ) is −λ. The lowest weight of L(λ) is w_0λ, where w_0 is the (unique) maximal-length element of W, which sends all the positive roots to negative ones.
Now, for a representation V which is isomorphic to its dual V*, we get a bilinear form on V. This bilinear form is either symmetric or antisymmetric. By the above, we can determine which it is by laboriously computing characters; but can we tell simply by looking at λ?
The answer turns out to be yes. Indeed, the form will pair the highest-weight and the lowest-weight vectors, and if we look at the principal sl_2, there is a string from the highest-weight to the lowest-weight vector. Now, on the principal sl_2 it is quite easy to see that the form is antisymmetric if the representation is L(n) with n odd, and symmetric if it is L(n) with n even. Thus, all we need to know is (the parity of) (λ, 2ρ^L). This is called the Schur indicator.

Lecture 22
Exercise 96. Compute dim_q L(λ), where L(λ) is the adjoint representation, for G_2, A_2, and B_2. Then do it for all the classical groups. You will notice that L(λ)|_{principal sl_2} = L(2e_1) ⊕ ··· ⊕ L(2e_l), where l = rank g = dim t, e_1, ..., e_l ∈ N, and e_1 = 1. The e_i are called the exponents of the Weyl group, and the order of the Weyl group is |W| = (e_1 + 1)···(e_l + 1). Compute |W| for E_8.

12. Crystals
Let g be a semisimple Lie algebra, let α_1, ..., α_l be the simple roots, and let P be the weight lattice.
Definition. A crystal is a set B, 0 ∉ B, together with functions wt : B → P, e_i : B → B ⊔ {0}, f_i : B → B ⊔ {0} such that
(1) If e_i b ≠ 0, then wt(e_i(b)) = wt(b) + α_i; if f_i b ≠ 0, then wt(f_i(b)) = wt(b) − α_i.
(2) For b, b' ∈ B, e_i b = b' if and only if b = f_i b'.
(3) Before we state the third defining property, we need to introduce slightly more notation.
We can draw B as a graph. The vertices are b ∈ B, and there is an edge b →(i) b' if e_i b' = b. We say that this edge is coloured by i. This graph is known as the crystal graph.
Example 12.1. In sl_2 the string

  n → n−2 → n−4 → ··· → −n

is a crystal, where the vertex labelled i has weight iα/2.
Define ε_i(b) = max{n ≥ 0 : e_i^n(b) ≠ 0} and φ_i(b) = max{n ≥ 0 : f_i^n(b) ≠ 0}; that is, ε_i(b) counts the i-coloured arrows from b to the top of its i-string, and φ_i(b) the arrows down to the bottom.
In sl_2, the sum ε_i(b) + φ_i(b) is the length of the string. On the other hand, if we are in the highest-weight representation L(n) = L(nω) and wt(b) = n − 2k, then ε(b) = k and φ(b) = n − k, i.e. φ(b) − ε(b) = ⟨wt(b), α^∨⟩. The third defining property of crystals is that this happens in general:
(3) φ_i(b) − ε_i(b) = ⟨wt(b), α_i^∨⟩ for all i.
Define B_μ = {b ∈ B : wt(b) = μ}.
Definition. For B_1 and B_2 crystals, we define the tensor product B_1 ⊗ B_2 as follows: as a set, B_1 ⊗ B_2 = B_1 × B_2, wt(b_1 ⊗ b_2) = wt(b_1) + wt(b_2), and

  e_i(b_1 ⊗ b_2) = (e_i b_1) ⊗ b_2  if φ_i(b_1) ≥ ε_i(b_2),  and  b_1 ⊗ (e_i b_2)  if φ_i(b_1) < ε_i(b_2);
  f_i(b_1 ⊗ b_2) = (f_i b_1) ⊗ b_2  if φ_i(b_1) > ε_i(b_2),  and  b_1 ⊗ (f_i b_2)  if φ_i(b_1) ≤ ε_i(b_2).

That is, we are trying to recreate the picture in Figure 10 in each colour.
Exercise 97. Check that B_1 ⊗ B_2 as defined above is a crystal.
Exercise 98. B_1 ⊗ (B_2 ⊗ B_3) ≅ (B_1 ⊗ B_2) ⊗ B_3 via b_1 ⊗ (b_2 ⊗ b_3) ↦ (b_1 ⊗ b_2) ⊗ b_3.

Figure 10. Crystal for sl_2: the tensor product of a string with m + 1 dots and a string with n + 1 dots.

Remark. As we see above, the tensor product is associative. In general, however, it is not
commutative.
Definition. B* is the crystal obtained from B by reversing the arrows. That is, B* = {b* : b ∈ B}, wt(b*) = −wt(b), ε_i(b*) = φ_i(b) (and vice versa), and e_i(b*) = (f_i b)* (and vice versa).
Exercise 99. (B_1 ⊗ B_2)* ≅ B_2* ⊗ B_1*.
Remark. We will see in a moment precisely how we can attach a crystal to a basis of a representation. If B parametrises a basis for V, then B* parametrises a basis for V*. The condition on the weights can be seen by noting that if L(λ) has highest weight λ, then L(λ)* has lowest weight −λ.
Theorem 12.1 (Kashiwara). Let L(λ) be an irreducible highest-weight representation with highest weight λ ∈ P⁺. Then
(1) There exists a crystal B(λ) whose elements are in one-to-one correspondence with a basis of L(λ), and elements of B(λ)_μ parametrise a basis of L(λ)_μ, so that

  ch L(λ) = Σ_{b∈B(λ)} e^{wt(b)}.

(2) For any simple root α_i (i.e. a simple sl_2 ⊂ g), the decomposition of L(λ) as an (sl_2)_i-module is given by the i-coloured strings in B(λ). (In particular, as an uncoloured graph, B(λ) is connected, since L(λ) is spanned by the vectors (∏ f_i) v_λ.)
(3) B(λ) ⊗ B(μ) is the crystal for L(λ) ⊗ L(μ), i.e. B(λ) ⊗ B(μ) decomposes into connected components in the same way as L(λ) ⊗ L(μ) decomposes into irreducible representations.
Remark. We will mention three proof approaches below. The first of them is due to Kashiwara, and is in the lecturer's opinion the most instructive.
Example 12.2. Let g = sl_3, V = C^3 = L(ω_1). We compute V ⊗ V and V ⊗ V*.
Remark. Note that while the crystals give the decomposition of the representation into irreducibles, they do not correspond directly to a basis. That is, there is no sl_2-invariant basis that we could use here. Kashiwara's proof of the theorem uses the quantum group U_q g, which is an algebra over C[q, q^{−1}] and a deformation of the universal enveloping algebra Ug. The two have the same representations, but over C[q, q^{−1}] there is a very nice basis which satisfies E_i b = e_i b + q·(some mess). Therefore, setting q = 0 ("freezing") will give the crystal.

Figure 11. Crystals for V ⊗ V and V ⊗ V*. Observe V ⊗ V = S²V ⊕ Λ²V = S²V ⊕ V*, and V ⊗ V* = C ⊕ sl_3.

Lusztig's proof uses some of the same ideas (I didn't catch how it goes).
Soon, we will look at Littelmann paths, which give a purely combinatorial way of proving
this theorem (which, on the face of it, is a purely combinatorial statement).
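The tensor product rule is completely mechanical, so Example 12.2 can also be checked by a short program. The sketch below (our own code and data structures; a crystal is stored simply as its vertex set together with its f_i arrows) builds B(ω_1) for sl_3, forms B(ω_1) ⊗ B(ω_1) and B(ω_1) ⊗ B(ω_1)*, and reports the sizes of the connected components, matching Figure 11.

```python
# A generic implementation (ours) of the crystal tensor product rule, used to check
# Example 12.2 for g = sl_3: B(omega_1) (x) B(omega_1) has components of sizes 6 and 3,
# and B(omega_1) (x) B(omega_1)* has components of sizes 8 and 1.
INDICES = (1, 2)                                   # colours for sl_3

def standard_crystal():
    """B(omega_1): 1 --1--> 2 --2--> 3, stored as (elements, f) with f[(i, b)] = f_i(b)."""
    return [1, 2, 3], {(1, 1): 2, (2, 2): 3}

def dual_crystal(crystal):
    """Reverse all arrows; b* is written ('*', b)."""
    elements, f = crystal
    dual_f = {(i, ('*', f[(i, b)])): ('*', b) for (i, b) in f}
    return [('*', b) for b in elements], dual_f

def eps_phi(crystal, i, b):
    """(eps_i(b), phi_i(b)): distances to the top and bottom of the i-string through b."""
    _, f = crystal
    e = {(colour, target): source for (colour, source), target in f.items()}
    eps = phi = 0
    x = b
    while (i, x) in e:
        x, eps = e[(i, x)], eps + 1
    x = b
    while (i, x) in f:
        x, phi = f[(i, x)], phi + 1
    return eps, phi

def tensor(crystal1, crystal2):
    """Tensor product crystal, using the rule for f_i stated above."""
    elements1, f1 = crystal1
    elements2, f2 = crystal2
    elements = [(b1, b2) for b1 in elements1 for b2 in elements2]
    f = {}
    for i in INDICES:
        for b1, b2 in elements:
            _, phi1 = eps_phi(crystal1, i, b1)
            eps2, _ = eps_phi(crystal2, i, b2)
            if phi1 > eps2 and (i, b1) in f1:
                f[(i, (b1, b2))] = (f1[(i, b1)], b2)
            elif phi1 <= eps2 and (i, b2) in f2:
                f[(i, (b1, b2))] = (b1, f2[(i, b2)])
    return elements, f

def component_sizes(crystal):
    """Sizes of the connected components of the (uncoloured) crystal graph."""
    elements, f = crystal
    edges = {}
    for (_, source), target in f.items():
        edges.setdefault(source, set()).add(target)
        edges.setdefault(target, set()).add(source)
    seen, sizes = set(), []
    for start in elements:
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:
            x = stack.pop()
            if x in component:
                continue
            component.add(x)
            stack.extend(edges.get(x, ()))
        seen |= component
        sizes.append(len(component))
    return sorted(sizes)

B = standard_crystal()
print(component_sizes(tensor(B, B)))                  # [3, 6]
print(component_sizes(tensor(B, dual_crystal(B))))    # [1, 8]
```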
Definition. A crystal is called integrable if it is the crystal of a highest-weight module with highest weight λ ∈ P⁺.
For two integrable crystals B_1, B_2, we do in fact have B_1 ⊗ B_2 ≅ B_2 ⊗ B_1 (in general, this is false).
Remark. There is a combinatorial condition due to Stembridge which determines whether
a crystal is integrable; it is a degeneration of the Serre relations, but we will not state it
precisely.
Lecture 23

Consider the crystal for the standard representation of sl_n, L(ω_1) = C^n:

  1 →(1) 2 →(2) 3 →(3) ··· →(n−1) n,

where the vertex k has weight e_k (so the highest-weight vertex 1 has weight e_1 = ω_1).

We can use this to construct the crystals for all representations of sl_n as follows.
Let λ ∈ P⁺, λ = k_1 ω_1 + ··· + k_{n−1} ω_{n−1}. Then L(λ) must be a summand of L(ω_1)^{⊗k_1} ⊗ ··· ⊗ L(ω_{n−1})^{⊗k_{n−1}}, since the highest weight of this representation is λ (with highest weight vector the tensor product of the v_i^{⊗k_i}, where v_i is the highest-weight vector of L(ω_i)). Moreover, L(ω_i) = Λ^i C^n is a summand of (C^n)^{⊗i}. We conclude that L(λ) occurs in some (C^n)^{⊗N} for N ≥ 0. Therefore, the crystal for C^n together with the rule for taking tensor products of crystals determines the crystal of every representation of sl_n.
We introduce the semi-standard Young tableaux of a representation. (This is due to Hodge, Schur, and Young.) Write

  B(ω_1) = 1 →(1) 2 →(2) 3 →(3) ··· →(n−1) n

for the crystal of the standard representation C^n. For i < n let

  b_i = 1 ⊗ 2 ⊗ ··· ⊗ i ∈ B(ω_1)^{⊗i}.

(This corresponds to the vector v_1 ∧ v_2 ∧ ··· ∧ v_i ∈ Λ^i C^n, where the v_k are the basis vectors of C^n.)
Exercise 100. (1) b_i is a highest-weight vector in B(ω_1)^{⊗i} of weight ω_i = e_1 + ··· + e_i. (Recall that b ∈ B is a highest-weight vector if e_j b = 0 for all j.) Hence, the connected component of B(ω_1)^{⊗i} containing b_i is B(ω_i).

(2) This connected component B(ω_i) consists precisely of

  { a_1 ⊗ a_2 ⊗ ··· ⊗ a_i : 1 ≤ a_1 < a_2 < ··· < a_i ≤ n } ⊂ B(ω_1)^{⊗i}.

We write elements of the form a_1 ⊗ a_2 ⊗ ··· ⊗ a_i as columns of boxes, with entries a_1, a_2, ..., a_i from top to bottom. For example, the highest weight vector is the column with entries 1, 2, ..., i.
Let λ = Σ k_i ω_i. Embed B(λ) ↪ B(ω_1)^{⊗k_1} ⊗ B(ω_2)^{⊗k_2} ⊗ ··· ⊗ B(ω_{n−1})^{⊗k_{n−1}} by mapping the highest weight vector b_λ ↦ b_1^{⊗k_1} ⊗ ··· ⊗ b_{n−1}^{⊗k_{n−1}} (which is a highest-weight vector in the image). Now, we can represent any element of B(ω_1)^{⊗k_1} ⊗ ··· ⊗ B(ω_{n−1})^{⊗k_{n−1}} by a sequence of column vectors as in Figure 12, where the entries in the boxes are strictly increasing down columns and (non-strictly) decreasing along rows.

Figure 12. Generic element of B(ω_1)^{⊗k_1} ⊗ ··· ⊗ B(ω_{n−1})^{⊗k_{n−1}}, aka a semi-standard Young tableau: k_i columns of length i for each i. Here, k_{n−1} = 3, k_{n−2} = 2, k_{n−3} = 1, ..., k_2 = 4, k_1 = 3.

Definition. A semi-standard Young tableau of shape λ is an array of numbers with dimensions as above, such that the numbers strictly increase down columns, and (non-strictly) decrease along rows.

Theorem 12.2 (Exercise). (1) The elements of the connected component B(λ) ⊂ B(ω_1)^{⊗k_1} ⊗ ··· ⊗ B(ω_{n−1})^{⊗k_{n−1}} are precisely the semi-standard Young tableaux of shape λ.
(2) Describe the action of e_i and f_i explicitly in terms of tableaux.
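Part (1) can be tested against the Weyl dimension formula by brute force. The sketch below (ours, not from the notes) counts, for g = sl_3, fillings of the shape attached to λ = m_1 ω_1 + m_2 ω_2 with entries in {1, 2, 3}; we use the more common convention (rows weakly increasing, columns strictly increasing), which is the mirror image of the convention in Figure 12 and gives the same count.

```python
# Brute-force check (ours): the number of semi-standard Young tableaux of the shape
# attached to lambda = m1*omega_1 + m2*omega_2 (rows of lengths m1+m2 and m2), with
# entries in {1,2,3}, equals the sl_3 Weyl dimension (m1+1)(m2+1)(m1+m2+2)/2.
from itertools import product

def tableau_count_sl3(m1, m2):
    row_lengths = [m1 + m2, m2]
    cells = [(r, c) for r, length in enumerate(row_lengths) for c in range(length)]
    count = 0
    for values in product((1, 2, 3), repeat=len(cells)):
        T = dict(zip(cells, values))
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)] for (r, c) in cells if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)] for (r, c) in cells if (r + 1, c) in T)
        count += rows_ok and cols_ok
    return count

for m1, m2 in [(1, 0), (0, 1), (1, 1), (2, 1), (2, 2)]:
    weyl = (m1 + 1) * (m2 + 1) * (m1 + m2 + 2) // 2
    assert tableau_count_sl3(m1, m2) == weyl
    print((m1, m2), weyl)
```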

We will now construct the Young tableaux for all the classical Lie algebras.

Example 12.3.
so_{2n+1}, i.e. type B_n: for the standard representation C^{2n+1} we have the crystal

  1 →(1) 2 →(2) 3 →(3) ··· →(n−1) n →(n) 0 →(n) \bar{n} →(n−1) ··· →(2) \bar{2} →(1) \bar{1}.

so_{2n} (type D_n): the crystal of the standard representation C^{2n} is

  1 →(1) 2 →(2) ··· →(n−2) n−1, which branches as n−1 →(n−1) n and n−1 →(n) \bar{n};
  then n →(n) \bar{n−1} and \bar{n} →(n−1) \bar{n−1}; and finally \bar{n−1} →(n−2) ··· →(2) \bar{2} →(1) \bar{1}.

sp_{2n} (type C_n): the crystal of the standard representation C^{2n} is

  1 →(1) 2 →(2) ··· →(n−1) n →(n) \bar{n} →(n−1) ··· →(2) \bar{2} →(1) \bar{1}.

Exercise 101. (1) Check that the crystals of the standard representations are as claimed.
(2) What subcategory of the category of representations of g do these representations generate?
Consider the highest weight ω of the standard representation. This gives an element of P/Q ≅ Z(G), a finite group (where G is the simply connected Lie group attached to g). Consider the subgroup ⟨ω⟩ ⊂ P/Q. We cannot obtain all of the representations unless P/Q is cyclic and generated by ω. For the classical examples we have P/Q = Z/2 × Z/2 for D_{2n}, Z/4 for D_{2n+1}, and Z/2 for B_n and C_n.
(3) (Optional) Write down a combinatorial set, like the Young tableaux above, that realises the crystal B(λ), built from the crystal of the standard representation.
For B_n we have one more representation, the spin representation. Recall the Dynkin diagram for B_n:

  B_n:  α_1 - α_2 - ··· - α_{n−1} ⇒ α_n   (α_i = e_i − e_{i+1} for i < n, and α_n = e_n is the short root).

Definition. Let ω_1, ..., ω_n be the dual basis to α_1^∨, ..., α_n^∨ (the fundamental weights). We call L(ω_n) the spin representation.
Exercise 102. Use the Weyl dimension formula to show that dim L(ω_n) = 2^n.
Define B = {(i_1, ..., i_n) : i_j = ±1} with wt((i_1, ..., i_n)) = (1/2) Σ_j i_j e_j ∈ P, and, for 1 ≤ j ≤ n−1,

  e_j(i_1, ..., i_n) = (i_1, ..., +1_j, −1_{j+1}, ..., i_n)  if (i_j, i_{j+1}) = (−1, +1),  and 0 otherwise,

and

  e_n(i_1, ..., i_n) = (i_1, ..., i_{n−1}, +1)  if i_n = −1,  and 0 otherwise

(note that in particular e_j^2 = 0 always).
Claim. This is the crystal of the spin representation L(ω_n).
Remark. Note that dim L(ω_n) = 2^n = dim Λ^•C^n, the full exterior algebra. This is for a very good reason. Indeed, gl_n ↪ so_{2n+1} via

  A ↦ ( A   0   0
        0   0   0
        0   0  −J A^T J^{−1} )

(a block matrix with blocks of sizes n, 1, n), and L(ω_n)|_{gl_n} = Λ^•C^n.

Exercise 103. Check that B|_{gl_n} is the crystal of Λ^•C^n.
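The Claim can be partially checked by machine. The sketch below (our own code) verifies that the sign-vector crystal above has 2^n elements, that each e_i raises the weight by α_i (with α_i = e_i − e_{i+1} for i < n and α_n = e_n), and that the crystal graph is connected, for small n.

```python
# Checks (ours) on the spin crystal of B_n: 2^n elements, e_i raises the weight by
# alpha_i whenever it is defined, and the crystal graph is connected.
from fractions import Fraction
from itertools import product

def weight(signs):
    return tuple(Fraction(s, 2) for s in signs)

def simple_root(i, n):
    alpha = [0] * n
    if i < n:
        alpha[i - 1], alpha[i] = 1, -1   # alpha_i = e_i - e_{i+1}
    else:
        alpha[n - 1] = 1                 # alpha_n = e_n
    return tuple(alpha)

def e(i, signs):
    n = len(signs)
    if i < n and (signs[i - 1], signs[i]) == (-1, +1):
        return signs[:i - 1] + (+1, -1) + signs[i + 1:]
    if i == n and signs[n - 1] == -1:
        return signs[:n - 1] + (+1,)
    return None

def check_spin_crystal(n):
    B = list(product((-1, +1), repeat=n))
    assert len(B) == 2 ** n
    for signs in B:
        for i in range(1, n + 1):
            raised = e(i, signs)
            if raised is not None:
                difference = tuple(a - b for a, b in zip(weight(raised), weight(signs)))
                assert difference == simple_root(i, n)
    # connectivity: every element reaches the highest weight element (+1, ..., +1)
    for signs in B:
        current = signs
        while any(s == -1 for s in current):
            current = next(e(i, current) for i in range(1, n + 1)
                           if e(i, current) is not None)
        assert current == tuple([+1] * n)
    print(f"n = {n}: spin crystal has {2 ** n} elements and is connected")

for n in (2, 3, 4):
    check_spin_crystal(n)
```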


For type D_n, the situation is more complicated. We can define representations

  V^+ = L(ω_n),    V^− = L(ω_{n−1});

these are called the half-spin representations. The corresponding crystals are B^± = {(i_1, ..., i_n) : i_j = ±1}, with ∏_j i_j = +1 for B^+ and ∏_j i_j = −1 for B^−. We define the weight and most of the e_i, f_i as before, with the change that

  e_n(i_1, ..., i_n) = (i_1, ..., i_{n−2}, +1, +1)  if (i_{n−1}, i_n) = (−1, −1),  and 0 otherwise.
Lecture 24

13. Littelmann paths

Set P_R = P ⊗_Z R. By a path we mean a piecewise-linear continuous map π : [0, 1] → P_R. We consider paths up to reparametrisation, i.e. π = π ∘ φ, where φ : [0, 1] → [0, 1] is a piecewise-linear isomorphism. (That is, we care about the trajectory, not about the speed at which we traverse the path.)
Let P = {paths π such that π(0) = 0, π(1) ∈ P}. Define a crystal structure on P as follows: we set

  wt(π) = π(1).

To define e_i(π), let

  h_i = min( Z ∩ {⟨π(t), α_i^∨⟩ : 0 ≤ t ≤ 1} ) ≤ 0.

That is, h_i is the smallest integer attained by ⟨π(t), α_i^∨⟩ (note that since π(0) = 0, we have h_i ≤ 0).
If h_i = 0, we set e_i(π) = 0 (this is not the path that stays at 0, but rather the extra element 0 of the crystal).
If h_i < 0, take the smallest t_1 > 0 such that ⟨π(t_1), α_i^∨⟩ = h_i. Then take the largest t_0 < t_1 such that ⟨π(t_0), α_i^∨⟩ = h_i + 1.
The idea is to reflect the segment of the path between t_0 and t_1 in the hyperplane ⟨·, α_i^∨⟩ = h_i + 1. Therefore, we define the path e_i(π) as follows:

  e_i(π)(t) = π(t)                                                                for 0 ≤ t ≤ t_0,
  e_i(π)(t) = π(t_0) + s_i(π(t) − π(t_0)) = π(t) − ⟨π(t) − π(t_0), α_i^∨⟩ α_i      for t_0 ≤ t ≤ t_1,
  e_i(π)(t) = π(t) + α_i                                                           for t_1 ≤ t ≤ 1.

See Figure 13.
Exercise 104. Show that ε_i(π) = −h_i.
Example 13.1. Let's compute the crystals of some representations of sl_2. Writing (0 → μ) for the straight-line path from 0 to μ, we have

  e_i(0 → −α_i/2) = (0 → α_i/2),    e_i(0 → α_i/2) = 0,

and

  e_i(0 → −α_i) = (0 → −α_i/2 → 0),    e_i(0 → −α_i/2 → 0) = (0 → α_i),

where (0 → −α_i/2 → 0) is the path that goes down to −α_i/2 and then returns to 0.

If π is a path, let π* be the reversed path, i.e. π*(t) = π(1 − t) − π(1). Define

  f_i(π) = (e_i(π*))*.
Exercise 105. With the above definitions, P is a crystal.

Figure 13. The hyperplanes ⟨·, α_i^∨⟩ = h_i and h_i + 1, and the endpoints π(1) and e_i(π)(1). π is marked in black; e_i(π), from the point where it deviates from π, is marked in red.

Define P^+ = {paths π such that π[0, 1] ⊂ P_R^+ = {x ∈ P_R : ⟨x, α_i^∨⟩ ≥ 0 for all i}}. If π ∈ P^+, then e_i(π) = 0 for all i.
For π ∈ P^+ let B_π be the subcrystal of P generated by π (i.e. B_π = {f_{i_1} f_{i_2} ··· f_{i_r} π}).
Theorem 13.1 (Littelmann).
(1) If π, π' ∈ P^+ then B_π ≅ B_{π'} as crystals iff π(1) = π'(1).
(2) There is a unique isomorphism between this crystal and B(π(1)), the crystal of the irreducible representation L(π(1)). (Of course, the isomorphism sends π to the highest-weight element of B(π(1)).)
(3) Moreover, for the path π_λ(t) = tλ, λ ∈ P⁺, Littelmann gives an explicit combinatorial description of the paths in B_{π_λ}.
Example 13.2. Let's compute the crystal of the adjoint representation of sl_3, starting from the straight-line path to α + β.
Exercise 106. Compute the rest of the crystal, and check that you get the adjoint representation of sl_3.
The simplest nontrivial example is G_2, the automorphism group of the octonions.
Exercise 107. Compute the crystal of the 7-dimensional and the 14-dimensional (adjoint)
representation of G2 . Compute their tensor product.
The tensor product of crystals has a very nice (and natural!) realisation in terms of
concatenating paths:
Definition. For π_1, π_2 ∈ P, define π_1 * π_2 to be the concatenation (traverse π_1, then translate π_2 to start at π_1(1) and traverse it).
Exercise 108. * : P ⊗ P → P is a morphism of crystals.
That is, the tensor product of two crystals simply consists of the concatenated paths in them.

Remark. We have now defined B(λ) explicitly without using L(λ). One can prove the Weyl character formula

  ch B(λ) = ( Σ_{w∈W} det(w) e^{w(λ+ρ)} ) / ( e^ρ ∏_{α∈R⁺} (1 − e^{−α}) )

from this, without referring to L(λ) or quantum groups. To do this, one builds ch L(λ) and L(λ) itself one root at a time, and uses the Demazure character formula.
More specifically: for w ∈ W consider L_w(λ), the n⁺-submodule of L(λ) generated by v_{wλ}, where v_λ is the highest weight vector. (Equivalently, it is the n⁺-submodule generated by the 1-dimensional space L(λ)_{wλ}.)
Theorem 13.2 (Demazure character formula).

  ch L_w(λ) = D_w(e^λ),

where we write w = s_{i_1} ··· s_{i_r} as a reduced (i.e. shortest) decomposition, and set D_w = D_{s_{i_1}} ··· D_{s_{i_r}}, with

  D_{s_i}(f) = (f − e^{−α_i} s_i(f)) / (1 − e^{−α_i}) = (1 + s_i)( f · (1 − e^{−α_i})^{−1} ) = (f e^{α_i/2} − s_i(f e^{α_i/2})) / (e^{α_i/2} − e^{−α_i/2}).

That is,

  D_{s_i}(e^λ) = e^λ + e^{λ−α_i} + ··· + e^{s_iλ},       if ⟨λ, α_i^∨⟩ ≥ 0,
  D_{s_i}(e^λ) = 0,                                       if ⟨λ, α_i^∨⟩ = −1,
  D_{s_i}(e^λ) = −(e^{λ+α_i} + ··· + e^{s_iλ−α_i}),       if ⟨λ, α_i^∨⟩ < −1.
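For g = sl_2 and w = w_0 = s, the formula says D_s(e^λ) = ch L(λ), and the operator is easy to implement with sympy in the variable z = e^{α/2} of Example 10.2. The check below is our own sketch, not part of the notes.

```python
# A tiny sympy check (ours) of the Demazure operator for sl_2: with z = e^{alpha/2},
# s(z) = 1/z and e^{-alpha} = z^{-2}, so D_s(z^m) should equal ch L(m) for m >= 0,
# and D_s(z^{-1}) = 0 is the <lambda, alpha_check> = -1 case.
import sympy as sp

z = sp.symbols('z')

def demazure(f):
    """D_s(f) = (f - e^{-alpha} s(f)) / (1 - e^{-alpha}), with s(z) = 1/z."""
    reflected = f.subs(z, 1 / z)
    return sp.cancel((f - z**(-2) * reflected) / (1 - z**(-2)))

for m in range(4):
    expected = sum(z**(m - 2 * k) for k in range(m + 1))   # z^m + z^{m-2} + ... + z^{-m}
    assert sp.simplify(demazure(z**m) - expected) == 0

assert demazure(z**(-1)) == 0
print("Demazure operator reproduces the sl_2 characters")
```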

I've got about six minutes. Is that enough time to tell you about the Langlands program?
Suppose that G is an algebraic group whose Lie algebra g is semisimple. For a given g, there is the smallest group G above it, G_ad = G/Z_G (whose centre is {1}). There is also the largest group, G_sc, the simply connected cover of G, which is still an algebraic group. We have π_1(G_ad) ≅ (P/Q)^∨ and Z(G_sc) ≅ P/Q.
(For example, SL_n is the simply connected group, and PSL_n is the adjoint group.)
In this course we have studied the category Rep g = Rep G_sc.
Recall that for G a finite group, the number of irreducible representations is the number of conjugacy classes, but conjugacy classes do not parametrise the representations. For us, however, the representations are parametrised by P⁺ = P/W.
It is possible, for G an algebraic group, to define ^L G, the Langlands dual group, whose Lie algebra ^L g and root system R^∨ correspond to g and R (and in fact the torus and the dual torus swap in the dual group).
If we consider the representations of G(F) where F = F_q or F = C((t)), these correspond roughly to conjugacy classes of homomorphisms (W(F) → ^L G(C)) / ^L G(C), where W(F) is a thickening of the Galois group of F.
