INTRODUCTION TO LIE ALGEBRAS AND THEIR REPRESENTATIONS
PART III
MICHAELMAS 2010
IAN GROJNOWSKI
DPMMS, UNIVERSITY OF CAMBRIDGE
I.GROJNOWSKI@DPMMS.CAM.AC.UK
TYPESET BY ELENA YUDOVINA
STATISTICAL LABORATORY, UNIVERSITY OF CAMBRIDGE
E.YUDOVINA@STATSLAB.CAM.AC.UK
Remark. It is not altogether obvious that this is an intrinsic characterisation (it would
be nice not to depend on the embedding into GLn ). The intrinsic characterisation is as
affine algebraic groups, that is, groups which are affine algebraic varieties and for which
multiplication and inverse are morphisms of affine algebraic varieties. One direction of this
identification is relatively easy (we just need to check that multiplication and inverse really
are morphisms); in the other direction, we need to show that any affine algebraic group has
a faithful finite-dimensional representation, i.e. embeds in GL(V ). This involves looking at
the ring of functions and doing something with it.
We will now explain tangent spaces, in case you haven't met them before.
Example 1.1. Let's pretend we're physicists, since we're in their building anyway. Let G = SL₂, and let |ε| ≪ 1. Then for

    g = ( 1 0 ) + ε ( a b ) + higher-order terms
        ( 0 1 )     ( c d )

to lie in SL₂, we must have det g = 1, or

    1 = det ( 1+εa   εb  ) + h.o.t. = 1 + ε(a + d) + ε²(junk) + h.o.t.
            (  εc   1+εd )

Therefore, to have g ∈ G we need to have a + d = 0.
To do this formally, we define the dual numbers

    E = C[ε]/ε² = {a + εb | a, b ∈ C}.

For a linear algebraic group G, we consider

    G(E) = {A ∈ Mat_n(E) | A satisfies the polynomial equations defining G ⊆ GL_n}.

For example,

    SL₂(E) = { ( α β ; γ δ ) | α, β, γ, δ ∈ E, αδ − βγ = 1 }.

The natural map E → C, ε ↦ 0 gives a projection π : G(E) → G. We define the Lie algebra of G as

    g = π⁻¹(I) = {X ∈ Mat_n(C) | I + εX ∈ G(E)}.
In particular,

    sl₂ = { ( a b ; c d ) ∈ Mat₂(C) | a + d = 0 }.
Remark. I + εX represents an infinitesimal change at I in the direction X. Equivalently, it is the germ of a curve Spec C[[ε]] → G.
Exercise 1. (Do this only if you already know what a tangent space and tangent bundle are.) Show that G(E) = TG is the tangent bundle to G, and g = T₁G is the tangent space to G at 1.
Example 1.2. Let G = GL_n = {A ∈ Mat_n | A⁻¹ exists}.
Claim:

    G(E) := {A ∈ Mat_n(E) | A⁻¹ exists}
          = {A + εB | A, B ∈ Mat_n(C), A⁻¹ exists}.

Indeed, (A + εB)(A⁻¹ − εA⁻¹BA⁻¹) = I.
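This dual-number computation is easy to replay by machine. A sympy sketch (the particular A and B below are arbitrary illustrative choices, not from the notes): introduce a symbol ε and discard ε² terms, since ε² = 0 in E.

```python
# Check (A + eps*B)(A^{-1} - eps*A^{-1} B A^{-1}) = I in E = C[eps]/eps^2.
import sympy as sp

eps = sp.symbols('eps')
A = sp.Matrix([[2, 1], [1, 1]])   # det = 1, so invertible (arbitrary choice)
B = sp.Matrix([[0, 3], [5, 7]])   # arbitrary

Ainv = A.inv()
candidate = Ainv - eps * Ainv * B * Ainv

# Multiply out and impose eps^2 = 0 (the dual-number relation).
product = ((A + eps * B) * candidate).expand().subs(eps**2, 0)
identity_check = sp.simplify(product - sp.eye(2))
print(identity_check)  # zero matrix
```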
Remark. It is not obvious that GL_n is itself a linear algebraic group (what are the polynomial equations for "the determinant is non-zero"?). However, we can think of GL_n ⊆ Mat_{n+1} as

    GL_n = { ( A 0 ; 0 μ ) | μ det(A) = 1 }.
Example 1.3. G = SL_n C.

Exercise 2. det(I + εX) = 1 + ε trace(X).

As a corollary to the exercise, sl_n = {X ∈ Mat_n | trace(X) = 0}.
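Exercise 2 can be sanity-checked symbolically; the sympy sketch below (3×3 is an arbitrary size) expands det(I + εX) and truncates modulo ε².

```python
# det(I + eps*X) = 1 + eps*trace(X) in C[eps]/eps^2, for a generic 3x3 matrix X.
import sympy as sp

eps = sp.symbols('eps')
n = 3
X = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'x{i}{j}'))

det_val = ((sp.eye(n) + eps * X).det()).expand()
# Impose eps^2 = 0: kill all higher powers of eps.
truncated = det_val.subs([(eps**k, 0) for k in range(2, n + 1)])
residue = sp.simplify(truncated - (1 + eps * X.trace()))
print(residue)  # 0
```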
Example 1.4. G = O_n C = {A | AAᵀ = I}. Then

    g := {X ∈ Mat_n C | (I + εX)(I + εX)ᵀ = I}
       = {X ∈ Mat_n C | I + ε(X + Xᵀ) = I}
       = {X ∈ Mat_n C | X + Xᵀ = 0},

the antisymmetric matrices. Now, if X + Xᵀ = 0 then 2 trace(X) = 0, and since we're working over C and not in characteristic 2, we conclude trace(X) = 0. Therefore, this is also the Lie algebra of SO_n.

This is not terribly surprising, because topologically O_n has two connected components corresponding to determinant +1 and −1 (they are images of each other via a reflection). Since g = T₁G, it cannot see the det = −1 component, so this is expected.
The above example prompts the question: what exactly is it in the structure of g that we get from G being a group?

The first thing to note is that g does not get a multiplication. Indeed, (I + εA)(I + εB) = I + ε(A + B), which has nothing to do with multiplication.
The bilinear operation that turns out to generalize nicely is the commutator, (P, Q) ↦ PQP⁻¹Q⁻¹. Taken as a map G × G → G that sends (I, I) ↦ I, this should give a map T₁G × T₁G → T₁G by differentiation.
Remark. Generally, differentiation gives a linear map, whereas what we will get is a bilinear map. This is because we will in fact differentiate each coordinate: i.e. first differentiate the map f_P : G → G, Q ↦ PQP⁻¹Q⁻¹ with respect to Q to get a map f_P : g → g (with P ∈ G still), and then differentiate again to get a map g × g → g.
What is this map?
Let P = I + εA, Q = I + δB, where ε² = δ² = 0 but εδ ≠ 0. Then

    PQP⁻¹Q⁻¹ := (I + εA)(I + δB)(I − εA)(I − δB)
              = I + εδ(AB − BA).
Thus, the binary operation on g should be [A, B] = AB − BA.

Definition. The bracket of A and B is [A, B] = AB − BA.
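The calculation above can be replayed in sympy, with ε and δ as symbols and the relations ε² = δ² = 0 imposed by hand (the matrices A and B are arbitrary illustrative choices).

```python
# Verify P Q P^{-1} Q^{-1} = I + eps*delta*(AB - BA) for P = I + eps*A, Q = I + delta*B.
import sympy as sp

eps, delta = sp.symbols('eps delta')
A = sp.Matrix([[1, 2], [3, 4]])
B = sp.Matrix([[0, 1], [1, 0]])
I = sp.eye(2)

P, Q = I + eps * A, I + delta * B
Pinv, Qinv = I - eps * A, I - delta * B   # exact inverses once eps^2 = delta^2 = 0

def trunc(M):
    # impose eps^2 = delta^2 = 0
    return M.expand().subs({eps**2: 0, delta**2: 0})

comm = trunc(P * Q * Pinv * Qinv)
residue = sp.simplify(comm - (I + eps * delta * (A*B - B*A)))
print(residue)  # zero matrix
```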
Exercise 3. Show that (PQP⁻¹Q⁻¹)⁻¹ = QPQ⁻¹P⁻¹ implies [A, B] = −[B, A], i.e. the bracket is skew-symmetric.
Exercise 4. Show that the associativity of multiplication in G implies the Jacobi identity
0 = [[X, Y ], Z] + [[Y, Z], X] + [[Z, X], Y ]
Remark. The meaning of "implies" in the above exercises is as follows: we want to think of the bracket as the derivative of the commutator, not as the explicit formula [A, B] = AB − BA (which makes the skew-symmetry obvious, and the Jacobi identity only slightly less so). For example, we could have started working in a different category.
We will now define our object of study.
Definition. Let K be a field, char K ≠ 2, 3. A Lie algebra g is a vector space over K equipped with a bilinear map (the Lie bracket) [·,·] : g × g → g with the following properties:
(1) [X, Y] = −[Y, X] (skew-symmetry);
(2) [[X, Y], Z] + [[Y, Z], X] + [[Z, X], Y] = 0 (Jacobi identity).
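For the matrix bracket [X, Y] = XY − YX both axioms are quick to spot-check numerically; the numpy sketch below uses random 4×4 matrices (an illustration, of course, not a proof).

```python
# Spot-check skew-symmetry and the Jacobi identity for [X, Y] = XY - YX.
import numpy as np

rng = np.random.default_rng(0)

def bracket(X, Y):
    return X @ Y - Y @ X

X, Y, Z = (rng.standard_normal((4, 4)) for _ in range(3))

skew = np.max(np.abs(bracket(X, Y) + bracket(Y, X)))
jacobi = np.max(np.abs(bracket(bracket(X, Y), Z)
                       + bracket(bracket(Y, Z), X)
                       + bracket(bracket(Z, X), Y)))
print(skew, jacobi)  # both vanish up to floating-point round-off
```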
Lecture 2. Examples of the Lie algebra definition from last time:
Example 1.5.
(1) gl_n = Mat_n with [A, B] = AB − BA.
(2) so_n = {A | A + Aᵀ = 0}.
(3) sl_n = {A | trace(A) = 0} (note that while trace(AB) ≠ 0 in general for A, B ∈ sl_n, we do have trace(AB) = trace(BA), so [A, B] ∈ sl_n even though AB ∉ sl_n).
(4) sp_2n = {A ∈ gl_2n | JAᵀJ⁻¹ + A = 0}, where J is the matrix of the symplectic form: the antidiagonal matrix with n entries equal to 1 followed by n entries equal to −1 along the antidiagonal.
(5) b = the upper triangular matrices in gl_n (the name is b for "Borel", but we don't need to worry about that yet).
(6) n = the strictly upper triangular matrices in gl_n (the name is n for "nilpotent", but we don't need to worry about that yet).
(7) For any vector space V, let [·,·] : V × V → V be the zero map. This is the abelian Lie algebra.
Exercise 5. (1) Check directly that gl_n is a Lie algebra.
(2) Check that the other examples above are Lie subalgebras of gl_n, that is, vector subspaces closed under [·,·].
Example 1.6. The off-diagonal matrices { ( 0 b ; c 0 ) } do not form a Lie subalgebra of gl₂.
Exercise 6. Find the algebraic groups whose Lie algebras are given above.

Exercise 7. Classify all Lie algebras of dimension ≤ 3. (You might want to start with dimension ≤ 2. I'll do dimension 1 for you on the board: skew-symmetry of the bracket means that the bracket is zero, so there is only the abelian one.)
(This can be done so that it is independent of the embedding into GL_n. Think about it.)
To get a representation of the Lie algebra out of this, we again use the dual numbers E. If we use E = K[ε]/ε² instead of K, we will get a homomorphism of groups ρ : G(E) → GL_V(E). Moreover, since ρ(I) = I and ρ commutes with the projection map π, we get

    ρ(I + εA) = I + ε·(some function of A) = I + ε dρ(A)

(this is to be taken as the definition of dρ(A)).

Exercise 9. dρ is the derivative of ρ evaluated at I (i.e., dρ : T_I G → T_I GL_V).

Exercise 10. The fact that ρ : G → GL_V was a group homomorphism means that dρ : g → gl_V is a Lie algebra homomorphism, i.e. V is a representation of g.
Example 1.8. G = SL₂.

Let L(n) be the space of homogeneous polynomials of degree n in two variables x, y, with basis x^n, x^{n−1}y, ..., xy^{n−1}, y^n (so dim L(n) = n + 1). Then GL₂ acts on L(n) by change of coordinates: for g = ( a b ; c d ) and f ∈ L(n) we have

    ((ρ_n g)f)(x, y) = f(ax + cy, bx + dy).

In particular, ρ₀ is the trivial representation of GL₂, ρ₁ is the usual 2-dimensional representation K², and

    ρ₂ ( a b ; c d ) = ( a²    ab      b²  )
                       ( 2ac   ad+bc   2bd )
                       ( c²    cd      d²  )

Since GL₂ acts on L(n), we see that SL₂ acts on L(n).
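The matrix of ρ_n(g) can be computed mechanically by substituting into each basis monomial and reading off coefficients. The sympy helper below (the function name rho is ours, not from the notes) reproduces the displayed matrix for n = 2.

```python
# Matrix of the GL2-action on L(n) in the basis x^n, x^{n-1} y, ..., y^n.
import sympy as sp

x, y, a, b, c, d = sp.symbols('x y a b c d')

def rho(n, g):
    basis = [x**(n - j) * y**j for j in range(n + 1)]
    cols = []
    for mono in basis:
        # ((rho_n g) f)(x, y) = f(a x + c y, b x + d y)
        image = mono.subs([(x, g[0, 0]*x + g[1, 0]*y),
                           (y, g[0, 1]*x + g[1, 1]*y)], simultaneous=True)
        poly = sp.Poly(sp.expand(image), x, y)
        cols.append([poly.coeff_monomial(m) for m in basis])
    return sp.Matrix(cols).T      # columns = coordinates of images of basis vectors

g = sp.Matrix([[a, b], [c, d]])
print(rho(2, g))  # the 3x3 matrix displayed above
```

One can also check the homomorphism property ρ(g₁)ρ(g₂) = ρ(g₁g₂) on sample numeric matrices.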
Remark. The proper way to think of this is as follows. GL₂ acts on P¹ and on the line bundle O(n) on P¹, hence on the global sections Γ(P¹, O(n)) = SⁿK². That's the source of these representations. We can do this sort of thing for all the other algebraic groups we listed previously, using flag varieties instead of P¹ and possibly higher (co)homologies instead of the global sections (this is a theorem of Borel, Weil, and Bott). That, however, requires algebraic geometry, and gets harder to do in infinitely many dimensions.
Differentiating the above representations, e.g. for e = ( 0 1 ; 0 0 ), we see

    ρ(I + εe) x^i y^j = x^i (εx + y)^j = x^i y^j + εj x^{i+1} y^{j−1}.

Therefore, dρ(e) x^i y^j = j x^{i+1} y^{j−1}.
Exercise 11. (1) In the action of the Lie algebra,

    e(x^i y^j) = j x^{i+1} y^{j−1}
    f(x^i y^j) = i x^{i−1} y^{j+1}
    h(x^i y^j) = (i − j) x^i y^j.

(2) Check directly that these formulae give a representation of sl₂.
(3) Check that L(2) is the adjoint representation.
(4) Show that the formulae e = x ∂/∂y, f = y ∂/∂x, h = x ∂/∂x − y ∂/∂y give an (infinite-dimensional!) representation of sl₂ on k[x, y].
(5) Let char (k) = 0. Show that L(n) is an irreducible representation of sl2 , hence of
SL2 .
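Part (4) of the exercise can be verified directly in sympy by checking the sl₂ relations [h, e] = 2e, [h, f] = −2f, [e, f] = h on a sample polynomial (the test polynomial below is an arbitrary choice).

```python
# e = x d/dy, f = y d/dx, h = x d/dx - y d/dy acting on k[x, y].
import sympy as sp

x, y = sp.symbols('x y')

e = lambda p: sp.expand(x * sp.diff(p, y))
f = lambda p: sp.expand(y * sp.diff(p, x))
h = lambda p: sp.expand(x * sp.diff(p, x) - y * sp.diff(p, y))

def br(u, v, p):
    # bracket of the operators u, v applied to p
    return sp.expand(u(v(p)) - v(u(p)))

p = (x + 2*y)**5 + x**3 * y
checks = [br(h, e, p) - 2*e(p), br(h, f, p) + 2*f(p), br(e, f, p) - h(p)]
print([sp.simplify(c) for c in checks])  # [0, 0, 0]
```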
Lecture 3. Last time we defined a functor
E.g., in L(n) we had x^i y^j sitting in the place of the V_λ's, and the chain went from λ = n to λ = −n:

    V_n  →  V_{n−2}  →  V_{n−4}  →  ...  →  V_{−(n−4)}  →  V_{−(n−2)}  →  V_{−n}
    ⟨x^n⟩   ⟨x^{n−1}y⟩  ⟨x^{n−2}y²⟩  ...    ⟨x²y^{n−2}⟩    ⟨xy^{n−1}⟩     ⟨y^n⟩

(Note that the string for L(n) has n + 1 elements.)
Definition. If v ∈ V_λ ∩ ker e, i.e. hv = λv and ev = 0, then we say that v is a highest-weight vector of weight λ.
Lemma 2.2. Let V be a representation of sl₂ and v ∈ V a highest-weight vector of weight λ. Then W = ⟨v, fv, f²v, ...⟩ is an sl₂-invariant subspace, i.e. a subrepresentation.

Proof. We need to show hW ⊆ W, fW ⊆ W, and eW ⊆ W.
Note that fW ⊆ W by construction.
Further, since v ∈ V_λ, we saw above that f^k v ∈ V_{λ−2k}, and so hW ⊆ W as well.
Finally,

    ev = 0 ∈ W
    efv = ([e, f] + fe)v = hv + f(0) = λv ∈ W
    ef²v = ([e, f] + fe)fv = (λ − 2)fv + f(λv) = (2λ − 2)fv ∈ W
    ef³v = ([e, f] + fe)f²v = (λ − 4)f²v + f((2λ − 2)fv) = (3λ − 6)f²v ∈ W

and so on.
Exercise 14. ef^n v = n(λ − n + 1) f^{n−1} v for v a highest-weight vector of weight λ.
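The formula of Exercise 14 can be tested in the polynomial model of Exercise 11, where x^λ is a highest-weight vector of weight λ in L(λ) (λ = 6 below is an arbitrary choice).

```python
# Check e f^k v = k (lam - k + 1) f^{k-1} v for v = x^lam, using e = x d/dy, f = y d/dx.
import sympy as sp

x, y = sp.symbols('x y')
e = lambda p: sp.expand(x * sp.diff(p, y))
f = lambda p: sp.expand(y * sp.diff(p, x))

lam = 6
v = x**lam          # highest-weight vector of weight lam

ok = True
fk = v
for k in range(1, lam + 1):
    fk_minus_1, fk = fk, f(fk)      # fk is now f^k v
    ok = ok and sp.simplify(e(fk) - k * (lam - k + 1) * fk_minus_1) == 0
print(ok)  # True
```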
Lemma 2.3. Let V be a finite-dimensional representation of sl₂ and v ∈ V a highest-weight vector of weight λ. Then λ ∈ Z≥0.

Remark. Somehow, the integrality condition is the Lie algebra remembering something about the topology of the group. (Saying what this "something" is more precisely is difficult.)

Proof. Note that the f^k v all lie in different eigenspaces of h, so if they are all nonzero then they are linearly independent. Since dim V < ∞, we conclude that there exists a k such that f^k v ≠ 0 and f^{k+r} v = 0 for all r ≥ 1. Then by the above exercise,

    0 = e f^{k+1} v = (k + 1)(λ − k) f^k v,

from which λ = k, a nonnegative integer.
Proposition 2.4. If V is a finite-dimensional representation of sl₂, it has a highest-weight vector.

Proof. We're over C, so we can pick some eigenvector v of h, say with eigenvalue λ. Now apply e to it repeatedly: v, ev, e²v, ... belong to different eigenspaces of h, so those that are nonzero are linearly independent. Therefore, there must be some k such that e^k v ≠ 0 but e^{k+r} v = 0 for all r ≥ 1. Then e^k v is a highest-weight vector of weight λ + 2k.
Lemma 2.8. Let L(n) be the irreducible representation of sl₂ with highest weight n. Then Ω acts on L(n) by ½n² + n.

Proof. It suffices to check this on the highest-weight vector v (so hv = nv and ev = 0). Then

    Ωv = (½h² + h + 2fe)v = (½n² + n)v.

Since Ω commutes with f and L(n) is spanned by the f^i v, we could conclude directly (without using Schur's lemma) that Ω acts by this scalar.

Observe that if L(n) and L(m) are irreducible representations with different highest weights, then Ω acts by a different scalar (since ½n(n + 2) is increasing in n).
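Using the formulae of Exercise 11 one can write down the matrices of e, f, h on L(n) and check numerically that ½h² + h + 2fe really acts as the scalar ½n² + n (a numpy sketch; n = 5 is an arbitrary choice).

```python
# Matrices of e, f, h on the basis x^n, x^{n-1} y, ..., y^n of L(n), and the
# Casimir (1/2) h^2 + h + 2 f e acting as the scalar (1/2) n^2 + n.
import numpy as np

def sl2_on_Ln(n):
    d = n + 1
    E, F, H = np.zeros((d, d)), np.zeros((d, d)), np.zeros((d, d))
    for k in range(d):        # k-th basis vector is x^i y^j with i = n - k, j = k
        i, j = n - k, k
        if j > 0:
            E[k - 1, k] = j   # e(x^i y^j) = j x^{i+1} y^{j-1}
        if i > 0:
            F[k + 1, k] = i   # f(x^i y^j) = i x^{i-1} y^{j+1}
        H[k, k] = i - j       # h(x^i y^j) = (i - j) x^i y^j
    return E, F, H

n = 5
E, F, H = sl2_on_Ln(n)
Omega = 0.5 * H @ H + H + 2 * F @ E
print(np.allclose(Omega, (0.5 * n**2 + n) * np.eye(n + 1)))  # True
```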
Definition. Let V be a finite-dimensional representation of sl₂. Set

    V_λ = {v ∈ V : (Ω − λ)^{dim V} v = 0}.

This is the generalised eigenspace of Ω with eigenvalue λ.

Claim. Each V_λ is a subrepresentation for sl₂.

Proof. Let x ∈ sl₂ and v ∈ V_λ. Because Ω is central, we have

    (Ω − λ)^{dim V}(xv) = x(Ω − λ)^{dim V} v = 0,

so xv ∈ V_λ as well.

Now, if V_λ ≠ 0, then λ = ½n² + n for some n, and V_λ is somehow glued together from copies of L(n). More precisely:
Definition. Let W be a finite-dimensional g-module. A composition series for W is a sequence of submodules

    0 = W₀ ⊆ W₁ ⊆ ... ⊆ W_r = W

such that each W_i/W_{i−1} is an irreducible module.
Example 2.3. (1) g = C, W = C^r, where 1 ∈ C acts as the single Jordan block with 0's on the diagonal and 1's on the superdiagonal. Then there exists a unique composition series for W, namely,

    0 ⊆ ⟨e₁⟩ ⊆ ⟨e₁, e₂⟩ ⊆ ... ⊆ ⟨e₁, e₂, ..., e_r⟩.

(2) On the other hand, if g = C and W = C^r with the abelian action (i.e. 1 ∈ C acts by 0), then any chain 0 ⊆ W₁ ⊆ ... ⊆ W_r with dim W_i = i will be a composition series.
Lemma 2.9. Composition series always exist.

Proof. We induct on the dimension of W. Take an irreducible submodule W₁ ⊆ W; then W/W₁ has smaller dimension, so it has a composition series. Taking its preimage in W and sticking W₁ in front will do the trick.

Remark. The factors W_i/W_{i−1} are well defined up to reordering.
So, the precise statement is that V_λ has a composition series with all quotients isomorphic to L(n) for some fixed n. Indeed, take an irreducible submodule L(n) ⊆ V_λ; note that Ω acts on L(n) by ½n² + n, so we must have λ = ½n² + n, or in other words n is uniquely determined by λ; and moreover, Ω acts on V_λ/L(n), and its only generalised eigenvalue there is still λ, so we can repeat the argument.
Claim. h acts on V_λ with generalised eigenvalues in the set {n, n − 2, ..., −n}.

Proof. This is a general fact about composition series. Let h act on W and let W′ ⊆ W be invariant under this action, i.e. hW′ ⊆ W′. Then

    {generalised eigenvalues of h on W} = {generalised eigenvalues of h on W′}
                                        ∪ {generalised eigenvalues of h on W/W′}.

(You can see this by looking at the upper triangular block decomposition of h.) Now since V_λ is composed of copies of L(n), the generalised eigenvalues of h must lie in that set.

Note also that on the kernel of e : V_λ → V_λ the only generalised eigenvalue of h is n; that is, (h − n)^{dim V} x = 0 for x ∈ V_λ ∩ ker e. (This follows by applying the above observation to the composition series intersected with ker e.)
Lecture 5.

Lemma 2.10. (1) h f^k = f^k (h − 2k).
(2) e f^{n+1} = f^{n+1} e + (n + 1) f^n (h − n).

Proof. We saw (1) already.

Exercise 18. Prove (2) (by induction; e.g. for n = 0 the claim is ef = fe + h).
Proposition 2.11. h acts diagonalisably on ker(e : V_λ → V_λ); in fact, it acts as multiplication by n. That is,

    ker(e : V_λ → V_λ) = (V_λ)_n = {x ∈ V_λ : hx = nx}.

Recall we know that ker e is contained in the generalised eigenspace of h with eigenvalue n (and after the first line of the proof, we will have equality rather than just containment). We are showing that we can drop the word "generalised".

Proof. If hx = nx then ex ∈ (V_λ)_{n+2} = 0 (as the generalised eigenvalues of h on V_λ are n, n − 2, ..., −n), so x ∈ ker e.

Conversely, let x ∈ ker e; we know (h − n)^{dim V} x = 0. By the lemma above,

    (h − n + 2k)^{dim V} f^k x = f^k (h − n)^{dim V} x = 0.

That is, f^k x belongs to the generalised eigenspace of h with eigenvalue n − 2k.
On the other hand, for any y ∈ ker e, y ≠ 0 ⟹ f^n y ≠ 0.

Remark. This should be an obvious property of upper-triangular matrices, but we'll work it out in full detail.

Take 0 = W₀ ⊆ W₁ ⊆ ... ⊆ W_r = V_λ the composition series for V_λ with quotients L(n). There is an i such that y ∈ W_i, y ∉ W_{i−1}. Let ȳ = y + W_{i−1} ∈ W_i/W_{i−1} ≅ L(n). Then ȳ is the highest-weight vector in L(n), and so f^n ȳ ≠ 0 in L(n). Therefore, f^n y ≠ 0.
Exercise 23 (Essential!). V_n ⊗ W_m ⊆ (V ⊗ W)_{n+m}. Therefore, (V ⊗ W)_p = Σ_{n+m=p} V_n ⊗ W_m. This is exactly how we multiply polynomials.
Example 3.1. We can now compute L(1) ⊗ L(3):

There is a picture that goes with this rule (so you don't actually need to remember it): a row of m + 1 dots against a row of n + 1 dots.
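The computation can be done purely with weights: the h-weights of L(n) ⊗ L(m) are all pairwise sums of weights of the two factors, and irreducibles are peeled off from the top weight down. A small Python sketch (the function names are ours):

```python
# Decompose L(n) (x) L(m) into irreducibles by counting h-weights.
from collections import Counter

def weights(n):
    # h-weights of L(n): n, n-2, ..., -n
    return [n - 2*k for k in range(n + 1)]

def tensor_weights(n, m):
    return Counter(a + b for a in weights(n) for b in weights(m))

def decompose(wts):
    # repeatedly peel off L(top) for the largest remaining weight
    wts, parts = Counter(wts), []
    while +wts:                        # +Counter drops zero counts
        top = max(w for w, c in wts.items() if c > 0)
        parts.append(top)
        for w in weights(top):
            wts[w] -= 1
    return parts

print(decompose(tensor_weights(1, 3)))  # [4, 2]: L(1) (x) L(3) = L(4) + L(2)
```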
Exercise 27. Prove the properties. (The proofs should all be easy and tedious.)
We now state a nontrivial result, which we will not prove.
Theorem 4.2 (Lie's Theorem). Let g ⊆ gl(V) be a solvable Lie algebra, and let the base field k be algebraically closed and have characteristic zero. Then there exists a basis v₁, ..., v_n of V with respect to which g ⊆ b(V), i.e. the matrices of all elements of g are upper triangular.

Equivalently, there exists a linear function λ : g → k and a nonzero element v ∈ V such that xv = λ(x)v for all x ∈ g, i.e. v is a common eigenvector for all of g, i.e. spans a one-dimensional subrepresentation.

In particular, the only irreducible finite-dimensional representations of g are one-dimensional.

Exercise 28. Show that these are actually equivalent. (One direction should be obvious; in the other, quotient by ⟨v₁⟩ and repeat.)
Exercise 29. Show that we need k to be algebraically closed.

Show that we need k to have characteristic 0. [Hint: let g be the 3-dimensional Heisenberg algebra, and show that k[x]/x^p is a finite-dimensional irreducible representation of dimension markedly bigger than 1.]
Corollary 4.3. In characteristic 0, if g is a solvable finite-dimensional Lie algebra, then [g, g] is nilpotent.

Exercise 30. Find a counterexample in characteristic p.

Proof. Apply Lie's theorem to the adjoint representation ad : g → End(g). With respect to some basis, ad g ⊆ b(g). Since [b, b] ⊆ n (note the diagonal entries cancel out when we take the bracket), we see that [ad g, ad g] is nilpotent. Since ad [g, g] = [ad g, ad g] (it's a representation!), we see that ad [g, g] is nilpotent, and therefore so is [g, g].
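The inclusion [b, b] ⊆ n used in this proof is easy to spot-check numerically (random upper triangular matrices; an illustration, not a proof).

```python
# Bracket of two upper triangular matrices is strictly upper triangular.
import numpy as np

rng = np.random.default_rng(1)
n = 5
X, Y = (np.triu(rng.standard_normal((n, n))) for _ in range(2))  # elements of b

Z = X @ Y - Y @ X
print(np.allclose(np.diag(Z), 0), np.allclose(np.tril(Z, -1), 0))  # True True
```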
Theorem 4.4 (Engel's Theorem). Let the base field k be arbitrary. g is nilpotent iff ad g consists of nilpotent endomorphisms of g (i.e., for all x ∈ g, ad x is nilpotent).

Equivalently, if V is a finite-dimensional representation of g such that all elements of g act on V by nilpotent endomorphisms, then there exists v ∈ V, v ≠ 0, such that x(v) = 0 for all x ∈ g, i.e. V has the trivial representation as a subrep.

Equivalently, in some basis every x ∈ g is a strictly upper triangular matrix.

Exercise 31. Show that these are actually equivalent.
Lecture 7.

Definition. A symmetric bilinear form (·,·) : g × g → k is invariant if ([x, y], z) = (x, [y, z]).

Exercise 32. If a ⊆ g is an ideal and (·,·) is an invariant bilinear form, then a⊥ is an ideal.
Definition. If V is a finite-dimensional representation of g, i.e. if ρ : g → gl(V) is a homomorphism, define the trace form to be (x, y)_V = trace(ρ(x)ρ(y) : V → V).

Exercise 33. Check that the trace form is symmetric, bilinear, and invariant. (They should all be obvious.)

Example 4.2. (·,·)_ad is the Killing form (named after a person, not after an action). This is the trace form attached to the adjoint representation. That is, (x, y)_ad = trace(ad x ∘ ad y : g → g).
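For sl₂ the Killing form is small enough to compute by machine: the numpy sketch below builds the matrices of ad e, ad h, ad f in the basis (e, h, f) and tabulates (x, y)_ad. The resulting matrix is nondegenerate, in line with Theorem 4.7 below.

```python
# Killing form of sl2 in the basis (e, h, f).
import numpy as np

e = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [e, h, f]

def ad_matrix(x):
    # matrix of ad(x) on sl2 in the basis (e, h, f)
    cols = []
    for b in basis:
        c = x @ b - b @ x
        cols.append([c[0, 1], c[0, 0], c[1, 0]])  # coordinates in (e, h, f)
    return np.array(cols).T

killing = np.array([[np.trace(ad_matrix(u) @ ad_matrix(v)) for v in basis]
                    for u in basis])
print(killing)  # [[0,0,4],[0,8,0],[4,0,0]] up to float formatting: nondegenerate
```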
Theorem 4.5 (Cartan's criteria). Let g ⊆ gl(V) and let char k = 0. Then g is solvable if and only if for every x ∈ g and y ∈ [g, g] the trace form (x, y)_V = 0. (That is, [g, g] ⊆ g⊥.)

Exercise 34. Lie's theorem gives us one direction. Indeed, if g is solvable then we can take a basis in which every x ∈ g is an upper triangular matrix, every y ∈ [g, g] is strictly upper triangular, and then xy and yx both have zeros on the diagonal (and thus trace 0). That is, if g is solvable and nonabelian, all trace forms are degenerate. (The exercise is to convince yourself that this is true.)
Corollary 4.6. g is solvable iff (g, [g, g])_ad = 0.

Proof. If g is solvable, Lie's theorem gives that the trace is zero. Conversely, Cartan's criteria tell us that ad g = g/(center of g) is solvable, and therefore so is g.
Exercise 35. Not every invariant form is a trace form.

Let H̃ = C⟨p, q, c, d⟩ where [c, H̃] = 0, [p, q] = c, [d, p] = p, [d, q] = −q.
(1) Construct a nondegenerate invariant form on H̃.
(2) Show H̃ is solvable.
(3) Extend the representation of ⟨c, p, q⟩ = H on k[x] to a representation of H̃.
Definition. The radical of g, R(g), is the maximal solvable ideal in g.
Exercise 36. (1) Show R(g) is the sum of all solvable ideals in g (i.e., show that the
sum of solvable ideals is solvable).
(2) Show R(g/R(g)) = 0.
Theorem 4.7. In characteristic 0, the following are equivalent:
(1) g is semisimple;
(2) R(g) = 0;
(3) the Killing form (·,·)_ad is nondegenerate (the Killing criterion).
Moreover, if g is semisimple, then every derivation D : g → g is inner, but not conversely (where we define the terms "derivation" and "inner" below).
Definition. A derivation D : g g is a linear map satisfying D[x, y] = [Dx, y] + [x, Dy].
Example 4.3. ad (x) is a derivation for any x. Derivations of the form ad (x) for some x g
are called inner.
Remark. If g is any Lie algebra, we have the exact sequence

    0 → R(g) ↪ g ↠ g/R(g) → 0

where R(g) is a solvable ideal, and g/R(g) is semisimple. That is, any Lie algebra has a maximal semisimple quotient, and the kernel is a solvable ideal. This shows you how much nicer the theory of Lie algebras is than the corresponding theory for finite groups. In particular, the corresponding statement for finite groups is essentially equivalent to the classification of the finite simple groups.
A theorem that we will neither prove nor use, but which is pretty:

Theorem 4.8 (Levi's theorem). In characteristic 0, a stronger result is true. Namely, the exact sequence splits, i.e. there exists a subalgebra g̃ ⊆ g such that g̃ ≅ g/R(g). (This subalgebra is not canonical; in particular, it is not an ideal of g.) That is, g can be written as g = g̃ ⋉ R(g).
Exercise 37. Show that this fails in characteristic p. Let g = sl_p(F_p); show R(g) = F_p·I (note that the identity matrix in dimension p has trace 0), but that there is no complement to R(g) that is a subalgebra.
Proof of Theorem 4.7. First, notice that R(g) = 0 if and only if g has no nonzero abelian ideals. (In one direction, an abelian ideal is solvable; in the other, notice that the last nonzero term of the derived series is abelian.)

From now on, (·,·) refers to the Killing form (·,·)_ad.

(3) ⟹ (2): we show that if a is an abelian ideal of g, then a ⊆ g⊥. (Since the Killing form is nondegenerate, this means a = 0, and therefore R(g) = 0.)

Write g = a + h where h is a vector space complement to a (not necessarily an ideal). For a ∈ a, ad a has block matrix ( 0 ∗ ; 0 0 ). For x ∈ g, ad x has block matrix ( ∗ ∗ ; 0 ∗ ). (This is because a is an ideal, so [a, x] ∈ a for all x ∈ g, and moreover ad a acts by 0 on a.) Therefore,

    trace(ad a ∘ ad x) = trace ( 0 ∗ ; 0 0 ) = 0,

i.e. (a, g)_ad = 0. Since the Killing form is nondegenerate, a = 0.
(2) ⟹ (3): Let r ⊆ g be an ideal (e.g. r = g⊥), and suppose r ≠ 0. Then r ⊆ gl(g) via ad, and (x, y)_ad = 0 for all x, y ∈ r. By Cartan's criteria, ad r = r/(center of r) is solvable, so r is solvable, contradicting R(g) = 0.
Exercise 38. Show that R(g) ⊇ g⊥ ⊇ [R(g), R(g)].
(2),(3) ⟹ (1): Let (·,·)_ad be nondegenerate, and let a ⊆ g be a minimal nonzero ideal. We claim that (·,·)_ad|_a is either 0 or nondegenerate.

Indeed, the kernel of (·,·)_ad|_a, which is {x ∈ a : (x, a)_ad = 0} = a ∩ a⊥, is an ideal!

Cartan's criteria imply that a is solvable if (·,·)_ad|_a is 0. Since R(g) = 0, we conclude that (·,·)_ad|_a is nondegenerate.

Therefore, g = a ⊕ a⊥, where a is a minimal ideal, i.e. simple. (Note that R(g) = 0, so a cannot be abelian.)

But now ideals of a⊥ are ideals of g, so we can apply the same argument to a⊥, since R(g) = 0 ⟹ R(a⊥) = 0. Therefore, g = ⊕ a_i where the a_i are simple Lie algebras.
Exercise 39. Show that if g is semisimple, then g is the direct sum of minimal ideals in a unique manner. That is, if g = ⊕ a_i where the a_i are minimal, and b is a minimal ideal of g, then in fact b = a_i for some i. (Consider b ∩ a_j.) Conclude that (1) ⟹ (2).
Lecture 8. Finally, we show that all derivations on semisimple Lie algebras are inner.

Let g be a semisimple Lie algebra, and let D : g → g be a derivation. Consider the linear functional l : g → k, x ↦ trace(D ∘ ad x). Since g is semisimple, the Killing form is a nondegenerate inner product giving an isomorphism with the dual, so there exists y ∈ g such that l(x) = (y, x)_ad. Our task is to show that D = ad y, i.e. that E := D − ad y = 0.

This is equivalent to showing that Ex = 0 for all x ∈ g, or equivalently that (Ex, z)_ad = 0 for all z ∈ g. Now,

    ad(Ex) = E ∘ ad x − ad x ∘ E = [E, ad x] : g → g,

since ad(Ex)(z) = [Ex, z] = E[x, z] − [x, Ez]. Therefore,

    (Ex, z)_ad = trace_g(ad(Ex) ∘ ad z) = trace_g([E, ad x] ∘ ad z) = trace_g(E ∘ [ad x, ad z]).
5. Structure theory

Definition. A torus t ⊆ g is an abelian subalgebra such that for all t ∈ t the adjoint ad(t) : g → g is a diagonalisable linear map. A maximal torus is a torus that is not contained in any bigger torus.
Example 5.1. Let T = (S¹)^r ⊆ G where G is a compact Lie group (or T = (C^×)^r ⊆ G where G is a reductive algebraic group). Then t = Lie(T) ⊆ g = Lie(G) is a torus, maximal if T is. (This comes from knowing that representations of t are diagonalisable, by an averaging argument, but we don't really know this.)
Exercise 42. (1) For g = sl_n or g = gl_n, t = the diagonal matrices (of trace 0 if inside sl_n) is a maximal torus.

(2) C·( 0 1 ; 0 0 ) is not a torus. (We should know this is not diagonalisable!)
For a vector space V, let t₁, ..., t_r : V → V be pairwise commuting diagonalisable linear maps; let λ = (λ₁, ..., λ_r) ∈ C^r. Set V_λ = {v ∈ V : t_i v = λ_i v}, the simultaneous eigenspace.

Lemma 5.1. V = ⊕_{λ ∈ C^r} V_λ.

Proof. Induct on r. For r = 1 this is just the requirement that t₁ be diagonalisable.

For r > 1, look at t₁, ..., t_{r−1} and decompose

    V = ⊕ V_{(λ₁,...,λ_{r−1})}.

Since t_r commutes with t₁, ..., t_{r−1}, it preserves the decomposition. Now decompose each V_{(λ₁,...,λ_{r−1})} into eigenspaces for t_r.
Let t be the r-dimensional abelian Lie algebra with basis (t₁, ..., t_r). The lemma asserts that V is a semisimple (i.e. completely reducible) representation of t. V = ⊕ V_λ is the decomposition into isotypic representations. Namely, letting C_λ be the 1-dimensional representation wherein t_i(w) = λ_i w, we have λ ≠ μ ⟹ C_λ ≇ C_μ, and V_λ is the direct sum of dim V_λ copies of C_λ.
Exercise 43. Show that every irreducible representation of t is one-dimensional.

Another way of saying this is as follows. λ is a linear map t → C, i.e. an element of the dual space t* (sending t_i ↦ λ_i). Therefore, one-dimensional representations of t (which are all the irreducible representations of t) correspond to elements of t* = Hom_{vector spaces}(t, C). The decomposition

    V = ⊕_{λ ∈ t*} V_λ,    V_λ = {v ∈ V : t(v) = λ(t)v}
Exercise 44. This is (a) an essential exercise, and (b) guaranteed to be on the exam. The first should get you to do it; the second is irrelevant.

Compute the root space decomposition for sl_n (as we just did), so_{2n}, so_{2n+1}, sp_{2n}, where t is the diagonal matrices in g, and we take the conjugate so_n defined by

    so_n = {A ∈ gl_n | JA + AᵀJ = 0},    J = the antidiagonal matrix with 1's (J_{i,n+1−i} = 1)

(this is conjugate to the usual so_n since we've defined a nondegenerate inner product; in this version, the diagonal matrices are a maximal torus). The symplectic matrices are given by the same embedding as before.

In particular, show that t is a maximal torus and the root spaces are one-dimensional.
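For sl_n with t the diagonal matrices the key computation is immediate: ad(t) multiplies E_ij by t_i − t_j, so the roots are ε_i − ε_j, each with one-dimensional root space C·E_ij. A quick numerical check for n = 3 (the diagonal entries below are illustrative generic values):

```python
# ad(diag(t)) acts on E_ij by the scalar t_i - t_j.
import numpy as np

n = 3
t_vals = np.array([1.0, 2.0, 4.0])   # a generic diagonal element
T = np.diag(t_vals)

ok = True
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        E = np.zeros((n, n))
        E[i, j] = 1.0
        bracket = T @ E - E @ T       # ad(T) applied to E_ij
        # the result is (t_i - t_j) E_ij: the same matrix unit, scaled
        ok = ok and np.allclose(bracket, (t_vals[i] - t_vals[j]) * E)
print(ok)  # True
```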
Lecture 9. The Lie algebras we were working with last time are called the classical Lie algebras (they are automorphisms of vector spaces). The rest of the course will be concerned with explaining how their roots control their representations.
Proposition 5.2. sln C is simple.
Proof. Recall

    sl_n C = t ⊕ ⊕_{α ∈ R} g_α

where R = {ε_i − ε_j | i ≠ j} and g_{ε_i − ε_j} = C E_ij.

Suppose r ⊆ sl_n is a nonzero ideal. Choose r ∈ r, r ≠ 0, such that when we write r = t + Σ_{α ∈ R} e_α with e_α ∈ g_α, the number of nonzero terms is minimal.

First, suppose t ≠ 0. Choose t₀ ∈ t such that α(t₀) ≠ 0 for all α ∈ R (i.e., t₀ has distinct eigenvalues). Consider [t₀, r] = Σ_α α(t₀) e_α. Note that this lies in r, and if it is nonzero, it has fewer nonzero terms than r, a contradiction. Thus, [t₀, r] = 0, i.e. r = t ∈ t.

Since t ≠ 0, there exists α ∈ R with α(t) ≠ 0 (t can't have all eigenvalues equal, since it has trace 0). Then

    [t, e_α] = α(t) e_α  ⟹  e_α ∈ r.

Letting α = ε_i − ε_j, we have E_ij ∈ r. Now,

    [E_ij, E_jk] = E_ik for i ≠ k,    [E_si, E_ij] = E_sj for s ≠ j,

and therefore E_ab ∈ r for all a ≠ b. Finally,

    [E_{i,i+1}, E_{i+1,i}] = E_ii − E_{i+1,i+1} ∈ r,

so in fact the entire basis for sl_n lies in r, and r = sl_n.
Remark. This is actually a combinatorial statement about root spaces; we'll see more about this later.
That leaves us with the case t = 0. If r = cE_ij has only one nonzero term, we are done exactly as above; therefore, suppose

    r = e_α + e_β + Σ_{γ ∈ R∖{α,β}} e_γ,    α ≠ β.

Choose t₀ ∈ t such that α(t₀) ≠ β(t₀); then some linear combination of [t₀, r] and r will have fewer nonzero terms than r, a contradiction.
Proposition 5.3. Let g be a semisimple Lie algebra. Then maximal tori exist (i.e. are nonzero). Moreover, g₀ = {x ∈ g | [t, x] = 0} = t. (Recall that t is not defined to be a maximal abelian subalgebra, but as a maximal abelian subalgebra whose adjoint action is semisimple.)

We won't prove this; the proof involves introducing the Cartan subalgebra and showing that it's the same as a maximal torus. As a result, we will sometimes call t the Cartan subalgebra.
This means that the root space decomposition of a semisimple Lie algebra is

    g = t ⊕ ⊕_{α ∈ R} g_α.
Proof. Let x ∈ g_α and y ∈ g_β. Recall (x, y)_ad = trace_g(ad x ∘ ad y). To show that it is zero when α + β ≠ 0, we show that ad x ∘ ad y is nilpotent.

Indeed, (ad x ∘ ad y)^N g_γ ⊆ g_{γ+N(α+β)}, so if α + β ≠ 0 then for N ≫ 0 this is zero.

On the other hand, (·,·)_ad is nondegenerate and g = t ⊕ ⊕ g_α, so (·,·)_ad|_{g_α ⊕ g_{−α}} must be nondegenerate.

(c) In particular, (·,·)_ad|_t is nondegenerate, as t = g₀.
Remark. (·,·)_ad|_t is not the same as (·,·)_{ad t}, the Killing form of t itself (which is zero since t is abelian).

Therefore, the Killing form defines an isomorphism ν : t → t* with ν(t)(t′) = (t, t′)_ad. It also defines an induced inner product (well, symmetric bilinear form to be precise; we don't know it's positive) on t*, via (ν(t), ν(t′)) = (t, t′)_ad.

(d) R = −R, since (·,·)_ad is nondegenerate on g_α ⊕ g_{−α}, but (g_α, g_β)_ad = 0 if α + β ≠ 0. In particular, g_{−α} ≅ g_α* via the Killing form.

(e) x ∈ g_α, y ∈ g_{−α} ⟹ [x, y] = (x, y)_ad ν⁻¹(α).
Proof.

    (t, [x, y])_ad = ([t, x], y)_ad = α(t)(x, y)_ad,

which is exactly what we want.
(f) Let e_α ∈ g_α be nonzero, and pick e_{−α} ∈ g_{−α} such that (e_α, e_{−α})_ad ≠ 0. Then [e_α, e_{−α}] = (e_α, e_{−α})_ad ν⁻¹(α). To show that e_α, e_{−α}, and their bracket give a copy of sl₂, we need to compute

    [ν⁻¹(α), e_α] = α(ν⁻¹(α)) e_α = (α, α) e_α.

We will be done if we show that (α, α) ≠ 0 (then we get a copy of sl₂ after renormalising).
Proposition 5.5. (α, α) ≠ 0 for all α ∈ R.

Proof. Next time.
Lecture 10. We now prove the last proposition: (α, α) ≠ 0 for all α ∈ R.

Proof. Suppose otherwise. Let m_α = ⟨e_α, e_{−α}, ν⁻¹(α)⟩. If (α, α) = 0, i.e. if [ν⁻¹(α), e_α] = 0, then [m_α, m_α] = C ν⁻¹(α) and m_α is solvable. However, by Lie's theorem this implies that ad [m_α, m_α] acts by nilpotents on g, i.e. ad ν⁻¹(α) is nilpotent. Since ν⁻¹(α) ∈ t, it is also diagonalisable, and the only diagonalisable nilpotent element is 0. However, α ∈ R ⟹ α ≠ 0, a contradiction.
Now, define

    h_α = 2ν⁻¹(α)/(α, α)

and rescale e_{−α} such that (e_α, e_{−α})_ad = 2/(α, α).
Exercise 46. The map (e_α, h_α, e_{−α}) ↦ (e, h, f) gives an isomorphism m_α ≅ sl₂.
Remark. In sl_n, the root spaces are spanned by the E_ij, so we are saying that for each i ≠ j the subalgebra of matrices supported in rows and columns i and j, with diagonal entries a and −a there, is a copy of sl₂, which is obvious. The cool thing is that something like this happens for all the semisimple Lie algebras, and that this lets us know just about everything about them.
We will now use our knowledge of the representation theory of sl₂. We show:

Claim. dim g_α = 1 for all α ∈ R.

Proof. Pick a copy of m_α = ⟨e_α, h_α, e_{−α}⟩ ≅ sl₂. Suppose dim g_{−α} > 1; then the map g_{−α} → C ν⁻¹(α), x ↦ [e_α, x] has a kernel. That is, there is a nonzero v ∈ g_{−α} such that ad e_α v = 0.

Then v is a highest-weight vector (it comes from a root space g_{−α}, so it's an eigenvector of ad h_α), and its weight is determined by

    ad h_α v = −α(h_α) v = −2v.

That is, v is a highest-weight vector of weight −2. However, we know that in finite-dimensional representations of sl₂ (which g is, since m_α acts on it via ad) the highest weights are nonnegative integers. This contradiction shows the claim.
Before we finish proving the theorem we had (we still need to show [g_α, g_β] = g_{α+β} when α, β, α + β ∈ R), we show further combinatorial properties of the roots.
Theorem 5.6 (Structure theorem, continued). (1)

    2(α, β)/(α, α) ∈ Z for all α, β ∈ R.

(2) If α ∈ R and kα ∈ R, then k = ±1.

(3)

    ⊕_{k ∈ Z} g_{β+kα}

is an irreducible module for (sl₂)_α = ⟨e_α, h_α, e_{−α}⟩. In particular,

    {β + kα | β + kα ∈ R, k ∈ Z} ∪ {0}

is of the form β − pα, β − (p−1)α, ..., β, ..., β + (q−1)α, β + qα, where p − q = 2(α, β)/(α, α). We call this set the α-string through β.
(1): Let $q = \max\{k : \beta+k\alpha \in R\}$ and let $v \in g_{\beta+q\alpha}$ be nonzero. Then $\operatorname{ad}e_\alpha\, v \in g_{\beta+(q+1)\alpha} = 0$, and
$$\operatorname{ad}h_\alpha\, v = (\beta+q\alpha)(h_\alpha)\, v = \Bigl(\frac{2(\alpha,\beta)}{(\alpha,\alpha)} + 2q\Bigr)v.$$
That is, $v$ is a highest weight vector for $(sl_2)_\alpha$ with weight $2(\alpha,\beta)/(\alpha,\alpha) + 2q$. Since the weight in a finite-dimensional representation is an integer, we see that $2(\alpha,\beta)/(\alpha,\alpha) \in \mathbb{Z}$.
(3): The structure of $sl_2$-modules tells us that $(\operatorname{ad}e_{-\alpha})^r v \neq 0$ for $0 \le r \le \frac{2(\alpha,\beta)}{(\alpha,\alpha)} + 2q = N$, and that $(\operatorname{ad}e_{-\alpha})^{N+1}v = 0$. Therefore, $\{\beta+(q-k)\alpha,\ 0 \le k \le N\}$ are all roots (or possibly equal to zero; in any case, they have nonzero root space). This certainly is an irreducible $(sl_2)_\alpha$-module; we now need to show that there are no other roots in the string through $\beta$. We do this by repeating the same construction from the bottom up:
Let $p = \max\{k : \beta-k\alpha \in R\}$ and let $w \in g_{\beta-p\alpha}$ be nonzero. Then $\operatorname{ad}e_{-\alpha}\, w = 0$ and $\operatorname{ad}h_\alpha\, w = (2(\alpha,\beta)/(\alpha,\alpha) - 2p)w$, so $w$ is a lowest-weight vector of weight $2(\alpha,\beta)/(\alpha,\alpha) - 2p$. By applying $\operatorname{ad}e_\alpha$ repeatedly, we get an irreducible $sl_2$-module with roots $\beta-p\alpha,\ \beta-(p-1)\alpha,\ \ldots,\ \beta+(p - 2(\alpha,\beta)/(\alpha,\alpha))\alpha$.
Now by construction we have $p - 2(\alpha,\beta)/(\alpha,\alpha) \le q$ and also $q + 2(\alpha,\beta)/(\alpha,\alpha) \le p$, which means that in fact $p - q = 2(\alpha,\beta)/(\alpha,\alpha)$ and the two submodules we get coincide.
(2): By (1) we have $2(\alpha,k\alpha)/(k\alpha,k\alpha) = 2/k \in \mathbb{Z}$ and $2(k\alpha,\alpha)/(\alpha,\alpha) = 2k \in \mathbb{Z}$. Thus, it suffices to show that $\alpha \in R \Rightarrow 2\alpha \notin R$.
But indeed, suppose $2\alpha \in R$ and let $v \in g_{2\alpha}$ be nonzero. Then $[e_{-\alpha}, v] \in g_\alpha$ has the property that
$$([e_{-\alpha},v], e_{-\alpha}) = -(v, [e_{-\alpha}, e_{-\alpha}]) = 0,$$
and since $g_{-\alpha}$ is one-dimensional and spanned by $e_{-\alpha}$, we find $([e_{-\alpha},v], g_{-\alpha}) = 0$. Since $(\cdot,\cdot)|_{g_\alpha\times g_{-\alpha}}$ is nondegenerate, this means $[e_{-\alpha}, v] = 0$, i.e. $v \in g_{2\alpha}$ is killed by $\operatorname{ad}e_{-\alpha}$ and has $h_\alpha$-weight 4: a lowest-weight vector of positive weight. This doesn't happen in finite-dimensional representations, so $g_{2\alpha} = 0$ and $2\alpha$ is not a root.
(In fact, we could derive this from (3), since we know that there is a 3-dimensional $sl_2$-module running through 0.)
We finally show $[g_\alpha, g_\beta] = g_{\alpha+\beta}$ when $\alpha, \beta, \alpha+\beta \in R$. Indeed, we have just shown that $\bigoplus_k g_{\beta+k\alpha}$ is an irreducible representation of $m_\alpha$, and $\operatorname{ad}e_\alpha : g_{\beta+k\alpha} \to g_{\beta+(k+1)\alpha}$ is an isomorphism for $k < q$. Since $q \ge 1$, we see that $\operatorname{ad}e_\alpha\, g_\beta = g_{\alpha+\beta}$.
Remark. The combinatorial statement of (3) is rather clunky. We now unclunk it:
For $v \in t^*$, define $s_\alpha : t^* \to t^*$ by $s_\alpha(v) = v - \frac{2(\alpha,v)}{(\alpha,\alpha)}\alpha$ (a reflection of $v$ about the hyperplane $\alpha^\perp$).
Claim. (3) implies (and in fact, is equivalent to) $s_\alpha(\beta) \in R$ whenever $\alpha, \beta \in R$.
$$\frac{4}{(\alpha,\alpha)} = \sum_{\beta\in R}\Bigl(\frac{2(\alpha,\beta)}{(\alpha,\alpha)}\Bigr)^2 \in \mathbb{Z}$$
and therefore $(\alpha,\alpha) \in \mathbb{Q}$.
(2) Let $B$ be the Gram matrix of the basis, $B_{ij} = (\alpha_i, \alpha_j)$.
Exercise 47. $(\cdot,\cdot)$ is a nondegenerate symmetric bilinear form $\iff \det B \neq 0$.
Let $\beta = \sum c_i\alpha_i \in R$. Then $(\beta, \alpha_i) = \sum_j c_j(\alpha_i, \alpha_j)$. Therefore, we can recover the vector of the $c_i$ as
$$(c_1, \ldots, c_l) = ((\beta,\alpha_1), \ldots, (\beta,\alpha_l))(B^T)^{-1}.$$
Since the inner products of roots are all rational, we see that $c_i \in \mathbb{Q}$ as well.
(3) Let $\lambda \in \mathbb{Q}R$. Then $\lambda = \sum c_i\alpha_i$ with $c_i \in \mathbb{Q}$, and $(\lambda,\beta) \in \mathbb{Q}$ for all $\beta \in R$. Moreover, $(\lambda,\lambda) = \sum_{\beta\in R}(\lambda,\beta)^2 \ge 0$, and if $(\lambda,\lambda) = 0$ then $(\lambda,\beta) = 0$ for all $\beta \in R$. Since $R$ spans $t^*$ and $(\cdot,\cdot)$ is nondegenerate, this implies that $\lambda = 0$.

6. Root systems

Let $V$ be a vector space over $\mathbb{R}$, $(\cdot,\cdot)$ an inner product (i.e. a positive-definite symmetric bilinear form). For $\alpha \in V$, $\alpha \neq 0$, write $\alpha^\vee = \frac{2\alpha}{(\alpha,\alpha)}$, so that $(\alpha, \alpha^\vee) = 2$. Define $s_\alpha : V \to V$ by $s_\alpha(v) = v - (v, \alpha^\vee)\alpha$.
Lemma 6.1. $s_\alpha$ is the reflection in the hyperplane orthogonal to $\alpha$. In particular, all but one of the eigenvalues of $s_\alpha$ are $1$, and the remaining one is $-1$ (with eigenvector $\alpha$). Thus, $s_\alpha^2 = 1$, or $(s_\alpha+1)(s_\alpha-1) = 0$, and $s_\alpha \in O(V, (\cdot,\cdot))$, the orthogonal group of $V$ defined by the inner product $(\cdot,\cdot)$.
Proof. Write $V = \mathbb{R}\alpha \oplus \alpha^\perp$. Then $s_\alpha(\alpha) = \alpha - (\alpha,\alpha^\vee)\alpha = -\alpha$, and $s_\alpha$ fixes all $v \in \alpha^\perp$.
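Lemma 6.1 is easy to test numerically. The following sketch, assuming nothing beyond the formula $s_\alpha(v) = v - (v,\alpha^\vee)\alpha$, checks that $s_\alpha$ is an involution and an isometry and that it negates $\alpha$.

```python
import numpy as np

def s(alpha, v):
    # the reflection s_alpha(v) = v - (v, alpha^vee) alpha
    coroot = 2 * alpha / (alpha @ alpha)
    return v - (v @ coroot) * alpha

rng = np.random.default_rng(0)
alpha = rng.normal(size=3)
v = rng.normal(size=3)

involution = np.allclose(s(alpha, s(alpha, v)), v)       # s_alpha^2 = 1
isometry = np.isclose(s(alpha, v) @ s(alpha, v), v @ v)  # s_alpha in O(V)
negates = np.allclose(s(alpha, alpha), -alpha)           # s_alpha(alpha) = -alpha
```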
Definition. A root system $R$ in $V$ is a finite set $R \subset V$ such that
(1) $0 \notin R$, $\mathbb{R}R = V$;
(a) This is $A_1 \times A_1$ (or $A_1 + A_1$), and is not irreducible. The Weyl group is $\mathbb{Z}/2 \times \mathbb{Z}/2$.
(b) $A_2$: six roots $\pm\alpha$, $\pm\beta$, $\pm(\alpha+\beta)$. $W = S_3$ is the dihedral group of order 6.
This is the root system of $sl_3$, as you should be able to establish (and we will shortly see).
(1) $A_n$. Let
$$L = \{l \in \mathbb{Z}^{n+1} \mid (l, e_1+\ldots+e_{n+1}) = 0\} = \{\textstyle\sum a_ie_i \mid \sum a_i = 0\} \cong \mathbb{Z}^n.$$
Then $R_L = \{e_i - e_j \mid i \neq j\}$, $\#R_L = n(n+1)$, $\mathbb{Z}R_L = L$.
If $\alpha = e_i - e_j$ then $s_\alpha$ applied to a vector with coordinates $x_1, \ldots, x_{n+1}$ (in the basis $e_1, \ldots, e_{n+1}$) swaps the $i$ and $j$ coordinates; therefore, $W = \langle s_{e_i-e_j}\rangle = S_{n+1}$.
$(R_L, \mathbb{R}L)$ is the root system of type $A_n$. Note that $n$ is the rank of the root system, not the index of the associated Lie algebra.
Exercise 52. (a) You should not believe a word I say: check these statements!
(b) Draw $L \subset \mathbb{Z}^{n+1}$ and $R_L$ for $n = 1, 2$. Check that $A_1$ and $A_2$ agree with the previous pictures.
(c) Show that the root system of $sl_{n+1}$ has type $A_n$.
(2) $D_n$. Consider the square lattice $\mathbb{Z}^n$. The roots are $R_{\mathbb{Z}^n} = \{\pm e_i \pm e_j \mid i \neq j\}$. Set
$$L = \mathbb{Z}R_L = \{l = \textstyle\sum a_ie_i \mid a_i \in \mathbb{Z},\ \sum a_i \text{ even}\}.$$
$s_{e_i-e_j}$ swaps the $i$th and $j$th coordinates as before; $s_{e_i+e_j}$ swaps them and flips the sign of both.
$(R_L, \mathbb{Z}R_L)$ is called $D_n$; $\#R_L = 2n(n-1)$. The Weyl group is
$$W = (\mathbb{Z}/2)^{n-1} \rtimes S_n,$$
where $(\mathbb{Z}/2)^{n-1}$ is the subgroup of even numbers of sign changes. (Possibly I mean $(\mathbb{Z}/2)^{n-1} \ltimes S_n$.)
Exercise 53. (a) Check these statements!
(b) $D_n$ is irreducible if $n \ge 3$.
(c) $R_{D_3} = R_{A_3}$, $R_{D_2} = R_{A_1} \times R_{A_1}$.
(d) The root system of $so_{2n}$ has type $D_n$.

Lecture 12. We continue our classification of the simply laced root systems:
(3) $E_8$. Let
$$\Gamma_n = \{(k_1,\ldots,k_n) \mid \text{either all } k_i \in \mathbb{Z} \text{ or all } k_i \in \mathbb{Z}+\tfrac12,\ \text{and } \textstyle\sum k_i \in 2\mathbb{Z}\}.$$
Consider $\alpha = (\tfrac12, \tfrac12, \ldots, \tfrac12) \in \Gamma_n$, and note that $(\alpha,\alpha) = n/4$. Thus, if $\Gamma_n$ is to be an even lattice, we must have $8 \mid n$.
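The root counts quoted above can be verified by brute force: $\#R(A_n) = n(n+1)$, $\#R(D_n) = 2n(n-1)$, and $\Gamma_8$ contains exactly 240 vectors of squared length 2 (the roots of $E_8$). A sketch, not part of the notes:

```python
from itertools import product

def roots_A(n):
    # e_i - e_j in Z^{n+1}, i != j, represented by the index pair (i, j)
    return [(i, j) for i in range(n + 1) for j in range(n + 1) if i != j]

def roots_D(n):
    # +-e_i +- e_j, i < j, as coordinate vectors in Z^n
    out = []
    for i in range(n):
        for j in range(i + 1, n):
            for si, sj in product([1, -1], repeat=2):
                v = [0] * n
                v[i], v[j] = si, sj
                out.append(tuple(v))
    return out

def roots_E8():
    # integer roots +-e_i +- e_j, plus half-integer vectors with all
    # entries +-1/2 and an even number of minus signs (so sum is even)
    out = list(roots_D(8))
    for signs in product([1, -1], repeat=8):
        if signs.count(-1) % 2 == 0:
            out.append(tuple(s * 0.5 for s in signs))
    return out

counts = (len(roots_A(4)), len(roots_D(5)), len(roots_E8()))
# expect (4*5, 2*5*4, 112 + 128) = (20, 40, 240)
```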
Exercise 58. Check all of these (by finding a suitable function f where one is not listed).
Also find the simple roots for E6 , E7 , F4 , and G2 .
Remark. So far we have made two choices: we have chosen a maximal torus, and we have chosen a function $f$. Neither of these matters. We may or may not prove this about the maximal torus later; as for $f$, note that the space of functionals $f$ is partitioned into cells within which we get the same notion of positive roots. The Weyl group acts transitively on these cells, which should show that the choice of $f$ does not affect the structure of the root system we are deriving.
Proposition 6.5. (1) $\alpha, \beta \in \Pi \Rightarrow \alpha - \beta \notin R$.
(2) $\alpha, \beta \in \Pi$, $\alpha \neq \beta \Rightarrow (\alpha,\beta) \le 0$ (i.e. the angle between $\alpha$ and $\beta$ is obtuse).
(3) Every $\beta \in R^+$ can be written as $\beta = \sum k_i\alpha_i$ with $\alpha_i \in \Pi$ and $k_i \in \mathbb{Z}_{\ge0}$.
(4) Simple roots are linearly independent.
(5) $\beta \in R^+$, $\beta \notin \Pi \Rightarrow \exists\,\alpha_i \in \Pi$ such that $\beta - \alpha_i \in R^+$.
Exercise 59. Prove this! (Either by a case-by-case analysis, which you should do in a few cases anyway, or by finding a uniform proof from the axioms of a root system.)
We will now find another way to represent the root system. Let $\Pi = \{\alpha_1, \ldots, \alpha_l\}$, and let $a_{ij} = (\alpha_i^\vee, \alpha_j) = \frac{2(\alpha_i,\alpha_j)}{(\alpha_i,\alpha_i)}$.
Proof. The first two should be obvious from the above. To prove the claim about the determinant, note that
$$A = \begin{pmatrix} \frac{2}{(\alpha_1,\alpha_1)} & & 0\\ & \ddots & \\ 0 & & \frac{2}{(\alpha_l,\alpha_l)} \end{pmatrix}\bigl((\alpha_i,\alpha_j)\bigr)_{i,j}.$$
The first matrix is diagonal with positive entries; the second is the Gram matrix of a positive definite bilinear form and so has positive determinant.
The last property follows because any principal subminor of $A$ (i.e. one obtained by removing the same set of rows and columns) has the same form as $A$.
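The factorisation $A = DG$ used in the determinant argument can be checked directly; here is a numerical sketch for type $B_2$, with simple roots $\alpha_1 = e_1 - e_2$ and $\alpha_2 = e_2$ (these roots and the resulting matrix are the standard $B_2$ data, assumed here rather than taken from the notes).

```python
import numpy as np

simple = np.array([[1.0, -1.0],   # alpha_1 = e_1 - e_2
                   [0.0,  1.0]])  # alpha_2 = e_2
G = simple @ simple.T                           # Gram matrix (alpha_i, alpha_j)
D = np.diag([2 / G[i, i] for i in range(2)])    # diagonal, positive entries
A = D @ G                                       # a_ij = 2(a_i, a_j)/(a_i, a_i)

expected = np.array([[2.0, -1.0],
                     [-2.0, 2.0]])              # the B_2 Cartan matrix
matches = np.allclose(A, expected)
det_positive = np.linalg.det(A) > 0             # det D > 0 and det G > 0
```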
Lecture 13
Exercise 61. If $(R, V)$ is an irreducible root system with positive roots $R^+$, then there is a unique root $\theta \in R^+$ such that $\theta + \alpha \notin R^+$ for all $\alpha \in R^+$. This is called the highest root.
At the moment, the easiest way to prove this is to examine all the root systems we have. Later we'll give a uniform proof (and show the relationship between $\theta$ and the adjoint representation).
Define the extended Cartan matrix $\widetilde A$ by setting $\alpha_0 = -\theta$ (and then $\widetilde A = (a_{ij})_{0\le i,j\le l}$ with $a_{ij} = (\alpha_i^\vee, \alpha_j) = \frac{2(\alpha_i,\alpha_j)}{(\alpha_i,\alpha_i)}$).
The associated diagram is called the extended or affine Dynkin diagram.
Example 6.6. $A_1$ has $A = (2)$ and $\widetilde A = \begin{pmatrix} 2 & -2\\ -2 & 2 \end{pmatrix}$.
[Diagrams: the Dynkin diagrams of $A_n$, $D_n$, $E_8$, $E_7$, $E_6$, $B_n$, $C_n$, $G_2$, $F_4$, with nodes labelled by simple roots ($e_1-e_2$, $e_2-e_3$, \ldots; for $D_n$ the branch nodes $e_{n-1}-e_n$ and $e_{n-1}+e_n$; for $B_n$ the short root $e_n$; for $C_n$ the long root $2e_n$).]
$A_n$ has
$$A = \begin{pmatrix} 2 & -1 & & \\ -1 & 2 & -1 & \\ & -1 & 2 & \ddots\\ & & \ddots & \ddots \end{pmatrix}, \qquad \widetilde A = \begin{pmatrix} 2 & -1 & 0 & \cdots & -1\\ -1 & 2 & -1 & & 0\\ 0 & -1 & 2 & \ddots & \vdots\\ \vdots & & \ddots & \ddots & -1\\ -1 & 0 & \cdots & -1 & 2 \end{pmatrix}.$$
Notice that $\widetilde A$ satisfies almost the same conditions as the Cartan matrix, i.e.
(1) $a_{ij} \in \mathbb{Z}$, $a_{ij} \le 0$ for $i \neq j$, $a_{ii} = 2$;
(2) $a_{ij} = 0 \iff a_{ji} = 0$.
Figure 3. Affine Dynkin diagrams ($\widetilde A_n$, $\widetilde D_n$, $\widetilde E_8$, $\widetilde E_7$, $\widetilde E_6$, $\widetilde B_n$, $\widetilde C_n$, $\widetilde G_2$, $\widetilde F_4$, and $A_n^{(2)}$).
Finally, if there is a double bond in the diagram, then by $\widetilde C_n$ and $A_n^{(2)}$ there is only one of them, and by $\widetilde B_n$ the diagram does not branch. If the double bond is in the middle, we have $F_4$ (since we don't have $\widetilde F_4$); otherwise, we have $B_n$ or $C_n$.
(2) Hence, if $g$ is a finite-dimensional semisimple Lie algebra with Cartan matrix $A$, then the map $\tilde g(A) \to g$ via $E_i \mapsto E_i$, $F_i \mapsto F_i$, $H_i \mapsto H_i$ (where on the left-hand side we have the canonical generators of $\tilde g(A)$ and on the right-hand side we have the elements of the root spaces we found above) factors through $g(A)$ and is surjective, so gives an isomorphism $g(A) \cong g$.
The second part of the theorem clearly implies uniqueness. We also see that existence is equivalent to the statement:
$g(A)$ is finite-dimensional $\iff$ $A$ is a Cartan matrix.
Definition. $g(A)$ (for any generalised Cartan matrix $A$) is called a Kac–Moody algebra (this time because they really were independently discovered by Kac and Moody).
Most of the representation theory we will be talking about holds verbatim for the Kac–Moody algebras. However, note that all we have given for them is a finite presentation, which is not a very nice object. In particular, the root spaces of these algebras are not 1-dimensional. The dimension of the root space for any given root is computable but horrible, unless the matrix $A$ is of affine Dynkin type. In that case, the Lie algebra is approximately $\mathbb{C}c + g \otimes \mathbb{C}[t, t^{-1}]$.
The Weyl group has a corresponding presentation:
Theorem 7.8 (Presentation of the Weyl group). Write $s_i = s_{\alpha_i}$; then
$$W = \langle s_1, \ldots, s_l \mid s_i^2 = 1,\ (s_is_j)^{m_{ij}} = 1\rangle,$$
where $m_{ij}$ is determined by the Cartan matrix: $m_{ij} = 2, 3, 4, 6$ according as $a_{ij}a_{ji} = 0, 1, 2, 3$.
E.g., if $i$ and $j$ are not connected in the Dynkin diagram then $s_is_j = s_js_i$, and if they are connected by a single edge, then $s_is_js_i = s_js_is_j$. These are known as the braid relations. In $A_{n-1}$ they are the familiar relations among the adjacent transpositions $s_i = (i, i+1)$ in $S_n$.
Lecture 15
Exercise 69. Check, for each root system, that the relations claimed for the Weyl group
do hold. (This requires checking it for the rank-2 root systems.) We will not prove the
isomorphism.
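The rank-2 check asked for in Exercise 69 can be done numerically in type $A_2$: since $a_{12}a_{21} = 1$ we expect $m_{12} = 3$, i.e. $(s_1s_2)^3 = 1$. A sketch using the reflections $s_{e_i-e_{i+1}}$ on $\mathbb{R}^3$:

```python
import numpy as np

def refl(alpha):
    # reflection in the hyperplane orthogonal to alpha
    alpha = np.asarray(alpha, dtype=float)
    return np.eye(len(alpha)) - 2 * np.outer(alpha, alpha) / (alpha @ alpha)

s1 = refl([1, -1, 0])   # s_{alpha_1}: swaps coordinates 1 and 2
s2 = refl([0, 1, -1])   # s_{alpha_2}: swaps coordinates 2 and 3

order2 = (np.allclose(s1 @ s1, np.eye(3))
          and np.allclose(s2 @ s2, np.eye(3)))          # s_i^2 = 1
braid = np.allclose(np.linalg.matrix_power(s1 @ s2, 3), np.eye(3))  # (s1 s2)^3 = 1
```

The same scheme, with $\alpha_2$ replaced by a short root, checks $m_{12} = 4$ for $B_2$ and $m_{12} = 6$ for $G_2$.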
Remark. To show existence, we know that $g = t \oplus \bigoplus_{\alpha\in R} g_\alpha$, so in principle existence should only require picking a basis $E_\alpha$ and computing $[E_\alpha, E_\beta] = c_{\alpha\beta}E_{\alpha+\beta}$. How hard is it to write down the $c_{\alpha\beta}$?
For a long time it was thought to be quite hard; however, with the discovery of the affine
Lie algebras, it turned out to be quite easy. We might even get to doing it for the ADE
algebras.
Remark. So far the status of existence of a finite-dimensional simple Lie algebra corresponding to an abstract root system is, for us, as follows: we have shown that there exists a simple Lie algebra corresponding to each generalised Cartan matrix, but not that it is finite-dimensional. Suppose that for the $g$ that we have defined we write $g = t \oplus \bigoplus_{\alpha\in R} g_\alpha$ (where the torus comes from the root system). We can ask two questions: (1) what is the relationship between $R$ and the root system we started with? (2) what is $\dim g_\alpha$?
While it is true that $W\Pi \subset R$, in general there is not an equality between them. We call the roots in $W\Pi$ real roots, $R^{re}$; for the real roots, $\dim g_\alpha = 1$. For finite-dimensional Lie algebras, all roots are real. If $A$ is an affine Cartan matrix, then its semidefiniteness means that there exists a unique (up to scaling) root $\delta$ of length 0, called the imaginary root. The imaginary roots are pleasant and describable objects. For a general, indefinite, Cartan matrix, there exists an algorithm for computing $\dim g_\alpha$, but the numbers are not very meaningful.
8. Representations

Let $g$ be a semisimple Lie algebra, $g = t \oplus \bigoplus_{\alpha\in R} g_\alpha = t \oplus n^+ \oplus n^-$. Let $V$ be a finite-dimensional representation of $g$. (We don't quite need finite-dimensional; we'll see exactly what we need a bit later.)
Proposition 8.1. (1) $V = \bigoplus_{\lambda\in t^*} V_\lambda$, where $V_\lambda = \{v \in V \mid tv = \lambda(t)v\ \forall t \in t\}$ (this is the weight space decomposition with respect to $t$).
(2) If $V_\lambda \neq 0$, then $\lambda(h_\alpha) \in \mathbb{Z}$ for all $\alpha \in R$. (Here, $h_\alpha$ is the element of $(sl_2)_\alpha$, $h_\alpha = \nu^{-1}(\alpha^\vee)$.)
Proof. Since $V$ is a finite-dimensional representation of $g$, it is a finite-dimensional representation of $(sl_2)_\alpha$. Therefore, $h_\alpha$ acts diagonalisably on $V$ with eigenvalues $\lambda(h_\alpha) \in \mathbb{Z}$. The first assertion follows because the $h_\alpha$ span $t$.
Definition. $Q = \mathbb{Z}R = \bigoplus_i \mathbb{Z}\alpha_i$ is the lattice of roots.
$P = \{\lambda \in \mathbb{Q}R \mid (\lambda, \alpha^\vee) \in \mathbb{Z}\ \forall \alpha \in R\}$ (the dual lattice) is the lattice of weights. Note that the condition can be checked by computing $(\lambda, \alpha_i^\vee)$ for $\alpha_i \in \Pi$.
Since, for $\alpha, \beta \in R$, we have $(\alpha, \beta^\vee) \in \mathbb{Z}$, we see $Q \subset P$. Also, if $V$ is a finite-dimensional representation of $g$ and $V_\lambda \neq 0$, then $\lambda \in P$, as $\lambda(h_\alpha) = (\lambda, \alpha^\vee)$.
Exercise 70. (1) $|P/Q| < \infty$. In fact, $|P/Q| = \det A$ where $A$ is the Cartan matrix.
(2) $W(P) \subset P$, where $W$ is the Weyl group.
Example 8.1. In $sl_2$, $R = \{\pm\alpha\}$, $Q = \mathbb{Z}\alpha$, $(\alpha, \alpha^\vee) = 2$, so $P = \mathbb{Z}\alpha/2$, and $|P/Q| = 2 = \det(2)$.
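Exercise 70's formula $|P/Q| = \det A$ can be checked in type $A_n$, where the index of the root lattice in the weight lattice is $n+1$ for $sl_{n+1}$ (a standard fact assumed here). A numerical sketch:

```python
import numpy as np

def cartan_A(n):
    # tridiagonal Cartan matrix of type A_n
    A = 2 * np.eye(n)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = -1
    return A

# det of the A_n Cartan matrix for n = 1, 2, 3: expect n + 1 each time
dets = [round(np.linalg.det(cartan_A(n))) for n in (1, 2, 3)]
```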
Definition. For $V$ a finite-dimensional representation of $g$, define
$$\operatorname{ch} V = \sum_{\lambda\in P} \dim V_\lambda\, e^\lambda \in \mathbb{Z}[P].$$
Here, $e^\lambda$ is a formal symbol ($e^\lambda e^\mu = e^{\lambda+\mu}$); the $e^\lambda$ form a basis for $\mathbb{Z}[P]$.
"People are looking at me as if I'm speaking some language I don't speak, let alone you don't speak."
Figure 5. Root lattice for $sl_3$, with dimensions of weight spaces. Note that the outer numbers are 1, and the picture is $W$-invariant ($W = S_3$ here).
Figure 6. Cone for $sl_3$.
Since the $\{m_\lambda\}$ clearly form a basis for $\mathbb{Z}[P]^W$, the $\operatorname{ch} L(\lambda)$ are a basis of $\mathbb{Z}[P]^W$.
Figure 7. Cone of positive weights for $sl_3$. The adjoint representation's highest weight is $\omega_1 + \omega_2$.
For any simple $g$, the adjoint representation $V = g$ is irreducible. The decomposition $g = t \oplus \bigoplus_{\alpha\in R} g_\alpha$ means that a highest weight is a root $\theta$ such that $\theta + \alpha_i \notin R$ for all $\alpha_i \in \Pi$, i.e. $\theta$ is the highest root in $R^+$. (Therefore, the theorem of the highest weight implies that $\theta$ is unique, as promised.)
Example 8.7. In $A_{n-1}$, the highest root is $\theta = e_1 - e_n$ (recall $\alpha_i = e_i - e_{i+1}$, and $h_i$ is the diagonal matrix with $+1$ in the $i$th position, $-1$ in the $(i+1)$th position, and 0 elsewhere). We have $\theta(h_1) = 1$, $\theta(h_2) = 0$, \ldots, $\theta(h_{n-2}) = 0$, $\theta(h_{n-1}) = 1$. We can label the Dynkin diagram by the $\theta(h_i)$: $1\ 0\ \cdots\ 0\ 1$.
Exercise 76. Compute $\theta(h_i)$ for all the simple Lie algebras.
Example 8.8. Other representations of $sl_n$: we have the standard representation, $\mathbb{C}^n$, with weight basis $v_1, \ldots, v_n$ and weights $e_1, \ldots, e_n$ (recall $\sum e_i = 0$). The highest weight is $e_1$ (as $e_1 > e_2 > \ldots > e_n$, since $e_1-e_2, e_2-e_3, \ldots \in R^+$). Therefore, $\mathbb{C}^n = L(\omega_1)$, and
$$\operatorname{ch}\mathbb{C}^n = e^{e_1} + \ldots + e^{e_n} = z_1 + \ldots + z_n,$$
where $z_i = \exp(e_i) \in \mathbb{Z}[P] = \mathbb{Z}[z_1^{\pm1}, \ldots, z_n^{\pm1}]/(z_1\cdots z_n = 1)$.
Now, if $V$ and $W$ are representations of $g$, then so is $V \otimes W$ (where $x \in g$ acts by $x\otimes1 + 1\otimes x$). In particular, $V \otimes V$ is a representation. Note that $\sigma : V\otimes V \to V\otimes V$, $a\otimes b \mapsto b\otimes a$, commutes with the $g$-action, so eigenspaces of $\sigma$ are $g$-modules. That is, $S^2V$ and $\Lambda^2V$ are $g$-modules. In general, these may not be irreducible, but they are for $sl_n$ (provided $V$ is irreducible, of course!), as we check below:
Consider $\Lambda^s\mathbb{C}^n$ (for $s \le n-1$). This has weight basis $v_{i_1}\wedge\ldots\wedge v_{i_s}$ for $i_1 < \ldots < i_s$, and the weight of this is $e_{i_1}+\ldots+e_{i_s}$, as is clear from the action. Therefore,
$$E_i(v_{i_1}\wedge\ldots\wedge v_{i_s}) = 0 \text{ for all } i \iff v_{i_1}\wedge\ldots\wedge v_{i_s} = v_1\wedge\ldots\wedge v_s,$$
i.e. this is the only singular vector. Note that its weight is $\omega = e_1+\ldots+e_s$, and this is the $s$th fundamental weight $\omega_s$, as $(\omega, e_i-e_{i+1}) = \delta_{is}$. Therefore, $\Lambda^s\mathbb{C}^n = L(\omega_s)$ is a fundamental representation.
In particular, $\Lambda^{n-1}\mathbb{C}^n = (\mathbb{C}^n)^* = L(\omega_{n-1})$.
Consider $S^m\mathbb{C}^n$. It has weight basis $v_{i_1}\cdots v_{i_m}$ with $i_1 \le \ldots \le i_m$ and weight $e_{i_1}+\ldots+e_{i_m}$. We have
$$E_i(v_{i_1}\cdots v_{i_m}) = 0 \text{ for all } i \iff v_{i_1}\cdots v_{i_m} = v_1\cdots v_1 = v_1^m,$$
i.e. $S^m\mathbb{C}^n$ is an irreducible representation isomorphic to $L(m\omega_1)$.
Exercise 77. Check that the above is true. Compute $\operatorname{ch}\Lambda^s\mathbb{C}^n$ and $\operatorname{ch}S^m\mathbb{C}^n$. Find a closed formula for the generating functions
$$\sum_{m\ge0}(\operatorname{ch}S^m\mathbb{C}^n)q^m, \qquad \sum_{m\ge0}(\operatorname{ch}\Lambda^m\mathbb{C}^n)q^m.$$
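The weight bases just described are indexed by strictly increasing (for $\Lambda^s$) and weakly increasing (for $S^m$) index tuples, so the dimensions are binomial coefficients. A small combinatorial sketch of this count:

```python
from itertools import combinations, combinations_with_replacement
from math import comb

n, s, m = 4, 2, 3
# weight basis of Lambda^s C^n: v_{i_1} ^ ... ^ v_{i_s}, i_1 < ... < i_s
wedge_basis = list(combinations(range(n), s))
# weight basis of S^m C^n: v_{i_1} ... v_{i_m}, i_1 <= ... <= i_m
sym_basis = list(combinations_with_replacement(range(n), m))

dims = (len(wedge_basis), len(sym_basis))
expected = (comb(n, s), comb(n + m - 1, m))  # C(4,2) = 6, C(6,3) = 20
```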
Lecture 17
Exercise 78. (1) If $V$ and $W$ are $g$-modules, $V$ finite-dimensional, then $V^*\otimes W \cong \operatorname{Hom}(V, W)$ as $g$-modules, via $\phi\otimes w \mapsto (v \mapsto \phi(v)w)$.
Remark. If $V$ is a $g$-module, $V^*$ is a $g$-module with action $(x\phi)(v) = -\phi(xv)$. The sign comes from differentiating the group action $(g\phi)(v) = \phi(g^{-1}v)$.
(2) Show that if $V = \mathbb{C}^n$ and $g = sl_n$, then $V^*\otimes V = sl_n \oplus \mathbb{C}$, the adjoint plus trivial representations (where the trivial representation comes from $\operatorname{Id} \in \operatorname{Hom}_g(V, V)$). In contrast, $V\otimes V = S^2V \oplus \Lambda^2V$ always.
Exercise 79. Let $g = so_n$ or $sp_{2l}$ for $2l = n$, and $V = \mathbb{C}^n$.
(1) Compute the highest weight of this representation.
(2) $V \cong V^*$ via the form defining $g$, so $V\otimes V$ has at least three components (since it must have the trivial subrepresentation). Show that it has exactly three, describe them, and find the highest weights.
We will now devote ourselves to proving Theorem 8.3, classifying all the representations of simple Lie algebras.
Let $g$ be any algebra with a non-degenerate invariant bilinear form $(\cdot,\cdot)$, e.g. $g$ semisimple and $(\cdot,\cdot) = (\cdot,\cdot)_{ad}$. Let $x_1, \ldots, x_N$ be a basis of $g$, and $x^1, \ldots, x^N$ the dual basis (so $(x_i, x^j) = \delta_{ij}$).
Definition. $\Omega = \sum_i x_ix^i$ is the Casimir of $g$.
Proof 1, in coordinates. Write $[x, x_i] = \sum_j a_{ij}x_j$ and $[x, x^i] = \sum_j b_{ij}x^j$; invariance of the form gives $a_{ij} = -b_{ji}$. Thus,
$$[\Omega, x] = -\sum_{i,j} a_{ij}x_jx^i - \sum_{i,j} b_{ij}x_ix^j = 0.$$
Proof 2, without coordinates. We have maps of $g$-modules $\mathbb{C} \hookrightarrow \operatorname{End}(g) \cong g\otimes g^* \cong g\otimes g$ (the first map is $\lambda \mapsto \lambda\operatorname{Id}$, i.e. $1 \mapsto \sum_i x_i\otimes x^i$; the last isomorphism is via the bilinear form). Since $g$ acts on $V$, we have a map of $g$-modules $g \to \operatorname{End}(V)$. Finally, we have a map of $g$-modules $\operatorname{End}(V)\otimes\operatorname{End}(V) \to \operatorname{End}(V)$ by multiplication. This finally gives
$$\mathbb{C} \hookrightarrow g\otimes g^* \cong g\otimes g \to \operatorname{End}(V)\otimes\operatorname{End}(V) \to \operatorname{End}(V)$$
sending $1 \mapsto \Omega$. Therefore, $\Omega$ generates a trivial $g$-submodule of $\operatorname{End}(V)$, which is equivalent to saying $[\Omega, x] = 0$ for all $x \in g$.
Now let $g$ be semisimple, $g = t \oplus \bigoplus_{\alpha\in R} g_\alpha$, and $(\cdot,\cdot)_{ad}$ the Killing form. Choose a basis $u_1, \ldots, u_l$ of $t$ and $x_\alpha$ of $g_\alpha$; take the dual basis $u^1, \ldots, u^l$ of $t$ and $x^\alpha$ of $g_{-\alpha}$ (so that $(x_\alpha, x^\alpha) = 1$). Normalise $x_{-\alpha}$ so that $(x_\alpha, x_{-\alpha})_{ad} = 1$, so that $x^\alpha = x_{-\alpha}$. Then $[x_\alpha, x_{-\alpha}] = \nu^{-1}(\alpha)$ by definition. Now,
$$\Omega = \sum_i u_iu^i + \sum_{\alpha\in R^+}(x_\alpha x_{-\alpha} + x_{-\alpha}x_\alpha) = \sum_i u_iu^i + 2\sum_{\alpha\in R^+} x_{-\alpha}x_\alpha + \sum_{\alpha\in R^+}\nu^{-1}(\alpha).$$
Definition. $\rho = \frac12\sum_{\alpha\in R^+}\alpha$.
Remark. Of course, there will be an exercise to compute $\rho$ for all the simple Lie algebras. However, we will state this exercise later, when it's more obvious what $\rho$ is.
Remark. If you like algebraic geometry, this is the square root of the canonical bundle on the flag variety.
Then
$$\Omega = \sum_i u_iu^i + 2\nu^{-1}(\rho) + 2\sum_{\alpha\in R^+} x_{-\alpha}x_\alpha.$$
Lemma 8.7. Let $V$ be a $g$-module, $v \in V_\lambda$ a singular vector with weight $\lambda$ (i.e. $n^+v = 0$ and $tv = \lambda(t)v$). Then $\Omega v = (|\lambda+\rho|^2 - |\rho|^2)v$.
Proof. Apply the most recent version of $\Omega$:
$$\Omega v = \Bigl(\sum_i \lambda(u_i)\lambda(u^i) + \lambda(2\nu^{-1}(\rho))\Bigr)v + 2\sum_{\alpha\in R^+} x_{-\alpha}x_\alpha v = ((\lambda,\lambda) + 2(\lambda,\rho))v.$$
In particular, if $V$ is irreducible, $\Omega$ acts on $V$ by $(\lambda,\lambda) + 2(\lambda,\rho)$, by Schur's lemma.
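Lemma 8.7 can be verified numerically for $g = sl_2$. Using the trace form (an assumption made for this sketch; it rescales the Killing form), dual bases are $\{e, f, h\}$ and $\{f, e, h/2\}$, so $\Omega = ef + fe + h^2/2$, and on $L(m)$ (highest weight $\lambda = m\alpha/2$) the scalar $(\lambda,\lambda) + 2(\lambda,\rho)$ works out to $m^2/2 + m$:

```python
import numpy as np

m = 3
# Matrices of E, F, H on the weight basis v_0, ..., v_m of L(m):
E = np.zeros((m + 1, m + 1))
F = np.zeros((m + 1, m + 1))
H = np.diag([m - 2 * k for k in range(m + 1)]).astype(float)
for k in range(m):
    F[k + 1, k] = 1.0                  # F v_k = v_{k+1}
    E[k, k + 1] = (k + 1) * (m - k)    # E v_{k+1} = (k+1)(m-k) v_k
assert np.allclose(E @ F - F @ E, H)   # [E, F] = H

# Casimir for the trace form: Omega = EF + FE + H^2/2, a scalar on L(m)
C = E @ F + F @ E + H @ H / 2
scalar_ok = np.allclose(C, (m**2 / 2 + m) * np.eye(m + 1))
```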
9. PBW theorem

Let $g$ be any Lie algebra over a field $k$.
Definition. The universal enveloping algebra $Ug$ of $g$ is the associative algebra over $k$ generated by $g$ with relations $xy - yx = [x, y]$ for all $x, y \in g$.
Recall that $Tg = \bigoplus_{n\ge0} g^{\otimes n}$ is the tensor algebra of $g$, and is the free associative algebra generated by $g$. Let $J$ be the two-sided ideal in $Tg$ generated by $x\otimes y - y\otimes x - [x, y]$ for all $x, y \in g$; then $Ug = Tg/J$.
Exercise 80. An enveloping algebra for $g$ is a linear map $i : g \to A$, where $A$ is an associative algebra, such that $i(x)i(y) - i(y)i(x) = i([x, y])$. (For example, if $V$ is a representation of $g$, we can take $A = \operatorname{End}(V)$ and $i$ the action map.) Show that $Ug$ is initial in the category of enveloping algebras, i.e. any such map $i$ factors uniquely through $Ug$.
Remark. This is old terminology; anywhere other than the above exercise and old textbooks, "enveloping algebra" almost certainly means "universal enveloping algebra".
The Casimir operator naturally lives in $Ug$; in fact, as we've shown, in the centre $Z(Ug)$.
Observe that $Tg$ is a graded algebra, but that the relations we are imposing on it are not homogeneous ($x\otimes y$ and $y\otimes x$ have degree 2, while $[x, y]$ has degree 1). Consequently, $Ug$ is not graded. It is, however, filtered.
Define $(Ug)_n$ to be the span of products of at most $n$ elements of $g$; then $U_n \subset U_m$ for $n \le m$, and $U_nU_m \subset U_{n+m}$. (In particular, $U_0 = k$, $U_1 = k + g$, etc.)
Exercise 81. Show that if $x \in U_m$ and $y \in U_n$, then $[x, y] \in U_{n+m-1}$ (we clearly have $[x, y] \in U_{n+m}$, and are asserting that in fact the degree is one lower). By abuse of notation, for $x, y \in Ug$ we write $[x, y]$ to mean $xy - yx$.
Definition. For a filtration $F_0 \subset F_1 \subset \ldots$, we write $\operatorname{gr}(F) = \bigoplus_i F_i/F_{i-1}$. (We need the $F_i$ to be a filtration of an algebra for this to make sense as an algebra.) This is the associated graded algebra.
Theorem 9.1 (Poincaré–Birkhoff–Witt).
$$\operatorname{gr} Ug = \bigoplus_n (Ug)_n/(Ug)_{n-1} \cong Sg.$$
Equivalently, if $\{x_1, \ldots, x_n\}$ is a basis for $g$, then $\{x_1^{a_1}x_2^{a_2}\cdots x_n^{a_n} \mid a_i \in \mathbb{Z}_{\ge0}\}$ is a basis for $Ug$. In particular, $g \hookrightarrow Ug$.
Exercise 82. (1) Show that the previous exercise, $[U_n, U_m] \subset U_{n+m-1}$, implies that there is a well-defined map $S(g) \to \operatorname{gr} Ug$ extending the map $g \mapsto g$.
(2) Show that the map is surjective, or equivalently that the $\{\prod x_i^{a_i}\}$ span $Ug$.
Therefore, the hard part of the PBW theorem is injectivity. We will not prove the theorem.
Remark. The theorem should imply that $Ug$ is a deformation of $Sg$ (i.e. if we consider $xy - yx = t[x, y]$ then for $t = 0$ we get $Sg$, whereas for $t = 1$, and in fact for generic $t$ by renormalising, we get $Ug$). The shortest proof of the theorem is indeed via deformation theory.
Exercise 86. (1) If $g$ is an arbitrary simple Lie algebra and $\lambda(H_i) \in \mathbb{Z}_{\ge0}$, show that $F_i^{\lambda(H_i)+1}v_\lambda$ is a singular vector of $M(\lambda)$. (There will be others!)
(2) Compute $\operatorname{ch} M(\lambda)$ for all Verma modules.

Lecture 19. We will now try to show that $L(\lambda)$ is finite-dimensional if $\lambda \in P^+$.
Proposition 10.5. If $\lambda \in P^+$, $L(\lambda)$ is integrable, i.e. the $E_i$ and $F_i$ act locally nilpotently.
Proof. If $V$ is any highest-weight module, $E_i$ acts locally nilpotently (as $E_iV_\mu \subset V_{\mu+\alpha_i}$ and the weights of $V$ lie in $D(\lambda)$). We must show that $F_i$ acts locally nilpotently.
By the exercise, $F_i^{\lambda(H_i)+1}v_\lambda$ is a singular vector. Since $L(\lambda)$ is irreducible, it has no singular vectors other than $v_\lambda$, so $F_i^{\lambda(H_i)+1}v_\lambda = 0$.
Exercise 87. (1) $a^kb = \sum_{i=0}^k\binom{k}{i}\bigl((\operatorname{ad}a)^ib\bigr)a^{k-i}$.
(2) Using the above and the Serre relations $(\operatorname{ad}e_\alpha)^4e_\beta = 0$, show $F_i^Ne_1\cdots e_rv_\lambda = 0$ for $N \gg 0$ by induction on $r$.
Remark. In a general Kac–Moody algebra, the Serre relations are no longer $(\operatorname{ad}e_\alpha)^4e_\beta = 0$, but the exponents are still bounded in terms of the $a_{ij}$ for the relevant $i, j$ in the generalised Cartan matrix.
Corollary 10.6. $\dim L(\lambda)_\mu = \dim L(\lambda)_{w\mu}$ for all $w \in W$.
Proof. When we were proving Proposition 8.2, we only needed local nilpotence of $E_i$ and $F_i$ to conclude invariance under $w = s_i = s_{\alpha_i}$. Since $W$ is generated by the $s_i$, we conclude invariance under all of $W$.
Theorem 10.7 (Cartan's theorem). If $g$ is finite-dimensional and $\lambda \in P^+$, then $L(\lambda)$ is finite-dimensional.
Remark. This is clear pictorially: $L(\lambda)$ is $W$-invariant and sits in the cone $D(\lambda)$. However, it's faster to work through it formally.
Proof. First, we show that if $\alpha \in R^+$ then $e_{-\alpha}$ acts locally nilpotently on $L(\lambda)$, i.e. all (not just the simple) local $sl_2$'s act integrably. In fact, $e_{-\alpha}^nv_\lambda = 0$ for $n = (\lambda, \alpha^\vee) + 1$, since otherwise we have $L(\lambda)_{\lambda-n\alpha} \neq 0$ and hence $s_\alpha(\lambda - n\alpha)$ is a weight in $L(\lambda)$. However,
$$s_\alpha(\lambda - n\alpha) = s_\alpha(\lambda) + n\alpha = \lambda - (\lambda,\alpha^\vee)\alpha + ((\lambda,\alpha^\vee)+1)\alpha = \lambda + \alpha > \lambda,$$
and so cannot be a weight in $L(\lambda)$, as $\lambda$ is the highest weight. Applying Exercise 87, we see that $e_{-\alpha}$ acts locally nilpotently on all of $L(\lambda)$.
Therefore,
$$L(\lambda) = U(n^-)v_\lambda = \operatorname{span}\{e_{-\beta_1}^{k_1}\cdots e_{-\beta_r}^{k_r}v_\lambda\}, \qquad R^+ = \{\beta_1, \ldots, \beta_r\},$$
and since each $e_{-\beta_i}$ acts nilpotently, this span is finite-dimensional.
Lemma 10.8. $s_i(R^+\setminus\{\alpha_i\}) = R^+\setminus\{\alpha_i\}$.
Proof. Let $\beta \in R^+\setminus\{\alpha_i\}$, $\beta = \sum_{j\neq i}k_j\alpha_j + k_i\alpha_i$ with all $k_j \ge 0$ and some $k_j \neq 0$ for $j \neq i$.
Corollary. $s_i\rho = \rho - \alpha_i$.
Proof. Observe
$$s_i(\rho) = s_i\Bigl(\frac{\alpha_i}{2} + \frac12\sum_{\alpha\in R^+,\,\alpha\neq\alpha_i}\alpha\Bigr) = -\frac{\alpha_i}{2} + \frac12\sum_{\alpha\in R^+,\,\alpha\neq\alpha_i}\alpha = \rho - \alpha_i.$$
Claim. For every $v \in V^{n^+}_\lambda$, the module $L = Ug\,v$ is irreducible.
Proof. $L$ is a highest weight module with highest weight $\lambda$, so we must only show that it has no other singular vectors. If $\mu$ is the weight of a singular vector in $L$, then $\mu \le \lambda$, but $|\mu+\rho| = |\lambda+\rho|$ by considering the action of the Casimir. Since $V$, and therefore $L$, is finite-dimensional, we must have $\mu, \lambda \in P^+$. By the key lemma, $\mu = \lambda$.
Consequently, $V' = Ug\,V^{n^+}$ is completely reducible: if $\{v_1, \ldots, v_r\}$ is a weight basis for $V^{n^+}$ with weights $\lambda_1, \ldots, \lambda_r$, then $V' = L(\lambda_1)\oplus\ldots\oplus L(\lambda_r)$. It remains to show that $N = V/V' = 0$.
Suppose not. $N$ is finite-dimensional, so $N^{n^+} \neq 0$. Let $v \in N_\mu$ be a singular vector with $\mu \in P^+$. Lift $v$ to $\tilde v \in V_\mu$. Since $v \neq 0$ in $N$, we must have $E_i\tilde v \neq 0 \in V_{\mu+\alpha_i}$ for some $i$.
Note that $E_iv = 0$ in $N$, so $E_i\tilde v \in V'$. Consequently, the Casimir acts on (a component of) $E_i\tilde v$ by $|\lambda+\rho|^2 - |\rho|^2$ for some $\lambda \in \{\lambda_1, \ldots, \lambda_r\}$. On the other hand, since $v$ was singular in $N$, $\Omega$ acts on $\tilde v$ by $|\mu+\rho|^2 - |\rho|^2$ modulo $V'$. Since $\Omega$ commutes with $E_i$, we find $|\mu+\rho| = |\lambda+\rho|$. Moreover, $\mu + \alpha_i$ is a weight in $L(\lambda)$, so $\lambda - (\mu+\alpha_i) = \sum k_j\alpha_j$ with $k_j \ge 0$, i.e. $\mu < \lambda$. This, however, contradicts the key lemma.
Remark. The usual proof (in particular, Weyl's own proof) of this theorem involves lifting the integrable action to the group, and using the maximal compact subgroup (for which we have complete reducibility because it has positive, finite measure). Somehow, the key lemma is the equivalent of a positive-definite metric.
Lemma 10.12. Let $\lambda \in t^*$ and $M(\lambda)$ the Verma module. Then
$$\operatorname{ch} M(\lambda) = \frac{e^\lambda}{\prod_{\alpha\in R^+}(1 - e^{-\alpha})}.$$

Lecture 20
Proof. Let $R^+ = \{\beta_1, \ldots, \beta_r\}$. By PBW, a basis for $M(\lambda)$ is $\{e_{-\beta_1}^{k_1}\cdots e_{-\beta_r}^{k_r}v_\lambda\}$ for $k_i \in \mathbb{Z}_{\ge0}$, and the weight of such an element is $\lambda - \sum k_i\beta_i$. Therefore, $\dim M(\lambda)_{\lambda-\mu}$ is the number of ways of writing $\mu$ as $\sum k_i\beta_i$ over $\beta_i \in R^+$. This is precisely the coefficient of $e^{\lambda-\mu}$ in $e^\lambda\prod_{\alpha\in R^+}(1 - e^{-\alpha})^{-1}$.
Write $\Delta = \prod_{\alpha\in R^+}(1 - e^{-\alpha})$. We have shown that $\operatorname{ch} M(\lambda) = e^\lambda/\Delta$.
Lemma 10.13. For all $w \in W$, $w(e^\rho\Delta) = \det w \cdot e^\rho\Delta$. Here, $\det : W \to \{\pm1\}$ gives the determinant of $w$ acting on $t^*$.
Proof. Since $W$ is generated by simple reflections, it suffices to check that $s_i(e^\rho\Delta) = -e^\rho\Delta$. Indeed,
$$s_i(e^\rho\Delta) = s_i\Bigl(e^\rho(1-e^{-\alpha_i})\prod_{\alpha\in R^+,\,\alpha\neq\alpha_i}(1-e^{-\alpha})\Bigr) = e^{\rho-\alpha_i}(1-e^{\alpha_i})\prod_{\alpha\in R^+,\,\alpha\neq\alpha_i}(1-e^{-\alpha}) = -e^\rho\Delta,$$
using $s_i\rho = \rho - \alpha_i$, $s_i$ permuting $R^+\setminus\{\alpha_i\}$, and $e^{\rho-\alpha_i}(1-e^{\alpha_i}) = -e^\rho(1-e^{-\alpha_i})$.
Lemma 10.14. (1) For any highest weight module $V(\lambda)$ with highest weight $\lambda$ there exist coefficients $a_\mu \ge 0$, $\mu \le \lambda$, such that
$$\operatorname{ch} V(\lambda) = \sum_{|\mu+\rho|=|\lambda+\rho|} a_\mu\operatorname{ch} L(\mu)$$
with $a_\lambda = 1$.
(2) There exist coefficients $b_\mu \in \mathbb{Z}$ with $b_\lambda = 1$ such that
$$\operatorname{ch} L(\lambda) = \sum_{|\mu+\rho|=|\lambda+\rho|} b_\mu\operatorname{ch} M(\mu).$$
Proof. First, note that (1) implies (2). Indeed, totally order $B(\lambda)$ so that $\mu_i \le \mu_j \Rightarrow i \le j$. Applying (1) to the Verma modules gives a system of equations relating the $\operatorname{ch} M(\mu)$ and $\operatorname{ch} L(\mu)$ which is upper-triangular with 1s on the diagonal. Inverting this system gives (2). (Note in particular that the coefficients in (2) may be negative.)
We now prove (1). Recall that the weight spaces of a highest weight module are finite-dimensional. We induct on $\sum_{\mu\in B(\lambda)}\dim V(\lambda)_\mu$.
If $V(\lambda)$ is irreducible, we clearly have (1). Otherwise, there exists $\mu \in B(\lambda)$, $\mu \neq \lambda$, with a singular vector $v \in V(\lambda)_\mu$. Pick $\mu$ with the largest height of $\lambda - \mu$ (recall the height of $\sum k_i\alpha_i$ is $\sum k_i$). Then $Ug\,v \subset V(\lambda)$ has no other singular vectors, and is therefore irreducible, $\cong L(\mu)$. Setting $\overline V(\lambda) = V(\lambda)/L(\mu)$, we see that $\overline V(\lambda)$ is a highest-weight module with a smaller value of $\sum_{\mu\in B(\lambda)}\dim V(\lambda)_\mu$, and $\operatorname{ch} V(\lambda) = \operatorname{ch}\overline V(\lambda) + \operatorname{ch} L(\mu)$.
Now, $w(\operatorname{ch} L(\lambda)) = \operatorname{ch} L(\lambda)$ for all $w \in W$, and $w(e^\rho\Delta) = \det w \cdot e^\rho\Delta$. Therefore,
$$e^\rho\Delta\operatorname{ch} L(\lambda) = \sum_{\mu\in B(\lambda)} b_\mu e^{\mu+\rho}$$
is anti-invariant under $W$, and
$$\operatorname{ch} L(\lambda) = \frac{\sum_{w\in W}\det w\, e^{w(\lambda+\rho)}}{e^\rho\prod_{\alpha\in R^+}(1 - e^{-\alpha})} = \sum_{w\in W}\det w\operatorname{ch} M(w(\lambda+\rho) - \rho).$$
Example 10.2. Let $g = sl_2$, and write $z = e^{\alpha/2}$. Then $\mathbb{C}[P] = \mathbb{C}[z, z^{-1}]$ and $e^\rho = z$. We have
$$\operatorname{ch} L(m\alpha/2) = \frac{z^{m+1} - z^{-(m+1)}}{z - z^{-1}}$$
as before.
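Example 10.2 amounts to the Laurent-polynomial identity $(z^m + z^{m-2} + \ldots + z^{-m})(z - z^{-1}) = z^{m+1} - z^{-(m+1)}$, which a few lines of code can confirm. A sketch, with Laurent polynomials stored as {exponent: coefficient} dicts:

```python
from collections import defaultdict

def mul(p, q):
    # multiply Laurent polynomials represented as {exponent: coeff}
    out = defaultdict(int)
    for a, ca in p.items():
        for b, cb in q.items():
            out[a + b] += ca * cb
    return {k: v for k, v in out.items() if v}

m = 4
ch = {m - 2 * k: 1 for k in range(m + 1)}   # z^m + z^{m-2} + ... + z^{-m}
numerator = mul(ch, {1: 1, -1: -1})          # multiply by (z - z^{-1})
expected = {m + 1: 1, -(m + 1): -1}          # z^{m+1} - z^{-(m+1)}
```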
Remark. The second expression for the character of $L(\lambda)$ looks like an Euler characteristic. In fact, it is: there is a complex of Verma modules whose cohomology are the irreducible modules. This is known as the Bernstein–Gelfand–Gelfand resolution.
The first expression comes from something like a cohomology of coherent line bundles, and is variously attributed to Euler–Grothendieck, Atiyah–Bott, and Atiyah–Singer.
since $\det w = \det w^{-1}$ and $(x, wy) = (w^{-1}x, y)$ (i.e. the Weyl group is a subgroup of the orthogonal group of the inner product).
We now apply $F$ to the Weyl character formula:
$$F(\operatorname{ch} L(\lambda)) = \frac{\sum_{w\in W}\det w\, q^{(w(\lambda+\rho),\mu)}}{q^{(\rho,\mu)}\prod_{\alpha\in R^+}(1 - q^{-(\alpha,\mu)})}.$$
Remark. You are now in a position to answer any question you choose to answer, if necessary by brute force. For example, let us decompose $L(\lambda)\otimes L(\mu) = \bigoplus_\nu m_{\lambda\mu}^\nu L(\nu)$; we want to compute the Littlewood–Richardson coefficients $m_{\lambda\mu}^\nu$ (recall that we had the Clebsch–Gordan rule for them in $sl_2$).
Define $\bar{\phantom{x}} : \mathbb{Z}[P] \to \mathbb{Z}[P]$, $e^\lambda \mapsto e^{-\lambda}$, and $CT : \mathbb{Z}[P] \to \mathbb{Z}$ with $e^\lambda \mapsto 0$ unless $\lambda = 0$, and $e^0 \mapsto 1$. Define $(\cdot,\cdot) : \mathbb{Z}[P]\times\mathbb{Z}[P] \to \mathbb{Z}$ by $(f, g) = \frac{1}{|W|}CT(f\bar g\Delta\bar\Delta)$, where $\Delta = \prod_{\alpha\in R^+}(1 - e^{-\alpha})$.
Claim. Let $\chi_\lambda = \operatorname{ch} L(\lambda)$ and $\chi_\mu = \operatorname{ch} L(\mu)$; then $(\chi_\lambda, \chi_\mu) = \delta_{\lambda\mu}$. (If our Lie algebra has type A, then the $\chi_\lambda$ are the Schur functions, and this is their orthogonality.)
Given this claim, $m_{\lambda\mu}^\nu = (\chi_\lambda\chi_\mu, \chi_\nu)$ is algorithmically computable. It was a great surprise to everyone when it was discovered that actually there is a simpler and more beautiful way of doing this.
Proof of claim.
$$(\chi_\lambda, \chi_\mu) = \frac{1}{|W|}CT\Bigl(\sum_{x,w\in W} e^{w(\lambda+\rho)}e^{-x(\mu+\rho)}\det(wx)\Bigr)$$
Exercise 92.
$$F(\operatorname{ch} L(\lambda)) = q^{-(\lambda,\rho)}\prod_{\alpha\in R^+}\frac{1 - q^{(\lambda+\rho,\alpha^\vee)}}{1 - q^{(\rho,\alpha^\vee)}}.$$
[Hint: apply $F$ to the Weyl denominator identity for the Langlands dual Lie algebra with root system $R^\vee$.]
Note that $(\lambda+\rho,\alpha^\vee)/(\rho,\alpha^\vee) = (\lambda+\rho,\alpha)/(\rho,\alpha)$ as $\alpha^\vee = 2\alpha/(\alpha,\alpha)$, so letting $q \to 1$ we recover the same formula for the Weyl dimension.
Definition. We call $F(\operatorname{ch} L(\lambda))$ the $q$-dimension of $L(\lambda)$, denoted $\dim_q L(\lambda)$.
Proposition 11.1. $\dim_q L(\lambda)$ is a unimodal polynomial. More specifically, it lives in $\mathbb{N}[q^2, q^{-2}]$ or $q\mathbb{N}[q^2, q^{-2}]$ (depending on its degree), and the coefficients decrease as the absolute value of the degree gets bigger.
Proof. We will show that $\dim_q L(\lambda)$ is the character of an $sl_2$-module in which all strings have the same parity.
Set $H = 2\nu^{-1}(\rho) \in t \subset g$, and set $E = \sum_i E_i$.

Lecture 22
Exercise 96. Compute $\dim_q L(\theta)$, where $L(\theta)$ is the adjoint representation, for $G_2$, $A_2$, and $B_2$. Then do it for all the classical groups. You will notice that $L(\theta)|_{\text{principal }sl_2} = L(2e_1) + \ldots + L(2e_l)$, where $l = \operatorname{rank} g = \dim t$, $e_1, \ldots, e_l \in \mathbb{N}$, and $e_1 = 1$. The $e_i$ are called the exponents of the Weyl group, and the order of the Weyl group is $|W| = (e_1+1)\cdots(e_l+1)$. Compute $|W|$ for $E_8$.
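For the last part of Exercise 96, the formula $|W| = (e_1+1)\cdots(e_l+1)$ gives the answer in one line once the exponents are known. The exponents of $E_8$ used below, 1, 7, 11, 13, 17, 19, 23, 29, are a standard fact assumed here rather than derived in the notes:

```python
from math import prod

exponents_E8 = [1, 7, 11, 13, 17, 19, 23, 29]
order_W_E8 = prod(e + 1 for e in exponents_E8)
# 2 * 8 * 12 * 14 * 18 * 20 * 24 * 30 = 696729600
```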
12. Crystals

Let $g$ be a semisimple Lie algebra, $\Pi = \{\alpha_1, \ldots, \alpha_l\}$ the simple roots, and $P$ the weight lattice.
Definition. A crystal is a set $B$, $0 \notin B$, together with functions $\operatorname{wt} : B \to P$, $\tilde e_i : B \to B\sqcup\{0\}$, $\tilde f_i : B \to B\sqcup\{0\}$ such that
(1) If $\tilde e_ib \neq 0$, then $\operatorname{wt}(\tilde e_i(b)) = \operatorname{wt}(b) + \alpha_i$; if $\tilde f_ib \neq 0$, then $\operatorname{wt}(\tilde f_i(b)) = \operatorname{wt}(b) - \alpha_i$.
(2) For $b, b' \in B$, $\tilde e_ib = b'$ if and only if $b = \tilde f_ib'$.
(3) Before we state the third defining property, we need to introduce slightly more notation.
We can draw $B$ as a graph. The vertices are $b \in B$, and the edges are $b \xrightarrow{i} b'$ if $\tilde e_ib' = b$. We say that this edge is coloured by $i$. This graph is known as the crystal graph.
Example 12.1. In $sl_2$ the string
$$n \to n-2 \to n-4 \to \ldots \to -n$$
is a crystal, where the weight of vertex $i$ is $i\alpha/2$.
Define $\varepsilon_i(b) = \max\{n \ge 0 : \tilde e_i^n(b) \neq 0\}$ and $\varphi_i(b) = \max\{n \ge 0 : \tilde f_i^n(b) \neq 0\}$; thus $b$ sits $\varepsilon_i(b)$ steps from the top and $\varphi_i(b)$ steps from the bottom of its $i$-string.
In $sl_2$, the sum $\varepsilon_i(b) + \varphi_i(b)$ is the length of the string. On the other hand, if we are in the highest-weight representation $L(n) = L(n\omega)$ and $\operatorname{wt}(b) = n - 2k$, then $\varepsilon(b) = k$ and $\varphi(b) = n - k$, i.e. $\varphi(b) - \varepsilon(b) = (\operatorname{wt}(b), \alpha^\vee)$. The third property of crystals is that this happens in general:
(3) $\varphi_i(b) - \varepsilon_i(b) = (\operatorname{wt}(b), \alpha_i^\vee)$ for all $i$.
Define $B_\lambda = \{b \in B : \operatorname{wt}(b) = \lambda\}$.
Definition. For $B_1$ and $B_2$ crystals, we define the tensor product $B_1 \otimes B_2$ as follows: as a set, $B_1 \otimes B_2 = B_1 \times B_2$; $\operatorname{wt}(b_1\otimes b_2) = \operatorname{wt}(b_1) + \operatorname{wt}(b_2)$; and
$$\tilde e_i(b_1\otimes b_2) = \begin{cases}(\tilde e_ib_1)\otimes b_2 & \text{if } \varphi_i(b_1) \ge \varepsilon_i(b_2)\\ b_1\otimes(\tilde e_ib_2) & \text{if } \varphi_i(b_1) < \varepsilon_i(b_2)\end{cases} \qquad \tilde f_i(b_1\otimes b_2) = \begin{cases}(\tilde f_ib_1)\otimes b_2 & \text{if } \varphi_i(b_1) > \varepsilon_i(b_2)\\ b_1\otimes(\tilde f_ib_2) & \text{if } \varphi_i(b_1) \le \varepsilon_i(b_2)\end{cases}$$
That is, we are trying to recreate the picture in Figure 10 in each colour.
Exercise 97. Check that $B_1 \otimes B_2$ as defined above is a crystal.
Exercise 98. $B_1\otimes(B_2\otimes B_3) \cong (B_1\otimes B_2)\otimes B_3$ via $b_1\otimes(b_2\otimes b_3) \mapsto (b_1\otimes b_2)\otimes b_3$.
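The tensor rule is entirely combinatorial, so it can be run by machine. The following sketch implements it for $sl_2$ crystals (elements of $B(n)$ encoded as pairs $(n, k)$, with $k$ the number of $\tilde f$'s applied to the highest element) and checks that $B(1)\otimes B(1)$ splits into components of sizes 3 and 1, matching $L(1)\otimes L(1) = L(2) + L(0)$:

```python
# sl_2 crystal B(n): eps = k, phi = n - k, f lowers, e raises
def eps(b): n, k = b; return k
def phi(b): n, k = b; return n - k
def f(b):   n, k = b; return (n, k + 1) if k < n else None
def e(b):   n, k = b; return (n, k - 1) if k > 0 else None

def f_tensor(b1, b2):
    # f acts on the left factor iff phi(b1) > eps(b2)
    if phi(b1) > eps(b2):
        fb = f(b1); return (fb, b2) if fb else None
    fb = f(b2); return (b1, fb) if fb else None

def e_tensor(b1, b2):
    # e acts on the left factor iff phi(b1) >= eps(b2)
    if phi(b1) >= eps(b2):
        eb = e(b1); return (eb, b2) if eb else None
    eb = e(b2); return (b1, eb) if eb else None

B1 = [(1, 0), (1, 1)]
nodes = [(a, b) for a in B1 for b in B1]

# connected components of the crystal graph of B(1) x B(1)
comps, seen = [], set()
for x in nodes:
    if x in seen:
        continue
    comp, stack = set(), [x]
    while stack:
        y = stack.pop()
        if y in comp:
            continue
        comp.add(y)
        for g in (e_tensor, f_tensor):
            z = g(*y)
            if z is not None and z not in comp:
                stack.append(z)
    seen |= comp
    comps.append(comp)

sizes = sorted(len(c) for c in comps)   # expect [1, 3]
```

This is exactly the statement of Theorem 12.1(3) in the smallest nontrivial case.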
m + 1 dots
n + 1 dots
Remark. As we see above, the tensor product is associative. In general, however, it is not
commutative.
Definition. B is the crystal obtained from B by reversing the arrows. That is, B = {b :
b B}, wt(b ) = wt(b), i (b ) = i (b) (and vice versa), and ei (b ) = (fi b) (and vice
versa).
Exercise 99. (B1 B2 ) = B2 B1 .
Remark. We will see in a moment precisely how we can attach a crystal to the basis of a representation. If B parametrises a basis for V, then B∨ parametrises a basis for V∗. The condition on the weights can be seen by noting that if L(λ) has highest weight λ, then L(λ)∗ has lowest weight −λ.
Theorem 12.1 (Kashiwara). Let L(λ) be an irreducible highest-weight representation with highest weight λ ∈ P⁺. Then
(1) There exists a crystal B(λ) whose elements are in one-to-one correspondence with a basis of L(λ), and elements of B(λ)_μ parametrise a basis of L(λ)_μ, so that
ch L(λ) = Σ_{b ∈ B(λ)} e^{wt(b)}.
(2) For any simple root α_i (i.e. any simple sl2 ⊆ g), the decomposition of L(λ) as an (sl2)_i-module is given by the i-coloured strings in B(λ). (In particular, as an uncoloured graph, B(λ) is connected, since L(λ) is spanned by the vectors (∏ f_i) v_λ.)
(3) B(λ) ⊗ B(μ) is the crystal for L(λ) ⊗ L(μ), i.e. B(λ) ⊗ B(μ) decomposes into connected components in the same way as L(λ) ⊗ L(μ) decomposes into irreducible representations.
Remark. We will mention three proof approaches below. The first of them is due to Kashiwara, and is in the lecturer's opinion the most instructive.
Example 12.2. Let g = sl3, V = C³ = L(ω1). We compute V ⊗ V and V ⊗ V∗.
Remark. Note that while the crystals give the decomposition of the representation into irreducibles, they do not correspond directly to a basis. That is, there is no sl2-invariant basis that we could use here. Kashiwara's proof of the theorem uses the quantum group U_q g, which is an algebra over C[q, q⁻¹] and a deformation of the universal enveloping algebra Ug. The two have the same representations, but over C[q, q⁻¹] there is a very nice basis which satisfies e_i b = ẽ_i b + q·(some mess). Therefore, setting q = 0 ("freezing") will give the crystal.
[Diagram: the crystal graphs computed in Example 12.2; the vertices are tableaux such as 1 1 and 1 2, with weights 2ω1, ω1 + ω2, etc.]
Lusztig's proof uses some of the same ideas; I didn't catch how it goes.
Soon, we will look at Littelmann paths, which give a purely combinatorial way of proving
this theorem (which, on the face of it, is a purely combinatorial statement).
Definition. A crystal is called integrable if it is a crystal of a highest-weight module with highest weight λ ∈ P⁺.
For two integrable crystals B1, B2, we do in fact have B1 ⊗ B2 ≅ B2 ⊗ B1 (in general, this is false).
Remark. There is a combinatorial condition due to Stembridge which determines whether
a crystal is integrable; it is a degeneration of the Serre relations, but we will not state it
precisely.
Lecture 23
Consider the crystal for the standard representation of sln, L(ω1) = Cⁿ:
1 →¹ 2 →² 3 →³ ⋯ →ⁿ⁻¹ n,  with wt(i) = e_i.
We can use this to construct the crystals for all representations of sln as follows:
Let λ ∈ P⁺, λ = k1ω1 + ⋯ + k_{n−1}ω_{n−1}. Then L(λ) must be a summand of L(ω1)^{⊗k1} ⊗ ⋯ ⊗ L(ω_{n−1})^{⊗k_{n−1}}, since the highest weight of this representation is λ (with highest-weight vector ⊗_i v_i^{⊗k_i}, where v_i is the highest-weight vector of L(ω_i)). Moreover, L(ω_i) = Λ^i Cⁿ is a summand of (Cⁿ)^{⊗i}. We conclude that L(λ) occurs in some (Cⁿ)^{⊗N} for N > 0. Therefore, the crystal for Cⁿ together with the rule for taking tensor products of crystals determines the crystal of every representation of sln.
We introduce the semistandard Young tableaux of a representation. (This is due to Hodge, Schur, and Young.) Write
B(ω1) = 1 →¹ 2 →² 3 →³ ⋯ →ⁿ⁻¹ n
for the crystal of the standard representation Cⁿ. For i < n let
b_i = 1 ⊗ 2 ⊗ ⋯ ⊗ i ∈ B(ω1)^{⊗i}.
(This corresponds to the vector v_1 ∧ v_2 ∧ ⋯ ∧ v_i ∈ Λ^i Cⁿ, where the v_k are the basis vectors of Cⁿ.)
Exercise 100. (1) b_i is a highest-weight vector in B(ω1)^{⊗i} of weight ω_i = e_1 + ⋯ + e_i. (Recall that b ∈ B is a highest-weight vector if e_i b = 0 for all i.) Hence, the connected component of B(ω1)^{⊗i} containing b_i is B(ω_i).
We write elements of the form a_1 ⊗ a_2 ⊗ ⋯ ⊗ a_i as column vectors with entries a_1, a_2, …, a_i listed from top to bottom. For example, the highest-weight vector b_i is the column with entries 1, 2, …, i.
Let λ = Σ_i k_i ω_i. Embed B(λ) ↪ B(ω1)^{⊗k1} ⊗ B(ω2)^{⊗k2} ⊗ ⋯ ⊗ B(ω_{n−1})^{⊗k_{n−1}} by mapping the highest-weight vector b_λ ↦ b_1^{⊗k1} ⊗ ⋯ ⊗ b_{n−1}^{⊗k_{n−1}} (which is a highest-weight vector in the image). Now, we can represent any element of B(ω1)^{⊗k1} ⊗ ⋯ ⊗ B(ω_{n−1})^{⊗k_{n−1}} by a sequence of column vectors as in Figure 12, where the entries in the boxes are strictly increasing down columns and (non-strictly) decreasing along rows.
Figure 12. Generic element of B(ω1)^{⊗k1} ⊗ ⋯ ⊗ B(ω_{n−1})^{⊗k_{n−1}}, aka a semistandard Young tableau. Here, k_{n−1} = 3, k_{n−2} = 2, k_{n−3} = 1, …, k_2 = 4, k_1 = 3.
Theorem 12.2 (Exercise). (1) The connected component of b_λ in B(ω1)^{⊗k1} ⊗ ⋯ ⊗ B(ω_{n−1})^{⊗k_{n−1}} is B(λ), and its elements are precisely the semistandard Young tableaux of shape λ.
(2) Describe the action of e_i and f_i explicitly in terms of tableaux.
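For part (2), one standard answer — stated here as an assumption, since the notes do not spell it out — is the signature rule: reading the tableau as a word of boxes (left tensor factor first), mark each letter i with '+' and each letter i+1 with '−', cancel '+−' pairs, and then f_i turns the leftmost surviving i into i+1, while e_i turns the rightmost surviving i+1 into i. A Python sketch, checked against Example 12.2 (for sl3, V ⊗ V has components of sizes 6 and 3):

```python
from itertools import product

def surviving(word, i):
    """Uncancelled signs for colour i: '+' for letter i, '-' for letter i+1."""
    plus, minus = [], []
    for pos, letter in enumerate(word):
        if letter == i:
            plus.append(pos)
        elif letter == i + 1:
            if plus:
                plus.pop()       # cancel a '+-' pair
            else:
                minus.append(pos)
    return plus, minus

def f(word, i):
    plus, _ = surviving(word, i)
    if not plus:
        return None
    w = list(word)
    w[plus[0]] = i + 1           # leftmost surviving '+': i -> i+1
    return tuple(w)

def e(word, i):
    _, minus = surviving(word, i)
    if not minus:
        return None
    w = list(word)
    w[minus[-1]] = i             # rightmost surviving '-': i+1 -> i
    return tuple(w)

def component_sizes(words, n):
    """Sizes of the connected components of the coloured graph on words."""
    seen, sizes = set(), []
    for w0 in words:
        if w0 in seen:
            continue
        comp, stack = set(), [w0]
        while stack:
            w = stack.pop()
            if w in comp:
                continue
            comp.add(w)
            for i in range(1, n):
                for nb in (e(w, i), f(w, i)):
                    if nb is not None:
                        stack.append(nb)
        seen |= comp
        sizes.append(len(comp))
    return sorted(sizes)

# V (x) V for sl3 decomposes as L(2 omega_1) (+) L(omega_2), dimensions 6 and 3
assert component_sizes(list(product([1, 2, 3], repeat=2)), 3) == [3, 6]
```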
We will now construct the Young tableaux for all the classical Lie algebras.
Example 12.3. so_{2n+1}, i.e. type Bn: for the standard representation C^{2n+1} we have the crystal
1 →¹ 2 →² 3 →³ ⋯ →ⁿ⁻¹ n →ⁿ 0 →ⁿ n̄ →ⁿ⁻¹ ⋯ →² 2̄ →¹ 1̄
Exercise 101. (1) Check that the crystals of the standard representations are as claimed.
(2) What subcategory of the category of representations of g do these representations generate?
Consider the highest weight λ of the standard representation. This gives an element λ ∈ P/Q ≅ Z(G), a finite group (where G is the simply connected Lie group attached to g). Consider the subgroup ⟨λ⟩ ⊆ P/Q. We cannot obtain all of the representations unless P/Q is cyclic and generated by λ. For the classical examples we have P/Q = Z/2 × Z/2 for D_{2n}, Z/4 for D_{2n+1}, and Z/2 for Bn and Cn.
(3) (Optional) Write down a combinatorial set like Young tableaux that is the crystal of B(λ) with λ obtained from the standard representation.
For Bn we have one more representation, the spin representation. Recall the Dynkin diagram for Bn:
Bn: α_1 — α_2 — ⋯ — α_{n−1} ⇒ α_n, with α_i = e_i − e_{i+1} for i < n and α_n = e_n.
Definition. Let ω_1, …, ω_n be the dual basis to α_1∨, …, α_n∨. We call L(ω_n) the spin representation.
Exercise 102. Use the Weyl dimension formula to show that dim L(ω_n) = 2ⁿ.
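A numerical check of this exercise (my own sketch, not in the notes): the Weyl dimension formula dim L(λ) = ∏_{α∈R⁺} ⟨λ+ρ, α∨⟩ / ⟨ρ, α∨⟩, applied with the Bn positive roots e_i ± e_j (i < j) and e_i, λ = ω_n = ½(e_1 + ⋯ + e_n), and ρ = (n−½, n−3/2, …, ½).

```python
from fractions import Fraction

def coroot_pairings(v):
    """<v, alpha_vee> for all positive roots alpha of B_n, v in e-coordinates.

    Long roots e_i - e_j, e_i + e_j have alpha_vee = alpha; short roots e_i
    have alpha_vee = 2 e_i.
    """
    n = len(v)
    out = []
    for i in range(n):
        for j in range(i + 1, n):
            out.append(v[i] - v[j])
            out.append(v[i] + v[j])
        out.append(2 * v[i])
    return out

def spin_dim(n):
    """dim L(omega_n) for so(2n+1) by the Weyl dimension formula."""
    lam = [Fraction(1, 2)] * n                                     # omega_n
    rho = [Fraction(2 * (n - k) + 1, 2) for k in range(1, n + 1)]  # (n-1/2, ..., 1/2)
    lam_rho = [a + b for a, b in zip(lam, rho)]
    d = Fraction(1)
    for x, y in zip(coroot_pairings(lam_rho), coroot_pairings(rho)):
        d *= x / y
    assert d.denominator == 1
    return int(d)

assert [spin_dim(n) for n in range(1, 6)] == [2, 4, 8, 16, 32]
```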
Define B = {(i_1, …, i_n) : i_j = ±1} with wt((i_1, …, i_n)) = ½ Σ_j i_j e_j ∈ P, and, for 1 ≤ j ≤ n − 1,
e_j(i_1, …, i_n) = (i_1, …, +1_j, −1_{j+1}, …, i_n) if (i_j, i_{j+1}) = (−1, +1), and 0 otherwise,
and
e_n(i_1, …, i_n) = (i_1, …, i_{n−1}, +1) if i_n = −1, and 0 otherwise
(note that in particular e_i² = 0 always).
Claim. This is the crystal of the spin representation L(ω_n).
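The claim can be sanity-checked for small n (my own sketch, directly implementing the operators e_j above, with f_j obtained from axiom (2)): B has 2ⁿ elements, a unique highest-weight element (+1, …, +1), it is connected, and e_j² = 0.

```python
from itertools import product

def e_op(j, b, n):
    """The operator e_j on a sign vector b in {+1, -1}^n; None stands for 0."""
    b = list(b)
    if j < n:
        if (b[j - 1], b[j]) == (-1, +1):
            b[j - 1], b[j] = +1, -1
            return tuple(b)
    else:
        if b[-1] == -1:
            b[-1] = +1
            return tuple(b)
    return None

def f_op(j, b, n):
    """The inverse operator f_j, determined by axiom (2)."""
    b = list(b)
    if j < n:
        if (b[j - 1], b[j]) == (+1, -1):
            b[j - 1], b[j] = -1, +1
            return tuple(b)
    else:
        if b[-1] == +1:
            b[-1] = -1
            return tuple(b)
    return None

n = 4
B = list(product([+1, -1], repeat=n))
assert len(B) == 2 ** n

# unique highest-weight element: (+1, ..., +1)
highest = [b for b in B if all(e_op(j, b, n) is None for j in range(1, n + 1))]
assert highest == [(+1,) * n]

# e_j^2 = 0 always
for b in B:
    for j in range(1, n + 1):
        eb = e_op(j, b, n)
        assert eb is None or e_op(j, eb, n) is None

# connected: every element is reachable from the highest-weight one via the f_j
seen, stack = set(), [(+1,) * n]
while stack:
    b = stack.pop()
    if b in seen:
        continue
    seen.add(b)
    for j in range(1, n + 1):
        fb = f_op(j, b, n)
        if fb is not None:
            stack.append(fb)
assert seen == set(B)
```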
Remark. Note that dim L(ω_n) = dim Λ•Cⁿ. This is for a very good reason. Indeed, gl_n ↪ so_{2n+1} via A ↦ diag(A, 0, −JAᵀJ⁻¹), and L(ω_n)|_{gl_n} = Λ•Cⁿ.
Figure 13. π is marked in black; e_i(π), from the point where it deviates from π, is marked in red.
Define P⁺ = {paths π such that π([0, 1]) ⊆ P_R⁺}, where P_R⁺ = {x ∈ P_R : ⟨x, α_i∨⟩ ≥ 0 for all i}. If π ∈ P⁺, then e_i(π) = 0 for all i.
For π ∈ P⁺ let B_π be the subcrystal of P generated by π (i.e. B_π = {f_{i_1} f_{i_2} ⋯ f_{i_r} π}).
Theorem 13.1 (Littelmann). (1) If π, π′ ∈ P⁺ then B_π ≅ B_{π′} as crystals iff π(1) = π′(1).
(2) There is a unique isomorphism between this crystal and B(π(1)), the crystal of the irreducible representation L(π(1)). (Of course, the isomorphism sends π to π(1).)
(3) Moreover, for the path π_λ(t) = tλ, λ ∈ P⁺, Littelmann gives an explicit combinatorial description of the paths in B_{π_λ}.
Example 13.2. Let's compute the crystal of the adjoint representation of sl3, starting from the straight-line path π(t) = t(α_1 + α_2). [Diagram: the paths obtained by applying e_i and f_i to π.]
Exercise 106. Compute the rest of the crystal, and check that you get the adjoint representation of sl3.
The simplest nontrivial example is G2, the automorphism group of the octonions.
Exercise 107. Compute the crystal of the 7-dimensional and the 14-dimensional (adjoint)
representation of G2 . Compute their tensor product.
The tensor product of crystals has a very nice (and natural!) realisation in terms of
concatenating paths:
Definition. For π_1, π_2 ∈ P, define π_1 ∗ π_2 to be the concatenation (traverse π_1, then translate π_2 to start at π_1(1) and traverse it).
Exercise 108. ∗ : P ⊗ P → P is a morphism of crystals.
That is, the tensor product of two crystals simply consists of the concatenated paths in
them.
Remark. We have now defined B(λ) explicitly without using L(λ). One can prove the Weyl character formula
ch B(λ) = ( Σ_{w∈W} (det w) e^{w(λ+ρ)} ) / ( e^ρ ∏_{α∈R⁺} (1 − e^{−α}) )
from this, without referring to L(λ) or quantum groups. To do this, one builds ch L(λ) and L(λ) itself one root at a time, and uses the Demazure character formula.
More specifically: for w ∈ W consider L_w(λ), the n⁺-submodule of L(λ) generated by v_{wλ}, where v_λ is the highest-weight vector. (Equivalently, it is the submodule generated by the 1-dimensional weight space L(λ)_{wλ}.)
Theorem 13.2 (Demazure character formula).
ch L_w(λ) = D_w(e^λ),
where we write w = s_{i_1} ⋯ s_{i_r} as a reduced (i.e. shortest) decomposition, and set D_w = D_{s_{i_1}} ⋯ D_{s_{i_r}} with
D_{s_i}(f) = (f − e^{−α_i} s_i(f)) / (1 − e^{−α_i}) = (1 + s_i) (f / (1 − e^{−α_i})) = (f e^{α_i/2} − s_i(f e^{α_i/2})) / (e^{α_i/2} − e^{−α_i/2}).
That is,
D_{s_i}(e^λ) = e^λ + e^{λ−α_i} + ⋯ + e^{s_iλ}  if ⟨λ, α_i∨⟩ ≥ 0,
D_{s_i}(e^λ) = 0  if ⟨λ, α_i∨⟩ = −1,
D_{s_i}(e^λ) = −(e^{λ+α_i} + ⋯ + e^{s_iλ−α_i})  if ⟨λ, α_i∨⟩ < −1.
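For sl2 the Demazure operator can be implemented and tested directly (my own sketch; characters are Laurent polynomials in x = e^ω, so e^α = x² and s(e^{mω}) = e^{−mω}). The three-case monomial formula above is checked against the division formula D(f)·(1 − e^{−α}) = f − e^{−α}s(f), and D(e^n) reproduces ch L(n).

```python
# Laurent polynomials as {exponent: coefficient} dicts; x stands for
# e^{omega}, so for sl2 e^{alpha} = x^2 and s(x^m) = x^{-m}.

def mul(p, q):
    out = {}
    for m, a in p.items():
        for k, b in q.items():
            out[m + k] = out.get(m + k, 0) + a * b
    return {m: c for m, c in out.items() if c != 0}

def sub(p, q):
    out = dict(p)
    for m, c in q.items():
        out[m] = out.get(m, 0) - c
    return {m: c for m, c in out.items() if c != 0}

def s(p):
    """The simple reflection: s(e^{m omega}) = e^{-m omega}."""
    return {-m: c for m, c in p.items()}

def demazure(p):
    """D_s via the explicit three-case monomial formula (alpha = 2 omega)."""
    out = {}
    for m, c in p.items():
        if m >= 0:
            terms, sign = range(-m, m + 1, 2), 1       # e^m + e^{m-2} + ... + e^{-m}
        elif m == -1:
            terms, sign = [], 1                        # killed
        else:
            terms, sign = range(m + 2, -m - 1, 2), -1  # -(e^{m+2} + ... + e^{-m-2})
        for t in terms:
            out[t] = out.get(t, 0) + sign * c
    return {m: c for m, c in out.items() if c != 0}

# D(e^4) = ch L(4)
assert demazure({4: 1}) == {4: 1, 2: 1, 0: 1, -2: 1, -4: 1}

# division identity: D(f) (1 - e^{-alpha}) = f - e^{-alpha} s(f)
for f in ({5: 1}, {-3: 2}, {2: 1, -1: 3}):
    assert mul(demazure(f), {0: 1, -2: -1}) == sub(f, mul({-2: 1}, s(f)))

# Demazure operators are idempotent
assert demazure(demazure({7: 1})) == demazure({7: 1})
```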
I've got about six minutes. Is that enough time to tell you about the Langlands program?
Suppose that G is an algebraic group whose Lie algebra g is semisimple. For a given g, there is the smallest such group, G_ad = G/Z_G (whose centre is {1}). There is also the largest one, G_sc, the simply connected cover of G, which is still an algebraic group. We have π_1(G_ad) = (P/Q)∗ and Z_{G_sc} = P/Q.
(For example, SLn is the simply connected group, and PSLn is the adjoint group.)
In this course we have studied the category Rep g = Rep G_sc.
Recall that for G a finite group, the number of irreducible representations is the number
of conjugacy classes, but conjugacy classes do not parametrise the representations. For us,
however, the representations are parametrised by P + = P/W .
It is possible, for G an algebraic group, to define ᴸG, the Langlands dual group, whose Lie algebra ᴸg has root system R∨, dual to the root system R of g (and in fact the torus and the dual torus swap in the dual group).
If we consider the representations of G(F) where F = F_q or F = C((t)), these correspond roughly to conjugacy classes of homomorphisms (W(F) → ᴸG(C))/ᴸG(C), where W(F) is a thickening of the Galois group of F.