
Combinatorics and Geometry of Coxeter Groups

November 17, 2012


Abstract
These are my lecture notes for my Yale class during Fall 2012.
1 Introduction
Let me begin by citing a passage from the book The Theory of Groups and Quantum Mechanics by one of our masters, Hermann Weyl:

It is somewhat distressing that the theory of linear algebras must again and again be developed from its beginning, for the fundamental concepts of this branch of mathematics crop up everywhere in mathematics and physics, and a knowledge of them should be as widely disseminated as the elements of the differential calculus.
Coxeter groups are precisely the foundational discrete tools for dealing with linear algebras.
In these notes, rather ambitiously, we hope to contribute to their already-saturated literature. The
idea we have in mind is to develop the theory of Coxeter groups (from scratch) and use these
developments to study the geometry of certain symmetric spaces. In particular we are going to
focus on the structures emanating from relations between orbits of a Borel subgroup acting on
certain algebraic varieties.
The first part of our exposition is heavily influenced by the beautiful (out of print) 1982 book of H. Hiller called Geometry of Coxeter Groups. In fact, the only thing we are doing here is incorporating some new developments into Hiller's framework. Of course, many excellent textbooks on Coxeter groups have appeared since 1982. Among them are Humphreys's Reflection Groups and Björner and Brenti's Combinatorics of Coxeter Groups.

On the other hand, it seems that none of these resources mentions the applications we have in mind, for example, the combinatorics of the De Concini-Procesi completions of symmetric varieties, or the combinatorics of the Renner monoids of reductive algebraic monoids.
2 Coxeter Groups
A group W is called a Coxeter group if there is a subset S ⊆ W such that W has a presentation

    W = ⟨ s ∈ S : (ss')^{m_{ss'}} = 1 ⟩,

where m_{ss'} ∈ {2, 3, 4, . . . , ∞} is the order of ss' for s ≠ s', and m_{ss} = 1. It is customary to call the pair (W, S) a Coxeter system. If S is finite, its cardinality is called the rank of (W, S).
A Coxeter group W is called irreducible if it cannot be written as a product of non-trivial
Coxeter groups.
Remark 2.1. There is an irreducible Coxeter group H_3 that is a product of groups, but one of the factors is not a Coxeter group.
Definition 2.2. A Coxeter matrix M = (m_{ss'}) is a symmetric matrix with 1's along the diagonal and m_{ss'} ∈ {2, 3, . . . , ∞} for s ≠ s'. The Coxeter graph of (W, S) is an edge-labelled graph Γ_W with one node for each s ∈ S and an edge from s to s' if m_{ss'} > 2, labeled by m_{ss'}. (If m_{ss'} = 3, then we suppress the label from the graph.)
Remark 2.3. A disjoint union of Coxeter systems is again a Coxeter system. In particular, a Coxeter group W is irreducible if and only if Γ_W is connected.
Example 2.4. Let S_n denote the symmetric group on n letters. Together with the set S of simple transpositions s_i = (i, i+1), i = 1, . . . , n-1, the pair (S_n, S) forms a Coxeter system. The Coxeter graph of S_n is the path

    s_1 --- s_2 --- ... --- s_{n-2} --- s_{n-1}

(every edge label equals 3 and is therefore suppressed).
Recall that a partially ordered set is a pair (P, ≤) consisting of a set P and a relation ≤ satisfying

1. reflexivity,
2. anti-symmetry,
3. transitivity.

A poset is called graded (or ranked) if there exists a rank (or length) function ρ : P → N such that if y covers x in P, then ρ(y) = ρ(x) + 1.
It turns out that Coxeter groups carry several different graded poset structures. Common to these poset structures on a Coxeter system (W, S) is the length function on W.
Definition 2.5. Let (W, S) be a Coxeter system and let w ∈ W. The length of w, denoted by ℓ(w), is the smallest integer n ≥ 0 such that w can be written as a product of n elements from S. If w is of length n and w = s_{i_1} ⋯ s_{i_n} for some s_{i_j} ∈ S, then this expression is called a reduced decomposition of w.
Example 2.6. If w = (w_1, . . . , w_n) ∈ S_n is a permutation (in one-line notation, so w_i = w(i)), then ℓ(w) = inv(w), where inv(w) is the number of pairs 1 ≤ i < j ≤ n such that w(i) > w(j). We call inv(w) the inversion number of w.
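The equality ℓ(w) = inv(w) is easy to experiment with. Here is a small Python sketch (an illustration, not from the text): a greedy bubble sort records a word in the adjacent transpositions, and since each swap removes exactly one inversion, the word it produces is reduced.

```python
def inv(w):
    """Number of pairs i < j with w[i] > w[j] (one-line notation)."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

def reduced_word(w):
    """Greedy bubble sort: record i whenever s_i = (i, i+1) shortens w."""
    w, word = list(w), []
    done = False
    while not done:
        done = True
        for i in range(len(w) - 1):
            if w[i] > w[i + 1]:          # s_i removes exactly one inversion
                w[i], w[i + 1] = w[i + 1], w[i]
                word.append(i + 1)       # 1-based index of the generator
                done = False
    return word

w = (3, 1, 4, 2)
assert inv(w) == len(reduced_word(w)) == 3
```

Since every swap in the sort lowers the inversion number by exactly one, no shorter word can exist, which is the content of the example above.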
Lemma 2.7. If w, w' ∈ W, then

1. ℓ(w^{-1}) = ℓ(w),
2. ℓ(w) - ℓ(w') ≤ ℓ(ww') ≤ ℓ(w) + ℓ(w').

Proof. The first part, and the second inequality of the second item, are easy to prove. We only prove the first inequality of the second item. Write w = ww'(w')^{-1}. By the second inequality and the first part,

    ℓ(w) ≤ ℓ(ww') + ℓ((w')^{-1}) = ℓ(ww') + ℓ(w'). □
Corollary 2.8. For s ∈ S and w ∈ W, ℓ(ws) = ℓ(w) ± 1.

Proof. Let F(S) denote the free group on S. Then W is the quotient of F(S) by the relations imposed by the matrix M. Define the signature homomorphism ε : F(S) → {1, -1} by sending a word s_{i_1} s_{i_2} ⋯ s_{i_r} to (-1)^r. Since the only relations on W are of the form (ss')^{m_{ss'}} = 1, and since

    ε((ss')^m) = (-1)^{2m} = 1,

the homomorphism ε factors through W to give a homomorphism ε : W → {1, -1}. In other words, ε(ww') = ε(w)ε(w') for all w, w' ∈ W. In particular, if s ∈ S, then ε(ws) = -ε(w). Now it follows from the previous lemma that ℓ(ws) = ℓ(w) ± 1. □
Example 2.9. Let B_n denote the semi-direct product S_2^n ⋊ S_n. B_n is commonly known as the hyperoctahedral group because it is the symmetry group of the n-dimensional cube (equivalently, of the hyperoctahedron). As a set B_n is given by

    B_n = {(g, π) : g ∈ S_2^n, π ∈ S_n}.

The multiplication is given by

    (g, π) · (h, σ) = (g π(h), πσ),

where π acts on S_2^n by permuting the coordinates.

The following description of B_n is more transparent. Consider the set B of n-tuples a = (a_1, . . . , a_n) with a_i ∈ {-n, -n+1, . . . , -1, 1, . . . , n-1, n}. Let S denote the set {s_0, s_1, . . . , s_{n-1}}, where s_i, i = 1, . . . , n-1, is the simple transposition acting on B by interchanging the i-th and (i+1)-st entries of a given n-tuple a ∈ B, and the action of s_0 on a is given by changing the sign of its first entry a_1.

It follows that B_n consists of the n-tuples (w(1), . . . , w(n)) whose entries have distinct absolute values forming {1, . . . , n}. The multiplication is given by composition in this interpretation.

The Coxeter graph of B_n is the path

    s_0 ==4== s_1 --- s_2 --- ... --- s_{n-1}

with the edge between s_0 and s_1 labeled 4. The length function on B_n is given by

    ℓ(w) = inv(w) - Σ_{j : w(j)<0} w(j).
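The length formula can be checked against word length in the generators. The Python sketch below (the realization of s_0 and s_i as operations on signed tuples follows the description above) computes word length by breadth-first search on the Cayley graph and compares it with the formula.

```python
from collections import deque

def length_B(w):
    """Length formula from the text: inv(w) minus the sum of the
    negative entries of the signed permutation w."""
    n = len(w)
    invs = sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])
    return invs - sum(v for v in w if v < 0)

def bfs_lengths(n):
    """Word length in the generators s_0 (negate the first entry) and
    s_1, ..., s_{n-1} (adjacent swaps), via breadth-first search."""
    def gens(w):
        yield (-w[0],) + w[1:]
        for i in range(n - 1):
            yield w[:i] + (w[i + 1], w[i]) + w[i + 2:]
    start = tuple(range(1, n + 1))
    dist = {start: 0}
    queue = deque([start])
    while queue:
        w = queue.popleft()
        for v in gens(w):
            if v not in dist:
                dist[v] = dist[w] + 1
                queue.append(v)
    return dist

dist = bfs_lengths(2)
assert len(dist) == 8                                 # |B_2| = 2^2 * 2!
assert all(length_B(w) == d for w, d in dist.items())
```

The BFS visits all 2^n n! signed permutations, so this brute-force check is only feasible for small n.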
Example 2.10 (Affine permutations). Fix n ∈ N and consider the group S̃_n of all permutations x of Z such that

    x(j + n) = x(j) + n for all j ∈ Z,  and  Σ_{i=1}^{n} x(i) = (n + 1)n/2.

Then S̃_n is a Coxeter group with circular Coxeter graph and generating set {s_1, . . . , s_n}, where

    s_i = Π_{j ∈ Z} (i + jn, i + 1 + jn),

for i = 1, . . . , n. The Coxeter graph of S̃_n is the cycle on the nodes s_1, s_2, . . . , s_{n-1}, s_n (each edge label equals 3 and is suppressed).
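A common concrete model (an assumption here, since the text defines s_i only as an infinite product of transpositions) stores an affine permutation by its window [x(1), . . . , x(n)], which determines x via x(j + n) = x(j) + n. In this model s_i for i < n swaps two window entries, while s_n swaps x(n) and x(n+1) = x(1) + n:

```python
n = 3

def s(i, w):
    """Apply the generator s_i to the window w (a tuple of length n)."""
    w = list(w)
    if i < n:
        w[i - 1], w[i] = w[i], w[i - 1]          # swap x(i) and x(i+1)
    else:
        # s_n swaps x(n) and x(n+1) = x(1) + n, which changes the window:
        w[0], w[n - 1] = w[n - 1] - n, w[0] + n
    return tuple(w)

def is_affine(w):
    """The two defining conditions, read off from the window."""
    return (sum(w) == n * (n + 1) // 2
            and len({v % n for v in w}) == n)

w = (1, 2, 3)                                    # the identity
for word in ([1], [3], [3, 1, 2], [2, 3, 3, 1]):
    v = w
    for i in word:
        v = s(i, v)
    assert is_affine(v)
assert s(3, w) == (0, 2, 4)                      # the window may leave [1..n]
```

Every generator preserves both the sum condition and the distinct-residues condition, so the group generated really sits inside S̃_n.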
2.1 Reflections

Let V be a vector space over R equipped with a symmetric bilinear form (·, ·). For α ∈ V with (α, α) ≠ 0, let H_α denote the hyperplane H_α := {v ∈ V : (v, α) = 0} and define a reflection to be the linear transformation s_α : V → V given by

    s_α(y) = y - (2(y, α)/(α, α)) α,  y ∈ V.  (2.11)
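Formula (2.11) is easy to test numerically for the standard inner product on R^n; the sketch below (an illustration, not from the text) checks that s_α is an involution, fixes H_α pointwise, and sends α to -α.

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def reflect(a, y):
    """Formula (2.11): s_a(y) = y - 2(y,a)/(a,a) a."""
    c = 2 * dot(y, a) / dot(a, a)
    return tuple(yi - c * ai for yi, ai in zip(y, a))

a = (1.0, 1.0)
y = (3.0, 0.0)
assert reflect(a, reflect(a, y)) == y          # s_a is an involution
assert reflect(a, (1.0, -1.0)) == (1.0, -1.0)  # vectors in H_a are fixed
assert reflect(a, a) == (-1.0, -1.0)           # s_a(a) = -a
```

The three assertions are exactly the three defining properties of a reflection across the hyperplane H_a.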
Remark 2.12. An important observation (for Lie theory) is that for any automorphism φ of V such that (x, y) = (φ(x), φ(y)) for all x, y ∈ V, we have

    φ s_α φ^{-1} = s_{φ(α)}.

In fact,

    φ s_α φ^{-1}(y) = φ( s_α(φ^{-1}(y)) )
                    = φ( φ^{-1}(y) - (2(φ^{-1}(y), α)/(α, α)) α )
                    = y - (2(φ^{-1}(y), α)/(α, α)) φ(α)
                    = y - (2(y, φ(α))/(φ(α), φ(α))) φ(α)
                    = s_{φ(α)}(y).
Let us call a transformation which preserves the standard inner product on R^n orthogonal.

Lemma 2.13. Every orthogonal transformation of R^2 is either a reflection or a rotation.

Proof. If φ(e_1) = a e_1 + b e_2, then φ(e_2) has to be perpendicular to φ(e_1) and has to lie on the unit circle. Thus we have two possibilities:

1. if φ(e_2) = -b e_1 + a e_2, then φ is a rotation by an angle θ which satisfies a = cos θ;
2. if φ(e_2) = b e_1 - a e_2, then φ is the reflection s_α, where α = (cos((θ + π)/2), sin((θ + π)/2)). □
There is an important corollary of the previous lemma.

Corollary 2.14. If α and β are two unit vectors in a Euclidean space V, then the composition s_α s_β is a rotation through twice the angle between α and β.

Proof. We assume that α and β are not co-linear; otherwise the assertion is trivial. Note that s_α s_β fixes the (n - 2)-dimensional space α^⊥ ∩ β^⊥ pointwise. Therefore, it is enough to focus on the two dimensional space spanned by α and β.

In this plane, we observe that s_α s_β does not fix any non-zero vector. Indeed, if s_α s_β(v) = v for some v ≠ 0, then s_β(v) = s_α(v), and it follows from the definition (2.11) of a reflection that v would have to lie on both reflecting hyperplanes, which is impossible for non-colinear α and β. Since both s_α and s_β are orthogonal transformations, by the previous lemma, s_α s_β is a rotation.

It remains to compute the angle of the rotation. Indeed, s_α s_β maps -β = s_β(β)·(-1)^0, that is, sends -β to s_α(β), and it is not difficult to see that the angle between -β and s_α(β) is twice the angle between α and β. (Use the double-angle formula for cos and the fact that the angle θ between two vectors v and v' is given by cos θ = (v, v')/(‖v‖‖v'‖).) □
Remark 2.15. If α and β make an obtuse angle π - π/m, then the order of s_α s_β is m and furthermore

    (α, β) = cos(π - π/m) = -cos(π/m).
2.2 Positive bilinear form of a Coxeter system

Let W be a Coxeter group and S = {s_1, . . . , s_ℓ} be its Coxeter generators (hence (W, S) is of rank ℓ). Let e_1, . . . , e_ℓ be the standard basis for the ℓ-dimensional vector space R^ℓ.

The canonical bilinear form B(·, ·) : R^ℓ × R^ℓ → R attached to (W, S) is defined by

    B(e_i, e_j) = -cos(π/m_{i,j})  if m_{i,j} < ∞,
    B(e_i, e_j) = -1               otherwise.

Obviously B(e_i, e_i) = 1 for all i.

The canonical bilinear form gives us the canonical representation σ : W → GL(R^ℓ) of (W, S):

    σ(s_i)(v) = v - 2B(v, e_i) e_i,  (2.16)

where s_i ∈ S and v ∈ R^ℓ.

We have some remarks in order.

Remark 2.17. For any Coxeter generator s ∈ S, σ(s) is a reflection that preserves B(·, ·). Therefore, if we denote by O(B) the subgroup of GL(V) consisting of maps preserving B, the image of σ lies in O(B).
Remark 2.18. Recall that a bilinear form on a finite dimensional vector space V over R is a map (·, ·) : V × V → R which is linear in each argument. The form is called positive if (x, x) ≥ 0 for all x ∈ V, and called positive definite if it is positive and (x, x) = 0 implies x = 0. The form is called non-degenerate if (x, y) = 0 for all y ∈ V implies x = 0. The form is called symmetric if (x, y) = (y, x) for all x, y ∈ V.

The restriction of B(·, ·) to a 2-dimensional subspace R e_{s_i} + R e_{s_j} (i ≠ j) is positive, and it is positive definite if and only if m_{s_i s_j} < ∞. Indeed, if x = a e_{s_i} + b e_{s_j} and m = m_{s_i s_j} < ∞, then

    B(x, x) = a^2 + b^2 - 2ab cos(π/m) = (a - b cos(π/m))^2 + b^2 sin^2(π/m).

Therefore B(x, x) ≥ 0, and B(x, x) = 0 if and only if x = 0. (When m = ∞ we get B(x, x) = (a - b)^2, which vanishes for a = b, so the restriction is positive but not definite.)
By the following lemma we see that σ does not kill any relation.

Lemma 2.19. If s, s' ∈ S, then the order of σ(s)σ(s') is m_{ss'}.

Proof. It is clear that σ(s)^2 = 1, thus we assume s ≠ s', hence m_{ss'} ≥ 2. Suppose first that m_{ss'} is finite. Then B(·, ·) is an inner product on E = span{e_s, e_{s'}}. Since both σ(s) and σ(s') fix the orthogonal complement of E, we need only consider σ(s)σ(s') on E. By Corollary 2.14, σ(s)σ(s') is a rotation through twice the angle π - π/m_{ss'} between e_s and e_{s'}, that is, through -2π/m_{ss'} modulo 2π; therefore its order is m_{ss'}.

Next assume that m_{ss'} = ∞. Then by induction (σ(s)σ(s'))^k(e_s) = e_s + 2kx, where x = e_s + e_{s'}. Therefore, σ(s)σ(s') is of infinite order. □
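Lemma 2.19 can be verified numerically in rank 2. The Python sketch below (an illustration, for type A_2 with m_{12} = 3) builds the canonical form B, the matrices of σ(s_1), σ(s_2) in the basis e_1, e_2, and checks the order of their product.

```python
import math

m = [[1, 3], [3, 1]]                       # Coxeter matrix, m_12 = 3
B = [[-math.cos(math.pi / m[i][j]) if i != j else 1.0 for j in range(2)]
     for i in range(2)]

def sigma(i):
    """Matrix of sigma(s_i)(v) = v - 2B(v, e_i) e_i; columns are the
    images of the basis vectors e_0, e_1."""
    return [[(1 if r == c else 0) - (2 * B[c][i] if r == i else 0)
             for c in range(2)] for r in range(2)]

def matmul(X, Y):
    return [[sum(X[r][k] * Y[k][c] for k in range(2)) for c in range(2)]
            for r in range(2)]

def order(M, cap=10):
    """Smallest k <= cap with M^k = I (up to floating point error)."""
    P = M
    for k in range(1, cap + 1):
        if all(abs(P[r][c] - (r == c)) < 1e-9
               for r in range(2) for c in range(2)):
            return k
        P = matmul(P, M)
    return None

assert order(sigma(0)) == 2
assert order(matmul(sigma(0), sigma(1))) == 3     # = m_12, as the lemma says
```

Changing the entry m_12 to 4 or 6 and re-running shows the order of the product tracking the Coxeter matrix, exactly as Lemma 2.19 predicts.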

Definition 2.20. A Coxeter system (W, S) is called irreducible if W is not a product of two Coxeter subgroups. In other words, there do not exist two non-empty disjoint subsets S', S'' ⊆ S with S = S' ∪ S'' satisfying s's'' = s''s' for all s' ∈ S' and s'' ∈ S''.

Recall that a representation V of a group G is called completely reducible if for every G-invariant subspace E of V there exists a complementary G-invariant subspace E' such that V = E ⊕ E'. If V does not have any proper non-zero G-invariant subspace, then it is called an irreducible representation of G.
Proposition 2.21. Suppose that (W, S) is an irreducible Coxeter system. Let V = R^ℓ, and let σ denote the canonical representation of W on V. If B(·, ·) is non-degenerate (meaning that if B(x, y) = 0 for all y ∈ V, then x = 0), then σ is irreducible. If B(·, ·) is degenerate, then σ is not completely reducible.

Proof. Let D_B ⊆ V denote the degeneracy locus

    D_B = {v ∈ V : B(v, y) = 0 for all y ∈ V}.

Obviously D_B is a σ-invariant linear subspace of V. Furthermore, because B(e_s, e_s) = 1 for all s ∈ S, D_B is proper.

We claim that any proper σ-invariant subspace E of V has to lie in D_B. To this end, we first show that e_s ∉ E for every s ∈ S. Otherwise, we define S' = {s ∈ S : e_s ∈ E} and S'' = S \ S', both non-empty. Since (W, S) is irreducible, there exist s' ∈ S' and s'' ∈ S'' such that B(e_{s'}, e_{s''}) ≠ 0. (Otherwise the subgroups that S' and S'' generate commute with each other.) We see from σ(s'')(e_{s'}) = e_{s'} - 2B(e_{s'}, e_{s''}) e_{s''} ∈ E that 2B(e_{s'}, e_{s''}) e_{s''}, and hence e_{s''}, has to lie in E. This is a contradiction, since s'' ∉ S'. Therefore S' = ∅. Now, if x ∈ E is arbitrary, then σ(s)x = x - 2B(x, e_s) e_s shows that 2B(x, e_s) e_s ∈ E; since e_s ∉ E, we get B(x, e_s) = 0 for all s ∈ S. Therefore x ∈ D_B.

Now, if we assume that D_B is trivial, then any proper σ-invariant subspace has to vanish; therefore σ has to be irreducible.

On the other hand, if D_B is non-trivial and σ is completely reducible, then there exists a σ-invariant proper subspace E ⊆ V such that D_B ⊕ E = V. But we know that E has to lie in D_B in this case; hence we have D_B = V, a contradiction. In other words, σ is not completely reducible. □
Corollary 2.22. Suppose (W, S) is an irreducible Coxeter system. If B(·, ·) is degenerate, then W is infinite.

Proof. If B is degenerate, then σ is not completely reducible. But by Maschke's theorem we know that all finite dimensional representations of a finite group are completely reducible. □
2.3 Weyl Chambers, Tits's Theorem

We focus on the dual σ* : W → GL(V*) of the canonical representation, where V = R^ℓ, defined as follows: for φ ∈ V* and w ∈ W, σ*(w)(φ) = φ', where

    φ'(v) = φ(σ(w)^{-1} v) for v ∈ V.

For s ∈ S define the hyperplane H_s of s and the positive half space A_s of s by

    H_s := {φ ∈ V* : φ(e_s) = 0},
    A_s := {φ ∈ V* : φ(e_s) > 0}.

Note that the closure of A_s is A_s ∪ H_s, and V* = A_s ⊔ H_s ⊔ (-A_s) = A_s ⊔ H_s ⊔ sA_s.

The fundamental chamber is defined to be

    C := ∩_{s ∈ S} A_s.  (2.23)

The translates wC, w ∈ W, of C are called the chambers of (W, S).

Theorem 2.24. For w ∈ W different from the identity element, wC ∩ C = ∅.

It is clear now that W acts transitively on {wC}_{w ∈ W}. Furthermore, if σ*(w) = 1, then C = σ*(w)C = wC, hence w = id. In other words, σ* (hence σ) is a faithful representation.

The proof of Theorem 2.24 is constructed in two steps. The first step is to prove the result for rank 2 Coxeter systems.
Lemma 2.25. Suppose (W, S) is a Coxeter system of rank 2 with S = {s, s'}, and let C = A_s ∩ A_{s'} denote the fundamental chamber. Then for w ∈ W exactly one of the following holds:

1. wC ⊆ A_s and ℓ(sw) = ℓ(w) + 1, or
2. wC ⊆ sA_s and ℓ(sw) = ℓ(w) - 1.

Proof. Let φ_s and φ_{s'} denote the dual basis elements to e_s and e_{s'}. The fundamental chamber C = A_s ∩ A_{s'} consists of the points aφ_s + bφ_{s'} with a, b > 0. Another way to write this is to consider the line segment J = {tφ_s + (1 - t)φ_{s'} : t ∈ (0, 1)}. Then

    C = ⋃_{λ ∈ R_{>0}} λJ.

There are two possibilities: a) W is finite, b) W is infinite. We proceed with the first case. In this case W is the symmetry group D_m of a regular m-gon in the plane.

Let us focus on D_4; when m is arbitrary, the situation is similar. If S = {s, s'} is the set of Coxeter generators, then D_4 has 8 elements, listed as

    1, s, s', ss', s's, ss's, s'ss', ss'ss'.

In Figure 2.1 we use φ_s, φ_{s'} to denote the dual basis to e_s, e_{s'}, where V = Re_s + Re_{s'}. Furthermore, we indicate the positive half spaces A_s and A_{s'} by the shaded blue and shaded grey areas, respectively. In this case, the fundamental chamber C is the simplicial cone with apex at the origin bounded by the rays {(0, y) : y ∈ R_{≥0}} and {(x, x) : x ∈ R_{≥0}}.

Reflecting C with respect to the hyperplanes of s and s' (indicated by the dotted black and dotted blue rays) we obtain the picture of the chambers in Figure 2.2.

It is clear from Figure 2.2 that for w ∈ D_4, if wC ⊆ A_s, then ℓ(sw) = ℓ(w) + 1. Similarly, if wC ⊆ sA_s, then ℓ(sw) = ℓ(w) - 1.

Next we look at the case m = ∞. Let I be the line between φ_s and φ_{s'}. It is straightforward to verify that s·φ_s = -φ_s + 2φ_{s'} and s·φ_{s'} = φ_{s'}; similarly, s'·φ_{s'} = -φ_{s'} + 2φ_s and s'·φ_s = φ_s. In other words, W acts on I, and the line segment J is moved under this action as in Figure 2.3. □

Before I insert the second step of the proof here, I would like to check Bourbaki's exposition. It was pointed out by J. E. Humphreys that one needs to be more careful about the proof presented in Hiller's book: it is not obvious that the restriction of the length function from W to the rank 2 case is the length function of the subgroup.
[Figure 2.1: The positive half spaces A_s (shaded blue) and A_{s'} (shaded grey); the fundamental chamber is C = A_s ∩ A_{s'}.]

[Figure 2.2: The chambers wC for the eight elements w of D_4.]

[Figure 2.3: Chambers for the infinite dihedral group: the segment J and its translates sJ, s'J, s'sJ, . . . along the line I.]
3 Root systems

Let V be a finite dimensional vector space which is endowed with a positive definite, symmetric bilinear form B(·, ·). A finite spanning subset Φ ⊆ V is called a root system if

1. for each α ∈ Φ, s_α maps Φ to itself,
2. the only scalar multiples of α ∈ Φ that lie in Φ are ±α.

Elements of Φ are called roots, and orderings on Φ lead to important concepts. To this end, let us construct a total order on Φ.

For any ordered basis v_1, . . . , v_k of V, if we declare

    Σ c_i v_i ≺ Σ d_i v_i

whenever

    c_1 = d_1, c_2 = d_2, . . . , c_{j-1} = d_{j-1} and c_j < d_j for some j ∈ [k],

we obtain a total ordering ≺ on V. Obviously, the restriction of ≺ to Φ gives a total ordering.

Once we fix a total ordering ≺ on V, a subset Π ⊆ Φ is called a positive system (with respect to ≺) if Π = {α ∈ Φ : 0 ≺ α}. (In situations where we do not worry about ≺, we like to write Φ^+ in place of Π.) The vectors in -Π are called negative roots and the set -Π is called a negative system.

A subset Δ ⊆ Φ is called a simple system if

1. Δ is a basis for V, and
2. each root α = Σ_{δ ∈ Δ} c_δ δ has either c_δ ≥ 0 for all δ ∈ Δ, or c_δ ≤ 0 for all δ ∈ Δ.
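A lexicographic order immediately produces a positive system. As an illustration (the coordinates below are an assumption, chosen to mimic an A_2-type configuration expressed in a basis of two roots):

```python
def positive(v):
    """v > 0 in the lexicographic order: the first nonzero coordinate
    of v (with respect to the chosen ordered basis) is positive."""
    for c in v:
        if c != 0:
            return c > 0
    return False

a, b = (1, 0), (0, 1)
ab = (1, 1)                                   # a + b
Phi = [a, b, ab, (-1, 0), (0, -1), (-1, -1)]  # closed under negation
Pi = [v for v in Phi if positive(v)]
assert sorted(Pi) == sorted([a, b, ab])
assert all((-v[0], -v[1]) not in Pi for v in Pi)   # Pi and -Pi are disjoint
```

Since every nonzero vector is either positive or negative in this order, Φ splits as Π ⊔ (-Π), which is exactly the positive/negative system dichotomy described above.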
Notice that it follows from the above discussion that a positive system exists. On the other hand it is not obvious that a simple system exists. We follow the arguments of Humphreys's book for the following critical fact.

Lemma 3.1. Let Δ be a simple system in a root system Φ. Then there exists a unique positive system Π such that Δ ⊆ Π. Conversely, each positive system contains a unique simple system.

Proof. The reason that Π exists is that once we fix a total ordering on the finite basis Δ of V, we can choose Π to be the set of elements of Φ which are ≻ 0 with respect to the lexicographic ordering determined by the ordered basis. Observe that this implies the uniqueness of Π as well.

To prove the converse statement we make the following observation: if Π is a positive system, then let Δ be the set of positive roots that cannot be written as a sum of two (or more) positive roots with positive coefficients. Obviously the set of all non-negative linear combinations of the elements of Δ includes Π. Hence, its R-span is equal to the R-span of Φ.

Next we need to show that Δ = {α_1, . . . , α_k} is a linearly independent set. Assume for a moment that B(α_i, α_j) ≤ 0 for all i ≠ j (we prove this below). Suppose Σ_{i ∈ S ⊆ [k]} c_i α_i = 0 for some S ⊆ [k] and non-zero c_i ∈ R, i ∈ S. Separating this relation into two, we write

    Σ_{i ∈ R} c_i α_i = Σ_{j ∈ R'} (-c_j) α_j,

where R ⊔ R' = S are two disjoint subsets of S, with c_i > 0 for i ∈ R and c_j < 0 for j ∈ R'. Now, call σ either of these (equal) sums. If R is non-empty, then 0 ≺ σ. On the other hand,

    0 ≤ B(σ, σ) = Σ_{i ∈ R, j ∈ R'} c_i (-c_j) B(α_i, α_j) ≤ 0,

which implies that σ = 0 (by the positive definiteness of B(·, ·)). Thus we obtain a contradiction.

Let us show that B(α, β) ≤ 0 for all α ≠ β in Δ. If this fails for a pair α, β, then we look at

    s_α(β) = β - (2B(α, β)/B(α, α)) α = β - cα, where c = 2B(α, β)/B(α, α) > 0.

Suppose first that s_α(β) is positive, and write s_α(β) = Σ_{γ ∈ Δ} c_γ γ with c_γ ≥ 0. Then

    (1 - c_β)β = cα + Σ_{γ ≠ β} c_γ γ.

If c_β < 1, then, dividing by 1 - c_β, we express β as a sum of positive roots with positive coefficients (the coefficient of α is c/(1 - c_β) > 0), a contradiction to the minimality of the elements of Δ. On the other hand, if c_β ≥ 1, then we have

    0 = (c_β - 1)β + cα + Σ_{γ ≠ β} c_γ γ.

This is a contradiction because a non-negative linear combination of Π with at least one positive coefficient cannot be zero, by the definition of the total ordering. Therefore, s_α(β) cannot be positive. A similar argument, applied to -s_α(β) = cα - β, shows that s_α(β) cannot be negative either. This produces the contradiction that we are seeking. □
We record our conclusion from the last part of the proof as follows.

Corollary 3.2. For all simple roots α ≠ β in Δ, B(α, β) ≤ 0.

Now we know that simple systems exist, and we proceed to analyze the length function.

Definition 3.3. Let W denote the group generated by the simple reflections S := {s_δ : δ ∈ Δ}. Following Hiller, we call (W, S) the Weyl system of (Φ, Δ). For w ∈ W, consider Π_w = Φ^+ ∩ w^{-1}(-Φ^+), the set of positive roots that are mapped to negatives by w. We define r(w) to be the cardinality of Π_w.
Lemma 3.4. If w = s_α for a simple root α ∈ Δ, then Π_w = {α}, hence r(w) = 1.

Proof. We claim that for β ≠ α from Φ^+, we have s_α(β) ∈ Φ^+. Since s_α(α) = -α, this gives us the desired result. Write β = Σ_{γ ∈ Δ} c_γ γ with c_γ ≥ 0, and note that s_α(β) = β - cα for c = 2B(β, α)/B(α, α). Hence the coefficient of every simple root γ ≠ α in s_α(β) is the same as in β, and since β is positive and β ≠ α, some such coefficient c_γ is positive. As every root is either a non-negative or a non-positive combination of Δ, s_α(β) has to be positive. □
Lemma 3.5. If w ∈ W and α ∈ Δ, then

1. r(ws_α) = r(w) + 1 if and only if w(α) ∈ Φ^+,
2. r(s_α w) = r(w) + 1 if and only if w^{-1}(α) ∈ Φ^+.

Proof. The simple reflection s_α permutes Φ^+ \ {α}. Hence, for β ∈ Φ^+ with β ≠ α, we have β ∈ Π_{ws_α} if and only if s_α(β) ∈ Π_w, while α ∈ Π_{ws_α} if and only if w(α) ∈ Φ^+. Therefore r(ws_α) = r(w) ± 1, with the plus sign exactly when w(α) ∈ Φ^+. Part 1 follows from this. Part 2 is similar. □
Let (W, S) be the Weyl system of (Φ, Δ). Since W acts faithfully on the finite set Φ, we see that it has to be a finite group. Let ℓ : W → N denote the length function, as before, defined by sending w ∈ W to the smallest number l for which there exists a presentation of w of the form w = s_{δ_1} ⋯ s_{δ_l}.

Theorem 3.6. For all w ∈ W, r(w) = |Π_w| = ℓ(w).
The proof of this theorem is rather tricky but very nice.

Proof. Let w ∈ W. First of all, by induction and the previous two lemmas, we see that r(w) ≤ ℓ(w). Set k = ℓ(w) and assume, toward a contradiction, that r(w) < k.

To come up with a contradiction we set our notation as follows. Given a simple root α_i, let us denote by s_i ∈ S the corresponding Coxeter generator, and re-label the simple roots in such a way that w = s_1 ⋯ s_k is a reduced decomposition for w. Then by Lemma 3.5 Part 1, there exists 1 ≤ j ≤ k - 1 such that

    s_1 ⋯ s_j(α_{j+1}) ∈ Φ^-.  (3.7)

Otherwise, start with j = k - 1: if s_1 ⋯ s_{k-1}(α_k) ∈ Φ^+, then r(s_1 ⋯ s_{k-1} s_k) = r(s_1 ⋯ s_{k-1}) + 1; now proceed with the same argument applied to (s_1 ⋯ s_{k-2}) s_{k-1}. If this never fails, then r(w) has to be equal to k, a contradiction to our assumption.

Let us proceed with expression (3.7). By Lemma 3.5 there exists e ≤ j such that

    s_{e+1} ⋯ s_j(α_{j+1}) ∈ Φ^+  and  s_e s_{e+1} ⋯ s_j(α_{j+1}) ∈ Φ^-.

But recall that there exists a unique positive root, namely α_e, that s_e maps into Φ^- (Lemma 3.4). Therefore, the positive root s_{e+1} ⋯ s_j(α_{j+1}) has to be α_e. Let φ denote s_{e+1} ⋯ s_j; thus φ(α_{j+1}) = α_e. By Remark 2.12 we see that

    (s_{e+1} ⋯ s_j) s_{j+1} (s_j ⋯ s_{e+1}) = φ s_{j+1} φ^{-1} = s_{φ(α_{j+1})} = s_e.

It follows that

    s_e ⋯ s_j = s_{e+1} ⋯ s_{j+1}.

We finish the proof with the following contradiction:

    w = s_1 ⋯ s_{e-1}(s_e ⋯ s_j) s_{j+1} ⋯ s_k
      = s_1 ⋯ s_{e-1}(s_{e+1} ⋯ s_{j+1}) s_{j+1} ⋯ s_k
      = s_1 ⋯ s_{e-1} s_{e+1} ⋯ s_j s_{j+2} ⋯ s_k,

which has length at most k - 2. □
Remark 3.8. A useful corollary of the proof of Theorem 3.6 is the following. Suppose we have a Weyl system (W, S). Let w ∈ W be an element such that r(w) = k and let

    s_1 ⋯ s_m = w for some m > k  (3.9)

be a non-reduced expression for w. (Here, the indexing of the simple reflections does NOT refer to a particular ordering of Δ.) Then there exists a substring s_i s_{i+1} ⋯ s_j (1 ≤ i ≤ j ≤ m - 1) of the non-reduced expression (3.9) of w such that

    s_i ⋯ s_j = s_{i+1} ⋯ s_{j+1}.
Remark 3.10. Suppose we have a simple system Δ and an element α ∈ Δ. Let us order Δ in such a way that Δ = {α_1, . . . , α_k} with α = α_1. Writing s_α(α_j) = α_j - c_j α_1 (so that c_1 = 2), the matrix of s_α with respect to the basis Δ of V is given by

    [s_α]_Δ =
    ( -1  -c_2  -c_3  ⋯  -c_k )
    (  0    1     0   ⋯    0  )
    (  0    0     1   ⋯    0  )
    (  ⋮                   ⋮  )
    (  0    0     0   ⋯    1  )

Therefore, det s_α = -1.
Theorem 3.11. The Weyl system (W, S) of (Φ, Δ) is a Coxeter system.

Proof. It is enough to show that all relations between the generators s_α, α ∈ Δ, are consequences of relations of the form (s_α s_β)^{m_{α,β}} = 1. To this end, suppose that

    s_1 ⋯ s_k = 1  (3.12)

is a relation of minimal length, where s_i = s_{α_i}, α_i ∈ Δ.

Set w = s_1 ⋯ s_k = 1. By Remark 3.10 we know that det(w) = (-1)^k. On the other hand w = 1. Therefore, k has to be even; k = 2m.

Let us re-write the relation (3.12) in another form:

    s_1 ⋯ s_{m+1} = s_{2m} ⋯ s_{m+2}.

Since the right hand side of the equation has m - 1 terms, ℓ(s_1 ⋯ s_{m+1}) < m + 1. By Remark 3.8 we know that there exist 1 ≤ i ≤ j ≤ m such that

    s_{i+1} ⋯ s_{j+1} = s_i ⋯ s_j.  (3.13)

Once we substitute (3.13) into the relation (3.12), in order not to contradict the minimality of its length, we see that we have to have i = 1 and j = m, and therefore,

    s_1 s_2 ⋯ s_m = s_2 ⋯ s_{m+1}.  (3.14)

On the other hand, starting with the equivalent relation s_2 ⋯ s_k s_1 = 1, by the same arguments as above, we see that

    s_2 s_3 ⋯ s_{m+1} = s_3 ⋯ s_{m+2}.  (3.15)

It follows from (3.15) that we have s_2 ⋯ s_{m+1} s_{m+2} s_{m+1} ⋯ s_3 = 1, or equivalently,

    s_3 s_2 s_3 s_4 ⋯ s_{m+1} s_{m+2} s_{m+1} ⋯ s_4 = 1.

Splitting this relation in the middle, and proceeding just as before, we obtain s_3 s_2 s_3 s_4 ⋯ s_m = s_2 s_3 ⋯ s_{m+1}. Combined with (3.14), we see that s_1 = s_3. Obviously we can repeat this argument after we cyclically permute the factors, and conclude s_2 = s_4, s_3 = s_5, s_4 = s_6, and so on. Therefore, the relation we started with, s_1 ⋯ s_k = 1, is nothing but (s_1 s_2)(s_1 s_2) ⋯ (s_1 s_2) = 1, which is a Coxeter relation. Hence the proof is finished. □
Recall that in Tits's theorem we consider the fundamental domain C for the action of W. The combined outcome of the results we have so far is that the following sets are in bijection with each other:

1. the bases Δ for Φ,
2. the set of chambers of Φ,
3. W.

In particular, there exists a unique element w_0 ∈ W such that w_0 Δ = -Δ. Then w_0 Φ^+ = Φ^-, hence ℓ(w_0) = |Φ^+|. It is easy to prove that

    ℓ(w_0 w) = ℓ(w w_0) = ℓ(w_0) - ℓ(w).  (3.16)
Proposition 3.17. Let s_1 s_2 ⋯ s_k = w ∈ W be a reduced decomposition for some simple generators s_i = s_{α_i} ∈ S with corresponding simple roots α_i ∈ Δ. Let β_i denote the root s_1 ⋯ s_{i-1}(α_i), i = 1, . . . , k. Then the following sets are all equal:

i. Π_{w^{-1}} = Φ^+ ∩ w(Φ^-),
ii. {β_i : i = 1, . . . , k},
iii. {β ∈ Φ^+ : s_β(s_1 ⋯ ŝ_i ⋯ s_k) = w, for some i}. (Here, ŝ_i means that we omit s_i from the expression.)

Proof. (i. ⊆ ii.) Let β be a positive root that is mapped to a negative root by w^{-1}. Suppose γ ∈ Φ^- is the negative root such that β = w(γ). Then γ = w^{-1}(β) = s_k ⋯ s_1(β) ∈ Φ^-. Thus, it makes sense to consider the smallest index i such that

    s_i ⋯ s_1(β) ∈ Φ^-.

Since there exists a unique positive root that s_i maps to a negative, namely α_i, we see that α_i = s_{i-1} ⋯ s_1(β). Therefore, β = s_1 ⋯ s_{i-1}(α_i) = β_i.

(ii. ⊆ iii.) Let us compute s_{β_i}(s_1 ⋯ ŝ_i ⋯ s_k):

    s_{β_i}(s_1 ⋯ ŝ_i ⋯ s_k) = s_{s_1 ⋯ s_{i-1}(α_i)}(s_1 ⋯ ŝ_i ⋯ s_k)
                             = (s_1 ⋯ s_{i-1}) s_i (s_{i-1} ⋯ s_1)(s_1 ⋯ ŝ_i ⋯ s_k)
                             = (s_1 ⋯ s_{i-1}) s_i (s_{i+1} ⋯ s_k) = w.

(iii. ⊆ i.) Note that an element of {β ∈ Φ^+ : s_β(s_1 ⋯ ŝ_i ⋯ s_k) = w, for some i} is uniquely determined by which s_i we omit from the expression. Therefore, |{β ∈ Φ^+ : s_β(s_1 ⋯ ŝ_i ⋯ s_k) = w, for some i}| ≤ k. On the other hand, the lengths of w and w^{-1} are the same, so |Π_{w^{-1}}| = k. It follows that all three sets are of the same cardinality k, hence they are equal. □
Remark 3.18. Notice that Proposition 3.17 shows that the set {s_1 ⋯ s_{i-1}(α_i) : i = 1, . . . , k} is independent of the reduced decomposition s_1 ⋯ s_k of w we started with, because it is equal to Π_{w^{-1}}. Furthermore, the reducedness of the expression s_1 s_2 ⋯ s_k = w ∈ W in the hypothesis is used only at the very last step, while showing that the cardinalities of these sets are the same.
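Proposition 3.17 can be tested in type A, realizing S_n on R^n by permuting coordinates, with simple roots α_i = e_i - e_{i+1}. The Python sketch below (an illustration under these standard type-A conventions) compares the set {β_i} with the set of positive roots sent to negatives by w^{-1}.

```python
n = 4

def e(i):
    """Standard basis vector of R^n (1-based index)."""
    return tuple(1 if k == i else 0 for k in range(1, n + 1))

def act(word, v):
    """Apply s_{i_1}...s_{i_m} to the vector v; the simple reflection
    s_i swaps coordinates i and i+1, and the rightmost factor acts first."""
    for i in reversed(word):
        v = v[:i - 1] + (v[i], v[i - 1]) + v[i + 1:]
    return v

def is_positive(v):
    for c in v:
        if c != 0:
            return c > 0
    return False

alpha = lambda i: tuple(a - b for a, b in zip(e(i), e(i + 1)))

word = [1, 2, 1]                    # a reduced word (longest element of
                                    # the S_3 sitting inside S_4)
betas = [act(word[:t], alpha(word[t])) for t in range(len(word))]

pos_roots = [tuple(a - b for a, b in zip(e(i), e(j)))
             for i in range(1, n + 1) for j in range(1, n + 1) if i < j]
inv_set = {r for r in pos_roots if not is_positive(act(word[::-1], r))}

assert set(betas) == inv_set and len(betas) == len(word)
```

The β_i are pairwise distinct and exhaust the inversion set, which is exactly the statement of the proposition for this reduced word.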
Theorem 3.19 (Matsumoto's exchange condition). If (W, S) is a finite Coxeter group, w = s_1 ⋯ s_q (where s_i = s_{α_i} for some α_i ∈ Δ), and ℓ(sw) < ℓ(w) for some s = s_α, α ∈ Δ, then there exists 1 ≤ i ≤ q such that

    sw = s_1 ⋯ ŝ_i ⋯ s_q.

Moreover, if s_1 ⋯ s_q is a reduced expression, then i is unique.

Proof. Set w' = s_α w and suppose w' = s'_1 ⋯ s'_p (p < q) is a reduced decomposition for some simple generators s'_i ∈ S. Then s_α s'_1 ⋯ s'_p = w and this is a reduced decomposition. Therefore, by Proposition 3.17 applied to this reduced expression, we see that α = β_1, hence α ∈ Π_{w^{-1}}. On the other hand, by Remark 3.18 (or by the proof of the Proposition) applied to the expression w = s_1 ⋯ s_q, we have Π_{w^{-1}} ⊆ {β ∈ Φ^+ : s_β(s_1 ⋯ ŝ_i ⋯ s_q) = w, for some i}. Hence s_α(s_1 ⋯ ŝ_i ⋯ s_q) = w for some i, that is, sw = s_1 ⋯ ŝ_i ⋯ s_q.

For uniqueness, suppose there are i < j such that

    sw = s_1 ⋯ ŝ_i ⋯ s_q = s_1 ⋯ ŝ_j ⋯ s_q,

and w = s_1 ⋯ s_q is reduced. Cancelling the common prefix s_1 ⋯ s_{i-1} and the common suffix s_{j+1} ⋯ s_q, we see that s_{i+1} ⋯ s_j = s_i ⋯ s_{j-1}, and hence s_i s_{i+1} ⋯ s_j = s_{i+1} ⋯ s_{j-1}. The former segment has j - i + 1 simple reflections, the latter j - i - 1. Therefore, replacing the segment s_i ⋯ s_j in w by s_{i+1} ⋯ s_{j-1}, the length of w gets shorter by two. This contradicts the reducedness of s_1 ⋯ s_q.
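The exchange condition is easy to observe in S_4. In the Python sketch below (an illustration, not from the text) the word [1, 2, 3, 1] is reduced since its permutation has 4 inversions, ℓ(s_1 w) < ℓ(w) holds, and exactly one letter can be deleted to produce a word for s_1 w.

```python
def compose(p, q):
    """(p*q)(i) = p(q(i)); permutations in one-line notation, 0-based."""
    return tuple(p[q[i]] for i in range(len(p)))

def s(i, n=4):
    """Simple transposition s_i = (i, i+1), 1-based index."""
    t = list(range(n))
    t[i - 1], t[i] = t[i], t[i - 1]
    return tuple(t)

def from_word(word, n=4):
    w = tuple(range(n))
    for i in word:
        w = compose(w, s(i, n))      # multiply on the right
    return w

def inv(w):
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w))
               if w[i] > w[j])

word = [1, 2, 3, 1]
w = from_word(word)
assert inv(w) == len(word)           # the word is reduced
sw = compose(s(1), w)
assert inv(sw) == inv(w) - 1         # l(s_1 w) < l(w): exchange applies
hits = [i for i in range(len(word))
        if from_word(word[:i] + word[i + 1:]) == sw]
assert len(hits) == 1                # unique deletion position
```

Running the same search for a word that is not reduced typically produces several deletion positions, which is why uniqueness requires reducedness.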

3.1 Weyl groups (crystallographic Coxeter groups)

Suppose we require

    ⟨α, β⟩ := 2B(α, β)/B(β, β) ∈ Z  (3.20)

for all simple roots α, β ∈ Δ. Then s_β(α) = α - ⟨α, β⟩ β lies in Z[Δ], the lattice generated by Δ in V. Obviously, in this case Φ ⊆ Z[Δ] also. (This is equivalent to saying that W acts integrally on Φ.) Furthermore,

    ⟨α, β⟩⟨β, α⟩ = 4 B(α, β)^2 / (B(α, α) B(β, β)) = 4 cos^2(π/m_{α,β}) ∈ Z

implies that m_{α,β} ∈ {2, 3, 4, 6} for α ≠ β. Therefore, we see that the integrality condition has severe consequences; hence these Coxeter groups deserve a name: we call them Weyl groups.
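The constraint 4 cos^2(π/m) ∈ Z can be checked by brute force; among m ≥ 2, only m ∈ {2, 3, 4, 6} qualify.

```python
import math

def near_int(x, eps=1e-9):
    return abs(x - round(x)) < eps

# 4 cos^2(pi/m) takes the integer values 0, 1, 2, 3 exactly at m = 2, 3, 4, 6;
# for larger m the value creeps toward 4 without ever being an integer.
allowed = [m for m in range(2, 50) if near_int(4 * math.cos(math.pi / m) ** 2)]
assert allowed == [2, 3, 4, 6]
```

This is the numerical shadow of the crystallographic restriction: only these four orders of s_α s_β are compatible with an integral action on the root lattice.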
The co-root of a root α ∈ Φ is, by definition, the vector

    α∨ = 2α / B(α, α) ∈ V.

Note that (3.20) is equivalent to requiring that

    B(α, β∨) ∈ Z for all α, β ∈ Δ.

The fundamental (dominant) weight ω_α ∈ V associated with α ∈ Δ is the dual of the co-root α∨ of α:

    B(ω_α, β∨) = 1 if β = α, and 0 if β ≠ α,

for all β ∈ Δ. Since Δ is a basis for V, it is clear that {ω_α}_{α ∈ Δ} is a basis for V, also. Fundamental dominant weights play an important role in the representation theory of Lie algebras and groups.
3.2 Parabolic Subgroups: stabilizers of the faces of the fundamental chamber

Let I be a subset of a set of simple roots Δ associated with a Coxeter group W. The subgroup generated by the elements s_α, α ∈ I, is called a parabolic subgroup, and denoted by W_I. These subgroups arise naturally in the analysis of the Weyl chambers of a Coxeter system.

Let C_I denote the face of the fundamental chamber

    C_I = ( ∩_{α ∈ I} H_α ) ∩ ( ∩_{α ∉ I} A_α ),

where H_α is the hyperplane perpendicular to α and A_α is the positive half space whose inward normal direction is given by α.
Theorem 3.21. Let w, w' ∈ W and I, J ⊆ Δ. If the intersection of wC_I and w'C_J is non-empty, then wW_I = w'W_J and I = J.

Proof. Observe that wC_I ∩ w'C_J ≠ ∅ if and only if (w')^{-1}wC_I ∩ C_J ≠ ∅, and wW_I = w'W_J with I = J if and only if (w')^{-1}wW_I = W_J. Therefore, we do not lose anything by assuming w' to be the identity element of W.

Now we induct on the length of w. It is clear that if ℓ(w) = 0, then w is the identity element, and C_I ∩ C_J ≠ ∅ if and only if I = J, and hence W_I = W_J. Assume that the result is true for all w ∈ W with ℓ(w) < k, and we prove it for ℓ(w) = k. To this end, let w ∈ W be an element of length k and suppose that wC_I ∩ C_J is non-empty. We would like to conclude that w ∈ W_I and I = J.

Since W_I is a subgroup, it decomposes W into a union of right cosets. Suppose w lies in the right coset W_I u for some u ∈ W. Then w = w''u for some w'' ∈ W_I. Since W is finite, we choose u to be of minimal possible length. Since w'' ∈ W_I, there exists s ∈ I such that ℓ(sw'') = ℓ(w'') - 1. Then the length of sw''u = sw is one less than that of w''u = w.

Recall from Lemma 2.25 that ℓ(sw) = ℓ(w) - 1 holds if and only if wC ⊆ sA_s. By taking the closures we see that

    wC_I ∩ C_J ⊆ w C̄ ∩ C̄ ⊆ closure(sA_s) ∩ closure(A_s) = H_s.

It follows that if wC_I ∩ C_J is non-empty, then C_J intersects H_s non-trivially. But by the definition of C_J this is possible if and only if C_J is contained in H_s. Since s fixes H_s pointwise and wC_I ∩ C_J ⊆ H_s, we may now re-write wC_I ∩ C_J:

    wC_I ∩ C_J = s(wC_I ∩ C_J) = swC_I ∩ sC_J = swC_I ∩ C_J.

Therefore, by the induction assumption we have that sw ∈ W_I, hence w ∈ W_I and also I = J. □
The following is immediate from this parabolic analogue of Tits's theorem.

Corollary 3.22. The stabilizer of any point in C_I is W_I.

Proof. If x ∈ C_I and wx = x, then wC_I ∩ C_I ≠ ∅, hence w ∈ W_I. Conversely, if s = s_α with α ∈ I is a simple reflection of W_I, then since C_I ⊆ H_α and s fixes H_α pointwise, s fixes every point of C_I. Since W_I is generated by its simple reflections, we are done. □
3.3 Root Systems and Linear Algebraic Groups

Humphreys's books, 1) Linear Algebraic Groups and 2) Introduction to Lie Algebras and Representation Theory, are excellent sources for the details. Let us define, without loss of generality, a linear algebraic group to be a closed subgroup G ⊆ GL_n.

For Lie theory, there are two essential classes of subgroups: solvable subgroups and unipotent subgroups. Recall that a group B is called solvable if there exists a sequence of normal subgroups 1 = B_0 ⊴ B_1 ⊴ ⋯ ⊴ B_{k-1} ⊴ B_k = B such that the quotient group B_i/B_{i-1} is commutative for i = 1, . . . , k. A subgroup U of a linear algebraic group is called unipotent if it consists of unipotent elements (matrices).

The radical R(G) of a linear algebraic group G is defined to be the maximal closed, connected, normal solvable subgroup. The unipotent radical R_u(G) of G is the maximal closed, connected, normal unipotent subgroup. A linear algebraic group is called

1. reductive, if R_u(G) = 1,
2. semisimple, if R(G) = 1.
Remark 3.23. Semi-simple groups are reductive. In characteristic $0$, reductivity is equivalent to complete reducibility of rational representations.
Example 3.24. $GL_n$ is reductive but not semi-simple. $SL_n$ is semi-simple.
The Lie algebra $\mathfrak{g}$ of $G$ is defined to be the tangent space at the identity element of $G$. $\mathfrak{g}$ is
called linear if it is a subalgebra of $\mathrm{End}(\mathbb{C}^n)$ with respect to the product defined by $A \cdot B = [A, B] =
AB - BA$. A Lie algebra is called simple if it is non-abelian and does not contain any non-trivial ideals, and it is called
semi-simple if it is a direct sum of simple ideals. In particular, the Lie algebra of a semi-simple
linear algebraic group is semi-simple. A Lie algebra is called reductive if the quotient by
its center is semi-simple. In particular, the Lie algebra of a reductive linear algebraic group is
reductive.
A useful outcome of the semi-simplicity of $\mathfrak{g}$ is that the adjoint representation $\mathrm{ad} : \mathfrak{g} \to \mathrm{End}(\mathfrak{g})$
defined by
\[
\mathrm{ad}(g)(h) := [g, h]
\]
is injective. So semi-simple Lie algebras, as we defined them, are always linear.
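As a quick sanity check (ours, with $\mathfrak{g} = \mathfrak{sl}_2$), the map $x \mapsto \mathrm{ad}(x)$, written as a $9 \times 3$ matrix over the basis $\{e, h, f\}$, has full rank, so $\mathrm{ad}$ is indeed injective:

```python
import numpy as np

# Standard basis of sl_2.
e = np.array([[0., 1.], [0., 0.]])
h = np.array([[1., 0.], [0., -1.]])
f = np.array([[0., 0.], [1., 0.]])
basis = [e, h, f]

def ad(x):
    """3x3 matrix of ad(x) = [x, -] on sl_2 in the basis (e, h, f)."""
    cols = []
    for b in basis:
        m = x @ b - b @ x
        # A traceless 2x2 matrix m equals m[0,1]*e + m[0,0]*h + m[1,0]*f.
        cols.append([m[0, 1], m[0, 0], m[1, 0]])
    return np.array(cols).T

# ad(h) is diagonal with the familiar eigenvalues 2, 0, -2.
assert np.allclose(ad(h), np.diag([2., 0., -2.]))

# Injectivity of x -> ad(x): the 9x3 matrix of the map has rank 3.
M = np.column_stack([ad(b).flatten() for b in basis])
assert np.linalg.matrix_rank(M) == 3
```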
A group acts on itself by conjugation, and for a linear algebraic group $G$ this action defines a
representation on the tangent space $T_e G = \mathfrak{g}$ at its identity element:
\[
\mathrm{Ad} : G \to GL(\mathfrak{g}).
\]
Its derivative at the identity is a linear map from $\mathfrak{g}$ into $\mathrm{End}(\mathfrak{g})$. If one computes this derivative (just
as in a calculus class), one sees that the resulting representation is nothing but the adjoint repre-
sentation of $\mathfrak{g}$. For this reason, we call $\mathrm{Ad}$ the adjoint representation of $G$.
Let $T \subseteq G$ be a torus, which is, by definition, a commutative linear algebraic group consisting
of diagonalizable elements only. Therefore, the adjoint representation of $G$ restricted to $T$ splits $\mathfrak{g}$
into simultaneous eigenspaces. If we assume that $T$ is a maximal torus, then the decomposition,
which is called the Cartan decomposition, has the form
\[
\mathfrak{g} = \mathfrak{t} \oplus \bigoplus_{r \in R(G,T)} \mathfrak{g}_r,
\]
where $R(G, T)$ is the finite set of non-zero character functions $r : T \to \mathbb{C}^*$ such that
\[
\mathfrak{g}_r = \{ v \in \mathfrak{g} : t \cdot v = r(t)v \text{ for all } t \in T \} \neq 0.
\]
Furthermore, the $\mathfrak{g}_r$'s are all one dimensional.
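This decomposition is transparent for $\mathfrak{g} = \mathfrak{gl}_n$ with $T$ the diagonal torus: $\mathfrak{t}$ consists of the diagonal matrices, and each root space $\mathfrak{g}_r$ is spanned by a single elementary matrix $E_{ij}$, since $t E_{ij} t^{-1} = (t_i/t_j) E_{ij}$. A minimal numerical check (helper names are ours):

```python
import numpy as np

n = 3
t = np.diag([2., 3., 5.])                 # an element of the diagonal torus
t_inv = np.diag([1 / 2, 1 / 3, 1 / 5])

def E(i, j):
    m = np.zeros((n, n))
    m[i, j] = 1.
    return m

# Ad(t) acts by conjugation; E_ij is an eigenvector with eigenvalue
# r(t) = t_i / t_j, the character corresponding to the root d_i - d_j.
for i in range(n):
    for j in range(n):
        if i != j:
            assert np.allclose(t @ E(i, j) @ t_inv,
                               (t[i, i] / t[j, j]) * E(i, j))
```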
Remark 3.25. 1. Hence $\mathfrak{t}$ is the $0$-eigenspace for the action of $T$.
2. We could formulate the above action of $T$ using its Lie algebra, and in that context $R(\mathfrak{g}, \mathfrak{t})$ is
a set of linear functionals on $\mathfrak{t}$.
3. A set of simple roots in $R(G, T)$ is in fact an algebraically independent set of functions in the algebra of regular
functions (the coordinate ring) of $T$.
The above set of character functions $R(G, T)$ forms a root system. Indeed, let $X^*(T)$ denote
the free abelian group generated by the character functions. Let $E$ denote the Euclidean space
$X^*(T) \otimes_{\mathbb{Z}} \mathbb{R}$. Checking that $R(G, T)$ satisfies the axioms of a root system in $E$ is easy. (Alternatively,
we replace $r : T \to \mathbb{C}^*$ by its differential $dr : \mathfrak{t} \to \mathbb{C}$, which is a linear functional on $\mathfrak{t}$. It is easy to
check that $\Phi = dR(G, T) = R(\mathfrak{g}, \mathfrak{t})$ is a root system in $\mathfrak{t}^*$.)
The Weyl group $W$ of an algebraic group $G$ is defined to be the quotient $W = N_G(T)/Z_G(T)$,
where $N_G(T)$ is the normalizer of the maximal torus $T$ in $G$ and $Z_G(T)$ is the centralizer of the
maximal torus. We know that if $G$ is reductive, then $Z_G(T) = T$.
A Borel subgroup $B$ of a linear algebraic group $G$ is a maximal closed, connected solvable
subgroup.
Remark 3.26. It follows from the definition of reductivity that, in a reductive group which is not a torus, Borel sub-
groups are never normal. In other words, the conjugation action of $G$ on the set of Borel subgroups
has no fixed points. There are two more important facts that we would like to mention:
1. the conjugation action of $G$ on the set of all Borel subgroups is transitive,
2. the set of all Borel subgroups of $G$ has a projective variety structure.
One of the most important facts about Borel subgroups in linear algebraic groups is that the
$B \times B$-orbits on $G$ are organized with respect to the combinatorics of the Weyl group.
Theorem 3.27 (Bruhat-Chevalley decomposition). Let $G$ be a reductive linear algebraic group,
and let $B \subseteq G$ be a Borel subgroup. Then
\[
G = \bigsqcup_{w \in W} B \dot{w} B,
\]
where $W$ is the Weyl group of $G$ and, for $w \in W$, $\dot{w} \in N_G(T)$ is a representative of $w$.
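For $G = GL_n$ and $B$ the upper-triangular Borel, the permutation $w$ with $g \in B\dot{w}B$ can be recovered from the ranks of lower-left submatrices of $g$ (a standard fact; the code below is our own sketch and assumes $g$ is invertible):

```python
import numpy as np

def bruhat_permutation(g):
    """Return w as a 0-indexed list (column j of the permutation matrix
    has its 1 in row w[j]) with g in B w B, for B upper triangular.
    Rank criterion: rank g[i:, :j+1] = #{k <= j : w[k] >= i}, which is
    invariant under left and right multiplication by B."""
    n = g.shape[0]
    def r(i, j):
        return np.linalg.matrix_rank(g[i:, :j + 1]) if j >= 0 else 0
    w = []
    for j in range(n):
        # w[j] is the largest row index where column j causes a rank jump.
        for i in range(n - 1, -1, -1):
            if r(i, j) == r(i, j - 1) + 1:
                w.append(i)
                break
    return w
```

For instance, any invertible $2 \times 2$ matrix with nonzero lower-left entry lands in the big cell, so `bruhat_permutation` returns the transposition `[1, 0]` for it.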
There is an important extension of the Bruhat-Chevalley decomposition to certain semigroups.
Definition 3.28. A closed submonoid $M \subseteq \mathrm{Mat}_n$ is called a linear algebraic monoid.
Since the determinant $\det : \mathrm{Mat}_n \to \mathbb{C}$ is a morphism, the group of invertible elements of a linear
algebraic monoid forms a linear algebraic group. We call a linear algebraic monoid reductive if
its group of invertible elements is reductive.
Theorem 3.29 (Bruhat-Chevalley-Renner decomposition). Let $M$ be a reductive linear algebraic
monoid with unit group $G$, and let $B \subseteq G$ be a Borel subgroup. Then
\[
M = \bigsqcup_{r \in R} B \dot{r} B,
\]
where $R = \overline{N_G(T)}/T$ and the closure is taken in $M$.
The parametrizing object $R$ in the above theorem is called the Renner monoid of $M$. Essentially,
it plays the same role for $M$ that $W$ plays for $G$. We list some well-known facts about $R$:
1. $R$ is a finite semigroup,
2. $W \subseteq R$ is the group of invertible elements,
3. $R$ is an inverse semigroup; for each $r \in R$, there exists a unique $r' \in R$ such that $rr'r = r$ and
$r'rr' = r'$.
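For $M = \mathrm{Mat}_n$, the Renner monoid is (a standard fact) the rook monoid of $0$-$1$ matrices with at most one $1$ in each row and column, of cardinality $\sum_k \binom{n}{k}^2 k!$. A brute-force count confirms the formula for small $n$:

```python
from itertools import product
from math import comb, factorial

def rook_monoid_size(n):
    """Count n x n 0-1 matrices with at most one 1 per row and column."""
    count = 0
    for bits in product((0, 1), repeat=n * n):
        rows = [bits[i * n:(i + 1) * n] for i in range(n)]
        if all(sum(row) <= 1 for row in rows) and \
           all(sum(col) <= 1 for col in zip(*rows)):
            count += 1
    return count

def closed_form(n):
    return sum(comb(n, k) ** 2 * factorial(k) for k in range(n + 1))

print(rook_monoid_size(3), closed_form(3))  # both 34
```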
The geometric Bruhat-Chevalley ordering (group case) is defined as follows. Let $w, w' \in W$ be
two elements. Then
\[
w \leq w' \iff B \dot{w} B \subseteq \overline{B \dot{w}' B}, \tag{3.30}
\]
where $\dot{w}$ and $\dot{w}'$ are the representatives of $w$ and $w'$ in $N_G(T)$. The geometric Bruhat-Chevalley-
Renner ordering (monoid case) on $R$ is an extension of the previous case:
\[
r \leq r' \iff B \dot{r} B \subseteq \overline{B \dot{r}' B}, \tag{3.31}
\]
where $\dot{r}, \dot{r}'$ are the representatives of $r, r' \in R$ in $\overline{N_G(T)}$.
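For $W = S_n$ the geometric order (3.30) agrees with the familiar combinatorial Bruhat order, which can be tested by the standard rank criterion: $w \le w'$ iff $r_w(i,j) \ge r_{w'}(i,j)$ for all $i, j$, where $r_w(i,j) = \#\{k \le i : w(k) \le j\}$. A small sketch of ours:

```python
from itertools import permutations

def rank_fn(w, i, j):
    # r_w(i, j) = #{k <= i : w(k) <= j}, everything 0-indexed
    return sum(1 for k in range(i + 1) if w[k] <= j)

def bruhat_le(w, v):
    """Rank criterion for the Bruhat order on S_n."""
    n = len(w)
    return all(rank_fn(w, i, j) >= rank_fn(v, i, j)
               for i in range(n) for j in range(n))

S3 = list(permutations(range(3)))
w0 = (2, 1, 0)                      # the longest element
assert all(bruhat_le((0, 1, 2), w) for w in S3)   # identity is the minimum
assert all(bruhat_le(w, w0) for w in S3)          # w0 is the maximum
assert not bruhat_le((1, 0, 2), (0, 2, 1))        # distinct simple reflections are incomparable
```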
3.4 Parabolic Subgroups via Geometry (to be re-written)
Important facts:
1. When a Borel subgroup $B$ acts on a projective variety $X$, there exists a point $x \in X$ such that $B \cdot x = x$.
This fact is known as the Borel fixed point theorem (a consequence of the Lie-Kolchin theorem) and its use leads to great results.
2. GIT tells us that the orbit of a linear algebraic group $G$ is projective (or (quasi-)affine) if
and only if $G/H$ is projective (resp. (quasi-)affine), where $H$ is the stabilizer of a point of the orbit.
We call a closed subgroup $P \subseteq G$ parabolic if it contains a Borel subgroup.
Theorem 3.32. The quotient space $G/P$ is a projective variety if and only if $P$ is parabolic.
Proof. Let $B \subseteq G$ be a Borel subgroup. Suppose $G/P$ is projective. Then there exists a $B$-fixed
point $xP$ in $G/P$. In other words, $BxP = xP$, hence $x^{-1}Bx \subseteq P$. Therefore, $P$ contains a Borel
subgroup.
Conversely, start with a faithful representation $V$ of $G$. Then the induced action on the flag
variety of $V$ has a closed orbit, and the stabilizer of a point of this orbit is a Borel subgroup $B'$. Suppose that
$B = gB'g^{-1} \subseteq P$. By the #2 of the important facts above, $G/B$, hence $G/P$, is projective. $\square$
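For $G = GL_n$ with $B$ the upper-triangular Borel, it is a standard fact that the parabolic subgroups containing $B$ are exactly the block upper-triangular groups; they correspond to compositions of $n$ (equivalently, subsets of the $n - 1$ simple roots), so there are $2^{n-1}$ of them. A quick enumeration (ours):

```python
from itertools import combinations

def compositions(n):
    """Compositions of n, via subsets of the n - 1 gaps between 1..n;
    each composition lists the diagonal block sizes of a standard parabolic."""
    out = []
    for k in range(n):
        for cuts in combinations(range(1, n), k):
            pts = [0, *cuts, n]
            out.append(tuple(pts[i + 1] - pts[i] for i in range(len(pts) - 1)))
    return out

print(len(compositions(4)))  # 8 = 2**3 standard parabolics of GL_4
```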
4 Algebraic symmetric spaces
We assume that $G$ and all of its subgroups we consider here are defined over an algebraically closed
field of characteristic $0$.
Let $G$ be a linear algebraic group and let $\theta : G \to G$ denote an involutory automorphism of $G$.
Let $K$ denote the subgroup $K = \{ g \in G : \theta(g) = g \}$. A quotient of the form $G/K$, which carries an
affine algebraic variety structure (for it is isomorphic to an orbit of $G$), is called a symmetric space.
The purpose of this section is to explain how a root system of $G$ behaves relative to $K$.
Let $\mathfrak{g}$ denote the Lie algebra of $G$. There is an induced (from $\theta$) Lie algebra automorphism
of order $2$, which we denote by $\theta$ also. $K$ being $\theta$-fixed implies that its Lie algebra $\mathrm{Lie}(K)$ is a
subalgebra of $\mathfrak{g}$ such that $h \in \mathrm{Lie}(K)$ if and only if $\theta(h) = h$.
To this end, let us recall a very important result from [Steinberg, Endomorphisms of linear algebraic groups].
Theorem 4.1. Every $\theta$-stable torus in $G$ is contained in a maximal torus of $G$ which is $\theta$-stable.
What we need is not necessarily that $\theta$-stable maximal torus, but rather a $\theta$-stable torus a
certain subtorus of which has the maximal possible rank. Let us explain.
Let $T$ be a $\theta$-stable maximal torus of $G$ and let $\mathfrak{t} = \mathrm{Lie}(T)$ denote its Lie algebra. Then $\mathrm{Lie}(T)$
splits into a direct sum of two subalgebras, $\mathfrak{t} = \mathfrak{t}_0 \oplus \mathfrak{t}_1$, where
\[
\mathfrak{t}_0 = \mathrm{Lie}(T_0), \qquad T_0 := \{ t \in T : \theta(t) = t \},
\]
\[
\mathfrak{t}_1 = \mathrm{Lie}(T_1), \qquad T_1 := \{ t \in T : \theta(t) = t^{-1} \}.
\]
We call $T_1$ an anisotropic torus (relative to $\theta$). For our purposes, we need $T$ to be such that its $T_1$
is maximal among all anisotropic tori (rel. to $\theta$) of $G$.
We look at the case of $Sp_{2n}$. Recall that the involution in this case is of the form $\theta(g) =
J (g^{-1})^{t} J^{-1}$, where $J$ is the block-diagonal matrix having $n$ blocks of size $2 \times 2$ of the form
\[
\begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.
\]
It is easy to check that $\theta$ maps a diagonal matrix
\[
x = \mathrm{diag}(x_1, x_2, \ldots, x_{2n-1}, x_{2n})
\]
to
\[
\theta(x) = \mathrm{diag}(x_2^{-1}, x_1^{-1}, \ldots, x_{2n}^{-1}, x_{2n-1}^{-1}).
\]
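These formulas are easy to verify numerically for $n = 2$ (assuming the block form of $J$ above): $\theta$ is an involution and acts on diagonal matrices by the swap-and-invert pattern just described.

```python
import numpy as np

block = np.array([[0., 1.], [-1., 0.]])
J = np.kron(np.eye(2), block)        # n = 2 diagonal 2x2 blocks

def theta(g):
    """theta(g) = J (g^{-1})^t J^{-1}."""
    return J @ np.linalg.inv(g).T @ np.linalg.inv(J)

g = np.triu(np.ones((4, 4)))         # some invertible matrix
assert np.allclose(theta(theta(g)), g)                        # involution

x = np.diag([2., 3., 5., 7.])
assert np.allclose(theta(x), np.diag([1/3, 1/2, 1/7, 1/5]))   # swap and invert
```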
Therefore, the subgroup of all invertible diagonal matrices $T \subseteq SL_{2n}$ is a $\theta$-stable maximal
torus, and the subgroup $S \subseteq T$ of elements of the form
\[
\mathrm{diag}(x_1, x_1, x_3, x_3, \ldots, x_{2n-1}, x_{2n-1})
\]
is a split torus.
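Continuing the $n = 2$ check: diagonal elements built from pairs $(a, a^{-1})$ are $\theta$-fixed, while elements with equal pairs, as in $S$, satisfy $\theta(s) = s^{-1}$.

```python
import numpy as np

block = np.array([[0., 1.], [-1., 0.]])
J = np.kron(np.eye(2), block)

def theta(g):
    return J @ np.linalg.inv(g).T @ np.linalg.inv(J)

t0 = np.diag([2., 1/2, 3., 1/3])   # pairs (a, a^{-1}): theta-fixed
s = np.diag([2., 2., 3., 3.])      # equal pairs: theta inverts
assert np.allclose(theta(t0), t0)
assert np.allclose(theta(s), np.linalg.inv(s))
```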
For $i = 1, \ldots, 2n$, let $\epsilon_i : T \to \mathbb{C}^*$ denote the $i$-th coordinate function on $T$. Its differential
$d\epsilon_i$ is then a linear functional on the Lie algebra $\mathfrak{t} = \mathrm{Lie}(T)$ of $T$, that is, an element of the dual
space $\mathfrak{t}^*$. The root system of $(T, G)$ is then equal to $\Phi = \{ d\epsilon_i - d\epsilon_j : 1 \leq i \neq j \leq 2n \}$. Let $B$ denote
the Borel subgroup of upper triangular matrices. The corresponding set of positive roots and the set
of simple roots are given by
\[
\Phi^+ = \{ d\epsilon_i - d\epsilon_j : 1 \leq i < j \leq 2n \},
\]
\[
\Delta = \{ \alpha_i = d\epsilon_i - d\epsilon_{i+1} : 1 \leq i \leq 2n-1 \}.
\]
Since the induced action of $\theta$ on the coordinate functions is given by $(\theta \cdot d\epsilon_i)(x) = d\epsilon_i(\theta(x))$, we
compute:
\[
\theta \cdot d\epsilon_i = \begin{cases} -d\epsilon_{i-1} & \text{if } i = 2k, \\ -d\epsilon_{i+1} & \text{if } i = 2k-1, \end{cases}
\]
hence, assuming $i + 1 < j$, we have
\[
\theta \cdot (d\epsilon_i - d\epsilon_j) = \begin{cases} -d\epsilon_{i-1} + d\epsilon_{j-1} & \text{if } i = 2k \text{ and } j = 2l, \\ -d\epsilon_{i-1} + d\epsilon_{j+1} & \text{if } i = 2k \text{ and } j = 2l-1, \\ -d\epsilon_{i+1} + d\epsilon_{j-1} & \text{if } i = 2k-1 \text{ and } j = 2l, \\ -d\epsilon_{i+1} + d\epsilon_{j+1} & \text{if } i = 2k-1 \text{ and } j = 2l-1. \end{cases}
\]
When $i + 1 = j$, we have
\[
\theta(\alpha_i) = \begin{cases} -\alpha_{i-1} - \alpha_i - \alpha_{i+1} = -d\epsilon_{i-1} + d\epsilon_{i+2} & \text{if } i = 2k, \\ \alpha_i = d\epsilon_i - d\epsilon_{i+1} & \text{if } i = 2k-1. \end{cases}
\]
It is easy to see now that the only positive roots that are fixed by $\theta$ are the simple roots $\alpha_i$
where $i$ is an odd number. Moreover, $\theta(\Phi^+ \setminus \Delta_1) \subseteq -\Phi^+$, where $\Delta_1$ denotes the set of $\theta$-fixed
simple roots.
Therefore, $(T, B)$ is a split pair, with
\[
\Delta \setminus \Delta_1 = \{\alpha_2, \alpha_4, \ldots, \alpha_{2n-2}\}, \qquad \Delta_1 = \{\alpha_1, \alpha_3, \ldots, \alpha_{2n-1}\}.
\]
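The claims above can be confirmed by brute force, encoding $d\epsilon_i$ as the $i$-th standard basis vector of $\mathbb{Z}^{2n}$ and $\theta$ as the linear map $e_i \mapsto -e_{i-1}$ for $i$ even, $e_i \mapsto -e_{i+1}$ for $i$ odd (a sketch of ours, here with $2n = 8$):

```python
n = 4  # so G = SL_{2n} with 2n = 8

def theta_vec(v):
    """Apply theta to a vector in the root lattice Z^{2n}; coordinate
    i (1-indexed) is d_eps_i, and theta(e_i) = -e_{i-1} (i even),
    -e_{i+1} (i odd)."""
    out = [0] * (2 * n)
    for idx, c in enumerate(v):
        i = idx + 1
        j = i - 1 if i % 2 == 0 else i + 1
        out[j - 1] -= c
    return out

def root(i, j):  # d_eps_i - d_eps_j, 1-indexed
    v = [0] * (2 * n)
    v[i - 1], v[j - 1] = 1, -1
    return v

positive = [root(i, j) for i in range(1, 2 * n + 1)
            for j in range(i + 1, 2 * n + 1)]
fixed = [r for r in positive if theta_vec(r) == r]

# The fixed positive roots are exactly the odd-indexed simple roots.
assert fixed == [root(i, i + 1) for i in range(1, 2 * n, 2)]
# Every other positive root is sent to a negative root by theta.
negatives = [[-c for c in p] for p in positive]
assert all(theta_vec(r) in negatives for r in positive if r not in fixed)
```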