W = ⟨ s ∈ S : (ss′)^{m_{ss′}} = 1 ⟩,
where m_{ss′} ∈ {2, 3, 4, …, ∞} is the order of ss′ for s ≠ s′, and m_{ss} = 1. It is customary to call the pair (W, S) a Coxeter system. If S is finite, its cardinality is called the rank of (W, S).
A Coxeter group W is called irreducible if it cannot be written as a product of non-trivial
Coxeter groups.
Remark 2.1. There is an irreducible Coxeter group H_3 that is a product, but one of the factors is not a Coxeter group.
Definition 2.2. A Coxeter matrix M = (m_{ss′})_{s,s′∈S} is a symmetric matrix with 1's along the diagonal and m_{ss′} ∈ {2, 3, …, ∞} for s ≠ s′. The associated Coxeter graph has vertex set S; two vertices s ≠ s′ are joined by an edge if m_{ss′} > 2, and the edge is labeled by m_{ss′}. (If m_{ss′} = 3, then we suppress the label from the graph.)
Remark 2.3. A disjoint union of Coxeter systems is again a Coxeter system. In particular, a Coxeter group W is irreducible if and only if the Coxeter graph of W is connected.
Example 2.4. Let S_n denote the symmetric group on n letters. Together with the set S of simple transpositions s_i = (i, i + 1), i = 1, …, n − 1, the pair (S_n, S) forms a Coxeter system. The Coxeter graph of S_n is the path
s_1 — s_2 — ⋯ — s_{n−2} — s_{n−1}.
Recall that a partially ordered set is a pair (P, ≤) consisting of a set P and a relation ≤ satisfying
1. reflexivity,
2. anti-symmetry,
3. transitivity.
A poset is called graded (or ranked) if there exists a rank (or length) function ℓ : P → N such that if y covers x in P, then ℓ(y) = ℓ(x) + 1.
It turns out that Coxeter groups carry several different graded poset structures. Common to these poset structures on a Coxeter system (W, S) is the length function on W.
Definition 2.5. Let (W, S) be a Coxeter system and let w ∈ W. The length of w, denoted ℓ(w), is the smallest integer n ≥ 0 such that w can be written as a product of n elements from S.
If w is of length n and w = s_{i_1} ⋯ s_{i_n} for some s_{i_j} ∈ S, then this expression is called a reduced decomposition of w.
Example 2.6. If w = (w_1, …, w_n) ∈ S_n is a permutation, written in one-line notation, then ℓ(w) = inv(w), where inv(w) is the number of pairs 1 ≤ i < j ≤ n such that w(i) > w(j). We call inv(w) the inversion number of w.
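As a quick numerical sanity check (not part of the notes), the equality ℓ(w) = inv(w) can be tested on small permutations. The helper names below are hypothetical; the length is computed by greedily multiplying by adjacent transpositions at descents, which mirrors bubble sort.

```python
# Sketch: inversion number of a permutation vs. its Coxeter length in S_n.

def inv(w):
    """Number of pairs i < j with w[i] > w[j] (one-line notation)."""
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])

def coxeter_length(w):
    """Length of w: greedily apply s_i at a descent until we reach the identity."""
    w, steps = list(w), 0
    while True:
        for i in range(len(w) - 1):
            if w[i] > w[i + 1]:          # descent: multiplying by s_i shortens w
                w[i], w[i + 1] = w[i + 1], w[i]
                steps += 1
                break
        else:
            return steps                  # no descents left: w is the identity

w = (3, 1, 4, 2)
print(inv(w), coxeter_length(w))  # both equal 3
```

Each swap at a descent removes exactly one inversion, which is why the two counts agree.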
Lemma 2.7. If w, w′ ∈ W, then
1. ℓ(w^{−1}) = ℓ(w),
2. ℓ(w) − ℓ(w′) ≤ ℓ(ww′) ≤ ℓ(w) + ℓ(w′).
Proof. The first part, and the second inequality of the second item, are easy to prove. We only prove the first inequality of the second item. Write w = (ww′)(w′)^{−1}. By the second part,
ℓ(w) ≤ ℓ(ww′) + ℓ((w′)^{−1}) = ℓ(ww′) + ℓ(w′). □
Corollary 2.8. For s ∈ S and w ∈ W, ℓ(ws) = ℓ(w) ± 1.
Proof. Let F(S) denote the free group on S. Then W is the quotient of F(S) by the relations imposed by the matrix M. Define the signature homomorphism ε : F(S) → {−1, 1} by sending a word s_1 s_2 ⋯ s_r to (−1)^r. Since the only relations on W are of the form (ss′)^{m_{ss′}} = 1, and since
ε((ss′)^{m_{ss′}}) = (−1)^{2m_{ss′}} = 1,
the homomorphism factors through W to give a homomorphism ε : W → {−1, 1}. In other words, ε(ww′) = ε(w)ε(w′) for all w, w′ ∈ W. In particular ε(w) = (−1)^{ℓ(w)}, so ε(ws) = −ε(w), hence ℓ(ws) ≠ ℓ(w). Since ℓ(w) − 1 ≤ ℓ(ws) ≤ ℓ(w) + 1 by Lemma 2.7, we conclude ℓ(ws) = ℓ(w) ± 1. □
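For S_n the signature homomorphism is the usual sign of a permutation, so ε(w) = (−1)^{ℓ(w)}. A small sketch (helper names are mine, not from the notes) verifying this against the inversion number on all of S_4:

```python
# Sketch: ε(w) = (−1)^{ℓ(w)} for permutations, with ℓ(w) = inv(w).
from itertools import permutations

def inv(w):
    return sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])

def sign(w):
    """Sign via cycle decomposition: (−1)^(n − number of cycles)."""
    seen, cycles = set(), 0
    for start in range(len(w)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = w[j]
    return (-1) ** (len(w) - cycles)

assert all(sign(w) == (-1) ** inv(w) for w in permutations(range(4)))
print("ε(w) = (−1)^ℓ(w) on all of S_4")
```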
Example 2.10 (Affine permutations). Fix n ∈ N and consider the group S̃_n of all permutations x of Z such that
x(j + n) = x(j) + n for all j ∈ Z, and
∑_{i=1}^{n} x(i) = (n + 1)n/2.
Then S̃_n is a Coxeter group with circular Coxeter graph and generating set {s_1, …, s_n}, where
s_i = ∏_{j∈Z} (i + jn, i + 1 + jn),
for i = 1, …, n. The Coxeter graph of S̃_n is the cycle on the vertices s_1, s_2, …, s_{n−1}, s_n (each s_i joined to s_{i+1}, and s_n joined back to s_1).
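An affine permutation is determined by its window [x(1), …, x(n)], since the values elsewhere are forced by x(j + n) = x(j) + n. The sketch below (hypothetical helper, not from the notes) implements right multiplication by the generators s_i on windows:

```python
# Sketch: right multiplication by s_i on the window of an affine permutation.
# For i < n this swaps window positions i and i+1; s_n wraps around with a shift of n.

def right_mult(window, i):
    """Return the window of x·s_i, for i = 1, ..., n."""
    w, n = list(window), len(window)
    if i < n:
        w[i - 1], w[i] = w[i], w[i - 1]
    else:  # s_n swaps positions n and n+1; position n+1 ≡ position 1 shifted by n
        w[0], w[n - 1] = w[n - 1] - n, w[0] + n
    return w

e = [1, 2, 3, 4]                      # the identity in S̃_4
w1 = right_mult(e, 1)                 # [2, 1, 3, 4]
w4 = right_mult(e, 4)                 # [0, 2, 3, 5]
assert right_mult(w1, 1) == e         # s_1 is an involution
assert right_mult(w4, 4) == e         # so is s_4
assert sum(w4) == 5 * 4 // 2          # the window sum (n+1)n/2 is preserved
print(w1, w4)
```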
2.1 Reflections
Let V be a vector space over R equipped with a symmetric bilinear form (·, ·). For α ∈ V with (α, α) ≠ 0, let H_α denote the hyperplane {y ∈ V : (y, α) = 0}, and define the reflection s_α : V → V by
s_α(y) = y − (2(y, α)/(α, α)) α, for y ∈ V. (2.11)
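Formula (2.11) is easy to experiment with numerically. The sketch below (a hypothetical helper, with the standard dot product on R^n standing in for (·, ·)) checks that s_α negates α, fixes the orthogonal hyperplane's behavior on other vectors, and is an involution:

```python
# Sketch: the reflection s_α(y) = y − 2(y, α)/(α, α) · α with the standard dot product.

def s(alpha, y):
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    c = 2 * dot(y, alpha) / dot(alpha, alpha)
    return [yi - c * ai for yi, ai in zip(y, alpha)]

alpha = [1.0, 0.0]
print(s(alpha, [3.0, 2.0]))   # [-3.0, 2.0]: the coordinate along α flips
print(s(alpha, alpha))        # [-1.0, 0.0]: s_α(α) = −α
assert s(alpha, s(alpha, [3.0, 2.0])) == [3.0, 2.0]   # s_α is an involution
```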
Remark 2.12. An important observation (for Lie theory) is that for any automorphism σ of V such that (x, y) = (σ(x), σ(y)) for all x, y ∈ V, we have
σ s_α σ^{−1} = s_{σ(α)}.
In fact,
σ s_α σ^{−1}(y) = σ(s_α(σ^{−1}(y)))
= σ( σ^{−1}(y) − (2(σ^{−1}(y), α)/(α, α)) α )
= y − (2(σ^{−1}(y), α)/(α, α)) σ(α)
= y − (2(y, σ(α))/(σ(α), σ(α))) σ(α)
= s_{σ(α)}(y).
Let us call a transformation which preserves the standard inner product on R^n orthogonal.
Lemma 2.13. Every orthogonal transformation of R^2 is either a reflection or a rotation.
Proof. If σ(e_1) = a e_1 + b e_2, then σ(e_2) has to be perpendicular to σ(e_1) and has to be on the unit circle. Thus we have two possibilities:
1. if σ(e_2) = −b e_1 + a e_2, then σ is a rotation by an angle θ which satisfies a = cos(θ);
2. if σ(e_2) = b e_1 − a e_2, then σ is the reflection s_α, where α = (cos((θ + π)/2), sin((θ + π)/2)). □
In particular, the product s_α s_β of two reflections is a rotation. It remains to compute the angle between the reflecting lines. Recall that the angle γ between two vectors v and v′ is given by cos γ = (v, v′)/(‖v‖ ‖v′‖). If the order of s_α s_β is m, then the angle between the reflecting lines of s_α and s_β is π/m, and furthermore
(α, β) = cos(π − π/m) = −cos(π/m).
2.2 Positive Bilinear form of a Coxeter system
Let W be a Coxeter group with set of Coxeter generators S = {s_1, …, s_ℓ}.
The canonical bilinear form B(·, ·) : R^ℓ × R^ℓ → R is defined on the standard basis e_1, …, e_ℓ by
B(e_i, e_j) = −cos(π/m_{i,j}) if m_{i,j} < ∞, and B(e_i, e_j) = −1 otherwise.
Obviously B(e_i, e_i) = −cos(π) = 1 for all i.
The canonical bilinear form gives us the canonical representation ρ : W → GL(R^ℓ) of (W, S):
ρ(s_i)(v) = v − 2B(v, e_i) e_i, (2.16)
where s_i ∈ S and v ∈ R^ℓ.
We have some remarks in order.
Remark 2.17. For any Coxeter generator s ∈ S, ρ(s) is a reflection that preserves B(·, ·). Therefore, if we denote by O(B) the subgroup of GL(V) consisting of maps preserving B, the image of ρ lies in O(B).
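As an illustration (not from the notes), one can build B and the matrices ρ(s_i) directly from a Coxeter matrix and check that ρ(s_i)ρ(s_j) has order m_{i,j}. Here this is done for the Coxeter matrix of type A_2, using plain lists:

```python
# Sketch: the canonical bilinear form and representation from a Coxeter matrix.
import math

M = [[1, 3], [3, 1]]                      # Coxeter matrix of S_3 (type A_2)
n = len(M)
B = [[-math.cos(math.pi / M[i][j]) for j in range(n)] for i in range(n)]

def rho(i):
    """Matrix of ρ(s_i): e_j ↦ e_j − 2B(e_j, e_i) e_i (columns are images of e_j)."""
    R = [[1.0 if r == c else 0.0 for c in range(n)] for r in range(n)]
    for j in range(n):
        R[i][j] -= 2 * B[j][i]
    return R

def matmul(X, Y):
    return [[sum(X[r][k] * Y[k][c] for k in range(n)) for c in range(n)] for r in range(n)]

# (ρ(s_0)ρ(s_1))^3 should be the identity, matching m_{01} = 3.
P = matmul(rho(0), rho(1))
Q = matmul(matmul(P, P), P)
assert all(abs(Q[r][c] - (1.0 if r == c else 0.0)) < 1e-9 for r in range(n) for c in range(n))
print("order of ρ(s_0)ρ(s_1) divides 3")
```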
Remark 2.18. Recall that a bilinear form on a finite dimensional vector space V over R is a vector space homomorphism (·, ·) : V ⊗ V → R. The form is called positive if (x, x) ≥ 0 for all x ∈ V, and called positive definite if it is positive and (x, x) = 0 implies x = 0. The form is called non-degenerate if (x, y) = 0 for all y ∈ V implies x = 0. The form is called symmetric if (x, y) = (y, x) for all x, y ∈ V.
The restriction of B(·, ·) to a 2-dimensional subspace Re_s + Re_{s′} (s ≠ s′) is positive, and it is positive definite if and only if m_{ss′} < ∞. Indeed, if x = a e_s + b e_{s′} and m = m_{ss′}, then
B(x, x) = a² + b² − 2ab cos(π/m) = (a − b cos(π/m))² + b² sin²(π/m).
Therefore, B(x, x) ≥ 0, and when m < ∞, B(x, x) = 0 if and only if x = 0.
By the following lemma we see that ρ does not kill any relation.
Lemma 2.19. If s, s′ ∈ S, then the order of ρ(s)ρ(s′) is m_{ss′}.
Proof. It is clear that ρ(s)² = 1, thus we assume s ≠ s′, hence m_{ss′} ≥ 2. Suppose first that m_{ss′} < ∞. Since ρ(s)ρ(s′) fixes the orthogonal complement of E = Re_s + Re_{s′}, we need only consider ρ(s)ρ(s′) on E. Since ρ(s)ρ(s′) is a rotation of E through twice the angle between the reflecting hyperplanes, namely through 2π/m_{ss′}, it has order exactly m_{ss′}. If m_{ss′} = ∞, then one checks by induction that (ρ(s)ρ(s′))^k(e_s) = e_s + 2kx, where x = e_s + e_{s′}. Therefore, ρ(s)ρ(s′) is of infinite order. □
Definition 2.20. A Coxeter system (W, S) is called irreducible if W is not a product of two Coxeter subgroups. In other words, there do not exist two proper subsets S′, S″ ⊂ S with S′ ∪ S″ = S satisfying s′s″ = s″s′ for all s′ ∈ S′ and s″ ∈ S″.
Recall that a representation V of a group G is called completely reducible if for every G-invariant subspace E of V, there exists a complementary G-invariant subspace E′ such that V = E ⊕ E′. If V does not have any proper G-invariant subspace, then it is called an irreducible representation of G.
Proposition 2.21. Suppose that (W, S) is an irreducible Coxeter system. Let V = R^ℓ, and let ρ denote the canonical representation of W on V. If B(·, ·) is non-degenerate (meaning that if B(x, y) = 0 for all y ∈ V, then x = 0), then ρ is irreducible. If B(·, ·) is degenerate, then ρ is not completely reducible.
Proof. Let D_B ⊆ V denote the degeneracy locus
D_B = {v ∈ V : B(v, y) = 0 for all y ∈ V}.
Obviously D_B is a ρ-invariant linear subspace of V. Furthermore, because B(e_s, e_s) = 1 for all s ∈ S, D_B is proper.
We claim that any proper ρ-invariant subspace E of V has to lie in D_B. To this end, we first show that for any s ∈ S, e_s ∉ E. Otherwise, we define S′ = {s ∈ S : e_s ∈ E} and S″ = S − S′; both are non-empty, since S′ = S would force E = V. Since (W, S) is irreducible, there exist s′ ∈ S′ and s″ ∈ S″ with m_{s′s″} ≥ 3, so that B(e_{s′}, e_{s″}) ≠ 0. Then
ρ(s″)(e_{s′}) = e_{s′} − 2B(e_{s′}, e_{s″}) e_{s″} ∈ E
shows that 2B(e_{s′}, e_{s″}) e_{s″}, and hence e_{s″}, has to lie in E. This is a contradiction. Therefore, S′ = ∅. Now, if x ∈ E is arbitrary, then ρ(s)x = x − 2B(x, e_s) e_s ∈ E shows that 2B(x, e_s) e_s ∈ E, hence B(x, e_s) = 0 for all s ∈ S. Therefore, x ∈ D_B.
Now, if we assume that D_B is trivial, then any proper ρ-invariant subspace has to vanish; therefore, ρ has to be irreducible.
On the other hand, if D_B is non-trivial and ρ is completely reducible, then there exists a ρ-invariant proper subspace E ⊂ V such that D_B ⊕ E = V. But we know that E has to lie in D_B in this case; hence we have D_B = V, a contradiction. In other words, ρ is not completely reducible. □
Corollary 2.22. Suppose (W, S) is an irreducible Coxeter system. If B(·, ·) is degenerate, then W is infinite.
Proof. If B is degenerate, then ρ is not completely reducible. But by Maschke's theorem we know that all finite dimensional representations of a finite group are completely reducible. □
2.3 Weyl's Chambers, Tits' Theorem
We focus on the dual ρ* : W → GL(V*) of the canonical representation, where V = R^ℓ, defined as follows: let φ ∈ V* and w ∈ W. Then ρ*(w)(φ) = φ′, where φ′(v) = φ(ρ(w)^{−1} v) for v ∈ V.
For s ∈ S define the hyperplane H_s of s and positive half space A_s of s by
H_s := {φ ∈ V* : φ(e_s) = 0},
A_s := {φ ∈ V* : φ(e_s) > 0}.
Note that Ā_s = A_s ∪ H_s and V* = A_s ⊔ H_s ⊔ (−A_s) = A_s ⊔ H_s ⊔ sA_s.
The fundamental chamber is defined to be
C := ⋂_{s∈S} A_s. (2.23)
Translates wC, w ∈ W, of C are called the chambers of (W, S).
Theorem 2.24. For w ∈ W different from the identity element, wC ∩ C = ∅.
It is clear now that W acts transitively on the set of chambers {wC}_{w∈W}, and by Theorem 2.24 this action is simply transitive.
Lemma 2.25. Suppose S = {s, s′}, and let C = A_s ∩ A_{s′} denote the fundamental chamber. Then for w ∈ W exactly one of the following holds:
1. wC ⊂ A_s and ℓ(sw) = ℓ(w) + 1, or
2. wC ⊂ sA_s and ℓ(sw) = ℓ(w) − 1.
Proof. Let φ_s and φ_{s′} denote the dual basis elements to e_s and e_{s′}. The fundamental chamber C = A_s ∩ A_{s′} consists of the points a φ_s + b φ_{s′} with a, b > 0. Another way to write this is to consider the line segment J = {t φ_s + (1 − t) φ_{s′} : t ∈ (0, 1)}. Then
C = ⋃_{λ ∈ R_{>0}} λ J.
There are two possibilities: a) W is finite, b) W is infinite. We proceed with the first case. In this case W is the symmetry group D_m of a regular m-gon in the plane.
Let us focus on D_4, for when m is arbitrary the situation is similar. If S = {s, s′} is the set of Coxeter generators, then D_4 has 8 elements, listed as
1, s, s′, ss′, s′s, ss′s, s′ss′, ss′ss′.
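The eight elements listed above can be produced concretely (a sketch, not from the notes) by taking s and s′ to be reflections of the plane whose mirrors meet at angle π/4, and collecting all distinct products:

```python
# Sketch: D_4 generated by two reflections with mirrors at angle π/4.
import math

def refl(theta):
    """Reflection across the line through the origin at angle theta."""
    c, d = math.cos(2 * theta), math.sin(2 * theta)
    return ((c, d), (d, -c))

def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)) for i in range(2))

s, sp = refl(0.0), refl(math.pi / 4)
I = ((1.0, 0.0), (0.0, 1.0))

def key(A):
    return tuple(round(x, 9) for row in A for x in row)

elements, frontier = {key(I): ""}, [(I, "")]
while frontier:                      # breadth-first search over words in s, s'
    A, word = frontier.pop(0)
    for M, letter in ((s, "s"), (sp, "s'")):
        B = mul(A, M)
        if key(B) not in elements:
            elements[key(B)] = word + letter
            frontier.append((B, word + letter))

print(len(elements), sorted(elements.values(), key=len))
assert len(elements) == 8
# (ss')^4 = 1: the product ss' is a rotation by a quarter turn.
P = mul(s, sp)
assert key(mul(mul(P, P), mul(P, P))) == key(I)
```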
In Figure 2.1 we use φ_s, φ_{s′} to denote the dual basis to e_s, e_{s′}, where V = Re_s + Re_{s′}. Furthermore, we indicated the positive half spaces A_s and A_{s′} by the shaded blue and shaded grey areas, respectively. In this case, the fundamental chamber C is the simplicial cone with apex at the origin bounded by the rays {(0, y) : y ∈ R_{≥0}} and {(x, x) : x ∈ R_{≥0}}.
In the infinite case, reflecting C with respect to the hyperplanes of s and s′ we find sφ_s = −φ_s + 2φ_{s′} and sφ_{s′} = φ_{s′}. In other words, W acts on the line I through φ_s and φ_{s′}, and the line segment J is moved under this action as in Figure 2.3.
Before I insert the second step of the proof here, I would like to check Bourbaki's exposition. It is pointed out by J. E. Humphreys that one needs to be more careful about the proof presented in Hiller's book: it is not obvious that the restriction of the length function from W to the rank 2 case is the length function of the subgroup.
[Figure 2.1: the half spaces A_s and A_{s′}, the fundamental chamber C = A_s ∩ A_{s′}, and its translates sC, s′C, ss′C, s′sC, ss′sC, s′ss′C, and ss′ss′C = −C.]
[Figure 2.3: Chambers for the infinite dihedral group; the translates J, sJ, s′sJ, … of the segment J inside the line I.]
3 Root systems
Let V be a finite dimensional vector space which is endowed with a positive definite, symmetric bilinear form B(·, ·). A finite spanning subset Φ of V is called a root system if
1. for each α ∈ Φ, s_α acts on Φ (that is, s_α(Φ) = Φ),
2. the only scalar multiple of α ∈ Φ in Φ, other than α itself, is −α.
Elements of Φ are called roots, and orderings on Φ lead to important concepts. To this end, let us construct a total order on Φ.
For any ordered basis v_1, …, v_k of V, if we declare
∑ c_i v_i ≺ ∑ d_i v_i
whenever
c_1 = d_1, c_2 = d_2, …, c_{j−1} = d_{j−1} and c_j < d_j for some j ∈ [k],
we obtain a total ordering ≺ on V. Obviously, the restriction of ≺ to Φ gives a total ordering.
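This is exactly the lexicographic comparison of coordinate tuples; a minimal sketch (Python's built-in tuple comparison does the same thing, but it is spelled out here to match the definition):

```python
# Sketch: lexicographic comparison — scan coordinates until they first differ.

def lex_less(c, d):
    """c ≺ d iff at the first index j where they differ, c[j] < d[j]."""
    for cj, dj in zip(c, d):
        if cj != dj:
            return cj < dj
    return False   # equal vectors are not ≺-related

assert lex_less((1, 0, 5), (1, 2, -7))       # first difference at index 1: 0 < 2
assert not lex_less((1, 2, -7), (1, 0, 5))
assert not lex_less((1, 2), (1, 2))
print("lexicographic comparisons behave as expected")
```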
Once we fix a total ordering ≺ on V, a subset Π ⊂ Φ is called a positive system (with respect to ≺) if Π = {α ∈ Φ : 0 ≺ α}. (In situations where we do not worry about ≺, we like to write Φ^+ in place of Π.) The vectors in −Π are called negative roots and the set −Π is called a negative system.
A subset Δ ⊂ Φ is called a simple system if Δ is a basis for V, and each root α = ∑_{δ∈Δ} c_δ δ has either c_δ ∈ N for all δ ∈ Δ, or −c_δ ∈ N for all δ ∈ Δ.
Notice that it follows from the above discussion that a positive system exists. On the other hand, it is not obvious that a simple system exists. We follow the arguments produced in Humphreys's book for the following critical fact.
Lemma 3.1. Let Δ be a simple system in a root system Φ. Then there exists a unique positive system Π such that Δ ⊂ Π. Conversely, each positive system contains a unique simple system.
Proof. The reason that Π exists is that once we fix a total ordering on the finite basis Δ of V, we can choose Π to be the set of elements of Φ which are ≻ 0 with respect to the lexicographic ordering determined by the ordered basis Δ. Observe that this implies the uniqueness of Π as well.
To prove the converse statement we make the following observation: if Π is a positive system, then let Δ be the set of positive roots that cannot be written as a sum of two (or more) positive roots with positive coefficients. Obviously the set of all nonnegative linear combinations of the elements of Δ includes Π. Hence, its R-span is equal to the R-span of Π.
Next we need to show that Δ = {δ_1, …, δ_k} is a linearly independent set. We will verify below that B(δ_i, δ_j) ≤ 0 for all δ_i ≠ δ_j; assume this for a moment. Suppose ∑_{i∈S} c_i δ_i = 0 for some S ⊆ [k] and non-zero c_i ∈ R, i ∈ S. Separating this relation into two according to the signs of the coefficients, we write
∑_{i∈R} c_i δ_i = ∑_{j∈R′} (−c_j) δ_j,
where R ⊔ R′ = S are two disjoint subsets of S, with c_i > 0 for i ∈ R and c_j < 0 for j ∈ R′. Now, call σ the common value of these two sums. On the one hand,
0 ≤ B(σ, σ) = ∑_{i∈R, j∈R′} c_i (−c_j) B(δ_i, δ_j) ≤ 0,
which implies that σ = 0 (by the positive definiteness of B(·, ·)). On the other hand, σ = ∑_{i∈R} c_i δ_i is a linear combination of positive roots with positive coefficients, hence σ ≻ 0. Thus we obtain a contradiction.
Let us show that B(δ, δ′) ≤ 0 for all δ ≠ δ′ in Δ. If this fails for a pair δ, δ′, then we look at
s_δ(δ′) = δ′ − c δ, where c = 2B(δ′, δ)/B(δ, δ) > 0.
Suppose first that s_δ(δ′) lies in Π, and write s_δ(δ′) = ∑_{γ∈Δ} c_γ γ with all c_γ ≥ 0. Then
(1 − c_{δ′}) δ′ = (c + c_δ) δ + ∑_{γ≠δ,δ′} c_γ γ.
If c_{δ′} ≥ 1, then we have
0 = (c_{δ′} − 1) δ′ + (c + c_δ) δ + ∑_{γ≠δ,δ′} c_γ γ.
This is a contradiction, because a non-negative linear combination of positive roots with at least one positive coefficient (here c + c_δ ≥ c > 0) cannot be zero, by the definition of the total ordering. If c_{δ′} < 1, then dividing by 1 − c_{δ′} expresses δ′ as a sum of positive roots with positive coefficients, contradicting δ′ ∈ Δ. Therefore, s_δ(δ′) cannot be positive. A similar argument applied to −s_δ(δ′) = c δ − δ′ shows that s_δ(δ′) cannot be negative either, which is absurd. Hence B(δ, δ′) ≤ 0.
We record our conclusion from the last part of the proof as follows.
Corollary 3.2. For all simple roots δ ≠ δ′ in Δ, B(δ, δ′) ≤ 0.
Now we know that simple systems exist, and we proceed to analyze the length function.
Definition 3.3. Let W denote the group generated by the simple reflections S := {s_δ : δ ∈ Δ}. Following Hiller, we call (W, S) the Weyl system of (Φ, Δ). For w ∈ W, consider Π_w = Φ^+ ∩ w^{−1}(−Φ^+), the set of positive roots that are mapped to the negatives. We define r(w) to be the cardinality of Π_w.
Lemma 3.4. If w = s_δ for a simple root δ ∈ Δ, then Π_w = {δ}, hence r(w) = 1.
Proof. We claim that for α ≠ δ from Φ^+, s_δ(α) ∈ Φ^+. Since α ≠ δ, we can write α = ∑_{γ∈Δ} c_γ γ with all c_γ ≥ 0 and with c_γ > 0 for at least one γ ≠ δ. Now s_δ(α) = α − cδ for some c ∈ R, so the coefficient of γ in s_δ(α) is still c_γ > 0. Since every root has either all coefficients ≥ 0 or all coefficients ≤ 0, s_δ(α) has to be positive. Therefore δ is the unique positive root that s_δ maps to a negative (indeed s_δ(δ) = −δ), and Π_{s_δ} = {δ}. □
In particular, s_δ permutes Φ^+ − {δ}. Comparing Π_{ws_δ} with s_δ(Π_w), we obtain:
Lemma 3.5. Let w ∈ W and δ ∈ Δ. Then
1. r(ws_δ) = r(w) + 1 if w(δ) ∈ Φ^+, and
2. r(ws_δ) = r(w) − 1 if w(δ) ∈ −Φ^+.
In particular, r(s_{i_1} ⋯ s_{i_l}) ≤ l for any word in the simple reflections.
Theorem 3.6. For all w ∈ W, r(w) = |Π_w| = ℓ(w).
The proof of this theorem is rather tricky but very nice.
Proof. Let w ∈ W. First of all, by induction and the previous two lemmas, we see that r(w) ≤ ℓ(w). Now assume that r(w) < k := ℓ(w).
To come up with a contradiction we set our notation as follows. Given a simple root α_i, let us denote by s_i ∈ S the corresponding Coxeter generator, and re-label the simple roots in such a way that w = s_1 ⋯ s_k is a reduced decomposition for w. Then by Lemma 3.5 Part 1, there exists 1 ≤ j ≤ k − 1 such that
s_1 ⋯ s_j(α_{j+1}) ∈ −Φ^+. (3.7)
Otherwise, start with j = k − 1, so that s_1 ⋯ s_{k−1}(α_k) ∈ Φ^+, and hence r(s_1 ⋯ s_{k−1} s_k) = r(s_1 ⋯ s_{k−1}) + 1. Now proceed with the same argument applied to (s_1 ⋯ s_{k−2})s_{k−1}. If this does not fail at any point, then r(w) has to be equal to k, a contradiction to our assumption.
Let us proceed with expression (3.7). Considering the sequence of roots α_{j+1}, s_j(α_{j+1}), …, s_1 ⋯ s_j(α_{j+1}), which starts positive and ends negative, there exists e ≤ j such that
s_{e+1} ⋯ s_j(α_{j+1}) ∈ Φ^+ and s_e s_{e+1} ⋯ s_j(α_{j+1}) ∈ −Φ^+.
But recall that there exists a unique positive root, namely α_e, that s_e maps into −Φ^+. Therefore, the positive root s_{e+1} ⋯ s_j(α_{j+1}) has to be α_e. Let σ denote s_{e+1} ⋯ s_j. Thus, σ(α_{j+1}) = α_e. By Remark 2.12 we see that
(s_{e+1} ⋯ s_j) s_{j+1} (s_j ⋯ s_{e+1}) = σ s_{j+1} σ^{−1} = s_{σ(α_{j+1})} = s_{α_e} = s_e.
It follows that
s_e s_{e+1} ⋯ s_j = s_{e+1} ⋯ s_j s_{j+1}.
We finish the proof by reaching the following contradiction:
w = s_1 ⋯ s_{e−1} (s_e s_{e+1} ⋯ s_j) s_{j+1} ⋯ s_k
= s_1 ⋯ s_{e−1} (s_{e+1} ⋯ s_j s_{j+1}) s_{j+1} ⋯ s_k
= s_1 ⋯ s_{e−1} s_{e+1} ⋯ s_j s_{j+2} ⋯ s_k,
which has length at most k − 2. □
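Theorem 3.6 can be checked by machine on small examples. The sketch below (not from the notes; the root system of the dihedral group I_2(5) is hard-coded) enumerates W by breadth-first search over products of the two simple reflections, so the depth at which an element first appears is ℓ(w), and compares this with r(w):

```python
# Sketch: verify r(w) = ℓ(w) on the root system of I_2(5) (ten roots in the plane).
import math

m = 5
pos = [(math.cos(k * math.pi / m), math.sin(k * math.pi / m)) for k in range(m)]  # positive roots

def reflect(alpha, y):                   # s_α(y) for a unit root α
    c = 2 * (y[0] * alpha[0] + y[1] * alpha[1])
    return (y[0] - c * alpha[0], y[1] - c * alpha[1])

def act(w, y):                           # apply a word in the simple reflections
    for alpha in reversed(w):
        y = reflect(alpha, y)
    return y

def r(w):                                # positive roots sent to negative roots
    def is_positive(y):
        return any(abs(y[0] - p[0]) + abs(y[1] - p[1]) < 1e-9 for p in pos)
    return sum(0 if is_positive(act(w, b)) else 1 for b in pos)

simple = [pos[0], pos[m - 1]]            # the two simple roots
seen, frontier, depth = {}, [[]], 0      # BFS: first-appearance depth = ℓ(w)
while frontier:
    nxt = []
    for w in frontier:
        key = tuple(round(c, 6) for b in pos for c in act(w, b))
        if key not in seen:
            seen[key] = depth
            assert r(w) == depth         # Theorem 3.6: r(w) = ℓ(w)
            nxt.extend(w + [s] for s in simple)
    frontier, depth = nxt, depth + 1

print(f"checked r(w) = ℓ(w) for all {len(seen)} elements of I_2({m})")
```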
Remark 3.8. A useful corollary of the proof of Theorem 3.6 is the following.
Suppose we have a Weyl system (W, S). Let w ∈ W be an element such that r(w) = k, and let
s_1 ⋯ s_m = w for some m > k (3.9)
be a non-reduced expression for w. (Here, the indexing of the simple reflections does NOT refer to a particular ordering of Δ.) Then there exists a substring s_i s_{i+1} ⋯ s_j (1 ≤ i ≤ j ≤ m − 1) of the non-reduced expression (3.9) of w such that
s_i ⋯ s_j = s_{i+1} ⋯ s_{j+1}.
Remark 3.10. Suppose we have a simple system Δ and an element δ ∈ Δ. Let us order Δ in such a way that Δ = {δ_1, …, δ_k} with δ = δ_1. Since s_δ(δ_1) = −δ_1 and s_δ(δ_j) = δ_j − c_j δ_1 for j ≥ 2 (where c_j = 2B(δ_j, δ_1)/B(δ_1, δ_1)), the matrix of s_δ with respect to this ordered basis is
⎛ −1 −c_2 −c_3 ⋯ −c_k ⎞
⎜ 0 1 0 ⋯ 0 ⎟
⎜ 0 0 1 ⋯ 0 ⎟
⎜ ⋮ ⋮ ⋮ ⋱ ⋮ ⎟
⎝ 0 0 0 ⋯ 1 ⎠
Therefore, det s_δ = −1.
Theorem 3.11. The Weyl system (W, S) of (Φ, Δ) is a Coxeter system.
Proof. It is enough to show that all relations between the generators s_δ, δ ∈ Δ, are consequences of relations of the form (s_δ s_{δ′})^{m_{δδ′}} = 1. To this end, suppose that
s_1 ⋯ s_k = 1 (3.12)
is a relation of minimal length, where s_i = s_{δ_i} for some δ_i ∈ Δ.
Set w = s_1 ⋯ s_k = 1. By Remark 3.10 we know that det(w) = (−1)^k. On the other hand w = 1. Therefore, k has to be even; k = 2m.
Let us re-write the relation (3.12) in another form:
s_1 ⋯ s_{m+1} = s_{2m} ⋯ s_{m+2}.
Since the right hand side of the equation has m − 1 terms, ℓ(s_1 ⋯ s_{m+1}) < m + 1.
By Remark 3.8 we know that there exist 1 ≤ i ≤ j ≤ m such that
s_{i+1} ⋯ s_{j+1} = s_i ⋯ s_j. (3.13)
Once we substitute (3.13) into relation (3.12), in order not to contradict the minimality of its length, we see that we have to have i = 1 and j = m, and therefore,
s_1 s_2 ⋯ s_m = s_2 ⋯ s_{m+1}. (3.14)
On the other hand, starting with the equivalent relation s_2 ⋯ s_k s_1 = 1, by the same arguments as above, we see that
s_2 s_3 ⋯ s_{m+1} = s_3 ⋯ s_{m+2}. (3.15)
It follows from (3.15) that we have s_2 ⋯ s_{m+1} s_{m+2} s_{m+1} ⋯ s_3 = 1, or equivalently,
s_3 s_2 s_3 s_4 ⋯ s_{m+1} s_{m+2} s_{m+1} ⋯ s_4 = 1.
Splitting this relation in the middle, and proceeding just as before, we obtain s_3 s_2 s_3 s_4 ⋯ s_m = s_2 s_3 ⋯ s_{m+1}. Combined with (3.14), we see that s_1 = s_3. Obviously we can repeat this argument after we cyclically permute the factors, and conclude s_2 = s_4, s_3 = s_5, s_4 = s_6, and so on. Therefore, the relation we started with, s_1 ⋯ s_k = 1, is nothing but (s_1 s_2)(s_1 s_2) ⋯ (s_1 s_2) = 1, which is a Coxeter relation. Hence the proof is finished. □
Recall that in Tits's theorem we consider the fundamental domain C for the action of W. The combined outcome of the results we have so far is that the following sets are in bijection with each other:
1. bases for Φ,
2. the set of chambers of Φ,
3. W.
In particular, there exists a unique element w_0 ∈ W such that w_0 Δ = −Δ. Then w_0 Φ^+ = −Φ^+, hence ℓ(w_0) = |Φ^+|. It is easy to prove that
ℓ(w_0 w) = ℓ(w w_0) = ℓ(w_0) − ℓ(w). (3.16)
Proposition 3.17. Let s_1 s_2 ⋯ s_k = w ∈ W be a reduced decomposition for some simple generators s_i = s_{δ_i} ∈ S with corresponding simple roots δ_i ∈ Δ. Let β_i denote the root s_1 ⋯ s_{i−1}(δ_i), i = 1, …, k. Then the following sets are all equal:
i. Π_{w^{−1}} = Φ^+ ∩ w(−Φ^+),
ii. {β_i : i = 1, …, k},
iii. {β ∈ Φ^+ : s_β (s_1 ⋯ ŝ_i ⋯ s_k) = w, for some i}. (Here, ŝ_i means that we omit s_i from the expression.)
Proof. (i. ⊆ ii.) Let β be a positive root that is mapped to a negative root by w^{−1} = s_k ⋯ s_1, and choose i minimal such that s_i ⋯ s_1(β) ∈ −Φ^+, so that s_{i−1} ⋯ s_1(β) ∈ Φ^+. Since there exists a unique positive root that s_i maps to a negative, namely δ_i, we see that δ_i = s_{i−1} ⋯ s_1(β). Therefore, β = s_1 ⋯ s_{i−1}(δ_i) = β_i.
(ii. ⊆ iii.) Let us compute s_{β_i}(s_1 ⋯ ŝ_i ⋯ s_k):
s_{β_i}(s_1 ⋯ ŝ_i ⋯ s_k) = s_{s_1 ⋯ s_{i−1}(δ_i)}(s_1 ⋯ ŝ_i ⋯ s_k)
= (s_1 ⋯ s_{i−1}) s_i (s_{i−1} ⋯ s_1)(s_1 ⋯ s_{i−1} s_{i+1} ⋯ s_k)
= (s_1 ⋯ s_{i−1}) s_i (s_{i+1} ⋯ s_k) = w.
(iii. ⊆ i.) Note that an element β of {β ∈ Φ^+ : s_β(s_1 ⋯ ŝ_i ⋯ s_k) = w, for some i} is uniquely determined by which s_i we omit from the expression, since s_β = w(s_1 ⋯ ŝ_i ⋯ s_k)^{−1}. Therefore, this set has at most k elements. On the other hand, the lengths of w and w^{−1} are the same, so by Theorem 3.6, |Π_{w^{−1}}| = k. Since we have the chain of inclusions i. ⊆ ii. ⊆ iii., it follows that all three sets are of the same cardinality k, hence they are equal. □
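A small illustration of Proposition 3.17 (not from the notes): in type A_2, with roots written as pairs (i, j) standing for e_i − e_j, the roots β_i of a reduced word for the longest element of S_3 recover exactly the inversion set Π_{w^{−1}}:

```python
# Sketch: β_i = s_1 ⋯ s_{i−1}(δ_i) versus the inversion set, in type A_2.

def apply(perm, root):                 # w(e_i − e_j) = e_{w(i)} − e_{w(j)}
    return (perm[root[0]], perm[root[1]])

def compose(p, q):                     # (p∘q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(len(p)))

s = {1: (1, 0, 2), 2: (0, 2, 1)}       # s_1 = (0 1), s_2 = (1 2) on {0, 1, 2}
alpha = {1: (0, 1), 2: (1, 2)}         # simple roots δ_1 = e_0 − e_1, δ_2 = e_1 − e_2

word = [1, 2, 1]                       # a reduced word for the longest element of S_3
prefix = (0, 1, 2)                     # identity
betas = []
for i in word:
    betas.append(apply(prefix, alpha[i]))   # β_i = s_1 ⋯ s_{i−1}(δ_i)
    prefix = compose(prefix, s[i])

w_inv = prefix                          # w = w_0 is an involution, so w^{-1} = w
positives = [(i, j) for i in range(3) for j in range(3) if i < j]
inversion_set = [b for b in positives if apply(w_inv, b)[0] > apply(w_inv, b)[1]]
print(sorted(betas), sorted(inversion_set))   # the two sets coincide
```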
Remark 3.18. Notice that Proposition 3.17 shows that the set {s_1 ⋯ s_{i−1}(δ_i) : i = 1, …, k} is independent of the reduced decomposition s_1 ⋯ s_k of w we started with, because it is equal to Π_{w^{−1}}. Furthermore, the reducedness of the expression s_1 s_2 ⋯ s_k = w ∈ W in the hypothesis is used only at the very last step, while showing that the cardinalities of these sets are the same.
Theorem 3.19 (Matsumoto's exchange condition). If (W, S) is a finite Coxeter group, w = s_1 ⋯ s_q is a reduced decomposition (where s_i = s_{δ_i} for some δ_i ∈ Δ), and ℓ(sw) < ℓ(w) for some s = s_δ ∈ S, then there exists a unique index i such that sw = s_1 ⋯ ŝ_i ⋯ s_q.
Proof. Set w′ = sw and suppose w′ = s′_1 ⋯ s′_p (p < q) is a reduced decomposition for some simple generators s′_i ∈ S. Then s s′_1 ⋯ s′_p = w, and since p + 1 ≤ q = ℓ(w), this is a reduced decomposition. Therefore, by Proposition 3.17 applied to this reduced expression, we see that δ = β_1 for it, hence δ ∈ Π_{w^{−1}}. On the other hand, by Remark 3.18 (or by the proof of the Proposition),
Π_{w^{−1}} ⊆ {β ∈ Φ^+ : s_β(s_1 ⋯ ŝ_i ⋯ s_q) = w, for some i}.
Therefore s_δ(s_1 ⋯ ŝ_i ⋯ s_q) = w for some i; that is, sw = s_1 ⋯ ŝ_i ⋯ s_q.
For uniqueness, if there are i < j such that
sw = s_1 ⋯ ŝ_i ⋯ s_q = s_1 ⋯ ŝ_j ⋯ s_q,
and w = s_1 ⋯ s_q is reduced, then cancelling the common prefix s_1 ⋯ s_{i−1} and suffix s_{j+1} ⋯ s_q we see that s_{i+1} ⋯ s_j = s_i ⋯ s_{j−1}. Multiplying on the left by s_i gives s_i s_{i+1} ⋯ s_j = s_{i+1} ⋯ s_{j−1}, so replacing the segment s_i ⋯ s_j of w by s_{i+1} ⋯ s_{j−1} shortens the expression by two. This contradicts the reducedness of s_1 ⋯ s_q. □
2(α, β)/(β, β) ∈ Z for all α, β ∈ Φ.
The fundamental (dominant) weight ω_δ, for δ ∈ Δ, is defined by
(ω_δ, δ′) = 1 if δ′ = δ, and (ω_δ, δ′) = 0 if δ′ ≠ δ,
for all δ′ ∈ Δ. Since Δ is a basis for V, it is clear that the weights ω_δ, δ ∈ Δ, form a basis for V as well.
For a subset I ⊆ S, set
C_I := (⋂_{s∈I} H_s) ∩ (⋂_{s∉I} A_s),
so that the closure of the fundamental chamber decomposes as C̄ = ⊔_{I⊆S} C_I, and let W_I := ⟨I⟩ denote the subgroup generated by I.
Theorem. If wC_I ∩ w′C_J is non-empty, then wW_I = w′W_J and I = J.
Proof. Observe that wC_I ∩ w′C_J ≠ ∅ if and only if w′^{−1}wC_I ∩ C_J ≠ ∅, and wW_I = w′W_J with I = J if and only if w′^{−1}wW_I = W_J. Therefore, we do not lose anything by assuming w′ to be the identity element of W.
Now we induct on the length of w. It is clear that if ℓ(w) = 0, then w is the identity element, and C_I ∩ C_J ≠ ∅ if and only if I = J, and hence W_I = W_J. Assume that the result is true for all w ∈ W with ℓ(w) < k, and let us prove it for ℓ(w) = k. To this end, let w ∈ W be an element of length k and suppose that wC_I ∩ C_J is non-empty. We would like to conclude that w ∈ W_I and I = J.
Since W_I is a subgroup, it decomposes W into a union of right cosets. Suppose w lies in the right coset W_I u for some u ∈ W, so that w = w′u for some w′ ∈ W_I; since W is finite, we choose u to be of minimal possible length. Since w′ ∈ W_I, there exists s ∈ I such that ℓ(sw′) = ℓ(w′) − 1, and then ℓ(sw) = ℓ(sw′u) = ℓ(w) − 1.
Recall from Lemma 2.25 that ℓ(sw) = ℓ(w) + 1 holds if and only if wC ⊂ A_s; here, therefore, wC ⊂ sA_s. By taking closures we see that
wC_I ∩ C_J ⊆ wC̄ ∩ C̄ ⊆ sĀ_s ∩ Ā_s = H_s.
It follows that if wC_I ∩ C_J is non-empty, then C_J intersects H_s non-trivially. But by the definition of C_J this is possible if and only if C_J is contained in H_s, that is, s ∈ J. Now we re-write wC_I ∩ C_J:
wC_I ∩ C_J = s(wC_I ∩ C_J) = swC_I ∩ sC_J = swC_I ∩ C_J.
Therefore, by the induction assumption we have that sw ∈ W_I, hence w ∈ W_I, and also I = J. □
Let G be a linear algebraic group with maximal torus T and Lie algebra g. The action of T on g decomposes it into simultaneous eigenspaces: there is a finite set R(G, T) of characters r : T → C^* such that
g = t ⊕ ⨁_{r ∈ R(G,T)} g_r, where g_r = {v ∈ g : t · v = r(t) v for all t ∈ T}.
Furthermore, the g_r's are all one dimensional.
Remark 3.25. 1. Here t is the 0-eigenspace for the action of T.
2. We could formulate the above action of T using its Lie algebra, and in that context R(g, t) is a set of linear functionals on t.
3. R(G, T) is in fact an algebraically independent set of functions in the algebra of regular functions (the coordinate ring) on T.
The above set of characteristic functions R(G, T) forms a root system. Indeed, let X^*(T) denote the free abelian group generated by the characteristic functions. Let E denote the Euclidean space X^*(T) ⊗_Z R. Checking that R(G, T) satisfies the axioms of a root system in E is easy. (Alternatively, we may replace each character r : T → C^* by its differential.)
The Weyl group W of an algebraic group G is defined to be the quotient W = N_G(T)/Z_G(T), where N_G(T) is the normalizer of the maximal torus T in G and Z_G(T) is the centralizer of the maximal torus. We know that if G is reductive, then Z_G(T) = T.
A Borel subgroup B of a linear algebraic group G is a maximal closed, connected, solvable subgroup.
Remark 3.26. It follows from the definition of reductivity that, in a reductive group, Borel subgroups are never normal. In other words, the conjugation action of G on the set of Borel subgroups has no fixed points. There are two more important facts that we would like to mention:
1. the conjugation action of G on the set of all Borel subgroups is transitive,
2. the set of all Borel subgroups of G has a projective variety structure.
One of the most important facts about Borel subgroups in linear algebraic groups is that the B × B-orbits on G are organized with respect to the combinatorics of the Weyl group.
Theorem 3.27 (Bruhat-Chevalley decomposition). Let G be a reductive linear algebraic group, and let B ⊂ G be a Borel subgroup. Then
G = ⊔_{w∈W} B ẇ B,
where W is the Weyl group of G, and for w ∈ W, ẇ ∈ N_G(T) is a representative of w.
There is an important extension of the Bruhat-Chevalley decomposition to certain semigroups.
Definition 3.28. A closed submonoid M ⊆ Mat_n is called a linear algebraic monoid.
Since the determinant det : Mat_n → C is a morphism, the group of invertible elements of a linear algebraic monoid forms a linear algebraic group. We call a linear algebraic monoid reductive if its group of invertible elements is reductive.
Theorem 3.29 (Bruhat-Chevalley-Renner decomposition). Let M be a reductive linear algebraic monoid with unit group G, and let B ⊂ G be a Borel subgroup. Then
M = ⊔_{r∈R} B ṙ B,
where R = N̄_G(T)/T, the quotient of the Zariski closure of N_G(T) in M.
The parametrizing object R in the above theorem is called the Renner monoid of M. Essentially, it plays the same role for M that W plays for G. We list some well-known facts about R:
1. R is a finite semigroup,
2. W ⊂ R is the group of invertible elements,
3. R is an inverse semigroup; for each r ∈ R, there exists a unique r′ ∈ R such that rr′r = r and r′rr′ = r′.
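A standard example (sketch, not from the notes): for the monoid M = Mat_n of all n × n matrices, the Renner monoid is the rook monoid of 0/1 matrices with at most one 1 in each row and column, and the unique r′ of item 3 is the transpose:

```python
# Sketch: the rook monoid for n = 2, and the inverse-semigroup property r' = rᵀ.
from itertools import product

n = 2
def ok(m):
    rows = all(sum(m[i][j] for j in range(n)) <= 1 for i in range(n))
    cols = all(sum(m[i][j] for i in range(n)) <= 1 for j in range(n))
    return rows and cols

rooks = [m for m in (tuple(tuple(bits[i * n:(i + 1) * n]) for i in range(n))
                     for bits in product((0, 1), repeat=n * n)) if ok(m)]

def mul(A, B):
    return tuple(tuple(min(1, sum(A[i][k] * B[k][j] for k in range(n)))
                       for j in range(n)) for i in range(n))

def T(A):
    return tuple(tuple(A[j][i] for j in range(n)) for i in range(n))

print(len(rooks))  # 7 rook matrices for n = 2
assert all(mul(mul(r, T(r)), r) == r and mul(mul(T(r), r), T(r)) == T(r) for r in rooks)
```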
The geometric Bruhat-Chevalley ordering (group case) is defined as follows. Let w, w′ ∈ W be two elements. Then
w ≤ w′ ⟺ B ẇ B is contained in the Zariski closure of B ẇ′ B, (3.30)
where ẇ and ẇ′ are representatives of w and w′ in N_G(T). The geometric Bruhat-Chevalley-Renner ordering (monoid case) on R is an extension of the previous case:
r ≤ r′ ⟺ B ṙ B is contained in the Zariski closure of B ṙ′ B, (3.31)
where ṙ and ṙ′ are representatives of r and r′ ∈ R in N̄_G(T).
3.4 Parabolic Subgroups via Geometry (to be re-written)
Important facts:
1. When a Borel subgroup B acts on a projective variety X, there exists a point x ∈ X such that B · x = x. This fact is known as the Lie-Kolchin theorem, and its use leads to great results.
2. GIT tells us that the orbit G · x of a linear algebraic group G is projective (or (quasi)affine) if and only if G/H is projective (resp. (quasi)affine), where H is the stabilizer of a point of the orbit.
We call a closed subgroup P ⊆ G parabolic if it contains a Borel subgroup.
Theorem 3.32. The quotient space G/P is a projective variety if and only if P is parabolic.
Proof. Let B ⊂ G be a Borel subgroup. Suppose G/P is projective. Then there exists a B-fixed point xP in G/P. In other words, BxP = xP, hence x^{−1}Bx ⊆ P. Therefore, P contains a Borel subgroup.
Conversely, start with a faithful representation V of G. Then the induced action on the flag variety of V has a closed orbit, and its stabilizer is a Borel subgroup B; since P contains a Borel subgroup, G/P is the image of the projective variety G/B under the natural morphism, hence projective. □
Consider the involution θ of SL_{2n} given by θ(x) = J (xᵀ)^{−1} J^{−1}, where J is the block-diagonal matrix having n 2 × 2 blocks of the form
⎛ 0 1 ⎞
⎝ −1 0 ⎠.
It is easy to check that θ maps a diagonal matrix
x = diag(x_1, x_2, …, x_{2n−1}, x_{2n})
to
θ(x) = diag(x_2^{−1}, x_1^{−1}, x_4^{−1}, x_3^{−1}, …, x_{2n}^{−1}, x_{2n−1}^{−1}).
Therefore, the subgroup T ⊂ SL_{2n} of all invertible diagonal matrices is a θ-stable maximal torus, and the subgroup S ⊂ T of elements of the form
diag(x_1, x_1, x_3, x_3, …, x_{2n−1}, x_{2n−1})
is a split torus: θ acts on S by inversion.
For i = 1, …, 2n, let d_i : T → C^* denote the character sending a diagonal matrix to its i-th diagonal entry. The positive roots are
Φ^+ = {d_i − d_j : 1 ≤ i < j ≤ 2n},
and the simple roots are
Δ = {α_i = d_i − d_{i+1} : 1 ≤ i ≤ 2n − 1}.
Since the induced action of θ on the coordinate functions is given by (θ · d_i)(x) = d_i(θ(x)), we compute:
θ · d_i = −d_{i−1} if i = 2k, and θ · d_i = −d_{i+1} if i = 2k − 1;
hence, assuming i + 1 < j, we have
θ(d_i − d_j) = −d_{i−1} + d_{j−1} if i = 2k and j = 2l,
θ(d_i − d_j) = −d_{i−1} + d_{j+1} if i = 2k and j = 2l − 1,
θ(d_i − d_j) = −d_{i+1} + d_{j−1} if i = 2k − 1 and j = 2l,
θ(d_i − d_j) = −d_{i+1} + d_{j+1} if i = 2k − 1 and j = 2l − 1.
When i + 1 = j, we have
θ(α_i) = −(α_{i−1} + α_i + α_{i+1}) = −d_{i−1} + d_{i+2} if i = 2k,
θ(α_i) = α_i = d_i − d_{i+1} if i = 2k − 1.
It is easy to see now that the only positive roots that are fixed by θ are the simple roots α_i where i is an odd number; denote this set by Δ_1. Moreover, θ(Φ^+ − Δ_1) ⊆ −Φ^+.
Therefore, (T, B) is a split pair, with
Δ − Δ_1 = {α_2, α_4, …, α_{2n−2}} and Δ_1 = {α_1, α_3, …, α_{2n−1}}.