
Lectures on Quadratic Forms

By
C.L. Siegel
Tata Institute of Fundamental Research, Bombay
1957
(Reissued 1967)
Lectures on Quadratic Forms

By
C.L. Siegel

Notes by
K. G. Ramanathan

No part of this book may be reproduced in any form by print, microfilm or any other means without written permission from the Tata Institute of Fundamental Research, Colaba, Bombay 5

Tata Institute of Fundamental Research, Bombay
1955-56
(Reissued 1967)
Contents

1 Vector groups and linear inequalities
  1 Vector groups
  2 Lattices
  3 Characters
  4 Diophantine approximations

2 Reduction of positive quadratic forms
  1 Quadratic forms
  2 Minima of definite forms
  3 Half reduced positive forms
  4 Two auxiliary regions
  5 Space of reduced matrices
  6 Binary forms
  7 Reduction of lattices

3 Indefinite quadratic forms
  1 Discontinuous groups
  2 The H-space of a symmetric matrix
  3 Geometry of the H-space
  4 Reduction of indefinite quadratic forms
  5 Binary forms

4 Analytic theory of indefinite quadratic forms
  1 The theta series
  2 Proof of a lemma
  3 Transformation formulae
  4 Convergence of an integral
  5 A theorem in integral calculus
  6 Measure of unit group and measure of representation
  7 Integration of the theta series
  8 Eisenstein series
  9 Main Theorem
  10 Remarks
Chapter 1
Vector groups and linear inequalities
1 Vector groups
Let K be the field of real numbers and V a vector space of dimension n over K. Let us denote elements of V by small Greek letters and elements of K by small Latin letters. The identity element of V will be denoted by 0 and will be called the zero element of V. We shall also denote by 0 the zero element in K.

Let $\xi_1, \dots, \xi_n$ be a base of V so that for any $\xi \in V$
$$\xi = \sum_i \xi_i x_i, \quad x_i \in K.$$
We call $x_1, \dots, x_n$ the coordinates of $\xi$. Suppose $\eta_1, \dots, \eta_n$ is another basis of V; then
$$\eta_i = \sum_j \xi_j a_{ji}, \quad i = 1, \dots, n,$$
where $a_{ji} \in K$ and the matrix $M = (a_{ji})$ is non-singular. If in terms of $\eta_1, \dots, \eta_n$
$$\xi = \sum_i \eta_i y_i, \quad y_i \in K,$$
then it is easy to see that
$$\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = M \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}. \tag{1}$$
Suppose $\alpha_1, \dots, \alpha_m$ is any finite set of elements of V. We denote by $L(\alpha_1, \dots, \alpha_m)$ the linear subspace generated in V by $\alpha_1, \dots, \alpha_m$. This means that $L(\alpha_1, \dots, \alpha_m)$ is the set of elements of the form
$$\alpha_1 x_1 + \dots + \alpha_m x_m, \quad x_i \in K.$$
It is clear that $L(\alpha_1, \dots, \alpha_m)$ has dimension $\leq \operatorname{Min}(n, m)$.
Let $R^n$ denote the Euclidean space of n dimensions, so that every point P in $R^n$ has coordinates $x_1, \dots, x_n$, $x_i \in K$. Let $\xi_1, \dots, \xi_n$ be a basis of V and let $x_1, \dots, x_n$ be the coordinates of $\xi$ in V with regard to this basis. Make correspond to $\xi$ the point in $R^n$ with coordinates $x_1, \dots, x_n$. It is then easily seen that this correspondence is (1, 1). For any $\xi \in V$ define the absolute value $|\xi|$ by
$$|\xi|^2 = \sum_{i=1}^n x_i^2$$
where $x_1, \dots, x_n$ are the coordinates of $\xi$. Then $|\ |$ satisfies the axioms of a distance function in a metric space. We introduce a topology in V by prescribing a fundamental system of neighbourhoods of $\xi$ to be the sets $\{S_d\}$, where $S_d$ is the set of $\eta$ in V with
$$|\xi - \eta| < d. \tag{2}$$
$S_d$ is called a sphere of radius d and centre $\xi$. The topology above makes V a locally compact abelian group. The closure $\bar{S}_d$ of $S_d$ is a compact set. From (1), it follows that the topologies defined by different bases of V are equivalent.
A subgroup G of V is called a vector group. The closure $\bar{G}$ of G in V is again a vector group. We say that G is discrete if G has no limit points in V. Clearly therefore a discrete vector group is closed.

Suppose G is discrete; then there is a neighbourhood of zero which has no elements of G not equal to zero in it. For, if in every neighbourhood of zero there exists an element of G, then zero is a limit point of G in V. This contradicts the discreteness of G. Since G is a group, it follows that all elements of G are isolated in V. As a consequence we see that every compact subset of V has only finitely many elements of G in it.

We now investigate the structure of discrete vector groups. We shall omit the completely trivial case when the vector group G consists only of the zero element.
Let $G \neq \{0\}$ be a discrete vector group. Let $\alpha \neq 0$ be an element of G. Consider the intersection
$$G_1 = G \cap L(\alpha).$$
Let $d > 0$ be a large real number and consider all the $y > 0$ for which $\alpha y$ is in $G_1$ and $y \leq d$. If d is large, then this set is not empty. Because G is discrete, it follows that there are only finitely many y with this property. Let $q > 0$ therefore be the smallest real number such that $\alpha_1 = \alpha q \in G_1$. Let $\beta = \alpha x$ be any element in $G_1$. Put $x = hq + k$ where h is an integer and $0 \leq k < q$. Then $\alpha x$ and $\alpha_1 h$ are in $G_1$ and so $\alpha k$ is in $G_1$. But from the definition of q it follows that $k = 0$, or
$$\beta = \alpha_1 h, \quad h \text{ an integer}.$$
This proves that
$$G_1 = \{\alpha_1\},$$
the infinite cyclic group generated by $\alpha_1$.
If in G there are no elements other than those in $G_1$, then $G = G_1$. Otherwise let us assume as induction hypothesis that in G we have found $m\ (\leq n)$ elements $\alpha_1, \dots, \alpha_m$ which are linearly independent over K and such that $G \cap L(\alpha_1, \dots, \alpha_m)$ consists precisely of elements of the form
$$\alpha_1 g_1 + \dots + \alpha_m g_m$$
where $g_1, \dots, g_m$ are integers. This means that
$$G_m = G \cap L(\alpha_1, \dots, \alpha_m) = \{\alpha_1\} + \dots + \{\alpha_m\}$$
is the direct sum of m infinite cyclic groups. If in G there exist no other elements than in $G_m$ then $G = G_m$. Otherwise let $\beta \in G$, $\beta \notin G_m$. Put
$$G_{m+1} = G \cap L(\alpha_1, \dots, \alpha_m, \beta).$$
Consider the elements in $G_{m+1} \subset G$ of the form
$$\gamma = \alpha_1 x_1 + \dots + \alpha_m x_m + \beta y, \quad x_i \in K,$$
where $y \neq 0$ and $|y| \leq d$ with d a large positive real number. This set C of elements is not empty since it contains $\beta$. Put now $x_i = g_i + k_i$ where $g_i$ is an integer and $0 \leq k_i < 1$, $i = 1, \dots, m$. Let $\delta = \alpha_1 g_1 + \dots + \alpha_m g_m$; then $\delta \in G_m$ and so
$$\gamma - \delta = \alpha_1 k_1 + \dots + \alpha_m k_m + \beta y$$
is an element of $G_{m+1}$. Thus for every $\gamma \in G_{m+1}$ there exists a $\delta \in G$ with the property
$$\gamma - \delta = \alpha_1 k_1 + \dots + \alpha_m k_m + \beta y,$$
$0 \leq k_i < 1$, $|y| \leq d$. Thus all these $\gamma - \delta$ lie in a closed sphere of radius $(m + d^2)^{1/2}$. Since G is discrete, this point set has to be finite. Thus for the $\gamma$'s in G the y can take only finitely many values.
Therefore let $q > 0$ be the smallest value of $|y|$ for which $\alpha_{m+1} = \alpha_1 t_1 + \dots + \alpha_m t_m + \beta q$ is in G. Let
$$\gamma = \alpha_1 x_1 + \dots + \alpha_m x_m + \beta y$$
be in $G_{m+1}$. Put $y = qh + k$ where h is an integer and $0 \leq k < q$. Then
$$\gamma - \alpha_{m+1} h = \alpha_1 (x_1 - t_1 h) + \dots + \alpha_m (x_m - t_m h) + \beta k$$
is in $G_{m+1}$. By the definition of q, $k = 0$. But in that case, by the induction hypothesis, $x_i - t_i h = h_i$ is an integer. Thus
$$\gamma = \alpha_1 h_1 + \dots + \alpha_m h_m + \alpha_{m+1} h,$$
$h_1, \dots, h$ integers. This proves that
$$G_{m+1} = \{\alpha_1\} + \dots + \{\alpha_{m+1}\}$$
is a direct sum of $m + 1$ infinite cyclic groups.

We can continue this process now, but not indefinitely, since $\alpha_1, \dots, \alpha_{m+1}, \dots$ are linearly independent. Thus after $r \leq n$ steps the process ends. We have hence the

Theorem 1. Every discrete vector group $G \neq \{0\}$ in V is a direct sum of r infinite cyclic groups, $0 < r \leq n$.

Conversely the direct sum of infinite cyclic groups is a discrete vector group. We have thus obtained the structure of all discrete vector groups.
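Theorem 1 also shows what failure of discreteness looks like. In $V = K$ (so $n = 1$) the vector group generated by 1 and $\sqrt{2}$ is not discrete: the elements $k\sqrt{2} - h$ come arbitrarily close to zero. A small numerical sketch of this (ours, not part of the notes; plain Python):

    import math

    # G = {h + k*sqrt(2) : h, k integers} in R is not discrete:
    # elements of G get arbitrarily close to 0.
    best = 1.0
    for k in range(1, 100000):
        h = round(k * math.sqrt(2))
        dist = abs(k * math.sqrt(2) - h)   # |k*sqrt(2) - h|, an element of G
        if dist < best:
            best = dist
            print(f"k = {k:6d}, h = {h:6d}, |k*sqrt(2) - h| = {dist:.3e}")

By contrast, the group generated by $\sqrt{2}$ alone is infinite cyclic and discrete.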
We shall now study the structure of all closed vector groups.

Let G be a closed vector group. Let $S_d$ be a sphere of radius d with the zero element of G as centre. Let $r(d)$ be the maximum number of elements of G which are linearly independent and which lie in $S_d$. Clearly $r(d)$ satisfies
$$0 \leq r(d) \leq n.$$
Also $r(d)$ is an increasing function of d and, since it is integral valued, it tends to a limit when $d \to 0$. So let
$$r = \lim_{d \to 0} r(d).$$
This means that there exists a $d_0 > 0$ such that for $d \leq d_0$, $r = r(d)$. We call r the rank of G.

Clearly $0 \leq r \leq n$. Suppose $r = 0$; then we maintain that G is discrete; for if not, there exists a sequence $\beta_1, \dots, \beta_n, \dots$ of elements of G with a limit point in V. Then the differences $\{\beta_k - \beta_l\}$, $k \neq l$, will form a set of elements of G with zero as a limit point, and so in every neighbourhood of zero there will be elements of G, which will mean that $r > 0$.

Conversely, if G is discrete there exists a sphere $S_d$, $d > 0$, which does not contain any point of G not equal to zero. This means $r = 0$. Hence
$$r = 0 \Leftrightarrow G \text{ is discrete}.$$
Let therefore $r > 0$, so that G is not discrete. Let d be a real number, $0 < d < d_0$, so that $r(d) = r$. Let $S_d$ be a sphere around the zero element of G and of radius d. Let $\alpha_1, \dots, \alpha_r$ be elements of G in $S_d$ which are linearly independent. Let $t > 0$ be any real number and let $d_1 > 0$ be chosen so that $d_1 < \operatorname{Min}(d, t/n)$. Then $r(d_1) = r$. If $\beta_1, \dots, \beta_r$ be elements of G which are linearly independent and which are contained in the sphere $S_{d_1}$ around the zero element of G, then $L(\beta_1, \dots, \beta_r) \subset L(\alpha_1, \dots, \alpha_r)$ since $S_{d_1} \subset S_d$. But since both have dimension r,
$$L(\beta_1, \dots, \beta_r) = L(\alpha_1, \dots, \alpha_r).$$
Since $\beta_1, \dots, \beta_r$ are in $S_{d_1}$ we have
$$|\beta_i| \leq d_1 \leq \frac{t}{n}, \quad i = 1, \dots, r.$$
Let $\xi \in L(\alpha_1, \dots, \alpha_r)$. Then by the above
$$\xi = \beta_1 x_1 + \dots + \beta_r x_r.$$
Put $x_i = g_i + k_i$ where $g_i$ is an integer and $0 \leq k_i < 1$. Put $\eta = \beta_1 g_1 + \dots + \beta_r g_r$. Since $\beta_1, \dots, \beta_r \in G$, $\eta$ will also be in G. Now
$$|\xi - \eta| = |\beta_1 k_1 + \dots + \beta_r k_r| \leq |\beta_1| k_1 + \dots + |\beta_r| k_r < \frac{t}{n} \cdot n = t.$$
Since t is arbitrary, it means that in every neighbourhood of $\xi$ there are elements of G. Hence $\xi \in \bar{G}$. But G is closed and $\xi$ is arbitrary in $L(\alpha_1, \dots, \alpha_r)$. Thus
$$L(\alpha_1, \dots, \alpha_r) \subset G.$$
We have now two possibilities: $r = n$ or $r < n$. If $r = n$ then $V = L(\alpha_1, \dots, \alpha_r) \subset G \subset V$, which means $G = V$. So let $r < n$. Complete $\alpha_1, \dots, \alpha_r$ into a basis $\alpha_1, \dots, \alpha_n$ of V. In terms of this basis, any $\gamma \in G$ may be written
$$\gamma = \alpha_1 x_1 + \dots + \alpha_n x_n, \quad x_i \in K.$$
But $\xi = \alpha_1 x_1 + \dots + \alpha_r x_r$ is an element of $L(\alpha_1, \dots, \alpha_r)$ and so of G. Thus
$$\eta = \gamma - \xi = \alpha_{r+1} x_{r+1} + \dots + \alpha_n x_n$$
is in G. Also $\eta \in L(\alpha_{r+1}, \dots, \alpha_n)$. It is to be noted that $\gamma$ determines $\eta$ uniquely. The $\eta$'s that arise in this manner clearly form a vector group contained in $L(\alpha_{r+1}, \dots, \alpha_n)$ and isomorphic to the factor group $G / L(\alpha_1, \dots, \alpha_r)$. We contend that this subgroup of $\eta$'s is discrete. For, if not, let $\eta_1, \dots$ be a sequence of its elements with a limit point in V. Then in every arbitrary neighbourhood of zero there are elements of the set $\{\eta_k - \eta_l\}$, $k \neq l$. Since $\eta_k - \eta_l$ is an element of $L(\alpha_{r+1}, \dots, \alpha_n)$, this means that the rank of G is $\geq r + 1$. This contradiction proves our contention.

Using theorem 1, it follows that there exist s elements $\beta_1, \dots, \beta_s$ in G such that every $\gamma \in G$ can be written uniquely in the form
$$\gamma = \alpha_1 x_1 + \dots + \alpha_r x_r + \beta_1 g_1 + \dots + \beta_s g_s \tag{$*$}$$
where $x_i \in K$ and the g's are integers. The uniqueness of the above form implies that $\beta_1, \dots, \beta_s$ are linearly independent. We have hence the

Theorem 2. Let G be a closed vector group. There exist integers r and s, $0 \leq r \leq r + s \leq n$, and $r + s$ independent elements $\alpha_1, \dots, \alpha_r, \beta_1, \dots, \beta_s$ in G such that every element $\gamma \in G$ can be uniquely expressed in the form ($*$).

It is easy to see that if G is a vector group such that G consists of all elements of the form ($*$) then G is closed. In particular, if $r = 0$ we have discrete groups as a special case.

It can be seen that $L(\alpha_1, \dots, \alpha_r)$ is the connected component of the zero element in G.
2 Lattices
Let G be a discrete vector group. There exist $r \leq n$ elements $\alpha_1, \dots, \alpha_r$ of V such that
$$G = \{\alpha_1\} + \dots + \{\alpha_r\}$$
is a direct sum of r infinite cyclic groups. If $\beta_1, \dots, \beta_{r+1}$ are any $r + 1$ elements of G, then there is a non-trivial relation
$$\beta_1 h_1 + \dots + \beta_{r+1} h_{r+1} = 0$$
where $h_1, \dots, h_{r+1}$ are integers. For let
$$\beta_i = \sum_{j=1}^r \alpha_j a_{ji}, \quad i = 1, \dots, r + 1.$$
Then the matrix $A = (a_{ji})$ has r rows and $r + 1$ columns and is an integral matrix. There exist therefore rational numbers $h_1, \dots, h_{r+1}$, not all zero, such that
$$A \begin{pmatrix} h_1 \\ \vdots \\ h_{r+1} \end{pmatrix} = \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix}.$$
This means that there are rational numbers $h_1, \dots, h_{r+1}$, not all zero, such that $\beta_1 h_1 + \dots + \beta_{r+1} h_{r+1} = 0$. Multiplying by a common denominator we obtain the result stated.
Let us now make the

Definition. A vector group G is said to be a lattice if G is discrete and contains a basis of V.

This means that there exists a basis $\alpha_1, \dots, \alpha_n$ of V such that
$$G = \{\alpha_1\} + \dots + \{\alpha_n\}. \tag{3}$$
The quotient group $V/G$ is clearly compact. Conversely, suppose G is a discrete vector group such that $V/G$ is compact. If $\alpha_1, \dots, \alpha_r$ are independent elements of G generating G, complete $\alpha_1, \dots, \alpha_r$ to a basis $\alpha_1, \dots, \alpha_n$ of V. A set of representatives of $V \bmod G$ is then given by
$$\xi = \alpha_1 x_1 + \dots + \alpha_n x_n$$
where $0 \leq x_i < 1$, $i = 1, \dots, r$. Since $V \bmod G$ is compact, it follows that $r = n$. Thus a lattice is a discrete vector group G with $V/G$ compact.

A set of elements $\alpha_1, \dots, \alpha_n$ of G generating G is said to be a base of the lattice G. If $\beta_1, \dots, \beta_n$ is another base of G then
$$(\beta_1, \dots, \beta_n) = (\alpha_1, \dots, \alpha_n) A, \qquad (\alpha_1, \dots, \alpha_n) = (\beta_1, \dots, \beta_n) B \tag{4}$$
where A and B are n-rowed integral matrices. Because of (4), it follows that $AB = E$, E being the unit matrix of order n. Thus $|A| = \pm 1$, $|B| = \pm 1$.

We call a matrix A unimodular if A and $A^{-1}$ are both integral. The unimodular matrices form a group $\Gamma$. (4) shows that a transformation of a base of G into another base is accomplished by means of a unimodular transformation.
Conversely, if $\alpha_1, \dots, \alpha_n$ is a base of G and A is a unimodular matrix, then $\beta_1, \dots, \beta_n$ defined by
$$(\beta_1, \dots, \beta_n) = (\alpha_1, \dots, \alpha_n) A$$
form again a base of G, as can be easily seen. Thus $\Gamma$ is the group of automorphisms of a lattice.

Let G be a lattice and $\alpha_1, \dots, \alpha_n$ a base of it. Let $\alpha$ be any element in G. Then $\alpha$ can be completed into a base of G if and only if
$$G \cap L(\alpha) = \{\alpha\},$$
as is evident from section 1. Let $\alpha = \alpha_1 g_1 + \dots + \alpha_n g_n$ where $g_1, \dots, g_n$ are integers. If $\alpha$ can be completed into a base $\alpha, \beta_2, \dots, \beta_n$ of G then, by the above, the transformation taking $\alpha_1, \dots, \alpha_n$ to $\alpha, \beta_2, \dots, \beta_n$ is unimodular. This means that
$$(g_1, g_2, \dots, g_n) = 1.$$
Conversely let
$$\alpha = \alpha_1 g_1 + \dots + \alpha_n g_n$$
with $(g_1, \dots, g_n) = 1$. Let $\alpha \in G$ and
$$G \cap L(\alpha) = \{\beta_1\}$$
where $\beta_1 = \alpha_1 t_1 + \dots + \alpha_n t_n$. Since $\alpha \in L(\alpha)$, it follows that $\alpha \in \{\beta_1\}$ and $\alpha = \beta_1 q$ for some integer q. Because of the independence of $\alpha_1, \dots, \alpha_n$, it follows that q divides $(g_1, \dots, g_n)$. This means that $q = \pm 1$, that is,
$$G \cap L(\alpha) = \{\alpha\}.$$
Therefore $\alpha$ can be completed to a base of G. Hence the

Theorem 3. Let G be a lattice with a base $\alpha_1, \dots, \alpha_n$. Let $\alpha = \alpha_1 g_1 + \dots + \alpha_n g_n$ be an element in G. Then $\alpha$ can be completed to a base of G if and only if $(g_1, \dots, g_n) = 1$.

From the relation between bases of G and unimodular matrices, we have
Corollary. Let $g_1, \dots, g_n$ be n integers. They can be made the first column of a unimodular matrix if and only if $(g_1, \dots, g_n) = 1$.
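For $n = 2$ the corollary amounts to the Euclidean algorithm: if $(g_1, g_2) = 1$ there are integers x, y with $g_1 x + g_2 y = 1$, and the matrix with columns $(g_1, g_2)'$ and $(-y, x)'$ is unimodular. A computational sketch of this case (the function name is ours, not from the notes; the general case follows by induction on n):

    from math import gcd

    def complete_to_unimodular(g1, g2):
        """Return an integral 2x2 matrix of determinant 1 whose first
        column is (g1, g2), assuming (g1, g2) = 1  (corollary, n = 2)."""
        assert gcd(g1, g2) == 1
        # extended Euclidean algorithm: x0*g1 + y0*g2 = r0 = +-1
        x0, x1, y0, y1, r0, r1 = 1, 0, 0, 1, g1, g2
        while r1 != 0:
            q = r0 // r1
            r0, r1 = r1, r0 - q * r1
            x0, x1 = x1, x0 - q * x1
            y0, y1 = y1, y0 - q * y1
        x, y = r0 * x0, r0 * y0          # normalize the sign: g1*x + g2*y = 1
        return [[g1, -y], [g2, x]]       # determinant = g1*x + g2*y = 1

    print(complete_to_unimodular(8, 5))  # [[8, 3], [5, 2]], determinant 1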
3 Characters
Let G be a vector group. A character $\chi$ of G is a real valued function on V with the properties:

1) $\chi(\alpha)$ is an integer for $\alpha \in G$;

2) $\chi$ is continuous on V;

3) $\chi(\alpha + \beta) = \chi(\alpha) + \chi(\beta)$, $\alpha, \beta \in V$.

It follows trivially therefore that
$$\chi(0) = 0.$$
Since $\chi$ is a continuous function, we have
$$\lim_{n \to \infty} \chi(\alpha_n) = \chi(\alpha)$$
where $\alpha_1, \alpha_2, \dots$ is a sequence of elements in V converging to $\alpha$.

If p is an integer then $\chi(p\alpha) = p\chi(\alpha)$. If r is a rational number, say $r = a/b$, a, b integers, then $b\chi(r\alpha) = \chi(a\alpha) = a\chi(\alpha)$ so that
$$\chi(r\alpha) = r\chi(\alpha).$$
By continuity it follows that if r is real,
$$\chi(r\alpha) = r\chi(\alpha).$$
Suppose $\chi_1$ and $\chi_2$ are two characters of G. Define $\chi = \chi_1 + \chi_2$ by
$$\chi(\alpha) = \chi_1(\alpha) + \chi_2(\alpha).$$
It is then trivial to verify that $\chi$ is a character of G. It then follows that the characters of G form a group $G^*$, called the character group or the dual of G.

Let G be a vector group and $\bar{G}$ its closure. Then

Lemma. G and $\bar{G}$ have the same character group.

Proof. A character of $\bar{G}$ is already a character of G.

Conversely, let $\chi$ be a character of G. Then $\chi$ satisfies properties 2) and 3). We have only to verify property 1). Let $\alpha \in \bar{G}$. Then there is a sequence of elements $\alpha_1, \alpha_2, \dots$ in G with $\alpha$ as the limit. Since $\chi$ is continuous,
$$\lim_{n \to \infty} \chi(\alpha_n) = \chi(\alpha).$$
But the $\chi(\alpha_n)$ are all integers. Thus $\chi(\alpha)$ is integral. Thus $\chi$ is a character of $\bar{G}$.

The interest of the lemma is due to the fact that, in order to study the structure of $G^*$, it is enough to consider $G^*$ as the dual of the closed vector group $\bar{G}$, whose structure we investigated earlier.
Let $\bar{G}$ be the closure of the vector group G and $G^*$ its character group. By theorem 2 there exists a base $\alpha_1, \dots, \alpha_n$ of V such that
$$\alpha = \alpha_1 x_1 + \dots + \alpha_n x_n, \quad x_i \in K,$$
belongs to $\bar{G}$ if and only if $x_i$ is integral for $r < i \leq r + s$ and $x_i = 0$ for $i > r + s$, r and s being the integers determined by theorem 2. If $\chi \in G^*$ then for $\alpha \in V$
$$\chi(\alpha) = x_1 \chi(\alpha_1) + \dots + x_n \chi(\alpha_n).$$
If however $\alpha \in \bar{G}$ then $\chi(\alpha)$ is integral. Therefore
$$\chi(\alpha_i) = \begin{cases} 0 & i \leq r \\ \text{an integer} & r < i \leq r + s \\ \text{arbitrary real} & i > r + s. \end{cases}$$
Thus for $\alpha \in \bar{G}$
$$\chi(\alpha) = \sum_{i=r+1}^{r+s} \chi(\alpha_i)\, x_i.$$
If $\alpha \notin \bar{G}$, then because of the definition of $\alpha_1, \dots, \alpha_n$ it follows that either at least one of $x_{r+1}, \dots, x_{r+s}$ is not an integer, or at least one of $x_{r+s+1}, \dots, x_n$ is not zero. Suppose that $\alpha = \sum_i \alpha_i x_i$, $x_{r+1} \not\equiv 0 \pmod 1$. Define the linear function $\chi$ on V by
$$\chi(\alpha_i) = \begin{cases} 1 & \text{if } i = r + 1 \\ 0 & \text{if } i \neq r + 1. \end{cases}$$
Then $\chi$ is a character of $\bar{G}$ and
$$\chi(\alpha) = \chi(\alpha_{r+1}) x_{r+1} = x_{r+1} \not\equiv 0 \pmod 1.$$
The same thing is true if $x_{r+i} \not\equiv 0 \pmod 1$, $1 \leq i \leq s$. Suppose now that $\alpha = \sum_i \alpha_i x_i$ and one of $x_{r+s+1}, \dots, x_n$, say $x_n$, is $\neq 0$. Define $\chi$ linear on V by
$$\chi(\alpha_i) = \begin{cases} 0 & \text{if } i \neq n \\ \dfrac{1}{2 x_n} & \text{if } i = n. \end{cases}$$
Then $\chi$ is a character of $\bar{G}$ and $\chi(\alpha) = \frac{1}{2} \not\equiv 0 \pmod 1$. Hence if $\alpha \notin \bar{G}$ there is a character $\chi$ of G which is not integral for $\alpha$. We have thus proved

Theorem 4. Let $\alpha \in V$. Then $\alpha \in \bar{G}$ if and only if for every character $\chi$ of G, $\chi(\alpha)$ is integral.
Let us fix a basis $\alpha_1, \dots, \alpha_n$ of V so that $\alpha_1, \dots, \alpha_{r+s}$ is a basis of $\bar{G}$. If $\chi \in G^*$ then $\chi(\alpha_i) = c_i$ where
$$c_i = \begin{cases} 0 & i \leq r \\ \text{an integer} & r < i \leq r + s \\ \text{real} & i > r + s. \end{cases}$$
If $(c_1, \dots, c_n)$ is any set of n real numbers satisfying the above conditions, then the linear function $\chi$ defined on V by
$$\chi(\alpha) = \sum_{i=1}^n c_i x_i,$$
where $\alpha = \sum_i \alpha_i x_i$, is a character of G. If $R^n$ denotes the space of real n-tuples $(x_1, \dots, x_n)$, then the mapping
$$\chi \to (c_1, \dots, c_n)$$
is seen to be an isomorphism of $G^*$ into $R^n$. Thus $G^*$ is a closed vector group of rank $n - r - s$.

It can be proved easily that $G^{**}$, the character group of $G^*$, is isomorphic to $\bar{G}$.
4 Diophantine approximations
We shall study an application of the considerations in § 3 to a problem in linear inequalities.

Let
$$L_i(h) = \sum_{j=1}^m a_{ij} h_j \quad (i = 1, \dots, n)$$
be n linear forms in the m variables $h_1, \dots, h_m$ with real coefficients $a_{ij}$. Let $b_1, \dots, b_n$ be n arbitrarily given real numbers. We consider the problem of ascertaining necessary and sufficient conditions on the $a_{ij}$'s so that, given a $t > 0$, there exist integers $h_1, \dots, h_m$ such that
$$|L_i(h) - b_i| < t \quad (i = 1, \dots, n).$$
In order to study this problem, let us introduce the vector space V of all n-rowed real columns
$$\alpha = \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}, \quad a_i \in K.$$
V has then dimension n over K. Let $\alpha_1, \dots, \alpha_m$ be elements of V defined by
$$\alpha_i = \begin{pmatrix} a_{1i} \\ \vdots \\ a_{ni} \end{pmatrix}, \quad i = 1, \dots, m,$$
and let G be the vector group consisting of all sums $\sum_{i=1}^m \alpha_i g_i$ where the $g_i$'s are integers. Let $\beta$ be the vector
$$\beta = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}.$$
Then our problem on linear forms is seen to be equivalent to that of obtaining necessary and sufficient conditions that there be elements in G as close to $\beta$ as one wishes; in other words, the condition that $\beta$ be in $\bar{G}$. Theorem 4 now gives the answer, namely that
$$\chi(\beta) \equiv 0 \pmod 1$$
for every character $\chi$ of G.
Let us choose a basis $\epsilon_1, \dots, \epsilon_n$ of V where
$$\epsilon_i = \begin{pmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad i = 1, \dots, n,$$
with zero everywhere except at the i-th place. Now in terms of this basis
$$\alpha_k = \epsilon_1 a_{1k} + \dots + \epsilon_n a_{nk}, \quad k = 1, \dots, m.$$
Therefore if $\chi$ is a character of G,
$$\chi(\alpha_k) = \sum_{i=1}^n a_{ik} c_i$$
where $\chi(\epsilon_i) = c_i$, $i = 1, \dots, n$. Also $\chi(\alpha_k) \equiv 0 \pmod 1$. Furthermore, if $c_1, \dots, c_n$ be any real numbers satisfying
$$\sum_{i=1}^n c_i a_{ik} \equiv 0 \pmod 1, \quad k = 1, \dots, m,$$
then the linear function $\chi$ defined on V by $\chi(\epsilon_i) = c_i$ is a character of G. By theorem 4 therefore
$$\sum_{i=1}^n c_i b_i \equiv 0 \pmod 1.$$
We have therefore the theorem due to Kronecker.
Theorem 5. A necessary and sufficient condition that for every $t > 0$ there exist integers $h_1, \dots, h_m$ satisfying
$$|L_i(h) - b_i| < t, \quad i = 1, \dots, n,$$
is that for every set $c_1, \dots, c_n$ of real numbers satisfying
$$\sum_{i=1}^n c_i a_{ik} \equiv 0 \pmod 1, \quad k = 1, \dots, m,$$
we should have
$$\sum_{i=1}^n c_i b_i \equiv 0 \pmod 1.$$
We now consider the special case $m > n$. Let $m = n + q$, $q \geq 1$. Let the linear forms be
$$\sum_{j=1}^q a_{ij} h_j + g_i, \quad i = 1, \dots, n,$$
in the m variables $h_1, \dots, h_q, g_1, \dots, g_n$. Then the vectors $\alpha_1, \dots, \alpha_m$ above are such that
$$\alpha_{q+i} = \epsilon_i, \quad i = 1, \dots, n.$$
This means that if $\chi$ is a character of G, $c_i = \chi(\epsilon_i)$ is an integer. Thus

Corollary 1. The necessary and sufficient condition that for every $t > 0$ there exist integers $h_1, \dots, h_q, g_1, \dots, g_n$ satisfying
$$\Big| \sum_{j=1}^q a_{ij} h_j + g_i - b_i \Big| < t, \quad i = 1, \dots, n,$$
is that for every set $c_1, \dots, c_n$ of integers satisfying
$$\sum_i c_i a_{ij} \equiv 0 \pmod 1, \quad j = 1, \dots, q,$$
we have
$$\sum_i c_i b_i \equiv 0 \pmod 1.$$
We now consider another special case, $q = 1$. The linear forms are of the type
$$a_i h + g_i - b_i, \quad i = 1, \dots, n,$$
$a_1, \dots, a_n, b_1, \dots, b_n$ being real numbers. Suppose now we insist that the condition on $b_1, \dots, b_n$ be true whatever $b_1, \dots, b_n$ are. This will mean, by the above corollary, that $c_1 = c_2 = \dots = c_n = 0$, or, in other words, that $a_1, \dots, a_n$ have to satisfy the condition that
$$\sum_i c_i a_i \equiv 0 \pmod 1, \quad c_i \text{ integral},$$
if and only if $c_i = 0$, $i = 1, \dots, n$. This is equivalent to saying that the real numbers $1, a_1, \dots, a_n$ are linearly independent over the field of rational numbers.
Let us denote by $R^n$ the Euclidean space of n dimensions and by $F^n$ the unit cube consisting of points $(x_1, \dots, x_n)$ with
$$0 \leq x_i < 1, \quad i = 1, \dots, n.$$
For any real number x, let $((x))$ denote the fractional part of x, i.e. $((x)) = x - [x]$. Then

Corollary 2. If $1, a_1, \dots, a_n$ are real numbers linearly independent over the field of rational numbers, then the points $(x_1, \dots, x_n)$, where
$$x_i = ((h a_i)), \quad i = 1, \dots, n,$$
are dense in the unit cube as h runs through all integers.
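Corollary 2 is easy to observe numerically. The following sketch (ours, not part of the notes; standard library only) takes $a_1 = \sqrt{2}$, $a_2 = \sqrt{3}$, for which $1, \sqrt{2}, \sqrt{3}$ are rationally independent, and records how closely the points $(((h\sqrt{2})), ((h\sqrt{3})))$ approach an arbitrarily chosen target in the unit square:

    import math

    a = (math.sqrt(2), math.sqrt(3))   # 1, a1, a2 rationally independent
    target = (0.123, 0.456)            # arbitrary point of the unit square

    best = float("inf")
    for h in range(1, 200000):
        x = (h * a[0] % 1.0, h * a[1] % 1.0)      # fractional parts ((h*a_i))
        d = math.hypot(x[0] - target[0], x[1] - target[1])
        if d < best:
            best = d
            print(f"h = {h:6d}, distance to target = {d:.2e}")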
We consider now the homogeneous problem, namely that of obtaining integral solutions of the inequalities
$$|L_i(h)| < t, \quad i = 1, \dots, n,$$
$t > 0$ being arbitrary. Here we have to insist that $h_1, \dots, h_m$ should not all be zero.

We study only the case $m > n$. As before, introduce the vector space V of n-tuples. Let $\alpha_1, \dots, \alpha_m$ and G have the same meaning as before. If the group G is not discrete, it will mean that the inequalities have solutions for any t, however small. If however G is discrete, then since $m > n$ the elements $\alpha_1, \dots, \alpha_m$ have to be linearly integrally dependent. Hence we have integers $h_1, \dots, h_m$, not all zero, such that
$$\alpha_1 h_1 + \dots + \alpha_m h_m = 0.$$
We have hence the

Theorem 6. If $m > n$, the linear inequalities
$$|L_i(h)| < t, \quad i = 1, \dots, n,$$
have for every $t > 0$ a non-trivial integral solution.
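For instance, with $n = 1$, $m = 2$ and $L(h) = \sqrt{2}\, h_1 - h_2$, theorem 6 promises non-trivial integer pairs making $|L|$ arbitrarily small; a brute-force search (our illustration only, with a search limit adequate for these values of t) finds them:

    import math

    def small_solution(t, limit=100000):
        """Find integers (h1, h2) != (0, 0) with |sqrt(2)*h1 - h2| < t
        (theorem 6 with n = 1, m = 2, L(h) = sqrt(2)*h1 - h2)."""
        for h1 in range(1, limit):
            h2 = round(h1 * math.sqrt(2))
            if abs(h1 * math.sqrt(2) - h2) < t:
                return h1, h2
        return None

    for t in (1e-1, 1e-3, 1e-5):
        print(t, small_solution(t))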
Chapter 2
Reduction of positive quadratic forms
1 Quadratic forms
Let V be a vector space of dimension n over the field K of real numbers. Define an inner product between vectors $\xi, \eta$ of V with the properties:

i) $\xi\eta \in K$;

ii) $\xi\eta = \eta\xi$;

iii) $(\xi + \eta)\zeta = \xi\zeta + \eta\zeta$;

iv) $(\xi a)\eta = (\xi\eta)a$, $a \in K$.

Obviously, if $\xi_1, \dots, \xi_n$ is a base of V and $\xi, \eta$ have the expressions $\xi = \sum_i \xi_i a_i$, $\eta = \sum_i \xi_i b_i$, then
$$\xi\eta = \sum_{i,j=1}^n a_i b_j (\xi_i \xi_j).$$
If we denote by S the n-rowed real matrix $S = (s_{ij})$, $s_{ij} = \xi_i \xi_j$, then S is symmetric and
$$\xi\eta = a' S b \tag{1}$$
where $a = \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}$, $b = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}$ and $a'$ denotes the transpose of the column vector a. (1) is a bilinear form in the 2n quantities $a_1, \dots, a_n, b_1, \dots, b_n$. In particular,
$$\xi^2 = a' S a$$
is a quadratic form in $a_1, \dots, a_n$.
Suppose that $\eta_1, \dots, \eta_n$ is another base of V. Then
$$\eta_i = \sum_j \xi_j a_{ji}, \quad i = 1, \dots, n,$$
and the matrix $A = (a_{ji})$ is non-singular. If $S_1 = (\eta_i \eta_j)$ then one sees easily that
$$S_1 = S[A] = A' S A.$$
Thus if S with regard to one base is non-singular, then the S corresponding to any other base is also non-singular.
Conversely, let S be any real n-rowed symmetric matrix and $\xi_1, \dots, \xi_n$ a base of V over K. Put
$$\xi_i \xi_j = s_{ij} \quad (i, j = 1, \dots, n)$$
and extend it by linearity to any two vectors of V. Then we have an inner product defined in V.

If $\xi = \sum_i \xi_i x_i$ is a generic vector of V over K,
$$\xi^2 = x' S x = S[x] = \sum_{i,j} x_i x_j s_{ij}.$$
The expression on the right is a quadratic form in the n variables $x_1, \dots, x_n$, and we call S its matrix. The quadratic form is degenerate or non-degenerate according as its matrix S is or is not singular.
Let
$$x' S x = \sum_{k,l=1}^n s_{kl} x_k x_l$$
be a quadratic form in the n variables $x_1, \dots, x_n$ and let $s_1 = s_{11} \neq 0$. We may write
$$x' S x = s_1 x_1^2 + 2 s_{12} x_1 x_2 + \dots + 2 s_{1n} x_1 x_n + Q(x_2, \dots, x_n)$$
so that $Q(x_2, \dots, x_n)$ is a quadratic form in the $n - 1$ variables $x_2, \dots, x_n$. We now write, since $s_1 \neq 0$,
$$x' S x = s_1 \Big( x_1 + \frac{s_{12}}{s_1} x_2 + \dots + \frac{s_{1n}}{s_1} x_n \Big)^2 - \frac{1}{s_1} (s_{12} x_2 + \dots + s_{1n} x_n)^2 + Q(x_2, \dots, x_n).$$
We have thus finally
$$x' S x = s_1 y_1^2 + R(x_2, \dots, x_n)$$
where $y_1 = x_1 + \frac{s_{12}}{s_1} x_2 + \dots + \frac{s_{1n}}{s_1} x_n$ and $R(x_2, \dots, x_n)$ is a quadratic form in the $n - 1$ variables $x_2, \dots, x_n$.
If we make the change of variables
$$y_1 = x_1 + \frac{s_{12}}{s_1} x_2 + \dots + \frac{s_{1n}}{s_1} x_n, \qquad y_i = x_i, \quad i > 1, \tag{2}$$
then we may write
$$x' S x = \begin{pmatrix} s_1 & 0 \\ 0 & S_1 \end{pmatrix} \left[ \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} \right]$$
where $S_1$ is the matrix of the quadratic form $R(x_2, \dots, x_n)$. Using matrix notation we have
$$S = \begin{pmatrix} s_1 & q' \\ q & S_2 \end{pmatrix} = \begin{pmatrix} s_1 & 0 \\ 0 & S_1 \end{pmatrix} \left[ \begin{pmatrix} 1 & s_1^{-1} q' \\ 0 & E \end{pmatrix} \right] \tag{3}$$
where E is the unit matrix of order $n - 1$, q is a column of $n - 1$ rows and
$$S_1 = S_2 - s_1^{-1} q q',$$
which, incidentally, gives an expression for the matrix of R.
which, incidentally gives an expression for the matrix of R.
More generally suppose S =
_
S
1
Q
Q

S
2
_
where S
1
is a k-rowed matrix
and is non-singular. Put x =
_
y
z
_
where y is a column of k rows and z has
n k rows. Then
S [x] = S
1
[y] + y

Qz + z

y + S
2
[z],
24 2. Reduction of positive quadratic forms
which can be written in the form
S [x] = S
1
[y + S
1
1
Qz] + W[z] (4)
where W = S
2
Q

S
1
Q. In matrix notation we have
S =
_
S
1
0
0 W
_ _
E S
1
1
Q
0 E
_
(5)
the orders of the two unit matrices being evident. In particular, we have
|S | = |S
1
| |W|.
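As a quick numerical check of (4) and (5) (our sketch, not part of the notes; numpy assumed):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((4, 4))
    S = A.T @ A                      # a positive symmetric matrix
    k = 2
    S1, Q, S2 = S[:k, :k], S[:k, k:], S[k:, k:]
    W = S2 - Q.T @ np.linalg.inv(S1) @ Q      # the matrix W of (4)

    # |S| = |S1| * |W|, as stated after (5)
    print(np.linalg.det(S), np.linalg.det(S1) * np.linalg.det(W))

    # S[x] = S1[y + S1^{-1} Q z] + W[z] for a random x = (y, z)
    x = rng.standard_normal(4)
    y, z = x[:k], x[k:]
    u = y + np.linalg.inv(S1) @ Q @ z
    print(x @ S @ x, u @ S1 @ u + z @ W @ z)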
Let S be a real, non-singular, n-rowed symmetric matrix. It is well known that there exists an orthogonal matrix V such that
$$S[V] = V' S V = D$$
where $D = [d_1, \dots, d_n]$ is a real diagonal matrix. The elements $d_1, \dots, d_n$ of D are called the eigenvalues of S. Let L denote the unit sphere
$$L : x' x = 1,$$
so that a generic point x on L is an n-tuple $x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$ of real numbers. Let m and M denote the smallest and largest of the eigenvalues of S. Then for any x on L
$$m \leq S[x] \leq M.$$
For, if we put $y = V^{-1} x$, then $y' y = 1$ and
$$S[x] = D[V^{-1} x] = D[y] = d_1 y_1^2 + \dots + d_n y_n^2.$$
But then
$$S[x] = (d_1 - M) y_1^2 + \dots + (d_n - M) y_n^2 + M \leq M.$$
The other inequality is obtained by changing S to $-S$.

More generally we have, for any arbitrary real vector x,
$$m\, x' x \leq S[x] \leq M\, x' x. \tag{6}$$
If $x = 0$, the statement is obvious. Let $x \neq 0$. Then $t^2 = x' x \neq 0$. Put $y = t^{-1} x$. Then $y' y = 1$ and so $m \leq S[y] \leq M$. Multiplying throughout by $t^2$ we get the result in (6).
We now define a quadratic form $x' S x$ to be positive definite (or simply positive) if $S[x] > 0$ for all vectors $x \neq 0$. It is positive semi-definite if $S[x] \geq 0$ for all real x. We shall denote these by $S > 0$ and $S \geq 0$ respectively. If $S > 0$, then obviously $|S| \neq 0$. For, if $|S| = 0$, then there exists $x \neq 0$ such that $S x = 0$. Then
$$0 = x' S x > 0,$$
which is absurd.

If $S > 0$, $|A| \neq 0$ and A is a real matrix, then $T = S[A]$ is again positive. For, if $x \neq 0$, then $Ax = y \neq 0$ and so
$$T[x] = S[Ax] = S[y] > 0.$$
We now prove two lemmas for later use.
Lemma 1. A matrix S is positive definite if and only if $|S_r| > 0$ for $r = 1, \dots, n$, where $S_r$ is the matrix formed by the first r rows and columns of S.

Proof. We shall use induction on n. If $n = 1$, the lemma is trivial. Let therefore the lemma be proved for matrices of order $n - 1$ instead of n. Let
$$S = \begin{pmatrix} S_{n-1} & q \\ q' & a \end{pmatrix}.$$
If $S > 0$ then $S_{n-1} > 0$ and so $|S_{n-1}| \neq 0$. We can therefore write
$$S = \begin{pmatrix} S_{n-1} & 0 \\ 0 & l \end{pmatrix} \left[ \begin{pmatrix} E & S_{n-1}^{-1} q \\ 0 & 1 \end{pmatrix} \right] \tag{7}$$
so that $|S| = |S_{n-1}| \, l$. The induction hypothesis shows that $|S_{n-1}| > 0$, and $l > 0$ since l is the value of $S[x]$ at a suitable $x \neq 0$; so $|S| > 0$ and $|S_r| > 0$ for all r.

The converse also follows: by hypothesis $|S| > 0$ and $|S_{n-1}| > 0$, so $l > 0$; and by the induction hypothesis $S_{n-1} > 0$, whence by (7) $S > 0$.
Lemma 2. If $S > 0$ and $S = (s_{kl})$, then
$$|S| \leq s_1 \cdots s_n$$
where $s_{kk} = s_k$, $k = 1, \dots, n$.

Proof. We again use induction on n. From the equation (7) we have
$$|S| = |S_{n-1}| \, l.$$
But $l = s_n - q' S_{n-1}^{-1} q \leq s_n$, since $S_{n-1}^{-1} > 0$ and so $q' S_{n-1}^{-1} q \geq 0$. If we assume the lemma proved for $n - 1$ instead of n we get
$$|S| \leq s_1 \cdots s_{n-1} \, l \leq s_1 \cdots s_n.$$
More generally we can prove that if $S > 0$ and $S = \begin{pmatrix} S_1 & S_{12} \\ S_{12}' & S_2 \end{pmatrix}$ then
$$|S| \leq |S_1| \, |S_2|. \tag{8}$$
It is easy to see that equality holds in (8) if and only if $S_{12} = 0$.
Let $S > 0$; then $s_1, \dots, s_n$ are all positive. We can write as in (3)
$$S = \begin{pmatrix} s_1 & 0 \\ 0 & W \end{pmatrix} \left[ \begin{pmatrix} 1 & s_1^{-1} q' \\ 0 & E \end{pmatrix} \right].$$
But since now $W > 0$, its first diagonal element is different from zero and we can write W also in the form (3). In this way we get
$$S = D[V], \qquad D = [d_1, \dots, d_n], \quad V = \begin{pmatrix} 1 & d_{12} & \dots & d_{1n} \\ 0 & 1 & \dots & d_{2n} \\ \vdots & & \ddots & \vdots \\ 0 & \dots & 0 & 1 \end{pmatrix}, \tag{9}$$
where D is a diagonal matrix and $V = (d_{kl})$ is a triangle matrix with $d_{kk} = 1$, $k = 1, \dots, n$, and $d_{kl} = 0$, $k > l$. We can therefore write
$$S[x] = \sum_{k=1}^n d_k (x_k + d_{k\,k+1} x_{k+1} + \dots + d_{kn} x_n)^2.$$
The expression $S = D[V]$ is unique. For if $S = D_1[V_1]$ where $D_1$ is a diagonal matrix and $V_1$ is triangular, then
$$D[W] = D_1$$
where $W = V V_1^{-1}$ is also a triangular matrix. In this case it readily follows that $W = E$ and $D = D_1$.

In general we have the fact that if
$$S = \begin{pmatrix} S_1 & 0 \\ 0 & S_2 \end{pmatrix} \left[ \begin{pmatrix} E & T \\ 0 & E \end{pmatrix} \right] \tag{10}$$
where $S_1$ has order k, then $S_1$, $S_2$ and T are unique.

We call the decomposition (9) of S the Jacobi transformation of S.
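Computationally, (9) is the $LDL'$ decomposition read with the unit triangle on the right ($V = L'$), obtained by repeating the splitting (3). A minimal sketch (ours, not from the notes; numpy assumed, and the function name is ours):

    import numpy as np

    def jacobi_transformation(S):
        """Decompose a positive symmetric S as S = V' D V  (formula (9)),
        with D diagonal and V upper triangular with unit diagonal."""
        S = np.array(S, dtype=float)
        n = S.shape[0]
        D = np.zeros(n)
        V = np.eye(n)
        R = S.copy()
        for k in range(n):
            D[k] = R[k, k]                  # d_k: current corner element
            V[k, k+1:] = R[k, k+1:] / D[k]  # row of V: d_{k,k+1}, ..., d_{kn}
            # pass to the Schur complement, exactly as in (3)
            R[k+1:, k+1:] -= np.outer(R[k+1:, k], R[k, k+1:]) / D[k]
        return D, V

    S = np.array([[2.0, 1.0, 0.0], [1.0, 2.0, 1.0], [0.0, 1.0, 2.0]])
    D, V = jacobi_transformation(S)
    print(np.allclose(V.T @ np.diag(D) @ V, S))   # True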
2 Minima of definite forms
Let S and T be two real, non-singular, n-rowed symmetric matrices. They are said to be equivalent (denoted $S \sim T$) if there exists a unimodular matrix U such that
$$S[U] = T.$$
Since the unimodular matrices form a group, the above relation is an equivalence relation. We can therefore put the n-rowed real symmetric matrices into classes of equivalent matrices. Evidently, two matrices in a class have the same determinant.

If $S = S'$ is real and t is a real number, we say that S represents t integrally if there is an integral vector x such that
$$S[x] = t.$$
In case $t = 0$, we insist that $x \neq 0$. The representation is said to be primitive if x is a primitive vector. Obviously, if $S \sim T$ then S and T both represent the same set of real numbers.
If $S > 0$, then all the eigenvalues of S are positive. Let $m > 0$ be the smallest eigenvalue of S. Let $t > 0$ be a large real number. Then if $S[x] < t$, then $m\, x' x < t$ and so the elements of x are bounded. Therefore there exist only finitely many integral vectors x satisfying
$$S[x] < t.$$
This means that if x runs through all non-zero integral vectors, $S[x]$ has a minimum. We denote this minimum by $\mu(S)$. There is therefore an integral $x \neq 0$ such that
$$S[x] = \mu(S).$$
Moreover, x is a primitive vector. For if x is not primitive, then $x = qy$ where $q > 1$ is an integer and y is a primitive vector. Then
$$\mu(S) = S[x] = q^2 S[y] > S[y],$$
which is impossible. Furthermore, if $S \sim T$ then $\mu(S) = \mu(T)$. For, let $S = T[U]$ where U is unimodular. If x is a primitive vector such that $\mu(S) = S[x]$, then
$$\mu(S) = S[x] = T[Ux] \geq \mu(T).$$
Also, if $\mu(T) = T[y]$, then
$$\mu(T) = T[y] = S[U^{-1} y] \geq \mu(S).$$
This proves the contention.

If $S > 0$ and t is a positive real number, then $\mu(tS) = t\mu(S)$. But $|tS| = t^n |S|$, so that it seems reasonable to compare $\mu(S)$ with $|S|^{1/n}$.
We now prove the following important theorem due to Hermite.

Theorem 1. If $\mu(S)$ is the minimum of the positive matrix S of n rows, there exists a constant $c_n$, depending only on n, such that
$$\mu(S) \leq c_n |S|^{1/n}.$$
Proof. We use induction on n.
If $n = 1$, then S is a positive real number s. If x is integral, $x \neq 0$, then $s x^2 \geq s$, with equality for $x = \pm 1$, so that
$$c_1 = 1.$$
Let us assume the theorem proved for $n - 1$ instead of n. Let x be the primitive integral vector such that $\mu(S) = S[x]$. Complete x into a unimodular matrix U. Then $T = S[U]$ has first diagonal element equal to $\mu(S)$. Also $\mu(S) = \mu(T)$ by our remarks above. Furthermore, $|S| = |T|$. Therefore, in order to prove the theorem we may assume that the first diagonal element $s_1$ of S is equal to $\mu(S)$.

Let $S = \begin{pmatrix} s_1 & q' \\ q & S_1 \end{pmatrix}$. Then
$$S = \begin{pmatrix} s_1 & 0 \\ 0 & W \end{pmatrix} \left[ \begin{pmatrix} 1 & s_1^{-1} q' \\ 0 & E \end{pmatrix} \right]$$
where $W = S_1 - q s_1^{-1} q'$. Also $|S| = s_1 |W|$.

Let $x = \begin{pmatrix} x_1 \\ y \end{pmatrix}$ be a vector, where y has $n - 1$ rows, so that
$$S[x] = s_1 (x_1 + s_1^{-1} q' y)^2 + W[y]. \tag{11}$$
Since $W > 0$, we can choose an integral y so that $W[y]$ is a minimum. $x_1$ can now be chosen integral in such a manner that
$$-\frac{1}{2} \leq x_1 + s_1^{-1} q' y \leq \frac{1}{2}. \tag{12}$$
Using (11) and (12) and the induction hypothesis we get
$$\mu(S) \leq S[x] \leq \frac{\mu(S)}{4} + c_{n-1} |W|^{1/(n-1)}.$$
Substituting for $|W|$ we get
$$\mu(S) \leq \Big( \frac{4}{3}\, c_{n-1} \Big)^{\frac{n-1}{n}} |S|^{\frac{1}{n}},$$
which proves the theorem.
Using $c_1 = 1$ and computing successively from the recurrence formula $c_n = \big( \frac{4}{3} c_{n-1} \big)^{\frac{n-1}{n}}$ we see that
$$c_n = (4/3)^{\frac{n-1}{2}} \tag{13}$$
is a possible value of $c_n$. This estimate is due to Hermite.
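Hermite's bound is easy to test numerically by brute force over a box of integral vectors (a sketch of ours; numpy assumed, and the box size is adequate only for small examples):

    import itertools
    import numpy as np

    def minimum_mu(S, box=6):
        """Brute-force mu(S): min of S[x] over non-zero integral x, |x_i| <= box."""
        n = S.shape[0]
        best = float("inf")
        for x in itertools.product(range(-box, box + 1), repeat=n):
            if any(x):
                x = np.array(x)
                best = min(best, x @ S @ x)
        return best

    S = np.array([[2.0, 1.0], [1.0, 2.0]])
    n = S.shape[0]
    hermite = (4.0 / 3.0) ** ((n - 1) / 2) * np.linalg.det(S) ** (1.0 / n)
    print(minimum_mu(S), "<=", hermite)   # 2.0 <= 2.0: equality for this form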
The best possible value for $c_n$ is unknown except in a few cases. We shall show that $c_2 = \sqrt{4/3}$ and that it is the best possible for $n = 2$. From Hermite's estimate (13), we see that for a positive binary matrix S,
$$\mu(S) \leq \Big( \frac{4}{3} \Big)^{\frac{1}{2}} |S|^{\frac{1}{2}}.$$
Consider now the positive quadratic form $x^2 + xy + y^2$, whose matrix is
$$S = \begin{pmatrix} 1 & \frac{1}{2} \\ \frac{1}{2} & 1 \end{pmatrix}.$$
For integral x, y not both zero, $x^2 + xy + y^2 \geq 1$, so that $\mu(S) = 1$. Also $|S| = \frac{3}{4}$. We have
$$1 = \Big( \frac{4}{3} \Big)^{\frac{1}{2}} |S|^{\frac{1}{2}},$$
which proves that $\sqrt{4/3}$ is the best possible value of $c_2$.
We shall now obtain a finer estimate for $c_n$ due to Minkowski. This estimate is better than Hermite's for large values of n. To this end we make the following considerations.

Let $R^n$ denote the Euclidean space of n dimensions regarded as a vector space of ordered n-tuples $(x_1, \dots, x_n)$. A point set L in $R^n$ is said to be convex if, whenever A and B are two points of it, $\frac{A + B}{2}$, the mid-point of the line joining A and B, is also a point of L. It is said to be symmetric about the origin if, whenever x belongs to it, $-x$ also belongs to it. Obviously, if L is both convex and symmetric, it contains the origin.

If L is a point set in $R^n$ and h is any point in $R^n$, we denote by $L_h$ the set of points x such that $x \in L_h$ if and only if $x - h$ is a point of L. With this notation $L = L_0$.

If L is an open, bounded, symmetric convex set, then L has a measure $V(L)$ in the Jordan sense and, for $h \in R^n$, $V(L) = V(L_h)$.

We call a point $P = (x_1, \dots, x_n)$ in $R^n$ a lattice point if $x_1, \dots, x_n$ are all integers. The lattice points form a lattice in $R^n$ considered as a vector group. We shall denote points of this lattice by the letters $g, g', \dots$.

The following lemma, due to Minkowski, shows the relationship between convex sets and lattices.

Lemma 3. If L is an open, bounded, symmetric and convex set of volume $> 2^n$, then L contains a lattice point other than the origin.

Proof. We shall assume that L has no lattice point in it other than the origin and then prove that $V(L) \leq 2^n$.

So let L have no lattice point in it other than the origin. Define the point set M by $x \in M$ if and only if $2x \in L$. Then M is an open, symmetric, bounded and convex set. Also
$$V(L) = 2^n V(M). \tag{14}$$
Consider now the translates $M_g$ of M by the lattice points. If $g \neq g'$ then $M_g$ and $M_{g'}$ are disjoint sets. For, if $x \in M_g \cap M_{g'}$ then $x - g$ and $x - g'$ are points of M. Since M is symmetric and convex,
$$\frac{g - g'}{2} = \frac{(x - g') + (g - x)}{2}$$
is a point of M. By the definition of M, $g - g'$ is a point of L. But $g \neq g'$. Thus L has a lattice point other than the origin. This contradicts our assumption. Thus the $M_g$ for all g are distinct.

Let $\Omega$ denote the unit cube, that is, the set of points $x = (x_1, \dots, x_n)$ with $0 \leq x_i < 1$, $i = 1, \dots, n$. By the property of the $M_g$'s above,
$$\sum_g V(\Omega \cap M_g) = V\Big( \Omega \cap \bigcup_g M_g \Big) \leq V(\Omega) = 1. \tag{15}$$
But $V(\Omega \cap M_g) = V(\Omega_{-g} \cap M)$, so that by (15)
$$1 \geq \sum_g V(\Omega_{-g} \cap M) = V\Big( \bigcup_g \Omega_{-g} \cap M \Big).$$
But the $\Omega_{-g}$ cover $R^n$ completely, without gaps or overlapping, when g runs over all lattice points. Hence
$$V(M) \leq 1.$$
Using (14) our lemma follows.
We can now prove the following theorem due to Minkowski.

Theorem 2. If $S > 0$ and $\mu(S)$ is its minimum, then
$$\mu(S) \leq \frac{4}{\pi} \, \Gamma\Big( \frac{n}{2} + 1 \Big)^{2/n} |S|^{1/n}.$$

Proof. In $R^n$ let us consider the point set L defined by the set of $x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$ with
$$S[x] < \lambda.$$
It is trivially seen to be open and symmetric. Also, since $S > 0$, L is bounded. To see that it is convex, write $S = A' A$ and put $A x_1 = y_1$, $A x_2 = y_2$. Then a simple calculation proves that
$$2 \Big( \frac{y_1 + y_2}{2} \Big)' \Big( \frac{y_1 + y_2}{2} \Big) \leq y_1' y_1 + y_2' y_2.$$
This shows that L is a convex set. The volume of L is
$$V(L) = \frac{\pi^{n/2} \lambda^{n/2}}{\Gamma(\frac{n}{2} + 1)} \, |S|^{-1/2}.$$
If we put $\lambda = \mu(S)$, then L contains no lattice point other than the origin. Minkowski's lemma then proves theorem 2.
Denote the constants in Hermite's and Minkowski's theorems by $c_n$ and $c_n'$ respectively. If we use Stirling's formula for the $\Gamma$-function in the form
$$\log \Gamma(x) \sim x \log x,$$
we get
$$\log c_n' = \log \frac{4}{\pi} + \frac{2}{n} \log \Gamma\Big( \frac{n}{2} + 1 \Big) \sim \log n,$$
whereas
$$\log c_n = \frac{n - 1}{2} \log \frac{4}{3} \sim \gamma n,$$
where $\gamma$ is an absolute constant. This shows that for large n, Minkowski's estimate is better than Hermite's.
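The comparison can be made concrete (our sketch; only the standard library is used):

    import math

    print(" n    Hermite c_n    Minkowski c'_n")
    for n in (2, 4, 8, 16, 32, 64):
        hermite = (4.0 / 3.0) ** ((n - 1) / 2)
        minkowski = (4.0 / math.pi) * math.gamma(n / 2 + 1) ** (2.0 / n)
        print(f"{n:3d}   {hermite:12.4g}   {minkowski:12.4g}")

For n = 2 Hermite's constant is the smaller; the Minkowski constant wins from moderate n onwards, its growth being linear in n rather than exponential.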
3 Half reduced positive forms
We now consider the space $R^h$, $h = \frac{n(n+1)}{2}$, of real symmetric n-rowed matrices and impose on it the topology of the h-dimensional real Euclidean space. Let P denote the subspace of positive matrices. If $S \in P$ then all the principal minors of S have positive determinant. This shows that P is the intersection of a finite number of open subsets of $R^h$ and hence is open.

Let S be a matrix in the frontier of P in $R^h$. Let $S_1, S_2, \dots$ be a sequence of matrices in P converging to S. Let $x \neq 0$ be any real column vector. Then $S_k[x] > 0$ and hence, by continuity, $S[x] \geq 0$. From the arbitrariness of x, it follows that $S \geq 0$. On the other hand, let S be any positive semi-definite matrix in $R^h$. Let E denote the unit matrix of order n. Then for $\epsilon > 0$, $S + \epsilon E$ is a positive matrix, which shows that in every neighbourhood of S there are points of P. This proves that the frontier of P in $R^h$ consists precisely of the positive semi-definite matrices.

Let $\Gamma$ denote the group of unimodular matrices. We represent $\Gamma$ in $R^h$ as a group of transformations $S \to S[U]$, $S \in R^h$. Also, U and $-U$ lead to the same representation in $R^h$. It is easy to see that the only elements in $\Gamma$ which keep every element of $R^h$ fixed are $\pm E$. Thus, if we identify in $\Gamma$ the matrices U and $-U$, then $S \to S[U]$ gives a faithful representation of $\Gamma_0$ in $R^h$, $\Gamma_0 = \Gamma / \pm E$. If U runs over all elements of $\Gamma$ and $S \in R^h$, $S[U]$ runs through all matrices in the class of S. We shall now find in each class of positive matrices a matrix having certain nice properties.
Let $T \in P$ and let u run over the first columns of all the matrices in $\Gamma$. These u are precisely all the primitive vectors. Consider the values $T[u]$ as u runs over these first columns. Then $T[u]$ has a minimum, which is none other than $\mu(T)$. Let this be attained for $u = u_1$. It is obvious that $u_1$ is not unique, for $-u_1$ also satisfies this condition. In any case, since $T > 0$, there are only finitely many u's with the property $T[u] = T[u_1]$. Let $u_1$ be fixed and let u run over the second columns of all unimodular matrices whose first column is $u_1$. The u's now are not all the primitive vectors (for instance $u \neq \pm u_1$). $T[u]$ again has a minimum, say for $u = u_2$, and by our remark above
$$T[u_1] \leq T[u_2].$$
Also, there are only finitely many u with $T[u] = T[u_2]$. Consider now all unimodular matrices whose first two columns are $u_1, u_2$ and determine a $u_3$ such that $T[u_3]$ is a minimum. Continuing in this way one finally obtains a unimodular matrix
$$U = (u_1, \dots, u_n)$$
and a positive matrix $S = T[U]$.

$S \sim T$ and, by our construction, it is obvious that S is not unique in the class of T. We shall study the matrices S and U more closely.
Suppose we have constructed the columns $u_1, \dots, u_{k-1}$. In order to construct the k-th column we consider all unimodular matrices V whose first $k - 1$ columns are $u_1, \dots, u_{k-1}$ in that order. Using the matrix U above, which has this property,
$$U^{-1} V = \begin{pmatrix} E_{k-1} & A \\ 0 & B \end{pmatrix} \tag{16}$$
where $E_{k-1}$ is the unit matrix of order $k - 1$ and A and B are integral matrices. Since U and V are unimodular, B is unimodular. If $w = \begin{pmatrix} w_1 \\ \vdots \\ w_n \end{pmatrix}$ denotes the first column of the matrix $\begin{pmatrix} A \\ B \end{pmatrix}$ then, since B is unimodular,
$$(w_k, w_{k+1}, \dots, w_n) = 1. \tag{17}$$
The k-th column $v_k$ of V is Uw. Conversely, let w be any integral column satisfying (17). Then $w_k, \dots, w_n$ can be made the first column of a unimodular matrix B of order $n - k + 1$. Choosing any integral matrix A of $k - 1$ rows and $n - k + 1$ columns whose first column is $w_1, \dots, w_{k-1}$, we get a matrix V whose first $k - 1$ columns are $u_1, \dots, u_{k-1}$ (by means of the equation (16)). Thus the k-th column of all the unimodular matrices with first $k - 1$ columns equal to $u_1, \dots, u_{k-1}$ is of the form Uw, where w is an arbitrary integral vector with $(w_k, \dots, w_n) = 1$.
Consider the matrix $S = T[U]$. By the choice of $u_k$ we have: if w satisfies (17), then
$$S[w] = T[Uw] \geq T[u_k] = s_k,$$
where $S = (s_{kl})$. We have thus proved that in each class of T there exists a matrix S satisfying

I) $s_1 > 0$;

II) $S[w] \geq s_k$, $k = 1, \dots, n$, for every integral column $w = \begin{pmatrix} w_1 \\ \vdots \\ w_n \end{pmatrix}$ with $(w_k, \dots, w_n) = 1$.

Matrices which satisfy (I) and (II) shall be called half reduced, and the subset of P consisting of the half reduced matrices S shall be denoted $R_0$.

In the sequel we shall denote by $e_1, \dots, e_n$ the n columns, in order, of the unit matrix of order n, and by an admissible k-vector w we shall understand an integral vector w of n rows satisfying (17). $e_k$ is clearly an admissible k-vector.

Since $e_{k+1}$ is also an admissible k-vector, we have
$$s_{k+1} = S[e_{k+1}] \geq s_k,$$
which shows that
$$s_1 \leq s_2 \leq \dots \leq s_n. \tag{18}$$
Let $u = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$ be an integral vector with $x_k = 1$, $x_l = 1$, $x_i = 0$ for $i \neq k$, $i \neq l$, and $k < l$. Then u is an admissible l-vector and so
$$s_k + 2 s_{kl} + s_l = S[u] \geq s_l.$$
This means that $2 s_{kl} \geq -s_k$. Changing the sign of $x_k$ we get $2 s_{kl} \leq s_k$. Hence
$$-s_k \leq 2 s_{kl} \leq s_k, \quad 1 \leq k < l \leq n. \tag{19}$$
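Conditions (18) and (19) are necessary, not sufficient, for membership in $R_0$, but they are easy to test mechanically. A sketch (ours, not from the notes; numpy assumed, function name ours):

    import numpy as np

    def satisfies_18_19(S):
        """Check the necessary conditions (18) and (19) for a half reduced
        matrix: increasing diagonal, and 2|s_kl| <= s_k for k < l."""
        s = np.diag(S)
        if np.any(np.diff(s) < 0):                  # (18): s_1 <= ... <= s_n
            return False
        n = S.shape[0]
        for k in range(n):
            for l in range(k + 1, n):
                if 2 * abs(S[k, l]) > s[k]:         # (19)
                    return False
        return True

    print(satisfies_18_19(np.array([[1.0, 0.5], [0.5, 1.0]])))  # True
    print(satisfies_18_19(np.array([[1.0, 0.8], [0.8, 2.0]])))  # False: 1.6 > 1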
Remark. Suppose S is a real symmetric matrix satisfying (II). Let $S_1$ be the matrix obtained from S by deleting the $h_1$-th, $h_2$-th, ..., $h_l$-th rows and columns of S. Then $S_1$ also has properties similar to S, since we have only to consider such admissible vectors w whose $h_1$-th, ..., $h_l$-th elements are zero.
We now prove the

Theorem 3. Let S be a real symmetric n-rowed matrix with the property (II). Then $S \geq 0$. If, in addition, it satisfies (I), then $S > 0$.

Proof. Suppose $s_1 = 0$. Then by (19) we have
$$0 = -s_1 \leq 2 s_{1l} \leq s_1 = 0,$$
which shows that S has the form
$$S = \begin{pmatrix} 0 & 0 \\ 0 & S_1 \end{pmatrix}.$$
If $s_2 = 0$, we again have a similar decomposition for $S_1$, since $S_1$, by our remark above, also satisfies (II). Thus either $S = 0$ or else there is a first diagonal element $s_k$ such that $s_k \neq 0$. Then
$$S = \begin{pmatrix} 0 & 0 \\ 0 & S_k \end{pmatrix},$$
$S_k$ having $s_k$ as its first diagonal element. We shall now show that $S_k > 0$. Observe that $S_k$ satisfies both (I) and (II), and therefore, for proving the theorem, it is enough to show that if S satisfies (I) and (II), then $S > 0$.

If $n = 1$, the theorem is trivially true. Let therefore the theorem be proved for $n - 1$ instead of n. Put
$$S = \begin{pmatrix} S_1 & q \\ q' & s_n \end{pmatrix}$$
where q is a column of $n - 1$ rows. $S_1$ satisfies (I) and (II) and so, by the induction hypothesis, $S_1 > 0$. Also, since $s_n \geq s_1$, therefore $s_n > 0$.
Let $x = \begin{pmatrix} y \\ z \end{pmatrix}$ be a column of n rows, y having $n - 1$ rows, and let z be a real number. Then
$$S[x] = S_1[y + S_1^{-1} q z] + (s_n - q' S_1^{-1} q) z^2.$$
We assert that $s_n - q' S_1^{-1} q > 0$. For let $s_n \leq q' S_1^{-1} q$. Then for $\epsilon > 0$ and every $x \neq 0$
$$S[x] \leq S_1[y + S_1^{-1} q z] + \epsilon z^2. \tag{20}$$
Consider the quadratic form on the right side of the inequality above. It is of order n, positive, and has determinant $\epsilon |S_1|$. Therefore we may find a column vector $x = \begin{pmatrix} y \\ z \end{pmatrix}$, integral, such that the value of the right side is a minimum, and so by Hermite's theorem
$$S_1[y + S_1^{-1} q z] + \epsilon z^2 \leq c_n |S_1|^{1/n} \epsilon^{1/n}.$$
Using (20) and observing that $s_1$ is the minimum of $S[x]$, we get, for this x,
$$0 < s_1 \leq S[x] \leq c_n |S_1|^{1/n} \epsilon^{1/n}. \tag{21}$$
Since $\epsilon$ can be chosen arbitrarily small, we get a contradiction from (21). Thus $s_n - q' S_1^{-1} q > 0$. This means that $S > 0$.

We have thus shown that all matrices satisfying (I) and (II) are in P.
We prove now the following important theorem due to Minkowski.

Theorem 4. If S is a positive half-reduced matrix, then
$$1 \leq \frac{s_1 \cdots s_n}{|S|} \leq b_n$$
where $b_n$ is a constant depending only on n.

Proof. The left hand inequality has already been proved in lemma 2, even for all matrices in P. In order to prove the right hand inequality we use induction.

Consider the ratios
$$\frac{s_n}{s_{n-1}}, \frac{s_{n-1}}{s_{n-2}}, \dots, \frac{s_2}{s_1}.$$
Since S is half-reduced, all these ratios are $\geq 1$. Let $\lambda = \frac{n(n-1)}{4}$. For the above ratios, therefore, one of two possibilities can happen. Either there exists a k, $2 \leq k \leq n$, such that
$$\frac{s_n}{s_{n-1}} < \lambda, \quad \frac{s_{n-1}}{s_{n-2}} < \lambda, \ \dots, \ \frac{s_{k+1}}{s_k} < \lambda \leq \frac{s_k}{s_{k-1}}, \tag{22}$$
or
$$\frac{s_n}{s_{n-1}}, \dots, \frac{s_2}{s_1} < \lambda. \tag{23}$$
Note that in the case $n = 2$ the second possibility cannot occur, since then $\lambda = \frac{1}{2}$ and $\frac{s_2}{s_1} \geq 1$.

Consider (23) first. We have
$$\frac{s_1 \cdots s_n}{s_1^n} < \lambda^{\frac{n(n-1)}{2}},$$
and since
$$\frac{s_1 \cdots s_n}{|S|} = \frac{s_1 \cdots s_n}{s_1^n} \cdot \frac{s_1^n}{|S|},$$
we get, using Hermite's inequality,
$$\frac{s_1 \cdots s_n}{|S|} < c_n^n \, \lambda^{\frac{n(n-1)}{2}},$$
which proves the theorem in this case.
Suppose now that (22) is true, so that $k \geq 2$. Write
$$S = \begin{pmatrix} S_{k-1} & Q \\ Q' & R \end{pmatrix}$$
where $S_{k-1}$ has $k - 1$ rows. Let $x = \begin{pmatrix} y \\ z \end{pmatrix}$ where y is a column with $k - 1$ rows. We have, by completion of squares,
$$S[x] = S_{k-1}[y + S_{k-1}^{-1} Q z] + (R - Q' S_{k-1}^{-1} Q)[z]. \tag{24}$$
Also $|R - Q' S_{k-1}^{-1} Q| = |S| / |S_{k-1}|$. Choose z to be an integral primitive vector such that $(R - Q' S_{k-1}^{-1} Q)[z]$ is a minimum. By Hermite's theorem therefore
$$(R - Q' S_{k-1}^{-1} Q)[z] \leq c_{n-k+1} \big( |S| / |S_{k-1}| \big)^{1/(n-k+1)}. \tag{25}$$
Put $y + S_{k-1}^{-1} Q z = w$, so that $w = \begin{pmatrix} w_1 \\ \vdots \\ w_{k-1} \end{pmatrix}$. Choose now y to be an integral vector such that
$$-\frac{1}{2} \leq w_i \leq \frac{1}{2}, \quad i = 1, \dots, k - 1. \tag{26}$$
By the choice of z, it follows that $x = \begin{pmatrix} y \\ z \end{pmatrix}$ is an admissible k-vector. Hence
$$s_k \leq S[x]. \tag{27}$$
Also, since $S_{k-1}$ is half-reduced, we get
$$S_{k-1}[w] = \sum_{p,q=1}^{k-1} s_{pq} w_p w_q \leq \frac{k(k-1)}{8} s_{k-1}.$$
Using (22) we get
$$S_{k-1}[w] \leq \frac{s_k}{2}. \tag{28}$$
From (24), (25), (27) and (28) we get
$$s_k \leq 2 c_{n-k+1} \big( |S| / |S_{k-1}| \big)^{1/(n-k+1)}. \tag{29}$$
Since
$$\frac{s_1 \cdots s_n}{|S|} = \frac{s_1 \cdots s_{k-1}}{|S_{k-1}|} \cdot \frac{|S_{k-1}|}{|S|} s_k^{n-k+1} \cdot \frac{s_k \cdots s_n}{s_k^{n-k+1}},$$
we get, by the induction hypothesis applied to $S_{k-1}$,
$$\frac{s_1 \cdots s_n}{|S|} \leq b_{k-1} \, (2 c_{n-k+1})^{n-k+1} \, \lambda^{\frac{(n-k)(n-k+1)}{2}},$$
which proves the theorem completely.
The best possible value of $b_n$ is again unknown except in a few simple cases. We shall prove that
$$b_2 = 4/3 \tag{30}$$
and that it is the best possible value.

Let $a x^2 + 2b xy + c y^2$ be a half-reduced positive form. Then $2|b| \leq a \leq c$. The determinant of the form is $d = ac - b^2$. Thus
$$ac = ac - b^2 + b^2 \leq d + \frac{a^2}{4} \leq d + \frac{ac}{4},$$
which gives
$$ac \leq \frac{4}{3} d. \tag{31}$$
Consider the binary quadratic form $x^2 + xy + y^2$. It is half-reduced, because if x and y are two integers not both zero, then $x^2 + xy + y^2 \geq 1$. The determinant of the form is 3/4. The product of the diagonal elements is unity. Hence
$$1 = \frac{4}{3} d,$$
and this shows that 4/3 is the best possible value.
4 Two auxiliary regions
Let $R_0$ denote the space of half-reduced matrices. Define the point set $R'_t$, for $t > b_n \geq 1$, as the set of $S \in P$ satisfying
$$0 < s_k < t\, s_{k+1}, \quad k = 1, \dots, n - 1,$$
$$-t < \frac{s_{kl}}{s_k} < t, \quad 1 \leq k < l \leq n,$$
$$\frac{s_1 \cdots s_n}{|S|} < t. \tag{32}$$
Because of (18), (19) and theorem 4, it follows that
$$R_0 \subset R'_t. \tag{33}$$
But what is more important is that
$$\lim_{t \to \infty} R'_t = P. \tag{34}$$
This is easy to see. For, if $S \in P$, let t be chosen larger than the maximum of the finite number of ratios $\frac{s_k}{s_{k+1}}$, $k = 1, \dots, n - 1$; $\big| \frac{s_{kl}}{s_k} \big|$, $1 \leq k < l \leq n$; $\frac{s_1 \cdots s_n}{|S|}$; and $b_n$. Then $S \in R'_t$ for this value of t.

Let $S \in R'_t$ and consider the Jacobi transformation of S, namely
$$S = D[T], \qquad D = [d_1, \dots, d_n], \quad T = \begin{pmatrix} 1 & t_{12} & \dots & t_{1n} \\ & \ddots & & \vdots \\ 0 & & & 1 \end{pmatrix}. \tag{35}$$
Then
$$s_{kl} = d_k t_{kl} + \sum_{h=1}^{k-1} d_h t_{hk} t_{hl}, \quad 1 \leq k \leq l \leq n.$$
In particular, putting $k = l$ and using the fact that $d_1, \dots, d_n$ are all positive, we get
$$\frac{s_k}{d_k} \geq 1. \tag{36}$$
Also, since $|S| = d_1 \cdots d_n$, we have $\prod_{k=1}^n \frac{s_k}{d_k} = \frac{s_1 \cdots s_n}{|S|} < t$. Since each factor is $\geq 1$ and $t > 1$, we have
$$\frac{s_k}{d_k} < t \quad (k = 1, \dots, n).$$
Using (32) we get
$$\frac{d_k}{d_{k+1}} = \frac{d_k}{s_k} \cdot \frac{s_k}{s_{k+1}} \cdot \frac{s_{k+1}}{d_{k+1}} < t^2. \tag{37}$$
Now $s_{1l} = d_1 t_{1l}$, so that
$$|t_{1l}| = \frac{|s_{1l}|}{d_1} = \frac{|s_{1l}|}{s_1} \cdot \frac{s_1}{d_1} < t^2.$$
Let us assume that we have proved that
$$\operatorname{abs} t_{gl} < u_0, \quad 1 \leq g \leq k - 1, \ g < l \leq n, \tag{38}$$
for a constant $u_0$ depending on t and n. Then
$$\operatorname{abs} t_{kl} \leq \frac{\operatorname{abs} s_{kl}}{d_k} + \sum_{h=1}^{k-1} \frac{d_h}{d_k} \operatorname{abs} t_{hk} \operatorname{abs} t_{hl} < u_1,$$
because of (37) and (38), $u_1$ depending only on t and n. It therefore follows that if u is the maximum of $u_0$, $u_1$, $t^2$, then for the elements of D and T in (35) we have
$$0 < d_k < u\, d_{k+1}, \quad k = 1, \dots, n - 1,$$
$$\operatorname{abs} t_{kl} < u, \quad k < l. \tag{39}$$
We now define $R''_u$ to be the set of points $S \in P$ such that if $S = D[T]$, where $D = [d_1, \dots, d_n]$ is a diagonal matrix and $T = (t_{kl})$ is a triangle matrix, then D and T satisfy (39). Since the Jacobi transformation is unique, this point set is well defined.

From what we have seen above it follows that, given $R'_t$, there exists a $u = u(t, n)$ such that
$$R'_t \subset R''_u.$$
Conversely, one sees easily that, given $R''_u$, there exists a $t = t(u, n)$ such that
$$R''_u \subset R'_t.$$
In virtue of (34), it follows that
$$\lim_{u \to \infty} R''_u = P. \tag{40}$$
We now prove two lemmas useful later.

Let $S \in P$ and let t be a real number such that $S \in R'_t$. Let $S_0$ denote the matrix
$$S_0 = \begin{pmatrix} s_1 & & 0 \\ & \ddots & \\ 0 & & s_n \end{pmatrix}. \tag{41}$$
We prove
Lemma 4. There exists a constant $c = c(t, n)$ such that, whatever be the vector x,
$$\frac{1}{c} S_0[x] \leq S[x] \leq c\, S_0[x].$$

Proof. Let P denote the diagonal matrix $P = [s_1^{-1/2}, \dots, s_n^{-1/2}]$. Put $W = S[P]$. In order to prove the lemma, it is enough to show that if $x' x = 1$ then
$$\frac{1}{c} \leq W[x] \leq c.$$
Let $W = (w_{kl})$. Then $w_{kl} = s_{kl} / \sqrt{s_k s_l}$. Because $S \in R'_t$ we have
$$\operatorname{abs} w_{kl} = \operatorname{abs} \frac{s_{kl}}{s_k} \sqrt{\frac{s_k}{s_l}} < t\, c_1, \quad k \leq l, \tag{42}$$
where $c_1$ depends only on t and n. W being symmetric, it follows that the elements of W are in absolute value less than a constant $c_2 = c_2(t, n)$.

Consider now the characteristic polynomial $f(\lambda) = |\lambda E - W|$. By (42) all the coefficients of the polynomial $f(\lambda)$ are bounded in absolute value by a constant $c_3 = c_3(t, n)$. Also, since $W > 0$, the eigenvalues of W are bounded by a $c_4 = c_4(t, n)$. Let $\lambda_1, \dots, \lambda_n$ be these eigenvalues. Then
$$\lambda_1 \cdots \lambda_n = |W| = \frac{|S|}{s_1 \cdots s_n} > t^{-1},$$
which means that there exists a constant $c_5 = c_5(t, n)$ such that
$$\lambda_i > c_5(t, n), \quad i = 1, \dots, n.$$
(6) then gives the result of lemma 4.
Next we prove

Lemma 5. If $S \in R'_t$ and $S = \begin{pmatrix} S_1 & S_{12} \\ S_{12}' & S_2 \end{pmatrix}$, then $S_1^{-1} S_{12}$ has all its elements bounded in absolute value by a constant depending only on t and n.

Proof. By the Jacobi transformation we have $S = D[T]$. Since $R'_t \subset R''_u$ for $u = u(t, n)$, the elements of T are $\leq u$ in absolute value. Write
$$T = \begin{pmatrix} T_1 & T_{12} \\ 0 & T_2 \end{pmatrix}, \qquad D = \begin{pmatrix} D_1 & 0 \\ 0 & D_2 \end{pmatrix}$$
where $T_1$ and $D_1$ have the same number of rows and columns as $S_1$. We have $S_1 = D_1[T_1]$ and $S_{12} = T_1' D_1 T_{12}$, so that
$$S_1^{-1} S_{12} = T_1^{-1} T_{12}.$$
Since $T_1$ is a triangle matrix, so is $T_1^{-1}$, and its elements are $\leq u_1$ in absolute value, $u_1 = u_1(t, n)$. The elements of $T_{12}$ are already $\leq u$. Our lemma is proved.
We are now ready to prove the following important

Theorem 5. Let S and T be two matrices in $R'_t$. Let G be an integral matrix such that 1) $S[G] = T$ and 2) $\operatorname{abs} |G| < t$. Then the elements of G are less, in absolute value, than a constant c depending only on t and n.

Proof. The constants $c_1, c_2, \dots$ occurring in the following proof depend only on t and n. Also, "bounded" shall mean bounded in absolute value by such constants.

Let $G = (g_{kl})$ and let $g_1, \dots, g_n$ denote the n columns of G. We then have
$$S[g_l] = t_l, \quad l = 1, \dots, n.$$
Introducing the positive diagonal matrix of lemma 4, we obtain
$$S_0[g_l] \leq c_1 S[g_l] = c_1 t_l.$$
But $S_0[g_l] = \sum_k s_k g_{kl}^2$, so that
$$s_k g_{kl}^2 \leq c_1 t_l, \quad k, l = 1, \dots, n. \tag{43}$$
Consider now the matrix G. Since $|G| \neq 0$, there exists in its expansion a non-zero term. That means there is a permutation $l_1, \dots, l_n$ of 1, 2, ..., n such that
$$g_{1 l_1} g_{2 l_2} \cdots g_{n l_n} \neq 0.$$
From (43) therefore we get
$$s_k \leq s_k g_{k l_k}^2 \leq c_1 t_{l_k}, \quad k = 1, \dots, n.$$
Consider now the integers $k, k + 1, \dots, n$ and $l_k, l_{k+1}, \dots, l_n$. All of the latter cannot be $> k$. So there is an $i \geq k$ such that $l_i \leq k$. Hence
$$s_i \leq c_1 t_{l_i}.$$
So, since S and T are in $R'_t$,
$$s_k \leq c_2 t_k, \quad k = 1, \dots, n. \tag{44}$$
On the other hand,
$$\prod_{k=1}^n \frac{t_k}{s_k} = \frac{t_1 \cdots t_n}{|T|} \cdot \frac{|S|}{s_1 \cdots s_n} \cdot |G|^2,$$
and all the factors on the right are bounded. Therefore
$$\prod_{k=1}^n \frac{t_k}{s_k} < c_3.$$
Using (44), it follows that
$$t_k \leq c_4 s_k \quad (k = 1, 2, \dots, n). \tag{45}$$
Combining (43) and (45) we have the inequality
$$s_k g_{kl}^2 < c_5 s_l, \quad k, l = 1, \dots, n. \tag{46}$$
Let p now be defined to be the largest integer such that
$$s_k \geq c_5 s_l, \quad k \geq p, \ l \leq p - 1. \tag{47}$$
If $p = 1$, this condition does not exist. From the definition of p it follows that for every integer g with $p + 1 \leq g \leq n$, there exists a $k_g \geq g$ and an $l_g < g$ such that
$$s_{k_g} < c_5 s_{l_g}. \tag{48}$$
This holds for $p = 1$; but if $p = n$, it does not exist.
Let $c_6$ be a constant such that
$$s_k < c_6 s_l, \quad k \leq l. \tag{49}$$
This exists since $S \in R'_t$. Using (48) and (49) and putting $c_7 = c_5 c_6^2$ we have
$$s_g < c_7 s_{g-1}, \quad g \geq p + 1. \tag{50}$$
(49) and (50) give the important inequality
$$\frac{1}{c_8} < \frac{s_k}{s_l} < c_8, \quad k \geq p, \ l \geq p. \tag{51}$$
Using (46) and (47), we have, if $k \geq p$ and $l \leq p - 1$,
$$s_k g_{kl}^2 < c_5 s_l \leq s_k.$$
Since $s_k \neq 0$, we have $g_{kl}^2 < 1$. But the $g_{kl}$ are integers. Hence
$$g_{kl} = 0, \quad k \geq p, \ l \leq p - 1. \tag{52}$$
Let us split G up into four parts:
$$G = \begin{pmatrix} G_1 & G_{12} \\ G_{21} & G_2 \end{pmatrix}$$
where $G_1$ is a square matrix of order $p - 1$. (52) then shows that
$$G_{21} = 0. \tag{53}$$
Let now $k \geq p$, $l \geq p$. Then from (46) and (51) we have
$$g_{kl}^2 < c_5 \frac{s_l}{s_k} < c_5 c_8, \tag{54}$$
which means that the elements of $G_2$ are bounded.

Note that if $p = 1$, our theorem is already proved by (54). So we may assume $p > 1$.
In order to prove the theorem we use induction. If $n = 1$, the theorem is trivially true. Assume the theorem therefore proved for $n - 1$ instead of n. Split S and T in the form
$$S = \begin{pmatrix} S_1 & S_{12} \\ S_{12}' & S_2 \end{pmatrix}, \qquad T = \begin{pmatrix} T_1 & T_{12} \\ T_{12}' & T_2 \end{pmatrix}$$
where $S_1$ and $T_1$ are $p - 1$ rowed square matrices. Because $S[G] = T$, we get
$$S_1[G_1] = T_1, \qquad G_1' S_1 G_{12} + G_1' S_{12} G_2 = T_{12}. \tag{55}$$
By the considerations above $G_{21} = 0$, therefore $|G| = |G_1| \, |G_2|$. Since G is integral, it follows that $\operatorname{abs} |G_1| < t$. Also, $S_1$ and $T_1$ are $p - 1$ rowed square matrices which are in $R'_{t, p-1}$, where $R'_{t, p-1}$ is the same as $R'_t$ with $p - 1$ instead of n. By the induction hypothesis and (55) we see that $G_1$ is bounded.

Using the fact that $G_1' S_1 = T_1 G_1^{-1}$ we get
$$G_{12} = G_1 T_1^{-1} T_{12} - S_1^{-1} S_{12} G_2.$$
Using lemma 5, it follows that the elements of $G_{12}$ are bounded.

Our theorem is completely proved.
In particular,

Corollary. If S and T are in $R'_t$ and $S[U] = T$ for a unimodular U, then U belongs to a finite set of unimodular matrices determined completely by t and n.
5 Space of reduced matrices
We have seen that, given any matrix $T > 0$, there exists in the class of T a half-reduced matrix S. Consider now the $2^n$ unimodular matrices of the form
$$A = \begin{pmatrix} a_1 & & 0 \\ & \ddots & \\ 0 & & a_n \end{pmatrix}$$
where $a_i = \pm 1$. If S is half-reduced, then $S[A]$ also is half-reduced. For, if $x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$ is an admissible k-vector, then $Ax = \begin{pmatrix} \pm x_1 \\ \vdots \\ \pm x_n \end{pmatrix}$ is also an admissible k-vector. Also, the diagonal elements of S and $S[A]$ are the same. We shall choose A properly so that $S[A]$ satisfies some further conditions.
Since S [A] = S [A], there is no loss in generality if we assume
a
1
= 1. Denote by
1
, . . . ,
n
the n columns of the matrix A. Consider
now

1
S
2
. This equals a
2
s
12
. If s
12
0 choose a
2
so that
a
2
s
12
0.
If s
12
= 0, a
2
may be chosen arbitrarily. Having chosen a
1
, . . . , a
k
con-
sider

k
S
k+1
= a
k
a
k+1
s
kk+1
. Since a
k
has been chosen, we choose
a
k+1
= 1 by the condition
a
k
a
k+1
s
kk+1
0,
provided s
kk+1
0. If s
kk+1
= 0, a
k+1
may be arbitrarily chosen. We 57
have thus shown that in each class of equivalent matrices, there is a
matrix S satisfying
) s
1
> 0
) s
kk+1
0, k = 1, . . . , n 1.
) S [x] s
k
0, k = 1, . . . , n for every admissible k-vector.
We shall call a matrix satisfying the above conditions a reduced ma-
trix, reduced in the sense of Minkowski. Let R denote the set of reduced
matrices, then
R R
0
. (56)
Since the elements of $S \in P$ are coordinates of the point $S$, the conditions β) and γ) above show that $\mathscr{R}$ is defined by the intersection of an infinity of closed half spaces of $P$. We shall denote the linear functions in β) and γ) by $L_r$, $r = 1, 2, 3, \ldots$. It is to be noted that we exclude the case when an $L_r$ is identically zero. This happens when, in γ), $x$ is the admissible $k$-vector equal to $\pm e_k$. We may therefore say that $\mathscr{R}$ is defined by
$$\alpha)\ s_1 > 0, \qquad \beta)\ L_r \ge 0, \quad r = 1, 2, 3, \ldots \qquad (57)$$
We shall see presently that the infinite system of linear inequalities can be replaced by a finite number of them.

In order to study some properties of the reduced space $\mathscr{R}$, we first make some definitions.

Definition. i) $S$ is said to be an inner point of $\mathscr{R}$ if $s_1 > 0$ and $L_r(S) > 0$ for all $r$.

ii) It is said to be a boundary point of $\mathscr{R}$ if $s_1 > 0$, $L_r(S) \ge 0$ for all $r$, and $L_r(S) = 0$ for at least one $r$.

iii) It is said to be an outer point of $\mathscr{R}$ if $s_1 > 0$ and $L_r(S) < 0$ for at least one $r$.

We first show that $\mathscr{R}$ has inner points.

Consider the quadratic form
$$S[x] = x_1^2 + \cdots + x_n^2 + (p_1 x_1 + \cdots + p_n x_n)^2$$
where $p_1, \ldots, p_n$ are $n$ real numbers satisfying
$$0 < p_1 < p_2 < \cdots < p_n < 1.$$
The matrix $S = (s_{kl})$ is then given by
$$s_k = 1 + p_k^2, \quad k = 1, \ldots, n; \qquad s_{kl} = p_k p_l, \quad k \ne l.$$
We assert that $S$ is an inner point of $\mathscr{R}$. In the first place
$$s_1 > 0, \qquad s_{k,k+1} = p_k p_{k+1} > 0, \quad k = 1, \ldots, n-1.$$
Next let $x$ be an admissible $k$-vector not equal to $\pm e_k$. Then at least one of $x_k, \ldots, x_n$ has to be different from zero. If at least two of them, say $x_j$ and $x_l$, are different from zero, so that $k \le l < j \le n$, then
$$S[x] \ge x_l^2 + x_j^2 \ge 2 > 1 + p_k^2 = s_k.$$
The worst case is when all of $x_k, \ldots, x_n$ except one are zero. If $x_k = \pm 1$ and $x_l = 0$ for $l > k$, then $x_i \ne 0$ for some $i < k$ since $x \ne \pm e_k$, and then $S[x] \ge 2$. Let $x_i = 0$ for all $i$ except $i = l > k$, so that $x = \pm e_l$. Then
$$S[x] = 1 + p_l^2 > 1 + p_k^2 = s_k.$$
This proves that $S$ is an inner point.
We now prove

Theorem 6. The set of inner points of $\mathscr{R}$ is an open set in $P$.

Proof. Let $S$ be an inner point of $\mathscr{R}$. Then $s_1 > 0$ and $L_r > 0$ for all $r$. The inequalities $s_{k,k+1} > 0$, being finitely many, can be satisfied also at all points of a sufficiently small neighbourhood of $S$. We therefore consider the other, infinitely many, inequalities. Let $S^*$ be a point close to $S$, so that the elements of $S^* - S$ are near zero. Let $x$ be an admissible $k$-vector $\ne \pm e_k$. Consider $S^*[x] - s_k^*$, where $S^* = (s_{kl}^*)$. Let $\varepsilon > 0$ be a real number. We can choose $S^*$ close to $S$ so that $(S^* - S)[x] \ge -\varepsilon x'x$. Now
$$S^*[x] - s_k^* = (S^* - S)[x] + S[x] - s_k^* \ge -\varepsilon x'x + S[x] - s_k^*.$$
If $\lambda > 0$ is the smallest eigenvalue of $S$, then we may assume $\varepsilon$ small enough so that
$$S^*[x] - s_k^* \ge \frac{\lambda}{2}\, x'x - s_k^*.$$
There are only finitely many integral vectors $x$ with $\frac{\lambda}{2} x'x \le s_k^*$. We may therefore choose $S^*$ close enough to $S$ such that
$$S^*[x] - s_k^* > 0$$
for all admissible $k$-vectors $x$. Doing this for $k = 1, \ldots, n$ we see that there is a small sphere containing $S$ which consists entirely of points $S^*$. These, by our construction, are inner points. This proves our contention.
Consider now the outer points of $\mathscr{R}$. Let $S$ be one such. Then at least for one $r$, $L_r(S) < 0$. Since the $L_r$ are linear functions of the coordinates and hence continuous, we may choose a neighbourhood of $S$ consisting of points for all of which $L_r < 0$. This means that the set of outer points of $\mathscr{R}$ is open. Note that here it is enough to deal with one inequality alone, unlike the previous case where one had to deal with all the $L_r$'s.

Let now $S$ be a boundary point of $\mathscr{R}$. Let $S^*$ be an inner point. Consider the points $T_\lambda$ defined by
$$T_\lambda = \lambda S^* + (1 - \lambda) S.$$
These are points on the line joining $S$ and $S^*$, and every neighbourhood of $S$ contains points $T_\lambda$ with $\lambda > 0$ and points $T_\lambda$ with $\lambda < 0$. Consider the points $T_\lambda$ with $0 < \lambda \le 1$. These are the points between $S$ and $S^*$. Let $L_r$ be one of the linear polynomials defining $\mathscr{R}$. Now $L_r(S) \ge 0$ and $L_r(S^*) > 0$ for all $r$. Thus
$$L_r(T_\lambda) = \lambda L_r(S^*) + (1 - \lambda) L_r(S) > 0.$$
Hence $T_\lambda$ is an inner point.

Let now $T_\lambda$ be a point with $\lambda < 0$. Since $S$ is a boundary point, there is an $r$ such that $L_r(S) = 0$. For this $r$,
$$L_r(T_\lambda) = \lambda L_r(S^*) < 0,$$
which proves that $T_\lambda$ is an outer point.

Since linear functions are continuous, the limit of a sequence of points of $\mathscr{R}$ is again a point of $\mathscr{R}$. This proves

Theorem 7. $\mathscr{R}$ is a closed set in $P$, and the boundary points of $\mathscr{R}$ constitute the frontier of $\mathscr{R}$ in the topology of $P$.

We now prove the following

Theorem 8. Let $S$ and $S'$ be two points of $\mathscr{R}$ such that $S[U] = S'$ for a unimodular $U \ne \pm E$. Then $S$ and $S'$ are boundary points of $\mathscr{R}$ and $U$ belongs to a finite set of unimodular matrices determined completely by the integer $n$.
Proof. The second part of the theorem follows readily from the Corollary to Theorem 5. To prove the first part, we consider two cases: (1) $U$ is a diagonal matrix, and (2) $U$ is not a diagonal matrix.

Let $U$ be a diagonal matrix, $U = [a_1, \ldots, a_n]$, with $a_i = \pm 1$. We may assume, since $S[U] = S[-U]$, that $a_1 = 1$. Let $a_{k+1}$ be the first element $= -1$. Then, with the usual notation,
$$s'_{k,k+1} = -s_{k,k+1}.$$
But $S$ and $S'$ being points of $\mathscr{R}$ we have
$$0 \le s'_{k,k+1} = -s_{k,k+1} \le 0,$$
which means that $s_{k,k+1} = 0 = s'_{k,k+1}$. Hence $S$ and $S'$ are both boundary points of $\mathscr{R}$.

Suppose $U$ is not a diagonal matrix and denote its columns by $u_1, \ldots, u_n$. Let $u_k$ be the first column different from the corresponding column of a diagonal matrix. Hence $u_i = \pm e_i$, $i = 1, \ldots, k-1$. (Note that $k$ may very well be equal to 1.) Then
$$U = \begin{pmatrix} D & * \\ 0 & V \end{pmatrix}$$
where $D$ is a diagonal matrix which is unimodular and $V$ is a unimodular matrix. Furthermore
$$U^{-1} = \begin{pmatrix} D^{-1} & * \\ 0 & V^{-1} \end{pmatrix}$$
is unimodular. Let $w_k$ be the $k$-th column of $U^{-1}$. Then $w_k \ne \pm e_k$. Now
$$s'_k = S[u_k] \ge s_k \qquad \text{and} \qquad s_k = S'[w_k] \ge s'_k,$$
which proves that $S[u_k] - s_k = 0 = S'[w_k] - s'_k$, and therefore $S$ and $S'$ are boundary points of $\mathscr{R}$.
Suppose now that $S$ is a boundary point of $\mathscr{R}$. By Theorem 7, therefore, there exists a sequence of outer points $S_1, S_2, \ldots$ converging to $S$. If the suffix $k$ is sufficiently large, then all the $S_k$'s lie in a neighbourhood of $S$. Therefore they are all contained in an $\mathscr{R}_t$ for some $t$. For each $k$ let $U_k$ be a unimodular matrix such that $S_k[U_k]$ is in $\mathscr{R}$. Since $\mathscr{R} \subset \mathscr{R}_t$, we have, for all sufficiently large $k$, that $S_k$ and $S_k[U_k]$ are both in $\mathscr{R}_t$. It follows therefore, by Theorem 5, that the $U_k$'s belong to a finite set of matrices. There exists therefore a subsequence $S_{k_1}, S_{k_2}, \ldots$ converging to $S$ such that one unimodular matrix $U$ among these finitely many carries every $S_{k_i}$ into $\mathscr{R}$. Also $\lim_{i \to \infty} S_{k_i} = S$, and therefore $\lim S_{k_i}[U] = S[U]$ is a point of $\mathscr{R}$. Since $S$ is a point of $\mathscr{R}$, it follows from the above theorem that $S[U]$ is also a boundary point of $\mathscr{R}$. Furthermore $U \ne \pm E$, since the $S_k$ are all outer points and $S_k[U] \in \mathscr{R}$. Hence

Theorem 9. If $S$ is a boundary point of $\mathscr{R}$, there exists a unimodular matrix $U \ne \pm E$, belonging to the finite set determined by Theorem 8, such that $S[U]$ is again a boundary point of $\mathscr{R}$.
By Theorem 8, there exist finitely many unimodular matrices, say $U_1, \ldots, U_g$, which occur in the transformation of boundary points into boundary points. If $u_k$ is the $k$-th column of one of these matrices, then $u_k$ is an admissible $k$-vector. Suppose it is $\ne \pm e_k$. Then for all $S \in \mathscr{R}$, $S[u_k] - s_k \ge 0$. Let us denote by $L_1, L_2, \ldots, L_h$ all the linear forms, not identically zero, which result from all the $u_k$'s, $k = 1, \ldots, n$, occurring in the set $U_1, \ldots, U_g$. Let $L_1, \ldots, L_h$ also include the linear forms $s_{k,k+1}$, $k = 1, \ldots, n-1$; then from the above we see that for a boundary point $S$ of $\mathscr{R}$ there is an $r \le h$ such that $L_r(S) = 0$ (not identically). Also, for all points of $\mathscr{R}$,
$$s_1 > 0, \qquad L_1(S) \ge 0, \ldots, L_h(S) \ge 0. \qquad (59)$$
But, what is more important, we have

Theorem 10. A point $S$ of $P$ belongs to $\mathscr{R}$ if and only if $s_1 > 0$ and $L_r(S) \ge 0$ for $r = 1, \ldots, h$.
Proof. The interest in the theorem is in the sufficiency of the conditions (59).

Let $S$ be a point of $P$ satisfying (59). Suppose $S$ is not in $\mathscr{R}$. Since it is in $P$, it is an outer point of $\mathscr{R}$. Therefore $L_r(S) < 0$ for some $r > h$. Let $S^*$ be an inner point of $\mathscr{R}$. Consider the points $T_\lambda$,
$$T_\lambda = \lambda S + (1 - \lambda) S^*,$$
for $0 < \lambda < 1$, in the open segment joining $S$ and $S^*$. Since the set of inner points of $\mathscr{R}$ is open and $S$ is assumed to be an outer point, there exists a $\lambda_0$ such that $T_{\lambda_0}$ is on the frontier of $\mathscr{R}$ and $0 < \lambda_0 < 1$. By our remarks above, there exists for $T_{\lambda_0}$ an $s \le h$ such that $L_s(T_{\lambda_0}) = 0$. This means that
$$0 = L_s(T_{\lambda_0}) = \lambda_0 L_s(S) + (1 - \lambda_0) L_s(S^*).$$
But $\lambda_0 L_s(S) \ge 0$ and $(1 - \lambda_0) L_s(S^*) > 0$, so that $L_s(T_{\lambda_0}) > 0$. This is a contradiction. Therefore $S \in \mathscr{R}$.

We have therefore proved that $\mathscr{R}$ is bounded by a finite number of planes all passing through the origin. $\mathscr{R}$ is thus a pyramid.
Let now $\bar{\mathscr{R}}$ denote the closure of $\mathscr{R}$ in the Euclidean space of the $n(n+1)/2$ coordinates $s_{kl}$. At every point $S$ of $\bar{\mathscr{R}}$ one has, because of the continuity of linear functions,
$$s_1 \ge 0, \qquad L_r(S) \ge 0, \quad r = 1, 2, 3, \ldots$$
If $S \in \bar{\mathscr{R}}$ but not in $\mathscr{R}$, then $s_1 = 0$. In virtue of the other inequalities, we see that
$$S = \begin{pmatrix} 0 & 0 \\ 0 & S_1 \end{pmatrix}.$$
$S_1$ again has similar properties. Thus either $S = 0$ or
$$S = \begin{pmatrix} 0 & 0 \\ 0 & S_r \end{pmatrix}$$
where $S_r$ is non-singular and is a reduced matrix of order $r$, $0 < r < n$. We thus see that the points of $\bar{\mathscr{R}}$ which are not in $\mathscr{R}$ are the semi-positive reduced matrices.
Consider now the space $P$ and the group $\Gamma$. If $U \in \Gamma$, the mapping $S \to S[U]$ is topological and takes $P$ onto itself. For $U \in \Gamma$ denote by $\mathscr{R}_U$ the set of matrices $S[U]$ with $S \in \mathscr{R}$. Because $U$ and $-U$ lead to the same mapping, we have $\mathscr{R}_U = \mathscr{R}_{-U}$. Since in every class of matrices there is a reduced matrix, we see that

1) $\bigcup_U \mathscr{R}_U = P$, where in the union we identify $U$ and $-U$. Thus the $\mathscr{R}_U$'s cover $P$ without gaps.

Let $U$ and $V$ be in $\Gamma$ and $U \ne \pm V$. Consider the intersection of $\mathscr{R}_U$ and $\mathscr{R}_V$. Let $S \in \mathscr{R}_U \cap \mathscr{R}_V$. Then $T_1 = S[U^{-1}]$ and $T_2 = S[V^{-1}]$ are both points of $\mathscr{R}$. Moreover $T_1 = T_2[VU^{-1}]$ and $VU^{-1} \ne \pm E$, so that $T_1$ is a boundary point of $\mathscr{R}$. Since the mapping $S \to S[U]$ is topological, $S$ is a boundary point of $\mathscr{R}_U$ and also of $\mathscr{R}_V$. Hence

2) If $UV^{-1} \ne \pm E$ and $U$ and $V$ are unimodular, then $\mathscr{R}_U$ and $\mathscr{R}_V$ can have at most boundary points in common.

In particular, if $U \ne \pm E$, $\mathscr{R}$ and $\mathscr{R}_U$ can have only boundary points in common. If $S \in \mathscr{R} \cap \mathscr{R}_U$, then $S$ and $S[U^{-1}]$ are in $\mathscr{R}$ and, by Theorem 9, $U$ belongs to a finite set of matrices depending only on $n$. If we call $\mathscr{R}_U$ a neighbour of $\mathscr{R}$ when $\mathscr{R} \cap \mathscr{R}_U$ is not empty, then we have proved

3) $\mathscr{R}$ has only finitely many neighbours.

Let $K$ now be a compact subset of $P$. It is therefore bounded in $P$, and hence there exists a $t > 0$ such that $K \subset \mathscr{R}_t$. Suppose $\mathscr{R}_U$, for a unimodular $U$, intersects $K$. Let $S \in \mathscr{R}_U \cap K$. There is then a $T \in \mathscr{R}$ such that $T[U] = S$. For large $t$, $\mathscr{R} \subset \mathscr{R}_t$. Then $T$ and $S$ are both in $\mathscr{R}_t$ and $S = T[U]$. Therefore $U$ belongs to a finite set of matrices. Hence there exist a finite number of unimodular matrices, say $U_1, \ldots, U_p$, such that
$$K \subset \bigcup_{i=1}^{p} \mathscr{R}_{U_i}.$$
Hence

4) Every compact subset of $P$ is covered by a finite number of images $\mathscr{R}_U$ of $\mathscr{R}$.

We have thus obtained the fundamental results of Minkowski's reduction theory.
We now give a simple application.

Suppose $S$ is a positive, reduced, integral matrix. Then, since $s_1 s_2 \cdots s_n \le b_n |S|$, where $s_1, \ldots, s_n$ are positive and $b_n$ depends only on $n$, it follows that for a given $|S|$ there exist only finitely many integer values for $s_1, \ldots, s_n$. Also
$$-s_k \le 2 s_{kl} \le s_k, \qquad k < l,$$
so that, the $s_{kl}$ being integers, there are finitely many values of $s_{kl}$ satisfying the above inequalities. We have therefore the

Theorem 11. There exist only finitely many positive, integral, reduced matrices with a given determinant and number of rows.

Since all matrices in a class have the same determinant, and in each class there is at least one reduced matrix, we get the

Theorem 12. There exist only a finite number of classes of positive integral matrices with given determinant and number of rows.

It has to be noticed that, in virtue of property 3) above, one has, in general, only one reduced matrix in a class.
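For $n = 2$ the finiteness asserted in Theorem 11 can be made completely explicit. The Python sketch below (ours, not from the text) enumerates the positive integral reduced matrices of a given determinant $D = ac - b^2$, using the binary reduction conditions $a > 0$, $0 \le 2b \le a \le c$ proved in the next section, together with the bound $3a^2 \le 4D$ that they imply.

```python
def reduced_binary_matrices(D):
    """All integral (a, b, c) with a > 0, 0 <= 2b <= a <= c and ac - b**2 == D."""
    found = []
    a = 1
    while 3 * a * a <= 4 * D:           # reduction forces a^2 <= (4/3) D
        for b in range(0, a // 2 + 1):  # 0 <= 2b <= a
            num = D + b * b             # c = (D + b^2)/a must be an integer >= a
            if num % a == 0 and num // a >= a:
                found.append((a, b, num // a))
        a += 1
    return found

print(reduced_binary_matrices(1))   # [(1, 0, 1)], i.e. x^2 + y^2
print(reduced_binary_matrices(12))
```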
6 Binary forms

We now study the particular case $n = 2$.

Let $S = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$ be a positive binary matrix and $x = \begin{pmatrix} x \\ y \end{pmatrix}$ a vector. The quadratic form $S[x] = ax^2 + 2bxy + cy^2$ is positive definite. By the results of the previous section we see that, if $S$ is reduced, then
$$a > 0, \qquad 0 \le 2b \le a \le c. \qquad (60)$$
We shall now prove that any matrix $S$ satisfying (60) is reduced.

Let $x = (x, y)'$ be an admissible one-vector. If $y = 0$, then $x = \pm 1$. If $y \ne 0$, then $x$ and $y$ are coprime integers. Consider the value $ax^2 + 2bxy + cy^2$ for admissible one-vectors. We assert that $ax^2 + 2bxy + cy^2 \ge a$. In the first case $S[x] = a$. In the second case, because of (60),
$$ax^2 + 2bxy + cy^2 \ge a(x^2 - |xy| + y^2).$$
But $x$ and $y$ are not both zero. Thus $x^2 - |xy| + y^2 \ge 1$, which means that $S[x] \ge a$.

Let now $x = (x, y)'$ be an admissible two-vector. Then $y = \pm 1$. If $x = 0$, then $S[x] = c$. Let $x \ne 0$; we may take $y = -1$, so that
$$S[x] = ax^2 - 2bx + c = c + x(ax - 2b).$$
Because of (60) it follows that $x(ax - 2b) \ge 0$, so $S[x] \ge c$. Thus $S$ satisfies conditions I) and II) of half reduction. Also $b \ge 0$. This proves that $S > 0$ and is reduced.

(60) thus gives the necessary and sufficient conditions for a binary quadratic form to be reduced.
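Conditions (60) also yield the classical reduction algorithm for positive binary forms: alternately translate $b$ into the range $|2b| \le a$ (the unimodular substitution $x \to x - ty$) and exchange the outer coefficients when $a > c$, finishing with the sign normalization $b \ge 0$, which uses the improper substitution $y \to -y$. The following Python sketch is our illustration of this procedure, not a construction from the text.

```python
def reduce_binary(a, b, c):
    """Reduce a positive binary form ax^2 + 2bxy + cy^2 to 0 <= 2b <= a <= c."""
    assert a > 0 and a * c - b * b > 0
    while True:
        t = round(b / a)                 # x -> x - ty brings b into [-a/2, a/2]
        b, c = b - t * a, c - 2 * t * b + t * t * a
        if a > c:
            a, b, c = c, -b, a           # (x, y) -> (y, -x), proper, det = 1
        else:
            break
    if b < 0:
        b = -b                           # y -> -y, an improper substitution
    return a, b, c

print(reduce_binary(10, 7, 5))           # (1, 0, 1): equivalent to x^2 + y^2
```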
In the theory of binary quadratic forms one sometimes discusses equivalence not under all unimodular matrices, but only with respect to those unimodular matrices whose determinant is unity. We say that two binary matrices $S$ and $T$ are properly equivalent if there is a unimodular matrix $U$ such that
$$S = T[U], \qquad |U| = 1. \qquad (61)$$
The properly equivalent matrices constitute a proper class. Note that the properly unimodular matrices form a group. Two matrices $S$ and $T$ which are equivalent in the sense of the previous sections, but which do not satisfy (61), are said to be improperly equivalent. Note that improper equivalence is not an equivalence relation.

In order to obtain the reduction theory for proper equivalence we proceed thus: if $S_1 = \begin{pmatrix} a_1 & b_1 \\ b_1 & c_1 \end{pmatrix}$ is positive, then there is a unimodular matrix $U$ such that $S = S_1[U] = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$ satisfies (60). If $|U| = 1$ we call $S$ a properly reduced matrix. If $|U| = -1$, then consider $W$,
$$W = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \qquad (62)$$
Then $V = UW$ has the property $|V| = 1$. Now $S[W] = \begin{pmatrix} a & -b \\ -b & c \end{pmatrix}$, and we call this properly reduced. In any case we see that "$S$ is properly reduced" means
$$a > 0, \qquad 0 \le |2b| \le a \le c. \qquad (63)$$
If we denote by $\mathscr{R}$ the reduced domain, that is the set of reduced matrices in the old sense, and by $\mathscr{R}^*$ the properly reduced domain, one sees immediately that
$$\mathscr{R}^* = \mathscr{R} + \mathscr{R}_W$$
where $W$ has the meaning in (62).
We shall now give two applications.

Let $S = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$ be a positive integral matrix. Because of the conditions (63) and the additional condition (31), it follows that for given $|S|$ there exist only finitely many properly reduced integral matrices. Consider now the case $|S| = 1$. Then, because of (31),
$$ac \le \frac{4}{3} \qquad (64)$$
and hence the only integers $a, b, c$ satisfying (63) and (64) are $a = c = 1$, $b = 0$. This proves

i) Every binary integral positive quadratic form of determinant unity is properly equivalent to $x^2 + y^2$.

Let now $p$ be a prime number $> 2$, and let $p$ be representable by the quadratic form $x^2 + y^2$. We assert that then $p \equiv 1 \pmod 4$. For, if $x$ and $y$ are integers such that
$$x^2 + y^2 = p,$$
then $x$ and $y$ cannot be congruent to each other mod 2. So let $x$ be odd and $y$ even. Then $p = x^2 + y^2 \equiv 1 \pmod 4$.

We will now prove that, conversely, if $p \equiv 1 \pmod 4$, the form $x^2 + y^2$ represents $p$ (integrally). For, let $\rho$ be a primitive root mod $p$. There is then an integer $k$, $1 \le k < p-1$, such that
$$\rho^k \equiv -1 \pmod p.$$
This means that $\rho^{2k} \equiv 1 \pmod p$ and, by the definition of a primitive root, we get $k = (p-1)/2$. But $p \equiv 1 \pmod 4$, so that $k$ is an even integer. Therefore
$$-1 \equiv (\rho^{k/2})^2 \pmod p.$$
There is thus an integer $b$, $1 \le b \le p-1$, such that $b^2 \equiv -1 \pmod p$. Put
$$b^2 = -1 + lp, \qquad l \text{ an integer}.$$
Consider the binary form $px^2 + 2bxy + ly^2$. Its determinant is $pl - b^2 = 1$. By the result obtained in i), this form is properly equivalent to $x^2 + y^2$. But $px^2 + 2bxy + ly^2$ represents $p$ ($x = 1$, $y = 0$). Therefore $x^2 + y^2$ represents $p$. Thus

ii) If $p$ is a prime $> 2$, then $x^2 + y^2 = p$ has a solution if and only if $p \equiv 1 \pmod 4$.

Results i) and ii) are due originally to Lagrange.
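The proof of ii) is constructive and translates into a short computation: find $b$ with $b^2 \equiv -1 \pmod p$ (any quadratic non-residue $g$ gives $b = g^{(p-1)/4}$, so a primitive root is not needed), and then, instead of tracking the reduction substitutions explicitly, one may use the classical Hermite-Serret shortcut of running the Euclidean algorithm on $(p, b)$ until the remainders drop below $\sqrt{p}$. The Python sketch below is our illustration of this variant, not a procedure from the text.

```python
def two_squares(p):
    """Write a prime p = 1 (mod 4) as x^2 + y^2 (Hermite-Serret variant)."""
    assert p % 4 == 1
    # find b with b^2 = -1 (mod p): b = g^((p-1)/4) for a quadratic non-residue g
    g = 2
    while pow(g, (p - 1) // 2, p) != p - 1:   # Euler's criterion
        g += 1
    b = pow(g, (p - 1) // 4, p)
    # Euclidean algorithm on (p, b): stop at the first remainder < sqrt(p)
    a, r = p, b
    while r * r > p:
        a, r = r, a % r
    x, y = r, a % r
    assert x * x + y * y == p
    return x, y

print(two_squares(13))  # (3, 2): 9 + 4 = 13
```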
Let $S[x] = ax^2 + 2bxy + cy^2$ be a real, positive, binary quadratic form. We can write
$$S[x] = a(x - \zeta y)(x - \bar{\zeta} y) \qquad (65)$$
where $\zeta$ is a root, necessarily complex, of the polynomial $az^2 + 2bz + c$ and $\bar{\zeta}$ is its conjugate. Let $\zeta = \xi + i\eta$ have positive imaginary part. Let $V = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}$ be a real matrix of unit determinant and consider the mapping $S \to S[V]$. Then $S[Vx]$ is given by
$$S[Vx] = a^*(x - \zeta^* y)(x - \bar{\zeta}^* y) \qquad (66)$$
where $a^* = a(\alpha - \zeta\gamma)(\alpha - \bar{\zeta}\gamma)$ is necessarily real and positive, and
$$\zeta^* = V^{-1}(\zeta) = \frac{\delta\zeta - \beta}{-\gamma\zeta + \alpha}. \qquad (67)$$
It is easy to see that $\zeta^*$ also has positive imaginary part. Let us also observe that
$$\zeta = \frac{-b + i\sqrt{|S|}}{a}.$$
Consider now the relationship between $S$ and $\zeta$. If $S$ is given, then (65) determines a $\zeta$ with positive imaginary part. Now, given $\zeta$, (65) itself shows that $S$ is determined only upto a real factor. This real factor can be determined by insisting that the associated quadratic forms have a given determinant. In particular, if $|S| = 1$ then $\zeta$ is uniquely determined by $S$ and conversely. If $\zeta = \xi + i\eta$, $\eta > 0$, then the $S$ is given by
$$S = \begin{pmatrix} \eta^{-1} & 0 \\ 0 & \eta \end{pmatrix}\left[\begin{pmatrix} 1 & -\xi \\ 0 & 1 \end{pmatrix}\right]. \qquad (68)$$
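As a numerical check of this (1, 1) correspondence, the following sketch (ours, not from the text) passes from a positive $S$ of determinant 1 to $\zeta = (-b + i\sqrt{|S|})/a$ and back through (68):

```python
import numpy as np

def S_to_zeta(S):
    a, b = S[0, 0], S[0, 1]
    return (-b + 1j * np.sqrt(np.linalg.det(S))) / a

def zeta_to_S(zeta):
    xi, eta = zeta.real, zeta.imag
    D = np.diag([1 / eta, eta])
    C = np.array([[1.0, -xi], [0.0, 1.0]])
    return C.T @ D @ C                 # D[C] in the notation S[A] = A'SA

S = np.array([[2.0, 1.0], [1.0, 1.0]])  # positive, determinant 1
zeta = S_to_zeta(S)
assert zeta.imag > 0
assert np.allclose(zeta_to_S(zeta), S)
```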
Let $P$ denote the space of positive binary forms of unit determinant and $G$ the upper half complex $\zeta$-plane. By what we have seen above, the mapping $S \to \zeta$ in (65) is (1, 1) both ways. Let $\Lambda$ denote the group of proper unimodular matrices. It acts on $G$ as a group of mappings
$$U(\zeta) = \frac{\alpha\zeta + \beta}{\gamma\zeta + \delta}, \qquad U = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \qquad (69)$$
of $G$ onto itself. If we define two points $\zeta_1$, $\zeta_2$ in $G$ as equivalent if there is a $U \in \Lambda$ such that $\zeta_1 = U(\zeta_2)$, then the classical problem of constructing a fundamental region in $G$ for $\Lambda$ is seen to be the same as selecting from each class of equivalent points one point, so that the resulting point set has nice properties.

By means of the (1, 1) correspondence we have established in (68) between $P$ and $G$, we have $S_1 = S_2[U]$ if and only if the corresponding points $\zeta_1$, $\zeta_2$ respectively satisfy
$$\zeta_1 = U^{-1}(\zeta_2).$$
We define the fundamental region $F$ in $G$ to be the set of points $\zeta$ such that the matrices corresponding to them are properly reduced; in other words, they satisfy (63). For the $S$ in (68),
$$S[x] = \frac{1}{\eta}\left(x^2 - 2\xi xy + (\xi^2 + \eta^2)y^2\right).$$
Therefore $F$ consists of the points $\zeta = \xi + i\eta$ for which
$$|2\xi| \le 1 \le \xi^2 + \eta^2. \qquad (70)$$
This is the familiar modular region in the upper half $\zeta$-plane. That it is a fundamental region follows from the properties of the space of reduced matrices in $P$. The points $P$ and $Q$ are the complex numbers $\dfrac{\pm 1 + i\sqrt{3}}{2}$, and so for any point in $F$, $\eta \ge \dfrac{\sqrt{3}}{2}$. This means that for a positive reduced binary form $ax^2 + 2bxy + cy^2$ of determinant $d$,
$$a \le \sqrt{d}\,\frac{2}{\sqrt{3}},$$
which we had already seen in Theorem 1.
7 Reduction of lattices

Let $V$ be the Euclidean space of $n$ dimensions formed by $n$-rowed real columns
$$\alpha = \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}.$$
Let $\alpha_1, \ldots, \alpha_n$ be a basis of $V$, so that
$$\alpha_i = \begin{pmatrix} a_{1i} \\ \vdots \\ a_{ni} \end{pmatrix}, \qquad i = 1, \ldots, n.$$
Denote by $A$ the matrix $(a_{kl})$. Obviously $|A| \ne 0$.

Let $L$ be a lattice in $V$ and let $\alpha_1, \ldots, \alpha_n$ be a basis of this lattice. $L$ then consists of the elements
$$\alpha_1 g_1 + \cdots + \alpha_n g_n$$
where $g_1, \ldots, g_n$ are integers. We shall call $A$ the matrix of the lattice. Conversely, if $A$ is any non-singular $n$-rowed matrix, then the columns of $A$, as elements of $V$, are linearly independent and therefore determine a lattice.

Let $L$ be the lattice above, let $\beta_1, \ldots, \beta_n$ be any other base of $L$, and let $B$ be its matrix; then
$$B = AU$$
where $U$ is a unimodular matrix. Also, if $U$ runs through all unimodular matrices, then $AU$ runs through all bases of $L$. We now wish to single out among these bases one which has some distinguished properties.

Let us introduce in $V$ the inner product of two vectors $\alpha$ and $\beta$ by
$$\alpha\beta = a_1 b_1 + \cdots + a_n b_n$$
where $\alpha = (a_1, \ldots, a_n)'$, $\beta = (b_1, \ldots, b_n)'$. The square of the length of the vector $\alpha$ is given by
$$\alpha^2 = a_1^2 + \cdots + a_n^2.$$
Let $A$ be the matrix of a base $\alpha_1, \ldots, \alpha_n$ of $L$. Consider the positive matrix $S = A'A$. If $S$ is given, $A$ is determined only upto an orthogonal matrix $P$ on its left. For, if $A'A = A_1'A_1$, then $AA_1^{-1} = P$ is orthogonal. But multiplication on the left by an orthogonal matrix implies a rotation in $V$ about the origin.

We shall call a base $B$ of $L$ reduced if $S_1 = B'B$ is a reduced matrix. Obviously in this case
$$0 < \beta_1^2 \le \cdots \le \beta_n^2, \qquad \beta_k\beta_{k+1} \ge 0, \quad k = 1, \ldots, n-1.$$
From the way reduced matrices are determined, we see that a reduced base $\beta_1, \ldots, \beta_n$ of $L$ may be defined to be a base such that for every set of integers $x_1, \ldots, x_n$ with $(x_k, \ldots, x_n) = 1$ the vector
$$\alpha = \beta_1 x_1 + \cdots + \beta_n x_n$$
satisfies
$$\alpha^2 \ge \beta_k^2 \qquad (k = 1, \ldots, n).$$
Also
$$\beta_k\beta_{k+1} \ge 0 \qquad (k = 1, \ldots, n-1).$$
It follows therefore that
$$\beta_1^2 \cdots \beta_n^2 \le c_n |A'A| = c_n |A|^2,$$
$c_n$ being a constant depending only on $n$. Also abs $|A|$ is the volume of the parallelopiped formed by the vectors $\beta_1, \ldots, \beta_n$.

2
1

2
2

4
3
|A|
2
(72)
Let now denote the acute angle between the vectors
1
and
2
.
Since the area of the parallelogram formed by
1
and
2
on the one hand
equals abs |A| and on the other
_

2
1

2
2
sin , we see that 77
sin
2

3
4
(73)
Since 0

2
, it follows from (73) that

2
.
Hence for a two dimensional lattice we may choose a basis in such a
manner that the angle (acute) between the basis vectors is between 60

and 90

.
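For $n = 2$ such a basis can be computed by the classical Gauss-Lagrange exchange algorithm, which parallels the reduction of binary forms above. A Python sketch (ours, not from the text), with the $60°$-$90°$ conclusion checked at the end:

```python
import numpy as np

def reduce_lattice_2d(b1, b2):
    """Gauss-Lagrange reduction of a planar lattice basis."""
    b1, b2 = np.asarray(b1, float), np.asarray(b2, float)
    if b1 @ b1 > b2 @ b2:
        b1, b2 = b2, b1
    while True:
        t = round((b1 @ b2) / (b1 @ b1))   # nearest-integer size reduction
        b2 = b2 - t * b1
        if b2 @ b2 >= b1 @ b1:
            break
        b1, b2 = b2, b1
    if b1 @ b2 < 0:
        b2 = -b2                           # make the angle acute
    return b1, b2

b1, b2 = reduce_lattice_2d([31.0, 59.0], [37.0, 70.0])
cos_theta = (b1 @ b2) / np.sqrt((b1 @ b1) * (b2 @ b2))
assert 0 <= cos_theta <= 0.5 + 1e-12       # 60 deg <= theta <= 90 deg
```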
Bibliography

[1] C. F. Gauss: Disquisitiones Arithmeticae, Ges. Werke 1, (1801).

[2] C. Hermite: Oeuvres, Vol. 1, Paris (1905), P. 94-273.

[3] H. Minkowski: Geometrie der Zahlen, Leipzig (1896).

[4] H. Minkowski: Diskontinuitätsbereich für arithmetische Äquivalenz, Ges. Werke, Bd. 2, (1911), P. 53-100.

[5] C. L. Siegel: Einheiten quadratischer Formen, Abh. Math. Sem. Hansischen Univ. 13, (1940), P. 209-239.
Chapter 3

Indefinite quadratic forms

1 Discontinuous groups

In the previous chapter we met the situation in which a group of transformations acts on a topological space, and we constructed, by a certain method, a subset of this space which has some distinguished properties relative to the group. We shall now study the following general situation.

Let $\Delta$ be an abstract group and $T$ a Hausdorff topological space on which $\Delta$ has a representation
$$t \to t\delta, \qquad t \in T, \ \delta \in \Delta, \qquad (1)$$
carrying $T$ into itself. We say that this representation of $\Delta$ is discontinuous if for every point $t \in T$ the set of points $\{t\delta\}$, $\delta \in \Delta$, has no limit point in $T$. The problem now is to determine, for a given $\Delta$, all the spaces $T$ on which $\Delta$ has a discontinuous representation. For an arbitrarily given group this problem can be very difficult. We shall, therefore, impose certain restrictions on $\Delta$ and $T$. Let us assume that there is a group $\Omega$ of transformations of $T$ onto itself which is transitive on $T$. This means that if $t_1$ and $t_2$ are any two elements of $T$, there exists $\omega \in \Omega$ such that
$$t_1\omega = t_2. \qquad (2)$$
Let us further assume that $\Delta$ is a subgroup of $\Omega$. Let $t_0$ be a point in $T$ and consider the subgroup $\Theta$ of $\Omega$ consisting of the $\theta$ such that
$$t_0\theta = t_0. \qquad (3)$$
If $t$ is any point of $T$, we have, because of transitivity,
$$t = t_0\omega$$
for some $\omega \in \Omega$. Because of (3), we get
$$t = t_0\theta\omega, \qquad \theta \in \Theta.$$
Conversely, if $\omega'$ is such that $t = t_0\omega'$, then $t_0\omega'\omega^{-1} = t_0$, or $\omega'\omega^{-1} \in \Theta$. Thus every point $t \in T$ determines a coset $\Theta\omega$ of $\Theta\backslash\Omega$, that is, of the space of right cosets of $\Omega$ modulo $\Theta$. Conversely, if $\Theta\omega$ is any coset, then $t = t_0\omega$ is a point determined by $\Theta\omega$. Hence the mapping
$$\Theta\omega \to t_0\omega \qquad (4)$$
of $\Theta\backslash\Omega$ on $T$ is (1, 1) both ways. In order to make this correspondence topological, let us study the following situation.

Let $\Omega$ be a locally compact topological group and $T$ a Hausdorff topological space on which $\Omega$ has a representation
$$t \to t\omega \qquad (5)$$
as a transitive group of mappings. Let us assume that this representation is open and continuous. We recall that (5) is said to be open if for every open set $P$ in $\Omega$ and every $t \in T$ the set $\{t\omega\}$, $\omega \in P$, is an open set in $T$. Then it follows that the subgroup $\Theta$ of $\Omega$ leaving $t_0 \in T$ fixed is not only a closed subgroup, but also the mapping (4) of $\Theta\backslash\Omega$ on $T$ is a homeomorphism.

Let $\Delta$ be a subgroup of $\Omega$ which has on $T$ a discontinuous representation. Then $\Delta$ has trivially a representation in $\Theta\backslash\Omega$. By the remarks above, the representation
$$\Theta\omega \to \Theta\omega\delta, \qquad \delta \in \Delta, \qquad (6)$$
is discontinuous in $\Theta\backslash\Omega$.

On the other hand, let $\Theta$ be any closed subgroup of $\Omega$. Then the representation
$$\Theta\omega \to \Theta\omega\omega', \qquad \omega' \in \Omega,$$
of $\Omega$ on $\Theta\backslash\Omega$ is open and continuous. It is clearly transitive. In order, therefore, to find all spaces on which $\Delta$ has a discontinuous representation, it is enough to consider the spaces of right cosets of $\Omega$ with regard to closed subgroups $\Theta$ of $\Omega$.

Suppose $\Theta$ is a closed subgroup of $\Omega$ and $\Delta$ has a discontinuous representation on $\Theta\backslash\Omega$. Let $K$ be a closed subgroup of $\Omega$ contained in $\Theta$. Then $\Delta$ has a discontinuous representation on $K\backslash\Omega$. For, if $K\omega$ is a coset such that the set of cosets $\{K\omega\delta\}$, $\delta \in \Delta$, has a limit point in $K\backslash\Omega$, then the set $\{\Theta\omega\delta\}$, $\delta \in \Delta$, also has a limit point in $\Theta\backslash\Omega$, and so (6) would not be discontinuous. In particular, if we take for $K$ the subgroup consisting only of the identity element $e$, then "$\Delta$ is discontinuous in $\Omega$" is clearly equivalent to "$\Delta$ is a discrete subgroup of $\Omega$".

Thus if there exists some subgroup $\Theta$ of $\Omega$ such that $\Delta$ is discontinuous in $\Theta\backslash\Omega$, then $\Delta$ necessarily has to be discrete. It can be proved that if $\Omega$ has a countable basis of open sets, then $\Delta$ is enumerable.

Suppose now that $\Omega$ is a locally compact group with a countable basis of open sets. Let $\Delta$ be a discrete subgroup of $\Omega$. If $\Theta$ is any compact, hence closed, subgroup of $\Omega$, then it follows that the representation (6) of $\Delta$ in $\Theta\backslash\Omega$ is discontinuous. This can be seen by assuming that, for a certain $\omega$, the set $\Theta\omega\delta_n$, $\delta_n \in \Delta$, has a limit point; this leads to a contradiction because of the discreteness of $\Delta$.

In general, the fact that (6) is discontinuous in $\Theta\backslash\Omega$ does not entail that $\Theta$ is compact. Let us, therefore, consider the following situation.

Let $\Omega$ be a locally compact group possessing a countable basis of open sets. Then there exists in $\Omega$ a right invariant Haar measure $d\omega$, which is determined uniquely upto a positive multiplicative factor. Let $\Delta$ be a discrete subgroup of $\Omega$. There exists then in $\Omega$ a subset $F$ possessing the following properties: 1) $\bigcup_{a \in \Delta} Fa = \Omega$, 2) the sets $\{Fa\}$ for $a \in \Delta$ are mutually disjoint, and 3) $F$ is measurable in terms of the Haar measure $d\omega$. $F$ is then said to be a fundamental set relative to $\Delta$. Note that if $F$ is a fundamental set, then so is $Fa$ for any $a \in \Delta$, so that a fundamental set is not unique. 1) and 2) assert that $F$ intersects each coset of $\Omega$ modulo $\Delta$ in exactly one point, so that $F$ has to be formed in $\Omega$ by choosing one element from each such coset. The interesting point is that, under the conditions on $\Omega$, this can be done in such a way that the resulting set $F$ is measurable. Let us now assume that
$$\int_F d\omega < \infty. \qquad (7)$$
It can then be shown that the value of the integral in (7) is independent of the choice of $F$. We now state, without proof, the important

Theorem 1. Let $\Omega$ be a locally compact topological group with a countable basis of open sets. Let $\Delta$ be a discrete subgroup of $\Omega$ and $F$ a fundamental set in $\Omega$ relative to $\Delta$. Let $F$ have finite Haar measure in $\Omega$. If $\Theta$ is any closed subgroup of $\Omega$, then $\Delta$ has a discontinuous representation in $\Theta\backslash\Omega$ if and only if $\Theta$ is compact.

The interest in the theorem lies in the necessity part of it.

Let us assume that $\Omega$ is, as it will be in the applications, a Lie group. Let $\Delta$ be a discrete subgroup of $\Omega$. For any closed subgroup $\Theta$ of $\Omega$, the dimensions of $\Theta$, $\Theta\backslash\Omega$ and $\Omega$ are connected by
$$\dim\Theta + \dim\Theta\backslash\Omega = \dim\Omega.$$
If $F$ is a fundamental set in $\Omega$ with regard to $\Delta$ and is of finite measure, in terms of the invariant measure in $\Omega$, then by Theorem 1, $\Delta$ will be discontinuous in $\Theta\backslash\Omega$ if and only if $\Theta$ is compact. In order, therefore, to obtain a space $T = \Theta\backslash\Omega$ of smallest dimension in which $\Delta$ has a discontinuous representation, one has to consider a $\Theta$ which is compact and maximal with this property.
Let us consider the following example.

Let $\Omega$ be the group of $n$-rowed non-singular real matrices $A$; $\Omega$ is a Lie group. Let us determine first all compact subgroups of $\Omega$. Let $K$ be a compact subgroup of $\Omega$. If $C \in K$, then $|C| = \pm 1$. For, the mapping $C \to |C|$ of $K$ into the multiplicative group of real numbers is clearly continuous, and since $K$ is compact, the set of images $|C|$ is a compact, and so bounded, subgroup of the multiplicative group of real numbers. Thus $|C| = \pm 1$. In order to study $K$, therefore, it is enough to study the group $\Omega_0$ of real matrices $A$ with $|A| = \pm 1$. Let $\{dA\}$ denote the volume measure in $\Omega_0$, so that
$$\{dAB\} = \{dA\}$$
for $B \in \Omega_0$. Let $M$ be an open bounded subset of $\Omega_0$. Consider the set
$$G = \bigcup_{C \in K} MC.$$
Since the sets $MC$ are open, $G$ is open. Since $K$ is compact, it follows that $G$ is bounded. Consider the integral
$$P = \int_G A'A\,\{dA\}.$$
Since $A'A > 0$, it follows that $P$ is positive. Also, if $C$ is in $K$,
$$P[C] = \int_G C'A'AC\,\{dA\} = \int_G A'A\,\{dAC^{-1}\} = P.$$
This proves that there exists a $P > 0$ such that $P[C] = P$ for $C \in K$. Since $P > 0$, there exists $B$ such that $P = B'B$. Hence if $Q = BCB^{-1}$, then $Q'Q = E$, or $Q$ is orthogonal. Hence $BKB^{-1}$ is a subgroup of the orthogonal group. We have hence proved

Theorem 2. All maximal compact subgroups of $\Omega$ are conjugates of the real orthogonal group.
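The averaging argument in this proof can be imitated for a finite matrix group, replacing the integral over $G$ by a sum over the group elements. A Python sketch (our illustration, not from the text): average $C'C$ over a finite group $K$ to get $P$ with $P[C] = P$, factor $P = B'B$, and check that $BKB^{-1}$ is orthogonal.

```python
import numpy as np

# a finite (non-orthogonal) matrix group: a conjugate of the cyclic group of order 4
R = np.array([[0.0, -1.0], [1.0, 0.0]])          # rotation by 90 degrees
T = np.array([[1.0, 2.0], [0.0, 1.0]])           # arbitrary conjugator
K = [T @ np.linalg.matrix_power(R, k) @ np.linalg.inv(T) for k in range(4)]

P = sum(C.T @ C for C in K) / len(K)             # then P[C] = P for all C in K
for C in K:
    assert np.allclose(C.T @ P @ C, P)

B = np.linalg.cholesky(P).T                      # P = B'B, B upper triangular
for C in K:
    Q = B @ C @ np.linalg.inv(B)
    assert np.allclose(Q.T @ Q, np.eye(2))       # BKB^{-1} is orthogonal
```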
Let $P_0$ denote the space of all positive real $n$-rowed matrices of determinant 1. $\Omega_0$ has in $P_0$ a representation $P \to P[A]$, $P \in P_0$, $A \in \Omega_0$, and this representation is both open and continuous. Also $\Omega_0$ is transitive on $P_0$. The set of elements $A \in \Omega_0$ which fix the matrix $E_n$ is precisely the orthogonal group $\Theta$. By our considerations above, $\Theta\backslash\Omega_0$ is homeomorphic to $P_0$. So every discrete subgroup of $\Omega_0$ has a discontinuous representation in $P_0$. We shall consider the subgroup $\Gamma$ consisting of the unimodular matrices. That this is discrete is clear. In the previous chapter we constructed for $\Gamma$ in $P$ a fundamental domain $\mathscr{R}$. We shall now construct a fundamental set for $\Gamma$ in $\Omega_0$.

In $\Omega_0$, $\Gamma$ is represented as a group of translations $A \to AU$. Let us define the point set $F_1$ in $\Omega_0$ to consist of the matrices $A$ such that $A'A = P$ is reduced in the sense of Minkowski and so is in $\mathscr{R}$. Clearly, if $A \in F_1$ then $BA$ is also an element of $F_1$ for arbitrary orthogonal $B$. Because of the properties of $\mathscr{R}$, the point set $F_1$ satisfies
$$F_1\Gamma = \Omega_0.$$
Since $P[\pm E] = P$, we shall take the subset $F_0$ of $F_1$ consisting of the $A$ with $a_{11} \ge 0$, where $A = (a_{kl})$. It is easy to see from the properties of $\mathscr{R}$ that $F_0$ and $F_0\gamma$, for $\gamma \in \Gamma$, have non-empty intersection only for finitely many $\gamma$. By removing from $F_0$ a suitably chosen set of points, one obtains a fundamental set in $\Omega_0$ for $\Gamma$. Minkowski proved that the volume of $F_0$ is finite.

For more details we refer to the papers [6], [7] and [8].
2 The H-space of a symmetric matrix

We now consider another important application of the previous considerations.

Let $S$ be a non-singular $n$-rowed symmetric matrix of signature $p$, $q$, where $p + q = n$ and $0 \le p \le n$. This means that there exists a real matrix $L$ such that
$$S[L] = S_0 = \begin{pmatrix} E_p & 0 \\ 0 & -E_q \end{pmatrix}. \qquad (8)$$
Let $\Omega$ denote the group of real matrices $C$ such that
$$S[C] = S. \qquad (9)$$
$\Omega$ is called the orthogonal group of $S$. $\Omega$ is a Lie group. We shall now determine all compact subgroups of $\Omega$. Let $K$ be a compact subgroup of $\Omega$. Then there exists a positive matrix $P$ such that
$$P[V] = P, \qquad V \in K. \qquad (10)$$
Since $P > 0$ and $S$ is symmetric, there exists a matrix $L$ such that
$$S[L] = S_0, \qquad P[L] = [d_1, \ldots, d_n] = D, \qquad (11)$$
$D$ being a diagonal matrix with positive diagonal elements. Let $B = L^{-1}VL$. Then, since $V \in K$,
$$S_0[B] = S_0, \qquad D[B] = D.$$
Put $T = S_0 D = D S_0$. Then from the above we have $TB = BT$. Therefore $T^2 B = T \cdot TB = TB \cdot T = BT^2$. But $T^2 = D^2$. Therefore
$$BD^2 = D^2 B. \qquad (12)$$
Let $B = (b_{kl})$; then (12) gives
$$b_{kl} d_l^2 = b_{kl} d_k^2, \qquad 1 \le k, l \le n, \qquad (13)$$
so that either $b_{kl} = 0$ or $d_l^2 = d_k^2$. In any case, since $d_k > 0$ for all $k$, we get
$$BD = DB.$$
This means that $D = B'DB = B'BD$ and, as $D > 0$, we see that $B$ is orthogonal.

If $\Theta$ is the orthogonal group, then $K$ is a subgroup of $L\Theta L^{-1} \cap \Omega$. This shows that all maximal compact subgroups of $\Omega$ are conjugates of each other and conjugate to $L\Theta L^{-1} \cap \Omega$. Call this subgroup $\Omega_0$; $\Omega_0$ is a maximal compact subgroup of $\Omega$.

Put now $P = (LL')^{-1}$. Then for $V \in \Omega_0$,
$$P[V] = P.$$
Also $P$ and $S$ are connected by the relation
$$PS^{-1}P = S. \qquad (14)$$
Denote by $H$ the space of symmetric matrices $H > 0$ satisfying (14) for a fixed $S$. For any $H \in H$ there exists a matrix $M$ such that
$$H[M] = D, \qquad S[M] = S_0, \qquad (15)$$
where $D$ is a diagonal matrix. Because of (14) we see that $DS_0D = S_0$, or, since $D > 0$, $D = E$, the unit matrix. Hence
$$H = (MM')^{-1}. \qquad (16)$$
But from (11), $S[L] = S_0$, which proves that $ML^{-1} \in \Omega$, or $M = CL$ for a $C \in \Omega$. From (16), therefore,
$$H = P[C^{-1}].$$
Conversely, for any $C \in \Omega$, $P[C] = H$ also satisfies (14). Thus the totality of positive solutions $H$ of (14) is given by
$$H = P[C]$$
where $C$ runs through all matrices in $\Omega$ and $P$ is a fixed solution of (14). This proves that the representation $H \to H[C]$ of $\Omega$ in $H$ is transitive.

Consider now the space of right cosets of $\Omega$ modulo $\Omega_0$. If, for an $H$ in $H$, $H = P[C] = P[C_1]$, then, by the definition of $\Omega_0$, $CC_1^{-1} \in \Omega_0$, so that $H$ determines a unique right coset $\Omega_0 C_1$ of $\Omega_0\backslash\Omega$. Also every right coset determines uniquely an element $H = P[C_1]$ in $H$. By the considerations in the previous section, $\Omega_0\backslash\Omega$ and $H$ are homeomorphic. Since $\Omega_0$ is a maximal compact subgroup, every discrete subgroup of $\Omega$ has a discontinuous representation in $H$.

We call $H$ the representation space of the orthogonal group $\Omega$ of $S$. We remark that if $S$ is definite, that is $p = 0$ or $n$, $\Omega$ is compact, and so the $H$-space consists only of one point, namely $S$ if $S > 0$ and $-S$ if $-S > 0$.
We shall now obtain a parametrical representation for the space $H$, which is defined by
$$H > 0, \qquad HS^{-1}H = S. \qquad (17)$$
Let $H$ be any solution. Put
$$K = \frac{1}{2}(H + S), \qquad L = \frac{1}{2}(H - S). \qquad (18)$$
Using the matrix $M$ in (15) we have
$$K[M] = \frac{1}{2}(S_0 + E) = \begin{pmatrix} E_p & 0 \\ 0 & 0 \end{pmatrix}, \qquad L[M] = \frac{1}{2}(E - S_0) = \begin{pmatrix} 0 & 0 \\ 0 & E_q \end{pmatrix}, \qquad (19)$$
which shows at once that $K \ge 0$ and has rank $p$, and $L \ge 0$ and has rank $q$. Furthermore, because of (17) and (18), we get
$$KS^{-1}K = K, \qquad LS^{-1}L = -L, \qquad KS^{-1}L = 0 = LS^{-1}K. \qquad (20)$$
Suppose now that $K$ is any matrix satisfying
$$KS^{-1}K = K \qquad (21)$$
with $K \ge 0$ and $K$ having rank $p$. Define then two matrices $H$ and $L$ by
$$H = 2K - S, \qquad L = \frac{H - S}{2}.$$
Then $K - L = S$, so that by the law of inertia $L$ has rank $q$. Also, because of (21), $H$ satisfies the equation $HS^{-1}H = S$; so $|H| \ne 0$, and $K$ and $L$ satisfy the equations (20). From the equation
$$S^{-1}[(K, L)] = \begin{pmatrix} K & 0 \\ 0 & -L \end{pmatrix}$$
and from the signature of $S$, we have rank $L = q$ and $L \ge 0$. Since $H = K + L$ and $|H| \ne 0$, it follows that $H > 0$, or that $H$ is a solution of (17). We have thus reduced the solution of the inhomogeneous problem (17) to that of the homogeneous problem (21).

Therefore let $K \ge 0$ be an $n$-rowed matrix of rank $p$. There exists a non-singular matrix $F$ such that
$$K = F'\begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix}F$$
where $D > 0$ and has $p$ rows. If $G$ denotes the matrix formed by the first $p$ rows of $F$, then
$$K = G'DG$$
and $G$ has rank $p$. Thus the most general form of a semi-positive matrix $K$ of $n$ rows and columns and of rank $p$ is
$$K = QT^{-1}Q', \qquad Q = Q^{(n,p)},$$
where $T = T^{(p)} > 0$ and $Q$ has rank $p$. Let $K$ satisfy (21). Then
$$Q\left(T^{-1}Q'S^{-1}QT^{-1} - T^{-1}\right)Q' = 0. \qquad (22)$$
But since $Q$ has rank $p$, there is a submatrix of $Q$ of $p$ rows which is non-singular. Using this, it follows from (22) that
$$T = S^{-1}[Q] > 0.$$
The most general solution of (21) therefore is given by
$$K = T^{-1}[Q'], \qquad T = S^{-1}[Q] > 0.$$
We thus obtain the homogeneous parametric representation of $H$ by
$$H = 2K - S, \quad K = T^{-1}[Q'], \quad T = S^{-1}[Q] > 0, \quad Q = Q^{(n,p)}. \qquad (23)$$
It is obvious that $Q$ determines $H$ uniquely, whereas if $W$ is a $p$-rowed non-singular matrix, then $Q$ and $QW$ determine the same $H$. In order to obtain the inhomogeneous parametrical representation, we consider the special case $S = S_0$ given by (8). Let us denote the corresponding $H$ by $H_0$. Write
$$Q = \begin{pmatrix} Q_1 \\ Q_2 \end{pmatrix}, \qquad Q_1 = Q_1^{(p,p)}. \qquad (24)$$
Then
$$T_0 = Q_1'Q_1 - Q_2'Q_2 > 0.$$
This means that $|Q_1| \ne 0$. For, if $|Q_1| = 0$, there is a column $x$ of $p$ rows such that $x \ne 0$ and $Q_1 x = 0$. Then
$$0 < T_0[x] = -x'Q_2'Q_2 x \le 0,$$
which is absurd. We can therefore put
$$Q = \begin{pmatrix} E \\ X' \end{pmatrix} Q_1 \qquad (25)$$
where $X = X^{(p,q)}$ and $E$ is the unit matrix of order $p$. $T_0 > 0$ then means (since $|Q_1| \ne 0$) that
$$E - XX' > 0. \qquad (26)$$
Thus $K_0 = T_0^{-1}[Q']$ is given by
$$K_0 = \begin{pmatrix} (E - XX')^{-1} & (E - XX')^{-1}X \\ X'(E - XX')^{-1} & X'(E - XX')^{-1}X \end{pmatrix} \qquad (27)$$
where $E = E_p$. In order to compute $H_0 = 2K_0 - S_0$ we put
$$W = \begin{pmatrix} E_p & X \\ X' & E \end{pmatrix}, \qquad F = \begin{pmatrix} E_p & -X \\ 0 & E \end{pmatrix},$$
$W$ and $F$ being $n$-rowed matrices. We then have
$$W[F] = \begin{pmatrix} E & 0 \\ 0 & E - X'X \end{pmatrix}, \qquad W[F'] = \begin{pmatrix} E - XX' & 0 \\ 0 & E_q \end{pmatrix}. \qquad (28)$$
Since $T_0 > 0$, (26) shows that $W > 0$ and therefore $E - X'X > 0$. We can therefore write
$$H_0 = \begin{pmatrix} (E + XX')(E - XX')^{-1} & 2(E - XX')^{-1}X \\ 2X'(E - XX')^{-1} & (E + X'X)(E - X'X)^{-1} \end{pmatrix}. \qquad (29)$$
$H_0$ is thus the space of matrices $H_0$ with $X$ satisfying the condition (26). This shows that $H_0$ has the topological dimension $pq$.

In order to obtain the inhomogeneous parametrical representation for $H$ from that of $H_0$, we observe that if the matrix $L$ is such that $S[L] = S_0$, then $H_0 = H[L]$.
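The parametrization (29) is easy to test numerically. In the Python sketch below (ours, not from the text) a random $X$ with $E - XX' > 0$ is generated, $H_0$ is assembled from (29), and the defining properties $H_0 > 0$ and $H_0 S_0^{-1} H_0 = S_0$ are verified.

```python
import numpy as np

p, q = 2, 3
n = p + q
S0 = np.diag([1.0] * p + [-1.0] * q)

rng = np.random.default_rng(0)
X = rng.normal(size=(p, q))
X *= 0.9 / np.linalg.norm(X, 2)           # spectral norm < 1 forces E - XX' > 0

E = np.eye(p)
A = np.linalg.inv(E - X @ X.T)            # (E - XX')^{-1}
H0 = np.block([
    [(E + X @ X.T) @ A, 2 * A @ X],
    [2 * X.T @ A,
     (np.eye(q) + X.T @ X) @ np.linalg.inv(np.eye(q) - X.T @ X)],
])

assert np.all(np.linalg.eigvalsh(H0) > 0)              # H0 > 0
assert np.allclose(H0 @ np.linalg.inv(S0) @ H0, S0)    # H0 S0^{-1} H0 = S0
```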
3 Geometry of the H-space

Consider the space $P$ of all positive $n$-rowed matrices $P$. Let $P = (p_{kl})$ and let $dP$ denote the matrix of the differentials $dp_{kl}$. If $A$ is any non-singular matrix, then $d(A'PA) = A'\,dP\,A$, so that
$$ds^2 = \sigma(P^{-1}dP\,P^{-1}dP), \qquad (30)$$
where $\sigma$ denotes the trace, is invariant under the transformations $P \to P[A]$ of $P$ into itself. $ds^2$ is a quadratic differential form in the $\frac{n(n+1)}{2}$ independent differentials $dp_{kl}$. In order to see that this is a positive definite form, observe that, since the group of non-singular matrices acts transitively on $P$, it is enough to verify the positivity of (30) at some particular point, say $P = E$. At $P = E$ we see that the quadratic form (30) equals
$$\sigma((dP)^2) = \sum_{k,l} (dp_{kl})^2,$$
which is positive. This shows that $P$ is a Riemannian space with the metric (30).

It can be shown that joining any two points $P_1$, $P_2$ of $P$ there exists one geodesic only. Since $P_1$ and $P_2$ can be transformed simultaneously into the unit matrix $E$ and a positive diagonal matrix $D = [d_1, \ldots, d_n]$, it is enough to show this for the points $E$ and $D$. One can see that if, for $0 \le \lambda \le 1$,
$$D^\lambda = [d_1^\lambda, \ldots, d_n^\lambda]$$
is defined symbolically, then the geodesic line joining $E$ and $D$ consists precisely of these points $D^\lambda$.
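Concretely, if $A$ is a matrix with $P_1[A] = E$ and $P_2[A] = D$ (a simultaneous congruence diagonalization, obtainable from the generalized symmetric eigenvalue problem), then the geodesic is $D^\lambda[A^{-1}]$. A Python sketch of this construction (ours, relying on SciPy's generalized eigensolver and its normalization $A'P_1A = E$):

```python
import numpy as np
from scipy.linalg import eigh

def geodesic(P1, P2, lam):
    """Point at parameter lam on the geodesic from P1 (lam=0) to P2 (lam=1)."""
    w, A = eigh(P2, P1)          # A' P1 A = E and A' P2 A = diag(w)
    Ainv = np.linalg.inv(A)
    Dlam = np.diag(w ** lam)     # D^lambda, taken on the diagonal entries
    return Ainv.T @ Dlam @ Ainv  # this is D^lambda [A^{-1}]

P1 = np.array([[2.0, 1.0], [1.0, 2.0]])
P2 = np.array([[3.0, 0.0], [0.0, 1.0]])
assert np.allclose(geodesic(P1, P2, 0.0), P1)
assert np.allclose(geodesic(P1, P2, 1.0), P2)
```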
Consider now the $H$-space. It is a subspace of $P$. The quadratic differential form
$$ds^2 = \sigma(H^{-1}dH\,H^{-1}dH)$$
defines in $H$ a line element invariant under the mappings $H \to H[C]$, $C \in \Omega$. It is practical to take
$$ds^2 = \frac{1}{8}\,\sigma(H^{-1}dH\,H^{-1}dH)$$
as the line element. Since this is again positive definite, it follows that $H$ is a Riemannian space of $pq$ real dimensions.

One can also express the line element in terms of the parameter $X$. We obtain
$$ds^2 = \sigma\{(E - XX')^{-1}dX\,(E - X'X)^{-1}dX'\}.$$
(28) shows that $|E - XX'| = |E - X'X|$, and so we obtain the invariant volume element under this metric as
$$dv = |E - XX'|^{-n/2}\{dX\}$$
where $\{dX\} = \prod_{a=1}^{p}\prod_{b=1}^{q} dx_{ab}$, $X = (x_{ab})$, is the Euclidean volume element in the $pq$-dimensional $X$-space.

As before, one can construct geodesics joining two points $H_1$ and $H_2$ in $H$. As $H$ is a subspace of $P$, and since the metric on $H$ is the induced metric from $P$, one can construct a geodesic joining the two points $H_1$ and $H_2$ considered as points of $P$. It is interesting to note that all points of this geodesic lie in $H$, showing that $H$ is a geodesic submanifold of $P$. See [7].

The spaces $P$ and $H$ come under a remarkable class of spaces studied by E. Cartan. They are the symmetric spaces. According to E. Cartan, a topological space $T$ is said to be symmetric about a point $P$ if there exists an analytic automorphism of $T$ onto itself which has $P$ as the only fixed point and whose square is the identity automorphism. $T$ is said to be a symmetric space if it is symmetric with regard to every point of $T$. If the space $T$ is homogeneous, that is, if on $T$ there acts a transitive group $\Omega$ of analytic automorphisms, then symmetry with regard to one point implies that $T$ is a symmetric space.
Let us now consider the space $P$ and let $P_0 \in P$. Let $P$ be any matrix in $P$. Define
$$P^* = P_0 P^{-1} P_0.$$
Then $P^* \in P$, and $P \to P^*$ is clearly an analytic homeomorphism of $P$ onto itself. Also
$$P^{**} = P_0 P^{*-1} P_0 = P_0 (P_0^{-1} P P_0^{-1}) P_0 = P.$$
By the remark in §2, the only solution $P > 0$ of $P = P_0 P^{-1} P_0$ is $P_0$ itself. Thus $P$ is symmetric about $P_0$. $P_0$ being arbitrary, this shows that $P$ is a symmetric space.

Consider now the space $H$ and let $H_0$ be a point in it. For any $H$ in $H$ define
$$H^* = H_0 H^{-1} H_0.$$
Then $H^*$ is also in $H$, because
$$H^* S^{-1} H^* = H_0 H^{-1} H_0 S^{-1} H_0 H^{-1} H_0 = S,$$
since $H$ and $H_0$ are in $H$. Thus $H \to H^*$ is an analytic automorphism of $H$ onto itself, and the previous considerations show that $H$ is a symmetric space.

The symmetry $H \to H^*$ is isometric. For,
$$dH^* = d(H_0 H^{-1} H_0) = -H_0 H^{-1}\,dH\,H^{-1} H_0,$$
as can be seen from differentiating the equation $HH^{-1} = E$. Therefore
$$dH^*\,H^{*-1} = -H_0 H^{-1}\,dH\,H^{-1} H_0 \cdot H_0^{-1} H H_0^{-1} = -H_0 H^{-1}\,dH\,H_0^{-1}.$$
Hence
$$\sigma(dH^*\,H^{*-1}\,dH^*\,H^{*-1}) = \sigma(dH\,H^{-1}\,dH\,H^{-1}),$$
which proves our contention.
4 Reduction of indefinite quadratic forms

Let $S$ be a real, non-singular, symmetric matrix of $n$ rows. If it has signature $p$, $q$ with $pq > 0$, then there is associated with it a space $H$ of positive matrices $H$ satisfying
$$HS^{-1}H = S.$$
$H$ has the dimension $pq > 0$.

We now say that $S$ is reduced if the $H$-space of $S$ has a non-empty intersection with the Minkowski reduced space $\mathscr{R}$. Note that this has a meaning, since $H$ is a subspace of $P$. This means that there is at least one $H$ in the $H$-space of $S$ which is reduced in the sense of Minkowski. Obviously, if $S$ is definite, then $H$ reduces to the single point $\pm S$ and this definition coincides with that of Minkowski's for definite forms.

We recall that the class of $S$ is defined to be the set of all matrices $S[U]$ where $U$ runs through all unimodular matrices. Our definition shows that for any $S = S'$ there exists an element in the class of $S$ which is reduced. For, let $H$ be any element in the $H$-space of $S$. By the Minkowski theory, there exists a unimodular $U$ such that $H[U] \in \mathscr{R}$. Put $H[U] = H_1$ and $S[U] = S_1$. Then
$$H_1 S_1^{-1} H_1 = S_1,$$
which shows that the $H$-space of $S_1$ intersects $\mathscr{R}$ in a non-empty set. Also $S[U]$ is in the class of $S$.

In general there will be an infinity of reduced matrices in the class of $S$. We have, however, the following

Theorem 3. There exist only finitely many integral symmetric reduced matrices with given determinant.

Proof. If $S$ is definite, the theorem is already proved in the last chapter. So let $S$ be indefinite. Let $S$ be reduced in the above sense. Let $H$ be in the $H$-space of $S$ and reduced in the sense of Minkowski. By the Jacobi transformation,
$$H = D[V]$$
where
$$D = [d_1, \ldots, d_n], \qquad V = (v_{kl}),$$
with $v_{kk} = 1$, $v_{kl} = 0$ if $k > l$. From Minkowski's reduction theory it follows that there is a constant $c$, depending only on $n$, such that
$$0 < d_k < c\,d_{k+1}, \quad k = 1, \ldots, n-1; \qquad -c < v_{kl} < c, \quad 1 \le k < l \le n.$$
Introduce the matrix $W$ given by
$$W = (w_{kl}) = \begin{pmatrix} 0 & \cdots & 0 & 1 \\ 0 & \cdots & 1 & 0 \\ \vdots & & & \vdots \\ 1 & \cdots & 0 & 0 \end{pmatrix} \qquad (31)$$
so that $w_{kl} = 0$ if $k + l \ne n + 1$ and equal to 1 otherwise. It then follows that
$$W^2 = E.$$
Put $D_1 = D^{-1}[W] = [d_1^*, \ldots, d_n^*]$. Then
$$d_{n-k+1}\,d_k^* = 1, \qquad k = 1, \ldots, n,$$
so that
$$0 < d_k^* < c\,d_{k+1}^*, \qquad k = 1, \ldots, n-1. \qquad (32)$$
Let $V_1 = W V'^{-1} W$. Then, because of the choice of $W$, $V_1 = (v_{kl}^*)$ is again a triangle matrix. Because the elements of $V$ satisfy the above conditions and $W$ is a constant matrix, we see that there is a constant $c_1$, depending only on $n$, such that
$$-c_1 < v_{kl}^* < c_1, \qquad 1 \le k \le l \le n. \qquad (33)$$
If we put $c_0 = \max(c, c_1)$, then the matrix $H_1 = D_1[V_1]$ satisfies the condition
$$H_1 \in \mathscr{R}_{c_0} \qquad (34)$$
with the notation of the previous chapter. But then
$$H_1 = D_1[V_1] = D^{-1}[WV_1] = D^{-1}[W^2 V'^{-1} W] = H^{-1}[W].$$
Since $HS^{-1}H = S$, we see that, if $WS = S_1$, then
$$H = S H^{-1} S = S_1' H_1 S_1 = H_1[S_1].$$
$H$ and $H_1$ are both in $\mathscr{R}_{c_0}$, and so, by Theorem 5 of the previous chapter, there are only finitely many integral $S_1$ with determinant equal to $\pm|S|$ satisfying the above condition. This proves the theorem.

Since in each class of matrices there is at least one reduced matrix, we have

Corollary 1. There exist only finitely many classes of integral matrices with given determinant.

If $S$ is rational, then for a certain integer $a$, $aS$ is integral. Hence from Theorem 3 we get

Corollary 2. In each class of rational matrices there exist only finitely many reduced matrices.
Let $S$ be a rational non-singular symmetric matrix, the matrix of an indefinite quadratic form. Let $\Omega$ be the orthogonal group of $S$, that is, the group of real matrices $C$ with $S[C] = S$. A unimodular matrix $U$ satisfying the condition
$$S[U] = S$$
is said to be a unit of $S$. The units of $S$ clearly form a group $\Gamma(S)$, called the unit group of $S$.

Let us consider the $H$-space of positive matrices $H$ with $HS^{-1}H = S$. The orthogonal group $\Omega$ of $S$ has, in $H$, a representation as a transitive group of mappings
$$H \to H[C], \qquad C \in \Omega.$$
Since $\Gamma(S)$ is a subgroup of $\Omega$, $\Gamma(S)$ has a representation in the $H$-space. Since the unimodular group is discontinuous in $P$, the representation of $\Gamma(S)$ in $H$ is discontinuous. Clearly $U$ and $-U$ lead to the same representation. Therefore, if we identify $U$ and $-U$ in $\Gamma(S)$, then $H \to H[U]$, $U \in \Gamma(S)$, gives a faithful and discontinuous representation of $\Gamma(S)$ in $H$. Thus $\Gamma(S)$ is a discrete subgroup of $\Omega$. $H$ will be the space of smallest dimension in which $\Gamma(S)$ is discontinuous. We shall construct for $\Gamma(S)$ in $H$ a fundamental domain $F$.

In the class of the rational non-singular symmetric indefinite $S$ there exist finitely many reduced matrices, say $S_1, \ldots, S_l$. Let $U_1, \ldots, U_l$ be unimodular matrices so that $S_i = S[U_i]$, $i = 1, \ldots, l$.

Let $H$ be any matrix in the $H$-space of $S$. There exists then a unimodular matrix $U$ so that $H[U] \in \mathscr{R}$. By the definition of reduction for indefinite matrices, $S[U]$ is reduced. Thus $S[U]$ has to be one of the finitely many $S_1, \ldots, S_l$, say $S_k$. Then
$$S[U] = S_k = S[U_k],$$
or $S[UU_k^{-1}] = S$, which means that $UU_k^{-1} \in \Gamma(S)$.

Let us denote by $\mathscr{R}_{U_k^{-1}}$ the set of matrices $P[U_k^{-1}]$ for $P \in \mathscr{R}$. Then, for the $H$ above, $H[U] \in \mathscr{R}$, or $H[UU_k^{-1}] \in \mathscr{R}_{U_k^{-1}}$. Since $UU_k^{-1}$ is a unit of $S$ and $H \in H$, it follows that if $V = UU_k^{-1}$ then
$$H[V] \in H \cap \mathscr{R}_{U_k^{-1}}.$$
If we therefore take $F$ to be the set
$$F = \bigcup_{k=1}^{l}\left(H \cap \mathscr{R}_{U_k^{-1}}\right), \qquad (35)$$
then for every point $H$ in $H$ there is a unit $V$ such that $H[V] \in F$. If for any unit $A$ of $S$ we denote by $F_A$ the image of $F$ under $A$, we have proved that
$$H = \bigcup_A F_A.$$
Note that $F_A = F_{-A}$. We shall sketch a proof that $F$ is indeed a fundamental region for $\Gamma(S)$ in $H$.

Let $U$ and $V$ be two units of $S$ such that $UV^{-1} \ne \pm E$ and $F_U$ and $F_V$ have a non-empty intersection. Then $F$ and $F_{UV^{-1}}$ have a non-empty intersection. We call $F_{UV^{-1}}$ a neighbour of $F$. Let therefore $F_A$ be a neighbour of $F$, so that $A \ne \pm E$ is a unit of $S$ and $F_A$ intersects $F$ in a non-empty set. Because of the definition (35) of $F$, we see that $\bigcup_{k=1}^{l}(H \cap \mathscr{R}_{U_k^{-1}A})$ and $\bigcup_{k=1}^{l}(H \cap \mathscr{R}_{U_k^{-1}})$ have a non-empty intersection. (Note that $H = H[A]$.) This means that for two integers $i$, $j$, $1 \le i, j \le l$, $\mathscr{R}_{U_i^{-1}A}$ and $\mathscr{R}_{U_j^{-1}}$ have a non-empty intersection, or
$$\mathscr{R} \cap \mathscr{R}_{U_i^{-1}AU_j}$$
is not empty. Now $U_i^{-1}AU_j \ne \pm E$; for that would mean $AU_j = \pm U_i$, which implies first $i = j$ and then $A = \pm E$. Then $U_i^{-1}AU_j$ (by Minkowski's theory) belongs to a finite set of matrices depending only on $n$. Hence $F$ has a finite number of neighbours.

In order to study the points of intersection of $F$ and $F_A$, where $F_A$ is a neighbour of $F$, we remark that, since $\mathscr{R}$ and $\mathscr{R}_V$, for a unimodular $V$, have only boundary points in common, it is enough to show that the boundary points of $F$ relative to $H$ are the intersections of the boundaries of the $\mathscr{R}_{U_i}$ with $H$. This would be achieved if we prove that $H$ does not lie on a boundary plane of $\mathscr{R}_{U_i}$ for any $i$. For a proof of this non-trivial fact we refer to [8].

In the $H$-space we have seen that there is a volume element $dv$ invariant under the mappings $H \to H[C]$, $C \in \Omega$. In the next chapter we shall prove that
$$\int_F dv \qquad (36)$$
is finite, except in the case of a binary zero form. This will show, incidentally, that $\Gamma(S)$ is an infinite group if $S$ is not the matrix of a binary zero form. For one can show, rather easily, that $\int_H dv$ is infinite.
5 Binary forms

We shall now study the case of binary quadratic forms systematically.

Let $ax^2 + 2bxy + cy^2$ be a real binary quadratic form whose matrix $S = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$ is non-singular, and let $a \ne 0$. Write
$$S[x] = ax^2 + 2bxy + cy^2 = a(x - \theta_1 y)(x - \theta_2 y)$$
where $\theta_1 + \theta_2 = -\dfrac{2b}{a}$, $\theta_1\theta_2 = c/a$. Thus $\theta_1$ and $\theta_2$ are roots of the equation
$$a\theta^2 + 2b\theta + c = 0. \qquad (37)$$
If $|S| = ac - b^2 > 0$, then $\theta_1$ and $\theta_2$ are both complex. If $|S| < 0$, then $\theta_1$ and $\theta_2$ are real.

Let
$$L = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}$$
be the matrix of the linear transformation $x \to \alpha x + \beta y$, $y \to \gamma x + \delta y$. Then
$$S[Lx] = a^*(x - \theta_1^* y)(x - \theta_2^* y)$$
where $a^* = a(\alpha - \theta_1\gamma)(\alpha - \theta_2\gamma)$. We shall assume that $a^* \ne 0$. We have then
$$\theta_i^* = \frac{\delta\theta_i - \beta}{-\gamma\theta_i + \alpha}, \qquad i = 1, 2. \qquad (38)$$
This shows that the transformation $S \to S[L]$ results in the transformation (38) of the roots of the equation (37), the matrix of the transformation for the roots being $|L|L^{-1}$.

We shall now consider the case $ac - b^2 < 0$, so that $S[x]$ is an indefinite binary form. Our object is to study the $H$-space of $S$. This is the set of two-rowed symmetric matrices $H$ satisfying
$$H > 0, \qquad HS^{-1}H = S.$$
If $C$ is a non-singular matrix such that $S[C^{-1}] = S_0 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$, then, by our considerations before, $H$ can be obtained from the $H_0$ satisfying
$$H_0 S_0^{-1} H_0 = S_0$$
by taking $H = H_0[C]$. This space $H_0$ of the $H_0$ has the parametrical representation
$$H_0 = \begin{pmatrix} \dfrac{1+x^2}{1-x^2} & \dfrac{2x}{1-x^2} \\ \dfrac{2x}{1-x^2} & \dfrac{1+x^2}{1-x^2} \end{pmatrix}, \qquad |x| < 1.$$
We put
$$H_0 = \begin{pmatrix} p & q \\ q & p \end{pmatrix};$$
then $|H_0| = p^2 - q^2 = 1$.

In the second chapter we associated uniquely with every positive matrix of determinant 1 a point $z$ in the upper half of the complex $z$-plane. If $z$ is the representative point for $H_0$, then $z = \xi + i\eta$, $\eta > 0$, and
$$z = -\frac{q}{p} + \frac{i}{p},$$
which, in terms of the parameter $x$, is
$$z = \frac{1}{i}\,\frac{x - i}{x + i}. \qquad (39)$$
(39) shows that $z$ lies on the semi-circle in the upper half plane with unit radius around the origin as centre. Since the linear transformation (39) takes the points $x = -1, 0, 1$ into the points $z = 1, i, -1$, it follows that, as $x$ runs through the interval $(-1, 1)$, $z$ traces the above semi-circle of unit radius. We may take this semi-circle as the $H_0$-space.

From our general results we see that the line element is
$$ds = \frac{dx}{1 - x^2}, \qquad (40)$$
so that the distance between two points $z_1$, $z_2$, with values of the parameter $x_1$, $x_2$ respectively, is
$$\rho(z_1, z_2) = \int_{x_1}^{x_2} ds = \frac{1}{2}\log\left(\frac{1 + x_2}{1 - x_2} : \frac{1 + x_1}{1 - x_1}\right). \qquad (41)$$
From (39), $z$ determines $x$ uniquely, namely
$$x = \frac{1}{i}\,\frac{z - i}{z + i}, \qquad (42)$$
so that
$$dx = \frac{2\,dz}{(z + i)^2},$$
and hence the distance $\rho(z_1, z_2)$ is also
$$\rho(z_1, z_2) = \frac{1}{2}\log\left(\frac{z_2 + 1}{z_2 - 1} : \frac{z_1 + 1}{z_1 - 1}\right). \qquad (43)$$
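Formulas (39)-(43) can be verified numerically; the sketch below (ours, not from the text) checks that (39) lands on the unit semicircle and that the $x$-form (41) and the $z$-form (43) of the distance agree up to sign, that is, up to the orientation of the two points.

```python
import numpy as np

def z_of_x(x):
    return (x - 1j) / (1j * (x + 1j))        # equation (39)

x1, x2 = -0.3, 0.7
z1, z2 = z_of_x(x1), z_of_x(x2)
assert np.isclose(abs(z1), 1) and z1.imag > 0

rho_x = 0.5 * np.log(((1 + x2) / (1 - x2)) / ((1 + x1) / (1 - x1)))
cross = ((z2 + 1) / (z2 - 1)) / ((z1 + 1) / (z1 - 1))
rho_z = 0.5 * np.log(cross.real)             # the ratio is real and positive
assert np.isclose(abs(rho_x), abs(rho_z))
```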
In Poincaré's model of non-Euclidean geometry (see [11]), the distance $\rho_0(z_1, z_2)$ between two points $z_1$, $z_2$ in the upper half plane is given by
$$\rho_0(z_1, z_2) = \log[D(z_1, z_2, z_3, z_4)]$$
where $z_3$ and $z_4$ are the points in which the unique circle through $z_1$ and $z_2$ orthogonal to the real line intersects the real line, $z_1, z_2, z_3, z_4$ being in cyclical order, and $D(z_1, z_2, z_3, z_4)$ is the cross ratio
$$D(z_1, z_2, z_3, z_4) = \frac{z_1 - z_3}{z_1 - z_4} : \frac{z_2 - z_3}{z_2 - z_4}.$$
With this notation we see that (43) can be written as
$$\rho(z_1, z_2) = \frac{1}{2}\log[D(z_1, z_2, 1, -1)]. \qquad (45)$$
Let us now go back to the $H$-space. This consists of the points $H = H_0[C]$. We shall assume $|C| = 1$; for if not, we interchange the columns of $C$, which merely means taking $-S$ instead of $S$. By (38) it follows that the $H$-space is precisely the semi-circle on $\theta_1$, $\theta_2$ as diameter, where $\theta_1$, $\theta_2$ are the roots of (37). The equation of this semi-circle is, as can be seen,
$$a(\xi^2 + \eta^2) + 2b\xi + c = 0, \qquad (46)$$
with centre on the real line at the point $-\dfrac{b}{a}$ and radius $\dfrac{\sqrt{b^2 - ac}}{|a|}$.

In the previous chapter we saw that the modular region $F$ defined by
$$z\bar{z} \ge 1, \qquad -\frac{1}{2} \le \xi \le \frac{1}{2}$$
is a fundamental region for the proper unimodular group. Analogously to our definition of reduction of indefinite forms in the last section, we define a binary form $S[x]$ to be reduced if its $H$-space intersects $F$ in a non-empty set. Since transformations in the upper half plane are by means of matrices of determinant unity, this can be called proper reduction. Since the $H$-space is given by (46), the fact that $S$ is reduced means that at least one of the vertices $P$, $Q$ lies within the circle (46). Since $P$, $Q$ are the points $\dfrac{\pm 1 + i\sqrt{3}}{2}$, we see that if $S$ is reduced, then
$$a(\xi^2 + \eta^2) + 2b\xi + c \le 0$$
for $\xi = \frac{1}{2}$ or $\xi = -\frac{1}{2}$, $\eta = \frac{\sqrt{3}}{2}$. This gives
$$a - |b| + c \le 0. \qquad (47)$$
We may assume $a > 0$. For, if $a < 0$ (we have already assumed it is not equal to zero), there exists a properly unimodular matrix $U$ such that the first diagonal element of $S[U]$ is positive. In this case, since $a + c \le |b|$, we get
$$\frac{1}{4}(4a - 2|b|)^2 + 3b^2 = (2|b| - a)^2 + 3a^2 = 4d + 4(a^2 - a|b| + ac) \le 4d,$$
where $d = \text{abs}\,|S| = b^2 - ac$. This gives at once
$$a^2 \le \frac{4}{3}d, \qquad b^2 \le \frac{4}{3}d, \qquad 3ac \le d,$$
which at once shows that the number of integral properly reduced forms of given determinant is finite.

It is to be noted that the reduction conditions above are not the same as those of Gauss.
Let us now construct a fundamental region for the unit group in this $H$-space. Before doing this, we shall first study the structure of the unit group $\Gamma = \Gamma(S)$ of $S$.

Let $S[x] = ax^2 + bxy + cy^2$ be a form representing integers for integral values of $x$ and $y$. Then
$$S = \begin{pmatrix} a & b/2 \\ b/2 & c \end{pmatrix}$$
is a semi-integral matrix. Let $b^2 - 4ac = d$, with $d > 0$ and not a square. Then $S[x]$ is not a zero form. Also let $(a, b, c) = 1$. Since $S[x]$ is not a zero form, neither $a$ nor $c$ is zero. We shall consider the proper group of units $U$ of $S$, namely the group $\Gamma_0 = \Gamma_0(S)$ of unimodular matrices $U$ such that
$$S[U] = S, \qquad |U| = 1. \qquad (48)$$
Clearly $\Gamma_0$ is a subgroup of $\Gamma$ of index 2. Let $U = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}$ be an element of $\Gamma_0$. Let $\theta_1$, $\theta_2$ be the roots of
$$a\theta^2 + b\theta + c = 0. \qquad (49)$$
Then it can be seen that the mapping $S \to S[U]$ keeps $\theta_1$ and $\theta_2$ fixed. Thus
$$\theta_i = \frac{\delta\theta_i - \beta}{-\gamma\theta_i + \alpha}, \qquad i = 1, 2,$$
which means that $\theta_1$ and $\theta_2$ are again roots of the polynomial
$$\gamma\theta^2 + (\delta - \alpha)\theta - \beta = 0. \qquad (50)$$
(49) is irreducible in the rational number field, since otherwise $\theta_1$ and $\theta_2$ would both be rational, and so $\sqrt{d}$ would be rational, which is a contradiction to our assumption that $S[x]$ is not a zero form. Therefore (49) and (50) give
$$\frac{\gamma}{a} = \frac{\delta - \alpha}{b} = \frac{-\beta}{c} = q, \qquad (51)$$
where $q$ is an integer. If $U \ne \pm E$, then $\beta \ne 0$, $\gamma \ne 0$. Let us put $\alpha + \delta = p$. Then
$$\alpha = \frac{p - bq}{2}, \qquad \delta = \frac{p + bq}{2}. \qquad (52)$$
Since $\alpha\delta - \beta\gamma = 1$, we get the relation
$$\frac{p^2 - q^2 b^2}{4} + q^2 ac = 1, \qquad \text{or} \qquad p^2 - dq^2 = 4, \qquad (53)$$
which is the well-known Pell's equation. Thus every unit of $S$ in $\Gamma_0$ gives rise to a solution of Pell's equation. Conversely, every solution of (53) gives rise to a unit $U$ of $S$, as can be seen from (51) and (52). This unit is
$$U = \begin{pmatrix} \dfrac{p - bq}{2} & -cq \\ aq & \dfrac{p + bq}{2} \end{pmatrix}.$$
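The correspondence between units and solutions of Pell's equation can be exercised numerically. The Python sketch below (ours, not from the text) finds the smallest solution of $p^2 - dq^2 = 4$ by brute force (adequate for small $d$; in practice one would use continued fractions), assembles the unit $U$ from (51) and (52), and verifies $S[U] = S$ and $|U| = 1$.

```python
import numpy as np
from fractions import Fraction

def pell4(d, qmax=10**6):
    """Smallest solution p, q > 0 of p^2 - d q^2 = 4 (d > 0, not a square)."""
    for q in range(1, qmax):
        p2 = 4 + d * q * q
        p = int(round(p2 ** 0.5))
        if p * p == p2:
            return p, q
    raise ValueError("no solution found below qmax")

a, b, c = 1, 1, -1            # x^2 + xy - y^2, d = b^2 - 4ac = 5
d = b * b - 4 * a * c
p, q = pell4(d)               # (3, 1) for d = 5
U = np.array([[Fraction(p - b * q, 2), -c * q],
              [a * q, Fraction(p + b * q, 2)]])
S = np.array([[Fraction(a), Fraction(b, 2)], [Fraction(b, 2), Fraction(c)]])
assert U[0, 0] * U[1, 1] - U[0, 1] * U[1, 0] == 1   # |U| = 1
assert np.array_equal(U.T @ S @ U, S)               # S[U] = S
```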
Consider the representation $H \to H[U]$, $U \in \Gamma_0$, in the $H$-space. Let the representative point of $H_0$ be $z_0$ on the semi-circle which defines the $H$-space. Denote by $w_0$ the representative point of $H_0[U]$. Let $z$ be any variable point on this semi-circle and $w$ its image under the transformation $H \to H[U]$. If $U = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}$, then
$$w_0 = \frac{\delta z_0 - \beta}{-\gamma z_0 + \alpha}, \qquad w = \frac{\delta z - \beta}{-\gamma z + \alpha}.$$
Because of the fact that $U$ fixes both $\theta_1$ and $\theta_2$, and cross-ratio is unaltered by a linear transformation, we see that
$$\frac{z - \theta_1}{z - \theta_2} : \frac{z_0 - \theta_1}{z_0 - \theta_2} = \frac{w - \theta_1}{w - \theta_2} : \frac{w_0 - \theta_1}{w_0 - \theta_2},$$
or that
$$\frac{w - \theta_1}{w - \theta_2} = \lambda\,\frac{z - \theta_1}{z - \theta_2}, \qquad (54)$$
where $\lambda = D(w_0, z_0, \theta_1, \theta_2)$. But equation (54) shows that $\lambda = D(w, z, \theta_1, \theta_2)$, which means that $\lambda$ is a constant independent of the point $z$. This shows that, in this non-Euclidean geometry of ours, the mapping $H \to H[U]$ corresponds to a translation. Also (see [11]) the quantity $\lambda$ has the property that if $\lambda_1$ and $\lambda_2$ are the eigenvalues of the matrix of the transformation
$$w = \frac{\delta z - \beta}{-\gamma z + \alpha}, \qquad (55)$$
then, by proper ordering of $\lambda_1$ and $\lambda_2$, we have
$$\lambda = \frac{\lambda_1}{\lambda_2},$$
and therefore the non-Euclidean distance $\rho_0(z, w)$ is given by
$$\rho_0(z, w) = \log\frac{\lambda_1}{\lambda_2},$$
where $\lambda_1 \ge \lambda_2$. Now $\lambda_1$ and $\lambda_2$ are the characteristic roots of the mapping (55), and hence they satisfy
$$\lambda^2 - (\alpha + \delta)\lambda + 1 = 0,$$
which shows that
$$\lambda_{1,2} = \frac{\alpha + \delta}{2} \pm \sqrt{\left(\frac{\alpha + \delta}{2}\right)^2 - 1}.$$
Substituting from (52), $\alpha + \delta = p$, we get
$$\lambda_{1,2} = \frac{p \pm q\sqrt{d}}{2}, \qquad (56)$$
where $p$ and $q$ are the unique solutions of Pell's equation corresponding to the unit $U$.
Let R be the eld of rational numbers and R(

d) the real quadratic


eld. The element of R(

d) dened by
=
p + q

d
2
where p and q are a solution of Pells equation, is a unit of norm 1. This 111
is seen from the fact that is a root of

2
p + 1 = 0.
(If d is square-free the converse is also true).
In (56), the quantities
1
,
2
are and
1
in some order. If U is
changed to U
1
or to U, then gets changed to
1
or respectively.
We therefore choose among the four quantities , ,
1
,
1
, one
(and there is only one in general), call it

which is such that

1
and put

0
(z, w) = log
2
.
(Note that
1
/
2
=
2
with
1

2
). This will then mean that the
translation in the H space is by the amount
(z, w) = log

.
Since the representation of
0
in H is discontinuous it follows that
the translations form a discrete subgroup in the group of non-Euclidean
motions on H. There is thus a U
0
and a corresponding

0
such that log

0
is the smallest. Hence any U will be of the form
U = U
n
0
(n = 0, 1, . . .)
Similarly any which arises from a U in
0
is of the form
=
n
0
(n = 0, 1, . . .)
94 3. Indenite quadratic forms
If instead of S being semi-integral, it was rational symmetric we 112
could, by multiplying it by an integer, make it integral satisfying the
condition (a, b, c) = 1. But this multiplication does not aect the unit
group. Hence the
Theorem 4. The group of proper units of an indenite, binary, non-zero,
rational quadratic form is an innite cyclic group whose elements stand
in a(1, 1) correspondence with the solutions of Pells equation.
Let S =
_
a b
b c
_
be a rational symmetric, non-singular indenite ma-
trix and let S [x] be not a zero form. Let the
semi-circle ALB denote the H space of the matrix S so that A and B
are points on the real axis with coordinates
2
and
1
,
2

1
. Further-
more since S [x] is not a zero form, the quantities
1
and
2
are irrational.
Let R denote the fundamental region of the proper unimodular group in
the upper half z-plane. Let U
1
, . . . , U
g
be the nitely many reducing
properly unimodular matrices. If H
U
for U = U
1
, . . . , U
g
denotes the 113
image of H under the transform H H[U] (this means for the points z
on H the transformation z

z

z +
, U =
_


_
), then
G =
g

i=1
(H
U
i
R)
U
1
i
is a point set on Hand is a fundamental region for
0
in H. It is important
to notice that H
U
i
does not lie completely on the boundary of R. For
then H
U
i
would have to be the unit circle or one of the lines = 1/2.
5. Binary forms 95
In the rst case this would mean that
z
z +
= 1
where z =
1
or
2
. This is the same as saying
1
and
2
are rational,
which they are not. Then same happens if the second case is true.
This shows that none of the arcs (H
U
i
R)
U
i
has an end point at
1
or
2
. Hence G is compact. Its volume therefore in the measure induced
by the invariant metric is nite.
Bibliography
[1] E. Cartan : Sur une classe remarquable d

espaces de Riemann,
Bull. Math. Soc. France, Tome 55 (1927) P. 114 - 134.
[2] Fricke-Klein : Vorlesungen uber die theorie der Modul- 114
funktionen, Bd.1, Leipsig (1890), P.243-269.
[3] C. F. Gauss : Disquisitiones Arithmeticae.
[4] C. Hermite : Oeuvres, Tome 1, Paris (1905), P.164-293.
[5] L. Pontrjagin : Topological groups, Princeton (1940).
[6] C. L. Siegel : Discontinuous groups, Annals of Math., 44 (1943),
P. 674-689.
[7] C. L. Siegel : Some remarks on discontinuous groups, Annals of
Math., 46(1945), P.708-718.
[8] C. L. Siegel : Einheiten quadratischer Formen, Abh. aus. dem.
Math. Semi. Hansischen Univ., 13(1940), P.209-239.
[9] C. L. Siegel : Indenite quadratische Formen und Funktionenthe-
orie I, Math. Annalen, 124 (1951), P.17-54.
[10] C. L. Siegel : The average measure of quadratic forms with given
determinant and signature, Annals of Math., 45(1944), P.667-685.
[11] C. L. Siegel : Ausgew ahlte Fragen der Funktionentheorie II
G ottingen, 1953.
97
98 BIBLIOGRAPHY
[12] A. Weil : Lint egration dans les groupes topologiques et ses appli-
cations, Paris (1940).
Chapter 4
Analytic theory of Indenite
quadratic forms
1 The theta series
115
Let
f (x
1
, . . . , x
n
) = a
11
x
2
1
+ + a
nn
x
2
n
+ b
1
x
1
+ + b
n
x
n
+ c = 0 (1)
be a Diophantine equation with integral coecients. Let S denote the
matrix of the homogeneous part. We consider integral linear homoge-
neous transformations
x
i

n

j=1
q
i j
x
j
, i = 1, . . . , n
where the matrix Q = (q
i j
) is unimodular. Let the resulting function be
Q(x
1
, . . . , x
n
). Then f = 0 has an integral solution if and only if Q = 0
has an integral solution.
Suppose the matrix S has rank r, 0 < r < n so that |S | = 0. Let p
be a primitive vector such that S p = 0. Let U be the unimodular matrix
with p as its rst column. Then
S [U] =
_
0 0

0 S
1
_
.
99
100 4. Analytic theory of Indenite quadratic forms
We may repeat the process with S
1
instead of S . Finally therefore we
arrive at a unimodular matrix V so that
S [V] =
_
0 0
0 S
r
_
|S
r
| 0. Put now
(b
1
, b
2
, . . . , b
n
)V = (c
1
, . . . , c
n
).
If c
1
, . . . , c
nr
are zero it means that by a unimodular transformation 116
we can bring f into a quadratic form in r-variables. Suppose now that
c
1
, . . . , c
nr
are not all zero. Since they are integers, there exists a uni-
modular matrix V
1
of n r rows such that
(c
1
, . . . , c
nr
)V
1
= (0, 0, . . . , d).
Put now
V
2
=
_
V
1
0
0 E
r
_
.
Then S [VV
2
] = S [V] and f (x
1
, . . . , x
n
) becomes transformed into
(x
nr
, . . . , x
n
) = d
11
x
2
nr+1
+ d
rr
x
2
n
+ dx
nr
+ d
1
x
nr+1
+ + d
r
x
n
+ d

.
This is the form into which f can, in general, be transformed by uni-
modular transformation.
We shall hereafter assume |S | 0 and that the quadratic form is
integral valued, that is, that for x
1
, . . . , x
n
integral, f (x
1
, . . . , x
n
) is an
integer. This means
) S is semi-integral (2)
that is that its diagonal elements are integers and twice the non-diagonal
elements are integers. (1) can now be written in the form
S [x] + b

x + c = S [x +
1
2
S
1
b] + c
1
4
S
1
[b].
If we put t = c
1
4
S
1
[b] and 2S a = b, then (1) takes the simple form
S [x + a] t = 0. (3)
1. The theta series 101
Obviously 117
) 2S a is integral. (4)
We shall therefore consider the diophantine problem in which the left
side is S [x +a]. Clearly and rational number t which can be represented
by S [x + a] satises
t S [a](mod 1). (5)
Consider the diophantine equation S [x +a] = t under the conditions
(2) and (4). If S > 0, the number of integral solutions, denoted A(S, a, t),
is nite. We now form the generating series

t
A(S, a, t)e
2itz
(6)
where z = + i, > 0. It follows that

t
A(S, a, t)e
2itz
=

x
e
2iS [x+a]z
(7)
and since S > 0, the series on the right of (7) converges. The right
side of (7) is a so-called theta series studied extensively by Jacobi. In
particular if S = E
4
, the unit matrix of order 4 and a = 0, we have

t=0
A
4
(t)e
2itz
=
_

x=
e
2ix
2
z
_

_
4
where A
4
(t) is the number of integral solutions of
t = x
2
1
+ x
2
2
+ x
2
3
+ x
2
4
. (8)
It was conjectured by Fermat and proved by Lagrange that for every 118
t 1, (8) has a solution. Jacobi proved, by using the theory of Elliptic
theta series, that
A
4
(t) = 8

d/t
4d
d.
Since for every t, unity divider t, we nd that A
4
(t) 1, for all t 1.
This, besides proving Lagranges theorem, is a quantitative improve-
ment of it.
102 4. Analytic theory of Indenite quadratic forms
If S is indenite, A(S, a, t), if dierent from zero, is innite. This
means that the right side of (7) diverges. It is therefore desirable to
dene an analogue of A(S, a, t) for indenite S . To this end we shall
introduce the theta series.
Let z = +i, 0 be a complex parameter. Let S be n-rowed sym-
metric and H any matrix in the representation space H of the orthogonal
group of S . Then H > 0 and HS
1
H = S . Put
R = S + iH. (9)
Then R = zK + zL where K =
1
2
(S + H), L =
1
2
(S H). R is now a
complex symmetric matrix whose imaginary part H is positive denite.
Let a be a rational column satisfying (4). Put y = x + a, x being an
integral column. Dene
f
a
(z, H) =

ya(mod 1)
e
2iR[y]Z
(10)
where the summation runs over all rational columns y a(mod 1).
Since H > 0, (10) is absolutely convergent for every H in H.
For our purposes, it seems practical to consider a more general func- 119
tion f
a
(z, H, w) dened by
f
a
(z, H, w) =

ya(mod 1)
e
2iR[y]+2iy

(11)
where w is a complex column of n rows of elements w
1
, . . . , w
n
. It is
clear that f
a
(z, H, w) are still convergent. (11) is the general theta series.
It is to be noticed that if S > 0, (10) coincides with (7).
Consider now all the rational vectors a which satisfy (4), namely
that 2S a is integral. It is clear that there are only nitely many such a
incongruent (mod 1). The number l of such residue classes is clearly at
most equal to
d = abs2|S |. (12)
Let a
1
, a
2
, . . . , a
l
be the complete set of these l residue classes incongru-
ent (mod 1). For each class a
i
form the function f
a
i
(z, H, w). We denote
2. Proof of a lemma 103
by f (x, H, w) the functional vector
f (z, H, w) =
_

_
f
a
1
(z, H, w)
.
.
.
f
a
l
(z, H, w
_

_
. (13)
2 Proof of a lemma
Let P > 0 be an n-rowed real matrix and u =
_
u
1
.
.
.
u
n
_
a column of n real
numbers. The function
f (u
1
, . . . , u
n
) =

x
e
P[x+u]
where x runs through all integral n-rowed columns, is a continuous func- 120
tion of the n-variables u
1
, . . . , u
n
and has in each variable the period 1.
It has the fourier series

l
c
l
e
2il

u
l running through all integral n-rowed vectors and
c
l
=
_

f (u
1
, . . . , u
n
)e
2il

u
du
1
. . . du
n
(14)
where is the unit cube in the n dimensional space R
n
of u
1
, . . . , u
n
.
Since P > 0, we can write P = M

M for a non-singular M. Put


Mu = v, M
1
l = k. (15)
Then
c
l
: |P|

1
2
_
R
n
e
y

v2ik

v
dv
104 4. Analytic theory of Indenite quadratic forms
where we take the positive value of the square root. If v =
_
v
1
.
.
.
v
n
_
and
k =
_

_
k
1
.
.
.
k
n
_

_
, then v

v = v
2
1
+ + v
2
n
and k

v = k
1
v
1
+ + k
n
v
n
so that
c
l
= |P|

1
2
e
P
1
[l]
_

e
t
2
dt
_

_
n
(16)
That the value of the integral on the right of (16) is unity is seen as
follows: In the rst place from the uniform convergence of the series
f (u
1
, . . . , u
n
), it follows that

x
e
P[x+u]
= |P|

1
2

n

l
e
P
1
[l]+2il

u
(17)
where 121
=

e
t
2
dt.
Secondly, is independent of P and so putting n = 1, P = 1 and
u = 0, we see from (17) that = 1.
Suppose now that in (17), u
1
, . . . , u
n
are complex variables. Since
f (u
1
, . . . , u
n
) is absolutely convergent which moreover is uniformly con-
vergent in every compact subset of the n-complex space of u
1
, . . . , u
n
, it
follows that f (u
1
, . . . , u
n
) is an analytic function of the n-complex vari-
ables u
1
, . . . , u
n
. The same is true of the right side of (17) also. Since
(17) holds for all real u
1
, . . . , u
n
, it holds, by analytic continuation, for
complex u
1
, . . . , u
n
also.
Suppose now that P is a complex symmetric matrix P = X + iY
whose real part X is positive denite. Then P
1
also has this property.
For, since X and Y are real symmetric and X > 0, there exists a non-
singular real C such that
X = C

C, Y = C

DC
3. Transformation formulae 105
where D = [d
1
, . . . , d
n
] is a real diagonal matrix with diagonal elements
d
1
, . . . , d
n
. Now P = X + iY = C

(E + iD)C so that
P
1
[C

] =
_

_
1 id
1
1 + d
2
1
, . . . ,
1 id
n
1 + d
2
n
_

_
(18)
which shows, since C is real, that the real part of P
1
is positive denite
symmetric. Incidentally we have shown that P is non-singular.
If we now take u
1
, . . . , u
n
to be xed complex numbers, then f (P) = 122
_
x
e
[x+u]
is an analytic function of the
n(n + 1)
2
complex variables con-
stituting the matrix P. Since (17) is true for P real, by analytic continua-
tion, it is true also if P is complex symmetric with positive real part. For
|P|

1
2
one takes that branch of the algebraic function which is positive
for real P. We thus have the
Lemma 1. Let P be a complex n-rowed symmetric matrix with real part
positive. Let u be any complex column. Then

x
e
P[x+u]
= |P|

1
2

x
e
P
1
[x]+2ix

u
where x runs through all integral columns and |P|

1
2
is that branch of
the algebraic function which is positive real P.
3 Transformation formulae
We now wish to study the transformation theory of the theta series de-
ned in (11), under the modular substitutions
z z
M
=
z +
z +
, M =
_


_
, |M| = 1 (19)
where M is an integral matrix. Since H will be xed throughout this
section we shall write f
a
(z, w) instead of f
a
(z, H, w). Following Hermite,
we rst consider the case 0, and write
z +
z +
=

+
2
z
1
, z
1
1
= z
2
, z
2
= z +

(20)
106 4. Analytic theory of Indenite quadratic forms
Clearly 123
f
a
(z
M
, w) = f
a
_

+
2
z
1
, w
_
. (21)
Denote by R
M
the matrix R in (9) with z replaced by z
M
, then
R
M
=

S +
2
R
1
where
R
1
= z
1
S + H
2
+ z
1
S H
2
(22)
By denition of y, y a is integral. We may therefore write v a =
x + g where y belongs to the nite set of residue classes of integral
vectors (mod ). When x runs through all integral vectors and g through
a complete system of (mod ) incongruent integral vectors, then y runs
through all rational vectors a(mod 1). We have therefore
f
a
(z
M
, w) =

g((mod) )
e
2i

S [g+a]

x
e
2i(
2
R
1
[x+g+a]+w

(w+g+a))
R
1
being non-singular, we can complete squares in the exponent in
the inner sum and obtain,
f
a
(z
M
, w) = e

i
2
R
1
1
[w]

g((mod) )
e
2i

S [g+a]

x
e
2iR
1
[x+(g+a)
1
+R
1
1
w/2]
_

_
(23)
In order to be able to apply lemma 1 to the inner sum in (23) we rst
compute R
1
1
. Since S and H are symmetric and H > 0, there exists a
real non-singular C such that
S = C

_
E
p
0
0 E
q
_
C, H = C

C.
Then R
1
is given by 124
R
1
= C

_
z
1
E
p
0
0 z
1
E
q
_
C (24)
3. Transformation formulae 107
where z
1
is given by (20). It readily follows that
R
1
1
= z
1
1
S
1
+ H
1
2
z
1
1
S
1
H
1
2
. (25)
Using (20), we nally have
R
1
1
=
_

S + R
_
[S
1
] (26)
Applying lemma 1 to the inner sum we now nd
f
a
(z
M
, w) = | 2iR
1
|

1
2
e

i
2
R
1
1
[w]

g(mod )
e
2i

S [g+a]

l
e
i
2
_

S +R
_
[S
1
l]+2il

((g+a)
1
+R
1
1
w

2
)
_

_
(27)
where the square root has to be taken according to the prescription in
lemma 1. It follows then, using (24), that
| 2iR
1
|

1
2
= d

1
2
_
z +

_
p
2
_
z +

_
q
2
(28)
where
= e
i
4
(qp)
(29)
is an eighth root of unity and d is given by (12).
In order to simplify the inner sum in (27), we prove the following
Lemma 2. If a and b are two rational columns such that 2S a and 2S b 125
are integral, , , integers such that 1((mod) ) and x is any
integral column, then

g((mod) )
e
2i

_
S [g+a]2(x+b)

S (g+a)+S [x+b]
_
is independent of x.
108 4. Analytic theory of Indenite quadratic forms
Proof. We have only to consider the exponent in each term (mod ). In
the rst place we have
S [g + a] = S [g + a x] + 2x

S (g + a)
2
S [x]
S [x + b] = S [b] + S [x] 2b

S (g + a x) + 2b

S (g + a)
2(x + b)

S (g + a) = 2b

S (g + a) + 2x

S (g + a).
Using the fact that 1((mod) ) we see that the exponent in each
term is congruent (mod ) to
S [g + a x] 2b

S (g + a x) + S [b].
Since now and x are xed and g runs over a complete system of residue
classes (mod ), it follows that g x also runs through a complete
system (mod ). This proves the lemma.
In the inner sum in (27), l runs through all integral vectors, so that
we may write
1
2
S
1
l = (x + b)
where x is an integral column and b is one of the nitely many repre-
sentatives a
1
, . . . , a
n
of the residue classes (mod 1) given in (13). Also
when x runs through all integral vectors and b through these residue
class representatives, (x + b) runs through all rational columns of the
type
1
2
S
1
l. We have thus
f
a
(z
M
, w) = | 2iR
1
|

1
2
e

i
2
R
1
1
[W]

g((mod) )
e
2i

_
S [g+a]2(x+b)

S (g+a)+S [x+b]
_
e
2iR[x+b]+2i(x+b)

S R
1
1
w
_

_
126
Let us use the abbreviation

ab
(M) =

g((mod) )
e
2i

_
S [g+a]2b

S (g+a)+S [b]
_
(30)
3. Transformation formulae 109
Then we have
f
a
(z
M
, w) = | 2iR
1
|

1
2
e

i
2
R
1
1
[w]

a,b
(M)e
2iR[x+b]+2i(x+b)

S R
1
1
w
(31)
Let us now dene the vector w
M,z
by
S R
1
1
w
M,z
= w
1
. (32)
Using (22) for R
1
we get
w
M,z
=
_
1
z +
K +
1
z +
L
_
S
1
w. (33)
With this denition of w
M,z
we see that
R
1
1
[w
M,z
] = R
1
[S
1
w].
Use now the abbreviation
(M, z, w) = e

i
2
R
1
[S
1
w]
; (34)
then substituting w
M,z
for w in (27), we get the formula
f
a
(z
M
, w
M,z
) = d

1
2
_
z +

_
p/2
_
z +

_
q/2

a,b
(M) f
b
(z, w)(M, z, w)
_

_
(35)

Till now we considered the case 0. Let now = 0. Then 127


M =
_

0
_
110 4. Analytic theory of Indenite quadratic forms
and = 1. Also z
M
=
z +

= z +. The denition of w
M,z
given in
(33) is valid even if = 0. Thus
e
2iS [a]
f
a
(z
M
, w
M,z
) =

x
e
2iR[x+a]+2i(x+a)

w
M,z
.
Since = 1, x also runs through all integral vectors and so
f
a
(z
M
, w
M,z
) = e
2iS [a]
f
a
(z, w) (36)
a being again some one of the a
1
, . . . , a
l
determined by a and .
For any two rational columns a, b with 2S a and 2S b integral, let us
dene
e
ab
=
_

_
1 if a b(mod 1)
0 otherwise.
Dene now the l-rowed matrix G(M, z) by
G(M, z) =
_

_
d

1
2
_
z +

_
p
2
_
z +

_
q
2
(
ab
(M)), if 0
_
e
a a

e
2iS [a]
_
, if = 0
(37)
Also put
(M, z, w) =
_

_
e

i
2
R
1
[S
1
w]
if 0
1 if = 0
(38)
We then have the following fundamental formula for the vector f (z, M, 128
w) dened in (13):
Theorem 1. If M =
_


_
is any modular matrix and z
M
=
z +
z +
, then
f (z
M
, w
M,z
) = G(M, z) f (z, w)(M, z, w)
where w
M,z
is dened by (33) and G(M, z) and (M, z, w) by (37) and
(38) respectively.
3. Transformation formulae 111
We shall now obtain a composition formula for the l-rowed matrices
G(M, z).
Let M and M
1
be two modular matrices
M =
_


_
, M
1
=
_

1

1

1

1
_
, MM
1
=
_

2

2

2

2
_
By the denition of z
M
, it follows that
(z
M
1
)
M
= z
MM
1
(39)
From denition (33), it follows that
(w
M
1
,z
)
M,z
M
1
=
_
1
z
M
1
+
K +
1
z
M
1
+
L
_
S
1
.
_
1

1
z +
1
K +
1

1
z +
1
L
_
S
1
w.
_

_
Using the properties of the matrices K and L we get
(w
M
1
,z
)
M,z
M
1
=
_
1

2
z +
2
K +
1

2
z +
2
L
_
S
1
w
which gives the formula
w
MM
1
,z
= (w
M
1
,z
)
M,z
M
1
. (40)
Using the denition of R
1
and of w
M,z
we get 129
R
1
[S
1
w] = w

S
1
w
M,z
(41)
Let us now assume, for a moment, that ,
1
,
2
are all dierent
from zero. Using denition (38) let us write
(M
1
, z, w) (M, z
M
1
, w
M
1
,z
) = e

i
2

.
Then using (41), it follows that
= w

S
1
_

1
+
_
1

1
z +
1
K +
1

1
z +
1
L
_
S
1
112 4. Analytic theory of Indenite quadratic forms
_
1
z
M
1
+
K +
1
z
M
1
+
L
_
S
1
_
w
M
1
, z
Using again the properties of K and L we obtain
= w

S
1

2
_

1
z +
1

2
z +
2
K +

1
z +
1

2
z +
2
L
_
S
1
w
M
1
,z
which is seen to be equal to
w

S
1

2
w
MM
1
,z
.
By (41) therefore we get the formula
(M
1
, z, w) (M, z
M
1
, w
M
1
,z
) = (MM
1
, z, w) (42)
We can now release the condition on ,
1
,
2
. If some or all of
them are zero, then using denition (38), we can uphold (42). Thus (42)
is true for any two modular matrices M, M
1
.
If we now use theorem 1 we have
f (z
MM
1
, w
MM
1
,z
) = G(MM
1
, z) f (z, w)(MM
1
, z, w). (43)
Using (39) and (40) we have
f (z
MM
1
, w
MM
1
,z
) = G(M, z
M
1
) f (z
M
1
, w
M
1
,z
)(M, z
M
1
, w
M
1
,z
)
which again gives the formula 130
f (z
MM
1
, w
MM
1
,z
) =
= G(M, z
M
1
)G(M
1
, z) f (z, w)(M, z
M
1
, w
M
1
,z
)(M
1
z, w) (44)
Using (42), (43) and (44) and observing that (MM
1
, z, w) 0, we get
the matrix equation
(G(MM
1
, z) G(M, z
M
1
)G(M
1
, z)) f (z, w) = 0. (45)
We remark that the 1-rowed matrix on the left hand side of equation
(45) is independent of w. Let us now prove
3. Transformation formulae 113
Lemma 3. The l functions f
a
1
(z, w), . . . , f
a
1
(z, w) are linearly indepen-
dent over the eld of complex numbers.
Proof. By denition f
a
(z, w) =
_
x
e
2iR[x+a]+2iw

(x+a)
so that it is a
fourier series in the n variables w
1
, . . . , w
n
. We may write
f
a
(z, w) =

r
c
r
e
2iw

r
where r runs through all rational vectors a(mod 1). If
1
, . . . ,
r
be
complex numbers such that
l

i=1

i
f
a
i
(z, w) = 0
then by uniqueness theorem of fourier series, every fourier coecient
must vanish. But since a
1
, . . . , a
l
are all distinct (mod 1), it follows that
the exponents in the l series f
a
(z, w) are all distinct. Hence
i
= 0, i =
1, . . . , n, and our lemma is proved.
Using (45) and lemma 3, it follows that the l-rowed matrix on the 131
left of (45) is identically zero. Hence the
Theorem 2. For any two modular matrices M, M
1
we have the compo-
sition formula
G(MM
1
, z) = G(M, z
M
1
)G(M
1
, z)
Let M =
_


_
be a modular matrix so that
M
1
=
_


_
Let us assume that 0. Let as before a, b be two rational columns
chosen from the set a
1
, . . . , a
l
. We shall prove

a b
(M) =
b a
(M
1
) (46)
114 4. Analytic theory of Indenite quadratic forms
where
a b
(M) is the sum dened in (30).
In order to prove (46), put t =
a b
(M) and t

=
b a
(M
1
). Because
of lemma 2, we have
t =

y((mod) )
e
2i

_
S [y+a]2(y+b)

S (x+a)+S [x+b]
_
Taking the sum over all integral x(mod ) we have
t.abs
n
=

x((mod) )

y((mod) )
e
2i

{S [y+a]2(y+b)

S (x+a)+S [x+b]}
Interchanging the two summations we have
t abs
n
=

y((mod) )

x((mod) )
e

2i

{S [x+b]2(x+a)

S (y+b)+S [y+a]}
But by lemma 2 again we see that the inner sum is independent of y and 132
equal to t

, the complex conjugate of t

. Thus
t abs
n
= t

abs
n
and since 0, it follows that t = t

and (46) is proved.


In the composition formula of theorem 2, let us put M
1
= M
1
.
Then G(E, z) = E is the unit matrix of order 1. From the denition of
G(M, z) we have
G(M
1
, z) = d

1
2
_
z

_
p
2
_
z

_
q
2
(
a b
(M
1
))
G(M, z
M
1) = d

1
2
(
1
(z + ))
p
2
(
1
(z + ))
q
2
(
ab
(M))
Let us put
(M) = d

1
2
abs

n
2
(
ab
(M)). (47)
Then we get from the previous equations
(M) (M)

= E
4. Convergence of an integral 115
which shows that (M) is unitary.
In case = 0, from (37), G(M, z) is clearly unitary. We therefore
put
(M) = G(M, z), if = 0. (48)
Let us now put w = 0 in theorem 1. Then (M, z, w) = 1 so that if
we write as in 1, f (z, H) instead of f (z, H, w), when w = 0, we get
f (z
M
, H) = G(M, z) f (z, H). (49)
Using the denitions (47) and (48), we get
Theorem 3. If M =
_


_
is a modular matrix, then
(
z
+ )

p
2
(z + )

q
2
f (z
M
, H) = (M) f (z, H)
where (M) is a certain unitary matrix and the radical scalar factors 133
on the left side are taken with their principal parts.
We remark that we introduced the vector w only to prove the com-
position formula in theorem 2. Hereafter we will have only f (z, H), the
column consisting of f
a
i
(z, H) dened in (10).
From the composition formula we get
(MM
1
) = (M) (M
1
) (50)
which shows that the mapping M (M) is a unitary representation of
the modular group.
4 Convergence of an integral
Let S be the matrix of a quadratic form and let S be non-singular and
semi-integral. Let S have signature p, q so that p+q = n. Let us assume
that pq > 0. With S we had associated the H space of matrices H with
H > 0, HS
1
H = S . Let be the group of units of S . Let a denote
a rational column vector with 2S a integral. Denote by
a
the group of
units of S satisfying
Ua a(mod 1) (51)
116 4. Analytic theory of Indenite quadratic forms

a
is obviously a subgroup of of nite index. Let U
1
, . . . , U
s
denote a
complete system of representatives of left cosets of mod
a
so that
=
s

i=1
U
i

a
, s = ( :
a
).
Denote by F a fundamental region of in H and by F
k
the image by U
k
134
of F in H. Put
F
a
=
_
k
F
k
;
then it is easy to verify that F
a
is a fundamental domain for
a
in H.
For every H in H we had dened the theta series
f
a
(z, H) =

ya(mod 1)
e
2iR[y]
so that regarded as a function of H, f
a
(z, H) is a function on the manifold
H. If U
a
then
f
a
(z, H[U]) =

ya(mod 1)
e
2i(S +iH[U])[y]
.
Writing S [U] instead of S and observing that Uy Ua a(mod 1) and
that Uy runs through all rational columns a(mod 1) if y does, we have
f
a
(z, H[U]) = f
a
(z, H) (52)
so that we may regard f
a
(z, H) as a function on F
a
. Let dv be the invari-
ant volume measure in the H space. We shall now prove that
_
F
a
f
a
(z, H)dv
converges, in particular, if n > 4 and that
_
F
a
f
a
(z, H)dv =

ya(mod 1)
_
F
a
e
2iR[y]
dv (53)
4. Convergence of an integral 117
For proving this it is enough to show that the series of absolute values of
the terms of f
a
(z, H) converges uniformly in every compact subset of F
a
and that the integral over F
a
of this series of absolute values converges. 135
Because of the property (52) and the invariance of the volume mea-
sure it is enough to consider the integral over F instead of F
a
. By our
method of construction
F =
_
k
(H R
k
)
where R
k
is obtained from the Minkowski fundamental domain R. It is
therefore enough to consider the integral
_
HR
f
a
(z, H)dv
and prove (53) for H R instead of F
a
.
The general term of the integrand is e
2iR[y]
and its absolute value is
e
2H[x+a]
where > 0, H > 0 is reduced in the sense of Minkowski, x an integral
column and a a rational column with 2S a integral. If H = (h
kl
) then H
being reduced, there exists a constant c
1
, such that
H[y] > c
1
(h
1
y
2
1
+ + h
n
y
2
n
)
y being a real column with n elements y
1
, . . . , y
n
. Therefore
n
_
i=1

y
i
e
2c
1
h
i
y
2
i
is a majorant for the sum of the absolute values of the terms of f
a
(z, H).
Since, for a constant c
2
> 0,

t=
e
c
2
ht
2
< c
3
(1 + h

1
2
)
118 4. Analytic theory of Indenite quadratic forms
where h > 0 is a positive real number and c
3
is a constant depending on 136
c
2
, it follows that it is enough to prove, for our purpose, the convergence
of
_
HR
n
_
k=1
(1 + h

1
2
k
)dv (54)
If D is any compact subset of H R, then because H is reduced,
n

k=1
(1 + h

1
2
k
) is uniformly bounded in D. This will prove (53) as soon as
(54) is proved.
The proof depends on an application of Minkowskis reduction the-
ory.
Consider any element H = (h
kl
) in H R and consider the products
h
k
h
nk
, k = 1, 2, . . . , n 1. There exists an integer r
0 r
n
2
(55)
such that
h
k
h
nk

1
4
r < k < n r (56)
h
r
h
nr
<
1
4
(57)
If r = 0, (57) is empty and if r =
n
2
(which implies that n is even)
(56) is empty. Let us denote by M
r
the subset of H R consisting of
those H which have the same integer r associated with them. Clearly
H R =
_
r
M
r
. It is enough therefore to prove that for every r,
_
M
r
n
_
k=1
(1 + h

1
2
k
)dv
converges.
We rst obtain a parametrical representation for the matrices H in 137
M
r
.
Let K =
H + S
2
= (u
kl
) and L =
H S
2
= (v
kl
); then K and L are
non-negative matrices so that
u
kl

u
k
u
1
, v
kl

v
k
v
1
4. Convergence of an integral 119
where for a real number g, g denotes its absolute value. Since K + L =
S we get
s
kl

u
k
u
l
+

v
k
v
l
.
But since u
k
+ v
k
= h
k
we obtain, by using Schwarzs inequality,
s
kl

_
h
k
h
1
.
H being reduced we get for k r, l n r, using (57),
s
kl

_
h
k
h
1

_
h
r
h
nr
<
1
2
(58)
Since S is a semi-integral matrix, it follows that
s
kl
= 0, k r, l n r.
We have therefore a decomposition of S into the form
S =
_

_
0 0 P
0 F Q
P

G
_

_
(59)
where P is an r-rowed non-singular matrix. It has to be noted that if
r = 0, then
S = F (60)
and if r =
n
2
(n is then even),
S =
_
0 P
P

G
_
(61)
138
We now put S

= S [C] where
S

=
_

_
0 0 P
0 F 0
P

0 0
_

_
, C =
_

_
E P
1
Q


1
2
P
1
G
0 E 0
0 0 E
_

_
(62)
120 4. Analytic theory of Indenite quadratic forms
We split up H also in the same fashion, by the Jacobi transformation,
H = H
0
[C
0
] where
H
0
=
_

_
H
1
0 0
0 H
2
0
0 0 H
3
_

_
, C
0
=
_

_
E L
1
L
2
0 E L
3
0 0 E
_

_
(63)
where H
1
and H
3
are r-rowed symmetric matrices. Put
C
0
C = L =
_

_
E Q
1
Q
2
0 E Q
3
0 0 E
_

_
(64)
If we put H

= H[C], then since S


1
[H] = S , it follows that
S
1
[H

] = S

. Using the matrix L we have


(LS
1
L

)[H
0
] = S

[L
1
].
Substituting for the various matrices above, we get
F
1
[H
2
] = F (65)
H
3
P
1
H
1
= P

(66)
Q
3
= F
1
Q

1
P (67)
Q
2
= (A
1
2
F
1
[Q

1
])P (68)
where A is a skew symmetric matrix of r-rows. It is obvious that if 139
H
1
, H
2
, H
3
, Q
1
, Q
2
and Q
3
satisfy the above conditions, then the cor-
responding H is in H. We therefore choose the parameters for the M
r
space in the following manner: We have H
1
is arbitrary, r-rowed and
positive. From this H
3
is uniquely xed. Q
1
is an arbitrary matrix of
r-rows and n 2r columns. Q
3
is then determined uniquely. Choose A
to be arbitrary skew symmetric. Then (68) determines Q
2
. H
2
is now a
positive matrix satisfying (65). Thus the parameters are H
1
, Q
1
, A and
the parameters required to parametrize the space of positive H
2
satis-
fying (65). A simple calculation shows that the number of parameters
is
r(r + 1)
2
+ r(n 2r) +
r(r 1)
2
+ (p r)(q r). (69)
4. Convergence of an integral 121
We now compute the volume element in terms of these parameters.
The metric in the H space is
ds
2
=
1
8
(H
1
dHH
1
dH).
We substitute for H in terms of these new parameters. We denote dier-
entiation by (

) dot. Since C is a constant matrix we get


ds
2
=
1
8
(H
1
dH

H
1
dH

) (70)
As H

= H
0
[L] we get
H
1

H

= H
1
0
[L
1
](

H
0
[L] +

H
0
L + LH
0

L).
This gives the expression
(H
1

H

)
2
=
_
(H
1
0

H
0
H
1
0

H
0
) + 4(H
1
0

H
0

LL
1
_
+ 2(

LL
1

LL
1
) + 2(H
0
[

LL
1
]H
1
0
)
_

_
(71)
We shall now simplify the expression on the right of (71). Since 140
L = C
0
C and C is a constant matrix, we get

LL
1
=
_

_
0

Q
1

Q
2

Q
1
Q
3
0 0

Q
3
0 0 0
_

_
which shows that
(

LL
1

LL
1
) = 0, (H
1
0

H
0

LL
1
) = 0. (72)
Using the expression for H
0
in (63) we get
(H
1
0

H
0
H
1
0

H
0
) =
3

i=1
(H
1
i

H
i
H
1
i

H
i
).
Dierentiating (66) with regard to the variables H
1
and H
3
we get

H
3
P
1
H
1
+ H
3
P
1

H
1
= 0
122 4. Analytic theory of Indenite quadratic forms
which shows that H
1
1

H
1
= P
1

H
3
H
1
3
P

and therefore
(H
1
1

H
1
H
1
1

H
1
) = (H
1
3

H
3
H
1
3

H
3
) (73)
Using the expressions for

LL
1
and H
0
, we obtain
(H
0
[

LL
1
]H
1
0
) = (H
1
[

Q
1
]H
1
2
) + (H
1
[

Q
2

Q
1
Q
3
]H
1
3
)
+ (H
2
[

Q
3
]H
1
3
).
Dierentiating (67) and (68) with regard to the variables Q
1
, Q
2
,
Q
3
, we get

Q
3
= F
1

Q

1
P

Q
2
= (

A
1
2

Q
1
F
1
Q

1

1
2
Q
1
F
1

Q

1
)P (74)
We now introduce the matrix B dened by
B =
1
2
(

Q
1
F
1
Q

1
Q
1
F
1

Q

1
).
We can then write 141

Q
2

Q
1
Q
3
= (

A + B)P.
We have nally, except for a positive constant, the metric in the
space M
r
ds
2
=
_
(H
1
2

H
2
)
2
_
+ 2
_
(H
1
1

H
1
)
2
_
+ 4(H
1
[

Q
1
]H
1
2
)
+ 2(H
1
[

A + B]H
1
). (75)
The determinant of the quadratic dierential form
2
_
(H
1
1

H
1
)
2
_
+ 4(H
1
[

Q
1
]H
1
2
) + 2(H
1
[

A + B]H
1
)
is given by
2
r(nr1)
|H
1
|
n2r2
|H
2
|
r
(76)
4. Convergence of an integral 123
If dv
2
denotes the invariant volume measure in the space of H
2
, satisfy-
ing (65), then we have to prove
_
M
r
n
_
k=1
(1 + h

1
2
k
)|H
1
|
n2r2
2
|H
2
|

r
2
{dH
1
}{dQ
1
}{dA}dv
2
(77)
is convergent.
The constants c
3
, . . . appearing in the sequel all depend only on n
and S . Moreover bounded shall mean bounded in absolute value by
such constants.
Since |S | 0, at least one term in the expansion of |S | does not
vanish. This means there is a permutation
_
1, 2, . . . , n
l
1
, l
2
, . . . , l
n
_
such that s
kl
k
0, k = 1, . . . , n. 142
Since S is semi-integral, s
kl
k

1
2
which shows that
h
k
h
l
k
s
2
kl
k

1
4
(78)
Consider now the integers 1, 2, . . . , a and the corresponding integers
l
1
, . . . , l
a
, a n. At least one of the latter, say l
t
n a + 1. Therefore
t a, l
t
n a + 1.
Since H is reduced, h
t
h
a
, h
l
t
h
na+1
. Using (78) we get
h
a
h
na+1

1
4
, a = 1, . . . , n. (79)
Let us consider the identity
n
_
k=1
(h
k
h
nk+1
) =
r
_
k=1
(h
k
h
nk+1
)
2

k=r+1
(h
k
h
nk+1
)
r

k=1
(h
k
h
nk+1
)
124 4. Analytic theory of Indenite quadratic forms
Since r n r, it follows using (56)
n

k=r+1
(h
k
h
nk+1
)
r

k=1
(h
k
h
nk+1
)
c
3
h
2
nr
Therefore we obtain
n
_
k=1
(h
k
h
nk+1
) c
3
h
2
nr
r
_
k=1
(h
k
h
nk+1
)
2
Using (79) and the fact that H is reduced, we get the inequality
h
nr
c
4
. (80)
Since H is reduced, H
0
R

c
for a c > 0 depending only on n and 143
S . It follows from (80), therefore, that the elements of H
1
and H
2
are
bounded. Also from (79) and (80), it follows that
h
k
c
5
r < k n (81)
which shows that the elements of H
1
2
are bounded.
From equations (63) and (64) we get
Q
1
= L
1
P
1
Q

, Q
2
= L
2

1
2
P
1
G.
Since P, Q and G are constant matrices and L
1
, L
2
have bounded ele-
ments (since H is reduced), it follows that the elements of Q
1
and Q
2
are bounded.
From the denition of A in (68), it follows that its elements are also
bounded. From (80) and the fact that H is reduced we get
h
1
h
2
h
3
. . . h
r
c
4
(82)
and therefore
r
_
k=1
(1 + h

1
2
k
) c
6
(h
1
. . . h
r
)

1
2
(83)
4. Convergence of an integral 125
We therefore nally see that it is enough to prove
_
|H
1
|
n2r2
2
(h
1
. . . h
r
)

1
2
{dH
1
}
converges, H
1
being reduced and satisfying (82). dH
1
=

1ijr
dh
i j
.
Since h
i
2h
i j
h
i
, i < j, it follows that the variation of h
i j
is h
i
.
Therefore it is enough to prove that the integral
_
(h
1
. . . h
r
)

1
2
(h
1
. . . h
r
)
n
2
r1
h
r1
1
. . . h
r1
dh
1
. . . dh
r
,
extended over the set 0 < h
1
h
2
h
3
. . . h
r
c
4
converges. We 144
make a change of variables
h
1
= s
1
. . . s
r
h
2
= s
2
. . . s
r
h
r
= s
r
_

_
(84)
The integral then becomes transformed into
_
s

1
1
. . . s

r

ds
1
. . . ds
r
s
1
. . . s
r
(85)
where since s
k
=
h
k
h
k+1
, 0 < s
k
< Min(1, c
4
) and
k
= k
_
n k
2
1
_
,
k = 1, . . . , r.
Now
k

n r
2
1, k = 1, 2, . . . , r so that if n r 2 > 0, the
integral obviously converges. If n > 4, since r
n
2
, this condition is
satised and the integral converges.
Now the maximum value of r is Min(p, q). Let n = 4 and S [x]
be not a quaternionic form, i.e., it is not the norm of a general element
of a quaternion algebra over the eld of rational numbers. In that case
the maximum value of r is 0 or 1 so that n r 2 > 0 and the integral
converges. If n = 3 and S [x] is not a zero form, then r = 0 and nr2 >
0.
126 4. Analytic theory of Indenite quadratic forms
If r = 0, then all elements of H in M
0
are bounded and the integral
over M
0
converges. This shows that if n = 2 and S [x] is not a zero form,
the integral again converges.
In particular, we have
Theorem 4. If n > 4 the integral 145
_
F
a
f
a
(z, H)dv
converges and
_
F
a
f
a
(z, H)dv =

y
_
F
a
e
2iR[y]
dv.
Let us now consider the integral
_
F
a
dv. In order to prove it is nite,
it is enough to prove
_
M
r
dv is nite for every r. Thus we have to prove
_
h
n4
2
1
. . . h
n2r2
2
r
dh
1
. . . dh
r
0 < h
i
h
2
. . . h
r
c
4
is nite. By the same change of variables we see that instead of
k
,
one has
k
= k
_
n k 1
2
_
so that since
k

n k 1
2
, the integral
converges if n r 1 > 0. Since r
n
2
, the integral converges if n > 2.
If n = 2 and r = 0, then again the integral converges. If n = 2 and r = 1,
S [x] is a binary zero form and we had see in the previous chapter that
_
F
dv diverges. We have thus proved
Theorem 5. If S [x] is not a binary zero form
_
F
a
dv
converges.
5. A theorem in integral calculus 127
5 A theorem in integral calculus
146
For out later purposes we shall prove a theorem on multiple integrals.
Let R
m
denote the Euclidean space of m dimensions with x
1
, . . . , x
m
forming a coordinate system. Let
y
k
= f
k
(x
1
, . . . , x
m
), k = 1, . . . , n,
be n dierentiable functions with n m. Let a
1
, . . . , a
n
be n real num-
bers and let F be the surface determined by the n equations
y
k
= a
k
k = 1, . . . , n.
Let us moreover assume that the functional matrix
_
f
i
x
j
_
_

_
i = 1, . . . , n
j = 1, . . . , m
has the maximum rank n at every point of F. Introduce m n dieren-
tiable functions y
n+1
, . . . , y
m
of x
1
, . . . , x
m
so that the Jacobian
J =

_
y
i
x
j
_

i, j = 1, . . . , m
is dierent from zero at every point of F. The y
n+1
, . . . , y
m
are the local
coordinates of the surface F. Let denote the absolute value of J and
put
d =
1
dy
n+1
. . . dy
m
. (86)
The properties of Jacobians show that d is independent of the choice
of y
n+1
, . . . , y
m
. We shall denote d symbolically by
d =
{dx}
{dy}
(87)
and take d as the measure of volume on surface F. 147
In case m = n, because of the conditions on the Jacobian, the point
set F is zero dimensional and we dene d to be the measure which
assigns to each point the measure
1

.
128 4. Analytic theory of Indenite quadratic forms
As an example put m = 2, and consider in R
2
the point set F dened
by
y
1
=
_
x
2
1
+ x
2
2
, y
1
= 1.
Then
y
1
x
1
,
y
1
x
2
cannot both vanish at any point of F. Choose now y
2
as
y
2
= tan
1
x
2
x
1
The Jacobian is
=

(y
1
, y
2
)
(x
1
, x
2
)

=
1
y
1
.
and the volume element on the circle F : x
2
1
+ x
2
2
= 1 is
d = y
1
dy
2
.
Let X = X
(r,s)
be a real matrix of r rows and s columns with elements
x
kl
constituting a coordinate system in R
rs
. We denote by
{dX} =
r
_
k=1
s
_
l=1
dx
kl
the Euclidean volume element in R
rs
. If however X = X

is r-rowed
symmetric, then
{dX} =
_
1klr
dx
kl
Let V be a k-rowed real non-singular symmetric matrix with signa- 148
ture , k . Let F be a rectangular matrix with k-rows and columns
so that the matrix T dened by
V[F] = T
is non-singular and has signature , . Obviously k. Let
W be a xed matrix of + rows and of signature , + . Then
+ k. Let D be the surface consisting of real matrices X of k rows
and columns satisfying
V[F, X] = W.
5. A theorem in integral calculus 129
If we write
W =
_
T Q
Q

R
_
then D is the surface dened by the equations
F

VX = Q
X

VX = R
_

_
(88)
In conformity with our previous notation, let the volume element on
the surface D be denoted by
{dX}
{dQ}{dR}
.
We have then the following
Theorem 6.
_
D
{dX}
{dQ}{dR}
=

k

||V||

2
||T||
k+1
2
||W||
k1
2
where
h
=
h

i=1

i/2
(i/2)
and
0
= 1. Also if = 0, ||T|| has to be taken 149
equal to 1.
Proof. First let > 0. Denote by I the integral
I = ||V||

2
||T||

k+1
2
||W||

k1
2
_
D
{dX}
{dQ}{dR}
Let C be a k-rowed non-singular matrix. Consider the transformation,
X CX, F CF, V V[C
1
].
This leaves W unaltered. Also
{d(CX)} = ||C||

{dX}
which shows that I is unaltered. We shall choose C in such a manner
that the resulting integral can be easily evaluated.
130 4. Analytic theory of Indenite quadratic forms
Since F has rank , there exists a matrix C
0
such that
C
0
F =
_
E

0
_
E

being the unit matrix of order . Since V[F] = T is non-singular, we


have
V[C
1
0
] =
_
T L
L

N
_
=
_
T 0
0 M
_ _
E T
1
L
0 E
_
As V has signature , k , it follows tht M > 0. Put M = P

P,
where P is non-singular. Then
V[C
1
0
] =
_
T 0
0 E
k
_ _
E T
1
L
0 P
_
We now choose C so that
C =
_
E

T
1
L
0 P
_
C
0
A simple computation of determinants now shows that I reduces to 150
||T||

k+1
2
||W||

k1
2
_
D
{dX}
{dQ}{dR}
where D is now the domain dened by X =
_
X
1
X
2
_
, X
1
= X
(,)
1
satisfying
_
T 0
0 E
k
_ _
E X
1
0 X
2
_
=
_
T Q
Q

R
_
= W
Q = TX
1
, R = X

1
TX X

2
X
2
.
_

_
(89)
Completing squares, we get
_
T 0
0 E
_ _
E X
1
T
1
Q
0 X
2
_
=
_
T 0
0 R
1
_
(90)
where
R
1
= X

2
X
2
(91)
5. A theorem in integral calculus 131
and R
1
> 0.
Now {dX} = {dX
1
}{dX
2
} and from (89)
{dX
1
} = ||T||

{dQ}
Also (90) shows that
||W|| = ||T|| ||R
1
||.
Therefore I reduces to
||R
1
||

(k1)
2
_
D
{dX
2
}
{dR
1
}
where D is the domain dened by X
2
satisfying (91).
Let G be a non-singular matrix such that 151
X
2
G = Y
R
1
[G] = S
_

_
(92)
Then {dY} = ||G||
k
{dX
2
} and {dS } = ||G||
+1
{dR
1
} so that if we choose
G such that R
1
[G] = E

, then in order to prove the theorem it is enough


to prove
_
D
{dY}
{dS }
=

k
(93)
where we have written k instead of k and D is the domain of Y with
Y

Y = S, S = E

. (94)
Note that (93) is a special case of the theorem we want to prove,
namely with V = E
k
, W = E

, = 0.
In order to prove (93) we shall use induction on . Assume theorem
6 to have been proved for 1 1. Let be an integer 0 < < . Put
Y =
_
Y
(k,)
1
, Y
(k,)
2
_
and
S =
_
T Q
Q

R
_
132 4. Analytic theory of Indenite quadratic forms
where T = Y

1
Y
1
, Q = Y

1
Y
2
, R = Y

2
Y
2
. Then {dY
1
}{dY
2
} = {dY} and
{dS } = {dT}{dQ}{dR}. Assume now Y
1
xed. Varying Y
2
we get
_
D
{dY}
{dS }
=
_
{dY
1
}
{dT}
{dY
2
}
{dQ}{dR}
Induction hypothesis works and hence 152
_
D
{dY}
{dS }
=

k
_
{dY
1
}
{dT}
with T = Y

1
Y
1
. Again induction hypothesis works since 0 < < and
we have
_
{dY
1
}
{dT}
=

k
(93) is thus proved. In order to uphold induction, we have to prove (93)
in case = 1 that is
_
D
{dX}
{dt}
=

k1
(95)
where D is the space
x
2
1
+ + x
2
k
= t, t = 1.
We now use induction on k. For k = 1, the proposition is trivial; so
let k > 1 and (95) proved for k 1 instead of k. Introducing x
1
, . . . , x
k1
as a coordinate system on D we get
_
{dx}
{dt}
=
_
{(dx
1
. . . dx
k1
)}
x
k
since 2x
k
dx
k
= dt and we consider only positive values of x
k
. Now
x
k
= (1 u)
1
2
where u = x
2
1
+ + x
2
k1
. We therefore have
_
{dx
1
. . . dx
k
}
dt
=
_
(1 u)

1
2
dx
1
. . . dx
k1
du
6. Measure of unit group and measure of representation 133
By induction hypothesis, 153
dx
1
. . . dx
k1
du
=

k1
2
(
k1
2
)
u
k1
2
1
Therefore we get
_
{dx}
dt
=
1
_
0
(1 u)

1
2
u
k1
2
1
du

k1
2
(
k1
2
)
Evaluating the beta integral, we get the result.
The case = 0 is also contained in the above discussion
Theorem 6 is now completely demonstrated.
6 Measure of unit group and measure of represen-
tation
Let S be the matrix of a non-degenerate real quadratic form with signa-
ture p, q, (p + q = n). Let denote the orthogonal group of S , hence
the group of real matrices Y with
S [Y] = S,
is then a locally compact group and there exists on a left invariant
Haar measure determined uniquely upto a positive multiplicative con-
stant. Instead of we shall consider the surface (W) consisting of all
solutions Y of the matrix equation
S [Y] = W,
where W is a xed matrix, non-singular and of signature p, q. Clearly if
Y
1
and Y
2
lie in (W), Y
1
Y
1
2
so that (W) consists of all CY where
Y is a xed solution of S [Y] = W and C runs through all elements in .
According to the previous section, we can introduce on (W), a volume 154
measure
{dY}
{dW}
(96)
134 4. Analytic theory of Indenite quadratic forms
The surface (W) has the property that the orthogonal group of S
acts as a transitive group of left translations
Y CY (97)
C , on it. Also the measure (96) dened above is invariant under
these left translations. Since and (W) are homeomorphic and (W)
is locally compact, (96) is the Haar measure in (W) invariant under
(97).
It is practical to consider on (W) the measure
||S ||

1
2
||W||
1
2
{dY}
{dW}
(98)
instead of (96), for the following reason. (96) already has the invariance
property under the transformations (97). Consider now the mapping
Y YP, W W[P] (99)
where P is an n-rowed non-singular matrix. Since {d(YP)} = ||P||
n
{dY}
and {dW[P]} = ||P||
n+1
{dW}, it follows that (98) remains unaltered by
the transformations (99). Thus (98) is independent of W which means,
we can choose for W a matrix suitable to us. In particular, if W = S ,
(98) gives the Haar measure on required for our purposes.
Let now S be a rational matrix and H the representation space of the
unit group of S . The unit group (S ) of S is a discrete subgroup of 155
and is represented in H by the mapping H H[U], H H, U (S ).
We constructed in H for (S ) a fundamental domain F. By theorem 5 it
follows that, if S [x] is not a binary zero form,
V =
_
F
dv < (100)
where dv is the invariant volume element in H.
(S ) being a discrete subgroup of , there exists a fundamental set
F
0
for (S ) in . By means of the translation Y CY, C , we
construct a fundamental set

F for (S ) in (W). Let (S ) denote
(S ) = ||S ||

1
2
||W||
1
2
_

F
{dY}
{dW}
(101)
6. Measure of unit group and measure of representation 135
It is to be noted that the value of (S ) is independent of the way

F is
constructed. Since (98) is independent of W, (S ) is actually the Haar
measure of the fundamental set F
0
for (S ) in . We call (S ), the
measure of the unit group (S ).
It is to be noticed that the mappings H H[

U] are identical in H,
whereas for U (S ), the representations Y UY and Y UY are
distinct in (W). We now prove the important
Theorem 7. If
h
=
h

k=1

k.2
(k/2)
, then (S ) and V are connected by the
relation
2(S ) =
p

q
||S ||
(
n+1
2
)
V
provided S is not the matrix of a binary zero form. 156
Proof. In order to prove this we consider the homogeneous as well as
the inhomogeneous parametrical representation of the H space. In the
homogeneous parametrization H in H is given by
H = 2K S, K = T
1
[Z

S ], T = S [Z] > 0 (102)


where Z = Z
(n, p)
is a real matrix. Z determines H uniquely, but H
determines Z only upto a non-singular p-rowed matrix factor on the
right. Let us put as before
S = S
0
[C
1
]
where S
0
=
_
E
p
0
0 E
q
_
. Let
Z = C
_
E
p
X
_
L
with X = X
(q, p)
, and |L| 0. The inhomogeneous parametrical repre-
sentation is given by
T
0
= E X

X > 0 (103)
with X real and
T = T
0
[L] (104)
136 4. Analytic theory of Indenite quadratic forms
Let W be a symmetric n-rowed matrix of signature (p, q) and having
the form
W =
_
T Q
Q

R
_
, T = T
(p)
> 0 (105)
(W) will now be the space of solutions Y,
Y = (Y
1
Y
2
), Y
1
= Y
(n, p)
1
, Y
2
= Y
(n,q)
2
satisfying W = S [Y] so that
T = S [Y
1
] > 0, Q = Y

1
S Y
2
, R = S [Y
2
] (106)
Every Y
1
satisfying (106) determines a H in H uniquely by (102). Since 157
with Y
1
, UY
1
also is a solution where U (S ), we construct a funda-
mental set

F in (W) to be the set consisting of those Y
1
for which the
corresponding H determined by (102) lie in F, the fundamental domain
for (S ) in H. It is easy to verify that

F is actually a fundamental set.
Now
{dY} = {dY
1
}{dY
2
}; {dW} = {dT}{dQ}{dR}.
Let Y
1
be xed so that the corresponding H which it determines in
in H is in F. Let now Y
2
satisfy (106). We then have
Z(S ) = ||S ||

1
2
||W||
1
2
_

F(Y
1
)
{dY
1
}
{dT}
_
D
{dY
2
}
{dQ}{dR}
where D is the domain determined by Y
2
satisfying (106) with Y
1
xed
and

F(Y
1
) is the set of Y
1
which determine H in F. For the inner integral
we apply theorem 6 and so
2(S ) =
q
||S ||

q+1
2
||T||
1q
2
_

F(Y
1
)
{dY
1
}
{dT}
Now X and L determine Y
1
uniquely, X satisfying (103) and L satis-
fying (104). Thus
{dY
1
} = ||C||
p
||L||
q
{dX}{dL}.
6. Measure of unit group and measure of representation 137
Expressing L in terms of T
0
(104), we get
2(S ) =
q
||S ||

n+1
2
||T||
1
2
_
F(Y
1
)
|T
0
|

q
2
{dX}{dL}
{dT}
But since T = T
0
[L] we get 158
_
{dL}
{dT}
=
p
||T
0
||
p/2
||T||

1
2
We therefore nally have the formula
2(S ) =
p

q
||S ||

n+1
2
_
|T
0
|

n
2
{dX}
From the form of T
0
we see that the integral has the value V and our
theorem is proved.
Let now S be the matrix of a non-degenerate, rational quadratic form
of signature p, q so that p+q = n. Let t be a rational number represented
by S so that
S [y] = t (107)
for an integral column y. It is obvious that with y, Uy is also a solution
of (107) where U is a unit of S . We shall associate with a given solution
y of (107) a real number (y, S ) called the measure of the representation
y, which will allow us to generalize, later, to indenite forms the notion
of number of representations.
Let W be the real symmetric matrix of signature p, q given by
W =
_

_
t q

q R
_

_
We consider all the real solutions Y
0
= Y
(n,n1)
0
satisfying
S [Y] = W (108)
where Y = (y Y
0
). Let (q, R) be the surface determined by Y
0
. Thus Y
0
satises
q

= y

S Y
0
, R = S [Y
0
] (109)
138 4. Analytic theory of Indenite quadratic forms
W being a xed matrix. Clearly (q, R) is a locally compact topological 159
space.
Let (y) be the subgroup of the orthogonal group of S consisting of
those matrices V in with
Vy = y. (110)
Then (y) is a locally compact topological group. Since with Y, VY for
V (y) is also a solution of (108), it follows that the mapping
Y
0
VY
0
(111)
gives a representation of (y) in (q, R). Clearly this representation
is faithful. Also since W is xed, the representation (111) of (y) on
(q, R) is transitive on (q, R). We introduce the volume element
||S ||

1
2
||W||
1
2
{dY
0
}
{dq}{dR}
(112)
which is clearly invariant under the mappings (111). Thus (112) gives
the left invariant Haar measure in the locally compact space (q, R).
The volume element (112) introduced above has another property.
Let P be a real matrix of the form
P =
_
1 p

0 P
0
_
where |P
0
| 0 so that P is non-singular. Consider the transformation
Y yp

+ Y
0
P
0
W W[P]
_

_
(113)
Then 160
{d(yp

+ Y
0
P
0
)} = ||P
0
||
n
{dY
0
}
{dW[P
0
]} = ||P
0
||
n+1
{dW}
which shows that the transformations (113) leave (112) unaltered. Thus
(112) is independent of W and we may therefore choose W a particular
way suitable to us.
6. Measure of unit group and measure of representation 139
Put Y = (yY
1
Y
2
) where Y
1
= Y
(n, p)
2
, Y
2
= Y
(n,q1)
2
and write
W =
_
W
1
Q
Q

R
1
_
, W
1
= W
(p+1)
1
where
W
1
= S [yY
1
] =
_
t v

v T
_
(114)
We now choose W so that
|W
1
| 0, T > 0. (115)
Since T has p rows and columns and S has signature p, q, it follows that
W
1
has signature p, 1.
The subgroup (y) of units U of S with Uy = y is a discrete sub-
group of (y) and so the representation (111) with V (y) is discon-
tinuous in (q, R). Let F(y) be a fundamental region in (q, R), for this
discrete subgroup (y). We dene the measure (y, S ) of the represen-
tation y by
(y, S ) = ||S ||

1
2
||W||
1
2
_

F(y)
{dY
0
}
{dq}{dR}
(116)
161
We shall rst show how to construct the fundamental region

F(y).
Let Y be a solution of the equations (115), (114). According to (102),
this determines uniquely a H in the H space. If U (y), then UY
1
determines the point H[U
1
] in H. Let F(y) be the fundamental region
in H for the discrete subgroup (y) of (S ), the unit group of S . This
F(y) can be constructed as follows: Let (S ) be written as a union of
left cosets modulo (y),
(S ) =

i
U
i
(y).
Let F be the fundamental region for (S ) in H. Let F(y) =
_
i
F(U
i
).
Then F(y) is the required region. Since Y
0
= (Y
1
, Y
2
) we dene

F(y)
to be the set of Y
0
for which the Y
1
determines a point in F(y). It can
140 4. Analytic theory of Indenite quadratic forms
be easily veried that

F(y) determined in this manner is a fundamental
region for (y) in (q, R).
Because of (109) and (114) we may write,
q =
_
v
v
1
_
, R =
_
T T
1
T

1
R
1
_
(117)
Then
{dq} = {dv}{dv
1
}
{dR} = {dT}{dT
1
}{dR
1
}
Since {dY
0
} = {dY
1
}{dY
2
} we x Y
1
so that the H that it determines 162
in H is in F(y) and integrate over the space of Y
2
which clearly is deter-
mined by
S [yY
1
, Y
2
] =
_
W
1
Q
Q

R
1
_
Since
{dY
2
}
{dQ}{dR
1
}
=
{dY
2
}
{dv
1
}{dT
1
}{dR
1
}
We have, on using theorem 6,
(v, S ) =
q1
||S ||
q/2
||W
1
||
1
q
2
_
{dY
1
}
{dv}{dT}
(118)
where the domain of integration is over those Y
1
which determine points
H in F(y). Since T > 0, (114) now gives
W =
_
t w 0

0 T
_ _
1 0

T
1
v E
_
where w = T
1
[v]. Since T 0 and W
1
has signature (p, 1), it follows
that
w t > 0
w 0
_

_
(119)
6. Measure of unit group and measure of representation 141
Substituting |W
1
| = (t w)|T|, we get from (118)
(y, S ) =
q1
||S ||
q/2
||T||
1
q
2
(w t)
1
q
2
_
{dY
1
}
{dv}{dT}
(120)
We now remark that w depends only on the H which Y
1
determines
in H. For,
w = T
1
[v] = T
1
[Y

1
S y] = T
1
[Y

1
S ][y]
But from (102), T
1
[Y

1
S ] =
H + S
2
so that 163
2W = (H + S )[y] = H[y] + t
or that
w =
H[y] + t
2
(121)
Let now g(w) be an integrable function of w to be chosen later. Mul-
tiply both sides of (120) by g(w) (wt)
q
2
1
and integrate over the v space
satisfying
T
1
[v] = w > t.
We then get, by applying theorem 6 and using
{dv} =
{dv}
dw
dw
the result

1
(y, S )
_
w>Max(0,t)
g(w)w
p
2
1
( t)
q
2
1
dw
=
p1

q1
||S ||
q/2
||T||

1
2

q
2
_
g(w)
{dY
1
}
{dT}
(122)
The function g(w) has to be so chosen that the integrals are convergent.
We will see later that this can be done. The domain of integration for
the integral on the right of (122) is over that set of Y
1
which determine
H in F(y). Since every H in F(y) determines a Y
1
, we see that we have
to apply the analysis in the proof of theorem 7 to obtain
142 4. Analytic theory of Indenite quadratic forms
Theorem 8. Let (y, S ) be the measure of the representation y of S [y] =
t. Then
(y, S )
_
w>0
w>t
g(w)w
p
2
1
(w t)
q
2
1
dw =
p1

q1
||S ||

n
2
_
F(y)
g
_

_
H[y] + t
2
_

_
dv
where g(w) is an integrable function making the integrals converge and 164
dv is the invariant volume element in the H space.
7 Integration of the theta series
We shall hereafter assume that n > 4.
Let us denote by V
a
the volume of the fundamental region V
a
for
a
in the H space so that
V
a
=
_
F
a
dv.
F
a
is nite by theorem 5. We put

a
(z) = V
1
a
_
F
a
f
a
(z, H)dv (123)
Then by theorem 4,

a
(z) = V
1
a

ya(mod 1)
_
F
a
e
2iR[y]
dv
If a 0(mod 1), then y 0 is a possible value of y and then we have
the term
V
1
a
_
F
a
dv (124)
By denition of V
a
, the value of (124) is unity. Let us therefore put

a
=
_

_
1 if a 0(mod 1)
0 otherwise.
7. Integration of the theta series 143
Then we have 165

a
(z) =
a
+ V
1
a

ya(mod 1)
y0
_
F
a
e
2iR[y]
dv.
Let us call two rational vectors y
1
and y
2
associated, if there exists
a matrix U in
a
such that y
1
= Uy
2
. Otherwise they are said to be non-
associated. For any y consider the subgroup
a
(y) of
a
with Uy = y.
We can write

a
=

k
U
k

a
(y) (125)
as a union of left cosets. U
k
y then run through all vectors associated
with y. Because of uniform convergence, we can write

a
(z) =
a
+

k
V
1
a
_
F
a
e
2iR[U
k
y]
dv
where the accent indicates that we should sum over all non-associate
vectors y with y 0 and y a(mod 1). Since the volume element dv
has the invariance property we may write

a
(z) =
a
+

k
V
1
a
_
F
a
[U
k
]
e
2iR[y]
dv
where F
a
[U
k
] is the image of the fundamental region F
a
by the trans-
formation H H[U
k
]. Because of (125) a fundamental region F(y) for

a
(y) in H is given by
F(y) =

k
F
a
[U
k
].
Consider the group
a
(y). Now E is not an element of
a
(y) since
that means y = y or y = 0. But E may be in
a
. This means that 166
a a(mod 1) or 2a 0(mod 1). In this the U
k
s in (125) may be
so chosen that with U
k
, U
k
is also a representative of a coset. Since
144 4. Analytic theory of Indenite quadratic forms
H H[U] and H H[U] dene the same mapping in the H space,
it shows that if 2a 0(mod 1), the F
a
[U
k
] give a double covering of the
fundamental region F(y). So let us dene
j
a
=
_

_
2 if 2a 0(mod 1)
1 if 2a 0(mod 1).
Then we can write

a
(z) =
a
+ j
a

y
V
1
a
_
F(y)
e
2iR[y]
dv
Let us now put in theorem 8
g(w) = e
2itz4w
and use the abbreviation
h
t
(z) = e
2itz
_
w>max(0,t)
w
p
2
1
(w t)
q
2
1
e
4w
dw, (126)
then, since (126) converges for p > 0, q > 0, > 0, we get

a
(z) =
a
+
j
a
V
a

y
(y, S )

p1

q1
||S ||
n
2
h
t
(z) (127)
It can be shown that for each rational number t 0, the number of
non-associate representations
S [y] = t, y a(mod 1)
is nite. If t = 0, one has to consider only non-associate primitive
representations. If therefore we put
M(S, a, t) =

y
(y, S ) (128)
where the summation runs on the right through the nitely many non- 167
7. Integration of the theta series 145
associate representations of S [y] = t, we can write (127) in the form

a
(z) =
a
+
j
a
V
a
||S ||
n/2

p1

q1

tS [a](mod 1)
M(S, a, t)h
t
(z) (129)
Just as we dened (S ) in theorem 7 for the unit group , we can
dene
a
(S ) for the subgroup
a
of also.
a
is a subgroup of nite
index ( :
a
) in . Let
=

U
U
a
be a decomposition of into left cosets mod
a
. If F is a fundamental
region for , then
F
a
=

U
F[U]
is a fundamental region for
a
. Since U and U give rise to the same
mapping in H space, we have to consider whether E belongs to
a
;
i.e., 2a 0(mod 1), which means that U and U are in the same coset
and so
V
a
= ( :
a
)V.
If however 2a 0(mod 1), then U and U belong to dierent cosets
and so
_
U
F[U] gives a double covering of F
a
. Thus
V
a
=
1
2
( :
a
)V.
Using the denition of j
a
we get
(S )
V
=

a
(S )
V
a

j
a
2
If we denote (S, a, t) the quantity 168
(S, a, t) =
M(S, a, t)

a
(S )
(130)
146 4. Analytic theory of Indenite quadratic forms
We get, on using theorem 7, the nal formula

a
(z) =
a
+

n/2
||S ||

1
2
(p/2)(q/2)

tS [a](mod 1)
(S, a, t)h
t
(z) (131)
We call M(S, a, t) the measure of representation of t by S [x + a].
(131) is the analogue, for indenite forms, of the generating function
(6).
Let us now consider the functional vector
\[ \varphi(z) = \begin{pmatrix}\varphi_{a_1}(z)\\ \vdots\\ \varphi_{a_l}(z)\end{pmatrix}, \tag{132} \]
$a_1, \dots, a_l$ having the same meaning as before. Let $d = \mathrm{abs}\,|2S|$ and $\Gamma(d)$ the subgroup of units $U$ in $\Gamma$ satisfying
\[ U \equiv E \pmod d. \]
Since for every $a_i$, $2S a_i$ is an integral vector, it follows that $\Gamma(d) \subset \Gamma_{a_i}$, $(i = 1, 2, \dots, l)$. Also $\Gamma/\Gamma(d)$ is a finite group. If $F_0$ is a fundamental region for $\Gamma(d)$ in $H$ and $V_0$ its volume, then because of the invariance of the volume element we have
\[ \varphi_a(z) = V_0^{-1}\int_{F_0} f_a(z,H)\,dv. \]
Let now $\mu(S,t)$ and $\delta$ denote the vectors
\[ \mu(S,t) = \begin{pmatrix}\mu(S,a_1,t)\\ \vdots\\ \mu(S,a_l,t)\end{pmatrix},\qquad \delta = \begin{pmatrix}\delta_{a_1}\\ \vdots\\ \delta_{a_l}\end{pmatrix} \]
where $\mu(S,a,t)$ is defined by (130) and $\delta_a = 0$ or $1$ according as $a \not\equiv 0 \pmod 1$ or not. Then from (49) and (131) we have the

Theorem 9. Let $n > 4$ and $M = \begin{pmatrix}\alpha & \beta\\ \gamma & \delta\end{pmatrix}$ be a modular matrix. Then
\[ \varphi(z) = \delta + \frac{\pi^{n/2}\,\|S\|^{-1/2}}{\Gamma(p/2)\,\Gamma(q/2)}\sum_t\mu(S,t)\,h_t(z) \]
satisfies
\[ \varphi(z_M) = G(M,z)\,\varphi(z). \]
The function $h_t(z)$ introduced in (126) can be expressed in terms of the confluent hypergeometric function $h(\alpha,\beta,\rho)$ defined by
\[ h(\alpha,\beta,\rho) = \int_0^\infty w^{\alpha-1}(w+1)^{\beta-1}e^{-\rho w}\,dw \]
where $\alpha$ and $\beta$ are complex numbers with positive real parts and $\rho$ is a positive real parameter. $h(\alpha,\beta,\rho)$ is a solution of the second order differential equation
\[ \rho\,\frac{d^2h}{d\rho^2} + (\alpha+\beta-\rho)\,\frac{dh}{d\rho} - \alpha h = 0. \]
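The following small numerical check is an editorial addition (it relies on SciPy's quad routine, with ad hoc finite-difference steps), verifying this differential equation for sample parameters:

    # Check that h(alpha, beta, rho) = int_0^oo w^(alpha-1) (w+1)^(beta-1) e^(-rho w) dw
    # satisfies  rho h'' + (alpha + beta - rho) h' - alpha h = 0.
    import numpy as np
    from scipy.integrate import quad

    def h(alpha, beta, rho):
        val, _ = quad(lambda w: w**(alpha - 1) * (w + 1)**(beta - 1) * np.exp(-rho * w),
                      0, np.inf)
        return val

    alpha, beta, rho, eps = 1.5, 2.0, 3.0, 1e-3
    h0 = h(alpha, beta, rho)
    h1 = (h(alpha, beta, rho + eps) - h(alpha, beta, rho - eps)) / (2 * eps)
    h2 = (h(alpha, beta, rho + eps) - 2 * h0 + h(alpha, beta, rho - eps)) / eps**2
    print(rho * h2 + (alpha + beta - rho) * h1 - alpha * h0)  # ~ 1e-5, i.e. zero
                                                              # up to discretization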
From the definition of $h_t(z)$ we have
\[ h_0(z) = \int_0^\infty w^{n/2-2}\,e^{-4\pi\eta w}\,dw \]
which reduces to the $\Gamma$-integral. We have hence
\[ h_0(z) = (4\pi\eta)^{1-n/2}\,\Gamma\!\left(\frac n2-1\right). \tag{134} \]
Let now $t < 0$. Changing, in (126), the variable $w$ to $-tw$ we get easily
\[ h_t(z) = e^{2\pi i t\bar z}\,(-t)^{n/2-1}\,h(p/2,\,q/2,\,-4\pi t\eta). \tag{135} \]
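Explicitly (the substitution is left to the reader in the original), one puts $w = -tw'$ in the integral of (126); since $t<0$,
\[ e^{-2\pi it\bar z}h_t(z) = \int_0^\infty(-tw)^{\frac p2-1}\bigl(-t(w+1)\bigr)^{\frac q2-1}e^{4\pi\eta t w}\,(-t)\,dw = (-t)^{\frac n2-1}\int_0^\infty w^{\frac p2-1}(w+1)^{\frac q2-1}e^{-(-4\pi t\eta)w}\,dw, \]
which is $(-t)^{n/2-1}h(p/2, q/2, -4\pi t\eta)$.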
In case $t > 0$, we make the change of variable $w \to wt + t$. One then obtains
\[ h_t(z) = e^{2\pi i t z}\,t^{n/2-1}\,h(q/2,\,p/2,\,4\pi t\eta). \tag{136} \]
If we put $h_t(z) = u(\xi,\eta) = u$ as a function of the two real variables $\xi$ and $\eta$, then $u$ satisfies the partial differential equation
\[ \Omega u = 0 \tag{137} \]
where
\[ \Omega = \eta\left(\frac{\partial^2}{\partial\xi^2} + \frac{\partial^2}{\partial\eta^2}\right) + \frac n2\,\frac{\partial}{\partial\eta} + i\,\frac{q-p}{2}\,\frac{\partial}{\partial\xi}. \tag{138} \]
The interesting fact to be noticed is that the differential operator $\Omega$ is independent of $t$. Since $\varphi(z)$ in theorem 9 is a linear function in the $h_t(z)$, we see that
\[ \Omega\,\varphi(z) = 0. \tag{139} \]
It is to be noted also that $f(z,H)$ is not a solution of the differential equation (137).
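As a quick added check, the case $t = 0$ of (137) can be seen directly: by (134) the function $u = h_0(z) = (4\pi\eta)^{1-n/2}\Gamma(n/2-1)$ is independent of $\xi$, and
\[ \Omega u = \eta\,u_{\eta\eta} + \frac n2\,u_\eta = (4\pi)^{1-n/2}\,\Gamma\!\left(\frac n2-1\right)\left(1-\frac n2\right)\left[-\frac n2+\frac n2\right]\eta^{-n/2} = 0. \]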
8 Eisenstein series

Let $M = \begin{pmatrix}\alpha&\beta\\\gamma&\delta\end{pmatrix}$ be a modular matrix. By (30), for any two vectors $a$, $b$ among $a_1, \dots, a_l$ we have
\[ \gamma_{ab}(M) = \gamma^{-n/2}\sum_{g\,(\mathrm{mod}\ \gamma)} e^{\frac{2\pi i}{\gamma}\left(\alpha S[g+a] - 2b'S(g+a) + \delta S[b]\right)}. \]
Let us consider the case $b = 0$, which is a possible value of $a_1, \dots, a_l$. Then
\[ \gamma_{a,0}(M) = \gamma^{-n/2}\sum_{g\,(\mathrm{mod}\ \gamma)} e^{\frac{2\pi i}{\gamma}\,\alpha S[g+a]}, \]
which is an ordinary Gaussian sum. It is to be noted that $\gamma_{a,0}(M)$ depends only on the first column of the matrix $M$ and is independent of $\beta$ and $\delta$.
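For instance (an added illustration with an arbitrarily chosen modulus), in the one-variable case $S = (1)$, $a = 0$, $\alpha = 1$, the inner sum is the classical quadratic Gauss sum, whose value for an odd prime $\gamma \equiv 1 \pmod 4$ is $\sqrt\gamma$:

    import numpy as np
    gauss = sum(np.exp(2j * np.pi * x * x / 17) for x in range(17))
    print(gauss)   # ~ 4.1231 + 0j = sqrt(17)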
Let $G$ denote the group of proper unimodular matrices and $G_0$ the subgroup consisting of all modular matrices with $\gamma = 0$. Let
\[ G = \bigcup_M M\,G_0 \tag{141} \]
be a decomposition of $G$ as a sum of left cosets modulo $G_0$. If $M$ and $M_1$ belong to the same left coset, then
\[ M_1^{-1}M = \begin{pmatrix}1 & \lambda\\ 0 & 1\end{pmatrix}, \]
so that the values of $\gamma_{a,0}(M)$ and $\gamma_{a,0}(M_1)$ are equal. Also, since $G_0$ contains the matrix $\begin{pmatrix}-1&0\\0&-1\end{pmatrix}$, we may choose the representatives in (141) so that $\gamma \ge 0$.
Let $g(M,z)$ denote the first column of the matrix $G(M,z)$ defined in (37). Let $M$ have $\gamma > 0$. Then because of theorem 2 we have
\[ g(M, z_{M^{-1}}) = \frac{i^{(p-q)/2}}{\sqrt d}\,\gamma^{-n/2}\left(z-\frac\alpha\gamma\right)^{-p/2}\left(\bar z-\frac\alpha\gamma\right)^{-q/2}\begin{pmatrix}\gamma_{a_1,0}(M)\\ \vdots\\ \gamma_{a_l,0}(M)\end{pmatrix},\qquad g(E,z) = \delta. \tag{142} \]
We now form the series
\[ \psi(z) = \sum_M g(M, z_{M^{-1}}), \]
the sum taken over all representatives in (141). $\psi(z)$ is a vector of functions $\psi_{a_1}(z), \dots, \psi_{a_l}(z)$ where
\[ \psi_a(z) = \delta_a + \frac{i^{(p-q)/2}}{\sqrt d}\sum_{\substack{(\alpha,\gamma)=1\\ \gamma>0}}\gamma^{-n}\left(z-\frac\alpha\gamma\right)^{-p/2}\left(\bar z-\frac\alpha\gamma\right)^{-q/2}\sum_{g\,(\mathrm{mod}\ \gamma)} e^{\frac{2\pi i}{\gamma}\,\alpha S[g+a]}. \tag{143} \]
In order to prove the absolute convergence of the above series for $\psi_a(z)$, observe that by (47) and (48), $\Lambda(M)$ is unitary and so it is enough to prove the convergence of
\[ \sum_{(\alpha,\gamma)=1}|\gamma z-\alpha|^{-\frac n2}. \]
It is well-known that this converges for $n > 4$. The convergence is even uniform in every compact subdomain of the upper half $z$-plane.
From theorem 2 we have
\[ g(MM_1, z_{M_1^{-1}}) = G(M,z)\,g(M_1, z_{M_1^{-1}}). \]
If $M$ is fixed and $M_1$ runs through a complete system of representatives in (141), then $MM_1$ also runs through such a system. This gives
\[ \psi(z_M) = G(M,z)\,\psi(z). \tag{144} \]
Thus $\psi(z)$ also satisfies the same transformation formula as $\varphi(z)$. We shall now obtain a fourier expansion for the function $\psi_a(z)$. To this end we first prove
Lemma 4. Let $a > 1$, $b > 1$ be two real numbers and
\[ E(z) = \sum_{k=-\infty}^{\infty}(z-k)^{-a}(\bar z-k)^{-b}. \]
Then
\[ E(z) = \frac{i^{b-a}(2\pi)^{a+b}}{\Gamma(a)\Gamma(b)}\sum_{l=-\infty}^{\infty}e^{2\pi il\bar z}\int_{u>\max(0,l)}u^{a-1}(u-l)^{b-1}e^{-4\pi\eta u}\,du, \]
where $z = \xi + i\eta$, $\eta > 0$.
Proof. Since $a + b > 2$, it follows that $E(z)$ is absolutely convergent and is also uniformly convergent in every bounded domain of the $z$-plane. It is clearly periodic of period 1 in $\xi$. Hence
\[ E(z) = \sum_{l=-\infty}^{\infty}e^{2\pi il\xi}\left(\int_0^1\sum_k(z-k)^{-a}(\bar z-k)^{-b}\,e^{-2\pi il\xi}\,d\xi\right). \]
This shows that the fourier coefficient equals
\[ \int_{-\infty}^{\infty}z^{-a}\,\bar z^{-b}\,e^{-2\pi il\xi}\,d\xi. \]
By means of the substitution $\xi = -i\zeta$ we get for this fourier coefficient the integral
\[ i^{b-a-1}\int_{-i\infty}^{i\infty}(\eta-\zeta)^{-a}(\eta+\zeta)^{-b}e^{-2\pi l\zeta}\,d\zeta. \]
We now write $\zeta$ instead of $\zeta + \eta$, obtaining thus the integral
\[ i^{b-a-1}\,e^{2\pi l\eta}\int_{\eta-i\infty}^{\eta+i\infty}\zeta^{-b}(2\eta-\zeta)^{-a}e^{-2\pi l\zeta}\,d\zeta. \]
In order to evaluate the integral above we use the $\Gamma$-integral and obtain
\[ \frac1{\Gamma(a)}\int_{\eta-i\infty}^{\eta+i\infty}e^{-2\pi l\zeta}\,\zeta^{-b}\left(\int_0^\infty u^{a-1}e^{-(2\eta-\zeta)u}\,du\right)d\zeta. \tag{145} \]
We can change the order of integration and hence the above integral equals
\[ \frac1{\Gamma(a)}\int_0^\infty u^{a-1}e^{-2\eta u}\left(\int_{\eta-i\infty}^{\eta+i\infty}\zeta^{-b}\,e^{(u-2\pi l)\zeta}\,d\zeta\right)du. \]
We now use the well-known Weierstrass formula for the $\Gamma$-function, namely
\[ \frac1{2\pi i}\int_{c-i\infty}^{c+i\infty}x^{-b}e^{\lambda x}\,dx = \begin{cases}\dfrac{\lambda^{b-1}}{\Gamma(b)} & \text{if }\lambda>0\\[1mm] 0 & \text{if }\lambda\le0\end{cases} \]
where $c > 0$, $b > 0$. From this formula, it follows that the integral in (145) equals
\[ \frac{2\pi i}{\Gamma(a)\Gamma(b)}\int_{u>\max(0,2\pi l)}u^{a-1}(u-2\pi l)^{b-1}e^{-2\eta u}\,du. \]
We once again make the change of variable $u$ to $2\pi u$. We then obtain the fourier coefficient as given in the lemma.
Actually the lemma can be seen to be true for $a > 0$, $b > 0$ and $a + b > 1$. In particular, if we put $a = p/2$ and $b = q/2$ and use the definition of $h_t(z)$ in (126) and the constant in (29), we obtain the formula
\[ \sum_{k=-\infty}^{\infty}(z-k)^{-p/2}(\bar z-k)^{-q/2} = \frac{i^{(q-p)/2}(2\pi)^{n/2}}{\Gamma(p/2)\,\Gamma(q/2)}\sum_{l=-\infty}^{\infty}h_l(z). \tag{146} \]
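The lemma lends itself to a direct numerical test (an editorial sketch, not in the notes; the truncation bounds below are ad hoc):

    import numpy as np
    from math import gamma, pi
    from scipy.integrate import quad

    a, b = 2.0, 2.0
    z = 0.3 + 1.0j
    eta, zbar = z.imag, z.conjugate()

    lhs = sum((z - k)**(-a) * (zbar - k)**(-b) for k in range(-2000, 2001))

    def I(l):   # the integral in the lemma for one value of l
        val, _ = quad(lambda u: u**(a - 1) * (u - l)**(b - 1) * np.exp(-4 * pi * eta * u),
                      max(0, l), np.inf)
        return val

    rhs = (1j**(b - a) * (2 * pi)**(a + b) / (gamma(a) * gamma(b))
           * sum(np.exp(2j * pi * l * zbar) * I(l) for l in range(-8, 9)))
    print(abs(lhs - rhs))   # ~ 1e-10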
Let us now consider the expression for $\psi_a(z)$, namely
\[ \psi_a(z) = \delta_a + \frac{i^{(p-q)/2}}{\sqrt d}\sum_{\substack{(\alpha,\gamma)=1\\\gamma>0}}\gamma^{-n}\left(z-\frac\alpha\gamma\right)^{-p/2}\left(\bar z-\frac\alpha\gamma\right)^{-q/2}\sum_{g\,(\mathrm{mod}\ \gamma)}e^{\frac{2\pi i}{\gamma}\,\alpha S[g+a]}. \]
Put $D = 2d$. We shall prove that $\psi_a(z)$ has the period $D$ in $\xi$:
\[ \psi_a(z+D) = \delta_a + \frac{i^{(p-q)/2}}{\sqrt d}\sum_{\substack{(\alpha,\gamma)=1\\\gamma>0}}\gamma^{-n}\left(z+D-\frac\alpha\gamma\right)^{-p/2}\left(\bar z+D-\frac\alpha\gamma\right)^{-q/2}\sum_{g\,(\mathrm{mod}\ \gamma)}e^{\frac{2\pi i}{\gamma}\,\alpha S[g+a]}. \]
But since $2Sa$ is integral, it follows that $D\,S[a]$ is an integer. Hence
\[ D\,S[g+a] \equiv 0 \pmod 1. \]
We may therefore write
\[ \psi_a(z+D) = \delta_a + \frac{i^{(p-q)/2}}{\sqrt d}\sum_{\substack{(\alpha,\gamma)=1\\\gamma>0}}\gamma^{-n}\left(z-\Bigl(\frac\alpha\gamma-D\Bigr)\right)^{-p/2}\left(\bar z-\Bigl(\frac\alpha\gamma-D\Bigr)\right)^{-q/2}\sum_{g\,(\mathrm{mod}\ \gamma)}e^{2\pi i\left(\frac\alpha\gamma-D\right)S[g+a]}. \]
Since $\frac\alpha\gamma - D$ runs through all the rational fractions when $\frac\alpha\gamma$ does so, we see that
\[ \psi_a(z+D) = \psi_a(z). \]
Because of absolute convergence we can write the series for $\psi_a(z)$ in the following way: we put all rational numbers $\frac\alpha\gamma$ into residue classes modulo $D$. If $0 \le \frac\alpha\gamma < D$ is a fixed rational number with $(\alpha,\gamma) = 1$, then all rational numbers in the class of $\frac\alpha\gamma$ are obtained in the form $\frac\alpha\gamma + kD$, where $k$ runs through the integers. Thus
\[ \psi_a(z) = \delta_a + \frac{i^{(p-q)/2}}{\sqrt d}\sum_{\frac\alpha\gamma<D}\gamma^{-n}D^{-n/2}\sum_{g\,(\mathrm{mod}\ \gamma)}e^{\frac{2\pi i}{\gamma}\,\alpha S[g+a]}\sum_{k=-\infty}^{\infty}(\zeta-k)^{-\frac p2}(\bar\zeta-k)^{-\frac q2} \tag{147} \]
where $\zeta = \bigl(z-\frac\alpha\gamma\bigr)D^{-1}$.
Using (146) we get
\[ \psi_a(z) = \delta_a + \frac{(2\pi)^{n/2}}{\sqrt d\,\Gamma(p/2)\Gamma(q/2)}\sum_{\frac\alpha\gamma<D}\gamma^{-n}D^{-n/2}\sum_{g\,(\mathrm{mod}\ \gamma)}e^{\frac{2\pi i}{\gamma}\,\alpha S[g+a]}\sum_{l=-\infty}^{\infty}h_l\!\left(D^{-1}\Bigl(z-\frac\alpha\gamma\Bigr)\right) \tag{148} \]
(the constant $i^{(p-q)/2}$ of (143) cancels against that of (146)). Using (126) we have
\[ h_l\!\left(D^{-1}\Bigl(z-\frac\alpha\gamma\Bigr)\right) = D^{n/2-1}\,e^{-2\pi i\frac\alpha\gamma\cdot\frac lD}\,h_{l/D}(z). \]
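In detail (a step the notes do not write out): evaluating (126) at the point $D^{-1}\bigl(z-\frac\alpha\gamma\bigr)$, whose imaginary part is $\eta/D$, and substituting $w \to Dw$, gives
\[ h_l\!\left(\frac{z-\alpha/\gamma}{D}\right) = e^{2\pi i\frac lD\left(\bar z-\frac\alpha\gamma\right)}\,D^{\frac n2-1}\int_{w>\max(0,\,l/D)}w^{\frac p2-1}\left(w-\frac lD\right)^{\frac q2-1}e^{-4\pi\eta w}\,dw = D^{\frac n2-1}\,e^{-2\pi i\frac\alpha\gamma\cdot\frac lD}\,h_{l/D}(z). \]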
We may therefore write (148) in the form
\[ \psi_a(z) = \delta_a + \frac{\pi^{n/2}\,\|S\|^{-\frac12}}{\Gamma(p/2)\,\Gamma(q/2)}\sum_{l=-\infty}^{\infty}h_{l/D}(z)\sum_{\frac\alpha\gamma<D}D^{-1}\gamma^{-n}\sum_{g\,(\mathrm{mod}\ \gamma)}e^{2\pi i\frac\alpha\gamma\left(S[g+a]-\frac lD\right)}. \]
We now contend that the inner sum is zero if
\[ S[a] - \frac lD \not\equiv 0 \pmod 1. \]
For, from (147), it is obvious that instead of the summation over $0 \le \frac\alpha\gamma < D$, we could equally well have the summation range $1 \le \frac\alpha\gamma < D+1$. This means that the expression
\[ \sum_{\frac\alpha\gamma<D}D^{-1}\gamma^{-n}\sum_{g\,(\mathrm{mod}\ \gamma)}e^{2\pi i\frac\alpha\gamma\left(S[g+a]-\frac lD\right)} \tag{149} \]
is unaltered by changing $\frac\alpha\gamma$ into $\frac\alpha\gamma + 1$. But, since $S[g+a] \equiv S[a] \pmod 1$ for integral $g$, this change multiplies (149) by
\[ e^{2\pi i\left(S[a]-\frac lD\right)}. \]
This proves our contention. We can therefore write
\[ \psi_a(z) = \delta_a + \frac{\pi^{n/2}\,\|S\|^{-\frac12}}{\Gamma(p/2)\,\Gamma(q/2)}\sum_{t\equiv S[a]\,(\mathrm{mod}\,1)}h_t(z)\sum_{\frac\alpha\gamma<D}D^{-1}\gamma^{-n}\sum_{g\,(\mathrm{mod}\ \gamma)}e^{2\pi i\frac\alpha\gamma\left(S[g+a]-t\right)}. \]
We can now write all the numbers $0 \le \frac\alpha\gamma < D$ in the form $\frac{\alpha'}{\gamma'} + r$ where $0 \le \frac{\alpha'}{\gamma'} < 1$ and $r = 0, 1, 2, \dots, D-1$. Because of the property of the expression (149) we get finally the

Theorem 10. The function $\psi_a(z)$ has the expansion
\[ \psi_a(z) = \delta_a + \frac{\pi^{n/2}\,\|S\|^{-\frac12}}{\Gamma(p/2)\,\Gamma(q/2)}\sum_{t\equiv S[a]\,(\mathrm{mod}\,1)}h_t(z)\sum_{0\le\frac\alpha\gamma<1}\gamma^{-n}\sum_{g\,(\mathrm{mod}\ \gamma)}e^{2\pi i\frac\alpha\gamma\left(S[g+a]-t\right)}. \]
The expression
\[ \lambda_t = \sum_{0\le\frac\alpha\gamma<1}\gamma^{-n}\sum_{g\,(\mathrm{mod}\ \gamma)}e^{2\pi i\frac\alpha\gamma\left(S[g+a]-t\right)} \tag{150} \]
is a so-called singular series. Series of this type were studied by Hardy and Littlewood in their researches on Waring's problem. We shall now give some properties of this singular series.
Let $q > 0$ be an integer. Put
\[ f_q = \sum_{\gamma\mid q}\left(\sum_{0\le\frac\alpha\gamma<1}\gamma^{-n}\sum_{g\,(\mathrm{mod}\ \gamma)}e^{2\pi i\frac\alpha\gamma\left(S[g+a]-t\right)}\right). \]
Since $q = \gamma s$ where $s$ is an integer, we may take the inner summation over a complete residue system modulo $q$. Then each of the terms will be repeated $s^n$ times. This gives
\[ f_q = q^{-n}\sum_{\beta=0}^{q-1}\sum_{g\,(\mathrm{mod}\ q)}e^{\frac{2\pi i}{q}\,\beta\left(S[g+a]-t\right)}. \]
Interchanging the two summations above we have
\[ f_q = q^{-n}\sum_{g\,(\mathrm{mod}\ q)}\left(\sum_{\beta=0}^{q-1}e^{\frac{2\pi i}{q}\,\beta\left(S[g+a]-t\right)}\right). \tag{151} \]
Because of the well-known formula
\[ \sum_{\beta=0}^{q-1}e^{\frac{2\pi i}{q}\,\beta m} = \begin{cases}0 & \text{if } q\nmid m\\ q & \text{if } q\mid m,\end{cases} \]
we see that the inner sum in (151) vanishes if the congruence
\[ S[x+a] \equiv t \pmod q \tag{152} \]
has no solution. If it has a solution $g$, then the inner sum has the value $q$. Thus
\[ f_q = \frac{A_q(S,a,t)}{q^{n-1}} \tag{153} \]
where $A_q(S,a,t)$ is the number of incongruent solutions mod $q$ of the congruence (152).
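As a small numerical confirmation of (153) (an added illustration; the ternary form below is an arbitrary choice), the Gauss-sum side and the counting side agree:

    import numpy as np
    from itertools import product

    q, t, diag = 4, 1, (1, 1, -1)          # S = diag(1,1,-1), a = 0, n = 3
    n = len(diag)
    S = lambda x: sum(c * xi * xi for c, xi in zip(diag, x))

    f_q = sum(np.exp(2j * np.pi * b * (S(g) - t) / q)
              for b in range(q) for g in product(range(q), repeat=n)) / q**n
    A_q = sum(1 for g in product(range(q), repeat=n) if (S(g) - t) % q == 0)
    print(f_q.real, A_q / q**(n - 1))      # both 1.5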
It will then follow from the definition of $\lambda_t$ that if $q$ runs through a sequence of integers $q_1, q_2, q_3, \dots$ such that every natural number divides all but a finite number of these $q$'s, then
\[ \lambda_t = \lim_{q\to\infty}f_q = \lim_{q\to\infty}\frac{A_q(S,a,t)}{q^{n-1}}. \tag{154} \]
From the definition of $A_q(S,a,t)$ and the Chinese remainder theorem, it follows that
\[ A_q(S,a,t)\,A_{q'}(S,a,t) = A_{qq'}(S,a,t) \]
for two coprime integers $q$, $q'$. This shows that $f_q = \dfrac{A_q(S,a,t)}{q^{n-1}}$ is a multiplicative arithmetic function of $q$. In order to compute $f_q$ for a given $q$, it is enough to compute $f_q$ for $q = p^l$, where $p$ is a prime number and $l > 0$ is an integer.
If $q = p^l$, $l > 0$ and $p$ a prime number, it can be shown that
\[ \alpha_p(S,a,t) = \lim_{l\to\infty}\frac{A_q(S,a,t)}{q^{n-1}} \]
exists. In fact, if $l$ is sufficiently large the value of $\dfrac{A_q(S,a,t)}{q^{n-1}}$ is independent of $l$. This shows that $\alpha_p(S,a,t)$ is really a rational number. Furthermore, for all except a finite number of primes (for instance, for $p\nmid 2d$)
\[ \alpha_p(S,a,t) = \frac{A_p(S,a,t)}{p^{n-1}}. \]
This enables us to compute $\alpha_p(S,a,t)$ for almost all primes $p$. From the fact that $\alpha_p(S,a,t)$ exists for every $p$ one can construct the product
\[ \alpha(S,a,t) = \prod_p\alpha_p(S,a,t). \]
It is proved in the analytic theory of quadratic forms that the product above converges and is different from zero only if every factor is different from zero. Moreover
\[ \lambda_t = \alpha(S,a,t). \tag{155} \]
This gives an arithmetical meaning for $\lambda_t$, namely that $\lambda_t > 0$ if and only if $A_q(S,a,t) \ne 0$ for every integer $q > 1$. For a proof of these statements see [2].
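A brute-force computation (an editorial illustration; the quinary form is an arbitrary example) exhibits both the multiplicativity of $A_q$ and the early stabilization of $A_{p^l}/p^{l(n-1)}$ at an odd prime:

    from itertools import product

    diag = (1, 1, 1, -1, -1)               # S = diag(1,1,1,-1,-1), a = 0, t = 1, n = 5
    n = len(diag)

    def A(q, t=1):
        # number of solutions of S[x] = t (mod q), x running mod q
        return sum(1 for x in product(range(q), repeat=n)
                   if sum(c * xi * xi for c, xi in zip(diag, x)) % q == t % q)

    f = lambda q: A(q) / q**(n - 1)
    print(A(6) == A(2) * A(3))             # True: multiplicativity via CRT
    print(f(3), f(9))                      # both 10/9: the local density alpha_3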
It should be noticed that since $\psi_a(z)$ is a linear function in the $h_t(z)$, and as $h_t(z)$ is a solution of the equation (137), the function $\psi_a(z)$, and hence the vector $\psi(z)$ defined by (143), is a solution of the differential equation
\[ \Omega\,\psi(z) = 0. \tag{156} \]
The series $\psi_a(z)$ are called the Eisenstein series. The vector $\psi(z)$ of Eisenstein series and the vector $\varphi(z)$ satisfy the same differential equation and have the same transformation formula with regard to modular substitutions.
9 Main Theorem

We shall now prove the main theorem of the analytic theory of indefinite quadratic forms, namely,

Theorem 11. If $n > 4$ and $S$ is a rational symmetric matrix which is semi-integral, of signature $p$, $q$, $p + q = n$, $pq > 0$, and $a$ is a rational vector with $2Sa$ integral, then for $t \equiv S[a] \pmod 1$
\[ M(S,a,t) = \mu_a(S)\prod_p\alpha_p(S,a,t), \]
the product running over all primes $p$.

Proof. The series
\[ \varphi_a(z) = \delta_a + \frac{\pi^{n/2}\,\|S\|^{-1/2}}{\Gamma(p/2)\,\Gamma(q/2)}\sum_t\frac{M(S,a,t)}{\mu_a(S)}\,h_t(z) \]
and
\[ \psi_a(z) = \delta_a + \frac{\pi^{n/2}\,\|S\|^{-1/2}}{\Gamma(p/2)\,\Gamma(q/2)}\sum_t\lambda_t\,h_t(z) \]
are fourier series in the real part of $z$. In order to prove theorem 11, it is enough to prove that
\[ \varphi_a(z) - \psi_a(z) = 0. \tag{157} \]
Then from (155) and the uniqueness theorem of fourier series, it would follow that the coefficients of $\varphi_a(z) - \psi_a(z)$ are zero and the theorem is proved. We shall therefore prove (157).
Let $\chi(z)$ be the vector
\[ \chi(z) = \begin{pmatrix}\chi_{a_1}(z)\\ \vdots\\ \chi_{a_l}(z)\end{pmatrix} \]
where
\[ \chi_a(z) = \varphi_a(z) - \psi_a(z). \]
If we put $\mu_t = \dfrac{M(S,a,t)}{\mu_a(S)}$, then
\[ \chi_a(z) = \frac{\pi^{n/2}\,\|S\|^{-\frac12}}{\Gamma(p/2)\,\Gamma(q/2)}\sum_t(\mu_t - \lambda_t)\,h_t(z). \tag{158} \]
It is to be noticed that $\chi_a(z)$ lacks the constant term. From theorem 9 and (144) we have
\[ \chi(z_M) = G(M,z)\,\chi(z). \tag{159} \]
The unitary matrix $\Lambda(M)$ is defined in § 3 by
\[ G(M,z) = \begin{cases}(\gamma z+\delta)^{p/2}(\gamma\bar z+\delta)^{q/2}\,\Lambda(M) & \text{if } \gamma\ne0\\ \Lambda(M) & \text{if } \gamma=0.\end{cases} \]
If we put $z_M = \xi_M + i\eta_M$, then
\[ \eta_M = \frac{\eta}{|\gamma z+\delta|^2}. \tag{160} \]
Let us now prove some properties of the function $h_t(z)$ introduced in (126). In the first place
\[ h_t(z)\,e^{-2\pi it\bar z}\to(4\pi\eta)^{1-n/2}\,\Gamma\!\left(\frac n2-1\right)\quad\text{for }\eta\to0. \tag{161} \]
This can be proved easily: for, if $t = 0$, then
\[ h_0(z) = (4\pi\eta)^{1-n/2}\,\Gamma\!\left(\frac n2-1\right) \]
as was seen in (134). Let now $t > 0$. Let us make the substitution $w \to \dfrac{w}{4\pi\eta}$ in the integral for $h_t(z)$. Then
\[ e^{-2\pi it\bar z}\,h_t(z) = \int_{w>4\pi\eta t}(4\pi\eta)^{1-n/2}\,w^{p/2-1}(w-4\pi\eta t)^{q/2-1}e^{-w}\,dw. \]
But when $\eta\to0$,
\[ \int_{w>4\pi\eta t}w^{p/2-1}(w-4\pi\eta t)^{q/2-1}e^{-w}\,dw\to\Gamma\!\left(\frac n2-1\right). \]
This proves (161). The case $t < 0$ is dealt with in a similar fashion. In case $\eta\to\infty$ we have
\[ \begin{cases} h_t(z)\to0 & \text{if } t\ne0\\ h_t(z)\,\eta^{n/2-1} \text{ remains bounded} & \text{if } t=0.\end{cases} \tag{162} \]
This is easily seen from the expression for $h_t(z)$ and (134). In fact, if $t \ne 0$, $h_t(z)\to0$ exponentially as $\eta\to\infty$.
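Numerically (an added sketch with arbitrary parameters), the limit (161) can be watched for a value $t < 0$, where $e^{-2\pi it\bar z}h_t(z)$ is exactly the integral in (126):

    import numpy as np
    from math import gamma, pi
    from scipy.integrate import quad

    p, q, t = 2.0, 3.0, -1.0               # n = 5
    half_n = (p + q) / 2
    for eta in (0.1, 0.01, 0.001):
        I, _ = quad(lambda w: w**(p/2 - 1) * (w - t)**(q/2 - 1) * np.exp(-4*pi*eta*w),
                    max(0.0, t), np.inf)
        print(I / ((4*pi*eta)**(1 - half_n) * gamma(half_n - 1)))   # -> 1 as eta -> 0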
Let us now consider the equation (158) for $\chi_a(z)$. The function $h_t(z)$ is monotone decreasing in $\eta$ for fixed $\xi$. This means that the series for $\chi_a(z)$ is uniformly bounded in the whole of the fundamental region of the modular group in the $z$-plane. Let $\varepsilon_a(z) = \eta^{n/4}\chi_a(z)$. Since $n > 4$ and (162) holds, with $h_t(z)\to0$ exponentially as $\eta\to\infty$ for $t\ne0$, it follows that $\varepsilon_a(z)$ is bounded, uniformly in $\xi$, in the fundamental region of the modular group in the upper half $z$-plane.

Let $\varepsilon(z)$ be the vector
\[ \varepsilon(z) = \begin{pmatrix}\varepsilon_{a_1}(z)\\ \vdots\\ \varepsilon_{a_l}(z)\end{pmatrix} = \eta^{n/4}\chi(z). \]
Then because of (160) and the transformation formula (159) for $\chi(z)$ it follows that, if $M$ is a modular matrix,
\[ |\varepsilon_{a_i}(z_M)| \le \sum_j|\lambda_{ij}|\,|\varepsilon_{a_j}(z)|\qquad(i=1,\dots,l) \tag{163} \]
where $\Lambda(M) = (\lambda_{ij})$. But $\Lambda(M)$ is a unitary matrix, so that $|\lambda_{ij}| \le 1$. This means that
\[ |\varepsilon_{a_i}(z_M)| \le \sum_j|\varepsilon_{a_j}(z)|. \]
From what we have seen above, it follows that $\varepsilon_a(z)$ is bounded in the whole of the upper half $z$-plane.
Now $\varphi_a(z)$ and $\psi_a(z)$ are fourier series in the real variable $\xi$ and have the period $2d = D$. The fourier coefficient of $\chi_a(z) = \varphi_a(z) - \psi_a(z)$ is
\[ \frac1D\int_0^D\chi_a(z)\,e^{-2\pi it\xi}\,d\xi \tag{164} \]
which clearly equals
\[ \frac{\pi^{n/2}\,\|S\|^{-\frac12}}{\Gamma(p/2)\,\Gamma(q/2)}\,(\mu_t-\lambda_t)\,h_t(i\eta). \tag{165} \]
Since $n > 4$ and $\eta^{n/4}\chi_a(z)$ is bounded in the upper half $z$-plane, it follows that
\[ \frac1D\,\eta^{n/4-1}\int_0^D\eta^{n/4}\chi_a(z)\,e^{-2\pi it\xi}\,d\xi\to0 \tag{166} \]
as $\eta\to0$. On the other hand the expression on the left of (166) is, in virtue of (164), (165), equal to
\[ \frac{\pi^{n/2}\,\|S\|^{-\frac12}}{\Gamma(p/2)\,\Gamma(q/2)}\,(\mu_t-\lambda_t)\,h_t(i\eta)\,\eta^{\frac n2-1}. \]
Because of (161)
\[ h_t(i\eta)\,\eta^{n/2-1}\to(4\pi)^{1-n/2}\,\Gamma\!\left(\frac n2-1\right)\ne0 \]
as $\eta\to0$. Because of (166) therefore, it follows that
\[ \mu_t-\lambda_t = 0. \]
Our theorem is thus completely proved.
Going back to the definitions of $\varphi_a(z)$ in (123) and $\psi_a(z)$ in (143) we have the partial fraction decomposition

Theorem 12. If $n > 4$, then
\[ V_a^{-1}\int_{F_a}f_a(z,H)\,dv = \delta_a + \frac{i^{(p-q)/2}}{\sqrt d}\sum_{\substack{(\alpha,\gamma)=1\\\gamma>0}}\gamma^{-n}\left(\sum_{g\,(\mathrm{mod}\ \gamma)}e^{\frac{2\pi i}{\gamma}\,\alpha S[g+a]}\right)\left(z-\frac\alpha\gamma\right)^{-\frac p2}\left(\bar z-\frac\alpha\gamma\right)^{-\frac q2}. \]
10 Remarks

Let us consider the main theorem. The right hand side is a product extended over all the primes and is zero if and only if at least one factor is zero. The left hand side is different from zero only if the equation
\[ S[x+a] = t \tag{167} \]
has an integral solution. Thus the main theorem shows that (167) has an integral solution if and only if
\[ S[x+a] \equiv t \pmod m \]
has a solution for every integer $m \ge 1$. Because of the definition of $\alpha_p(S,a,t)$ we may also say that if $S$ is indefinite and $n > 4$, then (167) has an integral solution if and only if (167) is true in $p$-adic integers for every $p$. In the case $t = 0$, this is the Meyer-Hasse theorem. But our main theorem is a quantitative improvement of the Meyer-Hasse theorem, inasmuch as it gives an expression for the measure of representation of $t$ by $S[x+a]$.
162 4. Analytic theory of Indenite quadratic forms
The method of proof consisted in rst obtaining a generating func-
tion f (z) for the Diophantine problem (167) and then constructing a
function E(z), the Eisenstein series, which behaves like f (z) for all mod-
ular substitutions. In other words, we construct a function E(z) which
behaves like f (z) when z approaches, in the upper half plane, a ratio-
nal point on the real axis. This idea was originally used by Hardy and
Ramanujan in the problem of representation of integers by sums of k
squares. The generating function f (z) here was the theta series
f (z) =
_

1=
e
2il
2
z
_

_
k
The function E(z) is constructed in the same way as here and Hardy and
Ramanujan showed that for k = 5, 6, 7, 8
f (z) = E(z).
But for k = 9, f (z) E(z). It is remarkable that in the case of indef-
inite forms, one has equality if k 4. One does not have, in general, 186
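The combinatorial content of this theta series is easily checked by machine (an editorial aside, not in the original): the coefficient of $e^{2\pi imz}$ in $f(z)$ is the number of representations of $m$ as a sum of $k$ squares.

    import numpy as np
    N, k = 50, 5
    theta = np.zeros(N + 1, dtype=np.int64)
    for l in range(-7, 8):                 # all l with l*l <= N
        theta[l * l] += 1
    f = np.array([1], dtype=np.int64)
    for _ in range(k):
        f = np.convolve(f, theta)[:N + 1]  # multiply the power series, truncated
    print(f[10])                           # 560 = r_5(10)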
One does not have, in general, for representation of integers by definite forms, a formula like that in theorem 11. One can obtain a modified formula by introducing the genus of a form. If $S > 0$ is an integral matrix, the genus of $S$ consists of all integral matrices $P > 0$ which are such that for each integer $m > 1$ there is an integral matrix $U$ with
\[ S[U] \equiv P \pmod m,\qquad (|U|, m) = 1. \]
It is then known that a genus consists of a finite number of classes. Let $S_1, \dots, S_a$ be representatives of the finitely many classes in the genus of $S$. If $T > 0$ is any $k$-rowed integral matrix, we can define for each $i$, $i = 1, \dots, a$, the number $A(S_i, T)$ of representations
\[ S_i[X] = T. \]
If $E(S_i)$ denotes the order of the unit group of $S_i$ (this being finite since $S_i > 0$) we can form
\[ A(S,T) = \left.\sum_{i=1}^{a}\frac{A(S_i,T)}{E(S_i)}\right/\sum_{i=1}^{a}\frac1{E(S_i)}, \]
the average measure of representation of $T$ by the genus of $S$.
Just as in (154) we can define for each $p$,
\[ \alpha_p(S,T) = \lim_{l\to\infty}\frac{A_{p^l}(S,T)}{p^{l\kappa}},\qquad \kappa = nk - \frac{k(k+1)}2. \]
This is finite, rational, and $\prod_p\alpha_p(S,T)$ converges if $k \le n$. The main theorem would then be
\[ A(S,T) = c\prod_p\alpha_p(S,T), \tag{168} \]
$c$ being a constant depending on $n$ and $k$. A similar formula, with suitable restrictions, exists if $S$ and $T$ are indefinite also.
One might ask if our theorem 12 could be extended to the cases $n = 2, 3, 4$. In case $n = 4$, and $S[x]$ is not a quaternion zero form, one can prove that $f(z) = E(z)$. The method is slightly more complicated. The differential operator $\Omega$, or slight variants of it, which we had not used in our main theorem, plays an important role here. In the cases $n = 2$ and $3$ it can be proved that our main theorem is false.

Generalizations of the main theorem may be made by considering representations not of numbers, but of rational symmetric matrices. One can also generalize the results by considering, instead of the domain of rational integers, the ring of integers in an algebraic number field, or more generally an order in an involutorial simple algebra over the rational number field. The bibliography gives the sources for these generalizations.
Bibliography

[1] C. L. Siegel: Additive Theorie der Zahlkörper II, Math. Ann. 88 (1923), p. 184-210.

[2] C. L. Siegel: Über die analytische Theorie der quadratischen Formen, Annals of Math. 36 (1935), p. 527-606; 37 (1936), p. 230-263; 38 (1937), p. 212-291.

[3] C. L. Siegel: Über die Zetafunktionen indefiniter quadratischer Formen, Math. Zeit. 43 (1938), p. 682-708; 44 (1939), p. 398-426.

[4] C. L. Siegel: Einheiten quadratischer Formen, Abh. Math. Sem. Hansischen Univ. 13 (1940), p. 209-239.

[5] C. L. Siegel: Equivalence of quadratic forms, Amer. Jour. of Math. 63 (1941), p. 658-680.

[6] C. L. Siegel: On the theory of indefinite quadratic forms, Annals of Math. 45 (1944), p. 577-622.

[7] C. L. Siegel: Indefinite quadratische Formen und Modulfunktionen, Courant Anniv. Volume (1948), p. 395-406.

[8] C. L. Siegel: Indefinite quadratische Formen und Funktionentheorie, Math. Annalen 124 (1951), I, p. 17-54; II, ibid., p. 364-387.

[9] C. L. Siegel: A generalization of the Epstein zeta-function, Jour. Ind. Math. Soc. 20 (1956), p. 1-10.

[10] C. L. Siegel: Die Funktionalgleichungen einiger Dirichletscher Reihen, Math. Zeit. 63 (1956), p. 363-373.

[11] H. Weyl: Fundamental domains for lattice groups in division algebras, Festschrift to A. Speiser (1945).