
Lectures on Risk Theory

Klaus D. Schmidt
Lehrstuhl für Versicherungsmathematik
Technische Universität Dresden
December 28, 1995
To the memory of
Peter Hess
and
Horand Störmer
Preface
Twenty-five years ago, Hans Bühlmann published his famous monograph Mathematical Methods in Risk Theory in the series Grundlehren der Mathematischen Wissenschaften and thus established nonlife actuarial mathematics as a recognized subject of probability theory and statistics with a glance towards economics. This book was my guide to the subject when I gave my first course on nonlife actuarial mathematics in Summer 1988, but at the same time I tried to incorporate into my lectures parts of the rapidly growing literature in this area which to a large extent was inspired by Bühlmann's book.
The present book is entirely devoted to a single topic of risk theory: Its subject is
the development in time of a fixed portfolio of risks. The book thus concentrates on
the claim number process and its relatives, the claim arrival process, the aggregate
claims process, the risk process, and the reserve process. Particular emphasis is
laid on characterizations of various classes of claim number processes, which provide
alternative criteria for model selection, and on their relation to the trinity of the
binomial, Poisson, and negative binomial distributions. Special attention is also paid
to the mixed Poisson process, which is a useful model in many applications, to the
problems of thinning, decomposition, and superposition of risk processes, which are
important with regard to reinsurance, and to the role of martingales, which occur
in a natural way in canonical situations. Of course, there is no risk theory without
ruin theory, but ruin theory is only a marginal subject in this book.
The book is based on lectures held at Technische Hochschule Darmstadt and later at
Technische Universität Dresden. In order to raise interest in actuarial mathematics
at an early stage, these lectures were designed for students having a solid background
in measure and integration theory and in probability theory, but advanced topics like
stochastic processes were not required as a prerequisite. As a result, the book starts
from first principles and develops the basic theory of risk processes in a systematic
manner and with proofs given in great detail. It is hoped that the reader reaching
the end will have acquired some insight and technical competence which are useful
also in other topics of risk theory and, more generally, in other areas of applied
probability theory.
I am deeply indebted to Jürgen Lehn for provoking my interest in actuarial mathe-
matics at a time when vector measures rather than probability measures were on my
mind. During the preparation of the book, I benefitted a lot from critical remarks
and suggestions from students, colleagues, and friends, and I would like to express
my gratitude to Peter Amrhein, Lutz Küsters, and Gerd Waldschaks (Universität Mannheim) and to Tobias Franke, Klaus-Thomas Heß, Wolfgang Macht, Beatrice Mensch, Lothar Partzsch, and Anja Voss (Technische Universität Dresden) for the
various discussions we had. I am equally grateful to Norbert Schmitz for several
comments which helped to improve the exposition.
Last, but not least, I would like to thank the editors and the publishers for accepting
these Lectures on Risk Theory in the series Skripten zur Mathematischen Stochastik
and for their patience, knowing that an author's estimate of the time needed to
complete his work has to be doubled in order to be realistic.
Dresden, December 18, 1995 Klaus D. Schmidt
Contents
Introduction
1 The Claim Arrival Process
1.1 The Model
1.2 The Erlang Case
1.3 A Characterization of the Exponential Distribution
1.4 Remarks
2 The Claim Number Process
2.1 The Model
2.2 The Erlang Case
2.3 A Characterization of the Poisson Process
2.4 Remarks
3 The Claim Number Process as a Markov Process
3.1 The Model
3.2 A Characterization of Regularity
3.3 A Characterization of the Inhomogeneous Poisson Process
3.4 A Characterization of Homogeneity
3.5 A Characterization of the Poisson Process
3.6 A Claim Number Process with Contagion
3.7 Remarks
4 The Mixed Claim Number Process
4.1 The Model
4.2 The Mixed Poisson Process
4.3 The Pólya-Lundberg Process
4.4 Remarks
5 The Aggregate Claims Process
5.1 The Model
5.2 Compound Distributions
5.3 A Characterization of the Binomial, Poisson, and Negative Binomial Distributions
5.4 The Recursions of Panjer and DePril
5.5 Remarks
6 The Risk Process in Reinsurance
6.1 The Model
6.2 Thinning a Risk Process
6.3 Decomposition of a Poisson Risk Process
6.4 Superposition of Poisson Risk Processes
6.5 Remarks
7 The Reserve Process and the Ruin Problem
7.1 The Model
7.2 Kolmogorov's Inequality for Positive Supermartingales
7.3 Lundberg's Inequality
7.4 On the Existence of a Superadjustment Coefficient
7.5 Remarks
Appendix: Special Distributions
Auxiliary Notions
Measures
Generalities on Distributions
Discrete Distributions
Continuous Distributions
Bibliography
List of Symbols
Author Index
Subject Index
Introduction
Modelling the development in time of an insurer's portfolio of risks is not an easy task
since such models naturally involve various stochastic processes; this is especially
true in nonlife insurance where, in contrast with whole life insurance, not only the
claim arrival times are random but the claim severities are random as well.
The sequence of claim arrival times and the sequence of claim severities, the claim
arrival process and the claim size process, constitute the two components of the
risk process describing the development in time of the expenses for the portfolio
under consideration. The claim arrival process determines, and is determined by,
the claim number process describing the number of claims occurring in any time
interval. Since claim numbers are integer-valued random variables whereas, in the continuous time model, claim arrival times are real-valued, the claim number process
is, in principle, more accessible to statistical considerations.
As a consequence of the equivalence of the claim arrival process and the claim
number process, the risk process is determined by the claim number process and
the claim size process. The collective point of view in risk theory considers only
the arrival time and the severity of a claim produced by the portfolio but neglects
the individual risk (or policy) causing the claim. It is therefore not too harmful to
assume that the claim severities in the portfolio are i. i. d. so that their distribution
can easily be estimated from observations. As noticed by Kupper (1) [1962], this means that the claim number process is much more interesting than the claim size process. Also, Helten and Sterk (2) [1976] pointed out that the separate analysis of the claim number process and the claim size process leads to better estimates of the aggregate claims amount, that is, the (random) sum of all claim severities occurring in some time interval.
(1) Kupper [1962] (translated from the German): Non-life insurance ... is based on two stochastic quantities, the claim number and the claim size. Here a fundamental difference from life insurance already becomes apparent, where in the vast majority of cases the latter is a fixed number determined in advance. The more interesting of the two variables is the claim number.
(2) Helten and Sterk [1976] (translated from the German): Thus risk theory is not primarily concerned with the stochastic law of the aggregate claims requirement, which results from the stochastic laws of claim frequency and claim severity, since a given aggregate claims requirement ... can result from claim frequency and claim severity in very different ways ... For motor third party liability insurance, a study by Tröblinger [1975] shows very clearly that splitting the aggregate claims requirement into claim frequency and claim severity can contribute substantially to a better estimation of the aggregate claims requirement.
The present book is devoted to the claim number process and also, to some extent, to
its relatives, the aggregate claims process, the risk process, and the reserve process.
The discussion of various classes of claim number processes will be rather detailed
since familiarity with a variety of properties of potential models is essential for model
selection. Of course, no mathematical model will ever completely match reality, but
analyzing models and confronting their properties with observations is an approved
way to check assumptions and to acquire more insight into real situations.
The book is organized as follows: We start with the claim arrival process (Chapter 1)
[Interdependence Table: Chapter 1 (The Claim Arrival Process); Chapter 2 (The Claim Number Process); Chapter 3 (The Claim Number Process as a Markov Process); Chapter 4 (The Mixed Claim Number Process); Chapter 5 (The Aggregate Claims Process); Chapter 6 (The Risk Process in Reinsurance); Chapter 7 (The Reserve Process and the Ruin Problem)]
and then turn to the claim number process which will be studied in three chapters,
exhibiting the properties of the Poisson process (Chapter 2) and of its extensions
to Markov claim number processes (Chapter 3) and to mixed Poisson processes
(Chapter 4). Mixed Poisson processes are particularly useful in applications since
they reflect the idea of an inhomogeneous portfolio. We then pass to the aggregate
claims process and study some methods of computing or estimating aggregate claims
distributions (Chapter 5). A particular aggregate claims process is the thinned claim
number process occurring in excess of loss reinsurance, where the reinsurer assumes
responsibility for claim severities exceeding a given priority, and this leads to the
discussion of thinning and the related problems of decomposition and superposition
of risk processes (Chapter 6). Finally, we consider the reserve process and the ruin
problem in an infinite time horizon when the premium income is proportional to
time (Chapter 7).
The Interdependence Table given above indicates various possibilities for a selective
reading of the book. For a first reading, it would be sufficient to study Chapters 1,
2, 5, and 7, but it should be noted that Chapter 2 is almost entirely devoted to the
Poisson process. Since Poisson processes are unrealistic models in many classes of
nonlife insurance, these chapters should be complemented by some of the material
presented in Chapters 3, 4, and 6. A substantial part of Chapter 4 is independent
of Chapter 3, and Chapters 6 and 7 contain only sporadic references to definitions
or results of Chapter 3 and depend on Chapter 5 only via Section 5.1. Finally, a
reader who is primarily interested in claim number processes may leave Chapter 5
after Section 5.1 and omit Chapter 7.
The reader of these notes is supposed to have a solid background in abstract measure
and integration theory as well as some knowledge in measure theoretic probability
theory; by contrast, particular experience with special distributions or stochastic
processes is not required. All prerequisites can be found in the monographs by
Aliprantis and Burkinshaw [1990], Bauer [1991, 1992], and Billingsley [1995].
Almost all proofs are given in great detail; some of them are elementary, others
are rather involved, and certain proofs may seem to be superfluous since the result
is suggested by the actuarial interpretation of the model. However, if actuarial
mathematics is to be considered as a part of probability theory and mathematical
statistics, then it has to accept its (sometimes bothering) rigour.
The notation in this book is standard, but for the convenience of the reader we fix
some symbols and conventions; further details on notation may be found in the List
of Symbols.
Throughout this book, let (Ω, F, P) be a fixed probability space, let B(R^n) denote the σ-algebra of Borel sets on the Euclidean space R^n, let ν denote the counting measure concentrated on N_0, and let λ^n denote the Lebesgue measure on B(R^n); in the case n = 1, the superscript n will be dropped.
The indicator function of a set A will be denoted by χ_A. A family of sets {A_i}_{i∈I} is said to be disjoint if it is pairwise disjoint, and in this case its union will be denoted by Σ_{i∈I} A_i. A family of sets {A_i}_{i∈I} is said to be a partition of A if it is disjoint and satisfies Σ_{i∈I} A_i = A.
For a sequence of random variables {Z_n}_{n∈N} which are i. i. d. (independent and identically distributed), a typical random variable of the sequence will be denoted by Z. As a rule, integrals are Lebesgue integrals, but occasionally we have to switch to Riemann integrals in order to complete computations.
In some cases, sums, products, intersections, and unions extend over the empty index set; in this case, they are defined to be equal to zero, one, the reference set, and the empty set, respectively. The terms positive, increasing, etc. are used in the weak sense admitting equality.
The main concepts related to (probability) distributions as well as the definitions and the basic properties of the distributions referred to by name in this book are collected in the Appendix. Except for the Dirac distributions, all parametric families of distributions are defined as to exclude degenerate distributions and such that their
parameters are taken from open intervals of R or subsets of N.
It has been pointed out before that the present book addresses only a single topic of
risk theory: The development in time of a fixed portfolio of risks. Other important topics in risk theory include the approximation of aggregate claims distributions, tarification, reserving, and reinsurance, as well as the wide field of life insurance or,
more generally, insurance of persons. The following references may serve as a guide
to recent publications on some topics of actuarial mathematics which are beyond
the scope of this book:
– Life insurance mathematics: Gerber [1986, 1990, 1995], Wolfsdorf [1986], Wolthuis [1994], and Helbig and Milbrodt [1995].
– Life and nonlife insurance mathematics: Bowers, Gerber, Hickman, Jones, and Nesbitt [1986], Panjer and Willmot [1992], and Daykin, Pentikäinen, and Pesonen [1994].
– Nonlife insurance mathematics: Bühlmann [1970], Gerber [1979], Sundt [1984, 1991, 1993], Heilmann [1987, 1988], Straub [1988], Wolfsdorf [1988], Goovaerts, Kaas, van Heerwaarden, and Bauwelinckx [1990], Hipp and Michel [1990], and
Norberg [1990].
Since the traditional distinction between life and nonlife insurance mathematics is
becoming more and more obsolete, future research in actuarial mathematics should,
in particular, aim at a unied theory providing models for all classes of insurance.
Chapter 1
The Claim Arrival Process
In order to model the development of an insurance business in time, we proceed in
several steps by successively introducing
– the claim arrival process,
– the claim number process,
– the aggregate claims process, and
– the reserve process.
We shall see that claim arrival processes and claim number processes determine each
other, and that claim number processes are the heart of the matter.
The present chapter is entirely devoted to the claim arrival process.
We first state the general model which will be studied throughout this book and
which will be completed later (Section 1.1). We then study the special case of a
claim arrival process having independent and identically exponentially distributed
waiting times between two successive claims (Section 1.2). We finally show that the exponential distribution is of particular interest since it is the unique distribution which is memoryless on the interval (0, ∞) (Section 1.3).
1.1 The Model
We consider a portfolio of risks which are insured by some insurer. The risks produce
claims and pay premiums to the insurer who, in turn, will settle the claims. The
portfolio may consist of a single risk or of several ones.
We assume that the insurer is primarily interested in the overall performance of the
portfolio, that is, the balance of premiums and claim payments aggregated over all
risks. (Of course, a surplus of premiums over claim payments would be welcome!)
In the case where the portfolio consists of several risks, this means that the insurer
does not care which of the risks in the portfolio causes a particular claim. This is
the collective point of view in risk theory.
We assume further that in the portfolio claims occur at random in an infinite time horizon starting at time zero such that
– no claims occur at time zero, and
– no two claims occur simultaneously.
The assumption of no two claims occurring simultaneously seems to be harmless.
Indeed, it should not present a serious problem when the portfolio is small; however,
when the portfolio is large, it depends on the class of insurance under consideration
whether this assumption is really acceptable. (For example, the situation is certainly
different in fire insurance and in (third party liability) automobile insurance, where in
certain countries a single insurance company holds about one quarter of all policies;
in such a situation, one has to take into account the possibility that two insurees
from the same large portfolio produce a car accident for which both are responsible
in parts.)
Comment: When the assumption of no two claims occurring simultaneously is
judged to be nonacceptable, it can nevertheless be saved by slightly changing the
point of view, namely, by considering claim events (like car accidents) instead of
single claims. The number of single claims occurring at a given claim event can then
be interpreted as the size of the claim event. This point of view will be discussed
further in Chapter 5 below.
Let us now transform the previous ideas into a probabilistic model:
A sequence of random variables {T_n}_{n∈N_0} is a claim arrival process if there exists a null set Ω_T ∈ F such that, for all ω ∈ Ω∖Ω_T,
– T_0(ω) = 0 and
– T_{n−1}(ω) < T_n(ω) holds for all n ∈ N.
Then we have T_n(ω) > 0 for all n ∈ N and all ω ∈ Ω∖Ω_T. The null set Ω_T is said to be the exceptional null set of the claim arrival process {T_n}_{n∈N_0}.
For a claim arrival process {T_n}_{n∈N_0} and for all n ∈ N, define the increment
W_n := T_n − T_{n−1} .
Then we have W_n(ω) > 0 for all n ∈ N and all ω ∈ Ω∖Ω_T, and hence
E[W_n] > 0
for all n ∈ N, as well as
T_n = Σ_{k=1}^n W_k
for all n ∈ N. The sequence {W_n}_{n∈N} is said to be the claim interarrival process induced by the claim arrival process {T_n}_{n∈N_0}.
Interpretation:
– T_n is the occurrence time of the nth claim.
– W_n is the waiting time between the occurrence of claim n−1 and the occurrence of claim n.
– With probability one, no claim occurs at time zero and no two claims occur simultaneously.
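For readers who like to experiment, the correspondence between arrival times and interarrival times is easily mirrored numerically. The following sketch is not part of the original text; it assumes NumPy, and the rate and the number of claims are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

alpha, n_claims = 2.0, 10               # illustrative rate and number of claims

# claim interarrival times W_1, ..., W_n, here taken i.i.d. Exp(alpha)
W = rng.exponential(scale=1.0 / alpha, size=n_claims)

# claim arrival times T_0 = 0 and T_n = W_1 + ... + W_n
T = np.concatenate(([0.0], np.cumsum(W)))

# the two processes determine each other: W_n = T_n - T_{n-1}
assert np.allclose(np.diff(T), W)
print(np.round(T, 3))
```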
For the remainder of this chapter, let {T_n}_{n∈N_0} be a fixed claim arrival process and let {W_n}_{n∈N} be the claim interarrival process induced by {T_n}_{n∈N_0}. Without loss of generality, we may and do assume that the exceptional null set of the claim arrival process is empty.
Since W_n = T_n − T_{n−1} and T_n = Σ_{k=1}^n W_k holds for all n ∈ N, it is clear that the claim arrival process and the claim interarrival process determine each other. In particular, we have the following obvious but useful result:
1.1.1 Lemma. The identity
σ({T_k}_{k∈{0,1,...,n}}) = σ({W_k}_{k∈{1,...,n}})
holds for all n ∈ N.
Furthermore, for n ∈ N, let T_n and W_n denote the random vectors Ω → R^n with coordinates T_i and W_i, respectively, and let M_n denote the (n×n) matrix with entries
m_{ij} := 1 if i ≥ j, and m_{ij} := 0 if i < j.
Then M_n is invertible and satisfies det M_n = 1, and we have T_n = M_n W_n and W_n = M_n^{−1} T_n. The following result is immediate:
1.1.2 Lemma. For all n ∈ N, the distributions of T_n and W_n satisfy
P_{T_n} = (P_{W_n})_{M_n} and P_{W_n} = (P_{T_n})_{M_n^{−1}} .
The assumptions of our model do not exclude the possibility that infinitely many claims occur in finite time. The event
{sup_{n∈N} T_n < ∞}
is called explosion.
1.1.3 Lemma. If sup_{n∈N} E[T_n] < ∞, then the probability of explosion is equal to one.
This is obvious from the monotone convergence theorem.
1.1.4 Corollary. If Σ_{n=1}^∞ E[W_n] < ∞, then the probability of explosion is equal to one.
In modelling a particular insurance business, one of the first decisions to take is to decide whether the probability of explosion should be zero or not. This decision is, of course, a decision concerning the distribution of the claim arrival process.
We conclude this section with a construction which in the following chapter will turn out to be a useful technical device:
For n ∈ N, the graph of T_n is defined to be the map U_n : Ω → Ω×R given by
U_n(ω) := (ω, T_n(ω)) .
Then each U_n is F–F⊗B(R) measurable. Define a measure μ : F⊗B(R) → [0, ∞] by letting
μ[C] := Σ_{n=1}^∞ P_{U_n}[C] .
The measure μ will be called the claim measure induced by the claim arrival process {T_n}_{n∈N_0}.
1.1.5 Lemma. The identity
μ[A×B] = ∫_A ( Σ_{n=1}^∞ χ_{{T_n∈B}} ) dP
holds for all A ∈ F and B ∈ B(R).
Proof. Since U_n^{−1}(A×B) = A ∩ {T_n ∈ B}, we have
μ[A×B] = Σ_{n=1}^∞ P_{U_n}[A×B]
= Σ_{n=1}^∞ P[A ∩ {T_n ∈ B}]
= Σ_{n=1}^∞ ∫_A χ_{{T_n∈B}} dP
= ∫_A ( Σ_{n=1}^∞ χ_{{T_n∈B}} ) dP ,
as was to be shown. □
The previous result connects the claim measure with the claim number process which will be introduced in Chapter 2.
Most results in this book involving special distributions concern the case where the distributions of the claim arrival times are absolutely continuous with respect to Lebesgue measure; this case will be referred to as the continuous time model. It is, however, quite interesting to compare the results for the continuous time model with corresponding ones for the case where the distributions of the claim arrival times are absolutely continuous with respect to the counting measure concentrated on N_0. In the latter case, there is no loss of generality if we assume that the claim arrival times are integer-valued, and this case will be referred to as the discrete time model.
The discrete time model is sometimes considered to be an approximation of the continuous time model if the time unit is small, but we shall see that the properties of the discrete time model may drastically differ from those of the continuous time model. On the other hand, the discrete time model may also serve as a simple model in its own right if the portfolio is small and if the insurer merely wishes to distinguish claim-free periods from periods with a strictly positive number of claims. Results for the discrete time model will be stated as problems in this and subsequent chapters.
Another topic which is related to our model is life insurance. In the simplest case, we consider a single random variable T satisfying P[T > 0] = 1, which is interpreted as the time of death or the lifetime of the insured individual; accordingly, this model is called single life insurance. More generally, we consider a finite sequence of random variables {T_n}_{n∈{0,1,...,N}} satisfying P[T_0 = 0] = 1 and P[T_{n−1} < T_n] = 1 for all n ∈ {1, ..., N}, where T_n is interpreted as the time of the nth death in a portfolio of N insured individuals; accordingly, this model is called multiple life insurance. Although life insurance will not be studied in detail in these notes, some aspects of single or multiple life insurance will be discussed as problems in this and subsequent chapters.
Problems
1.1.A If the sequence of claim interarrival times is i. i. d., then the probability of explosion is equal to zero.
1.1.B Discrete Time Model: The inequality T_n ≥ n holds for all n ∈ N.
1.1.C Discrete Time Model: The probability of explosion is equal to zero.
1.1.D Multiple Life Insurance: Extend the definition of a claim arrival process as to cover the case of multiple (and hence single) life insurance.
1.1.E Multiple Life Insurance: The probability of explosion is equal to zero.
1.2 The Erlang Case
In some of the special cases of our model which we shall discuss in detail, the claim
interarrival times are assumed or turn out to be independent and exponentially
distributed. In this situation, explosion is either impossible or certain:
1.2.1 Theorem (Zero-One Law on Explosion). Let {α_n}_{n∈N} be a sequence of real numbers in (0, ∞) and assume that the sequence of claim interarrival times {W_n}_{n∈N} is independent and satisfies P_{W_n} = Exp(α_n) for all n ∈ N.
(a) If the series Σ_{n=1}^∞ 1/α_n diverges, then the probability of explosion is equal to zero.
(b) If the series Σ_{n=1}^∞ 1/α_n converges, then the probability of explosion is equal to one.
Proof. By the dominated convergence theorem, we have
E[ e^{−Σ_{n=1}^∞ W_n} ] = E[ Π_{n=1}^∞ e^{−W_n} ]
= Π_{n=1}^∞ E[ e^{−W_n} ]
= Π_{n=1}^∞ α_n/(α_n + 1)
= Π_{n=1}^∞ ( 1 − 1/(1+α_n) )
≤ Π_{n=1}^∞ e^{−1/(1+α_n)}
= e^{−Σ_{n=1}^∞ 1/(1+α_n)} .
Thus, if the series Σ_{n=1}^∞ 1/α_n diverges, then the series Σ_{n=1}^∞ 1/(1+α_n) diverges as well and we have P[Σ_{n=1}^∞ W_n = ∞] = 1, and thus
P[ sup_{n∈N} T_n < ∞ ] = P[ Σ_{n=1}^∞ W_n < ∞ ] = 0 ,
which proves (a).
Assertion (b) is immediate from Corollary 1.1.4. □
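A small numerical illustration of the zero-one law may be helpful. The following sketch is not part of the original text; it assumes NumPy, and the two choices of the sequence {α_n} are illustrative: with α_n = n² the series Σ 1/α_n converges and the simulated arrival times stabilize near a finite value, whereas with α_n = 1 the series diverges and the arrival times grow without bound.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n = 10_000

# sum 1/alpha_n converges (alpha_n = n^2): explosion is certain,
# and E[sup T_n] = sum 1/n^2 = pi^2/6 ~ 1.645
alpha = np.arange(1, n + 1) ** 2
T = np.cumsum(rng.exponential(scale=1.0 / alpha))
print("convergent case:  T_n =", round(float(T[-1]), 3))

# sum 1/alpha_n diverges (alpha_n = 1): explosion has probability zero,
# and T_n grows roughly like n
alpha = np.ones(n)
T = np.cumsum(rng.exponential(scale=1.0 / alpha))
print("divergent case:   T_n =", round(float(T[-1]), 1))
```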
In the case of independent claim interarrival times, the following result is also of
interest:
1.2.2 Lemma. Let (0, ). If the sequence of claim interarrival times W
n

nN
is independent, then the following are equivalent:
(a) P
W
n
= Exp() for all nN.
(b) P
T
n
= Ga(, n) for all nN.
In this case, E[W
n
] = 1/ and E[T
n
] = n/ holds for all nN, and the probability
of explosion is equal to zero.
Proof. The simplest way to prove the equivalence of (a) and (b) is to use characteristic functions.
Assume first that (a) holds. Since T_n = Σ_{k=1}^n W_k, we have
φ_{T_n}(z) = Π_{k=1}^n φ_{W_k}(z) = Π_{k=1}^n α/(α − iz) = ( α/(α − iz) )^n ,
and thus P_{T_n} = Ga(α, n). Therefore, (a) implies (b).
Assume now that (b) holds. Since T_{n−1} + W_n = T_n, we have
( α/(α − iz) )^{n−1} φ_{W_n}(z) = φ_{T_{n−1}}(z) φ_{W_n}(z) = φ_{T_n}(z) = ( α/(α − iz) )^n ,
hence
φ_{W_n}(z) = α/(α − iz) ,
and thus P_{W_n} = Exp(α). Therefore, (b) implies (a).
The final assertion is obvious from the distributional assumptions and the zero-one law on explosion.
For readers not familiar with characteristic functions, we include an elementary proof of the implication (a) ⟹ (b); only this implication will be needed in the sequel. Assume that (a) holds. We proceed by induction.
Obviously, since T_1 = W_1 and Exp(α) = Ga(α, 1), we have P_{T_1} = Ga(α, 1).
Assume now that P_{T_n} = Ga(α, n) holds for some n ∈ N. Then we have
P_{T_n}[B] = ∫_B (α^n/Γ(n)) e^{−αx} x^{n−1} χ_{(0,∞)}(x) dλ(x)
and
P_{W_{n+1}}[B] = ∫_B α e^{−αx} χ_{(0,∞)}(x) dλ(x)
for all B ∈ B(R). Since T_n and W_{n+1} are independent, the convolution formula yields
P_{T_{n+1}}[B] = P_{T_n + W_{n+1}}[B]
= (P_{T_n} ∗ P_{W_{n+1}})[B]
= ∫_B ( ∫_R (α^n/Γ(n)) e^{−α(t−s)} (t−s)^{n−1} χ_{(0,∞)}(t−s) · α e^{−αs} χ_{(0,∞)}(s) dλ(s) ) dλ(t)
= ∫_B (α^{n+1}/Γ(n+1)) e^{−αt} ( ∫_R n (t−s)^{n−1} χ_{(0,t)}(s) dλ(s) ) χ_{(0,∞)}(t) dλ(t)
= ∫_B (α^{n+1}/Γ(n+1)) e^{−αt} ( ∫_0^t n (t−s)^{n−1} ds ) χ_{(0,∞)}(t) dλ(t)
= ∫_B (α^{n+1}/Γ(n+1)) e^{−αt} t^n χ_{(0,∞)}(t) dλ(t)
= ∫_B (α^{n+1}/Γ(n+1)) e^{−αt} t^{(n+1)−1} χ_{(0,∞)}(t) dλ(t)
for all B ∈ B(R), and thus P_{T_{n+1}} = Ga(α, n+1). Therefore, (a) implies (b). □
The particular role of the exponential distribution will be discussed in the following
section.
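The implication (a) ⟹ (b) of Lemma 1.2.2 can also be checked by simulation. The following sketch is not part of the original text; it assumes NumPy and SciPy, and all parameter values are illustrative. It compares the empirical distribution of T_n, generated as a sum of independent Exp(α) waiting times, with the gamma distribution of shape n and scale 1/α.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

alpha, n, samples = 1.5, 5, 100_000    # illustrative values

# T_n as the sum of n i.i.d. Exp(alpha) interarrival times
T_n = rng.exponential(scale=1.0 / alpha, size=(samples, n)).sum(axis=1)

# Ga(alpha, n) corresponds to the gamma distribution with shape n and scale 1/alpha
print("empirical mean:", T_n.mean(), "  n/alpha:", n / alpha)
print("KS statistic vs Ga(alpha, n):",
      stats.kstest(T_n, stats.gamma(a=n, scale=1.0 / alpha).cdf).statistic)
```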
Problems
1.2.A Let Q := Exp(α) for some α ∈ (0, ∞) and let Q′ denote the unique distribution satisfying Q′[{k}] = Q[(k−1, k]] for all k ∈ N. Then Q′ = Geo(1−e^{−α}).
1.2.B Discrete Time Model: Let η ∈ (0, 1). If the sequence {W_n}_{n∈N} is independent, then the following are equivalent:
(a) P_{W_n} = Geo(η) for all n ∈ N.
(b) P_{T_n} = Geo(n, η) for all n ∈ N.
In this case, E[W_n] = 1/η and E[T_n] = n/η holds for all n ∈ N.
1.3 A Characterization of the Exponential Distribution
One of the most delicate problems in probabilistic modelling is the appropriate choice of the distributions of the random variables in the model. More precisely, it is the joint distribution of all random variables that has to be specified. To make this choice, it is useful to know that certain distributions are characterized by general properties which are easy to interpret.
In the model considered here, it is sufficient to specify the distribution of the claim interarrival process. This problem is considerably reduced if the claim interarrival times are assumed to be independent, but even in that case the appropriate choice of the distributions of the single claim interarrival times is not obvious. In what follows we shall characterize the exponential distribution by a simple property which is helpful to decide whether or not in a particular insurance business this distribution is appropriate for the claim interarrival times.
For the moment, consider a random variable W which may be interpreted as a waiting time.
If P_W = Exp(α), then the survival function R → [0, 1] : w ↦ P[W > w] of the distribution of W satisfies P[W > w] = e^{−αw} for all w ∈ R_+, and this yields
P[W > s+t] = P[W > s] · P[W > t]
or, equivalently,
P[W > s+t | W > s] = P[W > t]
for all s, t ∈ R_+. The first identity reflects the fact that the survival function of the exponential distribution is self-similar on R_+ in the sense that, for each s ∈ R_+, the graphs of the mappings t ↦ P[W > s+t] and t ↦ P[W > t] differ only by a scaling factor. Moreover, if W is interpreted as a waiting time, then the second identity means that the knowledge of having waited more than s time units does not provide any information on the remaining waiting time. Loosely speaking, the exponential distribution has no memory (or does not use it). The question arises whether the exponential distribution is the unique distribution having this property.
Before formalizing the notion of a memoryless distribution, we observe that in the case P_W = Exp(α) the above identities hold for all s, t ∈ R_+ but fail for all s, t ∈ R such that s < 0 < s+t; on the other hand, we have P_W[R_+] = 1. These observations lead to the following definition:
A distribution Q : B(R) → [0, 1] is memoryless on S ∈ B(R) if
– Q[S] = 1 and
– the identity
Q[(s+t, ∞)] = Q[(s, ∞)] · Q[(t, ∞)]
holds for all s, t ∈ S.
The following result yields a general property of memoryless distributions:
1.3.1 Theorem. Let Q : B(R) → [0, 1] be a distribution which is memoryless on S ∈ B(R). If 0 ∈ S, then Q satisfies either Q[{0}] = 1 or Q[(0, ∞)] = 1.
Proof. Assume that Q[(0, ∞)] < 1. Since 0 ∈ S, we have
Q[(0, ∞)] = Q[(0, ∞)] · Q[(0, ∞)] = Q[(0, ∞)]² ,
hence
Q[(0, ∞)] = 0 ,
and thus
Q[(t, ∞)] = Q[(t, ∞)] · Q[(0, ∞)] = 0
for all t ∈ S.
Define t := inf S and choose a sequence {t_n}_{n∈N} ⊆ S which decreases to t. Then we have
Q[(t, ∞)] = sup_{n∈N} Q[(t_n, ∞)] = 0 .
Since Q[S] = 1, we also have
Q[(−∞, t)] = 0 .
Therefore, we have
Q[{t}] = 1 ,
hence Q[{t} ∩ S] = 1, and thus t ∈ S.
Finally, since 0 ∈ S, we have either t < 0 or t = 0. But t < 0 implies t ∈ (2t, ∞) and hence
Q[{t}] ≤ Q[(2t, ∞)] = Q[(t, ∞)] · Q[(t, ∞)] = Q[(t, ∞)]² ,
which is impossible. Therefore, we have t = 0 and hence Q[{0}] = 1, as was to be shown. □
The following result characterizes the exponential distribution:
1.3.2 Theorem. For a distribution Q : B(R) → [0, 1], the following are equivalent:
(a) Q is memoryless on (0, ∞).
(b) Q = Exp(α) for some α ∈ (0, ∞).
In this case, α = −log Q[(1, ∞)].
Proof. Note that Q = Exp(α) if and only if the identity
Q[(t, ∞)] = e^{−αt}
holds for all t ∈ [0, ∞).
Assume that (a) holds. By induction, we have
Q[(n, ∞)] = Q[(1, ∞)]^n
and
Q[(1, ∞)] = Q[(1/n, ∞)]^n
for all n ∈ N.
Thus, Q[(1, ∞)] = 1 is impossible because of
0 = Q[∅] = inf_{n∈N} Q[(n, ∞)] = inf_{n∈N} Q[(1, ∞)]^n ,
and Q[(1, ∞)] = 0 is impossible because of
1 = Q[(0, ∞)] = sup_{n∈N} Q[(1/n, ∞)] = sup_{n∈N} Q[(1, ∞)]^{1/n} .
Therefore, we have
Q[(1, ∞)] ∈ (0, 1) .
Define now α := −log Q[(1, ∞)]. Then we have α ∈ (0, ∞) and
Q[(1, ∞)] = e^{−α} ,
and thus
Q[(m/n, ∞)] = Q[(1, ∞)]^{m/n} = ( e^{−α} )^{m/n} = e^{−αm/n}
for all m, n ∈ N. This yields
Q[(t, ∞)] = e^{−αt}
for all rational t ∈ (0, ∞). Finally, for each t ∈ [0, ∞) we may choose a sequence {t_n}_{n∈N} of rationals in (0, ∞) which decreases to t, and we obtain
Q[(t, ∞)] = sup_{n∈N} Q[(t_n, ∞)] = sup_{n∈N} e^{−αt_n} = e^{−αt} .
By the introductory remark, it follows that Q = Exp(α). Therefore, (a) implies (b).
The converse implication is obvious. □
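The memorylessness identity at the heart of Theorem 1.3.2 is easy to check empirically for exponential samples. The following sketch is not part of the original text; it assumes NumPy, and the values of α, s, and t are illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

alpha, s, t = 2.0, 0.7, 0.4            # illustrative values
W = rng.exponential(scale=1.0 / alpha, size=1_000_000)

# estimate of the conditional probability P[W > s+t | W > s]
cond = (W > s + t).sum() / (W > s).sum()
print("P[W > s+t | W > s] ~", round(float(cond), 4))
print("P[W > t]           ~", round(float((W > t).mean()), 4))
print("exact value exp(-alpha*t) =", round(float(np.exp(-alpha * t)), 4))
```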
1.3.3 Corollary. For a distribution Q : B(R) → [0, 1], the following are equivalent:
(a) Q is memoryless on R_+.
(b) Either Q = δ_0 or Q = Exp(α) for some α ∈ (0, ∞).
Proof. The assertion is immediate from Theorems 1.3.1 and 1.3.2. □
With regard to the previous result, note that the Dirac distribution δ_0 is the limit of the exponential distributions Exp(α) as α → ∞.
1.3.4 Corollary. There is no distribution which is memoryless on R.
Proof. If Q : B(R) → [0, 1] is a distribution which is memoryless on R, then either Q = δ_0 or Q = Exp(α) for some α ∈ (0, ∞), by Theorem 1.3.1 and Corollary 1.3.3. On the other hand, none of these distributions is memoryless on R. □
Problems
1.3.A Discrete Time Model: For a distribution Q : B(R) → [0, 1], the following are equivalent:
(a) Q is memoryless on N.
(b) Either Q = δ_1 or Q = Geo(η) for some η ∈ (0, 1).
Note that the Dirac distribution δ_1 is the limit of the geometric distributions Geo(η) as η → 1.
1.3.B Discrete Time Model: For a distribution Q : B(R) → [0, 1], the following are equivalent:
(a) Q is memoryless on N_0.
(b) Either Q = δ_0 or Q = δ_1 or Q = Geo(η) for some η ∈ (0, 1).
In particular, the negative binomial distribution fails to be memoryless on N_0.
1.3.C There is no distribution which is memoryless on (−∞, 0).
1.4 Remarks
Since the conclusions obtained in a probabilistic model usually concern probabilities
and not single realizations of random variables, it is natural to state the assump-
tions of the model in terms of probabilities as well. While this is a merely formal
justification for the exceptional null set in the definition of the claim arrival process,
there is also a more substantial reason: As we shall see in Chapters 5 and 6 below,
it is sometimes of interest to construct a claim arrival process from other random
variables, and in that case it cannot in general be ensured that the exceptional null
set is empty.
Theorem 1.3.2 is the most famous characterization of the exponential distribution.
Further characterizations of the exponential distribution can be found in the mono-
graphs by Galambos and Kotz [1978] and, in particular, by Azlarov and Volodin
[1986].
Chapter 2
The Claim Number Process
In the previous chapter, we have formulated a general model for the occurrence of
claims in an insurance business and we have studied the claim arrival process in
some detail.
In the present chapter, we proceed one step further by introducing the claim number
process. Particular attention will be given to the Poisson process.
We first introduce the general claim number process and show that claim number
processes and claim arrival processes determine each other (Section 2.1). We then
establish a connection between certain assumptions concerning the distributions of
the claim arrival times and the distributions of the claim numbers (Section 2.2). We
finally prove the main result of this chapter which characterizes the (homogeneous)
Poisson process in terms of the claim interarrival process, the claim measure, and a
martingale property (Section 2.3).
2.1 The Model
A family of random variables {N_t}_{t∈R_+} is a claim number process if there exists a null set Ω_N ∈ F such that, for all ω ∈ Ω∖Ω_N,
– N_0(ω) = 0,
– N_t(ω) ∈ N_0 ∪ {∞} for all t ∈ (0, ∞),
– N_t(ω) = inf_{s∈(t,∞)} N_s(ω) for all t ∈ R_+,
– sup_{s∈[0,t)} N_s(ω) ≤ N_t(ω) ≤ sup_{s∈[0,t)} N_s(ω) + 1 for all t ∈ R_+, and
– sup_{t∈R_+} N_t(ω) = ∞.
The null set Ω_N is said to be the exceptional null set of the claim number process {N_t}_{t∈R_+}.
Interpretation:
– N_t is the number of claims occurring in the interval (0, t].
– Almost all paths of {N_t}_{t∈R_+} start at zero, are right-continuous, increase with jumps of height one at discontinuity points, and increase to infinity.
Our first result asserts that every claim arrival process induces a claim number process, and vice versa:
2.1.1 Theorem.
(a) Let {T_n}_{n∈N_0} be a claim arrival process. For all t ∈ R_+ and ω ∈ Ω, define
N_t(ω) := Σ_{n=1}^∞ χ_{{T_n≤t}}(ω) .
Then {N_t}_{t∈R_+} is a claim number process such that Ω_N = Ω_T, and the identity
T_n(ω) = inf{ t ∈ R_+ | N_t(ω) = n }
holds for all n ∈ N_0 and all ω ∈ Ω∖Ω_T.
(b) Let {N_t}_{t∈R_+} be a claim number process. For all n ∈ N_0 and ω ∈ Ω, define
T_n(ω) := inf{ t ∈ R_+ | N_t(ω) = n } .
Then {T_n}_{n∈N_0} is a claim arrival process such that Ω_T = Ω_N, and the identity
N_t(ω) = Σ_{n=1}^∞ χ_{{T_n≤t}}(ω)
holds for all t ∈ R_+ and all ω ∈ Ω∖Ω_N.
The verification of Theorem 2.1.1 is straightforward.
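Theorem 2.1.1 can be mirrored in a few lines of code. The following sketch is not part of the original text; it assumes NumPy, the helper functions are hypothetical names introduced only for illustration, and the grid resolution is arbitrary. It computes N_t = Σ_n χ_{{T_n ≤ t}} from the arrival times and recovers T_n = inf{t | N_t = n} from a discretized path.

```python
import numpy as np

def claim_number(T, t):
    """N_t = number of arrival times T_1, T_2, ... with T_n <= t (T[0] = 0 is T_0)."""
    return int(np.searchsorted(T[1:], t, side="right"))

def arrival_time(N_path, grid, n):
    """T_n = inf{t | N_t = n}, read off a path of N evaluated on a time grid."""
    hits = np.nonzero(N_path >= n)[0]
    return grid[hits[0]] if hits.size else np.inf

rng = np.random.default_rng(seed=1)
T = np.concatenate(([0.0], np.cumsum(rng.exponential(0.5, size=8))))

grid = np.linspace(0.0, 10.0, 2001)
N_path = np.array([claim_number(T, t) for t in grid])

# up to the grid resolution, the recovered T_n agree with the original arrival times
print(np.round(T[1:4], 3))
print([round(float(arrival_time(N_path, grid, n)), 3) for n in (1, 2, 3)])
```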
[Figure: Claim Arrival Process and Claim Number Process. A sample path t ↦ N_t(ω), starting at 0 and jumping by one at the claim arrival times T_1(ω), ..., T_5(ω).]
For the remainder of this chapter, let {N_t}_{t∈R_+} be a claim number process, let {T_n}_{n∈N_0} be the claim arrival process induced by the claim number process, and let {W_n}_{n∈N} be the claim interarrival process induced by the claim arrival process. We assume that the exceptional null set is empty.
By virtue of the assumption that the exceptional null set is empty, we have two simple but most useful identities, showing that certain events determined by the claim number process can be interpreted as events determined by the claim arrival process, and vice versa:
2.1.2 Lemma. The identities
(a) {N_t ≥ n} = {T_n ≤ t} and
(b) {N_t = n} = {T_n ≤ t}∖{T_{n+1} ≤ t} = {T_n ≤ t < T_{n+1}}
hold for all n ∈ N_0 and t ∈ R_+.
The following result expresses in a particularly concise way the fact that the claim number process and the claim arrival process contain the same information:
2.1.3 Lemma.
σ({N_t}_{t∈R_+}) = σ({T_n}_{n∈N_0}) .
In view of the preceding discussion, it is not surprising that explosion can also be expressed in terms of the claim number process:
2.1.4 Lemma. The probability of explosion satisfies
P[ sup_{n∈N} T_n < ∞ ] = P[ ⋃_{t∈N} {N_t = ∞} ] = P[ ⋃_{t∈(0,∞)} {N_t = ∞} ] .
Proof. Since the family of sets {{N_t = ∞}}_{t∈(0,∞)} is increasing, we have
⋃_{t∈(0,∞)} {N_t = ∞} = ⋃_{t∈N} {N_t = ∞} .
By Lemma 2.1.2, this yields
⋃_{t∈(0,∞)} {N_t = ∞} = ⋃_{t∈N} {N_t = ∞}
= ⋃_{t∈N} ⋂_{n∈N} {N_t ≥ n}
= ⋃_{t∈N} ⋂_{n∈N} {T_n ≤ t}
= ⋃_{t∈N} {sup_{n∈N} T_n ≤ t}
= {sup_{n∈N} T_n < ∞} ,
and the assertion follows. □
2.1.5 Corollary. Assume that the claim number process has finite expectations. Then the probability of explosion is equal to zero.
Proof. By assumption, we have E[N_t] < ∞ and hence P[N_t = ∞] = 0 for all t ∈ (0, ∞). The assertion now follows from Lemma 2.1.4. □
The discussion of the claim number process will to a considerable extent rely on the properties of its increments, which are defined as follows:
For s, t ∈ R_+ such that s ≤ t, the increment of the claim number process {N_t}_{t∈R_+} on the interval (s, t] is defined to be
N_t − N_s := Σ_{n=1}^∞ χ_{{s<T_n≤t}} .
Since N_0 = 0 and T_n > 0 for all n ∈ N, this is in accordance with the definition of N_t; in addition, we have
N_t(ω) = (N_t − N_s)(ω) + N_s(ω) ,
even if N_s(ω) is infinite.
The final result in this section connects the increments of the claim number process, and hence the claim number process itself, with the claim measure:
2.1.6 Lemma. The identity
μ[A×(s, t]] = ∫_A (N_t − N_s) dP
holds for all A ∈ F and s, t ∈ R_+ such that s ≤ t.
Proof. The assertion follows from Lemma 1.1.5 and the definition of N_t − N_s. □
In the discrete time model, we have N_t = N_{t+h} for all t ∈ N_0 and h ∈ [0, 1) so that nothing is lost if in this case the index set of the claim number process {N_t}_{t∈R_+} is reduced to N_0; we shall then refer to the sequence {N_l}_{l∈N_0} as the claim number process induced by {T_n}_{n∈N_0}.
Problems
2.1.A Discrete Time Model: The inequalities
(a) N_l ≤ N_{l−1} + 1 and
(b) N_l ≤ l
hold for all l ∈ N.
2.1.B Discrete Time Model: The identities
(a) {N_l − N_{l−1} = 0} = Σ_{j=1}^l {T_{j−1} < l < T_j} and {N_l − N_{l−1} = 1} = Σ_{j=1}^l {T_j = l}, and
(b) {T_n = l} = {n ≤ N_l}∖{n ≤ N_{l−1}} = {N_{l−1} < n ≤ N_l}
hold for all n ∈ N and l ∈ N.
2.1.C Multiple Life Insurance: For all t ∈ R_+, define N_t := Σ_{n=1}^N χ_{{T_n≤t}}. Then there exists a null set Ω_N ∈ F such that, for all ω ∈ Ω∖Ω_N,
– N_0(ω) = 0,
– N_t(ω) ∈ {0, 1, ..., N} for all t ∈ (0, ∞),
– N_t(ω) = inf_{s∈(t,∞)} N_s(ω) for all t ∈ R_+,
– sup_{s∈[0,t)} N_s(ω) ≤ N_t(ω) ≤ sup_{s∈[0,t)} N_s(ω) + 1 for all t ∈ R_+, and
– sup_{t∈R_+} N_t(ω) = N.
2.2 The Erlang Case
In the present section we return to the special case of claim arrival times having an Erlang distribution.
2.2.1 Lemma. Let α ∈ (0, ∞). Then the following are equivalent:
(a) P_{T_n} = Ga(α, n) for all n ∈ N.
(b) P_{N_t} = P(αt) for all t ∈ (0, ∞).
In this case, E[T_n] = n/α holds for all n ∈ N and E[N_t] = αt holds for all t ∈ (0, ∞).
Proof. Note that the identity
e^{−αt} (αt)^n / n! = ∫_0^t (α^n/Γ(n)) e^{−αs} s^{n−1} ds − ∫_0^t (α^{n+1}/Γ(n+1)) e^{−αs} s^n ds
holds for all n ∈ N and t ∈ (0, ∞).
Assume first that (a) holds. Lemma 2.1.2 yields
P[N_t = 0] = P[t < T_1] = e^{−αt} ,
and, for all n ∈ N,
P[N_t = n] = P[T_n ≤ t] − P[T_{n+1} ≤ t]
= ∫_{(−∞,t]} (α^n/Γ(n)) e^{−αs} s^{n−1} χ_{(0,∞)}(s) dλ(s) − ∫_{(−∞,t]} (α^{n+1}/Γ(n+1)) e^{−αs} s^{(n+1)−1} χ_{(0,∞)}(s) dλ(s)
= ∫_0^t (α^n/Γ(n)) e^{−αs} s^{n−1} ds − ∫_0^t (α^{n+1}/Γ(n+1)) e^{−αs} s^n ds
= e^{−αt} (αt)^n / n! .
This yields
P[N_t = n] = e^{−αt} (αt)^n / n!
for all n ∈ N_0, and hence P_{N_t} = P(αt). Therefore, (a) implies (b).
Assume now that (b) holds. Since T_n > 0, we have
P[T_n ≤ t] = 0
for all t ∈ (−∞, 0]; also, for all t ∈ (0, ∞), Lemma 2.1.2 yields
P[T_n ≤ t] = P[N_t ≥ n]
= 1 − P[N_t ≤ n−1]
= 1 − Σ_{k=0}^{n−1} P[N_t = k]
= 1 − Σ_{k=0}^{n−1} e^{−αt} (αt)^k / k!
= (1 − e^{−αt}) − Σ_{k=1}^{n−1} e^{−αt} (αt)^k / k!
= ∫_0^t α e^{−αs} ds − Σ_{k=1}^{n−1} ( ∫_0^t (α^k/Γ(k)) e^{−αs} s^{k−1} ds − ∫_0^t (α^{k+1}/Γ(k+1)) e^{−αs} s^k ds )
= ∫_0^t (α^n/Γ(n)) e^{−αs} s^{n−1} ds
= ∫_{(−∞,t]} (α^n/Γ(n)) e^{−αs} s^{n−1} χ_{(0,∞)}(s) dλ(s) .
This yields
P[T_n ≤ t] = ∫_{(−∞,t]} (α^n/Γ(n)) e^{−αs} s^{n−1} χ_{(0,∞)}(s) dλ(s)
for all t ∈ R, and hence P_{T_n} = Ga(α, n). Therefore, (b) implies (a).
The final assertion is obvious. □
By Lemma 1.2.2, the equivalent conditions of Lemma 2.2.1 are fulfilled whenever the claim interarrival times are independent and identically exponentially distributed; that case, however, can be characterized by a much stronger property of the claim number process involving its increments, as will be seen in the following section.
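The equivalence in Lemma 2.2.1 can also be illustrated by simulation. The following sketch is not part of the original text; it assumes NumPy, and all parameter values are illustrative. Starting from independent Exp(α) interarrival times (so that the arrival times have the distributions Ga(α, n) by Lemma 1.2.2), it compares the empirical distribution of N_t with the Poisson weights e^{−αt}(αt)^n/n!.

```python
import math
import numpy as np

rng = np.random.default_rng(seed=1)

alpha, t, runs = 1.2, 3.0, 200_000     # illustrative values

# N_t = number of partial sums W_1 + ... + W_n falling into (0, t];
# 40 interarrival times are far more than enough for alpha*t = 3.6
W = rng.exponential(scale=1.0 / alpha, size=(runs, 40))
N_t = (np.cumsum(W, axis=1) <= t).sum(axis=1)

mu = alpha * t
for n in range(5):
    poisson_pmf = math.exp(-mu) * mu**n / math.factorial(n)
    print(n, round(float((N_t == n).mean()), 4), round(poisson_pmf, 4))
```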
Problem
2.2.A Discrete Time Model: Let η ∈ (0, 1). Then the following are equivalent:
(a) P_{T_n} = Geo(n, η) for all n ∈ N.
(b) P_{N_l} = B(l, η) for all l ∈ N.
In this case, E[T_n] = n/η holds for all n ∈ N and E[N_l] = lη holds for all l ∈ N; moreover, for each l ∈ N, the pair (N_l − N_{l−1}, N_{l−1}) is independent and satisfies P_{N_l − N_{l−1}} = B(η).
2.3 A Characterization of the Poisson Process
The claim number process {N_t}_{t∈R_+} has
– independent increments if, for all m ∈ N and t_0, t_1, ..., t_m ∈ R_+ such that 0 = t_0 < t_1 < ... < t_m, the family of increments {N_{t_j} − N_{t_{j−1}}}_{j∈{1,...,m}} is independent, it has
– stationary increments if, for all m ∈ N and t_0, t_1, ..., t_m, h ∈ R_+ such that 0 = t_0 < t_1 < ... < t_m, the family of increments {N_{t_j+h} − N_{t_{j−1}+h}}_{j∈{1,...,m}} has the same distribution as {N_{t_j} − N_{t_{j−1}}}_{j∈{1,...,m}}, and it is
– a (homogeneous) Poisson process with parameter α ∈ (0, ∞) if it has stationary independent increments such that P_{N_t} = P(αt) holds for all t ∈ (0, ∞).
It is immediate from the definitions that a claim number process having independent increments has stationary increments if and only if the identity P_{N_{t+h} − N_t} = P_{N_h} holds for all t, h ∈ R_+.
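These defining properties can be made tangible with a short simulation. The sketch below is not part of the original text; it assumes NumPy, the interval endpoints and the rate are illustrative, and the path is built from Exp(α) interarrival times: increments over disjoint intervals should be (empirically) uncorrelated, and their mean should depend only on the length of the interval.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

alpha, runs = 1.5, 200_000                     # illustrative values
W = rng.exponential(scale=1.0 / alpha, size=(runs, 60))
T = np.cumsum(W, axis=1)

def increment(a, b):
    """N_b - N_a = number of arrival times falling into (a, b]."""
    return ((T > a) & (T <= b)).sum(axis=1)

I1, I2 = increment(0.0, 2.0), increment(2.0, 5.0)   # increments on (0,2] and (2,5]
print("corr(I1, I2)      ~", round(float(np.corrcoef(I1, I2)[0, 1]), 4))   # close to 0
print("mean of N_5 - N_2 ~", round(float(I2.mean()), 3), "  alpha*(5-2) =", alpha * 3)
print("mean of N_3 - N_0 ~", round(float(increment(0.0, 3.0).mean()), 3))  # same length, same mean
```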
The following result exhibits a property of the Poisson process which is not captured by Lemma 2.2.1:
2.3.1 Lemma (Multinomial Criterion). Let α ∈ (0, ∞). Then the following are equivalent:
(a) The claim number process {N_t}_{t∈R_+} satisfies
P_{N_t} = P(αt)
for all t ∈ (0, ∞) as well as
P[ ⋂_{j=1}^m {N_{t_j} − N_{t_{j−1}} = k_j} | N_{t_m} = n ] = ( n! / Π_{j=1}^m k_j! ) Π_{j=1}^m ( (t_j − t_{j−1}) / t_m )^{k_j}
for all m ∈ N and t_0, t_1, ..., t_m ∈ R_+ such that 0 = t_0 < t_1 < ... < t_m and for all n ∈ N_0 and k_1, ..., k_m ∈ N_0 such that Σ_{j=1}^m k_j = n.
(b) The claim number process {N_t}_{t∈R_+} is a Poisson process with parameter α.
Proof. The result is obtained by straightforward calculation:
Assume first that (a) holds. Then we have
P[ ⋂_{j=1}^m {N_{t_j} − N_{t_{j−1}} = k_j} ]
= P[ ⋂_{j=1}^m {N_{t_j} − N_{t_{j−1}} = k_j} | N_{t_m} = n ] · P[N_{t_m} = n]
= ( n! / Π_{j=1}^m k_j! ) · Π_{j=1}^m ( (t_j − t_{j−1}) / t_m )^{k_j} · e^{−αt_m} (αt_m)^n / n!
= ( n! / Π_{j=1}^m k_j! ) · Π_{j=1}^m ( (t_j − t_{j−1}) / t_m )^{k_j} · Π_{j=1}^m e^{−α(t_j − t_{j−1})} α^{k_j} · t_m^n / n!
= Π_{j=1}^m e^{−α(t_j − t_{j−1})} ( α(t_j − t_{j−1}) )^{k_j} / k_j! .
Therefore, (a) implies (b).
Assume now that (b) holds. Then we have
P_{N_t} = P(αt)
as well as
P[ ⋂_{j=1}^m {N_{t_j} − N_{t_{j−1}} = k_j} | N_{t_m} = n ]
= P[ ⋂_{j=1}^m {N_{t_j} − N_{t_{j−1}} = k_j} ] / P[N_{t_m} = n]
= Π_{j=1}^m P[N_{t_j} − N_{t_{j−1}} = k_j] / P[N_{t_m} = n]
= Π_{j=1}^m ( e^{−α(t_j − t_{j−1})} ( α(t_j − t_{j−1}) )^{k_j} / k_j! ) / ( e^{−αt_m} (αt_m)^n / n! )
= ( n! / Π_{j=1}^m k_j! ) Π_{j=1}^m ( (t_j − t_{j−1}) / t_m )^{k_j} .
Therefore, (b) implies (a). □
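The multinomial criterion can also be probed numerically. The following sketch is not part of the original text; it assumes NumPy and SciPy, restricts itself to m = 2 intervals, and uses illustrative parameter values: conditionally on N_{t_2} = n, the increment on (0, t_1] should be binomially distributed with parameters n and t_1/t_2.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)

alpha, t1, t2, n = 1.0, 1.0, 4.0, 6     # illustrative values; intervals (0,t1] and (t1,t2]
runs = 200_000

# simulate N_{t1} and N_{t2} from exponential interarrival times
W = rng.exponential(scale=1.0 / alpha, size=(runs, 50))
T = np.cumsum(W, axis=1)
N1 = (T <= t1).sum(axis=1)
N2 = (T <= t2).sum(axis=1)

# conditionally on N_{t2} = n, the first increment should be B(n, t1/t2) distributed
N1_given = N1[N2 == n]
for k in range(n + 1):
    empirical = (N1_given == k).mean()
    binomial = stats.binom.pmf(k, n, t1 / t2)
    print(k, round(float(empirical), 4), round(float(binomial), 4))
```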
Comparing the previous result with Lemmas 2.2.1 and 1.2.2 raises the question whether the Poisson process can also be characterized in terms of the claim arrival process or in terms of the claim interarrival process. An affirmative answer to this question will be given in Theorem 2.3.4 below.
While the previous result characterizes the Poisson process with parameter α in the class of all claim number processes satisfying P_{N_t} = P(αt) for all t ∈ (0, ∞), we shall see that there is also a strikingly simple characterization of the Poisson process in the class of all claim number processes having independent increments; see again Theorem 2.3.4 below.
Theorem 2.3.4 contains two further characterizations of the Poisson process: one in terms of the claim measure, and one in terms of martingales, which are defined as follows:
Let I be any subset of R_+ and consider a family {Z_i}_{i∈I} of random variables having finite expectations and an increasing family {F_i}_{i∈I} of sub-σ-algebras of F such that each Z_i is F_i-measurable. The family {F_i}_{i∈I} is said to be a filtration, and it is said to be the canonical filtration for {Z_i}_{i∈I} if it satisfies F_i = σ({Z_h}_{h∈I∩(−∞,i]}) for all i ∈ I. The family {Z_i}_{i∈I} is a
– submartingale for {F_i}_{i∈I} if it satisfies
∫_A Z_i dP ≤ ∫_A Z_j dP
for all i, j ∈ I such that i < j and for all A ∈ F_i, it is a
– supermartingale for {F_i}_{i∈I} if it satisfies
∫_A Z_i dP ≥ ∫_A Z_j dP
for all i, j ∈ I such that i < j and for all A ∈ F_i, and it is a
– martingale for {F_i}_{i∈I} if it satisfies
∫_A Z_i dP = ∫_A Z_j dP
for all i, j ∈ I such that i < j and for all A ∈ F_i.
Thus, a martingale is at the same time a submartingale and a supermartingale, and all random variables forming a martingale have the same expectation. Reference to the canonical filtration for {Z_i}_{i∈I} is usually omitted.
Let us now return to the claim number process {N_t}_{t∈R_+}. For the remainder of this section, let {F_t}_{t∈R_+} denote the canonical filtration for the claim number process.
The following result connects claim number processes having independent increments and finite expectations with a martingale property:
2.3.2 Theorem. Assume that the claim number process {N_t}_{t∈R_+} has independent increments and finite expectations. Then the centered claim number process {N_t − E[N_t]}_{t∈R_+} is a martingale.
Proof. Since constants are measurable with respect to any σ-algebra, the natural filtration for the claim number process coincides with the natural filtration for the centered claim number process. Consider s, t ∈ R_+ such that s < t.
(1) The σ-algebras F_s and σ(N_t − N_s) are independent:
For m ∈ N and s_0, s_1, ..., s_m, s_{m+1} ∈ R_+ such that 0 = s_0 < s_1 < ... < s_m = s < t = s_{m+1}, define
G_{s_1,...,s_m} := σ({N_{s_j}}_{j∈{1,...,m}}) = σ({N_{s_j} − N_{s_{j−1}}}_{j∈{1,...,m}}) .
By assumption, the increments {N_{s_j} − N_{s_{j−1}}}_{j∈{1,...,m+1}} are independent, and this implies that the σ-algebras G_{s_1,...,s_m} and σ(N_t − N_s) are independent.
The system of all such σ-algebras G_{s_1,...,s_m} is directed upwards by inclusion. Let E_s denote the union of these σ-algebras. Then E_s and σ(N_t − N_s) are independent. Moreover, E_s is an algebra, and it follows that σ(E_s) and σ(N_t − N_s) are independent. Since F_s = σ(E_s), this means that the σ-algebras F_s and σ(N_t − N_s) are independent.
(2) Consider now A ∈ F_s. Because of (1), we have
∫_A ( (N_t − E[N_t]) − (N_s − E[N_s]) ) dP = ∫_Ω χ_A ( (N_t − N_s) − E[N_t − N_s] ) dP
= ∫_Ω χ_A dP · ∫_Ω ( (N_t − N_s) − E[N_t − N_s] ) dP
= 0 ,
and hence
∫_A ( N_t − E[N_t] ) dP = ∫_A ( N_s − E[N_s] ) dP .
(3) It now follows from (2) that {N_t − E[N_t]}_{t∈R_+} is a martingale. □
As an immediate consequence of the previous result, we have the following:
2.3.3 Corollary. Assume that the claim number process {N_t}_{t∈R_+} is a Poisson process with parameter α. Then the centered claim number process {N_t − αt}_{t∈R_+} is a martingale.
We shall see that the previous result can be considerably improved.
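The martingale property of Corollary 2.3.3 lends itself to a quick numerical check. The sketch below is not part of the original text; it assumes NumPy, conditions on the value of N_s (which here stands in for the σ-algebra F_s, a simplification justified by the independence of the increments), and uses illustrative parameter values.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

alpha, s, t, runs = 2.0, 1.0, 3.0, 200_000   # illustrative values

W = rng.exponential(scale=1.0 / alpha, size=(runs, 40))
arrivals = np.cumsum(W, axis=1)
N_s = (arrivals <= s).sum(axis=1)
N_t = (arrivals <= t).sum(axis=1)

# martingale property: averaging N_t - alpha*t over the paths with N_s = k
# should reproduce k - alpha*s
for k in range(4):
    sel = N_s == k
    print(k, round(float((N_t[sel] - alpha * t).mean()), 3), "vs", k - alpha * s)
```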
We now turn to the main result of this section which provides characterizations of the Poisson process in terms of
– the claim interarrival process,
– the increments and expectations of the claim number process,
– the martingale property of a related process, and
– the claim measure.
With regard to the claim measure, we need the following definitions: Define
E := { A×(s, t] | s, t ∈ R_+, s ≤ t, A ∈ F_s }
and let
H := σ(E)
denote the σ-algebra generated by E in F⊗B((0, ∞)).
2.3.4 Theorem. Let α ∈ (0, ∞). Then the following are equivalent:
(a) The sequence of claim interarrival times {W_n}_{n∈N} is independent and satisfies P_{W_n} = Exp(α) for all n ∈ N.
(b) The claim number process {N_t}_{t∈R_+} is a Poisson process with parameter α.
(c) The claim number process {N_t}_{t∈R_+} has independent increments and satisfies E[N_t] = αt for all t ∈ R_+.
(d) The process {N_t − αt}_{t∈R_+} is a martingale.
(e) The claim measure μ satisfies μ|_H = (P ⊗ αλ)|_H.
Proof. We prove the assertion according to the following scheme:
(a) ⟹ (b) ⟹ (c) ⟹ (d) ⟹ (e) ⟹ (a).
Assume first that (a) holds. The basic idea of this part of the proof is to show that the self-similarity of the survival function of the exponential distribution on the interval (0, ∞) implies self-similarity of the claim arrival process in the sense that, for any s ∈ R_+, the claim arrival process describing the occurrence of claims in the interval (s, ∞) has the same properties as the claim arrival process describing the occurrence of claims in the interval (0, ∞) and is independent of N_s.
[Figure: Claim Arrival Process and Claim Number Process. A sample path t ↦ N_t(ω) with jumps at T_1(ω), ..., T_5(ω), and, below it, the shifted path t−s ↦ N_t(ω) − N_s(ω) with jumps at the shifted claim arrival times T^s_1(ω), T^s_2(ω), T^s_3(ω).]
(1) By assumption, the sequence {W_n}_{n∈N} is independent and satisfies
P_{W_n} = Exp(α)
for all n ∈ N. By Lemma 1.2.2, this yields P_{T_n} = Ga(α, n) for all n ∈ N, and it now follows from Lemma 2.2.1 that
P_{N_t} = P(αt)
holds for all t ∈ (0, ∞).
(2) Because of (1), we have
P[N_t = ∞] = 0
for all t ∈ R_+, and it now follows from Lemma 2.1.4 that the probability of explosion is equal to zero. Thus, without loss of generality, we may and do assume that N_t(ω) < ∞ holds for all t ∈ R_+ and all ω ∈ Ω, and this yields
Ω = Σ_{k=0}^∞ {N_t = k}
for all t ∈ R_+.
(3) For s ∈ R_+, define
T^s_0 := 0
and, for all n ∈ N,
T^s_n := Σ_{k=0}^∞ ( χ_{{N_s=k}} (T_{k+n} − s) ) = Σ_{k=0}^∞ ( χ_{{T_k≤s<T_{k+1}}} (T_{k+n} − s) ) .
Then the sequence {T^s_n}_{n∈N_0} satisfies T^s_0 = 0 and
T^s_{n−1} < T^s_n
for all n ∈ N. Therefore, {T^s_n}_{n∈N_0} is a claim arrival process. Let {W^s_n}_{n∈N} denote the claim interarrival process induced by {T^s_n}_{n∈N_0}.
(4) For each s ∈ R_+, the finite dimensional distributions of the claim interarrival processes {W^s_n}_{n∈N} and {W_n}_{n∈N} are identical; moreover, N_s and {W^s_n}_{n∈N} are independent:
Consider first t ∈ R_+ and k ∈ N_0. Then we have
{N_s = k} ∩ {t < W^s_1} = {N_s = k} ∩ {t < T^s_1}
= {N_s = k} ∩ {t < T_{k+1} − s}
= {T_k ≤ s < T_{k+1}} ∩ {s+t < T_{k+1}}
= {T_k ≤ s} ∩ {s+t < T_{k+1}}
= {T_k ≤ s} ∩ {s+t < T_k + W_{k+1}}
= {T_k ≤ s} ∩ {s − T_k + t < W_{k+1}} .
Using the transformation formula for integrals, independence of T_k and W_{k+1}, and Fubini's theorem, we obtain
P[{N_s = k} ∩ {t < W^s_1}] = P[{T_k ≤ s} ∩ {s − T_k + t < W_{k+1}}]
= ∫_Ω χ_{{T_k≤s}∩{s−T_k+t<W_{k+1}}}(ω) dP(ω)
= ∫_{R²} χ_{(−∞,s]}(r) χ_{(s−r+t,∞)}(u) dP_{W_{k+1},T_k}(u, r)
= ∫_{R²} χ_{(−∞,s]}(r) χ_{(s−r+t,∞)}(u) d(P_{W_{k+1}} ⊗ P_{T_k})(u, r)
= ∫_R χ_{(−∞,s]}(r) ( ∫_R χ_{(s−r+t,∞)}(u) dP_{W_{k+1}}(u) ) dP_{T_k}(r)
= ∫_{(−∞,s]} ( ∫_Ω χ_{{s−r+t<W_{k+1}}}(ω) dP(ω) ) dP_{T_k}(r)
= ∫_{(−∞,s]} P[s−r+t < W_{k+1}] dP_{T_k}(r) .
Using this formula twice together with the fact that the distribution of each W_n is Exp(α) and hence memoryless on R_+, we obtain
P[{N_s = k} ∩ {t < W^s_1}] = ∫_{(−∞,s]} P[s−r+t < W_{k+1}] dP_{T_k}(r)
= ∫_{(−∞,s]} P[s−r < W_{k+1}] · P[t < W_{k+1}] dP_{T_k}(r)
= ∫_{(−∞,s]} P[s−r < W_{k+1}] dP_{T_k}(r) · P[t < W_{k+1}]
= P[{N_s = k} ∩ {0 < W^s_1}] · P[t < W_{k+1}]
= P[N_s = k] · P[t < W_1] .
Therefore, we have
P[{N_s = k} ∩ {t < W^s_1}] = P[N_s = k] · P[t < W_1] .
Consider now n ∈ N, t_1, ..., t_n ∈ R_+, and k ∈ N_0. For each j ∈ {2, ..., n}, we have
{N_s = k} ∩ {t_j < W^s_j} = {N_s = k} ∩ {t_j < T^s_j − T^s_{j−1}}
= {N_s = k} ∩ {t_j < T_{k+j} − T_{k+j−1}}
= {N_s = k} ∩ {t_j < W_{k+j}} .
Since the sequence {W_n}_{n∈N} is independent and identically distributed, the previous identities yield
P[ {N_s = k} ∩ ⋂_{j=1}^n {t_j < W^s_j} ] = P[ ⋂_{j=1}^n ( {N_s = k} ∩ {t_j < W^s_j} ) ]
= P[ ( {N_s = k} ∩ {t_1 < W^s_1} ) ∩ ⋂_{j=2}^n ( {N_s = k} ∩ {t_j < W^s_j} ) ]
= P[ ( {N_s = k} ∩ {t_1 < W^s_1} ) ∩ ⋂_{j=2}^n ( {N_s = k} ∩ {t_j < W_{k+j}} ) ]
= P[ ( {N_s = k} ∩ {t_1 < W^s_1} ) ∩ ⋂_{j=2}^n {t_j < W_{k+j}} ]
= P[ ( {T_k ≤ s} ∩ {s − T_k + t_1 < W_{k+1}} ) ∩ ⋂_{j=2}^n {t_j < W_{k+j}} ]
= P[ {T_k ≤ s} ∩ {s − T_k + t_1 < W_{k+1}} ] · Π_{j=2}^n P[t_j < W_{k+j}]
= P[ {N_s = k} ∩ {t_1 < W^s_1} ] · Π_{j=2}^n P[t_j < W_{k+j}]
= P[N_s = k] · P[t_1 < W_1] · Π_{j=2}^n P[t_j < W_j]
= P[N_s = k] · Π_{j=1}^n P[t_j < W_j]
= P[N_s = k] · P[ ⋂_{j=1}^n {t_j < W_j} ] .
Therefore, we have
P[ {N_s = k} ∩ ⋂_{j=1}^n {t_j < W^s_j} ] = P[N_s = k] · P[ ⋂_{j=1}^n {t_j < W_j} ] .
Summation over k ∈ N_0 yields
P[ ⋂_{j=1}^n {t_j < W^s_j} ] = P[ ⋂_{j=1}^n {t_j < W_j} ] .
Inserting this identity into the previous one, we obtain
P[ {N_s = k} ∩ ⋂_{j=1}^n {t_j < W^s_j} ] = P[N_s = k] · P[ ⋂_{j=1}^n {t_j < W^s_j} ] .
The last two identities show that the finite dimensional distributions of the claim interarrival processes {W^s_n}_{n∈N} and {W_n}_{n∈N} are identical, and that N_s and {W^s_n}_{n∈N} are independent. In particular, the sequence {W^s_n}_{n∈N} is independent and satisfies P_{W^s_n} = Exp(α) for all n ∈ N.
(5) The identity P_{N_{s+h} − N_s} = P_{N_h} holds for all s, h ∈ R_+:
For all n ∈ N_0, we have
{N_{s+h} − N_s = n} = Σ_{k=0}^∞ ( {N_s = k} ∩ {N_{s+h} = k+n} )
= Σ_{k=0}^∞ ( {N_s = k} ∩ {T_{k+n} ≤ s+h < T_{k+n+1}} )
= Σ_{k=0}^∞ ( {N_s = k} ∩ {T^s_n ≤ h < T^s_{n+1}} )
= {T^s_n ≤ h < T^s_{n+1}} .
Because of (4), the finite dimensional distributions of the claim interarrival processes {W^s_n}_{n∈N} and {W_n}_{n∈N} are identical, and it follows that the finite dimensional distributions of the claim arrival processes {T^s_n}_{n∈N_0} and {T_n}_{n∈N_0} are identical as well. This yields
P[N_{s+h} − N_s = n] = P[T^s_n ≤ h < T^s_{n+1}] = P[T_n ≤ h < T_{n+1}] = P[N_h = n]
for all n ∈ N_0.
(6) The claim number process {N_t}_{t∈R_+} has independent increments:
Consider first s ∈ R_+. Because of (4), N_s and {W^s_n}_{n∈N} are independent and the finite dimensional distributions of the claim interarrival processes {W^s_n}_{n∈N} and {W_n}_{n∈N} are identical; consequently, N_s and {T^s_n}_{n∈N_0} are independent and, as noted before, the finite dimensional distributions of the claim arrival processes {T^s_n}_{n∈N_0} and {T_n}_{n∈N_0} are identical as well.
Consider next s ∈ R_+, m ∈ N, h_1, ..., h_m ∈ R_+, and k, k_1, ..., k_m ∈ N_0. Then we have
P[ {N_s = k} ∩ ⋂_{j=1}^m {N_{s+h_j} − N_s = k_j} ] = P[ {N_s = k} ∩ ⋂_{j=1}^m {T^s_{k_j} ≤ h_j < T^s_{k_j+1}} ]
= P[N_s = k] · P[ ⋂_{j=1}^m {T^s_{k_j} ≤ h_j < T^s_{k_j+1}} ]
= P[N_s = k] · P[ ⋂_{j=1}^m {T_{k_j} ≤ h_j < T_{k_j+1}} ]
= P[N_s = k] · P[ ⋂_{j=1}^m {N_{h_j} = k_j} ] .
We now claim that, for all m ∈ N, the identity
P[ ⋂_{j=1}^m {N_{t_j} − N_{t_{j−1}} = n_j} ] = Π_{j=1}^m P[N_{t_j} − N_{t_{j−1}} = n_j]
holds for all t_0, t_1, ..., t_m ∈ R_+ such that 0 = t_0 < t_1 < ... < t_m, and n_1, ..., n_m ∈ N_0. This follows by induction:
The assertion is obvious for m = 1.
Assume now that it holds for some m ∈ N and consider t_0, t_1, ..., t_m, t_{m+1} ∈ R_+ such that 0 = t_0 < t_1 < ... < t_m < t_{m+1}, and n_1, ..., n_m, n_{m+1} ∈ N_0. For j ∈ {0, 1, ..., m}, define h_j := t_{j+1} − t_1. Then we have 0 = h_0 < h_1 < ... < h_m and hence, by assumption and because of (5),
P[ ⋂_{j=1}^m {N_{h_j} − N_{h_{j−1}} = n_{j+1}} ] = Π_{j=1}^m P[N_{h_j} − N_{h_{j−1}} = n_{j+1}]
= Π_{j=1}^m P[N_{h_j − h_{j−1}} = n_{j+1}]
= Π_{j=1}^m P[N_{t_{j+1} − t_j} = n_{j+1}]
= Π_{j=1}^m P[N_{t_{j+1}} − N_{t_j} = n_{j+1}]
= Π_{j=2}^{m+1} P[N_{t_j} − N_{t_{j−1}} = n_j] .
Using the identity established before with s := t_1, this yields
P[ ⋂_{j=1}^{m+1} {N_{t_j} − N_{t_{j−1}} = n_j} ] = P[ ⋂_{j=1}^{m+1} {N_{t_j} = Σ_{i=1}^j n_i} ]
= P[ {N_{t_1} = n_1} ∩ ⋂_{j=2}^{m+1} {N_{t_j} = Σ_{i=1}^j n_i} ]
= P[ {N_{t_1} = n_1} ∩ ⋂_{j=2}^{m+1} {N_{t_j} − N_{t_1} = Σ_{i=2}^j n_i} ]
= P[ {N_{t_1} = n_1} ∩ ⋂_{j=1}^m {N_{t_{j+1}} − N_{t_1} = Σ_{i=2}^{j+1} n_i} ]
= P[ {N_{t_1} = n_1} ∩ ⋂_{j=1}^m {N_{t_1+h_j} − N_{t_1} = Σ_{i=2}^{j+1} n_i} ]
= P[N_{t_1} = n_1] · P[ ⋂_{j=1}^m {N_{h_j} = Σ_{i=2}^{j+1} n_i} ]
= P[N_{t_1} = n_1] · P[ ⋂_{j=1}^m {N_{h_j} − N_{h_{j−1}} = n_{j+1}} ]
= P[N_{t_1} = n_1] · Π_{j=2}^{m+1} P[N_{t_j} − N_{t_{j−1}} = n_j]
= Π_{j=1}^{m+1} P[N_{t_j} − N_{t_{j−1}} = n_j] ,
which is the assertion for m+1. This proves our claim, and it follows that the claim number process {N_t}_{t∈R_+} has independent increments.
(7) It now follows from (6), (5), and (1) that the claim number process {N_t}_{t∈R_+} is a Poisson process with parameter α. Therefore, (a) implies (b).
Assume now that (b) holds. Since N
t

tR
+
is a Poisson process with parameter ,
it is clear that N
t

tR
+
has independent increments and satises E[N
t
] = t for
all t R
+
. Therefore, (b) implies (c).
6
0 T
1
() T
2
() T
3
() T
4
() T
5
()
n
0
-
1
2
3
4
5
t
"
"
"
"
"
"
"
"
"
"
"
"
"
"
"
"
"
"
"
"
"
"
"
"
"
"
"
"
"
t

N
t
()
Claim Arrival Process and Claim Number Process
Assume next that (c) holds. Since N
t

tR
+
has independent increments and
satises E[N
t
] = t for all t R
+
, it follows from Theorem 2.3.2 that N
t
t
tR
+
is a martingale. Therefore, (c) implies (d).
34 Chapter 2 The Claim Number Process
Assume now that (d) holds. For all s, t R
+
such that s t and for all A T
s
,
Lemma 2.1.6 together with the martingale property of N
r
r
rR
+
yields
[A(s, t]] =
_
A
(N
t
N
s
) dP
=
_
A
(ts) dP
= (ts) P[A]
= [(s, t]] P[A]
= (P )[A(s, t]] .
Since A(s, t] is a typical set of c, this gives
[
c
= (P )[
c
.
Since (0, ) =

n=1
(n1, n], the set (0, ) is the union of countably
many sets in c such that P is nite on each of these sets. This means that
the measure [
c
= (P )[
c
is nite. Furthermore, since the family T
s

sR
+
is increasing, it is easy to see that c is stable under intersection. Since (c) = H,
it now follows from the uniqueness theorem for nite measures that
[
1
= (P )[
1
.
Therefore, (d) implies (e)
Assume nally that (e) holds. In order to determine the nite dimensional distri-
butions of the claim interarrival process W
n

nN
, we have to study the probability
of events having the form
A t < W
n

for n N, t R
+
, and A (W
k

k1,...,n1
) = (T
k

k0,1,...,n1
).
(1) For n N
0
, dene
c
n
:=
_
n

k=1
t
k
< T
k

t
1
, . . . , t
n
R
+
_
.
Since c
n
is stable under intersection and satises (c
n
) = (T
k

k0,1,...,n
), it is
sucient to study the probability of events having the form
A t < W
n

for n N, t R
+
, and A c
n1
.
(2) For n N, t R
+
, and A c
n1
, dene
H
n,t
(A) :=
_
(, u) [ A, T
n1
()+t < u T
n
()
_
.
2.3 A Characterization of the Poisson Process 35
Then we have
U
1
n
(H
n,t
(A)) = A T
n1
+t < T
n

= A t < W
n
,
as well as
U
1
k
(H
n,t
(A)) =
for all k N such that k ,= n. This gives
A t < W
n
=

k=1
U
1
k
(H
n,t
(A)) .
Now the problem is to show that H
n,t
(A) H; if this is true, then we can apply the
assumption on the claim measure in order to compute P[A t < W
n
].
(3) The relation H
n,t
(A) H holds for all n N, t R
+
, and A c
n1
:
First, for all k, m N
0
such that k m and all p, q, s, t R
+
such that s+t < p < q,
we have
_
s < T
k
T
m
+t p
_
(p, q] =
_
N
s
< km N
pt

_
(p, q] ,
which is a set in c and hence in H.
Next, for all k, m N such that k m and all s, t R
+
, dene
H
k,m;s,t
:= (, u) [ s < T
k
(), T
m
()+t < u .
Then we have
H
k,m;s,t
=
_
p,qQ, s+t<p<q
_
s < T
k
T
m
+t p
_
(p, q] ,
and hence H
k,m;s,t
H.
Finally, since
A =
n1

k=1
t
k
< T
k

for suitable t
1
, . . . , t
n1
R
+
, we have
H
n,t
(A) = H
n,t
_
n1

k=1
t
k
< T
k

_
=
_
(, u)


n1

k=1
t
k
< T
k
, T
n1
()+t < u T
n
()
_
=
n1

k=1
(, u) [ t
k
< T
k
(), T
n1
()+t < u (, u) [ T
n
() < u
=
n1

k=1
H
k,n1;t
k
,t
H
n,n;0,0
,
and hence H
n,t
(A) H.
36 Chapter 2 The Claim Number Process
(4) Consider n N, t R
+
, and A c
n1
. Because of (2) and (3), the assumption
on the claim measure yields
P[A t < W
n
] = P
_

k=1
U
1
k
(H
n,t
(A))
_
=

k=1
P[U
1
k
(H
n,t
(A))]
=

k=1
P
U
k
[H
n,t
(A)]
= [H
n,t
(A)]
= (P )[H
n,t
(A)] .
Thus, using the fact that the Lebesgue measure is translation invariant and vanishes
on singletons, we have
_
R

(T
n1
()+t,T
n
()]
(s) d(s) =
_
R

[t,W
n
())
(s) d(s)
hence
1

P[A t < W
n
] = (P )[H
n,t
(A)]
=
_
R

Hn,t(A)
(, s) d(P )(, s)
=
_
R

A
()
(T
n1
()+t,T
n
()]
(s) d(P )(, s)
=
_

A
()
__
R

(T
n1
()+t,T
n
()]
(s) d(s)
_
dP()
=
_

A
()
__
R

[t,W
n
())
(s) d(s)
_
dP()
=
_
R

A
()
[t,Wn())
(s) d(P )(, s)
=
_
R

[t,)
(s)
As<W
n

() d(P )(, s)
=
_
R

[t,)
(s)
__

As<W
n

dP()
_
d(s)
=
_
[t,)
P[A s < W
n
] d(s) ,
and thus
P[A t < W
n
] =
_
[t,)
P[A s < W
n
] d(s) .
2.3 A Characterization of the Poisson Process 37
(5) Consider n N and A c
n1
. Then the function g : R R, given by
g(t) :=
_
0 if t (, 0)
P[A t < W
n
] if t R
+
,
is bounded; moreover, g is monotone decreasing on R
+
and hence almost surely
continuous. This implies that g is Riemann integrable and satises
_
[0,t)
g(s) d(s) =
_
t
0
g(s) ds
for all t R
+
. Because of (4), the restriction of g to R
+
satises
g(t) = P[A t < W
n
]
=
_
[t,)
P[A s < W
n
] d(s)
=
_
[t,)
g(s) d(s) ,
and thus
g(t) g(0) =
_
[0,t)
g(s) d(s)
=
_
t
0
g(s) ds .
This implies that the restriction of g to R
+
is dierentiable and satises the dier-
ential equation
g
t
(t) = g(t)
with initial condition g(0) = P[A].
For all t R
+
, this yields
g(t) = P[A] e
t
,
and thus
P[A t < W
n
] = g(t)
= P[A] e
t
.
(6) Consider n N. Since c
n1
, the previous identity yields
P[t < W
n
] = e
t
for all t R
+
. Inserting this identity into the previous one, we obtain
P[A t < W
n
] = P[A] P[t < W
n
]
for all t R
+
and A c
n1
. This shows that (W
1
, . . . , W
n1
) and (W
n
) are
independent.
38 Chapter 2 The Claim Number Process
(7) Because of (6), it follows by induction that the sequence W
n

nN
is independent
and satises
P
W
n
= Exp()
for all n N. Therefore, (e) implies (a). 2
To conclude this section, let us consider the prediction problem for claim number
processes having independent increments and nite second moments:
2.3.5 Theorem (Prediction). Assume that the claim number process N
t

tR
+
has independent increments and nite second moments. Then the inequality
E[(N
t
(N
s
+E[N
t
N
s
]))
2
] E[(N
t
Z)
2
]
holds for all s, t R
+
such that s t and for every random variable Z satisfying
E[Z
2
] < and (Z) T
s
.
Proof. Dene Z
0
:= N
s
+ E[N
t
N
s
]. By assumption, the pair N
t
Z
0
, Z
0
Z
is independent, and this yields
E[(N
t
Z
0
)(Z
0
Z)] = E[N
t
Z
0
] E[Z
0
Z]
= E[N
t
(N
s
+E[N
t
N
s
])] E[Z
0
Z]
= 0 .
Therefore, we have
E[(N
t
Z)
2
] = E[((N
t
Z
0
) + (Z
0
Z))
2
]
= E[(N
t
Z
0
)
2
] + E[(Z
0
Z)
2
] .
The last expression attains its minimum for Z := Z
0
. 2
Thus, for a claim number process having independent increments and nite second
moments, the best prediction under expected squared error loss of N
t
by a random
variable depending only on the history of the claim number process up to time s is
given by N
s
+ E[N
t
N
s
].
As an immediate consequence of the previous result, we have the following:
2.3.6 Corollary (Prediction). Assume that the claim number process N
t

tR
+
is a Poisson process with parameter . Then the inequality
E[(N
t
(N
s
+(ts)))
2
] E[(N
t
Z)
2
]
holds for all s, t R
+
such that s t and for every random variable Z satisfying
E[Z
2
] < and (Z) T
s
.
As in the case of Corollary 2.3.3, the previous result can be considerably improved:
2.3 A Characterization of the Poisson Process 39
2.3.7 Theorem (Prediction). Let (0, ). Then the following are equivalent:
(a) The claim number process N
t

tR
+
has nite second moments and the inequal-
ity
E[(N
t
(N
s
+(ts)))
2
] E[(N
t
Z)
2
]
holds for all s, t R
+
such that s t and for every random variable Z satisfying
E[Z
2
] < and (Z) T
s
.
(b) The claim number process N
t

tR
+
is a Poisson process with parameter .
6
0 T
1
() T
2
() s T
3
() T
4
() T
5
()
n
0
-
1
2
3
4
5
t
"
"
"
"
"
"
"
"
"
"
"
"
"
"
"
"
""
N
s
() + (ts)

N
t
()
Claim Arrival Process and Claim Number Process
Proof. Assume that (a) holds and consider s, t R
+
. For A T
s
satisfying
P[A] > 0, dene Z := N
s
+ (ts) + c
A
. Then we have (Z) T
s
, and hence
E[(N
t
(N
s
+(ts)))
2
]
E[(N
t
Z)
2
]
= E[(N
t
(N
s
+(ts)+c
A
))
2
]
= E[(N
t
(N
s
+(ts))c
A
)
2
]
= E[(N
t
(N
s
+(ts)))
2
] 2cE[(N
t
(N
s
+(ts)))
A
] + c
2
P[A] .
Letting
c :=
1
P[A]
E[(N
t
(N
s
+(ts)))
A
] ,
we obtain
E[(N
t
(N
s
+(ts)))
2
]
E[(N
t
(N
s
+(ts)))
2
]
1
P[A]
_
E[(N
t
(N
s
+(ts)))
A
]
_
2
,
40 Chapter 2 The Claim Number Process
hence
E[(N
t
(N
s
+(ts)))
A
] = 0 ,
and thus
_
A
_
_
N
t
t
_

_
N
s
s
_
_
dP =
_
A
_
N
t
(N
s
+(ts))
_
dP
= 0 .
Of course, the previous identity is also valid for A T
s
satisfying P[A] = 0.
This shows that the process N
t
t
tR
+
is a martingale, and it now follows from
Theorem 2.3.4 that the claim number process N
t

tR
+
is a Poisson process with
parameter . Therefore, (a) implies (b).
The converse implication is obvious from Corollary 2.3.6. 2
Problems
2.3.A Discrete Time Model: Adopt the denitions given in this section to the dis-
crete time model. The claim number process N
l

lN
0
is a binomial process
or Bernoulli process with parameter (0, 1) if it has stationary independent
increments such that P
N
l
= B(l, ) holds for all l N.
2.3.B Discrete Time Model: Assume that the claim number process N
l

lN
0
has in-
dependent increments. Then the centered claim number process N
l
E[N
l
]
lN
0
is a martingale.
2.3.C Discrete Time Model: Let (0, 1). Then the following are equivalent:
(a) The claim number process N
l

lN
0
satises
P
N
l
= B(l, )
for all l N as well as
P
_
_
m

j=1
N
j
N
j1
= k
j

N
m
= n
_
_
=
_
m
n
_
1
for all mNand for all nN
0
and k
1
, . . . , k
m
0, 1 such that

m
j=1
k
j
= n.
(b) The claim number process N
l

lN
0
satises
P
N
l
= B(l, )
for all l N as well as
P
_
_
m

j=1
N
l
j
N
l
j1
= k
j

N
l
m
= n
_
_
=
m

j=1
_
l
j
l
j1
k
j
_

_
l
m
n
_
1
for all m N and l
0
, l
1
, . . . , l
m
N
0
such that 0 = l
0
< l
1
< . . . < l
m
and for all n N
0
and k
1
, . . . , k
m
N
0
such that k
j
l
j
l
j1
for all
j 1, . . . , m and

m
j=1
k
j
= n.
(c) The claim number process N
l

lN
0
is a binomial process with parameter .
2.4 Remarks 41
2.3.D Discrete Time Model: Let (0, 1). Then the following are equivalent:
(a) The sequence of claim interarrival times W
n

nN
is independent and satis-
es P
W
n
= Geo() for all n N.
(b) The claim number process N
l

lN
0
is a binomial process with parameter .
(c) The claim number process N
l

lN
0
has independent increments and satis-
es E[N
l
] = l for all l N
0
.
(d) The process N
l
l
lN
0
is a martingale.
Hint: Prove that (a) (b) (c) (d).
2.3.E Discrete Time Model: Assume that the claim number process N
l

lN
0
has
independent increments. Then the inequality
E[(N
m
(N
l
+E[N
m
N
l
]))
2
] E[(N
l
Z)
2
]
holds for all l, m N
0
such that l m and for every random variable Z satisfying
(Z) T
l
.
2.3.F Discrete Time Model: Let (0, 1). Then the following are equivalent:
(a) The inequality
E[(N
m
(N
l
+(ml)))
2
] E[(N
m
Z)
2
]
holds for all l, m N
0
such that l m and for every random variable Z
satisfying (Z) T
l
.
(b) The claim number process N
l

lN
0
is a binomial process with parameter .
2.3.G Multiple Life Insurance: Adopt the denitions given in this section to multiple
life insurance. Study stationarity and independence of the increments of the
process N
t

tR
+
as well as the martingale property of N
t
E[N
t
]
tR
+
.
2.3.H Single Life Insurance:
(a) The process N
t

tR
+
does not have stationary increments.
(b) The process N
t

tR
+
has independent increments if and only if the distri-
bution of T is degenerate.
(c) The process N
t
E[N
t
]
tR
+
is a martingale if and only if the distribution
of T is degenerate.
2.4 Remarks
The denition of the increments of the claim number process suggests to dene, for
each and all B B(R),
N()(B) :=

n=1

T
n
B
() .
Then, for each , the map N() : B(R) N
0
is a measure, which is
nite whenever the probability of explosion is equal to zero. This point of view leads
to the theory of point processes; see Kerstan, Matthes, and Mecke [1974], Grandell
42 Chapter 2 The Claim Number Process
[1977], Neveu [1977], Matthes, Kerstan, and Mecke [1978], Cox and Isham [1980],
Bremaud [1981], Kallenberg [1983], Karr [1991], Konig and Schmidt [1992], Kingman
[1993], and Reiss [1993]; see also Mathar and Pfeifer [1990] for an introduction into
the subject.
The implication (a) = (b) of Theorem 2.3.4 can be used to show that the Poisson
process does exist: Indeed, Kolmogorovs existence theorem asserts that, for any
sequence Q
n

nN
of probability measures B(R) [0, 1], there exists a probability
space (, T, P) and a sequence W
n

nN
of random variables R such that the
sequence W
n

nN
is independent and satises P
Wn
= Q
n
for all n N. Letting
T
n
:=

n
k=1
W
k
for all n N
0
and N
t
:=

n=1

T
n
t
for all t R
+
, we obtain a
claim arrival process T
n

nN
0
and a claim number process N
t

tR
+
. In particular,
if Q
n
= Exp() holds for all n N, then it follows from Theorem 2.3.4 that
N
t

tR
+
is a Poisson process with parameter . The implication (d) = (b) of
Theorem 2.3.4 is due to Watanabe [1964]. The proof of the implications (d) = (e)
and (e) = (a) of Theorem 2.3.4 follows Letta [1984].
Theorems 2.3.2 and 2.3.4 are typical examples for the presence of martingales in
canonical situations in risk theory; see also Chapter 7 below.
In the case where the claim interarrival times are independent and identically (but
not necessarily exponentially) distributed, the claim arrival process or, equivalently,
the claim number process is said to be a renewal process; see e. g. Gut [1988],
Alsmeyer [1991], Grandell [1991], and Resnick [1992]. This case will be considered,
to a limited extent, in Chapter 7 below.
The case where the claim interarrival times are independent and exponentially (but
not necessarily identically) distributed will be studied in Section 3.4 below.
We shall return to the Poisson process at various places in this book: The Poisson
process occurs as a very special case in the rather analytical theory of regular claim
number processes satisfying the ChapmanKolmogorov equations, which will be
developed in Chapter 3, and it also occurs as a degenerate case in the class of
mixed Poisson processes, which will be studied in Chapter 4. Moreover, thinning,
decomposition, and superposition of Poisson processes, which are important with
regard to reinsurance, will be discussed in Chapters 5 and 6.
Chapter 3
The Claim Number Process as a
Markov Process
The characterizations of the Poisson process given in the previous chapter show that
the Poisson process is a very particular claim number process. In practical situations,
however, the increments of the claim number process may fail to be independent or
fail to be stationary or fail to be Poisson distributed, and in each of these cases the
Poisson process is not appropriate as a model. The failure of the Poisson process
raises the need of studying larger classes of claim number processes.
The present chapter provides a systematic discussion of claim number processes
whose transition probabilities satisfy the ChapmanKolmogorov equations and can
be computed from a sequence of intensities. The intensities are functions of time, and
special attention will be given to the cases where they are all identical or constant.
We rst introduce several properties which a claim number process may possess,
which are all related to its transition probabilities, and which are all fullled by the
Poisson process (Section 3.1). We next give a characterization of regularity of claim
number processes satisfying the ChapmanKolmogorov equations (Section 3.2). Our
main results characterize claim number processes which are regular Markov processes
with intensities which are all identical (Section 3.3) or all constant (Section 3.4).
Combining these results we obtain another characterization of the Poisson process
(Section 3.5). We also discuss a claim number process with contagion (Section 3.6).
3.1 The Model
Throughout this chapter, let N
t

tR
+
be a claim number process, let T
n

nN
0
be
the claim arrival process induced by the claim number process, and let W
n

nN
be
the claim interarrival process induced by the claim arrival process.
In the present section we introduce several properties which a claim number process
may possess and which are all fullled by the Poisson process. We consider two
44 Chapter 3 The Claim Number Process as a Markov Process
lines of extending the notion of a Poisson process: The rst one is based on the
observation that, by denition, every Poisson process has independent increments,
and this leads to the more general notion of a Markov claim number process and
to the even more general one of a claim number process satisfying the Chapman
Kolmogorov equations. The second one, which has a strongly analytical avour
and is quite dierent from the rst, is the notion of a regular claim number process.
Regular claim number processes satisfying the ChapmanKolmogorov equations will
provide the general framework for the discussion of various classes of claim number
processes and for another characterization of the Poisson process.
The claim number process N
t

tR
+
is a Markov claim number process, or a Markov
process for short, if the identity
P
_
N
t
m+1
= n
m+1

j=1
N
t
j
= n
j

_
= P[N
t
m+1
= n
m+1
[N
tm
= n
m
]
holds for all m N, t
1
, . . . , t
m
, t
m+1
(0, ), and n
1
, . . . , n
m
, n
m+1
N
0
such
that t
1
< . . . < t
m
< t
m+1
and P[

m
j=1
N
t
j
= n
j
] > 0; the conditions imply that
n
1
. . . n
m
. Moreover, if the claim number process is a Markov process, then
the previous identity remains valid if t
1
= 0 or n
j
= for some j 1, . . . , m.
3.1.1 Theorem. If the claim number process has independent increments, then it
is a Markov process.
Proof. Consider m N, t
1
, . . . , t
m
, t
m+1
(0, ), and n
1
, . . . , n
m
, n
m+1
N
0
such that t
1
< . . . < t
m
< t
m+1
and P[

m
j=1
N
t
j
= n
j
] > 0. Dene t
0
:= 0 and
n
0
:= 0. Since P[N
0
= 0] = 1, we have
P
_
N
t
m+1
= n
m+1

j=1
N
t
j
= n
j

_
=
P
_
m+1

j=1
N
t
j
= n
j

_
P
_
m

j=1
N
t
j
= n
j

_
=
P
_
m+1

j=1
N
t
j
N
t
j1
= n
j
n
j1

_
P
_
m

j=1
N
t
j
N
t
j1
= n
j
n
j1

_
=
m+1

j=1
P[N
t
j
N
t
j1
= n
j
n
j1
]
m

j=1
P[N
t
j
N
t
j1
= n
j
n
j1
]
= P[N
t
m+1
N
tm
= n
m+1
n
m
]
3.1 The Model 45
as well as
P[N
t
m+1
= n
m+1
[N
t
m
= n
m
] = P[N
t
m+1
N
t
m
= n
m+1
n
m
] ,
and thus
P
_
N
t
m+1
= n
m+1

j=1
N
t
j
= n
j

_
= P[N
t
m+1
= n
m+1
[N
t
m
= n
m
] .
Therefore, N
t

tR
+
is a Markov process. 2
3.1.2 Corollary. If the claim number process is a Poisson process, then it is a
Markov process.
We shall see later that the claim number process may be a Markov process without
being a Poisson process.
In the denition of a Markov claim number process we have already encountered
the problem of conditional probabilities with respect to null sets. We now introduce
some concepts which will allow us to avoid conditional probabilities with respect to
null sets:
A pair (k, r) N
0
R
+
is admissible if either (k, r) = (0, 0) or (k, r) N
0
(0, ).
Let / denote the set consisting of all (k, n, r, t) N
0
N
0
R
+
R
+
such that (k, r)
is admissible, k n, and r t. A map
p : / [0, 1]
is a transition rule for the claim number process N
t

tR
+
if it satises

n=k
p(k, n, r, t) 1
for each admissible pair (k, r) and all t [r, ) as well as
p(k, n, r, t) = P[N
t
= n[N
r
= k]
for all (k, n, r, t) / such that P[N
r
= k] > 0. It is easy to see that a transition
rule always exists but need not be unique. However, all subsequent denitions and
results involving transition rules will turn out to be independent of the particular
choice of the transition rule.
Comment: The inequality occurring in the denition of a transition rule admits
strictly positive probability for a jump to innity in a nite time interval. For
46 Chapter 3 The Claim Number Process as a Markov Process
example, for each admissible pair (k, r) and all t [r, ) such that P[N
r
= k] > 0,
we have
P[N
t
= [N
r
= k] = 1

n=k
p(k, n, r, t) .
Also, since every path of a claim number process is increasing in time, there is no
return from innity. In particular, for all r (0, ) and t [r, ) such that
P[N
r
= ] > 0, we have
P[N
t
= [N
r
= ] = 1 .
These observations show that, as in the denition of Markov claim number processes,
innite claim numbers can be disregarded in the denitions of admissible pairs and
transition rules.
For a transition rule p : / [0, 1] and (k, n, r, t) /, dene
p
k,n
(r, t) := p(k, n, r, t) .
The p
k,n
(r, t) are called the transition probabilities of the claim number process
N
t

tR
+
with respect to the transition rule p. Obviously, the identity
p
n,n
(t, t) = 1
holds for each admissible pair (n, t) satisfying P[N
t
= n] > 0.
The claim number process N
t

tR
+
satises the ChapmanKolmogorov equations
if there exists a transition rule p such that the identity
p
k,n
(r, t) =
n

m=k
p
k,m
(r, s) p
m,n
(s, t)
holds for all (k, n, r, t) / and s [r, t] such that P[N
r
= k] > 0. The validity of
the ChapmanKolmogorov equations is independent of the particular choice of the
transition rule: Indeed, for mk, . . . , n such that P[N
s
= mN
r
= k] > 0,
we have P[N
s
= m] > 0, and thus
p
k,m
(r, s) p
m,n
(s, t) = P[N
t
= n[N
s
= m] P[N
s
= m[N
r
= k] ;
also, for mk, . . . , n such that P[N
s
= mN
r
= k] = 0, we have p
k,m
(r, s) =
P[N
s
= m[N
r
= k] = 0, and thus
p
k,m
(r, s) p
m,n
(s, t) = 0 ,
whatever the value of p
m,n
(s, t) was dened to be.
3.1.3 Theorem. If the claim number process is a Markov process, then it satises
the ChapmanKolmogorov equations.
3.1 The Model 47
Proof. Consider (k, n, r, t) / and s [r, t] such that P[N
r
= k] > 0. Then
we have
p
k,n
(r, t) = P[N
t
= n[N
r
= k]
=
n

m=k
P[N
t
= nN
s
= m[N
r
= k]
=
n

m=k
_
P[N
t
= n[N
s
= mN
r
= k] P[N
s
= m[N
r
= k]
_
=
n

m=k
_
P[N
t
= n[N
s
= m] P[N
s
= m[N
r
= k]
_
=
n

m=k
p
k,m
(r, s) p
m,n
(s, t) ,
where the second and the third sum are to be taken only over those m k, . . . , n
for which P[N
s
= mN
r
= k] > 0. 2
3.1.4 Corollary. If the claim number process has independent increments, then it
satises the ChapmanKolmogorov equations.
3.1.5 Corollary. If the claim number process is a Poisson process, then it satises
the ChapmanKolmogorov equations.
The claim number process N
t

tR
+
is homogeneous if there exists a transition rule
p such that the identity
p
n,n+k
(s, s+h) = p
n,n+k
(t, t+h)
holds for all n, k N
0
and s, t, h R
+
such that (n, s) and (n, t) are admissible and
satisfy P[N
s
= n] > 0 and P[N
t
= n] > 0. Again, homogeneity is independent
of the particular choice of the transition rule.
3.1.6 Theorem. If the claim number process has stationary independent incre-
ments, then it is a homogeneous Markov process.
Proof. By Theorem 3.1.1, the claim number process is a Markov process.
To prove homogeneity, consider k N
0
and h R
+
and an admissible pair (n, t)
satisfying P[N
t
= n] > 0. Then we have
p
n,n+k
(t, t+h) = P[N
t+h
= n+k[N
t
= n]
= P[N
t+h
N
t
= k[N
t
N
0
= n]
= P[N
t+h
N
t
= k]
= P[N
h
N
0
= k]
= P[N
h
= k] .
Therefore, N
t

tR
+
is homogeneous. 2
48 Chapter 3 The Claim Number Process as a Markov Process
3.1.7 Corollary. If the claim number process is a Poisson process, then it is a
homogeneous Markov process.
The relations between the dierent classes of claim number processes considered so
far are presented in the following table:
ChapmanKolmogorov
Markov
Independent Increments Homogeneous Markov
Stationary
Independent Increments
Poisson
Claim Number Processes
We now turn to another property which a claim number process may possess:
The claim number process N
t

tR
+
is regular if there exists a transition rule p
and a sequence
n

nN
of continuous functions R
+
(0, ) such that, for each
admissible pair (n, t),
(i)
P[N
t
= n] > 0 ,
(ii) the function R
+
[0, 1] : h p
n,n
(t, t+h) is continuous,
(iii)
lim
h0
1
h
_
1 p
n,n
(t, t+h)
_
=
n+1
(t)
= lim
h0
1
h
p
n,n+1
(t, t+h) .
3.1 The Model 49
In this case,
n

nN
is said to be the sequence of intensities of the claim number
process. Because of (i), regularity is independent of the particular choice of the
transition rule.
Comment:
Condition (i) means that at any time t (0, ) every nite number of claims is
attained with strictly positive probability.
Condition (ii) means that, conditionally on the event N
t
= n, the probability of
no jumps in a nite time interval varies smoothly with the length of the interval.
Condition (iii) means that, conditionally on the event N
t
= n, the tendency for
a jump of any height is, in an innitesimal time interval, equal to the tendency
for a jump of height one.
3.1.8 Theorem. If the claim number process is a Poisson process with para-
meter , then it is a homogeneous regular Markov process with intensities
n

nN
satisfying
n
(t) = for all nN and t R
+
.
Proof. By Corollary 3.1.7, the claim number process is a homogeneous Markov
process.
To prove the assertion on regularity, consider an admissible pair (n, t).
First, since
P[N
t
= n] = e
t
(t)
n
n!
,
we have P[N
t
= n] > 0, which proves (i).
Second, since
p
n,n
(t, t+h) = e
h
,
the function h p
n,n
(t, t+h) is continuous, which proves (ii).
Finally, we have
lim
h0
1
h
_
1 p
n,n
(t, t+h)
_
= lim
h0
1
h
_
1 e
h
_
=
as well as
lim
h0
1
h
p
n,n+1
(t, t+h) = lim
h0
1
h
e
h
h
= .
This proves (iii).
Therefore, N
t

tR
+
is regular with intensities
n

nN
satisfying
n
(t) = for all
n N and t R
+
. 2
The previous result shows that the properties introduced in this section are all
fullled by the Poisson process.
50 Chapter 3 The Claim Number Process as a Markov Process
Problems
3.1.A Discrete Time Model: Adopt the denitions given in this section, as far as
this is reasonable, to the discrete time model.
3.1.B Discrete Time Model: If the claim number process has independent incre-
ments, then it is a Markov process.
3.1.C Discrete Time Model: If the claim number process is a binomial process, then
it is a homogeneous Markov process.
3.1.D Multiple Life Insurance: Adopt the denitions given in the section to mul-
tiple life insurance. Study the Markov property and regularity of the process
N
t

tR
+
.
3.1.E Single Life Insurance: The process N
t

tR
+
is a Markov process.
3.1.F Single Life Insurance: Assume that P[T > t] (0, 1) holds for all t (0, ).
Then every transition rule p satises p
0,0
(r, t) = P[T > t]/P[T > r] as well
as p
0,1
(r, t) = 1 p
0,0
(r, t) and p
1,1
(r, t) = 1 for all r, t R
+
such that r t.
3.1.G Single Life Insurance: Assume that the distribution of T has a density f with
respect to Lebesgue measure and that f is continuous on R
+
and strictly positive
on (0, ). For all t R
+
, dene
(t) :=
f(t)
P[T > t]
.
(a) The process N
t

tR
+
is regular with intensity
1
= .
(b) There exists a transition rule p such that the dierential equations
d
dt
p
0,n
(r, t) =
_
p
0,0
(r, t)
1
(t) if n = 0
p
0,0
(r, t)
1
(t) if n = 1
with initial conditions
p
0,n
(r, r) =
_
1 if n = 0
0 if n = 1
for all r, t R
+
such that r t.
(c) There exists a transition rule p such that the integral equations
p
0,n
(r, t) =
_

_
e

_
t
r

1
(s) ds
if n = 0
_
t
r
p
0,0
(r, s)
1
(s) ds if n = 1
for all r, t R
+
such that r t.
(d) Interpret the particular form of the dierential equations.
The function is also called the failure rate of T.
3.2 A Characterization of Regularity 51
3.2 A Characterization of Regularity
The following result characterizes regularity of claim number processes satisfying
the ChapmanKolmogorov equations:
3.2.1 Theorem. Assume that the claim number process N
t

tR
+
satises the
ChapmanKolmogorov equations and let
n

nN
be a sequence of continuous func-
tions R
+
(0, ). Then the following are equivalent:
(a) N
t

tR
+
is regular with intensities
n

nN
.
(b) There exists a transition rule p such that the dierential equations
d
dt
p
k,n
(r, t) =
_
p
k,k
(r, t)
k+1
(t) if k = n
p
k,n1
(r, t)
n
(t) p
k,n
(r, t)
n+1
(t) if k < n
with initial conditions
p
k,n
(r, r) =
_
1 if k = n
0 if k < n
hold for all (k, n, r, t) /.
(c) There exists a transition rule p such that the integral equations
p
k,n
(r, t) =
_

_
e

_
t
r

k+1
(s) ds
if k = n
_
t
r
p
k,n1
(r, s)
n
(s) p
n,n
(s, t) ds if k < n
hold for all (k, n, r, t) /.
Proof. We prove the assertion according to the following scheme:
(a) = (b) = (c) = (a)
Assume rst that (a) holds and consider a transition rule p and (k, n, r, t) /.
(1) By the ChapmanKolmogorov equations, we have
p
k,k
(r, t+h) p
k,k
(r, t) = p
k,k
(r, t) p
k,k
(t, t+h) p
k,k
(r, t)
= p
k,k
(r, t)
_
1 p
k,k
(t, t+h)
_
,
and hence
lim
h0
1
h
_
p
k,k
(r, t+h) p
k,k
(r, t)
_
= p
k,k
(r, t) lim
h0
1
h
_
1 p
k,k
(t, t+h)
_
= p
k,k
(r, t)
k+1
(t) .
Thus, the right derivative of t p
k,k
(r, t) exists and is continuous, and this implies
that the derivative of t p
k,k
(r, t) exists and satises the dierential equation
d
dt
p
k,k
(r, t) = p
k,k
(r, t)
k+1
(t)
with initial condition p
k,k
(r, r) = 1.
52 Chapter 3 The Claim Number Process as a Markov Process
In particular, we have
p
k,k
(r, t) = e

_
t
r

k+1
(s) ds
> 0 .
(2) Assume now that k < n. Then we have
p
k,n
(r, t+h) p
k,n
(r, t) =
n

m=k
p
k,m
(r, t) p
m,n
(t, t+h) p
k,n
(r, t)
=
n2

m=k
p
k,m
(r, t) p
m,n
(t, t+h)
+ p
k,n1
(r, t) p
n1,n
(t, t+h)
p
k,n
(r, t)
_
1 p
n,n
(t, t+h)
_
.
For m k, . . . , n2, we have p
m,n
(t, t+h) 1 p
m,m
(t, t+h) p
m,m+1
(t, t+h),
hence
lim
h0
1
h
p
m,n
(t, t+h) = 0 ,
and this identity together with
lim
h0
1
h
p
n1,n
(t, t+h) =
n
(t)
and
lim
h0
1
h
_
1 p
n,n
(t, t+h)
_
=
n+1
(t)
yields
lim
h0
1
h
_
p
k,n
(r, t+h) p
k,n
(r, t)
_
=
n2

m=k
p
k,m
(r, t) lim
h0
1
h
p
m,n
(t, t+h)
+ p
k,n1
(r, t) lim
h0
1
h
p
n1,n
(t, t+h)
p
k,n
(r, t) lim
h0
1
h
_
1 p
n,n
(t, t+h)
_
= p
k,n1
(r, t)
n
(t) p
k,n
(r, t)
n+1
(t)) .
Thus, the right derivative of t p
k,n
(r, t) exists, and it follows that the function
t p
k,n
(r, t) is right continuous on [r, ). Moreover, for t (r, ) the Chapman
Kolmogorov equations yield, for all s (r, t),

p
k,n
(r, s) p
k,n
(r, t)

p
k,n
(r, s)
n

m=k
p
k,m
(r, s) p
m,n
(s, t)

3.2 A Characterization of Regularity 53

n1

m=k
p
k,m
(r, s) p
m,n
(s, t) + p
k,n
(r, s)
_
1p
n,n
(s, t)
_

m=k
_
1 p
m,m
(s, t)
_
=
n

m=k
_
1
p
m,m
(r, t)
p
m,m
(r, s)
_
,
and thus
lim
st

p
k,n
(r, s) p
k,n
(r, t)

= 0 ,
which means that the function t p
k,n
(r, t) is also left continuous on (r, ) and
hence continuous on [r, ). But then the right derivative of t p
k,n
(r, t) is con-
tinuous, and this implies that the derivative of t p
k,n
(r, t) exists and satises the
dierential equation
d
dt
p
k,n
(r, t) = p
k,n1
(r, t)
n
(t) p
k,n
(r, t)
n+1
(t)
with initial condition p
k,n
(r, r) = 0.
(3) Because of (1) and (2), (a) implies (b).
Assume now that (b) holds and consider a transition rule p satisfying the dier-
ential equations and (k, n, r, t) /.
(1) We have already noticed in the preceding part of the proof that the dierential
equation
d
dt
p
k,k
(r, t) = p
k,k
(r, t)
k+1
(t)
with initial condition p
k,k
(r, r) = 1 has the unique solution
p
k,k
(r, t) = e

_
t
r

k+1
(s) ds
.
(2) Assume now that k < n. Then the function t 0 is the unique solution of the
homogeneous dierential equation
d
dt
p
k,n
(r, t) = p
k,n
(r, t)
n+1
(t)
with initial condition p
k,n
(r, r) = 0. This implies that the inhomogeneous dierential
equation
d
dt
p
k,n
(r, t) = p
k,n1
(r, t)
n
(t) p
k,n
(r, t)
n+1
(t)
with initial condition p
k,n
(r, r) = 0 has at most one solution.
54 Chapter 3 The Claim Number Process as a Markov Process
Assume that the function t p
k,n1
(r, t) is already given (which because of (1) is
the case for n = k + 1) and dene
p
k,n
(r, t) :=
_
t
r
p
k,n1
(r, s)
n
(s) p
n,n
(s, t) ds .
Since
d
dt
p
k,n
(r, t) =
_
t
r
p
k,n1
(r, s)
n
(s)
_
d
dt
p
n,n
(s, t)
_
ds + p
k,n1
(r, t)
n
(t)
=
_
t
r
p
k,n1
(r, s)
n
(s) (p
n,n
(s, t)
n+1
(t)) ds + p
k,n1
(r, t)
n
(t)
=
_
t
r
p
k,n1
(r, s)
n
(s) p
n,n
(s, t) ds (
n+1
(t)) + p
k,n1
(r, t)
n
(t)
= p
k,n
(r, s) (
n+1
(t)) +p
k,n1
(r, t)
n
(t)
= p
k,n1
(r, t)
n
(t) p
k,n
(r, s)
n+1
(t)
and p
k,n
(r, r) = 0, the function t p
k,n
(r, t) is the unique solution of the dierential
equation
d
dt
p
k,n
(r, t) = p
k,n1
(r, t)
n
(t) p
k,n
(r, t)
n+1
(t) ,
with initial condition p
k,n
(r, r) = 0, and we have
p
k,n
(r, t) :=
_
t
r
p
k,n1
(r, s)
n
(s) p
n,n
(s, t) ds .
(3) Because of (1) and (2), (b) implies (c).
Assume nally that (c) holds and consider a transition rule p satisfying the integral
equations. For n N and r, t R
+
such that r t, dene

n
(r, t) :=
_
t
r

n
(s) ds .
Then we have
p
n,n
(r, t) = e

_
t
r

n+1
(s) ds
= e

n+1
(r,t)
> 0
for each admissible pair (n, r) and all t [r, ).
First, for all t R
+
, we have
P[N
t
= 0] = P[N
t
= 0[N
0
= 0]
= p
0,0
(0, t)
> 0 .
3.2 A Characterization of Regularity 55
Consider now t (0, ) and assume that p
0,n1
(0, t) = P[N
t
= n1] > 0 holds
for some n N and all t R
+
(which is the case for n = 1). Then we have
P[N
t
= n] = P[N
t
= n[N
0
= 0]
= p
0,n
(0, t)
=
_
t
0
p
0,n1
(0, s)
n
(s) p
n,n
(s, t) ds
> 0 .
This yields
P[N
t
= n] > 0
for each admissible pair (n, t), which proves (i).
Second, for each admissible pair (n, t) and for all h R
+
, we have
p
n,n
(t, t+h) = e

n+1
(t,t+h)
,
showing that the function h p
n,n
(t, t+h) is continuous, which proves (ii).
Finally, for each admissible pair (n, t), we have
lim
h0
1
h
_
1 p
n,n
(t, t+h)
_
= lim
h0
1
h
_
1 e

n+1
(t,t+h)
_
=
n+1
(t) ,
and because of
p
n,n+1
(t, t+h) =
_
t+h
t
p
n,n
(t, u)
n+1
(u) p
n+1,n+1
(u, t+h) du
=
_
t+h
t
p
n,n
(t, u)
n+1
(u) e

n+2
(u,t+h)
du
= e

n+2
(t,t+h)
_
t+h
t
p
n,n
(t, u)
n+1
(u) e

n+2
(t,u)
du
we also have
lim
h0
1
h
p
n,n+1
(t, t+h)
= lim
h0
1
h
_
e

n+2
(t,t+h)
_
t+h
t
p
n,n
(t, u)
n+1
(u) e

n+2
(t,u)
du
_
= e

n+2
(t,t)
p
n,n
(t, t)
n+1
(t) e

n+2
(t,t)
=
n+1
(t) .
This proves (iii).
Therefore, (c) implies (a). 2
Since regularity is independent of the particular choice of the transition rule, every
transition rule for a regular claim number process satisfying the ChapmanKolmo-
gorov equations fullls the dierential and integral equations of Theorem 3.2.1.
56 Chapter 3 The Claim Number Process as a Markov Process
3.3 A Characterization of the Inhomogeneous
Poisson Process
In the present section, we study claim number processes which are regular Markov
processes with intensities which are all identical.
3.3.1 Lemma. Assume that the claim number process N
t

tR
+
satises the
ChapmanKolmogorov equations and is regular. Then the following are equivalent:
(a) The identity
p
0,k
(t, t+h) = p
n,n+k
(t, t+h)
holds for all n, k N
0
and t, h R
+
such that (n, t) is admissible.
(b) The intensities of N
t

tR
+
are all identical.
Proof. It is clear that (a) implies (b).
Assume now that (b) holds.
(1) For each admissible pair (n, t) and all h R
+
, we have
p
0,0
(t, t+h) = e

_
t+h
t

1
(s) ds
= e

_
t+h
t

n+1
(s) ds
= p
n,n
(t, t+h) .
(2) Assume now that the identity
p
0,k
(t, t+h) = p
n,n+k
(t, t+h)
holds for some k N
0
and for each admissible pair (n, t) and all h R
+
(which
because of (1) is the case for k = 0). Then we have
p
0,k+1
(t, t+h) =
_
t+h
t
p
0,k
(t, u)
k+1
(u) p
k+1,k+1
(u, t+h) du
=
_
t+h
t
p
n,n+k
(t, u)
n+k+1
(u) p
n+k+1,n+k+1
(u, t+h) du
= p
n,n+k+1
(t, t+h) .
for each admissible pair (n, t) and all h R
+
.
(3) Because of (1) and (2), (b) implies (a). 2
Let : R
+
(0, ) be a continuous function. The claim number process N
t

tR
+
is an inhomogeneous Poisson process with intensity if it has independent incre-
ments satisfying
P
N
t+h
N
t
= P
_
_
t+h
t
(s) ds
_
3.3 A Characterization of the Inhomogeneous Poisson Process 57
for all t R
+
and h (0, ). Thus, the claim number process is an inhomogeneous
Poisson process with constant intensity t if and only if it is a Poisson process
with parameter .
3.3.2 Theorem. Let : R
+
(0, ) be a continuous function. Then the
following are equivalent:
(a) The claim number process N
t

tR
+
is a regular Markov process with intensities

nN
satisfying
n
(t) = (t) for all nN and t R
+
.
(b) The claim number process N
t

tR
+
has independent increments and is regular
with intensities
n

nN
satisfying
n
(t) = (t) for all nN and t R
+
.
(c) The claim number process N
t

tR
+
is an inhomogeneous Poisson process with
intensity .
Proof. Each of the conditions implies that the claim number process satises the
ChapmanKolmogorov equations. Therefore, Theorem 3.2.1 applies.
For all r, t R
+
such that r t, dene
(r, t) :=
_
t
r
(s) ds .
We prove the assertion according to the following scheme:
(a) = (c) = (b) = (a)
Assume rst that (a) holds.
(1) For each admissible pair (n, r) and all t [r, ), we have
p
n,n
(r, t) = e

_
t
r

n+1
(s) ds
= e

_
t
r
(s) ds
= e
(r,t)
.
(2) Assume now that the identity
p
n,n+k
(r, t) = e
(r,t)
((r, t))
k
k!
holds for some k N
0
and for each admissible pair (n, r) and all t [r, ) (which
because of (1) is the case for k = 0). Then we have
p
n,n+k+1
(r, t) =
_
t
r
p
n,n+k
(r, s)
n+k+1
(s) p
n+k+1,n+k+1
(s, t) ds
=
_
t
r
e
(r,s)
((r, s))
k
k!
(s) e
(s,t)
ds
= e
(r,t)
_
t
r
((r, s))
k
k!
(s) ds
= e
(r,t)
((r, t))
k+1
(k+1)!
for each admissible pair (n, r) and all t [r, ).
58 Chapter 3 The Claim Number Process as a Markov Process
(3) Because of (1) and (2), the identity
p
n,n+k
(r, t) = e
(r,t)
((r, t))
k
k!
holds for all n, k N
0
and r, t R
+
such that (n, r) is admissible and r t.
(4) Because of (3), we have
P[N
t
= n] = p
0,n
(0, t)
= e
(0,t)
((0, t))
n
n!
for all t R
+
and n N
0
such that (n, t) is admissible, and thus
P
Nt
= P((0, t)) .
for all t (0, ).
(5) The identity
P
N
t
N
r
= P((r, t))
holds for all r, t R
+
such that r < t.
In the case r = 0, the assertion follows from (4).
In the case r > 0, it follows from (4) that the probability of explosion is equal to
zero and that P[N
r
= n] > 0 holds for all n N
0
. Because of (3), we obtain
P[N
t
N
r
= k] =

n=0
P[N
t
N
r
= kN
r
= n]
=

n=0
_
P[N
t
= n + k[N
r
= n] P[N
r
= n]
_
=

n=0
p
n,n+k
(r, t) p
0,n
(0, r)
=

n=0
e
(r,t)
((r, t))
k
k!
e
(0,r)
((0, r))
n
n!
= e
(r,t)
((r, t))
k
k!

n=0
e
(0,r)
((0, r))
n
n!
= e
(r,t)
((r, t))
k
k!
for all k N
0
, and thus
P
NtNr
= P((r, t)) .
3.3 A Characterization of the Inhomogeneous Poisson Process 59
(6) The claim number process N
t

tR
+
has independent increments:
Consider m N, t
0
, t
1
, . . . , t
m
R
+
such that 0 = t
0
< t
1
< . . . < t
m
, and
k
1
, . . . , k
m
N
0
. For j 0, 1, . . . , m, dene
n
j
:=
j

i=1
k
i
.
If P[

m1
j=1
N
t
j
N
t
j1
= k
j
] = P[

m1
j=1
N
t
j
= n
j
] > 0, then the Markov property
together with (3) and (5) yields
P
_
N
t
m
= n
m

m1

j=1
N
t
j
= n
j

_
= P[N
t
m
= n
m
[N
t
m1
= n
m1
]
= p
n
m1
,nm
(t
m1
, t
m
)
= e
(t
m1
,tm)
((t
m1
, t
m
))
n
m
n
m1
(n
m
n
m1
)!
= P[N
tm
N
t
m1
= n
m
n
m1
]
= P[N
t
m
N
t
m1
= k
m
]
and hence
P
_
m

j=1
N
t
j
N
t
j1
= k
j

_
= P
_
m

j=1
N
t
j
= n
j

_
= P
_
N
t
m
= n
m

m1

j=1
N
t
j
= n
j

_
P
_
m1

j=1
N
t
j
= n
j

_
= P[N
tm
N
t
m1
= k
m
] P
_
m1

j=1
N
t
j
= n
j

_
= P[N
tm
N
t
m1
= k
m
] P
_
m1

j=1
N
t
j
N
t
j1
= k
j

_
.
Obviously, the identity
P
_
m

j=1
N
t
j
N
t
j1
= k
j

_
= P[N
t
m
N
t
m1
= k
m
] P
_
m1

j=1
N
t
j
N
t
j1
= k
j

_
is also valid if P[

m1
j=1
N
t
j
N
t
j1
= k
j
] = 0.
It now follows by induction that N
t

tR
+
has independent increments.
(7) Because of (5) and (6), N
t

tR
+
is an inhomogeneous Poisson process with
intensity . Therefore, (a) implies (c).
60 Chapter 3 The Claim Number Process as a Markov Process
Assume now that (c) holds. Of course, N
t

tR
+
has independent increments.
Furthermore, for each admissible pair (n, r) and all k N
0
and t [r, ), we have
p
n,n+k
(r, t) = P[N
t
= n+k[N
r
= n]
= P[N
t
N
r
= k[N
r
N
0
= n]
= P[N
t
N
r
= k]
= e
(r,t)
((r, t))
k
k!
.
For k = 0, this yields
p
n,n
(r, t) = e
(r,t)
= e

_
t
r
(s) ds
,
and for k N we obtain
p
n,n+k
(r, t) = e
(r,t)
((r, t))
k
k!
= e
(r,t)
_
t
r
((r, s))
k1
(k1)!
(s) ds
=
_
t
r
e
(r,s)
((r, s))
k1
(k1)!
(s) e
(s,t)
ds
=
_
t
r
p
n,n+k1
(r, s) (s) p
n+k,n+k
(s, t) ds .
It now follows from Theorem 3.2.1 that N
t

tR
+
is regular with intensities
n

nN
satisfying
n
(t) = (t) for all n N and t R
+
. Therefore, (c) implies (b).
Assume nally that (b) holds. Since N
t

tR
+
has independent increments, it
follows from Theorem 3.1.1 that N
t

tR
+
is a Markov process. Therefore, (b) im-
plies (a). 2
The following result is a partial generalization of Lemma 2.3.1:
3.3.3 Lemma (Multinomial Criterion). If the claim number process N
t

tR
+
is an inhomogeneous Poisson process with intensity , then the identity
P
_
m

j=1
N
t
j
N
t
j1
= k
j

N
tm
= n
_
=
n!

m
j=1
k
j
!
m

j=1
__
t
j
t
j1
(s) ds
_
tm
0
(s) ds
_
k
j
holds for all m N and t
0
, t
1
, . . . , t
m
R
+
such that 0 = t
0
< t
1
< . . . < t
m
and
for all n N
0
and k
1
, . . . , k
m
N
0
such that

m
j=1
k
j
= n.
3.3 A Characterization of the Inhomogeneous Poisson Process 61
Proof. For all r, t R
+
satisfying r t, dene
(r, t) :=
_
t
r
(s) ds .
Then we have
P
_
m

j=1
N
t
j
N
t
j1
= k
j

N
t
m
= n
_
=
P
_
m

j=1
N
t
j
N
t
j1
= k
j

_
P[N
t
m
= n]
=
m

j=1
P[N
t
j
N
t
j1
= k
j
]
P[N
tm
= n]
=
m

j=1
e
(t
j1
,t
j
)
((t
j1
, t
j
))
k
j
k
j
!
e
(0,t
m
)
((0, t
m
))
n
n!
=
n!

m
j=1
k
j
!
m

j=1
_
(t
j1
, t
j
)
(0, t
m
)
_
k
j
,
as was to be shown. 2
Problems
3.3.A Assume that the claim number process has independent increments and is regular.
Then its intensities are all identical.
3.3.B The following are equivalent:
(a) The claim number process is a regular Markov process and its intensities are
all identical.
(b) The claim number process has independent increments and is regular.
(c) The claim number process is an inhomogeneous Poisson process.
3.3.C Consider a continuous function : R
+
(0, ) satisfying
_

0
(s) ds = . For
all t R
+
, dene
(t) :=
_
t
0
(s) ds
as well as
N

t
:= N
(t)
and
N

t
:= N

1
(t)
.
(a) If the claim number process N
t

tR
+
is a Poisson process with parameter 1,
then N

tR
+
is an inhomogeneous Poisson process with intensity .
(b) If the claim number process N
t

tR
+
is an inhomogeneous Poisson process
with intensity , then N

tR
+
is a Poisson process with parameter 1.
62 Chapter 3 The Claim Number Process as a Markov Process
3.3.D Consider a continuous function : R
+
(0, ) satisfying
_

0
(s) ds = . For
all t R
+
, dene
(t) :=
_
t
0
(s) ds .
Then the following are equivalent:
(a) The claim number process N
t

tR
+
is an inhomogeneous Poisson process
with intensity .
(b) The claim number process N
t

tR
+
has independent increments and satis-
es E[N
t
] = (t) for all t R
+
.
(c) The process N
t
(t)
tR
+
is a martingale.
3.3.E Prediction: If the claim number process N
t

tR
+
is an inhomogeneous Poisson
process with intensity , then the inequality
E
_
_
N
t+h

_
N
t
+
_
t+h
t
(s) ds
__
2
_
E[(N
t+h
Z)
2
]
holds for all t, h R
+
and for every random variable Z satisfying E[Z
2
] <
and (Z) T
t
.
3.4 A Characterization of Homogeneity
In the present section, we study claim number processes which are regular Markov
processes with intensities which are all constant.
3.4.1 Lemma. Assume that the claim number process N
t

tR
+
satises the
ChapmanKolmogorov equations and is regular. Then the following are equivalent:
(a) N
t

tR
+
is homogeneous.
(b) The intensities of N
t

tR
+
are all constant.
Proof. Assume rst that (a) holds and consider n N
0
. For all s, t (0, ),
we have

n+1
(s) = lim
h0
1
h
p
n,n+1
(s, s+h)
= lim
h0
1
h
p
n,n+1
(t, t+h)
=
n+1
(t) .
Thus,
n+1
is constant on (0, ) and hence, by continuity, on R
+
. Therefore,
(a) implies (b).
Assume now that (b) holds and consider a sequence
n

nN
in (0, ) such that

n
(t) =
n
holds for all n N and t R
+
.
3.4 A Characterization of Homogeneity 63
(1) For all n N
0
and s, t, h R
+
such that (n, s) and (n, t) are admissible, we
have
p
n,n
(s, s+h) = e

_
s+h
s

n+1
(u) du
= e

_
s+h
s

n+1
du
= e

n+1
h
,
and hence
p
n,n
(s, s+h) = p
n,n
(t, t+h) .
(2) Assume now that the identity
p
n,n+k
(s, s+h) = p
n,n+k
(t, t+h)
holds for some k N
0
and for all n N
0
and s, t, h R
+
such that (n, s) and (n, t)
are admissible (which because of (1) is the case for k = 0). Then we have
p
n,n+k+1
(s, s+h) =
_
s+h
s
p
n,n+k
(s, u)
n+k+1
(u) p
n+k+1,n+k+1
(u, s+h) du
=
_
s+h
s
p
n,n+k
(t, ts+u)
n+k+1
p
n+k+1,n+k+1
(ts+u, t+h) du
=
_
t+h
t
p
n,n+k
(t, v)
n+k+1
p
n+k+1,n+k+1
(v, t+h) dv
=
_
t+h
t
p
n,n+k
(t, v)
n+k+1
(v) p
n+k+1,n+k+1
(v, t+h) dv
= p
n,n+k+1
(t, t+h)
for all n N
0
and s, t, h R
+
such that (n, s) and (n, t) are admissible.
(3) Because of (1) and (2), (b) implies (a). 2
The main result of this section is the following characterization of homogeneous
regular Markov processes:
3.4.2 Theorem. Let
n

nN
be a sequence of real numbers in (0, ). Then the
following are equivalent:
(a) The claim number process N
t

tR
+
is a regular Markov process with intensities

nN
satisfying
n
(t) =
n
for all nN and t R
+
.
(b) The sequence of claim interarrival times W
n

nN
is independent and satises
P
W
n
= Exp(
n
) for all nN.
Proof. For n N, let T
n
and W
n
denote the random vectors R
n
with
coordinates T
i
and W
i
, respectively, and let M
n
denote the (nn)matrix with
entries
m
ij
:=
_
1 if i j
0 if i < j .
64 Chapter 3 The Claim Number Process as a Markov Process
Then M
n
is invertible and satises
det M
n
= 1 .
Moreover, we have T
n
= M
n
W
n
and hence
W
n
= M
1
n
T
n
as well as T
1
n
= W
1
n
M
1
n
. Furthermore, let 1
n
denote the vector in R
n
with all
coordinates being equal to one, and let . , .) denote the inner product on R
n
.
Assume rst that (a) holds. Since the claim arrival process is more directly related
to the claim number process than the claim interarrival process is, we shall rst
determine the nite dimensional distributions of the claim arrival process and then
apply the identity W
n
= M
1
n
T
n
to obtain the nite dimensional distributions of
the claim interarrival process.
Consider two sequences r
j

jN
and t
j

jN
0
of real numbers satisfying t
0
= 0 and
t
j1
r
j
< t
j
for all j N. We rst exploit regularity and then the Markov property
in order to determine the nite dimensional distributions of the claim arrival process.
(1) For each admissible pair (j, r) and all t [r, ), we have
p
j,j
(r, t) = e

j+1
(tr)
.
Indeed, regularity yields
p
j,j
(r, t) = e

_
t
r

j+1
(s) ds
= e

_
t
r

j+1
ds
= e

j+1
(tr)
.
(2) For all j N and r, t (0, ) such that r t, we have
p
j1,j
(r, t) =
_

j+1

j
_
e

j
(tr)
e

j+1
(tr)
_
if
j
,=
j+1

j
(tr) e

j
(tr)
if
j
=
j+1
.
Indeed, if
j
,=
j+1
, then regularity together with (1) yields
p
j1,j
(r, t) =
_
t
r
p
j1,j1
(r, s)
j
(s) p
j,j
(s, t) ds
=
_
t
r
e

j
(sr)

j
e

j+1
(ts)
ds
=
j
e
(
j
r
j+1
t)
_
t
r
e
(
j+1

j
)s
ds
=
j
e
(
j
r
j+1
t)
1

j+1

j
_
e
(
j+1

j
)t
e
(
j+1

j
)r
_
=

j

j+1

j
_
e

j
(tr)
e

j+1
(tr)
_
;
3.4 A Characterization of Homogeneity 65
similarly, if
j
=
j+1
, then
p
j1,j
(r, t) =
_
t
r
p
j1,j1
(r, s)
j
(s) p
j,j
(s, t) ds
=
_
t
r
e

j
(sr)

j
e

j+1
(ts)
ds
=
_
t
r
e

j
(sr)

j
e

j
(ts)
ds
=
_
t
r

j
e

j
(tr)
ds
=
j
(tr) e

j
(tr)
.
(3) For all j N and h (0, ), we have
p
j1,j1
(h, h+r
j
) p
j1,j
(r
j
, t
j
) p
j,j
(t
j
, r
j+1
)
=
_
t
j
r
j

j
e
(
j+1

j
)s
j
ds
j
p
j,j
(h, h+r
j+1
) .
Indeed, if
j
,=
j+1
, then (1) and (2) yield
p
j1,j1
(h, h+r
j
) p
j1,j
(r
j
, t
j
) p
j,j
(t
j
, r
j+1
)
= e

j
r
j

j+1

j
_
e

j
(t
j
r
j
)
e

j+1
(t
j
r
j
)
_
e

j+1
(r
j+1
t
j
)
=

j

j+1

j
_
e
(
j+1

j
)t
j
e
(
j+1

j
)r
j
_
e

j+1
r
j+1
=
_
t
j
r
j

j
e
(
j+1

j
)s
j
ds
j
p
j,j
(h, h+r
j+1
) ;
similarly, if
j
=
j+1
, then
p
j1,j1
(h, h+r
j
) p
j1,j
(r
j
, t
j
) p
j,j
(t
j
, r
j+1
)
= e

j
r
j

j
(t
j
r
j
) e

j
(t
j
r
j
)
e

j+1
(r
j+1
t
j
)
= e

j
r
j

j
(t
j
r
j
) e

j
(t
j
r
j
)
e

j
(r
j+1
t
j
)
=
j
(t
j
r
j
) e

j
r
j+1
=
_
t
j
r
j

j
ds
j
p
j,j
(h, h+r
j+1
)
=
_
t
j
r
j

j
e
(
j+1

j
)s
j
ds
j
p
j,j
(h, h+r
j+1
) .
66 Chapter 3 The Claim Number Process as a Markov Process
(4) For all n N, we have
P
_
N
r
n
= n1
n1

j=1
N
t
j
= jN
r
j
= j1
_
> 0
and
P
_
n

j=1
N
t
j
= jN
r
j
= j1
_
> 0 .
This follows by induction, using (1) and (2) and the Markov property:
For n = 1, we have
P[N
r
1
= 0] = P[N
r
1
= 0[N
0
= 0]
= p
0,0
(0, r
1
)
> 0
and
P[N
t
1
= 1N
r
1
= 0] = P[N
t
1
= 1[N
r
1
= 0] P[N
r
1
= 0]
= p
0,0
(0, r
1
) p
0,1
(r
1
, t
1
)
> 0 .
Assume now that the assertion holds for some n N. Then we have
P
_
N
r
n+1
= n
n

j=1
N
t
j
= jN
r
j
= j1
_
= P
_
N
r
n+1
= n

j=1
N
t
j
= jN
r
j
= j1
_
P
_
n

j=1
N
t
j
= jN
r
j
= j1
_
= P[N
r
n+1
= n[N
tn
= n]
P
_
n

j=1
N
t
j
= jN
r
j
= j1
_
= p
n,n
(t
n
, r
n+1
) P
_
n

j=1
N
t
j
= jN
r
j
= j1
_
> 0
3.4 A Characterization of Homogeneity 67
and
P
_
n+1

j=1
N
t
j
= jN
r
j
= j1
_
= P
_
N
t
n+1
= n+1

N
r
n+1
= n
n

j=1
N
t
j
= jN
r
j
= j1
_
P
_
N
r
n+1
= n
n

j=1
N
t
j
= jN
r
j
= j1
_
= P[N
t
n+1
= n+1[N
r
n+1
= n]
P
_
N
r
n+1
= n
n

j=1
N
t
j
= jN
r
j
= j1
_
= p
n,n+1
(r
n+1
, t
n+1
) P
_
N
r
n+1
= n
n

j=1
N
t
j
= jN
r
j
= j1
_
> 0 .
(5) For all n N, we have
P
_
n

j=1
r
j
< T
j
t
j

_
=
n1

j=1
_
t
j
r
j

j
e
(
j+1

j
)s
j
ds
j

_
t
n
r
n

n
e

n
s
n
ds
n
.
Indeed, using (4), the Markov property, (3), and (1), we obtain, for all h (0, ),
P
_
n

j=1
r
j
< T
j
t
j

_
= P
_
n

j=1
N
r
j
< j N
t
j

_
= P
_
N
t
n
nN
r
n
= n1
n1

j=1
N
t
j
= jN
r
j
= j1
_
= P[N
tn
n[N
rn
= n1]

n1

j=1
_
P[N
r
j+1
= j[N
t
j
= j] P[N
t
j
= j[N
r
j
= j1]
_
P[N
r
1
= 0[N
0
= 0]
= p
0,0
(0, r
1
)
n1

j=1
p
j1,j
(r
j
, t
j
) p
j,j
(t
j
, r
j+1
)
_
1 p
n1,n1
(r
n
, t
n
)
_
68 Chapter 3 The Claim Number Process as a Markov Process
= p
0,0
(h, h+r
1
)
n1

j=1
p
j1,j
(r
j
, t
j
) p
j,j
(t
j
, r
j+1
)
_
1 p
n1,n1
(r
n
, t
n
)
_
=
n1

j=1
_
t
j
r
j

j
e
(
j+1

j
)s
j
ds
j
p
n1,n1
(h, h+r
n
)
_
1 p
n1,n1
(r
n
, t
n
)
_
=
n1

j=1
_
t
j
r
j

j
e
(
j+1

j
)s
j
ds
j
e

n
r
n

_
1 e

n
(t
n
r
n
)
_
=
n1

j=1
_
t
j
r
j

j
e
(
j+1

j
)s
j
ds
j

_
e
nrn
e
ntn
_
=
n1

j=1
_
t
j
r
j

j
e
(
j+1

j
)s
j
ds
j

_
tn
r
n

n
e
nsn
ds
n
.
(6) Consider n N and dene f
n
: R
n
R
+
by letting
f
n
(w) :=
n

j=1

j
e

j
w
j

(0,)
(w
j
) .
Dene s
0
:= 0 and A :=
n
j=1
(r
j
, t
j
]. Because of (5), we obtain
P[T
n
A] = P
_
n

j=1
r
j
< T
j
t
j

_
=
n1

j=1
_
t
j
r
j

j
e
(
j+1

j
)s
j
ds
j

_
tn
r
n

n
e
nsn
ds
n
=
n1

j=1
_
(r
j
,t
j
]

j
e
(
j+1

j
)s
j
d(s
j
)
_
(r
n
,t
n
]

n
e

n
s
n
d(s
n
)
=
_
A
_
n1

j=1

j
e
(
j+1

j
)s
j
_

n
e

n
s
n
d
n
(s)
=
_
A
_
n

j=1

j
e

j
(s
j
s
j1
)

(0,)
(s
j
s
j1
)
_
d
n
(s)
=
_
A
f
n
(M
1
n
(s)) d
n
(s) .
(7) Since the sequence T
n

nN
0
is strictly increasing with T
0
= 0, it follows from (6)
that the identity
P[T
n
A] =
_
A
f
n
(M
1
n
(s)) d
n
(s)
holds for all A B(R
n
).
3.4 A Characterization of Homogeneity 69
(8) Consider B
1
, . . . , B
n
B(R) and let B :=
n
j=1
B
j
. Since W
n
= M
1
n
T
n
, the
identity established in (7) yields
P
Wn
[B] = P[W
n
B]
= P[M
1
n
T
n
B]
= P[T
n
M
n
(B)]
=
_
M
n
(B)
f
n
(M
1
n
(s)) d
n
(s)
=
_
B
f
n
(w) d
n
M
1
n
(w)
=
_
B
f
n
(w)
1
[ det M
1
n
[
d
n
(w)
=
_
B
f
n
(w) d
n
(w)
=
_
B
_
n

j=1

j
e

j
w
j

(0,)
(w
j
)
_
d
n
(w)
=
n

j=1
_
B
j

j
e

j
w
j

(0,)
(w
j
) d(w
j
) .
(9) Because of (8), the sequence W
n

nN
is independent and satises
P
W
n
= Exp(
n
)
for all n N. Therefore, (a) implies (b).
Assume now that (b) holds.
(1) To establish the Markov property, consider m N, t
1
, . . . , t
m
, t
m+1
(0, ),
and k
1
, . . . , k
m
N
0
such that t
1
< . . . < t
m
< t
m+1
and P[

m
j=1
N
t
j
= k
j
] > 0.
In the case where k
m
= 0, it is clear that the identity
P
_
N
t
m+1
= k
m+1

j=1
N
t
j
= k
j

_
= P[N
t
m+1
= l[N
t
m
= k
m
]
holds for all k
m+1
N
0
such that k
m
k
m+1
.
In the case where k
m
N, dene k
0
:= 0 and n := k
m
, and let l 0, 1, . . . , m1
be the unique integer satisfying k
l
< k
l+1
= k
m
= n. Then there exists a rectangle
B
n
j=1
(0, t
l+1
] such that
_
l

j=1
T
k
j
t
j
< T
k
j
+1

_
T
n
t
l+1
= T
1
n
(B) .
70 Chapter 3 The Claim Number Process as a Markov Process
Letting A := M
1
n
(B), we obtain
_
l

j=1
T
k
j
t
j
< T
k
j
+1

_
T
n
t
l+1
= T
1
n
(B)
= W
1
n
(M
1
n
(B))
= W
1
n
(A) .
Using independence of W
n
and W
n+1
, the transformation formula for integrals, and
Fubinis theorem, we obtain
P
_
m

j=1
N
t
j
= k
j

_
= P
__
l

j=1
N
t
j
= k
j

_
N
t
l+1
= nN
t
m
= n
_
= P
__
l

j=1
T
k
j
t
j
< T
k
j
+1

_
T
n
t
l+1
t
m
< T
n+1

_
= P[W
1
n
(A) t
m
T
n
< W
n+1
]
= P[W
1
n
(A) t
m
1
n
, W
n
) < W
n+1
]
=
_
A
__
(tm1n,s),)
dP
W
n+1
(w)
_
dP
W
n
(s)
=
_
A
__
(t
m
1
n
,s),)

n+1
e

n+1
w
d(w)
_
dP
Wn
(s)
=
_
A
e

n+1
1n,s)
__
(t
m
,)

n+1
e

n+1
v
d(v)
_
dP
W
n
(s)
=
_
A
e

n+1
1
n
,s)
dP
Wn
(s)
_
(t
m
,)
dP
W
n+1
(v) .
Also, using the same arguments as before, we obtain
P[N
t
m
= k
m
] = P[N
t
m
= n]
= P[T
n
t
m
< T
n+1
]
= P[T
n
t
m
t
m
T
n
< W
n+1
]
=
_
(,t
m
]
__
(t
m
s,)
dP
W
n+1
(w)
_
dP
T
n
(s)
=
_
(,tm]
__
(tms,)

n+1
e

n+1
w
d(w)
_
dP
T
n
(s)
=
_
(,t
m
]
e

n+1
s
__
(t
m
,)

n+1
e

n+1
v
d(v)
_
dP
Tn
(s)
=
_
(,tm]
e

n+1
s
dP
T
n
(s)
_
(tm,)
dP
W
n+1
(v) .
3.4 A Characterization of Homogeneity 71
Consider now k N
0
such that k
m
< k, and dene U := T
k
T
n+1
= T
k
T
n
W
n+1
.
Then we have
P
_
N
t
m+1
k
m

j=1
N
t
j
= k
j

_
= P
__
l

j=1
N
t
j
= k
j

_
N
t
l+1
= nN
t
m
= nN
t
m+1
k
_
= P
__
l

j=1
T
k
j
t
j
< T
k
j
+1

_
T
n
t
l+1
t
m
< T
n+1
T
k
t
m+1

_
= P[W
1
n
(A) t
m
T
n
< W
n+1
U t
m+1
T
n
W
n+1
]
= P[W
1
n
(A) t
m
1
n
, W
n
) < W
n+1
U t
m+1
1
n
, W
n
)W
n+1
]
=
_
A
__
(tm1n,s),)
__
(,t
m+1
1n,s)w]
dP
U
(u)
_
dP
W
n+1
(w)
_
dP
W
n
(s)
=
_
A
__
(tm1n,s),)
P[U t
m+1
1
n
, s)w] dP
W
n+1
(w)
_
dP
Wn
(s)
=
_
A
__
(t
m
1
n
,s),)
P[U t
m+1
1
n
, s)w]
n+1
e

n+1
w
d(w)
_
dP
W
n
(s)
=
_
A
e

n+1
1
n
,s)
__
(tm,)
P[U t
m+1
v]
n+1
e

n+1
v
d(v)
_
dP
W
n
(s)
=
_
A
e

n+1
1n,s)
dP
W
n
(s)
_
(t
m
,)
P[U t
m+1
v] dP
W
n+1
(v)
as well as
P[N
t
m+1
kN
tm
= k
m
]
= P[N
t
m+1
kN
tm
= n]
= P[T
n
t
m
< T
n+1
T
k
t
m+1
]
= P[T
n
t
m
t
m
T
n
< W
n+1
U t
m+1
T
n
W
n+1
]
=
_
(,t
m
]
__
(t
m
s,)
__
(,t
m+1
sw]
dP
U
(u)
_
dP
W
n+1
(w)
_
dP
Tn
(s)
=
_
(,t
m
]
__
(t
m
s,)
P[U t
m+1
sw] dP
W
n+1
(w)
_
dP
T
n
(s)
=
_
(,tm]
__
(tms,)
P[U t
m+1
sw]
n+1
e

n+1
w
d(w)
_
dP
T
n
(s)
=
_
(,t
m
]
e

n+1
s
__
(t
m
,)
P[U t
m+1
v]
n+1
e

n+1
v
d(v)
_
dP
Tn
(s)
=
_
(,tm]
e

n+1
s
dP
T
n
(s)
_
(tm,)
P[U t
m+1
v] dP
W
n+1
.
72 Chapter 3 The Claim Number Process as a Markov Process
This yields
P
_
N
t
m+1
k

j=1
N
t
j
= k
j

_
=
P
_
N
t
m+1
k
m

j=1
N
t
j
= k
j

_
P
_
m

j=1
N
t
j
= k
j

_
=
_
A
e

n+1
1n,s)
dP
Wn
(s)
_
(t
m
,)
P[U t
m+1
v] dP
W
n+1
(v)
_
A
e

n+1
1
n
,s)
dP
W
n
(s)
_
(t
m
,)
dP
W
n+1
(v)
=
_
(tm,)
P[U t
m+1
v] dP
W
n+1
(v)
_
(t
m
,)
dP
W
n+1
(v)
=
_
(,t
m
]
e

n+1
s
dP
T
n
(s)
_
(t
m
,)
P[U t
m+1
v] dP
W
n+1
(v)
_
(,tm]
e

n+1
s
dP
Tn
(s)
_
(tm,)
dP
W
n+1
(v)
=
P[N
t
m+1
k N
t
m
= k
m
]
P[N
t
m
= k
m
]
= P[N
t
m+1
k[N
t
m
= k
m
] .
Therefore, we have
P
_
N
t
m+1
k

j=1
N
t
j
= k
j

_
= P[N
t
m+1
k[N
tm
= k
m
]
for all k N
0
such that k
m
< k.
Of course, the previous identity is also valid if k
m
= k, and it thus holds for all
k N
0
such that k
m
k. But this implies that the identity
P
_
N
t
m+1
= k
m+1

j=1
N
t
j
= k
j

_
= P[N
t
m+1
= k
m+1
[N
t
m
= k
m
]
holds for all k
m+1
N
0
such that k
m
k
m+1
, which means that N
t

tR
+
is a
Markov process.
3.4 A Characterization of Homogeneity 73
(2) To prove the assertion on regularity, consider an admissible pair (n, t). As before,
we obtain
P[N
t+h
= nN
t
= n] = P[T
n
tt+h < T
n+1
]
= P[T
n
tt+hT
n
< W
n+1
]
=
_
(,t]
__
(t+hs,]
dP
W
n+1
(w)
_
dP
Tn
(s)
=
_
(,t]
__
(t+hs,]

n+1
e

n+1
w
d(w)
_
dP
T
n
(s)
=
_
(,t]
e

n+1
(t+hs)
dP
Tn
(s)
= e

n+1
(t+h)
_
(,t]
e

n+1
s
dP
T
n
(s)
for all h R
+
, hence
P[N
t
= n] = e

n+1
t
_
(,t]
e

n+1
s
dP
T
n
(s)
> 0 ,
and thus
p
n,n
(t, t+h) = P[N
t+h
= n[N
t
= n]
=
P[N
t+h
= nN
t
= n]
P[N
t
= n]
=
e

n+1
(t+h)
_
(,t]
e

n+1
s
dP
Tn
(s)
e

n+1
t
_
(,t]
e

n+1
s
dP
T
n
(s)
= e

n+1
h
for all h R
+
.
By what we have shown so far, we have
P[N
t
= n] > 0 ,
which proves (i).
It is also clear that the function h p
n,n
(t, t+h) is continuous, which proves (ii).
Furthermore, we have
lim
h0
1
h
_
1 p
n,n
(t, t+h)
_
= lim
h0
1
h
_
1 e

n+1
h
_
=
n+1
.
74 Chapter 3 The Claim Number Process as a Markov Process
Also, we have
P[N
t+h
= n+1N
t
= n]
= P[T
n
t < T
n+1
t+h < T
n+2
]
= P[T
n
ttT
n
< W
n+1
t+hT
n
t+hT
n
W
n+1
< W
n+2
]
=
_
(,t]
__
(ts,t+hs]
__
(t+hsw,]
dP
W
n+2
(u)
_
dP
W
n+1
(w)
_
dP
T
n
(s)
=
_
(,t]
__
(ts,t+hs]
__
(t+hsw,]

n+2
e

n+2
u
d(u)
_
dP
W
n+1
(w)
_
dP
T
n
(s)
=
_
(,t]
__
(ts,t+hs]
e

n+2
(t+hsw)
dP
W
n+1
(w)
_
dP
Tn
(s)
=
_
(,t]
__
(ts,t+hs]
e

n+2
(t+hsw)

n+1
e

n+1
w
d(w)
_
dP
T
n
(s)
=
_
(,t]
__
(t,t+h]
e

n+2
(t+hv)

n+1
e

n+1
(vs)
d(v)
_
dP
T
n
(s)
=
n+1
e

n+2
(t+h)
_
(,t]
e

n+1
s
dP
T
n
(s)
_
(t,t+h]
e
(
n+2

n+1
)v
d(v) ,
and thus
P[N
t+h
= n+1[N
t
= n]
=
P[N
t+h
= n+1N
t
= n]
P[N
t
= n]
=

n+1
e

n+2
(t+h)
_
(,t]
e

n+1
s
dP
T
n
(s)
_
(t,t+h]
e
(
n+2

n+1
)v
d(v)
e

n+1
t
_
(,t]
e

n+1
s
dP
Tn
(s)
=
n+1
e

n+1
t
e

n+2
(t+h)
_
(t,t+h]
e
(
n+2

n+1
)v
d(v) .
In the case
n+1
,=
n+2
, we obtain
p
n,n+1
(t, t+h) = P[N
t+h
= n+1[N
t
= n]
=
n+1
e

n+1
t
e

n+2
(t+h)
_
(t,t+h]
e
(
n+2

n+1
)v
d(v)
=
n+1
e

n+1
t
e

n+2
(t+h)
e

n+2

n+1
)(t+h)
e
(
n+2

n+1
)t

n+2

n+1
=

n+1

n+2

n+1
_
e

n+1
h
e

n+2
h
_
,
3.4 A Characterization of Homogeneity 75
and thus
lim
h0
1
h
p
n,n+1
(t, t+h) = lim
h0
1
h

n+1

n+2

n+1
_
e

n+1
h
e

n+2
h
_
=
n+1
;
in the case
n+1
=
n+2
, we obtain
p
n,n+1
(t, t+h) = P[N
t+h
= n+1[N
t
= n]
=
n+1
e

n+1
t
e

n+2
(t+h)
_
(t,t+h]
e
(
n+2

n+1
)v
d(v)
=
n+1
e

n+1
t
e

n+1
(t+h)
_
(t,t+h]
d(v)
=
n+1
he

n+1
h
,
and thus
lim
h0
1
h
p
n,n+1
(t, t+h) = lim
h0
1
h

n+1
he

n+1
h
=
n+1
.
Thus, in either case we have
lim
h0
1
h
_
1 p
n,n
(t, t+h)
_
=
n+1
= lim
h0
1
h
p
n,n+1
(t, t+h) .
This proves (iii).
We have thus shown that N
t

tR
+
is regular with intensities
n

nN
satisfying

n
(t) =
n
for all n N and t R
+
. Therefore, (b) implies (a). 2
In the situation of Theorem 3.4.2, explosion is either certain or impossible:
3.4.3 Corollary (ZeroOne Law on Explosion). Let
n

nN
be a sequence
of real numbers in (0, ) and assume that the claim number process N
t

tR
+
is a
regular Markov process with intensities
n

nN
satisfying
n
(t) =
n
for all nN
and t R
+
.
(a) If the series

n=1
1/
n
diverges, then the probability of explosion is equal to
zero.
(b) If the series

n=1
1/
n
converges, then the probability of explosion is equal to
one.
This follows from Theorems 3.4.2 and 1.2.1.
76 Chapter 3 The Claim Number Process as a Markov Process
Problems
3.4.A The following are equivalent:
(a) The claim number process is a regular Markov process and its intensities are
all constant.
(b) The claim interarrival times are independent and exponentially distributed.
3.4.B Let : R
+
R
+
be a continuous function which is strictly increasing and
satises (0) = 0 and lim
t
(t) = . For all t R
+
, dene
N

t
:= N

1
(t)
.
Then N

t

tR
+
is a claim number process. Moreover, if N
t

tR
+
has indepen-
dent increments or is a Markov process or satises the Chapman-Kolmogorov
equations or is regular, then the same is true for N

t

tR
+
.
3.4.C Operational Time: A continuous function : R
+
R
+
which is strictly
increasing and satises (0) = 0 and lim
t
(t) = is an operational time for
the claim number process N
t

tR
+
if the claim number process N

t

tR
+
is
homogeneous.
Assume that the claim number process N
t

tR
+
satises the ChapmanKolmo-
gorov equations and is regular with intensities
n

nN
, and let
n

nN
be a
sequence of real numbers in (0, ). Then the following are equivalent:
(a) There exists an operational time for the claim number process N
t

tR
+
such that

n
(t) =
n
holds for all n N and t R
+
.
(b) There exists a continuous function : R
+
(0, ) satisfying
_

0
(s) ds =
and such that
n
(t) =
n
(t) holds for all n N and t R
+
.
Hint: Use Theorem 3.2.1 and choose and , respectively, such that the identity
(t) =
_
t
0
(s) ds holds for all t R
+
.
3.4.D Operational Time: If N
t

tR
+
is an inhomogeneous Poisson process with
intensity satisfying
_

0
(s) ds = , then there exists an operational time
such that N

t

tR
+
is a homogeneous Poisson process with parameter 1.
3.4.E Operational Time: Study the explosion problem for claim number processes
which are regular Markov processes and possess an operational time.
3.5 A Characterization of the Poisson Process
By Theorem 3.1.8, the Poisson process is a regular Markov process whose intensities
are all identical and constant. By Theorems 3.3.2, 3.4.2, and 2.3.4, regular Markov
processes whose intensities are either all identical or all constant share some of the
characteristic properties of the Poisson process:
For a regular Markov process whose intensities are all identical, the increments
of the claim number process are independent and Poisson distributed.
For a regular Markov process whose intensities are all constant, the claim inter-
arrival times are independent and exponentially distributed.
3.6 A Claim Number Process with Contagion 77
In the rst case, the intensities are related to the parameters of the distributions of
the increments of the claim number process; in the second case, they are related to
the parameters of the distributions of the claim interarrival times. It is therefore
not surprising that the Poisson process is the only regular Markov process whose
intensities are all identical and constant:
3.5.1 Theorem. Let (0, ). Then the following are equivalent:
(a) The claim number process N
t

tR
+
is a regular Markov process with intensities

nN
satisfying
n
(t) = for all nN and t R
+
.
(b) The claim number process N
t

tR
+
has independent increments and is regular
with intensities
n

nN
satisfying
n
(t) = for all n N and t R
+
.
(c) The claim number process N
t

tR
+
is a Poisson process with parameter .
(d) The sequence of claim interarrival times W
n

nN
is independent and satises
P
W
n
= Exp() for all nN.
Proof. The equivalence of (a), (b), and (c) follows from Theorem 3.3.2, and the
equivalence of (a) and (d) follows from Theorem 3.4.2. 2
The equivalence of (c) and (d) in Theorem 3.5.1 is the same as the equivalence of
(a) and (b) in Theorem 2.3.4, which was established by entirely dierent arguments;
in particular, Theorem 2.3.4 was not used in the proof of Theorem 3.5.1.
Problems
3.5.A Assume that the claim number process has stationary independent increments
and is regular. Then its intensities are all identical and constant.
3.5.B The following are equivalent:
(a) The claim number process is a regular Markov process and its intensities are
all identical and constant.
(b) The claim number process has stationary independent increments and is
regular.
(c) The claim number process is a Poisson process.
(d) The claim interarrival times are independent and identically exponentially
distributed.
3.6 A Claim Number Process with Contagion
In the present section we study a regular claim number process which is homogeneous
and satises the ChapmanKolmogorov equations but need not be a Markov process
and fails to be a Poisson process. More precisely, the increments of this claim number
process fail to be independent, fail to be stationary, and fail to be Poisson distributed.
In other words, this claim number process lacks each of the dening properties of
the Poisson process.
78 Chapter 3 The Claim Number Process as a Markov Process
3.6.1 Theorem (Positive Contagion). Let , (0, ). Then the following
are equivalent:
(a) The claim number process N
t

tR
+
satises the ChapmanKolmogorov equa-
tions and is regular with intensities
n

nN
satisfying

n
(t) = ( + n 1)
for all nN and t R
+
.
(b) The identity
p
n,n+k
(t, t+h) =
_
+ n + k 1
k
_
_
e
h
_
+n
_
1 e
h
_
k
holds for each admissible pair (n, t) and all kN
0
and hR
+
.
In this case, the claim number process is homogeneous and satises
P
Nt
= NB
_
, e
t
_
for all t (0, ), and the increments are neither independent nor stationary and
satisfy
P
N
t+h
N
t
= NB
_
,
1
1 + e
(t+h)
e
t
_
for all t R
+
and h(0, ).
Proof. Assume rst that (a) holds.
(1) For each admissible pair (n, t) and all h R
+
, we have
p
n,n
(t, t+h) = e

_
t+h
t

n+1
(u) du
= e

_
t+h
t
(+n) du
=
_
e
h
_
+n
.
(2) Assume now that the identity
p
n,n+k
(t, t+h) =
_
+ n + k 1
k
_
_
e
h
_
+n
_
1 e
h
_
k
holds for some k N
0
and for each admissible pair (n, t) and all h R
+
(which
because of (1) is the case for k = 0). Then we have
p
n,n+k+1
(t, t+h) =
_
t+h
t
p
n,n+k
(t, u)
n+k+1
(u) p
n+k+1,n+k+1
(u, t+h) du
=
_
t+h
t
_
_
+ n + k 1
k
_
_
e
(ut)
_
+n
_
1 e
(ut)
_
k

_
+n +k
_

_
e
(t+hu)
_
+n+k+1
_
du
3.6 A Claim Number Process with Contagion 79
=
_
+ n + (k + 1) 1
k + 1
_
_
e
h
_
+n

_
t+h
t
(k + 1)
_
1 e
(ut)
_
k
_
e
(t+hu)
_
k+1
du
=
_
+ n + (k + 1) 1
k + 1
_
_
e
h
_
+n

_
t+h
t
(k + 1)
_
e
(t+hu)
e
h
_
k
e
(t+hu)
du
=
_
+ n + (k + 1) 1
k + 1
_
_
e
h
_
+n
_
1 e
h
_
k+1
for each admissible pair (n, t) and all h R
+
.
(3) Because of (1) and (2), (a) implies (b).
Assume now that (b) holds.
(1) To verify the ChapmanKolmogorov equations, consider (k, n, r, t) / and
s [r, t] such that P[N
r
= k] > 0.
In the case r = t, there is nothing to prove.
In the case r < t, we have
n

m=k
p
k,m
(r, s) p
m,n
(s, t)
=
n

m=k
_
_
+ m1
mk
_
_
e
(sr)
_
+k
_
1 e
(sr)
_
mk

_
+ n 1
n m
_
_
e
(ts)
_
+m
_
1 e
(ts)
_
nm
_
=
n

m=k
_
_
+ n 1
n k
_
_
e
(tr)
_
+k
_
1 e
(tr)
_
nk

_
n k
mk
__
e
(ts)
e
(tr)
1 e
(tr)
_
mk
_
1 e
(ts)
1 e
(tr)
_
(nk)(mk)
_
=
_
+ n 1
n k
_
_
e
(tr)
_
+k
_
1 e
(tr)
_
nk

m=k
_
n k
mk
__
e
(ts)
e
(tr)
1 e
(tr)
_
mk
_
1 e
(ts)
1 e
(tr)
_
(nk)(mk)
=
_
+ k + (nk) 1
n k
_
_
e
(tr)
_
+k
_
1 e
(tr)
_
nk
= p
k,n
(r, t) .
Therefore, N
t

tR
+
satises the ChapmanKolmogorov equations.
80 Chapter 3 The Claim Number Process as a Markov Process
(2) To prove the assertion on regularity, consider an admissible pair (n, t).
First, since
P[N
t
= n] = p
0,n
(0, t)
=
_
+n 1
n
_
_
e
t
_

_
1 e
t
_
n
,
we have P[N
t
= n] > 0, which proves (i).
Second, since
p
n,n
(t, t+h) =
_
e
h
_
+n
= e
(+n)h
,
the function h p
n,n
(t, t+h) is continuous, which proves (ii).
Finally, we have
lim
h0
1
h
_
1 p
n,n
(t, t+h)
_
= lim
h0
1
h
_
1 e
(+n)h
_
=
_
+ n
_
,
as well as
lim
h0
1
h
p
n,n+1
(t, t+h) = lim
h0
1
h
_
+n
__
e
h
_
+n
_
1 e
h
_
=
_
+ n
_
.
This proves (iii).
We have thus shown that N
t

tR
+
is regular with intensities
n

nN
satisfying

n
(t) = ( + n 1) for all n N and t R
+
.
(3) Because of (1) and (2), (b) implies (a).
Let us now prove the nal assertions.
(1) Consider t R
+
. For all n N
0
, we have
P[N
t
= n] = p
0,n
(0, t)
=
_
+n 1
n
_
_
e
t
_

_
1 e
t
_
n
.
This yields
P
Nt
= NB
_
, e
t
_
.
(2) Consider now t, h R
+
. Because of (1), we have, for all k N
0
,
P[N
t+h
N
t
= k]
=

n=0
P[N
t+h
N
t
= kN
t
= n]
3.6 A Claim Number Process with Contagion 81
=

n=0
_
P[N
t+h
= n + k[N
t
= n] P[N
t
= n]
_
=

n=0
p
n,n+k
(t, t+h) p
0,n
(0, t)
=

n=0
_
_
+n +k 1
k
_
_
e
h
_
+n
_
1 e
h
_
k

_
+n 1
n
_
_
e
t
_

_
1 e
t
_
n
_
=

n=0
_
_
+k 1
k
__
e
(t+h)
1 e
h
+ e
(t+h)
_

_
1 e
h
1 e
h
+e
(t+h)
_
k

_
+k + n 1
n
_
_
1 e
h
+e
(t+h)
_
+k
_
e
h
e
(t+h)
_
n
_
=
_
+ k 1
k
__
e
(t+h)
1 e
h
+ e
(t+h)
_

_
1 e
h
1 e
h
+ e
(t+h)
_
k

n=0
_
+ k +n 1
n
_
_
1 e
h
+ e
(t+h)
_
+k
_
e
h
e
(t+h)
_
n
=
_
+ k 1
k
__
e
(t+h)
1 e
h
+ e
(t+h)
_

_
1 e
h
1 e
h
+ e
(t+h)
_
k
=
_
+ k 1
k
__
1
e
(t+h)
e
t
+ 1
_

_
e
(t+h)
e
t
e
(t+h)
e
t
+ 1
_
k
.
This yields
P
N
t+h
Nt
= NB
_
,
1
1 + e
(t+h)
e
t
_
.
(3) It is clear that N
t

tR
+
is homogeneous.
(4) It is clear from the formula for the transition probabilities that N
t

tR
+
cannot
have independent increments. 2
Comment: The previous result illustrates the ne line between Markov claim
number processes and claim number processes which only satisfy the Chapman
Kolmogorov equations: By Theorem 3.4.2, the sequence of claim interarrival times
W
n

nN
is independent and satises P
W
n
= Exp((+n1) ) for all n N if
and only if the claim number process N
t

tR
+
is a regular Markov process with
intensities
n

nN
satisfying
n
= ( + n 1) for all n N; in this case the
claim number process clearly satises the equivalent conditions of Theorem 3.6.1.
82 Chapter 3 The Claim Number Process as a Markov Process
On the other hand, the equivalent conditions of Theorem 3.6.1 involve only the
twodimensional distributions of the claim number process, which clearly cannot
tell anything about whether the claim number process is a Markov process or not.
The following result justies the term positive contagion:
3.6.2 Corollary (Positive Contagion). Let , (0, ) and assume that the
claim number process N
t

tR
+
satises the ChapmanKolmogorov equations and is
regular with intensities
n

nN
satisfying

n
(t) = ( + n 1)
for all nN and t R
+
. Then, for all t, h (0, ), the function
n P[N
t+h
n+1[N
t
= n]
is strictly increasing and independent of t.
Thus, in the situation of Corollary 3.6.2, the conditional probability of at least one
claim occurring in the interval (t, t+h] increases with the number of claims already
occurred up to time t.
The claim number process considered in this section illustrates the importance of
the negativebinomial distribution.
Another regular claim number process which is not homogeneous but has station-
ary increments and which is also related to the negativebinomial distribution and
positive contagion will be studied in Chapter 4 below.
Problems
3.6.A Negative Contagion: For N, modify the denitions given in Section 3.1
by replacing the set N
0
by 0, 1, . . . , .
Let N and (0, ). Then the following are equivalent:
(a) The claim number process N
t

tR
+
satises the ChapmanKolmogorov
equations and is regular with intensities
n

nN
satisfying

n
(t) = ( + 1 n)
for all n 1, . . . , and t R
+
.
(b) The identity
p
n,n+k
(t, t+h) =
_
n
k
_
_
1 e
h
_
k
_
e
h
_
nk
holds for each admissible pair (n, t) and all k 0, 1, . . . , n and h R
+
.
3.6 A Claim Number Process with Contagion 83
In this case, N
t

tR
+
is homogeneous, the increments are neither independent
nor stationary, and
P
N
t+h
N
t
= B
_
, e
t
_
1e
h
_
_
holds for all t R
+
and h (0, ); in particular,
P
N
t
= B
_
, 1e
t
_
holds for all t (0, ).
3.6.B Negative Contagion: Let N and (0, ) and assume that the claim
number process N
t

tR
+
satises the ChapmanKolmogorov equations and is
regular with intensities
n

nN
satisfying

n
(t) = ( + 1 n)
for all n 1, . . . , and t R
+
. Then, for all t, h (0, ), the function
n P[N
t+h
n+1[N
t
= n]
is strictly decreasing and independent of t.
3.6.C If the claim number process has positive or negative contagion, then
P
W
1
= Exp() .
Compare this result with Theorem 3.4.2 and try to compute P
W
n
for arbitrary
n N or n 1, . . . , , respectively.
3.6.D If the claim number process has positive or negative contagion, change parameters
by choosing
t
(0, ) and
t
R0 such that

n
(t) =
t
+ (n1)
t
holds for all n N
0
and t R
+
. Interpret the limiting case
t
= 0.
3.6.E Extend the discussion of claim number processes with positive or negative con-
tagion to claim number processes which satisfy the ChapmanKolmogorov equa-
tions and are regular with intensities
n

nN
satisfying

n
(t) = (t) (+n1)
or

n
(t) = (t) (n+1)
for some continuous function : R
+
(0, ) and (0, ).
For a special case of this problem, see Problem 4.3.A.
84 Chapter 3 The Claim Number Process as a Markov Process
3.7 Remarks
The relations between the dierent classes of claim number processes studied in this
chapter are presented in the following table:
ChapmanKolmogorov

n
(t)
Markov

n
(t)
Homogeneous CK

n
(t) =
n
Inhomogeneous Poisson

n
(t) = (t)
Homogeneous Markov

n
(t) =
n
Positive Contagion

n
(t) = (+n1)
Homogeneous Poisson

n
(t) =
Regular Claim Number Processes
Of course, the dierent classes of regular claim number processes appearing in one
line are not disjoint.
In Chapter 4 below, we shall study another, and rather important, claim number
process which turns out to be a regular Markov process with intensities depending
on time and on the number of claims already occurred; this is an example of a claim
number process which is not homogeneous and has dependent stationary increments.
For a discussion of operational time, see B uhlmann [1970], Mammitzsch [1983, 1984],
and Sundt [1984, 1991, 1993].
Markov claim number processes can be generalized to semiMarkov processes which
allow to model multiple claims, to distinguish dierent types of claims, and to take
into account claim severities; see Stormer [1970] and Nollau [1978], as well as Janssen
[1977, 1982, 1984] and Janssen and DeDominicis [1984].
Chapter 4
The Mixed Claim Number
Process
The choice of appropriate assumptions for the claim number process describing a
portfolio of risks is a serious problem. In the present chapter we discuss a general
method to reduce the problem. The basic idea is to interpret an inhomogeneous
portfolio of risks as a mixture of homogeneous portfolios. The claim number process
describing the inhomogeneous portfolio is then dened to be a mixture of the claim
number processes describing the homogeneous portfolios such that the mixing distri-
bution represents the structure of the inhomogeneous portfolio. We rst specify the
general model (Section 4.1) and then study the mixed Poisson process (Section 4.2)
and, in particular, the PolyaLundberg process (Section 4.3).
The prerequisites required for the present chapter exceed those for the previous ones
in that conditioning will be needed not only in the elementary setting but in full
generality. For information on conditioning, see Bauer [1991], Billingsley [1995], and
Chow and Teicher [1988].
4.1 The Model
Throughout this chapter, let N
t

tR
+
be a claim number process and let be a
random variable.
Interpretation: We consider an inhomogeneous portfolio of risks. We assume that
this inhomogeneous portfolio is a mixture of homogeneous portfolios of the same size
which are similar but distinct, and we also assume that each of the homogeneous
portfolios can be identied with a realization of the random variable . This means
that the distribution of represents the structure of the inhomogeneous portfolio
under consideration. The properties of the (unconditional) distribution of the claim
number process N
t

tR
+
are then determined by the properties of its conditional
distribution with respect to and by the properties of the distribution of .
86 Chapter 4 The Mixed Claim Number Process
Accordingly, the random variable and its distribution P

are said to be the struc-


ture parameter and the structure distribution of the portfolio, respectively, and the
claim number process N
t

tR
+
is said to be a mixed claim number process.
The claim number process N
t

tR
+
has
conditionally independent increments with respect to if, for all m N and
t
0
, t
1
, . . . , t
m
R
+
such that 0 = t
0
< t
1
< . . . < t
m
, the family of increments
N
t
j
N
t
j1

j1,...,m
is conditionally independent with respect to , and it has
conditionally stationary increments with respect to if, for all m N and
t
0
, t
1
, . . . , t
m
, h R
+
such that 0 =t
0
<t
1
< . . . <t
m
, the family of increments
N
t
j
+h
N
t
j1
+h

j1,...,m
has the same conditional distribution with respect to
as N
t
j
N
t
j1

j1,...,m
.
It is immediate from the denitions that a claim number process having condition-
ally independent increments with respect to has conditionally stationary incre-
ments with respect to if and only if the identity P
N
t+h
Nt[
= P
N
h
[
holds for all
t, h R
+
.
4.1.1 Lemma. If the claim number process has conditionally stationary increments
with respect to , then it has stationary increments.
Proof. For all mN and t
0
, t
1
, . . . , t
m
, h R
+
such that 0=t
0
<t
1
< . . . <t
m
and
for all k
1
, . . . , k
m
N
0
, we have
P
_
m

j=1
N
t
j
+h
N
t
j1
+h
= k
j

_
=
_

P
_
m

j=1
N
t
j
+h
N
t
j1
+h
= k
j

()
_
dP()
=
_

P
_
m

j=1
N
t
j
N
t
j1
= k
j

()
_
dP()
= P
_
m

j=1
N
t
j
N
t
j1
= k
j

_
,
as was to be shown. 2
By contrast, a claim number process having conditionally independent increments
with respect to need not have independent increments; see Theorem 4.2.6 below.
The following lemma is immediate from the properties of conditional expectation:
4.1.2 Lemma. If the claim number process N
t

tR
+
has nite expectations, then
the identities
E[N
t
] = E[E(N
t
[)]
and
var [N
t
] = E[var (N
t
[)] + var [E(N
t
[)]
hold for all t R
+
.
The second identity of Lemma 4.1.2 is called the variance decomposition.
4.2 The Mixed Poisson Process 87
4.2 The Mixed Poisson Process
The claim number process N
t

tR
+
is a mixed Poisson process with parameter if
is a random variable satisfying P

[(0, )] = 1 and if N
t

tR
+
has conditionally
stationary independent increments with respect to such that P
N
t
[
= P(t) holds
for all t (0, ).
We rst collect some basic properties of the mixed Poisson process:
4.2.1 Lemma. If the claim number process N
t

tR
+
is a mixed Poisson process,
then it has stationary increments and satises
P[N
t
= n] > 0
for all t (0, ) and n N
0
.
Proof. By Lemma 4.1.1, the claim number process N
t

tR
+
has stationary
increments. Moreover, we have
P[N
t
= n] =
_

e
t()
(t())
n
n!
dP()
=
_
R
e
t
(t)
n
n!
dP

() ,
and thus P[N
t
= n] > 0. 2
An obvious question to ask is whether or not a mixed Poisson process can have
independent increments. We shall answer this question at the end of this section.
The following result is a partial generalization of Lemma 2.3.1:
4.2.2 Lemma (Multinomial Criterion). If the claim number process N
t

tR
+
is a mixed Poisson process, then the identity
P
_
m

j=1
N
t
j
N
t
j1
= k
j

N
t
m
= n
_
=
n!

m
j=1
k
j
!
m

j=1
_
t
j
t
j1
t
m
_
k
j
holds for all m N and t
0
, t
1
, . . . , t
m
R
+
such that 0 = t
0
< t
1
< . . . < t
m
and
for all n N
0
and k
1
, . . . , k
m
N
0
such that

m
j=1
k
j
= n.
Proof. We have
P
_
m

j=1
N
t
j
N
t
j1
= k
j
N
tm
= n
_
= P
_
m

j=1
N
t
j
N
t
j1
= k
j

_
88 Chapter 4 The Mixed Claim Number Process
=
_

P
_
m

j=1
N
t
j
N
t
j1
= k
j

()
_
dP()
=
_

j=1
P(N
t
j
N
t
j1
= k
j
[()) dP()
=
_

j=1
e
(t
j
t
j1
)()
((t
j
t
j1
)())
k
j
k
j
!
dP()
=
n!

m
j=1
k
j
!
m

j=1
_
t
j
t
j1
t
m
_
k
j

e
t
m
()
(t
m
())
n
n!
dP()
as well as
P[N
t
m
= n] =
_

e
t
m
()
(t
m
())
n
n!
dP() ,
and thus
P
_
m

j=1
N
t
j
N
t
j1
= k
j

N
t
m
= n
_
=
P
_
m

j=1
N
t
j
N
t
j1
= k
j
N
tm
= n
_
P[N
tm
= n]
=
n!

m
j=1
k
j
!
m

j=1
_
t
j
t
j1
t
m
_
k
j

e
tm()
(t
m
())
n
n!
dP()
_

e
t
m
()
(t
m
())
n
n!
dP()
=
n!

m
j=1
k
j
!
m

j=1
_
t
j
t
j1
t
m
_
k
j
,
as was to be shown. 2
In the case m = 2, the multinomial criterion is called Lundbergs binomial criterion.
The multinomial criterion allows to check the assumption that the claim number
process is a mixed Poisson process and is useful to compute the nite dimensional
distributions of a mixed Poisson process.
As a rst consequence of the multinomial criterion, we show that every mixed Poisson
process is a Markov process:
4.2.3 Theorem. If the claim number process is a mixed Poisson process, then it
is a Markov process.
4.2 The Mixed Poisson Process 89
Proof. Consider m N, t
1
, . . . , t
m
, t
m+1
(0, ), and n
1
, . . . , n
m
, n
m+1
N
0
such that t
1
< . . . < t
m
< t
m+1
and P[

m
j=1
N
t
j
= n
j
] > 0. Dene t
0
:= 0 and
n
0
:= 0. Because of the multinomial criterion, we have
P
_
N
t
m+1
= n
m+1

j=1
N
t
j
= n
j

_
=
P
_
m+1

j=1
N
t
j
= n
j

_
P
_
m

j=1
N
t
j
= n
j

_
=
P
_
m+1

j=1
N
t
j
N
t
j1
= n
j
n
j1

_
P
_
m

j=1
N
t
j
N
t
j1
= n
j
n
j1

_
=
P
_
m+1

j=1
N
t
j
N
t
j1
= n
j
n
j1

N
t
m+1
= n
m+1

_
P[N
t
m+1
= n
m+1
]
P
_
m

j=1
N
t
j
N
t
j1
= n
j
n
j1

N
tm
= n
m

_
P[N
tm
= n
m
]
=
n
m+1
!

m+1
j=1
(n
j
n
j1
)!
m+1

j=1
_
t
j
t
j1
t
m+1
_
n
j
n
j1
P[N
t
m+1
= n
m+1
]
n
m
!

m
j=1
(n
j
n
j1
)!
m

j=1
_
t
j
t
j1
t
m
_
n
j
n
j1
P[N
t
m
= n
m
]
=
_
n
m+1
n
m
__
t
m
t
m+1
_
nm
_
t
m+1
t
m
t
m+1
_
n
m+1
nm

P[N
t
m+1
= n
m+1
]
P[N
t
m
= n
m
]
as well as
P[N
t
m+1
= n
m+1
[N
tm
= n
m
]
= P[N
tm
= n
m
[N
t
m+1
= n
m+1
]
P[N
t
m+1
= n
m+1
]
P[N
t
m
= n
m
]
=
_
n
m+1
n
m
__
t
m
t
m+1
_
nm
_
t
m+1
t
m
t
m+1
_
n
m+1
nm

P[N
t
m+1
= n
m+1
]
P[N
tm
= n
m
]
,
and hence
P
_
N
t
m+1
= n
m+1

j=1
N
t
j
= n
j

_
= P[N
t
m+1
= n
m+1
[N
t
m
= n
m
] .
This proves the assertion. 2
90 Chapter 4 The Mixed Claim Number Process
4.2.4 Theorem. If the claim number process is a mixed Poisson process with
parameter such that has nite moments of any order, then it is regular and
satises
p
n,n+k
(t, t+h) =
h
k
k!

E[e
(t+h)

n+k
]
E[e
t

n
]
for each admissible pair (n, t) and all k N
0
and h (0, ) and with intensities

nN
satisfying

n
(t) =
E[e
t

n
]
E[e
t

n1
]
for all n N and t R
+
.
Proof. Because of the multinomial criterion, we have
p
n,n+k
(t, t+h) = P[N
t+h
= n +k[N
t
= n]
= P[N
t
= n[N
t+h
= n + k]
P[N
t+h
= n + k]
P[N
t
= n]
=
_
n + k
n
__
t
t + h
_
n
_
h
t + h
_
k

e
(t+h)
((t+h))
n+k
(n+k)!
dP
_

e
t
(t)
n
n!
dP
=
h
k
k!

E[e
(t+h)

n+k
]
E[e
t

n
]
.
Let us now prove the assertion on regularity.
First, Lemma 4.2.1 yields P[N
t
= n] > 0, which proves (i).
Second, since
p
n,n
(t, t+h) =
E[e
(t+h)

n
]
E[e
t

n
]
;
the function h p
n,n
(t, t+h) is continuous, which proves (ii).
Finally, we have
lim
h0
1
h
_
1 p
n,n
(t, t+h)
_
= lim
h0
1
h
_
1
E[e
(t+h)

n
]
E[e
t

n
]
_
=
E[e
t

n+1
]
E[e
t

n
]
as well as
lim
h0
1
h
p
n,n+1
(t, t+h) = lim
h0
1
h
h
E[e
(t+h)

n+1
]
E[e
t

n
]
=
E[e
t

n+1
]
E[e
t

n
]
.
This proves (iii).
4.2 The Mixed Poisson Process 91
We have thus shown that N
t

tR
+
is regular with intensities
n

nN
satisfying

n
(t) =
E[e
t

n
]
E[e
t

n1
]
for all n N and t R
+
. 2
The following result provides another possibility to check the assumption that the
claim number process is a mixed Poisson process and can be used to estimate the
expectation and the variance of the structure distribution of a mixed Poisson process:
4.2.5 Lemma. If the claim number process N
t

tR
+
is a mixed Poisson process
with parameter such that has nite expectation, then the identities
E[N
t
] = t E[]
and
var [N
t
] = t E[] + t
2
var []
hold for all t R
+
; in particular, the probability of explosion is equal to zero.
Proof. The identities for the moments are immediate from Lemma 4.1.2, and the
nal assertion follows from Corollary 2.1.5. 2
Thus, if the claim number process N
t

tR
+
is a mixed Poisson process such that
the structure distribution is nondegenerate and has nite expectation, then, for all
t (0, ), the variance of N
t
strictly exceeds the expectation of N
t
; moreover, the
variance of N
t
is of order t
2
while the expectation of N
t
is of order t.
We can now answer the question whether a mixed Poisson process can have inde-
pendent increments:
4.2.6 Theorem. If the claim number process N
t

tR
+
is a mixed Poisson pro-
cess with parameter such that has nite expectation, then the following are
equivalent:
(a) The distribution of is degenerate.
(b) The claim number process N
t

tR
+
has independent increments.
(c) The claim number process N
t

tR
+
is an inhomogeneous Poisson process.
(d) The claim number process N
t

tR
+
is a (homogeneous) Poisson process.
Proof. Obviously, (a) implies (d), (d) implies (c), and (c) implies (b).
Because of Lemma 4.2.5 and Theorem 2.3.4, (b) implies (d).
Assume now that N
t

tR
+
is a Poisson process. Then we have
E[N
t
] = var [N
t
]
for all t R
+
, and Lemma 4.2.5 yields var [] = 0, which means that the structure
distribution is degenerate. Therefore, (d) implies (a). 2
92 Chapter 4 The Mixed Claim Number Process
Problems
4.2.A Assume that the claim number process is a mixed Poisson process with parameter
such that has nite moments of any order. Then it has dierentiable inten-
sities
n

nN
satisfying

t
n
(t)

n
(t)
=
n
(t)
n+1
(t)
for all n N and t R
+
.
4.2.B Assume that the claim number process is a mixed Poisson process with parameter
such that has nite moments of any order. Then the following are equivalent:
(a) The intensities are all identical.
(b) The intensities are all constant.
(c) The claim number process is a homogeneous Poisson process.
4.2.C Estimation: Assume that the claim number process N
t

tR
+
is a mixed
Poisson process with parameter such that has a nondegenerate distribution
and a nite second moment, and dene = E[]/var [] and = E[]
2
/var [].
Then the inequality
E
_
_

+N
t
+t
_
2
_
E[((a +bN
t
))
2
]
holds for all t R
+
and for all a, b R.
4.2.D Operational Time: If the claim number process is a mixed Poisson process for
which an operational time exists, then there exist , (0, ) such that the
intensities satisfy

n
(t) =
+n 1
+t
for all n N and t R
+
.
Hint: Use Problems 4.2.A and 3.4.C.
4.2.E Discrete Time Model: The claim number process N
l

lN
0
is a mixed binomial
process with parameter if P

[(0, 1)] = 1 and if N


l

lN
0
has conditionally
stationary independent increments with respect to such that P
N
l
= B(l, )
holds for all l N.
If the claim number process is a mixed binomial process, then it has stationary
increments.
4.2.F Discrete Time Model: If the claim number process N
l

lN
0
is a mixed bino-
mial process, then the identity
P
_
_
m

j=1
N
l
j
N
l
j1
= k
j

N
l
m
= n
_
_
=
m

j=1
_
l
j
l
j1
k
j
_

_
l
m
n
_
1
holds for all m N and l
0
, l
1
, . . . , l
m
N
0
such that 0 = l
0
< l
1
< . . . < l
m
and
for all n N
0
and k
1
, . . . , k
m
N
0
such that k
j
l
j
l
j1
for all j 1, . . . , m
and such that

m
j=1
k
j
= n.
4.3 The PolyaLundberg Process 93
4.2.G Discrete Time Model: If the claim number process is a mixed binomial pro-
cess, then it is a Markov process.
4.2.H Discrete Time Model: If the claim number process N
l

lN
0
is a mixed bino-
mial process with parameter , then the identities
E[N
l
] = l E[]
and
var [N
l
] = l E[
2
] +l
2
var []
hold for all l N
0
.
4.2.I Discrete Time Model: If the claim number process N
l

lN
0
is a mixed bino-
mial process with parameter , then the following are equivalent:
(a) The distribution of is degenerate.
(b) The claim number process N
l

lN
0
has independent increments.
(c) The claim number process N
l

lN
0
is a binomial process.
4.2.J Discrete Time Model: Assume that the claim number process N
l

lN
0
is a
mixed binomial process with parameter such that has a nondegenerate dis-
tribution, and dene = E[
2
]/var [] and = E[]. Then the inequality
E
_
_

+N
l
+l
_
2
_
E[((a +bN
l
))
2
]
holds for all l N
0
and for all a, b R.
4.3 The PolyaLundberg Process
The claim number process N
t

tR
+
is a PolyaLundberg process with parameters
and if it is a mixed Poisson process with parameter such that P

= Ga(, ).
4.3.1 Theorem. If the claim number process N
t

tR
+
is a PolyaLundberg
process with parameters and , then the identity
P
_
m

j=1
N
t
j
= n
j

_
=
( + n
m
)
()

m
j=1
(n
j
n
j1
)!
_

+ t
m
_
m

j=1
_
t
j
t
j1
+ t
m
_
n
j
n
j1
holds for all m N, for all t
0
, t
1
, . . . , t
m
R
+
such that 0 = t
0
< t
1
< . . . < t
m
,
and for all n
0
, n
1
, . . . , n
m
N
0
such that 0 = n
0
n
1
. . . n
m
; in particular, the
claim number process N
t

tR
+
has stationary dependent increments and satises
P
Nt
= NB
_
,

+ t
_
for all t (0, ) and
P
N
t+h
N
t
[N
t
= NB
_
+ N
t
,
+ t
+ t + h
_
for all t, h (0, ).
94 Chapter 4 The Mixed Claim Number Process
Proof. Because of Lemma 4.2.1 and Theorem 4.2.6, is it clear that the claim
number process has stationary dependent increments.
Let us now prove the remaining assertions:
(1) We have
P[N
t
= n]
=
_

P(N
t
= n[()) dP()
=
_

e
t()
(t())
n
n!
dP()
=
_
R
e
t
(t)
n
n!
dP

()
=
_
R
e
t
(t)
n
n!

()
e

(0,)
() d()
=
( + n)
() n!
_

+ t
_

_
t
+ t
_
n

_
R
( + t)
+n
( + n)
e
(+t)

+n1

(0,)
() d()
=
_
+ n 1
n
__

+ t
_

_
t
+t
_
n
,
and hence
P
N
t
= NB
_
,

+ t
_
.
(2) Because of the multinomial criterion and (1), we have
P
_
m

j=1
N
t
j
= n
j

_
= P
_
m

j=1
N
t
j
= n
j

N
t
m
= n
m

_
P[N
t
m
= n
m
]
= P
_
m

j=1
N
t
j
N
t
j1
= n
j
n
j1

N
tm
= n
m

_
P[N
tm
= n
m
]
=
n
m
!

m
j=1
(n
j
n
j1
)!
m

j=1
_
t
j
t
j1
t
m
_
n
j
n
j1

_
+ n
m
1
n
m
__

+ t
m
_

_
t
m
+ t
m
_
n
m
=
( +n
m
)
()

m
j=1
(n
j
n
j1
)!
_

+ t
m
_
m

j=1
_
t
j
t
j1
+ t
m
_
n
j
n
j1
.
4.3 The PolyaLundberg Process 95
(3) Because of (2) and (1), we have
P[N
t+h
= n + k N
t
= n]
=
( + n + k)
() n! k!
_

+ t + h
_

_
t
+ t + h
_
n
_
h
+ t + h
_
k
and
P[N
t
= n] =
( +n)
() n!
_

+ t
_

_
t
+ t
_
n
,
hence
P[N
t+h
N
t
= k[N
t
= n]
=
P[N
t+h
N
t
= k N
t
= n]
P[N
t
= n]
=
P[N
t+h
= n+k N
t
= n]
P[N
t
= n]
=
( + n + k)
() n! k!
_

+ t + h
_

_
t
+ t + h
_
n
_
h
+ t + h
_
k
( + n)
() n!
_

+ t
_

_
t
+ t
_
n
=
_
+ n + k 1
k
__
+ t
+ t + h
_
+n
_
h
+t +h
_
k
,
and thus
P
N
t+h
N
t
[N
t
= NB
_
+ N
t
,
+t
+ t + h
_
.
This completes the proof. 2
By Theorem 4.3.1, the PolyaLundberg process is not too dicult to handle since
its nite dimensional distributions are completely known although its increments
are not independent.
As an immediate consequence of Theorem 4.3.1, we see that the PolyaLundberg
process has positive contagion:
4.3.2 Corollary (Positive Contagion). If the claim number process N
t

tR
+
is a PolyaLundberg process, then, for all t, h (0, ), the function
n P[N
t+h
n+1[N
t
= n]
is strictly increasing.
96 Chapter 4 The Mixed Claim Number Process
Thus, for the PolyaLundberg process, the conditional probability of at least one
claim occurring in the interval (t, t+h] increases with the number of claims already
occurred up to time t.
We complete the discussion of the PolyaLundberg process by showing that it is a
regular Markov process which is not homogeneous:
4.3.3 Corollary. If the claim number process N
t

tR
+
is a PolyaLundberg
process with parameters and , then it is a regular Markov process satisfying
p
n,n+k
(t, t+h) =
_
+ n + k 1
k
__
+ t
+ t + h
_
+n
_
h
+ t + h
_
k
for each admissible pair (n, t) and all k N
0
and h R
+
and with intensities

nN
satisfying

n
(t) =
+ n 1
+ t
for all n N and t R
+
; in particular, the claim number process N
t

tR
+
is not
homogeneous.
Proof. By Theorems 4.2.3 and 4.2.4, the PolyaLundberg process is a regular
Markov process.
By Theorem 4.3.1, we have
p
n,n+k
(t, t+h) = P[N
t+h
= n+k[N
t
= n]
= P[N
t+h
N
t
= k[N
t
= n]
=
_
+ n + k 1
k
__
+t
+ t + h
_
+n
_
h
+ t + h
_
k
.
This yields

n+1
(t) = lim
h0
1
h
p
n,n+1
(t, t+h)
= lim
h0
1
h
( +n)
_
+t
+ t + h
_
+n
h
+ t + h
=
+ n
+ t
,
and thus

n
(t) =
+ n 1
+ t
.
Since the intensities are not constant, it follows from Lemma 3.4.1 that the claim
number process is not homogeneous. 2
In conclusion, the PolyaLundberg process is a regular Markov process which is
not homogeneous and has stationary dependent increments. The discussion of the
PolyaLundberg process thus completes the investigation of regular claim number
processes satisfying the ChapmanKolmogorov equations.
4.3 The PolyaLundberg Process 97
Problems
4.3.A Let , (0, ). Then the following are equivalent:
(a) The claim number process N
t

tR
+
satises the ChapmanKolmogorov
equations and is regular with intensities
n

nN
satisfying

n
(t) =
+n 1
+t
for all nN and t R
+
.
(b) The identity
p
n,n+k
(t, t+h) =
_
+n +k 1
k
__
+t
+t +h
_
+n
_
h
+t +h
_
k
holds for each admissible pair (n, t) and all kN
0
and hR
+
.
In this case, the claim number process satises
P
N
t
= NB
_
,

+t
_
for all t (0, ),
P
N
t+h
N
t
= NB
_
,

+h
_
for all t, h(0, ), and
P[N
t+h
N
t
= k[N
t
= n] =
_
+n +k 1
k
__
+t
+t +h
_
+n
_
h
+t +h
_
k
for all t, h (0, ) and all n, k N
0
; in particular, the claim number process
has dependent increments and is not homogeneous.
Compare the result with Theorem 4.3.1 and Corollary 4.3.3.
4.3.B If the claim number process N
t

tR
+
is a PolyaLundberg process with para-
meters and , then
P
W
1
= Par(, ) .
Try to compute P
W
n
and P
T
n
for arbitrary n N.
4.3.C Prediction: If the claim number process N
t

tR
+
is a PolyaLundberg process
with parameters and , then the inequality
E
_
_
N
t+h

_
N
t
+

+t

+
t
+t

N
t
t
_
h
_
2
_
E[(N
t+h
Z)
2
]
holds for all t, h (0, ) and for every random variable Z satisfying E[Z
2
] <
and (Z) T
t
.
Interpret the quotient / and compare the result with Theorem 2.3.5.
98 Chapter 4 The Mixed Claim Number Process
4.3.D Prediction: If the claim number process N
t

tR
+
is a PolyaLundberg process
with parameters and , then the identity
E(N
t+h
N
t
[N
t
) =
_

+t

+
t
+t

N
t
t
_
h
holds for all t, h (0, ).
Interpret the expression in brackets and compare the result with Problem 4.3.C.
4.3.E Estimation: If the claim number process N
t

tR
+
is a PolyaLundberg process
with parameters and , then the identity
P
[Nt
= Ga( +t, +N
t
)
and hence
E([N
t
) =
+N
t
+t
holds for all t R
+
.
Compare the result with Problems 4.3.D and 4.2.C.
4.3.F Assume that P

= Ga(, , ) with (0, ). If the claim number process


N
t

tR
+
is a mixed Poisson process with parameter , then the identity
P
_
_
m

j=1
N
t
j
= n
j

_
_
=
n
m
!

m
j=1
(n
j
n
j1
)!

m

j=1
(t
j
t
j1
)
n
j
n
j1
e
t
m
_

+t
m
_

nm

k=0

k
k!
_
+n
m
k 1
n
m
k
__
1
+t
m
_
nmk
holds for all m N, for all t
0
, t
1
, . . . , t
m
R
+
such that 0 = t
0
< t
1
< . . . < t
m
and for all n
0
, n
1
, . . . , n
m
N
0
such that 0 = n
0
n
1
. . . n
m
; in particular,
the claim number process N
t

tR
+
has stationary dependent increments and
satises
P
N
t
= Del
_
t, ,

+t
_
for all t (0, ).
4.3.G Assume that P

= Ga(, , ) with (0, ). If the claim number process


N
t

tR
+
is a mixed Poisson process with parameter , then it is a regular
Markov process satisfying
p
n,n+k
(t, t+h) =
_
n +k
k
_
h
k
e
h
_
+t
+t +h
_

n+k

j=0

j
j!
_
+n +k j 1
n +k j
__
1
+t +h
_
n+kj
n

j=0

j
j!
_
+n j 1
n j
__
1
+t +h
_
nj
4.3 The PolyaLundberg Process 99
for each admissible pair (n, t) and all k N
0
and h R
+
and with intensities

nN
satisfying

n
(t) = n
n

j=0

j
j!
_
+n j 1
n j
__
1
+t
_
nj
n1

j=0

j
j!
_
+n 1 j 1
n 1 j
__
1
+t
_
n1j
for all n N and t R
+
; in particular, the claim number process N
t

tR
+
is
not homogeneous.
4.3.H Assume that N
t
t

tR
+
and N
tt
t

tR
+
are independent claim number processes
such that N
t
t

tR
+
is a homogeneous Poisson process with parameter and
N
tt
t

tR
+
is a PolyaLundberg process with parameters and . For all t R
+
,
dene
N
t
:= N
t
t
+N
tt
t
.
Show that N
t

tR
+
is a claim number process and compute its nite dimensional
distributions.
4.3.I Discrete Time Model: Assume that P

= Be(, ). If the claim number


process N
l

lN
0
is a mixed binomial process with parameter , then the identity
P
_
_
m

j=1
N
l
j
= n
j

_
_
=
m

j=1
_
l
j
l
j1
n
j
n
j1
_
_
l
m
n
m
_
_
+n
m
1
n
m
__
+l
m
n
m
1
l
m
n
m
_
_
+ +l
m
1
l
m
_
holds for all m N, for all l
0
, l
1
, . . . , l
m
N
0
such that 0 = l
0
< l
1
< . . . < l
m
,
and for all n
0
, n
1
, . . . , n
m
N
0
such that 0 = n
0
n
1
. . . n
m
and such
that n
j
l
j
holds for all j 1, . . . , m; in particular, the claim number process
N
l

lN
0
has stationary dependent increments and satises
P
N
m
= NH(m, , )
for all m N, and
P
N
m+l
Nm[Nm
= NH(l, +N
m
, +mN
m
)
for all m N
0
and l N.
4.3.J Discrete Time Model: Assume that P

= Be(, ). If the claim number


process N
l

lN
0
is a mixed binomial process with parameter , then the identity
P
W
1
[k] =
B(+1, +k1)
B(, )
holds for all k N.
Try to compute P
W
n
and P
T
n
for arbitrary n N.
100 Chapter 4 The Mixed Claim Number Process
4.3.K Discrete Time Model: Assume that P

= Be(, ). If the claim number


process N
l

lN
0
is a mixed binomial process with parameter , then it is a
Markov process satisfying
p
n,n+k
(m, m+l) =
_
+n +k 1
k
__
+mn +l k 1
l k
_
_
+ +m+l 1
l
_
for all m, l N
0
, n 0, 1, . . . , m and k 0, 1, . . . , m + l n; in particular,
the claim number process N
l

lN
0
is not homogeneous.
4.3.L Discrete Time Model: Assume that P

= Be(, ). If the claim number pro-


cess N
l

lN
0
is a mixed binomial process with parameter , then the inequality
E
_
_
N
m+l

_
N
m
+
+
++m
E[] +
m
++m

N
m
m
_
l
_
2
_
E[(N
m+l
Z)
2
]
holds for all m, l N and for every random variable Z satisfying E[Z
2
] < and
(Z) T
m
.
Compare the result with Problem 2.3.F.
4.3.M Discrete Time Model: Assume that P

= Be(, ). If the claim number


process N
l

lN
0
is a mixed binomial process with parameter , then the identity
E(N
m+l
N
m
[N
m
) =
_
+
+ +m
E[] +
m
+ +m

N
m
m
_
l
holds for all m, l N.
4.3.N Discrete Time Model: Assume that P

= Be(, ). If the claim number


process N
l

lN
0
is a mixed binomial process with parameter , then the identity
P
[N
m
= Be( +N
m
, +mN
m
)
and hence
E([N
m
) =
+N
m
+ +m
holds for all m N
0
.
4.4 Remarks
The background of the model discussed in this chapter may become clearer if we
agree to distinguish between insurers portfolios and abstract portfolios. An insurers
portfolio is, of course, a group of risks insured by the same insurance company; by
contrast, an abstract portfolio is a group of risks which are distributed among one or
more insurance companies. Homogeneous insurers portfolios tend to be small, but
homogeneous abstract portfolios may be large. Therefore, it seems to be reasonable
4.4 Remarks 101
to combine information from all insurance companies engaged in the same insurance
business in order to model the claim number processes of homogeneous abstract
portfolios. This gives reliable information on the conditional distribution of the
claim number process of each insurers portfolio. The single insurance company is
then left with the appropriate choice of the structure distribution of its own portfolio.
The interpretation of the mixed claim number process may be extended as follows:
Until now, we have assumed that an inhomogeneous insurers portfolio is a mixture
of abstract portfolios which are homogeneous. In some classes of nonlife insurance
like industrial re risks insurance, however, it is dicult to imagine portfolios which
are homogeneous and suciently large to provide reliable statistical information. It
is therefore convenient to modify the model by assuming that a rather inhomoge-
neous insurers portfolio is a mixture of rather homogeneous abstract portfolios the
mathematics of mixing does not change at all. Once this generalization is accepted,
we may also admit more than two levels of inhomogeneity and mix rather homoge-
neous portfolios to describe more and more inhomogeneous portfolios. In any case,
the level of inhomogeneity of a portfolio is reected by the variance of the structure
distribution, which is equal to zero if and only if the portfolio is homogeneous.
There is still another variant of the interpretations given so far: The mixed claim
number process may interpreted as the claim number process of a single risk selected
at random from an inhomogeneous portfolio of risks which are similar but distinct
and which can be characterized by a realization of the structure parameter which is
not observable. This interpretation provides a link between claim number processes
or aggregate claims processes and experience rating a theory of premium calculation
which, at its core, is concerned with optimal prediction of future claim numbers or
claim severities of single risks, given individual claims experience in the past as well
as complete or partial information on the structure of the portfolio from which the
risk was selected. An example of an optimal prediction formula for future claim
numbers is given in Problem 4.3.C. For an introduction to experience rating, see
Sundt [1984, 1991, 1993] and Schmidt [1992].
According to Seal [1983], the history of the mixed Poisson process dates back to
a paper of Dubourdieu [1938] who proposed it as a model in automobile insurance
but did not compare the model with statistical data. Two years later, the mixed
Poisson process became a central topic in the famous book by Lundberg [1940]
who developed its mathematical theory and studied its application to sickness and
accident insurance. Still another application was suggested by Hofmann [1955] who
studied the mixed Poisson process as a model for workmens compensation insurance.
For further details on the mixed Poisson process, it is worth reading Lundberg
[1940]; see also Albrecht [1981], the surveys by Albrecht [1985] and Pfeifer [1986],
and the recent manuscript by Grandell [1995]. Pfeifer [1982a, 1982b] and Gerber
[1983] studied asymptotic properties of the claim arrival process induced by a Polya
Lundberg process, and Pfeifer and Heller [1987] and Pfeifer [1987] characterized the
102 Chapter 4 The Mixed Claim Number Process
mixed Poisson process in terms of the martingale property of certain transforms of
the claim arrival process. The mixed Poisson process with a structure parameter
having a threeparameter Gamma distribution was rst considered by Delaporte
[1960, 1965], and further structure distributions were discussed by Troblinger [1961],
Kupper [1962], Albrecht [1981], and Gerber [1991]. These authors, however, studied
only the onedimensional distributions of the resulting claim number processes.
In order to select a claim number process as a model for given data, it is useful to
recall some criteria which are fullled for certain claim number processes but fail for
others. The following criteria refer to the inhomogeneous Poisson process and the
mixed Poisson process, each of them including the homogeneous Poisson process as
a special case:
Independent increments: The inhomogeneous Poisson process has independent
increments, but the mixed Poisson process with a nondegenerate structure dis-
tribution has not.
Stationary increments: The mixed Poisson process has stationary increments,
but the inhomogeneous Poisson process with a nonconstant intensity has not.
Multinomial criterion: The multinomial criterion with probabilities propertional
to time intervals holds for the mixed Poisson process, but it fails for the inhomo-
geneous Poisson process with a nonconstant intensity.
Moment inequality: The moment inequality,
E[N
t
] var [N
t
]
for all t (0, ), is a strict inequality for the mixed Poisson process with a
nondegenerate structure distribution, but it is an equality for the inhomogeneous
Poisson process.
Once the type of the claim number process has been selected according to the
previous criteria, the next step should be to choose parameters and to examine the
goodness of t of the theoretical nite dimensional distributions to the empirical
ones.
Automobile insurance was not only the godfather of the mixed Poisson process when
it was introduced in risk theory by Dubourdieu [1938] without reference to statistical
data; it still is the most important class of insurance in which the mixed Poisson
process seems to be a good model for the empirical claim number process. This is
indicated in the papers by Thyrion [1960], Delaporte [1960, 1965], Tr oblinger [1961],
Derron [1962], Bichsel [1964], and Ruohonen [1988], and in the book by Lemaire
[1985]. As a rule, however, these authors considered data from a single period only
and hence compared onedimensional theoretical distributions with empirical ones.
In order to model the development of claim numbers in time, it would be necessary
to compare the nite dimensional distributions of selected claim number processes
with those of empirical processes.
Chapter 5
The Aggregate Claims Process
In the present chapter we introduce and study the aggregate claims process. We rst
extend the model considered so far (Section 5.1) and then establish some general
results on compound distributions (Section 5.2). It turns out that aggregate claims
distributions can be determined explicitly only in a few exceptional cases. However,
the most important claim number distributions can be characterized by a simple
recursion formula (Section 5.3) and admit the recursive computation of aggregate
claims distributions and their moments when the claim size distribution is discrete
(Section 5.4).
5.1 The Model
Throughout this chapter, let N
t

tR
+
be a claim number process and let T
n

nN
0
be the claim arrival process induced by the claim number process. We assume that
the exceptional null set is empty and that the probability of explosion is equal to
zero.
Furthermore, let X
n

nN
be a sequence of random variables. For t R
+
, dene
S
t
:=
Nt

k=1
X
k
=

n=0

Nt=n
n

k=1
X
k
.
Of course, we have S
0
= 0.
Interpretation:
X
n
is the claim amount or claim severity or claim size of the nth claim.
S
t
is the aggregate claims amount of the claims occurring up to time t.
Accordingly, the sequence X
n

nN
is said to be the claim size process, and the
family S
t

tR
+
is said to be the aggregate claims process induced by the claim
number process and the claim size process.
104 Chapter 5 The Aggregate Claims Process
For the remainder of this chapter, we assume that the sequence X
n

nN
is i. i. d.
and that the claim number process N
t

tR
+
and the claim size process X
n

nN
are independent.
0
-
s
6
0 T
1
() T
2
() T
3
() T
4
() T
5
()
t

S
t
()
Claim Arrival Process and Aggregate Claims Process
Our rst result gives a formula for the computation of aggregate claims distributions:
5.1.1 Lemma. The identity
P[S
t
B] =

n=0
P[N
t
= n] P
__
n

k=1
X
k
B
__
holds for all t R
+
and B B(R).
Proof. We have
P[S
t
B] = P
__
N
t

k=1
X
k
B
__
= P
_

n=0
N
t
= n
_
n

k=1
X
k
B
__
=

n=0
P[N
t
= n] P
__
n

k=1
X
k
B
__
,
as was to be shown. 2
5.1 The Model 105
For s, t R
+
such that s t, the increment of the aggregate claims process
S
t

tR
+
on the interval (s, t] is dened to be
S
t
S
s
:=
N
t

k=Ns+1
X
k
.
Since S
0
= 0, this is in accordance the denition of S
t
; in addition, we have
S
t
() = (S
t
S
s
)() + S
s
() ,
even if S
s
() is innite. For the aggregate claims process, the properties of having
independent or stationary increments are dened in the same way as for the claim
number process.
5.1.2 Theorem. If the claim number process has independent increments, then
the same is true for the aggregate claims process.
Proof. Consider m N, t
0
, t
1
, . . . , t
m
R
+
such that 0 = t
0
< t
1
< . . . < t
m
, and
B
1
, . . . , B
m
B(R). For all n
0
, n
1
, . . . , n
m
N
0
such that 0 = n
0
n
1
. . . n
m
,
we have
P
__
m

j=1
_
N
t
j
= n
j
_
_

_
m

j=1
_
n
j

k=n
j1
+1
X
k
B
j
___
= P
_
m

j=1
_
N
t
j
= n
j
_
_
P
_
m

j=1
_
n
j

k=n
j1
+1
X
k
B
j
__
= P
_
m

j=1
N
t
j
N
t
j1
= n
j
n
j1

_
P
_
m

j=1
_
n
j

k=n
j1
+1
X
k
B
j
__
=
m

j=1
P[N
t
j
N
t
j1
= n
j
n
j1
]
m

j=1
P
__
n
j

k=n
j1
+1
X
k
B
j
__
=
m

j=1
_
P[N
t
j
N
t
j1
= n
j
n
j1
] P
__
n
j
n
j1

k=1
X
k
B
j
___
.
This yields
P
_
m

j=1
_
S
t
j
S
t
j1
B
j
_
_
= P
_
m

j=1
_
N
t
j

k=N
t
j1
+1
X
k
B
j
__
106 Chapter 5 The Aggregate Claims Process
= P
_

n
1
=0

n
2
=n
1

n
m
=n
m1
_
m

j=1
_
N
t
j
= n
j
_
_

_
m

j=1
_
N
j

k=N
j1
+1
X
k
B
j
___
=

n
1
=0

n
2
=n
1

nm=n
m1
P
__
m

j=1
_
N
t
j
= n
j
_
_

_
m

j=1
_
n
j

k=n
j1
+1
X
k
B
j
___
=

n
1
=0

n
2
=n
1

n
m
=n
m1
m

j=1
_
P[N
t
j
N
t
j1
= n
j
n
j1
] P
__
n
j
n
j1

k=1
X
k
B
j
___
=

l
1
=0

l
2
=0

l
m
=0
m

j=1
_
P[N
t
j
N
t
j1
= l
j
] P
__
l
j

k=1
X
k
B
j
___
=
m

j=1
_

l
j
=0
P[N
t
j
N
t
j1
= l
j
] P
__
l
j

k=1
X
k
B
j
___
.
The assertion follows. 2
5.1.3 Theorem. If the claim number process has stationary independent incre-
ments, then the same is true for the aggregate claims process.
Proof. By Theorem 5.1.2, the aggregate claims process has independent incre-
ments.
Consider t, h R
+
. For all B B(R), Lemma 5.1.1 yields
P
__
S
t+h
S
t
B
_
= P
__
N
t+h

k=N
t
+1
X
k
B
__
= P
_

n=0

m=0
N
t
= nN
t+h
N
t
= m
_
n+m

k=n+1
X
k
B
__
=

n=0

m=0
P[N
t
= n] P[N
t+h
N
t
= m] P
__
n+m

k=n+1
X
k
B
__
=

n=0
P[N
t
= n]

m=0
P[N
h
= m] P
__
m

k=1
X
k
B
__
=

m=0
P[N
h
= m] P
__
m

k=1
X
k
B
__
= P[S
h
B] .
Therefore, the aggregate claims process has also stationary increments. 2
5.1 The Model 107
The assumption of Theorem 5.1.3 is fullled when the claim number process is
a Poisson process; in this case, the aggregate claims process is also said to be a
compound Poisson process.
Interpretation: For certain claim size distributions, the present model admits
interpretations which dier from the one presented before. Of course, if the claim
size distribution satises P
X
[1] = 1, then the aggregate claims process is identical
with the claim number process. More generally, if the claim size distribution satises
P
X
[N] = 1, then the following interpretation is possible:
N
t
is the number of claim events up to time t,
X
n
is the number of claims occurring at the nth claim event, and
S
t
is the total number of claims up to time t.
This shows the possibility of applying our model to an insurance business in which
two or more claims may occur simultaneously.
Further interesting interpretations are possible when the claim size distribution is a
Bernoulli distribution:
5.1.4 Theorem (Thinned Claim Number Process). If the claim size dis-
tribution is a Bernoulli distribution, then the aggregate claims process is a claim
number process.
Proof. Since P
X
is a Bernoulli distribution, there exists a null set
X
T such
that
X
n
() 0, 1
holds for all n N and
X
. Furthermore, since E[X] > 0, the ChungFuchs
theorem yields the existence of a null set

T such that

n=1
X
k
() =
holds for all

. Dene

S
:=
X

.
It now follows from the properties of the claim number process N
t

tR
+
and the
previous remarks that the aggregate claims process S
t

tR
+
is a claim number
process with exceptional null set
S
. 2
Comment: Thinned claim number processes occur in the following situations:
Assume that P
X
= B(). If is interpreted as the probability for a claim to
be reported, then N
t
is the number of occurred claims and S
t
is the number of
reported claims, up to time t.
108 Chapter 5 The Aggregate Claims Process
Consider a sequence of random variables Y
n

nN
which is i. i. d. and assume
that N
t

tR
+
and Y
n

nN
are independent. Consider also c (0, ) and
assume that := P[Y > c] (0, 1). Then the sequence X
n

nN
, given by
X
n
:=
Y
n
>c
, is i. i. d. such that P
X
= B(), and N
t

tR
+
and X
n

nN
are
independent. If Y
n
is interpreted as the size of the nth claim, then N
t
is the
number of occurred claims and S
t
is the number of large claims (exceeding c),
up to time t. This is of interest in excess of loss reinsurance, or XL reinsurance
for short, where the reinsurer assumes responsibility for claims exceeding the
priority c.
A particularly interesting result on thinned claim number processes is the following:
5.1.5 Corollary (Thinned Poisson Process). If the claim number process is
a Poisson process with parameter and if the claim size distribution is a Bernoulli
distribution with parameter , then the aggregate claims process is a Poisson process
with parameter .
Proof. By Theorem 5.1.3, the aggregate claims process has stationary independent
increments.
Consider t (0, ). By Lemma 5.1.1, we have
P[S
t
= m] =

n=0
P[N
t
= n] P
__
n

k=1
X
k
= m
__
=

n=m
e
t
(t)
n
n!
_
n
m
_

m
(1)
nm
=

n=m
e
t
(t)
m
m!
e
(1)t
((1)t)
nm
(nm)!
= e
t
(t)
m
m!

j=0
e
(1)t
((1)t)
j
j!
= e
t
(t)
m
m!
for all m N
0
, and hence P
S
t
= P(t).
Therefore, the aggregate claims process is a Poisson process with parameter . 2
We shall return to the thinned claim number process in Chapter 6 below.
Problem
5.1.A Discrete Time Model (Thinned Binomial Process): If the claim number
process is a binomial process with parameter and if the claim size distribution
is a Bernoulli distribution with parameter , then the aggregate claims process
is a binomial process with parameter .
5.2 Compound Distributions 109
5.2 Compound Distributions
In this and the following sections of the present chapter we shall study the problem
of computing the distribution of the aggregate claims amount S
t
at a xed time t.
To this end, we simplify the notation as follows:
Let N be a random variable satisfying P
N
[N
0
] = 1 and dene
S :=
N

k=1
X
k
.
Again, the random variables N and S will be referred to as the claim number and
the aggregate claims amount, respectively.
We assume throughout that N and X
n

nN
are independent, and we maintain
the assumption made before that the sequence X
n

nN
is i. i. d. In this case, the
aggregate claims distribution P
S
is said to be a compound distribution and is denoted
by
C(P
N
, P
X
) .
Compound distributions are also named after the claim number distribution; for
example, if P
N
is a Poisson distribution, then C(P
N
, P
X
) is said to be a compound
Poisson distribution.
The following result is a reformulation of Lemma 5.1.1:
5.2.1 Lemma. The identity
P
S
[B] =

n=0
P
N
[n] P
n
X
[B]
holds for all B B(R).
Although this formula is useful in certain special cases, it requires the computa-
tion of convolutions which may be dicult or at least time consuming. For the
most important claim number distributions and for claim size distributions satis-
fying P
X
[N
0
] = 1, recursion formulas for the aggregate claims distribution and its
moments will be given in Section 5.4 below.
In some cases, it is also helpful to look at the characteristic function of the aggregate
claims distribution.
5.2.2 Lemma. The characteristic function of S satises

S
(z) = m
N
(
X
(z)) .
110 Chapter 5 The Aggregate Claims Process
Proof. For all z R, we have

S
(z) = E
_
e
izS

= E
_
e
iz

N
k=1
X
k
_
= E
_

n=0

N=n
e
iz

n
k=1
X
k
_
= E
_

n=0

N=n
n

k=1
e
izX
k
_
=

n=0
P[N = n]
n

k=1
E
_
e
izX
k

n=0
P[N = n] E
_
e
izX

n
=

n=0
P[N = n]
X
(z)
n
= E
_

X
(z)
N

= m
N
(
X
(z)) ,
as was to be shown. 2
To illustrate the previous result, let us consider some applications.
Let us rst consider the compound Poisson distribution.
5.2.3 Corollary. If P
N
= P(), then the characteristic function of S satises

S
(z) = e
(
X
(z)1)
.
If the claim number distribution is either a Bernoulli distribution or a logarithmic
distribution, then the computation of the compound Poisson distribution can be
simplied as follows:
5.2.4 Corollary. For all (0, ) and (0, 1),
C
_
P(), B()
_
= P() .
5.2.5 Corollary. For all (0, ) and (0, 1),
C
_
P(), Log()
_
= NB
_

[ log(1)[
, 1
_
.
5.2 Compound Distributions 111
Thus, the compound Poisson distributions of Corollaries 5.2.4 and 5.2.5 are nothing
else than a Poisson or a negativebinomial distribution, which are easily computed
by recursion; see Theorem 5.3.1 below.
Let us now consider the compound negativebinomial distribution.
5.2.6 Corollary. If $P_N=\mathrm{NB}(\beta,\eta)$, then the characteristic function of $S$ satisfies
$$ \varphi_S(z) = \left(\frac{\eta}{1-(1-\eta)\varphi_X(z)}\right)^{\beta} . $$
A result analogous to Corollary 5.2.4 is the following:
5.2.7 Corollary. For all $\beta\in(0,\infty)$ and $\eta,\vartheta\in(0,1)$,
$$ \mathrm{C}\big(\mathrm{NB}(\beta,\eta),\mathrm{B}(\vartheta)\big) = \mathrm{NB}\Big(\beta,\,\frac{\eta}{\eta+(1-\eta)\vartheta}\Big) . $$
For the compound negativebinomial distribution, we have two further results:
5.2.8 Corollary. For all $m\in\mathbf{N}$, $\eta\in(0,1)$, and $\beta\in(0,\infty)$,
$$ \mathrm{C}\big(\mathrm{NB}(m,\eta),\mathrm{Exp}(\beta)\big) = \mathrm{C}\big(\mathrm{B}(m,1-\eta),\mathrm{Exp}(\beta\eta)\big) . $$
5.2.9 Corollary. For all $m\in\mathbf{N}$ and $\eta,\vartheta\in(0,1)$,
$$ \mathrm{C}\big(\mathrm{NB}(m,\eta),\mathrm{Geo}(\vartheta)\big) = \mathrm{C}\big(\mathrm{B}(m,1-\eta),\mathrm{Geo}(\eta\vartheta)\big) . $$
Corollaries 5.2.8 and 5.2.9 are of interest since the compound binomial distribution
is determined by a finite sum.
Let us now turn to the discussion of moments of the aggregate claims distribution:
5.2.10 Lemma (Wald's Identities). Assume that $E[N^2]<\infty$ and $E[X^2]<\infty$. Then the expectation and the variance of $S$ exist and satisfy
$$ E[S] = E[N]\,E[X] $$
and
$$ \mathrm{var}[S] = E[N]\,\mathrm{var}[X] + \mathrm{var}[N]\,E[X]^2 . $$
Proof. We have
$$ E[S] = E\Big[\sum_{k=1}^{N}X_k\Big]
= E\Big[\sum_{n=1}^{\infty}\chi_{\{N=n\}}\sum_{k=1}^{n}X_k\Big]
= \sum_{n=1}^{\infty}P[N=n]\,E\Big[\sum_{k=1}^{n}X_k\Big]
= \sum_{n=1}^{\infty}P[N=n]\,nE[X]
= E[N]\,E[X] , $$
which is the first identity.
Similarly, we have
$$ E[S^2] = E\bigg[\Big(\sum_{k=1}^{N}X_k\Big)^2\bigg]
= E\bigg[\sum_{n=1}^{\infty}\chi_{\{N=n\}}\Big(\sum_{k=1}^{n}X_k\Big)^2\bigg]
= \sum_{n=1}^{\infty}P[N=n]\,E\bigg[\Big(\sum_{k=1}^{n}X_k\Big)^2\bigg] $$
$$ = \sum_{n=1}^{\infty}P[N=n]\bigg(\mathrm{var}\Big[\sum_{k=1}^{n}X_k\Big]+\Big(E\Big[\sum_{k=1}^{n}X_k\Big]\Big)^2\bigg)
= \sum_{n=1}^{\infty}P[N=n]\big(n\,\mathrm{var}[X]+n^2E[X]^2\big)
= E[N]\,\mathrm{var}[X]+E[N^2]\,E[X]^2 , $$
and thus
$$ \mathrm{var}[S] = E[S^2]-E[S]^2
= \big(E[N]\,\mathrm{var}[X]+E[N^2]\,E[X]^2\big)-\big(E[N]\,E[X]\big)^2
= E[N]\,\mathrm{var}[X]+\mathrm{var}[N]\,E[X]^2 , $$
which is the second identity. $\Box$
5.2.11 Corollary. Assume that $P_N=\mathrm{P}(\alpha)$ and $E[X^2]<\infty$. Then
$$ E[S] = \alpha E[X] $$
and
$$ \mathrm{var}[S] = \alpha E[X^2] . $$
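The short simulation below is not from the original text; it checks Wald's identities for a compound Poisson sum under assumed parameter values.

```python
# Numerical check of Wald's identities for a compound Poisson sum with
# Exp(beta) claim sizes (Corollary 5.2.11).
import numpy as np

rng = np.random.default_rng(seed=2)
alpha, beta, runs = 4.0, 0.5, 50_000

n = rng.poisson(alpha, size=runs)
s = np.array([rng.exponential(1.0 / beta, size=k).sum() for k in n])

ex, ex2 = 1.0 / beta, 2.0 / beta**2        # E[X], E[X^2] for Exp(beta)
print(s.mean(), alpha * ex)                # E[S]   = E[N] E[X]
print(s.var(), alpha * ex2)                # var[S] = alpha E[X^2]
```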
The following general inequalities provide upper bounds for the tail probabilities $P[S\geq c]$ of the aggregate claims distribution:
5.2.12 Lemma. Let $Z$ be a random variable and let $h:\mathbf{R}\to\mathbf{R}_+$ be a measurable function which is strictly increasing on $\mathbf{R}_+$. Then the inequality
$$ P[Z\geq c] \leq \frac{E[h(|Z|)]}{h(c)} $$
holds for all $c\in(0,\infty)$.
Proof. By Markov's inequality, we have
$$ P[Z\geq c] \leq P[|Z|\geq c] = P[h(|Z|)\geq h(c)] \leq \frac{E[h(|Z|)]}{h(c)} , $$
as was to be shown. $\Box$
5.2.13 Lemma (Cantelli's Inequality). Let $Z$ be a random variable satisfying $E[Z^2]<\infty$. Then the inequality
$$ P[Z\geq E[Z]+c] \leq \frac{\mathrm{var}[Z]}{c^2+\mathrm{var}[Z]} $$
holds for all $c\in(0,\infty)$.
Proof. Define $Y := Z-E[Z]$. Then we have $E[Y]=0$ and $\mathrm{var}[Y]=\mathrm{var}[Z]$. For all $x\in(-c,\infty)$, Lemma 5.2.12 yields
$$ P\big[Z\geq E[Z]+c\big] = P[Y\geq c] = P[Y+x\geq c+x]
\leq \frac{E[(Y+x)^2]}{(c+x)^2}
= \frac{E[Y^2]+x^2}{(c+x)^2}
= \frac{\mathrm{var}[Y]+x^2}{(c+x)^2}
= \frac{\mathrm{var}[Z]+x^2}{(c+x)^2} . $$
The last expression attains its minimum at $x=\mathrm{var}[Z]/c$, and this yields
$$ P\big[Z\geq E[Z]+c\big]
\leq \frac{\mathrm{var}[Z]+(\mathrm{var}[Z]/c)^2}{\big(c+\mathrm{var}[Z]/c\big)^2}
= \frac{c^2\,\mathrm{var}[Z]+\big(\mathrm{var}[Z]\big)^2}{\big(c^2+\mathrm{var}[Z]\big)^2}
= \frac{\mathrm{var}[Z]}{c^2+\mathrm{var}[Z]} , $$
as was to be shown. $\Box$
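A small numeric sketch, not from the original text, showing Cantelli's bound applied to the aggregate claims amount of Corollary 5.2.11; the moment values are assumed for illustration.

```python
# Cantelli bound for the tail of S when P_N = P(alpha) and E[X], E[X^2] are known.
alpha, ex, ex2 = 10.0, 1.0, 2.0        # assumed E[X] = 1, E[X^2] = 2
mean_s = alpha * ex                    # E[S]
var_s = alpha * ex2                    # var[S]
for c in (5.0, 10.0, 20.0):
    bound = var_s / (c**2 + var_s)     # P[S >= E[S] + c] <= var[S]/(c^2 + var[S])
    print(f"P[S >= {mean_s + c:.0f}] <= {bound:.4f}")
```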
Problems
5.2.A Let $\mathcal{Q}$ denote the collection of all distributions $Q:\mathcal{B}(\mathbf{R})\to[0,1]$ satisfying $Q[\mathbf{N}_0]=1$. For $Q,R\in\mathcal{Q}$, define $\mathrm{C}(Q,R)$ by letting
$$ \mathrm{C}(Q,R)[B] := \sum_{n=0}^{\infty} Q[\{n\}]\,R^{*n}[B] $$
for all $B\in\mathcal{B}(\mathbf{R})$. Then $\mathrm{C}(Q,R)\in\mathcal{Q}$. Moreover, the map $\mathrm{C}:\mathcal{Q}\times\mathcal{Q}\to\mathcal{Q}$ turns $\mathcal{Q}$ into a noncommutative semigroup with neutral element $\delta_1$.
5.2.B Ammeter Transform: For all $\beta\in(0,\infty)$ and $\eta\in(0,1)$,
$$ \mathrm{C}\big(\mathrm{NB}(\beta,\eta),P_X\big) = \mathrm{C}\big(\mathrm{P}(\beta|\log\eta|),\,\mathrm{C}(\mathrm{Log}(1-\eta),P_X)\big) . $$
5.2.C Discrete Time Model: For all $m\in\mathbf{N}$ and $\eta,\vartheta\in(0,1)$,
$$ \mathrm{C}\big(\mathrm{B}(m,\eta),\mathrm{B}(\vartheta)\big) = \mathrm{B}(m,\eta\vartheta) . $$
5.2.D Discrete Time Model: For all $m\in\mathbf{N}$, $\eta\in(0,1)$, and $\beta\in(0,\infty)$,
$$ \mathrm{C}\big(\mathrm{B}(m,\eta),\mathrm{Exp}(\beta)\big) = \sum_{k=0}^{m}\binom{m}{k}\eta^k(1-\eta)^{m-k}\,\mathrm{Ga}(\beta,k) , $$
where $\mathrm{Ga}(\beta,0) := \delta_0$. Combine the result with Corollary 5.2.8.
5.2.E If $E[N^2]<\infty$ and $E[X^2]<\infty$, then
$$ \min\{E[N],\mathrm{var}[N]\}\,E[X^2] \;\leq\; \mathrm{var}[S] \;\leq\; \max\{E[N],\mathrm{var}[N]\}\,E[X^2] . $$
If $P_N=\mathrm{P}(\alpha)$, then these lower and upper bounds for $\mathrm{var}[S]$ are identical.
5.2.F If $E[N]\in(0,\infty)$, $P_X[\mathbf{R}_+]=1$ and $E[X]\in(0,\infty)$, then
$$ v^2[S] = v^2[N] + \frac{1}{E[N]}\,v^2[X] . $$
5.2.G If $P_N=\mathrm{P}(\alpha)$, $P_X[\mathbf{R}_+]=1$ and $E[X]\in(0,\infty)$, then
$$ v^2[S] = \frac{1}{\alpha}\big(1+v^2[X]\big) . $$
5.3 A Characterization of the Binomial, Poisson,
and Negativebinomial Distributions
Throughout this section, let $Q:\mathcal{B}(\mathbf{R})\to[0,1]$ be a distribution satisfying
$$ Q[\mathbf{N}_0] = 1 . $$
For $n\in\mathbf{N}_0$, define
$$ q_n := Q[\{n\}] . $$
The following result characterizes the most important claim number distributions
by a simple recursion formula:
5.3.1 Theorem. The following are equivalent:
(a) $Q$ is either the Dirac distribution $\delta_0$ or a binomial, Poisson, or negativebinomial distribution.
(b) There exist $a,b\in\mathbf{R}$ satisfying
$$ q_n = \Big(a+\frac{b}{n}\Big)\,q_{n-1} $$
for all $n\in\mathbf{N}$.
Moreover, if $Q$ is a binomial, Poisson, or negativebinomial distribution, then $a<1$.
Proof. The first part of the proof will give the following decomposition of the $(a,b)$-plane, which in turn will be used in the second part:
[Figure: decomposition of the $(a,b)$-plane (with $a+b>0$) into the regions corresponding to $\mathrm{B}(m,\vartheta)$ for $a<0$, $\mathrm{P}(\alpha)$ for $a=0$, and $\mathrm{NB}(\alpha,\eta)$ for $0<a<1$; the origin corresponds to $\delta_0$. Caption: Claim Number Distributions]
Assume first that (a) holds.
If $Q=\delta_0$, then the recursion holds with $a=b=0$.
If $Q=\mathrm{B}(m,\vartheta)$, then the recursion holds with $a=-\vartheta/(1-\vartheta)$ and $b=(m+1)\,\vartheta/(1-\vartheta)$.
If $Q=\mathrm{P}(\alpha)$, then the recursion holds with $a=0$ and $b=\alpha$.
If $Q=\mathrm{NB}(\alpha,\eta)$, then the recursion holds with $a=1-\eta$ and $b=(\alpha-1)(1-\eta)$.
Therefore, (a) implies (b).
Assume now that (b) holds. The assumption implies that $q_0>0$.
Assume first that $q_0=1$. Then we have $Q=\delta_0$.
Assume now that $q_0<1$. Then we have $0<q_1=(a+b)\,q_0$, and thus
$$ a+b>0 . $$
The preceding part of the proof suggests to distinguish the three cases $a<0$, $a=0$, and $a>0$.
(1) The case $a<0$: Since $a+b>0$, we have $b>0$, and it follows that the sequence $\{a+b/n\}_{n\in\mathbf{N}}$ decreases to $a$. Therefore, there exists some $m\in\mathbf{N}$ satisfying $q_m>0$ and $q_{m+1}=0$, hence $0=q_{m+1}=(a+b/(m+1))\,q_m$, and thus $a+b/(m+1)=0$. This yields
$$ m = -\frac{a+b}{a} $$
and $b=(m+1)(-a)$. For all $n\in\{1,\dots,m\}$, this gives
$$ q_n = \Big(a+\frac{b}{n}\Big)q_{n-1} = \Big(a+\frac{(m+1)(-a)}{n}\Big)q_{n-1} = \frac{m+1-n}{n}\,(-a)\,q_{n-1} , $$
and thus
$$ q_n = \bigg(\prod_{k=1}^{n}\frac{m+1-k}{k}\bigg)(-a)^n q_0 = \binom{m}{n}(-a)^n q_0 . $$
Summation gives
$$ 1 = \sum_{n=0}^{m} q_n = \sum_{n=0}^{m}\binom{m}{n}(-a)^n q_0 = (1-a)^m q_0 , $$
hence $q_0=(1-a)^{-m}$, and thus
$$ q_n = \binom{m}{n}(-a)^n q_0 = \binom{m}{n}(-a)^n(1-a)^{-m} = \binom{m}{n}\Big(\frac{-a}{1-a}\Big)^n\Big(\frac{1}{1-a}\Big)^{m-n} $$
for all $n\in\{0,\dots,m\}$. Therefore, we have
$$ Q = \mathrm{B}\Big({-\frac{a+b}{a}},\,\frac{-a}{1-a}\Big) . $$
(2) The case $a=0$: Since $a+b>0$, we have $b>0$. For all $n\in\mathbf{N}$, we have
$$ q_n = \frac{b}{n}\,q_{n-1} , $$
and thus
$$ q_n = \frac{b^n}{n!}\,q_0 . $$
Summation gives
$$ 1 = \sum_{n=0}^{\infty} q_n = \sum_{n=0}^{\infty}\frac{b^n}{n!}\,q_0 = e^b q_0 , $$
hence $q_0=e^{-b}$, and thus
$$ q_n = e^{-b}\,\frac{b^n}{n!} $$
for all $n\in\mathbf{N}_0$. Therefore, we have
$$ Q = \mathrm{P}(b) . $$
(3) The case $a>0$: Define $c:=(a+b)/a$. Then we have $c>0$ and $b=(c-1)\,a$. For all $n\in\mathbf{N}$, this gives
$$ q_n = \Big(a+\frac{b}{n}\Big)q_{n-1} = \Big(a+\frac{(c-1)a}{n}\Big)q_{n-1} = \frac{c+n-1}{n}\,a\,q_{n-1} , $$
and thus
$$ q_n = \bigg(\prod_{k=1}^{n}\frac{c+k-1}{k}\bigg)a^n q_0 = \binom{c+n-1}{n}a^n q_0 . $$
In particular, we have $q_n\geq(1/n)\,c\,a^n q_0$. Since $\sum_{n=0}^{\infty}q_n=1$, we must have $a<1$. Summation yields
$$ 1 = \sum_{n=0}^{\infty} q_n = \sum_{n=0}^{\infty}\binom{c+n-1}{n}a^n q_0 = (1-a)^{-c} q_0 , $$
hence $q_0=(1-a)^c$, and thus
$$ q_n = \binom{c+n-1}{n}a^n q_0 = \binom{c+n-1}{n}(1-a)^c a^n . $$
Therefore, we have
$$ Q = \mathrm{NB}\Big(\frac{a+b}{a},\,1-a\Big) . $$
We have thus shown that (b) implies (a).
The final assertion has already been shown in the preceding part of the proof. $\Box$
Theorem 5.3.1 and its proof suggest to consider the family of all distributions $Q$ satisfying $Q[\mathbf{N}_0]=1$ and for which there exist $a,b\in\mathbf{R}$ satisfying $-b<a<1$ and
$$ q_n = \Big(a+\frac{b}{n}\Big)\,q_{n-1} $$
for all $n\in\mathbf{N}$ as a parametric family of distributions which consists of the binomial, Poisson, and negativebinomial distributions. Note that the Dirac distribution $\delta_0$ is excluded since it does not determine the parameters uniquely.
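The following sketch, not part of the original text, checks the recursion of Theorem 5.3.1 numerically for the three claim number distributions (with an integer negativebinomial parameter, chosen only to keep the probabilities elementary).

```python
# Verify q_n = (a + b/n) q_{n-1} for B(m, theta), P(alpha), and NB(beta, eta).
from math import comb, exp, factorial

def check(q, a, b, n_max=10, tol=1e-12):
    return all(abs(q(n) - (a + b / n) * q(n - 1)) < tol for n in range(1, n_max))

m, theta = 5, 0.3
alpha = 2.5
beta, eta = 3, 0.4

print(check(lambda n: comb(m, n) * theta**n * (1 - theta)**(m - n),
            a=-theta / (1 - theta), b=(m + 1) * theta / (1 - theta)))
print(check(lambda n: exp(-alpha) * alpha**n / factorial(n), a=0.0, b=alpha))
print(check(lambda n: comb(beta + n - 1, n) * eta**beta * (1 - eta)**n,
            a=1 - eta, b=(beta - 1) * (1 - eta)))
```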
Problems
5.3.A Let $Q$ be a nondegenerate distribution satisfying $Q[\mathbf{N}_0]=1$ and
$$ q_n = \Big(a+\frac{b}{n}\Big)\,q_{n-1} $$
for suitable $a,b\in\mathbf{R}$ and all $n\in\mathbf{N}$. Then the expectation and the variance of $Q$ exist and satisfy
$$ E[Q] = \frac{a+b}{1-a} \qquad\text{and}\qquad \mathrm{var}[Q] = \frac{a+b}{(1-a)^2} , $$
and hence
$$ \frac{\mathrm{var}[Q]}{E[Q]} = \frac{1}{1-a} . $$
Interpret this last identity with regard to Theorem 5.3.1 and its proof, and illustrate the result in the $(a,b)$-plane. Show also that
$$ v^2[Q] = \frac{1}{a+b} $$
and interpret the result.
5.3.B If $Q$ is a geometric or logarithmic distribution, then there exist $a,b\in\mathbf{R}$ satisfying
$$ q_n = \Big(a+\frac{b}{n}\Big)\,q_{n-1} $$
for all $n\in\mathbf{N}$ such that $n\geq 2$.
5.3.C If $Q$ is a Delaporte distribution, then there exist $a_1,b_1,b_2\in\mathbf{R}$ satisfying
$$ q_n = \Big(a_1+\frac{b_1}{n}\Big)\,q_{n-1} + \frac{b_2}{n}\,q_{n-2} $$
for all $n\in\mathbf{N}$ such that $n\geq 2$.
5.3.D If $Q$ is a negativehypergeometric distribution, then there exist $c_0,c_1,d_0,d_1\in\mathbf{R}$ satisfying
$$ q_n = \frac{c_0+c_1 n}{d_0+d_1 n}\,q_{n-1} $$
for all $n\in\mathbf{N}$.
5.4 The Recursions of Panjer and DePril
The aim of this section is to prove that the recursion formula for the binomial, Poisson, and negativebinomial distributions translates into recursion formulas for the aggregate claims distribution and its moments when the claim size distribution is concentrated on $\mathbf{N}_0$.
Throughout this section, we assume that
$$ P_X[\mathbf{N}_0] = 1 . $$
Then we have $P_S[\mathbf{N}_0]=1$.
Comment: At the first glance, it may seem a bit confusing to assume $P_X[\mathbf{N}_0]=1$ instead of $P_X[\mathbf{N}]=1$. Indeed, claim severities should be strictly positive, and this is also true for the number of claims at a claim event. However, our assumption allows $P_X$ to be an aggregate claims distribution, and this opens the possibility of applying Panjer's recursion repeatedly for a class of claim number distributions which are compound distributions; see Problem 5.4.D below.
For all $n\in\mathbf{N}_0$, define
$$ p_n := P[N=n] , \qquad f_n := P[X=n] , \qquad g_n := P[S=n] . $$
Then the identity of Lemma 5.2.1 can be written as
$$ g_n = \sum_{k=0}^{\infty} p_k\,f^{*k}_n . $$
Note that the sum occurring in this formula actually extends only over a finite number of terms.
5.4.1 Lemma. The identities
$$ f^{*n}_m = \sum_{k=0}^{m} f^{*(n-1)}_{m-k}\,f_k $$
and
$$ f^{*n}_m = \frac{n}{m}\sum_{k=1}^{m} k\,f^{*(n-1)}_{m-k}\,f_k $$
hold for all $n,m\in\mathbf{N}$.
Proof. For all $j\in\{1,\dots,n\}$ and $k\in\{0,1,\dots,m\}$, we have
$$ P\Big[\Big\{\sum_{i=1}^{n}X_i=m\Big\}\cap\{X_j=k\}\Big]
= P\Big[\Big\{\sum_{i=1,\,i\neq j}^{n}X_i=m-k\Big\}\cap\{X_j=k\}\Big]
= P\Big[\sum_{i=1,\,i\neq j}^{n}X_i=m-k\Big]\,P[X_j=k]
= f^{*(n-1)}_{m-k}\,f_k . $$
This yields
$$ f^{*n}_m = P\Big[\sum_{i=1}^{n}X_i=m\Big]
= \sum_{k=0}^{m} P\Big[\Big\{\sum_{i=1}^{n}X_i=m\Big\}\cap\{X_j=k\}\Big]
= \sum_{k=0}^{m} f^{*(n-1)}_{m-k}\,f_k , $$
which is the first identity, as well as
$$ E\Big[\chi_{\{\sum_{i=1}^{n}X_i=m\}}\,X_j\Big]
= \sum_{k=0}^{m} E\Big[\chi_{\{\sum_{i=1}^{n}X_i=m\}\cap\{X_j=k\}}\,X_j\Big]
= \sum_{k=1}^{m} k\,P\Big[\Big\{\sum_{i=1}^{n}X_i=m\Big\}\cap\{X_j=k\}\Big]
= \sum_{k=1}^{m} k\,f^{*(n-1)}_{m-k}\,f_k $$
for all $j\in\{1,\dots,n\}$, and hence
$$ f^{*n}_m = P\Big[\sum_{i=1}^{n}X_i=m\Big]
= E\Big[\chi_{\{\sum_{i=1}^{n}X_i=m\}}\Big]
= E\Big[\chi_{\{\sum_{i=1}^{n}X_i=m\}}\,\frac{1}{m}\sum_{j=1}^{n}X_j\Big]
= \frac{1}{m}\sum_{j=1}^{n} E\Big[\chi_{\{\sum_{i=1}^{n}X_i=m\}}\,X_j\Big]
= \frac{1}{m}\sum_{j=1}^{n}\sum_{k=1}^{m} k\,f^{*(n-1)}_{m-k}\,f_k
= \frac{n}{m}\sum_{k=1}^{m} k\,f^{*(n-1)}_{m-k}\,f_k , $$
which is the second identity. $\Box$
For the nondegenerate claim number distributions characterized by Theorem 5.3.1, we can now prove a recursion formula for the aggregate claims distribution:
5.4.2 Theorem (Panjer's Recursion). If the distribution of $N$ is nondegenerate and satisfies
$$ p_n = \Big(a+\frac{b}{n}\Big)\,p_{n-1} $$
for some $a,b\in\mathbf{R}$ and all $n\in\mathbf{N}$, then
$$ g_0 = \begin{cases} (1-\vartheta+\vartheta f_0)^m & \text{if } P_N=\mathrm{B}(m,\vartheta) \\[2pt] e^{-\alpha(1-f_0)} & \text{if } P_N=\mathrm{P}(\alpha) \\[2pt] \Big(\dfrac{\eta}{1-(1-\eta)f_0}\Big)^{\beta} & \text{if } P_N=\mathrm{NB}(\beta,\eta) \end{cases} $$
and the identity
$$ g_n = \frac{1}{1-af_0}\sum_{k=1}^{n}\Big(a+\frac{bk}{n}\Big)\,g_{n-k}\,f_k $$
holds for all $n\in\mathbf{N}$; in particular, if $f_0=0$, then $g_0=p_0$.
Proof. The verification of the formula for $g_0$ is straightforward. For $m\in\mathbf{N}$, Lemma 5.4.1 yields
$$ g_m = \sum_{j=0}^{\infty} p_j\,f^{*j}_m
= \sum_{j=1}^{\infty} p_j\,f^{*j}_m
= \sum_{j=1}^{\infty}\Big(a+\frac{b}{j}\Big)\,p_{j-1}\,f^{*j}_m
= \sum_{j=1}^{\infty} a\,p_{j-1}\,f^{*j}_m + \sum_{j=1}^{\infty}\frac{b}{j}\,p_{j-1}\,f^{*j}_m $$
$$ = \sum_{j=1}^{\infty} a\,p_{j-1}\sum_{k=0}^{m} f^{*(j-1)}_{m-k}\,f_k
+ \sum_{j=1}^{\infty}\frac{b}{j}\,p_{j-1}\,\frac{j}{m}\sum_{k=1}^{m} k\,f^{*(j-1)}_{m-k}\,f_k
= \sum_{j=1}^{\infty} a\,p_{j-1}\,f^{*(j-1)}_m\,f_0
+ \sum_{j=1}^{\infty}\sum_{k=1}^{m}\Big(a+\frac{bk}{m}\Big)\,p_{j-1}\,f^{*(j-1)}_{m-k}\,f_k $$
$$ = af_0\sum_{j=0}^{\infty} p_j\,f^{*j}_m
+ \sum_{k=1}^{m}\Big(a+\frac{bk}{m}\Big)\bigg(\sum_{j=0}^{\infty} p_j\,f^{*j}_{m-k}\bigg)f_k
= af_0\,g_m + \sum_{k=1}^{m}\Big(a+\frac{bk}{m}\Big)\,g_{m-k}\,f_k , $$
and the assertion follows. $\Box$
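A sketch implementation of Panjer's recursion is given below; it is not part of the original text, and the example claim size distribution is chosen for illustration only.

```python
# Panjer's recursion (Theorem 5.4.2) for claim size distributions on N_0.
import numpy as np

def panjer(a, b, g0, f, n_max):
    """g0 = P[S = 0]; f[k] = P[X = k]; returns g[0..n_max] with g[n] = P[S = n]."""
    f = np.asarray(f, dtype=float)
    g = np.zeros(n_max + 1)
    g[0] = g0
    for n in range(1, n_max + 1):
        k = np.arange(1, min(n, len(f) - 1) + 1)
        g[n] = np.sum((a + b * k / n) * g[n - k] * f[k]) / (1.0 - a * f[0])
    return g

# Compound Poisson example: P_N = P(alpha), so a = 0, b = alpha, g_0 = exp(-alpha(1 - f_0)).
alpha = 2.0
f = [0.0, 0.4, 0.3, 0.2, 0.1]                    # claim sizes on {1,...,4}
g = panjer(a=0.0, b=alpha, g0=np.exp(-alpha * (1 - f[0])), f=f, n_max=40)
print(g[:6], g.sum())                            # g.sum() is close to 1 for large n_max
```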
An analogous result holds for the moments of the aggregate claims distribution:
5.4.3 Theorem (DePril's Recursion). If the distribution of $N$ is nondegenerate and satisfies
$$ p_n = \Big(a+\frac{b}{n}\Big)\,p_{n-1} $$
for some $a,b\in\mathbf{R}$ and all $n\in\mathbf{N}$, then the identity
$$ E[S^n] = \frac{1}{1-a}\sum_{k=1}^{n}\binom{n}{k}\Big(a+\frac{bk}{n}\Big)\,E[S^{n-k}]\,E[X^k] $$
holds for all $n\in\mathbf{N}$.
Proof. By Theorem 5.4.2, we have
$$ (1-af_0)\,E[S^n] = (1-af_0)\sum_{m=0}^{\infty} m^n g_m
= \sum_{m=1}^{\infty} m^n(1-af_0)\,g_m
= \sum_{m=1}^{\infty} m^n\sum_{k=1}^{m}\Big(a+\frac{bk}{m}\Big)\,g_{m-k}\,f_k $$
$$ = \sum_{m=1}^{\infty}\sum_{k=1}^{m}\big(am^n+bkm^{n-1}\big)\,g_{m-k}\,f_k
= \sum_{k=1}^{\infty}\sum_{m=k}^{\infty}\big(am^n+bkm^{n-1}\big)\,g_{m-k}\,f_k
= \sum_{k=1}^{\infty}\sum_{l=0}^{\infty}\big(a(k+l)^n+bk(k+l)^{n-1}\big)\,g_l\,f_k , $$
and hence
$$ E[S^n] = af_0\,E[S^n]+(1-af_0)\,E[S^n]
= af_0\sum_{l=0}^{\infty} l^n g_l + \sum_{k=1}^{\infty}\sum_{l=0}^{\infty}\big(a(k+l)^n+bk(k+l)^{n-1}\big)\,g_l\,f_k
= \sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\big(a(k+l)^n+bk(k+l)^{n-1}\big)\,g_l\,f_k $$
$$ = \sum_{k=0}^{\infty}\sum_{l=0}^{\infty}\bigg(a\sum_{j=0}^{n}\binom{n}{j}l^{n-j}k^j + b\sum_{j=0}^{n-1}\binom{n-1}{j}l^{n-1-j}k^{j+1}\bigg)g_l\,f_k
= a\sum_{j=0}^{n}\binom{n}{j}E[S^{n-j}]\,E[X^j] + b\sum_{j=0}^{n-1}\binom{n-1}{j}E[S^{n-1-j}]\,E[X^{j+1}] $$
$$ = aE[S^n] + a\sum_{j=1}^{n}\binom{n}{j}E[S^{n-j}]\,E[X^j] + b\sum_{j=1}^{n}\binom{n-1}{j-1}E[S^{n-j}]\,E[X^j]
= aE[S^n] + \sum_{j=1}^{n}\binom{n}{j}\Big(a+\frac{bj}{n}\Big)E[S^{n-j}]\,E[X^j] , $$
and the assertion follows. $\Box$
In view of Wald's identities, DePril's recursion is of interest primarily for higher order moments of the aggregate claims distribution.
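The following sketch of DePril's recursion is not part of the original text; the moment values of $X$ are assumed for illustration, and the compound Poisson case is compared with Wald's identities.

```python
# DePril's recursion (Theorem 5.4.3): moments E[S^n] from the moments of X
# and the parameters (a, b) of the claim number recursion.
from math import comb

def depril_moments(a, b, ex_moments, n_max):
    """ex_moments[k] = E[X^k] for k = 0,...,n_max, with ex_moments[0] = 1."""
    es = [1.0] + [0.0] * n_max                      # es[n] = E[S^n]
    for n in range(1, n_max + 1):
        es[n] = sum(comb(n, k) * (a + b * k / n) * es[n - k] * ex_moments[k]
                    for k in range(1, n + 1)) / (1.0 - a)
    return es

alpha, ex, ex2 = 2.0, 1.5, 3.0                      # assumed E[X], E[X^2]
es = depril_moments(a=0.0, b=alpha, ex_moments=[1.0, ex, ex2], n_max=2)
print(es[1], alpha * ex)                            # E[S]   = alpha E[X]
print(es[2] - es[1]**2, alpha * ex2)                # var[S] = alpha E[X^2]
```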
Problems
5.4.A Use DePril's recursion to solve Problem 5.3.A.
5.4.B Use DePril's recursion to obtain Wald's identities in the case where $P_X[\mathbf{N}_0]=1$.
5.4.C Discuss the Ammeter transform in the case where $P_X[\mathbf{N}_0]=1$.
5.4.D Assume that $P_X[\mathbf{N}_0]=1$. If $P_N$ is nondegenerate and satisfies $P_N=\mathrm{C}(Q_1,Q_2)$, then
$$ P_S = \mathrm{C}(P_N,P_X) = \mathrm{C}(\mathrm{C}(Q_1,Q_2),P_X) = \mathrm{C}(Q_1,\mathrm{C}(Q_2,P_X)) . $$
In particular, if each of $Q_1$ and $Q_2$ is a binomial, Poisson, or negativebinomial distribution, then $P_S$ can be computed by applying Panjer's recursion twice.
Extend these results to the case where $P_N$ is obtained by repeated compounding.
5.5 Remarks
Corollary 5.2.5 is due to Quenouille [1949].
Corollary 5.2.8 is due to Panjer and Willmot [1981]; for the case m = 1, see also
Lundberg [1940].
For an extension of Wald's identities to the case where the claim severities are not i. i. d., see Rhiel [1985].
Because of the considerable freedom in the choice of the function h, Lemma 5.2.12 is a flexible instrument to obtain upper bounds on the tail probabilities of the aggregate claims distribution. Particular bounds were obtained by Runnenburg and
Goovaerts [1985] in the case where the claim number distribution is either a Poisson
or a negativebinomial distribution; see also Kaas and Goovaerts [1986] for the case
where the claim size is bounded. Under certain assumptions on the claim size
distribution, an exponential bound on the tail probabilities of the aggregate claims
distribution was obtained by Willmot and Lin [1994]; see also Gerber [1994], who
proved their result by a martingale argument, and Michel [1993a], who considered
the Poisson case.
The Ammeter transform is due to Ammeter [1948].
Theorem 5.3.1 is well-known; see Johnson and Kotz [1969] and Sundt and Jewell
[1981].
Theorem 5.4.2 is due to Panjer [1981]; for the Poisson case, see also Shumway and
Gurland [1960] and Adelson [1966]. Computational aspects of Panjer's recursion
were discussed by Panjer and Willmot [1986], and numerical stability of Panjer's
recursion was recently studied by Panjer and Wang [1993], who defined stability in
terms of an index of error propagation and showed that the recursion is stable in
the Poisson case and in the negativebinomial case but unstable in the binomial case;
see also Wang and Panjer [1994].
There are two important extensions of Panjer's recursion:
- Sundt [1992] obtained a recursion for the aggregate claims distribution when the claim number distribution satisfies
$$ p_n = \sum_{i=1}^{k}\Big(a_i+\frac{b_i}{n}\Big)\,p_{n-i} $$
for all $n\in\mathbf{N}$ such that $n\geq m$, where $k,m\in\mathbf{N}$, $a_i,b_i\in\mathbf{R}$, and $p_{n-i}:=0$ for all $i\in\{1,\dots,k\}$ such that $i>n$; see also Sundt and Jewell [1981] for the case $m=2$ and $k=1$, and Schröter [1990] for the case $m=1$, $k=2$, and $a_2=0$. Examples of claim number distributions satisfying the above recursion are the geometric, logarithmic, and Delaporte distributions. A characterization of the claim number distributions satisfying the recursion with $m=2$ and $k=1$ was given by Willmot [1988].
- Hesselager [1994] obtained a recursion for the aggregate claims distribution when the claim number distribution satisfies
$$ p_n = \frac{\sum_{j=0}^{l} c_j n^j}{\sum_{j=0}^{l} d_j n^j}\;p_{n-1} $$
for all $n\in\mathbf{N}$, where $l\in\mathbf{N}$ and $c_j,d_j\in\mathbf{R}$ such that $d_j\neq 0$ for some $j\in\{0,\dots,l\}$; see also Panjer and Willmot [1982] and Willmot and Panjer [1987], who introduced this class and obtained recursions for $l=1$ and $l=2$, and Wang and Sobrero [1994], who extended Hesselager's recursion to a more general class of claim number distributions. An example of a claim number distribution satisfying the above recursion is the negativehypergeometric distribution.
A common extension of the previous classes of claim number distributions is given by the class of claim number distributions satisfying
$$ p_n = \sum_{i=1}^{k}\frac{\sum_{j=0}^{l} c_{ij} n^j}{\sum_{j=0}^{l} d_{ij} n^j}\;p_{n-i} $$
for all $n\in\mathbf{N}$ such that $n\geq m$, where $k,l,m\in\mathbf{N}$, $c_{ij},d_{ij}\in\mathbf{R}$ such that for each $i\in\{1,\dots,k\}$ there exists some $j\in\{0,\dots,l\}$ satisfying $d_{ij}\neq 0$, and $p_{n-i}:=0$ for all $i\in\{1,\dots,k\}$ such that $i>n$. A recursion for the aggregate claims distribution in this general case is not yet known; for the case $l=m=2$, which applies to certain mixed Poisson distributions, see Willmot [1986] and Willmot and Panjer [1987].
Further important extensions of Panjer's recursion were obtained by Kling and Goovaerts [1993] and Ambagaspitiya [1995]; for special cases of their results, see Gerber [1991], Goovaerts and Kaas [1991], and Ambagaspitiya and Balakrishnan [1994].
The possibility of evaluating the aggregate claims distribution by repeated recursion when the claim number distribution is a compound distribution was first mentioned by Willmot and Sundt [1989] in the case of the Delaporte distribution; see also Michel [1993b].
Theorem 5.4.3 is due to DePril [1986]; for the Poisson case, see also Goovaerts, DeVylder, and Haezendonck [1984]. DePril actually obtained a result more general than Theorem 5.4.3, namely, a recursion for the moments of $S-c$ with $c\in\mathbf{R}$. Also, Kaas and Goovaerts [1985] obtained a recursion for the moments of the aggregate claims distribution when the claim number distribution is arbitrary.
For a discussion of further aspects concerning the computation or approximation of the aggregate claims distribution, see Hipp and Michel [1990] and Schröter [1995] and the references given there.
Let us finally remark that Scheike [1992] studied the aggregate claims process in the case where the claim arrival times and the claim severities may depend on the past through earlier claim arrival times and claim severities.
Chapter 6
The Risk Process in Reinsurance
In the present chapter we introduce the notion of a risk process (Section 6.1) and
study the permanence properties of risk processes under thinning (Section 6.2),
decomposition (Section 6.3), and superposition (Section 6.4). These problems are
of interest in reinsurance.
6.1 The Model
A pair $(\{N_t\}_{t\in\mathbf{R}_+},\{X_n\}_{n\in\mathbf{N}})$ is a risk process if
- $\{N_t\}_{t\in\mathbf{R}_+}$ is a claim number process,
- the sequence $\{X_n\}_{n\in\mathbf{N}}$ is i. i. d., and
- the pair $(\{N_t\}_{t\in\mathbf{R}_+},\{X_n\}_{n\in\mathbf{N}})$ is independent.
In the present chapter, we shall study the following problems which are of interest in reinsurance:
First, for a risk process $(\{N_t\}_{t\in\mathbf{R}_+},\{X_n\}_{n\in\mathbf{N}})$ and a set $C\in\mathcal{B}(\mathbf{R})$, we study the number and the (conditional) claim size distribution of claims with claim size in $C$. These quantities, which will be given an exact definition in Section 6.2, are of interest in excess of loss reinsurance where the reinsurer is concerned with a portfolio of large claims exceeding a priority $c\in(0,\infty)$.
Second, for a risk process $(\{N_t\}_{t\in\mathbf{R}_+},\{X_n\}_{n\in\mathbf{N}})$ and a set $C\in\mathcal{B}(\mathbf{R})$, we study the relation between two thinned risk processes, one being generated by the claims with claim size in $C$ and the other one being generated by the claims with claim size in the complement $\overline{C}$. This is, primarily, a mathematical problem which emerges quite naturally from the problem mentioned before.
Finally, for risk processes $(\{N'_t\}_{t\in\mathbf{R}_+},\{X'_n\}_{n\in\mathbf{N}})$ and $(\{N''_t\}_{t\in\mathbf{R}_+},\{X''_n\}_{n\in\mathbf{N}})$ which are independent, we study the total number and the claim size distribution of all claims which are generated by either of these risk processes. These quantities, which will be made precise in Section 6.4, are of interest to the reinsurer who forms a portfolio by combining two independent portfolios obtained from different direct insurers in order to pass from small portfolios to a larger one.
In either case, it is of interest to know whether the transformation of risk processes
under consideration yields new risk processes of the same type as the original ones.
6.2 Thinning a Risk Process
Throughout this section, let $(\{N_t\}_{t\in\mathbf{R}_+},\{X_n\}_{n\in\mathbf{N}})$ be a risk process and consider $C\in\mathcal{B}(\mathbf{R})$. Define
$$ \eta := P[X\in C] . $$
We assume that the probability of explosion is equal to zero and that $\eta\in(0,1)$.
Let us first consider the thinned claim number process:
For all $t\in\mathbf{R}_+$, define
$$ N'_t := \sum_{n=1}^{N_t}\chi_{\{X_n\in C\}} . $$
Thus, $\{N'_t\}_{t\in\mathbf{R}_+}$ is a particular aggregate claims process.
6.2.1 Theorem (Thinned Claim Number Process). The family $\{N'_t\}_{t\in\mathbf{R}_+}$ is a claim number process.
This follows from Theorem 5.1.4.
Let us now consider the thinned claim size process, that is, the sequence of claim severities taking their values in $C$.
Let $\nu_0 := 0$. For all $l\in\mathbf{N}$, define
$$ \nu_l := \inf\{n\in\mathbf{N}\mid \nu_{l-1}<n,\;X_n\in C\} $$
and let $\mathcal{H}(l)$ denote the collection of all sequences $\{n_j\}_{j\in\{1,\dots,l\}}\subseteq\mathbf{N}$ which are strictly increasing. For $H=\{n_j\}_{j\in\{1,\dots,l\}}\in\mathcal{H}(l)$, define $J(H):=\{1,\dots,n_l\}\setminus H$.
6.2.2 Lemma. The identities
$$ \bigcap_{j=1}^{l}\{\nu_j=n_j\} = \bigcap_{n\in H}\{X_n\in C\}\cap\bigcap_{n\in J(H)}\{X_n\notin C\} $$
and
$$ P\Big[\bigcap_{j=1}^{l}\{\nu_j=n_j\}\Big] = \eta^l(1-\eta)^{n_l-l} $$
hold for all $l\in\mathbf{N}$ and for all $H=\{n_j\}_{j\in\{1,\dots,l\}}\in\mathcal{H}(l)$.
It is clear that, for each $l\in\mathbf{N}$, the family $\{\bigcap_{j=1}^{l}\{\nu_j=n_j\}\}_{H\in\mathcal{H}(l)}$ is disjoint. The following lemma shows that it is, up to a null set, even a partition of $\Omega$:
6.2.3 Corollary. The identity
$$ \sum_{H\in\mathcal{H}(l)} P\Big[\bigcap_{j=1}^{l}\{\nu_j=n_j\}\Big] = 1 $$
holds for all $l\in\mathbf{N}$.
Proof. By induction, we have
$$ \sum_{H\in\mathcal{H}(l)} \eta^l(1-\eta)^{n_l-l} = 1 $$
for all $l\in\mathbf{N}$. The assertion now follows from Lemma 6.2.2. $\Box$
The basic idea for proving the principal results of this section will be to compute the probabilities of the events of interest from their conditional probabilities with respect to events from the partition $\{\bigcap_{j=1}^{l}\{\nu_j=n_j\}\}_{H\in\mathcal{H}(l)}$ with suitable $l\in\mathbf{N}$.
By Corollary 6.2.3, each $\nu_n$ is finite. For all $n\in\mathbf{N}$, define
$$ X'_n := \sum_{k=1}^{\infty}\chi_{\{\nu_n=k\}}\,X_k . $$
Then we have $\sigma(\{X'_n\}_{n\in\mathbf{N}})\subseteq\sigma(\{X_n\}_{n\in\mathbf{N}})$.
The following lemma provides the technical tool for the proofs of all further results
of this section:
6.2.4 Lemma. The identity
$$ P\Big[\bigcap_{i=1}^{k}\{X'_i\in B_i\}\cap\bigcap_{j=1}^{l}\{\nu_j=n_j\}\Big]
= \prod_{i=1}^{k} P[X\in B_i\mid X\in C]\;\eta^l(1-\eta)^{n_l-l} $$
holds for all $k,l\in\mathbf{N}$ such that $k\leq l$, for all $B_1,\dots,B_k\in\mathcal{B}(\mathbf{R})$, and for every sequence $\{n_j\}_{j\in\{1,\dots,l\}}\in\mathcal{H}(l)$.
Proof. For every sequence $H=\{n_j\}_{j\in\{1,\dots,l\}}\in\mathcal{H}(l)$, we have
$$ P\Big[\bigcap_{i=1}^{k}\{X'_i\in B_i\}\cap\bigcap_{j=1}^{l}\{\nu_j=n_j\}\Big]
= P\Big[\bigcap_{i=1}^{k}\{X_{n_i}\in B_i\}\cap\bigcap_{j=1}^{l}\{\nu_j=n_j\}\Big]
= P\Big[\bigcap_{i=1}^{k}\{X_{n_i}\in B_i\}\cap\bigcap_{j=1}^{l}\{X_{n_j}\in C\}\cap\bigcap_{n\in J(H)}\{X_n\notin C\}\Big] $$
$$ = P\Big[\bigcap_{i=1}^{k}\{X_{n_i}\in B_i\cap C\}\cap\bigcap_{j=k+1}^{l}\{X_{n_j}\in C\}\cap\bigcap_{n\in J(H)}\{X_n\notin C\}\Big]
= \prod_{i=1}^{k} P[X\in B_i\cap C]\;\prod_{j=k+1}^{l} P[X\in C]\;\prod_{n\in J(H)} P[X\notin C] $$
$$ = \prod_{i=1}^{k} P[X\in B_i\mid X\in C]\;\prod_{j=1}^{l} P[X\in C]\;\prod_{n\in J(H)} P[X\notin C]
= \prod_{i=1}^{k} P[X\in B_i\mid X\in C]\;\eta^l(1-\eta)^{n_l-l} , $$
as was to be shown. $\Box$
6.2.5 Theorem (Thinned Claim Size Process). The sequence $\{X'_n\}_{n\in\mathbf{N}}$ is i. i. d. and satisfies
$$ P[X'\in B] = P[X\in B\mid X\in C] $$
for all $B\in\mathcal{B}(\mathbf{R})$.
Proof. Consider $k\in\mathbf{N}$ and $B_1,\dots,B_k\in\mathcal{B}(\mathbf{R})$. By Lemmas 6.2.4 and 6.2.2, we have, for every sequence $\{n_j\}_{j\in\{1,\dots,k\}}\in\mathcal{H}(k)$,
$$ P\Big[\bigcap_{i=1}^{k}\{X'_i\in B_i\}\cap\bigcap_{j=1}^{k}\{\nu_j=n_j\}\Big]
= \prod_{i=1}^{k} P[X\in B_i\mid X\in C]\;\eta^k(1-\eta)^{n_k-k}
= \prod_{i=1}^{k} P[X\in B_i\mid X\in C]\;P\Big[\bigcap_{j=1}^{k}\{\nu_j=n_j\}\Big] . $$
By Corollary 6.2.3, summation over $\mathcal{H}(k)$ yields
$$ P\Big[\bigcap_{i=1}^{k}\{X'_i\in B_i\}\Big] = \prod_{i=1}^{k} P[X\in B_i\mid X\in C] . $$
The assertion follows. $\Box$
We can now prove the main result of this section:
6.2.6 Theorem (Thinned Risk Process). The pair $(\{N'_t\}_{t\in\mathbf{R}_+},\{X'_n\}_{n\in\mathbf{N}})$ is a risk process.
Proof. By Theorems 6.2.1 and 6.2.5, we know that $\{N'_t\}_{t\in\mathbf{R}_+}$ is a claim number process and that the sequence $\{X'_n\}_{n\in\mathbf{N}}$ is i. i. d.
To prove that the pair $(\{N'_t\}_{t\in\mathbf{R}_+},\{X'_n\}_{n\in\mathbf{N}})$ is independent, consider $m,n\in\mathbf{N}$, $B_1,\dots,B_n\in\mathcal{B}(\mathbf{R})$, $t_0,t_1,\dots,t_m\in\mathbf{R}_+$ such that $0=t_0<t_1<\dots<t_m$, and $k_0,k_1,\dots,k_m\in\mathbf{N}_0$ such that $0=k_0\leq k_1\leq\dots\leq k_m$.
Consider also $l_0,l_1,\dots,l_m\in\mathbf{N}_0$ satisfying $0=l_0\leq l_1\leq\dots\leq l_m$ as well as $k_j-k_{j-1}\leq l_j-l_{j-1}$ for all $j\in\{1,\dots,m\}$. Define $n_0:=0$ and $l:=\max\{n,k_m+1\}$, and let $\mathcal{H}$ denote the collection of all sequences $\{n_j\}_{j\in\{1,\dots,l\}}\in\mathcal{H}(l)$ satisfying $n_{k_j}\leq l_j<n_{k_j+1}$ for all $j\in\{1,\dots,m\}$. By Lemma 6.2.4 and Theorem 6.2.5, we have
$$ P\bigg[\bigcap_{i=1}^{n}\{X'_i\in B_i\}\cap\bigcap_{j=1}^{m}\Big\{\sum_{h=1}^{l_j}\chi_{\{X_h\in C\}}=k_j\Big\}\bigg]
= \sum_{H\in\mathcal{H}} P\bigg[\bigcap_{i=1}^{n}\{X'_i\in B_i\}\cap\bigcap_{j=1}^{l}\{\nu_j=n_j\}\bigg]
= \sum_{H\in\mathcal{H}}\bigg(\prod_{i=1}^{n} P[X\in B_i\mid X\in C]\bigg)\eta^l(1-\eta)^{n_l-l}
= P\bigg[\bigcap_{i=1}^{n}\{X'_i\in B_i\}\bigg]\sum_{H\in\mathcal{H}}\eta^l(1-\eta)^{n_l-l} , $$
hence, using the independence of the claim number process and the claim size process,
$$ P\bigg[\bigcap_{i=1}^{n}\{X'_i\in B_i\}\cap\bigcap_{j=1}^{m}\{N'_{t_j}=k_j\}\cap\bigcap_{j=1}^{m}\{N_{t_j}=l_j\}\bigg]
= P\bigg[\bigcap_{i=1}^{n}\{X'_i\in B_i\}\cap\bigcap_{j=1}^{m}\Big\{\sum_{h=1}^{l_j}\chi_{\{X_h\in C\}}=k_j\Big\}\bigg]\,P\bigg[\bigcap_{j=1}^{m}\{N_{t_j}=l_j\}\bigg]
= P\bigg[\bigcap_{i=1}^{n}\{X'_i\in B_i\}\bigg]\,P\bigg[\bigcap_{j=1}^{m}\{N'_{t_j}=k_j\}\cap\bigcap_{j=1}^{m}\{N_{t_j}=l_j\}\bigg] . $$
Summation over the admissible values of $l_1,\dots,l_m$ yields
$$ P\bigg[\bigcap_{i=1}^{n}\{X'_i\in B_i\}\cap\bigcap_{j=1}^{m}\{N'_{t_j}=k_j\}\bigg]
= P\bigg[\bigcap_{i=1}^{n}\{X'_i\in B_i\}\bigg]\,P\bigg[\bigcap_{j=1}^{m}\{N'_{t_j}=k_j\}\bigg] . $$
Therefore, the pair $(\{N'_t\}_{t\in\mathbf{R}_+},\{X'_n\}_{n\in\mathbf{N}})$ is independent. $\Box$
6.2.7 Corollary (Thinned Risk Process). Let $h:\mathbf{R}\to\mathbf{R}$ be a measurable function. Then the pair $(\{N'_t\}_{t\in\mathbf{R}_+},\{h(X'_n)\}_{n\in\mathbf{N}})$ is a risk process.
This is immediate from Theorem 6.2.6.
As an application of Corollary 6.2.7, consider $c\in(0,\infty)$, let $C:=(c,\infty)$, and define $h:\mathbf{R}\to\mathbf{R}$ by letting
$$ h(x) := (x-c)^+ . $$
In excess of loss reinsurance, $c$ is the priority of the direct insurer, and the reinsurer is concerned with the risk process $(\{N'_t\}_{t\in\mathbf{R}_+},\{(X'_n-c)^+\}_{n\in\mathbf{N}})$.
More generally, consider $c,d\in(0,\infty)$, let $C:=(c,\infty)$, and define $h:\mathbf{R}\to\mathbf{R}$ by letting
$$ h(x) := (x-c)^+\wedge d . $$
In this case, the reinsurer is not willing to pay more than $d$ for a claim exceeding $c$ and covers only the layer $(c,c+d\,]$; he is thus concerned with the risk process $(\{N'_t\}_{t\in\mathbf{R}_+},\{(X'_n-c)^+\wedge d\}_{n\in\mathbf{N}})$.
Problems
6.2.A The sequence $\{\nu_l\}_{l\in\mathbf{N}_0}$ is a claim arrival process (in discrete time) satisfying
$$ P_{\nu_l} = \mathrm{Geo}(l,\eta) $$
for all $l\in\mathbf{N}$. Moreover, the claim interarrival times are i. i. d. Study also the claim number process induced by the claim arrival process $\{\nu_l\}_{l\in\mathbf{N}_0}$.
6.2.B For $c\in(0,\infty)$ and $C:=(c,\infty)$, compute the distribution of $X'$ for some specific choices of the distribution of $X$.
6.2.C Discrete Time Model: The risk process $(\{N_l\}_{l\in\mathbf{N}_0},\{X_n\}_{n\in\mathbf{N}})$ is a binomial risk process if the claim number process $\{N_l\}_{l\in\mathbf{N}_0}$ is a binomial process.
Study the problem of thinning for a binomial risk process.
6.3 Decomposition of a Poisson Risk Process
Throughout this section, let $(\{N_t\}_{t\in\mathbf{R}_+},\{X_n\}_{n\in\mathbf{N}})$ be a risk process and consider $C\in\mathcal{B}(\mathbf{R})$. Define
$$ \eta := P[X\in C] . $$
We assume that the probability of explosion is equal to zero and that $\eta\in(0,1)$.
Let us first consider the thinned claim number processes generated by the claims with claim size in $C$ or in $\overline{C}$, respectively.
For all $t\in\mathbf{R}_+$, define
$$ N'_t := \sum_{n=1}^{N_t}\chi_{\{X_n\in C\}} \qquad\text{and}\qquad N''_t := \sum_{n=1}^{N_t}\chi_{\{X_n\in\overline{C}\}} . $$
By Theorem 6.2.1, $\{N'_t\}_{t\in\mathbf{R}_+}$ and $\{N''_t\}_{t\in\mathbf{R}_+}$ are claim number processes.
The following result improves Corollary 5.1.5:
6.3.1 Theorem (Decomposition of a Poisson Process). Assume that $\{N_t\}_{t\in\mathbf{R}_+}$ is a Poisson process with parameter $\alpha$. Then the claim number processes $\{N'_t\}_{t\in\mathbf{R}_+}$ and $\{N''_t\}_{t\in\mathbf{R}_+}$ are independent Poisson processes with parameters $\alpha\eta$ and $\alpha(1-\eta)$, respectively.
Proof. Consider $m\in\mathbf{N}$, $t_0,t_1,\dots,t_m\in\mathbf{R}_+$ such that $0=t_0<t_1<\dots<t_m$, and $k'_1,\dots,k'_m\in\mathbf{N}_0$ and $k''_1,\dots,k''_m\in\mathbf{N}_0$.
For all $j\in\{1,\dots,m\}$, define $k_j:=k'_j+k''_j$ and $n_j:=\sum_{i=1}^{j}k_i$. Then we have
$$ P\bigg[\bigcap_{j=1}^{m}\big(\{N'_{t_j}-N'_{t_{j-1}}=k'_j\}\cap\{N''_{t_j}-N''_{t_{j-1}}=k''_j\}\big)\,\bigg|\,\bigcap_{j=1}^{m}\{N_{t_j}-N_{t_{j-1}}=k_j\}\bigg]
= P\bigg[\bigcap_{j=1}^{m}\bigg(\Big\{\sum_{h=n_{j-1}+1}^{n_j}\chi_{\{X_h\in C\}}=k'_j\Big\}\cap\Big\{\sum_{h=n_{j-1}+1}^{n_j}\chi_{\{X_h\in\overline{C}\}}=k''_j\Big\}\bigg)\bigg]
= \prod_{j=1}^{m}\binom{k_j}{k'_j}\eta^{k'_j}(1-\eta)^{k''_j} $$
as well as
$$ P\bigg[\bigcap_{j=1}^{m}\{N_{t_j}-N_{t_{j-1}}=k_j\}\bigg]
= \prod_{j=1}^{m} P[N_{t_j}-N_{t_{j-1}}=k_j]
= \prod_{j=1}^{m} e^{-\alpha(t_j-t_{j-1})}\frac{(\alpha(t_j-t_{j-1}))^{k_j}}{k_j!} , $$
and hence
$$ P\bigg[\bigcap_{j=1}^{m}\big(\{N'_{t_j}-N'_{t_{j-1}}=k'_j\}\cap\{N''_{t_j}-N''_{t_{j-1}}=k''_j\}\big)\bigg]
= \prod_{j=1}^{m}\binom{k_j}{k'_j}\eta^{k'_j}(1-\eta)^{k''_j}\;\prod_{j=1}^{m} e^{-\alpha(t_j-t_{j-1})}\frac{(\alpha(t_j-t_{j-1}))^{k_j}}{k_j!}
= \prod_{j=1}^{m} e^{-\alpha\eta(t_j-t_{j-1})}\frac{(\alpha\eta(t_j-t_{j-1}))^{k'_j}}{k'_j!}\;\cdot\;\prod_{j=1}^{m} e^{-\alpha(1-\eta)(t_j-t_{j-1})}\frac{(\alpha(1-\eta)(t_j-t_{j-1}))^{k''_j}}{k''_j!} . $$
This implies that $\{N'_t\}_{t\in\mathbf{R}_+}$ and $\{N''_t\}_{t\in\mathbf{R}_+}$ are Poisson processes with parameters $\alpha\eta$ and $\alpha(1-\eta)$, respectively, and that $\{N'_t\}_{t\in\mathbf{R}_+}$ and $\{N''_t\}_{t\in\mathbf{R}_+}$ are independent. $\Box$
Let us now consider the thinned claim size processes.
Let $\nu'_0 := 0$ and $\nu''_0 := 0$. For all $l\in\mathbf{N}$, define
$$ \nu'_l := \inf\{n\in\mathbf{N}\mid \nu'_{l-1}<n,\;X_n\in C\} \qquad\text{and}\qquad \nu''_l := \inf\{n\in\mathbf{N}\mid \nu''_{l-1}<n,\;X_n\in\overline{C}\} . $$
For $m,n\in\mathbf{N}_0$ such that $m+n\in\mathbf{N}$, let $\mathcal{T}'(m,n)$ denote the collection of all pairs of strictly increasing sequences $\{m_i\}_{i\in\{1,\dots,k\}}\subseteq\mathbf{N}$ and $\{n_j\}_{j\in\{1,\dots,l\}}\subseteq\mathbf{N}$ satisfying $k=m$ and $l\leq n$ as well as $n_l<m_k=k+l$ (such that one of these sequences may be empty and the union of these sequences is $\{1,\dots,k+l\}$); similarly, let $\mathcal{T}''(m,n)$ denote the collection of all pairs of strictly increasing sequences $\{m_i\}_{i\in\{1,\dots,k\}}\subseteq\mathbf{N}$ and $\{n_j\}_{j\in\{1,\dots,l\}}\subseteq\mathbf{N}$ satisfying $l=n$ and $k\leq m$ as well as $m_k<n_l=k+l$, and define
$$ \mathcal{T}(m,n) := \mathcal{T}'(m,n)+\mathcal{T}''(m,n) . $$
The collections $\mathcal{T}(m,n)$ correspond to the collections $\mathcal{H}(l)$ considered in the previous section on thinning.
6.3.2 Lemma. The identities
$$ \bigcap_{i=1}^{k}\{\nu'_i=m_i\}\cap\bigcap_{j=1}^{l}\{\nu''_j=n_j\} = \bigcap_{i=1}^{k}\{X_{m_i}\in C\}\cap\bigcap_{j=1}^{l}\{X_{n_j}\in\overline{C}\} $$
and
$$ P\bigg[\bigcap_{i=1}^{k}\{\nu'_i=m_i\}\cap\bigcap_{j=1}^{l}\{\nu''_j=n_j\}\bigg] = \eta^k(1-\eta)^l $$
hold for all $m,n\in\mathbf{N}_0$ such that $m+n\in\mathbf{N}$ and for all $(\{m_i\}_{i\in\{1,\dots,k\}},\{n_j\}_{j\in\{1,\dots,l\}})\in\mathcal{T}(m,n)$.
It is clear that, for every choice of $m,n\in\mathbf{N}_0$ such that $m+n\in\mathbf{N}$, the family $\{\bigcap_{i=1}^{k}\{\nu'_i=m_i\}\cap\bigcap_{j=1}^{l}\{\nu''_j=n_j\}\}_{D\in\mathcal{T}(m,n)}$ is disjoint; the following lemma shows that it is, up to a null set, even a partition of $\Omega$:
6.3.3 Corollary. The identity
$$ \sum_{D\in\mathcal{T}(m,n)} P\bigg[\bigcap_{i=1}^{k}\{\nu'_i=m_i\}\cap\bigcap_{j=1}^{l}\{\nu''_j=n_j\}\bigg] = 1 $$
holds for all $m,n\in\mathbf{N}_0$ such that $m+n\in\mathbf{N}$.
Proof. For $m,n\in\mathbf{N}_0$ such that $m+n=1$, the identity follows from Corollary 6.2.3.
For $m,n\in\mathbf{N}$, we split $\mathcal{T}(m,n)$ into two parts: Let $\mathcal{T}_1(m,n)$ denote the collection of all pairs $(\{m_i\}_{i\in\{1,\dots,k\}},\{n_j\}_{j\in\{1,\dots,l\}})\in\mathcal{T}(m,n)$ satisfying $m_1=1$, and let $\mathcal{T}_2(m,n)$ denote the collection of all pairs $(\{m_i\}_{i\in\{1,\dots,k\}},\{n_j\}_{j\in\{1,\dots,l\}})\in\mathcal{T}(m,n)$ satisfying $n_1=1$. Then we have
$$ \mathcal{T}(m,n) = \mathcal{T}_1(m,n)+\mathcal{T}_2(m,n) . $$
Furthermore, there are obvious bijections between $\mathcal{T}_1(m,n)$ and $\mathcal{T}(m-1,n)$ and between $\mathcal{T}_2(m,n)$ and $\mathcal{T}(m,n-1)$.
Using Lemma 6.3.2, the assertion now follows by induction over $m+n\in\mathbf{N}$. $\Box$
By Corollary 6.2.3, each $\nu'_n$ and each $\nu''_n$ is finite. For all $n\in\mathbf{N}$, define
$$ X'_n := \sum_{k=1}^{\infty}\chi_{\{\nu'_n=k\}}\,X_k \qquad\text{and}\qquad X''_n := \sum_{k=1}^{\infty}\chi_{\{\nu''_n=k\}}\,X_k . $$
Then we have $\sigma(\{X'_n\}_{n\in\mathbf{N}})\vee\sigma(\{X''_n\}_{n\in\mathbf{N}})\subseteq\sigma(\{X_n\}_{n\in\mathbf{N}})$.
6.3.4 Lemma. The identity
$$ P\bigg[\bigcap_{h=1}^{n}\{X'_h\in B'_h\}\cap\bigcap_{h=1}^{n}\{X''_h\in B''_h\}\cap\bigcap_{i=1}^{k}\{\nu'_i=m_i\}\cap\bigcap_{j=1}^{l}\{\nu''_j=n_j\}\bigg]
= \prod_{h=1}^{n} P[X'_h\in B'_h]\;\prod_{h=1}^{n} P[X''_h\in B''_h]\;P\bigg[\bigcap_{i=1}^{k}\{\nu'_i=m_i\}\cap\bigcap_{j=1}^{l}\{\nu''_j=n_j\}\bigg] $$
holds for all $n\in\mathbf{N}$, for all $B'_1,\dots,B'_n\in\mathcal{B}(\mathbf{R})$ and $B''_1,\dots,B''_n\in\mathcal{B}(\mathbf{R})$, and for every pair $(\{m_i\}_{i\in\{1,\dots,k\}},\{n_j\}_{j\in\{1,\dots,l\}})\in\mathcal{T}(n,n)$.
Proof. By Lemma 6.3.2 and Theorem 6.2.5, we have
$$ P\bigg[\bigcap_{h=1}^{n}\{X'_h\in B'_h\}\cap\bigcap_{h=1}^{n}\{X''_h\in B''_h\}\cap\bigcap_{i=1}^{k}\{\nu'_i=m_i\}\cap\bigcap_{j=1}^{l}\{\nu''_j=n_j\}\bigg]
= P\bigg[\bigcap_{h=1}^{n}\{X_{m_h}\in B'_h\}\cap\bigcap_{h=1}^{n}\{X_{n_h}\in B''_h\}\cap\bigcap_{i=1}^{k}\{X_{m_i}\in C\}\cap\bigcap_{j=1}^{l}\{X_{n_j}\in\overline{C}\}\bigg] $$
$$ = P\bigg[\bigcap_{h=1}^{n}\{X_{m_h}\in B'_h\cap C\}\cap\bigcap_{h=1}^{n}\{X_{n_h}\in B''_h\cap\overline{C}\}\cap\bigcap_{i=n+1}^{k}\{X_{m_i}\in C\}\cap\bigcap_{j=n+1}^{l}\{X_{n_j}\in\overline{C}\}\bigg]
= \prod_{h=1}^{n} P[X\in B'_h\cap C]\;\prod_{h=1}^{n} P[X\in B''_h\cap\overline{C}]\;\prod_{i=n+1}^{k} P[X\in C]\;\prod_{j=n+1}^{l} P[X\in\overline{C}] $$
$$ = \prod_{h=1}^{n} P[X\in B'_h\mid X\in C]\;\prod_{h=1}^{n} P[X\in B''_h\mid X\in\overline{C}]\;\prod_{i=1}^{k} P[X\in C]\;\prod_{j=1}^{l} P[X\in\overline{C}]
= \prod_{h=1}^{n} P[X'_h\in B'_h]\;\prod_{h=1}^{n} P[X''_h\in B''_h]\;\eta^k(1-\eta)^l
= \prod_{h=1}^{n} P[X'_h\in B'_h]\;\prod_{h=1}^{n} P[X''_h\in B''_h]\;P\bigg[\bigcap_{i=1}^{k}\{\nu'_i=m_i\}\cap\bigcap_{j=1}^{l}\{\nu''_j=n_j\}\bigg] , $$
as was to be shown. $\Box$
6.3.5 Theorem (Decomposition of a Claim Size Process). The claim size processes $\{X'_n\}_{n\in\mathbf{N}}$ and $\{X''_n\}_{n\in\mathbf{N}}$ are independent.
Proof. Consider $n\in\mathbf{N}$, $B'_1,\dots,B'_n\in\mathcal{B}(\mathbf{R})$, and $B''_1,\dots,B''_n\in\mathcal{B}(\mathbf{R})$. By Lemma 6.3.4, we have, for every pair $(\{m_i\}_{i\in\{1,\dots,k\}},\{n_j\}_{j\in\{1,\dots,l\}})\in\mathcal{T}(n,n)$,
$$ P\bigg[\bigcap_{h=1}^{n}\{X'_h\in B'_h\}\cap\bigcap_{h=1}^{n}\{X''_h\in B''_h\}\cap\bigcap_{i=1}^{k}\{\nu'_i=m_i\}\cap\bigcap_{j=1}^{l}\{\nu''_j=n_j\}\bigg]
= \prod_{h=1}^{n} P[X'_h\in B'_h]\;\prod_{h=1}^{n} P[X''_h\in B''_h]\;P\bigg[\bigcap_{i=1}^{k}\{\nu'_i=m_i\}\cap\bigcap_{j=1}^{l}\{\nu''_j=n_j\}\bigg] . $$
By Corollary 6.3.3, summation over $\mathcal{T}(n,n)$ yields
$$ P\bigg[\bigcap_{h=1}^{n}\{X'_h\in B'_h\}\cap\bigcap_{h=1}^{n}\{X''_h\in B''_h\}\bigg]
= \prod_{h=1}^{n} P[X'_h\in B'_h]\;\prod_{h=1}^{n} P[X''_h\in B''_h] . $$
The assertion follows. $\Box$
The risk process $(\{N_t\}_{t\in\mathbf{R}_+},\{X_n\}_{n\in\mathbf{N}})$ is a Poisson risk process if the claim number process $\{N_t\}_{t\in\mathbf{R}_+}$ is a Poisson process.
We can now prove the main result of this section:
6.3.6 Theorem (Decomposition of a Poisson Risk Process). Assume that $(\{N_t\}_{t\in\mathbf{R}_+},\{X_n\}_{n\in\mathbf{N}})$ is a Poisson risk process. Then $(\{N'_t\}_{t\in\mathbf{R}_+},\{X'_n\}_{n\in\mathbf{N}})$ and $(\{N''_t\}_{t\in\mathbf{R}_+},\{X''_n\}_{n\in\mathbf{N}})$ are independent Poisson risk processes.
Proof. By Theorems 6.3.1 and 6.3.5, we know that $\{N'_t\}_{t\in\mathbf{R}_+}$ and $\{N''_t\}_{t\in\mathbf{R}_+}$ are independent Poisson processes and that $\{X'_n\}_{n\in\mathbf{N}}$ and $\{X''_n\}_{n\in\mathbf{N}}$ are independent claim size processes.
To prove that the $\sigma$-algebras $\sigma(\{N'_t\}_{t\in\mathbf{R}_+}\cup\{N''_t\}_{t\in\mathbf{R}_+})$ and $\sigma(\{X'_n\}_{n\in\mathbf{N}}\cup\{X''_n\}_{n\in\mathbf{N}})$ are independent, consider $m,n\in\mathbf{N}$, $B'_1,\dots,B'_n\in\mathcal{B}(\mathbf{R})$ and $B''_1,\dots,B''_n\in\mathcal{B}(\mathbf{R})$, as well as $t_0,t_1,\dots,t_m\in\mathbf{R}_+$ such that $0=t_0<t_1<\dots<t_m$ and $k'_1,\dots,k'_m\in\mathbf{N}_0$ and $k''_1,\dots,k''_m\in\mathbf{N}_0$ such that $k'_1\leq\dots\leq k'_m$ and $k''_1\leq\dots\leq k''_m$. For all $j\in\{1,\dots,m\}$, define $k_j:=k'_j+k''_j$.
Furthermore, let $p:=\max\{n,k'_m,k''_m\}$, and let $\mathcal{T}$ denote the collection of all pairs $(\{m_i\}_{i\in\{1,\dots,k\}},\{n_j\}_{j\in\{1,\dots,l\}})\in\mathcal{T}(p,p)$ satisfying $\max\{m_{k'_j},n_{k''_j}\}=k_j$ for all $j\in\{1,\dots,m\}$. By Lemma 6.3.4 and Theorem 6.3.5, we have
$$ P\bigg[\bigcap_{h=1}^{n}\big(\{X'_h\in B'_h\}\cap\{X''_h\in B''_h\}\big)\cap\bigcap_{j=1}^{m}\Big\{\sum_{r=1}^{k_j}\chi_{\{X_r\in C\}}=k'_j\Big\}\bigg]
= \sum_{D\in\mathcal{T}} P\bigg[\bigcap_{h=1}^{n}\big(\{X'_h\in B'_h\}\cap\{X''_h\in B''_h\}\big)\cap\bigcap_{i=1}^{k}\{\nu'_i=m_i\}\cap\bigcap_{j=1}^{l}\{\nu''_j=n_j\}\bigg]
= P\bigg[\bigcap_{h=1}^{n}\big(\{X'_h\in B'_h\}\cap\{X''_h\in B''_h\}\big)\bigg]\sum_{D\in\mathcal{T}}\eta^k(1-\eta)^l , $$
hence, using the independence of the claim number process and the claim size process,
$$ P\bigg[\bigcap_{h=1}^{n}\big(\{X'_h\in B'_h\}\cap\{X''_h\in B''_h\}\big)\cap\bigcap_{j=1}^{m}\big(\{N'_{t_j}=k'_j\}\cap\{N''_{t_j}=k''_j\}\big)\bigg]
= P\bigg[\bigcap_{h=1}^{n}\big(\{X'_h\in B'_h\}\cap\{X''_h\in B''_h\}\big)\cap\bigcap_{j=1}^{m}\Big\{\sum_{r=1}^{k_j}\chi_{\{X_r\in C\}}=k'_j\Big\}\bigg]\,P\bigg[\bigcap_{j=1}^{m}\{N_{t_j}=k_j\}\bigg]
= P\bigg[\bigcap_{h=1}^{n}\big(\{X'_h\in B'_h\}\cap\{X''_h\in B''_h\}\big)\bigg]\,P\bigg[\bigcap_{j=1}^{m}\big(\{N'_{t_j}=k'_j\}\cap\{N''_{t_j}=k''_j\}\big)\bigg] . $$
Therefore, the $\sigma$-algebras $\sigma(\{N'_t\}_{t\in\mathbf{R}_+}\cup\{N''_t\}_{t\in\mathbf{R}_+})$ and $\sigma(\{X'_n\}_{n\in\mathbf{N}}\cup\{X''_n\}_{n\in\mathbf{N}})$ are independent.
The assertion follows. $\Box$
Let $\{S'_t\}_{t\in\mathbf{R}_+}$ and $\{S''_t\}_{t\in\mathbf{R}_+}$ denote the aggregate claims processes induced by the risk processes $(\{N'_t\}_{t\in\mathbf{R}_+},\{X'_n\}_{n\in\mathbf{N}})$ and $(\{N''_t\}_{t\in\mathbf{R}_+},\{X''_n\}_{n\in\mathbf{N}})$, respectively. In the case where $(\{N_t\}_{t\in\mathbf{R}_+},\{X_n\}_{n\in\mathbf{N}})$ is a Poisson risk process, one should expect that the sum of the aggregate claims processes $\{S'_t\}_{t\in\mathbf{R}_+}$ and $\{S''_t\}_{t\in\mathbf{R}_+}$ agrees with the aggregate claims process $\{S_t\}_{t\in\mathbf{R}_+}$. The following result asserts that this is indeed true:
6.3.7 Theorem. Assume that $(\{N_t\}_{t\in\mathbf{R}_+},\{X_n\}_{n\in\mathbf{N}})$ is a Poisson risk process. Then the aggregate claims processes $\{S'_t\}_{t\in\mathbf{R}_+}$ and $\{S''_t\}_{t\in\mathbf{R}_+}$ are independent, and the identity
$$ S_t = S'_t + S''_t $$
holds for all $t\in\mathbf{R}_+$.
Proof. By Theorem 6.3.6, the aggregate claims processes $\{S'_t\}_{t\in\mathbf{R}_+}$ and $\{S''_t\}_{t\in\mathbf{R}_+}$ are independent.
For all $\omega\in\{N'_t=0\}\cap\{N''_t=0\}$, we clearly have
$$ S'_t(\omega)+S''_t(\omega) = S_t(\omega) . $$
Consider now $k',k''\in\mathbf{N}_0$ such that $k'+k''\in\mathbf{N}$. Note that $\max\{\nu'_{k'},\nu''_{k''}\}\geq k'+k''$, and that
$$ \{N'_t=k'\}\cap\{N''_t=k''\} = \{\max\{\nu'_{k'},\nu''_{k''}\}=k'+k''\}\cap\{N_t=k'+k''\} . $$
Thus, for $(\{m_i\}_{i\in\{1,\dots,k\}},\{n_j\}_{j\in\{1,\dots,l\}})\in\mathcal{T}(k',k'')$ satisfying $\max\{m_k,n_l\}>k'+k''$ the set $\{N'_t=k'\}\cap\{N''_t=k''\}\cap\bigcap_{i=1}^{k}\{\nu'_i=m_i\}\cap\bigcap_{j=1}^{l}\{\nu''_j=n_j\}$ is empty, and for $(\{m_i\}_{i\in\{1,\dots,k\}},\{n_j\}_{j\in\{1,\dots,l\}})\in\mathcal{T}(k',k'')$ satisfying $\max\{m_k,n_l\}=k'+k''$ we have, for all $\omega\in\{N'_t=k'\}\cap\{N''_t=k''\}\cap\bigcap_{i=1}^{k}\{\nu'_i=m_i\}\cap\bigcap_{j=1}^{l}\{\nu''_j=n_j\}$,
$$ S'_t(\omega)+S''_t(\omega)
= \sum_{i=1}^{N'_t(\omega)}X'_i(\omega)+\sum_{j=1}^{N''_t(\omega)}X''_j(\omega)
= \sum_{i=1}^{k'}X'_i(\omega)+\sum_{j=1}^{k''}X''_j(\omega)
= \sum_{i=1}^{k'}X_{m_i}(\omega)+\sum_{j=1}^{k''}X_{n_j}(\omega)
= \sum_{h=1}^{k'+k''}X_h(\omega)
= \sum_{h=1}^{N_t(\omega)}X_h(\omega)
= S_t(\omega) . $$
This yields
$$ S'_t(\omega)+S''_t(\omega) = S_t(\omega) $$
for all $\omega\in\{N'_t=k'\}\cap\{N''_t=k''\}$.
We conclude that
$$ S'_t+S''_t = S_t , $$
as was to be shown. $\Box$
Problems
6.3.A Let $(\{N_t\}_{t\in\mathbf{R}_+},\{X_n\}_{n\in\mathbf{N}})$ be a Poisson risk process satisfying $P_X=\mathrm{Exp}(\beta)$, and let $C:=(c,\infty)$ for some $c\in(0,\infty)$.
Compute the distributions of $X'$ and $X''$, compare the expectation, variance and coefficient of variation of $X'$ and of $X''$ with the corresponding quantities of $X$, and compute these quantities for $S'_t$, $S''_t$, and $S_t$ (see Problem 5.2.G).
6.3.B Extend the results of this section to the decomposition of a Poisson risk process into more than two Poisson risk processes.
6.3.C Let $(\{N_t\}_{t\in\mathbf{R}_+},\{X_n\}_{n\in\mathbf{N}})$ be a Poisson risk process satisfying $P_X[\{1,\dots,m\}]=1$ for some $m\in\mathbf{N}$. For all $j\in\{1,\dots,m\}$ and $t\in\mathbf{R}_+$, define
$$ N^{(j)}_t := \sum_{n=1}^{N_t}\chi_{\{X_n=j\}} . $$
Then the claim number processes $\{N^{(1)}_t\}_{t\in\mathbf{R}_+},\dots,\{N^{(m)}_t\}_{t\in\mathbf{R}_+}$ are independent, and the identity
$$ S_t = \sum_{j=1}^{m} j\,N^{(j)}_t $$
holds for all $t\in\mathbf{R}_+$.
6.3.D Extend the results of this section to the case where the claim number process is an inhomogeneous Poisson process.
6.3.E Discrete Time Model: Study the decomposition of a binomial risk process.
6.3.F Discrete Time Model: Let $(\{N_l\}_{l\in\mathbf{N}_0},\{X_n\}_{n\in\mathbf{N}})$ be a binomial risk process satisfying $P_X=\mathrm{Geo}(\eta)$, and let $C:=(m,\infty)$ for some $m\in\mathbf{N}$.
Compute the distributions of $X'$ and $X''$, compare the expectation, variance and coefficient of variation of $X'$ and of $X''$ with the corresponding quantities of $X$, and compute these quantities for $S'_l$, $S''_l$, and $S_l$.
6.4 Superposition of Poisson Risk Processes
Throughout this section, let $(\{N'_t\}_{t\in\mathbf{R}_+},\{X'_n\}_{n\in\mathbf{N}})$ and $(\{N''_t\}_{t\in\mathbf{R}_+},\{X''_n\}_{n\in\mathbf{N}})$ be risk processes, let $\{T'_n\}_{n\in\mathbf{N}_0}$ and $\{T''_n\}_{n\in\mathbf{N}_0}$ denote the claim arrival processes induced by the claim number processes $\{N'_t\}_{t\in\mathbf{R}_+}$ and $\{N''_t\}_{t\in\mathbf{R}_+}$, and let $\{W'_n\}_{n\in\mathbf{N}}$ and $\{W''_n\}_{n\in\mathbf{N}}$ denote the claim interarrival processes induced by the claim arrival processes $\{T'_n\}_{n\in\mathbf{N}_0}$ and $\{T''_n\}_{n\in\mathbf{N}_0}$, respectively.
We assume that the risk processes $(\{N'_t\}_{t\in\mathbf{R}_+},\{X'_n\}_{n\in\mathbf{N}})$ and $(\{N''_t\}_{t\in\mathbf{R}_+},\{X''_n\}_{n\in\mathbf{N}})$ are independent, that their exceptional null sets are empty, and that the claim number processes $\{N'_t\}_{t\in\mathbf{R}_+}$ and $\{N''_t\}_{t\in\mathbf{R}_+}$ are Poisson processes with parameters $\alpha'$ and $\alpha''$, respectively.
For all $t\in\mathbf{R}_+$, define
$$ N_t := N'_t + N''_t . $$
The process $\{N_t\}_{t\in\mathbf{R}_+}$ is said to be the superposition of the Poisson processes $\{N'_t\}_{t\in\mathbf{R}_+}$ and $\{N''_t\}_{t\in\mathbf{R}_+}$.
The following result shows that the class of all Poisson processes is stable under
superposition:
6.4.1 Theorem (Superposition of Poisson Processes). The process $\{N_t\}_{t\in\mathbf{R}_+}$ is a Poisson process with parameter $\alpha'+\alpha''$.
Proof. We first show that $\{N_t\}_{t\in\mathbf{R}_+}$ is a claim number process, and we then prove that it is indeed a Poisson process. To simplify the notation in this proof, let $\alpha := \alpha'+\alpha''$.
(1) Define
$$ \Omega_N := \bigcup_{n',n''\in\mathbf{N}_0}\{T'_{n'}=T''_{n''}\} . $$
Since the claim number processes $\{N'_t\}_{t\in\mathbf{R}_+}$ and $\{N''_t\}_{t\in\mathbf{R}_+}$ are independent and since the distributions of their claim arrival times are absolutely continuous with respect to Lebesgue measure, we have
$$ P[\Omega_N] = 0 . $$
It is now easy to see that $\{N_t\}_{t\in\mathbf{R}_+}$ is a claim number process with exceptional null set $\Omega_N$.
(2) For all $t\in\mathbf{R}_+$, we have
$$ E[N_t] = E[N'_t]+E[N''_t] = \alpha' t+\alpha'' t = \alpha t . $$
(3) Let $\{\mathcal{F}'_t\}_{t\in\mathbf{R}_+}$ and $\{\mathcal{F}''_t\}_{t\in\mathbf{R}_+}$ denote the canonical filtrations of $\{N'_t\}_{t\in\mathbf{R}_+}$ and $\{N''_t\}_{t\in\mathbf{R}_+}$, respectively, and let $\{\mathcal{F}_t\}_{t\in\mathbf{R}_+}$ denote the canonical filtration of $\{N_t\}_{t\in\mathbf{R}_+}$. Using independence of the pair $(\{N'_t\}_{t\in\mathbf{R}_+},\{N''_t\}_{t\in\mathbf{R}_+})$ and the martingale property of the centered claim number processes $\{N'_t-\alpha' t\}_{t\in\mathbf{R}_+}$ and $\{N''_t-\alpha'' t\}_{t\in\mathbf{R}_+}$, which is given by Theorem 2.3.4, we see that the identity
$$ \int_{A'\cap A''}\big(N_{t+h}-N_t-\alpha h\big)\,dP
= \int_{A'\cap A''}\big((N'_{t+h}+N''_{t+h})-(N'_t+N''_t)-(\alpha'+\alpha'')h\big)\,dP
= P[A'']\int_{A'}\big(N'_{t+h}-N'_t-\alpha' h\big)\,dP + P[A']\int_{A''}\big(N''_{t+h}-N''_t-\alpha'' h\big)\,dP
= 0 $$
holds for all $t,h\in\mathbf{R}_+$ and for all $A'\in\mathcal{F}'_t$ and $A''\in\mathcal{F}''_t$. Thus, letting
$$ \mathcal{E}_t := \{A'\cap A''\mid A'\in\mathcal{F}'_t,\;A''\in\mathcal{F}''_t\} , $$
the previous identity yields
$$ \int_{A}\big(N_{t+h}-\alpha(t+h)\big)\,dP = \int_{A}\big(N_t-\alpha t\big)\,dP $$
for all $t,h\in\mathbf{R}_+$ and $A\in\mathcal{E}_t$. Since $\mathcal{E}_t$ is stable under intersection and satisfies $\mathcal{F}_t\subseteq\sigma(\mathcal{E}_t)$, we conclude that the identity
$$ \int_{A}\big(N_{t+h}-\alpha(t+h)\big)\,dP = \int_{A}\big(N_t-\alpha t\big)\,dP $$
holds for all $t,h\in\mathbf{R}_+$ and $A\in\mathcal{F}_t$. Therefore, the centered claim number process $\{N_t-\alpha t\}_{t\in\mathbf{R}_+}$ is a martingale, and it now follows from Theorem 2.3.4 that the claim number process $\{N_t\}_{t\in\mathbf{R}_+}$ is a Poisson process with parameter $\alpha$. $\Box$
Let $\{T_n\}_{n\in\mathbf{N}_0}$ denote the claim arrival process induced by the claim number process $\{N_t\}_{t\in\mathbf{R}_+}$ and let $\{W_n\}_{n\in\mathbf{N}}$ denote the claim interarrival process induced by $\{T_n\}_{n\in\mathbf{N}_0}$.
To avoid the annoying discussion of null sets, we assume henceforth that the exceptional null set of the claim number process $\{N_t\}_{t\in\mathbf{R}_+}$ is empty.
The following result shows that each of the distributions $P_{W'}$ and $P_{W''}$ has a density with respect to $P_W$:
6.4.2 Lemma. The distributions $P_{W'}$ and $P_{W''}$ satisfy
$$ P_{W'} = \int \frac{\alpha'}{\alpha'+\alpha''}\,e^{\alpha'' w}\,dP_W(w) $$
and
$$ P_{W''} = \int \frac{\alpha''}{\alpha'+\alpha''}\,e^{\alpha' w}\,dP_W(w) . $$
Proof. Since $P_{W'}=\mathrm{Exp}(\alpha')$ and $P_W=\mathrm{Exp}(\alpha'+\alpha'')$, we have
$$ P_{W'} = \int \alpha' e^{-\alpha' w}\,\chi_{(0,\infty)}(w)\,d\lambda(w)
= \int \frac{\alpha'}{\alpha'+\alpha''}\,e^{\alpha'' w}\,(\alpha'+\alpha'')\,e^{-(\alpha'+\alpha'')w}\,\chi_{(0,\infty)}(w)\,d\lambda(w)
= \int \frac{\alpha'}{\alpha'+\alpha''}\,e^{\alpha'' w}\,dP_W(w) , $$
which is the first identity. The second identity follows by symmetry. $\Box$
For $l\in\mathbf{N}$ and $k\in\{0,1,\dots,l\}$, let $\mathcal{C}(l,k)$ denote the collection of all pairs of strictly increasing sequences $\{m_i\}_{i\in\{1,\dots,k\}}\subseteq\mathbf{N}$ and $\{n_j\}_{j\in\{1,\dots,l-k\}}\subseteq\mathbf{N}$ with union $\{1,\dots,l\}$ (such that one of these sequences may be empty).
For $l\in\mathbf{N}$, define
$$ \mathcal{C}(l) = \bigcup_{k=0}^{l}\mathcal{C}(l,k) . $$
The collections $\mathcal{C}(l)$ correspond to the collections $\mathcal{H}(l)$ and $\mathcal{T}(m,n)$ considered in the preceding sections on thinning and decomposition.
For $l\in\mathbf{N}$, $k\in\{0,1,\dots,l\}$, and $C=(\{m_i\}_{i\in\{1,\dots,k\}},\{n_j\}_{j\in\{1,\dots,l-k\}})\in\mathcal{C}(l,k)$, let
$$ A_C := \bigcap_{i\in\{1,\dots,k\}}\{T_{m_i}=T'_i\}\cap\bigcap_{j\in\{1,\dots,l-k\}}\{T_{n_j}=T''_j\} . $$
We have the following lemma:
6.4.3 Lemma. The identity
$$ P[A_C] = \Big(\frac{\alpha'}{\alpha'+\alpha''}\Big)^{k}\Big(\frac{\alpha''}{\alpha'+\alpha''}\Big)^{l-k} $$
holds for all $l\in\mathbf{N}$ and $k\in\{0,1,\dots,l\}$ and for all $C\in\mathcal{C}(l,k)$.
Proof. Consider $C=(\{m_i\}_{i\in\{1,\dots,k\}},\{n_j\}_{j\in\{1,\dots,l-k\}})\in\mathcal{C}(l,k)$. We prove several auxiliary results from which the assertion will follow by induction over $l\in\mathbf{N}$.
(1) If $l=1$ and $m_k=l$, then
$$ P[T_1=T'_1] = \frac{\alpha'}{\alpha'+\alpha''} . $$
Indeed, one has $\{T_1<T'_1\}=\{T''_1<T'_1\}$ and $\{T_1>T'_1\}=\emptyset$, and thus $\{T_1=T'_1\}=\{T'_1\leq T''_1\}=\{T'_1<T''_1\}=\{W'_1<W''_1\}$. (In the sequel, arguments of this type will be tacitly used at several occasions.) Now Lemma 6.4.2 yields
$$ P[T_1=T'_1] = P[W'_1<W''_1]
= \int_{\mathbf{R}} P[w<W''_1]\,dP_{W'_1}(w)
= \int_{\mathbf{R}} e^{-\alpha'' w}\,\frac{\alpha'}{\alpha'+\alpha''}\,e^{\alpha'' w}\,dP_W(w)
= \frac{\alpha'}{\alpha'+\alpha''} . $$
(2) If $l=1$ and $n_{l-k}=l$, then $P[T_1=T''_1]=\alpha''/(\alpha'+\alpha'')$. This follows from (1) by symmetry.
(3) If $l\geq 2$ and $m_k=l$, then
$$ P\bigg[\bigcap_{i\in\{1,\dots,k\}}\{T_{m_i}=T'_i\}\cap\bigcap_{j\in\{1,\dots,l-k\}}\{T_{n_j}=T''_j\}\bigg]
= \frac{\alpha'}{\alpha'+\alpha''}\,P\bigg[\bigcap_{i\in\{1,\dots,k-1\}}\{T_{m_i}=T'_i\}\cap\bigcap_{j\in\{1,\dots,l-k\}}\{T_{n_j}=T''_j\}\bigg] . $$
This means that elimination of the event $\{T_{m_k}=T'_k\}$ produces the factor $\alpha'/(\alpha'+\alpha'')$. To prove the claim, one writes the event on the left in terms of the claim interarrival times and integrates out the innermost integrals by means of Lemma 6.4.2: in the case $m_{k-1}<l-1$ one uses, for all $s,t\in(0,\infty)$ such that $s\leq t$,
$$ \int_{(t-s,\infty)}\int_{(s+v-t,\infty)} dP_{W''}(w)\,dP_{W'}(v)
= \int_{(t-s,\infty)} e^{-\alpha''(s+v-t)}\,dP_{W'}(v)
= \frac{\alpha'}{\alpha'+\alpha''}\,e^{-\alpha'(t-s)}
= \frac{\alpha'}{\alpha'+\alpha''}\int_{(t-s,\infty)} dP_{W'}(v) , $$
and in the case $m_{k-1}=l-1$ one uses, for all $s,t\in(0,\infty)$ such that $t\leq s$,
$$ \int_{(0,\infty)}\int_{(s+v-t,\infty)} dP_{W''}(w)\,dP_{W'}(v)
= \frac{\alpha'}{\alpha'+\alpha''}\,e^{-\alpha''(s-t)}
= \frac{\alpha'}{\alpha'+\alpha''}\int_{(s-t,\infty)} dP_{W''}(w) . $$
(4) If $l\geq 2$ and $n_{l-k}=l$, then elimination of the event $\{T_{n_{l-k}}=T''_{l-k}\}$ produces the factor $\alpha''/(\alpha'+\alpha'')$. This follows from (3) by symmetry.
(5) Using (1), (2), (3), and (4), the assertion now follows by induction over $l\in\mathbf{N}$. $\Box$
It is clear that, for each $l\in\mathbf{N}$, the family $\{A_C\}_{C\in\mathcal{C}(l)}$ is disjoint; we shall now show that it is, up to a null set, even a partition of $\Omega$.
6.4.4 Corollary. The identity
$$ \sum_{C\in\mathcal{C}(l)} P[A_C] = 1 $$
holds for all $l\in\mathbf{N}$.
Proof. By Lemma 6.4.3, we have
$$ \sum_{C\in\mathcal{C}(l)} P[A_C]
= \sum_{k=0}^{l}\sum_{C\in\mathcal{C}(l,k)} P[A_C]
= \sum_{k=0}^{l}\binom{l}{k}\Big(\frac{\alpha'}{\alpha'+\alpha''}\Big)^{k}\Big(\frac{\alpha''}{\alpha'+\alpha''}\Big)^{l-k}
= 1 , $$
as was to be shown. $\Box$
6.4.5 Corollary. The identities
$$ \sum_{k=1}^{l} P[T_l=T'_k] = \frac{\alpha'}{\alpha'+\alpha''} \qquad\text{and}\qquad \sum_{k=1}^{l} P[T_l=T''_k] = \frac{\alpha''}{\alpha'+\alpha''} $$
hold for all $l\in\mathbf{N}$.
Proof. For $l\in\mathbf{N}$ and $k\in\{1,\dots,l\}$, let $\mathcal{C}(l,k,k)$ denote the collection of all pairs $(\{m_i\}_{i\in\{1,\dots,k\}},\{n_j\}_{j\in\{1,\dots,l-k\}})\in\mathcal{C}(l)$ satisfying $m_k=l$. Then we have
$$ \sum_{k=1}^{l} P[T_l=T'_k]
= \sum_{k=1}^{l}\sum_{C\in\mathcal{C}(l,k,k)} P[A_C]
= \sum_{k=1}^{l}\sum_{C\in\mathcal{C}(l,k,k)}\Big(\frac{\alpha'}{\alpha'+\alpha''}\Big)^{k}\Big(\frac{\alpha''}{\alpha'+\alpha''}\Big)^{l-k}
= \sum_{k=1}^{l}\binom{l-1}{k-1}\Big(\frac{\alpha'}{\alpha'+\alpha''}\Big)^{k}\Big(\frac{\alpha''}{\alpha'+\alpha''}\Big)^{l-k} $$
$$ = \frac{\alpha'}{\alpha'+\alpha''}\sum_{j=0}^{l-1}\binom{l-1}{j}\Big(\frac{\alpha'}{\alpha'+\alpha''}\Big)^{j}\Big(\frac{\alpha''}{\alpha'+\alpha''}\Big)^{(l-1)-j}
= \frac{\alpha'}{\alpha'+\alpha''} , $$
which is the first identity. The second identity follows by symmetry. $\Box$
For all $n\in\mathbf{N}$, define
$$ X_n := \sum_{k=1}^{n}\Big(\chi_{\{T_n=T'_k\}}\,X'_k + \chi_{\{T_n=T''_k\}}\,X''_k\Big) . $$
The sequence $\{X_n\}_{n\in\mathbf{N}}$ is said to be the superposition of the claim size processes $\{X'_n\}_{n\in\mathbf{N}}$ and $\{X''_n\}_{n\in\mathbf{N}}$.
6.4.6 Theorem (Superposition of Claim Size Processes). The sequence $\{X_n\}_{n\in\mathbf{N}}$ is i. i. d. and satisfies
$$ P_X = \frac{\alpha'}{\alpha'+\alpha''}\,P_{X'} + \frac{\alpha''}{\alpha'+\alpha''}\,P_{X''} . $$
Proof. Consider $l\in\mathbf{N}$ and $B_1,\dots,B_l\in\mathcal{B}(\mathbf{R})$.
For $C=(\{m_i\}_{i\in\{1,\dots,k\}},\{n_j\}_{j\in\{1,\dots,l-k\}})\in\mathcal{C}(l)$, Lemma 6.4.3 yields
$$ P[A_C] = \Big(\frac{\alpha'}{\alpha'+\alpha''}\Big)^{k}\Big(\frac{\alpha''}{\alpha'+\alpha''}\Big)^{l-k} , $$
and hence
$$ P\bigg[\bigcap_{h=1}^{l}\{X_h\in B_h\}\cap A_C\bigg]
= P\bigg[\bigcap_{i\in\{1,\dots,k\}}\{X'_i\in B_{m_i}\}\cap\bigcap_{j\in\{1,\dots,l-k\}}\{X''_j\in B_{n_j}\}\cap A_C\bigg]
= \prod_{i\in\{1,\dots,k\}} P[X'\in B_{m_i}]\;\prod_{j\in\{1,\dots,l-k\}} P[X''\in B_{n_j}]\;P[A_C]
= \prod_{i\in\{1,\dots,k\}}\frac{\alpha'}{\alpha'+\alpha''}\,P_{X'}[B_{m_i}]\;\prod_{j\in\{1,\dots,l-k\}}\frac{\alpha''}{\alpha'+\alpha''}\,P_{X''}[B_{n_j}] . $$
By Corollary 6.4.4, summation over $\mathcal{C}(l)$ yields
$$ P\bigg[\bigcap_{h=1}^{l}\{X_h\in B_h\}\bigg]
= \sum_{C\in\mathcal{C}(l)} P\bigg[\bigcap_{h=1}^{l}\{X_h\in B_h\}\cap A_C\bigg]
= \prod_{h=1}^{l}\bigg(\frac{\alpha'}{\alpha'+\alpha''}\,P_{X'}[B_h]+\frac{\alpha''}{\alpha'+\alpha''}\,P_{X''}[B_h]\bigg) , $$
and the assertion follows. $\Box$
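A sampling sketch of Theorem 6.4.6 follows; it is not part of the original text, and the two claim size laws are assumed for illustration.

```python
# A claim of the superposed risk process stems from the first portfolio with
# probability a1/(a1 + a2), so its size is drawn from the corresponding mixture.
import numpy as np

rng = np.random.default_rng(seed=6)
a1, a2, n_claims = 1.5, 2.5, 100_000

from_first = rng.random(n_claims) < a1 / (a1 + a2)
x = np.where(from_first,
             rng.exponential(1.0, size=n_claims),    # P_X'  = Exp(1)   (assumed)
             rng.exponential(3.0, size=n_claims))    # P_X'' with mean 3 (assumed)

print(x.mean(), (a1 * 1.0 + a2 * 3.0) / (a1 + a2))   # mean of the mixture
```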
We can now prove the main result of this section:
6.4.7 Theorem (Superposition of Poisson Risk Processes). The pair $(\{N_t\}_{t\in\mathbf{R}_+},\{X_n\}_{n\in\mathbf{N}})$ is a Poisson risk process.
Proof. Consider $l\in\mathbf{N}$, $B_1,\dots,B_l\in\mathcal{B}(\mathbf{R})$, and a disjoint family $\{D_h\}_{h\in\{1,\dots,l\}}$ of intervals in $(0,\infty)$ with increasing lower bounds. Define
$$ \gamma := \prod_{h=1}^{l-1}(\alpha'+\alpha'')\,\lambda[D_h]\;\cdot\;P_W[D_l] . $$
If we can show that
$$ P\bigg[\bigcap_{h=1}^{l}\big(\{X_h\in B_h\}\cap\{T_h\in D_h\}\big)\bigg] = \gamma\,P\bigg[\bigcap_{h=1}^{l}\{X_h\in B_h\}\bigg] , $$
then we have
$$ P\bigg[\bigcap_{h=1}^{l}\big(\{X_h\in B_h\}\cap\{T_h\in D_h\}\big)\bigg] = P\bigg[\bigcap_{h=1}^{l}\{X_h\in B_h\}\bigg]\,P\bigg[\bigcap_{h=1}^{l}\{T_h\in D_h\}\bigg] . $$
We proceed in several steps:
(1) The identity
$$ P\bigg[\bigcap_{h=1}^{l}\{T_h\in D_h\}\cap A_C\bigg] = \gamma\,\Big(\frac{\alpha'}{\alpha'+\alpha''}\Big)^{k}\Big(\frac{\alpha''}{\alpha'+\alpha''}\Big)^{l-k} $$
holds for all $C=(\{m_i\}_{i\in\{1,\dots,k\}},\{n_j\}_{j\in\{1,\dots,l-k\}})\in\mathcal{C}(l)$. This is proved by writing the event on the left in terms of the claim interarrival times of the two original processes and computing the resulting iterated integral by means of Lemma 6.4.2, translation invariance of the Lebesgue measure, and the transformation formula.
(2) The identity
$$ P\bigg[\bigcap_{h=1}^{l}\big(\{X_h\in B_h\}\cap\{T_h\in D_h\}\big)\cap A_C\bigg]
= \gamma\,\prod_{i\in\{1,\dots,k\}}\frac{\alpha'}{\alpha'+\alpha''}\,P_{X'}[B_{m_i}]\;\prod_{j\in\{1,\dots,l-k\}}\frac{\alpha''}{\alpha'+\alpha''}\,P_{X''}[B_{n_j}] $$
holds for all $C\in\mathcal{C}(l)$; because of (1), this follows from the independence of the claim size processes and the claim arrival processes.
(3) We have
$$ P\bigg[\bigcap_{h=1}^{l}\big(\{X_h\in B_h\}\cap\{T_h\in D_h\}\big)\bigg]
= \sum_{C\in\mathcal{C}(l)} P\bigg[\bigcap_{h=1}^{l}\big(\{X_h\in B_h\}\cap\{T_h\in D_h\}\big)\cap A_C\bigg]
= \gamma\,\prod_{h=1}^{l}\bigg(\frac{\alpha'}{\alpha'+\alpha''}\,P_{X'}[B_h]+\frac{\alpha''}{\alpha'+\alpha''}\,P_{X''}[B_h]\bigg)
= \gamma\,P\bigg[\bigcap_{h=1}^{l}\{X_h\in B_h\}\bigg] , $$
by Corollary 6.4.4 and because of (2) and Theorem 6.4.6.
(4) We have
$$ P\bigg[\bigcap_{h=1}^{l}\{X_h\in B_h\}\cap\bigcap_{h=1}^{l}\{T_h\in D_h\}\bigg]
= P\bigg[\bigcap_{h=1}^{l}\{X_h\in B_h\}\bigg]\,P\bigg[\bigcap_{h=1}^{l}\{T_h\in D_h\}\bigg] . $$
This follows from (3), since summation of (1) over $\mathcal{C}(l)$ together with Corollary 6.4.4 yields $P[\bigcap_{h=1}^{l}\{T_h\in D_h\}]=\gamma$.
(5) The previous identity remains valid if the sequence $\{D_h\}_{h\in\{1,\dots,l\}}$ of intervals is replaced by a sequence of general Borel sets. This implies that $\{X_n\}_{n\in\mathbf{N}}$ and $\{T_n\}_{n\in\mathbf{N}_0}$ are independent. $\Box$
Let us finally consider the aggregate claims process $\{S_t\}_{t\in\mathbf{R}_+}$ induced by the Poisson risk process $(\{N_t\}_{t\in\mathbf{R}_+},\{X_n\}_{n\in\mathbf{N}})$. If the construction of the claim size process $\{X_n\}_{n\in\mathbf{N}}$ was appropriate, then the aggregate claims process $\{S_t\}_{t\in\mathbf{R}_+}$ should agree with the sum of the aggregate claims processes $\{S'_t\}_{t\in\mathbf{R}_+}$ and $\{S''_t\}_{t\in\mathbf{R}_+}$ induced by the original Poisson risk processes $(\{N'_t\}_{t\in\mathbf{R}_+},\{X'_n\}_{n\in\mathbf{N}})$ and $(\{N''_t\}_{t\in\mathbf{R}_+},\{X''_n\}_{n\in\mathbf{N}})$, respectively. The following result asserts that this is indeed true:
6.4.8 Theorem. The identity
$$ S_t = S'_t + S''_t $$
holds for all $t\in\mathbf{R}_+$.
Proof. For all $\omega\in\{N_t=0\}$, we clearly have
$$ S_t(\omega) = S'_t(\omega)+S''_t(\omega) . $$
Consider now $l\in\mathbf{N}$. For $C=(\{m_i\}_{i\in\{1,\dots,k\}},\{n_j\}_{j\in\{1,\dots,l-k\}})\in\mathcal{C}(l)$, we have
$$ \{N_t=l\}\cap A_C
= \{T_l\leq t<T_{l+1}\}\cap A_C
= \{T'_k\leq t<T'_{k+1}\}\cap\{T''_{l-k}\leq t<T''_{l-k+1}\}\cap A_C
= \{N'_t=k\}\cap\{N''_t=l-k\}\cap A_C $$
and hence, for all $\omega\in\{N_t=l\}\cap A_C$,
$$ S_t(\omega)
= \sum_{h=1}^{N_t(\omega)}X_h(\omega)
= \sum_{h=1}^{l}X_h(\omega)
= \sum_{i=1}^{k}X_{m_i}(\omega)+\sum_{j=1}^{l-k}X_{n_j}(\omega)
= \sum_{i=1}^{N'_t(\omega)}X'_i(\omega)+\sum_{j=1}^{N''_t(\omega)}X''_j(\omega)
= S'_t(\omega)+S''_t(\omega) . $$
By Corollary 6.4.4, this yields
$$ S_t(\omega) = S'_t(\omega)+S''_t(\omega) $$
for all $\omega\in\{N_t=l\}$.
We conclude that
$$ S_t = S'_t+S''_t , $$
as was to be shown. $\Box$
Problems
6.4.A Extend the results of this section to more than two independent Poisson risk
processes.
6.4.B Study the superposition problem for independent risk processes which are not
Poisson risk processes.
6.4.C Discrete Time Model: The sum of two independent binomial processes may
fail to be a claim number process.
6.5 Remarks
For theoretical considerations, excess of loss reinsurance is probably the simplest
form of reinsurance. Formally, excess of loss reinsurance with a priority held by the
direct insurer is the same as direct insurance with a deductible to be paid by the
insured.
For further information on reinsurance, see Gerathewohl [1976, 1979] and Dienst
[1988]; for a discussion of the impact of deductibles, see Sterk [1979, 1980, 1988].
The superposition problem for renewal processes was studied by Störmer [1969].
Chapter 7
The Reserve Process and the
Ruin Problem
In the present chapter we introduce the reserve process and study the ruin problem.
We first extend the model considered so far and discuss some technical aspects of the ruin problem (Section 7.1). We next prove Kolmogorov's inequality for positive supermartingales (Section 7.2) and then apply this inequality to obtain Lundberg's inequality for the probability of ruin in the case where the excess premium process has a superadjustment coefficient (Section 7.3). We finally give some sufficient conditions for the existence of a superadjustment coefficient (Section 7.4).
7.1 The Model
Throughout this chapter, let {N_t}_{t∈ℝ₊} be a claim number process, let {T_n}_{n∈ℕ₀} be the claim arrival process induced by the claim number process, and let {W_n}_{n∈ℕ} be the claim interarrival process induced by the claim arrival process. We assume that the exceptional null set is empty and that the probability of explosion is equal to zero.
Furthermore, let {X_n}_{n∈ℕ} be a claim size process, let {S_t}_{t∈ℝ₊} be the aggregate claims process induced by the claim number process and the claim size process, and let κ ∈ (0, ∞). For u ∈ (0, ∞) and all t ∈ ℝ₊, define
$$
R^u_t := u + \kappa t - S_t\,.
$$
Of course, we have R^u_0 = u.
Interpretation:
– κ is the premium intensity, so that κt is the premium income up to time t.
– u is the initial reserve.
– R^u_t is the reserve at time t when the initial reserve is u.
Accordingly, the family {R^u_t}_{t∈ℝ₊} is said to be the reserve process induced by the claim number process, the claim size process, the premium intensity, and the initial reserve.
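As an illustration only (this sketch is not part of the text), the following lines simulate one path of the reserve process under assumed model choices: a Poisson claim number process with rate 1, exponential claim sizes with mean 1, premium intensity κ = 1.2, and initial reserve u = 5. All numbers are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)
    kappa, u, horizon = 1.2, 5.0, 20.0        # premium intensity, initial reserve, horizon

    arrivals = np.cumsum(rng.exponential(1.0, size=200))   # claim arrival times T_n
    arrivals = arrivals[arrivals <= horizon]
    claims = rng.exponential(1.0, size=len(arrivals))      # claim sizes X_n

    def reserve(t):
        """R^u_t = u + kappa*t - S_t, with S_t the aggregate claims up to time t."""
        return u + kappa * t - claims[arrivals <= t].sum()

    print([round(reserve(t), 2) for t in (0.0, 5.0, 10.0, 20.0)])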
[Figure: Claim Arrival Process and Aggregate Claims Process]
We are interested in the ruin problem for the reserve process. This is the problem of calculating or estimating the probability of the event that the reserve process falls below zero at some time.
[Figure: Claim Arrival Process and Reserve Process]
In order to give a precise formulation of the ruin problem, we need the following measure-theoretic consideration:
Let {Z_t}_{t∈ℝ₊} be a family of measurable functions Ω → [−∞, ∞]. A measurable function Z : Ω → [−∞, ∞] is said to be an essential infimum of {Z_t}_{t∈ℝ₊} if it has the following properties:
– P[{Z ≤ Z_t}] = 1 holds for all t ∈ ℝ₊, and
– every measurable function Y : Ω → [−∞, ∞] such that P[{Y ≤ Z_t}] = 1 holds for all t ∈ ℝ₊ satisfies P[{Y ≤ Z}] = 1.
The definition implies that any two essential infima of {Z_t}_{t∈ℝ₊} are identical with probability one. The almost surely unique essential infimum of the family {Z_t}_{t∈ℝ₊} is denoted by inf_{t∈ℝ₊} Z_t.
7.1.1 Lemma. Every family {Z_t}_{t∈ℝ₊} of measurable functions Ω → [−∞, ∞] possesses an essential infimum.
Proof. Without loss of generality, we may assume that Z_t(ω) ∈ [−1, 1] holds for all t ∈ ℝ₊ and ω ∈ Ω.
Let 𝒥 denote the collection of all countable subsets of ℝ₊. For J ∈ 𝒥, consider the measurable function Z_J satisfying
$$
Z_J(\omega) = \inf_{t\in J} Z_t(\omega)\,.
$$
Define
$$
c := \inf_{J\in\mathcal{J}} E[Z_J]\,,
$$
choose a sequence {J_n}_{n∈ℕ} in 𝒥 satisfying
$$
c = \inf_{n\in\mathbb{N}} E[Z_{J_n}]\,,
$$
and define J_* := ⋃_{n∈ℕ} J_n. Then we have J_* ∈ 𝒥. Since c ≤ E[Z_{J_*}] ≤ E[Z_{J_n}] holds for all n ∈ ℕ, we obtain
$$
c = E[Z_{J_*}]\,.
$$
For each t ∈ ℝ₊, we have Z_{J_*\cup\{t\}} = Z_{J_*} ∧ Z_t, hence
$$
\begin{aligned}
c &\le E[Z_{J_*\cup\{t\}}] \\
&= E[Z_{J_*}\wedge Z_t] \\
&\le E[Z_{J_*}] \\
&= c\,,
\end{aligned}
$$
whence
$$
P[\{Z_{J_*}\wedge Z_t = Z_{J_*}\}] = 1\,,
$$
and thus
$$
P[\{Z_{J_*}\le Z_t\}] = 1\,.
$$
The assertion follows. □
Let us now return to the reserve process {R^u_t}_{t∈ℝ₊}. By Lemma 7.1.1, the reserve process has an essential infimum, which is denoted by
$$
\inf_{t\in\mathbb{R}_+} R^u_t\,.
$$
In particular, we have {inf_{t∈ℝ₊} R^u_t < 0} ∈ ℱ. The event {inf_{t∈ℝ₊} R^u_t < 0} is called ruin of the reserve process, and its probability is denoted by
$$
\psi(u) := P[\inf_{t\in\mathbb{R}_+} R^u_t < 0]
$$
in order to emphasize the dependence of the probability of ruin on the initial reserve.
In practice, an upper bound ε for the probability of ruin is given in advance and the insurer is interested in choosing the initial reserve u such that
$$
\psi(u) \le \varepsilon\,.
$$
In principle, one would like to choose u such that
$$
\psi(u) = \varepsilon\,,
$$
but the problem of computing the probability of ruin is even harder than the problem of computing the accumulated claims distribution. It is therefore desirable to have an upper bound ψ'(u) for the probability of ruin when the initial reserve is u, and to choose u such that
$$
\psi'(u) = \varepsilon\,.
$$
Since
$$
\psi(u) \le \psi'(u)\,,
$$
the insurer is on the safe side but possibly binds too much capital.
It is intuitively clear that the time when the reserve process falls below zero for the first time must be a claim arrival time. To make this point precise, we introduce a discretization of the reserve process: For n ∈ ℕ, define
$$
G_n := \kappa W_n - X_n\,,
$$
and for n ∈ ℕ₀ define
$$
U^u_n := u + \sum_{k=1}^{n} G_k\,.
$$
Of course, we have U^u_0 = u.
The sequence {G_n}_{n∈ℕ} is said to be the excess premium process, and the sequence {U^u_n}_{n∈ℕ₀} is said to be the modified reserve process.
The following result shows that the probability of ruin is determined by the modified reserve process:
7.1.2 Lemma. The probability of ruin satisfies
$$
\psi(u) = P[\inf_{n\in\mathbb{N}_0} U^u_n < 0]\,.
$$
Proof. Define A := {sup_{n∈ℕ₀} T_n < ∞}. For each ω ∈ Ω∖A, we have
$$
\begin{aligned}
\inf_{t\in\mathbb{R}_+} R^u_t(\omega)
&= \inf_{t\in\mathbb{R}_+}\left( u + \kappa t - \sum_{k=1}^{N_t(\omega)} X_k(\omega) \right) \\
&= \inf_{n\in\mathbb{N}_0}\;\inf_{t\in[T_n(\omega),\,T_{n+1}(\omega))}\left( u + \kappa t - \sum_{k=1}^{N_t(\omega)} X_k(\omega) \right) \\
&= \inf_{n\in\mathbb{N}_0}\;\inf_{t\in[T_n(\omega),\,T_{n+1}(\omega))}\left( u + \kappa t - \sum_{k=1}^{n} X_k(\omega) \right) \\
&= \inf_{n\in\mathbb{N}_0}\left( u + \kappa T_n(\omega) - \sum_{k=1}^{n} X_k(\omega) \right) \\
&= \inf_{n\in\mathbb{N}_0}\left( u + \kappa \sum_{k=1}^{n} W_k(\omega) - \sum_{k=1}^{n} X_k(\omega) \right) \\
&= \inf_{n\in\mathbb{N}_0}\left( u + \sum_{k=1}^{n} \bigl(\kappa W_k(\omega) - X_k(\omega)\bigr) \right) \\
&= \inf_{n\in\mathbb{N}_0}\left( u + \sum_{k=1}^{n} G_k(\omega) \right) \\
&= \inf_{n\in\mathbb{N}_0} U^u_n(\omega)\,.
\end{aligned}
$$
Since the probability of explosion is assumed to be zero, we have P[A] = 0, hence
$$
\inf_{t\in\mathbb{R}_+} R^u_t = \inf_{n\in\mathbb{N}_0} U^u_n\,,
$$
and thus
$$
\psi(u) = P[\inf_{t\in\mathbb{R}_+} R^u_t < 0] = P[\inf_{n\in\mathbb{N}_0} U^u_n < 0]\,,
$$
as was to be shown. □
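The discretization in Lemma 7.1.2 is easy to see on a simulated path. The following sketch (not part of the text; the Poisson arrivals, exponential claim sizes, and all numbers are illustrative assumptions) compares the infimum of R^u_t over a fine time grid with the infimum of the modified reserve process U^u_n.

    import numpy as np

    rng = np.random.default_rng(5)
    kappa, u, horizon = 1.3, 3.0, 30.0

    W = rng.exponential(1.0, size=200)            # claim interarrival times
    T = np.cumsum(W)                              # claim arrival times
    X = rng.exponential(1.0, size=200)            # claim sizes
    keep = T <= horizon
    T, X, W = T[keep], X[keep], W[keep]

    # Continuous-time reserve evaluated on a fine grid
    grid = np.linspace(0.0, horizon, 300001)
    S = np.cumsum(X)
    claims_up_to = np.searchsorted(T, grid, side="right")
    R = u + kappa * grid - np.concatenate([[0.0], S])[claims_up_to]

    # Modified reserve process U^u_n = u + sum_{k<=n} (kappa*W_k - X_k)
    U = np.concatenate([[u], u + np.cumsum(kappa * W - X)])

    print(R.min(), U.min())   # the two infima agree up to the grid resolution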
In the case where the excess premiums are i.i.d. with finite expectation, the previous lemma yields a first result on the probability of ruin which sheds some light on the different roles of the initial reserve and the premium intensity:
7.1.3 Theorem. Assume that the sequence {G_n}_{n∈ℕ} is i.i.d. with nondegenerate distribution and finite expectation. If E[G] ≤ 0, then, for every initial reserve, the probability of ruin is equal to one.
Proof. By the Chung–Fuchs theorem, we have
$$
\begin{aligned}
1 &= P\!\left[\left\{\liminf_{n\to\infty}\sum_{k=1}^{n} G_k = -\infty\right\}\right] \\
&\le P\!\left[\left\{\liminf_{n\to\infty}\sum_{k=1}^{n} G_k < -u\right\}\right] \\
&\le P\!\left[\left\{\inf_{n\in\mathbb{N}}\sum_{k=1}^{n} G_k < -u\right\}\right] \\
&= P[\inf_{n\in\mathbb{N}} U^u_n < 0]\,,
\end{aligned}
$$
and thus
$$
P[\inf_{n\in\mathbb{N}} U^u_n < 0] = 1\,.
$$
The assertion now follows from Lemma 7.1.2. □
7.1.4 Corollary. Assume that the sequences {W_n}_{n∈ℕ} and {X_n}_{n∈ℕ} are independent and that each of them is i.i.d. with nondegenerate distribution and finite expectation. If κ ≤ E[X]/E[W], then, for every initial reserve, the probability of ruin is equal to one.
It is interesting to note that Theorem 7.1.3 and Corollary 7.1.4 do not involve any assumption on the initial reserve.
In the situation of Corollary 7.1.4 we see that, in order to prevent the probability of ruin from being equal to one, the premium intensity κ must be large enough to ensure that the expected premium income per claim, κE[W], is strictly greater than the expected claim size E[X]. The expected claim size is called the net premium, and the expected excess premium,
$$
E[G] = \kappa E[W] - E[X]\,,
$$
is said to be the safety loading of the (modified) reserve process. For later reference, we note that the safety loading is strictly positive if and only if the premium intensity satisfies
$$
\kappa > \frac{E[X]}{E[W]}\,.
$$
We shall return to the situation of Corollary 7.1.4 in Section 7.4 below.
7.2 Kolmogorov's Inequality for Positive Supermartingales
Our aim is to establish an upper bound for the probability of ruin under a suitable assumption on the excess premium process. The proof of this inequality will be based on Kolmogorov's inequality for positive supermartingales, which is the subject of the present section.
Consider a sequence {Z_n}_{n∈ℕ₀} of random variables having finite expectations and let {ℱ_n}_{n∈ℕ₀} be the canonical filtration for {Z_n}_{n∈ℕ₀}.
A map τ : Ω → ℕ₀ is a stopping time for {ℱ_n}_{n∈ℕ₀} if {τ = n} ∈ ℱ_n holds for all n ∈ ℕ₀, and it is bounded if sup_{ω∈Ω} τ(ω) < ∞. Let 𝒯 denote the collection of all bounded stopping times for {ℱ_n}_{n∈ℕ₀}, and note that ℕ₀ ⊆ 𝒯.
For τ ∈ 𝒯, define
$$
Z_\tau := \sum_{n=0}^{\infty} \chi_{\{\tau=n\}}\, Z_n\,.
$$
Then Z_τ is a random variable satisfying
$$
|Z_\tau| = \sum_{n=0}^{\infty} \chi_{\{\tau=n\}}\, |Z_n|
$$
as well as
$$
E[Z_\tau] = \sum_{n=0}^{\infty} \int_{\{\tau=n\}} Z_n\, dP\,.
$$
Note that the sums occurring in the definition of Z_τ and in the formulas for |Z_τ| and E[Z_τ] actually extend only over a finite number of terms.
7.2.1 Lemma (Maximal Inequality). The inequality
$$
P\Bigl[\sup_{n\in\mathbb{N}_0} |Z_n| > \varepsilon\Bigr] \le \frac{1}{\varepsilon}\,\sup_{\tau\in\mathcal{T}} E[|Z_\tau|]
$$
holds for all ε ∈ (0, ∞).
Proof. Define
$$
A_n := \{|Z_n| > \varepsilon\} \setminus \bigcup_{k=1}^{n-1} \{|Z_k| > \varepsilon\}\,.
$$
Then we have {sup_{n∈ℕ₀} |Z_n| > ε} = ⋃_{n=0}^{∞} A_n and hence
$$
P\Bigl[\sup_{n\in\mathbb{N}_0} |Z_n| > \varepsilon\Bigr] = \sum_{n=0}^{\infty} P[A_n]\,.
$$
Consider r ∈ ℕ₀, and define a random variable τ_r by letting
$$
\tau_r(\omega) :=
\begin{cases}
n & \text{if } \omega \in A_n \text{ and } n \in \{0, 1, \ldots, r\} \\
r & \text{if } \omega \in \Omega \setminus \bigcup_{n=0}^{r} A_n\,.
\end{cases}
$$
Then we have τ_r ∈ 𝒯, and hence
$$
\begin{aligned}
\sum_{n=0}^{r} P[A_n]
&\le \sum_{n=0}^{r} \frac{1}{\varepsilon} \int_{A_n} |Z_n|\, dP \\
&= \frac{1}{\varepsilon} \sum_{n=0}^{r} \int_{A_n} |Z_{\tau_r}|\, dP \\
&\le \frac{1}{\varepsilon} \int_{\Omega} |Z_{\tau_r}|\, dP \\
&\le \frac{1}{\varepsilon} \sup_{\tau\in\mathcal{T}} E[|Z_\tau|]\,.
\end{aligned}
$$
Therefore, we have
$$
P\Bigl[\sup_{n\in\mathbb{N}_0} |Z_n| > \varepsilon\Bigr] = \sum_{n=0}^{\infty} P[A_n] \le \frac{1}{\varepsilon}\,\sup_{\tau\in\mathcal{T}} E[|Z_\tau|]\,,
$$
as was to be shown. □
7.2.2 Lemma. The following are equivalent:
(a) {Z_n}_{n∈ℕ₀} is a supermartingale.
(b) The inequality E[Z_τ] ≤ E[Z_σ] holds for all σ, τ ∈ 𝒯 such that σ ≤ τ.
Proof. Assume first that (a) holds and consider σ, τ ∈ 𝒯 such that σ ≤ τ. For all k, n ∈ ℕ₀ such that n ≥ k, we have {σ = k} ∩ {τ ≥ n+1} = {σ = k} ∖ ({σ = k} ∩ {τ ≤ n}) ∈ ℱ_n, and thus
$$
\begin{aligned}
\int_{\{\sigma=k\}\cap\{\tau\ge n\}} Z_n\, dP
&= \int_{\{\sigma=k\}\cap\{\tau=n\}} Z_n\, dP + \int_{\{\sigma=k\}\cap\{\tau\ge n+1\}} Z_n\, dP \\
&\ge \int_{\{\sigma=k\}\cap\{\tau=n\}} Z_n\, dP + \int_{\{\sigma=k\}\cap\{\tau\ge n+1\}} Z_{n+1}\, dP\,.
\end{aligned}
$$
Induction yields
$$
\int_{\{\sigma=k\}} Z_k\, dP = \int_{\{\sigma=k\}\cap\{\tau\ge k\}} Z_k\, dP \ge \sum_{n=k}^{\infty} \int_{\{\sigma=k\}\cap\{\tau=n\}} Z_n\, dP\,,
$$
and this gives
$$
\begin{aligned}
\int_{\Omega} Z_\sigma\, dP
&= \sum_{k=0}^{\infty} \int_{\{\sigma=k\}} Z_k\, dP \\
&\ge \sum_{k=0}^{\infty} \sum_{n=k}^{\infty} \int_{\{\sigma=k\}\cap\{\tau=n\}} Z_n\, dP \\
&= \sum_{n=0}^{\infty} \sum_{k=0}^{n} \int_{\{\sigma=k\}\cap\{\tau=n\}} Z_n\, dP \\
&= \sum_{n=0}^{\infty} \int_{\{\tau=n\}} Z_n\, dP \\
&= \int_{\Omega} Z_\tau\, dP\,.
\end{aligned}
$$
Therefore, (a) implies (b).
Assume now that (b) holds. Consider n ∈ ℕ₀ and A ∈ ℱ_n, and define a random variable τ by letting
$$
\tau(\omega) :=
\begin{cases}
n + 1 & \text{if } \omega \in A \\
n & \text{if } \omega \in \Omega \setminus A\,.
\end{cases}
$$
Then we have τ ∈ 𝒯 and n ≤ τ, hence
$$
\begin{aligned}
\int_{A} Z_n\, dP + \int_{\Omega\setminus A} Z_n\, dP
&= \int_{\Omega} Z_n\, dP \\
&\ge \int_{\Omega} Z_\tau\, dP \\
&= \int_{A} Z_{n+1}\, dP + \int_{\Omega\setminus A} Z_n\, dP\,,
\end{aligned}
$$
and thus
$$
\int_{A} Z_n\, dP \ge \int_{A} Z_{n+1}\, dP\,.
$$
Therefore, (b) implies (a). □
7.2.3 Lemma. The following are equivalent:
(a) {Z_n}_{n∈ℕ₀} is a martingale.
(b) The identity E[Z_τ] = E[Z_σ] holds for all σ, τ ∈ 𝒯.
The proof of Lemma 7.2.3 is similar to that of Lemma 7.2.2.
The sequence {Z_n}_{n∈ℕ₀} is positive if each Z_n is positive.
7.2.4 Corollary (Kolmogorov's Inequality). If {Z_n}_{n∈ℕ₀} is a positive supermartingale, then the inequality
$$
P\Bigl[\sup_{n\in\mathbb{N}_0} Z_n > \varepsilon\Bigr] \le \frac{1}{\varepsilon}\, E[Z_0]
$$
holds for all ε ∈ (0, ∞).
This is immediate from Lemmas 7.2.1 and 7.2.2.
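As a quick plausibility check (not from the text), the following sketch builds a positive supermartingale as a product of independent nonnegative factors with mean at most one and compares the empirical frequency of {sup_n Z_n > ε} with the bound E[Z_0]/ε. The factor distribution and all numbers are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    n_paths, n_steps, eps = 20000, 50, 2.0

    # Positive supermartingale: Z_0 = 1 and Z_n = Z_{n-1} * F_n with independent
    # factors F_n >= 0 and E[F_n] = 0.95 <= 1.
    factors = rng.exponential(0.95, size=(n_paths, n_steps))
    Z = np.cumprod(factors, axis=1)
    sup_Z = np.maximum(1.0, Z.max(axis=1))       # include Z_0 = 1 in the supremum

    print("empirical P[sup Z_n > eps]:", (sup_Z > eps).mean())
    print("Kolmogorov bound E[Z_0]/eps:", 1.0 / eps)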
7.3 Lundberg's Inequality
Throughout this section, we assume that the sequence of excess premiums {G_n}_{n∈ℕ} is independent.
A constant ϱ ∈ (0, ∞) is a superadjustment coefficient for the excess premium process {G_n}_{n∈ℕ} if it satisfies
$$
E\bigl[e^{-\varrho G_n}\bigr] \le 1
$$
for all n ∈ ℕ, and it is an adjustment coefficient for the excess premium process if it satisfies
$$
E\bigl[e^{-\varrho G_n}\bigr] = 1
$$
for all n ∈ ℕ. The excess premium process need not possess a superadjustment coefficient; if the distribution of some excess premium is nondegenerate, then the excess premium process has at most one adjustment coefficient.
Let {ℱ_n}_{n∈ℕ₀} denote the canonical filtration for {U^u_n}_{n∈ℕ₀}.
7.3.1 Lemma. For ϱ ∈ (0, ∞), the identity
$$
\int_{A} e^{-\varrho U^u_{n+1}}\, dP = \int_{A} e^{-\varrho U^u_n}\, dP \cdot \int_{\Omega} e^{-\varrho G_{n+1}}\, dP
$$
holds for all n ∈ ℕ₀ and A ∈ ℱ_n.
Proof. For all n ∈ ℕ₀, we have
$$
\mathcal{F}_n = \sigma\bigl(\{U^u_k\}_{k\in\{0,1,\ldots,n\}}\bigr) = \sigma\bigl(\{G_k\}_{k\in\{1,\ldots,n\}}\bigr)\,.
$$
Since the sequence {G_n}_{n∈ℕ} is independent, this yields
$$
\begin{aligned}
\int_{A} e^{-\varrho U^u_{n+1}}\, dP
&= \int_{\Omega} \chi_A\, e^{-\varrho (U^u_n + G_{n+1})}\, dP \\
&= \int_{\Omega} \chi_A\, e^{-\varrho U^u_n}\, e^{-\varrho G_{n+1}}\, dP \\
&= \int_{\Omega} \chi_A\, e^{-\varrho U^u_n}\, dP \cdot \int_{\Omega} e^{-\varrho G_{n+1}}\, dP \\
&= \int_{A} e^{-\varrho U^u_n}\, dP \cdot \int_{\Omega} e^{-\varrho G_{n+1}}\, dP
\end{aligned}
$$
for all n ∈ ℕ₀ and A ∈ ℱ_n. The assertion follows. □
As an immediate consequence of Lemma 7.3.1, we obtain the following characterizations of superadjustment coefficients and adjustment coefficients:
7.3.2 Corollary. For ϱ ∈ (0, ∞), the following are equivalent:
(a) ϱ is a superadjustment coefficient for the excess premium process.
(b) For every u ∈ (0, ∞), the sequence {e^{−ϱU^u_n}}_{n∈ℕ₀} is a supermartingale.
7.3.3 Corollary. For ϱ ∈ (0, ∞), the following are equivalent:
(a) ϱ is an adjustment coefficient for the excess premium process.
(b) For every u ∈ (0, ∞), the sequence {e^{−ϱU^u_n}}_{n∈ℕ₀} is a martingale.
The main result of this section is the following:
7.3.4 Theorem (Lundberg's Inequality). If ϱ ∈ (0, ∞) is a superadjustment coefficient for the excess premium process, then the inequality
$$
P[\inf_{n\in\mathbb{N}_0} U^u_n < 0] \le e^{-\varrho u}
$$
holds for all u ∈ (0, ∞).
Proof. By Corollaries 7.3.2 and 7.2.4, we have
$$
\begin{aligned}
P[\inf_{n\in\mathbb{N}_0} U^u_n < 0]
&= P\Bigl[\sup_{n\in\mathbb{N}_0} e^{-\varrho U^u_n} > 1\Bigr] \\
&\le E\bigl[e^{-\varrho U^u_0}\bigr] \\
&= E\bigl[e^{-\varrho u}\bigr] \\
&= e^{-\varrho u}\,,
\end{aligned}
$$
as was to be shown. □
The upper bound for the probability of ruin provided by Lundberg's inequality depends explicitly on the initial reserve u. Implicitly, it also depends, via the superadjustment coefficient ϱ, on the premium intensity κ.
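To make the bound concrete, here is an illustrative sketch (not from the text). For a model with assumed exponential interarrival times and exponential claim sizes, it estimates P[inf_n U^u_n < 0] over a truncated horizon by Monte Carlo and compares the estimate with e^{−ϱu}, using the closed-form adjustment coefficient derived for this special case in Section 7.4 below. All parameter values are hypothetical.

    import numpy as np

    rng = np.random.default_rng(3)
    alpha, beta, kappa, u = 1.0, 1.0, 1.5, 5.0       # assumed parameters, kappa > alpha/beta
    rho = beta - alpha / kappa                        # adjustment coefficient (see Section 7.4)

    n_paths, n_steps = 10000, 300                     # finite truncation of the infimum over n
    W = rng.exponential(1.0 / alpha, size=(n_paths, n_steps))   # claim interarrival times
    X = rng.exponential(1.0 / beta, size=(n_paths, n_steps))    # claim sizes
    U = u + np.cumsum(kappa * W - X, axis=1)                    # modified reserve process U^u_n

    print("simulated ruin frequency (truncated horizon):", (U.min(axis=1) < 0).mean())
    print("Lundberg bound exp(-rho*u):", np.exp(-rho * u))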
Problem
7.3.A Assume that the sequence {X_n}_{n∈ℕ} is independent. If there exists some ϱ ∈ (0, ∞) satisfying E[e^{ϱX_n}] ≤ 1 for all n ∈ ℕ, then the inequality
$$
P\Bigl[\sup_{t\in\mathbb{R}_+} S_t > c\Bigr] \le e^{-\varrho c}
$$
holds for all c ∈ (0, ∞).
Hint: Extend Lemma 7.1.2 and Lundberg's inequality to the case κ = 0.
7.4 On the Existence of a Superadjustment Coefficient
In the present section, we study the existence of a (super)adjustment coefficient.
We first consider the case where the excess premiums are i.i.d. According to the following result, we have to assume that the safety loading is strictly positive:
7.4.1 Theorem. Assume that the sequence {G_n}_{n∈ℕ} is i.i.d. with nondegenerate distribution and finite expectation. If the excess premium process has a superadjustment coefficient, then E[G] > 0.
Proof. The assertion follows from Theorems 7.3.4 and 7.1.3. □
7.4.2 Corollary. Assume that the sequences {W_n}_{n∈ℕ} and {X_n}_{n∈ℕ} are independent and that each of them is i.i.d. with nondegenerate distribution and finite expectation. If the excess premium process has a superadjustment coefficient, then κ > E[X]/E[W].
The previous result has a partial converse:
7.4.3 Theorem. Assume that
(i) {W_n}_{n∈ℕ} and {X_n}_{n∈ℕ} are independent,
(ii) {W_n}_{n∈ℕ} is i.i.d. and satisfies sup{z ∈ ℝ₊ | E[e^{zW}] < ∞} ∈ (0, ∞], and
(iii) {X_n}_{n∈ℕ} is i.i.d. and satisfies sup{z ∈ ℝ₊ | E[e^{zX}] < ∞} ∈ (0, ∞) as well as P_X[ℝ₊] = 1.
If κ > E[X]/E[W], then the excess premium process has an adjustment coefficient.
Proof. By assumption, the sequence {G_n}_{n∈ℕ} is i.i.d. For all z ∈ ℝ, we have
$$
E\bigl[e^{zG}\bigr] = E\bigl[e^{z\kappa W}\bigr]\, E\bigl[e^{-zX}\bigr]\,,
$$
and hence
$$
M_G(z) = M_{\kappa W}(z)\, M_{-X}(z)\,.
$$
By assumption, there exists some z' ∈ (0, ∞) such that the moment generating functions of W and of X are both finite on the interval (−∞, z'). Differentiation gives
$$
M'_G(z) = M'_{\kappa W}(z)\, M_{-X}(z) + M_{\kappa W}(z)\, M'_{-X}(z)
$$
for all z in a neighbourhood of 0, and thus
$$
M'_G(0) = \kappa\, E[W] - E[X]\,.
$$
By assumption, there exists some z° ∈ (0, ∞) satisfying M_X(z°) = ∞ and hence M_G(−z°) = ∞. Since M_G(0) = 1 and M'_G(0) > 0, it follows that there exists some ϱ ∈ (0, z°) satisfying E[e^{−ϱG}] = M_G(−ϱ) = 1. But this means that ϱ is an adjustment coefficient for the excess premium process. □
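The existence argument above suggests a simple numerical procedure (not from the text): evaluate M_G(−z) = M_W(−κz)·M_X(z) and locate the root of M_G(−z) = 1 on (0, z°) by bisection. The sketch below does this for assumed exponential interarrival times and Gamma-distributed claim sizes; the distributions and all parameter values are illustrative assumptions only.

    from math import exp

    alpha, kappa = 1.0, 1.5          # Poisson rate and premium intensity (assumed)
    b, c = 2.0, 1.5                  # Gamma(rate=b, shape=c) claim sizes (assumed)

    def h(z):
        """E[exp(-z*G)] - 1 = M_W(-kappa*z) * M_X(z) - 1 for G = kappa*W - X."""
        m_w = alpha / (alpha + kappa * z)          # mgf of Exp(alpha) at -kappa*z
        m_x = (b / (b - z)) ** c                   # mgf of Gamma(b, c) at z, valid for z < b
        return m_w * m_x - 1.0

    # Bisection on (0, b): h is negative near 0 when the safety loading is positive,
    # and h(z) tends to +infinity as z approaches b.
    lo, hi = 1e-9, b - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) < 0 else (lo, mid)

    rho = 0.5 * (lo + hi)
    print("adjustment coefficient rho ~", round(rho, 6))
    print("Lundberg bound for u = 10:", exp(-rho * 10.0))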
In Corollary 7.4.2 and Theorem 7.4.3, the claim interarrival times are i.i.d., which means that the claim number process is a renewal process. A particular renewal process is the Poisson process:
7.4.4 Corollary. Assume that
(i) {N_t}_{t∈ℝ₊} and {X_n}_{n∈ℕ} are independent,
(ii) {N_t}_{t∈ℝ₊} is a Poisson process with parameter α, and
(iii) {X_n}_{n∈ℕ} is i.i.d. and satisfies sup{z ∈ ℝ₊ | E[e^{zX}] < ∞} ∈ (0, ∞) as well as P_X[ℝ₊] = 1.
If κ > αE[X], then the excess premium process has an adjustment coefficient.
Proof. By Lemmas 2.1.3 and 1.1.1, the claim interarrival process {W_n}_{n∈ℕ} and the claim size process {X_n}_{n∈ℕ} are independent.
By Theorem 2.3.4, the sequence {W_n}_{n∈ℕ} is i.i.d. with P_W = Exp(α), and this yields sup{z ∈ ℝ₊ | E[e^{zW}] < ∞} = α and E[W] = 1/α.
The assertion now follows from Theorem 7.4.3. □
In order to apply Lundberg's inequality, results on the existence of an adjustment coefficient are, of course, not sufficient; instead, the adjustment coefficient has to be determined explicitly. To this end, the distributions of the excess premiums have to be specified, and this is usually done by specifying the distributions of the claim interarrival times and those of the claim severities.
7.4.5 Theorem. Assume that
(i) {N_t}_{t∈ℝ₊} and {X_n}_{n∈ℕ} are independent,
(ii) {N_t}_{t∈ℝ₊} is a Poisson process with parameter α, and
(iii) {X_n}_{n∈ℕ} is i.i.d. and satisfies P_X = Exp(β).
If κ > α/β, then β − α/κ is an adjustment coefficient for the excess premium process. In particular,
$$
P[\inf_{n\in\mathbb{N}} U^u_n < 0] \le e^{-(\beta - \alpha/\kappa)\,u}\,.
$$
Proof. By Theorem 2.3.4, the sequence {W_n}_{n∈ℕ} is i.i.d. with P_W = Exp(α). Define
$$
\varrho := \beta - \frac{\alpha}{\kappa}\,.
$$
Then we have
$$
\begin{aligned}
E\bigl[e^{-\varrho G}\bigr]
&= E\bigl[e^{-\varrho\kappa W}\bigr]\, E\bigl[e^{\varrho X}\bigr] \\
&= M_W(-\varrho\kappa)\, M_X(\varrho) \\
&= \frac{\alpha}{\alpha + \varrho\kappa}\cdot\frac{\beta}{\beta - \varrho} \\
&= 1\,,
\end{aligned}
$$
which means that ϱ is an adjustment coefficient for the excess premium process. The inequality for the probability of ruin follows from Theorem 7.3.4. □
In the previous result, the bound for the probability of ruin decreases when either
the initial reserve or the premium intensity increases.
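A quick numerical cross-check (illustrative only, not from the text): for exponential claim sizes, the closed form β − α/κ should satisfy E[e^{−ϱG}] = 1 exactly. The sketch verifies this by Monte Carlo with arbitrary assumed parameters.

    import numpy as np

    rng = np.random.default_rng(4)
    alpha, beta, kappa = 2.0, 3.0, 1.0      # assumed parameters with kappa > alpha/beta
    rho = beta - alpha / kappa              # closed-form adjustment coefficient

    W = rng.exponential(1.0 / alpha, size=1_000_000)
    X = rng.exponential(1.0 / beta, size=1_000_000)
    print(np.exp(-rho * (kappa * W - X)).mean())   # should be close to 1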
Let us finally turn to a more general situation in which the excess premiums are still independent but need not be identically distributed. In this situation, the existence of an adjustment coefficient cannot be expected, and superadjustment coefficients come into their own right:
7.4.6 Theorem. Let {α_n}_{n∈ℕ} and {β_n}_{n∈ℕ} be two sequences of real numbers in (0, ∞) such that α := sup_{n∈ℕ} α_n < ∞ and β := inf_{n∈ℕ} β_n > 0. Assume that
(i) {N_t}_{t∈ℝ₊} and {X_n}_{n∈ℕ} are independent,
(ii) {N_t}_{t∈ℝ₊} is a regular Markov process with intensities {λ_n}_{n∈ℕ} satisfying λ_n(t) = α_n for all n ∈ ℕ and t ∈ ℝ₊, and
(iii) {X_n}_{n∈ℕ} is independent and satisfies P_{X_n} = Exp(β_n) for all n ∈ ℕ.
If κ > α/β, then β − α/κ is a superadjustment coefficient for the excess premium process. In particular,
$$
P[\inf_{n\in\mathbb{N}} U^u_n < 0] \le e^{-(\beta - \alpha/\kappa)\,u}\,.
$$
Proof. By Theorem 3.4.2, the sequence {W_n}_{n∈ℕ} is independent and satisfies P_{W_n} = Exp(α_n) for all n ∈ ℕ. Define
$$
\varrho := \beta - \frac{\alpha}{\kappa}\,.
$$
As in the proof of Theorem 7.4.5, we obtain
$$
\begin{aligned}
E\bigl[e^{-\varrho G_n}\bigr]
&= E\bigl[e^{-\varrho\kappa W_n}\bigr]\, E\bigl[e^{\varrho X_n}\bigr] \\
&= \frac{\alpha_n}{\alpha_n + \varrho\kappa}\cdot\frac{\beta_n}{\beta_n - \varrho} \\
&= \frac{\alpha_n\beta_n}{\alpha_n\beta_n + \varrho\kappa\bigl((\beta_n - \alpha_n/\kappa) - \varrho\bigr)} \\
&\le 1\,,
\end{aligned}
$$
since β_n − α_n/κ ≥ β − α/κ = ϱ. This means that ϱ is a superadjustment coefficient for the excess premium process. The inequality for the probability of ruin follows from Theorem 7.3.4. □
The previous result, which includes Theorem 7.4.5 as a special case, is a rather exceptional example where a superadjustment coefficient not only exists but can even be given in explicit form.
Problems
7.4.A In Theorem 7.4.3 and Corollary 7.4.4, the condition on M_X is fulfilled whenever P_X = Exp(α), P_X = Geo(ϑ), or P_X = Log(ϑ).
7.4.B Discrete Time Model: Assume that
(i) {N_l}_{l∈ℕ₀} and {X_n}_{n∈ℕ} are independent,
(ii) {N_l}_{l∈ℕ₀} is a binomial process with parameter ϑ, and
(iii) {X_n}_{n∈ℕ} is i.i.d. and satisfies sup{z ∈ ℝ₊ | E[e^{zX}] < ∞} ∈ (0, ∞) as well as P_X[ℝ₊] = 1.
If κ > ϑE[X], then the excess premium process has an adjustment coefficient.
7.5 Remarks
In the discussion of the ruin problem, we have only considered a fixed premium intensity and a variable initial reserve. We have done so in order not to overburden the notation and to clarify the role of (super)adjustment coefficients, which depend on the premium intensity but not on the initial reserve. Of course, the premium intensity may be a decision variable as well which can be determined by a given upper bound for the probability of ruin, but the role of the initial reserve and the role of the premium intensity are nevertheless quite different since the former is limited only by the financial power of the insurance company while the latter is to a large extent constrained by market conditions.
The (super)martingale approach to the ruin problem is due to Gerber [1973, 1979] and has become a famous method in ruin theory; see also DeVylder [1977], Delbaen and Haezendonck [1985], Rhiel [1986, 1987], Björk and Grandell [1988], Dassios and Embrechts [1989], Grandell [1991], Møller [1992], Embrechts, Grandell, and Schmidli [1993], Embrechts and Schmidli [1994], Møller [1995], and Schmidli [1995].
The proof of Kolmogorov's inequality is usually based on the nontrivial fact that a supermartingale {Z_n}_{n∈ℕ₀} satisfies E[Z_0] ≥ E[Z_τ] for arbitrary stopping times τ; see Neveu [1972]. The simple proof presented here is well known in the theory of asymptotic martingales; see Gut and Schmidt [1983] for a survey and references.
Traditionally, Lundberg's inequality is proven under the assumption that ϱ is an adjustment coefficient; the extension to the case of a superadjustment coefficient is due to Schmidt [1989]. The origin of this extension, which is quite natural with regard to the use of Kolmogorov's inequality and the relation between (super)adjustment coefficients and (super)martingales, is in a paper by Mammitzsch [1986], who also pointed out that in the case of i.i.d. excess premiums a superadjustment coefficient may exist when an adjustment coefficient does not exist.
For a discussion of the estimation problem for the (super)adjustment coefficient, see Herkenrath [1986], Deheuvels and Steinebach [1990], Csörgő and Steinebach [1991], Embrechts and Mikosch [1991], and Steinebach [1993].
Although Theorem 7.4.3 provides a rather general condition under which an adjustment coefficient exists, there are important claim size distributions which do not satisfy these conditions; an example is the Pareto distribution, which assigns high probability to large claims. For a discussion of the ruin problem for such heavy tailed claim size distributions, see Thorin and Wikstad [1977], Seal [1980], Embrechts and Veraverbeke [1982], Embrechts and Villaseñor [1988], Klüppelberg [1989], and Beirlant and Teugels [1992].
A natural extension of the model considered in this chapter is to assume that the premium income is not deterministic but stochastic; see Bühlmann [1972], DeVylder [1977], Dassios and Embrechts [1989], Dickson [1991], and Møller [1992].
While the (homogeneous) Poisson process still plays a prominent role in ruin theory, there are two major classes of claim number processes, renewal processes and Cox processes, which present quite different extensions of the Poisson process and for which the probability of ruin has been studied in detail. Recent work focusses on Cox processes, or doubly stochastic Poisson processes, which are particularly interesting since they present a common generalization of the inhomogeneous Poisson process and the mixed Poisson process; see e.g. Grandell [1991] and the references given there.
Let us finally remark that several authors have also studied the probability that the reserve process attains negative values in a bounded time interval; for a discussion of such finite time ruin probabilities, see again Grandell [1991].
Appendix: Special Distributions
In this appendix we recall the definitions and some properties of the probability
distributions which are used or mentioned in the text. For comments on applications
of these and other distributions in risk theory, see Panjer and Willmot [1992].
Auxiliary Notions
The Gamma Function
The map Γ : (0, ∞) → (0, ∞) given by
$$
\Gamma(\gamma) := \int_{0}^{\infty} e^{-x} x^{\gamma-1}\, dx
$$
is called the gamma function. It has the following properties:
$$
\Gamma(1/2) = \sqrt{\pi}\,,\qquad \Gamma(1) = 1\,,\qquad \Gamma(\gamma+1) = \gamma\,\Gamma(\gamma)\,.
$$
In particular, the identity
$$
\Gamma(n+1) = n!
$$
holds for all n ∈ ℕ₀. Roughly speaking, the values of the gamma function correspond to factorials.
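A one-line check of the factorial identity (illustrative, not part of the text):

    from math import gamma, factorial, isclose

    # Gamma(n+1) = n! for nonnegative integers n
    assert all(isclose(gamma(n + 1), factorial(n)) for n in range(10))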
The Beta Function
The map B : (0, ∞) × (0, ∞) → (0, ∞) given by
$$
B(\alpha, \beta) := \int_{0}^{1} x^{\alpha-1} (1-x)^{\beta-1}\, dx
$$
is called the beta function. The fundamental identity for the beta function is
$$
B(\alpha, \beta) = \frac{\Gamma(\alpha)\,\Gamma(\beta)}{\Gamma(\alpha+\beta)}\,,
$$
showing that the properties of the beta function follow from those of the gamma function. Roughly speaking, the inverted values of the beta function correspond to binomial coefficients.
The Generalized Binomial Coefficient
For γ ∈ ℝ and m ∈ ℕ₀, the generalized binomial coefficient is defined to be
$$
\binom{\gamma}{m} := \prod_{j=0}^{m-1} \frac{\gamma - j}{m - j}\,.
$$
For α ∈ (0, ∞), the properties of the gamma function yield the identity
$$
\binom{\alpha + m - 1}{m} = \frac{\Gamma(\alpha + m)}{\Gamma(\alpha)\, m!}\,,
$$
which is particularly useful.
Measures
We denote by ν : ℬ(ℝ) → [0, ∞] the counting measure concentrated on ℕ₀, and we denote by λ : ℬ(ℝ) → [0, ∞] the Lebesgue measure. These measures are σ-finite, and the most important probability measures ℬ(ℝ) → [0, 1] are absolutely continuous with respect to either ν or λ.
For n ∈ ℕ, we denote by λ^n : ℬ(ℝ^n) → [0, ∞] the n-dimensional Lebesgue measure.
Generalities on Distributions
A probability measure Q : ℬ(ℝ^n) → [0, 1] is called a distribution.
A distribution Q is degenerate if there exists some y ∈ ℝ^n satisfying
$$
Q[\{y\}] = 1\,,
$$
and it is nondegenerate if it is not degenerate.
In the remainder of this appendix, we consider only distributions ℬ(ℝ) → [0, 1].
For y ∈ ℝ, the Dirac distribution δ_y is defined to be the (degenerate) distribution Q satisfying
$$
Q[\{y\}] = 1\,.
$$
Because of the particular role of the Dirac distribution, all parametric classes of distributions considered below are defined so as to exclude degenerate distributions.
Let Q and R be distributions ℬ(ℝ) → [0, 1].
Expectation and Higher Moments
If
$$
\min\left\{ \int_{(-\infty,0]} (-x)\, dQ(x),\; \int_{[0,\infty)} x\, dQ(x) \right\} < \infty\,,
$$
then the expectation of Q is said to exist and is defined to be
$$
E[Q] := \int_{\mathbb{R}} x\, dQ(x)\,;
$$
if
$$
\max\left\{ \int_{(-\infty,0]} (-x)\, dQ(x),\; \int_{[0,\infty)} x\, dQ(x) \right\} < \infty
$$
or, equivalently,
$$
\int_{\mathbb{R}} |x|\, dQ(x) < \infty\,,
$$
then the expectation of Q exists and is said to be finite. In this case, Q is said to have finite expectation.
More generally, if, for some n ∈ ℕ,
$$
\int_{\mathbb{R}} |x|^n\, dQ(x) < \infty\,,
$$
then Q is said to have a finite moment of order n or to have a finite nth moment, and the nth moment of Q is defined to be
$$
\int_{\mathbb{R}} x^n\, dQ(x)\,.
$$
If Q has a finite moment of order n, then it also has a finite moment of order k for all k ∈ {1, ..., n−1}. The distribution Q is said to have finite moments of any order if
$$
\int_{\mathbb{R}} |x|^n\, dQ(x) < \infty
$$
holds for all n ∈ ℕ.
Variance and Coefficient of Variation
If Q has finite expectation, then the variance of Q is defined to be
$$
\mathrm{var}[Q] := \int_{\mathbb{R}} (x - E[Q])^2\, dQ(x)\,.
$$
If Q satisfies Q[ℝ₊] = 1 and E[Q] ∈ (0, ∞), then the coefficient of variation of Q is defined to be
$$
v[Q] := \frac{\sqrt{\mathrm{var}[Q]}}{E[Q]}\,.
$$
Characteristic Function
The characteristic function or Fourier transform of Q is defined to be the map φ_Q : ℝ → ℂ given by
$$
\varphi_Q(z) := \int_{\mathbb{R}} e^{izx}\, dQ(x)\,.
$$
Obviously, φ_Q(0) = 1. Moreover, a deep result on Fourier transforms asserts that the distribution Q is uniquely determined by its characteristic function φ_Q.
Moment Generating Function
The moment generating function of Q is defined to be the map M_Q : ℝ → [0, ∞] given by
$$
M_Q(z) := \int_{\mathbb{R}} e^{zx}\, dQ(x)\,.
$$
Again, M_Q(0) = 1. Moreover, if the moment generating function of Q is finite in a neighbourhood of zero, then Q has finite moments of any order and the identity
$$
\frac{d^n M_Q}{dz^n}(0) = \int_{\mathbb{R}} x^n\, dQ(x)
$$
holds for all n ∈ ℕ.
Probability Generating Function
If Q[ℕ₀] = 1, then the probability generating function of Q is defined to be the map m_Q : [−1, 1] → ℝ given by
$$
m_Q(z) := \int_{\mathbb{R}} z^x\, dQ(x) = \sum_{n=0}^{\infty} z^n\, Q[\{n\}]\,.
$$
Since the identity
$$
\frac{1}{n!}\,\frac{d^n m_Q}{dz^n}(0) = Q[\{n\}]
$$
holds for all n ∈ ℕ₀, the distribution Q is uniquely determined by its probability generating function m_Q. The probability generating function has a unique extension to the closed unit disc in the complex plane.
Convolution
If + : ℝ² → ℝ is defined to be the map given by +(x, y) := x + y, then
$$
Q * R := (Q \otimes R)_+
$$
is a distribution which is called the convolution of Q and R. The convolution satisfies
$$
\varphi_{Q*R} = \varphi_Q\,\varphi_R\,,
$$
and hence Q * R = R * Q, as well as
$$
M_{Q*R} = M_Q\, M_R\,;
$$
also, if Q[ℕ₀] = 1 = R[ℕ₀], then
$$
m_{Q*R} = m_Q\, m_R\,.
$$
If Q and R have finite expectations, then
$$
E[Q * R] = E[Q] + E[R]\,,
$$
and if Q and R both have a finite second moment, then
$$
\mathrm{var}[Q * R] = \mathrm{var}[Q] + \mathrm{var}[R]\,.
$$
Furthermore, the identity
$$
(Q * R)[B] = \int_{\mathbb{R}} Q[B - y]\, dR(y)
$$
holds for all B ∈ ℬ(ℝ); in particular, Q * δ_y = δ_y * Q is the translation of Q by y.
If Q = ∫ f dμ and R = ∫ g dμ for μ ∈ {ν, λ}, then Q * R = ∫ f*g dμ, where the map f*g : ℝ → ℝ₊ is defined by
$$
(f * g)(x) := \int_{\mathbb{R}} f(x - y)\, g(y)\, d\mu(y)\,.
$$
For n ∈ ℕ₀, the n-fold convolution of Q is defined to be
$$
Q^{*n} :=
\begin{cases}
\delta_0 & \text{if } n = 0 \\
Q * Q^{*(n-1)} & \text{if } n \in \mathbb{N}\,.
\end{cases}
$$
If Q = ∫ f dμ for μ ∈ {ν, λ}, then the density of Q^{*n} with respect to μ is denoted by f^{*n}.
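As an illustration (not from the text), the following sketch convolves two distributions on ℕ₀, given as probability vectors, and checks the additivity of expectation and variance under convolution; the two example distributions are arbitrary.

    import numpy as np

    def convolve(q, r):
        """Convolution of two distributions on {0, 1, 2, ...} given as probability vectors."""
        return np.convolve(q, r)

    def mean_var(q):
        x = np.arange(len(q))
        m = (x * q).sum()
        return m, ((x - m) ** 2 * q).sum()

    q = np.array([0.2, 0.5, 0.3])          # some distribution on {0, 1, 2}
    r = np.array([0.1, 0.4, 0.4, 0.1])     # some distribution on {0, 1, 2, 3}
    s = convolve(q, r)

    (mq, vq), (mr, vr), (ms, vs) = mean_var(q), mean_var(r), mean_var(s)
    assert np.isclose(ms, mq + mr) and np.isclose(vs, vq + vr)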
Discrete Distributions
A distribution Q : ℬ(ℝ) → [0, 1] is discrete if there exists a countable set S ∈ ℬ(ℝ) satisfying Q[S] = 1. If Q[ℕ₀] = 1, then Q is absolutely continuous with respect to ν.
For detailed information on discrete distributions, see Johnson and Kotz [1969] and Johnson, Kotz, and Kemp [1992].
The Binomial Distribution
For m ∈ ℕ and ϑ ∈ (0, 1), the binomial distribution B(m, ϑ) is defined to be the distribution Q satisfying
$$
Q[\{x\}] = \binom{m}{x}\,\vartheta^x (1-\vartheta)^{m-x}
$$
for all x ∈ {0, 1, ..., m}.
Expectation: E[Q] = mϑ
Variance: var[Q] = mϑ(1−ϑ)
Characteristic function: φ_Q(z) = ((1−ϑ) + ϑe^{iz})^m
Moment generating function: M_Q(z) = ((1−ϑ) + ϑe^{z})^m
Probability generating function: m_Q(z) = ((1−ϑ) + ϑz)^m
Special case: The Bernoulli distribution B(ϑ) := B(1, ϑ).
The Delaporte Distribution
For α, β ∈ (0, ∞) and ϑ ∈ (0, 1), the Delaporte distribution Del(α, β, ϑ) is defined to be the distribution
$$
Q := \mathrm{P}(\alpha) * \mathrm{NB}(\beta, \vartheta)\,.
$$
The Geometric Distribution
For m ∈ ℕ and ϑ ∈ (0, 1), the geometric distribution Geo(m, ϑ) is defined to be the distribution
$$
Q := \delta_m * \mathrm{NB}(m, \vartheta)\,.
$$
Special case: The one-parameter geometric distribution Geo(ϑ) := Geo(1, ϑ).
The Logarithmic Distribution
For ϑ ∈ (0, 1), the logarithmic distribution Log(ϑ) is defined to be the distribution Q satisfying
$$
Q[\{x\}] = \frac{1}{|\log(1-\vartheta)|}\,\frac{\vartheta^x}{x}
$$
for all x ∈ ℕ.
Expectation: E[Q] = ϑ / ((1−ϑ)|log(1−ϑ)|)
Variance: var[Q] = ϑ(|log(1−ϑ)| − ϑ) / ((1−ϑ)² |log(1−ϑ)|²)
Characteristic function: φ_Q(z) = log(1−ϑe^{iz}) / log(1−ϑ)
Moment generating function for z ∈ (−∞, −log ϑ): M_Q(z) = log(1−ϑe^{z}) / log(1−ϑ)
Probability generating function: m_Q(z) = log(1−ϑz) / log(1−ϑ)
The Negativebinomial Distribution
For β ∈ (0, ∞) and ϑ ∈ (0, 1), the negativebinomial distribution NB(β, ϑ) is defined to be the distribution Q satisfying
$$
Q[\{x\}] = \binom{\beta + x - 1}{x}\,\vartheta^{\beta} (1-\vartheta)^{x}
$$
for all x ∈ ℕ₀.
Expectation: E[Q] = β(1−ϑ)/ϑ
Variance: var[Q] = β(1−ϑ)/ϑ²
Characteristic function: φ_Q(z) = (ϑ / (1 − (1−ϑ)e^{iz}))^β
Moment generating function for z ∈ (−∞, −log(1−ϑ)): M_Q(z) = (ϑ / (1 − (1−ϑ)e^{z}))^β
Probability generating function: m_Q(z) = (ϑ / (1 − (1−ϑ)z))^β
Special case: The Pascal distribution NB(m, ϑ) with m ∈ ℕ.
The Negativehypergeometric (or Pólya–Eggenberger) Distribution
For m ∈ ℕ and α, β ∈ (0, ∞), the negativehypergeometric distribution or Pólya–Eggenberger distribution NH(m, α, β) is defined to be the distribution Q satisfying
$$
Q[\{x\}] = \binom{\alpha + x - 1}{x}\binom{\beta + m - x - 1}{m - x}\binom{\alpha + \beta + m - 1}{m}^{-1}
$$
for all x ∈ {0, ..., m}.
Expectation: E[Q] = mα/(α+β)
Variance: var[Q] = m · αβ/(α+β)² · (α+β+m)/(α+β+1)
The Poisson Distribution
For α ∈ (0, ∞), the Poisson distribution P(α) is defined to be the distribution Q satisfying
$$
Q[\{x\}] = e^{-\alpha}\,\frac{\alpha^x}{x!}
$$
for all x ∈ ℕ₀.
Expectation: E[Q] = α
Variance: var[Q] = α
Characteristic function: φ_Q(z) = e^{α(e^{iz}−1)}
Moment generating function: M_Q(z) = e^{α(e^{z}−1)}
Probability generating function: m_Q(z) = e^{α(z−1)}
Continuous Distributions
A distribution Q : ℬ(ℝ) → [0, 1] is continuous if it is absolutely continuous with respect to λ. For detailed information on continuous distributions, see Johnson and Kotz [1970a, 1970b].
The Beta Distribution
For α, β ∈ (0, ∞), the beta distribution Be(α, β) is defined to be the distribution
$$
Q := \int \frac{1}{B(\alpha, \beta)}\, x^{\alpha-1} (1-x)^{\beta-1}\, \chi_{(0,1)}(x)\, d\lambda(x)\,.
$$
Expectation: E[Q] = α/(α+β)
Variance: var[Q] = αβ / ((α+β)²(α+β+1))
Special case: The uniform distribution U(0, 1) := Be(1, 1).
The Gamma Distribution (Two Parameters)
For α, β ∈ (0, ∞), the gamma distribution Ga(α, β) is defined to be the distribution
$$
Q := \int \frac{\alpha^{\beta}}{\Gamma(\beta)}\, e^{-\alpha x} x^{\beta-1}\, \chi_{(0,\infty)}(x)\, d\lambda(x)\,.
$$
Expectation: E[Q] = β/α
Variance: var[Q] = β/α²
Characteristic function: φ_Q(z) = (α/(α − iz))^β
Moment generating function for z ∈ (−∞, α): M_Q(z) = (α/(α − z))^β
Special cases:
– The Erlang distribution Ga(α, m) with m ∈ ℕ.
– The exponential distribution Exp(α) := Ga(α, 1).
– The chi-square distribution χ²_m := Ga(1/2, m/2) with m ∈ ℕ.
The Gamma Distribution (Three Parameters)
For α, β ∈ (0, ∞) and γ ∈ ℝ, the gamma distribution Ga(α, β, γ) is defined to be the distribution
$$
Q := \delta_{\gamma} * \mathrm{Ga}(\alpha, \beta)\,.
$$
Special case: The two-parameter gamma distribution Ga(α, β) = Ga(α, β, 0).
The Pareto Distribution
For α, β ∈ (0, ∞), the Pareto distribution Par(α, β) is defined to be the distribution
$$
Q := \int \frac{\alpha}{\beta}\left(\frac{\beta}{\beta + x}\right)^{\alpha+1} \chi_{(0,\infty)}(x)\, d\lambda(x)\,.
$$
Bibliography
Adelson, R. M.
[1966] Compound Poisson distributions. Oper. Res. Quart. 17, 7375.
Albrecht, P.
[1981] Dynamische statistische Entscheidungsverfahren f ur Schadenzahlprozesse.
Karlsruhe: Verlag Versicherungswirtschaft.
[1985] Mixed Poisson process. In: Encyclopedia of Statistical Sciences, Vol. 6, pp.
556559. New York Chichester: Wiley.
Aliprantis, C. D., and Burkinshaw, O.
[1990] Principles of Real Analysis. Second Edition. Boston New York: Academic
Press.
Alsmeyer, G.
[1991] Erneuerungstheorie. Stuttgart: Teubner.
Ambagaspitiya, R. S.
[1995] A family of discrete distributions. Insurance Math. Econom. 16, 107127.
Ambagaspitiya, R. S., and Balakrishnan, N.
[1994] On the compound generalized Poisson distributions. ASTIN Bull. 24, 255
263.
Ammeter, H.
[1948] A generalization of the collective theory of risk in regard to uctuating basic
probabilities. Scand. Actuar. J. 31, 171198.
Azlarov, T. A., and Volodin, N. A.
[1986] Characterization Problems Associated with the Exponential Distribution.
Berlin Heidelberg New York: Springer.
Balakrishnan, N. (see R. S. Ambagaspitiya)
Bauer, H.
[1991] Wahrscheinlichkeitstheorie. 4. Auflage. Berlin: DeGruyter.
[1992] Ma und Integrationstheorie. 2. Auflage. Berlin: DeGruyter.
Bauwelinckx, T. (see M. J. Goovaerts)
Beirlant, J., and Teugels, J. L.
[1992] Modeling large claims in nonlife insurance. Insurance Math. Econom. 11
1729.
Bichsel, F.
[1964] ErfahrungsTarierung in der MotorfahrzeughaftpichtVersicherung.
Mitt. SVVM 64, 119130.
Billingsley, P.
[1995] Probability and Measure. Third Edition. New York Chichester: Wiley.
Björk, T., and Grandell, J.
[1988] Exponential inequalities for ruin probabilities. Scand. Actuar. J., 77111.
Bowers, N. L., Gerber, H. U., Hickman, J. C., Jones, D. A.,
and Nesbitt, C. J.
[1986] Actuarial Mathematics. Itasca (Illinois): The Society of Actuaries.
Bremaud, P.
[1981] Point Processes and Queues. Berlin Heidelberg New York: Springer.
Bühlmann, H.
[1970] Mathematical Methods in Risk Theory. Berlin Heidelberg New York:
Springer.
[1972] Ruinwahrscheinlichkeit bei erfahrungstariertem Portefeuille. Mitt. SVVM
72, 211224.
Burkinshaw, O. (see C. D. Aliprantis)
Chow, Y. S., and Teicher, H.
[1988] Probability Theory Independence, Interchangeability, Martingales. Second
Edition. Berlin Heidelberg New York: Springer.
Cox, D. R., and Isham, V.
[1980] Point Processes. London New York: Chapman and Hall.
Csörgő, M., and Steinebach, J.
[1991] On the estimation of the adjustment coecient in risk theory via interme-
diate order statistics. Insurance Math. Econom. 10, 3750.
Dassios, A., and Embrechts, P.
[1989] Martingales and insurance risk. Comm. Statist. Stoch. Models 5, 181217.
Daykin, C. D., Pentikainen, T., and Pesonen, M.
[1994] Practical Risk Theory for Actuaries. London New York: Chapman and
Hall.
DeDominicis, R. (see J. Janssen)
Deheuvels, P., and Steinebach, J.
[1990] On some alternative estimates of the adjustment coecient. Scand. Actuar.
J., 135159.
Delbaen, F., and Haezendonck, J.
[1985] Inversed martingales in risk theory. Insurance Math. Econom. 4, 201206.
Delaporte, P. J.
[1960] Un probl`eme de tarication de lassurance accidents dautomobiles examine
par la statistique mathematique. In: Transactions of the International
Congress of Actuaries, Vol. 2, pp. 121135.
[1965] Tarication du risque individuel daccidents dautomobiles par la prime
modelee sur le risque. ASTIN Bull. 3, 251271.
DePril, N.
[1986] Moments of a class of compound distributions. Scand. Actuar. J., 117120.
Derron, M.
[1962] Mathematische Probleme der Automobilversicherung. Mitt. SVVM62, 103
123.
DeVylder, F. (see also M. J. Goovaerts)
[1977] Martingales and ruin in a dynamical risk process. Scand. Actuar. J., 217
225.
Dickson, D. C. M.
[1991] The probability of ultimate ruin with a variable premium loading a special
case. Scand. Actuar. J., 7586.
Dienst, H. R. (ed.)
[1988] Mathematische Verfahren der R uckversicherung. Karlsruhe: Verlag Ver-
sicherungswirtschaft.
Dubourdieu, J.
[1938] Remarques relatives `a la theorie mathematique de lassuranceaccidents.
Bull. Inst. Actu. Franc. 44, 79126.
Embrechts, P. (see also A. Dassios)
Embrechts, P., and Mikosch, T.
[1991] A bootstrap procedure for estimating the adjustment coecient. Insurance
Math. Econom. 10, 181190.
Embrechts, P., Grandell, J., and Schmidli, H.
[1993] Finitetime Lundberg inequalities on the Cox case. Scand. Actuar. J. 1741.
Embrechts, P., and Schmidli, H.
[1994] Ruin estimation for a general insurance risk model. Adv. Appl. Probab.
26, 404422.
Embrechts, P., and Veraverbeke, N.
[1982] Estimates for the probability of ruin with special emphasis on the possibility
of large claims. Insurance Math. Econom. 1, 5572.
Embrechts, P., and Villaseñor, J. A.
[1988] Ruin estimates for large claims. Insurance Math. Econom. 7, 269274.
Galambos, J., and Kotz, S.
[1978] Characterizations of Probability Distributions. Berlin Heidelberg New
York: Springer.
Gerathewohl, K.
[1976] R uckversicherung Grundlagen und Praxis, Band 1. Karlsruhe: Verlag
Versicherungswirtschaft.
[1979] R uckversicherung Grundlagen und Praxis, Band 2. Karlsruhe: Verlag
Versicherungswirtschaft.
Gerber, H. U. (see also N. L. Bowers)
[1973] Martingales in risk theory. Mitt. SVVM 73, 205216.
[1979] An Introduction to Mathematical Risk Theory. Homewood (Illinois): Irwin.
[1983] On the asymptotic behaviour of the mixed Poisson process. Scand. Actuar.
J. 256.
[1986] Lebensversicherungsmathematik. Berlin Heidelberg New York: Springer.
[1990] Life Insurance Mathematics. Berlin Heidelberg New York: Springer.
[1991] From the generalized gamma to the generalized negative binomial distribu-
tion. Insurance Math. Econom. 10, 303309.
[1994] Martingales and tail probabilities. ASTIN Bull. 24, 145146.
[1995] Life Insurance Mathematics. Second Edition. Berlin Heidelberg New
York: Springer.
Goovaerts, M. J. (see also R. Kaas, B. Kling, J. T. Runnenburg)
Goovaerts, M. J., DeVylder, F., and Haezendonck, J.
[1984] Insurance Premiums. Amsterdam New York: NorthHolland.
Goovaerts, M. J., and Kaas, R.
[1991] Evaluating compound generalized Poisson distributions recursively. ASTIN
Bull. 21, 193198.
Goovaerts, M. J., Kaas, R., Van Heerwaarden, A. E.,
and Bauwelinckx, T.
[1990] Eective Actuarial Methods. Amsterdam New York: NorthHolland.
Grandell, J. (see also T. Bjork, P. Embrechts)
[1977] Point processes and random measures. Adv. Appl. Probab. 9, 502526, 861.
[1991] Aspects of Risk Theory. Berlin Heidelberg New York: Springer.
[1995] Mixed Poisson Processes. Manuscript.
Gurland, J. (see R. Shumway)
Gut, A.
[1988] Stopped Random Walks Limit Theorems and Applications. Berlin Hei-
delberg New York: Springer.
Gut, A., and Schmidt, K. D.
[1983] Amarts and Set Function Processes. Berlin Heidelberg New York:
Springer.
Haezendonck, J. (see F. Delbaen, M. J. Goovaerts)
Heilmann, W. R.
[1987] Grundbegrie der Risikotheorie. Karlsruhe: Verlag Versicherungswirtschaft.
[1988] Fundamentals of Risk Theory. Karlsruhe: Verlag Versicherungswirtschaft.
Helbig, M., and Milbrodt, H.
[1995] Mathematik der Personenversicherung. Manuscript.
Heller, U. (see D. Pfeifer)
Helten, E., and Sterk, H. P.
[1976] Zur Typisierung von Schadensummenverteilungen. Z. Versicherungswissen-
schaft 64, 113120.
Herkenrath, U.
[1986] On the estimation of the adjustment coecient in risk theory by means of
stochastic approximation procedures. Insurance Math. Econom. 5, 305313.
Hesselager, O.
[1994] A recursive procedure for calculation of some compound distributions.
ASTIN Bull. 24, 1932.
Hickman, J. C. (see N. L. Bowers)
Hipp, C., and Michel, R.
[1990] Risikotheorie: Stochastische Modelle und Statistische Methoden. Karlsruhe:
Verlag Versicherungswirtschaft.
Hofmann, M.
[1955] Über zusammengesetzte Poisson-Prozesse und ihre Anwendungen in der Unfallversicherung. Mitt. SVVM 55, 499–575.
Isham, V. (see D. R. Cox)
Janssen, J.
[1977] The semiMarkov model in risk theory. In: Advances in Operations Re-
search, pp. 613621. Amsterdam New York: NorthHolland.
[1982] Stationary semiMarkov models in risk and queuing theories. Scand. Ac-
tuar. J., 199210.
[1984] SemiMarkov models in economics and insurance. In: Premium Calculation
in Insurance, pp. 241261. Dordrecht Boston: Reidel.
Janssen, J., and DeDominicis, R.
[1984] Finite nonhomogeneous semiMarkov processes: Theoretical and computa-
tional aspects. Insurance Math. Econom. 3, 157165; 4, 295 (1985).
Jewell, W. S. (see B. Sundt)
Johnson, N. L., and Kotz, S.
[1969] Distributions in Statistics: Discrete Distributions. New York Chichester:
Wiley.
[1970a] Distributions in Statistics: Continuous Univariate Distributions, Vol. 1.
New York Chichester: Wiley.
[1970b] Distributions in Statistics: Continuous Univariate Distributions, Vol. 2.
New York Chichester: Wiley.
Johnson, N. L., Kotz, S., and Kemp, A. W.
[1992] Univariate Discrete Distributions. Second Edition. New York Chichester:
Wiley.
Jones, D. A. (see N. L. Bowers)
Kaas, R. (see also M. J. Goovaerts)
Kaas, R., and Goovaerts, M. J.
[1985] Computing moments of compound distributions. Scand. Actuar. J., 3538.
[1986] Bounds on stoploss premiums for compound distributions. ASTIN Bull.
16, 1318.
Kallenberg, O.
[1983] Random Measures. London Oxford: Academic Press.
Karr, A. F.
[1991] Point Processes and Their Statistical Inference. Second Edition. New York
Basel: Dekker.
Kemp, A. W. (see N. L. Johnson)
Kerstan, J. (see also K. Matthes)
Kerstan, J., Matthes, K., and Mecke, J.
[1974] Unbegrenzt teilbare Punktprozesse. Berlin: AkademieVerlag.
Kingman, J. F. C.
[1993] Poisson Processes. Oxford: Clarendon Press.
Kling, B., and Goovaerts, M. J.
[1993] A note on compound generalized distributions. Scand. Actuar. J., 6072.
Klüppelberg, C.
[1989] Estimation of ruin probabilities by means of hazard rates. Insurance Math.
Econom. 8, 279285.
König, D., and Schmidt, V.
[1992] Zufallige Punktprozesse. Stuttgart: Teubner.
Kotz, S. (see J. Galambos, N. L. Johnson)
Kupper, J.
[1962] Wahrscheinlichkeitstheoretische Modelle in der Schadenversicherung. Teil
I : Die Schadenzahl. Blatter DGVM 5, 451503.
Lemaire, J.
[1985] Automobile Insurance. Boston Dordrecht Lancaster: KluwerNijho.
Letta, G.
[1984] Sur une caracterisation classique du processus de Poisson. Expos. Math. 2,
179182.
Lin, X. (see G. E. Willmot)
Lundberg, O.
[1940] On Random Processes and Their Application to Sickness and Accident
Statistics. Uppsala: Almqvist and Wiksells.
Mammitzsch, V.
[1983] Ein einfacher Beweis zur Konstruktion der operationalen Zeit. Blatter
DGVM 16, 13.
[1984] Operational time: A short and simple existence proof. In: Premium Calcu-
lation in Insurance, pp. 461-465. Dordrecht Boston: Reidel.
[1986] A note on the adjustment coecient in ruin theory. Insurance Math.
Econom. 5, 147149.
Mathar, R., and Pfeifer, D.
[1990] Stochastik f ur Informatiker. Stuttgart: Teubner.
Matthes, K. (see also J. Kerstan)
Matthes, K., Kerstan, J., and Mecke, J.
[1978] Innitely Divisible Point Processes. New York Chichester: Wiley.
Mecke, J. (see J. Kerstan, K. Matthes)
Michel, R. (see also C. Hipp)
[1993a] On probabilities of large claims that are compound Poisson distributed.
Blatter DGVM 21, 207211.
[1993b] Ein individuellkollektives Modell f ur SchadenzahlVerteilungen. Mitt.
SVVM, 7593.
Mikosch, T. (see P. Embrechts)
Milbrodt, H. (see M. Helbig)
Møller, C. M.
[1992] Martingale results in risk theory with a view to ruin probabilities and diu-
sions. Scand. Actuar. J. 123139.
[1995] Stochastic dierential equations for ruin probabilities. J. Appl. Probab. 32,
7489.
Nesbitt, C. J. (see N. L. Bowers)
Neveu, J.
[1972] Martingales `a Temps Discret. Paris: Masson.
[1977] Processus ponctuels. In: Ecole dEte de Probabilites de SaintFlour VI, pp.
249445. Berlin Heidelberg New York: Springer.
Nollau, V.
[1978] SemiMarkovsche Prozesse. Berlin: AkademieVerlag.
Norberg, R.
[1990] Risk theory and its statistics environment. Statistics 21, 273299.
Panjer, H. H. (see also S. Wang, G. E. Willmot)
[1981] Recursive evaluation of a family of compound distributions. ASTIN Bull.
12, 2226.
Panjer, H. H., and Wang, S.
[1993] On the stability of recursive formulas. ASTIN Bull. 23, 227258.
Panjer, H. H., and Willmot, G. E.
[1981] Finite sum evaluation of the negativebinomialexponential model. ASTIN
Bull. 12, 133137.
[1982] Recursions for compound distributions. ASTIN Bull. 13, 111.
[1986] Computational aspects of recursive evaluation of compound distributions.
Insurance Math. Econom. 5, 113116.
[1992] Insurance Risk Models. Schaumburg (Illinois): Society of Actuaries.
Pentikainen, T. (see C. D. Daykin)
Pesonen, M. (see C. D. Daykin)
Pfeifer, D. (see also R. Mathar)
[1982a] The structure of elementary birth processes. J. Appl. Probab. 19, 664667.
[1982b] An alternative proof of a limit theorem for the PolyaLundberg process.
Scand. Actuar. J. 176178.
[1986] PolyaLundberg process. In: Encyclopedia of Statistical Sciences, Vol. 7,
pp. 6365. New York Chichester: Wiley.
[1987] Martingale characteristics of mixed Poisson processes. Blatter DGVM 18,
107100.
Pfeifer, D., and Heller, U.
[1987] A martingale characterization of mixed Poisson processes. J. Appl. Probab.
24, 246251.
Quenouille, M. H.
[1949] A relation between the logarithmic, Poisson and negative binomial series.
Biometrics 5, 162164.
Reiss, R. D.
[1993] A Course on Point Processes. Berlin Heidelberg New York: Springer.
Resnick, S. I.
[1992] Adventures in Stochastic Processes. Boston Basel Berlin: Birkhauser.
Rhiel, R.
[1985] Zur Berechnung von Erwartungswerten und Varianzen von zufalligen Sum-
men in der kollektiven Risikotheorie. Blatter DGVM 17, 1518.
[1986] A general model in risk theory. An application of modern martingale theory.
Part one: Theoretic foundations. Blatter DGVM 17, 401428.
[1987] A general model in risk theory. An application of modern martingale theory.
Part two: Applications. Blatter DGVM 18, 119.
Runnenburg, J. T., and Goovaerts, M. J.
[1985] Bounds on compound distributions and stoploss premiums. Insurance
Math. Econom. 4, 287293.
Ruohonen, M.
[1988] On a model for the claim number process. ASTIN Bull. 18, 5768.
Scheike, T. H.
[1992] A general risk process and its properties. J. Appl. Probab. 29, 7381.
Schmidli, H. (see also P. Embrechts)
[1995] CramerLundberg approximations for ruin probabilities of risk processes per-
turbed by diusion. Insurance Math. Econom. 16, 135149.
Schmidt, K. D. (see also A. Gut)
[1989] A note on positive supermartingales in ruin theory. Blatter DGVM 19,
129132.
[1992] Stochastische Modellierung in der Erfahrungstarierung. Blatter DGVM
20, 441455.
Schmidt, V. (see D. Konig)
Schroter, K. J.
[1990] On a family of counting distributions and recursions for related compound
distributions. Scand. Actuar. J., 161175.
[1995] Verfahren zur Approximation der Gesamtschadenverteilung Systematisie-
rung, Techniken und Vergleiche. Karlsruhe: Verlag Versicherungswirtschaft.
Seal, H. L.
[1980] Survival probabilities based on Pareto claim distributions. ASTIN Bull. 11,
6171.
[1983] The Poisson process: Its failure in risk theory. Insurance Math. Econom. 2,
287288.
Shumway, R., and Gurland, J.
[1960] A tting procedure for some generalized Poisson distributions. Scand. Ac-
tuar. J. 43, 87108.
Sobrero, M. (see S. Wang)
Steinebach, J. (see also M. Csorgo, P. Deheuvels)
[1993] Zur Schatzung der CramerLundbergSchranke, wenn kein Anpassungs-
koezient existiert. In: Geld, Finanzwirtschaft, Banken und Versicherun-
gen, pp. 715723. Karlsruhe: Verlag Versicherungswirtschaft
Sterk, H. P. (see also E. Helten)
[1979] Selbstbeteiligung unter risikotheoretischen Aspekten. Karlsruhe: Verlag Ver-
sicherungswirtschaft.
[1980] Risikotheoretische Aspekte von Selbstbeteiligungen. Blatter DGVM 14, 413
426.
[1988] Selbstbeteiligung. In: Handworterbuch der Versicherung, pp. 775780.
Karlsruhe: Verlag Versicherungswirtschaft.
Störmer, H.
[1969] Zur Überlagerung von Erneuerungsprozessen. Z. Wahrscheinlichkeitstheorie verw. Gebiete 13, 9–24.
[1970] SemiMarkovProzesse mit endlich vielen Zustanden. Berlin Heidelberg
New York: Springer.
Straub, E.
[1988] NonLife Insurance Mathematics. Berlin Heidelberg New York:
Springer.
Sundt, B. (see also G. E. Willmot)
[1984] An Introduction to NonLife Insurance Mathematics. First Edition. Karls-
ruhe: Verlag Versicherungswirtschaft.
[1991] An Introduction to NonLife Insurance Mathematics. Second Edition. Karls-
ruhe: Verlag Versicherungswirtschaft.
[1992] On some extensions of Panjers class of counting distributions. ASTIN Bull.
22, 6180.
[1993] An Introduction to NonLife Insurance Mathematics. Third Edition. Karls-
ruhe: Verlag Versicherungswirtschaft.
Sundt, B., and Jewell, W. S.
[1981] Further results on recursive evaluation of compound distributions. ASTIN
Bull. 12, 2739.
Teicher, H. (see Y. S. Chow)
Teugels, J. L. (see J. Beirlant)
Thorin, O., and Wikstad, N.
[1977] Calculation of ruin probabilities when the claim distribution is lognormal.
ASTIN Bull. 9, 231246.
Thyrion, P.
[1960] Contribution `a letude du bonus pour non sinistre en assurance automobile.
ASTIN Bull. 1, 142162.
Tröblinger, A.
[1961] Mathematische Untersuchungen zur Beitragsr uckgewahr in der Kraftfahr-
versicherung. Blatter DGVM 5, 327348.
[1975] Analyse und Prognose des Schadenbedarfs in der Kraftfahrzeughaftpicht-
versicherung. Karlsruhe: Verlag Versicherungswirtschaft.
Van Heerwaarden, A. E. (see M. J. Goovaerts)
Veraverbeke, N. (see P. Embrechts)
Villase nor, J. A. (see P. Embrechts)
Volodin, N. A. (see T. A. Azlarov)
Wang, S. (see also H. H. Panjer)
Wang, S., and Panjer, H. H.
[1994] Proportional convergence and tailcutting techniques in evaluating aggregate
claim distributions. Insurance Math. Econom. 14, 129138.
Wang, S., and Sobrero, M.
[1994] Further results on Hesselagers recursive procedure for calculation of some
compound distributions. ASTIN Bull. 24, 161166.
Watanabe, S.
[1964] On discontinuous additive functionals and Levy measures of a Markov pro-
cess. Japan. J. Math. 34, 5370.
Wikstad, N. (see O. Thorin)
Willmot, G. E. (see also H. H. Panjer)
[1986] Mixed compound Poisson distributions. ASTIN Bull. 16, S59S80.
[1988] Sundt and Jewells family of discrete distributions. ASTIN Bull. 18, 1730.
Willmot, G. E., and Lin, X.
[1994] Lundberg bounds on the tails of compound distributions. J. Appl. Probab.
31, 743756.
Willmot, G. E., and Panjer, H. H.
[1987] Dierence equation approaches in evaluation of compound distributions. In-
surance Math. Econom. 6, 4356.
Willmot, G. E., and Sundt, B.
[1989] On evaluation of the Delaporte distribution and related distributions. Scand.
Actuar. J., 101113.
Wolfsdorf, K.
[1986] Versicherungsmathematik. Teil 1: Personenversicherung. Stuttgart: Teub-
ner.
[1988] Versicherungsmathematik. Teil 2: Theoretische Grundlagen. Stuttgart:
Teubner.
Wolthuis, H.
[1994] Life Insurance Mathematics The Markovian Approach. Brussels: CAIRE.
List of Symbols
Numbers and Vectors
N the set 1, 2, . . .
N
0
the set 0, 1, 2, . . .
Q the set of rational numbers
R the set of real numbers
R
+
the interval [0, )
R
n
the ndimensional Euclidean space
Sets

A
indicator function of the set A

iI
A
i
union of the (pairwise) disjoint family A
i

iI
Algebras
(c) algebra generated by the class c
(Z
i

iI
) algebra generated by the family Z
i

iI
B((0, )) Borel algebra on (0, )
B(R) Borel algebra on R
B(R
n
) Borel algebra on R
n
Measures

Z
transformation of under Z
[
c
restriction of to c
product measure of and
convolution of (the measures or densities) and
Lebesgue measure on B(R)

n
Lebesgue measure on B(R
n
)
counting measure concentrated on N
0
Integrals
_
A
f(x) d(x) Lebesgue integral of f on A with respect to
_
c
a
f(x) dx Riemann integral of f on [a, c]
Distributions

y
Dirac distribution
B() Bernoulli distribution
B(m, ) binomial distribution
Be(, ) beta distribution
C(Q, R) compound distribution
Del(, , ) Delaporte distribution
Exp() exponential distribution
Ga(, ) gamma distribution
Ga(, , ) gamma distribution
Geo() geometric distribution
Geo(m, ) geometric distribution
Log() logarithmic distribution
NB(, ) negativebinomial distribution
NH(m, , ) negativehypergeometric distribution
P() Poisson distribution
Par(, ) Pareto distribution
Probability
E[Z] expectation of (the random variable or distribution) Z
var [Z] variance of Z
v[Z] coecient of variation of Z

Z
characteristic function of Z
M
Z
moment generating function of Z
m
Z
probability generating function of Z
Conditional Probability
P
Z[
conditional distribution of Z with respect to ()
E(Z[) conditional expectation of Z with respect to ()
var (Z[) conditional variance of Z with respect to ()
Stochastic processes
G
n

nN
excess premium process
N
t

tR
+
claim number process
R
t

tR
+
reserve process
S
t

tR
+
aggregate claims process
T
n

nN
0
claim arrival process
U
n

nN
0
modied reserve process
W
n

nN
claim interarrival process
X
n

nN
claim size process
Author Index
A
Adelson, R. M., 125
Albrecht, P., 101, 102
Aliprantis, C. D., 3
Alsmeyer, G., 42
Ambagaspitiya, R. S., 126
Ammeter, H., 125
Azlarov, T. A., 16
B
Balakrishnan, N., 126
Bauer, H., 3, 85
Bauwelinckx, T., 4
Beirlant, J., 170
Bichsel, F., 102
Billingsley, P., 3, 85
Bjork, T., 169
Bowers, N. L., 4
Bremaud, P., 42
B uhlmann, H., 4, 84, 170
Burkinshaw, O., 3
C
Chow, Y. S., 85
Cox, D. R., 42
Csorg o, M., 170
D
Dassios, A., 169, 170
Daykin, C. D., 4
DeDominicis, R., 84
Deheuvels, P., 170
Delaporte, P. J., 102
Delbaen, F., 169
DePril, N., 126
Derron, M., 102
DeVylder, F., 126, 169, 170
Dickson, D. C. M., 170
Dienst, H. R., 154
Dubourdieu, J., 101, 102
E
Embrechts, P., 170
G
Galambos, J., 16
Gerathewohl, K., 154
Gerber, H. U., 4, 101, 102, 125, 126, 169
Goovaerts, M. J., 4, 124, 126
Grandell, J., 41, 42, 101, 169, 170
Gurland, J., 125
Gut, A., 42, 170
H
Haezendonck, J., 126, 169
Heilmann, W. R., 4
Helbig, M., 4
Heller, U., 101
Helten, E., 1
Herkenrath, U., 170
Hesselager, O., 125
Hickman, J. C., 4
Hipp, C., 4, 126
Hofmann, M., 101
I
Isham, V., 42
J
Janssen, J., 84
Jewell, W. S., 125
Johnson, N. L., 125, 175, 179
Jones, D. A., 4
K
Kaas, R., 4, 124, 126
Kallenberg, O., 42
Karr, A. F., 42
Kemp, A. W., 175
Kerstan, J., 41, 42
Kingman, J. F. C., 42
Kling, B., 126
Kl uppelberg, C., 170
Konig, D., 42
Kotz, S., 16, 125, 175, 179
Kupper, J., 1, 102
L
Lemaire, J., 102
Letta, G., 42
Lin, X., 125
Lundberg, O., 101, 124
M
Mammitzsch, V., 84, 170
Mathar, R., 42
Matthes, K., 41, 42
Mecke, J., 41, 42
Michel, R., 4, 125, 126
Mikosch, T., 170
Milbrodt, H., 4
Mller, C. M., 170
N
Nesbitt, C. J., 4
Neveu, J., 42, 170
Nollau, V., 84
Norberg, R., 4
P
Panjer, H. H., 4, 124, 125, 126, 171
Pentikainen, T., 4
Pesonen, M., 4
Pfeifer, D., 42, 101
Q
Quenouille, M. H., 124
R
Reiss, R. D., 42
Resnick, S. I., 42
Rhiel, R., 124, 169
Runnenburg, J. T., 124
Ruohonen, M., 102
S
Scheike, T. H., 126
Schmidli, H., 170
Schmidt, K. D., 101, 170
Schmidt, V., 42
Schroter, K. J., 125, 126
Seal, H. L., 101, 170
Shumway, R., 125
Sobrero, M., 126
Steinebach, J., 170
Sterk, H. P., 1, 154
Stormer, H., 84, 154
Straub, E., 4
Sundt, B., 4, 84, 101, 125, 126
T
Teicher, H., 85
Teugels, J. L., 170
Thorin, O., 170
Thyrion, P., 102
Tr oblinger, A., 1, 102
V
Van Heerwaarden, A. E., 4
Veraverbeke, N., 170
Villase nor, J. A., 170
Volodin, N. A., 16
W
Wang, S., 125
Watanabe, S., 42
Wikstad, N., 170
Willmot, G. E., 4, 124, 125, 126, 171
Wolfsdorf, K., 4
Wolthuis, H., 4
Subject Index
A
abstract portfolio, 100
adjustment coecient, 164
admissible pair, 45
aggregate claims amount, 103, 109
aggregate claims process, 103
Ammeter transform, 114
B
Bernoulli distribution, 176
Bernoulli process, 40
beta distribution, 179
beta function, 171
binomial criterion, 88
binomial distribution, 176
binomial process, 40
binomial risk process, 132
bounded stopping time, 161
C
canonical ltration, 25
Cantellis inequality, 113
ChapmanKolmogorov equations, 46
characteristic function, 174
claim amount, 103
claim arrival process, 6
claim event, 6
claim interarrival process, 6
claim measure, 8
claim number, 17, 109
claim number process, 17
claim severity, 103
claim size process, 103
coecient of variation, 173
compound binomial distribution, 111
compound distribution, 109
compound negativebinomial
distribution, 111
compound Poisson distribution, 109
compound Poisson process, 107
conditionally independent
increments, 86
conditionally stationary
increments, 86
contagion, 78, 82, 83, 95
continuous distribution, 179
continuous time model, 9
convolution, 175
counting measure, 172
Cox process, 170
D
decomposition of a claim size
process, 137
decomposition of a Poisson
process, 133
decomposition of a Poisson risk
process, 137
degenerate distribution, 172
Delaporte distribution, 176
DePrils recursion, 123
Dirac distribution, 172
discrete distribution, 175
discrete time model, 9
disjoint familiy, 4
distribution, 172
doubly stochastic Poisson process, 170
E
Erlang distribution, 180
essential inmum, 157
estimation, 98
exceptional null set, 6, 17
excess of loss reinsurance, 108
excess premium process, 158
expectation, 173
experience rating, 101
explosion, 7, 10, 75
exponential distribution, 180
F
failure rate, 50
ltration, 25
nite expectation, 173
nite moment, 173
nite time ruin probability, 170
Fourier transform, 174
G
gamma distribution, 180
gamma function, 171
generalized binomial coecient, 172
geometric distribution, 176
H
heavy tailed distribution, 170
homogeneous claim number process, 47
homogeneous Poisson process, 23
I
increasing, 4
increment, 20, 105
independent increments, 23, 105
inhomogeneous Poisson process, 56
initial reserve, 155
insurers portfolio, 100
intensity, 49, 56
K
Kolmogorovs inequality, 164
L
layer, 132
Lebesgue measure, 172
life insurance, 9
lifetime, 9
logarithmic distribution, 177
Lundbergs binomial criterion, 88
Lundbergs inequality, 165
M
Markov (claim number) process, 44
martingale, 25
maximal inequality, 161
memoryless distribution, 13
mixed binomial process, 92
mixed claim number process, 86
mixed Poisson process, 87
modied reserve process, 158
moment, 173
moment generating function, 174
multinomial criterion, 23, 60, 87
multiple life insurance, 9
N
negative contagion, 82, 83
negativebinomial distribution, 177
negativehypergeometric distribution, 178
net premium, 160
nondegenerate distribution, 172
number of claim events, 107
number of claims, 17
number of claims occurring at a claim
event, 107
number of large claims, 108
number of occurred claims, 107, 108
number of reported claims, 107
O
occurrence time, 7
operational time, 76
P
Panjers recursion, 122
Pareto distribution, 180
partition, 4
Pascal distribution, 178
point process, 41
Poisson distribution, 178
Poisson process, 23
Poisson risk process, 137
PolyaEggenberger distribution, 178
PolyaLundberg process, 93
positive, 4
positive adapted sequence, 164
positive contagion, 78, 82, 95
prediction, 38, 39, 97, 98
premium income, 155
premium intensity, 155
priority, 108
probability generating function, 174
R
regular claim number process, 48
reinsurance, 108
renewal process, 42
reserve process, 155
risk process, 127
ruin, 158
ruin problem, 156
S
safety loading, 160
semiMarkov process, 84
sequence of intensities, 49
single life insurance, 9
stationary increments, 23, 105
stopping time, 161
structure distribution, 86
structure parameter, 86
submartingale, 25
superadjustment coecient, 164
supermartingale, 25
superposition of claim size
processes, 148
superposition of Poisson processes, 141
superposition of Poisson risk
processes, 149
survival function, 13
T
tail probability, 113
thinned binomial process, 108
thinned claim number process, 107, 128
thinned claim size process, 130
thinned Poisson process, 108
thinned risk process, 130, 132
time of death, 9
total number of claims, 107
transition probability, 46
transition rule, 45
V
variance, 173
variance decomposition, 86
W
waiting time, 7
Walds identities, 111
X
XL reinsurance, 108
Z
zeroone law on explosion, 10, 75