

BAYESIAN THEORY

José M. Bernardo
Professor of Statistics, Universidad de Valencia, Spain

Adrian F. M. Smith
Professor of Statistics, Imperial College of Science, Technology and Medicine, London, UK

JOHN WILEY & SONS, LTD


Chichester · New York · Weinheim · Brisbane · Singapore · Toronto

Copyright © 2000 by John Wiley & Sons, Ltd, Baffins Lane, Chichester, West Sussex PO19 1UD, England

National 01243 779777
International (+44) 1243 779777
e-mail (for orders and customer service enquiries): cs-books@wiley.co.uk
Visit our Home Page on http://www.wiley.co.uk or http://www.wiley.com

First published in hardback 1994 (ISBN 0 471 92416 4)
All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, scanning or otherwise, except under the terms of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency, 90 Tottenham Court Road, London, UK W1P 9HE, without the permission in writing of the Publisher, with the exception of any material supplied specifically for the purpose of being entered and executed on a computer system, for exclusive use by the purchaser of the publication.

Neither the authors nor John Wiley & Sons Ltd accept any responsibility or liability for loss or damage occasioned to any person or property through using the material, instructions, methods or ideas contained herein, or acting or refraining from acting as a result of such use. The authors and Publisher expressly disclaim all implied warranties, including merchantability or fitness for any particular purpose. There will be no duty on the authors or Publisher to correct any errors or defects in the software. Designations used by companies to distinguish their products are often claimed as trademarks. In all instances where John Wiley & Sons is aware of a claim, the product name appears in initial capital or all capital letters. Readers, however, should contact the appropriate companies for more complete information regarding trademarks and registration.

Other Wiley Editorial Offices

John Wiley & Sons, Inc., 605 Third Avenue, New York, NY 10158-0012, USA · Weinheim · Brisbane · Singapore · Toronto

Library of Congress Cataloging-in-Publication Data

Bernardo, José M.
Bayesian theory / José M. Bernardo, Adrian F. M. Smith.
p. cm. (Wiley series in probability and mathematical statistics)
Includes bibliographical references and indexes.
ISBN 0 471 92416 4
1. Bayesian statistical decision theory. I. Smith, Adrian F. M. II. Title. III. Series.
QA279.5.B47 1993 519.5'42 dc20 93-31554 CIP

British Library Cataloguing in Publication Data

A catalogue record for this book is available from the British Library

ISBN 0 471 49464 X

TO MARINA and DANIEL


Preface
This volume, first published in hardback in 1994, presents an overview of the foundations and key theoretical concepts of Bayesian Statistics. Our original intention had been to produce further volumes on computation and methods. However, these projects have been shelved as tailored Markov chain Monte Carlo methods have emerged and are being refined as the standard Bayesian computational tools. We have taken the opportunity provided by this reissue to make a number of typographical corrections.

The original motivation for this enterprise stemmed from the impact and influence of de Finetti's two-volume Theory of Probability, which one of us helped translate into English from the Italian in the early 1970s. This was widely acknowledged as the definitive exposition of the operationalist, subjectivist approach to uncertainty, and provided further impetus at that time to a growth in activity and interest in Bayesian ideas. From a philosophical, foundational perspective, the de Finetti volumes provide, in the words of the author's dedication to his friend Segre, a necessary document for clarifying one point of view in its entirety.

From a statistical, methodological perspective, however, the de Finetti volumes end abruptly, with just the barest introduction to the mechanics of Bayesian inference. Some years ago, we decided to try to write a series of books which would take up the story where de Finetti left off, with the grandiose objective of clarifying in its entirety the world of Bayesian statistical theory and practice.


It is now clear that this was a hopeless undertaking. The world of Bayesian Statistics has been changing shape and growing in size rapidly and unpredictably, most notably in relation to developments in computational methods and the subsequent opening up of new application horizons. We are greatly relieved that we were too incompetent to finish our books a few years ago! And, of course, these changes and developments continue. There is no static world of Bayesian Statistics to describe in a once-and-for-all way. Moreover, we are dealing with a field of activity where, even among those whose intellectual perspectives fall within the broad paradigm, there are considerable differences of view at the level of detail and nuance of interpretation.

This volume on Bayesian Theory attempts to provide a fairly complete and up-to-date overview of what we regard as the key concepts, results and issues. However, it necessarily reflects the prejudices and interests of its authors, as well as the temporal constraints imposed by a publisher whose patience has been sorely tested for far too long. We can but hope that our sins of commission and omission are not too grievous.

Too many colleagues have taught us too many things for it to be practical to list everyone to whom we are beholden. However, Dennis Lindley has played a special role, not least in supervising us as Ph.D. students, and we should like to record our deep gratitude to him. We also shared many enterprises with Morrie DeGroot and continue to miss his warmth and intellectual stimulation. For detailed comments on earlier versions of material in this volume, we are indebted to our colleagues M. J. Bayarri, J. O. Berger, J. de la Horra, P. Diaconis, F. J. Girón, M. A. Gómez-Villegas, D. V. Lindley, M. Mendoza, J. Muñoz, E. Moreno, L. R. Pericchi, A. van der Linde, C. Villegas and M. West.

We are also grateful, in more ways than one, to the State of Valencia. It has provided a beautiful and congenial setting for much of the writing of this book. And, in the person of the Governor, Joan Lerma, it has been wonderfully supportive of the celebrated series of Valencia International Meetings on Bayesian Statistics. During the secondment of one of us as scientific advisor to the Governor, it also provided resources to enable the writing of this book to continue.

This volume has been produced directly in TeX and we are grateful to Maria Dolores Tortajada for all her efforts. Finally, we thank past and present editors at John Wiley & Sons for their support of this project: Jamie Cameron for saying "Go!" and Helen Ramsey for saying "Stop!"

Valencia, Spain
January 26, 2000

J. M. Bernardo
A. F. M. Smith

Contents

1. INTRODUCTION
   1.1. Thomas Bayes
   1.2. The subjectivist view of probability
   1.3. Bayesian Statistics in perspective
   1.4. An overview of Bayesian Theory
        1.4.1. Scope
        1.4.2. Foundations
        1.4.3. Generalisations
        1.4.4. Modelling
        1.4.5. Inference
        1.4.6. Remodelling
        1.4.7. Basic formulae
        1.4.8. Non-Bayesian theories
   1.5. A Bayesian reading list

2. FOUNDATIONS
   2.1. Beliefs and actions
   2.2. Decision problems
        2.2.1. Basic elements
        2.2.2. Formal representation
   2.3. Coherence and quantification
        2.3.1. Events, options and preferences
        2.3.2. Coherent preferences
        2.3.3. Quantification
   2.4. Beliefs and probabilities
        2.4.1. Representation of beliefs
        2.4.2. Revision of beliefs and Bayes' theorem
        2.4.3. Conditional independence
        2.4.4. Sequential revision of beliefs
   2.5. Actions and utilities
        2.5.1. Bounded sets of consequences
        2.5.2. Bounded decision problems
        2.5.3. General decision problems
   2.6. Sequential decision problems
        2.6.1. Complex decision problems
        2.6.2. Backward induction
        2.6.3. Design of experiments
   2.7. Inference and information
        2.7.1. Reporting beliefs as a decision problem
        2.7.2. The utility of a probability distribution
        2.7.3. Approximation and discrepancy
        2.7.4. Information
   2.8. Discussion and further references
        2.8.1. Operational definitions
        2.8.2. Quantitative coherence theories
        2.8.3. Related theories
        2.8.4. Critical issues

3. GENERALISATIONS
   3.1. Generalised representation of beliefs
        3.1.1. Motivation
        3.1.2. Countable additivity
   3.2. Review of probability theory
        3.2.1. Random quantities and distributions
        3.2.2. Some particular univariate distributions
        3.2.3. Convergence and limit theorems
        3.2.4. Random vectors, Bayes' theorem
        3.2.5. Some particular multivariate distributions
   3.3. Generalised options and utilities
        3.3.1. Motivation and preliminaries
        3.3.2. Generalised preferences
        3.3.3. The value of information
   3.4. Generalised information measures
        3.4.1. The general problem of reporting beliefs
        3.4.2. The utility of a general probability distribution
        3.4.3. Generalised approximation and discrepancy
        3.4.4. Generalised information
   3.5. Discussion and further references
        3.5.1. The role of mathematics
        3.5.2. Critical issues

4. MODELLING
   4.1. Statistical models
        4.1.1. Beliefs and models
   4.2. Exchangeability and related concepts
        4.2.1. Dependence and independence
        4.2.2. Exchangeability and partial exchangeability
   4.3. Models via exchangeability
        4.3.1. The Bernoulli and binomial models
        4.3.2. The multinomial model
        4.3.3. The general model
   4.4. Models via invariance
        4.4.1. The normal model
        4.4.2. The multivariate normal model
        4.4.3. The exponential model
        4.4.4. The geometric model
   4.5. Models via sufficient statistics
        4.5.1. Summary statistics
        4.5.2. Predictive sufficiency and parametric sufficiency
        4.5.3. Sufficiency and the exponential family
        4.5.4. Information measures and the exponential family
   4.6. Models via partial exchangeability
        4.6.1. Models for extended data structures
        4.6.2. Several samples
        4.6.3. Structured layouts
        4.6.4. Covariates
        4.6.5. Hierarchical models
   4.7. Pragmatic aspects
        4.7.1. Finite and infinite exchangeability
        4.7.2. Parametric and nonparametric models
        4.7.3. Model elaboration
        4.7.4. Model simplification
        4.7.5. Prior distributions
   4.8. Discussion and further references
        4.8.1. Representation theorems
        4.8.2. Subjectivity and objectivity
        4.8.3. Critical issues

5. INFERENCE
   5.1. The Bayesian paradigm
        5.1.1. Observables, beliefs and models
        5.1.2. The role of Bayes' theorem
        5.1.3. Predictive and parametric inference
        5.1.4. Sufficiency, ancillarity and stopping rules
        5.1.5. Decisions and inference summaries
        5.1.6. Implementation issues
   5.2. Conjugate analysis
        5.2.1. Conjugate families
        5.2.2. Canonical conjugate analysis
        5.2.3. Approximations with conjugate families
   5.3. Asymptotic analysis
        5.3.1. Discrete asymptotics
        5.3.2. Continuous asymptotics
        5.3.3. Asymptotics under transformations
   5.4. Reference analysis
        5.4.1. Reference decisions
        5.4.2. One-dimensional reference distributions
        5.4.3. Restricted reference distributions
        5.4.4. Nuisance parameters
        5.4.5. Multiparameter problems
   5.5. Numerical approximations
        5.5.1. Laplace approximation
        5.5.2. Iterative quadrature
        5.5.3. Importance sampling
        5.5.4. Sampling-importance-resampling
        5.5.5. Markov chain Monte Carlo
   5.6. Discussion and further references
        5.6.1. An historical footnote
        5.6.2. Prior ignorance
        5.6.3. Robustness
        5.6.4. Hierarchical and empirical Bayes
        5.6.5. Further methodological developments
        5.6.6. Critical issues

6. REMODELLING
   6.1. Model comparison
        6.1.1. Ranges of models
        6.1.2. Perspectives on model comparison
        6.1.3. Model comparison as a decision problem
        6.1.4. Zero-one utilities and Bayes factors
        6.1.5. General utilities
        6.1.6. Approximation by cross-validation
        6.1.7. Covariate selection
   6.2. Model rejection
        6.2.1. Model rejection through model comparison
        6.2.2. Discrepancy measures for model rejection
        6.2.3. Zero-one discrepancies
        6.2.4. General discrepancies
   6.3. Discussion and further references
        6.3.1. Overview
        6.3.2. Modelling and remodelling
        6.3.3. Critical issues

A. SUMMARY OF BASIC FORMULAE
   A.1. Probability distributions
   A.2. Inferential processes

B. NON-BAYESIAN THEORIES
   B.1. Overview
   B.2. Alternative approaches
        B.2.1. Classical decision theory
        B.2.2. Frequentist procedures
        B.2.3. Likelihood inference
        B.2.4. Fiducial and related theories
   B.3. Stylised inference problems
        B.3.1. Point estimation
        B.3.2. Interval estimation
        B.3.3. Hypothesis testing
        B.3.4. Significance testing
   B.4. Comparative issues
        B.4.1. Conditional and unconditional inference
        B.4.2. Nuisance parameters and marginalisation
        B.4.3. Approaches to prediction
        B.4.4. Aspects of asymptotics
        B.4.5. Model choice criteria

REFERENCES
SUBJECT INDEX
AUTHOR INDEX


Chapter 1

Introduction
Summary
A brief historical introduction to Bayes' theorem and its author is given, as a prelude to a statement of the perspective adopted in this volume regarding Bayesian Statistics. An overview is provided of the material to be covered in successive chapters and appendices, and a Bayesian reading list is provided.

1.1 THOMAS BAYES

According to contemporary journal death notices and the inscription on his tomb in Bunhill Fields cemetery in London, Thomas Bayes died on 7th April, 1761, at the age of 59. The inscription on top of the tomb reads:
Rev. Thomas Bayes. Son of the said Joshua and Ann Bayes (59). 7 April 1761. In recognition of Thomas Bayes's important work in probability. The vault was restored in 1969 with contributions received from statisticians throughout the world.

Definitive records of Bayes' birth do not seem to exist but, allowing for the calendar reform of 1752 and accepting that he died at the age of 59, it seems likely that he was born in 1701 (an argument attributed to Bellhouse in the Inst. Math. Statist. Bull. 26, 1992). Some background on the life and the work of Bayes may be found in Barnard (1958), Holland (1962), Pearson (1978), Gillies (1987), Dale (1990, 1991) and Earman (1990). See, also, Stigler (1986a).

That his name lives on in the characterisation of a modern statistical methodology is a consequence of the publication of An essay towards solving a problem in the doctrine of chances, attributed to Bayes and communicated to the Royal Society after Bayes' death by Richard Price in 1763 (Phil. Trans. Roy. Soc. 53, 370-418). The technical result at the heart of the essay is what we now know as Bayes' theorem. However, from a purely formal perspective there is no obvious reason why this essentially trivial probability result should continue to excite interest. In its simplest form, if H denotes an hypothesis and D denotes data, the theorem states that

P(H | D) = P(D | H) P(H) / P(D)
With P(H) regarded as a probabilistic statement of belief about H before obtaining data D, the left-hand side P(H | D) becomes a probabilistic statement of belief about H after obtaining D. Having specified P(D | H) and P(H), the mechanism of the theorem provides a solution to the problem of how to learn from data. Actually, Bayes only stated his result for a uniform prior. According to Stigler (1986b), it was Laplace (1774/1986), apparently unaware of Bayes' work, who stated the theorem in its general (discrete) form.

Like any theorem in probability, at the technical level Bayes' theorem merely provides a form of "uncertainty accounting", which asserts that the left-hand side of the equation must equal the right-hand side. The interest and controversy, of course, lie in the interpretation and assumed scope of the formal inputs to the two sides of the equation, and it is here that past and present commentators part company in their responses to the idea that Bayes' theorem can or should be regarded as a central feature of the statistical learning process. At the heart of the controversy is the issue of the philosophical interpretation of probability (objective or subjective?) and the appropriateness and legitimacy of basing a scientific theory on the latter.

What Thomas Bayes, from the tranquil surroundings of Bunhill Fields, where he lies in peace with Richard Price for company, has made of all the fuss over the last 233 years we shall never know. We would like to think that he is a subjectivist fellow-traveller but, in any case, he is in no position to complain at the liberties we are about to take in his name.
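As a purely illustrative aside (ours, not the authors'), the mechanics of this "uncertainty accounting" are easily made concrete. The following Python sketch applies the theorem to a two-hypothesis problem; the prior and likelihood values are invented for the example.

```python
# Bayes' theorem for a hypothesis H and data D:
#   P(H | D) = P(D | H) P(H) / P(D),
# where P(D) is the sum of P(D | H) P(H) over the exhaustive set of hypotheses.

prior = {"H": 0.01, "not H": 0.99}        # P(H), P(not H): illustrative values
likelihood = {"H": 0.95, "not H": 0.10}   # P(D | H), P(D | not H)

# Normalising constant P(D), by the law of total probability
p_data = sum(likelihood[h] * prior[h] for h in prior)

# Posterior beliefs about each hypothesis after obtaining D
posterior = {h: likelihood[h] * prior[h] / p_data for h in prior}
print(posterior)  # P(H | D) is approximately 0.088
```

Note how the data raise the probability of H from 0.01 to about 0.088 without coming close to establishing it: a small instance of the learning mechanism just described.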

1.2 THE SUBJECTIVIST VIEW OF PROBABILITY

Throughout this work, we shall adopt a wholehearted subjectivist position regarding the interpretation of probability. The definitive account and defence of this position are given in de Finetti's two-volume Theory of Probability (1970/1974, 1970/1975), and the following brief extract from the Preface to that work perfectly encapsulates the essence of the case.
The only relevant thing is uncertainty, the extent of our own knowledge and ignorance. The actual fact of whether or not the events considered are in some sense determined, or known by other people, and so on, is of no consequence. The numerous, different, opposed attempts to put forward particular points of view which, in the opinion of their supporters, would endow Probability Theory with a 'nobler' status, or a 'more scientific' character, or 'firmer' philosophical or logical foundations, have only served to generate confusion and obscurity, and to provoke well-known polemics and disagreements, even between supporters of essentially the same framework.

The main points of view that have been put forward are as follows. The classical view, based on physical considerations of symmetry, in which one should be obliged to give the same probability to such 'symmetric' cases. But which symmetry? And, in any case, why? The original sentence becomes meaningful if reversed: the symmetry is probabilistically significant, in someone's opinion, if it leads him to assign the same probabilities to such events. The logical view is similar, but much more superficial and irresponsible inasmuch as it is based on similarities or symmetries which no longer derive from the facts and their actual properties, but merely from the sentences which describe them, and from their formal structure or language. The frequentist (or statistical) view presupposes that one accepts the classical view, in that it considers an event as a class of individual events, the latter being 'trials' of the former. The individual events not only have to be 'equally probable', but also 'stochastically independent' ... (these notions when applied to individual events are virtually impossible to define or explain in terms of the frequentist interpretation). In this case, also, it is straightforward, by means of the subjective approach, to obtain, under the appropriate conditions, in a perfectly valid manner, the result aimed at (but unattainable) in the statistical formulation. It suffices to make use of the notion of exchangeability. The result, which acts as a bridge connecting the new approach with the old, has often been referred to by the objectivists as "de Finetti's representation theorem".

It follows that all the three proposed definitions of 'objective' probability, although useless per se, turn out to be useful and good as valid auxiliary devices when included as such in the subjectivist theory. (de Finetti, 1970/1974, Preface, xi-xii)

1.3 BAYESIAN STATISTICS IN PERSPECTIVE

The theory and practice of Statistics span a range of diverse activities, which are motivated and characterised by varying degrees of formal intent. Activity in the context of initial data exploration is typically rather informal; activity relating to concepts and theories of evidence and uncertainty is somewhat more formally structured; and activity directed at the mathematical abstraction and rigorous analysis of these structures is intentionally highly formal. What is the nature and scope of Bayesian Statistics within this spectrum of activity?

Bayesian Statistics offers a rationalist theory of personalistic beliefs in contexts of uncertainty, with the central aim of characterising how an individual should act in order to avoid certain kinds of undesirable behavioural inconsistencies. The theory establishes that expected utility maximisation provides the basis for rational decision making and that Bayes' theorem provides the key to the ways in which beliefs should fit together in the light of changing evidence. The goal, in effect, is to establish rules and procedures for individuals concerned with disciplined uncertainty accounting. The theory is not descriptive, in the sense of claiming to model actual behaviour. Rather, it is prescriptive, in the sense of saying "if you wish to avoid the possibility of these undesirable consequences you must act in the following way". From the very beginning, the development of the theory necessarily presumes a rather formal frame of discourse, within which uncertain events and available actions can be described and axioms of rational behaviour can be stated. But this formalism is preceded and succeeded in the scientific learning cycle by activities which, in our view, cannot readily be seen as part of the formalism.

In any field of application, a prerequisite for arriving at a structured frame of discourse will typically be an informal phase of exploratory data analysis. Also, it can happen that evidence arises which discredits a previously assumed and accepted formal framework and necessitates a rethink. Part of the process of realising that a change is needed can take place within the currently accepted framework using Bayesian ideas, but the process of rethinking is again outside the formalism. Both these phases of initial structuring and subsequent restructuring might well be guided by "Bayesian thinking", by which we mean keeping in mind the objective of creating or re-creating a formal framework for uncertainty analysis and decision making, but they are not themselves part of the Bayesian formalism. That said, there is, of course, often a pragmatic ambiguity about the boundaries of the formal and the informal.
The emphasis in this book is on ideas and we have sought throughout to keep the level of the mathematical treatment as simple as is compatible with giving what we regard as an honest account. However, there are sections where the full story would require a greater level of abstraction than we have adopted, and we have drawn attention to this whenever appropriate.

1.4 AN OVERVIEW OF BAYESIAN THEORY

1.4.1 Scope

This volume on Bayesian Theory focuses on the basic concepts and theory of Bayesian Statistics, with chapters covering elementary Foundations, mathematical Generalisations of the Foundations, Modelling, Inference and Remodelling. In addition, there are two appendices providing a Summary of Basic Formulae and a review of Non-Bayesian Theories. The emphasis throughout is on general ideas, the Why? of Bayesian Statistics. A detailed treatment of analytical and numerical techniques for implementing Bayesian procedures, the How?, will be provided in the volume Bayesian Computation. A systematic study of the methods of analysis for a wide range of commonly encountered model and problem types, the What?, will be provided in the volume Bayesian Methods.

The selection of topics and the details of approach adopted in this volume necessarily reflect our own preferences and prejudices. Where we hold strong views, these are, for the most part, rather clearly and forcefully stated, while, hopefully, avoiding too dogmatic a tone. We acknowledge, however, that even colleagues who are committed to the Bayesian paradigm will disagree with at least some points of detail and emphasis in our account. For this reason, and to avoid complicating the main text with too many digressionary asides and references, each of Chapters 2 to 6 concludes with a Discussion and Further References section, in which some of the key issues in the chapter are critically re-examined.

In most cases, the omission of a topic, or its abbreviated treatment in this volume, reflects the fact that a detailed treatment will be given in one or other of the volumes Bayesian Computation and Bayesian Methods. Topics falling into this category include Design of Experiments, Image Analysis, Linear Models, Multivariate Analysis, Nonparametric Inference, Prior Elicitation, Robustness, Sequential Analysis, Survival Analysis and Time Series. However, there are important topics, such as Game Theory and Group Decision Making, which are omitted simply because a proper treatment seemed to us to involve too much of a digression from our central theme. For a convenient source of discussion and references at the interface of Decision Theory and Game Theory, see French (1986).

1.4.2 Foundations

In Chapter 2, the concept of rationality is explored in the context of representing beliefs or choosing actions in situations of uncertainty. We introduce a formal framework for decision problems and an axiom system for the foundations of decision theory, which we believe to have considerable intuitive appeal and to be an improvement on the many such systems that have been previously proposed. Here, and throughout this volume, we stress the importance of a decision-oriented framework in providing a disciplined setting for the discussion of issues relating to uncertainty and rationality. The dual concepts of probability and utility are formally defined and analysed within this decision making context and the criterion of maximising expected utility is shown to be the only decision criterion which is compatible with the axiom system. The analysis of sequential decision problems is shown to reduce to successive applications of the methodology introduced.

A key feature of our approach is that statistical inference is viewed simply as a particular form of decision problem; specifically, a decision problem where an action corresponds to reporting a probability belief distribution for some unknown quantity of interest. Thus defined, the inference problem can be analysed within the general decision theory framework, rather than requiring a separate theory of inference. An important special feature of what we shall call a pure inference problem is the form of utility function to be adopted. We establish that the logarithmic utility function, more often referred to as a score function in this context, plays a special role as the natural utility function for describing the preferences of an individual faced with a pure inference problem. Within this framework, measures of the discrepancy between probability distributions and the amount of information contained in a distribution are naturally defined in terms of expected loss and expected increase, respectively, in logarithmic utility. These measures are mathematically closely related to well-known information-theoretic measures pioneered by Shannon (1948) and employed in statistical contexts by Kullback (1959/1968). A resulting characteristic feature of our approach is therefore the systematic appearance of these information-theoretic quantities as key elements in the Bayesian analysis of inference and general decision problems.
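To make the special role of the logarithmic score concrete, the following sketch (our own illustration, not taken from the text) checks numerically that an individual whose beliefs are p maximises expected logarithmic utility by reporting p itself, and that the expected loss from reporting some other q is the Kullback-Leibler discrepancy.

```python
import numpy as np

p = np.array([0.7, 0.2, 0.1])   # actual beliefs over a partition (illustrative)

def expected_log_score(q, p=p):
    """Expected utility sum_i p_i log q_i of reporting q when beliefs are p."""
    return float(np.sum(p * np.log(q)))

q_distorted = np.array([0.5, 0.3, 0.2])   # a dishonest report

print(expected_log_score(p))            # honest report: larger (less negative)
print(expected_log_score(q_distorted))  # distorted report: strictly smaller

# The shortfall equals the Kullback-Leibler discrepancy sum_i p_i log(p_i / q_i)
print(float(np.sum(p * np.log(p / q_distorted))))  # >= 0, zero only if q = p
```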

1.4.3 Generalisations

In Chapter 3, the ideas and results of Chapter 2 are extended to a much more general mathematical setting. An additional postulate concerning the comparison of a countable collection of events is appended to the axiom system of Chapter 2, and is shown to provide a justification for restricting attention to countably additive probability as the basis for representing beliefs. The elements of mathematical probability theory required in our subsequent development are then reviewed. The notions of actions and utilities, introduced in a simple discrete setting in Chapter 2, are extended in a natural way to provide a very general mathematical framework for our development of decision theory. A further additional mathematical postulate regarding preferences is introduced and, within this more general framework, the criterion of maximising expected utility is shown to be the only decision making criterion compatible with the extended axiom system.


In this generalised setting, inference problems are again considered simply as special cases of decision problems and generalised definitions of score functions and measures of information and discrepancy are given.
1.4.4 Modelling

In Chapter 4, we examine in detail the role of familiar mathematical forms of statistical models and the possible justifications, from a subjectivist perspective, for their use as representations of actual beliefs about observable random quantities. A feature of our approach is an emphasis on the primacy of observables and the notion of a model as a (probabilistic) prediction device for such observables. From this perspective, the role of conventional parametric statistical modelling is problematic, and requires fundamental re-examination.

The problem is approached by considering simple structural characteristics, such as symmetry with respect to the labelling of individual counts or measurements, a feature common to many individual beliefs about sequences of observables. The key concept here is that of exchangeability, which we motivate, formalise and then use to establish a version of de Finetti's celebrated representation theorem. This demonstrates that judgements of exchangeability lead to general mathematical representations of beliefs that justify and clarify the use and interpretations of such familiar statistical concepts as parameters, random samples, likelihoods and prior distributions.

Going beyond simple exchangeability, we show that beliefs which have certain additional invariance properties, for example, to rotation of the axes of measurements, or translation of the origin, can lead to mathematical representations involving other familiar specific forms of parametric distributions, such as normals and exponentials. A further approach to characterising belief distributions is considered, based on data reduction. The concept of a sufficient statistic is introduced and related to representations involving the exponential family of distributions. Various forms of partial exchangeability judgements about data structures are then discussed in a number of familiar contexts and links are established with a number of other commonly used statistical models. Structures considered include those of several samples, multiway layouts, problems involving covariates, and hierarchies.
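The two-stage structure asserted by the representation theorem is easy to exhibit by simulation. In the following sketch (ours; the Beta mixing distribution is an arbitrary illustrative choice, not one imposed by the theorem), an exchangeable 0-1 sequence is generated by first drawing a parameter from a mixing distribution and then sampling conditionally independent Bernoulli trials.

```python
import numpy as np

rng = np.random.default_rng(0)

def exchangeable_binary_sequence(n, a=2.0, b=3.0):
    """Generate x_1, ..., x_n in the two-stage form of de Finetti's
    representation: draw theta from a mixing distribution (here Beta(a, b),
    purely for concreteness), then draw the x_i as independent
    Bernoulli(theta) trials given theta."""
    theta = rng.beta(a, b)
    return rng.binomial(1, theta, size=n)

# Marginally, the x_i are exchangeable: the probability of any realised
# sequence depends only on the number of ones it contains, not their order.
print(exchangeable_binary_sequence(10))
```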
1.4.5 Inference

In Chapter 5, the key role of Bayes' theorem in the updating of beliefs about observables in the light of new information is identified and related to conventional mechanisms of predictive and parametric inference. The roles of sufficiency, ancillarity and stopping rules in such inference processes are also examined.

Various standard forms of statistical problems, such as point and interval estimation and hypothesis testing, are re-examined within the general Bayesian decision framework and related to formal and informal inference summaries. The problems of implementing Bayesian procedures are discussed at length. The mathematical convenience and elegance of conjugate analysis are illustrated in detail, as are the mathematical approximations available under the assumption of the validity of large-sample asymptotic analysis. A particular feature of this volume is the extended account of so-called reference analysis, which can be viewed as a Bayesian formalisation of the idea of "letting the data speak for themselves". An alternative, closely related idea is that of how to represent "vague beliefs" or "ignorance". We provide a detailed historical review of attempts that have been made to solve this problem and compare and contrast some of these with the reference analysis approach. A brief account is given of recent analytic approximation strategies derived from Laplace-type methods, together with outline accounts of numerical quadrature, importance sampling, sampling-importance-resampling, and Markov chain Monte Carlo methods.
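As a small foretaste of the conjugate analysis of Chapter 5 (a sketch of ours, with invented numbers), consider Bernoulli observations with a Beta prior for the underlying chance: the posterior is again a Beta distribution, with parameters updated by simple counting.

```python
# Beta-Bernoulli conjugate updating: with a Beta(a, b) prior for theta and
# r successes observed in n trials, the posterior is Beta(a + r, b + n - r).
a, b = 1.0, 1.0   # uniform prior over theta (illustrative choice)
n, r = 20, 14     # invented data: 14 successes in 20 trials

a_post, b_post = a + r, b + n - r
posterior_mean = a_post / (a_post + b_post)   # equals (a + r) / (a + b + n)
print(a_post, b_post, posterior_mean)         # 15.0 7.0 0.6818...
```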

1.4.6 Remodelling

In Chapter 6, it is argued that, whether viewed from the perspective of a sensitive individual modeller or from that of a group of modellers, there are good reasons for systematically entertaining a range of possible belief models, rather than predicating all analysis on a single assumed model. A variety of decision problems are examined within this framework: some involving model choice only; some involving model choice followed by a terminal action, such as prediction; others involving only a terminal action.

A feature of our treatment of this topic is that, throughout, a clear distinction is drawn among three rather different perspectives on the comparison of, and choice from among, a range of competing models. The first perspective arises when the range of models under consideration is assumed to include the "true" model. The second perspective arises when the range of models is under consideration in order to provide a more conveniently implemented proxy for an actual, but intractable, belief model. The third perspective arises when the range of models is under consideration because the models are "all there is available", in the absence of any specification of an actual belief model. Our discussion relates and links these ideas with aspects of hypothesis testing, significance testing and cross-validation.
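A minimal numerical illustration of the Bayes factor idea treated in Chapter 6 (ours; both models and data are invented, and each model is taken to be fully specified so that no prior over parameters is needed):

```python
from scipy.stats import binom

# Compare two fully specified models for r successes in n trials:
# M1 says theta = 0.5, M2 says theta = 0.7.
n, r = 20, 14
marginal_m1 = binom.pmf(r, n, 0.5)   # P(data | M1)
marginal_m2 = binom.pmf(r, n, 0.7)   # P(data | M2)

bayes_factor_21 = marginal_m2 / marginal_m1
print(bayes_factor_21)   # about 5.2: the data favour M2

# Posterior odds = Bayes factor x prior odds; with even prior odds,
# P(M2 | data) = BF / (1 + BF).
print(bayes_factor_21 / (1 + bayes_factor_21))
```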
1.4.7 Basic Formulae

In Appendix A, we collect together for convenience, in tabular format, summaries of the main univariate and multivariate probability distributions that appear in the text, together with summaries of the prior/posterior/predictive forms corresponding to these distributions in the context of conjugate and reference analyses.

1.4.8 Non-Bayesian Theories

In Appendix B, we review what we perceive to be the main alternatives to the Bayesian approach; namely, classical decision theory, frequentist procedures, likelihood theory, and fiducial and related theories. We compare and contrast these alternatives in the context of "stylised" inference problems such as point and interval estimation, hypothesis and significance testing. Through counter-examples and general discussion, we indicate why we find all these alternatives seriously deficient as formal inference theories.

1.5 A BAYESIAN READING LIST

As we have already remarked, this work is, necessarily, a selective account of Bayesian theory, reflecting our own interests and perspectives. The following is a list of other Bayesian books, by no means exhaustive, whose contents would provide a significant complement to the material in this volume. In those cases where there are several editions, or when the original is not in English, we quote both the original date and the date of the most recent English edition. Thus, Jeffreys (1939/1961) refers to Jeffreys' Theory of Probability, first published in 1939, and to its most recent (3rd) edition, published in 1961; similarly, de Finetti (1970/1974) refers to the original (1970) Italian version of de Finetti's Teoria delle Probabilità, vol. 1, and to its English translation (published in 1974).

Pioneering Bayesian books include Laplace (1812), Keynes (1921/1929), Jeffreys (1939/1961), Good (1950, 1965), Savage (1954/1972, 1962), Schlaifer (1959, 1961), Raiffa and Schlaifer (1961), Mosteller and Wallace (1964/1984), Dubins and Savage (1965/1976), Lindley (1965, 1972), Pratt et al. (1965), Tribus (1969), DeGroot (1970), de Finetti (1970/1974, 1970/1975, 1972) and Box and Tiao (1973).

Elementary and intermediate Bayesian textbooks include those of Savage (1968), Schmitt (1969), Lavalle (1970), Lindley (1971/1985), Winkler (1972), Kleiter (1980), Bernardo (1981b), Daboni and Wedlin (1982), Iversen (1984), O'Hagan (1988a), Cifarelli and Muliere (1989), Lee (1989), Press (1989), Scozzafava (1989), Wichmann (1990), Borovcnik (1992), and Berry (1996).

More advanced Bayesian monographs include Hartigan (1983), Regazzini (1983), Berger (1985a), Savchuk (1989), Florens et al. (1990), Robert (1992) and O'Hagan (1994a). Polson and Tiao (1995) is a two-volume collection of classic papers in Bayesian inference.

Special topics have also been examined from a Bayesian point of view; these include Actuarial Science (Klugman, 1992), Biostatistics (Girelli-Bruni, 1981;
Lecoutre, 1984; Bmai et al., 1992; Berry and Stangl, 1996), Control Theory (Aoki, 1967; Sawaragi et al., 1967), Decision Analysis (Luce and Raiffa, 1957; Chernoff and Moses, 1959; Grayson, 1960; Fellner, 1965; Roberts, 1966; Edwards and Tversky, 1967; Hadley, 1967; Martin, 1967; Morris, 1968; Raiffa, 1968; Lusted, 1968; Schlaifer, 1969; Aitchison, 1970; Fishburn, 1970, 1982; Halter and Dean, 1971; Lindgren, 1971; Keeney and Raiffa, 1976; Rios, 1977; Lavalle, 1978; Roberts, 1979; French et al., 1983; French, 1986, 1989; Marinell and Seeber, 1988; Smith, 1988a), Dynamic Forecasting (Spall, 1988; West and Harrison, 1989; Pole et al., 1994), Economics and Econometrics (Morales, 1971; Zellner, 1971; Richard, 1973; Bauwens, 1984; Boyer and Kihlstrom, 1984; Cyert and DeGroot, 1987), Educational and Psychological Research (Novick and Jackson, 1974; Pollard, 1986), Foundations (Fishburn, 1964, 1970, 1982, 1987, 1988a; Berger and Wolpert, 1984/1988; Brown, 1985), History (Dale, 1991), Information Theory (Yaglom and Yaglom, 1960/1983; Osteyee and Good, 1974), Law and Forensic Science (DeGroot et al., 1986; Aitken and Stoney, 1991), Linear Models (Lempers, 1971; Leamer, 1978; Pilz, 1983/1991; Broemeling, 1985), Logic and Philosophy of Science (Jeffrey, 1965/1983, 1981; Rosenkrantz, 1977; Seidenfeld, 1979; Howson and Urbach, 1989; Verbraak, 1990; Rivadulla, 1991), Maximum Entropy (Levine and Tribus, 1978; Smith and Grandy, 1985; Justice, 1987; Smith and Erickson, 1987; Erickson and Smith, 1988; Skilling, 1989; Fougere, 1990; Grandy and Schick, 1991; Kapur and Kesavan, 1992; Mohammad-Djafari and Demoment, 1993), Multivariate Analysis (Press, 1972/1982; Berger and DasGupta, 1991), Optimisation (Mockus, 1989), Pattern Recognition (Simon, 1984), Prediction (Aitchison and Dunsmore, 1975; Geisser, 1993), Probability Assessment (Stael von Holstein, 1970; Stael von Holstein and Matheson, 1979; Cooke, 1991), Reliability (Martz and Waller, 1982; Clarotti and Lindley, 1988), Sample Surveys (Rubin, 1987), Social Science (Phillips, 1973) and Spectral Analysis (Bretthorst, 1988).

A number of collected works also include a wealth of Bayesian material. Among these, we note particularly Kyburg and Smokler (1964/1980), Meyer and Collier (1970), Godambe and Sprott (1971), Fienberg and Zellner (1974), White and Bowen (1975), Aykaç and Brumat (1977), Parenti (1978), Zellner (1980), Savage (1981), Box et al. (1983), Dawid and Smith (1983), Florens et al. (1983, 1985), Good (1983), Jaynes (1983), Kadane (1984), Box (1985), Goel and Zellner (1986), Smith and Dawid (1987), Viertl (1987), Gärdenfors and Sahlin (1988), Gupta and Berger (1988, 1994), Geisser et al. (1990), Hinkelmann (1990), Oliver and Smith (1990), Ghosh and Pathak (1992), Goel and Iyengar (1992), de Finetti (1993), Fearn and O'Hagan (1993), Gatsonis et al. (1993), Freeman and Smith (1994) and, last but not least, the Proceedings of the Valencia International Meetings on Bayesian Statistics (Bernardo et al., 1980, 1985, 1988, 1992, 1996 and 1999).

General discussions of Bayesian Statistics may be found in review papers and encyclopedia articles. Among these, we note de Finetti (1951), Lindley (1953, 1976, 1978, 1982b, 1982c, 1984, 1990, 1992), Anscombe (1961), Savage (1961, 1970), Edwards et al. (1963), Bartholomew (1965), Cornfield (1969), Good (1976, 1992), Roberts (1978), DeGroot (1982), Dawid (1983a), Smith (1984), Zellner (1985, 1987, 1988a), Pack (1986a, 1986b), Cifarelli (1987), Bernardo (1989), Ghosh (1991) and Berger (1993). For discussion of important specific topics, see Luce and Suppes (1965), Birnbaum (1968, 1978), de Finetti (1968), Press (1980a, 1985a), Fishburn (1981, 1986, 1988b), Dickey (1982), Geisser (1982, 1986), Good (1982, 1985, 1987, 1988a, 1988b), Dawid (1983b, 1986a, 1992), Joshi (1983), LaMotte (1985), Genest and Zidek (1986), Goldstein (1986c), Racine-Poon et al. (1986), Hodges (1987), Ericson (1988), Zellner (1988c), Trader (1989), Breslow (1990), Lindley (1991), Smith (1991), Barlow and Irony (1992), Ferguson et al. (1992), Arnold (1993), Kadane (1993), Bartholomew (1994), Berger (1994) and Hill (1994).


Chapter 2

Foundations
Summary
The concept of rationality is explored in the context of representing beliefs or choosing actions in situations of uncertainty. An axiomatic basis, with intuitive operational appeal, is introduced for the foundations of decision theory. The dual concepts of probability and utility are formally defined and analysed within this context. The criterion of maximising expected utility is shown to be the only decision criterion which is compatible with the axiom system. The analysis of sequential decision problems is shown to reduce to successive applications of the methodology introduced. Statistical inference is viewed as a particular decision problem which may be analysed within the framework of decision theory. The logarithmic score is established as the natural utility function to describe the preferences of an individual faced with a pure inference problem. Within this framework, the concept of discrepancy between probability distributions and the quantification of the amount of information in new data are naturally defined in terms of expected loss and expected increase in utility, respectively.

2.1 BELIEFS AND ACTIONS

We spend a considerable proportion of our lives, both private and professional, in a state of uncertainty. This uncertainty may relate to past situations, where direct knowledge or evidence is not available, or has been lost or forgotten; or to present and future developments which are not yet completed. Whatever the circumstances, there is a sense in which all states of uncertainty may be described in the same way: namely, an individual feeling of incomplete knowledge in relation to a specified situation (a feeling which may, of course, be shared by other individuals).

And yet it is obvious that we do not attempt to treat all our individual uncertainties with the same degree of interest or seriousness. Many feelings of uncertainty are rather insubstantial and we neither seek to analyse them, nor to order our thoughts and opinions in any kind of responsible way. This typically happens when we feel no actual or practical involvement with the situation in question; in other words, when we feel that we have no (or only negligible) capacity to influence matters, or that the possible outcomes have no (or only negligible) consequences so far as we are concerned. In such cases, we are not motivated to think carefully about our uncertainty, either because nothing depends on it, or because the potential effects are trivial in comparison with the effort involved in carrying out a conscious analysis.

On the other hand, we all regularly encounter uncertain situations in which we at least aspire to behave rationally in some sense. This might be because we face the direct practical problem of choosing from among a set of possible actions, where each involves a range of uncertain consequences and we are concerned to avoid making an illogical choice. Alternatively, we might be called upon to summarise our beliefs about the uncertain aspects of the situation, bearing in mind that others may subsequently use this summary as the basis for choosing an action. In this case, we are concerned that our summary be in a form which will enable a rational choice to be made at some future time. More specifically, we might regard the summary itself, i.e., the choice of a particular mode of representing and communicating our beliefs, as being a form of action to which certain criteria of rationality might be directly applied.

Our basic concern in this chapter is with exploring the concept of rationality in the context of representing beliefs or choosing actions in situations of uncertainty. To choose the best among a set of actions would, in principle, be immediate if we had perfect information about the consequences to which they would lead. So far as this work is concerned, interesting decision problems are those for which such perfect information is not available, and we must take uncertainty into account as a major feature of the problem.

It might be argued that there are complex situations where we do have complete information and yet still find it difficult to take the best decision. Here, however, the difficulty is technical, not conceptual. For example, even though we have, in principle, complete information, it is typically not easy to decide what is the optimal strategy to rebuild a Rubik cube, or which is the cheapest diet fulfilling specified nutritional requirements. We take the view that such problems are purely technical. In the first case, they result from the large number of possible strategies; in the second, they reduce to the mathematical problem of finding a minimum under certain constraints. But in neither case is there any doubt about the decision criterion to be used. In this work we shall not consider these kinds of combinatorial or mathematical programming problems, and we shall assume that in the presence of complete information we can, in principle, always choose the best alternative.

Our concern, instead, is with the logical process of decision making in situations of uncertainty; in other words, with the decision criterion to be adopted when we do not have complete information and are thus faced with, at least some, elements of uncertainty. To avoid any possible confusion, we should emphasise that we do not interpret actions in situations of uncertainty in a narrow, directly economic sense. For example, within our purview we include the situation of an individual scientist summarising his or her own current beliefs following the results of an experiment; or trying to facilitate the task of others seeking to decide upon their beliefs in the light of the experimental results.

It is assumed in our approach to such problems that the notion of rational belief cannot be considered separately from the notion of rational action. Either a statement of beliefs in the light of available information is, actually or potentially, an input into the process of choosing some practical course of action,

... it is not asserted that a belief ... does actually lead to action, but would lead to action in suitable circumstances; just as a lump of arsenic is called poisonous not because it actually has killed or will kill anyone, but because it would kill anyone if he ate it (Ramsey, 1926).
or, alternatively, a statement of beliefs might be regarded as an end in itself, in which case the choice of the form of statement to be made constitutes an action,
Frequently, it is a question of providing a convenient summary of the data ... In such cases, the emphasis is on the inference rather than the decision aspect of the problem, although formally it can still be considered a decision problem if the inferential statement itself is interpreted as the decision to be taken (Lehmann, 1959/1986).

We can therefore explore the notion of rationality for both beliefs and actions by concentrating on the latter and asking ourselves what kinds of rules should govern preference patterns among sets of alternative actions in order that choices made in accordance with such rules commend themselves to us as rational, in that they cannot lead us into forms of behavioural inconsistency which we specifically wish to avoid. In Section 2.2, we describe the general structure of problems involving choices under uncertainty and introduce the idea of preferences between options. In Section 2.3, we make precise the notion of rational preferences in the form of axioms.

We describe these as principles of quantitative coherence because they specify the ways in which preferences need to be made quantitatively precise and fit together, or cohere, if illogical forms of behaviour are to be avoided. In Sections 2.4 and 2.5, we prove that, in order to conform with the principles of quantitative coherence, degrees of belief about uncertain events should be described in terms of a (finitely additive) probability measure, relative values of individual possible consequences should be described in terms of a utility function, and the rational choice of an action is to select one which has the maximum expected utility. In Section 2.6, we discuss sequential decision problems and show that their analysis reduces to successive applications of the maximum expected utility methodology; in particular, we identify the design of experiments as a particular case of a sequential decision problem. In Section 2.7, we make precise the sense in which choosing a form of a statement of beliefs can be viewed as a special case of a decision problem. This identification of inference as decision provides the fundamental justification for beginning our development of Bayesian Statistics with the discussion of decision theory. Finally, a general review of ideas and references is given in Section 2.8.
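Although the formal development follows in later sections, the criterion itself is easy to state computationally. The following sketch (ours, with invented probabilities and utilities) chooses among actions by maximising expected utility.

```python
# Each action is a list of (probability, utility) pairs over its partition
# of uncertain events; the probabilities within an action sum to one.
actions = {
    "a1": [(0.3, 10.0), (0.7, 2.0)],   # illustrative values
    "a2": [(0.5, 6.0), (0.5, 4.0)],
}

def expected_utility(action):
    return sum(p * u for p, u in action)

best = max(actions, key=lambda name: expected_utility(actions[name]))
print({name: expected_utility(a) for name, a in actions.items()}, best)
# a1 scores 4.4, a2 scores 5.0, so a2 is chosen
```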

2.2 DECISION PROBLEMS

2.2.1 Basic Elements

We shall describe any situation in which choices are to be made among alternative courses of action with uncertain consequences as a decision problem, whose structure is determined by three basic elements:

(i) a set {a_i, i ∈ I} of available actions, one of which is to be selected;
(ii) for each action a_i, a set {E_j, j ∈ J} of uncertain events, describing the uncertain outcomes of taking action a_i;
(iii) corresponding to each set {E_j, j ∈ J}, a set of consequences {c_j, j ∈ J}.

The idea is as follows. Suppose we choose action a_i; then one and only one of the uncertain events E_j, j ∈ J, occurs and leads to the corresponding consequence c_j, j ∈ J. Each set of events {E_j, j ∈ J} forms a partition (an exclusive and exhaustive decomposition) of the total set of possibilities. Naturally, both the set of consequences and the partition which labels them may depend on the particular action considered, so that a more precise notation would be {E_ij, j ∈ J_i} and {c_ij, j ∈ J_i} for each action a_i. However, to simplify notation, we shall omit this dependence, while remarking that it should always be borne in mind. We shall come back to this point in Section 2.6. In practical problems, the labelling sets I and J (for each i) are typically finite. In such cases, the decision problem can be represented schematically by means of a decision tree, as shown in Figure 2.1.


Figure 2.1 Decision tree

The square represents a decision node, where the choice of an action is required. The circle represents an uncertainty node, where the outcome is beyond our control. Following the choice of an action and the occurrence of a particular event, the branch leads us to the corresponding consequence. Of course, most practical problems involve sequential considerations but, as shown in Section 2.6, these reduce, essentially, to repeated analyses based on the above structure.

It is clear, either from our general discussion, or from the decision tree representation, that we can formally identify any a_i, i ∈ I, with the combination of {E_j, j ∈ J} and {c_j, j ∈ J} to which it leads. In other words, to choose a_i is to opt for the uncertain scenario labelled by the pairs (E_j, c_j), j ∈ J. We shall write a_i = {c_j | E_j, j ∈ J} to denote this identification, where the notation c_j | E_j signifies that event E_j leads to consequence c_j, i.e., that a_i(E_j) = c_j.

An individual's perception of the state of uncertainty resulting from the choice of any particular a_i is very much dependent on the information currently available. In particular, {E_j, j ∈ J} forms a partition of the total set of relevant possibilities as the individual decision-maker now perceives them to be. Further information, of a kind which leads to a restriction on what can be regarded as the total set of possibilities, will change the perception of the uncertainties, in that some of the E_j may become very implausible (or even logically impossible) in the light of the new information, whereas others may become more plausible. It is therefore of considerable importance to bear in mind that a representation such as Figure 2.1 only captures the structure of a decision problem as perceived at a particular point in time. Preferences about the uncertain scenarios resulting from the choices of actions depend on attitudes to the consequences involved and assessments of the uncertainties attached to the corresponding events. The latter are clearly subject to change as new information is acquired and this may well change overall preferences among the various courses of action.
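The identification of an option with the mapping a_i = {c_j | E_j, j ∈ J} is naturally mirrored in code. The following sketch (ours; the events and consequences are invented) represents each option as a function from the events of a partition to consequences.

```python
# An option a = {c_j | E_j, j in J} maps each event of a partition of the
# certain event to a consequence; a dictionary is a natural representation.
option_a1 = {"rain": "cancel picnic, stay dry", "no rain": "enjoy picnic"}
option_a2 = {"rain": "get soaked", "no rain": "enjoy picnic"}

def consequence(option, event):
    """a(E_j) = c_j: the consequence to which event E_j leads under option a."""
    return option[event]

print(consequence(option_a1, "rain"))   # the consequence attached to "rain"
```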

The notion of preference is, of course, very familiar in the everyday context of actual or potential choice. Indeed, an individual decision-maker often prefaces an actual choice (from a menu, an investment portfolio, a range of possible forms of medical treatment, a textbook of statistical methods, etc.) with the phrase "I prefer ..." (caviar, equities, surgery, Bayesian procedures, etc.). To prefer action a_1 to action a_2 means that if these were the only two options available, a_1 would be chosen (conditional, of course, on the information available at the time). In everyday terms, the idea of indifference between two courses of action also has a clear operational meaning. It signifies a willingness to accept an externally determined choice (for example, letting a disinterested third party choose, or tossing a coin).

In addition to representing the structure of a decision problem using the three elements discussed above, we must also be able to represent the idea of preference as applied to the comparison of some or all of the pairs of available options. We shall therefore need to consider a fourth basic element of a decision problem:

(iv) the relation ≤, which expresses the individual decision-maker's preferences between pairs of available actions, so that a_1 ≤ a_2 signifies that a_1 is not preferred to a_2.

These four basic elements have been introduced in a rather informal manner. In order to study decision problems in a precise way, we shall need to reformulate these concepts in a more formal framework. The development which follows, here and in Section 3.3, is largely based on Bernardo, Ferrándiz and Smith (1985).
2.2.2 Formal Representation

When considering a particular, concrete decision problem, we do not usually confine our thoughts to only those outcomes and options explicitly required for the specification of that problem. Typically, we expand our horizons to encompass analogous problems, which we hope will aid us in ordering our thoughts by providing suggestive points of reference or comparison. The collection of uncertain scenarios defined by the original concrete problem is therefore implicitly embedded in a somewhat wider framework of actual and hypothetical scenarios. We begin by describing this wider frame of discourse within which the comparisons of scenarios are to be carried out. It is to be understood that the initial specification of any such particular frame of discourse, together with the preferences among options within it, are dependent on the decision-maker's overall state of information at that time. Throughout, we shall denote this initial state of mind by M_0.

We now give a formal definition of a decision problem. This will be presented in a rather compact form; detailed elaboration is provided in the remarks following the definition.

Definition 2.1. (Decision problem). A decision problem is defined by the elements (E, C, A, ≤), where:
(i) E is an algebra of relevant events, Eⱼ;
(ii) C is a set of possible consequences, cⱼ;
(iii) A is a set of options, or potential acts, consisting of functions which map finite partitions of Ω, the certain event in E, to compatibly-dimensioned, ordered sets of elements of C;
(iv) ≤ is a preference order, taking the form of a binary relation between some of the elements of A.

We now discuss each of these elements in detail.

Within this wider frame of discourse, an individual decision-maker will wish to consider the uncertain events judged to be relevant in the light of the initial state of information M₀. However, it is natural to assume that if E₁ ∈ E and E₂ ∈ E are judged to be relevant events then it may also be of interest to know about their joint occurrence, or whether at least one of them occurs. This means that E₁ ∩ E₂ and E₁ ∪ E₂ should also be assumed to belong to E. Repetition of this argument suggests that E should be closed under the operations of arbitrary finite intersections and unions. Similarly, it is natural to require E to be closed under complementation, so that Eᶜ ∈ E. In particular, these requirements ensure that the certain event Ω and the impossible event ∅ both belong to E. Technically, we are assuming that the class of relevant events has the structure of an algebra. (However, it can certainly be argued that this is too rigid an assumption. We shall provide further discussion of this and related issues in Section 2.8.4.)

As we mentioned when introducing the idea of a wider frame of discourse, the algebra E will consist of what we might call the real-world events (that is, those occurring in the structure of any concrete, actual decision problem that we may wish to consider), together with any other hypothetical events which it may be convenient to bring to mind as an aid to thought. The class E will simply be referred to as the algebra of (relevant) events.

We denote by C the set of all consequences that the decision-maker wishes to take into account; preferences among such consequences will later be assumed to be independent of the state of information concerning relevant events. The class C will simply be referred to as the set of (possible) consequences.

In our introductory discussion we used the term action to refer to each potential act available as a choice at a decision node. Within the wider frame of discourse, we prefer the term option, since the general, formal framework may include hypothetical scenarios (possibly rather far removed from potential concrete actions). So far as the definition of an option as a function is concerned, we note that this is a rather natural way to view options from a mathematical point of view: an option consists precisely of the linking of a partition of Ω, {Eⱼ, j ∈ J}, with a corresponding set of consequences, {cⱼ, j ∈ J}. To represent such a mapping we shall adopt the notation {cⱼ | Eⱼ, j ∈ J}, with the interpretation that event Eⱼ leads to consequence cⱼ, j ∈ J.


It follows immediately from the definition of an option that the ordering of the labels within J is irrelevant, so that, for example, the options {c₁ | E, c₂ | Eᶜ} and {c₂ | Eᶜ, c₁ | E} are identical, and forms such as {c | E₁, c | E₂, cⱼ | Eⱼ, j ∈ J} and {c | E₁ ∪ E₂, cⱼ | Eⱼ, j ∈ J} are completely equivalent. Which form is used in any particular context is purely a matter of convenience. Sometimes, the interpretation of an option with a rather cumbersome description is clarified by an appropriate reformulation. For example, a = {c₁ | E ∩ G, c₂ | Eᶜ ∩ G, c₃ | Gᶜ} may be more compactly written as a = {a₁ | G, c₃ | Gᶜ}, with a₁ = {c₁ | E, c₂ | Eᶜ}. Thus, if an option leads, for each event Eⱼ of a partition, to a further option aⱼ, we shall use the composite function notation a = {aⱼ | Eⱼ, j ∈ J}. In all cases, the ordering of the labels is irrelevant.

The class A of options, or potential actions, will simply be referred to as the action space. In defining options, the assumption of a finite partition into events of E seems to us to correspond most closely to the structure of practical problems. However, an extension to admit the possibility of infinite partitions has certain mathematical advantages and will be fully discussed, together with other mathematical extensions, in Chapter 3.

In introducing the preference binary relation ≤, we are not assuming that all pairs of options (a₁, a₂) ∈ A × A can necessarily be related by ≤. If the relation can be applied, in the sense that either a₁ ≤ a₂ or a₂ ≤ a₁ (or both), we say that a₁ is not preferred to a₂, or a₂ is not preferred to a₁ (or both). From ≤, we can derive a number of other useful binary relations.

Definition 2.2. (Induced binary relations).
(i) a₁ ~ a₂ if and only if a₁ ≤ a₂ and a₂ ≤ a₁.
(ii) a₁ < a₂ if and only if a₁ ≤ a₂ and it is not true that a₂ ≤ a₁.
(iii) a₁ ≥ a₂ if and only if a₂ ≤ a₁.
(iv) a₁ > a₂ if and only if a₂ < a₁.
Definition 2.2 is to be understood as referring to any options a₁, a₂ in A. To simplify the presentation, we shall omit such universal quantifiers when there is no danger of confusion. The induced binary relations are to be interpreted to mean that a₁ is equivalent to a₂ if and only if a₁ ~ a₂, and a₁ is strictly preferred to a₂ if and only if a₁ > a₂. Together with the interpretation of ≤, these suffice to describe all cases where pairs of options can be compared.

We can identify individual consequences as special cases of options by writing c = {c | Ω}, for any c ∈ C. Without introducing further notation, we shall simply regard c as denoting either an element of C, or the element {c | Ω} of A. There will be no danger of any confusion arising from this identification. Thus, we shall


write c₁ ≤ c₂ if and only if {c₁ | Ω} ≤ {c₂ | Ω} and say that consequence c₁ is not preferred to consequence c₂. Strictly speaking, we should introduce a new symbol to replace ≤ when referring to a preference relation over C × C, since ≤ is defined over A × A. In fact, this parsimonious abuse of notation creates no danger of confusion and we shall routinely adopt such usage in order to avoid a proliferation of symbols. We shall proceed similarly with the binary relations ~ and < introduced in Definition 2.2. To avoid triviality, we shall later formally assume that there exist at least two consequences c₁ and c₂ such that c₁ < c₂.

The basic preference relation between options, ≤, conditional on the initial state of information M₀, can also be used to define a binary relation on E × E, the collection of all pairs of relevant events. This binary relation will capture the intuitive notion of one event being "more likely" than another. Since, once again, there is no danger of confusion, we shall further economise on notation and also use the symbol ≤ to denote this new uncertainty binary relation between events.

Definition 2.3. (Uncertainty relation). E ≤ F if and only if, for all c₁ < c₂, {c₂ | E, c₁ | Eᶜ} ≤ {c₂ | F, c₁ | Fᶜ}; we then say that E is not more likely than F.

The intuitive content of the definition is clear. If we compare two dichotomised options, involving the same pair of consequences and differing only in terms of their uncertain events, we will prefer the option under which we feel it is "more likely" that the preferred consequence will obtain. Clearly, the force of this argument applies independently of the choice of the particular consequences c₁ and c₂, provided that our preferences between the latter are assumed independent of any considerations regarding the events E and F. Continuing the (convenient and harmless) abuse of notation, we shall also use the derived binary relations given in Definition 2.2 to describe uncertainty relations between events. Thus, E ~ F if and only if E and F are equally likely, and E > F if and only if E is strictly more likely than F. Since, for all c₁ < c₂,

{c₂ | ∅, c₁ | Ω} ~ c₁ < c₂ ~ {c₂ | Ω, c₁ | ∅},

it is always true, as one would expect, that ∅ < Ω.

It is worth stressing once again at this point that all the order relations over A × A, and hence over C × C and E × E, are to be understood as personal, in the sense that, given an agreed structure for a decision problem, each individual is free to express his or her own personal preferences, in the light of his or her initial state of information M₀. Thus, for a given individual, a statement such as E > F is to be interpreted as "this individual, given the state of information described by M₀, considers event E to be more likely than event F". Moreover,


Definition 2.3 provides such a statement with an operational meaning since, for all c₁ < c₂, E > F is equivalent to an agreement to choose option {c₂ | E, c₁ | Eᶜ} in preference to option {c₂ | F, c₁ | Fᶜ}.

To complete our discussion of basic ideas and definitions, we need to consider one further important topic. Throughout this section, we have stressed that preferences, initially defined among options but inducing binary relations among consequences and events, are conditional on the current state of information. The initial state of information, taking as an arbitrary "origin" the first occasion on which an individual thinks systematically about the problem, has been denoted by M₀. Subsequently, however, we shall need to take into account further information, obtained by considering the occurrence of real-world events. Given the assumed occurrence of a possible event G, preferences between options will be described by a new binary relation ≤_G, taking into account both the initial information M₀ and the additional information provided by G. The obvious relation between ≤ and ≤_G is given by the following:

Definition 2.4. (Conditional preferences). For each G > ∅,
(i) a₁ ≤_G a₂ if and only if, for all options a, {a₁ | G, a | Gᶜ} ≤ {a₂ | G, a | Gᶜ};
(ii) E ≤_G F if and only if, for all c₁ <_G c₂, {c₂ | E, c₁ | Eᶜ} ≤_G {c₂ | F, c₁ | Fᶜ}.

The intuitive content of the definition is clear. If we do not prefer a₁ to a₂, given G, then this preference obviously carries over to any pair of options leading, respectively, to a₁ or a₂ if G occurs, and defined identically if Gᶜ occurs. Conversely, comparison of options which are identical if Gᶜ occurs depends entirely on consideration of what happens if G occurs. Naturally, the induced binary relations set out in Definition 2.2 have their obvious counterparts, denoted by ~_G and <_G. The induced binary relation between consequences is obviously defined by

c₁ ≤_G c₂ if and only if {c₁ | Ω} ≤_G {c₂ | Ω}.

However, when we come, in Section 2.3, to discuss the desirable properties of ≤ and ≤_G, we shall make formal assumptions which imply that, as one would expect, c₁ ≤_G c₂ if and only if c₁ ≤ c₂, so that preferences between pure consequences are not affected by additional information regarding the uncertain events in E. The definition of the conditional uncertainty relation ≤_G is a simple translation of Definition 2.3 to a conditional preference setting. The conditional uncertainty relation ≤_G induced between events is of fundamental importance. This relation, with its derived forms ~_G and <_G, provides the key to investigating the way in which uncertainties about events should be modified in the light of new information. Obviously, if G = Ω, all conditional relations reduce to their unconditional counterparts. Thus, it is only when ∅ < G < Ω that conditioning on G may yield new preference patterns.



2.3 COHERENCE AND QUANTIFICATION

2.3.1 Events, Options and Preferences

The formal representation of the decision-maker's "wider frame of discourse" includes an algebra of events E, a set of consequences C, and a set of options A, whose generic element has the form {cⱼ | Eⱼ, j ∈ J}, where {Eⱼ, j ∈ J} is a finite partition of the certain event Ω, Eⱼ ∈ E, cⱼ ∈ C, j ∈ J. The set A × A is equipped with a collection of binary relations ≤_G, G > ∅, representing the notion that one option is not preferred to another, given the assumed occurrence of a possible event G. In addition, all preferences are assumed conditional on an initial state of information, M₀, with the binary relation ≤ (i.e., ≤_Ω) representing the preference relation on A × A conditional on M₀ alone.

We now wish to make precise our assumptions about these elements of the formal representation of a decision problem. Bearing in mind the overall objective of developing a rational approach to choosing among options, our assumptions, presented in the form of a series of axioms, can be viewed as responses to the questions: "what rules should preference relations obey?" and "what events should be included in E?" Each formal axiom will be accompanied by a detailed discussion of the intuitive motivation underlying it.

It is important to recognise that the axioms we shall present are prescriptive, not descriptive. Thus, they do not purport to describe the ways in which individuals actually do behave in formulating problems or making choices, neither do they assert, on some presumed "ethical" basis, the ways in which individuals should behave. The axioms simply prescribe constraints which it seems to us imperative to acknowledge in those situations where an individual aspires to choose among alternatives in such a way as to avoid certain forms of behavioural inconsistency.

2.3.2 Coherent Preferences

We shall begin by assuming that problems represented within the formal framework are non-trivial and that we are able to compare any pair of simple dichotomised options.

Axiom 1. (Comparability of consequences and dichotomised options).
(i) There exist consequences c₁, c₂ such that c₁ < c₂.
(ii) For all consequences c₁, c₂ and events E, F, either {c₂ | E, c₁ | Eᶜ} ≤ {c₂ | F, c₁ | Fᶜ} or {c₂ | E, c₁ | Eᶜ} ≥ {c₂ | F, c₁ | Fᶜ}.

Discussion of Axiom 1. Condition (i) is very natural. If all consequences were equivalent, there would not be a decision problem in any real sense, since all choices would certainly lead to precisely equivalent outcomes. We have already noted that, in any given decision problem, C can be defined as simply the set of consequences required for that problem. Condition (ii) does not therefore assert that we should be able to compare any pair of conceivable options, however bizarre or fantastic. In most practical problems, there will typically be a high degree of similarity in the form of the consequences (e.g., all monetary), although it is easy to think of examples where this form is complex (e.g., combinations of monetary, health and industrial relations elements). We are trying to capture the essence of what is required for an orderly and systematic approach to comparing alternatives of genuine interest. We are not, at this stage, making the direct assumption that all options, however complex, can be compared. But there could be no possibility of an orderly and systematic approach if we were unwilling to express preferences among simple dichotomised options and hence (with E = F = Ω) among the consequences themselves. Condition (ii) is therefore to be interpreted in the following sense: if we aspire to make a rational choice between alternative options, then we must at least be willing to express preferences between simple dichotomised options.

There are certainly many situations where we find the task of comparing simple options, and even consequences, very difficult. Resource allocation among competing health care programmes involving different target populations and morbidity and mortality rates is one obvious such example. However, the difficulty of comparing options in such cases does not, of course, obviate the need for such comparisons if we are to aspire to responsible decision making.

We shall now state our assumptions about the ways in which preferences should fit together, or cohere, in terms of the order relation ≤ over A × A.

Axiom 2. (Transitivity of preferences).
(i) a ≤ a.
(ii) If a₁ ≤ a₂ and a₂ ≤ a₃ then a₁ ≤ a₃.

Discussion of Axiom 2. Condition (i) has obvious intuitive support. It would make little sense to assert that an option was strictly preferred to itself. It would also seem strangely perverse to claim to be unable to compare an option with itself! We note that, from Definition 2.2(i), if a ≤ a, then a ~ a. Condition (ii) requires preferences to be transitive. The intuitive basis for such a requirement is perhaps best illustrated by considering the consequences of intransitive preferences. Suppose, therefore, that we found ourselves expressing the preferences a₁ < a₂, a₂ < a₃ and a₃ < a₁ among three options a₁, a₂ and a₃. The assertion of strict preference rules out equivalence between any pair of the options, so that our


expressed preferences reveal that we perceive some actual difference in value (no matter how small) between the two options in each case. Let us now examine the behavioural implications of these expressed preferences. If we consider, for example, the preference a₁ < a₂, we are implicitly stating that there exists a price, say x, that we would be willing to pay in order to move from a position of having to accept option a₁ to one where we have, instead, to accept option a₂. Let y and z denote the corresponding prices for switching from a₂ to a₃ and from a₃ to a₁, respectively. Suppose now that we are confronted with the prospect of having to accept option a₁. By virtue of the expressed preference a₁ < a₂ and the above discussion, we are willing to pay x in order to exchange option a₁ for option a₂. But now, by virtue of the preference a₂ < a₃, we are willing to pay y in order to exchange a₂ for a₃. Repeating the argument once again, since a₃ < a₁ we are willing to pay z in order to avoid a₃ and have, instead, the prospect of option a₁. We would thus have paid x + y + z in order to find ourselves in precisely the same position as we started from! What is more, we could find ourselves arguing through this cycle over and over again. Willingness to act on the basis of intransitive preferences is thus seen to be equivalent to a willingness to suffer unnecessarily the certain loss of something to which one attaches positive value. We regard this as inherently inconsistent behaviour and recall that the purpose of the axioms is to impose rules of coherence on preference orderings that will exclude the possibility of such inconsistencies. Thus, Axiom 2(ii) is to be understood in the following sense: if we aspire to avoid expressing preferences whose behavioural implications are such as to lead us to the certain loss of something we value, then we must ensure that our preferences fit together in a transitive manner.
Our discussion of this axiom is, of course, informal and appeals to directly intuitive considerations. At this stage, it would therefore be inappropriate to become involved in a formal discussion of terms such as value and price. It is intuitively clear that if we assert strict preference there must be some amount of money (or grains of wheat, or beads, or whatever), however small, having a value less than the perceived difference in value between the two options. We should therefore be willing to pay this amount to switch from the less preferred to the more preferred option.
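The "money pump" just described is easily made concrete. The following is an illustrative Python sketch only, not part of the text; the exchange prices are hypothetical numbers.

```python
# Illustrative sketch only: the "money pump" implied by the intransitive strict
# preferences a1 < a2 < a3 < a1, with hypothetical exchange prices x, y, z > 0.
def money_pump(x, y, z, cycles=1):
    holding, paid = "a1", 0.0
    trades = {"a1": ("a2", x), "a2": ("a3", y), "a3": ("a1", z)}
    for _ in range(3 * cycles):          # three willing exchanges per cycle
        holding, price = trades[holding]
        paid += price
    return holding, paid

holding, paid = money_pump(x=1.0, y=2.0, z=3.0)
print(holding, paid)   # 'a1' 6.0: back where we started, x + y + z poorer
```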

The following consequences of Axiom 2 are easily established and will prove useful in our subsequent development.

Proposition 2.1. (Transitivity of uncertainties).
(i) E ≤ E.
(ii) E₁ ≤ E₂ and E₂ ≤ E₃ imply E₁ ≤ E₃.

Proof. This is immediate from Definition 2.3 and Axiom 2. ⊲

Proposition 2.2. (Derived transitive properties).
(i) If a₁ ~ a₂ and a₂ ~ a₃ then a₁ ~ a₃. If E₁ ~ E₂ and E₂ ~ E₃ then E₁ ~ E₃.
(ii) If a₁ < a₂ and a₂ ~ a₃ then a₁ < a₃. If E₁ < E₂ and E₂ ~ E₃ then E₁ < E₃.

Proof. To prove (i), let a₁ ~ a₂ and a₂ ~ a₃, so that, by Definition 2.2, a₁ ≤ a₂, a₂ ≤ a₁, a₂ ≤ a₃ and a₃ ≤ a₂. Then, by Axiom 2(ii), a₁ ≤ a₃ and a₃ ≤ a₁, and thus a₁ ~ a₃. A similar argument applies to events using Proposition 2.1. Again, part (ii) follows rather similarly. ⊲

Axiom 3. (Consistency of preferences).
(i) If c₁ ≤ c₂ then, for all G > ∅, c₁ ≤_G c₂.
(ii) If, for some c₁ < c₂, {c₂ | E, c₁ | Eᶜ} ≤ {c₂ | F, c₁ | Fᶜ}, then E ≤ F.
(iii) If, for some c and G > ∅, {a₁ | G, c | Gᶜ} ≤ {a₂ | G, c | Gᶜ}, then a₁ ≤_G a₂.

Discussion of Axiom 3. Condition (i) formalises the idea that preferences between pure consequences should not be affected by the acquisition of further information regarding the uncertain events in E. Conditions (ii) and (iii) ensure that Definitions 2.3 and 2.4 have operational content. Indeed, (ii) asserts that if we have {c₂ | E, c₁ | Eᶜ} ≤ {c₂ | F, c₁ | Fᶜ} for some c₁ < c₂ then we should have this preference for any c₁ < c₂. This formalises the intuitive idea that the stated preference should only depend on the "relative likelihood" of E and F and should not depend on the particular consequences used in constructing the options. Similarly, (iii) asserts that if we have the preference {a₁ | G, c | Gᶜ} ≤ {a₂ | G, c | Gᶜ} for some c then, given G, a₁ should not be preferred to a₂, so that, for any a, {a₁ | G, a | Gᶜ} ≤ {a₂ | G, a | Gᶜ}. This latter argument is a version of what might be called the sure-thing principle: if two situations are such that, whatever the outcome of the first, there is a preferable corresponding outcome of the second, then the second situation is preferable overall.

An important implication of Axiom 3 is that preferences between consequences are invariant under changes in the information "origin" regarding events in E.

Proposition 2.3. (Invariance of preferences between consequences). c₁ ≤ c₂ if and only if there exists G > ∅ such that c₁ ≤_G c₂.

Proof. If c₁ ≤ c₂ then, by Axiom 3(i), c₁ ≤_G c₂ for any event G. Conversely, by Definition 2.4(i), for any G > ∅, c₁ ≤_G c₂ implies that, for any option a, one has {c₁ | G, a | Gᶜ} ≤ {c₂ | G, a | Gᶜ}. Taking a = {c₁ | G, c₂ | Gᶜ}, this implies that {c₁ | G, c₂ | Gᶜ} ≤ {c₁ | ∅, c₂ | Ω}. If c₁ > c₂ this implies, by Axiom 3(ii), that G ≤ ∅, thus contradicting G > ∅. Hence, by Axiom 1(ii), c₁ ≤ c₂. ⊲


Another important consequence of Axiom 3 is that uncertainty orderings of events respect logical implications, in the sense that if E logically implies F, i.e., if E ⊆ F, then F cannot be considered less likely than E.

Proposition 2.4. (Monotonicity). If E ⊆ F then E ≤ F.


Proof. For any c₁ < c₂, define

a₁ = {c₂ | E, c₁ | Eᶜ} = {c₁ | F − E, {c₂ | E, c₁ | Eᶜ} | (F − E)ᶜ},
a₂ = {c₂ | F, c₁ | Fᶜ} = {c₂ | F − E, {c₂ | E, c₁ | Eᶜ} | (F − E)ᶜ}.

By Axiom 3(i), with G = F − E = F ∩ Eᶜ, a₁ ≤ a₂. It now follows immediately from Definition 2.3 that E ≤ F. ⊲

This last result is an example of how coherent qualitative comparisons of uncertain events in terms of the "not more likely" relation conform to intuitive requirements. It follows from Proposition 2.4 that, as one would expect, for any event E, ∅ ≤ E ≤ Ω. We shall mostly work, however, with "significant" events, for which this ordering is strict.

Definition 2.5. (Significant events). An event E is significant given G > ∅ if c₁ <_G c₂ implies that c₁ <_G {c₂ | E, c₁ | Eᶜ} <_G c₂. If G = Ω, we shall simply say that E is significant.

Intuitively, significant events given G are those operationally perceived by the decision-maker as "practically possible but not certain" given the information provided by G. Thus, given G > ∅ and assuming c₁ <_G c₂, if E is judged to be significant given G, one would strictly prefer the option {c₂ | E, c₁ | Eᶜ} to c₁ for sure, since it provides an additional perceived possibility of obtaining the more desirable consequence c₂. Similarly, one would strictly prefer c₂ for sure to the stated option.
Proposition 2.5. (Characterisation of significant events). An event E is significant given G > ∅ if and only if ∅ < E ∩ G < G. In particular, E is significant if and only if ∅ < E < Ω.
Proof. Using Definitions 2.4 and 2.5, if E is significant given G then, for all c₁ <_G c₂ and for any option a,

{c₁ | G, a | Gᶜ} < {c₂ | E ∩ G, c₁ | Eᶜ ∩ G, a | Gᶜ} < {c₂ | G, a | Gᶜ}.

Taking a = c₁, we have

c₁ = {c₂ | ∅, c₁ | Ω} < {c₂ | E ∩ G, c₁ | (E ∩ G)ᶜ} < {c₂ | G, c₁ | Gᶜ}

and hence, by Definition 2.3, ∅ < E ∩ G < G. Conversely, if ∅ < E ∩ G < G, then, for all c₁ < c₂,

{c₁ | G, c₁ | Gᶜ} < {{c₂ | E, c₁ | Eᶜ} | G, c₁ | Gᶜ} < {c₂ | G, c₁ | Gᶜ}

and hence, by Axiom 3(iii), c₁ <_G {c₂ | E, c₁ | Eᶜ} <_G c₂. If, in particular, G = Ω, then E is significant if and only if ∅ < E < Ω. ⊲

The operational essence of "learning from experience" is that a decision-maker's preferences may change in passing from one state of information to a new state brought about by the acquisition of further information regarding the occurrence of events in E, which leads to changes in assessments of uncertainty. There are, however, too many complex ways in which such changes in assessments can take place for us to be able to capture the idea in a simple form. On the other hand, the very special case in which preferences do not change is easy to describe in terms of the concepts thus far available to us.

Definition 2.6. (Pairwise independence of events). We say that E and F are (pairwise) independent, denoted by E ⊥ F, if, and only if, for all c, c₁, c₂,
(i) c ∘ {c₂ | E, c₁ | Eᶜ} if and only if c ∘_F {c₂ | E, c₁ | Eᶜ},
(ii) c ∘ {c₂ | F, c₁ | Fᶜ} if and only if c ∘_E {c₂ | F, c₁ | Fᶜ},
where ∘ is any one of the relations <, ~ or >.

The definition is given for the simple situation of preferences between pure consequences and dichotomised options. Since, by Proposition 2.3, preferences regarding pure consequences are unaffected by additional information, the condition stated captures, in an operational form, the notion that uncertainty judgements about E, say, are unaffected by the additional information F. We interpret E ⊥ F as "E is independent of F". An alternative characterisation will be given in Proposition 2.13.

2.3.3 Quantification

The notion of preference between options, formalised by the binary relation ≤, provides a qualitative basis for comparing options and, by extension, for comparing consequences and events. The coherence axioms (Axioms 1 to 3) then provide a minimal set of rules to ensure that qualitative comparisons based on ≤ cannot have intuitively undesirable implications. We shall now argue that this purely qualitative framework is inadequate for serious, systematic comparisons of options. An illuminating analogy can be drawn between ≤ and a number of qualitative relations in common use both in an everyday setting and in the physical sciences.


Consider, for example, the relations not heavier than, not longer than, not hotter than. It is abundantly clear that these cannot suffice, as they stand, as an adequate basis for the physical sciences. Instead, we need to introduce in each case some form of quantification by setting up a standard unit of measurement, such as the kilogram, the metre, or the centigrade interval, together with an (implicitly) continuous scale such as arbitrary decimal fractions of a kilogram, a metre, a centigrade interval. This enables us to assign a numerical value, representing weight, length, or temperature, to any given physical or chemical entity. This can be achieved by carrying out, implicitly or explicitly, a series of qualitative pairwise comparisons of the feature of interest with appropriately chosen points on the standard scale. For example, in quantifying the length of a stick, we place one end against the origin of a metre scale and then use a series of qualitative comparisons, based on not longer than (and derived relations, such as strictly longer than). If the stick is not longer than the scale mark of 2.5 metres, but is strictly longer than the scale mark of 2.4 metres, we might lazily report that the stick is 2.45 metres long. If we needed to, we could continue to make qualitative comparisons of this kind with finer subdivisions of the scale, thus extending the number of decimal places in our answer. The example is, of course, a trivial one, but the general point is extremely important. Precision, through quantification, is achieved by introducing some form of numerical standard into a context already equipped with a coherent qualitative ordering relation.

We shall regard it as essential to be able to aspire to some kind of quantitative precision in the context of comparing options. It is therefore necessary that we have available some form of standard options, whose definitions have close links with an easily understood numerical scale, and which will play a role analogous to the standard metre or standard kilogram. As a first step towards this, we make the following assumption about the algebra of events, E.
Axiom 4. (Existence of standard events). There exists a subalgebra S of E and a function μ : S → [0, 1] such that:
(i) S₁ ≤ S₂ if and only if μ(S₁) ≤ μ(S₂);
(ii) S₁ ∩ S₂ = ∅ implies that μ(S₁ ∪ S₂) = μ(S₁) + μ(S₂);
(iii) for any number α in [0, 1], and events E, F, there is a standard event S such that μ(S) = α, E ⊥ S and F ⊥ S;
(iv) S₁ ⊥ S₂ implies that μ(S₁ ∩ S₂) = μ(S₁)μ(S₂);
(v) if E ⊥ S, F ⊥ S and E ⊥ F, then E ~ S implies E ~_F S.

Discussion of Axiom 4. A family of events satisfying conditions (i) and (ii) is easily identified by imagining an idealised roulette wheel of unit circumference. We suppose that no point on the circumference is favoured as a resting place for the ball (considered as a point), in the sense that, given any c₁, c₂ and events S₁, S₂ corresponding to the ball landing within specified connected arcs, or finite unions and intersections of such arcs, {c₂ | S₁, c₁ | S₁ᶜ} and {c₂ | S₂, c₁ | S₂ᶜ} are considered equivalent if and only if μ(S₁) = μ(S₂), where μ is the function mapping the "arc-event" to its total length. Conditions (i) and (ii) are then intuitively obvious, as is the fact, in (iii), that for any α ∈ [0, 1] we can construct an S with μ(S) = α. Note that S is required to be an algebra and thus both ∅ and Ω are standard events. It follows from Proposition 2.4 and Axiom 4(i) that μ(∅) = 0 and μ(Ω) = 1. The remainder of (iii) is intuitively obvious; we note first that the basic idea of an idealised roulette wheel does assume that each "play" on such a wheel is "independent", in the sense of Definition 2.6, of any other events, including previous "plays" on the same wheel. Thus, for any events E, F in E, we can always think of an "independent" play which generates independent events S in S with μ(S) = α for any specified α in (0, 1]. In this extended setting, if we think of the circumferences for two independent plays as unravelled to form the sides of a unit square, with μ mapping events to the areas they define, condition (iv) is clearly satisfied. Finally, (v) encapsulates an obviously desirable consequence of independence; namely, that if E is independent of F and S, and F is independent of S, a judgement of equivalence between E and S should not be affected by the occurrence of F.

We will refer to S as a standard family of events in E and will think of E as the algebra generated by the relevant events in the decision problem together with the elements of S. Other forms of standard family satisfying (i) to (v) are easily imagined. For example, it is obvious that a roulette wheel of unit circumference could be imagined cut at some point and "unravelled" to form a unit interval. The underlying image would then be that of a point landing in the unit interval, and an event S such that μ(S) = p would denote a subinterval of length p; alternatively, we could imagine a point landing in the unit square, with S denoting a region of area p. The obvious intuitive content of conditions (i) to (v) can clearly be similarly motivated in these cases, the discussion for the unit interval being virtually identical to that given for the roulette wheel.

It is important to emphasise that we do not require the assumption that standard families of events actually, physically exist, or could be precisely constructed in accordance with conditions (i) to (v). We only require that we can invoke such a set-up as a mental image. There is, of course, an element of mathematical idealisation involved in thinking about all p ∈ [0, 1], rather than, for example, some subset of the rationals, corresponding to binary expansions consisting of zeros from some specified location onwards, reflecting the inherent limits of accuracy in any actual procedure for determining arc lengths or areas. The same is true, however, of all scientific discourse in which measurements are taken, in principle, to be real numbers, rather than a subset of the rationals chosen to reflect the limits of accuracy in the physical measurement procedure being employed. Our argument for accepting this degree of mathematical idealisation in setting up our formal system is the same as would apply in the physical sciences. Namely, that no serious conceptual distortion is introduced, while many irrelevant technical difficulties are avoided; in particular,


those concerning the non-closure of a set of numbers with respect to operations of interest. This argument is not universally accepted, however, and further, related discussion of the issue is provided in Section 2.8. Our view is that, from the perspective of the foundations of decision-making, the step from the finite to the infinite implicit in making use of real numbers is simply a pragmatic convenience, whereas the step from comparing a finite set of possibilities to comparing an infinite set has more substantive implications. We have emphasised this latter point by postponing infinite extensions of the decision framework until Chapter 3.
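The unit-interval image of the standard family lends itself to a direct illustration. The sketch below is ours, not the book's: standard events are modelled, hypothetically, as finite unions of disjoint subintervals of [0, 1), with μ the total-length function, so that conditions (i) and (ii) of Axiom 4 can be checked numerically.

```python
# Illustrative sketch only: the "unravelled roulette wheel". A standard event
# is modelled as a finite union of disjoint subintervals [a, b) of [0, 1);
# mu maps an event to its total length.
def mu(event):
    return sum(b - a for a, b in event)

S1 = [(0.0, 0.25)]                    # mu(S1) = 0.25
S2 = [(0.25, 0.5), (0.75, 1.0)]       # mu(S2) = 0.5, disjoint from S1

# Axiom 4(i): ordering of standard events goes by mu; 4(ii): additivity
assert mu(S1) <= mu(S2)
assert mu(S1 + S2) == mu(S1) + mu(S2) == 0.75
```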
Proposition 2.6. (Collections of disjoint standard events). For any finite collection {α₁, …, αₙ} of real numbers such that αᵢ > 0 and α₁ + ⋯ + αₙ ≤ 1, there exists a corresponding collection {S₁, …, Sₙ} of disjoint standard events such that μ(Sᵢ) = αᵢ, i = 1, …, n.

Proof. By Axiom 4(iii) there exists S₁ such that μ(S₁) = α₁. For 1 < j ≤ n, suppose inductively that S₁, …, Sⱼ₋₁ are disjoint, let Bⱼ = S₁ ∪ ⋯ ∪ Sⱼ₋₁ and define βⱼ = α₁ + ⋯ + αⱼ₋₁ = μ(Bⱼ). By Axiom 4(iii, iv), there exists Tⱼ in S such that μ(Bⱼ ∩ Tⱼ) = μ(Bⱼ){αⱼ/(1 − βⱼ)}. Define Sⱼ = Tⱼ ∩ Bⱼᶜ, so that Sⱼ ∩ Sᵢ = ∅, i = 1, …, j − 1. Then, Tⱼ = Sⱼ ∪ (Tⱼ ∩ Bⱼ) and hence, using Axiom 4(ii), μ(Tⱼ) = μ(Sⱼ) + μ(Tⱼ ∩ Bⱼ). Thus, μ(Sⱼ) = αⱼ/(1 − βⱼ) − αⱼβⱼ/(1 − βⱼ) = αⱼ, and the result follows. ⊲
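The arithmetic of this construction is easily traced numerically. The fragment below is an illustrative sketch only (ours, not the book's); it manipulates μ-values rather than events, computing βⱼ, μ(Tⱼ) and μ(Sⱼ) exactly as in the proof.

```python
# Illustrative sketch only: numerical trace of the construction in the proof
# of Proposition 2.6, working with mu-values rather than events.
def disjoint_standard_measures(alphas):       # requires sum(alphas) <= 1
    measures, beta = [], 0.0                  # beta_j = mu(B_j)
    for alpha in alphas:
        mu_T = alpha / (1 - beta)             # mu(T_j), T_j independent of B_j
        mu_S = mu_T - mu_T * beta             # mu(S_j) = mu(T_j) - mu(T_j ∩ B_j)
        measures.append(mu_S)                 # ... which equals alpha_j
        beta += alpha
    return measures

print(disjoint_standard_measures([0.2, 0.3, 0.4]))   # ~[0.2, 0.3, 0.4]
```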

Axiom 5. (Precise measurement of preferences and uncertainties).
(i) If c₁ ≤ c ≤ c₂, there exists a standard event S such that c ~ {c₂ | S, c₁ | Sᶜ}.
(ii) For each event E, there exists a standard event S such that E ~ S.

Discussion of Axiom 5. In the introduction to this section, we discussed the idea of precision through quantification and pointed out, using analogies with other measurement systems such as weight, length and temperature, that the process is based on successive comparisons with a standard. Let S_x denote a standard event such that μ(S_x) = x. We start with the obvious preferences, {c₂ | S₀, c₁ | S₀ᶜ} ≤ c ≤ {c₂ | S₁, c₁ | S₁ᶜ}, for any c₁ ≤ c ≤ c₂, and then begin to explore comparisons with standard options based on S_x, S_y with 0 < x < y < 1. In this way, by gradually increasing x away from 0 and decreasing y away from 1, we arrive at comparisons such as {c₂ | S_x, c₁ | S_xᶜ} ≤ c ≤ {c₂ | S_y, c₁ | S_yᶜ}, with the difference y − x becoming increasingly small. Intuitively, as we increase x, {c₂ | S_x, c₁ | S_xᶜ} becomes more and more attractive as an option, and as we decrease y, {c₂ | S_y, c₁ | S_yᶜ} becomes less attractive. Any given consequence c, such that c₁ ≤ c ≤ c₂, can therefore be sandwiched arbitrarily tightly and, in the limit, be judged equivalent to one of the standard options defined in terms of c₁, c₂. The essence of Axiom 5(i) is that we

can proceed to a common limit, approached from below by the successive values of x and from above by the successive values of y. The standard family of options is thus assumed to provide a continuous scale against which any consequence can be precisely compared.

Condition (ii) extends the idea of precise comparison to include the assumption that, for any event E and for all consequences c₁, c₂ such that c₁ < c₂, the option {c₂ | E, c₁ | Eᶜ} can be compared precisely with the family of standard options {c₂ | S_x, c₁ | S_xᶜ}, x ∈ [0, 1], defined by c₁ and c₂. The underlying idea is similar to that motivating condition (i). Indeed, given the intuitive content of the relation "not more likely than", we can begin with the obvious ordering {c₂ | S₀, c₁ | S₀ᶜ} ≤ {c₂ | E, c₁ | Eᶜ} ≤ {c₂ | S₁, c₁ | S₁ᶜ} for any event E, and then consider refinements of this of the form {c₂ | S_x, c₁ | S_xᶜ} ≤ {c₂ | E, c₁ | Eᶜ} ≤ {c₂ | S_y, c₁ | S_yᶜ}, with x increasing gradually from 0, y decreasing gradually from 1, and y − x becoming increasingly small, so that, in terms of the ordering of the events, S_x ≤ E ≤ S_y. Again, the essence of the axiom is that this "sandwiching" can be refined arbitrarily closely by an increasing sequence of x's and a decreasing sequence of y's tending to a common limit.

The preceding argument certainly again involves an element of mathematical idealisation. In practice, there might, in fact, be some interval of indifference, in the sense that we judge {c₂ | S_x, c₁ | S_xᶜ} ≤ c ≤ {c₂ | S_y, c₁ | S_yᶜ} for some (possibly rational) x and y but feel unable to express a more precise form of preference. This is analogous to the situation where a physical measuring instrument has inherent limits, enabling one to conclude that a reading is in the range 3.126 to 3.135, say, but not permitting a more precise statement. In this case, we would typically report the measurement to be 3.13 and proceed as if this were a precise measurement. We formulate the theory on the prescriptive assumption that we aspire to exact measurement (exact comparisons in our case), whilst acknowledging that, in practice, we have to make do with the best level of precision currently available (or devote some resources to improving our measuring instruments!).
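The sandwiching argument is, in effect, a bisection. The sketch below is a hypothetical illustration (ours, not the book's): given an oracle supplying the qualitative judgement "E is not more likely than S_x", repeated halving traps the common limit to any desired tolerance.

```python
# Illustrative sketch only: Axiom 5's "sandwiching" as a bisection against
# standard events S_x. The oracle returns the qualitative judgement E <= S_x.
def sandwich(not_more_likely_than_S, tol=1e-9):
    lo, hi = 0.0, 1.0                     # S_0 <= E <= S_1 always holds
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if not_more_likely_than_S(mid):   # E <= S_mid: shrink from above
            hi = mid
        else:                             # E > S_mid: shrink from below
            lo = mid
    return (lo + hi) / 2

# A decision-maker whose qualitative judgements cohere with a limit of 0.3:
print(round(sandwich(lambda x: x >= 0.3), 6))   # 0.3
```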

In the context of measuring beliefs, several authors have suggested that this imprecision be formally incorporated into the axiom system. For many applications, this would seem to be an unnecessary confusion of the prescriptive and the descriptive. Every physicist or chemist knows that there are inherent limits of accuracy in any given laboratory context but, so far as we know, no one has suggested developing the structures of theoretical physics or chemistry on the assumption that quantities appearing in fundamental equations should be constrained to take values in some subset of the rationals. However, it may well be that there are situations where imprecision in the context of comparing consequences is too basic and problematic a feature to be adequately dealt with by an approach based on theoretical precision, tempered with pragmatically acknowledged approximation. We shall return to this issue in Section 2.8.


The particular standard option to which c is judged equivalent will, of course, depend on c, but we have implicitly assumed that it does not depend on any information we might have concerning the occurrence of real-world events. Indeed, Proposition 2.3 implies that our attitudes or values regarding consequences are fixed throughout the analysis of any particular decision problem. It is intuitively obvious that, if the time-scale on which values change were not rather long compared with the time-scale within which individual problems are analysed, there would be little hope for rational analysis of any kind.

2.4 BELIEFS AND PROBABILITIES

2.4.1 Representation of Beliefs

It is clear that an individual's preferences among options in any decision problem should depend, at least in part, on the degrees of belief which that individual attaches to the uncertain events forming part of the definitions of the options. The principles of coherence and quantification by comparison with a standard, expressed in axiomatic form in the previous section, will enable us to give a formal definition of degree of belief, thus providing a numerical measure of the uncertainty attached to each event. The conceptual basis for this numerical measure will be seen to derive from the formal rules governing quantitative, coherent preferences, irrespective of the nature of the uncertain events under consideration. This is in vivid contrast to what are sometimes called the classical and frequency approaches to defining numerical measures of uncertainty (see Section 2.8), where the existence of symmetries and the possibility of indefinite replication, respectively, play fundamental roles in defining the concepts for restricted classes of events. We cannot emphasise strongly enough the important distinction between defining a general concept and evaluating a particular case. Our definition will depend only on the logical notions of quantitative, coherent preferences; our practical evaluations will often make use of perceived symmetries and observed frequencies. We begin by establishing some basic results concerning the uncertainty relation between events.

Proposition 2.7. (Complete comparability of events). Either E₁ > E₂, or E₁ ~ E₂, or E₂ > E₁.

Proof. By Axiom 5(ii), there exist S₁ and S₂ such that E₁ ~ S₁ and E₂ ~ S₂; the complete ordering now follows from Axiom 4(i) and Proposition 2.1. ⊲


We see from Proposition 2.7 that, although the order relation ≤ between options was not assumed to be complete (i.e., not all pairs of options were assumed to be comparable), it turns out, as a consequence of Axiom 5 (the axiom of precise measurement), that the uncertainty relation induced between events is complete. A similar result concerning the comparability of all options will be established in Section 2.5.
Proposition 2.8. (Additivity of uncertainty relations). If A ≤ B, C ≤ D and A ∩ C = B ∩ D = ∅, then A ∪ C ≤ B ∪ D. Moreover, if A < B or C < D, then A ∪ C < B ∪ D.

Proof. We first show that, for any G, if A ∩ G = B ∩ G = ∅ then A ≤ B if and only if A ∪ G ≤ B ∪ G. For any c₂ > c₁ and A ∩ G = B ∩ G = ∅, define

a₁ = {c₂ | A, c₁ | Aᶜ}, a₂ = {c₂ | B, c₁ | Bᶜ},
a₃ = {c₂ | A ∪ G, c₁ | (A ∪ G)ᶜ}, a₄ = {c₂ | B ∪ G, c₁ | (B ∪ G)ᶜ}.

Then, by Definition 2.3, A ≤ B if and only if a₁ ≤ a₂, and A ∪ G ≤ B ∪ G if and only if a₃ ≤ a₄; by Axiom 3, a₁ ≤ a₂ if and only if a₃ ≤ a₄. Thus, using again Definition 2.3,

A ∪ C = A ∪ (C − B) ∪ (C ∩ B) ≤ D ∪ (B − C) ∪ (C ∩ B) = B ∪ D.

The final statement follows from essentially the same argument. ⊲

We now make the key definition which enables us to move to a quantitative notion of degree of belief.

Definition 2.7. (Measure of degree of belief). Given an uncertainty relation ≤, the probability P(E) of an event E is the real number μ(S) associated with any standard event S such that E ~ S.

This definition provides a natural, operational extension of the qualitative uncertainty relation encapsulated in Definition 2.3, by linking the equivalence of any E ∈ E to some S ∈ S and exploiting the fact that the nature of the construction of S provides a direct, obvious quantification of the uncertainty regarding S. With our operational definition, the meaning of a probability statement is clear. For instance, the statement P(E) = 0.5 precisely means that E is judged to be equally likely as a standard event of 'measure' 0.5, maybe a conceptual perfect coin falling heads, or a computer-generated 'random' integer being an odd number.


It should be emphasised that, according to Definition 2.7, probabilities are always personal degrees of belief, in that they are a numerical representation of the decision-maker's personal uncertainty relation ≤ between events. Moreover, probabilities are always conditional on the information currently available. It makes no sense, within the framework we are discussing, to qualify the word probability with adjectives such as "objective", "correct" or "unconditional".
Since probabilities are obviously conditional on the initial state of information M₀, a more precise and revealing notation in Definition 2.7 would have been P(E | M₀). In order to avoid cumbersome notation, we shall stick to the shorter version, but the implicit conditioning on M₀ should always be borne in mind.

Proposition 2.9. (Existence and uniqueness). Given an uncertainty relation ≤, there exists a unique probability P(E) associated with each event E.

Proof. Existence follows from Axiom 5(ii). For uniqueness, if E ~ S₁ and E ~ S₂, then by Proposition 2.2(ii), S₁ ~ S₂. The result now follows from Axiom 4(i). ⊲

Definition 2.8. (Compatibility). A function f : E → ℜ is said to be compatible with an order relation ≤ on E × E if, for all events E and F, E ≤ F if and only if f(E) ≤ f(F).

Proposition 2.10. (Compatibility of probability and degrees of belief). The probability function P(·) is compatible with the uncertainty relation ≤.

Proof. By Axiom 5(ii) there exist standard events S₁ and S₂ such that E ~ S₁ and F ~ S₂. Then, by Proposition 2.2(ii), E ≤ F iff S₁ ≤ S₂ and hence, by Axiom 4(i), iff μ(S₁) ≤ μ(S₂). The result follows from Definition 2.7. ⊲

The following proposition is of fundamental importance. It establishes that coherent, quantitative degrees of belief have the structure of a finitely additive probability measure over E. Moreover, it establishes that significant events, i.e., events which are practically possible but not certain, should be assigned probability values in the open interval (0, 1).

Proposition 2.11. (Probability structure of degrees of belief).
(i) P(∅) = 0 and P(Ω) = 1.
(ii) If E ∩ F = ∅, then P(E ∪ F) = P(E) + P(F).
(iii) E is significant if, and only if, 0 < P(E) < 1.


Proof. (i) By Definition 2.7, 0 ≤ P(E) ≤ 1. Moreover, by Axiom 4(iii) there exist S_* and S* such that μ(S_*) = 0 and μ(S*) = 1. By Proposition 2.4, ∅ ≤ S_* and, by Proposition 2.10, P(∅) ≤ 0; hence, P(∅) = 0; similarly, S* ≤ Ω implies that P(Ω) = 1.
(ii) If E = ∅ or F = ∅, or both, the result is trivially true. If E > ∅ and F > ∅, then, by Proposition 2.8, E ∪ F > E; thus, if α = P(E) and β = P(E ∪ F), we have α < β and, by Proposition 2.6, there exist events S₁, S₂ such that S₁ ∩ S₂ = ∅, P(S₁) = α and P(S₂) = β − α. By Proposition 2.7, F > S₂ or F ~ S₂ or F < S₂. If F > S₂, then, by Proposition 2.8, E ∪ F > S₁ ∪ S₂ and hence P(E ∪ F) > β, which is impossible; similarly, if F < S₂ then E ∪ F < S₁ ∪ S₂ and P(E ∪ F) < β which, again, is impossible. Hence, F ~ S₂ and therefore P(F) = β − α, so that P(E ∪ F) = P(E) + P(F), as stated.
(iii) By Proposition 2.5, E is significant iff ∅ < E < Ω. The result then follows immediately from Proposition 2.10. ⊲

Corollary. (Finitely additive structure of degrees of belief).
(i) If {Eⱼ, j ∈ J} is a finite collection of disjoint events, then

P(∪ⱼ Eⱼ) = Σⱼ P(Eⱼ).

(ii) For any event E, P(Eᶜ) = 1 − P(E).

Proof. The first part follows by induction from Proposition 2.11(ii); the second part is a special case of (i) since, if ∪ⱼ Eⱼ = Ω, then, by Proposition 2.11(i), Σⱼ P(Eⱼ) = 1. ⊲

Proposition 2.11 is crucial. It establishes formally that coherent, quantitative measures of uncertainty about events must take the form of probabilities, therefore justifying the nomenclature adopted in Definition 2.7 for this measure of degree of belief. In short, coherent degrees of belief are probabilities. It will often be convenient for us to use probability terminology, without explicit reference to the fact that the mathematical structure is merely serving as a representation of (personal) degrees of belief. The latter fact should, however, be constantly borne in mind.

Definition 2.9. (Probability distribution). If {Eⱼ, j ∈ J} forms a finite partition of Ω, with P(Eⱼ) = pⱼ, j ∈ J, then {pⱼ, j ∈ J} is said to be a probability distribution over the partition.


This terminology will prove useful in later discussions. The idea is that total belief (in Ω, having measure 1) is distributed among the events of the partition, {Eⱼ, j ∈ J}, according to the relative degrees of belief {pⱼ, j ∈ J}, with Σⱼ pⱼ = Σⱼ P(Eⱼ) = 1. Starting from the qualitative ordering among events, we have derived a quantitative measure, P(·) ≡ P(· | M₀), over E and shown that, expressed in conventional mathematical terminology, it has the form of a finitely additive probability measure, compatible with the qualitative ordering ≤. We now establish that this is the only probability measure over E compatible with ≤.

Proposition 2.12. (Uniqueness of the probability measure). P is the only probability measure compatible with the uncertainty relation ≤.

Proof. If P′ were another compatible measure, then by Definition 2.8 we would always have P′(E) ≤ P′(F) if and only if P(E) ≤ P(F); hence, there exists a monotonic function f of [0, 1] into itself such that P′(E) = f{P(E)}. By Proposition 2.6, for all non-negative α, β such that α + β ≤ 1, there exist disjoint standard events S₁ and S₂ such that P(S₁) = α and P(S₂) = β. Hence, by Axiom 4(ii), f(α + β) = P′(S₁ ∪ S₂) = P′(S₁) + P′(S₂) = f(α) + f(β) and so (Eichhorn, 1978, Theorem 2.63), f(α) = kα for all α in [0, 1]. But, since P′ is a probability measure, P′(Ω) = 1; hence k = 1, so that we have P′(E) = P(E) for all E. ⊲

We shall now establish that our operational definition of (pairwise) independence of events is compatible with its more standard, ad hoc, product definition.

Proposition 2.13. (Characterisation of independence). E ⊥ F if and only if P(E ∩ F) = P(E)P(F).


Proof. Suppose E ⊥ F. By Axiom 4(iii), there exists S₁ such that P(S₁) = P(E), E ⊥ S₁ and F ⊥ S₁. Hence, by Axiom 4(v), E ~_F S₁, so that, for any consequences c₁ < c₂, and any option a,

{c₂ | E ∩ F, c₁ | Eᶜ ∩ F, a | Fᶜ} ~ {c₂ | S₁ ∩ F, c₁ | S₁ᶜ ∩ F, a | Fᶜ}.

Taking a = c₁, we have

{c₂ | E ∩ F, c₁ | (E ∩ F)ᶜ} ~ {c₂ | S₁ ∩ F, c₁ | (S₁ ∩ F)ᶜ},

so that E ∩ F ~ S₁ ∩ F. Again by Axiom 4(iii), given F, S₁, there exists S₂ such that P(S₂) = P(F), F ⊥ S₂ and S₁ ⊥ S₂. Hence, by an identical argument to the above, and noting from Definition 2.6 the symmetry of ⊥, we have

S₁ ∩ F ~ S₁ ∩ S₂.

By Propositions 2.1, 2.10, and Axiom 4(iv),

P(E ∩ F) = P(S₁ ∩ S₂) = μ(S₁)μ(S₂),

and hence P(E ∩ F) = P(E)P(F).

Suppose P(E ∩ F) = P(E)P(F). By Axiom 4(iii), there exists S such that P(S) = P(F) and F ⊥ S, E ⊥ S. Hence, by the first part of the proof,

P(E ∩ S) = P(E)P(S) = P(E)P(F) = P(E ∩ F),

so that E ∩ F ~ E ∩ S. Now suppose, without loss of generality, that c ≤ {c₂ | E, c₁ | Eᶜ}. Then, by Definition 2.6,

{c | S, c₁ | Sᶜ} ≤ {c₂ | E ∩ S, c₁ | (E ∩ S)ᶜ}.

But

{c | S, c₁ | Sᶜ} ~ {c | F, c₁ | Fᶜ} and {c₂ | E ∩ S, c₁ | (E ∩ S)ᶜ} ~ {c₂ | E ∩ F, c₁ | (E ∩ F)ᶜ};

hence, by Proposition 2.2,

{c | F, c₁ | Fᶜ} ≤ {c₂ | E ∩ F, c₁ | (E ∩ F)ᶜ},

so that c ≤_F {c₂ | E, c₁ | Eᶜ}. A similar argument can obviously be given reversing the roles of E and F, hence establishing that E ⊥ F. ⊲
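The product characterisation is easy to verify in a small finite model. The sketch below is ours, with hypothetical numbers; it simply checks that, for the chosen degrees of belief over the four atoms generated by E and F, the condition of Proposition 2.13 holds.

```python
# Illustrative sketch only: Proposition 2.13 in a finite model with
# hypothetical degrees of belief over the four atoms generated by E and F.
p = {("E", "F"): 0.12, ("E", "Fc"): 0.18, ("Ec", "F"): 0.28, ("Ec", "Fc"): 0.42}

P_E  = p[("E", "F")] + p[("E", "Fc")]     # P(E) = 0.30
P_F  = p[("E", "F")] + p[("Ec", "F")]     # P(F) = 0.40
P_EF = p[("E", "F")]                      # P(E ∩ F) = 0.12

# E ⊥ F if and only if the product rule holds:
assert abs(P_EF - P_E * P_F) < 1e-12
```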

2.4.2 Revision of Beliefs and Bayes' Theorem

The assumed occurrence of a real-world event will typically modify preferences between options by modifying the degrees of belief attached, by an individual, to the events defining the options. In this section, we use the assumptions of Section 2.3 in order to identify the precise way in which coherent modification of initial beliefs should proceed.

The starting point for analysing order relations between events, given the assumed occurrence of a possible event G, is the uncertainty relation ≤_G defined between events. Given the assumed occurrence of G > ∅, the ordering ≤ between acts is replaced by ≤_G. Analogues of Propositions 2.1 and 2.2 are trivially established and we recall (Proposition 2.3) that, for any G > ∅, c₂ ≤ c₁ iff c₂ ≤_G c₁.

Proposition 2.14. (Properties of conditional beliefs).
(i) E ≤_G F if and only if E ∩ G ≤ F ∩ G.
(ii) If there exist c₁ < c₂ such that {c₂ | E, c₁ | Eᶜ} ≤_G {c₂ | F, c₁ | Fᶜ}, then E ≤_G F.

Proof. By Definition 2.4 and Proposition 2.3, E ≤_G F iff, for all c₂ > c₁, {c₂ | E, c₁ | Eᶜ} ≤_G {c₂ | F, c₁ | Fᶜ}, i.e., if, and only if, for all a,

{c₂ | E ∩ G, c₁ | Eᶜ ∩ G, a | Gᶜ} ≤ {c₂ | F ∩ G, c₁ | Fᶜ ∩ G, a | Gᶜ}.

Taking a = c₁,

{c₂ | E ∩ G, c₁ | (E ∩ G)ᶜ} ≤ {c₂ | F ∩ G, c₁ | (F ∩ G)ᶜ},

and this is true iff E ∩ G ≤ F ∩ G. Moreover, if there exist c₂ > c₁ such that {c₂ | E, c₁ | Eᶜ} ≤_G {c₂ | F, c₁ | Fᶜ} then, by Definition 2.4, with a = c₁,

{c₂ | E ∩ G, c₁ | Eᶜ ∩ G, c₁ | Gᶜ} ≤ {c₂ | F ∩ G, c₁ | Fᶜ ∩ G, c₁ | Gᶜ},

so that

{c₂ | E ∩ G, c₁ | (E ∩ G)ᶜ} ≤ {c₂ | F ∩ G, c₁ | (F ∩ G)ᶜ}

and the result follows from Axiom 3(ii) and part (i) of this proposition. ⊲

Definition 2.10. (Conditional measure of degree of belief). Given a conditional uncertainty relation ≤_G, G > ∅, the conditional probability P(E | G) of an event E, given the assumed occurrence of G, is the real number μ(S) such that E ~_G S, where S is a standard event independent of G.

Generalising the idea encapsulated in Definition 2.7, P(E | G) provides a quantitative, operational measure of the uncertainty attached to E given the assumed occurrence of the event G. The following fundamental result provides the key to the process of revising beliefs in a coherent manner in the light of new information. It relates the conditional measure of degree of belief P(· | G) to the initial measure of degree of belief P(·).
We have, of course, already stressed that all degrees of belief are conditional. The intention of the terminology used above is to emphasise the additional conditioning resulting from the occurrence of G; the initial state of information, M₀, is always present as a conditioning factor, although omitted throughout for notational convenience.

Proposition 2.15. (Conditional probability). For any G > ∅,

P(E | G) = P(E ∩ G)/P(G).

Proof. By Axiom 4(iii) and Proposition 2.13, there exists S ⊥ G such that μ(S) = P(E ∩ G)/P(G). By Proposition 2.13,

P(S ∩ G) = P(S)P(G) = μ(S)P(G) = P(E ∩ G).

Thus, by Proposition 2.10, S ∩ G ~ E ∩ G and, by Proposition 2.14, S ~_G E. Thus, by Definition 2.10, P(E | G) = μ(S) = P(E ∩ G)/P(G). ⊲


Note that, in our formulation, P(E | G) = P(E ∩ G)/P(G) is a logical derivation from the axioms, not an ad hoc definition. In fact, this is the simplest version of Bayes' theorem. An extended form is given later in Proposition 2.19.
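In a finite model the content of Proposition 2.15 is immediate to check. The following sketch is illustrative only (the atoms and numbers are hypothetical): conditioning on G restricts the measure to G and renormalises by P(G).

```python
# Illustrative sketch only: Proposition 2.15 on a finite space with
# hypothetical atoms and degrees of belief.
atoms = {"w1": 0.1, "w2": 0.2, "w3": 0.3, "w4": 0.4}

def P(event):                              # event: a set of atoms
    return sum(p for w, p in atoms.items() if w in event)

E, G = {"w1", "w2"}, {"w2", "w3"}
P_E_given_G = P(E & G) / P(G)              # = 0.2 / 0.5 = 0.4
print(P_E_given_G)
```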

Proposition 2.16. (Compatibility of conditional probability and conditional degrees of belief). E ≤_G F if and only if P(E | G) ≤ P(F | G).

Proof. By Proposition 2.14(i), E ≤_G F iff E ∩ G ≤ F ∩ G, which, by Proposition 2.10, holds if and only if P(E ∩ G) ≤ P(F ∩ G); the result now follows from Proposition 2.15. ⊲

We now extend Proposition 2.11 to degrees of belief conditional on the occurrence of significant events.

Proposition 2.17. (Probability structure of conditional degrees of belief). For any event G > ∅,
(i) P(∅ | G) = 0 ≤ P(E | G) ≤ P(Ω | G) = 1;
(ii) if E ∩ F ∩ G = ∅, then P(E ∪ F | G) = P(E | G) + P(F | G);
(iii) E is significant given G if and only if 0 < P(E | G) < 1.

Proof. By Proposition 2.15, P(E | G) ≥ 0 and P(∅ | G) = 0; moreover, since E ∩ G ≤ G, Proposition 2.10 implies that P(E ∩ G) ≤ P(G), so that, by Proposition 2.15, P(E | G) ≤ 1. Finally, Ω ∩ G = G, so that, using again Proposition 2.15, P(Ω | G) = 1. By Proposition 2.15,

P(E ∪ F | G) = P((E ∪ F) ∩ G)/P(G) = {P(E ∩ G) + P(F ∩ G)}/P(G) = P(E | G) + P(F | G),

since (E ∩ G) ∩ (F ∩ G) = E ∩ F ∩ G = ∅. Finally, by Proposition 2.5, E is significant given G iff ∅ < E ∩ G < G. Thus, by Proposition 2.10, E is significant given G iff 0 < P(E ∩ G) < P(G). The result follows from Proposition 2.15. ⊲

Corollary. (Finitely additive structure of conditional degrees of belief). For all G > ∅,
(i) if {Eⱼ ∩ G, j ∈ J} is a finite collection of disjoint events, then

P(∪ⱼ Eⱼ | G) = Σⱼ P(Eⱼ | G);

(ii) for any event E, P(Eᶜ | G) = 1 − P(E | G).

Proof. This parallels the proof of the Corollary to Proposition 2.11. ⊲


Proposition 2.18. (Uniqueness of the conditional probability measure). P(· | G) is the only probability measure compatible with the conditional uncertainty relation ≤_G.

Proof. This parallels the proof of Proposition 2.12. ⊲

Example 2.1. (Simpson's paradox). The following example provides an instructive illustration of the way in which the formalism of conditional probabilities provides a coherent resolution of an otherwise seemingly paradoxical situation.

Suppose that the results of a clinical trial involving 800 sick patients are as shown in Table 2.1, where T, Tᶜ denote, respectively, that patients did or did not receive a certain treatment, and R, Rᶜ denote, respectively, that the patients did or did not recover.

Table 2.1 Trial results for all patients

      R     Rᶜ    Total   Recovery rate
T    200   200     400    50%
Tᶜ   160   240     400    40%

Intuitively, it seems clear that the treatment is beneficial, and were one to base probability judgements on these reported figures, it would seem reasonable to specify

P(RI T ) = 0.5,

P ( R I T")= 0.4,

where recovery and the receipt of treatment by individualsare now represented, in an obvious notation, as events. Suppose now, however, that one became aware of the trial outcomes for male and female patients separately. and that these have the summary forms described in Tables 2.2 and 2.3.

Table 2.2 Trial resultsfor male patients

R
T T
180 70

Rr
120 30

Total 300
100

Recoveryrate

6Q%
70%

The results surely seem paradoxical. Tables 2.2 and 2.3 tell us that the treatment is neither beneficial for males nor for females; but Table 2. I tells us that overall it is beneficial! How are we to come to a coherent view in the light of this apparently conflicting evidence?

42
Table 2.3 Trial results for femule purienrs

2 Foicndations

R
T T
20 90

R
80
210

Torul

Recoveryrare

100 300

20% 30%

The seeming paradox is easily resolved by an appeal to the logic of probability which. after all, we have just demonstrated to be the prerequisite for the coherent treatment of uncertainty. With Af. Al' denoting, respectively, the events that a patient is either male or female. were one to base probability judgements on the figures reported in Tables 2.2 and 2.3. it would seem reasonable to specify
P ( R I XI r T ) = 0.L !

P ( RI :\I n T ) = 0.7 '

P ( R lw n r )= 0.2.

P ( RI w n T ) '

= 0.3.

To see that these judgements do indeed cohere with those based on Table 2. I . we note. from the Corollary to Proposition 2. I 1, Proposition 2. IS and the Corollary to Proposition 2.17.
that

P ( R ( T )= P ( R l ; U n T ) P ( , Z f I T ~ + P ( R i A I ' n 7 ) P ( A f ' l T )
P ( R I T )= P ( R ( , z ~ ~ T ' ) P ( A ~ ~ R ~ ) ~ T ) P ( M T ) . +P( T' M' I
where

P ( M I T ) = 0.75.

P ( AI 'r = 0.25. ~

The probability formalism reveals that the seeming paradox has arisen from the confounding of sex with treatment as a consequence of the unbalanced trial design. See Simpson ( I 95 I ). Blyth (1972, 1973) and Lindley and Novick (1981) for further discussion.

Proposition 2.19. (Bayed theorem). For m y finite purtition { E l ..j E J } o 9 and G > 0, f

Proof: By Proposition 2.15,

The result now follows from the Corollary to Proposition 2.1 I when applied to C = u,(C n E,). a

2.4 Beliefs and Probubilities

43

Bayes theorem is a simple mathematical consequence of the fact that quantitative coherence implies that degrees of belief should obey the rules of probability. From another point of view, it may also be established (Zellner, 1988b)that, under some reasonable desiderata, Bayes theorem is an optimal information processing system. Since the { EJ.j E J } form a partition and hence, by the Corollary to Proposition 2.17. P(EJI G) = 1. Bayes theorem may be written in the form

c,

since the missing proportionality constant is [P(G)]-- [CJP(G E J ) P ( E J ) ] - , = I and thus it is always possible to normalise the products by dividing by their sum. This form of the theorem is often very useful in applications. Bayes theorem acquires a particular significance in the case where the uncertain events { EJ,j E J}comespond to an exclusive and exhaustive set of hypotheses about some aspect of the world (for example, in a medical context, the set of possible diseases from which a patient may be suffering) and the event G corresponds to a relevant piece of evidence.or data (for example, the outcome of a clinical test). If we adopt the more suggestive notation, EJ = H, .J E J , G = D, and, as usual. we omit , explicit notational reference to the initial state of information Arc, Proposition 2.17 IeadstoBayestheoremintheformP(H, ID)= P(DI H , ) P ( H , ) / P ( D ) , E J, j where P ( D ) = C,P(D 1 H , ) P ( H J ) ,characterizing the way in which initial beliefs about the hypotheses, P ( H , ) , j E ,I. are modified by the data, D ,into a j revised set of beliefs, P ( H , I D), E *J. This process is seen to depend crucially on the specification of the quantities P( D 1 HJ).j E J, which reflect how beliefs about obtaining the given data, D,vary over the different underlying hypotheses, thus defining the relative likelihoods of the latter. The four elements, P(H,), P(D I H,), P( HJ I D)and P( D). occur, in various guises, throughout Bayesian statistics and it is convenient to have a standard terminology available.

Definition 2.1 1. (Prior,posterior, and predictive probabilities). If { HJ, I E . I } are exclusive and exhaustive events (hypotheses),thenfor any event (data)D, (i) P(H,), j E .I,ure culled thepriorprobabilities of the H J . j E J; (ii) P( D I HJ), E J , are called the likelihoods o the H,, E J , given D; j f j (iii) P( HJ I D ) , E .I, are calledtheposCeriorprobabisof the H,, J E J;
(iv) P( D )is called the predictive probability o D implied by the likelihoods f

and the prior probabilities.


It is important to realise that the terms prior and posterior only have significancegiven an initial state of information and relative to an additional piece of information. Thus, P( HJ), which could be more properly be written as P( HJ I .!f),

44

2 Foundations

represents beliefs prior to conditioning on data D, but posterior to conditioning on whatever history led to the state of information described by ill^. Similarly. P(H, I D), or, more properly, P ( H , I AfO n D ) , represents beliefs posterior to conditioning on X and D, but prior to conditioning on any further data which l o may be obtained subsequent to D. The predictive probability P ( D ) ,logically implied by the likelihoods and the prior probabilities, provides a basis for assessing the compatibility of the data D with our beliefs (see Box, 1980). We shall consider this in more detail in Chapter 6.
Example 2.2. (Medical diagnosis). In simple problems of medical diagnosih. Bayes' theorem often provides a particularly illuminating form of analysis of the various uncertainties involved. For simplicity, let us consider the situation where a patient may be characteribed as belonging either to state H I ,or to state H., representing the presence or absence. respectively, of a specified disease. Let us further suppose that I'(ff,) represents rhe prrvtrlrwe rote of the disease in the population to which the patient is assumed to belong. and that further information is available in the form of the result o f a single clinical test. whose outcome is either positive (suggesting the presence of the disease and denoted by D = 7'). or negative (suggesting the absence of the disease and denoted by D = T ).
I .0

0.6

0.2

The quantities P(T H I ) and f'(T I H 2 ) represent the !rue positive and true negnriw rates of the clinical test (often referred to as the test sensitivity and tehi spec$ficity. respectively) and the systematic usc of Bayes' theorem then enables us to understand the manner i n which these characteristics of the test combine with the prevalence rate to produce varying degrees of diagnostic discriminatory power. In particular. for a given clinical test of known sensitivity and specificity. we can investigate the range of underlying prevalence rates lor which the test has worthwhile diagnostic value.

2.4 Beliefs and Probabilities

45

As an illustration of this process. let us consider the assessment of the diagnostic value of stress thallium-201 scintigraphy, a technique involving analysis of Gamma camera image data as an indicator of coronary heart disease. On the basis of a controlled experimental study, Murray er ul. (198 I ) concluded that P ( T I HI) = 0.900, P ( T 1 H2)= 0.875 were reasonable orders of magnitude for the sensitivity and specificity of the test. Insight into the diagnostic value of the test can he obtained by plotting values of P( H I1 T), P( HI I T ) against P( H I). where

for D = Tor D = T ,as shown in Figure 2.2. As a single, overall measure of the discriminatory power of the test, one may consider the difference P ( H l I T ) - P ( H I I T). In cases where P ( H l )has very low or very high values (e.g. for large population screening or following individual patient referral on the basis of suspected coronary disease, respectively), there is limited diagnostic value in the test. However. in clinical situations where there is considerable uncertainty about the presence of coronary heart disease. for example, 0.25 <_ P ( H ,) 5 0.75, the test may be expected to provide valuable diagnostic information. One further point about the terms prior and posterior is worth emphasising. Theyare not necessarily to be interpretedin a chronological sense, with the assump tion that prior beliefs are specified first and then later modified into posterior beliefs. Propositions 2.15 and 2.17 do not involve any such chronological notions. They merely indicate that, for coherence, specifications of degrees of belief must satisfy the given relationships. Thus, for example, in Proposition 2.15 one might first specify P ( G ) and P ( E 1 G) and then use the relationship stated in the theorem to arrive at coherent specification of P(E n G). In any given situation, the particular order in which we specify degrees of belief and check their coherence is a pragmatic one; thus, some assessments seem straightforward and we feel comfortable in making them directly, while we are less sure about other assessments and need to approach them indirectly via the relationships implied by coherence. It is true that the natural order of assessment does coincide with the chronological order in a number of practical applications, but it is important to realise that this is a pragmatic issue and not a requirement of the theory.

2.4.3

Conditional Independence

An important special case of Proposition 2.15 arises when E and G are such that P(E 1 G) = P ( E ) ,so that beliefs about E are unchanged by the assumed Occurrence of G. Not surprisingly, this is directly related to our earlier operational definition of (pairwise) independence.

Proposition 2.20. For ull F > 8, E L F

P ( E I F )= P ( E ) .

46

2 Foundations

froofi E L F P ( E n F) = P ( L ) P ( F )and, by Proposition 2.15. we haveP(EnF) = P(EIF)P(b'). a

In the case of three events, E. f and C;. the situation is somewhat more ' complicated in that, from an intuitive point of view. we would regard our degree of belief for E as being "independent" of knowledge of F and G if and only if P( E I H )= P( E ) , for any of the four possible forms of I!. { F n G . F'nC:. l'nG'. f" nG'}.
describing the combined Occurrences. or otherwise, of F and G (and. of course. similar conditions must hold for the "independence" of F from E and C;. and of G from E and F ) . These considerations motivate the following formal definition. which generalises Definition 2.6 and can be shown (see e.g. Feller, 1950/1968, pp. 125-128) to be necessary and sufficient for encapsulating, in the general case. the intuitive conditions discussed above.

Definition 2.12. (Mutuul independence). Events { E , . j E .I} ure suid to be t n u t i d l ~ independent $ f o r any I

C .J.

An important consequence of the fact that coherent degrees of belief combine i n conformity with the rules of (finitely additive) mathematical probability theory is that the task of specifying degrees of belief for complex combinations of events is often greatly simplified. Instead of being forced into a direct specification, we can attempt to represent the complex event in terms of simpler events. for which we feel more comfortable in specifying degrees of belief. The latter are then recombined, using the probability rules, to obtain the desired specification for the complex event. Definition 2. I2 makes clear that the judgement of independence for a collection of events leads to considerable additional simplification when complex intersections ofevents are to be considered. Note that Proposition 2.20 derives from the uncertainty relation 5 1- and therefore reflects an inherently personal judgement (although coherence may rule out some events from being judged independent: for : example, any E. E' such that c? c I' C F c I?). There i s a sense. however. in which the judgement of independence (given for large classes of events of interest reflects a rather extreme form of belief. in that scope for learning from experience is very much reduced. This motivates consideration of the following weaker form of independence judgement.
J I l f j )

2.4 Beliefs and Probabilities

47

Definition 2.13. (Conditional independence). The events { EJ,j E J } are said to be conditionally independent given G > 0 if,for any I C_ J ,

For any subalgebra 3o &, the events { EJ,j E .I}are said to be conditionally f independent given F i f and only i f they are conditionally independent given any G > (b in 3.
Definitions 2. I2 and 2.13 could, of course, have been stated in primitive terms of choices among options, as in Definition 2.6. However, having seen in detail the way in which the latter leads to the standard product definition, it will be clear that a similar equivalence holds in these more general cases, but that the algebraic manipulations involved are somewhat more tedious.

The form of degree of belief judgement encapsulated in Definition 2.13 is one which is utilised in some way or another in a wide variety of practical contexts and statements of scientific theories. Indeed, a detailed discussion of the kinds of circumstances in which it may be reasonable to structure beliefs on the basis of such judgements will be a main topic of Chapter 4. Thus, for example, in the practical context of sampling, with or without replacement, from large dichotomised populations (of voters, manufactured items, or whatever), successive outcomes (voting intention, marketable quality, . . .) may very often be judged independent, given exuct knowledge o the proportional split in the dichotumised populution. f Similarly, in simple Mendelian theory, the genotypes of successive offspring are typically judged to be independent events, given the knouledge ofrhe two genorypes forming the muting. In the absence of such knowledge, however. in neither case would the judgement of independence for successive outcomes be intuitively plausible, since earlier outcomes provide information about the unknown population or mating composition and this, in turn, influences judgements about subsequent outcomes. For a detailed analysis of the concept of conditional independence, see Dawid ( I979a, 1979b, 1980b).

2.4.4

Sequential Revision of Beliefs

Bayes theorem characterises the way in which current beliefs about a set of mutually exclusive and exhaustive hypotheses, H,.j E J . are revised in the light of new data, D.In practice, of course, we typically receive data in successive stages, so that the process of revising beliefs is sequential. As a simple illustration of this process, let us suppose that data are obtained in two stages, which can be described by real-world events D1 and D2.Omitting.

48

2 fiiuridtitions

for convenience. explicit conditioning on Mu. revision of beliefs on the basis of the = H, first piece of data DI is described by P( HJ 101) P ( U I I ff,,)P( )/f(Dl). j E J . When it comes to the further. subsequent revision of beliefs in the light of Dz. the likelihoods and prior probabilities to be used in Bayes theorem are now P(D2 I H, n D l ) and P( H,, I DI).j E ./, respectively. since all judgements are now conditional on LII. We thus have. for all j E .I,

where P(U? 1 f l l ) = C j l(f12 I H,; I D ~ ) f ( f If D I ) . , From an intuitive standpoint. we would obviously anticipate that coherent revision of initial belief in the light of the combined data, D I 9 D?. should not depend on whether DI. D? were analysed successively or in combination. This is easily verified by substituting the expression for P ( H, I I>I) into the cxpression for P(H, I I l l f l L)?). whereupon we obtain

the latter being the direct expression for P( if., 111n U , ) from Bayes theorem 1 when DI n D2 is treated as a single piece of data. The generalisation of this sequential revision process to any number of stages. corresponding to data. D1.D,.. . . . D,,. . . . . proceeds straightforwardly. If we fl write DIk)= D 1 U2n . . . n L ) k to denote all the data received up to and including stage k. then. for all .j E ./.

which provides a recursive algorithm for the revision of beliefs. There is, however. a potential practical difficulty in implementing this process. since there is an implicit need to specify the successively cvriditiotred lik~li/i(iod.s. P( /I,;+ I H, n D). j E ./. a task which. in the absence of simplifying assumptions, may appear to be impossibly complex if I. is at all large. One possible form of simplifying assumption is the judgement of conditional independence for D I .D?. . . . . D,,,given any H,. j E J. since. by Definition 2.13. we then only need I ) = I( I H,)..j E .I. Another possibility n the evaluations f( D L . , 1~H,, might be to assume a rather weak form of dependence by making the judgement that a (Markov) property such as f(fIfiTI 1 H , n D) = P ( & , I I H, 1 4 . 1 . 7 j E , J , holds for all k . As we shall see later. these kinds of siniplifying structural assumptions play a fundamental role in statistical modelling and analysis.

2.5 Actions and Utilities

49

In the case of two hypotheses, H I ,H2, the judgement of conditional independence for D1, D2, . . . ,D,L, . . ,given H i or H2, enables us to provide an alternative . description of the process of revising beliefs by noting that, in this case,

With due regard to the relative nature of the terms prior and posterior, we can thus summarise the learning process (in favour of H I )as follows: posterior odds = prior odds x likelihood ratio.

In Section 2.6. we shall examine in more detail the key role played by the sequential revision of beliefs in the context of complex, sequential decision problems.

2.5
2.5.1

ACTIONS AND UTILITIES


Bounded Sets of Consequences

At the beginning of Section 2.4, we argued that choices among options are governed, in part, by the relative degrees of belief that an individual attaches to the uncertain events involved in the options. It is equally clear that choices among options should depend on the relative values that an individual attaches to the consequencesflowing from the events. The measurement framework of Axiom Xi) provides us with a direct, intuitive way of introducing a numerical measure of valuefor consequences, in such a way that the latter has a coherent, operational basis. Before we do this, we need to consider a little more closely the nature of the set of consequences C. The following special case provides a useful starting point for our development of a measure of value for consequences.

Definition 2.14. (Extreme consequences). The pair of consequences cI and c* are called, respectively,the worst and the best consequences in a decision problem 3for any other consequence c E C,c, 5 c 5 c*.
It could be argued that all real decision problems actually have extreme consequences. Indeed, we recall that all consequences are to be thought of as relevant consequences in the context of the decision problem. This eliminates pathological, mathematically motivated choices of C. which could be constructed in such a way as to rule out the existence of extreme consequences. For example, in mathematical modelling of decision problems involving monetary consequences, C is often taken to be the real line 3 or, in a no-loss situation with current assets k,to be the interval [k, Such Cs would not contain both a best and a worst consequence but, on the CG).

2 Foundutions

other hand, they clearly do not correspond to concrete, practical problems. In the next section, we shall consider the solution to decision problems for which extreme consequences are assumed to exist. Nevertheless. despite the force of the pragmatic argument that extreme consequences always exist, it must be admitted that insisting upon problem formulations which satisfy the assumption of the existence of extreme consequences can sometimes lead to rather tedious complications of a conceptual or mathematical nature. Consider. for example, a medical decision problem for which the consequences take the form of different numbers of years ofremaining life for a patient. Assuming that more value is attached to longer survival. it would appear rather difficult to justify any purticitlur choice of realistic upper bound, even though we believe there to be one. To choose a particular r* would be tantamount to putting forward (.. years as a realistic possible survival time, but regarding c* + 1 years as impossible! In such cases, it is attractive to have aviiilable the possibility. for conceptual and mathematical convenience, of dealing with sets of consequences 1101 possessing extreme elements (and the same is true of many problems involving monetary consequences). For this reason. we shall also deal (in Section 2.5.3) with the situation in which extreme consequences are not assumed to exist.

2.5.2
c.

Bounded Decision Problems

Let us consider a decision problem (8. A. 5 ) for which extreme consequences C. < c' are assumed to exist. We shall refer to such decision problems as bonnded.

Definition 2.15. (Canonical utility function for consequences). Ciwn ( I preference relution 5 , the utility i i ( r ) = i i ( c 1 (**. c - ) o ( I consequenc~rc, f relative to the extreme c~onseqiiences < c-. is the real nimiber 11 ( S ) NSSOc, ciaied with uny stundurd event S such that (. { (** I S.* * 1 s' } . The inupping ( ii : C 3 i s culled the utilityfunction.

--+

It is important to note that the definition of utility only involves comparison among consequences and options constructed with standard events. Since the preference patterns among consequences is unaffected by additional information. we would expect the utility ofa consequence to be uniquely defined and to remain unchanged as new information is obtained. This is indeed the case.

Proposition 2.21. (Existenceand uniqueness o bounded utilities). For cir1.v f bounded decision problem (. C. A. 5 ) with extrenie 1-onsequences c- :c c-. ( i ) jiw d l c, 11 I c.. c* ) exists und ib nnique; (ii) the value o i i ( c I c.. f )i s im$fected by the assumed occurrence of an f event G > 0: (iii) O = i i ( c , / c , . c ' )5 r ~ ( c ~ c . .<_c u ()r ' l c . . c . ) = 1. ~
( ( 8

2.5 Actions and Utilities

51

Proof. (i) Existence follows immediately from Axiom Xi). For uniqueness, note that if c {c* I S,, c* I Si} and c {c' 1 S2, S;} then, by transitivity c, c, and Axiom 3(ii), {c* 1 Sl,c, I Si} {c' I S2, I S;} and Sl S2; the result now follows from Axiom 4(i). {c* I S1.c' I Si}, so that u(c I c , , c * ) = p(S1); (ii) To establish this, let c using AxiomI(iii), for any G > 0choose S2such that G I S, andp(S2) = p(S,). Then, by Definition 2.6, c -c; {c' I S,, c, I Ss} and so the utility of c given G is just the original value p(S2). (iii) Finally, since c* = {c' I 0, c, I 12). C = {c' I S2, c* I 0}, and both 0 and R , belong to the algebra of standard events, we have u(c*1 c., e * ) = ~ ( ( 3 ) = 0 and U ( C * I c,.c*) = p(f2) = 1. It then follows, from Definition 2.15 and Axiom 4(i). c , that 0 5 ~ ( I c, c * ) 5 1. 0 It is interesting to note that u(c I c,, c.), which we shall often simply denote by u ( c ) , can be given an operational interpretation in terms of degrees of belief. Indeed, if we consider a choice between the fixed consequence I: and the option { c' 1 E , c, I E " } ,for some event E, then the utility of c can be thought of as defining a threshold value for the degree of belief in E, in the sense that values greater than u would lead an individual to prefer the uncertain option, whereas values less than 11 would lead the individual to prefer c for certain. The value u itself corresponds to indifference between the two options and is the degree of belief in the Occurrence of the best, rather than worst, consequence.

- -

,, .

This suggests one possible technique for the experimentalelicitariotz o uriliries, a f subject which has generateda large literature (with contributionsfrom economists and psychologists, as well as from statisticians). We shall illustrate the ideas in Example 2.3.

Using the coherence and quantification principles set out in Section 2.3, we have seen how numerical measures can be assigned to two of the elements of a decision problem in the form of degrees of belief j o r events and utilities for consequences. It remains now to investigate how an overall numerical measure of value can be attached to an option, whose form depends both on the events of a finite partition of the certain event R and on the particular consequences to which these events lead.

Definition 2.16. (Conditional expected ufility). Foranyc, < c',G > 0, a n d a = { c j I E j .j E J } ,

-( a I c , . c ' . G ) = u

C ~ (IC~,C * ) P (IEG, ) C.
JFJ

is the expected utility of the option u, given G, with respect to the extreme consequences c, , c*. If G = $2. we shall simply write ii( a I c*.c*) in place of F(a I c'*. c*. 0).

52

2 Foundations

In the language of mathematical probability theory (see Chapter 3). ifthe utility value of n is considered as a "random quantity", contingent on the occurrence of a particular event E,, then ii is simply the expected value of that utility when the probabilities of the events are considered conditional on G. Proposition 2.22. (Decision criterionfor a bounded decision problem). For any bounded decision with extreme consequences c. < c* niid G' > In.
I

u1 < G U L

Proof. Let a, = {c,) I E,J. = 1.. . . . j r , } , i = 1.2. By Axioms 5(ii). 4(iii). J and Proposition 2.13, for all ( r - j )there exist S,, and S: such that ',

ii(CC1

/ c * . c - . q <_

7i(O~/C..C*.G).

Hence, by Proposition 2.10. c#] {c" I S,,. c. I S;,}with .S,,l(E,, 17G ) and P(S,,) = u(c,, I c,. c*). By Detinition 2.6. for i = 1 . 2 and any option ( I .

I s;,) n c:]..l = 1.. . . . u,.(1 1 G' } . I E,, which may be written as {c* I A,. r. I B,. ( I I cz" }, where ' ,= IJ,( E,, n C; Ti S,,) 4
and B, = U,(E,, n G n S;,). By Propositions 2.14(ii) and 2.16. and using ; 3 f'(A1 G ) 5 /'(.#I? 1 C).But. by Definition 2.5, (11 5 ~ u2 =+ =I, <(; Proposition2.15,P(E;,,fi~nSf,)P(E,,nC)Y(S,,)= P ( S , , ) P f E ,I,WfYG'). = Hence,

{ [(c- I s,,.C ,

P ( A , 1 G)=
and SO (11 <c
(11

c
)),

u ( c , , 1 c,. c ' ) P ( E , , I G) = q n , I (',.


a

C)

/=1

ii(al

I c*,c*.G ) 5 E(a2 I C. c * .6').

The result just established is sometimes referred to as the principle of tna.vimising expected ittilitj. In our development, this is clearly not an independent "principle", but rather an implication of our assumptions and definitions. In summary form. the resulting prescription for quantitative. coherent decision-making is: choose the option with [lie greutest expected utility. Technically, of course, Proposition 2.22 merely establishes. for each <(;, il complete ordering of the options considercd and does not guarantee the c ~ . r i s t c m ~ of an optimal option for which the expected utility is a maximum. However, in most (if not all) concrete, practical problems the set of options considered will be finite and so a hesf option (not necessarily unique) will exist. In more abstract mathematical formulations, the existence of a maximum will depend on analytic features of the set of options and on the utility function I I : C .R.

2.5 Actions and Utilities

53

Example 2 3 (Uriliries o oil wildcoaers). One of the earliest reported systematic .. f attempts at the quantification of utilities in a practical decision-making context was that of Grayson ( 1960). whose decision-makerswereoil wildcattersengaged inexploratory searches for oil and gas. The consequences o drilling decisions and their outcomes are ultimately f changes in the wildcatters' monetary assets. and Grayson's work focuses on the assessment of utility functions for this latter quantity. For the purposes of illustration.supposethat we restrict attention tochanges in monetary assets ranging. in units of one thousand dollars, from - I50 (the worst consequence)to +825 (the best consequence). Assuming u(-150) = 0, ~ ( 8 2 5 ) 1 , the above development = suggests ways in which we might try to elicit an individual wildcatter's values of u(c) for various r in the range -150 < c < 825. For example. one could ask the wildcatter, using a series of values of c, which option he or she would prefer out of the following:
(i) c for sure, (ii) entry into a venture having outcome 825 with probability p and an outcome - 150 with probability 1 - p , for some specified p . If c],emerges from such interrogation as an approximate "indifference" value, the theory developed above suggests that, for a coherent individual,
I&( c,,) = p U(825)

+ (I - p )

?I(

- 150) = p .

Repeating this exercise for a range of values of p, provides a series of ( c ; , . p ) pairs, from over which a "picture"of ~ ( r ) the range of interest can be obtained. An alternativeprocedure. of course. would be to fix c, perform an interrogation for various p until an "indifference" value. pr is found, and then repeat this procedure for a range o values of c to obtain a series f of (c. 11, ) pairs.

I
I

Utility

..
* *
b

* *

*.

0 .

Thousunds of dollurs

Figure 2.3 Willium Beard's 14tilityfitncrion for chunges in moinetup assets

54

2 Founddons

Figure 2.3 shows the results obtained by Grayson using procedures of this kind to interrogate oil company executive. W. Beard, on October 23. 1957. A "picture" of Beard's utility function clearly emerges from the empirical data. In particular. over the range concerned. the utility function reflects considerable risk aversion, in the sense that even quite small asset losses lead to large (negative)changes in utility compared with the (positive) changes associated with asset gains. Since the expected utility Ti is a linear combination of values of the utility function, Proposition 2.22 guarantees that preferences among options are invariant under changes in the origin and scale of the utility measure used; i.e.. invariant with respect to transformations of the form f l u ( .) i . provided we take .4 0 . H so that the orientation of "best" and "worst" is not changed. In general. therefore. such an origin and scale can be chosen for convenience in any given problem, and we can simply refer to the expected utility of an option without needing to specify the (positive linear) transformation of the utility function which has been used. However. there may be bounded decision problems where the probabilistic interpretation discussed above makes it desirable to work in terms of ~ ~ ~ i n o t ~ i c ~ d utilities. derived by rcferring to the best and worst consequences. In the next section. we shall provide an extension of these ideas to more general decision problems where extreme consequences are not assumed to exist.

2.5.3

General Decision Problems

We begin with a more general definition of the utility of a consequence which preserves the linear combination structure and the invariance discussed above.

Definition 2.17. (General utility function). Given (I preference relation 5. I PI.c ofcr ~~onseq~ience . ) c. relutiw to the conseqitences < ~2~ is defined to he the r e d nuinher u such tlicrr (f c < ('1 and 1'1 { f.2 1 S, . c' I .!Y'~},rhen ii = - . r / ( 1 - .r ): ifcl 5 c 5 c ? and I' { r 2 1 S , . .(j I . , .then 11 = .r; S'} , i f c :> C? and r? { r I S6. 1 1.1 then I I = 1/,1, where .I' = / i ( S,. is the tnensirre crssociated with the .stand(ird ebwt S , . 1
Our restricted definition of utility (Definition 2. IS) relied on the existence of extreme consequences v.. c * . such that r , 5 t' 5 vw for all c E C. In the absence of this assumption. we have to select some reference ~~on.sequct~~~es. cI . c2 to play the role of c.. c.. However. we cannot then assume that rl 5 I' 5 c2 for all c. and this means that if q . ('2 are to define a utility scale by being assigned values 0. 1, respectively. we shall require negritiw assignments for c and assignments gremer t h m one for (. r?. The definition is motivated by a desire to maintain the linear features of the utility function obtained in the case where
c:
+((

the utility 11

--

2.5 Actions and Utilities

55

extreme consequences exist. It can be checked straightforwardly that if ql). cp). q3)denote any permutation of c, cl. c2. where c1 < cZ and ql) cf2)5 ct3).the definition given ensures that for any G > 0, qZ)-G {q3) ! c(]) 1 S:} implies IS , that

<

U ( C ( 2 )( C l , C * )

= Z74C(3)ICI.CZ)

+ (1 - Z ) U ( C ( , ) ICl,CZ).

The following result extends Proposition 2.2 I to the general utility function defined above.

Proposition 2.23. (Existence and uniqueness of utilities). For any decision problem and for any puir of consequences c1 < c2, (i) for all c;u(c I CI, c q ) exists and is unique; (ii) the value o u( c I c l ,c2) is unaffected by the occurrence o an event G > (b; f f (iii) u ( q I c 1 . c 2 )=Oandu(c2Icl,c2)= 1.
Proof. This is virtually identical to the proof of Proposition 2.21.
a

The following results guarantee that the utilities of consequences are linearly transformed if the pair of consequences chosen as a reference is changed.

Proposition 2.24. (Linearity). For all c 1 < c2 and c:$ < ca there exist A > 0 and B such that. for all c, u ( c I c1. c 2 ) = ,414~ c.1) + B. 14. . Proof. Suppose first that c 2 el. ca 5 q ,and c1 < c I c2. By Axg iom 5(ii), c:j < (I <_ c4 implies that there exists a standard event S, such that c {q I S,, qj I S;}. Hence, by Proposition 2.22,

u(cIcI:c2)= r u ( c ~ I c l . C 2 ) + ( -2)u(c3IcI.c2), 1 where z = P(& ) and, by Definition 2.17, ZL(C 1 cg, c.1) = x. Hence, u(c 1 c1. c ~ = ) Av(cIcg,c4) B,where A = u(c4 \ ~ I > Q ) - u(q5 1 ~ 1 . ~ 2a)n d B = ~ u ( c : ~ ~ c l . c ~ ) . By Axiom 5(ii), if Q > c there exists S, such that c3 {c4 I S,, c I S.;}. Hence, by Proposition 2.22,

,u(c3I c1. c2) = yu(c4 I c1. c2) + (1 - y ) 4 c 1 CI C 2 L where y = P(S,) and. by Definition 2.17, u ( c I c : t , c a ) = - g/(l - y). Hence, u ( c I cl, C.L) = A.u(c I q , c.*) B, with 4 and 13 as above. Similarly, if c > c . ~ there exists S, such that c4 {c I S , c3 I S }and
9

I q .CZ) = y u ( c I CI,c?) + (1 - y).u(c:t I C I . c . ~ ) , where 9 = P(S,) and, by Definition 2.17. u(c I c ,ca) = l/y. Hence, we have g u(c 1 c l ? = AU(C ca, c4) + B, with A and B as above. c2) 1
u(c4

Now suppose that the C'S have arbitrary order, subject to c2 > c1, c.1 > cg. Let c,. c* be the minimum and maximum, respectively. of (c1. c2, c3, c.1, c}. Then, by the above, there exist A , . B1,A2, 132 such that, for q,)E { c1, (12, Q, cq, c } . u(c(,)I c*, c') = A ~ u ( c (I,CI,CZ) + BI and u ( q , , I c,, c = A ~ u ( c ( 1, c:i. c d ) Bz; ) ' ) ) hence, u(c(,)1 q . c ~ = ( A ~ / A I ) u ( cI(::t.ca) (132 - B l ) / A l . a ) (;)

56

2 Fouticlcrt ions
Finally, we generalise Proposition 2.22 to unbounded decision problems;

Proposition 2.25. (General decision criterion ). For any decision problem. pciir ofcomeyuences ('1 < ('2. trnd esetil C: > g,

Pro<$ Suppose ( I , = { c,, I E ,.,.J = 1. . . . . u , } .i = 1.2. and let r.. r' be such that for all c,,, c, 5 c,., 5 r * . Then, by Proposition 2.22. N? 5(; (11 iff - 1 cw.c * .G ) 5 % ( a , I c.. r ' . L"). by Proposition 2.24, there exists .I 0 u(a2 But, and B such that ii(r I c.. r ' ) = Au(c 1 cl. r 2 )+ H . and so the result follows.

An immediate implication of Proposition 2.25 is that all options can be compared among themselves. We recall that we did nol directly assume that comparisons could be made between all pair of options (an ussuniprioti which is often criticised as unjustified; see. for example. Fine 1973, p. 221 ). Instead. we merely assumed that all consequences could be compared among themselves and with the (very simply structured) standard dichotomised options, and that the latter could be compared among themselves. This completes our elaboration of the axiom system set out in Section 2.3. Starting from the primitive notion of preference. 5 ,we have shown that quantitative. coherent comparisons of options must proceed cis if a utility function has been assigned to consequences. probabilities to events and the choice of an option made on the basis of maximising expected utility. If we begin by defining a utility function over I I : C .H.this indirccv in turn a preference ordering which is necessarily coherent. Any function can serve as a utility function (subject only to the existence of the expected utility for each option. a problem which does not arise in the case of finite partitions) and the choice is a personal one. In some contexts. however. there are further formal considerations which may delimit the form of function chosen. An important special case is discussed in detail in Section 2.7.

2.6
2.6.1

SEQUENTIAL DECISION PROBLEMS


Complex Decision Problems

Many real decision problems would appear to have a more complex structure than that encapsulated in Definition 2. I . For instance, in the fields of market research and production engineering investigators often consider first whether or not to run a pilot study and only then. in the light of information obtained (or on the basis of initial information if the study is not undertaken). are the major options considered. Such a two-stage process provides a simple example of il src~~rrnri~rl

2.6 Sequential Decision Problems

57

decision problem, involving successive, interdependent decisions. In this section, we shall demonstrate that complex problems of this kind can be solved with the tools already at our disposal, thus substantiating our claim that the principles of quantitative coherence suffice to provide a prescriptive solution to any decision problem. Before explicitly considering sequential problems, we shall review, using a more detailed notation, some of our earlier developments. Let A = {a,, i E I} be the set of alternativeactions we are willing to consider. For each a,, there is a class { f $ ] , j E J , } of exhaustive and mutually exclusive j events, which label the possible consequences { cLJ. E J , } which may result from action a,. Note that, with this notation, we are merely emphasising the obvious dependence of both the consequences and the events on the action from which they result. If Afo is our initial state of information and G > 0 is additional information obtained subsequently, the main result of the previous section (Proposition 2.25) may be restated as follows.
a1 is to be preferred to action a2. given hf,,and

For behuviour consistent with the principles of quantitutive coherence, action G, i f and only if

u ( c tJ) the value attached to the consequenceforeseen ifaction at is taken and the is occurs, and P( E,, I a,, Ale, G) is the degree of belief in the occurrence of event EZ3 event E,J,conditional on action a, having been taken, and the state of information being ( hlo. G).

We recall that the probability measure used tocompute theexpected utility is taken to be a representation of the decision-maker's degree of belief conditional on the total information available. By using the extended notation P(E,, lu,.G, M0), rather than the more economical P ( E, I C) used previously, we are emphasising that (i) theactual eventsconsidered may dependon theparticularactionenvisaged, (ii) the information available certainly includes the initial information together with C > 8,and (iii) degrees o belief in the Occurrence of events such as E,, are f understood to be conditional on action (I,having been assumed to be taken, so that the possible influence of the decision-maker on the real world is taken into account.

For any action ai. it is sometimes convenient to describe the relevant events Ell, j E J , in a sequential form. For example, in considering the relevant events which label the consequences of a surgical intervention for cancer, one may first

think of whether the patient will survive the operation and then, conditional on survival, whether or not the tumour will eventually reappear were this particular form of surgery to be performed. These situations are most easily described diagrammatically using decision trees. such as that shown in Figure 2.4, with as many successive random nodes as necessary. Obviously, this does not represent any formal departure from our previous structure. since the problem can be restated with a single random node where relevant events arc defined in ternis of appropriate intersections. such as E,, n F'/.,A.in the example shown. It is also usually the case. in practice, that it is easier to elicit the relevant degrees of belief conditionally. so that, for example. P( E;, fl k ; , / k I (I,. G. Mo) would often be best assessed by combining the separately assessed terms P( k;/k I E u,. G. M,,) P(Et,1 u , . C. .If,,). and ;

,,,,

Conditional analysis of this kind is usually necessary in order to understand the structure of complicated situations. Consider. for instance. the problem of placing a bet on the result of a race after which the total amount bet is to be divided up among those correctly guessing the winner. Clearly. if we bet on the favourite we have a higher probability of winning; but. if the favourite wins. many people will have guessed correctly and the prize will be small. It may appear at first sight that this is a decision problem where the utilities involved in an action (the possible prizes to be obtained from a bet) depend on the probabilities of the corresponding uncertain events (the possible winning horses), a possibility t i o r contemplated in our structure. A closer analysis reveals. however. that the structure of the problem is similar to that of Figure 2.4. The prize received depends on the het you place (a, 1 the related betting bchaviour of other people ( E, , J and the outcome of the race ( F , , k ) . It is only natural to assume that our degree o f belief in the possible outcomes of the race may be influenced by the betting hehaviourofother people. This conditional analysis straightfonvardly resolves the initiul. apparent complication.

We now turn to considering sc~yitencusofdecision problems. We shall consider situations where, after an action has been taken and its consequences observed. a

2.6 Sequential Decision Problems

59

new decision problem arises, conditional on the new circumstances. For example, when the consequences of a given medical treatment have been observed, a physician has to decide whether to continue the same treatment, or to change to an alternative treatment, or to declare the patient cured. If a decision problem involves a succession of decision nodes, it is intuitively obvious that the optimal choice at the first decision node depends on the optimal choices at the subsequent decision nodes. In colloquial terms, we typically cannot decide what to do today without thinking first of what we might do tomorrow, and that, of course, will typically depend on the possible consequences of today's actions. In the next section, we consider a technique, buchurd induction, which makes it possible to solve these problems within the framework we have already established. 2.6.2

Backward Induction

In any actual decision problem, the number of scenarios which may be contemplated at any given time is necessarily finite. Consequently, and bearing in mind that the analysis is only strictly valid under certain fixed general assumptionsand we cannot seriously expect these to remain valid for an indefinitely long period, the number of decision nodes to be considered in any given sequential problem will be assumed to be finite. Thus, we should be able to define ajinite horizon, after which no further decisions are envisaged in the particular problem formulation. If, at each node, the possibilities are finite in number, the situation may be diagrammaticallydescribed by means of a decision tree like that of Figure 2.5.

Figure 2.5 Decision tree with several decision nodes

Let 'n be the number of decision stages considered and let a('")denote an action being considered at the rnth stage. Using the notation for composite options

60

2 Fi-wdurions

introduced in Section 2.2, all first-stage actions may be compactly described in the form

where { E I J , jE J()} is the partition of relevant events which corresponds to the notation max a,::?refers to the musf preferred of the set of options {(if.k E K(,} which we would be confronted with were the event E,, to occur. The maximisation is naturally to be understood in the sense of our conditional preference ordering among the available second-stage options, given the Occurrence of EIJ.Indeed. the consequence of choosing u:li and having E,; occur is that we are confronted with a set of options { ul!. k E Iil,} from which UY (*(it?c-hoose that option which is preferred on the basis of our pattern of preferences at that stage. Similarly, second-stage options may be written in terms of third-stage options. and the process continued until we reach the rith stage, consisting of ordinary options defined in terms of the events and consequences to which they may lead. Formally. we have
a,:) and

It is now apparent that sequential decision problems are a special case of the general framework which we have developed. It follows from Proposition 2.25 that. at each stage r r i . if GI,,is the relevant information available, and u( .) is the (generalised) utility function. we may write

where

This means that one has to first solve the final (nth) stage, by maximising the appropriate expected utility; then one has to solve the ( 1 ) - 1)th stage by maximizing

2.6 Sequential Decision Problems

61

the expected utility conditional on making the optimal choice at the nth stage; and so on, working backwards progressively, until the optimal first stage option has been obtained, a procedure often referred to as gynamic programming. This process of hcrchwdinduction satisfies the requirement that, at any stage of the procedure, the mth. say. the continuation of the procedure must be identical to the optimal procedure starting at the mth stage with information G,,,. reThis quirement is usually known as Bellmun s oprimulity principle (Bellman, 1957). As with the principle of maximising expected utility. we see that this is not required as a further assumed principle in our formulation, but is simply a consequence of the principles of quantitative coherence.
Example 2.4. (An optimal sfopping problem). We now consider a famous problem. which is usually referred to in the literature as the marriage problem or the secretary problem. Suppose that a specitied number of objects I I 1 2 are to be inspected sequentially, one at a time, in order to select one of them. Suppose further that, at any stage r , 1 5 I 5 I ) . the inspector has the option of either stopping the inspection process, receiving, as a result, the object currently under inspection, or of continuing the inspection process with the next object. N o backtracking is permitted and if the inspection process has not terminated before the nth stage the outcome is that the trth object is received. At each stage, r , the only information available to the inspector is the relative rank ( I =best, r=worst) of the current object among those inspected so far, and the knowledge that the 78 objects are being presented in a completely random order. When should the inspection process be terminated? Intuitively, if the inspector stops too soon there is a good chance that objects more preferred to those seen so far will remain uninspected. However, if the inspection process goes on too long there is a good chance that the overall preferred object will already have been encountered and passed over. This kind of dilemma is inherent in a variety of practical problems, such as property purchase in a limited sellers market when a bid is required immediately after inspection, or staff appointment in a skill shortage area when a job offer is required immediately after interview. More exotically-and assuming a rather egocentric inspection process, again with no backtracking possibilities-this stopping problem has been suggested as a model for choosing a mate. Potential partners are encountered sequentially: the proverb marry in haste. repent at leisure warns against settling down too soon: but such hesitations have to be balanced against painful future realisations of missed golden opportunities. Less romantically, let r;, i = 1.. . . . n , denote the possible consequencesof the inspection process, with c, = i if the eventual object chosen has rank i out of all YI objects. We shall denote by u ( c , ) = u ( i ) . i = 1.. . . . n. the inspectors utility for these consequences. Now suppose that t < n objects have been inspected and that the relative rank among these of the object under current inspection is .r, where 1 5 . I . 5 7. There are two actions available at the rth stage: n l = stop, a2 = continue (where. to simplify notation, we have dropped the superscript, 1.). The information available at the r t h stage is G, = ( . r . r ) ; the information available at the ( r + 1)th stage would be G , = (,I/. I + 1). where I/. 1 5 I/ I r + I , is the rank of the next object relative to the I + 1 then inspected. all values . of ,y being. ofcourse, equally likely since the I I objects are inspected in a random order. If we denote the expected utility of stopping, given G,. by Ti,(.r. r ) and the expected utility

62

2 Foundations

of acting optimally, given C,,by lil,(.r.r ) , the general development given above establishes ; that

where

r Values of QI(.z. ) can be found from the final condition and the technique of backwards induction. The optimal procedure is then .seen to be:
(i) continue if li0(a.r ) > E,.(.r.
I.).

(ii) stop if ull(,r. ) = ii,(r.r ) . r For illustration, suppose that the inspector's preference ordering corresponds to a"nothing but the best" utility function. defined by I J ( I ) = I. i r ( . r ) = 0. .r = 2 . . . . . t t . It is then easy to show that I' //.(I. I.) = - *
It

ii.(.r. 1 . )

- 0.
1.).

,/'

2. . . . . I I

thus, if .r > 1.

&(,r. 1 . ) > E.(.r.

I'

= 1.. . . . / I - 1

This implies that inspection .should never be terminated ifthe cwrent ubje1.l is not thc best .SCPII sofur. The decision as to whether to stop if ,r = 1 is determined from the equation

which is easily verified by induction. If I,' is the smallest positive integer for which I
T Y

1
i

/ I - 1

I / - -

" ' 1

-5 1'.

I.

the optimal procedure is detined as follows: ( i ) continue until at least 1.' objects have been inspected:
(ii) if the r'th object is the best so far. stop;
(iii) otherwise. continue until the object under inspection is the best s o far. then htop (\topping in any case if the u t h stage is rcachcd).

It' I ) is large. approximation of the sum in the ithove inequality by an integral readily . yields the approximation I" z I / / (For further details. see DeGroot (1970, Chapter 13). whose account is based closely on Lindley ( 19613). For reviews 0 1 I'iirthcr. related work on this fascinating problem. see Freeman ( 1083) itnd Ferpuson (1989).

2.6 Sequential Decision Problems


Applied to the problem of choosing a mate, and assuming that potential partners are the encountered uniformly over time between the ages of I6 and 60, above analysis suggests delaying a choice until one is at least 32 years old, thereafter ending the search as soon as one encounters someone better than anyone encountered thus far. Readers who are suspicious of putting this into practice have the option, of course, of staying at home and continuing their study of this volume. Sequential decision problems are now further illustrated by considering the important special case of situations involving an initial choice of experimental design.

2.6.3

Design of Experiments

A simple, very important example of a sequential problem is provided by the situation where we have available a class of experiments, one of which is to be performed in order to provide information for use in a subsequent decision problem. We want to choose the best experiment. The structure of this problem, which embraces the topic usually referred to as the problem of experimental design, may be diagrammatically described by means of a sequential decision tree such as that shown in Figure 2.6.

Figure 2.6 Decision treefor e.vperimenrcri design

We must first choose an experiment e and, in light of the data obtained, take an action o., which, were event E to occur, would produce a consequence having utility which, modifying earlier notation in order to be explicit about the elements involved, we denote by u( u. t. D, E). Usually, we also have available

64

2 Foundations

the possibility, denoted by ell and referred to as the rrrr// e.vperinienf, of directly choosing an action without performing any experiment. Within the general structure for sequential decision problems developed in the previous section. we note that the possible sets of data obtainable may depend on the particular experiment performed. the set of available actions may depend on the results of the experiment performed, and the sets of consequences and labelling events may depend on the particular combination of experiment and action chosen. However. in our subsequent development we will use a simplified notation which suppresses these possible dependencies in order to centre attention on othcr. more important. aspects of the problem. We have seen. in Section 2.6.2, that to solve a sequential decision problem we start at the last stage and work backwards. In this case. the expected utility of option i i , given the information available at the stage when the action is to be taken, is i i ( U . f J ./ I , ) = / l ( ( i , f - . D,. b;,)P(fi;, f.. fl,.</).

c
/

.I

For each pair (c. U,) we can therefore choose the best possible continuation: namely, that action a: which maximises the expression given above. Thus. the expected utility of the pair (c. D,1 is given by

U(C.

11,) = TT(fi;.f. D,) IllasTT(fi.i. 11,). =


I

We are now in a position to determine the best possible experiment. This is that c which maximises, in the class of available experiments. the unconditional expected utility i i ( r . ) = ~ q o ; . d l , ) P ( l I(,). I,
lil

where I( I), I P ) denotes the degree of belief attached to the Occurrence of data D, if r were the experiment chosen. On the other hand, the expected utility of performing no experiment and choosing that action (I;, which maximises the (prior) expected utility is

so that an experiment P is worth performing if and only if T i ( ( ) > T i ( ( Naturally. Ti(n. c. I l , ) . i i ( c . D,) T i ( ( , ) are different functions defined on and

different spaces. However. to simplify the notation and without danger of confusion we shall always use ii to denote an expected utility. Proposition 2.26. (Optimal experimental design). Tlic optinid w t i o t i is to prrfortn the e.vperinrent ( ( f i i ( c . ) > t r ~ T i ( ( -1 = Illils, T i ( ( ); otlwrd wise. tlie optiniul cictiori is to iwfvrtti t i o cvpcrittiwt.
Proof: This is immediate trom Proposition 2.25.

2.6 Sequential Decision Problems

65

It is often interesting to determine the value which additional information might have in the context of a given decision problem. The expected value of the information provided by new data may be computed as the (posterior) expected difference between the utilities which correspond to optimal actions after and before the data have been obtained. Definition 2.18. (The value of additional infomaation). (i) The expected value of the & u D, provided by an experiment e is t

v(e. D ~ = )

C {.(a:.
J ~ J

e. D,, E ~-) u(aG,eo, E,)} P ( E , I e. D,. a:):

where a:, a;)are, respectively, the optimal actions given D,, and with no data. (ii) the expected value of an experiment e is given by

v(e) =

C v(e. D , ) ~ ( IDe )~.


lGr

It is sometimesconvenient to have an upper bound for the expected value u ( e ) of an experiment e. Let us therefore consider the optimal actions which would be available with perfect information, i.e., were we to know the particular event EJ which will eventually occur, and let a t , be the optimal action given E,, i.e., such that, for all E J , u(ai,,.eo,E,) = riiaxu(a,eo,E,).
a

Then, given EJ.the loss suffered by choosing any other action a will be

"(a&), E j ) - ~ ( a , EJ). eo, eo,


For a = a;, the optimal action under prior information. this difference will measure, conditional on E,, the value of perfect information and, under appropriate conditions, its expected value will provide an upper bound for the increase in utility which additional data about the El's could be expected to provide.

Definition 2.19. (Expected value of perfect information). The opportunity loss which would be suffered if action a were taken and event E_j occurred is

    l(a, E_j) = max_{a_i} u(a_i, e₀, E_j) − u(a, e₀, E_j);

the expected value of perfect information is then given by

    v*(e₀) = Σ_{j∈J} l(a*_0, E_j) P(E_j).
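As an illustration only (reusing the hypothetical numbers of the previous sketch), the following Python fragment evaluates v(e, D_i), v(e) and the upper bound v*(e₀); the inequality v(e) ≤ v*(e₀) can be checked directly on the output.

```python
# Value of information for the toy problem above (hypothetical numbers).
utility = {("a1", "E1"): 10, ("a1", "E2"): 0,
           ("a2", "E1"): 2,  ("a2", "E2"): 5}
prior = {"E1": 0.3, "E2": 0.7}
likelihood = {("D1", "E1"): 0.8, ("D1", "E2"): 0.2,
              ("D2", "E1"): 0.2, ("D2", "E2"): 0.8}
actions, data = ("a1", "a2"), ("D1", "D2")

def posterior(D):
    joint = {E: likelihood[(D, E)] * prior[E] for E in prior}
    pD = sum(joint.values())
    return {E: joint[E] / pD for E in prior}, pD

def best(beliefs):
    return max(actions, key=lambda a: sum(utility[(a, E)] * p
                                          for E, p in beliefs.items()))

a0 = best(prior)                     # a*_0: optimal action with no data
v_e = 0.0
for D in data:
    post, pD = posterior(D)
    aD = best(post)                  # a*_i: optimal action given D_i
    # v(e, D_i) = sum_j {u(a*_i, E_j) - u(a*_0, E_j)} P(E_j | D_i)
    v_e += pD * sum((utility[(aD, E)] - utility[(a0, E)]) * p
                    for E, p in post.items())

# v*(e0) = sum_j l(a*_0, E_j) P(E_j), l(a, E_j) = max_a' u(a', E_j) - u(a, E_j)
v_star = sum((max(utility[(a, E)] for a in actions) - utility[(a0, E)]) * p
             for E, p in prior.items())
print(f"v(e) = {v_e:.3f} <= v*(e0) = {v_star:.3f}")   # 1.220 <= 2.400
```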
It is important to bear in mind that the functions v(e, D_i) and v(e), and the number v*(e₀), all crucially depend on the (prior) probability distributions {P(E_j | a), a ∈ A} although, for notational convenience, we have not made this dependency explicit.

In many situations, the utility function u(a, e, D_i, E_j) may be thought of as made up of two separate components. One is the (experimental) cost of performing e and obtaining D_i; the other is the (terminal) utility of directly choosing a and then finding that E_j occurs. Often, the latter component does not actually depend on the preceding e and D_i, so that, assuming additivity of the two components, we may write

    u(a, e, D_i, E_j) = u(a, e₀, E_j) − c(e, D_i),

where c(e, D_i) ≥ 0. Moreover, the probability distributions over the events are often independent of the action taken. When these conditions apply, we can establish a useful upper bound for the expected value of an experiment in terms of the difference between the expected value of complete data and the expected cost of the experiment itself.

Proposition 2.27. (Additive decomposition). If the utility function has the form

    u(a, e, D_i, E_j) = u(a, e₀, E_j) − c(e, D_i),

with c(e, D_i) ≥ 0, and the probability distributions are such that P(E_j | e, D_i, a) = P(E_j | e, D_i), then, for any available experiment e,

    v(e) ≤ v*(e₀) − c̄(e),

where

    c̄(e) = Σ_{i∈I} c(e, D_i) P(D_i | e)

is the expected cost of e.

Proof. Using Definitions 2.18 and 2.19, v(e) may be written as

    v(e) = Σ_{i∈I} P(D_i | e) Σ_{j∈J} {u(a*_i, e₀, E_j) − u(a*_0, e₀, E_j)} P(E_j | e, D_i) − c̄(e)

and, hence, since u(a*_i, e₀, E_j) ≤ max_a u(a, e₀, E_j) and Σ_{i∈I} P(D_i | e) P(E_j | e, D_i) = P(E_j),

    v(e) ≤ Σ_{j∈J} {max_a u(a, e₀, E_j) − u(a*_0, e₀, E_j)} P(E_j) − c̄(e) = v*(e₀) − c̄(e),

as stated.
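Continuing the same toy example, with a hypothetical constant cost c(e, D_i) = 0.5 the bound of Proposition 2.27 can be verified numerically:

```python
# Under the additive decomposition, a constant cost c(e, D_i) = 0.5 gives
# expected cost c-bar(e) = 0.5, and v(e) is reduced by exactly c-bar(e).
c_bar = 0.5
v_e, v_star = 1.220, 2.400             # values computed in the sketch above
assert v_e - c_bar <= v_star - c_bar   # 0.720 <= 1.900, as Prop. 2.27 asserts
```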

In Section 2.7, we shall study in more detail the special case of experimental design in situations where data are being collected for the purpose of pure inference, rather than as an input into a directly practical decision problem. We have shown that the simple decision problem structure introduced in Section 2.2, and the tools developed in Sections 2.3 to 2.5, suffice for the analysis of complex, sequential problems which, at first sight, appear to go beyond that simple structure. In particular, we have seen that the important problem of experimental design can be analysed within the sequential decision problem framework. We shall now use this framework to analyse the very special form of decision problem posed by statistical inference, thus establishing the fundamental relevance of these foundational arguments for statistical theory and practice.

2.7 INFERENCE AND INFORMATION

2.7.1 Reporting Beliefs as a Decision Problem

The results on quantitative coherence (Sections 2.2 to 2.5) establish that if we aspire to analyse a given decision problem, {E, C, A, ≤}, in accordance with the axioms of quantitative coherence, we must represent degrees of belief about uncertain events in the form of a finite probability measure over E and values for consequences in the form of a utility function over C. Options are then to be compared on the basis of expected utility. The probability measure represents an individual's beliefs conditional on his or her current state of information. Given the initial state of information described by M₀ and further information in the form of the assumed occurrence of a significant event G, we previously denoted such a measure by P(· | G). We now wish to specialise our discussion somewhat to the case where G can be thought of as a description of the outcome of an investigation (typically a survey, or an experiment) involving the deliberate collection of data (usually, in numerical form). The event G will then be defined directly in terms of the counts or measurements obtained, either as a precise statement, or involving a description of intervals within which
readings lie. To emphasise the fact that G characterises the actual data collected, we shall denote the event which describes the new information obtained by D. An individual's degree of belief measure over E will then be denoted P(· | D), representing the individual's current beliefs in the light of the data obtained (where, again, we have suppressed, for notational convenience, the explicit dependence on M₀). So far as uncertainty about the events of E is concerned, P(· | D) constitutes a complete encapsulation of the information provided by D, given the initial state of information M₀. Moreover, in conjunction with the specification of a utility function, P(· | D) provides all that is necessary for the calculation of the expected utility of any option and, hence, for the solution of any decision problem defined in terms of the frame of reference adopted. Starting from the decision problem framework, we thus have a formal justification for the main topic of this book; namely, the study of models and techniques for analysing the way in which beliefs are modified by data. However, many eminent writers have argued that basic problems of reporting scientific inferences do not fall within the framework of decision problems as defined in earlier sections:

Statistical inferences involve the data, a specification of the set of possible populations sampled and a question concerning the true population... Decisions are based on not only the considerations listed for inferences, but also on an assessment of the losses resulting from wrong decisions... (Cox, 1958);

... a considerable body of doctrine has attempted to explain, or rather to reinterpret these (significance) tests on the basis of quite a different model, namely as means to making decisions in an acceptance procedure. The differences between these two situations seem to the author many and wide... (Fisher, 1956/1973).

If views such as these were accepted, they would, of course, undermine our conclusion that problems concerning uncertainty are to be solved by revising degrees of belief in the light of new data in accordance with Bayes' theorem. Our main purpose in this section is therefore to demonstrate that the problem of reporting inferences is essentially a special case of a decision problem. By way of preliminary clarification, let us recall from Section 2.1 that we distinguished two, possibly distinct, reasons for trying to think rationally about uncertainty. On the one hand, quoting Ramsey (1926), we noted that, even if an immediate decision problem does not appear to exist, we know that our statements of uncertainty may be used by others in contexts representable within the decision framework. In such situations, our conclusion holds. On the other hand, quoting Lehmann (1959/1986), we noted that the inference, or inference statement, may sometimes be regarded as an end in itself, to be judged independently of any practical decision problem. It is this case that we wish to consider in more detail in this section, establishing that, indeed, it can be regarded as falling within the general framework of Sections 2.2 to 2.5.


Formalising the first sentence of the remark of Cox, given above, a pure inference problem may be described as one in which we seek to learn which of a set of mutually exclusive hypotheses (theories, states of nature, or model parameters) is true. From a strictly realistic viewpoint, there is always, implicitly, a finite set of such hypotheses, say {H_j, j ∈ J}, although it may be mathematically convenient to work as if this were not the case. We shall regard this set of hypotheses as equivalent to a finite partition of the certain event into events {E_j, j ∈ J}, having the interpretation E_j = "the hypothesis H_j is true". The actions available to an individual are the various inference statements that might be made about the events {E_j, j ∈ J}, the latter constituting the uncertain events corresponding to each action. To complete the basic decision problem framework, we need to acknowledge that, corresponding to each inference statement and each E_j, there will be a consequence; namely, the record of what the individual put forward as an appropriate inference statement, together with what actually turned out to be the case. If we aspire to quantitative coherence in such a framework, we know that our uncertainty about the {E_j, j ∈ J} should be represented by {P(E_j | D), j ∈ J}, where P(· | D) denotes our current degree of belief measure, given data D in addition to the initial information M₀. It is natural, therefore, to regard the set of possible inference statements as the class of probability distributions over {E_j, j ∈ J} compatible with the information D. The inference reporting problem can thus be viewed as one of choosing a probability distribution to serve as an inference statement. But there is nothing (so far) in this formulation which leads to the conclusion that the best action is to state one's actual beliefs. Indeed, we know from our earlier development that options cannot be ordered without an (implicit or explicit) specification of utilities for the consequences. We shall consider this specification and its implications in the following sections. A particular form of utility function for inference statements will be introduced and it will then be seen that the idea of inference as decision leads to rather natural interpretations of commonly used information measures in terms of expected utility. In the discussion which follows, we shall only consider the case of finite partitions {E_j, j ∈ J}. Mathematical extensions will be discussed in Chapter 3.

2.7.2 The Utility of a Probability Distribution

We have argued above that the provision of a statistical inference statement about a class of exclusive and exhaustive hypotheses {E_j, j ∈ J}, conditional on some relevant data D, may be precisely stated as a decision problem, where the set of hypotheses {E_j, j ∈ J} is a partition consisting of elements of E, and the action space A relates to the class Q of conditional probability distributions over {E_j, j ∈ J}; thus,

    Q = {q = {q_j, j ∈ J}; q_j ≥ 0, Σ_{j∈J} q_j = 1},

where q_j is assumed to be the probability which, conditional on the available data D, an individual reports as the probability of E_j ≡ H_j being true. The set of consequences C consists of all pairs (q, E_j), representing the conjunctions of reported beliefs and true hypotheses. The action corresponding to the choice of q is defined as {(q, E_j), j ∈ J}. To avoid triviality, we assume that none of the hypotheses is certain and that, without loss of generality, all are compatible with the available data; i.e., that all the E_j's are significant given D, so that (Proposition 2.5) ∅ < E_j ∩ D < D for all j ∈ J. If this were not so, we could simply discard any incompatible hypotheses. It then follows from Proposition 2.17(iii) that each of the personal degrees of belief attached by the individual to the conflicting hypotheses given the data must be strictly positive. Throughout this section, we shall denote by

    p = {p_j, j ∈ J},  with p_j = P(E_j | D),

the probability distribution which describes, conditional again on the available data D, the individual's actual beliefs about the alternative "hypotheses". We emphasise again that, in the structure described so far, there is no logical requirement which forces an individual to report the probability distribution p which describes his or her personal beliefs, in preference to any other probability distribution q in Q.

We complete the specification of this decision problem by inducing the preference ordering through direct specification of a utility function u(·), which describes the "value" u{q, E_j} of reporting the probability distribution q as the final inferential summary of the investigation, were E_j to turn out to be the true "state of nature". Our next task is to investigate the properties which such a function should possess in order to describe a preference pattern which accords with what a scientific community ought to demand of an inference statement. This special class of utility functions is often referred to as the class of score functions (see also Section 2.8) since the functions describe the possible "scores" to be awarded to the individual as a "prize" for his or her "prediction".

Definition 2.20. (Score function). A score function u for probability distributions q = {q_j, j ∈ J} defined over a partition {E_j, j ∈ J} is a mapping which assigns a real number u{q, E_j} to each pair (q, E_j). This function is said to be smooth if it is continuously differentiable as a function of each q_j.
It seems natural to assume that score functions should be smooth (in the intuitive sense), since one would wish small changes in the reported distribution to produce only small changes in the obtained score. The mathematical condition imposed is a simple and convenient representation of such smoothness.


We have characterised the problem faced by an individual reporting his or her beliefs about conflicting "hypotheses" as a problem of choice among probability distributions over {E_j, j ∈ J}, with preferences described by a score function. This is a well specified problem, whose solution, in accordance with our development based on quantitative coherence, is to report that distribution q which maximises the expected utility

    Σ_{j∈J} u{q, E_j} P(E_j | D).

In order to ensure that a coherent individual is also honest, we need a form of u(·) which guarantees that the expected utility is maximised if, and only if, q_j = p_j = P(E_j | D), for each j; otherwise, the individual's best policy could be to report something other than his or her true beliefs. This motivates the following definition:

Definition 2.21. (Proper score function). A score function u is proper if, for each strictly positive probability distribution p = {p_j, j ∈ J} defined over a partition {E_j, j ∈ J},

    sup_{q∈Q} Σ_{j∈J} u{q, E_j} p_j = Σ_{j∈J} u{p, E_j} p_j,

where the supremum, taken over the class Q of all probability distributions over {E_j, j ∈ J}, is attained if, and only if, q = p.

It would seem reasonable that, in a scientific inference context, one should require a score function to be proper. Whether a scientific report presents the inference of a single scientist or a range of inferences, purporting to represent those that might be made by some community of scientists, we should wish to be reassured that any reported inference could be justified as a genuine current belief.

Smooth, proper score functions have been successfully used in practice in the following contexts: (i) to determine an appropriate fee to be paid to meteorologists in order to encourage them to report reliable predictions (Murphy and Epstein, 1967); (ii) to score multiple choice examinations so that students are encouraged to assign, over the possible answers, probability distributions which truly describe their beliefs (de Finetti, 1965; Bernardo, 1981b, Section 3.6); (iii) to devise general procedures to elicit personal probabilities and expectations (Savage, 1971); (iv) to select best subsets of variables for prediction purposes in political or medical contexts (Bernardo and Bermúdez, 1985). The simplest proper score function is the quadratic function (Brier, 1950; de Finetti, 1962) defined as follows.


Definition 2.22. (Quadratic score function). A quadratic score function for probability distributions q = {q_j, j ∈ J} defined over a partition {E_j, j ∈ J} is any function of the form

    u{q, E_j} = A(2q_j − Σ_{k∈J} q_k²) + B_j,  A > 0,

where q = {q_j, j ∈ J} is any probability distribution over {E_j, j ∈ J}.

Using the indicator functions for the events of the partition, an alternative expression for the quadratic score function is given by

    u{q, E_j} = A{1 − Σ_{k∈J} (1_k − q_k)²} + B_j,  where 1_k = 1 if k = j, and 0 otherwise,

which makes explicit the role of a 'penalty' equal to the squared euclidean distance from q to a perfect prediction.

Proposition 2.28. A quadratic score function is proper.

Proof. We have to maximise, over q, the expected score

    Σ_{j∈J} {A(2q_j − Σ_{k∈J} q_k²) + B_j} p_j.

Taking derivatives with respect to the q_j's and equating them to zero, we have the system of equations 2p_j − 2q_j Σ_{k∈J} p_k = 0, j ∈ J, and since Σ_{k∈J} p_k = 1, we have q_j = p_j for all j. It is easily checked that this gives a maximum.

Note that in the proof of Proposition 2.28 we did not need to use the condition Σ_j q_j = 1; this is a rather special feature of the quadratic score function.
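As a quick numerical check of Proposition 2.28 (an illustrative sketch of ours, with A = 1 and B_j = 0), a grid search over reported distributions q confirms that the expected quadratic score is maximised by the honest report q = p:

```python
import itertools

p = [0.2, 0.3, 0.5]   # actual beliefs

def expected_quadratic(q, p):
    """Expected quadratic score: sum_j p_j (2 q_j - sum_k q_k^2)."""
    return sum(pj * (2 * qj - sum(qk * qk for qk in q))
               for pj, qj in zip(p, q))

best_q, best_val = None, float("-inf")
for i, j in itertools.product(range(101), range(101)):
    if i + j <= 100:                        # q on a 0.01 grid of the simplex
        q = [i / 100, j / 100, (100 - i - j) / 100]
        val = expected_quadratic(q, p)
        if val > best_val:
            best_q, best_val = q, val

print(best_q)   # -> [0.2, 0.3, 0.5]: honesty is optimal
```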
A further condition is required for score functions in contexts, which we shall refer to as "pure inference problems", where the value of a distribution, q, is only to be assessed in terms of the probability it assigned to the actual outcome.

Definition 2.23. (Local score function). A score function u is local if, for each element q = {q_j, j ∈ J} of the class Q of probability distributions defined over a partition {E_j, j ∈ J}, there exist functions {u_j(·), j ∈ J} such that u{q, E_j} = u_j(q_j).

It is intuitively clear that the preferences of an individual scientist faced with a pure inference problem should correspond to the ordering induced by a local score function. The reason for this is that, by definition, in a "pure" inference problem we are solely concerned with "the truth". It is therefore natural that if E_j, say, turns out to be true, the individual scientist should be assessed (i.e., scored) only on the basis of his or her reported judgement about the plausibility of E_j.



This can be contrasted with the forms of score function that would typically be appropriate in more directly practical contexts. In stock control, for example, probability judgements about demand would usually be assessed in the light of the relative seriousness of under- or over-stocking, rather than by just concentrating on the belief previously attached to what turned out to be the actual level of demand.


Note that, in Definition 2.23, the functional form u_j(q_j) of the dependence of the score on the probability attached to the true E_j is allowed to vary with the particular E_j considered. By permitting different u_j(·)'s for each E_j, we allow for the possibility that bad predictions regarding some "truths" may be judged more harshly than others. The situation described by a local score function is, of course, an idealised, limit situation, but one which seems, at least approximately, appropriate in reporting pure scientific research. In addition, later in this section we shall see that certain well-known criteria for choosing among experimental designs are optimal if, and only if, preferences are described by a smooth, proper, local score function.

Proposition 2.29. (Characterisation of proper local score functions). If u is a smooth, proper, local score function for probability distributions q = {q_j, j ∈ J} defined over a partition {E_j, j ∈ J} which contains more than two elements, then it must be of the form

    u{q, E_j} = A log q_j + B_j,

where A > 0 and the B_j's are arbitrary constants.

Proof. Since u(·) is local and proper, then for some {u_j(·), j ∈ J}, we must have

    sup_q Σ_{j∈J} u{q, E_j} p_j = sup_q Σ_{j∈J} u_j(q_j) p_j = Σ_{j∈J} u_j(p_j) p_j,

where p_j > 0, Σ_j p_j = 1 and the supremum is taken over the class of probability distributions q = {q_j, j ∈ J}, q_j ≥ 0, Σ_j q_j = 1. Writing p = {p₁, p₂, ...} and q = {q₁, q₂, ...}, with

    q₁ = 1 − (q₂ + q₃ + ⋯),  p₁ = 1 − (p₂ + p₃ + ⋯),

we seek {u_j(·), j ∈ J} giving an extremal of

    F(q₂, q₃, ...) = p₁ u₁(1 − q₂ − q₃ − ⋯) + p₂ u₂(q₂) + p₃ u₃(q₃) + ⋯.

For {q₂, q₃, ...} to make F stationary it is necessary (see, e.g., Jeffreys and Jeffreys, 1946, p. 315) that

    (d/dt) F(q₂ + tε₂, q₃ + tε₃, ...) |_{t=0} = 0

for any ε = {ε₂, ε₃, ...} such that all the ε_j are sufficiently small. Calculating this derivative, the condition is seen to reduce to

    Σ_{j≥2} ε_j {p_j u′_j(q_j) − p₁ u′₁(q₁)} = 0

for all ε_j's sufficiently small, where u′ stands for the derivative of u. Moreover, since u is proper, {p₂, p₃, ...} must be an extremal of F and thus we have the system of equations

    p_j u′_j(p_j) = p₁ u′₁(p₁),  j = 2, 3, ...,

so that all the functions u_j, j = 1, 2, ... satisfy the same functional equation for all {p₂, p₃, ...} and, hence,

    p u′_j(p) = A,  0 < p < 1,  for all j = 1, 2, ...,

so that u_j(p) = A log p + B_j. The condition A > 0 suffices to guarantee that the extremal found is indeed a maximum.

Definition 2.24. (Logarithmic score function). A logarithmic score function for strictly positive probability distributions q = {q_j, j ∈ J} defined over a partition {E_j, j ∈ J} is any function of the form

    u{q, E_j} = A log q_j + B_j,  A > 0.

If the partition {E_j, j ∈ J} only contains two elements, so that the partition is simply {H, Hᶜ}, the locality condition is, of course, vacuous. In this case, u{q, E_j} = u{(q₁, 1 − q₁), 1_H} = f(q₁, 1_H), say, where 1_H is the indicator function for H, and the score function only depends on the probability q₁ attached to H, whether or not Hᶜ occurs. For u{(q₁, 1 − q₁), 1_H} to be proper we must have

    sup_{q₁∈[0,1]} {p₁ f(q₁, 1) + (1 − p₁) f(q₁, 0)} = p₁ f(p₁, 1) + (1 − p₁) f(p₁, 0),

so that, if the score function is smooth, then f must satisfy the functional equation

    x f′(x, 1) + (1 − x) f′(x, 0) = 0.


The logarithmic function f(x, 1) = A log x + B₁, f(x, 0) = A log(1 − x) + B₂ is then just one of the many possible solutions (see Good, 1952). We have assumed that the probability distributions to be considered as options assign strictly positive q_j to each E_j. This means that, given any particular q ∈ Q, we have no problem in calculating the expected utility arising from the logarithmic score function. It is worth noting, however, that since we place no (strictly positive) lower bound on the possible q_j, we have an example of an unbounded decision problem; i.e., a decision problem without extreme consequences.
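The next sketch (ours, again with A = 1 and B_j = 0) illustrates both properties just discussed: the logarithmic score rewards honest reporting, and it is unbounded below as any reported probability of a possible event approaches zero.

```python
import math

p = [0.2, 0.3, 0.5]   # actual beliefs

def expected_log_score(q, p):
    """Expected logarithmic score: sum_j p_j log q_j."""
    return sum(pj * math.log(qj) for pj, qj in zip(p, q))

honest = expected_log_score(p, p)            # equals -H(p)
hedged = expected_log_score([1/3] * 3, p)    # report uniform instead
print(honest > hedged)                       # True: honesty maximises the score

# Unboundedness: shrinking q_1 towards zero (renormalising the rest)
# drives the expected score to minus infinity -- no extreme consequences.
for eps in (1e-2, 1e-6, 1e-12):
    q = [eps, 0.3 + (0.2 - eps) / 2, 0.5 + (0.2 - eps) / 2]
    print(expected_log_score(q, p))
```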

2.7.3 Approximation and Discrepancy

We have argued that the optimal solution to an inference reporting problem (either for an individual, or for each of several individuals) is to state the appropriate actual beliefs, p, say. From a technical point of view, however, particularly within the mathematical extensions to be considered in Chapter 3, the precise computation of p may be difficult and we may choose instead to report an approximation to our beliefs, q, say, on the grounds that q is "close" to p, but much easier to calculate. The justification of such a procedure requires a study of the notion of "closeness" between two distributions.

Proposition 2.30. (Expected loss in probability reporting). If preferences are described by a logarithmic score function, the expected loss of utility in reporting a probability distribution q = {q_j, j ∈ J} defined over a partition {E_j, j ∈ J}, rather than the distribution p = {p_j, j ∈ J} representing actual beliefs, is given by

    δ{q | p} = A Σ_{j∈J} p_j log (p_j / q_j),  A > 0.

Moreover, δ{q | p} ≥ 0 with equality if and only if q = p.

Proof. Using Definition 2.24, the expected utility of reporting q when p is the actual distribution of beliefs is ū(q) = Σ_j {A log q_j + B_j} p_j, and thus

    ū(p) − ū(q) = A Σ_{j∈J} p_j log (p_j / q_j) = δ{q | p}.

The final statement in the theorem is a consequence of Proposition 2.29 since, because the logarithmic score function is proper, the expected utility of reporting q is maximised if, and only if, q = p, so that ū(p) ≥ ū(q), with equality if, and only if, p = q. An immediate direct proof is obtained using the fact that, for all x > 0, log x ≤ x − 1 with equality if, and only if, x = 1. Indeed, we then have

    Σ_{j∈J} p_j log (q_j / p_j) ≤ Σ_{j∈J} p_j (q_j / p_j − 1) = 0,

with equality if, and only if, q_j = p_j for all j.

The quantity δ{q | p}, which arises here as a difference between two expected utilities, was introduced by Kullback and Leibler (1951) as an ad hoc measure of (directed) divergence between two probability distributions.

Combining Propositions 2.29 and 2.30, it is clear that an individual with preferences approximately described by a proper local score function should beware of approximating by zero. This reflects the fact that the "tails" of the distribution are, generally speaking, extremely important in pure inference problems. This is in contrast to many practical decision problems where the form of the utility function often makes the solution robust with respect to changes in the "tails" of the distribution assumed. Proposition 2.30 suggests a natural, general measure of "lack of fit", or discrepancy, between a distribution and an approximation, when preferences are described by a logarithmic score function.

Definition 2.25. (Discrepancy of an approximation). The discrepancy between a strictly positive probability distribution p = {p_j, j ∈ J} over a partition {E_j, j ∈ J} and an approximation p̂ = {p̂_j, j ∈ J} is defined by

    δ{p̂ | p} = Σ_{j∈J} p_j log (p_j / p̂_j).

Example 2.5. (Poisson approximation to a binomial distribution). The behaviour of δ{p̂ | p} is well illustrated by a familiar, elementary example. Consider the binomial distribution

    p_j = C(n, j) θ^j (1 − θ)^{n−j},  j = 0, 1, ..., n,
    p_j = 0, otherwise,

and let

    p̂_j = e^{−nθ} (nθ)^j / j!,  j = 0, 1, 2, ...,

Figure 2.7 Discrepancy between a binomial distribution and its Poisson approximation (logarithms to base 2); panels correspond to n = 1, n = 2 and n = 10.
be its Poisson approximation. It is apparent from Figure 2.7 that δ{p̂ | p} decreases as either n increases or θ decreases, or both, and that the second factor is far more important than the first. However, it follows from our previous discussion that it would not be a good idea to reverse the roles and try to approximate a Poisson distribution by a binomial distribution.

When, as in Figure 2.7, logarithms to base 2 are used, the utility and discrepancy are measured on the well-known scale of bits of information (or entropy), which can be interpreted in terms of the expected number of yes-no questions required to identify the true event in the partition (see, for example, de Finetti, 1970/1974, p. 103, or Renyi, 1962/1970, p. 564).
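A short sketch of the computation behind Example 2.5 (our code; base-2 logarithms, so the discrepancy is reported in bits):

```python
import math

def binom(j, n, t):
    return math.comb(n, j) * t**j * (1 - t)**(n - j)

def poisson(j, lam):
    return math.exp(-lam) * lam**j / math.factorial(j)

def discrepancy_bits(n, t):
    """delta{p_hat | p} = sum_j p_j log2(p_j / p_hat_j), binomial vs Poisson(n*t)."""
    return sum(binom(j, n, t) * math.log2(binom(j, n, t) / poisson(j, n * t))
               for j in range(n + 1))

for n in (1, 2, 10):
    print(n, [round(discrepancy_bits(n, t), 4) for t in (0.05, 0.1, 0.25)])
# The discrepancy decreases as n increases and, more markedly, as theta decreases.
```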

Clearly, Definition 2.25 provides a systematic approach to approximation in pure inference contexts. The best approximation within a given family will be that which minimises the discrepancy.

2.7.4 Information

In Section 2.4.2, we showed that, for quantitative coherence, any new information D should be incorporated into the analysis by updating beliefs via Bayes' theorem, so that the initial representation of beliefs P(·) is updated to the conditional probability measure P(· | D). In Section 2.7.2, we showed that, within the context of the pure inference reporting problem, utility is defined in terms of the logarithmic score function.

Proposition 2.31. (Expected utility of data). If preferences are described by a logarithmic score function for the class of probability distributions defined over a partition {E_j, j ∈ J}, then the expected increase in utility provided by data D, when the initial probability distribution {P(E_j), j ∈ J} is strictly positive, is given by

    A Σ_{j∈J} P(E_j | D) log [P(E_j | D) / P(E_j)],

where A > 0 is arbitrary, and {P(E_j | D), j ∈ J} is the conditional probability distribution, given D. Moreover, this expected increase in utility is non-negative, and is zero if, and only if, P(E_j | D) = P(E_j) for all j.

Proof. By Definition 2.24, the utilities of reporting P(·) or P(· | D), were E_j known to be true, would be A log P(E_j) + B_j and A log P(E_j | D) + B_j, respectively. Thus, conditional on D, the expected increase in utility provided by D is given by

    Σ_{j∈J} {A log P(E_j | D) − A log P(E_j)} P(E_j | D) = A Σ_{j∈J} P(E_j | D) log [P(E_j | D) / P(E_j)],

which, by Proposition 2.30, is non-negative and is zero if, and only if, for all j, P(E_j | D) = P(E_j).

In the context of pure inference problems, we shall find it convenient to underline the fact that, because of the use of the logarithmic score function, utility assumes a special form and establishes a link between utility theory and classical information theory. This motivates Definitions 2.26 and 2.27.

Definition 2.26. (Information from data). The amount of information about a partition {E_j, j ∈ J} provided by the data D, when the initial distribution over {E_j, j ∈ J} is p₀ = {P(E_j), j ∈ J}, is defined to be

    I(D | p₀) = Σ_{j∈J} P(E_j | D) log [P(E_j | D) / P(E_j)],

where {P(E_j | D), j ∈ J} is the conditional probability distribution given the data D.
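A minimal sketch of Definition 2.26 (hypothetical prior and likelihood): update p₀ by Bayes' theorem, then evaluate I(D | p₀).

```python
import math

prior = {"E1": 0.3, "E2": 0.7}           # p0 = {P(E_j)}, hypothetical
lik_D = {"E1": 0.8, "E2": 0.2}           # P(D | E_j), hypothetical

pD = sum(lik_D[E] * prior[E] for E in prior)          # P(D)
post = {E: lik_D[E] * prior[E] / pD for E in prior}   # P(E_j | D)

# I(D | p0) = sum_j P(E_j | D) log [P(E_j | D) / P(E_j)]
info = sum(post[E] * math.log(post[E] / prior[E]) for E in prior)
print(info)   # non-negative; zero only if the data leave beliefs unchanged
```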


It follows from Definition 2.26 that the amount of information provided by data D is equal to δ(p₀ | p_D), the discrepancy measure if p₀ = {P(E_j), j ∈ J} is considered as an approximation to p_D = {P(E_j | D), j ∈ J}. Another interesting interpretation of I(D | p₀) arises from the following analysis. Conditional on E_j, log P(E_j) and log P(E_j | D) measure, respectively, how good the initial and the conditional distributions are in predicting the "true" hypothesis E_j ≡ H_j, so that log P(E_j | D) − log P(E_j) is a measure of the value of D, were E_j known to be true; I(D | p₀) is simply the expected value of that difference calculated with respect to p_D. It should be clear from the preceding discussion that I(D | p₀) measures indirectly the information provided by the data in terms of the changes produced in the probability distribution of interest. The amount of information is thus seen to be a relative measure, which obviously depends on the initial distribution. Attempts to define absolute measures of information have systematically failed to produce concepts of lasting value.
In the finite case, the entropy of the distribution p = {p₁, ..., p_n}, defined by

    H{p} = − Σ_{j=1}^{n} p_j log p_j,

has been proposed and widely accepted as an absolute measure of uncertainty. The recognised fact that its apparently natural extension to the continuous case does not make sense (if only because it is heavily dependent on the particular parametrisation used) should, however, have raised doubts about the universality of this concept. The fact that, in the finite case, H{p} as a measure of uncertainty (and −H{p} as a measure of "absolute" information) seems to work correctly is explained (from our perspective) by the fact that

    Σ_{j=1}^{n} p_j log [p_j / (1/n)] = log n − H{p},

so that, in terms of the above discussion, −H{p} may be interpreted, apart from an unimportant additive constant, as the amount of information which is necessary to obtain p = {p₁, ..., p_n} from an initial discrete uniform distribution (see Section 3.2.2), which acts as an origin or reference measure of uncertainty. As we shall see in detail later, the problem of extending the entropy concept to continuous distributions is closely related to that of defining an origin or reference measure of uncertainty in the continuous case, a role unambiguously played by the uniform distribution in the finite case. For detailed discussion of H{p} and other proposed entropy measures, see Renyi (1961).
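The identity just used is easily checked numerically (a minimal sketch, natural logarithms):

```python
import math

p = [0.7, 0.2, 0.1]
n = len(p)

H = -sum(pj * math.log(pj) for pj in p)                     # entropy H{p}
info_from_uniform = sum(pj * math.log(pj * n) for pj in p)  # delta from uniform to p

print(math.isclose(info_from_uniform, math.log(n) - H))    # True
```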

We shall on occasion wish to consider the idea of the amount of information which may be expected from an experiment e, the expectation being calculated before the results of the experiment are actually available.


Definition 2.27. (Expected information from an experiment). The expected information to be provided by an experiment e about a partition {E_j, j ∈ J}, when the initial distribution over {E_j, j ∈ J} is p₀ = {P(E_j), j ∈ J}, is

    I(e | p₀) = Σ_{i∈I} I(D_i | p₀) P(D_i),

where the possible results of the experiment e, {D_i, i ∈ I}, occur with probabilities {P(D_i), i ∈ I}.

Proposition 2.32. An alternative expression for the expected information is

    I(e | p₀) = Σ_{i∈I} Σ_{j∈J} P(E_j ∩ D_i) log [P(E_j ∩ D_i) / (P(E_j) P(D_i))],

where P(E_j ∩ D_i) = P(D_i) P(E_j | D_i), and {P(E_j | D_i), j ∈ J} is the conditional distribution, given the occurrence of D_i, corresponding to the initial distribution p₀ = {P(E_j), j ∈ J}. Moreover, I(e | p₀) ≥ 0, with equality if, and only if, for all E_j and D_i, P(E_j ∩ D_i) = P(E_j) P(D_i).

Proof. Let q_i = P(D_i), p_j = P(E_j) and p_{ij} = P(E_j | D_i). Then, by Definition 2.27,

    I(e | p₀) = Σ_{i∈I} q_i Σ_{j∈J} p_{ij} log (p_{ij} / p_j),

and the result now follows from the fact that, by Bayes' theorem, P(E_j ∩ D_i) = q_i p_{ij}. Since, by Proposition 2.31, I(D_i | p₀) ≥ 0 with equality iff P(E_j | D_i) = P(E_j), it follows from Definition 2.27 that I(e | p₀) ≥ 0 with equality if, and only if, for all E_j and D_i, P(E_j ∩ D_i) = P(E_j) P(D_i).
The expression for I(e | p₀) given by Proposition 2.32 is Shannon's (1948) measure of expected information. We have thus found, in a decision theoretical framework, a natural interpretation of this famous measure of expected information: Shannon's expected information is the expected utility provided by an experiment in a pure inference context, when an individual's preferences are described by a smooth, proper, local score function. In conclusion, we have suggested that the problem of reporting inferences can be viewed as a particular decision problem and thus should be analysed within



the framework of decision theory. We have established that, with a natural characterisation of an individuals utility function when faced with a pure inference problem, preferences should be described by a logarithmic score function. We have also seen that, within this framework, discrepancy and amount of information are naturally defined in terms of expected loss of utility and expected increase in utility, respectively, and that maximising expected Shannon information is a particular instance of maximising expected utility. We shall see in Section 3.4 how these results, established here for finite partitions, extend straightforwardly to the continuous case.

2.8 DISCUSSION AND FURTHER REFERENCES

2.8.1 Operational Definitions

In everyday conversation, the way in which we use language is typically rather informal and unselfconscious, and we tolerate each other's ambiguities and vacuities for the most part, occasionally seeking an ad hoc clarification of a particular statement or idea if the context seems to justify the effort required in trying to be a little more precise. (For a detailed account of the ambiguities which plague qualitative probability expressions in English, see Mosteller and Youtz, 1990.) In the context of scientific and philosophical discourse, however, there is a paramount need for statements which are meaningful and unambiguous. The everyday, tolerant, ad hoc response will therefore no longer suffice. More rigorous habits of thought are required, and we need to be selfconsciously aware of the precautions and procedures to be adopted if we are to arrive at statements which make sense. A prerequisite for making sense is that the fundamental concepts which provide the substantive content of our statements should themselves be defined in an essentially unambiguous manner. We are thus driven to seek definitions of fundamental notions which can be reduced ultimately to the touchstone of actual or potential personal experience, rather than remaining at the level of mere words or phrases. This kind of approach to definitions is closely related to the philosophy of pragmatism, as formulated in the second half of the nineteenth century by Peirce, who insisted that clarity in thinking about concepts could only be achieved by concentrating attention on the conceivable practical effects associated with a concept, or the practical consequence of adopting one form of definition rather than another. In Peirce (1878), this point of view was summarised as follows:
Consider what effects, that might conceivably have practical bearings, we conceive the object of our conception to have. Then, our conception of these effects is the whole of our conception of the object.


In some respects, however, this position is not entirely satisfactory in that it fails to go far enough in elaborating what is to be understood by the term "practical". This crucial elaboration was provided by Bridgman (1927) in a book entitled The Logic of Modern Physics, where the key idea of an operational definition is introduced and illustrated by considering the concept of length:

... what do we mean by the length of an object? We evidently know what we mean by length if we can tell what the length of any and every object is, and for the physicist nothing more is required. To find the length of an object, we have to perform certain physical operations. The concept of length is therefore fixed when the operations by which length is measured are fixed: that is, the concept of length involves as much as, and nothing more than, the set of operations by which length is determined. In general, we mean by any concept nothing more than a set of operations; the concept is synonymous with the corresponding set of operations. If the concept is physical ... the operations are actual physical measurements ...; or if the concept is mental ... the operations are mental operations ...

Throughout this work, we shall seek to adhere to the operational approach to defining concepts in order to arrive at meaningful and unambiguous statements in the context of representing beliefs and taking actions in situations of uncertainty. Indeed, we have stressed this aspect of our thinking in Sections 2.1 to 2.7, where we made the practical, operational idea of preference between options the fundamental starting point and touchstone for all other definitions. We also noted the inevitable element of idealisation, or approximation, implicit in the operational approach to our concepts, and we remarked on this at several points in Section 2.3. Since many critics of the personalistic Bayesian viewpoint claim to find great difficulty with this feature of the approach, often suggesting that it undermines the entire theory, it is worth noting Bridgman's very explicit recognition that all experience is subject to error and that all we can do is to take sufficient precautions when specifying sets of operations to ensure that remaining unspecified variations in procedure have negligible effects on the results of interest. This is well illustrated by Bridgman's account of the operational concept of length and its attendant idealisations and approximations:
... we take a measuring rod, lay it on the object so that one of its ends coincides with one end of the object, mark on the object the position of the rod, then move the rod along in a straight line extension of its previous position until the first end coincides with the previous position of the second end, repeat this process as often as we can, and call the length the total number of times the rod was applied. This procedure, apparently so simple, is in practice exceedingly complicated, and doubtless a full description of all the precautions that must be taken would fill a large treatise. We must, for example, be sure that the temperature of the rod is the standard temperature at which its length is defined, or else we must make a correction for it; or we must correct for the gravitational distortion of the rod if we measure a vertical length; or we must be sure that the rod is not a magnet or is not subject to electrical forces ... we must go further and specify all the details by which the rod is moved from one position to the next on the object, its precise path through space and its velocity and acceleration in getting from one position to another. Practically, of course, precautions such as these are not taken, but the justification is in our experience that variations of procedure of this kind are without effect on the final result ...

This pragmatic recognition that there are inevitable limitations in any concrete application of a set of operational procedures is precisely the spirit of our discussion of Axioms 4 and 5 in Section 2.3. In practical terms, we have to stop somewhere, even though, in principle, we could indefinitely refine our measurement operations. What matters is to be able to achieve sufficient accuracy to avoid unacceptable distortion in any analysis of interest.

2.8.2 Quantitative Coherence Theories

In a comprehensive review of normative decision theories leading to the expected utility criterion, Fishburn (1981) lists over thirty different axiomatic formulations of the principles of coherence, reflecting a variety of responses to the underlying conflict between axiomatic simplicity and structural flexibility in the representation of decision problems. Fishburn sums up the dilemma as follows: On the one hand, we would like our axioms to be simple, interpretable, intuitively clear, and capable of convincing others that they are appealing criteria of coherency and consistency in decision making under uncertainty, but to do this it seems essential to invoke strong structural conditions. On the other hand, we would like our theory to adhere to the loose structures that often arise in realistic decision situations, but if this is done then we will be faced with fairly complicated axioms that accommodate these loose structures. In addition, we should like the definitions of the basic concepts of probability and utility to have strong and direct links with practical assessment procedures, in conformity with the operational philosophy outlined above. With these considerations in mind, our purpose here is to provide a brief historical review of the foundational writings which seem to us the most significant. This will serve in part to acknowledge our general intellectual indebtedness and orientation, and in part to explain and further motivate our own particular choice of axiom system. The earliest axiomatic approach to the problem of decision making under uncertainty is that of Ramsey (1926), who presented the outline of a formal system. The key postulate in Ramsey's theory is the existence of a so-called "ethically neutral"

event E, say, which, expressed in terms of our notation for options, has the property that {c₁ | E, c₂ | Eᶜ} ∼ {c₁ | Eᶜ, c₂ | E}, for any consequences c₁, c₂. It is then rather natural to define the degree of belief in such an event to be 1/2 and, from this quantitative basis, it is straightforward to construct an operational measure of utility for consequences. This, in turn, is used to extend the definition of degree of belief to general events by means of an expected utility model. From a conceptual point of view, Ramsey's theory seems to us, as indeed it has to many other writers, a revolutionary landmark in the history of ideas. From a mathematical point of view, however, the treatment is rather incomplete and it was not until 1954, with the publication of Savage's (1954) book The Foundations of Statistics, that the first complete formal theory appeared. No mathematical completion of Ramsey's theory seems to have been published, but a closely related development can be found in Pfanzagl (1967, 1968). Savage's major innovation in structuring decision problems is to define what he calls acts (options, in our terminology) as functions from the set of uncertain possible outcomes into the set of consequences. His key coherence assumption is then that of a complete, transitive order relation among acts and this is used to define qualitative probabilities. These are extended into quantitative probabilities by means of a "continuously divisible" assumption about events. Utilities are subsequently introduced using ideas similar to those of von Neumann and Morgenstern (1944/1953), who had, ten years earlier, presented an axiom system for utility alone, assuming the prior existence of probabilities. The Savage axiom system is a great historical achievement and provides the first formal justification of the personalistic approach to probability and decision making; for a modern appraisal see Shafer (1986) and the lively ensuing discussion. See, also, Hens (1992). Of course, many variations on an axiomatic theme are possible and other Savage-type axiom systems have been developed since by Stigum (1972), Roberts (1974), Fishburn (1975) and Narens (1976). Suppes (1956) presented a system which combined elements of Savage's and Ramsey's approaches. See, also, Suppes (1960, 1974) and Savage (1970). There are, however, two major difficulties with Savage's approach, which impose severe limitations on the range of applicability of the theory. The first of these difficulties stems from the "continuously divisible" assumption about events, which Savage uses as the basis for proceeding from qualitative to quantitative concepts. Such an assumption imposes severe constraints on the allowable forms of structure for the set of uncertain outcomes; in fact, it even prevents the theory from being directly applicable to situations involving a finite or countably infinite set of possible outcomes. One way of avoiding this embarrassing structural limitation is to introduce a quantitative element into the system by a device like that of Ramsey's ethically neutral event. This is directly defined to have probability 1/2 and thus enables Ramsey to get the quantitative ball rolling without imposing undue constraints on



the structure. All he requires is that (at least) one such event be included in the representation of the uncertain outcomes. In fact, a generalisation of Ramsey's idea re-emerges in the form of canonical lotteries, introduced by Anscombe and Aumann (1963) for defining degrees of belief, and by Pratt, Raiffa and Schlaifer (1964, 1965) as a basis for simultaneously quantifying personal degrees of belief and utilities in a direct and intuitive manner. The basic idea is essentially that of a standard measuring device, in some sense external to the real-world events and options of interest. It seems to us that this idea ties in perfectly with the kind of operational considerations described above, and the standard events and options that we introduced in Section 2.3 play this fundamental operational role in our own system. Other systems using standard measuring devices (sometimes referred to as external scaling devices) are those of Fishburn (1967b, 1969) and Balch and Fishburn (1974). A theory which, like ours, combines a standard measuring device with a fundamental notion of conditional preference is that of Luce and Krantz (1971). The second major difficulty with Savage's theory, and one that also exists in many other theories (see Table 1 in Fishburn, 1981), is that the Savage axioms imply the boundedness of utility functions (an implication of which Savage was apparently unaware when he wrote The Foundations of Statistics, but which was subsequently proved by Fishburn, 1970). The theory does not therefore justify the use of many mathematically convenient and widely used utility functions; for example, those implicit in forms such as quadratic loss and logarithmic score. We take the view, already hinted at in our brief discussion of medical and monetary consequences in Section 2.5, that it is often conceptually and mathematically convenient to be able to use structural representations going beyond what we perceive to be the essentially finitistic and bounded characteristics of real-world problems. And yet, in presenting the basic quantitative coherence axioms it is important not to confuse the primary definitions and coherence principles with the secondary issues of the precise forms of the various sets involved. For this reason, we have so far always taken options to be defined by finite partitions; indeed, within this simple structure, we hope that the essence of the quantitative coherence theory has already been clearly communicated, uncomplicated by structural complexities. Motivated by considerations of mathematical convenience, however, we shall, in Chapter 3, relax the constraint imposed on the form of the action space. We shall then arrive at a sufficiently general setting for all our subsequent developments and applications.

2.8.3 Related Theories

Our previous discussion centred on complete axiomatic approaches to decision problems, involving a unified development of both probability and utility concepts. In our view, a unified treatment of the two concepts is inescapable if operational considerations are to be taken seriously. However, there have been a number of attempted developments of probability ideas separate from utility considerations, as well as separate developments of utility ideas presupposing the existence of probabilities. In addition, there is a considerable literature on information-theoretic ideas closely related to those of Section 2.7. In this section, we shall provide a summary overview of a number of these related theories, grouped under the following subheadings: (i) Monetary Bets and Degrees of Belief, (ii) Scoring Rules and Degrees of Belief, (iii) Axiomatic Approaches to Degrees of Belief, (iv) Axiomatic Approaches to Utilities and (v) Information Theories. For the most part, we shall simply give what seem to us the most important historical references, together with some brief comments. The first two topics will, however, be treated at greater length; partly because of their close relation with the main concerns of this book, and partly because of their connections with the important practical topic of the assessment of beliefs.
Monetary Bets and Degrees of Belief

An elegant demonstration that coherent degrees of belief satisfy the rules of (finitely additive) probability was given by de Finetti (1937/1964), without explicit use of the utility concept. Using the notation for options introduced in Section 2.3, de Finetti's approach can be summarised as follows. If consequences are assumed to be monetary, and if, given an arbitrary monetary sum m and uncertain event E, an individual's preferences among options are such that {pm | Ω} ∼ {m | E, 0 | Eᶜ}, then the individual's degree of belief in E is defined to be p. This definition is virtually identical to Bayes' own definition of probability (see our later discussion under the heading of Axiomatic Approaches to Degrees of Belief). In modern economic terminology, probability can be considered to be a marginal rate of substitution or, more simply, a kind of "price". Given that an individual has specified his or her degrees of belief for some collection of events by repeated use of the above definition, either it is possible to arrange a form of monetary bet in terms of these events which is such that the individual will certainly lose, a so-called "Dutch book", or such an arrangement is impossible. In the latter case, the individual is said to have specified a coherent set of degrees of belief. It is now straightforward to verify that coherent degrees of belief have the properties of finitely additive probabilities. To demonstrate that 0 ≤ p ≤ 1, for any E and m, we can argue as follows. An individual who assigns p > 1 is implicitly agreeing to pay a stake larger than m to enter a gamble in which the maximum prize he or she can win is m; an individual who assigns p < 0 is implicitly agreeing to offer a gamble in which he or she will pay out either m or nothing in return for a negative stake, which is equivalent to paying an opponent to enter such a gamble. In either case, a bet can be arranged


which will result in a certain loss to the individual, and avoidance of this possibility requires that 0 ≤ p ≤ 1. To demonstrate the additive property of degrees of belief for exclusive and exhaustive events, E₁, E₂, ..., E_n, we proceed as follows. If an individual specifies p₁, p₂, ..., p_n to be his or her degrees of belief in those events, this is an implicit agreement to pay a total stake of p₁m₁ + p₂m₂ + ⋯ + p_n m_n in order to enter a gamble resulting in a prize of m_i if E_i occurs, and thus a gain, or net return, of g_i = m_i − Σ_j p_j m_j, which could, of course, be negative. In order to avoid the possibility of the m_i's being chosen in such a way as to guarantee the negativity of the g_i's for fixed p_i's in this system of linear equations, it is necessary that the determinant of the matrix relating the m_i's to the g_i's be zero, so that the linear system cannot be solved; this turns out to require that p₁ + p₂ + ⋯ + p_n = 1. Moreover, it is easy to check that this is also a sufficient condition for coherence: it implies Σ_i p_i g_i = 0, for any choice of the m_i's, and hence the impossibility of all the returns being negative. The extension of these ideas to cover the revision of degrees of belief conditional on new information proceeds in a similar manner, except that an individual's degree of belief in an event E conditional on an event F is defined to be the number q such that, given any monetary sum m, we have the equivalence {qm | Ω} ∼ {m | E ∩ F, 0 | Eᶜ ∩ F, qm | Fᶜ}, according to the individual's preference ordering among options. The interpretation of this definition is straightforward: having paid a stake of qm, if F occurs we are confronted with a gamble with prizes m if E occurs, and nothing otherwise; if F does not occur the bet is "called off" and the stake returned. However, despite the intuitive appeal of this simple and neat approach, it has two major shortcomings from an operational viewpoint. In the first place, it is clear that the definitions cannot be taken seriously in terms of arbitrary monetary sums: the perceived value of a stake or a return is not equivalent to its monetary value and the missing utility concept is required in order to overcome the difficulty. This point was later recognised by de Finetti (see Kyburg and Smokler, 1964/1980, p. 62, footnote (a)), but has its earlier origins in the celebrated St. Petersburg paradox (first discussed in terms of utility by Daniel Bernoulli, 1730/1954). For further discussion of possible forms of utility for money, see, for example, Pratt (1964), LaValle (1968), Lindley (1971/1985, Chapter 5) and Hull et al. (1973). Additionally, one may explicitly recognise that some people have a positive utility for gambling (see, for instance, Conlisk, 1993). An ad hoc modification of de Finetti's approach would be to confine attention to small stakes (thus, in effect, restricting attention to a range of outcomes over which the utility can be taken as approximately linear) and the argument, thus modified, has considerable pedagogical and, perhaps, practical use, despite its rather informal nature. A more formal argument based on the avoidance of certain losses in betting formulations has been given by Freedman and Purves (1969). Related

arguments have also been used by Cornfield (1969), Heath and Sudderth (1972) and Buehler (1976) to expand on de Finetti's concept of coherent systems of bets. In addition to the problem of "non-linearity in the face of risk", alluded to above, there is also the difficulty that unwanted game-theoretic elements may enter the picture if we base a theory on ideas such as "opponents" choosing the levels of prizes in gambles. For this reason, de Finetti himself later preferred to use an approach based on scoring rules, a concept we have already introduced in Section 2.7.

Scoring Rules and Degrees of Belief

The scoring rule approach to the definition of degrees of belief and the derivation of their properties when constrained to be coherent is due to de Finetti (1963, 1964), with important subsequent generalisations by Savage (1971) and Lindley (1982a). In terms of the quadratic scoring rule, the development proceeds as follows. Given an uncertain event E, an individual is asked to select a number, p, with the understanding that if E occurs he or she is to suffer a penalty (or loss) of L = (1 − p)², whereas if E does not occur he or she is to suffer a penalty of L = p². Using the indicator function for E, the penalty can be written in the general form L = (1_E − p)². The number, p, which the individual chooses is defined to be his or her degree of belief in E. Suppose now that E₁, E₂, ..., E_n are an exclusive and exhaustive collection of uncertain events for which the individual, using the quadratic scoring rule scheme, has to specify degrees of belief p₁, p₂, ..., p_n, respectively, subject now to the penalty

    L = (1_{E₁} − p₁)² + (1_{E₂} − p₂)² + ⋯ + (1_{E_n} − p_n)².

Given a specification, p₁, p₂, ..., p_n, either it is possible to find an alternative specification, q₁, q₂, ..., q_n, say, such that

    (1_{E₁} − q₁)² + ⋯ + (1_{E_n} − q_n)² < (1_{E₁} − p₁)² + ⋯ + (1_{E_n} − p_n)²

for any assignment of the value 1 to one of the E_i's and 0 to the others, or it is not possible to find such q₁, q₂, ..., q_n. In the latter case, the individual is said to have specified a coherent set of degrees of belief. The underlying idea in this development is clearly very similar to that of de Finetti's (1937/1964) approach where the avoidance of a "Dutch book" is the basic criterion of coherence. A simple geometric argument now establishes that, for coherence, we must have 0 ≤ p_i ≤ 1, for i = 1, 2, ..., n, and p₁ + p₂ + ⋯ + p_n = 1. To see this, note that the n logically compatible assignments of values 1 and 0 to the E_i's define n points in ℜⁿ. Thinking of p₁, p₂, ..., p_n as defining a further point in ℜⁿ, the coherence condition can be reinterpreted as requiring that this latter point cannot be moved in such a way as to reduce the distance from all the other n points. This
2.8 Discussion and Further References

09

means that $p_1, p_2, \ldots, p_n$ must define a point in the convex hull of the other $n$ points, thus establishing the required result. The extension of this approach to cover the revision of degrees of belief conditional on new information proceeds as follows. An individual's degree of belief in an event $E$ conditional on the occurrence of an event $F$ is defined to be the number $q$ which he or she chooses when confronted with a penalty defined by $L = 1_F(1_E - q)^2$. The interpretation of this penalty is straightforward. Indeed, if $F$ occurs, the specification of $q$ proceeds according to the penalty $(1_E - q)^2$; if $F$ does not occur, there is no penalty, a formulation which is clearly related to the idea of called-off bets used in de Finetti's 1937 approach. Suppose now that, in addition to the conditional degree of belief $q$, the numbers $p$ and $r$ are the individual's degrees of belief, respectively, for the events $E \cap F$ and $F$, specified subject to the penalty

$$L = 1_F(1_E - q)^2 + (1_E 1_F - p)^2 + (1_F - r)^2.$$
To derive the constraints on $p$, $q$ and $r$ imposed by coherence, which demands that no other choices will lead to a strictly smaller $L$, whatever the logically compatible outcomes of the events are, we argue as follows. If $u$, $v$, $w$, respectively, are the values which $L$ takes in the cases where $E \cap F$, $E^c \cap F$ and $F^c$ occur, then $p$, $q$, $r$ satisfy the equations
$$u = (1 - q)^2 + (1 - p)^2 + (1 - r)^2$$
$$v = q^2 + p^2 + (1 - r)^2$$
$$w = p^2 + r^2.$$
If $p$, $q$, $r$ defined a point in $\Re^3$ where the Jacobian of the transformation defined by the above equations did not vanish, it would be possible to move from that point in a direction which simultaneously reduced the values $u$, $v$ and $w$. Coherence therefore requires that the Jacobian be zero. A simple calculation shows that this reduces to the condition $q = p/r$, which is, again, Bayes' theorem. De Finetti's penalty criterion and related ideas have been critically re-examined by a number of authors. Relevant additional references are Myerson (1979), Regazzini (1983), Gatsonis (1984), Eaton (1992) and Gilio (1992a). See, also, Piccinato (1986).
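The dominance phenomenon underlying this penalty-based definition of coherence is easy to exhibit numerically. The following sketch is our own illustration, not part of the original text: it assumes NumPy and uses a standard sort-based routine for Euclidean projection onto the probability simplex, showing that an incoherent quadratic-score specification is strictly dominated, whatever the outcome, by its projection.

    import numpy as np

    def project_to_simplex(p):
        # Euclidean projection of p onto {q : q_i >= 0, sum_i q_i = 1},
        # via the usual sort-and-threshold algorithm.
        u = np.sort(p)[::-1]
        css = np.cumsum(u)
        idx = np.arange(1, len(p) + 1)
        rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
        theta = (1.0 - css[rho]) / (rho + 1.0)
        return np.maximum(p + theta, 0.0)

    p = np.array([0.5, 0.4, 0.3])    # incoherent: the p_i sum to 1.2
    q = project_to_simplex(p)        # a coherent alternative

    for i in range(len(p)):          # outcome in which E_i occurs
        e = np.zeros(len(p))
        e[i] = 1.0
        print(np.sum((e - p) ** 2), ">", np.sum((e - q) ** 2))

Strict dominance for every outcome is guaranteed by the projection inequality $|p - v|^2 \ge |p - q|^2 + |q - v|^2$, valid for every point $v$ of the simplex; a specification already lying in the simplex cannot be dominated in this way.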
Axiomatic Approaches to Degrees of Belief

Historically, the idea of probability as degree of belief has received a great deal of distinguished support, including contributions from James Bernoulli (1713/1899), Laplace (1774/1986, 1814/1952), De Morgan (1847) and Borel (1924/1964). However, so far as we know, none of these writers attempted an axiomatic development of the idea. The first recognisably axiomatic approach to a theory of degrees of belief was that of Bayes (1763) and the magnitude of his achievement has been clearly
recognised in the two centuries following his death by the adoption of the adjective Bayesian as a description of the philosophical and methodological developments which have been inspired, directly or indirectly, by his essay. By present day standards, Bayes' formulation is, of course, extremely informal, and a more formal, modern approach only began to emerge a century and a half later, in a series of papers by Wrinch and Jeffreys (1919, 1921). Formal axiom systems which whole-heartedly embrace the principle of revising beliefs through systematic use of Bayes' theorem are discussed in detail by Jeffreys (1931/1973, 1939/1961), whose profound philosophical and methodological contributions to Bayesian statistics are now widely recognised; see, for example, the evaluations of his work by Geisser (1980a), by Good (1980a) and by Lindley (1980a), in the volume edited by Zellner (1980). From a foundational perspective, however, the flavour of Jeffreys' approach seems to us to place insufficient emphasis on the inescapably personal nature of degrees of belief, resulting in an over-concentration on "conventional" representations of degrees of belief derived from "logical" rather than operational considerations (despite the fact that Jeffreys was highly motivated by real world applications!). Similar criticisms seem to us to apply to the original and elegant formal development given by Cox (1946, 1961) and Jaynes (1958), who showed that the probability axioms constitute the only consistent extension of ordinary (Aristotelian) logic in which degrees of belief are represented by real numbers. We should point out, however, that our emphasis on operational considerations and the subjective character of degrees of belief would, in turn, be criticised by many colleagues who, in other respects, share a basic commitment to the Bayesian approach to statistical problems. See Good (1965, Chapter 2) for a discussion of the variety of attitudes to probability compatible with a systematic use of the Bayesian paradigm. There are, of course, many other examples of axiomatic approaches to quantifying uncertainty in some form or another. In the finite case, this includes work by Kraft et al. (1959), Scott (1964), Fishburn (1970, Chapter 4), Krantz et al. (1971), Domotor and Stelzer (1971), Suppes and Zanotti (1976, 1982), Heath and Sudderth (1978) and Luce and Narens (1978). The work of Keynes (1921/1929) and Carnap (1950/1962) deserves particular mention and will be further discussed later in Section 2.8.4. Fishburn (1986) provided an authoritative review of the axiomatic foundations of subjective probability, which is followed by a long, stimulating discussion. See, also, French (1982) and Chuaqui and Malitz (1983).
Axiomatic Approaches to Utilities

Assuming the prior existence of probabilities, von Neumann and Morgenstern (1944/1953) presented axioms for coherent preferences which led to a justification of utilities as numerical measures of value for consequences and to the optimality criterion of maximising expected utility. Much of Savage's (1954/1972) system
was directly inspired by this seminal work of von Neumann and Morgenstern and the influence of their ideas extends into a great many of the systems we have mentioned. Other early developments which concentrate on the utility aspects of the decision problem include those of Friedman and Savage (1948, 1952), Marschak (1950), Arrow (1951a), Herstein and Milnor (1953), Edwards (1954) and Debreu (1960). Seminal references are reprinted in Page (1968). General accounts of utility are given in the books by Blackwell and Girshick (1954), Luce and Raiffa (1957), Chernoff and Moses (1959) and Fishburn (1970). Extensive bibliographies are given in Savage (1954/1972) and Fishburn (1968, 1981). Discussions of the experimental measurement of utility are provided by Edwards (1954), Davidson et al. (1957), Suppes and Walsh (1959), Becker et al. (1963), DeGroot (1963), Becker and McClintock (1967), Savage (1971) and Hull et al. (1973). DeGroot (1970, Chapter 7) presents a general axiom system for utilities which imposes rather few mathematical constraints on the underlying decision problem structure. Multiattribute utility theory is discussed, among others, by Fishburn (1964) and Keeney and Raiffa (1976). Other discussions of utility theory include Fishburn (1967a, 1988b) and Machina (1982, 1987). See, also, Schervish et al. (1990).

Information Theories

Measures of information are closely related to ideas of uncertainty and probability and there is a considerable literature exploring the connections between these topics. The logarithmic information measure was proposed independently by Shannon (1948) and Wiener (1948) in the context of communication engineering; Lindley (1956) later suggested its use as a statistical criterion in the design of experiments. The logarithmic divergency measure was first proposed by Kullback and Leibler (1951) and was subsequently used as the basis for an information-theoretic approach to statistics by Kullback (1959/1968). A formal axiomatic approach to measures of information in the context of uncertainty was provided by Good (1966), who has made numerous contributions to the literature of the foundations of decision making and the evaluation of evidence. Other relevant references on information concepts are Rényi (1964, 1966, 1967) and Särndal (1970). The mathematical results which lead to the characterisation of the logarithmic scoring rule for reporting probability distributions have been available for some considerable time. Logarithmic scores seem to have been first suggested by Good (1952), but he only dealt with dichotomies, for which the uniqueness result is not applicable. The first characterisation of the logarithmic score for a finite distribution was attributed to Gleason by McCarthy (1956); Aczél and Pfanzagl (1966), Arimoto (1970) and Savage (1971) have also given derivations of this form of scoring rule under various regularity conditions. By considering the inference reporting problem as a particular case of a decision problem, we have provided (in Section 2.7) a natural, unifying account of
the fundamental and close relationship between information-theoretic ideas and the Bayesian treatment of pure inference problems. Based on work of Bernardo (1979a), this analysis will be extended, in Chapter 3, to cover continuous distributions.
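The link between the logarithmic score and the logarithmic divergency measure mentioned above can be checked directly. In the following sketch (our illustration, assuming NumPy; the two distributions are arbitrary), the expected logarithmic penalty for reporting $q$ when beliefs are $p$ exceeds that for reporting $p$ itself by exactly the Kullback-Leibler divergence:

    import numpy as np

    def expected_log_penalty(p, q):
        # Expected penalty -sum_i p_i log q_i when reporting q
        # and the events actually have probabilities p.
        return -np.sum(p * np.log(q))

    p = np.array([0.7, 0.2, 0.1])    # actual beliefs
    q = np.array([0.5, 0.3, 0.2])    # an alternative report

    excess = expected_log_penalty(p, q) - expected_log_penalty(p, p)
    kl = np.sum(p * np.log(p / q))   # Kullback-Leibler divergence
    print(excess, kl)                # equal: honest reporting is optimal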

2.8.4 Critical Issues

We shall conclude this chapter by providing a summary overview of our position in relation to some of the objections commonly raised against the foundations of Bayesian statistics. These will be dealt with under the following subheadings: (i) Dynamic Frame of Discourse, (ii) Updating Subjective Probability, (iii) Relevance of an Axiomatic Approach, (iv) Structure of the Set of Relevant Events, (v) Prescriptive Nature of the Axioms, (vi) Precise, Complete, Quantitative Preferences, (vii) Subjectivity of Probability, (viii) Statistical Inference as a Decision Problem and (ix) Communication and Group Decision Making.

Dynamic Frame of Discourse
As we indicated in Chapter 1, our concern in this volume is with coherent beliefs and actions in relation to a limited set of specified possibilities, currently assumed necessary and sufficient to reflect key features of interest in the problem under study. In the language of Section 2.2, we are operating in terms of a fixed frame of discourse, defined in the light of our current knowledge and assumptions. However, as many critics have pointed out, this activity constitutes only one static phase of the wider, evolving, scientific learning and decision process. In the more general, dynamic, context, this activity has to be viewed, either potentially or actually, as sandwiched between two other vital processes. On the one hand, the creative generation of the set of possibilities to be considered; on the other hand, the critical questioning of the adequacy of the currently entertained set of possibilities (see, for example, Box, 1980). We accept that the mode of reasoning encapsulated within the quantitative coherence theory as presented here is ultimately conditional, and thus not directly applicable to every phase of the scientific process. But we do not accept, as Box (1980) appears to, that alternative formal statistical theories have a convincing, complementary role to play. The problem of generating the frame of discourse, i.e., inventing new models or theories, seems to us to be one which currently lies outside the purview of any statistical formalism, although some limited formal clarification is actually possible within the Bayesian framework, as we shall see in Chapter 4. Substantive subject-matter inputs would seem to be of primary importance, although informal, exploratory data analysis is no doubt a necessary adjunct and, particularly in the context of the possibilities opened up by modern computer graphics, offers considerable intellectual excitement and satisfaction in its own right.
The problem of criticising the frame of discourse also seems to us to remain essentially unsolved by any statistical theory. In the case of a revolution, or even rebellion, in scientific paradigm (Kuhn, 1962), the issue is resolved for us as statisticians by the consensus of the subject-matter experts, and we simply begin again on the basis of the frame of discourse implicit in the new paradigm. However, in the absence of such externally directed revision or extension of the current frame of discourse, it is not clear what questions one should pose in order to arrive at an internal assessment of adequacy in the light of the information thus far available. On the one hand, exploratory diagnostic probing would seem to have a role to play in confirming that specific forms of local elaboration of the frame of discourse should be made. The logical catch here, however, is that such specific diagnostic probing can only stem from the prior realisation that the corresponding specific elaborations might be required. The latter could therefore be incorporated ab initio into the frame of discourse and a fully coherent analysis carried out. The issue here is one of pragmatic convenience, rather than of circumscribing the scope of the coherent theory. On the other hand, the issue of assessing adequacy in relation to a total absence of any specific suggested elaborations seems to us to remain an open problem. Indeed, it is not clear that the problem as usually posed is well-formulated. For example, is the key issue that of surprise; or is some kind of extension of the notion of a decision problem required in order to give an operational meaning to the concept of adequacy? Readers interested in this topic will find in Box (1980), and the ensuing discussion, a range of reactions. We shall return to these issues in Chapter 6. Related issues arise in discussions of the general problem of assessing, or calibrating, the external, empirical performance of an internally coherent individual; see, for example, Dawid (1982a). Overall, our responses to critics who question the relevance of the coherent approach based on a fixed frame of reference can be summarised as follows. So far as the scope and limits of Bayesian theory are concerned: (i) we acknowledge that the mode of reasoning encapsulated within the quantitative coherence theory is ultimately conditional, and thus not directly applicable to every phase of the scientific process; (ii) informal, exploratory techniques are an essential part of the process of generating ideas; there can be no purely statistical theory of model formulation; this aspect of the scientific process is not part of the foundational debate, although the process of passing from such ideas to their mathematical representation can often be subjected to formal analysis; (iii) we all lack a decent theoretical formulation of and solution to the problem of global model criticism in the absence of concrete suggested alternatives. However, critics of the Bayesian approach should recognise that: (i) an enormous amount of current theoretical and applied statistical activity is concerned
with the analysis of uncertainty in the context of models which are accepted, for the purposes of the analysis, as working frames of discourse, subject only to local probing of specific potential elaborations, and (ii) our arguments thus far, and those to follow, are an attempt to convince the reader that within this latter context there are compelling reasons for adopting the Bayesian approach to statistical theory and practice.
Updating Subjective Probability

An issue related to the topic just discussed is that of the mechanism for updating subjective probabilities. In Section 2.4.2, we defined, in terms of a conditional uncertainty relation, the notion of the conditional probability, $P(E \mid G)$, of an event $E$ given the assumed occurrence of an event $G$. From this, we derived Bayes' theorem, which establishes that $P(E \mid G) = P(G \mid E)P(E)/P(G)$. If we actually know for certain that $G$ has occurred, $P(E \mid G)$ becomes our actual degree of belief in $E$. The prior probability $P(E)$ has been updated to the posterior probability $P(E \mid G)$. However, a number of authors have questioned whether it is justified to identify assessments made conditional on the assumed occurrence of $G$ with actual beliefs once $G$ is known. We shall not pursue this issue further, although we acknowledge its interest and potential importance. Detailed discussion and relevant references can be found in Diaconis and Zabell (1982), who discuss, in particular, Jeffrey's rule (Jeffrey, 1965/1983), and Goldstein (1985), who examines the role of temporal coherence. See, also, Good (1977).
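For concreteness, the following sketch (our numerical illustration only; all probabilities are invented) contrasts standard conditioning on the certain occurrence of $G$ with Jeffrey's rule, which applies when new information merely shifts the probability of $G$ to some value $P^*(G)$ strictly between 0 and 1:

    # Illustrative initial assessments
    P_E = 0.3
    P_G_given_E, P_G_given_Ec = 0.8, 0.2

    P_G = P_G_given_E * P_E + P_G_given_Ec * (1 - P_E)
    P_E_given_G = P_G_given_E * P_E / P_G               # Bayes' theorem
    P_E_given_Gc = (1 - P_G_given_E) * P_E / (1 - P_G)

    # If G becomes certain, the updated belief is P(E | G); if instead
    # we only learn P*(G) = 0.9, Jeffrey's rule mixes the conditionals:
    P_star_G = 0.9
    P_star_E = P_E_given_G * P_star_G + P_E_given_Gc * (1 - P_star_G)
    print(P_E_given_G, P_star_E)     # 0.6315..., 0.5780...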
Relevance of the Axiomatic Approach

Arguments against over-concern with foundational issues come in many forms. At one extreme, we have heard Bayesian colleagues argue that the mechanics and flavour of the Bayesian inference process have their own sufficient, direct, intuitive appeal and do not need axiomatic reinforcement. Another form of this argument asserts that developments from axiom systems are "pointless" because the conclusions are, tautologically, contained in the premises. Although this is literally true, we simply do not accept that the methodological imperatives which flow from the assumptions of quantitative coherence are in any way "obvious" to someone contemplating the axioms. At the other extreme, we have heard proponents of supposedly "model-free" exploratory methodology proclaim that we can evolve towards "good practice" by simply giving full encouragement to the creative imagination and then "seeing what works". Our objection to both these attitudes is that they each implicitly assume, albeit from different perspectives, the existence of a commonly agreed notion of what constitutes "desirable statistical practice". This does not seem to us a reasonable assumption at all, and to avoid potential confusion, an operational definition of the notion is required. The quantitative coherence approach is based on the assumption
that, within the structured framework set out in Section 2.2, desirable practice requires, at least, the avoidance of Dutch-book inconsistencies, an assumption which leads to the Bayesian paradigm for the revision of belief.
Structure of the Set of Relevant Events

But is the structure assumed for the set of relevant events too rigid? In particular, is it reasonable to assume that, in each and every context involving uncertainty, the logical description of the possibilities should be forced into the structure of an algebra (or σ-algebra), in which each event has the same logical status? It seems to us that this may not always be reasonable and that there is a potential need for further research into the implications of applying appropriate concepts of quantitative coherence to event structures other than simple algebras. For example, this problem has already been considered in relation to the foundations of quantum mechanics, where the notion of sample space has been generalised to allow for the simultaneous representation of the outcomes of a set of related experiments (see, for example, Randall and Foulis, 1975). In that context, it has been established that there exists a natural extension of the Bayesian paradigm to the more general setting. Another area where the applicability of the standard paradigm has been questioned is that of so-called knowledge-based expert systems, which often operate on knowledge representations which involve complex and loosely structured spaces of possibilities, including hierarchies and networks. Proponents of such systems have argued that (Bayesian) probabilistic reasoning is incapable of analysing these structures and that novel forms of quantitative representations of uncertainty are required (see Spiegelhalter and Knill-Jones, 1984, and ensuing discussion, for references to these ideas). However, alternative proposals, which include fuzzy logic, belief functions and confirmation theory, are, for the most part, ad hoc, and the challenge to the probabilistic paradigm seems to us to be elegantly answered by Lauritzen and Spiegelhalter (1988). We shall return to this topic later in this section. Finally, another form of query relating to the logical status of events is sometimes raised (see, for example, Barnard, 1980a). This draws attention to the interpretational asymmetry between a statement like "the underlying distribution is normal" and its negation. This raises questions about their implicitly symmetric treatment within the framework given in Section 2.2. Choices of the elements to be included in $\mathcal{E}$ are, of course, bound up with general questions of modelling and the issue here seems to us to be one concerning sensible modelling strategies. We shall return to this topic in Chapters 4 and 6.

Prescriptive Nature of the Axioms

When introducing our formal development, we emphasised that the Bayesian foundational approach is prescriptive and not descriptive. We are concerned with understanding how we ought to proceed, if we wish to avoid a specified form of behavioural inconsistency. We are not concerned with sociological or psychological description of actual behaviour. For the latter, see, for example, Wallsten (1974), Kahneman and Tversky (1979), Kahneman et al. (1982), Machina (1987), Bordley (1992), Luce (1992) and Yilmaz (1992). See, also, Savage (1980). Despite this, many critics of the Bayesian approach have somehow taken comfort from the fact that there is empirical evidence, from experiments involving hypothetical gambles, which suggests that people often do not act in conformity with the coherence axioms: see, for example, Allais (1953) and Ellsberg (1961). Allais' criticism is based on a study of the actual preferences of individuals in contexts where they are faced with pairs of hypothetical situations, like those described in Figure 2.8, in each of which a choice has to be made between the two options, where $C$ stands for current assets and the numbers describe thousands of units of a familiar currency.
Figure 2.8  Allais' example

  situation 1:  option 1:  500 + C    with probability 1.00
                option 2:  2,500 + C  with probability 0.10
                           500 + C    with probability 0.89
                           C          with probability 0.01

  situation 2:  option 3:  500 + C    with probability 0.11
                           C          with probability 0.89
                option 4:  2,500 + C  with probability 0.10
                           C          with probability 0.90

It has been found (see, for example, Allais and Hagen, 1979) that there are a great many individuals who prefer option 1 to option 2 in the first situation, and at the same time prefer option 4 to option 3 in the second situation. To examine the coherence of these two revealed preferences, we note that, if they are to correspond to a consistent utility ordering, there must exist a utility
function $u(\cdot)$, defined over consequences (in this case, total assets in thousands of monetary units), satisfying the inequalities
$$u(500 + C) > 0.10\,u(2{,}500 + C) + 0.89\,u(500 + C) + 0.01\,u(C)$$
$$0.10\,u(2{,}500 + C) + 0.90\,u(C) > 0.11\,u(500 + C) + 0.89\,u(C).$$
But simple rearrangement reveals that these inequalities are logically incompatible for any function $u(\cdot)$: the first reduces to $0.11\,u(500 + C) > 0.10\,u(2{,}500 + C) + 0.01\,u(C)$, while the second reduces to the reverse inequality. The stated preferences are therefore incoherent. How should one react to this conflict between the compelling intuitive attraction (for many individuals) of the originally stated preferences, and the realisation that they are not in accord with the prescriptive requirements of the formal theory? Allais and his followers would argue that the force of examples of this kind is so powerful that it undermines the whole basis of the axiomatic approach set out in Section 2.3. This seems to us a very peculiar argument. It is as if one were to argue for the abandonment of ordinary logical or arithmetic rules, on the grounds that individuals can often be shown to perform badly at deduction or long division. The conclusion to be drawn is surely the opposite: namely, the more liable people are to make mistakes, the more need there is to have the formal prescription available, both as a reference point, to enable us to discover the kinds of mistakes and distortions to which we are prone in ad hoc reasoning, and also as a suggestive source of improved strategies for thinking about and structuring problems.
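The logical incompatibility is easily confirmed by direct computation. In this sketch (our illustration, not the original text's; $a$, $b$, $c$ stand for $u(C)$, $u(500+C)$ and $u(2{,}500+C)$), each inequality is written in the form $g > 0$, and the two functions turn out to be exact negatives of one another, so no utility assignment can satisfy both:

    import random

    g1 = lambda a, b, c: b - (0.10 * c + 0.89 * b + 0.01 * a)
    g2 = lambda a, b, c: (0.10 * c + 0.90 * a) - (0.11 * b + 0.89 * a)

    for _ in range(5):
        a, b, c = (random.random() for _ in range(3))
        # g1 + g2 = 0 identically (up to rounding), so g1 > 0 and
        # g2 > 0 can never hold simultaneously.
        print(round(g1(a, b, c) + g2(a, b, c), 12))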
Table 2.4  Savage's reformulation of Allais' example

                                ticket number
                            1        2-11       12-100

  situation 1   option 1    500+C    500+C      500+C
                option 2    C        2,500+C    500+C

  situation 2   option 3    500+C    500+C      C
                option 4    C        2,500+C    C
In the case of Allais' example, Savage (1954/1972, Chapter 5) pointed out that a concrete realisation of the options described in the two situations could be achieved by viewing the outcomes as prizes from a lottery involving one hundred numbered tickets, as shown in Table 2.4. Indeed, when the problem is set out in this form, it is clear that if any of the tickets numbered from 12 to 100 is chosen it will not matter, in either situation, which of the options is selected. Preferences in both situations should therefore only depend on considerations relating to tickets in the range from 1 to 11. But, for this range of tickets, situations 1 and 2 are identical in structure, so that preferring option 1 to option 2 and at the same time preferring option 4 to option 3 is now seen to be indefensible.
Viewed in this way, Allais' problem takes on the appearance of a decision-theoretic version of an optical illusion achieved through the distorting effects of extreme consequences, which go far beyond the ranges of our normal experience. The lesson of Savage's analysis is that, when confronted with complex or tricky problems, we must be prepared to shift our angle of vision in order to view the structure in terms of more concrete and familiar images with which we feel more comfortable. Ellsberg's (1961) criticism is of a similar kind to Allais', but the distorting elements which are present in his hypothetical gambles stem from the rather vague nature of the uncertainty mechanisms involved, rather than from the extreme nature of the consequences. In such cases, where confusion is engendered by the probabilities rather than the utilities, the perceived incoherence may, in fact, disappear if one takes into account the possibility that the experimental subject's utility may be a function of more than one attribute. In particular, we may need to consider the attribute "avoidance of looking foolish", often as a result of thinking that there is a "right" answer if the problem seems predominantly to do with sorting out experimentally assigned probabilities, in addition to the monetary consequences specified in the hypothetical gambles. Even without such refinements, however, and arguing solely in terms of the gambles themselves, Raiffa (1961) and Roberts (1963) have provided clear and convincing rejoinders to the Ellsberg criticism. Indeed, Roberts presents a particularly lucid and powerful defence of the axioms, also making use of the analogy with optical and magical illusions. The form of argument used is similar to that in Savage's rejoinder to Allais, and we shall not repeat the details here. For a recent discussion of both the Allais and Ellsberg phenomena, see Kadane (1992).
Precise, Complete, Quantitative Preferences

In our axiomatic development we have not made the a priori assumption that all options can be compared directly using the preference relation. We have, however, assumed, in Axiom 5, that all consequences and certain general forms of dichotomised options can be compared with dichotomised options involving standard events. This latter assumption then turns out to imply a quantitative basis for all preferences, and hence for beliefs and values. The view has been put forward by some writers (e.g. Keynes, 1921/1929, and Koopman, 1940) that not all degrees of belief are quantifiable, or even comparable. However, beginning with Jeffreys' review of Keynes' Treatise (see also Jeffreys, 1931/1973), the general response to this view has been that some form of quantification is essential if we are to have an operational, scientifically useful theory. Other references, together with a thorough review of the mathematical consequences of these kinds of assumptions, are given by Fine (1973, Chapter 2). Nevertheless, there has been a widespread feeling that the demand for precise quantification, implicit in standard axiom systems, is rather severe and certainly
ought to be questioned. We should consider, therefore, some of the kinds of suggestions that have been put forward from this latter perspective. Among the attempts to present formal alternatives to the assumption of precise quantification are those of Good (1950, 1962), Kyburg (1961), Smith (1961), Dempster (1967, 1985), Walley and Fine (1979), Girón and Ríos (1980), DeRobertis and Hartigan (1981), Walley (1987, 1991) and Nakamura (1993). In essence, the suggestion in relation to probabilities is to replace the usual representation of a degree of belief in terms of a single number by an interval defined by two numbers, to be interpreted as upper and lower probabilities. So far as decisions are concerned, such theories lead to the identification of a class of would-be actions, but provide no operational guidance as to how to choose from among these. Particular ideas, such as Dempster's (1968) generalisation of the Bayesian inference mechanism, have been shown to be suspect (see, for example, Aitchison, 1968), but have led on themselves to further generalisations, such as Shafer's (1976, 1982a) theory of belief functions. This has attracted some interest (see, e.g., Wasserman, 1990a, 1990b), but its operational content has thus far eluded us. In general, we accept that the assumption of precise quantification, i.e., that comparisons with standard options can be successively refined without limit, is clearly absurd if taken literally and interpreted in a descriptive sense. We therefore echo our earlier detailed commentary on Axiom 5 in Section 2.3, to the effect that these kinds of proposed extension of the axioms seem to us to be based on a confusion of the descriptive and the prescriptive and to be largely unnecessary. It is rather as though physicists and surveyors were to feel the need to rethink their practices on the basis of a physical theory incorporating explicit concepts of "upper" and "lower" lengths. We would not wish, however, to be dogmatic about this. Our basic commitment is to quantitative coherence. The question of whether this should be precise, or allowed to be imprecise, is certainly an open, debatable one, and it might well be argued that measurement of beliefs and values is not totally analogous to that of physical length. An obvious, if often technically involved solution, is to consider simultaneously all probabilities which are compatible with elicited comparisons. This and other forms of robust Bayesian approaches will be reviewed in Section 5.6.3. In this work, we shall proceed on the basis of a prescriptive theory which assumes precise quantification, but then pragmatically acknowledges that, in practice, all this should be taken with a large pinch of salt and a great deal of systematic sensitivity analysis. For a related practical discussion, see Hacking (1965). See, also, Chateauneuf and Jaffray (1984).
Subjectivity of Probability

As we stressed in Section 2.2, the notion of preference between options, the primitive operational concept which underlies all our other definitions, is to be understood as personal, in the sense that it derives from the response of a particular individual to a decision making situation under uncertainty. A particular consequence of this
is that the concept which emerges is personal degree of belief, defined in Section 2.4 and subsequently shown to combine for compound events in conformity with the properties of a finitely additive probability measure. The individual referred to above could, of course, be some kind of group, such as a committee, provided the latter had agreed to speak with a single voice, in which case, to the extent that we ignore the processes by which the group arrives at preferences, it can conveniently be regarded as a person. Further comments on the problem of individuals versus groups will be given later under the heading Communication and Group Decision Making. This idea that personal (or subjective) probability should be the key to the scientific or rational treatment of uncertainty has proved decidedly unpalatable to many statisticians and philosophers (although in some application areas, such as actuarial science, it has met with a more favourable reception; see Clarke, 1954). At the very least, it appears to offend directly against the general notion that the methods of science should, above all else, have an objective character. Nevertheless, bitter though the subjectivist pill may be, and admittedly difficult to swallow, the alternatives are either inert, or have unpleasant and unexpected side-effects or, to the extent that they appear successful, are found to contain subjectivist ingredients. From the objectivistic standpoint, there have emerged two alternative kinds of approach to the definition of probability, both seeking to avoid the subjective degree of belief interpretation. The first of these retains the idea of probability as measurement of partial belief, but rejects the subjectivist interpretation of the latter, regarding it, instead, as a unique degree of partial logical implication between one statement and another. The second approach, by far the most widely accepted in some form or another, asserts that the notion of probability should be related in a fundamental way to certain objective aspects of physical reality, such as symmetries or frequencies. The logical view was given its first explicit formulation by Keynes (1921/1929) and was later championed by Carnap (1950/1962) and others; it is interesting to note, however, that Keynes seems subsequently to have changed his view and acknowledged the primacy of the subjectivist interpretation (see Good, 1965, Chapter 2). Brown (1993) proposes the related concept of impersonal probability. From an historical point of view, the first systematic foundation of the frequentist approach is usually attributed to Venn (1886), with later influential contributions from von Mises (1928) and Reichenbach (1935). The case for the subjectivist approach and against the objectivist alternatives can be summarised as follows. The logical view is entirely lacking in operational content. Unique probability values are simply assumed to exist as a measure of the degree of implication between one statement and another, to be intuited, in some undefined way, from the formal structure of the language in which these statements are presented. The symmetry (or classical) view asserts that physical considerations of symmetry lead directly to a primitive notion of equally likely cases. But any uncertain
situation typically possesses many plausible symmetries: a truly objective theory would therefore require a procedure for choosing a particular symmetry and for justifying that choice. The subjectivist view explicitly recognises that regarding a specific symmetry as probabilistically significant is itself, inescapably, an act of personal judgement. The frequency view can only attempt to assign a measure of uncertainty to an individual event by embedding it in an infinite class of similar events having certain randomness properties, a collective in von Mises' (1928) terminology, and then identifying probability with some notion of limiting relative frequency. But an individual event can be embedded in many different collectives with no guarantee of the same resulting limiting relative frequencies: a truly objective theory would therefore require a procedure for justifying the choice of a particular embedding sequence. Moreover, there are obvious difficulties in defining the underlying notions of similar and randomness without lapsing into some kind of circularity. The subjectivist view explicitly recognises that any assertion of similarity among different, individual events is itself, inescapably, an act of personal judgement, requiring, in addition, an operational definition of what is meant by similar. In fact, this latter requirement finds natural expression in the concept of an exchangeable sequence of events, which we shall discuss at length in Chapter 4. This concept, via the celebrated de Finetti representation theorem, provides an elegant and illuminating explanation, from an entirely subjectivistic perspective, of the fundamental role of symmetries and frequencies in the structuring and evaluation of personal beliefs. It also provides a meaningful operational interpretation of the word objective in terms of intersubjective consensus. The identification of probability with frequency or symmetry seems to us to be profoundly misguided. It is of paramount importance to maintain the distinction between the definition of a general concept and the evaluation of a particular case. In the subjectivist approach, the definition derives from logical notions of quantitative coherent preferences; practical evaluations in particular instances often derive from perceived symmetries and observed frequencies, and it is only in this evaluatory process that the latter have a role to play. The subjectivist point of view outlined above is, of course, not new and has been expounded at considerable length and over many years by a number of authors. The idea of probability as individual degree of confidence in an event whose outcome is uncertain seems to have been first put forward by James Bernoulli (1713/1899). However, it was not until Thomas Bayes' (1763) famous essay that it was explicitly used as a definition:
The probability of any event is the ratio between the value at which an expectation depending on the happening of the event ought to be computed, and the value of the thing expected upon its happening.
Not only is this directly expressed in terms of operational comparisons of certain kinds of simple options on the basis of expected values, but the style of Bayes' presentation strongly suggests that these expectations were to be interpreted as personal evaluations. A number of later contributions to the field of subjective probability are collected together and discussed in the volume edited by Kyburg and Smokler (1964/1980), which includes important seminal papers by Ramsey (1926) and de Finetti (1937/1964). An exhaustive and profound discussion of all aspects of subjective probability is given in de Finetti's magisterial Theory of Probability (1970/1974, 1970/1975). Other interpretations of probability are discussed in Rényi (1955), Good (1959), Kyburg (1961, 1974), Fishburn (1964), Fine (1973), Hacking (1975), de Finetti (1978), Walley and Fine (1979) and Shafer (1990).
Statistical Inference as a Decision Problem

Stylised statistical problems have often been approached from a decision-theoretical viewpoint; see, for instance, the books by Ferguson (1967), DeGroot (1970), Barnett (1973/1982), Berger (1985a) and references therein. However, we have already made clear that, in our view, the supposed dichotomy between inference and decision is illusory, since any report or communication of beliefs following the receipt of information inevitably itself constitutes a form of action. In Section 2.7, we formalised this argument and characterised the utility structure that is typically appropriate for consequences in the special case of a "pure inference" problem. The expected utility of an "experiment" in this context was then seen to be identified with expected information (in the Shannon sense), and a number of information-theoretic ideas and their applications were given a unified interpretation within a purely subjectivist Bayesian framework. Many approaches to statistical inference do not, of course, assign a primary role to reporting probability distributions, and concentrate instead on stylised estimation and hypothesis testing formulations of the problem (see Appendix B, Section 3). We shall deal with these topics in more detail in Chapters 5 and 6.

Communication and Group Decision Making

The Bayesian approach which has been presented in this chapter is predicated on the primitive notion of individual preference. A seemingly powerful argument against the use of the Bayesian paradigm is therefore that it provides an inappropriate basis for the kinds of interpersonal communication and reporting processes which characterise both public debate about beliefs regarding scientific and social issues, and also "cohesive-small-group" decision making processes. We believe that the two contexts, "public" and "cohesive-small-group", pose rather different problems, requiring separate discussion.
In the case of the revision and communication of beliefs in the context of general scientific and social debate, we feel that criticism of the Bayesian paradigm is largely based on a misunderstanding of the issues involved, and on an oversimplified view of the paradigm itself, and the uses to which it can be put. So far as
the issues are concerned, we need to distinguish two rather different activities: on the one hand, the prescriptive processes by which we ought individually to revise our beliefs in the light of new information if we aspire to coherence; on the other hand, the pragmatic processes by which we seek to report to and share perceptions with others. The first of these processes leads us inescapably to the conclusion that beliefs should be handled using the Bayesian paradigm; the second reminds us that a one-off application of the paradigm to summarise a single individual's revision of beliefs is inappropriate in this context. But, so far as we are aware, no Bayesian statistician has ever argued that the latter would be appropriate. Indeed, the whole basis of the subjectivist philosophy predisposes Bayesians to seek to report a rich range of the possible belief mappings induced by a data set, the range being chosen both to reflect (and even to challenge) the initial beliefs of a range of interested parties. Some discussion of the Bayesian reporting process may be found in Dickey (1973), Dickey and Freeman (1975) and Smith (1978). Further discussion is given in Smith (1984), together with a review of the connections between this issue and the role of models in facilitating communication and consensus. This latter topic will be further considered in Chapter 4. We concede that much remains to be done in developing Bayesian reporting technology, and we conjecture that modern interactive computing and graphics will have a major role to play. Some of the literature on expert systems is relevant here; see, for instance, Lindley (1987), Spiegelhalter (1987) and Gaul and Schader (1988). On the broader issue, however, one of the most attractive features of the Bayesian approach is its recognition of the legitimacy of the plurality of (coherently constrained) responses to data. Any approach to scientific inference which seeks to legitimise an answer in response to complex uncertainty seems to us a totalitarian parody of a would-be rational human learning process. On the other hand, in the "cohesive-small-group" context there may be an imposed need for group belief and decision. A variety of problems can be isolated within this framework, depending on whether the emphasis is on combining probabilities, or utilities, or both; and on how the group is structured in relation to such issues as democracy, information-sharing, negotiation or competition. It is not yet clear to us whether the analyses of these issues will impinge directly on the broader controversies regarding scientific inference methodology, and so we shall not attempt a detailed review of the considerable literature that is emerging. Useful introductions to the extensive literature on amalgamation of beliefs or utilities, together with most of the key references, are provided by Arrow (1951b), Edwards (1954), Luce and Raiffa (1957), Luce (1959), Stone (1961), Blackwell
and Dubins (1962), Fishburn (1964, 1968, 1970, 1987), Kogan and Wallace (1964), Wilson (1968), Winkler (1968, 1981), Sen (1970), Krantz et al. (1971), Marschak and Radner (1972), Cochrane and Zeleny (1973), DeGroot (1974, 1980), Morris (1974), White and Bowen (1975), White (1976a, 1976b), Press (1978, 1980b, 1985b), Lindley et al. (1979), Roberts (1979), Hogarth (1980), Saaty (1980), Berger (1981), French (1981, 1985, 1986, 1989), Hylland and Zeckhauser (1981), Weerahandi and Zidek (1981, 1983), Brown and Lindley (1982, 1986), Chankong and Haimes (1982), Edwards and Newman (1982), DeGroot and Feinberg (1982, 1983, 1986), Raiffa (1982), French et al. (1983), Lindley (1983, 1985, 1986), Bunn (1984), Caro et al. (1984), Genest (1984a, 1984b), Yu (1985), De Waal et al. (1986), Genest and Zidek (1986), Arrow and Raynaud (1987), Clemen and Winkler (1987, 1993), Kim and Roush (1987), Barlow et al. (1988), Bayarri and DeGroot (1988, 1989, 1991), Huseby (1988), West (1988, 1992a), Clemen (1989, 1990), Ríos et al. (1989), Seidenfeld et al. (1989), Ríos (1990), DeGroot and Mortera (1991), Kelly (1991), Lindley and Singpurwalla (1991, 1993), Goel et al. (1992), Goicoechea et al. (1992), Normand and Tritchler (1992) and Gilardoni and Clayton (1993). Important, seminal papers are reproduced in Gärdenfors and Sahlin (1988). For related discussion in the context of policy analysis, see Hodges (1987). References relating to the Bayesian approach to game theory include Harsanyi (1967), DeGroot and Kadane (1980), Eliashberg and Winkler (1981), Kadane and Larkey (1982, 1983), Raiffa (1982), Wilson (1986), Aumann (1987), Smith (1988b), Nau and McCardle (1990), Young and Smith (1991), Kadane and Seidenfeld (1992) and Keeney (1992). A recent review of related topics, followed by an informative discussion, is provided by Kadane (1993).


Chapter 3

Generalisations
Summary
The ideas and results of Chapter 2 are extended to a much more general mathematical setting. An additional postulate concerning the comparison of a countable collection of events is introduced and is shown to provide a justification for restricting attention to countably additive probability as the basis for representing beliefs. The elements of mathematical probability theory are reviewed. The notions of options and utilities are extended to provide a very general mathematical framework for decision theory. A further additional postulate regarding preferences is introduced, and is shown to justify the criterion of maximising expected utility within this more general framework. In the context of inference problems, generalised definitions of score functions and of measures of information and discrepancy are given.

3.1 GENERALISED REPRESENTATION OF BELIEFS

3.1.1 Motivation

The developments of Chapter 2, based on Axioms 1 to 5, led to the fundamental result that quantitatively coherent degrees of belief for events belonging to the algebra $\mathcal{E}$ should fit together in conformity with the mathematical rules of finitely
additive probability. From a directly practical and intuitive point of view, there seems no compelling reason to require anything beyond this finitistic framework, a view argued forcefully, and in great detail, by de Finetti (1970/1974, 1970/1975). However, there are many situations where the implied necessity of choosing a particular finitistic representation of a problem can lead to annoying conceptual and mathematical complications, as we remarked in Section 2.5.1 when discussing bounded sets of consequences. The example given in that context involved the problem of representing the length of remaining life of a medical patient. Most people would accept that there is an implicit upper bound, but find it difficult to justify any particular choice of its value. Similar problems obviously arise in representing other forms of survival time (of equipment, transplanted organs, or whatever) and further difficulties occur in representing the possible outcomes of many other measurement processes, since these are generally regarded as being on a continuous scale. For these reasons, and provided we do not feel that in so doing we are distorting essential features of our beliefs, it is certainly attractive, from the point of view of descriptive and mathematical convenience, to consider the possibility of extending our ideas beyond the finite, discrete framework. In Section 3.1.2, we shall provide a formal extension of the quantitative coherence theory to the infinite domain. Our fundamental conclusion about beliefs in this setting will be that quantitatively coherent degrees of belief for events belonging to a σ-algebra $\mathcal{E}$ should fit together in conformity with the mathematical rules of countably additive probability. The major mathematical advantage of this generalised framework is that all the standard manipulative tools and results of mathematical probability theory then become available to us; convenient references are, for example, Kingman and Taylor (1966) or Ash (1972). A selection of these tools and results will be reviewed in Section 3.2, and then used in Section 3.3 to develop natural extensions of our finitistic definitions of actions and utilities, thus establishing an extremely general mathematical setting for the representation and analysis of decision problems. The important special case of inference as a decision problem will be considered in Section 3.4, which extends to the general mathematical framework the discussion of the finite, discrete case, given in Section 2.7. Finally, a discussion of some particular issues is given in Section 3.5.

3.1.2 Countable Additivity

In Definition 2.1 and the subsequent discussion, we assumed that the collection $\mathcal{E}$ of events included in the underlying frame of discourse should be closed under the operations of arbitrary finite intersections and unions. As the first step in providing a mathematical extension to the infinite domain, we shall now assume that we allow arbitrary countable intersections and unions in $\mathcal{E}$, so that the latter is taken to be a σ-algebra. Within this extended structure, Axioms 1 to 5 will continue to
encapsulate the requirements of quantitative coherence for preferences, and hence for degrees of belief, provided that only finite combinations of events of $\mathcal{E}$ are involved. However, if we wish to deal with countable combinations of events, we shall need an extension of the existing requirements for quantitative coherence. One possible such extension is encapsulated in the following postulate.

Postulate 1. (Monotone continuity). If, for all $j$, $E_j \supseteq E_{j+1}$ and $E_j \ge F$, then
$$\bigcap_{j=1}^{\infty} E_j \ge F.$$

Discussion of Postulate 1. If the relation $E_j \ge F$ holds for every member of a decreasing sequence of events $E_1 \supseteq E_2 \supseteq \cdots$, and if we accept the limit event $\bigcap_j E_j$ into our frame of discourse, then it would seem very natural in terms of continuity that the relation should carry over. The operational justification for considering such a countable sequence of comparisons is certainly open to doubt. However, if, for descriptive and mathematical convenience, we admit the possibility, then this form of continuity would seem to be a minimal requirement for coherence.
Proposition 3.1. (Continuity). If, for all $j$, $E_j \supseteq E_{j+1}$ and
$$\bigcap_{j=1}^{\infty} E_j = \emptyset,$$
then, for any $G > \emptyset$, $\lim_{j\to\infty} P(E_j \mid G) = 0$.

Proof. We note first that the condition encapsulated in Postulate 1 carries over to conditional preferences. By Proposition 2.14(i), if $E_j \ge_G F$ then we have $E_j \cap G \ge F \cap G$; moreover, $E_j \cap G \supseteq E_{j+1} \cap G$, for all $j = 1, 2, \ldots$, and thus, by Postulate 1, $(\bigcap_j E_j) \cap G \ge F \cap G$. It now follows from Proposition 2.14(i) that $(\bigcap_j E_j) \ge_G F$. By Proposition 2.16, $P(E_j \mid G) \ge P(E_{j+1} \mid G) \ge 0$, and so there exists a number $p \ge 0$ such that $\lim_{j\to\infty} P(E_j \mid G) = p$ and, for all $j$, $P(E_j \mid G) \ge p$. By Axiom 4(iii) and Proposition 2.16, there exists a standard event $S$ such that $P(S) = p$, with $S$ independent of $G$, and, for all $j$, $E_j \ge_G S$. Hence, by the above, we have $(\bigcap_j E_j) = \emptyset \ge_G S$, which implies that $S \sim \emptyset$ and thus, by Propositions 2.10 and 2.11, that $p = 0$. $\lhd$

Since we have already established in Proposition 2.17 that $P(\cdot \mid G)$ is a finitely additive probability measure, the above result, based on the postulate of monotone continuity, enables us to establish immediately that, in this extended setting, $P(\cdot \mid G)$ is a countably additive probability measure.

Proposition 3.2. (Countably additive structure of degrees of belief). If $\{E_j,\ j = 1, 2, \ldots\}$ are disjoint events in $\mathcal{E}$, and $G > \emptyset$, then
$$P\Big(\bigcup_{j=1}^{\infty} E_j \,\Big|\, G\Big) = \sum_{j=1}^{\infty} P(E_j \mid G).$$
Proof. Since $P(\cdot \mid G)$ is finitely additive, we have, for any $n \ge 1$,
$$P\Big(\bigcup_{j=1}^{\infty} E_j \,\Big|\, G\Big) = \sum_{j=1}^{n} P(E_j \mid G) + P(F_n \mid G),$$
where
$$F_n = \bigcup_{j=n+1}^{\infty} E_j.$$
It follows that $F_n \supseteq F_{n+1}$, with $\bigcap_n F_n = \emptyset$; hence, by Proposition 3.1,
$$\lim_{n\to\infty} P(F_n \mid G) = 0.$$
The result follows by taking limits in the last expression for $P(\bigcup_j E_j \mid G)$. $\lhd$

We shall consider the finite versus countable additivity debate in a little more detail in Section 3.5.2. For the present, we simply note that philosophical allegiance to the finitistic framework is in no way incompatible with the systematic adoption and use of countably additive probability measures for the overwhelming majority of applications. The debate centres on whether this particular restriction to a subclass of the finitely additive measures should be considered as a necessary feature of quantitative coherence, or whether it is a pragmatic option, outside the quantitative coherence framework encapsulated in Axioms 1 to 5. From a philosophical point of view, we identify strongly with this latter viewpoint; but in almost all the developments which follow we shall rarely feel discomfited by implicitly working within a countably additive framework. We have established (in Propositions 2.4 and 2.10) that if $E$ and $F$ are events in $\mathcal{E}$ with $E \subseteq F$, then $P(E) \le P(F)$, so that if $P(F) = 0$ then $P(E) = 0$. However, in general, not all subsets of an event of probability zero (a so-called null event) will belong to $\mathcal{E}$ and so we cannot logically even refer to their probabilities, let alone infer that they are zero. In some circumstances it may be desirable, as well as mathematically convenient, to be able to do this. If so, this can be done "automatically" by simply agreeing that $\mathcal{E}$ be replaced by the smallest σ-algebra, $\mathcal{F}$, which contains $\mathcal{E}$ and all the subsets of the null events of $\mathcal{E}$ (the so-called completion of $\mathcal{E}$). The induced probability measure over $\mathcal{F}$ is unique and has the property that all subsets of null events are themselves null events. It is called a complete probability measure.
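A concrete illustration of what Propositions 3.1 and 3.2 require may be helpful. In the sketch below (our example, not the text's, using exact rational arithmetic), the geometric distribution $p(k) = 2^{-k}$ on $\{1, 2, \ldots\}$ assigns the decreasing tail events $E_j = \{k : k > j\}$ probabilities $2^{-j}$, which shrink to zero as monotone continuity demands; by contrast, a finitely additive "uniform" assignment on the integers would give every $E_j$ probability one even though their intersection is empty, violating Postulate 1.

    from fractions import Fraction

    p = lambda k: Fraction(1, 2 ** k)   # p(k) = (1/2)^k, k = 1, 2, ...

    for j in range(1, 8):
        # P(E_j) is the tail sum over k > j; truncating at k = 60
        # omits only mass 2^(-60), negligible for display purposes.
        tail = sum(p(k) for k in range(j + 1, 61))
        print(j, float(tail))           # equals (1/2)^j, tending to 0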

Definition 3.1. (Probability space). A probability space is defined by the elements $(\Omega, \mathcal{F}, P)$, where $\mathcal{F}$ is a σ-algebra of $\Omega$ and $P$ is a complete, σ-additive probability measure on $\mathcal{F}$.


From now on, our mathematical development will take place within the assumed structure of a probability space. We do not anticipate encountering situations where these mathematical assumptions lead to conceptual distortions beyond the usual, inevitable element of mathematical idealisation which enters any formal analysis. However, as de Finetti (1970/1974, 1970/1975) has so eloquently warned, one must always be on guard and aware that distortions might occur. The material which follows (in Section 3.2) will differ in flavour somewhat from our preceding discussion of the general foundations of coherent beliefs and actions (Chapter 2 and Section 3.1) and our subsequent discussions of generalised decision problems (Section 3.3) and of the link between beliefs about observables and the structure of specific models for representing such beliefs (Chapter 4). These developments systematically invoke the subjectivist, operationalist philosophy as a basic motivation and guiding principle. In the next section, we shall concentrate instead on reviewing, from a purely mathematical standpoint, the concepts and results from mathematical probability theory which will provide the technical underpinning of our theory.

3.2 REVIEW OF PROBABILITY THEORY

3.2.1 Random Quantities and Distributions

In the framework we have been discussing, the constituent possibilities and probabilities of any decision problem are encapsulated in the structure of the probability space $\{\Omega, \mathcal{F}, P\}$. Now, in a certain abstract sense, we might think of $\Omega$ as the primitive collection of all possible outcomes in a situation of interest; for example, that surrounding the birth of an infant, or the state of international commodity markets at a particular time point. However, we are not really interested in a complete description, even if such were possible, but rather in some numerical summary of the outcomes, in the forms of counts or measurements.
Recalling the discussion of Section 2.8, it might be argued that measurements are always, in fact, counts. However, when convenient, we shall distinguish the two in the usual pragmatic (fuzzy)way: counts will typically mean integervalued data; measurements will typically mean data which we prerend are real-valued. We move, therefore, from { 0,.F, P } to a more explicitly numerical setting by invoking a mapping, 3?:R x R,

which associates a real number Z ( W ) with each elementary outcome w of 52 (our initial exposition will be in terms of a single-valued z; vector extension will be the

110

3 Generulisutiotis

made in Section 3.2.4). Subsets of 51 are thus mapped into subsets of 'R and the probability measure P defined on F will induce a probability measure. P C ,say. over appropriate subsets of !R. However, we shall wish to ensure that f , is defined for on certain special subsets of 8. example, intervals such as (--x. n ] . (I E %. in 5 which case we shall want sets of the form { w : - x < ~ ( u )u } . ( I f 'R. to belong to 3, this will constrain the class of functions .r which we would wish and to use to define the numerical mapping. The standard requirement is that Z be ' , definable on the a-algebra of Burel sers, B, of %, the smallest o-algebra containing intervals of the form (-XI.,n E 'R, and hence all forms of interval. since the 0 1 latter can be generated by appropriate countable unions and intersections of the a]. intervals (-x.(1 E 'R.

Definition 3.2. (Random quantity). A rnndont quunrin. on CI prohuhi1ir.s R such that .I.- I (131 E . ,.tor F spuce { f 1 . 3 . P } is a function s : (;I -+ .Y Clll B E B.

Following de Finetti (1970/1974, 1970/1975). we use the term rundoni yiiurisignify a numerical entity whose value is uncertain. rather than use the traditional, but potentially confusing. term rundonr wriable. which might suggest a restriction to contexts involving repeated ''trials'' oter which the quantity may vary. Notationally. we shall use the same symbol for both a random quantity and its value. Thus, for example. .r may denote a function, or a particular value of the function .r(w) = .r, say. The interpretation will always be clear from the context. For a random quantity z'. the induced measure P, is defined in the natural way by P,(H) = f ( . r @)). n E 0.

fie to

The function 1; is easily seen to be aprububility measure, and describes the way in which probability is"distributed"over the possible values J' E .Y. This information can also be encapsulated in a single real-valued function.

Definition 3.3. (Distribution function). The distribution function of CI rnn' [O. 11 dom quunti? x : R --* X C_ R on { (1. F.P } is thefuncrion F, : 'R dejned by

If the probability distribution concentrates on a countable set of values, so that X = {-I., . sp... .}. .c is called a discrete random quantity and the function pr :R [O. 11 such that

p,,.(.r)= P { w : x(u)= .,.}

3.2 Review of Probubility Theory

111

is called its probability (mass)funcrion. The distribution function is then a step function with jumps pr(xi) at each 5 , . If the probability distribution is such that there exists a real, non-negative (measurable) function pr such that

r then . is called an (absolurely)conrinuous random quantity and p , is called its ciensivhncrion. In addition, of course, we might have a mixture of both discrete and continuous elements. No use of singular distributions will be made in this volume (for discussion of such distributions. see, for example, Ash, 1972. Section 2.2).
We shall use the same notation. y,, for both the mass function of a discrete random quantity and the density function of a continuous random quantity. In measure-theoreticterms, both are, of course, special cases of the Radon-Nikodym derivative. In general, we shall use the notation and results of Lebesgue and Lebesgue-Stieltjesintegration theory as and when it suits us. Readers unfamiliar with these concepts need not womy: virtually none pf the machinery will be visible, and the meanings of integrals will rarely depend on the niceties of the interpretation adopted. Moreover, when there is no danger of confusion. we shall often omit the suffix .c in p , ( x ) , using p(.r) both to represent the density at or mass function p,(.) and its value p,(s) a particular z E X. Also, to avoid tedious repetition of phrases like almost everywhere, we shall. when appropriate, simply state that densities are equal, leaving it to be understood that, with respect to the relevant measure, this means equal, except possibly on a set of measure zero.

If x is a random quantity defined on { Q , 3 , P} such that s : 52 4 X C 8, and if g : R -t Y C 8 is a function such that (y o s ) - ( B )E F for all B E D, then g o s is also a random quantity. We shall typically denote y o t by g(x),and, whenever we refer to such functions of a random quantity 2,it is to be understood that the composite function is indeed a random quantity. Writing y = g(z), the random quantity 9 induces a probability space {%, B,Py},where

P,(B) = P . ( { g - ( B ) } ) P ( { ( g O . r ) - ( B ) } ) . B E f?. = Functions such as F;I and pu are defined in the obvious way. These forms are easily related to those of F, and p I . In particular, if g- exists and is strictly monotonic increasing we have

FJY) = P*(g(4 9) = P*(. L g-(y)) = F*(g-(d) 5 and, in the continuous case, if g is monotonic and differentiable, the density p , is given by

Some examples of this relationship are given at the end of Section 3.2.2.

112

3 Generalisations

Definition 3.4. (Expectation). If.r. y ure rundoni quunrities with ,ti = {I( .r). the expectution of y. E[y] = E [ g ( . r ) ]is clejned by either ,

!/C

.I

.\-

for the discrete and continuous cuses, respectively, where tlir eyuulity is to he

interpreted in tlie sense h i t if'either side e.rists so cloes the other tirrd they (ire equul.
As in Definition 3.4. most sums or integrals over possible values of random quantities will involve the complete set of possible values (for example. .Y and 1. ). To simplify notation we shall usually omit the range of summation or integration. assuming it to be understood from the context. To avoid tiresome duplication. we shall also typically use the integral form to represent both the continuous and the discrete cases.

It is useful to be able to summarise the main features of a probability distribution by quantities defined to encapsulate its location, spread. or shape, often in terms of special cases of Definition 3.4. Assuming. in each case. the right-hand side to exist, such summary quantities include:

(i) E[.r].the mean of the distribution of the random quantity .r:

(ii) E[.r'], the kth (ohsolute)niotnent: (iii) \;'[.r] = E[(.r- E[.r])'] = E[.r2] E"[.r], vuriunce: the
(iv) u[.r]= \,'*[.r]' the sfandurd deviution; ", (v) ;\I[.r]. a mode of the distribution 0f.r.. such that
p.,.( d I [ X I ) = snp p,. (.r1:
rc .s

(vi)

Q,, an a-quantile of thc distribution of . I . , such that [.I:],


E;.(Q,,[.r]) =

",

( . r <_

q,,I) [.,.

= (1:

(vi) M e [ x ] = Q0,5[x]. a median; (vii) ( Q , ~ - , , ~ ;Q(.+T ] ; ~ [ . c ]pinreryuunrik runge. ~[ p,. I a ).


Thc expectation operator of Definition 3.4 is linear, so that if - r I ..r2 are two random quantities and C'I, c2 are finite real numbers then

E[(j.Ij + c 2 4 = ('1 E[.r,!+ (.'E[.I._1].

3.2 Review of Probability Theory

113

In the special case of the transformation g ( s ) = cx, for some real constant c, we clearly have E[g(x)l = cE[zI = g(E[sI)*
1/2

D [ g ( z ) ] (V[g(z)])1'2 = (b2V[x]) =

= g(D[x]).

For general transformations g ( x ) , E(g(s)] g ( E [ z ]and the moments of a trans# ) formed random quantity g(x)do not exactly relate in any straightforward manner to those of z. However, for suitably well-behaved g ( s ) the following result, which we shall illustrate at the end of Section 3.2.2,often provides useful approximations. Proposition 3.3. (Approximate mean and variance). I f x is a random quantity with E[s] = p, V[r] = c and y = g(z) then, subject to conditions on r ' the distribution of x und the smoothness o g, f
1 = S ( P ) + p 29
N

(PI,

V[UI ZZ [g'(4l2. Outline proof Expanding g (x ) in a Taylor series about p . we obtain


d z ) = ! d P ) + (.?.- P ) g W + ;(x - P)29"(/.1), where we are assuming regularity conditions sufficient to ensure the adequacy of this approximation in what follows. Taking expectations immediately yields the approximate form for E[y]; subtracting the latter approximation from both sides, squaring, taking expectations and ignoring higher order terms, yields the result for V[y]. Clearly, more refined approximationsare easily obtained by including higher order terms. a

In Definition 2.6, we introduced the notion of the independence of two events with subsequent generalisationsto mutual independence (Definition 2.12) and conditional independence (Definition 2.13). These notions can be extended to random quantities in the following way. Definition 3.5. (Mutual independence). The random quantities 21. . . . ,x,, are mutually independent $ f o r any t, E ! ? events {w;xt( w ) 5 t,}, for I the , i = 1. . . . ,n., are mutuully independent. We note that for independent random quantities 51 . . . ,z n ,

E n x , =

Definition 3.6. (Conditionalindependence). For any random quantify y, the random quantities 1 1 . . . . ,x,, are conditionally independent given y $ for any t , E 92, i = 1,. . . ,R, rhe events { w ; x,(w) 5 t , } are condirionally independent given the event { w ; y ( w ) 5 y}, for all y.

I*:,1

L:,

E [ T , ] and V

cxl
[ , ~ 1

=~ V [ G ] .

114

3 Generalisatians

Conditional independence will play a major role in our later discussion of modelling in Chapter 4. Many forms of technical manipulation of probability distributions are greatly facilitated by working with some suitable transformation of the original density or distribution function. One of the most useful such transforms is the following.

Definition 3.7. (Characteristicfunction). The characteristic function of u rundom quantity s is the function d, , mapping 92 to the conip1e.r plune, given by o,(t) t;"c"4]. f E 3. =
Among the most important properties of the characteristic function. we note the following.

1 Q r ( t ) <_ 1 and @,.(O) I


If
.rl, . . .

1.

o,is a uniformly continuous function o ft.


o,(t) = 1 Of,(U. Two random quantities have the same distribution if and only if they have the same characteristic function.
1 .

n:,

..rf,are independent random quantities, and s = I:=, s,. then

If E[.r'] < x, then o , ( t )= 1 q E [ . 7 4 ]+ o ( t ' ) .


/=I

.I .

Many similar properties hold for the closely related alternative transforms S[(' r ] . the moment generatingfunc-tion. and E[ t'1. the probahili? generating~4ficncrior1.

3.2.2

Some Particular Univariate Distributions

In this section. we shall review a number of particular univariate distributions which are frequently used in applications. and list some of their properties and characteristics. We shall assume that the reader is familiar with most of this material. and detailed discussion and derivations are therefore not given. The books by Johnson and Kotz (1969, 1970) provide a mass of detail on these and other distributions. One important initial warning is required! These distributions provide the building blocks for srutistic*almodels and are typically delined in terms of "parameters". The role and interpretation of "models" and *paratneters" within the general subjectivist, operationalist framework are extremely important issues. which will be discussed at length in Chapter4. For the present. '.parameters" should simply be regarded as "labels" of the various mathematical functions we shall be considering. although, as we shall see. these "labelling parameters" often relate closely to onc or other of the characteristics of the distribution.

3.2 Review of Probubiliry Theory The Binomial Distribution

115

A discrete random quantity x has a binomial distribution with parameters 8 and n (0 < 8 < 1. n = 1 , 2 , . . .) if its probability function Bi(x I n , 8) is

Bi(r(O,n)=

(:)

8"(1-0)"-'.

. r = O , l , ..., 11.

The mean and variance are E[I] = n8, and V[r] 7 4 1 - 8). A mode is attained = at the greatest integer Ai[z] which does not exceed I,,] = ( n 1)8; if I,,, is an and integer. then both x,], x,,, 1 are modes. r If n = 1, ; is said to have a Bernoulli distribution, with probability function denoted by Br(z 18). The sum of k independent binomial rmdom quantities with parameters (8, n,),i = 1, . . . k, is a binomial random quantity with parameters 8 and I I ~ . . . I I A .

+ +

The Hypergeometric Disrriburion


A discrete random quantity x has an hypergeometric distribution with integer parameters N, A1 and n (n 5 N hf) if its probability function Hy(x I A,hi. n) ' is

where

The mean and variance are given by

7N 1 E[4 = - and V[r\ =


N

+ hi

71.\1& ( N + hi - 71) ( N + n q z ( N + nr - 1)

A mode is attained at the greatest integer Al[x] which does not exceed
=
(n

.rm

1 ) ( N + 1). :11+N+2

if x,,, an integer, then both X,,~ x,,, 1 are modes. is and -

116
The Negative-Binomial Distribution

3 Generctlisations

A discrete random quantity 3: has a negative-binomial distribution with parameters 8 and r (0 < 6, < 1. f = 1 . 2 . . . .) if its probability function Nb(.r I r. 0) is

Nb(r 119.r ) = c (I'

;; 1

1 ) ( 1 - 8)'.

.I'

= 0. 1.2.

where c = 8'. The mean and variance are E[.4 = r8 and \.[.I.] = r ( 1 - H ) / H ' . If r ( 1 - 8) > 1 the mode Af is the least integer not less than [r(1 - 8 ) ] / 8 :if r( 1 - 0) = 1, there are two modes at 0 and I ; if I (1 - 0) < 1. df [.r]= 0. If I' = 1. r is said to have a geometric or Pascal distribution. Moreover, the sum of k independent negative binomial random quantities with parameters (8, r , ) ,i = 1 . . . . .k , is a negative binomial random quantity with parameters 0 and rl+...+rh.

[XI

The Poisson Distribution


A discrete random quantity .r has a Poissoti distribution with parameter A ( A > 0 ) if its probability function Pn(.r [ A ) is

x." Pn(.r [ A) = c' .r!

.I*

= 0. 1.2.

where c = c The mean and variance are given by E[.r]= \'[.I-] A. A mode = :\I [ r ] attained at the greatest integer which does not exceed A, It' A is an integer. is both X and A - 1 are modes. The sum of k independent Poisson random quantities with parameters A,. I = 1. . . . . k . is a Poisson random quantity with parameter XI .. . t X I .

',

The Beta Distribution


A continuous random quantity .r has a betti distribution with parameters o and (1 > 0. 3 > 0) if its density function Be(.)*0 . ,Ir) is ( 1
j

where
(a

l'(rr + .Ir) I- ((I ) r ( .j)

and r(.r) = J;," t ' ' d t ; integer and half-integer values of the gamma function are easily found from the recursive relation T ( . r + 1) = .rl-(.r), and the values I'( I ) = 1 and I-( 1/2) = v6 1.772.5. z

3.2 Review o Probability Theor?, f

117

Systematic application of the beta integral,

gives
0 czd E [ s ]= - and V[s1= a +d (0 + 3 ) ? ( .+ $ + 1) If a > 1 and 3 > 1, there is a unique mode at (a - l)/(n + ,3 - 2). If s has a Be(x I a, d) density, then ?J = 1 - s has a Be(y I 3. i r ) density. If o = A = 1, rz: is said to have a uniform distribution Un(r 10.1) on (0.1). By considering the transformed random quantity y = u + ~ ( -ba). where .r has a Be(r 111, a) density, the beta distribution can be generahed to any finite interval ( a . 6 ) . In particular, the uniform distribution Un(y 10.6) on ( a6 ) .

Un(yIa.b)=(b-u) has mean E [ y] = (u

I,

n<y<b,

+ b ) / 2 and variance V[y] = (6 - a)2/12

The Binomial-Beta Distribirrion

A discrete random quantity s has a binomial-betu distribution with parameters 0 , 9 and 11 ( u > 0. 8 > 0 , n = 1.2, . . .) if its probability function Bb(x 1 a, : 3 , 7 t ) is

Bb(o I a./j,n ) = c where

(:)

r(cz+ x ) r ( ! j+ 71 - x),

s = 0..

. . .n .

The distribution is generated by the mixture


r1

The mean and variance are given by

E [ x ]= n a

+ $9 and
XI,l

V[L] =

(0

na,d (a , n) j ,3)2( a + ;3 1)

+ + +

A mode is attained at the greatest integer Af [x] which does not exceed

( .

+ 1)(.

- 1)
'

~r+ij-2

if .I-,,, an integer. both zII, L,,, 1 are modes. If u = d = 1 we obtain the is and discrete uniform distribution, assigning mass ( T I 1 ) - * to each possible s.

118

3 Genemlisutions

The NeXcitive-Binomiul-BrrcrDistributioti A discrete random quantity s has a negutii,e-hinomicil-beta distribution with parameters ct. d and r (CI > 0. .j > 0. r = l . 2.. . .) if its probability function Nbb(.r In. d. 1 . ) is

The distribution is generated by the mixture Nbb(.r I (k.


.j. r ) =

The mean is t.,"r]= r.j(o - 1 )

I'

Nb(s 10. F ) Be(0 I (I. .j)tf0.

I. o

> I , and the variance is given by

The Guniniu Distribufion A continuous randomquantity s has agutnnzu distribution with parameters (1 and ( 0 > 0. ;j > 0) if its density function Ga(.r I o, J) is

.j

Ga(.r I (k.

<j)= r , r " - ' t

".

.I'

> 0.

where c = ,P/T(tr ). Systematic application of the gamma integral

givesE[.r] = n / J a n d V [ r ] = o/:j'. I f o > 1.thereisauniquemodeat ( o - l ) / , j : i f r r < 1 there are no modes (the density is unbounded). If = 1, .r is said to have an r.rpotientiul Ex(.r 1 .3) distribution with parameter :3 and density Ex(.r 1 J) = .jf, ". .I. 2 0.
f The mode o an exponential distribution is located at zero. If .j = 1. .r is said to have an Erlurig distribution with parameter 0 . If ( I = v/2. sj= 1/ 2 . .I' is said to have a (centrul) chi-syuured ( k 2 ) distribution with parameter 11 (often referred to as degrees qffreedonr) and density denoted by \ (.r I v ) or 4 (.r ). By considering the transformed random quantities !/ = (I + .I' o r := !J .r. where ;r has a Ga(.r I ( t . :3) density. the gamma distribution can be generalised to the ranges ( ( 1 . r-) or (-x.. Moreover. the sum of k independent gamma random h) quantities with parameters ((k,. 3 ) . I = 1.. . . . k. is a gamma random quantity with parameters ( t l + . . . + and .j.

'

3.2 Review of Probability Theory


The Inverted- Gamma Distribution

119

A continuous random quantity x has an inverted-gamma distribution with parameters a and /!! ( a > 0, 0 > 0) if its density function Ig(z I CL: 8) is

where c = jY'/r((i). Systematic application of the gamma integral gives

\X "I

=
(0

1' 3
- 1 ) 2 ( a - 2)
9

a>2.

There is a unique mode at B/(a + 1). The term inverted-gamma derives from the easily established fact that if y has a Ga(y I a. ,O) density then x = y-' has an I ( I a, 0)density. gz If x has an inverted-gamma distribution with CL = v/2, 0 = 1/2, then 2 is said to have an inverted-%: distribution. A continuous random quantity y has a squure-root inverted-gamma density, Ga-(':')(y 1 a, 3).if x = y' has a Ga(x I a. 9) density.

The Poisson-Gamma Distribution


(1, A

A discrete random quantity x has a Poisson-gummu distribution with parameters and v (a > 0, B > 0. v > 0) if its probability function Pg(x I n , 8, v ) is

Pg(2 10: 13,Y ) = c

r(n+x) v r x! ( p + I/)"+"

x = 0 . 1 . 2 ....

where c = P / r ( a ) . distribution is generated by the mixture The

This compound Poisson distribution is, in fact, a generalisation of the negative binomial distribution Nb(x I a, B/(L? v)), previously defined only for integer a. The mean is E [ x ] = va/p, the variance is V[z] = ~ ~ ( i i v)/,!?*. Moreover, and j if CLV > D v. there is a mode at the least integer not less than ( v ( a- l)//3) - 1; if av = fl v, there are two modes at 0 and 1; if QV < 1) + v. Af [z] 0. =

+ +

120
The Gunimu-Gunmu Distribution
0,z IS

3 Generulisatioru

A continuous random quantity ; has a gunznru-gumnzu distribution with parameters I j and I I (o > 0. *j> 0. I ) = 1.2. . . .) if its probability function Gg(.r. ( 1 . j I I ) 1 .

The distribution is generated by the mixture

The mean and variance arc given by

The Pureto Distribirtion


A continuous random quantity* I . has a Pureto distribution with parameters .j ( a > 0. d > 0) if its density function Pa(.r 10. .j)is
(I

and

where

= ~ 3 .mean and variance are given by The

E [ J ] -=
0

fk.j

- 1

. if ( 1 >

The mcxle i s ,II[.r] = J. The distribution is generated by the mixture Pa(.r In. ,j)=

1 1

Ex(.r -

.j10) Ga(H n. j) d H

.I = .y-

A continuous random quantity .y has an inverted-Pwetc~density Ip( g has a lfi(.r 10. 3 ) density.

I 11.

j)

if

3.2 Review of Probubiliry Theory

121

The Norm1 Distribution A continuous random quantity x has a normal distribution with parameters p and X ( p E R. X > 0) if its density function N(x I p, A) is
N(xIp.X)=cexp

where

The distribution is symmetrical about s = p. The mean and mode are E [ z ]= M [ x ] = IL and the variance is V [ x ]= A - ' , so that X here represents the precision of the distribution. Alternatively, N ( z 111, A) is denoted by N(x I p,I T - ~ )where , o2 = V[z] is the vuriance. If p = 0, X = 1, c is said to have a standurd normul distribution, with distribution function Q given by

@(s) = -

1 :

exp{-it2} d t .

If y = X'12(x - p ) = ( x - p ) / o , where I has a normal density N(x I p , A), then . y has a N(y 10: 1) (standard) density. In general, if y = a Cf.=, where b,.t.,, A,) the .r, are independent with N(s; I p;, densities, then y has a normal density, N ( y In b,pl. A) where X = b : / X j ) - ' , a weighted harmonic mean of the individual precisions. . If X I , . . . ,x ~are mutually independent standard normal random quantities, z?has a (central) k distribution. : then z =

+ cf'=,

(c:=,

cf'=,

The Non-central y2 Distribution A continuous random quantity LC has a non-centrul x2 distribution with parameters v (degrees of freedom) and X (non-centrality) (v > 0 , X > 0) if its density function y2(s I v, A) is

x2(x I v,A) =

C Pn (II X / ~ It 2 ( x 1 v + 2i).
1=0

i.e.. a mixture of central y2 distributions with Poisson weights. It reduces to a central x2(v) when X = 0. The mean and variance are E(x] = v X and V [ z ] = 2(v 2X). The distribution is unimodal; the mode occurs at the value M(x]such that x 2 ( A l [ ~ I: v. A) = x 2 ( A l [ z ]I v - 2, A). ] I f z l , . . . LCA. aremutually independent normal randomquantities N ( z , I p,: 1). c then z = El=, has a non-central x' distribution, LC?

+ .

2(tI k g = , r l ; Z ) .
The sum of k independent non-central x 2 distributions with parameters (v;, A,) is a non-central ,x2 with parameters vl + . . . + vk and XI + . . . + Xk.

122
The Logistic Distribittiori

3 Generuliscrtions

A continuous random quantity x has a logistic distribution with parameters o and , j ( o E R. . > 0) if its density function Lo(.r 10. J) is j

where c =

An alternative expression for the density function is

so that the logistic is sometimes called the sech-syitared distribution. The logistic distribution is most simply expressed in terms of its distribution function.
F,(s) [I + ( = a ,
-

(y)}]
-1

The distribution is symmetrical about .I' = o. The mean and mode are given by E(1.1 = :\f [.r]= o, and the variance is \'[s] j2ii2,/3. = The Student ( t ) Distrihitriotr A continuous random quantity .I' has a Studew distribution with parameten and o ( j i E '8. > 0. o > 0) if its density St(.r I i t . A. (k) is X

11.

where

The distribution is symmetrical about .r = i t . and has a unique mode A/[.r]=


i i , The mean and variance arc

The parameter rr is usually referred to as the clegrec~s c?f.freedom the distribution. of

3.2 Review of Probability Theory


The distribution is generated (Dickey, 1968) by the mixture

123

and includes the normal distribution as a limiting case, since

N(x J 11. A) = lirn St(x J p. A, a ) .


(t-w

Ify = A/(x-p),wherexhasaSt(.~ Ip.A,o)density,thenyhasa(srandurd) student density St(y 10,l. a). If a = 1, z is said to have a Cuuchy distribution, with density Ca(x I p,A). If x has a standard normal distribution, y has a k: distribution, and x and y are mutually independent, then

has a standard Student density St(z I O , l , u).

The Snedecor ( F ) Distribution A continuous random quantity x has a Snedecor, or Fisher, distribution with parameters cr and p (degreesuffieedutn) ( a > 0, 0 > 0) if its density Fs (xI a, 4) is

where

8 + If 0 > 2, E [ x ] = ,D/( - 2) and there is a unique mode at [ / 3 / ( p 2 ) ] [ ( a 2 ) / ( r ] ; moreover, if D > 4,


V[X] = 2

((Y+l3-2) a(@ 4) (0 - 2) -

o2

If x and y are independent random quantities with central ,y2 distributions, with, respectively, u1 and U.Ldegrees of freedom, then

1 has a Snedecor distribution with u and u2 degrees of freedom.

Relationships between some of the distributions described above can be established using the techniques described in Section 3.2. I. For a geometrical interpretation of some of these relations, see Bailey ( 1992).

124

3 Generulisutions

Example 3.1. (Gumma and 4 distributions). Suppose that .I' has a Ga(.r 1 f i ..1) den/ sity and let y = 2,j.r. Then, for ! > 0. we have

so that y has a t2(g12o) density. Since \' distributions are extensively tabulated. this relationship provides a useful basis for numerical work involving any gamma distribution.

Example 3.2. (Beta, Binomiul and Snedecor (F) distributions). Suppose that .I' has a density Be(.r 1 ct. .I) and let = .ir!o(1 - .r)j I . Then. for r/ > 0. and noting that
. = ntj[.j r

+ o,y] '. we have

so that y has the stated Snedecor ( F ) density. Binomial probabilities may also be obtained from the F distribution using the exact relation between their distribution functions given by Peizer and Pratt (1968)

Since 5' distributions are extensively tabulated, these relationships provide a useful basis for numerical work involving any beta or binomial distribution.

Example 33. (Approximutemomentsfor transformed random quuntih'es). Suppose that .r has a Be(r 10. ;j) density. 0 < .r < 1. but that we are interested in the means and variances of the transformed random quantities

Recall ing that

and noting that

3.2 Review of Probubiliry Theory

125

application of Proposition 3.3 immediately yields the following approximations:

We note. in particular, that if p z the second (correction) terms in the mean approximations will be small. and that, for all 11, the variance is stabilised (i.e.. does not depend on [ I ) under the second transformation.

3 2 3 Convergence and Limit Theorems ..


Within the countably additive framework for probability which we are currently reviewing, much of the powerful resulting mathematical machinery rests on various notions of limit process. We shall summarise a few of the main ideas and results, beginning with the four most widely used notions of convergence for random quantities.

Definition 3.8. (Convergence). A sequence X I , . .,o random quantities: x2,. f converges in mean square to a random quuntity .c ifand only i f
I-x

lini ~ ( ( - 3 : ) 7 = 0; r,

converges almost surely to a random quantify 3: ifand only if

in other words, i f s , ( w ) tends to ~ ( wfor all w except those lying in a ) set o P-measure zero; f converges in probability to a random quantity s i and only i f f for all E > 0 , lim P ( { w ; I.r,(w) - r(w)l > E}) = 0;
I-*

converges in distribution to a rundonr quantity J ifand only if the corresponding distribution functions are such that
!--a

liin F,(t) = F ( f )

at all continuity points t of F in ?R; we denote this by FI

F.

126

3 Generalisations

Convergence in mean square implies convergence in probability; for finite random quantities, almost sure convergencealso implies convergence in probability. Convergence in probability implies convergence in distribution; the converse is false. Convergence in distribution is completely determined by the distribution functions: the corresponding random quantities need not be defined on the same probability space. Moreover. (i) F, -+ F if and only if, for every bounded continuous function g. the sequence with respect to F, converges to the expected of the expected values E[g(.r,)] value E [ g ( s )with respect to F; ] (ii) if F, F and ~ ; ( tand q ( t )are the corresponding characteristic functions, ) then o,(t)-+ O ( t ) for all f E 3; converse also holds. provided that o(t ) is the continuous at f = 0; (iii) (Hel/ys rheorem) Given a sequence {Pi. I ; ; . . . .} of distributions functions such that for each E > 0 there exists an such that for all i sufficiently large F,(ci)- F,(-n) > 1 - 5. there exists a distribution function F and a subsequence FfI. . . . , such that I.,, F,?. F.
-2

An important class of limit results, the so-called luw.s ujlm-gvtti4mbers. link the limiting behaviour of averages of (independent) random quantities with their expectations. Some of the most basic of these are the following: (i) If.rl. s2.. . . are independent, identically distributed random quantities with E!.rf] < x. and E [ s , ]= p . then the sequence of random quantities T,, = I)- .r,. I ) = 1 . 2 . . . . . converges in mean square (and hence in probability) to 11; that is to say. to a degenerate. discrete random quantity which assigns probability one to I t . (ii) The w c i k k u ~of lurge numbers. If .rl..r?.. . . are independent. identically distributed random quantities with E[.r,] = < x, then the sequence of random quantities T,,. tI I= 1.2. . . . . converges in probability to I / .

x:..

(iii) The strong law oj/argen i r ~ d w s .Under the same conditions as in (ii). T,,. I I = 1.2.. . . . converges almost surely to 1.

In addition. there is a further class of limit results which characterises in more detail the properties ofthe distance between the sequence and the limit values. Two important examples are the following: (i) 7he centrcrl littiit theurmi, If . r ) . . r ~ . are independent identically distributed random quantities with El.).,; = 11 and \ . [ . I . , ] = o-) x. for all i . then the < sequence of standardised random quantities
-1,

. - .- I.,, - I /
f 7 i

J;; *

I/

= 1.2.

converges in distribution to thc standard normal distribution.

3.2 Review of Probubiiiry Theory

127

(ii) The law of the irerared logarithm. Under the conditions assumed for the central limit theorem, Xli - P -1/2 limsup-((2loglogn) = 1.
I(--%

./Jr;

There are enormously wide-ranging variations and generalisations of these results, but we shall rarely need to go beyond the above in our subsequentdiscussion.

3.2.4

Random Vectors,Bayes' Theorem

A random quantity represents a numerical summary of the potential outcomes in an uncertain situation. However, in general each outcome has many different numerical summaries which may be associated with it. For example, a description of the state of the international commodity market would typically involve a whole complex of price information; the birth of an infant might be recorded in terms of weight and heart-rate measurements. as well as anencoding (for example, using aO1 convention)of its sex. It is necessary therefore to have available the mathematical apparatus for handling a vector of numerical information. Formally, we wish to define a mapping

which associates a vector z ( w ) of k real numbers with each elementary outcome w of R. As in the case of (univariate) random quantities, we move the focus of attention from the underlying probability space { 52, F, } to the context of Skand P an induced probability measure P,. However, we shall again wish to ensure that P, is well-defined for particular subsets of Rk and this puts mathematical constraints on the form of the function x. Generalising our earlier discussion given in Section 3.2. I , we shall take this class of subsets to be the smallest a-algebra, B,containing This then all forms of k-dimensional interval (the so-called Bore1 sets of prompts the following definition.

e).

Definition 3.9. (Random vector). A random vector x on N probability space {R, F.P } is afuncrion x : 52 3 X C 'Jzk such that
x - l ( f 3 )E F. for all BE

B.

For a random vector x. the induced probability measure P, is defined in the natural way by P,(R) = P ( x - ' ( R ) ) . H E B. The possible forms of disrribution for x,P,, are potentially much more complicated for a random vector than in the case of a single random quantity. in that they

I28

3 Ceiiercilisuriotis

not only describe the uncertainty about each of the individual component random quantities in the vector. but also the dependencies among them. As in the one-dimensional case, we can distinguish discwte distrihtioiis. where x takes only a countable number of possible values and the distribution can be described by the probability (~,rciss)fiitictioti
px(cc) = P({w: ( w )= x}). x

and (absolutely)continirous distributions. where the distribution may be described p x such that by a d~~nsityfiiric~tioti( x )

The distribution firnction of a random vector x is the real-valued function -, [O. I ] defined by E.', :

I.',(x) = k i ( . r , .. . . . .r r ) = & { ( - x . . r ,17... n (-n..r,]}. ] In addition. we could have cases where some of the components are discrete and others are continuous. Some components. of course. might themselves be ;I mixture of the two types. In what follows. we shall usually present our discussion using the notation for the continuous case. It will always be clear from the context how to reinterpret things in the discrete (or mixed) cases. The density p r ( z ) = p x ( . r l .. . . . ./'A) of the random vector x is often referred to as the joint density of the random quantities .rI. . . . . . r k . If the random vector x is partitioned into x = (y. 2 ) . say. where y = ( . I . , . . . . . . r , ) . z = (.rI- I . . . . . . r * k ! . the niargittul density for the vector y is given by

or alternatively, dropping the subscripts without danger of confusion.


p(r,.. . . . . r f ) =

.I'

h'k - I

p ( . r , .. . . . .rA ) d r , .

. . d.rk
'

This operation of passing from a joint to a marginal density occurs so commonly in Bayesian inference contexts (see Chapter 5. in particular) that it is useful to have available a simple alternative notation. emphasising the operation itself. rather than the technical integration required. To denote the nrorgintiliscitioti opcv-titioii we shall thereforc write I d Y . 2 ) +I+(Y).
I/

The iwditiotial density for the random vcctor z. given that y ( w ) defined by

y. is

3.2 Review of Probability Theory

129

or, alternatively, again dropping the subscripts for convenience,

We shall almost always use the generic subscript-free notation for densities. It is therefore important to remember that the functional forms of the various marginal and conditional densities will typically differ.

Proposition 3.4. (Generaked Bayes theorem).

Proof. Exchanging the roles of y and z in the above, it is obvious that


P&)

= P&(Z

I Y)PY(Y) = Pylzb I Z)PZ)z(Z),

which immediately yields the result.


It is often convenient to re-express Bayes theorem in the simple proportionality form PYIAY I z> i P x& I Y)PY(Y), since the right-hand side contains all the information required to reconstruct the normalising constant,

should the latter be needed explicitly. In many cases, however, it is not explicitly required since the shape of p l ( I z ) is all that one needs to know. y,y
In fact, this latter observation is often extremely useful for avoiding unnecessary detail when canying out manipulations involving Bayes theorem. More generally, we note that if a density function p ( z ) can be expressed in the form c q ( z ) , where q is a function and c is a constant, not depending on x.then since Any such q ( x ) will be referred to as a kernel of the density p ( z ) . The proportionality form of Bayes theorem then makes it clear that, up to the final stage of calculating the normalisingconstant, we can always just work with kernels of densities.

130

3 Generdisutions

For further technical discussion of the generalised Bayes theorem, see Mouchart (I976), Hartigan (1983, Chapter 3) and Wasserman and Kadane (19%)). As with marginalising, the Buyes theorem operation of passing from a conditional and a marginal density to the other conditional density is also fundamental to Bayesian inference and. again, it is useful to have available an alternative notation. To denote the Bayes theorem operation. we shall therefore write
P;y(r

I Y) -,:I d Y ) = PJlZ(Vl4

In more explicit terms. and dropping the subscripts on densities. Bayes theorem can be written in the form
p ( 1, . . . . . .r, 1 s t
JHt

I.

. . . ..r1)
p(.r,+1.. .. .s/, . 1 . . ....I . , ) p(J.1.. . . . . I . , ) 1.1 p ( z , , I . . . . .xk I X I , . . . ..r,) p ( . r l . . . . ..r,)d r l . . . dr,

Manipulations based on this form will underlie the greater part of the ideas and results to be developed in subsequent chapters.
In particular. extending the use o f the terms given in Chapter 2. we shall typically interpret densities such as p ( . r , .. . . .. r l ) as describing beliefs for the random quantities . r l ,. . . . .r, before (i.e..prior to) observing the random quantities .r,, I . . . . .xk,andp(.rJ... . . .r, I .r,. ,.. . . . .rL) as describing beliefs after(i.e..posrerior to) observing .r, . . . . . . . r ~ .

Often, manipulation is si m pl i lied if independence or conditional independence assumptions can be made. For example, if s l .. . .-I~ . were independent we would have
t

p ( J . 1 . . . . . . r , ) = JJp(.r,) :
1-1

if .rf+ I . .

... xk were conditionally independent. given .rl. . . . . .rl. we would have


p(.cttI. . . . .rk .rl.. . . . .r,) = .

n
1
I - t T I

p(.rl j .TI. . . . ..I.,).

If x is a random vector defined on ($2.3. that z : I I P } such X C >RL 1 and if g : R ( h 5 k ) is a function such that (g o z) ( B )E 3 for all B E B,then g o x is also a random vector. We shall typically denote g o x by g(x),and. whenever we refer to such vector functions of a random vector x. it is to be understood that the composite function is indeed a random vector. Writing y=g(x).the random vector y induces a probability space (Y?. B.Ifv}where
f

P y ( B )= P x ( g - l ( B ) )-= P((g0z) l ( z 3 ) ) .

13 t

B.

3.2 Review of Probability Theov

131

and distribution and density functions Fy,pv are defined in the obvious way. These forms are easily related to Fx,px. In particular, if g is a one-to-one differentiable function with inverse g-*, have, for each y E Y , we

where

is the Jacobian of the transformation g-',defined by

where

h ( Y ) = [ g - ' ( d ]; '
If h < k, we are usually able to define an appropriate z , with dimension k - h, such that w = f(s) ( 9 ,z) = (g(x), ) is a one-to-one function with inverse = z f-',and then proceed in two steps to obtain py(y)by first obtaining
P d W ) = P Y . Z ( ? / , 4 = Px(f-'(w))lJf-l(w)l.

and then marginalising to

The expectation concept generalises to the case of random vectors in an obvious way.

Definition 3.10. (Expectation o a random vector). ifx,y are random vecf forssuchthary = g(x),x: R - X C R k , g : - Y C R h ( h 5 k), the , Rk , expectation ofy, E[y] = E(g(s)], a vector whose ith component. is

EIYJ=E[g(x)l,=E[g,(x)],= l , . . . , h i
i dejined by either s

for the discrete and absolutely continuous cases, respectively, where all the equalities are to be interpreted in the sense that ifeither side exists, so does the other and they are equal.

132

3 Generulisutions

In particular, the forms defined by E [I'IfY,.c:'~]are called the moments of x of order n = i i l + . . . + i l k . Important special cases include the first-order moments. E [ s , ]i. = 1. . . . .k,and the second-order moments, E[.rf]. i = I . . . . .k ) . E [ J * 1. ( s ,lJ (1 <_ i # j 5 k ) . If . r l . .. . ..)'A are independent. then E [II,.r:'] = 11, I I [ . r , ' ] . The cuvuriunce between r l and sJ is defined by

C.[.r,..I..,] = ~ [ ( . r , E[.L.,]) - ~ ( r , ]= ] [ . r , r , ,- ~ [ . r ~][ . r ., ] (.rJ )~ ] ,


and the correlation by

The Cauchy-Schwarz inequality establishes that 1 R[.r,. I 5 1. The ex.I.,] pectation vector with components E[x1], . . . E[.rk]is also called the mean vector, . E [ z ]of z; k x k matrix with ( i . j)thelement C[.r,.x,] called the covariance , the is of mutrk. V[x]. 2. If the components of 2 are independent random quantities. V[x] reduces to a diagonal matrix with (i. i)th entry given by \'[r,]. As in the case of a single random quantity, exact forms for moments of an arbitrary transformation. 21 = g(x). are not available. We shall not need very general results in this area. but the following will occasionally prove useful.

Proposition 35 (Approximate mean and covariance). If x is ( I rundom .. wcror in Xk,wirh E ( z ] = p, V[z] = C und y = g(x) is Q oii~-to-one trutisforwiutiott of x such thut g exists, then, subject to conditions on the distribution of x and on the smoothness of g,

'

E [ ( g ( z ) ) , = E [ y W ]= S f b ) + f t r [ C V " Y f ( P ) ] ]

V[9(z)] J g h ) C J $ ( c L ) .
where,for i = 1 . . . . . k,

where t\r[.]denotes the truce ofu mutrix urgurnent. Proof. This follows straightforwardly from a multivariate Taylor expansion: the details are tedious and we will omit them here.

3.2 Review of Probability Theory


A nore on rneusure theory. Readers familiar with measure theory will be aware

133

that there are many subtle steps in passing to a density representation of the probability measure PI. In particular. a detailed rigorous treatment of densities (Radon-Nikodym derivatives) requires statements about dominating measures and comments on the versions assumed for such densities. Readers unfamiliar with measure theory will already have assumed-correctly! -that we shall almost always be dealing with the standard versions of probability mass functions and densities (corresponding to counting and Lebesgue dominating measures and smoothly defined). Only occasionally. in Chapter 4. do we refer to general, i.e., non-density. forms.

3.2.5

Some Particular Multivariate Distributions

We conclude our review of probability theory with a selection of the more frequently used multivariate probability distributions; that is to say, distributions for random vectors. As in Section 3.2.3, no very detailed discussion will be given: see, for example Wilks (1962), Johnson and Kotz ( 1969, 1972) and DeGroot ( 1970) for further information.

The Multinoniial Distribution


A discrete random vector z = (sl ~. . sk) has a mulrinomial distribution of di. mension k, with parameters 8 = (61,. . . . O r ) and n (0 < 6, < 1, C,6, < 1, r7 = 1.2,. . .) if its probability function M u k ( z 18, n ) . for I, = 0 . 1 . 2 , . . with I E,=, 5 72, is J-,

..

The mean vector and covariance matrix are given by

The mode@)of the distribution is (are) located near E [ z ] ,satisfying

these inequalities, with the condition relatively few points.

x:=,5
xf

7t,

restrict the possible modes to a

134

3 Generalisiitions

The marginal distribution ofd"" = (XI.. . ..I*,,,),i u < k, is the multinomial . MU^,,(^('^') 181, . . . B,,,, n ) . The conditional distribution of x"") given the remain-

ing s,'~ is also multinomial, and it depends on the remaining s , ' ~ through their only sum s = E:=~,,, ;specifically, J

Ifz = (xl... . sr.) has density M u k ( x 10. ) I ) then y = (yI. . . . y f ) where .

has density Mut(y j 4, n ) , where

If z is the sum of 111 independent random vector> having multinomial densities with parameters (8.n , ) . i = 1.. . . . t i t . then z also has a multinomial density with parameters 0 and (711 . . , I , , , ) . If k = I, MUI. 10. I ! ) reduces to the binomial (z density Bi(x 18./ I ) . If .r1. . . . . .rl are k independent Poisson random quantities with densities Pn(x, I A,), then the joint distribution of 2 = ( . r I . . . . J L )given ,r, = I I is multinomial Mu(.r 1 8. n). with 8, = A,,' CtZI A,.

+. +

E:-J

The Dirichler Di.striburion


A continuous random vector x = ( X I . . . . .. r k ) has a Dirichler distribution of dimension k , with parameters a = (oI . . . . . ( ~ k . )+ ~( ( I , > 0. i = 1. . . . . k + 1 ) if its probability density Dik(z 1 a). 0 < .r, < 1 and .rI + . . + .vk < I, ih

where

Ifk = I . D i k ( z ( a )reducestothc betadensity B c ( . r t ( 1 1 . ( 1 2 )In thegeneralcase. . the mean vector and covariance matrix arc given by

3.2 Review of Probability Theory


If czi

135

> 1, i = 1,. . . k, there is a mode given by


M [ x , ]=
-1

C::: aJ - k - 1

The marginal distribution of d"') (xl,. . . ,z~,~), < k, is the Dirichlet = mi

.. k The conditional distribution, given z,,,+~, . ,z ,of

I ( is also Dirichlet, Di,,,(x',,. . . ,xi,,0 1 , . . . .(I,,,, t k + l ) . In particular,

Moreover, if z = ( I ~ ,.. xk) has density Dir,(Z I a). y = (y1,. . . yt) where . , then

has density Dit(y 10). where

The Mulrinornial-Dirichler Distribution


A discrete random vector z = ( X I , . . , .rk) has amultinomial-Dirichletdistribution . of dimension k, with parameters a = (oL., oktl) 71 where at > 0, and .., and 71 = 1.2,. , .. if its probability function Mdk(z I a, ) , for s, = 0 , l . 2 . . . .. with n a r 1 5 ri. is

where cr['I = fl;=,(a J - 1) defines the ascending facrorial function, with + k x ~ + I = )L .rJ and

I,=,

I,!

C=

CJ=l

kt i

pi]

"J

136
The mean vector and covariance matrix are given by

3 Generu1isution.s

The marginal distribution ofthe subset { . r l . . , . . J , } is a multinomial-Dirichlet : with parameters { G I . . . . . o,,.1 n,/ - C; . I n I } and I I . In particular. the marginal distribution of .c, is the binomial-beta Bb(.r, l(\,. ()! - ( I , ) . Moreover, the conditional distribution of { .r.-l. . . . . .I'L } given' { .rl. . . . . . I . . } is also I ,,=, multinomial-Dirichlet. with parameters { o , - I . . . . . o k . Ct o,,-1 . . I o,} and 12 Y,,. For an interesting characteri7~tion this distribution. see Basu and of Pereird ( 1983).

I'

c::;
:I

c';.z=l

The Norinul-Gurnmu Distrihutiort


A continuous bivariate random vector ( J . y) has a rtor~nal-,~um~m distribution, with parameters / I . A. n and ,j. (11 E 'N.X > 0. o. > 0. 8 j > 0) if its density Ng(s.ylp,X.n. 9) is

where the normal and gamma densities are defined in Section 3.2.2. It is clear from the definition. that the conditional density of s given p is N(.r I p . Xy) and that the marginal density of y is Ga(y 1 n. ,j). Moreover. the marginal density of .I. is St(.r I p. Xn/d. h). The shape of a normal-gamma distribution is illustrated in Figure 3. I . where the probability density of Ng(.r. y 10. 1.5.5) is displayed both as a surface and in tcrms of equal density contours.
The Miiltivuriute Norniul Distribution A continuous random vector x = ( . I * , . . . . . has a r?titltivariutenormul distribution ofdimension k. with parameters p = ( p1 . . . . . ) and A. where p E '5' and A is a k x k symmetric positive-definite matrix. if its probability density N k ( z 1 p . A ) is N ~ ( Z ~ ~= (A r x p - A ( x.- ~ ) ' A ( - p ) } . x E IP. . *) x
,181.)

3.2 Review of Probability Theory

137
Y

I'

-2

F i r e 3.1 The Normal-gamma densiry Ng(z, y Io.1.5,5)

where c = ( 2 ~ ) - ~ / ~ 1 X 1 ' / * . If k = 1, so that X i s a scalar, A, Nk(z I p, A) reduces to the univariate normal density N(x 1 p , A).

In the general case, E[x,]= p,, and, with C = A-' of general element u t j , V [ x l ] u , and C[x,..c,]= ~ 7 so that ,V[z]= X-'. The parameter p therefore = ~ ~ ~ labels the mean vector and the parameter X the precision matrix (the inverse of the

130

3 Generdisutions

coturiance mutri.r, X). If y = Ax,where A is an rri x k matrix of real numbers such that ACA' is non-singular. then y has density N,,, ( y I Ap.( ACA' 1 ). In particular. the murginal density for any subvector of x is (multibariate) normal. of appropriate dimension, with mean vector and covariance matrix given by the corresponding subvector of p and submatrix of X I . Moreover, if x = (21.22) a partition of x,with 2,having dimension k,. is and kl k? = k, and if the corresponding partitions of p and X are

'

then the conditional density of xI given x2 is also (multivariate) normal. of dimension kl with mean vector and precision matrix given. respectively. by

' ( The random quantity y = (a:- p)'X(a:- p ) has a kg I k ) density. We also note that, from the form of the multivariate normal density. we can deduce the integral formula

The Wishart Distribution


A symmetric, positive-definite matrix x of random quantities s,,= x,,. for 1 = I, . . . . k, j 5= 1. . . . .k, has a Wishart distribution of dimension k. with parameters
n and 0 (with 2ct > k - 1 and a k x A: symmetric. nonsingular matrix). if the density Wik(a: 1 n.p) of the k(k + 1)/2 dimensional random vector of the distinct entries of x is

Wik(a: where c = l / 3 1 n / I ' k ( ~ t ) ,

I'1

is the generulised gummufunction and tr( .), as before, denotes the truce o f a matrix argument. If k = 1, so that 0 is a scalar 3. then L\*k( 2 I 0. reduces to the gamma /3) density Ga(cr I ck. Jj).

3.2 Review of Probability Theory

139

If { x1 . . . ,x,} is a random sample of size n > 1 from a multivariate normal ? x,, Z is N k ( f 1 p , nA), and then Nk(x, I p, A), and Z = 7t-I

s = C(Z?Z)(q- 3)' c=l

11

is independent of Z, has a Wishart distribution Win.(SI i ( n - l ) , $A). and The following properties of the Wishart distribution are easily established: E [ s ]= ap-' and E [ x - ' ] = (a - (k 1)/2)-'D; if g~ = AxA' where A is an rn x k matrix (m 5 k) of real numbers, then y has a Wishart distribution of dimension m with parameters o and (A0-I A')-', if the iatterexists; in particular, if x and p-' conformably partition into

x=

(:;: ):
.

p-'

(:; ):

where 2 1 1 . 6 1 1 are square h x h matrices (1 5 h < k), then x11 has a Wishart distributionofdimensionh withparametersaand(aIl)-'. Moreover,ifzl, . . . , x 8 are independent k x k random matrices, each with a Wishart distribution, with parameters a t ,p, i = 1. . . . Y, then 21+ . . . + x,also has a Wishan distribution, with parameters a1 . . a, and 0. We note that, from the form of the Wishart density, we can deduce the integral formula 1 1"- ( P+ 1)/2 exp{-tr(ps)} ctx = c-l ,

+ +

the integration being understood to be with respect to the k(k elements of the matrix x. The Multivariate Student Distribution

+ 1)/2 distinct

A continuous random vector z = ( X I , .. .xk) has a multivariate Student distri. bution of dimension k, with parameters p = (111.. . . . pk). A and (Y ( p E Xk, A a symmetric, positive-definite k x k matrix. a > 0) if its probability density &(a: I p , A, a ) is

&(a: I p. A, a)= c where

- p)9(x- p)

-(n+k)/2

. xEqk.

If k = 1, so that A is a scalar. A, then Stk(x 1 p, A, a ) reduces to the univariate Student density St(z I p: A! Q). In the general case, E ( s ] = p and V [ z ]= A - ' ( a / ( a- 2)). Although not exactly equal to the inverse of the covariance matrix, the parameter A is often referred to as the precision matrix of the distribution.

140

3 Generalisations

If y = Ax, where A is an 111 x k matrix ( i n 5 k) of real numbers such that AA-IA' is non-singular. then y has density St,,,(y I Ap. (AA-'A')-'.o). In particular. the marginal density for any subvector of a: is (multivariate) Student, of appropriate dimension, with mean vector and inverse of the precision matrix given by the corresponding subvector of p and submatrix of A - ' . Moreover, if x = (xl.x2) is a partition of x and the corresponding partitions of 1.1 and A are given by

then the conditional density of 2 1 . given a:? is also (multivariate) Student. of dimension k l , with ct kz degrees of freedom. and mean vector and precision matrix, respectively. given by

111 - &%2(a:?
(k

- p2). - 111')

A11

+ A*:! [ + (a:? - p2)'(A22 - X?IA,'A,?)(x2


0

The random quantity y = (a: - p)'A(a:- p ) has an Fs(y 1 k . ( t ) density. The Multivariate Nornial-Garntnn Distribrrtiotr

.. t * ~ . )and a random quantity y have (2.1. a joint tnultivariate norntal-ganvna distribution of dimension k . with parameters p. Act. J ( p E g k ,X a k x k symmetric. positive-detinite matrix. ( I : 0 and > : > 0) if the joint probability density of a: and y. Ngk(a:.y 1 p. A. 0 . .j) is j
A continuous random vector x =

Ngk(x. I p. A. O . .j)= NA(a:I p. Ap)Ga(!/ 10. .j). y


where the multivariate normal and gamma densities have already been defined. From the definition, the conditional density of x given is NI,(5I p. A!]) and the marginal density of y is Ga(g I(\. ;3). Moreover. the marginal density of a: is Stk(x I p. (t-':jA. h).

The Mirltivarinte Normal- Wishart Disrribi4tioti A continuous random vector x and a symmetric, positive-delinite matrix of random quantities y have a joint Nornwl- Wislzart distribution of dimension k, with parameters p. A. u . 0 ( p E 92'. X 3 0. integer 20 > k .- 1, and 0 a k x k symmetric. non-singular matrix), if the probability density of x and the k(k + 1 ) / 2 distinct elements of g,Nwk(z. y I p. A. (t. p ) is
Nivk(2. y

I p . A. 0 .0 ) = Nk(a: I p. X y ) Wik(y l(1.0).

where the multivariate normal and Wishart densities are as defined above. From the definition. the conditional density of a: given y is N k ( z I p. Xy) and the marginal density of y is Wik.(y ] ( I . 0 ) .Moreover. the marginal density ofa: is Stk(x 1 p , Anp I . 20 ).

3.3 Generalised Options and Utilities

141

The Bilateral Pureto Distribution A continuous bivariate random vector (I, has a bilaterul Pureto distribution with y) parameters &, dl, and CI ({/?", < O1. a > 0 ) if its density function E R2, Paa(z, y 1 a, A ) is
Paz(I.yItr.~o.B,)=('(y-x)-("+').

J 5 85,. y>i31.

where c = ck(a

+ l)(/3, -

The mean and variance are given by

V [ s ]= V[y] =

Q($l

- /%)2
v

( n - 1 ) * ( 0 - 2)

ifa > 2,

and the correlation between I and y is - n - ' . The marginal distributions of tl = / ) I -.~andt.~=y-i3,)arebothPa(f1;31 -ik.cr).

3.3
3.3.1

GENERALISED OPTIONS AND UTILITIES


Motivation and Preliminaries

For reasons of mathematical or descriptive convenience, it is common in statistical decision problems to consider sets of options which consist of part or all of the real line (as in problems of point estimation) or are part of some more general space. It is therefore desirable to extend the concepts and results of Chapter 2 to a much more general mathematical setting, going beyond finite, or even countable, frameworks, first by taking & to be a o-algebra and then suitably extending the fundamental notion of an option. In the finite case, an option was denoted by a = {cJ I EJ. j E J } . with the straightforward interpretation that. if option (1 is chosen, cJ is the consequence of the occurrence of the event E,. The extension of this function definition to infinite settings clearly requires some form of constructive limit process, analogous to that used in Lebesgue measure and integration theory in passing from simple (i.e., "step") functions to more general functions. Since the development given in Chapter 2 led to the assessment of options in terms of their expected utilities, the "natural" definition of limit that suggests itself is one based fundamentally on the expected utility idea (Bemardo, Ferrhndiz and Smith, 1985). Let us therefore consider a decision problem { A ,E , C , I}. described which is F by a probability space { $2: . . P } and utility function u : C + %, and let

142

3 Generalisations

In other words, 'Dconsists of those functions (soon to be called decisions) rl : (1 C for which u o d = u(d(.)) is a random quantity whose expectation exists. In the case of the particular subset A of 'D, E ( a ) = i i ( r r I S 2 ) is precisely the expected utility of the simple option n (see Definition 2.16 with G = IT; we shall return later to the case of a general conditioning event G and corresponding probability measure P ( . I G)). In all the definitions and propositions in this section. A and { R. F.P } are to be understood as fixed background specifications.
Definition 3.1 1. (Convergence in expected utifity). For u giivn iitilityjm.tion. t i : C 3. sequence ofjuncrions (11. (11. . . . in 'D i5 said to ti-cwrivergi~ u to ( I firnction tl in 'D, written d, -+I, ti. if turd only if

(i)

(L

o d, converges to

(ii) ri(rl,)

11

o (1 almost sure!\. (with respect to . ) ' f

z(d).

Definition 3.12. (Decisions). Fnra given utili~fincrion. : C zi 92, [ijunction (I E 'D is (I decision (getirralised option) if' und only ij' there cvisrs u sequence 01.02. . . . o simple options siich thut ( I , f (I; the r*ulrte of
-),

- ( d ) = liiii i i ( t 1 , ) n ,
is then called the expected utilic of the derision d.
Discussion of DeJnifions 3.I I trnd 3.12. In abstract mathematical terms. the extension from simple functions, mapping C to 'J?. to more general functions requires some form of limit process. However. the fundamental coherence result of Proposition 2.25 was that simple options should be compared in terms of their expected utilities. In order for this to carry over smoothly to decisions (generalised options), it is natural to require a constructive definition of the latter in terms of a limit concept directly expressed in terms of expected utilities. As it stands, however. this constructive definition does not provide a straightforward means of checking whether or not, given a specified utility function. t i . a function d E D is or is not a generalised option. However. we can prove that any (I E D such that o t i is essentially bounded (i.e.. 11 o rl is bounded except on a subset of $2 of P-measure zero) is a decision. More specifically. we can prove the following.

Proposition 3.6. Given a utility function u : C → ℝ, for any function d ∈ 𝒟 such that u∘d is essentially bounded, there exist sequences a_1, a_2, ... and a'_1, a'_2, ... of simple options such that a_i →_u d, a'_i →_u d and, for all i, ū(a_i) ≤ ū(d) ≤ ū(a'_i).

Proof. We prove first that if u∘d is essentially bounded above then there exists a sequence of simple options a_1, a_2, ... such that a_i →_u d and ū(a_i) ≥ ū(d), for all i. (An exactly parallel proof exists if "above" is replaced by "below" and "≥" by "≤".) We begin by defining the partitions {E_ij, j ∈ J_i}, i = 1, 2, ..., where

E_ij = {ω ∈ Ω; -i + (j-1)2^{-i} ≤ u∘d(ω) < -i + j2^{-i}},  j = 1, ..., i2^{i+1},

together with the two extreme events {ω ∈ Ω; u∘d(ω) < -i} and {ω ∈ Ω; u∘d(ω) ≥ i}.

For each i, this establishes a partition of Ω into 2(i2^i + 1) events, in such a way that the two extreme events contain outcomes with values of u∘d(ω) < -i or ≥ i, whereas the other events contain outcomes whose values of u∘d(ω) do not differ by more than 2^{-i}. We now define a sequence {a_i} of simple options a_i = {c_ij | E_ij, j ∈ J_i} such that:

(i) if P(E_ij) = 0 then c_ij is an arbitrary element of C;

(ii) if P(E_ij) > 0 then c_ij ∈ d(E_ij) and u(c_ij) ≥ ū_ij(d).

To see that the c_ij exist and are well defined, note that, since ū(d) < ∞, there exists ū_ij(d) < ∞, defined by

ū_ij(d) = [P(E_ij)]^{-1} ∫_{E_ij} u(d(ω)) dP(ω);

but, if u(d(ω)) < ū_ij(d) for all ω ∈ E_ij, then we would have

∫_{E_ij} u(d(ω)) dP(ω) < ū_ij(d) P(E_ij),

thus contradicting the definition of ū_ij(d). By construction, u∘a_i converges to u∘d almost surely: for all ε > 0 and almost all ω, there exists i such that u∘d(ω) ∈ [-i, i), with 2^{-i} < ε and, for this i, |u(a_i(ω)) - u(d(ω))| < ε. In addition, for all i,

ū(a_i) = ∑_j u(c_ij) P(E_ij) ≥ ∑_j ū_ij(d) P(E_ij) = ū(d).

To show that a_i →_u d, it remains to prove that

∫_Ω |u(d(ω)) - u(a_i(ω))| dP(ω) → 0  as  i → ∞.

Writing ∫_Ω = ∫_{A_i} + ∫_{A_i^c}, where A_i = {ω ∈ Ω; u(d(ω)) < -i}, we note that, for sufficiently large i (larger than the essential supremum of u∘d),

∫_{A_i^c} |u(d(ω)) - u(a_i(ω))| dP(ω) ≤ 2^{-i},

which converges to zero as i → ∞; moreover, since ū(d) < ∞, ū(a_i) ≥ ū(d), for all i, and

∫_{A_i} |u(d(ω)) - u(a_i(ω))| dP(ω) ≤ 2 ∫_{A_i} |u(d(ω))| dP(ω)

and, since A_i ↓ ∅ as i → ∞, this also converges to zero. ◁

In fact, we can show that any decision, whether or not u∘d is essentially bounded, can be obtained as the limit of "bounding sequences of simple options", in the sense made precise in the following.

Proposition 3.7. Given a utility function u : C → ℝ, for any decision d ∈ 𝒟, there exist sequences a_1, a_2, ... and a'_1, a'_2, ... of simple options such that a_i →_u d, a'_i →_u d and, for all i, ū(a_i) < ū(d) < ū(a'_i).

Proof. We shall show that there exists a sequence of simple options a_1, a_2, ... such that a_i →_u d and, for all i, ū(a_i) < ū(d). An obviously parallel proof exists for the other inequality. We first note that either u∘d is essentially bounded above or it is not. In the former case, let K denote the essential supremum and define A_0 = {ω ∈ Ω; u∘d(ω) = K}; in the latter case, define K = ∞ and A_0 = ∅. If P(A_0) > 0, choose a decreasing sequence of real numbers α_j ∈ [0, 1] such that α_j → 0. Then, by Axiom 4 and Proposition 2.6, there exists a sequence of standard events S_1 ⊇ S_2 ⊇ ..., with μ(S_j) = α_j and P(A_0 ∩ S_j) = P(A_0)α_j, for all j; then, define A_j = A_0 ∩ S_j and choose a consequence c ∈ C such that u(c) < K. If P(A_0) = 0, choose a consequence c ∈ C and an increasing sequence of real numbers β_j such that β_j → ∞ and u(c) < β_j; let A_j = {ω ∈ Ω; u∘d(ω) > β_j}. In either case, define

d_j(ω) = d(ω), if ω ∈ A_j^c;  d_j(ω) = c, if ω ∈ A_j.


Since d is a decision, there exists a sequence a_1, a_2, ... of simple options such that a_i →_u d. If, for each a_i, we now define a new sequence of simple options a_ij by

a_ij(ω) = a_i(ω), if ω ∈ A_j^c;  a_ij(ω) = c, if ω ∈ A_j,

we clearly have a_ij →_u d_j, so that, for all j, d_j is a decision. Moreover, by construction, d_j →_u d, ū(d_j) < ū(d) and u∘d_j is bounded above, for all j. Hence, by Proposition 3.6, there exist sequences of simple options

a_1^{(j)}, a_2^{(j)}, ...,

such that a_i^{(j)} →_u d_j and ū(a_i^{(j)}) ≥ ū(d_j). If we now choose a subsequence

a_{j_1}^{(1)}, a_{j_2}^{(2)}, ...,

such that ū(a_{j_k}^{(k)}) - ū(d_k) < ū(d) - ū(d_k), the required result follows, since, for all k,

ū(d_k) ≤ ū(a_{j_k}^{(k)}) < ū(d),

and d_k →_u d implies that a_{j_k}^{(k)} →_u d. ◁

3.3.2 Generalised Preferences

Given the adoption, for mathematical or descriptive convenience, of the extended framework developed in the previous section, it is natural to require that preferences among simple options should carry over, under the limit process we have introduced, to the corresponding decisions. This is made precise in the following.

Postulate 2. (Extension of the preference relation). Given a utility function u : C → ℝ, for any decisions d_1, d_2, and sequences of simple options {a_i} and {a'_i} such that a_i →_u d_1 and a'_i →_u d_2, we have:

(i) if, for all i > i_0, for some i_0, a_i ≥ a'_i, then d_1 ≥ d_2;

(ii) if, for all i > i_0, for some i_0, a_i > a'_i, then d_1 > d_2.

The first part of the postulate simply captures the notion of the carry-over of preferences in the limit; the second part of the postulate is an obvious necessary condition for strict preference. Together with our previous axioms, this postulate enables us to establish a very general statement of the identification of quantitative coherence with the principle of maximising expected utility.

Proposition 3.8. (Maximisation of expected utility for decisions). Given any two decisions d_1, d_2,

d_1 ≥ d_2 ⟺ ū(d_1) ≥ ū(d_2).

Proof. We first establish that ū(d_2) > ū(d_1) implies that d_2 > d_1. By Proposition 3.7, there exist sequences of simple options a_1, a_2, ..., and a'_1, a'_2, ..., such that a_i →_u d_1, a'_i →_u d_2 and, for all i, ū(a_i) > ū(d_1) and ū(a'_i) < ū(d_2). With ε = [ū(d_2) - ū(d_1)]/3, we can choose i_1 such that, for all j > i_1,

ū(a_j) - ū(d_1) ≤ ε,

and we can choose i_2 such that, for all j > i_2,

ū(d_2) - ū(a'_j) ≤ ε,

so that ū(a'_j) - ū(a_j) ≥ ε > 0, for all j > max{i_1, i_2}. It follows from Proposition 2.25 that, for all j > max{i_1, i_2},

a'_j > a_j,

and so, by Postulate 2, d_2 > d_1. To complete the proof (d_1 ~ d_2 ⟹ ū(d_1) = ū(d_2) being obvious), we must show that ū(d_1) = ū(d_2) implies that d_1 ~ d_2. By Proposition 3.7, there exist sequences of simple options {a_i^{(k)}, k = 1, 2, 3, 4} such that a_i^{(k)} →_u d_1 for k = 1, 2, a_i^{(k)} →_u d_2 for k = 3, 4, and, for all i,

ū(a_i^{(1)}) ≤ ū(d_1) ≤ ū(a_i^{(2)}),  ū(a_i^{(3)}) ≤ ū(d_2) ≤ ū(a_i^{(4)}).

Since we have ū(d_1) = ū(d_2), this implies, by Proposition 2.25, that a_i^{(4)} ≥ a_i^{(1)} and a_i^{(2)} ≥ a_i^{(3)}, for all i, and hence, by Postulate 2, d_2 ≥ d_1 and d_1 ≥ d_2, so that d_1 ~ d_2. ◁

Proposition 3.9. For any G > ∅,

d_1 ≥_G d_2 ⟺ ū(d_1 | G) ≥ ū(d_2 | G).

Proof. Throughout the above, the probability measure P(·) can be replaced by P(· | G) without any basic modifications to the proofs and results. Writing

ū_G(d) = ∫_Ω u∘d(ω) dP(ω | G),

it is easily verified that P(G) ū_G(d) = ū(1_G u∘d), where 1_G is the indicator function of G. It follows that if u∘d is integrable with respect to P(·) then it is integrable with respect to P(· | G).


This establishes in full generality that the decision criterion of maximising the expected utility is the only criterion which is compatible with an intuitive set of quantitative coherence axioms and the natural mathematical extensions encapsulated in Postulates 1 and 2. Specifically, we have shown that, given a general decision problem, where ω ∈ Ω labels the uncertain outcomes associated with the problem, u(d(ω)) describes the current preferences among consequences and p(ω), the probability density of P with respect to the appropriate dominating measure, describes current beliefs about ω, the optimal action is that d* which maximises the expected utility

ū(d) = ∫_Ω u(d(ω)) p(ω) dω.

As we saw in Section 2.6.3, it is natural before making a decision to consider trying to reduce the current uncertainty by obtaining further information from experimentation. Whether or not this is sensible obviously depends on the relative costs and benefits of such additional information, and we shall now extend the notions relating to the value of information, introduced in Section 2.6.3, into the more general mathematical framework established in this chapter.

3.3.3 The Value of Information

For the general decision problem, the decision tree for experimental design, given originally in Figure 2.6, now takes the form given in Figure 3.2, where, as in Section 2.6.3, the utility notation is extended to make explicit the possible dependence on the experiment performed, e (or e_0 if no data are collected), and the data obtained, x. If d*_0 is the optimal decision corresponding to e_0, the expected utility from an optimal decision with no additional information is defined by

ū(e_0) = ū(d*_0, e_0) = sup_d ∫_Ω u(d, e_0, ω) p(ω | e_0) dω.

Let d*_x be the optimal decision after experiment e has been performed and data x have been obtained, so that ū(d*_x, e, x), the expected utility from the optimal decision given e and x, is

ū(d*_x, e, x) = sup_d ∫_Ω u(d, e, x, ω) p(ω | e, x) dω

and, hence, the expected utility from the optimal decision following e is

ū(e) = ∫ ū(d*_x, e, x) p(x | e) dx,

Figure 3.2  Generalised decision tree for experimental design

where p(x | e) describes beliefs about the occurrence of x were e to be performed.

Proposition 3.10. (Optimal experimental design). The optimal decision is to perform experiment e' if ū(e') = max_e ū(e) and ū(e') > ū(e_0), and to perform no experiment otherwise.

Proof. This follows immediately. ◁

The expected value of the information provided by additional data x may be computed as the (posterior) expected difference between the utilities which correspond to optimal decisions after and before the data. Thus,

Definition 3.13. (The value of additional information).

(i) The expected value of the information provided by x is

v(e, x) = ∫_Ω [u(d*_x, e, x, ω) - u(d*_0, e, x, ω)] p(ω | e, x) dω;

(ii) the expected value of the experiment e is

v(e) = ∫ v(e, x) p(x | e) dx.

Let us now consider the optimal decisions which would be available to us if we knew the value of ω. Thus, let d*_ω be the optimal decision given ω; i.e., such that, for all d,

u(d*_ω, e_0, ω) ≥ u(d, e_0, ω),  ω ∈ Ω.

Then, given ω, the loss suffered by choosing another decision d ≠ d*_ω would be

u(d*_ω, e_0, ω) - u(d, e_0, ω).

For d = d*_0, the optimal decision with no additional data, this utility difference measures (conditional on ω) the value of perfect information. Its expected value with respect to p(ω) will define, under certain conditions, an upper bound on the increase in utility which additional information about ω could be expected to provide.

Definition 3.14. (Expected value of perfect information). The opportunity loss of choosing d is defined to be

l(d, ω) = u(d*_ω, e_0, ω) - u(d, e_0, ω),

and the expected value of perfect information about ω is defined by

v*(e_0) = ∫_Ω l(d*_0, ω) p(ω) dω,

where d*_0 is the optimal decision with no additional information.
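To make Definitions 3.13 and 3.14 concrete, the following Python sketch (our illustration, not part of the original text; the two-state, three-decision problem and all its utilities and probabilities are hypothetical) computes the optimal decision with no data, the opportunity losses l(d*_0, ω) and the expected value of perfect information v*(e_0):

import numpy as np

p = np.array([0.7, 0.3])                  # p(w): current beliefs over states w1, w2
U = np.array([[10.0, -5.0],               # u(d, e0, w) for decisions d1, d2, d3
              [ 4.0,  4.0],
              [-2.0, 12.0]])

expected_u = U @ p                        # ubar(d) for each decision
d0 = np.argmax(expected_u)                # d0*: optimal decision with no data

u_perfect = U.max(axis=0)                 # u(d_w*, e0, w): best utility were w known
loss = u_perfect - U[d0]                  # l(d0*, w): opportunity loss at each w
v_star = loss @ p                         # v*(e0): expected value of perfect information

print(f"optimal decision: d{d0 + 1}, ubar = {expected_u[d0]:.2f}")
print(f"expected value of perfect information: {v_star:.2f}")

In this toy problem, by Proposition 3.11 below, no experiment with non-negative costs could be worth more than the printed value of v*(e_0).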


As we remarked in Section 2.6.3, in many situations the utility function may often be thought of as made up of two separate components: the experimental cost of performing e and obtaining x, and the utility of directly taking decision d and finding ω to be the state of the world. Given such an (additive) decomposition, we can establish a useful upper bound for the expected value of an experiment.

Proposition 3.11. (Additive decomposition). If the utility function has the form

u(d, e, x, ω) = u(d, e_0, ω) - c(e, x),

with c(e, x) ≥ 0, and the probability distributions are such that

p(ω | e, x, d) = p(ω | e, x),  p(ω | e_0, d) = p(ω | e_0),

then, for any available experiment e,

v(e) ≤ v*(e_0) - c̄(e),

where c̄(e) = ∫ c(e, x) p(x | e) dx is the expected cost of e.

Proof. This closely parallels the proof, given in Proposition 2.27, for the finite case. ◁
This concludes the mathematical extension of the basic framework and associated axioms. In the next section, we reconsider the important special problem of statistical inference, previously discussed in detail in its finitistic setting in Section 2.7.

3.4 GENERALISED INFORMATION MEASURES

3.4.1 The General Problem of Reporting Beliefs

In Section 2.7, we argued that the problem of reporting a degree of belief distribution for a (finite) class of exclusive and exhaustive "hypotheses" {H_j, j ∈ J}, conditional on some relevant data D and initial state of information M_0, could be formulated as a decision problem {𝓔, C, 𝓐, ≤}. Here, {E_j, j ∈ J} is a partition of Ω, consisting of elements of 𝓔, with the interpretation E_j = "hypothesis H_j is true"; 𝓐 relates to the class of probability distributions q = {q_j, j ∈ J} over the partition, where q_j is the probability which, conditional on D, an individual reports as the probability of E_j being true, and the set of consequences C consists of all pairs (q, E_j), representing the possible conjunction of reported beliefs and true hypotheses. In the previous finitistic setting, we denoted by

p = {p_j = P(E_j | D), j ∈ J},  p_j > 0,  ∑_j p_j = 1,

the probability measure describing an individual's actual beliefs, conditional on D. We then proceeded to consider a special class of utility functions (score functions) appropriate to this reporting problem and to examine the resulting forms of implied decisions and the links with information theory. In this section, we shall generalise these concepts and results to the extended framework developed in the previous sections.

The first generalisation consists in noting that the set of alternative "hypotheses" now corresponds to the set of possible values of a (possibly continuous) random vector, ω, say, labelling the "unknown states of the world", so that the relevant uncertain events are E_ω = {ω}, ω ∈ Ω, with the interpretation E_ω = "the hypothesis ω is true". Quantitative coherence requires that any particular individual's uncertainty about ω, given data D and initial state of information M_0, should be represented by a probability distribution P over a σ-algebra of subsets of Ω, which we shall assume can be described by a density p_ω(· | D) (to be understood as a mass function in the discrete case).

We shall take the set of possible inference statements to be the set of probability distributions for ω compatible with D. We denote by 𝒟 the set of functions d_q, one for each q_ω(· | D), which map ω to the pair (q_ω(· | D), ω).

3.4.2 The Utility of a General Probability Distribution

In this general setting, the problem of providing an inference statement about a class of exclusive and exhaustive hypotheses {ω, ω ∈ Ω}, conditional on data D, is a decision problem, which we can conveniently denote by {𝒟, Ω, u, P}, where Ω is the set of possible values of the random quantity ω and 𝒟 relates to the class Q of probability densities for ω ∈ Ω compatible with D, where q_ω(· | D) ∈ Q is the density which an individual reports as the basis for describing beliefs about ω conditional on D. The set of consequences C consists of all pairs (q_ω(· | D), ω), representing the conjunction of reported beliefs and true states of nature. Throughout this section, we shall denote an individual's actual belief density by p_ω(· | D). The decision space 𝒟 consists of the d_q's corresponding to choosing to report the q_ω(· | D)'s, defined by d_q(ω) = (q_ω(· | D), ω). We shall assume the individual to be coherent, so that d_p ∈ 𝒟. Without loss of generality, we shall assume that p_ω(· | D) and the q_ω(· | D) ∈ Q are strictly positive probability densities, so that, for all ω ∈ Ω, p(ω | D) > 0 and q(ω | D) > 0 for all d_q ∈ 𝒟. We complete the specification of this decision problem by inducing the preference ordering through direct specification of a utility function u, which describes the value u(q_ω(· | D), ω) of reporting the probability density q_ω(· | D) were ω to turn out to be the true state of nature. For this purpose, and with the same motivation, we generalise the notion of score function introduced in Definition 2.20.

Definition 3.15. (Score function). A score function for probability densities q_ω(· | D) defined on Ω is a mapping u : Q × Ω → ℝ. A score function is said to be smooth if it is continuously differentiable as a function of q(ω | D) for each ω ∈ Ω.

The solution to the decision problem is then to report the density q_ω(· | D) which maximises the expected utility

ū(d_q) = ∫_Ω u(q_ω(· | D), ω) p(ω | D) dω.

As in our earlier development in Chapter 2, we shall wish to restrict utility functions for the reporting problem in such a way as to encourage a coherent individual to be honest, given data D, in the sense that his or her expected utility is maximised if and only if d_q is chosen such that, for each ω ∈ Ω, q(ω | D) = p(ω | D). The appropriate generalisation of Definition 2.21 is the following.

Definition 3.16. (Proper score function). A score function u is proper if, for each strictly positive probability density p_ω(· | D),

sup over q_ω(· | D) ∈ Q of ∫_Ω u(q_ω(· | D), ω) p(ω | D) dω

is attained if, and only if, q_ω(· | D) = p_ω(· | D), almost everywhere.
As in the finite case (see Definition 2.22), the simplest proper score function in the general case is the quadratic.

Definition 3.17. (Quadratic score function). A quadratic score function for probability densities q_ω(· | D) ∈ Q defined on Ω is a mapping u : Q × Ω → ℝ of the form

u(q_ω(· | D), ω) = A{2q(ω | D) - ∫_Ω q²(ω' | D) dω'} + B(ω),  A > 0,

such that the otherwise arbitrary function, B(·), ensures the existence of ū(d_q) for all d_q ∈ 𝒟.
Proposition 3.12. A quadratic score function is proper.

Proof. Given data D, we must choose q_ω(· | D) ∈ Q to maximise

ū(d_q) = ∫_Ω [A{2q(ω | D) - ∫_Ω q²(ω' | D) dω'} + B(ω)] p(ω | D) dω,

subject to ∫ q(ω | D) dω = 1. Rearranging, it is easily seen that this is equivalent to maximising

- ∫ {q(ω | D) - p(ω | D)}² dω,

from which it follows that we require q(ω | D) = p(ω | D) for almost all ω ∈ Ω. We note again (cf. Proposition 2.28) that the constraint ∫ q(ω | D) dω = 1 has not been needed in establishing this result for the quadratic scoring rule.


In fact, as we argued in Section 2.7, for the problem of reporting pure inference statements it is natural to restrict further the class of appropriate utility functions. The following generalises Definition 2.23.

Definition 3.18. (Local score function). A score function is local if, for each q_ω(· | D) ∈ Q, there exist functions u_ω, ω ∈ Ω, defined on ℝ⁺ such that

u(q_ω(· | D), ω) = u_ω(q(ω | D)).

Note that, as in Definition 2.23, the functional form, u_ω(·), of the dependence of the score function on the density value q(ω | D) which d_q assigns to ω is allowed to vary with the particular ω in question. Intuitively, this enables us to incorporate the possibility that bad predictions, i.e., small values of q(ω | D), for some true states of nature, ω, may be judged more harshly than for others. The next result generalises Proposition 2.29 and characterises the form of a smooth, proper, local score function.

Proposition 3.13. (Characterisation of proper local score functions). If u : Q × Ω → ℝ is a smooth, proper, local score function, then it must be of the form

u(q_ω(· | D), ω) = A log q(ω | D) + B(ω),

where A > 0 is an arbitrary constant and B(·) is an arbitrary function of ω, subject to the existence of ū(d_q) for all d_q ∈ 𝒟.

Proof. Given data D, we need to maximise, with respect to q(· | D), the expected utility

ū(d_q) = ∫_Ω u(q_ω(· | D), ω) p(ω | D) dω,

subject to ∫ q(ω | D) dω = 1. Since u is local, this reduces to finding an extremal of the functional

F(q_ω(· | D)) = ∫_Ω {u_ω(q(ω | D)) p(ω | D) - λ q(ω | D)} dω,

where λ is a Lagrange multiplier. However, for q_ω(· | D) to give a stationary value of F(q_ω(· | D)) it is necessary that

∂/∂α F(q(ω | D) + α τ(ω)) |_{α=0} = 0

for any function τ : Ω → ℝ of sufficiently small norm (see, for example, Jeffreys and Jeffreys, 1946, Chapter 10). This condition reduces to the differential equation

D_1 u_ω(q(ω | D)) p(ω | D) - λ = 0,

where D_1 u_ω denotes the first derivative of u_ω. But, since u is proper, the maximum of F(q_ω(· | D)) must be attained at q_ω(· | D) = p_ω(· | D), so that a smooth, proper, local utility function must satisfy the differential equation

D_1 u_ω(p(ω | D)) p(ω | D) - λ = 0,

whose solution is given by

u_ω(p(ω | D)) = A log p(ω | D) + B(ω),

as stated.

This result prompts us to make the following formal definition.

Definition 3.19. (Logarithmic score function). A logarithmic score function for probability densities q_ω(· | D) ∈ Q defined on Ω is a mapping u : Q × Ω → ℝ of the form

u(q_ω(· | D), ω) = A log q(ω | D) + B(ω),  A > 0,  B(·) arbitrary,

subject to the existence of ū(d_q) for all d_q ∈ 𝒟.

For additional discussion of generalised score functions see Good (1969) and Buehler (1971).

3.4.3 Generalised Approximation and Discrepancy

As we remarked in Section 2.7.3, although the optimal solution to an inference problem under the above conditions is to state one's actual beliefs, there may be technical reasons why the computation of this "optimal" density p_ω(· | D) is difficult. In such cases, we may need to seek a tractable approximation, q_ω(· | D), say, which is in some sense "close" to p_ω(· | D), but much easier to specify. As in the previous discussion of this idea, we shall need to examine carefully this notion of "closeness". The next result generalises Proposition 2.30.

Proposition 3.14. (Expected loss in probability reporting). If preferences are described by a logarithmic score function, the expected loss of utility in reporting a probability density q(ω | D), rather than the density p(ω | D) representing actual beliefs, is given by

ū(d_p) - ū(d_q) = A ∫ p(ω | D) log {p(ω | D)/q(ω | D)} dω ≥ 0,

with equality if, and only if, q(ω | D) = p(ω | D) almost everywhere.

Proof. From Definition 3.19, the expected utility of reporting q_ω(· | D) when p_ω(· | D) is the actual belief distribution is given by

ū(d_q) = ∫ {A log q(ω | D) + B(ω)} p(ω | D) dω,

so that

ū(d_p) - ū(d_q) = A ∫ p(ω | D) log {p(ω | D)/q(ω | D)} dω ≥ 0.

The final condition follows either from the fact that u is proper, so that ū(d_p) ≥ ū(d_q) with equality if and only if q_ω(· | D) = p_ω(· | D); or directly from the fact that, for all x > 0, log x ≤ x - 1, with equality if and only if x = 1 (cf. Proposition 2.30). ◁
As in the finitistic discussion of Section 2.7, the above result suggests a natural, general measure of lack of fit, or discrepancy, between a distribution and an approximation, when preferences are described by a logarithmic score function.

Definition 3.20. (Discrepancy of an approximation). The discrepancy between a strictly positive probability density p_ω(·) and an approximation p̂_ω(·), ω ∈ Ω, is defined by

δ{p̂_ω | p_ω} = ∫_Ω p(ω) log {p(ω)/p̂(ω)} dω.
Example 3.4. (General normal approximations). Suppose that p(ω) > 0, ω ∈ ℝ, is an arbitrary density on the real line, with finite first two moments given by

m = ∫ ω p(ω) dω,  s² = ∫ (ω - m)² p(ω) dω,

and that we wish to approximate p(·) by a p̂(·) corresponding to a normal density, N(ω | μ, λ), with labelling parameters μ, λ chosen to minimise the discrepancy measure given in Definition 3.20. It is easy to see that, subject to the given constraints, minimising δ{p̂ | p} with respect to μ and λ is equivalent to minimising

- ∫ p(ω) log N(ω | μ, λ) dω,

and hence to minimising

- ½ log λ + (λ/2) ∫ p(ω)(ω - μ)² dω.

Invoking the two moment constraints, and writing (ω - μ)² = (ω - m + m - μ)², this reduces to minimising, with respect to μ and λ, the expression

- log λ + λ s² + λ(m - μ)².

It follows that the optimal choice is μ = m, λ = s⁻². In other words, for the reporting problem with a logarithmic score function, the best normal approximation to a distribution on the real line (whose mean and variance exist) is the normal distribution having the same mean and variance.
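The conclusion of Example 3.4 is easy to check numerically. In the Python sketch below (ours; the Gamma-shaped density p, the grid and the optimiser settings are arbitrary choices), direct minimisation of the discrepancy over (μ, λ) recovers μ = m and λ = 1/s²:

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

w = np.linspace(0.01, 30.0, 6000)
dw = w[1] - w[0]
p = w ** 2 * np.exp(-w)                      # unnormalised Gamma(3, 1) shape
p /= p.sum() * dw

m = np.sum(w * p) * dw                       # mean of p
s2 = np.sum((w - m) ** 2 * p) * dw           # variance of p

def discrepancy(theta):
    mu, log_lam = theta
    log_q = norm.logpdf(w, mu, np.exp(-0.5 * log_lam))   # N(w | mu, lambda), sd = lambda^{-1/2}
    return np.sum(p * (np.log(p) - log_q)) * dw

opt = minimize(discrepancy, x0=[1.0, 0.0])
mu_opt, lam_opt = opt.x[0], np.exp(opt.x[1])
print(f"m = {m:.3f}, 1/s^2 = {1/s2:.3f};  optimum: mu = {mu_opt:.3f}, lambda = {lam_opt:.3f}")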

Example 3.5. (Normal approximations to Student distributions). Suppose that we wish to approximate the density St(x | μ, λ, α), α > 2, by a normal density. We have just shown that the best normal approximation to any distribution is that with the same first two moments (assuming the latter to exist, corresponding here to the restriction α > 2). Thus, recalling from Section 3.2.2 that the mean and precision of St(x | μ, λ, α) are given by μ and λ(α - 2)/α, respectively, it follows that the best normal approximation to St(x | μ, λ, α) is provided by N(x | μ, λ(α - 2)/α). From Definition 3.20, the corresponding discrepancy will be

δ(N | St) = ∫ St(x | μ, λ, α) log {St(x | μ, λ, α) / N(x | μ, λ(α - 2)/α)} dx.

This is easily evaluated (see, for example, Bernardo, 1978a) using the fact that the entropy of a Student distribution is given by

- ∫ St(x | μ, λ, α) log St(x | μ, λ, α) dx = ½(α + 1){ψ(½(α + 1)) - ψ(½α)} + log {B(½α, ½)(α/λ)^{1/2}},

where ψ(z) = Γ'(z)/Γ(z) denotes the digamma function (see, for example, Abramowitz and Stegun, 1964), from which it follows that δ(N | St) may be written as

δ(N | St) = ½ + ½ log {2π/(α - 2)} - log B(½α, ½) - ½(α + 1){ψ(½(α + 1)) - ψ(½α)},

which only depends on the degrees of freedom, α, of the Student distribution. Figure 3.3 shows a plot of δ(N | St) against α.

Figure 3.3  Discrepancy between Student and normal densities

Using Stirling's approximation,

log Γ(z) ≈ (z - ½) log z - z + ½ log(2π),

we obtain, for moderate to large values of α,

δ(N | St) ≈ ¾ [α(α - 2)]⁻¹ = O(1/α²),

so that [α(α - 2)]⁻¹ provides a simple, approximate measure of the departure from normality of a Student distribution.
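A direct numerical check (ours; the particular values of α are arbitrary) of the expression for δ(N | St) and of its O(1/α²) approximation can be carried out with scipy's log-beta and digamma functions:

import numpy as np
from scipy.special import betaln, digamma

def delta_normal_student(alpha):
    # delta(N | St) as a function of the degrees of freedom alpha (> 2)
    return (0.5 + 0.5 * np.log(2 * np.pi / (alpha - 2.0))
            - betaln(alpha / 2.0, 0.5)
            - 0.5 * (alpha + 1.0) * (digamma((alpha + 1.0) / 2.0) - digamma(alpha / 2.0)))

for alpha in [5.0, 10.0, 20.0, 50.0]:
    exact = delta_normal_student(alpha)
    approx = 0.75 / (alpha * (alpha - 2.0))
    print(f"alpha = {alpha:5.1f}:  delta = {exact:.6f},  (3/4)[alpha(alpha-2)]^(-1) = {approx:.6f}")

The two columns agree increasingly well as α grows, confirming the stated O(1/α²) behaviour.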

3.4.4 Generalised Information

In Section 2.7.4, we examined, in the finitistic context, the increase in expected utility provided by given data D. We now extend this analysis to the general setting, writing x to denote observed data D.

Proposition 3.15. (Expected utility of data). If preferences are described by a logarithmic score function for the class of probability densities p(ω | x) defined on Ω, then the expected increase in utility provided by data x, when the prior probability density is p(ω), is given by

A ∫ p(ω | x) log {p(ω | x)/p(ω)} dω,

where p(ω | x) is the density of the posterior distribution for ω, given x. This expected increase in utility is non-negative, and zero if, and only if, p(ω | x) is identical to p(ω).

Proof. Using Definition 3.19, the expected increase in utility provided by x is given by

∫ {A log p(ω | x) + B(ω)} p(ω | x) dω - ∫ {A log p(ω) + B(ω)} p(ω | x) dω = A ∫ p(ω | x) log {p(ω | x)/p(ω)} dω,

which, by Proposition 3.14, is non-negative, with equality if and only if p(ω | x) and p(ω) are identical. ◁

The following natural definition of the amount of information provided by the data extends that given in Definition 2.26.

Definition 3.21. (Information from data). The amount of information about ω ∈ Ω provided by data x, when the prior density is p(ω), is given by

I{x | p_ω(·)} = ∫ p(ω | x) log {p(ω | x)/p(ω)} dω,

where p(ω | x) is the corresponding posterior density.

As in the finite case, it is interesting to note that the amount of information provided by x is equivalent to the discrepancy measure if the prior is considered as an approximation to the posterior. Alternatively, we see that log p(ω), log p(ω | x), respectively, measure how "good", on a logarithmic scale, the prior and posterior are at "predicting" the "true state of nature" ω, so that log p(ω | x) - log p(ω) is a measure of the usefulness of x were ω known to be the true value. Thus, I{x | p_ω(·)} is simply the expected value of that utility difference with respect to the posterior density, given x.
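As a simple numerical illustration of Definition 3.21 (ours; the uniform prior and the Beta(8, 4) posterior are invented for the example), the Python sketch below evaluates I{x | p_ω(·)} by quadrature:

import numpy as np
from scipy.stats import beta

w = np.linspace(1e-6, 1 - 1e-6, 20001)
dw = w[1] - w[0]

prior = beta.pdf(w, 1, 1)          # uniform prior density p(w)
post = beta.pdf(w, 8, 4)           # posterior density p(w | x) after the data

info = np.sum(post * np.log(post / prior)) * dw
print(f"I(x | p) = {info:.4f}")    # non-negative; zero only if posterior = prior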
The functional ∫ p(ω) log p(ω) dω has been used (see, e.g., Lindley, 1956, and references therein) as a measure of the 'absolute' information about ω contained in the probability density p(ω). The increase in utility from observing x is then

∫ p(ω | x) log p(ω | x) dω - ∫ p(ω) log p(ω) dω,

instead of our Definition 3.21. However, this expression is not invariant under one-to-one transformations of ω, a property which seems to us to be essential. Note, however, that both expressions have the same expectation with respect to the distribution of x. Draper and Guttman (1969) put forward yet another non-invariant definition of information.
Additional references on statistical information concepts are Rényi (1963, 1966, 1967), Goel and DeGroot (1979) and De Waal and Groenewald (1989). More generally, we may wish to step back to the situation before data become available, and consider the idea of the amount of information to be expected from an experiment e. We therefore generalise Definition 2.27.

Definition 3.22. (Expected information from an experiment). The expected information to be provided by an experiment e about ω ∈ Ω, when the prior density is p(ω), is given by

I{e | p_ω(·)} = ∫ I{x | p_ω(·)} p(x | e) dx.

The following result, which is a generalisation of Proposition 2.32, provides an alternative expression for I{e | p_ω(·)}.

Proposition 3.16. An alternative expression for the expected information is

I{e | p_ω(·)} = ∫∫ p(ω, x | e) log {p(ω, x | e) / (p(ω) p(x | e))} dω dx,

where p(ω, x | e) = p(ω | x, e) p(x | e) and p(ω | x, e) is the posterior density for ω given data x and prior density p(ω). Moreover, I{e | p_ω(·)} ≥ 0, with equality if and only if x and ω are independent random quantities, so that p(ω, x | e) = p(ω) p(x | e) for all ω and x.

Proof. By Definition 3.22,

I{e | p_ω(·)} = ∫ p(x | e) ∫ p(ω | x, e) log {p(ω | x, e)/p(ω)} dω dx,

and the result now follows from the fact that p(ω | x, e) p(x | e) = p(ω, x | e). Moreover, since, by Proposition 3.14, I{x | p_ω(·)} ≥ 0 with equality if and only if p(ω | x, e) = p(ω), it follows from Definition 3.22 that I{e | p_ω(·)} ≥ 0 with equality if and only if, for all ω and x, p(ω, x | e) = p(ω) p(x | e). ◁
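A minimal numerical sketch (ours; the binary state, the 0.2 error rate and the uniform prior are arbitrary choices) of the expected information of Proposition 3.16, computed as the mutual information between ω and x:

import numpy as np

p_w = np.array([0.5, 0.5])                    # prior p(w)
p_x_given_w = np.array([[0.8, 0.2],           # p(x | w): rows index w, columns x
                        [0.2, 0.8]])

p_joint = p_w[:, None] * p_x_given_w          # p(w, x | e)
p_x = p_joint.sum(axis=0)                     # p(x | e)

info = np.sum(p_joint * np.log(p_joint / (p_w[:, None] * p_x[None, :])))
print(f"I(e | p_w) = {info:.4f} nats")        # zero iff x and w are independent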

Maximisation of the expected Shannon information was proposed by Lindley (1956) as a "reasonable" ad hoc criterion for choosing among alternative experiments. Fedorov (1972) proved later that certain classical design criteria (in particular, D-optimality) are special cases of this when normal distributions are assumed. We have shown that maximising expected information is just a particular (albeit important) case of the general criterion, implied by quantitative coherence, of maximising the expected utility in the case of pure inference problems. See Polson (1992) for a closely related argument.

It follows from Proposition 2.31 and the last remark that someone who adopts the classical D-optimality criterion of optimal design under standard normality assumptions should, for consistency, have preferences which are described by a logarithmic scoring rule; otherwise, such designs are not optimal with respect to his or her underlying preferences.

There is a considerable literature on the Bayesian design of experiments, which we will not attempt to review here. A detailed discussion will be given in the volume Bayesian Methods. We note that important references include Blackwell (1951, 1953), Lindley (1956), Chernoff (1959), Stone (1959), DeGroot (1962, 1970), Duncan and DeGroot (1976), Bandemer (1977), Smith and Verdinelli (1980), Pilz (1983/1991), Chaloner (1984), Sugden (1985), Mazloum and Meeden (1987), Felsenstein (1988, 1992), DasGupta and Studden (1991), El-Krunz and Studden (1991), Pardo et al. (1991), Mitchell and Morris (1992), Pham-Gia and Turkkan (1992), Verdinelli (1992), Verdinelli and Kadane (1992), Lindley and Deely (1993), Lad and Deely (1994) and Parmigiani and Berry (1994).

3.5 DISCUSSION AND FURTHER REFERENCES

3.5.1 The Role of Mathematics

The translation of any substantive theory into a precise mathematical formalism necessarily involves an element of idealisation. We have already had occasion to remark on aspects of this problem in Chapter 2, in the context of using real numbers rather than subsets of the rationals to represent actual measurements (necessarily "finitised" by inherent accuracy limits of the uncertainty apparatus). Similar remarks are obviously called for in the context of using, for example, probability densities to represent belief distributions for real-valued observables. In some situations, as we shall see in Chapter 4, the adoption of specific forms of density may follow from simple, structural assumptions about the form of the belief distribution. In other situations, however, if we really try to think of such a density as being practically identified by expressions of preference among, say, standard options, we would encounter the obvious operational problem that, implicitly, an infinite number of revealed preferences would be required. Clearly, in such situations the precise mathematical form of a density is likely to have arisen as an approximation to a "rough shape" obtained from some finite elicitation or observation process, and has been chosen, arbitrarily, for reasons of mathematical convenience, from an available mathematical tool-kit. Similar remarks apply to the choice, for descriptive or mathematical convenience, of infinite sets to represent consequences or decisions, with the attendant problems of defining appropriate concepts of expected utility. There are obvious dangers, therefore, in accepting too uncritically any orientation, or would-be insightful mathematical analysis, that flows from arbitrary, idealised mathematical inputs into the general quantitative coherence theory. However, given an awareness of the dangers involved, we can still systematically make use of the power and elegance of the (idealised) mathematics by simultaneously asserting, as a central tenet of our approach, a concern with the robustness and sensitivity of the output of an analysis to the form of input assumed (see Section 5.6.3). Of course, we shall later have to make precise the sense in which these terms are to be interpreted and the actual forms of procedures to be adopted. That being


understood, our approach, as with the earlier formalism of Chapter 2, will be to work with the mathematical idealisation, in order to exploit its potential power and insight, while constantly bearing in mind the need for a large pinch of salt and a repertoire of sensitivity diagnostics.

3.5.2 Critical Issues

We shall comment further on four aspects of the general mathematical structure we have developed and will be using throughout the remainder of this volume. These will be dealt with under the following subheadings: (i) Finite versus Countable Additivity; (ii) Measure versus Linear Theory; (iii) Proper versus Improper Probabilities; (iv) Abstract versus Concrete Mathematics.
Finite versus Countable Additivity

In Chapter 2, we developed, from a directly intuitive and operational perspective, a minimal mathematical framework for a theory of quantitative coherence. The role of the mathematics employed in this development was simply that of a tool to capture the essentials of the substantive concepts and theory; within the resulting finitistic framework we then established that uncertainties should be represented in terms of finitely additive probabilities. The generalisations and extensions of the theory given in the present chapter lead, instead, to the mathematical framework of countable additivity, within which we have available the full panoply of analytic tools from mathematical probability theory. The latter is clearly highly desirable from the point of view of mathematical convenience, but it is important to pause and consider whether the development of a more convenient mathematical framework has been achieved at the expense of a distortion of the basic concepts and ideas.

First, let us emphasise that, from a philosophical perspective, the monotone continuity postulate introduced in Section 3.1.2 does not have the fundamental status of the axioms presented in Chapter 2. We regard the latter as encapsulating the essence of what is required for a theory of quantitative coherence. The former is an optional extra assumption that one might be comfortable with in specific contexts, but should in no way be obliged to accept as a prerequisite for quantitative coherence.

Secondly, we note that the effect of accepting that preferences should conform to the monotone continuity postulate is to restrict one's available (in the sense of coherent) belief specifications to a subset of the finitely additive uncertainty measures; namely, those that are also countably additive. This is, of course, potentially disturbing from a subjectivist perspective, since a key feature of the theory is that the only constraints on belief specifications should be that they are coherent. For some such representations to be ruled out a priori, as a consequence of a postulate adopted purely for mathematical convenience, would indeed be a distortion of the

theory. This is why we regard such a postulate as different from the basic axioms. However, provided one is aware of, and not concerned about, the implicit restriction of the available belief representations, its adoption may be very natural in contexts where one is, in any case, prepared to work in an extended mathematical setting. Throughout this work, we shall, in fact, make systematic use of concepts and tools from mathematical probability theory, without further concern or debate about this issue. However, to underline what we already said in Section 3.1.2, it is important to be on guard and to be aware that distortions might occur. To this end, we draw attention to some key references to which the reader may wish to refer in order to heighten such awareness and to study in detail the issues involved. De Finetti (1970/1974, pp. 116-133, 173-177 and 228-241; 1970/1975, pp. 267-276 and 340-361) provides a wealth of detailed analysis, illustration and comment on the issues surrounding finite versus countable (and other) additivity assumptions, his own analysis being motivated throughout by the guiding principle that

. . . mathematics is an instrument which should conform itself strictly to the exigencies of the field in which it is to be applied. (1970/1974, p. 3)

Further technical and philosophical discussion is given in de Finetti (1972, Chapters 5 and 6); see, also, Stone (1986). Systematic use of finite additivity in decision-related contexts is exemplified in Dubins and Savage (1965/1976), Heath and Sudderth (1978, 1989), Stone (1979b), Hill (1980), Sudderth (1980), Seidenfeld and Schervish (1983), Hill and Lane (1984), Regazzini (1987) and Regazzini and Petris (1993). A discussion of the statistical implications of finitely additive probability is given by Kadane et al. (1986). In Section 2.8.3 we discussed, within a finitistic framework, several "betting" approaches to establishing probability as the only coherent measure of degree of belief. These ideas may be extended to the general case. Dawid and Stone (1972, 1973) introduce the concept of "expectation consistency", and show the necessity of using Bayes' theorem to construct probability distributions corresponding to fair bets made with additional information. Other generalised discussions on coherence of inference in terms of gambling systems include Lane and Sudderth (1983) and Brunk (1991).
Measure versus Linear Theory

Mathematical probability theory can be developed, equivalently, starting either from the usual Kolmogorov axioms for a set function defined over a σ-field of events (see, for example, Ash, 1972), or from axioms for a linear operator defined over a linear space of random quantities (see, for example, Whittle, 1976). The former deals directly with probability measure; the latter with an expectation operator (or a prevision, in de Finetti's terminology).

3.5 Discussion and Further References

163

In our development of a quantitativecoherence theory, the axiomatic approach to preferences among options has led us more naturally towards probability measures as the primary probabilisticelement, with expectation (prevision)defined subsequently. In the approach to coherence put forward in de Finetti (1972,197011974, 1970/1975),prevision is the primary element, with probability subsequentlyemerging as a special case for 0-1 random quantities. The case for adopting the linear rather than the measure theory approach is argued at length by de Finetti, there being many points of contact with the argument regarding finite versus countable additivity, particularly the need to avoid, in the mathematical formulation, going beyond those aspects required for the problem in hand. In the specific context of statistical modelling and inference, Goldstein ( I 98 I , 1986a, 1986b, 1987a. 1987b, 1988, I99 I , 1994) has systematically developed the linear approach advocated by de Finetti, showing that a version of a subjectivist programme for revising beliefs in the light of data can be implemented without recourse to the full probabilistic machinery developed in this chapter. Lad et a f .(1990) provide further discussion on the concept of prevision. We view these and related developments with great interest and with no dogmatic opinion concerning the ultimate relative usefulness and acceptance of linear versus probabilistic Bayesian statistical concepts and methods. That said, the present volume is motivated by our conviction that, currently, there remains a need for a detailed exposition of the Bayesian approach within the, more or less, conventional framework of full probabilistic descriptions.

Proper versus Improper Probabilities

Whether viewed in terms of finite or countable additivity, we have taken probability to be a measure with values in the interval [0, 1]. However, it is possible to adopt axiomatic approaches which allow for infinite (or improper) probabilities: see, for example, Rényi (1955, 1962/1970, Chapter 2, and references therein), who uses conditional arguments to derive proper probabilities from improper distributions, and Hartigan (1983, Chapter 3), who directly provides an axiomatic foundation for improper or, as he terms them, non-unitary, probabilities. We shall not review such axiomatic theories in detail, but note that we shall encounter improper distributions systematically in Section 5.4.

164

3 Generu1isation.s

little machinery may prove inadequate to provide a complete mathematical treatment, requiring the omission of certain topics, or the provision of just a partial, non-rigorous treatment, with insight and illustration attempted only by concrete examples. Thus far, we have tried to provide a complete, rigorous treatment of the Foundations and Generalisations of the theory of quantitative coherence, within the mathematical framework of Chapters 2 and 3. This chapter essentially defines the upper limit of mathematical machinery we shall be using and. in fact, most of our subsequent development will be much more straightforward. However, it will be the case. for example in Chapter 4. that some results of interest to us require rather more sophisticated mathematical tools than we have made available. Our response to this problem will be to try to make it clear to the reader when this is the case. and to provide references to a complete treatment of such results. together with (hopefully) sufficient concrete discussion and illustration to illuminate the topic. For more sophisticated mathematical treatments of Bayesian theory. the reader is referred to Hartigan ( 1983) and Florens et (11. ( 1990).


Chapter 4

Modelling
Summary

The relationship between beliefs about observable random quantities and their representation using conventional forms of statistical models is investigated. It is shown that judgements of exchangeability lead to representations that justify and clarify the use and interpretation of such familiar concepts as parameters, random samples, likelihoods and prior distributions. Beliefs which have certain additional invariance properties are shown to lead to representations involving familiar specific forms of parametric distributions, such as normals and exponentials. The concept of a sufficient statistic is introduced and related to representations involving the exponential family of distributions. Various forms of partial exchangeability judgements about data structures involving several samples, structured layouts, covariates and designed experiments are investigated, and links established with a number of other commonly used statistical models.

4.1 STATISTICAL MODELS

4.1.1 Beliefs and Models

The subjectivist, operationalist viewpoint has led us to the conclusion that, if we aspire to quantitative coherence, individual degrees of belief, expressed as probabilities, are inescapably the starting point for descriptions of uncertainty. There can

be no theories without theoreticians; no learning without learners; in general, no science without scientists. It follows that learning processes, whatever their particular concerns and fashions at any given point in time, are necessarily reasoning processes which take place in the minds of individuals. To be sure, the object of attention and interest may well be an assumed external, objective reality; but the actuality of the learning process consists in the evolution of individual, subjective beliefs about that reality. However, it is important to emphasise, as in our earlier discussion in Section 2.8, that the primitive and fundamental notions of individual preference and belief will typically provide the starting point for interpersonal communication and reporting processes. In what follows, both here, and more particularly in Chapter 5, we shall therefore often be concerned to identify and examine features of the individual learning process which relate to interpersonal issues, such as the conditions under which an approximate consensus of beliefs might occur in a population of individuals.

In Chapters 2 and 3, we established a very general foundational framework for the study of degrees of belief and their evolution in the light of new information. We now turn to the detailed development of these ideas for the broad class of problems of primary interest to statisticians; namely, those where the events of interest are defined explicitly in terms of random quantities, x_1, ..., x_n (discrete or continuous, and possibly vector-valued) representing observed or experimental data. In such cases, we shall assume that an individual's degrees of belief for events of interest are derived from the specification of a joint distribution function P(x_1, ..., x_n), which we shall typically assume, without systematic reference to measure-theoretic niceties, to be representable in terms of a joint density function p(x_1, ..., x_n) (to be understood as a mass function in the discrete case). Of course, any such specification implicitly defines a number of other degrees of belief specifications of possible interest: for example, for 1 ≤ m < n,

p(x_1, ..., x_m) = ∫ p(x_1, ..., x_n) dx_{m+1} ⋯ dx_n

provides the marginal joint density for x_1, ..., x_m, and

p(x_{m+1}, ..., x_n | x_1, ..., x_m) = p(x_1, ..., x_n) / p(x_1, ..., x_m)

gives the joint density for the as yet unobserved x_{m+1}, ..., x_n, conditional on having observed x_1, ..., x_m. Within the Bayesian framework, this latter conditional form is the key to learning from experience.
We recall that, throughout, we shall use notation such as P and p in a generic sense, rather than as specifying particular functions. In particular, P may sometimes refer to an underlying probability measure, and sometimes refer to implied distribution functions, such as P(x_1, ..., x_m) or P(x_{m+1}, ..., x_n | x_1, ..., x_m). Similarly, we may write p(x_1), p(x_1, ..., x_n), etc., so that, for example,

p(x_{m+1}, ..., x_n | x_1, ..., x_m) = p(x_1, ..., x_n) / p(x_1, ..., x_m)

simply indicates that the conditional density for x_{m+1}, ..., x_n, given x_1, ..., x_m, is given by the ratio of the specified joint densities. Such usage avoids notational proliferation, and the context will always ensure that there is no confusion of meaning.

Thus far, however, our discussion is rather abstract. In actual applications we shall need to choose specific, concrete forms for joint distributions. This is clearly a somewhat daunting task, since direct contemplation and synthesis of the many complex marginal and conditional judgements implicit in such a specification are almost certainly beyond our capacity in all but very simple situations. We shall therefore need to examine rather closely this process of choosing a specific form of probability measure to represent degrees of belief.
Definition 4.1. (Predictive probability model). A predictive model for a sequence of random quantities x_1, x_2, ... is a probability measure P, which mathematically specifies the form of the joint belief distribution for any subset of x_1, x_2, ....

In some cases, we shall find that we are able to identify general types of belief structure which pin down, in some sense, the mathematical representation strategy to be adopted. In other cases, this formal approach will not take us very far towards solving the representation problem and we shall have to fall back on rather more pragmatic modelling strategies.

At this stage, a word of warning is required. In much statistical writing, the starting point for formal analysis is the assumption of a mathematical model form, typically involving unknown parameters, the main object of the study being to infer something about the values of these parameters. From our perspective, this is all somewhat premature and mysterious! We are seeking to represent degrees of belief about observables: nothing in our previous development justifies or gives any insight into the choice of particular models, and thus far we have no way of attaching any operational meaning to the parameters which appear in conventional models. However, as we shall soon see, the subjectivist, operationalist approach will provide considerable insight into the nature and status of these conventional assumptions.

4.2 EXCHANGEABILITY AND RELATED CONCEPTS

4.2.1 Dependence and Independence
Consider a sequence of random quantities x_1, x_2, ..., and suppose that a predictive model is assumed which specifies that, for all n, the joint density can be written in the form

p(x_1, ..., x_n) = ∏_{i=1}^{n} p(x_i),

so that the x_i are independent random quantities. It then follows straightforwardly that, for any 1 ≤ m < n,

p(x_{m+1}, ..., x_n | x_1, ..., x_m) = p(x_{m+1}, ..., x_n),

so that no learning from experience can take place within this sequence of observations. In other words, past data provide us with no additional information about the possible outcomes of future observations in the sequence. A predictive model specifying such an independence structure is clearly inappropriate in contexts where we believe that the successive accumulation of data will provide increasing information about future events. In such cases, the structure of the joint density p(x_1, ..., x_n) must encapsulate some form of dependence among the individual random quantities. In general, however, there are a vast number of possible subjective assumptions about the form such dependencies might take and there can be no all-embracing theoretical discussion. Instead, what we can do is to concentrate on some particular simple forms of judgement about dependence structures which might correspond to actual judgements of individuals in certain situations. There is no suggestion that the structures we are going to discuss in subsequent subsections have any special status, or ought to be adopted in most cases, or whatever. They simply represent forms of judgement which may often be felt to be appropriate and whose detailed analysis provides illuminating insight into the specification and interpretation of certain classes of predictive models.

4.2.2 Exchangeability and Partial Exchangeability

Suppose that, in thinking about P(x_1, ..., x_n), his or her joint degree of belief distribution for a sequence of random quantities x_1, ..., x_n, an individual makes the judgement that the subscripts, the labels identifying the individual random quantities, are "uninformative", in the sense that he or she would specify all the marginal distributions for the individual random quantities identically, and similarly for all the marginal joint distributions for all possible pairs, triples, etc., of the random quantities. It is easy to see that this implies that the form of the joint distribution must be such that

p(x_1, ..., x_n) = p(x_{π(1)}, ..., x_{π(n)}),

for any possible permutation π of the subscripts {1, ..., n}. We formalise this notion of symmetry of beliefs for the individual random quantities as follows.

Definition 4.2. (Finite exchangeability). The random quantities x_1, ..., x_n are said to be judged (finitely) exchangeable under a probability measure P if the implied joint degree of belief distribution satisfies

P(x_1, ..., x_n) = P(x_{π(1)}, ..., x_{π(n)}),

for all permutations π defined on the set {1, ..., n}. In terms of the corresponding density or mass function, the condition reduces to

p(x_1, ..., x_n) = p(x_{π(1)}, ..., x_{π(n)}).

Example 4.1. (Tossing a thumb tack). Consider a sequence of tosses of a standard metal drawing pin (or thumb tack), and let x_i = 1 if the pin lands point uppermost on the ith toss, x_i = 0 otherwise, i = 1, ..., n. If the tosses are performed in such a way that time order appears to be irrelevant and the conditions of the toss appear to be essentially held constant throughout, it would seem to be the case that, whatever precise quantitative form their beliefs take, most observers would judge the outcomes of the sequence of tosses x_1, x_2, ... to be exchangeable in the above sense.
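A brute-force check of Definition 4.2 is straightforward in code. The following Python sketch (ours; the particular pmf, which depends on the outcomes only through their sum and is therefore exchangeable, is an arbitrary example) tests permutation invariance of a joint mass function for three 0-1 random quantities:

from itertools import permutations, product

def joint_pmf(x):
    # any pmf depending on x only through sum(x) is exchangeable;
    # weights chosen so the eight outcomes sum to probability one
    weights = {0: 0.2, 1: 0.1, 2: 0.1, 3: 0.2}
    return weights[sum(x)]

def is_exchangeable(pmf, n):
    return all(
        abs(pmf(x) - pmf(tuple(x[i] for i in perm))) < 1e-12
        for x in product((0, 1), repeat=n)
        for perm in permutations(range(n))
    )

print(is_exchangeable(joint_pmf, 3))   # True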

In general, the exchangeability assumption captures, for a subjectivist interested in belief distributions for observables, the essence of the idea of a so-called "random sample". This latter notion is, of course, of no direct use to us at this stage, since it (implicitly) involves the idea of "conditional independence, given the value of the underlying parameter", a meaningless phrase thus far within our framework.

The notion of exchangeability involves a judgement of complete symmetry among all the observables x_1, ..., x_n under consideration. Clearly, in many situations this might be too restrictive an assumption, even though a partial judgement of symmetry is present.

Example 4.1. (cont.). Suppose that the tosses of a drawing pin are not all made with the same pin, but that the even and odd numbered tosses are made with different pins: an all-metal one for the odd tosses; a plastic-coated one for the even tosses. Alternatively, suppose that the same pin were used throughout, but that the odd tosses are made by a different person, using a completely different tossing mechanism from that used for the even tosses. In such cases, many individuals would retain an exchangeable form of belief distribution within the sequences of odd and even tosses separately, but might be reluctant to make a judgement of symmetry for the combined sequence of tosses.

Example 4.2. (Laboratory measurements). Suppose that x_1, x_2, ... are real-valued measurements of a physical or chemical property of a given substance, all made on the same sample with the same measurement procedure. Under such conditions, many individuals might judge the complete sequence of measurements to be exchangeable. Suppose, however, that sequences of such measurements are combined from several different laboratories, the substance being identical but the measurement procedures varying from laboratory to laboratory. In this case, judgements of exchangeability for each laboratory sequence separately might be appropriate, whereas such a judgement for the combined sequence might not be.

Example 4.3. (Physiological responses). Suppose that {x_1, x_2, ...} are real-valued measurements of a specific physiological response in human subjects when a particular drug is administered. If the drug is administered at more than one dose level and there are both male and female subjects, spanning a wide age range, most individuals would be very reluctant to make a judgement of exchangeability for the entire sequence of results. However, within each combination of dose-level, sex and appropriately defined age-group, a judgement of exchangeability might be regarded as reasonable.

Judgements of the kind suggested in the above examples correspond to forms of partial exchangeability. Clearly, there are many possible forms of departure from overall judgements of exchangeability to those of partial exchangeability and so a formal definition of the term does not seem appropriate. In general, it simply signifies that there may be additional labels on the random quantities (for example, odd and even, or the identification of the tossing mechanism in Example 4.1) with exchangeable judgements made separately for each group of random quantities having the same additional labels. A detailed discussion of various possible forms of partial exchangeability will be given in Section 4.6. We shall now return to the simple case of exchangeability and examine in detail the form of representation of p(x_1, ..., x_n) which emerges in various special cases. As a preliminary, we shall generalise our previous definition of exchangeability to allow for potentially infinite sequences of random quantities. In practice, it should, at least in principle, always be possible to give an upper bound to the number of observables to be considered. However, specifying an actual upper bound may be somewhat difficult or arbitrary and so, for mathematical and descriptive purposes, it is convenient to be able to proceed as if we were contemplating an infinite sequence of potential observables. Of course, it will be important to establish that working within the infinite framework does not cause any fundamental conceptual distortion. These and related issues of finite versus infinite exchangeability will be considered in more detail in Section 4.7. For the time being, we shall concentrate on the potentially infinite case.

4.2 Exchangeability and Related Concepts

171

Definition 4.3. (Infinite exchangeability). The inJnite sequence of random quantities $1, x2!. . . is said to be judged (injnitely) exchangeable if everyfinite subsequence is judged exchangeable in the sense of Definition 4.2.
One might be tempted to wonder whether every finite sequence of exchangeable random quantities could be embedded in or extended to an infinitely exchangeable sequence of similarly defined random quantities. However, this is certainly not the case as the following example shows.
Example 4.4. (Non-extendibleexchangeability). Suppose that we define the three random quantities C ~ . : C ~ , . C : ~ that either I, = 1 or .r, = 0, i = 1.2?3,with joint such probability function given by
p ( q = O..C2 = 1..r:i = I ) = p ( q = 1. *'2 = O,.C:3 = 1)
= p(3.1 = 1.x2 = l,.r;t = 0)

= 1/3.

with all other combinationsof x i ,x2: r:{ having probability zero,so that x i .~ 2.r3are clearly . exchangeable. We shall now try to identify an sJ. taking only values 0 and 1, such that rI.. . . .x j are exchangeable. For this to be possible, we require, for example.
p ( s , = 0 , x r = 1.KI = 1.3.4 = 0) = p ( q = O..C? = 0.S:t = l . S l = 1).

But p ( q = 0, x:! = 1. X:) = 1, XA = 0)


= p(s1 = 0 . 3 . 2 = 1.r:i= 1) - p ( q = 0.1:z = l . & $= I . X J = 1) -1/3-p(.rI = O . . r y = l , . ~ : ~ = l , . r L = l )
= l / : I - p ( q = 1.1.2 = l,S.$ 1,S( = O ) . =

where
p ( q = l.Zz = 1 , q = 1. x.1 = 0 ) 5 p(xl =
so that
1.22

= 1.S;) = 1 ) = 0.

p(J.1 = 0.2-2 = 1,S.t = l.rr = 0) = 1/3.

However, we also have


p ( r , = 0. .rl = 0, S ) = 1. Si = 1) 5 p ( J l = 0, I1 = 0. S ) = 1) = 0

and so
p(r1 = O,ri = 1,s) 1 . ~ 4= 0) # p(r1 = 0.r2= O.x, = 1..r4= I ) . =

It follows that a finitely exchangeable sequence cannot even necessarily be embedded in a larger finitely exchangeable sequence, let alone an infinitely exchangeable sequence.

172
4.3
4.3.1

4 Modelling

MODELS VIA EXCHANGEABILITY


The Bernoulli and Binomial Models

We consider first the case of an infinitely exchangeable sequence o f 0 - I random quantities, . r l . . ~ .. . . . with s, = 0 or .r, = 1. for all i = I . ? . , . .. Without loss of generality, we shall derive a representation result for the joint mass function. p ( . r l .. . . . .rII), the first I I random quantities XI. . . . . .rll. of

Proposition 4.1. (Representation theorem for 0 -I random quantities). If X I .x i . . . . is un injnitely e.rchungeable sequence of0-1 random yuuntitics with prohahilin. measure P , there txists u distribution fiuiction Q such t k r r the joint inuss firnction p( . I . . . . . .r,,) jhr .I I . . . . .r,, hcrs the forin r

rcjhere,
Q ( 0 ) = lim
0-x

P[gll/tt

5 HI.

with y,, = .II

+ . . . + zI,. {rnd 0 = l i ~ i i , , - y~, , / t t .

Proof. (De Finetti. 1930, I937/ 1964; here we follow closely the proof given by Heath and Sudderth. 1976; see also Barlow. 199 I ) . Suppose .ri t .. . -t.r,, = .vI,. then, by exchangeability. for any 0 5 ?I,,r: t ~ .
]1(.1.1

+ + rl,= y,,) = (;;,) d . r . 7 , I


*

I.

.. .

JT, I , ,

for any permutation x of { 1. . . . , rt } such that .r7(I , + . . . t .rTl = y,, . Moreover. ,, for arbitrary R; 2 ri 2 y,, 2 0. and with the summations below taken over the range y y = y,, to g.\ = 2%- - ( 1 1 - g,,), we see that

where (;y.,)U,, = y.v(y,\ - 1) . . . [!I., - (?y,, - 1)). etc. (Intuitively. we can imagine sampling tt items without replacement from an urn of X items containing ,I/.\ 1s and :Ir y,, 0s. corresponding to the hypergeometric distribution of Section 3.7.2.) -

4.3 Models via Exchangeability

173

If we now define Q,v(8) on R to be the step function which is 0 for 8 < 0 and has jumps of p(zl t . . t S.V = y . ~ ) 8 = y.v/N. gAv= 0,. . . . N. we see that at

As N

x,
( 8 W U , l K1 - ~ ) A v r r - - l / l l
~

fly,,

(1

- ,),I-

,y,,

(Nht uniformly in 8. Moreover, by Hellys theorem (see, for example, Section 3.2.3 and Ash, 1972, Section 8.2), there exists a subsequence Q.v,, Q.v2. . . . such that
:YJ

lirn Q; = Q.

where Q is a distribution function. The result follows.

The interpretation of this representation theorem is of profound significance from the point of view of subjectivist modelling philosophy. It is as if: the .r, are judged to be independent, Bernoulli random quantities (see Section 3.2.2) conditional on a random quantity 8; 8 is itself assigned a probability distribution Q; by the strong law of large numbers, 8 = liInll-x(y,,/n), so that Q may be interpreted as beliefs about the limiting relative frequency of 1 s. In more conventional notation and language, it is as if, conditional on 8, . . ,xll are a random sample from a Bernoulli distribution with parameter 8, generating a parametrised joint sampling distribution
I, I,

p(x1.. . .
,=I

r=l

where the parameter is assigned a prior distribution Q(8).The operational content of this prior distribution derives from the fact that it is as ifwe are assessing beliefs about what we would anticipate observing as the limiting relativefrequency from a very large number of observations. Thought of as a function of 8, we shall refer to the joint sampling distribution as the likelihood function. In terms of Definition 4. I , the assumption of exchangeability for the infinite sequence of 0-1 random quantities ~ 1 . ~ 2 .... places a strict limitation on the family of probability measures P which can serve as predictive probability models for the sequence. Any such P must correspond to the mixture form given in Proposition 4.1, for some choice of prior distribution Q ( 8 ) . As we range over all possible choices of this latter distribution, we generate all possible predictive

174

4 Mudelling

probability models compatible with the assumption of infinite exchangeability for the 0-1 random quantities. Thus, "at a stroke". we establish a justification for the conventional model building procedure of combining a likelihood and a prior. The likelihcd is defined in terms of an assumption of conditional independence of the observations given a parameter; the latter, and its associated prior distribution. acquire an operational interpretation in terms of a limiting average of observables tin this case a limiting frequency). In many applications involving 0-1 random quantities. we may be more interested in a summary random quantity, such as yl, = .rI . . . + J , ~ than in the . individual sequences of . r t * s . The representation of p ( . r ~ . . . + .rI, = !I,,) is straightforwardly obtained from Proposition 4. I .

+ +

Corollary 1. Given the cvnditions o Proposition 4.1. f

PI-UOJThis follows immediately from Proposition 4.1 and the fact that

for all s1. . . . .r,l such that .rl


I

+ . . . f .rll = ,y,,.

T h i s provides a justification. when expressing beliefs about g,,,for acting us ifwe have a binomial likelihood. defined by Bi(.yll10. ) I ) . with a prior distribution Q(0) for the binomial parameter 0.

The formal learning process for models such as this will be developed systematically and generally in Chapter 5 . However. this simple example provides considerable insight into the learning process, showing how. in a sense. the key step is a straightforward consequence of the representation theorem.

Corollary 2. , # f i r . .r2. . . . is (it1 injnitely crchangeahle seqrretice ~ $ 0 - 1 rundun1 quantities with prohuhitity mrcrsitrr P. the conditional probability ,firnction p(rl,, ] . . . . .slI X I . . . . . .rll,).for . t . l , i . - l . . . . . .rIIgir-en .r1. . . . ..I' ,,,. hus + 1
thejhri

4.3 Models via Exchangeubiiity where

175

Proof. Clearly.
p(z,,,+1... . , 5 1 1
12,.

. . .I l l , )= p ( . n , . . . p(z1.. . . .I,,,)
, G I )
f

and the result follows by applying Proposition 4.1 to both p ( q , . . . , z,,) and p(z1... . x , , ~and rearranging the resulting expression. a ) We thus see that the basic form of representation of beliefs does not change. All that has happened. expressed in conventional terminology, is that the prior distribution Q ( 0 ) for 8 has been revised, via &yes' theorem, into the posrerior distribution Q(O 1 zl.. . . ..rnl). The conditional probability function ~ ) ( t ,.~ s I+zl,. ,. . ,z,,,) is called . ., , 2 ~ I the (conditionul,or posterior) predictive probability function for zn,+ . . . , I,# 1. given 11, . . . ,I , , ~and this, of course, also provides the basis for deriving the con, ditional predictive distribution of any other random quantity defined in terms of the futureobservations. Forexample, given rl. . . .xn,,the predictive probability func. tion p ( ~ , ~ XI... .~ I - ~ , .-c,,~) Y,,-,~,.i.e., the total number of 1's in I , , ~ + . .,,x,,, for .~ has the form

A particularly important random quantity defined in terms of future observations is the frequency of 1's in a large sample. But, by Proposition 4.1 and its Corollary 2.

Thus, a posterior distributionfor a parameter i seen to be a limiting case of s u posterior (conditionat)predictive distributionfor an observable.

176
4.3.2

4 Modelling

The Multinomial Model

An alternative way of viewing the 0 -1 random quantities discussed in Section 3.1 i s as defining category membership (given two exclusive and exhaustive categories). in the sense that .r; = 1 signifies that the ith observation belongs to category I and .T, = 0 signities membership of category 2. We can extend this idea in an obvious way by considering I-dimensional random vectors x,whose j t h component, J . , , . takes the value I to indicate membershipofthejth o f k 1 categories. At most one of the k components can take the value I ; if they all take the value 0 this signifies membership of the ( X 1)th category. In what follows. we shall refer t o such 5, as "0-1 random vectors". If xI,2 . . . . i s an infinitely exchangeable sequence of 5 0-1 random vectors. we can extend Proposition4 I in an obvious Wily. .

Proposition 4.2. (Represenfation theorem for 0 -I random vecfors). If 2 1 . 5 2 . . . . is (it1 itrjnitely c~.rchtirigeableseyitetice ($0-l nrntiotii \~~tcir.s P. rcitli probability tt~~(rsiire there e.rists u distrihirtiorifiinctiotr Q such thrit the joint mtus,fiinction p(x I . . , . . x,, x . . . . x,, tlie,fortr~ ),for htrs

where

Proof. This i s a straightforward, albeit algebraically cumbersome. generalisation of the proof of Proposition4. I . 4
As in the previous case, we are often most interested in the summary random x,, whose j t h component g,,,)i s the random quantity vector y,, = x l . . correspondingto the total number ofoccurrences ofcategory j in the I I observations. ( l . . . x,,= y,, 1 = y(y,,~.. . . . y , , ~ 1. We shall give the representation of p z generalising Corollary I to Proposition4.1. and then comment on the interpretation of these rcsults.
+

4.3 Models via Exchangeability


p(ynl,.. . .ynk) may be represented as

177

Corollary. Given the conditions of Proposition 4.2, the joint mass function

where

LlIl hJ
72

T?,!

. ..

= ?Jnl!Pn2!. . .Ynk!(n-

c?/,,l)!

Proof. This follows immediately from the generalisation of the argument used in proving Corollary 1 to Proposition 4. I. 4
Thus, we see in Proposition 4.2 that it is as if we have a likelihood corre1. where sponding to the joint sampling distribution of a random sample z , . . ,z,,, each z,has a multinomial distribution with probability function Muk(z, 18. l ) , together with a prior distribution Q over the multinomial parameter 8, where the components 8, of the latter can be thought of as the limiting relative frequency of membership of the jth category. In the corollary, it is as ifwe assume a multinomial likelihood, Mu&,,. 18. TI). with a prior Q(8)for 8.

4 3 3 The General Model ..


We now consider the case of an infinitely exchangeable sequence of real-valued random quantities xl, . . .. As one might expect, the mathematical technicalities x2, of establishing a representation theorem in the real-valued case are somewhat more complicated than in the 0-1 cases, and a rigorous treatment involves the use of measure-theoretic tools beyond the general mathematical level at which this volume is aimed. For this reason, we shall content ourselves with providing an outlineproof of a form of the representation theorem, having no pretence at mathematical rigour but, hopefully, providing some intuitive insight into the result. as well as the key ideas underlying a form of proper proof. Proposition 4.3. (General representation theorem). ~1~ x2.. . ., is an infinitely exchangeable sequence of real-valued random quantities with probability measure P , there exists a probability memure Q over 3, the space o all distributionfunctio,u on 8, f such that the joint distributionfunction of T I . . . . ,x,, has the form
I f

where
Q ( F ) = lirn P(Fl,)
11-2

and F,, is the empirical distributionfunction defined by X I , . . . ,T,, .

178
Ourline proof. (See Chow and Teicher. 1978/ 19138). Since

4 Modelling

we have. by exchangeability,

To see this, writing I, in place of

and noting that I = I,. we have '

Note also that E(I,) = P ( J , < .r) and E ( Z , I J ) = I'[(.rl < .r) ( . r ~ .I*)]. < for all i . j , by exchangeability. A straightforward count of the numbers of terms involved in the summations then gives the required result. tt x.and hence the random quantity The right-hand side tends to zero as ,IT. F,,(.r) in probability to some random quantity. F(.r). say, which implies that tends

in probability as ,V x , for fixed 11. Suppose we now let (1 I . . . . . o,, denote positive integers and set

,=I

I-'

and

' -= I ( n ) = I[(.r,,l 5 . r l ) n . . . n ( .,,,, < . r , , ) ] . 4 r


For il: > t t . it then follows that

4.3 Models via Exchangeability


However, as N
-+

179

co,

so that.
,I

But, by exchangeability,

and so

Recalling (*), we see that, as N

m,

/g

F ( z j )dQ(F)

P ( x ~ .,. ,x,) .

where Q ( F ) = lim,v-.x P ( F , v ) . a The general form of representation for real-valued exchangeablerandom quantities is therefore as ifwe have independent observations zl, . . . ,zn conditional on F , an unknown (i.e., random) distribution function (which plays the role of an infinite-dimensional "parameter" in this case), with a belief distribution Q for F, having the operational interpretation of "what we believe the empirical distribution function would look like for a large sample". The structure of the learning process for a general exchangeable sequence of real-valued random quantities, with the distribution function representation given in Proposition 4.3, cannot easily be described explicitly. In what follows, we shall therefore find it convenient to restrict attention to those cases where a corresponding representation holds in terms of density functions, labelled by a finite-dimensional parameter, 8, say, rather than the infinite-dimensional label. F. For ease of reference, we present this representation as a corollary to Proposition 4.3.

180
of Proposition 4.3 the joint density of s1. . . . , .rll has the form

4 Modelling

Corollary 1. Assuming the required densities to exist, under the conditions

p(x,. .
with p ( . 16) denoting the density jiinction corresponding to the "unknocon parameter" 6 E 8. The role of Bayes' theorem in the learning process is now easily identified.
X I .x 2 , . . . is an injnitely exchangeable sequence o retrlf valued random quantities admitting a density representation as in Corollarj I , then

Corollary 2. r f

Proof. This follows immediately on writing


P(.r,,/+I,.. . * X I , 1 X I . .

. . ..rt,,)=

... XI/) p ( r , . . . . . x,,,)


[.'(.rl.

applying the density representation form to both p ( . r l . . . . . x,,) p ( . r l . . . . ..rt,,). and and rearranging the resulting expression. a The technical discussion in this section has centred on exchangeable sequences, zl. x2, . . . . of real-valued random quantities. In fact. everything carries over in an obviously analogous manner to the case of exchangeable sequences xl. 2 . . . . , with x,E 'Rk. that happens, in effect, is that the distribution func2 All tions and densities referred to in Proposition 4.3 and its corollaries become the To joint distribution functions and densities for the k components of the z,. avoid tedious distinctions between .r E 'R and x E !Rk. in subsequent developments we shall often just write s E X. In cases where the distinction between k = 1 and A- > 1 matters, it will be clear from the context what is intended. In Section 4.8. I , we shall give detailed references to t e literature on represenh tation theorems for exchangeable sequences, including far-reaching generalisations of the 0 - I and real-valuedcases. However. wen the simple cases we have presented already provide, from the subjectivist perspective, a deeply satisfying clarification of such fundamental notions as models, parumeters. conditional independence and the relationship between helieji and l i n i i t i t i ~ ~ ~ t ~ ~ u e n c . i r s .

4.4 Models via Invariance

181

In terms of Definition 4.1, the assumption of exchangeability for the realvalued random quantities ~ 1 . ~ .2. ., again places (as in the 0-1 case) a limitation on the family of probability measures P which can serve as predictive probability models. In this case, however, in the context of the general form of representation given in Proposition 4.3, the parameter, F, underlying the conditional independence structure within the mixture is a random distribution function, so that the parameter is, in effect, injnire dimensional,and the family of coherent predictive probability models is generated by ranging through all possible prior distributions Q(Fj. The mathematical form of the required representation is well-defined, but the practical task of translating actual beliefs about real-valued random quantities into the required mathematical form of a measure over a function space seems, to say the least, a somewhat daunting prospect. It is interesting therefore to see whether there exist more complex formal structures of belief, imposing further symmetries or structure beyond simple exchangeability, which lead to more specific and familiar model representations. In particular, it is of interest to identify situations in which exchangeabilityleads to a mixture of conditional independence structures which are defined in terms of ajnite dimensional parameter so that the more explicit forms given in the corollaries to Proposition 4.3 can be invoked. Given the interpretation of the components of such a parameter as strong law limits of simple sequences of functions of the observations, the specification of Q , and hence of the complete predictive probability model P , then becomes a much less daunting task.

4.4
4.4.1

MODELS VIA INVARIANCE


The Normal Model

Suppose that in addition to judging an infinite sequence of real-valued random quantities q , 22,. . to be exchangeable, we consider the possibility of further . judgements of invariance, perhaps relating to the geometryof the space in which say, lie. The following definitions a finite subset of observations, 2 1 , . . . T,,, describe two such possible judgements of invariance. As with exchangeability, there is no claim that such judgements have any a priori special status. They are intended, simply, as possible forms of judgement that might be made, and whose consequences might be interesting to explore.
~

Definition 4.4. (Spherical symmetry).

xl,. . . x,,is said to have spherical symmetry under a predictive probability model P ifthe latter defines the distributionsof x = ( x l . . . , , x,,) and Ax to be identical,for any (orthogonal)n x n matrix A such that AtA = I .

A sequence of random quantities

102

4 Modelling

This definition encapsulates a judgement of rotational symmetry, in the sense that, although measurements happened to have been expressed in terms of a particular coordinate system (yielding s ~. .. ..r,,),our quantitative beliefs would not . change if they had been expressed in a rotated coordinate system. Since rotational invariance fixes "distances" from the origin. this is equivalent to a judgement of identical beliefs for all outcomes of sI.. . . ..r,, leading to the same value of
J;

+ . . . + .I.;.

The next result states that if we make the judgement of spherical symmetry (which in turn implies a judgement of exchangeability.since permutation is a special case of orthogonal transformation). the general mixture representation given in Proposition 4.3 assumes a much more concrete and familiar form.

Proposition 4.4. (Representaliontheorem under spherical symmetry). If .I.]. ,r2.. . . is uti infinite sequence .f rei~l-vul~ied rundotn yirnntities with probability meuswe P, und if, fi)r utiy 11, { .rl . . . . ..r,)} h a w sphrr-iculsynnietry. there e.risfs Q distribution function Q on 'R 1- such that thejoint distribirtion Jimctiori of r 1 . . . . .x has the jbrni

where CP is the standard tiorninl distrihuriori~fimctioti unit

with s = n-'(.r: :

+ . . . + xf), and X ' = l i i i i , ,

.x

si.

Proof. See, for example. Freedman (1963a) and Kingman (1972): details are omitted here, since the proof of a generalisation of this result will be given in full in Proposition 4.5. a The form of representation obtained in Proposition 4.4 tells us that the judgement of spherical symmetry restricts the set of coherent predictive probability models to those which are generated by acting us if:
(i) observations are conditionally independent riortnal random quantities. c' *wen X (which, as a "labelling parameter". corresponds to the the random quantity precision; i.e., the reciprocal of the variance); (ii) X is itself assigned a distribution Q; s i . so that (2 may be (iii) by the strong law of large numbers. X - ' = interpreted as "beliefs about the reciprocal of the limiting mean sum of squares of the observations".

4.4 Models via Invariance

183

For related work see Dawid ( 1977,1978). Toobtain a justification for the usual normal specification, with unknown mean and precision, we need to generalise the above discussion slightly. We note first that the judgement of spherical symmetry implicitly attaches a special significance to the origin of the coordinate system, since it is equivalent to a judgement of invariance in terms of distance from the origin. In general, however, if we were to feel able to make a judgement of spherical symmetry, it would typically only be relative to an origin defined in terms of the centre of the random quantities under consideration. This motivates the following definition.

Definition 4.5. (Centredsphericdsymmetry). A sequence ofrandom quantities X I , . . . . x, is said to have centred spherical symmetry ifthe random quantities-cl - P , , , . . . . x n - x , , ~ v e ~ ~ p h e r i c a l s y m m e t ~ , = n- where%,, This is equivalent to a judgement of identical beliefsfor all outcomes of x1,. . . , I,, leading to the same value of ( X I + . . . + (zf,- T t 1 ) * .

Ex,.

Proposition 4.5. (Representation under centred spherical symmetry). If xl, . . . is an infinitely exchangeable sequence of real-valued random 22, quantities with probability measure P, and if, for any n,, { X I , . . . x i , } have centred spherical symmetry, then there exists a distribution function Q on R x R+ such that the joint distribution of x 1 . . . ,x, h a s theform ~

where

is the standard normal distributionfunction and


Q ( p ,A) = lim P[(?,,I) n ( s i 2 I A)] p
n-x

with x,, = TI-(x~ ... x,,),S: = 7 t - [ ( ~ l - T , r ) 2 p = limrrd-a and A- = 1imtt-% En, s?,.

+ + (x,)- Z,)],

Proof. (Smith, 1981). Since the sequence X I , 22,. . is exchangeable, by . Proposition 4.3 there exists a random distribution function F such that, condi, tional on F , the random quantities .cI, . . . , x , ~for any n, are independent. There is therefore a random characteristic function, 4, corresponding to F . such that

184
If we now define y = .c, - . ~ - , ~ , j1.. . . . i t . it follows that , =

4 Mvdelling

for all real s l . . . . s,,such that sI + . . . + = 0. Since !jI.. . . . are spherically symmetric, both sides of this latter equality depend only on sf + . . . + s' . Recalling that q ( - f ) = o ( t ) . the complex conjugate. and that o ( 0 ) = 1. it follows that. for any real u and v .
II

E { J O +~ ( r,)o(u-

1.1

~ ; j ~ [ t ~ ) ~ ( f , ) ~ ~ ( - f l ) l ~ }

= f<{Q(l/

iQ)(;)(l(

f*j@(--//

I $ ) O ( / *-

u)}

- E ( O ( U-t ( . ) ( > ( I ( - t ? ) o " ( - u ) o ( - I - ) c ) ( I - ) } - E ( O ( - I I - tljc3(r. - l/)(;)~(l/j~;~(~,),,~(-l,)} + E { 0' ( (1 )O' ( I ' ) C 7 ? ( - /-)c? ( - I / ) } .


where all four terms in this expression are of the form of the right-hand side of ( * ) with n = 8. s1 + . . . + s X = 0 and .s; + . . . + .si = -I( f r 2 ) . All the four terms 11' are therefore equal, so that the overall expression is zero. This implies that. almost surely with respect to the probability measure P. L:, satisfies the functional equation

o f/ (
for all real
II

+ r)c>( / I -

I.)

= I>')( u)o(I ' ) C 3 (

-I * )

and

11.

This can be rewritten in the form


\zI, ( I 1

+ + \zI'(
I,)

I1

1.)

= A( I / )

+ n(

1 8 ) .

where @ , ( t ) = @ ? ( t ) = l o g d ( f ) ,and where A ( . ) = 2logd(,(u)and B ( v ) II: log[@(! ) @ ( - r ) ] ; it follows that log o ( t )is a quadratic in 1 (see. for example. Kagan. I Linnik and Rao. 1973, Lcmmn 1.5.1). Again using ~ ( - f ) = c ; [ f ) . c $ ( O ) = 1. we see that, for this quadratic. the constant coefficient must be zero. the linear coefficient purely imaginary and the quadratic coefficient real and non-positive. This establishes that the random characteristic function I.) takes the form

d ( t ) = c'sp ipt for some random quantities 11 E %. A E !W. If we now define a random quantity :by
2

; -

::l}

= cxp ( i

t,q)

4.4 Models via Invariance


then, by iterated expectation, we have

185

This establishes that, conditional on p and A, x l , . . . ,zn are independent normally distributed random quantities, each with mean p and precision A. The mixing distribution in the general representation theorem reduces therefore to a joint distribution over p and A. But, by the strong law of large numbers,
lirn
2 1

1- . *

2-11

l 7x

n n

= P,

lim
11-x,

(XI

- ~ 2 , ) 2 + ~ ~ . + -(Xx ,l) , - 1 1 2
_ - - 1

and the result follows.

We see, therefore, that the combined judgements of exchangeability and centred spherical symmetry restrict the set of coherent predictive probability models to those which, expressed in conventional terminology, correspond to acting as if:
(i) we have a random sample from a normal distribution with unknown mean and precision parameters, p and A, generating a likelihood
n

p(x1, * * . . x i , l p . A) = n N ( + i
i=l

(ii) we have a joint prior distribution Q ( p ,A) for the unknown parameters, p and A, which can be given an operational interpretation as "beliefs about the sample mean and reciprocal sample variance which would result from a large number of observations".
4.4.2
zl, 2 2 . .

The Multivariate Normal Model

Suppose now that we have an infinitely exchangeable sequence of random vectors . . taking values in !Rk, k 2 2, and that, in addition, we judge, for all n and for all c E 3Zk, that the random quantities c ~ z ~ . ,.ctz,, have centred ., spherical symmetry. The next result then provides a multivariate generalisation of Proposition 4.5.

186

I Madelling

Proposition4.6. (Mulfivariccrerepresentah'ontheorem under centred spherical symmetry). If x 1 . 2 2 . . . . is an infinitely exchangeable sequence of random vectors taking rdues in Rk, with probabiliry measure 1'. such that. .for any '11 and c f ?TIk, the random quantities C'XI. . . . .c'x,,have centred spherical symmetry the structure a evaluations under P of probabilities of events f . is nordejined by X I . . . . x,, cis if rhe latter were independent, ttiulti~nriare mally distributed rundam wcm-s. conditional on (1 rundoni meun wcm- p und a random precision matrix A, nith u distribution o w - p and A indircrd by P, where

j Prook Defining y, = ctzJ.= 1.. . . . 11. we see that the random quantities yl. . . . . y,, have centred spherical symmetry and so, by Proposition 4.5. there exist p = p(c)and X = X(c)such that, for all f , E %. j = I . . . . . I ) .

where
.
I,

so that

for all c E 92" f , E 92.1 = 1. . . . . I ) . It follows that. for all t , E !HA. = 1. . . . . I ) . ,j

so that, conditional on p and A, X I . . . . x,, independent multivariate normal . are random quantities each with mean p and precision matrix A. .3

4.4 Models via Invariance

187

4.4.3

The Exponential Model

Suppose x l ,xz,. . . is judged to be an infinitely exchangeable sequence of positive real-valued random quantities. In particular, we note that this implies, for any pair x,, an identity of beliefs for any events in the positive quadrant which are xJ, symmetrically placed with respect to the 45 line through the origin.
I
XI

Figure 4.1 A , Az, BI, L32 rejections in 45 line. C ,,C2rejections in (dashed) 45 line

Thus, for example, in Figure 4.1, the probabilities assigned to A1 and A z , B1 and 32, respectively, must be equal, for any i # j. In general, however, the 1 have assumption of exchangeabilitywould not imply that events such as C and CZ equal probabilities, even though they are symmetrically placed with respect to a 45 line (but not the one through the origin). It is interesting to ask under what circumstances an individual might judge , events such as C1 C2 to have equal probabilities. The answer is suggested by the additional (dashed) lines in the figure. Ifwe added to the assumption of exchangeability the judgement that the origins of the xi and x, axes are irrelevant, so far as probability judgements are concerned, then the probabilities of events such as C1and C would be judged equal. In perhaps more familiar terms, this would 2 be as though, when making judgements about events in the positive quadrant, an individualsjudgement exhibited a form of lack of memory property with respect to the origin. If such a judgement is assumed to hold for all subsets of n (rather than just two) random quantities, the resulting representation is as follows.

188
I f

4 Modelling

Proposition 4.7. (Continuousrepresentalion under origin invariance). X I . x2, , . . is an infinitely exchangeable sequence of positive real-valued random quantities with probability measure P. such that, for all 11, and any event A in R+ x . . . x !R+,

P [ ( r , .. . . ,&)

A] = Pl(r1... .

Z I f )E

+ a]

for all a E 8 x . . . x Rsuch that at1 = 0and A+Q is an event in %+ x . . . x %, then the joint densityfor x l . . . . . .r,, has the form
p ( x , .. . .

where 0 = lim,f-x T i , and

Outline prooJ (Diaconis and Y Ivisaker, 1985). By the general representation theorem, there exists a random distribution function F,such that, conditional on F. sl.. . . ,I,, independent, for any n. It can be shown that the additional are invariance property continues to hold conditional on F, so that, for any i # J , P[(J,.Z,)E .41F] = P[(;c,,r,) A + a ( F ] E
for A and Q as described above. If we now take a = ( a ] ,ap) and
A = { ( s , . s , ) ; s,> a1

+ a.L,x, > 0 )

we have

P [ ( x *> al

+ a 2 )n (x, > 0) I F ] = P [ ( x ,> a l ) n (2, > u 2 )1 F ]


=

P [ ( & > u1) I F ] P [ ( x ,> a2) I F ] .

By exchangeability, and recalling that I, is certainly positive for all j , this implies that P(1, > nl + (2.2 I F ) = P ( X , > 01 1 F ) P ( s ,> a2 1 F ) . But this functional relationship implies, for positive real-valued x,, that
p(.r, > . I E) = < r
r

for some 8,so that the density, p(s,I F) = p(.r, 10). is the derivative of
I - t>xp(--Hx,).

and hence given by 8 cxp( -Ox, ). The rest of the result follows on noting that, by the strong law of large numbers. tl- = 1iiiir,-. [ t i - ( . r , + . . + . I , , ) ] . <J

4.4 Models via lnvariance

189

Thus, we see that judgements of exchangeability and lack of memory for sequences of positive real-valued random quantities constrain the possible predictive probability models for the sequence to be those which are generated by acting US if we have a random sample from an exponential distribution with unknown parameter 8, with a prior distribution Q for the latter. In fact, if Qdenotes the corresponding distribution for 4 = 8-1 = lim,,,, Z,,, it may be easier to use the reparametrised representation

since Q*is then more directly accessible as beliefs about the sample mean from a large number of observations. Recalling the possible motivation given above for the additional invariance assumption on the sequence X I , ~ 2 , .. . it is interesting to note the very specific and well-known lack of memory property of the exponentialdistribution; namely,
~

p ( Z C ,>

nl

+ a2 1 8.1,> a l ) = P(.,

> u2 I 8).

which appears implicitly in the above proof.

4.4.4

The Geometric Model Suppose x 1 , x 2 , . . . is judged to be an infinitely exchangeable sequence of strictly

positive integer-valued random quantities. It is easy to see that we could repeat the entire introductory discussion of Section 4.4.3, except that events would now be defined in terms of sets of points on the lattice 2+ x . .. x Z+, rather than as regions in %+ x . , . x W. This enables us to state the following representation result.

Proposition 4.8. ( i c e erepresentution under origin invariance). Dsrt

I X I ,5 2 , . . , is an infinitely exchangeuble sequence of positive integer-valued f


rundom quantities with probability measure P, such that, for all event A in 2+ x . . . x 2+,
11 and

any

P ( ( r 1 , .. . .-c,,) E . ] P [ ( x l , . . ..L) E A a] 4= for (111 a E 2 x . . . x 2 such thut al = 0 and A + a is an event in 2+x . . . x 2 + , then the joint densityfor X I . . . . ,x,,has theform

190

4 Modelling

Outline proof. This follows precisely the steps in the proof of Proposition 4.7. except that, for positive integer-valued z,. functional equation the

P(r, > Rj
implies that

112

I F ) = P ( r , > f11 I F)P(.r, > (I:! I F )


P(.r, > x I F ) = I 9 ' .

so that the probability function, y ( r l 1 F) = p ( s , 1 I9) is easily seen to be #( 1 Again. by the strong law of large numbers, I9 I = l i i i i , , ., .I,,,. where. since .r, 2 1 . forall i.0 < 6 5 1. a
-O)"t-'.

In this case, the coherent predictive probability models must be those which are generated by acting us ifwe have a rundom sample from a geometric distribution with unknown purumeter 6 . with aprior distribution Q for the latter, where 19.- = lim,,,, Z,,. Again, recalling the possible motivation for the additional invariance property, it is interesting to note the familiar "lack of memory" property of the geometric distribution; P(n., > al + a ? 18. s,> 0 1 ) = P(.r, > a? 18).

4.5
4.5.1

MODELS VIA SUFFICIENT STATISTICS


Summary Statistics

We begin with a formal definition. which enables us to discuss the process ofsurnmurising a sequence, or sample. of random quantities. .rI. . . . . .I.,,, . (In general. our

discussion carries over to the case of random vectors, but for notational simplicity we shall usually talk in terms of random quantities.)

Definition 4.6. (Stalislic). Given rfJrIdOti1 quatitities (tvctors) . I . \ . . . . . J . , , ~ . with specified sets ofpossible i~uliresY I . . . . . XI,, respectiwly. (I mndoni vrc' . /or t,,,: . ix . . . x X,,, c R k " " ) ( L - ( ~ 5 )1 1 1 ) is culled X(irt)-tlintc.nsionuI ', -+ t~
statistic.
). A trivial case of such a statistic would be t,,,( X I . . . . . ,I.,#, = (./.I . . . . .r,,, but this clearly does not achieve much by way of summarisation since A. r r t ) = / I / . Familiar examples of summary statistics are:

t,,,= I H

+ . . . + x,,,). the strniple niean (A( .= 1 ): t,,, = [ r n , ( J ]+ . . . + x,,, ). (.ri + . + .r;, )I. the .sclnIplr si:e.
I
(XI
I//)
,
'

tr,tc/l Ulld .Slllll

(k(rrr)= 3 ) : t,,,= [ n i . med{X I . . . . ..).,,,)I. the sample size und nirdinn ( k (I / / ) = 2 ) : t,,, = lllax{./j. . . . . J',,, } - I l l i l l ( .!'I. . . . . J , , ,} . t h e strrnplr rr/rrgr (A,( / ) / ) = 1 ).
O ~ S ~ N N M S

4.5 Models via Suficient Statistics

191

To achieve data reduction, we clearly need k ( m ) < m:moreover, as with the above examples, further clarity of interpretation is achieved if k(m) = k, a fixed dimension independent of TR. In the next section, we shall examine the formal acceptabilityand implications of seeking to act as if particular summary statistics have a special status in the context of representing beliefs about a sequence of random vectors. We shall not concern ourselves at this stage with the origin of or motivation for any such choice of particular summary statistics. Instead, we shall focus attention on the general questions of whether, and under what circumstances, it is coherent to invoke such a form of data reduction and, if so, what forms of representation for predictive probability models might result, Throughout, we shall assume that beliefs can be represented in terms of density functions.

4.5.2

PredictiveSufficiency and Parametric Sufficiency

As an example of the way in which a summary statistic might be assumed to play

a special role in the evolution of beliefs, let us consider the following general situation. Past observations xl . . . x , , ~ available and an individual is contemare plating, conditional on this given information, beliefs about future observations x , , + ~. . . .xtl,to be described by p ( ~ , , ~. +.~x,,1 zl, . . . ,s , ) . The following , . .. definition describes one possible way in which assumptions of systematic data reduction might be incorporated into the structure of such conditional beliefs.
!

Definition 4.7. (Predictive su&iency). Given a sequence of random quantities x 1 ,x:!, . . . , with probability measure P, where xt takes values in Xi, i = 1.2,. . . the sequence of statistics t l , t 2 . . . . , with t, dejined on X x . . . x X,, is said to be predictive suficient for the 1 2 ....! m} =0. seyuence.zl.z2 . . . .i j f ~ w a l l 7 n 1.r >_ l a n d { i l.... ,zr}n{1

where p ( . I .) is the conditional density induced by P.

The above definition captures the idea that, given t,, = trl2(x1,. . ,.rrl1). . the individual values of SI, . . . .x,,, contribute nothing further to one's evaluation of probabilities of future events defined in terms of as yet unobserved random quantities. Another way of expressing this, as is easily verified from Definition 4.7, is that future observations ( x , ~. .. . .x,,.) and past observations (21.. . . , x,,,) are conditionally independent given t,,,. Clearly, from a pragmatic point of view the assumption of a specified sequence of predictive sufficient statistics will, in general, greatly simplify the process of assessing probabilitiesof future events conditional on past observations. From a formal point of view. however, we shall need additional

192

4 Modelling

structure if we are to succeed in using this idea to identify specific forms of the general representation of the joint distribution of ~ 1 . .. . .x,,.
As a particular ilfustratioti of what might be achieved, we shall assume in what follows that the probubilih nieusure P describing our belieji implies both .r2. predictive su@ciency and exchangeubili!\t for the injnite sequence .II. , . .. As with our earlier discussion in Section 4.4. a mathematically rigorous treatment is beyond the intended level of this book and so we shall confine ourselves to an informal presentation of the main ideas. In particular, throughout this section we shall assume that the exchangeability assumption leads to a finitely parametrised mixture representation, as in Corollary I to Proposition 4.3. so that. as shown in Corollary 2 to that proposition. the . . conditional density function of . I , , , , . . . . . .I.,,. given . r ~. . . . r,,, has the form

and all integrals, here and in what follows, are assumed to be over the set of possible values of 6. This latter form makes clear that, for such exchangeable beliefs. the learning process is transmitted within the mixture representation by the updating of beliefs about the unknown parameter 8. This suggests another possible way of defining a statistic t,,, = t,,,(.r,, . . .. r f , , )to be a sufficient summary of . r l .. . . . .r,,,. .

Definition 4.8. (Parametric sufiiency). If .rl . .I.?. . . . is un infinitely e.vchangeable sequence of rundoni quantities. where .r, tukes wlues in X I = X.i = 1.2, . . ., ,lie seqirence of srutistii-s t l . t?.. . ., with t , dejitieri on S x 1 . . . x .Y,. is said to be parametric sii&ienrjbr x I , .I*?.. . . if ,fi,r tiny I I 2 I ,

q ( e 1 .rl. . . . . .r,,) = (rc)(et,,). I


for any d 4 (6 ) dejtiing an exchangeable predictive prohubilih model rici thr represenration

p ( x , .. .. .

Definitions 4.7 and 4.8 both seem intuitively compelling as encapsulations of the notion of a statistic being a sufficient summary. I t is perhaps reassuring therefore that. within our assumed framework. we can establish the following.

4.5 Models via Suflcient Statistics

193

Proposition 4.9. (Equivalenceof predictive and paremetric sufiiencies). Given an infinitely exchangeable sequence of random quantities X I , x2, . . ., where xi takes values in Xi = X , i = 1,2,. . ., the sequence o statistics f t l ,t 2 , . . . with t j defined on XIx . - . x X, is predictive suflcient $ and only if; it is parametric suflcient.
Heuristicproof. For any X I ,. . . ,x,, I xm+l, . . ,xn and any sequence of statis. tics t,,, where t , = t,,,(21, . . . ,x,,,),m = 1, . . . ,n - 1. the representation theorem implies that

where A = ( ( 2 1 . . . $ x m ) i t m ( x l , .. . , x m ) = t m } ,which, in turn, can be easily shown to be expressible as

It follows that

P(x,n+1+ *

* 9

zn I t m ) = P(xm+lr.. . ,~n
=

* . 3 xm)

t=m+l

fi

p ( z l 1 e) d Q ( e 1x1,.. . , x m ) ,
Q

if, and only if, d Q ( 8 I X I , .

. . ,X m ) = dQ(6 I t m ) all dQ(6). for

To make further progress, we now establish that parametric sufficiency is itself equivalent to certain further conditions on the probability structure.

Proposition4.10. (Neymanfactorisation criterion). The sequence tl ,t 2 , . . . i parametric sufiient for infinitely exchangeable x 1 . 2 2 , . . . admitting a s finitely parametrised mixture representation if and only $ for any m 2 1, the joint density for xI, . . . ,x,, given 8 has the form
~ ( x I . . * . 16) ~ h n ~ ( t , , , B ) g ( ~ l , . . . , x , n ) , ,x = for somefunctions h,,, 2 0 . g > 0.

194

4 Modelling
Outline proof. Given such a factorisation. for any d Q ( 8 ) we have

. for some It,,, > 0. The right-hand side depends on X I , . . ..r,,,only through t,,, and, hence, dQ(0 I .rl.. . . J , , , ) = dQ(8 1 t,,,).Conversely, given parametric sufficiency, we have, for any dQ(8) with support 8,

Proposition 4.11. (Sufficiency and condifiunal independence). The sequence t 1 , t 2 , . . . is parametric suBcienr for infinitely exchangeable .rl,x2, . . . ij uttd only ij for ony I I I 2 1. the density p ( x ~. . . ..r,,, 18. t , , , ) is independenr off?.
Outline prooj For any t,,,= t,,, . T I . . . . . s,,,) have ( we

P(.w.. . ,.I.,,,I e ) = p ( . r l . .. . x,,,I e. t 1 7 1 ) p ( t1 ,el. l~ If p ( r l ,. . . s,,, tin) is independent of 8. the parametric sufficiency of t l .t?.. . . 1 8. follows immediately from Proposition 4.10. Conversely. suppose that t , .t 2 .. . . is parametric sufficient. so that, by Proposition 4.10. p ( . r l .. . . ,s,,, = h,,,(t,,,. 10) e)g(.rl.. . . ..r,,, 1 for some h,,, 2 0,y > 0. Integrating over all values t,,,( ~ - 1 ,. . . ,s,,,) t,,, we obtain = ,
p ( t l f l 0) = ~ I
{ ~ j . . ..r,,,} such

..

that

t ( t , ~, ) w ,

forsomeG > 0. Substitutingforh,,,(t,,,.0)intheexpressionforp(.r~.. . ..I',,,18). . we obtain

so that

which is independent of 8.

cl

4.5 Models viu SuBient Statistics

195

In the approach we have adopted, the definitions and consequences of predictive and parametric sufficiency have been motivated and examined within the general framework of seeking to find coherent representations of subjective beliefs about sequencesof observables. Thus, for example, the notion of parametric sufficiency has so far only been put forward within the context of exchangeable beliefs, where the operationalsignificanceof parametertypically becomes clear from the relevant representation theorem. In fact, however, as the reader familiar with moreconventionalapproaches will have already realised, related concepts of sufficiencyarealso central to nonsubjectivisttheories. In particular, we note that the non-dependence of the density p ( z l : . . . ,z, I 6, t,,,)on 6,established here in Proposition 4.1 1 as a consequence ,, of our definitions,was itself put forward as the dejnition of asufficient statistic by Fisher (1922). and the factorisation given in Proposition 4.10 was established by Neyman (1935) as equivalent to the Fisher definition. From an operational, subjectivist point of view, it seems to us rather mysterious to launch into fundamental definitions about learning processes expressed in terms of conditioning on parameters having no status other than as labels. However, from a technical point of view, since our representation for exchangeable sequences provides, for us. a justification for regarding the usual (Fisher) definition as equivalent to predictive and parametric sufficiency, we can exploit many of the important mathematical results which have been established using that definition as a starting point.

In the context of our subjectivist discussion of beliefs and models, we shall mainly be interested in asking the following questions. When is it coherent to act as if there is a sequence of predictive sufficient statistics associated with an exchangeable sequence of random quantities? What forms of predictive probability model are implied in cases where we can assume a sequence of predictive sufficient statistics? Aside from these foundational and modelling questions, however, the results given above also enable us to check the form of the predictive sufficient statistics for any given exchangeable representation. We shall illustrate this possibility with some simple examples before continuing with the general development.
Example 4.5. (Bernoulli model). We recall from Proposition 4.1 that if x l ,x2,. . . is an infinitely exchangeable sequence of 0- 1 random quantities, then we have the general representation

196
where Y,, = X I

4 Modelling

+ . . . + r,,.Defining t,, =
~(3.1..

[ t i . 5,,]

and noting that we can write

. . .t,, = h,,(t,,. 10) O)y(.rI.. . . ..r,,1.

with

h,,(t,,.@)P ( 1 - H ) - . g ( r I ...... = 1. = r,,)


it follows from Propositions 4.9 and 4.10 that the sequence t l . t 2 . .. . is predictive and parametric sufficient for sI. . . .. This corresponds precisely to the intuitive idea that the .c2. sequence length and total number of 1s summarises all the interesting information in any sequence of observed exchangeable 0- 1 random quantities.

Example 4.6. (Nonnol model). We recall from Proposition 4 5 that if s,. . . is .r2.. an exchangeable sequence of real-valued random quantities with the additional property of centred spherical symmetry then we have the general representation
p ( z l . . . . .z,,) =

1%

p(.r,.. . . .s,, . X ) t l Q ( p . A ) Ip

where

In the light of Propositions 4.10 and Proposition 4. I I , inspection of reveals that

~ ( Z I. . . .r,$ p .

. 1

A)

t,, = (12.2,,.. 2 , 9
defines a sequence of predictive and parametric sufficient statistics for . r 2 . .. . . In view of the centring and spherical symmetry conditions, it is perhaps not surprising that the sample size. mean and sample mean sum of squares about the mean turn out to be sufficient summaries. Of course. t,, is not unique; for example. since

. , I

we could equally well define t,, = statistics.

[7!.~,,.

rt-j(.r;

+ . r i ) / as the sequence of sufficient

4.5 Models via Suficient Statistics

197

e Example 4.7. (Expnentiulmudel). W recall from Proposition 4.7 that if 5 , ;.r2? . . . is an exchangeable sequence of positive real-valued random quantities with an additional origin invariance property, then we have the general representation

Lx

01

exp(-~u,,)t~~(~)

11 . . . r,, Again, it is immediate from Propositions 4.10 and 4. I 1 that . t , = In.s,,) defines a sequence of predictive and parametric sufficient statistics, although, in this example, there is not such an obvious link between the form of invariance assumed

where s,, =

+ +

and the form of the sufficient statistic.

It is clear from the general definition of a sufficient statistic (parametric or predictive) that t,,(zl,.. . ,x , ~ = (12. (XI,.. , x,,)] always a sufficient statistic. ) . is However, given our interest in achieving simplification through data reduction, it is equally clear that we should like to focus on sufficient statistics which are, in some sense, minimal. This motivates the following definition.

. .. tics, sl, s2,.. . , there existfunctions gl(.),g2(.). such that t , = g , ( s t ) , i = 1.2,...


It is easily seen that the forms of t ( z )identified in Examples 4.5 to 4.7 are minimal sufficient statistics. From now on, references to sufficient statistics should be interpreted as intending minimal sufficient statistics. Finally, since n very often appears as part of the sufficient statistic, we shall sometimes, to avoid tedious repetition, omit explicit mention of n and refer to the interesting function(s) of T I ,. . . ,T,, as the sufficient statistic.
4.5.3

Definition4.9. (Minimal suyicient statistic). Ifx x2, . . . ,is an infinitely exchangeable sequence of random quantities, where x,takes values in X,= X, the sequence of statistics tl .tar. . ., with t, defined on X x . . . x X .is min1 , imal suflcienr for 21.q , . . i given any other sequence of suficient statis. f

,.

Sufficiency and the Exponential Family

In the previous section, we identified some further potential structure in the general representation of joint densities for exchangeable random quantities when predictive sufficiency is assumed. We shall now take this process a stage further by examining in detail representations relating to sufficient statistics of fixed dimension.

I 98

4 Modelling

Since we have established, in the finite parameter framework. the equivalence of predictive and parametric sufficiency for the case of exchangeable random quantities, and their equivalence with the factorisation criterion of Proposition 4.1 1. we shall from now on simply use the tern suflcient stutisric. without risk of confusion. We begin by considering exchangeable beliefs constructed by mixing, with respect to some dQ(I9).over a specified parametric form

where I9 is a one-dimensional parameter. By Proposition 4.10, if the form of y( s 1 19) is such that p ( q . . . . ,sft10) factors into h f , ( t , ,0 ) g ( . r 1 7 . . . x f l ) , some h f 89. . for . the statistic t, = t,L(.rl..,z t l ) .. would be sufficient. An important class o such f p(.r 10) is identified in the following definition.

Definition 4.10. (One-parameterexponentialfamily). A probubility densin lor massfunction) p ( s I 19).lubelled by 8 E 0 C R. is suid ro belong to tha one-purumeter exponentialfamily if it is of the fornr

where, given f . h , 0,and c. [y(tl)]-' = J .f(.r) exp{co(O)h(s)}ri.r < x. , . Thefumilj is culled regular if X does not depend on 8: orherwise it is culled non-regular,

Proposition 4.12. (Su@knt s&atis&ics the one-parameter exponential for family). I f ' r 1 , .Q.. . . . .rfJ S. un e.rcliurigeuhle seqirence such thnt. given E is regular Ef(. I .),

for some dQ(t)), then t,, = t , , ( . r , . . . . . I ' , ~ )= [ r ~ h(.rI) + . . . + h ( . r I , ) jiir . ], I I = 1,2. . . ., i s ci sequence (f'.viiSJicient stntistics. Proof. This follows immediately from Proposition 4.10 on noting that

4.5 Models via Sujicient Statistics

199

The following standard univariate probability distributions are particular cases

of the (regular) one-parameter exponential family with the appropriate choices of f , 9,etc. as indicated.
Bernoulli
P(X I 8) = Br(s 18) eJ(i =

ep,

I E

{o, I}, e E [o. 1).

Poisson

We note that the term cg(8) appearing in the general Ef(. 1 .) form could always
be simply written as @ ( O ) with cjt suitably defined (see, also, Definition 4.1 I). However, it is often convenient to be able to separate the interesting function of 8,d(8), from the constant which happens to multiply it.

In Definition 4.10. we allowed for the possibility (the non-regular case) that the range, X, of possible values of z might itself depend on the labelling parameter 8. Although we have not yet made a connection between this case and forms of representation arising in the modelling of exchangeable sequences, it will be useful at this stage to note examples of the well-known forms of distribution which are covered by this definition. We shall indicate later how the use of such forms in the modelling process might be given a subjectivist justification.
Unijorm
p ( I 6) = u ~

( 10,~ e) = e-,

s E (0,B ) ,

e E %+.
= 1.

f ( . )

= 1.

g(e) = e - ) ,

h ( ~= o, )

d ( 8 ) = 8,

200
Shif ed exponentiul

4 Modelling

p(.r 18) = Shex(L 18) = c y [ - ( . r - O ) ] .


.f(.).) f =

.t'

-8E

>R+. 8 E W ,
(.

'. g ( H )

= f /I .

h(.r)= 0.

o(0) 0.
5 ;

= 1.

In order to identify sequences of sufficient statistics in these and similar cases. we make use of the factorisation criterion given in Proposition 4.10. For the uniforni. we rewrite the density in the form

so that. for any sequence .rI.. . . . .r,, which is conditionally independent given 8.

It then follows immediately from Proposition 4.10 that

is a sequence of sufficient statistics in this case.

For the shifted exponential, if we rewrite the density in the form

a similar argument shows that. for ( J , .. . . . x,,) E R'.

so that. for ri = 1.2.. . ..


I,, = f , , ( . r l . .. . . . I ' , , ) =

provides a sequence of sufficient statistics. The above discussion readily generalises to the case of exchangeable sequences generated by mixing over specified parametric forms involving a k-dimensional parameter 8.

4.5 Models via Suflcient Statistics

201

Definition4.1 1. (k-parameter expone& famdy). A probabilitydensity (or massfunction) p(x I O ) , x E X, which is labelled by 8 E 0 C Rk, is said to belong to the k-parameter exponentialfamily i it is of the form f

where h = ( h l , .. . ,hk), 4(8) = f , h, 4, and the constants c,,

($1

, .. . ,&) and, given the functions

Thefamily is called regular i X does not depend on 8; otherwise it is called f non-regular.

Proposition4.13. (SujJkientstatistics forthe k-jwmeterexponentialfamily). If 51, x2, . . . ,xt e X , is an exchangeable sequence such that, given regular k-parameter Efk(. 1 -),

for some dQ(t?),then

is a sequence of suflcient statistics.


Proof. This is analogous to Proposition 4.12 and is a straightforward consequence of Roposition 4.10. a

The following standard probability distributions are particular cases (the first regular, the second non-regular) of the Ic-parameter exponential family with the appropriate choices off, 9 etc. as indicated.

202
Normal (unknown mean and variunce) Y(x 10) = p(x I P . 7 ) = N(.T I p . 7 )

4 Modellitig

In this case. k = 2 and

so that t, = statistics.

@(e) ( 7 4 . c1 = 1. cp= -1/2. = [n,C:=lI,,X:llxp] , = 1.2,. . . is a sequence of sufficient


7i

Uniform (over the interval [el,021) = ) P(X e) = p ( : ~ ( e , , e ~ U ( Z

el&) = (e, - e , ) - ] , s E (el.Q2).o1 E 8. e, -el E R+.

In this case,

and

t , = (n.min{xl... . . x I f } ,max{r,. . . . . ~ , ~ } ] .= 1.2.. . . n


is easily seen to give a sequence of sufficient statistics.

The description of the exponential family forms given in Definitions 4.10 and 4. I I , is convenient for some purposes (relating straightforwardly to familiar ver-

sions of parametric families), but somewhat cumbersome for others. This motivates the following definition, which we give for the general k-parameter case. Definition 4.12. (Canonical exponentialfamily). The probability density (or massfunctiotr)

derivedfrom Efk(. I .) in Definition 4.11, sia the trunsjhnutions


y = (y

,.....yk).

I,= (c,.. . . C ' I ) . I .

i . y

Ll,=c,d,(e). i = i . . . . .A-. called the canonicalform of representcrtion ofthe e.~p.potietiticil~iniil~.


Yf=h,(T).

4.5 Models via Su$Ecienr Staristics

203

Systematic use of this canonical form to clarify the nature of the Bayesian learning process will be presented in Section 5.2.2. Here, we shall use it to examine briefly the nature and interpretation of the function b(+), and to identify the distribution of sums of independent Cef random quantities.

Proposition4.14. (Firsttwomoments of the canonicalexponeniialfamdy). For y in Definition 4.12,

E(9 I +) = Vb(+)?

V(Y 1111) = V 2 W .

+ is given by

Proof. It is easy to verify that the characteristic function of y conditional on

E(exp{iu'y}I +) = exp{b(iu +) - b ( + ) } ,
from which the result follows straightforwardly.
a

Proposition 4.15. (Sumiency in the canonical exponentialfamily). lfyl,. . . ,y,l are independent Cef(y I a, b: +) random quantities, then

is a suficient staristic and has a disrriburion Cef(s I a("),nb, +), where

d*)

is the n-fold convolurion of a. Proof. Sufficiency is immediate from Proposition 4.12. We see immediately that the characteristic function of s is exp{nb(iu + +) - nb(+)}, so that the distribution of s is as claimed, where a(") satisfies
7ib(+)

= log

a(")(s) exp{t/~'s}ds.

Examination of the density convolution form for n = 1, plus induction, establishes the form of a("). a

Our discussion thus far has considered the situation where exchangeablebelief distributions are constructed by assuming a mixing over finite-parameter exponential family forms. A consequence is that sufficient statistics of fixed dimension exist. Moreover, classical results of Darmois (1936). Koopman (1936), Pitman (1936), Hipp (1974) and Huzurbazar (1976) establish, under various regularity conditions, that the exponential family is the only family of distributions for which such sufficient statistics exist.

204

4 Modelling

In the second part of this subsection, we shall consider the question of whether 1 there are structural assumptions about an exchangeable sequence s ..rz. . . . . which imply that the mixing must be over exponential family forms. Previously. in Section 4.4, we considered particular invariance assumptions, which, together with exchangeability, identified the parametric forms that had to appear in the mixture representation. Here, we shall consider. instead, whether characterisations can be established via assumptions about ronditiond distrihutions. motivated by suflciency ideas. As a preliminary, suppose for a moment that an exchangeable sequence, { y,}. is modelled by

Now consider the form of p(yI.. . . .y, I y1+. . . y,, = s),k < n. Because of exchangeability, this has a representation a a mixture over s

But the latter does not involve because of the suffkiency of y1 + . . . (Propositions 4.1 1 and 4.15). so that

+ yI,

where, in the numerator, s,. = y1+ * . . yk 5 s. The exponential family mixture representation thus implies that,

Now suppose we consider the converse. If we assume y l .y?.. . . to be exchangeable and also assume that, for all 71and k < 11. the conditional distributions have the above form (for some n defining a Cef(y [ ( I , 6. +) form), does this im. ) ply that p ( ~ .,. . . g r 8 has the corresponding exponential family mixture form? A rigorous mathematical discussion of this question is beyond the scope of this volume (see Diaconis and Freedman, 1990). However, with considerable licence in ignoring regularity conditions. the result and the flavour of a proof are given by the following.

4.5 Models via Sujficient Statistics

205

Proposition 4.16. (Represenfationtheorem under surniency). Ifyl.y2,. . . i any exchangeuble sequence such that,for all n 2 2 and k < n, s
k

p(yI,...,& Jy1 +"'+Y,, = 8 ) = ~a(y,)a'~~-"(s-ss)/n("'(s),


?=I

where sk = y1 + . . . + gr and a ( . ) defines Cef(y I u , b. +). then


p(Y,, . . .

.ul,)=

1fi
1=1

Cef(y, I a. b. + ) ~ Q W L

for some dQ(+).

Outline pro($ We first note that exchangeability implies a mixture representation, mixing over distributions which make the y, independent. But each of the latter distributions, with densities denoted generically by f, themselves imply an exchangeablesequence, sothat, forn 1 2, k < 71, f(y,. . . . ,g k I y1+. . .+gn = s) also has the specified form in terms of u ( . ) . Now consider ri = 2. k = 1. Independence implies that

where denotes the marginal density and f C 2 ) ( . ) f( must satisfy


f ( 9 )

its twofoldconvolution, so that

a)

If we now define
U(Yl)

= log -- log 491) 40)

f(Y1)

f (0)

and

it follows that
4Yl)

+ 4 s - Y1) = 4 s ) .

and Setting 9 , = s,and noting that u(0) = 0, we obtain .u(s) = ~(s). hence
U(YA

+ 4 9 2 ) = 4 Y l + y.2)a

This implies that u(9) = +'y, for some @, so that

f b ) = 4 9 )e x P W Y - b(+)l*

The following example provides a concrete illustration of the general result.

206

4 Modelling

Example 4.8. (Cluuacterisdon o the Poisson model). Suppose that the sequence f of non-negative integer valued random quantities ,yl .y?. . . . is judged exchangeable. with the conditional distribution of y = (9,. . . . . yk) given - . . - g,, = c. { I 1 2 . A. < 1 1 . specified to be the multinomial Mul (g1 S . 8). where 8 = ( 1 : / I . . . . . I / I / ). so that

where ?it = y,
Cef(g / ( I .6.
c q )

. . . + I J ~ Noting that the Poisson distribution. Pn(!/ form as

t . ) . can he

witten in

Pn(!/ I 1 % )= - txp { !p.* !/!

= ( I ( ! / ) c'xp{,yi-- bO ,) }

from which it easily follows that d'"(s,) = w.,/s!. it is straightforward to check that, in terms
of a ( . )and d'"(-),

By Proposition 4.16. it follows that the belief specitication for y1.y,. . . . is coherent and
implies that

for some d c ) ( ~ . * L", E 8'. a )

As we remarked earlier. the above heuristic analysis and discussion for the 1.parameter regular exponential family has been given without any attempt at rigour. For the full story the reader is referred to Diaconis and Freedman (1990). Other relevant references for the mathematics of exponential families include RamdorffNielsen ( 1978). Morris ( 1982)and Brown ( 1985). We conclude this subsection by considering. briefly and informally, what can be said about characterisations of exchangeable sequences as mixtures of nonregular exponential families. For concreteness, we shall focus on the uniform. U(s10.0). distribution, which hasdensity & I l ! , l . v , ( x3'. E R. and sufficient statis) tic iiiitx{.rl.. . . .x , , } ,given a sample x i , . . . . .r,i. This sufficient statistic is clearly not a summation, as is the case for regular families (and plays a key role in Proposition 4.16). However. conditional on nil, = niax{.rl.. . . .. r , , } .X I . . . . . . r k . A. I). are approximately independent I:(.,, 10. m,,) this will therefore be true for and all exchangeable . T I , sg.. . . constructed by mixing over independent L'(.r, 10.0). Conversely, we might wonder whether positive exchangeable sequences having

4.5 Models via Sujficient Statistics

207

this conditional property are necessarily mixtures of independent U ( x , lo,@). In3 tuitively, if rn, tends to a finite f from below, as R + 00, one might expect the result to be true. This is indeed the case, but a general account of the required mathematical results is beyond our intended scope in this volume. The interested reader is referred to Diaconis and Freedman (1984), and the further references discussed in Section 4.8.I .

4.5.4

Information Measures and the Exponential Family

Our approach to the exponential family has been through the concept of predictive or, equivalently. parametric sufficient statistics. It is interesting to note, however. that exponential family distributions can also be motivated through the concept of the utility of a distribution (c.f. Section 3.4). using the derived notions of approximation and discrepancy. Consider the following problem. We seek to obtain a mathematical representation of a probability density p ( z ) ,which satisfies the k (independent) constraints

h,(r)p(x)& = mi, < 30, i = 1,.. . .k.


where ml, . . . ,nzk are specified constants, together with the normalizing constraint p(r)ds = 1, and, in addition, is to be approximated as closely as possible by a specified density f(z). We recall from Definition 3.20 (with a convenient change of notation) that the discrepancy from a probability density y ( ~ ) assumed to be true of an approximation f(x) is given by

ly

where f and p are both assumed to be strictly positive densities over the same range,

X, possible values. Note that we are interested in deriving a mathematical repreof


sentation of the true probability density y(x), not of the (specified) approximation f(x). Thus, we minimise S(f I p) over p subject to the required constraints on p . rather than 6( f I p) over f subject to constraints on f. Hence, we seek p to minimise

where 81,. . . ,O, and c are arbitrary constant multipliers.

208

4 Modelling
Proposition 4.17. (The exponential family as an approximatwn). Ihefuncrional F ( p ) defined above is nrinimised by

p ( . r ) = Efk(.r I f.9. 4.0.c ) ..r E .Y h.


where f and h are given in b(p).r , = 1. qb = 8 = ( 0 , .. . . . H i . ) mid

Proof. By a standard variational argument (see. for example. Jeffreys and Jeffreys, 1946, Chapter lo), a necessary condition for y to give a stationary value of F ( p ) is that

for any function T : s the equation

- % of sufficiently small norm. This condition reduces to

(a/ih)F(p(.r)

(tT(J))

,,-.-()

=0

from which it follows that

as required. (For an alternative proof. see Kullback. 195911968. Chapter 3 . ) The resulting exponential family form for p ( , r ) was derived on the basis of a given approximation f ( ~ ) a collection of constant functions h(.r)= [h,(.r). and . . . . h k ( r ) ] . If we wish to emphasise this derivation of the family, we shall refer to Ef(s 1 f. 9. h. Q. 6 . c ) as the e.rponenrialfantily Reneruted by f uttd h. In general, specification of the sufficient statistic
1
I

I -

does not uniquely identify the form off ( . r ) within the exponential family framework. Consider, for example. the Ga(x 10, H) family with n known. Each distinct
(t

defines a distinct exponential family with density


(.r- l / I - ( < t ) ) P #P( - f l J . } .

4.6 Models via Partial Exchangeability

209

so that, in addition to h ( z )= 2 , we need to specify f(z) = x0-/r(a)order to in identify the family. Returning to the general problem of choosing p to be as close as possible to an approximation f,subject to the k constraints defined by h ( z ) it is interesting , to ask what happens if the approximation f is very vague, in the sense that f is extremely diffusely spread over X. A limiting form of this would be to consider f(x) = constant, which leads us to seek the p minimising p ( z )logp(z)dx subject to the given constraints. The solution is then

sx

P(X) =

{ ct,8 M 4 } Jy exp {EL, w ) 4


exp

which, since minimising p ( z ) logp(z) dx is equivalent to maximising H ( p ) = - J.y p(x) logp(z) dx,is the so-called maximum enrropy choice of p . Thus, for example, if X = W and h(z) = x,the maximum entropy choice for p ( z ) is EX(LI4), the exponential distribution with 4-l = E(x 14). If X = 93 and h ( z )= (x, x2),the maximum entropy choice for p ( z )turns out to be N(z I p, A), the normal distribution with p = E ( z I p , A), A- = V(x I p , A) (c.f. Example 3.4, following Definition 3.20). Our discussion of modelling has so far concentrated on the case of beliefs about a single sequence of observations zl ,z2, . . . ,judged to have various kinds of invariance or sufficiency properties. In the next section, we shall extend our discussion in order to relate these ideas to the more complex situations, which arise when several such sequencesof observationsare involved, or when there are several possible ways of making exchangeable or related judgements about sequences.

sx

4.6
4.6.1

MODELS VIA PARTIAL EXCHANGEABILITY


Models for Extended Data Structures

In Section 4.5, we discussed various kinds of justification for modelling a sequence . of random quantities z1,22?. . as a random sample from a parametric family with density p(s 18). together with a prior distribution dQ(8) for 8. We also briefly mentioned further possible kinds of judgements, involving assumptions about conditional moments or information considerations, which further help to pinpoint the appropriate specification of a parametric family. However, in order to concentrate on the basic conceptual issues, we have thus far restricted attention to the case of a single sequence of random quantities, z1,52> .. labelled by a single index, i = 1,2* . . ., and unrelated to other random .. quantities. Clearly, in many (if not most) areas of application of statisticalmodelling the situation will be more complicated than this, and we shall need to extend and

210

4 Modelling

adapt the basic form of representation to deal with the perceived complexities of the situation. Among the typical (but by no means exhaustive) kinds of situation we shall wish to consider are the following. , (i) Sequences x I l s12.. . of random quantities are to be observed in each of i E I contexts. For example: we may have sequences of clinical responses to each of I different drugs; or responses to the same drug used on I different subgroups of a population. A modelling framework is required which enables us to learn. in some sense. about differences between some aspect of the responses in the different sequences. (ii) In each of i E I contexts. j E J different treatments are each replicated k E A times, and the random quantities .rl.,k denote observable responses for ' each context/treatment/replicate combination. For example: we may have I different irrigation systems for fruit trees, .I different tree pruning regimes and h trees exposed to each imgation/pruning combination, with .r,,k denoting ' the total yield of fruit in a given year; or we may have I different geographical areas, .I different age-groups and I< individuals in each of the I J combinations. with s,,k denoting the presence or absence of a specific type of disease. or a coding of voting intention. or whatever. A modelling framework is required which enables us to investigate differences between either contexts. or treatments, or context/treatment combinations. (iii) Sequences of random quantities . r , ~ . . . . . i E I. are to be observed. where .I.,.?. some form of qualitative assumption has been made about a form of relationship between the .rll and other specified (controlled or observed) quantities z l = ( z l l . . . . :,A), k 2 1. For example: . I ' , ~might denote the status (dead or alive) of the j t h rat exposed to a toxic substance administered at dose level z l . with an assumed form of relationship between z , and the corresponding "death might denote the height or weight at time -., from the jth replicate rate": or .rIJ measurement of a plant or animal following some assumed form of "growth curve"; or x t , might denote the output yield on the .jth run of a chemical process when k inputs are set at the levels z , = (:,,. . . . . : A , ) and the general form of relationship between process output and inputs is either assumed known or well-approximated by a specified mathematical form. In each case a inodelling framework is required which enables us to learn about the quantitative form of the relationship. and to quantify beliefs (predictions) about the observable f corresponding to a specified input or control quantity z.. (iv) Exchangeable sequences, L , ~.r,'. . . . . of random quantities are to be observed . in each o f i E I contexts. where I is itself a selection from a potentially larger index set I. Suppose that for each sequence. '

4.6 Models via Partial Exchangeability

211

is judged to be a sufficient statistic, that the strong law limits

exist and that the sequence 0, ,O?! .. . is itself judged exchangeable. For example: sequence i may consist of 0 - 1 (success-failure) outcomes on repeated trials with the ith of I similar electronic components; or sequence i may consist of quality measurements of known precision on replicate samples of the ith of I chemically similar dyestuffs. In the first case, the sequence of long-run frequencies of failures for each of the components might, a priori, be judged to be exchangeable; in the second case, the sequence of large-sample averages of quality for each of the dyestuffs might, a priori. be judged to be exchangeable. A modelling framework is required which enables us to exploit such further judgements of exchangeability in order to be able to use information from all the sequences to strengthen, in some sense, the learning process within an individual sequence.

4.6.2

Several Samples

We shall begin our discussion of possible forms of partial exchangeabilityjudgeby ments for several sequences of observables, x t l ,x , 2 . , . . , i = 1,. . . ,m., considering the simple case of 0 - I random quantities. In many situations, including that of a comparative clinical trial, joint beliefs about several sequences of 0 - 1 observables would typically have the property encapsulated in the following definition, where, here and throughout this section, z,(n,)denotes the vector of random quantities ( x , ~ .. . , x , ~ , ) . .

Definition 4.13. (Unrestricted exchangeability for 0 - 1 sequences). Sequences of 0 - 1 random quantities, x , l , x , 2 , . . . , i = 1,. . . ,m , are suid to be unrestricredly exchangeable i each sequence is injnitely exchangeable f and. in addition, for all 78, 5 IV,. i = 1: . . . ,rn,

where y , ( N , ) = .c,1

+ . . . + x , ~ ~ .= 1 , . . . ,ni. i

In addition to the exchangeability of the individual sequences, this definition encapsulates the judgement that, given the total number of successes in the first N , observations from the ith sequence, i = 1.. . .7n, only the total for the irh sequence is relevant when it comes to beliefs about the outcomes of any subset of n, of the N, observations from that sequence. Thus, for example, given 15

212

4 Modelling

deaths in the first 100 patients receiving Drug 1 (N1 100, y I ( N I ) = 15) and = 20 deaths in the first 80 patients receiving Drug 2 ( * V 2 = 80, y?(,V?)= 201, we would typically judge the latter information to be irrelevant to any assessment of the probability that the first three patients receiving Drug 1 survived and the fourth one . died (XI, = 0 .r12 = 0. ~ 1 : s= 0. .r14 = 1). Of course, the information might well be judged relevant if we were not informed of the value of y I ( N l ) . The definition thus encapsulates a kind of "conditional irrelevance" judgement.
As an example of a situation where this condition does i i o f apply. suppose that . . . is an infinitely exchangeable 0-1 sequence and that another sequence . r ~ 1 . . ~.~ . is defined by sZJ= .rl, (or by .I.?, = 1 - . r l j ) . Then -l'.ll. .. .r?'. certainly an exchangeable sequence (since s I I . . . . is), but. taking .r?,, = xi., .rll. and n I -1 712 = >'V1= N? = 2,
S I I . XI?,
p(.1:,1

= 0.2-12 =

1.T.21

1.X??

= 0 I g1.l =

1.!fTi

= 1) = 0.

whereas
p ( a l l = 0,.r12= 1 jy12 = 1) p(.rz1= l..rT2 = 0 1 yt. = 1 ) = 112 x 1/2 = 1,l.i.

Further insight is obtained by noting (from Definition 4.13)that unrestricted exchangeability implies that
P(Xl1.. . . . S I , I I *.
'

. . .CfI,l. . . . . ~ , , I , , , , , )

- - P ( : r I q ( l ) . ' .. . J - l x I i. r r I ! . . *. . ~ , , , . , , , [ 1 i . . . . . . ~ ' r l ~ ~ , , , ~ f l 1 , , ! ~

foranyunrestrictedchoiceofpermutationssr, { 1... . ,u , } ,i = 1.. . . in.whereas. of in the case of the above counter-example. we only have invariance of the joint distribution when T I = TIT?. a development starting from this latter condition For see de Finetti ( 1938).
We can now establish the following generalisation of Proposition 4.1.

Proposition 4.18. (Represenfation theorem for several sequences of 0-1 random qrcantilies). If'.c,1.. r r Z. . . . i = 1. . rri ure unrestrictedly infinitely e.rchangeuble sequences ($0-I random quantities Hith joint pmhabilic nieusure P, there exists u distrihutionfunc.tion Q such that

,i where,withy,(ri,)= r l l + . - . + r , , , , = 1 ..... 111.

4.6 Models via Partial Exchangeability

213

Corollary. Under the conditions of Proposition 4.18,


P(?/l(nl), . *
*

Y.() 7,%)

Proof. We first note that

so that, to prove the proposition, it suffices to establish the corollary. Moreover, as for any N , 2 11,. i = I.. . . ,rn. we may express p(yl(n1). . . . . yrla(n711))
CP(Yl(?l),.. . . Y l ? . ( n m ) l?/l(N),. . ? h ( N n ) )P(Yl(NI).- 4 h ( N ) ) . * where the ith of the r n summations ranges from y,( i V l ) = y, (ti,) to y l ( N , ) = 1 2 : . and where, by Definition 4.4 and a straightforwardgeneralisation of the argument given in Proposition 4.1,
P(YI(l1).

... ,Ym(%n)

IY l ( N ) , ...

~ y l n ( ~ ~ l l L ) )

Writing (YN), = y,v(ys - 1).. . (ys - ( y l l - 1)). etc., and defining the function Qx,. ...xrn (01, . . . 6,,,) on 3Zrn to be the rndimensional step function with .bmpsofp(yl(Nl). . . . ~ ~ l , l ( N at J ) ll

where y,(N,) = 0.. . . ,Nl,i 1,. . . ,rrr. We see that p(yl(n1). . . . , y , l , ( ~ ~ nis ) ) = l equal to

uniformly in el, . . . ,O,,,, and, by the multidimensional version of Hellys theorem (see Section 3.2.3). there exists a subsequence Q,v,(,)..,,.~,,,(j),= 1,2, . . . having j a limit Q. which is a distribution function on 8.The result follows. a

214

4 Modellitrg

Considering, for simplicity, 711 = 2. Proposition 4.18 (or its corollary) asserts that if we judge two sequences of 0 - I random quantities to be unrestrictedly exchangeable, we can proceed US if: (i) the x,, are judged to be independent Bernoulli random quantities (or the y , ( ~ , ) to be independent binomial random quantities) conditional on random quantities 0,. i = 1.2;

(ii) ( H I . 0 2 ) are assigned a joint probability distribution C); (iii) by the strong law of large numbers. 0, = liiii,#;-x ( y , ( n , ) / t t , )so that C) may , be interpreted as joint beliefs about the limiting relative frequencies of 1s in the two sequences. The model is completed by the specification of dQ(B1. H ? ) , whose detailed form will. of course. depend on the particular beliefs appropriate to the actual practical application of the model. At a qualitative level. we note the following possibilities: (a) knowledge of the limiting relative frequency for one of the sequences would not change beliefs about outcomes in the other sequence, so that we have the independent form of prior specification, d C ) ( H 1 . H ? ) = dQ(fl1)(fQ(H2): (b) the limiting relative frequency for the second sequence will necessarily be greater than that for the first sequence (due. for example, to a known improvement in a drug or an electronic component under test), s o that r I Q ( H l . & ) is zero outside the range 0 5 H I < H:! 5 I ; (c) there is a real possibility, to which an individual assigns probability T . say. that, in fact. the limiting frequencies could turn out to be equal. so that, writing 0 = 8, = 02. in this case r/Q(Hll has the form 612)
TdQ(0) 4- (1 - Ti)(IC)+(/?,.H:!) and the representation, for ( y ~ , ,yl,,?). say. has the form ~.

where dC) (0,.H?) assigns probability over the range of values of ( 0 , . 0 2 ) such that 0, # H2. As we shall see later, in Chapter 5. the general form of representation of beliefs for observables defined in terms of the two sequences. together with detailed specifications of dQ(0,.02). enables us to explore coherently any desired aspect of the learning process. For example. we may have observed that out of the lirst { ( I . I ) ?

4.6 Models via Partial Exchangeability

215

patients receiving drug treatments I , 2, respectively. y1( n l ) yz(n2) survived, and and, on the basis of this information, wish to make judgements about the relative performanceof the drugs were they to be used on a large future sequenceof patients. This might be done by calculating, for example,

P( !v-2 ( Y I ( N ) I N )- .v-= (Y.L(WlN) Y1(721),9/2(7L2))1 lim lim I


which, in the language of the conventional paradigm, is the posterior density for 4 - 62, given y~(m),ydn2). Clearly, the discussion and resulting forms of representation which we have given for the case of unrestrictedly exchangeable sequences of 0-1 random quantities can be extended to more general cases. One possible generalisation of Definition 4.13 is the following.

Definition 4.14. (Unrestricted exchangeabilityfor sequences withpredictive su@cient statistics). Sequences of random quantities x,1,z,2,. . . taking values in X , , i = 1, . . . .m, are said to be unrestrictedly infinitely exchangeable if each sequence is infinitely exchangeable and, in addition, for all n, 5 N, , i = l.....na,

1=I

where t,y, = t,yl ( z t ( N z ) )i, = 1,. . . , m ,are separately predictive suficient statistics for the individual sequences.

In general, given m unrestrictedly exchangeable sequences of random quantities, x l l , xt?, . . ,with I,, . taking values in X,, we typically amve at a representation of the form

and where 0 = fly:l 8, the parametric families

have been identified through consideration of sufficient statistics of fixed dimension, or whatever, as discussed in previous sections. Most often, the fact that the k sequences are being considered together will mean that the random quantities r,1.x,?. . . relate to the same form of measurement or counting procedure for all . i = 1.. . . , m , that typically we will have p l ( z 10,) = p(x I O,), i = 1 . . . . , m , so where the parameters correspond to strong law limits of functions of the sufficient statistics. The following forms are frequently assumed in applications.

216

4 Modelling

Example4.9. (Binomid). If ~ , ( n denotes the number of 1's in the first 1 1 , outcomes ,) of the ith of rrr unrestrictedly exchangeable sequences of 0 - 1 random quantities. then

Example4.10. (Multi'nomial). Ifg,(/i,) denotes the category membershipcount(into the firstl:ofk+l exc1usivecategories)fromthe first I ) , outcomesofthe ithofvr unrestrictedly exchangeable sequences of '0- 1 random vectors" (see Section 4.3). then
P(T/l("l).....
?J,,,(r1,,,))

l,,, fi

M L (y,(n, U

1 10.. I t , ) d C . 3 4 .

. . , . HI,, 1.

whereO,=liiri,,.., ( y , ( r r ) / r r ) a n d ~ = { O = ( H ,. . . Hh.)suchthatO<H, 5 1.1 < / < A . .. and 0, + . . .+ 5 I}. This model describes beliefs about an 111 x ( k 1 ) cotiringetyv rcrhle of count data, with row totals r r I . . . . . r r , , , . It generalises the c a ~ of the 111 x 2 contingency e table described in Example 4.9. ci

Example4.11. (Normal). If.r,,.J = 1.. . . . I / , . I = I . . . . . r~t.denotereal-valuedobservations from 11) unrestrictedly exchangeable sequence!, of real-valued random quantities. the assumed sufficiency of the sample sum and sum of squares within each sequence might lead to the representation

where.withS,,(i)- - ) I
h i ,,-,

In that the representation then takes the form

' ( . ~ , , + . .t. . . , , , ) a n d . ~ ~ ( ;r) ~ - ' ~ ~ - ~ (-.i.,,(r))2.wvehave//, i = . r , , = .,s i ( i ) . H = ( / I , . . . . . / I ,,.., . . .,\,,,) and (3) = H'" x (it");'. . many applications. the further judgement is made that X I = . . . .- A,,, = X. say. so

.F,,(i).A,

liiit,,

. [ I , , , . X J.
This is the model most often used to describe beliefs about a o t i e - w ~ ~ of measurement lqvoirr data. 4

As in the case of 0 - I random quantities with 'rri = 2, discussed earlier in this section, we could make analogous remarks concerning the various qualitative forms of specification of the prior distribution Q that might be made in these cases. We shall not pursue this further here. hut will comment further in Section 4.7.5.

4.6 Models via Partial Exchangeability

217

4.6.3

Structured Layouts

Let us now consider the situation described in (ii) of Section4.6.1, where the random quantity xyk is triple-subscripted to indicate that it is the kth of K replicates of an observable in context i E I, subject to treatment j E J. In general terms, we have a WO-way layout. having I rows and J columns, with K replicates in each of the I J cells. In such contexts, most individuals would find it unacceptable to make a judge. ment of complete exchangeability for the random quantities x T J k For example, if rows represent age-groups, columns correspond to different drug treatments, replicates refer to sequences of patients within each age-groupheatment combination and the X t J k measure death-rates, say, it is typically not the case that beliefs about the x f J t would be invariant under permutations of the subscript i. On the other hand, for the kinds of mechanisms routinely used to allocate patients to treatment groups in clinical trials, many individuals would have exchangeable beliefs about , the sequence x , , ~ xV2, . . . for any fixed i , j. Technically, such a situation corresponds to the invariance of joint beliefs for the collection of random quantities, x , , ~ under some restricted set of permutations , of the subscripts, rather than under the unrestricted set of all possible permutations (which would correspond to complete exchangeability). The precise nature of the appropriate set of invariances encapsulating beliefs in a particular application will, of course, depend on the actual perceived partial exchangeabilities in that application. In what follows, we shall simply motivate, using very minimal exchangeability assumptions, a model which is widely used in the context of the two-way layout. There is no suggestion that the particular form discussed has any special status, or oughr to be routinely adopted, or whatever. , x . as Suppose that, for any fixed I j . we think of r r J 1 ,, ~ .? .~ a (potentially) infinite sequence of real-valued random quantities (2 E such that the IJ sequences of this kind, with I and J fixed, are judged to be unrestrictedly exchangeable. If further assumptionsof centred spherical symmetry or sufficiencyfor each sequence then lead to the normal form of representation, we have

x),

218

4 Modelling

the x , , ~ assumed independently and normally distributed with means prJ and are variances (A/,)-'. In many cases, the nature of the observational process leads to the judgement that limb-:, ,$,(K) be assumed to be the same for all ( i . j).so that A,, = A. may say, for all z , J . Letting

denote the strong law limits of the row averages. column averages and overall average, respectively, from the two-way layout with Z and J fixed. we can always write p./J = /1 C k , d,

+ + +

?IlJ.

where

so that the random quantities x , , k are conditionally independently distributed with

The full model representation is then completed by the specification of a prior distribution Q for X and any Z J linearly independent combinations of the p;,. In conventional terminology. ci is referred to as the o\*erull inem, ( I ! as the ith row eflect. ;j,as the ,jth wlutnn efleect and as the (ij)th intercrcrioir rfleect. Collectively, the { (.tt } and { ,$, are referred to as the main cftecr.5 and { ?, } as the } interactions. Interest in applications often centres on whether or not interactions or main effects are close to zero and, if not. on making inferences about the magnitudes of differences between different row or column effects. In the above discussion, our exchangeability assumptions were restricted to the sequence .r,,l, . r t J 2 , .. . for fixed i . j . It is possible, of course. that further forms of symmetric beliefs might be judged reasonable for certain permutations of the i..j subscripts. We shall return to this possibility in Section 46.5.where we shall see that certain further assumptions of invariance lead naturally to the idea of hierarchical rcpresentations of beliefs.

",

4.6 Models via Partial Exchangeability

219

4.6.4

Covariates

In (iii) of Section 4.6, we gave examples of situations where beliefs about sequences of observables xtllx22, . . ,i = 1, . . . ,m are functionally dependent, in some . sense, on the observed values, z,, = 1, . . . m, of a related sequence of (random) i quantities. We shall refer to the latter as cuvariares and, in recognition of this dependency, we shall denote the joint density of ~ , ~=. 1,. . . ,n,,i = 1,. . . ,m, j by p(zl(n1) . . . . . 2 , , ( n * ,)lz1,...1z,,).

The examples which follow illustrate some of the typical forms assumed in applications. Again, there is no suggestion that these particular forms have any special status; they simply illustrate some of the kinds of models which are commonly Used. Example 4.12. (Bimsay). Suppose that at each of 'rri specified dose levels, tl, . . ., z,,,. of a toxic substance, typically measured on a logarithmic scale, sequences of 0 - 1 random quantities, z i l , x , ? !... i = l ? . . . m, are to be observed, where zi,= 1 if the . j t h animal receiving dose t, survives, z,, = 0 otherwise. If, for each i = 1. . . . ,t n , the sequences zil,2,2:. . . are judged exchangeable, and if we denote the number of survivors ~ out of ni animals observed in the ith sequence by y,(n,)= x , +. . . zi,,, ,a straightforward generalisation of the corollary to Proposition 4.18 implies a representation of the form

where z = (q,.. , z n l ) ,8(2)= (6JI(z), . . . O,,,(z)) 8, ( z ) = linin-'y,(n). . . and n -x In many situations, investigators often find it reasonable to assume that

where the functional form G (usually monotone increasing from 0 to 1) is specified, but d, is a random quantity. Functions having the form G(4: 2,) = G(41 p 2 z l ) with 4' E 92,42 E . R', are widely used (see, for example, Hewlett and Plackett, 1979). with

and

G(41 + &z,) = exp(d,~ &z,)/{l +

+ exp(4, + cj2z,)}

(the logit model)

being the most common. For any specified C(.; z l ) . the required representation has the form

220

4 Modellitig

with dQ(0) specifying a prior distribution for 0 E (P. In practice, the specification of (1 might be facilitated by reparametrising from Q to a more suitable (1-1) transformation @ = @(@). In the probit and logit cases, for example, vl = - C > ~ / O ~ corresponds to the (log) dose, z ; , at which G(o, + &:,) = 1/2. Beliefs about LnI then correspond to beliefs about the (log-)dose level for which the survival frequency in a large series of animals would equal 112, the so-called LD50 dose. Experimenters might typically be more accustomed to thinking in terms of ( - Q I /c72.02). say. than in terms of (0,.) . a 02

Example 4.13. (Growth-curues). Suppose that at each of irt specified time points, say i , ~.. . , z,!,,sequences of real-valued random quantities. .r,,. .r,?. . . . . I = 1.. . . . w . are to be observed, where .r,, is the jth replicate measurement (perhaps on a logarithmic scale) of the size or weight of the subject or object of interest at time 2 , . Suppose further that the kinds of judgements outlined in Example 4.1 1 are made about the sequences .I,,:. . . . with t = 1.. . . , in. so that we have the representation

wherefiJ(z)= ( p l ( z..... p , , , ( t ) . X 1 ( z ..... A,,,(z))and0: = W x (W-). ) ) In many such situations, the judgement is made that X l ( r ) = . . = ,\,,,(%I (particularly if measurements are made on a logarithmic scale) and that
/I,(Z) =

p;(:,) = g ( @ ::,I.

where the functional form y (usually monotone increasing) is specified, but @ is a random quantity. Commonly assumed forms include g ( ~2,): = (0, 0 ~ c . 5 , ~ + I. (the /ogisric model) and (the straight-line model). For any specified g ( . : :,), the joint predictive density representation has the form
022,

y(@: 2 , ) = cil

where dQ(& A ) specifying a prior distribution for @ E @ and X E %*. As with Example 4.12, specification of C) might be facilitated if we reparametrise from q5 to a more suitable ( I - 1) transformation, = I)(@). In the logistic case, for example. we might take v ! , = dl I . corresponding to the saturation growth level reached as s, -- x. and ~2 = (ol 02)-, + correspondingto the growth level at the time origin. 2, 2 0. Beliefs . about c ~ Q2then acquire an operational meaning as beliefs about the average growth-levels. and at times ,x 0.respectively, that would be observed from a large number of replicate measurements. A third possible parameter to which investigators could easily relate i n some applications might be c.+ = 1og[0~&/(20~-t & ) j / the time at which growth is half-way from the initial to the final level. (,1

4.6 Models via Partial Exchangeability

221

Example 4.14. (Muuiple regression). Suppose that, for each 1 = 1.. . . ,m, ses quences of real-valued random quantities x,]. , ~... . are to be observed, where each x , , is related to certain specified observed quantities z, = (i,].. . .z , ~and judgements are . ) made which lead to the belief representation

where

p , ( z , )= nlim z,,(z). A, *a

'(2,)

n-?i

liiii

si(i).

qz ) = (P,(ZI).. . ..LL,,,(Z,,,).

h ( Z 1 )* . . . ,X,rr(zno)

a n d 8 = R"'x (Rt)"'.withz =(zl ...., z,,,). In many situations, the furtherjudgements are made that XI (2,) = A, = X and p , (2,) = p ( z l ) L. = 1.. . . , m. where X and p ( . ) are unknown, but the latter is assumed to be a "smooth" function, adequately approximated by a first-order Taylor expansion, so that, for some (unspecified) z * ,

where we define

a,= ( l . ~ , ].., t , ~ ) vector) .. (row


and

8 = (&, 01,. . . .Ok)' (column vector)


with Conditional on

e,, = P ( z * ) - z*D pu(z-).e, = l ~ I ~ ( z *i )=] 1,., . . ,I C . ~

+ = (8,A), the joint distribution of

is thus seen to be multivariate normal, N,,(z A@. where A is an rz x 12 matrix (n = I A). rrl t . . . + n,,,), whose rows consist of a lreplicated n l times, followed by a replicated n2 2 times, and soon, and X = XI,,, with I,, denoting therr x 71 identity matrix. The unconditional representation can therefore be written as

It is conventional to refer to .zz,. . . .as values of the regressor rwiables z ' j ) , J = 1. . . . .k , to 8 as the vector of regression coeflcienrs and A as the design marrix. The form p ( z ) = A 8 is called a regression equation and the structure
E ( z I A . 8 . X )= A 8

-,,

222

4 Modelling

is said to define a linear model. if I; = 1, we have the simple regression (struighr-line) model, E(L,,) = 4, iO , Z , ~ for I; 2 2, we have a multiple regression model. - ; From an operational point of view, beliefs about 8 in the general case relate to beliefs about the intercept (&) of the regression equation and the marginal rates of change ( O I . . . . . f / r ) of the J,,with respect to the regressor variables (2,.. . . . z1 ). However, within this general structure we can represent various special cases such as 2 = z @oIynmnriid V regression)or Z ~ J )= sin(jH/K), for some . (a version of trigononterric regre.wiorr):in these cases. beliefs about 8 will stem from rather different considerations. .3 Specification of the kinds of structures which we have illustrated in Examples
4. I2 to 4.14 essentially reduces to the same process as we have seen in earlier

representations of joint predictive densities as integral mixtures. We proceed c1s if:


(i) the random quantities are conditionully independent. given the values of the relevant covuriates, L,and given the unknown parameters. 4; (ii) the latter are assigned a prior distribution, dd)(@).

In many cases, the likelihood, defined through conditional independence. involves familiar probability models. often of exponential family form (as with the binomial. normal and multivariate examples seen above), but with at least some of the usual labelling parameters replaced by more complex functional forms involving the covariates. From a conceptual point of view. this is all that really needs to be said for the time being. Howevcr. when we consider the applications of such models. together with the problems of computation. approximation. etc., which arise in implementing the Bayesian learning process,, it is often useful to have a more structured taxonomy in mind: for example. linear versus non-linear functional forms; normal versus non-normal distributions. and so on.
4.6.5

Hierarchical Models

In Section 4.6.2. we considered the general situation where several sequences of random quantities. x , ~ . . z - ., . ... I = 1.. . . . I n are judged unrestrictedly infinitely ~ exchangeable, leading typically to a joint density representation of the form

We remarked at that time that nothing can be said. in generul, about the prior specification Q ( & .. . . . O,,,), since this must reflect whatever beliefs are appropriate for the specific application being modelled. However, it is often the case that additional judgements about relationships among the nt \equenccs lead to interestingly structured forms of Q ( H I . . . . . Qt,,).

4.6 Models via Partial Exchangeabilio

223

In Section 4.6.1, we noted some of the possible contexts in which judgements of exchangeability might be appropriate not only for the random quantities within each of m separate sequence of observables, but also between the m strong law limits of appropriately defined statistics for each of the sequences. The following examples illustrate this kind of structuredjudgement and the forms of hierurchicuf model which result.
Example 4.15. (Exchangeablebinomial parameters). Suppose that we have unre1 strictedly infinitely exchangeable sequences of 0-1 random quantities, x l l , ~ 1 , . . .. with i = 1, . . . ,m. Then. for i = 1,2. . . . . [ 7 t , . vt(7/,) = s,l + . . . + s,,,,], is a sufficient statistic for the 2th sequence and
p(yi(n1). . . . .T/ffo(n,,i))
=

Y(YI (ni). . . .
11

181,.. . 30,,t)dQ(Q~. .ow) .. .

where

8, = liiri ( g , ( n ) / n ) .
11-x

As we remarked in Section 4.6. I , if the sequences consists of success-failure outcomes

on repeated trials with i i i different (but, to all intents and purposes, similar) types of component, it might be reasonable to judge the 711 long-run success frequencies to be themselves exchangeable. This corresponds to specifying an exchangeableform o prior f distributionfor rhe parameters el. . . . ,O,,,. If the rri types of component can be thought of as a selection from a potentially infinite sequence of similar components. we then have (see Section 4.3.3) the general representation
Q(Qi..

. . .O,,, I G) m(G)

The complete model structure is then seen to have the hierarchicalform

,,,
p(yi(lli)

..... ! / f , l ( t , , , , ) l e ~ , . . . . H , ,= n B i ( p , ( l ~ , ) I O , . ~ t , ) ,)
,-I

Q(6.. . .

I G) =
n(G)

n
f

G(@,)

-1

In conventional terminology, the first stage of the hierarchy relates data to parameters via binomial distributions; the second stage models the binomial parameters a5 i they were a f random sample from a distribution G; the third, and final, stage specifies beliefs about G.

224

4 Modelling

The above example is readily generalised to the case of exchangeable parameters for any one-parameter exponential family. In practice. beliefs about G might concentrate on a particular parametric family. so that, assuming the existence of the appropriate densities, the prior specification takes the form

/T

les the hierarchi-

As before, the first stage of the hierarchy relates data to parameters in a form as-

sumed to be independent of G; the second stage now models the parameters us if they were a random sample from a parametric family labelled by the hyperpurutnem- E CP; the third, and final, stage specifies beliefs about the hyperparameter. Such beliefs acquire operational significance by identifying the hyperparameter with appropriate strong law limits of observables, as we shall indicate in the following example.

Example 4.16. (Exchangeable normal mean parameters). Suppose that we hake 111 unrestrictedly infinitely exchangeable sequences .r,,..r,?.. . . . I = 1 . . . . . iu. of real valued random quantities, for which (see Example 4. I 1 ) the joint density has the representation

where we recall that A-' = I h , , . ,


It.?,,(/)
"(.I.,! +'"+./',,,).

.sf(()

and 11, =

hi,,

.,

.r,,(i).where

//Sf(/) Z ( . r , . r , , ( / ) ) i . ;: = ,
i t

1 _ . _ 1_1 .. 1

So far as the specification of Q ( p l . .. . . / I , , , . X ) is concerned. wc tirst note that in many applications it is helpful to think in terms o f

O(/fl. . . . . /',,,. A ) - - O , , b l . . . . .//,,,lA"(A).

4.6 Models via Partial Exchangeability

225

for some Qp, QA. In some cases, knowledge of the strong law limits of sums of squares about the mean may be judged irrelevant to the assessment of beliefs for strong law limits of the sample averages: in such cases, Q , ( p l , . . . p,,, I A) will not depend on A. In other cases, we might believe, for example, that variation among the limiting sample averages is certainly bigger (or certainly smaller) than within-sequence variation of observations about the sample mean: in such cases, (?,,(PI, . . . , I A) will involve A. In either case, it is useful to think in terms of the product form of Q. Now suppose that, conditional on A. the limiting sample means are judged exchangeable. If the rn sequencescan be thought of as a selection from a potentially infinite collection of similar sequences, we have (see Section 4.3) a further representation of Q, in the form

The complete model then has the hierarchical structure


tr1

In practice, beliefs about G, given A, might concentrate on a particular parametric family, so that, assuming the existence of the appropriate densities, the hierarchical structure would take the form
111

gp(11,.

. . . .PI12 I A.4) = n 9 J P t I A, 4)
1=1

lW4 I A)

QA(A).

For an explicit example of this, suppose that, given a potentially infinite sequence pl . p2.. . . (or, moreconcretely,~,1(1).~,,2(2). .forvery largenl,az,. . .)thequantitiesrri, p ( m ) = ... tn-'(pl + .. + f i n ) and s'(m) = m I 1 , ( p , - ji(m))2 the large sample analogues : (or of p(m)and s2(7n))were judged sufficient for the sequence. It would then be natural (see Section 4.5) to take g,,(pl I A, 4) = N ( p I 14,.0 2 ) . where

From an operational standpoint, the final stage specification of the joint prior distribution for

&, 42 and X then reduces to a specificationof beliefs about the following limits of observable quantities (for large m and 7 t l . i = 1,. . . r n ) :

(i) the mean of all the observations from all the sequences (&);

226

4 Modelling

(ii) the mean sum of squares of the individual sequence means about the overall mean (6.); (iii) the mean (over sequences)of the mean sum of squares of observations within a sequence about the sequence mean (A).

The precise form of specification at this stage will, of course. depend on the particular situation in which the model is being applied.

Hierarchical modelling provides a powerful and flexible approach to the representation of beliefs about observables in extended data structures, and is being increasingly used in statistical modelling and analysis. This section has merely provided a brief introduction to the basic ideas and the way such structures arise naturally within a subjectivist, modelling framework. In the context of the Bayesian learning process, further brief discussion will be given in Section 5.6.4, where links will be made with empirical Buyes ideas. An extensive discussion of hierarchical modelling will be given in the volumes Bayesian Curnpururiun and Bayesian Merhods. A selection of references to the literature on inference for hierarchical models will be given in Section 5.6.3.

4.7
4.7.1

PRAGMATIC ASPECTS
Finite and Infinite Exchangeability

The de Finetti representation theorem for 0-1 random quantities, and the various extensions we have been considering in this chapter. characterise forms of p ( r l . . . . , x t t ) for observables . ? - I . . . ...r,, assumed to be part of an infinite exchangeable sequence. However, in general, mathematical representations which correspond to probabilistic mixing over conditionally independent parametric forms do not hold for finite exchangeable sequences. To see this, consider 11 = 2 and finitely exchangeable 0- 1 .rl . .Q, such that

If the de Finetti representation held. we would have

for some Q ( O ) , an impossibility since the latter would have to assign probability one to both 0 = 0 and B = 1 (Diaconis and Freedman. 1980a).

4.7 Pragmatic Aspects

227

It appears, therefore, that there is a potential conflict between realistic modelling (acknowledgingthe necessarily finite nature of actual exchangeabilityjudgements) and the use of conventional mathematical representations (derived on the basis of assumed infinite exchangeability). To discuss this problem, let us call an exchangeablesequence, x1 . . .I I,, with IC, E X, N-extendible if it is part of the longer exchangeable sequence q , . . . x,v. Practical judgements of exchangeability for specific observables 2 1 , . . . ,x, are . typically of this kind: the xl, . . ,xn can be considered as part of a larger, but finite, potential sequence of exchangeable observables. Infinite exchangeability correspondsto the possibly unrealistic assumption of N-extendibility for all N > n. In general, the assumption of infinite exchangeability implies that the probaE bility assigned to an event (q . . .,z,,) E X is of the form

F(E)K?(F),

for some Q. If we denote by P( E) the corresponding probability assigned under N-extendibility for a specific N, a possible measure of the distortion introduced by assuming infinite exchangeability is given by

where the supremum is taken over all events in the appropriate a-field on X. Intuitively, one might feel that if 3c1 . . . ,x, is N-extendible for some N > > n, the distortion should be somewhat negligible. This is made precise by the following.

Proposition 4.19. (Finite approximation of infinite exchangeability).


With the preceding notation, there exists Q such that

where f ( n )is the number of elements in X , ifthe latter isfinite, and f ( 7 1 ) = ( n - 1) otherwise.
Proof. See Diaconis and Freedman (1980a) for a rigorous statement and technical details.

The message is clear and somewhat comforting. If a realistic judgement of N-extendibility for large, but finite, N is replaced by the mathematically convenient assumption of infinite exchangeability, no important distortion will occur in quantifying uncertainties. For further discussion, see Diaconis (1977), Jaynes (1986) and Hill (1992). For extensions of Proposition 4.19 to multivariate and linear model structures, see Diaconis et al. (1992).

228
4.7.2

4 Modelling

Parametrlc and NonparametricModels

In Section 4.3. we saw that the assumption of exchangeability for a sequence joint distribution function of x I. . . . .x,, of the form
sl, s2,. . of real-valued random quantities implied a general representation of the .

J . 1

1-1

where

Q ( F ) = lim P(F,,)
11

and F,, is the empirical distribution function defined by .TI. . . . ..rl,. This implies that we should proceed as ifwe have a random sample from an unknown distribution function F, with Q representing our beliefs about what the empirical distribution would look like for large n*. As we remarked at the end of Section 4.3.3. the task of assessing and representing such a belief distribution Q over the set 3 of all possible distribution functions is by no means straightforward,since F is, effectively. an infinite-dimensional parameter. Most of this chapter has therefore been devoted to exploring additional features of beliefs which justify the restriction of 3 to families of distributions having explicit mathematical forms involving only a tinite-dimensional labelling parameter. Conventionally, albeit somewhat paradoxically, representations in the tinitedimensional case are referred to as pururnetric models. whereas those involving the infinite-dimensional parameter are referred to as nonparumetric models! The technical key to Bayesian nonparametric modelling is thus seen to be the specification of appropriate probability measures over function spaces. rather than over finitedimensional real spaces, as in the parametric case. For this reason, the Bayesian analysis of nonparametric models requires considerably more mathematical machinery than the corresponding analysis of parametric models. In the rest of this volume we will deal exclusively with the parametric case. postponing a treatment of nonparametric problems to the volumes Buyesion Coniputation and Bqcsiun Methods. Among important references on this topic, we note Whittle ( 1958). Hill (1968. 1988, 1992). Dickey (1969). Kimeldorf and Wahha (1970), Good and Gaskins (1971, 1980). Ferguson (1973. 1974), Leonard (1973). Antoniak (1974). Doksum (1974). Susarla and van Ryzin ( 1976). Ferguson and Phadia ( 1979). Dalal and Hall (1980).DykstraandLaud(198l).Padgettand Wei( i981),Rolin( 1983).Lo( 1984). Thorburn ( I 986), Kestemont (I 987). Berliner and Hill ( 1988).Wahba ( 1988).Hjort (1990). Lenk (1991) and Lavine (1992a). As we have seen. the use of specific parametric forms can often be given a formul motivation or justification as the coherent representation of certain forms of

4.7 Pragmatic Aspects belief characterised by invariance or sufficiency properties. In practice, of course, there are often less formal, more pragmatic, reasons for choosing to work with a particular parametric model (as there often are for acting, formally, as ifparticular forms of summary statistic were sufficient!). In particular, specific parametric models are often suggested by exploratory data analysis (typically involving graphical techniques to identify plausible distributional shapes and forms of relationship with covariates),or by experience (i.e., historical reference tosimilar situations, where a given model seemed to work) or by scientific theory (wtuch determines that a specific mathematical relationship must hold, in accordance with an assumed law). In each case, of course, the choice involves subjective judgements; for example, regarding such things as the straightness of a graphical normal plot, the similarity between a current and a previous trial, and the applicabilityof a theory to the situation under study. From the standpoint of the general representation theorem, such judgements correspond to acting as ifone has a Q which concentrates on a subset of 3 defined in terms of a finite-dimensional labelling parameter.

4.7.3

Model Elaboration

However, in arriving at a particular parametric model specification, by means of whatever combination of formal and pragmatic judgements have been deemed appropriate, a number of simplifying assumptions will necessarily have been made (either consciously or unconsciously). It would always be prudent, therefore, to expand ones consciousness a little in relation to an intended model in order to review the judgements that have been made. Depending on the context, the following kinds of critical questions might be appropriate: (i) is it reasonable to assume that all the observables form a homogeneous sample, or might a few of them be aberrant in some sense?
(ii) is it reasonable to apply the modelling assumptions to the observables on their original scale of measurement, or should the scale be transformed to logarithms, reciprocals, or whatever?
(iii) when considering temporally or spatially related observables, is it reasonable to have made a particular conditional independence assumption, or should some form of correlation be taken into account?

(iv) if some, but not all, potential covariates have been included in the model, is it reasonable to have excluded the others, or might some of them be important, either individually or in conjunction with covariates already included?

We shall consider each of these possibilities in turn, indicating briefly the kinds of elaboration of the first thought o f model that might be considered.

4 Modelling

Outlier elaboration. Suppose that judgements about a sequence zI. z2. . . . of real-valued random quantities have led to serious consideration of the model

but, on reflection, it is thought wise to allow for the fact that (an unknown) one of . r l , . . . x f amight be aberrant. If aberrant observations are assumed to be such that a sequence of them would have a limiting mean equal to I / , but a limiting mean square about the mean equal to (7 A)-'. 0 < 2 < 1. where p and A denote the corresponding limits for non-aberrant observations. a suitable form of elaborated model might be

This model corresponds to an initial assumption that, with specified probability T, there are no aberrant observations, but, with probability 1- 7 ~ there is precisely one . aberrant observation, which is equally likely to be any one of .rl. . . . . x,,.Generalisations to cover more than one possible aberrant observation can be constructed in an obviously analogous manner. Such models are usually referred to as "outlier" models. since 7 < 1 implies an increased probability that, in the observed sample sI.. . sf,.the aberrant observation will "outlie". Since for an aberrant observa.. tion 1.E [ ( r - p ) ? 111, A. -,] = ( ? A ) ~ ' I ,prior belief in the relative inaccuracy of' an aberrant observation as a "predictor" of p is reflected in the weight attached by the prior distribution Q ( ? )t o values of-!much smaller than 1. De Finetti ( I96 1 ) and Box and Tiao ( 1968) are pioneering Bayesian papers on this topic. More recent literature includes; Dawid ( 1973). O'Hagan (1979. 1988b. 1990), Freeman ( 1980). Smith ( 1983). West ( I984 1985). Pettit and Smith ( 1985). Arnaiz and Ruiz-Rivas ( 1986). Muirhead ( 1986). Pettit ( 1986. 1992). Guttman and Peiia (1988) and Peiia and Guttman (1993).
Transformation elaboration. Suppose now that judgements about a sequence x₁, x₂, … of real-valued random quantities are such that it seems reasonable to suppose that, if a suitable γ were identified, beliefs about the sequence x₁⁽ᵞ⁾, x₂⁽ᵞ⁾, …, defined by

$$x_i^{(\gamma)} = \begin{cases} (x_i^{\gamma} - 1)/\gamma, & \gamma \neq 0, \ \gamma \in \Gamma, \\ \log(x_i), & \gamma = 0, \end{cases}$$


would plausibly have the representation

$$p(x_1^{(\gamma)}, \ldots, x_n^{(\gamma)}) = \int\!\!\int \prod_{i=1}^{n} \mathrm{N}(x_i^{(\gamma)} \mid \mu, \lambda) \, dQ(\mu, \lambda).$$

It then follows that

$$p(x_1, \ldots, x_n) = \int\!\!\int\!\!\int \prod_{i=1}^{n} \mathrm{N}(x_i^{(\gamma)} \mid \mu, \lambda) \, J_i(\gamma) \, dQ(\mu, \lambda) \, dQ^*(\gamma),$$

where

$$J_i(\gamma) = x_i^{\gamma - 1}$$

is the Jacobian of the transformation from xᵢ to xᵢ⁽ᵞ⁾. The case γ = 1 corresponds to assuming a normal parametric model for the observations on their original scale of measurement. If Γ includes values such as γ = −1, γ = 1/2, γ = 0, the elaborated model admits the possibility that transformations such as reciprocal, square root, or logarithm might provide a better scale on which to assume a normal parametric model. Judgements about the relative plausibilities of these and other possible transformations are then incorporated in Q*. For detailed developments see Box and Cox (1964), Pericchi (1981) and Sweeting (1984, 1985).
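A short numerical sketch of this idea follows; the data set and grid of candidate values are entirely illustrative, and plugging sample moments of the transformed data into the normal likelihood is a crude stand-in for the full mixture over Q(μ, λ).

```python
import numpy as np
from scipy.stats import norm

def boxcox(x, g):
    return np.log(x) if g == 0 else (x ** g - 1) / g

def log_support(x, g):
    """Approximate log-likelihood for gamma: normal fit on the transformed
    scale (sample mean/sd plugged in) plus the log-Jacobian term."""
    y = boxcox(x, g)
    ll = norm.logpdf(y, y.mean(), y.std(ddof=1)).sum()
    return ll + (g - 1) * np.log(x).sum()   # Jacobian of the transformation

x = np.random.default_rng(0).lognormal(size=50)   # positive data
for g in (-1.0, 0.0, 0.5, 1.0):                   # reciprocal, log, root, identity
    print(g, round(log_support(x, g), 2))         # the log scale should dominate here
```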
Correlation elaboration. Suppose that judgements about x₁, x₂, … again lead to a "first thought of" model in which

$$p(x_1, \ldots, x_n \mid \mu, \lambda) = \prod_{t=1}^{n} \mathrm{N}(x_t \mid \mu, \lambda),$$

but that it is then recognised that there may be a serial correlation structure among x₁, …, xₙ (since, for example, the observations correspond to successive time points, t = 1, t = 2, etc.). A possible extension of the representation to incorporate such correlation might be to assume that, for a given γ ∈ [−1, 1], and conditional on μ and λ, the correlation between xₜ and xₜ₊ₕ is given by

$$R(x_t, x_{t+h} \mid \mu, \lambda, \gamma) = \gamma^{h},$$

so that

the elaborated model then becomes, for some Q*,

$$p(x_1, \ldots, x_n) = \int\!\!\int\!\!\int \mathrm{N}_n\!\left(\boldsymbol{x} \mid \mu \boldsymbol{1}, \lambda \boldsymbol{H}^{-1}(\gamma)\right) dQ^*(\mu, \lambda \mid \gamma) \, dQ^*(\gamma),$$

where H(γ) denotes the n × n matrix with (i, j) element γ^{|i−j|}.

The "first thought of' model corresponds to = 0 and beliefs about the relative plausibility of this value compared with other possible values of positive or negative 2 ' . correlation are reflected in the specification of (
Covariate elaboration. Suppose that the "first thought of" model for the observables x = (x₁(n₁), …, x_m(n_m)), where xᵢ(nᵢ) = (xᵢ₁, …, xᵢₙᵢ) denotes replicate observations corresponding to the observed value zᵢ = (zᵢ₁, …, zᵢₖ) of the covariates z₁, …, zₖ, is the multiple regression model with representation

$$p(\boldsymbol{x}) = \int\!\!\int \mathrm{N}_n(\boldsymbol{x} \mid \boldsymbol{A}\boldsymbol{\theta}, \lambda \boldsymbol{I}_n) \, dQ(\boldsymbol{\theta}, \lambda),$$

as described in Example 4.16 of Section 4.6. If it is subsequently thought that covariates z_{k+1}, …, z_l should also have been taken into account, a suitable elaboration might take the form of an extended regression model

$$p(\boldsymbol{x}) = \int\!\!\int\!\!\int \mathrm{N}_n(\boldsymbol{x} \mid \boldsymbol{A}\boldsymbol{\theta} + \boldsymbol{B}\boldsymbol{\gamma}, \lambda \boldsymbol{I}_n) \, dQ^*(\boldsymbol{\theta}, \lambda, \boldsymbol{\gamma}),$$

where B consists of rows containing bᵢ = (z_{i,k+1}, …, z_{il}) replicated nᵢ times, i = 1, …, m, and γ = (θ_{k+1}, …, θ_l) denotes the regression coefficients of the additional regressor variables z_{k+1}, …, z_l. The value γ = 0 corresponds to the "first thought of" model. In all these cases, an initially considered representation of the form

$$p(\boldsymbol{x}) = \int p(\boldsymbol{x} \mid \boldsymbol{\phi}) \, dQ(\boldsymbol{\phi})$$

is replaced by an elaborated representation

$$p(\boldsymbol{x}) = \int p(\boldsymbol{x} \mid \boldsymbol{\phi}, \boldsymbol{\gamma}) \, dQ^*(\boldsymbol{\phi}, \boldsymbol{\gamma}),$$

the latter reducing to the original representation on setting the elaboration parameter γ equal to 0. Inference about such a γ, imaginatively chosen to reflect interesting possible forms of departure from the original model, often provides a natural basis for checking on the adequacy of an initially proposed model, as well as learning about the directions in which the model needs extending. Other Bayesian approaches to the problem of covariate selection include Bernardo and Bermúdez (1985), Mitchell and Beauchamp (1988) and George and McCulloch (1993a).
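In the regression case, elaboration is literally a matter of appending columns to the design matrix. The sketch below (synthetic data, with least-squares fits standing in very loosely for a full Bayesian treatment) shows the mechanics: the fitted coefficient of the appended column plays the role of γ, and γ = 0 recovers the original model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
A = np.column_stack([np.ones(n), rng.normal(size=n)])   # original design matrix
B = rng.normal(size=(n, 1))                             # candidate extra covariate
x = A @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)

theta, *_ = np.linalg.lstsq(A, x, rcond=None)           # "first thought of" model
theta_ext, *_ = np.linalg.lstsq(np.hstack([A, B]), x, rcond=None)
print(theta, theta_ext[-1])   # last coefficient is the elaboration parameter gamma
```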


4.7.4 Model Simplification

The process of model elaboration, outlined in the previous section, consists in expanding a "first thought of" model to include additional parameters (and possibly covariates), reflecting features of the situation whose omission from the original model formulation is, on reflection, thought to be possibly injudicious. The process of model simplification is, in a sense, the converse. In reviewing a currently proposed model, we might wonder whether some parameters (or covariates) have been unnecessarily included, in the sense that a simpler form of model might be perfectly adequate. As it stands, of course, this latter consideration is somewhat ill-defined: the adequacy, or otherwise, of a particular form of belief representation can only be judged in relation to the consequences arising from actions taken on the basis of such beliefs. These and other questions relating to the fundamentally important area of model comparison and model choice will be considered at length in Chapter 6. For the present, it will suffice just to give an indication of some particular forms of model simplification that are routinely considered.

Equality of parameters. In Section 4.6, we analysed the situation where several sequences of observables are judged unrestrictedly infinitely exchangeable, leading to a general representation of the form

$$p(x_{11}, \ldots, x_{mn_m}) = \int \prod_{i=1}^{m} \prod_{j=1}^{n_i} p(x_{ij} \mid \boldsymbol{\theta}_i) \, dQ(\boldsymbol{\theta}_1, \ldots, \boldsymbol{\theta}_m),$$

where θᵢ ∈ Θᵢ, Θ = ∏ᵢ₌₁ᵐ Θᵢ, and the parameter θᵢ relating to the ith sequence can typically be interpreted as the limit of a suitable summary statistic for the ith sequence. If, on the other hand, the simplifying judgement were made that, in fact, the labelling of the sequences is irrelevant and that any combined collection of observables from any or all of the sequences would be completely exchangeable, we would have the representation

$$p(x_{11}, \ldots, x_{mn_m}) = \int \prod_{i=1}^{m} \prod_{j=1}^{n_i} p(x_{ij} \mid \boldsymbol{\theta}) \, dQ(\boldsymbol{\theta}),$$

where the same parameter θ ∈ Θ now suffices to label the parametric model for each of the sequences. In conventional terminology, the simplified representation is sometimes referred to as the null hypothesis (θ₁ = ⋯ = θ_m) and the original representation as the alternative hypothesis (θ₁ ≠ ⋯ ≠ θ_m). As we saw in Section 4.6 (for the case of two 0–1 sequences), rather than opt for sure for one or other of these representations, we could take a mixture of the two (with weight π, say, on the null representation and 1 − π on the alternative, general, representation). This


form of representation will be considered in more detail in Chapter 6, where it will be shown to provide a possible basis for evaluating the relative plausibility of the null and alternative hypotheses in the light of data.
Absence of effects. In Section 4.6, we considered the situation of a structured layout with replicate sequences of observations in each of I × J cells, and a possible parametric model representation involving row effects (α₁, …, α_I), column effects (β₁, …, β_J) and interaction effects (γ₁₁, …, γ_{IJ}). A commonly considered simplifying assumption is that there are no interaction effects (γ₁₁ = ⋯ = γ_{IJ} = 0), so that large sample means in individual cells are just the additive combination of the corresponding large sample row and column means. Further possible simplifying judgements would be that the row (or column) labelling is irrelevant, so that α₁ = ⋯ = α_I = 0 (or β₁ = ⋯ = β_J = 0) and large sample cell means coincide with column (or row) means. Again, conventional terminology would refer to these simplifying judgements as null hypotheses.

Omission of covariates. Considering, for example, the multiple regression case, described in Example 4.14 of Section 4.6 and reconsidered in the previous section on model elaboration, we see that here the simplification process is very clearly just the converse of the elaboration process. If γ denotes the regression coefficients of the covariates we are considering omitting, then the model corresponding to γ = 0 provides the required simplification. In fact, in all the cases of elaboration which we considered in the previous section, setting the elaboration parameter to 0 provides a natural form of simplification of potential interest. Whether the process of model comparison and choice is seen as one of elaboration or of simplification is then very much a pragmatic issue of whether we begin with a smaller model and consider making it bigger, or vice versa. In any case, issues of model comparison and choice require a separate detailed and extensive treatment, which we defer until Chapter 6.

4.7.5 Prior Distributions

The operational subjectivist approach to modelling views predictive models as representations of beliefs about observables (including limiting, large-sample functions of observables, conventionally referred to as parameters). Invariance and sufficiency considerations have then been shown to justify a structured approach to predictive models in terms of integral mixtures of parametric models with respect to distributions for the labelling parameters. In familiar terminology, we specify a distribution for the observables conditional on unknown parameters (a sampling distribution, defining a likelihood), together with a distribution for the unknown parameters (a prior distribution). It is the combination of prior and likelihood which defines the overall model. In terms of the mixture representation, the specification of a prior distribution for unknown parameters is therefore an essential


and unavoidable part of the process of representing beliefs about observables and hence of learning from experience. From the operational, subjectivist perspective, it is meaningless to approach modelling solely in terms of the parametric component, ignoring the prior distribution. We are, therefore, in fundamental disagreement with approaches to statistical modelling and analysis which proceed only on the basis of the sampling distribution or likelihood and treat the prior distribution as something optional, irrelevant, or even subversive (see Appendix B).

That said, it should be readily acknowledged that the process of representing prior beliefs itself involves a number of both conceptual and practical difficulties, and certainly cannot be summarily dealt with in a superficial or glib manner. From a conceptual point of view, as we have repeatedly stressed throughout this chapter, prior beliefs about parameters typically acquire an operational significance and interpretation as beliefs about limiting (large-sample) functions of observables. Care must therefore obviously be taken to ensure that prior specifications respect logical or other constraints pertaining to such limits. Often, the specification process will be facilitated by suitable reparametrisation. From a practical point of view, detailed treatment of specific cases is very much a matter of methods rather than theory and will be dealt with in the third volume of this series. However, a general overview of representation strategies, together with a number of illustrative examples, will be given in the inference context in Chapter 5. In particular, we shall see that the range of creative possibilities opened up by the consideration of mixtures, asymptotics, robustness and sensitivity analysis, as well as novel and flexible forms of inference reporting, provides a rich and illuminating perspective and framework for inference, within which many of the apparent difficulties associated with the precise specification of prior distributions are seen to be of far less significance than is commonly asserted by critics of the Bayesian approach.

4.8 DISCUSSION AND FURTHER REFERENCES

4.8.1 Representation Theorems

The original representation theorem for exchangeable 0–1 random quantities appears in de Finetti (1930), the concept of exchangeability having been considered earlier by Haag (1924) and also in the early 1930s by Khintchine (1932). Extensions to the case of general exchangeable random quantities appear in de Finetti (1937/1964) and Dynkin (1953), with an abstract analytical version appearing in Hewitt and Savage (1955). Seminal extensions to more complex forms of symmetry (partial exchangeability) can be found in de Finetti (1938) and Freedman (1962).

See Diaconis and Freedman (1980b) and Wechsler (1993) for overviews and generalisations of the concept of exchangeability. Recent and current developments have generated an extensive catalogue of characterisations of distributions via both invariance and sufficiency conditions. Important progress is made in Diaconis and Freedman (1984, 1987, 1990) and Küchler and Lauritzen (1989). See, also, Ressel (1985). Useful reviews are given by Aldous (1985), Diaconis (1988a) and, from a rather different perspective, Lauritzen (1982, 1988). The conference proceedings edited by Koch and Spizzichino (1982) also provide a wealth of related material and references. For related developments from a reliability perspective, see Barlow and Mendel (1992, 1994) and Mendel (1992).

4.8.2 Subjectivity and Objectivity

Our approach to modelling has been dictated by a subjectivist, operational concern with individual beliefs about (potential) observables. Through judgements of symmetry, partial symmetry, more complex invariance or sufficiency, we have seen how mixtures over conditionally independent parameter-labelled forms arise as typical representations of such beliefs. We have noted how this illuminates, and puts into perspective, linguistic separation into likelihood (or sampling model) and prior components. But we have also stressed that, from our standpoint, the two are actually inseparable in defining a belief model. In contrast, traditional discussion of a statistical model typically refers to the parametric form as "the model". The latter then defines "objective" probabilities for outcomes defined in terms of observables, these probabilities being determined by the values of the unknown parameters. It is often implicit in such discussion that if the "true" parameter were known, the corresponding parametric form would be the "true" model for the observables. Clearly, such an approach seeks to make a very clear distinction between the nature of observables and parameters. It is as if, given the true parameter, the corresponding parametric distribution is seen as part of objective reality, providing the mechanism whereby the observables are generated. The prior, on the other hand, is seen as a subjective optional extra, a potential contaminant of the objective statements provided by the parametric model. Clearly, this view has little in common with the approach we have systematically followed in this volume. However, there is an interesting sense, even from our standpoint, in which the parametric model and the prior can be seen as having different roles. Instead of viewing these roles as corresponding to an objective/subjective dichotomy, we view them in terms of an intersubjective/subjective dichotomy (following Dawid, 1982b, 1986b). To this end, consider a group of Bayesians, all concerned with their belief distributions for the same sequence of observables. In the absence of any general agreement over assumptions of symmetry, invariance or


sufficiency, the individuals are each simply left with their own subjective assessments. However, given some set of common assumptions, the results of this chapter imply that the entire group will structure their beliefs using some common form of mixture representation. Within the mixture, the parametric forms adopted will be the same (the intersubjective component), while the priors for the parameter will differ from individual to individual (the subjective component). Such intersubjective agreement clearly facilitates communication within the group and reduces areas of potential disagreement to just that of different prior judgements for the parameter. As we shall see in Chapter 5, judgements about the parameter will tend more towards a consensus as more data are acquired, so that such a group of Bayesians may eventually come to share very similar beliefs, even if their initial judgements about the parameter were markedly different. We emphasise again, however, that the key element here is intersubjective agreement or consensus. We can find no real role for the idea of objectivity except, perhaps, as a possibly convenient, but potentially dangerously misleading, "shorthand" for intersubjective communality of beliefs.

4.8.3 Critical Issues

We conclude this chapter on modelling with some further comments concerning (i) The Role and Nature of Models, (ii) Structural and Stochastic Assumptions, (iii) Identifiability and (iv) Robustness Considerations.

The Role and Nature of Models

In the approach we have adopted, the fundamental notion of a model is that of a predictive probability specification for observables. However, the forms of representation theorems we have been discussing provide, in typical cases, a basis for separating out, if required, two components; the parametric model, and the belief model for the parameters. Indeed, we have drawn attention in Section 4.8.2 to the fact that shared structural belief assumptions among a group of individuals can imply the adoption of a common form of parametric model, while allowing the belief models for the parameters to vary from individual to individual. One might go further and argue that without some element of agreement of this kind there would be great difficulty in obtaining any meaningful form of scientific discussion or possible consensus. Non-subjectivist discussions of the role and nature of models in statistical analysis tend to have a rather different emphasis (see, for example, Cox, 1990, and Lehmann, 1990). However, such discussions often end up with a similar message, implicit or explicit, about the importance of models in providing a focused framework to serve as a basis for subsequent identification of areas of agreement and disagreement. In order to think about complex phenomena, one must necessarily work with simplified representations. In any given context, there are typically

a number of different choices of degrees of simplification and idealisation that might be adopted, and these different choices correspond to what Lehmann calls "a reservoir of models", where

. . . particular emphasis is placed on transparent characterisations or descriptions of the models that would facilitate the understanding of when a given model is appropriate. (Lehmann, 1990)

But appropriate for what? Many authors, including Cox and Lehmann, highlight a distinction between what one might call scientific and technological approaches to models. The essence of the dichotomy is that scientists are assumed to seek explanatory models, which aim at providing insight into and understanding of the "true" mechanisms of the phenomenon under study; whereas technologists are content with empirical models, which are not concerned with the "truth", but simply with providing a reliable basis for practical action in predicting and controlling phenomena of interest. Put very crudely, in terms of our generic notation, explanatory modellers take the form of p(x | θ) very seriously, whereas empirical modellers are simply concerned that p(x) "works". For an elaboration of the latter view, see Leonard (1980). The approach we have adopted is compatible with either emphasis. As we have stressed many times, it is observables which provide the touchstone of experience. When comparing rival belief specifications, all other things being equal we are intuitively more impressed with the one which consistently assigns higher probabilities to the things that actually happen. If, in fact, a phenomenon is governed by the specific mechanism p(x | θ) with θ = θ₀, a scientist who discovers this and sets p(x) = p(x | θ₀) will certainly have a p(x) that "works". However, we are personally rather sceptical about taking the science versus technology distinction too seriously. Whilst we would not dispute that there are typically real differences in motivation and rhetoric between scientists and technologists, it seems to us that theories are always ultimately judged by the predictive power they provide. Is there really a meaningful concept of "truth" in this context other than a pragmatic one predicated on p(x)? We shall return to this issue in Chapter 6, but our prejudices are well-captured in the adage: "all models are false, but some are useful".

Structural and Stochastic Assumptions

In Section 4.6, we considered several illustrative examples where, separate from considerations about the complete form of probability specification to be adopted, the key role of the parametric model component p(x | θ) was to specify structured forms of expectations for the observables conditional on the parameters. We recall two examples.



In the case of observables x_{ijk} in a two-way layout with replications (Section 4.6.3), with parameters corresponding to overall mean, main effects and interactions, we encountered the form

$$E[x_{ijk} \mid \mu, \alpha_i, \beta_j, \gamma_{ij}] = \mu + \alpha_i + \beta_j + \gamma_{ij};$$

in the case of a vector of observables x in a multiple regression context with design matrix A (Section 4.6.4, Example 4.14), we encountered the form

$$E[\boldsymbol{x} \mid \boldsymbol{\theta}, \lambda] = \boldsymbol{A}\boldsymbol{\theta}.$$
In both of these cases, fundamental explanatory or predictive structure is captured by the specification of the conditional expectation, and this aspect can in many cases be thought through separately from the choice of a particular specification of the full probability distribution.

Identifiability
A parametric model for which an element of the parametrisation is redundant is said to be non-identified. Such models are often introduced at an early stage of model building (particularly in econometrics) in order to include all parameters which may originally be thought to be relevant. Identifiability is a property of the parametric model, but a Bayesian analysis of a non-identified model is always possible if a proper prior on all the parameters is specified. For detailed discussion of this issue, see Morales (1971), Drèze (1974), Kadane (1974), Florens and Mouchart (1986), Hills (1987) and Florens et al. (1990, Section 4.5).

Robustness Considerations

For concreteness, in our earlier discussion of these examples we assumed that the p(x | θ) terms were specified in terms of normal distributions. As we demonstrated earlier in this chapter, under the a priori assumption of appropriate invariances, or on the basis of experience with particular applications, such a specification may well be natural and acceptable. However, in many situations the choice of a specific probability distribution may feel a much less secure component of the overall modelling process than the choice of conditional expectation structure. For example, past experience might suggest that departures of observables from assumed expectations resemble a symmetric bell-shaped distribution centred around zero. But a number of families of distributions match these general characteristics, including the normal, Student and logistic families. Faced with a seemingly arbitrary choice, what can be done in a situation like this to obtain further insight and guidance? Does the choice matter? Or are subsequent inferences or predictions robust against such choices?


An exactly analogous problem arises with the choice of mathematical specifications for the prior model component. In robustness considerations, theoretical analysis, sometimes referred to as "what if?" analysis, has an interesting role to play. Using the inference machinery which we shall develop in Chapter 5, the desired insight and guidance can often be obtained by studying mathematically the ways in which the various "arbitrary" choices affect subsequent forms of inferences and predictions. For example, a "what if?" analysis might consider the effect of a single, aberrant, outlying observation on inferences for main effects in a multiway layout under the alternative assumptions of a normal or Student parametric model distribution. It can be shown that the influence of the aberrant observation is large under the normal assumption, but negligible under the Student assumption, thus providing a potential basis for preferring one or other of the otherwise seemingly arbitrary choices. More detailed analysis of such robustness issues will be given in Section 5.6.3.
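This kind of "what if?" comparison is straightforward to carry out numerically. The sketch below is purely illustrative (toy data with one outlier, a flat prior on a grid, Student-t with 4 degrees of freedom chosen arbitrarily); it contrasts the posterior mean of a location parameter under normal and Student likelihoods.

```python
import numpy as np
from scipy.stats import norm, t

x = np.array([0.1, -0.3, 0.2, 0.0, 8.0])       # one aberrant observation
grid = np.linspace(-2, 10, 4001)

def post_mean(logpdf):
    """Posterior mean for a location model with a flat prior on the grid."""
    ll = sum(logpdf(xi - grid) for xi in x)
    w = np.exp(ll - ll.max())
    return (grid * w).sum() / w.sum()

print(post_mean(norm.logpdf))                   # pulled towards the outlier
print(post_mean(lambda u: t.logpdf(u, df=4)))   # largely unaffected by it
```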


Chapter 5

Inference
Summary
The role of Bayes' theorem in the updating of beliefs about observables in the light of new information is identified and related to conventional mechanisms of predictive and parametric inference. The roles of sufficiency, ancillarity and stopping rules in such inference processes are also examined. Forms of common statistical decisions and inference summaries are introduced and the problems of implementing Bayesian procedures are discussed at length. In particular, conjugate, asymptotic and reference forms of analysis and numerical approximation approaches are detailed.

5.1 THE BAYESIAN PARADIGM

5.1.1 Observables, Beliefs and Models

Our development has focused on the foundational issues which arise when we aspire to formal quantitative coherence in the context of decision making in situations of uncertainty. This development, in combination with an operational approach to the basic concepts, has led us to view the problem of statistical modelling as that of identifying or selecting particular forms of representation of beliefs about observables.


For example, in the case of a sequence x₁, x₂, … of 0–1 random quantities for which beliefs correspond to a judgement of infinite exchangeability, Proposition 4.1 (de Finetti's theorem) identifies the representation of the joint mass function for x₁, …, xₙ as having the form

$$p(x_1, \ldots, x_n) = \int_0^1 \prod_{i=1}^{n} \theta^{x_i} (1 - \theta)^{1 - x_i} \, dQ(\theta),$$

for some choice of distribution Q over the interval [0, 1]. More generally, for sequences of real-valued or integer-valued random quantities, x₁, x₂, …, we have seen, in Sections 4.3 to 4.5, that beliefs which combine judgements of exchangeability with some form of further structure (either in terms of invariance or sufficient statistics) often lead us to work with representations of the form

$$p(x_1, \ldots, x_n) = \int \prod_{i=1}^{n} p(x_i \mid \boldsymbol{\theta}) \, dQ(\boldsymbol{\theta}),$$
where p(x | θ) denotes a specified form of labelled family of probability distributions and Q is some choice of distribution over ℜᵏ. Such representations, and the more complicated forms considered in Section 4.6, exhibit the various ways in which the element of primary significance from the subjectivist, operationalist standpoint, namely the predictive model of beliefs about observables, can be thought of as if constructed from a parametric model together with a prior distribution for the labelling parameter. Our primary concern in this chapter will be with the way in which the updating of beliefs in the light of new information takes place within the framework of such representations.

5.1.2 The Role of Bayes' Theorem

In its simplest form, within the formal framework of predictive model belief distributions derived from quantitative coherence considerations, the problem corresponds to identifying the joint conditional density

$$p(x_{n+1}, \ldots, x_{n+m} \mid x_1, \ldots, x_n),$$

for any m ≥ 1, given, for any n ≥ 1, the form of representation of the joint density p(x₁, …, xₙ). In general, of course, this simply reduces to calculating

$$p(x_{n+1}, \ldots, x_{n+m} \mid x_1, \ldots, x_n) = \frac{p(x_1, \ldots, x_{n+m})}{p(x_1, \ldots, x_n)}$$


and, in the absence of further structure, there is little more that can be said. However, when the predictive model admits a representation in terms of parametric models and prior distributions, the learning process can be essentially identified, in conventional terminology, with the standard parametric form of Bayes' theorem. Thus, for example, if we consider the general parametric form of representation for an exchangeable sequence, with dQ(θ) having density representation, p(θ)dθ, we have

$$p(x_1, \ldots, x_{n+m}) = \int \prod_{i=1}^{n+m} p(x_i \mid \boldsymbol{\theta}) \, p(\boldsymbol{\theta}) \, d\boldsymbol{\theta},$$

from which it follows that

$$p(x_{n+1}, \ldots, x_{n+m} \mid x_1, \ldots, x_n) = \int \prod_{i=n+1}^{n+m} p(x_i \mid \boldsymbol{\theta}) \, p(\boldsymbol{\theta} \mid x_1, \ldots, x_n) \, d\boldsymbol{\theta},$$

where

$$p(\boldsymbol{\theta} \mid x_1, \ldots, x_n) = \frac{\prod_{i=1}^{n} p(x_i \mid \boldsymbol{\theta}) \, p(\boldsymbol{\theta})}{\int \prod_{i=1}^{n} p(x_i \mid \boldsymbol{\theta}) \, p(\boldsymbol{\theta}) \, d\boldsymbol{\theta}}.$$

This latter relationship is just Bayes' theorem, expressing the posterior density for θ, given x₁, …, xₙ, in terms of the parametric model for x₁, …, xₙ given θ, and the prior density for θ. The (conditional, or posterior) predictive model for x_{n+1}, …, x_{n+m}, given x₁, …, xₙ, is seen to have precisely the same general form of representation as the initial predictive model, except that the corresponding parametric model component is now integrated with respect to the posterior distribution of the parameter, rather than with respect to the prior distribution. We recall from Chapter 4 that, considered as a function of θ,

$$\operatorname{lik}(\boldsymbol{\theta} \mid x_1, \ldots, x_n) = p(x_1, \ldots, x_n \mid \boldsymbol{\theta})$$

is usually referred to as the likelihood function. A formal definition of such a concept is, however, problematic; for details, see Bayarri et al. (1988) and Bayarri and DeGroot (1992b).
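As a concrete, if elementary, instance of this prior-to-posterior-to-predictive cycle, the following sketch (the data and the Beta(1, 1) prior are purely illustrative choices) exploits the conjugate Beta-Bernoulli structure, for which the integrals above are available in closed form.

```python
from scipy.stats import beta

a, b = 1.0, 1.0                # Beta prior for theta (illustrative choice)
data = [1, 0, 1, 1, 0, 1]      # an observed 0-1 sequence
r, n = sum(data), len(data)

# posterior p(theta | x) is Beta(a + r, b + n - r) by conjugacy
post = beta(a + r, b + n - r)

# predictive P(x_{n+1} = 1 | x) integrates theta against the posterior,
# i.e., it is the posterior mean
print(post.mean())
```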

5.1.3 Predictive and Parametric Inference

Given our operationalist concern with modelling and reporting uncertainty in terms of observables, it is not surprising that Bayes' theorem, in its role as the key to a coherent learning process for parameters, simply appears as a step within the predictive process of passing from

$$p(x_1, \ldots, x_n) = \int p(x_1, \ldots, x_n \mid \boldsymbol{\theta}) \, p(\boldsymbol{\theta}) \, d\boldsymbol{\theta}$$

to

$$p(x_{n+1}, \ldots, x_{n+m} \mid x_1, \ldots, x_n) = \int p(x_{n+1}, \ldots, x_{n+m} \mid \boldsymbol{\theta}) \, p(\boldsymbol{\theta} \mid x_1, \ldots, x_n) \, d\boldsymbol{\theta},$$

by means of

$$p(\boldsymbol{\theta} \mid x_1, \ldots, x_n) = \frac{p(x_1, \ldots, x_n \mid \boldsymbol{\theta}) \, p(\boldsymbol{\theta})}{p(x_1, \ldots, x_n)}.$$

Writing y = {y₁, …, y_m} = {x_{n+1}, …, x_{n+m}} to denote future (or, as yet unobserved) quantities and x = {x₁, …, xₙ} to denote the already observed quantities, these relations may be re-expressed more simply as

$$p(\boldsymbol{y} \mid \boldsymbol{x}) = \int p(\boldsymbol{y} \mid \boldsymbol{\theta}) \, p(\boldsymbol{\theta} \mid \boldsymbol{x}) \, d\boldsymbol{\theta}$$

and

$$p(\boldsymbol{\theta} \mid \boldsymbol{x}) = p(\boldsymbol{x} \mid \boldsymbol{\theta}) \, p(\boldsymbol{\theta}) / p(\boldsymbol{x}).$$


However, as we noted on many occasions in Chapter 4, if we proceed purely formally, from an operationalist standpoint it is not at all clear, at first sight, how we should interpret "beliefs about parameters", as represented by p(θ) and p(θ | x), or even whether such beliefs have any intrinsic interest. We also answered these questions on many occasions in Chapter 4, by noting that, in all the forms of predictive model representations we considered, the parameters had interpretations as strong law limits of (appropriate functions of) observables. Thus, for example, in the case of the infinitely exchangeable 0–1 sequence (Section 4.3.1), beliefs about θ correspond to beliefs about what the long-run frequency of 1's would be in a future sample; in the context of a real-valued exchangeable sequence with centred spherical symmetry (Section 4.4.1), beliefs about μ and σ², respectively, correspond to beliefs about what the large sample mean, and the large sample mean sum of squares about the sample mean, would be, in a future sample. Inference about parameters is thus seen to be a limiting form of predictive inference about observables. This means that, although the predictive form is primary, and the role of parametric inference is typically that of an intermediate structural step, parametric inference will often itself be the legitimate end-product of a statistical analysis in situations where interest focuses on quantities which could be viewed as large-sample functions of observables. Either way, parametric inference is of considerable importance for statistical analysis in the context of the models we are mainly concerned with in this volume.


When a parametric form is involved simply as an intermediate step in the predictive process, we have seen that p(θ | x₁, …, xₙ), the full joint posterior density for the parameter vector θ, is all that is required. However, if we are concerned with parametric inference per se, we may be interested in only some subset, φ, of the components of θ, or in some transformed subvector of parameters, g(θ). For example, in the case of a real-valued sequence we may only be interested in the large-sample mean and not in the variance; or in the case of two 0–1 sequences we may only be interested in the difference in the long-run frequencies. In the case of interest in a subvector of θ, let us suppose that the full parameter vector can be partitioned into θ = {φ, λ}, where φ is the subvector of interest, and λ is the complementary subvector of θ, often referred to, in this context, as the vector of nuisance parameters. Since

$$p(\boldsymbol{\phi}, \boldsymbol{\lambda} \mid \boldsymbol{x}) = \frac{p(\boldsymbol{x} \mid \boldsymbol{\phi}, \boldsymbol{\lambda}) \, p(\boldsymbol{\phi}, \boldsymbol{\lambda})}{p(\boldsymbol{x})},$$

the (marginal) posterior density for φ is given by

$$p(\boldsymbol{\phi} \mid \boldsymbol{x}) = \int p(\boldsymbol{\phi}, \boldsymbol{\lambda} \mid \boldsymbol{x}) \, d\boldsymbol{\lambda},$$

where

$$p(\boldsymbol{x}) = \int\!\!\int p(\boldsymbol{x} \mid \boldsymbol{\phi}, \boldsymbol{\lambda}) \, p(\boldsymbol{\phi}, \boldsymbol{\lambda}) \, d\boldsymbol{\phi} \, d\boldsymbol{\lambda},$$

with all integrals taken over the full range of possible values of the relevant quantities. Expressed in terms of the notation introduced in Section 3.2.4, we have

$$p(\boldsymbol{\phi}, \boldsymbol{\lambda} \mid \boldsymbol{x}) \longrightarrow p(\boldsymbol{\phi} \mid \boldsymbol{x}).$$

In some situations, the prior specification p(φ, λ) may be most easily arrived at through the specification of p(λ | φ)p(φ). In such cases, we note that we could first calculate the integrated likelihood for φ,

$$p(\boldsymbol{x} \mid \boldsymbol{\phi}) = \int p(\boldsymbol{x} \mid \boldsymbol{\phi}, \boldsymbol{\lambda}) \, p(\boldsymbol{\lambda} \mid \boldsymbol{\phi}) \, d\boldsymbol{\lambda},$$

and subsequently proceed without any further need to consider the nuisance parameters, since

$$p(\boldsymbol{\phi} \mid \boldsymbol{x}) \propto p(\boldsymbol{x} \mid \boldsymbol{\phi}) \, p(\boldsymbol{\phi}).$$

In the case where interest is focused on a transformed parameter vector, g(θ), we proceed using standard change-of-variable probability techniques, as described in Section 3.2.4. Suppose first that ψ = g(θ) is a one-to-one differentiable transformation of θ. It then follows that

$$p(\boldsymbol{\psi} \mid \boldsymbol{x}) = p_{\boldsymbol{\theta}}\!\left(g^{-1}(\boldsymbol{\psi}) \mid \boldsymbol{x}\right) \left| \boldsymbol{J}_{g^{-1}}(\boldsymbol{\psi}) \right|,$$

where

$$\boldsymbol{J}_{g^{-1}}(\boldsymbol{\psi}) = \frac{\partial g^{-1}(\boldsymbol{\psi})}{\partial \boldsymbol{\psi}}$$

is the Jacobian of the inverse transformation θ = g⁻¹(ψ). Alternatively, by substituting θ = g⁻¹(ψ), we could write p(x | θ) as p(x | ψ), and replace p(θ) by p(g⁻¹(ψ)) |J_{g⁻¹}(ψ)|, to obtain p(ψ | x) = p(x | ψ)p(ψ)/p(x) directly.

If ψ = g(θ) has dimension less than θ, we can typically define γ = (ψ, ω) = h(θ), for some ω such that γ = h(θ) is a one-to-one differentiable transformation, and then proceed in two steps. We first obtain

$$p(\boldsymbol{\psi}, \boldsymbol{\omega} \mid \boldsymbol{x}) = p_{\boldsymbol{\theta}}\!\left(h^{-1}(\boldsymbol{\gamma}) \mid \boldsymbol{x}\right) \left| \boldsymbol{J}_{h^{-1}}(\boldsymbol{\gamma}) \right|,$$

where

$$\boldsymbol{J}_{h^{-1}}(\boldsymbol{\gamma}) = \frac{\partial h^{-1}(\boldsymbol{\gamma})}{\partial \boldsymbol{\gamma}},$$

and then marginalise to

$$p(\boldsymbol{\psi} \mid \boldsymbol{x}) = \int p(\boldsymbol{\psi}, \boldsymbol{\omega} \mid \boldsymbol{x}) \, d\boldsymbol{\omega}.$$
These techniques will be used extensively in later sections of this chapter. In order to keep the presentation of these basic manipulative techniques as simple as possible, we have avoided introducing additional notation for the ranges of possible values of the various parameters. In particular, all integrals have been assumed to be over the full ranges of the possible parameter values. In general, this notational economy will cause no confusion and the parameter ranges will be clear from the context. However, there are situations where specific constraints on parameters are introduced and need to be made explicit in the analysis. In such cases, notation for ranges of parameter values will typically also need to be made explicit. Consider, for example, a parametric model, p(x | θ), together with a prior specification p(θ), θ ∈ Θ, for which the posterior density, suppressing explicit use of the range Θ, is

$$p(\boldsymbol{\theta} \mid \boldsymbol{x}) = \frac{p(\boldsymbol{x} \mid \boldsymbol{\theta}) \, p(\boldsymbol{\theta})}{\int p(\boldsymbol{x} \mid \boldsymbol{\theta}) \, p(\boldsymbol{\theta}) \, d\boldsymbol{\theta}}.$$

Now suppose that it is required to specify the posterior subject to the constraint θ ∈ Θ₀ ⊂ Θ, where ∫_{Θ₀} p(θ) dθ > 0. Defining the constrained prior density by

$$p_0(\boldsymbol{\theta}) = \frac{p(\boldsymbol{\theta})}{\int_{\Theta_0} p(\boldsymbol{\theta}) \, d\boldsymbol{\theta}}, \quad \boldsymbol{\theta} \in \Theta_0,$$

we obtain, using Bayes' theorem,

$$p_0(\boldsymbol{\theta} \mid \boldsymbol{x}) = \frac{p(\boldsymbol{x} \mid \boldsymbol{\theta}) \, p_0(\boldsymbol{\theta})}{\int_{\Theta_0} p(\boldsymbol{x} \mid \boldsymbol{\theta}) \, p_0(\boldsymbol{\theta}) \, d\boldsymbol{\theta}}, \quad \boldsymbol{\theta} \in \Theta_0.$$

From this, substituting for p₀(θ) in terms of p(θ) and dividing both numerator and denominator by p(x), we obtain

$$p_0(\boldsymbol{\theta} \mid \boldsymbol{x}) = \frac{p(\boldsymbol{\theta} \mid \boldsymbol{x})}{\int_{\Theta_0} p(\boldsymbol{\theta} \mid \boldsymbol{x}) \, d\boldsymbol{\theta}}, \quad \boldsymbol{\theta} \in \Theta_0,$$

expressing the constrained posterior in terms of the unconstrained posterior (a result which could, of course, have been obtained by direct, straightforward conditioning). Numerical methods are often necessary to analyse models with constrained parameters; see Gelfand et al. (1992) for the use of Gibbs sampling in this context.
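A direct numerical rendering of this renormalisation result is simple; the posterior below and the constraint θ > 0 are both illustrative choices, with the unconstrained posterior evaluated on a grid and divided by its mass over Θ₀.

```python
import numpy as np
from scipy.stats import norm

grid = np.linspace(-5, 5, 2001)
post = norm.pdf(grid, loc=-0.5, scale=1.0)   # unconstrained posterior (illustrative)

mask = grid > 0                              # constraint region Theta_0
dx = grid[1] - grid[0]
const_post = np.where(mask, post, 0.0)
const_post /= const_post.sum() * dx          # renormalise over Theta_0
print(const_post.max())                      # constrained posterior density peak
```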
5.1.4 Sufficiency, Ancillarity and Stopping Rules

The concepts of predictive and parametric sufficient statistics were introduced in Section 4.5.2, and shown to be equivalent, within the framework of the kinds of models we are considering in this volume. In particular, it was established that a (minimal) sufficient statistic, t(x), for θ, in the context of a parametric model p(x | θ), can be characterised by either of the conditions

$$p(\boldsymbol{x} \mid \boldsymbol{t}(\boldsymbol{x}), \boldsymbol{\theta}) = p(\boldsymbol{x} \mid \boldsymbol{t}(\boldsymbol{x}))$$

or

$$p(\boldsymbol{\theta} \mid \boldsymbol{x}) = p(\boldsymbol{\theta} \mid \boldsymbol{t}(\boldsymbol{x})), \ \text{for all } p(\boldsymbol{\theta}).$$

The important implication of the concept is that t(x) serves as a sufficient summary of the complete data x in forming any required revision of beliefs. The resulting data reduction often implies considerable simplification in modelling and analysis. In


many cases, the sufficient statistic t(x) can itself be partitioned into two component statistics, t(x) = [a(x), s(x)], such that, for all θ,

$$p(\boldsymbol{t}(\boldsymbol{x}) \mid \boldsymbol{\theta}) = p(\boldsymbol{a}(\boldsymbol{x}), \boldsymbol{s}(\boldsymbol{x}) \mid \boldsymbol{\theta}) = p(\boldsymbol{s}(\boldsymbol{x}) \mid \boldsymbol{a}(\boldsymbol{x}), \boldsymbol{\theta}) \, p(\boldsymbol{a}(\boldsymbol{x})).$$

It then follows that, for any choice of p(θ),

$$p(\boldsymbol{\theta} \mid \boldsymbol{x}) \propto p(\boldsymbol{s}(\boldsymbol{x}) \mid \boldsymbol{a}(\boldsymbol{x}), \boldsymbol{\theta}) \, p(\boldsymbol{\theta}),$$

so that, in the prior to posterior inference process defined by Bayes' theorem, it suffices to use p(s(x) | a(x), θ), rather than p(t(x) | θ), as the likelihood function. This further simplification motivates the following definition.

Definition 5.1. (Ancillary statistic). A statistic, a(x), is said to be ancillary, with respect to θ in a parametric model p(x | θ), if p(a(x) | θ) = p(a(x)) for all values of θ.
Example 5.1. (Bernoulli model). In Example 4.5, we saw that for the Bernoulli parametric model

$$p(x_1, \ldots, x_n \mid \theta) = \prod_{i=1}^{n} \theta^{x_i} (1 - \theta)^{1 - x_i} = \theta^{r_n} (1 - \theta)^{n - r_n},$$

which only depends on n and rₙ = x₁ + ⋯ + xₙ. Thus, tₙ = [n, rₙ] provides a minimal sufficient statistic, and one may work in terms of the joint probability function p(n, rₙ | θ). If we now write

$$p(n, r_n \mid \theta) = p(r_n \mid n, \theta) \, p(n \mid \theta),$$

and make the assumption that, for all n ≥ 1, the mechanism by which the sample size, n, is arrived at does not depend on θ, so that p(n | θ) = p(n), n ≥ 1, we see that n is ancillary for θ, in the sense of Definition 5.1. It follows that prior to posterior inference for θ can therefore proceed on the basis of

$$p(\theta \mid \boldsymbol{x}) = p(\theta \mid n, r_n) \propto p(r_n \mid n, \theta) \, p(\theta),$$

for any choice of p(θ), 0 ≤ θ ≤ 1. From Corollary 4.1, we see that

$$p(r_n \mid n, \theta) = \operatorname{Bi}(r_n \mid \theta, n),$$


so that inferences in this case can be made as if we had adopted a binomial parametric model. However, if we write

$$p(n, r_n \mid \theta) = p(n \mid r_n, \theta) \, p(r_n \mid \theta)$$

and make the assumption that, for all rₙ ≥ 1, termination of sampling is governed by a mechanism for selecting rₙ which does not depend on θ, so that p(rₙ | θ) = p(rₙ), rₙ ≥ 1, we see that rₙ is ancillary for θ, in the sense of Definition 5.1. It follows that prior to posterior inference for θ can therefore proceed on the basis of

$$p(\theta \mid \boldsymbol{x}) = p(\theta \mid n, r_n) \propto p(n \mid r_n, \theta) \, p(\theta),$$

for any choice of p(θ), 0 < θ ≤ 1. It is easily verified that

$$p(n \mid r_n, \theta) = \operatorname{Nb}(n \mid \theta, r_n)$$

(see Section 3.2.2), so that inferences in this case can be made as if we had adopted a negative-binomial parametric model. We note, incidentally, that whereas in the binomial case it makes sense to consider p(θ) as specified over 0 ≤ θ ≤ 1, in the negative-binomial case it may only make sense to think of p(θ) as specified over 0 < θ ≤ 1, since p(rₙ | θ = 0) = 0, for all rₙ ≥ 1. So far as prior to posterior inference for θ is concerned, we note that, for any specified p(θ), and assuming that either p(n | θ) = p(n) or p(rₙ | θ) = p(rₙ), we obtain

$$p(\theta \mid x_1, \ldots, x_n) = p(\theta \mid n, r_n) \propto \theta^{r_n} (1 - \theta)^{n - r_n} \, p(\theta),$$

since, considered as functions of θ,

$$p(r_n \mid n, \theta) \propto p(n \mid r_n, \theta) \propto \theta^{r_n} (1 - \theta)^{n - r_n}.$$

The last part of the above example illustrates a general fact about the mechanism of parametric Bayesian inference which is trivially obvious; namely, for any specified p(θ), if the likelihood functions p₁(x₁ | θ), p₂(x₂ | θ) are proportional as functions of θ, the resulting posterior densities for θ are identical. It turns out, as we shall see in Appendix B, that many non-Bayesian inference procedures do not lead to identical inferences when applied to such proportional likelihoods. The assertion that they should, the so-called Likelihood Principle, is therefore a controversial issue among statisticians. In contrast, in the Bayesian inference context described above, this is a straightforward consequence of Bayes' theorem, rather than an imposed "principle". Note, however, that the above remarks are predicated on a specified p(θ). It may be, of course, that knowledge of the particular sampling mechanism employed has implications for the specification of p(θ), as illustrated, for example, by the comment above concerning negative-binomial sampling and the restriction to 0 < θ ≤ 1.
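The binomial/negative-binomial agreement is easy to verify numerically. In the sketch below (arbitrary illustrative data, uniform prior on a grid) the two likelihoods differ only by a factor free of θ, so the normalised posteriors coincide to machine precision; note that SciPy's `nbinom.pmf(k, n, p)` is parametrised by the number of failures k before the n-th success.

```python
import numpy as np
from scipy.stats import binom, nbinom

n, r = 12, 3                              # n trials, r successes (illustrative)
theta = np.linspace(0.01, 0.99, 99)
prior = np.ones_like(theta)               # any prior works; uniform here

lik_bin = binom.pmf(r, n, theta)          # "stop after n trials"
lik_nb = nbinom.pmf(n - r, r, theta)      # "stop after r successes"
for lik in (lik_bin, lik_nb):
    post = lik * prior
    post /= post.sum()
    print(post[:3])                       # identical posteriors in both cases
```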

Although the likelihood principle is implicit in Bayesian statistics, it was developed as a separate principle by Barnard (1949), and became a focus of interest when Birnbaum (1962) showed that it followed from the widely accepted sufficiency and conditionality principles. Berger and Wolpert (1984/1988) provide an extensive discussion of the likelihood principle and related issues. Other relevant references are Barnard et al. (1962), Fraser (1963), Pratt (1965), Barnard (1967), Hartigan (1967), Birnbaum (1968, 1978), Durbin (1970), Basu (1975), Dawid (1983a), Joshi (1983), Berger (1985b), Hill (1987) and Bayarri et al. (1988). Example 5.1 illustrates the way in which ancillary statistics often arise naturally as a consequence of the way in which data are collected. In general, it is very often the case that the sample size, n, is fixed in advance and that inferences are automatically made conditional on n, without further reflection. It is, however, perhaps not obvious that inferences can be made conditional on n if the latter has arisen as a result of such familiar imperatives as "stop collecting data when you feel tired", or "when the research budget runs out". The kind of analysis given above makes it intuitively clear that such conditioning is, in fact, valid, provided that the mechanism which has led to n does not depend on θ. This latter condition may, however, not always be immediately obviously transparent, and the following definition provides one version of a more formal framework for considering sampling mechanisms and their dependence on model parameters.

Definition 5.2. (Stopping rule). A stopping rule, h, for (sequential) sampling from a sequence of observables x₁ ∈ X₁, x₂ ∈ X₂, …, is a sequence of functions hₙ : X₁ × ⋯ × Xₙ → [0, 1], such that, if x₍ₙ₎ = (x₁, …, xₙ) is observed, then sampling is terminated with probability hₙ(x₍ₙ₎); otherwise, the (n + 1)th observation is made. A stopping rule is proper if the induced probability distribution pₕ(n), n = 1, 2, …, for final sample size guarantees that the latter is finite. The rule is deterministic if hₙ(x₍ₙ₎) ∈ {0, 1} for all (n, x₍ₙ₎); otherwise, it is a randomised stopping rule.

In general, we must regard the data resulting from a sampling mechanism defined by a stopping rule h as consisting of (n, x₍ₙ₎), the sample size, together with the observed quantities x₁, …, xₙ. A parametric model for these data thus involves a probability density of the form p(n, x₍ₙ₎ | h, θ), conditioning both on the stopping rule (i.e., sampling mechanism) and on an underlying labelling parameter θ. But, either through unawareness or misapprehension, this is typically ignored and, instead, we act as if the actual observed sample size n had been fixed in advance, in effect assuming that

$$p(n, \boldsymbol{x}_{(n)} \mid h, \boldsymbol{\theta}) = p(\boldsymbol{x}_{(n)} \mid n, \boldsymbol{\theta}) = p(\boldsymbol{x}_{(n)} \mid \boldsymbol{\theta}),$$

using the standard notation we have hitherto adopted for fixed n. The important question that now arises is the following: under what circumstances, if any, can


we proceed to make inferences about θ on the basis of this (generally erroneous!) assumption, without considering explicit conditioning on the actual form of h? Let us first consider a simple example.
Example 5.2. (Biased stopping rule for a Bernoulli sequence). Suppose, given θ, that x₁, x₂, … may be regarded as a sequence of independent Bernoulli random quantities with p(xᵢ | θ) = Bi(xᵢ | θ, 1), xᵢ = 0, 1, and that a sequential sample is to be obtained using the deterministic stopping rule h, defined by: h₁(1) = 1, h₁(0) = 0, h₂(x₁, x₂) = 1 for all x₁, x₂. In other words, if there is a success on the first trial, sampling is terminated (resulting in n = 1, x₁ = 1); otherwise, two observations are obtained (resulting in either n = 2, x₁ = 0, x₂ = 0 or n = 2, x₁ = 0, x₂ = 1). At first sight, it might appear essential to take explicit account of h in making inferences about θ, since the sampling procedure seems designed to bias us towards believing in large values of θ. Consider, however, the following detailed analysis:

$$p(n = 1, x_1 = 1 \mid h, \theta) = p(x_1 = 1 \mid n = 1, h, \theta) \, p(n = 1 \mid h, \theta) = 1 \cdot p(x_1 = 1 \mid \theta) = p(x_1 = 1 \mid \theta)$$

and, for x = 0, 1,

$$\begin{aligned} p(n = 2, x_1 = 0, x_2 = x \mid h, \theta) &= p(x_1 = 0, x_2 = x \mid n = 2, h, \theta) \, p(n = 2 \mid h, \theta) \\ &= 1 \cdot p(x_2 = x \mid x_1 = 0, \theta) \, p(x_1 = 0 \mid \theta) = p(x_2 = x, x_1 = 0 \mid \theta). \end{aligned}$$

Thus, for all (n, x₍ₙ₎) having non-zero probability, we obtain in this case

$$p(n, \boldsymbol{x}_{(n)} \mid h, \theta) = p(\boldsymbol{x}_{(n)} \mid \theta),$$
the latter considered pointwise as functions of θ (i.e., likelihoods). It then follows trivially from Bayes' theorem that, for any specified p(θ), inferences for θ based on assuming n to have been fixed at its observed value will be identical to those based on a likelihood derived from explicit consideration of h. Consider now a randomised version of this stopping rule, defined by h₁(1) = π, h₁(0) = 0, h₂(x₁, x₂) = 1 for all x₁, x₂. In this case, we have

$$p(n = 1, x_1 = 1 \mid h, \theta) = p(x_1 = 1 \mid n = 1, h, \theta) \, p(n = 1 \mid h, \theta) = 1 \cdot \pi \cdot p(x_1 = 1 \mid \theta),$$

with, for x = 0, 1,

$$\begin{aligned} p(n = 2, x_1 = 0, x_2 = x \mid h, \theta) &= p(n = 2 \mid x_1 = 0, h, \theta) \, p(x_1 = 0 \mid h, \theta) \, p(x_2 = x \mid x_1 = 0, n = 2, h, \theta) \\ &= 1 \cdot p(x_1 = 0 \mid \theta) \, p(x_2 = x \mid \theta) \end{aligned}$$
and

$$\begin{aligned} p(n = 2, x_1 = 1, x_2 = x \mid h, \theta) &= p(n = 2 \mid x_1 = 1, h, \theta) \, p(x_1 = 1 \mid h, \theta) \, p(x_2 = x \mid x_1 = 1, n = 2, h, \theta) \\ &= (1 - \pi) \, p(x_1 = 1 \mid \theta) \, p(x_2 = x \mid \theta). \end{aligned}$$
Thus, for all (n, x₍ₙ₎) having non-zero probability, we again find that

$$p(n, \boldsymbol{x}_{(n)} \mid h, \theta) \propto p(\boldsymbol{x}_{(n)} \mid \theta)$$

as functions of θ, so that the proportionality of the likelihoods once more implies identical inferences from Bayes' theorem, for any given p(θ).

The analysis of the preceding example showed, perhaps contrary to intuition, that, although seemingly biasing the analysis towards beliefs in larger values of θ, the stopping rule does not in fact lead to a different likelihood from that of the a priori fixed sample size. The following, rather trivial, proposition makes clear that this is true for all stopping rules as defined in Definition 5.2, which we might therefore describe as "likelihood non-informative stopping rules".

Proposition 5.1. (Stopping rules are likelihood non-informative). For any stopping rule h, for (sequential) sampling from a sequence of observables x₁, x₂, …, having fixed sample size parametric model p(x₍ₙ₎ | n, θ) = p(x₍ₙ₎ | θ),

$$p(n, \boldsymbol{x}_{(n)} \mid h, \boldsymbol{\theta}) \propto p(\boldsymbol{x}_{(n)} \mid \boldsymbol{\theta}), \quad \boldsymbol{\theta} \in \boldsymbol{\Theta},$$

for all (n, x₍ₙ₎) such that p(n, x₍ₙ₎ | h, θ) ≠ 0.

Proof. This follows straightforwardly on noting that

$$p(n, \boldsymbol{x}_{(n)} \mid h, \boldsymbol{\theta}) = \left[ h_n(\boldsymbol{x}_{(n)}) \prod_{i=1}^{n-1} \left(1 - h_i(\boldsymbol{x}_{(i)})\right) \right] p(\boldsymbol{x}_{(n)} \mid \boldsymbol{\theta})$$

and that the term in square brackets does not depend on θ.

Again, it is a trivial consequence of Bayes' theorem that, for any specified prior density, prior to posterior inference for θ given data (n, x₍ₙ₎) obtained using a likelihood non-informative stopping rule h can proceed by acting as if x₍ₙ₎ were obtained using a fixed sample size n. However, a notationally precise rendering of Bayes' theorem,

$$p(\boldsymbol{\theta} \mid n, \boldsymbol{x}_{(n)}, h) = \frac{p(\boldsymbol{x}_{(n)} \mid \boldsymbol{\theta}) \, p(\boldsymbol{\theta} \mid h)}{\int p(\boldsymbol{x}_{(n)} \mid \boldsymbol{\theta}) \, p(\boldsymbol{\theta} \mid h) \, d\boldsymbol{\theta}},$$


reveals that knowledge of h might well affect the specification of the prior density! It is for this reason that we use the term "likelihood non-informative", rather than just "non-informative", stopping rules. It cannot be emphasised too often that, although it is often convenient for expository reasons to focus at a given juncture on one or other of the likelihood and prior components of the model, our discussion in Chapter 4 makes clear their basic inseparability in coherent modelling and analysis of beliefs. This issue is highlighted in the following example.
Example 5.3. (Biased stopping rule for a normal mean). Suppose, given θ, that x₁, x₂, … may be regarded as a sequence of independent normal random quantities with p(xᵢ | θ) = N(xᵢ | θ, 1), xᵢ ∈ ℜ. Suppose further that an investigator has a particular concern with the parameter value θ = 0 and wants to stop sampling if the sample mean ever takes on a value that is "unlikely", assuming θ = 0 to be true. For any fixed sample size n, if "unlikely" is interpreted as an event having probability less than or equal to α, for small α, a possible stopping rule, using the fact that p(x̄ₙ | n, θ) = N(x̄ₙ | θ, n), might be

$$h_n(\boldsymbol{x}_{(n)}) = \begin{cases} 1, & \text{if } |\bar{x}_n| > k(\alpha)/\sqrt{n}, \\ 0, & \text{otherwise,} \end{cases}$$

for suitable k(α) (for example, k = 1.96 for α = 0.05, k = 2.57 for α = 0.01, or k = 3.31 for α = 0.001). It can be shown, using the law of the iterated logarithm (see, for example, Section 3.2.3), that this is a proper stopping rule, so that termination will certainly occur for some finite n, yielding data (n, x₍ₙ₎). Moreover, defining x̄ₙ = n⁻¹(x₁ + ⋯ + xₙ), we have

$$p(n, \boldsymbol{x}_{(n)} \mid h, \theta) \propto \exp\left\{ -\frac{n}{2} (\bar{x}_n - \theta)^2 \right\}$$

as a function of θ, for all (n, x₍ₙ₎) for which the left-hand side is non-zero. It follows that h is a likelihood non-informative stopping rule. Now consider prior to posterior inference for θ, where, for illustration, we assume the prior specification p(θ) = N(θ | μ, λ), with precision λ ≈ 0, to be interpreted as indicating extremely vague prior beliefs about θ, which take no explicit account of the stopping rule h. Since the latter is likelihood non-informative, we have

$$p(\theta \mid n, \boldsymbol{x}_{(n)}, h) \propto \exp\left\{ -\frac{n}{2} (\bar{x}_n - \theta)^2 \right\} \mathrm{N}(\theta \mid \mu, \lambda),$$

by virtue of the sufficiency of (n, x̄ₙ) for the normal parametric model. The right-hand side is easily seen to be proportional to exp{−½Q(θ)}, where

$$Q(\theta) = (n + \lambda) \left( \theta - \frac{n\bar{x}_n + \lambda\mu}{n + \lambda} \right)^2,$$

which implies that

$$p(\theta \mid n, \boldsymbol{x}_{(n)}, h) = \mathrm{N}\!\left(\theta \,\Big|\, \frac{n\bar{x}_n + \lambda\mu}{n + \lambda}, \, n + \lambda\right) \approx \mathrm{N}(\theta \mid \bar{x}_n, n)$$

for λ ≈ 0.
One consequence of this vague prior specification is that, having observed (n, x̄ₙ), we are led to the posterior probability statement

$$P\left[ \theta \in \left( \bar{x}_n \pm \frac{k(\alpha)}{\sqrt{n}} \right) \,\Big|\, n, \bar{x}_n \right] \approx 1 - \alpha.$$
But the stopping rule h ensures that |x̄ₙ| > k(α)/√n. This means that the value θ = 0 certainly does not lie in the posterior interval to which someone with initially very vague beliefs would attach a high probability. An investigator knowing θ = 0 to be the true value can therefore, by using this stopping rule, mislead someone who, unaware of the stopping rule, acts as if initially very vague. However, let us now consider an analysis which takes into account the stopping rule. The nature of h might suggest a prior specification p(θ | h) that recognises θ = 0 as a possibly "special" parameter value, which should be assigned non-zero prior probability (rather than the zero probability resulting from any continuous prior density specification). As an illustration, suppose that we specify

$$p(\theta \mid h) = \pi \, 1_{\{0\}}(\theta) + (1 - \pi) \, 1_{\{\theta \neq 0\}}(\theta) \, \mathrm{N}(\theta \mid 0, \lambda_0),$$

which assigns a "spike" of probability, π, to the special value θ = 0, and assigns 1 − π times a N(θ | 0, λ₀) density to the range θ ≠ 0. Since h is a likelihood non-informative stopping rule and (n, x̄ₙ) are sufficient statistics for the normal parametric model, we have

$$p(\theta \mid n, \bar{x}_n, h) \propto \mathrm{N}(\bar{x}_n \mid \theta, n) \, p(\theta \mid h).$$

The complete posterior p(θ | n, x̄ₙ, h) is thus given by

$$p(\theta \mid n, \bar{x}_n, h) = \pi^* \, 1_{\{0\}}(\theta) + (1 - \pi^*) \, 1_{\{\theta \neq 0\}}(\theta) \, \mathrm{N}\!\left(\theta \mid (n + \lambda_0)^{-1} n\bar{x}_n, \, n + \lambda_0\right),$$

where, since

$$\int \mathrm{N}(\bar{x}_n \mid \theta, n) \, \mathrm{N}(\theta \mid 0, \lambda_0) \, d\theta = \mathrm{N}\!\left(\bar{x}_n \,\Big|\, 0, \, \frac{n\lambda_0}{n + \lambda_0}\right),$$

it is easily verified that

$$\frac{\pi^*}{1 - \pi^*} = \frac{\pi}{1 - \pi} \cdot \frac{\mathrm{N}(\bar{x}_n \mid 0, n)}{\mathrm{N}\!\left(\bar{x}_n \mid 0, \, n\lambda_0/(n + \lambda_0)\right)} \,.$$
The posterior distribution thus assigns a "spike" π* to θ = 0 and assigns 1 − π* times a N(θ | (n + λ₀)⁻¹nx̄ₙ, n + λ₀) density to the range θ ≠ 0. The behaviour of this posterior density, derived from a prior taking account of h, is clearly very different from that of the posterior density based on a vague prior taking no account of the stopping rule. For qualitative insight, consider the case where actually θ = 0 and α has been chosen to be very small, so that k(α) is quite large. In such a case, n is likely to be very large and at the stopping point we shall have x̄ₙ ≈ k(α)/√n. This means that

$$\frac{\pi^*}{1 - \pi^*} \approx \frac{\pi}{1 - \pi} \left( \frac{n + \lambda_0}{\lambda_0} \right)^{1/2} \exp\left\{ -\tfrac{1}{2} k^2(\alpha) \right\}$$

for large n, so that knowing the stopping rule and then observing that it results in a large sample size leads to an increasing conviction that θ = 0. On the other hand, if θ is appreciably different from 0, the resulting sample size n will tend to be small and the posterior will be dominated by the N(θ | (n + λ₀)⁻¹nx̄ₙ, n + λ₀) component.
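The qualitative behaviour just described can be checked by simulation. The sketch below is purely illustrative (an arbitrary α, and a cap on n so the demo always terminates): it runs the stopping rule on data generated with θ = 0 and records the final sample mean, which always sits just beyond the k(α)/√n boundary.

```python
import numpy as np

rng = np.random.default_rng(3)
k, n_max = 2.57, 100_000       # k(alpha) for alpha = 0.01; cap just for the demo

def run_once():
    """Sample until |xbar_n| > k / sqrt(n), with theta = 0 generating the data."""
    s, n = 0.0, 0
    while n < n_max:
        s += rng.normal()
        n += 1
        if abs(s / n) > k / np.sqrt(n):
            break
    return n, s / n

for _ in range(3):
    n, xbar = run_once()
    print(n, round(xbar * np.sqrt(n), 3))   # |xbar| * sqrt(n) just exceeds k
```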

5.1.5 Decisions and Inference Summaries

In Chapter 2, we made clear that our central concern is the representation and revision of beliefs as the basis for decisions. Either beliefs are to be used directly in the choice of an action, or are to be recorded or reported in some selected form, with the possibility or intention of subsequently guiding the choice of a future action. With slightly revised notation and terminology, we recall from Chapters 2 and 3 the elements and procedures required for coherent, quantitative decision-making. The elements of a decision problem in the inference context are:
(i) a ∈ A, available "answers" to the inference problem;
(ii) ω ∈ Ω, unknown states of the world;
(iii) u : A × Ω → ℜ, a function attaching utilities to each consequence (a, ω) of a decision to summarise inference in the form of an "answer", a, and an ensuing state of the world, ω;


(iv) p(ω), a specification, in the form of a probability distribution, of current beliefs about the possible states of the world.

The optimal choice of answer to an inference problem is an a ∈ A which maximises the expected utility,

$$\bar{u}(a) = \int u(a, \boldsymbol{\omega}) \, p(\boldsymbol{\omega}) \, d\boldsymbol{\omega}.$$

Alternatively, if instead of working with u(a, ω) we work with a so-called loss function,

$$l(a, \boldsymbol{\omega}) = f(\boldsymbol{\omega}) - u(a, \boldsymbol{\omega}),$$

where f is an arbitrary, fixed function, the optimal choice of answer is an a ∈ A which minimises the expected loss,

$$\bar{l}(a) = \int l(a, \boldsymbol{\omega}) \, p(\boldsymbol{\omega}) \, d\boldsymbol{\omega}.$$

It is clear from the forms of the expected utilities or losses which have to be calculated in order to choose an optimal answer that, if beliefs about unknown states of the world are to provide an appropriate basis for future decision making, where, as yet, A and u (or l) may be unspecified, we need to report the complete belief distribution p(ω). However, if an immediate application to a particular decision problem, with specified A and u (or l), is all that is required, the optimal answer, maximising the expected utility or minimising the expected loss, may turn out to involve only limited, specific features of the belief distribution, so that these "summaries" of the full distribution suffice for decision-making purposes.

In the following headed subsections, we shall illustrate and discuss some of these commonly used forms of summary. Throughout, we shall have in mind the context of parametric and predictive inference, where the unknown states of the world are parameters or future data values (observables), and current beliefs, p(ω), typically reduce to one or other of the familiar forms:

p(θ), initial beliefs about a parameter vector, θ;
p(θ | x), beliefs about θ, given data x;
p(ψ | x), beliefs about ψ = g(θ), given data x;
p(y | x), beliefs about future data y, given data x.


Point Estimates

In cases where ω ∈ Ω corresponds to an unknown quantity, so that Ω is ℜ, or ℜᵏ, or ℜ⁺, or ℜ × ℜ⁺, etc., and the required answer, a ∈ A, is an estimate of the "true" value of ω (so that A = Ω), the corresponding decision problem is typically referred to as one of point estimation. If ω = θ or ω = ψ, we refer to parametric point estimation; if ω = y, we refer to predictive point estimation. Moreover, since one is almost certain not to get the answer exactly right in an estimation problem, statisticians typically work directly with the loss function concept, rather than with the utility function. A point estimation problem is thus completely defined once A = Ω and l(a, ω) are specified. Direct intuition suggests that, in the one-dimensional case, distributional summaries such as the mean, median or mode of p(ω) could be reasonable point estimates of a random quantity ω. Clearly, however, these could differ considerably, and more formal guidance may be required as to when and why particular functionals of the belief distribution are justified as point estimates. This is provided by the following definition and result.

Definition 5.3. (Bayes estimate). A Bayes estimate of ω with respect to the loss function l(a, ω) and the belief distribution p(ω) is an a ∈ A = Ω which minimises ∫ l(a, ω) p(ω) dω.
Proposition 5.2. (Forms of Bayes estimates).
(i) If A = Ω = ℜᵏ, l(a, ω) = (a − ω)ᵗH(a − ω), and H is symmetric positive-definite, the Bayes estimate satisfies Ha = HE(ω). If H⁻¹ exists, a = E(ω), and so the Bayes estimate with respect to quadratic form loss is the mean of p(ω), assuming the mean to exist.
(ii) If A = Ω = ℜ and l(a, ω) = c₁(a − ω)1_{{ω ≤ a}}(ω) + c₂(ω − a)1_{{ω > a}}(ω), the Bayes estimate with respect to linear loss is the quantile such that

$$P(\omega \le a) = c_2 / (c_1 + c_2).$$

If c₁ = c₂, the right-hand side equals 1/2, and so the Bayes estimate with respect to absolute value loss is a median of p(ω).
(iii) If A = Ω ⊆ ℜᵏ and l(a, ω) = 1 − 1_{B_ε(a)}(ω), where B_ε(a) is a ball of radius ε in Ω centred at a, the Bayes estimate maximises

$$\int_{B_\varepsilon(a)} p(\boldsymbol{\omega}) \, d\boldsymbol{\omega}.$$

As ε → 0, the function to be maximised tends to p(a), and so the Bayes estimate with respect to zero-one loss is a mode of p(ω), assuming a mode to exist.


Proof. Differentiating ∫(a − ω)ᵗH(a − ω) p(ω) dω with respect to a and equating to zero yields

$$2\boldsymbol{H} \int (\boldsymbol{a} - \boldsymbol{\omega}) \, p(\boldsymbol{\omega}) \, d\boldsymbol{\omega} = \boldsymbol{0}.$$

This establishes (i). Since

$$\int l(a, \omega) \, p(\omega) \, d\omega = c_1 \int_{\omega \le a} (a - \omega) \, p(\omega) \, d\omega + c_2 \int_{\omega > a} (\omega - a) \, p(\omega) \, d\omega,$$

differentiating with respect to a and equating to zero yields

$$c_1 \int_{\omega \le a} p(\omega) \, d\omega = c_2 \int_{\omega > a} p(\omega) \, d\omega,$$

whence, adding c₁∫_{ω>a} p(ω) dω to each side, we obtain (ii). Finally, since

$$\int l(a, \boldsymbol{\omega}) \, p(\boldsymbol{\omega}) \, d\boldsymbol{\omega} = 1 - \int_{B_\varepsilon(a)} p(\boldsymbol{\omega}) \, d\boldsymbol{\omega},$$

and this is minimised when ∫_{B_ε(a)} p(ω) dω is maximised, we have (iii).
and this is minimised when JB5(", ( w )dw is maximised, we have (iii). a p Further insight into the nature of case (iii) can be obtained by thinking of a ) unimodal, continuous ~ ( u J in one dimension. It is then immediate by a continuity argument that a should be chosen such that

p ( a - E ) = Y("

+ E).

In the case of a unimodal, symmetric belief distribution, p ( u ) , for a single random quantity w', the mean, median and mode coincide. In general, for unimodal. positively skewed, densities we have the relation
mean > median > mode and the difference can be substantial if p ( ~is) markedly skew. Unless, therefore, there is a very clear need for a point estimate, and a strong rationale for a specific one of the loss functions considered in Proposition 5.2, the provision of a single number to summarise p ( d ) may be extremely misleading as a summary of the information available about ii. Of course, such a comment acquires even greater force if p ( w ) is multimodal or otherwise "irregular". For further discussion of Bayes estimators. see. for example, DeGroot and Rao (1963. 1966). Sacks (1963), Farrell (1964), Brown (1973). Tiao and Box (1974). Berger and Srinivasan (1978), Berger (1979, 1986). Hwang (1985, 1988). de la Horra (1987, 1988, 1992). Ghosh (19923, 1992b), Irony (1992) and Spall and Maryak (1992).

5.1 The Bayesian Paradigm

259

Credible regions We have emphasised that, from a theoretical perspective, uncertainty about an unknown quantity of interest, w ,needs to be communicated in the form of the full (prior, posterior or predictive) density, p ( w ) ,if formal calculation of expected loss or utility is to be possible for any arbitrary future decision problem. In practice, however. p ( w ) may be a somewhat complicated entity and it may be both more convenient, and also sufficient for general orientation regarding the uncertainty about w ,simply to describe regions C R of given probability undery(o). Thus, for example, in the case where R C 9, the identification of intervals containing 50%. 90%. 95% or 99% of the probability under p ( w ) might suffice to give a good idea of the general quantitative messages implicit in p ( w ) . This is the intuitive basis of popular graphical representations of univariate distributions such as box plots.

Definition 5.4. (Credible Region). A region C C_ R such that

is said to be a 100( 1 - a)%credible region for w ,with respect to p ( w ) . If R G 92, connected credible regions will be referred to as credible intervals. If p(w ) is a (prior-posterior-predictive)density, we refer to (prior-posterior-predictive)credible regions.

Clearly, for any given a there is not a unique credible region-even if we restrict attention to connected regions, as we should normally wish to do for obvious ease of interpretation (at least in cases where p(w) is unimodal). For given 0, p ( w ) and fixed a,the problem of choosing among the subsets C G R such that J p ( w )dw = 1 - a could be viewed as a decision problem, provided that we are C willing to specify a loss function, I(C,w), reflecting the possible consequences of quoting the 100(1- a)% credible region C. We now describe the resulting form of credible region when a loss function is used which encapsulates the intuitive idea that, for given a,we would prefer to report a credible region C whose size llCll (volume, area, length) is minimised.

Proposition5.3. (Minimal size credible regions). Let p(w)be a probability density for w E R almost everywhere continuous; given a,0 < a < 1. if A = {C; ( ~C ) = 1 - a } # 0 a n d P E

then C is optimal ifand only ifit has thepmperty that p(w1) 2 p(w2)for all w 1 E C, w2 $ C (exceptpossibly for a subset of R of Zen, probability). !

260
Proof. It follows straightforwardly that, for any C E A.

5 Inference

A
irif

l(C.w)p(w) = k1'1 dw 1(1

+ 1 - a.

so that an optimal C must have minimal size. If C has the stated property and D is any otherregion belonging to A. then since C = ( C n D ) u ( C n D ' ) , D= ( C ' n D ) u ( C " n D ) a n d P ( w E C )= P(w E D). we have

wcC'r81r'

p(w)ll~ D 11 5 n '

* (' / I f .

p(w)l cw p(w)riw 5
S I I ~p(w)llc"

=I
<

n DII

(*'l)Ll

W:-PnIl

with

so that IjC n D'll 2 IIC' n Dll. and hence IIC((5 IlDll.


If C does not have the stated property, there exists A C C such that for all w 1E A, there exists wz $ C such that p(w2) p(wI). B G C' be such that ! Let P(w E A ) = P(w E B)and p(w2)> p(wl) all w? E B and 0 1 E .1.Define for D = (C n A') U B. Then D E A and by a similar argument to that given above the result follows by showing that llDll < IlC'll.
cj

The property of Proposition 5.3 is worth emphasising in the form ofa definition
(Box and Tiao, 1965).

Definition 5.5. (Highestprobability density (HPD) regions).


A region C C fl is said to be a 100(1- a ) %highesrprobahilip density region for w with respect to p(w)i f

(i) P(w E c') = 1 - (1 (ii) p(wl) p(w2) for all w1 E C'and w? @ C e.rc.eptpossihl~,forcrsiih2 ' . set of $1 having prohahilit?. zero. If p ( w ) is a fprior-po.~teri-ior-predi~.ti~.e) we refer to highest (priordensity, posterior-predictive) density regions.
Clearly, the credible region approach to summarising p(w)is not particularly useful in the case of discrete $2, since such regions will only exist for limited choices of ( t . The above development should therefore be understood as intended for the case of continuous fl. For a number of commonly occurring univariate forms of p(w), there exist tables which facilitate the identification of HPD intervals for a range of values of ct

5.1 The Buyesiun Paradigm

261

WO

Figure 5.la woalmost as plausible us all (u E C

U O

Figure 5.lb w much less plausible than most w E C o

(see, for example, lsaacs ef al.. 1974, Ferrandiz and Sendra,1982, and Lindley and Scott, 1985). In general, however, the derivation of an HPD region requires numerical calculation and, particularly if p ( w )does not exhibit markedly skewed behaviour, it may be satisfactory in practice to quote some more simply calculated credible re-

262

5 Inference

gion. For example, in the univariate case, conventional statistical tables facilitate the identification of intervals which exclude equi-probable tails of p ( w ) for many standard distributions. Although an appropriately chosen selection of credible regions can serve to give a useful summary of p ( w ) when we focus just on the quantity w. there is a fundamental difficulty which prevents such regions serving, in general, as a proxy for the actual density y ( w ) . The problem is that of lack of invariance under parameter transformation. Even if v = g ( w ) is a one-to-one transformation, it is easy to see that there is no general relation between HPD regions for w and v. In addition, there is no way of identifying a marginal HPD region for a (possibly transformed) subset of components of w from knowledge of the joint HPD region. In cases where an HPD credible region C is pragmatically acceptable as a crude summary of the density p ( w ) . then, particularly for small values of c1 (for example, 0.05,O.Ol). a specific value wg E fl will tend to be regarded as somewhat ! implausible if wg $ C. This, of course, provides no justification for actions such as rejecting the hypothesis that w = w0. If we wish to consider such actions, we must formulate a proper decision problem, specifying alternativeactions and the losses consequent on correct and incorrect actions. Jnferenccs about a specific hypothesised value w,)of a random quantity w in the absence of alternative hypothesised values are often considered in the general statistical literature under the heading of significance testing. We shall discuss this further in Chapter 6. For the present. it will suffice to note-as illustrated in Figure 5 . I -that even the intuitive notion of implausibility if wl) @ cdepends much more on the complete characterisation of p ( w ) than on an either-or assessment based on an HPD region. For further discussion of credible regions see. for example. Pratt (1961 ). Aitchison ( 1964. 1966). Wright ( 1986) and DasGupta ( 199 I ). Hypothesis Testing The basic hypothesis testing problem usually considered by statisticians may be described as a decision problem with elements

together with p ( w ) , where 8 E 8 = 81)Uc31, the parameter labelling a paramctric is model, p(z 1 O ) , A = {a,).( I l } . with u l ( a l , )corresponding to rejecting hypothesis Ho(H1). loss function and = I,,. I . j E (0. 1). with the l,, reflecting the relative seriousness of the four possible consequences and, typically. /(H, = / I I = 0. Clearly, the main motivation and the principal use of the hypothesis testing framework is in model choice and comparison, an activity which has a somewhat different flavour from decision-making and inference within the context of an accepted model. For this reason. we shall postpone a detailed consideration of the

5.1 The Bayesian Paradigm

263

topic until Chapter 6, where we shall provide a much more general perspective on model choice and criticism. General discussions of Bayesian hypothesis testing are included in Jeffreys (1939/1961), Good (1950, 1965, 1983), Lindley (1957, 1961b, 1965, 1977), Edwards el al. (1963), Pratt (1965). Smith (1965). Farrell(1968), Dickey (197 1,1974, 1977), Lempers (1971), Rubin (1971), Zellner (1971), DeGroot (1973). Learner ( 1978). Box (1980), Shafer (1982b),Gilio and Scozzafava (1985), Smith, (1986), Berger and Delampady ( 1987),Berger and Sellke ( 1987)and Hodges ( 1990,1992).

5.1.6

Implementation Issues

Given a likelihood p ( z 18) and prior density p ( 8 ) , the starting point for any form of parametric inference summary or decision about 8 is the joint posterior density

and the starting point for any predictive inference summary or decision about future observables y is the predictive density

P(Y 12) = /P(Y

1%)

do.

It is clear that to form these posterior and predictive densities there is a technical requirement to perform integrations over the range of 8. Moreover, further summarisation, in order to obtain marginal densities, or marginal moments, or expected utilities or losses in explicitly defined decision problems, will necessitate further integrations with respect to components of 8 or y, or transformations thereof. The key problem in implementing the formal Bayes solution to inference reporting or decision problems is therefore seen to be that of evaluating the required integrals. In cases where the likelihoodjust involves a single parameter, implementation just involves integration in one dimension and is essentially trivial. However, in problems involving a multiparameter likelihood the task of implementation is anything but trivial, since, if 8 has k components, two k-dimensional integrals are and for required just to form p(8 I z) p(y 12). Moreover, in the case of p ( 8 I z), example, Ic (k- 1)-dimensionalintegrals are required to obtain univariate marginal density values or moments, (k- 2)-dimensionalintegrals are required to obtain bivariate marginal densities, and so on. Clearly. if k is at all large, the problem of implementation will, in general. lead to challenging technical problems, requiring simultaneous analytic or numerical approximation of a number of multidimensional integrals. The above discussion has assumed a given specification of a likelihood and prior density function. However, as we have seen in Chapter 4, although a specific mathematical form for the likelihood in a given context is very often implied

(i)

5 Injerence

or suggested by consideration of symmetry, sufficiency or experience, the mathematical specification of prior densities is typically more problematic. Some of the problems involved-such as the pragmatic strategies to be adopted in translating actual beliefs into mathematical form- relate more to practical methodology than to conceptual and theoretical issues and will be not be discussed in detail in this volume. However, many of the other problems of specifying prior densities are closely related to the general problems of implementation described above. as exemplified by the following questions:
(i) given that, for any specific beliefs. there is some arbitrariness in the precise choice of the mathematical representation of a prior density. are there choices which enable the integrations required to be carried out straightforwardly and hence permit the tractable implementation of a range of analyses. thus facilitating the kind of interpersonal analysis and scientific reporting referred to in Section 4.8.2 and again later in 6.3.3? (ii) if the information to be provided by the data is known to be far grcater than that implicit in an individual's prior beliefs. is there any necessity for a precise mathematical representation of the latter. or can a Bayesian implementation proceed purely on the basis of this qualitative understanding?

(iii) either in the context of interpersonal analysis. or as a special form of actual individual analysis. is there a formal way of representing the beliefs of an individual whose prior information is to be regarded as minimal. relative to the information provided by the data'? (iv) for general forms of likelihood and prior density, are there analytic/numerical techniques available for approximating the integrals required for implementing Bayesian methods?
Question (i) will be answered in Section 5.2, where the concept ofa cottjugatu prior density will be introduced. Question(ii)will beansweredinpartat theendofSectionS.2and in moredetail in Section 5.3, where an approximate "large sample" Bayesian theory involving asymptotic posterior norniality will be presented. Question (iii) will be answered in Section 5.4. where the information-based concept of a reference prior density will be introduced. An extended historical discussion ofthis celebrated philosophical problem of how to represent "ignorance" will be given in Section 5.6.2. Question (iv) will be answered in Section 5.5. where classical applied analysis techniques such as Lupluce's approxiniaticm for integrals will be briefly reviewed in the context of implementing Bayesian inference and decision summaries. together with classical numerical analytical techniques such as Gmss-Hennirc. quadrature and stochastic simulation techniques such as intportuncv sanipling. sumplirtg-itnportcirtc.e-resampli~ig Markov chain Monte Curlo. and

5.2 Conjugate Analysis

265

5.2
5.2.1

CONJUGATE ANALYSIS
Conjugate Families

The first issue raised at the end of Section 5.1.6 is that of tractability. Given a likelihood function p ( s 18). for what choices of p ( 8 ) are integrals such as

easily evaluated analytically? However, since any particular mathematical form of p ( 8 ) is acting as a representation of beliefs-either of an actual individual, or as part of a stylised sensitivity study involving a range of prior to posterior analyseswe require, in addition to tractability, that the class of mathematical functions from which p ( 8 ) is to be chosen be both rich in the forms of beliefs it can represent and also facilitate the matching of beliefs to particular members of the class. Tractability can be achieved by noting that, since Bayes theorem may be expressed in the form

both p(8 I x) and p ( 8 ) can be guaranteed to belong to the same general family of mathematical functions by choosing p ( 8 ) to have the same structure as p ( z I 8 ) , when the latter is viewed as a function of 8. However, as stated, this is a rather vacuous idea, since p(8 I s)and p ( 8 ) would always belong to the same general family of functions if the latter were suitably defined. To achieve a more meaningful version of the underlying idea, let us first m a l l (from Section 4.5) that if t = t ( s )is a sufficient statistic we have

so that we can restate our requirement for tractability in terms of p ( 8 ) having the same structure as p ( t 18). when the latter is viewed as a function of 8. Again, however, without further constraint on the nature of the sequence of sufficient statistics the class of possible functionsp(8) is too large to permit easily interpreted matching of beliefs to particular members of the class. This suggests that it is only in the case of likelihoods admitting sufficient statistics of fixed dimension that we shall be able to identify a family of prior densities which ensures both tractability and ease of interpretation. This motivates the following definition. Definition 5.6. (Conjugatepriorfamily). The conjugatefamily of prior densities for 8 E 8,with respect to a likelihood p ( x I 8 ) with suficienr statistic t = t ( x ) = {n,s ( x ) }of abed dimension k independent of that of x,is

266
where

5 inference

and

From Section 4.5 and Definition 5.6, it follows that the likelihoods for which conjugate prior families exist are those corresponding to general exponential family parametric models (Definitions 4.10 and 4.1 I), for which, given j,h. t$ and c,

The exponential family model is referred to as regular or non-regular, respectively, according as X does not or does depend on 8.

Proposition 5.4. (Conjugate families for regular exponential families). If x = ( X I : . . . ,x,,)is a random sample from a regular exponential family
distribution such that

then the conjugate family for 8 has the form

where

is such thar ~

( 7 ) =.

~ ~ [ ~ ( e p e~x ft -, c ~ l ~ , ( e ) d e }< x. ] Tl

Prooj By Proposition 4.10 (the Neyman factorisation criterion), the sufficient statistics for t$ have the form

5.2 Conjugate Analysis

267
00,

so that, for any r = (TO,q ,. . . ,T,) such that J,p(O Ir)dO < prior density has the form

a conjugate

p(eI 7) = p(sl(z) = T I , . . . sk(z) Tk I e, n. = To) =

by Proposition 4.2.

Example 5.4. (Bernoulli likelihood; beru prior). The Bernoulli likelihood has the
form
Jl(S1~ . S ,8) = .. ,

n
I,

,6 (1 - @ ) ' - ' I ' ,

(0 5 6, I 1)

,=I

so that, by Proposition 5.4. the conjugate prior density for 8 is given by p ( e I TI1,T I )
(1 - 8 ) ~ ) X {log ~ P

-assuming the existence of

K(qi.TI

'

(A)
TI}

8'1 (1 - g ) v r l ,

K(m.71) =

1'

8'1 (1 - o)r"-r' do.

Writing (x = T, + 1. and 13 = To - rI+ 1. we have p(H I a.d) x 8" ' ( 1 - 8)83-1, hence, and comparing with the definition of a beta density,
p ( 8 I q).T I ) = p ( 8 I Q .

8) = Be(@1 ck, ij),

> 0. d > 0.

Example 5.5. (Poisson likelihood; gamma prior). The Poisson likelihood has the
form

so that, by Proposition 5.4. the conjugate prior density for 6, is given by


p(6, I ro. r l ) x exp(-.rO@) (V(T

log8)

4 Inference
assuming the existence of

Writing c t = rI + 1and A = T,, we have p(H I (k. .5) x 8 " with the definition of a gamma density.

'ap(

and hence, comparing

Example 5.6. (Normal likelihood; normal-gamma prior). The normal likelihood. with unknown mean and precision. has the form

so that, by Proposition 5.4, the conjugate prior density for H

-c:

(I(. A ) is given by

assuming the existence of K ( q t .rl.T ? ) . given by

Writing ( I = : ( T O f 1). ;1= f(r: - $).7 = T , / T { , , and comparing with the definition ofa normal-gamma density, we have

__

5.2 Conjugate Analysis

269

5.2.2

Canonical Conjugate Analysis

Conjugate prior density families were motivated by considerations of tractability in implementing the Bayesian paradigm. The following proposition demonstrates that, in the case of regular exponential family likelihoodsand conjugate prior densities, the analytic forms of the joint posterior and predictive densities which underlie any form of inference summary or decision making are easily identified.

Proposition 5.5. (Conjugateanalysisfor regular exponentialfamilies). For the exponentid family likelihood and conjugate prior density of Proposition 5.4: (i) the posterior density for 8 is

which proves (ii). a

5 Inference

Proposition 5.5(i) establishes that the conjugate family is closed under sumpling, with respect to the corresponding exponential family likelihood, a concept

which seems to be due to G. A. Barnard. This means that both the joint prior and posterior densities belong to the same, simply defined, family of distributions. (T t,,(z)). the inference process being totally defined by the mapping T under which the labelling parameters of the prior density are simply modified by the addition of the values of the sufficient statistic to form the labelling parameter of the posterior distribution. The inference process defined by Bayes theorem is therefore reduced from the essentially infinite-dimensionalproblem of the transformation of density functions, to a simple, additive finite-dimensionaltransformation. Proposition 5.5(ii) establishes that a similar. simplifyingclosure property holds for predictive densities. The forms arising in the conjugate analysisof a number of standard exponential family forms are summarised in Appendix A. However, to provide some preliminary insights into the prior posterior predictive process described by Proposition 5.5, we shall illustrate the general results by reconsidering Example 5.4.

-+

Example 5.4. (continued). With the Bernoulli likelihood written in its explicit expothe nential family form, and writing r,, = .rI t . . . T rt,. posterior density corresponding to the conjugate prior density, p(B I GI. T!), is given by
[ I ( @ 12. 71,. 71)

/dz I #)I.(# I Tit. TI 1

x ( 1 - H) rxp {log

(-)

1-0

r,,

}(I

- H)

x ex[> {log

( -)H@ I-

TI (/!)}

where q1( ) = n 11, rl ( 7 1 ) = T~ +I.,, , showing explicitly how the inference process reduces to the updating of the prior to posterior hyperparameters by the addition of the sufficient statistics, 11 and r , , . Alternatively. we could proceed on the basis of the original representation of the Bernoulli likelihood, combining it directly with the familiar beta prior density. Be(H I (I. j). \o that p(fllz.o.*j) ( z l O ) p ( f l l o . . ~ ) xp
xH(j
-())-f@I

(I -0)

where (k,, = o + r,,,;j,, = c j ( 1 - r,, and. again, the process reduces to the updating of the + prior to posterior hyperparameters.

5.2 Conjugate Analysis

271

Clearly, the two notational forms and procedures used in the example are equivalent. Using the standard exponential family form has the advantage of displaying the simple hyperparameter updating by the addition of the sufficient statistics. However, the second form seems much less cumbersome notationally and is more transparently interpretable and memorable in terms of the beta density. In general, when analysing particular models we shall work in terms of whatever functional representation seems best suited to the task in hand.
Example 5.4. (continued). Instead of working with the original Bernoulli likelihood, p ( s l . .. . .r,ll@), wecould,ofcourse,work withalikelihooddefinedin termsofthesufficient statistic (R,T ~ ~In .particular, if either n or rl, were ancillary, we would use one or other of ) p(rl,I n , 0) or Y ( R I T,,. 6) and. in either case,

y(e I 71, r,,.0.13) x 8" (1 - ~ ) ~ ' - ~ n @ 'I -(1 ' Taking the binomial form. p(r,,I n. 6 the prior to posterior operation defined by Bayes' ' ) .
B)Ii-I

theorem can be simply expressed, in terms of the notation introduced in Section 3.2.4. as Bi(rll 18.n ) oc Be(8 I a.4) E Be(@ Q I

+ r,,.,$ + n - r , , ) .

The predictive density for future Bernoulli observables, which we denote by


9 = ( Y l 7 . . . . ym) = (G,+I.. . ,Tn+,,,)* .

is also easily derived. Writing

= yl

+ . . . + y,,,

we see that

where

a result which also could be obtained directly from Proposition 5 3 i i ) . If, instead, we were interested in the predictive density for r;,,, it easily follows that
p(rb, Ia,,. A,. r n ) =

I'

p(r:,, 1 r n . B)p(e a,,,, )de d

Comparison with Section 3.2.2 reveals this predictive density to have the binomial-beta form. Bb(r:,, I f t , , . df,.m). The particular case I I J = I is of some interest. since y ( (,, = 1 j ( I , , . j,,. I I I = 1 ) is then the predictive probability assigned to a success on the ( n + 1)th trial. given r , , observed successes in the first rt trials and an initial Be(@ .j)belief about the limiting relative If>. frequency of successes, H. We see immediately. on substituting into the above. that

using the fact that r(t + 1 ) = t l ' ( t ) and recalling. from Section 3.2.2, the lorn ofthe mean of a beta distribution. s With respect to quadratic loss, E(BI n , , ..j,,) ( ( I + r , , J / ( o -- .jt r i ) i the optimal estimate of H given current information, and the above result demonstrates that this should serve as the evaluation ofthe probability of a success o n the next trial. In the case ( I - .i 1 this evaluation becomes (r,,-t I ) / ( r / t-2 ) . which is the celebrated kiplace's r-rclec~.sicc.cessio~r (Laplace. I 8 12). which has served historically to stimulate considerable philosophical debate about the nature of inductive inference. We shall consider this problem further in Example 5. I6 of Section 5.4.4. For an elementary, hut insightful. ilccount of Bayesian inference for the Bernoulli case. see Lindley and Phillips (1976).
2

In presenting the basic ideas of conjugate analysis, we used the following notation for the k-parameter exponential family and corresponding prior form:

and

P(8
the latter being lefined for T such that K ( T )< x. From a notational perspective (cf. Definition 4.12). we can obtain considerable = ( $ ' $ I . . . . c ~ )y = (y,.. . . .yr), where kvf = , simplification by defining c,p,(O)and y, = h ( x , ) ,i = 1. . . . ,k, together with prior hyperparameters rill. yo. so that these forms become

P(Y I + )

= a ( y ) w ( y ' @- t i ( + ) } .
-

Y E 1'.

P ( @ I rLo.Yo) = ('(74). YIJ C'XP { rhYh+

~M11)} 11 E
3

**

for appropriately defined Y , a and real-valued functions u. h and c. We shall refer to these (Definition 4.12) as the cunonicul (or nururul)fi,rrn.s of the exponential family and its conjugate prior family. If Q = %*, we require I t ( ) > 0. yo E 1-

5.2 Conjugate Analysis

273

in order for p ( + I no, yo) to be a proper density; for J! # @, the situation is somewhat more complicated (see Diaconis and Ylvisaker, 1979, for details). We shall typically assume that @ consists of all $ such that s,p(y I $)dg = 1 and that b($) is continuously differentiable and strictly convex throughout the interior of The motivation for choosing no,p0 as notation for the prior hyperparameter is partly clarified by the following proposition and becomes even clearer in the context of Proposition 5.7.

*.

Proposition 5.6. (Canonical conjugate analysis). ffyl, . . . I/,~ the valare ues of I/ resultingfrom a random sample of size nfrom the canonical exponential family parametric model, p(y 1 $), rhen the posterior density corresponding to the canonical conjugateform, p ( $ I no, yo), is given by

Example 5.4. (continued). In the case of the Bernoulli parametric model, we have seen earlier that the pairing of the parametric model and conjugate prior can be expressed as
p(slt9)= (1 - 6 ) e x p

{zlog (1 3
a ( y ) = 1.
b(t+'j) = log(1

The canonical forms in this case are obtained by setting

+ e"),

and, hence, the posterior distribution of the canonical parameter 3 is given by

274

5 inference

Example 5.5. (continued). In the case of the Poisson parametric model. we have seen earlier that the pairings of the parametric model and conjugate form can be expressed
as
p(O I TI,. T I )

p(.r 10) = - c

q ( -0) PXp(.r k1g#) .r! = [ h - ( f ) I] I'Xl)( -T11#)(~XJI(Tt log0).

The canonical forms in this case are obtained by setting

The posterior distribution of the canonical parameter L" is now immediately given by Proposition 5.6.
Example 5.6. (continued). In the case ofthe normal parametric model. we have seen earlier that the pairings of the parametric model and conjugate form can be expressed as

The canonical forms in this caSe are obtained by setting

Again, the posterior distribution of the canonical parameters @ = ( t i I .t'?) is now immediately given by Proposition 5.6. For specific applications, the choice of the representation of the parametric model and conjugate prior forms is typically guided by the ease of interpretation of the parametrisations adopted. Example 5.6 above suffices to demonstrate that the canonical forms may be very unappealing. From a rheorerical perspective. however, the canonical representation often provides valuable unifying insight, as in Proposition 5.6, where the economy of notation makes it straightforward to demonstrate that the learning process just involves a simple weighted average.

w/l)+ nil.,,
rtn

11

of prior and sample information. Again using the canonical forms. we can give a more precise characterisation of this weighted average.

5.2 Conjugate Analysis

275

Proposition 5.7. (Weightedavemgeform of posterior expectation). I y1 . . . , y, are the values of y resulting from a random sample of size n f from the canonical exponential family parametric model.
!

Proof. By Proposition 5.6, it suffices to prove that E(Vb(@)no, yo) = yo. I


But

J, VP(+ I n o . Yo)d+.

This establishes the result. a Proposition 5.7 reveals, in this natural conjugate setting, that the posterior expectation of Vb(+), that is its Bayes estimate with respect to quadratic loss (see Proposition 5.2), is a weighted average of yo and 8,. The former is the prior estimate of Vb(+);the latter can be viewed as an intuitively natural sample-based estimate of Vb(+),since

and hence E ( y I +) = E ( g nI +) = Vb(+). For any given prior hyperparameters, (710,yo),as the sample size n becomes large, the weight, 71. tends to one and the sample-based information dominates the posterior. In this context, we make an important point alluded to in our discussion of objectivity and subjectivity, in Section 4.8.2. Namely, that in the stylised setting of a group of individuals agreeing on an exponential family parametric form, but assigning different conjugate priors, a sufficiently large sample will lead to more or less identical posterior beliefs. Statements based on the latter might well, in common parlance. be claimed to be objective. One should always be aware, however, that this is no more than a conventional way of indicating a subjective consensus, resulting from a large amount of data processed in the light of a central core of shared assumptions.

5 Inference
Proposition 5.7 shows that conjugate priors for exponential family parameters imply that posterior expectations are linear functions of the sufficient sratistics. It is interesting to ask whether other forms of prior specification can also lead to linear posterior expectations. Or. more generally. whether knowing or constraining posterior moments to be of some simple algebraicform suffices to characterice possible families of prior distributions. These kinds of questions are considered in detail in, for example, Diaconis and Ylvisaker (1979) and GoeI and DeGrim ( 1980). In particular. it can be shown, under some regularity conditions. that. for continuous exponential families, linearity of the posterior expectation does imply that the prior must be conjugate.

The weighted average form of posterior mean.

obtained in Proposition 5.7, and also appearing explicitly in the prior to posterior updating process given in Proposition 5.6 makes clear that the prior parameter. ii+ attached to the prior mean, y , for Ob(+). plays an analogous role to the sample size, 11, attached to the data mean y,,. The choice of an itII which is large relative to n thus implies that the prior will dominate the data in determining the posterior (see. however, Section 5.6.3 for illustration of why a weighted-average form might not be desirable). Conversely, the choice of an which is small relative to 11 ensures that the form of the posterior is essentially determined by the data. In particular, this suggests that a tractable analysis which "lets the data speak for themselves" can be obtained by letting 7)u -+0. Clearly, however. this has to be regarded as simply a convenient approximation to the posterior that would have been obtained from the choice of a prior with small, but positive nqj.The choice i t 0 = 0 typically implies a form of p ( @ 1 n o . yo) which does not integrate to unity (a so-called improper density) and thus cannot be interpreted as representing an actual prior belief. The following example illustrates this use of limiting, improper conjugate priors in the context of the Bernoulli parametric model with beta conjugate prior, using standard rather than canonical forms for the parametric models and prior densities.
Example 5.4. (confinuedJ. We have 5een that if I.,, = .r, . . . -t .r,, denotes the number of successes in ri Bernoulli trials. the conjugate beta prior density. Be(@ c t . j ) . lor I the limiting relative frequency of successes. H. leads to a Be(# I c i r,,. .{ + i t - r , ) posterior for H, which has expectation

I where 7: = ( c t + ,+ t z ) ' t i , providing a weighted average between the prior mean for H and the frequency estimate provided by the data. In this notalion. t),, 0 corresponds to ( I 0.

5.2 Conjugate Anulysis

277

3 4 0. which implies a Be(8 I r,,,n - r,()approximation to the posteriordistribution, having expectation r,,/7t~ limiting prior form, however, would be The

which is not a proper density. As a technique for arriving at the approximate posterior distribution, it is certainly convenient to make formal use of Bayes theorem with this improper form playing the role of a prior, since
p ( 8 I ct = 0. d = 0. I t . r,#)x p ( ~ . ,I,n8)p(8I c1 = 0. d = 0 )

XX6r(l-8) rJ8l ( 1 - 8 ) x Be(BIr,,,n - r , , ) .

It is important to recognise. however, that this is merely an approximation device and in no way justifies regarding p ( 8 I o = 0. (3 = 0) as having any special significance as a representation of prior ignorance. Clearly, any choice of ct, ;j small compared with I,,, rt - r,, (for example, a = ,1 = or ct = d = 1 for typical values of r v . - r,,)will lead to an almost identical posterior distribution for 8. A further problem of interpretation arises if we consider inferences for functions of 8 . Consider, for example, the choice ct = ,3 = 1, which implies a uniform prior density for 8. At an intuitive level, it might be argued that this represents complete ignorance about 8 which should, presumably, entail complete ignorance about any function, g(8). of 8. . However, p ( 8 ) uniform implies that p(g(8)) is not uniform. This makes it clear that ad hoc intuitive notions of ignorance, or of what constitutes a non-informativeprior distribution (in some sense), cannot be relied upon. There is a need for a more formal analysis of the concept and this will be given in Section 5.4, with further discussion in Section 5.6.2.

Proposition 5.2 established the general forms of Bayes estimates for some commonly used loss functions. Proposition 5.7 provided further insight into the (posterior mean) form arising from quadratic loss in the case of an exponential family parametric model with conjugate prior. Within this latter framework, the following development, based closely on GutiCrrez-Peiia ( 1992), provides further insight into how the posterior mode can be justified as a Bayes estimate. We recall, from the discussion preceding Proposition 5.6, the canonical forms of the k-parameter exponential family and its corresponding conjugate prior:

and
P(21,lwlo,Yo) = 4% Yo)exrJ{ WY;+

-7~0Wr))
7

21, E

Q,

for appropriately defined Y, and real-valued functions 6, and c. Q b

270
t E Y. Funherdefine

5 Inference

Consider p(+lno. yo) and define d(s. t) = - logc.(s.s It). with s > 0 and

= [ r f , ( s , .....d&.t)]' t)

and &(s. t) = &f(s. t)/i)n. As a final preliminary, recall the logarithmic diver-

between two distributions y(zl6) and p(xl0o).We can now establish the following technical results.

Proposition 5.8. (Logarithmic divergence befween conjugatedistributions). With respect to the canoniculform of the k-parameter exponentialfumily and
its corresponding conjugate prior:

( i ) 6(+l+l,j = b(+,) - b(+) + (+ - +,,)'W+): ( i i ) E[b(+l$~~~j] r o . rtnyo) b($l)) =4dr

+)Lo-'

{ k + [ ~ ~ ~ ( ~ J~ JI I ~. ~ I -) +(I]'?~I)$/(I}. J
- b(+)

Proof. From the definition of logarithmic divergence we see that

d"+llctIl) = h(+,,j
and (i) follows. Moreover,

+ (+ - +I,)'Ey,+[~l.

E[6(+I+n)I

= b(+n) - E[b(+)]+ E[+'Cb(+)I - +hE[Ch(+)I.

Differentiation of the identity

lag/Fxp{t'+

- .sb(+)}d+ = d ( s , t ) .

with respect to s, establishes straightforwardly that

E[b(+)] -4dW %Y,,). = = Recalling that E(Vb(+)] yo,we can write. for 1 = 1, . . . . k.

Differentiating this identity with respect to t,. and interchanging the order of differentiation and integration. we see that

J L.l,b,(+,)C(S,s-~t)cx,,(t"gD sb(+)}d+ for 7 = 1 . . . . .k, so that and (ii) follows.

= s "I

+ d,(s.t)f,].
yo)

E [ + ' v ~ ( ~ L )7]t , ' [ k = a

+ ~ d ( 7 1 n~opo)'(?tng/o)] - QG'+:)(w. .

5.2 ConjugafeAnalysis
This result now enables us to establish easily the main result of interest.

279

siae) Proposition 5.9. (Conjugateposterior modes as Bayes e t m t s . With respect to the loss function I ( @ , $) = 6($la).the Bayes estimate for $. derived from independent observations yl, . . . y, from the canonical k-parameter exponential family p(y I+) and corresponding conjugare prior p($lno. yo),is the posterior mode, $*, which satisfies

Vb($*)= (no + n)-l(nOyo + n&).


with

a,, = n-(y, + . . . + yn).


constant

Proof. We note first (see the proof of Proposition 5.6) that the logarithm of the posterior density is given by

+ (w,yo+ ng,,)$ - (no + n)b($),


+

from which the claimed estimating equation for the posterior mode, $*, is immediately obtained. The result now follows by noting that the same equation arises in the o minimisation of (ii) of Proposition 5.8, with n + n replacing no, and myo n&, replacing noyo. a For a recent discussion of conjugate priors for exponential families, see Consonni and Veronese (1992b). In complex problems, conjugate priors may have strong, unsuspected implications; for an example, see Dawid (1988a). 5.2.3

Approximations with Conjugate Families

Our main motivation in considering conjugate priors for exponential families has been to provide tractable prior to posterior (or predictive) analysis. At the same time, we might hope that the conjugate family for a particular parametric model would contain a sufficiently rich range of prior density shapes to enable one to approximate reasonably closely any particular actual prior belief function of interest. The next example shows that might well not be the case. However, it also indicates how, with a suitable extension of the conjugate family idea, we can achieve both tractability and the ability to approximate closely any actual beliefs.
Example5.7. (The spun coin ). Diaconis and Ylvisaker ( 1979)highlight the fact that, whereas a tossed coin typically generates equal long-run frequencies of heads and tails. this is not at all the case if a coin is spun on its edge. Experience suggests that these long-run frequencies often turn out for some coins to be in the ratio 2: 1 or 1:2, and for other coins even as extreme as 1:4. In addition, some coins do appear to behave symmetrically. Let us consider the repeated spinning under perceived identical conditions of a given coin, about which we have no specific information beyond the general background set out

280

5 Inference

above. Under the circumstances specified, suppose we judge the sequence of outcomes t o be exchangeable, so that a Bernoulli parametric model, together with a prior density for the long-run frequency of heads, completely specifies our belief model. How might we represent this prior density mathematically? We are immediately struck by two things: first. i n the light of the information given. any realistic prior shape will be at least bimodal. and possibly trimodal; secondly. the conjugate family for the Bernoulli parametric model is the beta family (see Example 5.4).which does not contain bimodal densities. It appears. therelore. that an insistence on tractability. in the sense of restricting ourselves to conjugate priors. would preclude an honest prior specification. However. we can easily generate multimodal shapes by considering rtri.rtrtrcrs of beta densities,
111

p ( H 1 l r . a . P )= C n , B e ( B I r t , . . j , ) .
8 . 1

z,,, with mixing weights x ; > (1. z l +. . .i- = 1. attached toaselectionofconjugatedensities. Be(0 1 r t , . d,). i = 1 . . . . . I ) ) . Figure 5.2 displays the prior density resulting from the mixture
0.5 Be(@I 10.20) t 0.2 Be(@ 15. 15) t (1.3 Be(@ I 121. 10).

which. among other things, reflects a judgement that about 20% of coins seem to behave symmetrically and most of the rest tend to lead to 2: 1 or I:2 ratios. with somewhat more of the latter than the former. Suppose now that we observe / I outcomes z = ( J , . . . . . I , , , ) and that these result in . r,, = .rl + . . . + .r,, heads. so that

. . . , I . , , [ H ) = nH'q - H ) I - ' J = H ' " ( 1 - H ) " - ( ' , .


,-I

Considering the general mixture prior form

5 - 1

we easily see from Bayes' theorem that

where
rt:

= (I,
rl

+ r,,.

J = ;

and

5.2 Conjugate Anulysis


so that the resulting posterior density,

281

is itself a mixture of

rri

beta components. This establishes that the general mixture class of

beta densities is closed under sampling with respect to the Bernoulli model.

In the case considered above, suppose that the spun coin results in 3 heads after 1 0 spins and 14 heads after 50 spins. The suggested prior density corresponds to rrt = 3,
~ = ( 0 . 5 . 0 . 2 . 0 . 3 ) . a=(lO.15.20). ~=(20.15.10).

Figure 5.2 Prior and posteriorsfrom a three-component beta mixture prior density
Detailed calculation yields: for 71 = 10, r,, = 3; x * = (0.77.0.16.0.07),
a = (13.18.23). '

p' = (27,22.17)

for 71 = 50, r,, = 14; x' = (0.90,0.09,O.O0fi). a = (24.29.34), p' = (56,51,46), '
and the resulting posterior densities are shown in Figure 5.2.

202

5 Inference

This example demonstrates that, at least in the case of the Bernoulli parametric model and the beta conjugate family, the use of mixtures of conjugate densities both maintains the tractability of the analysis and provides a great deal of flexibility in approximating actual forms of prior belief. In fact. the same is true for any exponential family model and corresponding conjugate family, as we show in the following.

Proposition 5.10. (Mixtures ofconjugute priors). Let z = ( X I . . . . . x,, ) bc u rundor sample from a regulur exponentia1,faniilj distribution S N ~ that I

P(X

and let
P(e

where, for 1 = 1,..

ure elements o the conjugate jamily. Then f

and

Proof. The results follows straightforwardly from Bayes' theorem and Proposition 5.5. d

5.2 Conjugate Analysis

203

It is interesting to ask just how flexible mixtures of conjugate prior are. The answer is that any prior density for an exponential family parameter can be approximated arbitrarily closely by such a mixture, as shown by Dalal and Hall (1983), and Diaconis and Ylvisaker (1985). However, their analyses do not provide a constructive mechanism for building up such a mixture. In practice, we are left with having to judge when a particular tractable choice, typically a conjugate form, a limiting conjugate form, or a mixture of conjugate forms, is good enough, in the sense that probability statements based on the resulting posterior will not differ radically from the statements that would have resulted from using a more honest, but difficult to specify or intractable, prior. The following result provides some guidance, in a much more general setting than that of conjugate mixtures, as to when an approximate (possibly improper) prior may be safely used in place of an honest prior.

Proposition 5.1 1. (Priorapproximation). Suppose that a belief model is defined by p ( z 1 8 ) und p ( B ) , 8 E 8 and rhut q ( 8 ) is a non-negative function such rhar q ( z ) = J0 p ( z I 8 ) q ( O ) d 8 < 30, where, for some 00 C 0 and a,p E %*, (a) 1 I( 8 ) / q ( 8 ) I 1 -t- a, all 8 E p for (b) p ( e ) / q ( e ) I 0 , f o r 011 8 E ~ 3 . Let P = S p ( 8 I z)d& = S q ( 8 I z ) d e . and , , 4 0 12) = Pta: I @)q(@)/P ( 2 I Q>q(fW8. %?n* S (i) (1 - P)/P I 1 - q ) / q a (ii) q I p ( z > / q ( z )5 (1 + Q>/P (iii) for all 8 E 8, ( 8 I p I a) Ip ( e ) / q ( W q 5 b / q [ (iv) for all 8 E 8,. P / ( I + a)I( 8 I z ) / q ( e 1 5 (1 f a ) / q p 4 (v) for E = max { ( 1 - p ) , ( 1 - q ) } and f : 8 8 such rhat I f( 8 ) I I m,

Proof. (Dickey, 1976). P r (i) clearly follows from at

Clearly,

284

5 Inference

which establishes (ii). Part (iii) follows from (b) and (ii). and (a) and (ii). Finally,

tnvs from

= (1

+ 0 - q ) + ( 1 - + ( 1 - q ) 5 0 + 3:.
11)

which proves (v).

If, in the above, 80 a subset of 8 with high probability under q(O I x)and CL is is chosen to be small and ijnot too large, so that q(0)provides a good approximation 0 to p(0) within 8 and p(0)is nowhere much greater than q(0).then (i) implies that 0,)has high probability under p ( 0 12) and (ii), (iv) and (v) establish that both the respective predictive and posterior distributions, within (30. and also the posterior expectations of bounded functions are very close. More specifically, i f f is taken to be the indicator function of any subset Q" 2 8,( v ) implies that

providing a bound on the inaccuracy of the posterior probability statement made using q(0 12) rather than p(O 12). Proposition 5.1 I therefore asserts that if a mathematically convenient alternative, q ( B ) , to the would-be honest prior, p ( 0 ) . can be found. giving high posterior probability to a set 80C 0 within which it provides a good approximation to p ( 0 ) and such that it is nowhere orders of magnitude smaller than p(0)outside C)O.then q(0)may reasonably be used in place of !>(0). In the case of 8 = 8, Figure 5.3 illustrates, in stylised form. a frequently occurring situation. where the choice q(0) = c. for some constant c. provides

5 3 Asymptotic Analysis .

Fgr 5.3 Typical conditionsfor precise meusurement iue

a convenient approximation. In qualitative terms. the likelihood is highly peaked relative to p ( 0 ) , which has little curvature in the region of non-negligible likelihood. In this situation of precise measurement (Savage, 1962), the choice of the function q(0) = c, for an appropriate constant c. clearly satisfies the conditions of Proposition 5.10 and we obtain

the normalised likelihood function. The second of the implementation questions posed at the end of Section 5.1.6 concerned the possibility of avoiding the need for precise mathematical representation of the prior density in situations where the information provided by the data is far greater than that implicit in the prior. The above analysis goes some way to answering that question; the following section provides a more detailed analysis.

5.3

ASYMPTOTIC ANALYSIS

In Chapter 4, we saw that in representations of belief models for observables involving a parametric model p(x I 0) and a prior specification p ( B ) , the parameter 8 acquired an operational meaning as some form of strong law limit of observables. Given observations z = ( L ~. . . ,L , , ) , the posterior distribution, p(B I z . . ) then describes beliefs about that strong law limit in the light of the information provided by 2 1 , . . . ,x,,.To answer the second question posed at the end of Section 5.1.6. we

286

5 Inference

now wish to examine various properties of p(8 I z) the number of observations as increases; i.e.. as 7) cx. Intuitively, we would hope that beliefs about 8 would become more and more concentrated around the true parameter value; i.e.. the corresponding strong law limit. Under appropriate conditions, we shall see that this is, indeed, the case.

5.3.1

Discrete Asymptotics

Webegin by considering the situation whereC3 = {el. 82.. . . . } consistsofacountable (possibly finite) set of values, such that the parametric model correspondingto the true parameter. 6,. is distinguishablefrom the others, in the sense that the logarithmic divergences, .[y(r I e,)logb)(.rI B f ) / p ( r1 e,)]d.c are strictly larger than zero, for all i # t .

Proposition 5.12. (Discrete asymptotics). kt x = (q... .rl,)be ohser.. vurions for which a belief model is defined by the purumetric model p ( .I: I 8 ) . where8 E 0 = { 8 , . 0 2 } . u n d r h e p r i u r y ( 8 ) = { p i , p 2 . . .}, p , > 0. .... . C,p , = 1. Suppose thut Of E 8 is the true vulue of 8 and thur.for all I # t .

then
r1-x

liin p ( 8 , I z) 1. =

lim p ( O I 12) = 0. i
1 1x

# f.

Proof. By Bayes theorem, and assuming that p(z18) =

nyFl( x , I O ) . p

where

Conditional on O f , the latter is the sum of I I independent identically distributed randomquantities and hence. by the strong law of large numbers (see Section 3.2.3).

as 11

The right-hand side is negative for all i # t. and equals ixro for I = I. so that. XI, s, 0 and S, - x for i # t , which establishes the result.

5.3 Asymptotic Analysis

207

An alternative way of expressing the result of Proposition 5.12, established is for countable 0, to say that the posterior distribution function for 8 ultimately degenerates to a step function with a single (unit) step at 8 = Bt. In fact, this result can be shown to hold, under suitable regularity conditions, for much more general forms of 8.However, the proofs require considerable measure-theoretic machinery and the reader is referred to Berk (1966, 1970)for details. A particularly interesting result is that if the true 8 is not in 8, postethe rior degenerates onto the value in 0 which gives the parametric model closest in logarithmic divergence to the true model.
5.3.2

Continuous Asymptotlcs

Let us now consider what can be said in the case of general 8 about the forms of probability statements implied by p(8 15) for large n. Proceeding heuristically for the moment, without concern for precise regularity conditions, we note that, in the case of a parametric representation for an exchangeable sequence of observables,

If we now expand the two logarithmic terms about theirrespective maxima, m u and 6. assumed to be determined by setting V logp(6) = 0, V logp(z 1 0) = 0, , respectively, we obtain

where R<,,Rn denote remainder terms and

Assuming regularity conditions which ensure that RQ, are small for large n, and R,, ignoring constants of proportionality, we see that
- mo)lHo(e mo)-

:(e - &,)tH(b,L)(O- erf) 2

- m , l ) f H , ( e m,,) -

288
with
HI,

5 Inference

+ W@/t) m,, = H,;' (H I ~ N ( e , )a,,). +I ,


=N o

where m (the prior mode) maximises p ( 8 ) and ell(the niuxiniutn likelihood eso timate) maximises p(z 18). The Hessian matrix. H ( e , , )measures the local cur. vature of the loglikelihood function at its maximum, e,,.and is often called the observed inforniution mutrix. This heuristic development thus suggests that p ( 8 15) will. for large t i . tend to resemble a multivariate normal distribution. Nk(8 I m,,, I , (see Section 3.2.5) H ) whose mean is a matrix weighted average of a prior (modal) estimate and an observation-based (maximum likelihood) estimate. and whose precision matrix is the sum of the prior precision matrix and the observed information matrix. Other approximations suggest themselves: for example, for large r z the prior precision will tend to be small compared with the precision provided by the data and could be ignored. Also. since, by the strong law of large numbers. for all i . j .

we see that H ( b , , )

n I ( & , , )where I ( 8 ) .defined by .

is the so-called Fisher (or expecfed)informution mutrix. We might approximate p(8 lz). therefore. by either Nk(8 I e,,. h , , ) )or Nk(8 I el,.rrl(hl,)), H( where k is the dimension of 8. In the case of 19 E 8 C 'R.

so that the approximate posterior variance is the negative reciprocal of the rate of change of the tirst derivative of log p ( z 1 8) in the neighbourhood of its maximum. Sharply peaked loglikelihoods imply small posterior uncertainty and vice-versa. There is a large literature on the regularity conditions required to justify rnathematically the heuristics presented above. Those who have contributed to the field include: Laplace ( 18 12). Jeffreys ( 1939/ I96 1. Chapteril). LeCarn ( 1953. 1956. 1958.

5.3 Asymptotic Analysis

289

1966, 1970, 1986). Lindley (196lb), Freedman (1963b, 1965), Walker (1969), Chao (1970). Dawid (l970), DeGroot (1970, Chapter lo), Ibragimov and Hasminski (1973), Heyde and Johnstone (1979), Hartigan (1983, Chapter 4), Bermlidez (1985), Chen (1985). Sweeting and Adekola (1987), Fu and Kass (1988), Fraser and McDunnough (1989), Sweeting (1992) and Ghosh et al. (1994). Related work on higher-order expansion approximations in which the normal appears as a leading term includes that of Hartigan (1965). Johnson (1967,1970), Johnson and Ladalla ( 1979) and Crowder (1988). The account given below is based on Chen (1985). In what follows, we assume that 8 E 8 C Rk and that ( p f l ( 8 ) , n 1. = 2 , . . .} is a sequence of posterior densities for 8, typically of the form ppl(8)= p ( 8 I q , . . ,z,,), derived from an exchangeable sequence with parametric model . p(x 18) and prior p ( 8 ) , although the mathematical development to be given does not require this. We define L n (8 ) = l o g p n ( 8 ) ,and assume throughout that, for every n, there is a strict local maximum, m,, of p , (or, equivalently, L,) satisfying:

L:,(m~) L t ( 8 )I 8=mr, 0 =V =
and implying the existence and positive-definiteness of

Cn = (-~::(mn))-,
= a~J) where [LZ(m,)ltJ ( a 2 L , ( ~ ) / a f h I 8=mn. Defining 1 I = (8t8)1i2 Bb(8) ( 8 E 0; 1 - 8 I < 6). we shall 8 and = 8 show that the following three basic conditions are sufficient to ensure a valid normal approximation for p , ( 8 )in a small neighbourhood of m,, as n becomes large. : (cl) Steepness. T i -+ 0 as n + 00, where 5 is the largest eigenvalue of Ell. (c2) Smoothness. For any E > 0 , there exists N and 6 > 0 such that, for any n > N and 8 E Bd(m,,),L,(8) exists and satisfies

I - A ( E 5 L~(8){L(m,2)}~-1 ) 5 I -t A(&).
where I is the k x k identity matrix and A ( ) a k x k symmetric positiveis semidefinite matrix whose largest eigenvalue tends to zero as E + 0. (c3) Concentration. For any 6 > 0. ~ 1 3 6 i pn(8)d8 1 as n + 30. m,) -+ Essentially, we shall see that (cl), (c2) together ensure that, for large n. inside a small neighbourhood of m, the function pll becomes highly peaked and behaves . like themultivariate normaldensitykernelexp{-i (0-m,)C; ( 8 - m , , ) }The final condition (c3) ensures that the probability outside any neighbourhood of m,, becomes negligible. We do not require any assumption that the m,, themselves converge, nor do we need to insist that m, be a global maximum of p n . We I C,, x, implicitly assume, however, that the limit of pr,(mrr) I exists as I I and we shall now establish a bound for that limit.
---f

290
Proposition 5.13. (Bounded eoncentralion). The conditions (cI ) , (c2)implv that
lirn p , ? ( m , , I X , , ~ "5 (2n)-",". ) ~
1,-x

5 Inference

with eyuulity ifand only i f ( c 3 ) holds.


Pmot Given E > 0 consider n > IV and d > 0 as given in ( ~ 2 ) Then. for . . any 8 E Bd(m,,).simple Taylor expansion establishes that a

I),,(e) P ~ ~ , , ) W { L J W- ~ =

, ( m ~ ~ ) ~

= p , l ( m , l ) ~ x p- - ( 0 - m/,)'(I R,,)C,'(0- m,,) where

{k

R,, = ~ ~ ~ ( 8 ' ) { ~ ~ ( m l l ) }1.' ( m l , ) - for some 8 lying between 8 and m,,.It follows that '

- lS2

Q ( E ) ( ~ ( E ) )the

where s,, = h ( l - g( ~ ) ) I ; */ cr ~ , t,, = b(1 g(?))'r2/ZF,,. iF',(i',) and and with largest (smallest) eigenvalues of C , and A ( E )respectively. since, . for any k x k matrix V,
Bdli7(0)c z : (%lV%)"2d} <

.I, . I.

exp { - ; z ' z } d r .
I,,

E Bh!K(O)

where are the largest (smallest) eigenvalues of V. Since (cl) implies that both s, and t , tend to infinity as ri

v2((v')
11

3c, we

have

-- A ( E ) J '

1,-x

lim ~ , ( b I- * x p n ( m l , ) I '~(f2 ~ l ~ ) nlim l~)

'

5 II

+ A(E)('

lirri P r , ( d ) .
1 1x

and the required inequality follows from the fact that 1 A ( E )--t 1 as E 1 / 0 and P,,(b) 5 1 for all n. Clearly. we have equality if and only if h i , , .x El(&)= 1. which is condition (c3).

5.3 Asymptotic Analysis

291

We can now establish the main result, which may colloquially be stated as "8 has an asymptotic posterior Nb(8(mn.Xi') distribution, where L',(mn)= 0 and c,' = --L;(mn):'

Proposition 5.14. (Asymptotic posterior n o d t y ) . For each n, consider p l l ( . ) as the densityfunction of a random quantity 8,. and dejne, using the (c3) notation above, (b, = C;'/2(8, - mn).Then, given ( c l ) and ( ~ 2 ) . is a necessary and suflcient conditionfor ( , to converge in distribution to (b, b where p ( 4 ) = ( 2 ~ ) - exp { - s(b'(b}. ~ / ~
Proof. Given (cl) and (c2). and writing b 2 a,for a,b E !Rk,to denote that all components of b - a are non-negative, it suffices to show that, as n -+ m, P,(a 5 4, 5 b) -, P ( a 5 (b 5 b) if and only if (c3) holds. We first note that

where, by (cl), for any 6 > 0 and sufficiently large n,

It then follows, by a similar argument to that used in Proposition 5.13, that, for any E > 0, P,,(Q5 (b, 5 b) is bounded above by

where
Z(E) =

Z; [I

- A ( E ) ] 'a ' z 5 [ I - A(E)]*'* ~5 b}

and is bounded below by a similar quantity with + A ( E in place of -A(&). ) Given (cl), (c2),as E -+ 0 we have

where Z ( 0 ) = { % ; a z 5 b}. The result follows from Proposition 5.13. 5

292

5 inference

Conditions (cl) and (c2) are often relatively easy to check in specific applications, but (c3) may not be so directly accessible. It is useful therefore to have available alternative conditions which, given (cl), (c2). imply (c3). Two such are provided by the following: (c4) For any d > 0, there exists an intcger lV and d E 3- such that. for any rz > iV and 8 $Z B,j(m,,),
( 8 .

(c5) As (c4), but, with G ( 8 ) = log y(8) for some density (or normalisable positive function) y(8) over 63.

Proposition 5.15. (Alternative conditions). Given ( ~ 1 )(c2), either ( 4 ) . or (c.5) implies ( ~ 3 ) .


Proof. It is straightforward to verify that

given (c4), and similarly, that

given (c4). is bounded (Proposition 5.1 I ) and the remaining terms Sincep,,(m,,) 1 C,, I or the right-hand side clearly tend to zero, it follows that the left-hand side tends to Zero as 11 -+ XJ. a
To understand better the relative ease of checking (c4) or (cS) in applications. we note that. ifp,,(O) is based on data z,

sothat L , , ( 8 ) - L f f ( m , doesnot involve the,often intractable,normalisingconstant l) p(z). Moreover, (c4) does not even require the use of a proper prior for the vector 8. We shall illustrate the use of (c4) for the general case of canonical conjugate analysis for exponential families.

5.3 Asymptotic Analysis

293

Proposition 5.16. (Asymptotic normality under conjugate analysis). Suppose that y l ,. . . ,y n are data resulting from a random sample of size n from the canonical exponential family form

with canonical conjugate prior density

For each n, consider the posterior density

with 5, = Cr=l yi/ri, to be the densiryfunction for a random quantify and &jne 4,1 Ei1r2(+,l- b(m11)).where =

+,,

Then 4,, converges in distribution to 4, where

Nk(@

Proof. Colloquially, we have to prove that has an asymptotic posterior 1 6(m,,)9 i ) distribution, where b(m:,)= (7t0 ~ i ) - ~ ( n , - , y ~ and X njjIl) E, = (7% n)6(mI,). From a mathematical perspective,

where h(+) = [b(m,)]+ - b(+), with b(70) a continuously differentiable and strictly convex function (see Section 5.2.2). It follows that, for each 11, yI,(+) is unimodal with a maximum at = m,, satisfying V h ( m , , ) = 0. By the strict we between concavity of h ( . ) .for any 6 > 0 and B # B , j ( m , ) , have, for some 11 and m,, with angle 0 between - m, and V h ( + + ) ,

++

294
for c = irif { 1 V h ( @ + I): @ $? B,s(rn,)}> 0. It follows that

5 Injerence

L(@) -

JL(mfl)

< -(no

+ r o I@-

m l

< -cl { (@ - m n ) ' K 1 ( + m J } ' where 1'1 = cX-', with X2 the largest eigenvalue of b"(rn,,). hence that (c4) is and satisfied. Conditions (cI), (c2) fo1lows straightforwardly from the fact that

' h;;(@){G;(rnfl)} b"(lC,){b"(mfI)}-' = '.


(n,, + n).y, = b/'(m,,).
the latter not depending on no and 5.13. a

+ r i . and so the result follows by Propositions 5. I2

Example 5.4. (confinued). Suppose that Be(@ n,,.J,$). I where ct,, = ( t + r,,. and :I,, = .I ri - v,,, is the posterior derived from I I Bernoulli trials with I.,, successes and a Be(@0 . J) prior. Proceeding directly, 1

L , , ( o )= iogp,,(@) 1 0 g p (j ~~ + iogP(e) I = ) = (fb, - l ) l o g H S

(,A, - l)log(l - 8 ) - I o g p ( z )

so that
((t,, - 1) ( . I , , - 1) L;,(@) - = H 1 -- H

and

It follows that

Condition (cl) is clearly satisfied since ( - L : ( m , , ) ) ' I) as 1 1 -- x;condition ( c 2 ) follows from the fact that L::(H)is a continuous function of H . Finally, (c4) may be veritied with an argument similar to the one used in the proof of Proposition 5.16. Taking n = c j= 1 for illustration, we see that

and hence that the asymptotic posterior for H is

(As an aside. we note the interesting "duality" between this asymptotic form for H given n . r,,. and the asymptotic distribution for r , , / n given H, which, by the central limit theorem,

has the form

Further reference to this kind of "duality" will be given in Appendix B.)

5 3 Asymptotic Analysis .

295

Asymptotics under Transformations The result of Proposition 5.16 is given in terms of the canonical paramevisation of the exponential family underlying the conjugate analysis. This prompts the obvious question as to whether the asymptotic posterior normality "carries over, with a p propriate transformations of the mean and covariance. to an arbitrary (one-to-one) reparametrisation of the model. More generally. we could ask the same question in relation to Proposition 5.14. A partial answer is provided by the following.

533 ..

Proposition 5.17. (Asymptoticnormality under transformation). With the notation and background of Proposition 5.14, suppose that 8 has an asymptotic Nk (8 ,C ) distribution, with the additional assumptions lm,, , thut, with respect to a parametric model p(xl80). ug --+ 0 and mn ---t 8 0 in probabilitj, and that given any 6 > 0, there is a constraint c(6) such that P(@,z 25 c(6)) 2 1 - 6 for all suflciently large n. where u: (z:) is the G largest (smallest) eigenvalue of C . Then, if u = g( 8 ) is a transformation ; such that, at 8 = On. is non-singular with continuous entries, u has an asymptotic distribution

Proof. This is a generalization and Bayesian reformulation of classical results presented in Sertling (1980, Section 3.3). For details, see Mendoza ( 1994). a

For any finite n, the adequacy of the normal approximation provided by Proposition 5.17 may be highly dependent on the particular transformation used. Anscombe (1964a, 1964b) analyses the choice of transformations which improve asymptotic normality. A related issue is that of selecting appropriate parametrisations for various numerical approximation methods (Hills and Smith, 1992, 1993). The expression for the asymptotic posterior precision matrix (inverse covariance matrix) given in Proposition 5. I is often rather cumbersome to work with. 7 A simpler, alternative form is given by the following.
Corollary 1. (Asymptoticprecision after transformah'on). In Proposition 5.10, ifH,, = C denotes the asymptoticprecision matrixfor , ' 8, then the asymptotic precision matrixfor u = g ( 8 ) has the form
where
J g - 1 ( u )= dg-'(u)

au

is the Jacobian of the inverse transformation. Proof. This follows immediately by reversing of the roles of 8 and v.
a

296

5 Inference

In many applications. we simply wish to consider one-to-one transformations ofa single parameter. The next result provides aconvenient summary of the required transformation result.

Corollary 2. (Asymptotic normality after scalar transformation). Suppose that given the conditions of Propositions 5.14, 5.I7 with sccilrr 0. the and that L : : ( i n , , )-+ 0 sequence ~ I I , ,tends in probubiiity to 0" under p(xl60), in probubiliry us 11 x.Then,.fi = g ( 0 ) is such thnt .9'(0) = d g ( # ) / ri0 is continirous and non-zero at 0 = 00, the asymptotic. posterior distribution for I / is N (i+g( rrr,, ). - 1.;:( i n , , ) [y'( rrtl,)]-?).

Proof. The conditions ensure. by Proposition 5.14. that 0 has an asymptotic posterior distribution of the form N(0Iwl,.-L;:(m,,)). so that the result follows from Proposition 5.17. .3
Example 5.4. (continued). Suppose, again, that Be(H I t t , < . ,j,,). whew t i , ! = ( i t I , , , . and (I,, = d + r ) - r,,. is the posterior distribution of the parameter of a Bernoulli distribution afte 7) trials, and suppose now that we are interested in the asymptotic posterior distribution of the variance stabilising transformation (recall Example 3.3)
v

= g ( ~= 2hiii-I )

\/ii

Straightforward application of Corollary 3, to Proposition 5.17. leads to the asymptotic distribution N(v12siii ' ( (1).

m).

whose mean and variance can be compared with the forms given in Example 3.3.

It is clear from the presence of the term [g'( m,,)] in the form of the asymptotic precision given in Corollary 2 to Proposition 5. I7 that things will go wrong if ~ ' ( T I z , , ) 0 as n --t 9c. This is dealt with in the result presented by the requirement that g'(00) # 0. where utI, -+ 8,) in probability. A concrete illustration of the problems that arise when such a condition is not met is given by the following.
+

Example 5.8. (Non-normal asympfoticposrerior). Suppose that the asymptotic pos, terior for a parameter H E W is given by N(H!.i.,..I t ) , / ~ . r ,= . r , i . . -C x , , . perhaps derived from N(x,I0. I ) , i = I . . . . . / I , with N(HI0.h). having IJ z 0 . Now consider the transformation I / -- o ( H ) = H'. and suppose that the actual value of H generating the .I., through N(.r,IU. I ) i s 0 = 0. Intuitivcly, it is clear that u cannot have an asymptotic normal distribution since the sequence .Fi is converging in probability to 0 through sfricr/? posilitv values. Technically. g ' ( 0 ) = 0 and the condition 0 1 the corollary is not satisfied. In fact. it can be shown that the asymptotic posterior distribution of 1111 is \' in this case.

5 3 Asymptotic Analysis .

297

One attraction of the availability of the results given in Proposition 5.17 and Corollary I is that verification of the conditions for asymptotic posterior normality (as in, for example, Proposition 5.14) may be much more straightforward under one choice of parametrisation of the likelihood than under another. The result given enables us to identify the posterior normal form for any convenient choice of parameters, subsequently deriving the form for the parameters of interest by straightforward transformation. An indication of the usefulness of this result is given in the following example (and further applications can be found in Section 5.4).
Example 5.9. (Asymptotic posterior normakity for a ratio). Suppose that we have a random sample sl.. . . .r,, from the model {fly7] N(xtIOl.I), N(BI10, A,)} and. independently, another random sample yl. . . . y,, from the model { fir=, N(y, 102,l).N(0210. XZ)}, where XI zz 0. XI zz 0 and O2 # 0. We are interested in the posterior distribution of dI = @,/& TI + 50. as First, we note that, for large n,it is very easily verified that the joint posterior distribution for 8 = (0,. 0,) is given by

n&,,= p1 . . . + y,,. Secondly, we note that the marginal where 91&# = X I + . . . + L,,, asymptoticposterior for can be obtained by defining an appropriated;, such that (8,,02) -+ (&. & is a one-to-one transformation, obtaining the distribution of c$ = (41, using ) , &) Proposition 5.17, and subsequently marginalising to 41. An obvious choice for 6 2 is 62 = 0,. so that, in the notation of Proposition 5.17, g(el, 0,) = (4,. 42) and

The determinant of this, 0; is non-zero for O2 # 0, and the conditions of Proposition 5. I7 are clearly satisfied. It follows that the asymptotic posterior of Q, is

so that the required asymptotic posterior for 4,= O,/& is

Any reader remaining unappreciative of the simplicity of the above analysis may care to examine the form of the likelihood function, etc.. corresponding to an initial parametri&. sation directly in terms of 0,. and to contemplate verifying directly the conditions of Proposition 5.14 using the 41. prrametrisation. @*

5 Inference

5.4

REFERENCE ANALYSIS

In the previous section, we have examined situations where data corresponding to large sample sizes come to dominate prior information, leading to inferences which are negligibly dependent on the initial state of information. The third of the questions posed at the end of Section 5.1.6 relates to specifying prior distributions in situations where it is felt that, evenfor moderate sumple sizes, the data should be expected to dominate prior information because of the vague nature of the latter. However, the problem of characterising a tion-informuri,ieor "objective" prior distribution, representing prior ignorance, vugue prior knowledge and letting the data speak for themselves is far more complex than the apparent intuitive immediacy of these words and phrases would suggest. In Section 5.6.2, we shall provide a brief review of the fascinating history of the quest for this baseline. limiting prior form. However, it is as well to make clear straightaway our own view -very much in the operationalist spirit with which we began our discussion of uncertainty in Chapter 2-that mere words are an inadequate basis for clarifying such a slippery concept. Put bluntly: data cannot ever speak entirely for themselves; every prior specification has sonic informative posterior or predictive implications: and vague is itself much too vague an idea to be useful. There is no objective prior that represents ignorance. On the other hand, we recognise that there is often a pragmatically important need for a form of prior to posterior analysis capturing, in sonie well-definedsense, the notion of the prior having a minimal effect, relative to the data, on the final inference. Such a reference analysis might be required as an approximation to actual individual beliefs; more typically, it might be required as a limiting what if? baseline in considering a range of prior to posterior analyses, or as a defuult option when there are insufficient resources for detailed elicitation of actual prior knowledge. In line with the unified perspective we have tried to adopt throughout this volume, the setting for our development of such a reference analysis will be the general decision-theoretic framework, together with the specific information-theoretic tools that have emerged in earlier chapters as key measures of the discrepancies (or distances) between belief distributions. From the approach we adopt, it will be clear that the reference prior component of the analysis is simply a mathematical tool. It has considerable pragmatic importance in implementing a reference unulysis, whose role and character will be precisely defined. but it is not a privileged, uniquely non-informative or objective prior. Its main use will be to provide a conventional prior, to be used when a default specification having a claim to being non-influential in the sense described above is required. We seek to move away, therefore, from the rather philosophically muddled debates about prior ignorance that have all too often confused these issues, and towards well-defined decision-theoretic and information-theoretic procedures.

5.4 Reference Analysis

299

5.4.1

Reference Decisions

Consider a specific form of decision problem with possible decisions d E 23 providing possible answers, a E A, to an inference problem, with unknown state of the world w = ( w I , u p ) ,utilities for consequences ( a , w ) given by u(d(w1)) = u ( a , w l ) and the availability of an experiment e which consists of obtaining an observation x having parametric model p(x I wa)and a prior probafor bility density p(w) = p(w1 I w ~ ) p ( w 2 ) the unknown state of the world, w. This general structure describes a situation where practical consequences depend directly on the w Icomponent of w , whereas inference from data 2 E X provided by experiment e takes place indirectly, through the w2 component of w as described by p(w1 1 w2).If w1 is a function of w2, the prior density is, of course, simply
P(W2).

To avoid subscript proliferation, let us now, without any risk of confusion, indulge in a harmless abuse of notation by writing wl = w > w 2= 8. This both simplifies the exposition and has the mnemonic value of suggesting that w is the state of the world of ultimate interest (since it occurs in the utility function), whereas 8 is a parameter in the usual sense (since it occurs in the probability model). Often of w is just some function w = @(0) 8; if w is not a function of 8, the relationship betweenw and 8 is that described in theirjoint distributionp(w, 8 ) = p ( o I 8 ) p ( 8 ) . Now, for given conditional prior p(w 18) and utility function u(a,w), let us examine, in uriliry rerms, the influenceof the priorp(8), relative to the observational information provided by e. We note that if a; denotes the optimal answer underp(w) and a; denotes the optimal answer under p(w I x), then, using Definition 3.13 (ii). with appropriate notational changes, and noting that

the expected (utility) value of the experiment e, given the prior p ( 8 ) , is

where, assuming w is independent of x, given 0.

and
= Jp(r

I8

~ 8 ) do.

If e ( k ) denotes the experiment consisting of k independent replications of e , that is yielding observations ( ~ 1 , . . . , x k } with joint parametric model p ( s i I O), then vu{e(k),p(8)},the expected utility value of the experiment e(lc), has the , same mathematical form as v , { e , p ( @ ) )but with x = (21 . . ,zk) andp(x I 8 ) = p(z, I 0). Intuitively, at least in suitably regular cases, as k m we obtain,

n:=,

nfZ1

? .

300

5 Inference

from c.) perfect (i.e., complete) information about 8, so that, assuming the limit (c. to exist, { e( x .( 8 ) ) = lirii r,,{ c( k).p ( 8 )} )p
Ill,

-X

is the expected (utility) vufue of perfecr injormution, about 8, given p ( 8 ) . Clearly, the more valuable the information contained in y ( 8 ) . the less will be the expected value of perfect information about 8; conversely. the less valuable the information contained in the prior, the more we would expect to gain from exhaustive experimentation. This, then, suggests a well-defined "thought experiment" procedure for characterising a "minimally valuable prior": choose. from the class of priors which has been identified as compatible with other assumptions about ( w ,8 ) ,that prior, T ( 8 ) .say. which muvimises the expected v h e ofperfect informafion ubout 8. Such a prior will be called a u-reference prior; the posterior distributions.

15) =

1 q a ( e 1 +e

I z)x lJ(ZI W 4 8 )
derived from combining T ( 8 )with actual data 5. will be called u-rejerence posferiors: and the optimal decision derived from ~ ( 15)and ~ ( ow) w . will be called a u-reference decision. It is important to note that the limit above is not taken in order to obtain some form of asymptotic "approximation" to reference distributions; the "exact" reference prior is dejned as that which maximises the value ofperfect information about 8, not as that which maximises the expected value of the experiment.
Example 5.10. (Prediction with quadratic loss). Suppose that beliefs about a sequence of observables, 2 = ( . r l .. . . . .I.,,), correspond to assuming the latter to be a random sample from an :V(.r I I(.A ) parametric model, with known precision A. together with a prior for / r to be selected from the class {,Y(/r I / I , , . AI,).p,, E R. A,, 2 0). Assuming a quadratic loss function, the decision problem is to provide a point estimate for .I,,, . given .rl.. . . . . I . , , . We shall derive a reference analysis of this problem. for which A = 'R. .J = .I.,, ,. and H = I ( . Moreover,

,.

and, for given pII, Al,, we have

For the purposes of the"thought experiment". let zL = ( 5 , .. . . .Q ) denote the (imagined) ,). outcomes of I; replications of the experiment yielding the observables ( . I . ] . . . . .. r ~ ~ say.

5.4 Reference Analysis


and let us denote the future observation to be predicted ($A,,+]) simply by 1. Then '

301

However, we know from Proposition 5.3 that optimal estimates with respect to quadratic loss functions are given by the appropriate means, so that

since, by virtue of the normal distributional assumptions, the predictive variance of T given does not depend explicitly on 2 1 . In fact. striightforward manipulations reveal that

so that the u-reference prior corresponds to the choice Xo = 0, with pIJarbitrary.

Example 5.1 1. (Variance estimation). Suppose that beliefs about I = {rI,. . ,x,,} . correspond to assuming I to be a random sample from N ( z 10, A) together with a gamma prior for X centred on XI,, so that p ( X ) = Ga(X I n , Q&'), Q > 0. The decision problem is to provide a point estimate for u2 = X - I , assuming a standardised quadratic loss function, so that

Thus, we have A = 9. 2 '

8 = A, 'w = 02. and


p ( z , X) =

n
,I 1-1

N(Z,

I 0. X) Ga(X I ( 1 . OX,, I ).
k replications of the experiment. Then

Let Z A = {z,.. . . ,II.} the outcome of denote

302
and krrs =

5 Inference

1, rf,. Since 1,
inf

Ga(X 1 0 .

. j ) (OX - 1 )(/A =

-*
ft

and this is attained when a = d / ( c i + 1). one has


v , , { r ( x )p ( X ) } = lirii .
I-X

I - , , { r ( k ) )>(A)) .

This is maximised when c) = 0 and, hence, the cwyfprencc prior corresponds to the choice f k = 0. with &, arbitrary. Given actualdata. z = (.r1. . . . J,,), . the u-referenc.epc).~feriur for X is Ga(X I 1/12..ns2/2), where nsL) = .rf and, thus. the ii-reference decision is to give the estimate

c,

?;

-/lS?/2

(71/2) 4-1

--- _ _ . - c.r;
f/

Hence, the reference estimator of 0 with respect to smndordised quadratic loss is t i o r the usual s 2 , but a slightly smaller multiple of s. It is of interest to note that, from a frequentist perspective, 6 is the best invariant estimator of u2 and is admissible. Indeed, Ci dominates 3 or any smaller multiple of .Y2 in terms of frequentist risk (cf. Example 45 in Berger. 198Sa, Chapter 4).Thus. the it-reference approach has led to the correctmultiple of .G2 as seen from a frequentist perspective. Explicit reference decision analysis is possible when the parameter space

8 = { B , , . . . ,0.,1} is finite. In this case, the expected value of perfect information


(cf. Definition 2.19) may be written as

andthewreference prior, which isthat r ( 0 )whichmaximises21,,{e(rX).~ ( 8 ) ) . may be explicitly obtained by standard algebraic manipulations. For further information, see Bemardo ( 1981a) and Rabena ( 1998).

5.4.2

Onedimensional Reference Distributions

In Sections 2.7 and 3.4. we noted that reporting beliefs is itself a decision problem, where the inference answer space consists of the class of possible belief distributions that could be reported about the quantity of interest, and the utility function is a proper scoring rule which-in pure inference problems-may be identified with the logarithmic scoring rule.

5.4 Reference Analysis

303
a } a},

Our development of reference analysis from now on will concentrate on this case, for which we simply denote v,,{ by v{ and replace the term u-reference by reference. In discussing reference decisions, we have considered a rather general utility structure where practical interest centred on a quantity w related to the 8 of an experiment by a conditional probability specification, p(w 18). Here, we shall consider the case where the quantity of interest is 8 itself, with 8 E 8 C 8.More general cases will be considered later. If an experiment e consists of an observation 3 E X having parametric model : p(3: I O ) , with w = 8, A = { q ( . ) ; q(8) > 0, Ja q(8)dO = 1) and the utility function is the logarithmic scoring rule
u{q(.).8} = A l o g g ( 8 )

+ B(8)?

the expected utility value of the experiment e, given the prior density p ( 8 ) , is

where qo(.),qx(-) denote the optimal choices of q ( . ) with respect to p ( 8 ) and p ( 8 ( s ) , respectively. Noting that u is a proper scoring rule, so that, for any

it is easily seen that

the amount of information about 0 which e may be expected to provide. The corresponding expected information from the (hypothetical) experiment e ( k )yielding the (imagined)observation f k = ( 2 1 , . . . 3:~) parametric model with
k

P(%
is given by

I @ = n P ( 3 : i 18)
r=l

and so the expected (utility) value of perfect information about 8 is


I { e ( O Q ) , p ( f l ) = C-m r { e ( k ) , P ( 8 ) } , ) lim

304

5 liverenee

provided that this limit exists. This quantity measures the missing informulion about 0 as a function of the prior p ( 0 ) . The reference prior for 0. denoted by ~ ( 0 )is thus defined to be that prior . which maximises the missing information functional. Given actual data x, the reference posterior ~ ( I x)to be reported is simply derived from Bayes' theorem. d as ~ ( 12) x p ( x I O ) T ( O ) . d Unfortunately. limk.-x I { c ( k ) .p(6,)j is typically infinite (unless H can only take a finite range of values) and a direct approach to deriving r(Q) along these lines cannot be implemented. However, a natural way of overcoming this technical difficulty is available: we derive the sequence of priors ~ k ( 0 which maxiniise ) I { t ( k ) .p ( 0 ) } .k = 1.2.. . ,.and subsequently take ~ ( 0 )be a suitable limit. This to approach will now be developed in detail. Let c be the experiment which consists of one observation 2 from ~ J [10). Z 6, f 0 C Y. Suppose that we are interested in reporting inferences about d and that R no restrictions are imposed on the form of the prior distribution ~ ( 0 )It is easily . verified that the amount of information about H which k independent replications of c may be expected to provide may be rewritten as

and zp

= (21,.. .

.a}a possible outcome from cr(k). so that is

is the posterior distribution for 0 after % A has been observed. Moreover. for any prior y(8) one must have the constraint p [ H ) dd = 1 and. therefore, the prior ~ p ( d which maximises I'{( (k).p(6,)} ) must be an extremal of the functional

y is twice continuously differentiable. any function 7 4 . ) which maximises

Since this is of the form F { y ( . ) } = l g { p ( . ) }do, where, as a functional of p ( . ) , F must satisfy the condition
for all
T.

5.4 Reference Analysis It follows that, for any function T .

305

where, after some algebra,

Thus, the required condition becomes

which implies that the desired extremal should satisfy, for all 8 E 8,

and hence that p ( 8 ) cx fk(8). Note that, for each k, this only provides an implicit solution for the prior which maximises 1 8 { e ( k ) , p ( 8 ) } ,since fk.0) depends on the prior through the However, for large values of posterior distribution p(8 zk) = p ( 8 I zl,. . . ,q). k, an approximation, p'(8 I q.), may be found to the posterior distribution of say, 8, which is independent of the prior p ( 0 ) . It follows that, under suitable regularity conditions, the sequence of positive functions

will induce, by formal use of Bayes' theorem, a sequence of posterior distributions

with the same limiting distributions that would have been obtained from the sequence of posteriors derived from the sequence of priors r k ( 8 ) which maximise I B { e ( k ) , p ( 8 ) } .This completes our motivation for Definition 5.7. For further information see Bemardo ( I 979b) and ensuing discussion.

306
Definition 5.7. (One-dimensional reference distributions).

5 Inference

Let x be the result of un experiment c which consists of one observation from p ( x 19).x E X , 9 E 0 C 92, let zk = {XI... ,xk} he the result oj'k .

independent replications off: , und dejine


Zn.)d%L.

where

The reference posterior density of9 u8er x has been observed i dejined to be s the log-divergence litnit. lr(9 I x ,of ~ k ( 6 I, x),assuming this limit to exist. ) where Tk(0 I x) = Q(z)p(" f;(@. 10) the ck(x)s are the required normalising constants und.fi)r almost ull x.

Any positiveefunction ~ ( 9such that,fi)r some c(x)> 0 und for ull B E 0. )

will be called a reference prior for 6, relutive to the e.rperiment P . It should be clear from the argument which motivates the definition that any asymptotic approximation to the posterior distribution may be used in place of the asymptotic approximation y'(8 I a )defined above. The use of convergence in the information sense, the natural convergence in this context, rather than just pointwise convergence, is necessary to avoid possibly pathological behaviour; for details, see Berger and Bernardo (1992~). Although most of the following discussion refers to reference priors. it must be stressed that only reference posterior distributions are directly interpretable in probabilistic terms. The positive functions IT( 0) are merely pragmatically convenient tools for the derivation of reference posterior distributions via Baycs' theorcm. An explicit form for the reference prior is immediately available from Definition 5.7. and it will be clear from later illustrative examples that the forms which arise may have no direct probabilistic interpretation. We should stress that the dejinitions and "propositions '' in this section ore by and l u g e heuristic in the sense that they are lacking statements of the technical conditions which would make the theory rigorous. Making the statements and

5.4 Reference Analysis

307

proofs precise, however, would require a different level of mathematics from that used in this book and, at the time of writing, is still an active area of research. The reader interested in the technicalities involved is referred to Berger and Bemardo (1989, 1992a, 1992b, 1992c)and Berger et al. (1989). So far as the contents of this section are concerned, the reader would be best advised to view the procedure as an algorithm. which compared with other proposals-discussed in Section 5.6.2appears to produce appealing solutions in all situations thus far examined.

Proposition 5.18. (Explicitform of the reference prior).


A reference prior for t9 relative to the experiment which consists of one observutionfrom p ( z 1 O ) , z E X,6 E 0 C 8, given, provided the limit exists, is

and convergence in the information sense is verijied, by

where c > 0,Bo E (3,

with zk = { q , . ,z }a random samplefrom p ( z I 0), and p ( 0I x k ) is an .. k asymptotic approximation to the posterior distribution of 0. Proof. Using n ( e )as a formal prior,

and hence

n(e I z)= k-cc nk(@ Iz), ~ k ( Iz) x p ( z I H ) f ; ( @ ) lim 0


a required. Note that, under suitable regularity conditions, the limits above will not depend on the particular asymptotic approximation to the posterior distribution used to derive fi((e). a
If the parameter space is finite, it turns out that the reference prior is uniform, independently of the experiment performed.

Proposition 5.19. (Reference prior in the finite case). Let x be the result o one observation from p ( z 1 8 ) . where B E 0 = { 61,. . . ,O,,,}. Then, any f function of the form n(B,)= a,a > 0, i = 1, . . . ,M ,is a reference prior and
the reference posterior is

n(e,I 2) = c ( z ) p ( z1 e,),

i = 1,.. . ,hi

where c(z) is the required normulising constant.

308

5 Inference

Proof. We have already established (Proposition 5.12) that if 8 is finite then. for any strictly positive prior, p(I9, I T I . . . . , . r k ) will converge to 1 if 8, is the true value of 19. It follows that the integral in the exponent of

fLd0,) = cxp J P ( % * 8,)logp(6, I . z L . ) ~ . z A .


will converge to zero a k s
-+

i = 1.. . . . dl.

cc Hence, a reference prior is given by

The general form of reference prior follows immediately.

The preceding result for the case of a finite parameter space is easily derived from first principles. Indeed. in this case the expected missing information is finite and equals the entropy

of the prior. This is maximised if and only if the prior is uniform. The technique encapsulatedin Definition 5.7 for identifying the reference prior depends on the asymptotic behaviour of the posterior for the parameter of interest under (imagined)replications of the experiment to be actually performed. Thus far. our derivations have proceeded on the basis of an assumed single observation from a parametric model, p ( x 1 ) The next proposition establishes that for experiments 6. involving a sequence of n 2 1 observations, which are to be modelled as if they are a random sample, conditional on a parametric model, the reference prior does not depend on the size of the experiment and can thus be derived on the basis of a single observation experiment. Note, however, that for experiments involving more structured designs (for example, in linear models) the situation is much more complicated. Proposition 5.20. (Independence of sample size). Let e,,.n 2 1, be the experiment which consists of the observation ofa rundom sample 2 1 . . . . . x,, from y ( x 1 0).x E X,0 E (3, and let P,, denote the class of reference priors for 0 with respect to e l # .derived in accordance with Definition 5.7, by considering the sample to be a single observation ,fi-om p ( x , 10). Then PI= P,,.for all n.

n::,

5.4 Reference Analysis

309

el.

Proof. If zk = {XI? .. . , XA.}is the result of a k-fold independent replicate of then, by Proposition 5.18, PIconsists of r(0)of the form

with c > 0,8,0,, E 8 and

where p*(0 1 zk) is an asymptotic approximation (as k -, 00) to the posterior distribution of 8 given z k . Now consider z,k = {xl.. , Z , , , X ~ , , . ~ . . .x2,,.. . ,xkn} which can be .. ., considered as the result of a k-fold independent replicate of e,,, so that P, consists of r ( 0 )of the form

But znk can equally be considered as a nk-fold independent replicate of el and so the limiting ratios are clearly identical. a In considering experiments involving random samples from distributions admitting a sufficient statistic of fixed dimension, it is natural to wonder whether the reference priors derived from the distribution of the sufficient statistic are identical to those derived from the joint distribution for the sample. The next proposition guarantees us that this is indeed the case.

Proposition 5.21. (Compatibility with sumient stutistics). Let err, 2 1 be the experiment which consists of the observation of a random n , sample xl,.. .x, from p ( x I 0), x E X, E 8, . 0 where,for all n, the latter , ) Then,for any n, the classes admits a suflcient statistic t,l = t(xl,. . . . x , . of reference priors derived by considering replicationsof ( 2 1 , . . . ,5) and t,, , respectively. coincide, and are identical to the class obtuined by considering replications of el.
Proof. If q. denotes a k-fold replicate of (xl,. ,z,) and y k denotes the .. corresponding k-fold replicate oft,, then, by the definition of a sufficient statistic, p ( 0 I zk) = p(O I uk).for any prior p ( 0 ) . It follows that the corresponding asymptotic distributionsare identical, so that p'(0 I zk) = p'(8 1 yk). We thus have

310

5 Inference

so that, by Definition 5.7. the reference priors are identical. Identity with those derived from c I follows from Proposition 5.20. a

Given a parametric model. p ( z If?). . ' E X , H E 0. could. of course. I we for reparametrise and work instead with p ( x lo). s E X.o = ~ ( 0 ) . any monotone CP. The question now arises as to whether referone-to-one mapping y : 8 ence priors for f? and o,derived from the parametric models p ( z I H ) and p ( z 1 o). respectively, are consistent. in the sense that their ratio is the required Jacobian element. The next proposition establishes this form of consistency and can clearly be extended to mappings which are piecewise monotone.
-t

Proposition5.22. (Invariance under one-to-one transformations).

Suppose that n (0). T',( (3) ure reference priors derived by considering replio cations ofe.tperiments consisting o u single obserwtion from p ( x I 0). with f z 6 X . f? E 8 undfroni p(z I &), with .I' E S. E 4?, respecti\selt-, where o o = y( 0 ) und y : (-3 9 is u one-to-one monotone mupping. Then, for sonic c > 0 und for all (3 E a: (i) K , , ( o ) = (-TOy - ' ( o ) ) . if0 isdiscrete; (

Proof. If 0 is discrete, so is Q and the result follows from Proposition 5.19. Otherwise. if zk denotes a k-fold replicate of a single observation from p ( z 10). then. for any proper prior p ( f ? )the corresponding prior for c? is given by p,,(o)= . p o ( g - ' ( o ) ) /J,,l and hence. for all o E CP.

p,,(o 1 zh) = PO (9 ' ( 0 )I % A ) I.IoI. It follows that, as k + x. the asymptotic posterior approximations are related by the same Jacobian element and hence

= ~ J ~I exp ~I

= l.J<,>l-' fi(0). The second result now follows from Proposition 5.18.

{.I 10)
p(zk

logp'((9 I zk)dzk

The assumed existence of the asymptotic posterior distributions that would result from an imagined k-fold replicate of the experiment under consideration clearly plays a key role in the derivation of the reference prior. However. it is important to note that no assumption has thus far been required concerning the form of this asymptotic posterior distribution. As we shall see later. we shall typically consider the case of asymptotic posterior normality. but the following example shows that the technique is by n o means restricted to this case.

5.4 Reference Analysis

311

Example5.12. (Uniform model). Let e be the experimentwhich consists of observing the sequence 11 . . . ,z. n 2 1, whose belief distribution is represented as that of a random , sample from a uniform distributionon [8 - 8+ 41.8 E R. together with aprior distribution p(e) fore. ~f

i.

t,, = [z::/n ,x ! ; : ~ . z!:;/n = mintxl.. . . ,z,,}.


then t,, is a sufficient statistic for 8, and
p ( e I z) p(t) I t,,)a ~(8). =

i111 q,,,,, = uiax{q, . . . ,z,, 1:

dnl - 1 < 5 x!;:il, + f . 2 lllnx


L J

It follows that, as k -+ oc, a k-fold replicate of e with a uniform prior will result in the posterior uniform distribution

P(0 I t k , , 1
It is easily verified that

c.

x,,,,

(r.11)

-1

* < t ) 5 p:I -

11u11

4.

the expectation being with respect to the distribution of tk,,. For large k. the right-hand side is well-approximatedby
-log

1-

and. noting that the distributions of


11

p mr i x) - 8 - 1 . 2

Iknl = J IlIln - 8 +

are Be(u I k71.1) and Ek(v I 1. kn), respectively, we see that the above reduces to

kn -+ 1
It follows that fi,,(0)= (ktz

+ 1)/2,and hence that


~ ( 8= c h )
(K71f
i

1)/2

k--l

(ktf

+ 1)/2 =

Any reference prior for this problem is therefore a constant and, therefore, given a set of .. actual data 2 = (q, . ,xn). the reference posterior distribution is

n ( e l z )I x c ,

Otl I,,,

f <_ 8 5 2) + f null

a uniform distribution over the set of 8 values which remain possible after z has been observed.

31 2

5 Inference

Typically, under suitable regularity conditions, the asymptotic posterior distribution p(0 I r k f l ) .corresponding to an imagined k-fold replication of an experiment p I l involving a random sample of IZ from p ( x 10). will only depend on za,, through an usyniptuticully sirflcient, consistent estiniute of 8. a concept which is made precise in the next proposition. In such cases. the reference prior can easily be identified from the form of the asymptotic posterior distribution.

Proposition 5.23. (Explicit form o the reference prior when there is a f consislent, asymptotically sufjiicient, estimator). L4t c he the e.rperinietit which consists ofthe observation of a random saniple x = { X I . . . . x,,)froni . p ( x I 0). x E X . 8 E 0 C %, und let zi I , be the result .fu k-jbld replicwte of ) c lfthere e.vists HI, ,I = #a ,,( z I I ~.such thnt, with prohahilic one

und. us k

x,

then, for m y c > 0.0,, E 0,reference priors are dejned by

where

Proc,$ As k

-) .

(xj.

it follows from the assumptions that

The rcsult now follows from Proposition 5.18.

5.4 Reference Analysis

313

Example 5.13. (Dsvhztbnfrom uniformity nrodel). Let c,, be the experiment which consists of obtaining a random sample from y(.r I e). 0 5 I 5 1. 0 > 0, where

~ { 2 ~ )1 @
P(I I@)=

for O < r < $

e(2(1 - x)}''-l for f 5 x 5 1 defines a one-parameter probability model on [O. 11, which finds application (see Bemardo and Bayarri, 1985) in exploring deviations from the standard uniform model on [O. 1)(given by 8 = 1). It is easily verified that if = {xI1.. xl;,,} ., results from a k-fold replicate of c',,, the sufficient statistic tk,, is given by

and, for any prior p(B).

P(B I zkrr
It is also easily shown that p(tk,,

1 E[tkrrel = -

~ [ t , , I,

from which we can establish that that

a,,, = ti,: is a sufficient, consistent estimate of 8. It follows

el

1 el = kn02

provides, for large k , an asymptotic posterior approximation which satisfies the conditions required in Proposition 5.23. From the form of the right-hand side, we see that
p'(0 I &,,) = Ga(0 I k7L + 1: knlb,,)

so that

and, from Proposition 5.18, for some c > 0.& > 0,

The reference posterior for B having observed actual data 2 = ( T I , .. . ,xn), producing the sufficient statistic t = t(z). is therefore 1

r(eI 5 ) = ~ ( I to)

p ( Z I e)

err-I exp{--71(61-

l)t} ,

which is a Ga(0 I n. rit) distribution.

314

5 Inference

Under regularity conditions similar to those described in Section 5.2.3. the asymptotic posterior distribution of CI tends to normality. In such cases, we can obtain a characterisation of the reference prior directly in terms of the parametric model in which 8 appears.

Proposition 5.24. (Reference priors under asymptotic normality). Let e, be the experiment which consists o the observation of a random sample f 2 , . . ,z,, 1 . from p(x I Q). x E X. 0 E 8 c R. Then. if rhe asymprotic posterior distribution of 8, given a k-jold replicate of el,, is normul with f precision knh ( O k I , ) , where @A.n is a consistent estimute u 0, reference priors have the fortn

K(e) {/~(e)}~?
l~

Prooj Under regularity conditions such as those detailed in Section 5.2.3, it follows that an asymptotic approximation to the posterior distribution of 8.given a k-fold replicate of el,, is

where &,, is some consistent estimator of 0. Thus, by Proposition 5.23,

(2)

1 :L'

{h(8)}li2.

and therefore, for some c > 0, H,, E 9,

as required.

The result of Proposition 5.24 is closely related to the "rules" proposed by Jeffreys ( 1946, I939/1961 ) and by Perks ( 1947) to derive "non-informative" priors. Spically. under the conditions where asymptotic posterior normality obtains we find that h(0) = p ( z 10) -- l o g p ( z 10)

:2 i

i.e., Fisher's information (Fisher, 1925). and hence the reference prior.
T ( 0 ) x h(0)' ?.

5.4 Reference Analysis

315

becomes Jeffreys (or Perks) prior. See Polson (1992) for a related derivation. It should be noted however that, even under conditions which guaranteeasymptotic normality, Jeffreys formula is not necessarily the easiest way of deriving a reference prior. As illustrated in Examples 5.12 and 5.13 above, it is often simpler to apply Proposition 5.18 using an asymptotic approximation to the posterior distribution. It is importantto stress that reference distributionsare, by definition,a function of the entire probability model p ( z I B), z E X ,8 E 0, not only of the observed likelihood. Technically. this is a consequence of the fact that the amount of information which an experiment may be expected to provide is rhe value of an integral over the entire sample space X.which, therefore. has to be specified. We have, of course, already encountered in Section 5.1.4 the idea that knowledge of the data generating mechanism may influence the prior specification.
Exampie 5.14. (Binomial and negative binomial &k). Consider an experiment which consists of the observation of n Bernoulli trials, with n fixed in advance, so that
G

= {XI.. . . ,zn},

p ( x p)= t3yi

-e

p,

E (0. i),

o Ie I 1,

and hence, by Roposition 5.24, the reference prior is

*(e) 3: e-1/2(i e ) - I / * . If T = C:=, the reference posterior. zi,

I q T ( e ) er-*(i- f.y-r-1/2, 1 s is the beta distribution Be(@ r + $ , n- r + i). Note that a(0 1 )is proper, whatever the number of successes In particular, if r = 0, a(@ = Be(@ f ,n + i),from which I
a(@ 12) C( P(G
C(

T.

12)

sensible inference summaries can be made. even though there are no observed successes. (Compare this with the Haldane (1948) prior, a(@)3: @ - ( l @ ) - I , which produces an improper posterior until at least one success is observed.) Consider now. however, an experiment which consists of counting the number z of Bernoulli trials which it is necessary to perform in order to observe a prespecified number of successes, r 2 1. The probability model for this situation is the negative binomial

from which we obtain

5 Inference and hence, by Proposition 5.24, the reference prior is a(@) x H ' ( 1 - H ) posterior is given by
K(H
I

'. The reference

1 .r) I p ( . r I H ) i T ( H )

x 8' ' ( 1 -- 0 ) ' '

I 2.

. = r

I'. I'

7-

1. . . . .

which is the beta distribution Be(0 1 r. .r - I' + f ). Again, we note that this distribution is proper, whatever the number of observations s required to obtain r successes. Note that I' = 0 is not possible under this model: the use of an inverse binomial sampling design implicitly assumes that r successes willeventually occurfor sure. which is not true in direct binomial sampling. This difference in the underlying assumption about 6, is duly reflected in the slight difference which occurs between the respective reference prior distributions. See Geisser (1984) and ensuing discussion for further analysis and discussion o f this canonical example.

In reporting results, scientists are typically required to specify not only the data but also the conditions under which the data were obtained (the design of the experiment), so that the data analyst has available the full specification of the probability model p(s I H ) , z E X. 8 E 8. In order to carry out the reference analysis described in this section. such a full specification is clearly required. We want to stress, however. that the preceding argument is totally compatible with a full personalistic view of probability. A reference prior is nothing but a (limiting) form of rather spec& beliefs: namely. those which maximise the missing information which a purriculur experiment could possibly be expected to provide. Consequently, different experiments generally define different types of limiting beliefs. To report the corresponding reference posteriors (possibly for a range of possible alternative models) is only part of the general prior-to-posterior mapping which interpersonal or sensitivity considerations would suggest should always be carried out. Reference analysis provides an answer to an important "what if?" question: namely, what can be said about the parameter of interest ifprior information were minimal rrlurive to the maximum information which a well-defined, specific experiment could be expected to provide?
5.4.3 Restricted Reference Distributions

When analysing the inferential implications of the result of an experiment for a quantity of interest, 8. where, for simplicity. we continue to assume that 0 E 8 C R, it is often interesting, either per se. or on a "what if:'" basis. to twndirion on some assumed features of the prior distribution p ( 0 ) . thus defining a restricted class, Q, say. of priors which consists of those distributions compatible with such conditioning. The concept of a reference posterior may easily be extended to this situation by maximising the missing information which the experiment may possibly be expected to provide n-irhin this restricted class of priors.

5.4 Reference Analysis

317

Repeating the argument which motivated the definition of (unrestricted) reference distributions, we are led to seek the limit of the sequence of posterior distributions, nk(6I z), which correspond to the sequence of priors, nk(6). which are obtained by maximising, within Q.the amount of infonnation

where

fk(6) = CXP

which could be expected from k independent replications I = {zl,.. . .z }of the k single observation experiment.

{J

P(%k 10) k P ( 6 I %

k ) h

Definition 5.8. (Restricted reference dishibutions). Let x be the result o an experiment e which consists of one observationfrom f p ( x IO), x E X , with 6 E (3 3,let 6 be a subclass of the class o all f prior distributionsfor 6, let zk = { X I , . . . .z k } be the result of k independent f replications o e and define

Provided it exists, the Q-reference posterior distribution of 6, afer x has been observed, is defined to be no(6 I z), that such

E [ ~ { ~ , Q ( 1oz), I% ) ) I + nQ(6
rf(6
12) cx

0,

as k

-+

30,

p ( s I O)n,Q(O),

where 6 is the logarithmic divergence specijied in Definition 5.7, and nf(6) is a prior which minimises, within 4

A positivefunction &(6) in Q such that

i then called a Q-referenceprior fur 6 relutive to the experiment e. s

318

5 Inference

The intuitive content of Definition 5.8 is illuminated by the following result. which essentially establishes that the Q-reference prior is the closest prior in G, to the unrestricted reference prior ~ ( 0 )in the sense of minimising its logarithmic . divergence from r(0).

Proposition 5.25. (The restricted reference prior as an approximation). Suppose that un unrestricted reference prior r ( 0 )relative to a given experis f sarisJes ment i proper; then. i ir exists, a Q-reference prior r ( ~ ( 0 )

Proof. It follows from Proposition

5.18 that x ( 0 ) is proper if and only if

Moreover,

which is maximised if the integral i s minimised. Let r f ( H ) be the prior which minimises the integral within Q. Then, by Definition 5.8.

where. by the continuity of the divergence functional. & ( H ) minimises. within Q ,

is the prior which

5.4 Reference Analysis

319

If n(8)is not proper, it is necessary to apply Definition 5.8 directly in order to The characterise nQ(8). following result provides an explicit solution for the rather large class of problems where the conditions which define Q may be expressed as a collection of expected value restrictions. Proposition 5.26. (Explicitform of restricted reference priors). Let e be an experiment which provides information about 8,and, for given { ( g i ( . ) , , O s ).i = I, ...,m } , letQberhecfassofpriordisrribu~oionsp(8)ofe , which satisfy

Let n(0) be an unrestricted reference prior for 8 relative to e; then, a Q-

reference prior o 8 relative to e, i it exists, is of the form f f

where the

' are constants determined by the conditions which define Q. s

Proofi The calculus of variations argument which underlay the derivation of reference priors may be extended to include the additional restrictions imposed by the definition of Q, thus leading us to seek an extremal of the functional

corresponding to the assumption of a k-fold replicate of e. A standard argument now shows that the solution must satisfy

and hence that

Taking k

-+ .

00,

the result follows from Proposition 5.18.

320

5 Inference

Example 5.15. (Loeation modoh). Let 2 = { . r l . . . . .r,, } be a random sample from a location model p(.r 1 H ) = h(.r - H ) . .I E Ji, H E P, and suppose that the prior mean and variance of 0 are restricted to be E[H]= p,,. V[H]= m i . Under suitable regularity conditions. the asymptotic posterior distribution of 19 will be of the form p ( 0 1 .ri. . . . .. I . , , ) Y - 0). where H,, is an asymptotically sufficient, consistentestimator of 8. Thus. by Proposition 5.23.

f(e,,

which is constant. so that the unrestricted reference prior will be from Proposition 5.26 that the restricted reference prior will be

u~i#~rnz. It

now follows

with Hav(6))dfl = p,,and [ ( O - p , , ) 2 1 ; ( 1 ( H ) cfH = mi. Thus, the restricted reference prior is the norm1 distribution with the specified mean and variance.

5.4.4

Nuisance Parameters

The development given thus far has assumed that H was one-dimensional and that interest was centred on 0 or on a one-to-one transformation of 19. We shall next consider the case where 8 is twodimensional and interest centres on reporting inferences for a one-dimensional function, Q!J = (p(8).Without loss of generality. I we may rewrite the vector parameter in the form 8 = (&A), o E CP, X E . . where (3 is the parameter of interest and X is a nuisance parameter. The problem is to identib a reference prior for 8. when the decision problem is that o reporting f marginul inferencesfor Q,assuming a logarithmic score (utility) function. To motivate our approach to this problem. consider Z A to be the result of a k-fold replicate of the experiment which consists in obtaining a single observation, z from p(a: 18) = p ( z 16.A). Recalling that p ( 8 ) can be thought of in terms o f . the decomposition Ij(e) f)(c>.A ) = I+,qll(~ = I 0). suppose. for the moment, that a slrirable referenceform, T (X I 0 ) . for p ( X I o) has been specified and that only n(@)remains to be identified. Proposition 5.18 then implies that the marginal reference prior for c) is given by

5.4 Reference Analysis

321

p * ( @I zk)is an asymptotic approximation to the marginal posterior for 4, and

By conditioningthroughout on Cp, we see from Proposition 5. I8 that the "conditional reference prior" for X given 4 has the form

where

f;(X 14) = ex*

JP(%k

I4,X) logp'(X 14, . Z k ) d Z k

p* (A 1 4, zk)is an asymptotic approximation to the conditionalposterior for X given 4, and


k

P(Xk

ICp,X) = JJP(zz 1 7 4. 4
i=l

Given actual data z, marginal reference posterior for 4, corresponding to the the reference prior r(e)= r(4,A) = r ( 4 w 14) derived from the above procedure, would then be

This would appear, then, to provide a straightforward approach to deriving reference analysis procedures in the presence of nuisance parameters. However, there is a major dificuiry. In general, as we have already seen, reference priors are typically not proper probability densities. This means that the integrated form derived from r ( X 1 Cp),
P(%k

14) =

which plays a key role in the above derivation of T ( @ ) , typically not be a proper will probability model. The above approach will fail in such cases. Clearly, a more subtle approach is required toovercome this technical problem. However, before turning to the details of such an approach, we present an example, involvingjnire parameter ranges, where the approach outlined above does produce an interesting solution.

P(Zk

I47 X).(X

I$) a.

322

5 Inference

Example 5.16. (Induction). Consider a large, finite dichotomised population, all of whose elements individually may or may not have a specified property. A random sample is taken without replacement from the population, the sample being large in absolute size. hut still relatively small compared with the population size. All the elements sampled turn out to have the specified property. Many commentators have argued that. in view of the large absolute size of the sample, one should be led to believe quite strongly that all elements of the popululion have the property. irrespective of the fact that the population size is greater still, an argument related to Laplace's rule of succession. (See, for example, Wrinch and Jeffreys. 1921, Jeffreys, 1939/1%1. pp. 128-132 and Geisser, 1980a.) Let us denote the population size by ,V, the sample size by I t , the observed number of elements having the property by .r, and the actual number of elements in the population having the property by 8. The probability model for the sampling mechanism is then the hypergeometric. which, for possible values of .r. has the form

If p(8 = T), T* = 0.. . . . ,I: defines a prior distribution for 8,the posterior probability that 8 = N . having observed :r = 71, is given by

Suppose we considered 0 to be the parameter of interest. and wished to provide a reference analysis. Then, since the set of possible values for 8 is finite, Proposition 5.19 implies that

p(e = r) =

1 !V+l.

r = 0.1.. . , . :V.

is a reference prior. Straightforwardcalculation then establishes that

which is not close to unity when ri is large but n / N is small. However, careful consideration of the problem suggests that it is nor 0 which is the parameter of interest: rather it is the parameter

o=

{0
1

if0 = lV if 0 # :V

To obtain a representation of O in the form (d.A), let us define


A = {

ifB=.V if H # :Y.

5.4 Reference Analysis

323

By Proposition 5.19, the reference priors r(@) K ( X 14) are both uniform over the apand propriate ranges, and are given by

T(X = 1 I Q, = 1) = 1,

1 T ( X = T 1 Q = 0) = -

= 0.1... . N

- 1.

These imply a reference prior for 0 of the form

and straightforward calculation establishes that


-1

p(e=~jz=n)=

n+l

which clearly displays the irrelevance of the sampling fraction and the approach to unity for large n (see Bemardo, 1985b.for further discussion). We return now to the general problem of defining a reference prior for 6 =

(4, A), # E 9, E A, where 4 is the parameter vector of interest and X is a nuisance I X parameter. We shall refer to the pair (4,A) as an ordered parurnerrisation of the
model. We recall that the problem arises because in order to obtain the marginal reference prior n(@) for the first parameter we need to work with the integrated model
P(.k

14) = J P ( % k I7X>n(X 14) dX. 4

However, this will only be a proper model if the conditional prior n(X 14) for the second parameter is a proper probability density and, typically, this will not be the case. This suggests the following strategy: identify an increasing sequence {A*} of subsets of A, Ui = A, which may depend on 4, such that, on each A,, the A, conditional reference prior, n ( X 14) restricted to A, can be normalised to give a which is proper. For each i, a proper integrated model reference prior, n,(X I4), can then be obtained and a marginal reference prior n,(@)identified. The required 00. The strategy reference prior n(q5, A) is then obtained by taking the limit as i clearly requires a choice of the A,s to be made, but in any specific problem a natural sequence usually suggests itself. We formalise this procedure in the next definition.

324

5 Inference

Definition 5.9. (Reference distributions given a nuisance parameter). Let x be the result of an experintent c which consists of one observationfrom A) the probubili9 model p ( x 1 o. A), x E X , (9. E 4) x A C fR x 9.The reference posterior, ir(9 I x ) ,for the parameter of interest (3, relative r o the 1 (0) experiment e and to the increasing sequences of subsets of 1 . { :Ii }. (3 E @. A, (4)= A, is defined to be the result of thefollowing procedure: (i) applying Definition 5.7 to the model p ( x I 0. Jilrfired Q, obtain the A), for conditional reference prior. 7r( A I 0). ,I; (ii) for each 6, normalise T (X I @) within each :(cp) to obtain a sequence o I , f

u,

proper priors. 7rf ( A 1 0); (iii) use these to obtain a sequence ofintegruted models
Pl(Z

10) =

l,,
0)

P(X

I cp. A b I ( A I 0 )flA:

(iv) use those to derive the sequence of reference priors

( v ) dejne 7r(@

I x ) such that,for almost all x.

The referenceprior, relutive to the orderedparametrisation (li.A). is any positivefunction 7r(@ A). such that

This will typically be simply obtained as

Ghosh and Mukerjee (1992) showed that, in effect, the reference prior thus defined maximises the missing information about the parameter of interest. 6.

5.4 Reference Analysis

325

subject to the condition that, given @, the missing information about the nuisance parameter, A, is maximised. In a model involving a parameter of interest and a nuisance parameter, the form chosen for the latter is, of course, arbitrary. Thus, p ( x I @, A) can be written alternatively as p ( z I 4, $), for any 1c, = $(#, A) for which the transformation (4,A) (4, $) is one-to-one. Intuitively, we would hope that the reference posterior for @ derived according to Definition 5.9 would not depend on the particular form chosen for the nuisance parameters. The following proposition establishes that this is the case.
---$

Proposition 5.27. (Invariance with respect t the choice of the nuisance o parameter). Let e be an experiment which consists in obtaining one observation from p(xI@,A), (&,A) E CP x A c R x R, and let e be an experiment which consists in obtaining one observation from p ( x I 4, $I), (@,+) E x \zI C ? ?where (Q, -+ (4, @) is one-to-one transx 9, I A) formation, with li, = go(A). Then, the reference posteriorsfor @, relative to [e,{At($)}] and [el, { S 1 ( # ) } where \zIt(&) = g d { A f ( @ ) }ure identical. ], .
Proof. By Proposition 5.22, for given 4,

where

Hence, if we define

and normalise re(@ over 9,(@) rA(g;(@) 14)over At(@),we see that the 14) and normalised forms are consistently related by the appropriate Jacobian element. If we denote these normalised forms, for simplicity, by r,(X I 4). ~ ~ ( $14). we see 7 that, for the integrated models used in steps (iii) and (iv) of Definition 5.9,

and hence that the procedure will lead to identical forms of r(415).

326

5 Inference

Alternatively, we may wish to consider retaining the same form of nuisance parameter, A, but redefining the parameter of interest to be a one-to-one function of 9. Thus, p ( z 14, A) might be written as p ( z I -f. A), where 7 = g ( 4 ) is now the parameter vector of interest. Intuitively, we would hope that the reference posterior for y would be consistentlyrelated to that of qj by means of the appropriate Jacobian element. The next proposition establishes that this is the case.

Proposition 5.28. (Invariance under one-to-one transformations). Let c be an experiment which consists in obtaining one observation from J J ( X 16. A), o E @, A E A, and let f J be un experiment which consists in obtaining one observationfmm p ( x 17. A), 7- E l?. A E iL where = g ( o ) . Then,given datu x, referenceposteriorsfor @ arid -{. relative to [ P . { A , (o)}] the and [e'. { @ I ( - y ) } ] , @.,(-y) = A f { g ( o ) are relatedby: }
A :

(i) 7r-,(~yIx) .rr,,(g =

':)3) (,1c.

$@isdiscrete;

Proof. In all cases, step (i)of Definition 5.9 clearly results in aconditional referencepriorn(A I @) = .rr(A 19 '(7)). Fordiscrete@,X,.rr,(o)and7rf(-,)definedby steps (ii)-(iv)of Definition 5.9 are both uniform distributions, by Proposition 5.18. and the result follows straightforwardly. If .Jg-i ( 7 ) exists. 7rf (0) 7r,(?) defined and by steps (ii)-(iv) of Definition 5.9 are related by the claimed Jacobian element. I -1,. I ( ? ) 1 by Proposition 5.22, and the result follows immediately. a

In Proposition 5.23, we saw that the identificationof explicit forms of reference prior can be greatly simplified if the approximate asymptotic posterior distribution is of the form p - ( 6 1 Zk) = p " ( 0 1 I % ) >

where & is an asymptotically sufficient, consistent estimate of 8. Proposition 5.24 establishes that even greater simplification results when the asymptotic distribution is normal. We shall now extend this to the nuisance parameter case.

Proposition5.29. ( B i v h ereferencepriors under asymptotic normulily). Let F,, be the experiment which consists ofthe observation ofu rundom sample xl.. .xnfrom p ( z 14. A), (o7.A) E @ x A C 'H x 'R, and let { A f ( c l ) } .. be suitably defined sequences of subsets of A. as required by Definition 5.9. Suppose that thejointas~mptc)ticpci.~terior distribution of (8. gir-ena I;-fold A), replicate off I , , is multivariute normul with precision matrix k n H (& ,,. ), uhere (& ),,i~ N consistent estimate rf (0. and suppose that h , , = ,,) is A)

5.4 Reference Analysis


hi, (&, Then

327
=

, ik,,), i = 1,2, j
n(A 14)

1,2, is the partition of H corresponding to &, A.

{h22(4, 4 v 2 :

deJinea reference prior relative to the ordered parametrisation ($, A), where
dA}

Proof. Given 4, the asymptotic conditional distribution of A is normal with precision knhlZ($kn,ik,). The first part of Proposition 5.29 then follows from Proposition 5.24. Marginally, the asymptotic distribution of $ is univariate normal with precision knh,, where ho = (hll - h12hG1h21). derive the form of x t ( $ ) , we note that To if zk E 2 denotes the result of a k-fold replication of e,),

where, with n,(A 14) denoting the normalised version of x(A 14) over A l ( $ ) , the integrand has the form

for large k , so that

328

5 Injerence

has the stated form. Since, for data z the reference prior n(S.A) is defined by .

I[)("

lo.X ) 7 r ( d . A)dX.

the result follows.

In many cases, the formsof {hz2(q>. and { h,,(@. factorise into products A)} A)} of separate functions of 6 and A, and the subsets {A, } do not depend on 0. In such cases, the reference prior takes on a very simple form. Corollary. Suppose that. under the conditions ($Proposition 5.29. we c*lroosc a suitable increasing sequence o subsets { -1, (.$A. which do not depend ON f } 0, suppose ulso that arul

Then a reference prior relative to the orderedparametrisation (0. ) is A

Proof. By Proposition 5.29, 7r( A

I 0)x fi(~)!g2(

A). and hence

7r,(A I d ' ) ) = tr,gz(A).

where a;' = J,

g2(A)
1

dX. It then follows that

where b, = J,

u,y.L(X)

logy1 ( A ) dA, and the result easily follows.

. I

Example 5.17. (Normal mean and standard deviation). Let c ,, he the experiment which consists in the observation of a random sample x = { . c I ,. . . . .I.,,} from a normal distribution, with both mean. 1 1 , and standard deviation, n. unknown. We shall first obtain a reference analysis for ~ r taking (T to be the nuisance parameter. ,

5.4 Reference Analysis

329

Since the distribution belongs to the exponential family, asymptotic normality obtains and the results of Proposition 5.29 can be applied. We therefore first obtain the Fisher (expected) information matrix, whose elements we recall are given by

from which it is easily verified that the asymptoticprecision matrix as a functionof 8 = (11: a) is given by

so that, for example, A, = {a; ' 5 a 5 el}, i = 1,2,. ., provides a suitable sequence e . of subsets of A = %+ not depending on p, over which ~ ( I p ) can be norrnalised and the a corollary to Proposition 5.29 can be applied. It follows that
x ( p 3 = x ( p ) n ( uI p ) x 1 x a)
0 1 -

provides a reference prior relative to the ordered parametrisation ( p , u).The corresponding reference posterior for p, given 2, is

3(

[' s

+ ( p - F)'] -"I2

= St(p (2, - l)K2, - 1). (n 71

where ns" = C(z, - Z)*. If we now reverse the roles of p and a,so that the latter is now the parameter of interest and /L is the nuisance parameter, we obtain. writing 4 = (a. ) p

330

5 Inference

sothat { h o ( ~ . p ) } ' : = f i u - ' , I r 2 2 ( a . / ~ ) } n-l2and, by a similar analysis tothe above. ' = 1ir(p1n) x a-'

so that, for example, A, = {p:--P' 5 / I 5 c ' } , i = 1.2.. . . provides a suitable sequence of subsets of 11 = R not depending on u,over which ~ ( 1 a) can be normalised and the p corollary to Proposition 5.29 can be applied. It follows that
7r(p,n) = 7r(n)7r(p n ) J 1 x n I

'

provides a reference prior relative to the ordered parametrisation (a. The corresponding p.). reference posterior for a.given 2, is
~ ( n l z3:) / p ( z l / i . n ) ~ ( p . udii )

40)

J fi
,?I

"J-,

I P. 0)4

10)

dP.

the right-hand side of which can be written in the form

Noting, by comparison with a N ( p If,n X ) density, that the integral is a constant, and implies that changing the variable to X = o-*.
?r(X

X!?l-W?-l

exp { ins'X}

or, alternatively,
~ ( X n l ~ = Ga (Xriu' s) z

I i(n -

1).

i)

= x?(Xrrs' I if - 1).

One feature of the above example is that the reference prior did not, in fact, depend on which of the parameters was taken to be the parameter of interest. In the following example the form does change when the parameter of interest changes.
Example 5.18. (Stundordised nomulmeun). We consider the same situation as that of Example 5.17, but we now take d = p / a to be the parameter of interest. If n is taken as n the nuisance parameter (by Proposition 5.27 the choice is irrelevant),@ = (6. ) = g ( / r .n ) is clearly a one-to-one transformation. with

5.4 Reference Analysis


and using Corollary I to Proposition 5.17.
@-I

331

a4(2

+ d2)) .

Again, the sequence A, = {a; 5 a 5 r'}. i = 1,2,. . ., provides a reasonable basis for e-' applying the corollary to Proposition 5.29. It is easily seen that

so that the reference prior relative to the ordered parametrisation (4, a)is given by

K ( 4 , a) x (2 + d - ' % - I
In the ( p , a)parametrisation this corresponds to
7r(p.a) x (2

5)

-'I2

a.-2,

which is clearly different from the form obtained in Example 5.17. Further discussion of this example will be provided in Example 5.26 of Section 5.6.2.

We conclude this subsection by considering a rather more involved example, where a natural choice of the required Ai(@) subsequence does depend on 4. In this case, we use Proposition 5.29, since its corollary does not apply.
Example 5.19. (Product of n o d means). Consider the case where independent random samples 2 = {q.. . ,z,,} y = { yl,. . . ,y,,,} are to be taken. respectively, from . and N ( z I (r. 1) and N(y I d. 1). fr > 0, ,3 > 0, so that the complete parametric model is

for which, writing 6 = (a. the Fisher information matrix is easily seen to be d)

H e ( 6 ) = W(tL a) =

(a

:J-

Suppose now that we make the one-to-one transformation q5 = (4. A) = (ad.a/i?) = g ( o ,b9) = g ( 6 ) , so that d = n o is taken to be the parameter of interest and A = a/@is taken to be the nuisance parameter. Such a parameter of interest arises, for example. when

332

5 Inference

inference about the area of a rectangle is required from data consisting of measurements of its sides. The Jacobian of the inverse transformation is given by

and hence, using Corollary 1 to Proposition 5. I7

The question now arises as to what constitutes a "natural" sequence {A,( o)}. over which to define the normalised a,(X I @) required by Definition 5.9. A natural increasing sequence of . subsets of the original parameter space, Pi+ x W , for (0.j ) would be the sets

s, = {(o. .j):

0 < (I

< 1 . 0 < < j< / } .

= 1.2.

which transform, in the space of X 6 A, into the sequence

We note that unlike in the previous cases we have considered. this does depend on 0. To complete the analysis, it can be shown. after some manipulation. that. for large i .

and

which leads to a reference prior relative to the ordered parametrisation

(L?. A )

given by

5.4 Reference Analysis


In the original parametnsation, this corresponds to

333

which depends on the sample sizes through the ratio m / n and reduces, in the case n = rn, to n(a,$)*x (tr2 + $?)12, a form originally proposed for this problem in an unpublished 1982 Stanford University technical report by Stein, who showed that it provides approximate agreement between Bayesian credible regions and classical confidence intervals for 4. For a detailed discussion of this example, and of the consequencesof choosing a different sequence A, (@),see Berger and Bemardo ( 1989).

We note that the preceding example serves to illustrate the fact that reference priors may depend explicitly on the sample sizes defined by the experiment. There is, of course, nothing paradoxical in this, since the underlying notion of a reference analysis is a minimally informative prior rekutive to the actual experiment to be performed.

5.4.5

Multlparameter Problems

The approach to the nuisance parameter case considered above was based on the use of an ordered parameuisation whose first and second components were (4. A), referred to, respectively, as the parameter of interest and the nuisance parameter. The reference prior for the ordered parametrisation (#, A) was then constructed by conditioning to give the form x(A I @)x(r#~). When the model parameter vector 8 has more than two components, this successive conditioning idea can obviously be extended by considering 8 as an ordered parametrisation, (81, . . . ,e,,), say. and generating, by successive conditioning, a reference prior, relative to this ordered parametrisation, of the form
x(e) =

.(ern

18,. . . . ,evt-l) . . T(e2 1 e l ) q , ) . .

In order to describe the algorithm for producing this successively conditioned form, in the standard. regular case we shall first need to introduce some notation. Assuming the parametric model p ( z I e), 8 E 8. to be such that the Fisher information matrix

has full rank, we define S ( 8 ) = H - ( 8 ) ,define the component vectors

g l J l = ( 8 ,,...,8,). eLI =(8J+l,...,87,~), and denote by SJ(8) corresponding upper left j x j submatrix of S ( 8 ) .and by the h J ( 8 )the lower right element of SJ-(8). Finally, we assume that 8 = 8 1 x . .x 0,,,with 8, E 8,, for i = 1.2, . . ., and, we denote by (8;). = 1 , 2 , . . an increasing sequence of compact subsets of 8,. I x . . . x e!,,. and define 8 l= L

..

334

5 Inference

Proposition 5.30. (Orderedreference priors under asymptotic normality). With the above notation, and under regu1arit.v conditions extending those of Proposition 5.29 in an obvious way, the reference prior n ( 0 ) .relative to the ordered paramerrisation (0,. . . . ,O,,, ), is given by

where w f( 0 )is dejined by the following recirrsion: and (i) For j = n?.. O,,, E k)!,,

(ii) For j = ria - 1. in - 2, . . . . 2 , and 6; E

el.

where

Proof. This follows closely the development given in Proposition 5.29. For details see Berger and Bemardo (1992a. 1992b. 1992~). a
The derivationof the ordered reference prior is greatly simplified if the { h, (0)} terms in the above depend only on even greater simplification obtains if H ( 8 ) is block diagonal, particularly. if, for j = 1.. . . . r n , the j t h term can be factored into a product of a function of 0, and a function not depending on fl,.

Corollary.

If b , ( 0 ) depends only on &I,

j = 1. . . . . rrt. then

5.4 Reference Analysis


!

335

I H ( 8 )is block diagonal (i.e..81, . . . 8, are mutually orthogonal),with f

then h, ( 8 ) = h,, ( 8 ) , = 1. . . . ,7n. Furthermore. j

in this latter case,

{)a,, ( 8 )1 = (6, )!?I ( 8 ) v : where g, ( 8 )does nor depend on el, and if the @ 's do not depend on 8. then

! ,

4 6 ) 0;

J=l

n
m

f,(O,).

f m o j The results follow from the recursion of Proposition 5.29.

The question obviously arises as to the appropriate ordering to be adopted in any specific problem. At present, no formal theory exists to guide such a choice, but experience with a wide range of examples suggests that-at least for nonhierarchical models (see Section 4.6.5). where the parameters may have special forms of interrelationship-the best procedure is to order the components of 8 on the basis of their inferential interest.
Example 5.20. (Reference anafysisfor nt normal means). Let e,, be an experiment . 71 2 2, a random sample from the multivariate which consists in obtaining (2,.. . in normal model N , , , ( zI p. TI,,,), 2 1. for which the Fisher information matrix is easily seen to be
It follows from Proposition 5.30 that the reference prior relative to the natural parametrisation ( p , . .. . p,,,, r ) .is given by x ( p , . . . . p,,,. T ) ix T-'.

Clearly, in this example the result does not, in fact, depend on the order in which the parametrisation is taken, since the parameters are all mutually orthogonal. The reference prior +, . . . , p ,,,. T ) x 7-I or n ( p l . . . . ,p , , , , ~ x a-'if we para) metrise in terms of u = r is thus the appropriate reference form if we are interested in any of the individual parameters. The reference posterior for any pJ is easily shown to be the Student density
7T(/lIlSl

....
1-

X,#)

=St(/II)F,J((I1-l)S-~,??l(n-l))

,=I

I-1

which agrees with the standard argument according to which one degree of freedom should be lost by each of the unknown means.

336

5 Inference

Example 5.21. (Multinomial model). Let x = { r l . . . . . r , , ,} be an observation from a rnultinomial distribution (see Section 3.2). so that
p ( 1.1. . . . .
,

H,,, 1 =

//!
rl!
'

' '

l',2,!(!l

- XI.,)!

H'

''
... ...
1

- 1 + HI - YH, 4
I W ( 0 , . . . .&,) =
1 - CH,

1 3- H? - - S8, 02

...
I

...
1

...
...

...
1 f H,,, - Z'H,

H,,,

In this case, the conditional reference priors derived using Proposition 5.28 turn out to be proper, and there i s no need to consider subset sequences { O ! } . In fact, noting that H ( O l . . . . .O,,,) is given by

we see that the conditional asymptotic precisions used in Proposition 5.29 are easily identified, and hence that

:I

f/1(1-01)

-HI#?
O2(I-H2)

" ' " '

-010, ...

...

-~lfl,,, -B,H,,,

...
'.'

...
k ( 1 -H,,,)

-U,H,,,

-H?H,,,

The required reference prior relative to the ordered parametrisation ( 0 , . . . . .H,,,). say. is then given by

and corresponding reference posterior for HI is


x(H1
11'1.

. . . . I.,,,) x

. I

/>(I).

. . . . I',,, 10,. . . .

.H , , , )

;T(HI.

. . . . N , , , ) rlH2 . . . IIH,,,

5.4 Reference Analysis


which is proportional to

337

J ();rID..

.p:"- t i 2 (1 - - q , , - c 5
x (1 - 81)-1!2(1 - 8 - 6)*)-1'* 1

After some algebra, this implies that


7r(01I r l , . . . r,.) = Be (8,
~

which, as one could expect, coincides with the reference posterior which would have been obtained had we initially collapsed the multinomial analysis to a binomial model and then carried out a reference analysis for the latter. Clearly, by symmetry considerations, the above analysis applies to any 8,.i = 1.. . . ,7n, after appropriate changes in labelling and it is independent of the particular order in which the parameters are taken. For a detailed discussion of this example see Berger and Bemardo ( 1992a). Further comments on ordering of parameters are given in Section 5.6.2.
Example5.22. (Normalcornlationcoefiient). Let { xl, , . . x , , }be a random sample from a bivariate normal distribution, A$(% I /A. 7 ) .where

Supposethat the correlationcoefficient p is the parameterof interest, and consider the ordered parametrisation { p , P I , p2,u l .u1}. It is easily seen that

50

that

338
After some algebra it can be shown that this leads to the reference prior
?T(/A//,.}".oI.02)

5 Inference

x (1

lol In,'

whatever ordering of the nuisance parameters p I. i l l . m , . o2 is taken. This agrees with Lindley's (1965, p. 219) analysis. Furthermore, as one could expect from Fisher's (1915) original analysis, the corresponding reference posterior distribution for p

(where F is the hypergeometric function), only depends on the data through the sample correlation coefficient r. whose samplingdistributiononly depends on p. For a detailed analysis of this example, see Bayarri (198 I ) ; further discussion will be provided in Section 5.6.2.

See, also, Hills (1987). Y and Berger ( 1991)and Berger and Bemardo ( I992b) e for derivations of the reference distributionsfor a variety of other interesting models.
Infinite discrete parameter spaces The infinite discrete case presents special problems, due to the nonexistence of an asymptotic theory comparable to that of the continuous case. It is. however. often possible to obtain an approximate reference posterior by embedding the discrete parameter space within a continuous one.
Example 5.23. (In@& discrete case). In the context of capture-recaptureproblems. suppose it is of interest to make inferences about an integer 19 E { 1.2.. . . } on the basis of a random sample z = {q.. . . r,, from . }

For several plausible "diffuse looking" prior distributions for 19one finds that the corresponding posterior virtually ignores the data. Intuitively, this has to be interpreted as suggesting that such priors actually contain a large amount of information about H compared with that provided by the data. A more careful approach to providing a "non-informative" prior is clearly required. One possibility would be to embed the discrete space { 1.2. . . .} in the continuous space 10. x[ since, for each H > 0. p ( . t ) l l )is still a probability density for .r. Then, using Proposition 5.24. the appropriate refrence prior is
x(B) x
//(#)I .2

(H+

])-lo

'

and it is easily verified that this prior leads to a posterior in which the data are no longer overwhelmed. If the physical conditions of the problem require the use of discrete H values. one could always use, for example,
P(H = 1 I z ) =
as an

, !

.I '2

,.I 2

r ; ( ~z)tlfl.

= ,j 1 z ) =

approximate discrete reference posterior.

5.5 Numerical Approximations

339

Prediction urui Hierurchical Models Two classes of problems that are not covered by the methods so far discussed are hierarchical models and prediction problems. The difficulty with these problems is that there are unknowns (typically the unknowns of interest) that have specified distributions. For instance, if one wants to predict y based on I when (y, z ) has density p(y, z I 8 ) .the unknown of interest is y, but its distribution is conditionally specified. One needs a reference prior for 8, not y. Likewise, in a hierarchical model with, say, pl, p2,.. . ,p p being N ( p t I pu,A), the p t ' s may be the parameters of interest but a prior is only needed for the hyperparameters po and A. The obvious way to approach such problems is to integrate out the variables with conditionally known distributions (y in the predictive problem and the { p , } in the hierarchical model), and find the reference prior for the remaining parameters based on this marginal model. The difficulty that arises is how to then identify parameters of interest and nuisance parameters to construct the ordering necessary for applying the reference prior method, the real parameters of interest having been integrated out. In future work, we propose todeal with this difficulty by defining the parameter of interest in the reduced model to be the conditional mean of the original parameter of interest. Thus, in the prediction problem, E[yl6] (which will be either 8 or some transformation thereof) will be the parameter of interest, and in the hierarchical model E [ p l I jb, A] = po will be defined to be the parameter of interest. This technique has so far worked well in the examples to which it has been applied, but further study is clearly needed.

5.5

NUMERICAL APPROXIMATIONS

Section 5.3 considered forms of approximation appropriate as the sample size becomes large relative to the amount of information contained in the prior distribution. Section 5.4 considered the problem of approximatinga prior specification maximising the expected information to be obtained from the data. In this section. we shall consider numerical techniques for implementing Bayesian methods for arbitrary forms of likelihood and prior specification. and arbitrary sample size. We note that the technical problem of evaluating quantities required for Bayesian inference summaries typically reduces to the calculation of a ratio of two integrals. Specifically, given a likelihood p(a: 1 8 ) and a prior density p ( 8 ) . the starting point for all subsequent inference summaries is the joint posterior density for 6 given by

From this, we may be interested in obtaining univariate marginal posterior densities for the components of 6, bivariate joint marginal posterior densities for pairs of

340

5 Inference

components of 8, and so on. Alternatively, we may be interested in marginal posterior densities for functions of components of 8 such as ratios or products. In all these cases, the technical key to the implementation of the formal solution given by Bayes' theorem, for specified likelihood and prior, is the ability to perform a number of integrations. First, we need to evaluate the denominator in Rayes' theorem in order to obtain the normalising constant of the posterior density: then we need to integrate over complementary components of 8. or transformations of 8, in order to obtain marginal (univariate or bivariate) densities. together with summary moments, highest posterior density intervals and regions, or whatever. Except in certain rather stylised problems (e.g.. exponential families together with conjugate priors). the required integrations will not be feasible analytically and. thus. efficient approximation strategies will be required. In this section, we shall outline five possible numerical approximation strategies. which will be discussed under the subheadings: kiplace Apl)r~~.rit?rcJriori;
Iterative Quadrature: Imporrance Sanipling; Sunipling-iniportcinc.e-r~.~.stinipliti~~; Markov Chuiri Monte Curlo. An exhaustive account of these and other methods will be given in the second volume in this series, Bu.vesiuti Conipirtcirion.

5.5.1

Laplace Approximation

We motivate the approximation by noting that the technical problem of evaluating quantities required for Bayesian inference summaries. is typically that of evaluating an integral of the form

wherep(8 I 5 )is derived from a predictive model with an appropriate representation as a mixture of parametric models, and g ( 8 ) is some real-valued function of interest. Often, g ( 8 ) is a first or second moment, and since p(8I z ) is given by

we see that E[y(8) I z)]has the form of a ratio of two integrals. Focusing initially on this situation of a required inference summary for g ( 8 ) . and assuming !I( 8 ) almost everywhere positive. we note that thc posterior expectation of interest can be written in the form

5.5 Numerical Approximations

341

where, with the vector x = (x,,. . ,x,,) observations fixed, the functions h(8) . of and h*(O) are defined by

-nh(e) = iogp(e) + iogp(z I el. -nh*(e) = iogg(e) + logp(6) + iogp(x I e).


Let us consider first the case of a single unknown parameter, 8 = 6 E 91,and define 8-8' and 6 ,d such that

-h(B) = sup { - h ( 8 ) }
B

ci = [h'7e)]-1'2

lo=*

'

Assuming I t ( . ) , It'(.) to be suitably smooth functions, the Lapface approximations for the two integrals defining the numerator and denominator of E[g(O)I x]are given (see, for example, Jeffreys and Jeffreys, 1946) by

J2ncT*n-'/2 { - - n h * ( ~ * ) } exp ,
and

&3n-'/*

exp -nh(e)} .

Essentially, the approximations consist of retaining quadratic terms in Taylor expansions of h ( - )and h'(.),and are thus equivalent to normal-like approximations to the integrands. In the context we are considering, it then follows immediately that the resulting approximation for E[g(O)I x has the form i

fi"g(8) 1 1 = x

($) exp { -n [h'(B')

"(e)]},

and Tierney and Kadane (1986) have shown that E [ g ( d )I 2 = fi[g(8)I x](1 4- O ( T ~ -.~ ) ) 1 The Laplace approximation approach, exploiting the fact that Bayesian inference summaries typically involve ratios of integrals, is thus seen to provide a potentially very powerful general approximation technique. See, also, Tierney, Kass and Kadane ( 1987, 1989a, 1989b), Kass, Tierney and Kadane (1988, 1989a. 1989b, 1991) and Wong and Li (1992) for further underpinning of, and extensions to, this methodology. Considering now the general case of 8 E 9'the Laplace approximation to '. the denominator of E[g(O)I x]is given by

342

and

the Hessian matrix of h evaluated at 8. with an exactly analogous expression for the numerator. defined in terms of It ' (.) and 8 * . Writing

completely analogous to the univariate case. If 8 = (4,A) and the required inference summary is the marginal posterior density for Cp, application of the Laplace approximation approach corresponds to obtaining y(Cp I 2)pointwise by fixing Cp in the numerator and defining g(A) = I . It is easily seen that this leads to

where -nh,,(A) = logy(@.A )

+ logy(z 14.A).
A

considered as a function of X for fixed 4, and


-/lJA,,)

sup /?,,(A).

The form fi(4I z) thus provides (up to proportionality) B pointwise approximation to the ordinates of the marginal posterior density for 4. Considering this form in more detail, we see that, if p(Cp. A ) IS constant. '

5.5 Numericul Approximurions

343

The form V2logp(s I 4, A,+,) is the Hessian of the log-likelihood function. considered as a function of X for fixed 4, and evaluated at the value which maximises the log-likelihood over X for fixed 4; the form p ( z 14.&) is usually called the profile likelihood for 9. corresponding to the parametric model p ( s I 4:A). The approximation to the marginal density for 4 given by $(+Is)has a form often referred to as the modiJedprofile likelihood (see, for example, Cox and Reid, 1987, for a convenient discussion of this terminology). Approximation to Bayesian inference summaries through Laplace approximation is therefore seen to have links with forms of inference summary proposed and derived from a non-Bayesian perspective. For further references, see Appendix B, Section 4.2. In relation to the above analysis, we note that the Laplace approximation is essentially derived by considering normal approximations to the integrands apIf pearing in the numerator and denominator of the general form E [ g ( 8 )I z]. the forms concerned are not well approximated by second-order Taylor expansions of the exponent terms of the integrands, which may be the case with small or moderate samples, particularly when components of 8 are constrained to ranges other than the real line, we may be able to improve substantially on this direct Laplace approximation approach. One possible alternative, at least if 8 = 0 is a scalar parameter, is to attempt to approximate the integrands by forms other than normal, perhaps resembling more the actual posterior shapes, such as gammas or betas. Such an approach has been followed in the one-parameter case by Morris (1988), who develops a general approximation technique based around the Pearson family of densities. These are characterised by parameters m,p0 and a quadratic function Q , which specify a density for 0 of the form

where

&(el = qo + qle + q2e2


and the range of 8 is such that 0 < Q(0) < x. It is shown by Morris (1988) that, for a given c..oice of qual--atic function Q, an analogue to the Laplace-type approximation of an integral of a unimodal function f (0) is given by

where rii = l"(e)Q(o) and 8 maximises r ( H ) = log(f(O)Q(H)].Details of the Q and p for familiar forms of Pearson densities are given in Morris forms of I< ( 19881, where it is also shown that the approximation can often be further simplified to the expression

'.

A second alternative is to note that the version of the Laplace approximation proposed by Tierney and Kadane (1986) is not invariant to changes in the (arbitrary) parametrisation chosen when specifying the likelihood and prior density functions. It may be, therefore. that by judicious rcparametrisation (of the likelihood, together with the appropriate. Jacobian adjusted, prior density) the Laplace approximation can itself be made more accurate, even in contexts where the original parametrisation does not suggest the plausibility of a normal-type approximation to the integrands. We, note, incidentally, that such a strategy is also available in multiparameter contexts. whereas the Pearson family approach does not seem so readily generalisable. To provide a concrete illustration of these alternative analytic approximation approaches consider the following. Example 5.24. (Approximatingthe mean of a beta distribution). Suppose that a posterior beta distribution. Be(@ I.,, - A. I I - I.,, i 4 ). has arisen from I a Bi( r,, I n . N ) likelihood. together with, Be(H I I. ) prior (the referenci prior. derived in f Example 5.14). Writing r,#= .I'. we can. in fact. identify the analytic form of the posterior
mean in this case.

but we shall ignore this for the moment and examine approximations implied by the techniques discussed above. First, dehning ! I ( @ ) = U, we see, after some algebra, that the Tierney-Kadane form of the Laplace approximation gives the estimated posterior mean

If. instead, we reparametrise to ( 5 = h i i i

\'Ti.

the required integrals are detined in terms of

and the Laplace approximation can be shown to be

5.5 Numerical Approximations

345

Alternatively, if we work via the Pearson family, with Q ( 0 ) = O( 1 - 0) as the natural choice for a beta-like posterior. we obtain
E(0Ir.j =

( n + 1)-1

+ 3/2) ( n + 2).1 (r + f)
(.r

+I

By considering the percentage errors of estimation. defined by

true - estimated true

I.

we can study the performance of the three estimates for various values of n and s. Details are given in Achcar and Smith (1989); here, we simply summarise, in Table 5.1, the results for 71 = 5, s = 3, which typify the performance of the estimates for small n.
Table 5.1 Approximation oJE[OI .r]fiomBe(0 I s + f . n - s + i) (percentage errors in purmrheses)
True value Laplace appro.rimations Pearson approximation

Ep Is]
~~ ~~

E[01.1
0.580 (0.6%)

E.10 I s]
0.585 (0.38)

0.583

0.563 (3.69)

We see fromTable 5. I that the Pearson approximation,which is, in some sense, preselected to be best, does, in fact, outperformthe others. However, it is strikingthat the performance of the Laplace approximation under reparametrisation leads to such a considerable improvement over that based on the original parametrisation, and is a very satisfactory alternative to the optimal Pearson form. Further examples are given in Achcar and Smith (1989).

In general, it would appear that. in cases involving a relatively small number of parameters, the Laplace approach, in combination with judicious reparametrisation, can provide excellent approximations to general Bayesian inference summaries. whether in the form of posterior moments o r marginal posteriordensities. However, in multiparameter contexts there may be numerical problems with the evaluation of local derivatives in cases where analytic forms are unobtainable or too tedious to identify explicitly. In addition, there are awkward complications if the integrands are multimodal. At the time of writing. this area of approximation theory is very much still an active research field and the full potential of this and related methods (see, also, Lindley. 1980b, Leonard et 01.. 1989) has yet to be clarified.

346
5.5.2

5 Irtference

lteratfve Quadrature

It is well known that univariate integrals of the type

are often well approximated by Gauss-Hermite quadrature rules of the form

where 1 , is the ith zero of the Hermite polynomial H,,(t). In particular, if f ( t ) is a polynomial of degree at most 211 - 1. then the quadrature rule approximates the integral without error. This implies, for example. that, if h ( t ) is a suitably well behaved function and
g(t) = h ( t )( 2 K I T L ) -"'cxp

{ - (-J}/ l t1

.,

'

then

where

n ) , = w; exp(t ; )

a;. :, =

/I

+ J2a t ,

(see, for example, Naylor and Smith. 1982). It follows that Gauss-Hermite rules are likely to prove very efficient for functions which, expressed in informal terms, closely resemble "polynomial x normal" forms. In fact, this is a rather rich class which. even for moderate n (less than 12. say). covers many of the likelihood x prior shapes we typically encounter for Moreover, the applicability of this approximaparameters defined on (--x.. x). tion is vastly extended by working with suitable transformations of parameters h defined on other ranges such as (0. x-) or (0. ) , using. for example, log(f ) or log(1 - u) - log(h - t ) . respectively. Of course, to use the above form we must specify 1-1and cr in the normal component. It turns out that. given reasonable starting values (from any convenient source. prior information, maximum likelihood estimates. etc.), we typically can successfully iterate the quadrature rule, substituting estimates of the posterior mean and variance obtained using previous values of 'in, and 2;. Moreover. we note that if the posterior density i s well-approximated by the product of a normal and a polynomial of degree at most 21, - 3, then an rwiluciting /he ti-point Gauss-Hermite rule will prove effective for .sitnul~cinc~oits!\ norrntrlising c w i s f r i n f and thc>.firsfmid .secoiid tttoitietirs. using the same (iterated)

5.5 Numricul Approximations

347

set of mi and 2,. In practice, it is efficient to begin with a small grid size (71 = 4 or n = 5) and then to gradually increase the grid size until stable answers are obtained both within and between the last two grid sizes used. Our discussion so far has been for the one-dimensional case. Clearly, however, the need for an efficient strategy is most acute in higher dimensions. The obvious extension of the above ideas is to use a Cartesian product rule giving the approximation

where the grid points and the weights are found by substituting the appropriate iterated estimates of p and a2corresponding to the marginal component t, . The problem with this obvious strategy is that the product form is only efficient if we are able to make an (at least approximate) assumption of posterior independence among the individual components. In this case, the lattice of integration points formed from the product of the two one-dimensional grids will efficiently cover the bulk of the posterior density. However, if high posterior comelations exist, these will lead to many of the lattice points falling in areas of negligible posterior density, thus causing the Cartesian product rule to provide poor estimates of the normalising constant and moments. To overcome this problem, we could first apply individual parameter transformations of the type discussed above and then attempt to transform the resulting parameters, via an appropriate linear transformation,to a new, approximately orthogonal, set of parameters. At the first step, this linear transformation derives from an initial guess or estimate of the posterior covariance matrix (for example, based on the observed information matrix from a maximum likelihood analysis). Successive transformationsare then based on the estimated covariance matrix from the previous iteration. The following general strategy has proved highly effective for problems involving up to six parameters (see, for example, Naylor and Smith, 1982, Smith er al., 1985, 1987, Naylor and Smith, 1988). ( I ) Reparametrise individual parameters so that the resulting working parameters all take values on the real line. (2) Using initial estimates of the joint posterior mean vector and covariance rnam x for the working parameters, transform further to a centred, scaled, more orthogonal set of parameters. (3) Using the derived initial location and scale estimates for these orthogonal parameters, cany out, on suitably dimensioned grids, Cartesian product integration of functions of interest.

340

5 1njerenc.e

(4) Iterate, successively updating the mean and covariance estimates, until stable

results are obtained both within and between grids of specified dimension. For problems involving larger numbers of parameters, say between six and twenty. Cartesian product approaches become computationally prohibitive and alternative approaches to numerical integration are required. One possibility is the use of spherical quadrature rules (Stroud. 1971. Sections 2.6. and 2.7). derived by transforming from cartesian to spherical polar coordinates and constructing optimal integration formulae based on symmetric configurations over concentric spheres. Full details of this approach will be given in the volume Bayesinn Conipicrclrion. For a brief introduction. see Smith (1991 ). Other relevant references on numerical quadrature include Shaw ( I98Xb). Flournoy and l'sutakawa ( 199 I ), O'Hagan ( I99 I ) and Dellaportas and Wright ( 1992). The efficiency of numerical quadrature methods is often very dependent o n the particular parametrisation used. For further information on this topic, see Marriott ( I 988). Hills and Smith ( 1992. 1993) and Marriott and Smith ( 1992). For related discussion. see Kass and Slate (1992). The ideas outlined above relate to the use of numerical quadrature formulae to implement Bayesian statistical methods. It is amusing to note that the roles can be reversed and Bayesian statistical methods used to derive optimal numerical quadrature formulae! See. for example. Diaconis ( 1988b) and O'Hagan ( 1992).

5.5.3

Importance Sampling

The importance sampling approach to numerical integration is based on the observation that, if f is a function and 9 is a probability density function

which suggest the "statistical" approach of generating a sample from the distribution function G- referred to in this context as the inzportunce sunipling distributionand using the average of the values of the ratio f l y as an unbiased estimator of j(.r)d.r. However, the variance of such an estimator clearly depends critically on the choice of G, it being desirable to choose g to be "similar" to the shape of f . In multiparameter Bayesian contexts, exploitation of this idea requires designing importance sampling distributions which are efficient for the kinds otintegrands arising in typical Bayesian applications. A considerable amount of work has focused on the use of multivariate normal or Student forms, or modifications thereof.

5.5 Numerical Approximations

349

much of this work motivated by econometric applications. We note, in particular, the contributions of Kloek and van Dijk (1978). van Dijk and Kloek (1983, 1985), van Dijk et al. (1987) and Geweke (1988, 1989). An alternative line of development (Shaw, 1988a) proceeds as follows. In the univariate case, if we choose g to be heavier-tailed than f, and if we work with y = C(s),the required integral is the expected value of f[G-(z))/y[G-(;r)] with respect to a uniform distribution on the interval (0.1). Owing to the periodic nature of the ratio function over this interval, we are likely to get a reasonable approximation to the integral by simply taking some equally spaced set of points on (0. l),rather than actually generating uniformly distributed random numbers. Iff is a function of more than one argument (k.say), an exactly parallel argument suggess that the choice of a suitable g followed by the use of a suitably selected uniform configuration of points in the k-dimensional unit hypercube will provide an efficient multidimensional integration procedure. However, the effectivenessof all this dependson choosing a suitable G, bearing in mind that we need to have available a flexible set of possible distributionalshapes, is for which G- available explicitly. In the univariate case, one such family defined on R is provided by considering the random variable

where u is uniformly distributed on (0. l), h : (0,l) function such that limh(u) = -cc
11-u

R is a monotone increasing

and 0 5 a 5 1 is a constant. The choice u = 0.5 leads to symmetric distributions; as a -, 0 or a + 1 we obtain increasingly skew distributions (to the left or right). The tail-behaviour of the distributions is governed by the choice of the function h. Thus. forexample. h ( u ) = log(u) leads toa family whosesymmetric member is the logisticdistribution; h.(u)= - tan [ T (1 - u ) / 2 ] leads toafamily whose symmetric member is the Cauchy distribution. Moreover, the moments of the distributions of the I, are polynomials in a (of correspondingorder), the median is linear in a, etc., so that sample information about such quantities provides (for any given choice of h ) operational guidance on the appropriate choice of a. To use this family in the multiparameter case, we again employ individual parameter transformations. so that all parameters belong to 8, together with orthogonalising transformations, so that parameters can be treated independently. In the transformed setting, it is natural to consider an iterative importance sampling strategy which attempts to learn about an appropriate choice of G for each parameter. As we remarked earlier, part of this strategy requires the specification of uniform configurationsof points in the k-dimensional unit hypercube. This problem has been extensively studied by number theorists and systematic experimentation

350

5 1nfrrenc.e

with various suggested forms of "quasi-random" sequences has identified effective forms of configuration for importance sampling purposes: for details. see Shaw ( 1988a). The general strategy is then the following. ( I ) Reparametrise individual parameters so that resulting working parameters all take values on the real line. (2) Using initial estimates of the posterior mean vector and covariance matrix for the working parameters. transform to a centred. scaled. more "orthogonal" set of parameters. (3) In terms of these transformed parameters, set

for "suitable" choices of q,, j = I. . . . . k. (4)Use the inverse distribution function transformation to reduce the problem to that of calculating an average over a "suitable" uniform configuration in the k-dimensional hypercube. ( 5 ) Use information from this "sample" to learn about skewness. tailweight. etc. for each gJ. and hence choose "better" !I,. j = 1. . . . . k. and revise estimates of the mean vector and covariance matrix. (6) Iterate until the sample variance of replicate estimates o f the integral value is sufficiently small. Teichroew ( 1965) provides a historical perspective on simulation techniques. For further advocacy and illustration of the use of (non-Markov-chain)Monte Carlo methods in Bayesian Statistics. see Stewart ( 1979. 1983, 1985. 1987). Stewart and Davis (1986). Shao ( 1989. 1990) and Wolpert (I991 ).

5.5.4

Sampling-importance-resampling

Instead of just using importance sampling to estimate integrals-and hence calculate posterior normalising constants and moments- we can also exploit the idea in order to produce simulated samples from posterior or predictive distributions. This technique is referred to by Rubin ( 1988)as scimplinK-import~~nc.r-re.~uniF,liil~ (SIR). We begin by taking a fresh look at Bayes' theorem from this samplingimportance-resampling perspective, shifting the focus in Bayes' theorem from densities to samples. Our account is based on Smith and Gelfand (1992). As a first step. we note the essential duality between a sample and the distribution from which it is generated: clearly, the distribution can generate the sample; conversely. given a sample we can re-create, at least approximately, thc distribution

5.5 Numerical Approximulions

351

(as a histogram, an empirical distribution function, a kernel density estimate, or whatever). In terms of densities, Bayes' theorem defines the inference process as the modification of the prior density p ( 8 ) to form the posterior density p ( 6 I z), through the medium of the likelihood function JI(Z 18). Shifting to a sampling perspective, this corresponds to the modification of a sample from p ( 8 ) to form a sample from p(8 12) through the medium of the likelihood function p ( z 18). To gain insight into the general problem of how a sample from one density may be modified to form a sample from a different density, consider the following. Suppose that a sample of random quantities has been generated from adensity g ( 8 ) , but that what it is required is a sample from the density

where only the functional form of f(8) is specified. Given f(8) and the sample from g ( 8 ) , how can we derive a sample from h(B)? In cases where there exists an identifiable constant hi > 0 such that

f(8)/g(6) A f , 5

for all 8 ,

an exact sampling procedure follows immediately from the well known rejection method for generating random quantities (see, for example, Ripley, 1987, p. 60): (i) consider a 8 generated from g ( 8 ) ; (ii) generate u from Un(u 10.1); (iii) if ti 5 f ( 8 ) / M g ( 8 ) accept 8; otherwise repeat (i)-(iii). Any accepted 8 is then a random quantity from h ( 8 ) .Given a sample of size N for g ( 8 ) . it is immediately verified that the expected sample size from h ( 8 ) is Al-IN f(x)drr. In cases where the bound A 1 in the above is not readily available, we can approximate samples from h ( 8 )as follows. Given el.. . . .8,v from g ( 8 ) , calculate

If we now draw 8' from the discrete distribution {el, . . ,8,v} having mass q1 . on then 8' is approximately distributed as a random quantity from h(8). To see this, consider, for mathematical convenience, the univariate case. Then, under appropriate regularity conditions, if P describes the actual distribution of 8*.
N

i=l

352

Since sampling with replacement is not ruled out. the sample size generated in this case can be as large as desired. Clearly. however, the less h ( 0 ) resembles g(0) the larger N will need to be if the distribution of 0' is t o be a reasonable approximation to h(0). With this sampling-importance-resampling procedure in mind, let us return to the prior to posterior sample process defined by Bayes' theorem. For fixed 2. define fz(8) = p(x I 8 ) p ( 8 ) .Then. if 4 maximibinp p(z18) is available. the rejection procedure given above can be applied to a sample for p ( 0 ) to obtain a sample from p ( e I z)by taking .u(e) p ( e ) . j ( e ) = fz(e)and .\I = p(x I b ) . = Bayes' theorem then takes the simple form: For each 8 in the prior sample, uccept 8 into the posterior saniylr with prohNb i b f,(O) - P(ZI8) ---. .\IIJ(W p(z1 e) The likelihood therefore acts in an intuitive way to define the resampling probability: those 8 with high likelihoods are more likely to be represented in the posterior sample. Alternatively. if "lf is not readily available, we can use the approximate resampling method, which selects 8, into the posterior sample with probability

Again we note that this is proportional to the likelihood. so that the inference process via sampling proceeds in an intuitive way. The sampling-resampling perspective outlined above opens up the possibility of novel applications of exploratory data analytic and computer graphical techniques in Bayesian statistics. We shall not pursue these ideas further here. since the topic is more properly dealt with in the subsequent volume Bu.wsictn Conipurutiori. For an illustration of the method in the context of sensitivity analysis and intractable reference analysis. see Stephens and Smith ( 1992);for pedagogical illustration, see Albert ( 1993).

5.5 Numerical Approximations

353

5.5.5

Markov Chain Monte Carlo

The key idea is very simple. Suppose that we wish to generate a sample from a ' posterior distribution p ( 8 1 z ) for 8 E 0 c 8'but cannot do this directly. However, which is straightsuppose that we can construct a Markov chain with state space 0, forward to simulate from, and whose equilibrium distribution is p ( 8 1 z ) . If we then run the chain for a long time, simulated values of the chain can be used as a basis for summarising features of the posterior p(8lz)of interest. To implement this strategy, we simply need algorithms for constructing chains with specified equilibrium distributions. For recent accounts and discussion, see, for example, Gelfand and Smith (1990), Casella and George (1992), Gelman and Rubin (1992a. 1992b), Geyer (1992), Raftery and Lewis ( 1 992), Ritter and Tanner (1992), Roberts (1992). Tierney (1992). Besag and Green (1993). Chan (1993, Gilks er ul. (1993) and Smith and Roberts ( 1993); see, also, Tanner and Wong ( 1987) and Tanner ( I 9 9 1 ). Under suitable regularity conditions, asymptotic results exist which clarify the sense in which the sample output from a chain with equilibrium distribution p ( 8 1 z ) can be used to mimic a random sample from p ( 8 l s ) or to estimate the expected value, with respect to p ( 8 ( z ) ,of a function g ( 8 ) of interest. If 01, . . . ,8', . . . is a realisation from an appropriate chain, typically avail02, , able asymptotic results as t - 3c include
8' -+ 8 CY p ( 8 1 s ) , in distribution

and
1 - C g(si) -, t
a=

{ g(e)} almost surely.

Clearly, successive8' will be correlated, so that, if the first of these asymptotic suitable spacings results is to be exploited to mimic a random sample from p ( 8 ) z ) . will be required between realisations used to form the sample, or parallel independent runs of the chain might be considered. The second of the asymptotic results implies that ergodic averaging of a function of interest over realisations from a single run of the chain provides a consistent estimator of its expectation. In what follows, we outline two particular forms of Markov chain scheme, which have proved particularly convenient for a range of applications in Bayesian statistics.
The Gibbs Sumpling Algorithm

Suppose that 8 , the vector of unknown quantities appearing in Bayes' theorem, has components 01,. . . ,o k , and that our objective is to obtain summary inferences from the joint posterior p(8 1 z)= ~ ( 0 1 ,. . . &I=). As we have already observed in this . section, except in simple, stylised cases, this will typically lead. unavoidably, to challenging problems of numerical integration.

In fact, this apparent need for sophisticated numerical integration technology can often be avoided by recasting the problem as one of iterative sampling of random quantities from appropriate distributions to produce an appropriate Markov chain. To this end, we note that

for the so-calledful(condirionaldensities the individual components. given the data and specified values of all the other components of 8, are typically easily identified, as functions of 8,. by inspection of the form of p ( 8 I z)x p(z I 8 ) p ( 8 )in any given application. Suppose then. that given an arbitrary set of starting values,
8, .....
(0)

[Ill

for the unknown quantities, we implement the following iterative procedure: draw 8;) from p(0l Iz.8;.
. . . .8, ).
101

draw@ from2)(6)2(z .61:. H.~ ..... H ~ ) ) ,


(0 draw B!, from ~ ( 8 : ~ 6rli I t .8, ). . . . .Iy,(01). Iz.0;.

and so on.

Now suppose that the above procedure is continued through t iterations and is independently replicated m times, so that from the current iteration we have m replicates of the sampled vector θ^{(t)} = (θ_1^{(t)}, ..., θ_k^{(t)}), where θ^{(t)} is a realisation of a Markov chain with transition probabilities given by

$$K(\theta^{(t)}, \theta^{(t+1)}) = \prod_{i=1}^{k} p\left(\theta_i^{(t+1)} \,\Big|\; x,\ \theta_j^{(t+1)},\ j < i,\ \theta_l^{(t)},\ l > i\right).$$

Then (see, for example, Geman and Geman, 1984, and Roberts and Smith, 1994), as t → ∞, (θ_1^{(t)}, ..., θ_k^{(t)}) tends in distribution to a random vector whose joint density is p(θ | x). In particular, θ_i^{(t)} tends in distribution to a random quantity whose density is p(θ_i | x). Thus, for large t, the replicates (θ_{i1}^{(t)}, ..., θ_{im}^{(t)}) are approximately a random sample from p(θ_i | x). It follows, by making m suitably large, that an estimate p̂(θ_i | x) for p(θ_i | x) is easily obtained, either as a kernel density estimate derived from (θ_{i1}^{(t)}, ..., θ_{im}^{(t)}), or from

$$\hat{p}(\theta_i \mid x) = \frac{1}{m} \sum_{j=1}^{m} p\left(\theta_i \,\Big|\; x,\ \theta_{lj}^{(t)},\ l \neq i\right).$$

So far as sampling from the p(θ_i | x, θ_j, j ≠ i), i = 1, ..., k, is concerned, either the full conditionals assume familiar forms, in which case computer routines are typically already available, or they are simply arbitrary mathematical forms, in which case general stochastic simulation techniques are available, such as envelope rejection and ratio of uniforms, which can be adapted to the specific forms (see, for example, Devroye, 1986, Ripley, 1987, Wakefield et al., 1991, Gilks, 1992, Gilks and Wild, 1992, and Dellaportas and Smith, 1993). See, also, Carlin and Gelfand (1991). The potential of this iterative scheme for routine implementation of Bayesian analysis has been demonstrated in detail for a wide variety of problems: see, for example, Gelfand and Smith (1990), Gelfand et al. (1990) and Gilks et al. (1993). We shall not provide a more extensive discussion here, since illustration of the technique in complex situations more properly belongs to the second volume of this work. We note, however, that simulation approaches are ideally suited to providing summary inferences (we simply report an appropriate summary of the sample), inferences for arbitrary functions of θ₁, ..., θ_k (we simply form a sample of the appropriate function from the samples of the θ_i's) or predictions (for example, in an obvious notation, p̂(y | x) = m⁻¹ Σ_j p(y | θ_j^{(t)}), the average being over the θ_j^{(t)}, which have an approximate p(θ | x) distribution for large t).
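To make these steps concrete, the following minimal sketch (in Python, our addition; the model and all hyperparameter values are illustrative assumptions, not taken from the text) runs the two-component Gibbs iteration for a normal sample with unknown mean and precision, a case in which both full conditionals have familiar forms.

```python
# Gibbs sampler for x_1, ..., x_n assumed N(mu, tau^{-1}), with
# independent priors mu ~ N(mu0, kappa0^{-1}) and tau ~ Gamma(a0, b0).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.5, size=50)            # simulated data for the demo
n, xbar = len(x), x.mean()
mu0, kappa0, a0, b0 = 0.0, 0.01, 1.0, 1.0    # vague illustrative hyperparameters

T = 5000
mu, tau = 0.0, 1.0                           # arbitrary starting values
draws = np.empty((T, 2))
for t in range(T):
    # full conditional of mu given tau: normal
    prec = kappa0 + n * tau
    mean = (kappa0 * mu0 + n * tau * xbar) / prec
    mu = rng.normal(mean, prec ** -0.5)
    # full conditional of tau given mu: gamma
    shape = a0 + 0.5 * n
    rate = b0 + 0.5 * np.sum((x - mu) ** 2)
    tau = rng.gamma(shape, 1.0 / rate)
    draws[t] = mu, tau

burned = draws[1000:]                        # discard burn-in
print("posterior mean of mu  ~", burned[:, 0].mean())
print("posterior mean of tau ~", burned[:, 1].mean())
```

The ergodic averages computed at the end illustrate the second of the asymptotic results quoted earlier in this section.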

The Metropolis-Hastings Algorithm


This algorithm constructs a Markov chain θ^1, θ^2, ..., θ^t, ... with state space Θ and equilibrium distribution p(θ | x) by defining the transition probability from θ^t = θ to the next realised state θ^{t+1} as follows. Let q(θ, θ′) denote a (for the moment arbitrary) transition probability function, such that, if θ^t = θ, the vector θ′ drawn from q(θ, θ′) is considered as a proposed possible value for θ^{t+1}. However, a further randomisation now takes place. With some probability α(θ, θ′), we actually accept θ^{t+1} = θ′; otherwise, we reject the value generated from q(θ, θ′) and set θ^{t+1} = θ. This construction defines a Markov chain with transition probabilities given by

$$p(\theta, \theta') = q(\theta, \theta')\,\alpha(\theta, \theta') + I(\theta' = \theta)\left[1 - \int q(\theta, \theta'')\,\alpha(\theta, \theta'')\, d\theta''\right],$$

where I(·) is the indicator function. If now we set

$$\alpha(\theta, \theta') = \min\left\{1,\ \frac{p(\theta' \mid x)\, q(\theta', \theta)}{p(\theta \mid x)\, q(\theta, \theta')}\right\},$$

it is easy to check that p(θ | x)p(θ, θ′) = p(θ′ | x)p(θ′, θ), which, provided that the thus far arbitrary q(θ, θ′) is chosen to be irreducible and aperiodic on a suitable state space, is a sufficient condition for p(θ | x) to be the equilibrium distribution of the constructed chain. This general algorithm is due to Hastings (1970); see, also, Metropolis et al. (1953), Peskun (1973), Tierney (1992), Besag and Green (1993), Roberts and Smith (1994) and Smith and Roberts (1993). It is important to note that the (equilibrium) distribution of interest, p(θ | x), only enters p(θ, θ′) through the ratio p(θ′ | x)/p(θ | x). This is quite crucial, since it means that knowledge of the distribution up to proportionality (given by the likelihood multiplied by the prior) is sufficient for implementation.
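The following minimal sketch (Python, our addition; the target density and step size are illustrative assumptions) implements the special case of a symmetric random-walk proposal, for which q(θ, θ′) = q(θ′, θ) and α reduces to a ratio of posterior densities; note that, as remarked above, only the unnormalised log posterior is required.

```python
# Random-walk Metropolis: proposals theta' ~ N(theta, s^2), so the q
# terms cancel in alpha.  log_post stands in for log p(theta | x) up to
# an additive constant (log-likelihood plus log-prior).
import numpy as np

def log_post(theta):
    return -0.5 * theta ** 2            # illustrative standard normal target

rng = np.random.default_rng(1)
s, T = 1.0, 10000
theta = 0.0                             # arbitrary starting value
chain = np.empty(T)
for t in range(T):
    prop = theta + s * rng.normal()     # proposed value from q(theta, .)
    log_alpha = log_post(prop) - log_post(theta)
    if np.log(rng.uniform()) < log_alpha:
        theta = prop                    # accept with probability alpha
    chain[t] = theta                    # otherwise stay at current value

print("ergodic average ~", chain[2000:].mean())   # estimates E(theta | x)
```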

5.6 DISCUSSION AND FURTHER REFERENCES

5.6.1 An Historical Footnote

Blackwell (1988) gave a very elegant demonstration of the way in which a simple finite additivity argument can be used to give powerful insight into the relation between frequency and belief probability. The calculation involved has added interest in that, according to Stigler (1982), it might very well have been made by Bayes himself. The argument goes as follows. Suppose that 0-1 observables x₁, ..., x_{n+1} are finitely exchangeable. We observe x = (x₁, ..., x_n) and wish to evaluate

$$\frac{P(x_{n+1} = 1 \mid x_1, \ldots, x_n)}{P(x_{n+1} = 0 \mid x_1, \ldots, x_n)}.$$

Writing s = x₁ + ⋯ + x_n, p(t) = P(x₁ + ⋯ + x_{n+1} = t), this ratio, by virtue of exchangeability, is easily seen to be equal to

$$\frac{s+1}{n+1-s}\,\frac{p(s+1)}{p(s)} \;\approx\; \frac{s}{n-s},$$

if p(s) ≈ p(s + 1) and s and n − s are not too small.


This can be interpreted as follows. If, before observing x, we considered s and s + 1 to be about equally plausible as values for x₁ + ⋯ + x_{n+1}, the resulting posterior odds for x_{n+1} = 1 will be essentially the "frequency odds" based on the first n trials. Inverting the argument, we see that if one wants to have this convergence of beliefs and frequencies it is necessary that p(s) ≈ p(s + 1). But what does this entail? Reverting to an infinite exchangeability assumption, and hence the familiar binomial framework, suppose we require that p(θ) be chosen such that

$$p(s) = \binom{n+1}{s} \int_0^1 \theta^s (1 - \theta)^{n+1-s}\, p(\theta)\, d\theta$$

does not depend on s. An easy calculation shows that this is satisfied if p(θ) is taken to be uniform on (0, 1), the so-called Bayes (or Bayes-Laplace) Postulate. Stigler (1982) has argued that an argument like the above could have been Bayes' motivation for the adoption of this uniform prior.
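For completeness, the "easy calculation" can be spelled out as follows (our addition, using the standard beta integral): with p(θ) = 1 on (0, 1),

$$p(s) = \binom{n+1}{s} \int_0^1 \theta^s (1-\theta)^{n+1-s}\, d\theta = \binom{n+1}{s}\, \frac{s!\,(n+1-s)!}{(n+2)!} = \frac{1}{n+2}, \qquad s = 0, 1, \ldots, n+1,$$

so that p(s) is constant in s and, in particular, p(s) = p(s + 1), as required.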

5.6.2 Prior Ignorance

To many attracted to the formalism of the Bayesian inferential paradigm, the idea of a non-informative prior distribution, representing "ignorance" and "letting the data speak for themselves", has proved extremely seductive, often being regarded as synonymous with providing objective inferences. It will be clear from the general subjective perspective we have maintained throughout this volume that we regard this search for "objectivity" to be misguided. However, it will also be clear from our detailed development in Section 5.4 that we recognise the rather special nature and role of the concept of a "minimally informative" prior specification, appropriately defined! In any case, the considerable body of conceptual and theoretical literature devoted to identifying "appropriate" procedures for formulating prior representations of "ignorance" constitutes a fascinating chapter in the history of Bayesian Statistics. In this section we shall provide an overview of some of the main directions followed in this search for a "Bayesian Holy Grail". In the early works by Bayes (1763) and Laplace (1814/1952), the definition of a non-informative prior is based on what has now become known as the principle of insufficient reason, or the Bayes-Laplace postulate (see Section 5.6.1). According to this principle, in the absence of evidence to the contrary, all possibilities should have the same initial probability. This is closely related to the so-called Laplace-Bertrand paradox; see Jaynes (1971) for an interesting Bayesian resolution. In particular, if an unknown quantity, φ, say, can only take a finite number of values, M, say, the non-informative prior suggested by the principle is the discrete uniform distribution p(φ) = {1/M, ..., 1/M}. This may, at first sight, seem

intuitively reasonable, but Example 5.16 showed that even in simple, finite, discrete cases care can be required in appropriately defining the unknown quantity of interest. Moreover, in countably infinite, discrete cases the uniform (now improper) prior is known to produce unappealing results. Jeffreys (1939/1961, p. 238) suggested, for the case of the integers, the prior

$$\pi(n) \propto n^{-1}, \qquad n = 1, 2, \ldots.$$

More recently, Rissanen (1983) used a coding theory argument to motivate the prior

$$\pi(n) \propto \frac{1}{n} \times \frac{1}{\log n} \times \frac{1}{\log\log n} \times \cdots, \qquad n = 1, 2, \ldots.$$

However, as indicated in Example 5.23, embedding the discrete problem within a continuous framework and subsequently discretising the resulting reference prior for the continuous case may produce better results. If the space, Φ, of φ values is a continuum (say, the real line) the principle of insufficient reason has been interpreted as requiring a uniform distribution over Φ. However, a uniform distribution for φ implies a non-uniform distribution for any non-linear monotone transformation of φ and thus the Bayes-Laplace postulate is inconsistent in the sense that, intuitively, "ignorance about φ" should surely imply "equal ignorance" about a one-to-one transformation of φ. Specifically, if some procedure yields p(φ) as a non-informative prior for φ and the same procedure yields p(ζ) as a non-informative prior for a one-to-one transformation ζ = ζ(φ) of φ, consistency would seem to demand that p(ζ)dζ = p(φ)dφ; thus, a procedure for obtaining the "ignorance" prior should presumably be invariant under one-to-one reparametrisation. Based on these invariance considerations, Jeffreys (1946) proposed as a non-informative prior, with respect to an experiment e = {X, φ, p(x | φ)} involving a parametric model which depends on a single parameter φ, the (often improper) density π(φ) ∝ h(φ)^{1/2}, where

$$h(\phi) = -\int p(x \mid \phi)\, \frac{\partial^2 \log p(x \mid \phi)}{\partial \phi^2}\, dx.$$
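As a simple worked illustration of this definition (our addition, not one of the text's numbered examples), consider a single Bernoulli observation with p(x | φ) = φ^x (1 − φ)^{1−x}, x ∈ {0, 1}. Then

$$h(\phi) = E\left[\frac{x}{\phi^2} + \frac{1-x}{(1-\phi)^2}\right] = \frac{1}{\phi} + \frac{1}{1-\phi} = \frac{1}{\phi(1-\phi)},$$

so that π(φ) ∝ φ^{−1/2}(1 − φ)^{−1/2}, a proper beta density; under any one-to-one transformation ζ = ζ(φ) the same construction yields π(ζ) = π(φ)|dφ/dζ|, exhibiting the invariance that motivated the proposal.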

In effect, Jeffreys noted that the logarithmic divergence locally behaves like the square of a distance, determined by a Riemannian metric, whose natural length element is h(φ)^{1/2} dφ, and that natural length elements of Riemannian metrics are invariant to reparametrisation. In an illuminating paper, Kass (1989) elaborated on this geometrical interpretation by arguing that, more generally, natural volume elements generate "uniform" measures on manifolds, in the sense that equal mass is assigned to regions of equal volume, the essential property that makes Lebesgue measure intuitively appealing. In his work, Jeffreys explored the implications of such a non-informative prior for a large number of inference problems. He found that his rule (by definition restricted to a continuous parameter) works well in the one-dimensional case, but can lead to unappealing results (Jeffreys, 1939/1961, p. 182) when one tries to extend it to multiparameter situations. The procedure proposed by Jeffreys was rather ad hoc, in that there are many other procedures (some of which he described) which exhibit the required type of invariance. His intuition as to what is required, however, was rather good. Jeffreys' solution for the one-dimensional continuous case has been widely adopted, and a number of alternative justifications of the procedure have been provided. Perks (1947) used an argument based on the asymptotic size of confidence regions to propose a non-informative prior of the form

$$\pi(\phi) \propto s(\phi)^{-1},$$

where s(φ) is the asymptotic standard deviation of the maximum likelihood estimate of φ. Under regularity conditions which imply asymptotic normality, this turns out to be equivalent to Jeffreys' rule. Lindley (1961b) argued that, in practice, one can always replace a continuous range of φ by discrete values over a grid whose mesh size, δ(φ), say, describes the precision of the measuring process, and that a possible operational interpretation of "ignorance" is a probability distribution which assigns equal probability to all points of this grid. In the continuous case, this implies a prior proportional to δ(φ)⁻¹. To determine δ(φ) in the context of an experiment e = {X, φ, p(x | φ)}, Lindley showed that if the quantity can only take the values φ or φ + δ(φ), the amount of information that e may be expected to provide about φ, if p(φ) = p(φ + δ(φ)) = ½, is 2δ²(φ)h(φ). This expected information will be independent of φ if δ(φ) ∝ h(φ)^{−1/2}, thus defining an appropriate mesh; arguing as before, this suggests Jeffreys' prior π(φ) ∝ h(φ)^{1/2}. Akaike (1978a) used a related argument to justify Jeffreys' prior as "locally impartial". Welch and Peers (1963) and Welch (1965) discussed conditions under which there is formal mathematical equivalence between one-dimensional Bayesian credible regions and corresponding frequentist confidence intervals. They showed that, under suitable regularity assumptions, one-sided intervals asymptotically coincide if the prior used for the Bayesian analysis is Jeffreys' prior. Peers (1965) later showed that the argument does not extend to several dimensions. Hartigan (1966b) and Peers (1968) discuss two-sided intervals. Tibshirani (1989), Mukerjee and Dey (1993) and Nicolau (1993) extend the analysis to the case where there are nuisance parameters.


Hartigan (1965) reported that the prior density which minimises the bias of the estimator φ̂ of φ associated with the loss function l(φ̂, φ) is

If, in particular, one uses the discrepancy measure

as a natural loss function (see Definition 3.15), this implies that π(φ) = h(φ)^{1/2}, which is, again, Jeffreys' prior. Good (1969) derived Jeffreys' prior as the "least favourable" initial distribution with respect to a logarithmic scoring rule, in the sense that it minimises the expected score from reporting the true distribution. Since the logarithmic score is proper, and hence is maximised by reporting the true distribution, Jeffreys' prior may technically be described, under suitable regularity conditions, as a minimax solution to the problem of scientific reporting when the utility function is the logarithmic score function. Kashyap (1971) provided a similar, more detailed argument; an axiom system is used to justify the use of an information measure as a payoff function and Jeffreys' prior is shown to be a minimax solution in a two-person zero-sum game, where the statistician chooses the "non-informative" prior and nature chooses the "true" prior. Hartigan (1971, 1983, Chapter 5) defines a similarity measure for events E, F to be P(E ∩ F)/P(E)P(F) and shows that Jeffreys' prior ensures, asymptotically, constant similarity for current and future observations. Following Jeffreys (1955), Box and Tiao (1973, Section 1.3) argued for selecting a prior by convention to be used as a standard of reference. They suggested that the principle of insufficient reason may be sensible in location problems, and for a model parameter θ proposed as a conventional prior that π(θ) which implies a uniform prior

for a function φ = φ(θ) such that p(x | φ) is, at least approximately, a location parameter family; that is, such that, for some functions g and f,
$$p(x \mid \theta) = g\{\phi(\theta) - f(x)\}.$$

Using standard asymptotic theory, they showed that, under suitable regularity conditions and for large samples, this will happen if

$$\phi(\theta) \propto \int^{\theta} h(\nu)^{1/2}\, d\nu,$$

i.e., if the non-informative prior is Jeffreys' prior. For a recent reconsideration and elaboration of these ideas, see Kass (1990), who extends the analysis by conditioning on an ancillary statistic. Unfortunately, although many of the arguments summarised above generalise to the multiparameter continuous case, leading to the so-called multivariate Jeffreys' rule

$$\pi(\theta) \propto |H(\theta)|^{1/2},$$

where

$$H_{ij}(\theta) = -\int p(x \mid \theta)\, \frac{\partial^2 \log p(x \mid \theta)}{\partial \theta_i\, \partial \theta_j}\, dx$$

is Fisher's information matrix, the results thus obtained typically have intuitively unappealing implications. An example of this, pointed out by Jeffreys himself (Jeffreys, 1939/1961, p. 182) is provided by the simple location-scale problem, where the multivariate rule leads to π(θ, σ) ∝ σ⁻², where θ is the location and σ the scale parameter. See, also, Stein (1962).
Example 5.25. (Univariate normal model). Let {x₁, ..., x_n} be a random sample from N(x | μ, λ), and consider σ = λ^{−1/2}, the (unknown) standard deviation. In the case of known mean, μ = 0, say, the appropriate (univariate) Jeffreys' prior is π(σ) ∝ σ⁻¹ and the posterior distribution of σ would be such that [Σᵢ xᵢ²]/σ² is χ²_n. In the case of unknown mean, if we used the multivariate Jeffreys' prior π(μ, σ) ∝ σ⁻², the posterior distribution of σ would be such that [Σᵢ (xᵢ − x̄)²]/σ² is, again, χ²_n. This is widely recognised as unacceptable, in that one does not "lose any degrees of freedom" even though one has lost the knowledge that μ = 0, and it conflicts with the use of the widely adopted reference prior π(μ, σ) = σ⁻¹ (see Example 5.17 in Section 5.4), which implies that [Σᵢ (xᵢ − x̄)²]/σ² is χ²_{n−1}.

The kind of problem exemplified above led Jeffreys to the ad hoc recommendation, widely adopted in the literature, of independent a priori treatment of location and scale parameters, applying his rule separately to each of the two subgroups of parameters, and then multiplying the resulting forms together to arrive at the overall prior specification. For an illustration of this, see Geisser and Cornfield (1963); for an elaboration of the idea, see Zellner (1986a). At this point, one may wonder just what has become of the intuition motivating the arguments outlined above. Unfortunately, although the implied information limits are mathematically well-defined in one dimension, in higher dimensions the forms obtained may depend on the path followed to obtain the limit. Similar problems arise with other intuitively appealing desiderata. For example, the Box and Tiao suggestion of a uniform prior following transformation to a parametrisation ensuring data translation generalises, in the multiparameter setting, to the requirement of uniformity following a transformation which ensures that credible regions are of the same size. The problem, of course, is that, in several dimensions, such regions can be of the same size but very different in form. Jeffreys' original requirement of invariance under reparametrisation remains perhaps the most intuitively convincing. If this is conceded, it follows that, whatever their apparent motivating intuition, approaches which do not have this property should be regarded as unsatisfactory. Such approaches include the use of limiting forms of conjugate priors, as in Haldane (1948), Novick and Hall (1965), Novick (1969), DeGroot (1970, Chapter 10) and Piccinato (1973, 1977), a predictivistic version of the principle of insufficient reason, Geisser (1984), and different forms of information-theoretical arguments, such as those put forward by Zellner (1977, 1991), Geisser (1979) and Torgersen (1981). Maximising the expected information (as opposed to maximising the expected missing information) gives invariant, but unappealing results, producing priors that can have finite support (Berger et al., 1989). Other information-based suggestions are those of Eaton (1982), Spall and Hill (1990) and Rodriguez (1991). Partially satisfactory results have nevertheless been obtained in multiparameter problems where the parameter space can be considered as a group of transformations of the sample space. Invariance considerations within such a group suggest the use of relatively invariant (Hartigan, 1964) priors like the Haar measures. This idea was pioneered by Barnard (1952). Stone (1965) recognised that, in an appropriate sense, it should be possible to approximate the results obtained using a non-informative prior by those obtained using a convenient sequence of proper priors. He went on to show that, if a group structure is present, the corresponding right Haar measure is the only prior for which such a desirable convergence is obtained. It is reassuring that, in those one-dimensional problems for which a group of transformations does exist, the right Haar measure coincides with the relevant Jeffreys' prior. For some undesirable consequences of the left Haar measure see Bernardo (1978b). Further developments involving Haar measures are provided by Zidek (1969), Villegas (1969, 1971, 1977a, 1977b, 1981), Stone (1970), Florens (1978, 1982), Chang and Villegas (1986) and Chang and Eaves (1990). Dawid (1983b) provides an excellent review of work up to the early 1980s. However, a large group of interesting models do not have any group structure, so that these arguments cannot produce general solutions. Even when the parameter space may be considered as a group of transformations there is no definitive answer. In such situations, the right Haar measures are the obvious choices and yet even these are open to criticism.
Example 5.26. (Standardised mean). Let x = {x₁, ..., x_n} be a random sample from a normal distribution N(x | μ, λ). The standard prior recommended by group invariance arguments is π(μ, σ) = σ⁻¹, where λ = σ⁻². Although this gives adequate results if one wants to make inferences about either μ or σ, it is quite unsatisfactory if inferences about the standardised mean φ = μ/σ are required. Stone and Dawid (1972) show that the posterior distribution of φ obtained from such a prior depends on the data through the statistic

$$t = \frac{\sum_i x_i}{\left(\sum_i x_i^2\right)^{1/2}},$$

whose sampling distribution,

$$p(t \mid \mu, \sigma) = p(t \mid \phi),$$

only depends on φ. One would, therefore, expect to be able to match the original inferences about φ by the use of p(t | φ) together with some appropriate prior for φ. However, no such prior exists. On the other hand, the reference prior relative to the ordered partition (φ, σ) is (see Example 5.18) π(φ, σ) = (2 + φ²)^{−1/2} σ⁻¹ and the corresponding posterior distribution for φ is

We observe that the factor in square brackets is proportional to p(t | φ) and thus the inconsistency disappears.

This type of marginalisation paradox, further explored by Dawid, Stone and Zidek (1973), appears in a large number of multivariate problems and makes it difficult to believe that, for any given model, a single prior may be usefully regarded as "universally" non-informative. Jaynes (1980) disagrees. An acceptable general theory for non-informative priors should be able to provide consistent answers to the same inference problem whenever this is posed in different, but equivalent forms. Although this idea has failed to produce a constructive procedure for deriving priors, it may be used to discard those methods which fail to satisfy this rather intuitive requirement.
Example 5.27. (Correlation coefficient). Let (x, y) = {(x₁, y₁), ..., (x_n, y_n)} be a random sample from a bivariate normal distribution, and suppose that inferences about the correlation coefficient ρ are required. It may be shown that if the prior is of the form

$$\pi(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho) = \pi(\rho)\,\sigma_1^{-a}\,\sigma_2^{-a},$$

which includes all proposed non-informative priors for this model that we are aware of, then the posterior distribution of ρ is given by

$$\pi(\rho \mid x, y) = \pi(\rho \mid r),$$

where

$$r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\left[\sum_i (x_i - \bar{x})^2\right]^{1/2}\left[\sum_i (y_i - \bar{y})^2\right]^{1/2}}$$

is the sample correlation coefficient, and F is the hypergeometric function appearing in the explicit form of π(ρ | r). This posterior distribution only depends on the data through the sample correlation coefficient r; thus, with this form of prior, r is sufficient. On the other hand, the sampling distribution of r is

Moreover, using the transformations δ = tanh⁻¹ ρ and t = tanh⁻¹ r, Jeffreys' prior for this univariate model is found to be π(ρ) ∝ (1 − ρ²)⁻¹ (see Lindley, 1965, pp. 215-219). Hence one would expect to be able to match, using this reduced model, the posterior distribution π(ρ | r) given previously, so that

$$\pi(\rho \mid r) \propto p(r \mid \rho)\,\pi(\rho).$$

Comparison between π(ρ | r) and p(r | ρ) shows that this is possible if and only if a = 1 and π(ρ) ∝ (1 − ρ²)⁻¹. Hence, to avoid inconsistency, the joint reference prior must be of the form

$$\pi(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho) = (1 - \rho^2)^{-1}\,\sigma_1^{-1}\,\sigma_2^{-1},$$

which is precisely (see Example 5.22, p. 337) the reference prior relative to the natural order, {ρ, μ₁, μ₂, σ₁, σ₂}. However, it is easily checked that Jeffreys' multivariate prior is

$$\pi(\mu_1, \mu_2, \sigma_1, \sigma_2, \rho) \propto (1 - \rho^2)^{-2}\,\sigma_1^{-2}\,\sigma_2^{-2},$$

and that the "two-step" Jeffreys' multivariate prior which separates the location and scale parameters is π(μ₁, μ₂)π(σ₁, σ₂, ρ) ∝ (1 − ρ²)^{−3/2} σ₁⁻¹ σ₂⁻¹. For further detailed discussion of this example, see Bayarri (1981).

Once again, this example suggests that different non-informative priors may be appropriate depending on the particular function of interest or, more generally, on the ordering of the parameters. Although marginalisation paradoxes disappear when one uses proper priors, to use proper approximations to non-informative priors as an approximate description of "ignorance" does not solve the problem either.


Example 5.28. (Stein's paradox). Let x = {x₁, ..., x_n} be a random sample from a multivariate normal distribution N_k(x | μ, I_k). Let x̄ᵢ be the mean of the n observations from coordinate i and let t = Σᵢ x̄ᵢ². The universally recommended non-informative prior for this model is π(μ₁, ..., μ_k) = 1, which may be approximated by the proper density

$$\pi(\mu_1, \ldots, \mu_k) = \prod_{i=1}^{k} N(\mu_i \mid 0, \lambda),$$

where λ is very small. However, if inferences about φ = Σᵢ μᵢ² are desired, the use of this prior overwhelms, for large k, what the data have to say about φ. Indeed, with such a prior the posterior distribution of nφ is a non-central χ² distribution with k degrees of freedom and non-centrality parameter nt, so that

$$E[\phi \mid x] = t + \frac{k}{n}, \qquad V[\phi \mid x] = \frac{2}{n}\left(2t + \frac{k}{n}\right),$$

while the sampling distribution of nt is a non-central χ² distribution with k degrees of freedom and parameter nφ, so that E[t | φ] = φ + k/n. Thus, with, say, k = 100, n = 1 and t = 200, we have E[φ | x] ≈ 300, V[φ | x] ≈ 32², whereas the unbiased estimator based on the sampling distribution gives φ̂ = t − k/n ≈ 100. However, the asymptotic posterior distribution of φ is N(φ | φ̂, n(4φ̂)⁻¹) and hence, by Proposition 5.2, the reference posterior for φ relative to p(t | φ) is

$$\pi(\phi \mid x) \propto \pi(\phi)\, p(t \mid \phi) \propto \phi^{-1/2}\, \chi^2(nt \mid k, n\phi),$$

whose mode is close to φ̂. It may be shown that this is also the posterior distribution of φ derived from the reference prior relative to the ordered partition {φ, ω₁, ..., ω_{k−1}}, obtained by reparametrising to polar coordinates in the full model. For further details, see Stein (1959), Efron (1973), Bernardo (1979b) and Ferrándiz (1982).
whose mode is close to . It may be shown that this is also the posterior distribution of 4 derived from the reference prior relative to the ordered partition { 4. u l , .. . ,L J ~ -},~ obtained by reparametrising to polar coordinates in the full model. For further details, see Stein (1959). Efron ( 1973). Bemardo (1979b) and Femindiz (1982). Naive use of apparently non-informative prior distributions can lead to posterior distributions whose corresponding credible regions have untenable coverage probabilities. in the sense that, for some region C, the corresponding posterior probabilities P(C I z) may be completely different from the conditional values P(C 18)for almost all 8 values. Such a phenomenon is often referred to as srrong inconsistency (see, for example, Stone, 1976). However, by carefully distinguishing between parameters of interest and nuisance parameters, reference analysis avoids this type of inconsistency. An illuminating example is provided by the reanalysis by Bemardo (1979b. reply to the discussion) of Stones (1976) Flatland example. For further discussion of strong inconsistency and related topics, see Appendix B, Section 3.2. Jaynes (1968) introduced a more general formulation of the problem. He allowed for the existence of a certain amount of initial objective information and then tried to determine a prior which reflected this initial information, but nothing

else (see, also, Csiszár, 1985). Jaynes considered the entropy of a distribution to be the appropriate measure of uncertainty subject to any "objective" information one might have. If no such information exists and φ can only take a finite number of values, Jaynes' maximum entropy solution reduces to the Bayes-Laplace postulate. His arguments are quite convincing in the finite case; however, if φ is continuous, the non-invariant entropy functional, H{p(φ)} = −∫ p(φ) log p(φ) dφ, no longer has a sensible interpretation in terms of uncertainty. Jaynes' solution is to introduce a "reference" density π(φ) in order to define an "invariantised" entropy,

$$-\int p(\phi) \log \frac{p(\phi)}{\pi(\phi)}\, d\phi,$$

and to use the prior which maximises this expression, subject, again, to any initial "objective" information one might have. Unfortunately, π(φ) must itself be a representation of ignorance about φ, so that no progress has been made. If a convenient group of transformations is present, Jaynes suggests invariance arguments to select the reference density. However, no general procedure is proposed. Context-specific "non-informative" Bayesian analyses have been produced for specific classes of problems, with no attempt to provide a general theory. These include dynamic models (Pole and West, 1989) and finite population survey sampling (Meeden and Vardeman, 1991). The quest for non-informative priors could be summarised as follows.

(i) In the finite case, Jaynes' principle of maximising the entropy is convincing, but cannot be extended to the continuous case.

(ii) In one-dimensional continuous regular problems, Jeffreys' prior is appropriate.

(iii) The infinite discrete case can often be handled by suitably embedding the problem within a continuous framework.

(iv) In continuous multiparameter situations there is no hope for a single, unique, "non-informative prior", appropriate for all the inference problems within a given model. To avoid having the prior dominating the posterior for some function φ of interest, the prior has to depend not only on the model but also on the parameter of interest or, more generally, on some notion of the order of importance of the parameters.

The reference prior theory introduced in Bernardo (1979b) and developed in detail in Section 5.4 avoids most of the problems encountered with other proposals. It reduces to Jaynes' form in the finite case and to Jeffreys' form in one-dimensional regular continuous problems, avoiding marginalisation paradoxes by insisting that the reference prior be tailored to the parameter of interest. However, subsequent work by Berger and Bernardo (1989) has shown that the heuristic arguments in Bernardo (1979b) can be misleading in complicated situations, thus necessitating more precise definitions. Moreover, Berger and Bernardo (1992a, 1992b, 1992c)


showed that the partition into parameters of interest and nuisance parameters may not go far enough and that reference priors should be viewed relative to a given ordering, or, more generally, a given ordered grouping, of the parameters. This approach was described in detail in Section 5.4. Ye (1993) derives reference priors for sequential experiments. A completely different objection to such approaches to non-informative priors lies in the fact that, for continuous parameters, they depend on the likelihood function. This is recognised to be potentially inconsistent with a personal interpretation of probability. For many subjectivists, the initial density p(φ) is a description of the opinions held about φ, independent of the experiment performed;
why should one's knowledge, or ignorance, of a quantity depend on the experiment being used to determine it? (Lindley, 1972, p. 71).

In many situations, we would accept this argument. However, as we argued earlier, priors which reflect knowledge of the experiment can sometimes be genuinely appropriate in Bayesian inference, and may also have a useful role to play (see, for example, the discussion of stopping rules in Section 5.1.4) as technical devices to produce reference posteriors. Posteriors obtained from actual prior opinions could then be compared with those derived from a reference analysis in order to assess the relative importance of the initial opinions on the final inference.
In general we feel that it is sensible to choose a non-informative prior which expresses ignorance relative to information which can be supplied by a particular experiment. If the experiment is changed, then the expression of relative ignorance can be expected to change correspondingly. (Box and Tiao, 1973, p. 46).

Finally, non-informative distributions have sometimes been criticised on the grounds that they are typically improper and may lead, for instance, to inadmissible estimates (see, e.g., Stein, 1956). However, sensible non-informative priors may be seen to be, in an appropriate sense, limits of proper priors (Stone, 1963, 1965, 1970; Stein, 1965; Akaike, 1980a). Regarded as a baseline for admissible inferences, posterior distributions derived from non-informative priors need not be themselves admissible, but only arbitrarily close to admissible posteriors. However, there can be no final word on this topic! For example, recent work by Eaton (1992), Clarke and Wasserman (1993), George and McCulloch (1993b) and Ye (1993) seems to open up new perspectives and directions.

5.6.3 Robustness

In Section 4.8.3, we noted that some aspects of model specification, either for the parametric model or the prior distribution components, can seem arbitrary, and cited as an example the case of the choice between normal and Student-t distributions as a parametric model component to represent departures of observables from their conditional expected values. In this section, we shall provide some discussion of how insight and guidance into appropriate choices might be obtained. We begin our discussion with a simple, direct approach to examining the ways in which a posterior density for a parameter depends on the choices of parametric model or prior distribution components. Consider, for simplicity, a single observable x ∈ ℜ having a parametric density p(x | θ), with θ ∈ ℜ having prior density p(θ). The mechanism of Bayes' theorem,

$$p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{\int p(x \mid \theta)\, p(\theta)\, d\theta},$$
involves multiplication of the two model components, p(x | θ), p(θ), followed by normalisation, a somewhat opaque operation from the point of view of comparing specifications of p(x | θ) or p(θ) on a "what if?" basis. However, suppose we take logarithms in Bayes' theorem and subsequently differentiate with respect to θ. This now results in a linear form

$$\frac{\partial}{\partial \theta} \log p(\theta \mid x) = \frac{\partial}{\partial \theta} \log p(x \mid \theta) + \frac{\partial}{\partial \theta} \log p(\theta).$$

The first term on the right-hand side is (apart from a sign change) a quantity known in classical statistics as the efficient score function (see, for example, Cox and Hinkley, 1974). On the linear scale, this is the quantity which transforms the prior into the posterior and hence opens the way, perhaps, to insight into the effect of a particular choice of p(x | θ) given the form of p(θ). See, for example, Ramsey and Novick (1980) and Smith (1983). Conversely, examination of the second term on the right-hand side for given p(x | θ) may provide insight into the implications of the mathematical specification of the prior. For convenience of exposition, and perhaps because the prior component is often felt to be the less secure element in the model specification, we shall focus the following discussion on the sensitivity of characteristics of p(θ | x) to the choice of p(θ). Similar ideas apply to the choice of p(x | θ). With x̄ denoting the mean of n independent observables from a normal distribution with mean θ and precision λ, we shall illustrate these ideas by considering the form of the posterior mean for θ when p(x̄ | θ) = N(x̄ | θ, nλ) and p(θ) is of arbitrary form. Defining

$$p(\bar{x}) = \int p(\bar{x} \mid \theta)\, p(\theta)\, d\theta, \qquad s(\bar{x}) = \frac{\partial}{\partial \bar{x}} \log p(\bar{x}),$$

it can be shown (see, for example, Pericchi and Smith, 1992) that

$$E(\theta \mid \bar{x}) = \bar{x} + n^{-1}\lambda^{-1} s(\bar{x}).$$
Suppose we carry out a "what if?" analysis by asking how the behaviour of the posterior mean depends on the mathematical form adopted for p(θ). What if we take p(θ) to be normal? With p(θ) = N(θ | μ, λ₀), the reader can easily verify that in this case p(x̄) will be normal, and hence s(x̄) will be a linear combination of x̄ and the prior mean. The formula given for E(θ | x̄) therefore reproduces the weighted average of sample and prior means that we obtained in Section 5.2, so that

$$E(\theta \mid \bar{x}) = \frac{n\lambda\, \bar{x} + \lambda_0\, \mu}{n\lambda + \lambda_0}.$$
What if we take p(θ) to be Student-t? With p(θ) = St(θ | μ, λ₀, α), the exact treatment of p(x̄) and s(x̄) becomes intractable. However, detailed analysis (Pericchi and Smith, 1992) provides the approximation

What if we take p(θ) to be double-exponential? In this case,

$$p(\theta) = \left(\frac{\nu}{2}\right)^{1/2} \exp\left\{-(2\nu)^{1/2}\,|\theta - \mu|\right\},$$

for some ν > 0, μ ∈ ℜ, and the evaluation of p(x̄) and s(x̄) is possible, but tedious. After some algebra (see Pericchi and Smith, 1992) it can be shown that, if b = n⁻¹λ⁻¹(2ν)^{1/2},

$$E(\theta \mid \bar{x}) = w(\bar{x})(\bar{x} + b) + [1 - w(\bar{x})](\bar{x} - b),$$

where w(x̄) is a weight function, 0 ≤ w(x̄) ≤ 1, so that

$$\bar{x} - b \;\le\; E(\theta \mid \bar{x}) \;\le\; \bar{x} + b.$$
Examination of the three forms for E(θ | x̄) reveals striking qualitative differences. In the case of the normal, the posterior mean is unbounded in x̄ − μ, the departure of the observed mean from the prior mean. In the case of the Student-t, we see that for very small x̄ − μ the posterior mean is approximately linear in x̄ − μ, like the normal, whereas for x̄ − μ very large the posterior mean approaches x̄. In the case of the double-exponential, the posterior mean is bounded, with limits equal to x̄ plus or minus a constant.
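These contrasts are easily inspected numerically. The sketch below (Python, our addition; the sample size, precisions and prior settings are illustrative assumptions) computes E(θ | x̄) by direct quadrature for three prior forms, and reproduces the linear, redescending and bounded behaviours just described.

```python
# Posterior mean of theta under three prior shapes, with a normal
# likelihood for xbar of precision n*lam; computed by quadrature.
import numpy as np

n, lam, mu = 10, 1.0, 0.0               # data precision and prior location
grid = np.linspace(-30, 30, 20001)      # quadrature grid for theta

def post_mean(prior_pdf, xbar):
    like = np.exp(-0.5 * n * lam * (xbar - grid) ** 2)
    w = like * prior_pdf(grid)
    return np.sum(grid * w) / np.sum(w)

normal = lambda t: np.exp(-0.5 * (t - mu) ** 2)          # normal prior
student = lambda t: (1 + (t - mu) ** 2 / 3) ** -2        # Student-t, 3 d.f.
double_exp = lambda t: np.exp(-np.abs(t - mu))           # double-exponential

for xbar in [1.0, 5.0, 20.0]:
    print(xbar,
          round(post_mean(normal, xbar), 3),
          round(post_mean(student, xbar), 3),
          round(post_mean(double_exp, xbar), 3))
# normal prior: shrinkage grows linearly in xbar - mu; Student-t prior:
# shrinkage vanishes for large xbar; double-exponential prior: the
# posterior mean stays within a fixed distance b of xbar.
```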


Consideration of these qualitative differences might provide guidance regarding an otherwise arbitrary choice if, for example, one knew how one would like the Bayesian learning mechanism to react to an "outlying" x̄ which was far from μ. See Smith (1983) and Pericchi et al. (1993) for further discussion and elaboration. See Jeffreys (1939/1961) for seminal ideas relating to the effect of the tail-weight of the distribution of the parametric model on posterior inferences. Other relevant references include Masreliez (1975), O'Hagan (1979, 1981, 1988b), West (1981), Main (1988), Polson (1991), Gordon and Smith (1993) and O'Hagan and Le (1994). The approach illustrated above is well-suited to probing qualitative differences in the posterior by considering, individually, the effects of a small number of potential alternative choices of model component (parametric model or prior distribution). Suppose, instead, that someone has in mind a specific candidate component specification, p₀, say, but is all too aware that aspects of the specification have involved somewhat arbitrary choices. It is then natural to be concerned about whether posterior conclusions might be highly sensitive to the particular specification p₀, viewed in the context of alternative choices in an appropriately defined neighbourhood of p₀. In the case of specifying a parametric component p₀, for example an "error" model for differences between observables and their (conditional) expected values, such concern might be motivated by definite knowledge of symmetry and unimodality, but an awareness of the arbitrariness of choosing a conventional distributional form such as normality. Here, a suitable neighbourhood might be formed by taking p₀ to be normal and forming a class of distributions whose tail-weights deviate (lighter and heavier) from normal; see, for example, the seminal papers of Box and Tiao (1962, 1964). In the case of specifying a prior component, such concern might be motivated by the fact that elicitation of prior opinion has only partly determined the specification (for example, by identifying a few quantiles), with considerable remaining arbitrariness in "filling out" the rest of the distribution. Here, a suitable neighbourhood of p₀ might consist of a class of priors all having the specified quantiles but with other characteristics varying; see, for example, O'Hagan and Berger (1988).

From a mathematical perspective, this formulation of the robustness problem presents some intriguing challenges. How to formulate interesting neighbourhood classes of distributions? How to calculate, with respect to such prior classes, bounds on posterior quantities of interest such as expectations or probabilities? At the time of writing, this is an area of intensive research. For example, should neighbourhoods be defined parametrically or non-parametrically? And, if non-parametrically, what measures of distance should be used to define a neighbourhood "close" to p₀? Should the elements, p, of the neighbourhood be those such that the density ratio p/p₀ is bounded in some sense? Or such that the maximum difference in the probability assigned to any event under p and p₀ is bounded? Or such that p can be written as a contamination of p₀, p = (1 − ε)p₀ + εq, for small ε and q belonging to a suitable class? As yet, few issues seem to be resolved and we shall not, therefore, attempt a detailed overview. Relevant references include Edwards et al. (1963), Dawid (1973), Dempster (1975), Hill (1975), Meeden and Isaacson (1977), Rubin (1977, 1988a, 1988b), Kadane and Chuang (1978), Berger (1980, 1982, 1985a), DeRobertis and Hartigan (1981), Hartigan (1983), Kadane (1984), Berger and Berliner (1986), Kempthorne (1986), Berger and O'Hagan (1988), Cuevas and Sanz (1988), Pericchi and Nazaret (1988), Polasek and Pötzelberger (1988, 1994), Carlin and Dempster (1989), Delampady (1989), Sivaganesan and Berger (1989, 1993), Wasserman (1989, 1992a, 1992b), Berliner and Goel (1990), Delampady and Berger (1990), Doksum and Lo (1990), Wasserman and Kadane (1990, 1992a, 1992b), Ríos (1990, 1992), Angers and Berger (1991), Berger and Fan (1991), Berger and Mortera (1991b, 1994), Lavine (1991a, 1991b, 1992a, 1992b, 1994), Lavine et al. (1991, 1993), Moreno and Cano (1991), Pericchi and Walley (1991), Pötzelberger and Polasek (1991), Sivaganesan (1991), Walley (1991), Berger and Jefferys (1992), Gilio (1992b), Gómez-Villegas and Main (1992), Moreno and Pericchi (1992, 1993), Nau (1992), Sansó and Pericchi (1992), Liseo et al. (1993), Osiewalski and Steel (1993), Bayarri and Berger (1994), de la Horra and Fernández (1994), Delampady and Dey (1994), O'Hagan (1994b), Pericchi and Pérez (1994), Ríos and Martín (1994) and Salinetti (1994). There are excellent reviews by Berger (1984a, 1985a, 1990, 1994) and Wasserman (1992a), which together provide a wealth of further references. Finally, in the case of a large data sample, one might wonder whether the data themselves could be used to suggest a suitable form of parametric model component, thus removing the need for detailed specification and hence the arbitrariness of the choice. The so-called Bayesian bootstrap provides such a possible approach; see, for instance, Rubin (1981) and Lo (1987, 1993). However, since it is a heavily computationally based method we shall defer discussion to the volume Bayesian Computation.

The term Bootstrap is more familiar to most statisticians as a computationally intensive frequentist data-based simulation method for statistical inference; in particular, as a computer-based method for assigning frequentist measures of accuracy to point estimates. For an introduction to the method, and to the related technique of jackknifing, see Efron (1982). For a recent textbook treatment, see Efron and Tibshirani (1993). See, also, Hartigan (1969, 1975).

5.6.4 Hierarchical and Empirical Bayes

In Section 4.6.5, we motivated and discussed model structures which take the form of an hierarchy. Expressed in terms of generic densities, a simple version of such an hierarchical model has the form

$$p(x_1, \ldots, x_k \mid \theta_1, \ldots, \theta_k) = \prod_{i=1}^{k} p(x_i \mid \theta_i),$$

$$p(\theta_1, \ldots, \theta_k \mid \phi) = \prod_{i=1}^{k} p(\theta_i \mid \phi),$$

$$p(\phi).$$

The basic interpretation is as follows. Observables x₁, ..., x_k are available from k different, but related, sources: for example, k individuals in a homogeneous population, or k clinical trial centres involved in the same study. The first stage of the hierarchy specifies parametric model components for each of the k observables. But because of the relatedness of the k observables, the parameters θ₁, ..., θ_k are themselves judged to be exchangeable. The second and third stages of the hierarchy thus provide a prior for θ of the familiar mixture representation form

$$p(\theta) = p(\theta_1, \ldots, \theta_k) = \int \prod_{i=1}^{k} p(\theta_i \mid \phi)\, p(\phi)\, d\phi.$$
Here, the hyperparameter φ typically has an interpretation in terms of characteristics, for example, mean and covariance, of the population (of individuals, trial centres) from which the k units are drawn. In many applications, it may be of interest to make inferences both about the unit characteristics, the θᵢ's, and the population characteristics, φ. In either case, straightforward probability manipulations involving Bayes' theorem provide the required posterior inferences as follows:

$$p(\theta_i \mid x) = \int p(\theta_i \mid x, \phi)\, p(\phi \mid x)\, d\phi,$$

where

$$p(\theta \mid x, \phi) \propto p(x \mid \theta)\, p(\theta \mid \phi), \qquad p(\phi \mid x) \propto p(x \mid \phi)\, p(\phi),$$

and

$$p(x \mid \phi) = \int p(x \mid \theta)\, p(\theta \mid \phi)\, d\theta.$$

Of course, actual implementation requires the evaluation of the appropriate integrals and this may be non-trivial in many cases. However, as we shall see in the volumes Bayesian Computation and Bayesian Methods, such models can be implemented in a fully Bayesian way using appropriate computational techniques.


A detailed analysis of hierarchical models will be provided in those volumes; some key references are Good (1965, 1980b), Ericson (1969a, 1969b), Hill (1969, 1974), Lindley (1971), Lindley and Smith (1972), Smith (1973a, 1973b), Goldstein and Smith (1974), Leonard (1975), Mouchart and Simar (1980), Goel and DeGroot (1981), Goel (1983), Dawid (1988b), Berger and Robert (1990), Pérez and Pericchi (1992), Schervish et al. (1992), van der Merwe and van der Merwe (1992), Wolpert and Warren-Hicks (1992) and George et al. (1993, 1994). A tempting approximation is suggested by the first line of the analysis above. We note that if p(φ | x) were fairly sharply peaked around its mode, φ*, say, we would have

$$p(\theta_i \mid x) \approx p(\theta_i \mid x, \phi^*).$$
The form that results can be thought of as if we first use the data to estimate φ and then, with φ* as a "plug-in" value, use Bayes' theorem for the first two stages of the hierarchy. The analysis thus has the flavour of a Bayesian analysis, but with an "empirical" prior based on the data. Such short-cut approximations to a fully Bayesian analysis of hierarchical models have become known as Empirical Bayes methods. This is actually slightly confusing, since the term was originally used to describe frequentist estimation of the second-stage distribution: see Robbins (1955, 1964, 1983). However, more recently, following the line of development of Efron and Morris (1972, 1975) and Morris (1983), the term has come to refer mainly to work aimed at approximating (aspects of) posterior distributions arising from hierarchical models. The naïve approximation outlined above is clearly deficient in that it ignores uncertainty in φ. Much of the development following Morris (1983) has been directed to finding more defensible approximations. For more whole-hearted Bayesian approaches, see Deely and Lindley (1981), Gilliland et al. (1982), Kass and Steffey (1989) and Ghosh (1992a). An eclectic account of empirical Bayes methods is given by Maritz and Lwin (1989).
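As a small numerical illustration of the plug-in idea (our construction, not from the text; the normal-normal hierarchy, the known variances, the flat prior for φ and all numbers are illustrative assumptions), consider xᵢ | θᵢ ~ N(θᵢ, 1) and θᵢ | φ ~ N(φ, τ²) with τ² known; the mode of p(φ | x) under a flat prior is then the overall mean, and plugging it in yields shrinkage estimates for the θᵢ.

```python
# Naive empirical Bayes plug-in for a normal hierarchy:
# x_i | theta_i ~ N(theta_i, 1), theta_i | phi ~ N(phi, tau2), tau2 known.
import numpy as np

rng = np.random.default_rng(2)
k, tau2 = 8, 0.5
theta = rng.normal(1.0, np.sqrt(tau2), size=k)   # simulated unit effects
x = rng.normal(theta, 1.0)                       # one observation per unit

phi_star = x.mean()                 # posterior mode of phi under a flat prior
w = tau2 / (tau2 + 1.0)             # shrinkage weight from the two-stage model
theta_plugin = phi_star + w * (x - phi_star)     # means of p(theta_i | x, phi*)

print("phi* =", round(phi_star, 3))
print("plug-in posterior means:", np.round(theta_plugin, 3))
# The plug-in ignores the uncertainty in phi; a fully Bayesian analysis
# integrates over p(phi | x) and so gives wider posterior distributions.
```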

5.6.5 Further Methodological Developments

The distinction between theory and methods is not always clear-cut and the extensive Bayesian literature on specific methodological topics obviously includes a wealth of material relating to Bayesian concepts and theory. We shall review this material in the volume Bayesian Methods and confine ourselves here to simply providing a few references. Among the areas which have stimulated the development of Bayesian theory, we note the following: Actuarial Science and Insurance (Jewell, 1974, 1988; Singpurwalla and Wilson, 1992), Calibration (Dunsmore, 1968; Hoadley, 1970; Brown and Mäkeläinen, 1992), Classification and Discrimination (Geisser, 1964, 1966; Binder, 1978; Bernardo, 1988, 1994; Bernardo and Girón, 1989; Dawid and Fang, 1992), Contingency Tables (Lindley, 1964; Good, 1965, 1967; Leonard, 1975; Leonard and Hsu, 1994), Control Theory (Aoki, 1967; Sawagari et al., 1967), Econometrics (Mills, 1992; Steel, 1992), Finite Population Sampling (Basu, 1969, 1971; Ericson, 1969b, 1988; Godambe, 1969, 1970; Smouse, 1984; Lo, 1986), Image Analysis (Geman and Geman, 1984; Besag, 1986, 1989; Geman, 1988; Mardia et al., 1992; Grenander and Miller, 1994), Law (Dawid, 1994), Meta-Analysis (DuMouchel and Harris, 1992; Wolpert and Warren-Hicks, 1992), Missing Data (Little and Rubin, 1987; Rubin, 1987; Meng and Rubin, 1992), Mixtures (Titterington et al., 1985; Berliner, 1987; Bernardo and Girón, 1988; Florens et al., 1992; West, 1992b; Diebolt and Robert, 1994; Robert and Soubiran, 1993; West et al., 1994), Multivariate Analysis (Brown et al., 1994), Quality Assurance (Wetherill and Campling, 1966; Hald, 1968; Booth and Smith, 1976; Irony et al., 1992; Singpurwalla and Soyer, 1992), Splines (Wahba, 1978, 1983, 1988; Gu, 1992; Ansley et al., 1993; Cox, 1993), Stochastic Approximation (Makov, 1988) and Time Series and Forecasting (Meinhold and Singpurwalla, 1983; West and Migon, 1985; Mortera, 1986; Smith and Gathercole, 1986; West and Harrison, 1986, 1989; Harrison and West, 1987; Ameen, 1992; Carlin and Polson, 1992; Gamerman, 1992; Smith, 1992; Gamerman and Migon, 1993; McCulloch and Tsay, 1993; Pole et al., 1994).

5.6.6 Critical Issues

We conclude this chapter on inference by briefly discussing some further issues under the headings: (i) Model Conditioned Inference, (ii) Prior Elicitation, (iii) Sequential Methods and (iv) Comparative Inference.

Model Conditioned Inference

We have remarked on several occasions that the Bayesian learning process is predicated on a more or less formal framework. In this chapter, this has translated into model conditioned inference, in the sense that all prior to posterior or predictive inferences have taken place within the closed world of an assumed model structure. It has therefore to be frankly acknowledged and recognised that all such inference is conditional. If we accept the model, then the mechanics of Bayesian learning, derived ultimately from the requirements of quantitative coherence, provide the appropriate uncertainty accounting and dynamics. But what if, as individuals, we acknowledge some insecurity about the model? Or need to communicate with other individuals whose own models differ? Clearly, issues of model criticism, model comparison, and, ultimately, model choice, are as much a part of the general world of confronting uncertainty as model conditioned thinking. We shall therefore devote Chapter 6 to a systematic exploration of these issues.

Prior Elicitation

We have emphasised, over and over, that our interpretation of a model requires, in conventional parametric representations, both a likelihood and a prior. In accounts of Bayesian Statistics from a theoretical perspective, like that of this volume, discussions of the prior component inevitably focus on stylised forms, such as conjugate or reference specifications, which are amenable to a mathematical treatment, thus enabling general results and insights to be developed. However, there is a danger of losing sight of the fact that, in real applications, prior specifications should be encapsulations of actual beliefs rather than stylised forms. This, of course, leads to the problem of how to elicit and encode such beliefs, i.e., how to structure questions to an individual, and how to process the answers, in order to arrive at a formal representation. Much has been written on this topic, which clearly goes beyond the boundaries of statistical formalism and has proved of interest and importance to researchers from a number of other disciplines, including psychology and economics. However, despite its importance, the topic has a focus and flavour substantially different from the main technical concerns of this volume, and will be better discussed in the volume Bayesian Methods. We shall therefore not attempt here any kind of systematic review of the very extensive literature. Very briefly, from the perspective of applications the best known protocol seems to be that described by Staël von Holstein and Matheson (1979), the use of which in a large number of case studies has been reviewed by Merkhofer (1987). General discussion in a text-book setting is provided, for example, by Morgan and Henrion (1990) and Goodwin and Wright (1991). Warnings about the problems and difficulties are given in Kahneman et al. (1982). Some key references are de Finetti (1967), Winkler (1967a, 1967b), Edwards et al. (1968), Hogarth (1975, 1980), Dickey (1980), French (1980), Kadane (1980), Lindley (1982d), Jaynes (1985), Garthwaite and Dickey (1992), Leonard and Hsu (1992) and West and Crosse (1992).

Sequential Methods
In Section 2.6 we gave a brief overview of sequential decision problems but, for most of our developments, we assumed that data were treated globally. It is obvious, however, that data are often available in sequential form and, moreover, there are often computational advantages in processing data sequentially, even if they are all immediately available. There is a large Bayesian literature on sequential analysis and on sequential computation, which we will review in the volumes Bayesian Computation and Bayesian Methods. Key references include the seminal monograph of Wald (1947), Jackson (1960), who provides a bibliography of early work, Wetherill (1961), and the classic texts of Wetherill (1966) and DeGroot (1970). Berger and Berry (1988) discuss the relevance of stopping rules in statistical inference. Some other references, primarily dealing with the analysis of stopping rules, are Amster (1963), Barnard (1967), Bartholomew (1967), Roberts (1967), Basu (1975) and Irony (1993). Witmer (1986) reviews multistage decision problems.
Comparative Inference

In this and in other chapters, our main concern has been to provide a self-contained systematic development of Bayesian ideas. However, both for completeness, and for the very obvious reason that there are still some statisticians who do not currently subscribe to the position adopted here, it seems necessary to make some attempt to compare and contrast Bayesian and non-Bayesian approaches. We shall therefore provide, in Appendix B, a condensed critical overview of mainstream non-Bayesian ideas and developments. Any reader for whom our treatment is too condensed should consult Thatcher (1964), Pratt (1965), Bartholomew (1971), Press (1972/1982), Barnett (1973/1982), Cox and Hinkley (1974), Box (1983), Anderson (1984), Casella and Berger (1987, 1990), DeGroot (1987), Piccinato (1992) and Poirier (1993).


Chapter 6

Remodelling
Summary
It is argued that, whether viewed from the perspective of a sensitive individual modeller or from that of a group of modellers, there are good reasons for systematically entertaining a range of possible belief models. A variety of decision problems are examined within this framework: some involving model choice only; some involving model choice followed by a terminal action, such as prediction; others involving only a terminal action. Throughout, a clear distinction is drawn between three rather different perspectives: first, the case where the range of models under consideration is assumed to include the "true" belief model; secondly, the case where the range of models is being considered in order to provide a proxy for a specified, but intractable, actual belief model; finally, the case where the range of models is being considered in the absence of specification of an actual belief model. Links with hypothesis testing, significance testing and cross-validation are established.

6.1 MODEL COMPARISON

6.1.1 Ranges of Models

We recall from Chapter 4 that our ultimate modelling concern is with predictive beliefs for sequences of observables. More specifically, most of our detailed development has centred on belief models corresponding to judgements of exchangeability or, more generally, various forms of partial exchangeability. In such cases, the predictive model typically has a mixture representation in terms of a random sample from a labelled model, together with a prior distribution for the label, the latter being interpretable in terms of a strong law limit of observables. For example, we saw that for an exchangeable real-valued sequence, a predictive belief distribution, P, has the general representation

$$P(x_1, \ldots, x_n) = \int_{\mathcal{F}} \prod_{i=1}^{n} F(x_i)\, dQ(F).$$
This corresponds to an (as if) assumption of a random sample from the unknown distribution function, F, together with a prior distribution, Q, for F, defined over the space, 𝓕, of all distribution functions on ℜ. However, the very general nature of this representation precludes it, at least in terms of current limitations on intuition and technique, from providing a practical basis for routine concrete applications. This is why, in Chapter 4, much of our subsequent development was based on formal assumptions of further invariance or sufficiency structure, or pragmatic appeal to historical experience or scientific authority, in order to replace the general representation by mixtures involving finite-parameter families of densities. Inescapably, however, this passage from the general, but intractable, form to a specific, but tractable, model involves judgements and assumptions going far beyond the simple initial judgement of exchangeability. These further judgements, and hence the models that result from them, are therefore typically much less securely based in terms of individual beliefs, and certainly much less likely to be mutually acceptable in an interpersonal context, than the straightforward symmetry judgement. Both from the perspective of a sensitive individual modeller and also from that of a group of modellers, there are therefore strong reasons for systematically entertaining a range of possible models (see, for example, Dickey, 1973, and Smith, 1986). Given the assumption of exchangeability, a range of different belief models, P₁, P₂, ..., can each be represented in the general form

$$ P_j(x_1, \ldots, x_n) = \int_{\mathcal{F}} \prod_{i=1}^{n} F(x_i) \, dQ_j(F), \qquad j = 1, 2, \ldots, $$

for some Q_1, Q_2, …, the latter encapsulating the particular alternative judgements that characterise the different models. The following stylised examples serve to illustrate some of the kinds of ranges of models that might be entertained in applications involving simple exchangeability judgements. In each case, the range of models can either be thought of as generated by a single, non-dogmatic individual (seeking to avoid commitment to one specific form); or generated as concrete suggestions by a group of individuals (each committed to one of the forms); or generated purely formally, as an imaginative proxy for models thought likely to correspond to the ranges of judgements which might be made by the eventual readership of inference reports based on the models. In general, our subsequent development will be expressed in terms of a possibly infinite sequence of models P_1, P_2, …; in practice, we typically only work with a finite range, P_1, …, P_k, for some k ≥ 2.
Inference for a Location Parameter

Suppose that observations x_1, …, x_n, … can be thought of, conditional on μ, as measurements of μ with errors e_1, …, e_n, …, so that

$$ x_i = \mu + e_i, \qquad i = 1, \ldots, n, $$

with e_1, e_2, … exchangeable. Various beliefs are then possible about the error distribution. For example, appeal to the central limit theorem (Section 3.2.3) might suggest the assumption of normality; however, past experience might suggest a substantial proportion of aberrant or outlying measurements, thus requiring a distribution with heavier tails than normality; different past experience might suggest that the experimenter automatically suppresses any observations suspected of being aberrant, thus requiring the assumption of a distribution with lighter tails than normality. With k = 3, and using density representations throughout, a choice of a range of models to cover these possibilities might be

$$ p_j(x_1, \ldots, x_n) = \int\!\!\int \prod_{i=1}^{n} f_j(x_i \mid \mu, \sigma) \, p_j(\mu, \sigma) \, d\mu \, d\sigma, \qquad j = 1, 2, 3, $$

with p_j(μ, σ), j = 1, 2, 3, specifying prior beliefs for the location and scale parameters appearing in these normal, double-exponential and uniform parametric models, f_1, f_2, f_3, respectively. Thus, each choice corresponds to a dQ_j(F) which assigns probability one to the family with parametric form f_j(· | μ, σ), with density p_j(μ, σ) for the two parameters of this family. If these modelling possibilities emanate from a single individual, p_j(μ, σ) might not depend on j; in general, however, the p_j(μ, σ) could differ, even though, in this case, the interpretations of the parameters as strong law limits of observable measures of the location and spread of the measurements are the same.
Normality versus non-Normality

Suppose that 𝓝 ⊂ 𝓕 is the set of all normal distributions on the real line, and hence that 𝓝ᶜ = 𝓕 − 𝓝 is the set of all distributions other than normal. Then, given the assumption of exchangeability for a real-valued sequence, an individual dogmatically asserting normality is specifying, in the general representation, a Q₁(F) which concentrates with probability one on 𝓝. Conversely, an individual dogmatically asserting non-normality is specifying a Q₂(F) which concentrates with probability one on 𝓝ᶜ. Our purpose here is mainly to point out how choices within the general exchangeable framework correspond to specification of Q. However, given the "size" of 𝓕, one cannot but be struck by the monumental dogmatism implicit in Q₁!
Parametric Hypotheses

Suppose that Q_j, j = 1, …, k, are even more dogmatic, in that they not only all focus on a single parametric family, p(x | θ), but, within the family, they specify θ_1, …, θ_k, respectively, as the values of the parameter, so that

$$ p_j(x_1, \ldots, x_n) = \prod_{i=1}^{n} p(x_i \mid \theta_j), \qquad j = 1, \ldots, k. $$
If k = 2, this is often referred to as a situation of two simple hypotheses. A somewhat different situation arises if k = 2 and Q_1 again focuses on a specific parameter value, θ_1, but Q_2 simply assigns a prior density p(θ) over θ in p(x | θ). The rival models then have the representations

$$ p_1(x_1, \ldots, x_n) = \prod_{i=1}^{n} p(x_i \mid \theta_1), \qquad p_2(x_1, \ldots, x_n) = \int \prod_{i=1}^{n} p(x_i \mid \theta) \, p(\theta) \, d\theta, $$

corresponding to what are usually referred to, within the context of the parametric family p(x | θ), as a simple hypothesis and a general alternative.


In the contexts of judgements of partial rather than full exchangeability, the many versions of the former discussed in Chapter 4 clearly provide considerable scope for positing interesting ranges of models in any given application. The examples which follow illustrate just a few of these possibilities, expanding somewhat on the earlier discussion of model elaboration and simplification given in Sections 4.7.3 and 4.7.4.
Several Samples

Consider the situation of m unrestrictedly exchangeable sequences of zero-one random quantities, discussed in detail in Section 4.6.2. We recall that, if x(n_i) = (x_{i1}, …, x_{in_i}), i = 1, …, m, the general representation of the joint predictive density for x(n_1), …, x(n_m) is given by

$$ p(\boldsymbol{x}(n_1), \ldots, \boldsymbol{x}(n_m)) = \int_{[0,1]^m} \prod_{i=1}^{m} \theta_i^{r_i} (1 - \theta_i)^{n_i - r_i} \, dQ(\theta_1, \ldots, \theta_m), \qquad r_i = \sum_{k=1}^{n_i} x_{ik}, $$

so that, given a basic assumption of unrestricted exchangeability, alternative models are defined by different forms of Q. As a stylised illustration of the possibilities, we might consider:

Q_1: assigning probability one to θ_1 = … = θ_m = θ, say, corresponding to the assumed equality of the limiting frequencies of ones in each of the m sequences, so that dQ(θ_1, …, θ_m) reduces to dQ_1(θ);

Q_2: assigning probability one to θ_1 = φ_1, θ_2 = … = θ_m = φ_2, say, so that dQ(θ_1, …, θ_m) reduces to dQ_2(φ_1, φ_2);

Q_3: retaining a general, non-degenerate, form dQ(θ_1, …, θ_m) over the limiting frequencies.

For example, in the context of 0-1 responses in m clinical trial treatment groups, Q_1 corresponds, loosely speaking, to the hypothesis that all treatments have the same effect; Q_2 corresponds to the hypothesis that one of the treatments (possibly a control) is different from all the other treatments, which themselves have the same effects; Q_3 corresponds to a general hypothesis that all treatments have different effects, any further (non-degenerate) relationships among them being defined by the specific form of Q_3.
Structured Layouts

In Section 4.6.3, we considered triply subscripted random quantities, x_{ijk}, representing the kth of a number of replicates of an observable in context i ∈ I, subject to treatment j ∈ J. In particular, we considered the situation where the predictive model might be thought of as generated via conditionally independent normal distributions, N(x_{ijk} | μ + α_i + β_j + γ_{ij}, τ), together with a prior distribution Q for τ and any I·J linearly independent combinations of {α_i}, {β_j}, {γ_{ij}}, μ, i ∈ I, j ∈ J. As a stylised illustration of alternative modelling possibilities, we might consider:

Q_1: specifying γ_{ij} = 0 for all i, j, together with a non-degenerate specification for {α_i}, {β_j} and μ;

Q_2: specifying γ_{ij} = 0 for all i, j and β_j = 0 for all j, together with a non-degenerate specification for {α_i} and μ;

Q_3: specifying γ_{ij} = 0 for all i, j and α_i = 0 for all i, together with a non-degenerate specification for {β_j} and μ;

Q_4: specifying γ_{ij} = 0, α_i = 0, β_j = 0, for all i, j, together with a non-degenerate specification for μ.

The reader familiar with analysis of variance methods will readily identify these prior specifications with conventional forms of hypotheses regarding absences of interaction and main effects.

Covariates

In Section 4.6.4, we discussed a variety of models involving covariates, where beliefs about the sequence of observables (x's) were structurally dependent on another set of observables (z's). Given the enormous potential variety of such covariate-dependent models, it does not seem appropriate to attempt a notationally precise illustration of all possibilities. Instead, we shall simply indicate in general terms, for each of the cases considered in Section 4.6.4, the kinds of alternative models that might be considered.
Example 6.1. (Bioassay). Alternative models for a single experiment might correspond to different assumptions about the functional dependence of the survival probabilities on the dose (for example, logit versus probit). In the case of several separate experiments, alternative models might assume the same functional form, but differ in whether or not they constrain model parameters (for example, the LD50s) to be equal.

Example 6.2. (Growth curves). Alternative models for an individual growth curve might correspond to different assumptions about the functional dependence of the response on time (for example, linear versus logistic). In the case of several growth curves for subjects from a relatively homogeneous population, alternative models might be concerned with whether some or all of the parameters defining the growth curves are identical or differ across subjects.


Example 6.3. (Multiple regression). Alternative models in the multiple regression context typically correspond to whether or not various regressor variables can be omitted from the linear regression form; equivalently, to whether or not various regression coefficients can be set equal to zero. In the third volume of this work, Bayesian Methods, we shall discuss in detail a number of practical applications of this kind.

Hierarchical Models

Given the enormous variety of potential hierarchical models and alternative forms, we shall just content ourselves with some general comments for one of the specific cases considered in Section 4.6.5.
Example 6.4. (Exchangeable normal mean parameters). In Example 4.16 of Section 4.6.5, we considered a case where all the means, μ_1, …, μ_m, of the m groups of observables with normal parametric models were judged exchangeable, and where this latter relationship was modelled as a mixture over a further parametric form, reflecting a symmetric judgement of similarity for μ_1, …, μ_m. However, other symmetry judgements are possible: for example, that m − 1 of the μ_i's are exchangeable, the other one is not, but all are equally likely, a priori, to be the odd one out. This would create a model allowing potential "outliers" among the m groups themselves. (See Section 4.7.3 for further development of this idea in a non-hierarchical setting.)

Confronted with a range of possible models, how should an individual or a group proceed? From the perspective adopted throughout this book, clearly the answer depends on the perceived decision problem to which the modelling is a response. In the remainder of this chapter, we shall therefore illustrate various kinds of decision problems that might be considered. The emphasis will be on somewhat stylised, typically simple, versions of such problems, in order to highlight the conceptual issues. Detailed case-studies, involving the substantive complexities of context and the computational complexities of implementation, will be more appropriately presented in the volumes Bayesian Computation and Bayesian Methods.

6.1.2 Perspectives on Model Comparison

To be concrete, let us assume that all the belief models P_i, i ∈ I, say, under consideration for observations x can be described in terms of finite parameter mixture representations. Given the specifications of the various densities forming the mixtures, the predictive distributions for the alternative models are described by

$$ p_i(\boldsymbol{x}) = p(\boldsymbol{x} \mid M_i) = \int p_i(\boldsymbol{x} \mid \theta_i) \, p_i(\theta_i) \, d\theta_i, \qquad i \in I. $$

For mnemonic convenience, from now on we shall denote the alternative models by {M_i, i ∈ I} (rather than P_i, i ∈ I, as in our previous discussions) and the set of these models by M = {M_i, i ∈ I}.

Before we turn to a detailed discussion of decisions concerning model choice or comparison among {M_i, i ∈ I}, we need to draw attention to important distinctions among three alternative ways in which these possible models might be viewed.

The first alternative, which we shall call the M-closed view, corresponds to believing that one of the models {M_i, i ∈ I} is "true", without explicit knowledge of which of them is the true model. From this perspective, which may reflect either the range of uncertainties within an undecided individual, or the range of different beliefs of a group of individuals, the overall model specifies beliefs for x of the form

$$ p(\boldsymbol{x}) = \sum_{i \in I} P(M_i) \, p(\boldsymbol{x} \mid M_i), $$

with P(M_i) denoting prior weights on the component models {M_i, i ∈ I}. There is, of course, some ambiguity as to what should be regarded as a component model (for example, the renormalised mixture of M_i and M_j could itself be regarded as a model), but this can be resolved pragmatically by taking {M_i, i ∈ I} to be those individual models we are interested in comparing or choosing among.

But, continuing the discussion of Section 4.8.3 on the role and nature of models, when does it actually make sense to speak of a "true" model and hence to adopt the M-closed perspective? Clearly, this would be appropriate whenever one knew for sure that the real world mechanism involved was one of a specified finite set. One rather artificial situation where this would apply would be that of a computer simulation "inference game", where data are known to have been generated using one of a set of possible simulation programs, each a coded version of a different specified probability model, but it is not known which program was used. Beyond such "controlled" situations, it seems to us to be difficult to accept the M-closed perspective in a literal sense. However, there may be situations where one might not feel too uncomfortable in proceeding "as if" one meant it. For example, suppose that a parametric model with a specified parameter has been extensively adopted and found to be a successful predictive device in a range of applications. Now suppose that a new application context arises and that it is felt necessary to reconsider whether to continue with the previous specified parameter value or, in this new context, to incorporate uncertainty about the appropriate value. Provided we feel comfortable, in principle, with assigning prior weights to these two alternative formulations, we can exploit the M-closed framework.

However, reality is typically not as relatively straightforward as this. Nature does not provide us with an exhaustive list of possible mechanisms and a guarantee that one of them is true. Instead, we ourselves choose the lists as part of the process of settling on a predictive specification that we hope will prove fit for purpose (in the jargon of modern quality assurance).


But if we abandon the M-closed perspective, how else might we approach the very real and important problem of comparing or choosing among alternative models?

It seems to us that the approach depends critically on whether one has oneself separately formulated a clear belief model or not. In the former case, the alternative models are presumably being contemplated as a proxy because the actual belief model is too cumbersome to implement; however, they will still have to be evaluated and compared in the light of these actual beliefs. In the latter case, in the absence of an actual specified belief model, it would seem intuitively (and we shall see this more formally later) that the alternative models have to battle it out among themselves on some cross-validatory basis. We now proceed to give these alternative perspectives a somewhat more formal description.

The second alternative, which we shall call the M-completed view, corresponds to an individual acting as if {M_i, i ∈ I} simply constitute a range of specified models currently available for comparison, to be evaluated in the light of the individual's separate actual belief model, which we shall denote by M_t. From this perspective, assigning the probabilities {P(M_i), i ∈ I} does not make sense and the actual overall model specifies beliefs for x of the form p(x) = p_t(x) = p(x | M_t). M-completed models, relative to a given proposed range of models {M_i, i ∈ I}, might be adopted for a variety of reasons. Typically, {M_i, i ∈ I} will have been proposed largely because they are attractive from the point of view of tractability of analysis or communication of results compared with the actual belief model M_t.

The third alternative, which we shall call the M-open view, also acknowledges that {M_i, i ∈ I} are simply a range of specified models available for comparison, so that assigning probabilities {P(M_i), i ∈ I} does not make sense. However, in this case, there is no separate overall actual belief specification, p(x), perhaps because we lack the time or competence to provide it.

Examples of lists of proxy models that are widely used include familiar ones based on parametric components, corresponding to: regression models with different choices of regressors; generalised linear models with different choices of covariates, link functions, etc.; contingency table structures with different patterns of independence and dependence assumptions. The M-open perspective requires comparison of such models in the absence of a separate belief model. The M-completed perspective will typically have selected the particular proxy models in the light of an actual belief model. For example, if the actual belief model is based on non-linear functions of many covariates, together with Student probability distribution specifications, the proxy models to be evaluated might be various linear regression models with limited numbers of covariates and normal probability distribution specifications.


6.1.3 Model Comparison as a Decision Problem

We shall now discuss various possible decision problems where the answer to an inference problem involves model choice or comparison among the alternatives in M = {M_i, i ∈ I}. Some of these only make sense from an M-closed perspective; others can be approached from either an M-closed, an M-completed or an M-open perspective. Throughout the following development, observed data on which decisions are to be based will be denoted by x, and the choice of model M_i, either as an end in itself, or as the basis for a subsequent answer to an inference problem, will be denoted by m_i, i ∈ I.

The first decision problem we shall consider involves only the choice of an M_i, without any subsequent action, so that the utility function has the form u(m_i, ω), where ω is some unknown of interest. This decision structure is shown schematically in Figure 6.1.

Figure 6.1 A decision problem involving model choice only

It is perhaps not obvious why such a problem would be of interest from an M-open perspective. However, from an M-closed perspective, an example of an obvious ω of interest might be the M_i for which, imagining a large future sample of observations, y = (y_1, …, y_s), P(M_i | y) → 1 as s → ∞. Recalling Proposition 5.9 of Section 5.2.3, ω in this case labels the true model, and the utility of choosing a particular model then depends on whether a correct choice has been made.

Whatever the forms of ω and u(m_i, ω), in the general decision problem defined by Figure 6.1, maximising expected utility implies that the optimal model choice m* is given by

$$ \bar{u}(m^* \mid \boldsymbol{x}) = \sup_{i \in I} \bar{u}(m_i \mid \boldsymbol{x}), $$

where

$$ \bar{u}(m_i \mid \boldsymbol{x}) = \int u(m_i, \omega) \, p(\omega \mid \boldsymbol{x}) \, d\omega, $$

with p(ω | x) representing actual beliefs about ω having observed x.

In the M-closed case,

$$ p(\omega \mid \boldsymbol{x}) = \sum_{i \in I} P(M_i \mid \boldsymbol{x}) \, p_i(\omega \mid \boldsymbol{x}), $$

where

$$ P(M_i \mid \boldsymbol{x}) \propto P(M_i) \, p(\boldsymbol{x} \mid M_i) = P(M_i) \int p_i(\boldsymbol{x} \mid \theta_i) \, p_i(\theta_i) \, d\theta_i, $$

and p_i(ω | x) = p(ω | M_i, x) is given by standard (posterior or predictive) manipulations conditional on model M_i, i ∈ I. We note, in particular, the key role played by the quantities {P(M_i | x), i ∈ I}, which, within the purview of the M-closed framework, are the posterior probabilities, given x, of model M_i, i ∈ I, being true.

From the M-completed perspective, we can, at least in principle, obtain p(ω | x) and evaluate ū(m_i | x), i ∈ I, even though this may require extensive (Monte Carlo) numerical calculations in specific applications. In this way, one can compare the models in M, even though none of them corresponds to one's own assumption regarding the "true" model.

From the M-open perspective, nothing can be said in general about the explicit form of p(ω | x). It turns out, however, perhaps surprisingly, that, at least approximately, the same analysis can be carried out in the M-open as in the M-completed framework; in other words, one can compare the models in M on the basis of their expected utilities without actually having specified an alternative "true" model. We shall defer a detailed discussion of this until Section 6.1.6.

Let us now consider a rather different form of decision problem which first requires the choice of model M_i from M, which we denote by m_i, and then, assuming M_i to be the model, requires an answer a_j, j ∈ J_i, relating to an unknown state of the world ω of interest. For example, we may wish to predict a future observation, or estimate a parameter common to all the models in M. If u(m_i, a_j, ω) denotes the utility resulting from the successive choices m_i (i.e., model M_i) and a_j, j ∈ J_i (answer to inference question, given M_i), when ω is the actual state of the world, the resulting decision problem is shown schematically in Figure 6.2.

Figure 6.2 A decision problem involving model choice and subsequent inference


Systematic application of the criterion of maximising expected utility establishes that the optimal model choice is that m* for which

$$ \bar{u}(m^* \mid \boldsymbol{x}) = \sup_{i \in I} \bar{u}(m_i \mid \boldsymbol{x}), $$

where

$$ \bar{u}(m_i \mid \boldsymbol{x}) = \int u(m_i, a_i^*, \omega) \, p(\omega \mid \boldsymbol{x}) \, d\omega $$

is the expected utility, given x, of optimal behaviour given model M_i, so that a_i* is obtained from maximising

$$ \int u(m_i, a_j, \omega) \, p_i(\omega \mid \boldsymbol{x}) \, d\omega. $$

The form p_i(ω | x) in the above is again given by standard (posterior or predictive) manipulation conditional on model M_i, i ∈ I, while the form p(ω | x) again represents actual beliefs about ω given x.

The explicit form of p(ω | x) as a mixture of the p_i(ω | x) has been given above in the M-closed case. In the M-completed case, we have also noted above that evaluation of p(ω | x) and {ū(m_i | x), i ∈ I} can in principle be carried out, numerically if necessary. Detailed analysis for the M-open case will be given in Section 6.1.6.

From a conceptual perspective, it is important to recognise that different choices of ω and different forms of utility structure will naturally imply different forms of solution to the problem of model choice. In the next two subsections, we shall explore a number of specific cases, in order to underline the general message that coherent comparison of a finite or countable set of alternative models depends on the specification (at least implicitly) of a decision structure, including a utility function.

Before proceeding to further aspects of model choice and comparison, however, it is worth remarking that, in the above context, it is not necessary to choose among the elements of M in order to provide an answer a to an inference problem. If we omit the explicit model choice step, the resulting, different form of decision problem is that shown schematically in Figure 6.3.

Figure 6.3 A decision problem involving terminal decision only


In this case, maximising expected utility leads immediately to the optimal answer a*, given by

$$ \bar{u}(a^* \mid \boldsymbol{x}) = \sup_{a} \bar{u}(a \mid \boldsymbol{x}), $$

where

$$ \bar{u}(a \mid \boldsymbol{x}) = \int u(a, \omega) \, p(\omega \mid \boldsymbol{x}) \, d\omega, $$

with p(ω | x) as discussed above. In the particular case of an M-closed perspective, it follows from the posterior weighted mixture form of p(ω | x) that, although we have omitted the model choice step, model comparison in the light of the data x is still being effected through the presence of {P(M_i | x), i ∈ I}. In general, if we entertain a range of possible models for data x, solutions to decision problems conditional on x will always implicitly depend on a comparison of the models in the light of the data, even if explicit choice among the models is not part of the decision problem.

6.1.4 Zero-One Utilities and Bayes Factors

In this section, we confine attention to the M-closed perspective and consider first the problem of choosing a model from M, without any subsequent decision, when the state of the world of interest is defined to be the true model, i.e., the M_i for which, given a future sample y = (y_1, …, y_s), P(M_i | y) → 1 as s → ∞. From the M-closed perspective, the problem, stated colloquially, is that of choosing the true model. In this case, a natural form of utility function may be

$$ u(m_i, \omega) = \begin{cases} 1 & \text{if } \omega = M_i \\ 0 & \text{if } \omega \neq M_i. \end{cases} $$

It is then easily seen from the analysis relating to Figure 6.1 that

$$ p_i(\omega \mid \boldsymbol{x}) = \begin{cases} 1 & \text{if } \omega = M_i \\ 0 & \text{if } \omega \neq M_i, \end{cases} $$

and

$$ p(\omega \mid \boldsymbol{x}) = \sum_{j \in I} P(M_j \mid \boldsymbol{x}) \, p_j(\omega \mid \boldsymbol{x}). $$

The expected utility of the decision m_i (choosing M_i), given x, is hence

$$ \bar{u}(m_i \mid \boldsymbol{x}) = \int u(m_i, \omega) \, p(\omega \mid \boldsymbol{x}) \, d\omega = P(M_i \mid \boldsymbol{x}), \qquad i \in I. $$

The optimal decision is therefore to choose the model which has the highest posterior probability.
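As a small computational aside, this rule is trivial to implement once the integrated likelihoods p(x | M_i) are available. The following Python sketch is purely illustrative (the function name and the numerical inputs are ours, not part of the original development):

```python
import numpy as np

def posterior_model_probabilities(prior_weights, integrated_likelihoods):
    """Compute P(M_i | x) from prior weights P(M_i) and integrated
    likelihoods p(x | M_i), by normalising their products."""
    w = np.asarray(prior_weights) * np.asarray(integrated_likelihoods)
    return w / w.sum()

# Under zero-one utility, the optimal decision is the model with the
# highest posterior probability.
post = posterior_model_probabilities([0.5, 0.3, 0.2],
                                     [1.2e-4, 3.4e-4, 0.9e-4])
print(post, "-> choose M_%d" % (np.argmax(post) + 1))
```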


Bayes Factors

Less formally, suppose that some form of intuitive measure of pairwise comparison of plausibility is required between any two of the models {M_i, i ∈ I}. The above analysis suggests that M_i, M_j may be usefully compared using the posterior odds ratio,

$$ \frac{P(M_i \mid \boldsymbol{x})}{P(M_j \mid \boldsymbol{x})} = \frac{p(\boldsymbol{x} \mid M_i)}{p(\boldsymbol{x} \mid M_j)} \times \frac{P(M_i)}{P(M_j)}, $$

where, for example,

$$ p(\boldsymbol{x} \mid M_i) = \int p_i(\boldsymbol{x} \mid \theta_i) \, p_i(\theta_i) \, d\theta_i. $$

In words, the above comparison can be described as

posterior odds ratio = integrated likelihood ratio × prior odds ratio,

making explicit the key role of the ratio of integrated likelihoods in providing the mechanism by which the data transform relative prior beliefs into relative posterior beliefs, in the context of parametric models. The fundamental importance of this transformation warrants the following definition, apparently due to Turing (see, for example, Good, 1988b).

Definition 6.1. (Bayes factor). Given two hypotheses H_i, H_j corresponding to assumptions of alternative models, M_i, M_j, for data x, the Bayes factor in favour of H_i (and against H_j) is given by the posterior to prior odds ratio,

$$ B_{ij}(\boldsymbol{x}) = \frac{P(M_i \mid \boldsymbol{x})}{P(M_j \mid \boldsymbol{x})} \bigg/ \frac{P(M_i)}{P(M_j)} = \frac{p(\boldsymbol{x} \mid M_i)}{p(\boldsymbol{x} \mid M_j)} \, . $$

Intuitively, the Bayes factor provides a measure of whether the data x have increased or decreased the odds on H_i relative to H_j. Thus, B_ij(x) > 1 signifies that H_i is now relatively more plausible in the light of x; B_ij(x) < 1 signifies that the relative plausibility of H_j has increased.

Good (1950) has suggested that the logarithms of the various ratios in the above be called weights of evidence (a term apparently first used in a related context by Peirce, 1878), so that log B_ij(x) corresponds to the integrated likelihood weight of evidence in favour of M_i (and against M_j). On this logarithmic scale, the prior weight of evidence and log B_ij(x) combine additively to give the posterior weight of evidence.

In Section 6.1.1, we noted the extremely simple forms of predictive models which result when beliefs not only concentrate on a specific parametric family of distributions, but also identify the value of the parameter. An alternative set of such models, M_i, i ∈ I, then just corresponds to the specifications {p_i(x | θ_i), i ∈ I}, and the integrated likelihood ratios reduce to simple ratios of likelihoods.


Hypothesis Testing

The problem of hypothesis testing has its own conventional terminology, which, within the framework we are adopting, can be described as follows. Two alternative models, M_1, M_2, are under consideration and both are special cases of the predictive model

$$ p(\boldsymbol{x}) = \int p(\boldsymbol{x} \mid \theta) \, dQ(\theta), $$

with the same assumed parametric form p(x | θ), θ ∈ Θ, but with different choices of Q. If, for model M_i, Q_i assigns probability one to a specific value, θ_i, say, the model is said to reduce to a simple hypothesis for θ (recalling that the form p(x | θ) is assumed throughout). If, for model M_j, Q_j defines a non-degenerate density p_j(θ) over Θ_j ⊆ Θ, the model is said to reduce to a composite hypothesis for θ. If a simple hypothesis is being compared with a composite hypothesis, so that Θ_j = Θ − {θ_i}, the latter is called a general alternative hypothesis.

In the situation where the state of the world of interest, ω, is defined to be the true model M_i, we can generalise slightly the zero-one utility structure used earlier by assuming that

$$ u(m_i, \omega) = -l_{ij}, \qquad \omega = M_j, \qquad i = 1, 2, \; j = 1, 2, $$

with l_11 = l_22 = 0 and l_12 > l_21 > 0. Intuitively, there is a (possibly asymmetric) loss in choosing the wrong model, and there is no loss in choosing the correct model. Given data x, and using, again, p_i(ω | x) = 1 if ω = M_i and 0 otherwise, the expected utility of m_i is then easily seen to be

$$ \bar{u}(m_1 \mid \boldsymbol{x}) = -l_{12} \, P(M_2 \mid \boldsymbol{x}), \qquad \bar{u}(m_2 \mid \boldsymbol{x}) = -l_{21} \, P(M_1 \mid \boldsymbol{x}), $$

so that we prefer M_2 to M_1 if and only if

$$ \frac{P(M_1 \mid \boldsymbol{x})}{P(M_2 \mid \boldsymbol{x})} < \frac{l_{12}}{l_{21}} \, , $$

revealing a balancing of the posterior odds against the relative seriousness of the two possible ways of selecting the wrong model. In the symmetric case, l_12 = l_21, the choice reduces to choosing the a posteriori most likely model, as shown earlier for the zero-one case. The following describes the forms of so-called Bayes tests which arise in comparing models when the latter are defined by parametric hypotheses.


Proposition 6.1. (Forms of Bayes tests). In comparing two models, M_1, M_2, defined by parametric hypotheses for p(x | θ), with utility structure u(m_i, M_j) = −l_ij, l_11 = l_22 = 0, M_1 is to be preferred to M_2 if and only if

$$ B_{12}(\boldsymbol{x}) > \frac{l_{12}}{l_{21}} \cdot \frac{P(M_2)}{P(M_1)} \, , $$

where:

$$ B_{12}(\boldsymbol{x}) = \frac{p(\boldsymbol{x} \mid \theta_1)}{p(\boldsymbol{x} \mid \theta_2)} \quad \text{(simple versus simple test)}; $$

$$ B_{12}(\boldsymbol{x}) = \frac{p(\boldsymbol{x} \mid \theta_1)}{\int p(\boldsymbol{x} \mid \theta) \, p_2(\theta) \, d\theta} \quad \text{(simple versus composite test)}; $$

$$ B_{12}(\boldsymbol{x}) = \frac{\int p(\boldsymbol{x} \mid \theta) \, p_1(\theta) \, d\theta}{\int p(\boldsymbol{x} \mid \theta) \, p_2(\theta) \, d\theta} \quad \text{(composite versus composite test)}. $$

Proof. The results follow directly from the preceding discussion.

The following examples illustrate both general model comparison and a specific instance of hypothesis testing.
Example 6.5. (Geometric versus Poisson). Suppose we wish to compare the two completely specified parametric models, negative-binomial and Poisson, defined for conditionally independent x_1, …, x_n by

$$ M_1 : \text{Nb}(x_i \mid \theta_1, 1), \qquad M_2 : \text{Pn}(x_i \mid \theta_2), \qquad i = 1, \ldots, n. $$

The Bayes factor in this case is given by the simple likelihood ratio

$$ B_{12}(\boldsymbol{x}) = \prod_{i=1}^{n} \frac{\text{Nb}(x_i \mid \theta_1, 1)}{\text{Pn}(x_i \mid \theta_2)} \, . $$

Suppose for illustration that θ_1 = 1/3, θ_2 = 2 (implying equal mean values E[x] = 2 for both models); then, for example, with n = 2, x_1 = x_2 = 0, we have B_12(x) = e⁴/9 ≈ 6.07, indicating an increase in plausibility for M_1; whereas with n = 2, x_1 = x_2 = 2, we have B_12(x) = 4e⁴/729 ≈ 0.30, indicating a slight increase in plausibility for M_2.

Suppose now that θ_1, θ_2 are not known and are assigned the prior distributions

$$ p_1(\theta_1) = \text{Be}(\theta_1 \mid \alpha_1, \beta_1), \qquad p_2(\theta_2) = \text{Ga}(\theta_2 \mid \alpha_2, \beta_2), $$


whose forms are given in Section 3.2.2 (where details of Nb(x_i | θ_1, 1) and Pn(x_i | θ_2) can also be found). It follows straightforwardly that

$$ p(\boldsymbol{x} \mid M_1) = \frac{B(\alpha_1 + n, \; \beta_1 + \sum_i x_i)}{B(\alpha_1, \beta_1)} $$

and that

$$ p(\boldsymbol{x} \mid M_2) = \frac{\Gamma(\alpha_2 + \sum_i x_i)}{\Gamma(\alpha_2)} \cdot \frac{\beta_2^{\alpha_2}}{(\beta_2 + n)^{\alpha_2 + \sum_i x_i}} \cdot \prod_{i=1}^{n} \frac{1}{x_i!} \, , $$

so that

$$ B_{12}(\boldsymbol{x}) = \frac{p(\boldsymbol{x} \mid M_1)}{p(\boldsymbol{x} \mid M_2)} \, . $$

We further note that

$$ E[x \mid M_1] = \frac{\beta_1}{\alpha_1 - 1} \, , \qquad E[x \mid M_2] = \frac{\alpha_2}{\beta_2} \, , $$

so that prior specifications with (α_1 − 1)α_2 = β_1β_2 imply the same means for the two predictive models.

Table 6.1 Dependence of B_12(x) on prior-data combinations

                     α_1 = 2, β_1 = 2    α_1 = 31, β_1 = 60    α_1 = 2, β_1 = 3
                     α_2 = 2, β_2 = 1    α_2 = 60, β_2 = 30    α_2 = 2, β_2 = 2

  x_1 = x_2 = 0            2.70                 5.69                 0.80
  x_1 = x_2 = 2            0.29                 0.30                 0.49

As an illustration of the way in which the prior specification can affect the inferences, we present in Table 6.1 a selection of values of B_12(x) resulting from particular prior-data combinations. In the first two columns, the priors specify the same predictive means for the two models, namely E[x | M_i] = 2, but the priors in the second column are much more informative. In the final column, different predictive means are specified. Column 2 gives Bayes factors close to those obtained above assuming θ_1, θ_2 known, as might be expected from prior distributions concentrating sharply around the values θ_1 = 1/3 and θ_2 = 2. However, comparison of the first and third columns for x_1 = x_2 = 0 makes clear that, with small data sets, seemingly minor changes in the priors for model parameters can lead to changes in direction in the Bayes factor.
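The entries of Table 6.1 are easily reproduced numerically from the integrated likelihood forms given above. The following Python sketch is an illustrative addition (the function names are ours), using SciPy's log-beta and log-gamma functions:

```python
import numpy as np
from scipy.special import betaln, gammaln

def log_p_M1(x, a1, b1):
    """log p(x | M1): geometric sampling, Be(theta1 | a1, b1) prior."""
    n, s = len(x), sum(x)
    return betaln(a1 + n, b1 + s) - betaln(a1, b1)

def log_p_M2(x, a2, b2):
    """log p(x | M2): Poisson sampling, Ga(theta2 | a2, b2) prior."""
    n, s = len(x), sum(x)
    return (gammaln(a2 + s) - gammaln(a2) + a2 * np.log(b2)
            - (a2 + s) * np.log(b2 + n)
            - sum(gammaln(xi + 1) for xi in x))

for x in ([0, 0], [2, 2]):
    for a1, b1, a2, b2 in [(2, 2, 2, 1), (31, 60, 60, 30), (2, 3, 2, 2)]:
        B12 = np.exp(log_p_M1(x, a1, b1) - log_p_M2(x, a2, b2))
        print(x, (a1, b1, a2, b2), round(B12, 2))
# prints 2.70, 5.69, 0.80 for x = (0, 0) and 0.29, 0.30, 0.49 for x = (2, 2)
```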

The point made at the end of the above example is, of course, a general one. In any model comparison, the Bayes factor will depend on the prior distributions specified for the parameters of each model. That such dependence can be rather striking is well illustrated in the following example.
Example 6.6. (Lindley's paradox). Suppose that for x = (x_1, …, x_n) two alternative models M_1, M_2, with P(M_i) > 0, i = 1, 2, correspond to simple and composite hypotheses about μ in N(x | μ, λ), defined by

$$ M_1 : p_1(\boldsymbol{x}) = \prod_{i=1}^{n} N(x_i \mid \mu_0, \lambda), \qquad \mu_0, \lambda \text{ known}; $$

$$ M_2 : p_2(\boldsymbol{x}) = \int \prod_{i=1}^{n} N(x_i \mid \mu, \lambda) \, N(\mu \mid \mu_1, \lambda_1) \, d\mu, \qquad \mu_1, \lambda_1, \lambda \text{ known}. $$

In more conventional terminology, x_1, …, x_n are a random sample from N(x | μ, λ), with precision λ known; the null hypothesis is that μ = μ_0 and the alternative hypothesis is that μ ≠ μ_0, with uncertainty about μ described by N(μ | μ_1, λ_1). Since x̄ = n⁻¹ Σ_i x_i is a sufficient statistic under both models, we easily see that

$$ B_{12}(\boldsymbol{x}) = \frac{N(\bar{x} \mid \mu_0, \, n\lambda)}{N\!\left(\bar{x} \mid \mu_1, \, \left((n\lambda)^{-1} + \lambda_1^{-1}\right)^{-1}\right)} \, . $$

It is easily checked that, for any fixed x̄, B_12(x) → ∞ as λ_1 → 0, so that evidence in favour of M_1 becomes overwhelming as the prior precision in M_2 gets vanishingly small, and hence P(M_1 | x) → 1. In particular, this is true for x̄ such that |x̄ − μ_0| is large enough to cause the null hypothesis to be rejected at any arbitrary, prespecified level using a conventional significance test! This paradox was first discussed in detail by Lindley (1957) and has since occasioned considerable debate: see Smith (1965), Bernardo (1980), Shafer (1982b), Berger and Delampady (1987), Moreno and Cano (1989), Berger and Mortera (1991a) and Robert (1993) for further contributions and references.
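The limiting behaviour in Example 6.6 is easy to exhibit numerically. In the following illustrative Python sketch (the function name and numbers are ours), x̄ is held fixed at a value that a conventional test would reject at the 5% level, while the prior precision λ_1 is driven towards zero:

```python
import numpy as np
from scipy.stats import norm

def bayes_factor_B12(xbar, n, lam, mu0, mu1, lam1):
    """B12(x) = N(xbar | mu0, n*lam) / N(xbar | mu1, ((n lam)^-1 + lam1^-1)^-1),
    with normal distributions parametrised by mean and precision."""
    sd1 = (n * lam) ** -0.5                      # s.d. of xbar under M1
    sd2 = (1.0 / (n * lam) + 1.0 / lam1) ** 0.5  # s.d. of xbar under M2
    return norm.pdf(xbar, mu0, sd1) / norm.pdf(xbar, mu1, sd2)

n, lam, mu0 = 100, 1.0, 0.0
xbar = mu0 + 1.96 / np.sqrt(n * lam)   # "significant at the 5% level"
for lam1 in [1.0, 0.1, 0.01, 0.001]:
    print(lam1, bayes_factor_B12(xbar, n, lam, mu0, mu0, lam1))
# B12 grows without bound as lam1 -> 0, so P(M1 | x) -> 1.
```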


A model comparison procedure which seems to be widely used implicitly in statistical practice, but rarely formalised, is the following. Given the assumption of a particular predictive model, {p(x | θ), p(θ)}, θ ∈ Θ, a posterior density, p(θ | x), is derived and, as we have seen in Section 5.1.5, may be at least partially summarised by identifying, for some 0 < p < 1, a highest posterior density credible region R_p(x), which is typically the smallest region such that

$$ \int_{R_p(\boldsymbol{x})} p(\theta \mid \boldsymbol{x}) \, d\theta = p. $$

Intuitively, for large p, R_p(x) contains those values of θ which are most plausible given the model and the data. Conversely, the complement of R_p(x) consists of those values of θ which are rather implausible.

Now suppose that, given a specified p and derived R_p(x), one is going to assert that the true value of θ (i.e., the value onto which p(θ | y) would concentrate as the size of a future sample tended to ∞) lies in R_p(x). Defining the decision problem to be the choice of p, so that the possible answers to the inference problem are in A = [0, 1], with the state of the world ω defined to be the true θ, a value a_p = p has to be chosen. An appropriate utility function may be

$$ u(a_p, \theta) = \begin{cases} f(p) & \text{if } \theta \in R_p(\boldsymbol{x}) \\ g(1 - p) & \text{if } \theta \notin R_p(\boldsymbol{x}), \end{cases} $$

where f and g are decreasing functions defined on [0, 1]. Essentially, such a utility function extends the idea of a zero-one function by reflecting the desire for a correct decision, but modified to allow for the fact that choosing p close to one leads to a rather vacuous assertion, whereas a correct assertion with p small is rather impressive. The expected utility of choosing a_p = p is easily seen to be

$$ \bar{u}(a_p) = p f(p) + (1 - p) g(1 - p), $$

from which the optimal p may be derived for any specific choices of f and g. We note that if f = g, the unique maximum is at p = 0.50, so that it becomes optimal to quote a 50% highest posterior density credible region. If, for example, f(p) = 1 − p, g(1 − p) = [1 − (1 − p)]² = p², the resulting optimal value of p is 1/√3 ≈ 0.58, so that a 58% credible region is appropriate. More exotically, if f(p) = 1 − (2.7)p, g(1 − p) = (1 − p)⁻¹, the reader might like to verify that a 95% credible region is optimal.
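The optimisation over p is elementary and the first two quoted solutions can be checked numerically; the following illustrative Python sketch (ours, not part of the original text) recovers p = 0.50 and p = 1/√3 ≈ 0.58:

```python
import numpy as np

def expected_utility(p, f, g):
    """u_bar(a_p) = p f(p) + (1 - p) g(1 - p)."""
    return p * f(p) + (1 - p) * g(1 - p)

grid = np.linspace(0.0, 1.0, 100001)

# f = g (here both equal to t -> 1 - t): optimum at p = 0.50
u = expected_utility(grid, lambda t: 1 - t, lambda t: 1 - t)
print(grid[np.argmax(u)])     # 0.5

# f(p) = 1 - p and g(1 - p) = p^2, i.e. g(t) = (1 - t)^2: optimum 1/sqrt(3)
u = expected_utility(grid, lambda t: 1 - t, lambda t: (1 - t) ** 2)
print(grid[np.argmax(u)])     # ~0.577
```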

6.1.5 General Utilities

Continuing for the present with the (M-closed) hypothesis testing framework, the consequences of incorrectly choosing a model may be less serious if the alternative models are "close" in some sense, in which case utilities of the zero-one type, which take no account of such closeness, may be inappropriate.


One-sided Tests

We shall illustrate this idea, and forms of possibly more reasonable utility functions, by considering the special case of θ ∈ Θ ⊆ ℜ, with parametric form p(x | θ) and models M_1, M_2 defined by

$$ p_1(\boldsymbol{x} \mid \theta) = p(\boldsymbol{x} \mid \theta), \; \theta \in \Theta_1 = \{\theta : \theta \leq \theta_0\}; \qquad p_2(\boldsymbol{x} \mid \theta) = p(\boldsymbol{x} \mid \theta), \; \theta \in \Theta_2 = \{\theta : \theta > \theta_0\}, $$

for some θ_0 ∈ Θ. The models thus correspond to the hypotheses that the parameter is smaller or larger than some specified value θ_0. It seems reasonable in such a situation to suppose that if one were to incorrectly choose M_2 (θ > θ_0) rather than M_1 (θ ≤ θ_0), in many cases this would be much less serious if the true value of θ were actually θ_0 − ε than if it were θ_0 − 100ε, say, for ε > 0. Such arguments suggest that, with the state of the world ω now representing the true parameter value θ, we might specify a utility function of the form

$$ u(m_i, \theta) = \begin{cases} 0 & \text{if } \theta \in \Theta_i \\ -l_i(\theta) & \text{if } \theta \notin \Theta_i, \end{cases} $$

for i = 1, 2, where l_1, l_2 are increasing positive functions of (θ − θ_0) and (θ_0 − θ), respectively. The expected utilities of the decisions m_1, m_2 (i.e., the choices of M_1, M_2) are therefore given by

$$ \bar{u}(m_1 \mid \boldsymbol{x}) = -\int_{\theta_0}^{\infty} l_1(\theta) \, p(\theta \mid \boldsymbol{x}) \, d\theta, \qquad \bar{u}(m_2 \mid \boldsymbol{x}) = -\int_{-\infty}^{\theta_0} l_2(\theta) \, p(\theta \mid \boldsymbol{x}) \, d\theta. $$

The optimal answer to the inference problem is to prefer M_1 to M_2 if and only if

$$ \bar{u}(m_1 \mid \boldsymbol{x}) > \bar{u}(m_2 \mid \boldsymbol{x}), $$

with explicit solutions depending, of course, on the choices of l_1, l_2, and the form of p(θ | x), as illustrated in the following example.
Example 6.7. (Normal posterior; linear losses). If l_1(θ) = θ − θ_0, l_2(θ) = k(θ_0 − θ), with k reflecting the relative seriousness of "overestimating" by choosing model M_2, and p(θ | x), given x = (x_1, …, x_n), is N(θ | μ_n, λ_n), say, then we have

$$ \bar{u}(m_1 \mid \boldsymbol{x}) = -\int_{\theta_0}^{\infty} (\theta - \theta_0) \, N(\theta \mid \mu_n, \lambda_n) \, d\theta = -\lambda_n^{-1/2} \, \psi^*(t_n), $$

where t_n = λ_n^{1/2}(μ_n − θ_0), and

$$ \bar{u}(m_2 \mid \boldsymbol{x}) = -k \int_{-\infty}^{\theta_0} (\theta_0 - \theta) \, N(\theta \mid \mu_n, \lambda_n) \, d\theta = -k \, \lambda_n^{-1/2} \, \psi^*(-t_n), $$

where

$$ \psi^*(t) = N(t \mid 0, 1) + t \int_{-\infty}^{t} N(s \mid 0, 1) \, ds. $$

It is therefore optimal to prefer M_1 to M_2 if and only if

$$ \psi^*(t_n) < k \, \psi^*(-t_n). $$

In the symmetric case, k = 1, it is easily seen that this reduces to preferring M_1 if and only if μ_n < θ_0, as one might intuitively have expected. For references and further discussion of related topics, see DeGroot (1970, Chapter 11) and Winkler (1972, Chapter 6).
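The decision rule of Example 6.7 is straightforward to implement; the following Python sketch (illustrative only, with our own function names) evaluates ψ* and the resulting preference:

```python
import numpy as np
from scipy.stats import norm

def psi_star(t):
    """psi*(t) = N(t | 0, 1) + t * Phi(t)."""
    return norm.pdf(t) + t * norm.cdf(t)

def prefer_M1(mu_n, lam_n, theta0, k=1.0):
    """Prefer M1 (theta <= theta0) iff psi*(t_n) < k * psi*(-t_n),
    with t_n = sqrt(lam_n) * (mu_n - theta0)."""
    t_n = np.sqrt(lam_n) * (mu_n - theta0)
    return psi_star(t_n) < k * psi_star(-t_n)

# In the symmetric case k = 1 this reduces to mu_n < theta0:
print(prefer_M1(mu_n=-0.2, lam_n=4.0, theta0=0.0))   # True
print(prefer_M1(mu_n=0.2, lam_n=4.0, theta0=0.0))    # False
```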

Prediction

Moving away now from model comparisons which reduce to hypothesis tests in parametric models, let us consider the problem of model comparison or choice, given data x, in order to make a point prediction for a future observation y. The general decision structure is that given schematically in Figure 6.2, where, assuming real-valued observables, m_i corresponds to acting in accordance with model M_i, a_j, j ∈ J_i, denotes the choice, based on M_i, of a prediction, ỹ_i, for a future observation y, and we shall assume a "quadratic loss" utility,

$$ u(m_i, \tilde{y}_i, y) = -(\tilde{y}_i - y)^2, \qquad i \in I. $$

We recall from the analysis given in Section 6.1.3 that the optimal model choice is m*, given by

$$ \bar{u}(m^* \mid \boldsymbol{x}) = \sup_{i \in I} \bar{u}(m_i \mid \boldsymbol{x}), $$

where ỹ_i* is the optimal prediction of a future observation y, given data x and assuming model M_i; that is, the value ỹ which minimises

$$ \int (\tilde{y} - y)^2 \, p_i(y \mid \boldsymbol{x}) \, dy, $$

where p_i(y | x) is the predictive density for y given model M_i. It then follows immediately that

$$ \tilde{y}_i^* = \int y \, p_i(y \mid \boldsymbol{x}) \, dy, $$

the predictive mean, given model M_i, so that

$$ \bar{u}(m_i \mid \boldsymbol{x}) = -\int (\tilde{y}_i^* - y)^2 \, p(y \mid \boldsymbol{x}) \, dy. $$

Completion of the analysis now depends on the specification of the overall actual belief distribution p(y | x) and the computation of the expectation of (ỹ_i* − y)², i ∈ I, with respect to p(y | x). Again, in the M-completed case there is nothing further to be said explicitly: one simply carries out the necessary evaluations, using the appropriate form of p(y | x), by numerical integration if necessary. In the M-open case, the detailed analysis of the problem of point prediction with quadratic loss will be given in Section 6.1.6. In the M-closed case, we have

$$ p(y \mid \boldsymbol{x}) = \sum_{j \in I} P(M_j \mid \boldsymbol{x}) \, p_j(y \mid \boldsymbol{x}), $$

and, after some rearrangement, it is easily seen that

$$ \bar{u}(m_i \mid \boldsymbol{x}) = -\sum_{j \in I} P(M_j \mid \boldsymbol{x}) \int (\tilde{y}_i^* - y)^2 \, p_j(y \mid \boldsymbol{x}) \, dy, $$

which reduces to

$$ \bar{u}(m_i \mid \boldsymbol{x}) = -\left\{ \int y^2 \, p(y \mid \boldsymbol{x}) \, dy - 2 \tilde{y}_i^* \sum_{j \in I} P(M_j \mid \boldsymbol{x}) \, \tilde{y}_j^* + (\tilde{y}_i^*)^2 \right\}. $$

The first term does not depend on i, and the remaining terms can be rearranged in the form

$$ (\tilde{y}_i^* - \tilde{y}^*)^2 - (\tilde{y}^*)^2, $$

of which only the first part depends on i, where ỹ* is the weighted prediction

$$ \tilde{y}^* = \sum_{i \in I} P(M_i \mid \boldsymbol{x}) \, \tilde{y}_i^*. $$


The preferred model M_i is therefore seen to be that for which the resulting prediction, ỹ_i*, is closest to ỹ*, the posterior weighted average, over models, of the individual model predictions. If k = 2, it is easily checked that the preferred model is simply that with the highest posterior probability. If we wish to make a prediction, but without first choosing a specific model, it is easily seen that the analysis of the problem in terms of the schematic decision problem given in Figure 6.3 of Section 6.1.3 leads directly to ỹ* as the optimal prediction. Clearly, the above analyses go through in an obvious way, with very few modifications, if, instead of prediction, we were to consider point estimation, with quadratic loss, of a parameter common to all the models. More generally, the analysis can be carried out for loss functions other than the quadratic.
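In computational terms, this M-closed point prediction analysis amounts to a few lines of code. The following illustrative Python sketch (the function name and inputs are ours) forms the weighted prediction ỹ* and picks the model whose own prediction lies closest to it:

```python
import numpy as np

def choose_model_for_prediction(post_probs, model_predictions):
    """Given P(M_i | x) and the per-model predictive means E[y | M_i, x],
    return the overall optimal prediction y* (the posterior-weighted
    average) and the index of the model whose prediction is closest to it."""
    p = np.asarray(post_probs)
    yhat = np.asarray(model_predictions)
    y_star = float(p @ yhat)
    return y_star, int(np.argmin((yhat - y_star) ** 2))

y_star, best = choose_model_for_prediction([0.28, 0.72], [1.3, 2.1])
print(y_star, "prefer M_%d" % (best + 1))
```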

Reporting Inferences

Generalising beyond the specific problems of point prediction and estimation, let us consider the problem of model comparison or choice in order to report inferences about some unknown state of the world ω. For example, the latter might be a common model parameter, a function of future observables, an indicator function of the future realisation of a specified event, or whatever. A major theme of our development in Chapters 2 and 3 has been that the problem of reporting beliefs about ω is itself a decision problem, where the possible answers to the inference problem consist of the class of probability distributions for ω which are compatible with the given data. The appropriate utility functions in such problems were seen to be the score functions discussed in Sections 2.7 and 3.4.

This general decision problem is thus a special case of that represented by Figure 6.2, where, given data x, m_i represents the choice of model M_i, the subsequent answer a_j, j ∈ J_i, to the inference problem is some report of beliefs about ω, assuming M_i, and the utility function is defined by

$$ u(m_i, a_j, \omega) = u_i(q_j(\cdot \mid \boldsymbol{x}), \omega), $$

for some score function u_i, and form of belief report, q_j(· | x), about ω, corresponding to a_j, j ∈ J_i. If p_i(· | x) is the form of belief report for ω actually implied by m_i, and if u_i is a proper scoring rule (see, for example, Definition 3.16), then it follows that the optimal a_j, j ∈ J_i, must be a_i* = p_i(· | x) and that

$$ u(m_i, a_i^*, \omega) = u_i(p_i(\cdot \mid \boldsymbol{x}), \omega), \qquad i \in I. $$

If, moreover, the score function is local, we have the logarithmic form

$$ u(m_i, a_i^*, \omega) = A \log p_i(\omega \mid \boldsymbol{x}) + B(\omega), \qquad i \in I, \quad A > 0, $$


for A > 0 and B(ω) arbitrary, in accordance with Proposition 3.13. The expected utility of m_i is therefore given by

$$ \bar{u}(m_i \mid \boldsymbol{x}) = \int \left\{ A \log p_i(\omega \mid \boldsymbol{x}) + B(\omega) \right\} p(\omega \mid \boldsymbol{x}) \, d\omega, $$

and the preferred model is the M_i for which this is maximised over i ∈ I. Comments about the detailed implementation of the analysis in the M-open case are similar to those made in the previous problem. For M-closed models, we have the more explicit form

$$ \bar{u}(m_i \mid \boldsymbol{x}) = A \sum_{j \in I} P(M_j \mid \boldsymbol{x}) \int p_j(\omega \mid \boldsymbol{x}) \log p_i(\omega \mid \boldsymbol{x}) \, d\omega + \int B(\omega) \, p(\omega \mid \boldsymbol{x}) \, d\omega, $$

which, after straightforward rearrangement, shows that the preferred M_i is given by minimising, over i ∈ I,

$$ \sum_{j \in I} P(M_j \mid \boldsymbol{x}) \int p_j(\omega \mid \boldsymbol{x}) \log \frac{p_j(\omega \mid \boldsymbol{x})}{p_i(\omega \mid \boldsymbol{x})} \, d\omega, $$
the posterior weighted average, over models, of the logarithmic divergence (or discrepancy) between p_i(ω | x) and each of the p_j(ω | x), j ≠ i ∈ I. If, instead, we were to adopt the (proper) quadratic scoring rule (see, for example, Definition 3.17), we obtain, ignoring irrelevant constants,

$$ \bar{u}(m_i \mid \boldsymbol{x}) \propto \int \left\{ 2 p_i(\omega \mid \boldsymbol{x}) - \int p_i^2(\omega' \mid \boldsymbol{x}) \, d\omega' \right\} p(\omega \mid \boldsymbol{x}) \, d\omega, $$

so that, after some algebraic rearrangement, in the case of M-closed models the preferred M_i is seen to be that which minimises, over i ∈ I,

$$ \sum_{j \in I} P(M_j \mid \boldsymbol{x}) \int \left\{ p_j(\omega \mid \boldsymbol{x}) - p_i(\omega \mid \boldsymbol{x}) \right\}^2 d\omega. $$

Comparison of the solutions for the logarithmic and quadratic cases reveals that if, for arbitrary f,

$$ d\{p, q\} = \int f\!\left( p(\omega \mid \boldsymbol{x}), \, q(\omega \mid \boldsymbol{x}) \right) d\omega $$

defines a discrepancy measure between p and q, both may be characterised as identifying the M_i for which

$$ \sum_{j \in I} P(M_j \mid \boldsymbol{x}) \, d\{p_j(\omega \mid \boldsymbol{x}), \, p_i(\omega \mid \boldsymbol{x})\} $$

is minimised over i ∈ I, the differences in the two cases corresponding to the form of f (logarithmic or quadratic, respectively).


Example 6.8. (Point prediction versus predictive beliefs). To illustrate the different potential implications of model comparison on the basis of quadratic loss for point prediction versus model comparison on the basis of logarithmic score for predictive belief distributions, consider the following simple (M-closed) example. Suppose that alternative models M_1, M_2 for x = (x_1, …, x_n) are defined by:

$$ M_i : p_i(\boldsymbol{x}) = \int \prod_{j=1}^{n} N(x_j \mid \mu, \lambda_i) \, N(\mu \mid \mu_0, \lambda_0) \, d\mu, \qquad i = 1, 2, $$

with λ_1, λ_2, μ_0, λ_0 known; we are thus assuming normal data models with precisions λ_1, λ_2, respectively, and uncertainty about μ described by N(μ | μ_0, λ_0) in both cases. Now, given x, consider two decision problems: the first problem consists in selecting a model and then providing a point prediction, with respect to quadratic loss, for the next observable, x_{n+1}; the second problem consists in selecting a model and then providing a predictive distribution for x_{n+1}, with respect to a logarithmic score function.

For the first problem, straightforward manipulation shows that the predictive distribution for x_{n+1} assuming model M_i is given by

$$ p(x_{n+1} \mid M_i, \boldsymbol{x}) = N\!\left( x_{n+1} \mid \mu_n^{(i)}, \lambda_n^{(i)} \right), $$

where

$$ \mu_n^{(i)} = \frac{n \lambda_i \bar{x} + \lambda_0 \mu_0}{n \lambda_i + \lambda_0} \, , \qquad \lambda_n^{(i)} = \left( \lambda_i^{-1} + (n \lambda_i + \lambda_0)^{-1} \right)^{-1}, $$

so that, corresponding to the analysis given earlier in this section, model M_i leads to the prediction μ_n^{(i)}, i = 1, 2, and, since only two models are involved, the preferred model is M_1 if and only if P(M_1 | x) > P(M_2 | x).

To identify these posterior probabilities, we note that, if s² = n⁻¹ Σ_i (x_i − x̄)²,

$$ p(\boldsymbol{x} \mid M_i) = \left( \frac{\lambda_i}{2\pi} \right)^{n/2} \left( \frac{\lambda_0}{\lambda_0 + n\lambda_i} \right)^{1/2} \exp\left\{ -\tfrac{1}{2} n \lambda_i s^2 - \tfrac{1}{2} \frac{n \lambda_i \lambda_0}{n\lambda_i + \lambda_0} (\bar{x} - \mu_0)^2 \right\}, $$

which, for small λ_0, is well approximated by

$$ p(\boldsymbol{x} \mid M_i) \approx (2\pi)^{-n/2} \left( \frac{\lambda_0}{n} \right)^{1/2} \lambda_i^{(n-1)/2} \exp\left( -\tfrac{1}{2} n \lambda_i s^2 \right), $$

so that, in particular, B_12(x) ≈ (λ_1/λ_2)^{(n−1)/2} exp{−½ n s²(λ_1 − λ_2)}. The posterior model probabilities are then given by

$$ P(M_1 \mid \boldsymbol{x}) = \frac{B_{12}(\boldsymbol{x}) \, p_{12}}{1 + B_{12}(\boldsymbol{x}) \, p_{12}} \, , \qquad P(M_2 \mid \boldsymbol{x}) = 1 - P(M_1 \mid \boldsymbol{x}), $$

where p_12 = P(M_1)/P(M_2). Model M_1 is therefore preferred if and only if

$$ \log\left[ B_{12}(\boldsymbol{x}) \, p_{12} \right] > 0. $$

In the case of equal prior weights, p_12 = 1 and, assuming small λ_0, if we write the condition in terms of the model variances σ_j² = λ_j⁻¹, j = 1, 2, we prefer M_1 when

$$ \frac{n}{n-1} \, s^2 < \frac{\log(\sigma_2^2 / \sigma_1^2)}{\sigma_1^{-2} - \sigma_2^{-2}} \, . $$

Noting that the left-hand side is an intuitively reasonable data-based estimate of the model variance, we see that model choice reduces to a simple cut-off rule in terms of this estimate.

For the second decision problem, the logarithmic divergence of p(x_{n+1} | M_1, x) from p(x_{n+1} | M_2, x) is given, for small λ_0, by

$$ \delta_{12} = \tfrac{1}{2} \left\{ \log \frac{\lambda_1}{\lambda_2} + \frac{\lambda_2}{\lambda_1} - 1 \right\}, $$

with a corresponding expression for δ_21. The general analysis given above thus implies that model M_1 is preferred if and only if

$$ P(M_1 \mid \boldsymbol{x}) \, \delta_{21} > P(M_2 \mid \boldsymbol{x}) \, \delta_{12}, $$

i.e., if and only if

$$ \frac{P(M_1 \mid \boldsymbol{x})}{P(M_2 \mid \boldsymbol{x})} > \frac{\delta_{12}}{\delta_{21}} $$

(rather than > 1, as in the point prediction case). Note, incidentally, that should it happen that P(M_1 | x) = P(M_2 | x), model M_1 would be preferred if and only if δ_12 < δ_21, which happens if and only if λ_1 > λ_2. Intuitively, all other things being equal, we prefer in this case the model with the smallest predictive variance.

To obtain some limited insight into the numerical implications of these results, consider the case where σ_1² = 1, σ_2² = 25, n = 4, P(M_1) = P(M_2) = ½ and s² = 3, which gives B_12(x) = 0.393, so that P(M_1 | x) = 0.28, P(M_2 | x) = 0.72. Using the point prediction with quadratic loss criterion, we therefore prefer M_2. However, δ_12 = 1.129 and δ_21 = 10.31, so that if we want to choose a predictive distribution in accordance with the logarithmic score criterion we prefer M_1, since (0.28)/(0.72) > (1.129)/(10.31). However, if s² = 4, the reader might like to verify that M_2 is preferred under both criteria (B_12(x) = 0.058, implying that P(M_1 | x) = 0.055).
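The numerical claims of Example 6.8 can be verified directly; the following illustrative Python sketch (ours) uses the small-λ_0 approximations given above (note that the divergence formula yields approximately 10.39 for δ_21, close to, but slightly different from, the 10.31 quoted in the text):

```python
import numpy as np

n, s2 = 4, 3.0
lam1, lam2 = 1.0, 1.0 / 25.0         # sigma_1^2 = 1, sigma_2^2 = 25

# small-lambda_0 Bayes factor approximation
B12 = (lam1 / lam2) ** ((n - 1) / 2) * np.exp(-0.5 * n * s2 * (lam1 - lam2))
p1 = B12 / (1 + B12)                 # equal prior weights
print(round(B12, 3), round(p1, 2))   # 0.393, 0.28 -> prefer M2 for point prediction

# logarithmic divergences between the two predictive distributions
d12 = 0.5 * (np.log(lam1 / lam2) + lam2 / lam1 - 1)   # ~1.129
d21 = 0.5 * (np.log(lam2 / lam1) + lam1 / lam2 - 1)   # ~10.39
# prefer M1 for reporting predictive beliefs iff P(M1|x)/P(M2|x) > d12/d21
print(p1 / (1 - p1) > d12 / d21)     # True -> prefer M1
```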


6.1.6 Approximation by Cross-validation

For the general problem of model choice followed by a subsequent answer to an inference problem, the analysis based on Figure 6.2 implies that the optimal choice of model from M is the M_i for which

$$ \bar{u}(m_i \mid \boldsymbol{x}) = \int u(m_i, a_i^*, \omega) \, p(\omega \mid \boldsymbol{x}) \, d\omega $$

is maximised over i ∈ I, where a_i* denotes the optimal subsequent decision given M_i. In the M-closed case, we have seen that the mixture form of p(ω | x) enables an explicit form of general solution to be exhibited; in the M-completed case, we have noted that the solution is in principle available, given appropriate computation. We turn now to the case of model comparison within the M-open framework. What can be done to compare the values of M_i, i ∈ I, as proxies for an actual belief model which itself has not been specified, so that p(ω | x) is not available?

We shall illustrate a possible approach to this problem by detailed consideration of the special case where ω = y, a future observation, for which a point prediction with respect to quadratic loss, or a predictive distribution with respect to logarithmic or quadratic score, is required.

First, we note that, in all these cases, the expected utility of the choice M_i, i ∈ I, has the mathematical form

$$ \bar{u}(m_i \mid \boldsymbol{x}) = \int f_i(y, \boldsymbol{x}) \, p(y \mid \boldsymbol{x}) \, dy, $$

for some function f_i of y and x, depending on i, whose form can be explicitly identified. For example, for point prediction with quadratic loss, we have

$$ f_i(y, \boldsymbol{x}) = -\left( E[y \mid M_i, \boldsymbol{x}] - y \right)^2; $$

for a predictive distribution with logarithmic score function we have, ignoring irrelevant terms, f_i(y, x) = log p(y | M_i, x); and with a quadratic score function we have

$$ f_i(y, \boldsymbol{x}) = 2 p(y \mid M_i, \boldsymbol{x}) - \int p^2(y' \mid M_i, \boldsymbol{x}) \, dy'. $$

Secondly, we note that there are n possible partitions of x = x_n = (x_1, …, x_n) into [x_{n−1}(j), x_j], j = 1, …, n, where x_{n−1}(j) = x_n − {x_j} denotes x_n with x_j deleted, and that, if n is reasonably large, and the x's are exchangeable, each such partition effectively provides x_{n−1}(j) as a proxy for x and x_j as a proxy for y. If we now randomly select k from these n partitions, a standard law of large numbers argument suggests that, as n, k → ∞,

$$ \frac{1}{k} \sum_{j=1}^{k} f_i(x_j, \boldsymbol{x}_{n-1}(j)) \to \int f_i(y, \boldsymbol{x}) \, p(y \mid \boldsymbol{x}) \, dy, $$

so that the expected utilities of M_i, i ∈ I, can be compared on the basis of the quantities

$$ \bar{u}^*(m_i \mid \boldsymbol{x}) = \frac{1}{k} \sum_{j=1}^{k} f_i(x_j, \boldsymbol{x}_{n-1}(j)). $$

In the case of point prediction, if y is a future observation and ŷ_i*(j) denotes the value of E[y | M_i, x] when x is replaced by x_{n−1}(j), this approximation implies that we minimise, over i ∈ I,

$$ \frac{1}{k} \sum_{j=1}^{k} \left( \hat{y}_i^*(j) - x_j \right)^2, $$

which is an average measure, using squared distance, of how well M_i performs when, on a leave-one-out-at-a-time basis, it attempts to predict a missing part of the data from an available subset of the data. In the case of a predictive distribution with a logarithmic score, we maximise, over i ∈ I,

$$ \frac{1}{k} \sum_{j=1}^{k} \log p(x_j \mid M_i, \boldsymbol{x}_{n-1}(j)), $$

which can be regarded as an average measure based on the logarithm of the integrated likelihood under model M_i, and can be conveniently rewritten, for computational purposes, in the form

$$ \frac{1}{k} \sum_{j=1}^{k} \left\{ \log p(\boldsymbol{x} \mid M_i) - \log p(\boldsymbol{x}_{n-1}(j) \mid M_i) \right\}. $$

In the case of comparing two models, M_1, M_2, this criterion can be given an interesting reformulation. Under the logarithmic predictive distribution utility, and writing p_i(y | x) = p(y | M_i, x), we can rearrange the criterion to see that we prefer model M_1 if

$$ \int p(y \mid \boldsymbol{x}) \log \frac{p_1(y \mid \boldsymbol{x})}{p_2(y \mid \boldsymbol{x})} \, dy > 0, $$

where, however, in this M-open perspective, p(y | x) is not specified. But, as we saw above, we can form n partitions [x_{n−1}(j), x_j] such that x_{n−1}(j) = x − x_j is, for large n, a proxy for x and x_j is a proxy for y. It follows that, if we randomly select k of these partitions, the quantity

$$ \frac{1}{k} \sum_{j=1}^{k} \log \frac{p_1(x_j \mid \boldsymbol{x}_{n-1}(j))}{p_2(x_j \mid \boldsymbol{x}_{n-1}(j))} $$


provides a (consistent, as n → ∞) Monte Carlo estimate of the left-hand side of the model criterion above. But this, in turn, can be rewritten so that the criterion implies preferring model M_1 if

$$ \left\{ \prod_{j=1}^{k} B_{12}\!\left( x_j, \boldsymbol{x}_{n-1}(j) \right) \right\}^{1/k} > 1, $$

where B_12(x_j, x_{n−1}(j)), for j = 1, …, k, denotes the Bayes factor for M_1 against M_2, based on the versions

$$ \left\{ p_i(x_j \mid \theta_i), \; p_i(\theta_i \mid \boldsymbol{x}_{n-1}(j)) \right\}, \qquad i = 1, 2, $$

of M_1, M_2. We recall from Section 6.1.4 the role of Bayes factors based on the versions {p_i(x | θ_i), p_i(θ_i)} of M_1, M_2, in the context of zero-one loss functions and the M-closed perspective. Although there are clear differences here in formulation (M-open versus M-closed, log-predictive utility versus 0-1 loss), it is interesting to note the role played again by the Bayes factor. One interesting difference is the following (Pericchi, 1993). In Section 6.1.4, the Bayes factor is evaluating M_1, M_2 on the basis of the models' ability to predict x given no data (beyond what has been used to specify p_i(θ_i)). In contrast, in the above we are taking a geometric average of Bayes factors which are evaluating M_1, M_2 on the basis of the models' ability to predict one further observable, given n − 1 observations. The former situation puts the emphasis on fidelity to the observed data; the latter puts the emphasis on future predictive power.

These kinds of approximate performance measurements for comparing models could obviously be generalised by considering random partitions of x involving "leave-several-out-at-a-time" techniques. We shall not develop such ideas further here (apart from giving one further interesting illustration in Section 6.3.3), but merely note that the above approximation to the optimal Bayesian procedure leads naturally to a cross-validation process, which results in a preference for models under which the data achieve the highest levels of internal consistency. Thus, for example, in both the quadratic loss and logarithmic score cases, if under model M_i there are x_j which are "surprising" in the light of x_{n−1}(j), thus leading to large squared distance terms or small log-integrated-likelihood values, respectively, the performance measure will penalise M_i.

Model choice and estimation procedures involving cross-validation (sometimes called predictive sample reuse) have been proposed by several authors, from a mainly non-Bayesian perspective, as a pragmatic device for generating statistical methods without seeming to invoke a "true" sampling model: see, for example, Stone (1974) and Geisser (1975) for early accounts and Shao (1993) for a recent perspective. The above development clearly establishes that such cross-validatory techniques do indeed have an interesting role in a Bayesian decision-theoretic setting for approximating expected utilities in decision problems where a set of alternative models are to be compared without wishing to act as if one of the models were "true", and in the absence of a specified actual belief model.
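In practice, the cross-validatory quantities above are computed by looping over the leave-one-out partitions. The following Python sketch is a generic illustration (the callable log_pred is a hypothetical user-supplied function returning log p(x_j | M, x_{n−1}(j)) for the model under study):

```python
import numpy as np

def loo_log_score(x, log_pred):
    """Cross-validation approximation to the expected logarithmic utility:
    the average of log p(x_j | M, x_{n-1}(j)) over leave-one-out splits."""
    x = np.asarray(x)
    scores = [log_pred(x[j], np.delete(x, j)) for j in range(len(x))]
    return float(np.mean(scores))

# The model with the larger average leave-one-out log score is preferred;
# with squared prediction errors in place of log scores, one would instead
# minimise the corresponding average.
```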
Example 6.9. (Lindley's paradox revisited). In Example 6.6, we considered the case of two alternative models, M_1, M_2, for x = (x_1, …, x_n), corresponding to simple and composite hypotheses about μ in N(x | μ, λ) and defined by:

$$ M_1 : p_1(\boldsymbol{x}) = \prod_{i=1}^{n} N(x_i \mid \mu_0, \lambda), \qquad \mu_0, \lambda \text{ known}; $$

$$ M_2 : p_2(\boldsymbol{x}) = \int \prod_{i=1}^{n} N(x_i \mid \mu, \lambda) \, N(\mu \mid \mu_1, \lambda_1) \, d\mu, \qquad \mu_1, \lambda_1, \lambda \text{ known}. $$

The analysis given in Example 6.6 was within the M-closed context with P(M_i) > 0, i = 1, 2, and it was shown that, as λ_1 → 0, P(M_1 | x) → 1 for any fixed x̄. It follows from results given in Sections 6.1.3 and 6.1.4 that, as λ_1 → 0, M_1 would be the preferred model under either zero-one utility, or quadratic loss utility for point prediction (since in this latter case, the criterion reduces to the comparison of posterior probabilities when just two models are being compared). We shall now reconsider the case of quadratic loss for point prediction in the M-open context.

First, we note that, given x, the optimal prediction of a future observation, y, under M_1 is just ŷ_1* = μ_0, whereas (making appropriate notational changes to the results given in Example 5.10) under M_2 the optimal prediction is

$$ \hat{y}_2^* = w_n \bar{x} + (1 - w_n) \mu_1, $$

where w_n = nλ(λ_1 + nλ)⁻¹. Secondly, from the cross-validation approximation analysis given above, we see that M_1 is preferred to M_2 if and only if, based on k random partitions of x into x_j and x_{n−1}(j),

$$ \sum_{j=1}^{k} (\mu_0 - x_j)^2 < \sum_{j=1}^{k} \left\{ (1 - w_{n-1}) \mu_1 + w_{n-1} \bar{x}_{n-1}(j) - x_j \right\}^2, $$

where x̄_{n−1}(j) = x̄ + (n − 1)⁻¹(x̄ − x_j) is the mean of the sample x with x_j omitted, and w_{n−1} is defined as w_n with n replaced by n − 1. Intuitively, M_2 will be preferred if the posterior mean on average does better as a predictor than μ_0. In particular, if λ_1 → 0, and k = n, an approximate analysis shows that M_1 is preferred to M_2 if and only if

$$ n (\bar{x} - \mu_0)^2 < \frac{2n - 1}{(n - 1)^2} \sum_{j=1}^{n} (x_j - \bar{x})^2. $$

This is easily seen (Pericchi, 1993) to be equivalent to preferring M_1 if, and only if,

$$ \frac{n (n - 1) (\bar{x} - \mu_0)^2}{\sum_{j=1}^{n} (x_j - \bar{x})^2} < \frac{2n - 1}{n - 1} \approx 2, $$

which, under M_1, is equivalent to rejecting M_1 if a Snedecor F_{1,n−1} random quantity exceeds the value 2. See Leonard and Ord (1976) for a related argument.

This result provides a marked contrast to that obtained in Example 6.6 and makes clear that, even given the same data and utility criterion, preferences among models in M may differ radically, depending on whether one is approaching their comparison from an M-closed or M-open perspective.
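A direct implementation of the Example 6.9 comparison, in the limiting case λ_1 → 0 with k = n, is given by the following illustrative Python sketch (the function name and simulated data are ours):

```python
import numpy as np

def prefer_M1_loo(x, mu0):
    """Example 6.9 criterion with lambda_1 -> 0 and k = n: prefer M1 iff
    sum_j (mu0 - x_j)^2 < sum_j (xbar_{n-1}(j) - x_j)^2."""
    x = np.asarray(x, dtype=float)
    n, xbar = len(x), x.mean()
    loo_means = (n * xbar - x) / (n - 1)   # mean of sample with x_j omitted
    return np.sum((mu0 - x) ** 2) < np.sum((loo_means - x) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(0.25, 1.0, size=50)
print(prefer_M1_loo(x, mu0=0.0))
# Equivalently: reject M1 when n(n-1)(xbar - mu0)^2 / sum((x - xbar)^2)
# exceeds (2n - 1)/(n - 1), approximately 2.
```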

6.1.7 Covariate Selection

We have already had occasion to remark several times that our emphasis in this volume is on concepts and theory, and that complex case-studies and associated computation will be more appropriately discussed in subsequent volumes. That said, it might be illuminating at this juncture to indicate briefly how the theory we have developed above can be applied in contexts which are much more complicated than those of the simple, stylised examples on which most of our discussion has been based. To this end, we shall consider the important problem of model comparison which arises when we try to identify appropriate covariates for use in practical prediction and classification problems.

To fix ideas, consider the following problem. Some kind of decision is to be made regarding an unknown state of the world ω relating to an individual unit of a population: for example, classifying the, as yet unknown, disease state of a specific patient, or predicting the, as yet unknown, quality level of the output from a particular run of an industrial production process. Possible predictive models are to be based, for various choices of m, on covariates y_1(z), …, y_m(z), which are themselves selected functions of z = (z_1, …, z_s), representing all possible observed relevant attributes (discrete or continuous) for the individual population unit: for example, the patient's complete recorded clinical history, or a record of all the input and control parameters of the production run. To aid the modelling process, a data bank (of "training data") is available consisting of

$$ D = \left\{ (\omega_j, z_1^j, \ldots, z_s^j), \; j = 1, \ldots, n \right\}, $$

recording all the attributes and (eventually known) states of the world for n previously observed units of the same population: for example, n previous patients presenting at the same clinic, or n previous runs of the same production process. We shall suppose that the ultimate objective is to provide, for the state of the world ω of the new individual unit, a predictive distribution p(ω | y(z), D),
recording all the attributes and (eventually known) states of the world for n previously observed units of the same population: for example, n previous patients presenting at the same clinic, or 1%previous runs of the same production process. We shall suppose that the ultimate objective is to provide, for the state of the world w of the new individual unit, a predictive distribution p(w 1 g ( z ) ,D ) .

408

6 Reniodelling

where y denotes a generic element of the set of all possible { y,(.). i = I . . . . . i n } under consideration for defining covariates. If w is discrete, we typically refer to the problem as one of clnssijication; if w is continuous. we refer to it as one of
prediction.

To simplify the exposition, we shall suppose that identification of the density p ( . I y(z). D) is equivalent to the identification of y E J', where J' denotes the class of all y under consideration. The particular forms in Y will depend, of course. on the practical problem under consideration: typically, however. it will include functions mapping z to z ; , i = 1 . . . . ,s, so that individual attributes themselves are also eligible to be chosen as covariates. Then. if ci(y(.I y(z ) . D).w} denotes a utility function for using the predictive form p ( . I y( z ) . D ) when w turns out to be the true state of the world. the resulting decision problem is shown schematically in Figure 6.4.

Figure 6.4 Selecrion ojcovciririres

CIS n

decision prohler~r

Ifp(z. w I D) represents the predictive distribution for ( z .a), the "traingiven ing data" D, the different possible models corresponding to the different possible choices of covariates. y E Y . are then compared on the basis of their expected utilities

The resulting optimal choice will. of course, depend on the form of the utility function. Qpically, the latter will not only incorporate a score function component for assessing p ( . I y(z), D). but possibly also a cost component. reflecting the different costs associated with the use of different covariates y. For example, in the case of disease classification the use of fewer covariates could well mean cheaper and quicker diagnoses; in the case of predicting production quality the use of fewer covariates could cut costs by requiring less on-line measurement and monitoring. If we suppose. for simplicity, that the utility function can be decomposed into additive score and cost components.

6.2 Model Rejection

409

the expected utility of the choice y is given by

In many cases, it will be natural to use proper score functions, for example, quadratic or logarithmic. If costs are omitted, the optimal model will typically involve a large number of covariates; if cost functions are used which increase with the number of covariates in the model, a small subset of the latter will typically be optimal. More pragmatically, one could ignore costs, identify the optimal yLi),i = 1,2, . . . over all possible choices of one covariate. two covariates, etc., observe that

is typically concave, reflecting marginal expected utility for the incorporation of further covariates, and hence select that y;,) for which ii(y;,+,) I D) - a(yii, I D) is less than some appropriately predefined small constant. Given the complexity of problems of this type, the set Y = M of possible models is typically a rather pragmatically defined collection of stylised forms, and, recalling the discussion of Section 6.1.2. an M-closed perspective would not usually be appropriate. In fact, in most applications p ( z ,w I 0) likely to is prove far too complicated for any honest representation, so that, in the terminology o Section 6.1.2, we need to perform a comparison of the models in Y from the f M-open perspective. There are interesting open problems in the development of the cross-validation techniques, that might be employed in particular cases, but discussion of these would take us far into the realm of methods and case-studies, and so will be deferred to the second and third volumes of this work.

6.2
6.2.1

MODEL REJECTION
Model Rejection through Model Comparison

In the previous section, we considered model comparison problems arising from the existence of a proposed range of well-defined possible models, M = {Mi, i E I}, for observations x, where the primary decision consisted in choosing m,,i E I, with the implication of subsequently acting as if the corresponding M i , i E I, were the predictive model. In this section, we shall be concerned with the situation which arises when just one specific well-dejned modelfor x. say, has been proposed initially, and the primary decision corresponds either to the choice w, which corresponds to C subsequently acting as if M o were the predictive model, or to the choice m , (thus, in a sense, rejecting A&), with the implication of "doing something else. If. given

410

6 Remodelling

Figure 6.5 Model rejecrion as a decision problem

z,G ( . I z)denotes the ultimate expected utility of a primary action, this model rejection problem might be represented schematically by Figure 6.5. Such a structure may arise, for example, as a consequence of ill0 being the only predictive model thus far put forward in a specific decision problem context; or, as a consequence of the application of some kind of principle of simplicity or parsimony, as an attempt to "get away with" using :\I,,, instead of using more complicated (but, in this context. unstated) alternatives. What perspectives might one adopt in relation to this, thus far clearly illdefined. problem of model rejection'? If we are concerned with coherent action in the context ofa well-posed decision problem, we see from Figure 6.5 that we cannot proceed further unless we have some method for arriving at a value of C( n i t ; 12) to compare with u( mo I 2). One way or another, we are forced to consider alternative models to ill,,. Let us suppose therefore that we have embedded dl,, in some larger class of models , = { M , . i E I}. This might be done. particularly where -\I,, has M been put forward for reasons of simplicity or parsimony. by consideration of uctual ulternutives to ,\I,, thought (by someone) to be of practical interest. Otherwise, it might be done by consideration of formal dtertiutivrs, generated by selecting. in some way, a "mathematical neighbourhood" of :\I,, (which might also. of course. contain alternatives of practical interest). For this redefined problem of model rejection within .M. shown schematically in Figure 6.6. the hitherto undefined becomes value of ii( r n & I z)

where I' = I - ( 0 ) indexes the models in M distinct from ill,. For any specific decision problem. the calculation of U( rrr, I z),E I proceeds t as indicated in Section 6. I . Thus. if we adopt the ,bf-closed perspective, evaluations are based on mixture forms involving prior and posterior probabilities of the '11,. i E I; if we adopt the M-completed perspective, the calculation is, in principle. well-defined.but may be numerically involved; if we adopt the M-open perspective. we may use a cross-validation procedure to estimate the expected utilities.

6.2 Model Rejection

411

Figure 6.6 Model rejection within M = { M,, i E I }

In the case where {hi,,i E 1) consists of actual alternatives to h.10, we might regard the redefined model rejection problem as essentially identical to the model comparison problem, so that rejecting mo corresponds to choosing the best of m,, i E 1. However, this would seem to ignore the fact that when ho has been put f forward for reasons of simplicity or parsimony there is an implicit assumption that the latter has some extra utility, over and above the expected utility a(mo 1 z). Thus, if G(m81 z) - ii(nbI )were positive, but not too large, we might still s prefer Mo because o the special simple status of hfo. The same argument f applies even more forcibly in the case where { M t , z E Z} consists of formal alternativesto h.10,since rejecting MOmay not lead obviously to an actual alternative model, and the extra utility of choosing Ado if at all possible may be greater. From this perspective, the redefinition of the problem of model rejection as one of model comparison corresponds to modifying slightly the representation given in Figure 6.6, by replacing ii(rn0 I z)by C(n% f ~ ( 2 ) ~(z) Is) where represents, an given z, implicit (but as yet undefined) exrru utility relating to the special status of A10. (See Dickey and Kadane, 1980, for related discussion.) The formulation of the model rejection problem given above is rather too general to develop further in any detail. In order, therefore, to provide concrete illustrative analyses, we shall assume, for the remainder of this chapter that M = { M o ,M I } ,where, forsome parametric family { p ( . IB ) , 8 E Q}, predictive models for 2 are defined, for some 80c 8, by:

Specially, M, will correspond to either: (i) p ( z I Bo), a simple hypothesis that 8 = Bo (specified by a degenerate prior pO(0) which concentrates on 80), or (ii) p ( z I &, A), a simple hypothesis on a parameter of interest q3 where X is a nuisance parameter (specified by a priorpo(8) = p(A I & which concentrates , ) on the subspace defined by 4 = 40).

412

6 Remodelling

The next three sections consider some detailed model rejection procedures within this parametric framework.

6.2.2

Discrepancy Measures for Model Rejection

Within the parametric framework described at the end of the previous section. model Af,, corresponds to a form of parametric restriction (or null hypothesis) imposed on model ill,. In such situations, it is common practice to consider the decision problem of model rejection to be that of assessing the conzpciribiliry of model Ml1 with the data z this being calibrated relative to the wider model ,Ilk , within which A J o is embedded. We shall focus on this version of the model rejection problem with utilities defined by ~ ( n $ ~ ) ( m I , 0). 8 E @, and overall beliefs. O), u , p(0 1 z). E 8, 8 defined either by an ,,M-closed form,

MI}, or by the { Afl}-closed form. p ( 0 I Af,. 2).the latter prowith M = (Mol viding a kind of adversarial analysis. since it assigns !210no special status. Noting that there are only two alternatives in this decision problem, it suffices to specify the (conditional)difference in utilities. say in favour of the larger model

nr,.

a(e) = u ( l l l , . e) - 11(7U{). el.

since the optimal inference will clearly be to reject model ,If, if and only if

where ~ ( 2 ) represents, as before, the utility premium attached to keeping the simpler model Mo. We shall refer to a(@) as a (utility-based)discrepuncy measure between M,, ;Ifl when 8 E c ) is the true parameter. In terms of the discrepancy. and the optimal action is to reject model .\l~if and only if
1(z)> f ( l ( Z ) .

where

With a considerable reinterpretation of conventional statistical terminology, we might refer to 1( .) as a test stutistic, leading, for given data x,to the rejection of model Af,, if the observed value of the test statistic exceeds a criticul \ d u i ~ . c ( 5 ) = E(l(5).

6.2 Model Rejection

413

How might c(z) be chosen? One possible approach could be to consider, prior to observing and assuming hi0 to be true, a choice of c ( . )which would lead h.lo
to only be rejected with low probability (a,say, for values of a of the order of 0.05 or 0.01). Under this approach, we would choose c ( . )such that

P ( t ( z )> . z 1 Ill") = a , ()
thus obtaining the (1 - a ) t h percentage point of the predictive distribution of t(z). conditional on information available prior to observing x and assuming A10 to be true. Of course, this is just one possible approach to selecting c ( - )and has no special theoretical significance. It is interesting that this choice turns out to lead to commonly used procedures for model rejection which have typically not been derived or justified previously from the perspective of a decision problem (see, for example, Box, 1980, for a nondecision-theoretic approach). Examples will be given in the following two sections. However, for criticism of the practice of working with a fixed a. see Appendix B, Section 3.3.

6.2.3

Z m n e Discrepancies

Suppose that the discrepancy measure introduced in Section 6.2.2 is defined to be

Assuming the decision problem of model rejection to be defined from the M-closed perspective, we obtain

t(z> =
where
~ (I ' z0 ) = 1

J s(e>P(ez)de I
P(M I 11.111 P(Afo,)P(a:I A h ) + P(hfl)P(Z a1.11) ' I
if 00specifies a simple hypothesis if@"= (+(,.A).

=P(h1lIX)

{ ?;(FAo,A I 4 0 ) d A A)p(
IM I )
=

and

J P(a: I

(me.
Ill0

It follows from the analysis given in the previous section that rejected if and only if, for specified critical value c(z),

should be

P ( M , 1 )> c(z). 2

414

6 Rernodelling

i.e., if the posterior odds on Af0 and against MI are smaller than (1 - c(z))/c(z). In the case where the prior odds are equal. the rejection criterion for Alfo terms in of the Bayes factor is given by

Example 6.10. (Null hypothesisfor a binomial pammeter). Suppose that 2 represents s successes in I I binomial trials, with ,If,,, :Ifl defined by

and F'(Af,]) = P(dfl) =

f . Straightforward manipulation shows that

which, assuming, for purposes of illustration. large .r, rr - . I . . and applying Stirling's formula, logr(n + 1) z ( i t + l)log(rt) - J t + f l o g ( 2 ~+ (127,) I . can be approximated by )

where

is the usual chi-squared test statistic. By considering -2 log U , , ( z ) ,for given r r , HGI.a value o f c ( s ) = c(O,,. n ) which calibrates the procedure to only reject MIwith probability o when MII true. is detined by the equation is

denotes the upper 100o'X point o f a 1; distribution. Of course, having decided where ti,,, on this particular approach to the choice o f cfz), there is n o real need to identify it! The rejection procedure is simply defined by comparing the test statistic value, I?(=). with its tail critical value, \:.(,. The reader might like 10verify (perhaps with the aid of Jeffreys, 1939/1961, Chapter 5) that similar results can be ohtained for a variety of"null models" in more general contingency tables.

6.2 Model Rejection

415

6.2.4

General Discrepancies

Given our systematic use throughout this volume of the logarithmic divergence between two distributions, an obviously appealing form of discrepancy measure is given by

where p ( z I 8 ) is the general parametric form of model h 1 1. , In the case of a location-scale model, O0 = ( p ~ , u )8 = ( p . 6 ) . we might consider the standardised measure

In any case, the general prescription will be to reject A40 if t ( z ) > c(z), for some appropriate c(z), where

withp(8 I z) derived either from an M-closed model, as illustrated in Section 6.2.3, or from the "adversarial" form corresponding to assuming model 1 1 . 11
Example 6.11. (Lindley's paradox revisited, again). In Examples 6.6 and 6.9 we considered the use of models A l l , A12 defined, for x = ( 2 1 , . . . .x , , ) , by

MI : p(x) =

n
,=I

, ,

N ( z , 1 prl. A),

pll.X known.

Using the logarithmic divergence discrepancy, we obtain

which is just a multiple (by n / 2 ) of a natural, standardised. measure (the non-centrality parameter) suggested by intuition as a discrepancy measure for a location scale family.

416

6 Remodellirig

Assuming the reference prior for / I derived from an {Afl}-closedperspective. which. as a special case of Proposition 5.24. is easily seen 10 be uniform. we have the reference posterior
p(/rIz) :V(/i17.rrA). =

and hence
(// -//l,)L'v(// IT. //A)+

where, with CT') = A ", the statistic ~ ( z= dx(7 ..- / I , ~ ) / V seen to be a version of ) is the standard significance test statistic for a normal location null hypothesis. With respect to p(z 1 Oil), this has an ,V(Z 10. 1) distribution and the appropriately calibrated value of ('(2)is thus implicitly defined, for example, by rejecting Jf,, if I ;(z) 1 exceeds the upper 100(tr/2)'%point of an : I: 1j . II). distribution. (

The above analysis is easily generalised to the case of unknown A. Here. the reference posterior for ( I / . A ) from an { dl,}-closed perspective (see Example 5.17) is given by

where t w 2 = X(J, - 7)2.follows that It

we see that fi(7 - / I , ~ ) / S 'is B version of the standard where, with s' = [ r j s 2 / ( r r .significancetest statistic for a normal location null hypothesis in the presence of an unknown this has a St(t i 0. I . u - I ) distribution and the scale parameter. With respect to p(zI appropriately calibrated value of v(z) is defined by the standard rejection procedure. The reader can easily extend the above analyses toother stylised test situations: for example, testing the equality of means in two independent normal samples. with known or unknown (equal) precisions. Rueda (1992) provides the gencral expressions for one-dimensional regular exponential family models. Multivariate normal location cases are also easily dealt with, the logarithmic divergence discrepancy in this case being proportional to the Mahalanobis distance (see Ferrindiz, 1985). Wc shall not pursue such cases further here. since it seems t o us that detailed discussion of model rejection and comparison procedures all too easily becomes artificial

6.3 Discussion and Further References

417

outside the disciplined context of real applications of the kind we shall introduce in the second and third volumes of this work. From the perspective of this volume, we have taken the analyses of this chapter sufficiently far to demonstrate the creative possibilities for model choice and comparison within the disciplined framework of Bayesian decision theory.

6.3
6.3.1

DISCUSSION AND FURTHER REFERENCES


Overvlew

We have argued that both from the perspective of a sensitive individual modeller, and also from that of a group of modellers, there are frequently strong reasons for considering a range of possible models. This obviously leads to the problem of model comparison, or model choice, and our approach has been to consider formally a decision problem where the action space is the class of available models. In this setting, we have shown that natural Bayesian solutions, such as choosing the model with the highest posterior probability, are obtained as particular cases of the general structure for stylised. appropriately chosen, loss functions. We have also considered the generally ill-posed problem of model rejection, where the primary decision consists in acting as if the proposed model were true -without having specific alternatives in mind-and have shown that useful results may be obtained by embedding the proposed model within a larger class, and then using discrepancy measures as loss functions in order to decide whether or not the original simpler model may be retained after all. There is an extensive Bayesian literature directly related to the issues discussed in this chapter. Some authors adopt a purely inferential approach, by deriving either posterior probabilities, or Bayes factors for competing models; see, for example, Lindley (1965, 1972), Dickey and Lientz (1970). Dickey (1971, 1977), Leamer (1978), Bemardo (1980), Smith and Spiegelhalter (1980). Zellner and Siow (1980), Spiegelhalterand Smith ( I982), Zellner ( 1984). Berger and Delampady ( I987), Pettit and Young ( 1990), Aitkin (1991), G6mez-Villegasand G6mez (1992), Kass and Vaidyanathan ( 1992), McCulloch and Rossi ( 1992) and Lindley ( 1993). Others openly adopt a decision-theoretic approach; see, for example, Karlin and Rubin (1956). Raiffa and Schlaifer (1961), Schlaifer (1961), Box and Hill (1967). DeGroot (1970). Zellner (197 I), Bemardo (1982. 1985a). San Martini and Spezzafem ( 1984), Berger ( 1985a), Bemardo and Bayarri (1985), Ferrfindiz ( 1985). Poskitt (1987). Felsenstein (1992) and Rueda (1992).

418
6.3.2 Modelling and remodelling

6 Remodelling

We have already argued that we see Bayesian statistics as a rather formalised procedure for inference and decision making within a well-defined probabilistic structure. Fully specified belief models are an integral part of this structure. but it would be highly unrealistic to expect that in any particular application such a belief model will be general enough to pragmatically encompass a defensible description of reality from the very beginning. In practice, we typically first consider simple models, which may have been informally suggested by a combination of exploratory data analysis, graphical analysis and prior experience with similar situations. And even with such a simple model, more formal investigation of its adequacy and the consequences of using it will often be necessary before one is prepared to seriously consider the model as a predictive specification. Such investigations will typically include residuul analysis, identification of outliers andor influential duru. cluster unulwis. and the behaviour of diagnostic srurisrics when compared with their predictive distributions. We shall not elaborate on this here. Some relevant references are Johnson and Geisser (1982, 1983, 1985). Pettit and Smith (1983, Pettit (1986). Geisser (1987, 1992, 1993), Chaloner and Brant (1988). McCulloch (1989). Verdinelli and Wasserman (1991), Gelfand, Dey and Chang (1992). Weiss and Cook ( 1992). Guttman and Peiia (1993), Peiia and Guttman (19931, and Chaloner (1994). Bayarri and DeGroot (1987, 1990, 1992a) provide a Bayesian analysis of selecfion models, where data are randomly selected from a proper subset of the sample space rather than from the entire population. As a consequence of this probing, mainly exploratory analysis, a class of alternative models will typically emerge. In this chapter we have discussed some of the procedures which may be useful in a jbrniui comparison of such alternative models. The outcome of this strategy will typically be a more refined model for which a similar type of analysis may be repeated again. Naturally. a pragmatic combination of time constraints, data limitations, and capacity of imagination, will force this sequence of informal exploration and formal analysis to eventually settle on the use of a particular belief model, which hopefully can be defended as a sensible and useful conceptual representation of the problem. This remodelling process is never fully completed, however. in that either new data, more time, or an imaginative new idea, may force one to make yet another iteration towards the never attainable "perfect". all powerful predictive machine.

6.3.3

Critical issues

We shall comment further on six aspects of the general topic of remodelling under the following subheadings: ( i ) Model Choice iind Model Criticism.(ii) Inferenre

6.3 Discussion and Further References

419

and Decision, (iii) OverjStting and Cross-validation,(iv) Improper Priors, (v) Scient@cReporting, and (vi) Computer Sopare.
Model Choice and Model Criticism

We have reviewed several procedures which, under different headings such as model comparison, model choice or model selection, may be used to choose among a class of alternative models, and we have argued that, from a decision-theoretical point of view, the problem of accepting that a particular model i s suitable, is ill-defined unless an alternative is formally considered. See, also, Hodges (1990, 1992). However, partly due to the classical heritage of significance testing, and given the obvious attraction of being able to check the adequacy of a given model without explicit consideration of any alternative, non-decision-theoretic Bayesians have often tried to produce procedures which evaluate the compatibility of the data with specific models. As clearly stated by Box (1980), the posterior distribution of the parameters only permits the
estimutionof the parameters conditionalon the adequacy of theentertained model, while the predictivedistribution makes possible criticism of the entertained model in the light of current data.

Moreover, the predictive distributions which correspond to different models are comparable among themselves, while-in general-the posteriors are not. The use of predictive distributions to check model assumptions was pioneered by Jeffreys (193911961). Additional references include Geisser (1966, 1971,1985, 1987, 1988,1993), Box and Tiao (1973). Dempster ( 1975),Geisser and Eddy ( 1979), Rubin (1984), Bemardo and Bermlidez (1985), Clayton et al. (1986), Gelfand, Dey and Chang (1992) and Gircin et al. ( 1992). The basic idea consists of defining a set of appropriate diagnosricfuncrions t , = t c ( z n + . . .,z,,+k) of the data and comparing their actual values in a sample l, , based on a different sample with their predictive distributions p t , (- I z1 . . . ,z,~) from the same population. Possible comparisons include checking whether or not the observed t i s belong to appropriate predictive HPD intervals, or determining the predictive probability of observing tts more outlying than those observed. The reader will readily appreciate that common techniques such as residual analysis, identification of influential observations, segregation of homogeneous clusters, or outlier screening. can all be reformulated as particular implementations of this general framework. As mentioned before, we see these very useful activities as part of the informal process that necessarily precedes the formulation of a model which we can then seriously entertain as an adequate predictive tool. However, it seems to us

420

6 Remodelling

inescapable that if a fonnal decision is to be reached on whether or not to operate with a given model, then some form of alternative must be considered. For further discussion of model choice, see Winkler (1980). Klein and Brown (1984), Krasker (1984), Florens and Mouchart (1985). Poirier (1985). Hill ( 1986. 1990). Skene ef al. ( 1986) and West (1 986). Inference and Decision Throughout this volume, we have emphasised the advantages of using a formal decision-oriented approach to the stylised statistical problems which represent a large proportion of the theoretical statistical literature. These advantages are specially obvious in model comparison since, by requiring the specification of an appropriate utility function, they make explicit the identification of those aspects of the model which really matter. We have seen, moreover, that the more traditional Bayesian approaches to model comparison, such us determining the posterior probabilities of competing models or computing the relevant Bayes factors. can be obtained as particular cases of the general structure by using appropriately chosen, stylised utility functions. Very often, the consequences of entertaining a particular model may usefully be examined in terms of the discrepancies between the prediction provided by the model for the value of a relevant observable vector, t, say. and its actual. eventually observed, value. Scoring rules, of the general type ( I ( p t ( . I z l . . . . z,, t ) ). provide natural utility functions to use in this context, by explicitly evaluating the degree of compatibility between the observed t and its predictive distribution.

Overjfring and Cross-Validation


If we are hoping for a positive evaluation of a prediction it is crucial that the predictive distribution is based on data which do not include the value to be predicted; otherwise, severe over-tring may occur. Pragmatically, however, although it is sometimes possiblc to check the predictions of the model under investigation by using a totally different set of data than that used to develop the model. it is far more common to be obliged to do both model construction and model checking with the same data set.
{ X I .. . .

A natural solution consists of randomly partitioning the available sample z = .x,} say, into two subsamples { z , ,z} one of which is used to produce the .

relevant predictive distributions, and the other to compute the diagnostic functions; the procedure then being repeated as many times as is necessary to reach stable conclusions. This technique is usually known ;1s cross-validation. For recent work, see Peiia and Tiao (1992). Gelfand. Dey and Chang ( 1992), and references therein. For a discussion of how cross validation may be seen as approximating formal Bayes procedures, see Section 6.1.6.

6.3 Discussion and Further References

421

A possible systematic approach to cross-validation starting from a sample of .. size n. a = {tl,., G } , and a model { p ( z Ie), p ( B ) } , involves the following steps: (i) Define a sample size k,where A: 5 n./2 is large enough to evaluate the relevant observable function t = t(z1,. . . ,sk).The observable function could either be that predictive aspect of the model which is of interest, as described by the utility function, or a diagnostic function, as described in the above approach to model criticism. (ii) Determine the set of all predictive distributions of the form

where aJ is a subsample of z of size k and air] consists of all the 5,'s in a which are not in a,. (iii) Estimate the expected utility of the model by

Note that the last expression is simply a Monte Car10 approximation to the exact value of the expected utility. We also note that this programme may be carried out with reference distributionssince the correspondingreference (posterior) predictive distributions r l ( t ( a , )I zbi) will be proper even if the reference prior r ( 8 )is not.

Improper Priors In the context of analysis predicated on a fixed model, we have seen in Chapter 5 that perfectly proper posterior parametric and predictive inferences can be obtained for improper prior specifications. When it comes to comparing models, however, in general the use of improper priors is much more problematic. We first note that for models

the predictive quantities

typically play a key role in model comparisons for a range of specific decision problems and perspectives on model choice. But if one or more of the p t ( 0 ) is not a proper density, the corresponding pi (2)'s will also be improper, thus precluding,

422

6 Remodelling

for example, the calculation of posterior probabilities for models in an M-closed analysis. Essentially, with a formal improper specification for the prior component of a model an initial amount of data needs to be passed through the prior to posterior process before the model attains proper status as a predictive tool and can hence be compared with other alternative predictive tools. An exception to this arises when two mcdels '11,. A!,. say, have common B and improper p ( B ) , in which case it can be argued that the ratio p,(z)/p,(z), the Bayes factor in favour of A!,, does provide a meaningful comparison between the two models, However, there is an inherent difficulty with these methods when the models compared have different dimensions. Indeed, with the reference prior approach some models are implicitly disadvantaged relative to others; technically. this is due to the fact that the amount of information about the parameters of interest to be expected from the data crucially depends on the prior distribution of the nuisance parameters present in the model. Lindley's paradox (see. also, Bartlett, 1957). discussed earlier in this chapter, is a well known simple example of this behaviour. A possible solution consists in specifying the improper prior probabilities of the models-or, equivalently. weighting the Bayes factors- in a way which may be expected to achieve neutral discrimination between the models. Some suggestions along these and similar lines include Bemardo ( I 980). Spiegelhalter and Smith (1982). Pericchi (19841, Eaves (1985) and Consonni and Veronese (1992a). Another possible solution to the problem of comparing two models in the case of improper prior specification for the parameters is to exploit the use of cross-validation as a Monte Carlo approximation to a Bayes decision rule. As we saw in Section 6. I .6, for the problem of predicting a future observation using a log-predictive utility function the (Monte Carlo approximated) criterion for model choice involves the geometric mean of Bayes factors of the form

where ( z ~ ( j )~ denotes a partition of 2 = ( $ 1 , . . . . x,,). and ,- , x,]

Since, for sufficiently large I ) , p , (8, I z l , . ( J ) ) will be proper. even for improper 1 (non-pathological) priors p, (6,). no problem arises. However, recalling from our discussion in Section 6.1.6 that the conventional Bayes factor is used to assess the models' ability to "predict Z" from { p l ( z1 6 , ) . p 7 ( O l ) } we see that the latter does run into trouble if p , ( O l ) is im, proper, since, then. p , ( ~is) not proper.

6.3 Discussion and Further References

423

Mi,

Proceeding formally, if we were to take logpi(x) as the utility of choosing the M-open perspective prefers lcll if

where p ( z ) is not specified. Again, we can approach the evaluation of the lefthand side as a Monte Carlo calculation, based on partitioning x and averaging over random partitions. However, in this contect we want partitions where the proxy for the predictive part resembles data z and the proxy for the conditioning part resembles no data. The closest we can come to this, and overcome the problem of the impropriety of p t ( O , ) ,is to take partitions of the form x = [xs(j), - ~ ( j ) ] , ~ wheres(2 l)isthesmallestintegersuchthatbothp~(81~ s ( j )andpz(02 I zs(j)) e ) are proper, and j = 1,. . . , (:). The proposal is now to select randomly k such partitions, and approximate the left-hand side of the criterion inequality by

The (Monte Carlo approximated) model choice criterion then becomes, prefer hi1 if

, ) where B t 2 ( z r , - , ( j )z s ( j )denotes the Bayes factor for All against A h based on the versions

of A f l , A&, Again, we see the explicit role of the geometric average of Bayes factors, but with the latter reversing. in a sense, the role of past and future data compared with the form obtained in Section 6.1.6.
At the time of writing we are aware of work in progress by several researchers who propose forms related to those discussed here. These include: J. 0. Berger and L. R. Pericchi (intrinsic Bayes factors), A. OHagan Cfroctionul Bayes factors) and A. F. de Vos cfair Bayes factors).

424

6 Rentodelling

From a practical perspective, it might be desirable to trade-off, in utility terms, "fidelity to the observed data" and "future predictive power''. This can be formalised by adopting a utility function of the form

which, in turn, leads to a criterion of the form, prefer 1\11 if

Work in progress (by L. R. Pericchi and A. F. M.Smith) suggests that such a criterion effectively encompasses and extends a number of current criteria. We conclude by emphasising again that a predictivistic decision-theoretical approach to model comparison, where models are evaluated in terms of their predictive behaviour. bypasses the dimensionality issue. since posterior predicfiw distributions obtained from models with different dimensions are always directly comparable.
Scientific Reporring

Our whole development has been predicated on the central idea of normative standards for an individual wishing to act coherently in response to uncertainty. Beliefs as individual. personal probabilities are the key element in this process and. at any given moment in the learning cycle, are the encapsulation of the current response to the uncertainties of interest. However. while many are willing to concede that. in narrowly focused decision-making, such beliefs are an essential element, there has also been a widespread view (see, for example. Fisher, 1956/1973) that it would be somehow subversive to sully the nobler. objective processes of science by allowing subjective beliefs to enter the picture. As Dickey ( 1973) has remarked, rhetorically:
But is not personal knowledge, or opinion. like superstition. non-objective and unscientific, and therefore to be avoided in science'? Who cares to read about a scientific reporter's opinion as described by his prior and posterior probabilities'?

We have already made clear our own general view that objectivity has no meaning in this context apart from that pragmatically endowed by thinking of it as a shorthand for subjective consensus. However, there are clearly practical problems of communication between analysts and audiences which need addressing. The solution to such problems lies in combining ideas from Chapters 4 and 6. On the one hand, we have seen that shared assumptions about structural aspects

6.3 Discussion and Further References

425

of beliefs (for example, exchangeability) can lead a group of individuals to have shared assumptionsabout the parametric model component, while perhaps differing over the prior Component specification. On the other hand, we have seen, from several perspectives, that entertaining and comparing a range of models fit perfectly naturally within the formalism. There is nothing within the world of Bayesian Statistics that prohibits a scientist from performing and reporting a range of "what if?" analyses. To quote Dickey again:
. . . communicating a single opinion ought not to be the purpose of a scientific report; but, rather, to let the data speak for themselves by giving the effect of the data on the wide diversity of real prior opinions. . . . an experimenter can summarise the application of Bayes' theorem to whole ranges of prior distributions, derived to include the opinions of his readers. Scientific reports should objectively exhibit as much as possible of the inferenrid conrenr of rhe data, the data-specific prior-to-posterior nunsformarion of the collection of all personal probability distributionson the parameters of a realistically rich statistical model.

We believe that this is the way forward-although, it has to be said, there is a great deal of work to be done in effecting such a cultural change. There are also some obvious technical challenges in making such a programme routinely implementable in practice. However, computational power grows apace, as does the sophistication of graphical displays. We shall return to this general problem in the second and third volumes of this work. For early thoughts on these issues, see Edwards et al. (1963) and Hildreth (1963); for technical expositions, see Dickey (1973), Roberts (1974) and Dickey and Freeman (1975); for a discussion in the context of a public policy debate, see Smith (1978).
Computer Sofnoare Despite the emphasis in Chapter 2 and 3 of this volume on foundational issuesnecessary for a complete treatment of Bayesian theory-we are well aware that the majority of practising statisticians are more likely to be influenced by positive, preferably hands-on, experience with applicationsof methods to concrete problems than they ever will be by philosophical victories attained through the (empirically) bloodless means of axiomatics and stylised counterexamples. We are also well aware that the availability of suitable software is the key to the possibility of obtaining that hands-on experience. But what are the appropriate software tools for Bayesian Statistics? What software? For whom? For what kinds of problems and purposes? A number of such issues were reviewed in Smith (1988), but at the time of writing, many still remain unresolved. Goel (1988) provided a review of Bayesian

426

6 Remodelling

software in the late 8 ' . Examples of creative use of modem software in Bayesian 0s analysis include: Smith er a f . (1985. 1987) and Racine-Poon et al. (1986). who describe the use of the Buyes Four package; Grieve ( 1987); Lauritzen and Spiegelhalter (1988), on which the commercial expert system builder Ergo'" is based; Albert (1990). Korsan (1992). and Ley and Steel ( 1992), who make use of the commercial package Muthemutica'" ;Tiemey ( I 990, 1992), who presents LISP-STAT. an object oriented environment for statistical computing and discusses possible uses of graphical animation; Cowell ( 1992) and Spiegelhalter and Cowell ( 1992). who, respectively, describe and apply the probabilistic expert system shell BAIES; Racine-Poon ( 1992), who discusses sample-assisted graphical analysis; Thomas et al. (1992). who describes BUGS, a program to perform Bayesian inference using Gibbs sampling; Wooff ( 1992). who describes [B/D/. an implementation of subjectivist analysis of beliefs. as described by Goldstein ( I98 I. 1988, 199 I. and references therein); and Marriot and Naylor (1993). who discuss the use of MINITAB to teach Bayesian statistics. Further review and detailed illustration will be provided in the volumes Buyesian Computution and Bayesian Methods.

BAYESIAN THEORY
Edited by Josd M. Bemardo and Adrian F. M. Smith Copyright 02000 by John Wiley & Sons, Ltd

Appendix A
~~

Summary of Basic Formulae


Summary Two sets of tables are provided for reference. The first records the definition, and the first two moments of the most common probability distributions used in this volume. The second records the basic elements of standard Bayesian inference processes for a number of special cases. In particular, it records the appropriate likelihood function. the sufticient statistics, the conjugate prior and corresponding posterior and predictive distributions, the reference prior and corresponding reference posterior and predictive distributions.

A.l

PROBABILITY DISTRIBUTIONS

The first section of this Appendix consists of a set of tables which record the notation, parameter range, variable range, definition, and first two moments of the probability distributions (discrete and continuous, univariate and multivariate) used in this volume.

A. Sumtnary of Basic k'ormulae

Univariate Discrete Distributions

Br(r I 8 ) Bernoulli ( p . 115)

Bi(z 18; r i ) Binomial (p. 115)


0 < e < 1 71 = 1,2. . . . ,
x = 0. I . . . . ,11

Bb(z 1 (I. 9. a ) Binomial-Beta f p . I 17)


.I'

= 0. 1.. . .

.I /

Hy(x I N . :If. n ) Hypergeometric ( p . 115)

N = 1.2.. . .
A f = 1,2, . . . r z = 1.....,VfAI

z = ~ , ( L l+. . . . b . ( = n1ax(O, t / - :If) I

h = t n i t i ( n . A')
-

p ( r )=c

)( (n.'Ux) ;
N

E [ r ]= 7)-'

+ *\I

A.1 Probability Distributions

429

Univariate Discrete Distributions (continued)

Nb(x 18, r ) Negative-Binomial (p. 116)

O < O < l,r=l,2,..


1 - 8)'

r = O.l,2,. . . c = 8' V[z] = T1-0


02

E[z] = r0
Nbb(x I a ,

r ) Negative-Binomial-Beta (p. 118)

rB E [ z ]= a-1

V[x] =

+ ( a - l ) ( a - 2)
= 0.1.2,. . .

Pn(x I A)
X>O

Poisson (p. 116)

p(x) = c E[x] = X

X!
V[T] = x
~~~

A'

Pg(x I a , 9,) Poisson-Gamma fp. 1191 n

a>0, p>o, v > o

. E

= 0 , l . 2,. .

a E[x] = v-

430
Univariate Continuous Distributions

A. Summary o Basic Forntulae f

Be(r 1 a. d) Beta ( p . 116)

U n ( s 1 u. 6 ) Uniform ( p . 117) b>a


P(P)

rr < .I' < t j


=c
I'

= (b- u)-' = &(I> - (I)'

E [ x ]= + ( U

+ b)

G.. ",]

G a ( i 1 a.;j) Gamma (p. 118)

a > 0.1) > 0


E [ x ] = aS-1

x >0

Ex(x 10) Exponential (p. 118)


t1>0 p ( r ) = (.e-@= E [ s ] = 1/e
Gg(x I ct. 3. n ) Gamma-Gamma f p . 120)
0

.I'

>0
1/19'

c=6,
\'[.I-] =

> 0.3 > 0, 71 > 0

.I'

>0

A.1 Probability Distributions

431

Univariate Continuous Distributions (continued)

x z ( x I v) = x:

Chi-squared ( p . 120)

x>o

V [ x ]= 2v x 2 ( x I v,A)
v > 0, x > 0

Nan-central Chi-squared ( p . 121)

x>O

E [ x ]= v

+x

V [ x ]= 2 ( v

+2 4

Ig(x I a , p) Inverted-Gamma ( p . 119)


Cr>o,j3>0
X > O

E [ 4= a-1

8
V [ x ]= (a- 1)2(a- 2 )

x-(z1 v ) Inverted-Chi-squared (p. 119)


V > O

. 2

>0

c=-

E[z]= v-2

(1/2)2 Uv/2) V[X]= (v - 2)2(v - 4)


~ ~~ ~

Ga- ( x I a , B) Square-root Inverted-Gamma (p. 119)

a > 0, i3 > 0

x>o

432

A. Summar?,of Basic Formulae

Univariute Continuous Distributions (continued)

Ip(x 1 ( 1 . 1 3 )
(1

Inverted-Purero (p. 120)

> 0.

t j

>0

o<s<
I'

j I

p ( r ) = cz"-' Els] = , 3 - ' a ( t s

= (1.P
'(k(Ct

+ 1)-1

\'I.r] = ;r

+ 1) '(a

4-

2)

N(.r I p . A)
--x

NorniuI (p. 121)


--x

< 11 < soc, x > 0


-p)2}

< .r < +x
1(2K)-l 2
I

y(r) = c exp {-;A(.

= x'
\.-[.I.]

E[:r]= p
St(s 111, A, ct)

=x

Student t ( p . 122)

--x.

< 11 < C X . x > 0.n > 0

--x

< .r < + x

A.1 Probability Distributions

433

Univariate Continuous Distributions (continued)

Lo(x a, ) Logistic P

(p. 122)

Multivariate Discrete Distributions

Muk(x I 8, n) Multinomial (p. 133)

Mdk(x I 8.,n) Multinomial-Dirichlet (p. 135)

z = (51> . . . ,xa) xj =0,1,2, ... Chl 5 n 2 1


n!

n;=,+ e - 1) (a
a,

E[x,] npi =

434

A. Summary of Basic Formulae

Multivariate Continuous Distributions

Dik.(z I a ) Dirichlet (p. 134)

x = (.XI.. . . ..(.k) I 0 < J , < 1. XI=,.I.( 5 I

Ng(z. y 1 p. A.

11.

J) Nornial-Gammu (p. I361

N k (x1 p.A)

Multivuriute Norntul ( p . 136)


2

p = (/I,,. . . ./&) E Rk A symmetric positive-definite

= ( X I . .....rA ) E

%A

p ( z ) = c cxp { - ( - p)'A(z - p)} !z


EIX.1 =

r= j

CC

x 1 "?(27i)-~4 I-[x]= x '

Pa2(r, y ICY,A. 31) Biluterul Pureto (p. 141)

A. I Probability Distributions

435

Multivariate Continuous Distributions (continued)

Ngk(x,y 1 p , A, a, 8 ) Multivariate Normal-Gamma (p. 140)

______

Nwk(x,g I p , A, a , P ) Multivariate Normal-Wishart ( p . 140)


-90

< pz < +,m, A > 0

2a>k-l
j3 symmetric non-singular .

x = ( q , ...:;ck) --x xi < +m < g symmetric positivedefinite

Stk(x I p, A, a ) Multivariate Student (p. 139)


-90 < pi < +w, CL > 0 A symmetric positive-definite
5

--M

= (XI,.. ,Zk) < 2, < +m

- p ) [ A ( z- p )
E [ x ]= p,
Wik(x I a,P) Wishart ( p . 138)
h > k - l

-(n+k)/2

c=

V [ X ] A-'(cI - 2)-'a =

x symmetric positive-definite

symmetric non-singular

436

A. Summar?,of Basic Formulae

A.2

INFERENTIAL PROCESSES

The second section of this Appendix records the basic elements of the Bayesian learning processes for many commonly used statistical models. For each of these models, we provide. in separate .sections of the table, the following: the sufficient statistic and its sampling distribution; the conjugate family, the conjugate prior predictives for a single observable and for the sufficient statistic. the conjugate posterior and the conjugate posterior predictive for a single observable. When clearly defined, we also provide, in a final section. the reference prior and the corresponding reference posterior and posterior predictive for a single observable. In the case of uniparameter models this can always be done. We recall. however. from Section 5.2.4 that, in multiparameter problems. the reference prior is only defined relative to an ordered parametrisation. In the univariatc normal model (Example 5.17). the reference prior for ( p . A) happens to be the same as that for (A. p ) , namely ~ ( p . = T (A, p ) x A-. and we provide the corresponding A) reference posteriors for 11, and A. together with the reference predictive distribution for a future observation. in the multinomial, multivariate normal and linear regression models, however, there are very many different reference priors. corresponding to different inference problems, and specified by different ordered parametrisations. These are not reproduced in this Appendix.

Bernoulli model

p ( 0 ) = Be(@1 0 . d) p(s)= Bb(r I o.;9. 1) p ( r ) = Bb(r 1 ct. 3, n ) y(6, I z ) = Be(6, I c1 r. !j p ( . I~ ) = Bb(s I CL + r, 9 z

+ +

71 7i

- r) - I. 1)

I i. i) I Be(0 I 4 + r, 4 + ri - r ) I T ( X 1 Z ) = B b ( t 1 f + I . + 11 - I., 1) 6
~ ( 6 , = Be(# ) ~ ( 6 z)= ,

A.2 Inferential processes

437

Poisson Model

p ( r I A) = Pn(r I n X )

t(%) T =

c:'=,
2,

Negative-Binomial model

z = ( x I , . . . . 5,J, 2 , = 0 , 1 . 2.... 0c 8 < 1 P(G I@)= Nb(z., 18, r ) ,

.(e)

n(8 I z ) = Be(8 I nr, s f ) T ( Z I z ) = Nbb(s I nr, s r)

CX @--1(1 - q - 1 ' 2

+ i,

A. Summury of Busic Formulae

Exponential Model

p(H) = Ga(O I Q . J) p ( . r ) = G g ( r I cr. 1) p ( t ) = G g ( fI o.,j.n ) p(8Jz) = Ga(8 I ( I I ) . J y(s 12)= G g ( r I n I t . i ?

+ +

+ t) + 1.1)

T ( 0 ) x 8-' ~ (I z) = Ga(8 I r i . t) 8 7r(s1 a ) = Gg(.r I I t . t , 1)

Uniform Model
=

($1..

.. . X I , } .

0 <XI < H
6,

P ( J , 18) = Un(r, 10. e).

>o

A.2 Inferential processes

439

Normul Model (known precision A)

Normal Model (known mean p )

.(A)
. ( b

o (

A-1

7r(A I z ) = Ga(A

1 in, i t )

J %) = St(s I p , nt-I, n)

440
Normul Model (both puramriers utiknown)

A. Summary oj Basic Formdue

A.2 Inferential processes

441

Multinomial Model

Multivariate Normal Model

z = (21,.

p ( s l I p, A) = N k ( q I p, A),

. ., 2 , 1 } ,

2*E

Rk
p E

Rk.

X k x k positive-definite

442
Linear Regression

A. Summary o Basic Farmulae f

t ( % ) (X'X. = X'y)

BAYESIAN THEORY
Edited by Josd M. Bemardo and Adrian F. M. Smith Copyright 02000 by John Wiley & Sons, Ltd

Appendix B

Non= Bayesian Theories


Summary
A summary is given of a number of nowBayesian statistical approaches and

procedures. The main theories reviewed include classical decision theory, frequentism, likelihood, and fiducial inference. These are illustrated and compared and contrasted with Bayesian methods in the stylised contexts of point and interval estimation. and hypothesis and significance testing. Further issues discussed include: conditional and unconditional inferences; nuisance parameters and marginalisation;prediction; asymptotics and criteria for model choice.

6.1

OVERVIEW

Bayesian statistical theory as presented in this book is self-contained and can be understood and applied without reference to alternative statistical theories. There are, however, two broad reasons why we think it appropriate to give a summary overview of our attitude to other theories. First. many, if not most, readers will have some previous exposure to classical statistics, and the material in this Appendix may help them to put the contents of this book into perspective.

444

B. Non-Bayesian Theories

Secondly,our own experience has been that some element of comparative analysis contributes significantly to an appreciation of the attractions of the Bayesian paradigm in statistics. As a preliminary, we recall from Chapter 1 our acknowledgment that Bayesian l analysis takes place in a r a t h e r j ~ r t aframework, and that exploratory data analysis and graphical displays are often prerequisite, informal activities. It is important. therefore, to be clear that in this Appendix we are discussing non-Bayesian,forniuI procedures. We begin by making explicit some of the key differences between Bayesian and non-Bayesian theories:

(i) As we showed in detail in Chapter 2, Bayesian statistics has an uxionrutic foundation which guarantees quantitative coherence. Non-Bayesian statistical theories typically lack foundational support of this kind and essentially consist of a set of recipes which are not necessarily internally consistent. (ii) Non-Bayesian theories typically use only a parametric model family of the ignoring the prior distribution p ( 0 ) . The form { y ( z 18). z E X.8 E e ) . implications of this fact are so far reaching that sometimes Bayesian statistics is simplistically thought of as statistics with the optional extra of a prior distribution. In Chapters 2 and 2, we have argued that the existence of a prior distribution is a mathematical consequence of the foundational axioms. In Chapter 4. we stressed that predictive models, typically derived from combining p ( z I 8 ) and p ( B ) ,are primary. (iii) The decision theoretical foundations of Bayesian statistics provide a natural framework within which specific problems can easily be structured, with solutions directly tailored to problems. In contrast. most non-Bayesian theories essentially consist of stylised procedures, such as those for point or interval estimation, or hypothesis testing. designed to satisfy or optimise an ud hoc criterion, and often lacking the necessary flexibility to be adaptable to specific problem situations. (iv) We have argued that, from a Bayesian viewpoint, a decision ~tructurc is the natural framework for any fornial statistical problem, and have described how a pure inference problem may be seen as a particular decision problem. Non-Bayesian theories depart radically from this viewpoint; classical decision theory is only partially relevant to inference, and non-Bayesian inference theories typically ignore the decision aspects of inference problems.

In Section B.2, we will revise the key ideas of a number of non-Bayesian statistical theories, specifically reviewing Classical Decision Theory Frequentist Procedures. Likelihood I nference and Fiduciul cind Relared 7heories.

8.2 Alternative Approaches

445

In Section B.3, we will follow the typical methodological partition of nonEstimation, HyBayesian textbooks into the topics of Point Estimarion, Inren~al pothesis Testing and Sign8cance Testing. Within each of those subheadings we will comment on the internal logic, the relevance to actual statistical problems, and the performance of classical procedures relative to their Bayesian counterparts. In Section B.4. we will discuss in detail some key comparative issues: Conditional and Unconditional Inference, Nuisunce Purumeters und Marginalisation. f Approuches to Prediction, Aspects o Asymptotics and Model Choice Criteria. For readers seeking further comparativediscussion at textbook level. we recall, from our discussion at the end of Chapter 5 , the books by Barnett ( I97 I / 1982),Press (1972/1982),Cox and Hinkley (1974). Anderson (l984), DeGroot (1987),Casella and Berger ( 1990) and Poirier ( 1993).

8.2
B.2.1

ALTERNATIVE APPROACHES
Classical Decision Theory

We recall from Section 3.3 the basic structure of a general decision problem, consisting of a set of a possible decisions D, parameter space 0, a prior distribution a p(w)over 0, and a utility function u ( d ( w ) )which we shall denote by .u(d7 ) to w conform more closely to standard notation in classical decision theory. We established that the existence of horh the prior distribution p ( w )and the utility function u(d,w ) is a mathematical consequenceof the axioms of quantitativecoherence and that the best decision d' is that which maximises the expected utility
E(d) =

u(d. w)p(w)dw.

We established furthermorethat, if additional information z is obtained which is probabilistically related to w by y ( z 1 w ) ,then the best decision d: is that which maximises the posterior expected utility

where
P(W

I=)

P(= I w)P(w)-

Some authors prefer to use loss functions instead of utilities. A regretfunction, or decision loss, is easily defined from the utility function (at least in bounded cases)
by

l ( d ,w ) = sup u(d,,w ) - u(d. 0 ) .


11; EL)

446

B. Non-Bayesian Theories

which quantifies the maximum loss that. for each w, may suffer as a conseone quence of a wrong decision. Since slipD u(d. w) only depends on w. expected the loss
-

l ( d )=

f(d,w)p(w)c~u

is minimised by the same decision d' which maximises Z(d) and hence. from a Bayesian point of view, the two formulations are essentially equivalent. In contrast to this Bayesian formulation, the core framework of classical decision theory may be loosely described as decision theory without a prior distribution. A utility function (or a loss function) is accepted, perhaps justified by utility-only axiomatics of the type pioneered by von Neumann and Morgenstem ( 1944/1953), but a prior distribution for w is not. Although some of the basic ideas in classical statistical theory were present in the work of Neyman and Pearson (1933). it was Wald (1950) who introduced a systematic decision theory framework. excluding prior distributions as core elements, but including a formulation of standard statistical problems within a decision framework. This work was continued by Girshick and Savage (1951) and by Stein (1956). An excellent textbook introduction is that of Ferguson (1967). Classical decision theory focuses on the way in which additional information x should be used to assist the decision process. Consequently. the basic space is not the class of decisions, I ) , but the class of decision rrtles. A. consisting of functions Ci : S f ) which attach a decision n'(z)to each possible data set x. It is then suggested that decision rules should be evaluated in terms of their triwtrge loss with respect to the data which might arise. Thus. the risk,firricrion I * ( 4.L j o f a . decision rule 0' is defined as

and subsequent comparison of decision rules is based on their risk functions. The formulation includes. as 3 special case. the situation with no additional data (the no-data case). where the risk function reduces to the loss function.
Example B.1. (Estimation of the mean of a normal distribution). Let $x = \{x_1, \ldots, x_n\}$ be a random sample from a $N(x \mid \mu, 1)$ distribution, and suppose that we want to select an estimator for $\mu$, so that $D = \Re$, under the assumption of a quadratic loss function $l(d, \mu) = (\mu - d)^2$. Some possible decision rules are
(i) $\delta_1(x) = \bar{x}$, the sample mean;
(ii) $\delta_2(x) = \tilde{x}$, the sample median;
(iii) $\delta_3(x) = \mu_0$, a fixed value;
(iv) $\delta_4(x) = (n + n_0)^{-1}(n_0 \mu_0 + n \bar{x})$, the posterior mean from a $N(\mu \mid \mu_0, n_0)$ prior, centred on $\mu_0$ and with precision $n_0$.


Using the fact that the variance of the sample median is approximately $\pi/2n$, the corresponding risk functions are easily seen to be
(i) $r(\delta_1, \mu) = 1/n$;
(ii) $r(\delta_2, \mu) = \pi/2n$;
(iii) $r(\delta_3, \mu) = (\mu - \mu_0)^2$;
(iv) $r(\delta_4, \mu) = (n + n_0)^{-2}\{n + n_0^2 (\mu_0 - \mu)^2\}$.

Figure B.1  Risk functions for $\mu_0 = 0$, $n = 10$ and $n_0 = 5$.


Note that (iv) includes both (i) and (iii) as limiting cases, as $n_0 \to 0$ and $n_0 \to \infty$ respectively. Figure B.1 provides a graphical comparison of the risk functions. It is easily seen that, whatever the value of $\mu$, $\delta_2$ has larger risk than $\delta_1$ but that, otherwise, the best decision rule markedly depends on $\mu$. The closer $\mu_0$ is to the true value of $\mu$, the more attractive $\delta_3$ and $\delta_4$ will obviously be.
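These risk formulas are easy to check numerically; the following minimal sketch (in Python, with illustrative values; not part of the original text) evaluates the four risk functions and verifies the expression for $\delta_4$ by Monte Carlo simulation.

```python
import numpy as np

def risks(mu, n, mu0=0.0, n0=5.0):
    """Risk functions of the four rules in Example B.1 (quadratic loss)."""
    return {
        "mean":      1.0 / n,
        "median":    np.pi / (2 * n),                # asymptotic approximation
        "fixed":     (mu - mu0) ** 2,
        "posterior": (n + n0**2 * (mu0 - mu)**2) / (n + n0)**2,
    }

# Monte Carlo check of the posterior-mean rule delta_4 at mu = 1, n = 10.
rng = np.random.default_rng(0)
mu, n, mu0, n0 = 1.0, 10, 0.0, 5.0
x = rng.normal(mu, 1.0, size=(200_000, n))           # N(mu, 1) samples
delta4 = (n0 * mu0 + n * x.mean(axis=1)) / (n + n0)
print(np.mean((delta4 - mu) ** 2))                   # ~ 0.1556
print(risks(mu, n)["posterior"])                     # exactly 35/225
```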

Admissibility
The decision rule $\delta_2$ in Example B.1 can hardly be considered a good decision rule since, for any value of the unknown parameter $\mu$, the rule $\delta_1$ has a smaller risk. This is formalised within classical decision theory by saying that a decision rule $\delta$ is dominated by another decision rule $\delta'$ if, for all $\omega$,
$$
r(\delta, \omega) \geq r(\delta', \omega)
$$


with strict inequality for some $\omega$, and that a decision rule is admissible if there is no other decision rule which dominates it. A class of decision rules is complete if for any $\delta'$ not in the class there is a $\delta$ in the class which dominates it, and a complete class is minimal complete if it does not contain a proper subclass which is also complete. If one is to choose among decision rules in terms of their risk functions, classical decision theory establishes that one can limit attention to a minimal complete class. However, for guidance on how to choose among admissible decision rules, further concepts and criteria are required.
Bayes Rules

If the existence of a prior distribution $p(\omega)$ for the unknown parameter is accepted, classical decision theory focuses on the decision rule which minimises the expected risk (or so-called Bayes risk)
$$
\min_{\delta} \int_{\Omega} r(\delta, \omega)\, p(\omega)\, d\omega
= \min_{\delta} \int_{\Omega} \int_{X} l(\delta(x), \omega)\, p(x \mid \omega)\, p(\omega)\, dx\, d\omega,
$$

which it calls a Bayes decision rule. Note that, since
$$
p(x \mid \omega)\, p(\omega) = p(\omega \mid x)\, p(x),
$$
under appropriate regularity conditions we may reverse the order of integration above to obtain
$$
\min_{\delta} \int_{X} \int_{\Omega} l(\delta(x), \omega)\, p(\omega \mid x)\, p(x)\, d\omega\, dx,
$$

so that the Bayes rule may be simply described as the decision rule which maps each data set $x$ to the decision $\delta^*(x)$ which minimises the corresponding posterior expected loss. Note that this interpretation does not require the evaluation of any risk function.

It is easily shown that any Bayes rule which corresponds to a proper prior distribution is admissible. Indeed, if $\delta^*$ is the Bayes decision rule which corresponds to $p(\omega)$ and $\delta'$ were another decision rule such that $r(\delta', \omega) \leq r(\delta^*, \omega)$, with strict inequality on some subset of $\Omega$ with positive probability under $p(\omega)$, then one would have
$$
\int_{\Omega} r(\delta', \omega)\, p(\omega)\, d\omega < \int_{\Omega} r(\delta^*, \omega)\, p(\omega)\, d\omega,
$$
which would contradict the definition of a Bayes rule as one which minimises the expected risk. Wald (1950) proved the important converse result that, under rather general conditions, any admissible decision rule is a Bayes decision rule with respect to some, possibly improper, prior distribution.


There is, however, no guarantee that improper priors lead to admissible decision rules. A famous example is the inadmissibility, in three or more dimensions, of the sample mean of multivariate normal data as an estimator of the population mean, even though it is the Bayes estimator which corresponds to a uniform prior. For details, see James and Stein (1961).

Minimax Rules

The combined facts that admissible rules must be Bayes, and that deriving the Bayes rule does not require computation of the risk function but simply the minimisation of the posterior expected loss, make it clear that, apart from purely mathematical interest, it is rather pointless to work in decision theory outside the Bayesian framework. Indeed, this has been the mainstream view since the early 1960s, with the authoritative monographs by DeGroot (1970) and Berger (1985a) becoming the most widely used decision theory texts.

Nevertheless, some textbooks continue to propose as a criterion for choosing among decisions (without using a prior distribution) the rather unappealing minimax principle. This asserts that one should choose that decision (or decision rule) for which the maximum possible loss (or risk) is minimal. It can be shown, under rather general conditions, and certainly in the finite spaces of real world applications, that the minimax rule is the Bayes decision rule which corresponds to the least favourable prior distribution, i.e., that which gives the highest expected risk.

The intuitive basis of the minimax principle is that one should guard against the largest possible loss. While this may have some value in the context of game theory, where a player may expect the opponent to try to put him or her in the worst possible situation, it has no obvious intuitive merit in standard decision problems. The idea that the minimax rule should be preferred to a rule which has better properties for nearly all plausible $\omega$ values, but a slightly higher maximum risk for an extremely unlikely $\omega$ value, seems absurd. Moreover, even as a formal decision criterion, minimax has very unattractive features; for instance, it gives different answers if applied to losses rather than to regret functions, and it can violate the transitivity of preferences (see, e.g., Lindley, 1972). Thus, although in specific instances (namely, when prior beliefs happen to be close to the least favourable distribution) the minimax solution may be reasonable, essentially coinciding with the Bayes solution, the minimax criterion seems entirely unreasonable.
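As an illustration of the relation between minimax rules and least favourable priors, the following minimal sketch (in Python, using a made-up two-state, two-action loss table; none of these numbers come from the text) finds the minimax randomised rule by equalising the two risks, and the least favourable prior by equalising the two posterior expected losses; the two optimal values coincide.

```python
# Hypothetical loss table L[action][state] for a two-state decision problem.
L = [[0.0, 4.0],   # action a1: no loss under s1, loss 4 under s2
     [2.0, 1.0]]   # action a2: loss 2 under s1, loss 1 under s2

# Randomised rule: take a1 with probability q.  Risks under the two states
# are (1-q)*2 and 4q + (1-q)*1; the minimax q equalises them.
q = (L[1][0] - L[1][1]) / (L[1][0] - L[1][1] + L[0][1] - L[0][0])   # 0.2
minimax_value = (1 - q) * L[1][0]                                    # 1.6

# Least favourable prior: p = Pr(s2) equalising expected losses of a1, a2.
p = (L[1][0] - L[0][0]) / (L[1][0] - L[0][0] + L[0][1] - L[1][1])    # 0.4
bayes_risk = (1 - p) * L[0][0] + p * L[0][1]                         # 1.6
print(q, minimax_value, p, bayes_risk)
```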
B.2.2 Frequentist Procedures

We recall from Section 5.1 the basic structure of a stylised inference problem, where inferences about $\theta \in \Theta$ are to be drawn from data $x$, probabilistically related to $\theta$ by the parametric model component $\{p(x \mid \theta), \theta \in \Theta\}$.


We established that the existence of a prior distribution $p(\theta)$ is a mathematical consequence of the axioms of quantitative coherence, and that the required inferential statement about $\theta$ given $x$ is simply provided by the full posterior distribution
$$
p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)},
$$
where
$$
p(x) = \int_{\Theta} p(x \mid \theta)\, p(\theta)\, d\theta.
$$

Frequentist statistical procedures are mainly distinguished by two related features: (i) they regard the information provided by the data $x$ as the sole quantifiable form of relevant probabilistic information; and (ii) they use, as a basis for both the construction and the assessment of statistical procedures, long-run frequency behaviour under hypothetical repetition of similar circumstances. Although some of the ideas probably date back to the early 1800's, most of the basic concepts were brought together in the 1930's from two somewhat different perspectives, the work of Neyman and Pearson being critically opposed by Fisher, as reflected in discussions at the time published in the Royal Statistical Society journals. Convenient references are Neyman and Pearson (1967) and Fisher (1990). See, also, Wald (1947) for specific methods for sequential problems.

Frequentist procedures make extensive use of the likelihood function
$$
\mathrm{lik}(\theta \mid x) = p(x \mid \theta)
$$


(or variants thereof), essentially taking the mathematical form of the sampling distribution of the observed data $x$ and considering it as a function of the unknown parameter $\theta$. If $z = z(x)$ is a one-to-one transformation of $x$, the likelihood in terms of the sampling distribution of $z$ becomes
$$
\mathrm{lik}(\theta \mid z) = p(z \mid \theta) = p(x \mid \theta)\left|\frac{\partial x}{\partial z}\right|,
$$
which suggests that meaningful likelihood comparisons should be made in the form of ratios rather than, say, differences, in order for such comparisons not to depend on the use of $z$ rather than $x$.

The basic ideas behind frequentist statistics consist of (i) selecting a function of the data $t = t(x)$, called a statistic, which is related to the parameter $\theta$ in a convenient way; (ii) deriving the sampling distribution of $t$, i.e., the conditional distribution $p(t \mid \theta)$; and (iii) measuring the "plausibility" of each possible $\theta$ by calibrating the observed value of the statistic $t$ against its expected long-run behaviour given $\theta$, described by $p(t \mid \theta)$. For a specific parameter value $\theta = \theta_0$, if the observed value of $t$ is well within the area where most of the probability density


of $p(t \mid \theta_0)$ lies, then $\theta_0$ is claimed to be compatible with the data; otherwise it is said that either $\theta_0$ is not the true value of $\theta$, or a rare event has happened. Such an approach is clearly far removed from the (to a Bayesian, rather intuitively obvious) view that relevant inferences about $\theta$ should be probability statements about $\theta$ given the observed data, rather than probability statements about hypothetical repetitions of the data conditional on (the unknown) $\theta$. This contrast is highlighted by the following example, taken from Jaynes (1976).
Example B.2. (Cauchy observations). Let $x = \{x_1, x_2\}$ consist of two independent observations from a Cauchy distribution $p(x \mid \theta) = \mathrm{St}(x \mid \theta, 1, 1)$. Common sense (supported by translational and permutational symmetry arguments) suggests that $\hat{\theta} = (x_1 + x_2)/2$ may be a sensible estimate of $\theta$. Yet, the sampling distribution of $\hat{\theta}$ is again $\mathrm{St}(\hat{\theta} \mid \theta, 1, 1)$, so that, according to a naive frequentist, it cannot make any difference whether one uses $x_1$, $x_2$ or $\hat{\theta}$ to estimate $\theta$. Clearly, there is more to inference than the choice of estimators and their assessment on the basis of sampling distributions.
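The stability of the Cauchy distribution under averaging is easy to check numerically; a minimal Python sketch (illustrative, not part of the original example) compares empirical quartiles of a single observation and of the pairwise mean, both of which match the $\theta \pm 1$ quartiles of a standard Cauchy centred at $\theta$.

```python
import numpy as np

# Two independent Cauchy(theta, 1) observations per replication.
rng = np.random.default_rng(1)
theta = 0.0
x = theta + rng.standard_cauchy(size=(200_000, 2))
pair_mean = x.mean(axis=1)

# Both distributions have quartiles at theta -/+ 1.
print(np.percentile(x[:, 0], [25, 75]))    # approx [-1, 1]
print(np.percentile(pair_mean, [25, 75]))  # approx [-1, 1] as well
```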

Sufficiency

We recall from Section 4.5 that a statistic $t$ is sufficient if $p(x \mid t, \theta) = p(x \mid t)$; i.e., if the conditional distribution of the data given $t$ is independent of $\theta$ (Proposition 4.11), and that a necessary and sufficient condition for $t$ to be sufficient for $\theta$ is that the likelihood function may be factorised as
$$
\mathrm{lik}(\theta \mid x) = p(x \mid \theta) = h(t, \theta)\, g(x),
$$
in which case, for any prior $p(\theta)$, the posterior distribution of $\theta$ only depends on $x$ through $t$, i.e., $p(\theta \mid x) = p(\theta \mid t)$. The concept of sufficiency in the presence of nuisance parameters is controversial; see, for example, Cano et al. (1988) and references therein.

The sufficiency principle in classical statistics essentially states that, for any given model $p(x \mid \theta)$ with sufficient statistic $t$, identical conclusions should be drawn from data $x_1$ and $x_2$ with the same value of $t$. The idea was introduced by Fisher (1922) and developed mathematically by Halmos and Savage (1949) and Bahadur (1954). From a Bayesian viewpoint there is obviously nothing new in this principle: it is a simple mathematical consequence of Bayes' theorem. However, other frequentist developments of the sufficiency concept have little or no interest from a Bayesian perspective. For example, a sufficient statistic $t$ is complete if, for all $\theta$ in $\Theta$, $\int h(t)\, p(t \mid \theta)\, dt = 0$ implies that $h(t) = 0$. The property of completeness guarantees the uniqueness of certain frequentist statistical procedures based on $t$, but otherwise seems inconsequential.
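As a worked instance of the factorisation criterion (a standard textbook illustration, not taken from the original), consider a Bernoulli random sample, for which $t = \sum_i x_i$ is sufficient:
$$
p(x \mid \theta) = \prod_{i=1}^{n} \theta^{x_i} (1 - \theta)^{1 - x_i}
= \underbrace{\theta^{t} (1 - \theta)^{n - t}}_{h(t,\,\theta)} \cdot \underbrace{1}_{g(x)},
\qquad t = \sum_{i=1}^{n} x_i,
$$
so that any prior $p(\theta)$ yields a posterior depending on the data only through $t$.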


Ancillarity

In Section 5.1 we demonstrated how a sufficient statistic $t = t(x)$ may often be partitioned into two component statistics $t(x) = [a(x), s(x)]$ such that the sampling distribution of $a(x)$ is independent of $\theta$. We defined such an $a(x)$ to be an ancillary statistic and showed that, if $a$ is ancillary, then
$$
p(\theta \mid x) \propto p(s \mid a, \theta)\, p(\theta),
$$
so that, in the inferential process described by Bayes' theorem, it suffices to work conditionally on the value of the ancillary statistic. For further information, see Basu (1959).

The conditionality principle in classical statistics states that, whenever there is an ancillary statistic $a$, the conclusions about the parameter should be drawn as if $a$ were fixed at its observed value. The apparent need for such a principle in frequentist procedures is well illustrated in the following simple example.
Example B.3. (Conditional versus unconditional arguments). A 0-1 signal comes from one of two sources $\theta_1$ or $\theta_2$, and there are two receivers $R_1$ and $R_2$ such that
$$
p(x = 1 \mid R_2, \theta_1) = 0.8, \qquad p(x = 1 \mid R_2, \theta_2) = 0.2,
$$
where the receiver is selected at random, with $p(R_1) = 0.99$. If $R_2$ were the receiver and $x = 1$ were obtained, the conditional likelihood would have been
$$
\mathrm{lik}(\theta_1 \mid R_2, x = 1) = 0.8, \qquad \mathrm{lik}(\theta_2 \mid R_2, x = 1) = 0.2,
$$
suggesting $\theta_1$ as the true value of $\theta$. On the other hand, the unconditional likelihood given $x = 1$, averaged over which receiver might have been selected, would have been
$$
\mathrm{lik}(\theta_1 \mid x = 1) = 0.155, \qquad \mathrm{lik}(\theta_2 \mid x = 1) = 0.845,
$$
suggesting $\theta_2$ instead. The conflict arises because the latter (unconditional) argument takes undue account of what might have happened (i.e., $R_1$ might have been the receiver) but did not.
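A small numerical sketch of this conditional/unconditional conflict (in Python; the error rates for $R_1$ are assumptions chosen for illustration, since only the $R_2$ rates are fixed by the example):

```python
# Hypothetical receiver characteristics: p(x=1 | receiver, theta).
# R2 rates are from the example; R1 rates are illustrative assumptions.
p_R1 = {"theta1": 0.15, "theta2": 0.85}   # assumed for illustration
p_R2 = {"theta1": 0.80, "theta2": 0.20}   # from the example
prob_R1 = 0.99                            # receiver chosen at random

for theta in ("theta1", "theta2"):
    conditional = p_R2[theta]             # likelihood given R2 was used
    unconditional = prob_R1 * p_R1[theta] + (1 - prob_R1) * p_R2[theta]
    print(theta, conditional, round(unconditional, 4))
# Conditional favours theta1 (0.8 vs 0.2); unconditional favours theta2,
# with values close to those quoted in the example.
```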

A further example regarding ancillarity is provided by reconsidering Example B.2. The difficulty in this case disappears if one works conditionally on the ancillary statistic $|x_1 - x_2|/2$. These examples serve to underline the obvious appeal of a trivial consequence of Bayes' theorem: namely, that one should always condition inferences on whatever information is available; the conditionality "principle" is just a small ad hoc


step towards this rather obvious desideratum (which is, in any case, automatic in the Bayesian approach). From a frequentist viewpoint, however, the conditionality principle is not necessarily easy to apply, since ancillary statistics are not readily identified, and are not necessarily unique. Moreover, applying the conditionality principle may leave the frequentist statistician in an impasse. For example, Basu (1964) noted that if $x$ is uniform on $[\theta, 1 + \theta[$, then the fractional part of $x$ is uniformly distributed on $[0, 1[$ and hence ancillary, but the conditional distribution of $x$ given its fractional part is a useless one-point distribution! See Basu (1992) for further elegant demonstration of the difficulties with ancillary statistics in the frequentist approach.

The Repeated Sampling Principle

A weak version of the repeated sampling principle states that one should not follow statistical procedures which, for some possible value of the parameter, would too frequently give misleading conclusions in hypothetical repetitions. Although this is too vague a formulation on which to base a formal critique, it can be used to criticise specific solutions to concrete problems. A much stronger version of this principle, whose essence is at the heart of frequentist statistics, states that statistical procedures have to be assessed by their performance in hypothetical repetitions under identical conditions. This implies (i) that measures of uncertainty have to be interpreted as long-run hypothetical frequencies, (ii) that optimality criteria have to be defined in terms of long-run behaviour under hypothetical repetitions, and (iii) that there are no means of assessing any finite-sample realised accuracy of the procedures.
Example B.4. (Confidence versus HPD intervals). Let $x = \{x_1, \ldots, x_n\}$ be a random sample from $N(x \mid \mu, 1)$. It is easily seen that $\bar{x}$ is a sufficient statistic, whose sampling distribution is $N(\bar{x} \mid \mu, n)$, a normal distribution centred at the true value of the parameter, with precision $n$. Since the sampling distribution of $\bar{x}$ concentrates around $\mu$, one might expect $\bar{x}$ to be close to $\mu$ on the basis of a large number of hypothetical repetitions of the sample, so that $\bar{x}$ suggests itself as an estimator of $\mu$. Moreover, conditional on $\mu$,
$$
P\left[\, \bar{x} \in \mu \pm 1.96/\sqrt{n} \mid \mu \,\right] = 0.95,
$$
so that, if we define a statistical procedure to consist of producing the interval $\bar{x} \pm 1.96/\sqrt{n}$ whenever a random sample of size $n$ from $N(x \mid \mu, 1)$ is obtained, we are producing an interval which will include the true value of the parameter 95% of the time, in the long run. Note that this says nothing about the probability that $\mu$ belongs to that interval for any given sample. In contrast, the superficially similar statement
$$
P\left[\, \mu \in \bar{x} \pm 1.96/\sqrt{n} \mid \bar{x} \,\right] = 0.95,
$$
which is derived from the reference posterior distribution of $\mu$ given $\bar{x}$, explicitly says that, given $\bar{x}$, the degree of belief is 0.95 that $\mu$ belongs to $\bar{x} \pm 1.96/\sqrt{n}$, and is not concerned at all with hypothetical repetitions of the experiment.
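The frequentist reading of the first statement is easy to verify by simulation; a minimal sketch in Python (illustrative values only):

```python
import numpy as np

# Long-run coverage of the interval xbar +/- 1.96/sqrt(n) for N(mu, 1) data.
rng = np.random.default_rng(2)
mu, n, reps = 3.0, 25, 100_000
xbar = rng.normal(mu, 1.0, size=(reps, n)).mean(axis=1)
half = 1.96 / np.sqrt(n)
coverage = np.mean((xbar - half <= mu) & (mu <= xbar + half))
print(coverage)   # approx 0.95: a statement about repetitions, not about mu
```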

Invariance


If a parametric model $p(x \mid \theta)$ is such that two different data sets, $x_1$ and $x_2$, have the same distribution for every $\theta$, then both the likelihood principle and the mechanics of Bayes' theorem imply that one should derive the same conclusions about $\theta$ from observing $x_1$ as from observing $x_2$.

A more elaborate form of invariance principle involves transformations of both the sample and the parameter spaces. Suppose that, for all the elements $g$ of a group $G$ of transformations of $X$, there is a unique transformation $\bar{g}$ such that $\bar{g}(\Theta) = \Theta$ and $p(x \mid \theta) = p(g(x) \mid \bar{g}(\theta))$. Then the invariance principle would require the conclusions about $\theta$ drawn from the statistic $t(g(x))$ to be the same as those drawn from $\bar{g}(t(x))$. For example, in estimating $\theta \in \Re$ from a location model $p(x \mid \theta) = h(x - \theta)$, it may be natural to consider the group of translations. In this case, $g(x) = x + a$, $a \in \Re$, and $\bar{g}(\theta) = \theta + a$. The invariance principle then requires that any estimate $t(x)$ of $\theta$ should satisfy $t(x + a\mathbf{1}) = t(x) + a$, where $\mathbf{1}$ is a vector of unities. Note that the argument only works if there is no reason to believe that some $\theta$ values are more likely than others.

From a Bayesian point of view, for invariance to be a relevant notion it must be true that the transformation involved also applies to the prior distribution (otherwise, one may have a uniform loss of expected utility from following the invariance principle). Another limitation to the practical usefulness of invariance ideas is the condition that $\bar{g}(\Theta) = \Theta$. Thus, in the location/translation example, the invariance principle could not be applied if it were known that $\theta \geq 0$.

A final general comment. Frequentist procedures centre their attention on producing inference statements about unobservable parameters. As we shall see in Section B.4, such an approach typically fails to produce a sensible solution to the more fundamental problem of predicting future observations.

B.2.3 Likelihood Inference

We recall from Section 5.1 the following trivial consequence of Bayes' theorem. Consider two experiments yielding, respectively, data $x_1$ and $x_2$, with model representations involving the same parameter $\theta \in \Theta$, the same prior, and proportional likelihoods, so that
$$
p(x_1 \mid \theta) = c(x_1, x_2)\, p(x_2 \mid \theta).
$$
Then the experiments produce the same conclusions about $\theta$, since they induce the same posterior distribution. The likelihood principle suggests that this should indeed be the case, for the relative support given by the two sets of data to the possible values of $\theta$ is precisely the same. Frequentist procedures typically violate the likelihood principle, since long-run behaviour under hypothetical repetitions depends on the entire distribution $\{p(x \mid \theta), x \in X\}$ and not only on the likelihood.


As mentioned before, when common priors are used across models with proportional likelihoods, the Bayesian approach automatically obeys the likelihood principle and certainly accepts the likelihood function as a complete summary of the information provided by the data about the parameter of interest. With a uniform prior, the posterior distribution is, of course, proportional to the likelihood function. Proponents of the likelihood approach to inference go further, however, in their uses of the likelihood function, in that they regard it not only as the sole expression of the relevant information, but also as a meaningful relative numerical measure of support for different possible models, or for alternative parameter values within the same model. The basic ideas of this pure likelihood approach were established by Barnard (1949, 1963), Barnard et al. (1962), Birnbaum (1962, 1968, 1972) and Edwards (1972/1992). They essentially argue that (i) the likelihood function conveys all the information provided by a set of data about the relative plausibility of any two different possible values of $\theta$, and (ii) the ratio of the likelihood at two different $\theta$ values may be interpreted as a measure of the strength of evidence in favour of one value relative to the other. Both claims make sense from a Bayesian point of view when there are no nuisance parameters. Indeed, (i) is just a restatement of the likelihood principle and, moreover, it follows from Bayes' theorem that
$$
\frac{p(\theta_1 \mid x)}{p(\theta_2 \mid x)} = \frac{p(x \mid \theta_1)}{p(x \mid \theta_2)} \cdot \frac{p(\theta_1)}{p(\theta_2)},
$$
so that the likelihood ratio satisfies (ii), since it is the factor which modifies prior odds into posterior odds. However, the pure likelihood approach, i.e., the attempt to produce inferences solely based on the likelihood function, breaks down immediately when there are nuisance parameters. The use of marginal likelihoods necessarily requires the elimination of nuisance parameters, but the suggested procedures for doing this seem hard to justify in terms of the likelihood approach. For early attempts, see Kalbfleisch and Sprott (1970, 1973) and Andersen (1970, 1973). In recent years, work has focused on the properties of profile likelihood and its variants. Useful references include: Barnard and Sprott (1968), Barndorff-Nielsen (1980, 1983, 1991), Butler (1986), Davison (1986), Cox and Reid (1987, 1992), Cox (1988), Fraser and Reid (1989), Bjørnstad (1990) and Monahan and Boos (1992). Other references relevant to the interface between likelihood inference and Bayesian statistics include Hartigan (1967), Plante (1971), Akaike (1980b), Pereira and Lindley (1987), Bickel and Ghosh (1990), Goldstein and Howard (1991) and Royall (1992). See, also, Section 5.5.1 for a link with Laplace approximations of posterior densities. For further information on the history of likelihood, see Edwards (1974).

The likelihood approach can also conflict with the weak repeated sampling principle, in that examples exist where, for some possible parameter values, hypothetical repetitions result in mostly misleading conclusions. Frequentist statistics


solves the difficulty by comparing the observed likelihood function with the distribution of the likelihood functions which could have been obtained in hypothetical repetitions; Bayesian statistics solves the problem by working, not with the likelihood function, but with the posterior distribution, defined as the weighted average of the likelihood function with respect to the prior. The following example is due to Birnbaum (1969).
Example B.5. (Naive likelihood versus reference analysis). Consider the model $p(x \mid \theta)$, $x \in \{1, 2, \ldots, 100\}$, $\theta \in \{0, 1, 2, \ldots, 100\}$, where, for $x = 1, 2, \ldots, 100$,
$$
p(x \mid \theta = 0) = 1/100, \qquad
p(x \mid \theta = j) = \begin{cases} 1, & j = x,\\ 0, & j \neq x, \end{cases}
\quad j = 1, \ldots, 100.
$$
Then, whatever $x$ is observed, if $\theta = 0$ the likelihood of the true value is always $1/100$th of the likelihood of the only other possible $\theta$ value, namely $\theta = x$. From a Bayesian point of view, the answer obviously depends on the prior distribution. If all $\theta$ are judged a priori to have the same probability, then one certainly has
$$
p(\theta = 0 \mid x) = 1/101, \qquad p(\theta = x \mid x) = 100/101.
$$
However, if, say, $\theta = 0$ is considered to be special, as might well be the case in any real application of such a model, and is declared to be the parameter of interest, then the reference prior turns out to be
$$
p(\theta = 0) = 1/2, \qquad p(\theta = j) = 1/200, \quad j = 1, \ldots, 100,
$$
and a straightforward calculation reveals that $p(\theta = 0 \mid x) = 1/2$ is also the posterior probability, given a single observation $x$. Thus, with this prior, one observation from the model provides no information about whether or not $\theta = 0$. Of course, for any prior, a second observation would, with high probability, reveal the true value of $\theta$.
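A quick numerical check of these posterior computations (a Python sketch, illustrative only):

```python
import numpy as np

# Likelihood of Example B.5: theta = 0 is uniform over x; theta = j points at x = j.
lik = lambda theta, x: 0.01 if theta == 0 else float(theta == x)

def posterior(prior, x):
    """Posterior over theta = 0, ..., 100 for one observation x."""
    w = np.array([prior[t] * lik(t, x) for t in range(101)])
    return w / w.sum()

x = 17                                          # any observed value
uniform = np.full(101, 1 / 101)
reference = np.array([0.5] + [1 / 200] * 100)
print(posterior(uniform, x)[[0, x]])            # [1/101, 100/101]
print(posterior(reference, x)[[0, x]])          # [0.5, 0.5]
```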

Finally, as we shall discuss further in Section B.4, we note that, like frequentist procedures, the likelihood approach has difficulties in producing an agreed solution to prediction problems.
B.2.4 Fiducial and Related Theories

We noted in Section B.2.3 that frequentist approaches are inherently unable to produce probability statements about the parameter of interest conditional on the data, a form of inference summary that seems most intuitively useful. This fact, coupled with the seeming aversion of most statisticians to the use of prior distributions, has led to a number of attempts to produce "posterior" distributions without using priors. We now review some of those proposals.


Fiducial Inference

In a series of papers published in the thirties, Fisher (1930, 1933, 1935, 1939) developed, through a series of examples and without any formal structure or theory, what he termed the fiducial argument. Essentially, he proposed using the distribution function $F(t \mid \theta)$ of a sufficient estimator $t \in T$ for $\theta \in \Theta$ in order to make conditional probability statements about $\theta$ given $t$, thus somehow transferring the probability measure from $T$ to $\Theta$. However, no formal justification was offered for this controversial transfer.

The basic characteristics of the argument may be described as follows. Let $p(x \mid \theta)$, $\theta \in (\theta_0, \theta_1)$, be a one-dimensional parametric model and let $t = t(x) \in \Re$ be a sufficient statistic for $\theta$. Suppose further that the distribution function of $t$, $F(t \mid \theta)$, is monotone decreasing in $\theta$, with $F(t \mid \theta_0) = 1$ and $F(t \mid \theta_1) = 0$. Then, $G(\theta \mid t) = 1 - F(t \mid \theta)$ has the mathematical properties of a distribution function over $(\theta_0, \theta_1)$ and, hence,
$$
g(\theta \mid t) = \frac{\partial}{\partial \theta}\, G(\theta \mid t) = -\frac{\partial}{\partial \theta}\, F(t \mid \theta)
$$
has the mathematical structure of a posterior density for $\theta$. This is the fiducial distribution of $\theta$, as proposed by Fisher (1930, 1956/1973). The argument is trivially modified if $F(t \mid \theta)$ is monotone increasing in $\theta$, by using $G(\theta \mid t) = F(t \mid \theta)$.
Example B.6. (Fiducial and reference distributions). Let $x = \{x_1, \ldots, x_n\}$ be a random sample from an exponential distribution $p(x \mid \theta) = \mathrm{Ex}(x \mid \theta) = \theta e^{-\theta x}$, with mean $\theta^{-1}$. It is easily verified that $\bar{x}$ is a sufficient statistic for $\theta$, and has a distribution function
$$
F(\bar{x} \mid \theta) = \int_0^{\bar{x}} \mathrm{Ga}(s \mid n, n\theta)\, ds,
$$
which is monotone increasing in $\theta$. Hence, $G(\theta \mid \bar{x}) = F(\bar{x} \mid \theta)$ is monotone increasing from 0 to 1 as $\theta$ ranges over $(0, \infty)$, and the fiducial distribution of $\theta$ is obtained as
$$
g(\theta \mid \bar{x}) = \frac{\partial}{\partial \theta}\, F(\bar{x} \mid \theta)
= \mathrm{Ga}(\theta \mid n, n\bar{x}) \propto \theta^{n-1} e^{-n\theta\bar{x}}.
$$
Note that this has the form $f(\theta \mid \bar{x}) \propto p(x \mid \theta)\, \pi(\theta)$, with $\pi(\theta) = \theta^{-1}$. Since $\pi(\theta) = \theta^{-1}$ is in this case the reference prior for $\theta$, it follows that, in this example, the fiducial distribution coincides with the reference posterior distribution.

This last example suggests that the fiducial argument might simply be a re-expression of Bayesian inference with some appropriately chosen non-informative prior. However, Lindley (1958) established that this is true if, and only if,


the probability model $p(x \mid \theta)$ is such that $x$ and $\theta$ may separately be transformed so as to obtain a new parameter which is a location parameter for the transformed variable. See Seidenfeld (1992) for further discussion.

In one-dimensional problems, the fiducial argument, when applicable, is more or less well defined and often produces reasonable answers, which are nevertheless far better justified from a Bayesian reference analysis viewpoint. However, it is by no means clear, and is in fact a matter of considerable controversy, how the argument might be extended to multiparameter problems. The Royal Statistical Society discussions following the papers by Fieller (1954) and Wilkinson (1977) serve to illustrate the difficulties. Other relevant references are Brillinger (1962) and Barnard (1963).

From a modern perspective, the fiducial argument seems now to have at most historical interest, and that mainly due to the perceived stature of its proponent. As Good (1971) puts it

. . . if we do not examine the fiducial argument carefully, it seems almost inconceivable that Fisher should have made the error which he did in fact make. It is because (i) it seemed so unlikely that a man of his stature should persist in the error, and (ii) because, as he modestly says, his 1930 'explanation left a good deal to be desired', that so many people assumed for so long that the argument was correct. They lacked the daring to question it.
See, however, Efron (1993) for a recently suggested modification of the fiducial distribution which may have better Bayesian properties.
Pivotal Inference

Suppose that, for a given model $p(x \mid \theta)$, with sufficient statistic $t$, it is possible to find some function $h(\theta, t)$ which is monotone increasing in $\theta$ for fixed $t$, and in $t$ for fixed $\theta$, and which has a distribution which only depends on $\theta$ through $h(\theta, t)$. Then, $h(\theta, t)$ is called a pivotal function and the fiducial distribution of $\theta$ may simply be obtained by reinterpreting the probability distribution of $h$ over $T$ as a probability distribution over $\Theta$. Fisher's original argument, as described above, is a special case of this formulation, since $G(\theta \mid t)$ is a pivotal function with a uniform distribution over $[0, 1]$, which is independent of $\theta$.

Barnard (1980b) has tried to extend this idea into a general approach to inference. His basic idea is to produce statements derived from the distribution of an appropriately chosen pivotal function, possibly conditional on the observed values of an ancillary statistic $a(x)$. Partitioning a pivotal function $h(\theta, x) = [g(\theta, x), a(x)]$ to identify a possibly uniquely defined ancillary statistic $a(x)$, and using the distribution of $g(\theta, x)$ conditional on the observed value of $a(x)$, does produce some interesting results in multiparameter problems where the standard fiducial argument fails. However,


the mechanism by which the probability measure is transferred from the sample space to the parameter space remains without foundational justification, and the argument is limited by the availability, by no means obvious, of an appropriate pivotal function for the envisaged problem.
Structural Inference

Yet another attempt at justifying the transfer of the probability measure from the sample space into the parameter space is the structural approach proposed by Fraser (1968, 1972, 1979). Fraser claimed that one often knows more about the relationship between data and parameters than that described by the standard parametric model $p(x \mid \theta)$. He proposes the specification of what he terms a structural model, having two parts: a structural equation, which relates data $x$ and parameter $\theta$ to some error variable $e$; and a probability distribution for $e$ which is assumed known, and independent of $\theta$. Thus, the observed variable $x$ is seen as a transformation of the error $e$, the transformation governed by the value of $\theta$. The key idea is then to reverse this relationship, and to interpret $\theta$ as a transformation of $e$ governed by the observed $x$, so that $\theta$ in a sense inherits the probability distribution.
Example B.7. (Structural and reference distributions). Let $x = \{x_1, \ldots, x_n\}$ be a set of independent measurements with unknown location $\mu$ and scale $\sigma$. If the errors have a known distribution $p(e)$, the structural equation is
$$
x_i = \mu + \sigma e_i, \qquad i = 1, \ldots, n,
$$
and the error distribution is
$$
p(e) = \prod_{i=1}^{n} p(e_i).
$$
If $p(e)$ is normal, this structural model may be reduced, in terms of the sufficient statistics $\bar{x}$ and $s^2$, to the equations $s = \sigma s_e$, $\bar{x} = \mu + \sigma \bar{e}$, with known sampling distributions for $s_e$ and $\bar{e}$. Reversing the probability relationship in the pivotal functions $(n-1)s^2/\sigma^2 \sim \chi^2_{n-1}$ and $\sqrt{n}(\bar{x} - \mu)/s \sim \mathrm{St}(t \mid 0, 1, n-1)$ leads to structural distributions for $\sigma$ and $\mu$ which, as is often the case, coincide with the corresponding reference posterior distributions.


The general formulation of structural inference generalises the affine group structure underlying the last example, and considers a structural equation $x = \theta e$, to be interpreted as the response $x$ generated by some transformation $\theta \in G$ in a group of transformations $G$, operating on a realised error $e$, with a completely identified error distribution for $e$. It is then claimed that $\theta^{-1}(x)$ has the same probability distribution as $e$ and, hence, this may be used to provide a structural distribution for $\theta$. Here, the mechanism by which the probability measure on $X$ is transferred to $\Theta$ is certainly well-defined in the presence of the group structure central to Fraser's argument. However, the group structure is fundamental, and the approach seems to lack general validity and applicability. As Lindley (1969) puts it

. . . Fraser's argument [is] an improvement upon and an extension of Fisher's in the special case where the group structure is present but [one should be] . . . suspicious of any argument . . . that only works in some situations, for inference is surely a whole, and the Poisson distribution [is] not basically different in character from . . . the normal.

When the structural argument can be applied, it produces answers which are mathematically closely related to Bayesian posterior distributions with non-informative priors derived from (group) invariance arguments. In fact, in most examples, the structural distributions are precisely the posterior distributions obtained by using as priors the right Haar measures associated with the structural group, which, in turn, are special cases of reference posterior distributions (see Villegas, 1977a, 1981, 1990; Dawid, 1983b, and references therein).

B.3 STYLISED INFERENCE PROBLEMS

B.3.1 Point Estimation
Let $\{p(x \mid \theta), \theta \in \Theta\}$ be a fully specified parametric family of models and suppose that it is desired to calculate from the data $x$ a single value $\hat{\theta}(x) \in \Theta$ representing the best estimate of the unknown parameter $\theta$. This is the so-called point estimation problem. Note that, in this formulation, the final answer is an element of $\Theta$, with no explicit recognition of the uncertainty involved. Pragmatically, a point estimate of $\theta$ may be motivated as being the simplest possible summary of the inferences to be drawn from $x$ about the value of $\theta$; alternatively, one may genuinely require a point estimate as the solution to a decision problem; for example, adjusting a control mechanism, or setting a stock level.

We recall from Section 5.1.5 that, within the Bayesian framework, the problem of point estimation is naturally described as a decision problem where the set of possible answers to the inference problem, $A$, is the parameter space $\Theta$. Formally,


one specifies the loss function $l(a, \theta)$ which describes the decision maker's preferences in that context, and chooses as the (Bayes) estimate that value $\hat{\theta}(x)$ which minimises the posterior expected loss,
$$
\hat{\theta}(x) = \arg\min_{a \in \Theta} \int_{\Theta} l(a, \theta)\, p(\theta \mid x)\, d\theta.
$$
We have seen (Propositions 5.2 and 5.9) that intuitively natural solutions, such as the mean, mode or median of the posterior distribution of $\theta$, are particular cases of this formulation for appropriately chosen loss functions. We also note that the definition of an optimal Bayesian estimator is constructive, in that it identifies a precise procedure for obtaining the required value.

Classical decision theory ideas can obviously be applied to point estimation viewed as a decision problem. Thus, one may define admissible estimates, minimax estimates, etc., with respect to any particular loss function. From our perspective, the problems and limitations of classical decision theory that we identified in Section B.2.1 carry over to particular applications such as point estimation. Thus, admissible estimators are essentially Bayes estimators, but classical decision theory provides no foundationally justified procedure for choosing among admissible estimators, with, as we noted, the general minimax principle being unpalatable to most statisticians.

The frequentist approach proceeds by defining possible desiderata of the long-run behaviour of point estimators, and, using these desiderata as criteria, proposes methods for obtaining best estimators, and identifies conditions under which good behaviour will result. The criteria adopted are typically non-constructive. The likelihood approach proceeds by using the likelihood function to measure the strength with which the possible parameter values are supported by the data. Hence, the optimal estimator is naturally taken to be that which maximises the likelihood function. It is worth stressing that this is a constructive criterion, in that the very definition of a maximum likelihood estimator (MLE) determines its method of construction.

Fiducial, pivotal and structural inference approaches all produce posterior probability distributions for $\theta$. Hence, their solution to the problem of point estimation is essentially that suggested by the Bayesian approach: either to offer as an estimator of $\theta$ some location measure of the probability distribution of $\theta$ or, more formally, to obtain that value of $\theta$ which minimises some specified loss function with respect to such a distribution.
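For reference, the standard correspondences between loss functions and Bayes estimates (restated here for convenience; they are the content of the propositions just cited) may be summarised as
$$
\begin{aligned}
l(a, \theta) &= (a - \theta)^2 &&\Longrightarrow\ \hat{\theta}(x) = E[\theta \mid x] \quad \text{(posterior mean)},\\
l(a, \theta) &= |a - \theta| &&\Longrightarrow\ \hat{\theta}(x) = \mathrm{med}[\theta \mid x] \quad \text{(posterior median)},\\
l(a, \theta) &= 1 - \mathbf{1}\{|a - \theta| \leq \epsilon\},\ \epsilon \to 0 &&\Longrightarrow\ \hat{\theta}(x) = \mathrm{mode}[\theta \mid x] \quad \text{(posterior mode)}.
\end{aligned}
$$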


Criteria for Point Estimation

It should be clear from Sections 5.1.4 and B.2.2 that the search for good estimators may safely be limited to those based on sufficient statistics, for then, and only then, is one certain to use all the relevant information about the parameter of interest. However, the following two points introduce a note of caution.

(i) Sufficiency is a global concept; thus, if $\hat{\theta}(x)$ is sufficient for $\theta$, it does not follow that $\hat{\theta}_i(x)$ is sufficient for a component parameter $\theta_i$, even if $\hat{\theta}_i(x)$ is sufficient for $\theta_i$ when $\theta - \{\theta_i\}$ is known. For instance, with univariate normal data, $(\bar{x}, s^2)$ is jointly sufficient for $(\mu, \sigma^2)$, but $\bar{x}$ is not sufficient for $\mu$, nor is $s^2$ sufficient for $\sigma^2$.

(ii) Sufficiency is a concept relative to a model; thus, even a small perturbation to the assumed model may destroy sufficiency. For example, $(\bar{x}, s)$ is not sufficient for $(\mu, \sigma)$ if the true model is $\mathrm{St}(x \mid \mu, \sigma, 1000)$ or the mixture form $0.999 \times N(x \mid \mu, \sigma) + 0.001 \times N(x \mid 0, 1)$, even though these two models are indeed very close to $N(x \mid \mu, \sigma)$.

The bias of an estimator $\hat{\theta}(x)$ is defined to be
$$
b(\hat{\theta} \mid \theta) = E[\hat{\theta} \mid \theta] - \theta,
$$
and its mean squared error (mse) to be
$$
\mathrm{mse}(\hat{\theta} \mid \theta) = E[(\hat{\theta} - \theta)^2 \mid \theta].
$$
From a frequentist point of view it is desired that, in the long run, $\hat{\theta}$ should be as close to $\theta$ as possible; thus, if quadratic loss is judged to be an appropriate distance measure, a frequentist would like an estimator $\hat{\theta}$ with small $\mathrm{mse}(\hat{\theta} \mid \theta)$ for almost all values of $\theta$. A concept of relative efficiency is developed in these terms. An estimator $\hat{\theta}_1$ is more efficient than $\hat{\theta}_2$ if, for all $\theta$, $\mathrm{mse}(\hat{\theta}_1 \mid \theta) < \mathrm{mse}(\hat{\theta}_2 \mid \theta)$.

A simple theory is available if attention is restricted to unbiased estimators, i.e., estimators such that $b(\hat{\theta} \mid \theta) = 0$, since then we simply have to minimise $V(\hat{\theta} \mid \theta)$ in this unbiased class. However, although requiring the sampling distribution of $\hat{\theta}$ to be centred at $\theta$ may have some intuitive appeal, there are powerful arguments against requiring unbiasedness. Indeed:

(i) In many problems, there are no unbiased estimators. For instance, $r/n$ is an unbiased estimator of the parameter $\theta$ for a binomial $\mathrm{Bi}(r \mid \theta, n)$ distribution, but there is no unbiased estimator of $\theta^{1/2}$.

(ii) Even when they exist, unbiased estimators may give nonsensical answers, and no theory exists which specifies conditions under which this can be guaranteed not to happen. For example, the (unique) unbiased estimator of the parameter $\theta \in (0, 1)$ of a geometric distribution $p(x \mid \theta) = \theta(1 - \theta)^x$, $x = 0, 1, \ldots$,



is $\hat{\theta}(0) = 1$, $\hat{\theta}(x) = 0$, $x = 1, 2, \ldots$; hardly a sensible solution! Similarly (see Ferguson, 1967), if $\theta$ is the mean of a Poisson distribution, $p(x \mid \theta) = e^{-\theta}\theta^x/x!$, $x = 0, 1, \ldots$, then the only unbiased estimator of $e^{-\theta}$, a quantity which must lie in $(0, 1)$, is 1 if $x = 0$ and 0 otherwise (hardly sensible); but, even more ridiculously, the only unbiased estimator of $e^{-2\theta}$ is $(-1)^x$, leading to the estimate of a probability as $-1$ (for all odd $x$)!

(iii) The unbiasedness requirement violates the likelihood principle, by making the answer dependent on the sampling mechanism. Thus, the unbiased estimator of $\mu$ from a $N(x \mid \mu, \sigma)$ observation is $x$, but the unbiased estimator from
$$
p(x \mid \mu, \sigma) = N(x \mid \mu, \sigma) \quad \text{if } x \leq 100, \qquad = N(x \mid 0, 1) \quad \text{otherwise},
$$
will be something else. Yet, if one is measuring $\mu$ with an instrument which only works for values $x \leq 100$ and obtains $x = 50$, i.e., a valid measurement, it seems inappropriate to make our estimate of $\mu$ dependent on the fact that we might have obtained an invalid measurement, but did not.

(iv) Even from a frequentist perspective, unbiased estimators may well be unappealing if they lead to large mean squared errors, so that an estimator with small bias and small variance may be preferred to one with zero bias but a large variance.

For further discussion of the conflict between Bayes and unbiased estimators, see Bickel and Blackwell (1967). See, also, Wald (1939).

Another frequentist criterion for judging an estimator concerns the asymptotic behaviour of its sampling distribution. If we write $\hat{\theta}_n = \hat{\theta}(x_1, \ldots, x_n)$ to make explicit the dependence of the estimator on the sample size, a frequentist would clearly like $\hat{\theta}_n$ to converge to $\theta$ (in some sense) as $n$ increases. An estimator $\hat{\theta}_n$ is said to be weakly consistent if $\hat{\theta}_n \to \theta$ in probability, and strongly consistent if $\hat{\theta}_n \to \theta$ with probability one. By Chebyshev's inequality, a sufficient condition for the weak consistency of unbiased estimators is that $V(\hat{\theta}_n) \to 0$ as $n \to \infty$. Obviously, a consistent estimator is asymptotically unbiased. For discussion of the consistency of Bayes estimators, see, for example, Schwartz (1965), Freedman and Diaconis (1983), de la Horra (1986) and Diaconis and Freedman (1986a, 1986b). For the frequentist properties of Bayes estimators, see Diaconis and Freedman (1983).
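The claim in (ii) above, that $(-1)^x$ is unbiased for $e^{-2\theta}$, is a one-line power-series calculation (included here for convenience):
$$
E[(-1)^x \mid \theta] = \sum_{x=0}^{\infty} (-1)^x \frac{e^{-\theta}\theta^x}{x!}
= e^{-\theta} \sum_{x=0}^{\infty} \frac{(-\theta)^x}{x!} = e^{-\theta} e^{-\theta} = e^{-2\theta},
$$
and uniqueness follows since the coefficients of a power series in $\theta$ are unique.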
"Optimum" Estimators

We have mentioned before that minimising the variance among unbiased estimators is often suggested as a procedure for obtaining "good" estimators. Sometimes, this procedure is even further restricted to linear functions; thus, provided $\mu = E(x \mid \theta)$ and $\sigma^2 = V(x \mid \theta)$ exist, $\bar{x}$ is said to be the best linear unbiased estimator (BLUE) of $\mu$, in the sense that it has the smallest mse among all linear, unbiased estimators. It is easy, however, to demonstrate, with appropriate examples, that this is a rather restricted view of optimality, since non-linear estimators may be considerably


more efficient. An absolute standard by which unbiased estimators may be judged is provided by the Cramér-Rao inequality. Let $\hat{g} = \hat{g}(x)$ be an unbiased estimator of $g(\theta)$ and define the efficient score function $u(x \mid \theta)$ to be
$$
u(x \mid \theta) = \frac{\partial}{\partial \theta} \log p(x \mid \theta).
$$
Then, under suitable regularity conditions, $E[u(x \mid \theta) \mid \theta] = 0$, and
$$
V[\hat{g} \mid \theta] \geq \frac{[\partial g(\theta)/\partial \theta]^2}{E[u^2(x \mid \theta) \mid \theta]},
$$
with equality if, and only if,
$$
u(x \mid \theta) = k(\theta)\, \{\hat{g}(x) - g(\theta)\},
$$
where $k(\theta)$ does not depend on $x$, in which case $\hat{g}$ is said to be a minimum variance bound (MVB) estimator of $g(\theta)$. It follows that a minimum variance bound estimator must be sufficient, unbiased, and a linear function of the score function.

We have already stressed that limiting attention to unbiased estimators may not be a good idea in the first place. Moreover, the range of situations where optimal unbiased estimators, i.e., the MVB estimators, can be found is rather limited. Indeed, if $\hat{\theta}$ is sufficient for $\theta$ there is a unique function $g(\theta)$ for which a MVB estimator exists, namely that described above. For example, if $x = \{x_1, \ldots, x_n\}$ is a random sample from $N(x \mid 0, \sigma^2)$, then $\sum x_i^2/n$ is a MVB estimator for $\sigma^2$, but no MVB estimator exists for $\sigma$!

One might then ask whether it is at least possible to obtain an unbiased estimator with a variance which is lower than that of any other estimator for each $\theta$, even if it does not reach the Cramér-Rao lower bound. Under suitable regularity conditions, the existence of such uniformly minimum variance (UMV) estimators can indeed be established. Specifically, Rao (1945) and Blackwell (1947) independently proved that if $\hat{\theta}(x)$ is an estimator of $\theta$ and $t = t(x)$ is a sufficient statistic for $\theta$, then, given the value of the sufficient statistic $t$, the conditional expectation of $\hat{\theta}(x)$,

$$
\hat{\theta}(t) = E[\hat{\theta} \mid t] = \int_X \hat{\theta}(x)\, p(x \mid t)\, dx,
$$

is an improved estimator of $\theta$, in the sense that, for every value of $\theta$, $\mathrm{mse}(\hat{\theta}(t) \mid \theta) \leq \mathrm{mse}(\hat{\theta} \mid \theta)$, a result which can be generalised to multidimensional problems. A decision-theoretic consequence of the so-called Rao-Blackwell theorem is that any estimator of $\theta$ which is not a function of the sufficient statistic $t$ must be


inadmissible. However, as a constructive procedure for obtaining estimators this result is of limited value, due to the fact that it is usually very difficult to calculate the required conditional expectation. If $\hat{\theta}(x)$ is unbiased, and there is a complete sufficient statistic $t = t(x)$, then $\hat{\theta}(t)$ is unbiased, and is the UMV estimator of $\theta$. For example, $r/n$ is the MVB estimator of the parameter $\theta$ of a binomial distribution $\mathrm{Bi}(r \mid \theta, n)$, but there is no MVB estimator of $\theta^2$. However, the result may be used to show that $r(r-1)/[n(n-1)]$ is a UMV estimator of $\theta^2$.

MLE estimators are not guaranteed to exist or to be unique; but when they do exist they typically have very good asymptotic properties. Under fairly general conditions, MLEs can be shown to be consistent (hence asymptotically unbiased, even if biased in small samples), asymptotically fully efficient, and asymptotically normal, so that, as $n \to \infty$, the sampling distribution of $\hat{\theta}_n$ converges to the normal distribution $N(\hat{\theta}_n \mid \theta, I(\theta))$, with mean $\theta$ and precision $I(\theta)$, the information function.

Bayesian estimators always exist for appropriately chosen loss functions and automatically use all the relevant information in the data. They are typically biased, and have analogous asymptotic properties to maximum likelihood estimators (i.e., from a frequentist perspective they are consistent, asymptotically fully efficient and, under suitable regularity conditions, asymptotically normal). A famous example is the Pitman estimator (Pitman, 1939), which may be obtained as the posterior mean which corresponds to a uniform prior; see, also, Robert et al. (1993).

Both the likelihood and the Bayesian solutions to the point estimation problem automatically define procedures for obtaining them; the frequentist approach does not (except for special cases like the exponential family). In addition to the MLE approach, other methods of construction include minimum chi-squared, least squares, and the method of moments. However, these methods do not in themselves guarantee any particular properties for the resulting estimators, which usually have to be investigated case by case. Historically, all these construction methods have been used at various times within the frequentist approach to produce candidate good estimators, which have then been analysed using the criteria described above. Nowadays, partly under the influence of classical decision theory, some frequentist statisticians pragmatically minimise an expected posterior loss to obtain an estimator, whose behaviour they then proceed to study using non-Bayesian criteria. For an extensive treatment of the topic of point estimation, see Lehmann (1959/1983).
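The Rao-Blackwell improvement described above is easy to exhibit numerically; a minimal Python sketch (with illustrative values, not from the text) conditions a crude unbiased Bernoulli estimator on the sufficient statistic.

```python
import numpy as np

# Rao-Blackwell illustration: Bernoulli(theta) sample, crude unbiased
# estimator delta(x) = x_1, improved by conditioning on t = sum(x),
# since E[x_1 | t] = t/n.
rng = np.random.default_rng(3)
theta, n, reps = 0.3, 20, 200_000
x = rng.binomial(1, theta, size=(reps, n))
crude = x[:, 0].astype(float)             # unbiased; variance theta(1-theta)
improved = x.mean(axis=1)                 # E[x_1 | t] = t/n
print(np.mean((crude - theta) ** 2))      # ~ 0.21
print(np.mean((improved - theta) ** 2))   # ~ 0.0105, uniformly smaller mse
```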

B.3.2 Interval Estimation
Let $\{p(x \mid \theta), \theta \in \Theta\}$ be a fully specified parametric family of models and suppose that it is desired to calculate, from the data $x$, a region $C(x)$ within which the parameter $\theta$ may reasonably be expected to lie. Thus, rather than mapping $X$


into $\Theta$, as in point estimation, a subset of $\Theta$ is associated with each value of $x$, whose elements may be claimed to be supported by the data as "likely" values of the unknown parameter $\theta$. This is the so-called region estimation problem; when $\theta$ is one-dimensional, the regions obtained are typically intervals, hence the more standard reference to the interval estimation problem. Region estimates of $\theta$ may be motivated pragmatically as informative simple summaries of the inferences to be drawn from $x$ about the value of $\theta$ or, more formally, as a set of $\theta$ values which may safely be declared to be consistent with the observed data.

We recall from Section 5.1.5 that, within a Bayesian framework, credible regions provide a sensible solution to the problem of region estimation. Indeed, for each $\alpha$ value, $0 < \alpha < 1$, a $100(1 - \alpha)\%$ credible region $C$, i.e., such that
$$
\int_C p(\theta \mid x)\, d\theta = 1 - \alpha,
$$
contains the true value of the parameter with (posterior) probability $1 - \alpha$ and, among such regions, those of the smallest size, i.e., the highest posterior density (HPD) regions, suggest themselves as summaries of the inferential content of the posterior distribution. Note that this formulation is equally applicable to prediction problems, simply by using the corresponding posterior predictive distribution.
Confidence Limits

For $0 < \alpha < 1$ and scalar $\theta \in \Theta \subseteq \Re$, a statistic $\bar{\theta}^{\alpha}(x)$ such that, for all $\theta$,
$$
\Pr\{\theta \leq \bar{\theta}^{\alpha}(x) \mid \theta\} = 1 - \alpha,
$$
and such that if $\alpha_1 > \alpha_2$ then $\bar{\theta}^{\alpha_1} \leq \bar{\theta}^{\alpha_2}$, is called an upper confidence limit for $\theta$ with confidence coefficient $1 - \alpha$. Note that if $g$ is strictly increasing, then $g(\bar{\theta}^{\alpha})$ is an upper confidence limit for $g(\theta)$. The nesting condition is important to avoid inconsistency; see, e.g., Plante (1984, 1991). Given $x$, the specific interval $(-\infty, \bar{\theta}^{\alpha}(x)]$ is then typically interpreted as a region where, given $x$, the parameter $\theta$ may reasonably be expected to lie. It is crucial, however, to recognise that the only proper probability interpretation of a confidence interval is that, in the long run, a proportion $1 - \alpha$ of the $\bar{\theta}^{\alpha}(x)$ values will be larger than $\theta$. Whether or not the particular $\bar{\theta}^{\alpha}(x)$ which corresponds to the observed data $x$ is smaller or greater than $\theta$ is entirely uncertain. One only has the rather dubious "transferred assurance" from the long-run definition.

A lower confidence limit $\underline{\theta}_{\alpha}(x)$ is similarly defined as a statistic such that $\Pr\{\underline{\theta}_{\alpha}(x) \leq \theta \mid \theta\} = 1 - \alpha$, with the corresponding nesting property. Combining a lower limit at confidence level $1 - \alpha_1$ with an upper limit at confidence level

$1 - \alpha_2$, we obtain a two-sided confidence interval $[\underline{\theta}_{\alpha_1}(x), \bar{\theta}^{\alpha_2}(x)]$ at confidence level $1 - \alpha_1 - \alpha_2$, such that, for all $\theta$,
$$
\Pr\{\underline{\theta}_{\alpha_1}(x) \leq \theta \leq \bar{\theta}^{\alpha_2}(x) \mid \theta\} = 1 - \alpha_1 - \alpha_2.
$$

For two-sided confidence intervals, a convenient choice is $\alpha_1 = \alpha_2$, which produces central confidence intervals based on equal tail-area probabilities. There are, however, other alternatives.

(i) Shortest confidence intervals. For fixed $\alpha_1 + \alpha_2 = \alpha$, $\alpha_1$ and $\alpha_2$ may be chosen to minimise the expected interval length $E_{x \mid \theta}[\bar{\theta}^{\alpha_2}(x) - \underline{\theta}_{\alpha_1}(x) \mid \theta]$. It must be realised, however, that shortest intervals for $\theta$ do not generally transform to shortest intervals for functions $g(\theta)$. It can be proved that intervals based on the score function have asymptotically minimum expected length; moreover, the fact that $u$ has a sampling distribution which is asymptotically normal $N(u \mid 0, I(\theta))$, with mean 0 and precision $I(\theta)$, may be used to provide approximate confidence intervals for $\theta$.

(ii) Most selective intervals. For fixed $\alpha_1 + \alpha_2 = \alpha$, one could try to choose $\alpha_1$ and $\alpha_2$ to minimise the probability that the interval contains false values of $\theta$. However, such uniformly most accurate intervals are not guaranteed to exist.

It is worth noting that, for a variety of reasons, the construction of confidence intervals is by no means immediate.

(i) They typically do not exist for arbitrary confidence levels when the model is discrete.

(ii) There is no general constructive guidance on which particular statistic to choose in constructing the interval.

(iii) There are serious difficulties in incorporating any known restrictions on the parameter space, and no systematic procedure exists for incorporating such knowledge in the construction of confidence intervals.

(iv) In multiparameter situations, the construction of simultaneous confidence intervals is rather controversial. It is less than obvious whether one should use the confidence limits associated with individual intervals, or whether one should think of the problem as that of estimating a region for a single vector parameter, or as one of considering the probability that a number of confidence statements are simultaneously correct.

(v) Interval estimation in the presence of nuisance parameters is another controversial topic. Unless appropriate pivotal quantities can be found, the properties of various alternative procedures, typically based on replacing the unknown nuisance parameters by estimates, are generally less than clear.


(vi) Interval estimation of future observations poses yet another set of difficulties. Unless one is able to find a function of the present and future observations whose sampling distribution does not depend on the parameters (and this is not typically the case), one is again limited to ad hoc approximations based on substituting estimates for parameters.
But, even in the simplest case where $\theta$ is a scalar parameter labelling a continuous model $p(x \mid \theta)$, the concept of a confidence interval is open to what many would regard as a rather devastating criticism: namely, the fact that the confidence limits can turn out to be either vacuous or just plain silly in the light of the observed data. We give two examples.
(i) In the Fieller-Creasy problem, where the parameter of interest is the ratio of two normal means, there are values $\alpha < 1$ such that, for a subset of possible data with positive probability, the corresponding $1 - \alpha$ confidence interval is the entire real line. Solemnly quoting the whole real line as a 95% confidence interval for a real parameter is not a good advertisement for statistics. For Bayesian solutions, see Bernardo (1977) and Raftery and Schweder (1993).

(ii) If $x_1$ and $x_2$ are two random observations from a uniform distribution on the interval $(\theta - 0.5, \theta + 0.5)$, and $y_1$ and $y_2$ are, respectively, the smaller and the larger of these two observations, then it is easily established that, for all $\theta$,
$$
\Pr\{y_1 \leq \theta \leq y_2 \mid \theta\} = 0.5,
$$
so that $(y_1, y_2)$ provides a 50% confidence interval. However, if for the observed data it turns out that $y_2 - y_1 \geq 0.5$, then certainly $y_1 < \theta < y_2$, so that we know for sure that $\theta$ belongs to the interval $(y_1, y_2)$, even though the confidence level of the interval is only 50%.
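The relevant-subset behaviour in (ii) is easy to exhibit numerically; a minimal Python sketch (illustrative only):

```python
import numpy as np

# Uniform(theta - 0.5, theta + 0.5) pairs; (y1, y2) is a 50% confidence
# interval for theta, yet it provably contains theta whenever y2 - y1 >= 0.5.
rng = np.random.default_rng(4)
theta = 0.0
x = rng.uniform(theta - 0.5, theta + 0.5, size=(200_000, 2))
y1, y2 = x.min(axis=1), x.max(axis=1)
covered = (y1 <= theta) & (theta <= y2)
print(covered.mean())              # approx 0.5 overall
wide = y2 - y1 >= 0.5
print(covered[wide].mean())        # exactly 1.0 on this subset of the data
```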
These examples reflect the inherent difficulty that the frequentist approach to statistics has in being unable to condition on the complete observed data. Conditioning on ancillary statistics, when possible, may mitigate this problem, but it certainly does not solve it and, as discussed in Section B.2.2, it may create others. The reader interested in other blatant counterexamples to the (unconditional) frequentist approach to statistics will find references in the literature under the keywords relevant subsets, which refer to subsets of the sample space yielding special information and subverting the "long-run" or "on average" frequentist viewpoint. Two important such references are Robinson (1975) and Jaynes (1976); see, also, Buehler (1959), Basu (1964, 1988), Cornfield (1969), Pierce (1973), Robinson (1979a, 1979b), Casella (1987, 1992), Maatta and Casella (1990) and Goutis and Casella (1991).

As a final point, we should mention that for many of the standard textbook examples of confidence intervals (typically those which can be derived from univariate continuous pivotal quantities), the quoted intervals are numerically equal


to credible regions of the same level obtained from the corresponding reference posterior distributions. This means that, in these cases, the intuitive interpretation that many users (incorrectly, of course!) tend to give to frequentist intervals of confidence $1 - \alpha$, namely that, given the data, there is probability $1 - \alpha$ that the interval contains the true parameter value, would in fact be correct, if described, instead, as a reference posterior credible interval. A typical example of this situation is provided by the class of intervals
$$
\bar{x} \pm t_{\alpha}\, s/\sqrt{n - 1},
$$
for the mean of a normal distribution with unknown precision. These are both the best confidence intervals for $\mu$, derivable from the sampling distribution of the pivotal quantity $\sqrt{n-1}(\bar{x} - \mu)/s$, and also the credible intervals which correspond to the reference posterior distribution for $\mu$, $\pi(\mu \mid x) = \mathrm{St}(\mu \mid \bar{x}, (n-1)s^{-2}, n-1)$, derived in Example 5.17. Buehler and Feddersen (1963) demonstrated that relevant subsets exist even in this standard case. Indeed, if $x = \{x_1, x_2\}$, then $C = \{x_{\min}, x_{\max}\}$ is a 50% interval for $\mu$, but if both observations belong to the set

then $\Pr\{\mu \in C \mid x \in R, \mu, \sigma\} = 0.5181$. Pierce (1973) has shown that similar situations can occur whenever the confidence interval cannot be interpreted as a credible region corresponding to a posterior distribution with respect to a proper prior. Note that, although this long-term coverage probability is not directly relevant to a Bayesian, the example suggests that special care should be exercised when interpreting posterior distributions obtained from improper priors. Casella et al. (1993) have proposed, for interval estimation, alternative loss functions to the standard linear functions of volume and coverage probability.

B.3.3 Hypothesis Testing
Let {p(x | θ), θ ∈ Θ} be a fully specified parametric family of models, with Θ partitioned into two disjoint subsets Θ₀ and Θ₁, and suppose that we wish to decide whether the unknown θ lies in Θ₀ or in Θ₁. If H₀ denotes the hypothesis that θ ∈ Θ₀ and H₁ the hypothesis that θ ∈ Θ₁, we have a decision problem, with only two possible answers to the inference problem, a₀ ≡ accept H₀ or a₁ ≡ accept H₁, where the choice is to be made on the basis of the observed data x. This is the so-called problem of hypothesis testing. In most such problems, the two hypotheses are not symmetrically treated; the working hypothesis H₀ is usually called the null hypothesis, while H₁ is referred to as the alternative hypothesis. Although the theory can easily be extended to any finite number of alternative hypotheses, we will present our discussion in terms of a single alternative hypothesis.


We recall from Section 6.1 that, within a Bayesian framework, the problem of hypothesis testing, as formulated above, can be appropriately treated using standard decision-theoretical methodology; that is, by specifying a prior distribution and an appropriate utility function, and maximising the corresponding posterior expected utility. We also recall that the solution to the decision problem posed generally depends on whether or not the "true" model is assumed to be included in the family of analysed models. Assuming the stylised M-closed case, where the true model is assumed to belong to the family {p(x | θ), θ ∈ Θ} and the utility structure is simply

$$u(a_i,\theta)=0,\quad\theta\in\Theta_i,\ i=0,1;\qquad u(a_i,\theta)=-l_{ij},\quad\theta\in\Theta_j,\ j\ne i,$$

we have seen (Proposition 6.1) that the null hypothesis H₀ should be rejected if, and only if,

$$\frac{\Pr(\theta\in\Theta_0\mid x)}{\Pr(\theta\in\Theta_1\mid x)}<\frac{l_{01}}{l_{10}}\,.$$

This corresponds to checking whether the appropriate (integrated) likelihood ratio, or Bayes factor,

$$B_{01}(x)=\frac{\int_{\Theta_0}p(x\mid\theta)\,p(\theta)\,d\theta\,/\,p(M_0)}{\int_{\Theta_1}p(x\mid\theta)\,p(\theta)\,d\theta\,/\,p(M_1)}\,,$$

is smaller than a cut-off point which depends on the ratio l₀₁/l₁₀ of the losses incurred, respectively, by accepting a false null and rejecting a true null, and on the ratio of the prior probabilities of the hypotheses,

$$p(M_i)=\int_{\Theta_i}p(\theta)\,d\theta,\qquad i=0,1.$$
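As a simple numerical illustration of this machinery (our own sketch, with an invented data set, a normal model with unit variance, a standard normal prior and H₀: θ ≤ 0 as assumed choices), the Bayes factor B₀₁(x) may be computed by direct numerical integration:

```python
# A hedged sketch of the integrated likelihood ratio B01: N(theta, 1) data,
# H0: theta <= 0 vs H1: theta > 0, and a standard normal prior p(theta).
# All numbers here are illustrative choices, not taken from the text.
import numpy as np
from scipy import integrate, stats

x = np.array([0.8, 1.1, 0.3, 0.9])

def integrand(theta):
    return np.prod(stats.norm.pdf(x, loc=theta, scale=1.0)) * stats.norm.pdf(theta)

num, _ = integrate.quad(integrand, -np.inf, 0.0)   # over Theta_0
den, _ = integrate.quad(integrand, 0.0, np.inf)    # over Theta_1

p_M0 = stats.norm.cdf(0.0)                         # prior mass of Theta_0 (= 0.5)
p_M1 = 1.0 - p_M0
B01 = (num / p_M0) / (den / p_M1)                  # Bayes factor in favour of H0
print(B01)   # reject H0 when B01 falls below the loss/prior-odds cut-off
```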

From the point of view of classical decision theory, the problem of hypothesis testing is naturally posed in terms of decision rules. Thus, a decision rule for this problem (henceforth called a test procedure δ, or simply a test δ) is specified in terms of a critical region R_δ, defined as the set of x values such that H₀ is rejected whenever x ∈ R_δ. The most relevant frequentist aspect of such a procedure δ is its power function

$$\mathrm{pow}(\theta\mid\delta)=\Pr\{x\in R_\delta\mid\theta\},$$

which specifies, as a function of θ, the long-run probability that the test rejects the null hypothesis H₀. Obviously, the ideal power function would be

$$\mathrm{pow}(\theta\mid\delta)=0,\quad\theta\in\Theta_0;\qquad\mathrm{pow}(\theta\mid\delta)=1,\quad\theta\in\Theta_1,$$

although, naturally, one will seldom be able to derive a test procedure with such an ideal power function. For any θ ∈ Θ₀, pow(θ | δ) is the long-run probability of incorrect rejection of the null hypothesis; frequentist statisticians often specify an upper bound for such a probability, which is then called the level of significance of the tests to be considered. The size of any specific test δ is defined to be

$$\alpha(\delta)=\sup_{\theta\in\Theta_0}\mathrm{pow}(\theta\mid\delta);$$

thus, to specify a significance level α₀ is to restrict attention to those tests whose size is not larger than α₀.

Either Θ₀ or Θ₁ may contain just a single value of θ. In this case, the corresponding hypothesis is referred to as a simple hypothesis; if Θᵢ contains more than one value of θ, then Hᵢ is referred to as a composite hypothesis. For any test procedure δ one may explicitly consider two types of error: rejecting a true null hypothesis, a so-called error of type 1, and accepting a false null hypothesis, a so-called error of type 2. Let us denote by α(δ | θ) and β(δ | θ) the respective probabilities of these two types of error,

$$\alpha(\delta\mid\theta)=\Pr\{x\in R_\delta\mid\theta\}\ \ \text{if }\theta\in\Theta_0,\qquad\alpha(\delta\mid\theta)=0\ \ \text{otherwise},$$

$$\beta(\delta\mid\theta)=\Pr\{x\notin R_\delta\mid\theta\}\ \ \text{if }\theta\in\Theta_1,\qquad\beta(\delta\mid\theta)=0\ \ \text{otherwise}.$$

It would obviously be desirable to identify tests which keep both error probabilities as small as possible. However, typically, modifying R_δ to reduce one would make the other larger. Hence, one usually tries to minimise some function of the two; for example, a linear combination aα(δ | θ) + bβ(δ | θ).

Testing Simple Hypotheses

When both H₀ and H₁ are simple hypotheses, so that α(δ | θ) = α(δ | θ₀) = α(δ) and β(δ | θ) = β(δ | θ₁) = β(δ), it can be proved that a test which minimises aα(δ) + bβ(δ) should reject H₀ if, and only if,

$$\frac{p(x\mid\theta_0)}{p(x\mid\theta_1)}<\frac{b}{a}\,,$$

i.e., if the likelihood ratio in favour of the null is smaller than the ratio of the weights given to the two kinds of error. This can be seen as a particular case of the Bayesian solution recalled above, and is closely related to the Neyman-Pearson lemma (Neyman and Pearson, 1933, 1967) which says that a test which minimises β(δ) subject to α(δ) ≤ α must reject H₀ if, and only if,

$$\frac{p(x\mid\theta_0)}{p(x\mid\theta_1)}<k,$$

for some appropriately chosen constant k.
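A minimal sketch of this result (assuming, for illustration, N(μ, 1) data and the simple hypotheses μ = 0 versus μ = 1) confirms numerically that the cut-off minimising aα(δ) + bβ(δ) is the likelihood ratio threshold b/a:

```python
# Illustrative sketch, not from the book: for H0: mu = 0 vs H1: mu = 1 with
# N(mu, 1) data, the test minimising a*alpha + b*beta rejects H0 when the
# likelihood ratio p(x|0)/p(x|1) = exp(n(1/2 - xbar)) < b/a, i.e., when the
# sample mean exceeds a threshold.
import numpy as np
from scipy import stats

n, a, b = 10, 1.0, 2.0
s = 1.0 / np.sqrt(n)                      # sd of the sample mean

c = np.linspace(-1.0, 2.0, 2001)          # candidate cut-offs: reject if xbar > c
alpha = 1.0 - stats.norm.cdf(c, loc=0.0, scale=s)   # type 1 error
beta = stats.norm.cdf(c, loc=1.0, scale=s)          # type 2 error
c_best = c[np.argmin(a * alpha + b * beta)]

c_theory = 0.5 - np.log(b / a) / n        # analytic optimum from the lemma
print(c_best, c_theory)                   # should agree closely
```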


It has become standard practice among many frequentist statisticians to choose a significance level α₀ (often "conventional" quantities such as 0.05 or 0.01) and then to find a test procedure which minimises β(δ) among all tests such that α(δ) ≤ α₀ (rather than explicitly minimising some combination of the two probabilities of error). The Neyman-Pearson lemma shows explicitly how to derive such a test, but it should be emphasised that this is not a sensible procedure. Indeed:

(i) With discrete data one cannot attain a fixed specific size α₀ without recourse to auxiliary, irrelevant randomisation, whereas minimisation of a linear combination of the form aα(δ) + bβ(δ) can always be achieved. For a Bayesian view on randomisation, see Kadane and Seidenfeld (1986).

(ii) More importantly, by fixing α(δ) and minimising β(δ) one may find that, with large sample sizes, H₀ is rejected when p(x | H₀) is far larger than p(x | H₁), due to the fact that the minimising β(δ) may be extremely small compared with the fixed α(δ). Although this can be avoided by carefully selecting α(δ) as a decreasing function of the sample size, it seems far more natural to minimise a linear combination aα(δ) + bβ(δ) of the two error probabilities, in which case no difficulties of this type can arise.

Other strategies for the choice of α(δ) and β(δ) have been proposed. For example, in the 0-1 loss case, α(δ) = β(δ) corresponds to the minimax principle. However, it is important to note (see, e.g., Lindley, 1972) that minimising a linear combination of the two types of error is actually the only coherent way of making a choice, in the sense that no other procedure is equivalent to minimising an expected loss.

Composite Alternative Hypotheses

In spite of the difficulties described above, frequentist statisticians have traditionally defined an optimal test δ to be one which minimises β(δ | θ) for a fixed significance level α₀. In terms of the power function, this implies deriving a test δ such that

$$\mathrm{pow}(\theta\mid\delta)\le\alpha_0,\qquad\theta\in\Theta_0,$$

and for which pow(θ | δ) is as large as possible in Θ₁. A test procedure δ* is called a uniformly most powerful (UMP) test, at level of significance α₀, if α(δ* | θ) ≤ α₀ and, for any other δ such that α(δ | θ) ≤ α₀, pow(θ | δ) ≤ pow(θ | δ*) for all θ ∈ Θ₁. It can be proved that, when θ is one-dimensional, UMP tests often exist for one-sided alternative hypotheses.

A model {p(x | θ), θ ∈ Θ ⊆ ℜ} is said to have a monotone likelihood ratio in the statistic t = t(x) if, for all θ₁ < θ₂, p(x | θ₂)/p(x | θ₁) is an increasing function of t. If p(x | θ) has a monotone likelihood ratio in t and c is a constant such that

$$\Pr\{t\ge c\mid\theta_0\}=\alpha_0,$$

then the test δ which rejects H₀ if t ≥ c is a UMP test of the hypothesis H₀ ≡ θ ≤ θ₀ versus the alternative H₁ ≡ θ > θ₀, at the level of significance α₀. However, UMP tests do not generally exist.
Example B.8. (Non-existence of a UMP test). If x = {x₁, . . . , xₙ} is a random sample from a normal distribution N(x | μ, 1), then the test δ₁ defined by the critical region R_{δ₁} = {x; x̄ − μ₀ > 1.282/√n} is a UMP test for H₀ ≡ μ ≤ μ₀ versus H₁ ≡ μ > μ₀, with α₀ = 0.10 significance level. Similarly, the test δ₂ defined by R_{δ₂} = {x; μ₀ − x̄ > 1.282/√n} is a UMP test for H₀ ≡ μ ≥ μ₀ versus H₁ ≡ μ < μ₀, with the same level. Since these critical regions are different, it follows that there is no UMP test for μ = μ₀ versus μ ≠ μ₀.

The fact, illustrated in the above example, that UMP tests typically do not exist for two-sided alternatives suggests that a less demanding criterion must be used if one is to define a best test among those with a fixed significance level. Since the power function pow(θ | δ) describes the probability that the test δ rejects the null, it seems desirable that, when H₀ is true, pow(θ | δ) should be smaller in Θ₀ than elsewhere. A test δ is called unbiased if, for any pair θ₀ ∈ Θ₀ and θ₁ ∈ Θ₁, it is true that pow(θ₀ | δ) ≤ pow(θ₁ | δ).
Example B.9. (Comparative power of related tests). If x = {x₁, . . . , xₙ} is a random sample from a normal distribution N(x | μ, 1), then the test δ₃ defined by the critical region

$$R_{\delta_3}=\{x;\ |\bar{x}-\mu_0|>1.645/\sqrt{n}\}$$


is an unbiased test for H₀ ≡ μ = μ₀ versus H₁ ≡ μ ≠ μ₀. Figure B.2 compares the power of this test with those defined in Example B.8, and with that of a typical non-symmetric test δ₄ of the same level, which has the critical region

$$R_{\delta_4}=\{x;\ \bar{x}-\mu_0>c_2/\sqrt{n}\ \ \text{or}\ \ \mu_0-\bar{x}>c_1/\sqrt{n}\},$$

for suitably chosen constants c₁ < c₂. It seems obvious that δ₄, which is more cautious about accepting values of μ larger than μ₀ than about accepting values of μ smaller than μ₀, should be preferred to the unbiased test whenever the consequences of the first class of errors are more serious, or whenever values of μ smaller than μ₀ are considered to be more likely.

Figure B.2 Power of tests for the mean of a normal distribution; μ₀ = 1, n = 30, c₁ = 1.40, c₂ = 2.05
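The power curves compared in Figure B.2 are straightforward to compute; the following sketch (our addition, using the constants quoted in the figure caption) evaluates the four power functions on a grid of μ values:

```python
# A sketch computing the power curves behind Figure B.2, using the caption's
# constants mu0 = 1, n = 30, c1 = 1.40, c2 = 2.05 and the critical regions of
# delta_1, delta_2 (Example B.8), delta_3 and delta_4 as given in the text.
import numpy as np
from scipy import stats

mu0, n = 1.0, 30
s = 1.0 / np.sqrt(n)                     # standard deviation of xbar
mu = np.linspace(0.0, 2.0, 201)

def pr_reject(lo, hi):
    # Pr{ xbar < lo or xbar > hi | mu }; use -inf/inf for one-sided tests
    return stats.norm.cdf(lo, mu, s) + stats.norm.sf(hi, mu, s)

pow1 = pr_reject(-np.inf, mu0 + 1.282 * s)            # H1: mu > mu0
pow2 = pr_reject(mu0 - 1.282 * s, np.inf)             # H1: mu < mu0
pow3 = pr_reject(mu0 - 1.645 * s, mu0 + 1.645 * s)    # unbiased two-sided
pow4 = pr_reject(mu0 - 1.40 * s, mu0 + 2.05 * s)      # asymmetric delta_4

print(pow3[100], pow4[100])   # both sizes ~0.10 at mu = mu0 (grid index 100)
```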

It is clear from Example B.9 above that, even when they exist, unbiased procedures may only be reasonable in special circumstances. We are drawn again to the general comment that, in any decision procedure, prior information and utility preferences should be an integral part of the solution. Yet another approach to defining a good test when UMP tests do not exist is to focus attention on local power, by requiring the power function to be maximised in a neighbourhood of the null hypothesis. Under suitable regularity conditions, locally most powerful tests may be derived by using the sampling distribution of the efficient score function in a process which is closely related to that described in our discussion of interval estimation. However, the requirement of maximum local power does not say anything about the behaviour of the test in a region of high power and, indeed, locally most powerful tests may be very inappropriate when the true value of θ is far from θ₀.

Methodological Discussion

Testing hypotheses using the frequentist methodology described above may be misleading in many respects. In particular:
(i) It should be obvious that the all too frequent practice of simply quoting whether or not a null hypothesis is rejected at a specified significance level α₀ ignores a lot of relevant information. Clearly, if such a test is to be performed, the statistician should report the cut-off point α such that H₀ would not be rejected for any level of significance smaller than α. This value is called the tail area or p-value corresponding to the observed value of the statistic. An added advantage of this approach is that there is no need to select beforehand an arbitrary significance level. As noted in the case of confidence intervals, there is a tendency on the part of many users to interpret a p-value as implying that the probability that H₀ is true is smaller than the p-value. Not only, of course, is this false within the frequentist framework but, in this case, there is, in general, no simple form of reinterpretation which would have a Bayesian justification, so that, even numerically, p-values cannot generally be interpreted as posterior probabilities. For detailed discussions see, for example, Berger (1985a), Berger and Delampady (1987) and Berger and Sellke (1987). See Casella and Berger (1987) for an attempted reconciliation in the case of one-sided tests.

(ii) Another statistical tradition related to hypothesis testing consists of declaring an observed value statistically significant, implying that there exists statistical evidence which is sufficient to reject the null hypothesis, whenever the corresponding tail area is smaller than a conventional value such as 0.05 or 0.01. However, since the classical theory of hypothesis testing does not make any use of a utility function, there is no way to assess formally whether or not the true value of the parameter θ, which may well be numerically different from a hypothetical value θ₀, is significantly different from θ₀ in the sense of implying any practical difference. Thus, a vote proportion of 34% for a political party is technically different from a proportion of 34.001%, but under most plausible utility functions the difference has no political significance.

(iii) Finally, the mutual inconsistency of frequentist desiderata often makes it impossible, even in the theory's own terms, to identify the most appropriate procedure. For example, if x is a random sample from N(x | μ, m²λ) with precision determined by a random integer m, then m is ancillary and hence, by the conditionality principle, tests on μ or λ should condition on the observed m. Yet, Durbin (1969) showed that, at least asymptotically, unrestricted tests may be uniformly more powerful.
See Chernoff (1951) and Stein (1951) for further arguments against standard hypothesis testing.

B.3.4 Significance Testing

In the previous section, we have reviewed the problem of hypothesis testing where, given a family {p(x | θ), θ ∈ Θ}, a null hypothesis H₀ ≡ θ ∈ Θ₀ is tested against (at least) one specific alternative. In this section we shall review the problem of pure significance tests, where only the null hypothesis H₀ ≡ {p(x | θ), θ ∈ Θ₀} has been initially proposed, and it is desired to test whether or not the data x are compatible with this hypothesis, without considering specific alternatives. The null hypothesis may be either simple, if it completely specifies a density p(x | θ₀), or composite. We recall from Section 6.2 that, within the Bayesian framework, the problem of significance testing, as formulated above, could be solved by embedding the hypothetical model in some larger class {p(x | θ), θ ∈ Θ}, designed either to contain actual alternatives of practical interest, or formal alternatives generated by selecting a mathematical neighbourhood of H₀. For any discrepancy measure d(θ) describing, for each θ, the conditional utility difference, and for any function ε₀(x) describing the additional utility obtained by retaining H₀ because of its special status, we showed that H₀ should be rejected if

$$t(x)>\varepsilon_0(x),$$

where

$$t(x)=\int_{\Theta}d(\theta)\,p(\theta\mid x)\,d\theta$$

is the expected posterior discrepancy. In particular, we proposed the logarithmic discrepancy

$$d(\theta)=\int p(y\mid\theta)\,\log\frac{p(y\mid\theta)}{p(y\mid\theta_0)}\,dy$$

as a reasonable general discrepancy measure. This (fully Bayesian) procedure could be described as that of selecting a test statistic t(x) which is expected to measure the discrepancy between H₀ and the true model, and rejecting H₀ if t(x) is larger than some cut-off point ε₀(x) describing the additional utility of keeping H₀ if it were true, due to its special status corresponding to simplicity (Occam's razor), scientific support (or fashion), or whatever. From a frequentist point of view, a test statistic t = t(x) is selected with two requirements in mind.

(i) The sampling distribution of t under the null hypothesis, p(t | H₀), must be known and, if H₀ is composite, p(t | H₀) should be the same for all θ ∈ Θ₀.

(ii) The larger the value of t, the stronger the evidence of the departure from H₀ of the kind which it is desired to test.
Then, given the data x, a p-value or significance level is calculated as the probability, conditional on H₀, that, in repeated samples, t would exceed the observed value t(x), so that p is given by

$$p=\Pr\{t\ge t(x)\mid H_0\}.$$
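As a concrete illustration (an invented example; the choice of model and statistic is ours), the tail area may be computed either exactly, when the null sampling distribution of t is known, or by Monte Carlo simulation under H₀:

```python
# A minimal sketch of a pure significance test under illustrative assumptions:
# under H0 the data are N(0,1), the chosen statistic is t(x) = |xbar|, and the
# p-value is the probability, under H0, of exceeding the observed value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(0.4, 1.0, size=25)        # data (actually from a shifted model)
t_obs = abs(x.mean())

# Exact tail area: under H0, xbar ~ N(0, 1/n)
p_exact = 2 * stats.norm.sf(t_obs, scale=1 / np.sqrt(x.size))

# Monte Carlo version of the same tail area
t_rep = np.abs(rng.normal(0.0, 1.0, size=(100_000, x.size)).mean(axis=1))
p_mc = (t_rep >= t_obs).mean()
print(p_exact, p_mc)
```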

Small values of p are regarded as strong evidence that H₀ should be rejected. The result of the analysis is typically reported by stating the p-value and declaring that H₀ should be rejected at all significance levels which are not smaller than p. Comparison with the Bayesian analogue summarised above prompts the following remarks.
(i) The frequentist theory does not generally offer any guidance on the choice of an appropriate test statistic (the generalised likelihood ratio test, a disguised Bayes factor, seems to be the only proposal). While in the Bayesian analysis t(x) is naturally and constructively derived as an expected measure of discrepancy, the frequentist statistician must, in general, rely on intuition to select t. The absence of declared alternatives even precludes the use of the frequentist optimality criteria used in hypothesis testing.

(ii) Even if a function t = t(x) is found which may be regarded as a sensible discrepancy measure, the frequentist statistician needs to determine the unconditional sampling distribution of t under H₀; this may be very difficult, and actually impossible when there are nuisance parameters. Moreover, in the more interesting situation of composite null hypotheses, it is required that p(t | θ) be the same for all θ in Θ₀, which, often, is simply not the case.

(iii) If a measure of the strength of evidence against H₀ is all that is required, the position of the observed value of t with respect to its posterior predictive distribution p(t | x, H₀) under the null hypothesis seems a more reasonable, more relevant answer than quoting the realised p-value. Indeed, the compatibility of t(x) with H₀ may be described by quoting the HPD intervals to which it belongs, or may be measured with any proper scoring rule such as A log p(t(x) | x, H₀) + B. Thus, in Figure B.3, t₁(x) may readily be accepted as compatible with H₀ while t₂(x) may not.

Figure B.3 Visualising the compatibility of t(x) with H₀

(iv) If a decision on whether or not to reject H₀ has to be made, this should certainly take into account the advantages of keeping H₀, i.e., defining the cut-off point in terms of utility. We described in Section 6.2 how this may actually be chosen to guarantee a specified significance level, but this is only one possible choice, not necessarily the most appropriate in all circumstances.

We should finally point out that most of the criticisms already made about hypothesis testing are equally applicable to significance testing. Similarly, criticisms made of confidence intervals typically apply to significance testing, since confidence intervals can generally be thought of as consisting of those null values which are not rejected under a significance test.

B.4 COMPARATIVE ISSUES

B.4.1 Conditional and Unconditional Inference

At numerous different points of this Appendix we have emphasised the following essential difference between Bayesian and frequentist statistics: Bayesian statistics directly produces statements about the uncertainty of unknown quantities, either parameters or future observations, conditional on known data; frequentist statistics produces probability statements about hypothetical repetitions of the data conditional on the unknown parameter, and then seeks (indirectly) ways of making this relevant to inferences about the unknown parameters given the observed data. Indeed, the problem at the very heart of the frequentist approach to statistics is that of connecting aggregate, long-run sampling properties under hypothetical repetitions, to specific inferences of a totally different type. Not only may one dispute the existence of the conceptual "collective" where these hypothetical repetitions might take place, but the relevance of the aggregate, long-run properties for specific inference problems seems, at best, only tangential.

It is useful to distinguish between two very different concepts, initial precision and final precision, introduced by Savage (1962). Thus, frequentist procedures are designed in terms of their expected behaviour over the sample space; they typically have average characteristics which describe, for each value of the unknown parameters, the "precision" we may initially expect, before the data are collected. Thus, for example, one might expect that, in the long run, the true mean μ will be included in 95% of the intervals of the form x̄ ± 1.96/√n which might be constructed by repeated sampling from a normal distribution with known unit precision. A far more pertinent question, however, is the following: given the observed x̄, which derives from the observed sample (which typically will not be repeated in any actual practice), how close is the unknown μ to the observed x̄? Within the frequentist approach, one must rest on the rather dubious "transferred" properties of the long-run behaviour of the procedure, with no logical possibility of assessing the relevant final precision. Thus, p-values or confidence intervals are largely irrelevant once the sample has been observed, since they are concerned with events which might have occurred, but have not. Indeed, to quote Jeffreys (1939/1961, p. 385),


. . . a hypothesis which may be true may be rejected because it has not predicted observable results which have not occurred. This seems a remarkable procedure.
The following example, taken from Welch (1939), further illustrates the difference between initial and final precision.
Example B.10. (Initial and final precision). Let x = {x₁, . . . , xₙ} be a random sample from a uniform distribution over ]μ − 1/2, μ + 1/2[. It is easily verified that the midrange μ̃ = (x_min + x_max)/2 is a very efficient estimator, with a sampling variance of the order of 1/n², rather than the usual 1/n, so that, from large samples, we may expect, on average, very precise estimates of μ. Suppose, however, that we obtain a specific large sample with a small range; this is, admittedly, unlikely, but nevertheless possible. Given the sample, and using a uniform (reference) prior for μ, we can only really claim that μ ∈ ]x_max − 1/2, x_min + 1/2[ (since the reference posterior distribution is uniform on that interval). Thus, if the actual data turn out this way, the final precision of our inferences about μ is bound to be rather poor, no matter how efficient the estimator μ̃ was expected to be.
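A small simulation (our illustrative addition) makes the contrast explicit: the midrange is extremely precise on average, yet the length of the reference posterior interval depends entirely on the range of the sample actually obtained:

```python
# A simulation sketch of Example B.10: the midrange has sampling variance of
# order 1/n^2, yet the final precision is governed by the observed range,
# via the reference posterior ]x_max - 1/2, x_min + 1/2[.
import numpy as np

rng = np.random.default_rng(2)
mu, n = 0.0, 100
x = rng.uniform(mu - 0.5, mu + 0.5, size=(20_000, n))

mid = (x.min(axis=1) + x.max(axis=1)) / 2
print(mid.var())          # ~ 1/(2 n^2): excellent "initial" precision

width = 1.0 - (x.max(axis=1) - x.min(axis=1))   # posterior interval length
print(width.mean(), width.max())   # the "final" precision varies by sample
```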

The need for conditioning on observed data can be partially met in frequentist procedures by conditioning on an ancillary statistic. Indeed, we saw in Examples B.2 and B.3 that it is easy to construct examples where totally unconditional procedures produce ludicrous results. However, as pointed out in our discussion of the conditionality principle, there remain many problems with conditioning on ancillary statistics; they are not easily identifiable, they are not necessarily unique and, moreover, conditioning on an ancillary statistic can yield a totally uninformative sampling distribution, and can conflict with other frequentist desiderata, such as the search for maximum power in hypothesis testing; see, for example, Basu (1964, 1992) and Cox and Reid (1987). See, also, Berger (1984b).

B.4.2 Nuisance Parameters and Marginalisation

Most realistic probability models make the sampling distribution dependent not only on the unknown quantity of primary interest but also on some other parameters. Thus, the full parameter vector θ can typically be partitioned into θ = {φ, λ}, where φ is the subvector of interest and λ is the complementary subvector of θ, often referred to as the vector of nuisance parameters. We recall from Section 5.1 that, within a Bayesian framework, the presence of nuisance parameters does not pose any formal, theoretical problems. Indeed, the desired result, namely the (marginal) posterior distribution of the parameter of interest, can simply be written as

$$p(\phi\mid x)=\int p(\phi,\lambda\mid x)\,d\lambda,$$

where the full posterior p(φ, λ | x) is directly obtained from Bayes' theorem. The situation is very different from a frequentist point of view. Indeed, the problem posed by the presence of nuisance parameters is only satisfactorily solved within a pure frequentist framework in those few cases where the optimality criterion used leads to a procedure which depends on a statistic whose sampling distribution does not depend on the nuisance parameter. Frequentist inference about the mean of a normal distribution with unknown variance, based on the Student-t statistic, whose sampling distribution does not involve the variance, provides the best known example. In general, frequentists are forced to use approximate methods, typically based on asymptotic theory. Indeed, some statisticians see this as the main motivation for developing asymptotic results:
A serious difficulty is that the techniques . . . for problems with nuisance parameters are of fairly restricted applicability. It is, therefore, essential to have widely applicable procedures that in some sense provide good approximations when "exact" solutions are not available . . . the central idea being that when the number n of observations is large and errors of estimation correspondingly small, simplifications become available that are not available in general. (Cox and Hinkley, 1974, p. 279, our italics)

However, even the domain of "fairly restricted applicability" resulting from reliance on asymptotic methods can be problematic. In an early paper, Neyman and Scott (1948) illustrated such problems by considering models with many nuisance parameters of the type

$$p(x\mid\theta,\lambda)=\prod_{i=1}^{n}p(x_i\mid\theta,\lambda_i),$$

where a new nuisance parameter λᵢ is introduced with each observation. Note that such models are not unrealistic: for example, xᵢ may be a physiological measurement on individual i, which may have a normal distribution with mean λᵢ and common variance θ, the latter being the parameter of interest. Kiefer and Wolfowitz (1956) and Cox (1975) proposed solutions for this type of problem based on treating the λᵢ's as independent observations from some distribution. From a Bayesian viewpoint, this, of course, then becomes a case of hierarchical modelling as discussed in Section 4.6.5.

The only general alternative strategy which has been proposed to avoid resorting to asymptotics when exact methods are not available is to use a modified form of likelihood (estimated, conditional, or marginal), where the dependence on the nuisance parameters has been reduced or eliminated.

An estimated likelihood is obtained by replacing nuisance parameters by (for example) their maximum likelihood estimates. This procedure does not take account of the uncertainty due to the lack of knowledge about the nuisance parameters, and may be misleading both in the precision and in the location associated with inferences about the parameters of interest. For example, in linear regression with many regressors, substitution of the regression coefficients by their maximum likelihood estimates leads to an estimate of the variance which is misleadingly precise.

Marginal and conditional likelihoods are based on breaking the likelihood function into two factors, either using invariance arguments or conditioning on sufficient statistics for the nuisance parameters. In both cases, one factor provides a likelihood function for the parameter of interest while the other is assumed to contain no information about the parameter of interest in the absence of knowledge about the nuisance parameter. Key references to this approach are Kalbfleisch and Sprott (1970, 1973) and Andersen (1970, 1973).
There are, however, two main problems with this type of approach.

(i) They are not general and can only be applied under rather specific circumstances.

(ii) They critically depend on the highly controversial notion of a function not containing relevant information in the absence of knowledge about the nuisance parameters, for which no operational definition has ever been provided.

In the cases where the techniques can be applied, and a consensus seems to exist about this vague information condition, the resulting forms tend to coincide, as one might expect, with the integrated likelihood

$$p(x\mid\phi)=\int p(x\mid\phi,\lambda)\,\pi(\lambda\mid\phi)\,d\lambda,$$
integrated with respect to the conditional reference prior distribution π(λ | φ) of the nuisance parameters given the parameter of interest. Profile likelihood provides a much more refined version of this approach, which often gives answers which closely correspond to Bayesian marginalisation results; the Fieller-Creasy problem concerning the ratio of normal means provides a typical example (see Bernardo, 1977). For further discussion and extensive references, see, for example, Barndorff-Nielsen (1983, 1991), Cox and Reid (1987), Cox (1988) and Fraser and Reid (1989). Another suggestion, closely related to fiducial inference, is the implied likelihood (Efron, 1993). Liseo (1993) shows that reference posterior credible regions have better frequentist coverage properties than those obtained from likelihood methods. For a Bayesian overview of methods for treating nuisance parameters, see Basu (1977), Dawid (1980a), Willing (1988) and Albert (1989).
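As a numerical illustration of the Bayesian marginalisation p(φ | x) = ∫ p(φ, λ | x) dλ discussed above (a sketch with invented data, a normal model with mean φ and precision λ, and the prior π(φ, λ) ∝ λ⁻¹ as an assumed choice), the integration over the nuisance parameter may be carried out directly:

```python
# A hedged numerical sketch: the marginal posterior of the parameter of
# interest phi is obtained by integrating the (unnormalised) joint posterior
# over the nuisance parameter lambda; data, model and prior are illustrative.
import numpy as np
from scipy import integrate, stats

x = np.array([1.2, 0.7, 1.9, 1.1, 0.4])   # invented data, model N(phi, 1/lambda)

def joint(phi, lam):
    # unnormalised p(phi, lambda | x) = likelihood x prior, prior = 1/lambda
    return np.prod(stats.norm.pdf(x, phi, 1.0 / np.sqrt(lam))) / lam

phi_grid = np.linspace(-1.0, 3.0, 201)
marg = np.array([integrate.quad(lambda lam: joint(p, lam), 1e-6, 50.0)[0]
                 for p in phi_grid])
marg /= marg.sum() * (phi_grid[1] - phi_grid[0])   # normalise on the grid
print(phi_grid[np.argmax(marg)])                   # posterior mode, ~ x.mean()
```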

B.4.3 Approaches to Prediction

The general problem of statistical prediction may be described as that of inferring the values of unknown observable variables from currently available information. Thus, from data x, usually a random sample {x₁, . . . , xₙ}, inference statements are desired about, as yet, unobserved data y, often x_{n+1} (the original problem considered by Bayes, 1763, in a binomial setting). We recall from Section 5.1.3 that, from a Bayesian point of view, with an operationalist concern with modelling uncertainty in terms of observables, Bayes' theorem, in its central role as a coherent learning process about parameters, is just a convenient step in the process of passing from

$$p(x)=\int_{\Theta}\prod_{i=1}^{n}p(x_i\mid\theta)\,p(\theta)\,d\theta$$

to

$$p(y\mid x)=\int_{\Theta}p(y\mid\theta)\,p(\theta\mid x)\,d\theta,$$

by means of p(θ | x) ∝ p(x | θ) p(θ). Since any valid coherent inferential statement about y given x is contained in the posterior predictive distribution p(y | x), no special theory has to be developed. Of course, the inferential content of the predictive distribution may be appropriately summarised by location or spread measures, respectively providing estimators of y, such as the mean and the mode of p(y | x), or interval estimates of y, such as the class of HPD intervals which may be derived from p(y | x). Moreover, if one is faced with a decision problem whose utility function u(a, y) involves a future observable, then p(y | x) becomes the necessary ingredient in determining the optimal action, a*, which maximises the appropriate (posterior) expected utility
$$\bar{u}(a)=\int u(a,y)\,p(y\mid x)\,dy.$$

The range of potential applications of these ideas is extensive.


(i) Density estimation. The action space consists of the class of sampling distributions; the predictive distribution, which is the posterior expectation of the sampling distribution, is, for squared error loss, the optimal estimator of the sampling distribution.

(ii) Calibration. Two observations (x₁ᵢ, x₂ᵢ) are made on a set of n individuals using two different measuring procedures, and it is desired to estimate the measurement y₂ that the second procedure would yield on a new individual, given that the measurement using the first procedure has turned out to be y₁. The solution is a simple exercise in probability calculus leading to the required posterior predictive density p(y₂ | y₁, x₁, x₂).


(iii) Classification. This is a particular case of the problem of calibration where the x₂ᵢ's (and y₂) can only take on a discrete, usually finite, set of values.

(iv) Regulation. In contexts analogous to (ii) and (iii), it is desired to select and fix a value of y₁ so that y₂ is as close as possible to a prescribed value. The solution is obtained by minimising the expectation of an appropriate loss function with respect to the predictive distribution p(y₂ | y₁, x₁, x₂). The particular case of optimisation obtains when it is desired to make y₂ as large (or small) as possible.

(v) Model comparison. In a setting with alternative models, the latter may be compared in terms of their predictive posterior probabilities (cf. Section 6.1).

(vi) Model criticism. The compatibility of a given model with observed data may be assessed by comparing the realised value of a test statistic with its predictive distribution under that model (cf. Section 6.2).

For further details of the systematic use of predictive ideas, the reader is referred, for example, to Roberts (1965), Geisser (1966, 1974, 1980b, 1988) and Zellner (1986b). The books by Aitchison and Dunsmore (1975) and Geisser (1993) contain a wealth of detailed discussion of prediction problems, including those involving decision making. Applications of predictive ideas to classification, calibration, regulation, optimization and smoothing are found, for instance, in Dunsmore (1966, 1968, 1969), Bernardo (1988), Racine-Poon (1988), Klein and Press (1992), Lavine and West (1992) and Zidek and Weerahandi (1992). See, also, Gelfand and Desu (1968) and Amaral-Turkman and Dunsmore (1985). It is important to recall here (see Section 5.1) that, by virtue of the representation theorems, parameters are limiting forms of observables and, hence, inference about parameters may be seen as a limiting form of predictive inference about observables. Although in practice it is usually convenient to work via parametric models, this point, stressed by de Finetti (1970/1974, 1970/1975), has considerable theoretical importance. Cifarelli and Regazzini (1982), among others, have continued this tradition by trying to develop a completely predictive approach which bypasses entirely the use of parametric models. We should emphasise again that the all too often adopted naive solution of prediction based on the plug-in estimate form

$$p(y\mid x)\simeq p(y\mid\hat{\theta}),$$
effectively replacing the posterior distribution by a degenerate distribution assigning probability one to an estimator of θ, usually the maximum likelihood estimate, is bound to give misleadingly overprecise inference statements about y, since it effectively ignores the uncertainty about θ. The point is illustrated in detail by Aitchison and Dunsmore (1975).
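The overprecision of the plug-in approach is easily quantified; in the following sketch (our addition, assuming a N(θ, 1) model and a uniform prior, for which y | x ~ N(x̄, 1 + 1/n)), the plug-in predictive intervals are visibly too narrow:

```python
# A sketch contrasting the plug-in predictive p(y | theta_hat) with the full
# posterior predictive for a N(theta, 1) model under a flat prior: the latter
# correctly inflates the predictive variance from 1 to 1 + 1/n.
import numpy as np
from scipy import stats

x = np.array([0.2, 1.4, 0.9, 0.5, 1.1, 0.8])   # invented data
n, xbar = x.size, x.mean()

plug_in = stats.norm(loc=xbar, scale=1.0)                    # ignores uncertainty
predictive = stats.norm(loc=xbar, scale=np.sqrt(1 + 1 / n))  # y | x ~ N(xbar, 1 + 1/n)

for q in (0.025, 0.975):
    print(plug_in.ppf(q), predictive.ppf(q))   # plug-in interval is too narrow
```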


By comparison, the possibilities for frequentist-based prediction are fairly limited. They are essentially limited to producing tolerance regions, R(x), designed to guarantee that, in the long run, a proportion p of possible samples x would produce regions R(x) such that

$$\Pr[y\in R(x)\mid\theta]=1-\alpha,\qquad\text{for all }\theta\in\Theta,$$

i.e., regions which, for all parameter values, will contain a proportion 1 − α of future observations. If this sounds obscure, particularly in comparison with the simple idea of an HPD region from the predictive distribution p(y | x), we can but agree! Moreover:

(i) In order to construct a tolerance region it is essential to find a function of y and x with a sampling distribution which does not involve θ, something which is typically only possible in very simple stylised problems.

(ii) The difficulties of transferring the long-run aggregate properties of confidence intervals into inference statements conditional on the observed data are even more acute in a tolerance region setting.

Descriptions of the frequentist approach to prediction are given in Cox (1975), Mathiasen (1979) and Barndorff-Nielsen (1980); Guttman (1970) provides a comparison between frequentist tolerance regions and HPD regions from predictive distributions. Kalbfleisch (1971) was one of the first to examine likelihood methods for prediction. Essentially, with t denoting a sufficient statistic for θ, he proposed computing a predictive distribution of the form

$$p(y\mid t)=\int p(y\mid\theta)\,f(\theta\mid t)\,d\theta,$$
whenever a fiducial distribution for 8 , / ( 8 1 t). can be derived from the sampling distribution o f t . Of course, the method is not always applicable; moreover. even when it is, it may lead to inconsistent results when the fiducial distribution is not a Bayes posterior. For instance, in the discussion which follows Kalbfleishs paper, Lindley points out that if the model is
H (.r p(.. 18) = - + I)c-.
.I

O+l

> 0. N > 0
+ I

and the method is applied both to obtain directly p(x_{n+1} | x₁, . . . , xₙ) and to obtain p(x_{n+1}, x_{n+2} | x₁, . . . , xₙ) and then p(x_{n+1} | x₁, . . . , xₙ) from the joint predictive, one obtains different answers. This is an interesting example of the fact that fiducial distributions do not necessarily have basic coherence properties unless they are equivalent to Bayesian posterior distributions.


Since the late 1970s a variety of more sophisticated likelihood prediction methods have been proposed, some sufficiency-based, others relating to profile likelihood ideas. Seminal contributions include those by Hinkley (1979), Lejeune and Faulkenberry (1982), Butler (1986) and Lane and Sudderth (1989). A review is given by Bjørnstad (1990), and a further overview is provided by Geisser (1993). A more radical approach to prediction is set out in Dawid (1984), who sets out a theory of prequential analysis. This is closely related to our view that a model or theory is simply a probability forecasting system, but Dawid's theory is not predicated on such a system necessarily being Bayesian. Instead, the basic ingredients are simply two sequences; one a string of observations, the other a string of probability forecasts. Theoretical developments requiring an extension of the standard Kolmogorov (1933/1950) framework for probability are pursued in Vovk (1993a). See, also, Vovk (1993b) and Vovk and Vyugin (1993). Links with stochastic complexity (Solomonoff, 1978; Rissanen, 1987, 1989; Wallace and Freeman, 1987) are reviewed in Dawid (1992).

B.4.4 Aspects of Asymptotics

In most statistical problems, a number of simplifications become available when the sample size becomes sufficiently large. In frequentist statistics, this is often the only way to obtain analytic results. From a Bayesian point of view, such simplifications are never theoretically necessary although, of course, they often make computations easier and sometimes provide valuable analytic insight. We recall from Section 5.3 that, as the sample size increases, the posterior distribution of the parameter of interest θ converges to a degenerate distribution which gives probability one to the true parameter value when the parameter of interest is discrete and, under suitable regularity conditions when the parameter of interest is continuous, converges to a normal distribution N(θ | θ̂ₙ, H(θ̂ₙ)), with precision matrix whose general element is

$$H_{ij}(\hat{\theta}_n)=-\left.\frac{\partial^2}{\partial\theta_i\,\partial\theta_j}\log p(x\mid\theta)\right|_{\theta=\hat{\theta}_n}.$$
The most frequently used asymptotic results in frequentist statistics concern the large sample behaviour of the maximum likelihood estimate θ̂ₙ which, under suitable regularity conditions (mathematically usually closely related to those required to guarantee posterior asymptotic normality), may be shown to have an asymptotically normal sampling distribution N(θ̂ₙ | θ, n I(θ)), with precision matrix whose general element is

$$I_{ij}(\theta)=-\int p(x\mid\theta)\,\frac{\partial^2}{\partial\theta_i\,\partial\theta_j}\log p(x\mid\theta)\,dx.$$
For details, see, for example, LeCam (1956, 1970, 1986), and references therein.
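A quick numerical check (our illustrative sketch, using a Bernoulli model for which I(θ) = 1/[θ(1 − θ)]) shows the agreement between the posterior precision H(θ̂ₙ) and nI(θ) for large n:

```python
# A small sketch checking that H(theta_hat), the negative log-likelihood
# Hessian at the mle, approaches n I(theta) for Bernoulli(theta) data,
# where I(theta) = 1/(theta(1 - theta)). The data are simulated.
import numpy as np

rng = np.random.default_rng(3)
theta, n = 0.3, 5000
x = rng.binomial(1, theta, size=n)
theta_hat = x.mean()

H = x.sum() / theta_hat**2 + (n - x.sum()) / (1 - theta_hat)**2  # observed information
nI = n / (theta * (1 - theta))                                   # n x Fisher information
print(H, nI)   # close for large n
```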


Since it is easily established that, for large n, H(θ̂ₙ) converges to nI(θ) and since, asymptotically, the sampling distribution of θ̂ₙ becomes a location model for θ, it follows (Lindley, 1958) that the reference posterior distribution for θ and the asymptotic fiducial distribution of θ based on the sampling distribution of θ̂ₙ are asymptotically equivalent. Moreover, the maximum likelihood estimator of θ and the asymptotic confidence intervals based on θ̂ₙ will be, respectively, numerically identical to the mode (or the mean) and the HPD intervals based on the reference posterior distribution (or any other posterior distribution based on a reasonably well-behaved prior). These results explain the fact that, for large samples (relative to the dimensionality of the parametric model component), there are typically very few numerical differences between Bayesian inferential statements and frequentist statements based on asymptotic properties. This asymptotic equivalence carries over, of course, to a number of applications. For example:

(i) We showed (Corollary 2 to Proposition 5.17) that if θ is asymptotically normal N(θ | θ̂ₙ, H(θ̂ₙ)) then, under appropriate regularity conditions, g(θ) is asymptotically normal

$$N\!\left(g(\theta)\ \Big|\ g(\hat{\theta}_n),\ H(\hat{\theta}_n)\,[g'(\hat{\theta}_n)]^{-2}\right).$$

The frequentist equivalent (typically derived using the delta method for determining the asymptotic distribution of an estimator) is that if θ̂ₙ has an asymptotic sampling distribution N(θ̂ₙ | θ, n I(θ)), then g(θ̂ₙ) has an asymptotic sampling distribution

$$N\!\left(g(\hat{\theta}_n)\ \Big|\ g(\theta),\ n\,I(\theta)\,[g'(\theta)]^{-2}\right).$$
(ii) The predictive distribution p(y | x) is asymptotically approached by p(y | θ̂ₙ).

(iii) The action which maximises the posterior expected utility is, asymptotically, the same as that which maximises u(a, θ̂ₙ).

In the fictional world of unlimited data, numerical differences between frequentist and Bayesian solutions would tend to disappear with increasing sample size although, even then, differences in interpretation would persist. However, in the real world of limited data relative to the (often multiparameter) models required for realism, there is no reason to expect, in general, close coincidence of numerical solutions.
B.4.5 Model Choice Criteria

We have discussed earlier, in Sections B.3.3 and B.3.4, the hypothesis and significance testing approaches to parametric hypotheses, but have noted that, in general,


no satisfactory exact procedures exist. This may be because of a lack of simplification via sufficiency or invariance arguments, resulting in intractable distributions, or because a procedure cannot be found which has uniformly optimal properties through the range of parameter values under the alternative hypothesis. A procedure frequently adopted in such situations is the so-called general maximum likelihood ratio test, which we describe first for the case of a simple null hypothesis, θ = θ₀ ∈ ℜᵏ, and observations x = (x₁, . . . , xₙ). The procedure is motivated by considering the ratio

$$r(x)=\frac{p(x\mid\theta_0)}{p(x\mid\hat{\theta})},$$
where θ̂ is the maximum likelihood estimate. Intuitively, small values of r(x) suggest rejection of the null hypothesis, but using this type of test requires deriving the distribution of r(x), which is, in general, not possible. However, a simple asymptotic argument (see, for example, Cox and Hinkley, 1974, Section 9.3) reveals that, for suitable regularity conditions, under the null hypothesis, as n → ∞, λ(x) = −2 log r(x) has a limiting χ²ₖ distribution. The procedure is easily extended to the case of a composite null hypothesis θ ∈ Θ₀ ⊂ ℜᵏ. If the alternative hypothesis is θ ∈ Θ₁, we define Θ = Θ₀ ∪ Θ₁, and we consider the ratio

$$r(x)=\frac{\sup_{\theta\in\Theta_0}p(x\mid\theta)}{\sup_{\theta\in\Theta}p(x\mid\theta)}\,.$$

In this case, asymptotic analysis reveals that, for suitable regularity conditions, λ(x) = −2 log r(x) has a limiting χ²_d distribution, where d is the difference in dimensionality, dim(Θ) − dim(Θ₀), of the general and null hypothesis parameter spaces, respectively. It is interesting to compare this with a widely used Bayesian form of assessment of null and alternative models. Schwarz (1978) shows that, asymptotically,

$$-2\log B_{01}=\lambda(x)-d\log n,$$

where B₀₁ is the Bayes factor (Section 6.1.4). We see, therefore, that the so-called Schwarz criterion for model choice adjusts the −2 log r(x) factor by a log n multiple of the dimensionality difference. An earlier proposal for adjusting the general likelihood ratio criterion is that of Akaike (1973, 1974, 1978b, 1987), whose so-called Akaike Information Criterion (AIC) takes the form

$$\mathrm{AIC}=\lambda(x)-2d.$$

See, also, Akaike (1978b, 1979) for a Bayesian extension (BIC) of the AIC procedure.
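The criteria above are easily compared numerically; the following sketch (our addition, testing H₀: μ = 0 within a N(μ, 1) model, so that d = 1 and λ(x) = n x̄²) computes λ(x) together with the AIC and Schwarz adjustments:

```python
# An illustrative sketch, not from the book: for H0: mu = 0 within N(mu, 1),
# theta_hat = xbar, d = 1, and lambda(x) = -2 log r(x) = n * xbar^2.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
x = rng.normal(0.25, 1.0, size=50)      # data drawn off the null, illustratively
n, d = x.size, 1

def loglik(mu):
    return stats.norm.logpdf(x, mu, 1.0).sum()

lam = -2 * (loglik(0.0) - loglik(x.mean()))   # -2 log r(x); chi2_d under H0

print(lam - 2 * d)           # AIC-type adjustment
print(lam - d * np.log(n))   # Schwarz adjustment, ~ -2 log B01 asymptotically
```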


Yet another variant is found in Nelder and Wedderburn (1972), whose suggestion for goodness-of-fit comparisons of general linear models through plotting degrees of freedom against deviance is, in effect, the criterion

$$\lambda(x)-d.$$
These and other related proposals are reviewed from a Bayesian perspective in Smith and Spiegelhalter (1980). See Stone (1977, 1979a) for further discussion and comparison. Roughly speaking, the Akaike criterion can be derived from a Bayes factor perspective as corresponding to a prior which concentrates on a neighbourhood of the alternative which is close, in an appropriate sense depending on n, to the null. The Schwarz criterion is derived from a Bayes factor perspective through a prior which does not depend on n. Finally, we note that the prequential theory of Dawid (1984) (see, also, Section B.4.3) directly embraces the view that models are simply predictive tools and should be compared on that basis, but does not necessarily use a Bayesian mechanism for such prediction. In Dawid (1992), it is shown that a particular form of so-called prequential assessment, based on the logarithmic scoring rule, leads to a model choice criterion which is asymptotically equivalent to the Schwarz criterion. It is also shown that this approach is essentially equivalent to model choice procedures arising in the stochastic complexity theory of Rissanen (1987).


References

Abramowitz, M. and Stegun, I. A. (1964). Handbook of Mathematical Functions. New York: Dover.
Achcar, J. A. and Smith, A. F. M. (1989). Aspects of reparameterisation in approximate Bayesian inference. Bayesian and Likelihood Methods in Statistics and Econometrics: Essays in Honor of George A. Barnard (S. Geisser, J. S. Hodges, S. J. Press and A. Zellner, eds.). Amsterdam: North-Holland, 439-452.
Aczél, J. and Pfanzagl, J. (1966). Remarks on the measurement of subjective probability and information. Metrika 11, 91-105.
Aitchison, J. (1964). Bayesian tolerance regions. J. Roy. Statist. Soc. B 26, 161-175.
Aitchison, J. (1966). Expected cover and linear utility tolerance intervals. J. Roy. Statist. Soc. B 28, 57-62.
Aitchison, J. (1968). In discussion of Dempster (1968). J. Roy. Statist. Soc. B 30, 234-237.
Aitchison, J. (1970). Choice against Chance. An Introduction to Statistical Decision Theory. Reading, MA: Addison-Wesley.
Aitchison, J. and Dunsmore, I. R. (1975). Statistical Prediction Analysis. Cambridge: University Press.
Aitken, C. G. G. and Stoney, D. A. (1991). The Use of Statistics in Forensic Science. Chichester: Ellis Horwood.
Aitkin, M. (1991). Posterior Bayes factors. J. Roy. Statist. Soc. B 53, 111-142 (with discussion).
Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. 2nd Int. Symp. Information Theory. Budapest: Akademia Kiado, 267-281.


Akaike, H. (1974). A new look at the statistical model identification. IEEE Trans. Automatic Control 19, 716-727.
Akaike, H. (1978a). A new look at the Bayes procedure. Biometrika 65, 53-59.
Akaike, H. (1978b). A Bayesian analysis of the minimum AIC procedure. Ann. Inst. Statist. Math. 30, 9-14.
Akaike, H. (1979). A Bayesian extension of the minimum AIC procedure of autoregressive model fitting. Biometrika 66, 53-59.
Akaike, H. (1980a). The interpretation of improper prior distributions as limits of data dependent proper prior distributions. J. Roy. Statist. Soc. B 42, 46-52.
Akaike, H. (1980b). Likelihood and the Bayes procedure. Bayesian Statistics (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Valencia: University Press, 144-166 and 185-203 (with discussion).
Akaike, H. (1987). Factor analysis and the AIC. Psychometrika 52, 317-332.
Albert, J. H. (1989). Nuisance parameters and the use of exploratory graphical methods in Bayesian analysis. Amer. Statist. 43, 191-196.
Albert, J. H. (1990). Algorithms for Bayesian computing using Mathematica. Computing Science and Statistics: Proceedings of the Symposium on the Interface (C. Page and R. LePage, eds.). Berlin: Springer, 286-290.
Albert, J. H. (1993). Teaching Bayesian statistics using sampling methods and MINITAB. Amer. Statist. 47, 182-191.
Aldous, D. (1985). Exchangeability and Related Topics. Berlin: Springer.
Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: critique des postulats et axiomes de l'école américaine. Econometrica 21, 503-546.
Allais, M. and Hagen, D. (1979). Expected Utility Hypotheses and the Allais Paradox. Dordrecht: Reidel.
Amaral-Turkman, M. A. and Dunsmore, I. R. (1985). Measures of information in the predictive distribution. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 603-612.
Ameen, J. R. M. (1992). Non-linear prediction models. J. Forecasting 11, 309-324.
Amster, S. J. (1963). A modified Bayes stopping rule. Ann. Math. Statist. 34, 1404-1413.
Andersen, E. B. (1970). Asymptotic properties of conditional maximum-likelihood estimators. J. Roy. Statist. Soc. B 32, 283-301.
Andersen, E. B. (1973). Conditional Inference and Models for Measuring. Copenhagen: Mentalhygiejnisk Forlag.
Anderson, T. W. (1984). An Introduction to Multivariate Statistical Analysis. New York: Wiley.
Angers, J.-F. and Berger, J. O. (1991). Robust hierarchical Bayes estimation of exchangeable means. Canadian J. Statist. 19, 39-56.
Anscombe, F. J. (1961). Bayesian statistics. Amer. Statist. 15, 21-24.
Anscombe, F. J. (1963). Sequential medical trials. J. Amer. Statist. Assoc. 58, 365-383.
Anscombe, F. J. (1964a). Some remarks on Bayesian statistics. Human Judgments and Optimality (Shelly and Bryan, eds.). New York: Wiley, 155-177.
Anscombe, F. J. (1964b). Normal likelihood functions. Ann. Inst. Statist. Math. 16, 1-41.



Anscombe, F. J. and Aumann, R. J. (1963). A definition of subjective probability. Ann. Math. Statist. 34, 199-205.
Ansley, C. F., Kohn, R. and Wong, C.-M. (1993). Non-parametric spline regression with prior information. Biometrika 80, 75-88.
Antoniak, C. (1974). Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. Ann. Statist. 2, 1152-1174.
Aoki, M. (1967). Optimization of Stochastic Systems. New York: Academic Press.
Arimoto, S. (1970). Bayesian decision rule and quantity of equivocation. Systems, Computers, Controls 1, 17-23.
Arnaiz, G. and Ruiz-Rivas, C. (1986). Outliers in circular data, a Bayesian approach. Qüestiió 10, 1-6.
Arnold, S. F. (1993). Gibbs sampling. Handbook of Statistics 9. Computational Statistics (C. R. Rao, ed.). Amsterdam: North-Holland, 599-625.
Arrow, K. J. (1951a). Alternative approaches to the theory of choice in risk-taking situations. Econometrica 19, 404-437.
Arrow, K. J. (1951b). Social Choice and Individual Values. New York: Wiley.
Arrow, K. J. and Raynaud, H. (1987). Social Choice and Multicriteria Decision Making. Cambridge, MA: The MIT Press.
Ash, R. B. (1972). Real Analysis and Probability. New York: Academic Press.
Aumann, R. J. (1987). Correlated equilibrium as an expression of Bayesian rationality. Econometrica 55, 1-18.
Aykaç, A. and Brumat, C. (eds.) (1977). New Developments in the Applications of Bayesian Methods. Amsterdam: North-Holland.
Bahadur, R. R. (1954). Sufficiency and statistical decision functions. Ann. Math. Statist. 25, 423-462.
Bailey, R. W. (1992). Distributional identities of Beta and chi-squared variates: a geometrical interpretation. Amer. Statist. 46, 117-120.
Balch, M. and Fishburn, P. C. (1974). Subjective expected utility for conditional primitives. Essays on Economic Behaviour under Uncertainty (M. Balch, D. McFadden and S. Wu, eds.). Amsterdam: North-Holland, 45-54.
Bandemer, H. (1977). Theorie und Anwendung der Optimalen Versuchsplanung. Berlin: Akademie-Verlag.
Barlow, R. E. (1989). Influence diagrams. Encyclopedia of Statistical Sciences Suppl. (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 72-74.
Barlow, R. E. (1991). Introduction to de Finetti (1937). Breakthroughs in Statistics 1 (S. Kotz and N. L. Johnson, eds.). Berlin: Springer, 125-133.
Barlow, R. E. and Irony, T. Z. (1992). Foundations of statistical quality control. Current Issues in Statistical Inference: Essays in Honor of D. Basu (M. Ghosh and P. K. Pathak, eds.). Hayward, CA: IMS, 99-112.
Barlow, R. E. and Mendel, M. B. (1992). De Finetti-type representations for lifetime distributions. J. Amer. Statist. Assoc. 87, 1116-1123.
Barlow, R. E. and Mendel, M. B. (1994). The operational Bayesian approach. Aspects of Uncertainty: a Tribute to D. V. Lindley (P. R. Freeman and A. F. M. Smith, eds.). Chichester: Wiley, 19-28.



Barlow, R. E., Wechsler, S. and Spizzichino, F. (1988). De Finetti's approach to group decision making. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 1-15 (with discussion).
Barnard, G. A. (1949). Statistical inference. J. Roy. Statist. Soc. B 11, 115-149 (with discussion).
Barnard, G. A. (1951). The theory of information. J. Roy. Statist. Soc. B 13, 46-64.
Barnard, G. A. (1952). The frequency justification of certain sequential tests. Biometrika 39, 144-150.

Barnard, G. A. (1958). Thomas Bayes, a biographical note. Biometrika 45, 293-295.
Barnard, G. A. (1963). Some aspects of the fiducial argument. J. Roy. Statist. Soc. B 25, 111-114.
Barnard, G. A. (1967). The use of the likelihood function in statistical practice. Proc. Fifth Berkeley Symp. 1 (J. Neyman and E. L. Scott, eds.). Berkeley: Univ. California Press, 27-40.
Barnard, G. A. (1980a). In discussion of Box (1980). J. Roy. Statist. Soc. A 143, 404-406.
Barnard, G. A. (1980b). Pivotal inference and the Bayesian controversy. Bayesian Statistics (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Valencia: University Press, 295-318 (with discussion).
Barnard, G. A., Jenkins, G. M. and Winsten, C. B. (1962). Likelihood inference and time series. J. Roy. Statist. Soc. A 125, 321-372 (with discussion).
Barnard, G. A. and Sprott, D. A. (1968). Likelihood. Encyclopedia of Statistical Sciences 9 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 639-644.
Barndorff-Nielsen, O. E. (1978). Information and Exponential Families in Statistical Theory. New York: Wiley.
Barndorff-Nielsen, O. E. (1980). Likelihood prediction. Symposia Mathematica 25, 11-24.
Barndorff-Nielsen, O. E. (1983). On a formula for the distribution of the maximum likelihood estimator. Biometrika 70, 343-365.
Barndorff-Nielsen, O. E. (1991). Likelihood theory. Statistical Theory and Modelling. In Honour of Sir David Cox (D. V. Hinkley, N. Reid and E. J. Snell, eds.). London: Chapman and Hall, 232-265.
Barnett, V. (1973/1982). Comparative Statistical Inference. Second edition in 1982, Chichester: Wiley.
Barrai, I., Coletti, G. and Di Bacco, M. (eds.) (1992). Probability and Bayesian Statistics in Medicine and Biology. Pisa: Giardini.
Bartholomew, D. J. (1965). A comparison of some Bayesian and frequentist inferences. Biometrika 52, 19-35.
Bartholomew, D. J. (1967). Hypothesis testing when the sample size is treated as a random variable. J. Roy. Statist. Soc. B 29, 53-82.
Bartholomew, D. J. (1971). A comparison of Bayesian and frequentist approaches to inferences with prior knowledge. Foundations of Statistical Inference (V. P. Godambe and D. A. Sprott, eds.). Toronto: Holt, Rinehart and Winston, 417-434 (with discussion).
Bartholomew, D. J. (1994). Bayes' theorem in latent variable modelling. Aspects of Uncertainty: a Tribute to D. V. Lindley (P. R. Freeman and A. F. M. Smith, eds.). Chichester: Wiley.

Bartlett, M. (1957). A comment on D. V. Lindley's statistical paradox. Biometrika 44, 533-534.
Basu, D. (1959). The family of ancillary statistics. Sankhyā A 21, 247-256.
Basu, D. (1964). Recovery of ancillary information. Sankhyā A 26, 3-16.
Basu, D. (1969). Role of sufficiency and likelihood principles in survey sampling theory. Sankhyā A 31, 441-454.
Basu, D. (1971). An essay on the logical foundations of survey sampling. Foundations of Statistical Inference (V. P. Godambe and D. A. Sprott, eds.). Toronto: Holt, Rinehart and Winston, 203-242 (with discussion).
Basu, D. (1975). Statistical information and likelihood. Sankhyā A 37, 1-71 (with discussion).
Basu, D. (1977). On the elimination of nuisance parameters. J. Amer. Statist. Assoc. 72, 355-366.
Basu, D. (1988). Statistical Information and Likelihood: a Collection of Critical Essays (J. K. Ghosh, ed.). Berlin: Springer.
Basu, D. (1992). Learning statistics from counter examples: ancillary statistics. Bayesian Analysis in Statistics and Econometrics (P. K. Goel and N. S. Iyengar, eds.). Berlin: Springer, 217-224.
Basu, D. and Pereira, C. (1983). A note on Blackwell sufficiency and a Skibinsky characterization of distributions. Sankhyā A 45, 99-104.
Bauwens, L. (1984). Bayesian Full Information Analysis of Simultaneous Equation Models Using Integration by Monte Carlo. Berlin: Springer.
Bayarri, M. J. (1981). Inferencia Bayesiana sobre el coeficiente de correlación de una población normal bivariante. Trab. Estadist. 32, 18-31.
Bayarri, M. J. and Berger, J. O. (1994). Applications and limitations of robust Bayesian bounds and type II MLE. Statistical Decision Theory and Related Topics V (S. S. Gupta and J. O. Berger, eds.). Berlin: Springer, 121-134.
Bayarri, M. J. and DeGroot, M. H. (1987). Bayesian analysis of selection models. The Statistician 36, 137-146.
Bayarri, M. J. and DeGroot, M. H. (1988). Gaining weight: a Bayesian approach. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 25-44 (with discussion).
Bayarri, M. J. and DeGroot, M. H. (1989). Optimal reporting of predictions. J. Amer. Statist. Assoc. 84, 214-222.
Bayarri, M. J. and DeGroot, M. H. (1990). Selection models and selection mechanisms. Bayesian and Likelihood Methods in Statistics and Econometrics: Essays in Honor of George A. Barnard (S. Geisser, J. S. Hodges, S. J. Press and A. Zellner, eds.). Amsterdam: North-Holland, 211-227.
Bayarri, M. J. and DeGroot, M. H. (1991). What Bayesians expect of each other. J. Amer. Statist. Assoc. 86, 924-932.
Bayarri, M. J. and DeGroot, M. H. (1992a). A BAD view of weighted distributions and selection models. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 17-33 (with discussion).
Bayarri, M. J. and DeGroot, M. H. (1992b). Difficulties and ambiguities in the definition of a likelihood function. J. It. Statist. Soc. 1, 1-15.


Bayarri, M. J., DeGroot, M. H. and Kadane, J. B. (1988). What is the likelihood function? Statistical Decision Theory and Related Topics IV 1 (S. S. Gupta and J. O. Berger, eds.). Berlin: Springer, 3-27.
Bayes, T. (1763). An essay towards solving a problem in the doctrine of chances. Published posthumously in Phil. Trans. Roy. Soc. London 53, 370-418 and 54, 296-325. Reprinted in Biometrika 45 (1958), 293-315, with a biographical note by G. A. Barnard. Reproduced in Press (1989), 185-217.
Becker, G. M., DeGroot, M. H. and Marschak, J. (1963). Stochastic models of choice behavior. Behavioral Sci. 8, 41-55. Reprinted in Decision Making (W. L. Edwards and A. Tversky, eds.). Baltimore: Penguin.
Becker, G. M. and McClintock, C. G. (1967). Value: behavioral decision theory. Annual Rev. Psychology 18, 239-286.
Bellman, R. E. (1957). Dynamic Programming. Princeton: University Press.
Berger, J. O. (1979). Multivariate estimation with nonsymmetric loss functions. Optimizing Methods in Statistics (J. S. Rustagi, ed.). New York: Academic Press.
Berger, J. O. (1980). A robust generalized Bayes estimator and confidence region for a multivariate normal mean. Ann. Statist. 8, 716-761.
Berger, J. O. (1982). Bayesian robustness and the Stein effect. J. Amer. Statist. Assoc. 77, 358-368.
Berger, J. O. (1984a). The robust Bayesian viewpoint. Robustness of Bayesian Analysis (J. B. Kadane, ed.). Amsterdam: North-Holland, 63-144 (with discussion).
Berger, J. O. (1984b). The frequentist viewpoint and conditioning. Proc. Berkeley Symp. in Honor of Kiefer and Neyman (L. LeCam and R. Olshen, eds.). Pacific Grove, CA: Wadsworth.
Berger, J. O. (1985a). Statistical Decision Theory and Bayesian Analysis. Berlin: Springer.
Berger, J. O. (1985b). In defense of the likelihood principle: axiomatics and coherence. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 33-65 (with discussion).
Berger, J. O. (1986). Bayesian salesmanship. Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti (P. K. Goel and A. Zellner, eds.). Amsterdam: North-Holland, 473-488.
Berger, J. O. (1990). Robust Bayesian analysis: sensitivity to the prior. J. Statist. Planning and Inference 25, 303-328.
Berger, J. O. (1993). The present and future of Bayesian multivariate analysis. Multivariate Analysis: Future Directions (C. R. Rao, ed.). Amsterdam: North-Holland, 5-53.
Berger, J. O. (1994). A review of recent developments in robust Bayesian analysis. Test 3 (to appear, with discussion).
Berger, J. O. and Berliner, L. M. (1986). Robust Bayes and empirical Bayes analysis with ε-contaminated priors. Ann. Statist. 14, 461-486.
Berger, J. O. and Bernardo, J. M. (1989). Estimating a product of means: Bayesian analysis with reference priors. J. Amer. Statist. Assoc. 84, 200-207.
Berger, J. O. and Bernardo, J. M. (1992a). Ordered group reference priors with applications to a multinomial problem. Biometrika 79, 25-37.

Berger, J. O. and Bernardo, J. M. (1992b). Reference priors in a variance components problem. Bayesian Analysis in Statistics and Econometrics (P. K. Goel and N. S. Iyengar, eds.). Berlin: Springer, 323-340.
Berger, J. O. and Bernardo, J. M. (1992c). On the development of reference priors. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 35-60 (with discussion).
Berger, J. O., Bernardo, J. M. and Mendoza, M. (1989). On priors that maximize expected information. Recent Developments in Statistics and their Applications (J. P. Klein and J. C. Lee, eds.). Seoul: Freedom Academy, 1-20.
Berger, J. O. and Berry, D. A. (1988). The relevance of stopping rules in statistical inference. Statistical Decision Theory and Related Topics IV 1 (S. S. Gupta and J. O. Berger, eds.). Berlin: Springer, 29-72 (with discussion).
Berger, J. O. and DasGupta, A. (1991). Multivariate Estimation: Bayes, Empirical Bayes and Stein Approaches. Philadelphia, PA: SIAM.
Berger, J. O. and Delampady, M. (1987). Testing precise hypotheses. Statist. Sci. 2, 317-352 (with discussion).
Berger, J. O. and Fan, T. H. (1991). Behaviour of the posterior distribution and inferences for a normal mean with t prior distributions. Statistics and Decisions 10, 99-120.
Berger, J. O. and Jefferys, W. H. (1992). The application of robust Bayesian analysis to hypothesis testing and Occam's razor. J. It. Statist. Soc. 1, 17-32.
Berger, J. O. and Mortera, J. (1991a). Interpreting the stars in precise hypothesis testing. Internat. Statist. Rev. 59, 337-353.
Berger, J. O. and Mortera, J. (1991b). Bayesian analysis with limited communication. J. Statist. Planning and Inference 28, 1-24.
Berger, J. O. and Mortera, J. (1994). Robust Bayesian hypothesis testing in the presence of nuisance parameters. J. Statist. Planning and Inference 31, 357-373.
Berger, J. O. and O'Hagan, A. (1988). Ranges of posterior probabilities of unimodal priors with specified quantiles. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 45-65 (with discussion).
Berger, J. O. and Robert, C. P. (1990). Subjective hierarchical Bayes estimation of a multivariate normal mean: on the frequentist interface. Ann. Statist. 18, 617-651.
Berger, J. O. and Sellke, T. (1987). Testing a point null hypothesis: the irreconcilability of significance levels and evidence. J. Amer. Statist. Assoc. 82, 112-133 (with discussion).
Berger, J. O. and Srinivasan, C. (1978). Generalized Bayes estimators in multivariate problems. Ann. Statist. 6, 783-801.
Berger, J. O. and Wolpert, R. L. (1984/1988). The Likelihood Principle. Second edition in 1988, Hayward, CA: IMS.
Berger, R. L. (1981). A necessary and sufficient condition for reaching a consensus using DeGroot's method. J. Amer. Statist. Assoc. 76, 415-418.
Berk, R. H. (1966). Limiting behaviour of the posterior distributions when the model is incorrect. Ann. Math. Statist. 37, 51-58.
Berk, R. H. (1970). Consistency a posteriori. Ann. Math. Statist. 41, 894-906.
Berliner, L. M. (1987). Bayesian control in mixture models. Technometrics 29, 455-460.

Berliner, L. M. and Goel, P. K. (1990). Incorporating partial prior information: ranges of posterior probabilities. Bayesian and Likelihood Methods in Statistics and Econometrics: Essays in Honor of George A. Barnard (S. Geisser, J. S. Hodges, S. J. Press and A. Zellner, eds.). Amsterdam: North-Holland, 397-406.
Berliner, L. M. and Hill, B. M. (1988). Bayesian non-parametric survival analysis. J. Amer. Statist. Assoc. 83, 772-782 (with discussion).
Bermúdez, J. D. (1985). On the asymptotic normality of the posterior distribution of the logistic classification model. Statistics and Decisions 2, 301-308.
Bernardo, J. M. (1977). Inferences about the ratio of normal means: a Bayesian approach to the Fieller-Creasy problem. Recent Developments in Statistics (J. R. Barra et al., eds.). Amsterdam: North-Holland, 345-349.
Bernardo, J. M. (1978a). Una medida de la información útil proporcionada por un experimento. Rev. Acad. Ciencias Madrid 72, 419-440.
Bernardo, J. M. (1978b). Unacceptable implications of the left Haar measure in a standard normal theory inference problem. Trab. Estadist. 29, 3-9.
Bernardo, J. M. (1979a). Expected information as expected utility. Ann. Statist. 7, 686-690.
Bernardo, J. M. (1979b). Reference posterior distributions for Bayesian inference. J. Roy. Statist. Soc. B 41, 113-147 (with discussion).
Bernardo, J. M. (1980). A Bayesian analysis of classical hypothesis testing. Bayesian Statistics (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Valencia: University Press, 605-647 (with discussion).
Bernardo, J. M. (1981a). Reference decisions. Symposia Mathematica 25, 85-94.
Bernardo, J. M. (1981b). Bioestadística, una Perspectiva Bayesiana. Barcelona: Vicens-Vives.
Bernardo, J. M. (1982). Contraste de modelos probabilísticos desde una perspectiva Bayesiana. Trab. Estadist. 33, 16-30.
Bernardo, J. M. (1984). Monitoring the 1982 Spanish socialist victory: a Bayesian analysis. J. Amer. Statist. Assoc. 79, 510-515.
Bernardo, J. M. (1985a). Análisis Bayesiano de los contrastes de hipótesis paramétricos. Trab. Estadist. 36, 45-54.
Bernardo, J. M. (1985b). On a famous problem of induction. Trab. Estadist. 36, 24-30.
Bernardo, J. M. (1988). Bayesian linear probabilistic classification. Statistical Decision Theory and Related Topics IV 1 (S. S. Gupta and J. O. Berger, eds.). Berlin: Springer, 151-162.
Bernardo, J. M. (1989). Análisis de datos y métodos Bayesianos. Historia de la Ciencia Estadística (S. Ríos, ed.). Madrid: Academia de Ciencias, 87-105.
Bernardo, J. M. (1994). Optimal prediction with hierarchical models: Bayesian clustering. Aspects of Uncertainty: a Tribute to D. V. Lindley (P. R. Freeman and A. F. M. Smith, eds.). Chichester: Wiley, 67-76.
Bernardo, J. M. and Bayarri, M. J. (1985). Bayesian model criticism. Model Choice (J.-P. Florens, M. Mouchart, J.-P. Raoult and L. Simar, eds.). Brussels: Pub. Fac. Univ. Saint Louis, 43-59.
Bernardo, J. M., Berger, J. O., Dawid, A. P. and Smith, A. F. M. (eds.) (1992). Bayesian Statistics 4. Oxford: University Press.

Bernardo, J. M. and Bermúdez, J. D. (1985). The choice of variables in probabilistic classification. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 67-81 (with discussion).
Bernardo, J. M., DeGroot, M. H., Lindley, D. V. and Smith, A. F. M. (eds.) (1980). Bayesian Statistics. Valencia: University Press.
Bernardo, J. M., DeGroot, M. H., Lindley, D. V. and Smith, A. F. M. (eds.) (1985). Bayesian Statistics 2. Amsterdam: North-Holland.
Bernardo, J. M., DeGroot, M. H., Lindley, D. V. and Smith, A. F. M. (eds.) (1988). Bayesian Statistics 3. Oxford: University Press.
Bernardo, J. M., Ferrándiz, J. R. and Smith, A. F. M. (1985). The foundations of decision theory: an intuitive, operational approach with mathematical extensions. Theory and Decision 18, 127-150.
Bernardo, J. M. and Girón, F. J. (1988). A Bayesian analysis of simple mixture problems. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 67-88 (with discussion).
Bernardo, J. M. and Girón, F. J. (1989). A Bayesian approach to cluster analysis. Qüestiió 5, 97-112.

Bernoulli, D. (1730/1954). Specimen theoriae novae de mensura sortis. Comment. Acad. Sci. Imp. Petropolitanae 5, 175-192. English translation as "Exposition of a new theory on the measurement of risk" in 1954, Econometrica 22, 23-36.
Bernoulli, J. (1713/1899). Ars Conjectandi. Basel: Thurnisiorum. Translated into German as Wahrscheinlichkeitsrechnung. Leipzig: Engelmann, 1899.
Berry, D. A. (1996). Statistics: a Bayesian Perspective. Belmont, CA: Duxbury.
Berry, D. A. and Stangl, D. K. (eds.) (1996). Bayesian Biostatistics. New York: Marcel Dekker.
Besag, J. (1986). On the statistical analysis of dirty pictures. J. Roy. Statist. Soc. B 48, 259-302 (with discussion).
Besag, J. (1989). Towards Bayesian image analysis. J. Appl. Statist. 16, 395-407.
Besag, J. and Green, P. J. (1993). Spatial statistics and Bayesian computation. J. Roy. Statist. Soc. B 55, 25-37.
Bickel, P. J. and Blackwell, D. (1967). A note on Bayes estimates. Ann. Math. Statist. 38, 1907-1911.

Bickel, P. J. and Ghosh, J. K. (1990). A decomposition for the likelihood ratio statistic and the Bartlett correction - a Bayesian argument. Ann. Statist. 18, 1070-1090.
Binder, D. A. (1978). Bayesian cluster analysis. Biometrika 65, 31-38.
Birnbaum, A. (1962). On the foundations of statistical inference. J. Amer. Statist. Assoc. 57, 269-306.

Birnbaum, A. (1968). Likelihood. Internat. Encyclopedia of the Social Sciences 9, 299-301.
Birnbaum, A. (1969). Concepts of statistical evidence. Philosophy, Science and Methods (S. Morgenbesser, P. Suppes and M. White, eds.). New York: St. John's Press.
Birnbaum, A. (1972). More on concepts of statistical evidence. J. Amer. Statist. Assoc. 67, 858-861.

Birnbaum, A. (1978). Likelihood. International Encyclopedia of Statistics (W. H. Kruskal and J. M. Tanur, eds.). London: Macmillan, 519-522.

Bjørnstad, J. F. (1990). Predictive likelihood: a review. Statist. Sci. 5, 242-265 (with discussion).
Blackwell, D. (1947). Conditional expectation and unbiased sequential estimation. Ann. Math. Statist. 18, 105-110.
Blackwell, D. (1951). Comparison of experiments. Proc. Second Berkeley Symp. (J. Neyman, ed.). Berkeley: Univ. California Press, 93-102.
Blackwell, D. (1953). Equivalent comparison of experiments. Ann. Math. Statist. 24, 265-272.
Blackwell, D. (1988). In discussion of Diaconis (1988a). Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 123-124.
Blackwell, D. and Dubins, L. E. (1962). Merging of opinions with increasing information. Ann. Math. Statist. 33, 882-886.
Blackwell, D. and Girshick, M. A. (1954). Theory of Games and Statistical Decisions. New York: Wiley.
Blyth, C. R. (1972). On Simpson's paradox and the sure-thing principle. J. Amer. Statist. Assoc. 67, 364-366.
Blyth, C. R. (1973). Simpson's paradox and mutually favourable events. J. Amer. Statist. Assoc. 68, 746.
Booth, N. B. and Smith, A. F. M. (1976). Batch acceptance schemes based on an autoregressive prior. Biometrika 63, 133-136.
Bordley, R. F. (1992). An intransitive expectations-based Bayesian variant of prospect theory. J. Risk and Uncertainty 5, 127-144.
Borel, E. (1924/1964). A propos d'un traité de probabilités. Revue Philosophique 98, 321-336. Reprinted in 1980 as "Apropos of a treatise on probability" in Studies in Subjective Probability (H. E. Kyburg and H. E. Smokler, eds.). New York: Dover, 45-60.
Borovcnik, M. (1992). Stochastik im Wechselspiel von Intuitionen und Mathematik. Mannheim: BI-Wissenschaftsverlag.
Box, G. E. P. (1980). Sampling and Bayes' inference in scientific modelling. J. Roy. Statist. Soc. A 143, 383-430 (with discussion).
Box, G. E. P. (1983). An apology for ecumenism in statistics. Scientific Inference, Data Analysis and Robustness (G. E. P. Box, T. Leonard and C.-F. Wu, eds.). New York: Academic Press, 51-84.
Box, G. E. P. (1985). The Collected Works of G. E. P. Box (G. C. Tiao, ed.). Pacific Grove, CA: Wadsworth.
Box, G. E. P. and Cox, D. R. (1964). An analysis of transformations. J. Roy. Statist. Soc. B 26, 211-252 (with discussion).
Box, G. E. P. and Hill, W. J. (1967). Discrimination among mechanistic models. Technometrics 9, 57-71.
Box, G. E. P., Leonard, T. and Wu, C.-F. (eds.) (1983). Scientific Inference, Data Analysis and Robustness. New York: Academic Press.
Box, G. E. P. and Tiao, G. C. (1962). A further look at robustness via Bayes' theorem. Biometrika 49, 419-432.
Box, G. E. P. and Tiao, G. C. (1964). A note on criterion robustness and inference robustness. Biometrika 51, 169-173.


Box, G. E. P. and Tiao, G. C. (1965). Multiparameter problems from a Bayesian point of view. Ann. Math. Statist. 36, 1468-1482.
Box, G. E. P. and Tiao, G. C. (1968). A Bayesian approach to some outlier problems. Biometrika 55, 119-129.
Box, G. E. P. and Tiao, G. C. (1973). Bayesian Inference in Statistical Analysis. Reading, MA: Addison-Wesley.
Boyer, M. and Kihlstrom, R. E. (eds.) (1984). Bayesian Models in Economic Theory. Amsterdam: North-Holland.
Breslow, N. (1990). Biostatistics and Bayes. Statist. Sci. 5, 269-298 (with discussion).
Bretthorst, G. L. (1988). Bayesian Spectrum Analysis and Parameter Estimation. New York: Springer.
Bridgman, P. W. (1927). The Logic of Modern Physics. London: Macmillan.
Brier, G. W. (1950). Verification of forecasts expressed in terms of probability. Month. Weather Rev. 78, 1-3.
Brillinger, D. R. (1962). Examples bearing on the definition of fiducial probability, with a bibliography. Ann. Math. Statist. 33, 1349-1355.
Broemeling, L. D. (1985). Bayesian Analysis of Linear Models. New York: Marcel Dekker.
Brown, L. D. (1973). Estimation with incompletely specified loss functions. J. Amer. Statist. Assoc. 70, 417-427.
Brown, L. D. (1985). Foundations of Exponential Families. Hayward, CA: IMS.
Brown, P. J., Le, N. D. and Zidek, J. V. (1994). Inference for a covariance matrix. Aspects of Uncertainty: a Tribute to D. V. Lindley (P. R. Freeman and A. F. M. Smith, eds.). Chichester: Wiley, 77-92.
Brown, P. J. and Mäkeläinen, T. (1992). Regression, sequential measurements and coherent calibration. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 97-108 (with discussion).
Brown, R. V. (1993). Impersonal probability as an ideal assessment. J. Risk and Uncertainty 7, 215-235.
Brown, R. V. and Lindley, D. V. (1982). Improving judgement by reconciling incoherence. Theory and Decision 14, 113-132.
Brown, R. V. and Lindley, D. V. (1986). Plural analysis: multiple approaches to quantitative research. Theory and Decision 20, 133-154.
Brunk, H. D. (1991). Fully coherent inference. Ann. Statist. 19, 830-849.
Buehler, R. J. (1959). Some validity criteria for statistical inference. Ann. Math. Statist. 30, 845-863.
Buehler, R. J. (1971). Measuring information and uncertainty. Foundations of Statistical Inference (V. P. Godambe and D. A. Sprott, eds.). Toronto: Holt, Rinehart and Winston, 330-351 (with discussion).
Buehler, R. J. (1976). Coherent preferences. Ann. Statist. 4, 1051-1064.
Buehler, R. J. and Feddersen, A. P. (1963). Note on a conditional property of Student's t. Ann. Math. Statist. 34, 1098-1100.
Bunn, D. J. (1984). Applied Decision Analysis. New York: McGraw-Hill.
Butler, R. W. (1986). Predictive likelihood inference with applications. J. Roy. Statist. Soc. B 48, 1-38 (with discussion).


Cano, J. A., Hernández, A. and Moreno, E. (1988). On Kolmogorov's partial sufficiency. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 553-556.
Carlin, B. P. and Gelfand, A. E. (1991). An iterative Monte Carlo method for nonconjugate Bayesian analysis. Statist. Computing 1, 119-128.
Carlin, B. P. and Polson, N. G. (1992). Monte Carlo Bayesian methods for discrete regression models and categorical time series. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 577-586.
Carlin, J. B. and Dempster, A. P. (1989). Sensitivity analysis of seasonal adjustments: empirical case studies. J. Amer. Statist. Assoc. 84, 6-32 (with discussion).
Carnap, R. (1950/1962). Logical Foundations of Probability. Chicago: University Press.
Caro, E., Domínguez, J. I. and Girón, F. J. (1984). Compatibilidad del método de DeGroot para llegar a un consenso en la fórmula de Bayes. Trab. Estadist. 35, 139-153.
Casella, G. (1987). Conditionally acceptable recentered set estimators. Ann. Statist. 15, 1364-1371.

Casella, G. (1992). Conditional inference for confidence sets. Current Issues in Statistical Inference: Essays in Honor of D. Basu (M. Ghosh and P. K. Pathak, eds.). Hayward, CA: IMS.
Casella, G. and Berger, R. L. (1987). Reconciling Bayesian and frequentist evidence in the one-sided testing problem. J. Amer. Statist. Assoc. 82, 106-135 (with discussion).
Casella, G. and Berger, R. L. (1990). Statistical Inference. Pacific Grove, CA: Wadsworth.
Casella, G. and George, E. I. (1992). Explaining the Gibbs sampler. Amer. Statist. 46, 167-174.
Casella, G., Hwang, J. T. G. and Robert, C. P. (1993). A paradox in decision-theoretic interval estimation. Statistica Sinica 3, 141-155.
Chaloner, K. (1984). Optimal Bayesian experimental design for linear models. Ann. Statist. 12, 283-300.
Chaloner, K. (1994). Residual analysis and outliers in hierarchical models. Aspects of Uncertainty: a Tribute to D. V. Lindley (P. R. Freeman and A. F. M. Smith, eds.). Chichester: Wiley, 149-157.
Chaloner, K. and Brant, R. (1988). A Bayesian approach to outlier detection and residual analysis. Biometrika 75, 651-659.
Chan, K. S. (1993). Asymptotic behaviour of the Gibbs sampler. J. Amer. Statist. Assoc. 88, 320-326.

Chang, T. and Eaves, D. M. (1990). Reference priors for the orbit of a group model. Ann. Statist. 18, 1595-1614.
Chang, T. and Villegas, C. (1986). On a theorem of Stein relating Bayesian and classical inferences in group models. Canadian J. Statist. 14, 289-296.
Chankong, V. and Haimes, Y. (1982). Multiobjective Decision Making. Amsterdam: North-Holland.
Chao, M. T. (1970). The asymptotic behavior of Bayes estimators. Ann. Math. Statist. 41, 601-608.
Chateauneuf, A. and Jaffray, J. Y. (1984). Archimedean qualitative probabilities. J. Math. Psychology 28, 191-204.


Chen, C. F. (1985). On asymptotic normality of limiting density functions with Bayesian implications. J. Roy. Statist. Soc. B 47, 540-546.
Chernoff, H. (1951). A property of some type A regions. Ann. Math. Statist. 22, 472-474.
Chernoff, H. (1959). Sequential design of experiments. Ann. Math. Statist. 30, 755-770.
Chernoff, H. and Moses, L. E. (1959). Elementary Decision Theory. New York: Wiley.
Chow, Y. S. and Teicher, H. (1978/1988). Probability Theory. Berlin: Springer. Second edition in 1988.
Chuaqui, R. and Malitz, J. (1983). Preorderings compatible with probability measures. Trans. Amer. Math. Soc. 279, 811-824.

Cifarelli, D. M. (1987). Recent contributions to Bayesian statistics. Italian Contributions to the Methodology of Statistics (A. Naddeo, ed.). Padova: Cleub, 483-516.
Cifarelli, D. M. and Muliere, P. (1989). Statistica Bayesiana. Pavia: G. Iuculano.
Cifarelli, D. M. and Regazzini, E. (1982). Some considerations about mathematical statistics teaching methodology suggested by the concept of exchangeability. Exchangeability in Probability and Statistics (G. Koch and F. Spizzichino, eds.). Amsterdam: North-Holland, 185-205.
Clarke, B. and Wasserman, L. (1993). Non-informative priors and nuisance parameters. J. Amer. Statist. Assoc. 88, 1427-1432.
Clarke, R. D. (1954). The concept of probability. J. Inst. Actuaries 80, 1-31 (with discussion).
Clarotti, C. A. and Lindley, D. V. (eds.) (1988). Accelerated Life Testing and Experts' Opinions in Reliability. Amsterdam: North-Holland.
Clayton, M. K., Geisser, S. and Jennings, D. E. (1986). A comparison of several model selection procedures. Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti (P. K. Goel and A. Zellner, eds.). Amsterdam: North-Holland, 425-442.
Clemen, R. T. (1989). Combining forecasts: a review and annotated bibliography. Internat. J. Forecasting 5, 559-583.

Clemen, R. T. (1990). Unanimity and compromise among probability forecasters. Manag. Sci. 36, 767-779.
Clemen, R. T. and Winkler, R. L. (1987). Calibrating and combining precipitation probability forecasts. Probability and Bayesian Statistics (R. Viertl, ed.). London: Plenum, 97-110.
Clemen, R. T. and Winkler, R. L. (1993). Aggregating point estimates: a flexible modelling approach. Manag. Sci. 39, 501-515.
Cochrane, J. L. and Zeleny, M. (eds.) (1973). Multiple Criteria Decision Making. Columbia, SC: University Press.
Conlisk, J. (1993). The utility of gambling. J. Risk and Uncertainty 6, 255-275.
Consonni, G. and Veronese, P. (1987). Coherent distributions and Lindley's paradox. Probability and Bayesian Statistics (R. Viertl, ed.). London: Plenum, 111-120.
Consonni, G. and Veronese, P. (1992a). Bayes factors for linear models and improper priors. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 587-594.
Consonni, G. and Veronese, P. (1992b). Conjugate priors for exponential families having quadratic variance functions. J. Amer. Statist. Assoc. 87, 1123-1127.
Cooke, R. M. (1991). Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford: University Press.

Cornfield, J. (1969). The Bayesian outlook and its applications. Biometrics 25, 617-657.
Cowell, R. G. (1992). BAIES, a probabilistic expert system shell with qualitative and quantitative learning. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 595-600.
Cox, D. D. (1993). An analysis of Bayesian inference for nonparametric regression. Ann. Statist. 21, 903-923.

Cox, D. R. (1958). Some problems connected with statistical inference. Ann. Math. Statist. 29, 357-372.
Cox, D. R. (1975). Partial likelihood. Biometrika 62, 269-276.
Cox, D. R. (1988). Conditional and asymptotic inference. Sankhyā A 50, 314-337.
Cox, D. R. (1990). Models in statistical analysis. Statist. Sci. 5, 169-174.
Cox, D. R. and Hinkley, D. V. (1974). Theoretical Statistics. London: Chapman and Hall.
Cox, D. R. and Reid, N. (1987). Parameter orthogonality and approximate conditional inference. J. Roy. Statist. Soc. B 49, 1-39 (with discussion).
Cox, D. R. and Reid, N. (1992). A note on the difference between profile and modified profile likelihood. Biometrika 79, 408-411.
Cox, R. T. (1946). Probability, frequency and reasonable expectation. Amer. J. Physics 14, 1-13.
Cox, R. T. (1961). The Algebra of Probable Inference. Baltimore: Johns Hopkins.
Crowder, M. J. (1988). Asymptotic expansions of posterior expectations, distributions and densities for stochastic processes. Ann. Inst. Statist. Math. 40, 297-309.
Csiszár, I. (1985). An extended maximum entropy principle and a Bayesian justification. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 83-98 (with discussion).
Cuevas, A. and Sanz, P. (1988). On differentiability properties of Bayes operators. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 569-577.
Cyert, R. M. and DeGroot, M. H. (1987). Bayesian Analysis and Uncertainty in Economic Theory. London: Chapman and Hall.
Daboni, L. and Wedlin, A. (1982). Statistica. Un'Introduzione all'Impostazione Neo-Bayesiana. Torino: UTET.
Dalal, S. and Hall, W. J. (1980). On approximating parametric models by nonparametric Bayes models. Ann. Statist. 8, 664-672.
Dalal, S. and Hall, W. J. (1983). Approximating priors by mixtures of natural conjugate priors. J. Roy. Statist. Soc. B 45, 278-286.
Dale, A. I. (1990). Thomas Bayes: some clues to his education. Statist. and Prob. Letters 9, 289-290.

Dale, A. I. (1991). A History of Inverse Probability: From Thomas Bayes to Karl Pearson. Berlin: Springer.
Darmois, G. (1936). Sur les lois de probabilité à estimation exhaustive. C. R. Acad. Sci. Paris 200, 1265-1266.
DasGupta, A. (1991). Diameter and volume minimizing confidence sets in Bayes and classical problems. Ann. Statist. 19, 1225-1243.
DasGupta, A. and Studden, W. J. (1991). Robust Bayesian experimental designs in normal linear models. Ann. Statist. 19, 1244-1256.

Davison, A. C. (1986). Approximate predictive likelihood. Biometrika 73, 323-332.
Davidson, D., Suppes, P. and Siegel, S. (1957). Decision Making: An Experimental Approach. Stanford: University Press.
Dawid, A. P. (1970). On the limiting normality of posterior distributions. Proc. Camb. Phil. Soc. 67, 625-633.
Dawid, A. P. (1973). Posterior expectations for large observations. Biometrika 60, 664-666.
Dawid, A. P. (1977). Invariant distributions and analysis of variance models. Biometrika 64, 291-297.
Dawid, A. P. (1978). Extendibility of spherical matrix distributions. J. Multivariate Analysis 8, 559-566.
Dawid, A. P. (1979a). Conditional independence in statistical theory. J. Roy. Statist. Soc. B 41, 1-31 (with discussion).
Dawid, A. P. (1979b). Some misleading arguments involving conditional independence. J. Roy. Statist. Soc. B 41, 249-252.
Dawid, A. P. (1980a). A Bayesian look at nuisance parameters. Bayesian Statistics (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Valencia: University Press, 167-203 (with discussion).
Dawid, A. P. (1980b). Conditional independence for statistical operations. Ann. Statist. 8, 598-617.
Dawid, A. P. (1982a). The well-calibrated Bayesian. J. Amer. Statist. Assoc. 77, 605-613.
Dawid, A. P. (1982b). Intersubjective statistical models. Exchangeability in Probability and Statistics (G. Koch and F. Spizzichino, eds.). Amsterdam: North-Holland, 217-232.
Dawid, A. P. (1983a). Statistical inference. Encyclopedia of Statistical Sciences 4 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 89-105.
Dawid, A. P. (1983b). Invariant prior distributions. Encyclopedia of Statistical Sciences 4 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 228-236.
Dawid, A. P. (1984). Statistical theory, the prequential approach. J. Roy. Statist. Soc. A 147, 278-292.
Dawid, A. P. (1986a). Probability forecasting. Encyclopedia of Statistical Sciences 7 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 210-218.
Dawid, A. P. (1986b). A Bayesian view of statistical modelling. Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti (P. K. Goel and A. Zellner, eds.). Amsterdam: North-Holland, 391-404.
Dawid, A. P. (1988a). The infinite regress and its conjugate analysis. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 95-110 (with discussion).
Dawid, A. P. (1988b). Symmetry models and hypotheses for structured data layouts. J. Roy. Statist. Soc. B 50, 1-34 (with discussion).
Dawid, A. P. (1992). Prequential analysis, stochastic complexity and Bayesian inference. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 109-125 (with discussion).
Dawid, A. P. (1994). The island problem: coherent use of identification evidence. Aspects of Uncertainty: a Tribute to D. V. Lindley (P. R. Freeman and A. F. M. Smith, eds.). Chichester: Wiley, 159-170.


Dawid, A. P. and Fang, B. Q. (1992). Conjugate Bayes discrimination with infinitely many variables. J. Multivariate Analysis 41, 27-42.
Dawid, A. P. and Smith, A. F. M. (eds.) (1983). 1982 Conference on Practical Bayesian Statistics. Special issue, The Statistician 32, Numbers 1 and 2.
Dawid, A. P. and Stone, M. (1972). Expectation consistency of inverse probability distributions. Biometrika 59, 486-489.
Dawid, A. P. and Stone, M. (1973). Expectation consistency and generalised Bayes inference. Ann. Statist. 1, 478-485.
Dawid, A. P., Stone, M. and Zidek, J. V. (1973). Marginalization paradoxes in Bayesian and structural inference. J. Roy. Statist. Soc. B 35, 189-233 (with discussion).
Debreu, G. (1960). Topological methods in cardinal utility. Mathematical Methods in the Social Sciences (K. J. Arrow, S. Karlin and P. Suppes, eds.). Stanford: University Press, 16-26.
Deely, J. J. and Lindley, D. V. (1981). Bayes empirical Bayes. J. Amer. Statist. Assoc. 76, 833-841.
de Finetti, B. (1930). Funzione caratteristica di un fenomeno aleatorio. Mem. Acad. Naz. Lincei 4, 86-133.
de Finetti, B. (1937/1964). La prévision: ses lois logiques, ses sources subjectives. Ann. Inst. H. Poincaré 7, 1-68. Reprinted in 1980 as "Foresight; its logical laws, its subjective sources" in Studies in Subjective Probability (H. E. Kyburg and H. E. Smokler, eds.). New York: Dover, 93-158.
de Finetti, B. (1938). Sur la condition d'équivalence partielle. Actualités Scientifiques et Industrielles 739. Paris: Hermann et Cie. Translated in Studies in Inductive Logic and Probability 2 (R. Jeffrey, ed.). Berkeley: Univ. California Press, 193-206.
de Finetti, B. (1951). Recent suggestions for the reconciliation of theories of probability. Proc. Second Berkeley Symp. (J. Neyman, ed.). Berkeley: Univ. California Press, 217-226.
de Finetti, B. (1961). The Bayesian approach to the rejection of outliers. Proc. Fourth Berkeley Symp. 1 (J. Neyman and E. L. Scott, eds.). Berkeley: Univ. California Press, 199-210.
de Finetti, B. (1962). Does it make sense to speak of 'Good Probability Appraisers'? The Scientist Speculates: An Anthology of Partly-Baked Ideas (I. J. Good, ed.). New York: Wiley, 357-364. Reprinted in 1972, Probability, Induction and Statistics. New York: Wiley, 19-23.
de Finetti, B. (1963). La décision et les probabilités. Rev. Roumaine Math. Pures Appl. 7, 405-413.
de Finetti, B. (1964). Probabilità subordinate e teoria delle decisioni. Rendiconti Matematica 23, 128-131. Reprinted as "Conditional probabilities and decision theory" in 1972, Probability, Induction and Statistics. New York: Wiley, 13-18.
de Finetti, B. (1965). Methods for discriminating levels of partial knowledge concerning a test item. British J. Math. Statist. Psychol. 18, 87-123. Reprinted in 1972, Probability, Induction and Statistics. New York: Wiley.
de Finetti, B. (1967). Logical foundations and measurement of subjective probability. Acta Psychologica 34, 129-145.

de Finetti, B. (1968). Probability: interpretations. Internat. Encyclopedia of the Social Sciences 12. London: Macmillan, 496-504.

de Finetti, B. (1970/1974). Teoria delle Probabilità 1. Turin: Einaudi. English translation as Theory of Probability 1 in 1974, Chichester: Wiley.
de Finetti, B. (1970/1975). Teoria delle Probabilità 2. Turin: Einaudi. English translation as Theory of Probability 2 in 1975, Chichester: Wiley.
de Finetti, B. (1972). Probability, Induction and Statistics. Chichester: Wiley.
de Finetti, B. (1978). Probability: interpretations. International Encyclopedia of Statistics (W. H. Kruskal and J. M. Tanur, eds.). London: Macmillan, 496-505.
de Finetti, B. (1993). Induction and Probability (P. Monari and D. Cocchi, eds.). Bologna: Clueb.
DeGroot, M. H. (1962). Uncertainty, information and sequential experiments. Ann. Math. Statist. 33, 404-419.
DeGroot, M. H. (1963). Some comments on the experimental measurement of utility. Behavioral Sci. 8, 146-149.
DeGroot, M. H. (1970). Optimal Statistical Decisions. New York: McGraw-Hill.
DeGroot, M. H. (1973). Doing what comes naturally: interpreting a tail area as a posterior probability or as a likelihood ratio. J. Amer. Statist. Assoc. 68, 966-969.
DeGroot, M. H. (1974). Reaching a consensus. J. Amer. Statist. Assoc. 69, 118-121.
DeGroot, M. H. (1980). Improving predictive distributions. Bayesian Statistics (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Valencia: University Press, 385-395 and 415-429 (with discussion).
DeGroot, M. H. (1982). Decision theory. Encyclopedia of Statistical Sciences 2 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 277-286.
DeGroot, M. H. (1987). Probability and Statistics. Reading, MA: Addison-Wesley.
DeGroot, M. H. and Fienberg, S. E. (1982). Assessing probability assessors: calibration and refinement. Statistical Decision Theory and Related Topics III 1 (S. S. Gupta and J. O. Berger, eds.). New York: Academic Press, 291-314.
DeGroot, M. H. and Fienberg, S. E. (1983). The comparison and evaluation of forecasters. The Statistician 32, 12-22.
DeGroot, M. H. and Fienberg, S. E. (1986). Comparing probability forecasters: basic binary concepts and multivariate extensions. Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti (P. K. Goel and A. Zellner, eds.). Amsterdam: North-Holland, 247-264.
DeGroot, M. H., Fienberg, S. E. and Kadane, J. B. (eds.) (1986). Statistics and the Law. New York: Wiley.
DeGroot, M. H. and Kadane, J. B. (1980). Optimal challenges for selection. Operations Research 28, 952-968.
DeGroot, M. H. and Mortera, J. (1991). Optimal linear opinion pools. Manag. Sci. 37, 546-558.

DeGroot, M. H. and Rao, M. M. (1963). Bayes estimation with convex loss. Ann. Math. Statist. 34, 839-846.
DeGroot, M. H. and Rao, M. M. (1966). Multidimensional information inequalities and prediction. Multivariate Statistics (P. R. Krishnaiah, ed.). New York: Academic Press, 287-313.


de la Horra, J. (1986). Convergencia del vector de probabilidad a posteriori bajo una distribución predictiva. Trab. Estadist. 1, 3-11.
de la Horra, J. (1987). Generalized estimators: a Bayesian decision theoretic view. Statistics and Decisions 5, 347-352.
de la Horra, J. (1988). Parametric estimation with L1 distance. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 579-583.
de la Horra, J. (1992). Using the prior mean of a nuisance parameter. Test 1, 31-38.
de la Horra, J. and Fernández, C. (1994). Bayesian robustness of credible regions in the presence of nuisance parameters. Comm. Statist. Theory and Methods 23, 689-699.
Delampady, M. (1989). Lower bounds on Bayes factors for interval null hypotheses. J. Amer. Statist. Assoc. 84, 120-124.
Delampady, M. and Berger, J. O. (1990). Lower bounds on Bayes factors for multinomial and chi-squared tests of fit. Ann. Statist. 18, 1295-1316.
Delampady, M. and Dey, D. K. (1994). Bayesian robustness for multiparameter problems. J. Statist. Planning and Inference 40, 375-382.
Dellaportas, P. and Smith, A. F. M. (1993). Bayesian inference for generalised linear and proportional hazards models via Gibbs sampling. Appl. Statist. 42, 443-460.
Dellaportas, P. and Wright, D. E. (1992). A numerical integration strategy in Bayesian analysis. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 601-606.
De Morgan, A. (1847). Formal Logic. London: Taylor and Walton.
Dempster, A. P. (1967). Upper and lower probabilities induced by a multivalued mapping. Ann. Math. Statist. 38, 325-339.
Dempster, A. P. (1968). A generalization of Bayesian inference. J. Roy. Statist. Soc. B 30, 205-247 (with discussion).
Dempster, A. P. (1975). A subjective look at robustness. Internat. Statist. Rev. 43, 349-374.
Dempster, A. P. (1985). Probability, evidence and judgement. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 119-132 (with discussion).
DeRobertis, L. and Hartigan, J. (1981). Bayesian inference using intervals of measures. Ann. Statist. 9, 235-244.
Devroye, L. (1986). Non-Uniform Random Variate Generation. Berlin: Springer.
De Waal, D. J. and Groenewald, P. C. N. (1989). On measuring the amount of information from the data in a Bayesian analysis. South African Statist. J. 23, 23-41 (with discussion).
De Waal, D. J., Groenewald, P. C. N., van Zyl, D. and Zidek, J. V. (1986). Multi-Bayesian estimation theory. Statistics and Decisions 4, 1-18.
Diaconis, P. (1977). Finite forms of de Finetti's theorem on exchangeability. Synthese 36, 271-281.
Diaconis, P. (1988a). Recent progress on de Finetti's notion of exchangeability. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 111-125 (with discussion).
Diaconis, P. (1988b). Bayesian numerical analysis. Statistical Decision Theory and Related Topics IV 1 (S. S. Gupta and J. O. Berger, eds.). Berlin: Springer, 163-175.


Diaconis, P., Eaton, M. L. and Lauritzen, S. L. (1992). Finite de Finetti theorems in linear models and multivariate analysis. Scandinavian J. Statist. 19, 289-316.
Diaconis, P. and Freedman, D. (1980a). Finite exchangeable sequences. Ann. Prob. 8, 745-764.
Diaconis, P. and Freedman, D. (1980b). De Finetti's generalizations of exchangeability. Studies in Inductive Logic and Probability (R. C. Jeffrey, ed.). Berkeley: Univ. California Press, 223-249.
Diaconis, P. and Freedman, D. (1983). Frequency properties of Bayes rules. Scientific Inference, Data Analysis and Robustness (G. E. P. Box, T. Leonard and C.-F. Wu, eds.). New York: Academic Press, 105-116.
Diaconis, P. and Freedman, D. (1984). Partial exchangeability and sufficiency. Statistics: Applications and New Directions (J. K. Ghosh and J. Roy, eds.). Calcutta: Indian Statist. Institute, 205-236.
Diaconis, P. and Freedman, D. (1986a). On the consistency of Bayes estimates. Ann. Statist. 14, 1-67 (with discussion).
Diaconis, P. and Freedman, D. (1986b). On inconsistent Bayes estimates of location. Ann. Statist. 14, 68-87.
Diaconis, P. and Freedman, D. (1987). A dozen de Finetti-style results in search of a theory. Ann. Inst. H. Poincaré 23, 397-423.
Diaconis, P. and Freedman, D. (1990). Cauchy's equation and de Finetti's theorem. Scandinavian J. Statist. 17, 235-274.
Diaconis, P. and Ylvisaker, D. (1979). Conjugate priors for exponential families. Ann. Statist. 7, 269-281.
Diaconis, P. and Ylvisaker, D. (1985). Quantifying prior opinion. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 133-156 (with discussion).
Diaconis, P. and Zabell, S. L. (1982). Updating subjective probability. J. Amer. Statist. Assoc. 77, 822-830.
Dickey, J. M. (1968). Three multidimensional integral identities with Bayesian applications. Ann. Math. Statist. 39, 1615-1627.
Dickey, J. M. (1969). Smoothing by cheating. Ann. Math. Statist. 40, 1477-1482.
Dickey, J. M. (1971). The weighted likelihood ratio, linear hypotheses on normal location parameters. Ann. Math. Statist. 42, 204-223.
Dickey, J. M. (1973). Scientific reporting and personal probabilities: Student's hypothesis. J. Roy. Statist. Soc. B 35, 285-305. Reprinted in 1974 in Studies in Bayesian Econometrics and Statistics: in Honor of Leonard J. Savage (S. E. Fienberg and A. Zellner, eds.). Amsterdam: North-Holland, 485-511.
Dickey, J. M. (1974). Bayesian alternatives to the F test and least-squares estimate in normal linear model. Studies in Bayesian Econometrics and Statistics: in Honor of Leonard J. Savage (S. E. Fienberg and A. Zellner, eds.). Amsterdam: North-Holland, 515-554.
Dickey, J. M. (1976). Approximate posterior distributions. J. Amer. Statist. Assoc. 71, 680-689.
Dickey, J. M. (1977). Is the tail area useful as an approximate Bayes factor? J. Amer. Statist. Assoc. 72, 138-142.


Dickey, J. M. (1980). Beliefs about beliefs, a theory of stochastic assessment of subjective probabilities. Bayesian Statistics (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Valencia: University Press, 471-487 and 504-512 (with discussion).
Dickey, J. M. (1982). Conjugate families of distributions. Encyclopedia of Statistical Sciences 2 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 135-145.
Dickey, J. M. and Freeman, P. R. (1975). Population-distributed personal probabilities. J. Amer. Statist. Assoc. 70, 362-364.
Dickey, J. M. and Kadane, J. B. (1980). Bayesian decision theory and the simplification of models. Evaluation of Econometric Models (J. Kmenta and J. Ramsey, eds.). New York: Academic Press, 245-268.
Dickey, J. M. and Lientz, B. P. (1970). The weighted likelihood ratio, sharp hypotheses about chances, the order of a Markov chain. Ann. Math. Statist. 41, 214-226.
Diebolt, J. and Robert, C. P. (1994). Estimation of finite mixture distributions through Bayesian sampling. J. Roy. Statist. Soc. B 56, 363-375.
Doksum, K. A. (1974). Tailfree and neutral random probabilities and their posterior distributions. Ann. Prob. 2, 183-201.
Doksum, K. A. and Lo, A. Y. (1990). Consistent and robust Bayes procedures for location based on partial information. Ann. Statist. 18, 443-453.
Domotor, Z. and Stelzer, J. (1971). Representation of finitely additive semiordered qualitative probability structures. J. Math. Psychology 8, 145-158.
Draper, N. R. and Guttman, I. (1969). The value of prior information. New Developments in Survey Sampling (N. L. Johnson and H. Smith Jr., eds.). New York: Wiley.
Drèze, J. H. (1974). Bayesian theory of identification in simultaneous equations models. Studies in Bayesian Econometrics and Statistics: in Honor of Leonard J. Savage (S. E. Fienberg and A. Zellner, eds.). Amsterdam: North-Holland, 159-174.
Dubins, L. E. and Savage, L. J. (1965/1976). How to Gamble if You Must: Inequalities for Stochastic Processes. New York: McGraw-Hill. Second edition in 1976, New York: Dover.
DuMouchel, W. (1990). Bayesian metaanalysis. Statistical Methodology in the Pharmaceutical Sciences (D. A. Berry, ed.). New York: Marcel Dekker, 509-529.
DuMouchel, W. and Harris, J. E. (1983). Bayes methods for combining the results of cancer studies in humans and other species. J. Amer. Statist. Assoc. 78, 293-315.
Duncan, G. and DeGroot, M. H. (1976). A mean squared error approach to optimal design theory. Proc. 1976 Conf. Information Sciences and Systems. Baltimore: Johns Hopkins University Press, 217-221.
Duncan, L. R. and Raiffa, H. (1957). Games and Decisions. New York: Wiley.
Dunsmore, I. R. (1966). A Bayesian approach to classification. J. Roy. Statist. Soc. B 28, 568-577.

Dunsmore, I. R. (1968). A Bayesian approach to calibration. J. Roy. Statist. Soc. B 30, 396-405.
Dunsmore, I. R. (1969). Regulation and optimization. J. Roy. Statist. Soc. B 31, 160-170.
Durbin, J. (1969). Inferential aspects of the randomness of sample size in survey sampling. New Developments in Survey Sampling (N. L. Johnson and H. Smith Jr., eds.). New York: Wiley, 629-651.

Durbin, J. (1970). On Birnbaum's theorem on the relation between sufficiency, conditionality and likelihood. J. Amer. Statist. Assoc. 65, 395-398.
Dykstra, R. L. and Laud, P. (1981). A Bayesian nonparametric approach to reliability. Ann. Statist. 9, 356-367.
Dynkin, E. B. (1953). Klassy ekvivalentnych slucainych velicin. Uspechi Mat. Nauk 54, 125-134.
Earman, J. (1990). Bayes' Bayesianism. Stud. History Philos. Sci. 21, 351-370.
Eaton, M. L. (1982). A method for evaluating improper prior distributions. Statistical Decision Theory and Related Topics III 1 (S. S. Gupta and J. O. Berger, eds.). New York: Academic Press.
Eaton, M. L. (1992). A statistical diptych: admissible inferences, recurrence of symmetric Markov chains. Ann. Statist. 20, 1147-1179.
Eaves, D. M. (1985). On maximizing the missing information about a hypothesis. J. Roy. Statist. Soc. B 47, 263-266.
Edwards, A. W. F. (1972/1992). Likelihood. Cambridge: University Press. Second edition in 1992, Baltimore: Johns Hopkins University Press.
Edwards, A. W. F. (1974). The history of likelihood. Internat. Statist. Rev. 42, 9-15.
Edwards, W. L. (1954). The theory of decision making. Psychological Bul. 51, 380-417.
Edwards, W. L. (1961). Behavioral decision theory. Annual Rev. Psychology 12, 473-498.
Edwards, W. L., Lindman, H. and Savage, L. J. (1963). Bayesian statistical inference for psychological research. Psychol. Rev. 70, 193-242. Reprinted in Robustness of Bayesian Analysis (J. B. Kadane, ed.). Amsterdam: North-Holland, 1984, 1-62.
Edwards, W. L. and Newman, J. R. (1982). Multiattribute Evaluation. Beverly Hills, CA: Sage.
Edwards, W. L., Phillips, L. D., Hays, W. L. and Goodman, B. C. (1968). Probability information processing systems: design and evaluation. IEEE Trans. Systems, Science and Cybernetics 4, 248-265.
Edwards, W. L. and Tversky, A. (eds.) (1967). Decision Making. Baltimore: Penguin.
Efron, B. (1973). In discussion of Dawid, Stone and Zidek (1973). J. Roy. Statist. Soc. B 35, 219.
Efron, B. (1982). The Jackknife, the Bootstrap and other Resampling Plans. Philadelphia, PA: SIAM.
Efron, B. (1986). Why isn't everyone a Bayesian? Amer. Statist. 40, 1-11 (with discussion).
Efron, B. (1993). Bayes and likelihood calculations from confidence intervals. Biometrika 80, 3-26.
Efron, B. and Morris, C. N. (1972). Empirical Bayes estimators on vector observations - an extension of Stein's method. Biometrika 59, 335-347.
Efron, B. and Morris, C. N. (1975). Data analysis using Stein's estimator and its generalisations. J. Amer. Statist. Assoc. 70, 311-319.
Efron, B. and Tibshirani, R. J. (1993). An Introduction to the Bootstrap. London: Chapman and Hall.
Eichhorn, W. (1978). Functional Equations in Economics. Reading, MA: Addison-Wesley.
Eliashberg, J. and Winkler, R. L. (1981). Risk sharing and group decision making. Manag. Sci. 27, 1221-1235.


El-Krunz, S. M. and Studden, W. J. (1991). Bayesian optimal designs for linear regression models. Ann. Statist. 19, 2183-2208.
Ellsberg, D. (1961). Risk, ambiguity and the Savage axioms. Quart. J. Econ. 75, 643-669.
Erickson, G. J. and Smith, C. R. (eds.) (1988). Maximum Entropy and Bayesian Methods in Science and Engineering (2 volumes). Dordrecht: Kluwer.
Ericson, W. A. (1969a). Subjective Bayesian models in sampling finite populations. J. Roy. Statist. Soc. B 31, 195-233.
Ericson, W. A. (1969b). Subjective Bayesian models in sampling finite populations: stratification. New Developments in Survey Sampling (N. L. Johnson and H. Smith Jr., eds.). New York: Wiley, 326-357.
Ericson, W. A. (1988). Bayesian inference in finite populations. Handbook of Statistics 6. Sampling (P. R. Krishnaiah and C. R. Rao, eds.). Amsterdam: North-Holland, 213-246.
Farrell, R. H. (1964). Estimators of a location parameter in the absolutely continuous case. Ann. Math. Statist. 35, 949-998.
Farrell, R. H. (1968). Towards a theory of generalized Bayes tests. Ann. Math. Statist. 39, 1-22.
Fearn, T. and O'Hagan, A. (eds.) (1993). 1992 Conference on Practical Bayesian Statistics. Special issue, The Statistician 42, Number 1.
Fedorov, V. V. (1972). Theory of Optimal Experiments. New York: Academic Press.
Feller, W. (1950/1968). An Introduction to Probability Theory and its Applications 1. Chichester: Wiley. Third edition in 1968.
Fellner, W. (1965). Probability and Profit: A Study of Economic Behavior along Bayesian Lines. Homewood, IL: Irwin.
Felsenstein, K. (1988). Iterative procedures for continuous Bayesian designs. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 609-613.
Felsenstein, K. (1992). Optimal Bayesian design for discrimination among rival models. Comp. Statist. and Data Analysis 14, 427-436.
Ferguson, T. S. (1967). Mathematical Statistics: a Decision Theoretic Approach. New York: Academic Press.
Ferguson, T. S. (1973). A Bayesian analysis of some nonparametric problems. Ann. Statist. 1, 209-230.
Ferguson, T. S. (1974). Prior distributions on spaces of probability measures. Ann. Statist. 2, 615-629.
Ferguson, T. S. (1989). Who solved the secretary problem? Statist. Sci. 4, 282-296 (with discussion).
Ferguson, T. S. and Phadia, E. G. (1979). Bayesian nonparametric estimation based on censored data. Ann. Statist. 7, 163-186.
Ferguson, T. S., Phadia, E. G. and Tiwari, R. C. (1992). Bayesian nonparametric inference. Current Issues in Statistical Inference: Essays in Honor of D. Basu (M. Ghosh and P. K. Pathak, eds.). Hayward, CA: IMS, 127-150.
Ferrándiz, J. R. (1982). Una solución Bayesiana a la paradoja de Stein. Trab. Estadist. 33, 31-46.


Ferrándiz, J. R. (1985). Bayesian inference on Mahalanobis distance: an alternative approach to Bayesian model testing. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 645-654.
Ferrándiz, J. R. and Sendra, M. (1982). Tablas de Bioestadística. Valencia: University Press.
Fieller, E. C. (1954). Some problems in interval estimation. J. Roy. Statist. Soc. B 16, 186-194 (with discussion).
Fienberg, S. E. and Zellner, A. (eds.) (1974). Studies in Bayesian Econometrics and Statistics: in Honor of Leonard J. Savage. Amsterdam: North-Holland.
Fine, T. L. (1973). Theories of Probability: an Examination of Foundations. New York: Academic Press.
Fishburn, P. C. (1964). Decision and Value Theory. New York: Wiley.
Fishburn, P. C. (1967a). Bounded expected utility. Ann. Math. Statist. 38, 1054-1060.
Fishburn, P. C. (1967b). Preference-based definitions of subjective probability. Ann. Math. Statist. 38, 1605-1617.
Fishburn, P. C. (1968). Utility theory. Manag. Sci. 14, 335-378.
Fishburn, P. C. (1969). A general theory of subjective probabilities and expected utilities. Ann. Math. Statist. 40, 1419-1429.
Fishburn, P. C. (1970). Utility Theory for Decision Making. New York: Wiley.
Fishburn, P. C. (1975). A theory of subjective expected utility with vague preferences. Theory and Decision 6, 287-310.
Fishburn, P. C. (1981). Subjective expected utility: a review of normative theories. Theory and Decision 13, 139-199.
Fishburn, P. C. (1982). The Foundations of Expected Utility. Dordrecht: Reidel.
Fishburn, P. C. (1986). The axioms of subjective probability. Statist. Sci. 1, 335-358 (with discussion).
Fishburn, P. C. (1987). Interprofile Conditions and Impossibility. London: Harwood.
Fishburn, P. C. (1988a). Non-Linear Preference and Utility Theory. Baltimore: Johns Hopkins University Press.
Fishburn, P. C. (1988b). Utility theory. Encyclopedia of Statistical Sciences 9 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 445-452.
Fisher, R. A. (1915). Frequency distribution of the values of the correlation coefficient in samples from an indefinitely large population. Biometrika 10, 507-521.
Fisher, R. A. (1922). On the mathematical foundations of theoretical statistics. Phil. Trans. Roy. Soc. London A 222, 309-368. Reprinted in Breakthroughs in Statistics 1 (S. Kotz and N. L. Johnson, eds.). Berlin: Springer, 1991, 11-44.
Fisher, R. A. (1925). Theory of statistical estimation. Proc. Camb. Phil. Soc. 22, 700-725.
Fisher, R. A. (1930). Inverse probability. Proc. Camb. Phil. Soc. 26, 528-535.
Fisher, R. A. (1933). The concepts of inverse probability and fiducial probability referring to unknown parameters. Proc. Roy. Soc. A 139, 343-348.
Fisher, R. A. (1935). The fiducial argument in statistical inference. Ann. Eugenics 6, 391-398.
Fisher, R. A. (1939). A note on fiducial inference. Ann. Eugenics 10, 383-388.
Fisher, R. A. (1956/1973). Statistical Methods and Scientific Inference. Third edition in 1973, Edinburgh: Oliver and Boyd. Reprinted in 1990 within Statistical Methods, Experimental Design, and Scientific Inference (J. H. Bennett, ed.). Oxford: University Press.


Florens, J.-P. (1978). Mesures à priori et invariance dans une expérience bayésienne. Pub. Inst. Statist. Univ. Paris 23, 29-55.
Florens, J.-P. (1982). Expériences bayésiennes invariantes. Ann. Inst. H. Poincaré 18, 309-317.
Florens, J.-P. and Mouchart, M. (1985). Model selection: some remarks from a Bayesian viewpoint. Model Choice (J.-P. Florens, M. Mouchart, J.-P. Raoult and L. Simar, eds.). Brussels: Pub. Fac. Univ. Saint Louis, 27-44.
Florens, J.-P. and Mouchart, M. (1986). Exhaustivité, ancillarité et identification en statistique bayésienne. Ann. Econ. Statist. 4, 63-93.
Florens, J.-P., Mouchart, M., Raoult, J.-P. and Simar, L. (eds.) (1985). Model Choice. Brussels: Pub. Fac. Univ. Saint Louis.
Florens, J.-P., Mouchart, M., Raoult, J.-P., Simar, L. and Smith, A. F. M. (eds.) (1983). Specifying Statistical Models. Berlin: Springer.
Florens, J.-P., Mouchart, M. and Rolin, J.-M. (1990). Elements of Bayesian Statistics. New York: Marcel Dekker.
Florens, J.-P., Mouchart, M. and Rolin, J.-M. (1992). Bayesian analysis of mixtures: some results on exact estimability and identification. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 127-145 (with discussion).
Flournoy, N. and Tsutakawa, R. K. (eds.) (1991). Statistical Multiple Integration. Providence, RI: AMS.
Fougère, P. F. (ed.) (1990). Maximum Entropy and Bayesian Methods. Dordrecht: Kluwer.
Fraser, D. A. S. (1963). On the sufficiency and likelihood principles. J. Amer. Statist. Assoc. 58, 641-647.
Fraser, D. A. S. (1968). The Structure of Inference. New York: Wiley.
Fraser, D. A. S. (1972). Bayes, likelihood or structural. Ann. Math. Statist. 43, 777-790.
Fraser, D. A. S. (1979). Inference and Linear Models. New York: McGraw-Hill.
Fraser, D. A. S. and McDunnough, P. (1984). Further remarks on the asymptotic normality of likelihood and conditional analysis. Canadian J. Statist. 12, 183-190.
Fraser, D. A. S. and Reid, N. (1989). Adjustments to profile likelihood. Biometrika 76, 477-488.
Freedman, D. A. (1962). Invariants under mixing which generalize de Finetti's theorem. Ann. Math. Statist. 33, 916-923.
Freedman, D. A. (1963a). Invariants under mixing which generalize de Finetti's theorem: continuous time parameter. Ann. Math. Statist. 34, 1194-1216.
Freedman, D. A. (1963b). On the asymptotic behavior of Bayes estimates in the discrete case. Ann. Math. Statist. 34, 1386-1403.
Freedman, D. A. (1965). On the asymptotic behavior of Bayes estimates in the discrete case II. Ann. Math. Statist. 36, 454-456.
Freedman, D. A. and Diaconis, P. (1983). On inconsistent Bayes estimates in the discrete case. Ann. Statist. 11, 1109-1118.
Freedman, D. A. and Purves, R. A. (1969). Bayes' method for bookies. Ann. Math. Statist. 40, 1177-1186.


Freeman, P. R. (1980). On the number of outliers in data from a linear model. Bayesian Statistics (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Valencia: University Press, 349-365 and 370-381 (with discussion).
Freeman, P. R. (1983). The secretary problem and its extensions: a review. Internat. Statist. Rev. 51, 189-206.
Freeman, P. R. and Smith, A. F. M. (eds.) (1994). Aspects of Uncertainty: a Tribute to D. V. Lindley. Chichester: Wiley.
French, S. (1980). Updating of beliefs in the light of someone else's opinion. J. Roy. Statist. Soc. A 143, 43-48.
French, S. (1981). Consensus of opinion. Eur. J. Oper. Res. 7, 332-340.
French, S. (1982). On the axiomatisation of subjective probabilities. Theory and Decision 14, 19-33.
French, S. (1985). Group consensus probability distributions: a critical survey. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 183-202 (with discussion).
French, S. (1986). Decision Theory: an Introduction to the Mathematics of Rationality. Chichester: Ellis Horwood.
French, S. (ed.) (1989). Readings in Decision Analysis. London: Chapman and Hall.
French, S., Hartley, R., Thomas, L. C. and White, D. J. (eds.) (1983). Multiobjective Decision Making. New York: Academic Press.
Friedman, M. and Savage, L. J. (1948). The utility analysis of choices involving risk. J. Political Econ. 56, 279-304.
Friedman, M. and Savage, L. J. (1952). The expected utility hypothesis and the measurement of utility. J. Political Econ. 60, 463-474.
Fu, J. C. and Kass, R. E. (1988). The exponential rate of convergence of posterior distributions. Ann. Inst. Statist. Math. 40, 683-691.
Gamerman, D. (1992). A dynamic approach to the statistical analysis of point processes. Biometrika 79, 39-50.
Gamerman, D. and Migon, H. S. (1993). Dynamic hierarchical models. J. Roy. Statist. Soc. B 55, 629-642.
Gärdenfors, P. and Sahlin, N.-E. (eds.) (1988). Decision, Probability, and Utility. Selected Readings. Cambridge: University Press.
Garthwaite, P. H. and Dickey, J. M. (1992). Elicitation of prior distributions for variable selection problems in regression. Ann. Statist. 20, 1697-1719.
Gatsonis, C. A. (1984). Deriving posterior distributions for a location parameter: a decision-theoretic approach. Ann. Statist. 12, 958-970.
Gatsonis, C. A., Hodges, J. S., Kass, R. E. and Singpurwalla, N. (eds.) (1993). Case Studies in Bayesian Statistics. Berlin: Springer.
Gaul, W. and Schader, M. (eds.) (1978). Data, Expert Knowledge and Decisions. Berlin: Springer.
Geisser, S. (1964). Posterior odds for multivariate normal classification. J. Roy. Statist. Soc. B 26, 69-76.
Geisser, S. (1966). Predictive discrimination. Multivariate Analysis (P. R. Krishnaiah, ed.). New York: Academic Press, 149-163.


Geisser, S. (1971). The inferential use of predictive distributions. Foundations of Statistical Inference (V. P. Godambe and D. A. Sprott, eds.). Toronto: Holt, Rinehart and Winston, 456-469.
Geisser, S. (1974). A predictive approach to the random effect model. Biometrika 61, 101-107.
Geisser, S. (1975). The predictive sample reuse method, with applications. J. Amer. Statist. Assoc. 70, 320-328.
Geisser, S. (1979). In discussion of Bernardo (1979b). J. Roy. Statist. Soc. B 41, 136-137.
Geisser, S. (1980a). The contributions of Sir Harold Jeffreys to Bayesian inference. Bayesian Analysis in Econometrics and Statistics: Essays in Honor of Harold Jeffreys (A. Zellner, ed.). Amsterdam: North-Holland, 13-20.
Geisser, S. (1980b). A predictivist primer. Bayesian Analysis in Econometrics and Statistics: Essays in Honor of Harold Jeffreys (A. Zellner, ed.). Amsterdam: North-Holland, 363-381.
Geisser, S. (1982). Bayesian discrimination. Handbook of Statistics 2. Classification (P. R. Krishnaiah and L. N. Kanal, eds.). Amsterdam: North-Holland, 101-120.
Geisser, S. (1984). On prior distributions for binary trials. Amer. Statist. 38, 244-251 (with discussion).
Geisser, S. (1985). On the prediction of observables: a selective update. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 203-230 (with discussion).
Geisser, S. (1986). Predictive analysis. Encyclopedia of Statistical Sciences 7 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 158-170.
Geisser, S. (1987). Influential observations, diagnostics and discordancy tests. J. Appl. Statist. 14, 133-142.
Geisser, S. (1988). The future of statistics in retrospect. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 147-158 (with discussion).
Geisser, S. (1992). Bayesian perturbation diagnostics and robustness. Bayesian Analysis in Statistics and Econometrics (P. K. Goel and N. S. Iyengar, eds.). Berlin: Springer, 289-302.
Geisser, S. (1993). Predictive Inference: an Introduction. London: Chapman and Hall.
Geisser, S. and Cornfield, J. (1963). Posterior distributions for multivariate normal parameters. J. Roy. Statist. Soc. B 25, 368-376.
Geisser, S. and Desu, M. M. (1968). Predictive zero-mean uniform discrimination. Biometrika 55, 519-524.
Geisser, S. and Eddy, W. F. (1979). A predictive approach to model selection. J. Amer. Statist. Assoc. 74, 153-160.
Geisser, S., Hodges, J. S., Press, S. J. and Zellner, A. (eds.) (1990). Bayesian and Likelihood Methods in Statistics and Econometrics: Essays in Honor of George A. Barnard. Amsterdam: North-Holland.
Gelfand, A. E. and Dey, D. K. (1991). On Bayesian robustness in contaminated classes of priors. Statistics and Decisions 9, 63-80.


Gelfand, A. E., Dey, D. K. and Chang, H. (1992). Model determination using predictive distributions with implementation via sampling-based methods. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 147-167 (with discussion).
Gelfand, A. E., Hills, S. E., Racine-Poon, A. and Smith, A. F. M. (1990). Illustration of Bayesian inference in normal data models using Gibbs sampling. J. Amer. Statist. Assoc. 85, 972-985.
Gelfand, A. E. and Smith, A. F. M. (1990). Sampling-based approaches to calculating marginal densities. J. Amer. Statist. Assoc. 85, 398-409.
Gelfand, A. E., Smith, A. F. M. and Lee, T.-M. (1992). Bayesian analysis of constrained parameter and truncated data problems using Gibbs sampling. J. Amer. Statist. Assoc. 87, 523-532.
Gelman, A. and Rubin, D. B. (1992a). Inference from iterative simulation using multiple sequences. Statist. Sci. 7, 457-511 (with discussion).
Gelman, A. and Rubin, D. B. (1992b). A single series from the Gibbs sampler provides a false sense of security. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 625-631.
Geman, S. (1988). Experiments in Bayesian image analysis. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 159-171 (with discussion).
Geman, S. and Geman, D. (1984). Stochastic relaxation, Gibbs distributions and the Bayesian restoration of images. IEEE Trans. Pattern Anal. Machine Intelligence 6, 721-741.
Genest, C. (1984a). A characterization theorem for externally Bayesian groups. Ann. Statist. 12, 1100-1105.
Genest, C. (1984b). A conflict between two axioms for combining subjective distributions. J. Roy. Statist. Soc. B 46, 403-405.
Genest, C. and Zidek, J. (1986). Combining probability distributions: a critique and an annotated bibliography. Statist. Sci. 1, 114-148 (with discussion).
George, E. I., Makov, U. E. and Smith, A. F. M. (1993). Conjugate likelihood distributions. Scandinavian J. Statist. 20, 147-156.
George, E. I., Makov, U. E. and Smith, A. F. M. (1994). Bayesian hierarchical analysis for exponential families via Markov chain Monte Carlo. Aspects of Uncertainty: a Tribute to D. V. Lindley (P. R. Freeman and A. F. M. Smith, eds.). Chichester: Wiley, 181-199.
George, E. I. and McCulloch, R. (1993a). Variable selection via Gibbs sampling. J. Amer. Statist. Assoc. 88, 881-889.
George, E. I. and McCulloch, R. (1993b). On obtaining invariant prior distributions. J. Statist. Planning and Inference 37, 169-179.
Geweke, J. (1988). Antithetic acceleration of Monte Carlo integration in Bayesian inference. J. Econometrics 38, 73-90.
Geweke, J. (1989). Bayesian inference in econometric models using Monte Carlo integration. Econometrica 57, 1317-1339.
Geyer, C. J. (1992). Practical Markov chain Monte Carlo. Statist. Sci. 7, 473-511 (with discussion).

Ghosh, J. K., Ghosal, S. and Samanta, T. (1994). Stability and convergence of the posterior in non-regular problems. Statistical Decision Theory and Related Topics V (S. S. Gupta and J. O. Berger, eds.). Berlin: Springer, 183-199.
Ghosh, J. K. and Mukerjee, R. (1992). Non-informative priors. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 195-210 (with discussion).
Ghosh, M. (1991). Hierarchical and empirical Bayes sequential estimation. Handbook of Statistics 8. Statistical Methods in Biological and Medical Sciences (C. R. Rao and R. Chakraborty, eds.). Amsterdam: North-Holland, 441-458.
Ghosh, M. (1992a). Hierarchical and empirical Bayes multivariate estimation. Current Issues in Statistical Inference: Essays in Honor of D. Basu (M. Ghosh and P. K. Pathak, eds.). Hayward, CA: IMS, 151-177.
Ghosh, M. (1992b). Constrained Bayes estimation with application. J. Amer. Statist. Assoc. 87, 533-540.
Ghosh, M. and Pathak, P. K. (eds.) (1992). Current Issues in Statistical Inference: Essays in Honor of D. Basu. Hayward, CA: IMS.
Gilardoni, G. L. and Clayton, M. K. (1993). On reaching a consensus using DeGroot's iterative pooling. Ann. Statist. 21, 391-401.
Gilio, A. (1992a). C₀-coherence and extensions of conditional probabilities. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 633-640.
Gilio, A. (1992b). Incomplete probability assessments in decision analysis. J. It. Statist. Soc. 1, 67-76.
Gilio, A. and Scozzafava, R. (1985). Vague distributions in Bayesian testing of a null hypothesis. Metron 43, 167-174.
Gilks, W. R. (1992). Derivative-free adaptive rejection sampling for Gibbs sampling. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 641-649.
Gilks, W. R., Clayton, D. G., Spiegelhalter, D. J., Best, N. G., McNeil, A. J., Sharples, L. D. and Kirby, A. J. (1993). Modelling complexity: applications of Gibbs sampling in medicine. J. Roy. Statist. Soc. B 55, 39-52 (with discussion).
Gilks, W. R. and Wild, P. (1992). Adaptive rejection sampling for Gibbs sampling. Appl. Statist. 41, 337-348.
Gillies, D. A. (1987). Was Bayes a Bayesian? Hist. Math. 14, 325-346.
Gilliland, D. C., Boyer, J. E. Jr. and Tsao, H. J. (1982). Bayes empirical Bayes: finite parameter case. Ann. Statist. 10, 1277-1282.
Girelli-Bruni, E. (ed.) (1981). Teoria delle Decisioni in Medicina. Verona: Bertani.
Giron, F. J., Martinez, L. and Morcillo, C. (1992). A Bayesian justification for the analysis of residuals and influence measures. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 651-660.
Giron, F. J. and Rios, S. (1980). Quasi-Bayesian behaviour: a more realistic approach to decision making? Bayesian Statistics (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Valencia: University Press, 17-38 (with discussion).

Girshick, M. A. and Savage, L. J. (1951). Bayes and minimax estimates for quadratic loss functions. Proc. Second Berkeley Symp. (J. Neyman, ed.). Berkeley: Univ. California Press, 53-74.
Godambe, V. P. (1969). Some aspects of the theoretical development in survey sampling. New Developments in Survey Sampling (N. L. Johnson and H. Smith Jr., eds.). New York: Wiley, 27-58.
Godambe, V. P. (1970). Foundations of survey sampling. Amer. Statist. 24, 33-38.
Godambe, V. P. and Sprott, D. A. (eds.) (1971). Foundations of Statistical Inference. Toronto: Holt, Rinehart and Winston.
Goel, P. K. (1983). Information measures and Bayesian hierarchical models. J. Amer. Statist. Assoc. 78, 408-410.
Goel, P. K. (1988). Software for Bayesian analysis: current status and additional needs. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 173-188 (with discussion).
Goel, P. K. and DeGroot, M. H. (1979). Comparison of experiments and information measures. Ann. Statist. 7, 1066-1077.
Goel, P. K. and DeGroot, M. H. (1980). Only normal distributions have linear posterior expectations in linear regression. J. Amer. Statist. Assoc. 75, 895-900.
Goel, P. K. and DeGroot, M. H. (1981). Information about hyperparameters in hierarchical models. J. Amer. Statist. Assoc. 76, 140-147.
Goel, P. K., Gulati, C. M. and DeGroot, M. H. (1992). Optimal stopping for a non-communicating team. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 211-226 (with discussion).
Goel, P. K. and Iyengar, N. S. (eds.) (1992). Bayesian Analysis in Statistics and Econometrics. Berlin: Springer.
Goel, P. K. and Zellner, A. (eds.) (1986). Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti. Amsterdam: North-Holland.
Goicoechea, A., Duckstein, L. and Zionts, S. (eds.) (1992). Multiple Criteria Decision Making. Berlin: Springer.
Goldstein, M. (1981). Revising previsions: a geometric interpretation. J. Roy. Statist. Soc. B 43, 105-130.
Goldstein, M. (1985). Temporal coherence. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 231-248 (with discussion).
Goldstein, M. (1986a). Separating beliefs. Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti (P. K. Goel and A. Zellner, eds.). Amsterdam: North-Holland, 197-215.
Goldstein, M. (1986b). Exchangeable belief structures. J. Amer. Statist. Assoc. 81, 971-976.
Goldstein, M. (1986c). Prevision. Encyclopedia of Statistical Sciences 7 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 175-176.
Goldstein, M. (1987a). Systematic analysis of limited belief specifications. The Statistician 36, 191-199.
Goldstein, M. (1987b). Can we build a subjectivist statistical package? Probability and Bayesian Statistics (R. Viertl, ed.). London: Plenum, 203-217.

Goldstein, M. (1988). The data trajectory. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 189-209 (with discussion).
Goldstein, M. (1991). Belief transforms and the comparison of hypotheses. Ann. Statist. 19, 2067-2089.
Goldstein, M. (1994). Revising exchangeable beliefs: subjectivist foundations for the inductive argument. Aspects of Uncertainty: a Tribute to D. V. Lindley (P. R. Freeman and A. F. M. Smith, eds.). Chichester: Wiley, 201-222.
Goldstein, M. and Howard, J. V. (1991). A likelihood paradox. J. Roy. Statist. Soc. B 53, 619-628 (with discussion).
Goldstein, M. and Smith, A. F. M. (1974). Ridge-type estimators for regression analysis. J. Roy. Statist. Soc. B 36, 284-319.
Gómez-Villegas, M. A. and Gómez, E. (1992). Bayes factors in testing precise hypotheses. Comm. Statist. A 21, 1707-1715.
Gómez-Villegas, M. A. and Main, P. (1992). The influence of prior and likelihood tail behaviour on the posterior distribution. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 661-667.
Good, I. J. (1950). Probability and the Weighing of Evidence. London: Griffin; New York: Hafner Press.
Good, I. J. (1952). Rational decisions. J. Roy. Statist. Soc. B 14, 107-114.
Good, I. J. (1959). Kinds of probability. Science 129, 443-447.
Good, I. J. (1960). Weight of evidence, corroboration, explanatory power, information and the utility of experiments. J. Roy. Statist. Soc. B 22, 319-331.
Good, I. J. (1962). Subjective probability as the measure of a non-measurable set. Logic, Methodology and Philosophy of Science (E. Nagel, P. Suppes and A. Tarski, eds.). Stanford: University Press, 319-329.
Good, I. J. (1965). The Estimation of Probabilities. An Essay on Modern Bayesian Methods. Cambridge, MA: The MIT Press.
Good, I. J. (1966). A derivation of the probabilistic explanation of information. J. Roy. Statist. Soc. B 28, 578-581.
Good, I. J. (1967). A Bayesian significance test for multinomial distributions. J. Roy. Statist. Soc. B 29, 399-431.
Good, I. J. (1969). What is the use of a distribution? Multivariate Analysis 2 (P. R. Krishnaiah, ed.). New York: Academic Press, 183-203.
Good, I. J. (1971). The probabilistic explication of information, evidence, surprise, causality, explanation and utility. Twenty seven principles of rationality. Foundations of Statistical Inference (V. P. Godambe and D. A. Sprott, eds.). Toronto: Holt, Rinehart and Winston, 108-141 (with discussion).
Good, I. J. (1976). The Bayesian influence, or how to sweep subjectivism under the carpet. Foundations of Probability Theory, Statistical Inference and Statistical Theories of Science 2 (W. L. Harper and C. A. Hooker, eds.). Dordrecht: Reidel, 119-168.
Good, I. J. (1977). Dynamic probability, computer chess and the measurement of knowledge. Machine Intelligence 8 (E. W. Elcock and D. Michie, eds.). Chichester: Ellis Horwood, 139-150. Reprinted in Good (1983), 106-116.


Good, I. J. (1980a). The contributions of Jeffreys to Bayesian statistics. Bayesian Analysis in Econometrics and Statistics: Essays in Honor of Harold Jeffreys (A. Zellner, ed.). Amsterdam: North-Holland, 21-34.
Good, I. J. (1980b). Some history of the hierarchical Bayesian methodology. Bayesian Statistics (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Valencia: University Press, 489-519.
Good, I. J. (1982). Degrees of belief. Encyclopedia of Statistical Sciences 2 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 287-292.
Good, I. J. (1983). Good Thinking: The Foundations of Probability and its Applications. Minneapolis: Univ. Minnesota Press.
Good, I. J. (1985). Weight of evidence: a brief survey. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 249-270 (with discussion).
Good, I. J. (1987). Hierarchical Bayesian and empirical Bayesian methods. Amer. Statist. 41 (with discussion).
Good, I. J. (1988a). Statistical evidence. Encyclopedia of Statistical Sciences 8 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 651-656.
Good, I. J. (1988b). The interface between statistics and philosophy of science. Statist. Sci. 3, 386-398 (with discussion).
Good, I. J. (1992). The Bayes/non-Bayes compromise: a brief review. J. Amer. Statist. Assoc. 87, 597-606.
Good, I. J. and Gaskins, R. (1971). Non-parametric roughness penalties for probability densities. Biometrika 58, 255-277.
Good, I. J. and Gaskins, R. (1980). Density estimation and bump hunting by the penalized likelihood method, exemplified by scattering and meteorite data. J. Amer. Statist. Assoc. 75, 42-73.
Goodwin, P. and Wright, G. (1991). Decision Analysis for Management Judgement. New York: Wiley.
Gordon, N. J. and Smith, A. F. M. (1993). Approximate non-Gaussian Bayesian estimation and modal consistency. J. Roy. Statist. Soc. B 55, 913-918.
Goutis, C. and Casella, G. (1991). Improved invariant confidence intervals for a normal variance. Ann. Statist. 19, 2019-2031.
Grandy, W. T. and Schick, L. H. (eds.) (1991). Maximum Entropy and Bayesian Methods. Dordrecht: Kluwer.
Grayson, C. J. (1960). Decisions under Uncertainty: Drilling Decisions by Oil and Gas Operators. Harvard, MA: University Press.
Grenander, U. and Miller, M. I. (1994). Representations of knowledge in complex systems. J. Roy. Statist. Soc. B 56 (to appear, with discussion).
Grieve, A. P. (1987). Applications of Bayesian software: two examples. The Statistician 36, 283-288.
Gu, C. (1992). Penalized likelihood regression: a Bayesian analysis. Statistica Sinica 2, 255-264.
Gupta, S. S. and Berger, J. O. (eds.) (1988). Statistical Decision Theory and Related Topics IV 1. Berlin: Springer.


Gupta, S. S. and Berger, J. O. (eds.) (1994). Statistical Decision Theory and Related Topics V. Berlin: Springer.
Gutiérrez-Peña, E. (1992). Expected logarithmic divergence for exponential families. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 669-674.
Guttman, I. (1970). Statistical Tolerance Regions: Classical and Bayesian. London: Griffin.
Guttman, I. and Peña, D. (1988). Outliers and influence: evaluation by posteriors of parameters in the linear model. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 631-640.
Guttman, I. and Peña, D. (1993). A Bayesian look at the question of diagnostics. Statistica Sinica 3, 367-390.
Haag, J. (1924). Sur un problème général de probabilités et ses diverses applications. Proc. Internat. Congress Math. Toronto, 659-674.
Hacking, I. (1965). Slightly more realistic personal probability. Philosophy of Science 34, 311-325.
Hacking, I. (1975). The Emergence of Probability. Cambridge: University Press.
Hadley, G. (1967). Introduction to Probability and Statistical Decision Theory. San Francisco, CA: Holden-Day.
Hald, A. (1968). Bayesian single acceptance plans for continuous prior distributions. Technometrics 10, 667-683.
Haldane, J. B. S. (1931). A note on inverse probability. Proc. Camb. Phil. Soc. 28, 55-61.
Haldane, J. B. S. (1948). The precision of observed values of small frequencies. Biometrika 35, 297-303.
Halmos, P. R. and Savage, L. J. (1949). Application of the Radon-Nikodym theorem to the theory of sufficient statistics. Ann. Math. Statist. 20, 225-241.
Halter, A. N. and Dean, G. W. (1971). Decisions under Uncertainty. Cincinnati, OH: South-Western.
Harrison, P. J. and West, M. (1987). Practical Bayesian forecasting. The Statistician 36, 115-125.
Harsanyi, J. (1967). Games with incomplete information played by Bayesian players. Manag. Sci. 14, 159-182, 320-334, 486-502.
Hartigan, J. A. (1964). Invariant prior distributions. Ann. Math. Statist. 35, 836-845.
Hartigan, J. A. (1965). The asymptotically unbiased prior distribution. Ann. Math. Statist. 36, 1137-1152.
Hartigan, J. A. (1966a). Estimation by ranking parameters. J. Roy. Statist. Soc. B 28, 32-44.
Hartigan, J. A. (1966b). Note on the confidence prior of Welch and Peers. J. Roy. Statist. Soc. B 28, 55-56.
Hartigan, J. A. (1967). The likelihood and invariance principles. J. Roy. Statist. Soc. B 29, 533-539.
Hartigan, J. A. (1969). Use of subsample values as typical values. J. Amer. Statist. Assoc. 64, 1303-1317.
Hartigan, J. A. (1971). Similarity and probability. Foundations of Statistical Inference (V. P. Godambe and D. A. Sprott, eds.). Toronto: Holt, Rinehart and Winston, 305-313 (with discussion).

Hartigan, J. A. (1975). Necessary and sufficient conditions for asymptotic normality of a statistic and its subsample values. Ann. Statist. 3, 573-580.
Hartigan, J. A. (1983). Bayes Theory. Berlin: Springer.
Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. Biometrika 57, 97-109.
Heath, D. L. and Sudderth, W. D. (1972). On a theorem of de Finetti, oddsmaking and game theory. Ann. Math. Statist. 43, 2072-2077.
Heath, D. L. and Sudderth, W. D. (1976). De Finetti's theorem for exchangeable random variables. Amer. Statist. 30, 188-189.
Heath, D. L. and Sudderth, W. D. (1978). On finitely additive priors, coherence and extended admissibility. Ann. Statist. 6, 333-345.
Heath, D. L. and Sudderth, W. D. (1989). Coherent inference from improper priors and from finitely additive priors. Ann. Statist. 17, 907-919.
Hens, T. (1992). A note on Savage's theorem with a finite number of states. J. Risk and Uncertainty 5, 63-71.
Herstein, I. N. and Milnor, J. (1953). An axiomatic approach to measurable utility. Econometrica 21, 291-297.
Hewitt, E. and Savage, L. J. (1955). Symmetric measures on Cartesian products. Trans. Amer. Math. Soc. 80, 470-501.
Hewlett, P. S. and Plackett, R. L. (1979). The Interpretation of Quantal Responses in Biology. London: Edward Arnold.
Heyde, C. C. and Johnstone, I. M. (1979). On asymptotic posterior normality for stochastic processes. J. Roy. Statist. Soc. B 41, 184-189.
Hildreth, C. (1963). Bayesian statisticians and remote clients. Econometrica 31, 422-438.
Hill, B. M. (1968). Posterior distributions of percentiles: Bayes' theorem for sampling from a finite population. J. Amer. Statist. Assoc. 63, 677-691.
Hill, B. M. (1969). Foundations of the theory of least squares. J. Roy. Statist. Soc. B 31, 89-97.
Hill, B. M. (1974). On coherence, inadmissibility and inference about many parameters in the theory of least squares. Studies in Bayesian Econometrics and Statistics: in Honor of Leonard J. Savage (S. E. Fienberg and A. Zellner, eds.). Amsterdam: North-Holland, 555-584.
Hill, B. M. (1975). A simple general approach to inference about the tail of a distribution. Ann. Statist. 3, 1163-1174.
Hill, B. M. (1980). On finite additivity, non-conglomerability, and statistical paradoxes. Bayesian Statistics (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Valencia: University Press, 39-46 (with discussion).
Hill, B. M. (1986). Some subjective Bayesian considerations in the selection of models. Econometric Reviews 4, 191-288.
Hill, B. M. (1987). The validity of the likelihood principle. Amer. Statist. 41, 95-100.
Hill, B. M. (1988). De Finetti's theorem, induction and A(n), or Bayesian nonparametric predictive inference. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 211-241 (with discussion).


Hill, B. M. (1990). A theory of Bayesian data analysis. Bayesian and Likelihood Methods in Statistics and Econometrics: Essays in Honor of George A. Barnard (S. Geisser, J. S. Hodges, S. J. Press and A. Zellner, eds.). Amsterdam: North-Holland, 49-73.
Hill, B. M. (1992). Bayesian nonparametric prediction and statistical inference. Bayesian Analysis in Statistics and Econometrics (P. K. Goel and N. S. Iyengar, eds.). Berlin: Springer, 43-76.
Hill, B. M. (1994). On Steinian shrinkage estimators: the finite/infinite problem and formalism in probability and statistics. Aspects of Uncertainty: a Tribute to D. V. Lindley (P. R. Freeman and A. F. M. Smith, eds.). Chichester: Wiley, 223-260.
Hill, B. M. and Lane, D. (1986). Conglomerability and countable additivity. Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti (P. K. Goel and A. Zellner, eds.). Amsterdam: North-Holland, 45-57.
Hills, S. E. (1987). Reference priors and identifiability problems in non-linear models. The Statistician 36, 235-240.
Hills, S. E. and Smith, A. F. M. (1992). Parametrization issues in Bayesian inference. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 227-246 (with discussion).
Hills, S. E. and Smith, A. F. M. (1993). Diagnostic plots for improved parametrisation in Bayesian inference. Biometrika 80, 61-74.
Hinkelmann, K. (ed.) (1990). Foundations of Statistics. An International Symposium in Honor of I. J. Good. Special issue, J. Statist. Planning and Inference 25.
Hinkley, D. V. (1979). Predictive likelihood. Ann. Statist. 7, 718-728.
Hipp, C. (1974). Sufficient statistics and exponential families. Ann. Statist. 2, 1283-1292.
Hjort, N. L. (1990). Nonparametric Bayes estimators based on beta processes in models for life history data. Ann. Statist. 18, 1259-1294.
Hoadley, B. (1970). A Bayesian look at inverse linear regression. J. Amer. Statist. Assoc. 65, 356-369.
Hodges, J. S. (1987). Uncertainty, policy analysis and statistics. Statist. Sci. 2, 259-291 (with discussion).
Hodges, J. S. (1990). Can/may Bayesians use pure tests of significance? Bayesian and Likelihood Methods in Statistics and Econometrics: Essays in Honor of George A. Barnard (S. Geisser, J. S. Hodges, S. J. Press and A. Zellner, eds.). Amsterdam: North-Holland, 75-90.
Hodges, J. S. (1992). Who knows what alternative lurks in the hearts of significance tests? Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 247-266 (with discussion).
Hogarth, R. (1975). Cognitive processes and the assessment of subjective probability distributions. J. Amer. Statist. Assoc. 70, 271-294.
Hogarth, R. (1980). Judgement and Choice. New York: Wiley.
Holland, G. D. (1962). The Reverend Thomas Bayes, F.R.S. (1702-1761). J. Roy. Statist. Soc. A 125, 451-461.
Howson, C. and Urbach, P. (1989). Scientific Reasoning: the Bayesian Approach. La Salle, IL: Open Court.

Hull, J., Moore, P. G. and Thomas, H. (1973). Utility and its measurement. J. Roy. Statist. Soc. A 136, 226-247.
Huseby, A. B. (1988). Combining opinions in a predictive case. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 641-651.
Huzurbazar, V. S. (1976). Sufficient Statistics. New York: Marcel Dekker.
Hwang, J. T. (1985). Universal domination and stochastic domination: estimation simultaneously under a broad class of loss functions. Ann. Statist. 13, 295-314.
Hwang, J. T. (1988). Stochastic and universal domination. Encyclopedia of Statistical Sciences 8 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 781-784.
Hylland, A. and Zeckhauser, R. (1981). The impossibility of Bayesian group decision making with separate aggregation of beliefs and values. Econometrica 49, 1321-1336.
Ibragimov, I. A. and Hasminski, R. Z. (1973). On the information in a sample about a parameter. Proc. 2nd Internat. Symp. Information Theory (B. N. Petrov and F. Csaki, eds.). Budapest: Akademiai Kiado, 295-309.
Irony, T. Z. (1992). Bayesian estimation for discrete distributions. J. Appl. Statist. 19, 533-549.
Irony, T. Z. (1993). Information in sampling rules. J. Statist. Planning and Inference 36, 27-38.
Irony, T. Z., Pereira, C. A. de B. and Barlow, R. E. (1992). Bayesian models for quality assurance. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 675-688.
Isaacs, G. L., Christ, D. E., Novick, M. R. and Jackson, P. H. (1974). Tables for Bayesian Statisticians. Ames, IO: Iowa University Press.
Iversen, G. R. (1984). Bayesian Statistical Inference. Beverly Hills, CA: Sage.
Jackson, J. E. (1960). Bibliography on sequential analysis. J. Amer. Statist. Assoc. 55, 561-580.
James, W. and Stein, C. (1961). Estimation with quadratic loss. Proc. Fourth Berkeley Symp. 1 (J. Neyman and E. L. Scott, eds.). Berkeley: Univ. California Press, 361-380.
Jaynes, E. T. (1958). Probability Theory in Science and Engineering. Dallas: Mobil Oil Co.
Jaynes, E. T. (1968). Prior probabilities. IEEE Trans. Systems, Science and Cybernetics 4, 227-241.
Jaynes, E. T. (1971). The well-posed problem. Foundations of Statistical Inference (V. P. Godambe and D. A. Sprott, eds.). Toronto: Holt, Rinehart and Winston, 342-356 (with discussion).
Jaynes, E. T. (1976). Confidence intervals vs. Bayesian intervals. Foundations of Probability Theory, Statistical Inference and Statistical Theories of Science 2 (W. L. Harper and C. A. Hooker, eds.). Dordrecht: Reidel, 175-257 (with discussion).
Jaynes, E. T. (1980). Marginalization and prior probabilities. Bayesian Analysis in Econometrics and Statistics: Essays in Honor of Harold Jeffreys (A. Zellner, ed.). Amsterdam: North-Holland, 43-87 (with discussion).
Jaynes, E. T. (1983). Papers on Probability, Statistics and Statistical Physics (R. D. Rosenkrantz, ed.). Dordrecht: Kluwer.

Jaynes, E. T. (1985). Highly informative priors. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 329-359 (with discussion).
Jaynes, E. T. (1986). Some applications and extensions of the de Finetti representation theorem. Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti (P. K. Goel and A. Zellner, eds.). Amsterdam: North-Holland, 31-42.
Jeffrey, R. C. (1965/1983). The Logic of Decision. New York: McGraw-Hill. Second edition in 1983, Chicago: University Press.
Jeffrey, R. C. (ed.) (1981). Studies in Inductive Logic and Probability. Berkeley: Univ. California Press.
Jeffreys, H. (1931/1973). Scientific Inference. Cambridge: University Press. Third edition in 1973, Cambridge: University Press.
Jeffreys, H. (1939/1961). Theory of Probability. Oxford: University Press. Third edition in 1961, Oxford: University Press.
Jeffreys, H. (1946). An invariant form for the prior probability in estimation problems. Proc. Roy. Soc. A 186, 453-461.
Jeffreys, H. (1955). The present position in probability theory. Brit. J. Philos. Sci. 5, 275-289.
Jeffreys, H. and Jeffreys, B. S. (1946/1972). Methods of Mathematical Physics. Cambridge: University Press. Third edition in 1972, Cambridge: University Press.
Jewell, W. S. (1974). Credible means are exact Bayesian for simple exponential families. ASTIN Bulletin 8, 77-90.
Jewell, W. S. (1988). A heteroscedastic hierarchical model. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 657-663.
Johnson, N. L. and Kotz, S. (1969). Discrete Distributions. New York: Wiley.
Johnson, N. L. and Kotz, S. (1970). Continuous Univariate Distributions. New York: Wiley.
Johnson, N. L. and Kotz, S. (1972). Continuous Multivariate Distributions. New York: Wiley.
Johnson, R. A. (1967). An asymptotic expansion for posterior distributions. Ann. Math. Statist. 38, 1899-1906.
Johnson, R. A. (1970). Asymptotic expansions associated with posterior distributions. Ann. Math. Statist. 41, 851-864.
Johnson, R. A. and Ladalla, J. N. (1979). The large-sample behaviour of posterior distributions with sampling from multiparameter exponential family models and allied results. Sankhya B 41, 169-215.
Johnson, W. and Geisser, S. (1982). Assessing the predictive influence of observations. Statistics and Probability: Essays in Honor of C. R. Rao (G. Kallianpur, P. K. Krishnaiah and J. K. Ghosh, eds.). Amsterdam: North-Holland, 343-358.
Johnson, W. and Geisser, S. (1983). A predictive view of the detection and characterisation of influential observations in regression analysis. J. Amer. Statist. Assoc. 78, 137-144.
Johnson, W. and Geisser, S. (1985). Estimative influence measures for the multivariate general linear model. J. Statist. Planning and Inference 11, 33-56.
Joshi, V. M. (1983). Likelihood principle. Encyclopedia of Statistical Sciences 4 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 644-647.


Justice, J. M. (ed.) (1987). Maximum Entropy and Bayesian Methods in Applied Statistics. Cambridge: University Press.
Kadane, J. B. (1974). The role of identification in Bayesian theory. Studies in Bayesian Econometrics and Statistics: in Honor of Leonard J. Savage (S. E. Fienberg and A. Zellner, eds.). Amsterdam: North-Holland, 175-191.
Kadane, J. B. (1980). Predictive and structural methods for eliciting prior distributions. Bayesian Analysis in Econometrics and Statistics: Essays in Honor of Harold Jeffreys (A. Zellner, ed.). Amsterdam: North-Holland, 89-109.
Kadane, J. B. (ed.) (1984). Robustness of Bayesian Analysis. Amsterdam: North-Holland.
Kadane, J. B. (1992). Healthy scepticism as an expected utility explanation of the phenomena of Allais and Ellsberg. Theory and Decision 32, 57-64.
Kadane, J. B. (1993). Several Bayesians: a review. Test 2, 1-32 (with discussion).
Kadane, J. B. and Chuang, D. T. (1978). Stable decision problems. Ann. Statist. 6, 1095-1110.
Kadane, J. B. and Larkey, P. (1982). Subjective probability and the theory of games. Manag. Sci. 28, 113-120.
Kadane, J. B. and Larkey, P. (1983). The confusion of is and ought in game theoretic contexts. Manag. Sci. 29, 1365-1379.
Kadane, J. B., Schervish, M. J. and Seidenfeld, T. (1986). Statistical implications of finitely additive probability. Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti (P. K. Goel and A. Zellner, eds.). Amsterdam: North-Holland, 59-76.
Kadane, J. B. and Seidenfeld, T. (1990). Randomization in a Bayesian perspective. J. Statist. Planning and Inference 25, 329-345.
Kadane, J. B. and Seidenfeld, T. (1992). Equilibrium, common knowledge and optimal sequential decisions. Knowledge, Belief and Strategic Interaction (C. Bicchieri and M. L. Dalla Chiara, eds.). Cambridge: University Press, 27-45.
Kagan, A. M., Linnik, Y. V. and Rao, C. R. (1973). Characterization Problems in Mathematical Statistics. New York: Wiley.
Kahneman, D., Slovic, P. and Tversky, A. (eds.) (1982). Judgement under Uncertainty: Heuristics and Biases. Cambridge: University Press.
Kahneman, D. and Tversky, A. (1979). Prospect theory: an analysis of decision under risk. Econometrica 47, 263-291.
Kalbfleish, J. G. (1971). Likelihood methods in prediction. Foundations of Statistical Inference (V. P. Godambe and D. A. Sprott, eds.). Toronto: Holt, Rinehart and Winston, 372-392 (with discussion).
Kalbfleish, J. G. and Sprott, D. A. (1970). Application of likelihood methods to models involving large numbers of parameters. J. Roy. Statist. Soc. B 32, 175-208 (with discussion).
Kalbfleish, J. G. and Sprott, D. A. (1973). Marginal and conditional likelihoods. Sankhya A 35, 311-328.
Kapur, J. M. and Kesavan, H. K. (1992). Entropy Optimization Principles and Applications. New York: Academic Press.
Karlin, S. and Rubin, H. (1956). The theory of decision procedures for distributions with monotone likelihood ratio. Ann. Math. Statist. 27, 272-299.


Kashyap, R. L. (1971). Prior probability and uncertainty. IEEE Trans. Information Theory 14, 641-650.
Kashyap, R. L. (1974). Minimax estimation with divergence loss function. Information Sciences 7, 341-364.
Kass, R. E. (1989). The geometry of asymptotic inference. Statist. Sci. 4, 188-234.
Kass, R. E. (1990). Data-translated likelihood and Jeffreys' rule. Biometrika 77, 107-114.
Kass, R. E. and Slate, E. H. (1992). Reparametrization and diagnostics of posterior non-normality. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 289-305 (with discussion).
Kass, R. E. and Steffey, D. (1989). Approximate Bayesian inference in conditionally independent hierarchical models (parametric empirical Bayes). J. Amer. Statist. Assoc. 84, 717-726.
Kass, R. E., Tierney, L. and Kadane, J. B. (1988). Asymptotics in Bayesian computation. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 261-278 (with discussion).
Kass, R. E., Tierney, L. and Kadane, J. B. (1989a). The validity of posterior expansions based on Laplace's method. Bayesian and Likelihood Methods in Statistics and Econometrics: Essays in Honor of George A. Barnard (S. Geisser, J. S. Hodges, S. J. Press and A. Zellner, eds.). Amsterdam: North-Holland, 473-488.
Kass, R. E., Tierney, L. and Kadane, J. B. (1989b). Approximate methods for assessing influence and sensitivity in Bayesian analysis. Biometrika 76, 663-674.
Kass, R. E., Tierney, L. and Kadane, J. B. (1991). Laplace's method in Bayesian analysis. Statistical Multiple Integration (N. Flournoy and R. K. Tsutakawa, eds.). Providence, RI: ASA, 89-99.
Kass, R. E. and Vaidyanathan, S. (1992). Approximate Bayes factors and orthogonal parameters, with application to testing equality of two binomial proportions. J. Roy. Statist. Soc. B 54, 129-144.
Keeney, R. L. (1992). Value-Focused Thinking. Harvard, MA: University Press.
Keeney, R. L. and Raiffa, H. (1976). Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New York: Wiley.
Kelly, J. S. (1991). Social choice bibliography. Social Choice and Welfare 8, 97-169.
Kempthorne, P. J. (1986). Decision-theoretic measures of influence in regression. J. Roy. Statist. Soc. B 48, 370-378.
Kestemont, M.-P. (1987). The Kolmogorov distance as comparison measure between parametric and non-parametric Bayesian predictions. The Statistician 36, 259-264.
Keynes, J. M. (1921/1929). A Treatise on Probability. London: Macmillan. Second edition in 1929, London: Macmillan. Reprinted in 1962, New York: Harper and Row.
Khintchine, A. I. (1932). Sur les classes d'événements équivalents. Mat. Sbornik 39, 40-43.
Kiefer, J. and Wolfowitz, J. (1956). Consistency of the maximum likelihood estimator in the presence of infinitely many nuisance parameters. Ann. Math. Statist. 27, 887-906.
Kim, K. H. and Roush, F. W. (1987). Team Theory. Chichester: Ellis Horwood.
Kimeldorf, G. S. and Wahba, G. (1970). A correspondence between Bayesian estimation in stochastic processes and smoothing by splines. Ann. Math. Statist. 41, 495-502.

Kingman, J. F. C. (1972). On random sequences with spherical symmetry. Biometrika 59, 492-494.
Kingman, J. F. C. and Taylor, S. J. (1966). Introduction to Measure and Probability. Cambridge: University Press.
Klein, R. and Press, S. J. (1992). Adaptive Bayesian classification of spatial data. J. Amer. Statist. Assoc. 87, 844-851.
Klein, R. W. and Brown, S. J. (1984). Model selection when there is minimal prior information. Econometrica 52, 1291-1312.
Kleiter, G. D. (1980). Bayes-Statistik: Grundlagen und Anwendungen. Berlin: W. de Gruyter.
Kloek, T. and van Dijk, H. K. (1978). Bayesian estimates of equation system parameters: an application of integration by Monte Carlo. Econometrica 46, 1-19.
Klugman, S. A. (1992). Bayesian Statistics in Actuarial Science, with Emphasis on Credibility. Dordrecht: Kluwer.
Koch, G. and Spizzichino, F. (eds.) (1982). Exchangeability in Probability and Statistics. Amsterdam: North-Holland.
Kogan, N. and Wallace, M. A. (1964). Risk Taking: A Study in Cognition and Personality. Toronto: Holt, Rinehart and Winston.
Kolmogorov, A. N. (1933/1950). Grundbegriffe der Wahrscheinlichkeitsrechnung. Berlin: Springer. English translation in 1950 as Foundations of the Theory of Probability. New York: Chelsea.
Koopman, B. O. (1936). On distributions admitting a sufficient statistic. Trans. Amer. Math. Soc. 39, 399-409.
Koopman, B. O. (1940). The axioms and algebra of intuitive probability. Ann. Math. 41, 269-292.
Korsan, R. J. (1992). Decision analytica: an example of Bayesian inference and decision theory using Mathematica. Economic and Financial Modelling with Mathematica (H. R. Varian, ed.). Berlin: Springer, 407-458.
Kraft, C., Pratt, J. W. and Seidenberg, A. (1959). Intuitive probability on finite sets. Ann. Math. Statist. 30, 408-419.
Krantz, D. H., Luce, R. D., Suppes, P. and Tversky, A. (1971). Foundations of Measurement 1. New York: Academic Press.
Krasker, W. S. (1984). A note on selecting parametric models in Bayesian inference. Ann. Statist. 12, 751-757.
Küchler, U. and Lauritzen, S. L. (1989). Exponential families, extreme point models, and minimal space-time invariant functions for stochastic processes with stationary and independent increments. Scandinavian J. Statist. 15, 237-261.
Kuhn, T. S. (1962). The Structure of Scientific Revolutions. Chicago: University Press.
Kullback, S. (1959/1968). Information Theory and Statistics. New York: Wiley. Second edition in 1968, New York: Dover. Reprinted in 1978, Gloucester, MA: Peter Smith.
Kullback, S. and Leibler, R. A. (1951). On information and sufficiency. Ann. Math. Statist. 22, 79-86.
Kyburg, H. E. (1961). Probability and the Logic of Rational Belief. Middletown: Wesleyan University Press.
Kyburg, H. E. (1974). The Logical Foundations of Statistical Inference. Dordrecht: Reidel.

Kyburg, H. E. and Smokler, H. E. (eds.) (1964/1980). Studies in Subjective Probability. Chichester: Wiley. Second edition in 1980, New York: Dover.
Lad, F. and Deely, J. J. (1994). Experimental design from a subjective utilitarian viewpoint. Aspects of Uncertainty: a Tribute to D. V. Lindley (P. R. Freeman and A. F. M. Smith, eds.). Chichester: Wiley, 267-282.
Lad, F., Dickey, J. M. and Rahman, M. A. (1990). The fundamental theorem of prevision. Statistica 50, 19-38.
LaMotte, L. R. (1985). Bayesian linear estimators. Encyclopedia of Statistical Sciences 5 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 20-22.
Lane, D. A. and Sudderth, W. D. (1983). Coherent and continuous inference. Ann. Statist. 11, 114-120.
Lane, D. A. and Sudderth, W. D. (1984). Coherent predictive inference. Sankhya A 46, 166-185.
Laplace, P. S. (1774/1986). Mémoire sur la probabilité des causes par les événements. Mem. Acad. Sci. Paris 6, 621-656. English translation in 1986 as "Memoir on the probability of the causes of events", with an introduction by S. M. Stigler. Statist. Sci. 1, 359-378.
Laplace, P. S. (1812). Théorie Analytique des Probabilités. Paris: Courcier. Reprinted as Oeuvres Complètes de Laplace 7, 1878-1912. Paris: Gauthier-Villars.
Laplace, P. S. (1814/1952). Essai Philosophique sur les Probabilités. Paris: Courcier. The 5th edition (1825) was the last revised by Laplace. English translation in 1952 as Philosophical Essay on Probabilities. New York: Dover.
Lauritzen, S. L. (1982). Statistical Models as Extremal Families. Aalborg: University Press.
Lauritzen, S. L. (1988). Extremal Families and Systems of Sufficient Statistics. Berlin: Springer.
Lauritzen, S. L. and Spiegelhalter, D. J. (1988). Local computations with probabilities on graphical structures, and their application to expert systems. J. Roy. Statist. Soc. B 50, 157-224 (with discussion).
LaValle, I. H. (1968). On cash equivalents and information evaluation in decisions under uncertainty. J. Amer. Statist. Assoc. 63, 253-290.
LaValle, I. H. (1970). An Introduction to Probability, Decision and Inference. Toronto: Holt, Rinehart and Winston.
LaValle, I. H. (1978). Fundamentals of Decision Analysis. Toronto: Holt, Rinehart and Winston.
Lavine, M. (1991a). Sensitivity in Bayesian statistics: the prior and the likelihood. J. Amer. Statist. Assoc. 86, 396-399.
Lavine, M. (1991b). An approach to robust Bayesian analysis for multidimensional parameter spaces. J. Amer. Statist. Assoc. 86, 400-403.
Lavine, M. (1992a). Some aspects of Polya tree distributions for statistical modelling. Ann. Statist. 20, 1222-1235.
Lavine, M. (1992b). Sensitivity in Bayesian statistics: the prior and the likelihood. J. Amer. Statist. Assoc. 86, 396-399.
Lavine, M. (1994). An approach to evaluating sensitivity in Bayesian regression analysis. J. Statist. Planning and Inference 40, 242-244 (with discussion).

Lavine, M., Wasserman, L. and Wolpert, R. L. (1991). Bayesian inference with specified prior marginals. J. Amer. Statist. Assoc. 86, 964-971.
Lavine, M., Wasserman, L. and Wolpert, R. L. (1993). Linearization of Bayesian robustness problems. J. Statist. Planning and Inference 37, 307-316.
Lavine, M. and West, M. (1992). A Bayesian method for classification and discrimination. Canadian J. Statist. 20, 451-461.
Leamer, E. E. (1978). Specification Searches: Ad hoc Inference with Nonexperimental Data. New York: Wiley.
LeCam, L. (1953). On some asymptotic properties of maximum likelihood estimates and related Bayes' estimates. Univ. California Pub. Statist. 1, 277-329.
LeCam, L. (1956). On the asymptotic theory of estimation and testing hypotheses. Proc. Third Berkeley Symp. 1 (J. Neyman and E. L. Scott, eds.). Berkeley: Univ. California Press, 129-156.
LeCam, L. (1958). Les propriétés asymptotiques des solutions de Bayes. Pub. Inst. Statist. Univ. Paris 7, 11-35.
LeCam, L. (1966). Likelihood functions for large numbers of independent observations. Research Papers in Statistics. Festschrift for J. Neyman (F. N. David, ed.). New York: Wiley, 167-187.
LeCam, L. (1970). On the assumptions used to prove asymptotic normality of maximum likelihood estimates. Ann. Math. Statist. 41, 802-828.
LeCam, L. (1986). Asymptotic Methods in Statistical Decision Theory. Berlin: Springer.
Lecoutre, B. (1984). L'Analyse Bayésienne des Comparaisons. Lille: Presses Universitaires.
Lee, P. M. (1989). Bayesian Statistics: an Introduction. London: Edward Arnold.
Lehmann, E. L. (1959/1983). Theory of Point Estimation. Second edition in 1983, New York: Wiley. Reprinted in 1991, Belmont, CA: Wadsworth.
Lehmann, E. L. (1959/1986). Testing Statistical Hypotheses. Second edition in 1986, New York: Wiley. Reprinted in 1991, Belmont, CA: Wadsworth.
Lehmann, E. L. (1990). Model specification. Statist. Sci. 5, 160-168.
Lejeune, M. and Faulkenberry, G. D. (1982). A simple predictive density function. J. Amer. Statist. Assoc. 77, 654-657.
Lempers, F. B. (1971). Posterior Probabilities of Alternative Linear Models. Rotterdam: University Press.
Lenk, P. J. (1991). Towards a practicable Bayesian nonparametric density estimator. Biometrika 78, 531-543.
Leonard, T. (1973). A Bayesian method for histograms. Biometrika 60, 297-308.
Leonard, T. (1975). Bayesian estimation methods for two-way contingency tables. J. Roy. Statist. Soc. B 37, 23-37.
Leonard, T. (1980). The roles of inductive modelling and coherence in Bayesian statistics. Bayesian Statistics (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Valencia: University Press, 537-555 and 568-581 (with discussion).
Leonard, T. and Hsu, J. S. J. (1992). Bayesian inference for a covariance matrix. Ann. Statist. 20, 1669-1696.

Leonard, T. and Hsu, J. S. J. (1994). The Bayesian analysis of categorical data: a selective review. Aspects of Uncertainty: a Tribute to D. V. Lindley (P. R. Freeman and A. F. M. Smith, eds.). Chichester: Wiley, 283-310.
Leonard, T., Hsu, J. S. J. and Tsui, K.-W. (1989). Bayesian marginal inference. J. Amer. Statist. Assoc. 84, 1051-1058.
Leonard, T. and Ord, K. (1976). An investigation of the F test procedure as an estimation short-cut. J. Roy. Statist. Soc. B 38, 95-98.
Levine, R. D. and Tribus, M. (eds.) (1978). The Maximum Entropy Formalism. Cambridge, MA: The MIT Press.
Ley, E. and Steel, M. F. J. (1992). Bayesian econometrics: conjugate analysis and rejection sampling. Economic and Financial Modelling with Mathematica (H. R. Varian, ed.). Berlin: Springer, 344-367.
Lindgren, B. W. (1971). Elements of Decision Theory. London: Macmillan.
Lindley, D. V. (1953). Statistical inference. J. Roy. Statist. Soc. B 15, 30-76.
Lindley, D. V. (1956). On a measure of the information provided by an experiment. Ann. Math. Statist. 27, 986-1005.
Lindley, D. V. (1957). A statistical paradox. Biometrika 44, 187-192.
Lindley, D. V. (1958). Fiducial distribution and Bayes' theorem. J. Roy. Statist. Soc. B 20, 102-107.
Lindley, D. V. (1961a). Dynamic programming and decision theory. Appl. Statist. 10, 39-51.
Lindley, D. V. (1961b). The use of prior probability distributions in statistical inference and decision. Proc. Fourth Berkeley Symp. 1 (J. Neyman and E. L. Scott, eds.). Berkeley: Univ. California Press, 453-468.
Lindley, D. V. (1964). The Bayesian analysis of contingency tables. Ann. Math. Statist. 35, 1622-1643.
Lindley, D. V. (1965). Introduction to Probability and Statistics from a Bayesian Viewpoint. Cambridge: University Press.
Lindley, D. V. (1969). Review of Fraser (1968). Biometrika 56, 453-456.
Lindley, D. V. (1971). The estimation of many parameters. Foundations of Statistical Inference (V. P. Godambe and D. A. Sprott, eds.). Toronto: Holt, Rinehart and Winston, 435-453 (with discussion).
Lindley, D. V. (1971/1985). Making Decisions. Second edition in 1985, Chichester: Wiley.
Lindley, D. V. (1972). Bayesian Statistics, a Review. Philadelphia, PA: SIAM.
Lindley, D. V. (1976). Bayesian statistics. Foundations of Probability Theory, Statistical Inference and Statistical Theories of Science 2 (W. L. Harper and C. A. Hooker, eds.). Dordrecht: Reidel, 353-363.
Lindley, D. V. (1977). A problem in forensic science. Biometrika 64, 207-213.
Lindley, D. V. (1978). The Bayesian approach. Scandinavian J. Statist. 5, 1-26.
Lindley, D. V. (1980a). Jeffreys's contribution to modern statistical thought. Bayesian Analysis in Econometrics and Statistics: Essays in Honor of Harold Jeffreys (A. Zellner, ed.). Amsterdam: North-Holland, 35-39.
Lindley, D. V. (1980b). Approximate Bayesian methods. Bayesian Statistics (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Valencia: University Press, 223-245 (with discussion).


Lindley, D. V. (1982a). Scoring rules and the inevitability of probability. Internat. Statist. Rev. 50, 1-26 (with discussion).
Lindley, D. V. (1982b). Bayesian inference. Encyclopedia of Statistical Sciences 1 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 197-204.
Lindley, D. V. (1982c). Coherence. Encyclopedia of Statistical Sciences 2 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 29-31.
Lindley, D. V. (1982d). The improvement of probability judgements. J. Roy. Statist. Soc. A 145, 117-126.
Lindley, D. V. (1983). Reconciliation of probability distributions. Operations Res. 31, 866-880.
Lindley, D. V. (1984). The next 50 years. J. Roy. Statist. Soc. A 147, 359-367.
Lindley, D. V. (1985). Reconciliation of discrete probability distributions. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 375-390 (with discussion).
Lindley, D. V. (1986). The reconciliation of decision analyses. Operations Res. 34, 289-295.
Lindley, D. V. (1987). The probability approach to the treatment of uncertainty in artificial intelligence and expert systems. Statist. Sci. 2, 17-24 (with discussion).
Lindley, D. V. (1990). The present position in Bayesian statistics. Statist. Sci. 5, 44-89 (with discussion).
Lindley, D. V. (1991). Subjective probability, decision analysis and their legal consequences. J. Roy. Statist. Soc. A 154, 83-92.
Lindley, D. V. (1992). Is our view of Bayesian statistics too narrow? Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 1-15 (with discussion).
Lindley, D. V. (1993). On the presentation of evidence. Math. Scientist 18, 60-63.
Lindley, D. V. and Deely, J. J. (1993). Optimal allocation of stratified sampling with partial information. Test 2, 147-160.
Lindley, D. V. and Novick, M. R. (1981). The role of exchangeability in inference. Ann. Statist. 9, 45-58.
Lindley, D. V. and Phillips, L. D. (1976). Inference for a Bernoulli process (a Bayesian view). Amer. Statist. 30, 112-119.
Lindley, D. V. and Scott, W. F. (1985). New Cambridge Elementary Statistical Tables. Cambridge: University Press.
Lindley, D. V. and Singpurwalla, N. D. (1991). On the evidence needed to reach agreed action between adversaries, with application to acceptance sampling. J. Amer. Statist. Assoc. 86, 933-937.
Lindley, D. V. and Singpurwalla, N. D. (1993). Adversarial life testing. J. Roy. Statist. Soc. B 55, 837-847.
Lindley, D. V. and Smith, A. F. M. (1972). Bayes estimates for the linear model. J. Roy. Statist. Soc. B 34, 1-41 (with discussion).
Lindley, D. V., Tversky, A. and Brown, R. V. (1979). On the reconciliation of probability assessments. J. Roy. Statist. Soc. A 142, 146-180.
Liseo, B. (1993). Elimination of nuisance parameters with reference priors. Biometrika 80, 295-304.

Liseo, B., Petrella, L. and Salinetti, G. (1993). Block unimodality for multivariate Bayesian robustness. J. It. Sturist. Soc. 2.55-7 I. Little, R. J. A. and Rubin, D. B. (1987). Srufisricul Analysis with Missing Data New York: Wiley. Lo. A. Y. (1984). O a class of Bayesian non-parametric estimates: I. Density estimates. n Ann. Srurisr. 12, 35 1-357. Lo. A. Y. (1986). Bayesian statistical inference for sampling a finite population. Ann. Srutisr. 14, 12261233. Lo. A. Y. (1987). A large sample study of the Bayesian bootstrap. Ann. Srutisr. 15.360-375. Lo,A. Y. (1993). A Bayesian bootstrap for censored data. Ann. Srurisr. 20, 100-1 23. Luce. R. D. (1959). lndividuul Choice Behuviour. New York: Wiley. Luce, R. D. (1992). When does subjective expected utility fail descriptively! J. Risk und U~cerrain~~y 5.5-27. Luce, R. D. and Kranu. D. H. (197 I). Conditional expected utility. Ecortonrerric*u39. 253271.

Luce, R. D. and Narens, L. (1978). Qualitative independence in probability theory. Theory and Decision 9, 225-239.
Luce, R. D. and Raiffa, H. (1957). Games and Decisions: Introduction and Critical Survey. Chichester: Wiley.
Luce, R. D. and Suppes, P. (1965). Preference, utility and subjective probability. Handbook of Mathematical Psychology 3 (R. D. Luce, R. R. Bush and E. Galanter, eds.). New York: Wiley, 249-410.

Lusted, L. B. (1968). Introduction to Medical Decision Making. Springfield, IL: Thomas.
Maatta, J. and Casella, G. (1990). Developments in decision theoretic variance estimation. Statist. Sci. 5, 90-120 (with discussion).
Machina, M. (1982). Expected utility analysis without the independence axiom. Econometrica 50, 277-323.
Machina, M. (1987). Choices under uncertainty: problems solved and unsolved. J. Econ. Perspectives 1, 121-154.
Main, P. (1988). Prior and posterior tail comparisons. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 669-675.

Makov, U. E. (1988). On stochastic approximation and Bayes linear estimators. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 697-699.
Mardia, K. V., Kent, J. T. and Walder, A. N. (1992). Statistical shape models in image analysis. Computer Science and Statistics: Proc. 23rd. Symp. Interface (E. M. Keramidas, ed.). Fairfax Station: Interface Foundation, 550-557.
Marinell, G. and Seeber, G. (1988). Angewandte Statistik. Munich: Oldenbourg Verlag.
Maritz, J. S. and Lwin, T. (1989). Empirical Bayes Methods. London: Chapman and Hall.
Marriott, J. M. (1988). Reparametrisation for Bayesian inference in ARMA time series. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 701-704.

Marriott, J. M. and Naylor, J. C. (1993). Teaching Bayes on MINITAB. Appl. Statist. 42, 223-232.

Marriott, J. M. and Smith, A. F. M. (1992). Reparametrisation aspects of numerical Bayesian methodology for autoregressive moving-average models. J. Time Series Anal. 13, 327-343.

Marschak, J. (1950). Statistical inference in economics: an introduction. Statistical Inference in Dynamic Economic Models. New York: Cowles Commission, 1-50.
Marschak, J. and Radner, R. (1972). Economic Theory of Teams. New Haven: Yale University Press.
Martin, J. J. (1967). Bayesian Decision Problems and Markov Chains. New York: Wiley.
Martz, H. F. and Waller, R. A. (1982). Bayesian Reliability Analysis. New York: Wiley.
Masreliez, C. J. (1975). Approximate non-Gaussian filtering with linear state and observation relations. IEEE Trans. Automatic Control 20, 107-110.
Mathiasen, P. E. (1979). Prediction functions. Scandinavian J. Statist. 6, 1-21.
Mazloum, R. and Meeden, G. (1987). Using the stepwise Bayes technique to choose between experiments. Ann. Statist. 15, 269-277.
McCarthy, J. (1956). Measurements of the value of information. Proc. Nat. Acad. Sci. USA 42, 654-655.

McCulloch, R. E. (1989). Local model influence. J. Amer. Statist. Assoc. 84, 473-478.
McCulloch, R. E. and Rossi, P. E. (1992). Bayes factors for non-linear hypothesis and likelihood distributions. Biometrika 79, 663-676.
McCulloch, R. E. and Tsay, R. S. (1993). Bayesian inference and prediction for mean and variance shifts in autoregressive time series. J. Amer. Statist. Assoc. 88, 968-978.
Meeden, G. (1990). Admissible contour credible sets. Statistics and Decisions 8, 1-10.
Meeden, G. and Isaacson, D. (1977). Approximate behavior of the posterior distribution for a large observation. Ann. Statist. 5, 899-908.
Meeden, G. and Vardeman, S. (1991). A non-informative Bayesian approach to interval estimation in finite population sampling. J. Amer. Statist. Assoc. 86, 972-986.
Meinhold, R. and Singpurwalla, N. D. (1983). Understanding the Kalman filter. Amer. Statist. 37, 123-127.

Mendel, M. B. (1992). Bayesian parametric models for lifetimes. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 697-705.
Mendoza, M. (1994). Asymptotic posterior normality under transformations. Test 3, 173-180.

Meng, X.-L. and Rubin, D. B. (1992). Recent extensions to the EM algorithm. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 307-320 (with discussion).
Merkhofer, M. W. (1987). Quantifying judgemental uncertainty: methodology, experiences and insights. IEEE Trans. Systems, Man and Cybernetics 17, 741-752.
Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. and Teller, E. (1953). Equation of state calculations by fast computing machines. J. Chem. Phys. 21, 1087-1092.

Meyer, D. L. and Collier, R. O. (eds.) (1970). Bayesian Statistics. Itasca, IL: Peacock.

Mills, J. A. (1992). Bayesian prediction tests for structural stability. J. Econometrics 52, 381-388.
Mitchell, T. J. and Beauchamp, T. J. (1988). Bayesian variable selection in linear regression. J. Amer. Statist. Assoc. 83, 1023-1035 (with discussion).
Mitchell, T. J. and Morris, M. D. (1992). Bayesian design and analysis of computer experiments: two examples. Statistica Sinica 2, 359-379.
Mockus, J. (1989). Bayesian Approach to Global Optimization. Dordrecht: Kluwer.
Mohammad-Djafari, A. and Demoment, G. (eds.) (1993). Maximum Entropy and Bayesian Methods. Dordrecht: Kluwer.
Monahan, J. F. and Boos, D. D. (1992). Proper likelihoods for Bayesian analysis. Biometrika 79, 271-278.
Morales, J. A. (1971). Bayesian Full Information Structural Analysis. Berlin: Springer.
Moreno, E. and Cano, J. A. (1989). Testing a point null hypothesis: asymptotic robust Bayesian analysis with respect to priors given on a sub-sigma field. Internat. Statist. Rev. 57, 221-232.

Moreno, E. and Cano, J. A. (1991). Robust Bayesian analysis with ε-contaminations partially known. J. Roy. Statist. Soc. B 53, 143-155.
Moreno, E. and Pericchi, L. R. (1992). Bands of probability measures: a robust Bayesian analysis. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 707-713.
Moreno, E. and Pericchi, L. R. (1993). Prior assessments for bands of probability measures: empirical Bayesian analysis. Test 2, 101-110.
Morgan, M. G. and Henrion, M. (1990). Uncertainty: a Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge: University Press.
Morris, C. N. (1982). Natural exponential families with quadratic variance functions. Ann. Statist. 10, 65-80.
Morris, C. N. (1983). Parametric empirical Bayes inference: theory and applications. J. Amer. Statist. Assoc. 78, 47-59.

Morris, C. N. (1988). Approximating posterior distributions and posterior moments. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 327-344 (with discussion).
Morris, P. A. (1974). Decision analysis expert use. Manag. Sci. 20, 1233-1241.
Morris, W. T. (1968). Management Science: a Bayesian Introduction. Englewood Cliffs, NJ: Prentice-Hall.
Mortera, J. (1986). Bayesian forecasting. Metron 44, 277-296.
Mosteller, F. and Wallace, D. L. (1964/1984). Inference and Disputed Authorship: The Federalist. Reading, MA: Addison-Wesley. Second edition, published in 1984 as Applied Bayesian and Classical Inference: the Case of the Federalist Papers. Berlin: Springer.
Mosteller, F. and Youtz, C. (1990). Quantifying probabilistic expressions. Statist. Sci. 5, 2-24 (with discussion).
Mouchart, M. (1976). A note on Bayes' theorem. Statistica 36, 349-357.
Mouchart, M. and Simar, L. (1980). Least squares approximation in Bayesian analysis. Bayesian Statistics (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Valencia: University Press, 207-222 and 237-245 (with discussion).


Muirhead, C. R. (1986). Distinguishing outlier types in time series. J. Roy. Statist. Soc. B 48, 39-47.

Mukerjee, R. and Dey, D. K. (1993). Frequentist validity of posterior quantiles in the presence of a nuisance parameter: higher order asymptotics. Biometrika 80, 499-505.
Murphy, A. H. and Epstein, E. S. (1967). Verification of probabilistic predictions: a brief review. J. Appl. Meteorology 6, 748-755.
Murray, R. G., McKillop, J. H., Bessant, R. G., Hutton, I., Lorimer, A. R. and Lawrie, T. D. V. (1981). Bayesian analysis of stress thallium-201 scintigraphy. Eur. J. Nucl. Med. 6, 201-204.

Myerson, R. B. (1979). An axiomatic derivation of subjective probability, utility and evaluation functions. Theory and Decision 11, 339-352.
Nakamura, Y. (1993). Subjective utility with upper and lower probabilities on finite states. J. Risk and Uncertainty 6, 33-48.
Narens, L. (1976). Utility, uncertainty and trade-off structures. J. Math. Psychol. 13, 296-332.

Nau, R. F. (1992). Indeterminate probabilities on finite sets. Ann. Statist. 20, 1737-1767.
Nau, R. F. and McCardle, K. F. (1990). Coherent behavior in non-cooperative games. J. Economic Theory 50, 424-444.
Naylor, J. C. and Smith, A. F. M. (1982). Applications of a method for the efficient computation of posterior distributions. Appl. Statist. 31, 214-225.
Naylor, J. C. and Smith, A. F. M. (1988). Economic illustrations of novel numerical integration methodology for Bayesian inference. J. Econometrics 38, 103-125.
Nelder, J. A. and Wedderburn, R. W. M. (1972). Generalised linear models. J. Roy. Statist. Soc. A 135, 370-384.
Neyman, J. (1935). Su un teorema concernente le cosiddette statistiche sufficienti. Giorn. Ist. Ital. Attuari 6, 320-334.

Neyman, J. and Pearson, E. S. (1933). On the problem of the most efficient tests of statistical hypothesis. Phil. Trans. Roy. Soc. London A 231, 289-337.
Neyman, J. and Pearson, E. S. (1967). Joint Statistical Papers. Cambridge: University Press.
Neyman, J. and Scott, E. L. (1948). Consistent estimates based on partially consistent observations. Econometrica 16, 1-32.
Nicolau, A. (1993). Bayesian intervals with good frequentist behaviour in the presence of nuisance parameters. J. Roy. Statist. Soc. B 55, 377-390.
Normand, S.-L. and Tritchler, D. (1992). Parameter updating in a Bayes network. J. Amer. Statist. Assoc. 87, 1109-1115.
Novick, M. R. (1969). Multiparameter Bayesian indifference procedures. J. Roy. Statist. Soc. B 31, 29-64.
Novick, M. R. and Hall, W. K. (1965). A Bayesian indifference procedure. J. Amer. Statist. Assoc. 60, 1104-1117.
Novick, M. R. and Jackson, P. H. (1974). Statistical Methods for Educational and Psychological Research. New York: McGraw-Hill.
O'Hagan, A. (1979). On outlier rejection phenomena in Bayes inference. J. Roy. Statist. Soc. B 41, 358-367.
O'Hagan, A. (1981). A moment of indecision. Biometrika 68, 329-330.


O'Hagan, A. (1988a). Probability: Methods and Measurements. London: Chapman and Hall.
O'Hagan, A. (1988b). Modelling with heavy tails. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 345-359 (with discussion).
O'Hagan, A. (1990). Outliers and credence for location parameter inference. J. Amer. Statist. Assoc. 85, 172-176.
O'Hagan, A. (1991). Bayes-Hermite quadrature. J. Statist. Planning and Inference 29, 245-260.
O'Hagan, A. (1992). Some Bayesian numerical analysis. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 345-363 (with discussion).
O'Hagan, A. (1994a). Kendall's Advanced Theory of Statistics 2B: Bayesian Inference. London: Edward Arnold.
O'Hagan, A. (1994b). Robust modelling for asset management. J. Statist. Planning and Inference 40, 245-259.
O'Hagan, A. and Berger, J. O. (1988). Ranges of posterior probabilities for quasiunimodal priors with specified quantiles. J. Amer. Statist. Assoc. 83, 503-508.
O'Hagan, A. and Le, H. (1994). Conflicting information and a class of bivariate heavy-tailed distributions. Aspects of Uncertainty: a Tribute to D. V. Lindley (P. R. Freeman and A. F. M. Smith, eds.). Chichester: Wiley, 311-327.
Oliver, R. M. and Smith, J. Q. (eds.) (1990). Influence Diagrams, Belief Nets and Decision Analysis. Chichester: Wiley.
Osiewalski, J. and Steel, M. F. J. (1993). Robust Bayesian inference in lq-spherical models. Biometrika 80, 456-460.
Osteyee, D. D. B. and Good, I. J. (1974). Information, Weight of Evidence, the Singularity between Probability Measures and Signal Detection. Berlin: Springer.
Pack, D. J. (1986a). Posterior distributions. Posterior probabilities. Encyclopedia of Statistical Sciences 7 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 121-124.
Pack, D. J. (1986b). Prior distributions. Encyclopedia of Statistical Sciences 7 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 194-196.
Padgett, W. J. and Wei, L. J. (1981). A Bayesian nonparametric estimator of survival probability assuming increasing failure rate. Comm. Statist. Theory and Methods 10, 49-63.
Page, A. N. (ed.) (1968). Utility Theory: A Book of Readings. New York: Wiley.
Pardo, L., Taneja, I. J. and Morales, D. (1991). λ-measures of hypoentropy and comparison of experiments: Bayesian approach. The Statistician 51, 173-184.
Parenti, G. (ed.) (1978). I Fondamenti dell'Inferenza Statistica. Florence: Università degli Studi.
Parmigiani, G. and Berry, D. A. (1994). Applications of Lindley information to the design of clinical experiments. Aspects of Uncertainty: a Tribute to D. V. Lindley (P. R. Freeman and A. F. M. Smith, eds.). Chichester: Wiley, 329-348.
Pearson, E. S. (1978). The History of Statistics in the 17th and 18th Centuries. London: Macmillan.


Peers, H. W. (1965). On confidence points and Bayesian probability points in the case of several parameters. J. Roy. Statist. Soc. B 27, 9-16.
Peers, H. W. (1968). Confidence properties of Bayesian interval estimates. J. Roy. Statist. Soc. B 30, 535-544.
Peirce, C. S. (1878). How to make our ideas clear. Popular Science Monthly 12, 286-302.
Peizer, D. B. and Pratt, J. W. (1968). A normal approximation for binomial, F, beta, and other common related tail probabilities. J. Amer. Statist. Assoc. 63, 1416-1456.
Peña, D. and Guttman, I. (1993). Comparing probabilistic methods for outlier detection. Biometrika 80, 603-610.
Peña, D. and Tiao, G. C. (1992). Bayesian robustness functions for linear models. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 365-388 (with discussion).
Pereira, C. A. de B. and Lindley, D. V. (1987). Examples questioning the use of partial likelihood. The Statistician 36, 15-20.
Pérez, M. E. and Pericchi, L. R. (1992). Analysis of multistage survey as a hierarchical model. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 723-730.
Pericchi, L. R. (1981). A Bayesian approach to transformations to normality. Biometrika 68, 35-43.

Pericchi, L. R. (1984). An alternative to the standard Bayesian procedure for discrimination between normal linear models. Biometrika 71, 576-586.
Pericchi, L. R. (1993). Personal communication.
Pericchi, L. R. and Nazaret, W. A. (1988). On being imprecise at the higher levels of a hierarchical linear model. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 361-375 (with discussion).
Pericchi, L. R. and Pérez, M. E. (1994). Posterior robustness with more than one sampling model. J. Statist. Planning and Inference 40, 279-294.
Pericchi, L. R., Sansó, B. and Smith, A. F. M. (1993). Posterior cumulant relationships in Bayesian inference involving the exponential family. J. Amer. Statist. Assoc. 88, 1419-1426.

Pericchi, L. R. and Smith, A. F. M. (1992). Exact and approximate posterior moments for a normal location parameter. J. Roy. Statist. Soc. B 54, 793-804.
Pericchi, L. R. and Walley, P. (1991). Robust Bayesian credible intervals and prior ignorance. Internat. Statist. Rev. 59, 1-23.
Perks, W. (1947). Some observations on inverse probability, including a new indifference rule. J. Inst. Actuaries 73, 285-334 (with discussion).
Peskun, P. H. (1973). Optimal Monte Carlo sampling using Markov chains. Biometrika 60, 607-612.

Pettit, L. I. (1986). Diagnostics in Bayesian model choice. The Statistician 35, 183-190.
Pettit, L. I. (1992). Bayes factors for outlier models using the device of imaginary observations. J. Amer. Statist. Assoc. 87, 541-545.
Pettit, L. I. and Smith, A. F. M. (1985). Outliers and influential observation in linear models. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 473-494 (with discussion).

Pettit, L. I. and Young, K. S. (1990). Measuring the effect of observations on Bayes factors. Biometrika 77, 455-466.
Pfanzagl, J. (1967). Subjective probability derived from the Morgenstern-von Neumann utility concept. Essays in Mathematical Economics (M. Shubik, ed.). Princeton: University Press, 237-251.
Pfanzagl, J. (1968). Theory of Measurement. Chichester: Wiley.
Pham-Gia, T. and Turkkan, N. (1992). Sample size determination in Bayesian analysis. The Statistician 41, 389-404.
Phillips, L. D. (1973). Bayesian Statistics for Social Scientists. London: Nelson.
Piccinato, L. (1973). Un metodo per determinare distribuzioni iniziali relativamente non-informative. Metron 31, 124-156.
Piccinato, L. (1977). Predictive distributions and non-informative priors. Trans. 7th Prague Conf. Information Theory (M. Uldrich, ed.). Prague: Czech. Acad. Sciences, 399-407.
Piccinato, L. (1986). De Finetti's logic of uncertainty and its impact on statistical thinking and practice. Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti (P. K. Goel and A. Zellner, eds.). Amsterdam: North-Holland, 13-20.
Piccinato, L. (1992). Critical issues in different inferential paradigms. J. It. Statist. Soc. 1, 251-274.
Pierce, D. (1973). On some difficulties in a frequency theory of inference. Ann. Statist. 1, 241-250.
Pilz, J. (1983/1991). Bayesian Estimation and Experimental Design in Linear Regression Models. Leipzig: Teubner. Second edition in 1991, Chichester: Wiley.
Pitman, E. J. G. (1936). Sufficient statistics and intrinsic accuracy. Proc. Camb. Phil. Soc. 32, 567-579.
Pitman, E. J. G. (1939). Location and scale parameters. Biometrika 30, 391-421.
Plante, A. (1971). Counter-example and likelihood. Foundations of Statistical Inference (V. P. Godambe and D. A. Sprott, eds.). Toronto: Holt, Rinehart and Winston, 357-371 (with discussion).
Plante, A. (1984). A reexamination of Stein's antifiducial example. Canad. J. Statist. 12, 135-141.

Plante, A. (1991). An inclusion-consistent solution to the problem of absurd confidence statements. Canad. J. Statist. 19, 389-397.
Poirier, D. J. (1985). Bayesian hypothesis testing in linear models with continuously induced conjugate priors across hypotheses. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 711-722.
Poirier, D. J. (1993). Intermediate Statistics and Econometrics: a Comparative Approach. Cambridge, MA: The MIT Press.
Polasek, W. and Pötzelberger, K. (1988). Robust Bayesian analysis in hierarchical models. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 377-394.
Polasek, W. and Pötzelberger, K. (1994). Robust Bayesian methods in simple ANOVA problems. J. Statist. Planning and Inference 40, 295-311.
Pole, A. and West, M. (1989). Reference analysis of the dynamic linear model. J. Time Series Analysis 10, 131-147.

Pole, A., West, M. and Harrison, P. J. (1994). Applied Bayesian Forecasting and Time Series Analysis. New York: Chapman and Hall.
Pollard, W. E. (1986). Bayesian Statistics for Evaluation Research: an Introduction. Beverly Hills, CA: Sage.
Polson, N. G. (1991). A representation of the posterior mean for a location model. Biometrika 78, 426-430.
Polson, N. G. (1992). In discussion of Ghosh and Mukerjee (1992). Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 203-205.
Polson, N. G. and Tiao, G. C. (1995). Bayesian Inference (2 volumes). Aldershot: Edward Elgar.
Poskitt, D. S. (1987). Precision, complexity and Bayesian model determination. J. Roy. Statist. Soc. B 49, 199-208.
Pötzelberger, K. and Polasek, W. (1991). Robust HPD regions in Bayesian regression models. Econometrica 59, 1581-1590.
Pratt, J. W. (1961). Length of confidence intervals. J. Amer. Statist. Assoc. 56, 549-567.
Pratt, J. W. (1964). Risk aversion in the small and in the large. Econometrica 32, 122-136.
Pratt, J. W. (1965). Bayesian interpretation of standard inference statements. J. Roy. Statist. Soc. B 27, 169-203.
Pratt, J. W., Raiffa, H. and Schlaifer, R. (1964). The foundations of decision under uncertainty: an elementary exposition. J. Amer. Statist. Assoc. 59, 353-375.
Pratt, J. W., Raiffa, H. and Schlaifer, R. (1965). Introduction to Statistical Decision Theory. New York: McGraw-Hill.
Press, S. J. (1972/1982). Applied Multivariate Analysis: using Bayesian and Frequentist Methods of Inference. Second edition in 1982, Melbourne, FL: Krieger.
Press, S. J. (1978). Qualitative controlled feedback for forming group judgements and making decisions. J. Amer. Statist. Assoc. 73, 526-535.
Press, S. J. (1980a). Bayesian inference in MANOVA. Handbook of Statistics 1. Analysis of Variance (P. R. Krishnaiah, ed.). Amsterdam: North-Holland, 117-132.
Press, S. J. (1980b). Bayesian inference in group judgement formulation and decision making using qualitative controlled feedback. Bayesian Statistics (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Valencia: University Press, 383-430 (with discussion).
Press, S. J. (1985a). Multivariate Analysis (Bayesian). Encyclopedia of Statistical Sciences 6 (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 16-20.
Press, S. J. (1985b). Multivariate group assessment of probabilities of nuclear war. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 425-462 (with discussion).
Press, S. J. (1989). Bayesian Statistics. New York: Wiley.
Rabena, M. (1998). Deriving reference decisions. Test 7, 161-178.
Racine-Poon, A. (1988). A Bayesian approach to non-linear calibration problems. J. Amer. Statist. Assoc. 83, 650-656.
Racine-Poon, A. (1992). SAGA: Sample assisted graphical analysis. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 389-404 (with discussion).


Racine-Poon, A., Grieve, A. P., Flühler, H. and Smith, A. F. M. (1986). Bayesian methods in practice: experiences in the pharmaceutical industry. Appl. Statist. 35, 93-150 (with discussion).
Raftery, A. E. and Lewis, S. M. (1992). How many iterations in the Gibbs sampler? Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 763-773.
Raftery, A. E. and Schweder, T. (1993). Inference about the ratio of two parameters, with applications to whale censusing. Amer. Statist. 47, 259-264.
Raiffa, H. (1961). Risk ambiguity and the Savage axioms: comment. Quart. J. Econ. 75, 690-694.
Raiffa, H. (1968). Decision Analysis: Introductory Lectures on Choices under Uncertainty. Reading, MA: Addison-Wesley.
Raiffa, H. (1982). The Art and Science of Negotiation. Cambridge: University Press.
Raiffa, H. and Schlaifer, R. (1961). Applied Statistical Decision Theory. Boston: Harvard University.
Ramsey, F. P. (1926). Truth and probability. The Foundations of Mathematics and Other Logical Essays (R. B. Braithwaite, ed.). London: Kegan Paul (1931), 156-198. Reprinted in 1980 in Studies in Subjective Probability (H. E. Kyburg and H. E. Smokler, eds.). New York: Dover, 61-92.
Ramsey, J. O. and Novick, M. R. (1980). PLU robust Bayesian decision theory: point estimation. J. Amer. Statist. Assoc. 75, 901-907.
Randall, C. H. and Foulis, D. J. (1975). A mathematical setting for inductive reasoning. Foundations of Probability Theory, Statistical Inference, and Statistical Theories of Science 3 (W. L. Harper and C. A. Hooker, eds.). Dordrecht: Reidel.
Rao, C. R. (1945). Information and accuracy attainable in estimation of statistical parameters. Bull. Calcutta Math. Soc. 37, 81-91.
Regazzini, E. (1983). Sulle Probabilità Coerenti nel Senso di de Finetti. Bologna: Clueb.
Regazzini, E. (1987). De Finetti's coherence and statistical inference. Ann. Statist. 15, 845-864.
Regazzini, E. and Petris, G. (1992). Some critical aspects of the use of exchangeability in statistics. J. It. Statist. Soc. 1, 103-130.
Reichenbach, H. (1935). The Theory of Probability. Berkeley: Univ. California Press.
Rényi, A. (1955). On a new axiomatic theory of probability. Acta Math. Acad. Sci. Hungaricae 6, 285-335.
Rényi, A. (1961). On measures of entropy and information. Proc. Fourth Berkeley Symp. 1 (J. Neyman and E. L. Scott, eds.). Berkeley: Univ. California Press, 547-561.
Rényi, A. (1962/1970). Wahrscheinlichkeitsrechnung. Berlin: Deutscher Verlag der Wissenschaften. English translation in 1970 as Probability Theory. San Francisco, CA: Holden-Day.
Rényi, A. (1964). On the amount of information concerning an unknown parameter in a sequence of observations. Pub. Math. Inst. Hung. Acad. Sci. 9, 617-624.
Rényi, A. (1966). On the amount of missing information and the Neyman-Pearson lemma. Research Papers in Statistics. Festschrift for J. Neyman (F. N. David, ed.). New York: Wiley, 281-288.

Rényi, A. (1967). On some basic problems of statistics from the point of view of information theory. Proc. Fifth Berkeley Symp. 1 (J. Neyman and E. L. Scott, eds.). Berkeley: Univ. California Press, 531-543.
Ressel, P. (1985). de Finetti type theorems: an analytical approach. Ann. Prob. 13, 898-922.
Richard, J. F. (1973). Posterior and Predictive Densities for Simultaneous Equations Models. Berlin: Springer.
Ríos, D. (1990). Sensitivity Analysis in Multiobjective Decision Making. Berlin: Springer.
Ríos, D. (1992). Foundations for a robust theory of decision making: the simple case. Test 1, 69-78.

Ríos, D. and Martín, J. (1994). Robustness issues under precise beliefs and preferences. J. Statist. Planning and Inference 40, 383-389.


Ríos, S. (1977). Análisis de Decisiones. Madrid: ICE.
Ríos, S., Ríos, S. Jr. and Ríos, M. J. (1989). Procesos de Decisión Multicriterio. Madrid: Eudema.
Ripley, B. D. (1987). Stochastic Simulation. Chichester: Wiley.
Rissanen, J. (1983). A universal prior for integers and estimation by minimum description length. Ann. Statist. 11, 416-431.
Rissanen, J. (1987). Stochastic complexity. J. Roy. Statist. Soc. B 49, 223-239 and 252-265 (with discussion).
Rissanen, J. (1989). Stochastic Complexity in Statistical Enquiry. Singapore: World Scientific.
Ritter, C. and Tanner, M. A. (1992). Facilitating the Gibbs sampler: the Gibbs stopper and the griddy-Gibbs sampler. J. Amer. Statist. Assoc. 87, 861-868.
Rivadulla, A. (1991). Probabilidad e Inferencia Científica. Barcelona: Anthropos.
Robbins, H. (1955). An empirical Bayes approach to statistics. Proc. Third Berkeley Symp. 1 (J. Neyman and E. L. Scott, eds.). Berkeley: Univ. California Press, 157-164.
Robbins, H. (1964). The empirical Bayes approach to statistical decision problems. Ann. Math. Statist. 35, 1-20.
Robbins, H. (1983). Some thoughts on empirical Bayes estimation. Ann. Statist. 11, 713-723.
Robert, C. P. (1992). L'Analyse Statistique Bayésienne. Paris: Economica.
Robert, C. P. (1993). A note on Jeffreys-Lindley paradox. Statistica Sinica 3, 603-608.
Robert, C. P., Hwang, J. T. G. and Strawderman, W. E. (1993). Is Pitman closeness a reasonable criterion? J. Amer. Statist. Assoc. 88, 57-76 (with discussion).
Robert, C. P. and Soubiran, C. (1993). Estimation of a normal mixture model through Gibbs sampling and prior feedback. Test 2, 125-146.
Roberts, F. (1974). Laws of exchange and their applications. SIAM J. Appl. Math. 26, 260-284.
Roberts, F. (1979). Measurement Theory. Reading, MA: Addison-Wesley.
Roberts, G. O. (1992). Convergence diagnostics of the Gibbs sampler. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 775-782.
Roberts, G. O. and Smith, A. F. M. (1994). Simple conditions for the convergence of the Gibbs sampler and Metropolis-Hastings algorithms. Stoch. Proc. and their Applic. 44, 207-216.


Roberts, H. V. (1963). Risk ambiguity and the Savage axioms: comment. Quart. J. Econ. 77, 327-342.
Roberts, H. V. (1965). Probabilistic prediction. J. Amer. Statist. Assoc. 60, 50-62.
Roberts, H. V. (1966). Statistical Inference and Decision. Chicago: University Press.
Roberts, H. V. (1967). Informative stopping rules and inferences about population size. J. Amer. Statist. Assoc. 62, 763-775.
Roberts, H. V. (1974). Reporting of Bayesian studies. Studies in Bayesian Econometrics and Statistics: in Honor of Leonard J. Savage (S. E. Fienberg and A. Zellner, eds.). Amsterdam: North-Holland, 465-483.
Roberts, H. V. (1978). Bayesian inference. International Encyclopedia of Statistics (W. H. Kruskal and J. M. Tanur, eds.). London: Macmillan, 9-16.
Robinson, G. K. (1975). Some counterexamples to the theory of confidence intervals. Biometrika 62, 155-161.
Robinson, G. K. (1979a). Conditional properties of statistical procedures. Ann. Statist. 7, 742-755.
Robinson, G. K. (1979b). Conditional properties of statistical procedures for location and scale parameters. Ann. Statist. 7, 756-771.
Rodriguez, C. C. (1991). From Euclid to entropy. Maximum Entropy and Bayesian Methods (W. T. Grandy and L. H. Schick, eds.). Dordrecht: Kluwer, 343-348.
Rolin, J.-M. (1983). Non-parametric Bayesian statistics: a stochastic processes approach. Specifying Statistical Models (J.-P. Florens et al., eds.). Berlin: Springer, 108-133.
Rosenkrantz, R. D. (1977). Inference, Method and Decision: Towards a Bayesian Philosophy of Science. Dordrecht: Reidel.
Royall, R. M. (1992). The elusive concept of statistical evidence. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 405-418 (with discussion).
Rubin, D. B. (1981). The Bayesian bootstrap. Ann. Statist. 9, 130-134.
Rubin, D. B. (1984). Bayesianly justifiable and relevant frequency calculations for the applied statistician. Ann. Statist. 12, 1151-1172.
Rubin, D. B. (1987). Multiple Imputation for Non-Response in Surveys. New York: Wiley.
Rubin, D. B. (1988). Using the SIR algorithm to simulate posterior distributions. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 395-402 (with discussion).
Rubin, H. (1971). A decision-theoretic approach to the problem of testing a null hypothesis. Statistical Decision Theory and Related Topics (S. S. Gupta and J. Yackel, eds.). New York: Academic Press, 103-108.
Rubin, H. (1977). Robust Bayesian estimation. Statistical Decision Theory and Related Topics II (S. S. Gupta and D. S. Moore, eds.). New York: Academic Press.
Rubin, H. (1987). A weak system of axioms for 'rational' behaviour and the non-separability of utility from prior. Statistics and Decisions 5, 47-58.
Rubin, H. (1988a). Some results on robustness in testing. Statistical Decision Theory and Related Topics IV 1 (S. S. Gupta and J. O. Berger, eds.). Berlin: Springer, 271-278.
Rubin, H. (1988b). Robustness in generalized ridge regression and related topics. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 403-410 (with discussion).

Rueda, R. (1992). A Bayesian alternative to parametric hypothesis testing. Test 1, 61-67.
Saaty, T. L. (1980). The Analytic Hierarchy Process. New York: McGraw-Hill.
Sacks, J. (1963). Generalized Bayes solutions in estimation problems. Ann. Math. Statist. 34, 787-794.
Salinetti, G. (1994). Stability of Bayesian decisions. J. Statist. Planning and Inference (to appear).
San Martini, A. and Spezzaferri, F. (1984). A predictive model selection criterion. J. Roy. Statist. Soc. B 46, 296-303.
Sansó, B. and Pericchi, L. R. (1992). Near ignorance classes of log-concave priors for the location model. Test 1, 39-46.
Särndal, C.-E. (1970). A class of explicata for 'information' and 'weight of evidence'. Internat. Statist. Rev. 38, 223-235.
Savage, I. R. (1968). Statistics: Uncertainty and Behavior. Boston: Houghton Mifflin.
Savage, I. R. (1980). On not being rational. Bayesian Statistics (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Valencia: University Press, 321-328 and 339-346 (with discussion).
Savage, L. J. (1954/1972). The Foundations of Statistics. New York: Wiley. Second edition in 1972, New York: Dover.
Savage, L. J. (1962) (with others). The Foundations of Statistical Inference: a Discussion. London: Methuen.
Savage, L. J. (1961). The foundations of statistics reconsidered. Proc. Fourth Berkeley Symp. 1 (J. Neyman and E. L. Scott, eds.). Berkeley: Univ. California Press, 575-586. Reprinted in 1980 in Studies in Subjective Probability (H. E. Kyburg and H. E. Smokler, eds.). New York: Dover, 175-188.
Savage, L. J. (1970). Reading suggestions for the foundations of statistics. Amer. Statist. 24, 23-27.
Savage, L. J. (1971). Elicitation of personal probabilities and expectations. J. Amer. Statist. Assoc. 66, 781-801. Reprinted in 1974 in Studies in Bayesian Econometrics and Statistics: in Honor of Leonard J. Savage (S. E. Fienberg and A. Zellner, eds.). Amsterdam: North-Holland, 111-156.
Savage, L. J. (1981). The Writings of Leonard Jimmie Savage: a Memorial Collection. Washington: ASA/IMS.
Savchuk, V. P. (1989). Bayesovskiye Metodi Statisticheskogo Otsenivaniya. Moscow: Nauka.
Sawaragi, Y., Sunahara, Y. and Nakamizo, T. (1967). Statistical Decision Theory in Adaptive Control Systems. New York: Academic Press.
Schervish, M. J., Seidenfeld, T. and Kadane, J. B. (1990). State-dependent utilities. J. Amer. Statist. Assoc. 85, 840-847.
Schervish, M. J., Seidenfeld, T. and Kadane, J. B. (1992). Bayesian analysis of linear models. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 419-434 (with discussion).
Schlaifer, R. (1959). Probability and Statistics for Business Decisions. New York: McGraw-Hill.
Schlaifer, R. (1961). Introduction to Statistics for Business Decisions. New York: McGraw-Hill.


Schlaifer, R. (1969). Analysis of Decisions under Uncertainty. New York: McGraw-Hill.
Schmitt, S. A. (1969). Measuring Uncertainty: an Elementary Introduction to Bayesian Statistics. Reading, MA: Addison-Wesley.
Schwartz, L. (1965). On Bayes procedures. Z. Wahrsch. verw. Gebiete 4, 10-26.
Schwarz, G. (1978). Estimating the dimension of a model. Ann. Statist. 6, 461-464.
Scott, D. (1964). Measurement structures and linear inequalities. J. Math. Psychology 1, 233-247.
Scozzafava, R. (1989). La Probabilità Soggettiva e le sue Applicazioni. Milano: Veschi.
Seidenfeld, T. (1979). Philosophical Problems of Statistical Inference. Dordrecht: Reidel.
Seidenfeld, T. (1992). R. A. Fisher's fiducial argument and Bayes' theorem. Statist. Sci. 7, 358-368.
Seidenfeld, T., Kadane, J. B. and Schervish, M. J. (1989). On the shared preferences of two Bayesian decision makers. J. Philosophy 86, 225-244.
Seidenfeld, T. and Schervish, M. J. (1983). A conflict between finite additivity and avoiding Dutch book. Philos. of Science 50, 398-412.
Sen, A. K. (1970). Collective Choice and Social Welfare. San Francisco, CA: Holden-Day.
Serfling, R. J. (1980). Approximation Theorems of Mathematical Statistics. New York: Wiley.
Shafer, G. (1976). A Mathematical Theory of Evidence. Princeton: University Press.
Shafer, G. (1982a). Belief functions and parametric models. J. Roy. Statist. Soc. B 44, 322-352 (with discussion).
Shafer, G. (1982b). Lindley's paradox. J. Amer. Statist. Assoc. 77, 325-351 (with discussion).
Shafer, G. (1986). Savage revisited. Statist. Sci. 1, 463-501 (with discussion).
Shafer, G. (1990). The unity and diversity of probability. Statist. Sci. 5, 435-462 (with discussion).
Shannon, C. E. (1948). A mathematical theory of communication. Bell System Tech. J. 27, 379-423 and 623-656. Reprinted in The Mathematical Theory of Communication (Shannon, C. E. and Weaver, W., 1949). Urbana, IL: Univ. Illinois Press.
Shao, J. (1989). Monte Carlo approximations in Bayesian decision theory. J. Amer. Statist. Assoc. 84, 727-732.
Shao, J. (1990). Limiting behaviour of Monte Carlo approximation to Bayesian action. Statistics and Decisions 8, 85-99.
Shao, J. (1993). Linear model selection by cross-validation. J. Amer. Statist. Assoc. 88, 486-494.
Shaw, J. E. H. (1988a). A quasi-random approach to integration in Bayesian statistics. Ann. Statist. 16, 895-914.
Shaw, J. E. H. (1988b). Aspects of numerical integration and summarisation. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 411-428 (with discussion).
Simon, J. C. (1984). La Reconnaissance des Formes. Paris: Masson.
Simpson, E. H. (1951). The interpretation of interaction in contingency tables. J. Roy. Statist. Soc. B 13, 238-241.
Singpurwalla, N. D. and Soyer, R. (1992). Non homogeneous autoregressive processes for tracking (software) reliability growth, and their Bayesian analysis. J. Roy. Statist. Soc. B 54, 145-156.

Singpurwalla, N. D. and Wilson, S. P. (1992). Warranties. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 435-446 (with discussion).
Sivaganesan, S. (1991). Sensitivity of some standard Bayesian estimates to prior uncertainty: a comparison. J. Statist. Planning and Inference 27, 85-103.
Sivaganesan, S. (1993). Robust Bayesian analysis of the binomial empirical Bayes problems. Canadian J. Statist. 21, 107-119.
Sivaganesan, S. and Berger, J. O. (1989). Ranges of posterior measures for priors with unimodal contamination. Ann. Statist. 17, 868-889.
Skene, A. M., Shaw, J. E. H. and Lee, T. D. (1986). Bayesian modelling and sensitivity analysis. The Statistician 35, 281-288.
Skilling, J. (ed.) (1989). Maximum Entropy and Bayesian Methods. Dordrecht: Kluwer.
Smith, A. F. M. (1973a). Bayes estimates in one-way and two-way models. Biometrika 60, 319-330.
Smith, A. F. M. (1973b). A general Bayesian linear model. J. Roy. Statist. Soc. B 35, 67-75.
Smith, A. F. M. (1978). In discussion of Tanner (1978). J. Roy. Statist. Soc. A 141, 50-51.
Smith, A. F. M. (1981). On random sequences with centred spherical symmetry. J. Roy. Statist. Soc. B 43, 208-209.
Smith, A. F. M. (1983). Bayesian approaches to outliers and robustness. Specifying Statistical Models (J.-P. Florens, M. Mouchart, J.-P. Raoult, L. Simar and A. F. M. Smith, eds.). Berlin: Springer, 13-55.
Smith, A. F. M. (1984). Bayesian Statistics. Present position and potential developments: some personal views. J. Roy. Statist. Soc. A 147, 245-259 (with discussion).
Smith, A. F. M. (1986). Some Bayesian thoughts on modeling and model choice. The Statistician 35, 97-102.
Smith, A. F. M. (1988). What should be Bayesian about Bayesian software? Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 429-435 (with discussion).
Smith, A. F. M. (1991). Bayesian computational methods. Phil. Trans. Roy. Soc. London A 337, 369-386.
Smith, A. F. M. and Dawid, A. P. (eds.) (1987). 1986 Conference on Practical Bayesian Statistics. Special issue, The Statistician 36, Numbers 2 and 3.
Smith, A. F. M. and Gelfand, A. E. (1992). Bayesian statistics without tears: a sampling-resampling perspective. Amer. Statist. 46, 84-88.
Smith, A. F. M. and Roberts, G. O. (1993). Bayesian computation via the Gibbs sampler and related Markov chain Monte Carlo methods. J. Roy. Statist. Soc. B 55, 3-23 (with discussion).
Smith, A. F. M., Skene, A. M., Shaw, J. E. H. and Naylor, J. C. (1987). Progress with numerical and graphical methods for Bayesian statistics. The Statistician 36, 75-82.
Smith, A. F. M., Skene, A. M., Shaw, J. E. H., Naylor, J. C. and Dransfield, M. (1985). The implementation of the Bayesian paradigm. Comm. Statist. Theory and Methods 14, 1079-1109.
Smith, A. F. M. and Spiegelhalter, D. J. (1980). Bayes factors and choice criteria for linear models. J. Roy. Statist. Soc. B 42, 213-220.


Smith, A. F. M. and Verdinelli, I. (1980). A note on Bayes designs for inference using a hierarchical linear model. Biometrika 67, 613-619.
Smith, C. A. B. (1961). Consistency in statistical inference and decision. J. Roy. Statist. Soc. B 23, 1-37 (with discussion).
Smith, C. A. B. (1965). Personal probability and statistical analysis. J. Roy. Statist. Soc. A 128, 469-499.
Smith, C. R. and Erickson, J. G. (eds.) (1987). Maximum Entropy and Bayesian Spectral Analysis and Estimation Problems. Dordrecht: Reidel.
Smith, C. R. and Grandy, W. T. (eds.) (1985). Maximum Entropy and Bayesian Methods in Inverse Problems. Dordrecht: Reidel.
Smith, J. Q. (1988a). Decision Analysis, a Bayesian Approach. London: Chapman and Hall.
Smith, J. Q. (1988b). Models, optimal decisions and influence diagrams. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 765-776.
Smith, J. Q. (1992). A comparison of the characteristics of some Bayesian forecasting models. Internat. Statist. Rev. 60, 75-87.
Smith, J. Q. and Gathercole, R. B. (1986). Principles of interactive forecasting. Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti (P. K. Goel and A. Zellner, eds.). Amsterdam: North-Holland, 405-423.
Smouse, E. P. (1984). A note on Bayesian least squares inference for finite population models. J. Amer. Statist. Assoc. 79, 390-392.
Solomonoff, R. J. (1978). Complexity based induction systems: comparison and convergence theorems. IEEE Trans. Information Theory 24, 422-432.
Spall, J. C. (ed.) (1988). Bayesian Analysis of Time Series and Dynamic Models. New York: Marcel Dekker.
Spall, J. C. and Hill, S. D. (1990). Least informative Bayesian prior distributions for finite samples based on information theory. IEEE Trans. Automatic Control 35, 580-583.
Spall, J. C. and Maryak, J. C. (1992). A feasible Bayesian estimator of quantiles for projectile accuracy from non-i.i.d. data. J. Amer. Statist. Assoc. 87, 676-681.
Spiegelhalter, D. J. (1987). Probability expert systems in medicine: practical issues in handling uncertainty. Statist. Sci. 2, 25-34 (with discussion).
Spiegelhalter, D. J. and Cowell, R. G. (1992). Learning in probabilistic expert systems. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 447-465 (with discussion).
Spiegelhalter, D. J. and Knill-Jones, R. (1984). Statistical and knowledge-based approaches to clinical decision support systems with application in gastroenterology. J. Roy. Statist. Soc. A 147, 35-77 (with discussion).
Spiegelhalter, D. J. and Smith, A. F. M. (1982). Bayes factors for linear and log-linear models with vague prior information. J. Roy. Statist. Soc. B 44, 377-387.
Stäel von Holstein, C.-A. S. (1970). Assessment and Evaluation of Subjective Probability Distributions. Stockholm: School of Economics.
Stäel von Holstein, C.-A. S. and Matheson, J. E. (1979). A Manual for Encoding Probability Distributions. Palo Alto, CA: SRI International.


Steel, M. F. J. (1992). Posterior analysis of restricted seemingly unrelated regression equation models. Econometric Reviews 11, 129-142.
Stein, C. (1951). A property of some tests of composite hypotheses. Ann. Math. Statist. 22, 475-476.
Stein, C. (1956). Inadmissibility of the usual estimator of the mean of a multivariate normal distribution. Proc. Third Berkeley Symp. 1 (J. Neyman and E. L. Scott, eds.). Berkeley: Univ. California Press, 197-206.
Stein, C. (1959). An example of wide discrepancy between fiducial and confidence intervals. Ann. Math. Statist. 30, 877-880.
Stein, C. (1962). Confidence sets for the mean of a multivariate normal distribution. J. Roy. Statist. Soc. B 24, 265-296 (with discussion).
Stein, C. (1965). Approximation of improper prior measures by proper probability measures. Bernoulli, Bayes, Laplace Festschrift (J. Neyman and L. LeCam, eds.). Berlin: Springer, 217-240.
Stephens, D. A. and Smith, A. F. M. (1992). Sampling-resampling techniques for the computation of posterior densities in normal means problems. Test 1, 1-18.
Stewart, L. (1979). Multiparameter univariate Bayesian analysis. J. Amer. Statist. Assoc. 74, 684-693.
Stewart, L. (1983). Bayesian analysis using Monte Carlo integration, a powerful methodology for handling some difficult problems. The Statistician 32, 195-200.
Stewart, L. (1985). Multiparameter Bayesian inference using Monte Carlo integration, some techniques for bivariate analysis. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 495-510.
Stewart, L. (1987). Hierarchical Bayesian analysis using Monte Carlo integration: computing posterior distributions when there are many models. The Statistician 36, 211-219.
Stewart, L. and Davis, W. W. (1986). Bayesian posterior distributions over sets of possible models with inferences computed by Monte Carlo integration. The Statistician 35, 175-182.
Stigler, S. M. (1982). Thomas Bayes' Bayesian inference. J. Roy. Statist. Soc. A 145, 250-258.
Stigler, S. M. (1986a). The History of Statistics. Harvard, MA: University Press.
Stigler, S. M. (1986b). Laplace's 1774 memoir on inverse probability. Statist. Sci. 1, 359-378.
Stigum, B. P. (1972). Finite state space and expected utility maximization. Econometrica 40, 253-259.
Stone, M. (1959). Application of a measure of information to the design and comparison of experiments. Ann. Math. Statist. 30, 55-70.
Stone, M. (1961). The opinion pool. Ann. Math. Statist. 32, 1339-1342.
Stone, M. (1963). The posterior t distribution. Ann. Math. Statist. 34, 568-573.
Stone, M. (1965). Right Haar measures for convergence in probability to invariant posterior distributions. Ann. Math. Statist. 36, 440-453.
Stone, M. (1970). Necessary and sufficient conditions for convergence in probability to invariant posterior distributions. Ann. Math. Statist. 41, 1939-1953.
Stone, M. (1974). Cross-validatory choice and assessment of statistical predictions. J. Roy. Statist. Soc. B 36, 111-147 (with discussion).

Stone, M. (1976). Strong inconsistency from uniform priors. J. Amer. Statist. Assoc. 71, 114-125 (with discussion).
Stone, M. (1977). An asymptotic equivalence of choice of model by cross-validation and Akaike's criterion. J. Roy. Statist. Soc. B 39, 44-47.
Stone, M. (1979a). Comments on model selection criteria of Akaike and Schwarz. J. Roy. Statist. Soc. B 41, 276-278.
Stone, M. (1979b). Review and analysis of some inconsistencies related to improper distributions and finite additivity. Proc. 6th Internat. Conf. Logic, Methodology and Philosophy of Science (L. J. Cohen, J. Los, H. Pfeiffer and K. P. Podewski, eds.). Amsterdam: North-Holland.
Stone, M. (1986). In discussion of Fishburn (1986). Statist. Sci. 1, 356-357.
Stone, M. and Dawid, A. P. (1972). Un-Bayesian implications of improper Bayesian inference in routine statistical problems. Biometrika 59, 369-373.
Stroud, A. H. (1971). Approximate Calculation of Multiple Integrals. Englewood Cliffs, NJ: Prentice-Hall.
Sudderth, W. D. (1980). Finitely additive priors, coherence and the marginalization paradox. J. Roy. Statist. Soc. B 42, 339-341.
Sugden, R. A. (1985). A Bayesian view of ignorable designs in survey sampling inference. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 751-754.
Suppes, P. (1956). The role of subjective probability in decision making. Proc. Third Berkeley Symp. 5 (J. Neyman and E. L. Scott, eds.). Berkeley: Univ. California Press, 61-73.
Suppes, P. (1960). Some open problems in the foundations of subjective probability. Information and Decision Processes (R. E. Machol, ed.). New York: McGraw-Hill, 162-170.
Suppes, P. (1974). The measurement of belief. J. Roy. Statist. Soc. B 36, 160-175.
Suppes, P. and Walsh, K. (1959). A non-linear model for the experimental measurement of utility. Behavioral Sci. 4, 204-211.
Suppes, P. and Zanotti, M. (1976). Necessary and sufficient conditions for the existence of a unique measure strictly agreeing with a qualitative probability ordering. J. Philos. Logic 5, 431-438.
Suppes, P. and Zanotti, M. (1982). Necessary and sufficient qualitative axioms for conditional probability. Z. Wahrsch. verw. Gebiete 60, 163-169.
Susarla, V. and van Ryzin, J. (1976). Nonparametric Bayesian estimation of survival curves from incomplete observations. J. Amer. Statist. Assoc. 71, 897-902.
Sweeting, T. J. (1984). On the choice of prior distributions for the Box-Cox transformed linear model. Biometrika 71, 127-134.
Sweeting, T. J. (1985). Consistent prior distributions for transformed models. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 755-762.
Sweeting, T. J. (1992). On asymptotic posterior normality in the multiparameter case. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 825-835.
Sweeting, T. J. and Adekola, A. D. (1987). Asymptotic posterior normality for stochastic processes revisited. J. Roy. Statist. Soc. B 49, 215-222.

Tanner, M. A. (1991). Tools for Statistical Inference: Observed Data and Data Augmentation Methods. Berlin: Springer.
Tanner, M. A. and Wong, W. H. (1987). The calculation of posterior distributions by data augmentation. J. Amer. Statist. Assoc. 82, 528-550 (with discussion).
Teichroew, D. (1965). A history of distribution sampling prior to the era of the computer and its relevance to simulation. J. Amer. Statist. Assoc. 60, 27-49.
Thatcher, A. R. (1964). Relationships between Bayesian and confidence limits for prediction. J. Roy. Statist. Soc. B 26, 176-210.
Thomas, A., Spiegelhalter, D. J. and Gilks, W. R. (1992). BUGS, a program to perform Bayesian inference using Gibbs sampling. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 837-842.
Thorburn, D. (1986). A Bayesian approach to density estimation. Biometrika 73, 65-75.
Tiao, G. C. and Box, G. E. P. (1974). Some comments on 'Bayes' estimators. Studies in Bayesian Econometrics and Statistics: in Honor of Leonard J. Savage (S. E. Fienberg and A. Zellner, eds.). Amsterdam: North-Holland, 620-626.
Tibshirani, R. (1989). Noninformative priors for one parameter of many. Biometrika 76, 604-608.
Tierney, L. (1990). LISP-STAT. An Object Oriented Environment for Statistical Computing and Dynamic Graphics. Chichester: Wiley.
Tierney, L. (1992). Exploring posterior distributions using Markov chains. Computer Science and Statistics: Proc. 23rd. Symp. Interface (E. M. Keramidas, ed.). Fairfax Station: Interface Foundation, 563-570.
Tierney, L. and Kadane, J. B. (1986). Accurate approximations for posterior moments and marginal densities. J. Amer. Statist. Assoc. 81, 82-86.
Tierney, L., Kass, R. E. and Kadane, J. B. (1987). Interactive Bayesian analysis using accurate asymptotic approximations. Computer Science and Statistics: 19th Symposium on the Interface (R. Heiberger, ed.). Alexandria, VA: ASA, 15-21.
Tierney, L., Kass, R. E. and Kadane, J. B. (1989a). Fully exponential Laplace approximations to expectations and variances of nonpositive functions. J. Amer. Statist. Assoc. 84, 710-716.
Tierney, L., Kass, R. E. and Kadane, J. B. (1989b). Approximate marginal densities of nonlinear functions. Biometrika 76, 425-433.
Titterington, D. M., Smith, A. F. M. and Makov, U. E. (1985). Statistical Analysis of Finite Mixture Distributions. Chichester: Wiley.
Torgersen, E. N. (1981). Measures of information based on comparison with total information and with total ignorance. Ann. Statist. 9, 638-657.
Trader, R. L. (1989). Thomas Bayes. Encyclopedia of Statistical Sciences Suppl. (S. Kotz, N. L. Johnson and C. B. Read, eds.). New York: Wiley, 14-17.
Tribus, M. (1969). Rational Descriptions, Decisions and Designs. New York: Pergamon.
van der Merwe, A. J. and van der Merwe, C. A. (1992). Empirical and hierarchical Bayes estimation in multivariate regression models. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 843-850.
van Dijk, H. K., Hop, J. P. and Louter, A. S. (1987). An algorithm for the computation of posterior moments and densities using simple importance sampling. The Statistician 36, 83-90.


van Dijk, H. K. and Kloek, T. (1983). Monte Carlo analysis of skew posterior distributions: an illustrative econometric example. The Statistician 32, 216-223.
van Dijk, H. K. and Kloek, T. (1985). Experiments with some alternatives for simple importance sampling in Monte Carlo integration. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 511-530 (with discussion).
Venn, J. (1886). The Logic of Chance. London: Macmillan. Reprinted in 1963, New York: Chelsea.
Verbraak, H. L. F. (1990). The Logic of Objective Bayesianism. The Hague: CIP-DATA.
Verdinelli, I. (1992). Advances in Bayesian experimental design. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 467-481 (with discussion).
Verdinelli, I. and Kadane, J. B. (1992). Bayesian designs for maximizing information and outcome. J. Amer. Statist. Assoc. 87, 510-515.
Verdinelli, I. and Wasserman, L. (1991). Bayesian analysis of outlier problems using the Gibbs sampler. Statist. Computing 1, 105-117.
Viertl, R. (ed.) (1987). Probability and Bayesian Statistics. London: Plenum.
Villegas, C. (1964). On qualitative σ-algebras. Ann. Math. Statist. 35, 1787-1796.
Villegas, C. (1969). On the a priori distribution of the covariance matrix. Ann. Math. Statist. 40, 1098-1099.
Villegas, C. (1971). On Haar priors. Foundations of Statistical Inference (V. P. Godambe and D. A. Sprott, eds.). Toronto: Holt, Rinehart and Winston, 409-414 (with discussion).
Villegas, C. (1977a). On the representation of ignorance. J. Amer. Statist. Assoc. 72, 651-654.
Villegas, C. (1977b). Inner statistical inference. J. Amer. Statist. Assoc. 72, 453-458.
Villegas, C. (1981). Inner statistical inference II. Ann. Statist. 9, 768-776.
Villegas, C. (1990). Bayesian inference in models with euclidean structures. J. Amer. Statist. Assoc. 85, 1159-1164.
von Mises, R. (1928). Probability, Statistics and Truth. Reprinted in 1957, London: Macmillan.
von Neumann, J. and Morgenstern, O. (1944/1953). Theory of Games and Economic Behaviour. 3rd edition in 1953, Princeton: University Press.
Vovk, V. G. (1993a). A logic of probability, with applications to the foundation of statistics. J. Roy. Statist. Soc. B 55, 317-351 (with discussion).
Vovk, V. G. (1993b). Forecasting point and continuous processes: prequential analysis. Test 2, 189-217.
Vovk, V. G. and Vyugin, V. V. (1993). On the empirical validity of the Bayesian method. J. Roy. Statist. Soc. B 55, 253-266.
Wahba, G. (1978). Improper priors, spline smoothing and the problems of guarding against model errors in regression. J. Roy. Statist. Soc. B 40, 364-372.
Wahba, G. (1983). Bayesian confidence intervals for the cross-validated smoothing spline. J. Roy. Statist. Soc. B 45, 133-150.
Wahba, G. (1988). Partial and interaction spline models. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 479-491 (with discussion).

Wakefield, J. C., Gelfand, A. E. and Smith, A. F. M. (1991). Efficient generation of random variates via the ratio-of-uniforms method. Statistics and Computing 1, 129-133.
Wald, A. (1939). Contributions to the theory of statistical estimation and testing hypothesis. Ann. Math. Statist. 10, 299-326.
Wald, A. (1947). Sequential Analysis. New York: Wiley.
Wald, A. (1950). Statistical Decision Functions. New York: Wiley.
Walker, A. M. (1969). On the asymptotic behaviour of posterior distributions. J. Roy. Statist. Soc. B 31, 80-88.
Wallace, C. S. and Freeman, P. R. (1987). Estimation and inference by compact coding. J. Roy. Statist. Soc. B 49, 240-260 (with discussion).
Walley, P. (1987). Belief function representations of statistical evidence. Ann. Statist. 15, 1439-1465.
Walley, P. (1991). Statistical Reasoning with Imprecise Probabilities. London: Chapman and Hall.
Walley, P. and Fine, T. L. (1979). Varieties of modal (classificatory) and comparative probability. Synthese 41, 321-374.
Wallsten, T. S. (1974). The psychological concept of subjective probability: a measurement theoretic view. The Concept of Probability in Psychological Experiments (C.-A. S. Stäel von Holstein, ed.). Dordrecht: Reidel, 49-72.
Wasserman, L. (1989). A robust Bayesian interpretation of likelihood regions. Ann. Statist. 17, 1387-1393.
Wasserman, L. (1990a). Belief functions and statistical inference. Canadian J. Statist. 18, 183-196.
Wasserman, L. (1990b). Prior envelopes based on belief functions. Ann. Statist. 18, 454-464.
Wasserman, L. (1992a). Recent methodological advances in robust Bayesian inference. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 483-502 (with discussion).
Wasserman, L. (1992b). Invariance properties of density ratio priors. Ann. Statist. 20, 2177-2182.
Wasserman, L. and Kadane, J. B. (1990). Bayes' theorem for Choquet capacities. Ann. Statist. 18, 1328-1339.
Wasserman, L. and Kadane, J. B. (1992a). Computing bounds on expectations. J. Amer. Statist. Assoc. 87, 516-522.
Wasserman, L. and Kadane, J. B. (1992b). Symmetric upper probabilities. Ann. Statist. 20, 1720-1736.
Wechsler, S. (1993). Exchangeability and predictivism. Erkenntnis 38, 343-350.
Weerahandi, S. and Zidek, J. V. (1981). Multi-Bayesian statistical decision theory. J. Roy. Statist. Soc. A 144, 85-93.
Weerahandi, S. and Zidek, J. V. (1983). Elements of multi-Bayesian decision theory. Ann. Statist. 11, 1032-1046.
Weiss, R. E. and Cook, R. D. (1992). A graphical case statistic for assessing posterior inference. Biometrika 79, 51-55.
Welch, B. L. (1939). On confidence limits and sufficiency, with particular reference to parameters of location. Ann. Math. Statist. 10, 58-69.



Welch, B. L. (1965). On comparisons between confidence point procedures in the case of a single parameter. J. Roy. Statist. Soc. B 27, 1-8.
Welch, B. L. and Peers, H. W. (1963). On formulae for confidence points based on intervals of weighted likelihoods. J. Roy. Statist. Soc. B 25, 318-329.
West, M. (1981). Robust sequential approximate Bayesian estimation. J. Roy. Statist. Soc. B 43, 157-166.
West, M. (1984). Outlier models and prior distributions in linear regression. J. Roy. Statist. Soc. B 46, 431-439.
West, M. (1985). Generalized linear models: scale parameters, outlier accommodation and prior distributions. Bayesian Statistics 2 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Amsterdam: North-Holland, 531-557 (with discussion).
West, M. (1986). Bayesian model monitoring. J. Roy. Statist. Soc. B 48, 70-78.
West, M. (1988). Modelling expert opinion. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 493-508 (with discussion).
West, M. (1992a). Modelling agent forecast distributions. J. Roy. Statist. Soc. B 54, 553-568.
West, M. (1992b). Modelling with mixtures. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 503-524 (with discussion).
West, M. and Crosse, J. (1992). Modelling probabilistic agent opinion. J. Roy. Statist. Soc. B 54, 285-299.
West, M. and Harrison, P. J. (1986). Monitoring and adaptation in Bayesian forecasting models. J. Amer. Statist. Assoc. 81, 741-750.
West, M. and Harrison, P. J. (1989). Bayesian Forecasting and Dynamic Models. Berlin: Springer.
West, M. and Migon, H. S. (1985). Dynamic generalised linear models and Bayesian forecasting. J. Amer. Statist. Assoc. 80, 73-83.
West, M., Mueller, P. and Escobar, M. D. (1994). Hierarchical priors and mixture models with applications in regression and density estimation. Aspects of Uncertainty: a Tribute to D. V. Lindley (P. R. Freeman and A. F. M. Smith, eds.). Chichester: Wiley, 363-386.
Wetherill, G. B. (1961). Bayesian sequential analysis. Biometrika 48, 281-291.
Wetherill, G. B. (1966). Sequential Methods in Statistics. New York: Wiley.
Wetherill, G. B. and Campling, G. E. G. (1966). The decision theory approach to sampling inspection. J. Roy. Statist. Soc. B 28, 381-416.
White, D. J. (1976a). Fundamentals of Decision Theory. Amsterdam: North-Holland.
White, D. J. (1976b). A Decision Methodology. Chichester: Wiley.
White, D. J. and Bowen, K. C. (eds.) (1975). The Role and Effectiveness of Theories of Decision in Practice. London: Hodder and Stoughton.
Whittle, P. (1958). On the smoothing of probability density functions. J. Roy. Statist. Soc. B 20, 334-343.
Whittle, P. (1976). Probability. Chichester: Wiley.
Wichmann, D. (1990). Bayes-Statistik. Mannheim: BI-Wissenschaftsverlag.
Wiener, N. (1948). Cybernetics. Cambridge, Mass.: The MIT Press. Reprinted in 1961.

Wilkinson, G. N. (1977). On resolving the controversy in statistical inference. J. Roy. Statist. Soc. B 39, 119-171 (with discussion).
Wilks, S. S. (1962). Mathematical Statistics. New York: Wiley.
Willing, R. (1988). Information contained in nuisance parameters. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 801-805.
Wilson, J. (1986). Subjective probabilities and the prisoners' dilemma. Manag. Sci. 32, 45-55.

Wilson, R. B. (1968). On the theory of syndicates. Econometrica 36, 119-132.
Winkler, R. L. (1967a). The assessment of prior distributions in Bayesian analysis. J. Amer. Statist. Assoc. 62, 776-800.
Winkler, R. L. (1967b). The quantification of judgement; some methodological suggestions. J. Amer. Statist. Assoc. 62, 1105-1120.
Winkler, R. L. (1968). The consensus of subjective probability distributions. Manag. Sci. 15, 861-875.
Winkler, R. L. (1972). Introduction to Bayesian Inference and Decision. Toronto: Holt, Rinehart and Winston.
Winkler, R. L. (1980). Prior information, predictive distributions and Bayesian model building. Bayesian Analysis in Econometrics and Statistics: Essays in Honor of Harold Jeffreys (A. Zellner, ed.). Amsterdam: North-Holland.
Winkler, R. L. (1981). Combining probability distributions from dependent information sources. Manag. Sci. 27, 479-488.
Witmer, J. A. (1986). Bayesian multistage decision problems. Ann. Statist. 14, 283-297.
Wolpert, R. L. (1991). Monte Carlo importance sampling in Bayesian statistics. Statistical Multiple Integration (N. Flournoy and R. K. Tsutakawa, eds.). Providence, RI: ASA.
Wolpert, R. L. and Warren-Hicks, W. J. (1992). Bayesian hierarchical logistic models for combining field and laboratory survival data. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 525-546 (with discussion).
Wong, W. H. and Li, B. (1992). Laplace expansion for posterior densities of nonlinear functions of parameters. Biometrika 79, 393-398.
Wooff, D. A. (1992). [B/D] works. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 851-859.
Wright, D. E. (1986). A note on the construction of highest posterior density intervals. Appl. Statist. 35, 49-53.
Wrinch, D. H. and Jeffreys, H. (1919). On some aspects of the theory of probability. Phil. Mag. Ser. 6, 38, 715-731.
Wrinch, D. H. and Jeffreys, H. (1921). On certain fundamental principles of scientific inquiry. Phil. Magazine 6, 42, 363-390; 45, 368-374.
Yaglom, A. M. and Yaglom, I. M. (1960/1983). Verojatnost i Informacija. Moscow: Nauka. English translation in 1983 as Probability and Information. Dordrecht: Reidel.
Ye, K. (1993). Reference priors when the stopping rule depends on the parameter of interest. J. Amer. Statist. Assoc. 88, 360-363.


Ye, K. and Berger, J. O. (1991). Non-informative priors for inferences in exponential regression models. Biometrika 78, 645-656.
Yilmaz, M. R. (1992). An information-expectation framework for decision under uncertainty. J. Multi-Criteria Dec. Analysis 1, 65-80.
Young, S. C. and Smith, J. Q. (1991). Deriving and analysing optimal strategies in Bayesian models of games. Manag. Sci. 37, 559-571.
Yu, P. L. (1985). Multiple Criteria Decision Making. London: Plenum.
Zellner, A. (1971). An Introduction to Bayesian Inference in Econometrics. New York: Wiley. Reprinted in 1987, Melbourne, FL: Krieger.
Zellner, A. (1977). Maximal data information prior distributions. New Developments in the Applications of Bayesian Methods (A. Aykaç and C. Brumat, eds.). Amsterdam: North-Holland, 211-232.
Zellner, A. (ed.) (1980). Bayesian Analysis in Econometrics and Statistics: Essays in Honor of Harold Jeffreys. Amsterdam: North-Holland.
Zellner, A. (1984). Posterior odds ratios for regression hypothesis: general considerations and some specific results. Basic Issues in Econometrics (A. Zellner, ed.). Chicago: University Press, 275-305.
Zellner, A. (1985). Bayesian econometrics. Econometrica 53, 253-269.
Zellner, A. (1986a). On assessing prior distributions and Bayesian regression analysis with g-prior distributions. Bayesian Inference and Decision Techniques: Essays in Honor of Bruno de Finetti (P. K. Goel and A. Zellner, eds.). Amsterdam: North-Holland, 233-243.
Zellner, A. (1986b). Bayesian estimation and prediction using asymmetric loss functions. J. Amer. Statist. Assoc. 81, 446-451.
Zellner, A. (1987). Bayesian inference. The New Palgrave: a Dictionary of Economics 1 (J. Eatwell, M. Milgate and P. Newman, eds.). London: Macmillan, 208-218.
Zellner, A. (1988a). A Bayesian era. Bayesian Statistics 3 (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Oxford: University Press, 509-516.
Zellner, A. (1988b). Optimal information processing and Bayes' theorem. Amer. Statist. 42, 278-284 (with discussion).
Zellner, A. (1988c). Bayesian analysis in econometrics. J. Econometrics 37, 27-50.
Zellner, A. (1991). Bayesian methods and entropy in economics and econometrics. Maximum Entropy and Bayesian Methods (W. T. Grandy and L. H. Schick, eds.). Dordrecht: Kluwer, 17-31.
Zellner, A. and Siow, A. (1980). Posterior odds ratios for selected regression hypothesis. Bayesian Statistics (J. M. Bernardo, M. H. DeGroot, D. V. Lindley and A. F. M. Smith, eds.). Valencia: University Press, 585-603 and 618-647 (with discussion).
Zidek, J. (1969). A representation of Bayes invariant procedures in terms of Haar measure. Ann. Inst. Statist. Math. 21, 291-308.
Zidek, J. and Weerahandi, S. (1992). Bayesian predictive inference for samples from smooth processes. Bayesian Statistics 4 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: University Press, 547-563 (with discussion).


Subject Index

A posteriori distribution (see Posterior distribution)
A priori distribution (see Prior distribution)
Absolutely continuous distribution: multivariate 128, 434, 435; univariate 111, 430-433
Absolutely continuous random quantity 109-111
Actions 5, 13, 15-17; in bounded decision problems x, 50-54; in general decision problems x, 54-56; set of x, 16, 19-21
Actuarial science 9, 100, 373
Additive decomposition of utility 66, 149, 408
Admissibility 447-449, 461; and complete classes 448; of Bayes rules 448, 449
Admissible decision (see Admissibility)

Akaike information criterion 487, 488
Amount of information 6, 77-81, 157-159, 303, 422
Analysis of variance 382
Ancillarity xii, 7, 452, 453, 475, 479
Approximations: and discrepancy x, xi, 75-77, 154-157, 207-209; asymptotic (see Asymptotic analysis); Gibbs sampling algorithm 353-355; importance sampling xiii, 8, 264, 340, 348-350; iterative quadrature xiii, 264, 346-348; Laplace approximation xiii, 8, 264, 340-345, 455; Markov chain Monte Carlo xiii, 8, 264, 340, 353-356; Metropolis-Hastings algorithm 355, 356; of a binomial by a Poisson 76, 77; of a Student by a normal 156; of moments of a transformed beta distribution 124, 125; of prior distributions 283-285; of the mean of a beta distribution 344, 345; sampling-importance-resampling xiii, 8, 264, 340, 350-352; to mean and variance 113; to mean vector and covariance 132
Ascending factorial function 135
Asymptotic analysis xiii, 8, 235, 264, 285-297, 445, 463-465, 480, 485, 486; continuous xiii, 287-297; discrete xiii, 286, 287; non-normal 296; see also Asymptotic normality; see also Reference distribution
Asymptotic normality of posterior distributions 286-297, 314, 365; for a ratio 297; regularity conditions for 289-292; under conjugate analysis 293, 294; under transformations xiii, 295-297, 486
Axiomatic approaches 5, 83-91; relevance of 94, 95; to degrees of belief 89, 90; to utilities 90, 91; see also Foundations
Axioms of coherence 15, 16, 23-28
Backward induction x, 59-63
Bayes: decision rule 44-48; estimate (see Estimation); risk 448; rules 448; test 391-397, 412-417; Thomas 1, 2
Bayes factor xiii, 389-395, 414, 417, 422-424, 470, 476, 487, 488
Bayes-Laplace postulate 357, 358
Bayes' theorem xii, 2, 4, 7, 162, 175, 180, 241-255, 265, 269-285, 305, 306, 340, 352, 368, 425, 451, 452, 454, 455, 480; finite x, 2, 38-40, 42-45, 47-49, 68, 77, 80, 89, 94; generalised xi, xiii, 127, 130
Bayesian: bootstrap 371; computation vii, 5; implementation 8, 263-356; inference (see Inference, Bayesian); methods vii, 5; paradigm xiii, 241-264; reading list ix, 9-11; software 425, 426; theory vii, ix, 5-9
Belief functions 95, 99
Beliefs 3-9, 13-16, 33-49; and actions x, 13-16; and limiting frequencies 173-175, 177, 179, 181; and models xi, xiii, 165-167; and probabilities x, 33-49; conditional x, 38-49; finite representation of x, 33-38; general representation of xi, 105-109; reporting of x, xi, 67-81, 150-157, 302, 320, 360; revision of x, 38-49, 127-130; sequential revision of x, 47-49, 56-67
Bernoulli distribution, model 115, 428; and exchangeability xi, 172-175, 211-215; asymptotic analysis for 294, 296; "biased" stopping rule for 251, 252; conjugate analysis of 270-272, 436; conjugate family for 267, 270; inferential process for 248, 249, 270-274, 276, 277, 279-282, 436; reference distributions for 315, 316, 337, 436; sufficient statistics for 195, 196, 436

Beta distribution 116, 117, 124, 125, 134, 135, 430; as prior distribution 267, 270-273, 344; mixtures of 279-281
Beta-binomial distribution (see Binomial-beta distribution)
Betting (see Monetary bets)
Bias 360, 462-465
Bilateral Pareto distribution 141, 434
Binomial distribution, model 115, 134, 172-175, 211-215, 428; asymptotic analysis for 294, 296; conjugate analysis of 270-272; conjugate family for 267, 271; inferential process for 248, 249, 270-274, 279-282, 414; reference distributions for 315, 316, 337; sufficient statistics for 195, 196; see also Bernoulli distribution
Binomial-beta distribution 117, 272, 428
Bioassay 219, 220, 382
Biostatistics 9
Bootstrap 371; see also Bayesian, bootstrap
Borel set 110, 127
Calibration 373, 482
Called-off bets 87-89
Canonical conjugate analysis 269-279
Cauchy distribution 123, 349; location parameter estimation 451
Central limit theorem 126
Centred spherical symmetry 183-186
Characteristic function 114, 126, 183-186, 203
Chi-squared distribution 118, 121, 123, 124, 138, 330, 361, 414, 431, 459, 487
Choice of experiments (see Design of experiments)
Classical decision theory xiv, 444-449, 461
Classification 373, 407, 408, 483
Closed under sampling 270, 281; see also Conjugate analysis
Cluster analysis 418
Coding theory 358; see also Stochastic complexity
Coherence x, 23-33, 38-45, 83-91, 94-99, 195; and preferences x, 23-28; and quantification x, 28-33, 83-85; temporal 94; see also Axiomatic approaches
Combining evidence 103-104
Communication 92, 102-104, 166, 236, 237
Comparability of consequences 23, 24
Comparability of events 33
Comparative inference xiv, 376, 443-488
Compatibility of probability and degrees of belief 35, 40
Complete probability measures 108
Complete symmetry 169
Composite hypothesis (see Hypothesis)
Computer software 425, 426
Conditional: beliefs (see Beliefs); density function 128-131, 166, 172-226, 242-247, 353-355; expectation 51, 52; independence (see Independence); inference xiv, 374, 445, 452, 468, 475, 478, 479; likelihood 48; mass function 128-130; preference 22, 85; probability 38-49; uncertainty relation 22, 38-49

Confidence interval, region 359, 445, 453, 465-469, 478 (see also Interval estimation)
Confidence level, coefficient 466
Confirmation theory 95
Conjugate analysis xiii, 8, 9, 264-285 (see also Canonical conjugate analysis)
Conjugate prior family of distributions: approximation with 279-285; canonical form of 269; convex combination (mixtures) of 282, 283; definition of 265, 266; for regular exponential models 266-279; limiting form of 276, 362; logarithmic divergence between 278
Consequences 16, 17, 19-21; bounded set of 49-54; extreme 49-54; reference 54-56
Consistency of preferences 26
Consistent sequence of statistics, estimates 312, 320, 326, 353, 405
Constrained parameters 246, 247
Contingency table 41, 216, 374, 414
Continuity of uncertainty relation 107
Control problems, theory 9, 374
Convergence xi, 125-127; almost surely 125, 126, 352; in distribution 125, 126; in expected utility 142-147; in mean square 125, 126; in probability 125, 126
Correlation 132, 231, 232; inference for 337, 338, 363, 364; reference distributions for 337, 338, 363, 364
Cost of observation, experiment 66, 67, 149, 408, 409
Countable additivity xi, 6, 106-109, 161, 162
Covariance (matrix) 132
Covariates: and partial exchangeability xii, 7, 209-211, 219-222; elaboration 232, 234; selection of xiii, 407-409
Coverage probability 365
Credible regions 8, 259-262, 466; coincidence with confidence sets 359, 468, 469; highest posterior density (HPD) 260-262, 395, 453, 477, 484, 486; intervals 259; lack of invariance of HPD 262; loss function for 259; size of 259
Critical value 412, 413
Cross-validation xiii, 8, 403-407, 409, 420-424
Data 2, 63, 147; expected information from 78, 158; expected utility of 78, 157; expected value of 65, 148; reduction 190, 191; "speaking for themselves" 276, 298, 357; structures xii, 209-226; transformation 230, 231
Decision 3-9, 13-22; analysis 9, 10; classical theory of xiv, 9, 444-449; criterion 52, 56, 146; function 141-145; rule 446; theory 5; tree 16, 17, 58, 59, 63, 147, 148, 386-388, 408, 410, 411; see also Action; see also Decision problem

Decision problem x, 16-22; basic elements of x, 16-18, 255, 256; bounded x, 50-54; formal representation of x, 18-22; general x, 54-56; sequential x, 56-67
Degrees of belief 33-49; and monetary bets 86-88; and scoring rules 88, 89; axiomatic approaches to 89, 90; see also Beliefs
Density estimation 482
Density function: joint 128, 166; marginal 128, 166; multivariate 128; univariate 111; see also Conditional
Descriptive theory 4, 23, 32, 95-98
Design matrix 221
Design of experiments x, 5, 63-67, 147-149, 159-160
Deviation from uniformity model 313
Diagnostic functions 418, 419
Digamma function 156
Dirichlet distribution 134-135, 434; as prior distribution 441
Discrepancy measures: for model rejection 412-417; of an approximation x, xi, 76, 77, 156, 157, 207-209, 298
Discrete distribution: multivariate 128, 433; univariate 110, 428, 429
Discrete random quantity 109-111
Discrimination 373
Distribution function: multivariate 128; univariate 110
Dominance 447
D-optimality 159
Double-exponential distribution 369, 379, 380
Dutch book 86, 88, 95
Dynamic programming (see Backward induction)
Econometrics 10, 374
Economics 10
Educational research 10
Effects (main, interaction) 218, 234, 382
Efficient score function 368, 464
Empirical Bayes xiii, 226, 371-373
Empirical distribution function 177-181, 228, 229, 351
Entropy 77, 79; see also Maximum entropy
Ergodic average 353
Erlang distribution 118
Errors of type 1 and 2 471
Essentially bounded functions 142-144
Estimation (see Interval estimation; see Point estimation)
Ethically neutral event 83, 84
Events x, 16-18; real-world 19; relevant 18-22, 92, 95; significant 27, 28; standard 29-32, 50-52, 54
Exchangeability xi, 3, 101, 167-181, 223-226, 403, 425; and independence 167, 168; and invariance 181-190; and sufficient statistics 191-207; extensibility of 171, 226, 227; finite 169, 170, 226, 227, 356, 357; infinite 171, 172-226, 357; of parameters 222-226, 372; partial xii, 7, 168-170, 209-222, 381; role of 7; unrestricted 211-216

Exchangeable: binomial parameters 223; normal mean parameters 224-226
Expectation (see also Mean): of a random quantity 113; of a random vector 131, 132; see also Conditional
Expected: information 80, 91, 158, 159; loss 75, 76, 154, 155; utility 6, 51-81, 84, 90, 91, 141-147, 155-157, 299-302
Experiment: expected information from 80, 158, 159, 299, 300; expected value of 65-67, 148, 299, 300; see also Design of experiments
Expert system 426
Exponential distribution, model 7, 118, 187-189, 199, 430; and exchangeability xii, 187-189; as maximum entropy choice 209; conjugate analysis of 438; conjugate family for 438; inferential process for 438; reference distributions for 438; sufficient statistics for 197, 438
Exponential family of distributions 7, 197-209; and exchangeability xii, 204-207; and maximum entropy 207-209; as an approximation 207-209; canonical form of 202-203; conjugate analysis of 269-279; conjugate family for 265-279; first two moments of 203; inferential process for 273-276; information measures and xii, 207-209; k-parameter 201, 202; non-regular 198-200; one-parameter 198-200; regular 198, 199; sufficient statistics for 197-207
Extreme consequences 49-54

F distribution (see Snedecor distribution)


Fiducial inference xiv, 9, 444, 456-458, 461, 484, 486
Fieller-Creasy problem 468
Final precision (see Initial and final precision)
Finite additivity xii, 16, 36, 40, 105, 106, 108, 161, 162
Finite decision problems 16
Finite exchangeability 169, 170, 226, 227, 356, 357
Finite horizon 59
Finite population sampling 374
Fisher distribution (see Snedecor distribution)
Fisher information (matrix): expected 288, 314, 329, 331, 333, 335, 336, 361; observed 288
Flatland paradox 365
Forecasting 10, 374
Forensic science 10
Foundations x, 10, 13-104; and coherence x, 23-33, 38-45, 83-91, 94-99; axiomatic basis of x, 5, 15, 16, 23, 83-91, 94, 95, 444; critical issues x, xi, 92-104; theories x, 83-92
Fractiles (see Quantiles)
Frame of discourse 18, 92-94
Frequency (see Limiting frequency)
Frequentist procedures xiv, 9, 371, 441, 449-454, 461
Fuzzy logic 95
Gambling (see Monetary bets)
Game theory 5, 88, 104, 360
Gamma distribution 118, 123, 124, 136, 138, 140, 313, 330, 430; as a prior distribution 267, 268, 274, 301, 302
Gamma function 116
Gamma-gamma distribution 120, 430; as predictive distribution 438
Gamma-Poisson distribution (see Poisson-gamma distribution)
Gauss-Hermite quadrature 264, 346
Generalised gamma function 138
Geometric distribution, model 116, 189, 190; and exchangeability xi, 189, 190; conjugate analysis of 437; inferential process for 437; reference distributions for 437; sufficient statistics for 437
Gibbs sampling algorithm 353-355
Group communication, decision making 5, 8, 92, 102-104, 166, 236, 237, 275, 424, 425
Groups of transformations 362, 454, 460
Growth curves 220, 382
Haar measure prior 362; see also Non-informative prior; see also Reference distribution
Helly's theorem 126, 173, 213
Hierarchical model, prior xiii, 7, 222-226, 371-373, 383, 480; and binomial parameters 223; and normal parameters 224-226, 383
Highest probability density (HPD) region (see Credible region)
Histogram 351
History 10, 356, 357
Hypergeometric distribution, model 115, 172, 322, 428
Hypergeometric function 338, 363
Hyperparameter 224, 270, 271, 275, 339, 372
Hypothesis 2, 262, 263, 380-382, 390; alternative 233, 380, 391, 469, 470; composite 391, 392, 394, 471-475; null 233, 394, 414, 416, 469, 470; simple 380, 391, 392, 394, 471, 475
Hypothesis testing xiv, 8, 9, 445, 469-475; and Lindley's paradox 394, 406, 407, 415, 416, 422; composite versus composite 392-394, 396, 397; critical region 470; in Bayesian inference 262, 263, 391-397; inadequacies of 472-475; likelihood ratio test 476, 487-488; minimax 472; of point null hypothesis 394, 406, 407, 414-416; one-sided tests 396, 397, 472; power function 470, 473; simple versus composite 392, 394; simple versus simple 392, 471; size of a test 471; test procedure 470; two-sided tests 473; unbiased test 473; uniformly most powerful test 472, 473; see also Bayes factor; see also Significance testing
Identifiability 239
Image analysis 5, 374
Implementation issues xii, 263, 264
Importance sampling xiii, 8, 264, 340, 348-350
Improper prior distribution 276, 277, 321, 357-367, 421-424, 449, 469; see also Non-informative prior; see also Reference distribution
Improper probabilities 163
Inadmissibility 367; see also Admissibility
Incoherence (see Coherence)

Independence and dependence xi, 167, 168; characterisation of 37-38; conditional 45-47; mutual 46; of random quantities 113, 130, 172-190, 194-197; pairwise 28
Induction, a problem of 322, 323
Infinite exchangeability (see Exchangeability)
Inference: as a decision problem 6, 67-81, 92, 102, 420, 444; Bayesian 241-426; comparative 376, 445; conditional and unconditional xiv, 374, 445, 452, 468, 475, 478, 479; critical issues xiii, 374, 375, 418-426; fiducial xiv, 9, 444, 456-458, 461, 484, 486; likelihood xiv, 9, 444, 454-456, 461; pivotal 458, 459, 461; pure problem of 6, 67-81, 102; statement, answer 255; structural 459-461; summaries xii, 255-263
Inferential processes: Bernoulli, binomial model 248, 249, 270-274, 276, 277, 279-282, 436; exponential model 438; linear regression 442; multinomial model 441; multivariate normal model 441; negative binomial model 437; summaries xiv, 436-442; uniform model 438; univariate normal model 439, 440
Influential observations 418; see also Robustness
Information 6, 17, 77-81, 147-159, 207-209, 298-300, 360; amount of x, xi, 6, 77-81; and the exponential family xii, 207-209; expected 80, 91, 158, 159; missing 304, 308, 324, 362; perfect 65, 149, 300; theories of 10, 91, 92; value of 65, 147-149; see also Logarithmic divergence
Information matrix (see Fisher information)
Initial and final precision 478, 479
Insufficient reason, principle of (see Bayes-Laplace postulate)
Insurance 373
Interquantile range 112
Intersubjective 102-104, 166, 237; see also Group communication
Interval estimation (Bayesian) (see Credible regions)
Interval estimation (non-Bayesian) xiv, 445, 465-469; confidence limits for 465-469; most selective interval 467; shortest confidence interval 467
Invariance 7, 454; and Laplace approximation 344, 345; and noninformative priors 358, 362, 366; and representation theorems 181-190, 204; in estimation 454; of preferences 26
Invariant prior distribution 358, 362
Inverted chi-squared distribution 119, 431
Inverted gamma distribution 119, 431
Inverted Pareto distribution 120, 432
Iterative quadrature (see Numerical quadrature)

Jackknife 371
Jeffrey's rule 94
Jeffreys' prior, rule 314, 315, 357-362; see also Non-informative prior; see also Reference distribution
Kernel density estimate 351, 355
Kernel of a density 129
Lack of memory property 187, 190
Laplace approximation xiii, 8, 264, 340-345, 455
Laplace's rule of succession 272, 322, 323
Large sample approximations: comparing Bayes to classical 486; to posterior distribution 285-297, 314, 365; see also Asymptotic analysis
Law 10, 374
Law of large numbers 126
Law of the iterated logarithm 127
Least favourable prior distribution 449; see also Non-informative prior; see also Reference distribution
Lebesgue integration, measure 111, 133, 141, 161-163
Length, operational definition of 82, 83
Likelihood function 7, 43, 173, 174, 177, 185, 243, 450, 480, 481; approximation for large samples 485, 486; conditioned 48; integrated 245, 390, 470
Likelihood inference xiv, 9, 444, 454-456, 461
Likelihood principle 249, 250, 454; and noninformative priors 367; and stopping rules 250-255; derivation of 249, 250; implies sufficiency 455; satisfied by Bayesian analysis 249; violation of 463
Likelihood ratio 49, 392, 455, 470, 476, 487
Limit theorems xi, 125-127
Limiting frequency 173, 174, 177, 179, 450-454
Lindley's paradox (see Hypothesis testing)
Linear models 5, 10, 221, 222, 442 (see also Regression)
Linear theory (based on expectation operator) 162, 163
Linear transformation of utilities 55
Location-scale parameters 320, 362, 379, 380, 415, 458
Logarithmic divergence 76, 91, 278, 279, 286, 287, 358, 402, 416, 476
Logistic distribution 122, 239, 240, 349, 433
Logistic model 220, 382
Logit model 219, 220, 382
Loss functions 256-263, 445-449, 461; absolute value 257; for inference problems 256-263; from utility functions 256, 391, 396, 445; linear 257, 396, 397; logarithmic divergence 279; quadratic 85, 88, 257, 300, 301, 397, 401, 406; standardised quadratic 301; zero-one 257, 405
Lottery (see Option); canonical 85
Mahalanobis distance 416
Marginal distribution 128, 166
Marginalisation paradoxes 363, 364, 479-481
Marginalisation procedures xiv, 128, 445
Markov chain Monte Carlo xiii, 8, 264, 340, 353-356
Marriage problem 61-63

Mathematics, the role of xi, 30, 31, 49, 50, 82, 85, 105, 106, 141, 142, 160-164, 170, 226-229
Maximisation of expected utility 52, 56, 90, 91, 146, 147, 256, 386, 388
Maximum entropy, procedures 10, 209, 366
Maximum likelihood estimate 288, 461, 465; asymptotic distribution of 485
Mean 112, 131, 257, 258, 461; see also Expectation
Mean squared error 462-465
Measure theory 111, 133, 141, 161-163, 177
Median 112, 257, 258, 461
Medical diagnosis 44, 45
Meta-analysis 374
Metropolis-Hastings algorithm 355, 356
Minimax principle, procedures 360, 449, 461, 472
Missing data 374
Missing information 304, 308, 324, 362
Mixtures (see Representation theorems); finite 279-283, 374, 462
Mode 112, 257, 258, 461
Model comparison: and Bayes factors xiii, 390-394, 405, 414, 417, 422-424; and covariate selection xiii, 407-409; approximation by cross-validation xiii, 403-407, 409, 420-424; as a decision problem xiii, 386-409; general utilities for xiii, 395-402; perspectives on xiii, 383-385; zero-one utilities for xiii, 389-395
Model comparison perspectives: closed view 384; completed view 385; open view 385
Model rejection: discrepancy measures for xiv, 412-417; through model comparison xiv, 409-412
Modelling xi, 7, 165-240; and exchangeability 167-181; and invariance 181-190; and partial exchangeability 209-222; and remodelling xiv, 377-426; and sufficient statistics 191-207; critical issues xii, 237-240; scientific 237, 238; technological 237, 238
Models 7, 114; and invariance xii, 181-190; choice of xiv, 8, 377-409, 419, 420, 445, 486-488; comparison of xiii, 377-409, 483; criticism of 419, 420, 483; elaboration of xii, 229-232; empirical 237, 238; explanatory 237, 238; nonparametric xii, 228; parametric xii, 7, 172-234, 242-255, 380-383; predictive 165-167, 243, 244; ranges of xiii, 8, 277-283; rejection of xiv, 409-417; role of 237, 238; simplification of xii, 233, 234
Moment-generating function 114
Moments 112, 113, 131, 132
Monetary bets 86-88, 162
Monotone continuity of uncertainty relation 107
Monotonicity of uncertainty relation 27

Monte Carlo methods (see Stochastic simulation)
Multinomial distribution, model 133, 134, 176, 177, 216, 433; and exchangeability xi, 176, 177; conjugate analysis of 441; conjugate family for 441; inferential process for 441; reference distributions for 336, 441
Multinomial-Dirichlet distribution 135, 136, 433; as predictive distribution 441
Multiple regression 221, 222, 383, 442
Multivariate analysis 5, 10, 374
Multivariate distributions 133-141, 433-435
Multivariate normal distribution, model 136-138, 140, 365, 434; and exchangeability xi, 185, 186; as approximation to posterior distribution 286-297, 314, 365; as prior distribution 441, 442; conjugate analysis of 441; conjugate family for 441; inferential process for 441, 449; reference distributions for 441; sufficient statistics for 441
Multivariate normal-gamma distribution 140, 435; as prior distribution 442
Multivariate normal-Wishart distribution 140, 435; as prior distribution 441
Multivariate Student distribution 139, 140, 435; as marginal posterior distribution 441; in linear model 442
Mutual independence 46
Negative-binomial distribution 116, 119, 429; conjugate analysis of 437; conjugate family for 437; inferential process for 437; reference distributions for 437; sufficient statistics for 437
Negative-binomial-beta distribution 118, 429
Neighbourhood of distributions 370
Neyman factorisation criterion 193-195
Neyman-Pearson lemma 471, 472
No-data problems 446
Non-Bayesian theories: alternative approaches xiv, 445-460; hypothesis testing xiv, 469-475; interval estimation xiv, 465-469; point estimation xiv, 460-465; significance testing xiv, 475-478
Non-central chi-squared distribution 121, 431
Non-informative prior 277, 298, 314, 357-367; invariant prior 358, 361, 362, 366; Jeffreys prior 314, 315, 357-362; see also Reference distribution
Nonparametric models, inference xii, 5, 228
Normal distribution, model 7, 121, 123, 136, 155-157, 181-185, 196, 216, 432, 459, 463, 464, 478, 480; and exchangeability xii, 181-185; as approximation to posterior distribution 286-297, 314, 365; as maximum entropy choice 209; as prior distribution 253, 300; biased stopping rule for 253-255; conjugate analysis of 274, 439, 440; conjugate family for 268, 439, 440; inferential process for 253-255, 297, 361-365, 369, 394, 396, 397, 401, 402, 406, 407, 415, 416, 439, 440, 446, 453, 473

reference distributions for 328-333, 439, 440; sufficient statistics for 196, 199, 439, 440, 462
Normal-gamma distribution 136, 434; as prior distribution 268
Nuisance parameter xiv, 245, 445, 479-481
Numerical approximations xiii, 339-356; see also Approximations
Numerical quadrature xiii, 8, 264, 346-348
Objectivity xii, 2, 3, 99-102, 236, 237, 275, 298, 424, 425
Observables xii, 7, 241-247
Occam's razor 476
Odds 49, 357, 390, 391
Oil wildcatters 53, 54
Operational perspective x, 18, 22, 51, 81-83, 85, 87, 90, 94, 99, 100, 102, 161, 195, 235, 243, 298
Opportunity loss 65, 149
Optimal stopping 59-63
Optimisation 10
Options 18, 19; finite 23-33; generalised xi, 141-149; standard 29-33
Ordered parametrisation 323, 333, 364, 367
Ordering (see Preference relation)
Origin invariance 187-190
Outlier 229, 230, 240, 370
Overfitting 420, 421
p-values 474-478
Parameter 7, 173-177, 179-190, 192-197, 220, 235; as label of distribution 114-125, 133-141; equality 233, 234; nuisance 245, 445, 479-481; of conjugate family 265, 266; of interest 245; see also Hyperparameter; see also Location-scale parameters
Parametric: inference 243-247; model xii, 228, 229; sufficiency xii, 192, 193
Pareto distribution 120, 141, 432; as prior distribution 438
Partial exchangeability xii, 7, 168-170, 209-222
Pascal distribution (see Geometric distribution)
Pattern recognition 10
Pearson family of distributions 343
Philosophy of science 10
Pitman estimator 465
Pivotal inference 458, 459, 461
Point estimation (Bayesian) 8; Bayes estimate 257, 258, 461, 465; for absolute error loss 257; for linear loss 257; for logarithmic divergence loss 279; for quadratic loss 257; for zero-one loss 257; in inference 257, 258
Point estimation (non-Bayesian) xiv, 9, 445, 460-465; best linear unbiased (BLUE) 464; bias 360, 462-465; consistent 312, 320, 326, 353, 405, 463-465; efficiency of 362; maximum likelihood (MLE) 288, 359; minimum variance bound (MVB) 464, 465; unbiased 348, 462-465; uniform minimum variance (UMV) 464, 465

Poisson distribution, model 116, 199, 206, 429; as an approximation to the binomial 76, 77; characterisation via sufficiency 206; conjugate analysis of 274, 437; conjugate family for 267, 268, 437; inferential process for 437; reference distributions for 437; sufficient statistics for 437
Poisson-gamma distribution 119, 429; as predictive distribution 437
Posterior distribution 9, 43, 94, 130, 175, 242-247; approximation for large samples 286-297, 314, 365; convergence to correct value 285-287; reference (see Reference distribution)
Posterior odds ratio 49, 356, 390
Power function 470-475
Pragmatism 81
Precise measurement 285
Precision matrix: of multivariate normal distribution 137; of multivariate t distribution 140
Precision, of normal distribution 121, 182
Prediction 7, 10, 407, 408, 445, 454; comparative issues xiv, 482-485; with quadratic loss 300, 301, 397-399, 401, 402, 406, 407
Predictive: distribution 43, 243-247, 419, 483; inference 243-247, 483; model 167, 419, 444; sufficiency 191-193
Preference relation x, xi; among decisions 141-147; among options 17-33, 52, 56; precise measurement of 31-33
Prequential analysis 485, 488
Prescriptive theory 4, 23, 32, 95-98
Prevision 162, 163
Principle of optimality (Bellman) 61
Prior distribution 9, 43, 94, 130, 173-182, 185, 186, 189, 190, 234, 235, 241-247, 357-367, 444, 446, 450, 470; and modelling xii, 165-235; approximation 279-285; conjugate (see Conjugate prior family of distributions); elicitation of 5, 160, 234, 235, 370, 375; hierarchical 222-226, 371, 372, 383; invariant and relatively invariant 358, 361, 362; Jeffreys' prior 314, 315, 357-362; least favourable 449; maximum entropy 365; non-informative 357-367; "objective" 275, 298, 357; possible dependence on stopping rule 252, 255; reference 298, 300-339, 361, 363-365, 416; robust 367-371; "vague" 8, 357-367; see also Improper prior distribution; see also Non-informative prior; see also Reference distribution
Prior ignorance xiii, 8, 264, 298, 357-367; see also Non-informative prior; see also Reference prior
Probability 1-3, 6, 7; assessment 10, 150, 234, 235, 370, 375; classical (or symmetry) 3, 33, 99-102; degrees of belief as 33-49; frequentist 3, 33, 99-102; logical 3, 99-102; personal 33-49, 99-102; review of mathematics of xi, 109-141; upper and lower 99; see also Subjective probability
Probability density function (see Density function)

Probability distribution 36, 37; multivariate xi, 8, 127-141, 433-435; summaries of xiv, 112, 113, 131-133, 427-435; univariate xi, 8, 109-125, 427-433; utility of x, xi, 69-75, 151-154; see also Distribution function
Probability generating function 114
Probability (mass) function: multivariate xi, 128; univariate xi, 111
Probability space 108, 109, 141
Probability theory, review of xi, 109-141
Probit model 219, 220, 382
Profile likelihood 343, 455, 481
Psychological research 10
Quality assurance 374
Quantification 28-33, 98, 99
Quantile 112, 257, 370
Quantitative coherence x, 22-33, 83-85
Quasi-random numbers 350
Radon-Nikodym derivative 111, 133
Random: characteristic function 183-185; distribution function 179; quantity xi, 7, 109-114, 165-167; sample 7, 169, 173, 177, 185, 189, 190; vector xi, 127-133
Randomisation 472
Rao-Blackwell theorem 464
Rationality 4, 5, 13-16
Reference analysis xiii, 8, 9, 264, 298-339, 358, 459
Reference analysis for: an induction problem 322, 323; binomial and negative binomial models 315, 316; deviation from uniformity model 313; exponential model 457; infinite discrete case 338; location model 320; multinomial model 336, 337; normal correlation coefficient 337-338, 363, 364; normal mean and standard deviation 328-330, 361; normal standardised mean 330-331, 362, 363; normal variance 301, 303; prediction with quadratic loss 300, 301; product of normal means 331-332; several normal means 335, 364, 365; uniform model 311, 479
Reference decisions xiii, 299-302
Reference distribution xiii; and Jeffreys' prior 314, 315; and maximum entropy 365; and model criticism 413; and ordered parametrisation 323, 333, 364, 366; and strong inconsistency 365; and the likelihood principle 366, 367; compatibility with sufficient statistics 309-310; explicit form of 307, 312, 319; for hierarchical models 339; for prediction 339; given a consistent estimator 312, 313; given a nuisance parameter xiii, 320-333; independence of sample size 308, 309; in the finite case 307, 308; invariance of 310, 325, 326; multiparameter xiii, 333-339; one-dimensional xiii, 302-320; posterior 300-339, 363-365, 416; prior 298, 300-339, 361, 363-365, 416; restricted xiii, 316-320; under asymptotic normality 314-316, 326-333
Regression: coefficient 221; equation 221; inferential process for 442

multiple 221, 222, 383; polynomial 222; simple 222; trigonometric 222
Regressor variables 221, 377
Regulation problems 483
Relevant event (see Event)
Relevant subsets 468
Reliability 10
Remodelling xiv, 377-426; by model comparison (see Model comparison); by model rejection (see Model rejection); critical issues xiv, 418-426
Reparametrisation 189, 220, 235, 344, 345, 347, 348
Repeated sampling 453, 454, 455
Reporting beliefs 67-81, 150-159, 302, 316, 320, 357, 399-402, 424, 425
Representation theorems xii, 3, 7, 172-207, 235, 236; for 0-1 random quantities 172-175; for 0-1 random vectors 176, 177; for several sequences of 0-1 random quantities 211-215; general form of 177-181; under centred spherical symmetry 183-186; under origin invariance 187-190; under spherical symmetry 181-183; under sufficiency 191-207
Residual analysis 418
Revision of beliefs x, 38-45
Riemannian metric 358
Risk function 446
Robustness xiii, 5, 99, 160, 235, 239, 367-371
Sample: mean 190; median 190; range 190; size 190; space 454; sum of squares 190; total 190
Sample survey 10
Sampling cost (see Cost of observation)
Sampling-importance-resampling xiii, 8, 264, 340, 350-352
Sampling distribution 173, 177, 450; see also Likelihood function; see also Model, parametric
Schwarz criterion 487, 488
Scientific reporting 67-77, 150-157, 302, 320, 360, 399-402, 424, 425
Score function 6, 69-75, 88-89, 151-157, 399-409; local 72-74, 153, 154; logarithmic 74-81, 85, 91, 154-159, 207-209, 302-339, 400-407, 422-424, 488; proper 71-74, 152, 399, 477; quadratic 71, 72, 88, 89, 152, 397-407; smooth 70, 151, 399
Scoring rules (see Score function)
Sech-squared distribution (see Logistic distribution)
Secretary problem 61-63
Sensitivity analysis 160, 235; see also Robustness
Sequential: analysis 5, 375; decision problem 6, 56-67; revision of beliefs 47-49
Several samples xiii, 211-216, 381; and Bernoulli, binomial models 211-215, 216; and multinomial models 216; and normal models 216
Significance testing xiv, 9, 262, 416, 445, 475-478; inadequacies of 476, 479; pure significance test 475

significance level (p-value) 471, 474-478; see also Hypothesis testing
Significant event 27, 28
Simple hypothesis (see Hypothesis)
Simpson's paradox 41, 42
Singular distribution 111
Snedecor distribution 123, 124, 140, 432
Spectral analysis 10
Spherical quadrature 348
Spherical symmetry 181-183
Splines 374
Spun coin, the 279-281
Square-root inverted gamma distribution 119, 431
St. Petersburg paradox 87
Standard deviation 112
Standard events 29-32, 50-52, 54
Standard normal distribution 121, 123
Standard Student distribution 123
Statistic 190
Statistical decision problem (see Decision problem)
Statistical inference (see Inference)
Statistical models (see Models)
Stein's paradox 365
Stirling's formula 414
Stochastic approximation 374
Stochastic assumptions 238, 239
Stochastic complexity 485, 488
Stochastic simulation 264, 340, 348-356
Stopping rule xii, 7, 247-255, 367, 375
Straight-line model 220, 222, 382
Strong law of large numbers 126, 173, 182, 185, 188, 286, 288
Strong inconsistency 365
Structural assumptions 238, 239
Structural inference 459-461

Structured layouts xii, 217, 218, 381, 382
Student distribution 122, 123, 136, 139, 239, 240, 329, 335, 369, 385, 416, 432, 459, 462, 469, 480
Subjective probability ix, 2, 3, 94, 99-102; measurement of 31, 32, 85; uniqueness of 35, 37, 41; see also Beliefs
Subjectivity ix, xii, 2, 3, 99-102, 195, 236, 237, 275, 424, 425
Sufficiency 247-255, 451; parametric xii, 192, 193; predictive xii, 191-193
Sufficient statistics xii, 7, 190-207, 461; definition of 191, 192; factorization criterion for 193, 194; for exponential family xii, 197-207; in conjugate analysis 264-285; minimal 197
Sure-thing principle 26
Survival analysis 5
Symmetry 3, 7, 33, 168-170; centred spherical 183-186; complete 169; spherical 181-183
t distribution (see Student distribution)

Temporal coherence 94
Test procedure (see Hypothesis testing)
Test statistic 412-417
Testing hypotheses (see Hypothesis testing)
Time series 5, 374
Tolerance region 484
Transformation: elaboration 230, 231; of random quantities 111, 113, 130-132, 246, 295-297, 344-347, 350, 358, 360
Transitivity 24-26, 84

Truncated exponential distribution, model 200
Two-way layout 217, 218
Type 1 and type 2 error probabilities (see Hypothesis testing)
Unbiased estimation 348, 462-465
Uncertainty 2-5, 13-16; precise measurement of 31-33; relation 21-38, 106-109
Uniform distribution, model 117, 349, 380, 430; as prior distribution 308, 320, 356, 357; conjugate analysis of 438; discrete 79, 117, 308, 356; inferential process for 438, 468, 479; reference distributions for 311, 438; sufficient statistics for 199, 200, 438
Uniformly most powerful test 472, 473
Uniqueness: of conditional probability measures 41; of probability measures 37
Utility x, xi, 6, 16, 49-75, 141-154; and loss functions 256, 391, 396, 445; axioms for 90, 91; bounded 49-54; canonical 50; elicitation of 51, 53, 54, 91; expected (see Expected, utility); for money 87; general 54; logarithmic (see Score function, logarithmic); of a probability distribution x, xi, 69-75, 151-154; zero-one 389-395; see also Score function
Value of perfect information 65, 66, 149
Variance 112, 113
Weak law of large numbers 126
Weight of evidence 390
Weighted average: form of posterior means 275, 276, 368; form of prediction 384, 398; of posterior distributions 281, 282, 387; of prior distributions 280, 282
Wishart distribution 138-140, 435; as prior distribution 441
Zero-one: discrepancy xiv, 413, 414; utility xiii, 389-395



Author Index
In an attempt to signal the contributions of authors who otherwise are subsumed anonymously in the er al. of multi-author papers, we have included page references for all authors of such papers, even though only the first authors name actually appears on the page.

Abramowitz, M. 156,489 Achcar, J. A. 345.489 Aczel, J. 91.489 Adekola, A. D. 289,548 Aitchison, J. 10,99.262,483.489 Aitken, C. G. G. 10,489 Aitkin, M. 41 7,489 Akaike, H. 359,367,455,487,489,490 Albert. J. H. 352,426.48 I , 490 Aldous. D. 236,490 Allais, M. 96.97, 98.490 Ameen, J. R. M. 374,490 Amaral-Turkman. M. A. 483,490 Amster, S. J. 376,490

Andersen. E. B. 455.48 I , 490 Anderson, T. W. 376,445,490 Angers, J.-F. 371.490 Anscombe, F. J. 10.85.295.490 Ansley. C. F. 374.491 Antoniak, C. 228.491 Aoki, M. 10,374,491 Arimoto, S. 9 I , 49 1 Arnaiz. G. 230.49 I Arnold. S. F. 1 1,491 AKOW,K.J. 91, 103. 104,491 Ash, R. B. 106, 1 11. 162, 173.491 Aumann. R. J. 85. 104.490,491 Aykaq, A. 10,491

Bahadur, R. R. Bailey, R. W. Balch, M.

45 I. 49 I 123,491

85.491

Bandemer, H. 160, 491
Barlow, R. E. 11, 104, 172, 236, 374, 491, 492, 523
Barnard, G. A. 2, 95, 250, 270, 362, 376, 455, 458, 492
Barndorff-Nielsen, O. E. 206, 455, 481, 484, 492
Barnett, V. 102, 376, 445, 492
Barni, I. 10, 492
Bartholomew, D. J. 11, 376, 492
Bartlett, M. 422, 493
Basu, D. 136, 250, 374, 376, 452, 453, 468, 479, 481, 493
Bauwens, L. 10, 493
Bayarri, M. J. viii, 104, 243, 250, 313, 338, 364, 371, 417, 418, 493, 494, 496
Bayes, T. 1, 2, 89, 101, 102, 356, 357, 482, 494
Beauchamp, T. J. 232, 534
Becker, G. M. 91, 494
Bellman, R. E. 61, 494
Berger, J. O. viii, 9, 10, 11, 102, 250, 258, 263, 302, 306, 307, 333, 334, 337, 338, 362, 366, 370, 371, 373, 375, 394, 417, 423, 449, 475, 479, 490, 493, 494, 495, 496, 506, 519, 520, 536, 554
Berger, R. L. 104, 376, 445, 475, 495, 500
Berk, R. H. 287, 495
Berliner, L. M. 228, 371, 374, 494, 495, 496
Bermudez, J. D. 71, 232, 289, 419, 496, 497
Bernardo, J. M. 9, 10, 11, 18, 71, 92, 141, 156, 232, 302, 305, 306, 307, 313, 323, 333, 334, 337, 338, 362, 365, 366, 373, 374, 394, 417, 419, 422, 468, 481, 483, 494, 495, 496, 497
Bernoulli, D. 87, 497
Bernoulli, J. 89, 101, 497
Berry, D. A. 10, 160, 376, 495, 497, 536
Besag, J. 353, 356, 374, 497
Bessant, R. G. 44, 535
Best, N. G. 353, 355, 516
Bickel, P. J. 455, 463, 497
Binder, S. 373, 497
Birnbaum, A. 11, 250, 455, 456, 497
Bjørnstad, J. F. 455, 485, 498
Blackwell, D. 91, 103, 159, 356, 463, 464, 498
Blyth, C. R. 42, 498
Boos, D. D. 455, 534
Booth, N. B. 374, 498
Bordley, R. F. 96, 498
Borel, E. 89, 498
Borovcnik, M. 9, 498
Bowen, K. C. 10, 104, 552
Box, G. E. P. 9, 10, 44, 92, 93, 230, 231, 260, 263, 358, 360, 361, 367, 370, 376, 413, 417, 419, 498, 499
104, 376, 445, 475. 495. 287.495 228.371.374.494,495. 7 I , 232,289.4 19,496.

Boyer. J. E. Jr. 373.5 I6 Boyer, M. 10.499 Brant, R. 418.5(K) Breslow. N. 1 I. 499 Bretthorst. G. L. 10.499 Bridgman. P. W. 82.499 Brier, G. W. 7 1.499 Brillinger, D. R. 458,499 Broemeling, L. D. 10.499 Brown, L. D. 10, 206. 358.499

Author Index
Brown, P. J. 10, 373, 374, 499

Cochrane, J. L. 104, 501
Coletti, G. 10, 492
Collier, R. O. 10, 412
Conlisk, J. 87, 501
Consonni, G. 279, 422, 501
Cook, R. D. 418, 551
Cooke, R. M. 10, 501
Cornfield, J. 11, 88, 361, 468, 502, 514
Cowell, R. G. 426, 502, 546
Cox, D. D. 374, 502
Cox, D. R. 68, 231, 237, 238, 343, 368, 376, 445, 455, 479, 480, 481, 484, 487, 498, 502
Cox, R. T. 90, 502
Crosse, J. 375, 552
Crowder, M. J. 289, 502
Csiszár, I. 366, 502
Cuevas, A. 371, 502
Cyert, R. M. 10, 502
Daboni, L. 9, 502
Dalal, S. 228, 283, 502
Dale, A. I. 2, 10, 502
Darmois, G. 203, 502
DasGupta, A. 10, 160, 262, 495, 502
Davis, W. W. 350, 547
Davison, A. C. 455, 503
Davison, D. 91, 503
Dawid, A. P. 10, 11, 47, 93, 162, 183, 230, 236, 250, 279, 289, 362, 363, 366, 371, 373, 374, 480, 481, 485, 488, 496, 503, 504, 545, 548
Dean, G. W. 10, 520
Debreu, G. 91, 504
Deely, J. J. 160, 373, 504, 528, 531
de Finetti, B. vii, 2, 3, 9, 10, 11, 71, 77, 86, 87, 88, 89, 101, 102, 106, 109, 110, 162, 163, 172, 212, 230, 235, 242, 375, 483, 504, 505

Brown, R. V. 100,104,499.531 Brown, S. J. 420,527 Brumat, C. 10,491 Brunk, H.D. 162,499 Buehler, R. J. 88. 154,468,469,499 Bunn, D. J. 104.499 Butler, R. W. 455,485.499 Campling, G. E. G. 374,552 Cano, J. A. 37 1,394,45I , 500,534 Carlin, B. P. 355,374,500 Carlin, J. B. 371,500 Carnap, R. 90.100.500 Caro, E. 104,500 Casella. G. 353, 376, 445, 468, 469, 475,500,5 19,532 Chaloner, K. 160,418,500 Chan, K.S. 353,500 Chang, H. 418,419,420,515 Chang. T 362.500 . Chankong, V. 104,500 Chao, M. T. 289,500 Chateaneuf, A. 99,500 Chen, C. F. 289,501 Chernoff, H. 10.91.160,475.501 . Chow, Y S. 178,501 Christ, D. E. 261.523 Chuang, D. T. 37 I , 525 Chuaqui, R. 90.501 Cifarelli. D. M. 9, 10, 1 I, 483,501 Clarke, B. 367.501 Clarke, R. D. 100.501 Claroti, C. A. 10,501 Clayton, D. G. 353, 355,516 C1ayton.M. K. 104,419,501,516 Clemen, R. T. 104.50 I

Author Index

DeGroot, M. H. viii, 9. 10, I I. 62, 91, 102, 104, 133. 158, 160. 243, 250. 258. 263. 276, 289. 362. 373. 375. 316. 397, 417, 418, 445, 449, 493. 494.497,502,505,508,517 de la Horra. J. viii, 258, 37 1.463.506 Delampady. M. 263.37 I, 394.4 17,475, 495,506 Dellaportas, P. Demonient. G . De Morgan. A. 348, 355, 506 10.534
XK).

Dunsmore, 1. R.

10.373,483.489,490.

508 Durbin, J.

250.475. SOX, 509 Dykstra. R. L. 228.509 Dynkin, E. B. 35.509 Earman, J. 2,509 Eaton. M. L. 89.227.362.367.507. 509 Eaves. D. M. 367.422. 500. SOY Eddy. W. F. 49.514 Edward%, W. F. 455. 509 A. 10. 1 1 . 91. 103. 104. Edwards. W. L. 263.37 1.315.425. 509 Efron. R . 365. 37 I . 373.158.48 1. 509 Eichhorn. W. Eliashberg, J. El-Krunz, S. M. 37. SO9 104.504 160.510

89,506 . Dempster. A. P 99.371.419. DeRobertis. L. 99, 37 1, 506 Dew. A. 483.5 14 423 355.506 104, 158,506 de Vos. A. F. Devroye. L. De Waal. D. J.

506

Dey.D. K. 359.371,418.419,420.506, 5 14.515.535 Diaconis. P. viii. 94, 188,204,206.207. 226. 227, 236, 273. 276, 279. 283. 348.463,506.507,512 Di Becco. M. 10.492 Dickey. J. M. I I , 103, 123, 163, 228. 263, 283, 375, 378, 411, 417. 424. 425,507,508.5 13, 528 Diebolt, J. Doksum, K. A. 374, 508 228,37 I , 508 104,500

Ellsberg. D. 96,98.5 10 Epstein. E. S. 71, 53.5 Escobar, M. D. Erickson. G. J. Ericson. W. A. 374,552 10.510.546

I I . 373.3745 10

Dominguez. J. I.

Fan. T.H. 37 I , 495 Fang. B. 0. 373, 504 Farrell. R. H. 258.263.5 10 Faulkenherry. G. D. 485,529 Fearn. T. 10. 510 Feddersen. A. P. 469.499 0 Fedorov. V V. 159.5 1 Feller. W. 46.5 1 0 Fellner. W. 10.510 Felsenstein. K. 160.417. 510 Ferguson, T. S. I I . 62. 102. 228. 446. 463.510 Fernandea C. 37 I , 506 Ferrandii, J. R. 18. 141,261. 365.416. 417.497.510.511

Domotor. Z. 90.508 Dransfield, M. 347.426. 545 Draper, N. R. 158, -SO8 Dr&ze, H. 239. 508 J. Dubins, L. E. 9. 104. 162.498.508 Duckstein, L. 104.5 I7 DuMouchel. W. 374.508 Duncan, G. 160.508 Duncan. L. R. 10,508

Author Index
Fieller, E. C. Fienberg, S. E. Fine, T. L. 458.5 1 1
10, 104,505.51 1

577
Gelman, A. Geman, D. 353,515 354.374.515

56.98,99, 102.51 1.551

Fishbum, P. C. 10, 11. 83, 84, 85, 9 . 0 91,102, 103, 104.491,51 I Fisher, R. A. 68, 195, 314, 338, 420, 424,450,45 1,457.460.5 1 1 Florens, J.-P. 9, 10, 164,239,362,374, 420,5 12 Floumoy, N. Fluhler, H. Fouggre, P T. . Foulis, D. J. Fraser. D. A. S. 481,512 348,512
1 I , 426,540
10,5 12

Geman, S. 354,374.5 15 Genest, C. 11, 104,515 George. E. I. 232. 353, 367. 373, 500,
515

Geweke. J. 349,515 Geyer, C. J. 353.515 Ghosal, S. 289,516 Ghosh, J. K. 289,324,455,497,516 Ghosh, M. 10. 11,258,373,516 Gilardoni, G. L. 104,516 Gilio, A. 89,263,371,516 Gilks, W. R. 353,355,426,516,549 Gillies, D.A. 2,516 Gilliland, D. C. 373,516 Girelli-Bruni, E. 9. 516 Gir6n. F. J. viii, 99, 104,373,374,419, 497,500.5 16 Girshick, M. A. 91,446,498,517 Godambe, V.P 10,374.517 . Goe1,P. K. 10,104, 158,276,371,373. 425,496,5 17 Goicoechea. A. 104.5 17 Goldstein, M. I I , 94, 163, 373, 426, 455,517,518 G6mez, E. 417,518 G6mez-Villegas. M. A. viii. 371, 417, 518 G0od.I.J. 9,10,11,75,90,91.94,99, 100, 102. 154. 228, 263, 360, 373, 374,390,458.5 18.5 19,536 Goodman. B. C. 375,509 Goodwin, P 375, 519 . Gordon. N. J. 370.519 Goutis, C. 468.5 19 Grandy, W.T. 10.5 19,546

95,540 250,289,455,459,460.

Freedman, D. A. 87,182,204,206,207, 226, 227. 235, 236. 289, 463. 507. 512,513 Freeman, P. R. 10, 62, 103, 230, 425, 485,508.5 13,55 1 French, S. 5, lo,%, l04,375,513,541 Friedman. M. Fu, J. C. 91,513 289,5 13 374,5 13
10, 104,513

Gamerman. D. Gitrdenfors, P. Gaskins. R.

Garthwaite, P. H. 375,513 228.5 19 10,89,513 Gathercole, R. B. 374,546


Gatsonis, C. A.

Gaul, W.

103,513

Geisser, S. 10, 11. 90,316, 322, 361, 362, 373, 405. 418. 419. 483. 485. 501,513,514,524 Gelfand. A. E. vii, 247, 350, 353, 355, 418, 419, 420, 483, 500, 514, 515, 545,s1

Aurhor 1nde-r
Grayson, C. J. 10.53. 519 Green, P.J. 353, 356,497 Grenander, V. 374.5 19 Grieve, A. P. 1 I , 426,5 19,540 Groenewald, P. C. N. 104, 158,506 Gu.C. 374.5 I9 Gulati. C. M. 104.517 Gupta. S. S. 10.519.520 Gutiemez-Peiia.E. 277.520 Gunman, 1. 158. 230, 418. 484, 508, 520, 537 Haag, J. 235,520 Hacking, 1. 99. 102.520 Hadley, G . 10,520 Hagen, D. 96,490 Haimes, Y . 104,500 Hald, A. 374.520 Haldane, J. B. S. 315,362,520 Hall, W.J. 228,283,502 Hall, W.K. 362,535 Halmos. P. R. 45 I , 520 Halter, A. Pi. 10, 520 Harris, J. E. 374.508 Harrison, P J. 10,374,520,539,552 . Harsany, J. 104.520 Hartigan.J. A. 9.99.130,163,164,250, 289, 359. 360, 362, 371, 455. 506. 520,52 I Hartley, R. 10. 104,513 Hasminski, R. Z. 289.523 Hasting, W. K. 356,521 Hays. W. L. 375,509 Heath. D. L. 88.90. 162. 172.521 Henrion, M. 375,534 Hens. T. 84.521 Hernandez. A. 45 1,500 Herstein. I. N. 9 I , 52 1 Hewitt. E. 235,521
