
Contents

Preface vii
PART A ANTECHAMBER 1
1 Database Systems 3
1.1 The Main Principles 3
1.2 Functionalities 5
1.3 Complexity and Diversity 7
1.4 Past and Future 7
1.5 Ties with This Book 8
Bibliographic Notes 9
2 Theoretical Background 10
2.1 Some Basics 10
2.2 Languages, Computability, and Complexity 13
2.3 Basics from Logic 20
3 The Relational Model 28
3.1 The Structure of the Relational Model 29
3.2 Named versus Unnamed Perspectives 31
3.3 Conventional versus Logic Programming Perspectives 32
3.4 Notation 34
Bibliographic Notes 34
PART B BASICS: RELATIONAL QUERY LANGUAGES 35
4 Conjunctive Queries 37
4.1 Getting Started 38
4.2 Logic-Based Perspectives 40
4.3 Query Composition and Views 48
4.4 Algebraic Perspectives 52
4.5 Adding Union 61
Bibliographic Notes 64
Exercises 65
5 Adding Negation: Algebra and Calculus 70
5.1 The Relational Algebras 71
5.2 Nonrecursive Datalog with Negation 72
5.3 The Relational Calculus 73
5.4 Syntactic Restrictions for Domain Independence 81
5.5 Aggregate Functions 91
5.6 Digression: Finite Representations of Infinite Databases 93
Bibliographic Notes 96
Exercises 98
6 Static Analysis and Optimization 105
6.1 Issues in Practical Query Optimization 106
6.2 Global Optimization 115
6.3 Static Analysis of the Relational Calculus 122
6.4 Computing with Acyclic Joins 126
Bibliographic Notes 134
Exercises 136
7 Notes on Practical Languages 142
7.1 SQL: The Structured Query Language 142
7.2 Query-by-Example and Microsoft Access 149
7.3 Confronting the Real World 152
Bibliographic Notes 154
Exercises 154
PART C CONSTRAINTS 157
8 Functional and Join Dependency 159
8.1 Motivation 159
8.2 Functional and Key Dependencies 163
8.3 Join and Multivalued Dependencies 169
8.4 The Chase 173
Bibliographic Notes 185
Exercises 186
9 Inclusion Dependency 192
9.1 Inclusion Dependency in Isolation 192
9.2 Finite versus Infinite Implication 197
9.3 Nonaxiomatizability of fds + inds 202
9.4 Restricted Kinds of Inclusion Dependency 207
Bibliographic Notes 211
Exercises 211
10 A Larger Perspective 216
10.1 A Unifying Framework 217
10.2 The Chase Revisited 220
10.3 Axiomatization 226
10.4 An Algebraic Perspective 228
Bibliographic Notes 233
Exercises 235
11 Design and Dependencies 240
11.1 Semantic Data Models 242
11.2 Normal Forms 251
11.3 Universal Relation Assumption 260
Bibliographic Notes 264
Exercises 266
PART D DATALOG AND RECURSION 271
12 Datalog 273
12.1 Syntax of Datalog 276
12.2 Model-Theoretic Semantics 278
12.3 Fixpoint Semantics 282
12.4 Proof-Theoretic Approach 286
12.5 Static Program Analysis 300
Bibliographic Notes 304
Exercises 306
13 Evaluation of Datalog 311
13.1 Seminaive Evaluation 312
13.2 Top-Down Techniques 316
13.3 Magic 324
13.4 Two Improvements 327
Bibliographic Notes 335
Exercises 337
14 Recursion and Negation 342
14.1 Algebra + While 344
14.2 Calculus + Fixpoint 347
14.3 Datalog with Negation 355
14.4 Equivalence 360
14.5 Recursion in Practical Languages 368
Bibliographic Notes 369
Exercises 370
15 Negation in Datalog 374
15.1 The Basic Problem 374
15.2 Stratified Semantics 377
15.3 Well-Founded Semantics 385
15.4 Expressive Power 397
15.5 Negation as Failure in Brief 406
Bibliographic Notes 408
Exercises 410
PART E EXPRESSIVENESS AND COMPLEXITY 415
16 Sizing Up Languages 417
16.1 Queries 417
16.2 Complexity of Queries 422
16.3 Languages and Complexity 423
Bibliographic Notes 425
Exercises 426
17 First Order, Fixpoint, and While 429
17.1 Complexity of First-Order Queries 430
17.2 Expressiveness of First-Order Queries 433
17.3 Fixpoint and While Queries 437
17.4 The Impact of Order 446
Bibliographic Notes 457
Exercises 459
18 Highly Expressive Languages 466
18.1 While_N: while with Arithmetic 467
18.2 While_new: while with New Values 469
18.3 While_uty: An Untyped Extension of while 475
Bibliographic Notes 479
Exercises 481
PART F FINALE 485
19 Incomplete Information 487
19.1 Warm-Up 488
19.2 Weak Representation Systems 490
19.3 Conditional Tables 493
19.4 The Complexity of Nulls 499
19.5 Other Approaches 501
Bibliographic Notes 504
Exercises 506
20 Complex Values 508
20.1 Complex Value Databases 511
20.2 The Algebra 514
20.3 The Calculus 519
20.4 Examples 523
20.5 Equivalence Theorems 526
20.6 Fixpoint and Deduction 531
20.7 Expressive Power and Complexity 534
20.8 A Practical Query Language for Complex Values 536
Bibliographic Notes 538
Exercises 539
21 Object Databases 542
21.1 Informal Presentation 543
21.2 Formal Definition of an OODB Model 547
21.3 Languages for OODB Queries 556
21.4 Languages for Methods 563
21.5 Further Issues for OODBs 571
Bibliographic Notes 573
Exercises 575
22 Dynamic Aspects 579
22.1 Update Languages 580
22.2 Transactional Schemas 584
22.3 Updating Views and Deductive Databases 586
22.4 Updating Incomplete Information 593
22.5 Active Databases 600
22.6 Temporal Databases and Constraints 606
Bibliographic Notes 613
Exercises 615
Bibliography 621
Symbol Index 659
Index 661
1 Database Systems
Alice: I thought this was a theory book.
Vittorio: Yes, but good theory needs the big picture.
Sergio: Besides, what will you tell your grandfather when he asks what you study?
Riccardo: You can't tell him that you're studying the fundamental implications of
genericity in database queries.
Computers are now used in almost all aspects of human activity. One of their main
uses is to manage information, which in some cases involves simply holding data for
future retrieval and in other cases serving as the backbone for managing the life cycle of
complex financial or engineering processes. A large amount of data stored in a computer
is called a database. The basic software that supports the management of this data is
called a database management system (dbms). The dbms is typically accompanied by a
large and ever-growing body of application software that accesses and modifies the stored
information. The primary focus in this book is to present part of the theory underlying
the design and use of these systems. This preliminary chapter briefly reviews the field of
database systems to indicate the larger context that has led to this theory.
1.1 The Main Principles
Database systems can be viewed as mediators between human beings who want to use
data and physical devices that hold it (see Fig. 1.1). Early database management was based
on explicit usage of file systems and customized application software. Gradually,
principles and mechanisms were developed that insulated database users from the details
of the physical implementation. In the late 1960s, the first major step in this direction was
the development of the three-level architecture. This architecture separated database functionalities
into physical, logical, and external levels. (See Fig. 1.2. The three views represent various
ways of looking at the database: multirelations, universal relation interface, and graphical
interface.)
The separation of the logical definition of data from its physical implementation is
central to the field of databases. One of the major research directions in the field has
been the development and study of abstract, human-oriented models and interfaces for
specifying the structure of stored data and for manipulating it. These models permit the
user to concentrate on a logical representation of data that resembles his or her vision
of the reality modeled by the data much more closely than the physical representation.
Figure 1.1: Database as mediator between humans and data
Several logical data models have been developed, including the hierarchical, network,
relational, and object-oriented. These include primarily a data definition language (DDL)
for specifying the structural aspects of the data and a data manipulation language (DML)
for accessing and updating it. The separation of the logical from the physical has resulted
in an extraordinary increase in database usability and programmer productivity.
Another benefit of this separation is that many aspects of the physical
implementation may be changed without having to modify the abstract vision of the database. This
substantially reduces the need to change existing application programs or retrain users.
The separation of the logical and physical levels of a database system is usually called
the data independence principle. This is arguably the most important distinction between
file systems and database systems.
The second separation in the architecture, between external and logical levels, is also
important. It permits different perspectives, or views, on the database that are tailored to
specific needs. Views hide irrelevant information and restructure data that is retained. Such
views may be simple, as in the case of automatic teller machines, or highly intricate, as in
the case of computer-aided design systems.
A major issue connected with both separations in the architecture is the trade-off
between human convenience and reasonable performance. For example, the separation
between logical and physical means that the system must compile queries and updates
directed to the logical representation into real programs. Indeed, the use of the relational
model became widespread only when query optimization techniques made it feasible. More
generally, as the field of physical database optimization has matured, logical models have
become increasingly remote from physical storage. Developments in hardware (e.g., large
and fast memories) are also influencing the field a great deal by continually changing the
limits of feasibility.
Figure 1.2: Three-level architecture of database systems
1.2 Functionalities
Modern dbmss include a broad array of functionalities, ranging from the very physical
to the relatively abstract. Some functionalities, such as database recovery, can largely be
ignored by almost all users. Others (even among the most physical ones, such as indexing)
are presented to application programmers in abstracted ways.
The primary functionalities of dbmss are as follows:
Secondary storage management: The goal of dbmss is the management of large amounts
of shared data. By large we mean that the data is too big to fit in main memory. Thus an
essential task of these systems is the management of secondary storage, which involves
an array of techniques such as indexing, clustering, and resource allocation.
Persistence: Data should be persistent (i.e., it should survive the termination of a particular
database application so that it may be reused later). This is a clear divergence from
standard programming, in which a data structure must be coded in a file to live beyond
the execution of an application. Persistent programming languages (e.g., persistent C++)
are now emerging to overcome this limitation of programming languages.
Concurrency control: Data is shared. The system must support simultaneous access to
shared information in a harmonious environment that controls access conflicts and
presents a coherent database state to each user. This has led to important notions such
as transaction and serializability and to techniques such as two-phase locking that
ensure serializability.
Data protection: The database is an invaluable source of information that must be protected
against human and application program errors, computer failures, and human mis-
use. Integrity checking mechanisms focus on preventing inconsistencies in the stored
data resulting, for example, from faulty update requests. Database recovery and back-
up protocols guard against hardware failures, primarily by maintaining snapshots of
previous database states and logs of transactions in progress. Finally, security control
mechanisms prevent classes of users from accessing and/or changing sensitive infor-
mation.
Human-machine interface: This involves a wide variety of features, generally revolving
around the logical representation of data. Most concretely, this encompasses DDLs
and DMLs, including both those having a traditional linear format and the emerging
visual interfaces incorporated in so-called fourth-generation languages. Graphically
based tools for database installation and design are popular.
Distribution: In many applications, information resides in distinct locations. Even within
a local enterprise, it is common to find interrelated information spread across several
databases, either for historical reasons or to keep each database within manageable
size. These databases may be supported by different systems (interoperability) and
based on distinct models (heterogeneity). The task of providing transparent access to
multiple systems is a major research topic of the 1990s.
Compilation and optimization: A major task of database systems is the translation of the
requests against the external and logical levels into executable programs. This usually
involves one or more compilation steps and intensive optimization so that performance
is not degraded by the convenience of using more friendly interfaces.
Some of these features concern primarily the physical data level: concurrency control,
recovery, and secondary storage management. Others, such as optimization, are spread
across the three levels.
Database theory, and more generally database models, have focused primarily on
the description of data and on querying facilities. The support for designing application
software, which often constitutes a large component of databases in the field, has
generally been overlooked by the database research community. In relational systems,
applications can be written in C extended with embedded SQL (the standard relational
query language) commands for accessing the database. Unfortunately, there is a
significant distance between the paradigms of C and SQL. The same can be said to a
certain extent about fourth-generation languages. Modern approaches to improving
application programmer productivity, such as object-oriented or active databases, are being
investigated.
1.3 Complexity and Diversity
In addition to supporting diverse functionalities, the field of databases must address a
broad variety of uses, styles, and physical platforms. Examples of this variety include the
following:
Applications: Financial, personnel, inventory, sales, engineering design, manufacturing
control, personal information, etc.
Users: Application programmers and software, customer service representatives, secre-
taries, database administrators (dbas), computer gurus, other databases, expert sys-
tems, etc.
Access modes: Linear and graphical data manipulation languages, special-purpose
graphical interfaces, data entry, report generation, etc.
Logical models: The most prominent of these are the network, hierarchical, relational,
and object-oriented models; and there are variations in each model as implemented
by various vendors.
Platforms: Variations in host programming languages, computing hardware and operating
systems, secondary storage devices (including conventional disks, optical disks, tape),
networks, etc.
Both the quality and quantity of variety compound the complexity of modern dbmss,
which attempt to support as much diversity as possible.
Another factor contributing to the complexity of database systems is their longevity.
Although some databases are used by a single person or a handful of users for a year or
less, many organizations are using databases implemented over a decade ago. Over the
years, layers of application software with intricate interdependencies have been developed
for these legacy systems. It is difficult to modernize or replace these databases because
of the tremendous volume of application software that uses them on a routine basis.
1.4 Past and Future
After the advent of the three-level architecture, the field of databases has become
increasingly abstract, moving away from physical storage devices toward human models
of information organization. Early dbmss were based on the network and hierarchical models.
Both provide some logical organization of data (in graphs and trees), but these representa-
tions closely mirror the physical storage of the data. Furthermore, the DMLs for these are
primitive because they focus primarily on navigation through the physically stored data.
In the 1970s, Codd's relational model revolutionized the field. In this model, humans
view the data as organized in relations (tables), and more declarative languages are pro-
vided for data access. Indexes and other mechanisms for maintaining the interconnection
between data are largely hidden from users. The approach became increasingly accepted
as implementation and optimization techniques could provide reasonable response times in
spite of the distance between logical and physical data organization. The relational model
also provided the initial basis for the development of a mathematical investigation of data-
bases, largely because it bridges the gap between data modeling and mathematical logic.
Historically, dbmss were biased toward business applications, and the relational model
best fitted these needs. However, the requirements for the management of large, shared
amounts of data were also felt in a variety of fields, such as computer-aided design and
expert systems. These new applications require more in terms of structures (more complex
than relations), control (more dynamic environments), and intelligence (incorporation of
knowledge). They have generated research and developments at the border of other fields.
Perhaps the most important developments are the following:
Object-oriented databases: These have come from the merging of database technology,
object-oriented languages (e.g., C++), and artificial intelligence (via semantic models).
In addition to providing richer logical data structures, they permit the incorporation of
behavioral information into the database schema. This leads to better interfaces and a
more modular perspective on application software; and, in particular, it improves the
programmer's productivity.
Deductive and active databases: These originated from the fusion of database technology
and, respectively, logic programming (e.g., Prolog) and production-rule systems (e.g.,
OPS5). The hope is to provide mechanisms that support an abstract view of some
aspects of information processing analogous to the abstract view of data provided by
logical data models. This processing is generally represented in the form of rules and
separated from the control mechanism used for applying the rules.
These two directions are catalysts for significant new developments in the database field.
1.5 Ties with This Book
Over the past two decades, database theory has pursued primarily two directions. The
principal one, which is the focus of this book, concerns those topics that can meaningfully
be discussed within the logical and external layers. The other, which has a different flavor
and is not discussed in this book, is the elegant theory of concurrency control.
The majority of this book is devoted to the study of the relational model. In particular,
relational query languages and language primitives such as recursion are studied in depth.
The theory of dependencies, which provides the formal foundation of integrity constraints,
is also covered. In the last part of the book, we consider more recent topics whose theory is
generally less well developed, including object-oriented databases and behavioral aspects
of databases.
By its nature, theoretical investigation requires the careful articulation of all assump-
tions. This leads to a focus on abstract, simplied models of much more complex practical
situations. For example, one focus in the early part of this book is on conjunctive queries.
These form the core of the select-from-where clause of the standard language in database
systems, SQL, and are perhaps the most important class of queries from a practical stand-
point. However, the conjunctive queries ignore important practical components of SQL,
such as arithmetic operations.
Speaking more generally, database theory has focused rather narrowly on specic
areas that are amenable to theoretical investigation. Considerable effort has been directed
toward the expressive power and complexity of both query languages and dependencies, in
which close ties with mathematical logic and complexity theory could be exploited. On the
other hand, little theory has emerged in connection with physical query optimization, in
which it is much more difficult to isolate a small handful of crucial features upon which a
meaningful theoretical investigation can be based. Other fundamental topics are only now
receiving attention in database theory (e.g., the behavioral aspects of databases).
Theoretical research in computer science is driven both by the practical phenomena
that it is modeling and by aesthetic and mathematical rigor. Although practical motiva-
tions are touched on, this text dwells primarily on the mathematical view of databases and
presents many concepts and techniques that have not yet found their place in practical sys-
tems. For instance, in connection with query optimization, little is said about the heuristics
that play such an important role in current database systems. However, the homomorphism
theorem for conjunctive queries is presented in detail; this elegant result highlights the es-
sential nature of conjunctive queries. The text also provides a framework for analyzing a
broad range of abstract query languages, many of which are either motivated by, or have
influenced, the development of practical languages.
As we shall see, the data independence principle has fundamental consequences for
database theory. Indeed, much of the specificity of database theory, and particularly of the
theory of query languages, is due to this principle.
With respect to the larger field of database systems, we hope this book will serve a dual
purpose: (1) to explain to database system practitioners some of the underlying principles
and characteristics of the systems they use or build, and (2) to arouse the curiosity of
theoreticians reading this book to learn how database systems are actually created.
Bibliographic Notes
There are many books on database systems, including [Dat86, EN89, KS91, Sto88, Ull88,
Ull89b, DA83, Vos91]. A (now old) bibliography on databases is given in [Kam81]. A
good introduction to the eld may be found in [KS91], whereas [Ull88, Ull89b] provides
a more in-depth presentation.
The relational model is introduced in [Cod70]. The first text on the logical level of
database theory is [Mai83]. More recent texts on the subject include [PBGG89], which
focuses on aspects of relational database theory; [Tha91], which covers portions of de-
pendency theory; and [Ull88, Ull89b], which covers both practical and theoretical aspects
of the eld. The reader is also referred to the excellent survey of relational database the-
ory in [Kan88], which forms a chapter of the Handbook of Theoretical Computer Science
[Lee91].
Database concurrency control is presented in [Pap86, BHG87]. Deductive databases
are covered in [Bid91a, CGT90]. Collections of papers on this topic can be found in
[Min88a]. Collections of papers on object-oriented databases are in [BDK92, KL89,
ZM90]. Surveys on database topics include query optimization [JK84a, Gra93], deductive
databases [GMN84, Min88b, BR88a], semantic database models [HK87, PM88], database
programming languages [AB87a], aspects of heterogeneous databases [BLN86, SL90],
and active databases [HW92, Sto92]. A forthcoming book on active database systems is
[DW94].
2 Theoretical Background
Alice: Will we ever get to the real stuff?
Vittorio: Cine nu cunoaște lema, nu cunoaște teorema.
Riccardo: What is Vittorio talking about?
Sergio: This is an old Romanian saying that means, "He who doesn't know the
lemma doesn't know the teorema."
Alice: I see.
This chapter gives a brief review of the main theoretical tools and results that are used in
this volume. It is assumed that the reader has a degree of maturity and familiarity with
mathematics and theoretical computer science. The review begins with some basics from
set theory, including graphs, trees, and lattices. Then, several topics from automata and
complexity theory are discussed, including finite state automata, Turing machines,
computability and complexity theories, and context-free languages. Finally, basic mathematical
logic is surveyed, and some remarks are made concerning the specializing assumptions
typically made in database theory.
2.1 Some Basics
This section discusses notions concerning binary relations, partially ordered sets, graphs
and trees, isomorphisms and automorphisms, permutations, and some elements of lattice
theory.
A binary relation over a (finite or infinite) set S is a subset R of S × S, the cross-product
of S with itself. We sometimes write R(x, y) or xRy to denote that (x, y) ∈ R.
For example, if Z is a set, then inclusion (⊆) is a binary relation over the power set
P(Z) of Z and also over the finitary power set P_fin(Z) of Z (i.e., the set of all finite subsets
of Z). Viewed as sets, the binary relation ≤ on the set N of nonnegative integers properly
contains the relation < on N.
We also have occasion to study n-ary relations over a set S; these are subsets of S^n,
the cross-product of S with itself n times. Indeed, these provide one of the starting points
of the relational model.
A binary relation R over S is reflexive if (x, x) ∈ R for each x ∈ S; it is symmetric if
(x, y) ∈ R implies that (y, x) ∈ R for each x, y ∈ S; and it is transitive if (x, y) ∈ R and
(y, z) ∈ R implies that (x, z) ∈ R for each x, y, z ∈ S. A binary relation that is reflexive,
symmetric, and transitive is called an equivalence relation. In this case, we associate to
each x ∈ S the equivalence class [x]_R = {y ∈ S | (x, y) ∈ R}.
An example of an equivalence relation on N is congruence modulo n for some positive integer n,
where (i, j) ∈ mod_n if the absolute value |i − j| of the difference of i and j is divisible
by n.
A partition of a nonempty set S is a family of sets {S_i | i ∈ I} such that (1) ∪_{i ∈ I} S_i = S,
(2) S_i ∩ S_j = ∅ for i ≠ j, and (3) S_i ≠ ∅ for i ∈ I. If R is an equivalence relation on S, then
the family of equivalence classes over R is a partition of S.
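For a finite S, the equivalence classes and the three conditions of a partition can be verified directly. A small sketch, using congruence modulo 3 as the illustrative equivalence relation:

```python
# Build the equivalence classes of R on S and check they form a partition.
S = set(range(9))
R = {(x, y) for x in S for y in S if (x - y) % 3 == 0}   # congruence mod 3

# the class of x is {y in S | (x, y) in R}
classes = {frozenset(y for y in S if (x, y) in R) for x in S}

# (1) the classes cover S
assert set().union(*classes) == S
# (2) distinct classes are disjoint, and (3) no class is empty
for c in classes:
    assert c
    for d in classes:
        assert c == d or not (c & d)
```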
Let E and E′ be equivalence relations on a nonempty set S. E is a refinement of E′
if E ⊆ E′. In this case, for each x ∈ S we have [x]_E ⊆ [x]_{E′}, and, more precisely, each
equivalence class of E′ is a disjoint union of one or more equivalence classes of E.
A binary relation R over S is irreflexive if (x, x) ∉ R for each x ∈ S.
A binary relation R is antisymmetric if (y, x) ∉ R whenever x ≠ y and (x, y) ∈ R.
A partial order of S is a binary relation R over S that is reflexive, antisymmetric, and
transitive. In this case, we call the ordered pair (S, R) a partially ordered set. A total order
is a partial order R over S such that for each x, y ∈ S, either (x, y) ∈ R or (y, x) ∈ R.
For any set Z, (P(Z), ⊆) is a partially ordered set. If the cardinality |Z|
of Z is greater than 1, then this is not a total order. The relation ≤ on N is a total order.
If (S, R) is a partially ordered set, then a topological sort of S (relative to R) is a binary
relation R′ on S that is a total order such that R′ ⊇ R. Intuitively, R′ is compatible with R
in the sense that xRy implies xR′y.
Let R be a binary relation over S, and P be a set of properties of binary relations. The
P-closure of R is the smallest binary relation R′ such that R′ ⊇ R and R′ satisfies all of the
properties in P (if a unique binary relation having this specification exists). For example, it
is common to form the transitive closure of a binary relation or the reflexive and transitive
closure of a binary relation. In many cases, a closure can be constructed using a recursive
procedure. For example, given binary relation R, the transitive closure R⁺ of R can be
obtained as follows:
1. If (x, y) ∈ R then (x, y) ∈ R⁺;
2. If (x, y) ∈ R⁺ and (y, z) ∈ R⁺ then (x, z) ∈ R⁺; and
3. Nothing is in R⁺ unless it follows from conditions (1) and (2).
For an arbitrary binary relation R, the reflexive, symmetric, and transitive closure of R
exists and is an equivalence relation.
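For finite relations, the recursive construction of the transitive closure translates directly into a naive fixpoint computation; the following sketch applies rules (1) and (2) until nothing new can be derived:

```python
# Naive fixpoint computation of the transitive closure of a finite relation.
def transitive_closure(R):
    closure = set(R)                               # rule (1)
    changed = True
    while changed:
        changed = False
        # rule (2): chain pairs (x, y) and (y, z) into (x, z)
        new = {(x, z)
               for (x, y) in closure
               for (y2, z) in closure if y == y2}
        if not new <= closure:
            closure |= new
            changed = True
    return closure                                 # rule (3): nothing else

R = {(1, 2), (2, 3), (3, 4)}
assert transitive_closure(R) == {(1, 2), (2, 3), (3, 4),
                                 (1, 3), (2, 4), (1, 4)}
```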
There is a close relationship between binary relations and graphs. The definitions and
notation for graphs presented here have been targeted for their application in this book. A
(directed) graph is a pair G = (V, E), where V is a finite set of vertexes and E ⊆ V × V. In
some cases, we define a graph by presenting a set E of edges; in this case, it is understood
that the vertex set is the set of endpoints of elements of E.
A directed path in G is a nonempty sequence p = (v_0, ..., v_n) of vertexes such
that (v_i, v_{i+1}) ∈ E for each i ∈ [0, n − 1]. This path is from v_0 to v_n and has length n.
An undirected path in G is a nonempty sequence p = (v_0, ..., v_n) of vertexes such that
(v_i, v_{i+1}) ∈ E or (v_{i+1}, v_i) ∈ E for each i ∈ [0, n − 1]. A (directed or undirected) path is
proper if v_i ≠ v_j for each i ≠ j. A (directed or undirected) cycle is a (directed or undirected,
respectively) path v_0, ..., v_n such that v_n = v_0 and n > 0. A directed cycle is proper
if v_0, ..., v_{n−1} is a proper path. An undirected cycle is proper if v_0, ..., v_{n−1} is a proper
path and n > 2. If G has a cycle from v, then G has a proper cycle from v. A graph
G = (V, E) is acyclic if it has no cycles or, equivalently, if the transitive closure of E
is irreflexive.
Any binary relation over a finite set can be viewed as a graph. For any finite set Z, the
graph (P(Z), ⊊) is acyclic. An interesting directed graph is (M, L), where M is the set of
metro stations in Paris and (s_1, s_2) ∈ L if there is a train in the system that goes from s_1 to
s_2 without stopping in between. Another directed graph is (M, L′), where (s_1, s_2) ∈ L′ if
there is a train that goes from s_1 to s_2, possibly with intermediate stops.
Let G = (V, E) be a graph. Two vertexes u, v are connected if there is an undirected
path in G from u to v, and they are strongly connected if there are directed paths from u
to v and from v to u. Connectedness and strong connectedness are equivalence relations
on V. A (strongly) connected component of G is an equivalence class of V under (strong)
connectedness. A graph is (strongly) connected if it has exactly one (strongly) connected
component.
The graph (M, L) of Parisian metro stations and nonstop links between them is
strongly connected. The graph ({a, b, c, d, e}, {(a, b), (b, a), (b, c), (c, d), (d, e), (e, c)})
is connected but not strongly connected.
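Both notions can be checked on the second example graph by computing reachability, here via a naive closure in the style of the transitive-closure construction (an illustrative sketch, not an efficient algorithm):

```python
# Reachability on the example graph ({a,...,e}, {(a,b),(b,a),(b,c),(c,d),(d,e),(e,c)}).
def reachable(E, V):
    # reflexive-transitive closure of E, computed to a fixpoint
    R = {(v, v) for v in V} | set(E)
    while True:
        new = {(x, z) for (x, y) in R for (y2, z) in R if y == y2}
        if new <= R:
            return R
        R |= new

V = {"a", "b", "c", "d", "e"}
E = {("a", "b"), ("b", "a"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "c")}

undirected = E | {(y, x) for (x, y) in E}      # forget edge directions
conn = reachable(undirected, V)
strong_pairs = reachable(E, V)
# u, v strongly connected iff each reaches the other
strong = {(u, v) for (u, v) in strong_pairs if (v, u) in strong_pairs}

assert all((u, v) in conn for u in V for v in V)   # connected
assert ("c", "a") not in strong_pairs              # not strongly connected
assert ("c", "e") in strong and ("a", "b") in strong
assert ("b", "c") not in strong                    # b and c in different components
```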
The distance d(a, b) of two nodes a, b in a graph is the length of the shortest path
connecting a to b [d(a, b) = ∞ if a is not connected to b]. The diameter of a graph G is
the maximum finite distance between two nodes in G.
A tree is a graph that has exactly one vertex with no in-edges, called the root, and no
undirected cycles. For each vertex v of a tree there is a unique proper path from the root to
v. A leaf of a tree is a vertex with no out-edges. A tree is connected, but it is not strongly
connected if it has more than one vertex. A forest is a graph that consists of a set of trees.
Given a forest, removal of one edge increases the number of connected components by
exactly one.
An example of a tree is the set of all descendants of a particular person, where (p, p′)
is an edge if p′ is the child of p.
In general, we shall focus on directed graphs, but there will be occasions to use
undirected graphs. An undirected graph is a pair G = (V, E), where V is a finite set of
vertexes and E is a set of two-element subsets of V, again called edges. The notions of
path and connected generalize to undirected graphs in the natural fashion.
An example of an undirected graph is the set of all persons with an edge {p, p′} if p
is married to p′. As defined earlier, a tree T = (V, E) is a directed graph. We sometimes
view T as an undirected graph.
We shall have occasions to label the vertexes or edges of a (directed or undirected)
graph. For example, a labeling of the vertexes of a graph G = (V, E) with label set L is a
function λ : V → L.
Let G = (V, E) and G′ = (V′, E′) be two directed graphs. A function h : V → V′ is a
homomorphism from G to G′ if for each pair u, v ∈ V, (u, v) ∈ E implies (h(u), h(v)) ∈ E′.
The function h is an isomorphism from G to G′ if h is a one-one onto mapping from
V to V′, h is a homomorphism from G to G′, and h⁻¹ is a homomorphism from G′ to G.
An automorphism on G is an isomorphism from G to G. Although we have defined these
terms for directed graphs, they generalize in the natural fashion to other data and algebraic
structures, such as relations, algebraic groups, etc.
Consider the graph G = ({a, b, c, d, e}, {(a, b), (b, a), (b, c), (b, d), (b, e), (c, d),
(d, e), (e, c)}). There are three automorphisms on G: (1) the identity; (2) the function that
maps c to d, d to e, e to c and leaves a, b fixed; and (3) the function that maps c to e, d to
c, e to d and leaves a, b fixed.
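These three automorphisms can be found mechanically. The Python sketch below (our own; the book gives no code) enumerates all permutations of the vertexes and keeps exactly those that map the edge set onto itself.

```python
from itertools import permutations

V = ['a', 'b', 'c', 'd', 'e']
E = {('a','b'), ('b','a'), ('b','c'), ('b','d'), ('b','e'),
     ('c','d'), ('d','e'), ('e','c')}

def automorphisms(V, E):
    """Yield every bijection h on V such that h maps E exactly onto E."""
    for image in permutations(V):
        h = dict(zip(V, image))
        if {(h[u], h[v]) for (u, v) in E} == E:
            yield h

autos = list(automorphisms(V, E))
print(len(autos))  # 3, matching the three automorphisms in the text
```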
Let S be a set. A permutation of S is a one-one onto function ρ : S → S. Suppose that
x₁, . . . , xₙ is an arbitrary, fixed listing of the elements of S (without repeats). Then there is
a natural one-one correspondence between permutations ρ on S and listings x_{i_1}, . . . , x_{i_n}
of elements of S without repeats. A permutation ρ′ is derived from permutation ρ by
an exchange if the listings corresponding to ρ and ρ′ agree everywhere except at some
positions i and i + 1, where the values are exchanged. Given two permutations ρ and ρ′,
ρ′ can be derived from ρ using a finite sequence of exchanges.
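The sequence of exchanges can be produced by the familiar bubbling idea. A minimal Python sketch of ours (function names are illustrative): repeatedly exchange adjacent positions of one listing until it agrees with the other, recording each exchange.

```python
def exchanges_to(source, target):
    """Transform listing `source` into listing `target` by adjacent exchanges.

    Returns the positions i at which positions i and i+1 were exchanged,
    in the order the exchanges were performed.
    """
    lst = list(source)
    swaps = []
    for pos, wanted in enumerate(target):
        j = lst.index(wanted, pos)      # locate the wanted element
        while j > pos:                  # bubble it left, one exchange at a time
            lst[j - 1], lst[j] = lst[j], lst[j - 1]
            swaps.append(j - 1)
            j -= 1
    assert lst == list(target)
    return swaps

print(exchanges_to("abcde", "ecdab"))
```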
2.2 Languages, Computability, and Complexity
This area provides one of the foundations of theoretical computer science. A general
reference for this area is [LP81]. References on automata theory and languages include, for
instance, the chapters [BB91, Per91] of [Lee91] and the books [Gin66, Har78]. References
on complexity include the chapter [Joh91] of [Lee91] and the books [GJ79, Pap94].
Let Σ be a finite set called an alphabet. A word over alphabet Σ is a finite sequence
a₁ . . . aₙ, where aᵢ ∈ Σ, 1 ≤ i ≤ n, n ≥ 0. The length of w = a₁ . . . aₙ, denoted |w|, is n.
The empty word (n = 0) is denoted by ε. The concatenation of two words u = a₁ . . . aₙ and
v = b₁ . . . bₖ is the word a₁ . . . aₙb₁ . . . bₖ, denoted uv. The concatenation of u with itself
n times is denoted uⁿ. The set of all words over Σ is denoted by Σ*. A language over Σ is
a subset of Σ*. For example, if Σ = {a, b}, then {aⁿbⁿ | n ≥ 0} is a language over Σ. The
concatenation of two languages L and K is LK = {uv | u ∈ L, v ∈ K}. L concatenated
with itself n times is denoted Lⁿ, and L* = ∪_{n≥0} Lⁿ.
Finite Automata
In databases, one can model various phenomena using words over some finite alphabet.
For example, sequences of database events form words over some alphabet of events. More
generally, everything is mapped internally to a sequence of bits, which is nothing but a word
over alphabet {0, 1}. The notion of computable query is also formalized using a low-level
representation of a database as a word.
An important type of computation over words involves acceptance. The objective is
to accept precisely the words that belong to some language of interest. The simplest form
of acceptance is done using finite-state automata (fsa). Intuitively, fsa process words by
scanning the word and remembering only a bounded amount of information about what
has already been scanned. This is formalized by computation allowing a finite set of states
and transitions among the states, driven by the input. Formally, an fsa M over alphabet Σ
is a 5-tuple ⟨S, Σ, δ, s₀, F⟩, where

S is a finite set of states;
δ, the transition function, is a mapping from S × Σ to S;
s₀ is a particular state of S, called the start state;
F is a subset of S called the accepting states.
An fsa ⟨S, Σ, δ, s₀, F⟩ works as follows. The given input word w = a₁ . . . aₙ is read one
symbol at a time, from left to right. This can be visualized as a tape on which the input word
is written and an fsa with a head that reads symbols from the tape one at a time. The fsa
starts in state s₀. One move in state s consists of reading the current symbol a in w, moving
to a new state δ(s, a), and moving the head to the next symbol on the right. If the fsa is in
an accepting state after the last symbol in w has been read, w is accepted. Otherwise it is
rejected. The language accepted by an fsa M is denoted L(M).
For example, let M be the fsa ⟨{even, odd}, {0, 1}, δ, even, {even}⟩, with

    δ       0      1
    even    even   odd
    odd     odd    even

The language accepted by M is

L(M) = {w | w has an even number of occurrences of 1}.
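The behavior of M is easy to simulate. In the Python sketch below (our own encoding, not from the text), the transition function δ is a dictionary keyed by (state, symbol) pairs.

```python
# The fsa M = <{even, odd}, {0, 1}, delta, even, {even}> from the text.
delta = {('even', '0'): 'even', ('even', '1'): 'odd',
         ('odd',  '0'): 'odd',  ('odd',  '1'): 'even'}

def accepts(word, start='even', accepting=frozenset({'even'})):
    """Run the fsa on `word`, one symbol at a time, left to right."""
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

print(accepts('0110'))  # True: two occurrences of 1
print(accepts('010'))   # False: one occurrence of 1
```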
A language accepted by some fsa is called a regular language. Not all languages are
regular. For example, the language {aⁿbⁿ | n ≥ 0} is not regular. Intuitively, this is so
because no fsa can remember the number of a's scanned in order to compare it to the
number of b's, if this number is large enough, due to the boundedness of the memory.
This property is formalized by the so-called pumping lemma for regular languages.
As seen, one way to specify regular languages is by writing an fsa accepting them.
An alternative, which is often more convenient, is to specify the shape of the words in the
language using so-called regular expressions. A regular expression over Σ is written using
the symbols in Σ and the operations concatenation, * and +. (The operation + stands
for union.) For example, the foregoing language L(M) can be specified by the regular
expression ((0*10*)²)* + 0*. To see how regular languages can model things of interest
to databases, think of employees who can be affected by the following events:

hire, transfer, quit, fire, retire.

Throughout his or her career, an employee is first hired, can be transferred any number of
times, and eventually quits, retires, or is fired. The language whose words are allowable
sequences of such events can be specified by a regular expression as hire (transfer)* (quit
+ fire + retire). One of the nicest features of regular languages is that they have a dual
characterization using fsa and regular expressions. Indeed, Kleene's theorem says that a
language L is regular iff it can be specified by a regular expression.
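The employee-career language can be checked with any regular-expression library. In Python's re module, union is written `|` rather than the `+` of the text; the space-separated encoding of event sequences below is our own choice.

```python
import re

# hire (transfer)* (quit + fire + retire), events separated by spaces
career = re.compile(r'hire( transfer)* (quit|fire|retire)$')

print(bool(career.match('hire transfer transfer retire')))  # True
print(bool(career.match('hire quit')))                      # True
print(bool(career.match('transfer quit')))                  # False: never hired
```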
There are several important variations of fsa that do not change their accepting power.
The first allows scanning the input back and forth any number of times, yielding two-way
automata. The second is nondeterminism. A nondeterministic fsa allows several possible
next states in a given move. Thus several computations are possible on a given input.
A word is accepted if there is at least one computation that ends in an accepting state.
Nondeterministic fsa (nfsa) accept the same set of languages as fsa. However, the number
of states in the equivalent deterministic fsa may be exponential in the number of states of
the nondeterministic one. Thus nondeterminism can be viewed as a convenience allowing
much more succinct specification of some regular languages.
Turing Machines and Computability
Turing machines (TMs) provide the classical formalization of computation. They are also
used to develop classical complexity theory. Turing machines are like fsa, except that
symbols can also be overwritten rather than just read, the head can move in either direction,
and the amount of tape available is infinite. Thus a move of a TM consists of reading
the current tape symbol, overwriting the symbol with a new one from a specified finite
tape alphabet, moving the head left or right, and changing state. Like an fsa, a TM can
be viewed as an acceptor. The language accepted by a TM M, denoted L(M), consists
of the words w such that, on input w, M halts in an accepting state. Alternatively, one
can view a TM as a generator of words. The TM starts on empty input. To indicate that
some word of interest has been generated, the TM goes into some specified state and then
continues. Typically, this is a nonterminating computation generating an infinite language.
The set of words so generated by some TM M is denoted G(M). Finally, TMs can also
be viewed as computing a function from input to output. A TM M computes a partial
mapping f from Σ* to Σ* if for each w ∈ Σ*: (1) if w is in the domain of f, then M
halts on input w with the tape containing the word f(w); (2) otherwise M does not halt on
input w.
A function f from Σ* to Σ* is computable iff there exists some TM computing it.
Church's thesis states that any function computable by some reasonable computing device
is also computable in the aforementioned sense. So the definition of computability by TMs
is robust. In particular, it is insensitive to many variations in the definition of TM, such
as allowing multiple tapes. A particularly important variation allows for nondeterminism,
similar to nondeterministic fsa. In a nondeterministic TM (NTM), there can be a choice of
moves at each step. Thus an NTM has several possible computations on a given input (of
which some may be terminating and others not). A word w is accepted by an NTM M if
there exists at least one computation of M on w halting in an accepting state.
Another useful variation of the Turing machine is the counter machine. Instead of a
tape, the counter machine has two stacks on which elements can be pushed or popped.
The machine can only test for emptiness of each stack. Counter machines can also define
all computable functions. An essentially equivalent and useful formulation of this fact is
that the language with integer variables i, j, . . . , two instructions increment(i) and decre-
ment(i), and a looping construct while i > 0 do, can define all computable functions on the
integers.
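To convey the flavor of this while-language, here is multiplication transcribed into Python (the transcription is ours): each loop body uses only increment, decrement, and while i > 0. The one shortcut, copying a counter, can itself be programmed with the same constructs.

```python
def multiply(a, b):
    """Compute a * b using only increment, decrement, and `while i > 0 do`."""
    result = 0
    while a > 0:
        a -= 1               # decrement(a)
        t = b                # copy b into a scratch counter (itself programmable
                             # with two while-loops in the restricted language)
        while t > 0:
            t -= 1           # decrement(t)
            result += 1      # increment(result)
    return result

print(multiply(6, 7))  # 42
```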
Of course, we are often interested in functions on domains other than words; integers
are one example. To talk about the computability of such functions on other domains, one
goes through an encoding in which each element d of the domain is represented as a word
enc(d) on some fixed, finite alphabet. Given that encoding, it is said that f is computable if
the function enc(f) mapping enc(d) to enc(f(d)) is computable. This often works without
problems, but occasionally it raises tricky issues that are discussed in a few places of this
book (particularly in Part E).
It can be shown that a language is L(M) for some acceptor TM M iff it is G(M)
for some generator TM M. A language is recursively enumerable (r.e.) iff it is L(M) [or
G(M)] for some TM M. L being r.e. means that there is an algorithm that is guaranteed to
say eventually yes on input w if w ∈ L but may run forever if w ∉ L (if it stops, it says no).
Thus one can never know for sure if a word is not in L.
Informally, saying that L is recursive means that there is an algorithm that always
decides in finite time whether a given word is in L. If L = L(M) and M always halts, L is
recursive. A language whose complement is r.e. is called co-r.e. The following useful facts
can be shown:

1. If L is r.e. and co-r.e., then it is recursive.
2. L is r.e. iff it is the domain of a computable function.
3. L is r.e. iff it is the range of a computable function.
4. L is recursive iff it is the range of a computable nondecreasing function.¹
As is the case for computability, the notion of recursive is used in many contexts that
do not explicitly involve languages. Suppose we are interested in some class of objects
called thing-a-ma-jigs. Among these, we want to distinguish widgets, which are those
thing-a-ma-jigs with some desirable property. It is said that it is decidable if a given thing-
a-ma-jig is a widget if there is an algorithm that, given a thing-a-ma-jig, decides in finite
time whether the given thing-a-ma-jig is a widget. Otherwise the property is undecidable.
Formally, thing-a-ma-jigs are encoded as words over some finite alphabet. The property of
being a widget is decidable iff the language of words encoding widgets is recursive.
We mention a few classical undecidable problems. The halting problem asks if a given
TM M halts on a specified input w. This problem is undecidable (i.e., there is no algorithm
that, given the description of M and the input w, decides in finite time if M halts on w).
More generally it can be shown that, in some precise sense, all nontrivial questions about
TMs are undecidable (this is formalized by Rice's theorem). A more concrete undecidable
problem, which is useful in proofs, is the Post correspondence problem (PCP). The input
to the PCP consists of two lists

u₁, . . . , uₙ;  v₁, . . . , vₙ

of words over some alphabet with at least two symbols. A solution to the PCP is a
sequence of indexes i₁, . . . , iₖ, 1 ≤ i_j ≤ n, such that

u_{i_1} . . . u_{i_k} = v_{i_1} . . . v_{i_k}.

¹ f is nondecreasing if |f(w)| ≥ |w| for each w.
The question of interest is whether there is a solution to the PCP. For example, consider the
input to the PCP problem:

    u₁ = aba    u₂ = bbb    u₃ = aab    u₄ = bb
    v₁ = a      v₂ = aaa    v₃ = abab   v₄ = babba

For this input, the PCP has the solution 1, 4, 3, 1, because

u₁u₄u₃u₁ = ababbaababa = v₁v₄v₃v₁.
Now consider the input consisting of just u₁, u₂, u₃ and v₁, v₂, v₃. An easy case analysis
shows that there is no solution to the PCP for this input. In general, it has been shown that
it is undecidable whether, for a given input, there exists a solution to the PCP.
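Although the existence of a solution is undecidable in general, a specific candidate solution is trivial to check, and one can always search exhaustively up to a fixed length bound. The Python sketch below (ours) does both for the input above; the length bound is essential, since an unbounded search may run forever, which is exactly the semi-decidable character of the problem.

```python
from itertools import product

u = {1: 'aba', 2: 'bbb', 3: 'aab', 4: 'bb'}
v = {1: 'a',   2: 'aaa', 3: 'abab', 4: 'babba'}

def is_solution(indexes):
    """Check whether u_{i1}...u_{ik} == v_{i1}...v_{ik}."""
    return ''.join(u[i] for i in indexes) == ''.join(v[i] for i in indexes)

print(is_solution([1, 4, 3, 1]))  # True: the solution from the text

def search(max_len):
    """Exhaustive search for a solution of length at most max_len."""
    for k in range(1, max_len + 1):
        for seq in product(u, repeat=k):
            if is_solution(seq):
                return list(seq)
    return None

print(search(4))  # [1, 4, 3, 1]
```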
The PCP is particularly useful for proving the undecidability of other problems. The
proof technique consists of reducing the PCP to the problem of interest. For example,
suppose we are interested in the question of whether a given thing-a-ma-jig is a widget.
The reduction of the PCP to the widget problem consists of finding a computable mapping
f that, given an input i to the PCP, produces a thing-a-ma-jig f(i) such that f(i) is a
widget iff the PCP has a solution for i. If one can find such a reduction, this shows that it
is undecidable if a given thing-a-ma-jig is a widget. Indeed, if this were decidable then one
could find an algorithm for the PCP: Given an input i to the PCP, first construct the thing-
a-ma-jig f(i), and then apply the algorithm deciding if f(i) is a widget. Because we know
that the PCP is undecidable, the property of being a widget cannot be decidable. Of course,
any other known undecidable problem can be used in place of the PCP.
A few other important undecidable problems are mentioned in the review of context-
free grammars.
Complexity
Suppose a particular problem is solvable. Of course, this does not mean the problem has a
practical solution, because it may be prohibitively expensive to solve it. Complexity theory
studies the difficulty of problems. Difficulty is measured relative to some resources of
interest, usually time and space. Again the usual model of reference is the TM. Suppose L is
a recursive language, accepted by a TM M that always halts. Let f be a function on positive
integers. M is said to use time bounded by f if on every input w, M halts in at most f(|w|)
steps. M uses space bounded by f if the amount of tape used by M on every input w is at
most f(|w|). The set of recursive languages accepted by TMs using time (space) bounded
by f is denoted TIME(f) (SPACE(f)). Let F be a set of functions on positive integers.
Then TIME(F) = ∪_{f∈F} TIME(f), and SPACE(F) = ∪_{f∈F} SPACE(f). A particularly
important class of bounding functions is the polynomials Poly. For this class, the following
notation has emerged: TIME(Poly) is denoted ptime, and SPACE(Poly) is denoted pspace.
Membership in the class ptime is often regarded as synonymous to tractability (although,
of course, this is not reasonable in all situations, and a case-by-case judgment should be
made). Besides the polynomials, it is of interest to consider lower bounds, like logarithmic
space. However, because the input itself takes more than logarithmic space to write down, a
separation of the input tape from the tape used throughout the computation must be made.
Thus the input is given on a read-only tape, and a separate worktape is added. Now let
logspace consist of the recursive languages L that are accepted by some such TM using
on input w an amount of worktape bounded by c log(|w|) for some constant c.
Another class of time-bounding functions we shall use is the so-called elementary
functions. They consist of the set of functions

Hyp = {hyp_i | i ≥ 0}, where

hyp_0(n) = n
hyp_{i+1}(n) = 2^{hyp_i(n)}.

The elementary languages are those in TIME(Hyp).
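The hyperexponential functions grow extremely fast, as a direct transcription of the recursion shows (the code is our own illustration):

```python
def hyp(i, n):
    """hyp_0(n) = n;  hyp_{i+1}(n) = 2 ** hyp_i(n)."""
    return n if i == 0 else 2 ** hyp(i - 1, n)

print(hyp(0, 3))  # 3
print(hyp(1, 3))  # 2**3 = 8
print(hyp(2, 3))  # 2**8 = 256
print(len(str(hyp(3, 3))))  # hyp_3(3) = 2**256, a 78-digit number
```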
Nondeterministic TMs can be used to define complexity classes as well. An NTM
uses time bounded by f if all computations on input w halt after at most f(|w|) steps. It
uses space bounded by f if all computations on input w use at most f(|w|) space (note
that termination is not required). The set of recursive languages accepted by some NTM
using time bounded by a polynomial is denoted np, and space bounded by a polynomial
is denoted by npspace. Are nondeterministic classes different from their deterministic
counterparts? For polynomial space, Savitch's theorem settles the question by showing
that pspace = npspace (the theorem actually applies to a much more general class of space
bounds). For time, things are more complicated. Indeed, the question of whether ptime
equals np is the most famous open problem in complexity theory. It is generally conjectured
that the two classes are distinct.
The following inclusions hold among the complexity classes described:

logspace ⊆ ptime ⊆ np ⊆ pspace ⊆ TIME(Hyp) = SPACE(Hyp).

All nonstrict inclusions are conjectured to be strict.
Complexity classes of languages can be extended, in the same spirit, to complexity
classes of computable functions. Here we look at the resources needed to compute the
function rather than just accepting or rejecting the input word.
Consider some complexity class, say C = TIME(F). Such a class contains all problems
that can be solved in time bounded by some function in F. This is an upper bound, so
C clearly contains some easy and some hard problems. How can the hard problems be
distinguished from the easy ones? This is captured by the notion of completeness of a
problem in a complexity class. The idea is as follows: A language K in C is complete
in C if solving it allows solving all other problems in C, also within C. This is formalized
by the notion of reduction. Let L and K be languages in C. L is reducible to K if there
is a computable mapping f such that for each w, w ∈ L iff f(w) ∈ K. The definition of
reducibility so far guarantees that solving K allows solving L. How about the complexity?
Clearly, if the reduction f is hard then we do not have an acceptance algorithm in C.
Therefore the complexity of f must be bounded. It might be tempting to use C as the
bound. However, this allows all the work of solving L within the reduction, which really
makes K irrelevant. Therefore the definition of completeness in a class C requires that the
complexity of the reduction function be lower than that for C. More formally, a recursive
language K is complete in C by C′ reductions if for each L ∈ C there is a function f in C′
reducing L to K. The class C′ is often understood for some of the main classes C. The
conventions we will use are summarized in the following table:

    Type of Completeness      Type of Reduction
    ptime completeness        logspace reductions
    np completeness           ptime reductions
    pspace completeness       ptime reductions

Note that to prove that a problem L is complete in C by C′ reductions, it is sufficient
to exhibit another problem K that is known to be complete in C by C′ reductions, and a C′
reduction from K to L. Because the C′-reducibility relation is transitive for all customarily
used C′, it then follows that L is itself C complete by C′ reductions. We mention next a
few problems that are complete in various classes.
One of the best-known np-complete problems is the so-called 3-satisfiability (3-SAT)
problem. The input is a propositional formula in conjunctive normal form, in which each
conjunct has at most three literals. For example, such an input might be

(¬x₁ ∨ x₄ ∨ x₂) ∧ (x₁ ∨ ¬x₂ ∨ x₄) ∧ (x₄ ∨ x₃ ∨ ¬x₁).

The question is whether the formula is satisfiable. For example, the preceding formula
is satisfied with the truth assignment ξ(x₁) = ξ(x₂) = false, ξ(x₃) = ξ(x₄) = true. (See
Section 2.3 for the definitions of propositional formula and related notions.)
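Checking a proposed truth assignment is the easy direction (it is what places 3-SAT in np); deciding satisfiability by brute force takes 2ⁿ checks, and no polynomial algorithm is known. In the Python sketch below (ours), a clause is a tuple of integer literals with -i standing for ¬xᵢ; the sign pattern encodes our reconstruction of the example above and is illustrative.

```python
from itertools import product

# Clause signs follow our reading of the example formula in the text.
formula = [(-1, 4, 2), (1, -2, 4), (4, 3, -1)]

def satisfies(assignment, formula):
    """assignment maps each variable index to True/False."""
    return all(any(assignment[abs(l)] == (l > 0) for l in clause)
               for clause in formula)

xi = {1: False, 2: False, 3: True, 4: True}
print(satisfies(xi, formula))  # True

def satisfiable(formula, n_vars):
    """Brute-force satisfiability: try all 2**n_vars assignments."""
    return any(satisfies(dict(enumerate(bits, 1)), formula)
               for bits in product([False, True], repeat=n_vars))

print(satisfiable(formula, 4))  # True
```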
A useful pspace-complete problem is the following. The input is a quantified propo-
sitional formula (all variables are quantified). The question is whether the formula is true.
For example, an input to the problem is

∀x₁∃x₂∀x₃∃x₄ [(¬x₁ ∨ x₄ ∨ x₂) ∧ (x₁ ∨ ¬x₂ ∨ x₄) ∧ (x₄ ∨ x₃ ∨ ¬x₁)].

A number of well-known games, such as GO, have been shown to be pspace complete.
For ptime completeness, one can use a natural problem related to context-free gram-
mars (defined next). The input is a context-free grammar G and the question is whether
L(G) is empty.
Context-Free Grammars
We have discussed specification of languages using two kinds of acceptors: fsa and TM.
Context-free grammars (CFGs) provide a different approach to specifying a language that
emphasizes the generation of the words in the language rather than acceptance. (Nonethe-
less, this can be turned into an accepting mechanism by parsing.) A CFG is a 4-tuple
⟨N, Σ, S, P⟩, where

N is a finite set of nonterminal symbols;
Σ is a finite alphabet of terminal symbols, disjoint from N;
S is a distinguished symbol of N, called the start symbol;
P is a finite set of productions of the form α → w, where α ∈ N and w ∈ (N ∪ Σ)*.
A CFG G = ⟨N, Σ, S, P⟩ defines a language L(G) consisting of all words in Σ* that
can be derived from S by repeated applications of the productions. An application of the
production α → w to a word v containing α consists of replacing one occurrence of α by
w. If u is obtained by applying a production to some word v, this is denoted by v ⇒ u, and
the transitive closure of ⇒ is denoted ⇒*. Thus L(G) = {w | w ∈ Σ*, S ⇒* w}. A language
is called context free if it is L(G) for some CFG G. For example, consider the grammar
⟨{S}, {a, b}, S, P⟩, where P consists of the two productions

S → ε,
S → aSb.

Then L(G) is the language {aⁿbⁿ | n ≥ 0}. For example, the following is a derivation of
a²b²:

S ⇒ aSb ⇒ a²Sb² ⇒ a²b².
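The derivation displayed above can be replayed mechanically. The Python sketch below (our own encoding) applies the production S → aSb n times and then S → ε, and also tests membership in {aⁿbⁿ | n ≥ 0} directly.

```python
def derive(n):
    """Derive a^n b^n from S: apply S -> aSb n times, then S -> epsilon."""
    word = 'S'
    steps = [word]
    for _ in range(n):
        word = word.replace('S', 'aSb', 1)   # one application of S -> aSb
        steps.append(word)
    word = word.replace('S', '', 1)          # one application of S -> epsilon
    steps.append(word)
    return steps

print(derive(2))  # ['S', 'aSb', 'aaSbb', 'aabb']

def in_language(w):
    """Membership in {a^n b^n | n >= 0}."""
    n = len(w) // 2
    return w == 'a' * n + 'b' * n

print(in_language('aabb'), in_language('aab'))  # True False
```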
The specification power of CFGs lies between that of fsas and that of TMs. First,
all regular languages are context free and all context-free languages are recursive. The
language {aⁿbⁿ | n ≥ 0} is context free but not regular. An example of a recursive language
that is not context free is {aⁿbⁿcⁿ | n ≥ 0}. The proof uses an extension to context-free
languages of the pumping lemma for regular languages. We also use a similar technique in
some of the proofs.
The most common use of CFGs in the area of databases is to view certain objects as
CFGs and use known (un)decidability properties about CFGs. Some questions about CFGs
known to be decidable are (1) emptiness [is L(G) empty?] and (2) finiteness [is L(G)
finite?]. Some undecidable questions are (3) containment [is it true that L(G₁) ⊆ L(G₂)?]
and (4) equality [is it true that L(G₁) = L(G₂)?].
2.3 Basics from Logic
The field of mathematical logic is a main foundation for database theory. It serves as the
basis for languages for queries, deductive databases, and constraints. We briefly review the
basic notions and notations of mathematical logic and then mention some key differences
between this logic in general and the specializations usually considered in database theory.
The reader is referred to [EFT84, End72] for comprehensive introductions to mathematical
logic, and to the chapter [Apt91] in [Lee91] and [Llo87] for treatments of Herbrand models
and logic programming.
Although some previous knowledge of logic would help the reader understand the
content of this book, the material is generally self-contained.
Propositional Logic
We begin with the propositional calculus. For this we assume an infinite set of proposi-
tional variables, typically denoted p, q, r, . . . , possibly with subscripts. We also permit
the special propositional constants true and false. (Well-formed) propositional formulas
are constructed from the propositional variables and constants, using the unary connective
negation (¬) and the binary connectives disjunction (∨), conjunction (∧), implication (→),
and equivalence (↔). For example, p, (p ∨ (¬q)) and ((p ∧ q) → p) are well-formed
propositional formulas. We generally omit parentheses if not needed for understanding a
formula.
A truth assignment for a set V of propositional variables is a function ξ : V →
{true, false}. The truth value φ[ξ] of a propositional formula φ under truth assignment ξ
for the variables occurring in φ is defined by induction on the structure of φ in the natural
manner. For example,

true[ξ] = true;
if φ = p for some variable p, then φ[ξ] = ξ(p);
if φ = (¬ψ) then φ[ξ] = true iff ψ[ξ] = false;
(ψ₁ ∨ ψ₂)[ξ] = true iff at least one of ψ₁[ξ] = true or ψ₂[ξ] = true.

If φ[ξ] = true we say that φ[ξ] is true and that φ is true under ξ (and similarly for false).
A formula φ is satisfiable if there is at least one truth assignment that makes it true,
and it is unsatisfiable otherwise. It is valid if each truth assignment for the variables in φ
makes it true. The formula (p ∨ q) is satisfiable but not valid; the formula (p ∧ (¬p)) is
unsatisfiable; and the formula (p ∨ (¬p)) is valid.
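The inductive definition of φ[ξ] translates directly into a recursive evaluator. In the Python sketch below (an illustrative encoding of ours), a formula is either a Boolean constant, a variable name, or a nested tuple whose head names the connective.

```python
def value(phi, xi):
    """Truth value of formula phi under truth assignment xi (a dict)."""
    if phi is True or phi is False:
        return phi                      # the constants true / false
    if isinstance(phi, str):
        return xi[phi]                  # a propositional variable
    op, *args = phi
    if op == 'not':
        return not value(args[0], xi)
    if op == 'or':
        return any(value(a, xi) for a in args)
    if op == 'and':
        return all(value(a, xi) for a in args)
    if op == 'implies':
        return (not value(args[0], xi)) or value(args[1], xi)
    raise ValueError(f'unknown connective: {op}')

# (p or (not q)) under p = false, q = false:
print(value(('or', 'p', ('not', 'q')), {'p': False, 'q': False}))  # True
```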
A formula φ logically implies formula ψ (or ψ is a logical consequence of φ), denoted
φ |= ψ, if for each truth assignment ξ, if φ[ξ] is true, then ψ[ξ] is true. Formulas φ and ψ
are (logically) equivalent, denoted φ ≡ ψ, if φ |= ψ and ψ |= φ.
For example, (p ∧ (p → q)) |= q. Many equivalences for propositional formulas are
well known. For example,

(ψ₁ → ψ₂) ≡ ((¬ψ₁) ∨ ψ₂);    ¬(ψ₁ ∧ ψ₂) ≡ (¬ψ₁ ∨ ¬ψ₂);
(ψ₁ ∨ ψ₂) ∧ ψ₃ ≡ (ψ₁ ∧ ψ₃) ∨ (ψ₂ ∧ ψ₃);
ψ₁ ∨ ψ₂ ≡ ψ₁ ∨ ((¬ψ₁) ∧ ψ₂);
(ψ₁ ∧ (ψ₂ ∧ ψ₃)) ≡ ((ψ₁ ∧ ψ₂) ∧ ψ₃).

Observe that the last equivalence permits us to view ∧ as a polyadic connective. (The same
holds for ∨.)
Observe that the last equivalence permits us to view as a polyadic connective. (The same
holds for .)
A literal is a formula of the form p or ¬p (or true or false) for some propositional
variable p. A propositional formula is in conjunctive normal form (CNF) if it has the form
ψ₁ ∧ · · · ∧ ψₙ, where each formula ψᵢ is a disjunction of literals. Disjunctive normal form
(DNF) is defined analogously. It is known that if φ is a propositional formula, then there
is some formula ψ equivalent to φ that is in CNF (respectively DNF). Note that if φ is in
CNF (or DNF), then a shortest equivalent formula in DNF (respectively CNF) may have
a length exponential in the length of φ.
First-Order Logic
We now turn to first-order predicate calculus. We indicate the main intuitions and concepts
underlying first-order logic and describe the primary specializations typically made for
database theory. Precise definitions of needed portions of first-order logic are included in
Chapters 4 and 5.
First-order logic generalizes propositional logic in several ways. Intuitively, proposi-
tional variables are replaced by predicate symbols that range over n-ary relations over an
underlying set. Variables are used in first-order logic to range over elements of an abstract
set, called the universe of discourse. This is realized using the quantifiers ∃ and ∀. In ad-
dition, function symbols are incorporated into the model. The most important definitions
used to formalize first-order logic are first-order language, interpretation, logical implica-
tion, and provability.
Each first-order language L includes a set of variables, the propositional connectives,
the quantifiers ∃ and ∀, and punctuation symbols ')', '(', and ','. The variation in first-
order languages stems from the symbols they include to represent constants, predicates,
and functions. More formally, a first-order language includes

(a) a (possibly empty) set of constant symbols;
(b) for each n ≥ 0 a (possibly empty) set of n-ary predicate symbols;
(c) for each n ≥ 1 a (possibly empty) set of n-ary function symbols.

In some cases, we also include

(d) the equality symbol ≈, which serves as a binary predicate symbol,

and the propositional constants true and false. It is common to focus on languages that are
finite, except for the set of variables.
A familiar first-order language is the language L_N of the nonnegative integers, with

(a) constant symbol 0;
(b) binary predicate symbol ≤;
(c) binary function symbols +, ×, and unary S (successor);

and the equality symbol.
Let L be a first-order language. Terms of L are built in the natural fashion from con-
stants, variables, and the function symbols. An atom is either true, false, or an expres-
sion of the form R(t₁, . . . , tₙ), where R is an n-ary predicate symbol and t₁, . . . , tₙ are
terms. Atoms correspond to the propositional variables of propositional logic. If the equal-
ity symbol is included, then atoms include expressions of the form t₁ ≈ t₂. The family
of (well-formed predicate calculus) formulas over L is defined recursively starting with
atoms, using the Boolean connectives, and using the quantifiers as follows: If φ is a for-
mula and x a variable, then (∃x φ) and (∀x φ) are formulas. As with the propositional case,
parentheses are omitted when understood from the context. In addition, ∧ and ∨ are viewed
as polyadic connectives. A term or formula is ground if it involves no variables.
Some examples of formulas in L_N are as follows:
∀x(0 ≤ x),   ∀x(x ≤ S(x)),
∃x(∀y(y ≤ x)),   ∀y∀z(x ≈ y × z → (y ≈ S(0) ∨ z ≈ S(0))).

(For some binary predicates and functions, we use infix notation.)
The notion of the scope of quantifiers and of free and bound occurrences of variables
in formulas is now defined using recursion on the structure. Each variable occurrence in an
atom is free. If φ is (ψ₁ ∨ ψ₂), then an occurrence of variable x in φ is free if it is free as
an occurrence of ψ₁ or ψ₂; and this is extended to the other propositional connectives. If φ
is ∃y ψ, then an occurrence of variable x ≠ y is free in φ if the corresponding occurrence is
free in ψ. Each occurrence of y is bound in φ. In addition, each occurrence of y in ψ that is
free in ψ is said to be in the scope of ∃y at the beginning of φ. A sentence is a well-formed
formula that has no free variable occurrences.
Until now we have not given a meaning to the symbols of a first-order language and
thereby to first-order formulas. This is accomplished with the notion of interpretation,
which corresponds to the truth assignments of the propositional case. Each interpretation
is just one of the many possible ways to give meaning to a language.
An interpretation of a first-order language L is a 4-tuple I = (U, C, P, F), where U
is a nonempty set of abstract elements called the universe (of discourse), and C, P, and F
give meanings to the sets of constant symbols, predicate symbols, and function symbols.
For example, C is a function from the constant symbols into U, and P maps each n-ary
predicate symbol p into an n-ary relation over U (i.e., a subset of Uⁿ). It is possible for
two distinct constant symbols to map to the same element of U.
When the equality symbol denoted ≈ is included, the meaning associated with it
is restricted so that it enjoys properties usually associated with equality. Two equivalent
mechanisms for accomplishing this are described next.
mechanisms for accomplishing this are described next.
Let I be an interpretation for language L. As a notational shorthand, if c is a constant
symbol in L, we use c^I to denote the element of the universe associated with c by I. This
is extended in the natural way to ground terms and atoms.
The usual interpretation for the language L_N is I_N, where the universe is N; 0 is
mapped to the number 0; ≤ is mapped to the usual less than or equal relation; S is mapped
to successor; and + and × are mapped to addition and multiplication. In such cases, we
have, for example, [S(S(0) + 0)]^{I_N} = 2.
As a second example related to logic programming, we mention the family of Her-
brand interpretations of L_N. Each of these shares the same universe and the same mappings
for the constant and function symbols. An assignment of a universe, and for the constant
and function symbols, is called a preinterpretation. In the Herbrand preinterpretation for
L_N, the universe, denoted U_{L_N}, is the set containing 0 and all terms that can be constructed
from this using the function symbols of the language. This is a little confusing because the
terms now play a dual role: as terms constructed from components of the language L_N, and
as elements of the universe U_{L_N}. The mapping C maps the constant symbol 0 to 0 (consid-
ered as an element of U_{L_N}). Given a term t in U_{L_N}, the function F(S) maps t to the term S(t).
Given terms t₁ and t₂, the function F(+) maps the pair (t₁, t₂) to the term +(t₁, t₂), and the
function F(×) is defined analogously.
The set of ground atoms of L_N (i.e., the set of atoms that do not contain variables) is sometimes called the Herbrand base of L_N. There is a natural one-one correspondence between interpretations of L_N that extend the Herbrand preinterpretation and subsets of the Herbrand base of L_N. One Herbrand interpretation of particular interest is the one that mimics the usual interpretation. In particular, this interpretation maps ≤ to the set {(t_1, t_2) | (t_1^{I_N}, t_2^{I_N}) ∈ ≤^{I_N}}.
We now turn to the notion of satisfaction of a formula by an interpretation. The definition is recursive on the structure of formulas; as a result we need the notion of variable assignment to accommodate variables occurring free in formulas. Let L be a language and I an interpretation of L with universe U. A variable assignment for formula φ is a partial function μ : variables of L → U whose domain includes all variables free in φ. For terms t, t^{I,μ} denotes the meaning given to t by I, using μ to interpret the free variables. In addition, if μ is a variable assignment, x is a variable, and u ∈ U, then μ[x/u] denotes the variable assignment that is identical to μ, except that it maps x to u. We write I |= φ[μ] to indicate that I satisfies φ under μ. This is defined recursively on the structure of formulas in the natural fashion. To indicate the flavor of the definition, we note that I |= p(t_1, ..., t_n)[μ] if (t_1^{I,μ}, ..., t_n^{I,μ}) ∈ p^I; I |= ∃xφ[μ] if there is some u ∈ U such that I |= φ[μ[x/u]]; and I |= ∀xφ[μ] if for each u ∈ U, I |= φ[μ[x/u]]. The Boolean connectives are interpreted in the usual manner. If φ is a sentence, then no variable assignment needs to be specified.
For example, I_N |= ∀x∀y((x ≤ y) ∨ y ≤ x); I_N |= ¬(S(0) ≈ 0); and
I_N |= ∀y∀z(x ≈ y × z → (y ≈ S(0) ∨ z ≈ S(0)))[μ]
iff μ(x) is 1 or a prime number.
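The recursive satisfaction test is directly executable when the interpretation is finite. The following is a minimal sketch in our own encoding (not the book's notation): formulas are nested tuples, and the dictionary preds plays the role of P.

```python
# A minimal sketch (our own encoding) of the recursive satisfaction test
# I |= phi[mu] for *finite* interpretations. Formulas are nested tuples;
# 'preds' maps each predicate symbol to its relation over the universe U.

def satisfies(U, preds, phi, mu):
    op = phi[0]
    if op == 'atom':        # I |= p(t1,...,tn)[mu] iff (t1^{I,mu},...,tn^{I,mu}) in p^I
        _, p, terms = phi
        return tuple(mu.get(t, t) for t in terms) in preds[p]
    if op == 'not':
        return not satisfies(U, preds, phi[1], mu)
    if op == 'and':
        return satisfies(U, preds, phi[1], mu) and satisfies(U, preds, phi[2], mu)
    if op == 'or':
        return satisfies(U, preds, phi[1], mu) or satisfies(U, preds, phi[2], mu)
    if op == 'exists':      # some u in U makes phi true under mu[x/u]
        _, x, f = phi
        return any(satisfies(U, preds, f, {**mu, x: u}) for u in U)
    if op == 'forall':      # every u in U makes phi true under mu[x/u]
        _, x, f = phi
        return all(satisfies(U, preds, f, {**mu, x: u}) for u in U)
    raise ValueError(op)

# Totality of <= checked over the finite universe {0, 1, 2}:
U = {0, 1, 2}
leq = {(a, b) for a in U for b in U if a <= b}
total = ('forall', 'x', ('forall', 'y',
         ('or', ('atom', 'leq', ('x', 'y')), ('atom', 'leq', ('y', 'x')))))
print(satisfies(U, {'leq': leq}, total, {}))   # True
```

The quantifier cases simply try every element of the universe, which is exactly why this evaluation strategy is available only for finite interpretations.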
An interpretation I is a model of a set Σ of sentences if I satisfies each formula in Σ. The set Σ is satisfiable if it has a model.
Logical implication and equivalence are now defined analogously to the propositional case. Sentence σ logically implies sentence ψ, denoted σ |= ψ, if each interpretation that satisfies σ also satisfies ψ. There are many straightforward equivalences [e.g., (¬¬φ) ≡ φ and ∃xφ ≡ ¬∀x¬φ]. Logical implication is generalized to sets of sentences in the natural manner.
It is known that logical implication, considered as a decision problem, is not recursive. One of the fundamental results of mathematical logic is the development of effective procedures for determining logical equivalence. These are based on the notion of proofs, and they provide one way to show that logical implication is r.e. One style of proof, attributed to Hilbert, identifies a family of inference rules and a family of axioms. An example of an inference rule is modus ponens, which states that from formulas φ and φ → ψ we may conclude ψ. Examples of axioms are all tautologies of propositional logic [e.g., (φ ∧ ψ) → (ψ ∧ φ) for all formulas φ and ψ], and substitution (i.e., ∀xφ → φ_t^x, where t is an arbitrary term and φ_t^x denotes the formula obtained by simultaneously replacing all occurrences of x free in φ by t). Given a family of inference rules and axioms, a proof that set Σ of sentences implies sentence σ is a finite sequence σ_0, σ_1, ..., σ_n = σ, where for each i, either σ_i is an axiom, or a member of Σ, or it follows from one or more of the previous σ_j's using an inference rule. In this case we write Σ ⊢ σ.
The soundness and completeness theorem of Gödel shows that (using modus ponens and a specific set of axioms) Σ |= σ iff Σ ⊢ σ. This important link between |= and ⊢ permits the transfer of results between model theory, which focuses primarily on interpretations and models, and proof theory, which focuses primarily on proofs. Notably, a central issue in the study of relational database dependencies (see Part C) has been the search for sound and complete proof systems for subsets of first-order logic that correspond to natural families of constraints.
The model-theoretic and proof-theoretic perspectives lead to two equivalent ways of incorporating equality into first-order languages. Under the model-theoretic approach, the equality predicate ≈ is given the meaning {(u, u) | u ∈ U} (i.e., normal equality). Under the proof-theoretic approach, a set of equality axioms EQ_L is constructed that expresses the intended meaning of ≈. For example, EQ_L includes the sentences ∀x, y, z(x ≈ y ∧ y ≈ z → x ≈ z) and ∀x, y(x ≈ y → (R(x) ↔ R(y))) for each unary predicate symbol R.
Another important result from mathematical logic is the compactness theorem, which can be demonstrated using Gödel's soundness and completeness result. There are two common ways of stating this. The first is that given a (possibly infinite) set Σ of sentences, if Σ |= σ then there is a finite Σ′ ⊆ Σ such that Σ′ |= σ. The second is that if each finite subset of Σ is satisfiable, then Σ is satisfiable.
Note that although the compactness theorem guarantees that the Σ in the preceding paragraph has a model, that model is not necessarily finite. Indeed, Σ may only have infinite models. It is of some solace that, among those infinite models, there is surely at least one that is countable (i.e., whose elements can be enumerated: a_1, a_2, ...). This technically useful result is the Löwenheim-Skolem theorem.
To illustrate the compactness theorem, we show that there is no set Σ of sentences defining the notion of connectedness in directed graphs. For this we use the language L with two constant symbols, a and b, and one binary relation symbol R, which corresponds to the edges of a directed graph. In addition, because we are working with general first-order logic, both finite and infinite graphs may arise. Suppose now that Σ is a set of sentences that states that a and b are connected (i.e., that there is a directed path from a to b in R). Let Γ = {γ_i | i > 0}, where γ_i states that a and b are at least i edges apart from each other. For example, γ_3 might be expressed as
¬R(a, b) ∧ ¬∃x_1(R(a, x_1) ∧ R(x_1, b)).
It is clear that each finite subset of Σ ∪ Γ is satisfiable. By the compactness theorem (second statement), this implies that Σ ∪ Γ is satisfiable, so it has a model (say, I). In I, there is no directed path between (the elements of the universe identified by) a and b, and so I ⊭ Σ. This is a contradiction.
Specializations to Database Theory
We close by mentioning the primary differences between the general field of mathematical logic and the specializations made in the study of database theory. The most obvious specialization is that database theory has not generally focused on the use of functions on data values, and as a result it generally omits function symbols from the first-order languages used. The two other fundamental specializations are the focus on finite models and the special use of constant symbols.
An interpretation is finite if its universe of discourse is finite. Because most databases are finite, most of database theory is focused exclusively on finite interpretations. This is closely related to the field of finite model theory in mathematics.
The notion of logical implication for finite interpretations, usually denoted |=_fin, is not equivalent to the usual logical implication |=. This is most easily seen by considering the compactness theorem. Let Σ = {σ_i | i > 0}, where σ_i states that there are at least i distinct elements in the universe of discourse. Then by compactness, Σ ⊭ false, but by the definition of finite interpretation, Σ |=_fin false.
Another way to show that |= and |=_fin are distinct uses computability theory. It is known that |= is r.e. but not recursive, and it is easily seen that |=_fin is co-r.e. Thus if they were equal, |= would be recursive, a contradiction.
The final specialization of database theory concerns assumptions made about the universe of discourse and the use of constant symbols. Indeed, throughout most of this book we use a fixed, countably infinite set of constants, denoted dom (for domain elements). Furthermore, the focus is almost exclusively on finite Herbrand interpretations over dom. In particular, for distinct constants c and c′, all interpretations that are considered satisfy ¬(c ≈ c′).
Most proofs in database theory involving the first-order predicate calculus are based on model theory, primarily because of the emphasis on finite models and because the link between |=_fin and ⊢ does not hold. It is thus informative to identify a mechanism for using traditional proof-theoretic techniques within the context of database theory. For this discussion, consider a first-order language with set dom of constant symbols and predicate symbols R_1, ..., R_n. As will be seen in Chapter 3, a database instance is a finite Herbrand interpretation I of this language. Following [Rei84], a family Σ_I of sentences is associated with I. This family includes the axioms of equality (mentioned earlier) and
Atoms: R_i(ā) for each ā in R_i^I.
Extension axioms: ∀x̄(R_i(x̄) → (x̄ ≈ ā_1 ∨ ... ∨ x̄ ≈ ā_m)), where ā_1, ..., ā_m is a listing of all elements of R_i^I, and we are abusing notation by letting ≈ range over vectors of terms.
Unique Name axioms: ¬(c ≈ c′) for each distinct pair c, c′ of constants occurring in I.
Domain Closure axiom: ∀x(x ≈ c_1 ∨ ... ∨ x ≈ c_n), where c_1, ..., c_n is a listing of all constants occurring in I.
A set of sentences obtained in this manner is termed an extended relational theory.
The first two sets of sentences of an extended relational theory express the specific contents of the relations (predicate symbols) of I. Importantly, the Extension sentences ensure that for any (not necessarily Herbrand) interpretation J satisfying Σ_I, an n-tuple is in R_i^J iff it equals one of the n-tuples in R_i^I. The Unique Name axioms ensure that no pair of distinct constants is mapped to the same element in the universe of J, and the Domain Closure axiom ensures that each element of the universe of J equals some constant occurring in I. For all intents and purposes, then, any interpretation J that models Σ_I is isomorphic to I, modulo condensing under equivalence classes induced by ≈^J. Importantly, the following link with conventional logical implication now holds: For any set Γ of sentences, I |= Γ iff Σ_I ∪ Γ is satisfiable. The perspective obtained through this connection with classical logic is useful when attempting to extend the conventional relational model (e.g., to incorporate so-called incomplete information, as discussed in Chapter 19).
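Assembling Σ_I from a finite instance is purely mechanical. The sketch below is our own string encoding (with A for ∀, v for ∨, ~ for ¬, and = for ≈); it generates the Atoms, Extension, Unique Name, and Domain Closure sentences, omitting the equality axioms.

```python
# Sketch (our own encoding): build the Atoms, Extension, Unique Name, and
# Domain Closure sentences of an extended relational theory for a finite
# instance, rendered as plain strings. The equality axioms are omitted.

def extended_relational_theory(instance):
    """instance maps each relation name to a set of tuples of constants."""
    sentences = []
    constants = sorted({c for tuples in instance.values() for t in tuples for c in t})
    for R, tuples in instance.items():
        rows = sorted(tuples)
        n = len(next(iter(rows))) if rows else 0
        xs = ', '.join(f'x{i}' for i in range(1, n + 1))
        # Atoms: R(a) for each tuple a in R^I
        sentences += [f'{R}({", ".join(t)})' for t in rows]
        # Extension axiom: Ax(R(x) -> x = a1 v ... v x = am)
        if rows:
            eqs = ' v '.join(f'({xs}) = ({", ".join(t)})' for t in rows)
            sentences.append(f'A{xs}({R}({xs}) -> {eqs})')
    # Unique Name axioms: ~(c = c') for each distinct pair of constants
    for i, c in enumerate(constants):
        for c2 in constants[i + 1:]:
            sentences.append(f'~({c} = {c2})')
    # Domain Closure axiom: Ax(x = c1 v ... v x = cn)
    closure = ' v '.join(f'x = {c}' for c in constants)
    sentences.append(f'Ax({closure})')
    return sentences

theory = extended_relational_theory({'R': {('a', 'b'), ('c', 'b')}, 'S': {('d',)}})
```

For this two-relation instance, the output contains the atoms R(a, b), R(c, b), S(d), one extension axiom per relation, the six unique-name sentences over {a, b, c, d}, and a single domain-closure sentence.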
The Extension axioms correspond to the intuition that a tuple ā is in relation R only if it is explicitly included in R by the database instance. A more general formulation of this intuition is given by the closed world assumption (CWA) [Rei78]. In its most general formulation, the CWA is an inference rule that is used in proof-theoretic contexts. Given a set Γ of sentences describing a (possibly nonconventional) database instance, the CWA states that one can infer a negated atom ¬R(ā) if Γ ⊬ R(ā) [i.e., if one cannot prove R(ā) from Γ using conventional first-order logic]. In the case where Γ is an extended relational theory this gives no added information, but in other contexts (such as deductive databases) it does. The CWA is related in spirit to the negation-as-failure rule of [Cla78].
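For a database presented simply as a set of facts, the CWA reduces to set difference over ground atoms; a sketch in our own encoding:

```python
# Sketch (our own encoding): over a finite set of constants, the CWA infers
# ~R(a) for exactly those ground atoms R(a) absent from (hence not provable
# from) the set of facts.
from itertools import product

def cwa_negations(facts, arities, constants):
    """arities maps each relation name to its arity; facts is a set of (R, tuple) pairs."""
    inferred = set()
    for R, n in arities.items():
        for tup in product(constants, repeat=n):
            if (R, tup) not in facts:
                inferred.add(('~' + R, tup))
    return inferred

facts = {('R', ('a', 'b')), ('R', ('c', 'b'))}
neg = cwa_negations(facts, {'R': 2}, ['a', 'b', 'c'])
```

With three constants and one binary relation, there are nine ground atoms; the two listed facts leave seven inferred negations, matching the intuition behind the Extension axioms above.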
3 The Relational Model
Alice: What is a relation?
Vittorio: You studied that in math a long time ago.
Sergio: It is just a table.
Riccardo: But we have several ways of viewing it.
A database model provides the means for specifying particular data structures, for constraining the data sets associated with these structures, and for manipulating the data. The specification of structure and constraints is done using a data definition language (DDL), and the specification of manipulation is done using a data manipulation language
(DML). The most prominent structures that have been used for databases to date are graphs
in the network, semantic, and object-oriented models; trees in the hierarchical model; and
relations in the relational model.
DMLs provide two fundamental capabilities: querying to support the extraction of data
from the current database; and updating to support the modification of the database state.
There is a rich theory on the topic of querying relational databases that includes several
languages based on widely different paradigms. This theory is the focus of Parts B, D,
and E, and portions of Part F of this book. The theory of database updates has received
considerably less attention and is touched on in Part F.
The term relational model is actually rather vague. As introduced in Codd's seminal article, this term refers to a specific data model with relations as data structures, an algebra for specifying queries, and no mechanisms for expressing updates or constraints. Subsequent articles by Codd introduced a second query language based on the predicate calculus of first-order logic, showed this to be equivalent to the algebra, and introduced the first integrity constraints for the relational model, namely, functional dependencies. Soon
thereafter, researchers in database systems implemented languages based on the algebra
and calculus, extended to include update operators and to include practically motivated
features such as arithmetic operators, aggregate operators, and sorting capabilities. Re-
searchers in database theory developed a number of variations on the algebra and calculus
with varying expressive power and adapted the paradigm of logic programming to provide
a third approach to querying relational databases. The story of integrity constraints for the
relational model is similar: A rich theory of constraints has emerged, and two distinct but
equivalent perspectives have been developed that encompass almost all of the constraints
that have been investigated formally. The term relational model has thus come to refer to
the broad class of database models that have relations as the data structure and that incor-
porate some or all of the query capabilities, update capabilities, and integrity constraints
mentioned earlier. In this book we are concerned primarily with the relational model in
this broad sense.
Relations are simple data structures. As a result, it is easy to understand the concep-
tual underpinnings of the relational model, thus making relational databases accessible to
a broad audience of end users. A second advantage of this simplicity is that clean yet pow-
erful declarative languages can be used to manipulate relations. By declarative, we mean
that a query/program is specified in a high-level manner and that an efficient execution of the program does not have to follow exactly its specification. Thus the important practical
issues of compilation and optimization of queries had to be overcome to make relational
databases a reality.
Because of its simplicity, the relational model has provided an excellent framework
for the first generation of theoretical research into the properties of databases. Fundamental
aspects of data manipulation and integrity constraints have been exposed and studied in a
context in which the peculiarities of the data model itself have relatively little impact. This
research provides a strong foundation for the study of other database models, first because
many theoretical issues pertinent to other models can be addressed effectively within the
relational model, and second because it provides a variety of tools, techniques, and research
directions that can be used to understand the other models more deeply.
In this short chapter, we present formal definitions for the data structure of the relational model. Theoretical research on the model has grown out of three different perspectives, one corresponding most closely to the natural usage of relations in databases, another stemming from mathematical logic, and the third stemming from logic programming. Because each of these provides important intuitive and notational benefits, we introduce notation that encompasses the different but equivalent formulations reflecting each of them.
3.1 The Structure of the Relational Model
An example of a relational database is shown in Fig. 3.1.¹ Intuitively, the data is represented
in tables in which each row gives data about a specic object or set of objects, and rows
with uniform structure and intended meaning are grouped into tables. Updates consist
of transformations of the tables by addition, removal, or modication of rows. Queries
allow the extraction of information from the tables. A fundamental feature of virtually all
relational query languages is that the result of a query is also a table or collection of tables.
We introduce now some informal terminology to provide the intuition behind the
formal definitions that follow. Each table is called a relation and it has a name (e.g., Movies). The columns also have names, called attributes (e.g., Title). Each line in a table is
a tuple (or record). The entries of tuples are taken from sets of constants, called domains,
that include, for example, the sets of integers, strings, and Boolean values. Finally we
distinguish between the database schema, which specifies the structure of the database; and the database instance, which specifies its actual content. This is analogous to the
standard distinction between type and value found in programming languages (e.g., an
¹ Pariscope is a weekly publication that lists the cultural events occurring in Paris and environs.
Movies     Title                    Director    Actor

           The Trouble with Harry   Hitchcock   Gwenn
           The Trouble with Harry   Hitchcock   Forsythe
           The Trouble with Harry   Hitchcock   MacLaine
           The Trouble with Harry   Hitchcock   Hitchcock
           ...
           Cries and Whispers       Bergman     Andersson
           Cries and Whispers       Bergman     Sylwan
           Cries and Whispers       Bergman     Thulin
           Cries and Whispers       Bergman     Ullman

Location   Theater                  Address                        Phone Number

           Gaumont Opéra            31 bd. des Italiens            47 42 60 33
           Saint André des Arts     30 rue Saint André des Arts    43 26 48 18
           Le Champo                51 rue des Ecoles              43 54 51 60
           ...
           Georges V                144 av. des Champs-Élysées     45 62 41 46
           Les 7 Montparnassiens    98 bd. du Montparnasse         43 20 32 20

Pariscope  Theater                  Title                    Schedule

           Gaumont Opéra            Cries and Whispers       20:30
           Saint André des Arts     The Trouble with Harry   20:15
           Georges V                Cries and Whispers       22:15
           ...
           Les 7 Montparnassiens    Cries and Whispers       20:45

Figure 3.1: The CINEMA database
identifier X might have type record A : int, B : bool endrecord and value record A : 5, B : true endrecord).
We now embark on the formal definitions. We assume that a countably infinite set att of attributes is fixed. For a technical reason that shall become apparent shortly, we assume that there is a total order ≤_att on att. When a set U of attributes is listed, it is assumed that the elements of U are written according to ≤_att unless otherwise specified.
For most of the theoretical development, it suffices to use the same domain of values for all of the attributes. Thus we now fix a countably infinite set dom (disjoint from att),
called the underlying domain. A constant is an element of dom. When different attributes
should have distinct domains, we assume a mapping Dom on att, where Dom(A) is a set
called the domain of A.
We assume a countably infinite set relname of relation names disjoint from the previous sets. In practice, the structure of a table is given by a relation name and a set of attributes. To simplify the notation in the theoretical treatment, we now associate a sort (i.e., a finite set of attributes) to each relation name. (An analogous approach is usually taken in logic.) In particular, we assume that there is a function sort from relname to P_fin(att) (the finitary powerset of att; i.e., the family of finite subsets of att). It is assumed that sort has the property that for each (possibly empty) finite set U of attributes, sort⁻¹(U) is infinite. This allows us to use as many relation names of a given sort as desired. The sort of a relation name R is simply sort(R). The arity of a relation name R is arity(R) = |sort(R)|.
A relation schema is now simply a relation name R. We sometimes write this as R[U] to indicate that sort(R) = U, or R[n], to indicate that arity(R) = n. A database schema is a nonempty finite set R of relation names. This might be written R = {R_1[U_1], ..., R_n[U_n]} to indicate the relation schemas in R.
For example, the database schema CINEMA for the database shown in Fig. 3.1 is defined by
CINEMA = {Movies, Location, Pariscope},
where relation names Movies, Location, and Pariscope have the following sorts:
sort(Movies) = {Title, Director, Actor}
sort(Location) = {Theater, Address, Phone Number}
sort(Pariscope) = {Theater, Title, Schedule}.
We often omit commas and set brackets in sets of attributes. For example, we may write sort(Pariscope) = Theater Title Schedule.
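The CINEMA schema can be written down directly under this formalization; the dictionary encoding of sort below is our own sketch, not the book's notation.

```python
# Sketch (our own encoding): sort restricted to the CINEMA schema, with
# arity derived from it as arity(R) = |sort(R)|.
CINEMA = {'Movies', 'Location', 'Pariscope'}

sort = {
    'Movies':    {'Title', 'Director', 'Actor'},
    'Location':  {'Theater', 'Address', 'Phone Number'},
    'Pariscope': {'Theater', 'Title', 'Schedule'},
}

def arity(R):
    return len(sort[R])
```

Note that sorts are sets of attributes, so the shared attributes Theater and Title appear in more than one sort without ambiguity.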
The formalism that has emerged for the relational model is somewhat eclectic, be-
cause it is intimately connected with several other areas that have their own terminology,
such as logic and logic programming. Because the slightly different formalisms are well
entrenched, we do not attempt to replace them with a single, unied notation. Instead we
will allow the coexistence of the different notations; the reader should have no difficulty
dealing with the minor variations.
Thus there will be two forks in the road that lead to different but largely equivalent
formulations of the relational model. The first fork in the road to defining the relational
model is of a philosophical nature. Are the attribute names associated with different relation
columns important?
3.2 Named versus Unnamed Perspectives
Under the named perspective, these attributes are viewed as an explicit part of a database
schema and may be used (e.g., by query languages and dependencies). Under the unnamed
perspective, the specific attributes in the sort of a relation name are ignored, and only the
arity of a relation schema is available (e.g., to query languages).
In the named perspective, it is natural to view tuples as functions. More precisely, a tuple over a (possibly empty) finite set U of attributes (or over a relation schema R[U]) is a total mapping u from U to dom. In this case, the sort of u is U, and it has arity |U|. Tuples may be written in a linear syntax using angle brackets, for example, ⟨A : 5, B : 3⟩. (In general, the order used in the linear syntax will correspond to ≤_att, although that is not necessary.) The unique tuple over ∅ is denoted ⟨⟩.
Suppose that u is a tuple over U. As usual in mathematics, the value of u on an attribute A in U is denoted u(A). This is extended so that for V ⊆ U, u[V] denotes the tuple v over V such that v(A) = u(A) for each A ∈ V (i.e., u[V] = u|_V, the restriction of the function u to V).
With the unnamed perspective, it is more natural to view a tuple as an element of a Cartesian product. More precisely, a tuple is an ordered n-tuple (n ≥ 0) of constants (i.e., an element of the Cartesian product dom^n). The arity of a tuple is the number of coordinates that it has. Tuples in this context are also written with angle brackets (e.g., ⟨5, 3⟩). The i-th coordinate of a tuple u is denoted u(i). If relation name R has arity n, then a tuple over R is a tuple with arity arity(R).
Because of the total order ≤_att, there is a natural correspondence between the named and unnamed perspectives. A tuple ⟨A_1 : a_1, A_2 : a_2⟩ (defined as a function) can be viewed (assuming A_1 ≤_att A_2) as an ordered tuple with (A_1 : a_1) as a first component and (A_2 : a_2) as a second one. Ignoring the names, this tuple may simply be viewed as the ordered tuple ⟨a_1, a_2⟩. Conversely, the ordered tuple t = ⟨a_1, a_2⟩ may be interpreted as a function over the set {1, 2} of integers with t(i) = a_i for each i. This correspondence will allow us to blur the distinction between the two perspectives and move freely from one to the other when convenient.
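This correspondence is easy to make concrete. In the sketch below (our own encoding), a named tuple is a dict, and the hypothetical list ATT_ORDER stands in for the total order ≤_att:

```python
# Sketch (our own encoding): named tuples as dicts, unnamed tuples as ordered
# tuples. The hypothetical ATT_ORDER plays the role of the total order <=_att.

ATT_ORDER = ['A', 'B', 'C']

def to_unnamed(named):
    """<A1 : a1, ..., An : an> viewed as <a1, ..., an>, listing attributes per <=_att."""
    return tuple(named[A] for A in ATT_ORDER if A in named)

def to_named(ordered, attributes):
    """<a1, ..., an> with sort U viewed as the function u with u(Ai) = ai."""
    attrs = sorted(attributes, key=ATT_ORDER.index)
    return dict(zip(attrs, ordered))

print(to_unnamed({'B': 3, 'A': 5}))    # (5, 3)
```

The round trip in either direction is the identity, which is exactly the blurring of the two perspectives described above.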
3.3 Conventional versus Logic Programming Perspectives
We now come to the second fork in the road to defining the relational model. This fork
concerns how relation and database instances are viewed, and it is essentially independent
of the perspective taken on tuples. Under the conventional perspective, a relation or relation
instance of (or over) a relation schema R[U] (or over a finite set U of attributes) is a (possibly empty) finite set I of tuples with sort U. In this case, I has sort U and arity |U|. Note that there are two instances over the empty set of attributes: {} and {⟨⟩}.
Continuing with the conventional perspective, a database instance of database schema
R is a mapping I with domain R, such that I(R) is a relation over R for each R R.
The other perspective for defining instances stems from logic programming. This perspective is used primarily with the ordered-tuple perspective on tuples, and so we focus on that here. Let R be a relation with arity n. A fact over R is an expression of the form R(a_1, ..., a_n), where a_i ∈ dom for i ∈ [1, n]. If u = ⟨a_1, ..., a_n⟩, we sometimes write R(u) for R(a_1, ..., a_n). Under the logic-programming perspective, a relation (instance) over R is a finite set of facts over R. For a database schema R, a database instance is a finite set I that is the union of relation instances over R, for R ∈ R. This perspective on instances is convenient when working with languages stemming from logic programming, and it permits us to write database instances in a convenient linear form.
The two perspectives provide alternative ways of describing essentially the same data.
For instance, assuming that sort(R) = AB and sort(S) = A, we have the following four representations of the same database:

Named and Conventional
I(R) = {f_1, f_2, f_3}
  f_1(A) = a   f_1(B) = b
  f_2(A) = c   f_2(B) = b
  f_3(A) = a   f_3(B) = a
I(S) = {g}
  g(A) = d

Unnamed and Conventional
I(R) = {⟨a, b⟩, ⟨c, b⟩, ⟨a, a⟩}
I(S) = {⟨d⟩}

Named and Logic Programming
{R(A : a, B : b), R(A : c, B : b), R(A : a, B : a), S(A : d)}

Unnamed and Logic Programming
{R(a, b), R(c, b), R(a, a), S(d)}.
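The two forks are mechanical to translate between. Here is a sketch (our own encoding) of the conventional and logic-programming representations of the instance above, with conversions both ways:

```python
# Sketch (our own encoding): the conventional representation maps relation
# names to sets of tuples; the logic-programming one is a single flat set of
# facts, each a (relation name, tuple) pair.

conventional = {'R': {('a', 'b'), ('c', 'b'), ('a', 'a')}, 'S': {('d',)}}

def to_facts(instance):
    # conventional -> logic programming
    return {(R, t) for R, tuples in instance.items() for t in tuples}

def to_conventional(facts):
    # logic programming -> conventional
    instance = {}
    for R, t in facts:
        instance.setdefault(R, set()).add(t)
    return instance

facts = to_facts(conventional)
assert to_conventional(facts) == conventional   # round trip
```

Because the logic-programming representation is one flat set, the set operations and comparators mentioned next apply to whole database instances, not just to individual relations.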
Because relations can be viewed as sets, it is natural to consider, given relations of the same sort, the standard set operations union (∪), intersection (∩), and difference (−), and the standard set comparators ⊆, ⊂, =, and ≠. With the logic-programming perspective on instances, we may also use these operations and comparators on database instances.
Essentially all topics in the theory of relational databases can be studied using a fixed
choice for the two forks. However, there are some cases in which one perspective is much
more natural than the other or is technically much more convenient. For example, in a
context in which there is more than one relation, the named perspective permits easy and
natural specification of correspondences between columns of different relations whereas
the unnamed perspective does not. As will be seen in Chapter 4, this leads to different but
equivalent sets of natural primitive algebra operators for the two perspectives. A related
example concerns those topics that involve the association of distinct domains to different
relation columns; again the named perspective is more convenient. In addition, although
relational dependency theory can be developed for the unnamed perspective, the motivation
is much more natural when presented in the named perspective. Thus during the course
of this book the choice of perspective during a particular discussion will be motivated
primarily by the intuitive or technical convenience offered by one or the other.
In this book, we will need an infinite set var of variables that will be used to range over elements of dom. We generalize the notion of tuple to permit variables in coordinate positions: a free tuple over U or R[U] is (under the named perspective) a function u from U to var ∪ dom. An atom over R is an expression R(e_1, ..., e_n), where n = arity(R) and each e_i is a term (i.e., e_i ∈ var ∪ dom for each i ∈ [1, n]). Following the terminology of logic and logic programming, we sometimes refer to a fact as a ground atom.
3.4 Notation
We generally use the following symbols, possibly with subscripts:

Constants: a, b, c
Variables: x, y
Sets of variables: X, Y
Terms: e
Attributes: A, B, C
Sets of attributes: U, V, W
Relation names (schemas): R, S; R[U], S[V]
Database schemas: R, S
Tuples: t, s
Free tuples: u, v, w
Facts: R(a_1, ..., a_n), R(t)
Atoms: R(e_1, ..., e_n), R(u)
Relation instances: I, J
Database instances: I, J
Bibliographic Notes
The relational model is founded on mathematical logic (in particular, predicate calcu-
lus). It is one of the rare cases in which substantial theoretical development preceded the
implementation of systems. The first proposal to use predicate calculus as a query language can be traced back to Kuhns [Kuh67]. The relational model itself was introduced by Codd [Cod70]. There are numerous commercial database systems based on the relational model. They include IBM's DB2 [A+76], INGRES [SWKH76], ORACLE [Ora89], Informix, and Sybase.
Other data models have been proposed and implemented besides the relational model.
The most prominent ones preceding the relational model are the hierarchical and network
models. These and other models are described in the books [Nij76, TL82]. More recently,
various models extending the relational model have been proposed. They include seman-
tic models (see the survey [HK87]) and object-oriented models (see the position paper
[ABD+89]). In this book we focus primarily on the relational model in a broad sense. Some
formal aspects of other models are considered in Part F.
4 Conjunctive Queries
Alice: Shall we start asking queries?
Sergio: Very simple ones for the time being.
Riccardo: But the system will answer them fast.
Vittorio: And there is some nice theory.
In this chapter we embark on the study of queries for relational databases, a rich topic
that spans a good part of this book. This chapter focuses on a limited but extremely
natural and commonly arising class of queries called conjunctive queries. Five equivalent
versions of this query family are presented here: one from each of the calculus and datalog
paradigms, two from the algebra paradigm, and a final one that has a more visual form.
In the context of conjunctive queries, the three nonalgebraic versions can be viewed as
minor syntactic variants of each other; but these similarities diminish as the languages are
generalized to incorporate negation and/or recursion. This chapter also discusses query
composition and its interaction with user views, and it extends conjunctive queries in a
straightforward manner to incorporate union (or disjunction).
The conjunctive queries enjoy several desirable properties, including, for example,
decidability of equivalence and containment. These results will be presented in Chapter 6,
in which a basic tool, the Homomorphism Theorem, is developed. Most of these results
extend to conjunctive queries with union.
In the formal framework that we have developed in this book, we distinguish between a query, which is a syntactic object, and a query mapping, which is the function defined by a query interpreted under a specified semantics. However, we often blur these two concepts when the meaning is clear from the context. In the relational model, query mappings generally have as domain the family of all instances of a specified relation or database schema, called the input schema; and they have as range the family of instances of an output schema, which might be a database schema or a relation schema. In the latter case, the relation name may be specified as part of the syntax of the query or by the context, or it may be irrelevant to the discussion and thus not specified at all. We generally say that a query (mapping) is from (or over) its input schema to its output schema. Finally, two queries q_1 and q_2 over R are equivalent, denoted q_1 ≡ q_2, if they have the same output schema and q_1(I) = q_2(I) for each instance I over R.
This chapter begins with an informal discussion that introduces a family of simple
queries and illustrates one approach to expressing them formally. Three versions of
conjunctive queries are then introduced, all of which have a basis in logic. A brief
digression is then made to consider query composition and database views. The algebraic
perspectives on conjunctive queries are then given, along with the theorem showing the
equivalence of all five approaches to conjunctive queries. Finally, various forms of union
and disjunction are added to the conjunctive queries.

(4.1) Who is the director of "Cries and Whispers"?
(4.2) Which theaters feature "Cries and Whispers"?
(4.3) What are the address and phone number of the Le Champo?
(4.4) List the names and addresses of theaters featuring a Bergman film.
(4.5) Is a film directed by Bergman playing in Paris?
(4.6) List the pairs of persons such that the first directed the second in a movie, and vice versa.
(4.7) List the names of directors who have acted in a movie they directed.
(4.8) List pairs of actors that acted in the same movie.
(4.9) On any input produce ⟨"Apocalypse Now", "Coppola"⟩ as the answer.
(4.10) Where can I see "Annie Hall" or "Manhattan"?
(4.11) What are the films with "Allen" as actor or director?
(4.12) What films with "Allen" as actor or director are currently featured at the Concorde?
(4.13) List all movies that were directed by Hitchcock or that are currently playing at the Rex.
(4.14) List all actors and the director of the movie "Apocalypse Now".

Figure 4.1: Examples of conjunctive queries, some of which require union
4.1 Getting Started
To present the intuition of conjunctive queries, consider again the CINEMA database of
Chapter 3. The following correspond to conjunctive queries:
(4.1) Who is the director of "Cries and Whispers"?
(4.2) Which theaters feature "Cries and Whispers"?
(4.3) What are the address and phone number of the Le Champo?
These and other queries used in this section are gathered in Fig. 4.1. Each of the queries
just given calls for extracting information from a single relation. In contrast, queries (4.4)
through (4.7) involve more than one relation.
In queries (4.1–4.4 and 4.6–4.9), the database is asked to find values or tuples of values
for which a certain pattern of data holds in the database, and in query (4.5) the database is
asked whether a certain pattern of data holds. We shall see that the patterns can be described
simply in terms of the existence of tuples that are connected to each other by equality
of some of their coordinates. On the other hand, queries (4.10) through (4.14) cannot be
expressed in this manner unless some form of disjunction or union is incorporated.
Example 4.1.1 Consider query (4.4). Intuitively, we express this query by stating that

    if there are tuples r1, r2, r3 respectively in relations Movies, Pariscope, Location
    such that the Director in r1 is "Bergman"
    and the Titles in tuples r1 and r2 are the same
    and the Theaters in tuples r2 and r3 are the same
    then we want the Theater and Address coordinates from tuple r3.
In this formulation we essentially use variables that range over tuples. Although this is the
basis of the so-called (relational) tuple calculus (see Exercise 5.23 in the next chapter),
the focus of most theoretical investigations has been on the domain calculus, which uses
variables that range over constants rather than tuples. This also reflects the convention
followed in the predicate calculus of first-order logic. Thus we reformulate the preceding
query as

    if there are tuples ⟨x_ti, "Bergman", x_ac⟩, ⟨x_th, x_ti, x_s⟩, and ⟨x_th, x_ad, x_p⟩,
    respectively, in relations Movies, Pariscope, and Location,
    then include the tuple ⟨Theater : x_th, Address : x_ad⟩ in the answer,

where x_ti, x_ac, . . . are variables. Note that the equalities specified in the first formulation
are achieved implicitly in the second formulation through multiple occurrences of
variables.
The translation of this into the syntax of rule-based conjunctive queries is now obtained by

    ans(x_th, x_ad) ← Movies(x_ti, "Bergman", x_ac), Pariscope(x_th, x_ti, x_s),
                      Location(x_th, x_ad, x_p)

where ans (for "answer") is a relation over {Theater, Address}. The atom to the left of the ←
is called the rule head, and the set of atoms to the right is called the body.
The preceding rule may be abbreviated as

    ans(x_th, x_ad) ← Movies(x_ti, "Bergman", _), Pariscope(x_th, x_ti, _),
                      Location(x_th, x_ad, _)

where _ is used to replace all variables that occur exactly once in the rule. Such variables
are sometimes called anonymous.
In general, a rule-based conjunctive query is a single rule that has the form illustrated
in the preceding example. The semantics associated with rule-based conjunctive queries
ensures that their interpretation corresponds to the more informal expressions given in the
preceding example. Rule-based conjunctive queries can be viewed as the basic building
block for datalog, a query language based on logic programming that provides an elegant
syntax for expressing recursion.
A second paradigm for the conjunctive queries has a more visual form and uses tables
with variables and constants. Although we present a more succinct formalism for this
paradigm later in this chapter, we illustrate it in Fig. 4.2 with a query presented in the syntax
of the language Query-By-Example (QBE) (see also Chapter 7). The identifiers starting
with a _ designate variables, and P. indicates what to output. Following the convention
established for QBE, variable names are chosen to reflect typical values that they might
take. Note that the coordinate entries left blank correspond, in terms of the rule given
previously, to distinct variables that occur exactly once in the body and do not occur in
the head (i.e., to anonymous variables).

    Movies    | Title             | Director | Actor
              | _The Seventh Seal | Bergman  |

    Pariscope | Theater | Title             | Schedule
              | _Rex    | _The Seventh Seal |

    Location  | Theater | Address               | Phone number
              | P._Rex  | P._1 bd. Poissonnière |

Figure 4.2: A query in QBE

The third version of conjunctive queries studied in this chapter is a restriction of the
predicate calculus; as will be seen, the term conjunctive query stems from this version. The
fourth and fifth versions are algebraic in nature, one for the unnamed perspective and the
other for the named perspective.
4.2 Logic-Based Perspectives
In this section we introduce and study three versions of the conjunctive queries, all stemming
from mathematical logic. After showing the equivalence of the three resulting query
languages, we extend them by incorporating a capability to express equality explicitly,
thereby yielding a slightly more powerful family of languages.
Rule-Based Conjunctive Queries
The rule-based version of conjunctive queries is now presented formally. As will be seen
later, the rule-based paradigm is well suited for specifying queries from database schemas
to database schemas. However, to facilitate the comparison between the different variants
of the conjunctive queries, we focus first on rule-based queries whose targets are relation
schemas. We adopt the convention of using the name ans to refer to the name of the target
relation if the name itself is unimportant (as is often the case with relational queries).
Definition 4.2.1 Let R be a database schema. A rule-based conjunctive query over R
is an expression of the form

    ans(u) ← R1(u1), ..., Rn(un)

where n ≥ 0; R1, ..., Rn are relation names in R; ans is a relation name not in R; and
u, u1, ..., un are free tuples (i.e., they may use either variables or constants). Recall that if
v = ⟨x1, ..., xm⟩, then R(v) is a shorthand for R(x1, ..., xm). In addition, the tuples
u, u1, ..., un must have the appropriate arities (i.e., u must have the arity of ans, and ui must
have the arity of Ri for each i ∈ [1, n]). Finally, each variable occurring in u must also
occur at least once in u1, ..., un. The set of variables occurring in q is denoted var(q).
Rule-based conjunctive queries are often more simply called rules. In the preceding
rule, the subexpression R1(u1), ..., Rn(un) is the body of the rule, and ans(u) is the
head. The rule here is required by the definition to be range restricted (i.e., each variable
occurring in the head must also occur in the body). Although this restriction is followed in
most of the languages based on the use of rules, it will be relaxed in Chapter 18.
Intuitively, a rule may be thought of as a tool for deducing new facts. If one can find
values for the variables of the rule such that the body holds, then one may deduce the
head fact. This concept of "values for the variables in the rules" is captured by the notion
of valuation. Formally, given a finite subset V of var, a valuation ν over V is a total
function ν from V to the set dom of constants. This is extended to be the identity on dom and
then extended to map free tuples to tuples in the natural fashion.
We now define the semantics for rule-based conjunctive queries. Let q be the query
given earlier, and let I be an instance of R. The image of I under q is

    q(I) = {ν(u) | ν is a valuation over var(q) and ν(ui) ∈ I(Ri) for each i ∈ [1, n]}.

The active domain of a database instance I, denoted adom(I), is the set of all constants
occurring in I, and the active domain adom(I) of a relation instance I is defined analogously.
In addition, the set of constants occurring in a query q is denoted adom(q). We use
adom(q, I) as an abbreviation for adom(q) ∪ adom(I).
Let q be a rule and I an input instance for q. Because q is range restricted, it is easily
verified that adom(q(I)) ⊆ adom(q, I) (see Exercise 4.2). In other words, q(I) contains
only constants occurring in q or in I. In particular, q(I) is finite, and so it is an instance.

A straightforward algorithm for evaluating a rule q is to consider systematically all
valuations with domain the set of variables occurring in q, and range the set of all constants
occurring in the input or q. More efficient algorithms may be achieved, both by performing
symbolic manipulations of the query and by using auxiliary data structures such as indexes.
Such improvements are considered in Chapter 6.
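This naive strategy is easy to make concrete. The sketch below is our illustration, not the book's notation: an instance is assumed to be a dict from relation names to sets of tuples, variables are plain strings, and constants appearing in a rule body are tagged as ('const', value).

```python
from itertools import product

def eval_rule(head_vars, body, instance):
    """Naive evaluation of a rule-based conjunctive query: systematically
    try every valuation from the rule's variables into the active domain,
    keeping nu(u) whenever nu(u_i) is in I(R_i) for every body atom."""
    # Active domain: every constant occurring in the instance or the rule.
    adom = {c for tuples in instance.values() for t in tuples for c in t}
    adom |= {t[1] for _, args in body for t in args
             if isinstance(t, tuple) and t[0] == 'const'}
    variables = sorted({t for _, args in body for t in args
                        if not isinstance(t, tuple)})

    def val(term, nu):                    # constants map to themselves
        return term[1] if isinstance(term, tuple) else nu[term]

    answers = set()
    for image in product(sorted(adom), repeat=len(variables)):
        nu = dict(zip(variables, image))  # one candidate valuation
        if all(tuple(val(t, nu) for t in args) in instance[rel]
               for rel, args in body):
            answers.add(tuple(nu[x] for x in head_vars))
    return answers
```

A head of arity 0 gives the yes-no behavior of query (4.5): the result is {()} when some valuation satisfies the body, and {} otherwise.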
Returning to the intuition, under the usual perspective a fundamental difference between
the head and body of a rule R0 ← R1, ..., Rn is that body relations are viewed as
being stored, whereas the head relation is not. Thus, referring to the rule given earlier, the
values of relations R1, ..., Rn are known because they are provided by the input instance
I. In other words, we are given the extension of R1, ..., Rn; for this reason they are called
extensional relations. In contrast, relation R0 is not stored and its value is computed on
request by the query; the rule gives only the "intension" or definition of R0. For this reason
we refer to R0 as an intensional relation. In some cases, the database instance associated
with R1, ..., Rn is called the extensional database (edb), and the rule itself is referred to
as the intensional database (idb). Also, the defined relation is sometimes referred to as an
idb relation.
We now present the first theoretical property of conjunctive queries. A query q over R
is monotonic if for each I, J over R, I ⊆ J implies that q(I) ⊆ q(J). A query q is satisfiable
if there is some input I such that q(I) is nonempty.
Proposition 4.2.2 Conjunctive queries are monotonic and satisfiable.
Proof Let q be the rule-based conjunctive query

    ans(u) ← R1(u1), ..., Rn(un).

For monotonicity, let I ⊆ J, and suppose that t ∈ q(I). Then for some valuation ν over
var(q), ν(ui) ∈ I(Ri) for each i ∈ [1, n], and t = ν(u). Because I ⊆ J, ν(ui) ∈ J(Ri) for
each i, and so t ∈ q(J).

For satisfiability, let d be the set of constants occurring in q, and let a ∈ dom be new.
Define I over the relation schemas R of the rule body so that

    I(R) = (d ∪ {a})^arity(R)

[i.e., the set of all tuples formed from (d ∪ {a}) having arity arity(R)]. Finally, let ν map
each variable in q to a. Then ν(ui) ∈ I(Ri) for i ∈ [1, n], and so ν(u) ∈ q(I). Thus q is
satisfiable.
The monotonicity of the conjunctive queries points to limitations in their expressive
power. Indeed, one can easily exhibit queries that are nonmonotonic and therefore not
conjunctive queries. For instance, the query "Which theaters in New York show only
Woody Allen films?" is nonmonotonic.

We close this subsection by indicating how rule-based conjunctive queries can be used
to express yes-no queries. For example, consider the query

(4.5) Is there a film directed by Bergman playing in Paris?

To provide an answer, we assume that relation name ans has arity 0. Then applying the rule

    ans() ← Movies(x, "Bergman", y), Pariscope(z, x, w)

returns the relation {⟨⟩} if the answer is yes, and returns {} if the answer is no.
Tableau Queries
If we blur the difference between a variable and a constant, the body of a conjunctive
query can be seen as an instance. This leads to a formulation of conjunctive queries called
tableau, which is closest to the visual form provided by QBE.

Definition 4.2.3 The notion of tableau over a relation schema R (or a database schema
R) is defined exactly as was the notion of instance over R, except that both variables and
constants may occur. A tableau query is simply a pair (T, u), where T is a tableau and each
variable in u also occurs in T. The free tuple u is called the summary of the tableau query.

The summary tuple u in a tableau query (T, u) represents the tuples included in the
answer to the query. Thus the answer consists of all tuples ν(u) for which the valuation ν
embeds the pattern described by T into the database.
Example 4.2.4 Let T be the tableau

    Movies    | Title | Director | Actor
              | x_ti  | Bergman  | x_ac

    Pariscope | Theater | Title | Schedule
              | x_th    | x_ti  | x_s

    Location  | Theater | Address | Phone Number
              | x_th    | x_ad    | x_p

The tableau query (T, ⟨Theater : x_th, Address : x_ad⟩) expresses query (4.4). If the
unnamed perspective on tuples is used, then the names of the attributes are not included in u.
The notion of valuation is extended in the natural fashion to map tableaux¹ to instances.
An embedding of tableau T into instance I is a valuation ν for the variables occurring
in T such that ν(T) ⊆ I. The semantics for tableau queries is essentially the same
as for rule-based conjunctive queries: The output of (T, u) on input I consists of all tuples
ν(u) where ν is an embedding of T into I.

Aside from the fact that tableau queries do not indicate a relation name for the answer,
they are syntactically close to the rule-based conjunctive queries. Furthermore, the
alternative perspective provided by tableaux lends itself to the development of several
natural results. Perhaps the most compelling of these arises in the context of the chase (see
Chapter 8), which provides an elegant characterization of two conjunctive queries yielding
identical results when the inputs satisfy certain dependencies.

¹ One tableau, two tableaux.
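An embedding search can be sketched as a backtracking match of the tableau's rows against the instance. This is our illustration, not the book's: rows are (relation, tuple) pairs, and by a purely hypothetical convention, entries that are strings starting with 'x' are variables while everything else is a constant.

```python
def embeddings(tableau, instance):
    """Enumerate all embeddings of a tableau into an instance, i.e. all
    valuations nu on the tableau's variables with nu(T) contained in I."""
    def is_var(term):
        return isinstance(term, str) and term.startswith('x')

    def extend(rows, nu):
        if not rows:                       # every row matched: nu is an embedding
            yield dict(nu)
            return
        (rel, row), rest = rows[0], rows[1:]
        for fact in instance.get(rel, ()):
            theta = dict(nu)               # tentative extension of nu
            if len(row) == len(fact) and all(
                    (theta.setdefault(t, c) == c) if is_var(t) else t == c
                    for t, c in zip(row, fact)):
                yield from extend(rest, theta)

    yield from extend(list(tableau), {})
```

The answer to a tableau query (T, u) is then the set of tuples nu(u) for nu in embeddings(T, I); on the tableau of Example 4.2.4 this recomputes query (4.4).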
A family of restricted tableaux called typed has been used to develop a number of
theoretical results. A tableau query q = (T, u) under the named perspective, where T is
over relation schema R and sort(u) ⊆ sort(R), is typed if no variable of T or u is associated
with two distinct attributes in q. Intuitively, the term typed is used because it is impossible
for entries from different attributes to be compared. The connection between typed tableaux
and conjunctive queries in the algebraic paradigm is examined in Exercises 4.19 and
4.20. Additional results concerning complexity issues around typed tableau queries are
considered in Exercises 6.16 and 6.21 in Chapter 6. Typed tableaux also arise in connection
with data dependencies, as studied in Part C.
Conjunctive Calculus
The third formalism for expressing conjunctive queries stems from predicate calculus. (A
review of predicate calculus is provided in Chapter 2, but the presentation of the calculus
in this and the following chapter is self-contained.)
We begin by presenting conjunctive calculus queries that can be viewed as syntactic
variants of rule-based conjunctive queries. They involve simple use of conjunction and
existential quantification. As will be seen, the full conjunctive calculus, defined later,
allows unrestricted use of conjunction and existential quantification. This provides more
flexibility in the syntax but, as will be seen, does not increase expressive power.
Consider the conjunctive query

    ans(e1, ..., em) ← R1(u1), ..., Rn(un).

A conjunctive calculus query that has the same semantics is

    {⟨e1, ..., em⟩ | ∃x1, ..., xk (R1(u1) ∧ ... ∧ Rn(un))},

where x1, ..., xk are all the variables occurring in the body and not the head. The symbol
∧ denotes conjunction (i.e., "and"), and ∃ denotes existential quantification (intuitively,
∃x ... denotes "there exists an x such that ..."). The term conjunctive query stems from
the presence of conjunctions in the syntax.
Example 4.2.5 In the calculus paradigm, query (4.4) can be expressed as follows:

    {⟨x_th, x_ad⟩ | ∃x_ti ∃x_ac ∃x_s ∃x_p (Movies(x_ti, "Bergman", x_ac)
                        ∧ Pariscope(x_th, x_ti, x_s)
                        ∧ Location(x_th, x_ad, x_p))}.

Note that some but not all of the existentially quantified variables play the role of
anonymous variables, in the sense mentioned in Example 4.1.1.
The syntax used here can be viewed as a hybrid of the usual set-theoretic notation,
used to indicate the form of the query output, and predicate calculus, used to indicate what
should be included in the output. As discussed in Chapter 2, the semantics associated with
calculus formulas is a restricted version of the conventional semantics found in first-order
logic.
We now turn to the formal definition of the syntax and semantics of the (full) conjunctive
calculus.

Definition 4.2.6 Let R be a database schema. A (well-formed) formula over R for the
conjunctive calculus is an expression having one of the following forms:

(a) an atom over R;
(b) (ϕ ∧ ψ), where ϕ and ψ are formulas over R; or
(c) ∃xϕ, where x is a variable and ϕ is a formula over R.

In formulas we permit the abbreviation of ∃x1 ... ∃xn by ∃x1, ..., xn.
The usual notion of free and bound occurrences of variables is now defined. An
occurrence of variable x in formula ϕ is free if

(i) ϕ is an atom; or
(ii) ϕ = (ψ ∧ ξ) and the occurrence of x is free in ψ or ξ; or
(iii) ϕ = ∃yψ, x and y are distinct variables, and the occurrence of x is free in ψ.

An occurrence of x in ϕ is bound if it is not free. The set of free variables in ϕ, denoted
free(ϕ), is the set of all variables that have at least one free occurrence in ϕ.
Definition 4.2.7 A conjunctive calculus query over database schema R is an expression
of the form

    {⟨e1, ..., em⟩ | ϕ},

where ϕ is a conjunctive calculus formula, ⟨e1, ..., em⟩ is a free tuple, and the set of
variables occurring in ⟨e1, ..., em⟩ is exactly free(ϕ). If the named perspective is being
used, then attributes can be associated with output tuples by specifying a relation name R
of arity m. The notation

    {⟨e1, ..., em⟩ : A1 ... Am | ϕ}

can be used to indicate the sort of the output explicitly.
To define the semantics of conjunctive calculus queries, it is convenient to introduce
some notation. Recall that for a finite set V ⊆ var, a valuation ν over V is a total function
from V to dom. This valuation will sometimes be viewed as a syntactic expression of the
form

    {x1/a1, ..., xn/an},

where x1, ..., xn is a listing of V and ai = ν(xi) for each i ∈ [1, n]. This may also be
interpreted as a set. For example, if x is not in the domain of ν and c ∈ dom, then ν ∪ {x/c}
denotes the valuation with domain V ∪ {x} that is identical to ν on V and maps x to c.
Now let R be a database schema, ϕ a conjunctive calculus formula over R, and ν a
valuation over free(ϕ). Then I satisfies ϕ under ν, denoted I ⊨ ϕ[ν], if

(a) ϕ = R(u) is an atom and ν(u) ∈ I(R); or
(b) ϕ = (ψ ∧ ξ) and² both I ⊨ ψ[ν|free(ψ)] and I ⊨ ξ[ν|free(ξ)]; or
(c) ϕ = ∃xψ and for some c ∈ dom, I ⊨ ψ[ν ∪ {x/c}].
Finally, let q = {⟨e1, ..., em⟩ | ϕ} be a conjunctive calculus query over R. For an
instance I over R, the image of I under q is

    q(I) = {ν(⟨e1, ..., em⟩) | I ⊨ ϕ[ν] and ν is a valuation over free(ϕ)}.
The active domain of a formula ϕ, denoted adom(ϕ), is the set of constants occurring
in ϕ; and as with queries q, we use adom(ϕ, I) to abbreviate adom(ϕ) ∪ adom(I). An easy
induction on conjunctive calculus formulas shows that if I ⊨ ϕ[ν], then the range of ν is
contained in adom(I) (see Exercise 4.3). This implies, in turn, that to evaluate a conjunctive
calculus query, one need only consider valuations with range contained in adom(ϕ, I) and,
hence, only a finite number of them. This pleasant state of affairs will no longer hold when
disjunction or negation is incorporated into the calculus (see Section 4.5 and Chapter 5).
Conjunctive calculus formulas ϕ and ψ over R are equivalent if they have the same
free variables and, for each I over R and valuation ν over free(ϕ) = free(ψ), I ⊨ ϕ[ν]
iff I ⊨ ψ[ν]. It is easily verified that if ϕ and ψ are equivalent, and if ξ' is the result of
replacing an occurrence of ϕ by ψ in conjunctive calculus formula ξ, then ξ and ξ' are
equivalent (see Exercise 4.4).

It is easily verified that for all conjunctive calculus formulas ϕ, ψ, and ξ, (ϕ ∧ ψ) is
equivalent to (ψ ∧ ϕ), and (ϕ ∧ (ψ ∧ ξ)) is equivalent to ((ϕ ∧ ψ) ∧ ξ). For this reason,
we may view conjunction as a polyadic connective rather than just binary.
We next show that conjunctive calculus queries, which allow unrestricted nesting
of ∃ and ∧, are no more powerful than the simple conjunctive queries first exhibited,
which correspond straightforwardly to rules. Thus the simpler conjunctive queries provide
a normal form for the full conjunctive calculus. Formally, a conjunctive calculus query
q = {u | ϕ} is in normal form if ϕ has the form

    ∃x1, ..., xm (R1(u1) ∧ ... ∧ Rn(un)).
Consider now the two rewrite (or transformation) rules for conjunctive calculus queries:

Variable substitution: replace subformula

    ∃xψ by ∃y ψ{x→y}

if y does not occur in ψ, where ψ{x→y} denotes the formula obtained by replacing all free
occurrences of x by y in ψ.

² ν|V for a variable set V denotes the restriction of ν to V.
Merge-exists: replace subformula

    (∃y1, ..., yn ψ ∧ ∃z1, ..., zm ξ) by ∃y1, ..., yn, z1, ..., zm (ψ ∧ ξ)

if {y1, ..., yn} and {z1, ..., zm} are disjoint, none of {y1, ..., yn} occur (free or bound)
in ξ, and none of {z1, ..., zm} occur (free or bound) in ψ.

It is easily verified (see Exercise 4.4) that (1) application of these transformation rules to a
conjunctive calculus formula yields an equivalent formula, and (2) these rules can be used
to transform any conjunctive calculus formula into an equivalent formula in normal form.
It follows that:
Lemma 4.2.8 Each conjunctive calculus query is equivalent to a conjunctive calculus
query in normal form.
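The normalization argument can be animated with a small sketch (ours, not the book's). Formulas are represented as nested tuples: ('atom', R, args), ('and', phi, psi), and ('exists', [vars], phi). The code applies variable substitution by renaming every quantified variable to a fresh name (the 'v0', 'v1', ... names are invented here), after which merge-exists reduces to list concatenation, since freshness guarantees the rules' side conditions.

```python
import itertools

fresh = map('v{}'.format, itertools.count())   # inexhaustible fresh names

def normalize(phi, renaming=None):
    """Flatten a conjunctive calculus formula into normal form
    exists x1..xm (A1 ^ ... ^ An), returned as (bound_vars, atom_list)."""
    renaming = renaming or {}
    kind = phi[0]
    if kind == 'atom':
        _, rel, args = phi
        return [], [(rel, tuple(renaming.get(a, a) for a in args))]
    if kind == 'and':                      # merge-exists: hoist both blocks
        vs1, atoms1 = normalize(phi[1], renaming)
        vs2, atoms2 = normalize(phi[2], renaming)
        return vs1 + vs2, atoms1 + atoms2
    if kind == 'exists':                   # variable substitution: rename apart
        _, xs, body = phi
        sub = dict(renaming, **{x: next(fresh) for x in xs})
        vs, atoms = normalize(body, sub)
        return [sub[x] for x in xs] + vs, atoms
    raise ValueError(phi)
```

On a formula such as (∃y R(x, y) ∧ ∃y S(y, x)), the two quantified y's, which would clash under a naive merge, are renamed apart before the exists-blocks are merged; the free variable x is untouched.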
We now introduce formal notation for comparing the expressive power of query
languages. Let Q1 and Q2 be two query languages (with associated semantics). Then Q1 is
dominated by Q2 (or, Q1 is weaker than Q2), denoted Q1 ⊑ Q2, if for each query q1 in Q1
there is a query q2 in Q2 such that q1 ≡ q2. Q1 and Q2 are equivalent, denoted Q1 ≡ Q2,
if Q1 ⊑ Q2 and Q2 ⊑ Q1.

Because of the close correspondence between rule-based conjunctive queries, tableau
queries, and conjunctive calculus queries in normal form, the following is easily verified
(see Exercise 4.15).
Proposition 4.2.9 The rule-based conjunctive queries, the tableau queries, and the
conjunctive calculus are equivalent.
Although straightforward, the preceding result is important because it is the first of
many that show equivalence between the expressive power of different query languages.
Some of these results will be surprising because of the high contrast between the languages.
Incorporating Equality
We close this section by considering a simple variation of the conjunctive queries
presented earlier, obtained by adding the capability of explicitly expressing equality between
variables and/or constants. For example, query (4.4) can be expressed as

    ans(x_th, x_ad) ← Movies(x_ti, x_d, x_ac), x_d = "Bergman",
                      Pariscope(x_th, x_ti, x_s), Location(x_th, x_ad, x_p)

and query (4.6) can be expressed as

    ans(y1, y2) ← Movies(x1, y1, z1), Movies(x2, y2, z2), y1 = z2, y2 = z1.
It would appear that explicit equalities like the foregoing can be expressed by
conjunctive queries without equalities by using multiple occurrences of the same variable or
constant. Although this is basically true, two problems arise. First, unrestricted rules with
equality may yield infinite answers. For example, in the rule

    ans(x, y) ← R(x), y = z

y and z are not tied to relation R, and there are infinitely many valuations satisfying the
body of the rule. To ensure finite answers, it is necessary to introduce an appropriate notion
of range restriction. Informally, an unrestricted rule with equality is range restricted if the
equalities require that each variable in the body be equal to some constant or some variable
occurring in an atom R(ui); Exercise 4.5 explores the notion of range restriction in more
detail. A rule-based conjunctive query with equality is a range-restricted rule with equality.

A second problem that arises is that the equalities in a rule with equality may cause
the query to be unsatisfiable. (In contrast, recall that rules without equality are always
satisfiable; see Proposition 4.2.2.) Consider the following query, in which R is a unary
relation and a, b are distinct constants:

    ans(x) ← R(x), x = a, x = b.

The equalities present in this query require that a = b, which is impossible. Thus there
is no valuation satisfying the body of the rule, and the query yields the empty relation on
all inputs. We use q^∅_{R,R'} (or q^∅ if R and R' are understood) to denote the query that maps
all inputs over R to the empty relation over R'. Finally, note that one can easily check whether the
equalities in a conjunctive query with equality are unsatisfiable (and hence whether the query is
equivalent to q^∅). This is done by computing the transitive closure of the equalities in the
query and checking that no two distinct constants are required to be equal. Each satisfiable
rule with equality is equivalent to a rule without equality (see Exercise 4.5c).
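This transitive-closure check is a small union-find computation. The sketch below is our illustration, not the book's: equalities are given as pairs of terms, and, by a purely hypothetical convention, a term is a variable exactly when it is a string starting with 'x'.

```python
def satisfiable(equalities):
    """Decide whether a set of equalities is satisfiable: close them
    transitively (union-find) and verify that no equivalence class
    is forced to contain two distinct constants."""
    parent = {}

    def find(t):
        parent.setdefault(t, t)
        while parent[t] != t:
            parent[t] = parent[parent[t]]   # path halving
            t = parent[t]
        return t

    for s, t in equalities:                 # union the classes of s and t
        parent[find(s)] = find(t)

    constant_of = {}                        # class representative -> constant
    for t in parent:
        if not t.startswith('x'):           # t is a constant
            if constant_of.setdefault(find(t), t) != t:
                return False                # two distinct constants equated
    return True
```

The rule ans(x) ← R(x), x = a, x = b from the text is rejected by this check, since its single equivalence class contains both a and b.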
One can incorporate equality into tableau queries in a similar manner by adding
separately a set of required equalities. Once again, no expressive power is gained if the
query is satisfiable. Incorporating equality into the conjunctive calculus is considered in
Exercise 4.6.
4.3 Query Composition and Views
We now present a digression that introduces the important notion of query composition
and describe its relationship to database views. A main result here is that the rule-based
conjunctive queries with equality are closed under composition.

Consider a database schema R = {R1, ..., Rn}. Suppose that we have a query q (in any of the
preceding formalisms). Conceptually, this can be used to define a relation with new relation
name S1, which can be used in subsequent queries as any ordinary relation from R. In
particular, we can use S1 in the definition of a new relation S2, and so on. In this context, we
could call each of S1, S2, ... intensional (in contrast with the extensional relations of R).
This perspective on query composition is expressed most conveniently within the rule-based
paradigm. Specifically, a conjunctive query program (with or without equality) is a
sequence P of rules having the form

    S1(u1) ← body1
    S2(u2) ← body2
      ...
    Sm(um) ← bodym

where each Si is distinct and not in R; and for each i ∈ [1, m], the only relation names
that may occur in bodyi are R1, ..., Rn and S1, ..., S_{i-1}. An instance I over R and the
program P can be viewed as defining values for all of S1, ..., Sm in the following way:
For each i ∈ [1, m], [P(I)](Si) = qi([P(I)]), where qi is the i-th rule and defines relation Si
in terms of I and the previous Sj's. If P is viewed as defining a single output relation, then
this output is [P(I)](Sm). Analogous to rule-based conjunctive queries, the relations in R
are called edb relations, and the relations occurring in rule heads are called idb relations.
Example 4.3.1 Let R = {Q, R} and consider the conjunctive query program P given by

    S1(x, z) ← Q(x, y), R(y, z, w)
    S2(x, y, z) ← S1(x, w), R(w, y, v), S1(v, z)
    S3(x, z) ← S2(x, u, v), Q(v, z).

Figure 4.3 shows an example instance I for R and the values that are associated to S1, S2, S3
by P(I).
It is easily verified that the effect of the first two rules of P on S2 is equivalent to the
effect of the rule

    S2(x, y, z) ← Q(x1, y1), R(y1, z1, w1), x = x1, w = z1,
                  R(w, y, v), Q(x2, y2), R(y2, z2, w2), v = x2, z = z2.
Alternatively, expressed without equality, it is equivalent to

    S2(x, y, z) ← Q(x, y1), R(y1, w, w1), R(w, y, v), Q(v, y2), R(y2, z, w2).

Note how variables are renamed to prevent undesired "cross-talk" between the different
rule bodies that are combined to form this rule. The effect of P on S3 can also be expressed
using a single rule without equality (see Exercise 4.7).
It is straightforward to verify that if a permutation P' of P (i.e., a listing of the elements
of P in a possibly different order) satisfies the restriction that relation names in a rule
body must be in a previous rule head, then P' will define the same mapping as P. This
kind of consideration will arise in a richer context when stratified negation is considered in
Chapter 15.
    Q          R            S1        S2           S3
    1 2        1 1 1        1 3       1 1 1        1 2
    2 1        2 3 1        2 1       1 1 3        2 2
    2 2        3 1 2        2 3       2 1 1
               4 4 1                  2 1 3

Figure 4.3: Application of a conjunctive query program
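The figure's values can be reproduced mechanically. The following sketch (ours; it assumes, as in Example 4.3.1, that rule bodies contain only variables) evaluates the program's rules in sequence, each computed head relation becoming available to the bodies of later rules:

```python
from itertools import product

def eval_program(program, instance):
    """Evaluate a conjunctive query program: rules are (head_rel,
    head_vars, body) triples processed in order, so each S_i may be
    used by later rules, as in the definition of P(I)."""
    db = {rel: set(ts) for rel, ts in instance.items()}
    for head_rel, head_vars, body in program:
        adom = {c for ts in db.values() for t in ts for c in t}
        variables = sorted({v for _, args in body for v in args})
        result = set()
        for image in product(sorted(adom), repeat=len(variables)):
            nu = dict(zip(variables, image))      # candidate valuation
            if all(tuple(nu[v] for v in args) in db[rel]
                   for rel, args in body):
                result.add(tuple(nu[v] for v in head_vars))
        db[head_rel] = result
    return db
```

On the instance of Figure 4.3 this yields exactly the relations S1, S2, and S3 shown there.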
Example 4.3.2 Consider the following program P:

    T(a, x) ← R(x)
    S(x) ← T(b, x).

Clearly, P always defines the empty relation S, so it is not equivalent to any rule-based
conjunctive query without equality. Intuitively, the use of the constants a and b in P masks
the use of equalities, which in this case are contradictory and yield an unsatisfiable query.

Based on the previous examples, the following is easily verified (see Exercise 4.7).

Theorem 4.3.3 (Closure under Composition) If conjunctive query program P defines
final relation S, then there is a conjunctive query q, possibly with equality, such that on
all input instances I, q(I) = [P(I)](S). Furthermore, if P is satisfiable, then q can be
expressed without equality.
The notion of programs is based on the rule-based formalism of the conjunctive
queries. In the other versions introduced previously and later in this chapter, the notation
does not conveniently include a mechanism for specifying names for the output of
intermediate queries. For the other formalisms we use a slightly more elaborate notation that
permits the specification of these names. In particular, all of the formalisms are compatible
with a functional, purely expression-based paradigm:

    let S1 = q1 in
    let S2 = q2 in
      ...
    let S_{m-1} = q_{m-1} in
    qm

and with an imperative paradigm in which the intermediate query values are assigned to
relation variables:

    S1 := q1;
    S2 := q2;
      ...
    S_{m-1} := q_{m-1};
    Sm := qm.
It is clear from Proposition 4.2.9 and Theorem 4.3.3 that the conjunctive calculus and
tableau queries with equality are both closed under composition.
Composition and User Views
Recall that the top level of the three-level architecture for databases (see Chapter 1) consists
of user views (i.e., versions of the data that are restructured and possibly restricted images
of the database as represented at the middle level). In many cases these views are specified
as queries (or query programs). These may be materialized (i.e., a physical copy of the view
is stored and maintained) or virtual (i.e., relevant information about the view is computed
as needed). In the latter case, queries against the view generate composed queries against
the underlying database, as illustrated by the following example.
Example 4.3.4 Consider the view over schema {Marilyn, Champo-info} defined by the
following two rules:

Marilyn(x_t) ← Movies(x_t, x_d, "Monroe")
Champo-info(x_t, x_s, x_p) ← Pariscope("Le Champo", x_t, x_s),
                             Location("Le Champo", x_a, x_p).
The conjunctive query "What titles in Marilyn are featured at the Le Champo at 21:00?"
can be expressed against the view as

ans(x_t) ← Marilyn(x_t), Champo-info(x_t, "21:00", x_p).
Assuming that the view is virtual, evaluation of this query is accomplished by con-
sidering the composition of the query with the view definition. This composition can be
rewritten as

ans(x_t) ← Movies(x_t, x_d, "Monroe"),
           Pariscope("Le Champo", x_t, "21:00"),
           Location("Le Champo", x_a, x_p).
An alternative expression specifying both view and query now follows. (Expressions
from the algebraic versions of the conjunctive queries could also be used here.)
Marilyn := {x_t | ∃x_d (Movies(x_t, x_d, "Monroe"))};
Champo-info := {x_t, x_s, x_p | ∃x_a (Pariscope("Le Champo", x_t, x_s)
                                      ∧ Location("Le Champo", x_a, x_p))};
ans := {x_t | Marilyn(x_t) ∧ ∃x_p (Champo-info(x_t, "21:00", x_p))}.
This example illustrates the case in which a query is evaluated over a single view;
evaluation of the query involves a two-layer composition of queries. If a series of nested
views is defined, then query evaluation can involve query compositions having two or more
layers.
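Under the virtual-view regime, composition needs no special machinery if views are simply functions of the database: evaluating the user query calls the view definitions on demand. A Python sketch along the lines of Example 4.3.4 follows; the specific tuples are invented sample data:

```python
# Virtual views as Python functions over the database: evaluating the
# user query composes with the view definitions on demand.

db = {
    "Movies": {("Bus Stop", "Logan", "Monroe"),
               ("Niagara", "Hathaway", "Monroe"),
               ("Persona", "Bergman", "Ullmann")},
    "Pariscope": {("Le Champo", "Bus Stop", "21:00"),
                  ("Le Champo", "Persona", "21:00")},
    "Location": {("Le Champo", "51 rue des Ecoles", "75005")},
}

def marilyn(db):
    # Marilyn(x_t) <- Movies(x_t, x_d, "Monroe")
    return {(t,) for (t, d, a) in db["Movies"] if a == "Monroe"}

def champo_info(db):
    # Champo-info(x_t, x_s, x_p) <- Pariscope("Le Champo", x_t, x_s),
    #                               Location("Le Champo", x_a, x_p)
    return {(t, s, p)
            for (th1, t, s) in db["Pariscope"] if th1 == "Le Champo"
            for (th2, addr, p) in db["Location"] if th2 == "Le Champo"}

def ans(db):
    # ans(x_t) <- Marilyn(x_t), Champo-info(x_t, "21:00", x_p)
    return {(t,) for (t,) in marilyn(db)
            for (t2, s, p) in champo_info(db) if t2 == t and s == "21:00"}
```

Nothing about Marilyn or Champo-info is ever stored; each call to ans recomputes the views, which is the composed-query evaluation described in the text.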
4.4 Algebraic Perspectives
The use of algebra operators provides a distinctly different perspective on the conjunctive
queries. There are two distinct algebras associated with the conjunctive queries, and they
stem, respectively, from the named, ordered-tuple perspective and the unnamed, function-
based perspective. After presenting the two algebras, their equivalence with the conjunctive
queries is discussed.
The Unnamed Perspective: The SPC Algebra

The algebraic paradigm for relational queries is based on a family of unary and binary oper-
ators on relation instances. Although their application must satisfy some typing constraints,
they are polymorphic in the sense that each of these operators can be applied to instances
of an infinite number of arities or sorts. For example, as suggested in Chapter 3, the union
operator can take as input any two relation instances having the same sort.

Three primitive algebra operators form the unnamed conjunctive algebra: selection,
projection, and cross-product (or Cartesian product). This algebra is more often referred
to as the SPC algebra, based on the first letters of the three operators that form it. (This
convention will be used to specify other algebras as well.) An example is given before the
formal definition of these operators.
Example 4.4.1 We show how query (4.4) can be built up using the three primitive
operators. First we use selection to extract the tuples of Movies that have Bergman as
director.

I_1 := σ_{2="Bergman"}(Movies)

Next a family of wide (six columns wide, in fact) tuples is created by taking the cross-
product of I_1 and Pariscope.

I_2 := I_1 × Pariscope
Another selection is performed to focus on the members of I_2 that have first and fifth
columns equal.

I_3 := σ_{1=5}(I_2)

In effect, the cross-product followed by this selection finds a matching of tuples from I_1
and Pariscope that agree on the Title coordinates.

At this point we are interested only in the theaters where these films are playing, so
we use projection to discard the unneeded columns, yielding a unary relation.

I_4 := π_4(I_3)
Finally, this is paired with Location and projected on the Theater and Address columns to
yield the answer.

I_5 := π_{2,3}(σ_{1=2}(I_4 × Location))

The development just given uses SPC expressions in the context of a simple imperative
language with assignment. In the pure SPC algebra, this query is expressed as

π_{2,3}(σ_{1=2}(π_4(σ_{1=5}(σ_{2="Bergman"}(Movies) × Pariscope)) × Location)).

Another query that yields the same result is

π_{4,8}(σ_{4=7}(σ_{1=5}(σ_{2="Bergman"}(Movies × Pariscope × Location)))).

This corresponds closely to the conjunctive calculus query of Example 4.2.5.

Although the algebraic operators have a procedural feel to them, algebraic queries are
used by most relational database systems as high-level specifications of desired output.
Their actual implementation is usually quite different from the original form of the query,
as will be discussed in Section 6.1.
We now formally define the three operators forming the SPC algebra.

Selection: This can be viewed as a horizontal operator. The two primitive forms are
σ_{j=a} and σ_{j=k}, where j, k are positive integers and a ∈ dom. [In practice, we usually
surround constants with quotes (" ").] The operator σ_{j=a} takes as input any relation
instance I with arity ≥ j and returns as output an instance of the same arity. In
particular,

σ_{j=a}(I) = {t ∈ I | t(j) = a}.

The operator σ_{j=k} for positive integers j, k is defined analogously for inputs with arity
≥ max{j, k}. This is sometimes called atomic selection; generalizations of selection
will be defined later.
Projection: This vertical operator can be used to delete and/or permute columns of a
relation. The general form of this operator is π_{j_1,...,j_n}, where j_1, ..., j_n is a possibly
empty sequence of positive integers (the empty sequence is written [ ]), possibly with
repeats. This operator takes as input any relation instance with arity ≥ max{j_1, ..., j_n}
(where the max of ∅ is 0) and returns an instance with arity n. In particular,

π_{j_1,...,j_n}(I) = {⟨t(j_1), ..., t(j_n)⟩ | t ∈ I}.
Cross-product (or Cartesian product): This operator provides the capability for combining
relations. It takes as inputs a pair of relations having arbitrary arities n and m and
returns a relation with arity n + m. In particular, if arity(I) = n and arity(J) = m,
then

I × J = {⟨t(1), ..., t(n), s(1), ..., s(m)⟩ | t ∈ I and s ∈ J}.

Cross-product is associative and noncommutative and has the nonempty 0-ary relation
{⟨⟩} as left and right identity. Because it is associative, we sometimes view cross-product
as a polyadic operator and write, for example, I_1 × ... × I_n.

We extend the cross-product operator to tuples in the natural fashion; that is, u × v is
a tuple with arity = arity(u) + arity(v).
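These three definitions translate almost literally into executable form. The following Python sketch models an instance as a set of tuples with 1-based coordinates, exactly as in the definitions above; the sample relations R and S are invented for illustration:

```python
# Relations are sets of tuples; coordinates are 1-based, as in the text.

def select_const(j, a, I):
    # sigma_{j=a}: keep tuples whose j-th coordinate equals constant a
    return {t for t in I if t[j - 1] == a}

def select_eq(j, k, I):
    # sigma_{j=k}: keep tuples whose j-th and k-th coordinates agree
    return {t for t in I if t[j - 1] == t[k - 1]}

def project(js, I):
    # pi_{j1,...,jn}: possibly empty coordinate list, repeats allowed
    return {tuple(t[j - 1] for j in js) for t in I}

def cross(I, J):
    # I x J: concatenation of every pair of tuples
    return {s + t for s in I for t in J}

R = {(1, 2), (1, 3), (4, 2)}
S = {(2, 7), (3, 8)}

# keep tuples of R x S whose 2nd and 3rd columns agree, then project
matched = select_eq(2, 3, cross(R, S))
answer = project([1, 4], matched)   # -> {(1, 7), (1, 8), (4, 7)}
```

Note that project([], I) yields {()} on any nonempty I, matching the convention that the empty projection produces the 0-ary relation containing the empty tuple.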
The SPC algebra is the family of well-formed expressions containing relation names
and one-element unary constants and closed under the application of the selection, projec-
tion, and cross-product operators just defined. Each expression is considered to be defined
over a given database schema and has an associated output arity. We now give the formal,
inductive definition.

Let R be a database schema. The base SPC (algebra) queries and output arities are:

Input relation: Expression R; with arity equal to arity(R).
Unary singleton constant: Expression {⟨a⟩}, where a ∈ dom; with arity equal to 1.
The family of SPC (algebra) queries contains all base SPC queries and, for SPC queries
q_1, q_2 with arities α_1, α_2, respectively:

Selection: σ_{j=a}(q_1) and σ_{j=k}(q_1) whenever j, k ≤ α_1 and a ∈ dom; these have arity α_1.
Projection: π_{j_1,...,j_n}(q_1), where j_1, ..., j_n ≤ α_1; this has arity n.
Cross product: q_1 × q_2; this has arity α_1 + α_2.
In practice, we sometimes use brackets to surround algebraic queries, such as
[R × σ_{1=a}(S)](I). In addition, parentheses may be dropped if no ambiguity results.
The semantics of these queries is defined in the natural manner (see Exercise 4.8).

The SPC algebra includes unsatisfiable queries, such as σ_{1=a}(σ_{1=b}(R)), where
arity(R) ≥ 1 and a ≠ b. This is equivalent to q_∅.
As explored in Exercise 4.22, permitting as base SPC queries constant queries that are
not unary (i.e., expressions of the form {⟨a_1, ..., a_n⟩}) yields expressive power greater
than the rule-based conjunctive queries with equality. This is also true of selection for-
mulas in which disjunction is permitted. As will be seen in Section 4.5, these capabilities
are subsumed by including an explicit union operator into the SPC algebra. Permitting
negation in selection formulas also extends the expressive power of the SPC algebra (see
Exercise 4.27b).
Before leaving the SPC algebra, we mention three operators that can be simulated by the
primitive ones. The first is intersection (∩), which is easily simulated (see Exercise 4.28).
The other two operators involve generalizations of the selection and cross-product oper-
ators. The resulting algebra is called the generalized SPC algebra. We shall introduce a
normal form for generalized SPC algebra expressions.

The first operator is a generalization of selection to permit the specification of multiple
conditions. A positive conjunctive selection formula is a conjunction F = γ_1 ∧ ... ∧ γ_n
(n ≥ 1), where each conjunct γ_i has the form j = a or j = k for positive integers j, k and
a ∈ dom; and a positive conjunctive selection operator is an expression of the form σ_F,
where F is a positive conjunctive selection formula. The intended typing and semantics
for these operators is clear, as is the fact that they can be simulated by a composition of
selections as defined earlier.
The second operator, called equi-join, is a binary operator that combines cross-product
and selection. A (well-formed) equi-join operator is an expression of the form ⋈_F, where
F = γ_1 ∧ ... ∧ γ_n (n ≥ 1) is a conjunction such that each conjunct γ_i has the form j = k.
An equi-join operator ⋈_F can be applied to any pair I, J of relation instances, where
arity(I) ≥ the maximum integer occurring on the left-hand side of any equality in F, and
arity(J) ≥ the maximum integer occurring on the right-hand side of any equality in F.
Given an equi-join expression I ⋈_F J, let F′ be the result of replacing each condition
j = k in F by j = arity(I) + k. Then the semantics of I ⋈_F J is given by σ_{F′}(I × J). As
with cross-product, equi-join is also defined for pairs of tuples, with an undefined output if
the tuples do not satisfy the conditions specified.
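The reduction of equi-join to a selection over a cross-product can be sketched directly; the helper below performs the j = arity(I) + k shift on the conditions just as described (relation contents invented for illustration):

```python
def cross(I, J):
    # I x J over sets of tuples
    return {s + t for s in I for t in J}

def equijoin(I, J, conds, arity_I):
    # conds is a list of (j, k) pairs meaning "column j of I = column k of J".
    # Per the text, each j = k becomes j = arity(I) + k over the cross-product.
    shifted = [(j, arity_I + k) for (j, k) in conds]
    return {t for t in cross(I, J)
            if all(t[j - 1] == t[k - 1] for (j, k) in shifted)}

I = {(1, 2), (4, 2)}
J = {(2, 9), (5, 9)}

# join on "column 2 of I = column 1 of J"
out = equijoin(I, J, [(2, 1)], arity_I=2)   # -> {(1, 2, 2, 9), (4, 2, 2, 9)}
```

The output arity is arity(I) + arity(J), as for cross-product; the join merely filters the concatenated tuples.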
We now develop a normal form for the SPC algebra. We stress that this normal form is
useful for theoretical purposes and, in general, represents a costly way to compute the
answer of a given query (see Chapter 6).

An SPC algebra expression is in normal form if it has the form

π_{j_1,...,j_n}({⟨a_1⟩} × ... × {⟨a_m⟩} × σ_F(R_1 × ... × R_k)),

where n ≥ 0; m ≥ 0; a_1, ..., a_m ∈ dom; {1, ..., m} ⊆ {j_1, ..., j_n}; R_1, ..., R_k are rela-
tion names (repeats permitted); and F is a positive conjunctive selection formula.
Proposition 4.4.2 For each (generalized) SPC query q there is a generalized SPC query
q′ in normal form such that q ≡ q′.

The proof of this proposition (see Exercise 4.12) is based on repeated application of the
following eight equivalence-preserving SPC algebra rewrite rules (or transformations).
Merge-select: replace σ_F(σ_{F′}(q)) by σ_{F∧F′}(q).

Merge-project: replace π_{j_1,...,j_n}(π_{k_1,...,k_p}(q)) by π_{l_1,...,l_n}(q), where l_i = k_{j_i} for each
term l_i.

Push-select-through-project: replace σ_F(π_{j_1,...,j_n}(q)) by π_{j_1,...,j_n}(σ_{F′}(q)), where F′ is obtained from
F by replacing each coordinate value i by j_i.

Push-select-through-singleton: replace σ_{1=j}({⟨a⟩} × q) by {⟨a⟩} × σ_{(j−1)=a}(q).
Associate-cross: replace ((q_1 × ... × q_n) × q) by (q_1 × ... × q_n × q), and replace (q × (q_1 × ... ×
q_n)) by (q × q_1 × ... × q_n).

Commute-cross: replace (q × q′) by π_{j_1,...,j_n}(q′ × q), where j_1, ..., j_n = arity(q′) + 1, ..., arity(q′) +
arity(q), 1, ..., arity(q′).
Push-cross-through-select: replace (σ_F(q) × q′) by σ_F(q × q′), and replace (q × σ_F(q′))
by σ_{F′}(q × q′), where F′ is obtained from F by replacing each coordinate value i by
i + arity(q).

Push-cross-through-project: replace (π_{j_1,...,j_n}(q) × q′) by π_{j_1,...,j_n,k_1,...,k_p}(q × q′), where
k_1, ..., k_p = arity(q) + 1, ..., arity(q) + arity(q′); and replace (q × π_{j_1,...,j_n}(q′))
by π_{1,...,arity(q),j′_1,...,j′_n}(q × q′), where each j′_i = j_i + arity(q).
For a set S of rewrite rules and algebra expressions q, q′, write q →_S q′, or simply
q → q′ if S is understood from the context, if q′ is the result of replacing a subexpression
of q according to one of the rules in S. Let →*_S denote the reflexive, transitive closure
of →_S.

A family S of rewrite rules is sound if q →_S q′ implies q ≡ q′. If S is sound, then
clearly q →*_S q′ implies q ≡ q′.

It is easily verified that the foregoing set of rewrite rules is sound and that for each SPC
query q there is an SPC query q′ such that q′ is in normal form and q →* q′
(see Exercise 4.12).
In Section 6.1, we describe an approach to optimizing the evaluation of conjunctive
queries using rewrite rules. For example, in that context, the merge-select and merge-
project transformations are helpful, as are the inverses of the push-cross-through-select
and push-cross-through-project.
Finally, note that an SPC query may require, as the result of transitivity, the equality
of two distinct constants. Thus there are unsatisfiable SPC queries equivalent to q_∅. This is
analogous to the logic-based conjunctive queries with equality. It is clear, using the normal
form, that one can check whether an SPC query is equivalent to q_∅ by examining the selection formula
F. The set of SPC queries that are not equivalent to q_∅ forms the satisfiable SPC algebra.
The Named Perspective: The SPJR Algebra
In Example 4.4.1, the relation I_3 was constructed using selection and cross-product by the
expression σ_{1=5}(I_1 × Pariscope). As is often the case, the columns used in this selection
are labeled by the same attribute. In the context of the named perspective on tuples, this
suggests a natural variant of the cross-product operator (and of the equi-join operator) that
is called natural join and is denoted by ⋈. Informally, the natural join requires the tuples
that are concatenated to agree on the common attributes.
Example 4.4.3 The natural join of Movies and Pariscope is

Movies ⋈ Pariscope
  = {u with sort Title Director Actor Theater Schedule |
       for some v ∈ Movies and w ∈ Pariscope,
       u[Title Director Actor] = v and u[Theater Title Schedule] = w}
  = π_{1,2,3,4,6}(Movies ⋈_{1=2} Pariscope)

(assuming that the sort of the last expression corresponds to that of the previous expres-
sion). More generally, using the natural analog of projection and selection for the named
perspective, query (4.4) can be expressed as

π_{Theater,Address}((σ_{Director="Bergman"}(Movies) ⋈ Pariscope) ⋈ Location).
As suggested by the preceding example, natural join can be used in the named context
to replace certain equi-joins arising in the unnamed context. However, a problem arises if
two relations sharing an attribute A are to be joined but without forcing equality on the A
coordinates, or if a join is to be formed based on the equality of attributes not sharing the
same name. For example, consider the query

(4.8) List pairs of actors that acted in the same movie.

To answer this, one would like to join the Movies relation with itself but matching only on
the Title column. This will be achieved by first creating a copy Movies′ of Movies in which
the attribute Director has been renamed to Director′ and Actor to Actor′; joining this with
Movies; and finally projecting onto the Actor and Actor′ columns. Renaming is also needed
for query (4.6) (see Exercise 4.11).
The named conjunctive algebra has four primitive operators: selection, essentially as
before; projection, now with repeats not permitted; (natural) join; and renaming. It is thus
referred to as the SPJR algebra. As with the SPC algebra, we define the individual operators
and then indicate how they are combined to form a typed, polymorphic algebra. In each
case, we indicate the sorts of input and output. If a relation name is needed for the output,
then it is assumed to be chosen to have the correct sort.

Selection: The selection operators have the form σ_{A=a} and σ_{A=B}, where A, B ∈ att and
a ∈ dom. These operators apply to any instance I with A ∈ sort(I) [respectively,
A, B ∈ sort(I)] and are defined in analogy to the unnamed selection, yielding an
output with the same sort as the input.
Projection: The projection operator has the form π_{A_1,...,A_n}, n ≥ 0 (repeats not permitted),
and operates on all inputs having sort containing {A_1, ..., A_n}, producing output with
sort {A_1, ..., A_n}.

(Natural) join: This operator, denoted ⋈, takes arbitrary inputs I and J having sorts V and
W, respectively, and produces an output with sort equal to V ∪ W. In particular,

I ⋈ J = {t over V ∪ W | for some v ∈ I and w ∈ J, t[V] = v and t[W] = w}.

When sort(I) = sort(J), then I ⋈ J = I ∩ J, and when sort(I) ∩ sort(J) = ∅, then
I ⋈ J is the cross-product of I and J. The join operator is associative, commutative, and
has the nonempty 0-ary relation {⟨⟩} as left and right identity. Because it is associative, we
sometimes view join as a polyadic operator and write, for example, I_1 ⋈ ... ⋈ I_n.

As with cross-product and equi-join, natural join is extended to operate on pairs of
tuples, with an undefined result if the tuples do not match on the appropriate attributes.
Renaming: An attribute renaming for a finite set U of attributes is a one-one mapping from
U to att. An attribute renaming f for U can be described by specifying the set of pairs
(A, f(A)), where f(A) ≠ A; this is usually written as A_1 A_2 ... A_n → B_1 B_2 ... B_n to
indicate that f(A_i) = B_i for each i ∈ [1, n] (n ≥ 0). A renaming operator for inputs
over U is an expression ρ_f, where f is an attribute renaming for U; this maps to
outputs over f[U]. In particular, for I over U,

ρ_f(I) = {v over f[U] | for some u ∈ I, v(f(A)) = u(A) for each A ∈ U}.
Example 4.4.4 Let I, J be the two relations, respectively over R, S, given in Fig. 4.4.
Then I ⋈ J, σ_{A=1}(I), ρ_{BC→B′A}(J), and π_A(I) are also shown there. Let K be the one-
tuple relation {⟨A: 1, C: 9⟩}. Then π_{A,B}(I ⋈ K) coincides with σ_{A=1}(I), and J ⋈ K =
{⟨A: 1, B: 8, C: 9⟩}.
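A toy Python rendering of the four SPJR operators can replay the computations of this example, using the data of Fig. 4.4; tuples are modeled as dicts from attribute names to values (an illustration only, not an implementation):

```python
# Named tuples as Python dicts; a relation is a list of such dicts.

def natural_join(I, J):
    out = []
    for v in I:
        for w in J:
            shared = set(v) & set(w)           # common attributes
            if all(v[A] == w[A] for A in shared):
                t = {**v, **w}                 # concatenate, agreeing on shared
                if t not in out:
                    out.append(t)
    return out

def select(A, a, I):
    return [t for t in I if t[A] == a]

def project(attrs, I):
    out = []
    for t in I:
        u = {A: t[A] for A in attrs}
        if u not in out:
            out.append(u)
    return out

def rename(f, I):
    # f maps old attribute names to new ones (identity if absent)
    return [{f.get(A, A): t[A] for A in t} for t in I]

# The relations I and J of Fig. 4.4
I = [{"A": 1, "B": 2}, {"A": 4, "B": 2}, {"A": 6, "B": 6},
     {"A": 7, "B": 7}, {"A": 1, "B": 7}, {"A": 1, "B": 6}]
J = [{"B": 2, "C": 3}, {"B": 2, "C": 5}, {"B": 6, "C": 4}, {"B": 8, "C": 9}]

IJ = natural_join(I, J)     # the six tuples of [R join S] in Fig. 4.4
K = [{"A": 1, "C": 9}]
JK = natural_join(J, K)     # the single tuple {A: 1, B: 8, C: 9}
```

Running this reproduces the relations of Fig. 4.4, including J ⋈ K = {⟨A: 1, B: 8, C: 9⟩} from the example above.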
The base SPJR algebra queries are:

Input relation: Expression R; with sort equal to sort(R).
Unary singleton constant: Expression {⟨A: a⟩}, where a ∈ dom; with sort A.

The remainder of the syntax and semantics of the SPJR algebra is now defined in analogy
to those of the SPC algebra (see Exercise 4.8).
Example 4.4.5 Consider again Fig. 4.4. Let I be the instance over {R, S} such that
I(R) = I and I(S) = J. Then [R] is a query, and the answer to that query, denoted
[R](I), is just I. Figure 4.4 also gives the values of [S](I), [R ⋈ S](I), [σ_{A=1}(R)](I),
[ρ_{BC→B′A}(S)](I), and [π_A(R)](I). Let K_A = {⟨A: 1⟩} and K_C = {⟨C: 9⟩}. Then [K_A]
and [K_C] are constant queries, and [K_A ⋈ K_C] is a query that evaluates (on all inputs) to
the relation K of Example 4.4.4.
As with the SPC algebra, we introduce a natural generalization of the selection oper-
ator for the SPJR algebra. In particular, the notions of positive conjunctive selection for-
mula and positive conjunctive selection operator are defined for this context in complete
analogy to the unnamed case. Including this operator yields the generalized SPJR algebra.

R:  A  B        S:  B  C        [R ⋈ S]:  A  B  C
    1  2            2  3                   1  2  3
    4  2            2  5                   1  2  5
    6  6            6  4                   4  2  3
    7  7            8  9                   4  2  5
    1  7                                   6  6  4
    1  6                                   1  6  4

[σ_{A=1}(R)]:  A  B    [ρ_{BC→B′A}(S)]:  B′ A    [π_A(R)]:  A
               1  2                       2  3               1
               1  7                       2  5               4
               1  6                       6  4               6
                                          8  9               7

Figure 4.4: Examples of SPJR operators
A normal form result analogous to that for the SPC algebra is now developed. In
particular, an SPJR algebra expression is in normal form if it has the form

π_{B_1,...,B_n}({⟨A_1: a_1⟩} ⋈ ... ⋈ {⟨A_m: a_m⟩} ⋈ σ_F(ρ_{f_1}(R_1) ⋈ ... ⋈ ρ_{f_k}(R_k))),

where n ≥ 0; m ≥ 0; a_1, ..., a_m ∈ dom; each of A_1, ..., A_m occurs in B_1, ..., B_n; the
A_i's are distinct; R_1, ..., R_k are relation names (repeats permitted); ρ_{f_j} is a renaming
operator for sort(R_j) for each j ∈ [1, k] and no A_i occurs in any ρ_{f_j}(R_j); the sorts
of ρ_{f_1}(R_1), ..., ρ_{f_k}(R_k) are pairwise disjoint; and F is a positive conjunctive selection
formula. The following is easily verified (see Exercise 4.12).
Proposition 4.4.6 For each (generalized) SPJR query q, there is a generalized SPJR
query q′ in normal form such that q ≡ q′.

The set of SPJR queries not equivalent to q_∅ forms the satisfiable SPJR algebra.
Equivalence Theorem
We now turn to the main result of the chapter, showing the equivalence of the various
formalisms introduced so far for expressing conjunctive queries. As shown earlier, the three
logic-based versions of the conjunctive queries are equivalent. We now show that the SPC
and SPJR algebras are also equivalent to each other and then obtain the equivalence of the
algebraic languages and the three logic-based languages.

Lemma 4.4.7 The SPC and SPJR algebras are equivalent.

Crux We prove the inclusion SPC algebra ⊑ SPJR algebra; the converse is similar (see
Exercise 4.14). Let q be the following normal form SPC query:

π_{j_1,...,j_n}({⟨a_1⟩} × ... × {⟨a_m⟩} × σ_F(R_1 × ... × R_k)).
We now describe an SPJR query q′ that is equivalent to q; q′ has the following form:

π_{A_{j_1},...,A_{j_n}}({⟨A_1: a_1⟩} ⋈ ... ⋈ {⟨A_m: a_m⟩} ⋈ σ_G(ρ_{f_1}(R_1) ⋈ ... ⋈ ρ_{f_k}(R_k))).
We use the renaming functions so that the attributes of ρ_{f_t}(R_t) are A_s, ..., A_{s′}, where
s, ..., s′ are the coordinate positions of R_t in the expression R_1 × ... × R_k, and we modify F
into G accordingly. In a little more detail, for each t ∈ [1, k] let α(t) = m + Σ_{1≤s≤t} arity(R_s),
and let A_{m+1}, ..., A_{α(k)} be new attributes. For each t ∈ [1, k], choose ρ_{f_t} so that it maps
the i-th attribute of R_t to the attribute A_{α(t−1)+i}. To define G, first define the function γ from
coordinate positions to attribute names so that γ(j) = A_{m+j}, extend γ to be the identity on
constants, and extend it further in the natural manner to map unnamed selection formulas
to named selection formulas. Finally, set G = γ(F). It is now straightforward to verify that
q′ ≡ q.

It follows immediately from the preceding lemma that the satisfiable SPC algebra and
the satisfiable SPJR algebra are equivalent.
The equivalence between the two algebraic languages and the three logic-based lan-
guages holds with a minor caveat involving the empty query q_∅. As noted earlier, the SPC
and SPJR algebras can express q_∅, whereas the logic-based languages cannot, unless ex-
tended with equality. Hence the equivalence result is stated for the satisfiable SPC and
SPJR algebras.

Theorem 4.3.3 (i.e., the closure of the rule-based conjunctive queries under composi-
tion) is used in the proof of this result. The closures of the SPC and SPJR algebras under
composition are, of course, immediate.
Theorem 4.4.8 (Equivalence Theorem) The rule-based conjunctive queries, tableau
queries, conjunctive calculus queries, satisfiable SPC algebra, and satisfiable SPJR algebra
are equivalent.

Proof The proof can be accomplished using the following steps:

(i) satisfiable SPC algebra ⊑ rule-based conjunctive queries; and
(ii) rule-based conjunctive queries ⊑ satisfiable SPC algebra.

We briefly consider how steps (i) and (ii) might be demonstrated; the details are left
to the reader (Exercise 4.15). For (i), it is sufficient to show that each of the SPC algebra
operations can be simulated by a rule. Indeed, then the inclusion follows from the fact that
rule-based conjunctive queries are closed under composition by Theorem 4.3.3 and that
satisfiable rules with equality can be expressed as rules without equality. The simulation of
algebra operations by rules is as follows:

1. P × Q, where P and Q are not constant relations, corresponds to ans(x⃗, y⃗) ←
P(x⃗), Q(y⃗), where x⃗ and y⃗ contain no repeating variables; in the case when P
(Q) is a constant relation, x⃗ (y⃗) is the corresponding constant tuple.

2. σ_F(R) corresponds to ans(x⃗) ← R(σ_F(y⃗)), where y⃗ consists of distinct variables,
σ_F(y⃗) denotes the vector of variables and constants obtained by merging variables
of y⃗ with other variables or with constants according to the (satisfiable) selection
formula F, and x⃗ consists of the distinct variables in σ_F(y⃗).

3. π_{j_1,...,j_n}(R) corresponds to ans(x_{j_1}, ..., x_{j_n}) ← R(x_1, ..., x_m), where x_1, ..., x_m are
distinct variables.
Next consider step (ii). Let ans(x⃗) ← R_1(x⃗_1), ..., R_n(x⃗_n) be a rule. There is an equiv-
alent SPC algebra query in normal form that involves the cross-product of R_1, ..., R_n; a
selection reflecting the constants and repeating variables occurring in x⃗_1, ..., x⃗_n; a fur-
ther cross-product with constant relations corresponding to the constants in x⃗; and finally
a projection extracting the coordinates corresponding to x⃗.
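To make step (ii) concrete, consider the hypothetical rule ans(x, y) ← R(x, z), S(z, y). Its normal-form translation is π_{1,4}(σ_{2=3}(R × S)): the repeated variable z becomes the selection 2=3 over the cross-product, and the projection extracts the head coordinates. A minimal Python sketch with invented data:

```python
def cross(I, J):
    return {s + t for s in I for t in J}

def select_eq(j, k, I):
    # sigma_{j=k} with 1-based coordinates
    return {t for t in I if t[j - 1] == t[k - 1]}

def project(js, I):
    return {tuple(t[j - 1] for j in js) for t in I}

R = {(1, 2), (3, 4)}
S = {(2, 5), (4, 6), (9, 9)}

# ans(x, y) <- R(x, z), S(z, y)   as   pi_{1,4}(sigma_{2=3}(R x S))
ans = project([1, 4], select_eq(2, 3, cross(R, S)))   # -> {(1, 5), (3, 6)}
```

A rule with constants in the head would additionally cross in singleton constant relations before the final projection, exactly as the text describes.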
An alternative approach to showing step (i) of the preceding theorem is explored in
Exercise 4.18.
4.5 Adding Union
As indicated by their name, conjunctive queries are focused on selecting data based on
a conjunction of conditions. Indeed, each atom added to a rule potentially adds a further
restriction to the tuples produced by the rule. In this section we consider a natural mech-
anism for adding a disjunctive capability to the conjunctive queries. Specifically, we add
a union operator to the SPC and SPJR algebras, and we add natural analogs of it to the
rule-based and tableau-based paradigms. Incorporating union into the conjunctive calculus
raises some technical difficulties that are resolved in Chapter 5. This section also consid-
ers the evaluation of queries with union and introduces a more restricted mechanism for
incorporating a disjunctive capability.
We begin with some examples.
Example 4.5.1 Consider the following query:

(4.10) Where can I see Annie Hall or Manhattan?

Although this cannot be expressed as a conjunctive query (see Exercise 4.22), it is easily
expressed if union is added to the SPJR algebra:

π_{Theater}(σ_{Title="Annie Hall"}(Pariscope) ∪ σ_{Title="Manhattan"}(Pariscope)).
An alternative formulation of this uses an extended selection operator that permits disjunc-
tions in the selection condition:

π_{Theater}(σ_{Title="Annie Hall" ∨ Title="Manhattan"}(Pariscope)).

As a final algebraic alternative, this can be expressed in the original SPJR algebra but
permitting nonsingleton constant relations as base expressions:

π_{Theater}(Pariscope ⋈ {⟨Title: "Annie Hall"⟩, ⟨Title: "Manhattan"⟩}).
The rule-based formalism can accommodate this query by permitting more than one rule
with the same relation name in the head and taking the union of their outputs as the answer:

ans(x_t) ← Pariscope(x_t, "Annie Hall", x_s)
ans(x_t) ← Pariscope(x_t, "Manhattan", x_s).
Consider now the following query:

(4.11) What are the films with Allen as actor or director?

This query can be expressed using any of the preceding formalisms, except for the SPJR
algebra extended with nonsingleton constant relations as base expressions (see Exer-
cise 4.22).

Let I_1, I_2 be two relations with the same arity. As standard in mathematics, I_1 ∪ I_2
is the relation having this arity and containing the union of the two sets of tuples. The
definition of the SPCU algebra is obtained by extending the definition of the SPC algebra
to include the union operator. The SPJRU algebra is obtained in the same fashion, except
that union can only be applied to expressions having the same sort.
The SPCU and SPJRU algebras can be generalized by extending the selection oper-
ator (and join, in the case of SPC) as before. We can then define normal forms for both
algebras, which are expressions consisting of one or more normal form SPC (SPJR) ex-
pressions combined using a polyadic union operator (see Exercise 4.23). As suggested by
the previous example, disjunction can also be incorporated into selection formulas with no
increase in expressive power (see Exercise 4.22).

Turning now to rule-based conjunctive queries, the simplest way to incorporate the
capability of union is to consider sets of rules all having the same relation name in the
head. These queries are evaluated by taking the union of the output of the individual rules.

This can be generalized without increasing the expressive power by incorporating
something analogous to query composition. A nonrecursive datalog program (nr-datalog
program) over schema R is a set of rules
S_1 ← body_1
S_2 ← body_2
  ...
S_m ← body_m,

where no relation name in R occurs in a rule head; the same relation name may appear
in more than one rule head; and there is some ordering r_1, ..., r_m of the rules so that the
relation name in the head of r_i does not occur in the body of a rule r_j whenever j ≤ i.
The term nonrecursive is used because recursion is not permitted. A simple example
of a recursive rule is

ancestor(x, z) ← parent(x, y), ancestor(y, z).

A fixpoint operator is used to give the semantics for programs involving such rules. Recur-
sion is the principal topic of Part D.

As in the case of rule-based conjunctive query programs, the query is evaluated on
input I by evaluating each rule in (one of) the order(s) satisfying the foregoing property and
forming unions whenever two rules have the same relation name in their heads. Equality
atoms can be added to these queries, as they were for the rule-based conjunctive queries.
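Evaluation as just described, with each rule evaluated independently and the outputs of rules sharing a head relation name unioned, can be sketched as follows. The naive pattern matcher and the Pariscope tuples are invented for illustration; variables are marked with a leading "?":

```python
def eval_rule(head_vars, body, db):
    # body: list of (relation_name, pattern); a pattern entry is either a
    # constant or a variable (a string starting with "?", by convention here).
    results = set()

    def walk(atoms, env):
        if not atoms:
            results.add(tuple(env[v] for v in head_vars))
            return
        (rel, pattern), rest = atoms[0], atoms[1:]
        for t in db[rel]:
            e = dict(env)
            ok = True
            for p, val in zip(pattern, t):
                if isinstance(p, str) and p.startswith("?"):
                    if p in e and e[p] != val:   # variable bound differently
                        ok = False
                        break
                    e[p] = val
                elif p != val:                   # constant mismatch
                    ok = False
                    break
            if ok:
                walk(rest, e)

    walk(body, {})
    return results

db = {"Pariscope": {("Le Champo", "Annie Hall", "20:00"),
                    ("Odeon", "Manhattan", "22:00"),
                    ("Le Champo", "Interiors", "21:00")}}

# the two rules of Example 4.5.1, both with head relation "ans"
rules = [("ans", ["?t"], [("Pariscope", ("?t", "Annie Hall", "?s"))]),
         ("ans", ["?t"], [("Pariscope", ("?t", "Manhattan", "?s"))])]

out = set()
for _head, hvars, body in rules:
    out |= eval_rule(hvars, body, db)   # union rules with the same head
```

For an nr-datalog program, one would run this loop rule by rule in an admissible order, storing each head relation back into db so that later rules can read it.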
In general, a nonrecursive datalog program P over R is viewed as having a database
schema as target. Program P can also be viewed as a mapping from R to a single relation
(see Exercise 4.24).

Turning to tableau queries, a union of tableaux query over schema R (or R) is an
expression of the form ({T_1, ..., T_n}, u), where n ≥ 1 and (T_i, u) is a tableau query over
R for each i ∈ [1, n]. The semantics of these queries is obtained by evaluating the queries
(T_i, u) independently and then taking the union of their results. Equality is incorporated
into these queries by permitting each of the queries (T_i, u) to have equality.
We can now state (see Exercise 4.25) the following:

Theorem 4.5.2 The following have equivalent expressive power:

1. the nonrecursive datalog programs (with single relation target),
2. the SPCU queries,
3. the SPJRU queries.

The union of tableau queries is weaker than the aforementioned languages with union.
This is essentially because the definition of union of tableau queries does not allow separate
summary rows for each tableau in the union. With just one summary row, the nonrecursive
datalog query

ans(a) ←
ans(b) ←

cannot be expressed as a union of tableaux query.
As with conjunctive queries, it is easy to show that the conjunctive queries with union
and equality are closed under composition.
Union and the Conjunctive Calculus

At first glance, it would appear that the power of union can be added to the conjunctive
calculus simply by permitting disjunction (denoted ∨) along with conjunction as a binary
connective for formulas. This approach, however, can have serious consequences.

Example 4.5.3 Consider the following query:

q = {x, y, z | R(x, y) ∨ R(y, z)}.

Speaking intuitively, the answer of q on nonempty instance I will be (using a slight abuse
of notation)

q(I) = (I × dom) ∪ (dom × I).

This is an infinite set of tuples and thus not an instance according to the formal definition.

Informally, the query q of the previous example is not "safe." This notion is one of
the central topics that needs to be resolved when using the first-order predicate calculus as
a relational query language, and it is studied in Chapter 5. We return there to the issue of
adding union to the conjunctive calculus (see also Exercise 4.26).
Bibliographic Notes

Codd's pioneering article [Cod70] on the relational model introduces the first relational
query language, a named algebra. The predicate calculus was adapted to the relational
model in [Cod72b], where it was shown to be essentially equivalent to the algebra. The
conjunctive queries, in the calculus paradigm, were first introduced in [CM77]. Their
equivalence with the SPC algebra is also shown there.

Typed tableau queries appeared as a two-dimensional representation of a subset of the
conjunctive queries in [ASU79b], along with a proof that all typed restricted SPJ algebra
expressions over one relation can be expressed using them. A precursor to the typed tableau
queries is found in [ABU79], which uses a technique related to tableaux to analyze the join
operator. [ASU79a, ASSU81, CV81] continued the investigation of typed tableau queries;
[SY80] extends tableau queries to include union and a limited form of difference; and
[Klu88] extends them to include inequalities and order-based comparators. Tableau queries
have also played an important role in dependency theory; this will be discussed in Part C.

Many of the results in this chapter (including, for example, the equivalence of the SPC
and SPJR algebras and closure of conjunctive queries under composition) are essentially
part of the folklore.
Exercises
Exercise 4.1 Express queries (4.1–4.3) and (4.5–4.9) as (a) rule-based conjunctive queries,
(b) conjunctive calculus queries, (c) tableau queries, (d) SPC expressions, and (e) SPJR expressions.
Exercise 4.2 Let R be a database schema and q a rule.
(a) Prove that q(I) is nite for each instance I over R.
(b) Show an upper bound, given instance I of R and output arity for conjunctive query q,
for the number of tuples that can occur in q(I). Show that this bound can be achieved.
Exercise 4.3 Let R be a database schema and I an instance of R.
(a) Suppose that φ is a conjunctive calculus formula over R and ν is a valuation for
free(φ). Prove that I |= φ[ν] implies that the image of ν is contained in adom(I).
(b) Prove that if q is a conjunctive calculus query over R, then only a finite number
of valuations need to be considered when evaluating q(I). (Note: The presence of
existential quantifiers may have an impact on the set of valuations that need to be
considered.)
Exercise 4.4
(a) Let φ and ψ be equivalent conjunctive calculus formulas, and suppose that ξ′ is the
result of replacing an occurrence of φ by ψ in conjunctive calculus formula ξ. Prove
that ξ and ξ′ are equivalent.
(b) Prove that the application of the rewrite rules rename and merge-exists to a conjunctive
calculus formula yields an equivalent formula.
(c) Prove that these rules can be used to transform any conjunctive calculus formula into
an equivalent formula in normal form.
Exercise 4.5
(a) Formally dene the syntax and semantics of rule-based conjunctive queries with
equality and conjunctive calculus queries with equality.
(b) As noted in the text, logic-based conjunctive queries with equality can generally
yield infinite answers if not properly restricted. Give a definition for range-restricted
rule-based and conjunctive calculus queries with equality that ensures that queries
satisfying this condition always yield a finite answer.
(c) Prove for each rule-based conjunctive query with equality q that either q ≡ q∅ or
q ≡ q′ for some rule-based conjunctive query q′ without equality. Give a polynomial-time
algorithm that decides whether q ≡ q∅ and, if not, constructs an equivalent rule-based
conjunctive query q′.
(d) Prove that each rule-based conjunctive query with equality but no constants is equivalent
to a rule-based conjunctive query without equality.
Exercise 4.6 Extend the syntax of the conjunctive calculus to include equality. Give a syntactic
condition that ensures that the answer to a query q on I involves only constants from
adom(q, I) and such that the answer can be obtained by considering only valuations whose
range is contained in adom(q, I).
Exercise 4.7 Give a proof of Theorem 4.3.3.
Exercise 4.8
(a) Give a formal definition for the semantics of the SPC algebra.
(b) Give a formal definition for the syntax and semantics of the SPJR algebra.
Exercise 4.9 Consider the algebra consisting of all SPJR queries in which constants do not
occur.
(a) Define a normal form for this algebra.
(b) Is this algebra closed under composition?
(c) Is this algebra equivalent to the rule-based conjunctive queries without constants or
equality?
Exercise 4.10 Under the named perspective, a selection operator is constant based if it has
the form σ_{A=a}, where A ∈ att and a ∈ dom. Prove or disprove: Each SPJR algebra expression
is equivalent to an SPJR algebra expression all of whose selection operators are constant based.
Exercise 4.11 Prove that queries (4.6 and 4.8) cannot be expressed using the SPJ algebra (i.e.,
that renaming is needed).
Exercise 4.12
(a) Prove that the set of SPC transformations presented after the statement of Proposition 4.4.2
is sound (i.e., preserves equivalence).
(b) Prove Proposition 4.4.2.
(c) Prove that each SPJR query is equivalent to one in normal form. In particular, exhibit
a set of equivalence-preserving SPJR algebra transformations used to demonstrate
this result.
Exercise 4.13
(a) Prove that the nonempty 0-ary relation is the left and right identity for cross product
and for natural join.
(b) Prove that for a fixed relation schema S, there is an identity for union for relations
over S. What if S is not fixed?
(c) Let S be a relational schema. For the binary operations θ ∈ {∪, ⋈}, does there exist
a relation I such that I θ J = I for each relation J over S?
Exercise 4.14 Complete the proof of Lemma 4.4.7 by showing the inclusion SPJR algebra ⊑
SPC algebra.
Exercise 4.15
(a) Prove Proposition 4.2.9.
(b) Complete the proof of Theorem 4.4.8.
Exercise 4.16 Consider the problem of defining restricted versions of the SPC and SPJR
algebras that are equivalent to the rule-based conjunctive queries without equality. Find natural
restricted versions, or explain why they do not exist.
Exercise 4.17 Let q be a tableau query and q′ the SPC query corresponding to it via the
translation sketched in Theorem 4.4.8. If q has r rows and q′ has j joins of database
(nonconstant) relations, show that j = r − 1.
Exercise 4.18
(a) Develop an inductive algorithm that translates a satisfiable SPC query q into a tableau
query by associating a tableau query to each subquery of q.
(b) Do the same for SPJR queries.
(c) Show that if q is a satisfiable SPC (SPJR) query with n joins (not counting joins
involving constant relations), then the tableau of the corresponding tableau query
has n + 1 rows.
Exercise 4.19 [ASU79b] This exercise examines the connection between typed tableaux and
a subset of the SPJ algebra. A typed restricted SPJ algebra expression over R is an SPJR algebra
expression that uses only R as a base expression and only constant-based selection (i.e., having
the form σ_{A=a} for constant a), projection, and (natural) join as operators.
(a) Describe a natural algorithm that maps typed restricted SPJ queries q over R into
equivalent typed tableau queries q′ = (T, u) over R, where |T| = (the number of
join operations in q) + 1.
(b) Show that q = ({⟨x, y₁⟩, ⟨x₁, y₁⟩, ⟨x₁, y⟩}, ⟨x, y⟩) is not the image of any typed restricted
SPJ query under the algorithm of part (a).
(c) [ASSU81] Prove that the tableau query q of part (b) is not equivalent to any typed
restricted SPJ algebra expression.
Exercise 4.20 [ASU79b] A typed tableau query q = (T, u) with T over relation R is repeat
restricted if
1. If A ∈ sort(u), then no variable in π_A(T) − {u(A)} occurs more than once in T.
2. If A ∉ sort(u), then at most one variable in π_A(T) occurs more than once in T.
Prove that if q = (T, u) is a typed repeat-restricted tableau query over R, then there is a typed
restricted SPJ query q′ such that the image of q′ under the algorithm of Exercise 4.19 part (a) is
q.
Exercise 4.21 Extend Proposition 4.2.2 to include disjunction (i.e., union).
Exercise 4.22 The following query is used in this exercise:
(4.15) Produce a binary relation that includes all tuples ⟨t, excellent⟩ where t is a movie
directed by Allen, and all tuples ⟨t, superb⟩ where t is a movie directed by Hitchcock.
(a) Show that none of queries (4.10–4.15) can be expressed using the SPC or SPJR
algebras.
A positive selection formula for the SPC and SPJR algebras is a selection formula as before,
except that disjunction can be used in addition to conjunction. Define the S⁺PC algebra to be
the SPC algebra extended to permit arbitrary positive selection operators; and define the S⁺PJR
algebra analogously.
(b) Determine which of queries (4.10–4.15) can be expressed using the S⁺PJR algebra.
Define the SPC-1* algebra to be the SPC algebra, except that nonsingleton unary constant
relations can be used as base queries; and define the SPC-n* algebra to be the SPC algebra,
except that nonsingleton constant relations of arbitrary arity can be used as base queries. Define
the SPJR-1* and SPJR-n* algebras analogously.
(c) Determine which of queries (4.10–4.15) can be expressed using the SPJR-1* and
SPJR-n* algebras.
(d) Determine the relative expressive powers of the S⁺PC, SPC-1*, SPC-n*, and SPCU
algebras.
Exercise 4.23 Give precise definitions for normal forms for the SPCU and SPJRU algebras,
and prove that all expressions from these algebras have an equivalent in normal form.
Exercise 4.24 An nr-datalog program P is in normal form if all relation names in rule heads
are identical. Prove that each nonrecursive datalog query with single relation target has an
equivalent in normal form.
Exercise 4.25 Prove Theorem 4.5.2.
Exercise 4.26 Recall the discussion in Section 4.5 about disjunction in the conjunctive
calculus.
(a) Consider the query q = {x | φ(x)}, where
φ(x) ≡ R(x) ∨ ∃y, z(S(y, x) ∧ S(x, z)).
Let I be an instance over {R, S}. Using the natural extension of the notion of satisfies
to disjunction, show for each subformula ψ of φ with form ∃wξ, and each valuation ν
over free(ψ) with range contained in adom(I), that there exists c ∈ dom such that
I |= ξ[ν ∪ {w/c}] iff there exists c ∈ adom(I) such that I |= ξ[ν ∪ {w/c}]. Conclude
that this query can be evaluated by considering only valuations whose range is
contained in adom(I).
(b) The positive existential (relational) calculus is the relational calculus query language
in which query formulas are constructed using ∧, ∨, and ∃. Define a condition on positive
existential calculus queries that guarantees that the answer involves only constants
from adom(q, I) and such that the answer can be obtained by considering only
valuations whose range is contained in adom(q, I). Extend the restriction for the case
when equality is allowed in the calculus.
(c) Prove that the family of restricted positive existential calculus queries defined in the
previous part has expressive power equivalent to the rule-based conjunctive queries
with union and that this result still holds if equality is added to both families of
queries.
Exercise 4.27
(a) Consider as an additional algebraic operation, the difference. The semantics of
q − q′ is given by [q − q′](I) = q(I) − q′(I). Show that the difference cannot be
simulated in the SPCU or SPJRU algebras. (Hint: Use the monotonicity property of
these algebras.)
(b) Negation can be added to (generalized) selection formulas in the natural way; that
is, if γ is a selection formula, then so is ¬(γ). Give a precise definition for the
syntax and semantics of selection with negation. Prove that the SPCU algebra cannot
simulate selections of the form σ_{¬(1=2)}(R) or σ_{¬(1=a)}(R).
Exercise 4.28 Show that intersection can be expressed in the SPC algebra.
Exercise 4.29
(a) Prove that there is no redundant operation in the set {σ, π, ×, ∪} of unnamed
algebra operators (i.e., for each operator ω in the set, exhibit a schema and an
algebraic query q over that schema such that q cannot be expressed with {σ, π, ×, ∪} − {ω}).
(b) Prove the analogous result for the set of named operators {σ, π, ⋈, δ, ∪}.
Exercise 4.30 An inequality atom is an expression of the form x ≠ y or x ≠ a, where x, y
are variables and a is a constant. Assuming that the underlying domain has a total order, a
comparison atom is an expression of the form x θ y, x θ a, or a θ x, where θ ranges over <, ≤, >,
and ≥.
(a) Show that the family of rule-based conjunctive queries with equality and inequality
strictly dominates the family of rule-based conjunctive queries with equality.
(b) Assuming that the underlying domain has a total order, describe the relationships
between the expressive powers of the family of rule-based conjunctive queries with
equality; the family of rule-based conjunctive queries with equality and inequality;
the family of rule-based conjunctive queries with equality and comparison atoms;
and the family of rule-based conjunctive queries with equality, inequality, and com-
parison atoms.
(c) Develop analogous extensions and results for tableau queries, the conjunctive calculus,
and SPC and SPJR algebras.
Exercise 4.31 For some films, we may not want to store any actor name. Add to the domain a
constant ⊥ meaning “unknown information.” Propose an extension of the SPJR queries to handle
unknown information (see Chapter 19).
5 Adding Negation: Algebra and Calculus

Alice: Conjunctive queries are great. But what if I want to see a movie that
doesn't feature Woody Allen?
Vittorio: We have to introduce negation.
Sergio: It is basically easy.
Riccardo: But the calculus is a little feisty.
As indicated in the previous chapter, the conjunctive queries, even if extended by union,
cannot express queries such as the following:
(5.1) What are the Hitchcock movies in which Hitchcock did not play?
(5.2) What movies are featured at the Gaumont Opera but not at the Gaumont les
Halles?
(5.3) List those movies for which all actors of the movie have acted under Hitchcock's
direction.
This chapter explores how negation can be added to all forms of the conjunctive queries
(except for the tableau queries) to provide the power needed to express such queries. This
yields languages in the various paradigms that have the same expressive power. They in-
clude relational algebra, relational calculus, and nonrecursive datalog with negation. The
class of queries they express is often referred to as the first-order queries because relational
calculus is essentially first-order predicate calculus without function symbols. These
languages are of fundamental importance in database systems. They provide adequate power
for many applications and at the same time can be implemented with reasonable efficiency.
They constitute the basis for the standard commercial relational languages, such as SQL.
In the case of the algebras, negation is added using the set difference operator, yielding
the language(s) generally referred to as relational algebra (Section 5.1). In the case of
the rule-based paradigm, we consider negative literals in the bodies of rules, which are
interpreted as the absence of the corresponding facts; this yields nonrecursive datalog¬
(Section 5.2).
Adding negation in the calculus paradigm raises some serious problems that require
effort and care to resolve satisfactorily. In the development in this chapter, we proceed in
two stages. First (Section 5.3) we introduce the calculus, illustrate the problematic issues of
safety and domain independence, and develop some simple solutions for them. We also
show the equivalence between the algebra and the calculus at this point. The material in this
section provides a working knowledge of the calculus that is adequate for understanding
the study of its extensions in Parts D and E. The second stage in our study of the calculus
(Section 5.4) focuses on the important problem of finding syntactic restrictions on the
calculus that ensure domain independence.
The chapter concludes with brief digressions concerning how aggregate functions can
be incorporated into the algebra and calculus (Section 5.5), and concerning the emerging
area of constraint databases, which provide a natural mechanism for representing and
manipulating infinite databases in a finite manner (Section 5.6).
From the theoretical perspective, the most important aspects of this chapter include
the demonstration of the equivalence of the algebra and calculus (including a relatively
direct transformation of calculus queries into equivalent algebra ones) and the application
of the classical proof technique of structural induction used on both calculus formulas and
algebra expressions.
5.1 The Relational Algebras
Incorporating the difference operator, denoted −, into the algebras is straightforward. As
with union and intersection, this can only be applied to expressions that have the same sort,
in the named case, or arity, in the unnamed case.
Example 5.1.1 In the named algebra, query (5.1) is expressed by

π_{Title}(σ_{Director=“Hitchcock”}(Movies)) − π_{Title}(σ_{Actor=“Hitchcock”}(Movies)).
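This algebra expression can be mimicked directly with set comprehensions. The following is a minimal sketch, using an invented toy instance of Movies (title, director, actor triples), not data from the book:

```python
# Movies as a set of (title, director, actor) tuples -- invented sample data.
movies = {
    ("Psycho", "Hitchcock", "Perkins"),
    ("Psycho", "Hitchcock", "Hitchcock"),   # Hitchcock's cameo
    ("Vertigo", "Hitchcock", "Stewart"),
    ("Annie Hall", "Allen", "Keaton"),
}

# pi_Title(sigma_{Director=Hitchcock}(Movies))
directed = {t for (t, d, a) in movies if d == "Hitchcock"}
# pi_Title(sigma_{Actor=Hitchcock}(Movies))
acted = {t for (t, d, a) in movies if a == "Hitchcock"}

# Query (5.1): Hitchcock movies in which Hitchcock did not play.
print(directed - acted)  # -> {'Vertigo'}
```

Note that the two projections must have the same sort (here, a single Title column) before the difference is applied, exactly as required above.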
The unnamed relational algebra is obtained by adding the difference operator to the
SPCU algebra. It is conventional also to permit the intersection operator, denoted ∩, in
this algebra, because it is simulated easily using cross-product, select, and project or using
difference (see Exercise 5.4). Because union is present, nonsingleton constant relations
may be used in this algebra. Finally, the selection operator can be extended to permit
negation (see Exercise 5.4).
The named relational algebra is obtained in an analogous fashion, and similar
generalizations can be developed.
As shown in Exercise 5.5, the family of unnamed algebra operators {σ, π, ×, ∪, −} is
nonredundant, and the same is true for the named algebra operators {σ, π, ⋈, δ, ∪, −}. It
is easily verified that the algebras are not monotonic, nor are all algebra queries satisfiable
(see Exercise 5.6). In addition, the following is easily verified (see Exercise 5.7):
Proposition 5.1.2 The unnamed and named relational algebras have equivalent
expressive power.
The notion of composition of relational algebra queries can be defined in analogy
to the composition of conjunctive queries described in the previous chapter. It is easily
verified that the relational algebras, and hence the other equivalent languages presented in
this chapter, are closed under composition.
5.2 Nonrecursive Datalog with Negation
To obtain a rule-based language with expressive power equivalent to the relational algebra,
we extend nonrecursive datalog programs by permitting negative literals in rule bodies.
This yields the nonrecursive datalog with negation, also denoted nonrecursive datalog¬
and nr-datalog¬.
A nonrecursive datalog¬ (nr-datalog¬) rule is a rule of the form

q : S(u) ← L₁, …, Lₙ,

where S is a relation name, u is a free tuple of appropriate arity, and each Lᵢ is a literal [i.e.,
an expression of the form R(v) or ¬R(v), where R is a relation name and v is a free tuple
of appropriate arity, and where S does not occur in the body]. This rule is range restricted
if each variable x occurring in the rule occurs in at least one literal of the form R(v) in
the rule body. Unless otherwise specified, all datalog¬ rules considered are assumed to be
range restricted.
To give the semantics of the foregoing rule q, let R be a relation schema that includes
all of the relation names occurring in the body of the rule q, and let I be an instance of R.
Then the image of I under q is
q(I) = {ν(u) | ν is a valuation and for each i ∈ [1, n],
        ν(uᵢ) ∈ I(Rᵢ) if Lᵢ = Rᵢ(uᵢ), and
        ν(uᵢ) ∉ I(Rᵢ) if Lᵢ = ¬Rᵢ(uᵢ)}.

In general, this image can be expressed as a difference q₁ − q₂, where q₁ is an SPC query
and q₂ is an SPCU query (see Exercise 5.9).
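To make the rule semantics concrete, here is a small Python sketch (the representation, helper names, and data are ours, not the book's) that evaluates one range-restricted nr-datalog¬ rule by enumerating valuations over the active domain. For range-restricted rules this loses no answers, since every variable must be bound by a positive literal:

```python
from itertools import product

def is_var(t):
    # Convention for this sketch: variables are lowercase strings.
    return isinstance(t, str) and t.islower()

def eval_rule(head, body, instance):
    """Evaluate one range-restricted nr-datalog(not) rule.
    `body` is a list of (positive, relation_name, free_tuple) literals."""
    adom = {c for rel in instance.values() for tup in rel for c in tup}
    variables = sorted({t for _, _, u in body for t in u if is_var(t)})
    answer = set()
    for choice in product(adom, repeat=len(variables)):
        nu = dict(zip(variables, choice))
        subst = lambda u: tuple(nu.get(t, t) for t in u)
        # nu(u_i) in I(R_i) for positive literals, not in I(R_i) for negated ones
        if all((subst(u) in instance[r]) == pos for pos, r, u in body):
            answer.add(subst(head))
    return answer

# ans(x) <- Movies(x, Hitchcock, z), not Movies(x, Hitchcock, Hitchcock)
movies = {("Psycho", "Hitchcock", "Perkins"),
          ("Psycho", "Hitchcock", "Hitchcock"),
          ("Vertigo", "Hitchcock", "Stewart")}
body = [(True,  "Movies", ("x", "Hitchcock", "z")),
        (False, "Movies", ("x", "Hitchcock", "Hitchcock"))]
print(eval_rule(("x",), body, {"Movies": movies}))  # -> {('Vertigo',)}
```

Restricting the enumeration to the active domain is exactly what makes this terminate; without range restriction, a negated literal alone could be satisfied by infinitely many values.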
Equality may be incorporated by permitting literals of the form s = t and s ≠ t for
terms s and t. The notion of range restriction in this context is defined as it was for rule-based
conjunctive queries with equality. The semantics are defined in the natural manner.
To obtain the full expressive power of the relational algebras, we must consider sets
of nr-datalog¬ rules; these are analogous to the nr-datalog programs introduced in the
previous chapter. A nonrecursive datalog¬ program (with or without equality) over schema
R is a sequence

S₁ ← body₁
S₂ ← body₂
  ⋮
Sₘ ← bodyₘ

of nr-datalog¬ rules, where no relation name in R occurs in a rule head; the same relation
name may appear in more than one rule head; and there is some ordering r₁, …, rₘ of
the rules so that the relation name in the head of a rule rᵢ does not occur in the body
of a rule rⱼ whenever j ≤ i. The semantics of these programs are entirely analogous to
the semantics of nr-datalog programs. An nr-datalog¬ query is a query defined by some
nr-datalog¬ program with a specified target relation.
Example 5.2.1 Assume that each movie in Movies has one director. Query (5.1) is
answered by

ans(x) ← Movies(x, Hitchcock, z),
         ¬Movies(x, Hitchcock, Hitchcock).

Query (5.3) is answered by

Hitch-actor(z) ← Movies(x, Hitchcock, z)
not-ans(x) ← Movies(x, y, z), ¬Hitch-actor(z)
ans(x) ← Movies(x, y, z), ¬not-ans(x).

Care must be taken when forming nr-datalog¬ programs. Consider, for example, the
following program, which forms a kind of merging of the first two rules of the previous
program. (Intuitively, the first rule is a combination of the first two rules of the preceding
program, using variable renaming in the spirit of Example 4.3.1.)

bad-not-ans(x) ← Movies(x, y, z), ¬Movies(x′, Hitchcock, z),
                 Movies(x′, Hitchcock, z′)
ans(x) ← Movies(x, y, z), ¬bad-not-ans(x)

Rather than expressing query (5.3), it expresses the following:

(5.3′) (Assuming that all movies have only one director) list those movies for which all
actors of the movie acted in all of Hitchcock's movies.
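Evaluated rule by rule, the double-negation program for (5.3) can be traced with set comprehensions. This sketch uses an invented toy instance, not data from the book:

```python
# Invented sample instance; each movie has one director.
movies = {
    ("Psycho", "Hitchcock", "Perkins"),
    ("The Birds", "Hitchcock", "Hedren"),
    ("Fear", "Other", "Perkins"),       # its only actor acted under Hitchcock
    ("Sleuth", "Mankiewicz", "Caine"),  # Caine never acted under Hitchcock
}

# Hitch-actor(z) <- Movies(x, Hitchcock, z)
hitch_actor = {z for (x, d, z) in movies if d == "Hitchcock"}
# not-ans(x) <- Movies(x, y, z), not Hitch-actor(z)
not_ans = {x for (x, y, z) in movies if z not in hitch_actor}
# ans(x) <- Movies(x, y, z), not not-ans(x)
ans = {x for (x, y, z) in movies} - not_ans

print(sorted(ans))  # -> ['Fear', 'Psycho', 'The Birds']
```

The intermediate relation not-ans collects movies with at least one non-Hitchcock actor; subtracting it implements the universal quantification in (5.3) via double negation.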
It is easily verified that each nr-datalog¬ program with equality can be simulated by
an nr-datalog¬ program not using equality (see Exercise 5.10). Furthermore (see Exercise
5.11), the following holds:

Proposition 5.2.2 The relational algebras and the family of nr-datalog¬ programs that
have single relation output have equivalent expressive power.
5.3 The Relational Calculus
Adding negation in the calculus paradigm yields an extremely flexible query language,
which is essentially the predicate calculus of first-order logic (without function symbols).
However, this flexibility brings with it a nontrivial cost: If used without restriction, the
calculus can easily express queries whose answers are infinite. Much of the theoretical
development in this and the following section is focused on different approaches to make
the calculus safe (i.e., to prevent this and related problems). Although considerable effort
is required, it is a relatively small price to pay for the flexibility obtained.
This section first extends the syntax of the conjunctive calculus to the full calculus.
Then some intuitive examples are presented that illustrate how some calculus queries can
violate the principle of domain independence. A variety of approaches have been developed
to resolve this problem based on the use of both semantic and syntactic restrictions.
This section focuses on semantic restrictions. The first step in understanding these
is a somewhat technical definition based on relativized interpretation for the semantics
of (arbitrary) calculus queries; the semantics are defined relative to different underlying
domains (i.e., subsets of dom). This permits us to give a formal definition of domain
independence and leads to a family of different semantics for a given query.
The section closes by presenting the equivalence of the calculus under two of the
semantics with the algebra. This effectively closes the issue of expressive power of the
calculus, at least from a semantic point of view. One of the semantics for the calculus presented
here is the active domain semantics; this is particularly convenient in the development of
theoretical results concerning the expressive power of a variety of languages presented in
Parts D and E.
As noted in Chapter 4, the calculus presented in this chapter is sometimes called the
domain calculus because the variables range over elements of the underlying domain of
values. Exercise 5.23 presents the tuple calculus, whose variables range over tuples, and
its equivalence with the domain calculus and the algebra. The tuple calculus and its variants
are often used in practice. For example, the practical languages SQL and Quel can be
viewed as using tuple variables.
Well-Formed Formulas, Revisited
We obtain the relational calculus from the conjunctive calculus with equality by adding
negation (¬), disjunction (∨), and universal quantification (∀). (Explicit equality is needed
to obtain the full expressive power of the algebras; see Exercise 5.12.) As will be seen, both
disjunction and universal quantification can be viewed as consequences of adding negation,
because φ ∨ ψ ≡ ¬(¬φ ∧ ¬ψ) and ∀xφ ≡ ¬∃x¬φ.
The formal definition of the syntax of the relational calculus is a straightforward
extension of that for the conjunctive calculus given in the previous chapter. We include
the full definition here for the reader's convenience. A term is a constant or a variable. For
a given input schema R, the base formulas include, as before, atoms over R and equality
(inequality) atoms of the form e = e′ (e ≠ e′) for terms e, e′. The (well-formed) formulas
of the relational calculus over R include the base formulas and formulas of the form
(a) (φ ∧ ψ), where φ and ψ are formulas over R;
(b) (φ ∨ ψ), where φ and ψ are formulas over R;
(c) ¬φ, where φ is a formula over R;
(d) ∃xφ, where x is a variable and φ a formula over R;
(e) ∀xφ, where x is a variable and φ a formula over R.
As with conjunctive calculus,

∃x₁, x₂, …, xₘ φ abbreviates ∃x₁∃x₂ … ∃xₘ φ, and
∀x₁, x₂, …, xₘ φ abbreviates ∀x₁∀x₂ … ∀xₘ φ.

It is sometimes convenient to view the binary connectives ∧ and ∨ as polyadic connectives.
In some contexts, e ≠ e′ is viewed as an abbreviation of ¬(e = e′).
It is often convenient to include two additional logical connectives, implies (→) and
is equivalent to (↔). We view these as syntactic abbreviations as follows:

φ → ψ abbreviates ¬φ ∨ ψ;
φ ↔ ψ abbreviates (φ → ψ) ∧ (ψ → φ).
The notions of free and bound occurrences of variables in a formula, and of free(φ)
for formula φ, are defined analogously to their definition for the conjunctive calculus. In
addition, the notion of relational calculus query is defined, in analogy to the notion of
conjunctive calculus query, to be an expression of the form

{e₁, …, eₘ : A₁, …, Aₘ | φ}, in the named perspective,
{e₁, …, eₘ | φ}, in the unnamed perspective or if the sort is understood from the context,

where e₁, …, eₘ are terms, repeats permitted, and where the set of variables occurring in
e₁, …, eₘ is exactly free(φ).
Example 5.3.1 Suppose that each movie has just one director. Query (5.1) can be expressed
in the relational calculus as

{x_t | ∃x_a Movies(x_t, Hitchcock, x_a) ∧ ¬Movies(x_t, Hitchcock, Hitchcock)}.

Query (5.3) is expressed by

{x_t | ∃x_d, x_a Movies(x_t, x_d, x_a) ∧
       ∀y_a(∃y_d Movies(x_t, y_d, y_a) → ∃z_t Movies(z_t, Hitchcock, y_a))}.

The first conjunct ensures that the variable x_t ranges over titles in the current value of
Movies, and the second conjunct enforces the condition on actors of the movie identified
by x_t.
Unsafe Queries
Before presenting the alternative semantics for the relational calculus, we present an
intuitive indication of the kinds of problems that arise if the conventional definitions from
predicate calculus are adapted directly to the current context.
The fundamental problems of using the calculus are illustrated by the following
expressions:

(unsafe-1) {x | ¬Movies(“Cries and Whispers”, “Bergman”, x)}
(unsafe-2) {x, y | Movies(“Cries and Whispers”, “Bergman”, x) ∨
                   Movies(y, “Bergman”, “Ullman”)}.

If the usual semantics of predicate calculus are adapted directly to this context, then
the query (unsafe-1) produces all tuples ⟨a⟩ where a ∈ dom and ⟨“Cries and Whispers”,
“Bergman”, a⟩ is not in the input. Because all input instances are by definition finite, the
query yields an infinite set on all input instances. The same is true of query (unsafe-2), even
though it does not use explicit negation.
An intuitively appealing approach to resolving this problem is to view the different
relation columns as typed and to insist that variables occurring in a given column range
over only values of the appropriate type. For example, this would imply that the answer to
query (unsafe-1) is restricted to the set of actors. This approach is not entirely satisfactory
because query answers now depend on the domains of the types. For example, different
answers are obtained if the type Actor includes all and only the current actors [i.e., persons
occurring in π_{Actor}(Movies)] or includes all current and potential actors. This illustrates
that query (unsafe-1) is not independent of the underlying domain within which the query
is interpreted (i.e., it is not domain independent). The same is true of query (unsafe-2).
Even if the underlying domain is finite, users will typically not know the exact contents
of the domains used for each variable. In this case it would be disturbing to have the result
of a user query depend on information not directly under the user's control. This is another
argument for permitting only domain-independent queries.
A related but more subtle problem arises with regard to the interpretation of quantified
variables. Consider the query

(unsafe-3) {x | ∀y R(x, y)}.

The answer to this query is necessarily finite because it is a subset of π₁(R). However, the
query is not domain independent. To see why, note that if y is assumed to range over all
of dom, then the answer is always the empty relation. On the other hand, if the underlying
domain of interpretation is finite, it is possible that the answer will be nonempty. (This
occurs, for example, if the domain is {1, …, 5}, and the input for R is {⟨3, 1⟩, …, ⟨3, 5⟩}.)
So again, this query depends on the underlying domain(s) being used (for the different
variables) and is not under the user's control.
There is a further difficulty of a more practical nature raised by query (unsafe-3).
Specifically, if the intuitively appealing semantics of the predicate calculus are used, then
the naive approach to evaluating quantifiers leads to the execution of potentially infinite
procedures. Although the proper answer to such queries can be computed in a finite manner
(see Theorem 5.6.1), this is technically intricate.
The following example indicates how easy it is to form an unsafe query mistakenly in
practice.
Example 5.3.2 Recall the calculus query answering query (5.3) in Example 5.3.1. Suppose
that the first conjunct of that query is omitted to obtain the following:

{x_t | ∀y_a(∃y_d Movies(x_t, y_d, y_a) → ∃z_t Movies(z_t, Hitchcock, y_a))}.

This query returns all titles of movies that have the specified property and also all elements
of dom not occurring in π_{Title}(Movies). Even if x_t were restricted to range over the set of
actual and potential movie titles, it would not be domain independent.
Relativized Interpretations
We now return to the formal development. As the first step, we present a definition that will
permit us to talk about calculus queries in connection with different underlying domains.
Under the conventional semantics associated with predicate calculus, quantified variables
range over all elements of the underlying domain, in our case, dom. For our purposes,
however, we generalize this notion to permit explicit specification of the underlying domain
to use (i.e., over which variables may range).
A relativized instance over schema R is a pair (d, I), where I is an instance over R and
adom(I) ⊆ d ⊆ dom. A calculus formula φ is interpretable over (d, I) if adom(φ) ⊆ d. In
this case, if ν is a valuation over free(φ) with range contained in d, then I satisfies φ for ν
relative to d, denoted I |=_d φ[ν], if
(a) φ = R(u) is an atom and ν(u) ∈ I(R);
(b) φ = (s = s′) is an equality atom and ν(s) = ν(s′);
(c) φ = (ψ ∧ ξ) and I |=_d ψ[ν|_free(ψ)]¹ and I |=_d ξ[ν|_free(ξ)];
(d) φ = (ψ ∨ ξ) and I |=_d ψ[ν|_free(ψ)] or I |=_d ξ[ν|_free(ξ)];
(e) φ = ¬ψ and I ⊭_d ψ[ν] (i.e., I |=_d ψ[ν] does not hold);
(f) φ = ∃xψ and for some c ∈ d, I |=_d ψ[ν ∪ {x/c}]; or
(g) φ = ∀xψ and for each c ∈ d, I |=_d ψ[ν ∪ {x/c}].
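Clauses (a) through (g) translate directly into a recursive evaluator. In the following sketch (the nested-tuple encoding of formulas is our own, not the book's), the underlying domain d is an explicit parameter, which makes the domain sensitivity of queries such as (unsafe-3) easy to observe:

```python
def sat(I, d, phi, nu):
    """I |=_d phi[nu], following clauses (a)-(g); phi is a nested tuple,
    nu a dict mapping variables (strings) to domain elements."""
    op = phi[0]
    if op == "atom":                       # (a): nu(u) in I(R)
        _, R, u = phi
        return tuple(nu.get(t, t) for t in u) in I[R]
    if op == "eq":                         # (b): nu(s) = nu(s')
        _, s, t = phi
        return nu.get(s, s) == nu.get(t, t)
    if op == "and":                        # (c)
        return sat(I, d, phi[1], nu) and sat(I, d, phi[2], nu)
    if op == "or":                         # (d)
        return sat(I, d, phi[1], nu) or sat(I, d, phi[2], nu)
    if op == "not":                        # (e)
        return not sat(I, d, phi[1], nu)
    if op == "exists":                     # (f): some c in d
        _, x, psi = phi
        return any(sat(I, d, psi, {**nu, x: c}) for c in d)
    if op == "forall":                     # (g): each c in d
        _, x, psi = phi
        return all(sat(I, d, psi, {**nu, x: c}) for c in d)
    raise ValueError(op)

# (unsafe-3) {x | forall y R(x, y)} is sensitive to the underlying domain d:
I = {"R": {(3, 1), (3, 2)}}
phi = ("forall", "y", ("atom", "R", ("x", "y")))
print(sat(I, {1, 2}, phi, {"x": 3}))     # True:  d = {1, 2}
print(sat(I, {1, 2, 3}, phi, {"x": 3}))  # False: d = {1, 2, 3} would need R(3, 3)
```

For simplicity the sketch passes the full valuation to subformulas rather than restricting it to their free variables; the result is the same since unused bindings are ignored.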
The notion of “satisfies … relative to” just presented is equivalent to the usual notion
of satisfaction found in first-order logic, where the set d plays the role of the universe of
discourse in first-order logic. In practical database settings it is most natural to assume that
the underlying universe is dom; for this reason we use specialized terminology here.
Recall that for a query q and input instance I, we denote adom(q) ∪ adom(I) by
adom(q, I), and the notation adom(φ, I) for formula φ is defined analogously.
We can now dene the relativized semantics for the calculus. Let R be a schema,
q ={e
1
, . . . , e
n
| } a calculus query over R, and (d, I) a relativized instance over R. Then
1
|
V
for variable set V denotes the restriction of to V.
78 Adding Negation: Algebra and Calculus
the image of I under q relative to d is

q_d(I) = {ν(e_1, ..., e_n) | I ⊨_d φ[ν], ν a valuation over free(φ) with range contained in d}.

Note that if d is infinite, then this image may be an infinite set of tuples.

As a minor generalization, for arbitrary d ⊆ dom, the image of q on I relative to d is defined by

q_d(I) = q_{d ∪ adom(q,I)}(I).
Example 5.3.3 Consider the query

q = {x | R(x) ∧ ∃y(¬R(y) ∧ ∀z(R(z) ∨ z = y))}.

Then

q_dom(I) = {} for any instance I over R
q_{1,2,3,4}(J_1) = {} for J_1 = {1, 2} over R
q_{1,2,3,4}(J_2) = J_2 for J_2 = {1, 2, 3} over R
q_{1,2,3,4}(J_3) = {} for J_3 = {1, 2, 3, 4} over R
q_{1,2,3,4}(J_4) = J_4 for J_4 = {1, 2, 3, 5} over R.

This illustrates that under an interpretation relative to a set d, the result of a calculus query q on input I may be affected by |d − adom(q, I)|.
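This relativized semantics is directly executable for a fixed query. The following Python sketch (the function name q_rel and the data layout are ours, not the book's) evaluates the query q of Example 5.3.3 on a unary relation R relative to the underlying domain d ∪ adom(q, I):

```python
def q_rel(R, d):
    """Evaluate q = {x | R(x) and ∃y(¬R(y) and ∀z(R(z) or z = y))}
    relative to the underlying domain d ∪ adom(q, I)."""
    R = set(R)
    dd = set(d) | R                     # d ∪ adom(q, I)
    # The existential part holds iff exactly one element of dd lies outside R.
    witness = any(y not in R and all(z in R or z == y for z in dd) for y in dd)
    return R if witness else set()

# The four relativized images computed in Example 5.3.3.
# (q_dom(I) is always {}: infinitely many elements of dom lie outside R.)
print(q_rel({1, 2}, {1, 2, 3, 4}))        # J1: both 3 and 4 lie outside R
print(q_rel({1, 2, 3}, {1, 2, 3, 4}))     # J2: only 4 lies outside R
print(q_rel({1, 2, 3, 4}, {1, 2, 3, 4}))  # J3: no element lies outside R
print(q_rel({1, 2, 3, 5}, {1, 2, 3, 4}))  # J4: only 4 lies outside R
```

The sketch makes the dependence on |d − adom(q, I)| concrete: the answer flips between ∅ and the whole of R as the number of "outside" elements changes.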
It is important to note that the semantics of algebra and datalog¬ queries q evaluated on instance I are independent of whether dom or some subset d satisfying adom(q, I) ⊆ d ⊆ dom is used as the underlying domain.
The Natural and Active Domain Semantics for Calculus Queries

The relativized semantics for calculus formulas immediately yields two important semantics for calculus queries. The first of these corresponds most closely to the conventional interpretation of predicate calculus and is thus perhaps the intuitively most natural semantics for the calculus.

Definition 5.3.4 For calculus query q and input instance I, the natural (or unrestricted) interpretation of q on I, denoted q_nat(I), is q_dom(I) if this is finite and is undefined otherwise. (Unlike the convention of first-order logic, interpretations over an empty underlying domain are permitted; this arises only with empty instances.)
The second interpretation is based on restricting quantified variables to range over the active domain of the query and the input. Although this interpretation is unnatural from the practical perspective, it has the advantage that the output is always defined (i.e., finite). It is also a convenient semantics for certain theoretical developments.

Definition 5.3.5 For calculus query q and input instance I, the active domain interpretation of q on I, denoted q_adom(I), is q_{adom(q,I)}(I). The family of mappings obtained from calculus queries under the active domain interpretation is denoted CALC_adom.

Example 5.3.6 Recall query (unsafe-2). Under the natural interpretation on input the instance I shown in Chapter 3, this query yields the undefined result. On the other hand, under the active domain interpretation it yields as output (written informally) ({actors in Cries and Whispers} × adom(I)) ∪ (adom(I) × {movies by Bergman featuring Ullman}), which is finite and defined.
Domain Independence

As noted earlier, there are two difficulties with the natural interpretation of the calculus from a practical point of view: (1) it is easy to write queries with undefined output, and (2) even if the output is defined, the naive approach to computing it may involve consideration of quantifiers ranging over an infinite set. The active domain interpretation solves these problems but generally makes the answer dependent on information (the active domain) not readily available to users. One approach to resolving this situation is to restrict attention to the class of queries that yield the same output on all possible underlying domains.

Definition 5.3.7 A calculus query q is domain independent if for each input instance I, and each pair d, d′ ⊆ dom, q_d(I) = q_{d′}(I). If q is domain independent, then the image of q on input instance I, denoted simply q(I), is q_dom(I) [or equivalently, q_adom(I)]. The family of mappings obtained from domain-independent calculus queries is denoted CALC_di.

In particular, if q is domain independent, then the output according to the natural interpretation can be obtained by computing the active domain interpretation. Thus,

Lemma 5.3.8 CALC_di ⊑ CALC_adom.
Example 5.3.9 The two calculus queries of Example 5.3.1 are domain independent, and
the query of Example 5.3.2 is not (see Exercise 5.15).
Equivalence of Algebra and Calculus
We now demonstrate the equivalence of the various languages introduced so far in this
chapter.
Theorem 5.3.10 (Equivalence Theorem) The domain-independent calculus, the calculus under active domain semantics, the relational algebras, and the family of nr-datalog¬ programs that have single-relation output have equivalent expressive power.

Proposition 5.2.2 shows that nr-datalog¬ and the algebras have equivalent expressive power. In addition, Lemma 5.3.8 shows that CALC_di ⊑ CALC_adom. To complete the proof, we demonstrate that

(i) algebra ⊑ CALC_di (Lemma 5.3.11)
(ii) CALC_adom ⊑ algebra (Lemma 5.3.12).
Lemma 5.3.11 For each unnamed algebra query, there is an equivalent domain-independent calculus query.

Proof Let q be an unnamed algebra query with arity n. We construct a domain-independent query q′ = {x_1, ..., x_n | φ_q} that is equivalent to q. The formula φ_q is constructed using an induction on subexpressions of q. In particular, for subexpression E of q, we define φ_E according to the following cases:

(a) E is R for some R ∈ R: φ_E is R(x_1, ..., x_{arity(R)}).

(b) E is {u_1, ..., u_m}, where each u_j is a tuple of arity α: φ_E is

    (x_1 = u_1(1) ∧ ... ∧ x_α = u_1(α)) ∨ ... ∨ (x_1 = u_m(1) ∧ ... ∧ x_α = u_m(α)).

(c) E is σ_F(E_1): φ_E is φ_{E_1} ∧ ψ_F, where ψ_F is the formula obtained from F by replacing each coordinate identifier i by variable x_i.

(d) E is π_{i_1,...,i_n}(E_1): φ_E is

    ∃y_{i_1}, ..., y_{i_n}((x_1 = y_{i_1} ∧ ... ∧ x_n = y_{i_n}) ∧ ∃y_{j_1} ... ∃y_{j_l} φ_{E_1}(y_1, ..., y_{arity(E_1)})),

    where j_1, ..., j_l is a listing of [1, arity(E_1)] − {i_1, ..., i_n}.

(e) E is E_1 × E_2: φ_E is φ_{E_1} ∧ φ_{E_2}(x_{arity(E_1)+1}, ..., x_{arity(E_1)+arity(E_2)}).

(f) E is E_1 ∪ E_2: φ_E is φ_{E_1} ∨ φ_{E_2}.

(g) E is E_1 − E_2: φ_E is φ_{E_1} ∧ ¬φ_{E_2}.

We leave verification of this construction and the properties of q′ to the reader (see Exercise 5.13a).
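The inductive construction in this proof is mechanical, and a fragment of it is easy to prototype. The sketch below (our own encoding of unnamed-algebra expressions as nested tuples; tuple constants and projection are omitted to stay short) emits the calculus formula φ_E as a string for cases (a), (c), (e), (f), and (g):

```python
def phi(E, xs):
    """Formula phi_E for unnamed-algebra subexpression E, given one free
    variable per column of E (cases (a), (c), (e), (f), (g) of the proof)."""
    op = E[0]
    if op == "rel":                       # (a) base relation
        return f"{E[1]}({','.join(xs)})"
    if op == "select":                    # (c) sigma_{i=j}: add an equality conjunct
        _, (i, j), E1 = E
        return f"({phi(E1, xs)} ∧ {xs[i - 1]}={xs[j - 1]})"
    if op == "product":                   # (e) split the variables between factors
        _, E1, a1, E2 = E                 # a1 = arity(E1)
        return f"({phi(E1, xs[:a1])} ∧ {phi(E2, xs[a1:])})"
    if op == "union":                     # (f)
        return f"({phi(E[1], xs)} ∨ {phi(E[2], xs)})"
    if op == "diff":                      # (g)
        return f"({phi(E[1], xs)} ∧ ¬{phi(E[2], xs)})"
    raise ValueError(op)

# R − σ_{1=2}(R) becomes a calculus formula over x1, x2:
q = ("diff", ("rel", "R"), ("select", (1, 2), ("rel", "R")))
print(phi(q, ["x1", "x2"]))   # (R(x1,x2) ∧ ¬(R(x1,x2) ∧ x1=x2))
```

The output formula is domain independent for the same reason as in the proof: every variable is introduced by a base-relation atom before it is constrained.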
Lemma 5.3.12 For each calculus query q, there is a query in the unnamed algebra that is equivalent to q under the active domain interpretation.

Crux Let q = {x_1, ..., x_n | φ} be a calculus query over R. It is straightforward to develop a unary algebra query E_adom such that for each input instance I,

E_adom(I) = {⟨a⟩ | a ∈ adom(q, I)}.
Next an inductive construction is performed. To each subformula ψ(y_1, ..., y_m) of φ this associates an algebra expression E_ψ with the property that (abusing notation slightly)

{y_1, ..., y_m | ψ}_{adom(q,I)}(I) = E_ψ(I) ⊆ (adom(q, I))^m.

[This may be different from using the active domain semantics on ψ, because we may have adom(ψ, I) ⊊ adom(q, I).] It is clear that E_φ is equivalent to q under the active domain semantics.

We now illustrate a few cases of the construction of expressions E_ψ and leave the rest for the reader (see Exercise 5.13b). Suppose that ψ is a subformula of φ. Then E_ψ is constructed in the following manner:
(a) ψ(y_1, ..., y_m) is R(t_1, ..., t_l), where each t_i is a constant or one of the y_j's: Then E_ψ ≡ π_k⃗(σ_F(R)), where the projection list k⃗ and the selection formula F are chosen in accordance with the y_j's and t_i's.

(b) ψ(y_1, y_2) is y_1 = y_2: E_ψ is σ_{1=2}(E_adom × E_adom).

(c) ψ(y_1, y_2, y_3) is ψ′(y_1, y_2) ∧ ψ″(y_2, y_3): E_ψ is (E_{ψ′} × E_adom) ∩ (E_adom × E_{ψ″}).

(d) ψ(y_1, ..., y_m) is ¬ψ′(y_1, ..., y_m): E_ψ is (E_adom × ... × E_adom) − E_{ψ′}, with m copies of E_adom.
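Cases (b) and (d) can be checked concretely. This minimal Python sketch (the function names are ours) builds E_adom from the input relations and forms the equality and negation expressions exactly as in the construction:

```python
from itertools import product

def e_adom(*relations):
    """E_adom(I): the unary relation of all active-domain elements."""
    return {(a,) for R in relations for t in R for a in t}

def eq_expr(d):
    """Case (b): sigma_{1=2}(E_adom × E_adom)."""
    return {(a, b) for (a,) in d for (b,) in d if a == b}

def not_expr(e_psi, d, m):
    """Case (d): (E_adom × ... × E_adom) − E_psi, with m copies of E_adom."""
    cols = [a for (a,) in d]
    return set(product(cols, repeat=m)) - e_psi

R = {(1, 2), (2, 3)}
d = e_adom(R)                        # {(1,), (2,), (3,)}
print(not_expr({(1,), (2,)}, d, 1))  # complement of a unary E_psi within adom
```

Because every complement is taken relative to the (finite) active domain, the result stays finite even though the calculus negation, read over dom, would not.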
5.4 Syntactic Restrictions for Domain Independence

As seen in the preceding section, to obtain the natural semantics for calculus queries, it is desirable to focus on domain-independent queries. However, as will be seen in the following chapter (Section 6.3), it is undecidable whether a given calculus query is domain independent. This has led researchers to develop syntactic conditions that ensure domain independence, and many such conditions have been proposed.

Several criteria affect the development of these conditions, including their generality, their simplicity, and the ease with which queries satisfying the conditions can be translated into the relational algebra or other lower-level representations. We present one such condition here, called safe range, that is relatively simple but that illustrates the flavor and theoretical properties of many of these conditions. It will serve as a vehicle to illustrate one approach to translating these restricted queries into the algebra. Other examples are explored in Exercises 5.25 and 5.26; translations of these into the algebra are considerably more involved.

This section begins with a brief digression concerning equivalence-preserving rewrite rules for the calculus. Next the family CALC_sr of safe-range queries is introduced. It is shown easily that the algebra ⊑ CALC_sr. A rather involved construction is then presented for transforming safe-range queries into the algebra. The section concludes by defining a variant of the calculus that is equivalent to the conjunctive queries with union.
1.  φ ∧ ψ ↔ ψ ∧ φ
2.  φ_1 ∧ ... ∧ φ_n ∧ (φ_{n+1} ∧ φ_{n+2}) ↔ φ_1 ∧ ... ∧ φ_n ∧ φ_{n+1} ∧ φ_{n+2}
3.  φ ∨ ψ ↔ ψ ∨ φ
4.  φ_1 ∨ ... ∨ φ_n ∨ (φ_{n+1} ∨ φ_{n+2}) ↔ φ_1 ∨ ... ∨ φ_n ∨ φ_{n+1} ∨ φ_{n+2}
5.  ¬(φ ∧ ψ) ↔ ¬(φ) ∨ ¬(ψ)
6.  ¬(φ ∨ ψ) ↔ ¬(φ) ∧ ¬(ψ)
7.  ¬(¬φ) ↔ φ
8.  ∃xφ ↔ ¬∀x¬φ
9.  ∀xφ ↔ ¬∃x¬φ
10. ¬∃xφ ↔ ∀x¬φ
11. ¬∀xφ ↔ ∃x¬φ
12. ∃xφ ∧ ψ ↔ ∃x(φ ∧ ψ) (x not free in ψ)
13. ∀xφ ∧ ψ ↔ ∀x(φ ∧ ψ) (x not free in ψ)
14. ∃xφ ∨ ψ ↔ ∃x(φ ∨ ψ) (x not free in ψ)
15. ∀xφ ∨ ψ ↔ ∀x(φ ∨ ψ) (x not free in ψ)
16. ∃xφ ↔ ∃y φ^x_y (y not free in φ)
17. ∀xφ ↔ ∀y φ^x_y (y not free in φ)

Figure 5.1: Equivalence-preserving rewrite rules for calculus formulas (φ^x_y denotes the result of replacing the free occurrences of x in φ by y)
Equivalence-Preserving Rewrite Rules

We now digress for a moment to present a family of rewrite rules for the calculus. These preserve equivalence regardless of the underlying domain used to evaluate calculus queries. Several of these rules will be used in the transformation of safe-range queries into the algebra.

Calculus formulas φ, ψ over schema R are equivalent, denoted φ ≡ ψ, if for each I over R, d ⊆ dom, and valuation ν with range contained in d,

I ⊨_{d ∪ adom(φ,I)} φ[ν] if and only if I ⊨_{d ∪ adom(ψ,I)} ψ[ν].

(It is verified easily that this generalizes the notion of equivalence for conjunctive calculus formulas.)

Figure 5.1 shows a number of equivalence-preserving rewrite rules for calculus formulas. It is straightforward to verify that if φ transforms to φ′ by a rewrite rule and if ψ′ is the result of replacing an occurrence of subformula φ of ψ by formula φ′, then ψ ≡ ψ′ (see Exercise 5.14).

Note that, assuming x ∉ free(ψ) and y ∉ free(φ),

∃xφ ∧ ∃yψ ≡ ∃x∃y(φ ∧ ψ) ≡ ∃y∃x(φ ∧ ψ).
Example 5.4.1 Recall from Chapter 2 that a formula φ is in prenex normal form (PNF) if it has the form Q_1 x_1 ... Q_n x_n ψ, where each Q_i is either ∀ or ∃, and no quantifiers occur in ψ. In this case, ψ is called the matrix of formula φ.

A formula without quantifiers or the connectives → or ↔ is in conjunctive normal form (CNF) if it has the form ξ_1 ∧ ... ∧ ξ_m (m ≥ 1), where each conjunct ξ_j has the form L_1 ∨ ... ∨ L_k (k ≥ 1) and where each L_l is a literal (i.e., atom or negated atom). Similarly, a formula without quantifiers or the connectives → or ↔ is in disjunctive normal form (DNF) if it has the form ξ_1 ∨ ... ∨ ξ_m, where each disjunct ξ_j has the form L_1 ∧ ... ∧ L_k, where each L_l is a literal (i.e., atom or negated atom).

It is easily verified (see Exercise 5.14) that the rewrite rules can be used to transform an arbitrary calculus formula into an equivalent formula that is in PNF with a CNF matrix, and into an equivalent formula that is in PNF with a DNF matrix.
Safe-Range Queries

The notion of safe range is presented now in three stages, involving (1) a normal form called SRNF, (2) a mechanism for determining how variables are range restricted by subformulas, and (3) specification of a required global property of the formula.

During this development, it is sometimes useful to speak of calculus formulas in terms of their parse trees. For example, we will say that the formula (R(x) ∧ ∃y(S(y, z)) ∧ ¬T(x, z)) has an 'and' as root (which has an atom, an ∃, and a ¬ as children).

The normalization of formulas puts them into a form more easily analyzed for safety without substantially changing their syntactic structure. The following equivalence-preserving rewrite rules are used to place a formula into safe-range normal form (SRNF):

Variable substitution: This is from Section 4.2. It is applied until no distinct pair of quantifiers binds the same variable and no variable occurs both free and bound.

Remove universal quantifiers: Replace subformula ∀xφ by ¬∃x¬φ. (This and the next condition can be relaxed; see Example 5.4.5.)

Remove implications: Replace φ → ψ by ¬φ ∨ ψ, and similarly for ↔.

Push negations: Replace
(i) ¬¬φ by φ
(ii) ¬(φ_1 ∨ ... ∨ φ_n) by (¬φ_1 ∧ ... ∧ ¬φ_n)
(iii) ¬(φ_1 ∧ ... ∧ φ_n) by (¬φ_1 ∨ ... ∨ ¬φ_n)
so that the child of each negation is either an atom or an existentially quantified formula.

Flatten 'and's, 'or's, and existential quantifiers: This is done so that no child of an 'and' is an 'and', and similarly for 'or' and existential quantifiers.

The SRNF formula resulting from applying these rules to φ is denoted SRNF(φ). A formula φ (query {e⃗ | φ}) is in SRNF if SRNF(φ) = φ.
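The 'remove universal quantifiers', 'push negations', and flattening steps can be prototyped on a small formula AST. In the Python sketch below, the tuple encoding (('atom', ...), ('and', [...]), ('or', [...]), ('not', f), ('exists', vars, f), ('forall', vars, f)) is our own; variable substitution and implication removal are omitted:

```python
def srnf(f):
    """Partial SRNF normalization: remove ∀, push ¬, flatten ∧/∨/∃."""
    op = f[0]
    if op == "atom":
        return f
    if op == "forall":                           # ∀x φ  ⇒  ¬∃x ¬φ
        return srnf(("not", ("exists", f[1], ("not", f[2]))))
    if op == "exists":
        body = srnf(f[2])
        if body[0] == "exists":                  # flatten nested ∃
            return ("exists", f[1] + body[1], body[2])
        return ("exists", f[1], body)
    if op in ("and", "or"):
        kids = []
        for g in map(srnf, f[1]):
            kids.extend(g[1] if g[0] == op else [g])   # flatten same-op child
        return (op, kids)
    if op == "not":
        g = f[1]
        if g[0] == "not":                        # ¬¬φ ⇒ φ
            return srnf(g[1])
        if g[0] == "and":                        # ¬(∧ ...) ⇒ ∨ of ¬
            return srnf(("or", [("not", h) for h in g[1]]))
        if g[0] == "or":                         # ¬(∨ ...) ⇒ ∧ of ¬
            return srnf(("and", [("not", h) for h in g[1]]))
        return ("not", srnf(g))                  # atom or ∃ below ¬: allowed
    raise ValueError(op)

A = ("atom", "R", ("x",))
B = ("atom", "S", ("x",))
print(srnf(("forall", ["x"], A)))
```

Each negation in the output has an atom or an existentially quantified formula as its only child, as the SRNF rules require.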
Example 5.4.2 The first calculus query of Example 5.3.1 is in SRNF. The second calculus query is not in SRNF; the corresponding SRNF query is

{x_t | ∃x_d, x_a Movies(x_t, x_d, x_a) ∧
       ¬∃y_a(∃y_d Movies(x_t, y_d, y_a) ∧ ¬∃z_t Movies(z_t, Hitchcock, y_a))}.

Transforming the query of Example 5.3.2 into SRNF yields

{x_t | ¬∃y_a(∃y_d Movies(x_t, y_d, y_a) ∧ ¬∃z_t Movies(z_t, Hitchcock, y_a))}.
We now present a syntactic condition on SRNF formulas that ensures that each variable is range restricted, in the sense that its possible values all lie within the active domain of the formula or the input. If a quantified variable is not range restricted, or if one of the free variables is not range restricted, then the associated query is rejected. To make the definition, we first define the set of range-restricted variables of an SRNF formula using the following procedure, which returns either the symbol ⊥, indicating that some quantified variable is not range restricted, or the set of free variables that is range restricted.

Algorithm 5.4.3 (Range restriction (rr))

Input: a calculus formula φ in SRNF
Output: a subset of the free variables of φ, or ⊥

begin
case φ of
  R(e_1, ..., e_n): rr(φ) = the set of variables in {e_1, ..., e_n};
  x = a or a = x: rr(φ) = {x};
  φ_1 ∧ φ_2: rr(φ) = rr(φ_1) ∪ rr(φ_2);
  φ_1 ∧ x = y: rr(φ) = rr(φ_1) if {x, y} ∩ rr(φ_1) = ∅,
               rr(φ_1) ∪ {x, y} otherwise;
  φ_1 ∨ φ_2: rr(φ) = rr(φ_1) ∩ rr(φ_2);
  ¬φ_1: rr(φ) = ∅;
  ∃x⃗ φ_1: if x⃗ ⊆ rr(φ_1)
          then rr(φ) = rr(φ_1) − x⃗
          else return ⊥
end case
end

(In the preceding, for each Z, Z ∪ ⊥ = Z ∩ ⊥ = Z − ⊥ = ⊥ − Z = ⊥. In addition, we show the case of binary 'and's, etc., but we mean this to include polyadic 'and's, etc. Furthermore, we sometimes use x⃗ to denote the set of variables occurring in x⃗.)
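Algorithm 5.4.3 translates almost line for line into code. In the following Python sketch (our own AST encoding: equalities with a constant are ('eqc', x, a), equalities between variables ('eqv', x, y); None plays the role of ⊥), the x = y case is handled for polyadic 'and's by a small fixpoint loop:

```python
def rr(phi):
    """Range-restricted variables of an SRNF formula; None stands for ⊥."""
    op = phi[0]
    if op == "atom":                     # R(e1, ..., en): variables are strings
        return {e for e in phi[2] if isinstance(e, str)}
    if op == "eqc":                      # x = a for a constant a
        return {phi[1]}
    if op == "and":
        out, eqs = set(), []
        for g in phi[1]:
            if g[0] == "eqv":
                eqs.append(g)            # handle x = y atoms after the rest
                continue
            r = rr(g)
            if r is None:
                return None              # ⊥ absorbs ∪
            out |= r
        changed = True                   # propagate x = y equalities
        while changed:
            changed = False
            for _, x, y in eqs:
                if {x, y} & out and not {x, y} <= out:
                    out |= {x, y}
                    changed = True
        return out
    if op == "or":
        parts = [rr(g) for g in phi[1]]
        return None if None in parts else set.intersection(*parts)
    if op == "not":
        return None if rr(phi[1]) is None else set()
    if op == "exists":                   # ∃ x1..xk phi1
        r = rr(phi[2])
        if r is None or not set(phi[1]) <= r:
            return None                  # a quantified variable is unrestricted
        return r - set(phi[1])
    raise ValueError(op)

safe = ("exists", ["y"], ("atom", "R", ("x", "y")))
unsafe = ("exists", ["y"], ("not", ("atom", "R", ("y",))))
print(rr(safe), rr(unsafe))   # {'x'} None
```

A query {u | φ} would then be accepted as safe range exactly when rr applied to SRNF(φ) returns the set of free variables of φ.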
Intuitively, the occurrence of a variable x in a base relation or in an atom of the form x = a restricts that variable. This restriction is propagated through ∧, possibly lost in ∨, and always lost in ¬. In addition, each quantified variable must be restricted by the subformula it occurs in.

A calculus query {u | φ} is safe range if rr(SRNF(φ)) = free(φ). The family of safe-range queries is denoted by CALC_sr.

Example 5.4.4 Recall Examples 5.3.1 and 5.4.2. The first query of Example 5.3.1 is safe range. The first query of Example 5.4.2 is also safe range. However, the second query of Example 5.4.2 is not, because the free variable x_t is not range restricted by the formula.
Before continuing, we explore a generalization of the notion of safe range to permit universal quantification.

Example 5.4.5 Suppose that formula φ has a subformula of the form

ψ = ∀x⃗(ψ_1(x⃗) → ψ_2(y⃗)),

where x⃗ and y⃗ might overlap. Transforming ψ into SRNF (and assuming that the parent of ψ is not ¬), we obtain

ψ′ = ¬∃x⃗(ψ_1(x⃗) ∧ ¬ψ_2(y⃗)).

Now rr(ψ′) is defined iff

(a) rr(ψ_1) = x⃗, and
(b) rr(ψ_2) is defined.

In this case, rr(ψ′) = ∅. This is illustrated by the second query of Example 5.3.1, which was transformed into SRNF in Example 5.4.2.

Thus SRNF can be extended to permit subformulas that have the form of ψ without materially affecting the development.
The calculus query constructed in the proof of Lemma 5.3.11 is in fact safe range. It thus follows that the algebra ⊑ CALC_sr.

As shown in the following, each safe-range query is domain independent (Theorem 5.4.6). For this reason, if q is safe range we generally use the natural interpretation to evaluate it; we may also use the active domain interpretation.

The development here implies that all of CALC_sr, CALC_di, and CALC_adom are equivalent. When the particular choice is irrelevant to the discussion, we use the term relational calculus to refer to any of these three equivalent query languages.
From Safe Range to the Algebra

We now present the main result of this section (namely, the translation of safe-range queries into the named algebra). Speaking loosely, this translation is relatively direct in the sense that the algebra query E constructed for calculus query q largely follows the structure of q. As a result, evaluation of E will in most cases be more efficient than using the algebra query that is constructed for q by the proof of Lemma 5.3.12.

Examples of the construction used are presented after the formal argument.

Theorem 5.4.6 CALC_sr ≡ the relational algebra. Furthermore, each safe-range query is domain independent.

The proof of this theorem involves several steps. As seen earlier, the algebra ⊑ CALC_sr. To prove the other direction, we develop a translation from safe-range queries into the named algebra. Because the algebra is domain independent, this will also imply the second sentence of the theorem.
To begin, let φ be a safe-range formula in SRNF. An occurrence of a subformula ψ in φ is self-contained if its root is ∧ or if

(i) ψ = ψ_1 ∨ ... ∨ ψ_n and rr(ψ) = rr(ψ_1) = ... = rr(ψ_n) = free(ψ);
(ii) ψ = ∃x⃗ ψ_1 and rr(ψ_1) = free(ψ_1); or
(iii) ψ = ¬ψ_1 and rr(ψ_1) = free(ψ_1).

A safe-range, SRNF formula φ is in relational algebra normal form (RANF) if each subformula of φ is self-contained.

Intuitively, if ψ is a self-contained subformula of φ that does not have ∧ as a root, then all free variables in ψ are range restricted within ψ. As we shall see, if φ is in RANF, this permits construction of an equivalent relational algebra query E_φ using an induction from leaf to root.
We now develop an algorithm RANF-ALG that transforms safe-range SRNF formulas into RANF. It is based on the following rewrite rules:

(R1) Push-into-or: Consider the subformula

ψ = ψ_1 ∧ ... ∧ ψ_n ∧ ξ,

where

ξ = ξ_1 ∨ ... ∨ ξ_m.

Suppose that rr(ψ) = free(ψ), but rr(ξ_1 ∨ ... ∨ ξ_m) ≠ free(ξ_1 ∨ ... ∨ ξ_m). Nondeterministically choose a subset i_1, ..., i_k of 1, ..., n such that

ξ′ = (ξ_1 ∧ ψ_{i_1} ∧ ... ∧ ψ_{i_k}) ∨ ... ∨ (ξ_m ∧ ψ_{i_1} ∧ ... ∧ ψ_{i_k})

satisfies rr(ξ′) = free(ξ′). (One choice of i_1, ..., i_k is to use all of 1, ..., n; this necessarily yields a formula ξ′ with this property.) Letting {j_1, ..., j_l} = {1, ..., n} − {i_1, ..., i_k}, set

ψ′ = SRNF(ψ_{j_1} ∧ ... ∧ ψ_{j_l} ∧ ξ′).

The application of SRNF to ψ′ only has the effect of possibly renaming quantified variables and of flattening the roots of subformulas ξ_p ∧ ψ_{i_1} ∧ ... ∧ ψ_{i_k}, where ξ_p has root ∧; analogous remarks apply below. The rewrite rule is to replace subformula ψ by ψ′ and possibly apply SRNF to flatten an ∧, if both l = 0 and the parent of ψ is ∧. (The notion of RANF used here is a variation of the notion used elsewhere in the literature; see Bibliographic Notes.)
(R2) Push-into-quantifier: Suppose that

ψ = ψ_1 ∧ ... ∧ ψ_n ∧ ∃x⃗ ξ,

where rr(ψ) = free(ψ), but rr(ξ) ≠ free(ξ). Then replace ψ by

ψ′ = SRNF(ψ_{j_1} ∧ ... ∧ ψ_{j_l} ∧ ∃x⃗ ξ′),

where

ξ′ = ξ ∧ ψ_{i_1} ∧ ... ∧ ψ_{i_k}

and where rr(ξ′) = free(ξ′) and {j_1, ..., j_l} = {1, ..., n} − {i_1, ..., i_k}. The rewrite rule is to replace ψ by ψ′ and possibly apply SRNF to flatten an ∧.

(R3) Push-into-negated-quantifier: Suppose that

ψ = ψ_1 ∧ ... ∧ ψ_n ∧ ¬∃x⃗ ξ,

where rr(ψ) = free(ψ), but rr(ξ) ≠ free(ξ). Then replace ψ by

ψ′ = SRNF(ψ_1 ∧ ... ∧ ψ_n ∧ ¬∃x⃗ ξ′),

where

ξ′ = ξ ∧ ψ_{i_1} ∧ ... ∧ ψ_{i_k}

and where rr(ξ′) = free(ξ′) and {i_1, ..., i_k} ⊆ {1, ..., n}. That ψ′ is equivalent to ψ follows from the observation that the propositional formulas p ∧ q ∧ ¬r and p ∧ q ∧ ¬(p ∧ r) are equivalent. The rewrite rule is to replace ψ by ψ′.
The algorithm RANF-ALG for applying these rewrite rules is essentially top-down and recursive. We sketch the algorithm now (see Exercise 5.19). (It is assumed that under SRNF, renamed variables are chosen so that they do not occur in the full formula under consideration.)
Algorithm 5.4.7 (Relational Algebra Normal Form (RANF-ALG))

Input: a safe-range calculus formula φ in SRNF
Output: a RANF formula φ′ = RANF(φ) equivalent to φ

begin
while some subformula ψ (with its conjuncts possibly reordered) of φ satisfies the premise of R1, R2, or R3 do
  case
    R1: (left as exercise)
    R2: (left as exercise)
    R3: Let ψ = ψ_1 ∧ ... ∧ ψ_n ∧ ¬∃x⃗ ξ
        and ψ_{i_1}, ..., ψ_{i_k} satisfy the conditions of R3;
        α := RANF(ψ_1 ∧ ... ∧ ψ_n);
        β := RANF(SRNF(ξ ∧ ψ_{i_1} ∧ ... ∧ ψ_{i_k}));
        ψ′ := α ∧ ¬∃x⃗ β;
        φ := result of replacing ψ by ψ′ in φ;
  end case
end while
end
The proof that these rewrite rules can be used to transform a safe-range SRNF formula into a RANF formula has two steps (see Exercise 5.19). First, a case analysis can be used to show that if a safe-range φ in SRNF is not in RANF, then one of the rewrite rules (R1, R2, R3) can be applied. Second, it is shown that Algorithm 5.4.7 terminates. This is accomplished by showing that (1) each successfully completed call to RANF-ALG reduces the number of non-self-contained subformulas, and (2) if a call to RANF-ALG on φ invokes other calls to RANF-ALG, the input to these recursive calls has fewer non-self-contained subformulas than does φ.
We now turn to the transformation of RANF formulas into equivalent relational algebra queries. We abuse notation somewhat and assume that each variable is also an attribute. (Alternatively, a one-one mapping var-to-att : var → att could be used.) In general, given a RANF formula φ with free variables x_1, ..., x_n, we shall construct a named algebra expression E_φ over attributes x_1, ..., x_n such that for each input instance I, E_φ(I) = {x_1, ..., x_n | φ}(I). (The special case of queries {e_1, ..., e_n | φ}, where some of the e_i are constants, is handled by performing a join with the constants at the end of the construction.)

A formula is in modified relational algebra normal form (modified RANF) if it is RANF, except that each polyadic 'and' is ordered and transformed into binary 'and's, so that atoms x = y (respectively, ¬(x = y)) come after conjuncts that restrict one (respectively, both) of the variables involved, and so that each free variable in a conjunct of the form ¬ψ occurs in some preceding conjunct. It is straightforward to verify that each RANF formula can be placed into modified RANF. Note that each subformula of a modified RANF formula is self-contained.

Let RANF formula φ be fixed. The construction of E_φ is inductive, from leaf to root, and is sketched in the following algorithm. The special operator diff, on inputs R and S where att(S) ⊆ att(R), is defined by

R diff S = R − (R ⋈ S).

(Many details of this transformation, such as the construction of the renaming function f, projection list k⃗, and selection formula F in the first entry of the case statement, are left to the reader; see Example 5.4.9 and Exercise 5.19.)
Algorithm 5.4.8 (Translation into the Algebra)

Input: a formula φ in modified RANF
Output: an algebra query E_φ equivalent to φ

begin
case φ of
  R(e⃗): δ_f(π_k⃗(σ_F(R)))
  x = a: {⟨x : a⟩}
  ψ ∧ ξ:
    if ξ is x = x, then E_ψ
    if ξ is x = y (with x, y distinct), then
      σ_{x=y}(E_ψ), if {x, y} ⊆ free(ψ)
      σ_{x=y}(E_ψ ⋈ δ_{x→y}(E_ψ)), if x ∈ free(ψ) and y ∉ free(ψ)
      σ_{x=y}(E_ψ ⋈ δ_{y→x}(E_ψ)), if y ∈ free(ψ) and x ∉ free(ψ)
    if ξ is ¬(x = y), then σ_{¬(x=y)}(E_ψ)
    if ξ = ¬ξ′, then
      E_ψ diff E_{ξ′}, if free(ξ′) ⊊ free(ψ)
      E_ψ − E_{ξ′}, if free(ξ′) = free(ψ)
    otherwise, E_ψ ⋈ E_ξ (in the case that ξ does not have ∧ as parent)
  ψ_1 ∨ ... ∨ ψ_n: E_{ψ_1} ∪ ... ∪ E_{ψ_n}
  ∃x_1, ..., x_n ψ(x_1, ..., x_n, y_1, ..., y_m): π_{y_1,...,y_m}(E_ψ)
end case
end
Finally, let q = {x_1, ..., x_n | φ} be safe range. Because the transformations used for SRNF and RANF are equivalence preserving, without loss of generality we can assume that φ is in modified RANF. To conclude the proof of Theorem 5.4.6, it must be shown that q and E_φ are equivalent. In fact, it can be shown that for each instance I and each d satisfying adom(q, I) ⊆ d ⊆ dom,

q_d(I) = E_φ(I).

This will also yield that q is domain independent.

Let I and d be fixed. A straightforward induction can be used to show that for each subformula ψ(y_1, ..., y_m) of φ and each variable assignment ν with range contained in d,

I ⊨_d ψ[ν] ⟺ ⟨ν(y_1), ..., ν(y_m)⟩ ∈ E_ψ(I)

(see Exercise 5.19). This completes the proof of Theorem 5.4.6.
Example 5.4.9 (a) Consider the query

q_1 = {⟨a, x, y⟩ : A_1A_2A_3 | ∃z(P(x, y, z) ∧ [R(x, y) ∧ ([S(z) ∧ ¬T(x, z)] ∨ [T(y, z)])])}.

The formula of q_1 is in SRNF. Transformation into RANF yields

∃z(P(x, y, z) ∧ ([R(x, y) ∧ S(z) ∧ ¬T(x, z)] ∨ [R(x, y) ∧ T(y, z)])).

Assuming the schemas P[B_1B_2B_3], R[C_1C_2], S[D], and T[F_1F_2], transformation of this into the algebra yields

E = π_{x,y}(δ_{B_1B_2B_3→xyz}(P) ⋈
      (((δ_{C_1C_2→xy}(R) ⋈ δ_{D→z}(S)) diff δ_{F_1F_2→xz}(T))
       ∪ (δ_{C_1C_2→xy}(R) ⋈ δ_{F_1F_2→yz}(T)))).

Finally, an algebra query equivalent to q_1 is

{⟨A_1 : a⟩} × δ_{xy→A_2A_3}(E).
(b) Consider the query

q_2 = {x | ∃y[R(x, y) ∧ ¬∀z(S(z, a) → T(y, z)) ∧ ∃v, w(¬T(v, w) ∧ w = b ∧ v = x)]}.

Transforming to SRNF, we have

∃y[R(x, y) ∧ ∃z(S(z, a) ∧ ¬T(y, z)) ∧ ∃v, w(¬T(v, w) ∧ w = b ∧ v = x)].

Transforming to RANF and reordering the conjunctions, we obtain

∃y[∃v, w(R(x, y) ∧ w = b ∧ v = x ∧ ¬T(v, w)) ∧ ∃z(R(x, y) ∧ S(z, a) ∧ ¬T(y, z))].

Assuming schemas R[A_1, A_2], S[B_1, B_2], and T[C_1, C_2], the equivalent algebra query is obtained using the program

E_1 := δ_{A_1A_2→xy}(R) × {⟨w : b⟩};
E_2 := (σ_{v=x}(E_1 ⋈ δ_{x→v}(E_1))) diff δ_{C_1C_2→vw}(T);
E_3 := π_{x,y}(E_2);
E_4 := π_{x,y}((δ_{A_1A_2→xy}(R) ⋈ δ_{B_1→z}(π_{B_1}(σ_{B_2=a}(S)))) diff δ_{C_1C_2→yz}(T));
E_5 := π_x(E_3 ⋈ E_4).
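A program such as E_1–E_5 can be run on toy data with a handful of named-algebra operators over sets of attribute-value pairs. The representation and the relation contents below are our own illustration:

```python
def T(d):                                 # hashable form of a named tuple
    return tuple(sorted(d.items(), key=lambda kv: kv[0]))

def join(R, S):                           # natural join
    out = set()
    for r in R:
        for s in S:
            dr, ds = dict(r), dict(s)
            if all(dr[k] == ds[k] for k in dr.keys() & ds.keys()):
                out.add(T({**dr, **ds}))
    return out

def rename(R, m):   return {T({m.get(k, k): v for k, v in t}) for t in R}
def select(R, p):   return {t for t in R if p(dict(t))}
def project(R, A):  return {T({k: v for k, v in t if k in A}) for t in R}
def diff(R, S):     return R - join(R, S)       # R diff S = R − (R ⋈ S)

Rr = {T({"A1": a, "A2": b}) for a, b in [(1, 10), (2, 20), (3, 30)]}
Sr = {T({"B1": a, "B2": b}) for a, b in [(100, "a"), (200, "c")]}
Tr = {T({"C1": a, "C2": b}) for a, b in [(10, 100), (2, "b")]}

Rxy = rename(Rr, {"A1": "x", "A2": "y"})
E1 = join(Rxy, {T({"w": "b"})})           # cross product with the constant ⟨w : b⟩
E2 = diff(select(join(E1, rename(E1, {"x": "v"})), lambda t: t["v"] == t["x"]),
          rename(Tr, {"C1": "v", "C2": "w"}))
E3 = project(E2, {"x", "y"})
E4 = project(diff(join(Rxy, rename(project(select(Sr, lambda t: t["B2"] == "a"),
                                           {"B1"}), {"B1": "z"})),
                  rename(Tr, {"C1": "y", "C2": "z"})),
             {"x", "y"})
E5 = project(join(E3, E4), {"x"})
print(E5)   # {(('x', 3),)} — only x = 3 satisfies q2 on this instance
```

On this instance, x = 1 fails because T(10, 100) holds, x = 2 fails because T(2, b) holds, and x = 3 survives both negated conjuncts.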
The Positive Existential Calculus

In Chapter 4, disjunction was incorporated into the rule-based conjunctive queries, and union was incorporated into the tableau, SPC, and SPJR queries. Incorporating disjunction into the conjunctive calculus was more troublesome because of the possibility of infinite answers. We now apply the tools developed earlier in this chapter to remedy this situation.

A positive existential (calculus) query is a domain-independent calculus query q = {e_1, ..., e_n | φ}, possibly with equality, in which the only logical connectives are ∧, ∨, and ∃. It is decidable whether a query q with these logical connectives is domain independent; and if so, q is equivalent to a safe-range query using only these connectives (see Exercise 5.16). The following is easily verified.

Theorem 5.4.10 The positive existential calculus is equivalent to the family of conjunctive queries with union.
5.5 Aggregate Functions

In practical query languages, the underlying domain is many-sorted, with sorts such as boolean, string, integer, or real. These languages allow the use of comparators such as ≤ between database entries in an ordered sort and aggregate functions such as sum, count, or average on numeric sorts. In this section, aggregate operators are briefly considered. In the next section, a novel approach for incorporating arithmetic constraints into the relational model will be addressed.

Aggregate operators operate on collections of domain elements. The next example illustrates how these are used.

Example 5.5.1 Consider a relation Sales[Theater, Title, Date, Attendance], where a tuple ⟨th, ti, d, a⟩ indicates that on date d a total of a people attended showings of movie ti at theater th. We assume that {Theater, Title, Date} is a key, i.e., that two distinct tuples cannot share the same values on these three attributes. Two queries involving aggregate functions are

(5.4) For each theater, list the total number of movies that have been shown there.
(5.5) For each theater and movie, list the total attendance.
Informally, the first query might be expressed in a pidgin language as

{⟨th, c⟩ | th is a theater occurring in Sales
           and c = |π_Title(σ_{Theater=th}(Sales))|}

and the second as

{⟨th, ti, s⟩ | ⟨th, ti⟩ is a theater-title pair appearing in Sales
               and s is the sum that includes each occurrence of each a-value in
               σ_{Theater=th ∧ Title=ti}(Sales)}.

A subtlety here is that this second query cannot be expressed simply as

{⟨th, ti, s⟩ | ⟨th, ti⟩ is a theater-title pair appearing in Sales
               and s = Σ{a ∈ π_Attendance(σ_{Theater=th ∧ Title=ti}(Sales))}}

since a value a has to be counted as many times as it occurs in the selection. This suggests that a more natural setting for studying aggregate functions would explicitly include bags (or multisets, i.e., collections in which duplicates are permitted) and not just sets, a somewhat radical departure from the model we have used so far.

The two queries can be expressed as follows using aggregate functions in an algebraic language:

π_{Theater; count(Title)}(Sales)
π_{Theater,Title; sum(Attendance)}(Sales).
We now briefly present a more formal development. To simplify, the formalism is based on the unnamed perspective, and we assume that dom = N, i.e., the set of nonnegative integers. We stay within the relational model although, as noted in the preceding example, a richer data model with bags would be more natural. Indeed, the complex value model that will be studied in Chapter 20 provides a more appropriate context for considering aggregate functions.

We shall adopt a somewhat abstract view of aggregate operators. An aggregate function f is defined to be a family of functions f_1, f_2, ... such that for each j ≥ 1 and each relation schema S with arity(S) ≥ j, f_j : Inst(S) → N. For instance, for the sum aggregate function, we will have sum_1 to sum the first column and, in general, sum_i to sum the i-th one. As in the case of sum, we want the f_i to depend only on the content of the column to which they are applied, where the content includes not only the set of elements in the column, but also the number of their occurrences (so columns are viewed as bags). This requirement is captured by the following uniformity property imposed on each aggregate function f:

Suppose that the i-th column of I and the j-th column of J are identical, i.e., for each a, there are as many occurrences of a in the i-th column of I as in the j-th column of J. Then f_i(I) = f_j(J).
All of the commonly arising aggregate functions satisfy this uniformity property. The uniformity condition is also used when translating calculus queries with aggregates into the algebra with aggregates.

We next illustrate how aggregate functions can be incorporated into the algebra and calculus (we do not discuss how this is done for nr-datalog¬, since it is similar to the algebra). Aggregate functions are added to the algebra using an extended projection operation. Specifically, the projection function for aggregate function f on relation instance I is defined as follows:

π_{j_1,...,j_m; f(k)}(I) = {⟨a_{j_1}, ..., a_{j_m}, f_k(σ_{j_1=a_{j_1} ∧ ... ∧ j_m=a_{j_m}}(I))⟩ | ⟨a_1, ..., a_n⟩ ∈ I}.

Note that the aggregate function f_k is applied separately to each group of tuples in I corresponding to a different possible value for the columns j_1, ..., j_m.
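The grouping behavior of this extended projection is easy to emulate. In the sketch below (the function name agg_project is ours; columns are 1-based as in the definition), the aggregate f is applied to the bag of column-k values within each group; for query (5.4) we pass a distinct-count, matching the pidgin formulation with |π_Title(...)|:

```python
def agg_project(I, group_cols, f, k):
    """π_{group_cols; f(k)}(I): for each value of the grouping columns,
    apply aggregate f to the bag of column-k values of the matching tuples."""
    out = set()
    for t in I:
        key = tuple(t[j - 1] for j in group_cols)
        bag = [u[k - 1] for u in I
               if all(u[j - 1] == t[j - 1] for j in group_cols)]
        out.add(key + (f(bag),))
    return out

Sales = {("Rex", "Alien", "1/1", 50), ("Rex", "Alien", "1/2", 60),
         ("Rex", "Brazil", "1/1", 30), ("Champo", "Alien", "1/1", 40)}

# π_{Theater,Title; sum(Attendance)}(Sales) — query (5.5)
totals = agg_project(Sales, (1, 2), sum, 4)
# π_{Theater; count(Title)}(Sales) — query (5.4), counting distinct titles
counts = agg_project(Sales, (1,), lambda col: len(set(col)), 2)
print(totals)
print(counts)
```

Because the Date column keeps the underlying tuples distinct, each attendance value contributes once per showing, which is exactly the bag semantics the example calls for.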
Turning to the calculus, we begin with an example. Query (5.5) can be expressed in the extended calculus as

{⟨th, ti, s⟩ | ∃d_1, a_1(Sales(th, ti, d_1, a_1) ∧
               s = sum_2{⟨d_2, a_2⟩ | Sales(th, ti, d_2, a_2)})}

where sum_2 is the aggregate function summing the second column of a relation. Note that the subexpression {⟨d_2, a_2⟩ | Sales(th, ti, d_2, a_2)} has free variables th and ti that do not occur in the target of the subexpression. Intuitively, different assignments for these variables will yield different values for the subexpression.
More formally, aggregate functions are incorporated into the calculus by permitting aggregate terms that have the form f_j{x⃗ | ψ}, where f is an aggregate function, j ≤ arity(x⃗), and ψ is a calculus formula (possibly with aggregate terms). When defining the extended calculus, care must be taken to guarantee that aggregate terms do not recursively depend on each other. This can be accomplished with a suitable generalization of safe range. This generalization will also ensure that free variables occurring in an aggregate term are range restricted by a subformula containing it. It is straightforward to define the semantics of the generalized safe-range calculus with aggregate functions. One can then show that the extensions of the algebra and safe-range calculus with the same set of aggregate functions have the same expressive power.
5.6 Digression: Finite Representations of Infinite Databases
Until now we have considered only finite instances of relational databases. As we have seen, this introduced significant difficulty in connection with domain independence of calculus queries. It is also restrictive in connection with some application areas that involve temporal or geometric data. For example, it would be convenient to think of a rectangle in the real plane as an infinite set of points, even if it can be represented easily in some finite manner.
In this short section we briefly describe some recent and advanced material that uses logic to permit the finite representation of infinite databases. We begin by presenting an alternative approach to resolving the problem of safety, which permits queries to have answers
that are infinite but finitely representable. We then introduce a promising generalization of the relational model that uses constraints to represent infinite databases, and we describe how query processing can be performed against these in an efficient manner.
An Alternative Resolution to the Problem of Safety
As indicated earlier, much of the research on safety has been directed at syntactic restrictions to ensure domain independence. An alternative approach is to use the natural interpretation, even if the resulting answer is infinite. As it turns out, the answers to such queries are recursive and have a finite representation.
For this result, we shall use a finite set d ⊆ dom, which corresponds intuitively to the active domain of a query and input database, and a set C = {c₁, …, cₘ} of m distinct new symbols, which will serve as placeholders for elements of dom − d. Speaking intuitively, the elements of C sometimes act as elements of dom, and so it is not appropriate to view them as simple variables.
A tuple with placeholders is a tuple t = ⟨t₁, …, tₙ⟩, where each tᵢ is in d ∪ C. The semantics of such t relative to d are

sem_d(t) = {ρ(t) | ρ is a one-one mapping from d ∪ C that leaves d fixed and maps C into dom − d}.
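A useful way to read this definition is as a membership test: a concrete tuple s belongs to sem_d(t) exactly when some one-one placeholder assignment turns t into s. The following sketch is our own illustration (placeholders are modeled as strings such as "c1"; the function name is made up):

```python
# A sketch of membership in sem_d(t): a concrete tuple s is in sem_d(t)
# iff some one-one map rho fixing d and sending placeholders outside d
# turns t into s. Placeholders are represented here by strings like "c1".

def in_sem(s, t, d, placeholders):
    if len(s) != len(t):
        return False
    rho = {}           # partial mapping on placeholders
    used = set()       # values already assigned, to keep rho one-one
    for ti, si in zip(t, s):
        if ti in placeholders:
            if si in d:                    # placeholders map into dom - d
                return False
            if ti in rho:
                if rho[ti] != si:          # rho must be a function
                    return False
            else:
                if si in used:             # rho must be one-one
                    return False
                rho[ti] = si
                used.add(si)
        else:
            if si != ti:                   # elements of d are left fixed
                return False
    return True

d = {1, 2}
t = (1, "c1", "c1", "c2")
assert in_sem((1, 7, 7, 9), t, d, {"c1", "c2"})      # c1 -> 7, c2 -> 9
assert not in_sem((1, 7, 7, 7), t, d, {"c1", "c2"})  # not one-one
assert not in_sem((1, 2, 2, 9), t, d, {"c1", "c2"})  # 2 lies in d
```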
The following theorem, stated without proof, characterizes the result of applying an arbitrary calculus query using the natural semantics.

Theorem 5.6.1 Let q = {⟨e₁, …, eₙ⟩ | φ} be an arbitrary calculus query, such that each quantifier in φ quantifies a distinct variable that is not free in φ. Let C = {c₁, …, cₘ} be a set of m distinct new symbols not occurring in dom, but viewed as domain elements, where m is the number of distinct variables in φ. Then for each input instance I,

q_dom(I) = ∪{sem_{adom(q,I)}(t) | t ∈ q_{adom(q,I)∪C}(I)}.
This shows that if we apply a calculus query (under the natural semantics) to a finite database, then the result is recursive, even if infinite. But is the set of infinite databases described in this manner closed under the application of calculus queries? The affirmative answer is provided by an elegant generalization of the relational model presented next (see Exercise 5.31).
Constraint Query Languages
The following generalization of the relational model seems useful to a variety of new applications. The starting point is to consider infinite databases with finite representations based on the use of constraints. To begin we define a generalized n-tuple as a conjunction of constraints over n variables. The constraints typically include =, ≠, ≤, etc. In some sense, such a constraint can be viewed as a finite representation of a (possibly infinite) set of (normal) n-tuples (i.e., the valuations of the variables that satisfy the constraint).
Example 5.6.2 Consider the representation of rectangles in the plane. Suppose first that rectangles are given using 5-tuples (n, x₁, y₁, x₂, y₂), where n is the name of the rectangle, (x₁, y₁) are the coordinates of the lower left corner, and (x₂, y₂) are the coordinates of the upper right. The set of points ⟨u, v⟩ in such a rectangle delimited by x₁, y₁, x₂, y₂ is given by the constraint

x₁ ≤ u ≤ x₂ ∧ y₁ ≤ v ≤ y₂.
Now the names of intersecting rectangles from a relation R are given by

{⟨n₁, n₂⟩ | ∃x₁, y₁, x₂, y₂, x′₁, y′₁, x′₂, y′₂, u, v
  (R(n₁, x₁, y₁, x₂, y₂) ∧ (x₁ ≤ u ≤ x₂ ∧ y₁ ≤ v ≤ y₂)
  ∧ R(n₂, x′₁, y′₁, x′₂, y′₂) ∧ (x′₁ ≤ u ≤ x′₂ ∧ y′₁ ≤ v ≤ y′₂))}.
This is essentially within the framework of the relational model presented so far, except that we are using an infinite base relation ≤. There is a level of indirection between the representation of a rectangle (a, x₁, y₁, x₂, y₂) and the actual set of points that it contains.
In the following constraint formalism, a named rectangle can be represented by a generalized tuple (i.e., a constraint). For instance, the rectangle of name a with corners (0.5, 1.0) and (1.5, 5.5) is represented by the constraint

z₁ = a ∧ 0.5 ≤ z₂ ∧ z₂ ≤ 1.5 ∧ 1.0 ≤ z₃ ∧ z₃ ≤ 5.5.

This should be viewed as a finite syntactic representation of an infinite set of triples. A triple ⟨z₁, z₂, z₃⟩ satisfying this constraint indicates that the point of coordinates (z₂, z₃) is in a rectangle with name z₁.
One can see a number of uses in allowing constraints in the language. First, constraints arise naturally for domains concerning measures (price, distance, time, etc.). The introduction of time has already been studied in the active area of temporal databases (see Section 22.6). In other applications such as spatial databases, geometry plays an essential role and fits nicely in the realm of constraint query languages.
One can clearly obtain different languages by considering various domains and various forms of constraints. Relational calculus, relational algebra, or some other relational languages can be extended with, for instance, the theory of real closed fields or the theory of dense orders without endpoints. Of course, a requirement is the decidability of the resulting language.
Assuming some notion of constraints (to be formalized soon), we now define somewhat more precisely the constraint languages and then illustrate them with two examples.
Definition 5.6.3 A generalized n-tuple is a finite conjunction of constraints over variables x₁, …, xₙ. A generalized instance of arity n is a finite set of generalized n-tuples (the corresponding formula is the disjunction of the constraints).
Suppose that I is a generalized instance. We refer to I as a syntactic database and to the set of conventional tuples represented by I as the semantic database.
We now present two applications of this approach, one in connection with the reals and the other with the rationals.
We assume now that the constants are interpreted over a real closed field (e.g., the reals). The constraints are polynomial inequality constraints [i.e., inequalities of the form p(x₁, …, xₙ) ≤ 0, where p is a polynomial]. Two 3-tuples in this context are

(3.56x₁² + 4.0x₂ ≤ 0) ∧ (x₃ − x₁ ≤ 0)
(x₁ + x₂ + x₃ ≤ 0).
One can evaluate queries algebraically bottom-up (i.e., at each step of the computation, the result is still a generalized instance). This is a straightforward consequence of Tarski's decision procedure for the theory of real closed fields. A difficulty resides in projection (i.e., quantifier elimination). The procedure for projection is extremely costly in the size of the query. However, for a fixed query, the complexity in the size of the syntactic database is reasonable (in NC).
We assume now that the constants are interpreted over a countably infinite set with a binary relation ≤ that is a dense order (e.g., the rationals). The constraints are of the form xθy or xθc, where x, y are variables, c is a constant, and θ is among ≤, <, =. An example of a 3-tuple is

(x₁ ≤ x₂) ∧ (x₂ < x₃).
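Such a generalized tuple can be evaluated directly on any candidate valuation over the rationals. A small sketch using exact rational arithmetic (our illustration, not part of the book's formalism):

```python
# A sketch: evaluating a dense-order generalized tuple on a candidate
# valuation of x1, x2, x3 over the rationals. The constraint below is the
# 3-tuple (x1 <= x2) and (x2 < x3) from the text.
from fractions import Fraction

def dense_tuple(x1, x2, x3):
    return x1 <= x2 and x2 < x3

assert dense_tuple(Fraction(1, 3), Fraction(1, 3), Fraction(1, 2))
assert not dense_tuple(Fraction(1, 2), Fraction(1, 3), Fraction(2, 3))
# Density: strictly between two satisfying values there is always another.
assert dense_tuple(Fraction(1, 3), Fraction(5, 12), Fraction(1, 2))
```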
Here again, a bottom-up algebraic evaluation is feasible. Indeed, evaluation is in AC⁰ in the size of the syntactic database (for a fixed query).
In the remainder of this book, we consider standard databases and not generalized ones.
Bibliographic Notes
One of the first investigations of using predicate calculus to query relational database structures is [Kuh67], although the work by Codd [Cod70, Cod72b] played a tremendous role in bringing attention to the relational model and to the relational algebra and calculus. In particular, [Cod72b] introduced the equivalence of the calculus and algebra to the database community. That paper coined the phrase relational completeness to describe the expressive power of relational query languages: Any language able to simulate the algebra is called relationally complete. We have not emphasized that phrase here because subsequent research has suggested that a more natural notion of completeness can be described in terms of Turing computability (see Chapter 16).
Actually, a version of the result on the equivalence of the calculus and algebra was known much earlier to the logic community (see [CT48, TT52]) and is used to show Tarski's algebraization theorem (e.g., see [HMT71]). The relation between relational algebras and cylindric algebras is studied in [IL84] (see Exercise 5.36). The development of algebras equivalent to various calculus languages has been a fertile area for database theory. One such result presented in this chapter is the equivalence of the positive existential cal-
culus and the SPCU algebra [CH82]; analogous results have also been developed for the
relational calculus extended with aggregate operators [Klu82], the complex value model
[AB88] (studied in Chapter 20), the Logical Data Model [KV84], the directory model
[DM86a, DM92], and formalizations of the hierarchy and network models using database
logic [Jac82].
Notions related to domain independence are found as early as [Low15] in the logic community; in the database community the first paper on this topic appears to be [Kuh67], which introduced the notion of definite queries. The notion of domain independence used here is from [Fag82b, Mak81]; the notions of definite and domain independent were proved equivalent in [ND82]. A large number of classes of domain-independent formulas have been investigated. These include the safe [Ull82b], safe DRC [Ull88], range separable [Cod72b], allowed [Top87] (Exercise 5.25), range restricted [Nic82] (Exercise 5.26), and evaluable [Dem82] formulas. An additional investigation of domain independence for the calculus is found in [ND82]. Surveys on domain independence and various examples of these classes can be found in [Kif88, VanGT91]. The focus of [VanGT91] is the general problem of practical translations from calculus to algebra queries; in particular, it provides a translation from the set of evaluable formulas into the algebra. It is also shown there that the notions of evaluable and range restricted are equivalent. These are the most general syntactic conditions in the literature that ensure domain independence. The fact that domain independence is undecidable was first observed in [DiP69]; this is detailed in Chapter 6.
Domain independence also arises in the context of dependencies [Fag82b, Mak81] and datalog [Dec86, Top87, RBS87, TS88]. The issue of extending domain independence to incorporate functions (e.g., arithmetic functions, or user-defined functions) is considered in [AB88, Top91, EHJ93]. The issue of extending domain independence to incorporate freely interpreted functions (such as arise in logic programming) is addressed in [Kif88]. Syntactic conditions on (recursive) datalog programs with arithmetic that ensure safety are developed in [RBS87, KRS88a, KRS88b, SV89]. Issues of safety in the presence of function or order symbols are also considered in [AH91]. Aggregate functions were first incorporated into the relational algebra and calculus in [Klu82]; see also [AB88].
The notion of safe range presented here is richer than safe, safe DRC, and range separable and weaker than allowed, evaluable, and range restricted. It follows the spirit of the definition of allowed presented in [VanGT91] and safe range in [AB88]. The transformations of the safe-range calculus to the algebra presented here follow the more general transformations in [VanGT91, EHJ93]. The notion of relational algebra normal form used in those works is more general than the notion by that name used here.
Query languages have mostly been considered for finite databases. An exception is [HH93]. Theorem 5.6.1 is due to [AGSS86]. An alternative proof and extension of this result is developed in [HS94].
Programming with constraints has been studied for some time in topic areas ranging from linear programming to AI to logic programming. Although the declarative spirit of both constraint programming and database query languages leads to a natural marriage, it is only recently that the combination of the two paradigms has been studied seriously [KKR90]. This was probably a consequence of the success of constraints in the field of logic programming (see, e.g., [JL87] and [Lel87, Coh90] for surveys). Our presentation was influenced by [KKR90] (calculi for real closed fields with polynomial inequalities and for dense order with inequalities are studied there) as well as by [KG94] (an algebra
for constraint databases with dense order and inequalities is featured there). Recent works
on constraint database languages can be found in [Kup93, GS94].
Exercises
Exercise 5.1 Express queries (5.2) and (5.3) in (1) the relational algebras, (2) nonrecursive datalog¬, and (3) the domain-independent relational calculus.

Exercise 5.2 Express the following queries against the CINEMA database in (1) the relational algebras, (2) nonrecursive datalog¬, and (3) the domain-independent relational calculus.
(a) Find the actors cast in at least one movie by Kurosawa.
(b) Find the actors cast in every movie by Kurosawa.
(c) Find the actors cast only in movies by Kurosawa.
(d) Find all pairs of actors who act together in at least one movie.
(e) Find all pairs of actors cast in exactly the same movies.
(f) Find the directors such that every actor is cast in one of his or her films.
(Assume that each film has exactly one director.)
Exercise 5.3 Prove or disprove (assuming X ⊆ sort(P) = sort(Q)):
(a) π_X(P ∪ Q) = π_X(P) ∪ π_X(Q);
(b) π_X(P ∩ Q) = π_X(P) ∩ π_X(Q).
Exercise 5.4
(a) Give formal definitions for the syntax and semantics of the unnamed and named relational algebras.
(b) Show that ∩ in the unnamed algebra can be simulated using (1) the difference operator −; (2) the operators σ, π, ×.
(c) Give a formal definition for the syntax and semantics of selection operators in the unnamed algebra that permit conjunction, disjunction, and negation in their formulas. Show that these selection operators can be simulated using atomic selection operators, union, intersection, and difference.
(d) Show that the SPCU algebra, in which selection operators with negation in the formulas are permitted, cannot simulate the difference operator.
(e) Formulate and prove results analogous to those of parts (b), (c), and (d) for the named algebra.
Exercise 5.5
(a) Prove that the unnamed algebra operators {σ, π, ×, ∪, −} are nonredundant.
(b) State and prove the analogous result for the named algebra.
Exercise 5.6
(a) Exhibit a relational algebra query that is not monotonic.
(b) Exhibit a relational algebra query that is not satisfiable.
Exercise 5.7 Prove Proposition 5.1.2 (i.e., that the unnamed and named relational algebras
have equivalent expressive power).
Exercise 5.8 (Division) The division operator, denoted ÷, is added to the named algebra as follows. For instances I and J with sort(J) ⊆ sort(I), the value of I ÷ J is the set of tuples r ∈ π_{sort(I)−sort(J)}(I) such that ({r} ⋈ J) ⊆ I. Use the division to express algebraically the query, "Which theater is featuring all of Hitchcock's movies?" Describe how nr-datalog¬ can be used to simulate division. Describe how the named algebra can simulate division. Is division a monotonic operation?
Exercise 5.9 Show that the semantics of each nr-datalog¬ rule can be described as a difference q₁ − q₂, where q₁ is an SPJR query and q₂ is an SPJRU query.

Exercise 5.10 Verify that each nr-datalog¬ program with equality can be simulated by one without equality.
Exercise 5.11 Prove Proposition 5.2.2. Hint: Use the proof of Theorem 4.4.8 and the fact that
the relational algebra is closed under composition.
Exercise 5.12 Prove that the domain-independent relational calculus without equality is strictly weaker than the domain-independent relational calculus. Hint: Suppose that calculus query q without equality is equivalent to {x | R(x) ∧ x ≠ a}. Show that q can be translated into an algebra query q′ that is constructed without using a constant base relation and such that all selections are on base relation expressions. Argue that on each input relation I over R, each subexpression of q′ evaluates to either Iⁿ for some n ≥ 0, or to the empty relation of arity n for some n ≥ 0.
Exercise 5.13
(a) Complete the proof of Lemma 5.3.11.
(b) Complete the proof of Lemma 5.3.12.
Exercise 5.14
(a) Prove that the rewrite rules of Figure 5.1 preserve equivalence.
(b) Prove that these rewrite rules can be used to transform an arbitrary calculus formula
into an equivalent formula in PNF with CNF matrix. State which rewrite rules are
needed.
(c) Do the same as (b), but for DNF matrix.
(d) Prove that the rewrite rules of Figure 5.1 are not complete in the sense that there are calculus formulas φ and ψ such that (1) φ ≡ ψ, but (2) there is no sequence of applications of the rewrite rules that transforms φ into ψ.
Exercise 5.15 Verify the claims of Example 5.3.9.
Exercise 5.16
(a) Show that each positive existential query is equivalent to one whose formula is in PNF with either CNF or DNF matrix and that they can be expressed in the form {⟨e₁, …, eₙ⟩ | φ₁ ∨ ··· ∨ φₘ}, where each φⱼ is a conjunctive calculus formula with free(φⱼ) = the set of variables occurring in e₁, …, eₙ. Note that this formula is safe range.
(b) Show that it is decidable, given a relational calculus query q (possibly with equality) whose only logical connectives are ∧, ∨, and ∃, whether q is domain independent.
(c) Prove Theorem 5.4.10.
Exercise 5.17 Use the construction of the proof of Theorem 5.4.6 to transform the following
into the algebra.
(a) {⟨⟩ | ∀x(R(x) → ∃y(S(x, y) ∧ ∃z(T(x, y, a))))}
(b) {⟨w, x, y, z⟩ | (R(w, x, y) ∨ R(w, x, z)) ∧ (R(y, z, w) ∨ R(y, z, x))}
Exercise 5.18 For each of the following queries, indicate whether it is domain independent and/or safe range. If it is not domain independent, give examples of different domains yielding different answers on the same input; and if it is safe range, translate it into the algebra.
(a) {⟨x, y⟩ | ∃z[T(x, z) ∧ ∃w T(w, x, y)] ∨ x = y}
(b) {⟨x, y⟩ | [x = a ∨ ∃z(R(y, z))] ∧ S(y)}
(c) {⟨x, y⟩ | [x = a ∨ ∃z(R(y, z))] ∧ S(y) ∧ T(x)}
(d) {x | ∀y(R(y) → S(x, y))}
(e) {⟨⟩ | ∀x∃y(R(y) ∧ S(x, y))}
(f) {⟨x, y⟩ | ∃z T(x, y, z) ∧ ∀u, v([(R(u) ∧ S(u, v)) ∨ R(v)] → [∃w(T(x, w, v) ∧ T(u, v, y))])}
Exercise 5.19 Consider the proof of Theorem 5.4.6.
(a) Give the missing parts of Algorithm 5.4.7.
(b) Show that Algorithm 5.4.7 is correct and terminates on all input.
(c) Give the missing parts of Algorithm 5.4.8 and verify its correctness.
(d) Given q = {⟨x₁, …, xₙ⟩ | φ} with φ in modified RANF, show for each instance I and each d satisfying adom(q, I) ⊆ d ⊆ dom that q_d(I) = E_φ(I).
Exercise 5.20 Consider the proof of Theorem 5.4.6.
(a) Present examples illustrating how the nondeterministic choices in these rewrite rules can be used to help optimize the algebra query finally produced by the construction of the proof of this lemma. (Refer to Chapter 6 for a general discussion of optimization.)
(b) Consider a generalization of rules (R1) and (R2) that permits using a set of indexes {j₁, …, jₗ} ⊆ {1, …, n} − {i₁, …, iₖ}. What are the advantages of this generalization? What restrictions must be imposed to ensure that Algorithm 5.4.8 remains correct?
Exercise 5.21 Develop a direct proof that CALC_adom ⊑ CALC_sr. Hint: Given calculus query q, first build a formula φ_adom(x) such that I ⊨ φ_adom(x)[ν] iff ν(x) ∈ adom(q, I). Now perform an induction on subformulas.
Exercise 5.22 [Coh86] Let R have arity n. Define the gen(erator) operator so that for instance I of R, indexes 1 ≤ i₁ < ··· < iₖ ≤ n, and constants a₁, …, aₖ,

gen_{i₁:a₁,…,iₖ:aₖ}(I) = π_{j₁,…,jₗ}(σ_{i₁=a₁ ∧ … ∧ iₖ=aₖ}(I)),

where {j₁, …, jₗ} is a listing in order of (some or all) indexes in {1, …, n} − {i₁, …, iₖ}. Note that the special case of gen_{1:b₁,…,n:bₙ}(I) can be viewed as a test of ⟨b₁, …, bₙ⟩ ∈ I; and gen_{[]}(I)
is a test of whether I is nonempty. In some research in AI, the primitive mechanism for accessing relations is based on generators that are viewed as producing a stream of tuples as output. For example, the query {⟨x, y, z⟩ | R(x, y) ∧ S(y, z)} can be computed using the algorithm

for each tuple ⟨x, y⟩ generated by gen_{1:x,2:y}(R)
  for each value z generated by gen_{1:y}(S)
    output ⟨x, y, z⟩
  end for each
end for each
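This nested-loop evaluation maps naturally onto generators in a modern language. The following sketch is our own illustration (the encoding of gen bindings as a dictionary of 1-based index:value pairs, and the names `gen` and `join_RS`, are assumptions, not the exercise's notation):

```python
# A sketch of the generator-based evaluation from the text, using Python
# generators: gen(I, bindings) yields the remaining columns of each tuple
# of I that agrees with the given index:value bindings (1-based indexes).

def gen(instance, bindings):
    fixed = {i - 1: v for i, v in bindings.items()}   # 0-based positions
    for t in sorted(instance):
        if all(t[i] == v for i, v in fixed.items()):
            yield tuple(t[i] for i in range(len(t)) if i not in fixed)

def join_RS(R, S):
    """{<x, y, z> | R(x, y) and S(y, z)}, as nested generator loops."""
    out = set()
    for (x, y) in gen(R, {}):            # generate all tuples of R
        for (z,) in gen(S, {1: y}):      # gen_{1:y}(S)
            out.add((x, y, z))
    return out

R = {(1, 2), (3, 4)}
S = {(2, 5), (2, 6), (4, 7)}
assert join_RS(R, S) == {(1, 2, 5), (1, 2, 6), (3, 4, 7)}
```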
Develop an algorithm for translating calculus queries into programs using generators.
Describe syntactic restrictions on the calculus that ensure that your algorithm succeeds.
Exercise 5.23 [Cod72b] (Tuple calculus) We use a set tvar of sorted tuple variables. The tuple calculus is defined as follows. If t is a tuple variable and A is an attribute in the sort of t, then t.A is a term. A constant is also a term. The atomic formulas are either of the form R(t) with the appropriate constraint on sorts, or e = e′, where e, e′ are terms. Formulas are constructed as in the standard relational calculus. For example, query (5.1) is expressed by the tuple calculus query

{t : title | ∃s: title, director, actor[Movie(s) ∧ t.title = s.title
  ∧ s.director = "Hitchcock"
  ∧ ¬∃u: title, director, actor[Movie(u) ∧ u.title = s.title
  ∧ u.actor = "Hitchcock"]]}.

Give a formal definition for the syntax of the tuple calculus and for the relativized interpretation, active domain, and domain-independent semantics. Develop an analog of safe range. Prove the equivalence of the conventional calculus and tuple calculus under all of these semantics.
Exercise 5.24 Prove that the relational calculus and the family of nr-datalog¬ programs with single-relation output have equivalent expressive power by using direct simulations between the two families.
Exercise 5.25 [Top87] Let R be a database schema, and define the binary relation gen(erates) on variables and formulas as follows:

gen(x, φ) if φ = R(u) for some R ∈ R and x ∈ free(φ)
gen(x, ¬φ) if gen(x, pushnot(¬φ))
gen(x, ∃yφ) if x, y are distinct and gen(x, φ)
gen(x, ∀yφ) if x, y are distinct and gen(x, φ)
gen(x, φ ∨ ψ) if gen(x, φ) and gen(x, ψ)
gen(x, φ ∧ ψ) if gen(x, φ) or gen(x, ψ),

where pushnot(¬φ) is defined in the natural manner to be the result of pushing the negation into the next highest level logical connective (with consecutive negations cancelling each other) unless φ is an atom (using the rewrite rules 5, 6, 7, 10, and 11 from Fig. 5.1). A formula φ is allowed
(i) if x ∈ free(φ) then gen(x, φ);
(ii) if for each subformula ∃yψ of φ, gen(y, ψ) holds; and
(iii) if for each subformula ∀yψ of φ, gen(y, ¬ψ) holds.
A calculus query is allowed if its formula is allowed.
(a) Exhibit a query that is allowed but not safe range.
(b) Prove that each allowed query is domain independent.
(In [VanGT91, EHJ93] a translation of allowed formulas into the algebra is presented.)
Exercise 5.26 [Nic82] The notion of range-restricted queries, which ensures domain independence, is based on properties of the normal form equivalents of queries. Let q = {⃗x | φ} be a calculus query, and let φ_DNF = Q₁y₁ ··· Qₚyₚ(D₁ ∨ ··· ∨ Dₙ) be the result of transforming φ into PNF with DNF matrix using the rewrite rules of Fig. 5.1; and similarly let φ_CNF = Q′₁z₁ ··· Q′ᵣzᵣ(C₁ ∧ ··· ∧ Cₘ) be the result of transforming φ into PNF with CNF matrix. The query q is range restricted if
(i) each free variable x in φ occurs in a positive literal (other than x = y) in every Dᵢ;
(ii) each existentially quantified variable x in φ_DNF occurs in a positive literal (other than x = y) in every Dᵢ where x occurs; and
(iii) each universally quantified variable x in φ_CNF occurs in a negative literal (other than x = y) in every Cⱼ where x occurs.
Prove that range-restricted queries are domain independent. (In [VanGT91] a translation of the range-restricted queries into the algebra is presented.)
Exercise 5.27 [VanGT91] Suppose that R[Product, Part] holds product numbers and the parts that are used to make them, and S[Supplier, Part] holds supplier names and the parts that they supply. Consider the queries

q₁ = {x | ∀y(R(100, y) → S(x, y))}
q₂ = {⟨⟩ | ∃x∀y(R(100, y) → S(x, y))}

(a) Prove that q₁ is not domain independent.
(b) Prove that q₂ is not allowed (Exercise 5.25) but it is range restricted (Exercise 5.26) and hence domain independent.
(c) Find an algebra query q′ equivalent to q₂.
Exercise 5.28 [Klu82] Consider a database schema with relations Dept[Name, Head, College], Faculty[Name, Dname], and Grad[Name, MajorProf, GrantAmt], and the query

For each department in the Letters and Science College, compute the total graduate student support for each of the department's faculty members, and produce as output a relation that includes all pairs ⟨d, a⟩, where d is a department in the Letters and Science College, and a is the average graduate student support per faculty member in d.

Write algebra and calculus queries that express this query.
Exercise 5.29 We consider constraint databases involving polynomial inequalities over the reals. Let I₁ = {(9x₁² + 4x₂ ≤ 0)} be a generalized instance over AB, where x₁ ranges over A and x₂ ranges over B, and let I₂ = {(x₃ − x₁ ≤ 0)} over AC. Express π_BC(I₁ ⋈ I₂) as a generalized instance.
Exercise 5.30 Recall Theorem 5.6.1.
(a) Let finite d ⊆ dom be fixed, C be a set of new symbols, and t be a tuple with placeholders. Describe a generalized tuple (in the sense of constraint databases) t′ whose semantics are equal to sem_d(t).
(b) Show that the family of databases representable by sets of tuples with placeholders is closed under the relational calculus.
Exercise 5.31 Prove Theorem 5.6.1.
Exercise 5.32 [Mai80] (Unrestricted algebra) For this exercise we permit relations to be finite or infinite. Consider the complement operator ^c defined on instances I of arity n by I^c = domⁿ − I. (The analogous operator is defined for the named algebra.) Prove that the calculus under the natural interpretation is equivalent to the algebra with operators {σ, π, ×, ∪, ^c}.
Exercise 5.33 A total mapping τ from instances over R to instances over S is C-generic for C ⊆ dom iff for each bijection ρ over dom that is the identity on C, τ and ρ commute. That is, τ(ρ(I)) = ρ(τ(I)) for each instance I of R. The mapping τ is generic if it is C-generic for some finite C ⊆ dom. Prove that each relational algebra query is generic; in particular, that each algebra query q is adom(q)-generic.
Exercise 5.34 Let R be a unary relation name. A hyperplane query over R is a query of the form σ_F(R × ··· × R) (with 0 or more occurrences of R), where F is a conjunction of atoms of the form i = j, i ≠ j, i = a, or i ≠ a (for indexes i, j and constant a). A formula F of this form is called a hyperplane formula. A hyperplane-union query over R is a query of the form σ_F(R × ··· × R), where F is a disjunction of hyperplane formulas; a formula of this form is called a hyperplane-union formula.
(a) Show that if q is an algebra query over R, then there is an n ≥ 0 and a hyperplane-union query q′ such that for all instances I over R, if |I| ≥ n and adom(I) ∩ adom(q) = ∅, then q(I) = q′(I).
The query even is defined over R as follows: even(I) = {⟨⟩} (i.e., yes) if |I| is even; and even(I) = ∅ (i.e., no) otherwise.
(b) Prove that there is no algebra query q over R such that q ≡ even.
Exercise 5.35 [CH80b] (Unsorted algebra) An h-relation (for heterogeneous relation) is a finite set of tuples not necessarily of the same arity.
(a) Design an algebra for h-relations that is at least as expressive as the relational algebra.
(b) Show that the algebra in (a) can be chosen to have the additional property that if q is a query in this algebra taking standard relational input and producing standard relational output, then there is a standard algebra query q′ such that q′ ≡ q.
Exercise 5.36 [IL84] (Cylindric algebra) Let n be a positive integer, R[A₁, …, Aₙ] a relation schema, and C a (possibly infinite) set of constants. Recall that a Boolean algebra is a 6-tuple (B, ∨, ∧, ¬, ⊥, ⊤), where B is a set containing ⊥ and ⊤; ∨, ∧ are binary operations on B; and ¬ is a unary operation on B such that for all x, y, z ∈ B:
(a) x ∨ y = y ∨ x;
(b) x ∧ y = y ∧ x;
(c) x ∨ (y ∧ z) = (x ∨ y) ∧ (x ∨ z);
(d) x ∧ (y ∨ z) = (x ∧ y) ∨ (x ∧ z);
(e) x ∨ ⊥ = x;
(f) x ∧ ⊤ = x;
(g) x ∨ ¬x = ⊤;
(h) x ∧ ¬x = ⊥; and
(i) ⊥ ≠ ⊤.
For a Boolean algebra, define x ≤ y to mean x ∧ y = x.
(a) Show that ⟨R_C, ∪, ∩, ^c, ∅, Cⁿ⟩ is a Boolean algebra, where R_C is the set of all (possibly infinite) R-relations over constants in C and ^c denotes the unary complement operator, defined so that I^c = Cⁿ − I. In addition, show that I ≤ J iff I ⊆ J.
Let the diagonals d_{ij} be defined by the statement: for each i, j, d_{ij} = σ_{Aᵢ=Aⱼ}(Cⁿ); and let the i-th cylinder Cᵢ be defined for each I by the statement: CᵢI is the relation over R_C defined by

CᵢI = {t | π_{A₁…A_{i−1}A_{i+1}…Aₙ}(t) ∈ π_{A₁…A_{i−1}A_{i+1}…Aₙ}(I) and t(Aᵢ) ∈ C}.
(b) Show the following properties of cylindric algebras: (1) Cᵢ∅ = ∅; (2) x ≤ Cᵢx; (3) Cᵢ(x ∩ Cᵢy) = Cᵢx ∩ Cᵢy; (4) CᵢCⱼx = CⱼCᵢx; (5) d_{ii} = Cⁿ; (6) if i ≠ j and i ≠ k, then d_{jk} = Cᵢ(d_{ji} ∩ d_{ik}); (7) if i ≠ j, then Cᵢ(d_{ij} ∩ x) ∩ Cᵢ(d_{ij} ∩ x^c) = ∅.
(c) Let h be the mapping from any (possibly infinite) relation S with sort(S) ⊆ A₁ … Aₙ with entries in C to a relation over R obtained by extending each tuple in S to A₁ … Aₙ in all possible ways with values in C. Prove that (1) h(R₁ ⋈ R₂) = h(R₁) ∩ h(R₂) and (2) if A₁ ∈ sort(R₁), then h(π_{sort(R₁)−{A₁}}(R₁)) = C₁h(R₁).
6
Static Analysis and Optimization
Alice: Do you guys mean real optimization?
Riccardo: Well, most of the time it's local maneuvering.
Vittorio: But sometimes we go beyond incremental reform . . .
Sergio: . . . with provably global results.
This chapter examines the conjunctive and first-order queries from the perspective of static analysis (in the sense of programming languages). It is shown that many properties of conjunctive queries (e.g., equivalence, containment) are decidable, although they are not decidable for first-order queries. Static analysis techniques are also applied here in connection with query optimization (i.e., transforming queries expressed in a high-level, largely declarative language into equivalent queries or machine instruction programs that are arguably more efficient than a naive execution of the initial query).
To provide background, this chapter begins with a survey of practical optimization
techniques for the conjunctive queries. The majority of practically oriented research and
development on query optimization has been focused on variants of the conjunctive queries,
possibly extended with arithmetic operators and comparators. Because of the myriad fac-
tors that play a role in query evaluation, most practically successful techniques rely heavily
on heuristics.
Next the chapter presents the elegant and important Homomorphism Theorem, which characterizes containment and equivalence between conjunctive queries. This leads to the notion of tableau minimization: For each tableau query there is a unique (up to isomorphism) equivalent tableau query with the smallest number of rows. This provides a theoretical notion of true optimality for conjunctive queries. It is also shown that deciding these properties and minimizing conjunctive queries is NP-complete in the size of the input queries.
Undecidability results are then presented for the first-order queries. Although related
to undecidability results for conventional first-order logic, the proof techniques used here
are necessarily different because all instances considered are finite by definition. The
undecidability results imply that there is no hope of developing an algorithm that performs
optimization of first-order queries that is complete. Only limited optimization of first-order
queries involving difference is provided in most systems.
The chapter closes by returning to a specialized subset of the conjunctive queries based
on acyclic joins. These have been shown to enjoy several interesting properties, some
yielding insight into more efficient query processing.
Chapter 13 in Part D examines techniques for optimizing datalog queries.
6.1 Issues in Practical Query Optimization
Query optimization is one of the central topics of database systems. A myriad of factors
play a role in this area, including storage and indexing techniques, page sizes and paging
protocols, the underlying operating system, statistical properties of the stored data, statis-
tical properties of anticipated queries and updates, implementations of specific operators,
and the expressive power of the query languages used, to name a few. Query optimization
can be performed at all levels of the three-level database architecture. At the physical level,
this work focuses on, for example, access techniques, statistical properties of stored data,
and buffer management. At a more logical level, algebraic equivalences are used to rewrite
queries into forms that can be implemented more efficiently.
We begin now with a discussion of rudimentary considerations that affect query pro-
cessing (including the usual cost measurements) and basic methods for accessing relations
and implementing algebraic operators. Next an optimization approach based on algebraic
equivalences is described; this is used to replace a given algebraic expression by an equiva-
lent one that can typically be computed more quickly. This leads to the important notion of
query evaluation plans and how they are used in modern systems to represent and choose
among many alternative implementations of a query. We then examine intricate techniques
for implementing multiway joins based on different orderings of binary joins and on join
decomposition.
The discussion presented in this section only scratches the surface of the rich body of
systems-oriented research and development on query optimizers, indicating only a handful
of the most important factors that are involved. Nothing will be said about several factors,
such as the impact of negation in queries, main-memory buffering strategies, and the
implications of different environments (such as distributed, object oriented, real time, large
main memory, and secondary memories other than conventional disks). In part due to the
intricacy and number of interrelated factors involved, little of the fundamental theoretical
research on query optimization has found its way into practice. As the field is maturing,
salient aspects of query optimization are becoming isolated; this may provide some of the
foothold needed for significant theoretical work to emerge and be applied.
The Physical Model
The usual assumption of relational databases is that the current database state is so large
that it must be stored in secondary memory (e.g., on disk). Manipulation of the stored
data, including the application of algebraic operators, requires making copies in primary
memory of portions of the stored data and storing intermediate and nal results again
in secondary memory. By far the major time expense in query processing, for a single-
processor system, is the number of disk pages that must be swapped in and out of primary
memory. In the case of distributed systems, the communication costs typically dominate
all others and become an important focus of optimization.
Viewed a little more abstractly, the physical level of relational query implementation
involves three basic activities: (1) generating streams of tuples, (2) manipulating streams
of tuples (e.g., to perform projections), and (3) combining streams of tuples (e.g., to per-
form joins, unions, and intersections). Indexing methods, including primarily B-trees and
hash indexes, can be used to reduce significantly the size of some streams. Although not
discussed here, it is important to consider the cost of maintaining indexes and clusterings
as updates to the database occur.
Main-memory buffering techniques (including the partitioning of main memory into
segments and paging policies such as deleting pages based on policies of least recent use
and most recent use) can significantly impact the number of page I/Os used.
Speaking broadly, an evaluation plan (or access plan) for a query, a stored database
state, and a collection of existing indexes and other data structures is a specication of a
sequence of operations that will compute the answer to the query. The term evaluation
plan is used most often to refer to specications that are at a low physical level but
may sometimes be used for higher-level specications. As we shall see, query optimizers
typically develop several evaluation plans and then choose one for execution.
Implementation of Algebraic Operators
To illustrate the basic building blocks from which evaluation plans are constructed, we now
describe basic implementation techniques for some of the relational operators.
Selection can be realized in a straightforward manner by a scan of the argument
relation and can thus be achieved in linear time. Access structures such as B-tree indexes
or hash tables can be used to reduce the search time needed to find the selected tuples. In
the case of selections with single tuple output, this permits evaluation within essentially
constant time (e.g., two or three page fetches). For larger outputs, the selection may take
two or three page fetches per output tuple; this can be improved significantly if the input
relation is clustered (i.e., stored so that all tuples with a given attribute value are on the
same or contiguous disk pages).
Projection is a bit more complex because it actually calls for two essentially differ-
ent operations: tuple rewriting and duplicate elimination. The tuple rewriting is typically
accomplished by bringing tuples into primary memory and then rewriting them with coor-
dinate values permuted and removed as called for. This may yield a listing of tuples that
contains duplicates. If a pure relational algebra projection is to be implemented, then these
duplicates must be removed. One strategy for this involves sorting the list of tuples and
then removing duplicates; this takes time on the order of n log n. Another approach that is
faster in some cases uses a hash function that incorporates all coordinate values of a tuple.
Because of the potential expense incurred by duplicate elimination, most practical re-
lational languages permit duplicates in intermediate and nal results. An explicit command
(e.g., distinct) that calls for duplicate elimination is typically provided. Even in languages
that support a pure algebra, it may be more efficient to leave duplicates in intermediate
results and perform duplicate elimination once as a nal step.
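Both duplicate-elimination strategies are easy to sketch in Python over lists of tuples. This is an illustration only (not the book's code): the sort-based version costs on the order of n log n, while the hash-based version is expected linear time and preserves the order of first occurrence.

```python
# Sort-based duplicate elimination: sort, then drop adjacent repeats.
def dedup_sort(tuples):
    out = []
    for t in sorted(tuples):          # O(n log n)
        if not out or out[-1] != t:   # adjacent duplicates collapse after sorting
            out.append(t)
    return out

# Hash-based duplicate elimination: one pass with a set of seen tuples.
def dedup_hash(tuples):
    seen = set()
    return [t for t in tuples if not (t in seen or seen.add(t))]
```

Note that the sort-based version returns tuples in sorted order, whereas the hash-based one keeps the input's first-occurrence order.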
The equi-join is typically much more expensive than selection or projection because
two relations are involved. The following naive nested loop implementation of ⋈_F will
take time on the order of the product n₁ · n₂ of the sizes of the input relations I₁, I₂:
J := ∅;
for each u in I₁
    for each v in I₂
        if u and v are joinable then J := J ∪ {u ⋈_F v}.
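The nested loop above can be made concrete in Python. This is a sketch, not the book's code; relations are represented as lists of dicts mapping attribute names to values, and `on` names the shared join attributes, both of which are assumptions of this illustration.

```python
# Naive nested-loop equi-join of two relations, each a list of dicts.
# Joins on the attributes named in `on`; runs in O(|r1| * |r2|) time.
def nested_loop_join(r1, r2, on):
    out = []
    for u in r1:
        for v in r2:
            if all(u[a] == v[a] for a in on):  # u and v are joinable
                out.append({**u, **v})         # the joined tuple u join v
    return out

movies = [{"Title": "Persona", "Director": "Bergman"}]
pariscope = [{"Theater": "Action Christine", "Title": "Persona"}]
print(nested_loop_join(movies, pariscope, ["Title"]))
```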
Typically this can be improved by using the sort-merge algorithm, which independently
sorts both inputs according to the join attributes and then performs a simultaneous scan of
both relations, outputting join tuples as discovered. This reduces the running time to the
order of max(n₁ log n₁ + n₂ log n₂, size of output).
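A sketch of the sort-merge algorithm in Python, for relations as lists of tuples joined on key positions `k1` and `k2` (an encoding assumed for this illustration). The handling of runs of equal keys on both sides is what makes the size-of-output term appear in the running time.

```python
# Sort-merge equi-join: sort each input on its join key, then merge.
def sort_merge_join(r1, r2, k1, k2):
    r1 = sorted(r1, key=lambda t: t[k1])   # O(n1 log n1)
    r2 = sorted(r2, key=lambda t: t[k2])   # O(n2 log n2)
    out, i, j = [], 0, 0
    while i < len(r1) and j < len(r2):
        a, b = r1[i][k1], r2[j][k2]
        if a < b:
            i += 1
        elif a > b:
            j += 1
        else:
            # Collect the run of tuples with key a on each side,
            # and output all combinations (proportional to output size).
            i2 = i
            while i2 < len(r1) and r1[i2][k1] == a:
                i2 += 1
            j2 = j
            while j2 < len(r2) and r2[j2][k2] == a:
                j2 += 1
            for u in r1[i:i2]:
                for v in r2[j:j2]:
                    out.append(u + v)
            i, j = i2, j2
    return out
```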
In many cases a more efficient implementation of join can be accomplished by a vari-
ant of the foregoing nested loop algorithm that uses indexes. In particular, replace the inner
loop by indexed retrievals to tuples of I₂ that match the tuple of I₁ under consideration.
Assuming that a small number of tuples of I₂ match a given tuple of I₁, this computes the
join in time proportional to the size of I₁. We shall consider implementations of multiway
joins later in this section and again in Section 6.4. Additional techniques have been devel-
oped for implementing richer joins that include testing, e.g., relationships based on order
(≤).
Cross-product in isolation is perhaps the most expensive algebra operation: The output
necessarily has size the product of the sizes of the two inputs. In practice this arises only
rarely; it is much more common that selection conditions on the cross-product can be used
to transform it into some form of join.
Query Trees and Query Rewriting
Alternative query evaluation plans are usually generated by rewriting (i.e., by local trans-
formation rules). This can be viewed as a specialized case of program transformation. Two
kinds of transformations are typically used in query optimization: one that maps from the
higher-level language (e.g., the algebra) into the physical language, and others that stay
within the same language but lead to alternative, equivalent implementations of a given
construct.
We present shortly a family of rewriting rules that illustrates the general flavor of this
component of query optimizers (see Fig. 6.2). Unlike true optimizers, however, the rules
presented here focus exclusively on the algebra. Later we examine the larger issue of how
rules such as these are used to find optimal and near-optimal evaluation plans.
We shall use the SPC algebra, generalized by permitting positive conjunctive selection
and equi-join. A central concept used is that of query tree, which is essentially the parse
tree of an algebraic expression. Consider again Query (4.4), expressed here as a rule:
ans(x_th, x_ad) ← Movies(x_ti, Bergman, x_ac), Pariscope(x_th, x_ti, x_s),
                  Location(x_th, x_ad, x_p).

A naive translation into the generalized SPC algebra yields

q₁ = π_{4,8}(σ_{2=Bergman}((Movies ⋈_{1=2} Pariscope) ⋈_{4=1} Location)).
[Figure: (a) the query tree of q₁, with π_{4,8} and σ_{2=Bergman} above the joins
(Movies ⋈_{1=2} Pariscope) ⋈_{4=1} Location; (b) an equivalent tree in which the
selection σ_{2=Bergman} is applied directly to Movies and projections are pushed
below the joins]

(a) (b)
Figure 6.1: Two query trees for Query (4.4) from Chapter 4
The query tree of this expression is shown in Fig. 6.1(a).
To provide a rough idea of how evaluation costs might be estimated, suppose now that
Movies has 10,000 tuples, with about 5 tuples per movie; Pariscope has about 200 tuples,
and Location has about 100 tuples. Suppose further that in each relation there are about 50
tuples per page and that no indexes are available.
Under a naive evaluation of q₁, an intermediate result would be produced for each
internal node of q₁'s query tree. In this example, then, the join of Movies and Pariscope
would produce about 200 · 5 = 1000 tuples, which (being about twice as wide as the input
tuples) will occupy about 40 pages. The second equi-join will yield about 1000 tuples that
fit 18 to a page, thus occupying about 55 pages. Assuming that there are four Bergman
films playing in one or two theaters each, the final answer will contain about six tuples.
The total number of page fetches performed here is about 206 for reading the input relations
(assuming that no indexes are available) and 95 for working with the intermediate relations.
Additional page fetches might be required by the join operations performed.
Consider now the query q₂ whose query tree is illustrated in Fig. 6.1(b). It is easily
verified that this is equivalent to q₁. Intuitively, q₂ was formed from q₁ by pushing
selections and projections as far down the tree as possible; this generally reduces the
size of intermediate results and thus the cost of computing with them.
In this example, assuming that all (i.e., about 20) of Bergman's films are in Movies, the
selection on Movies will yield about 100 tuples; when projected these will fit onto a single
page. Joining with Pariscope will yield about six tuples, and the nal join with Location
will again yield six tuples. Thus only one page is needed to hold the intermediate results
constructed during this evaluation, a considerable savings over the 95 pages needed by the
previous one.
It is often beneficial to combine several algebraic operators into a single implemented
operation. As a general rule of thumb, it is typical to materialize the inputs of each equi-
join. The equi-join itself and all unary operations directly above it in the query tree are
performed before output. The dashed ovals of Fig. 6.1(b) illustrate a natural grouping that
can be used for this tree. In practical systems, the implementation and grouping of operators
is typically considered in much finer detail.
The use of different query trees and, more generally, different evaluation plans can
yield dramatically different costs in the evaluation of equivalent queries. Does this mean
that the user will have to be extremely careful in expressing queries? The beauty of query
optimization is that the answer is a resounding no. The user may choose any representation
of a query, and the system will be responsible for generating several equivalent evaluation
plans and choosing the least expensive one. For this reason, even though the relational
algebra is conceptually procedural, it is implemented as an essentially declarative language.
In the case of the algebra, the generation of evaluation plans is typically based on the
existence of rules for transforming algebraic expressions into equivalent ones. We have
already seen rewrite rules in the context of transforming SPC and SPJR expressions into
normal form (see Propositions 4.4.2 and 4.4.6). A different set of rules is useful in the
present context due to the focus on optimizing the execution time and space requirements.
In Fig. 6.2 we present a family of representative rewrite rules (three with inverses) that
can be used for performing the transformations needed for optimization at the logical level.
In these rules we view cross-product as a special case of equi-join in which the selection
formula is empty. Because of their similarity to the rules used for the normal form results,
several of the rules are shown only in abstract form; detailed formulation of these, as well
as verification of their soundness, is left for the reader (see Exercise 6.1). We also include
the following rule:
Simplify-identities: replace π_{1,...,arity(q)}(q) by q; replace σ_{i=i}(q) by q;
replace q × {⟨⟩} by q; replace q × {} by {}; and replace
q ⋈_{1=1 ∧ ... ∧ arity(q)=arity(q)} q by q.
Generating and Choosing between Evaluation Plans
As suggested in Fig. 6.2, in most cases the transformations should be performed in a certain
direction. For example, the fifth rule suggests that it is always desirable to push selections
through joins. However, situations can arise in which pushing a selection through a join is
in fact much more costly than performing it second (see Exercise 6.2). The broad variety
of factors that influence the time needed to execute a given query evaluation plan makes
it virtually impossible to find an optimal one using purely analytic techniques. For this
reason, modern optimizers typically adopt the following pragmatic strategy: (1) generate
a possibly large number of alternative evaluation plans; (2) estimate the costs of executing
σ_F(σ_{F′}(q)) → σ_{F∧F′}(q)
π_j(π_k(q)) → π_l(q)
σ_F(π_l(q)) → π_l(σ_F(q))
q₁ ⋈ q₂ → q₂ ⋈ q₁
σ_F(q₁ ⋈_G q₂) → σ_F(q₁) ⋈_G q₂
σ_F(q₁ ⋈_G q₂) → q₁ ⋈_G σ_F(q₂)
σ_F(q₁ ⋈_G q₂) → q₁ ⋈_{G∧F} q₂
π_l(q₁ ⋈_G q₂) → π_l(q₁) ⋈_G q₂
π_l(q₁ ⋈_G q₂) → q₁ ⋈_G π_k(q₂)

Figure 6.2: Rewriting rules for SPC algebra
them; and (3) select the one of lowest cost. The database system then executes the selected
evaluation plan.
In early work, the transformation rules used and the method for evaluation plan genera-
tion were essentially intermixed. Motivated in part by the desire to make database systems
extensible, more recent proposals have isolated the transformation rules from the algo-
rithms for generating evaluation plans. This has the advantages of exposing the semantics
of evaluation plan generation and making it easier to incorporate new kinds of information
into the framework.
A representative system for generating evaluation plans was developed in connection
with the Exodus database toolkit. In this system, techniques from AI are used, and a set
of transformation rules is assumed. During processing, a set of partial evaluation plans is
maintained along with a set of possible locations where rules can be applied. Heuristics are
used to determine which transformation to apply next, so that an exhaustive search through
all possible evaluation plans can be avoided while still having a good chance of finding an
optimal or near-optimal evaluation plan. Several of the heuristics include weighting factors
that can be tuned, either automatically or by the dba, to reflect experience gained while
using the optimizer.
Early work on estimating the cost of evaluation plans was based essentially on
thought experiments similar to those used earlier in this chapter. These analyses use
factors including the size of relations, their expected statistical properties, selectivity fac-
tors of joins and selections, and existing indexes. In the context of large queries involving
multiple joins, however, it is difficult if not impossible to predict the sizes of intermediate
results based only on statistical properties. This provides one motivation for recent research
on using random and background sampling to estimate the size of subquery answers, which
can provide more reliable estimates of the overall cost of an evaluation plan.
Sideways Information Passing
We close this section by considering two practical approaches to implementing multiway
joins as they arise in practical query languages.
Much of the early research on practical query optimization was performed in con-
nection with the System R and INGRES systems. The basic building block of the query
languages used in these systems (SQL and Quel, respectively) takes the form of select-
from-where clauses or blocks. For example, as detailed further in Chapter 7, Query (4.4)
can be expressed in SQL as
select Theater, Address
from Movies, Location, Pariscope
where Director = "Bergman"
and Movies.Title = Pariscope.Title
and Pariscope.Theater = Location.Theater.
This can be translated into the algebra as a join between the three relations of the from
part, using join condition given by the where and projecting onto the columns mentioned
in the select. Thus a typical select-from-where block can be expressed by an SPC query as

π_j(σ_F(R₁ × ··· × R_n)).
With such expressions, the System R query optimizer pushes selections that affect a
single relation into the join and then considers evaluation plans based on left-to-right joins
that have the form

(. . . (R_{i₁} ⋈ R_{i₂}) ⋈ ··· ⋈ R_{i_n})

using different orderings R_{i₁}, . . . , R_{i_n}. We now present a heuristic based on sideways in-
formation passing, which is used in the System R optimizer for eliminating some possible
orderings from consideration. Interestingly, this heuristic has also played an important role
in developing evaluation techniques for recursive datalog queries, as discussed in Chap-
ter 13.
To describe the heuristic, we rewrite the preceding SPC query as a (generalized) rule
that has the form

(∗)  ans(u) ← R₁(u₁), . . . , R_n(u_n), C₁, . . . , C_m,

where all equalities of the selection condition F are incorporated by using constants and
equating variables in the free tuples u₁, . . . , u_n, and the expressions C₁, . . . , C_m are con-
ditions in the selection condition F not captured in that way. (This might include, e.g.,
inequalities and conditions based on order.) We shall call the R_i(u_i)'s relation atoms and
the C_j's constraint atoms.
Example 6.1.1 Consider the rule

ans(z) ← P(a, v), Q(b, w, x), R(v, w, y), S(x, y, z), v ≤ x,

where a, b denote constants. A common assumption in this case is that there are few values
for v such that P(a, v) is satisfied. This in turn suggests that there will be few triples
(v, w, y) satisfying P(a, v) ⋈ R(v, w, y). Continuing by transitivity, then, we also expect
there to be few 5-tuples (v, w, y, x, z) that satisfy the join of this with S(x, y, z).
[Figure: the sip graph, with nodes P(a, v), Q(b, w, x), R(v, w, y), and S(x, y, z);
edges join atoms sharing a variable, and the nodes P and Q, which contain
constants, are specially marked]
Figure 6.3: A sip graph
More generally, the sideways information passing graph, or sip graph, of a rule that
has the form (∗) just shown has as vertexes the set of relation atoms of the rule, and includes
an undirected edge between atoms R_i(u_i), R_j(u_j) if u_i and u_j have at least one variable in
common. Furthermore, each node with a constant appearing is specially marked. The sip
graph for the rule of Example 6.1.1 is shown in Fig. 6.3.
Let us assume that the sip graph for a rule is connected. In this case, a sideways
information passing strategy (sip strategy) for (∗) is an ordering A₁, . . . , A_n of the atoms in
the rule, such that for each j > 1, either

(a) a constant occurs in A_j;
(b) A_j is a relational atom and there is at least one i < j such that {A_i, A_j} is an
    edge of the sip graph of (∗); or
(c) A_j is a constraint atom and each variable occurring in A_j occurs in some atom
    A_i for i < j.
Example 6.1.2 A representative sample of the several sip strategies for the rule of Ex-
ample 6.1.1 is as follows:

P(a, v), Q(b, w, x), v ≤ x, R(v, w, y), S(x, y, z)
P(a, v), R(v, w, y), S(x, y, z), v ≤ x, Q(b, w, x)
Q(b, w, x), R(v, w, y), P(a, v), S(x, y, z), v ≤ x.
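The three conditions defining a sip strategy can be checked mechanically. The following Python sketch uses an encoding of atoms as (name, variables, has_constant, is_relation) tuples, which is an assumption made for this illustration, not the book's notation.

```python
# Check whether an ordering of atoms is a sip strategy:
# for each atom after the first, either (a) it contains a constant,
# (b) it is a relation atom sharing a variable with an earlier relation atom
#     (a sip-graph edge), or
# (c) it is a constraint atom whose variables all occur earlier.
def is_sip_strategy(atoms):
    for j in range(1, len(atoms)):
        _, vars_j, has_const, is_rel = atoms[j]
        if has_const:                                      # condition (a)
            continue
        earlier = atoms[:j]
        if is_rel:                                         # condition (b)
            if any(e_rel and (vars_j & e_vars)
                   for (_, e_vars, _, e_rel) in earlier):
                continue
        else:                                              # condition (c)
            covered = set.union(*(e_vars for (_, e_vars, _, _) in earlier))
            if vars_j <= covered:
                continue
        return False
    return True

# The atoms of the rule in Example 6.1.1 (constants a, b flagged):
P = ("P", {"v"}, True, True)
Q = ("Q", {"w", "x"}, True, True)
R = ("R", {"v", "w", "y"}, False, True)
S = ("S", {"x", "y", "z"}, False, True)
C = ("v<=x", {"v", "x"}, False, False)

print(is_sip_strategy([P, Q, C, R, S]))  # the first strategy above
```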
A sip strategy for the case in which the sip graph of a rule is not connected is a set
of sip strategies, one for each connected component of the sip graph. (Incorporation of
constraint atoms whose variables lie in distinct components is left for the reader.) The
System R optimizer focuses primarily on joins that have connected sip graphs, and it
considers only those join orderings that correspond to sip strategies. In some cases a more
efficient evaluation plan can be obtained if an arbitrary tree of binary joins is permitted;
see Exercise 6.5. While generating sip strategies the System R optimizer also considers
alternative implementations for the binary joins involved and records information about
the orderings that the partial results would have if computed. An additional logical-level
technique used in System R is illustrated in the following example.
Example 6.1.3 Let us consider again the rule

ans(z) ← P(a, v), R(v, w, y), S(x, y, z), v ≤ x, Q(b, w, x).

Suppose that a left-to-right join is performed according to the sip strategy shown. At
different intermediate stages certain variables can be forgotten, because they are not used
in the answer, nor are they used in subsequent joins. In particular, after the third atom the
variable y can be projected out, after the fourth atom v can be projected out, and after the
fifth atom w and x can be projected out. It is straightforward to formulate a general policy
for when to project out unneeded variables (see Exercise 6.4).
Query Decomposition: Join Detachment and Tuple Substitution
We now briefly discuss the two main techniques used in the original INGRES system for
evaluating join expressions. Both are based on decomposing multiway joins into smaller
ones.
While again focusing on SPC queries of the form

π_j(σ_F(R₁ × ··· × R_n))
for this discussion, we use a slightly different notation. In particular, tuple variables rather
than domain variables are used. We consider expressions of the form

(∗∗)  ans(s) ← R₁(s₁), . . . , R_n(s_n), C₁, . . . , C_m, T,

where s, s₁, . . . , s_n are tuple variables; C₁, . . . , C_m are Boolean conditions referring to
coordinates of the variables s₁, . . . , s_n (e.g., s₁.3 = s₄.1 ∧ s₂.4 = a); and T is a target
condition that gives a value for each coordinate of the target variable s. It is generally
assumed that none of C₁, . . . , C_m has ∨ as its parent connective.
A condition C_j is called single variable if it refers to only one of the variables s_i. At
any point in the processing it is possible to apply one or more single-variable conditions to
some R_i, thereby constructing an intermediate relation R′_i that can be used in place of R_i.
In the INGRES optimizer, this is typically combined with other steps.
Join detachment is useful for separating a query into two separate queries, where the
second refers to the first. Consider a query that has the specialized form

(∗∗∗)  ans(t) ← P₁(p₁), . . . , P_m(p_m), C₁, . . . , C_k, T,
       Q(q),
       R₁(r₁), . . . , R_n(r_n), D₁, . . . , D_l,
where conditions C₁, . . . , C_k, T refer only to variables t, p₁, . . . , p_m, q and D₁, . . . , D_l
refer only to q, r₁, . . . , r_n. It is easily verified that this is equivalent to the sequence

temp(q) ← Q(q), R₁(r₁), . . . , R_n(r_n), D₁, . . . , D_l
ans(t) ← P₁(p₁), . . . , P_m(p_m), temp(q), C₁, . . . , C_k, T.
In this example, variable q acts as a pivot around which the detachment is performed.
More general forms of join detachment can be developed in which a set of variables serves
as the pivot (see Exercise 6.6).
Tuple substitution chooses one of the underlying relations R_i and breaks the n-variable
join into a set of (n − 1)-variable joins, one for each tuple in R_i. Consider again a query
of form (∗∗) just shown. The tuple substitution of this on R_i is given by the program

for each r in R_i do
    ans(s) +← R₁(s₁), . . . , R_{i−1}(s_{i−1}), R_{i+1}(s_{i+1}), . . . , R_n(s_n),
              (C₁, . . . , C_m, T)[s_i/r].
Here we use +← to indicate that ans is to accumulate the values stemming from all tuples
r in (the value of) R_i; furthermore, r is substituted for s_i in all of the conditions.
There is an obvious trade-off here between reducing the number of variables in the join
and the number of tuples in R_i. In the INGRES optimizer, each of the R_i's is considered as a
candidate for forming the tuple substitution. During this process single-variable conditions
may be applied to the R_i's to decrease their size.
6.2 Global Optimization
The techniques for creating evaluation plans presented in the previous section are essen-
tially local in their operation: They focus on clusters of contiguous nodes in a query tree. In
this section we develop an approach to the global optimization of conjunctive queries. This
allows a transformation of an algebra query that removes several joins in a single step, a
capability not provided by the techniques of the previous section. The global optimization
technique is based on an elegant Homomorphism Theorem.
The Homomorphism Theorem
For two queries q₁, q₂ over the same schema R, q₁ is contained in q₂, denoted q₁ ⊆
q₂, if for each I over R, q₁(I) ⊆ q₂(I). Clearly, q₁ ≡ q₂ iff q₁ ⊆ q₂ and q₂ ⊆ q₁. The
Homomorphism Theorem provides a characterization for containment and equivalence of
conjunctive queries.
We focus here on the tableau formalism for conjunctive queries, although the rule-
based formalism could be used equally well. In addition, although the results hold for
tableau queries over database schemas involving more than one relation, the examples
presented focus on queries over a single relation.
Recall the notion of valuation: a mapping from variables to constants, extended to be
the identity on constants and generalized to free tuples and tableaux in the natural fashion.
(a) q₀ = (T₀, ⟨x, y⟩), where T₀ over R(A, B) has rows ⟨x, y₁⟩, ⟨x₁, y₁⟩, ⟨x₁, y₂⟩,
    ⟨x₂, y₂⟩, ⟨x₂, y⟩
(b) q₁ = (T₁, ⟨x, y⟩), where T₁ has rows ⟨x, y₁⟩, ⟨x₁, y₁⟩, ⟨x₁, y⟩
(c) q₂ = (T₂, ⟨x, y⟩), where T₂ has rows ⟨x, y₁⟩, ⟨x₁, y⟩
(d) q′ = (T′, ⟨x, y⟩), where T′ has the single row ⟨x, y⟩
Figure 6.4: Tableau queries used to illustrate the Homomorphism Theorem
Valuations are used in the definition of the semantics of tableau queries. More generally, a
substitution is a mapping from variables to variables and constants, which is extended to be
the identity on constants and generalized to free tuples and tableaux in the natural fashion.
As will be seen, substitutions play a central role in the Homomorphism Theorem.
We begin the discussion with two examples. The first presents several simple examples
of the Homomorphism Theorem in action.
Example 6.2.1 Consider the four tableau queries shown in Fig. 6.4. By using the Ho-
momorphism Theorem, it can be shown that q₀ ⊇ q₁, q₁ ⊆ q₂, and q₂ ⊇ q′.
To illustrate the flavor of the proof of the Homomorphism Theorem, we argue infor-
mally that q₁ ⊆ q₂. Note that there is a substitution θ such that θ(T₂) ⊆ T₁ and θ(⟨x, y⟩) =
⟨x, y⟩ [e.g., let θ(x₁) = θ(x₂) = x₁ and θ(y₁) = θ(y₂) = y₁]. Now suppose that I is an in-
stance over AB and that t ∈ q₁(I). Then there is a valuation ν such that ν(T₁) ⊆ I and
ν(⟨x, y⟩) = t. It follows that ν ∘ θ is a valuation that embeds T₂ into I with ν ∘ θ(⟨x, y⟩) =
t, whence t ∈ q₂(I).
Intuitively, the existence of a substitution embedding the tableau of q₂ into the tableau
of q₁ and mapping the summary of q₂ to the summary of q₁ implies that q₁ is more re-
strictive than q₂ (or, more correctly, no less restrictive than q₂). Surprisingly, the Homo-
morphism Theorem states that this is also a necessary condition for containment (i.e., if
q ⊆ q′, then q is more restrictive than q′ in this sense).
The second example illustrates a limitation of the techniques discussed in the previous
section.
Example 6.2.2 Consider the two tableau queries shown in Fig. 6.5. It can be shown that
q ≡ q′ but that q′ cannot be obtained from q using the rewrite rules of the previous section
(see Exercise 6.3) or the other optimization techniques presented there.
[Figure: two tableau queries q = (T, u) and q′ = (T′, u′) over R(A, B); the tableau
T′ contains a chain of rows on variables y₁, . . . , y_n]

Figure 6.5: Pair of equivalent tableau queries
Let q = (T, u) and q′ = (T′, u′) be two tableau queries over the same schema R. A
homomorphism from q′ to q is a substitution θ such that θ(T′) ⊆ T and θ(u′) = u.

Theorem 6.2.3 (Homomorphism Theorem) Let q = (T, u) and q′ = (T′, u′) be tab-
leau queries over the same schema R. Then q ⊆ q′ iff there exists a homomorphism from
(T′, u′) to (T, u).
Proof Suppose first that there exists a homomorphism θ from q′ to q. Let I be an instance
over R. To see that q(I) ⊆ q′(I), suppose that w ∈ q(I). Then there is a valuation ν that
embeds T into I such that ν(u) = w. It is clear that ν ∘ θ embeds T′ into I and ν ∘ θ(u′) = w,
whence w ∈ q′(I) as desired.
For the opposite inclusion, suppose that q ⊆ q′ [i.e., that (T, u) ⊆ (T′, u′)]. Speaking
intuitively, we complete the proof by applying both q and q′ to the "instance" T. Because
q will yield the free tuple u, q′ also yields u (i.e., there is an embedding of T′ into T that
maps u′ to u). To make this argument formal, we construct an instance I_T that is isomorphic
to T.
Let V be the set of variables occurring in T. For each x ∈ V, let a_x be a new distinct
constant not occurring in T or T′. Let μ be the valuation mapping each x to a_x, and
let I_T = μ(T). Because μ is a bijection from V to μ(V), and because μ(V) has empty
intersection with the constants occurring in T, the inverse μ⁻¹ of μ is well defined on
adom(I_T).
It is clear that μ(u) ∈ q(I_T), and so by assumption, μ(u) ∈ q′(I_T). Thus there is a
valuation ν that embeds T′ into I_T such that ν(u′) = μ(u). It is now easily verified that
μ⁻¹ ∘ ν is a homomorphism from q′ to q.
Permitting a slight abuse of notation, we have the following (see Exercise 6.8).

Corollary 6.2.4 For tableau queries q = (T, u) and q′ = (T′, u′), q ⊆ q′ iff u ∈ q′(T).
We also have

Corollary 6.2.5 Tableau queries q, q′ over schema R are equivalent iff there are
homomorphisms from q to q′ and from q′ to q.

In particular, if q = (T, u) and q′ = (T′, u′) are equivalent, then u and u′ are identical
up to one-one renaming of variables.
Only one direction of the preceding characterization holds if the underlying domain is finite (see Exercise 6.12). In addition, the direct generalization of the theorem to tableau queries with equality does not hold (see Exercise 6.9).
Query Optimization by Tableau Minimization
Although the Homomorphism Theorem yields a decision procedure for containment and equivalence between conjunctive queries, it does not immediately provide a mechanism, given a query q, to find an optimal query equivalent to q. The theorem is now applied to obtain just such a mechanism.
We note first that there are simple algorithms for translating tableau queries into (satisfiable) SPC queries and vice versa. More specifically, given a tableau query, the corresponding generalized SPC query has the form π_j(σ_F(R_1 × ⋯ × R_k)), where each component R_i corresponds to a distinct row of the tableau. For the opposite direction, one algorithm for translating SPC queries into tableau queries is first to translate into the normal form for generalized SPC queries and then into a tableau query. A more direct approach that inductively builds tableau queries corresponding to subexpressions of an SPC query can also be developed (see Exercise 4.18). Analogous remarks apply to SPJR queries.
The goal of the optimization presented here is to minimize the number of rows in
the tableau. Because the number of rows in a tableau query is one more than the number
of joins in the SPC (SPJR) query corresponding to that tableau (see Exercise 4.18c), the
tableau minimization procedure provides a way to minimize the number of joins in SPC
and SPJR queries.
Surprisingly, we show that an optimal tableau query equivalent to tableau query q can
be obtained simply by eliminating some rows from the tableau of q.
We say that a tableau query (T, u) is minimal if there is no query (S, v) equivalent to
(T, u) with |S| <|T| (i.e., where S has strictly fewer rows than T).
We can now demonstrate the following.
Theorem 6.2.6 Let q = (T, u) be a tableau query. Then there is a subset T′ of T such that q′ = (T′, u) is a minimal tableau query and q′ ≡ q.
Proof Let (S, v) be a minimal tableau query that is equivalent to q. By Corollary 6.2.5, there are homomorphisms θ from q to (S, v) and λ from (S, v) to q. Let T′ = λ(S). It is straightforward to verify that (T′, u) ≡ q and |T′| ≤ |S|. By minimality of (S, v), it follows that |T′| = |S|, and (T′, u) is minimal.
Example 6.2.7 illustrates how one might minimize a tableau by hand.
R    A    B    C
u1   x2   y1   z
u2   x    y1   z1
u3   x1   y    z1
u4   x    y2   z2
u5   x2   y2   z
u    x    y    z

Figure 6.6: The tableau (T, u)
Example 6.2.7 Let R be a relation schema of sort ABC and (T, u) the tableau over R in Fig. 6.6. To minimize (T, u), we wish to detect which rows of T can be eliminated. Consider u1. Suppose there is a homomorphism θ from (T, u) onto itself that eliminates u1 [i.e., u1 ∉ θ(T)]. Because any homomorphism on (T, u) is the identity on u, θ(z) = z. Thus θ(u1) must be u5. But then θ(y1) = y2, and θ(u2) ∈ {u4, u5}. In particular, θ(z1) ∈ {z2, z}. Because u3 involves z1, it follows that θ(u3) ≠ u3 and so θ(y) ≠ y. But the last inequality is impossible because y is in u, so θ(y) = y. It follows that row u1 cannot be eliminated and is in the minimal tableau. Similar arguments show that u2 and u3 cannot be eliminated. However, u4 and u5 can be eliminated using θ(y2) = y1, θ(z2) = z1 (and identity everywhere else). The preceding argument emphasizes the global nature of tableau minimization.
The preceding theorem suggests an improvement over the optimization strategies described in Section 6.1. Specifically, given a (satisfiable) conjunctive query q, the following steps can be used:
1. Translate q into a tableau query.
2. Minimize the number of rows in the tableau of this query.
3. Translate the result into a generalized SPC expression.
4. Apply the optimization techniques of Section 6.1.
As illustrated by Examples 6.2.2, 6.2.7, and 6.2.8, this approach has the advantage of
performing global optimizations that typical query rewriting systems cannot achieve.
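Step 2 of this procedure can be carried out directly, because by Theorem 6.2.6 a minimal equivalent tableau is obtained by deleting rows: dropping a row preserves equivalence iff there is a homomorphism from the full tableau onto the smaller one that fixes the summary. A small sketch, again assuming our "?"-string representation of variables (the representation is ours, not the text's):

```python
def is_var(t):
    return isinstance(t, str) and t.startswith("?")

def match_row(row, target, sub):
    # try to extend substitution `sub` so that sub(row) = target
    new = dict(sub)
    for a, b in zip(row, target):
        if is_var(a):
            if new.setdefault(a, b) != b:
                return None
        elif a != b:
            return None
    return new

def find_hom(rows, targets, sub):
    # backtracking search for a substitution mapping every row into `targets`
    if not rows:
        return sub
    for tgt in targets:
        ext = match_row(rows[0], tgt, sub)
        if ext is not None:
            res = find_hom(rows[1:], targets, ext)
            if res is not None:
                return res
    return None

def minimize(T, u):
    # Theorem 6.2.6: some subset of T already gives a minimal equivalent query.
    # Dropping a row keeps equivalence iff there is a homomorphism from (T, u)
    # onto the smaller tableau that is the identity on the summary u.
    T = list(T)
    changed = True
    while changed:
        changed = False
        for i in range(len(T)):
            rest = T[:i] + T[i + 1:]
            fix_u = {x: x for x in u if is_var(x)}
            if rest and find_hom(T, rest, fix_u) is not None:
                T, changed = rest, True
                break
    return T

# the tableau of Fig. 6.6, rows u1..u5
rows = [("?x2", "?y1", "?z"), ("?x", "?y1", "?z1"), ("?x1", "?y", "?z1"),
        ("?x", "?y2", "?z2"), ("?x2", "?y2", "?z")]
print(minimize(rows, ("?x", "?y", "?z")))  # rows u4 and u5 are dropped
```

On the tableau of Example 6.2.7 this reproduces the hand analysis: u4 and u5 are eliminated and u1, u2, u3 remain.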
Example 6.2.8 Consider the relation schema R of sort ABC and the SPJR query q over R:
πAB(σB=5(R)) ⋈ πBC(πAB(R) ⋈ πAC(σB=5(R))).
R    A    B   C
     x    5   z1
     x1   5   z2
     x1   5   z
u    x    5   z

Figure 6.7: Tableau equivalent to q
The tableau (T, u) corresponding to it is that of Fig. 6.7. To minimize (T, u), we wish to find a homomorphism that folds T onto a subtableau with a minimal number of rows. (If desired, this can be done in several stages, each of which eliminates one or more rows.) Note that the first row cannot be eliminated because every homomorphism is the identity on u and therefore on x. A similar observation holds for the third row. However, the second row can be eliminated using the homomorphism that maps z2 to z and is the identity everywhere else. Thus the minimal tableau equivalent to (T, u) consists of the first and third rows of T. An SPJR query equivalent to the minimized tableau is
πAB(σB=5(R)) ⋈ πBC(σB=5(R)).
Thus the optimization procedure resulted in saving one join operation.
Before leaving minimal tableau queries, we present a result that describes a strong correspondence between equivalent minimal tableau queries. Two tableau queries (T, u), (T′, u′) are isomorphic if there is a one-one substitution θ that maps variables to variables such that θ((T, u)) = (T′, u′). In other words, (T, u) and (T′, u′) are the same up to renaming of variables. The proof of this result is left to the reader (see Exercise 6.11).
Proposition 6.2.9 Let q = (T, u) and q′ = (T′, u′) be minimal and equivalent. Then q and q′ are isomorphic.
Complexity of Tableau Decision Problems
The following theorem shows that determining containment and equivalence between tableau queries is NP-complete and that tableau query minimization is NP-hard.
Theorem 6.2.10 The following problems, given tableau queries q, q′, are NP-complete:
(a) Is q ⊆ q′?
(b) Is q ≡ q′?
(c) Suppose that the tableau of q is obtained by deleting free tuples of the tableau of q′. Is q ≡ q′ in this case?
These results remain true if q, q′ are restricted to be single-relation typed tableau queries that have no constants.
Proof The proof is based on a reduction from the exact cover problem to the different tableau problems. The exact cover problem is to decide, given a set X = {x1, ..., xn} and a collection S = {S1, ..., Sm} of subsets of X such that ∪S = X, whether there is an exact cover of X by S (i.e., a subset S′ of S such that each member of X occurs in exactly one member of S′). The exact cover problem is known to be NP-complete.
We now sketch a polynomial transformation from instances E = (X, S) of the exact cover problem to pairs qE, q′E of typed tableau queries. This construction is then applied in various ways to obtain the NP-completeness results. The construction is illustrated in Fig. 6.8.
Let E = (X, S) be an instance of the exact cover problem, where X = {x1, ..., xn} and S = {S1, ..., Sm}. Let A1, ..., An, B1, ..., Bm be a listing of distinct attributes, and let R be chosen to have this set as its sort. Both qE and q′E are over relation R, and both queries have as summary t = ⟨A1 : a1, ..., An : an⟩, where a1, ..., an are distinct variables.
Let b1, ..., bm be an additional set of m distinct variables. The tableau TE of qE has n tuples, each corresponding to a different element of X. The tuple for xi has ai for attribute Ai; bj for attribute Bj for each j such that xi ∈ Sj; and a new, distinct variable for all other attributes.
Let c1, ..., cm be an additional set of m distinct variables. The tableau T′E of q′E has m tuples, each corresponding to a different element of S. The tuple for Sj has ai for attribute Ai for each i such that xi ∈ Sj; cj′ for attribute Bj′ for each j′ such that j′ ≠ j; and a new, distinct variable for all other attributes.
To illustrate the construction, let E = (X, S) be an instance of the exact cover problem, where X = {x1, x2, x3, x4} and S = {S1, S2, S3}, where
S1 = {x1, x3}
S2 = {x2, x3, x4}
S3 = {x2, x4}.
The tableau queries qE and q′E corresponding to (X, S) are shown in Fig. 6.8. (Here the blank entries indicate distinct, new variables.) Note that E = (X, S) has an exact cover, and qE ⊆ q′E.
More generally, it is straightforward to verify that for a given instance E = (X, S) of the exact cover problem, X has an exact cover in S iff qE ⊆ q′E. Verification of this, and of parts (b) and (c) of the theorem, is left for Exercise 6.16.
A subclass of the typed tableau queries for which containment and equivalence is decidable in polynomial time is considered in Exercise 6.21.
Although an NP-completeness result often suggests intractability, this conclusion may not be warranted in connection with the aforementioned result. The complexity there is measured relative to the size of the query rather than in terms of the underlying stored
(a) qE

R    A1   A2   A3   A4   B1   B2   B3
     a1                  b1
          a2                  b2   b3
               a3        b1   b2
                    a4        b2   b3
u    a1   a2   a3   a4

(b) q′E

R    A1   A2   A3   A4   B1   B2   B3
     a1        a3             c2   c3
          a2   a3   a4   c1        c3
          a2        a4   c1   c2
u    a1   a2   a3   a4

Figure 6.8: Tableau queries corresponding to an exact cover
data. Given an n-way join, the System R optimizer may potentially consider n! evaluation strategies based on different orderings of the n relations; this may be exponential in the size of the query. In many cases, the search for a minimal tableau (or optimal left-to-right join) may be justified because the data is so much larger than the initial query. More generally, in Part D we shall examine both data complexity and expression complexity, where the former focuses on complexity relative to the size of the data and the latter relative to the size of queries.
6.3 Static Analysis of the Relational Calculus
We now demonstrate that the decidability results for conjunctive queries demonstrated in the previous section do not hold when negation is incorporated (i.e., do not hold for the first-order queries). In particular, we present a general technique for proving the undecidability of problems involving static analysis of first-order queries and demonstrate the undecidability of three such problems.
We begin by focusing on the basic property of satisfiability. Recall that a query q is satisfiable if there is some input I such that q(I) is nonempty. All conjunctive queries are satisfiable (Proposition 4.2.2), and if equality is incorporated then satisfiability is not guaranteed but it is decidable (Exercise 4.5). This no longer holds for the calculus.
To prove this result, we use a reduction of the Post Correspondence Problem (PCP) (see Chapter 2) to the satisfiability problem. The reduction is most easily described in terms of the calculus; of course, it can also be established using the algebras or nr-datalog¬.
At first glance, it would appear that the result follows trivially from the analogous result for first-order logic (i.e., the undecidability of satisfiability of first-order sentences). There is, however, an important difference. In conventional first-order logic (see Chapter 2), both finite and infinite interpretations are considered. Satisfiability of first-order sentences is co-recursively enumerable (co-r.e.) but not recursive. This follows from Gödel's Completeness Theorem. In contrast, in the context of first-order queries, only finite instances are considered legal. This brings us into the realm of finite model theory. As will be shown, satisfiability of first-order queries is recursively enumerable (r.e.) but not recursive. (We shall revisit the contrast between conventional first-order logic and the database perspective, i.e., finite model theory, in Chapters 9 and 10.)
Theorem 6.3.1 Satisfiability of relational calculus queries is r.e. but not recursive.
Proof To see that the problem is r.e., imagine a procedure that, when given query q over R as input, generates all instances I over R and tests q(I) ≠ ∅ until a nonempty answer is found.
To show that satisfiability is not recursive, we reduce the PCP to the satisfiability problem. In particular, we show that if there were an algorithm for solving satisfiability, then it could be used to construct an algorithm that solves the PCP.
Let P = (u1, ..., un; v1, ..., vn) be an instance of the PCP (i.e., a pair of sequences of nonempty words over alphabet {0, 1}). We describe now a (domain independent) calculus query qP = {⟨⟩ | φP} with the property that qP is satisfiable iff P has a solution.
We shall use a relation schema R having relations ENC(ODING) with sort [A, B, C, D, E] and SYNCH(RONIZATION) with sort [F, G]. The query qP shall use constants {0, 1, $, c1, ..., cn, d1, ..., dn}. (The use of multiple relations and constants is largely a convenience; the result can be demonstrated using a single ternary relation and no constants. See Exercise 6.19.)
To illustrate the construction of the algorithm, consider the following instance of the PCP:
u1 = 011, u2 = 011, u3 = 0;  v1 = 0, v2 = 11, v3 = 01100.
Note that s = (1, 2, 3, 2) is a solution of this instance. That is,
u1u2u3u2 = 0110110011 = v1v2v3v2.
Figure 6.9 shows an input instance Is over R which encodes this solution and satisfies the query qP constructed shortly.
In the relation ENC of this figure, the first two columns form a cycle, so that the 10 tuples can be viewed as a sequence rather than a set. The third column holds a listing of the word w = 0110110011 that witnesses the solution to P; the fourth column describes which words of sequence (u1, ..., un) are used to obtain w; and the fifth column describes which words of sequence (v1, ..., vn) are used. The relation SYNCH is used to synchronize the two representations of w by listing the pairs corresponding to the beginnings of new u-words and v-words.
The formula φP constructed now includes subformulas to test whether the various conditions just enumerated hold on an input instance. In particular,
φP = φENC-key ∧ φcycle ∧ φSYNCH-keys ∧ φu-encode ∧ φv-encode ∧ φu-v-synch,
where, speaking informally,
ENC  A    B    C   D    E          SYNCH  F    G
     $    a1   0   c1   d1                $    $
     a1   a2   1   c1   d2                a3   a1
     a2   a3   1   c1   d2                a6   a3
     a3   a4   0   c2   d3                a7   a8
     a4   a5   1   c2   d3
     a5   a6   1   c2   d3
     a6   a7   0   c3   d3
     a7   a8   0   c2   d3
     a8   a9   1   c2   d2
     a9   $    1   c2   d2

Figure 6.9: Encoding of a solution to PCP
φENC-key: states that the first column of ENC is a key; that is, each value occurring in the A column occurs in exactly one tuple of ENC.
φcycle: states that constant $ occurs in a cycle with length > 1 in the first two columns of ENC. (There may be other cycles, which can be ignored.)
φSYNCH-keys: states that both the first and second columns of SYNCH are keys.
φu-encode: states that for each value x occurring in the first column of SYNCH, if tuple ⟨x1, y1, z1, ci, dj1⟩ is in ENC, then there are at least |ui| − 1 additional tuples in ENC after this tuple, all with value ci in the fourth coordinate, and if these tuples are ⟨x2, y2, z2, ci, dj2⟩, ..., ⟨xk, yk, zk, ci, djk⟩ then z1 ⋯ zk = ui; none of x2, ..., xk occurs in the first column of SYNCH; and if yk ≠ $, then the A value after xk occurs in the first column of SYNCH.
φv-encode: is analogous to φu-encode.
φu-v-synch: states that (1) ⟨$, $⟩ is in SYNCH; (2) if a tuple ⟨x, y⟩ is in SYNCH, then the associated u-word and v-word have the same index; and (3) if a tuple ⟨x, y⟩ is in SYNCH, and either x or y is not the maximum A value occurring in F or G, then there exists a tuple ⟨x′, y′⟩ in SYNCH, where x′ is the first A value after x occurring in F and y′ is the first A value after y occurring in G. Finding the A values after x and y is done as in φu-encode.
The constructions of these formulas are relatively straightforward; we give two of them here and leave the others for the reader (see Exercise 6.19). In particular, we let
ψ(x, y) ≡ ∃p, q, r ENC(x, y, p, q, r)
and set
φcycle ≡ ∃x(ψ(x, $) ∧ ¬(x = $)) ∧ ∃y(ψ($, y) ∧ ¬(y = $))
  ∧ ∀x((∃y ψ(x, y)) → (∃z ψ(z, x)))
  ∧ ∀x((∃y ψ(y, x)) → (∃z ψ(x, z)))
  ∧ ∀x, y1, y2((ψ(y1, x) ∧ ψ(y2, x)) → y1 = y2).
If ENC satisfies φENC-key ∧ φcycle, then the first two coordinates of ENC hold one or more disjoint cycles, exactly one of which contains the value $.
Parts (1) and (2) of φu-v-synch are realized by the formula
SYNCH($, $) ∧
∀x, y(SYNCH(x, y) →
  ∃s, p, r, t, p′, q((ENC(x, s, p, c1, r) ∧ ENC(y, t, p′, q, d1))
    ∨ (ENC(x, s, p, c2, r) ∧ ENC(y, t, p′, q, d2))
    ∨ ⋯
    ∨ (ENC(x, s, p, cn, r) ∧ ENC(y, t, p′, q, dn)))).
Verifying that the query qP is satisfiable if and only if P has a solution is left to the reader (see Exercise 6.19).
The preceding theorem can be applied to derive other important undecidability results.
Corollary 6.3.2
(a) Equivalence and containment of relational calculus queries are co-r.e. and not
recursive.
(b) Domain independence of a relational calculus query is co-r.e. and not recursive.
Proof It is easily verified that the two problems of part (a) and the problem of part (b) are co-r.e. (see Exercise 6.20). The proofs of undecidability are by reduction from the satisfiability problem. For equivalence, suppose that there were an algorithm for deciding equivalence between relational calculus queries. Then the satisfiability problem can be solved as follows: a query q = {⟨x1, ..., xn⟩ | φ} is unsatisfiable if and only if it is equivalent to a fixed unsatisfiable query q∅. This demonstrates that equivalence is not decidable. The undecidability of containment also follows from this.
For domain independence, given a calculus sentence φ, the query {⟨x1, ..., xn⟩ | φ} (where x1, ..., xn do not occur in φ) is domain independent if and only if φ is unsatisfiable.
The preceding techniques can also be used to show that true optimization cannot be performed for the first-order queries (see Exercise 6.20d).
6.4 Computing with Acyclic Joins
We now present a family of interesting theoretical results on the problem of computing the
projection of a join. In the general case, if both the data set and the join expression are al-
lowed to vary, then the time needed to evaluate such expressions appears to be exponential.
The measure of complexity here is a combination of both data and expression com-
plexity, and is somewhat non-standard; see Part D. Interestingly, there is a special class
of joins, called acyclic, for which this evaluation is polynomial. A number of interesting
properties of acyclic joins are also presented.
For this section we use the named perspective and focus exclusively on flat project-join queries of the form
q = πX(R1 ⋈ ⋯ ⋈ Rn)
involving projection and natural join. For this discussion we assume that R = R1, ..., Rn is a fixed database schema, and we use I = (I1, ..., In) to refer to instances over it.
One of the historical motivations for studying this problem stems from the pure universal relation assumption (pure URA). An instance I = (I1, ..., In) over schema R satisfies the pure URA if I = (πR1(I), ..., πRn(I)) for some universal instance I over ∪ⁿj=1 Rj. If I satisfies the pure URA, then I can be stored, and queries against the corresponding instance I can be answered using joins of components in I. The URA will be considered in more depth in Chapter 11.
Worst-Case Results
We begin with an example.
Example 6.4.1 Let n > 0 and consider the relations Ri[AiAi+1], i ∈ [1, n − 1], as shown in Fig. 6.10(a). It is easily seen that the natural join of R1, ..., Rn−1 is exponential in n and thus exponential in the size of the input query and data.
Now suppose that n is odd. Let Rn be as in Fig. 6.10(b), and consider the natural join of R1, ..., Rn. This is empty. On the other hand, the join of any i of these for i < n has size exponential in i. It follows that the algorithms of the System R and INGRES optimizers take time exponential in the size of the input and output to evaluate this query.
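The blow-up in Example 6.4.1 is easy to observe directly. Below is a small sketch (the nested-loop join and the relation builder are illustrative helpers of ours, not from the text) that materializes the chain join of Fig. 6.10(a) and then closes the cycle with Rn as in Fig. 6.10(b) for odd n.

```python
def natural_join(r, s):
    # naive nested-loop natural join over dict-encoded tuples
    out = []
    for t1 in r:
        for t2 in s:
            if all(t1[a] == t2[a] for a in t1.keys() & t2.keys()):
                out.append({**t1, **t2})
    return out

def rel(i, j):
    # the 8-tuple relation of Fig. 6.10 over attributes A_i, A_j
    pairs = [(x, y) for x in "01" for y in "ab"]
    pairs += [(y, x) for x in "01" for y in "ab"]
    return [{f"A{i}": p, f"A{j}": q} for p, q in pairs]

n = 5  # odd
chain = rel(1, 2)
for i in range(2, n):
    chain = natural_join(chain, rel(i, i + 1))
print(len(chain))                           # 4 * 2**(n-1) = 64 tuples
print(len(natural_join(chain, rel(n, 1))))  # 0: the cyclic join is empty
```

The chain join forces the values of adjacent attributes to alternate between {0, 1} and {a, b}, so it has 4 · 2^(n−1) tuples; closing an odd cycle makes this alternation impossible, so the full join is empty even though every partial join is large.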
The following result implies that it is unlikely that there is an algorithm for computing projections of joins in time polynomial in the size of the query and the data.
Theorem 6.4.2 It is NP-complete to decide, given project-join expression q0 over R, instance I of R, and tuple t, whether t ∈ q0(I). This remains true if q0 and I are restricted so that |q0(I)| ≤ 1.
Proof The problem is easily seen to be in NP. For the converse, recall from Theorem 6.2.10(a) that the problem of tableau containment is NP-complete, even for single-
Ri  Ai  Ai+1        Rn  An  A1
    0   a               0   a
    0   b               0   b
    1   a               1   a
    1   b               1   b
    a   0               a   0
    a   1               a   1
    b   0               b   0
    b   1               b   1
    (a)                 (b)

Figure 6.10: Relations to illustrate join sizes
relation typed tableaux having no constants. We reduce this to the current problem. Let q = (T, u) and q′ = (T′, u′) be two typed constant-free tableau queries over the same relation schema. Recall from the Homomorphism Theorem that q ⊆ q′ iff there is a homomorphism of q′ to q, which holds iff u ∈ q′(T).
Assume that the sets of variables occurring in q and in q′ are disjoint. Without loss of generality, we view each variable occurring in q to be a constant. For each variable x occurring in q′, let Ax be a distinct attribute. For free tuple v = ⟨x1, ..., xn⟩ in T′, let Iv over Ax1, ..., Axn be a copy of T, where the i-th attribute is renamed to Axi. Letting u′ = ⟨u′1, ..., u′m⟩, it is straightforward to verify that
q′(T) = πAu′1,...,Au′m(⋈{Iv | v ∈ T′}).
In particular, u ∈ q′(T) iff u is in this projected join.


To see the last sentence of the theorem, let u =u
1
, . . . , u
m
and use the query

A
u

1
,...,A
u

m
({I
v
| v T

} {A
u

1
: u
1
, . . . , A
u

m
: u
m
}).
Theorem 6.2.10(a) considers complexity relative to the size of queries. As applied in the foregoing result, however, the queries of Theorem 6.2.10(a) form the basis for constructing a database instance {Iv | v ∈ T′}. In contrast with the earlier theorem, the preceding result suggests that computing projections of joins is intractable relative to the size of the query, the stored data, and the output.
Acyclic Joins
In Example 6.4.1, we may ask: what is the fundamental difference between R1 ⋈ ⋯ ⋈ Rn−1 and R1 ⋈ ⋯ ⋈ Rn? One answer is that the relation schemas of the latter join form a cycle, whereas the relation schemas of the former do not.
We now develop a formal notion of acyclicity for joins and four properties equivalent
to it. All of these are expressed most naturally in the context of the named perspective for the relational model. In addition, the notion of acyclicity is sometimes applied to database schemas R = {R1, ..., Rn} because of the natural correspondence between the schema R and the join R1 ⋈ ⋯ ⋈ Rn.
We begin by describing four interesting properties that are equivalent to acyclicity. Let R = {R1, ..., Rn} be a database schema, where each relation schema has a different sort. An instance I of R is said to be pairwise consistent if for each pair j, k ∈ [1, n], πRj(Ij ⋈ Ik) = Ij. Intuitively, this means that no tuple of Ij is "dangling" or "lost" after joining with Ik. Instance I is globally consistent if for each j ∈ [1, n], πRj(⋈ I) = Ij (i.e., no tuple of Ij is dangling relative to the full join). Pairwise consistency can be checked in PTIME, but checking global consistency is NP-complete (Exercise 6.25). The first property that is equivalent to acyclicity is:
Property (1): Each instance I that is pairwise consistent is globally consistent.
Note that the instance for schema {R1, ..., Rn−1} of Example 6.4.1 is both pairwise and globally consistent, whereas the instance for {R1, ..., Rn} is pairwise but not globally consistent.
The second property we consider is motivated by query processing in a distributed environment. Suppose that each relation of I is stored at a different site, that the join ⋈ I is to be computed, and that communication costs are to be minimized. A very naive algorithm to compute the join is to send each of the Ij to a specific site and then form the join. In the general case this may cause the shipment of many unneeded tuples because they are dangling in the full join.
The semi-join operator can be used to alleviate this problem. Given instances I, J over R, S, the semi-join of I and J is
I ⋉ J = πR(I ⋈ J).
It is easily verified that I ⋈ J = (I ⋉ J) ⋈ J = (J ⋉ I) ⋈ I. Furthermore, there are many cases in which computing the join in one of these ways can reduce data transmission costs if I and J are at different nodes of a distributed database (see Exercise 6.24).
Suppose now that R satisfies Property (1). Given an instance I distributed across the network, one can imagine replacing each relation Ij by its semi-join with other relations of I. If done cleverly, this might be done with communication cost polynomial in the size of I, with the result of the replacements satisfying pairwise consistency. Given Property (1), all relations can now be shipped to a common site, safe in the knowledge that no dangling tuples have been shipped.
More generally, a semi-join program for R is a sequence of commands
Ri1 := Ri1 ⋉ Rj1;
Ri2 := Ri2 ⋉ Rj2;
  ⋮
Rip := Rip ⋉ Rjp;
R1  A B C    R2  B C D E    R3  B C D G    R4  C D E F

Figure 6.11: Instance for Example 6.4.3
(In practice, the original values of Rij would not be overwritten; rather, a scratch copy would be made.) This is a full reducer for R if for each instance I over R, applying this program yields an instance I′ that is globally consistent.
Example 6.4.3 Let R = {ABC, BCDE, BCDG, CDEF} = {R1, R2, R3, R4} and consider the instance I of R shown in Fig. 6.11. I is not globally consistent; nor is it pairwise consistent.
A full reducer for this schema is
R2 := R2 ⋉ R1;
R2 := R2 ⋉ R4;
R3 := R3 ⋉ R2;
R2 := R2 ⋉ R3;
R4 := R4 ⋉ R2;
R1 := R1 ⋉ R2;
Note that application of this program to I has the effect of removing the first tuple from each relation.
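The full reducer above can be exercised directly. In the sketch below, relations are lists of dict-encoded tuples and the instance is a small made-up one (ours, not the instance of Fig. 6.11); the program is the sequence of semi-join assignments from Example 6.4.3.

```python
def semi_join(r, s):
    # I ⋉ J = π_sort(I)(I ⋈ J): keep the tuples of r that join with some tuple of s
    return [t1 for t1 in r
            if any(all(t1[a] == t2[a] for a in t1.keys() & t2.keys()) for t2 in s)]

def run_program(db, program):
    # apply semi-join assignments "target := target ⋉ other" in order
    for target, other in program:
        db[target] = semi_join(db[target], db[other])
    return db

# the full reducer of Example 6.4.3 over R1[ABC], R2[BCDE], R3[BCDG], R4[CDEF]
program = [("R2", "R1"), ("R2", "R4"), ("R3", "R2"),
           ("R2", "R3"), ("R4", "R2"), ("R1", "R2")]

# a small illustrative instance (not the one of Fig. 6.11)
db = {
    "R1": [{"A": 0, "B": 1, "C": 2}, {"A": 3, "B": 4, "C": 5}],
    "R2": [{"B": 1, "C": 2, "D": 6, "E": 7}, {"B": 9, "C": 9, "D": 9, "E": 9}],
    "R3": [{"B": 1, "C": 2, "D": 6, "G": 8}],
    "R4": [{"C": 2, "D": 6, "E": 7, "F": 0}],
}
run_program(db, program)
print([len(db[r]) for r in ("R1", "R2", "R3", "R4")])  # dangling tuples removed
```

After the program runs, every remaining tuple participates in the full join, i.e., the reduced instance is globally consistent.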
We can now state the second property:
Property (2): R has a full reducer.
It can be shown that the schema {R1, ..., Rn−1} of Example 6.4.1 has a full reducer, but {R1, ..., Rn} does not (see Exercise 6.26).
The next property provides a way to view a schema as a tree with certain properties. A join tree of a schema R is an undirected tree T = (R, E) such that
(i) each edge (R, R′) is labeled by the set of attributes R ∩ R′; and
(ii) for every pair R, R′ of distinct nodes, for each A ∈ R ∩ R′, each edge along the unique path between R and R′ includes label A.
Property (3): R has a join tree.
For example, two join trees of the schema R of Figure 6.11 are T1 = (R, {(R1, R2), (R2, R3), (R2, R4)}) and T2 = (R, {(R1, R3), (R3, R2), (R2, R4)}). (The edge labels are not shown.)
(a) R1[AB], R2[BC], R3[AC]
(b) S1[ABC], S2[CDE], S3[AFE], S4[ACE]
(c) T1[ABC], T2[BCD], T3[ABD], T4[ACD]

Figure 6.12: Three schemas and their hypergraphs
The fourth property we consider focuses entirely on the database schema R and is based on a simple algorithm, called the GYO algorithm.¹ This is most easily described in terms of the hypergraph corresponding to R. A hypergraph is a pair F = (V, F), where V is a set of vertexes and F is a family of distinct nonempty subsets of V, called edges (or hyperedges). The hypergraph of schema R is the pair (U, R), where U = ∪R. In what follows, we often refer to a database schema R as a hypergraph. Three schemas and their hypergraphs are shown in Fig. 6.12.
A hypergraph is reduced if there is no pair f, f′ of distinct edges with f a proper subset of f′. The reduction of F = (V, F) is (V, F − {f ∈ F | ∃f′ ∈ F with f ⊊ f′}). Suppose that R is a schema and I over R satisfies the pure URA. If Rj ⊆ Rk, then Ij = πRj(Ik), and thus Ij holds redundant information. It is thus natural in this context to assume that R, viewed as a hypergraph, is reduced.

¹ This is so named in honor of M. Graham and the team C. T. Yu and M. Z. Ozsoyoglu, who independently came to essentially this algorithm.
An ear of hypergraph F = (V, F) is an edge f ∈ F such that for some distinct f′ ∈ F, no vertex of f − f′ is in any other edge or, equivalently, such that f ∩ (∪(F − {f})) ⊆ f′. In this case, f′ is called a witness that f is an ear. As a special case, if there is an edge f of F that intersects no other edge, then f is also considered an ear.
For example, in the hypergraph of Fig. 6.12(b), edge ABC is an ear, with witness ACE. On the other hand, the hypergraph of Fig. 6.12(a) has no ears.
We now have
Algorithm 6.4.4 (GYO Algorithm)
Input: Hypergraph F = (V, F)
Output: A hypergraph involving a subset of edges of F
Do until F has no ears:
1. Nondeterministically choose an ear f of F.
2. Set F := (V′, F − {f}), where V′ = ∪(F − {f}).
The output of the GYO algorithm is always reduced.
A hypergraph is empty if it is (∅, ∅). In Fig. 6.12, it is easily verified that the output of the GYO algorithm is empty for part (b), but that parts (a) and (c) have no ears and so equal their output under the algorithm. The output of the GYO algorithm is independent of the order of steps taken (see Exercise 6.28).
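The GYO algorithm is easy to implement when edges are represented as frozensets; the sketch below (the function name and greedy removal order are ours) removes one ear at a time, which suffices because the output is order independent.

```python
def gyo(edges):
    # repeatedly remove ears; returns the residual set of hyperedges
    edges = set(edges)
    changed = True
    while changed:
        changed = False
        for f in list(edges):
            others = edges - {f}
            rest = set().union(*others) if others else set()
            # f is an ear if it meets no other edge, or if some witness f2
            # covers every vertex of f that occurs elsewhere
            if not (f & rest) or any((f & rest) <= f2 for f2 in others):
                edges.remove(f)
                changed = True
                break
    return edges

# the three schemas of Fig. 6.12
a = {frozenset("AB"), frozenset("BC"), frozenset("AC")}
b = {frozenset("ABC"), frozenset("CDE"), frozenset("AFE"), frozenset("ACE")}
c = {frozenset("ABC"), frozenset("BCD"), frozenset("ABD"), frozenset("ACD")}
print(gyo(b) == set())           # True: (b) reduces to the empty hypergraph
print(gyo(a) == a, gyo(c) == c)  # True True: (a) and (c) have no ears
```

On schema (b), the run mirrors the hand computation: ABC is removed with witness ACE, then CDE, then AFE, then the isolated edge ACE.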
We now state the following:
Property (4): The output of the GYO algorithm on R is empty.
Speaking informally, Example 6.4.1 suggests that an absence of cycles yields Properties (1) to (4), whereas the presence of a cycle makes these properties fail. This led researchers in the late 1970s to search for a notion of acyclicity for hypergraphs that both generalized the usual notion of acyclicity for conventional undirected graphs and was equivalent to one or more of the aforementioned properties. For example, the conventional notion of hypergraph acyclicity from graph theory is due to C. Berge; but it turns out that this condition is necessary but not sufficient for the four properties (see Exercise 6.32).
We now define the notion of acyclicity that was found to be equivalent to the four aforementioned properties. Let F = (V, F) be a hypergraph. A path in F from vertex v to vertex v′ is a sequence of k ≥ 1 edges f1, ..., fk such that
(i) v ∈ f1;
(ii) v′ ∈ fk;
(iii) fi ∩ fi+1 ≠ ∅ for i ∈ [1, k − 1].
Two vertexes are connected in F if there is a path between them. The notions of connected pair of edges, connected component, and connected hypergraph are now defined in the usual manner.
Now let F = (V, F) be a hypergraph, and U ⊆ V. The restriction of F to U, denoted F|U, is the result of forming the reduction of (U, {f ∩ U | f ∈ F} − {∅}).
Let F = (V, F) be a reduced hypergraph, let f, f′ be distinct edges, and let g = f ∩ f′. Then g is an articulation set of F if the number of connected components of F|V−g is greater than the number of connected components of F. (This generalizes the notion of articulation point for ordinary graphs.)
Finally, a reduced hypergraph F = (V, F) is acyclic if for each U ⊆ V, if F|U is connected and has more than one edge, then it has an articulation set; it is cyclic otherwise. A hypergraph is acyclic if its reduction is.
Note that if F = (V, F) is an acyclic hypergraph, then so is F|U for each U ⊆ V.
Property (5): The hypergraph corresponding to R is acyclic.
We now present the theorem stating the equivalence of these ve properties. Addi-
tional equivalent properties are presented in Exercise 6.31 and in Chapter 8, where the
relationship of acyclicity with dependencies is explored.
Theorem 6.4.5 Properties (1) through (5) are equivalent.
Proof We sketch here arguments that (4) ⇒ (2) ⇒ (1) ⇒ (5) ⇒ (4). The equivalence of
(3) and (4) is left as Exercise 6.30(a).
We assume in this proof that the hypergraphs considered are connected; generalization
to the disconnected case is straightforward.
(4) ⇒ (2): Suppose now that the output of the GYO algorithm on R = {R_1, . . . , R_n} is
empty. Let S_1, . . . , S_n be an ordering of R corresponding to a sequence of ear removals
stemming from an execution of the GYO algorithm, and let T_i be a witness for S_i for
i ∈ [1, n − 1]. An induction on n (from the inside out) shows that the following is a
full reducer (see Exercise 6.30a):

T_1 := T_1 ⋉ S_1;
T_2 := T_2 ⋉ S_2;
. . .
T_{n−1} := T_{n−1} ⋉ S_{n−1};
S_{n−1} := S_{n−1} ⋉ T_{n−1};
. . .
S_2 := S_2 ⋉ T_2;
S_1 := S_1 ⋉ T_1;
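The program above can be spot-checked with a small sketch (an illustration under the assumption that relations are lists of dicts; the semijoin R ⋉ S keeps the tuples of R that agree with some tuple of S on their common attributes):

```python
def semijoin(r, s):
    """r ⋉ s: the tuples of r that join with at least one tuple of s
    (comparison is on the attributes the two relations share)."""
    if not r or not s:
        return []
    shared = sorted(set(r[0]) & set(s[0]))
    keys = {tuple(t[a] for a in shared) for t in s}
    return [t for t in r if tuple(t[a] for a in shared) in keys]

# Schema {AB, BC}: one ear-removal order is S1 = AB with witness T1 = BC,
# so the full-reducer program is  T1 := T1 ⋉ S1;  S1 := S1 ⋉ T1.
ab = [{'A': 1, 'B': 1}, {'A': 2, 'B': 2}]
bc = [{'B': 1, 'C': 5}]
bc = semijoin(bc, ab)   # T1 := T1 ⋉ S1  (no tuple of bc is dangling)
ab = semijoin(ab, bc)   # S1 := S1 ⋉ T1  (drops the dangling tuple with A = 2)
```

After the two passes every remaining tuple participates in the join, i.e., the instance is fully reduced.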
(2) ⇒ (1): Suppose that R has a full reducer, and let I be a pairwise consistent instance
of R. Application of the full reducer to I yields an instance I′ that is globally consistent.
But by pairwise consistency, each step of the full reducer leaves I unchanged. It follows
that I = I′ is globally consistent.
(1) ⇒ (5): This is proved by contradiction. Suppose that there is a hypergraph that
satisfies Property (1) but violates the definition of acyclic. Let R = {R_1, . . . , R_n} be such a
hypergraph where n is minimal among such hypergraphs and where the size of U = ∪R is
minimal among such hypergraphs with n edges.
I    A_1  A_2  . . .  A_p   B_1  . . .  B_q
     1    0    . . .  0     1    . . .  1
     0    1    . . .  0     2    . . .  2
     .    .           .     .           .
     .    .           .     .           .
     0    0    . . .  1     p    . . .  p

Figure 6.13: Instance for proof of Theorem 6.4.5
It follows easily from the minimality conditions that R is reduced. In addition, by
minimality no vertex (attribute) in U is in only one edge (relation schema).
Consider now the schema R′ = {R_2 − R_1, . . . , R_n − R_1}. Two cases arise:

Case 1: R′ is connected. Suppose that R_1 = {A_1, . . . , A_p} and U − R_1 = {B_1, . . . , B_q}.
Consider the instance I over U shown in Fig. 6.13. Define I = {I_1, . . . , I_n} so that
I_j = π_{R_j}(I) for j ∈ [2, n], and I_1 = π_{R_1}(I) ∪ {⟨0, 0, . . . , 0⟩}.
Using the facts that R′ is connected and that each vertex of R occurs in at least two edges,
it is straightforward to verify that I is pairwise consistent but not globally consistent, which
is a contradiction (see Exercise 6.30b).
Case 2: R′ is not connected. Choose a connected component of R′ and let {S_1, . . . , S_k} be
the set of edges of R − {R_1} involved in that connected component. Let S = ∪_{i=1}^{k} S_i and let
R′_1 = R_1 ∩ S. Two subcases arise:
Subcase 2.a: R′_1 ⊆ S_j for some j ∈ [1, k]. If this holds, then R_1 ∩ S_j is an articulation
set for R, which is a contradiction (see Exercise 6.30b).
Subcase 2.b: R′_1 ⊄ S_j for each j ∈ [1, k]. In this case R″ = {S_1, . . . , S_k, R′_1} is a reduced
hypergraph with fewer edges than R. In addition, it can be verified that this hypergraph
satisfies Property (1) (see Exercise 6.30b). By minimality of n, this implies that R″ is
acyclic. Because it is connected and has at least two edges, it has an articulation set. Two
nested subcases arise:
Subcase 2.b.i: S_i ∩ S_j is an articulation pair for some i, j. We argue in this case that
S_i ∩ S_j is an articulation pair for R. To see this, let x ∈ R′_1 − (S_i ∩ S_j) and let y be a vertex
in some other component of R″|_{S−(S_i ∩ S_j)}. Suppose that R_{i_1}, . . . , R_{i_l} is a path in R from
y to x. Let R_{i_p} be the first edge in this path that is not in {S_1, . . . , S_k}. By the choice of
{S_1, . . . , S_k}, R_{i_p} = R_1. It follows that there is a path from y to x in R″|_{S−(S_i ∩ S_j)}, which
is a contradiction. We conclude that R has an articulation pair, contradicting the initial
assumption in this proof.
Subcase 2.b.ii: R′_1 ∩ S_i is an articulation pair for some i. In this case R_1 ∩ S_i is an
articulation pair for R (see Exercise 6.30b), again yielding a contradiction to the initial
assumption of the proof.
(5) ⇒ (4): We first show inductively that each connected reduced acyclic hypergraph
F with at least two edges has at least two ears. For the case in which F has two edges, this
result is immediate. Suppose now that F = (V, F) is connected, reduced, and acyclic, with
|F| > 2. Let h = f ∩ f′ be an articulation set of F. Let G be a connected component of
F|_{V−h}. By the inductive hypothesis, this has at least two ears. Let g be an ear of G that is
different from f − h and different from f′ − h. Let g′ be an edge of F such that g = g′ − h.
It is easily verified that g′ is an ear of F (see Exercise 6.30b). Because F|_{V−h} has at least
two connected components, it follows that F has at least two ears.
Finally, suppose that F = (V, F) is acyclic. If there is only one edge, then the GYO
algorithm yields the empty hypergraph. Suppose that it has more than one edge. If F is
not reduced, the GYO algorithm can be applied to reduce it. If F is reduced, then by the
preceding argument F has an ear, say f. Then a step of the algorithm can be applied to
yield F|_{∪(F−{f})}. This is again acyclic. An easy induction now yields the result.
Recall from Theorem 6.4.2 that computing projections of arbitrary joins is probably
intractable if both query and data size are considered. The following shows that this is not
the case when the join is acyclic.
Corollary 6.4.6 If R is acyclic, then for each instance I over R, the expression
π_X(⋈I) can be computed in time polynomial in the size of R, the input, and the output.
Proof Because the computation for each connected component of R can be performed
separately, we assume without loss of generality that R is connected. Let R = (R_1, . . . , R_n)
and I = (I_1, . . . , I_n). First apply a full reducer to I to obtain I′ = (I′_1, . . . , I′_n). This takes
time polynomial in the size of the query and the input; the result is globally consistent; and
⋈I = ⋈I′.
Because R is acyclic, by Theorem 6.4.5 there is a join tree T for R. Choose a root
for T, say R_1. For each subtree T_k of T with root R_k ≠ R_1, let X_k = X ∩ ∪{R | R ∈ T_k},
and Z_k = R_k ∩ (the parent of R_k). Let J_k = I′_k for k ∈ [1, n]. Inductively remove nodes R_k
and replace instances J_k from leaf to root of T as follows: Delete node R_k with parent R_m
by replacing J_m with J_m ⋈ π_{X_k ∪ Z_k}(J_k). A straightforward induction shows that immediately
before nonleaf node R_k is deleted, J_k = π_{X_k ∪ R_k}(⋈_{R_l ∈ T_k} I′_l). It follows that at the end
of this process the answer is π_X(J_1) and that at each intermediate stage each instance J_k has
size bounded by |I′_k| · |π_X(⋈I_k)| (see Exercise 6.33).
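The leaf-to-root phase of this proof can be rendered concretely. The sketch below is our illustration, not the text's algorithm verbatim: the join tree is given by a children map and per-node schemas, relations are lists of dicts, and the polynomial size bound additionally requires that a full reducer be applied first (the output is correct either way, because the join-tree property guarantees that the attributes projected away at a node occur nowhere above it).

```python
def natural_join(r, s):
    """Natural join of two relations given as lists of dicts (set semantics)."""
    out, seen = [], set()
    for t in r:
        for u in s:
            if all(t[a] == u[a] for a in t.keys() & u.keys()):
                m = {**t, **u}
                k = tuple(sorted(m.items()))
                if k not in seen:
                    seen.add(k)
                    out.append(m)
    return out

def project(r, attrs):
    """Projection with duplicate elimination."""
    out, seen = [], set()
    for t in r:
        p = {a: t[a] for a in t if a in attrs}
        k = tuple(sorted(p.items()))
        if k not in seen:
            seen.add(k)
            out.append(p)
    return out

def eval_acyclic(schema, children, instances, root, x):
    """Compute pi_X(join of all instances) leaf-to-root along a join tree.
    schema: node -> set of attributes; children: node -> list of child nodes;
    instances: node -> relation; x: the set of output attributes X."""
    def subtree_attrs(k):
        s = set(schema[k])
        for c in children.get(k, []):
            s |= subtree_attrs(c)
        return s

    def process(k):            # returns J_k for the subtree rooted at k
        j = instances[k]
        for c in children.get(k, []):
            # Keep X_c ∪ Z_c: output attributes occurring below c, plus the
            # attributes shared with the parent k (safe by the join-tree property).
            keep = (x & subtree_attrs(c)) | (schema[c] & schema[k])
            j = natural_join(j, project(process(c), keep))
        return j

    return project(process(root), x)
```

For the two-node tree with schemas R_1 = AB (root) and R_2 = BC, this computes π_{AC} of the join while never materializing attributes that are not needed above the current node.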
Bibliographic Notes
An extensive discussion of issues in query optimization is presented in [Gra93]. Other
references include [JK84a, KS91, Ull89b]. Query optimization for distributed databases
is surveyed in [YC84]. Algorithms for binary joins are surveyed in [ME92].
The paper [SAC+79] describes query optimization in System/R, including a discussion
of generating and analyzing multiple evaluation plans and a thorough discussion of
accessing tuples from a single relation, as from a projection and selection. System/R is the
precursor of IBM's DB2 database management system. The optimizer for INGRES introduces
query decomposition, including both join detachment and tuple substitution [WY76,
SWKH76].
The use of semijoins in query optimization was first introduced in INGRES [WY76,
SWKH76] and used for distributed databases in [BC81, BG81]. Research on optimizing
buffer management policies includes [FNS91, INSS92, NCS91, Sto81]. Other system
optimizers include those for Exodus [GD87], distributed INGRES [ESW78], SDD-1
[BGW+81], and the TI Open Object-Oriented Data Base [BMG93].
[B+88] presents the unifying perspective that physical query implementation can be
viewed as generation, manipulation, and merging of streams of tuples and develops a very
flexible toolkit for constructing dbmss. A formal model incorporating streams, sets, and
parallelism is presented in [PSV92].
The recent work [IK90] focuses on finding optimal and near-optimal evaluation plans
for n-way joins, where n is in the hundreds, using simulated annealing and other techniques.
Perhaps most interesting about this work are characterizations of the space of evaluation
plans (e.g., properties of evaluation plan cost in relation to natural metrics on this
space).
Early research on generation and selection of query evaluation plans is found in
[SAC+79, SWKH76]. Treatments that separate plan generation from transformation rules
include [Fre87, GD87, Loh88]. More recent research has proposed mechanisms for generating
parameterized evaluation plans; these can be generated at compile time but permit the
incorporation of run-time information [GW89, INSS92]. An extensive listing of references
to the literature on estimating the costs of evaluation plans is presented in [SLRD93]. This
article introduces an estimation technique based on the computation of series functions that
approximates the distribution of values and uses regression analysis to estimate the output
sizes of select-join queries.
Many forward-chaining expert systems in AI also face the problem of evaluating what
amounts to conjunctive queries. The most common technique for evaluating conjunctive
queries in this context is based on a sequential generate-and-test algorithm. The paper
[SG85] presents algorithms that yield optimal and near-optimal orderings under this
approach to evaluation.
The technique of tableau query minimization was first developed in connection with
database queries in [CM77], including the Homomorphism Theorem (Theorem 6.2.3) and
Theorem 6.2.6. Theorem 6.2.10 is also due to [CM77]; the proofs sketched in the exercises
are due to [SY80] and [ASU79b]. Refinements of this result (e.g., to subclasses of typed
tableau queries) are presented in [ASU79b, ASU79a].
The notion of tableau homomorphism is a special case of the notion of subsumption
used in resolution theorem proving [CL73]. That work focuses on clauses (i.e., disjunctions
of positive and negative literals), and permits function symbols. A clause C = (L_1 ∨ · · · ∨
L_n) subsumes a clause D = (M_1 ∨ · · · ∨ M_k) if there is a substitution θ such that Cθ
is a subclause of D. A generalized version of tableau minimization, called condensation,
also arises in this connection. A condensation of a clause C = (L_1 ∨ · · · ∨ L_n) is a clause
C′ = (L_{i_1} ∨ · · · ∨ L_{i_m}) with m minimal such that Cθ = C′ for some substitution θ. As
observed in [Joy76], condensations are unique up to variable substitution.
Reference [SY80] studies restricted usage of difference with SPCU queries, for which
several positive results can be obtained (e.g., decidability of containment; see Exercise 6.22).
The undecidability results for the relational calculus derive from results in [DiP69]
(see also [Var81]). The assumption in this chapter that relations be finite is essential. For
instance, the test for containment is co-r.e. in our context, whereas it is r.e. when possibly
infinite structures are considered. (This is by reduction to the validity of a formula in first-order
predicate logic with equality, using the Gödel Completeness Theorem.)
As discussed in Chapter 7, practical query languages typically produce bags (also
called multisets; i.e., collections whose members may occur more than once). The problem
of containment and equivalence of conjunctive queries under the bag semantics is consid-
ered in [CV93]. It remains open whether containment is decidable, but it is
p
2
-hard. On
the other hand, two conjunctive queries are equivalent under the bag semantics iff they are
isomorphic.
Acyclic joins enjoyed a flurry of activity in the database research community in the
late 1970s and early 1980s. As noted in [Mal86], the same concept has been studied
in the field of statistics, beginning with [Goo70, Hab70]. An early motivation for their
study in databases stemmed from distributed query processing; the notions of join tree
and full reducers are from [BC81, BG81]; see also [GS82, GS84, SS86]. The original
GYO algorithm was developed in [YO79] and [Gra79]; we use here a variant due to
[FMU82]. The notion of globally consistent is studied in [BR80, HLY80, Ris82, Var82b];
see also [Hul83]. Example 6.4.1 is taken from [Ull89b]. The paper [BFM+81] introduced
the notion of acyclicity presented here and observed the equivalence to acyclicity
of several previously studied properties, including those of having a full reducer and pairwise
consistency implying global consistency; this work is reported in journal form in
[BFMY83].
A linear-time test for acyclicity is developed in [TY84]. Theorem 6.4.2 and Corollary 6.4.6
are due to [Yan81].
The notion of Berge acyclic is due to [Ber76a]. [Fag83] investigates several notions
of acyclicity, including the notion studied in this chapter and Berge acyclicity. Further
investigation of these alternative notions of acyclicity is presented in [ADM85, DM86b,
GR86]. Early attempts to develop a notion of acyclic that captured desirable database
characteristics include [Zan76, Gra79].
The relationship of acyclicity with dependencies is considered in Chapter 8.
Many variations of the universal relation assumption arose in the late 1970s and early
1980s. We return to this topic in Chapter 11; surveys of these notions include [AP82,
Ull82a, MRW86].
Exercises
Exercise 6.1
(a) Give detailed definitions for the rewrite rules proposed in Section 6.1. In other words,
provide the conditions under which they preserve equivalence.
(b) Give the step-by-step description of how the query tree of Fig. 6.1(a) can be transformed
into the query tree of Fig. 6.1(b) using these rewrite rules.
Exercise 6.2 Consider the transformation σ_F(q_1 ⋈_G q_2) → σ_F(q_1) ⋈_G q_2 of Fig. 6.2. Describe
a query q and database instance for which applying this transformation yields a query
whose direct implementation is dramatically more expensive than that of q.
Exercise 6.3
(a) Write generalized SPC queries equivalent to the two tableau queries of Example 6.2.2.
(b) Show that the optimization of this example cannot be achieved using the rewrite rules
or multiway join techniques of System/R or INGRES discussed in Section 6.1.
(c) Generate an example analogous to that of Example 6.2.2 that shows that even for
typed tableau queries, the rewrite rules of Section 6.1 cannot achieve the optimizations
of the Homomorphism Theorem.
Exercise 6.4 Present an algorithm that identifies when variables can be projected out during
a left-to-right join of a sip strategy.
Exercise 6.5 Describe a generalization of sip strategies that permits evaluation of multiway
joins according to an arbitrary binary tree rather than using only left-to-right join processing.
Give an example in which this yields an evaluation plan more efficient than any left-to-right
join.
Exercise 6.6 Consider query expressions that have the form (∗) mentioned in the discussion
of join detachment in Section 6.1.
(a) Describe how the possibility of applying join detachment depends on how equalities
are expressed in the conditions (e.g., is there a difference between using conditions
x.1 = y.1, y.1 = z.1 versus x.1 = z.1, z.1 = y.1?). Describe a technique
for eliminating this dependence.
(b) Develop a generalization of join detachment in which a set of variables serves as the
pivot.
Exercise 6.7 [WY76]
(a) Describe some heuristics for choosing the atom R_i(s_i) for forming a tuple substitution.
These may be in the context of using tuple substitution and join detachment for
the resulting subqueries, or they may be in a more general context.
(b) Develop a query optimization algorithm based on applying single-variable conditions,
join detachment, and tuple substitution.
Exercise 6.8 Prove Corollary 6.2.4.
Exercise 6.9
(a) State the direct generalization of Theorem 6.2.3 for tableau queries with equality,
and show that it does not hold.
(b) State and prove a correct generalization of Theorem 6.2.3 that handles tableau
queries with equality.
Exercise 6.10 For queries q, q′, write q ⊊ q′ to denote that q ⊆ q′ and q′ ⊈ q. The meaning
of q ⊋ q′ is defined analogously.
(a) Exhibit an infinite set {q_0, q_1, q_2, . . .} of typed tableau queries involving no constants
over a single relation with the property that q_0 ⊋ q_1 ⊋ q_2 ⊋ · · ·.
(b) Exhibit an infinite set {q′_0, q′_1, q′_2, . . .} of (possibly nontyped) tableau queries involving
no constants over a single relation such that q′_i ⊈ q′_j and q′_j ⊈ q′_i for each pair
i ≠ j.
(c) Exhibit an infinite set {q″_0, q″_1, q″_2, . . .} of (possibly nontyped) tableau queries involving
no constants over a single relation with the property that q″_0 ⊊ q″_1 ⊊ q″_2 ⊊ · · ·.
(d) Do parts (b) and (c) for typed tableau queries that may contain constants.
(e) [FUMY83] Do parts (b) and (c) for typed tableau queries that contain no constants.
Exercise 6.11 [CM77] Prove Proposition 6.2.9.
Exercise 6.12
(a) Prove that if the underlying domain dom is finite, then only one direction of the
statement of Theorem 6.2.3 holds.
(b) Let n > 1 be arbitrary. Exhibit a pair of tableau queries q, q′ such that under the
assumption that dom has n elements, q ⊆ q′, but there is no homomorphism from q′
to q. In addition, do this using typed tableau queries.
(c) Show for arbitrary n > 1 that Theorem 6.2.6 and Proposition 6.2.9 do not hold if
dom has n elements.
Exercise 6.13 Let R be a relation schema of sort ABC. For each of the following SPJR queries
over R, construct an equivalent tableau (see Exercise 4.19), minimize the tableau, and construct
from the minimized tableau an equivalent SPJR query with a minimal number of joins.
(a) π_AC[π_AB(R) ⋈ π_BC(R)] ⋈ π_A[π_AC(R) ⋈ π_CB(R)]
(b) π_AC[π_AB(R) ⋈ π_BC(R)] ⋈ π_AB(σ_{B=8}(R)) ⋈ π_BC(σ_{A=5}(R))
(c) π_AB(σ_{C=1}(R)) ⋈ π_BC(R) ⋈ π_AB[σ_{C=1}(π_AC(R)) ⋈ π_CB(R)]
Exercise 6.14 [SY80]
(a) Give a decision procedure for determining whether one union of tableaux query
is contained in another one. Hint: Let the queries be q = ({T_1, . . . , T_n}, u) and
q′ = ({S_1, . . . , S_m}, v); and prove that q ⊆ q′ iff for each i ∈ [1, n] there is some
j ∈ [1, m] such that (T_i, u) ⊆ (S_j, v). (The case of queries equivalent to q_∅ must be
handled separately.)
A union of tableaux query ({T_1, . . . , T_n}, u) is nonredundant if there is no distinct pair i, j such
that (T_i, u) ⊆ (T_j, u).
(b) Prove that if ({T_1, . . . , T_n}, u) and ({S_1, . . . , S_m}, v) are nonredundant and equivalent,
then n = m; for each i ∈ [1, n] there is a j ∈ [1, n] such that (T_i, u) ≡ (S_j, v);
and for each j ∈ [1, n] there is an i ∈ [1, n] such that (S_j, v) ≡ (T_i, u).
(c) Prove that for each union of tableaux query q there is a unique (up to renaming)
equivalent union of tableaux query that has a minimal total number of atoms.
Exercise 6.15 Exhibit a pair of typed restricted SPJ algebra queries q_1, q_2 over a relation R
and having no constants, such that there is no conjunctive query equivalent to q_1 ∩ q_2. Hint: Use
tableau techniques.
Exercise 6.16 [SY80]
(a) Complete the proof of part (a) of Theorem 6.2.10.
(b) Prove parts (b) and (c) of that theorem. Hint: Given φ and q′ = (T′, t) and q″ =
(T″, t) as in the proof of part (a), set q‴ = (T′ ∪ T″, t). Show that φ is satisfiable iff
q″ ≢ q‴.
(c) Prove that it is np-hard to determine, given a pair q, q′ of typed tableau queries over
the same relation schema, whether q is minimal and equivalent to q′. Conclude that
optimizing conjunctive queries, in the sense of finding an equivalent with a minimal
number of atoms, is np-hard.
Exercise 6.17 [ASU79b] Prove Theorem 6.2.10 using a reduction from 3-SAT (see Chapter 2)
rather than from the exact cover problem.
Exercise 6.18 [ASU79b]
(a) Prove that determining containment between two typed SPJ queries of the form
π_X(⋈_{i=1}^{n}(π_{X_i}(R))) is np-complete. Hint: Use Exercise 6.16.
(b) Prove that the problem of finding, given an SPJ query q of the form π_X(⋈_{i=1}^{n}
(π_{X_i}(R))), an SPJ query q′ equivalent to q that has the minimal number of join
operations among all such queries is np-hard.
Exercise 6.19
(a) Complete the proof of Theorem 6.3.1.
(b) Describe how to modify that proof so that q_P uses no constants.
(c) Describe how to modify the proof so that no constants and only one ternary relation is
used. Hint: Speaking intuitively, a tuple t = ⟨a_1, . . . , a_5⟩ of ENC can be simulated as
a set of tuples {⟨b_t, b_1, a_1⟩, . . . , ⟨b_t, b_5, a_5⟩}, where b_t is a value not used elsewhere
and b_1, . . . , b_5 are values established to serve as integers 1, . . . , 5.
(d) Describe how, given instance P of the PCP, to construct an nr-datalog¬ program that
is satisfiable iff P has a solution.
Exercise 6.20 This exercise develops further undecidability results for the relational calculus.
(a) Prove that containment and equivalence of safe-range calculus queries are co-r.e.
(b) Prove that domain independence of calculus queries is co-r.e. Hint: Theorem 5.6.1 is
useful here.
(c) Prove that containment of safe-range calculus queries is undecidable.
(d) Show that there is no algorithm that always halts and on input calculus query q gives
an equivalent query q′ of minimum length. Conclude that "complete optimization"
of the relational calculus is impossible. Hint: If there were such an algorithm, then it
would map each unsatisfiable query to a query with formula (of form) (a = b).
Exercise 6.21 [ASU79a, ASU79b] In a typed tableau query (T, u), a summary variable is
a variable occurring in u. A repeated nonsummary variable for attribute A is a nonsummary
variable in π_A(T) that occurs more than once in T. A typed tableau query is simple if for each
attribute A, if there is a repeated nonsummary variable in π_A(T), then no other constant or variable
in π_A(T) occurs more than once in π_A(T). Many natural typed restricted SPJ queries translate into
simple tableau queries.
140 Static Analysis and Optimization
(a) Show that the tableau query over R[ABCD] corresponding to
π_AC(π_AB(R) ⋈ π_BC(R)) ⋈ (π_AB(R) ⋈ π_BD(R))
is not simple.
(b) Exhibit a simple tableau query that is not the result of transforming a typed restricted
SPJ query under the algorithm of Exercise 4.19.
(c) Prove that if (T, u) is simple, T′ ⊆ T, and (T′, u) is a tableau query, then (T′, u) is
simple.
(d) Develop an O(n^4) algorithm that, on input a simple tableau query q, produces a
minimal tableau query equivalent to q.
(e) Develop an O(n^3) algorithm that, given simple tableau queries q, q′, determines
whether q ≡ q′.
(f) Prove that testing containment for simple tableau queries is np-complete.
Exercise 6.22 [SY80] Characterize containment and equivalence between queries of the form
q_1 − q_2, where q_1, q_2 are SPCU queries. Hint: First develop characterizations for the case in
which q_1, q_2 are SPC queries.
Exercise 6.23 Recall from Exercise 5.9 that an arbitrary nonrecursive datalog¬ rule can be
described as a difference q_1 − q_2, where q_1 is an SPC query and q_2 is an SPCU query.
(a) Show that Exercise 5.9 cannot be strengthened so that q_2 is an SPC query.
(b) Show that containment between pairs of nonrecursive datalog¬ rules is decidable.
Hint: Use Exercise 6.22.
(c) Recall that for each nr-datalog program P with a single-relation target there is an
equivalent nr-datalog program P′ such that all rule heads have the same relation name
(see Exercise 4.24). Prove that the analogous result does not hold for nr-datalog¬
programs.
Exercise 6.24
(a) Verify that I ⋈ J = (I ⋉ J) ⋈ J.
(b) Analyze the transmission costs incurred by the left-hand and right-hand sides of this
equation, and describe conditions under which one is more efficient than the other.
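For part (a), the identity can be spot-checked mechanically; below is a small sketch (relations as lists of dicts, with hypothetical helpers join and semijoin standing for ⋈ and ⋉):

```python
def join(r, s):
    """Natural join of two relations given as lists of dicts."""
    out = []
    for t in r:
        for u in s:
            if all(t[a] == u[a] for a in t.keys() & u.keys()):
                m = {**t, **u}
                if m not in out:
                    out.append(m)
    return out

def semijoin(r, s):
    """r ⋉ s: the tuples of r that join with some tuple of s."""
    return [t for t in r
            if any(all(t[a] == u[a] for a in t.keys() & u.keys()) for u in s)]

i = [{'A': 1, 'B': 1}, {'A': 2, 'B': 9}]    # the A=2 tuple is dangling
j = [{'B': 1, 'C': 3}, {'B': 1, 'C': 4}]
lhs = join(i, j)                 # I ⋈ J
rhs = join(semijoin(i, j), j)    # (I ⋉ J) ⋈ J
```

The semijoin on the right-hand side discards the dangling tuple before the join, which is exactly why shipping I ⋉ J can be cheaper in a distributed setting, as part (b) asks you to analyze.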
Exercise 6.25 [HLY80] Prove that the problem of deciding, given instance I of database
schema R, whether I is globally consistent is np-complete.
Exercise 6.26 Prove the following without using Theorem 6.4.5.
(a) The database schema R = {AB, BC, CA} has no full reducer.
(b) For arbitrary n > 1, the schema {R_1, . . . , R_{n−1}} of Example 6.4.1 has a full reducer.
(c) For arbitrary (odd or even) n > 1, the schema {R_1, . . . , R_n} of Example 6.4.1 has no
full reducer.
Exercise 6.27
(a) Draw the hypergraph of the schema of Example 6.4.3.
(b) Draw the hypergraph of Fig. 6.12(b) in a fashion that suggests it to be acyclic.
Exercise 6.28 Prove that the output of Algorithm 6.4.4 is independent of the nondeterministic
choices.
Exercise 6.29 As originally introduced, the GYO algorithm involved the following steps:

Nondeterministically perform either step, until neither can be applied:
1. If v ∈ V is in exactly one edge f ∈ F,
then F := (V − {v}, ((F − {f}) ∪ {f − {v}}) − {∅}).
2. If f ⊆ f′ for distinct f, f′ ∈ F,
then F := (V, F − {f}).
The result of applying the original GYO algorithm to a schema R is the GYO reduction of R.
(a) Prove that the original GYO algorithm yields the same output independent of the
nondeterministic choices.
(b) [FMU82] Prove that Algorithm 6.4.4 given in the text yields the empty hypergraph
on R iff the GYO reduction of R is the empty hypergraph.
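The two steps above can be transcribed directly (a sketch for experimentation only; the linear-time test of [TY84] is far more careful):

```python
def gyo_reduction(edges):
    """Apply steps 1 and 2 until neither applies; returns the final edge list.
    The input hypergraph is acyclic iff the result is empty."""
    es = [set(e) for e in edges]
    changed = True
    while changed:
        changed = False
        # Step 2: drop an edge contained in a distinct edge.
        for i, f in enumerate(es):
            if any(i != j and f <= g for j, g in enumerate(es)):
                es.pop(i)
                changed = True
                break
        if changed:
            continue
        # Step 1: a vertex occurring in exactly one edge is removed from it.
        # (Discarding v from every edge is equivalent here, since v occurs in
        # only one; dropping emptied edges matches the "- {∅}" above.)
        occurrences = [v for e in es for v in e]
        for v in set(occurrences):
            if occurrences.count(v) == 1:
                for e in es:
                    e.discard(v)
                es = [e for e in es if e]
                changed = True
                break
    return es
```

On {ABC, ABD} the reduction empties out (acyclic); on the triangle {AB, BC, CA} neither step ever applies, so the reduction is the triangle itself (cyclic).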
Exercise 6.30 This exercise completes the proof of Theorem 6.4.5.
(a) [BG81] Prove that (3) (4).
(b) Complete the other parts of the proof.
Exercise 6.31 [BFMY83] R has the running intersection property if there is an ordering
R_1, . . . , R_n of R such that for 2 ≤ i ≤ n there exists j_i < i such that R_i ∩ (R_1 ∪ · · · ∪ R_{i−1}) ⊆
R_{j_i}. In other words, the intersection of each R_i with the union of the previous R_j's is contained
in one of these. Prove that R has the running intersection property iff R is acyclic.
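For small schemas the property can be checked by brute force over all orderings, as in this sketch (exponential in the number of relation schemas):

```python
from itertools import permutations

def rip_order(order):
    """Check the running intersection property for one fixed ordering."""
    for i in range(1, len(order)):
        inter = order[i] & set().union(*order[:i])
        if not any(inter <= prev for prev in order[:i]):
            return False
    return True

def has_rip(schemas):
    """Does some ordering of the schemas have the property?"""
    return any(rip_order(list(p)) for p in permutations(schemas))
```

Consistent with the exercise, the path schema {AB, BC, CD} has such an ordering, while the cyclic triangle {AB, BC, CA} has none.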
Exercise 6.32 [BFMY83] A Berge cycle in a hypergraph F is a sequence (f_1, v_1, f_2, v_2, . . . ,
f_n, v_n, f_{n+1}) such that
(i) v_1, . . . , v_n are distinct vertexes of F;
(ii) f_1, . . . , f_n are distinct edges of F, and f_{n+1} = f_1;
(iii) n ≥ 2; and
(iv) v_i ∈ f_i ∩ f_{i+1} for i ∈ [1, n].
A hypergraph is Berge cyclic if it has a Berge cycle, and it is Berge acyclic otherwise.
(a) Prove that Berge acyclicity is sufficient but not necessary for acyclicity.
(b) Show that any hypergraph in which two edges have two nodes in common is Berge
cyclic.
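One convenient way to test Berge acyclicity mechanically (an observation of ours, not from the text): a Berge cycle is precisely a cycle in the bipartite incidence graph whose nodes are the vertices and the edges of F and whose links are the memberships, so F is Berge acyclic iff that graph is a forest. A union-find sketch:

```python
def berge_acyclic(hyperedges):
    """Berge acyclic iff the bipartite vertex/edge incidence graph is a forest."""
    hyperedges = [set(e) for e in hyperedges]
    nodes = {('v', v) for e in hyperedges for v in e}
    nodes |= {('e', i) for i in range(len(hyperedges))}
    parent = {n: n for n in nodes}

    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]   # path halving
            n = parent[n]
        return n

    for i, e in enumerate(hyperedges):
        for v in e:
            a, b = find(('e', i)), find(('v', v))
            if a == b:          # this membership link closes a cycle
                return False
            parent[a] = b
    return True
```

This makes part (b) visible at once: two edges sharing two vertices contribute four incidence links among four nodes of one component, which already closes a cycle.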
Exercise 6.33 [Yan81] Complete the proof of Corollary 6.4.6.
7 Notes on Practical Languages

Alice: What do you mean by practical languages?
Riccardo: select from where.
Alice: That's it?
Vittorio: Well, there are of course lots of bells and whistles.
Sergio: But basically, this forms the core of most practical languages.
In this chapter we discuss the relationship of the abstract query languages discussed
so far to three representative commercial relational query languages: Structured Query
Language (SQL), Query-By-Example (QBE), and Microsoft Access. SQL is by far the
dominant relational query language and provides the basis for languages in extensions of
the relational model as well. Although QBE is less widespread, it illustrates nicely the
basic capabilities and problems of graphic query languages. Access is a popular database
management system for personal computers (PCs) and uses many elements of QBE.
Our discussion of the practical languages is not intended to provide a complete description
of them, but rather to indicate some of the similarities and differences between
theory and practice. We focus here on the central aspects of these languages. Many features,
such as string-comparison operators, iteration, and embeddings into a host language,
are not mentioned or are touched on only briefly.
We first present highlights of the three languages and then discuss considerations that
arise from their use in the real world.
7.1 SQL: The Structured Query Language
SQL has emerged as the preeminent query language for mainframe and client-server
relational dbmss. This language combines the flavors of both the algebra and the calculus and
is well suited for the specification of conjunctive queries.
This section begins by describing how conjunctive queries are expressed using SQL.
We then progress to additional features, including nested queries and various forms of
negation.
Conjunctive Queries in SQL
Although there are numerous variants of SQL, it has become the standard for relational
query languages and indeed for most aspects of relational database access, including data
definition, data modification, and view definition. SQL was originally developed under the
name Sequel at the IBM San Jose Research Laboratory. It is currently supported by most
of the dominant mainframe commercial relational systems, and increasingly by relational
dbmss for PCs.
The basic building block of SQL queries is the select-from-where clause. Speaking
loosely, these have the form
select <list of fields to select>
from <list of relation names>
where <condition>
For example, queries (4.1) and (4.4) of Chapter 4 are expressed by

select Director
from Movies
where Title = 'Cries and Whispers';

select Location.Theater, Address
from Movies, Location, Pariscope
where Director = 'Bergman'
and Movies.Title = Pariscope.Title
and Pariscope.Theater = Location.Theater;
In these queries, relation names themselves are used to denote variables ranging over
tuples occurring in the corresponding relation. For example, in the preceding queries, the
identier Movies can be viewed as ranging over tuples in relation Movies. Relation name
and attribute name pairs, such as Location.Theater, are used to refer to tuple components;
and the relation name can be dropped if the attribute occurs in only one of the relations in
the from clause.
The select keyword has the effect of the relational algebra projection operator, the
from keyword has the effect of the cross-product operator, and the where keyword has the
effect of the selection operator (see Exercise 7.3). For example, the second query translates
to (using abbreviated attribute names)

π_{L.Th, A}(σ_{D='Bergman' ∧ M.Ti=P.Ti ∧ P.Th=L.Th}(Movies × Location × Pariscope)).
If all of the attributes mentioned in the from clause are to be output, then * can be used
in place of an attribute list in the select clause. In general, the where condition may include
conjunction, disjunction, negation, and (as will be seen shortly) nesting of select-from-
where blocks. If the where clause is omitted, then it is viewed as having value true for all
tuples of the cross-product. In implementations, as suggested in Chapter 6, optimizations
will be used; for example, the from and where clauses will typically be merged to have the
effect of an equi-join operator.
In SQL, as with most practical languages, duplicates may occur in a query answer.
Technically, then, the output of an SQL query may be a bag (also called multiset), that is,
a collection whose members may occur more than once. This is a pragmatic compromise
with the pure relational model because duplicate removal is rather expensive. The user may
request that duplicates be removed by inserting the keyword distinct after the keyword
select.
If more than one variable ranging over the same relation is needed, then variables can
be introduced in the from clause. For example, query (4.7), which asks for pairs of persons
such that the rst directed the second and the second directed the rst, can be expressed as
select M1.Director, M1.Actor
from Movies M1, Movies M2
where M1.Director = M2.Actor
and M1.Actor = M2.Director;
In the preceding example, the Director coordinate of M1 is compared with the Actor
coordinate of M2. This is permitted because both coordinates are (presumably) of type
character string. Relations are declared in SQL by specifying a relation name, the attribute
names, and the scalar types associated with them. For example, the schema for Movies
might be declared as
create table Movies
(Title character[60],
Director character[30],
Actor character[30]);
In this case, Title and Director values would be comparable, even though they are character
strings of different lengths. Other scalar types supported in SQL include integer, small
integer, float, and date.
Although the select-from-where block of SQL has a syntactic flavor close to the
relational calculus (but using tuple variables rather than domain variables), from a technical
perspective the SQL semantics are firmly rooted in the algebra, as illustrated by the
following example.
Example 7.1.1 Let {R[A], S[B], T[C]} be a database schema, and consider the following
query:
select A
from R, S, T
where R.A = S.B or R.A = T.C;
A direct translation of this into the SPJR algebra extended to permit disjunction in selection
formulas (see Exercise 4.22) yields
π_A(σ_{A=B ∨ A=C}(R × S × T)),
which yields the empty answer if S is empty or if T is empty. Thus the foregoing SQL
query is not equivalent to the calculus query:
{x | R(x) ∧ (S(x) ∨ T(x))}.
A correct translation into the conjunctive calculus (with disjunction) query is
{w | ∃x, y, z (R(x) ∧ S(y) ∧ T(z) ∧ x = w ∧ (x = y ∨ x = z))}.
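The divergence between the SQL query and the calculus query can be checked concretely. A sketch with Python's sqlite3 module and an invented instance in which T is empty:

```python
import sqlite3

# Tiny instance illustrating Example 7.1.1; the data are made up.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table R (A); create table S (B); create table T (C);
    insert into R values (1), (2);
    insert into S values (1);
    -- T is deliberately left empty.
""")
# The SQL query ranges over the cross-product R x S x T, so an empty T
# forces an empty answer even though R(1) and S(1) hold.
sql_ans = conn.execute(
    "select R.A from R, S, T where R.A = S.B or R.A = T.C").fetchall()
print(sql_ans)  # []

# The calculus query {x | R(x) and (S(x) or T(x))} instead yields {1}:
calc_ans = conn.execute(
    "select A from R where A in (select B from S)"
    " or A in (select C from T)").fetchall()
print(calc_ans)  # [(1,)]
```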
Adding Set Operators
The select-from-where blocks of SQL can be combined in a variety of ways. We describe
first the incorporation of the set operators (union, intersect, and difference). For example,
the query
(4.14) List all actors and directors of the movie "Apocalypse Now."
can be expressed as
(select Actor Participant
from Movies
where Title = 'Apocalypse Now')
union
(select Director Participant
from Movies
where Title = 'Apocalypse Now');
In the first subquery the output relation uses attribute Participant in place of Actor. This
illustrates renaming of attributes, analogous to relation variable renaming. This is needed
here so that the two relations that are unioned have compatible sort.
Although union, intersect, and difference were all included in the original SQL, only
union is in the current SQL2 standard developed by the American National Standards
Institute (ANSI). The two left out can be simulated by other mechanisms, as will be seen
later in this chapter.
SQL also includes a keyword contains, which can be used in a selection condition to
test containment between the output of two nested select-from-where expressions.
Nested SQL Queries
Nesting permits the use of one SQL query within the where clause of another. A simple
illustration of nesting is given by this alternative formulation of query (4.4):
select Theater
from Pariscope
where Title in
(select Title
from Movies
where Director = 'Bergman');
The preceding example tests membership of a unary tuple in a unary relation. The
keyword in can also be used to test membership for arbitrary arities. The symbols ⟨ and
⟩ are used to construct tuples from attribute expressions. In addition, because negation is
permitted in the where clause, set difference can be expressed. Consider the query
List title and theater for movies being shown in only one theater.
This can be expressed in SQL by
select Title, Theater
from Pariscope
where ⟨Title, Theater⟩ not in
(select P1.Title, P1.Theater
from Pariscope P1, Pariscope P2
where P1.Title = P2.Title
and not (P1.Theater = P2.Theater));
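This query can be tried directly; SQLite (version 3.15 or later) supports the tuple-valued not in under the name "row values," written with parentheses. A sketch via Python's sqlite3 module on an invented instance:

```python
import sqlite3

# Hypothetical Pariscope instance: "The Birds" plays in two theaters,
# "Bladerunner" in only one. Requires SQLite >= 3.15 for row values.
conn = sqlite3.connect(":memory:")
conn.execute("create table Pariscope (Theater, Title, Schedule)")
conn.executemany("insert into Pariscope values (?, ?, ?)",
                 [("Rex", "The Birds", "20:00"),
                  ("Le Champo", "The Birds", "22:00"),
                  ("Rex", "Bladerunner", "21:00")])

# The subquery collects (Title, Theater) pairs whose title also plays
# at some other theater; the outer query keeps the rest.
rows = conn.execute("""
    select Title, Theater
    from Pariscope
    where (Title, Theater) not in
        (select P1.Title, P1.Theater
         from Pariscope P1, Pariscope P2
         where P1.Title = P2.Title
         and not (P1.Theater = P2.Theater))""").fetchall()
print(rows)  # [('Bladerunner', 'Rex')]
```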
Expressing First-Order Queries in SQL
We now discuss the important result that SQL is relationally complete, in the sense that
it can express all relational queries expressible in the calculus. Recall from Chapter 5 that
the family of nr-datalog¬ programs is equivalent to the calculus and algebra. We shall show
how to simulate nr-datalog¬ using SQL. Intuitively, the result follows from the facts that
(a) each rule can be simulated using the select-from-where construct;
(b) multiple rules defining the same predicate can be simulated using union; and
(c) negation in rule bodies can be simulated using not in.
We present an example here and leave the formal proof for Exercise 7.4.
Example 7.1.2 Consider the following query against the CINEMA database:
Find the theaters showing every movie directed by Hitchcock.
An nr-datalog¬ program expressing the query is

Pariscope′(x_th, x_title) ← Pariscope(x_th, x_title, x_sch)

Bad_th(x_th) ← Movies(x_title, "Hitchcock", x_act),
               Location(x_th, x_loc, x_ph),
               ¬Pariscope′(x_th, x_title)

Answer(x_th) ← Location(x_th, x_loc, x_ph), ¬Bad_th(x_th).
In the program, Bad_th holds the list of bad theaters, for which one can find a movie by
Hitchcock that the theater is not showing. The last rule takes the complement of Bad_th
with respect to the list of theaters provided by Location.
An SQL query expressing an nr-datalog¬ program such as this one can be constructed
in two steps. The first is to write SQL queries for each rule separately. In this example, we
have

Pariscope′: select Theater, Title
            from Pariscope;

Bad_th:     select Theater
            from Movies, Location
            where Director = 'Hitchcock'
            and ⟨Theater, Title⟩ not in
               (select *
                from Pariscope′);

Answer:     select Theater
            from Location
            where Theater not in
               (select *
                from Bad_th);
The second step is to combine the queries. In general, this involves replacing nested
queries by their definitions, starting from the answer relation and working backward. In
this example, we have

select Theater
from Location
where Theater not in
   (select Theater
    from Movies, Location
    where Director = 'Hitchcock'
    and ⟨Theater, Title⟩ not in
       (select Theater, Title
        from Pariscope));
In this example, each idb (see Section 4.3) relation that occurs in a rule body occurs
negatively. As a result, all variables that occur in the rule are bound by edb relations, and
so the from part of the (possibly nested) query corresponding to the rule refers only to
edb relations. In general, however, variables in rule bodies might be bound by positively
occurring idb relations, which cannot be used in any from clause in the final SQL query.
To resolve this problem, the nr-datalog¬ program should be rewritten so that all positively
occurring relations in rule bodies are edb relations (see Exercise 7.4a).
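The combined query can be sanity-checked on a small, made-up CINEMA instance (all tuples are invented) using Python's sqlite3 module; SQLite 3.15+ is assumed for the row-valued not in:

```python
import sqlite3

# Invented fragment: the Rex shows both Hitchcock movies,
# the Action Christine misses one of them.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table Movies (Title, Director, Actor);
    create table Location (Theater, Address, Phone);
    create table Pariscope (Theater, Title, Schedule);
    insert into Movies values
      ('The Birds', 'Hitchcock', 'Hedren'),
      ('Psycho', 'Hitchcock', 'Perkins'),
      ('Bladerunner', 'Scott', 'Hannah');
    insert into Location values
      ('Rex', '1 bd Poissonniere', '47708805'),
      ('Action Christine', '4 rue Christine', '43254085');
    insert into Pariscope values
      ('Rex', 'The Birds', '20:00'),
      ('Rex', 'Psycho', '22:00'),
      ('Action Christine', 'The Birds', '20:00');
""")
# The nested subquery plays the role of Bad_th: theaters paired with
# some Hitchcock title they are not showing.
ans = conn.execute("""
    select Theater
    from Location
    where Theater not in
        (select Theater
         from Movies, Location
         where Director = 'Hitchcock'
         and (Theater, Title) not in
             (select Theater, Title from Pariscope))""").fetchall()
print(ans)  # [('Rex',)]
```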
View Creation and Updates
We conclude our consideration of SQL by noting that it supports both view creation and
updates.
SQL includes an explicit mechanism for view creation. The relation Champo-info from
Example 4.3.4 is created in SQL by
create view Champo-info as
select Pariscope.Title, Schedule, Phone
from Pariscope, Location
where Pariscope.Theater = 'Le Champo'
and Location.Theater = 'Le Champo';
Views in SQL can be accessed as can normal relations and are useful in building up
complex queries.
As a practical database language, SQL provides commands for updating the database.
We briefly illustrate these here; some theoretical aspects concerning updates are presented
in Chapter 22.
SQL provides three primitive commands for modifying the contents of a database:
insert, delete, and update (in the sense of modifying individual tuples of a relation).
The following can be used to insert a new tuple into the Movies database:
insert into Movies
values ('Apocalypse Now', 'Coppola', 'Duvall');
A set of tuples can be deleted simultaneously:
delete from Movies
where Director = 'Hitchcock';
Tuple update can also operate on sets of tuples (as illustrated by the following) that
might be used to correct a typographical error:
update Movies
set Director = 'Hitchcock'
where Director = 'Hickcook';
The ability to insert and delete tuples provides an alternative approach to
demonstrating the relational completeness of SQL. In particular, subexpressions of an algebra
expression can be computed in intermediate, temporary relations (see Exercise 7.6). This
approach does not allow the same degree of optimization as the one based on views because
the SQL interpreter is required to materialize each of the intermediate relations.
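In the spirit of Exercise 7.6, the following sketch computes the difference R − S into a temporary relation T using only insert and delete; relation names and data are illustrative, and sqlite3 stands in for an SQL interpreter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    create table R (A); create table S (A); create table T (A);
    insert into R values (1), (2), (3);
    insert into S values (2);
""")
# Step 1: copy R into the temporary relation T.
conn.execute("insert into T select A from R")
# Step 2: delete every tuple of T that also appears in S.
conn.execute("delete from T where A in (select A from S)")
diff = conn.execute("select A from T order by A").fetchall()
print(diff)  # [(1,), (3,)]
```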
7.2 Query-by-Example and Microsoft Access
We now turn to two query languages that have a more visual presentation. The first, Query-
by-Example (QBE), presents a visual display for expressing conjunctive queries that is
close to the perspective of tableau queries. The second language, Access, is available on
personal computers; it uses elements of QBE, but with a more graphical presentation of
join relationships.
QBE
The language Query-By-Example (QBE) was originally developed at the IBM T. J. Watson
Research Center and is currently supported as part of IBM's Query Management Facility.
As illustrated at the beginning of Chapter 4, the basic format of QBE queries is fundamen-
tally two-dimensional and visually close to the tableau queries. Importantly, a variety of
features are incorporated into QBE to give more expressive power than the tableau queries
and to provide data manipulation capabilities. We now indicate some features that can be
incorporated into a QBE-like visual framework. The semantics presented here are a slight
variation of the semantics supported for QBE in IBM's product line.
As seen in Fig. 4.2, which expresses query (4.4), QBE uses strings with prefix _ to
denote variables and other strings to denote constants. If the string is preceded by P., then
the associated coordinate value forms part of the query output. The QBE framework can provide
a partial union capability by permitting the inclusion in a query of multiple tuples having a
P. prefix in a single relation. For example, Fig. 7.1 expresses the query
(4.12) What films with Allen as actor or director are currently featured at the Concorde?
Under one natural semantics for QBE queries, which parallels the semantics of conjunctive
queries and of SQL, this query will yield the empty answer if either σ_{Director="Allen"}(Movies)
or σ_{Actor="Allen"}(Movies) is empty (see Example 7.1.1).
QBE also includes a capability of condition boxes, which can be viewed as an extension
of the incorporation of equality atoms into tableau queries.
QBE does not provide a mechanism analogous to SQL for nesting of queries. It is hard
to develop an appropriate visual representation of such nesting within the QBE framework,
in part due to the lack of scoping rules. More recent extensions of QBE address this issue
by incorporating, for example, hierarchical windows. QBE also provides mechanisms for
both view denition and database update.
Negation can be incorporated into QBE queries in a variety of ways. The use of database
update is an obvious mechanism, although not especially efficient. Two restricted
Movies      Title  Director  Actor
            _X     Allen
            _Y               Allen

Pariscope   Theater   Title  Schedule
            Concorde  P._X
            Concorde  P._Y

Figure 7.1: One form of union in QBE
Movies      Title  Director  Actor
         ¬  _Z     Bergman

Pariscope   Theater                Title  Schedule
            P._champio ¬Concorde   _Z

Figure 7.2: A query with negation in QBE
forms of negation are illustrated in Fig. 7.2, which expresses the following query (assuming
that each film has only one director): what theaters, other than the Concorde, feature a
film not directed by Bergman? The ¬ in the Pariscope relation restricts attention to those
tuples with Theater coordinate not equal to Concorde, and the ¬ preceding the tuple in the
Movies relation is analogous to a negative literal in a datalog rule and captures a limited
form of ¬ from the calculus; in this case it excludes all films directed by Bergman. When
such negation is used, it is required that all variables that occur in a row preceded by ¬ also
appear in positive rows. Other restricted forms of negation in QBE include using negative
literals in condition boxes and supporting an operator analogous to relational division (as
defined in Exercise 5.8).
The following example shows more generally how view denition can be used to
obtain relational completeness.
Example 7.2.1 Recall the query and nr-datalog¬ program of Example 7.1.2. As with
SQL, the QBE query corresponding to an nr-datalog¬ program will involve one or more
views for each rule (see Exercise 7.5). For this example, however, it turns out that we can
compute the effect of the first two rules with a single QBE query. Thus the two stages of
the full query are shown in Fig. 7.3, where the symbol I. indicates that the associated tuples
are to be inserted into the answer. The creation of the view Bad_th is accomplished using the
Stage I:

Movies      Title     Director   Actor
            _x_title  Hitchcock

Pariscope   Theater  Title     Schedule
         ¬  _x_th    _x_title

Location    Theater  Address  Phone
            _x_th

I.VIEW Bad_th I.   Theater
                   _x_th I.

Stage II:

Location    Theater  Address  Phone
            _x_th

Bad_th      Theater
         ¬  _x_th

Answer      Theater
            _x_th I.

Figure 7.3: Illustration of relational completeness of QBE
expression I.VIEW Bad_th I., which both creates the view and establishes the attribute
names for the view relation.
Microsoft Access: A Query Language for PCs
A number of dbmss for personal computers have become available over the past few years,
such as DBASE IV, Microsoft Access, Foxpro, and Paradox. Several of these support a
version of SQL and a more visual query language. The visual languages have a flavor
somewhat different from QBE. We illustrate this here by presenting an example of a query
from the Microsoft Access dbms.
Access provides an elegant graphical mechanism for constructing conjunctive queries.
This includes a tabular display to indicate the form and content of desired output tuples,
the use of single-attribute conditions within this display (in the rows named Criteria and
or), and a graphical presentation of join relationships that are to hold between relations
used to form the output. Fig. 7.4 shows how query (4.4) can be expressed using Access.
[Figure 7.4: Example query in Access. A select-query design grid joins Movies (Title,
Director, Actor), Pariscope (Theater, Title, Schedule), and Location (Theater, Address,
Phone); the Field/Table rows select Theater and Address from Location, and the Criteria
row places the condition "Bergman" on Director from Movies.]
(Although not shown in the figure, join conditions can also be expressed using single-
attribute conditions represented as text.)
Limited forms of negation and union can be incorporated into the condition part of an
Access query. For more general forms of negation and union, however, the technique of
building views to serve as intermediate relations can be used.
7.3 Confronting the Real World
Because they are to be used in practical situations, the languages presented in this chapter
incorporate a number of features not included in their formal counterparts. In this section
we touch on some of these extensions and on fundamental issues raised by them. These
include domain independence, the implications of incorporating many-sorted atomic objects,
the use of arithmetic, and the incorporation of aggregate operators.
Queries from all of the practical languages described in this chapter are domain
independent. This is easily verified from the form of queries in these languages: Whenever a
variable is introduced, the relation it ranges over is also specified. Furthermore, the specific
semantics associated with ors occurring in where clauses (see Example 7.1.1) prevent the
kind of safety problem illustrated by query unsafe-2 of Section 5.3.
Most practical languages permit the underlying domain of values to be many-sorted:
for example, including distinct scalar domains for the types integer, real, character string,
etc., and some constructed types, such as date, in some languages. (More recent systems,
such as POSTGRES, permit the user to incorporate abstract data types as well.) For most
of the theoretical treatment, we assumed that there was one underlying domain of values,
dom, which was shared equally by all relational attributes. As noted in the discussion of
SQL, the typing of attributes can be used to ensure that comparisons make sense, in that
they compare values of comparable type. Much of the theory developed here for a single
underlying domain can be generalized to the case of a many-sorted underlying domain (see
Exercise 7.8).
Another fundamental feature of practical query languages is that they offer value
comparators other than equality. Typically most of the base sorts are totally ordered. This
is the case for the integers or the strings (under the lexicographical ordering). It is therefore
natural to introduce ≤, ≥, <, > as comparators. For example, to ask the query, "What can
we see at the Le Champo after 21:00?", we can use

ans(x_t) ← Pariscope("Le Champo", x_t, x_s), x_s > 21:00;

and, in the algebra, as

π_Title(σ_{Theater="Le Champo" ∧ Schedule>21:00}(Pariscope)).
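The same query can be posed in SQL, where the comparator applies to the Schedule values; in this sqlite3 sketch (invented tuples) schedules are stored as 'HH:MM' strings, whose lexicographic order happens to agree with temporal order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table Pariscope (Theater, Title, Schedule)")
conn.executemany("insert into Pariscope values (?, ?, ?)",
                 [("Le Champo", "The Birds", "20:00"),
                  ("Le Champo", "Psycho", "21:30")])
# The > comparator selects the strictly later schedule.
late = conn.execute("""
    select Title from Pariscope
    where Theater = 'Le Champo' and Schedule > '21:00'""").fetchall()
print(late)  # [('Psycho',)]
```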
Exercise 4.30 explores the impact of incorporating comparators into the conjunctive
queries. Many languages also incorporate string-comparison operators.
Given the presence of integers and reals, it is natural to incorporate arithmetic
operators. This yields a fundamental increase in expressive power: Even simple counting is
beyond the power of the calculus (see Exercise 5.34).
Another extension concerns the incorporation of aggregate operators into the practical
languages (see Section 5.5). Consider, for example, the query, "How many films did
Hitchcock direct?" In SQL, this can be expressed using the query

select count(distinct Title)
from Movies
where Director = 'Hitchcock';
(The keyword distinct is needed here, because otherwise SQL will not remove duplicates
from the projection onto Title.) Other aggregate operators typically supported in practical
languages include sum, average, minimum, and maximum.
In the preceding example, the aggregate operator was applied to an entire relation.
By using the group by command, aggregate operators can be applied to clusters of tuples,
each sharing common values on a specified set of attributes. For example, the following SQL query
determines the number of movies directed by each director:
select Director, count(distinct Title)
from Movies
group by Director;
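Both aggregate queries behave as described; a quick sqlite3 check on an invented Movies instance:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table Movies (Title, Director, Actor)")
conn.executemany("insert into Movies values (?, ?, ?)",
                 [("The Birds", "Hitchcock", "Hedren"),
                  ("The Birds", "Hitchcock", "Taylor"),
                  ("Psycho", "Hitchcock", "Perkins"),
                  ("Bladerunner", "Scott", "Hannah")])
# count(distinct Title) per director: duplicate (Title, Director) pairs
# coming from different actors are counted once.
counts = conn.execute("""
    select Director, count(distinct Title)
    from Movies
    group by Director
    order by Director""").fetchall()
print(counts)  # [('Hitchcock', 2), ('Scott', 1)]
```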
The semantics of group by in SQL are most easily understood when we study an extension
of the relational model, called the complex object (or nested relation) model, which models
grouping in a natural fashion (see Chapter 20).
Bibliographic Notes
General descriptions of SQL and QBE may be found in [EN89, KS91, Ull88]; more details
on SQL can be found in [C+76], and on QBE in [Zlo77]. Another language similar in spirit
to SQL is Quel, which was provided with the original INGRES system. A description of
Quel can be found in [SWKH76]. Reference [OW93] presents a survey of QBE languages
and extensions. A reference on Microsoft Access is [Cam92]. In Unix, the command awk
provides a basic relational tool.
The formal semantics for SQL are presented in [NPS91]. Example 7.1.1 is from
[VanGT91]. Other proofs that SQL can simulate the relational calculus are presented in
[PBGG89, Ull88]. Motivated by the fact that SQL outputs bags rather than sets, [CV93]
studies containment and equivalence of conjunctive queries under the bag semantics (see
Bibliographic Notes in Chapter 6).
Aggregate operators in query languages are studied in [Klu82].
SQL has become the standard relational query language [57391, 69392]; reference
[GW90] presents the original ANSI standard for SQL, along with commentary about par-
ticular products and some history. SQL is available on most main-frame relational dbmss,
including, for example, IBMs DB2, Oracle, Informix, INGRES, and Sybase, and in some
more recent database products for personal computers (e.g., Microsoft Access, dBASE IV).
QBE is available as part of IBM's product QMF (Query Management Facility). Some
personal computer products support more restricted graphical query languages, including
Microsoft Access and Paradox (which supports a form-based language).
Exercises
Exercise 7.1 Write SQL, QBE, and Access queries expressing queries (4.1 to 4.14) from
Chapter 4. Start by expressing them as nr-datalog¬ programs.
Exercise 7.2 Consider again the queries (5.2 and 5.3) of Chapter 5. Express these in SQL,
QBE, and Access.
Exercise 7.3 Describe formally the mapping of SQL select-from-where blocks into the SPJR
algebra.
Exercise 7.4
(a) Let P be an nr-datalog¬ program. Describe how to construct an equivalent program
P′ such that each predicate that occurs positively in a rule body is an edb predicate.
(b) Develop a formal proof that SQL can simulate nr-datalog¬.
Exercise 7.5 Following Example 7.2.1, show that QBE is relationally complete.
Exercise 7.6
(a) Assuming that R and S have compatible sorts, show how to compute in SQL the
value of R ∩ S into the relation T using insert and delete.
(b) Generalize this to show that SQL is relationally complete.
Exercise 7.7 In a manner analogous to Exercise 7.6, show that Access is relationally complete.
Exercise 7.8 The intuition behind the typed restricted PSJ algebra is that each attribute has a
distinct type whose elements are incomparable with the types of other attributes. As motivated
by the practical query languages, propose and study a restriction of the SPJR algebra analo-
gous to the typed restricted PSJ algebra, but permitting more than one attribute with the same
type. Does the equivalence of the various versions of the conjunctive queries still hold? Can
Exercise 6.21 be generalized to this framework?
8 Functional and Join Dependency
Alice: Your model reduces the most interesting information to something flat and
boring.
Vittorio: You're right, and this causes a lot of problems.
Sergio: Designing the schema for a complex application is tough, and it is easy to
make mistakes when updating a database.
Riccardo: Also, the system knows so little about the data that it is hard to obtain
good performance.
Alice: Are you telling me that the model is bad?
Vittorio: No, wait, we are going to fix it!
This chapter begins with an informal discussion that introduces some simple dependencies
and illustrates the primary motivations for their development and study. The two
following sections of the chapter are devoted to two of the simple kinds of dependencies;
and the final section introduces the chase, an important tool for analyzing these dependencies
and their effect on queries.
Many of the early dependencies introduced in the literature use the named (as opposed
to unnamed) perspective on tuples and relations. Dependency theory was one of the
main reasons for adopting this perspective in theoretical investigations. This is because
dependencies concern the semantics of data, and attribute names carry more semantics than
column numbers. The general view of dependencies based on logic, which is considered
in Chapter 10, uses the column-number perspective, but a special subcase (called typed)
retains the spirit of the attribute-name perspective.
8.1 Motivation
Consider the database shown in Fig. 8.1. Although the schema itself makes no restrictions
on properties of data that might be stored, the intended application for the schema may
involve several such restrictions. For example, we may know that there is only one director
associated with each movie title, and that in Showings, only one movie title is associated
with a given theater-screen pair.¹ Such properties are called functional dependencies (fds)
because the values of some attributes of a tuple uniquely or functionally determine the
values of other attributes of that tuple. In the syntax to be developed in this chapter, the

¹ Gone are the days of seeing two movies for the price of one!
Movies Title Director Actor
The Birds Hitchcock Hedren
The Birds Hitchcock Taylor
Bladerunner Scott Hannah
Apocalypse Now Coppola Brando
Showings Theater Screen Title Snack
Rex 1 The Birds coffee
Rex 1 The Birds popcorn
Rex 2 Bladerunner coffee
Rex 2 Bladerunner popcorn
Le Champo 1 The Birds tea
Le Champo 1 The Birds popcorn
Cinoche 1 The Birds Coke
Cinoche 1 The Birds wine
Cinoche 2 Bladerunner Coke
Cinoche 2 Bladerunner wine
Action Christine 1 The Birds tea
Action Christine 1 The Birds popcorn
Figure 8.1: Sample database illustrating simple dependencies
dependency in the Movies relation is written as
Movies: Title → Director
and that of the Showings relation is written as
Showings: Theater, Screen → Title.
Technically, there are sets of attributes on the left- and right-hand sides of the arrow, but
we continue with the convention of omitting set braces when understood from the context.
When there is no confusion from the context, a dependency R: X → Y is simply
denoted X → Y. A relation I satisfies a functional dependency X → Y if for each pair
s, t of tuples in I, π_X(s) = π_X(t) implies π_Y(s) = π_Y(t).
An important notion in dependency theory is implication. One can observe that any
relation satisfying the dependency
(a) Title → Director
also has to satisfy the dependency
(b) Title, Actor → Director.
We will say that dependency (a) implies dependency (b).
A key dependency is an fd X → U, where U is the full set of attributes of the relation.
It turns out that dependency (b) is equivalent to the key dependency Title, Actor → Title,
Director, Actor.
A second fundamental kind of dependency is illustrated by the relation Showings. A
tuple (th, sc, ti, sn) is in Showings if theater th is showing movie ti on screen sc and if
theater th offers snack sn. Intuitively, one would expect a certain independence between the
Screen-Title attributes, on the one hand, and the Snack attribute, on the other, for a given
value of Theater. For example, because (Cinoche, 1, The Birds, Coke) and (Cinoche, 2,
Bladerunner, wine) are in Showings, we also expect (Cinoche, 1, The Birds, wine) and
(Cinoche, 2, Bladerunner, Coke) to be present. More precisely, if a relation I has this
property, then

I = π_{Theater,Screen,Title}(I) ⋈ π_{Theater,Snack}(I).

This is a simple example of a join dependency (jd), which is formally expressed by

Showings: ⋈[{Theater, Screen, Title}, {Theater, Snack}].
In general, a jd may involve more than two attribute sets. Multivalued dependency
(mvd) is the special case of jds that have at most two attribute sets. Due to their naturalness,
mvds were introduced before jds and have several interesting properties, which makes
them worth studying on their own.
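The jd for Showings states that the relation equals the join of its two projections. A small Python check (function names and the two-component join are our own sketch) on the Cinoche tuples from Fig. 8.1:

```python
def project(I, attrs):
    """pi_attrs(I) for a list of dict-tuples."""
    return {tuple(t[a] for a in attrs) for t in I}

def join2(I, X, Y):
    """Join pi_X(I) with pi_Y(I) on their shared attributes
    (a sketch restricted to the two-component case)."""
    shared = [a for a in X if a in Y]
    out = set()
    for x in project(I, X):
        for y in project(I, Y):
            tx, ty = dict(zip(X, x)), dict(zip(Y, y))
            if all(tx[a] == ty[a] for a in shared):
                merged = {**tx, **ty}
                out.add(tuple(merged[a] for a in sorted(merged)))
    return out

showings = [
    {"Theater": "Cinoche", "Screen": 1, "Title": "The Birds", "Snack": "Coke"},
    {"Theater": "Cinoche", "Screen": 1, "Title": "The Birds", "Snack": "wine"},
    {"Theater": "Cinoche", "Screen": 2, "Title": "Bladerunner", "Snack": "Coke"},
    {"Theater": "Cinoche", "Screen": 2, "Title": "Bladerunner", "Snack": "wine"},
]
orig = {tuple(t[a] for a in sorted(t)) for t in showings}
rejoined = join2(showings, ["Theater", "Screen", "Title"], ["Theater", "Snack"])
print(orig == rejoined)  # True
```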
As will be seen later in this chapter, the fact that the fd Title → Director is satisfied by
the Movies relation implies that the jd

⋈[{Title, Director}, {Title, Actor}]
is also satisfied. We will also study such interaction between fds and jds.
So far we have considered dependencies that apply to individual relations. Typically
these dependencies are used in the context of a database schema, in which case one has
to specify the relation concerned by each dependency. We will also consider a third fun-
damental kind of dependency, called inclusion dependency (ind) and also referred to as
referential constraint. In the example, we might expect that each title currently being
shown (i.e., occurring in the Showings relation) is the title of a movie (i.e., also occurs in
the Movies relation). This is denoted by
Showings[Title] ⊆ Movies[Title].
162 Functional and Join Dependency
In general, inds may involve sequences of attributes on both sides. Inclusion dependencies
will be studied in depth in Chapter 9.
Data dependencies such as the ones just presented provide a formal mechanism for
expressing properties expected from the stored data. If the database is known to satisfy a
set of dependencies, this information can be used to (1) improve schema design, (2) protect
data by preventing certain erroneous updates, and (3) improve performance. These aspects
are considered in turn next.
Schema Design and Update Anomalies
The task of designing the schema in a large database application is far from being trivial,
so the designer has to receive support from the system. Dependencies are used to provide
information about the semantics of the application so that the system may help the user
choose, among all possible schemas, the most appropriate one.
There are various ways in which a schema may not be appropriate. The relations
Movies and Showings illustrate the most prominent kinds of problems associated with fds
and jds:
Incomplete information: Suppose that one is to insert the title of a new movie and its
director without knowing yet any actor of the movie. This turns out to be impossible with
the foregoing schema, and it is an insertion anomaly. An analogue for deletion, a
deletion anomaly, occurs if actor Marlon Brando is no longer associated with the movie
"Apocalypse Now." Then the tuple ⟨Apocalypse Now, Coppola, Brando⟩ should be
deleted from the database. But this has the additional effect of deleting the association
between the movie "Apocalypse Now" and the director Coppola from the database,
information that may still be valid.
Redundancy: The fact that Coke can be found at the Cinoche is recorded many times.
Furthermore, suppose that the management of the Cinoche decided to sell Pepsi
instead of Coke. It is not sufficient to modify the tuple ⟨Cinoche, 1, The Birds, Coke⟩
to ⟨Cinoche, 1, The Birds, Pepsi⟩ because this would lead to a violation of the jd. We
have to modify several tuples. This is a modification anomaly. Insertion and deletion
anomalies are also caused by redundancy.
Thus because of a bad choice for the schema, updates can lead to loss of information,
inconsistency in the data, and more difficulties in writing correct updates. These problems
can be prevented by choosing a more appropriate schema. In the example, the relation
Movies should be decomposed into two relations M-Director[Title, Director] and M-
Actor[Title, Actor], where M-Director satisfies the fd Title → Director. Similarly, the
relation Showings should be replaced by two relations ST-Showings[Theater, Screen, Title]
and S-Showings[Theater, Snack], where ST-Showings satisfies the fd Theater, Screen →
Title. This approach to schema design is explored in Chapter 11.
Data Integrity
Data dependencies also serve as a filter on proposed updates in a natural fashion: If a
database is expected to satisfy a dependency σ and a proposed update would lead to the
violation of σ, then the update is rejected. In fact, the system supports transactions. During
a transaction, the database can be in an inconsistent state; but at the end of a transaction,
the system checks the integrity of the database. If dependencies are violated, the whole
transaction is rejected (aborted); otherwise it is accepted (validated).
Efficient Implementation and Query Optimization
It is natural to expect that knowledge of structural properties of the stored data be useful in
improving the performance of a system for a particular application.
At the physical level, the satisfaction of dependencies leads to a variety of alternatives
for storage and access structures. For example, satisfaction of an fd or jd implies that a
relation can be physically stored in decomposed form. In addition, satisfaction of a key
dependency can be used to reduce indexing space.
A particularly striking theoretical development in dependency theory provides a
method for optimizing conjunctive queries in the presence of a large class of dependencies.
As a simple example, consider the query

ans(d, a) ← Movies(t, d, a′), Movies(t, d′, a),

which returns tuples ⟨d, a⟩ where actor a acted in a movie directed by d. A naive
implementation of this query will require a join. Because Movies satisfies Title → Director, this
query can be simplified to

ans(d, a) ← Movies(t, d, a),

which can be evaluated without a join. Whenever the pattern of tuples {⟨t, d, a′⟩, ⟨t, d′, a⟩}
is found in relation Movies, it must be the case that d = d′, so one may as well use just the
pattern {⟨t, d, a⟩}, yielding the simplified query. This technique for query optimization is
based on the chase and is considered in the last section of this chapter.
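The effect of the simplification can be observed on any instance satisfying Title → Director. This sqlite3 sketch (invented instance) compares the two-atom query with its one-atom simplification:

```python
import sqlite3

# The data satisfy Title -> Director: each title has a single director.
conn = sqlite3.connect(":memory:")
conn.execute("create table Movies (Title, Director, Actor)")
conn.executemany("insert into Movies values (?, ?, ?)",
                 [("The Birds", "Hitchcock", "Hedren"),
                  ("The Birds", "Hitchcock", "Taylor"),
                  ("Bladerunner", "Scott", "Hannah")])
# ans(d, a) <- Movies(t, d, a'), Movies(t, d', a): a self-join on Title.
with_join = conn.execute("""
    select distinct M1.Director, M2.Actor
    from Movies M1, Movies M2
    where M1.Title = M2.Title
    order by 1, 2""").fetchall()
# ans(d, a) <- Movies(t, d, a): no join needed.
without_join = conn.execute("""
    select distinct Director, Actor from Movies order by 1, 2""").fetchall()
print(with_join == without_join)  # True
```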
8.2 Functional and Key Dependencies
Functional dependencies are the most prominent form of dependency, and several elegant
results have been developed for them. Key dependencies are a special case of functional
dependencies. These are the dependencies perhaps most universally supported by relational
systems and used in database applications. Many issues in dependency theory have nice
solutions in the context of functional dependencies, and these dependencies lie at the origin
of the decomposition approach to schema design.
To specify a class of dependencies, one must define the syntax and the semantics of
the dependencies of concern. This is done next for fds.
Definition 8.2.1 If U is a set of attributes, then a functional dependency (fd) over U is an expression of the form X → Y, where X, Y ⊆ U. A key dependency over U is an fd of the form X → U. A relation I over U satisfies X → Y, denoted I ⊨ X → Y, if for each pair s, t of tuples in I, πX(s) = πX(t) implies πY(s) = πY(t). For a set Σ of fds, I satisfies Σ, denoted I ⊨ Σ, if I ⊨ σ for each σ ∈ Σ.
A functional dependency over a database schema R is an expression R : X → Y, where R ∈ R and X → Y is an fd over sort(R). These are sometimes referred to as tagged dependencies, because they are tagged by the relation that they apply to. The notion of satisfaction of fds by instances over R is defined in the obvious way. In the remainder of this chapter, we consider only relational schemas. All can be extended easily to database schemas.
The following simple property provides the basis for the decomposition approach to
schema design. Intuitively, it says that if a certain fd holds in a relation, one can store
instead of the relation two projections of it, without loss of information. More precisely,
the original relation can be reconstructed by joining the projections. Such joins have been
termed lossless joins and will be discussed in some depth in Section 11.2.
Proposition 8.2.2 Let I be an instance over U that satisfies X → Y, and let Z = U − XY. Then I = πXY(I) ⋈ πXZ(I).

Proof The inclusion I ⊆ πXY(I) ⋈ πXZ(I) holds for all instances I. For the opposite inclusion, let r be a tuple in the join. Then there are tuples s, t ∈ I such that πXY(r) = πXY(s) and πXZ(r) = πXZ(t). Because πX(s) = πX(r) = πX(t) and I ⊨ X → Y, we have πY(r) = πY(s) = πY(t). It follows that r = t, so r is in I.
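Proposition 8.2.2 can be exercised concretely on a small instance. The following Python sketch (the relation and all data values are illustrative, not taken from the text) projects an instance satisfying Title → Director onto XY and XZ and verifies that the natural join reconstructs it exactly.

```python
# Sanity check of Proposition 8.2.2: if I satisfies X -> Y and Z = U - XY,
# then I equals the join of its projections on XY and XZ.
# Tuples are represented as dicts from attribute names to values.

def project(I, attrs):
    """Project a collection of tuples (dicts) onto `attrs`."""
    return {frozenset((a, t[a]) for a in attrs) for t in I}

def join(J1, J2):
    """Natural join of two projected relations."""
    out = set()
    for u in J1:
        for v in J2:
            du, dv = dict(u), dict(v)
            # combine u and v when they agree on all common attributes
            if all(du[a] == dv[a] for a in du.keys() & dv.keys()):
                out.add(frozenset({**du, **dv}.items()))
    return out

# An instance over {Title, Director, Actor} satisfying Title -> Director.
I = [
    {"Title": "t1", "Director": "d1", "Actor": "a1"},
    {"Title": "t1", "Director": "d1", "Actor": "a2"},
    {"Title": "t2", "Director": "d1", "Actor": "a1"},
]

XY = ["Title", "Director"]   # X = {Title}, Y = {Director}
XZ = ["Title", "Actor"]      # Z = U - XY = {Actor}
reconstructed = join(project(I, XY), project(I, XZ))
assert reconstructed == {frozenset(t.items()) for t in I}
```

Dropping the fd (e.g., giving t1 two directors) makes the join strictly larger than I, which is exactly the "lossy join" phenomenon discussed in Section 11.2.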
Logical Implication
In general, we may know that a set of fds is satisfied by an instance. A natural question is, What other fds are necessarily satisfied by this instance? This is captured by the following definition.

Definition 8.2.3 Let Σ and Γ be sets of fds over an attribute set U. Then Σ (logically) implies Γ, denoted Σ ⊨U Γ, or simply Σ ⊨ Γ if U is understood from the context, if for all relations I over U, I ⊨ Σ implies I ⊨ Γ. Two sets Σ, Γ are (logically) equivalent, denoted Σ ≡ Γ, if Σ ⊨ Γ and Γ ⊨ Σ.

Example 8.2.4 Consider the set Σ1 = {A → C, B → C, CD → E} of fds over {A, B, C, D, E}. Then² a simple argument allows us to show that Σ1 ⊨ AD → E. In addition, Σ1 ⊨ CDE → C. In fact, ∅ ⊨ CDE → C (where ∅ is the empty set of fds).

Although the definition just presented focuses on fds, this definition will be used in connection with other classes of dependencies studied here as well.

² We generally omit set braces from singleton sets of fds.
The fd closure of a set Σ of fds over an attribute set U, denoted Σ*,U or simply Σ* if U is understood from the context, is the set

{X → Y | XY ⊆ U and Σ ⊨ X → Y}.

It is easily verified that for any set Σ of fds over U and any sets Y ⊆ X ⊆ U, X → Y ∈ Σ*,U. This implies that the closure of a set of fds depends on the underlying set of attributes. It also implies that Σ*,U has size greater than 2^|U|. (It is bounded by 2^(2|U|) by definition.) Other properties of fd closures are considered in Exercise 8.3.
Determining Implication for fds Is Linear Time
One of the key issues in dependency theory is the development of algorithms for testing logical implication. Although a set Σ of fds implies an exponential (in terms of the number of attributes present in the underlying schema) number of fds, it is possible to test whether Σ implies an fd X → Y in time that is linear in the size of Σ and X → Y (i.e., the space needed to write them).

A central concept used in this algorithm is the fd closure of a set of attributes. Given a set Σ of fds over U and an attribute set X ⊆ U, the fd closure of X under Σ, denoted (X, Σ)*,U or simply X* if Σ and U are understood, is the set {A ∈ U | Σ ⊨ X → A}. It turns out that this set is independent of the underlying attribute set U (see Exercise 8.6).
Example 8.2.5 Recall the set Σ1 of fds from Example 8.2.4. Then A* = AC, (AB)* = ABC, and (AD)* = ACDE. The family of subsets X of U such that X* = X is {∅, C, D, E, AC, BC, CE, DE, ABC, ACE, ADE, BCE, BDE, CDE, ABCE, ACDE, BCDE, ABCDE}.

The following is easily verified (see Exercise 8.4):

Lemma 8.2.6 Let Σ be a set of fds and X → Y an fd. Then Σ ⊨ X → Y iff Y ⊆ X*.
Thus testing whether Σ ⊨ X → Y can be accomplished by computing X*. The following algorithm can be used to compute this set.

Algorithm 8.2.7
Input: a set Σ of fds and a set X of attributes.
Output: the closure X* of X under Σ.
1. unused := Σ;
2. closure := X;
3. repeat until no further change:
   if W → Z ∈ unused and W ⊆ closure then
     i. unused := unused − {W → Z};
     ii. closure := closure ∪ Z
4. output closure.
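Algorithm 8.2.7 transcribes almost directly into code. In the sketch below, an fd is a pair of attribute strings (so "CD" stands for the set {C, D}); the function names are illustrative.

```python
def fd_closure(X, fds):
    """Algorithm 8.2.7: compute the closure X* of attribute set X under
    a list of fds, each given as a pair (W, Z) of attribute strings."""
    closure = set(X)
    unused = list(fds)
    changed = True
    while changed:
        changed = False
        for fd in unused[:]:           # scan a snapshot of the unused fds
            W, Z = fd
            if set(W) <= closure:      # W -> Z applies: W is inside closure
                unused.remove(fd)
                closure |= set(Z)
                changed = True
    return closure

# Sigma_1 from Example 8.2.4: A -> C, B -> C, CD -> E.
sigma1 = [("A", "C"), ("B", "C"), ("CD", "E")]
assert fd_closure("AD", sigma1) == {"A", "C", "D", "E"}   # (AD)* = ACDE
assert fd_closure("AB", sigma1) == {"A", "B", "C"}        # (AB)* = ABC
```

By Lemma 8.2.6, Σ ⊨ X → Y can then be tested as `set(Y) <= fd_closure(X, sigma)`.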
Proposition 8.2.8 On input Σ and X, Algorithm 8.2.7 computes (X, Σ)*.

Proof Let U be a set of attributes containing the attributes occurring in Σ or X, and let result be the output of the algorithm. Using properties established in Exercise 8.5, an easy induction shows that result ⊆ X*.

For the opposite inclusion, note first that for attribute sets Y ⊆ Z, Y* ⊆ Z*. Because X ⊆ result, it now suffices to show that result* ⊆ result. It is enough to show that if A ∈ U − result, then Σ ⊭ result → A. To show this, we construct an instance I over U such that I ⊨ Σ but I ⊭ result → A for each A ∈ U − result. Let I = {s, t}, where πresult(s) = πresult(t) and s(A) ≠ t(A) for each A ∈ U − result. (Observe that this uses the fact that the domain has at least two elements.) Note that, by construction, for each fd W → Z ∈ Σ, if W ⊆ result then Z ⊆ result. It easily follows that I ⊨ Σ. Furthermore, for A ∈ U − result, s(A) ≠ t(A), so I ⊭ result → A. Thus Σ ⊭ result → A, and result* ⊆ result.
The algorithm provides the means for checking whether a set of dependencies implies a single dependency. To test implication of a set of dependencies, it suffices to test independently the implication of each dependency in the set. In addition, one can check that the preceding algorithm runs in time O(n²), where n is the length of Σ and X. As shown in Exercise 8.7, this algorithm can be improved to linear time. The following summarizes this development.

Theorem 8.2.9 Given a set Σ of fds and a single fd σ, whether Σ ⊨ σ can be decided in linear time.

Several interesting properties of fd-closure sets are considered in Exercises 8.11 and 8.12.
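The linear-time improvement of Exercise 8.7 is usually obtained with a counting scheme; the following sketch shows one common realization (not necessarily the book's intended solution): each fd keeps a count of left-hand-side attributes not yet in the closure, and an fd "fires" when its count reaches zero.

```python
from collections import deque

def fd_closure_linear(X, fds):
    """Attribute closure via counting, in the spirit of the linear-time
    algorithm of Exercise 8.7. fds are pairs (W, Z) of attribute strings."""
    counts = [len(set(W)) for W, _ in fds]   # lhs attributes not yet reached
    watch = {}                               # attribute -> fds waiting on it
    for i, (W, _) in enumerate(fds):
        for a in set(W):
            watch.setdefault(a, []).append(i)
    queue = deque(X)
    for i, (W, Z) in enumerate(fds):         # fds with empty lhs fire at once
        if counts[i] == 0:
            queue.extend(Z)
    closure = set()
    while queue:
        a = queue.popleft()
        if a in closure:
            continue
        closure.add(a)
        for i in watch.get(a, []):
            counts[i] -= 1
            if counts[i] == 0:               # all of W reached: add Z
                queue.extend(fds[i][1])
    return closure

sigma1 = [("A", "C"), ("B", "C"), ("CD", "E")]
assert fd_closure_linear("AD", sigma1) == {"A", "C", "D", "E"}
```

Each attribute is dequeued once and each fd fires at most once, so the total work is proportional to the size of Σ and X.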
Axiomatization for fds
In addition to developing algorithms for determining logical implication, the second fundamental theme in dependency theory has been the development of inference rules, which can be used to generate symbolic proofs of logical implication. Although the inference rules do not typically yield the most efficient mechanisms for deciding logical implication, in many cases they capture concisely the essential properties of the dependencies under study. The study of inference rules is especially intriguing because (as will be seen in the next section) there are several classes of dependencies for which there is no finite set of inference rules that characterizes logical implication.

Inference rules and algorithms for testing implication provide alternative approaches to showing logical implication between dependencies. In general, the existence of a finite set of inference rules for a class of dependencies is a stronger property than the existence of an algorithm for testing implication. It will be shown in Chapter 9 that

• the existence of a finite set of inference rules for a class of dependencies implies the existence of an algorithm for testing logical implication; and
• there are dependencies for which there is no finite set of inference rules but for which there is an algorithm to test logical implication.
We now present the inference rules for fds.

FD1: (reflexivity) If Y ⊆ X, then X → Y.
FD2: (augmentation) If X → Y, then XZ → YZ.
FD3: (transitivity) If X → Y and Y → Z, then X → Z.

The variables X, Y, Z range over sets of attributes. The first rule is sometimes called an axiom because it is degenerate in the sense that no fds occur in the antecedent.

The inference rules are used to form proofs about logical implication between fds, in a manner analogous to the proofs found in mathematical logic. It will be shown that the resulting proof system is sound and complete for fds (two classical notions to be recalled soon). Before formally presenting the notion of proof, we give an example.

Example 8.2.10 The following is a proof of AD → E from the set Σ1 of fds of Example 8.2.4.

σ1: A → C      ∈ Σ1,
σ2: AD → CD    from σ1 using FD2,
σ3: CD → E     ∈ Σ1,
σ4: AD → E     from σ2 and σ3 using FD3.
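Such proofs can be checked mechanically. The sketch below (all names are illustrative) validates a claimed proof step by step against FD1–FD3; for FD2 the augmenting set Z is supplied explicitly in the justification so that no search is needed.

```python
def check_proof(sigma, steps):
    """Check a proof from the fd set `sigma` in the system {FD1, FD2, FD3}.
    Each step is (lhs, rhs, justification); lhs and rhs are frozensets."""
    proved = []
    for lhs, rhs, why in steps:
        if why[0] == "in":                    # membership in sigma
            ok = (lhs, rhs) in sigma
        elif why[0] == "FD1":                 # reflexivity: rhs subset of lhs
            ok = rhs <= lhs
        elif why[0] == "FD2":                 # augmentation of step i by set Z
            _, i, Z = why
            X, Y = proved[i]
            ok = lhs == X | Z and rhs == Y | Z
        elif why[0] == "FD3":                 # transitivity of steps i and j
            _, i, j = why
            (X, Y), (Y2, Z) = proved[i], proved[j]
            ok = Y == Y2 and lhs == X and rhs == Z
        else:
            ok = False
        if not ok:
            return False
        proved.append((lhs, rhs))
    return True

f = frozenset
sigma1 = {(f("A"), f("C")), (f("B"), f("C")), (f("CD"), f("E"))}
proof = [
    (f("A"),  f("C"),  ("in",)),              # sigma_1: in Sigma_1
    (f("AD"), f("CD"), ("FD2", 0, f("D"))),   # sigma_2: from sigma_1 by FD2
    (f("CD"), f("E"),  ("in",)),              # sigma_3: in Sigma_1
    (f("AD"), f("E"),  ("FD3", 1, 2)),        # sigma_4: from sigma_2, sigma_3
]
assert check_proof(sigma1, proof)
```

The checker validates exactly the proof of Example 8.2.10; an unjustified step such as claiming A → E is "in Σ1" is rejected.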
Let U be a set of attributes. A substitution for an inference rule ρ (relative to U) is a function that maps each variable appearing in ρ to a subset of U, such that each set inclusion indicated in the antecedent of ρ is satisfied by the associated sets. Now let Σ be a set of fds over U and σ an fd over U. A proof of σ from Σ using the set I = {FD1, FD2, FD3} is a sequence of fds σ1, . . . , σn = σ (n ≥ 1) such that for each i ∈ [1, n], either

(a) σi ∈ Σ, or
(b) there is a substitution for some rule ρ ∈ I such that σi corresponds to the consequent of ρ, and such that for each fd in the antecedent of ρ the corresponding fd is in the set {σj | 1 ≤ j < i}.

The fd σ is provable from Σ using I (relative to U), denoted Σ ⊢I σ or Σ ⊢ σ if I is understood from the context, if there is a proof of σ from Σ using I.

Let I be a set of inference rules. Then

• I is sound for logical implication of fds if Σ ⊢I σ implies Σ ⊨ σ;
• I is complete for logical implication of fds if Σ ⊨ σ implies Σ ⊢I σ.

We will generalize these definitions to other dependencies and other sets of inference rules.
In general, a finite sound and complete set of inference rules for a class C of dependencies is called a (finite) axiomatization of C. In such a case, C is said to be (finitely) axiomatizable.

We now state the following:

Theorem 8.2.11 The set {FD1, FD2, FD3} is sound and complete for logical implication of fds.
Proof Suppose that Σ is a set of fds over an attribute set U. The proof of soundness involves a straightforward induction on proofs σ1, . . . , σn from Σ, showing that Σ ⊨ σi for each i ∈ [1, n] (see Exercise 8.5).

For the proof of completeness, we show that Σ ⊨ X → Y implies Σ ⊢ X → Y. As a first step, we show that Σ ⊢ X → X* using an induction based on Algorithm 8.2.7. In particular, let closure_i be the value of closure after i iterations of step 3 for some fixed execution of that algorithm on input Σ and X. We set closure_0 = X. Suppose inductively that a proof σ1, . . . , σ_{k_i} of X → closure_i has been constructed. (The case for i = 0 follows from FD1.) Suppose further that W → Z is chosen for the (i + 1)st iteration. It follows that W ⊆ closure_i and closure_{i+1} = closure_i ∪ Z. Extend the proof by adding the following steps:

σ_{k_i+1} = W → Z                          in Σ
σ_{k_i+2} = closure_i → W                  by FD1
σ_{k_i+3} = closure_i → Z                  by FD3
σ_{k_i+4} = closure_i → closure_{i+1}      by FD2
σ_{k_i+5} = X → closure_{i+1}              by FD3

At the completion of this construction we have a proof σ1, . . . , σn of X → X*. By Lemma 8.2.6, Y ⊆ X*. Using FD1 and FD3, the proof can be extended to yield a proof of X → Y.
Other inference rules for fds are considered in Exercise 8.9.
Armstrong Relations
In the proof of Proposition 8.2.8, an instance I is created such that I ⊨ Σ but I ⊭ X → A. Intuitively, this instance witnesses the fact that Σ ⊭ X → A. This raises the following natural question: Given a set Σ of fds over U, is there a single instance I that satisfies Σ and that violates every fd not in Σ*? It turns out that for each set of fds, there is such an instance; these are called Armstrong relations.

Proposition 8.2.12 If Σ is a set of fds over U, then there is an instance I such that, for each fd σ over U, I ⊨ σ iff σ ∈ Σ*.
Crux Suppose first that Σ ⊭ ∅ → A for every A (i.e., ∅* = ∅). For each set X ⊆ U satisfying X = X*, choose an instance I_X = {s_X, t_X} such that s_X(A) = t_X(A) iff A ∈ X. In addition, choose these instances so that adom(I_X) ∩ adom(I_Y) = ∅ for X ≠ Y. Then

∪ {I_X | X ⊆ U and X = X*}

is an Armstrong relation for Σ.

If ∅* ≠ ∅, then the instances I_X should be modified so that π_A(I_X) = π_A(I_Y) for each X, Y and A ∈ ∅*.
In some applications, the domains of certain attributes may be finite (e.g., Sex conventionally has two values, and Grade typically consists of a finite set of values). In such cases, the construction of an Armstrong relation may not be possible. This is explored in Exercise 8.13.

Armstrong relations can be used in practice to assist the user in specifying the fds for a particular application. An interactive, iterative specification process starts with the user specifying a first set of fds. The system then generates an Armstrong relation for these fds, which violates all the fds not implied by the specification. This serves as a worst-case counterexample and may result in detecting additional fds whose satisfaction should be required.
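For small attribute sets, the construction in the crux of Proposition 8.2.12 is easy to carry out mechanically. The sketch below assumes, as in the first case of the crux, that no fd with an empty left-hand side is implied; it builds one pair of tuples per closed attribute set, on pairwise disjoint values, and the defining property of an Armstrong relation can then be verified directly. All names are illustrative.

```python
from itertools import combinations, count

def fd_closure(X, fds):
    """Attribute closure of X under fds given as pairs of attribute strings."""
    c = set(X)
    changed = True
    while changed:
        changed = False
        for W, Z in fds:
            if set(W) <= c and not set(Z) <= c:
                c |= set(Z)
                changed = True
    return c

def satisfies(I, X, A):
    """Does instance I (a list of dicts) satisfy X -> A?"""
    return all(s[A] == t[A] for s in I for t in I
               if all(s[a] == t[a] for a in X))

def armstrong(U, fds):
    """One pair of tuples per closed set X = X*, on disjoint fresh values."""
    fresh = count()
    I = []
    for r in range(len(U) + 1):
        for X in combinations(sorted(U), r):
            if set(X) == fd_closure(X, fds):
                shared = {a: next(fresh) for a in U}
                s = dict(shared)      # s and t agree exactly on X
                t = {a: (shared[a] if a in X else next(fresh)) for a in U}
                I += [s, t]
    return I

U = {"A", "B", "C"}
fds = [("A", "B")]
I = armstrong(U, fds)
# I satisfies exactly the implied fds: X -> A holds iff A is in X*.
for r in range(len(U) + 1):
    for X in combinations(sorted(U), r):
        for A in U:
            assert satisfies(I, X, A) == (A in fd_closure(X, fds))
```

With fds = [("A", "B")] the closed sets are ∅, B, C, AB, BC, and ABC, so the generated instance has 12 tuples; it satisfies A → B and violates, for example, B → A and AB → C.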
8.3 Join and Multivalued Dependencies
The second kind of simple dependency studied in this chapter is join dependency (jd), which is intimately related to the join operator of the relational algebra. As mentioned in Section 8.1, a basic motivation for join dependency stems from its usefulness in connection with relation decomposition. This section also discusses multivalued dependency (mvd), an important special case of join dependency that was historically the first to be introduced.

The central results and tools for studying jds are different from those for fds. It has been shown that there is no sound and complete set of inference rules for jds analogous to those for fds. (An axiomatization for a much larger family of dependencies will be presented in Chapter 10.) In addition, as shown in the following section, logical implication for jds is decidable. The complexity of implication is polynomial for a fixed database schema but becomes np-hard if the schema is considered part of the input. (An exact characterization of the complexity remains open.)

The following section also presents an interesting correspondence between mvds and acyclic join dependencies (i.e., those based on joins that are acyclic in the sense introduced in Chapter 6).

A major focus of the current section is on mvds; this is because of several positive results that hold for them, including axiomatizability of fds and mvds considered together.
Join Dependency and Decomposition
Before defining join dependency, we recall the definition of natural join. For attribute set U, sets X1, . . . , Xn ⊆ U, and instances I_j over X_j for j ∈ [1, n], the (natural) join of the I_j's is

⋈_{j=1}^{n} {I_j} = {s over X1 ∪ · · · ∪ Xn | π_{X_j}(s) ∈ I_j for each j ∈ [1, n]}.

A join dependency is satisfied by an instance I if it is equal to the join of some of its projections.
Definition 8.3.1 A join dependency (jd) over attribute set U is an expression of the form ⋈[X1, . . . , Xn], where X1, . . . , Xn ⊆ U and ∪_{i=1}^{n} X_i = U. A relation I over U satisfies ⋈[X1, . . . , Xn] if I = ⋈_{j=1}^{n} {π_{X_j}(I)}.

A jd σ is n-ary if the number of attribute sets involved in σ is n. As discussed earlier, the relation Showings of Fig. 8.1 satisfies the 2-ary jd

⋈[{Theater, Screen, Title}, {Theater, Snacks}].

The 2-ary jds are also called multivalued dependencies (mvds). These are often denoted in a style reminiscent of fds.
Definition 8.3.2 If U is a set of attributes, then a multivalued dependency (mvd) over U is an expression of the form X →→ Y, where X, Y ⊆ U. A relation I over U satisfies X →→ Y if I ⊨ ⋈[XY, X(U − Y)].

In the preceding definition, it would be equivalent to write ⋈[XY, (U − Y)]; we choose the foregoing form to emphasize the importance of X. For instance, the jd

⋈[{Theater, Screen, Title}, {Theater, Snack}]

can be written as an mvd using

Theater →→ Screen, Title, or equivalently, Theater →→ Snack.

Exercise 8.16 explores the original definition of satisfaction of an mvd.
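Definition 8.3.2 reduces an mvd to a 2-ary jd, which yields a direct (if naive) satisfaction test: project onto XY and X(U − Y) and compare the join with the instance. The sketch below checks Theater →→ Screen, Title on a small Showings-style instance; the data is made up for illustration, not taken from Fig. 8.1.

```python
from itertools import product

def satisfies_mvd(I, U, X, Y):
    """I |= X ->-> Y iff I satisfies the jd [XY, X(U - Y)] (Definition 8.3.2).
    I is a list of dicts over the attributes U."""
    c1 = sorted(set(X) | set(Y))             # component XY
    c2 = sorted(set(X) | (set(U) - set(Y)))  # component X(U - Y)
    proj = lambda attrs: {tuple(t[a] for a in attrs) for t in I}
    joined = set()
    for r1, r2 in product(proj(c1), proj(c2)):
        w = dict(zip(c1, r1))
        # the two pieces must agree on their shared attributes (X)
        if all(w.get(a, v) == v for a, v in zip(c2, r2)):
            w.update(zip(c2, r2))
            joined.add(tuple(w[a] for a in U))
    return joined == {tuple(t[a] for a in U) for t in I}

U = ["Theater", "Screen", "Title", "Snack"]
rows = [("Rex", 1, "Alien", "coffee"), ("Rex", 1, "Alien", "wine"),
        ("Rex", 2, "Brazil", "coffee"), ("Rex", 2, "Brazil", "wine")]
I = [dict(zip(U, r)) for r in rows]

# Snacks are independent of screens/titles within a theater:
assert satisfies_mvd(I, U, ["Theater"], ["Screen", "Title"])
# Dropping one tuple breaks this independence, hence the mvd:
assert not satisfies_mvd(I[:-1], U, ["Theater"], ["Screen", "Title"])
```

The test is quadratic in the sizes of the projections; it is meant to illuminate the definition, not to be efficient.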
Figure 8.2 shows a relation schema SDT and an instance that satises a 3-ary jd. This
relation focuses on snacks, distributors, and theaters. We assume for this example that a
tuple (s, d, p, t ) is in SDT if the conjunction of the following predicates is true:
P
1
(s, d, p): Snack s is supplied by distributor d at price p.
P
2
(d, t ): Theater t is a customer of distributor d.
P
3
(s, t ): Snack s is bought by theater t .
Under these assumptions, each instance of SDT must satisfy the jd:
[{Snack, Distributor, Price}, {Distributor, Theater}, {Snack, Theater}].
For example, this holds for the instance in Fig. 8.2. Note that if tuple coffee, Smart, 2.35,
Cinoche were removed, then the instance would no longer satisfy the jd because coffee,
Smart, 2.35, coffee, Cinoche, and Smart, Cinoche would remain in the appropriate
projections. We also expect the instances of SDT to satisfy Snack, Distributor Price.
SDT   Snack     Distributor   Price   Theater
      coffee    Smart         2.35    Rex
      coffee    Smart         2.35    Le Champo
      coffee    Smart         2.35    Cinoche
      coffee    Leclerc       2.60    Cinoche
      wine      Smart         0.80    Rex
      wine      Smart         0.80    Cinoche
      popcorn   Leclerc       5.60    Cinoche

Figure 8.2: Illustration of join dependency

It can be argued that schema SDT with the aforementioned constraint is unnatural in the following sense. Intuitively, if we choose such a schema, the presence of a tuple ⟨s, d, p, t⟩ seems to indicate that t buys s from d. If we wish to record just the information about who buys what, who sells what, and who sells to whom, a more appropriate schema would consist of three relations SD[Snack, Distributor, Price], ST[Snack, Theater], and DT[Distributor, Theater] corresponding to the three sets of attributes involved in the preceding jd. The jd then guarantees that no information is lost in the decomposition because the original relation can be reconstructed by joining the projections.
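The 3-ary jd above, including the observation about removing ⟨coffee, Smart, 2.35, Cinoche⟩, can be verified mechanically. A naive sketch (a direct reading of Definition 8.3.1, not an efficient algorithm):

```python
from itertools import product

def satisfies_jd(I, U, components):
    """I |= [X1, ..., Xn]: I equals the join of its projections on the Xi."""
    projs = [sorted(set(X)) for X in components]
    blocks = [{tuple(t[a] for a in X) for t in I} for X in projs]
    joined = set()
    for rows in product(*blocks):
        w, ok = {}, True
        for attrs, row in zip(projs, rows):
            for a, v in zip(attrs, row):
                if w.setdefault(a, v) != v:   # disagreement on a shared attribute
                    ok = False
                    break
            if not ok:
                break
        if ok:
            joined.add(tuple(w[a] for a in U))
    return joined == {tuple(t[a] for a in U) for t in I}

# The instance of Fig. 8.2.
U = ["Snack", "Distributor", "Price", "Theater"]
rows = [("coffee", "Smart", 2.35, "Rex"), ("coffee", "Smart", 2.35, "Le Champo"),
        ("coffee", "Smart", 2.35, "Cinoche"), ("coffee", "Leclerc", 2.60, "Cinoche"),
        ("wine", "Smart", 0.80, "Rex"), ("wine", "Smart", 0.80, "Cinoche"),
        ("popcorn", "Leclerc", 5.60, "Cinoche")]
I = [dict(zip(U, r)) for r in rows]
jd = [["Snack", "Distributor", "Price"], ["Distributor", "Theater"],
      ["Snack", "Theater"]]

assert satisfies_jd(I, U, jd)
# Removing <coffee, Smart, 2.35, Cinoche> breaks the jd, as noted in the text:
I2 = [t for t in I if not (t["Snack"] == "coffee" and
                           t["Distributor"] == "Smart" and
                           t["Theater"] == "Cinoche")]
assert not satisfies_jd(I2, U, jd)
```

The second assertion fails to hold for the jd precisely because the join of the three projections regenerates the removed tuple.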
Join Dependencies and Functional Dependencies
The interaction of fds and jds is important in the area of schema design and user interfaces to the relational model. Although this is explored in more depth in Chapter 11, we present here one of the first results on the interaction of the two kinds of dependencies.

Proposition 8.3.3 Let U be a set of attributes, {X, Y, Z} be a partition of U, and Σ be a set of fds over U. Then Σ ⊨ ⋈[XY, XZ] iff either Σ ⊨ X → Y or Σ ⊨ X → Z.

Crux Sufficiency follows immediately from Proposition 8.2.2. For necessity, suppose that Σ does not imply either of the fds. Then Y − X* ≠ ∅ and Z − X* ≠ ∅; say C ∈ Y − X* and C′ ∈ Z − X*. Consider the two-element instance I = {u, v} where u(A) = v(A) = 0 if A is in X*, and u(A) = 0, v(A) = 1 otherwise. Clearly, I satisfies Σ, and one can verify that πXY(I) ⋈ πXZ(I) contains a tuple w with w(C) = 0 and w(C′) = 1. Thus w is not in I, so I violates ⋈[XY, XZ].
Axiomatizations
As will be seen later (Theorem 8.4.12), there is a decision procedure for jds in isolation, and for jds and fds considered together. Here we consider axiomatizations, first for jds in isolation and then for fds and mvds taken together.

We state first the following result without proof.

Theorem 8.3.4 There is no axiomatization for the family of jds.
In contrast, there is an axiomatization for the class of fds and multivalued dependencies. Note first that implication for fds is independent of the underlying set of attributes (i.e., if Σ ∪ {σ} is a set of fds over U and U ⊆ V, then Σ ⊨ σ relative to U iff Σ ⊨ σ relative to V; see Exercise 8.6). An important difference between fds and mvds is that this is not the case for mvds. Thus the inference rules for mvds must be used in connection with a fixed underlying set of attributes, and a variable (denoted U) referring to this set is used in one of the rules.

The following lists the four rules for mvds alone and an additional pair of rules needed when fds are incorporated.

MVD0: (complementation) If X →→ Y, then X →→ (U − Y).
MVD1: (reflexivity) If Y ⊆ X, then X →→ Y.
MVD2: (augmentation) If X →→ Y, then XZ →→ YZ.
MVD3: (transitivity) If X →→ Y and Y →→ Z, then X →→ (Z − Y).
FMVD1: (conversion) If X → Y, then X →→ Y.
FMVD2: (interaction) If X →→ Y and XY → Z, then X → (Z − Y).
Theorem 8.3.5 The set {FD1, FD2, FD3, MVD0, MVD1, MVD2, MVD3, FMVD1, FMVD2} is sound and complete for logical implication of fds and mvds considered together.

Crux Soundness is easily verified. For completeness, let an underlying set U of attributes be fixed, and assume that Σ ⊬ σ, where σ = X → Y or σ = X →→ Y.

The dependency set of X is dep(X) = {Y ⊆ U | Σ ⊢ X →→ Y}. One first shows that

1. dep(X) is a Boolean algebra of sets for U.

That is, it contains U and is closed under intersection, union, and difference (see Exercise 8.17). In addition,

2. for each A ∈ X⁺, {A} ∈ dep(X),

where X⁺ denotes {A ∈ U | Σ ⊢ X → A}.

A dependency basis of X is a family {W1, . . . , Wm} ⊆ dep(X) such that (1) ∪_{i=1}^{m} W_i = U; (2) W_i ≠ ∅ for i ∈ [1, m]; (3) W_i ∩ W_j = ∅ for i, j ∈ [1, m] with i ≠ j; and (4) if W ∈ dep(X), W ≠ ∅, and W ⊆ W_i for some i ∈ [1, m], then W = W_i. One then proves that

3. there exists a unique dependency basis of X.

Now construct an instance I over U that contains all tuples t satisfying the following conditions:

(a) t(A) = 0 for each A ∈ X⁺.
(b) If W_i is in the dependency basis and W_i ≠ {A} for each A ∈ X⁺, then t(B) = 0 for all B ∈ W_i or t(B) = 1 for all B ∈ W_i.

It can be shown that I ⊨ Σ but I ⊭ σ (see Exercise 8.17).
This easily implies the following (see Exercise 8.18):
Corollary 8.3.6 The set {MVD0, MVD1, MVD2, MVD3} is sound and complete for
logical implication of mvds considered alone.
8.4 The Chase
This section presents the chase, a remarkable tool for reasoning about dependencies that highlights a strong connection between dependencies and tableau queries. The discussion here is cast in terms of fds and jds, but as will be seen in Chapter 10, the chase generalizes naturally to a broader class of dependencies. At the end of this section, we explore important applications of the chase technique. We show how it can also be used to determine logical implication between sets of dependencies and to optimize conjunctive queries.

The following example illustrates an intriguing connection between dependencies and tableau queries.

Example 8.4.1 Consider the tableau query (T, t) shown in Fig. 8.3(a). Suppose the query is applied only to instances I satisfying some set Σ of fds and jds. The chase is based on the following simple idea. If ν is a valuation embedding T into an instance I satisfying Σ, ν(T) must satisfy Σ. Valuations that do not satisfy Σ are therefore of no use. The chase is a procedure that eliminates the useless valuations by changing (T, t) itself so that T, viewed as an instance, satisfies Σ. We will show that the tableau query resulting from the chase is then equivalent to the original on instances satisfying Σ. As we shall see, this can be used to optimize queries and test implication of dependencies.
Let us return to the example. Suppose first that Σ = {B → D}. Suppose (T, t) is applied to an instance I satisfying Σ. In each valuation embedding T into I, it must be the case that z and z′ are mapped to the same constant. Thus in this context one might as well replace T by the tableau where z = z′. This transformation is called applying the fd B → D to (T, t). It is easy to see that the resulting tableau query is in fact equivalent to the identity, because T then contains an entire row of distinguished variables.

Consider next an example involving both fds and jds. Let Σ consist of the following two dependencies over ABCD: the jd ⋈[AB, BCD] and the fd A → C. In this example we argue that for each I satisfying these dependencies, (T, t)(I) = I or, in other words, in the context of input instances that satisfy the dependencies, the query (T, t) is equivalent to the identity query ({t}, t).
Let I be an instance over ABCD satisfying the two dependencies. We first explain why (T, t)(I) = (T′, t)(I) for the tableau query (T′, t) of Fig. 8.3(b). It is clear that (T′, t)(I) ⊆ (T, t)(I), because T′ is a superset of T. For the opposite inclusion, suppose that ν is a valuation for T with ν(T) ⊆ I. Then, in particular, both ν(⟨w, x, y, z′⟩) and ν(⟨w′, x, y′, z⟩) are in I. Because I ⊨ ⋈[AB, BCD], it follows that ν(⟨w, x, y′, z⟩) ∈ I. Thus ν(T′) ⊆ I and ν(t) ∈ (T′, t)(I). The transformation from (T, t) to (T′, t) is termed applying the jd ⋈[AB, BCD], because T′ is the result of adding to T a member of π_AB(T) ⋈ π_BCD(T).

      A    B    C    D
T     w    x    y    z′
      w′   x    y′   z
t     w    x    y    z
(a) The tableau query (T, t)

      A    B    C    D
T′    w    x    y    z′
      w′   x    y′   z
      w    x    y′   z
t     w    x    y    z
(b) (One) result of applying the jd ⋈[AB, BCD]

      A    B    C    D
T″    w    x    y    z′
      w′   x    y    z
      w    x    y    z
t     w    x    y    z
(c) Result of applying the fd A → C

Figure 8.3: Illustration of the chase

We shall see that, by repeated applications of a jd, one can eventually force the tableau to satisfy the jd.
The tableau T″ of Fig. 8.3(c) is the result of chasing (T′, t) with the fd A → C (i.e., replacing all occurrences of y′ by y). We now argue that (T′, t)(I) = (T″, t)(I). First, by Theorem 6.2.3, (T″, t)(I) ⊆ (T′, t)(I), because there is a homomorphism from (T′, t) to (T″, t). For the opposite inclusion, suppose now that ν(T′) ⊆ I. This implies that ν embeds the first tuple of T″ into I. In addition, because ν(⟨w, x, y, z′⟩) and ν(⟨w, x, y′, z⟩) are in I and I ⊨ A → C, it follows that ν(y) = ν(y′). Thus ν(⟨w′, x, y, z⟩) = ν(⟨w′, x, y′, z⟩) ∈ I and ν(⟨w, x, y, z⟩) = ν(⟨w, x, y′, z⟩) ∈ I [i.e., ν embeds the second and third tuples of T″ into I, so that ν(T″) ⊆ I]. Note that (T″, t) is the result of identifying a pair of variables that caused a violation of A → C in T′. We will see that by repeated applications of an fd, one can eventually force a tableau to satisfy the fd. Note that in this case, chasing with respect to A → C has no effect before chasing with respect to ⋈[AB, BCD].

Finally, note that by the Homomorphism Theorem 6.2.3 of Chapter 6, (T″, t) ≡ ({t}, t). It follows, then, that for all instances I that satisfy {A → C, ⋈[AB, BCD]}, (T, t) and ({t}, t) yield the same answer.
Defining the Chase
As seen in Example 8.4.1, the chase relates to equivalence of queries over a family of instances satisfying certain dependencies. For a family F of instances over R, we say that q1 is contained in q2 relative to F, denoted q1 ⊆_F q2, if q1(I) ⊆ q2(I) for each instance I in F. We are particularly interested in families F that are defined by a set of dependencies (in the current context, fds and jds). Let Σ be a set of (functional and join) dependencies over R. The satisfaction family of Σ, denoted sat(R, Σ) or simply sat(Σ) if R is understood from the context, is the family

sat(Σ) = {I over R | I ⊨ Σ}.
Query q1 is contained in q2 relative to Σ, denoted q1 ⊆_Σ q2, if q1 ⊆_{sat(Σ)} q2. Equivalence relative to a family of instances (≡_F) and to a set of dependencies (≡_Σ) are defined similarly.

The chase is a general technique that can be used, given a set of dependencies Σ, to transform a tableau query q into a query q′ such that q ≡_Σ q′. The chase is defined as a nondeterministic procedure based on the successive application of individual dependencies from Σ, but as will be seen this process is Church-Rosser in the sense that the procedure necessarily terminates with a unique end result. As a final step in this development, the chase will be used to characterize equivalence of conjunctive queries with respect to a set of dependencies (≡_Σ).

In the following, we let R be a fixed relation schema, and we focus on sets Σ of fds and jds over R and tableau queries with no constants over R. The entire development can be generalized to database schemas and conjunctive queries with constants (Exercise 8.27) and to a considerably larger class of dependencies (Chapter 10).
For technical convenience, we assume that there is a total order ≤ on the set var. Let R be a fixed relation schema and suppose that (T, t) is a tableau query over R. The chase is based on the successive application of the following two rules:

fd rule: Let σ = X → A be an fd over R, and let u, v ∈ T be such that πX(u) = πX(v) and u(A) ≠ v(A). Let x be the lesser variable in {u(A), v(A)} under the ordering ≤, and let y be the other one (i.e., {x, y} = {u(A), v(A)} and x < y). The result of applying the fd σ to u, v in (T, t) is the tableau query (θ(T), θ(t)), where θ is the substitution that maps y to x and is the identity elsewhere.

jd rule: Let σ = ⋈[X1, . . . , Xn] be a jd over R, let u be a free tuple over R not in T, and suppose that u1, . . . , un ∈ T satisfy π_{Xi}(u_i) = π_{Xi}(u) for i ∈ [1, n]. Then the result of applying the jd σ to (u1, . . . , un) in (T, t) is the tableau query (T ∪ {u}, t).
Following the lead of Example 8.4.1, the following is easily verified (see Exercise 8.24a).

Proposition 8.4.2 Suppose that Σ is a set of fds and jds over R, σ ∈ Σ, and q is a tableau query over R. If q′ is the result of applying σ to some tuples in q, then q′ ≡_Σ q.
A chasing sequence of (T, t) by Σ is a (possibly infinite) sequence

(T, t) = (T0, t0), . . . , (Ti, ti), . . .

such that for each i ≥ 0, (T_{i+1}, t_{i+1}) (if defined) is the result of applying some dependency in Σ to (T_i, t_i). The sequence is terminal if it is finite and no dependency in Σ can be applied to it. The last element of a terminal sequence is called its result. The notion of satisfaction of a dependency is extended naturally to tableaux. The following is an important property of terminal chasing sequences (Exercise 8.24b).

Lemma 8.4.3 Let (T′, t′) be the result of a terminal chasing sequence of (T, t) by Σ. Then T′, considered as an instance, satisfies Σ.
Because the chasing rules do not introduce new variables, it turns out that the chase procedure always terminates. The following is easily verified (Exercise 8.24c):

Lemma 8.4.4 Let (T, t) be a tableau query over R and Σ a set of fds and jds over R. Then each chasing sequence of (T, t) by Σ is finite and is the initial subsequence of a terminal chasing sequence.
An important question now is whether the results of different terminal chasing sequences are the same. This turns out to be the case. This property of chasing sequences is called the Church-Rosser property. We provide the proof of the Church-Rosser property for the chase at the end of this section (Theorem 8.4.18).

Because the Church-Rosser property holds, we can define without ambiguity the result of chasing a tableau query by a set of fds and jds.

Definition 8.4.5 If (T, t) is a tableau query over R and Σ a set of fds and jds over R, then the chase of (T, t) by Σ, denoted chase(T, t, Σ), is the result of some (any) terminal chasing sequence of (T, t) by Σ.

From the previous discussion, chase(T, t, Σ) can be computed as follows. The dependencies are picked in some arbitrary order and arbitrarily applied to the tableau. Applying an fd to a tableau query q can be performed within time polynomial in the size of q. However, determining whether a jd can be applied to q is np-complete in the size of q. Thus the best-known algorithm for computing the chase is exponential (see Exercise 8.25). However, the complexity is polynomial if the schema is considered fixed.
Until now, besides the informal discussion in Section 8.1, the chase remains a purely syntactic technique. We next state a result that shows that the chase is in fact determined by the semantics of the dependencies in Σ and not just their syntax.

In the following proposition, recall that by definition, Σ ≡ Σ′ if Σ ⊨ Σ′ and Σ′ ⊨ Σ. The proof, which we omit, uses the Church-Rosser property of the chase (see also Exercise 8.26).

Proposition 8.4.6 Let Σ and Σ′ be sets of fds and jds over R, and let (T, t) be a tableau query over R. If Σ ≡ Σ′, then chase(T, t, Σ) and chase(T, t, Σ′) coincide.
We next consider several important uses of the chase that illustrate the power of this
technique.
Query Equivalence
We consider rst the problemof checking the equivalence of tableau queries in the presence
of a set of fds and jds. This allows, for example, checking whether a tableau query can
be replaced by a simpler tableau query when the dependencies are satised. Suppose now
that (T

, t

) and (T

, t

) are two tableau queries and a set of fds and jds such that
(T

, t

(T

, t

). From the preceding development (Proposition 8.4.2), it follows that


8.4 The Chase 177
chase(T

, t

, )

(T

, t

(T

, t

chase(T

, t

, ).
We now show that, in fact, chase(T

, t

, ) chase(T

, t

, ). Furthermore, this condi-


tion is sufcient as well as necessary.
To demonstrate this result, we first establish the following more general fact.

Theorem 8.4.7 Let F be a family of instances over relation schema R that is closed under isomorphism, and let (T₁, t₁), (T₂, t₂), (T′₁, t′₁), and (T′₂, t′₂) be tableau queries over R. Suppose further that for i = 1, 2,

(a) (T′ᵢ, t′ᵢ) ≡_F (Tᵢ, tᵢ) and
(b) T′ᵢ, considered as an instance, is in F.³

Then (T₁, t₁) ≡_F (T₂, t₂) iff (T′₁, t′₁) ≡ (T′₂, t′₂).
Proof The if direction is immediate. For the only-if direction, suppose that (T₁, t₁) ≡_F (T₂, t₂). It suffices by the Homomorphism Theorem 6.2.3 to exhibit a homomorphism that embeds (T′₂, t′₂) into (T′₁, t′₁). Because T′₁, considered as an instance, is in F,

t′₁ ∈ (T′₁, t′₁)(T′₁) = (T₁, t₁)(T′₁) = (T₂, t₂)(T′₁) = (T′₂, t′₂)(T′₁).

It follows that there is a homomorphism h such that h(T′₂) ⊆ T′₁ and h(t′₂) = t′₁. Thus (T′₁, t′₁) ⊑ (T′₂, t′₂). This completes the proof.
Together with Lemma 8.4.3, this implies the following:

Theorem 8.4.8 Let (T₁, t₁) and (T₂, t₂) be tableau queries over R and Σ a set of fds and jds over R. Then

1. (T₁, t₁) ⊑_Σ (T₂, t₂) iff chase(T₁, t₁, Σ) ⊑ chase(T₂, t₂, Σ).
2. (T₁, t₁) ≡_Σ (T₂, t₂) iff chase(T₁, t₁, Σ) ≡ chase(T₂, t₂, Σ).
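Part 2 of the theorem suggests a concrete test: chase both queries, then compare the results using the Homomorphism Theorem. The sketch below is an illustration under assumed representations (a query is a pair of a set of rows and a summary tuple, with variables encoded as ints); the brute-force homomorphism search is exponential, consistent with the np-completeness results mentioned in this chapter.

```python
from itertools import product

def homomorphism_exists(src, dst):
    """Is there an h with h(src rows) contained in dst rows and h(src summary)
    equal to dst summary? By the Homomorphism Theorem, this shows dst is
    contained in src as a query."""
    (rows_s, t_s), (rows_d, t_d) = src, dst
    rows_s, rows_d = list(rows_s), list(rows_d)
    for choice in product(rows_d, repeat=len(rows_s)):
        h = {}                      # candidate variable mapping
        ok = True
        for r, img in zip(rows_s, choice):
            for x, y in zip(r, img):
                if h.setdefault(x, y) != y:
                    ok = False      # conflicting images for variable x
                    break
            if not ok:
                break
        if ok and all(h.setdefault(x, y) == y for x, y in zip(t_s, t_d)):
            return True
    return False

def equivalent(q1, q2):
    # equivalence = mutual embeddability of the (chased) tableau queries
    return homomorphism_exists(q1, q2) and homomorphism_exists(q2, q1)
```

For example, ({⟨x, y⟩}, ⟨x, y⟩) and ({⟨x, y⟩, ⟨x, z⟩}, ⟨x, y⟩) are equivalent, since the extra row maps onto the first.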
Query Optimization

As suggested in Example 8.4.1, the chase can be used to optimize tableau queries in the presence of dependencies such as fds and jds. Given a tableau query (T, t) and a set Σ of fds and jds, chase(T, t, Σ) is equivalent to (T, t) on all instances satisfying Σ. A priori, it is not clear that the new tableau is an improvement over the first. It turns out that the chase using fds can never yield a more complicated tableau and, as shown in Example 8.4.1, can yield a much simpler one. On the other hand, the chase using jds may yield a more complicated tableau, although it may also produce a simpler one.

We start by looking at the effect on tableau minimization of the chase using fds. In the following, we denote by min(T, t) the tableau resulting from the minimization of the tableau (T, t) using the Homomorphism Theorem 6.2.3 for tableau queries, and by |min(T, t)| we mean the cardinality of the tableau of min(T, t).

³ More precisely, T′ᵢ considered as an instance is in F means that some instance isomorphic to T′ᵢ is in F.
Lemma 8.4.9 Let (T, t) be a tableau query and Σ a set of fds. Then |min(chase(T, t, Σ))| ≤ |min(T, t)|.

Crux By the Church-Rosser property of the chase, the order of the dependencies used in a chase sequence is irrelevant. Clearly it is sufficient to show that for each tableau query (T′, t′) and σ ∈ Σ, |min(chase(T′, t′, σ))| ≤ |min(T′, t′)|. We can assume without loss of generality that σ is of the form X → A, where A is a single attribute.

Let (T″, t″) = chase(T′, t′, {X → A}), and let λ be the chase homomorphism of a chasing sequence for chase(T′, t′, {X → A}), i.e., the homomorphism obtained by composing the substitutions used in that chasing sequence (see the proof of Theorem 8.4.18). We will use here the Church-Rosser property of the chase (Theorem 8.4.18) as well as a related property stating that the homomorphism λ, like the result, is also the same for all chase sequences (this follows from the proof of Theorem 8.4.18).

By Theorem 6.2.6, there is some S ⊆ T′ such that (S, t′) is a minimal tableau query equivalent to (T′, t′); we shall use this as the representative of min(T′, t′). Let h be a homomorphism such that h(T′, t′) = (S, t′). Consider the mapping f on (T″, t″) defined by f(λ(x)) = λ(h(x)), where x is a variable in (T′, t′). If we show that f is well defined, we are done. [If f is well defined, then f is a homomorphism from (T″, t″) to λ(S, t′) = (λ(S), t″), and so (T″, t″) ⊑ λ(S, t′). On the other hand, λ(S) ⊆ λ(T′) = T″, and so λ(S, t′) ⊑ (T″, t″). Thus, (T″, t″) ≡ λ(S, t′) = λ(min(T′, t′)), and so |min(T″, t″)| = |min(λ(min(T′, t′)))| ≤ |λ(min(T′, t′))| ≤ |min(T′, t′)|.]

To see that f is well defined, suppose λ(x) = λ(y). We have to show that λ(h(x)) = λ(h(y)). Consider a terminal chasing sequence of (T′, t′) using X → A, and (u₁, v₁), . . . , (uₙ, vₙ) as the sequence of pairs of tuples used in the sequence, yielding the chase homomorphism λ. Consider the sequence (h(u₁), h(v₁)), . . . , (h(uₙ), h(vₙ)). Clearly if X → A can be applied to (u, v), then it can be applied to (h(u), h(v)), unless h(u(A)) = h(v(A)). Let (h(u_{i_1}), h(v_{i_1})), . . . , (h(u_{i_k}), h(v_{i_k})) be the subsequence of these pairs for which X → A can be applied. It can be easily verified that there is a chasing sequence of (h(T′), t′) using X → A that uses the pairs (h(u_{i_1}), h(v_{i_1})), . . . , (h(u_{i_k}), h(v_{i_k})), with chase homomorphism λ′. Note that for all x′, y′, if λ(x′) = λ(y′) then λ′(h(x′)) = λ′(h(y′)). In particular, λ′(h(x)) = λ′(h(y)). Because h(T′) ⊆ T′, λ′ is the chase homomorphism of a chasing sequence σ₁, . . . , σₖ of (T′, t′). Let λ″ be the chase homomorphism formed from a terminal chasing sequence that extends σ₁, . . . , σₖ. Then λ″(h(x)) = λ″(h(y)). Finally, by the uniqueness of the chase homomorphism, λ″ = λ, and so λ(h(x)) = λ(h(y)) as desired. This concludes the proof.
It turns out that jds behave differently than fds with respect to minimization of tableaux. The following shows that the chase using jds may yield simpler but also more complicated tableaux.

[Figure 8.4: Minimization and the chase using jds. Panel (a) shows the tableau query (T, t); panel (b) the tableau query (T′, t′); panel (c) the tableau query chase(T′, t′, {⋈[AB, CD]}). Each tableau has columns A, B, C, D, with summary tuple ⟨w, x, y, z⟩.]

Example 8.4.10 Consider the tableau query (T, t) shown in Fig. 8.4(a) and the jd σ = ⋈[AB, BCD]. Clearly (T, t) is minimal, so |min(T, t)| = 2. Next consider chase(T, t, σ). It is easy to check that ⟨w, x, y, z⟩ ∈ chase(T, t, σ), so chase(T, t, σ) is equivalent to the identity and |min(chase(T, t, σ))| = 1.

Next let (T′, t′) be the tableau query in Fig. 8.4(b) and σ′ = ⋈[AB, CD]. Again (T′, t′) is minimal. Now chase(T′, t′, σ′) is represented in Fig. 8.4(c) and is minimal. Thus |min(chase(T′, t′, σ′))| = 4 > |min(T′, t′)|.
Despite the limitations illustrated by the preceding example, the chase in conjunction with tableau minimization provides a powerful optimization technique that yields good results in many cases. This is illustrated by the following example and by Exercise 8.28.

Example 8.4.11 Consider the SPJ expression

q = π_AB(π_BCD(R) ⋈ π_ACD(R)) ⋈ π_AD(R),

where R is a relation with attributes ABCD. Suppose we wish to optimize the query on databases satisfying the dependencies

Σ = {B → D, D → C, ⋈[AB, ACD]}.

The tableau (T, t) corresponding to q is represented in Fig. 8.5(a). Note that (T, t) is minimal. Next we chase (T, t) using the dependencies in Σ. The chase using the fds in Σ does not change (T, t), which already satisfies them. The chase using the jd ⋈[AB, ACD] yields the tableau (T′, t′) in Fig. 8.5(b). Now the fds can be applied to (T′, t′), yielding the tableau (T″, t″) in Fig. 8.5(c). Finally, (T″, t″) is minimized to (T‴, t‴) in Fig. 8.5(d). Note that (T‴, t‴) satisfies Σ, so the chase can no longer be applied. The SPJ expression corresponding to (T‴, t‴) is π_ABD(π_BCD(R) ⋈ π_ACD(R)). Thus, the optimization of q resulted in saving one join operation. Note that the new query is not simply a subexpression of the original. In general, the shape of queries can be changed radically by the foregoing procedure.

[Figure 8.5: Optimization of SPJ expressions by tableau minimization and the chase. Panel (a) shows the tableau query (T, t) corresponding to q; panel (b) the tableau query (T′, t′) = chase(T, t, {⋈[AB, ACD]}); panel (c) the tableau query (T″, t″) = chase(T′, t′, {B → D, D → C}); panel (d) the tableau query (T‴, t‴) = min(T″, t″). Each tableau has columns A, B, C, D.]
The Chase and Logical Implication

We consider a natural correspondence between dependency satisfaction and conjunctive query containment. This correspondence uses tableaux to represent dependencies. We will see that the chase provides an alternative point of view to dependency implication.

First consider a jd σ = ⋈[X₁, . . . , Xₙ]. It is immediate to see that an instance I satisfies σ iff q_σ(I) ⊆ q_id(I), where

q_σ = π_{X₁} ⋈ · · · ⋈ π_{Xₙ}

and q_id is the identity query. Both q_σ and q_id are PSJR expressions. We can look at alternative formalisms for expressing q_σ and q_id. For instance, the tableau query of σ is (T_σ, t), where for some t₁, . . . , tₙ,

• t is a free tuple over R with a distinct variable for each coordinate,
• T_σ = {t₁, . . . , tₙ},
• π_{Xᵢ}(tᵢ) = π_{Xᵢ}(t) for i ∈ [1, n], and
• the other coordinates of the tᵢ's hold distinct variables.

It is again easy to see that q_σ ≡ (T_σ, t), so I ⊨ σ iff (T_σ, t)(I) ⊆ ({t}, t)(I).
For fds, the situation is only slightly more complicated. Consider an fd σ′ = X → A over U. It is easy to see that I ⊨ σ′ iff (T_{σ′}, t_{σ′})(I) ⊆ (T_{σ′}, t′_{σ′})(I), where T_{σ′} is the two-row tableau

            X    A    (U − AX)
T_{σ′}:     u    x    v₁
            u    x′   v₂

and the summary tuples are t_{σ′} = ⟨x, x′⟩ and t′_{σ′} = ⟨x, x⟩, where u, v₁, v₂ are vectors of distinct variables and x, x′ are distinct variables occurring in none of these vectors. The tableau query of σ′ is (T_{σ′}, t_{σ′}).

Again observe that (T_{σ′}, t_{σ′}) and (T_{σ′}, t′_{σ′}) can be expressed as PSJR expressions, so fd satisfaction also reduces to containment of PSJR expressions. It will thus be natural to look more generally at all dependencies expressed as containment of PSJR expressions. In Chapter 10, we will consider the general class of algebraic dependencies based on containment of these expressions.
Returning to the chase, we next use the tableau representation of dependencies to obtain a characterization of logical implication (Exercise 8.29). This result is generalized by Corollary 10.2.3.

Theorem 8.4.12 Let Σ ∪ {σ} be a set of fds and jds over relation schema R, let (T_σ, t_σ) be the tableau query of σ, and let T be the tableau in chase(T_σ, t_σ, Σ). Then Σ ⊨ σ iff

(a) σ = X → A and |π_A(T)| = 1, that is, the projection over A of T is a singleton; or
(b) σ = ⋈[X₁, . . . , Xₙ] and t_σ ∈ T.

This implies that determining logical implication for jds alone, and for fds and jds taken together, is decidable. On the other hand, tableau techniques are also used to obtain the following complexity results for logical implication of jds (see Exercise 8.30).

Theorem 8.4.13

(a) Testing whether a jd and an fd imply a jd is np-complete.
(b) Testing whether a set of mvds implies a jd is np-hard.
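For fds alone, Theorem 8.4.12(a) yields a simple decision procedure: chase the two-row tableau of X → A with Σ and check whether the A column collapses to a single variable. The following is a hedged sketch under assumed representations (attributes are column positions 0..width−1, an fd is a pair of a frozenset of left-hand-side positions and a right-hand-side position; the function name is illustrative).

```python
# Decide Sigma |= X -> A via the chase (fds only), per Theorem 8.4.12(a).

def implies_fd(sigma, lhs, a, width):
    # the tableau of X -> A: two rows agreeing exactly on the X columns
    row1 = list(range(width))
    row2 = [x if k in lhs else x + width for k, x in enumerate(row1)]
    rows = [row1, row2]
    changed = True
    while changed:                     # fd-rule chase, as in the text
        changed = False
        for fd_lhs, fd_a in sigma:
            for u in rows:
                for v in rows:
                    if u is not v and all(u[k] == v[k] for k in fd_lhs) \
                            and u[fd_a] != v[fd_a]:
                        old, new = max(u[fd_a], v[fd_a]), min(u[fd_a], v[fd_a])
                        for r in rows:
                            for k in range(len(r)):
                                if r[k] == old:
                                    r[k] = new
                        changed = True
    return len({r[a] for r in rows}) == 1
```

For U = ABC and Σ = {A → B, B → C}, the chase identifies the B and then the C variables of the two rows, so the test reports Σ ⊨ A → C; it rejects C → A, whose tableau the chase leaves untouched.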
Acyclic Join Dependencies

In Section 6.4, a special family of joins called acyclic was introduced and was shown to enjoy a number of desirable properties. We show now a connection between those results, join dependencies, and multivalued dependencies.

A jd ⋈[X₁, . . . , Xₙ] is acyclic if the hypergraph corresponding to [X₁, . . . , Xₙ] is acyclic (as defined in Section 6.4).

Using the chase, we show here that a jd is acyclic iff it is equivalent to a set of mvds. The discussion relies on the notation and techniques developed in the discussion of acyclic joins in Section 6.4.

We shall use the following lemma.

Lemma 8.4.14 Let σ = ⋈X be a jd over U, where X = {X₁, . . . , Xₙ}, and let X, Y ⊆ U be disjoint sets. Then the following are equivalent:

(i) σ ⊨ X →→ Y;
(ii) there is no Xᵢ ∈ X such that Xᵢ ∩ Y ≠ ∅ and Xᵢ ∩ (U − XY) ≠ ∅;
(iii) Y is a union of connected components of the hypergraph X|_{U−X}.

Proof Let Z = U − XY. Let γ denote the mvd X →→ Y, and let (T_γ, t_γ) be the tableau query corresponding to γ. Then T_γ = {t_Y, t_Z}, where t_Y[XY] = t_γ[XY] and t_Z[XZ] = t_γ[XZ], and distinct variables are used elsewhere in t_Y and t_Z.
We show now that (i) implies (ii). By Theorem 8.4.12, t_γ ∈ T = chase(T_γ, t_γ, σ). Let Xᵢ ∈ X. Suppose that t is a new tuple created by an application of σ during the computation of T. Then t[Xᵢ] agrees with t′[Xᵢ] for some already existing tuple t′. An induction implies that t_γ[Xᵢ] = t_Y[Xᵢ] or t_γ[Xᵢ] = t_Z[Xᵢ]. Because t_Y and t_Z agree only on X, this implies that Xᵢ cannot intersect with both Y and Z.

That (ii) implies (iii) is immediate. To see that (iii) implies (i), consider an application of the jd ⋈X on T_γ, where Xᵢ ∈ X is associated with t_Y if Xᵢ ⊆ X ∪ Y, and Xᵢ is associated with t_Z otherwise. This builds the tuple t_γ, and so by Theorem 8.4.12, σ ⊨ X →→ Y.
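Condition (ii) of the lemma is directly checkable. The tiny sketch below is an illustration under assumed encodings (attributes as single characters, a hyperedge as a frozenset of characters): it tests whether a jd ⋈[X₁, . . . , Xₙ] over U implies the mvd X →→ Y for disjoint X and Y.

```python
# Lemma 8.4.14(ii): the jd implies X ->-> Y iff no component Xi meets
# both Y and Z = U - X - Y.

def jd_implies_mvd(components, universe, x, y):
    z = universe - x - y
    return not any(c & y and c & z for c in components)
```

For σ = ⋈[AB, BCD] over ABCD: σ ⊨ B →→ A (no component meets both {A} and {C, D}), but σ ⊭ A →→ B, since BCD meets both {B} and {C, D}.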
We now have the following:

Theorem 8.4.15 A jd σ is acyclic iff there is a set of mvds that is equivalent to σ.

Proof (only if) Suppose that σ = ⋈X over U is acyclic. By Theorem 6.4.5, this implies that the output of the GYO algorithm on X is empty. Let X₁, . . . , Xₙ be an enumeration of X in the order of an execution of the GYO algorithm. In particular, Xᵢ is an ear of the hypergraph formed by {Xᵢ₊₁, . . . , Xₙ}.

For each i ∈ [1, n − 1], let Pᵢ = ∪_{j∈[1,i]} Xⱼ and Qᵢ = ∪_{j∈[i+1,n]} Xⱼ. Let Γ = {[Pᵢ ∩ Qᵢ] →→ Qᵢ | i ∈ [1, n − 1]}. By Lemma 8.4.14 and the choice of sequence X₁, . . . , Xₙ, σ ⊨ Γ. To show that Γ ⊨ σ, we construct a chasing sequence of (T_σ, t_σ) using Γ that yields t_σ. This chase shall inductively produce a sequence t′₁, . . . , t′ₙ of tuples, such that t′ᵢ[Pᵢ] = t_σ[Pᵢ] for i ∈ [1, n].

We begin by setting t′₁ to be the tuple of T_σ that corresponds to X₁. Then t′₁[P₁] = t_σ[P₁] because P₁ = X₁. More generally, given t′ᵢ with i ≥ 1, the mvd [Pᵢ ∩ Qᵢ] →→ Qᵢ on t′ᵢ and the tuple corresponding to Xᵢ₊₁ can be used to construct tuple t′ᵢ₊₁ with the desired property. The final tuple t′ₙ constructed by this process is t_σ, and so Γ ⊨ σ as desired.
(if) Suppose that σ = ⋈X over U is equivalent to the set Γ of mvds but that σ is not acyclic. From the definition of acyclic, this implies that there is some W ⊆ U such that Y = X|_W has no articulation set. Without loss of generality we assume that Y is connected. Let Y = {Y₁, . . . , Yₘ}. Suppose that s₁, . . . are the tuples produced by some chasing sequence of (T_σ, t_σ) using Γ. We argue by induction that for each k ≥ 1, s_k[W] ∈ π_W(T_σ). Suppose otherwise, and let s_k be the first where this does not hold. Suppose that s_k is the result of applying an mvd X →→ Y in Γ. Without loss of generality we assume that X ∩ Y = ∅. Let Z = U − XY. Because s_k results from X →→ Y, there are two tuples s′ and s″, either in T_σ or already produced, such that s_k[XY] = s′[XY] and s_k[XZ] = s″[XZ]. Because s_k is chosen to be least, there are tuples tᵢ and tⱼ in T_σ, which correspond to Xᵢ and Xⱼ, respectively, such that s′[W] = tᵢ[W] and s″[W] = tⱼ[W].

Because tᵢ and tⱼ correspond to Xᵢ and Xⱼ, for each attribute A ∈ U we have tᵢ[A] = tⱼ[A] iff A ∈ Xᵢ ∩ Xⱼ. Thus X ∩ W ⊆ Xᵢ ∩ Xⱼ.

Because s_k[W] ≠ tᵢ[W], W ∩ XZ ≠ ∅, and because s_k[W] ≠ tⱼ[W], W ∩ XY ≠ ∅. Now, by Lemma 8.4.14, because X →→ Y is implied by σ, there is no Xₖ ∈ X such that Xₖ ∩ Y ≠ ∅ and Xₖ ∩ Z ≠ ∅. It follows that Y|_{W−X} is disconnected. Finally, let Y = Xᵢ ∩ W and Y′ = Xⱼ ∩ W. Because X ∩ W ⊆ Xᵢ ∩ Xⱼ, it follows that Y ∩ Y′ is an articulation set for Y, a contradiction.
We conclude with a complexity result about acyclic jds. The first part follows from the proof of the preceding theorem and the fact that the GYO algorithm runs in polynomial time. The second part, stated without proof, is an interesting converse of the first part.

Proposition 8.4.16

(a) There is a ptime algorithm that, given an acyclic jd σ, produces a set of mvds equivalent to σ.
(b) There is a ptime algorithm that, given a set Γ of mvds, finds a jd equivalent to Γ or determines that there is none.
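The construction behind the only-if direction of Theorem 8.4.15 (and hence part (a) of the proposition) can be sketched as follows, under assumed representations: hyperedges are frozensets of attribute characters; a GYO-style ear-removal order is computed, and from it the mvd set Γ = {(Pᵢ ∩ Qᵢ) →→ Qᵢ}. The ear test used here is a simplification for illustration, not the book's exact algorithm.

```python
def gyo_ear_order(edges):
    """Return an ear-removal order if the hypergraph is acyclic, else None."""
    remaining = list(edges)
    order = []
    while remaining:
        for e in remaining:
            others = [f for f in remaining if f is not e]
            # vertices of e shared with the rest must fit inside one other edge
            shared = e & frozenset().union(*others) if others else frozenset()
            if not others or any(shared <= f for f in others):
                order.append(e)
                remaining.remove(e)
                break
        else:
            return None            # no ear exists: the hypergraph is cyclic
    return order

def equivalent_mvds(edges):
    """Per Theorem 8.4.15's proof: Gamma = {(Pi ∩ Qi) ->-> Qi}."""
    order = gyo_ear_order(edges)
    if order is None:
        return None                # cyclic jd: no equivalent mvd set exists
    gamma = []
    for i in range(1, len(order)):
        p = frozenset().union(*order[:i])   # P_i = X_1 ∪ ... ∪ X_i
        q = frozenset().union(*order[i:])   # Q_i = X_{i+1} ∪ ... ∪ X_n
        gamma.append((p & q, q))
    return gamma
```

For the acyclic jd ⋈[AB, BCD], this produces the single mvd B →→ BCD (equivalently B →→ A), while the cyclic ⋈[AB, BC, CA] is rejected.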
The Chase Is Church-Rosser

To conclude this section, we provide the proof that the results of all terminal chasing sequences of a tableau query q by a set Σ of fds and jds are identical. To this end, we first introduce tools to describe correspondences between the free tuples occurring in the different elements of chasing sequences.
Let (T, t) = (T₀, t₀), . . . , (Tₙ, tₙ) be a chasing sequence of (T, t) by Σ. Then for each i ∈ [1, n], the chase homomorphism for step i, denoted λᵢ, is an assignment with domain var(Tᵢ₋₁) defined as follows:

(a) If (Tᵢ₊₁, tᵢ₊₁) is the result of applying the fd rule to (Tᵢ, tᵢ), which replaces all occurrences of variable y by variable x, then λᵢ₊₁ is defined so that λᵢ₊₁(y) = x and λᵢ₊₁ is the identity on var(Tᵢ) − {y}.

(b) If (Tᵢ₊₁, tᵢ₊₁) is the result of applying the jd rule to (Tᵢ, tᵢ), then λᵢ₊₁ is the identity on var(Tᵢ).

The chase homomorphism of this chasing sequence is λ = λ₁ ∘ · · · ∘ λₙ. If w ∈ (T ∪ {t}), then the tuple corresponding to w in (Tᵢ, tᵢ) is wᵢ = λ₁ ∘ · · · ∘ λᵢ(w). It may arise that uᵢ = vᵢ for distinct tuples u, v in T. Observe that λ₁ ∘ · · · ∘ λᵢ(T) ⊆ Tᵢ and that, because of the jd rule, the inclusion may be strict.
We now have the following:

Lemma 8.4.17 Suppose that I ⊨ Σ, ν is a substitution over var(T), ν(T) ⊆ I, and (T₀, t₀), . . . , (Tₙ, tₙ) is a chasing sequence of (T, t) by Σ. Then

ν(wᵢ) = ν(w) for each i ∈ [1, n] and each w ∈ (T ∪ {t}),

and ν(Tᵢ) ⊆ I for each i ∈ [1, n].

Crux Use an induction on the chasing sequence (Exercise 8.24d).

Observe that this also holds if I is a tableau over R that satisfies Σ. This is used in the following result.
Theorem 8.4.18 Let (T, t) be a tableau query over R and Σ a set of fds and jds over R. Then the results of all terminal chasing sequences of (T, t) by Σ are identical.

Proof Let (T′, t′) and (T″, t″) be the results of two terminal chasing sequences on (T, t) using Σ, and let λ′, λ″ be the chase homomorphisms of these chasing sequences. For each tuple w ∈ T, let w′ denote the tuple of T′ that corresponds to w, and similarly for w″, T″.

By construction, λ′(T) ⊆ T′ and λ′(t) = t′. Because T″ ⊨ Σ and λ″(T) ⊆ T″, we have λ″(T′) ⊆ T″ by Lemma 8.4.17 (applied with I = T″ and ν = λ″), considering the chasing sequence leading to T′. The same argument shows that λ″(w′) = w″ for each w in T and λ″(t′) = t″. By symmetry, λ′(T″) ⊆ T′, λ′(w″) = w′ for each w in T, and λ′(t″) = t′.
We next prove that

(*) λ″ is an isomorphism from (T′, t′) to (T″, t″).

Let w′ be in T′ for some w in T. Then λ′(λ″(w′)) = λ′(w″) = w′. Observe that each variable x in var(T′) occurs in w′ for some w in T. Thus λ′ ∘ λ″ is the identity over var(T′). We therefore have λ′(λ″(T′)) = T′. By symmetry, λ″ ∘ λ′ is the identity over var(T″) and λ″(λ′(T″)) = T″.

Thus |T′| = |T″|. Because λ″(T′) ⊆ T″, it follows that λ″(T′) = T″ and λ″ is an isomorphism from (T′, t′) to (T″, t″), so (*) holds.
To conclude, we prove that

(**) λ″ is the identity over var(T′).

We first show that for each pair x, y of variables occurring in T,

(†) λ′(x) = λ′(y) iff λ″(x) = λ″(y).

Suppose that λ′(x) = λ′(y). Then for some tuples u, v ∈ T and attributes A, B, we have u(A) = x, v(B) = y, and u′(A) = λ′(x) = λ′(y) = v′(B). Next, λ″(x) = u″(A) and λ″(y) = v″(B). Because λ″ is an isomorphism from (T′, t′) to (T″, t″) and λ″(u′) = u″, λ″(v′) = v″, it follows that u″(A) = v″(B). Hence λ″(x) = u″(A) = v″(B) = λ″(y) as desired. The if direction follows by symmetry.

Now let x ∈ var(T′). To prove (**) and the theorem, it now suffices to show that λ″(x) = x. Let

A′ = {y ∈ var(T) | λ′(y) = λ′(x)},
A″ = {y ∈ var(T) | λ″(y) = λ″(x)}.

First, (†) implies that A′ = A″. Furthermore, an induction on the chasing sequence for (T′, t′) shows that for each z ∈ A′, λ′(z) is the least (under the ordering ≤ on var) element of A′, and similarly for (T″, t″). Thus λ′ and λ″ map all elements of A′ = A″ to the same variable z. Because x ∈ var(T′), it follows that z = x so, in particular, λ″(x) = λ′(x) = x.
Bibliographic Notes

On a general note, we first mention that comprehensive presentations of dependency theory can be found in [Var87, FV86]. A more dense presentation is provided in [Kan91]. Dependency theory is also the topic of the book [Tha91].
Research on general integrity constraints considered from the perspective of first-order logic is presented in [GM78]. Other early work in this framework includes [Nic78], which observes that fds and mvds have a natural representation in logic, and [Nic82], which considers incremental maintenance of integrity constraints under updates to the underlying state.
Functional dependencies were introduced by Codd [Cod72b]. The axiomatization is
due to [Arm74]. The problem of implication is studied in [BB79, Mai80]. Several alterna-
tive formulations of fd implication, including formulation in terms of the propositional cal-
culus perspective (see Exercise 8.22), are mentioned in [Kan91]; they are due to [SDPF81,
CK85, CKS86].
Armstrong relations were introduced and studied in [Fag82b, Fag82a, BDFS84]. Interesting practical applications of Armstrong relations are proposed in [SM81, MR85]. The idea is that, given a set Σ of fds, the system presents an Armstrong relation for Σ with natural column entries to a user, who can then determine whether Σ includes all of the desired restrictions.
The structure of families of instances specified by a set of fds is studied in [GZ82, Hul84].
Multivalued dependencies were discovered independently in [Zan76, Fag77b, Del78].
They were generalized in [Ris77, Nic78, ABU79]. The axiomatization of fds and mvds
is from [BFH77]. A probabilistic view of mvds in terms of conditional independence is
presented in [PV88, Pea88]. This provides an alternative motivation for the study of such
dependencies.
The issue of whether there is an axiomatization for jds has a lengthy history. As
will be detailed in Chapter 10, the family of full typed dependencies subsumes the family
of jds, and an axiomatization for these was presented in [BV84a, YP82]; see also [SU82].
More focused axiomatizations, which start with jds and end with jds but use slightly more
general dependencies at intermediate stages, are presented in [Sci82] and [BV85]; see also
[BV81b]. Reference [BV85] also develops an axiomatization for join dependencies based
on Gentzen-style proofs (see, e.g., [Kle67]); proofs in this framework maintain a form of
scratch paper in addition to a sequence of inferred sentences. Finally, [Pet89] settled the issue by establishing that there is no axiomatization (in the sense defined in Section 8.2) for the family of jds.
As noted in Chapter 6, acyclic joins received wide interest in the late 1970s and early 1980s. Theorem 8.4.15 was demonstrated in [FMU82]. Proposition 8.4.16 is from [GT83].
An ancestor to the chase can be found in [ABU79]. The notion of chase was articulated
in [MMS79]. Related results can be found in [MSY81, Var83]. The relationship between
the chase and both tableau queries and logical implication was originally presented in
[MMS79] and builds on ideas originally introduced in [ASU79b, ASU79a]. The chase
technique is extended to more general dependencies in [BV84c]; see also Chapter 10.
The connection between the chase and the more general theorem-proving technique of
resolution with paramodulation (see [CL73]) is observed and analyzed in [BV80b]. The
chase technique is applied to datalog programs in [RSUV89, RSUV93].
Exercises
Exercise 8.1 Describe the set of fds, mvds, and jds that are tautologies (i.e., dependencies that are satisfied by all instances) for a relation schema R.

Exercise 8.2 Let Σ₁ be as in Example 8.2.4. Prove that Σ₁ ⊨ AD → E and Σ₁ ⊨ CDE → C.
Exercise 8.3 Let U be a set of attributes, and let Σ, Γ be sets of dependencies over U. Show that

(a) Σ ⊆ Σ*.
(b) (Σ*)* = Σ*.
(c) If Σ ⊆ Γ, then Σ* ⊆ Γ*.

State and prove analogous results for fd closures of attribute sets.
Exercise 8.4 Prove Lemma 8.2.6.
Exercise 8.5 Let U be a set of attributes and Σ a set of fds over U. Prove the soundness of FD1, FD2, FD3, and show that if X → Y and X → Z, then X → YZ.
Exercise 8.6 Let Σ be a set of fds over U.

(a) Suppose that X ⊆ U and U ⊆ V. Show that (X, Σ)*,U = (X, Σ)*,V. Hint: Use the proof of Proposition 8.2.8.
(b) Suppose that XY ⊆ U and U ⊆ V. Show that Σ ⊨_U X → Y iff Σ ⊨_V X → Y.
Exercise 8.7 [BB79] Describe how to improve the efficiency of Algorithm 8.2.7 to linear time. Hint: For each unused fd W → Z in Σ, record the number of attributes of W not yet in the closure. To do this efficiently, maintain a list for each attribute A of those unused fds of Σ for which A occurs in the left-hand side.
Exercise 8.8 Give a proof of AB → F from Σ = {AB → C, A → D, CD → EF} using {FD1, FD2, FD3}.
Exercise 8.9 Prove or disprove the soundness of the following rules:

FD4: (pseudo-transitivity) If X → Y and YW → Z, then XW → Z.
FD5: (union) If X → Y and X → Z, then X → YZ.
FD6: (decomposition) If X → YZ, then X → Y.
MVD4: (pseudo-transitivity) If X →→ Y and YW →→ Z, then XW →→ Z − Y.
MVD5: (union) If X →→ Y and X →→ Z, then X →→ YZ.
MVD6: (decomposition) If X →→ Y and X →→ Z, then X →→ Y ∩ Z, X →→ Y − Z, and X →→ Z − Y.
bad-FD1: If XW → Y and XY → Z, then X → (Z − W).
bad-MVD1: If X →→ Y and Y →→ Z, then X →→ Z.
bad-FMVD1: If X →→ Y and XY → Z, then X → Z.

(The use of the hint is optional.)
Exercise 8.10 Continuing with Exercise 8.9,

(a) [BFH77] Find a two-element subset of {FD1, FD2, FD3, FD4, FD5, FD6} that is sound and complete for inferring logical implication of fds.
(b) Prove that there is exactly one two-element subset of {FD1, FD2, FD3, FD4, FD5, FD6} that is sound and complete for inferring logical implication of fds.
Exercise 8.11 [Arm74] Let U be a fixed set of attributes. An attribute set X ⊆ U is saturated with respect to a set Σ of fds over U if X = (X, Σ)*. The family of saturated sets of Σ with respect to U is satset(Σ) = {X ⊆ U | X is saturated with respect to Σ}.

(a) Show that satset = satset(Σ) satisfies the following properties:
S1: U ∈ satset.
S2: If Y ∈ satset and Z ∈ satset, then Y ∩ Z ∈ satset.

(b) Suppose that satset is a family of subsets of U satisfying properties (S1) and (S2). Prove that satset = satset(Σ) for some set Σ of fds over U. Hint: Use Σ = {Y → Z | for each X ∈ satset, if Y ⊆ X then Z ⊆ X}.
Exercise 8.12 Let Σ and Γ be sets of fds over U. Using the notation of Exercise 8.11,

(a) Show that satset(Σ ∪ Γ) = satset(Σ) ∩ satset(Γ).
(b) Show that satset(Σ* ∩ Γ*) = satset(Σ) ∧ satset(Γ), where for families F, G, the wedge of F and G is F ∧ G = {X ∩ Y | X ∈ F and Y ∈ G}.
(c) For V ⊆ U, define Σ_V = {X → Y | X → Y ∈ Σ and XY ⊆ V}. For V ⊆ U, characterize satset((Σ*)_V) (where this family is defined with respect to V).
Exercise 8.13

(a) Exhibit a set Σ₁ of fds over {A, B} such that each Armstrong relation for Σ₁ has at least three distinct values occurring in the A column. Exhibit a set Σ₂ of fds over {A, B, C} such that each Armstrong relation for Σ₂ has at least four distinct values occurring in the A column.

(b) [GH83, BDFS84] Let Σ be a set of fds over U. Recall the notion of saturated set from Exercise 8.11. For an instance I over U, the agreement set of I is agset(I) = {X ⊆ U | ∃s, t ∈ I such that s(A) = t(A) iff A ∈ X}. For a family F of subsets of U, the intersection closure of F is intclo(F) = {∩ᵢ₌₁ⁿ Xᵢ | n ≥ 0 and each Xᵢ ∈ F} (where the empty intersection is defined to be U). Prove that I is an Armstrong relation for Σ iff intclo(agset(I)) = satset(Σ).
Exercise 8.14 [Mai80] Let Σ be a set of fds over U, X → Y ∈ Σ, and let A be an attribute. A is extraneous in X → Y with respect to Σ if either

(a) (Σ − {X → Y}) ∪ {X → (Y − A)} ⊨ X → Y; or
(b) (Σ − {X → Y}) ∪ {(X − A) → Y} ⊨ X → Y.

Develop an O(n²) algorithm that takes as input a set Σ of fds and produces as output a set Σ′ ≡ Σ, where Σ′ has no extraneous attributes.
Exercise 8.15 Show that there is no set Σ of jds and fd X → A such that Σ ⊨ X → A. Hint: Show that for any instance I there exists an instance I′ such that I ⊆ I′ and I′ ⊨ Σ. Then choose I violating X → A.
Exercise 8.16 [Fag77b, Zan76] This exercise refers to the original definition of mvds. Let U be a set of attributes and X, Y ⊆ U. Given an instance I over U and a tuple x ∈ π_X(I), the image of x on Y in I is the set image_Y(x, I) = π_Y(σ_{X=x}(I)) of tuples over Y. Prove that I ⊨ X →→ Y iff

for each x ∈ π_X(I) and each z ∈ image_Z(x, I), image_Y(x, I) = image_Y(xz, I),

where Z = U − XY and xz denotes the tuple w over XZ such that π_X(w) = x and π_Z(w) = z.
Exercise 8.17 [BFH77] Complete the proof of Theorem 8.3.5. Hint: Of course, the inference rules can be used when reasoning about I. The following claims are also useful:

Claim 1: If A ∈ X⁺, then I ⊨ ∅ → A.
Claim 2: If A, B ∈ Wᵢ for some i ∈ [1, n], then I ⊨ A → B.
Claim 3: For each i ∈ [1, n], I ⊨ ∅ →→ Wᵢ.
Exercise 8.18 Prove Corollary 8.3.6.
Exercise 8.19 [Kan91] Consider the following set of inference rules:

MVD7: X →→ U − X.
MVD8: If Y ∩ Z = ∅, X →→ Y, and Z →→ W, then X →→ W − Y.
FMVD3: If Y ∩ Z = ∅, X →→ Y, and Z → W, then X → Y ∩ W.

Prove that {MVD7, MVD2, MVD8} are sound and complete for inferring implication for mvds, and that {FD1, FD2, FD3, MVD7, MVD2, MVD8, FMVD1, FMVD3} are sound and complete for inferring implication for fds and mvds considered together.
Exercise 8.20 [Bee80] Let Σ be a set of fds and mvds, and let m(Σ) = {X →→ Y | X →→ Y ∈ Σ} ∪ {X →→ A | A ∈ Y for some X → Y ∈ Σ}. Prove that

(a) Σ ⊨ X → Y implies m(Σ) ⊨ X →→ Y; and
(b) Σ ⊨ X →→ Y iff m(Σ) ⊨ X →→ Y.

Hint: For (b) do an induction on proofs using the inference rules.
Exercise 8.21 For sets Σ and Γ of dependencies over U, Σ implies Γ for two-element instances, denoted Σ ⊨₂ Γ, if for each instance I over U with |I| ≤ 2, I ⊨ Σ implies I ⊨ Γ.

(a) [SDPF81] Prove that if Σ ∪ {σ} is a set of fds and mvds, then Σ ⊨₂ σ iff Σ ⊨ σ.
(b) Prove that the equivalence of part (a) does not hold if jds are included.
(c) Exhibit a jd σ such that there is no set Γ of mvds with Γ ≡ σ.
Exercise 8.22 [SDPF81] This exercise develops a close connection between fds and mvds, on the one hand, and a fragment of propositional logic, on the other. Let U be a fixed set of attributes. We view each attribute A ∈ U as a propositional variable. For the purposes of this exercise, a truth assignment is a mapping ξ: U → {T, F} (where T denotes true and F denotes false). Truth assignments are extended to mappings on subsets X of U by ξ(X) = ∧_{A∈X} ξ(A). A truth assignment ξ satisfies an fd X → Y, denoted ξ ⊨ X → Y, if ξ(X) = T implies ξ(Y) = T. It satisfies an mvd X →→ Y, denoted ξ ⊨ X →→ Y, if ξ(X) = T implies that either ξ(Y) = T or ξ(U − Y) = T. Given a set Σ ∪ {σ} of fds and mvds, Σ implies σ in the propositional calculus, denoted Σ ⊨_prop σ, if for each truth assignment ξ, ξ ⊨ Σ implies ξ ⊨ σ. Prove that for all sets Σ ∪ {σ} of fds and mvds, Σ ⊨ σ iff Σ ⊨_prop σ.
Exercise 8.23 [Bis80] Exhibit a set of inference rules for mvds that are sound and complete in the context in which an underlying set of attributes is not fixed.
Exercise 8.24
(a) Prove Proposition 8.4.2.
(b) Prove Lemma 8.4.3.
(c) Prove Lemma 8.4.4. What is the maximum size attainable by the tableau in the result
of a terminal chasing sequence?
(d) Prove Lemma 8.4.17.
Exercise 8.25

(a) Describe a polynomial time algorithm for computing the chase of a tableau query by Σ, assuming that Σ contains only fds.
(b) Show that the problem of deciding whether a jd can be applied to a tableau query is np-complete if the schema is considered variable, and polynomial if the schema is considered fixed. Hint: Use Exercise 6.16.
(c) Prove that it is np-hard, given a tableau query (T, t) and a set Σ of fds and jds, to compute chase(T, t, Σ) (this assumes that the schema is part of the input and thus not fixed).
(d) Describe an exponential time algorithm for computing the chase by a set of fds and jds. (Again the schema is not considered fixed.)
Exercise 8.26 Prove Proposition 8.4.6. Hint: Rather than modifying the proof of Theorem 8.4.18, prove as a lemma that if Σ ⊨ σ, then chase(T, t, Σ) = chase(T, t, Σ ∪ {σ}).
Exercise 8.27

(a) Verify that the results concerning the chase generalize immediately to the context in which database schemas as opposed to relation schemas are used.
(b) Describe how to generalize the chase to tableaux in which constants occur, and state and prove the results about the chase and tableau queries. Hint: If the chase procedure attempts to equate two distinct constants (a situation not occurring before), we obtain a particular new tableau, called T_false, which corresponds to the query producing an empty result on all input instances.
Exercise 8.28 For each of the following relation schemas R, SPJ expressions q over R, and sets Σ of dependencies over R, simplify q knowing that it is applied only to instances over R satisfying Σ. Use tableau minimization and the chase.

(a) sort(R) = ABC, q = π_AC(π_AB(σ_{A=2}(R) ⋈ π_BC(R)) ⋈ π_AB(σ_{B=8}(R) ⋈ π_BC(R))), Σ = {A → C, B → C}
(b) sort(R) = ABCD, q = π_BC(R) ⋈ π_ABD(R), Σ = {B →→ CD, B → D}
(c) sort(R) = ABCD, q = π_ABD(R) ⋈ π_AC(R), Σ = {A →→ B, B → C}.
Exercise 8.29 Prove Theorem 8.4.12.
Exercise 8.30 Prove Theorem 8.4.13(a) [BV80a] and Theorem 8.4.13(b) [FT83].
Exercise 8.31 [MMS79] Describe an algorithm based on the chase for
(a) computing the closure of an attribute set X under a set Σ of fds and jds (where the notion of closure is extended to include all fds implied by Σ); and
(b) computing the dependency basis (see Section 8.3) of a set X of attributes under a set Σ of fds and jds (where the notion of dependency basis is extended to include fds in the natural manner).
Exercise 8.32 [GH86] Suppose that the underlying domain dom has a total order ≤. Let U = {A_1, …, A_n} be a set of attributes. For each X ⊆ U, define the partial order ≤_X over the set of tuples of X by t ≤_X t′ iff t(A) ≤ t′(A) for each A ∈ X. A sort set dependency (SSD) over U is an expression of the form s(X), where X ⊆ U. An instance I over U satisfies s(X), denoted I |= s(X), if ≤_X is a total order on π_X(I).
(a) Show that the following set of inference rules is sound and complete for finite logical implication between SSDs:
SSD1: If A is an attribute, then s(A).
SSD2: If s(X) and Y ⊆ X, then s(Y).
SSD3: If s(X), s(Y), and s(X △ Y), then s(XY) [where X △ Y denotes (X − Y) ∪ (Y − X), i.e., the symmetric difference of X and Y].
(b) Exhibit a polynomial time algorithm for inferring logical implication between sets of SSDs.
(c) Describe how SSDs might be used in connection with indexes.
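For concreteness, the satisfaction condition of Exercise 8.32 can be checked directly: I |= s(X) iff every two tuples of π_X(I) are comparable under the coordinatewise order ≤_X (antisymmetry is automatic on a set of tuples). The following is a minimal sketch, assuming a hypothetical encoding of tuples as attribute-keyed dicts.

```python
def satisfies_ssd(I, X):
    # I |= s(X) iff <=_X is a total order on the projection pi_X(I),
    # i.e., every pair of projected tuples is comparable coordinatewise.
    proj = {tuple(t[a] for a in X) for t in I}
    return all(
        all(u[i] <= v[i] for i in range(len(X))) or
        all(v[i] <= u[i] for i in range(len(X)))
        for u in proj for v in proj
    )

I = [{"A": 1, "B": 1}, {"A": 2, "B": 3}]
assert satisfies_ssd(I, ["A", "B"])                     # (1,1) and (2,3) compare
assert not satisfies_ssd(I + [{"A": 0, "B": 5}], ["A", "B"])  # (0,5) vs (1,1)
```

The quadratic pairwise check is the naive route; sorting π_X(I) lexicographically and comparing neighbors would give an O(n log n) test.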
9 Inclusion Dependency
Vittorio: Fds and jds give some structure to relations.
Alice: But there are no connections between them.
Sergio: Making connections is the next step . . .
Riccardo: . . . with some unexpected consequences.
The story of inclusion dependencies starts in a manner similar to that for functional dependencies: Implication is decidable (although here it is PSPACE-complete), and there is a simple set of inference rules that is sound and complete. But the story becomes much more intriguing when functional and inclusion dependencies are taken together. First, the notion of logical implication will have to be refined because the behavior of these dependencies taken together is different depending on whether infinite instances are permitted. Second, both notions of logical implication are nonrecursive. And third, it can be proven in a formal sense that no finite axiomatization exists for either notion of logical implication of the dependencies taken together. At the end of this chapter, two restricted classes of inclusion dependencies are discussed. These are significant because they arise in modeling certain natural relationships, such as those encountered in semantic data models. Positive results have been obtained for inclusion dependencies from these restricted classes considered with fds and other dependencies.
Unlike fds or jds, a single inclusion dependency may refer to more than one relation. Also unlike fds and jds, inclusion dependencies are untyped in the sense that they may call for the comparison of values from columns (of the same or different relations) that are labeled by different attributes. A final important difference from fds and jds is that inclusion dependencies are embedded. Speaking intuitively, to satisfy an inclusion dependency the presence of one tuple in an instance may call for the presence of another tuple, of which only some coordinate values are determined by the dependency and the first tuple. These and other differences will be discussed further in Chapter 10.
9.1 Inclusion Dependency in Isolation
To accommodate the fact that inclusion dependencies permit the comparison of values from different columns of one or more relations, we introduce the following notation. Let R be a relation schema and X = A_1, …, A_n a sequence of attributes (possibly with repeats) from R. For an instance I of R, the projection of I onto the sequence X, denoted I[X], is the n-ary relation {⟨t(A_1), …, t(A_n)⟩ | t ∈ I}.
The syntax and semantics of inclusion dependencies are now given by the following:
Definition 9.1.1 Let R be a relational schema. An inclusion dependency (ind) over R is an expression of the form σ = R[A_1, …, A_m] ⊆ S[B_1, …, B_m], where
(a) R, S are (possibly identical) relation names in R;
(b) A_1, …, A_m is a sequence of distinct attributes of sort(R); and
(c) B_1, …, B_m is a sequence of distinct attributes of sort(S).
An instance I of R satisfies σ, denoted I |= σ, if
I(R)[A_1, …, A_m] ⊆ I(S)[B_1, …, B_m].
Satisfaction of a set of inds is defined in the natural manner.
To illustrate this definition, we recall an example from the previous chapter.
Example 9.1.2 There are two relations: Movies, with attributes Title, Director, Actor; and Showings, with Theater, Screen, Title, Snack. We have the ind
Showings[Title] ⊆ Movies[Title].
The generalization of inds to permit repeated attributes on the left- or right-hand side is considered in Exercise 9.4.
The notion of logical implication between sets of inds is defined in analogy with that for fds. (This will be refined later when fds and inds are considered together.)
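The satisfaction condition of Definition 9.1.1 is easy to check mechanically on small instances. The following sketch uses a hypothetical encoding of a database instance as a dict from relation names to lists of attribute-keyed dicts; the sample data are invented for illustration.

```python
def project(relation, attrs):
    # I[X]: projection onto a sequence of attributes, preserving their order
    return {tuple(t[a] for a in attrs) for t in relation}

def satisfies_ind(I, R, A, S, B):
    # I |= R[A_1, ..., A_m] ⊆ S[B_1, ..., B_m]
    return project(I[R], A) <= project(I[S], B)

I = {
    "Movies": [{"Title": "Brazil", "Director": "Gilliam", "Actor": "De Niro"}],
    "Showings": [{"Theater": "Odeon", "Screen": 1, "Title": "Brazil", "Snack": "corn"}],
}
# The ind of Example 9.1.2 holds on this instance.
assert satisfies_ind(I, "Showings", ["Title"], "Movies", ["Title"])
```

Note that the attribute sequences on the two sides may differ, reflecting that inds are untyped.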
Rules for Inferring ind Implication
The following set of inference rules will be shown sound and complete for inferring logical implication between sets of inds. The variables X, Y, and Z range over sequences of distinct attributes; and R, S, and T range over relation names.
IND1: (reflexivity) R[X] ⊆ R[X].
IND2: (projection and permutation) If R[A_1, …, A_m] ⊆ S[B_1, …, B_m], then R[A_{i_1}, …, A_{i_k}] ⊆ S[B_{i_1}, …, B_{i_k}] for each sequence i_1, …, i_k of distinct integers in {1, …, m}.
IND3: (transitivity) If R[X] ⊆ S[Y] and S[Y] ⊆ T[Z], then R[X] ⊆ T[Z].
The notions of proof and of provability (denoted ⊢) using these rules are defined in analogy with that for fds.
Theorem 9.1.3 The set {IND1, IND2, IND3} is sound and complete for logical implication of inds.
Proof Soundness of the rules is easily verified. For completeness, let Σ be a set of inds over database schema R = {R_1, …, R_n}, and let σ = R_a[A_1, …, A_m] ⊆ R_b[B_1, …, B_m]
be an ind over R such that Σ |= σ. We construct an instance I of R and use it to demonstrate that Σ ⊢ σ.
To begin, let s_σ be the tuple over R_a such that s_σ(A_i) = i for i ∈ [1, m] and s_σ(B) = 0 otherwise. Set I(R_a) = {s_σ} and I(R_j) = ∅ for j ≠ a. We now apply the following rule to I until it can no longer be applied.
(*) If R_i[C_1, …, C_k] ⊆ R_j[D_1, …, D_k] ∈ Σ and t ∈ I(R_i), then add u to I(R_j), where u(D_l) = t(C_l) for l ∈ [1, k] and u(D) = 0 for D ∉ {D_1, …, D_k}.
Application of this rule will surely terminate, because all tuples are constructed from a set of at most m + 1 values. Clearly the result of applying this rule until termination is unique, so let J be this result.
Remark 9.1.4 This construction is reminiscent of the chase for join dependencies. It differs because the inds may be embedded. Intuitively, an ind may not specify all the entries of the tuples we are adding. In the preceding rule (*), the same value (0) is always used for tuple entries that are otherwise unspecified.
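The construction of J is a fixpoint computation and can be sketched directly. The encoding below is hypothetical: a schema is a dict from relation names to attribute lists, and an ind R_i[C⃗] ⊆ R_j[D⃗] is a tuple (Ri, Cs, Rj, Ds).

```python
def build_J(schema, sigma, Ra, As):
    # s_sigma: A_i -> i for i in [1, m]; every other attribute of R_a -> 0
    s = {B: 0 for B in schema[Ra]}
    for i, A in enumerate(As, start=1):
        s[A] = i
    J = {R: [] for R in schema}
    J[Ra].append(s)
    changed = True
    while changed:                       # apply rule (*) to a fixpoint
        changed = False
        for (Ri, Cs, Rj, Ds) in sigma:
            for t in list(J[Ri]):
                u = {D: 0 for D in schema[Rj]}   # unspecified entries get 0
                for C, D in zip(Cs, Ds):
                    u[D] = t[C]
                if u not in J[Rj]:
                    J[Rj].append(u)
                    changed = True
    return J

schema = {"R": ["A", "B"], "S": ["C", "D"]}
J = build_J(schema, [("R", ["A"], "S", ["C"])], "R", ["A", "B"])
```

Termination is exactly the argument in the proof: every tuple takes values in {0, 1, …, m}, so only finitely many tuples can ever be added.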
It is easily seen that J |= Σ. Because Σ |= σ, we have J |= σ. To conclude the proof, we show the following:
(**) If for some R_j in R, u ∈ J(R_j), integer q, and distinct attributes C_1, …, C_q in sort(R_j), u(C_p) > 0 for p ∈ [1, q], then
Σ ⊢ R_a[A_{u(C_1)}, …, A_{u(C_q)}] ⊆ R_j[C_1, …, C_q].
Suppose that (**) holds. Let s′ be a tuple of J(R_b) such that s′[B_1, …, B_m] = s_σ[A_1, …, A_m]. (Such a tuple exists because J |= σ.) Use (**) with R_j = R_b, q = m, and C_1, …, C_q = B_1, …, B_m.
To demonstrate (**), we show inductively that it holds for all tuples of J by considering them in the order in which they were inserted. The claim holds for s_σ in J(R_a) by IND1. Suppose now that
• I′ is the instance obtained after k applications of the rule, for some k ≥ 0;
• the claim holds for all tuples in I′; and
• u is added to R_j by the next application of rule (*), due to the ind R_i[C_1, …, C_k] ⊆ R_j[D_1, …, D_k] and tuple t ∈ I′(R_i).
Now let {E_1, …, E_q} be a set of distinct attributes in sort(R_j) with u(E_p) > 0 for p ∈ [1, q]. By the construction of u in (*), {E_1, …, E_q} ⊆ {D_1, …, D_k}. Choose the mapping ρ such that D_{ρ(p)} = E_p for p ∈ [1, q]. Because R_i[C_1, …, C_k] ⊆ R_j[D_1, …, D_k] ∈ Σ, IND2 yields
R_i[C_{ρ(1)}, …, C_{ρ(q)}] ⊆ R_j[E_1, …, E_q].
By the inductive assumption,
Σ ⊢ R_a[A_{t(C_{ρ(1)})}, …, A_{t(C_{ρ(q)})}] ⊆ R_i[C_{ρ(1)}, …, C_{ρ(q)}].
Thus, by IND3,
Σ ⊢ R_a[A_{t(C_{ρ(1)})}, …, A_{t(C_{ρ(q)})}] ⊆ R_j[E_1, …, E_q].
Finally, observe that for each p, t(C_{ρ(p)}) = u(D_{ρ(p)}) = u(E_p), so
Σ ⊢ R_a[A_{u(E_1)}, …, A_{u(E_q)}] ⊆ R_j[E_1, …, E_q].
Deciding Logical Implication for inds
The proof of Theorem 9.1.3 yields a decision procedure for determining logical implication between inds. To see this, we use the following result:
Proposition 9.1.5 Let Σ be a set of inds over R, and let σ = R_a[A_1, …, A_m] ⊆ R_b[B_1, …, B_m]. Then Σ |= σ iff there is a sequence R_{i_1}[C⃗_1], …, R_{i_k}[C⃗_k] such that
(a) R_{i_j} ∈ R for j ∈ [1, k];
(b) C⃗_j is a sequence of m distinct attributes in sort(R_{i_j}) for j ∈ [1, k];
(c) R_{i_1}[C⃗_1] = R_a[A_1, …, A_m];
(d) R_{i_k}[C⃗_k] = R_b[B_1, …, B_m];
(e) R_{i_j}[C⃗_j] ⊆ R_{i_{j+1}}[C⃗_{j+1}] can be obtained from an ind in Σ by one application of rule IND2, for j ∈ [1, k − 1].
Crux Use the instance J constructed in the proof of Theorem 9.1.3. Working backward from the tuple s′ in J(R_b), a chain of relation-tuple pairs (R_{i_j}, s_j) can be constructed so that each of 1, …, m occurs exactly once in s_j, and s_{j+1} is inserted into I as a result of s_j and IND2.
Based on this, it is straightforward to verify that the following algorithm determines logical implication between inds. Note that only inds of arity m are considered in the algorithm.
Algorithm 9.1.6
Input: A set Σ of inds over R and an ind R_a[A_1, …, A_m] ⊆ R_b[B_1, …, B_m].
Output: Determine whether Σ |= R_a[A_1, …, A_m] ⊆ R_b[B_1, …, B_m].
Procedure: Build a set E of expressions of the form R_i[C_1, …, C_m] as follows:
1. E := {R_a[A_1, …, A_m]}.
2. Repeat until R_b[B_1, …, B_m] ∈ E or no change possible:
   If R_i[C_1, …, C_m] ∈ E and
   R_i[C_1, …, C_m] ⊆ R_j[D_1, …, D_m]
   can be derived from an ind of Σ by one application of IND2, then insert R_j[D_1, …, D_m] into E.
3. If R_b[B_1, …, B_m] ∈ E, then return yes; else return no.
As presented, the preceding algorithm is nondeterministic and might therefore take more than polynomial time. The following result shows that this is indeed likely for any algorithm for deciding implication between inds.
Theorem 9.1.7 Deciding logical implication for inds is PSPACE-complete.
Crux Algorithm 9.1.6 can be used to develop a nondeterministic polynomial space procedure for deciding logical implication between inds. By Savitch's theorem (which states that PSPACE = NPSPACE), this can be transformed into a deterministic algorithm that runs in polynomial space. To show that the problem is PSPACE-hard, we describe a reduction from the problem of linear space acceptance.
A (Turing) machine is linear bounded if on each input of size n, the machine does not use more than n tape cells. The problem is the following:
Linear Space Acceptance (LSA) problem
Input: The description of a linear bounded machine M and an input word x;
Output: yes iff M accepts x.
The heart of the proof is, given an instance (M, x) of the LSA problem, to construct a set Σ of inds and an ind σ such that Σ |= σ iff x is accepted by M.
Let M = (K, Γ, δ, s, h) be a Turing machine with states K, alphabet Γ, transition relation δ, start state s, and accepting state h; and let x = x_1 … x_n ∈ Γ* have length n. Configurations of M are viewed as elements of Γ*·K·Γ+ with length n + 1, where the placement of the state indicates the head position (the state is listed immediately left of the scanned letter). Observe that transitions can be described by expressions of the form α_1, α_2, α_3 → α′_1, α′_2, α′_3 with α_1, …, α′_3 in (K ∪ Γ). For instance, the transition
if reading b in state p, then overwrite with c and move left
corresponds to a, p, b → p, a, c for each a in Γ. Let Δ be the set of all such expressions corresponding to transitions of M.
The initial configuration is sx. The final configuration is hb^n, for some particular letter b, iff M accepts x.
The inds of Σ are defined over a single relation R. The attributes of R are {A_{i,j} | i ∈ (K ∪ Γ), j ∈ {1, 2, …, n + 1}}. The intuition here is that the attribute A_{p,j} corresponds to the statement that the jth symbol in a given configuration is p. To simplify the presentation, attribute A_{a,k} is simply denoted by the pair (a, k).
The ind σ is
R[(s, 1), (x_1, 2), …, (x_n, n + 1)] ⊆ R[(h, 1), (b, 2), …, (b, n + 1)].
The inds in Σ correspond to valid moves of M. In particular, for each j ∈ [1, n − 1], Σ includes all inds of the form
R[(α_1, j), (α_2, j + 1), (α_3, j + 2), A⃗] ⊆ R[(α′_1, j), (α′_2, j + 1), (α′_3, j + 2), A⃗],
where α_1, α_2, α_3 → α′_1, α′_2, α′_3 is in Δ, and A⃗ is an arbitrary fixed sequence that lists all of the attributes (a, k) with a ∈ Γ and k ∈ {1, …, j − 1, j + 3, …, n + 1}. Thus each ind in Σ has arity 3 + (n − 2)|Γ|, and |Σ| ≤ n|Δ|.
Although the choice of A⃗ permits the introduction of many inds, observe that the construction is still polynomial in the size of the linear space automaton problem (M, x). Using Proposition 9.1.5, it is now straightforward to verify that Σ |= σ iff M has an accepting computation of x.
Although the general problem of deciding implication for inds is PSPACE-complete, naturally arising special cases of the problem have polynomial time solutions. This includes the family of inds that are at most k-ary (ones in which the sequences of attributes have length at most some fixed k) and inds that have the form R[A⃗] ⊆ S[A⃗] (see Exercise 9.10). The latter case arises in examples such as Grad Stud[Name, Major] ⊆ Student[Name, Major]. This theme is also examined at the end of this chapter.
9.2 Finite versus Infinite Implication
We now turn to the interaction between inds and fds, which leads to three interesting phenomena. The first of these requires a closer look at the notion of logical implication. Consider the notion of logical implication used until now: Σ logically implies σ if for all relation (or database) instances I, I |= Σ implies I |= σ. Although this notion is close to the corresponding notion of mathematical logic, it is different in a crucial way: In the context of databases considered until now, only finite instances are considered. From the point of view of logic, the study of logical implication conducted so far lies within finite model theory.
It is also interesting to consider logical implication in the traditional mathematical logic framework, in which infinite database instances are permitted. As will be seen shortly, when fds or inds are considered separately, permitting infinite instances has no impact on logical implication. However, when fds and inds are taken together, the two flavors of logical implication do not coincide.
The notion of infinite relation and database instances is defined in the natural manner. An unrestricted relation (database) instance is a relation (database) instance that is either finite or infinite. Based on this, we now define unrestricted implication to permit infinite instances, and we define finite logical implication for the case in which only finite instances are considered.
R   A  B          R   A  B
    1  0              1  1
    2  1              2  1
    3  2              3  2
    4  3              4  3
    ⋮  ⋮              ⋮  ⋮
      (a)               (b)

Figure 9.1: Instances used for distinguishing |=_fin and |=_unr
Definition 9.2.1 A set Σ of dependencies over R implies without restriction a dependency σ, denoted Σ |=_unr σ, if for each unrestricted instance I of R, I |= Σ implies I |= σ. A set Σ of dependencies over R finitely implies a dependency σ, denoted Σ |=_fin σ, if for each (finite) instance I of R, I |= Σ implies I |= σ.
If finite and unrestricted implication coincide, or if the kind of implication is understood from the context, then we may use |= rather than |=_fin or |=_unr. This is what we implicitly did so far by using |= in place of |=_fin.
Of course, if Σ |=_unr σ, then Σ |=_fin σ. The following shows that the converse need not hold:
Theorem 9.2.2
(a) There is a set Σ of fds and inds and an ind σ such that Σ |=_fin σ but Σ ⊭_unr σ.
(b) There is a set Σ of fds and inds and an fd σ such that Σ |=_fin σ but Σ ⊭_unr σ.
Proof For part (a), let R be binary with attributes A, B; let Σ = {A → B, R[A] ⊆ R[B]}; and let σ be R[B] ⊆ R[A]. To see that Σ |=_fin σ, let I be a finite instance of R that satisfies Σ. Because I |= A → B, |π_A(I)| ≥ |π_B(I)|; and because I |= R[A] ⊆ R[B], |π_B(I)| ≥ |π_A(I)|. It follows that |π_A(I)| = |π_B(I)|. Because I is finite and π_A(I) ⊆ π_B(I), it follows that π_B(I) ⊆ π_A(I) and I |= R[B] ⊆ R[A].
On the other hand, the instance shown in Fig. 9.1(a) demonstrates that Σ ⊭_unr σ.
For part (b), let Σ be as before, and let σ be the fd B → A. As before, if I |= Σ, then |π_A(I)| = |π_B(I)|. Because I |= A → B, each tuple in I has a distinct A-value. Thus the number of B-values occurring in I equals the number of tuples in I. Because I is finite, this implies that I |= B → A. Thus Σ |=_fin σ. On the other hand, the instance shown in Fig. 9.1(b) demonstrates that Σ ⊭_unr σ.
It is now natural to reconsider implication for fds, jds, and inds taken separately and in combinations. Are unrestricted and finite implication different in these cases? The answer is given by the following:
Theorem 9.2.3 Unrestricted and finite implication coincide for fds and jds considered separately or together, and for inds considered alone.
Proof Unrestricted implication implies finite implication by definition. For fds and jds taken separately or together, Theorem 8.4.12 on the relationship between chasing and logical implication can be used to obtain the opposite implication. For inds, Theorem 9.1.3 shows that finite implication and provability by the ind inference rules are equivalent. It is easily verified that these rules are also sound for unrestricted implication. Thus finite implication implies unrestricted implication for inds as well.
The notion of finite versus unrestricted implication will be revisited in Chapter 10, where dependencies are recast into a logic-based formalism.
Implication Is Undecidable for fds + inds
As will be detailed in Chapter 10, fds and inds (and most other relational dependencies) can be represented as sentences in first-order logic. By Gödel's Completeness Theorem, implication is recursively enumerable for first-order logic. It follows that unrestricted implication is r.e. for fds and inds considered together. On the other hand, finite implication for fds and inds taken together is co-r.e. This follows from the fact that there is an effective enumeration of all finite instances over a fixed schema; if Σ ⊭_fin σ, then a witness of this fact will eventually be found. When unrestricted and finite implication coincide, this pair of observations is sufficient to imply decidability of implication; this is not the case for fds and inds.
The Word Problem for (Finite) Monoids
The proof that (finite) implication for fds and inds taken together is undecidable uses a reduction from the word problem for monoids, which we discuss next.
A monoid is a set with an associative binary operation ◦ defined on it and an identity element ε. Let Λ be a finite alphabet and Λ* the free monoid generated by Λ (i.e., the set of finite words with letters in Λ, with the concatenation operation). Let E = {α_i = β_i | i ∈ [1..n]} be a finite set of equalities, and let e be an additional equality α = β, where α_i, β_i, α, β ∈ Λ*. Then E (finitely) implies e, denoted E |=_unr e (E |=_fin e), if for each (finite) monoid M and homomorphism h : Λ* → M, if h(α_i) = h(β_i) for each i ∈ [1..n], then h(α) = h(β). The word problem for (finite) monoids is to decide, given E and e, whether E |=_unr e (E |=_fin e). Both the word problem for monoids and the word problem for finite monoids are undecidable.
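The semantics of implication between equalities can be made concrete for one fixed finite monoid: a homomorphism h : Λ* → M is determined by the images of the letters, so all homomorphisms can be enumerated. The brute-force sketch below (hypothetical encoding: a multiplication table and an identity index) checks a single monoid only; the actual word problem quantifies over all (finite) monoids and is undecidable.

```python
from itertools import product

def h_eval(word, img, mul, one):
    # Evaluate h(word) given letter images `img` in the monoid (mul, one).
    v = one
    for letter in word:
        v = mul[v][img[letter]]
    return v

def monoid_implies(mul, one, letters, E, e):
    # True iff every homomorphism into THIS monoid satisfying E satisfies e.
    for images in product(range(len(mul)), repeat=len(letters)):
        img = dict(zip(letters, images))
        if all(h_eval(a, img, mul, one) == h_eval(b, img, mul, one)
               for a, b in E):
            if h_eval(e[0], img, mul, one) != h_eval(e[1], img, mul, one):
                return False   # this h satisfies E but violates e
    return True

z2 = [[0, 1], [1, 0]]   # the two-element group, written additively
assert monoid_implies(z2, 0, "ab", [], ("aa", ""))   # a◦a = identity holds in Z2
```

A witness homomorphism found by `monoid_implies` corresponds exactly to an M and h refuting E |= e.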
Using this, we have the following:
Theorem 9.2.4 Unrestricted and finite implication for fds and inds considered together are undecidable. In particular, let Σ range over sets of fds and inds. The following sets are not recursive:
(a) {(Σ, σ) | σ an ind and Σ |=_unr σ}; {(Σ, σ) | σ an ind and Σ |=_fin σ};
(b) {(Σ, σ) | σ an fd and Σ |=_unr σ}; and {(Σ, σ) | σ an fd and Σ |=_fin σ}.
Crux We prove (a) using a reduction from the word problem for (finite) monoids to the (finite) implication problem for fds and inds. The proof of part (b) is similar and is left for Exercise 9.5. We first consider the unrestricted case.
Let Λ be a fixed alphabet. Let E = {α_i = β_i | i ∈ [1, n]} be a set of equalities over Λ*, and let e be another equality α = β. A prefix is defined to be any prefix of α_i, β_i, α, or β (including the empty string ε and full words α_1, β_1, etc.). A single relation R is used, which has attributes
(i) A_γ, for each prefix γ;
(ii) A_x, A_y, A_xy;
(iii) A_ya, for each a ∈ Λ; and
(iv) A_xya, for each a ∈ Λ;
where x and y are two fixed symbols.
To understand the correspondence between constrained relations and homomorphisms over monoids, suppose that there is a homomorphism h from Λ* to some monoid M. Intuitively, a tuple of R will hold information about two elements h(x), h(y) of M (in columns A_x, A_y, respectively) and their product h(x) ◦ h(y) = h(xy) (in column A_xy). For each a in Λ, tuples will also hold information about h(ya) and h(xya) in columns A_ya, A_xya. More precisely, the instance I_{M,h} corresponding to the monoid M and the homomorphism h : Λ* → M is defined by
I_{M,h} = {t_{u,v} | u, v ∈ Λ*},
where for each u, v ∈ Λ*, t_{u,v} is the tuple such that
t_{u,v}(A_x) = h(u);    t_{u,v}(A_γ) = h(γ), for each prefix γ;
t_{u,v}(A_y) = h(v);    t_{u,v}(A_ya) = h(va), for each a ∈ Λ;
t_{u,v}(A_xy) = h(uv);  t_{u,v}(A_xya) = h(uva), for each a ∈ Λ.
Formally, to force the correspondence between the relations and homomorphisms over monoids, we use a set Σ of dependencies. In other words, we wish to find a set Σ of dependencies that characterizes precisely the instances over R that correspond to some homomorphism h from Λ* to some monoid M. The key to the proof is that this can be done using just fds and inds. Strictly speaking, the dependencies of (8) in the following list are not inds because an attribute is repeated in the left-hand side. As discussed in Exercise 9.4(e), the set of dependencies used here can be modified to a set of proper inds that has the desired properties. In addition, we use fds with an empty left-hand side, which are sometimes not considered as real fds. The use of such dependencies is not crucial. A slightly more complicated proof can be found that uses only fds with a nonempty left-hand side. The set Σ is defined as follows:
1. ∅ → A_γ, for each prefix γ;
2. A_x A_y → A_xy;
3. A_y → A_ya, for each a ∈ Λ;
4. R[A_ε] ⊆ R[A_y];
5. R[A_γ, A_γa] ⊆ R[A_y, A_ya], for each a ∈ Λ and prefix γa;
6. R[A_xy, A_xya] ⊆ R[A_y, A_ya], for each a ∈ Λ;
7. R[A_x, A_ya, A_xya] ⊆ R[A_x, A_y, A_xy], for each a ∈ Λ;
8. R[A_y, A_ε, A_y] ⊆ R[A_x, A_y, A_xy]; and
9. R[A_{α_i}] ⊆ R[A_{β_i}], for each i ∈ [1, n].
The ind σ is R[A_α] ⊆ R[A_β].
Let I be an instance satisfying Σ. Observe that I has to satisfy a number of implied properties. In particular, one can verify that I also satisfies
I[A_xya] ⊆ I[A_ya] ⊆ I[A_y] = I[A_xy] ⊆ I[A_x]
and that adom(I) ⊆ I[A_x].
We now show that Σ |=_unr σ iff E |=_unr e. We first show that E ⊭_unr e implies Σ ⊭_unr σ. Suppose that there is a monoid M and homomorphism h : Λ* → M that satisfies the equations of E but violates the equation e. Consider I_{M,h} defined earlier. It is straightforward to verify that I_{M,h} |= Σ but I_{M,h} ⊭ σ.
For the opposite direction, suppose now that E |=_unr e, and let I be a (possibly infinite) instance of R that satisfies Σ. To conclude the proof, it must be shown that I[A_α] ⊆ I[A_β]. (Observe that these two relations both consist of a single tuple because of the fds with an empty left-hand side.)
We now define a function h : Λ* → adom(I). We will prove that h is a homomorphism from the free monoid Λ* to a monoid whose underlying set is h(Λ*), and that h satisfies the equations of E (and hence e). We will use the fact that the monoid satisfies e to derive that I[A_α] ⊆ I[A_β].
We now give an inductive definition of h and show that it has the property that h(v) ∈ I[A_y] for each v ∈ Λ*.
Basis: Set h(ε) to be the element in I[A_ε]. Note that h(ε) is also in I[A_y] because R[A_ε] ⊆ R[A_y] ∈ Σ.
Inductive step: Given h(v) and a ∈ Λ, let t ∈ I be such that t[A_y] = h(v). Define h(va) = t(A_ya). This is uniquely determined because A_y → A_ya ∈ Σ. In addition, h(va) ∈ I[A_y] because R[A_x, A_ya, A_xya] ⊆ R[A_x, A_y, A_xy] ∈ Σ.
We next show by induction on v that
(†) ⟨h(u), h(v), h(uv)⟩ ∈ I[A_x, A_y, A_xy] for each u, v ∈ Λ*.
For a fixed u, the basis (i.e., v = ε) is provided by the fact that h(u) ∈ I[A_y] and the ind R[A_y, A_ε, A_y] ⊆ R[A_x, A_y, A_xy] ∈ Σ. For the inductive step, let ⟨h(u), h(v), h(uv)⟩ ∈ I[A_x, A_y, A_xy] and a ∈ Λ. Let t ∈ I be such that t[A_x, A_y, A_xy] = ⟨h(u), h(v), h(uv)⟩.
Then by construction of h, h(va) = t(A_ya); and from the ind R[A_xy, A_xya] ⊆ R[A_y, A_ya], we have h(uva) = t(A_xya). Finally, the ind R[A_x, A_ya, A_xya] ⊆ R[A_x, A_y, A_xy] implies that ⟨h(u), h(va), h(uva)⟩ ∈ I[A_x, A_y, A_xy], as desired.
Define the binary operation ◦ on h(Λ*) as follows. For a, b ∈ h(Λ*), let
a ◦ b = c if for some t ∈ I, t[A_x, A_y, A_xy] = ⟨a, b, c⟩.
There is such a tuple by (†), and c is uniquely defined because A_x A_y → A_xy ∈ Σ. Furthermore, by (†), for each u, v, h(u) ◦ h(v) = h(uv). Thus for h(u), h(v), h(w) in h(Λ*),
(h(u) ◦ h(v)) ◦ h(w) = h(uvw) = h(u) ◦ (h(v) ◦ h(w)),
and
h(u) ◦ h(ε) = h(u) = h(ε) ◦ h(u),
so (h(Λ*), ◦) is a monoid. In addition, h is a homomorphism from the free monoid over Λ to the monoid (h(Λ*), ◦).
It is easy to see that I[A_{α_i}] = {h(α_i)} and I[A_{β_i}] = {h(β_i)} for i ∈ [1, n]. Let i be fixed. Because R[A_{α_i}] ⊆ R[A_{β_i}] ∈ Σ, h(α_i) = h(β_i). Because E |=_unr e, h(α) = h(β). Thus I[A_α] = {h(α)} = {h(β)} = I[A_β]. It follows that I |= R[A_α] ⊆ R[A_β], as desired.
This completes the proof for the unrestricted case. For the finite case, note that everything has to be finite: the monoid M is finite, I is finite, and the monoid h(Λ*) is finite. The rest of the argument is the same.
The issue of decidability of finite and unrestricted implication for classes of dependencies is revisited in Chapter 10.
9.3 Nonaxiomatizability of fds + inds
The inference rules given previously for fds, mvds, and inds can be viewed as inference rule schemas, in the sense that each of them can be instantiated with specific attribute sets (sequences) to create infinitely many ground inference rules. In these cases the family of inference rule schemas is finite, and we informally refer to them as finite axiomatizations. Rather than formalizing the somewhat fuzzy notion of inference rule schema, we focus in this section on families ℛ of ground inference rules. A (ground) axiomatization of a family 𝒮 of dependencies is a set of ground inference rules that is sound and complete for (finite or unrestricted) implication for 𝒮. Two properties of an axiomatization ℛ will be considered, namely: (1) ℛ is recursive, and (2) ℛ is k-ary, in the sense (formally defined later in this section) that each rule in ℛ has at most k dependencies in its condition.
Speaking intuitively, if 𝒮 has a finite axiomatization, that is, if there is a finite family ℛ′ of inference rule schemas that is sound and complete for 𝒮, then ℛ′ specifies a ground axiomatization for 𝒮 that is both recursive and k-ary for some k. Two results are demonstrated in this section: (1) there is no recursive axiomatization for finite implication of fds and inds, and (2) there is no k-ary axiomatization for finite implication of fds and inds. It is also known that there is no k-ary axiomatization for unrestricted implication of fds and inds. The intuitive conclusion is that the family of fds and inds does not have a finite axiomatization for finite implication or for unrestricted implication.
To establish the framework and some notation, we assume temporarily that we are dealing with a family F of database instances over a fixed database schema R = {R_1, …, R_n}. Typically, F will be the set of all finite instances over R, or the set of all (finite or infinite) instances over R. All the notions that are defined are with respect to F. Let 𝒮 be a family of dependencies over R. (At present, 𝒮 would be the set of fds and inds over R.) Logical implication |= among dependencies in 𝒮 is defined with respect to F in the natural manner. In particular, |=_unr and |=_fin are obtained by letting F be the set of unrestricted or finite instances.
A (ground) inference rule over 𝒮 is an expression of the form
ρ = if S then s,
where S ⊆ 𝒮 and s ∈ 𝒮.
Let ℛ be a set of rules over R. Then ℛ is sound if each rule in ℛ is sound. Let Σ ∪ {σ} ⊆ 𝒮 be a set of dependencies over R. A proof of σ from Σ using ℛ is a finite sequence σ_1, …, σ_n = σ such that for each i ∈ [1, n], either (1) σ_i ∈ Σ, or (2) for some rule if S then s in ℛ, σ_i = s and S ⊆ Σ ∪ {σ_1, …, σ_{i−1}}. We write Σ ⊢_ℛ σ (or Σ ⊢ σ if ℛ is understood) if there is a proof of σ from Σ using ℛ. Clearly, if each rule in ℛ is sound, then Σ ⊢ σ implies Σ |= σ. The set ℛ is complete if for each pair (Σ, σ), Σ |= σ implies Σ ⊢_ℛ σ. A (sound and complete) axiomatization for logical implication is a set ℛ of rules that is sound and complete.
The aforementioned notions are now generalized to permit all schemas R. In particular, we consider a set ℛ of rules that is a union ∪{ℛ_R | R is a schema}. The notions of sound, proof, etc., can be generalized in the natural fashion.
Note that with the preceding definition, every set 𝒮 of dependencies has a sound and complete axiomatization. This is provided by the set ℛ of all rules of the form
if S then s,
where S |= s. Clearly, such trivial axiomatizations hold no interest. In particular, they are not necessarily effective (i.e., one may not be able to tell if a rule is in ℛ, so one may not be able to construct proofs that can be checked). It is thus natural to restrict ℛ to be recursive.
We now present the first result of this section, which will imply that there is no recursive axiomatization for finite implication of fds and inds. In this result we assume that the dependencies in 𝒮 are sentences in first-order logic.
Proposition 9.3.1 Let 𝒮 be a class of dependencies. If 𝒮 has a recursive axiomatization for finite implication, then finite implication is decidable for 𝒮.
Crux Suppose that 𝒮 has a recursive axiomatization. Consider the set
Implic = {(S, s) | S ⊆ 𝒮, s ∈ 𝒮, and S |=_fin s}.
First note that the set Implic is r.e.; indeed, let ℛ be a recursive axiomatization for 𝒮. One can effectively enumerate all proofs of implication that use rules in ℛ. This allows one to enumerate Implic effectively. Thus Implic is r.e. We argue next that Implic is also co-r.e. To conclude that a pair (S, s) is not in Implic, it is sufficient to exhibit a finite instance satisfying S and violating s. To enumerate all pairs (S, s) not in Implic, one proceeds as follows. The set of all pairs (S, s) is clearly r.e., as is the set of all instances over a fixed schema. Repeat the following for all positive integers n: Enumerate the first n pairs (S, s) and the first n instances; for each (S, s) among the n, check whether one of the n instances is a counterexample to the implication S |= s, in which case output (S, s). Clearly, this procedure enumerates the complement of Implic, so Implic is co-r.e. Because it is both r.e. and co-r.e., Implic is recursive, so there is an algorithm testing whether (S, s) is in Implic.
It follows that there is no recursive axiomatization for finite implication of fds and inds. [To see this, note that by Theorem 9.2.4, finite implication for fds and inds is undecidable. By Proposition 9.3.1, it follows that there can be no recursive axiomatization for fds and inds.] Because implication for jds is decidable (Theorem 8.4.12), but there is no axiomatization for them (Theorem 8.3.4), the converse of the preceding proposition does not hold.
Speaking intuitively, the preceding development implies that there is no finite set of inference rule schemas that is sound and complete for finite implication of fds and inds. However, the proof is rather indirect. Furthermore, the approach cannot be used in connection with unrestricted implication, nor with classes of dependencies for which finite implication is decidable (see Exercise 9.9). The notion of k-ary axiomatization developed now shall overcome these objections.

A rule "if S then s" is k-ary for some k ≥ 0 if |S| = k. An axiomatization R is k-ary if each rule in R is l-ary for some l ≤ k. For example, the instantiations of rules FD1 and IND1 are 0-ary, those of rules FD2 and IND2 are 1-ary, and those of FD3 and IND3 are 2-ary. Theorem 9.3.3 below shows that there is no k-ary axiomatization for finite implication of fds and inds.
We now turn to an analog, in terms of logical implication, of k-ary axiomatizability. Again let 𝒮 be a set of dependencies over R, and let F be a family of instances over R. Let k ≥ 0. A set Γ ⊆ 𝒮 is:

closed under implication with respect to 𝒮 if s ∈ Γ whenever (a) s ∈ 𝒮 and (b) Γ |=_F s;

closed under k-ary implication with respect to 𝒮 if s ∈ Γ whenever (a) s ∈ 𝒮, and for some Γ′ ⊆ Γ, (b1) Γ′ |=_F s and (b2) |Γ′| ≤ k.

Clearly, if Γ is closed under implication, then it is closed under k-ary implication for each k ≥ 0, and if Γ is closed under k-ary implication, then it is closed under k′-ary implication for each k′ ≤ k.
Proposition 9.3.2 Let R be a database schema, 𝒮 a set of dependencies over R, and k ≥ 0. Then there is a k-ary axiomatization for 𝒮 iff whenever Γ ⊆ 𝒮 is closed under k-ary implication, then Γ is closed under implication.

Proof Suppose that there is a k-ary axiomatization R for 𝒮, and let Γ ⊆ 𝒮 be closed under k-ary implication. Suppose further that Γ |= s for some s ∈ 𝒮. Let s_1, . . . , s_n be a proof of s from Γ using R. Using the fact that R is k-ary and that Γ is closed under k-ary implication, a straightforward induction shows that s_i ∈ Γ for i ∈ [1, n].

Suppose now that for each Γ ⊆ 𝒮, if Γ is closed under k-ary implication, then Γ is closed under implication. Set

R = {if S then s | S ⊆ 𝒮, s ∈ 𝒮, |S| ≤ k, and S |= s}.

To see that R is complete, suppose that Γ |= s. Consider the set Γ* = {s′ | Γ ⊢_R s′}. From the construction of R, Γ* is closed under k-ary implication. By assumption it is closed under implication, and so s ∈ Γ*, that is, Γ ⊢_R s as desired.
In the following, we consider finite implication, so F is the set of finite instances.

Theorem 9.3.3 For no k does there exist a k-ary sound and complete axiomatization for finite implication of fds and inds taken together. More specifically, for each k there is a schema R for which there is no k-ary sound and complete axiomatization for finite implication of fds and inds over R.
Proof Let k ≥ 0 be fixed. Let R = {R_0, . . . , R_k} be a database schema where sort(R_i) = {A, B} for each i ∈ [0, k]. In the remainder of this proof, addition is always done modulo k + 1. The dependencies Σ = Σ_a ∪ Σ_b and σ are defined by

(a) Σ_a = {R_i: A → B | i ∈ [0, k]};
(b) Σ_b = {R_i[A] ⊆ R_{i+1}[B] | i ∈ [0, k]}; and
(c) σ = R_0[B] ⊆ R_k[A].

Let Γ be the union of Σ with all fds and inds that are tautologies (i.e., that are satisfied by all finite instances over R).

In the remainder of the proof, it is shown that (1) Γ is not closed under finite implication, but (2) Γ is closed under k-ary finite implication. Proposition 9.3.2 will then imply that the family of fds and inds has no k-ary sound and complete axiomatization for R.

First observe that Γ does not contain σ, so to show that Γ is not closed under finite implication, it suffices to demonstrate that Σ |=_fin σ. Let I be a finite instance of R that satisfies Σ. By the inds of Σ, |I(R_i)[A]| ≤ |I(R_{i+1})[B]| for each i ∈ [0, k], and by the fds of Σ, |I(R_i)[B]| ≤ |I(R_i)[A]| for each i ∈ [0, k]. From this we obtain

|I(R_0)[A]| ≤ |I(R_1)[B]| ≤ |I(R_1)[A]| ≤ · · · ≤ |I(R_k)[B]| ≤ |I(R_k)[A]| ≤ |I(R_0)[B]| ≤ |I(R_0)[A]|.
In particular, |I(R_k)[A]| = |I(R_0)[B]|. Since I is finite and we have I(R_k)[A] ⊆ I(R_0)[B] and |I(R_k)[A]| = |I(R_0)[B]|, it follows that I(R_0)[B] ⊆ I(R_k)[A] as desired.
We now show that Γ is closed under k-ary finite implication. Suppose that Σ′ ⊆ Γ has no more than k elements (|Σ′| ≤ k). It must be shown that if γ is an fd or ind and Σ′ |=_fin γ, then γ ∈ Γ. Because Σ contains k + 1 inds, any subset of Γ that has no more than k members must omit some ind δ of Σ. We shall exhibit an instance I such that I |= γ iff γ ∈ Γ − {δ}. (Thus I will be an Armstrong instance for Γ − {δ}.) It will then follow that Γ − {δ} is closed under finite implication. Because Σ′ ⊆ Γ − {δ}, this will imply that for each fd or ind γ, if Σ′ |=_fin γ, then Γ − {δ} |=_fin γ, so γ ∈ Γ.
Because Σ is symmetric with regard to inds, we can assume without loss of generality that δ is the ind R_k[A] ⊆ R_0[B]. Assuming that N × N is contained in the underlying domain, define I so that

I(R_0) = {⟨(0, 0), (0, k + 1)⟩, ⟨(1, 0), (1, k + 1)⟩, ⟨(2, 0), (1, k + 1)⟩}

and for each i ∈ [1, k],

I(R_i) = {⟨(0, i), (0, i − 1)⟩, ⟨(1, i), (1, i − 1)⟩, . . . , ⟨(2i + 1, i), (2i + 1, i − 1)⟩, ⟨(2i + 2, i), (2i + 1, i − 1)⟩}.
Figure 9.2 shows I for the case k = 3.

We now show for each fd and ind γ over R that I |= γ iff γ ∈ Γ − {δ}. Three cases arise:

1. γ is a tautology. Then this clearly holds.

2. γ is an fd that is not a tautology. Then γ is equivalent to one of the following for some i ∈ [0, k]:

R_i: A → B,  R_i: B → A,  R_i: ∅ → A,  R_i: ∅ → B,  or R_i: ∅ → AB.

If γ is R_i: A → B, then γ ∈ Γ and clearly I |= γ. In the other cases, γ ∉ Γ and I ⊭ γ.

3. γ is an ind that is not a tautology. Considering now which inds I satisfies, note that the only pairs of nondisjoint columns of relations in I are

I(R_0)[A], I(R_1)[B];  I(R_1)[A], I(R_2)[B];  . . . ;  I(R_{k−1})[A], I(R_k)[B].

Furthermore, I ⊭ R_{i+1}[B] ⊆ R_i[A] for each i ∈ [0, k], and I ⊭ R_k[A] ⊆ R_0[B]. This implies that I |= γ iff γ ∈ Γ − {δ}, as desired.
I(R_0)   A      B        I(R_1)   A      B
        (0,0)  (0,4)             (0,1)  (0,0)
        (1,0)  (1,4)             (1,1)  (1,0)
        (2,0)  (1,4)             (2,1)  (2,0)
                                 (3,1)  (3,0)
                                 (4,1)  (3,0)

I(R_2)   A      B        I(R_3)   A      B
        (0,2)  (0,1)             (0,3)  (0,2)
        (1,2)  (1,1)             (1,3)  (1,2)
        (2,2)  (2,1)             (2,3)  (2,2)
        (3,2)  (3,1)             (3,3)  (3,2)
        (4,2)  (4,1)             (4,3)  (4,2)
        (5,2)  (5,1)             (5,3)  (5,2)
        (6,2)  (5,1)             (6,3)  (6,2)
                                 (7,3)  (7,2)
                                 (8,3)  (7,2)

Figure 9.2: An Armstrong relation for Γ − {δ}
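The instance I of the proof can be generated and checked mechanically. The following sketch (our illustration, using the definitions above) builds I for a given k and verifies that every fd R_i: A → B holds, that every ind R_i[A] ⊆ R_{i+1}[B] holds for i ∈ [0, k − 1], and that the omitted δ = R_k[A] ⊆ R_0[B] fails.

```python
def build_I(k):
    """The instance I from the proof: each tuple is an (A-value, B-value)
    pair, and each value is itself a pair from N x N."""
    I = {0: [((0, 0), (0, k + 1)),
             ((1, 0), (1, k + 1)),
             ((2, 0), (1, k + 1))]}
    for i in range(1, k + 1):
        rows = [((j, i), (j, i - 1)) for j in range(2 * i + 2)]
        rows.append(((2 * i + 2, i), (2 * i + 1, i - 1)))
        I[i] = rows
    return I

def fd_A_to_B(rows):
    """Does the fd A -> B hold in this relation?"""
    seen = {}
    return all(seen.setdefault(a, b) == b for a, b in rows)

def ind_A_in_B(rows_r, rows_s):
    """Does the ind R[A] ⊆ S[B] hold between these two relations?"""
    return {a for a, _ in rows_r} <= {b for _, b in rows_s}
```

Running it with k = 3 reproduces exactly the relations of Figure 9.2.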
In the proof of the preceding theorem, all relations used are binary, and all fds and inds are unary, in the sense that at most one attribute appears on either side of each dependency. In proofs that there is no k-ary axiomatization for unrestricted implication of fds and inds, some of the inds used involve at least two attributes on each side. This cannot be improved to unary inds, because there is a 2-ary sound and complete axiomatization for unrestricted implication of unary inds and arbitrary fds (see Exercise 9.18).
9.4 Restricted Kinds of Inclusion Dependency

This section explores two restrictions on inds for which several positive results have been obtained. The first one focuses on sets of inds that are acyclic in a natural sense, and the second restricts the inds to having only one attribute on either side. The restricted dependencies are important because they are sufficient to model many natural relationships, such as those captured by semantic models (see Chapter 11). These include subtype relationships of the kind "every student is also a person."

This section also presents a generalization of the chase that incorporates inds. Because inds are embedded, chasing in this context may lead to infinite chasing sequences. In the context of acyclic sets of inds, however, the chasing sequences are guaranteed to terminate. The study of infinite chasing sequences will be taken up in earnest in Chapter 10.
Inds and the Chase

Because inds may involve more than one relation, the formal notation of the chase must be extended. Suppose now that R is a database schema, and let q = (T, t) be a tableau query over R. The fd and jd rules are generalized to this context in the natural fashion.
We first present an example and then describe the rule that is used for inds.

Example 9.4.1 Consider the database schema consisting of two relation schemas P, Q with sort(P) = ABC, sort(Q) = DEF, the dependencies

Q[DE] ⊆ P[AB]  and  P: A → B,

and the tableau T shown in Fig. 9.3. Consider T_1 and T_2 in the same figure. The tableau T_1 is obtained by applying to T the ind rule given after this example. The intuition is that the tuples ⟨x, y_i⟩ should also be in the P-relation because of the ind. Then T_2 is obtained by applying the fd rule. Tableau minimization can be applied to obtain T_3.
The following rule is used for inds.

ind rule: Let σ = R[X] ⊆ S[Y] be an ind, let u ∈ T(R), and suppose that there is no free tuple v ∈ T(S) such that v[Y] = u[X]. In this case, we say that σ is applicable to R(u). Let w be a free tuple over S such that w[Y] = u[X] and w has distinct new variables in all coordinates of sort(S) − Y that are greater than all variables occurring in q. Then the result of applying σ to R(u) is (T′, t), where

T′(P) = T(P) for each relation name P ∈ R − {S}, and
T′(S) = T(S) ∪ {w}.
For a tableau query q and a set Σ of inds, it is possible that two terminal chasing sequences end with nonisomorphic tableau queries, that there are no finite terminal chasing sequences, or that there are both finite terminal chasing sequences and infinite chasing sequences (see Exercise 9.12). General approaches to resolving this problem will be considered in Chapter 10. In the present discussion, we focus on acyclic sets of inds, for which the chase always terminates after a finite number of steps.
Acyclic Inclusion Dependencies

Definition 9.4.2 A family Σ of inds over R is acyclic if there is no sequence R_i[X_i] ⊆ S_i[Y_i] (i ∈ [1, n]) of inds in Σ where R_{i+1} = S_i for i ∈ [1, n − 1], and R_1 = S_n. A family Σ of dependencies has acyclic inds if the set of inds in Σ is acyclic.

The following is easily verified (see Exercise 9.14):

Proposition 9.4.3 Let q be a tableau query and Σ a set of fds, jds, and acyclic inds over R. Then each chasing sequence of q by Σ terminates after an exponentially bounded number of steps.
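Acyclicity as defined above is just the absence of a cycle in the directed graph that has an edge from R to S for each ind R[X] ⊆ S[Y] (a self-loop counts as a cycle of length 1), so it can be tested by depth-first search. A sketch, with each ind abbreviated to its pair of relation names:

```python
def is_acyclic(inds):
    """inds: iterable of (R, S) name pairs, one per ind R[X] ⊆ S[Y].
    Returns True iff the set of inds is acyclic in the sense of
    Definition 9.4.2."""
    graph = {}
    for r, s in inds:
        graph.setdefault(r, set()).add(s)
        graph.setdefault(s, set())
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in graph}

    def dfs(n):
        color[n] = GRAY
        for m in graph[n]:
            # a GRAY neighbor means we closed a cycle
            if color[m] == GRAY or (color[m] == WHITE and dfs(m)):
                return True
        color[n] = BLACK
        return False

    return not any(color[n] == WHITE and dfs(n) for n in graph)
```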
Figure 9.3: Chasing with inds. [The figure shows the tableaux T, T_1, T_2, and T_3 of Example 9.4.1: T_1 extends T(P) with tuples beginning ⟨x, y_1, . . .⟩ and ⟨x, y_2, . . .⟩ containing fresh variables w_1, w_2, as required by the ind Q[DE] ⊆ P[AB]; T_2 identifies y_2 with y_1 by the fd P: A → B; and T_3 is the result of tableau minimization.]
For each tableau query q and set Σ of fds, jds, and acyclic inds, let chase(q, Σ) denote the result of some arbitrary chasing sequence of q by Σ. (One can easily come up with some syntactic strategy for arbitrarily choosing this sequence.)

Using an analog to Lemma 8.4.3, one obtains the following result on tableau query containment (an analog to Theorem 8.4.8).

Theorem 9.4.4 Let q, q′ be tableau queries and Σ a set of fds, jds, and acyclic inds over R. Then q ⊑_Σ q′ iff chase(q, Σ) ⊑ chase(q′, Σ).
Next we consider the application of the chase to implication of dependencies. For database schema R and ind σ = R[X] ⊆ S[Y] over R, the tableau query of σ is q_σ = ({R(u_σ)}, u_σ), where u_σ is a free tuple all of whose entries are distinct. For example, given R[ABCD], S[EFG], and σ = R[BC] ⊆ S[GE], q_σ = ({R(x_1, x_2, x_3, x_4)}, ⟨x_1, x_2, x_3, x_4⟩). In analogy with Theorem 8.4.12, we have the following for fds, jds, and acyclic inds.
Theorem 9.4.5 Let Σ be a set of fds, jds, and acyclic inds over database schema R, let σ be an fd, jd, or ind over R, and let T be the tableau in chase(q_σ, Σ). Then Σ |=_unr σ iff

(a) for an fd or jd σ over R, T satisfies the conditions of Theorem 8.4.12;
(b) for an ind σ = R[X] ⊆ S[Y], u_σ[X] ∈ T(S)[Y].

This yields the following:

Corollary 9.4.6 Finite and unrestricted implication for sets of fds, jds, and acyclic inds coincide and are decidable in exponential time.
An improvement of the complexity here seems unlikely, because implication of an ind by an acyclic set of inds is np-complete (see Exercise 9.15).

Unary Inclusion Dependencies

A unary inclusion dependency (uind) is an ind in which exactly one attribute appears on each side. Uinds arise frequently in relation schemas in which certain columns range over values that correspond to entity types (e.g., if SS# is a key for the Person relation and is also used to identify people in the Employee relation).

As with arbitrary inds, unrestricted and finite implication do not coincide for fds and uinds (see the proof of Theorem 9.2.2). However, both forms of implication are decidable in polynomial time. In this section, the focus is on finite implication. We present a sound and complete axiomatization for finite implication of fds and uinds (but in agreement with Theorem 9.3.3, it is not k-ary for any k).

For uinds considered in isolation, the inference rules for inds are specialized to yield the following two rules, which are sound and complete for (unrestricted and finite) implication. Here A, B, and C range over attributes and R, S, and T over relation names:

UIND1: (reflexivity) R[A] ⊆ R[A].
UIND2: (transitivity) If R[A] ⊆ S[B] and S[B] ⊆ T[C], then R[A] ⊆ T[C].
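Because UIND1 and UIND2 are exactly reflexivity and transitivity, implication for uinds in isolation reduces to reachability in the graph whose nodes are (relation, attribute) pairs. A sketch (the pair encoding is ours):

```python
def uind_implied(uinds, goal):
    """uinds: set of ((R, A), (S, B)) pairs, one per uind R[A] ⊆ S[B].
    By UIND1/UIND2, goal = ((R, A), (S, B)) is implied iff the two
    sides coincide (reflexivity) or (S, B) is reachable from (R, A)
    along given uinds (transitivity)."""
    start, target = goal
    if start == target:
        return True
    edges = {}
    for u, v in uinds:
        edges.setdefault(u, set()).add(v)
    seen, stack = {start}, [start]
    while stack:
        n = stack.pop()
        if n == target:
            return True
        for m in edges.get(n, ()):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return False
```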
To capture the interaction of fds and uinds in the finite case, the following family of rules is used:

C: (cycle rules) For each positive integer n,

if R_1: A_1 → B_1, R_2[A_2] ⊆ R_1[B_1], . . . , R_n: A_n → B_n, and R_1[A_1] ⊆ R_n[B_n]

then R_1: B_1 → A_1, R_1[B_1] ⊆ R_2[A_2], . . . , R_n: B_n → A_n, and R_n[B_n] ⊆ R_1[A_1].
The soundness of this family of rules follows from a straightforward cardinality argument. More generally, we have the following (see Exercise 9.16):

Theorem 9.4.7 The set {FD1, FD2, FD3, UIND1, UIND2} along with the cycle rules (C) is sound and complete for finite implication of fds and uinds. Furthermore, finite implication is decidable in polynomial time.
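The cardinality argument behind the cycle rules can be checked exhaustively for the n = 1 instance of the rule: in every finite instance of a binary relation R(A, B), the premises R: A → B and R[A] ⊆ R[B] force the conclusions R: B → A and R[B] ⊆ R[A]. The following brute-force sketch (ours; small instances over a three-element domain) confirms this.

```python
from itertools import combinations, product

def check_cycle_rule_n1(max_tuples=3, dom=(0, 1, 2)):
    """Verify the n = 1 cycle rule on all instances of R(A, B) with at
    most max_tuples tuples over dom: whenever the fd A -> B and the
    uind R[A] ⊆ R[B] hold, so do B -> A and R[B] ⊆ R[A]."""
    all_tuples = list(product(dom, repeat=2))
    for size in range(max_tuples + 1):
        for inst in combinations(all_tuples, size):
            fd_ab = all(t[1] == s[1] for t in inst for s in inst if t[0] == s[0])
            ind_ab = {t[0] for t in inst} <= {t[1] for t in inst}
            if fd_ab and ind_ab:  # premises hold; conclusions must too
                fd_ba = all(t[0] == s[0] for t in inst for s in inst if t[1] == s[1])
                ind_ba = {t[1] for t in inst} <= {t[0] for t in inst}
                if not (fd_ba and ind_ba):
                    return False
            # instances violating a premise impose no obligation
    return True
```

The finiteness assumption is essential: over the natural numbers, the instance {(i, i + 1) | i ≥ 0} satisfies the premises but neither conclusion.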
Bibliographic Notes

Inclusion dependency is based on the notion of referential integrity, which was known to the broader database community during the 1970s (see, e.g., [Dat81]). A seminal paper on the theory of inds is [CFP84], in which inference rules for inds are presented and the nonaxiomatizability of both finite and unrestricted implication for fds and inds is demonstrated. A non-k-ary sound and complete set of inference rules for finite implication of fds and inds is presented in [Mit83b]. Another seminal paper is [JK84b], which also observed the distinction between finite and unrestricted implication for fds and inds, generalized the chase to incorporate fds and inds, and used this to characterize containment between conjunctive queries. Related work is reported in [LMG83].

Undecidability of (finite) implication for fds and inds taken together was shown independently by [CV85] and [Mit83a]. The proof of Theorem 9.2.4 is taken from [CV85]. (The undecidability of the word problem for monoids is from [Pos47], and of the word problem for finite monoids is from [Gur66].)

Acyclic inds were introduced in [Sci86]. Complexity results for acyclic inds include that implication for acyclic inds alone is np-complete [CK86], and that implication for fds and acyclic inds has an exponential lower bound [CK85].

Given the pspace complexity of implication for inds and the negative results in connection with fds, unary inds emerged as a more tractable form of inclusion dependency. The decision problems for finite and unrestricted implication for uinds and fds taken together, although not coextensive, both lie in polynomial time [CKV90]. This extensive paper also develops axiomatizations of both finite and unrestricted logical implication for unary inds and fds considered together, and develops results for uinds with some of the more general dependencies studied in Chapter 10.

Typed inds are studied in [CK86]. In addition to using traditional techniques from dependency theory, such as chasing, this work develops tools for analyzing inds using equational theories.

Inds in connection with other dependencies are also studied in [CV83].
Exercises

Exercise 9.1 Complete the proof of Proposition 9.1.5.

Exercise 9.2 Complete the proof of Theorem 9.1.7.

Exercise 9.3 [CFP84] (In this exercise, by a slight abuse of notation, we allow fds with sequences rather than sets of attributes.) Demonstrate the following:

(a) If |A⃗| = |B⃗|, then {R[A⃗C⃗] ⊆ S[B⃗D⃗], S: B⃗ → D⃗} |=_unr R: A⃗ → C⃗.
(b) If |A⃗| = |B⃗|, then {R[A⃗C⃗] ⊆ S[B⃗D⃗], R[A⃗E⃗] ⊆ S[B⃗F⃗], S: B⃗ → D⃗} |=_unr R[C⃗E⃗] ⊆ S[D⃗F⃗].
(c) Suppose that |A⃗| = |B⃗|; Σ = {R[A⃗C⃗] ⊆ S[B⃗D⃗], R[A⃗E⃗] ⊆ S[B⃗D⃗], S: B⃗ → D⃗}; and I |= Σ. Then u[C⃗] = u[E⃗] for each u ∈ I(R).
Exercise 9.4 As defined in the text, we require in an ind R[A_1, . . . , A_m] ⊆ S[B_1, . . . , B_m] that the A_i's and the B_i's are distinct. A repeats-permitted inclusion dependency (rind) is defined as was inclusion dependency, except that repeats are permitted in the attribute sequences on both the left- and right-hand sides.

(a) Show that if Σ is a set of inds, σ a rind, and Σ |=_unr σ, then σ is equivalent to an ind.
(b) Exhibit a set Σ of inds and fds such that Σ |=_unr R[AB] ⊆ S[CC]. Do the same for R[AA] ⊆ R[BC].
(c) [Mit83a] Consider the rules

IND4: If R[A_1A_2] ⊆ S[BB] and R[C⃗] ⊆ T[D⃗], then R[C⃗′] ⊆ T[D⃗], where C⃗′ is obtained from C⃗ by replacing one or more occurrences of A_2 by A_1.
IND5: If R[A_1A_2] ⊆ S[BB] and T[C⃗] ⊆ R[D⃗], then T[C⃗] ⊆ R[D⃗′], where D⃗′ is obtained from D⃗ by replacing one or more occurrences of A_2 by A_1.

Prove that the inference rules {IND1, IND2, IND3, IND4, IND5} are sound and complete for finite implication of sets of rinds.
(d) Prove that unrestricted and finite implication coincide for rinds.
(e) A left-repeats-permitted inclusion dependency (l-rind) is a rind for which there are no repeats on the right-hand side. Given a set Σ ∪ {σ} of l-rinds over R, describe how to construct a schema R′ and inds Σ′ ∪ {σ′} over R′ such that Σ |= σ iff Σ′ |= σ′ and Σ |=_fin σ iff Σ′ |=_fin σ′.
(f) Do the same as in part (e), except for arbitrary rinds.
Exercise 9.5 [CV85] Prove part (b) of Theorem 9.2.4. Hint: In the proof of part (a), extend the schema of R to include new attributes A′, A″, and A_y; add dependencies A_y → A_y, R[A′, A″] ⊆ R[A_y, A_y], R[A″, A′] ⊆ R[A_y, A_y]; and use A′ as σ.
Exercise 9.6
(a) Develop an alternative proof of Theorem 9.3.3 in which σ is an fd rather than an ind.
(b) In the proof of Theorem 9.3.3 for finite implication, the dependency σ used is an ind. Using the same set Σ, find an fd that can be used in place of σ in the proof.

Exercise 9.7 Prove that there is no k for which there is a k-ary sound and complete axiomatization for finite implication of fds, jds, and inds.

Exercise 9.8 [SW82] Prove that there is no k-ary sound and complete set of inference rules for finite implication of emvds.

Exercise 9.9 Recall the notion of sort-set dependency (ssd) from Exercise 8.32.
(a) Prove that finite and unrestricted implication coincide for fds and ssds considered together. Conclude that implication for fds and ssds is decidable.
(b) [GH86] Prove that there is no k-ary sound and complete set of inference rules for finite implication of fds (key dependencies) and ssds taken together.
Exercise 9.10
(a) [CFP84] A set of inds is bounded by k if each ind in the set has at most k attributes on the left-hand side and on the right-hand side. Show that logical implication for bounded sets of inds is decidable in polynomial time.
(b) [CV83] An ind is typed if it has the form R[A⃗] ⊆ S[A⃗]. Exhibit a polynomial time algorithm for deciding logical implication between typed inds.

Exercise 9.11 Suppose that some attribute domains may be finite.
(a) Show that {IND1, IND2, IND3} remains sound in this framework.
(b) Show that if one-element domains are permitted, then {IND1, IND2, IND3} is not complete.
(c) Show for each n > 0 that if all domains are required to have at least n elements, then {IND1, IND2, IND3} is not complete.
Exercise 9.12 Suppose that no restrictions are put on the order of application of ind rules in chasing sequences.
(a) Exhibit a tableau query q and a set Σ of inds and two terminal chasing sequences of q by Σ that end with nonisomorphic tableau queries.
(b) Exhibit a tableau query q and a set Σ of inds, a terminal chasing sequence of q by Σ, and an infinite chasing sequence of q by Σ.
(c) Exhibit a tableau query q and a set Σ of inds such that q has no finite terminal chasing sequence by Σ.
Exercise 9.13 [JK84b] Recall that for tableau queries q and q′ and a set Σ of fds and jds over R, q ⊑_Σ q′ if for each instance I that satisfies Σ, q(I) ⊆ q′(I). In the context of inds, this containment relationship may depend on whether infinite instances are permitted or not. For tableau queries q, q′ and a set Σ of dependencies over R, we write q ⊑_{Σ,fin} q′ (q ⊑_{Σ,unr} q′) if q(I) ⊆ q′(I) for each finite (unrestricted) instance I that satisfies Σ.
(a) Show that if Σ is a set of fds and jds, then ⊑_{Σ,fin} and ⊑_{Σ,unr} coincide.
(b) Exhibit a set Σ of fds and inds and tableau queries q, q′ such that q ⊑_{Σ,fin} q′ but q ⋢_{Σ,unr} q′.
Exercise 9.14
(a) Prove Proposition 9.4.3.
(b) Prove Theorem 9.4.4.
(c) Let q be a tableau query and Σ a set of fds, jds, and inds over R, where the set of inds in Σ is acyclic; and suppose that q′, q″ are the final tableau queries of two terminal chasing sequences of q by Σ (where the order of rule application is not restricted). Prove that q′ ≡ q″.
(d) Prove Theorem 9.4.5.
(e) Prove Corollary 9.4.6.
Exercise 9.15
(a) Exhibit an acyclic set Σ of inds and a tableau query q such that chase(q, Σ) is exponential in the size of Σ and q.
(b) [CK86] Prove that implication of an ind by an acyclic set of inds is np-complete. Hint: Use a reduction from the problem of Permutation Generation [GJ79].
(c) [CK86] Recall from Exercise 9.10(b) that an ind is typed if it has the form R[A⃗] ⊆ S[A⃗]. Prove that implication of an ind by a set of fds and an acyclic set of typed inds is np-hard. Hint: Use a reduction from 3-SAT.
Exercise 9.16 [CKV90] In this exercise you will prove Theorem 9.4.7. The exercise begins by focusing on the unirelational case; for notational convenience we omit the relation name from uinds in this context.

Given a set Σ of fds and uinds over R, define G(Σ) to be a multigraph with node set R and two colors of edges: a red edge from A to B if A → B ∈ Σ, and a black edge from A to B if B ⊆ A ∈ Σ. If A and B have red (black) edges in both directions, replace them with an undirected red (black) edge.

(a) Suppose that Σ is closed under the inference rules. Prove that G(Σ) has the following properties:
1. Nodes have red (black) self-loops, and the red (black) subgraph of G(Σ) is transitively closed.
2. The subgraphs induced by the strongly connected components of G(Σ) contain only undirected edges.
3. In each strongly connected component, the red (black) subset of edges forms a collection of node-disjoint cliques (the red and black partitions of nodes could be different).
4. If A_1 . . . A_m → B is an fd in Σ and A_1, . . . , A_m have a common ancestor A in the red subgraph of G(Σ), then G(Σ) contains a red edge from A to B.

(b) Given a set Σ of fds and uinds closed under the inference rules, use G(Σ) to build counterexample instances that demonstrate that σ ∉ Σ implies Σ ⊭_fin σ for an fd or uind σ.
(c) Use the rules to develop a polynomial time algorithm for deciding finite implication for a set of fds and uinds.
(d) Generalize the preceding development to arbitrary database schemas.
Exercise 9.17
(a) Let k > 1 be an integer. Prove that there is a database schema R with at least one unary relation R ∈ R, and a set Σ of fds and inds such that
(i) for each I |= Σ, |I(R)| = 0 or |I(R)| = 1 or |I(R)| ≥ k;
(ii) for each l ≥ k there is an instance I_l |= Σ with |I_l(R)| = l.
(b) Prove that this result cannot be strengthened so that condition (i) reads
(i′) for each I |= Σ, |I(R)| = 0 or |I(R)| = 1 or |I(R)| = k.
Exercise 9.18 [CKV90]
(a) Show that the set of inference rules containing {FD1, FD2, FD3, UIND1, UIND2} and

FD-UIND1: If ∅ → A and R[B] ⊆ R[A], then ∅ → B.
FD-UIND2: If ∅ → A and R[B] ⊆ R[A], then R[A] ⊆ R[B].

is sound and complete for unrestricted logical implication of fds and uinds over a single relation schema R.
(b) Generalize this result to arbitrary database schemas, under the assumption that in all instances, each relation is nonempty.
10 A Larger Perspective

Alice: fds, jds, mvds, ejds, emvds, inds... it's all getting very confusing.
Vittorio: Wait! We'll use logic to unify it all.
Sergio: Yes! Logic will make everything crystal clear.
Riccardo: And we'll get a better understanding of dependencies that make sense.
The dependencies studied in the previous chapters have a strong practical motivation and provide a good setting for studying two of the fundamental issues in dependency theory: deciding logical implication and constructing axiomatizations.
Several new dependencies were introduced in the late 1970s and early 1980s, sometimes motivated by practical examples and later motivated by a desire to understand fundamental theoretical properties of unirelational dependencies or to find axiomatizations for known classes of dependencies. This process culminated with a rather general perspective on dependencies stemming from mathematical logic: Almost all dependencies that have been introduced in the literature can be described as logical sentences having a simple structure, and further syntactic restrictions on that structure yield natural subclasses of dependencies. The purpose of this chapter is to introduce this general class of dependencies and its natural subclasses and to present important results and techniques obtained for them.

The general perspective is given in the first section, along with a simple application of logic to obtain the decidability of implication for a large class of dependencies. It turns out that the chase is an invaluable tool for analyzing implication; this is studied in the second section. Axiomatizations for important subclasses have been developed, again using the chase; this is the topic of the third section. We conclude the chapter with a provocative alternative view of dependencies stemming from relational algebra.

The classes of dependencies studied in this chapter include complex dependencies that would not generally arise in practice. Even if they did arise, they are so intricate that they would probably be unusable: it is unlikely that database administrators would bother to write them down or that software would be developed to use or enforce them. Nevertheless, it is important to repeat that the perspective and results discussed in this chapter have served the important function of providing a unified understanding of virtually all dependencies raised in the literature and, in particular, of providing insight into the boundaries between tractable and intractable problems in the area.
10.1 A Unifying Framework

The fundamental property of all of the dependencies introduced so far is that they essentially say, "The presence of some tuples in the instance implies the presence of certain other tuples in the instance, or implies that certain tuple components are equal." In the case of jds and mvds, the new tuples can be completely specified in terms of the old tuples, but for inds this is not the case. In any case, all of the dependencies discussed so far can be expressed using first-order logic sentences of the form

(∗)  ∀x_1 . . . ∀x_n [φ(x_1, . . . , x_n) → ∃z_1 . . . ∃z_k ψ(y_1, . . . , y_m)],
where {z_1, . . . , z_k} = {y_1, . . . , y_m} − {x_1, . . . , x_n}, and where φ is a (possibly empty) conjunction of atoms and ψ a nonempty conjunction. In both φ and ψ, one finds relation atoms of the form R(w_1, . . . , w_l) and equality atoms of the form w = w′, where each of the w, w′, w_1, . . . , w_l is a variable.
Because we generally focus on sets of dependencies, we make several simplifying assumptions before continuing (see Exercise 10.1a). These include that (1) we may eliminate equality atoms from φ without losing expressive power; and (2) we can also assume without loss of generality that no existentially quantified variable participates in an equality atom in ψ. Thus we define an (embedded) dependency to be a sentence of the foregoing form, where

1. φ is a conjunction of relation atoms using all of the variables x_1, . . . , x_n;
2. ψ is a conjunction of atoms using all of the variables z_1, . . . , z_k; and
3. there are no equality atoms in ψ involving existentially quantified variables.
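For example, over relation schemas R(A, B, C) and S(B, C), familiar dependencies from Chapters 8 and 9 take the following shapes in form (∗):

```latex
\begin{align*}
&\text{fd } R\colon A \to B: && \forall x,y,z,y',z'\,
  [\,R(x,y,z) \wedge R(x,y',z') \rightarrow y = y'\,]\\
&\text{mvd } R\colon A \twoheadrightarrow B: && \forall x,y,z,y',z'\,
  [\,R(x,y,z) \wedge R(x,y',z') \rightarrow R(x,y,z')\,]\\
&\text{ind } R[A] \subseteq S[B]: && \forall x,y,z\,
  [\,R(x,y,z) \rightarrow \exists w\, S(x,w)\,]
\end{align*}
```

In the classification introduced next, the first is a full typed egd, the second a full typed single-head tgd, and the third an embedded untyped single-head tgd (the variable x occurs in the A column of R and the B column of S).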
A dependency is unirelational if at most one relation name is used, and it is multirelational otherwise. To simplify the presentation, the focus in this chapter is almost exclusively on unirelational dependencies. Thus, unless otherwise indicated, the dependencies considered here are unirelational.

We now present three fundamental classifications of dependencies.

Full versus embedded: A full dependency is a dependency that has no existential quantifiers.

Tuple generating versus equality generating: A tuple-generating dependency (tgd) is a dependency in which no equality atoms occur; an equality-generating dependency (egd) is a dependency for which the right-hand formula is a single equality atom.

Typed versus untyped: A dependency is typed if there is an assignment of variables to column positions such that (1) variables in relation atoms occur only in their assigned position, and (2) each equality atom involves a pair of variables assigned to the same position.

It is sometimes important to distinguish dependencies with a single atom in the right-hand formula. A dependency is single head if the right-hand formula involves a single atom; it is multi-head otherwise.

The following result is easily verified (Exercise 10.1b).
Figure 10.1: Dependencies. [The figure classifies dependencies along the three axes: full versus embedded, typed versus untyped, and egds versus multi-head and single-head tgds, locating fds, inds, jds, and mvds within the resulting regions.]
Proposition 10.1.1 Each (typed) dependency is equivalent to a set of (typed) egds and tgds.

It is easy to classify the fds, jds, mvds, ejds, emvds, and inds studied in Chapters 8 and 9 according to the aforementioned dimensions. All except the last are typed. During the late 1970s and early 1980s the class of typed dependencies was studied in depth. In many cases, the results obtained for dependencies and for typed dependencies are equivalent. However, for negative results the typed case sometimes requires more sophisticated proof techniques because it imposes more restrictions.

A classification of dependencies along the three axes is given in Fig. 10.1. The gray square at the lower right indicates that each full multi-head tgd is equivalent to a set of single-head tgds. The intersection of inds and jds stems from trivial dependencies. For example, R[AB] ⊆ R[AB] and ⋈[AB] over relation R(AB) are equivalent [and are syntactically the same when written in the form of (∗)].

There is a strong relationship between dependencies and tableaux. Tableaux provide a convenient notation for expressing and working with dependencies. (As will be seen in Section 10.4, the family of typed dependencies can also be represented using a formalism based on algebraic expressions.) The tableau representation of two untyped egds is shown in Figs. 10.2(a) and 10.2(b). These two egds are equivalent. Note that all egds can be expressed as a pair (T, x = y), where T is a tableau and x, y ∈ var(T). If (T, x = y) is typed, unirelational, and x, y are in the A column of T, then this is referred to as an A-egd.

Parts (c) and (d) of Fig. 10.2 show two full tgds that are equivalent. This is especially interesting because, considered as tableau queries, (T′, t) properly contains (T, t) (see Exercise 10.4). As suggested earlier, each full tgd is equivalent to some set of full single-head tgds. In the following, when considering full tgds, we will assume that they are single head.

Part (e) of Fig. 10.2 shows a typed tgd that is not single head. To represent these within
Figure 10.2: Five dependencies. [The figure gives tableau representations of (a) an untyped egd (S, x = z); (b) an equivalent untyped egd (S′, x = z); (c) a full tgd (T, t); (d) an equivalent full tgd (T′, t); and (e) a typed multi-head tgd, represented as a pair of tableaux (T_1, T_2).]
the tableau notation, we use an ordered pair (T
1
, T
2
), where both T
1
and T
2
are tableaux.
This tgd is not equivalent to any set of single-head tgds (see Exercise 10.6b).
Finite versus Unrestricted Implication Revisited
We now reexamine the issues of finite versus unrestricted implication using the logical
perspective on dependencies. Because all of these lie within first-order logic, |=_fin is co-r.e.
and |=_unr is r.e. (see Chapter 2). Suppose that Σ = {σ₁, . . . , σₙ} is a set of dependencies and
σ a dependency. Then Σ |=_unr σ (Σ |=_fin σ) iff there is no unrestricted (finite) model of
σ₁ ∧ · · · ∧ σₙ ∧ ¬σ. If these are all full dependencies, then this sentence can be rewritten in prenex
normal form, where the quantifier prefix has the form ∃*∀*. (Here each of the σᵢ is universally quantified, and ¬σ contributes the existential quantifiers.) The family of sentences
that have a quantifier prefix of this form (and no function symbols) is called the initially extended Bernays-Schönfinkel class, and it has been studied in the logic community since the
1920s. It is easily verified that finite and unrestricted satisfiability coincide for sentences
in this class (Exercise 10.3). It follows that finite and unrestricted implication coincide for
full dependencies and, as discussed in Chapter 9, it follows that implication is decidable.
On the other hand, because fds and uinds are dependencies, we know from Theorem 9.2.4
that the two forms of implication do not coincide for (embedded) dependencies, and both
are nonrecursive. Although not demonstrated here, these results have been extended to the
family of embedded multivalued dependencies (emvds).
To summarize:
Theorem 10.1.2
1. For full dependencies, finite and unrestricted implication coincide and are decidable.
2. For (typed) dependencies, finite and unrestricted implication do not coincide and
are both undecidable. In fact, this is true for embedded multivalued dependencies.
In particular, finite implication is not r.e., and unrestricted implication is not co-r.e.
10.2 The Chase Revisited
As suggested by the close connection between dependencies and tableaux, chasing is an
invaluable tool for characterizing logical implication for dependencies. In this section we first
use chasing to develop a test for logical implication of arbitrary dependencies by full dependencies. We also present an application of the chase for determining how full dependencies
are propagated to views. We conclude by extending the chase to work with embedded dependencies. In this discussion we focus almost entirely on typed dependencies, but it will
be clear that the arguments can be modified to the untyped case.
Chasing with Full Dependencies
We first state without proof the natural generalization of chasing by fds and jds (Theorem 8.4.12) to full dependencies (see Exercise 10.8). In this context we begin either with a
tableau T, or with an arbitrary tgd (T, T′) or egd (T, x = y). The notion of applying a full
dependency to this is defined in the natural manner. Lemma 8.4.17 and the notation developed for it generalize naturally to this context, as does the following analog of Theorem
8.4.18:

Theorem 10.2.1 If Σ is a set of full dependencies and T is a tableau (σ a dependency),
then chasing T (σ) by Σ yields a unique finite result, denoted chase(T, Σ) (chase(σ, Σ)).
Logical implication of (full or embedded) dependencies by sets of full dependencies
will now be characterized by a straightforward application of the techniques developed in
Section 8.4 (see Exercise 10.8). A dependency σ is trivial if
(a) σ is an egd (T, x = x); or
(b) σ is a tgd (T, T′) and there is a substitution θ for T′ such that θ(T′) ⊆ T and
θ is the identity on var(T ) ∩ var(T′).
Note that if σ is a full tgd, then (b) simply says that T′ ⊆ T.
A dependency σ is a tautology for finite (unrestricted) instances if each finite (unrestricted) instance of appropriate type satisfies σ; that is, if ∅ |=_fin σ (∅ |=_unr σ). It is easily
verified that a dependency is a tautology iff it is trivial.
The following now provides a simple test for implication by full typed dependencies:
Theorem 10.2.2 Let Σ be a set of full typed dependencies and σ a typed dependency.
Then Σ |= σ iff chase(σ, Σ) is trivial.
Recall that the chase relies on a total order ≤ on var. For egd (T, x = y) we assume
that x < y and that these are the least and second to least variables appearing in the tableau;
and for full tgd (T, t), t(A) is least in T(A) for each attribute A. Using this convention, we
can obtain the following:
Corollary 10.2.3 Let Σ be a set of full typed dependencies.
(a) If σ = (T, x = y) is a typed egd, then Σ |= σ iff x and y are identical or y ∉
var(chase(T, Σ)).
(b) If σ = (T, t) is a full typed tgd, then Σ |= σ iff t ∈ chase(T, Σ).
Using the preceding results, it is straightforward to develop a deterministic exponential
time algorithm for testing implication of full dependencies. It is also known that for both
the typed and untyped cases, implication is complete in exptime. (Note that, in contrast,
logical implication for arbitrary sets of initially extended Bernays-Schönfinkel sentences is
known to be complete in nondeterministic exptime.)
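To make the procedure concrete, here is a minimal Python sketch of the chase by fds, viewed as typed egds, together with the implication test of Corollary 10.2.3(a). The encoding (a tableau as a list of tuples of integer variables, an fd as a pair of attribute positions) and all function names are our own illustration, not notation from the text.

```python
# A sketch of chasing a tableau by fds viewed as typed egds
# (Theorem 10.2.2, Corollary 10.2.3(a)).  Variables are ints; the egd
# rule replaces the larger variable by the smaller one, matching the
# ordering convention on var used in the text.

def chase_fds(tableau, fds):
    """Chase a tableau (list of tuples of int variables) by fds.

    Each fd is a pair (lhs, rhs) of attribute positions, e.g. ([0], 1)
    encodes A -> B over schema ABC."""
    rows = [list(t) for t in tableau]
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            for r1 in rows:
                for r2 in rows:
                    if all(r1[i] == r2[i] for i in lhs) and r1[rhs] != r2[rhs]:
                        old, new = max(r1[rhs], r2[rhs]), min(r1[rhs], r2[rhs])
                        for r in rows:              # replace newer by older
                            for i, v in enumerate(r):
                                if v == old:
                                    r[i] = new
                        changed = True
    return [tuple(r) for r in rows]

def implies_fd(fds, lhs, rhs, arity):
    """Test fds |= lhs -> rhs by chasing a two-row tableau that agrees
    exactly on lhs, then checking whether the rhs variables were
    identified (Corollary 10.2.3(a))."""
    r1 = list(range(arity))
    r2 = [r1[i] if i in lhs else arity + i for i in range(arity)]
    chased = chase_fds([tuple(r1), tuple(r2)], fds)
    return chased[0][rhs] == chased[1][rhs]
```

For example, {A → B, B → C} yields A → C over schema ABC, while A → B alone does not yield B → A.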
Dependencies and Views
On a bit of a tangent, we now apply the chase to characterize the interaction of full
dependencies and user views. Let R = {R₁, . . . , Rₙ} be a database schema, where Rⱼ has
associated set Σⱼ of full dependencies for j ∈ [1, n]. Set Σ = {Rᵢ : σ | σ ∈ Σᵢ}. Note that
the elements of Σ are tagged by the relation name they refer to. Suppose that a view is
defined by algebraic expression E : R → S[V]. It is natural to ask what dependencies will
hold in the view. Formally, we say that R : Σ implies E : σ, denoted R : Σ |= E : σ, if E(I)
satisfies σ for each I that satisfies Σ. The notion of R : Σ |= E : Γ for a set Γ is defined in
the natural manner.
To illustrate these notions in a simple setting, we state the following easily verified
result (see Exercise 10.10).

Proposition 10.2.4 Let (R[U], Σ) be a relation schema, where Σ is a set of fds and
mvds, and let V ⊆ U. Then
(a) R : Σ |= [π_V(R)] : X → A iff Σ |= X → A and XA ⊆ V.
(b) R : Σ |= [π_V(R)] : X ↠ Y iff Σ |= X ↠ Z for some Z such that X ⊆ V and Y = Z ∩ V.
Given a database schema R, a family Σ of tagged full dependencies over R, a view
expression E mapping R to S[V], and a full dependency σ, is it decidable whether
R : Σ |= E : σ? If E ranges over the full relational algebra, the answer is no, even if the
only dependencies considered are fds.
Theorem 10.2.5 It is undecidable, given database schema R, tagged fds Σ, algebra
expression E : R → S and fd σ over S, whether R : Σ |= E : σ.

Proof Let R = {R[U], S[U]}, let σ be the fd ∅ → U, and let Σ = {R : σ}. Given two algebra expressions E₁, E₂ : S → R, consider

E = R ∪ [E₁(S) − E₂(S)] ∪ [E₂(S) − E₁(S)].

Then R : Σ |= E : σ iff E₁ ≡ E₂. This is undecidable by Corollary 6.3.2.
In contrast, we now present a decision procedure, based on the chase, for inferring
view dependencies when the view is defined using the SPCU algebra.

Theorem 10.2.6 It is decidable whether R : Σ |= E : σ, if E is an SPCU query and
Σ ∪ {σ} is a set of (tagged) full dependencies.
Crux We prove the result for SPC queries that do not involve constants, and leave the
extension to include union and constants for the reader (Exercise 10.12).

Let E : R → S[V] be an SPC expression, where S ∉ R. Recall from Chapter 4 (Theorem 4.4.8; see also Exercise 4.18) that for each such expression E there is a tableau
mapping τ_E = (T, t) equivalent to E.

Assume now that Σ is a set of full dependencies and σ a full tgd. (The case where σ is
an egd is left for the reader.) Let the tgd σ over S be expressed as the tableau (W, w). Create
a new free instance Z out of (T, t) and W as follows: For each tuple u ∈ W, set T_u = μ(T),
where valuation μ maps t to u, and maps all other variables in T to new distinct variables.
Set Z = ∪_{u ∈ W} T_u. It can now be verified that R : Σ |= E : σ iff w ∈ E(chase(Z, Σ)).
In the case where Σ ∪ {σ} is a set of fds and mvds and the view is defined by an SPCU
expression, testing the implication of a view dependency can be done in polynomial time;
if jds are involved, the problem is np-complete; and if full dependencies are considered, the
problem is exptime-complete.
Recall from Section 8.4 that a satisfaction family is a family sat(R, Σ) for some set Σ
of dependencies. Suppose now that SPC expression E : R[U] → S[V] is given, and that Σ
is a set of full dependencies over R. Theorem 10.2.6, suitably generalized, shows that the
family Γ of full dependencies implied by Σ for view E is recursive. This raises the natural
question: Does E(sat(R, Σ)) = sat(Γ), that is, does Γ completely characterize the image
of sat(R, Σ) under E? The affirmative answer to this question is stated next. This result
follows from the proof of Theorem 10.2.6 (see Exercise 10.13).
Theorem 10.2.7 If Σ is a set of full dependencies over R and E : R → S is an SPC
expression without constants, then there is a set Γ of full dependencies over S such that
E(sat(R, Σ)) = sat(S, Γ).

Suppose now that E : R[U] → S[V] is given, and Σ is a finite set of dependencies.
Can a finite set Γ be found such that E(sat(R, Σ)) = sat(S, Γ)? Even in the case where
E is a simple projection and Σ is a set of fds, the answer to this question is sometimes
negative (Exercise 10.11c).
Chasing with Embedded Dependencies
We now turn to the case of (embedded) dependencies. From Theorem 10.1.2(b), it is
apparent that we cannot hope to generalize Theorem 10.2.2 to obtain a decision procedure
for (finite or unrestricted) implication of dependencies. As initially discussed in Chapter 9,
the chase need not terminate if dependencies are used. All is not lost, however, because we
are able to use the chase to obtain a proof procedure for testing unrestricted implication of
a dependency by a set of dependencies.
For nonfull tgds, we shall use the following rule. We present the rule as it applies to
tableaux, but it can also be used on dependencies.

tgd rule: Let T be a tableau, and let σ = (S, S′) be a tgd. Suppose that there is a valuation
ν for S that embeds S into T, but no extension ν′ to var(S) ∪ var(S′) of ν such that
ν′(S′) ⊆ T. In this case σ can be applied to T.

Let ν₁, . . . , νₙ be a list of all valuations having this property. For each i ∈ [1, n],
(nondeterministically) choose a distinct extension, i.e., an extension ν′ᵢ to var(S) ∪
var(S′) of νᵢ such that each variable in var(S′) − var(S) is assigned a distinct new
variable greater than all variables in T. (The same variable is not chosen in two
extensions ν′ᵢ, ν′ⱼ, i ≠ j.)

The result of applying σ to T is T ∪ {ν′ᵢ(S′) | i ∈ [1, n]}.

This rule is nondeterministic because variables not occurring in T are chosen for the
existentially quantified variables of σ. We assume that some fixed mechanism is used for
selecting these variables when given T, (S, S′), and ν.
The notion of a chasing sequence T = T₁, T₂, . . . of a tableau (or dependency) by a
set Σ of dependencies is now defined in the obvious manner. Clearly, this sequence may be
infinite.
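As an illustration, the following Python sketch performs a single application of the tgd rule to a tableau encoded as a list of tuples of variables. Valuations are enumerated by brute force; the encoding and the function names are our own, and `counter` supplies the fresh variables that must be greater than all variables already in T.

```python
from itertools import product

def valuations(rows, target):
    """All consistent variable mappings sending each row of `rows`
    to some row of the tableau `target`."""
    found = []
    for choice in product(target, repeat=len(rows)):
        nu, ok = {}, True
        for row, trow in zip(rows, choice):
            for v, c in zip(row, trow):
                if nu.setdefault(v, c) != c:
                    ok = False
        if ok:
            found.append(nu)
    return found

def apply_tgd(T, S, S2, counter):
    """One application of the tgd rule for the tgd (S, S2) to tableau T.
    counter is a one-element list from which fresh variables (greater
    than those in T) are drawn."""
    additions = []
    for nu in valuations(S, T):
        # is there an extension of nu embedding S2 into T?
        satisfied = any(all(mu[v] == nu[v] for v in nu if v in mu)
                        for mu in valuations(S2, T))
        if not satisfied:
            ext = dict(nu)
            for row in S2:                  # fresh variables for the
                for v in row:               # existential variables
                    if v not in ext:
                        ext[v] = counter[0]
                        counter[0] += 1
            additions.extend(tuple(ext[v] for v in row) for row in S2)
    return list(T) + additions
```

For instance, applying the (untyped, full) transitivity tgd ({(x, y), (y, z)}, {(x, z)}) to the tableau {(0, 1), (1, 2)} adds the row (0, 2); applying an embedded tgd such as ({(x, y)}, {(y, w)}) invents a fresh variable for w.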
Example 10.2.8 Let Σ = {σ₁, σ₂, σ₃}, where σ₁ and σ₂ are typed tgds and σ₃ is a typed
egd, each over schema ABCD and given in tableau form.

[Tableaux for σ₁, σ₂, and σ₃ over attributes A B C D.]
Figure 10.3: Parts of a chasing sequence, showing four stages (a) through (d), obtained by successive applications of σ₁, σ₃, σ₂, and then σ₃
We show here only the relevant variables of σ₁, σ₂, and σ₃; all other variables are
assumed to be distinct. Here σ₃ ≡ B → D.

In Fig. 10.3, we show some stages of a chasing sequence that demonstrates that
Σ |=_unr A → D. To do that, the chase begins with the tableau {⟨x₁, x₂, x₃, x₄⟩, ⟨x₁, x₅, x₆, x₇⟩}.
Figure 10.3 shows the results of applying σ₁, σ₃, σ₂, σ₃ in turn (left to right). This
sequence implies that Σ |=_unr A → D, because variables x₄ and x₇ are identified.
Consider now the typed tgds σ₄ and σ₅:

[Tableaux for the typed tgds σ₄ and σ₅ over attributes A B C D.]
The chasing sequence of Fig. 10.3 also implies that Σ |=_unr σ₄, because (x₁₀, x₂, x₆,
x₄) is in the second tableau. On the other hand, we now argue that Σ ⊭_unr σ₅. Consider
the chasing sequence beginning as the one shown in Fig. 10.3, and continuing by applying
the sequence σ₁, σ₃, σ₂, σ₃ repeatedly. It can be shown that this chasing sequence will not
terminate and that (x₁, x₂, x₆, v) does not occur in the resulting infinite sequence for any
variable v (see Exercise 10.16). It follows that Σ ⊭_unr σ₅; in particular, the infinite result
of the chasing sequence is a counterexample to this implication. On the other hand, this
chasing sequence does not alone provide any information about whether Σ |=_fin σ₅. It can
be shown that this also fails.
To ensure that all relevant dependencies have a chance to influence a chasing sequence,
we focus on chasing sequences that satisfy the following conditions:
(1) Whenever an egd is applied, it is applied repeatedly until it is no longer
applicable.
(2) No dependency is starved (i.e., each dependency that is applicable infinitely
often is applied infinitely often).
Even if these conditions are satisfied, it is possible to have two chasing sequences of a
tableau T by typed dependencies, where one is finite and the other infinite (see Exercise 10.14).
Now consider an infinite chasing sequence T₁ = T, T₂, . . . . Let us denote it by T, Σ.
Because egds may be applied arbitrarily late in T, Σ, for each n, tuples of Tₙ may be
modified as the result of later applications of egds. Thus we cannot simply take the union
of some tail Tₙ, Tₙ₊₁, . . . to obtain the result of the chase. As an alternative, for the chasing
sequence T, Σ = T₁, T₂, . . . , we define

chase(T, Σ) = {u | ∃n ∀m > n (u ∈ Tₘ)}.

This is nonempty because (1) the new variables introduced by the tgd rule are always
greater than variables already present; and (2) when the egd rule is applied, the newer
variable is replaced by the older one.
By generalizing the techniques developed, it is easily seen that the (possibly infinite)
resulting tableau satisfies all dependencies in Σ. More generally, let Σ be a set of dependencies and σ a dependency. Then one can show that Σ |=_unr σ iff for some chasing sequence
σ, Σ of σ using Σ, chase(σ, Σ) is trivial. Furthermore, it can be shown that
• if for some chasing sequence σ, Σ of σ using Σ, chase(σ, Σ) is trivial, then it is so
for all chasing sequences of σ using Σ; and
• for each chasing sequence σ, Σ = T₁, . . . , Tₙ, . . . of σ using Σ, chase(σ, Σ) is trivial
iff Tᵢ is trivial for some i.
This shows that, for practical purposes, it suffices to generate some chasing sequence of σ
using Σ and stop as soon as some tableau in the sequence becomes trivial.
10.3 Axiomatization
A variety of axiomatizations have been developed for the family of dependencies and for
subclasses such as the full typed tgds. In view of Theorem 10.1.2, sound and complete
recursively enumerable axiomatizations do not exist for finite implication of dependencies.
This section presents an axiomatization for the family of full typed tgds and typed egds
(which is sound and complete for both finite and unrestricted implication). A generalization
to the embedded case (for unrestricted implication) has also been developed (see Exercise
10.21). The axiomatization presented here is closely related to the chase. In the next
section, a very different kind of axiomatization for typed dependencies is discussed.

We now focus on the full typed dependencies (i.e., on typed egds and full typed
tgds). The development begins with the introduction of a technical tool for forming the
composition of tableau queries. The axiomatization then follows.
Composition of Typed Tableaux
Suppose that σ = (T, t) and τ = (S, s) are two full typed tableau queries over relation
schema R. It is natural to ask whether there is a tableau query corresponding to the
composition of σ followed by τ, that is, with the property that for each instance I over R,

(τ ∘ σ)(I) = τ(σ(I))

and, if so, whether there is a simple way to construct it. We now provide an affirmative
answer to both questions. The syntactic composition of full typed tableau mappings will
be a valuable tool for combining pairs of full typed tgds in the axiomatization presented
shortly.
Let T = {t₁, . . . , tₙ} and S = {s₁, . . . , sₘ}. Suppose that tuple w is in τ(σ(I)). Then
there is an embedding ν of s₁, . . . , sₘ into σ(I) such that ν(s) = w. It follows that for each
j ∈ [1, m] there is an embedding νⱼ of T into I, with νⱼ(t) = ν(sⱼ). This suggests that the
tableau of τ ∘ σ should have mn tuples, with a block of n tuples for each sⱼ.

To be more precise, for each j ∈ [1, m], let T_sⱼ be θⱼ(T), where θⱼ is a substitution that
maps t(A) to sⱼ(A) for each attribute A of R and maps each other variable of T to a new,
distinct variable not used elsewhere in the construction. Now set

[S](T, t) = ∪{T_sⱼ | j ∈ [1, m]}  and  τ ∘ σ = ([S](T, t), s).
The following is now easily verified (see Exercise 10.18):

Proposition 10.3.1 For full typed tableau queries σ and τ over R, and for each instance
I of R, (τ ∘ σ)(I) = τ(σ(I)).
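The construction of [S](T, t) can be sketched in Python as follows. The encoding of tableaux as lists of tuples of variables, the fresh-variable scheme ('p', i), and the brute-force evaluator are all our own conventions for illustration.

```python
from itertools import product

def valuations(rows, target):
    """All consistent variable-to-constant mappings sending each row
    of `rows` to some tuple of `target`."""
    out = []
    for choice in product(list(target), repeat=len(rows)):
        nu, ok = {}, True
        for row, tup in zip(rows, choice):
            for v, c in zip(row, tup):
                if nu.setdefault(v, c) != c:
                    ok = False
        if ok:
            out.append(nu)
    return out

def evaluate(T, t, I):
    """Evaluate tableau query (T, t) on instance I (a set of tuples)."""
    return {tuple(nu[v] for v in t) for nu in valuations(T, I)}

def compose(T, t, S, s):
    """Syntactic composition of full typed tableau queries
    sigma = (T, t) and tau = (S, s): returns ([S](T, t), s).
    One renamed copy of T is produced per row of S."""
    fresh = 0
    U = []
    for srow in S:
        theta = dict(zip(t, srow))          # theta_j : t(A) -> s_j(A)
        renamed = {}
        for row in T:
            new_row = []
            for v in row:
                if v in theta:
                    new_row.append(theta[v])
                else:
                    if v not in renamed:     # fresh variable per block
                        renamed[v] = ('p', fresh)
                        fresh += 1
                    new_row.append(renamed[v])
            U.append(tuple(new_row))
    return U, s
```

Evaluating the composed query on an instance and comparing the result with τ(σ(I)) gives a direct check of Proposition 10.3.1 on concrete data.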
Example 10.3.2 The following table shows two full typed tableau queries and their
composition.

[Tableaux for σ = (T, t) and τ = (S, s) over schema ABC, and their composition τ ∘ σ, whose tableau contains one renamed copy of T per row of S, with fresh variables p₁, . . . , p₆.]
It is straightforward to verify that the syntactic operation of composition is associative.
Suppose that σ and τ are full typed tableau queries. It can be shown by simple chasing
arguments that {σ, τ} and {τ ∘ σ} are equivalent as sets of dependencies. It follows that full
typed tgds are closed under finite conjunction, in the sense that each finite set of full typed
tgds over a relation schema R is equivalent to a single full typed tgd. This property does
not hold in the embedded case (see Exercise 10.20).
An Axiomatization for Full Typed Dependencies
For full typed tgds σ = (T, t) and τ = (S, s), we say that σ embeds into τ, denoted
σ ↪ τ, if there is a substitution θ such that θ(T) ⊆ S and θ(t) = s. Recall from Chapter 4
that (considered as tableau queries) τ ⊑ σ iff σ ↪ τ. As a result we have that if σ ↪ τ,
then σ |= τ, although the converse does not necessarily hold. Analogously, for A-egds
σ = (T, x = y) and τ = (S, v = w), we define σ ↪ τ if there is a substitution θ such that
θ(T) ⊆ S, and θ({x, y}) = {v, w}. Again, if σ ↪ τ, then σ |= τ.

We now list the axioms for full typed tgds:

FTtgd1: (triviality) For each free tuple t without constants, ({t}, t).
FTtgd2: (embedding) If σ and σ ↪ τ, then τ.
FTtgd3: (composition) If σ and τ, then τ ∘ σ.

The following rules focus exclusively on typed egds:

Tegd1: (triviality) If x ∈ var(T ), then (T, x = x).
Tegd2: (embedding) If σ and σ ↪ τ, then τ.
The final rules combining egds and full typed tgds use the following notation. Let
R[U] be a relation schema. For A ∈ U, Ā denotes U − {A}. Given typed A-egd σ =
(T, x = y) over R, define free tuples u_x, u_y such that u_x(A) = x, u_y(A) = y, and u_x[Ā] =
u_y[Ā] consists of distinct variables not occurring in T. Define two full typed tgds σ_x =
(T ∪ {u_y}, u_x) and σ_y = (T ∪ {u_x}, u_y).

FTD1: (conversion) If σ = (T, x = y), then σ_x and σ_y.
FTD2: (composition) If (T, t) and (S, x = y), then ([S](T, t), x = y).
We now have the following:
Theorem 10.3.3 The set {FTtgd1, FTtgd2, FTtgd3, Tegd1, Tegd2, FTD1, FTD2}
is sound and complete for (finite and unrestricted) logical implication of full typed
dependencies.

Crux Soundness is easily verified. We illustrate completeness by showing that the FTtgd
rules are complete for tgds. Suppose that Σ |= σ = (T, t), where Σ is a set of full typed
tgds and (T, t) is full and typed. By Theorem 10.2.2 there is a chasing sequence of T by Σ
yielding T′ with t ∈ T′. Let σ₁, . . . , σₙ (n ≥ 0) be the sequence of elements of Σ used
in the chasing sequence. It follows that t ∈ σₙ(. . . (σ₁(T)) . . .), and by Proposition 10.3.1,
t ∈ (σₙ ∘ · · · ∘ σ₁)(T). This implies that (σₙ ∘ · · · ∘ σ₁) ↪ (T, t). A proof of σ from Σ is
now obtained by starting with σ₁ (or ({s}, s) if n = 0), followed by n − 1 applications of
FTtgd3 and one application of FTtgd2 (see Exercise 10.18b).

The preceding techniques and the chase can be used to develop an axiomatization of
unrestricted implication for the family of all typed dependencies.
10.4 An Algebraic Perspective
This section develops a very different paradigm for specifying dependencies based on the
use of algebraic expressions. Surprisingly, the class of dependencies formed is equivalent to
the class of typed dependencies. We also present an axiomatization that is rooted primarily
in algebraic properties rather than chasing and tableau manipulations.
We begin with examples that motivate and illustrate this approach.
Example 10.4.1 Let R[ABCD] be a relation schema. Consider the tgd σ of Fig. 10.4 and
the algebraic expression

π_AC(π_AB(R) ⋈ π_BC(R)) ⊆ π_AC(R).

It is straightforward to verify that for each instance I over ABCD,

I |= σ iff π_AC(π_AB(I) ⋈ π_BC(I)) ⊆ π_AC(I).

Now consider dependency τ. One can similarly verify that for each instance I over
ABCD,

I |= τ iff π_AC(π_AB(I) ⋈ π_BC(I)) ⊆ π_AC(π_AD(I) ⋈ π_CD(I)).
Figure 10.4: Dependencies of Example 10.4.1, showing the tgds σ and τ over ABCD in tableau form
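The algebraic characterization of σ can be checked mechanically. Below is a small Python sketch that tests the inclusion π_AC(π_AB(I) ⋈ π_BC(I)) ⊆ π_AC(I) directly; the positional encoding of instances over ABCD and the function name are our own.

```python
def satisfies_sigma(I):
    """Check pi_AC(pi_AB(I) join pi_BC(I)) <= pi_AC(I) for an instance
    I over ABCD, encoded as a set of 4-tuples (positions 0..3)."""
    ab = {(t[0], t[1]) for t in I}
    bc = {(t[1], t[2]) for t in I}
    # natural join of the two projections on the shared B column
    lhs = {(a, c) for (a, b) in ab for (b2, c) in bc if b == b2}
    return lhs <= {(t[0], t[2]) for t in I}
```

For example, the instance {(1, 2, 3, 0), (4, 2, 3, 0)} satisfies the inclusion, while {(1, 2, 9, 0), (4, 2, 3, 0)} does not, since the join produces the AC-pair (1, 3) that is absent from π_AC(I).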
The observation of this example can be generalized in the following way. A project-join (PJ) expression is an algebraic expression over a single relation schema using only
projection and natural join. We describe next a natural recursive algorithm for translating
PJ expressions into tableau queries (see Exercise 10.23). (This algorithm is also implicit in
the equivalence proofs of Chapter 4.)

Algorithm 10.4.2
Input: a PJ expression E over relation schema R[A₁, . . . , Aₙ]
Output: a tableau query (T, t) equivalent to E
Basis: If E is simply R, then return ({⟨x₁, . . . , xₙ⟩}, ⟨x₁, . . . , xₙ⟩).
Inductive steps:
1. If E is π_X(q) and the tableau query of q is (T, t), then return (T, π_X(t)).
2. Suppose E is q₁ ⋈ q₂ and the tableau query of qᵢ is (Tᵢ, tᵢ) for i ∈ [1, 2].
Let X be the intersection of the output sorts of q₁ and q₂. Assume without
loss of generality that the two tableaux use distinct variables except that
t₁(A) = t₂(A) for A ∈ X. Then return (T₁ ∪ T₂, t₁ ⋈ t₂).
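Algorithm 10.4.2 admits a direct recursive implementation. The following Python sketch uses our own encoding: a PJ expression is a nested tuple ('R',), ('pi', X, sub), or ('join', sub1, sub2); a tableau row and the summary are dicts from attributes to variables. A brute-force evaluator is included only to exercise the translation.

```python
import itertools

counter = itertools.count()   # global supply of fresh variables

def translate(expr, attrs):
    """Algorithm 10.4.2: translate a PJ expression over R[attrs] into
    a tableau query (T, t); rows and t map attributes to variables."""
    if expr[0] == 'R':
        row = {A: next(counter) for A in attrs}
        return [dict(row)], dict(row)
    if expr[0] == 'pi':                      # projection: restrict t
        T, t = translate(expr[2], attrs)
        return T, {A: t[A] for A in expr[1]}
    if expr[0] == 'join':                    # natural join: merge,
        T1, t1 = translate(expr[1], attrs)   # identifying summary
        T2, t2 = translate(expr[2], attrs)   # variables on shared sort
        ren = {t2[A]: t1[A] for A in set(t1) & set(t2)}
        T2 = [{A: ren.get(v, v) for A, v in row.items()} for row in T2]
        t2 = {A: ren.get(v, v) for A, v in t2.items()}
        return T1 + T2, {**t1, **t2}
    raise ValueError(expr)

def evaluate(T, t, I, attrs):
    """Evaluate tableau query (T, t) on instance I (tuples over attrs);
    output columns follow sorted attribute order of t."""
    out = set()
    for choice in itertools.product(list(I), repeat=len(T)):
        nu, ok = {}, True
        for row, tup in zip(T, choice):
            for A, val in zip(attrs, tup):
                if nu.setdefault(row[A], val) != val:
                    ok = False
        if ok:
            out.add(tuple(nu[t[A]] for A in sorted(t)))
    return out
```

Translating π_AC(π_AB(R) ⋈ π_BC(R)) over ABC, for instance, yields a two-row tableau whose rows share only the B-column variable, as in Example 10.4.1.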
Suppose now that (T, T′) is a typed dependency with the property that for some free
tuple t, (T, t) is the tableau associated by this algorithm with PJ expression E, and (T′, t) is
the tableau associated with PJ expression E′. Suppose also that the only variables common
to T and T′ are those in t. Then for each instance I, I |= (T, T′) iff E(I) ⊆ E′(I).

This raises three natural questions: (1) Is the family of PJ inclusions equivalent to the
set of typed tgds? (2) If not, can this paradigm be extended to capture all typed tgds? (3)
Can this paradigm be extended to capture typed egds as well as tgds?

The answer to the first question is no (see Exercise 10.24).
The answer to the second and third questions is yes. This relies on the notion
of extended relations and extended project-join expressions. Let R[A₁, . . . , Aₙ] be a
relation schema. For each i ∈ [1, n], we suppose that there is an infinite set of attributes A¹ᵢ, A²ᵢ, . . . , called copies of Aᵢ. The extended schema of R is the schema
R̃[A¹₁, . . . , A¹ₙ, A²₁, . . . , A²ₙ, . . .]. For an instance I of R, the extended instance of R corresponding to I, denoted Ĩ, has one tuple ũ for each tuple u ∈ I, where ũ(Aʲᵢ) = u(Aᵢ) for
each i ∈ [1, n] and j > 0.
Figure 10.5: tgd and egd of Example 10.4.3

An extended project-join expression over R is a PJ expression over R̃ such that a
projection operator is applied first to each occurrence of R̃. (This ensures that the evaluation
and the result of such expressions involve only finite objects.) Given two extended PJ
expressions E and E′ with the same target sort, and instance I over R, E(I) ⊆ₑ E′(I)
denotes E(Ĩ) ⊆ E′(Ĩ).
An algebraic dependency is a syntactic expression of the form E ⊆ₑ E′, where E and
E′ are extended PJ expressions over a relation schema R with the same target sort. An
instance I over R satisfies E ⊆ₑ E′ if E(I) ⊆ₑ E′(I), that is, if E(Ĩ) ⊆ E′(Ĩ).

This is illustrated next.
Example 10.4.3 Consider the dependency σ of Fig. 10.5. Let

E = π_ACD¹(R̃) ⋈ π_C¹D¹(R̃) ⋈ π_A¹C¹D(R̃).

Here we use A, A¹, . . . to denote different copies of the attribute A, etc.

It can be shown that, for each instance I over ABCD, I |= σ iff E₁(I) ⊆ₑ E₂(I), where

E₁ = π_ACD(E)
E₂ = π_ACD(π_AB¹(R̃) ⋈ π_B¹CD(R̃)).

(See Exercise 10.25.)
Consider now the functional dependency A → BC over ABCD. This is equivalent to

π_ABC(R̃) ⋈ π_AB¹C¹(R̃) ⊆ₑ π_ABCB¹C¹(R̃).
Finally, consider τ of Fig. 10.5. This is equivalent to F₁ ⊆ₑ F₂, where

F₁ = π_AA¹(E)
F₂ = π_AA¹(R̃).
We next see that algebraic dependencies correspond precisely to typed dependencies.
Theorem 10.4.4 For each algebraic dependency, there is an equivalent typed depen-
dency, and for each typed dependency, there is an equivalent algebraic dependency.
Crux Let R[A₁, . . . , Aₙ] be a relation schema, and let E ⊆ₑ E′ be an algebraic dependency over R, where E and E′ have target sort X. Without loss of generality, we can
assume that there is k such that the sets of attributes involved in E and E′ are contained
in Ũ = {A¹₁, . . . , A¹ₙ, . . . , Aᵏ₁, . . . , Aᵏₙ}. Using Algorithm 10.4.2, construct tableau queries
σ = (T, t) and σ′ = (T′, t′) over Ũ corresponding to E and E′. We assume without loss of
generality that σ and σ′ do not share any variables except that t(A) = t′(A) for each A ∈ X.

Consider T (over Ũ). For each tuple s ∈ T and j ∈ [1, k],
• construct an atom R(x₁, . . . , xₙ), where xᵢ = s(Aʲᵢ) for each i ∈ [1, n];
• construct atoms s(Aʲᵢ) = s(Aʲ'ᵢ) for each i ∈ [1, n] and j, j′ satisfying
1 ≤ j < j′ ≤ k.

Let φ(x₁, . . . , xₚ) be the conjunction of all atoms obtained from σ in this manner. Let
ψ(y₁, . . . , y_q) be constructed analogously from σ′. It can now be shown (Exercise 10.26)
that E ⊆ₑ E′ is equivalent to the typed dependency

∀x₁ . . . ∀xₚ (φ(x₁, . . . , xₚ) → ∃z₁ . . . ∃z_r ψ(y₁, . . . , y_q)),

where z₁, . . . , z_r is the set of variables in {y₁, . . . , y_q} − {x₁, . . . , xₚ}.

For the converse, we generalize the technique used in Example 10.4.3. For each attribute A, one distinct copy of A is used for each variable occurring in the A column.
An Axiomatization for Algebraic Dependencies
Figure 10.6 shows a family of inference rules for algebraic dependencies. Each of these
rules stems from an algebraic property of join and project, and only the last explicitly uses
a property of extended instances. (It is assumed here that all expressions are well formed.)
The use of these rules to infer dependencies is considered in Exercises 10.31 and 10.32.
It can be shown that:
Theorem 10.4.5 The family {AD1, . . . , AD8} is sound and complete for inferring
unrestricted implication of algebraic dependencies.
To conclude this discussion of the algebraic perspective on dependencies, we consider
a new operation, direct product, and the important notion of faithfulness.

Faithfulness and Armstrong Relations

We show now that sets of typed dependencies have Armstrong relations,¹ although these
may sometimes be infinite. To accomplish this, we first introduce a new way to combine
instances and an important property of it.

¹ Recall that given a set Σ of dependencies over some schema R, an Armstrong relation for Σ is an
instance I over R that satisfies Σ and violates every dependency not implied by Σ.
AD1: (Idempotency of Projection)
(a) π_X(π_Y(E)) =ₑ π_X(E)
(b) π_sort(E)(E) =ₑ E
AD2: (Idempotency of Join)
(a) E ⋈ π_X(E) =ₑ E
(b) π_sort(E)(E ⋈ E′) ⊆ₑ E
AD3: (Monotonicity of Projection)
If E ⊆ₑ E′ then π_X(E) ⊆ₑ π_X(E′)
AD4: (Monotonicity of Join)
If E ⊆ₑ E′, then E ⋈ E″ ⊆ₑ E′ ⋈ E″
AD5: (Commutativity of Join)
E ⋈ E′ =ₑ E′ ⋈ E
AD6: (Associativity of Join)
(E ⋈ E′) ⋈ E″ =ₑ E ⋈ (E′ ⋈ E″)
AD7: (Distributivity of Projection over Join)
Suppose that X ⊆ sort(E) and Y ⊆ sort(E′). Then
(a) π_XY(E ⋈ E′) ⊆ₑ π_XY(E ⋈ π_Y(E′)).
(b) If sort(E) ∩ sort(E′) ⊆ Y, then equality holds in (a).
AD8: (Extension)
If X ⊆ sort(R̃) and A, A′ are copies of the same attribute, then
π_AA′(R̃) ⋈ π_AX(R̃) =ₑ π_AA′X(R̃).

Figure 10.6: Algebraic dependency axioms
Let R be a relation schema of arity n. We blur our notation and use elements of
dom × dom as if they were elements of dom. Given tuples u = ⟨x₁, . . . , xₙ⟩ and v =
⟨y₁, . . . , yₙ⟩, we define the direct product of u and v to be

u ⊗ v = ⟨(x₁, y₁), . . . , (xₙ, yₙ)⟩.

The direct product of two instances I, J over R is

I ⊗ J = {u ⊗ v | u ∈ I, v ∈ J}.
This is generalized to form k-ary direct product instances for each finite k. Furthermore,
if J is a (finite or infinite) index set and {Iⱼ | j ∈ J} is a family of instances over R, then
⊗{Iⱼ | j ∈ J} denotes the (possibly infinite) direct product of this family of instances.
A dependency σ is faithful if for each family {Iⱼ | j ∈ J} of nonempty instances,

⊗{Iⱼ | j ∈ J} |= σ if and only if ∀j ∈ J, Iⱼ |= σ.

(The restriction that the instances be nonempty is important; if this were omitted then no
nontrivial dependency would be faithful.)
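The direct product, and faithfulness in the special case of a single fd (fds being typed dependencies), can be illustrated with a short Python sketch; the encoding of instances as sets of tuples and of fds by attribute positions is our own.

```python
def direct_product(I, J):
    """Direct product of two instances over the same schema:
    pair up the entries of each tuple coordinatewise."""
    return {tuple(zip(u, v)) for u in I for v in J}

def satisfies_fd(I, lhs, rhs):
    """Does instance I satisfy the fd lhs -> rhs (attribute positions)?"""
    return all(u[rhs] == v[rhs]
               for u in I for v in I
               if all(u[i] == v[i] for i in lhs))
```

On nonempty instances I and J, the product I ⊗ J satisfies an fd exactly when both factors do: a violation in either factor survives pairing, since the paired tuples still agree on the left-hand side while disagreeing on the right.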
The following holds because the ⊗ operator commutes with project, join, and extension (see Exercise 10.29).
Proposition 10.4.6 The family of typed dependencies is faithful.

We can now prove that each set of typed dependencies has an Armstrong relation.

Theorem 10.4.7 Let Σ be a set of typed dependencies over relation R. Then there is a
(possibly infinite) instance I_Σ such that for each typed dependency σ over R, I_Σ |= σ iff
Σ |=_unr σ.

Proof Let Γ be the set of typed dependencies over R not in Σ*. For each σ ∈ Γ, let
I_σ be a nonempty instance that satisfies Σ but not σ. Then ⊗{I_σ | σ ∈ Γ} is the desired
relation.
This result cannot be strengthened to yield finite Armstrong relations, because one can
exhibit a finite set of typed tgds with no finite Armstrong relation.
Bibliographic Notes
The papers [FV86, Kan91, Var87] all provide excellent surveys on the motivations and
history of research into relational dependencies; these have greatly influenced our treatment
of the subject here.
Because readers could be overwhelmed by the great number of dependency theory
terms, we have used a subset of the terminology. For instance, the typed single-head tgds
(that were studied in depth) are called template dependencies. In addition, the typed unirelational dependencies that are considered here were historically called embedded implicational dependencies (eids); and their full counterparts were called implicational dependencies (ids). We use this terminology in the following notes.
After the introduction of fds and mvds, there was a flurry of research into special
classes of dependencies, including jds and inds. Embedded dependencies were first introduced in [Fag77b], which defined embedded multivalued dependencies (emvds); these are
mvds that hold in a projection of a relation. Embedded jds are defined in the analogous
fashion. This is distinct from projected jds [MUV84]; these are template dependencies
that correspond to join dependencies, except that some of the variables in the summary
row may be distinct variables not occurring elsewhere in the dependency. Several other specialized dependencies were introduced. These include subset dependencies [SW82], which
generalize mvds; mutual dependencies [Nic78], which say that a relation is a 3-ary join;
generalized mutual dependencies [MM79]; transitive dependencies [Par79], which generalize fds and mvds; extended transitive dependencies [PPG80], which generalize mutual
dependencies and transitive dependencies; and implied dependencies [GZ82], which form
a specialized class of egds. In many cases these classes of dependencies were introduced
in attempts to provide axiomatizations for the emvds, jds, or superclasses of them. Although most of the theoretical work studies dependencies in an abstract setting, [Sci81,
Sci83] study families of mvds and inds as they arise in practical situations.
The proliferation of dependencies spawned interest in the development of a unifying
framework that subsumed essentially all of them. Nicolas [Nic78] is credited with first
observing that fds, mvds, and others have a natural representation in first-order logic.

234 A Larger Perspective

At roughly the same time, several researchers reached essentially the same generalized class
of dependencies that was studied in this chapter. [BV81a] introduced the class of tgds and
egds, defined using the paradigm of tableaux. Chasing was studied in connection with both
full and embedded dependencies in [BV84c]. Reference [Fag82b] introduced the class of
typed dependencies, essentially the same family of dependencies but presented in the
paradigm of first-order logic. Simultaneously, [YP82] introduced the algebraic dependencies,
which present the same class in algebraic terms. A generalization of algebraic dependencies
to the untyped case is presented in [Abi83].
Related general classes of dependencies introduced at this time are the general
dependencies [PJ81], which are equivalent to the full typed tgds, and generalized dependency
constraints [GJ82], which are the full dependencies.
Importantly, several kinds of constraints that lie outside the dependencies described in
this chapter have been studied in the literature. Research on the use of arbitrary first-order
logic sentences as constraints includes [GM78, Nic78, Var82b]. A different extension of
dependencies based on partitioning relationships, which are not expressible in first-order
logic, is studied in [Cos87]. Another kind of dependency is the afunctional dependency of
[BP83], which, as the name suggests, focuses on the portions of an instance that violate
an fd. The partition dependencies [CK86] are not first-order expressible and are powerful;
interestingly, finite and unrestricted implication coincide for this class of dependencies and
are decidable in PTIME. Order [GH83] and sort-set dependencies [GH86] address properties
of instances defined in terms of orderings on the underlying domain elements. There is
provably no finite axiomatization for order dependencies, or for sort-set dependencies and
fds considered together (Exercise 9.8).
Another broad class of constraints not included in the dependencies discussed in this
chapter is dynamic constraints, which focus on how data change over time [CF84, Su92,
Via87, Via88]; see Section 22.6.
As suggested by the development of this chapter, one of the most significant theoretical
directions addressed in connection with dependencies has been the issue of decidability of
implication. The separation of finite and unrestricted implication, and the undecidability
of the implication problem, were shown independently for typed dependencies in [BV81a,
CLM81]. Subsequently, these results were independently strengthened to projected jds in
[GL82, Var84, YP82]. Then, after nearly a decade had elapsed, this result was strengthened
to include emvds [Her92].
On the other hand, the equivalence of finite and unrestricted implication for full
dependencies was observed in [BV81a]. That deciding implication for full typed dependencies
is complete in EXPTIME is due to [CLM81]. See also [BV84c, FUMY83], which present
numerous results on full and embedded typed dependencies. The special case of deciding
implication of a typed dependency by inds has been shown to be PSPACE-complete
[JK84b].
The issue of inferring view dependencies was first studied in [Klu80], where Theorem
10.2.5 was presented. Reference [KP82] developed Theorem 10.2.6.
The issue of attempting to characterize view images of a satisfaction family as a
satisfaction family was first raised in [GZ82], where Exercise 10.11b was shown. Theorem
10.2.7 is due to [Fag82b], although a different proof technique was used there. Reference
[Hul84] demonstrates that some projections of satisfaction families defined by fds
cannot be characterized by any finite set of full dependencies (see Exercise 10.11c,d).
That investigation is extended in [Hul85], where it is shown that if Σ is a family of
fds over U and V ⊆ U, and if π_V(sat(U, Σ)) ≠ sat(V, Γ) for any set Γ of fds, then
π_V(sat(U, Σ)) ≠ sat(V, Γ) for any finite set Γ of full dependencies.
Another primary thrust in the study of dependencies has been the search for
axiomatizations for various classes of dependencies. The axiomatization presented here for full
typed dependencies is due to [BV84a], which also provides an axiomatization for the
embedded case. The axiomatization for algebraic dependencies is from [YP82]. An
axiomatization for template dependencies is given in [SU82] (see Exercise 10.22). Research on
axiomatizations for jds is described in the Bibliographic Notes of Chapter 8.
The direct product construction is from [Fag82b]. Proposition 10.4.6 is due to
[Fag82b], and the proof presented here is from [YP82]. A finite set of tgds with no finite
Armstrong relation is exhibited in [FUMY83]. The direct product has also been used
in connection with tableau mappings and dependencies [FUMY83] (see Exercise 10.19).
The direct product has been studied in mathematical logic; the notion of (upward) faithful
presented here (see Exercise 10.28) is equivalent to the notion of preservation under direct
product found there (see, e.g., [CK73]); and the notion of downward faithful is related to,
but distinct from, the notion of preservation under direct factors.
Reference [MV86] extends the work on direct product by characterizing the expressive
power of different families of dependencies in terms of algebraic properties satisfied by
families of instances definable using them.
Exercises
Exercise 10.1
(a) Show that for each first-order sentence of the form considered in Section 10.1, there exists
an equivalent finite set of dependencies.
(b) Show that each dependency is equivalent to a finite set of egds and tgds.
Exercise 10.2 Consider the tableaux in Example 10.3.2. Form their composition in each
order, and compare each composition (as a mapping) to the original tableaux.
Exercise 10.3 [DG79] Let φ be a first-order sentence with equality but no function symbols
that is in prenex normal form and has quantifier structure ∃*∀*. Prove that φ has an unrestricted
model iff it has a finite model.
Exercise 10.4 This exercise concerns the dependencies of Fig. 10.2.
(a) Show that (S, x = z) and (S′, x = z) are equivalent.
(b) Show that (T, t) and (T′, t) are equivalent, but that (T, t) ≢ (T′, t) as tableau
queries.
Exercise 10.5 Let R[ABC] be a relation scheme. We construct a family of egds over R as
follows. For n ≥ 0, let

T_n = {⟨x_i, y_i, z_{2i}⟩, ⟨x_i, y_{i+1}, z_{2i+1}⟩ | i ∈ [0, n]}

and set σ_n = (T_n, z₀ = z_{2n+1}). Note that σ₀ ≡ A → C.
(a) Prove that as egds, σ_i ≡ σ_j for all i, j > 0.
(b) Prove that σ₀ ⊭ σ₁, but not vice versa.
Exercise 10.6
(a) [FUMY83] Prove that there are exactly three distinct (up to equivalence) full typed
single-head tgds over a binary relation. Hint: See Exercise 10.4.
(b) Prove that there is no set of single-head tgds that is equivalent to the typed tgd
(T₁, T₂) of Fig. 10.2.
(c) Exhibit an infinite chain σ₁, σ₂, . . . of typed tgds over a binary relation where each
is strictly weaker than the previous (i.e., such that σ_i ⊨ σ_{i+1} but σ_{i+1} ⊭ σ_i for each
i ≥ 1).
Exercise 10.7 [FUMY83] Let U = {A₁, . . . , Aₙ} be a set of attributes.
(a) Consider the full typed single-head tgd (full template dependency) σ_strongest =
({t₁, . . . , tₙ}, t), where t_i(A_i) = t(A_i) for i ∈ [1, n], and all other variables used are
distinct. Prove that σ_strongest is the strongest template dependency for U, in the
sense that for each (not necessarily full) template dependency σ over U,
σ_strongest ⊨ σ.
(b) Let σ_weakest be the template dependency (S, s), where s(A_i) = x_i for i ∈ [1, n] and
where S includes all tuples s′ over U that satisfy (1) s′(A_i) = x_i or y_i for i ∈ [1, n],
and (2) s′(A_i) = x_i for at least one i ∈ [1, n]. Prove that σ_weakest is the weakest full
template dependency over U, in the sense that for each nontrivial full template
dependency σ over U, σ ⊨ σ_weakest.
(c) For V ⊆ U, a template dependency over U is V-partial if it can be expressed as a
tableau (T, t), where t is over V. For V ⊆ U, exhibit a weakest V-partial template
dependency.
Exercise 10.8 [BV84c] Prove Theorems 10.2.1 and 10.2.2.
Exercise 10.9 Prove that the triviality problem for typed tgds is NP-complete. Hint: Use a
reduction from tableau containment (Theorem 6.2.3).
Exercise 10.10
(a) Prove Proposition 10.2.4.
(b) Develop an analogous result for the binary natural join.
Exercise 10.11 Let R[ABCDE] and S[ABCD] be relation schemas, and let V = ABCD.
Consider Σ = {A → E, B → E, CE → D}.
(a) Describe the set of fds implied by Σ on π_V(R).
(b) [GZ82] Show that π_V(sat(R, Σ)) ≠ sat(S, Γ) for any set Γ of fds. Hint: Consider the
instance J = {⟨a, b₁, c, d₁⟩, ⟨a, b, c₁, d₂⟩, ⟨a₁, b, c, d₃⟩} over S.
(c) [Hul84] Show that there is no finite set Γ of full dependencies over S such that
π_V(sat(R, Σ)) = sat(S, Γ). Hint: Say that a satisfaction family F over R has rank
n if F = sat(R, Γ) for some Γ where the tableau in each dependency of Γ has at most n
elements. Suppose that π_V(sat(R, Σ)) has rank n. Exhibit an instance J over V with
n + 1 elements such that (a) J ∉ π_V(sat(R, Σ)), and (b) J satisfies each dependency
that is implied for π_V(R) by Σ and that has at most n elements in its tableau. Conclude
that J ∈ sat(V, Γ), a contradiction.
(d) [Hul84] Develop a result for mvds analogous to part (c).
Exercise 10.12 [KP82] Complete the proof of Theorem 10.2.6 for the case where Σ is a set of
full dependencies and σ is a full tgd. Show how to extend that proof (a) to the case where σ is
an egd; (b) to include union; and (c) to permit constants in the expression E. Hint: For (a), use
the technique of Theorem 8.4.12; for (b) use union of tableaux, but permitting multiple output
rows; and for (c) recall Exercise 8.27b.
Exercise 10.13 [Fag82b] Prove Theorem 10.2.7.
Exercise 10.14 Exhibit a typed tgd σ and a set Σ of typed dependencies such that Σ ⊨ σ, and
there are two chasing sequences of σ by Σ, both of which satisfy conditions (1) and (2) in the
definition of chasing for embedded dependencies in Section 10.2, where one sequence is finite
and the other is infinite.
Exercise 10.15 Consider three dependencies σ₁, σ₂, σ₃ over R[ABC]: two typed tgds given
by tableaux [not reproduced here] and the fd AC → B.
(a) Starting with input T = {⟨1, 2, 3⟩, ⟨1, 4, 5⟩}, perform four steps of the chase using
these dependencies.
(b) Prove that {σ₁, σ₂, σ₃} ⊨_unr A → B.
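For the special case where all the dependencies are fds (a simple form of egd), chase steps like those requested in part (a) can be mechanized. The sketch below is illustrative only: it handles fds, not the tgds used in this exercise, and it runs on its own small input.

```python
def chase_fds(tuples, fds):
    """Naive chase of a tableau by fds: whenever two rows agree on the
    left-hand side of an fd but differ on the right, equate the two
    values by renaming one to the other everywhere.
    fds: list of (lhs_positions, rhs_position)."""
    rows = [list(t) for t in tuples]
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            for r1 in rows:
                for r2 in rows:
                    if all(r1[i] == r2[i] for i in lhs) and r1[rhs] != r2[rhs]:
                        old, new = r2[rhs], r1[rhs]
                        for r in rows:          # merge the two values globally
                            for i, v in enumerate(r):
                                if v == old:
                                    r[i] = new
                        changed = True
    return {tuple(r) for r in rows}

# Chasing {<1,2,3>, <1,4,3>} with the fd A -> B equates 2 and 4,
# collapsing the two rows into one.
result = chase_fds([(1, 2, 3), (1, 4, 3)], [([0], 1)])
assert result == {(1, 2, 3)}
```

Because the renaming step strictly decreases the number of distinct values, this chase always terminates; it is the tgds of the exercise that can make a chase run forever.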
Exercise 10.16
(a) Prove that the chasing sequence of Example 10.2.8 does not terminate; then use this
sequence to verify that Σ ⊨_unr σ₅.
(b) Show that Σ ⊨_fin σ₅.
(c) Exhibit a set Σ′ of dependencies and a dependency σ′ such that the chasing sequence
of σ′ with Σ′ is infinite, and such that Σ′ ⊭_unr σ′ but Σ′ ⊨_fin σ′.
Exercise 10.17 [BV84c] Suppose that there is an infinite chasing sequence of T by Σ. Prove
that chase(T, Σ) satisfies Σ.
Exercise 10.18 [BV84a] (a) Prove Proposition 10.3.1. (b) Complete the proof of Theorem
10.3.3.
Exercise 10.19 [FUMY83] This exercise uses the direct product construction for combining
full typed tableau mappings. Let R be a fixed relation schema of arity n. The direct product ⊗
of free tuples and tableaux is defined as for tuples and instances. Given two full typed tgds
σ = (T, t) and σ′ = (T′, t′) over relation schema R, their direct product is
σ ⊗ σ′ = (T ⊗ T′, t ⊗ t′).
(a) Let σ, σ′ be full typed single-head tgds over R. Prove that σ ⊗ σ′ is equivalent to
{σ, σ′}.
(b) Are σ ⊗ σ′ and σ′ ⊗ σ comparable as tableau queries under ⊆, and, if so, how?
(c) Show that the family of typed egds that have equality atoms referring to the same
column of R is closed under finite conjunction.
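The direct product of full typed tgds can be made concrete with a small sketch, encoding a tableau as a set of rows of variable names and a product variable as a pair of variables. This encoding is our own choice for illustration, not the book's notation.

```python
from itertools import product

def tableau_product(T1, T2):
    """Direct product of two typed tableaux over the same arity:
    one row per pair of rows, and each entry is the pair of variables."""
    return {tuple(zip(r1, r2)) for r1, r2 in product(T1, T2)}

def tgd_product(tgd1, tgd2):
    """Product of two full typed tgds (T, t): pair the tableaux and
    pair the summary rows."""
    (T1, t1), (T2, t2) = tgd1, tgd2
    return tableau_product(T1, T2), tuple(zip(t1, t2))

# Two trivial full typed tgds over a binary relation.
s1 = ({("x", "y")}, ("x", "y"))
s2 = ({("u", "v")}, ("u", "v"))
T, t = tgd_product(s1, s2)
assert T == {(("x", "u"), ("y", "v"))}
assert t == (("x", "u"), ("y", "v"))
```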
Exercise 10.20 [FUMY83]
(a) Let σ and σ′ be typed tgds. Prove that σ ⊨_unr σ′ iff σ ⊨_fin σ′. Hint: Show that chasing
σ′ with σ will terminate in this case.
(b) Prove that there is a pair σ, σ′ of typed tgds for which there is no typed tgd
equivalent to {σ, σ′}. Hint: Assume that typed tgds were closed under conjunction in
this way. Use part (a).
Exercise 10.21 [BV84a] State and prove an axiomatization theorem for the family of typed
dependencies.
Exercise 10.22 [SU82] Exhibit a set of axioms for template dependencies (i.e., typed
single-head tgds), and prove that it is sound and complete for unrestricted logical implication.
Exercise 10.23 Prove that Algorithm 10.4.2 is correct. (See Exercise 4.18a).
Exercise 10.24
(a) Consider the full typed tgd

σ = ({⟨x, y′⟩, ⟨x′, y′⟩, ⟨x′, y⟩}, ⟨x, y⟩).

Prove that there is no pair E, E′ of (nonextended) PJ expressions such that σ is
equivalent to E ⊆ E′ [i.e., such that I ⊨ σ iff E(I) ⊆ E′(I)].
(b) Let σ be as in Fig. 10.5. Prove that there is no pair E, E′ of (nonextended) PJ
expressions such that σ is equivalent to E ⊆ E′.
Exercise 10.25 In connection with Example 10.4.3,
(a) Prove that σ is equivalent to E₁ ⊆_e E₂.
(b) Prove that A →→ BC is equivalent to π_{ABC}(R) ⋈ π_{AB₁C₁}(R) ⊆_e π_{ABCB₁C₁}(R).
(c) Prove that σ′ is equivalent to F₁ ⊆_e F₂.
Exercise 10.26 Complete the proof of Theorem 10.4.4.
Exercise 10.27 An extended PJ expression E is shallow if it has the form π_X(R) or the form
π_X(π_{Y₁}(R) ⋈ · · · ⋈ π_{Yₙ}(R)). An algebraic dependency E ⊆_e E′ is shallow if E and E′ are
shallow. Prove that every algebraic dependency is equivalent to a shallow one.
Exercise 10.28 [Fag82b] A dependency σ is upward faithful (with respect to direct products)
if, for each family of nonempty instances {I_j | j ∈ J},

∀j ∈ J, I_j ⊨ σ implies ⊗{I_j | j ∈ J} ⊨ σ.

Analogously, σ is downward faithful if

⊗{I_j | j ∈ J} ⊨ σ implies ∀j ∈ J, I_j ⊨ σ.
(a) Show that the constraint

∀x, y, y′, z, z′ (R(x, y, z) ∧ R(x, y′, z′) → (y = y′ ∨ z = z′))

is downward faithful but not upward faithful.
(b) Show that the constraint

∀x, y, z (R(x, y) ∧ R(y, z) → R(x, z))

is upward faithful but not downward faithful.
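Both claims can be spot-checked on tiny instances. The sketch below tests particular counterexamples we chose for illustration; it is evidence, not a proof, and the helper names (near_key, transitive) are ours.

```python
from itertools import product

def direct_product(I1, I2):
    """Direct product of two relation instances of the same arity."""
    return {tuple(zip(*p)) for p in product(I1, I2)}

def near_key(I):
    """R(x,y,z) & R(x,y',z') -> y = y' or z = z'  (constraint of part (a))."""
    return all(t[1] == s[1] or t[2] == s[2]
               for t in I for s in I if t[0] == s[0])

def transitive(I):
    """R(x,y) & R(y,z) -> R(x,z)  (constraint of part (b))."""
    return all((t[0], s[1]) in I
               for t in I for s in I if t[1] == s[0])

# (a) Each factor satisfies the constraint, but the product does not:
I1 = {(0, 1, 2), (0, 1, 3)}        # the y-coordinates always agree
I2 = {(0, 1, 2), (0, 4, 2)}        # the z-coordinates always agree
assert near_key(I1) and near_key(I2)
assert not near_key(direct_product(I1, I2))

# (b) A product can be transitive even when one factor is not:
J1 = {("a", "b"), ("b", "c")}      # missing ("a", "c")
J2 = {("d", "e")}                  # no two edges compose
assert not transitive(J1)
assert transitive(direct_product(J1, J2))
```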
Exercise 10.29 [Fag82b, YP82] Prove Proposition 10.4.6.
Exercise 10.30 [Fag82b] The direct product operator ⊗ is extended to instances of database
schema R = {R₁, . . . , Rₙ} by forming, for each i ∈ [1, n], a direct product of the relation
instances associated with R_i. Let R = {P[A], Q[A]} be a database schema. Show that the empty
set of typed dependencies over R has no Armstrong relation. Hint: Find typed dependencies
σ₁, σ₂ over R such that ⊨ (σ₁ ∨ σ₂) but ⊭ σ₁ and ⊭ σ₂.
Exercise 10.31 [YP82] Let R[ABCD] be a relation schema. The pseudo-transitivity rule for
multivalued dependencies (Chapter 8) implies, given A →→ B and B →→ C, that A →→ C.
Express this axiom in the paradigm of algebraic dependencies. Prove it using axioms {AD1,
. . . , AD7} (without using extended relations).
Exercise 10.32 Infer the three axioms for fds from rules {A1, . . . , A8}.
Exercise 10.33 [YP82] Prove that {A1, . . . , A8} is sound.
11 Design and Dependencies
When the only tool you have is a hammer,
everything begins to look like a nail.
Anonymous
Alice: Will we use a hammer for schema design?
Riccardo: Sure: decomposition, semantic modeling, . . .
Vittorio: And each provides nails to which the data must fit.
Sergio: The more intricate the hammer, the more intricate the nail.
We have discussed earlier applications of dependencies in connection with query
optimization (Section 8.4) and user views (Section 10.2). In this chapter, we briefly
consider how dependencies are used in connection with the design of relational database
schemas.
The problem of designing database schemas is complex and spans the areas of cognitive
science, knowledge representation, software practices, implementation issues, and
theoretical considerations. Due to the interaction of these many aspects (some of them
integrally related to how people think and perceive the world), we can only expect a relatively
narrow and somewhat simplistic contribution from theoretical techniques. As a result, the
primary focus of this chapter is to introduce the kinds of formal tools that are used in the
design process; a broader discussion of how to use these tools in practice is not attempted.
The interested reader is referred to the Bibliographic Notes, which indicate where more
broad-based treatments of relational schema design can be found.
In the following discussion, designing a relational schema means coming up with a
good way of grouping the attributes of interest into tables, yielding a database schema.
The choice of a schema is guided by semantic information about the application data
provided by the designer. There are two main ways to do this, and each leads to a different
approach to schema design.
Semantic data model: In this approach (Section 11.1), the application data is first described
using a model with richer semantic constructs than relations. Such models are called
semantic data models. The schema in the richer model is then translated into a
relational schema. The hope is that the use of semantic constructs will naturally lead
to specifying good schemas.
Refinement of relational schema: This approach (Section 11.2) starts by specifying an
initial relational schema, augmented with dependencies (typically fds and mvds). The
design process uses the dependencies to improve the schema. But what is it that makes
one schema better than another? This is captured by the notion of normal form for
relational schemas, a central notion in design theory.
Both of these approaches focus on the transformation of a schema S₁ into a relational
schema S₂. Speaking in broad terms, three criteria are used to evaluate the result of this
transformation:
(1) Preservation of data;
(2) Desirable properties of S₂, typically described using normal forms; and
(3) Preservation of meta-data (i.e., information captured by schema and dependencies).
Condition (1) requires that information not be lost when instances of S₁ are represented in
S₂. This is usually formalized by requiring that there be a natural mapping from Inst(S₁) to
Inst(S₂) that is one-to-one. As we shall see, the notion of "natural" can vary, depending on
the data model used for S₁.
Criterion (2) has been the focus of considerable research, especially in connection with
the approach based on refining relational schemas. In this context, the notion of relational
schema is generalized to incorporate dependencies, as follows: A relation schema is a pair
(R, Σ), where R is a relation name and Σ is a set of dependencies over R. Similarly, a
database schema is a pair (R, Σ), where R is a database schema as before, and Σ is a set of
dependencies over R. Some of these may be tagged by a single relation (i.e., have the form
R_j: σ, where σ is a dependency over R_j ∈ R). Others, such as inds, may involve pairs
of relations. More generally, some dependencies might range over the full set of attributes
occurring in R. (This requires a generalization of the notion of dependency satisfaction,
which is discussed in Section 11.3.)
With this notation established, we return to criterion (2). In determining whether one
relational schema is better than another, the main factors that have been considered are
redundancy in the representation of data and update anomalies. Recall that these were
illustrated in Section 8.1, using the relations Movies and Showings. We concluded there
that certain schemas yielded undesirable behavior. This resulted from the nature of the
information contained in the database, as specified by a set of dependencies.
Although the dependencies are in some sense the cause of the problems, they also
suggest ways to eliminate them. For example, the fd

Movies: Title → Director

suggests that the attribute Director is a characteristic of Title, so the two attributes belong
together and can safely be represented in isolation from the other data. It should be
clear that one always needs some form of semantic information to guide schema design;
in the absence of such information, one cannot distinguish "good" schemas from "bad"
ones (except for trivial cases). As will be seen, the notion of normal form captures some
characteristics of "good" schemas by guaranteeing that certain kinds of redundancies and
update anomalies will not occur. It will also be seen that the semantic data model approach
to schema design can lead to relational schemas in normal form.
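To make this concrete, here is a small sketch of the decomposition suggested by the fd Title → Director; the data are invented for the example, and the natural join shows that the decomposition is lossless when the fd holds.

```python
# A Movies relation (Title, Director, Actor) that repeats the director
# for every (title, actor) pair -- the redundancy the fd points out.
movies = {
    ("Annie Hall", "Allen", "Allen"),
    ("Annie Hall", "Allen", "Keaton"),
    ("Manhattan", "Allen", "Keaton"),
}

# Decompose on the fd Title -> Director.
title_director = {(t, d) for (t, d, a) in movies}
title_actor = {(t, a) for (t, d, a) in movies}

# Each Title/Director pair is now stored once, and the natural join
# recovers the original relation, so no information was lost.
rejoined = {(t, d, a)
            for (t, d) in title_director
            for (t2, a) in title_actor if t == t2}
assert rejoined == movies
assert len(title_director) == 2
```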
In broad terms, the intuition behind criterion (3) is that properties of data captured by
schema S₁ (e.g., functional or inclusion relationships) should also be captured by schema
S₂. In the context of refining relational schemas, a precise meaning will be given for this
criterion in terms of preservation of dependencies. We shall see that there is a kind of
trade-off between criteria (2) and (3).
The approach of refining relational schemas typically makes a simplifying assumption
called the pure universal relation assumption (pure URA). Intuitively, this states that
the input schema S₁ consists of a single relation schema, possibly with some dependencies.
Section 11.3 briefly considers this assumption in a more general light. In addition, the
weak URA is introduced, and the notions of dependency satisfaction and query
interpretation are extended to this context.
This chapter is more in the form of a survey than the previous chapters, for several
reasons. As noted earlier, more broad-based treatments of relational schema design may
be found elsewhere and require a variety of tools complementary to formal analysis. The
tools presented here can at best provide only part of the skeleton of a design methodology
for relational schemas. Normal forms and the universal relation assumption were active
research topics in the 1970s and early 1980s and generated a large body of results. Some
of that work is now considered somewhat unfashionable, primarily due to the emergence
of new data models. However, we mention these topics briey because (1) they lead to
interesting theoretical issues, and (2) we are never secure from a change of fashion.
11.1 Semantic Data Models
In this section we introduce semantic data models and describe how they are used in
relational database design. Semantic data models provide a framework for specifying database
schemas that is considerably richer than the relational model. In particular, semantic models
are arguably closer than the relational model to ways that humans organize information
in their own thinking. The semantic data models are precursors of the recently emerging
object-oriented database models (presented in a more formal fashion in Chapter 21) and
are thus of interest in their own right.
As a vehicle for our discussion, we present a semantic data model, called loosely the
generic semantic model (GSM). (This is essentially a subset of the IFO model, one of the
first semantic models defined in a formal fashion.) We then illustrate how schemas from
this model can be translated into relational schemas. Our primary intention is to present
the basic flavor of the semantic data model approach to relational schema design and some
formal results that can be obtained. The presentation itself is somewhat informal so that the
notation does not become overly burdensome.
In many practical contexts, the semantic model used is the Entity-Relationship model
(ER model) or one of its many variants. The ER model is arguably the first semantic data
model that appeared in the literature. We use the GSM because it incorporates several
features of the semantic modeling literature not present in the ER model, and because the
GSM presents a style closer to object-oriented database models.
GSM Schemas
Figure 11.1 shows the schema CINEMA-SEM from the GSM, which can be used to
represent information on movies and theaters. The major building blocks of such schemas
are abstract classes, attributes, complex value classes, and the ISA hierarchy; these will be
considered briefly in turn.
The schema of Fig. 11.1 shows five classes that hold abstract objects: Person, Director,
Actor, Movie, and Theater. These correspond to collections of similar objects in the
world. There are two kinds of abstract class: primary classes, shown using diamonds, and
subclasses, shown using circles. This distinction will be clarified further when ISA
relationships are discussed.
Instances of semantic schemas are constructed from the usual printable classes (e.g.,
string, integer, float, etc.) and abstract classes. The printable classes correspond to (subsets
of) the domain dom used in the relational model. The printable classes are indicated
using squares; in Fig. 11.1 we have labeled these to indicate the kind of values that
populate them. Conceptually, the elements of an abstract class such as Person are actual persons
in the world; in the formal model, internal representations for persons are used. These
internal representations have come to be known as object identifiers (OIDs). Because they are
internal, it is usually assumed that OIDs cannot be presented explicitly to users, although
programming and query languages can use variables that hold OIDs. The notion of instance
will be defined more completely later and is illustrated in Example 11.1.1 and Fig. 11.2.
Attributes provide one mechanism for representing relationships between objects and
other objects or printable values; they are drawn using arrows. For example, the Person
class has attributes name and citizenship, which associate strings with each person object.
These are examples of single-valued attributes. (In this schema, all attributes are assumed
to be total.) Multivalued attributes are also allowed; these map each object to a set of
objects or printable values and are denoted using arrows with double heads. For example,
acts_in maps actors to the movies that they have acted in. It is common to permit inverse
constraints between pairs of attributes. For example, consider the relationship between
actors and movies. It can be represented using the multivalued attribute acts_in on Actor
or the multivalued attribute actors on Movie. In this schema, we assume that the attributes
acts_in and actors are constrained to be inverses of each other, in the sense that
m ∈ acts_in(a) iff a ∈ actors(m). A similar constraint is assumed between the attributes
associating movies with directors.
In the schema CINEMA-SEM, the Pariscope node is an example of a complex value
class. Members of the underlying class are triples whose coordinates are from the classes
Theater, Time, and Movie, respectively. In the GSM, each complex value is the result of
one application of the tuple construct. This is indicated using a tuple node, with
components indicated using dashed arrows. The components of each complex value can be
printable, abstract, or complex values. However, there cannot be a directed cycle in the set
of edges used to define the complex values. As suggested by the attribute price, a complex
value class may have attributes. Complex value classes can also serve as the range of an
attribute, as illustrated by the class Award.
Complex values are of independent interest and are discussed in some depth in
Chapter 20. Complex values generally include hierarchical structures built from a handful of
Figure 11.1: The schema CINEMA-SEM in the Generic Semantic Model. (The figure shows
the classes Person, Director, Actor, Movie, and Theater; the complex value classes Pariscope,
with components Theater, Time, and Movie, and Award, with components Prize and Name;
and the attributes name, citizenship, title, address, phone, price, acts_in, and actors.)
basic constructors, including tuple (as shown here), set, and sometimes others such as bag
and list. Rich complex value models are generally incorporated into object-oriented data
models and into some semantic data models. Some constructs for complex values, such as
set, cannot be simulated directly using the pure relational model (see Exercise 11.24).
The final building block of the GSM is the ISA relationship, which represents set
inclusion. In the example schema of Fig. 11.1, the ISA relationships are depicted by
double-shafted arrows and indicate that the set of Director is a subset of Person, and likewise
that Actor is a subset of Person. In addition to indicating set inclusion, ISA relationships
indicate a form of subtyping relationship, or inheritance. Specifically, if class B ISA class
A, then each attribute of A is also relevant to (and defined for) elements of class B. In the
context of semantic models, this should be no surprise because the elements of B are
elements of A.
In the GSM, the graph induced by ISA relationships is a directed acyclic graph (DAG).
The root nodes are primary abstract classes (represented with diamonds), and all other
nodes are subclass nodes (represented with circles). Each subclass node has exactly one
primary node above it. Complex value classes cannot participate in ISA relationships.
In the GSM, the tuple and multivalued attribute constructs are somewhat redundant: A
multivalued attribute is easily simulated using a tuple construct. Such redundancy is typical
of semantic models: The emphasis is on allowing schemas that correspond closely to the
way that users think about an application. On a bit of a tangent, we also note that the tuple
construct of GSM is close to the relationship construct of the ER model.
GSM Instances
Let S be a GSM schema. It is assumed that a fixed (finite or infinite) domain is associated
to each printable class in S. We also assume a countably infinite set obj of OIDs.
An instance of S is a function I whose domain is the set of primary, subclass, and
complex value classes of S and the set of attributes of S. For primary class C, I(C) is a
finite set of OIDs, disjoint from I(C′) for each other primary class C′. For each subclass
D, I(D) is a set of OIDs such that the inclusions indicated by the ISA relationships of S
are satisfied. For complex value class C with components D₁, . . . , Dₙ, I(C) is a finite set
of tuples ⟨d₁, . . . , dₙ⟩, where d_i ∈ I(D_i) if D_i is an abstract or complex value class, and d_i
is in the domain of D_i if D_i is a printable class. For a single-valued attribute f from C to
C′, I(f) is a function from I(C) to I(C′) (or to the domain of C′, if C′ is printable). For a
multivalued attribute f from C to C′, I(f) is a function from I(C) to finite subsets of I(C′)
(or of the domain of C′, if C′ is printable). Given instance I, attribute f from C to C′, and
object o in I(C), we often write f(o) to denote [I(f)](o).
Example 11.1.1 Part of a very small instance I₁ of CINEMA-SEM is shown in
Fig. 11.2. The values of the complex value class Award and of the attributes award, address,
and phone are not shown. The symbols o₁, o₂, etc., denote OIDs.
Consider an instance I′ that is identical to I₁, except that o₂ is replaced by o₈
everywhere. Because OIDs serve only as internal representations that cannot be accessed
explicitly, I₁ and I′ are considered to be identical in terms of the information that they
represent.

I₁(Person) =          name(o₁) = "Alice"      citizenship(o₁) = "Great Britain"
  {o₁, o₂, o₃}        name(o₂) = "Allen"      citizenship(o₂) = "United States"
                      name(o₃) = "Keaton"     citizenship(o₃) = "United States"

I₁(Director) = {o₂}   directed(o₂) = {o₄, o₅}

I₁(Actor) = {o₂, o₃}  acts_in(o₂) = {o₄, o₅}
                      acts_in(o₃) = {o₅}

I₁(Movie) = {o₄, o₅}  title(o₄) = "Take the Money and Run"
                      title(o₅) = "Annie Hall"
                      director(o₄) = o₂       actors(o₄) = {o₂}
                      director(o₅) = o₂       actors(o₅) = {o₂, o₃}

I₁(Theater) = {o₆}    name(o₆) = "Le Champo"

I₁(Pariscope) =       price(⟨o₆, 20:00, o₄⟩) = "30FF"
  {⟨o₆, 20:00, o₄⟩}

Figure 11.2: Part of an instance I₁ of CINEMA-SEM
Let S be a GSM schema. An OID isomorphism is a function that is a permutation on
the set obj of OIDs and leaves all printables fixed. Such functions are extended to Inst(S)
in the natural fashion. Two instances I and I′ are OID equivalent, denoted I ≡_OID I′, if
some OID isomorphism maps I to I′. This is clearly an equivalence relation.
As suggested by the preceding example, if two instances are OID equivalent, then they
represent the same information. The formalism of OID equivalence will be used later when
we discuss the relational simulation of GSM.
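As a concrete illustration, OID equivalence of two very small instances can be checked by brute force over OID permutations. The sketch below is illustrative only: the dictionary-based encoding of an instance and the exhaustive search are our own simplifications, not part of the GSM formalism.

```python
from itertools import permutations

def oid_equivalent(i1, i2):
    """Brute-force check of I1 ≡_OID I2 (illustrative only).

    An instance is a pair (classes, attrs): classes maps a class name to a
    frozenset of OIDs; attrs maps an attribute name to a dict from OID to a
    value (a printable, an OID, or a frozenset of OIDs).
    """
    oids1 = sorted({o for s in i1[0].values() for o in s})
    oids2 = sorted({o for s in i2[0].values() for o in s})
    if len(oids1) != len(oids2) or set(i1[0]) != set(i2[0]) or set(i1[1]) != set(i2[1]):
        return False

    def rename(v, m):                        # apply the isomorphism to a value;
        if isinstance(v, frozenset):         # printables are left fixed
            return frozenset(m.get(x, x) for x in v)
        return m.get(v, v)

    for perm in permutations(oids2):
        m = dict(zip(oids1, perm))
        if (all(frozenset(m[o] for o in i1[0][c]) == i2[0][c] for c in i1[0]) and
                all({m[o]: rename(v, m) for o, v in i1[1][f].items()} == i2[1][f]
                    for f in i1[1])):
            return True
    return False

# As in the example above: o2 is renamed to o8, so the instances are equivalent.
i1 = ({"Actor": frozenset({"o2", "o3"})}, {"name": {"o2": "Allen", "o3": "Keaton"}})
i2 = ({"Actor": frozenset({"o8", "o3"})}, {"name": {"o8": "Allen", "o3": "Keaton"}})
print(oid_equivalent(i1, i2))  # True
```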
The GSM is a very basic semantic data model, and many variations on the semantic
constructs included in the GSM have been explored in the literature. For example, a variety
of simple constraints can be incorporated, such as cardinality constraints on attributes
and disjointness between subclasses (e.g., that Director and Actor are disjoint). Another
variation is to require that a class be dependent on an attribute (e.g., that each Award
object must occur in the image of some Actor) or on a complex value class. More complex
constraints based on first-order sentences have also been explored. Some semantic models
support different kinds of ISA relationships, and some provide derived data (i.e., a form
of user view incorporated into the base schema).
11.1 Semantic Data Models 247
Translating into the Relational Model
We now describe an approach for translating semantic schemas into relational database
schemas. As we shall see, the semantics associated with the semantic schema will yield
dependencies of various forms in the relational schema.
A minor problem to be surmounted is that in a semantic model, real-world objects
such as persons can be represented using OIDs, but printable classes must be used in the
pure relational model. To resolve this, we assume that each primary abstract class has a
key, that is, a set {k₁, . . . , kₙ} of one or more attributes with printable range such that for each instance I and pair o, o′ of objects in the class, o = o′ iff k₁(o) = k₁(o′) and . . . and kₙ(o) = kₙ(o′). (Although more than one key might exist for a primary class, we assume that a single key is chosen.) In the schema CINEMA-SEM, we assume that (person_)name is the key for Person, that title is the key for Movie, and that (theater_)name is the key for Theater. (Generalizations of this approach permit the composition of attributes to serve as part of a key; e.g., including in the key for Movie the composition director ∘ name, which would give the name of the director of the movie.)
An alternative to the use of keys as just described is to permit the use of surrogates. Informally, a surrogate of an object is a unique, unchanging printable value that is associated with the object. Many real-world objects have natural surrogates (e.g., Social Security number for persons in the United States or France; or Invoice Number for invoices in a commercial enterprise). In other cases, abstract surrogates can be used.
The kernel of the translation of GSM schemas into relational ones concerns how objects in GSM instances can be represented using (tuples of) printables. For each class C occurring in the GSM schema, we associate a set of relational attributes, called the representation of C, and denoted rep(C). For a printable class C, rep(C) is a single attribute having this sort. For an abstract class C, rep(C) is a set of attributes corresponding to the key attributes of the primary class above C. For a complex value class C = [C₁, . . . , Cₘ], rep(C) consists of (disjoint copies of) all of the attributes occurring in rep(C₁), . . . , rep(Cₘ).
Translation of a GSM schema into a relation schema is illustrated in the following
example.
Example 11.1.2 One way to simulate schema CINEMA-SEM in the relational model
is to use the schema CINEMA-REL, which has the following schema:
Person [name, citizenship]
Director [name]
Actor [name]
Acts_in [name, title]
Award [prize, year]
Has_Award [name, prize, year]
Movie [title, director_name]
Theater [theater_name, address, phone]
Pariscope [theater_name, time, title, price]
Person    name     citizenship        Movie    title                     director_name
          Alice    Great Britain               Take the Money and Run    Allen
          Allen    United States               Annie Hall                Allen
          Keaton   United States

Pariscope    theater_name    time     title                     price
             Le Champo       20:00    Take the Money and Run    30FF

Figure 11.3: Part of a relational instance I₂ that simulates I₁

Figure 11.3 shows three relations in the relational simulation I₂ of the instance I₁ of Fig. 11.2.
In schema CINEMA-REL, both Actor and Acts_in are included in case there are one
or more actors that did not act in any movie. For similar reasons, Acts_in and Has_Award
are separated.
In contrast, we have assumed that each person has a citizenship (i.e., that citizenship is
a total function). If not, then two relations would be needed in place of Person. Analogous
remarks hold for directors, movies, theaters, and Pariscope objects.
In schema CINEMA-REL, we have not explicitly provided relations to represent the
attributes directed of Director or actors of Movie. This is because both of these are inverses
of other attributes, which are represented explicitly (by Movie and Acts_in, respectively).
If we were to consider the complex value class Awards of CINEMA-SEM to be
dependent on the attribute award, then the relation Award could be omitted.
Suppose that I is an instance of CINEMA-SEM and that I′ is the simulation of I. The semantics of CINEMA-SEM, along with the assumed keys, imply that I′ will satisfy several dependencies. This includes the following fds (in fact, key dependencies):

Person : name → citizenship
Movie : title → director_name
Theater : theater_name → address, phone
Pariscope : theater_name, time, title → price

A number of inds are also implied:

Director[name] ⊆ Person[name]
Actor[name] ⊆ Person[name]
Movie[director_name] ⊆ Director[name]
Acts_in[name] ⊆ Actor[name]
Acts_in[title] ⊆ Movie[title]
Has_Award[name] ⊆ Actor[name]
Has_Award[prize, year] ⊆ Award[prize, year]
Pariscope[theater_name] ⊆ Theater[theater_name]
Pariscope[title] ⊆ Movie[title]
The first group of inds follows from ISA relationships; the second from restrictions on attribute ranges; and the third from restrictions on the components of complex values. All but one of the inds here are unary, because all of the keys, except the key for Award, are based on a single attribute.
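Checking that a concrete relational instance satisfies such fds and inds is straightforward. A minimal sketch (relations encoded as lists of attribute-to-value dicts, an assumption of ours):

```python
def proj(rel, attrs):
    """Project a relation (a list of attribute-to-value dicts) onto attrs."""
    return {tuple(t[a] for a in attrs) for t in rel}

def satisfies_fd(rel, lhs, rhs):
    """rel satisfies lhs -> rhs: no two tuples agree on lhs but differ on rhs."""
    seen = {}
    for t in rel:
        key = tuple(t[a] for a in lhs)
        val = tuple(t[a] for a in rhs)
        if seen.setdefault(key, val) != val:
            return False
    return True

def satisfies_ind(r, x, s, y):
    """r and s satisfy the ind r[x] ⊆ s[y]."""
    return proj(r, x) <= proj(s, y)

movie = [{"title": "Take the Money and Run", "director_name": "Allen"},
         {"title": "Annie Hall", "director_name": "Allen"}]
director = [{"name": "Allen"}]
print(satisfies_fd(movie, ["title"], ["director_name"]))            # True
print(satisfies_ind(movie, ["director_name"], director, ["name"]))  # True
```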
Preservation of Data
Suppose that S is a GSM schema with keys for primary classes, and (R, Σ ∪ Γ) is a relational schema that simulates it, constructed in the fashion illustrated in Example 11.1.2, where Σ is the set of fds and Γ is the set of inds. As noted in criterion (1) at the beginning of this chapter, it is desirable that there be a natural one-to-one mapping from instances of S to instances of (R, Σ ∪ Γ). To formalize this, two obstacles need to be overcome.

First, we have not developed a query language for the GSM. (In fact, no query language has become widely accepted for any of the semantic data models. In contrast, some query languages for object-oriented database models are now gaining wide acceptance.) We shall overcome this obstacle by developing a rather abstract notion of "natural" for this context.

The second obstacle stems from the fact that OID-equivalent GSM instances hold essentially the same information. Thus we would expect OID-equivalent instances to map to the same relational instance.¹ To refine criterion (1) for this context, we are searching for a one-to-one mapping from Inst(S)/≡_OID into Inst(R, Σ ∪ Γ).
A mapping δ : Inst(S) → Inst(R, Σ ∪ Γ) is OID consistent if I ≡_OID I′ implies δ(I) = δ(I′). In this case, we can view δ as a mapping with domain Inst(S)/≡_OID. The mapping δ preserves the active domain if for each I ∈ Inst(S), adom(δ(I)) = adom(I). [The active domain of a GSM instance I, denoted adom(I), is the set of all printables that occur in I.]
The following can be verified (see Exercise 11.3):

Theorem 11.1.3 (Informal) Let S be a GSM schema with keys for primary classes, and let (R, Σ ∪ Γ) be a relational simulation of S. Then there is a function δ : Inst(S) → Inst(R, Σ ∪ Γ) such that δ is OID consistent and preserves the active domain, and such that δ : Inst(S)/≡_OID → Inst(R, Σ ∪ Γ) is one-to-one and onto.
Properties of the Relational Schema
We now consider criteria (2) and (3) to highlight desirable properties of relational schemas
that simulate GSM schemas.
¹ When artificial surrogates are used to represent OIDs in the relational database, one might have to use a notion of equivalent relational database instances as well.
Criterion (2) for schema transformations concerns desirable properties of the target
schema. We now describe three such properties resulting from the transformation of GSM
schemas into relational ones.
Suppose again that S is a GSM schema with keys, and (R, Σ ∪ Γ) is a relational simulation of it. We assume as before that no constraints hold for S, aside from those implied by the constructs in S and the keys.
The three properties are as follows:
1. First, Σ is equivalent to a family of key dependencies; in the terminology of the next section, this means that each of the relation schemas obtained is in Boyce-Codd Normal Form (BCNF). Furthermore, the only mvds satisfied by relations in R are those implied by Σ, and so the relation schemas are in fourth normal form (4NF).

2. Second, the family Γ of inds is acyclic (see Chapter 9). That is, there is no sequence R₁[X₁] ⊆ R₂[Y₁], R₂[X₂] ⊆ R₃[Y₂], . . . , Rₙ[Xₙ] ⊆ R₁[Yₙ] of inds in the set. By Theorem 9.4.5, this implies that logical implication can be decided for (Σ ∪ Γ) and that finite and unrestricted implication coincide.

3. Finally, each ind R[X] ⊆ S[Y] in Γ is key based. That is, Y is a (minimal) key of S under Σ.

Together these properties present a number of desirable features. In particular, dependency implication is easy to check. Given a database schema R and sets Σ of fds and Γ of inds over R, Σ and Γ are independent if (1) for each fd σ over R, (Σ ∪ Γ) |= σ implies Σ |= σ; and (2) for each ind γ over R, (Σ ∪ Γ) |= γ implies Γ |= γ. Suppose that S is a GSM schema and that (R, Σ ∪ Γ) is a relational simulation of S. It can be shown that the three aforementioned properties imply that Σ and Γ are independent (see Exercise 11.4).
To conclude this section, we consider criterion (3). This criterion concerns the preservation of meta-data. We do not attempt to formalize this criterion for this context, but it should be clear that there is a close correspondence between the dependencies in Σ ∪ Γ and the constructs used in S. In other words, the semantics of the application as expressed by S is also captured, in the relational representation, by the dependencies Σ ∪ Γ.
The preceding discussion assumes that no dependency holds for S, aside from those implied by the keys and the constructs in S. However, in many cases constraints that are not directly implied by the structure of S will be incorporated into S. For instance, recall Example 11.1.2, and suppose that the fd Pariscope : theater_name, time → price is true for the underlying data. The relational simulation will have to include this dependency and, as a result, the resulting relational schema may be missing some of the desirable features (e.g., the family of fds is not equivalent to a set of keys, and the schema is no longer in BCNF). This suggests that a semantic model might be used to obtain a coarse relational schema, which might be refined further using the techniques for improving relational schemas developed in the next section.
11.2 Normal Forms
In this section, we consider schema design based on the refinement of relational schemas and normal forms, which provide the basis for this approach. The articulation of these normal forms is arguably the main contribution of relational database theory to the realm of schema design. We begin the discussion by presenting two of the most prominent normal forms and a design strategy based on decomposition. We then develop another normal form that overcomes certain technical problems of the first two, and describe an associated design strategy based on synthesis. We conclude with brief comments on the relationship of inds with decomposition.
When all the dependencies in a relational schema (R, Σ) are considered to be tagged, one can view the database schema as a set {(R₁, Σ₁), . . . , (Rₙ, Σₙ)}, where each (Rⱼ, Σⱼ) is a relation schema and the Rⱼ's are distinct. In particular, an fd schema is a relation schema (R, Σ) or database schema (R, Σ), where Σ is a set of tagged fds; this is extended in the natural fashion to other classes of dependencies. Much of the work on refinement of relational schemas has focused on fd schemas and (fd + mvd) schemas. This is what we consider here. (The impact of the inds is briefly considered at the end of this section.)
A normal form restricts the set of dependencies that are allowed to hold in a relation schema. The main purpose of the normal forms is to eliminate at least some of the redundancies and update anomalies that might otherwise arise. Intuitively, schemas in normal form are "good" schemas.

We introduce next two kinds of normal forms, namely BCNF and 4NF. (We will consider a third one, 3NF, later.) We then consider techniques to transform a schema into such desirable normal forms.
BCNF: Do Not Represent the Same Fact Twice
Recall the schema (Movies[T(itle), D(irector), A(ctor)], {T → D}) from Section 8.1. As discussed there, the Movies relation suffers from various anomalies, primarily because there is only one Director associated with each Title but possibly several Actors. Suppose that (R[U], Σ) is a relation schema, Σ |= X → Y for some Y ⊈ X, and Σ ⊭ X → U. It is not hard to see that anomalies analogous to those of Movies can arise in R. Boyce-Codd normal form prohibits this kind of situation.

Definition 11.2.1 A relation schema (R[U], Σ) is in Boyce-Codd normal form (BCNF) if Σ |= X → U whenever Σ |= X → Y for some Y ⊈ X. An fd schema (R, Σ) is in BCNF if each of its relation schemas is.

BCNF is most often discussed in cases where Σ involves only functional dependencies. In such cases, if (R, Σ) is in BCNF, the anomalies of Section 8.1 do not arise. An essential intuition underlying BCNF is, "Do not represent the same fact twice."
The question now arises: What does one do with a relation schema (R, Σ) that is not in BCNF? In many cases, it is possible to decompose this schema into subschemas (R₁, Σ₁), . . . , (Rₙ, Σₙ) without information loss. As a simple example, Movies can be decomposed into

(Movie_director[TD], {T → D}),
(Movie_actors[TA], ∅).

A general framework for decomposition is presented shortly.
4NF: Do Not Store Unrelated Information in the Same Relation
Consider the relation schema (Studios[N(ame), D(irector), L(ocation)], {N →→ D|L}). A tuple ⟨n, d, l⟩ is in Studios if director d is employed by the studio with name n and if this studio has an office in location l. Only trivial fds are satisfied by all instances of this schema, and so it is in BCNF. However, update anomalies can still arise, essentially because the D and L values are independent from each other. This gives rise to the following generalization of BCNF²:

Definition 11.2.2 A relation schema (R[U], Σ) is in fourth normal form (4NF) if
(a) whenever Σ |= X → Y and Y ⊈ X, then Σ |= X → U; and
(b) whenever Σ |= X →→ Y and Y ⊈ X, then Σ |= X → U.
An (fd + mvd) schema (R, Σ) is in 4NF if each of its relation schemas is.

It is clear that if a relation schema is in 4NF, then it is in BCNF. It is easily seen that Studios can be decomposed into two 4NF relations, without loss of information, and that the resulting relation schemas do not have the update anomalies mentioned earlier. An essential intuition underlying 4NF is, "Do not store unrelated information in the same relation."
The General Framework of Decomposition
One approach to refining relational schemas is decomposition. In this approach, it is usually assumed that the original schema consists of a single "wide" relation containing all attributes of interest. This is referred to as the pure universal relation assumption, or pure URA. A relaxation of the pure URA, called the weak URA, is considered briefly in Section 11.3. The pure URA is a simplifying assumption, because in practice the original schema is likely to consist of several tables, each with its own dependencies. In that case, the design process described for the pure URA is applied separately to each table. We adopt the pure URA here. In this context, the schema transformation produced by the design process consists of decomposing the original table into smaller tables by using the projection operator. (In an alternative approach, selection is used to yield so-called horizontal decompositions.)
We now establish the basic framework of decompositions. Let (U[Z], Σ) be a relation schema. A decomposition of (U[Z], Σ) is a database schema R = {R₁[X₁], . . . , Rₙ[Xₙ]} with tagged dependencies, where ∪{Xⱼ | j ∈ [1, n]} = Z. (The relation name U is used to suggest that it is a universal relation.) In the sequel, we often use relation names U (Rᵢ) and attribute sets Z (Xᵢ) interchangeably if ambiguity does not arise.
² The motivation behind the names of several of the normal forms is largely historical; see the Bibliographic Notes.
11.2 Normal Forms 253
We now consider the three criteria for schema transformation in the context of decomposition. As already suggested, criterion (2) is evaluated in terms of the normal forms. With regard to the preservation of data (1), the natural mapping from U to R is obtained by projection: The decomposition mapping of R is the function π_R : Inst(U) → Inst(R) such that for I ∈ inst(U), we have π_R(I)(Rⱼ) = π_Rⱼ(I). Criterion (1) says that the decomposition should not lose information when I is replaced by its projections (i.e., it should be one-to-one).
A natural property implying that a decomposition is one-to-one is that the original instance can be obtained by joining the component relations. Formally, a decomposition is said to have the lossless join property if for each instance I of (U, Σ) the join of the projections is the original instance (i.e., ⋈(π_R(I)) = I). It is easy to test if a decomposition R = {R₁, . . . , Rₙ} of (U, Σ) has the lossless join property. Consider the query q(I) = π_R₁(I) ⋈ · · · ⋈ π_Rₙ(I). The lossless join property means that q(I) = I for every instance I over (U, Σ). But q(I) = I simply says that I satisfies the jd ⋈[R]. Thus we have the following:

Theorem 11.2.3 Let (U, Σ) be a (full dependencies) schema and R a decomposition for (U, Σ). Then R has the lossless join property iff Σ |= ⋈[R].
The preceding implication can be tested using the chase (see Chapter 8), as illustrated
next.
Example 11.2.4 Recall the schema (Movies[TDA], {T → D}). As suggested earlier, a decomposition into BCNF is R = {TD, TA}. This decomposition has the lossless join property. The tableau (T, t) associated with the jd ⋈[TD, TA] is as follows:

    T    D    A
    t    d    a₁
    t    d₁   a
    ─────────────
    t    d    a

Consider the chase of (T, t) with {T → D}. Because the first two tuples agree on the T column, d and d₁ are merged because of the fd. Thus ⟨t, d, a⟩ ∈ chase(T, t, {T → D}). Hence T → D implies the jd ⋈[TD, TA], so R has the lossless join property. (See also Exercise 11.9.)
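Theorem 11.2.3 reduces the lossless join test to an implication Σ |= ⋈[R], which the chase decides when Σ is a set of fds. A minimal sketch of this special case (single-attribute right-hand sides and a naive fixpoint; the encoding is our own, not the general chase of Chapter 8):

```python
def lossless(attrs, decomposition, fds):
    """Test Σ |= ⋈[R] by chasing the tableau of the jd with the fds.

    attrs: attributes of U; decomposition: list of attribute lists;
    fds: list of (lhs_list, rhs_attr) pairs.  Returns True iff the chase
    produces a row consisting entirely of distinguished variables.
    """
    # row i carries the distinguished variable a for each attribute a in X_i,
    # and the fresh variable (a, i) otherwise
    rows = [{a: a if a in x else (a, i) for a in attrs}
            for i, x in enumerate(decomposition)]
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            for r in rows:
                for s in rows:
                    if all(r[a] == s[a] for a in lhs) and r[rhs] != s[rhs]:
                        a_, b_ = r[rhs], s[rhs]
                        # keep the distinguished variable (a plain string) if present
                        new, old = (a_, b_) if not isinstance(a_, tuple) else (b_, a_)
                        for t in rows:
                            for a in attrs:
                                if t[a] == old:
                                    t[a] = new
                        changed = True
    return any(all(row[a] == a for a in attrs) for row in rows)

# Movies[TDA] with T -> D, decomposed into TD and TA, as in the example above:
print(lossless(["T", "D", "A"], [["T", "D"], ["T", "A"]], [(["T"], "D")]))  # True
```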
Referring to the preceding example, note that it is possible to represent information in R that cannot be directly represented in Movies. Specifically, in the decomposed schema we can represent a movie with a director but no actors, and a movie with an actor but no director. This indicates, intuitively, that a decomposed schema may have more information capacity than the original (see Exercise 11.23). In practice, this additional capacity is exploited; in fact, it provides part of the solution of so-called deletion anomalies.
Remark 11.2.5 In the preceding example, we used the natural join operator to reconstruct decompositions. Interestingly, there are cases in which the natural join does not suffice. To show that a decomposition is one-to-one, it suffices to exhibit an inverse to the projection, called a reconstruction mapping. If Σ is permitted to include very general constraints expressed in first-order logic that may not be dependencies per se, then there are one-to-one decompositions whose reconstruction mappings are not the natural join (see Exercise 11.20).
We now consider criterion (3), the preservation of meta-data. In the context of decomposition, this is formalized in terms of dependency preservation: Given schema (U, Σ), which is replaced by a decomposition R = {R₁, . . . , Rₙ}, we would like to find for each j a family Σⱼ of dependencies over Rⱼ such that ∪ⱼΣⱼ is equivalent to the original Σ. In the case where Σ is a set of fds, we can make this much more precise. For V ⊆ U, let

π_V(Σ) = {X → A | XA ⊆ V and Σ |= X → A},

let Σⱼ = π_Xⱼ(Σ), and let Σ′ = ∪ⱼΣⱼ. Obviously, Σ |= Σ′. (See Proposition 10.2.4.) Intuitively, Σ′ consists of the dependencies implied by Σ that are local to the relations in the decomposition R. The decomposition R is said to be dependency preserving iff Σ′ ≡ Σ. In other words, Σ can be enforced by the dependencies local in the decomposition. It is easy to see that the decomposition of Example 11.2.4 is dependency preserving.
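Dependency preservation for fds can be tested in polynomial time without materializing the projections π_Xⱼ(Σ), by repeatedly closing within each component. The sketch below follows this classical test; the encoding of fds as (lhs, rhs) set pairs is our own convention.

```python
def closure(x, fds):
    """Attribute closure of x under fds given as (lhs, rhs) pairs of sets."""
    x, changed = set(x), True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= x and not rhs <= x:
                x |= rhs
                changed = True
    return x

def preserves_dependencies(fds, decomposition):
    """Test whether the union of the projected fds is equivalent to fds."""
    for lhs, rhs in fds:
        z = set(lhs)
        changed = True
        while changed:                       # push attributes through the components
            changed = False
            for xj in decomposition:
                gain = closure(z & xj, fds) & xj
                if not gain <= z:
                    z |= gain
                    changed = True
        if not rhs <= z:                     # lhs -> rhs is not enforceable locally
            return False
    return True

# The decomposition of Example 11.2.4 preserves T -> D:
print(preserves_dependencies([({"T"}, {"D"})], [{"T", "D"}, {"T", "A"}]))  # True
# Decomposing (ABC, {A -> B, B -> C}) into {AB, AC} loses B -> C:
fds = [({"A"}, {"B"}), ({"B"}, {"C"})]
print(preserves_dependencies(fds, [{"A", "B"}, {"A", "C"}]))  # False
```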
Given an fd schema (U, Σ) and V ⊆ U, π_V(Σ) has size exponential in V, simply because of trivial fds. But perhaps there is a smaller set of fds that is equivalent to π_V(Σ). A cover of a set Σ of fds is a set Σ′ of fds such that Σ ≡ Σ′. Unfortunately, in some cases the smallest cover for a projection π_V(Σ) is exponential in the size of Σ (see Exercise 11.11).
What about projections of sets of mvds? Suppose that Σ is a set of fds and mvds over U. Let V ⊆ U and

π_V^mvd(Σ) = {[X →→ (Y ∩ V)|(Z ∩ V)] | Σ |= [X →→ Y|Z] and X ⊆ V}.

Consider a decomposition R of (U, Σ). Viewed as constraints on U, the sets π_Rⱼ^mvd(Σ) are now embedded mvds. As we saw in Chapter 10, testing implication for embedded mvds is undecidable. However, the issue of testing for dependency preservation in the context of decompositions involving fds and mvds is rather specialized and remains open.
Fds and Decomposition into BCNF
We now present a simple algorithm for decomposing an fd schema (U, Σ) into BCNF relations. The decomposition produced by the algorithm has the lossless join property but is not guaranteed to be dependency preserving.
We begin with a simple example.
Example 11.2.6 Consider the schema (U, Σ), where U has attributes

TITLE  D_NAME  TIME  PRICE
TH_NAME  ADDRESS  PHONE

and Σ contains

FD1 : TH_NAME → ADDRESS, PHONE
FD2 : TH_NAME, TIME, TITLE → PRICE
FD3 : TITLE → D_NAME

Intuitively, schema (U, Σ) represents a fragment of the real-world situation represented by the semantic schema CINEMA-SEM.

A first step toward transforming this into a BCNF schema is to decompose using FD1, to obtain the database schema

({TH_NAME, ADDRESS, PHONE}, {FD1}),
({TH_NAME, TITLE, TIME, PRICE, D_NAME}, {FD2, FD3}).

Next FD3 can be used to split the second relation, obtaining

({TH_NAME, ADDRESS, PHONE}, {FD1}),
({TITLE, D_NAME}, {FD3}),
({TH_NAME, TITLE, TIME, PRICE}, {FD2}),

which is in BCNF. It is easy to see that this decomposition has the lossless join property and is dependency preserving. In fact, in this case, we obtain the same relational schema as would result from starting with a semantic schema.
We now present the following:
Algorithm 11.2.7 (BCNF Decomposition)

Input: A relation schema (U, Σ), where Σ is a set of fds.
Output: A database schema (R, Σ′) in BCNF

1. Set (R, Σ′) := {(U, Σ)}.
2. Repeat until (R, Σ′) is in BCNF:
   (a) Choose a relation schema (S[V], Γ) in R that is not in BCNF.
   (b) Choose nonempty, disjoint X, Y, Z ⊆ V such that
       (i) XYZ = V;
       (ii) Γ |= X → Y; and
       (iii) Γ ⊭ X → A for each A ∈ Z.
   (c) Replace (S[V], Γ) in R by (S₁[XY], π_XY(Γ)) and (S₂[XZ], π_XZ(Γ)).
   (d) If there are (S[V], Γ), (S′[V′], Γ′) in R with V ⊆ V′, then remove (S[V], Γ) from R.
It is easily seen that the preceding algorithm terminates (each iteration of the loop eliminates at least one violation of BCNF among finitely many possible ones). The following is easily verified (see Exercise 11.10):
Theorem 11.2.8 The BCNF Decomposition Algorithm yields a BCNF schema and a
decomposition that has the lossless join property.
What is the complexity of running the BCNF Decomposition Algorithm? The main expenses are (1) examining subschemas (S[V], Γ) to see if they are in BCNF and, if not, finding a way to decompose them; and (2) computing the projections of Σ. Task (1) is polynomial, but (2) is inherently exponential (see Exercise 11.11). This suggests a modification to the algorithm, in which only the relational schemas S[V] are computed at each stage, but Γ = π_V(Σ) is not. However, the problem of determining, given fd schema (U, Σ) and V ⊆ U, whether (V, π_V(Σ)) is in BCNF is co-NP-complete (see Exercise 11.12). Interestingly, a polynomial time algorithm does exist for finding some BCNF decomposition of an input schema (U, Σ) (see Exercise 11.13).
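A naive rendering of the algorithm is sketched below. All attribute closures are taken with respect to the original Σ, so the projections π_V(Σ) are never materialized; the subset search still makes this worst-case exponential, in line with the discussion above. The encoding of fds as (lhs, rhs) set pairs is our own.

```python
from itertools import combinations

def closure(x, fds):
    """Attribute closure of x under fds given as (lhs, rhs) pairs of sets."""
    x, changed = set(x), True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= x and not rhs <= x:
                x |= rhs
                changed = True
    return x

def bcnf_decompose(attrs, fds):
    """Decompose (attrs, fds) into BCNF components (a naive sketch)."""
    result, todo = [], [frozenset(attrs)]
    while todo:
        v = todo.pop()
        violation = None
        for k in range(1, len(v)):
            for x in map(set, combinations(sorted(v), k)):
                cx = closure(x, fds)
                y = (cx & v) - x
                if y and not v <= cx:        # fds |= X -> Y, X not a superkey of V
                    violation = (x, y)
                    break
            if violation:
                break
        if violation:
            x, y = violation                 # replace V by XY and V - Y
            todo += [frozenset(x | y), frozenset(v - y)]
        else:
            result.append(v)                 # no violation: V is in BCNF
    # step (2.d): drop any component contained in another
    return [v for v in result if not any(v < w for w in result)]

# The schema of Example 11.2.6 (attribute names abbreviated):
fds = [({"TH"}, {"ADDR", "PHONE"}),
       ({"TH", "TIME", "TITLE"}, {"PRICE"}),
       ({"TITLE"}, {"DIR"})]
for v in bcnf_decompose({"TH", "TIME", "TITLE", "PRICE", "ADDR", "PHONE", "DIR"}, fds):
    print(sorted(v))
```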
When applying BCNF decomposition to the schema of Example 11.2.6, the same
result is achieved regardless of the order in which the dependencies are applied. This is
not always the case, as illustrated next.
Example 11.2.9 Consider (ABC, {A → B, B → C}). This has two BCNF decompositions:

R₁ = {(AB, {A → B}), (BC, {B → C})}
R₂ = {(AB, {A → B}), (AC, ∅)}.

Note that R₁ is dependency preserving, but R₂ is not.
Fds, Dependency Preservation, and 3NF
It is easy to check that the schemas in Examples 11.2.4, 11.2.6, and 11.2.9 have dependency-preserving decompositions into BCNF. However, this is not always achievable, as shown by the following example.
Example 11.2.10 Consider a schema Lectures[C(ourse), P(rofessor), H(our)], where tuple ⟨c, p, h⟩ indicates that course c is taught by professor p at hour h. We assume that Hour ranges over weekday-time pairs (e.g., Tuesday at 4 PM) and that a given course may have lectures during several hours each week. Assume that the following two dependencies are to hold:

Σ = {C → P, PH → C}.

In other words, each course is taught by only one professor, and a professor can teach only one course at a given hour.

The schema (Lectures, Σ) is not in BCNF because Σ |= C → P, but Σ ⊭ C → H. Applying the BCNF Decomposition Algorithm yields R = {(CP, {C → P}), (CH, ∅)}. It is easily seen that {CP : C → P} ⊭ Σ, and so this decomposition does not preserve dependencies. A simple case analysis shows that there is no BCNF decomposition of Lectures that preserves dependencies.
This raises the question: Is there a less restrictive normal form for fds so that a lossless join decomposition that preserves dependencies can always be found? The affirmative answer is based on third normal form (3NF). To define it, we need some auxiliary notions. Suppose that (R[U], Σ) is an fd schema. A superkey of R is a set X ⊆ U such that Σ |= X → U. A key of R is a minimal superkey. A key attribute is an attribute A ∈ U that is in some key of R. We now have the following:

Definition 11.2.11 An fd schema (U, Σ) is in third normal form (3NF) if whenever X → A is a nontrivial fd implied by Σ, then either X is a superkey or A is a key attribute. An fd schema (R, Σ) is in 3NF if each of its components is.
Example 11.2.12 Recall the schema (Lectures, {C → P, PH → C}) described in Example 11.2.10. Here PH is a key, so P is a key attribute. Thus the schema is in 3NF.
A 3NF Decomposition Algorithm can be defined in analogy to the BCNF Decomposition Algorithm. We present an alternative approach, generally referred to as synthesis. Given a set Σ of fds, a minimal cover of Σ is a set Σ′ of fds such that

(a) each dependency in Σ′ has the form X → A, where A is an attribute;
(b) Σ ≡ Σ′;
(c) no proper subset of Σ′ implies Σ; and
(d) for each dependency X → A in Σ′, there is no Y ⊊ X such that Σ |= Y → A.

A minimal cover can be viewed as a reduced representative for a set of fds. It is straightforward to develop a polynomial time algorithm for producing a minimal cover of a set of fds (see Exercise 11.16).
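One such polynomial time procedure: split right-hand sides into single attributes, remove redundant attributes from left-hand sides, then remove redundant dependencies. A sketch (fds encoded as (lhs, rhs) set pairs, our own convention; this is one of several standard orderings of the steps):

```python
def closure(x, fds):
    """Attribute closure of x under fds given as (lhs, rhs) pairs of sets."""
    x, changed = set(x), True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= x and not rhs <= x:
                x |= rhs
                changed = True
    return x

def minimal_cover(fds):
    """Compute a minimal cover; returns a list of (frozenset lhs, attr) pairs."""
    # (a) single-attribute right-hand sides
    cover = list(dict.fromkeys((frozenset(l), a) for l, r in fds for a in r))
    # (d) remove redundant attributes from left-hand sides
    reduced = []
    for l, a in cover:
        l = set(l)
        for b in sorted(l):
            if len(l) > 1 and a in closure(l - {b}, fds):
                l.discard(b)
        reduced.append((frozenset(l), a))
    cover = list(dict.fromkeys(reduced))
    # (c) remove redundant dependencies
    for fd in list(cover):
        rest = [g for g in cover if g != fd]
        if fd[1] in closure(fd[0], [(l, {a}) for l, a in rest]):
            cover = rest
    return cover

# {A -> BC, B -> C, AB -> C} reduces to {A -> B, B -> C}:
fds = [({"A"}, {"B", "C"}), ({"B"}, {"C"}), ({"A", "B"}, {"C"})]
for l, a in sorted(minimal_cover(fds), key=lambda p: (sorted(p[0]), p[1])):
    print(sorted(l), "->", a)
```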
We now have the following:
Algorithm 11.2.13 (3NF Synthesis)

Input: A relation schema (U, Σ), where Σ is a set of fds that is a minimal cover. We assume that each attribute of U occurs in at least one fd of Σ.
Output: An fd schema (R, Σ′) in 3NF

1. If there is an fd X → A in Σ, where XA = U, then output (U, Σ).
2. Otherwise
   (a) for each fd X → A in Σ, include the relation schema (XA, {X → A}) in the output schema (R, Σ′); and
   (b) choose a key X of U under Σ, and include (X, ∅) in the output.

A central aspect of this algorithm is to form a relation XA for each fd X → A in Σ. Intuitively, then, the output relations result from combining or "synthesizing" attributes rather than decomposing the full attribute set.
The following is easily verified (see Exercise 11.17):

Theorem 11.2.14 The 3NF Synthesis Algorithm decomposes a relation schema into a database schema in 3NF that has the lossless join property and preserves dependencies.
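The synthesis algorithm can be sketched as follows, assuming the input fds already form a minimal cover with single-attribute right-hand sides. The merging of fds with equal left-hand sides and the superkey check that lets the sketch skip step (2.b) anticipate the improvements mentioned in the text; the (lhs, attr) encoding is our own.

```python
def closure(x, fds):
    """Attribute closure of x under fds given as (lhs, attr) pairs."""
    x, changed = set(x), True
    while changed:
        changed = False
        for lhs, a in fds:
            if set(lhs) <= x and a not in x:
                x.add(a)
                changed = True
    return x

def synthesize_3nf(attrs, fds):
    """3NF synthesis; fds must be a minimal cover covering every attribute."""
    u = set(attrs)
    for lhs, a in fds:                       # step 1: one fd already covers U
        if set(lhs) | {a} == u:
            return [u]
    schemas = {}                             # step 2a, merging equal left-hand sides
    for lhs, a in fds:
        schemas.setdefault(frozenset(lhs), set()).add(a)
    result = [set(l) | rhs for l, rhs in schemas.items()]
    if not any(u <= closure(v, fds) for v in result):
        key = set(u)                         # step 2b: add a key of U
        for b in sorted(u):                  # shrink U to a minimal superkey
            if u <= closure(key - {b}, fds):
                key.discard(b)
        result.append(key)
    return result

# The schema of Example 11.2.6 (attribute names abbreviated):
fds = [({"TH"}, "ADDR"), ({"TH"}, "PHONE"),
       ({"TH", "TIME", "TITLE"}, "PRICE"), ({"TITLE"}, "DIR")]
for v in synthesize_3nf({"TH", "ADDR", "PHONE", "TIME", "TITLE", "PRICE", "DIR"}, fds):
    print(sorted(v))
```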
Several improvements to the basic 3NF Synthesis Algorithm can be made easily. For example, different schemas obtained in step (2.a) can be merged if they come from fds with the same left-hand side. Step (2.b) is not needed if step (2.a) already produced a schema whose set of attributes is a superkey for (U, Σ). In many practical situations, it may be appropriate to omit step (2.b) of the algorithm. In that case, the decomposition preserves dependencies but does not necessarily satisfy the lossless join property.

In the preceding algorithm, it was assumed that each attribute of U occurs in at least one fd of Σ. Obviously, this may not always be the case; for example, the attribute A_NAME in Example 11.2.15(b) does not participate in fds. One approach to remedy this situation is to introduce symbolic fds. For instance, in that example one might include the fd TITLE, A_NAME → α₁, where α₁ is a new attribute. One relation produced by the algorithm will be {TITLE, A_NAME, α₁}. As a last step, attributes such as α₁ are removed.

In Example 11.2.9 we saw that the output of a BCNF decomposition may depend on the order in which fds are applied. In the case of the preceding algorithm for 3NF, the minimal cover chosen greatly impacts the final result.
Mvds and Decomposition into 4NF
A fundamental problem with BCNF decomposition and 3NF synthesis as just presented is
that they do not take into account the impact of mvds.
Example 11.2.15 (a) The schema (Studios[N(ame), D(irector), L(ocation)], {N →→ D|L}) is in BCNF and 3NF but has update anomalies. The mvd suggests a decomposition into ({Name, Director}, {Name, Location}).

(b) A related issue is that BCNF decompositions may not separate attributes that intuitively should be separated. For example, consider again the schema of Example 11.2.6, but suppose that the attribute A_NAME is included to denote actor names. Following the same decomposition steps as before, we obtain the schema

({TH_NAME, ADDRESS, PHONE}, {FD1}),
({TITLE, D_NAME}, {FD3}),
({TH_NAME, TITLE, TIME, PRICE, A_NAME}, {FD2}),

which can be further decomposed to

({TH_NAME, ADDRESS, PHONE}, {FD1}),
({TITLE, D_NAME}, {FD3}),
({TH_NAME, TITLE, TIME, PRICE}, {FD2}),
({TH_NAME, TITLE, TIME, A_NAME}, ∅).

Although there is a connection in the underlying data between TITLE and A_NAME, the last relation here is unnatural. If we assume that the mvd TITLE →→ A_NAME is incorporated into the original schema, we can further decompose the last relation and apply a step analogous to (2d) of the BCNF Decomposition Algorithm to obtain

({TH_NAME, ADDRESS, PHONE}, {FD1}),
({TITLE, D_NAME}, {FD3}),
({TH_NAME, TITLE, TIME, PRICE}, {FD2}),
({TITLE, A_NAME}, ∅).
Fourth normal form (4NF) was originally developed to address these kinds of situations. As suggested by the preceding example, an algorithm yielding 4NF decompositions can be developed along the lines of the BCNF Decomposition Algorithm. As with BCNF, the output of 4NF decomposition is a lossless join decomposition that is not necessarily dependency preserving.
A Note on Inds
In relational schema design starting with a semantic data model, numerous inds are typically generated. In contrast, the decomposition and synthesis approaches for refining relational schemas as presented earlier do not take inds into account. It is possible to incorporate inds into these approaches, but the specific choice of inds is dependent on the intended semantics of the target schema.
Example 11.2.16 Recall the schema (Movies[TDA], {T → D}) and decomposition into
(R1[TD], {T → D}) and (R2[TA], ∅).

(a) If all movies must have a director and at least one actor, then both R1[T] ⊆ R2[T]
and R2[T] ⊆ R1[T] should be included. In this case, the mapping from Movies
to its decomposed representation is one-to-one and onto.
(b) If the fd T → D is understood to mean that there is a total function from movies
to directors, but movies without actors are permitted, then the ind R2[T] ⊆
R1[T] should be included.
(c) Finally, suppose the fd T → D is understood to mean that each movie has at
most one director (i.e., it is a partial function), and suppose that a movie can
have no actor. Then an additional relation R3[T] should be added to hold the
titles of all movies, along with inds R1[T] ⊆ R3[T] and R2[T] ⊆ R3[T].
More generally, what if one is to refine a relational schema (R, Σ ∪ Γ), where Σ is
a set of tagged fds and mvds and Γ is a set of inds? It may occur that there is an ind
Ri[X] ⊆ Rj[Y], and either X or Y is to be split as the result of a decomposition step.
The desired semantics of the target schema can be used to select between a variety of
heuristic approaches to preserving the semantics of this ind. If Γ consists of unary inds,
such splitting cannot occur. Speaking intuitively, if the inds of Γ are key based, then the
chances of such splitting are reduced.
11.3 Universal Relation Assumption
In the preceding section, we saw that the decomposition and synthesis approaches to
relational schema design assume the pure URA. This section begins by articulating some
of the implications that underlie the pure URA. It then presents the weak URA, which
provides an intuitively natural mechanism for viewing a relational database instance I as if
it were a universal relation.
Underlying Assumptions
Suppose that an fd schema (U[Z], Σ) is given and that decomposition or synthesis will
be applied. One of several different database schemas might be produced, but presumably
all of them carry roughly the same semantics. This suggests that the attributes in Z can
be grouped into relation schemas in several different ways, without substantially affecting
their underlying semantics. Intuitively, then, it is the attributes themselves (along with the
dependencies in Σ), rather than the attributes as they occur in different relation schemas,
that carry the bulk of the semantics in the schema. The notion that the attributes can
represent a substantial portion of the semantics of an application is central to schema design
based on the pure URA.
When decomposition and synthesis were first introduced, the underlying implications
of this notion were not well understood. Several intuitive assumptions were articulated
that attempted to capture these implications. We describe here two of the most important
assumptions. Any approach to relational schema design based on the pure URA should also
abide by these two assumptions.
Universal Relation Scheme Assumption: This states that if an attribute name appears in two
or more places in a database schema, then it refers to the same entity set in each place.
For example, an attribute name Number should not be used for both serial numbers and
employee numbers; rather two distinct attribute names Serial# and Employee# should
be used.
Unique Role Assumption: This states that for each set of attributes there is a unique
relationship between them. This is sometimes weakened to say that there may be several
relationships, but one is deemed primary. This is illustrated in the following example.
Example 11.3.1 (a) Recall in Example 11.2.15(b) that D_NAME and A_NAME were
used for director and actor names, respectively. This is because there were two possible
relationships between movies and persons.
(b) For a more complicated example, consider a schema for bank branches that in-
cludes attributes for B(ranch), L(oan), (checking) A(ccount), and C(ustomer). Suppose
there are four relations
BL, which holds data about branches and loans they have given
BA, which holds data about branches and checking accounts they provide
CL, which holds data about customers and loans they have
CA, which holds data about customers and checking accounts they have.
This design does not satisfy the unique role assumption, mainly because of the cycle in the
schema. For example, consider the relationship between branches and customers. In fact,
there are two relationships: one via loans and one via accounts. Thus a request for the data
in the relationship between branches and customers is somewhat ambiguous, because it
could mean tuples stemming from either of the two relationships or from the intersection or union of
both of them.
One solution to this ambiguity is to break the cycle. For example, we could replace
the Customer attribute by the two attributes L-C(ustomer) and A-C(ustomer). Now the user
can specify the desired relationship by using the appropriate attribute.
The Weak Universal Relation Assumption
Suppose that schema (U, Σ) has decomposition (R, Σ′) (with R = {R1, . . . , Rn}). When
studying decomposition, we focused primarily on instances I of (R, Σ′) that were the image
of some instance of (U, Σ) under the decomposition mapping πR. In particular, such
instances I are globally consistent. [Recall from Chapter 6 that instance I is globally
consistent if for each j ∈ [1, n], πRj(⋈ I) = I(Rj); i.e., no tuple of I(Rj) is dangling
relative to the full join.] However, in many practical situations it might be useful to use
the decomposed schema R to store instances I that are not globally consistent.
Example 11.3.2 Recall the schema (Movies[TDA], {T → D}) from Example 11.2.4 and
its decomposition {TD, TA}. Suppose that for some movie the director is known, but no
actors are known. As mentioned previously, this information is easily stored in the decom-
posed database, but not in the original. The impossibility of representing this information
in the original schema was one of the anomalies that motivated the decomposition in the
first place.
Suppose that fd schema (U, Σ) has decomposition (R, Σ′) = {(R1, Σ1), . . . , (Rn, Σn)}.
Suppose also that I is an instance of R such that (1) I(Rj) ⊨ Σj for each j, but (2) I is
AB   A  B       AB   A  B       AB   A  B
     a  b            a  b            a  b
                     a′ b

BC   B  C       BC   B  C       BC   B  C
     b  c            b  c            b  c

ACD  A  C  D    ACD  A  C  D    ACD  A  C  D
     a  c  d         a  c  d         a  c  d
                     a′ c  d′        a′ c  d′

     I1              I2              I3

Figure 11.4: Instances illustrating weak URA
not necessarily globally consistent. Should I be considered a valid instance of schema
(R, Σ′)? More generally, given a schema (U, Σ), a decomposition R of U, and a (not
necessarily globally consistent) instance I over R, how should we define the notion of
satisfaction of Σ by I?
The weak universal relation assumption (weak URA) provides one approach for an-
swering this question. Under the weak URA, we say that I satisfies Σ if there is some
instance J ∈ sat(U, Σ) such that I(Rj) ⊆ πRj(J) for each j ∈ [1, n]. In this case, J is
called a weak instance for I.
Example 11.3.3 Let U = ABCD, Σ = {A → B, BC → D}, and R = {AB, BC, ACD}.
Consider the three instances of R shown in Fig. 11.4. The instance I1 satisfies Σ under the
weak URA, because J1 = {⟨a, b, c, d⟩} is a weak instance.
On the other hand, I2, which contains I1, does not satisfy Σ under the weak URA. To
see this, suppose that J2 is a weak instance for I2. Then J2 must contain the following (not
necessarily distinct) tuples:
t1 = ⟨a, b, c1, d1⟩
t2 = ⟨a′, b, c2, d2⟩
t3 = ⟨a3, b, c, d3⟩
t4 = ⟨a, b4, c, d⟩
t5 = ⟨a′, b5, c, d′⟩
where the subscripted constants may be new. Because J2 ⊨ A → B, by considering the
pairs t1, t4 and t2, t5, we see that b4 = b5 = b. Next, because J2 ⊨ BC → D, and by
considering the pair t4, t5, we have that d = d′, a contradiction.
Finally, I3 does satisfy Σ under the weak URA.
As suggested by the preceding example, testing whether an instance I over R satisfies
a set Σ of fds over U under the weak URA can be performed using the chase. To do that, it
suffices to construct a table over U by padding the tuples from each Rj with distinct new
variables. The resulting table is chased with the dependencies in Σ. If the chase fails, there
is no weak instance for I. On the other hand, a successful chase provides a weak instance
for I by simply replacing each remaining variable with a distinct new constant.
This yields the following (see Exercise 11.27):
Theorem 11.3.4 Let Σ be a set of fds over U and R a decomposition of U. Testing
whether I over R satisfies Σ under the weak URA can be performed in polynomial time.
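The chase-based test behind this theorem can be sketched directly. The following Python fragment is illustrative (the names has_weak_instance, etc., are not from the book): it pads the stored tuples out to the universal schema with fresh variables and applies fd (egd) chase steps until failure or a fixpoint.

```python
# Illustrative sketch of the chase-based test of Theorem 11.3.4,
# using the data of Example 11.3.3.

def has_weak_instance(U, components, I, fds):
    """U: attribute list; components: {name: attribute list};
    I: {name: list of tuples}; fds: list of (lhs_attributes, rhs_attribute).
    Returns True iff I has a weak instance (the chase succeeds)."""
    fresh = iter(range(10**9))
    rows = []
    for name, attrs in components.items():       # pad each stored tuple to U
        for t in I.get(name, []):
            row = {A: ("var", next(fresh)) for A in U}
            row.update(dict(zip(attrs, t)))
            rows.append(row)

    def is_var(v):
        return isinstance(v, tuple) and v and v[0] == "var"

    changed = True
    while changed:                               # chase with the fds
        changed = False
        for lhs, rhs in fds:
            for r in rows:
                for s in rows:
                    if r is s or any(r[A] != s[A] for A in lhs):
                        continue
                    x, y = r[rhs], s[rhs]
                    if x == y:
                        continue
                    if not is_var(x) and not is_var(y):
                        return False             # two constants equated: failure
                    old, new = (x, y) if is_var(x) else (y, x)
                    for row in rows:             # substitute old by new everywhere
                        for A in U:
                            if row[A] == old:
                                row[A] = new
                    changed = True
    return True          # remaining variables can become fresh constants

U = ["A", "B", "C", "D"]
R = {"AB": ["A", "B"], "BC": ["B", "C"], "ACD": ["A", "C", "D"]}
fds = [(["A"], "B"), (["B", "C"], "D")]
I2 = {"AB": [("a", "b"), ("a'", "b")], "BC": [("b", "c")],
      "ACD": [("a", "c", "d"), ("a'", "c", "d'")]}
# has_weak_instance(U, R, I2, fds) is False, matching Example 11.3.3
```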
Of course, the chasing technique can be extended to arbitrary egds, although the
complexity jumps to exptime-complete.
What about full tgds? Recall that full tgds can always be satisfied by adding new
tuples to an instance. Let Σ be a set of full dependencies. It is easy to see that I satisfies Σ
under the weak URA iff I satisfies Σ* ∩ {σ | σ is an egd} under the weak URA, where Σ*
denotes the set of dependencies implied by Σ.
Querying under the Weak URA
Let (U, Σ) be a schema, where Σ is a set of full dependencies, and let R be a decomposition
of U. Let us assume the weak URA, and suppose that database instance I over R satisfies Σ.
How should queries against I be answered? One approach is to consider the query against
all weak instances for I and then take the intersection of the answers. That is,

q_weak(I) = ∩ {q(I′) | I′ is a weak instance of I}.

We develop now a constructive method for computing q_weak.
Given instance I of R, the representative instance of I is defined as follows: For each
component Ij of I, let I′j be the result of extending Ij to be a free instance over U by
padding tuples with distinct variables. Set I′ = ∪ {I′j | j ∈ [1, n]}. Now apply the chase
using Σ to obtain the representative instance rep(I, Σ) (or the empty instance, if two
distinct constants are to be identified). Note that some elements of rep(I, Σ) may have
variables occurring in them.
For X ⊆ U, let π↓X(rep(I, Σ)) denote the set of variable-free tuples (i.e., tuples with
no variables present) in πX(rep(I, Σ)). The following can now be verified (see Exer-
cise 11.28).
Proposition 11.3.5 Let (U, Σ), R and I be as above, and let X ⊆ U. Then

(a) [πX]_weak(I) = π↓X(rep(I, Σ)).
(b) If Σ is a set of fds, then [πX]_weak(I) can be computed in ptime.
This proposition provides the basis of a constructive method for evaluating an arbitrary
algebra query q under the weak URA. Furthermore, if Σ is a set of fds, then evaluating q
will take time at most polynomial in the size of the input instance. This approach can be
generalized to the case where Σ is a set of full dependencies, but computing the projection
is then exptime-complete.
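A sketch of Proposition 11.3.5(a) for the fd case follows. It repeats the padding-and-chase construction in self-contained form and then keeps only the variable-free tuples of the projection; all names are illustrative, not from the book.

```python
# Illustrative sketch: compute [pi_X]_weak(I) as the variable-free part of
# pi_X applied to the representative instance rep(I, Sigma), fds only.

def weak_projection(U, components, I, fds, X):
    fresh = iter(range(10**9))
    rows = []
    for name, attrs in components.items():       # pad tuples to U with variables
        for t in I.get(name, []):
            row = {A: ("var", next(fresh)) for A in U}
            row.update(dict(zip(attrs, t)))
            rows.append(row)

    def is_var(v):
        return isinstance(v, tuple) and v and v[0] == "var"

    changed = True
    while changed:                               # chase with the fds (egd steps)
        changed = False
        for lhs, rhs in fds:
            for r in rows:
                for s in rows:
                    if r is s or any(r[A] != s[A] for A in lhs):
                        continue
                    x, y = r[rhs], s[rhs]
                    if x == y:
                        continue
                    if not is_var(x) and not is_var(y):
                        return set()             # chase fails: no weak instance
                    old, new = (x, y) if is_var(x) else (y, x)
                    for row in rows:
                        for A in U:
                            if row[A] == old:
                                row[A] = new
                    changed = True
    # keep only the tuples of rep(I, Sigma) that are variable-free on X
    return {tuple(r[A] for A in X) for r in rows
            if not any(is_var(r[A]) for A in X)}

U = ["A", "B", "C", "D"]
R = {"AB": ["A", "B"], "BC": ["B", "C"], "ACD": ["A", "C", "D"]}
fds = [(["A"], "B"), (["B", "C"], "D")]
I3 = {"AB": [("a", "b")], "BC": [("b", "c")],
      "ACD": [("a", "c", "d"), ("a'", "c", "d'")]}
```

On the instance I3 of Example 11.3.3, projecting on AB yields only ⟨a, b⟩: the chased row stemming from ⟨a′, c, d′⟩ still carries a variable in its B column and is filtered out.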
Bibliographic Notes
The recent book [MR92] provides an in-depth coverage of relational schema design, in-
cluding both the theoretical underpinnings and other, less formal factors that go into good
design. Extensive treatments of the topic are also found in [Dat86, Fv89, Ull88, Vos91].
References [Ken78, Ken79, Ken89] illustrate the many difficulties that arise in schema de-
sign, primarily with a host of intriguing examples that show how skilled the human mind
is at organizing diverse information and how woefully limiting data models are.
Surveys of semantic data models include [Bor85, HK87, PM88], and the book [TL82];
[Vos91] includes a chapter on this topic. Prominent early semantic data models include the
Entity-Relationship (ER) model [Che76] (see also [BLN86, MR92, TYF86]), the Func-
tional Data Model [Shi81, HK81], the Semantic Data Model [HM81], and the Semantic
Binary Data Model [Abr74]. An early attempt to incorporate semantic data modeling con-
structs into the relational model is RM/T [Cod79]; more recently there have been various
extensions of the relational model to incorporate object-oriented data modeling features
(e.g., [SJGP90]). Many commercial systems support tuple IDs, which can be viewed
as a form of OID. Galileo [ACO85], Taxis [MBW80], and FQL [BFN82] are program-
ming languages that support constructs stemming from semantic data models. The IFO
[AH87] model is a relatively simple, formal semantic data model that subsumes the struc-
tural components of the aforementioned semantic models and several others. Reference
[AH87] clarifies issues concerning ISA hierarchies in semantic schemas (see also [BLN86,
Cod79, DH84]) and studies the propagation of updates.
Reference [Che76] describes a translation of the ER model into the relational model, so
that the resulting schema is in BCNF. From a practical perspective, this has become widely
accepted as the method of choice for designing relational schemas; [TYF86] provides
a subsequent perspective on this approach. There has also been considerable work on
understanding the properties of relational schemas resulting from ER schemas and mapping
relational schemas into ER ones. Reference [MR92] provides an in-depth discussion of this
area.
Reference [LV87] presents a translation from a semantic to the relational model
and studies the constraints implied for the relational schema, including cardinality con-
straints. The logical implication of constraints within a semantic model schema is studied
in [CL94]. References [Lie80, Lie82] study the relationship of schemas from the network
and relational models.
At a fundamental level, an important aspect of schema design is to replace one schema
with another that can hold essentially the same information. This raises the issue of de-
veloping formal methods for comparing the relative information capacity of different
schemas. Early work in this direction for the relational model includes [AABM82] and
[BMSU81] (see Exercise 11.22). More abstract work is found in [HY84, Hul86] (see Ex-
ercises 11.23 and 11.24), which forms the basis for Theorem 11.1.3. Reference [MS92]
Bibliographic Notes 265
provides justication for translations from the Entity-Relationship model into the relational
model using notions of relative information capacity. Formal notions of relative informa-
tion capacity have also been applied in the context of schema integration and translation
[MIR93] and heterogeneous databases [MIR94]. A very abstract framework for comparing
schemas from different data models is proposed in [AT93].
The area of normal forms and relational database design was studied intensively in the
1970s and early 1980s. Much more complete coverage of this topic than presented here may
be found in [Dat86, Mai83, Ull88, Vos91]. We mention some of the most important papers
in this area. First normal form [Cod70] is actually fundamental to the relational model: A
relation is in first normal form (1NF) if each column contains atomic values. In Chapter 20
this restriction shall be relaxed to permit relations some of whose columns themselves
hold relations (which again may not be in first normal form). References [Cod71, Cod72a]
raised the issue of update anomalies and initiated the search for normal forms that prevent
them by introducing second and third normal forms. The definition of 3NF used here is
from [Zan82]. (Second normal form is less restrictive than third normal form.) Boyce-
Codd normal form (BCNF) was introduced in [Cod74] to provide a normal form simpler
than 3NF. Another improvement of 3NF is proposed in [LTK81]. Fourth normal form
was introduced in [Fag77b]; Example 11.2.15 is inspired from that reference. Even richer
normal forms include project-join normal form (PJ/NF) [Fag79] and domain-key normal
form [Fag81].
In addition to introducing second and third normal form, [Cod72a] initiated the
search for normalization algorithms by proposing the first decomposition algorithms. This
spawned other research on decomposition [DC72, RD75, PJ81] and synthesis [BST75,
Ber76b, WW75]. The fact that these two criteria are not equivalent was stressed in [Ris77],
where it is proposed that both be attempted. Early surveys on these approaches to rela-
tional design include [BBG78, Fag77a, Ris78]. Algorithms for synthesis into 3NF include
[Ber76b, BDB79], for decomposition into BCNF include [TF82], and for decomposition
into 4NF include [Fag77b]. Computational issues raised by decompositions are studied in
[LO78, BB79, FJT83, TF82] and elsewhere. Reference [Got87] presents a good heuristic
for finding covers of the projection of a set of fds. The 3NF Synthesis Algorithm presented
in this chapter begins with a minimal cover of a set of fds; [Mai80] shows that minimal
covers can be found in polynomial time.
The more formal study of decompositions and their properties was initiated in [Ris77],
which considered decompositions into two-element sets and proposed the notion of inde-
pendent components; and [AC78], which studied decompositions with lossless joins and
dependency preservation. This was extended independently to arbitrary decompositions
over fds by [BR80] and [MMSU80]. Lossless join was further investigated in [Var82b]
(see Exercise 11.20).
The notion that not all integrity constraints specied in a schema should be considered
for the design process was implicit in various works on semantic data modeling (e.g.,
[Che76, Lie80, Lie82]). It was stated explicitly in connection with relational schema design
in [FMU82, Sci81]. An extensive application of this approach to develop an approach to
schema design that incorporates both fds and mvds is [BK86].
A very different form of decomposition, called horizontal decomposition, is intro-
duced in [DP84]. This involves splitting a relation into pieces, each of which satises a
given set of fds.
266 Design and Dependencies
The universal relation assumption has a long history; the reader is directed to [AA93,
MUV84, Ull89b] for a much more complete coverage of this topic than found in this chap-
ter. The URA was implicit in much of the early work on normal forms and decompositions;
this was articulated more formally in [FMU82, MUV84]. The weak URA was studied in
connection with query processing in [Sag81, Sag83], and in connection with fd satisfac-
tion in [Hon82]. Proposition 11.3.5(a) is due to [MUV84] and part (b) is due to [Hon82];
the extension to full dependencies is due to [GMV86]. Reference [Sci86] presents an in-
teresting comparison of the relational model with inclusion dependencies to a variant of
the universal relation model and shows an equivalence when certain natural restrictions are
imposed.
A topic related to the URA is that of universal relation interfaces (URI); these attempt
to present a user view of a relational database in the form of a universal relation. An
excellent survey of research on this topic is found in [MRW86]; see also [AA93, Osb79,
Ull89b].
Exercises
Exercise 11.1
(a) Extend the instance of Example 11.1.1 for CINEMA-SEM so that it has at least two
objects in each class.
(b) Let CINEMA-SEM′ be the same as CINEMA-SEM, except that a complex value
class Movie_Actor is used in CINEMA-SEM′ in place of the attributes acted_in and
has_actors. How would the instance you constructed for part (a) be represented in
CINEMA-SEM′?
Exercise 11.2
(a) Suppose that in CINEMA-SEM some theaters do not have phones. Describe how the
simulation CINEMA-REL can be changed to reflect this (without using null values).
What dependencies are satisfied?
(b) Do the same for the case where some persons may have more than one citizenship.
Exercise 11.3
(a) Describe a general algorithm for translating GSM schemas with keys into relational
ones.
(b) Verify Theorem 11.1.3.
(c) Verify that the relational schema resulting from a GSM schema is in 4NF and has
acyclic and key-based inds.
Exercise 11.4 [MR88, MR92] Let R be a relational database schema, Σ a set of tagged fds
for R, and Γ a set of inds for R. Assume that (R, Σ) is in BCNF and that Γ is acyclic and
consists of key-based inds (as will arise if R is the simulation of a GSM schema). Prove that
Σ and Γ are independent. Hint: Show that if I is an instance of R satisfying Σ, then no fd can be
applied during chasing of I by (Σ ∪ Γ). Now apply Theorem 9.4.5.
Exercise 11.5 [Fag79] Let (R, Σ) be a relation schema, and let Σ′ be the set of key depen-
dencies implied by Σ. Show that R is in 4NF iff each nontrivial mvd implied by Σ is implied
by Σ′.
Exercises 267
Exercise 11.6 [DF92] A key dependency X → U is simple if X is a singleton.
(a) Suppose that (R, Σ) is in BCNF, where Σ may involve both fds and mvds. Suppose
further that (R, Σ) has at least one simple key. Prove that (R, Σ) is in 4NF.
(b) Suppose that (R, Σ) is in 3NF and that each key of Σ is simple. Prove that (R, Σ) is
in BCNF.
A schema (R, Σ) is in project-join normal form (PJ/NF) if each JD implied by Σ is implied
by the key dependencies implied by Σ.
(c) Show that if (R, Σ) is in 3NF and each key of Σ is simple, then (R, Σ) is in PJ/NF.
Exercise 11.7 Let (U, Σ) be a schema, where Σ contains possibly fds, mvds, and jds. Show
that (a) (U, Σ) is in BCNF implies (U, Σ) is in 3NF; (b) (U, Σ) is in 4NF implies (U, Σ) is in
BCNF; (c) (U, Σ) is in PJ/NF implies (U, Σ) is in 4NF.
Exercise 11.8 [BR80, MMSU80] Prove Theorem 11.2.3.
Exercise 11.9 Recall the schema (Movies[TDA], {T → D}). Consider the decomposition
R1 = {(TD, {T → D}), (DA, ∅)}.
(a) Show that this does not have the lossless join property.
(b) Show that this decomposition is not one-to-one. That is, exhibit two distinct instances
I, I′ of (Movies, {T → D}) such that πR1(I) = πR1(I′).
Exercise 11.10 Verify Theorem 11.2.8. Hint: To prove the lossless join property, use repeated
applications of Proposition 8.2.2.
Exercise 11.11 [FJT83] For each n ≥ 0, describe an fd schema (U, Σ) and V ⊆ U, such that
Σ has 2n + 1 dependencies but the smallest cover for πV(Σ) has at least 2^n elements.
Exercise 11.12
(a) Let (U[Z], Σ) be an fd schema. Give a polynomial time algorithm for determining
whether this relation schema is in BCNF. (In fact, there is a linear time algorithm.)
(b) [BB79] Show that the following problem is co-np-complete. Given fd schema
(R[U], Σ) and V ⊆ U, determine whether (V, πV(Σ)) is in BCNF. Hint: Reduce
to the hitting set problem [GJ79].
Exercise 11.13 [TF82] Develop a polynomial time algorithm for finding BCNF decompo-
sitions. Hint: First show that each two-attribute fd schema is in BCNF. Then show that if
(S[V], Σ) is not in BCNF, then there are A, B ∈ V such that Σ ⊨ (V − AB) → A.
Exercise 11.14 Recall the schema Showings[Th(eater), Sc(reen), Ti(tle), Sn(ack)] of Sec-
tion 8.1, which satisfies the fd Th,Sc → Ti and the mvd Th →→ Sc,Ti | Sn. Consider the two
decompositions

R1 = {{Th, Sc, Ti}, {Th, Sn}}
R2 = {{Th, Sc, Ti}, {Th, Sc, Sn}}.

Are they one-to-one? dependency preserving? Describe anomalies that can arise if either of
these decompositions is used.
Exercise 11.15 [BB79] Verify that the schema of Example 11.2.10 has no BCNF decomposi-
tion that preserves dependencies.
268 Design and Dependencies
Exercise 11.16 [Mai80] Develop a polynomial time algorithm that finds a minimal cover of a
set of fds.
Exercise 11.17 Prove Theorem 11.2.14.
Exercise 11.18 [Mai83] Show that a schema (R[U], Σ) with 2n attributes and 2n fds can
have as many as 2^n keys.
Exercise 11.19 [LO78] Let (S[V], Σ) be an fd schema. Show that the following problem is
np-complete: Given A ∈ V, is there a nontrivial fd Y → A implied by Σ, where Y is not a
superkey and A is not a key attribute?
Exercise 11.20 [Var82b] For this exercise, you will exhibit an example of a schema (R, Σ),
where Σ consists of dependencies expressed in first-order logic (which may not be embedded
dependencies) and a decomposition R of R such that πR is one-to-one but does not have the
lossless join property.
Consider the schema R[ABCD]. Given t ∈ I ∈ inst(R), t[A] is a key element for AB in I
if there is no s ∈ I with t[A] = s[A] and t[B] ≠ s[B]. The notion of t[C] being a key element
for CD is defined analogously. Let Σ consist of the constraints
(i) ∃t ∈ I such that both t[A] and t[C] are key elements.
(ii) If t ∈ I, then t[A] is a key element or t[C] is a key element.
(iii) If s, t ∈ I and s[A] or t[C] is a key element, then the tuple u is in I, where u[AB] =
s[AB] and u[CD] = t[CD].
Let R = {R1[AB], R2[CD]} be a decomposition of (R, Σ).
(a) Show that the decomposition R for (R, Σ) is one-to-one.
(b) Exhibit a reconstruction mapping for R. (The natural join will not work.)
Exercise 11.21 This and the following exercise provide one kind of characterization of the
relative information capacity of decompositions of relation schemas. Let U be a set of attributes,
let γ = {X1, . . . , Xn} be a nonempty family of subsets of U, and let X = X1 ∪ · · · ∪ Xn. The
project-join mapping determined by γ, denoted PJγ, is a mapping from instances over U to
instances over X defined by PJγ(I) = πX1(I) ⋈ · · · ⋈ πXn(I). γ is full if X = U, in which
case PJγ is a full project-join mapping.
Prove the following for instances I and J over U:
(a) πX(I) ⊆ PJγ(I)
(b) PJγ(PJγ(I)) = PJγ(I)
(c) if I ⊆ J then PJγ(I) ⊆ PJγ(J).
Exercise 11.22 [BMSU81] Let U be a set of attributes. If γ = {X1, . . . , Xn} is a nonempty
full family of subsets of U, then Fixpt(γ) denotes {I over U | PJγ(I) = I} (see the preceding
exercise). For γ and δ nonempty full families of subsets of U, γ covers δ, denoted γ ≥ δ, if
for each set X ∈ δ there is a set Y ∈ γ such that X ⊆ Y. Prove for nonempty full families γ, δ
of subsets of U that the following are equivalent:
(a) γ ≥ δ
(b) PJγ(I) ⊆ PJδ(I) for each instance I over U
(c) Fixpt(δ) ⊆ Fixpt(γ).
Exercise 11.23 Given relational database schemas S and S′, we say that S′ dominates S using
the calculus, denoted S ≤_calc S′, if there are calculus queries q : Inst(S) → Inst(S′) and q′ :
Inst(S′) → Inst(S) such that q′ ∘ q is the identity on Inst(S). Let R be the schema (ABC,
{A → B}) and ℛ the decomposition {(AB, {A → B}), (AC, ∅)}. (a) Verify that R ≤_calc ℛ.
(b) Show that ℛ ≰_calc R. Hint: For schemas S and S′, S′ dominates S absolutely, denoted
S ≤_abs S′, if there is some n ≥ 0 such that for each finite subset d ⊆ dom with |d| ≥ n,
|{I ∈ Inst(S) | adom(I) ⊆ d}| ≤ |{I ∈ Inst(S′) | adom(I) ⊆ d}|. Show that S ≤_calc S′
implies S ≤_abs S′. Then show that ℛ ≰_abs R.
Exercise 11.24 [HY84] Let A and B be relational attributes. Consider the complex value type
T = ⟨A, {B}⟩, where each instance of T is a finite set of pairs having the form ⟨a, b̄⟩, where
a ∈ dom and b̄ is a finite subset of dom. Show that for each relational schema R, R ≤_abs T
and T ≰_abs R. (See Exercise 11.23 for the definition of ≤_abs.)
Exercise 11.25 [BV84b, CP84]
(a) Let (U, Σ) be a (full dependencies) schema and R an acyclic decomposition of U (in
the sense of acyclic joins). Then πR is one-to-one iff R has the lossless join property.
Hint: First prove the result for the case where the decomposition has two elements
(i.e., it is based on an mvd). Then generalize to acyclic decompositions, using an
induction based on the GYO algorithm.
(b) [CKV90] Show that (a) can be generalized to include unary inds in Σ.
Exercise 11.26 [Hon82] Let (U, Σ) be an fd schema and R = {R1, . . . , Rn} a decomposition
of U. Consider the following notions of satisfaction by I over R of Σ:

I ⊨1 Σ: if Ij ⊨ πRj(Σ) for each j ∈ [1, n].
I ⊨2 Σ: if ⋈ I ⊨ Σ.
I ⊨3 Σ: if I = πR(I′) for some I′ over U such that I′ ⊨ Σ.

(a) Show that ⊨1 and ⊨2 are incomparable.
(b) Show that if R preserves dependencies, then ⊨1 implies ⊨2.
(c) What is the relationship of ⊨1 and ⊨2 to ⊨3?
(d) What is the relationship of all of these to the notion of satisfaction based on the weak
URA?
Exercise 11.27 [Hon82] Prove Theorem 11.3.4.
Exercise 11.28 [MUV84, Hon82] Prove Proposition 11.3.5.
PART D DATALOG AND RECURSION
In Part B, we considered query languages ranging from conjunctive queries to first-order
queries in the three paradigms: algebraic, logic, and deductive. We did this by enriching
the conjunctive queries first with union (disjunction) and then with difference (negation).
In this part, we further enrich these languages by adding recursion. First we add recursion
to the conjunctive queries, which yields datalog. We study this language in Chapter 12.
Although it is too limited for practical use, datalog illustrates some of the essential aspects
of recursion. Furthermore, most existing optimization techniques have been developed for
datalog.
Datalog owes a great debt to Prolog and the logic-programming area in general. A
fundamental contribution of the logic-programming paradigm to relational query languages
is its elegant notation for expressing recursion. The perspective of databases, however, is
significantly different from that of logic programming. (For example, in databases datalog
programs dene mappings from instances to instances, whereas logic programs generally
carry their data with them and are studied as stand-alone entities.) We adapt the logic-
programming approach to the framework of databases.
We study evaluation techniques for datalog programs in Chapter 13, which covers
the main optimization techniques developed for recursion in query languages, including
seminaive evaluation and magic sets.
Although datalog is of great theoretical importance, it is not adequate as a practi-
cal query language because of the lack of negation. In particular, it cannot express even
the first-order queries. Chapters 14 and 15 deal with languages combining recursion and
negation, which are proper extensions of rst-order queries. Chapter 14 considers the issue
of combining negation and recursion. Languages are presented from all three paradigms,
which support both negation and recursion. The semantics of each one is dened in fun-
damentally operational terms, which include datalog with negation and a straightforward,
xpoint semantics. As will be seen, the elegant correspondence between languages in the
three paradigms is maintained in the presence of recursion.
Chapter 15 considers approaches to incorporating negation in datalog that are closer
in spirit to logic programming. Several important semantics for negation are presented,
including stratification and well-founded semantics.
12 Datalog
Alice: What do we see next?
Riccardo: We introduce recursion.
Sergio: He means we ask queries about your ancestors.
Alice: Are you leading me down a garden path?
Vittorio: Kind of queries related to paths in a graph call for recursion
and are crucial for many applications.
For a long time, relational calculus and algebra were considered the database languages.
Codd even defined as complete a language that would yield precisely relational
calculus. Nonetheless, there are simple operations on data that cannot be realized in the
calculus. The most conspicuous example is graph transitive closure. In this chapter, we
study a language that captures such queries and is thus more complete than relational
calculus.¹
The language, called datalog, provides a feature not encountered in languages
studied so far: recursion.
We start with an example that motivates the need for recursion. Consider a database
for the Parisian Metro. Note that this database essentially describes a graph. (Database
applications in which part of the data is a graph are common.) To avoid making the
Metro database too static, we assume that the database is describing the available metro
connections on a day of strike (not an unusual occurrence). So some connections may
be missing, and the graph may be partitioned. An instance of this database is shown in
Fig. 12.1.
Natural queries to ask are as follows:
(12.1) What are the stations reachable from Odeon?
(12.2) What lines can be reached from Odeon?
(12.3) Can we go from Odeon to Chatelet?
(12.4) Are all pairs of stations connected?
(12.5) Is there a cycle in the graph (i.e., a station reachable in one or more stops from
itself)?
Unfortunately, such queries cannot be answered in the calculus without using some a
¹We postpone a serious discussion of completeness until Part E, where we tackle fundamental issues
such as: What is a formal definition of data manipulation (as opposed to arbitrary computation)?
What is a reasonable definition of completeness for database languages?
Links   Line   Station           Next Station

        4      St.-Germain       Odeon
        4      Odeon             St.-Michel
        4      St.-Michel        Chatelet
        1      Chatelet          Louvre
        1      Louvre            Palais-Royal
        1      Palais-Royal      Tuileries
        1      Tuileries         Concorde
        9      Pont de Sevres    Billancourt
        9      Billancourt       Michel-Ange
        9      Michel-Ange       Iena
        9      Iena              F. D. Roosevelt
        9      F. D. Roosevelt   Republique
        9      Republique        Voltaire

Figure 12.1: An instance I of the Metro database
priori knowledge on the Metro graph, such as the graph diameter. More generally, given a
graph G, a particular vertex a, and an integer n, it is easy to write a calculus query finding
the vertexes at distance less than n from a; but it seems difficult to find a query for all
vertexes reachable from a, regardless of the distance. We will prove formally in Chapter 17
that such a query is not expressible in the calculus. Intuitively, the reason is the lack of
recursion in the calculus.
The objective of this chapter is to extend some of the database languages considered
so far with recursion. Although there are many ways to do this (see also Chapter 14), we
focus in this chapter on an approach inspired by logic programming. This leads to a field
called deductive databases, or database logic programming, which shares motivation and
techniques with the logic-programming area.
Most of the activity in deductive databases has focused on a toy language called dat-
alog, which extends the conjunctive queries with recursion. The interaction between nega-
tion and recursion is trickier and is considered in Chapters 14 and 15. The importance
of datalog for deductive databases is analogous to that of the conjunctive queries for the
relational model. Most optimization techniques for relational algebra were developed for
conjunctive queries. Similarly, most of the optimization techniques in deductive
databases have been developed around datalog (see Chapter 13).
Before formally presenting the language datalog, we present informally the syntax and
various semantics that are considered for that language. Following is a datalog program
P_TC that computes the transitive closure of a graph. The graph is represented in relation G
and its transitive closure in relation T:

T(x, y) ← G(x, y)
T(x, y) ← G(x, z), T(z, y).
Observe that, except for the fact that relation T occurs both in the head and body of the
second rule, these look like the nonrecursive datalog rules of Chapter 4.
A datalog program defines the relations that occur in heads of rules based on other
relations. The definition is recursive, so defined relations can also occur in bodies of rules.
Thus a datalog program is interpreted as a mapping from instances over the relations
occurring in the bodies only, to instances over the relations occurring in the heads. For
instance, the preceding program maps a relation over G (a graph) to a relation over T (its
transitive closure).
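For concreteness, the mapping just described can be sketched in a few lines of Python (our own illustration, not from the book): the two rules of the transitive closure program are applied to a set of edges until no new facts appear.

```python
# Naive bottom-up evaluation of the transitive-closure program:
#   T(x, y) <- G(x, y)
#   T(x, y) <- G(x, z), T(z, y)
def transitive_closure(g):
    """Map an edge relation G (a set of pairs) to its transitive closure T."""
    t = set(g)                     # first rule: every G-fact yields a T-fact
    while True:
        # second rule: join G with the T-facts derived so far
        new = {(x, y) for (x, z) in g for (z2, y) in t if z == z2} - t
        if not new:                # no rule adds a fact: fixpoint reached
            return t
        t |= new

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(("a", "d") in transitive_closure(edges))  # True: T(a, d) is derivable
```

The loop terminates because T only grows and is bounded by the finite set of pairs over the vertices of G.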
A surprising and elegant property of datalog, and of logic programming in general, is
that there are three very different but equivalent approaches to defining the semantics. We
present the three approaches informally now.
A first approach is model theoretic. We view the rules as logical sentences stating a
property of the desired result. For instance, the preceding rules yield the logical formulas

∀x, y (T(x, y) ← G(x, y))                  (1)
∀x, y, z (T(x, y) ← (G(x, z) ∧ T(z, y))).  (2)
The result T must satisfy the foregoing sentences. However, this is not sufficient to deter-
mine the result uniquely, because it is easy to see that there are many T's that satisfy the
sentences. It turns out that the result becomes unique if one adds the following
natural minimality requirement: T consists of the smallest set of facts that makes the sen-
tences true. As it turns out, for each datalog program and input, there is a unique minimal
model. This defines the semantics of a datalog program. For example, suppose that the
instance contains

G(a, b), G(b, c), G(c, d).

It turns out that T(a, d) holds in each instance that obeys (1) and (2) and where these three
facts hold. In particular, it belongs to the minimum model of (1) and (2).
The second, proof-theoretic approach is based on obtaining proofs of facts. A proof of
the fact T(a, d) is as follows:

(i) G(c, d) belongs to the instance;
(ii) T(c, d) using (i) and the first rule;
(iii) G(b, c) belongs to the instance;
(iv) T(b, d) using (iii), (ii), and the second rule;
(v) G(a, b) belongs to the instance;
(vi) T(a, d) using (v), (iv), and the second rule.
A fact is in the result if there exists a proof for it using the rules and the database facts.
In the proof-theoretic perspective, there are two ways to derive facts. The first is to
view programs as "factories" producing all facts that can be proven from known facts.
The rules are then used bottom up, starting from the known facts and deriving all possible
new facts. An alternative top-down evaluation starts from a fact to be proven and attempts
to demonstrate it by deriving lemmas that are needed for the proof. This is the underlying
intuition of a particular technique (called resolution) that originated in the theorem-proving
field and lies at the core of the logic-programming area.
As an example of the top-down approach, suppose that we wish to prove T (a, d). Then
by the second rule, this can be done by proving G(a, b) and T (b, d). We know G(a, b), a
database fact. We are thus left with proving T(b, d). By the second rule again, it suffices
to prove G(b, c) (a database fact) and T(c, d). This last fact can be proven using the first
rule. Observe that this yields the foregoing proof (i) to (vi). Resolution is thus a particular
technique for obtaining such proofs. As detailed later, resolution permits variables as well
as values in the goals to be proven and the steps used in the proof.
The last approach is the fixpoint approach. We will see that the semantics of the
program can be defined as a particular solution of a fixpoint equation. This approach leads
to iterating a query until a fixpoint is reached and is thus procedural in nature. However,
this computes again the facts that can be deduced by applications of the rules, and in that
respect it is tightly connected to the (bottom-up) proof-theoretic approach. It corresponds
to a natural strategy for generating proofs where shorter proofs are produced before longer
proofs so facts are proven as soon as possible.
In the next sections we describe in more detail the syntax, model-theoretic, fixpoint,
and proof-theoretic semantics of datalog. As a rule, we introduce only the minimum
amount of terminology from logic programming needed in the special database case. However, we make brief excursions into the wider framework in the text and exercises. The
last section deals with static analysis of datalog programs. It provides decidability and
undecidability results for several fundamental properties of programs. Techniques for the
evaluation of datalog programs are discussed separately in Chapter 13.
12.1 Syntax of Datalog
As mentioned earlier, the syntax of datalog is similar to that of languages introduced in
Chapter 4. It is an extension of nonrecursive datalog, which was introduced in Chapter 4.
We provide next a detailed definition of its syntax. We also briefly introduce some of the
fundamental differences between datalog and logic programming.
Definition 12.1.1 A (datalog) rule is an expression of the form

R_1(u_1) ← R_2(u_2), . . . , R_n(u_n),

where n ≥ 1, R_1, . . . , R_n are relation names and u_1, . . . , u_n are free tuples of appropriate
arities. Each variable occurring in u_1 must occur in at least one of u_2, . . . , u_n. A datalog
program is a finite set of datalog rules.
The head of the rule is the expression R_1(u_1); and R_2(u_2), . . . , R_n(u_n) forms the body.
The set of constants occurring in a datalog program P is denoted adom(P); and for an
instance I, we use adom(P, I) as an abbreviation for adom(P) ∪ adom(I).
We next recall a definition from Chapter 4 that is central to this chapter.
Definition 12.1.2 Given a valuation ν, an instantiation

R_1(ν(u_1)) ← R_2(ν(u_2)), . . . , R_n(ν(u_n))

of a rule R_1(u_1) ← R_2(u_2), . . . , R_n(u_n) with ν is obtained by replacing each variable x by
ν(x).
Let P be a datalog program. An extensional relation is a relation occurring only
in the body of the rules. An intensional relation is a relation occurring in the head of
some rule of P. The extensional (database) schema, denoted edb(P), consists of the
set of all extensional relation names; whereas the intensional schema idb(P) consists
of all the intensional ones. The schema of P, denoted sch(P), is the union of edb(P)
and idb(P). The semantics of a datalog program is a mapping from database instances
over edb(P) to database instances over idb(P). In some contexts, we call the input data
the extensional database and the program the intensional database. Note also that in the
context of logic-based languages, the term predicate is often used in place of the term
relation name.
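To make the edb/idb distinction concrete, here is a small Python sketch (our own illustration; representing a rule as a pair of its head predicate and the list of its body predicates is an assumption of this sketch):

```python
# Compute edb(P) and idb(P) from a program given as a list of rules,
# each rule a pair (head_predicate, [body_predicates]).
def edb_idb(rules):
    idb = {head for head, _ in rules}                  # occurs in some head
    all_preds = idb | {p for _, body in rules for p in body}
    return all_preds - idb, idb                        # (edb, idb)

# The transitive-closure program: T <- G  and  T <- G, T.
p_tc = [("T", ["G"]), ("T", ["G", "T"])]
print(edb_idb(p_tc))  # ({'G'}, {'T'})
```

Together edb(P) and idb(P) give sch(P), the schema over which the program's semantics is defined.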
Let us consider an example.
Example 12.1.3 The following program P_metro computes the answers to queries (12.1),
(12.2), and (12.3):

St_Reachable(x, x) ←
St_Reachable(x, y) ← St_Reachable(x, z), Links(u, z, y)
Li_Reachable(x, u) ← St_Reachable(x, z), Links(u, z, y)
Ans_1(y) ← St_Reachable(Odeon, y)
Ans_2(u) ← Li_Reachable(Odeon, u)
Ans_3() ← St_Reachable(Odeon, Chatelet)

Observe that St_Reachable is defined using recursion. Clearly,

edb(P_metro) = {Links},
idb(P_metro) = {St_Reachable, Li_Reachable, Ans_1, Ans_2, Ans_3}.

For example, an instantiation of the second rule of P_metro is as follows:

St_Reachable(Odeon, Louvre) ← St_Reachable(Odeon, Chatelet),
                               Links(1, Chatelet, Louvre)
Datalog versus Logic Programming
Given the close correspondence between datalog and logic programming, we briefly high-
light the central differences between these two fields. The major difference is that logic
programming permits function symbols, but datalog does not.
Example 12.1.4 The simple logic program P_leq is given by

leq(0, x) ←
leq(s(x), s(y)) ← leq(x, y)
leq(x, +(x, y)) ←
leq(x, z) ← leq(x, y), leq(y, z)

Here 0 is a constant, s a unary function symbol, + a binary function symbol, and leq a
binary predicate. Intuitively, s might be viewed as the successor function, + as addition,
and leq as capturing the less-than-or-equal relation. However, in logic programming the
function symbols are given the "free" interpretation: two terms are considered nonequal
whenever they are syntactically different. For example, the terms +(0, s(0)), +(s(0), 0),
and s(0) are all nonequal. Importantly, functional terms can be used in logic programming
to represent intricate data structures, such as lists and trees.
Observe also that in the preceding program the variable x occurs in the head of the
first rule and not in the body, and analogously for the third rule.
Another important difference between deductive databases and logic programs con-
cerns perspectives on how they are typically used. In databases it is assumed that the
database is relatively large and the number of rules relatively small. Furthermore, a da-
talog program P is typically viewed as defining a mapping from instances over the edb
to instances over the idb. In logic programming the focus is different. It is generally as-
sumed that the base data is incorporated directly into the program. For example, in logic
programming the contents of instance Links in the Metro database would be represented
using rules such as Links(4, St.-Germain, Odeon) ←. Thus if the base data changes, the
logic program itself is changed. Another distinction, mentioned in the preceding example,
is that logic programs can construct and manipulate complex data structures encoded by
terms involving function symbols.
Later in this chapter we present further comparisons of the two frameworks.
12.2 Model-Theoretic Semantics
The key idea of the model-theoretic approach is to view the program as a set of first-
order sentences (also called a first-order theory) that describes the desired answer. Thus
the database instance constituting the result satisfies the sentences. Such an instance is
also called a model of the sentences. However, there can be many (indeed, infinitely
many) instances satisfying the sentences of a program. Thus the sentences themselves
do not uniquely identify the answer; it is necessary to specify which of the models is
the intended answer. This is usually done based on assumptions that are external to the
sentences themselves. In this section we formalize (1) the relationship between rules and
logical sentences, (2) the notion of model, and (3) the concept of intended model.
We begin by associating logical sentences with rules, as we did in the beginning of this
chapter. To a datalog rule

ρ : R_1(u_1) ← R_2(u_2), . . . , R_n(u_n)

we associate the logical sentence

∀x_1, . . . , x_m (R_1(u_1) ← R_2(u_2) ∧ · · · ∧ R_n(u_n)),

where x_1, . . . , x_m are the variables occurring in the rule and ← is the standard logical
implication. Observe that an instance I satisfies ρ, denoted I |= ρ, if for each instantiation

R_1(ν(u_1)) ← R_2(ν(u_2)), . . . , R_n(ν(u_n))

such that R_2(ν(u_2)), . . . , R_n(ν(u_n)) belong to I, so does R_1(ν(u_1)). In the following, we
do not distinguish between a rule and the associated sentence. For a program P, the
conjunction of the sentences associated with the rules of P is denoted by Σ_P.
It is useful to note that there are alternative ways to write the sentences associated with
rules of programs. In particular, the formula

∀x_1, . . . , x_m (R_1(u_1) ← R_2(u_2) ∧ · · · ∧ R_n(u_n))

is equivalent to

∀x_1, . . . , x_q (∃x_{q+1}, . . . , x_m (R_2(u_2) ∧ · · · ∧ R_n(u_n)) → R_1(u_1)),

where x_1, . . . , x_q are the variables occurring in the head. It is also logically equivalent to

∀x_1, . . . , x_m (R_1(u_1) ∨ ¬R_2(u_2) ∨ · · · ∨ ¬R_n(u_n)).

This last form is particularly interesting. Formulas consisting of a disjunction of liter-
als of which at most one is positive are called Horn clauses in logic. A datalog program
can thus be viewed as a set of (particular) Horn clauses.
We next discuss the issue of choosing, among the models of Σ_P, the particular model
that is intended as the answer. This is not a hard problem for datalog, although (as we shall
see in Chapter 15) it becomes much more involved if datalog is extended with negation.
For datalog, the idea for choosing the intended model is simply that the model should not
contain more facts than necessary for satisfying Σ_P. So the intended model is minimal in
some natural sense. This is formalized next.

Definition 12.2.1 Let P be a datalog program and I an instance over edb(P). A model
of P is an instance over sch(P) satisfying Σ_P. The semantics of P on input I, denoted
P(I), is the minimum model of P containing I, if it exists.
Ans_1   Station
        Odeon
        St.-Michel
        Chatelet
        Louvre
        Palais-Royal
        Tuileries
        Concorde

Ans_2   Line
        4
        1

Ans_3
        ⟨⟩

Figure 12.2: Relations of P_metro(I)

For P_metro as in Example 12.1.3, and I as in Fig. 12.1, the values of Ans_1, Ans_2, and
Ans_3 in P(I) are shown in Fig. 12.2.
We briefly discuss the choice of the minimal model at the end of this section.
Although the previous definition is natural, we cannot be entirely satisfied with it at
this point:

• For given P and I, we do not know (yet) whether the semantics of P is defined (i.e.,
  whether there exists a minimum model of Σ_P containing I).
• Even if such a model exists, the definition does not provide any algorithm for
  computing P(I). Indeed, it is not (yet) clear that such an algorithm exists.
We next provide simple answers to both of these problems.
Observe that by definition, P(I) is an instance over sch(P). A priori, we must consider
all instances over sch(P), an infinite set. It turns out that it suffices to consider only those
instances with active domain in adom(P, I) (i.e., a finite set of instances). For given P and
I, let B(P, I) be the instance over sch(P) defined by

1. For each R in edb(P), a fact R(u) is in B(P, I) iff it is in I; and
2. For each R in idb(P), each fact R(u) with constants in adom(P, I) is in B(P, I).
We now verify that B(P, I) is a model of P containing I.
Lemma 12.2.2 Let P be a datalog program and I an instance over edb(P). Then B(P, I)
is a model of P containing I.
Proof  Let A_1 ← A_2, . . . , A_n be an instantiation of some rule r in P such that A_2, . . . ,
A_n hold in B(P, I). Then consider A_1. Because each variable occurring in the head of r
also occurs in the body, each constant occurring in A_1 belongs to adom(P, I). Thus by
definition 2 just given, A_1 is in B(P, I). Hence B(P, I) satisfies the sentence associated
with that particular rule, so B(P, I) satisfies Σ_P. Clearly, B(P, I) contains I by definition 1.
Thus the semantics of P on input I, if defined, is a subset of B(P, I). This means that
there is no need to consider instances with constants outside adom(P, I).
We next demonstrate that P(I) is always defined.
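The finiteness of B(P, I) is easy to see in code. The following Python sketch (our own hypothetical illustration) materializes B(P, I) for the transitive-closure program, whose only idb relation T is binary and which contains no constants:

```python
from itertools import product

# Build B(P, I) for the program P_TC: keep the edb facts of I and add
# every possible T-fact over the active domain adom(P, I).
def b_of(g_facts):
    adom = {c for fact in g_facts for c in fact}   # P_TC has no constants
    t_facts = set(product(adom, repeat=2))         # all T(a, b) candidates
    return {"G": set(g_facts), "T": t_facts}

b = b_of({(1, 2), (2, 3)})
print(len(b["T"]))  # 9: all pairs over adom = {1, 2, 3}
```

In general |B(P, I)| is polynomial in |adom(P, I)| for a fixed program, which is what bounds the search for models.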
Theorem 12.2.3 Let P be a datalog program, I an instance over edb(P), and X the set
of models of P containing I. Then

1. ∩X is the minimal model of P containing I, so P(I) is defined.
2. adom(P(I)) ⊆ adom(P, I).
3. For each R in edb(P), P(I)(R) = I(R).

Proof  Note that X is nonempty, because B(P, I) is in X. Let r : A_1 ← A_2, . . . , A_n be
a rule in P and ν a valuation of the variables occurring in the rule. To prove (1), we show
that

(*) if ν(A_2), . . . , ν(A_n) are in ∩X then ν(A_1) is also in ∩X.

For suppose that (*) holds. Then ∩X |= r, so ∩X satisfies Σ_P. Because each instance in X
contains I, ∩X contains I. Hence ∩X is a model of P containing I. By construction, ∩X
is minimal, so (1) holds.
To show (*), suppose that ν(A_2), . . . , ν(A_n) are in ∩X and let K be in X. Because
∩X ⊆ K, ν(A_2), . . . , ν(A_n) are in K. Because K is in X, K is a model of P, so ν(A_1) is
in K. This is true for each K in X. Hence ν(A_1) is in ∩X and (*) holds, which in turn
proves (1).
By Lemma 12.2.2, B(P, I) is a model of P containing I. Therefore P(I) ⊆ B(P, I).
Hence adom(P(I)) ⊆ adom(B(P, I)) = adom(P, I), so (2) holds.
For each R in edb(P), I(R) ⊆ P(I)(R) [because P(I) contains I] and P(I)(R) ⊆
B(P, I)(R) = I(R); which shows (3).
The previous development also provides an algorithm for computing the semantics
of datalog programs. Given P and I, it suffices to consider all instances that are subsets of
B(P, I), find those that are models of P and contain I, and compute their intersection. How-
ever, this is clearly an inefficient procedure. The next section provides a more reasonable
algorithm.
We conclude this section with two remarks on the definition of semantics of datalog
programs. The first explains the choice of a minimal model. The second rephrases our
definition in more standard logic-programming terminology.
Why Choose the Minimal Model?
This choice is the natural consequence of an implicit hypothesis of a philosophical nature:
the closed world assumption (CWA) (see Chapter 2).
The CWA concerns the connection between the database and the world it models.
Clearly, databases are often incomplete (i.e., facts that may be true in the world are not
necessarily recorded in the database). Thus, although we can reasonably assume that a
fact recorded in the database is true in the world, it is not clear what we can say about
facts not explicitly recorded. Should they be considered false, true, or unknown? The CWA
provides the simplest solution to this problem: Treat the database as if it records complete
information about the world (i.e., assume that all facts not in the database are false). This
is equivalent to taking as true only the facts that must be true in all worlds modeled by
the database. By extension, this justies the choice of minimal model as the semantics of
a datalog program. Indeed, the minimal model consists of the facts we know must be true
in all worlds satisfying the sentences (and including the input instance). As we shall see,
this has an equivalent proof-theoretic counterpart, which will justify the proof-theoretic
semantics of datalog programs: Take as true precisely the facts that can be proven true
from the input and the sentences corresponding to the datalog program. Facts that cannot
be proven are therefore considered false.
Importantly, the CWA is not so simple to use in the presence of negation or disjunction.
For example, suppose that a database holds {p ∨ q}. Under the CWA, both ¬p and
¬q are inferred. But the union {p ∨ q, ¬p, ¬q} is inconsistent, which is certainly not the
intended result.
Herbrand Interpretation
We briefly relate the semantics given to datalog programs to standard logic-programming
terminology.
In logic programming, the facts of an input instance I are not separated from the
sentences of a datalog program P. Instead, sentences stating that all facts in I are true
are included in P. This gives rise to a logical theory Σ_{P,I} consisting of the sentences in
Σ_P and of one sentence P(u) [sometimes written P(u) ←] for each fact P(u) in the instance.
The semantics is defined as a particular model of this set of sentences. A problem is that
standard interpretations in first-order logic permit interpretation of constants of the theory
with arbitrary elements of the domain. For instance, the constants Odeon and St.-Michel
may be interpreted by the same element (e.g., John). This is clearly not what we mean
in the database context. We wish to interpret Odeon by Odeon and similarly for all other
constants. Interpretations that use the identity function to interpret the constant symbols
are called Herbrand interpretations (see Chapter 2). (If function symbols are present,
restrictions are also placed on how terms involving functions are interpreted.) Given a set
Σ of formulas, a Herbrand model of Σ is a Herbrand interpretation satisfying Σ.
Thus in logic-programming terms, the semantics of a program P given an instance I
can be viewed as the minimum Herbrand model of Σ_{P,I}.
12.3 Fixpoint Semantics
In this section, we present an operational semantics for datalog programs stemming from
fixpoint theory. We use an operator called the immediate consequence operator. The oper-
ator produces new facts starting from known facts. We show that the model-theoretic se-
mantics, P(I), can also be defined as the smallest solution of a fixpoint equation involving
that operator. It turns out that this solution can be obtained constructively. This approach
therefore provides an alternative constructive definition of the semantics of datalog pro-
grams. It can be viewed as an implementation of the model-theoretic semantics.
Let P be a datalog program and K an instance over sch(P). A fact A is an immediate
consequence for K and P if either A ∈ K(R) for some edb relation R, or A ← A_1, . . . , A_n
is an instantiation of a rule in P and each A_i is in K. The immediate consequence operator
of P, denoted T_P, is the mapping from inst(sch(P)) to inst(sch(P)) defined as follows.
For each K, T_P(K) consists of all facts A that are immediate consequences for K and P.
We next note some simple mathematical properties of the operator T_P over sets of
instances. We first define two useful properties. For an operator T,

• T is monotone if for each I, J, I ⊆ J implies T(I) ⊆ T(J).
• K is a fixpoint of T if T(K) = K.
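A generic immediate consequence operator is short to write down. The following Python sketch is our own illustration (not the book's): a fact is a tuple like ("T", 1, 2), a rule pairs a head atom with a list of body atoms, and by convention a term is a variable exactly when it is a lowercase string (an assumption of this sketch).

```python
# An atom is a tuple (predicate, term, ..., term); a term is a variable
# iff it is a string starting with a lowercase letter (sketch convention).
def is_var(t):
    return isinstance(t, str) and t[:1].islower()

def match(atom, fact, env):
    """Extend valuation env so that env applied to atom gives fact, else None."""
    if atom[0] != fact[0] or len(atom) != len(fact):
        return None
    env = dict(env)
    for t, v in zip(atom[1:], fact[1:]):
        if is_var(t):
            if env.setdefault(t, v) != v:
                return None
        elif t != v:
            return None
    return env

def t_p(rules, edb_facts, k):
    """One application of the immediate consequence operator T_P to k."""
    result = set(edb_facts)                 # edb facts are consequences
    for head, body in rules:
        envs = [{}]
        for atom in body:                   # join the body atoms over k
            envs = [e2 for e in envs for f in k
                    if (e2 := match(atom, f, e)) is not None]
        for e in envs:                      # instantiate the head
            result.add((head[0],) + tuple(e[t] if is_var(t) else t
                                          for t in head[1:]))
    return result

rules = [(("T", "x", "y"), [("G", "x", "y")]),
         (("T", "x", "y"), [("G", "x", "z"), ("T", "z", "y")])]
i = {("G", 1, 2), ("G", 2, 3)}
print(sorted(t_p(rules, i, i)))  # one step adds T(1, 2) and T(2, 3)
```

Iterating `t_p` until the output equals the input computes the fixpoint discussed next.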
The proof of the next lemma is straightforward and is omitted (see Exercise 12.9).

Lemma 12.3.1 Let P be a datalog program.

(i) The operator T_P is monotone.
(ii) An instance K over sch(P) is a model of Σ_P iff T_P(K) ⊆ K.
(iii) Each fixpoint of T_P is a model of Σ_P; the converse does not necessarily hold.
It turns out that P(I) (as defined by the model-theoretic semantics) is a fixpoint of T_P.
In particular, it is the minimum fixpoint containing I. This is shown next.
Theorem 12.3.2 For each P and I, T_P has a minimum fixpoint containing I, which
equals P(I).

Proof  Observe first that P(I) is a fixpoint of T_P:

• T_P(P(I)) ⊆ P(I) because P(I) is a model of P; and
• P(I) ⊆ T_P(P(I)). [Because T_P(P(I)) ⊆ P(I) and T_P is monotone, T_P(T_P(P(I)))
  ⊆ T_P(P(I)). Thus T_P(P(I)) is a model of Σ_P. Because T_P preserves the contents
  of the edb relations and I ⊆ P(I), we have I ⊆ T_P(P(I)). Thus T_P(P(I)) is a model
  of Σ_P containing I. Because P(I) is the minimum such model, P(I) ⊆ T_P(P(I)).]

In addition, each fixpoint of T_P containing I is a model of P and thus contains P(I) (which
is the intersection of all models of P containing I). Thus P(I) is the minimum fixpoint of
T_P containing I.

The fixpoint definition of the semantics of P presents the advantage of leading to a
constructive definition of P(I). In logic programming, this is shown using fixpoint theory
(i.e., using Knaster-Tarski's and Kleene's theorems). However, the database framework
is much simpler than the general logic-programming one, primarily due to the lack of
function symbols. We therefore choose to show the construction directly, without the
formidable machinery of the theory of fixpoints in complete lattices. In Remark 12.3.5
we sketch the more standard proof that has the advantage of being applicable to the larger
context of logic programming.
Given an instance I over edb(P), one can compute T_P(I), T_P^2(I), T_P^3(I), etc. Clearly,

I ⊆ T_P(I) ⊆ T_P^2(I) ⊆ T_P^3(I) ⊆ . . . ⊆ B(P, I).

This follows immediately from the fact that I ⊆ T_P(I) and the monotonicity of T_P. Let N
be the number of facts in B(P, I). (Observe that N depends on I.) The sequence {T_P^i(I)}_i
reaches a fixpoint after at most N steps. That is, for each i ≥ N, T_P^i(I) = T_P^N(I). In
particular, T_P(T_P^N(I)) = T_P^N(I), so T_P^N(I) is a fixpoint of T_P. We denote this fixpoint by
T_P^ω(I).
Example 12.3.3 Recall the program P_TC for computing the transitive closure of a
graph G:

T(x, y) ← G(x, y)
T(x, y) ← G(x, z), T(z, y).

Consider the input instance

I = {G(1, 2), G(2, 3), G(3, 4), G(4, 5)}.

Then we have

T_{P_TC}(I)   = I ∪ {T(1, 2), T(2, 3), T(3, 4), T(4, 5)}
T_{P_TC}^2(I) = T_{P_TC}(I) ∪ {T(1, 3), T(2, 4), T(3, 5)}
T_{P_TC}^3(I) = T_{P_TC}^2(I) ∪ {T(1, 4), T(2, 5)}
T_{P_TC}^4(I) = T_{P_TC}^3(I) ∪ {T(1, 5)}
T_{P_TC}^5(I) = T_{P_TC}^4(I).

Thus T_{P_TC}^ω(I) = T_{P_TC}^4(I).
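The stages of this example can be replayed in a few lines of Python (a hypothetical sketch of ours, with the immediate consequence operator specialized to the transitive-closure program):

```python
# Compute the sequence T_{P_TC}^i(I) for the chain 1 -> 2 -> 3 -> 4 -> 5
# and count the steps needed to reach the fixpoint.
def tp_tc(g, k):
    """Immediate consequences: T(x,y) <- G(x,y) and T(x,y) <- G(x,z), T(z,y)."""
    t = {("T", x, y) for (x, y) in g}
    t |= {("T", x, y) for (x, z) in g
          for (p, z2, y) in k if p == "T" and z == z2}
    return {("G", x, y) for (x, y) in g} | t

g = {(1, 2), (2, 3), (3, 4), (4, 5)}
k, steps = {("G", x, y) for (x, y) in g}, 0
while True:
    nxt = tp_tc(g, k)
    if nxt == k:
        break
    k, steps = nxt, steps + 1
print(steps)  # 4: the fixpoint is reached at stage 4, as in the example
```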
We next show that T_P^ω(I) is exactly P(I) for each datalog program P.
Theorem 12.3.4 Let P be a datalog program and I an instance over edb(P). Then
T_P^ω(I) = P(I).

Proof  By Theorem 12.3.2, it suffices to show that T_P^ω(I) is the minimum fixpoint of T_P
containing I. As noted earlier,

T_P(T_P^ω(I)) = T_P(T_P^N(I)) = T_P^N(I) = T_P^ω(I),

where N is the number of facts in B(P, I). Therefore T_P^ω(I) is a fixpoint of T_P that con-
tains I.
To show that it is minimal, consider an arbitrary fixpoint J of T_P containing I. Then
J ⊇ T_P^0(I) = I. By induction on i, J ⊇ T_P^i(I) for each i, so J ⊇ T_P^ω(I). Thus T_P^ω(I) is the
minimum fixpoint of T_P containing I.
The smallest integer i such that T_P^i(I) = T_P^ω(I) is called the stage for P and I and is
denoted stage(P, I). As already noted, stage(P, I) ≤ N = |B(P, I)|.
Evaluation
The fixpoint approach suggests a straightforward algorithm for the evaluation of datalog.
We explain the algorithm in an example. We extend relational algebra with a while operator
that allows us to iterate an algebraic expression while some condition holds. (The resulting
language is studied extensively in Chapter 17.)
Consider again the transitive closure query. We wish to compute the transitive closure
of relation G in relation T. Both relations are over AB. This computation is performed by
the following program:

T := G;
while q(T) ≠ T do T := q(T);

where

q(T) = G ∪ π_AB(δ_{B→C}(G) ⋈ δ_{A→C}(T)).

(Recall that δ is the renaming operation as introduced in Chapter 4.)
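The while-program translates almost literally into Python over sets of AB-tuples (our own sketch; the renaming, join, and projection are folded into one set comprehension):

```python
# T := G; while q(T) != T do T := q(T);
# where q(T) = G union pi_AB(delta_{B->C}(G) join delta_{A->C}(T)).
def q(g, t):
    # Renaming makes G's B-column and T's A-column both "C"; the join
    # equates them, and the projection keeps columns A and B.
    return g | {(a, b) for (a, c) in g for (c2, b) in t if c == c2}

g = {("Odeon", "St.-Michel"), ("St.-Michel", "Chatelet")}
t = set(g)
while q(g, t) != t:
    t = q(g, t)
print(("Odeon", "Chatelet") in t)  # True
```

The loop condition q(T) ≠ T is exactly the fixpoint test of the while-program.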
Observe that q is an SPJRU expression. In fact, at each step, q computes the im-
mediate consequence operator T_P, where P is the transitive closure datalog program in
Example 12.3.3. One can show in general that the immediate consequence operator can be
computed using SPJRU expressions (i.e., relational algebra without the difference opera-
tion). Furthermore, the SPJRU expressions extended carefully with a while construct yield
exactly the expressive power of datalog. The test of the while is used to detect when the
fixpoint is reached.
The while construct is needed only for recursion. Let us consider again the nonrecur-
sive datalog of Chapter 4. Let P be a datalog program. Consider the graph (sch(P), E_P),
where ⟨S, S′⟩ is an edge in E_P if S′ occurs in the head of some rule r in P and S occurs in
the body of r. Then P is nonrecursive if the graph is acyclic. We mentioned already that
nr-datalog programs are equivalent to SPJRU queries (see Section 4.5). It is also easy to
see that, for each nr-datalog program P, there exists a constant d such that for each I over
edb(P), stage(P, I) ≤ d. In other words, the fixpoint is reached after a bounded number
of steps, dependent only on the program. (See Exercise 12.29.) Programs for which this
happens are called bounded. We examine this property in more detail in Section 12.5.
A lot of redundant computation is performed when running the preceding transitive
closure program. We study optimization techniques for datalog evaluation in Chapter 13.
Remark 12.3.5 In this remark, we make a brief excursion into standard fixpoint theory
to reprove Theorem 12.3.4. This machinery is needed when proving the analog of that
theorem in the more general context of logic programming. A partially ordered set (U, ≤)
is a complete lattice if each subset has a least upper bound and a greatest lower bound,
denoted sup and inf, respectively. In particular, inf(U) is denoted ⊥ and sup(U) is denoted
⊤. An operator T on U is monotone iff for each x, y ∈ U, x ≤ y implies T(x) ≤ T(y). An
operator T on U is continuous if for each subset V, T(sup(V)) = sup(T(V)). Note that
continuity implies monotonicity.
To each datalog program P and instance I, we associate the program P_I consisting
of the rules of P and one rule R(u) ← for each fact R(u) in I. We consider the complete
lattice formed with (inst(sch(P)), ⊆) and the operator T_{P_I} defined by the following: For
each K, a fact A is in T_{P_I}(K) if A is an immediate consequence for K and P_I. The operator
T_{P_I} on (inst(sch(P)), ⊆) is continuous (so also monotone).
The Knaster-Tarski theorem states that a monotone operator T in a complete lattice
has a least fixpoint that equals inf({x | x ∈ U, T(x) ≤ x}). Thus the least fixpoint of T_{P_I}
exists. Fixpoint theory also provides the constructive definition of the least fixpoint for
continuous operators. Indeed, Kleene's theorem states that if T is a continuous operator on
a complete lattice, then its least fixpoint is sup({K_i | i ≥ 0}), where K_0 = ⊥ and for each
i > 0, K_i = T(K_{i-1}). Now in our case, ⊥ = ∅ and

sup({T_{P_I}^i(∅) | i ≥ 0})

coincides with P(I).
In logic programming, function symbols are also considered (see Example 12.1.4). In
this context, the sequence {T_{P_I}^i(∅)}_{i≥0} does not generally converge in a finite number
of steps, so the fixpoint evaluation is no longer constructive. However, it does converge in
countably many steps to the least fixpoint sup({T_{P_I}^i(∅) | i ≥ 0}). Thus fixpoint theory is useful
primarily when dealing with logic programs with function symbols. It is overkill in the
simpler context of datalog.
12.4 Proof-Theoretic Approach
Another way of defining the semantics of datalog is based on proofs. The basic idea is that
the answer of a program P on I consists of the set of facts that can be proven using P and
I. The result turns out to coincide, again, with P(I).
The first step is to define what is meant by proof. A proof tree of a fact A from I and
P is a labeled tree where

1. each vertex of the tree is labeled by a fact;
2. each leaf is labeled by a fact in I;
3. the root is labeled by A; and
4. for each internal vertex, there exists an instantiation A_1 ← A_2, . . . , A_n of a rule
   in P such that the vertex is labeled A_1 and its children are respectively labeled
   A_2, . . . , A_n.

Such a tree provides a proof of the fact A.
Figure 12.3: Proof tree. (a) A datalog proof tree of S(1,6): the root S(1,6) is obtained by
rule 1 from children T(1,5) and R(5,a,6); T(1,5) is obtained by rule 2 from R(1,a,2),
R(2,b,3), and T(3,5); and T(3,5) is obtained by rule 3 from R(3,a,4) and R(4,a,5).
(b) The corresponding context-free derivation tree.
Example 12.4.1 Consider the following program:

S(x_1, x_3) ← T(x_1, x_2), R(x_2, a, x_3)
T(x_1, x_4) ← R(x_1, a, x_2), R(x_2, b, x_3), T(x_3, x_4)
T(x_1, x_3) ← R(x_1, a, x_2), R(x_2, a, x_3)

and the instance

{R(1, a, 2), R(2, b, 3), R(3, a, 4), R(4, a, 5), R(5, a, 6)}.

A proof tree of S(1, 6) is shown in Fig. 12.3(a).
The reader familiar with context-free languages will notice the similarity between
proof trees and derivation trees in context-free languages. This connection is especially
strong in the case of datalog programs that have the form of the one in Example 12.4.1.
This will be exploited in the last section of this chapter.
Proof trees provide proofs of facts. It is straightforward to show that a fact A is in P(I)
iff there exists a proof tree for A from I and P. Now given a fact A to prove, one can look
for a proof either bottom up or top down.
The bottom-up approach is an alternative way of looking at the constructive fixpoint
technique. One begins with the facts from I and then uses the rules to infer new facts, much
like the immediate consequence operator. This is done repeatedly until no new facts can be
inferred. The rules are used as factories producing new facts from already proven ones.
This eventually yields all facts that can be proven and is essentially the same as the fixpoint
approach.
In contrast to the bottom-up and fixpoint approaches, the top-down approach allows
one to direct the search for a proof when one is only interested in proving particular facts.
For example, suppose the query Ans_1(Louvre) is posed against the program P_metro of Example 12.1.3, with the input instance of Fig. 12.1. Then the top-down approach will never consider atoms involving stations on Line 9, intuitively because they are not
reachable from Odeon or Louvre. More generally, the top-down approach inhibits the
indiscriminate inference of facts that are irrelevant to the facts of interest.
The top-down approach is described next. This takes us to the field of logic programming. But first we need some notation, which will remind us once again that "To bar an easy access to newcomers every scientific domain has introduced its own terminology and notation" [Apt91].
Notation

Although we already borrowed a lot of terminology and notation from the logic-programming field (e.g., term, fact, atom), we must briefly introduce some more.
A positive literal is an atom [i.e., P(u) for some free tuple u]; and a negative literal is the negation of one [i.e., ¬P(u)]. A formula of the form

∀x_1, . . . , x_m(A_1 ∨ · · · ∨ A_n ∨ ¬B_1 ∨ · · · ∨ ¬B_p),

where the A_i, B_j are positive literals, is called a clause. Such a clause is written in clausal form as

A_1, . . . , A_n ← B_1, . . . , B_p.
A clause with a single literal in the head (n = 1) is called a definite clause. A definite clause with an empty body is called a unit clause. A clause with no literal in the head is called a goal clause. A clause with an empty body and head is called an empty clause and is denoted □. Examples of these and their logical counterparts are as follows:

definite    T(x, y) ← R(x, z), T(z, y)    T(x, y) ∨ ¬R(x, z) ∨ ¬T(z, y)
unit        T(x, y) ←                     T(x, y)
goal        ← R(x, z), T(z, y)            ¬R(x, z) ∨ ¬T(z, y)
empty       □                             false

The empty clause is interpreted as a contradiction. Intuitively, this is because it corresponds to the disjunction of an empty set of formulas.
A ground clause is a clause with no occurrence of variables.
The top-down proof technique introduced here is called SLD resolution. Goals serve as the basic focus of activity in SLD resolution. As we shall see, the procedure begins with a goal such as ← St_Reachable(x, Concorde), Li_Reachable(x, 9). A correct answer of this goal on input I is any value a such that St_Reachable(a, Concorde) and Li_Reachable(a, 9) are implied by P_{metro,I}. Furthermore, each intermediate step of the top-down approach consists of obtaining a new goal from a previous goal. Finally, the procedure is deemed successful if the final goal reached is empty.
The standard exposition of SLD resolution is based on definite clauses. There is a
subtle distinction between datalog rules and definite clauses: For datalog rules, we imposed the restriction that each variable that occurs in the head also appears in the body. (In particular, a datalog unit clause must be ground.) We will briefly mention some minor consequences of this distinction.
As already introduced in Remark 12.3.5, to each datalog program P and instance I, we associate the program P_I consisting of the rules of P and one rule R(u) ← for each fact R(u) in I. Therefore in the following we ignore the instance I and focus on programs that already integrate all the known facts in the set of rules. We denote such a program P_I to emphasize its relationship to an instance I. Observe that from a semantic point of view

P(I) = P_I(∅).

This ignores the distinction between edb and idb relations, which no longer exists for P_I.
Example 12.4.2 Consider the program P and instance I of Example 12.4.1. The rules of P_I are

1. S(x_1, x_3) ← T(x_1, x_2), R(x_2, a, x_3)
2. T(x_1, x_4) ← R(x_1, a, x_2), R(x_2, b, x_3), T(x_3, x_4)
3. T(x_1, x_3) ← R(x_1, a, x_2), R(x_2, a, x_3)
4. R(1, a, 2) ←
5. R(2, b, 3) ←
6. R(3, a, 4) ←
7. R(4, a, 5) ←
8. R(5, a, 6) ←
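The passage from (P, I) to P_I is mechanical: keep the rules of P and add one bodiless rule per fact of I. A one-function Python sketch (the encoding of rules as head/body pairs is our own illustrative choice):

```python
# Rules are (head, body) pairs; facts are ground atoms such as ("R", (1, "a", 2)).

def integrate_facts(rules, facts):
    """Form P_I: the rules of P plus one bodiless rule R(u) <- per fact R(u) in I."""
    return list(rules) + [(fact, []) for fact in sorted(facts, key=repr)]

program_P = [
    (("S", ("x1", "x3")), [("T", ("x1", "x2")), ("R", ("x2", "a", "x3"))]),
]
instance_I = {("R", (1, "a", 2)), ("R", (2, "b", 3))}

P_I = integrate_facts(program_P, instance_I)
print(len(P_I))  # 3: one rule of P plus two fact rules
```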
Warm-Up

Before discussing SLD resolution, as a warm-up we look at a simplified version of the technique by considering only ground rules. To this end, consider a datalog program P_I (integrating the facts) consisting only of fully instantiated rules (i.e., with no occurrences of variables). Consider a ground goal

g ≡ ← A_1, . . . , A_i, . . . , A_n

and some (ground) rule

r ≡ A_i ← B_1, . . . , B_m

in P_I. A resolvent of g with r is the ground goal

← A_1, . . . , A_{i-1}, B_1, . . . , B_m, A_{i+1}, . . . , A_n.
Viewed as logical sentences, the resolvent of g with r is actually implied by g and r.
This is best seen by writing these explicitly as clauses:
Figure 12.4: SLD ground refutation. [The sequence of goals from ← S(1, 6) down to the empty clause □, each goal obtained by resolving with a rule or fact of P_I.]
(¬A_1 ∨ · · · ∨ ¬A_i ∨ · · · ∨ ¬A_n) ∧ (A_i ∨ ¬B_1 ∨ · · · ∨ ¬B_m)
    → (¬A_1 ∨ · · · ∨ ¬A_{i-1} ∨ ¬B_1 ∨ · · · ∨ ¬B_m ∨ ¬A_{i+1} ∨ · · · ∨ ¬A_n).
In general, the converse does not hold.
A derivation from g with P_I is a sequence of goals g ≡ g_0, g_1, . . . such that for each i > 0, g_i is a resolvent of g_{i-1} with some rule in P_I. We will see that to prove a fact A, it suffices to exhibit a refutation of A; that is, a derivation

g_0 ≡ ← A, g_1, . . . , g_i, . . . , g_q ≡ □.
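A refutation in the ground case can be searched for mechanically: repeatedly resolve the leftmost atom of the goal against some ground rule whose head matches it, and stop on the empty goal. The sketch below (encoding and names ours) bounds the search depth, since derivations need not terminate; the ground rules listed are only the instantiations of Example 12.4.2 relevant to proving S(1, 6), not all of ground(P_I).

```python
def resolvent(goal, i, rule):
    """Resolve the ground goal at position i with a ground rule whose
    head equals goal[i]: replace goal[i] by the rule's body."""
    head, body = rule
    assert goal[i] == head
    return goal[:i] + tuple(body) + goal[i + 1:]

def refute(goal, ground_rules, depth=20):
    """Search for a ground SLD refutation (always resolving the first atom).
    Returns True if the empty goal is reachable within `depth` steps."""
    if not goal:                      # the empty clause: refutation found
        return True
    if depth == 0:
        return False
    atom = goal[0]
    for rule in ground_rules:
        if rule[0] == atom:
            if refute(resolvent(goal, 0, rule), ground_rules, depth - 1):
                return True
    return False

# Ground instantiations of Example 12.4.2 that matter for proving S(1, 6):
ground_rules = [
    (("S", (1, 6)), [("T", (1, 5)), ("R", (5, "a", 6))]),
    (("T", (1, 5)), [("R", (1, "a", 2)), ("R", (2, "b", 3)), ("T", (3, 5))]),
    (("T", (3, 5)), [("R", (3, "a", 4)), ("R", (4, "a", 5))]),
    (("R", (1, "a", 2)), []), (("R", (2, "b", 3)), []),
    (("R", (3, "a", 4)), []), (("R", (4, "a", 5)), []),
    (("R", (5, "a", 6)), []),
]

print(refute((("S", (1, 6)),), ground_rules))  # True
```

`refute` succeeds on ← S(1, 6), mirroring the refutation of Fig. 12.4.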
Example 12.4.3 Consider Example 12.4.1 and the program obtained by all possible instantiations of the rules of P_I in Example 12.4.2. An SLD ground refutation is shown in Fig. 12.4. It is a refutation of ← S(1, 6) [i.e., a proof of S(1, 6)].
Let us now explain why refutations provide proofs of facts. Suppose that we wish to prove A_1 ∧ · · · ∧ A_n. To do this we may equivalently prove that its negation (i.e., ¬A_1 ∨ · · · ∨ ¬A_n) is false. In other words, we try to refute (or disprove) ← A_1, . . . , A_n. The following rephrasing of the refutation in Fig. 12.4 should make this crystal clear.
Example 12.4.4 Continuing with the previous example, to prove S(1, 6), we try to refute its negation [i.e., ¬S(1, 6), or ← S(1, 6)]. This leads us to considering, in turn, the formulas

Goal                                                       Rule used
¬S(1, 6)                                                   (1)
¬T(1, 5) ∨ ¬R(5, a, 6)                                     (2)
¬R(1, a, 2) ∨ ¬R(2, b, 3) ∨ ¬T(3, 5) ∨ ¬R(5, a, 6)         (4)
¬R(2, b, 3) ∨ ¬T(3, 5) ∨ ¬R(5, a, 6)                       (5)
¬T(3, 5) ∨ ¬R(5, a, 6)                                     (3)
¬R(3, a, 4) ∨ ¬R(4, a, 5) ∨ ¬R(5, a, 6)                    (6)
¬R(4, a, 5) ∨ ¬R(5, a, 6)                                  (7)
¬R(5, a, 6)                                                (8)
false

At the end of the derivation, we have obtained a contradiction. Thus we have refuted ¬S(1, 6) [i.e., proved S(1, 6)].
Thus refutations provide proofs. As a consequence, a goal can be thought of as a query. Indeed, the arrow is sometimes denoted with a question mark in goals. For instance, we sometimes write

?- S(1, 6) for ← S(1, 6).

Observe that the process of finding a proof is nondeterministic for two reasons: the choice of the literal A to replace and the rule that is used to replace it.
We now have a technique for proving facts. The benefit of this technique is that it is sound and complete, in the sense that the set of facts in P(I) coincides with the facts that can be proven from P_I.
Theorem 12.4.5 Let P_I be a datalog program and ground(P_I) be the set of instantiations of rules in P_I with values in adom(P, I). Then for each ground goal g, P_I(∅) |= ¬g iff there exists a refutation of g with ground(P_I).
Crux To show the only if, we prove by induction that

(**) for each ground goal g, if T_{P_I}^i(∅) |= ¬g, there exists a refutation of g with ground(P_I).

(The if part is proved similarly by induction on the length of the refutation. Its proof is left for Exercise 12.18.)
The base case is obvious. Now suppose that (**) holds for some i ≥ 0, and let A_1, . . . , A_m be ground atoms such that T_{P_I}^{i+1}(∅) |= A_1 ∧ · · · ∧ A_m. Therefore each A_j is in T_{P_I}^{i+1}(∅). Consider some j. If A_j is an edb fact, we are back to the base case. Otherwise
Figure 12.5: SLD refutation. [The sequence of goals, together with the mgus used at each step, from ← S(1, x) down to the empty clause □.]
there exists an instantiation A_j ← B_1, . . . , B_p of some rule in P_I such that B_1, . . . , B_p are in T_{P_I}^i(∅). The refutation of ← A_j with ground(P_I) is as follows. It starts with

← A_j, ← B_1, B_2, . . . , B_p.

Now by induction there exist refutations of ← B_n, 1 ≤ n ≤ p, with ground(P_I). Using these refutations, one can extend the preceding derivation to a derivation leading to the empty clause. Furthermore, the refutations for each of the A_j's can be combined to obtain a refutation of ← A_1, . . . , A_m, as desired. Therefore (**) holds for i + 1. By induction, (**) holds.
SLD Resolution

The main difference between the general case and the warm-up is that we now handle goals and tuples with variables rather than just ground ones. In addition to obtaining the goal □, the process determines an instantiation θ for the free variables of the goal g such that P_I(∅) |= ¬θ(g). We start with an example: An SLD refutation of ← S(1, x) is shown in Fig. 12.5.
In general, we start with a goal (which does not have to be ground):
← A_1, . . . , A_i, . . . , A_n.
Suppose that we selected a literal to be replaced [e.g., A_i = Q(1, x_2, x_5)]. Any rule used for the replacement must have Q for predicate in the head, just as in the ground case. For instance, we might try some rule

Q(x_1, x_4, x_3) ← P(x_1, x_2), P(x_2, x_3), Q(x_3, x_4, x_5).
We now have two difficulties:

(i) The same variable may occur in the selected literal and in the rule with two different meanings. For instance, x_2 in the selected literal is not to be confused with x_2 in the rule.

(ii) The pattern of constants and of equalities between variables in the selected literal and in the head of the rule may be different. In our example, for the first attribute we have 1 in the selected literal and a variable in the rule head.
The first of these two difficulties is handled easily by renaming the variables of the rules. We shall use the following renaming discipline: Each time a rule is used, a new set of distinct variables is substituted for the ones in the rule. Thus we might use instead the rule

Q(x_11, x_14, x_13) ← P(x_11, x_12), P(x_12, x_13), Q(x_13, x_14, x_15).
The second difficulty requires a more careful approach. It is tackled using unification, which matches the pattern of the selected literal to that of the head of the rule, if possible. In the example, unification consists of finding a substitution θ such that θ(Q(1, x_2, x_5)) = θ(Q(x_11, x_14, x_13)). Such a substitution is called a unifier. For example, the substitution θ with θ(x_11) = 1, θ(x_2) = θ(x_14) = θ(x_5) = θ(x_13) = y is a unifier for Q(1, x_2, x_5) and Q(x_11, x_14, x_13), because θ(Q(1, x_2, x_5)) = θ(Q(x_11, x_14, x_13)) = Q(1, y, y). Note that this particular unifier is unnecessarily restrictive; there is no reason to identify all of x_2, x_14, x_5, x_13.
A unifier that is no more restrictive than needed to unify the atoms is called a most general unifier (mgu). Applying the mgu to the rule to be used results in specializing the rule just enough so that it applies to the selected literal. These terms are formalized next.

Definition 12.4.6 Let A, B be two atoms. A unifier for A and B is a substitution θ such that θA = θB. A substitution θ is more general than a substitution σ, denoted σ ⪯ θ, if σ = σ′ ∘ θ for some substitution σ′. A most general unifier (mgu) for A and B is a unifier θ for A, B such that, for each unifier σ of A, B, we have σ ⪯ θ.

Clearly, the relation ⪯ between unifiers is reflexive and transitive but not antisymmetric. Let ≈ be the equivalence relation on substitutions defined by θ ≈ σ iff θ ⪯ σ and σ ⪯ θ. If θ ≈ σ, then for each atom A, θ(A) and σ(A) are the same modulo renaming of variables.
Computing the mgu

We now develop an algorithm for computing an mgu for two atoms. Let R be a relation of arity p and R(x_1, . . . , x_p), R(y_1, . . . , y_p) two literals with disjoint sets of variables. Compute ≡, the equivalence relation on var ∪ dom defined as the reflexive, transitive closure of: x_i ≡ y_i for each i in [1, p]. The mgu of R(x_1, . . . , x_p) and R(y_1, . . . , y_p) does not exist if two distinct constants are in the same equivalence class. Otherwise their mgu is the substitution θ such that

1. if z ≡ a for some constant a, then θ(z) = a;
2. otherwise θ(z) = z′, where z′ is the smallest variable (under a fixed ordering on var) such that z ≡ z′.
We show that the foregoing computes an mgu.
Lemma 12.4.7 The substitution θ just computed is an mgu for R(x_1, . . . , x_p) and R(y_1, . . . , y_p).

Proof Clearly, θ is a unifier for R(x_1, . . . , x_p) and R(y_1, . . . , y_p). Suppose σ is another unifier for the same atoms. Let ≡′ be the equivalence relation on var ∪ dom defined by x ≡′ y iff σ(x) = σ(y). Because σ is a unifier, σ(x_i) = σ(y_i). It follows that x_i ≡′ y_i, so ≡ refines ≡′. Then the substitution σ′ defined by σ′(θ(x)) = σ(x) is well defined, because θ(x) = θ(x′) implies σ(x) = σ(x′). Thus σ = σ′ ∘ θ, so σ ⪯ θ. Because this holds for every unifier σ, it follows that θ is an mgu for the aforementioned atoms.
The following facts about mgus are important to note. Their proof is left to the reader (Exercise 12.19). In particular, part (ii) of the lemma says that the mgu of two atoms, if it exists, is essentially unique (modulo renaming of variables).

Lemma 12.4.8 Let A, B be atoms.

(i) If there exists a unifier for A, B, then A, B have an mgu.
(ii) If θ and θ′ are mgus for A, B, then θ ≈ θ′.
(iii) Let A, B be atoms with mgu θ. Then for each atom C, if C = θ_1 A = θ_2 B for substitutions θ_1, θ_2, then C = θ_3(θ(A)) = θ_3(θ(B)) for some substitution θ_3.
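The equivalence-class algorithm for computing an mgu translates directly into code. In this sketch (helper names ours), variables are strings and constants are anything else; a union-find structure computes ≡, and the substitution θ is read off the classes.

```python
def mgu(args1, args2):
    """Compute an mgu for R(args1), R(args2) with disjoint variables,
    following the equivalence-class construction: variables are strings,
    constants anything else.  Returns a substitution dict, or None."""
    parent = {}

    def find(t):
        while parent.get(t, t) != t:
            t = parent[t]
        return t

    def union(s, t):
        rs, rt = find(s), find(t)
        if rs != rt:
            parent[rs] = rt

    if len(args1) != len(args2):
        return None
    for s, t in zip(args1, args2):
        union(s, t)

    # Group all terms by equivalence class.
    classes = {}
    for t in set(args1) | set(args2):
        classes.setdefault(find(t), set()).add(t)

    subst = {}
    for cls in classes.values():
        constants = {t for t in cls if not isinstance(t, str)}
        if len(constants) > 1:        # two distinct constants unified: no mgu
            return None
        # Map every variable to the constant, or to the smallest variable.
        target = constants.pop() if constants else min(t for t in cls)
        for t in cls:
            if isinstance(t, str) and t != target:
                subst[t] = target
    return subst

def subst_apply(theta, t):
    return theta.get(t, t)

# Unifying Q(1, x2, x5) with Q(x11, x14, x13):
theta = mgu((1, "x2", "x5"), ("x11", "x14", "x13"))
print([subst_apply(theta, t) for t in (1, "x2", "x5")] ==
      [subst_apply(theta, t) for t in ("x11", "x14", "x13")])  # True
```

Unlike the restrictive unifier of the running example, this one identifies only what the construction forces (e.g., x_11 with the constant 1).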
We are now ready to rephrase the notion of resolvent to incorporate variables. Let

g ≡ ← A_1, . . . , A_i, . . . , A_n,    r ≡ B_1 ← B_2, . . . , B_m

be a goal and a rule such that

1. g and r have no variable in common (which can always be ensured by renaming the variables of the rule);
2. A_i and B_1 have an mgu θ.
Then the resolvent of g with r using θ is the goal

← θ(A_1), . . . , θ(A_{i-1}), θ(B_2), . . . , θ(B_m), θ(A_{i+1}), . . . , θ(A_n).
As before, it is easily verified that this resolvent is implied by g and r.
An SLD derivation from a goal g with a program P_I is a sequence g_0 = g, g_1, . . . of goals and θ_1, θ_2, . . . of substitutions such that for each j, g_j is the resolvent of g_{j-1} with some rule in P_I using θ_j. An SLD refutation of a goal g with P_I is an SLD derivation g_0 = g, . . . , g_q = □ with P_I.
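A single SLD resolution step with variables combines the renaming discipline and unification just described. The sketch below uses our own encoding (variables are strings beginning with "x"; fresh variables are drawn from a global counter); `unify` builds its substitution incrementally rather than via equivalence classes, which suffices for the atoms arising here.

```python
import itertools

counter = itertools.count()

def is_var(t):
    return isinstance(t, str) and t.startswith("x")

def rename(rule):
    """Substitute a fresh set of distinct variables for those of the rule."""
    head, body = rule
    fresh = {}
    def r(atom):
        pred, args = atom
        return (pred, tuple(fresh.setdefault(a, "x_%d" % next(counter))
                            if is_var(a) else a for a in args))
    return r(head), [r(a) for a in body]

def unify(a, b):
    """An mgu of two atoms as a substitution dict, or None."""
    if a[0] != b[0] or len(a[1]) != len(b[1]):
        return None
    subst = {}
    def walk(t):
        while is_var(t) and t in subst:
            t = subst[t]
        return t
    for s, t in zip(a[1], b[1]):
        s, t = walk(s), walk(t)
        if s == t:
            continue
        if is_var(s):
            subst[s] = t
        elif is_var(t):
            subst[t] = s
        else:
            return None                # two distinct constants
    return subst

def apply_subst(theta, atom):
    pred, args = atom
    def walk(t):
        while is_var(t) and t in theta:
            t = theta[t]
        return t
    return (pred, tuple(walk(a) for a in args))

def sld_step(goal, i, rule):
    """Resolvent of `goal` (a tuple of atoms) at position i with `rule`,
    after renaming the rule apart; returns (new_goal, mgu) or None."""
    head, body = rename(rule)
    theta = unify(goal[i], head)
    if theta is None:
        return None
    new_goal = tuple(apply_subst(theta, a)
                     for a in goal[:i] + tuple(body) + goal[i + 1:])
    return new_goal, theta

# Resolving the goal <- T(1, x) with rule 3 of Example 12.4.2:
rule = (("T", ("x1", "x3")),
        [("R", ("x1", "a", "x2")), ("R", ("x2", "a", "x3"))])
goal = (("T", (1, "x")),)
new_goal, theta = sld_step(goal, 0, rule)
print(new_goal)
```

The resulting goal has the form ← R(1, a, z), R(z, a, z′) for fresh variables z, z′, as in the first steps of Fig. 12.5.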
We now explain the meaning of such a refutation. As in the variable-free case, the existence of a refutation of a goal ← A_1, . . . , A_n with P_I can be viewed as a proof of the negation of the goal. The goal is

∀x_1, . . . , x_m(¬A_1 ∨ · · · ∨ ¬A_n),

where x_1, . . . , x_m are the variables in the goal. Its negation is therefore equivalent to

∃x_1, . . . , x_m(A_1 ∧ · · · ∧ A_n),

and the refutation can be seen as a proof of its validity. Note that, in the case of datalog programs (where by definition all unit clauses are ground), the composition θ_1 ∘ · · · ∘ θ_q of mgus used while refuting the goal yields a substitution by constants. This substitution provides witnesses for the existence of the variables x_1, . . . , x_m making the conjunction true. In particular, by enumerating all refutations of the goal, one could obtain all values for the variables satisfying the conjunction; that is, the answer to the query

{x_1, . . . , x_m | A_1 ∧ · · · ∧ A_n}.
This is not the case when one allows arbitrary definite clauses rather than datalog rules, as illustrated in the following example.

Example 12.4.9 Consider the program

S(x, z) ← G(x, z)
S(x, z) ← G(x, y), S(y, z)
S(x, x) ←

that computes in S the reflexive transitive closure of graph G. This is a set of definite clauses but not a datalog program because of the last rule. However, resolution can be extended to (and is indeed in general presented for) definite clauses. Observe, for instance, that the goal ← S(w, w) is refuted with a substitution that does not bind variable w to a constant.
SLD resolution is a technique that provides proofs of facts. One must be sure that
it produces only correct proofs (soundness) and that it is powerful enough to prove all
true facts (completeness). To conclude this section, we demonstrate the soundness and
completeness of SLD resolution for datalog programs.
We use the following lemma:

Lemma 12.4.10 Let g ≡ ← A_1, . . . , A_i, . . . , A_n and r ≡ B_1 ← B_2, . . . , B_m be a goal and a rule with no variables in common, and let

g′ ≡ ← θ(A_1), . . . , θ(A_{i-1}), θ(B_2), . . . , θ(B_m), θ(A_{i+1}), . . . , θ(A_n).

If g′ is a resolvent of g with r using θ, then the formula r implies

r_{g′→g} ≡ (θ(A_1) ∧ · · · ∧ θ(A_{i-1}) ∧ θ(B_2) ∧ · · · ∧ θ(B_m) ∧ θ(A_{i+1}) ∧ · · · ∧ θ(A_n))
                → (θ(A_1) ∧ · · · ∧ θ(A_n)).
Proof Let J be an instance over sch(P) satisfying r, and let μ be a valuation such that

J |= μ[θ(A_1) ∧ · · · ∧ θ(A_{i-1}) ∧ θ(B_2) ∧ · · · ∧ θ(B_m) ∧ θ(A_{i+1}) ∧ · · · ∧ θ(A_n)].

Because

J |= μ[θ(B_2) ∧ · · · ∧ θ(B_m)]

and J |= B_1 ← B_2, . . . , B_m, we have J |= μ[θ(B_1)]. That is, J |= μ[θ(A_i)]. Thus

J |= μ[θ(A_1) ∧ · · · ∧ θ(A_n)].

Hence for each μ, J |= μ[r_{g′→g}]. Therefore J |= r_{g′→g}. Thus each instance over sch(P) satisfying r also satisfies r_{g′→g}, so r implies r_{g′→g}.
Using this lemma, we have the following:

Theorem 12.4.11 (Soundness of SLD resolution) Let P_I be a program and g ≡ ← A_1, . . . , A_n a goal. If there exists an SLD refutation of g with P_I and mgus θ_1, . . . , θ_q, then P_I implies θ_1 ∘ · · · ∘ θ_q(A_1 ∧ · · · ∧ A_n).
Proof Let J be some instance over sch(P) satisfying P_I. Let g_0 = g, . . . , g_q = □ be an SLD refutation of g with P_I, and for each j, let g_j be a resolvent of g_{j-1} with some rule in P_I using some mgu θ_j. Then for each j, the rule that is used implies g_j → θ_j(g_{j-1}) by Lemma 12.4.10. Because J satisfies P_I, for each j,

J |= g_j → θ_j(g_{j-1}).

Clearly, this implies that for each j,

J |= θ_{j+1} ∘ · · · ∘ θ_q(g_j) → θ_j ∘ · · · ∘ θ_q(g_{j-1}).

By transitivity, this shows that

J |= g_q → θ_1 ∘ · · · ∘ θ_q(g_0),

and so

J |= true → θ_1 ∘ · · · ∘ θ_q(g).

Thus J |= θ_1 ∘ · · · ∘ θ_q(A_1 ∧ · · · ∧ A_n).
We next prove the converse of the previous result (namely, the completeness of SLD resolution).

Theorem 12.4.12 (Completeness of SLD resolution) Let P_I be a program and g ≡ ← A_1, . . . , A_n a goal. If P_I implies ¬g, then there exists a refutation of g with P_I.
Proof Suppose that P_I implies ¬g. Consider the set ground(P_I) of instantiations of rules in P_I with constants in adom(P, I). Clearly, ground(P_I)(∅) is a model of P_I, so it satisfies ¬g. Thus there exists a valuation θ of the variables in g such that ground(P_I)(∅) satisfies ¬θ(g). By Theorem 12.4.5, there exists a refutation of θ(g) using ground(P_I).
Let g_0 = θ(g), . . . , g_p = □ be that refutation. We show by induction on k that for each k in [0, p],

(†) there exists a derivation g′_0 = g, . . . , g′_k with P_I such that g_k = σ_k(g′_k) for some substitution σ_k.

For suppose that (†) holds for each k. Then for k = p, there exists a derivation g′_0 = g, . . . , g′_p with P_I such that □ = g_p = σ_p(g′_p) for some σ_p, so g′_p = □. Therefore there exists a refutation of g with P_I.
The basis of the induction holds because g_0 = θ(g) = θ(g′_0). Now suppose that (†) holds for some k. The next step of the refutation consists of selecting some atom B of g_k and applying a rule r in ground(P_I). In g′_k, select the atom B′ whose location in g′_k corresponds to the location of B in g_k. Note that B = σ_k(B′). In addition, we know that there is a rule r′ ≡ B′_1 ← B′_2, . . . , B′_m in P_I that has r for instantiation via some substitution θ′ (such a pair r′, θ′ exists although it may not be unique). As usual, we can assume that the variables in g′_k are disjoint from those in r′. Let σ′_k be the substitution defined by σ′_k(x) = σ_k(x) if x is a variable in g′_k, and σ′_k(x) = θ′(x) if x is a variable in r′.

Clearly, σ′_k(B′) = σ′_k(B′_1) = B so, by Lemma 12.4.8(i), B′ and B′_1 have some mgu γ. Let g′_{k+1} be the resolvent of g′_k with r′ at B′ using the mgu γ. By the definition of mgu, there exists a substitution σ_{k+1} such that σ′_k = σ_{k+1} ∘ γ. Clearly, σ_{k+1}(g′_{k+1}) = g_{k+1}, and (†) holds for k + 1. By induction, (†) holds for each k.
Figure 12.6: SLD tree. [The SLD tree for the goal ← S(1, x): selected atoms are boxed, and each edge is labeled with the rule used and the substitution (e.g., 1: x_1/1, x_3/x). One branch reaches the empty clause □, one subtree is infinite (repeated use of rule 2), and one branch admits no possible derivation.]
SLD Trees
We have shown that SLD resolution is sound and complete. Thus it provides an adequate
top-down technique for obtaining the facts in the answer to a datalog program. To prove that
a fact is in the answer, one must search for a refutation of the corresponding goal. Clearly,
there are many refutations possible. There are two sources of nondeterminism in searching
for a refutation: (1) the choice of the selected atom, and (2) the choice of the clause to unify
with the atom. Now let us assume that we have fixed some golden rule, called a selection rule, for choosing which atom to select at each step in a refutation. A priori, such a rule may be very simple (e.g., as in Prolog, always take the leftmost atom) or in contrast very involved, taking into account the entire history of the refutation. Once an atom has been selected, we can systematically search for all possible unifying rules. Such a search can be represented in an SLD tree. For instance, consider the tree of Fig. 12.6 for the program in Example 12.4.2. The selected atoms are represented with boxes. Edges denote the unifications used. Given ← S(1, x), only one rule can be used. Given T(1, x_2), two rules are applicable, which accounts for the two descendants of vertex T(1, x_2). The first number in an edge label denotes the rule that is used, and the remaining part denotes the substitution. An SLD tree is a representation of all the derivations obtained with a fixed selection rule for atoms.
There are several important observations to be made about this particular SLD tree:

(i) It is successful because one branch yields □.
(ii) It has an infinite subtree that corresponds to an infinite sequence of applications of rule (2) of Example 12.4.2.
(iii) It has a blocking branch (one ending in a nonempty goal that admits no further derivation).

We can now explain (to a certain extent) the acronym SLD. SLD stands for selection rule-driven linear resolution for definite clauses. Rule-driven refers to the rule used for selecting the atom. An important fact is that the success or failure of an SLD tree does not depend on the rule for selecting atoms. This explains why the definition of an SLD tree does not specify the selection rule.
Datalog versus Logic Programming, Revisited

Having established the three semantics for datalog, we summarize briefly the main differences between datalog and the more general logic-programming (lp) framework.

Syntax: Datalog has only relation symbols, whereas lp also uses function symbols. Datalog requires variables in rule heads to appear in bodies; in particular, all unit clauses are ground.

Model-theoretic semantics: Due to the presence of function symbols in lp, models of lp programs may be infinite. Datalog programs always have finite models. Apart from this distinction, lp and datalog are identical with respect to model-theoretic semantics.

Fixpoint semantics: Again, the minimum fixpoint of the immediate consequence operator may be infinite in the lp case, whereas it is always finite for datalog. Thus the fixpoint approach does not necessarily provide a constructive semantics for lp.

Proof-theoretic semantics: The technique of SLD resolution is similar for datalog and lp, with the difference that the computation of mgus becomes slightly more complicated with function symbols (see Exercise 12.20). For datalog, the significance of SLD resolution concerns primarily optimization methods inspired by resolution (such as magic sets; see Chapter 13). In lp, SLD resolution is more important. Due to the possibly infinite answers, the bottom-up approach of the fixpoint semantics may not be feasible. On the other hand, every fact in the answer has a finite proof by SLD resolution. Thus SLD resolution emerges as the practical alternative.

Expressive power: A classical result is that lp can express all recursively enumerable (r.e.) predicates. However, as will be discussed in Part E, the expressive power of datalog lies within ptime. Why is there such a disparity? A fundamental reason is that function symbols are used in lp, and so an infinite domain of objects can be constructed from a finite set of symbols. Speaking technically, the result for lp states that if S is a (possibly infinite) r.e. predicate over terms constructed using a finite language, then there is an lp program that produces for some predicate symbol exactly the tuples in S. Speaking intuitively, this follows from the facts that, viewed in a bottom-up sense, lp provides composition and looping, and terms of arbitrary length can be used as scratch paper
(e.g., to simulate a Turing tape). In contrast, the working space and output of range-
restricted datalog programs are always contained within the active domain of the input
and the program and thus are bounded in size.
Another distinction between lp and datalog in this context concerns the nature of expressive power results for datalog and for query languages in general. Specifically, a datalog program P is generally viewed as a mapping from instances of edb(P) to instances of idb(P). Thus the expressive power of datalog is generally measured in comparison with mappings on families of database instances rather than in terms of expressing a single (possibly infinite) predicate.
12.5 Static Program Analysis
In this section, the static analysis of datalog programs is considered.² As with relational calculus, even simple static properties are undecidable for datalog programs. In particular, although tableau homomorphism allowed us to test the equivalence of conjunctive queries, equivalence of datalog programs is undecidable in general. This complicates a systematic search for alternative execution plans for datalog queries and yields severe limitations to query optimization. It also entails the undecidability of many other problems related to optimization, such as deciding when selection propagation (in the style of pushing selections in relational algebra) can be performed, or when parallel evaluation is possible.
We consider three fundamental static properties: satisfiability, containment, and a new one, boundedness. We exhibit a decision procedure for satisfiability. Recall that we showed in Chapter 5 that an analogous property is undecidable for CALC. The decidability of satisfiability for datalog may therefore be surprising. However, one must remember that, although datalog is more powerful than CALC in some respects (it has recursion), it is less powerful in others (there is no negation). It is the lack of negation that makes satisfiability decidable for datalog.
We prove the undecidability of containment and boundedness for datalog programs and consider variations or restrictions that are decidable.
Satisfiability

Let P be a datalog program. An intensional relation T is satisfiable by P if there exists an instance I over edb(P) such that P(I)(T) is nonempty. We give a simple proof of the decidability of satisfiability for datalog programs. We will soon see an alternative proof based on context-free languages.
We first consider constant-free programs. We then describe how to reduce the general case to the constant-free one.
To prove the result, we use an auxiliary result about instance homomorphisms that is of some interest in its own right. Note that any mapping λ from dom to dom can be extended to a homomorphism over the set of instances, which we also denote by λ.

² Recall that static program analysis consists of trying to detect statically (i.e., at compile time) properties of programs.
Lemma 12.5.1 Let P be a constant-free datalog program, I, J two instances over sch(P), q a positive-existential query over sch(P), and λ a mapping over dom. If λ(I) ⊆ J, then (i) λ(q(I)) ⊆ q(J), and (ii) λ(P(I)) ⊆ P(J).

Proof For (i), observe that q is monotone and that λ ∘ q ⊆ q ∘ λ (which is not necessary if q has constants). Because T_P can be viewed as a positive-existential query, a straightforward induction proves (ii).
This result does not hold for datalog programs with constants (see Exercise 12.21).
Theorem 12.5.2 The satisfiability of an idb relation T by a constant-free datalog program P is decidable.

Proof Suppose that T is satisfiable by a constant-free datalog program P. We prove that P(I_a)(T) is nonempty for some particular instance I_a. Let a be in dom. Let I_a be the instance over edb(P) such that for each R in edb(P), I_a(R) contains a single tuple with a in each entry. Because T is satisfiable by P, there exists I such that P(I)(T) ≠ ∅. Consider the function λ that maps every constant in dom to a. Then λ(I) ⊆ I_a. By the previous lemma, λ(P(I)) ⊆ P(I_a). Therefore P(I_a)(T) is nonempty. Hence T is satisfiable by P iff P(I_a)(T) ≠ ∅.
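Theorem 12.5.2 yields a simple decision procedure: evaluate P on the single-constant instance I_a and test whether T is nonempty. On I_a every valuation sends all variables to the one constant, so for constant-free programs the evaluation collapses to a propositional computation, as the following sketch shows (encoding and names ours).

```python
def satisfiable(rules, edb, target):
    """Decide satisfiability of the idb relation `target` for a constant-free
    program.  `edb` maps each edb predicate to its arity.  We build I_a with
    the single constant 0; on I_a a body atom holds iff the corresponding
    all-0 fact is present, so a fixpoint loop over all-0 facts suffices."""
    facts = {(p, (0,) * k) for p, k in edb.items()}
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if all((p, (0,) * len(args)) in facts for p, args in body):
                f = (head[0], (0,) * len(head[1]))
                if f not in facts:
                    facts.add(f)
                    changed = True
    return any(p == target for p, _ in facts)

# T computes the transitive closure of G; W is an idb that is never derived,
# so U is unsatisfiable.
rules = [
    (("T", ("x", "y")), [("G", ("x", "y"))]),
    (("T", ("x", "z")), [("G", ("x", "y")), ("T", ("y", "z"))]),
    (("U", ("x",)),     [("W", ("x", "x"))]),
]
print(satisfiable(rules, {"G": 2}, "T"))  # True
print(satisfiable(rules, {"G": 2}, "U"))  # False
```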
Let us now consider the case of datalog programs with constants. Let P be a datalog program with constants. For example, suppose that b, c are the only two constants occurring in the program and that R is a binary relation occurring in P. We transform the problem into a problem without constants. Specifically, we replace R with nine new relations:

R_∗∗, R_b∗, R_c∗, R_∗b, R_∗c, R_bc, R_cb, R_bb, R_cc.

The first one is binary, the next four are unary, and the last four are 0-ary (i.e., are propositions). Intuitively, a fact R(x, y) is represented by the fact R_∗∗(x, y) if x, y are not in {b, c}; R(b, x) with x not in {b, c} is represented by R_b∗(x), and similarly for R_c∗, R_∗b, R_∗c. The fact R(b, c) is represented by the proposition R_bc(), etc. Using this kind of transformation for each relation, one translates program P into a constant-free program P′ such that T is satisfiable by P iff T_w is satisfiable by P′ for some string w of ∗'s and constants occurring in P. (See Exercise 12.22a.)
Containment

Consider two datalog programs P, P′ with the same extensional relations edb(P) = edb(P′) and a target relation T occurring in both programs. We say that P is included in P′ with respect to T, denoted P ⊑_T P′, if for each instance I over edb(P), P(I)(T) ⊆ P′(I)(T). The containment problem is undecidable. We prove this by reduction of the containment problem for context-free languages. The technique is interesting because it exhibits a correspondence between proof trees of certain datalog programs and derivation trees of context-free languages.
We first illustrate the correspondence in an example.

Example 12.5.3 Consider the context-free grammar G = (V, Σ, Π, S), where V = {S, T}, S is the start symbol, Σ = {a, b}, and the set Π of production rules is

S → Ta
T → abT | aa.

The corresponding datalog program P_G is the program of Example 12.4.1. A proof tree and its corresponding derivation tree are shown in Fig. 12.3.
We next formalize the correspondence between proof trees and derivation trees. A context-free grammar G is an (ε-free) grammar if the following hold:

(1) G is ε-free (i.e., it does not have any production of the form X → ε, where ε denotes the empty string); and
(2) the start symbol does not occur in any right-hand side of a production.

We use the following:

Fact  It is undecidable, given (ε-free) grammars G1, G2, whether L(G1) ⊆ L(G2).
For each (ε-free) grammar G, let P_G, the corresponding datalog program, be constructed (similar to Example 12.5.3) as follows: Let G = (V, Σ, Π, S). We may assume without loss of generality that V is a set of relation names of arity 2 and Σ a set of elements from dom. Then idb(P_G) = V and edb(P_G) = {R}, where R is a ternary relation. Let x1, x2, . . . be an infinite sequence of distinct variables. To each production in Π,

    T → C1 . . . Cn,

we associate a datalog rule

    T(x1, x_{n+1}) ← A1, . . . , An,

where for each i

• if Ci is a nonterminal T', then Ai = T'(xi, x_{i+1});
• if Ci is a terminal b, then Ai = R(xi, b, x_{i+1}).
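This production-by-production translation is easy to mechanize. The sketch below (our own Python rendering; rules are plain tuples, not the book's notation) builds the rule associated with one production and is run on the grammar of Example 12.5.3.

```python
# Translate a production T -> C1...Cn into the datalog rule
#   T(x1, x_{n+1}) <- A1, ..., An
# where Ai = Ci(xi, x_{i+1}) for a nonterminal Ci and
#       Ai = R(xi, b, x_{i+1}) for a terminal b.

def production_to_rule(head, rhs, nonterminals):
    body = []
    for i, c in enumerate(rhs, start=1):
        if c in nonterminals:
            body.append((c, (f"x{i}", f"x{i+1}")))       # nonterminal atom
        else:
            body.append(("R", (f"x{i}", c, f"x{i+1}")))  # terminal atom
    return (head, ("x1", f"x{len(rhs) + 1}")), body

# The grammar of Example 12.5.3: S -> Ta, T -> abT | aa
for head, rhs in [("S", "Ta"), ("T", "abT"), ("T", "aa")]:
    print(production_to_rule(head, rhs, {"S", "T"}))
```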
Note that, for any proof tree of a fact S(a1, an) using P_G, the sequence of its leaves is (in this order)

    R(a1, b1, a2), . . . , R(a_{n−1}, b_{n−1}, an),

for some a2, . . . , a_{n−1} and b1, . . . , b_{n−1}. The connection between derivation trees of G and proof trees of P_G is shown in the following.
Proposition 12.5.4  Let G be an (ε-free) grammar and P_G be the associated datalog program constructed as just shown. For each a1, . . . , an, b1, . . . , b_{n−1}, there is a proof tree of S(a1, an) from P_G with leaves R(a1, b1, a2), . . . , R(a_{n−1}, b_{n−1}, an) (in this order) iff b1 . . . b_{n−1} is in L(G).
The proof of the proposition is left as Exercise 12.25. Now we can show the following:

Theorem 12.5.5  It is undecidable, given P, P' (with edb(P) = edb(P')) and T, whether P ⊑_T P'.
Proof  It suffices to show that

(†)  for each pair G1, G2 of (ε-free) grammars,
     L(G1) ⊆ L(G2)  iff  P_G1 ⊑_S P_G2.

Suppose (†) holds and ⊑_T containment is decidable. Then we obtain an algorithm to decide containment of (ε-free) grammars, which contradicts the aforementioned fact.

Let G1, G2 be two (ε-free) grammars. We show here that

    L(G1) ⊆ L(G2)  implies  P_G1 ⊑_S P_G2.
(The other direction is similar.) Suppose that L(G1) ⊆ L(G2). Let I be over edb(P_G1) and S(a1, an) be in P_G1(I). Then there exists a proof tree of S(a1, an) from P_G1 and I, with leaves labeled by facts

    R(a1, b1, a2), . . . , R(a_{n−1}, b_{n−1}, an),

in this order. By Proposition 12.5.4, b1 . . . b_{n−1} is in L(G1). Because L(G1) ⊆ L(G2), b1 . . . b_{n−1} is in L(G2). By the proposition again, there is a proof tree of S(a1, an) from P_G2 with leaves R(a1, b1, a2), . . . , R(a_{n−1}, b_{n−1}, an), all of which are facts in I. Thus S(a1, an) is in P_G2(I), so P_G1 ⊑_S P_G2.
Note that the datalog programs used in the preceding construction are very particular: They are essentially chain programs. Intuitively, in a chain program the variables in a rule body form a chain. More precisely, rules in chain programs are of the form

    A0(x0, xn) ← A1(x0, x1), A2(x1, x2), . . . , An(x_{n−1}, xn).

The preceding proof can be tightened to show that containment is undecidable even for chain programs (see Exercise 12.26).
The connection with grammars can also be used to provide an alternate proof of the decidability of satisfiability; satisfiability can be reduced to the emptiness problem for context-free languages (see Exercise 12.22c).

Although containment is undecidable, there is a closely related, stronger property which is decidable, namely uniform containment. For two programs P, P' over the same set of intensional and extensional relations, we say that P is uniformly contained in P', denoted P ⊑ P', iff for each I over sch(P), P(I) ⊆ P'(I). Uniform containment is a sufficient condition for containment. Interestingly, one can decide uniform containment. The test for uniform containment uses dependencies studied in Part D and the fundamental chase technique (see Exercises 12.27 and 12.28).
Boundedness
A key problem for datalog programs (and recursive programs in general) is to estimate the
depth of recursion of a given program. In particular, it is important to know whether for a
given program the depth is bounded by a constant independent of the input. Besides being
meaningful for optimization, this turns out to be an elegant mathematical problem that has
received a lot of attention.
A datalog program P is bounded if there exists a constant d such that for each I over edb(P), stage(P, I) ≤ d. Clearly, if a program is bounded it is essentially nonrecursive, although it may appear to be recursive syntactically. In some sense, it is falsely recursive.
Example 12.5.6  Consider the following two-rule program:

    Buys(x, y) ← Trendy(x), Buys(z, y)
    Buys(x, y) ← Likes(x, y)

This program is bounded because Buys(z, y) can be replaced in the body by Likes(z, y), yielding an equivalent recursion-free program. On the other hand, the program

    Buys(x, y) ← Knows(x, z), Buys(z, y)
    Buys(x, y) ← Likes(x, y)

is inherently recursive (i.e., is not equivalent to any recursion-free program).
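The contrast can be observed experimentally. In the sketch below (our own Python harness; the stage counter is a simple proxy for the book's stage(P, I)), the Trendy version reaches its fixpoint after a constant number of stages on every input, while the Knows version needs more stages as the Knows chain grows.

```python
# Count naive-evaluation stages for the two programs of Example 12.5.6.
# rec_body selects the recursive rule:
#   "trendy": Buys(x, y) <- Trendy(x), Buys(z, y)
#   "knows":  Buys(x, y) <- Knows(x, z), Buys(z, y)
# Both programs also have the exit rule Buys(x, y) <- Likes(x, y).

def stages(rec_body, likes, aux):
    buys, n = set(), 0
    while True:
        new = set(likes)
        if rec_body == "trendy":
            new |= {(x, y) for x in aux for (_, y) in buys}
        else:
            new |= {(x, y) for (x, z) in aux for (z2, y) in buys if z == z2}
        if new <= buys:
            return n
        buys |= new
        n += 1

for n in (3, 6, 9):
    likes = {(n, n)}
    trendy = set(range(n))
    knows = {(i, i + 1) for i in range(n)}   # chain 0 -> 1 -> ... -> n
    print(stages("trendy", likes, trendy),   # always 2
          stages("knows", likes, knows))     # grows with n
```

The bounded program stabilizes after two stages no matter how long the Trendy list is; the chain instance forces the recursive program through roughly one stage per edge.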
It is important to distinguish truly recursive programs from falsely recursive (bounded)
programs. Unfortunately, boundedness cannot be tested.
Theorem 12.5.7 Boundedness is undecidable for datalog programs.
The proof is by reduction of the PCP (see Chapter 2). One can even show that bound-
edness remains undecidable under strong restrictions, such as that the programs that are
considered (1) are constant-free, (2) contain a unique recursive rule, or (3) contain a unique
intensional relation. Decidability results have been obtained for linear programs or chain-
rule programs (see Exercise 12.31).
Bibliographic Notes
It is difficult to attribute datalog to particular researchers because it is a restriction or
extension of many previously proposed languages; some of the early history is discussed
in [MW88a]. The name datalog was coined (to our knowledge) by David Maier.
Many particular classes of datalog programs have been investigated. Examples are the
class of monadic programs (all intensional relations have arity one), the class of linear
programs (in the body of each rule of these programs, there can be at most one relation that
is mutually recursive with the head relation; see Chapter 13), the class of chain programs
[UG88, AP87a] (their syntax resembles that of context-free grammars), and the class of
single rule programs or sirups [Kan88] (they consist of a single nontrivial rule and a trivial
exit rule).
The fixpoint semantics that we considered in this chapter is due to [CH85]. However, it had been considered much earlier in the context of logic programming [vEK76, AvE82]. For logic programming, the existence of a least fixpoint is proved using [Tar55].
The study of stage functions stage(d, H) is a major topic in [Mos74], where they are defined for finite structures (i.e., instances) as well as for infinite structures.
Resolution was originally proposed in the context of automatic theorem proving. Its
foundations are due to Robinson [Rob65]. SLD resolution was developed in [vEK76].
These form the basis of logic programming introduced by Kowalski [Kow74] and
[CKRP73] and led to the language Prolog. Nice presentations of the topic can be found
in [Apt91, Llo87]. Standard SLD resolution is more general than that presented in this
chapter because of the presence of function symbols. The development is similar except
for the notion of unification, which is more involved. A survey of unification can be found
in [Sie88, Kni89].
The programming language Prolog proposed by Colmerauer [CKRP73] is based on
SLD resolution. It uses a particular strategy for searching for SLD refutations. Vari-
ous ways to couple Prolog with a relational database system have been considered (see
[CGT90]).
The undecidability of containment is studied in [CGKV88, Shm87]. The decidability
of uniform containment is shown in [CK86, Sag88]. The decidability of containment for
monadic programs is studied in [CGKV88]. The equivalence of recursive and nonrecursive
datalog programs is shown to be decidable in [CV92]. The complexity of this problem is
considered in [CV94].
Interestingly, bounded recursion was defined and used early in the context of universal relations [MUV84]. Example 12.5.6 is from [Nau86]. Undecidability results for boundedness of various datalog classes are shown in [GMSV87, GMSV93, Var88, Abi89]. Decidability results for particular subclasses are demonstrated in [Ioa85, Nau86, CGKV88, NS87, Var88].
Boundedness implies that the query expressed by the program is a positive existential query and therefore is expressible in CALC (over finite inputs). What about the converse? If infinite inputs are allowed, then (by a compactness argument) unboundedness implies nonexpressibility by CALC. But in the finite (database) case, compactness does not hold, and the question remained open for some time. Kolaitis observed that unboundedness does not imply nonexpressibility by CALC over finite structures for datalog with inequalities (x ≠ y). (We did not consider comparators ≠, <, ≤, etc. in this chapter.) The question was settled by Ajtai and Gurevich [AG89], who showed by an elegant argument that no unbounded datalog program is expressible in CALC, even on finite structures.
Another decision problem for datalog arises from the interaction of datalog with functional dependencies. In particular, it is undecidable, given a datalog program P, a set Σ of fds on edb(P), and a set Γ of fds on idb(P), whether P(I) |= Γ whenever I |= Σ [AH88].
The expressive power of datalog has been investigated in [AC89, ACY91, CH85, Shm87, LM89, KV90c]. Clearly, datalog expresses only monotonic queries, commutes with homomorphisms of the database (if there are no constants in the program), and can be evaluated in polynomial time (see also Exercise 12.11). It is natural to wonder if datalog expresses precisely those queries. The answer is negative. Indeed, [ACY91] shows that the existence of a path whose length is a perfect square between two nodes is not expressible in datalog≠ (datalog augmented with inequalities x ≠ y), and so not in datalog. This is a monotonic, polynomial-time query commuting with homomorphisms. The parallel complexity of datalog is surveyed in [Kan88].
The function symbols used in logic programming are interpreted over a Herbrand domain and are prohibited in datalog. However, it is interesting to incorporate arithmetic functions such as addition and multiplication into datalog. Such functions can also be viewed as infinite base relations. If these are present, it is possible that the bottom-up evaluation of a datalog program will not terminate. This issue was first studied in [RBS87], where finiteness dependencies were introduced. These dependencies can be used to describe how the finiteness of the range of a set of variables can imply the finiteness of the range of another variable. [For example, the relation +(x, y, z) satisfies the finiteness dependencies {x, y} ⇝ {z}, {x, z} ⇝ {y}, and {y, z} ⇝ {x}.] Safety of datalog programs with infinite relations constrained by finiteness dependencies is undecidable [SV89]. Various syntactic conditions on datalog programs that ensure safety are developed in [RBS87, KRS88a, KRS88b, SV89]. Finiteness dependencies were used to develop a safety condition for the relational calculus with infinite base relations in [EHJ93]. Safety was also considered in the context of data functions (i.e., functions whose extent is predefined).
Exercises
Exercise 12.1 Refer to the Parisian Metro database. Give a datalog program that yields, for
each pair of stations (a, b), the stations c such that c is reachable (1) from both a and b; and (2)
from a or b.
Exercise 12.2  Consider a database consisting of the Metro and Cinema databases, plus a relation Theater-Station giving for each theater the closest metro station. Suppose that you live near the Odeon metro station. Write a program that answers the query "Near which metro station can I see a Bergman movie?" (Having spent many years in Los Angeles, you do not like walking, so your only option is to take the metro at Odeon and get off at the station closest to the theater.)
Exercise 12.3 (Same generation) Consider a binary relation Child_of , where the intended
meaning of Child_of (a, b) is that a is the child of b. Write a datalog program computing the
set of pairs (c, d), where c and d have a common ancestor and are of the same generation with
respect to this ancestor.
Exercise 12.4  We are given two directed graphs G_black and G_white over the same set V of vertexes, represented as binary relations. Write a datalog program P that computes the set of pairs (a, b) of vertexes such that there exists a path from a to b where black and white edges alternate, starting with a white edge.
Exercise 12.5 Suppose we are given an undirected graph with colored vertexes represented
by a binary relation Color giving the colors of vertexes and a binary relation Edge giving the
connection between them. (Although Edge provides directed edges, we ignore the direction, so
we treat the graph as undirected.) Say that a vertex is good if it is connected to a blue vertex
(blue is a constant) or if it is connected to an excellent vertex. An excellent vertex is a vertex
that is connected to an outstanding vertex and to a red vertex. An outstanding vertex is a vertex
that is connected to a good vertex, an excellent one, and a yellow one. Write a datalog program
that computes the excellent vertexes.
Exercise 12.6 Consider a directed graph G represented as a binary relation. Show a datalog
program that computes a binary relation T containing the pairs (a, b) for which there is a path
of odd length from a to b in G.
Exercise 12.7 Given a directed graph G represented as a binary relation, write a datalog
program that computes the vertexes x such that (1) there exists a cycle of even length passing
through x; (2) there is a cycle of odd length through x; (3) there are even- and odd-length cycles
through x.
Exercise 12.8  Consider the following program P:

    R(x, y) ← Q(y, x), S(x, y)
    S(x, y) ← Q(x, y), T(x, z)
    T(x, y) ← Q(x, z), S(z, y)

Let I be a relation over edb(P). Describe the output of the program. Now suppose the first rule is replaced by R(x, y) ← Q(y, x). Describe the output of the new program.
Exercise 12.9 Prove Lemma 12.3.1.
Exercise 12.10 Prove that datalog queries are monotone.
Exercise 12.11  Suppose P is some property of graphs definable by a datalog program. Show that P is preserved under extensions and homomorphisms. That is, if G is a graph satisfying P, then (1) every supergraph of G satisfies P and (2) if h is a graph homomorphism, then h(G) satisfies P.
Exercise 12.12  Show that the following graph properties are not definable by datalog programs:

(i) The number of nodes is even.
(ii) There is a nontrivial cycle (a trivial cycle is an edge ⟨a, a⟩ for some vertex a).
(iii) There is a simple path of even length between two specified nodes.

Show that nontrivial cycles can be detected if inequalities of the form x ≠ y are allowed in rule bodies.
Exercise 12.13 [ACY91] Consider the query perfect square on graphs: Is there a path (not
necessarily simple) between nodes a and b whose length is a perfect square?
(i) Prove that perfect square is preserved under extension and homomorphism.
(ii) Show that perfect square is not expressible in datalog.
Hint: For (ii), consider words consisting of simple paths from a to b, and prove a pumping
lemma for words accepted by datalog programs.
Exercise 12.14  Present an algorithm that, given the set of proof trees of depth i with a program P and instance I, constructs all proof trees of depth i + 1. Make sure that your algorithm terminates.
Exercise 12.15  Let P be a datalog program, I an instance of edb(P), and R in idb(P). Let u be a vector of distinct variables of the arity of R. Demonstrate that

    P(I)(R) = {θ(R(u)) | there is a refutation of ← R(u) using P_I and
               substitutions θ1, . . . , θn such that θ = θ1 ∘ · · · ∘ θn}.
Exercise 12.16 (Substitution lemma)  Let P_I be a program, g a goal, and θ a substitution. Prove that if there exists an SLD refutation of θ(g) with P_I and σ, there also exists an SLD refutation of g with P_I and θσ.
Exercise 12.17  Reprove Theorem 12.3.4 using Tarski's and Kleene's theorems stated in Remark 12.3.5.
Exercise 12.18 Prove the if part of Theorem 12.4.5.
Exercise 12.19 Prove Lemma 12.4.8.
Exercise 12.20 (Unification with function symbols)  In general logic programming, one can use function symbols in addition to relations. A term is then either a constant in dom, a variable in var, or an expression f(t1, . . . , tn), where f is an n-ary function symbol and each ti is a term. For example, f(g(x, 5), y, f(y, x, x)) is a term. In this context, a substitution is a mapping from a subset of var into the set of terms. Given a substitution θ, it is extended in the natural manner to include all terms constructed over the domain of θ. Extend the definitions of unifier and mgu to terms and to atoms permitting terms. Give an algorithm to obtain the mgu of two atoms.
Exercise 12.21 Prove that Lemma 12.5.1 does not generalize to datalog programs with
constants.
Exercise 12.22  This exercise develops three alternative proofs of the generalization of Theorem 12.5.2 to datalog programs with constants. Prove the generalization by

(a) using the technique outlined just after the statement of the theorem;
(b) making a direct proof using as input an instance I_{C∪{a}}, where C is the set of all constants occurring in the program and a is new, and where each relation in the instance contains all tuples constructed using C ∪ {a};
(c) reducing to the emptiness problem for context-free languages.
Exercise 12.23 (datalog≠)  The language datalog≠ is obtained by extending datalog with a new predicate ≠ with the obvious meaning.

(a) Formally define the new language.
(b) Extend the least-fixpoint and minimal-model semantics to datalog≠.
(c) Show that satisfiability remains decidable for datalog≠ and that it can be tested in exponential time with respect to the size of the program.

Exercise 12.24  Which of the properties in Exercise 12.12 are expressible in datalog≠?
Exercise 12.25 Prove Proposition 12.5.4.
Exercise 12.26  Prove that containment of chain datalog programs is undecidable. Hint: Modify the proof of Theorem 12.5.5 by using, for each b ∈ Σ, a relation R_b such that R_b(x, y) iff R(x, b, y).
Exercise 12.27  Prove that containment does not imply uniform containment by exhibiting two programs P, Q over the same edbs and with S as common idb such that P ⊑_S Q but not P ⊑ Q.
Exercise 12.28 (Uniform containment [CK86, Sag88])  Prove that uniform containment of two datalog programs is decidable.
Exercise 12.29 Prove that each nr-datalog program is bounded.
Exercise 12.30 [GMSV87, Var88] Prove Theorem 12.5.7. Hint: Reduce the halting problem
of Turing machines on an empty tape to boundedness of datalog programs. More precisely, have
the edb encode legal computations of a Turing machine on an empty tape, and have the program
verify the correctness of the encoding. Then show that the program is unbounded iff there are
unbounded computations of the machine on the empty tape.
Exercise 12.31 (Boundedness of chain programs)  Prove decidability of boundedness for chain programs. Hint: Reduce testing for boundedness to testing for finiteness of a context-free language.
Exercise 12.32  This exercise demonstrates that datalog is likely to be stronger than the positive first order extended by generalized transitive closure.

(a) [Coo74] Recall that a single rule program (sirup) is a datalog program with one nontrivial rule. Show that the sirup

    R(x) ← R(y), R(z), S(x, y, z)

is complete in ptime. (This has been called variously the graph accessibility problem and the blue-blooded water buffalo problem; a water buffalo is blue-blooded only if both of its parents are.)

(b) [KP86] Show that the in some sense simpler sirup

    R(x) ← R(y), R(z), T(y, x), T(x, z)

is complete in ptime.
(c) [Imm87b] The generalized transitive closure operator is defined on relations with arity 2n so that TC(R) is the output of the datalog program

    ans(x1, . . . , x_{2n}) ← R(x1, . . . , x_{2n})
    ans(x1, . . . , xn, z1, . . . , zn) ← R(x1, . . . , xn, y1, . . . , yn),
                                         ans(y1, . . . , yn, z1, . . . , zn)

Show that the positive first order extended with generalized transitive closure is in logspace.
13  Evaluation of Datalog

Alice:     I don't mean to sound naive, but isn't it awfully expensive to answer datalog queries?
Riccardo:  Not if you use the right bag of tricks . . .
Vittorio:  . . . and some magical wisdom.
Sergio:    Well, there is no real need for magic. We will see that the evaluation is much easier if the algorithm knows where it is going and takes advantage of this knowledge.
The introduction of datalog led to a flurry of research in optimization during the late 1980s and early 1990s. A variety of techniques emerged covering a range of different
approaches. These techniques are usually separated into two classes depending on whether
they focus on top-down or bottom-up evaluation. Another key dimension of the techniques
concerns whether they are based on direct evaluation or propose some compilation of the
query into a related query, which is subsequently evaluated using a direct technique.
This chapter provides a brief introduction to this broad family of heuristic techniques.
A representative sampling of such techniques is presented. Some are centered around
an approach known as Query-Subquery; these are top down and are based on direct
evaluation. Others, centered around an approach called magic set rewriting, are based
on an initial preprocessing of the datalog program before using a fairly direct bottom-up
evaluation strategy.
The advantage of top-down techniques is that selections that form part of the initial
query can be propagated into the rules as they are expanded. There is no direct way to take
advantage of this information in bottom-up evaluation, so it would seem that the bottom-
up technique is at a disadvantage with respect to optimization. A rather elegant conclusion
that has emerged from the research on datalog evaluation is that, surprisingly, there are
bottom-up techniques that have essentially the same running time as top-down techniques.
Exposition of this result is a main focus of this chapter.
Some of the evaluation techniques presented here are intricate, and our main emphasis
is on conveying the essential ideas they use. The discussion is centered around the pre-
sentation of the techniques in connection with a concrete running example. In the cases of
Query-Subquery and magic sets rewriting, we also informally describe how they can be ap-
plied in the general case. This is sufficient to give a precise understanding of the techniques
without becoming overwhelmed by notation. Proofs of the correctness of these techniques
are typically lengthy but straightforward and are left as exercises.
Figure 13.1: Instance I0 for the RSG example. Panel (a) lists the instance; panel (b) depicts it as a graph whose edges are labeled u (up), f (flat), and d (down).

    up   = {⟨a, e⟩, ⟨a, f⟩, ⟨f, m⟩, ⟨g, n⟩, ⟨h, n⟩, ⟨i, o⟩, ⟨j, o⟩}
    flat = {⟨g, f⟩, ⟨m, n⟩, ⟨m, o⟩, ⟨p, m⟩}
    down = {⟨l, f⟩, ⟨m, f⟩, ⟨g, b⟩, ⟨h, c⟩, ⟨i, d⟩, ⟨p, k⟩}
13.1 Seminaive Evaluation

The first stop on our tour of evaluation techniques is a strategy for improving the efficiency of the bottom-up technique described in Chapter 12. To illustrate this and the other techniques, we use as a running example the program Reverse-Same-Generation (RSG) given by

    rsg(x, y) ← flat(x, y)
    rsg(x, y) ← up(x, x1), rsg(y1, x1), down(y1, y)

and the sample instance I0 illustrated in Fig. 13.1. This is a fairly simple program, but it will allow us to present the main features of the various techniques presented throughout this chapter.
If the bottom-up algorithm of Chapter 12 is used to compute the value of rsg on input I0, the following values are obtained:

    level 0: ∅
    level 1: {⟨g, f⟩, ⟨m, n⟩, ⟨m, o⟩, ⟨p, m⟩}
    level 2: {level 1} ∪ {⟨a, b⟩, ⟨h, f⟩, ⟨i, f⟩, ⟨j, f⟩, ⟨f, k⟩}
    level 3: {level 2} ∪ {⟨a, c⟩, ⟨a, d⟩}
    level 4: {level 3}

at which point a fixpoint has been reached. It is clear that a considerable amount of redundant computation is done, because each layer recomputes all elements of the previous layer. This is a consequence of the monotonicity of the T_P operator for datalog programs P. This algorithm has been termed the naive algorithm for datalog evaluation. The central idea of the seminaive algorithm is to focus, to the extent possible, on the new facts generated at each level and thereby avoid recomputing the same facts.
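The level-by-level trace above can be replayed directly. Below is a small Python sketch (our own rendering, with relations as sets of pairs) of the naive algorithm on the instance I0 of Fig. 13.1.

```python
# Naive bottom-up evaluation of RSG on I_0: reapply the immediate-
# consequence operator to the whole rsg relation until nothing changes.

up   = {("a","e"),("a","f"),("f","m"),("g","n"),("h","n"),("i","o"),("j","o")}
flat = {("g","f"),("m","n"),("m","o"),("p","m")}
down = {("l","f"),("m","f"),("g","b"),("h","c"),("i","d"),("p","k")}

def T(rsg):
    """One application of T_RSG: exit rule plus the recursive rule."""
    out = set(flat)
    out |= {(x, y) for (x, x1) in up for (y1, u) in rsg if u == x1
                   for (v, y) in down if v == y1}
    return out

level, i = set(), 0
while True:
    nxt = T(level)
    if nxt == level:
        break
    level, i = nxt, i + 1
    print(f"level {i}: {sorted(level)}")
```

Each level reprints every previously derived pair, which is exactly the redundancy the seminaive algorithm removes.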
Consider the facts inferred using the second rule of RSG in the consecutive stages of the naive evaluation. At each stage, some new facts are inferred (until a fixpoint is reached). To infer a new fact at stage i + 1, one must use at least one fact newly derived at stage i. This is the main idea of seminaive evaluation. It is captured by the following version of RSG, called RSG':

    Δ^1_rsg(x, y) ← flat(x, y)
    Δ^{i+1}_rsg(x, y) ← up(x, x1), Δ^i_rsg(y1, x1), down(y1, y)

where an instance of the second rule is included for each i ≥ 1. Strictly speaking, this is not a datalog program because it has an infinite number of rules. On the other hand, it is not recursive.

Intuitively, Δ^i_rsg contains the facts in rsg newly inferred at the ith stage of the naive evaluation. To see this, we note a close relationship between the repeated applications of T_RSG and the values taken by the Δ^i_rsg. Let I be a fixed input instance. Then

    for i ≥ 0, let rsg^i = T^i_RSG(I)(rsg) (i.e., the value of rsg after i applications of T_RSG on I); and
    for i ≥ 1, let Δ^i_rsg = RSG'(I)(Δ^i_rsg) (i.e., the value of Δ^i_rsg when T_RSG' reaches a fixpoint on I).
It is easily verified for each i ≥ 1 that T^{i−1}_RSG'(I)(Δ^i_rsg) = ∅ and T^i_RSG'(I)(Δ^i_rsg) = Δ^i_rsg. Furthermore, for each i ≥ 0 we have

    rsg^{i+1} − rsg^i ⊆ Δ^{i+1}_rsg ⊆ rsg^{i+1}.

Therefore RSG(I)(rsg) = ∪_{i≥1} Δ^i_rsg. Furthermore, if j satisfies Δ^j_rsg ⊆ ∪_{i<j} Δ^i_rsg, then RSG(I)(rsg) = ∪_{i<j} Δ^i_rsg; that is, only j levels of RSG' need be computed to find RSG(I)(rsg). Importantly, bottom-up evaluation of RSG' typically involves much less redundant computation than direct bottom-up evaluation of RSG.
Continuing with the informal development, we introduce now two refinements that further reduce the amount of redundant computation. The first is based on the observation that when executing RSG', we do not always have Δ^{i+1}_rsg = rsg^{i+1} − rsg^i. Using I0, we have ⟨g, f⟩ ∈ Δ^2_rsg but not in rsg^2 − rsg^1. This suggests that the efficiency can be further improved by using rsg^i − rsg^{i−1} in place of Δ^i_rsg in the body of the second rule of RSG'. Using a pidgin language that combines both datalog and imperative commands, the new version RSG'' is given by

    Δ^1_rsg(x, y) ← flat(x, y)
    rsg^1 := Δ^1_rsg

    temp^{i+1}_rsg(x, y) ← up(x, x1), Δ^i_rsg(y1, x1), down(y1, y)
    Δ^{i+1}_rsg := temp^{i+1}_rsg − rsg^i
    rsg^{i+1} := rsg^i ∪ Δ^{i+1}_rsg

(where an instance of the second family of commands is included for each i ≥ 1).
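RSG'' transcribes almost line for line into executable form. The following Python sketch (our own; relations are sets of pairs) runs it on I0 and records each Δ^i_rsg; note that the subtraction keeps ⟨g, f⟩ out of the second delta, a tuple RSG' would have rederived.

```python
# Seminaive evaluation of RSG on I_0, following RSG'': join only the
# previous delta, then subtract everything already derived.

up   = {("a","e"),("a","f"),("f","m"),("g","n"),("h","n"),("i","o"),("j","o")}
flat = {("g","f"),("m","n"),("m","o"),("p","m")}
down = {("l","f"),("m","f"),("g","b"),("h","c"),("i","d"),("p","k")}

deltas = [set(flat)]              # Delta^1_rsg
rsg = set(flat)                   # rsg^1
while deltas[-1]:
    d = deltas[-1]
    temp = {(x, y) for (x, x1) in up for (y1, u) in d if u == x1
                   for (v, y) in down if v == y1}
    deltas.append(temp - rsg)     # Delta^{i+1} := temp^{i+1} - rsg^i
    rsg |= deltas[-1]             # rsg^{i+1}   := rsg^i U Delta^{i+1}

print(sorted(deltas[1]))          # Delta^2: <g,f> is not rederived
print(len(rsg))                   # 11 tuples in the fixpoint
```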
The second improvement to reduce redundant computation is useful when a given idb predicate occurs twice in the same rule. To illustrate, consider the nonlinear version of the ancestor program:

    anc(x, y) ← par(x, y)
    anc(x, y) ← anc(x, z), anc(z, y)

A seminaive version of this is

    Δ^1_anc(x, y) ← par(x, y)
    anc^1 := Δ^1_anc

    temp^{i+1}_anc(x, y) ← Δ^i_anc(x, z), anc(z, y)
    temp^{i+1}_anc(x, y) ← anc(x, z), Δ^i_anc(z, y)
    Δ^{i+1}_anc := temp^{i+1}_anc − anc^i
    anc^{i+1} := anc^i ∪ Δ^{i+1}_anc

Note here that both Δ^i_anc and anc^i are needed to ensure that all new facts in the next level are obtained.
Consider now an input instance consisting of par(1, 2), par(2, 3). Then we have

    Δ^1_anc = {⟨1, 2⟩, ⟨2, 3⟩}
    anc^1 = {⟨1, 2⟩, ⟨2, 3⟩}
    Δ^2_anc = {⟨1, 3⟩}

Furthermore, both of the rules for temp^2_anc will compute the join of tuples ⟨1, 2⟩ and ⟨2, 3⟩, and so we have a redundant computation of ⟨1, 3⟩. Examples are easily constructed where this kind of redundancy occurs at an arbitrary level i > 0 (see Exercise 13.2).
An approach for preventing this kind of redundancy is to replace the two rules for temp^{i+1} by

    temp^{i+1}(x, y) ← Δ^i_anc(x, z), anc^{i−1}(z, y)
    temp^{i+1}(x, y) ← anc^i(x, z), Δ^i_anc(z, y)

This approach is adopted below.
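The asymmetric pair of rules can be checked on a small chain. This Python sketch (ours) runs the duplicate-avoiding seminaive loop for the nonlinear ancestor program; the first temp rule joins the delta with the previous level anc^{i−1}, so a join of two stage-i facts is performed by only one of the two rules.

```python
# Duplicate-avoiding seminaive evaluation of nonlinear ancestor:
#   temp^{i+1}(x, y) <- Delta^i(x, z), anc^{i-1}(z, y)
#   temp^{i+1}(x, y) <- anc^i(x, z),   Delta^i(z, y)

par = {(1, 2), (2, 3), (3, 4)}

delta = set(par)                  # Delta^1_anc
prev, anc = set(), set(par)       # anc^0 and anc^1
while delta:
    temp  = {(x, y) for (x, z) in delta for (z2, y) in prev if z == z2}
    temp |= {(x, y) for (x, z) in anc for (z2, y) in delta if z == z2}
    prev = set(anc)               # becomes anc^i for the next round
    delta = temp - anc            # Delta^{i+1}_anc
    anc |= delta                  # anc^{i+1}

print(sorted(anc))                # all pairs (i, j) with i < j
```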
We now present the seminaive algorithm for the general case. Let P be a datalog program over edb R and idb T. Consider a rule

    S(u) ← R1(v1), . . . , Rn(vn), T1(w1), . . . , Tm(wm)

in P, where the Rk's are edb predicates and the Tj's are idb predicates. Construct for each j ∈ [1, m] and i ≥ 1 the rule

    temp^{i+1}_S(u) ← R1(v1), . . . , Rn(vn),
                      T^i_1(w1), . . . , T^i_{j−1}(w_{j−1}), Δ^i_{Tj}(wj),
                      T^{i−1}_{j+1}(w_{j+1}), . . . , T^{i−1}_m(wm).
Let P^i_S represent the set of all i-level rules of this form constructed for the idb predicate S (i.e., the rules for temp^{i+1}_S, j in [1, m]).

Suppose now that T1, . . . , Tl is a listing of the idb predicates of P that occur in the body of a rule defining S. We write

    P^i_S(I, T^{i−1}_1, . . . , T^{i−1}_l, T^i_1, . . . , T^i_l, Δ^i_{T1}, . . . , Δ^i_{Tl})

to denote the set of tuples that result from applying the rules in P^i_S to given values for input instance I and for the T^{i−1}_j, T^i_j, and Δ^i_{Tj}.
We now have the following:

Algorithm 13.1.1 (Basic Seminaive Algorithm)

Input: Datalog program P and input instance I
Output: P(I)

1. Set P* to be the rules in P with no idb predicate in the body;
2. S^0 := ∅, for each idb predicate S;
3. Δ^1_S := P*(I)(S), for each idb predicate S;
4. i := 1;
5. do begin
       for each idb predicate S, where T1, . . . , Tl are the idb predicates involved in rules defining S, begin
           S^i := S^{i−1} ∪ Δ^i_S;
           Δ^{i+1}_S := P^i_S(I, T^{i−1}_1, . . . , T^{i−1}_l, T^i_1, . . . , T^i_l, Δ^i_{T1}, . . . , Δ^i_{Tl}) − S^i;
       end;
       i := i + 1
   end until Δ^i_S = ∅ for each idb predicate S.
6. S := S^i, for each idb predicate S.
The correctness of this algorithm is demonstrated in Exercise 13.3. However, it is still doing a lot of unnecessary work on some programs. We now analyze the structure of datalog programs to develop an improved version of the seminaive algorithm. It turns out that this analysis, with simple control of the computation, allows us to know in advance which predicates are likely to grow at each iteration and which are not, either because they are already saturated or because they are not yet affected by the computation.

Let P be a datalog program. Form the precedence graph G_P for P as follows: Use the idb predicates in P as the nodes and include edge (R, R') if there is a rule with head predicate R' in which R occurs in the body. P is recursive if G_P has a directed cycle. Two predicates R and R' are mutually recursive if R = R' or R and R' participate in the same cycle of G_P. Mutual recursion is an equivalence relation on the idb predicates of P, where each equivalence class corresponds to a strongly connected component of G_P. A rule of P is recursive if the body involves a predicate that is mutually recursive with the head.
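Computing the mutual-recursion classes is a straightforward reachability exercise over G_P. The sketch below (our own Python, with rules reduced to head/body-predicate pairs) does it for the program of Exercise 12.8, whose only edb predicate is Q.

```python
# Build the precedence graph G_P and the mutual-recursion classes for
#   R <- Q, S     S <- Q, T     T <- Q, S
# (only the predicate names matter for G_P, so variables are omitted).

rules = [("R", ["Q", "S"]), ("S", ["Q", "T"]), ("T", ["Q", "S"])]
idb = {head for head, _ in rules}
edges = {(b, head) for head, body in rules for b in body if b in idb}

def reaches(a, b):
    """Is there a nonempty directed path from a to b in G_P?"""
    seen, todo = set(), [a]
    while todo:
        n = todo.pop()
        for x, y in edges:
            if x == n and y not in seen:
                seen.add(y)
                todo.append(y)
    return b in seen

# R and R' are mutually recursive iff R = R' or they lie on a common cycle.
classes = {r: frozenset(s for s in idb
                        if s == r or (reaches(r, s) and reaches(s, r)))
           for r in idb}
print(classes)    # S and T share a class; R is in a class by itself
```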
We now have the following:

Algorithm 13.1.2 (Improved Seminaive Algorithm)

Input: Datalog program P and edb instance I
Output: P(I)

1. Determine the equivalence classes of idb(P) under mutual recursion.
2. Construct a listing [R1], . . . , [Rn] of the equivalence classes, according to a topological sort of G_P (i.e., so that for each pair i < j there is no path in G_P from Rj to Ri).
3. For i = 1 to n do
       Apply the Basic Seminaive Algorithm to compute the values of predicates in [Ri], treating all predicates in [Rj], j < i, as edb predicates.
The correctness of this algorithm is left as Exercise 13.4.
Linear Datalog
We conclude this discussion of the seminaive approach by introducing a special class of
programs.
Let P be a program. A rule in P with head relation R is linear if there is at most
one atom in the body of the rule whose predicate is mutually recursive with R. P is linear
if each rule in P is linear. We now show how the Improved Seminaive Algorithm can be
simplified for such programs.
Suppose that P is a linear program, and

ρ : R(u) ← T_1(v_1), . . . , T_n(v_n)

is a rule in P, where T_j is mutually recursive with R. Associate with this the rule

Δ^{i+1}_R(u) ← T_1(v_1), . . . , Δ^i_{T_j}(v_j), . . . , T_n(v_n).

Note that this is the only rule that will be associated by the Improved Seminaive Algorithm with ρ. Thus, given an equivalence class [T_k] of mutually recursive predicates of P, the rules for predicates S in [T_k] use only the Δ^i_S, but not the S^i. In contrast, as seen earlier, both the Δ^i_S and S^i must be used in nonlinear programs.
13.2 Top-Down Techniques
Consider the RSG program from the previous section, augmented with a selection-based
query:
rsg(x, y) ← flat(x, y)
rsg(x, y) ← up(x, x1), rsg(y1, x1), down(y1, y)
query(y) ← rsg(a, y)
where a is a constant. This program will be called the RSG query. Suppose that seminaive
evaluation is used. Then each pair of rsg will be produced, including those that are not
used to derive any element of query. For example, using I_0 of Fig. 13.1 as input, fact
rsg(f, k) will be produced but not used. A primary motivation for the top-down approaches
to datalog query evaluation is to avoid, to the extent possible, the production of tuples that
are not needed to derive any answer tuples.
For this discussion, we define a datalog query to be a pair (P, q), where P is a datalog
program and q is a datalog rule using relations of P in its body and the new relation query
in its head. We generally assume that there is only one rule dening the predicate query,
and it has the form
query(u) ← R(v)
for some idb predicate R.
A fact is relevant to query (P, q) on input I if there is a proof tree for query in which
the fact occurs. A straightforward criterion for improving the efficiency of any datalog
evaluation scheme is to infer only relevant facts. The evaluation procedures developed in
the remainder of this chapter attempt to satisfy this criterion; but, as will be seen, they do
not do so perfectly.
The top-down approaches use natural heuristics to focus attention on relevant facts. In
particular, they use the framework provided by SLD resolution. The starting point for these
algorithms (namely, the query to be answered) often includes constants; these have the
effect of restricting the search for derivation trees and thus the set of facts produced. In the
context of databases without function symbols, the top-down datalog evaluation algorithms
can generally be forced to terminate on all inputs, even when the corresponding SLD-
resolution algorithm does not. In this section, we focus primarily on the query-subquery
(QSQ) framework.
There are four basic elements of this framework:
1. Use the general framework of SLD resolution, but do it set-at-a-time. This permits
the use of optimized versions of relational algebra operations.
2. Beginning with the constants in the original query, push constants from goals to
subgoals, in a manner analogous to pushing selections into joins.
3. Use the technique of sideways information passing (see Chapter 6) to pass
constant binding information from one atom to the next in subgoals.
4. Use an efficient global flow-of-control strategy.
Adornments and Subqueries
Recall the RSG query given earlier. Consider an SLD tree for it. The child of the root would
be rsg(a, y). Speaking intuitively, not all values for rsg are requested, but rather only those
with first coordinate a. More generally, we are interested in finding derivations for rsg where the first coordinate is bound and the second coordinate is free. This is denoted by the expression rsg^{bf}, where the superscript bf is called an adornment.
The next layer of the SLD tree will have a node holding flat(a, y) and a node holding up(a, x1), rsg(y1, x1), down(y1, y). Answers generated for the first of these nodes are given by π_2(σ_{1=a}(flat)). Answers for the other node can be generated by a left-to-right evaluation. First the set of possible values for x1 is J = π_2(σ_{1=a}(up)). Next the possible values for y1 are given by {y1 | ⟨y1, x1⟩ ∈ rsg and x1 ∈ J} (i.e., the first coordinate values of rsg stemming from second coordinate values in J). More generally, then, this calls for an evaluation of rsg^{fb}, where the second coordinate values are bound by J. Finally, given y1 values, these can be used with down to obtain y values (i.e., answers to the query).
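The left-to-right evaluation just described can be written out as relational-algebra-style set comprehensions. The instance below is a small invented one (not the book's Fig. 13.1), and the rsg facts are simply assumed to be known already.

```python
# Toy instance (hypothetical values, not Fig. 13.1).
up = {("a", "e"), ("a", "f")}
down = {("g", "b"), ("g", "f")}
rsg = {("g", "e")}            # pretend these rsg facts are already derived

J = {x1 for (x, x1) in up if x == "a"}         # pi_2(sigma_{1=a}(up))
Y1 = {y1 for (y1, x1) in rsg if x1 in J}       # rsg^{fb} values bound by J
answers = {y for (y1, y) in down if y1 in Y1}  # y values obtained via down
print(J, Y1, answers)
```

Each comprehension passes the bindings produced by the previous atom sideways into the next one.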
As suggested by this discussion, a top-down evaluation of a query in which constants occur can be broken into a family of subqueries having the form (R^γ, J), where γ is an adornment for idb predicate R, and J is a set of tuples that give values for the columns bound by γ. Expressions of the form (R^γ, J) are called subqueries. If the RSG query were applied to the instance of Fig. 13.1, the first subquery generated would be (rsg^{fb}, {⟨e⟩, ⟨f⟩}). As we shall see, the QSQ framework is based on a systematic evaluation of subqueries.
Let P be a datalog program and I an input instance. Suppose that R is an idb predicate and γ is an adornment for R (i.e., a string of b's and f's having length the arity of R). Then bound(R, γ) denotes the coordinates of R bound in γ. Let t be a tuple over bound(R, γ). Then a completion for t in R^γ is a tuple s such that s[bound(R, γ)] = t and s ∈ P(I)(R). The answer to a subquery (R^γ, J) over I is the set of all completions of all tuples in J.


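These definitions translate almost verbatim into code. In the sketch below, P(I)(R) is given simply as a set of tuples, and an adornment is a string over {'b', 'f'}.

```python
# Completions and subquery answers, directly from the definitions.

def bound_positions(adornment):
    """Coordinates of R bound in the adornment (0-indexed)."""
    return [i for i, c in enumerate(adornment) if c == "b"]

def completions(r_tuples, adornment, t):
    """Tuples s of P(I)(R) with s[bound(R, adornment)] == t."""
    pos = bound_positions(adornment)
    return {s for s in r_tuples if tuple(s[i] for i in pos) == tuple(t)}

def subquery_answer(r_tuples, adornment, J):
    """Union of the completions of all tuples in J."""
    out = set()
    for t in J:
        out |= completions(r_tuples, adornment, t)
    return out

r = {(1, 2), (1, 3), (4, 5)}
print(completions(r, "bf", (1,)))        # tuples agreeing with t on coord 1
print(subquery_answer(r, "fb", {(5,)}))  # second coordinate bound to 5
```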
The use of adornments within a rule body is a generalization of the technique of sideways information passing discussed in Chapter 6. Consider the rule

(*) R(x, y, z) ← R_1(x, u, v), R_2(u, w, w, z), R_3(v, w, y, a).

Suppose that a subquery involving R^{bfb} is invoked. Assuming a left-to-right evaluation, this will lead to subqueries involving R_1^{bff}, R_2^{bffb}, and R_3^{bbfb}. We sometimes rewrite the rule as

R^{bfb}(x, y, z) ← R_1^{bff}(x, u, v), R_2^{bffb}(u, w, w, z), R_3^{bbfb}(v, w, y, a)

to emphasize the adornments. This is an example of an adorned rule. As we shall see, the adornments of idb predicates in rule bodies shall be used to guide evaluations of queries and subqueries. It is common to omit the adornments of edb predicates.
The general algorithm for adorning a rule, given an adornment for the head and an
ordering of the rule body, is as follows: (1) All occurrences of each bound variable in
the rule head are bound, (2) all occurrences of constants are bound, and (3) if a variable
x occurs in the rule body, then all occurrences of x in subsequent literals are bound.
A different ordering of the rule body would yield different adornments. In general, we
permit different orderings of rule bodies for different adornments of a given rule head. (A
generalization of this technique is considered in Exercise 13.19.)
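The three-part adornment procedure can be sketched as follows. The representation is an assumption made for illustration: variables are lowercase strings, and anything else (here a tagged tuple) is a constant.

```python
# Sketch of the rule-adornment procedure, left to right.

def is_var(a):
    return isinstance(a, str) and a[0].islower()

def adorn_body(head_bound, body):
    bound = set(head_bound)             # (1) bound head variables
    adorned = []
    for pred, args in body:
        # (2) constants are bound; earlier-bound variables are bound
        ad = "".join("b" if (not is_var(a) or a in bound) else "f"
                     for a in args)
        adorned.append((pred, ad))
        # (3) occurrences in subsequent literals are now bound
        bound.update(a for a in args if is_var(a))
    return adorned

# Rule (*): R(x,y,z) <- R1(x,u,v), R2(u,w,w,z), R3(v,w,y,a), head R^bfb.
body = [("R1", ("x", "u", "v")),
        ("R2", ("u", "w", "w", "z")),
        ("R3", ("v", "w", "y", ("const", "a")))]
print(adorn_body({"x", "z"}, body))
# [('R1', 'bff'), ('R2', 'bffb'), ('R3', 'bbfb')]
```

Note that the repeated w inside R_2 stays free on both occurrences, since variables of an atom are added to the bound set only after the atom is processed; this matches the adornments given for rule (*).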
The definition of adorned rule also applies to situations in which there are repeated variables or constants in the rule head (see Exercise 13.9). However, adornments do not
capture all of the relevant information that can arise as the result of repeated variables
or constants that occur in idb predicates in rule bodies. Mechanisms for doing this are
discussed in Section 13.4.
Supplementary Relations and QSQ Templates
A key component of the QSQ framework is the use of QSQ templates, which store appropriate information during intermediate stages of an evaluation. Consider again the preceding rule (*), and imagine attempting to evaluate the subquery (R^{bfb}, J). This will result in calls to the generalized queries (R_1^{bff}, π_1(J)), (R_2^{bffb}, K), and (R_3^{bbfb}, L) for some relations K and L that depend on the evaluation of the preceding queries. Importantly, note that relation K relies on values passed from both J and R_1, and L relies on values passed from R_1 and R_2. A QSQ template provides data structures that will remember all of the values needed during a left-to-right evaluation of a subquery.
To do this, QSQ templates rely on supplementary relations. A total of n + 1 supplementary relations are associated to a rule body with n atoms. For example, the supplementary relations sup_0, . . . , sup_3 for the rule (*) with head adorned by R^{bfb} are

R^{bfb}(x, y, z) ← R_1^{bff}(x, u, v), R_2^{bffb}(u, w, w, z), R_3^{bbfb}(v, w, y, a)

with sup_0[x, z], sup_1[x, z, u, v], sup_2[x, z, v, w], and sup_3[x, y, z].
Note that variables serve as attribute names in the supplementary relations. Speaking in-
tuitively, the body of a rule may be viewed as a process that takes as input tuples over the
bound attributes of the head and produces as output tuples over the variables (bound and
free) of the head. This determines the attributes of the rst and last supplementary relations.
In addition, a variable (i.e., an attribute name) is in some supplementary relation if it has
been bound by some previous literal and if it is needed in the future by some subsequent
literal or in the result.
More formally, for a rule body with atoms A_1, . . . , A_n, the set of variables used as attribute names for the i-th supplementary relation is determined as follows:

For the 0th (i.e., zeroth) supplementary relation, the attribute set is the set X_0 of bound variables of the rule head; and for the last supplementary relation, the attribute set is the set X_n of variables in the rule head.

For i ∈ [1, n − 1], the attribute set of the i-th supplementary relation is the set X_i of variables that occur both before X_i (i.e., occur in X_0, A_1, . . . , A_i) and after X_i (i.e., occur in A_{i+1}, . . . , A_n, X_n).
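Computed directly from this definition, the attribute sets for rule (*) adorned by R^{bfb} come out as sup_0[x, z], sup_1[x, z, u, v], sup_2[x, z, v, w], sup_3[x, y, z]. A sketch, with each atom given just as its set of variables:

```python
# Attribute sets X_0, ..., X_n of the supplementary relations.

def supplementary_attrs(head_vars, head_bound, body_atoms):
    n = len(body_atoms)
    X = [set(head_bound)]                                 # X_0
    for i in range(1, n):
        before = set(head_bound).union(*body_atoms[:i])   # X_0, A_1..A_i
        after = set(head_vars).union(*body_atoms[i:])     # A_{i+1}..A_n, X_n
        X.append(before & after)                          # X_i
    X.append(set(head_vars))                              # X_n
    return X

# Rule (*), constants omitted from the variable sets.
atoms = [{"x", "u", "v"}, {"u", "w", "z"}, {"v", "w", "y"}]
X = supplementary_attrs({"x", "y", "z"}, {"x", "z"}, atoms)
print([sorted(s) for s in X])
# [['x', 'z'], ['u', 'v', 'x', 'z'], ['v', 'w', 'x', 'z'], ['x', 'y', 'z']]
```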
The QSQ template for an adorned rule is the sequence (sup_0, . . . , sup_n) of relation
schemas for the supplementary relations of the rule. During the process of QSQ query
evaluation, relation instances are assigned to these schemas; typically these instances
repeatedly acquire new tuples as the algorithm runs. Figure 13.2 shows the use of QSQ
templates in connection with the RSG query.
[Figure 13.2: Illustration of QSQ framework. The figure shows the QSQ templates for the four adorned rules of the RSG query: sup^1_0[x], sup^1_1[x, y] for rule (1); sup^2_0[y], sup^2_1[x, y] for rule (2); sup^3_0[x], sup^3_1[x, x1], sup^3_2[x, y1], sup^3_3[x, y] for rule (3); and sup^4_0[y], sup^4_1[y, y1], sup^4_2[y, x1], sup^4_3[x, y] for rule (4); together with the contents of input_rsg^{bf}, input_rsg^{fb}, ans_rsg^{bf}, and ans_rsg^{fb} after several steps of the evaluation described in Example 13.2.1 (e.g., input_rsg^{bf} ⊇ {⟨a⟩} and input_rsg^{fb} ⊇ {⟨e⟩, ⟨f⟩}).]
The Kernel of QSQ Evaluation
The key components of QSQ evaluation are as follows. Let (P, q) be a datalog query and
let I be an edb instance. Speaking conceptually, QSQ evaluation begins by constructing
an adorned rule for each adornment of each idb predicate in P and for the query q. In
practice, the construction of these adorned rules can be lazy (i.e., they can be constructed
only if needed during execution of the algorithm). Let (P^{ad}, q^{ad}) denote the result of this transformation.
The relevant adorned rules for the RSG query are as follows:

1. rsg^{bf}(x, y) ← flat(x, y)
2. rsg^{fb}(x, y) ← flat(x, y)
3. rsg^{bf}(x, y) ← up(x, x1), rsg^{fb}(y1, x1), down(y1, y)
4. rsg^{fb}(x, y) ← down(y1, y), rsg^{bf}(y1, x1), up(x, x1).
Note that in the fourth rule, the literals of the body are ordered so that the binding of y in
down can be passed via y1 to rsg and via x1 to up.
A QSQ template is constructed for each relevant adorned rule. We denote the j-th (counting from 0) supplementary relation of the i-th adorned rule as sup^i_j. In addition, the following relations are needed and will serve as variables in the QSQ evaluation algorithm:
(a) for each idb predicate R and relevant adornment γ, the variable ans_R^γ, with same arity as R;
(b) for each idb predicate R and relevant adornment γ, the variable input_R^γ with same arity as bound(R, γ) (i.e., the number of b's occurring in γ); and
(c) for each supplementary relation sup^i_j, the variable sup^i_j.

Intuitively, input_R^γ will be used to form subqueries (R^γ, input_R^γ). The completion of tuples in input_R^γ will go to ans_R^γ. Thus ans_R^γ will hold tuples that are in P(I)(R) and were generated from subqueries based on R^γ.
A QSQ algorithm begins with the empty set for each of the aforementioned relations. The query is then used to initialize the process. For example, the rule

query(y) ← rsg(a, y)

gives the initial value of {⟨a⟩} to input_rsg^{bf}. In general, this gives rise to the subquery (R^γ, {t}), where t is constructed using the set of constants in the initial query.
There are essentially four kinds of steps in the execution. Different possible orderings for these steps will be considered. The first of these is used to initialize rules.

(A) Begin evaluation of a rule: This step can be taken whenever there is a rule with head predicate R^γ and there are new tuples in a variable input_R^γ that have not yet been processed for this rule. The step is to add the new tuples to the 0th supplementary relation for this rule. However, only new tuples that unify with the head of the rule are added to the supplementary relation. A new tuple in input_R^γ might fail to unify with the head of a rule defining R if there are repeated variables or constants in the rule head (see Exercise 13.9).
New tuples are generated in supplementary relations sup^i_j in two ways: Either some new tuples have been obtained for sup^i_{j−1} (case B); or some new tuples have been obtained for the idb predicate occurring between sup^i_{j−1} and sup^i_j (case C).

(B) Pass new tuples from one supplementary relation to the next: This step can be taken whenever there is a set T of new tuples in a supplementary variable sup^i_{j−1} that have not yet been processed, and sup^i_{j−1} is not the last supplementary relation of the corresponding rule. Suppose that A_j is the atom in the rule immediately following sup^i_{j−1}.
Two cases arise:

(i) A_j is R(u) for some edb predicate R. Then a combination of joins and projections on R and T is used to determine the appropriate tuples to be added to sup^i_j.

(ii) A_j is R^γ(u) for some idb predicate R. Note that each of the bound variables in γ occurs in sup^i_{j−1}. Two actions are now taken.

(a) A combination of joins and projections on ans_R^γ (the current value for R) and T is used to determine the set T′ of tuples to be added to sup^i_j.

(b) The tuples in T[bound(R, γ)] − input_R^γ are added to input_R^γ.
(C) Use new idb tuples to generate new supplementary relation tuples: This step is similar to the previous one but is applied when new tuples are added to one of the idb relation variables ans_R^γ. In particular, suppose that some atom A_j with predicate R^γ occurs in some rule, with surrounding supplementary variables sup^i_{j−1} and sup^i_j. In this case, use join and projection on all tuples in sup^i_{j−1} and the new tuples of ans_R^γ to create new tuples to be added to sup^i_j.
(D) Process tuples in the final supplementary relation of a rule: This step is used to generate tuples corresponding to the output of rules. It can be applied when there are new tuples in the final supplementary variable sup^i_n of a rule. Suppose that the rule head predicate is R^γ. Add the new tuples in sup^i_n to ans_R^γ.
Example 13.2.1 Figure 13.2 illustrates the data structures and scratch paper relations
used in the QSQ algorithm, in connection with the RSG query, as applied to the instance of
Fig. 13.1. Recall the adorned version of the RSG query presented on page 321. The QSQ
templates for these are shown in Fig. 13.2. Finally, the scratch paper relations for the input-
and ans-variables are shown.
Figure 13.2 shows the contents of the relation variables after several steps of the QSQ approach have been applied. The procedure begins with the insertion of ⟨a⟩ into input_rsg^{bf}; this corresponds to the rule

query(y) ← rsg(a, y)

Applications of step (A) place ⟨a⟩ into the supplementary variables sup^1_0 and sup^3_0. Step (B.i) then yields ⟨a, e⟩ and ⟨a, f⟩ in sup^3_1. Because ans_rsg^{fb} is empty at this point, step (B.ii.a) does not yield any tuples for sup^3_2. However, step (B.ii.b) is used to insert ⟨e⟩ and ⟨f⟩ into input_rsg^{fb}. Application of steps (B) and (D) on the template of the second rule yields ⟨g, f⟩ in ans_rsg^{fb}. Application of steps (C), (B), and (D) on the template of the third rule now yields the first entry in ans_rsg^{bf}. The reader is invited to extend the evaluation to its conclusion (see Exercise 13.10). The answer is obtained by applying σ_{1=a} to the final contents of ans_rsg^{bf}.
Global Control Strategies
We have now described all of the basic building blocks of the QSQ approach: the use of
QSQ templates to perform information passing both into rules and sideways through rule
bodies, and the three classes of relations used. A variety of global control strategies can
be used for the QSQ approach. The most basic strategy is stated simply: Apply steps (A)
through (D) until a fixpoint is reached. The following can be shown (see Exercise 13.12):
Theorem 13.2.2 Let (P, q) be a datalog query. For each input I, any evaluation of QSQ on (P^{ad}, q^{ad}) yields the answer of (P, q) on I.
We now present a more specific algorithm based on the QSQ framework. This algorithm, called QSQ Recursive (QSQR), is based on a recursive strategy. To understand the central intuition behind QSQR, suppose that step (B) described earlier is to be performed, passing from supplementary relation sup^i_{j−1} across an idb predicate R^γ to supplementary relation sup^i_j. This may lead to the introduction of new tuples into sup^i_j by step (B.ii.a) and to the introduction of new tuples into input_R^γ by step (B.ii.b). The essence of QSQR is that it now performs a recursive call to determine the R^γ values corresponding to the new tuples added to input_R^γ, before applying step (B) or (D) to the new tuples placed into sup^i_j.
We present QSQR in two steps: first a subroutine and then the recursive algorithm itself. During processing in QSQR, the global state includes values for ans_R^γ and input_R^γ for each idb predicate R and relevant adornment γ. However, the supplementary relations are not global; local copies of the supplementary relations are maintained by each call of the subroutine.
Subroutine Process subquery on one rule

Input: A rule for adorned predicate R^γ, input instance I, a QSQR state (i.e., set of values for the input- and ans-variables), and a set T ⊆ input_R^γ. (Intuitively, the tuples in T have not been considered with this rule yet.)

Action:

1. Remove from T all tuples that do not unify with (the appropriate coordinates of) the head of the rule.
2. Set sup_0 := T. [This is step (A) for the tuples in T.]
3. Proceed sideways across the body A_1, . . . , A_n of the rule to the final supplementary relation sup_n as follows:
For each atom A_j:
(a) If A_j has edb predicate R′, then apply step (B.i) to populate sup_j.
(b) If A_j has idb predicate R′^{γ′}, then apply step (B.ii) as follows:
(i) Set S := sup_{j−1}[bound(R′, γ′)] − input_R′^{γ′}.
(ii) Set input_R′^{γ′} := input_R′^{γ′} ∪ S. [This is step (B.ii.b).]
(iii) (Recursively) call algorithm QSQR on the query (R′^{γ′}, S). [This has the effect of invoking step (A) and its consequences for the tuples in S.]
(iv) Use sup_{j−1} and the current value of global variable ans_R′^{γ′} to populate sup_j. [This includes steps (B.ii.a) and (C).]
4. Add the tuples produced for sup_n into the global variable ans_R^γ. [This is step (D).]
The main algorithm is given by the following:

Algorithm 13.2.3 (QSQR)

Input: A query of the form (R^γ, T), input instance I, and a QSQR state (i.e., set of values for the input- and ans-variables).

Procedure:

1. Repeat until no new tuples are added to any global variable:
Call the subroutine to process subquery (R^γ, T) on each rule defining R.
Suppose that we are given the query

query(u) ← R(v)

Let γ be the adornment of R corresponding to v, and let T be the singleton relation corresponding to the constants in v. To find the answer to the query, the QSQR algorithm is invoked with input (R^γ, T) and the global state where input_R^γ = T and all other input- and ans-variables are empty. For example, in the case of the rsg program, the algorithm is first called with argument (rsg^{bf}, {⟨a⟩}), and in the global state input_rsg^{bf} = {⟨a⟩}. The answer to the query is obtained by performing a selection and projection on the final value of ans_R^γ.
It is straightforward to show that QSQR is correct (Exercise 13.12).
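As a concrete, much-simplified illustration of the recursive control, the following sketch hand-codes QSQR for the four adorned RSG rules. The tiny instance is invented for the example; input_ and ans_ sets form the global state, supplementary relations are local comprehension results, and a recursive subquery is issued only for genuinely new input tuples, which guarantees termination.

```python
# Hypothetical miniature of QSQR, specialized to the adorned RSG rules.

def qsqr_rsg(flat, up, down, a):
    state = {"input_bf": {(a,)}, "input_fb": set(),
             "ans_bf": set(), "ans_fb": set()}
    size = lambda: sum(len(v) for v in state.values())

    def solve_bf(T):
        # rule 1: rsg^bf(x,y) <- flat(x,y)
        state["ans_bf"] |= {(x, y) for (x,) in T for (x2, y) in flat if x == x2}
        # rule 3: rsg^bf(x,y) <- up(x,x1), rsg^fb(y1,x1), down(y1,y)
        sup1 = {(x, x1) for (x,) in T for (x2, x1) in up if x == x2}
        new = {(x1,) for (x, x1) in sup1} - state["input_fb"]
        state["input_fb"] |= new
        if new:
            solve_fb(new)                     # recursive subquery
        sup2 = {(x, y1) for (x, x1) in sup1
                for (y1, x2) in state["ans_fb"] if x1 == x2}
        state["ans_bf"] |= {(x, y) for (x, y1) in sup2
                            for (y2, y) in down if y1 == y2}

    def solve_fb(T):
        # rule 2: rsg^fb(x,y) <- flat(x,y)
        state["ans_fb"] |= {(x, y) for (y,) in T for (x, y2) in flat if y == y2}
        # rule 4: rsg^fb(x,y) <- down(y1,y), rsg^bf(y1,x1), up(x,x1)
        sup1 = {(y, y1) for (y,) in T for (y1, y2) in down if y == y2}
        new = {(y1,) for (y, y1) in sup1} - state["input_bf"]
        state["input_bf"] |= new
        if new:
            solve_bf(new)                     # recursive subquery
        sup2 = {(y, x1) for (y, y1) in sup1
                for (y2, x1) in state["ans_bf"] if y1 == y2}
        state["ans_fb"] |= {(x, y) for (y, x1) in sup2
                            for (x, x2) in up if x1 == x2}

    while True:                               # repeat until no growth
        before = size()
        solve_bf(state["input_bf"])
        if size() == before:
            break
    return {y for (x, y) in state["ans_bf"] if x == a}

print(qsqr_rsg({("g", "e")}, {("a", "e")}, {("g", "b")}, "a"))  # → {'b'}
```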
13.3 Magic
An exciting development in the field of datalog evaluation is the emergence of techniques for bottom-up evaluation whose performance rivals the efficiency of the top-down techniques. This family of techniques, which has come to be known as magic set techniques, simulates the pushing of selections that occurs in top-down approaches. There are close connections between the magic set techniques and the QSQ algorithm. The magic set technique presented in this section simulates the QSQ algorithm, using a datalog program that is evaluated bottom up. As we shall see, the magic sets are basically those sets of tuples stored in the relations input_R^γ and sup^i_j of the QSQ algorithm. Given a datalog query (P, q), the magic set approach transforms it into a new query (P^m, q^m) that has two important properties: (1) It computes the same answer as (P, q), and (2) when evaluated using a bottom-up technique, it produces only the set of facts produced by top-down approaches
rsg^{bf}(x, y) ← input_rsg^{bf}(x), flat(x, y) (s1.1)
rsg^{fb}(x, y) ← input_rsg^{fb}(y), flat(x, y) (s2.1)
sup^3_1(x, x1) ← input_rsg^{bf}(x), up(x, x1) (s3.1)
sup^3_2(x, y1) ← sup^3_1(x, x1), rsg^{fb}(y1, x1) (s3.2)
rsg^{bf}(x, y) ← sup^3_2(x, y1), down(y1, y) (s3.3)
sup^4_1(y, y1) ← input_rsg^{fb}(y), down(y1, y) (s4.1)
sup^4_2(y, x1) ← sup^4_1(y, y1), rsg^{bf}(y1, x1) (s4.2)
rsg^{fb}(x, y) ← sup^4_2(y, x1), up(x, x1) (s4.3)
input_rsg^{fb}(x1) ← sup^3_1(x, x1) (i3.2)
input_rsg^{bf}(y1) ← sup^4_1(y, y1) (i4.2)
input_rsg^{bf}(a) ← (seed)
query(y) ← rsg^{bf}(a, y) (query)

Figure 13.3: Transformation of RSG query using magic sets
such as QSQ. In particular, then, (P^m, q^m) incorporates the effect of pushing selections from the query into bottom-up computations, as if by magic.

We focus on a technique originally called generalized supplementary magic; it is perhaps the most general magic set technique for datalog in the literature. (An earlier form of magic is considered in Exercise 13.18.) The discussion begins by explaining how the technique works in connection with the RSG query of the previous section and then presents the general algorithm.
As with QSQ, the starting point for magic set algorithms is an adorned datalog query (P^{ad}, q^{ad}). Four classes of rules are generated (see Fig. 13.3). The first consists of a family of rules for each rule of the adorned program P^{ad}. For example, recall rule (3) (see p. 321) of the adorned program for the RSG query presented in the previous section:

rsg^{bf}(x, y) ← up(x, x1), rsg^{fb}(y1, x1), down(y1, y).

We first present a primitive family of rules corresponding to that rule, and then apply some optimizations.
sup^3_0(x) ← input_rsg^{bf}(x) (s3.0)
sup^3_1(x, x1) ← sup^3_0(x), up(x, x1) (s3.1)
sup^3_2(x, y1) ← sup^3_1(x, x1), rsg^{fb}(y1, x1) (s3.2)
sup^3_3(x, y) ← sup^3_2(x, y1), down(y1, y) (s3.3)
rsg^{bf}(x, y) ← sup^3_3(x, y) (s3.4)
Rule (s3.0) corresponds to step (A) of the QSQ algorithm; rules (s3.1) and (s3.3) correspond to step (B.i); rule (s3.2) corresponds to steps (B.ii.a) and (C); and rule (s3.4) corresponds to step (D). In the literature, the predicate input_rsg^{fb} has usually been denoted as magic_rsg^{fb} and sup^i_j as supmagic^i_j. We use the current notation to stress the connection with the QSQ framework. Note that the predicate rsg^{bf} here plays the role of ans_rsg^{bf} there.
As can be seen by the preceding example, the predicates sup^3_0 and sup^3_3 are essentially redundant. In general, if the i-th rule defines R^γ, then the predicate sup^i_0 is eliminated, with input_R^γ used in its place, to eliminate rule (s3.0) and to form

(s3.1) sup^3_1(x, x1) ← input_rsg^{bf}(x), up(x, x1).
Similarly, the predicate of the last supplementary relation can be eliminated to delete rule (s3.4) and to form

(s3.3) rsg^{bf}(x, y) ← sup^3_2(x, y1), down(y1, y).

Therefore the set of rules (s3.0) through (s3.4) may be replaced by (s3.1), (s3.2), and (s3.3). Rules (s4.1), (s4.2), and (s4.3) of Fig. 13.3 are generated from rule (4) of the adorned program for the RSG query (see p. 321). (Recall how the order of the body literals in that rule is reversed to pass bounding information.) Finally, rules (s1.1) and (s2.1) stem from rules (1) and (2) of the adorned program.
rules (1) and (2) of the adorned program.
The second class of rules is used to provide values for the input predicates [i.e.,
simulating step (B.ii.b) of the QSQ algorithm]. In the RSG query, one rule for each of
input_rsg
bf
and input_rsg
f b
is needed:
input_rsg
bf
(x1) sup
3
1
(x, x1) (i3.2)
input_rsg
f b
(y1) sup
4
1
(y, y1). (i4.2)
Intuitively, the first rule comes from rule (s3.2). In other words, it follows from the second atom of the body of rule (3) of the original adorned program (see p. 321). In general, an adorned rule with k idb atoms in the body will generate k input rules of this form.

The third and fourth classes of rules include one rule each; these initialize and conclude the simulation of QSQ, respectively. The first of these acts as a seed and is derived from the initial query. In the running example, the seed is

input_rsg^{bf}(a) ←.
The second constructs the answer to the query; in the example it is

query(y) ← rsg^{bf}(a, y).

From this example, it should be straightforward to specify the magic set rewriting of an adorned query (P^{ad}, q^{ad}) (see Exercise 13.16a).
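The rewritten program of Fig. 13.3 is ordinary datalog, so it can be run with any bottom-up strategy. The sketch below hand-codes its rules as set operations on a small invented instance (not the book's Fig. 13.1) and iterates to a fixpoint; only facts reachable from the seed are produced.

```python
# Bottom-up evaluation of the magic-rewritten RSG query (Fig. 13.3),
# with the rules hand-coded; instance values are invented for the demo.

def eval_magic(flat, up, down, a):
    input_bf, input_fb = {(a,)}, set()    # (seed): input_rsg_bf(a) <-
    rsg_bf, rsg_fb = set(), set()
    sup31, sup32, sup41, sup42 = set(), set(), set(), set()
    total = lambda: sum(map(len, (input_bf, input_fb, rsg_bf, rsg_fb,
                                  sup31, sup32, sup41, sup42)))
    while True:
        before = total()
        # (s1.1) rsg_bf(x,y) <- input_bf(x), flat(x,y)
        rsg_bf |= {(x, y) for (x,) in input_bf for (x2, y) in flat if x == x2}
        # (s2.1) rsg_fb(x,y) <- input_fb(y), flat(x,y)
        rsg_fb |= {(x, y) for (y,) in input_fb for (x, y2) in flat if y == y2}
        # (s3.1) sup31(x,x1) <- input_bf(x), up(x,x1)
        sup31 |= {(x, x1) for (x,) in input_bf for (x2, x1) in up if x == x2}
        # (s3.2) sup32(x,y1) <- sup31(x,x1), rsg_fb(y1,x1)
        sup32 |= {(x, y1) for (x, x1) in sup31
                  for (y1, x2) in rsg_fb if x1 == x2}
        # (s3.3) rsg_bf(x,y) <- sup32(x,y1), down(y1,y)
        rsg_bf |= {(x, y) for (x, y1) in sup32 for (y2, y) in down if y1 == y2}
        # (s4.1) sup41(y,y1) <- input_fb(y), down(y1,y)
        sup41 |= {(y, y1) for (y,) in input_fb for (y1, y2) in down if y == y2}
        # (s4.2) sup42(y,x1) <- sup41(y,y1), rsg_bf(y1,x1)
        sup42 |= {(y, x1) for (y, y1) in sup41
                  for (y2, x1) in rsg_bf if y1 == y2}
        # (s4.3) rsg_fb(x,y) <- sup42(y,x1), up(x,x1)
        rsg_fb |= {(x, y) for (y, x1) in sup42 for (x, x2) in up if x1 == x2}
        # (i3.2) input_fb(x1) <- sup31(x,x1)
        input_fb |= {(x1,) for (x, x1) in sup31}
        # (i4.2) input_bf(y1) <- sup41(y,y1)
        input_bf |= {(y1,) for (y, y1) in sup41}
        if total() == before:
            break
    # (query) query(y) <- rsg_bf(a,y)
    return {y for (x, y) in rsg_bf if x == a}

print(eval_magic({("g", "e")}, {("a", "e")}, {("g", "b")}, "a"))  # → {'b'}
```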
The example showed how the first and last supplementary predicates sup^3_0 and sup^3_3 were redundant with input_rsg^{bf} and rsg^{bf}, respectively, and could be eliminated.
Another improvement is to merge consecutive sequences of edb atoms in rule bodies as follows. For example, consider the rule

(i) R^γ(u) ← R_1^{γ_1}(u_1), . . . , R_n^{γ_n}(u_n)

and suppose that predicate R_k is the last idb relation in the body. Then rules (si.k), . . . , (si.n) can be replaced with

(si.k′) R^γ(u) ← sup^i_{k−1}(v_{k−1}), R_k^{γ_k}(u_k), R_{k+1}^{γ_{k+1}}(u_{k+1}), . . . , R_n^{γ_n}(u_n).
For example, rules (s3.2) and (s3.3) of Fig. 13.3 can be replaced by

(s3.2′) rsg^{bf}(x, y) ← sup^3_1(x, x1), rsg^{fb}(y1, x1), down(y1, y).
This simplification can also be used within rules. Suppose that R_k and R_l are idb relations with only edb relations occurring in between. Then rules (si.k), . . . , (si.l − 1) can be replaced with

(si.k′) sup^i_{l−1}(v_{l−1}) ← sup^i_{k−1}(v_{k−1}), R_k^{γ_k}(u_k), R_{k+1}^{γ_{k+1}}(u_{k+1}), . . . , R_{l−1}^{γ_{l−1}}(u_{l−1}).

An analogous simplification can be applied if there are multiple edb predicates at the beginning of the rule body.
To summarize the development, we state the following (see Exercise 13.16):

Theorem 13.3.1 Let (P, q) be a query, and let (P^m, q^m) be the query resulting from the magic rewriting of (P, q). Then

(a) The answer computed by (P^m, q^m) on any input instance I is identical to the answer computed by (P, q) on I.
(b) The set of facts produced by the Improved Seminaive Algorithm of (P^m, q^m) on input I is identical to the set of facts produced by an evaluation of QSQ on I.
13.4 Two Improvements
This section briefly presents two improvements of the techniques discussed earlier. The first focuses on another kind of information passing resulting from repeated variables and constants occurring in idb predicates in rule bodies. The second, called counting, is applicable to sets of data and rules having certain acyclicity properties.
Repeated Variables and Constants in Rule Bodies (by Example)
Consider the program P_r:

T(x, y, z) ← R(x, y, z) (1)
T(x, y, z) ← S(x, y, w), T(w, z, z) (2)
query(y, z) ← T(1, y, z)
Consider as input the instance I_1 shown in Fig. 13.4(a). The data structures for a QSQ evaluation of this program are shown in Fig. 13.4(b). (The annotations $2 = $3, $2 = $3 = 4, etc., will be explained later.)
A magic set rewriting of the program and query yields
T^{bff}(x, y, z) ← input_T^{bff}(x), R(x, y, z)
sup^2_1(x, y, w) ← input_T^{bff}(x), S(x, y, w)
T^{bff}(x, y, z) ← sup^2_1(x, y, w), T^{bff}(w, z, z)
input_T^{bff}(w) ← sup^2_1(x, y, w)
input_T^{bff}(1) ←
query(y, z) ← T^{bff}(1, y, z).
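Running this rewritten program bottom up on the instance of Fig. 13.4(a) exhibits the behavior discussed next: the query is empty, yet the number of T^{bff} facts produced grows with n. A sketch, with the instance hand-built:

```python
# Bottom-up evaluation of the magic-rewritten P_r, counting produced
# facts.  The instance mirrors Fig. 13.4(a): S = {(1,2,3), (3,4,5)},
# R = {(5,6,j) | 7 <= j <= n}.

def eval_pr(S, R):
    input_bff = {(1,)}                  # seed: input_T_bff(1) <-
    sup21, t_bff = set(), set()
    while True:
        before = len(input_bff) + len(sup21) + len(t_bff)
        # T_bff(x,y,z) <- input_T_bff(x), R(x,y,z)
        t_bff |= {(x, y, z) for (x,) in input_bff
                  for (x2, y, z) in R if x == x2}
        # sup21(x,y,w) <- input_T_bff(x), S(x,y,w)
        sup21 |= {(x, y, w) for (x,) in input_bff
                  for (x2, y, w) in S if x == x2}
        # T_bff(x,y,z) <- sup21(x,y,w), T_bff(w,z,z)
        t_bff |= {(x, y, z) for (x, y, w) in sup21
                  for (w2, z, z2) in t_bff if w == w2 and z == z2}
        # input_T_bff(w) <- sup21(x,y,w)
        input_bff |= {(w,) for (x, y, w) in sup21}
        if len(input_bff) + len(sup21) + len(t_bff) == before:
            break
    # query(y,z) <- T_bff(1,y,z)
    return {(y, z) for (x, y, z) in t_bff if x == 1}, len(t_bff)

n = 20
answer, facts = eval_pr({(1, 2, 3), (3, 4, 5)},
                        {(5, 6, j) for j in range(7, n + 1)})
print(answer, facts)   # empty answer, yet the fact count grows with n
```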
On input I_1, the query returns the empty instance. Furthermore, the SLD tree for this query on I_1, shown in Fig. 13.5, has only 9 goals and a total of 13 atoms, regardless of the value of n. However, both the QSQ and magic set approaches generate a set of facts with size proportional to n (i.e., to the size of I_1).

Why do both QSQ and magic sets perform so poorly on this program and query? The answer is that, as presented, neither QSQ nor magic sets take advantage of restrictions on derivations resulting from the repeated z variable in the body of rule (2). Analogous examples can be developed for cases where constants appear in idb atoms in rule bodies.
Both QSQ and magic sets can be enhanced to use such information. In the case of QSQ, the tuples added to supplementary relations can be annotated to carry information about restrictions imposed by the atom that caused the tuple to be placed into the leftmost supplementary relation. This is illustrated by the annotations in Fig. 13.4(b). First consider the annotation $2 = $3 on the tuple ⟨3⟩ in input_T^{bff}. This tuple is included into input_T^{bff} because ⟨1, 2, 3⟩ is in sup^2_1, and the next atom considered is T^{bff}(w, z, z). In particular, then, any valid tuple (x, y, z) resulting from ⟨3⟩ must have second and third coordinates equal. The annotation $2 = $3 is passed with ⟨3⟩ into sup^1_0 and sup^2_0.
Because variable y is bound to 4 in the tuple ⟨3, 4, 5⟩ in sup^2_1, the annotation $2 = $3 on ⟨3⟩ in sup^2_0 transforms into $3 = 4 on this new tuple. This, in turn, implies the annotation $2 = $3 = 4 when ⟨5⟩ is added to input_T^{bff} and to both sup^1_0 and sup^2_0. Now consider the tuple ⟨5⟩ in sup^1_0, with annotation ($2 = $3 = 4). This can generate a tuple in sup^1_1 only if ⟨5, 4, 4⟩ is in R. For input I_1 this tuple is not in R, and so the annotated
[Figure 13.4: Behavior of QSQ on program with repeated variables. (a) Sample input instance I_1, with S = {⟨1, 2, 3⟩, ⟨3, 4, 5⟩} and R = {⟨5, 6, 7⟩, ⟨5, 6, 8⟩, . . . , ⟨5, 6, n⟩}. (b) The QSQ evaluation: the template with supplementary relations sup^1_0[x], sup^1_1[x, y, z] for rule (1) and the template with sup^2_0[x], sup^2_1[x, y, w], sup^2_2[x, y, z] for rule (2), together with input_T^{bff} = {⟨1⟩, ⟨3⟩ ($2 = $3), ⟨5⟩ ($2 = $3 = 4)} and ans_T^{bff}.]
[Figure 13.5: Behavior of SLD on program with repeated variables. The SLD tree has root T(1, y, z), with children R(1, y, z) (which fails) and S(1, y, w1), T(w1, z, z); the latter leads to T(3, z, z), with children R(3, z, z) (which fails) and S(3, z, w2), T(w2, z, z); this in turn leads to T(5, 4, 4), whose children R(5, 4, 4) and S(5, 4, w3), T(w3, 4, 4) both fail.]


tuple ⟨5⟩ in sup^1_0 generates nothing (even though in the original QSQ framework many tuples are generated). Analogously, because there is no tuple ⟨5, 4, w⟩ in S, the annotated tuple ⟨5⟩ of sup^2_0 does not generate anything in sup^2_1. This illustrates how annotations can be used to restrict the facts generated during execution of QSQ.
More generally, annotations on tuples are conjunctions of equality terms of the form $i = $j and $i = a (where a is a constant). During step (B.ii.b) of QSQ, annotations are associated with new tuples placed into relations input_R^γ. We permit the same tuple to occur in input_R^γ with different annotations. This enhanced version of QSQ is called annotated QSQ. The enhancement correctly produces all answers to the initial query, and the set of facts generated now closely parallels the set of facts and assignments generated by the SLD tree corresponding to the QSQ templates used.
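The filtering effect of annotations can be sketched in a few lines of Python. This is an illustrative simulation, not one of the book's algorithms; the relation contents mirror the running example, in which R holds many tuples ⟨5, 6, j⟩ but no tuple ⟨5, 4, 4⟩.

```python
def satisfies(tuple_, annotation):
    """Check a conjunction of equality terms against a tuple.

    annotation is a list of terms; each term is either
      ('eq', i, j)     meaning  $i = $j  (1-based positions), or
      ('const', i, a)  meaning  $i = a.
    """
    for term in annotation:
        if term[0] == 'eq':
            _, i, j = term
            if tuple_[i - 1] != tuple_[j - 1]:
                return False
        else:
            _, i, a = term
            if tuple_[i - 1] != a:
                return False
    return True

R = {(5, 6, j) for j in range(6, 20)}      # no (5, 4, 4) present

# Annotated tuple 5 in sup^1_0: only R-facts with first coordinate 5
# that satisfy $2 = $3 = 4 may contribute to sup^1_1.
annotation = [('eq', 2, 3), ('const', 2, 4)]
sup_1_1 = {t for t in R if t[0] == 5 and satisfies(t, annotation)}
print(sup_1_1)             # set() -- nothing is generated

# Without the annotation, every (5, y, z) in R would be generated:
unrestricted = {t for t in R if t[0] == 5}
print(len(unrestricted))   # 14
```

The contrast between the two final sets is exactly the saving described in the text: the annotation prunes the join before any irrelevant fact is materialized.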
The magic set technique can also be enhanced to incorporate the information captured by the annotations just described. This is accomplished by an initial preprocessing of the program (and query) called subgoal rectification. Speaking loosely, a subgoal corresponding to an idb predicate is rectified if it has no constants and no repeated variables. Rectified subgoals may be formed from nonrectified ones by creating new idb predicates that correspond to versions of idb predicates with repeated variables and constants. For example, the following is the result of rectifying the subgoals of the program P_r:
T(x, y, z) ← R(x, y, z)
T(x, y, z) ← S(x, y, w), T_{$2=$3}(w, z)
T_{$2=$3}(x, z) ← R(x, z, z)
T_{$2=$3}(x, z) ← S(x, z, w), T_{$2=$3}(w, z)
query(y, z) ← T(1, y, z)
query(z, z) ← T_{$2=$3}(1, z).
It is straightforward to develop an iterative algorithm that replaces an arbitrary datalog program and query with an equivalent one, all of whose idb subgoals are rectified (see Exercise 13.20). Note that there may be more than one rule defining the query after rectification.
The magic set transformation is applied to the rectified program to obtain the final result. In the preceding example, there are two relevant adornments for the predicate T_{$2=$3} (namely, bf and bb).
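The intent of rectification on P_r can be checked with a small bottom-up evaluation, sketched below in Python. The input instance is hypothetical (it is not from the book), and the fixpoint loops are naive rather than seminaive; the point is only that the new predicate (here T_eq, standing for T_{$2=$3}) holds exactly the pairs (x, z) with T(x, z, z), and that the rectified program computes the same T relation as the original.

```python
def fixpoint_T(R, S):
    """Naive bottom-up evaluation of the original program P_r:
         T(x, y, z) <- R(x, y, z)
         T(x, y, z) <- S(x, y, w), T(w, z, z)
    """
    T = set()
    while True:
        new = set(R) | {(sx, sy, tz) for (sx, sy, sw) in S
                        for (tx, ty, tz) in T if tx == sw and ty == tz}
        if new == T:
            return T
        T = new

def fixpoint_rectified(R, S):
    """Naive evaluation of the rectified program; T_eq plays the role of
    T_{$2=$3}: the projection of T onto tuples with equal 2nd and 3rd columns."""
    T_eq = set()
    while True:
        new = {(x, y) for (x, y, z) in R if y == z}          # T_eq(x,z) <- R(x,z,z)
        new |= {(sx, sy) for (sx, sy, sw) in S               # T_eq(x,z) <- S(x,z,w), T_eq(w,z)
                for (tx, tz) in T_eq if tx == sw and tz == sy}
        if new == T_eq:
            break
        T_eq = new
    # T itself is no longer recursive once T_eq is known: one pass suffices.
    T = set(R) | {(sx, sy, tz) for (sx, sy, sw) in S
                  for (tx, tz) in T_eq if tx == sw}
    return T, T_eq

# Hypothetical input instance.
R = {(5, 4, 4), (5, 6, 7)}
S = {(1, 2, 3), (3, 4, 5)}

T = fixpoint_T(R, S)
T_rect, T_eq = fixpoint_rectified(R, S)
assert T_rect == T                                    # same T relation
assert T_eq == {(x, y) for (x, y, z) in T if y == z}  # T_eq = "T with $2 = $3"
```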
The following can be verified (see Exercise 13.21):

Theorem 13.4.1 (Informal) The framework of annotated QSQ and the magic set transformation augmented with subgoal rectification are both correct. Furthermore, the set of idb predicate facts generated by evaluating a datalog query with either of these techniques is identical to the set of facts occurring in the corresponding SLD tree.

A tight correspondence between the assignments in SLD derivation trees and the supplementary relations generated both by annotated QSQ and rectified magic sets can be shown. The intuitive conclusion drawn from this development is that top-down and bottom-up techniques for datalog evaluation have essentially the same efficiency.
Counting (by Example)

We now present a brief sketch of another improvement of the magic set technique. It differs from the previous one in that it works only when the underlying data set is known to have certain acyclicity properties.
Consider evaluating the following SG query based on the Same-Generation program:

sg(x, y) ← flat(x, y) (1)
sg(x, y) ← up(x, x1), sg(x1, y1), down(y1, y) (2)
query(y) ← sg(a, y)

on the input J_n given by

J_n(up) = {⟨a, b_i⟩ | i ∈ [1, n]} ∪ {⟨b_i, c_j⟩ | i, j ∈ [1, n]}
J_n(flat) = {⟨c_i, d_j⟩ | i, j ∈ [1, n]}
J_n(down) = {⟨d_i, e_j⟩ | i, j ∈ [1, n]} ∪ {⟨e_i, f⟩ | i ∈ [1, n]}.

Instance J_2 is shown in Fig. 13.6.
The completed QSQ template on input J_2 for the second rule of the SG query is shown in Fig. 13.7(a). (The tuples are listed in the order in which QSQR would discover them.) Note that on input J_n both sup^2_1 and sup^2_2 would contain n(n + 1) tuples.
Consider now the proof tree of SG having root sg(a, f) shown in Fig. 13.8 (see Chapter 12). There is a natural correspondence of the children at depth 1 in this tree with the supplementary relation atoms sup^2_0(a), sup^2_1(a, b_1), sup^2_2(a, e_1), and sup^2_3(a, f) generated
[Figure 13.6: Instance J_2 for counting. The graph has up edges from a to b_1 and b_2 and from each b_i to each c_j, flat edges from each c_i to each d_j, and down edges from each d_i to each e_j and from each e_i to f.]
by QSQ; and between the children at depth 2 with sup^2_0(b_1), sup^2_1(b_1, c_1), sup^2_2(b_1, d_1), and sup^2_3(b_1, e_1).
A key idea in the counting technique is to record information about the depths at which supplementary relation atoms occur. In some cases, this permits us to ignore some of the specific constants present in the supplementary atoms. This is illustrated in Fig. 13.7(b). For example, we show atoms count_sup^2_0(1, a), count_sup^2_1(1, b_1), count_sup^2_2(1, e_1), and count_sup^2_3(1, f) that correspond to the supplementary atoms sup^2_0(a), sup^2_1(a, b_1), sup^2_2(a, e_1), and sup^2_3(a, f). Note that, for example, count_sup^2_1(2, c_1) corresponds to both sup^2_1(b_1, c_1) and sup^2_1(b_2, c_1).
More generally, the modified supplementary relation atoms hold an index that indicates a level in a proof tree corresponding to how the atom came to be created. Because of the structure of SG, and assuming that the up relation is acyclic, these modified supplementary relations can be used to find query answers. Note that on input J_n, the relations count_sup^2_1 and count_sup^2_2 hold 2n tuples each rather than n(n + 1), as in the original QSQ approach.
We now describe how the magic set program associated with the SG query can be transformed into an equivalent program (on acyclic input) that uses the indexes suggested by Fig. 13.7(b). The magic set rewriting of the SG query is given by

sg^bf(x, y) ← input_sg^bf(x), flat(x, y) (s1.1)
sup^2_1(x, x1) ← input_sg^bf(x), up(x, x1) (s2.1)
sup^2_2(x, y1) ← sup^2_1(x, x1), sg^bf(x1, y1) (s2.2)
sg^bf(x, y) ← sup^2_2(x, y1), down(y1, y) (s2.3)
[Figure 13.7: Illustration of intuition behind counting.
(a) Completed QSQ template for sg^bf on input J_2, with supplementary relations sup^2_0[x], sup^2_1[x, x_1], sup^2_2[x, y_1], and sup^2_3[x, y] over the body up(x, x_1), sg^bf(x_1, y_1), down(y_1, y); on J_2 these relations hold tuples such as ⟨a, b_1⟩, ⟨a, b_2⟩, ⟨b_i, c_j⟩, ⟨b_i, d_j⟩, ⟨a, e_j⟩, and ⟨a, f⟩.
(b) Alternative QSQ template, using indices: relations count_sup^2_0[d, x], count_sup^2_1[d, x_1], count_sup^2_2[d, y_1], and count_sup^2_3[d, y], where the index d records a proof-tree level in place of specific constants (e.g., ⟨1, b_1⟩, ⟨2, c_1⟩, ⟨1, f⟩).]
input_sg^bf(x1) ← sup^2_1(x, x1) (i2.2)
input_sg^bf(a) ← (seed)
query(y) ← sg^bf(a, y). (query)
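As a sanity check, the rewritten program can be run bottom-up on the instance J_2. The sketch below is a naive (not seminaive) fixpoint in Python, with relation names abbreviated; it is an illustration of the rules above, not the book's evaluator.

```python
# Instance J_2: up, flat, down as defined for J_n with n = 2.
n = 2
up = {('a', f'b{i}') for i in range(1, n + 1)} | \
     {(f'b{i}', f'c{j}') for i in range(1, n + 1) for j in range(1, n + 1)}
flat = {(f'c{i}', f'd{j}') for i in range(1, n + 1) for j in range(1, n + 1)}
down = {(f'd{i}', f'e{j}') for i in range(1, n + 1) for j in range(1, n + 1)} | \
       {(f'e{i}', 'f') for i in range(1, n + 1)}

inp = {'a'}                      # input_sg^bf, rule (seed)
sg = set()                       # sg^bf
sup21, sup22 = set(), set()      # sup^2_1 and sup^2_2
while True:
    new_sg = {(x, y) for x in inp for (x2, y) in flat if x2 == x}          # (s1.1)
    new_sg |= {(x, y) for (x, y1) in sup22 for (y2, y) in down if y2 == y1}  # (s2.3)
    new_sup21 = {(x, x1) for x in inp for (x2, x1) in up if x2 == x}       # (s2.1)
    new_sup22 = {(x, y1) for (x, x1) in sup21 for (x2, y1) in sg if x2 == x1}  # (s2.2)
    new_inp = inp | {x1 for (_, x1) in sup21}                              # (i2.2)
    if (new_sg, new_sup21, new_sup22, new_inp) == (sg, sup21, sup22, inp):
        break
    sg, sup21, sup22, inp = new_sg, new_sup21, new_sup22, new_inp

query = {y for (x, y) in sg if x == 'a'}                                   # (query)
print(query)        # {'f'}
print(len(sup21))   # 6, i.e., n(n + 1) tuples, as noted in the text
```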
The counting version of this is now given. (In other literature on counting, the seed is
initialized with 0 rather than 1.)
sg(a, f)
├── up(a, b1)
├── sg(b1, e1)
│   ├── up(b1, c1)
│   ├── sg(c1, d1)
│   │   └── flat(c1, d1)
│   └── down(d1, e1)
└── down(e1, f)

Figure 13.8: A proof tree for sg(a, f)
count_sg^bf(I, y) ← count_input_sg^bf(I, x), flat(x, y) (c-s1.1)
count_sup^2_1(I, x1) ← count_input_sg^bf(I, x), up(x, x1) (c-s2.1)
count_sup^2_2(I, y1) ← count_sup^2_1(I, x1), count_sg^bf(I + 1, y1) (c-s2.2)
count_sg^bf(I, y) ← count_sup^2_2(I, y1), down(y1, y) (c-s2.3)
count_input_sg^bf(I + 1, x1) ← count_sup^2_1(I, x1) (c-i2.2)
count_input_sg^bf(1, a) ← (c-seed)
query(y) ← count_sg^bf(1, y) (c-query)
In the preceding, expressions such as I + 1 are viewed as a shorthand for using a variable J in place of I + 1 and including J = I + 1 in the rule body.
In the counting version, the first coordinate of each supplementary relation keeps track of a level in a proof tree rather than a specific value. Intuitively, when constructing a sequence of supplementary atoms corresponding to a given level of a proof tree, each idb atom used must have been generated from the next deeper level. This is why count_sg^bf(I + 1, y1) is used in rule (c-s2.2). Furthermore, rule (c-i2.2) initiates the construction corresponding to a new layer of the proof tree.
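To see the space saving concretely, here is a hypothetical bottom-up run of the counting program on J_n in Python (with n = 3). The encoding of J_n is ours; on this acyclic input the indexed relation count_sup^2_1 holds 2n tuples, against the n(n + 1) tuples of its un-indexed counterpart.

```python
n = 3
up = {('a', f'b{i}') for i in range(1, n + 1)} | \
     {(f'b{i}', f'c{j}') for i in range(1, n + 1) for j in range(1, n + 1)}
flat = {(f'c{i}', f'd{j}') for i in range(1, n + 1) for j in range(1, n + 1)}
down = {(f'd{i}', f'e{j}') for i in range(1, n + 1) for j in range(1, n + 1)} | \
       {(f'e{i}', 'f') for i in range(1, n + 1)}

c_inp = {(1, 'a')}                                 # (c-seed)
c_sg, c_sup21, c_sup22 = set(), set(), set()
while True:
    new_sg = {(i, y) for (i, x) in c_inp for (x2, y) in flat if x2 == x}       # (c-s1.1)
    new_sg |= {(i, y) for (i, y1) in c_sup22 for (y2, y) in down if y2 == y1}  # (c-s2.3)
    new_sup21 = {(i, x1) for (i, x) in c_inp for (x2, x1) in up if x2 == x}    # (c-s2.1)
    # (c-s2.2): the join with count_sg is on the index alone (I + 1).
    new_sup22 = {(i, y1) for (i, _) in c_sup21
                 for (i2, y1) in c_sg if i2 == i + 1}
    new_inp = c_inp | {(i + 1, x1) for (i, x1) in c_sup21}                     # (c-i2.2)
    state = (new_sg, new_sup21, new_sup22, new_inp)
    if state == (c_sg, c_sup21, c_sup22, c_inp):
        break
    c_sg, c_sup21, c_sup22, c_inp = state

print(sorted({y for (i, y) in c_sg if i == 1}))   # ['f']  -- rule (c-query)
print(len(c_sup21))                               # 6, i.e., 2n rather than n(n + 1)
```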
The counting program of the preceding example is not safe, in the sense that on some inputs the program may produce an infinite set of tuples in some predicates (e.g., count_sup^2_1). For example, this will happen if there is a cycle in the up relation reachable from a. Analogous situations occur with most applications of counting. As a result, the counting technique can only be used where the underlying data set is known to satisfy certain restrictions.
The preceding example is a simple application of the general technique of counting. A more general version of counting uses three kinds of indexes. The first, illustrated in the example, records information about levels of proof trees. The second is used to record information about what rule is being expanded, and the third is used to record which atom of the rule body is being considered (see Exercise 13.23). A description of the kinds of programs for which the counting technique can be used is beyond the scope of this book. Although limited in applicability, the counting technique has been shown to yield significant savings in some contexts.
Bibliographic Notes
This chapter has presented a brief introduction to the research on heuristics for datalog
evaluation. An excellent survey of this work is [BR88a], which presents a taxonomy of
different techniques and surveys a broad number of them. Several books provide substantial
coverage of this area, including [Bid91a, CGT90, Ull89b]. Experimental results comparing
several of the techniques in the context of datalog are described in [BR88b]. An excellent
survey on deductive database systems, which includes an overview of several prototype
systems that support datalog, is presented in [RU94].
The naive and seminaive strategies for datalog evaluation underlie several early investigations and implementations [Cha81b, MS81]; the seminaive strategy for evaluation is described in [Ban85, Ban86], which also propose various refinements. The use of T^{i-1} and T^i in Algorithm 13.1.1 is from [BR87b]. Reference [CGT90] highlights the close relationship of these approaches to the classical Jacobi and Gauss-Seidel algorithms of numerical analysis.
An essential ingredient of the top-down approaches to datalog evaluation is that of pushing selections into recursions. An early form of this was developed in [AU79], where selections and projections are pushed into restricted forms of fixpoint queries (see Chapter 14 for the definition of fixpoint queries).
The Query-Subquery (QSQ) approach was initially presented in [Vie86]; the independently developed method of extension tables [DW87] is essentially equivalent to this. The QSQ approach is extended in [Vie88, Vie89] to incorporate certain global optimizations. An extension of the technique to general logic programming, called SLD-AL, is developed in [Vie87a, Vie89]. Related approaches include APEX [Loz85], Earley Deduction [PW80, Por86], and those of [Nej87, Roe87]. The connection between context-free parsing and datalog evaluation is highlighted in [Lan88].
The algorithms of the QSQ family are sometimes called memo-ing approaches, because they use various data structures to remember salient inferred facts to filter the work of traditional SLD resolution.
Perhaps the most general of the top-down approaches uses rule/goal graphs [Ull85]; these potentially infinite trees intuitively correspond to a breadth-first, set-at-a-time execution of SLD resolution. Rule/goal graphs are applied in [Van86] to evaluate datalog queries in distributed systems. Similar graph structures have also been used in connection with general logic programs (e.g., [Kow75, Sic76]). A survey of several graph-based approaches is [DW85].
Turning to bottom-up approaches, the essentially equivalent approaches of [HN84] and [GdM86] develop iterative algebraic programs for linear datalog programs. [GS87] extends these. A more general approach based on rewriting iterative algebra programs is presented in [CT87, Tan88].
The magic set and counting techniques originally appeared for linear datalog in [BMSU86]. Our presentation of magic sets is based on an extended version called generalized supplementary magic sets [BR87a, BR91]. That work develops a general notion of sideways information passing based on graphs (see Exercise 13.19), and develops both magic sets and counting in connection with general logic programming. The Alexander method [RLK86, Ker88], developed independently, is essentially the same as generalized supplementary magic sets for datalog. This was generalized to logic programming in [Sek89]. Magic set rewriting has also been applied to optimize SQL queries [MFPR90].
The counting method is generalized and combined with magic sets in [SZ86, SZ88]. Supplementary magic is incorporated in [BR91]. Analytic comparisons of magic and counting for selected programs are presented in [MSPS87].
Another bottom-up technique is Static Filtering [KL86a, KL86b]. This technique forms a graph corresponding to the flow of tuples through a bottom-up evaluation and then modifies the graph in a manner that captures information passing resulting from constants in the initial query.
Several of the investigations just mentioned, including [BR87a, KL86a, KL86b, Ull85, Vie86], emphasize the idea that sideways information passing and control are largely independent. Both [SZ88] and [BR91] describe fairly general mechanisms for specifying and using alternative sideways information passing and related message passing. A more general form of sideways information passing, which passes bounding inequalities between subgoals, is studied in [APP+86]. A formal framework for studying the success of pushing selections into datalog programs is developed in [BKBR87].
Several papers have studied the connection between top-down and bottom-up evaluation techniques. One body of the research in this direction focuses on the sets of facts generated by the top-down and bottom-up techniques. One of the first results relating top-down and bottom-up is from [BR87a, BR91], where it is shown that if a top-down technique and the generalized supplementary magic set technique use a given family of sideways information passing techniques, then the sets of intermediate facts produced by both techniques correspond. That research is conducted in the context of general logic programs that are range restricted. These results are generalized to possibly non-range-restricted logic programs in the independent research [Ram91] and [Sek89]. In that research, bottom-up evaluations may use terms and tuples that include variables, and bottom-up evaluation of rewritten programs uses unification rather than simple relational join. A close correspondence between top-down and bottom-up evaluation for datalog was established in [Ull89a], where subgoal rectification is used. The treatment of program P_r and Theorem 13.4.1 is inspired by that development. This close correspondence is extended to arbitrary logic programs in [Ull89b]. Using a more detailed cost model, [SR93] shows that bottom-up evaluation asymptotically dominates top-down evaluation for logic programs, even if they produce nonground terms in their output.
A second direction of research on the connection between top-down and bottom-up approaches provides an elegant unifying framework [Bry89]. Recall in the discussion of Theorem 13.2.2 that the answer to a query can be obtained by performing the steps of the QSQ until a fixpoint is reached. Note that the fixpoint operator used in this chapter is different from the conventional bottom-up application of T_P used by the naive algorithm for datalog evaluation. The framework presented in [Bry89] is based on meta-interpreters (i.e., interpreters that operate on datalog rules in addition to data); these can be used to specify QSQ and related algorithms as bottom-up, fixpoint evaluations. (Such meta-programming is common in functional and logic programming but yields novel results in the context of datalog.) Reference [Bry89] goes on to describe several top-down and bottom-up datalog evaluation techniques within the framework, proving their correctness and providing a basis for comparison.
A recent investigation [NRSU89] improves the performance of the magic sets in some cases. If the program and query satisfy certain conditions, then a technique called factoring can be used to replace some predicates by new predicates of lower arity. Other improvements are considered in [Sag90], where it is shown in particular that the advantage of one method over another may depend on the actual data, therefore stressing the need for techniques to estimate the size of idbs (e.g., [LN90]).
Extensions of the datalog evaluation techniques to stratified datalog¬ programs (see Chapter 15) include [BPR87, Ros91, SI88, KT88].
Another important direction of research has been the parallel evaluation of datalog
programs. Heuristics are described in [CW89b, GST90, Hul89, SL91, WS88, WO90].
A novel approach to answering datalog queries efficiently is developed in [DT92, DS93]. The focus is on cases in which the same query is asked repeatedly as the underlying edb is changing. The answer of the query (and additional scratch paper relations) is materialized against a given edb state, and first-order queries are used incrementally to maintain the materialized data as the underlying edb state is changed.
A number of prototype systems based on variants of datalog have been developed, incorporating some of the techniques mentioned in this chapter. They include DedGin [Vie87b, LV89], NAIL! [Ull85, MUV86, MNS+87], LDL [NT89], ALGRES [CRG+88], NU-Prolog [RSB+87], GLUE-NAIL [DMP93], and CORAL [RSS92, RSSS93]. Descriptions of projects in this area can also be found in [Zan87] and [RU94].
Exercises
Exercise 13.1 Recall the program RSG′ from Section 13.1. Exhibit an instance I such that on this input, Δ^i_rsg ≠ ∅ for each i > 0.
Exercise 13.2 Recall the informal discussion of the two seminaive versions of the nonlinear ancestor program discussed in Section 13.1. Let P_1 denote the first of these, and P_2 the second. Show the following.
(a) For some input, P_2 can produce the same tuple more than once at some level beyond the first level.
(b) If P_2 produces the same tuple more than once, then each occurrence corresponds to a distinct proof tree (see Section 12.5) from the program and the input.
(c) P_1 can produce a given tuple twice, where the proof trees corresponding to the two occurrences are identical.
Exercise 13.3 Consider the basic seminaive algorithm (13.1.1).
(a) Verify that this algorithm terminates on all inputs.
(b) Show that for each i ≥ 0 and each idb predicate S, after the i-th execution of the loop the value of variable S^i is equal to T^i_P(I)(S) and the value of Δ^{i+1}_S is equal to T^{i+1}_P(I)(S) − T^i_P(I)(S).
(c) Verify that this algorithm produces correct output on all inputs.
(d) Give an example input for which the same tuple is generated during different loops of the algorithm.
Exercise 13.4 Consider the improved seminaive algorithm (13.1.2).
(a) Verify that this algorithm terminates and produces correct output on all inputs.
(b) Give an example of a program P for which the improved seminaive algorithm produces fewer redundant tuples than the basic seminaive algorithm.
Exercise 13.5 Let P be a linear datalog program, and let P′ be the set of rules associated with P by the improved seminaive algorithm. Suppose that the naive algorithm is performed using P′ on some input I. Does this yield P(I)? Why or why not? What if the basic seminaive algorithm is used?
Exercise 13.6 A set X of relevant facts for datalog query (P, q) and input I is minimal if (1) for each answer t of q there is a proof tree for t constructed from facts in X, and (2) X is minimal having this property. Informally describe an algorithm that produces a minimal set of relevant facts for a query (P, q) and input I and is polynomial time in the size of I.
Exercise 13.7 [BR91] Suppose that program P includes the rule

ρ: S(x, y) ← S_1(x, z), S_2(z, y), S_3(u, v), S_4(v, w),

where S_3, S_4 are edb relations. Observe that the atoms S_3(u, v) and S_4(v, w) are not connected to the other atoms of the rule body or to the rule head. Furthermore, in an evaluation of P on input I, this rule may contribute some tuple to S only if there is an assignment ν for u, v, w such that {S_3(u, v), S_4(v, w)}[ν] ⊆ I. Explain why it is typically more efficient to replace ρ with

ρ′: S(x, y) ← S_1(x, z), S_2(z, y)

if there is such an assignment and to delete ρ from P otherwise. Extend this to the case when S_3, S_4 are idb relations. State a general version of this heuristic improvement.
Exercise 13.8 Consider the adorned rule

R^bf(x, w) ← S_1^bf(x, y), S_2^bf(y, z), T_1^ff(u, v), T_2^bf(v, w).

Explain why it makes sense to view the second occurrence of v as bound.
Exercise 13.9 Consider the rule

R(x, y, y) ← S(y, z), T(z, x).

(a) Construct adorned versions of this rule for R^ffb and R^fbb.
(b) Suppose that in the QSQ algorithm a tuple ⟨b, c⟩ is placed into input_R^fbb. Explain why this tuple should not be placed into the 0th supplementary relation for the second adorned rule constructed in part (a).
(c) Exhibit an example analogous to part (b) based on the presence of a constant in the head of a rule rather than on repeated variables.
Exercise 13.10
(a) Complete the evaluation in Example 13.2.1.
(b) Use Algorithm 13.2.3 (QSQR) to evaluate that example.

Exercise 13.11 In the QSQR algorithm, the procedure for processing subqueries of the form (R^γ, S) is called until no global variable is changed. Exhibit an example datalog query and input where the second cycle of calls to the subqueries (R^γ, S) generates new answer tuples.

Exercise 13.12 (a) Prove Theorem 13.2.2. (b) Prove that the QSQR algorithm is correct.
Exercise 13.13 The Iterative QSQ (QSQI) algorithm uses the QSQ framework, but without recursion. Instead in each iteration it processes each rule body from left to right, using the values currently in the relations ans_R^γ when computing values for the supplementary relations. As with QSQR, the variables input_R^γ and ans_R^γ are global, and the variables for the supplementary relations are local. Iteration continues until there is no change to the global variables.
(a) Specify the QSQI algorithm more completely.
(b) Give an example where QSQI performs redundant work that QSQR does not.
Exercise 13.14 [BR91] Consider the following query based on a nonlinear variant of the same-generation program, called here the SGV query:

(a) sgv(x, y) ← flat(x, y)
(b) sgv(x, y) ← up(x, z1), sgv(z1, z2), flat(z2, z3), sgv(z3, z4), down(z4, y)
query(y) ← sgv(a, y)

Give the magic set transformation of this program and query.

Exercise 13.15 Give examples of how a query (P_m, q_m) resulting from magic set rewriting can produce nonrelevant and redundant facts.
Exercise 13.16
(a) Give the general definition of the magic set rewriting technique.
(b) Prove Theorem 13.3.1.

Exercise 13.17 Compare the difficulties, in practical terms, of using the QSQ and magic set frameworks for evaluating datalog queries.
Exercise 13.18 Let (P, q) denote the SGV query of Exercise 13.14. Let (P_m, q_m) denote the result of rewriting this program, using the (generalized supplementary) magic set transformation presented in this chapter. Under an earlier version, called here original magic, the rewritten form of (P, q) is (P_om, q_om):

sgv^bf(x, y) ← input_sgv^bf(x), flat(x, y) (o-m1)
sgv^bf(x, y) ← input_sgv^bf(x), up(x, z1), sgv^bf(z1, z2), (o-m2)
    flat(z2, z3), sgv^bf(z3, z4), down(z4, y)
input_sgv^bf(z1) ← input_sgv^bf(x), up(x, z1) (o-i2.2)
input_sgv^bf(z3) ← input_sgv^bf(x), up(x, z1), sgv^bf(z1, z2), (o-i2.4)
    flat(z2, z3)
input_sgv^bf(a) ← (o-seed)
query(y) ← sgv^bf(a, y) (o-query)

Intuitively, the original magic set transformation uses the relations input_R^γ, but not supplementary relations.
(a) Verify that (P_om, q_om) is equivalent to (P, q).
(b) Compare the family of facts computed during the executions of (P_m, q_m) and (P_om, q_om).
(c) Give a specification for the original magic set transformation, applicable to any datalog query.
Exercise 13.19 Consider the adorned rule

R^bbf(x, y, z) ← T_1^bf(x, s), T_2^bf(s, t), T_3^bf(y, u), T_4^bf(u, v), T_5^bbf(t, v, z).

A sip graph for this rule has as nodes all atoms of the rule and a special node exit, and edges (R, T_1), (T_1, T_2), (R, T_3), (T_3, T_4), (T_2, T_5), (T_4, T_5), (T_5, exit). Describe a family of supplementary relations, based on this sip graph, that can be used in conjunction with the QSQ and magic set approaches. [Use one supplementary relation for each edge (corresponding to the output of the tail of the edge) and one supplementary relation for each node except for R (corresponding to the input to this node; in general, this will equal the join of the relations for the edges entering the node).] Explain how this may increase efficiency over the left-to-right approach used in this chapter. Generalize the construction. (The notion of sip graph and its use is a variation of [BR91].)
Exercise 13.20 [Ull89a] Specify an algorithm that replaces a program and query by an equivalent one, all of whose idb subgoals are rectified. What is the complexity of this algorithm?

Exercise 13.21
(a) Provide a more detailed specification of the QSQ framework with annotations, and prove its correctness.
(b) [Ull89b, Ull89a] State formally the definitions needed for Theorem 13.4.1, and prove it.
Exercise 13.22 Write a program using counting that can be used to answer the RSG query
presented at the beginning of Section 13.2.
count_sgv^bf(I, K, L, y) ← count_input_sgv^bf(I, K, L, x), flat(x, y) (c-s1.1)
count_sup^2_1(I, K, L, z1) ← count_input_sgv^bf(I, K, L, x), up(x, z1) (c-s2.1)
count_sup^2_2(I, K, L, z2) ← count_sup^2_1(I, K, L, z1), (c-s2.2)
    count_sgv^bf(I + 1, 2K + 2, 5L + 2, z2)
count_sup^2_3(I, K, L, z3) ← count_sup^2_2(I, K, L, z2), flat(z2, z3) (c-s2.3)
count_sup^2_4(I, K, L, z4) ← count_sup^2_3(I, K, L, z3), (c-s2.4)
    count_sgv^bf(I + 1, 2K + 2, 5L + 4, z4)
count_sgv^bf(I, K, L, y) ← count_sup^2_4(I, K, L, z4), down(z4, y) (c-s2.5)
count_input_sgv^bf(I + 1, 2K + 2, 5L + 2, z1) ← count_sup^2_1(I, K, L, z1) (c-i2.2)
count_input_sgv^bf(I + 1, 2K + 2, 5L + 4, z3) ← count_sup^2_3(I, K, L, z3) (c-i2.4)
count_input_sgv^bf(1, 0, 0, a) ← (c-seed)
query(y) ← count_sgv^bf(1, 0, 0, y) (c-query)

Figure 13.9: Generalized counting transformation on SGV query
Exercise 13.23 [BR91] This exercise illustrates a version of counting that is more general
than that of Exercise 13.22. Indexed versions of predicates shall have three index coordinates
(occurring leftmost) that hold:
(i) The level in the proof tree of the subgoal that a given rule is expanding.
(ii) An encoding of the rules used along the path from the root of the proof tree to the
current subgoal. Suppose that there are k rules, numbered (1), . . . , (k). The index
for the root node is 0 and, given index K, if rule number i is used next, then the next
index is given by kK +i.
(iii) An encoding of the atom occurrence positions along the path from root to the current
node. Assuming that l is the maximum number of idb atoms in any rule body, this
index is encoded in a manner similar to item (ii).
A counting version of the SGV query of Exercise 13.14 is shown in Fig. 13.9. Verify that this is
equivalent to the SGV query in the case where there are no cycles in up or down.
14 Recursion and Negation
Vittorio: Let's combine recursion and negation.
Riccardo: That sounds hard to me.
Sergio: It's no problem, just add fixpoint to the calculus, or while to the algebra.
Riccardo: That sounds hard to me.
Vittorio: OK, how about datalog with negation?
Riccardo: That sounds hard to me.
Alice: Riccardo, you are recursively negative.
The query languages considered so far were obtained by augmenting the conjunctive queries successively with disjunction, negation, and recursion. In this chapter, we consider languages that provide both negation and recursion. They allow us to ask queries such as, "Which are the pairs of metro stops that are not connected?" This query is not expressible in relational calculus and algebra or in datalog.
The integration of recursion and negation is natural and yields highly expressive languages. We will see how it can be achieved in the three paradigms considered so far: algebraic, logic, and deductive. The algebraic language is an extension of the algebra with a looping construct and an assignment, in the style of traditional imperative programming languages. The logic language is an extension of the calculus in which recursion is provided by a fixpoint operator. The deductive language extends datalog with negation.
In this chapter, the semantics of datalog with negation is defined from a purely computational perspective that is in the spirit of the algebraic approach. More natural and widely accepted model-theoretic semantics, such as stratified and well-founded semantics, are presented in Chapter 15.
As we consider increasingly powerful languages, the complexity of query evaluation becomes a greater concern. We consider two flavors of the languages in each paradigm: the inflationary one, which guarantees termination in time polynomial in the size of the database; and the noninflationary one, which only guarantees that a polynomial amount of space is used.¹ In the last section of this chapter, we show that the polynomial-time-bounded languages defined in the different paradigms are equivalent. The set of queries they define is called the fixpoint queries. The polynomial-space-bounded languages are also equivalent, and the corresponding set of queries is called the while queries. In Chapter 17, we examine in more detail the expressiveness and complexity of the fixpoint and while queries. Note that, in particular, the polynomial time and space bounds on the complexity of such queries imply that there are queries that are not fixpoint or while queries. More powerful languages are considered in Chapter 18.

¹ For comparison, it is shown in Chapter 17 that CALC requires only logarithmic space.
Before describing specific languages, we present an example that illustrates the principles underlying the two flavors of the languages.

Example The following is based on a version of the well-known game of life, which is used to model biological evolution. The game starts with a set of cells, some of which are alive and some dead; the alive ones are colored in blue or red. (One cell may have two colors.) Each cell has other cells as neighbors. Suppose that a binary relation Neighbor holds the neighbor relation (considered as a symmetric relation) and that the information about living cells and their color is held in a binary relation Alive (see Fig. 14.1). Suppose first that a cell can change status from dead to alive following this rule:

(*) A dead cell becomes alive if it has at least two neighbors that are alive and have the same color. It then takes the color of the parents.

The evolution of a particular population for the Neighbor graph of Fig. 14.1(a) is given in Fig. 14.1(b). Observe that the sets of tuples keep increasing and that we reach a stable state. This is an example of inflationary iteration.
Now suppose that the evolution also obeys the second rule:

(**) A live cell dies if it has more than three live neighbors.

The evolution of the population with the two rules is given in Fig. 14.1(c). Observe that the number of tuples sometimes decreases and that the computation diverges. This is an example of noninflationary iteration.
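The two iteration regimes can be simulated directly. The Python sketch below encodes the example's Neighbor and Alive relations (the encoding is ours, not the book's): the inflationary step only adds tuples and reaches a stable state, while the noninflationary step may also delete tuples and, here, oscillates forever.

```python
from collections import defaultdict

# Symmetric neighbor relation of Fig. 14.1(a): e is adjacent to a, b, c, d.
neighbors = defaultdict(set)
for u, v in [('a', 'e'), ('b', 'e'), ('c', 'e'), ('d', 'e')]:
    neighbors[u].add(v)
    neighbors[v].add(u)

alive0 = {('a', 'blue'), ('b', 'red'), ('c', 'blue'), ('d', 'red')}

def births(alive):
    """Rule (*): a dead cell with at least two live neighbors of the
    same color becomes alive with that color."""
    live = {c for (c, _) in alive}
    out = set()
    for cell in neighbors:
        if cell in live:
            continue
        for color in ('blue', 'red'):
            if sum((nb, color) in alive for nb in neighbors[cell]) >= 2:
                out.add((cell, color))
    return out

def inflationary_step(alive):
    return alive | births(alive)

def noninflationary_step(alive):
    """Rules (*) and (**): additionally, a live cell with more than
    three live neighbors dies."""
    live = {c for (c, _) in alive}
    dying = {c for c in live if len(neighbors[c] & live) > 3}
    return {(c, col) for (c, col) in alive | births(alive) if c not in dying}

s = inflationary_step(alive0)
assert inflationary_step(s) == s          # stable state, as in Fig. 14.1(b)

t1 = noninflationary_step(alive0)
t2 = noninflationary_step(t1)
assert t1 != alive0 and t2 == alive0      # oscillation, as in Fig. 14.1(c)
```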
All languages that we consider use a fixed set of relation schemas throughout the computation. At any point in the computation, intermediate results contain only constants from the input database or that are specified in the query. Suppose the relations used in the computation have arities r_1, . . . , r_k, the input database contains n constants, and the query refers to c constants. Then the number of tuples in any intermediate result is bounded by Σ_{i=1}^{k} (n + c)^{r_i}, which is a polynomial in n. Thus such queries can be evaluated in polynomial space. As will be seen when the formal definitions are in place, this implies that each noninflationary iteration, and hence each noninflationary query, can be evaluated in polynomial space, whether or not it terminates. In contrast, the inflationary semantics ensures termination by requiring that a tuple can never be deleted once it has been inserted. Because there are only polynomially many tuples, each such program terminates in polynomial time.
To summarize, the inflationary languages use iteration based on an inflation of tuples. In all three paradigms, inflationary queries can be evaluated in polynomial time, and the same expressive power is obtained. The noninflationary languages use noninflationary or destructive assignment inside of iterations. In all three paradigms, noninflationary queries can be evaluated in polynomial space, and again the same expressive power is
344 Recursion and Negation
Neighbor
a   e
b   e
c   e
d   e

(a) Neighbor

Alive       Alive       Alive
a   blue    a   blue    a   blue
b   red     b   red     b   red
c   blue    c   blue    c   blue    . . .
d   red     d   red     d   red
            e   blue    e   blue
            e   red     e   red

(b) Inflationary evolution

Alive      Alive      Alive      Alive      Alive
a   blue   a   blue   a   blue   a   blue   a   blue
b   red    b   red    b   red    b   red    b   red    . . .
c   blue   c   blue   c   blue   c   blue   c   blue
d   red    d   red    d   red    d   red    d   red
           e   blue              e   blue
           e   red               e   red

(c) Noninflationary evolution

Figure 14.1: Game of life
obtained. (We note, however, that it remains open whether the inflationary and the noninflationary languages have equivalent expressive power; we discuss this issue later.)
14.1 Algebra + While
Relational algebra is essentially a procedural language. Of the query languages, it is the
closest to traditional imperative programming languages. Chapters 4 and 5 described how it
can be extended syntactically using assignment (:=) and composition (;) without increasing
its expressive power. The extensions of the algebra with recursion are also consistent with
the imperative paradigm and incorporate a while construct, which calls for the iteration
of a program segment. The resulting language comes in two flavors: inflationary and noninflationary. The two versions of the language differ in the semantics of the assignment statement. The noninflationary version was the one first defined historically, and we discuss it next. The resulting language is called the while language.
Noninflationary Semantics
Recall from Chapter 4 that assignment statements can be incorporated into the algebra
using expressions of the form R :=E, where E is an algebra expression and R a relational
variable of the same sort as the result of E. (The difference from Chapter 4 is that it is no
longer required that each successive assignment statement use a distinct, previously unused
variable.) In the while language, the semantics of an assignment statement is as follows:
The value of R becomes the result of evaluating the algebra expression E on the current
state of the database. This is the usual destructive assignment in imperative programming
languages, where the old value of a variable is overwritten.
While statements have the form
while change do
begin
loop body
end
There is no explicit termination condition. Instead a loop runs as long as the execution
of the body causes some change to some relation (i.e., until a stable state is reached). At
the end of this section, we consider the introduction of explicit terminating conditions and
see that this does not affect the language in an essential manner.
Nesting of loops is permitted. A while program is a finite sequence of assignment or while statements. The program uses a finite set of relational variables of specified sorts,
including the names of relations in the input database. Relational variables that are not in
the input database are initialized to the empty relation. A designated relational variable
holds the output to the program at the end of the computation. The image (or value) of
program P on I, denoted P(I), is the value finally assigned to the designated variable if P
terminates on I; otherwise P(I) is undened.
Example 14.1.1 (Transitive Closure) Consider a binary relation G[AB], specifying the edges of a graph. The following while program computes in T[AB] the transitive closure of G.

T := G;
while change do
begin
    T := T ∪ π_AB(δ_{B→C}(T) ⋈ δ_{A→C}(G));
end
A computation ends when T becomes stable, which means that no new edges were
added in the current iteration, so T now holds the transitive closure of G.
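For concreteness, the iteration performed by this while program can be simulated directly. The following Python sketch is ours, not part of the text: a relation is represented as a set of pairs, the assignment is destructive, and the loop runs until a stable state is reached, exactly as in the "while change" construct.

```python
# Destructive-assignment simulation of the while program for transitive closure.

def transitive_closure(G):
    """Iterate T := T ∪ { (x, y) | T(x, z) and G(z, y) } until T is stable."""
    T = set(G)                      # T := G
    changed = True
    while changed:                  # "while change do"
        new_T = T | {(x, y) for (x, z) in T for (w, y) in G if z == w}
        changed = (new_T != T)      # a stable state ends the loop
        T = new_T                   # destructive assignment
    return T

edges = {("a", "b"), ("b", "c"), ("c", "d")}
print(sorted(transitive_closure(edges)))   # the six pairs of the chain's closure
```

The loop body corresponds to the algebra assignment: the set comprehension plays the role of the projection of the join.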
Example 14.1.2 (Add-Remove) Consider again a binary relation G specifying the
edges of a graph. Each loop of the following program
removes from G all edges ⟨a, b⟩ if there is a path of length 2 from a to b, and
inserts an edge ⟨a, b⟩ if there is a vertex not directly connected to a and b.
This is iterated while some change occurs. The result is placed into the binary relation T .
In addition, the binary relation variables ToAdd and ToRemove are used as scratch paper.
For the sake of readability, we use the calculus with active domain semantics whenever this
is easier to understand than the corresponding algebra expression.
T :=G;
while change do
begin
ToRemove := {⟨x, y⟩ | ∃z(T(x, z) ∧ T(z, y))};
ToAdd := {⟨x, y⟩ | ∃z(¬T(x, z) ∧ ¬T(z, x) ∧ ¬T(y, z) ∧ ¬T(z, y))};
T := (T ∪ ToAdd) − ToRemove;
end
In the Transitive Closure example, the transitive closure query always terminates. This
is not the case for the Add-Remove query. (Try the graph {⟨a, a⟩, ⟨a, b⟩, ⟨b, a⟩, ⟨b, b⟩}.) The
halting problem for while programs is undecidable (i.e., there is no algorithm that, given
a while program P, decides whether P halts on each input; see Exercise 14.2). Observe,
however, that for a pair (P, I), one can decide whether P halts on input I because, as argued
earlier, while computations are in pspace.
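The divergence on the suggested graph can be observed by running the loop with a step cap. The following Python sketch is ours, not from the text; the cap of 20 steps is an arbitrary illustrative bound, and the active domain is taken from the input edges.

```python
# The Add-Remove loop, with a step cap so divergence can be observed safely.

def add_remove(G, max_steps=20):
    T = set(G)
    nodes = {n for e in G for n in e}          # active domain
    for step in range(max_steps):
        to_remove = {(x, y) for x in nodes for y in nodes
                     if any((x, z) in T and (z, y) in T for z in nodes)}
        to_add = {(x, y) for x in nodes for y in nodes
                  if any((x, z) not in T and (z, x) not in T and
                         (y, z) not in T and (z, y) not in T for z in nodes)}
        new_T = (T | to_add) - to_remove
        if new_T == T:
            return T, step                     # reached a stable state
        T = new_T
    return None, max_steps                     # no fixpoint within the cap

result, steps = add_remove({("a", "a"), ("a", "b"), ("b", "a"), ("b", "b")})
print(result)   # None: T flip-flops between the full graph and the empty graph
```

On the suggested graph every edge has a 2-path (so all are removed) while no vertex is unconnected (so none are added), and the empty relation then re-creates all edges; the state alternates forever.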
Inflationary Semantics
We define next an inflationary version of the while language, denoted by while⁺. The while⁺ language differs from while in the semantics of the assignment statement. In particular, in while⁺, assignment is cumulative rather than destructive: Execution of the statement assigning E to R results in adding the result of E to the old value of R. Thus no tuple is removed from any relation throughout the execution of the program. To distinguish the cumulative semantics from the destructive one, we use the notation R += E for the cumulative semantics.
Example 14.1.3 (Transitive Closure Revisited) Following is a while⁺ program that computes the transitive closure of a graph represented by a binary relation G[AB]. The result is obtained in the variable T[AB].

T += G;
while change do
begin
    T += π_AB(δ_{B→C}(T) ⋈ δ_{A→C}(G));
end
This is almost exactly the same program as in the while language. The only difference is
that because assignment is cumulative, it is not necessary to add the content of T to the
result of the projection.
To conclude this section, we consider alternatives for the control condition of loops.
Until now, we based termination on reaching a stable state. It is also common to use explicit
terminating conditions, such as tests for emptiness of the form E = ∅, E ≠ ∅, or E = E′, where E, E′ are relational algebra expressions. The body of the loop is executed as long as the condition is satisfied. The following example shows how transitive closure is computed
using explicit looping conditions.
Example 14.1.4 We use another relation schema oldT also of sort AB.
T += G;
while (T − oldT) ≠ ∅ do
begin
    oldT += T;
    T += π_AB(δ_{B→C}(T) ⋈ δ_{A→C}(G));
end
In the program, oldT keeps track of the value of T resulting from the previous iteration
of the loop. The computation ends when oldT and T coincide, which means that no new
edges were added in the current iteration, so T now holds the transitive closure of G.
It is easily shown that the use of such termination conditions does not modify the expressive power of while, and the use of conditions such as E = E′ does not modify the expressive power of while⁺ (see Exercise 14.5).
In Section 14.4 we shall see that nesting of loops in while queries does not increase
expressive power.
14.2 Calculus + Fixpoint
Just as in the case of the algebra, we provide inflationary and noninflationary extensions of the calculus with recursion. This could be done using assignment statements and while loops, as for the algebra. Indeed, we used calculus notation in Example 14.1.2 (Add-Remove). Instead we use an equivalent but more logic-oriented construct to augment the calculus. The construct, called a fixpoint operator, allows the iteration of calculus formulas up to a fixpoint. In effect, this allows defining relations inductively using calculus formulas. As with while, the fixpoint operator comes in a noninflationary and an inflationary flavor.
For the remainder of this chapter, as a notational convenience, we use active domain semantics for calculus queries. In addition, we often use a formula φ(x_1, . . . , x_n) as an abbreviation for the query {⟨x_1, . . . , x_n⟩ | φ(x_1, . . . , x_n)}. These two simplifications do not affect the results developed.
Partial Fixpoints
The noninflationary version of the fixpoint operator is considered first. It is illustrated in
the following example.
Example 14.2.1 (Transitive Closure Revisited) Consider again the transitive closure of a graph G. The relations J_n holding pairs of nodes at distance at most n can be defined inductively using the single formula

φ(T) = G(x, y) ∨ T(x, y) ∨ ∃z(T(x, z) ∧ G(z, y))

as follows:

J_0 = ∅;
J_n = φ(J_{n-1}), n > 0.

Here φ(J_{n-1}) denotes the result of evaluating φ(T) when the value of T is J_{n-1}. Note that, for each input G, the sequence {J_n}_{n≥0} converges. That is, there exists some k for which J_k = J_j for every j > k (indeed, k is the diameter of the graph). Clearly, J_k holds the transitive closure of the graph. Thus the transitive closure of G can be defined as the limit of the foregoing sequence. Note that J_k = φ(J_k), so J_k is also a fixpoint of φ(T). The relation J_k thereby obtained is denoted by μ_T(φ(T)). Then the transitive closure of G is defined by

μ_T(G(x, y) ∨ T(x, y) ∨ ∃z(T(x, z) ∧ G(z, y))).

By definition, μ_T is an operator that produces a new relation (the fixpoint J_k) when applied to φ(T). Note that, although T is used in φ(T), T is not a database relation but rather a relation used to define inductively μ_T(φ(T)) from the database, starting with T = ∅. T is said to be bound to μ_T. Indeed, μ_T is somewhat similar to a quantifier over relations. Note that the scope of the free variables of φ(T) is restricted to φ(T) by the operator μ_T.
In the preceding example, the limit of the sequence {J_n}_{n≥0} happens to exist and is in fact the least fixpoint of φ. This is not always the case; the possibility of nontermination is illustrated next (and Exercise 14.4 considers cases in which a nonminimal fixpoint is reached).
Example 14.2.2 Consider

φ(T) = (x = 0 ∧ ¬T(0) ∧ ¬T(1)) ∨ (x = 0 ∧ T(1)) ∨ (x = 1 ∧ T(0)).

In this case the sequence {J_n}_{n≥0} is ∅, {0}, {1}, {0}, . . . (i.e., T flip-flops between zero and one). Thus the sequence does not converge, and μ_T(φ(T)) is not defined. Situations in which μ is undefined correspond to nonterminating computations in the while language. The following nonterminating while program corresponds to μ_T(φ(T)).
T := {0};
while change do
begin
    T := {0, 1} − T;
end
Because μ is only partially defined, it is called the partial fixpoint operator. We now define its syntax and semantics in more detail.

Partial Fixpoint Operator Let R be a database schema, and let T[m] be a relation schema not in R. Let S denote the schema R ∪ {T}. Let φ(T) be a formula using T and relations in R, with m free variables. Given an instance I over R, μ_T(φ(T)) denotes the relation that is the limit, if it exists, of the sequence {J_n}_{n≥0} defined by

J_0 = ∅;
J_n = φ(J_{n-1}), n > 0,

where φ(J_{n-1}) denotes the result of evaluating φ on the instance over S whose restriction to R is I and whose value for T is J_{n-1}.
The expression μ_T(φ(T)) denotes a new relation (if it is defined). In turn, it can be used in more complex formulas like any other relation. For example, μ_T(φ(T))(y, z) states that ⟨y, z⟩ is in μ_T(φ(T)). If μ_T(φ(T)) defines the transitive closure of G, the complement of the transitive closure is defined by

{⟨x, y⟩ | ¬μ_T(φ(T))(x, y)}.
The extension of the calculus with μ is called partial fixpoint logic, denoted CALC+μ.

Partial Fixpoint Logic CALC+μ formulas are obtained by repeated applications of CALC operators (∧, ∨, ¬, ∃, ∀) and the partial fixpoint operator, starting from atoms. In particular, μ_T(φ(T))(e_1, . . . , e_n), where T has arity n, φ(T) has n free variables, and the e_i are variables or constants, is a formula. Its free variables are the variables in the set {e_1, . . . , e_n} [thus the scope of variables occurring inside φ(T) consists of the subformula to which μ_T is applied]. Partial fixpoint operators can be nested. CALC+μ queries over a database schema R are expressions of the form

{⟨e_1, . . . , e_n⟩ | φ},

where φ is a CALC+μ formula whose free variables are those occurring in e_1, . . . , e_n. The formula may use relation names in addition to those in R; however, each occurrence P of such relation name must be bound to some partial fixpoint operator μ_P. The semantics of CALC+μ queries is defined as follows. First note that, given an instance I over R and a sentence φ in CALC+μ, there are three possibilities: φ is undefined on I; φ is defined on I
and is true; and φ is defined on I and is false. In particular, given an instance I over R, the answer to the query

q = {⟨e_1, . . . , e_n⟩ | φ}

is undefined if the application of some μ in a subformula is undefined. Otherwise the answer to q is the n-ary relation consisting of all valuations ν of e_1, . . . , e_n for which φ(ν(e_1), . . . , ν(e_n)) is defined and true. The queries expressible in partial fixpoint logic are called the partial fixpoint queries.
Example 14.2.3 (Add-Remove Revisited) Consider again the query in Example 14.1.2. To express the query in CALC+μ, a difficulty arises: The while program initializes T to G before the while loop, whereas CALC+μ lacks the capability to do this directly. To distinguish the initialization step from the subsequent ones, we use a ternary relation Q and two distinct constants: 0 and 1. To indicate that the first step has been performed, we insert in Q the tuple ⟨1, 1, 1⟩. The presence of ⟨1, 1, 1⟩ in Q inhibits the repetition of the first step. Subsequently, an edge ⟨x, y⟩ is encoded in Q as ⟨x, y, 0⟩. The while program in Example 14.1.2 is equivalent to the CALC+μ query

{⟨x, y⟩ | μ_Q(φ(Q))(x, y, 0)}

where

φ(Q) =
    [¬Q(1, 1, 1) ∧ [(G(x, y) ∧ z = 0) ∨ (x = 1 ∧ y = 1 ∧ z = 1)]]
  ∨
    [Q(1, 1, 1) ∧ [(x = 1 ∧ y = 1 ∧ z = 1)
      ∨ ((z = 0) ∧ Q(x, y, 0) ∧ ¬∃w(Q(x, w, 0) ∧ Q(w, y, 0)))
      ∨ ((z = 0) ∧ ∃w(¬Q(x, w, 0) ∧ ¬Q(w, x, 0)
            ∧ ¬Q(y, w, 0) ∧ ¬Q(w, y, 0)))]].

Clearly, this query is more awkward than its counterpart in while. The simulation highlights some peculiarities of computing with CALC+μ.
In Section 14.4 it is shown that the family of partial fixpoint queries is equivalent to the while queries. In the preceding definition of μ_T(φ(T)), the scope of all free variables in φ is defined by μ_T. For example, if T is binary in the following

∃y(P(y) ∧ μ_T(φ(T, x, y))(z, w)),

then φ(T, x, y) has free variables x, y. According to the definition, y is not free in μ_T(φ(T, x, y))(z, w) (the free variables are z, w). Hence the quantifier ∃y applies to the y in P(y) alone and has no relation to the y in μ_T(φ(T, x, y))(z, w). To avoid confusion, it is preferable to use distinct variable names in such cases. For instance, the preceding sentence can be rewritten as

∃y(P(y) ∧ μ_T(φ(T, x′, y′))(z, w)).

A variant of the fixpoint operator can be developed that permits free variables under the fixpoint operator, but this does not increase the expressive power (see Exercise 14.11).
Simultaneous Induction
Consider the following use of nested partial fixpoint operators, where G, P, and Q are binary:

μ_P(G(x, y) ∨ μ_Q(ψ(P, Q))(x, y)).

Here ψ(P, Q) involves both P and Q. This corresponds to a nested iteration. In each iteration i in the computation of {J_n}_{n≥0} over P, the fixpoint μ_Q(ψ(P, Q)) is recomputed for the successive values J_i of P.
In contrast, we now consider a generalization of the partial fixpoint that permits simultaneous iteration over two or more relations. For example, let R be a database schema and φ(P, Q) and ψ(P, Q) be calculus formulas using P and Q not in R, such that the arity of P (respectively Q) is the number of free variables in φ (ψ). On input I over R, one can define inductively the sequence {J_n}_{n≥0} of relations over {P, Q} as follows:

J_0(P) = ∅
J_0(Q) = ∅
J_n(P) = φ(J_{n-1}(P), J_{n-1}(Q))
J_n(Q) = ψ(J_{n-1}(P), J_{n-1}(Q)).
Such a mutually recursive definition of J_n(P) and J_n(Q) is referred to as simultaneous induction. If the sequence {⟨J_n(P), J_n(Q)⟩}_{n≥0} converges, the limit is a fixpoint of the mapping on pairs of relations defined by φ(P, Q) and ψ(P, Q). This pair of values for P and Q is denoted by μ_{P,Q}(φ(P, Q), ψ(P, Q)), and μ_{P,Q} is a simultaneous induction partial fixpoint operator. The value for P in μ_{P,Q} is denoted by μ_{P,Q}(φ(P, Q), ψ(P, Q))(P) and the value for Q by μ_{P,Q}(φ(P, Q), ψ(P, Q))(Q). Clearly, simultaneous induction definitions like the foregoing can be extended for any number of relations. Simultaneous induction can simplify certain queries, as shown next.
Example 14.2.4 (Add-Remove by Simultaneous Induction) Consider again the query Add-Remove in Example 14.2.3. One can simplify the query by introducing an auxiliary unary relation Off, which inhibits the transfer of G into T after the first step in a direct fashion. T and Off are defined in a mutually recursive fashion by φ_Off and φ_T, respectively:

φ_Off(x) = (x = 1)
φ_T(x, y) = [¬Off(1) ∧ G(x, y)]
    ∨ [Off(1) ∧ ¬∃z(T(x, z) ∧ T(z, y))
        ∧ (T(x, y) ∨ ∃z(¬T(x, z) ∧ ¬T(z, x) ∧ ¬T(y, z) ∧ ¬T(z, y)))].

The Add-Remove query can now be written as

{⟨x, y⟩ | μ_{Off,T}(φ_Off(Off, T), φ_T(Off, T))(T)(x, y)}.
It turns out that using simultaneous induction instead of regular fixpoint operators does not provide additional power. For example, a CALC+μ formula equivalent to the query in Example 14.2.4 is the one shown in Example 14.2.3. More generally, we have the following:
Lemma 14.2.5 For some n, let φ_i(R_1, . . . , R_n) be CALC formulas, i in [1..n], such that μ_{R_1,...,R_n}(φ_1(R_1, . . . , R_n), . . . , φ_n(R_1, . . . , R_n)) is a correct formula. Then for each i ∈ [1, n] there exist CALC formulas φ′_i(Q) and tuples e_i of variables or constants such that for each i,

μ_{R_1,...,R_n}(φ_1(R_1, . . . , R_n), . . . , φ_n(R_1, . . . , R_n))(R_i) ≡ μ_Q(φ′_i(Q))(e_i).
Crux We illustrate the construction with reference to the query of Example 14.2.4. Instead of using two relations Off and T, we use a ternary relation Q that encodes both Off and T. The extra coordinate is used to distinguish between tuples in T and tuples in Off. A tuple ⟨x⟩ in Off is encoded as a tuple ⟨x, 1, 1⟩ in Q. A tuple ⟨x, y⟩ in T is encoded as a tuple ⟨x, y, 0⟩ in Q. The final result is obtained by selecting from Q the tuples where the third coordinate is 0 and projecting the result on the first two coordinates.

Note that the use of the tuples e_i allows one to perform appropriate selections and projections on μ_Q(φ′_i(Q)) necessary for decoding. These selections and projections are essential and cannot be avoided (see Exercise 14.17c).
Inflationary Fixpoint
The nonconvergence in some cases of the sequence {J_n}_{n≥0} in the semantics of the partial fixpoint operator is similar to nonterminating computations in the while language with noninflationary semantics. The semantics of the partial fixpoint operator is essentially noninflationary because in the inductive definition of J_n, each step is a destructive assignment. As with while, we can make the semantics inflationary by having the assignment at each step of the induction be cumulative. This yields an inflationary version of μ, denoted by μ⁺ and called the inflationary fixpoint operator, which is defined for all formulas and databases to which it is applied.
Inflationary Fixpoint Operators and Logic The definition of μ⁺_T(φ(T)) is identical to that of the partial fixpoint operator except that the sequence {J_n}_{n≥0} is defined as follows:

J_0 = ∅;
J_n = J_{n-1} ∪ φ(J_{n-1}), n > 0.

This definition ensures that the sequence {J_n}_{n≥0} is increasing: J_{i-1} ⊆ J_i for each i > 0. Because for each instance there are finitely many tuples that can be added, the sequence converges in all cases.
Adding μ⁺ instead of μ to CALC yields inflationary fixpoint logic, denoted by CALC+μ⁺. Note that inflationary fixpoint queries are always defined.

The set of queries expressible by inflationary fixpoint logic is called the fixpoint queries. The fixpoint queries were historically defined first among the inflationary languages in the algebraic, logic, and deductive paradigms. Therefore the class of queries expressible in inflationary languages in the three paradigms has come to be referred to as the fixpoint queries.

As a simple example, the transitive closure of a graph G is defined by the following CALC+μ⁺ query:

{⟨x, y⟩ | μ⁺_T(G(x, y) ∨ ∃z(T(x, z) ∧ G(z, y)))(x, y)}.
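The cumulative step J_n = J_{n-1} ∪ φ(J_{n-1}) is easy to simulate. The following Python sketch is ours, not from the text; the iteration is guaranteed to terminate because the sequence is increasing over a finite domain.

```python
# The inflationary fixpoint operator: cumulative assignment at each step.

def inflationary_fixpoint(phi):
    J = set()
    while True:
        nxt = J | phi(J)         # J_n = J_{n-1} ∪ φ(J_{n-1}): never removes a tuple
        if nxt == J:
            return J
        J = nxt

G = {(1, 2), (2, 3), (3, 4)}
# φ(T) = G(x,y) ∨ ∃z(T(x,z) ∧ G(z,y)).  The T(x,y) disjunct of Example 14.2.1
# is unnecessary here, since μ⁺ keeps old tuples automatically.
phi = lambda T: G | {(x, y) for (x, z) in T for (w, y) in G if z == w}
print(sorted(inflationary_fixpoint(phi)))
```

Compare with the partial fixpoint: the only change is the union with the previous stage, which is exactly what rules out the flip-flopping behavior of Example 14.2.2.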
Recall that datalog as presented in Chapter 12 uses an inflationary operator and yields the minimal fixpoint of a set of rules. One may also be tempted to assume that an inflationary simultaneous induction of the form μ⁺_{P,Q}(φ(P, Q), ψ(P, Q)) is equivalent to a system of equational definitions of the form

P = φ(P, Q)
Q = ψ(P, Q)

and that it computes the unique minimal fixpoint for P and Q. However, one should be careful because the result of the inflationary fixpoint computation is only one of the possible fixpoints. As illustrated in the following example, this may not be minimal or the naturally expected fixpoint. (There may not exist a unique minimal fixpoint; see Exercise 14.4.)
Example 14.2.6 Consider the equations

T(x, y) = G(x, y) ∨ T(x, y) ∨ ∃z(T(x, z) ∧ G(z, y))
CT(x, y) = ¬T(x, y).

One is tempted to believe that the fixpoint of these two equations yields the complement of transitive closure. However, with the inflationary semantics

J_0(T) = ∅
J_0(CT) = ∅
J_n(T) = J_{n-1}(T) ∪ {⟨x, y⟩ | G(x, y) ∨ J_{n-1}(T)(x, y)
            ∨ ∃z(J_{n-1}(T)(x, z) ∧ G(z, y))}
J_n(CT) = J_{n-1}(CT) ∪ {⟨x, y⟩ | ¬J_{n-1}(T)(x, y)}

leads to saturating CT at the first iteration.
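The saturation of CT can be checked by running the inflationary iteration on a small graph. In this Python sketch (ours, not from the text; the graph and active domain are illustrative), CT already contains every pair of the domain after the first stage, because T is still empty when ¬T(x, y) is evaluated, and inflationary semantics never retracts those tuples.

```python
# Simultaneous inflationary iteration of T (transitive closure) and CT (¬T).

G = {(1, 2), (2, 3)}
dom = {1, 2, 3}
T, CT = set(), set()
for stage in range(1, 4):
    new_T = T | G | {(x, y) for (x, z) in T for (w, y) in G if z == w}
    new_CT = CT | {(x, y) for x in dom for y in dom if (x, y) not in T}
    T, CT = new_T, new_CT
    print(stage, len(T), len(CT))   # CT holds all 9 pairs from stage 1 on
```

At stage 1 the rule for CT sees T = ∅, so all 9 pairs enter CT; in later stages T grows to the transitive closure, but the spurious CT tuples remain.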
Positive and Monotone Formulas
Making the fixpoint operator inflationary by definition is not the only way to guarantee polynomial-time termination of the fixpoint iteration. An alternative approach is to restrict the formulas φ(T) so that convergence of the sequence {J_n}_{n≥0} associated with μ_T(φ(T)) is guaranteed. One such restriction is monotonicity. Recall that a query q is monotone if for each I, J, I ⊆ J implies q(I) ⊆ q(J). One can again show that for such formulas, a least fixpoint always exists and that it is obtained after a finite (but unbounded) number of stages of inductive applications of the formula.

Unfortunately, monotonicity is an undecidable property for CALC. One can also restrict the application of μ to positive formulas. This was historically the first track that was followed and presents the advantage that positiveness is a decidable (syntactic) property. It is done by requiring that T occur only positively in φ(T) (i.e., under an even number of negations in the syntax tree of the formula). All formulas thereby obtained are monotone, and so μ_T(φ(T)) is always defined (see Exercise 14.10).

It can be shown that the approach of inflationary fixpoint and the two approaches based on fixpoint of positive or monotone formulas are equivalent (i.e., the sets of queries expressed are identical; see Exercise 14.10).
Fixpoint Operators and Circumscription
In some sense, the fixpoint operators act as quantifiers on relational variables. This is somewhat similar to the well-known technique of circumscription studied in artificial intelligence. Suppose φ(T) is a calculus sentence (i.e., no free variables) that uses T in addition to relations from a database schema R. The circumscription of φ(T) with respect to T, denoted here by circ_T(φ(T)), can be thought of as an operator defining a new relation, starting from the database. More precisely, let I be an instance over R. Then circ_T(φ(T)) denotes the relation containing all tuples belonging to every relation T such that (1) φ(T) holds for I, and (2) T is minimal under set inclusion² with this property. Consider now a fixpoint query. As stated earlier, fixpoint queries can be expressed using just fixpoint operators μ_T applied to formulas φ positive in T (i.e., T always appears in φ under an even number of negations). We claim that μ_T(φ(T)) = circ_T(φ̂(T)), where φ̂(T) is a sentence

² Other kinds of minimality have also been considered.
obtained from φ(T) as follows:

φ̂(T) = ∀x_1, . . . , x_n(φ(T, x_1, . . . , x_n) → T(x_1, . . . , x_n)),

where the arity of T is n. To see this, it is sufficient to note that μ_T(φ(T)) is the unique minimal T satisfying φ̂(T). This uses the monotonicity of φ(T) with respect to T, which follows from the fact that φ(T) is positive in T (see Exercise 14.10). Although computing with circumscription is generally intractable, the fixpoint operator on positive formulas can always be evaluated in polynomial time. Thus the fixpoint operator can be viewed as a tractable restriction of circumscription.
14.3 Datalog with Negation
Datalog provides recursion but no negation. It defines only monotonic queries. Viewed
from the standpoint of the deductive paradigm, datalog provides a form of monotonic
reasoning. Adding negation to datalog rules permits the specication of nonmonotonic
queries and hence of nonmonotonic reasoning.
Adding negation to datalog rules requires defining semantics for negative facts. This can be done in many ways. The different definitions depend to some extent on whether datalog is viewed in the deductive framework or simply as a specification formalism like any other query language. In this chapter, we examine the latter point of view. Then datalog with negation can essentially be viewed as a subset of the while or fixpoint queries and can be treated similarly. This is not necessarily appropriate in the deductive framework. For instance, the basic assumptions in the reasoning process may require that once a fact is assumed false at some point in the inferencing process, it should not be proven true at a later point. This idea lies at the core of stratified and well-founded semantics, two of the most widely accepted in the deductive framework. The deductive point of view is considered in depth in Chapter 15.
The semantics given here for datalog with negation follows the semantics given in Chapter 12 for datalog, but does not correspond directly to the semantics for nonrecursive datalog¬ given in Chapter 5. The semantics in Chapter 5 is inspired by the stratified semantics but can be simulated by (either of) the semantics presented in this chapter.

As in the previous section, we consider both inflationary and noninflationary versions of datalog with negation.
Inflationary Semantics
The inflationary language allows negations in bodies of rules and is denoted by datalog¬. Like datalog, its rules are used to infer a set of facts. Once a fact is inferred, it is never removed from the set of true facts. This yields the inflationary character of the language.
Example 14.3.1 We present a datalog¬ program with input a graph in binary relation G. The program computes the relation closer(x, y, x′, y′) defined as follows: closer(x, y, x′, y′) means that the distance d(x, y) from x to y in G is smaller than the distance d(x′, y′) from x′ to y′ [d(x, y) is infinite if there is no path from x to y].

T(x, y) ← G(x, y)
T(x, y) ← T(x, z), G(z, y)
closer(x, y, x′, y′) ← T(x, y), ¬T(x′, y′)
The program is evaluated as follows. The rules are fired simultaneously with all applicable valuations. At each such firing, some facts are inferred. This is repeated until no new facts can be inferred. A negative fact such as ¬T(x′, y′) is true if T(x′, y′) has not been inferred so far. This does not preclude T(x′, y′) from being inferred at a later firing of the rules. One firing of the rules is called a stage in the evaluation of the program. In the preceding program, the transitive closure of G is computed in T. Consider the consecutive stages in the evaluation of the program. Note that if the fact T(x, y) is inferred at stage n, then d(x, y) = n. So if T(x′, y′) has not been inferred yet, this means that the distance between x and y is less than that between x′ and y′. Thus if T(x, y) and ¬T(x′, y′) hold at some stage n, then d(x, y) ≤ n and d(x′, y′) > n, and closer(x, y, x′, y′) is inferred.
The formal syntax and semantics of datalog¬ are straightforward extensions of those for datalog. A datalog¬ rule is an expression of the form

A ← L_1, . . . , L_n,

where A is an atom and each L_i is either an atom B_i (in which case it is called positive) or a negated atom ¬B_i (in which case it is called negative). (In this chapter we use an active domain semantics for evaluating datalog¬ and so do not require that the rules be range restricted; see Exercise 14.13.)

A datalog¬ program is a nonempty finite set of datalog¬ rules. As for datalog programs, sch(P) denotes the database schema consisting of all relations involved in the program P; the relations occurring in heads of rules are the idb relations of P, and the others are the edb relations of P.
The semantics of datalog¬ that we present in this chapter is an extension of the fixpoint semantics of datalog. Let K be an instance over sch(P). Recall that an (active domain) instantiation of a rule A ← L_1, . . . , L_n is a rule ν(A) ← ν(L_1), . . . , ν(L_n), where ν is a valuation that maps each variable into adom(P, K). A fact A′ is an immediate consequence for K and P if A′ ∈ K(R) for some edb relation R, or A′ ← L′_1, . . . , L′_n is an instantiation of a rule in P, each positive L′_i is a fact in K, and for each negative L′_i = ¬A′_i, A′_i ∉ K. The immediate consequence operator of P, denoted Γ_P, is now defined as follows. For each K over sch(P),

Γ_P(K) = K ∪ {A | A is an immediate consequence for K and P}.

Given an instance I over edb(P), one can compute Γ_P(I), Γ_P^2(I), Γ_P^3(I), etc. As suggested in Example 14.3.1, each application of Γ_P is called a stage in the evaluation. From the
definition of Γ_P, it follows that

Γ_P(I) ⊆ Γ_P^2(I) ⊆ Γ_P^3(I) ⊆ . . . .

As for datalog, the sequence reaches a fixpoint, denoted Γ_P^∞(I), after a finite number of steps. The restriction of this to the idb relations (or some subset thereof) is called the image (or answer) of P on I.
An important difference with datalog is that Γ_P^∞(I) is no longer guaranteed to be a minimal model of P containing I, as illustrated next.

Example 14.3.2 Let P be the program

R(0) ← Q(0), ¬R(1)
R(1) ← Q(0), ¬R(0).

Let I = {Q(0)}. Then Γ_P^∞(I) = {Q(0), R(0), R(1)}. Although Γ_P^∞(I) is a model of P, it is not minimal. The minimal models containing I are {Q(0), R(0)} and {Q(0), R(1)}.
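The stages of Γ_P on this program can be traced directly. The following Python sketch is ours, not part of the text; facts are encoded as tuples, and the loop computes Γ_P^∞(I), exhibiting the non-minimal result: at the first stage neither R(0) nor R(1) has been inferred yet, so both rule bodies are satisfied and both heads are inferred at once.

```python
# Inflationary datalog¬ evaluation of the program of Example 14.3.2.

def gamma(K):
    """One application of the immediate consequence operator Γ_P."""
    new = set(K)
    if ("Q", 0) in K and ("R", 1) not in K:   # R(0) ← Q(0), ¬R(1)
        new.add(("R", 0))
    if ("Q", 0) in K and ("R", 0) not in K:   # R(1) ← Q(0), ¬R(0)
        new.add(("R", 1))
    return new

K = {("Q", 0)}                                 # input I
while True:
    nxt = gamma(K)
    if nxt == K:
        break                                  # fixpoint Γ_P^∞(I) reached
    K = nxt
print(sorted(K))   # [('Q', 0), ('R', 0), ('R', 1)] — not a minimal model
```

Nothing is ever retracted, so the simultaneous firing at stage 1 is frozen into the result, even though each minimal model contains only one of R(0), R(1).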
As discussed in Chapter 12, the operational semantics of datalog based on the immediate consequence operator is equivalent to the natural semantics based on minimal models. As shown in the preceding example, there may not be a unique minimal model for a datalog¬ program, and the semantics given for datalog¬ may not yield any of the minimal models. The development of a natural model-theoretic semantics for datalog¬ thus calls for selecting a natural model from among several possible candidates. Inevitably, such choices are open to debate; Chapter 15 presents several alternatives.
Noninflationary Semantics
The language datalog¬ has inflationary semantics because the set of facts inferred through the consecutive firings of the rules is increasing. To obtain a noninflationary variant, there are several possibilities. One could keep the syntax of datalog¬ but make the semantics noninflationary by retaining, at each stage, only the newly inferred facts (see Exercise 14.16). Another possibility is to allow explicit retraction of a previously inferred fact. Syntactically, this can be done using negations in heads of rules, interpreted as deletions of facts. We adopt this solution here, in part because it brings our language closer to some practical languages that use so-called (production) rules in the sense of expert and active database systems. The resulting language is denoted by datalog¬¬, to indicate that negations are allowed in both heads and bodies of rules.
Example 14.3.3 (Add-Remove Visited Again) The following datalog¬¬ program
computes in T the Add-Remove query of Example 14.1.2, given as input a graph G.
358 Recursion and Negation
T(x, y) ← G(x, y), ¬off(1)
off(1) ←
T(x, y) ← T(x, z), T(z, y), off(1)
¬T(x, y) ← T(x, z), T(z, x), T(y, z), T(z, y), off(1)

Relation off is used to inhibit the first rule (initializing T to G) after the first step.
The immediate consequence operator T_P and semantics of a datalog¬¬ program are
analogous to those for datalog¬, with the following important proviso. If a negative literal
¬A is inferred, the fact A is removed, unless A is also inferred in the same firing of
the rules. This gives priority to inference of positive over negative facts and is somewhat
arbitrary. Other possibilities are as follows: (1) Give priority to negative facts; (2) interpret
the simultaneous inference of A and ¬A as a no-op (i.e., include A in the new instance
only if it is there in the old one); and (3) interpret the simultaneous inference of A and
¬A as a contradiction that makes the result undefined. The chosen semantics has the
advantage over possibility (3) that the semantics is always defined. In any case, the choice
of semantics is not crucial: They yield equivalent languages (see Exercise 14.15).
With the semantics chosen previously, termination is no longer guaranteed. For instance,
the program

T(0) ← T(1)
¬T(1) ← T(1)
T(1) ← T(0)
¬T(0) ← T(0)

never terminates on input T(0). The value of T flip-flops between {0} and {1}, so no
fixpoint is reached.
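A small Python sketch (illustrative only) makes the flip-flop visible; the step function performs one noninflationary firing with the priority-to-positive tie rule adopted above:

```python
# One noninflationary firing of the flip-flop program: positive and
# negative conclusions are collected, then deletions are applied,
# with positive inferences winning ties.
def step(T):
    pos, neg = set(), set()
    if 1 in T:
        pos.add(0)      # T(0) <- T(1)
        neg.add(1)      # not T(1) <- T(1)
    if 0 in T:
        pos.add(1)      # T(1) <- T(0)
        neg.add(0)      # not T(0) <- T(0)
    return (T | pos) - (neg - pos)

T = {0}
history = [frozenset(T)]
for _ in range(6):
    T = step(T)
    history.append(frozenset(T))
# history oscillates {0}, {1}, {0}, {1}, ... : no fixpoint is reached
```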
Datalog¬ and Datalog¬¬ as Fragments of CALC+μ+ and CALC+μ

Consider datalog¬. It can be viewed as a subset of CALC+μ+ in the following manner.
Suppose that P is a datalog¬ program. The idb relations defined by rules can alternately be
defined by simultaneous induction using formulas that correspond to the rules. Each firing
of the rules corresponds to one step in the simultaneous inductive definition. For instance,
the simultaneous induction definition corresponding to the program in Example 14.3.3 is
the one in Example 14.2.4. Because simultaneous induction can be simulated in CALC+μ+
(see Lemma 14.2.5), datalog¬ can be simulated in CALC+μ+. Moreover, notice that only a
single application of the fixpoint operator is used in the simulation. Similar remarks apply
to datalog¬¬ and CALC+μ. Furthermore, in the inflationary case it is easy to see that the
formula can be chosen to be existential (i.e., its prenex normal form³ uses only existential
quantifiers). The same can be shown in the noninflationary case, although the proof is more
subtle. In summary (see Exercise 14.18), the following applies:

³A CALC formula in prenex normal form is a formula Q_1x_1 . . . Q_kx_k ψ, where the Q_i, 1 ≤ i ≤ k, are
quantifiers and ψ is quantifier free.
Lemma 14.3.4 Each datalog¬ (datalog¬¬) query is equivalent to a CALC+μ+ (CALC+μ)
query of the form

{ x | μ(+)_T(φ(T))(t) },

where
(a) φ is an existential CALC formula, and
(b) t is a tuple of variables or constants of appropriate arity and x is the tuple of
distinct free variables in t.
The Rule Algebra

The examples of datalog¬¬ programs shown in this chapter make it clear that the semantics
of such programs is not always easy to understand. There is a simple mechanism that
facilitates the specification by the user of various customized semantics. This is done by
means of the rule algebra, which allows specification of an order of firing of the rules,
as well as firing up to a fixpoint in an inflationary or noninflationary manner. For the
inflationary version RA+ of the rule algebra, the base expressions are individual datalog¬
rules; the semantics associated with a rule is to apply its immediate consequence operator
once in a cumulative fashion. Union (∪) can be used to specify simultaneous application of
a pair of rules or more complex programs. The expression P; Q specifies the composition
of P and Q; its semantics is to execute P once and then Q once. Inflationary iteration of
program P is called for by (P)+. The noninflationary version of the rule algebra, denoted
RA, starts with datalog¬ rules, but now with a noninflationary, destructive semantics, as
defined in Exercise 14.16. Union and composition are generalized in the natural fashion,
and the noninflationary iterator, denoted ∗, is used.
Example 14.3.5 Let P be the set of rules

T(x, y) ← G(x, y)
T(x, y) ← T(x, z), G(z, y)

and let Q consist of the rule

CT(x, y) ← ¬T(x, y).

The RA+ program (P)+; Q computes in CT the complement of the transitive closure of G.
It follows easily from the results of Section 14.4 that RA+ is equivalent to datalog¬,
and RA is equivalent to noninflationary datalog¬ and hence to datalog¬¬ (Exercise 14.23).
Thus an RA+ program can be compiled into a (possibly much more complicated) datalog¬
program. For instance, the RA+ program in Example 14.3.5 is equivalent to the datalog¬
program in Example 14.4.2. The advantage of the rule algebra is the ease of expressing
various semantics. In particular, RA+ can be used easily to specify the stratified and well-
founded semantics for datalog¬ introduced in Chapter 15.


14.4 Equivalence

The previous sections introduced inflationary and noninflationary recursive languages with
negation in the algebraic, logic, and deductive paradigms. This section shows that the infla-
tionary languages in the three paradigms, while+, CALC+μ+, and datalog¬, are equivalent
and that the same holds for the noninflationary languages while, CALC+μ, and datalog¬¬.
This yields two classes of queries that are central in the theory of query languages: the fix-
point queries (expressed by the inflationary languages) and the while queries (expressed by
the noninflationary languages). This is summarized in Fig. 14.2, at the end of the chapter.
We begin with the equivalence of the inflationary languages because it is the more
difficult to show. The equivalence of CALC+μ+ and while+ is easy because the languages
have similar capabilities: Program composition in while+ corresponds closely to formula
composition in CALC+μ+, and the while change loop of while+ is close to the inflationary
fixpoint operator of CALC+μ+. More difficult and surprising is the equivalence of these
languages with datalog¬, because this much simpler language has no explicit constructs
for program composition or nested recursion.

Lemma 14.4.1 CALC+μ+ and while+ are equivalent.
Proof We consider first the simulation of CALC+μ+ queries by while+. Let {x_1, . . . , x_m |
φ(x_1, . . . , x_m)} be a CALC+μ+ query over an input database with schema R. It suffices to
show that there exists a while+ program P_φ that defines the same result as φ(x_1, . . . , x_m) in
some m-ary relation R_φ. The proof is by induction on the depth of nesting of the fixpoint
operator in φ, denoted d(φ). If d(φ) = 0 (i.e., φ does not contain a fixpoint operator), then
φ is in CALC and P_φ is

R_φ += E_φ,

where E_φ is the relational algebra expression corresponding to φ. Now suppose the state-
ment is true for formulas with depth of nesting of the fixpoint operator less than d (d > 0).
Let φ be a formula with d(φ) = d.
If φ = μ+_Q(ψ(Q))(f_1, . . . , f_k), then P_φ is

Q += ∅;
while change do
begin
P_ψ;
Q += R_ψ
end;
R_φ += σπ(Q),

where σπ(Q) denotes the selection and projection corresponding to f_1, . . . , f_k.
Suppose now that φ is obtained by first-order operations from k formulas φ_1, . . . , φ_k,
each having μ+ as root. Let E_φ(R_φ1, . . . , R_φk) be the relational algebra expression corre-
sponding to φ, where each subformula φ_i = μ+_Q(ψ_i(Q))(e_i1, . . . , e_in_i) is replaced by R_φi. For
each i, let P_φi be a program that produces the value of μ+_Q(ψ_i(Q))(e_i1, . . . , e_in_i) and places
it into R_φi. Then P_φ is

P_φ1; . . . ; P_φk;
R_φ += E_φ(R_φ1, . . . , R_φk).

This completes the induction and the proof that CALC+μ+ can be simulated by while+.
The converse simulation is similar (Exercise 14.20).
We now turn to the equivalence of CALC+μ+ and datalog¬. Lemma 14.3.4 yields the
subsumption of datalog¬ by CALC+μ+. For the other direction, we simulate CALC+μ+
queries using datalog¬. This simulation presents two main difficulties.
The first involves delaying the firing of a rule until after the completion of a fixpoint
by another set of rules. Intuitively, this is hard because checking that the fixpoint has been
reached involves checking the nonexistence rather than the existence of some valuation,
and datalog¬ is more naturally geared toward checking the existence of valuations. The
solution to this difficulty is illustrated in the following example.
Example 14.4.2 The following datalog¬ program computes the complement of the tran-
sitive closure of a graph G. The example illustrates the technique used to delay the firing
of a rule (computing the complement) until the fixpoint of a set of rules (computing the
transitive closure) has been reached (i.e., until the application of the transitivity rule yields
no new tuples). To monitor this, the relations old-T, old-T-except-final are used. old-T
follows the computation of T but is one step behind it. The relation old-T-except-final
is identical to old-T, but the rule defining it includes a clause that prevents it from firing
when T has reached its last iteration. Thus old-T and old-T-except-final differ only in the
iteration after the transitive closure T reaches its final value. In the subsequent iteration,
the program recognizes that the fixpoint has been reached and fires the rule computing the
complement in relation CT. The program is

T(x, y) ← G(x, y)
T(x, y) ← G(x, z), T(z, y)
old-T(x, y) ← T(x, y)
old-T-except-final(x, y) ← T(x, y), T(x′, z′), T(z′, y′), ¬T(x′, y′)
CT(x, y) ← ¬T(x, y), old-T(x′, y′), ¬old-T-except-final(x′, y′)

(It is assumed that G is not empty; see Exercise 14.3.)
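The inflationary evaluation of these five rules can be replayed in Python (an illustrative sketch, not from the original text; active-domain semantics is assumed for the variables of the CT rule, which occur only in negated literals):

```python
# Cumulative evaluation of the delayed-complement program of
# Example 14.4.2.  Negated literals are evaluated against the current
# instance; CT's variables range over the active domain of G.
def complement_tc(G):
    dom = {n for edge in G for n in edge}
    pairs = {(x, y) for x in dom for y in dom}
    T, old, old_ex, CT = set(), set(), set(), set()
    while True:
        nT = T | G | {(x, y) for (x, z) in G for (z2, y) in T if z == z2}
        n_old = old | T
        # old-T-except-final fires only while the transitivity rule
        # can still derive a missing tuple from the current T
        growing = any((x, z) in T and (z, y) in T and (x, y) not in T
                      for (x, y) in pairs for z in dom)
        n_oldex = old_ex | (T if growing else set())
        # CT fires once some tuple is in old-T but not in
        # old-T-except-final, i.e., one stage after the fixpoint
        nCT = CT | ({p for p in pairs if p not in T}
                    if old - old_ex else set())
        if (nT, n_old, n_oldex, nCT) == (T, old, old_ex, CT):
            return CT
        T, old, old_ex, CT = nT, n_old, n_oldex, nCT

CT = complement_tc({("a", "b"), ("b", "c"), ("c", "d")})
```

On the three-edge chain, the transitive closure has six tuples over four nodes, so the complement computed in CT has the remaining ten pairs, and it is populated only after T has converged.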
The second difficulty concerns keeping track of iterations in the computation of a
fixpoint. Given a formula μ+_T(φ(T)), the simulation of φ itself may involve numerous re-
lations other than T, whose behavior may be sabotaged by an overly zealous application
of iteration of the immediate consequence operator. To overcome this, we separate the in-
ternal computation of φ from the external iteration over T, as illustrated in the following
example.
Example 14.4.3 Let G be a binary relation schema. Consider the CALC+μ+ query
μ+_good(φ(good))(x), where

φ = ∀y (G(y, x) → good(y)).

Note that the query computes the set of nodes in G that are not reachable from a cycle
(in other words, the nodes such that the length of paths leading to them is bounded). One
application of φ(good) is achieved by the datalog¬ program P:

bad(x) ← G(y, x), ¬good(y)
delay ←
good(x) ← delay, ¬bad(x)

Simply iterating P does not yield the desired result. Intuitively, the relations delay and bad,
which are used as scratch paper in the computation of a single iteration of μ+, cannot be
reinitialized and so cannot be reused to perform the computation of subsequent iterations.
To surmount this problem, we essentially create a version of P for each iteration of
φ(good). The versions are distinguished by using timestamps. The nodes themselves
serve as timestamps. The timestamps marking iteration i are the values newly introduced
in relation good at iteration i − 1. Relations delay and delay-stamped are used to delay
the derivation of new tuples in good until bad and bad-stamped (respectively) have been
computed in the current iteration. The process continues until no new values are introduced
in an iteration. The full program is the union of the three rules given earlier, which perform
the first iteration, and the following rules, which perform the iteration with timestamp t:

bad-stamped(x, t) ← G(y, x), ¬good(y), good(t)
delay-stamped(t) ← good(t)
good(x) ← delay-stamped(t), ¬bad-stamped(x, t).
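Evaluated directly, the fixpoint μ+_good(φ(good)) is just the iteration J_i = J_{i-1} ∪ φ(J_{i-1}); the following Python sketch (illustrative, with an explicit node set passed in) computes it without any timestamping:

```python
# Direct evaluation of mu+_good(phi(good)) for
# phi = forall y (G(y,x) -> good(y)):
# a node enters good once all of its G-predecessors are already good.
def good_vertices(G, nodes):
    good = set()
    while True:
        new = {x for x in nodes
               if all(y in good for (y, x2) in G if x2 == x)}
        if new <= good:
            return good
        good |= new

# b lies on a cycle, and c is reachable from it, so only a is good
result = good_vertices({("a", "b"), ("b", "c"), ("c", "b")},
                       {"a", "b", "c"})
```

The point of the example is precisely that this outer iteration, trivial in a host language, must be encoded inside a single inflationary datalog¬ program, which is what forces the timestamping machinery.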
We now embark on the formal demonstration that datalog¬ can simulate CALC+μ+.
We first introduce some notation relating to the timestamping of a program in the sim-
ulation. Let m ≥ 1. For each relation schema Q, let Q̂ be a new relational schema with
arity(Q̂) = arity(Q) + m. If (¬)Q(e_1, . . . , e_n) is a literal and z an m-tuple of distinct vari-
ables, then (¬)Q(e_1, . . . , e_n)[z] denotes the literal (¬)Q̂(e_1, . . . , e_n, z_1, . . . , z_m). For each
program P and tuple z, P[z] denotes the program obtained from P by replacing each literal
A by A[z]. Let P be a program and B_1, . . . , B_q a list of literals. Then P // B_1, . . . , B_q is
the program obtained by appending B_1, . . . , B_q to the bodies of all rules in P.
To illustrate the previous notation, consider the program P consisting of the following
two rules:

S(x, y) ← R(x, y)
S(x, y) ← R(x, z), S(z, y).

Then P[z] // T(x, w, y) is

Ŝ(x, y, z) ← R̂(x, y, z), T(x, w, y)
Ŝ(x, y, z) ← R̂(x, z, z), Ŝ(z, y, z), T(x, w, y).
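Both operations are purely syntactic and easy to mechanize; the following Python sketch uses a hypothetical encoding of literals as (sign, name, args) triples (the encoding is invented for illustration):

```python
# Hypothetical encoding of rules: (head, body) pairs whose literals are
# (sign, name, args) triples.  stamp(P, z) appends the timestamp z to
# every literal (producing the "hatted" relations); guard(P, B) appends
# the extra literals B to every rule body, i.e., P // B.
def stamp(rules, z):
    def ts(literal):
        sign, name, args = literal
        return (sign, name, args + (z,))
    return [(ts(head), [ts(b) for b in body]) for head, body in rules]

def guard(rules, extra):
    return [(head, body + list(extra)) for head, body in rules]

P = [(("+", "S", ("x", "y")), [("+", "R", ("x", "y"))]),
     (("+", "S", ("x", "y")), [("+", "R", ("x", "z")),
                               ("+", "S", ("z", "y"))])]
PZ = guard(stamp(P, "z"), [("+", "T", ("x", "w", "y"))])
# PZ is exactly P[z] // T(x,w,y) from the text
```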
Lemma 14.4.4 CALC+μ+ and datalog¬ are equivalent.
Proof As seen in Lemma 14.3.4, datalog¬ is essentially a fragment of CALC+μ+, so
we just need to show the simulation of CALC+μ+ by datalog¬. The proof is by structural
induction on the CALC+μ+ formula. The core of the proof involves a control mechanism
that delays firing certain rules until other rules have been evaluated. Therefore the induction
hypothesis involves the capability to simulate the CALC+μ+ formula using a datalog¬
program as well as to produce concomitantly a predicate that only becomes true when the
simulation has been completed. More precisely, we will prove by induction the following:

For each CALC+μ+ formula φ over a database schema R, there exists a datalog¬ program
prog(φ) whose edb relations are the relations in R, whose idb relations include result_φ
with arity equal to the number of free variables in φ and a 0-ary relation done_φ such that
for every instance I over R,
(i) [prog(φ)(I)](result_φ) = φ(I), and
(ii) the 0-ary predicate done_φ becomes true at the last stage in the evaluation of
prog(φ) on I.

We will assume, without loss of generality, that no variable of φ occurs free and bound,
or bound to more than one quantifier, that φ contains no ∨ or ∀, and that the initial query
has the form {x_1, . . . , x_n | φ}, where x_1, . . . , x_n are distinct variables. Note that the last
assumption implies that (i) establishes the desired result.
Suppose now that φ is an atom R(e). Let x be the tuple of distinct variables occurring
in e. Then prog(φ) consists of the rules

done_φ ←
result_φ(x) ← R(e).
There are four cases to consider for the induction step.

1. φ = ψ ∧ ξ. Without loss of generality, we assume that the idb relations of
prog(ψ) and prog(ξ) are disjoint. Thus there is no interference between prog(ψ)
and prog(ξ). Let x and y be the tuples of distinct free variables of ψ and ξ, re-
spectively, and let z be the tuple of distinct free variables occurring in x or y.
Then prog(φ) consists of the following rules:

prog(ψ)
prog(ξ)
result_φ(z) ← done_ψ, done_ξ, result_ψ(x), result_ξ(y)
done_φ ← done_ψ, done_ξ.

2. φ = ∃x(ψ). Let y be the tuple of distinct free variables of ψ, and let z be the tuple
obtained from y by removing the variable x. Then prog(φ) consists of the rules

prog(ψ)
result_φ(z) ← done_ψ, result_ψ(y)
done_φ ← done_ψ.
3. φ = ¬(ψ). Let x be the tuple of distinct free variables occurring in ψ. Then
prog(φ) consists of

prog(ψ)
result_φ(x) ← done_ψ, ¬result_ψ(x)
done_φ ← done_ψ.
4. φ = μ+_S(ψ(S))(e). This case is the most involved, because it requires keeping
track of the iterations in the computation of the fixpoint as well as bookkeeping
to control the value of the special predicate done_φ. Intuitively, each iteration
is marked by timestamps. The current timestamps consist of the tuples newly
inserted in the previous iteration. The program prog(φ) uses the following new
auxiliary relations:

Relation fixpoint_φ contains μ+_S(ψ(S)) at the end of the computation, and
result_φ contains μ+_S(ψ(S))(e).
Relation run_φ contains the timestamps.
Relation used_φ contains the timestamps introduced in the previous stages
of the iteration. The active timestamps are in run_φ − used_φ.
Relation not-final_φ is used to detect the final iteration (i.e., the iteration that
adds no new tuples to fixpoint_φ). The presence of a timestamp in used_φ −
not-final_φ indicates that the final iteration has been completed.
Relations delay_φ and not-empty_φ are used for timing and to detect an empty
result.

In the following, y and t are tuples of distinct variables with the same arity as S. We
first have particular rules to perform the first iteration and to handle the special case of an
empty result:
prog(ψ)
fixpoint_φ(y) ← result_ψ(y), done_ψ
delay_φ ← done_ψ
not-empty_φ ← result_ψ(y)
done_φ ← delay_φ, ¬not-empty_φ.

The remainder of the program contains the following rules:

Stamping of the database and starting an iteration: For each R in R different from S
and a tuple x of distinct variables with same arity as R,

R̂(x, t) ← R(x), fixpoint_φ(t)
run_φ(t) ← fixpoint_φ(t)
Ŝ(y, t) ← fixpoint_φ(y), fixpoint_φ(t).

Timestamped iteration:

prog(ψ)[t] // run_φ(t), ¬used_φ(t)

Maintain fixpoint_φ, not-final_φ, and used_φ:

fixpoint_φ(y) ← done_ψ(t), result_ψ(y, t), ¬used_φ(t)
not-final_φ(t) ← done_ψ(t), result_ψ(y, t), ¬fixpoint_φ(y)
used_φ(t) ← done_ψ(t)

Produce the result and detect termination:

result_φ(z) ← fixpoint_φ(e)

where z is the tuple of distinct variables in e, and

done_φ ← used_φ(t), ¬not-final_φ(t).
It is easily verified by inspection that prog(φ) satisfies (i) and (ii) under the induction
hypothesis for cases (1) through (3). To see that (i) and (ii) hold in case (4), we carefully
consider the stages in the evaluation of prog(φ). Let I be an instance over the relations
in R other than S; let J_0 = ∅ be over S; and let J_i = J_{i-1} ∪ ψ(J_{i-1}) for each i > 0.
Then μ+_S(ψ(S))(I) = J_n for some n such that J_n = J_{n-1}. The program prog(φ) simulates the
consecutive iterations of this process. The first iteration is simulated using prog(ψ) directly,
whereas the subsequent iterations are simulated by prog(ψ) timestamped with the tuples
added at the previous iteration. (We omit consideration of the case in which the fixpoint
is ∅; this is taken care of by the rules involving delay_φ and not-empty_φ.)
We focus on the stages in the evaluation of prog(φ) corresponding to the end of the
simulation of each iteration of ψ. The stage in which the simulation of the first iteration
is completed immediately follows the stage in which done_ψ becomes true. The subsequent
iterations are completed immediately following the stages in which

∃t (done_ψ(t) ∧ ¬used_φ(t))

becomes true. Thus let k_1 be the stage in which done_ψ becomes true, and let k_i (2 ≤ i ≤ n)
be the successive stages in which

∃t (done_ψ(t) ∧ ¬used_φ(t))

is true. First note that

at stage k_1: {y | result_ψ(y)} = ψ(J_0);
at stage k_1 + 1: fixpoint_φ = J_1.
For i > 1 it can be shown by induction on i that

at stage k_i (i ≤ n):
{t | done_ψ(t) ∧ ¬used_φ(t)} = ψ(J_{i-2}) − J_{i-2} = J_{i-1} − J_{i-2},
{y | done_ψ(t) ∧ result_ψ(y, t) ∧ ¬used_φ(t)} = ψ(J_{i-1}),
{t | done_ψ(t) ∧ result_ψ(y, t) ∧ ¬fixpoint_φ(y)} = ψ(J_{i-1}) − J_{i-1} = J_i − J_{i-1};

at stage k_i + 1 (i < n):
fixpoint_φ = J_{i-1} ∪ ψ(J_{i-1}) = J_i,
used_φ = not-final_φ = done_ψ = J_{i-1};

at stage k_i + 2 (i < n):
{t | run_φ(t) ∧ ¬used_φ(t)} = J_i − J_{i-1},
{x | R̂(x, t) ∧ run_φ(t) ∧ ¬used_φ(t)} = I(R),
{x | Ŝ(x, t) ∧ run_φ(t) ∧ ¬used_φ(t)} = J_i.

Finally, at stage k_n + 1:
used_φ = J_{n-1},
not-final_φ = J_{n-2},
fixpoint_φ = J_n = μ+_S(ψ(S))(I),

and at stage k_n + 2:
result_φ = {z | μ+_S(ψ(S))(e)}(I),
done_φ = true.

Thus (i) and (ii) hold for prog(φ) in case (4), which concludes the induction.
Lemmas 14.4.1 and 14.4.4 now yield the following:

Theorem 14.4.5 while+, CALC+μ+, and datalog¬ are equivalent.
The set of queries expressible in while+, CALC+μ+, and datalog¬ is called the fixpoint
queries. An analogous equivalence result can be proven for the noninflationary languages
while, CALC+μ, and datalog¬¬. The proof of the equivalence of CALC+μ and datalog¬¬
is easier than in the inflationary case because the ability to perform deletions in datalog¬¬
facilitates the task of simulating explicit control (see Exercise 14.21). Thus we can prove
the following:

Theorem 14.4.6 while, CALC+μ, and datalog¬¬ are equivalent.

The set of queries expressible in while, CALC+μ, and datalog¬¬ is called the while
queries. We will look at the fixpoint queries and the while queries from a complexity
and expressiveness standpoint in Chapter 17. Although the spirit of our discussion in this
chapter suggested that fixpoint and while are distinct classes of queries, this is far from
obvious. In fact, the question remains open: As shown in Chapter 17, fixpoint and while
are equivalent iff ptime = pspace (Theorem 17.4.3).
The equivalences among languages discussed in this chapter are summarized in
Fig. 14.2.
Languages                                    Class of queries
while+, inflationary CALC+μ+, datalog¬       fixpoint
while, noninflationary CALC+μ, datalog¬¬     while

Figure 14.2: Summary of language equivalence results

Normal Forms

The two equivalence theorems just presented have interesting consequences for the under-
lying extensions of datalog and logic. First they show that these languages are closed under
composition and complementation. For instance, if two mappings f, g, respectively, from
a schema S to a schema S′ and from S′ to a schema S″ are expressible in datalog¬(¬),
then f ∘ g and ¬f are also expressible in datalog¬(¬). Analogous results are true for
CALC+μ(+).
A more dramatic consequence concerns the nesting of recursion in the calculus and
algebra. Consider first CALC+μ+. By the equivalence theorems, this is equivalent to
datalog¬, which, in turn (by Lemma 14.3.4), is essentially a fragment of CALC+μ+.
This yields a normal form for CALC+μ+ queries and implies that a single application of
the inflationary fixpoint operator is all that is needed. Similar remarks apply to CALC+μ
queries. In summary, the following applies:
Theorem 14.4.7 Each CALC+μ(+) query is equivalent to a CALC+μ(+) query of the
form

{ x | μ(+)_T(φ(T))(t) },

where φ is an existential CALC formula.

Analogous normal forms can be shown for while(+) (Exercise 14.22) and for RA(+)
(Exercise 14.24).
14.5 Recursion in Practical Languages
To date, there are numerous prototypes (but no commercial product) that provide query and
update languages with recursion. Many of these languages provide semantics for recursion
in the spirit of the procedural semantics described in this chapter. Prototypes implementing
the deductive paradigm are discussed in Chapter 15.
SQL 2-3 (a norm provided by ISO/ANSI) allows select statements that define a table
used recursively in the from and where clauses. Such recursion is also allowed in Starburst.
The semantics of the recursion is inflationary, although noninflationary semantics can be
achieved using deletion. An extension of SQL 2-3 is ESQL (Extended SQL). To illustrate
the flavor of the syntax (which is typical for this category of languages), the following
is an ESQL program defining a table SPARTS (subparts), the transitive closure of the
table PARTS. This is done using a view creation mechanism.
create view SPARTS as
select *
from PARTS
union
select P1.PART, P2.COMPONENT
from SPARTS P1, PARTS P2
where P1.COMPONENT = P2.PART;
This is in the spirit of CALC+μ+. With deletion, one can simulate CALC+μ. The system
Postgres also provides similar iteration up to a fixpoint in its query language POSTQUEL.
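The standardized descendant of this style of recursion is the WITH RECURSIVE construct of SQL:1999, supported by most current systems; the following sketch runs the same subparts query through Python's built-in sqlite3 module (table and column names follow the ESQL example above; the sample rows are invented):

```python
# The subparts view as a recursive common table expression, executed
# with Python's built-in sqlite3.  As with the ESQL view, the UNION
# iterates the join to an (inflationary) fixpoint.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE PARTS (PART TEXT, COMPONENT TEXT)")
con.executemany("INSERT INTO PARTS VALUES (?, ?)",
                [("engine", "piston"), ("piston", "ring")])
rows = con.execute("""
    WITH RECURSIVE SPARTS(PART, COMPONENT) AS (
        SELECT PART, COMPONENT FROM PARTS
        UNION
        SELECT S.PART, P.COMPONENT
        FROM SPARTS S, PARTS P
        WHERE S.COMPONENT = P.PART)
    SELECT * FROM SPARTS ORDER BY PART, COMPONENT""").fetchall()
# rows now also contains the derived pair ("engine", "ring")
```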
A form of recursion closer to while and while+ is provided by SQL embedded in full
programming languages, such as C+SQL, which allows SQL statements coupled with C
programs. The recursion is provided by while loops in the host language.
The recursion provided by datalog¬ and datalog¬¬ is close in spirit to production-rule
systems. Speaking loosely, a production rule has the form

if condition then action.
Production rules permit the specification of database updates, whereas deductive rules usu-
ally support only database queries (with some notable exceptions). Note that the deletion in
datalog¬¬ can be viewed as providing an update capability. The production-rule approach
has been studied widely in connection with expert systems in artificial intelligence; OPS5
is a well-known system that uses this approach.
A feature similar to recursive rules is found in the emerging field of active databases.
In active databases, the rule condition is often broken into two pieces; one piece, called the
trigger, is usually closely tied to the database (e.g., based on insertions to or deletions from
relations) and can be implemented deep in the system.
In active database systems, rules are recursively fired when conditions become true in
the database. Speaking in broad terms, the noninflationary languages studied in this chapter
can be viewed as an abstraction of this behavior. For example, the database language RDL1
is close in spirit to the language datalog¬¬. (See also Chapter 22 for a discussion of active
databases.)
The language Graphlog, a visual language for queries on graphs developed at the
University of Toronto, emphasizes queries involving paths and provides recursion specified
using regular expressions that describe the shape of desired paths.
Bibliographic Notes
The while language was first introduced as RQ in [CH82] and as LE in [Cha81a]. The
other noninflationary languages, CALC+μ and datalog¬¬, were defined in [AV91a]. The
equivalence of the noninflationary languages was also shown there.
The fixpoint languages have a long history. Logics with fixpoints have been consid-
ered by logicians in the general case where infinite structures (corresponding to infinite
database instances) are allowed [Mos74]. In the finite case, which is relevant in this book,
the fixpoint queries were first defined using the partial fixpoint operator μ_T applied only
to formulas positive in T [CH82]. The language allowing applications of μ_T to formulas
monotonic, but not necessarily positive, in T was further studied in [Gur84]. An interesting
difference between unrestricted and finite models arises here: Every CALC formula mono-
tone in some predicate R is equivalent for unrestricted structures to some CALC formula
positive in R (Lyndon's lemma), whereas this is not the case for finite structures [AG87].
Monotonicity is undecidable for both cases [Gur84].
The languages (1) with fixpoint over positive formulas, (2) with fixpoint over mono-
tone formulas, and (3) with inflationary fixpoint over arbitrary formulas were shown equiv-
alent in [GS86]. As a side-effect, it was shown in [GS86] that the nesting of μ (or μ+)
provides no additional power. This fact had been proven earlier for the first language
in [Imm86]. Moreover, a new alternative proof of the sufficiency of a single application
of the fixpoint in CALC+μ+ is provided in [Lei90]. The simultaneous induction lemma
(Lemma 14.2.5) was also proven in [GS86], extending an analogous result of [Mos74] for
infinite structures. Of the other inflationary languages, while+ was defined in [AV90] and
datalog¬ with fixpoint semantics was first defined in [AV88c, KP88].
The equivalence of datalog¬ with CALC+μ+ and while+ was shown in [AV91a]. The
relationship between the while and fixpoint queries was investigated in [AV91b], where
it was shown that they are equivalent iff ptime = pspace. The issues of complexity and
expressivity of fixpoint and while queries will be considered in detail in Chapter 17.
The rule algebra for logic programs was introduced in [IN88].
The game of life is described in detail in [Gar70]. The normal forms discussed in this
chapter can be viewed as variations of well-known folk theorems, described in [Har80].
SQL 2-3 is described in an ISO/ANSI norm [57391, 69392]. Starburst is presented in
[HCL+90]. ESQL (Extended SQL) is described in [GV92]. The example ESQL program
in Section 14.5 is from [GV92]. The query language of Postgres, POSTQUEL, is presented
in [SR86]. OPS5 is described in [For81].
The area of active databases is the subject of numerous works, including [Mor83,
Coh89, KDM88, SJGP90, MD89, WF90, HJ91a]. Early work on database triggers includes
[Esw76, BC79]. The language RDL1 is presented in [dMS88].
The visual graph language Graphlog, developed at the University of Toronto, is described
in [CM90, CM93a, CM93b].
Exercises
Exercise 14.1 (Game of life) Consider the two rules informally described in Example 14.1.
(a) Express the corresponding queries in datalog¬(¬), while(+), and CALC+μ(+).
.
(b) Find an input for which a vertex keeps changing color forever under the second rule.
Exercise 14.2 Prove that the termination problem for a while program is undecidable (i.e., that
it is undecidable, given a while query, whether it terminates on all inputs). Hint: Use a reduction
of the containment problem for algebra queries.
Exercise 14.3 Recall the datalog¬ program of Example 14.4.2.
(a) After how many stages does the program complete for an input graph of diameter n?
(b) Modify the program so that it also handles the case of empty graphs.
(c) Modify the program so that it terminates in O(log(n)) stages for an input graph
of diameter n.
Exercise 14.4 Recall the definition of μ_T(φ(T)).
(a) Exhibit a formula φ such that φ(T) has a unique minimal fixpoint on all inputs, and
μ_T(φ(T)) terminates on all inputs but does not evaluate to the minimal fixpoint on
any of them.
(b) Exhibit a formula φ such that μ_T(φ(T)) terminates on all inputs but φ(T) does not
have a unique minimal fixpoint on any input.
Exercise 14.5
(a) Give a while program with explicit looping condition for the query in Exam-
ple 14.1.2.
(b) Prove that while(+) with looping conditions of the form E = ∅, E ≠ ∅, E = E′,
and E ≠ E′, where E, E′ are algebra expressions, is equivalent to while(+) with the
change conditions.
Exercise 14.6 Consider the problem of finding, given two graphs G, G′ over the same vertex
set, the minimum set X of vertexes satisfying the following conditions: (1) For each vertex v,
if all vertexes v′ such that there is a G-edge from v′ to v are in X, then v is in X; and (2) the
analogue for G′-edges. Exhibit a while program and a fixpoint query that compute this set.
Exercise 14.7 Recall the CALC+μ+ query of Example 14.4.3.
(a) Run the query on the input graph G:
{⟨a, b⟩, ⟨c, b⟩, ⟨b, d⟩, ⟨d, e⟩, ⟨e, f⟩, ⟨f, g⟩, ⟨g, d⟩, ⟨e, h⟩, ⟨i, j⟩, ⟨j, h⟩}.
(b) Exhibit a while+ program that computes good.
(c) Write a program in your favorite conventional programming language (e.g., C or
LISP) that computes the good vertexes of a graph G. Compare it with the database
queries developed in this chapter.
(d) Show that a vertex a is good iff there is no path from a vertex belonging to a cycle to
a. Using this as a starting point, propose an alternative algorithm for computing the
good vertexes. Is your algorithm expressible in while? In fixpoint?
Exercise 14.8 Suppose that the input consists of a graph G together with a successor relation
on the vertexes of G [i.e., a binary relation succ such that (1) each element has exactly one
successor, except for one that has none; and (2) each element in the binary relation G occurs in
succ].
(a) Give a fixpoint query that tests whether the input satisfies (1) and (2).
(b) Sketch a while program computing the set of pairs ⟨a, b⟩ such that the length of the
shortest path from a to b is a prime number.
(c) Do (b) using a while+ query.
Exercise 14.9 (Simultaneous induction) Prove Lemma 14.2.5.
Exercise 14.10 (Fixpoint over positive formulas) Let φ(T) be a formula positive in T (i.e.,
each occurrence of T is under an even number of negations in the syntax tree of φ). Let R be
the set of relations other than T occurring in φ(T).
(a) Show that φ(T) is monotonic in T. That is, for all instances I and J over R ∪ {T}
such that I(R) = J(R) and I(T) ⊆ J(T),
φ(I) ⊆ φ(J).
(b) Show that μ_T(φ(T)) is defined on every input instance.
(c) [GS86] Show that the family of CALC+μ queries with fixpoints only over positive
formulas is equivalent to the CALC+μ+ queries.
Exercise 14.11 Suppose CALC+μ+ is modified so that free variables are allowed under
fixpoint operators. More precisely, let

φ(T, x_1, . . . , x_n, y_1, . . . , y_m)

be a formula where T has arity n and the x_i and y_j are free in φ. Then

μ+_{T,x_1,...,x_n}(φ(T, x_1, . . . , x_n, y_1, . . . , y_m))(e_1, . . . , e_n)

is a correct formula, whose free variables are the y_j and those occurring among the e_i. The
fixpoint is defined with respect to a given valuation of the y_j. For instance,

∃z∀w(P(z) ∧ μ_{T,x,y}(φ(T, x, y, z))(u, w))

is a well-formed formula. Give a precise definition of the semantics for queries using this
operator. Show that this extension does not yield increased expressive power over CALC+μ+.
Do the same for CALC+μ.
Exercise 14.12 Let G be a graph. Give a fixpoint query in each of the three paradigms that
computes the pairs of vertexes such that the shortest path between them is of even length.
Exercise 14.13 Let datalog(¬)_rr denote the family of datalog(¬) programs that are range
restricted, in the sense that for each rule r and each variable x occurring in r, x occurs in a
positive literal in the body of r. Prove that datalog¬_rr ≡ datalog¬ and datalog¬¬_rr ≡ datalog¬¬.
Exercise 14.14 Show that negations in bodies of rules are redundant in datalog¬¬ (i.e., for
each datalog¬¬ program P there exists an equivalent datalog¬¬ program Q that uses no nega-
tions in bodies of rules). Hint: Maintain the complement of each relation R in a new relation
R′, using deletions.
Exercise 14.15 Consider the following semantics for negations in heads of datalog¬¬ rules:
(α) the semantics giving priority to positive over negative facts inferred simultaneously
(adopted in this chapter),
(β) the semantics giving priority to negative over positive facts inferred simultaneously,
(γ) the semantics in which simultaneous inference of A and ¬A leads to a no-op (i.e.,
including A in the new instance only if it is there in the old one), and
(δ) the semantics prohibiting the simultaneous inference of a fact and its negation by
making the result undefined in such circumstances.
For a datalog¬¬ program P and ξ ∈ {α, β, γ, δ}, let P^ξ denote the program P with semantics ξ.
(a) Give an example of a program P for which P^α, P^β, P^γ, and P^δ define distinct queries.
(b) Show that it is undecidable, for a given program P, whether P^δ never simultaneously
infers a positive fact and its negation for any input.
(c) Let datalog¬¬_ξ denote the family of queries P^ξ for ξ ∈ {α, β, γ}. Prove that
datalog¬¬_α ≡ datalog¬¬_β ≡ datalog¬¬_γ.
(d) Give a syntactic condition on datalog¬¬ programs such that under the δ semantics
they never simultaneously infer a positive fact and its negation, and such that the
resulting query language is equivalent to datalog¬¬.
Exercise 14.16 (Noninflationary datalog¬) The semantics of datalog¬ can be made noninfla-
tionary by defining the immediate consequence operator to be destructive, in the sense that only
the newly inferred facts are kept after each firing of the rules. Show that, with this semantics,
datalog¬ is equivalent to datalog¬¬.
Exercise 14.17 (Multiple versus single carriers)
(a) Consider a datalog¬¬ program P producing the answer to a query in an idb relation
S. Prove that there exists a program Q with the same edb relations as P and just one
idb relation T such that, for each edb instance I,
[P(I)](S) = π(σ([Q(I)](T))),
where σ denotes a selection and π a projection.
(b) Show that the projection and selection in part (a) are indispensable. Hint: Suppose
there is a datalog¬¬ program with a single idb relation computing the complement
of the transitive closure of a graph. Reach a contradiction by showing in this case
that connectivity of a graph is expressible in relational calculus. (It is shown in Chap-
ter 17 that connectivity is not expressible in the calculus.)
(c) Show that the projection and selection used in Lemma 14.2.5 are also indispensable.
Exercise 14.18
(a) Prove Lemma 14.3.4 for the inflationary case.
(b) Prove Lemma 14.3.4 for the noninflationary case. Hint: For datalog¬¬, the straight-
forward simulation yields a formula μ_T(φ(T))(x̄), where φ may contain negations
over existential quantifiers to simulate the semantics of deletions in heads of rules
of the datalog¬¬ program. Use instead the noninflationary version of datalog¬ de-
scribed in Exercise 14.16.
Exercise 14.19 Prove that the simulation in Example 14.4.3 works.
Exercise 14.20 Complete the proof of Lemma 14.4.1 (i.e., prove that each while+ program
can be simulated by a CALC+μ+ program).
Exercise 14.21 Prove the noninflationary analogue of Lemma 14.4.4 (i.e., that datalog¬¬ can
simulate CALC+μ). Hint: Simplify the simulation in Lemma 14.4.4 by taking advantage of the
ability to delete in datalog¬¬. For instance, rules can be inhibited using switches, which can
be turned on and off. Furthermore, no timestamping is needed.
Exercise 14.22 Formulate and prove a normal form for while+ and while, analogous to the
normal forms stated for CALC+μ+ and CALC+μ.
Exercise 14.23 Prove that RA+ is equivalent to datalog¬ and RA is equivalent to noninfla-
tionary datalog¬, and hence to datalog¬¬. Hint: Use Theorems 14.4.5 and 14.4.6 and Exer-
cise 14.16.
Exercise 14.24 Let the star height of an RA program be the maximum number of occurrences
of * and + on a path in the syntax tree of the program. Show that each RA program is equivalent
to an RA program of star height one.
15 Negation in Datalog

Alice: I thought we already talked about negation.
Sergio: Yes, but they say you don't think by fixpoint.
Alice: Humbug, I just got used to it!
Riccardo: So we have to tell you how you really think.
Vittorio: And convince you that our explanation is well founded!
As originally introduced in Chapter 12, datalog is a toy language that expresses many
interesting recursive queries but has serious shortcomings concerning expressive
power. Because it is monotonic, it cannot express simple relational algebra queries such
as the difference of two relations. In the previous chapter, we considered one approach
for adding negation to datalog that led to two procedural languages, namely inflationary
datalog¬ and datalog¬¬. In this chapter, we take a different point of view inspired by non-
monotonic reasoning that attempts to view the semantics of such programs in terms of a
natural reasoning process.
This chapter begins with illustrations of how the various semantics for datalog do not
naturally extend to datalog¬. Two semantics for datalog¬ are then considered. The first,
called stratified, involves a syntactic restriction on programs but provides a semantics that
is natural and relatively easy to understand. The second, called well founded, requires
no syntactic restriction on programs, but the meaning associated with some programs
is expressed using a 3-valued logic. (In this logic, facts are true, false, or unknown.)
With respect to expressive power, well-founded semantics is equivalent to the fixpoint
queries, whereas the stratified semantics is strictly weaker. A proof-theoretic semantics
for datalog¬, based on negation as failure, is discussed briefly at the end of this chapter.
15.1 The Basic Problem

Suppose that we want to compute the pairs of disconnected nodes in a graph G (i.e., we
are interested in the complement of the transitive closure of a graph whose edges are given
by a binary relation G). We already know how to define the transitive closure of G in a
relation T using the datalog program P_TC of Chapter 12:

T(x, y) ← G(x, y)
T(x, y) ← G(x, z), T(z, y).

To define the complement CT of T, we are naturally tempted to use negation as we
did in Chapter 5. Let P_TCcomp be the result of adding the following rule to P_TC:

CT(x, y) ← ¬T(x, y).

To simplify the discussion, we generally assume an active domain interpretation of
datalog¬ rules.
In this example, negation appears to be an appealing addition to the datalog syntax.
The language datalog¬ is defined by allowing, in bodies of rules, literals of the form
¬R_i(u_i), where R_i is a relation name and u_i is a free tuple. In addition, the equality
predicate is allowed, and ¬ =(x, y) is denoted by x ≠ y.
One might hope to extend the model-theoretic, fixpoint, and proof-theoretic semantics
of datalog just as smoothly as the syntax. Unfortunately, things are less straightforward
when negation is present. We illustrate informally the problems that arise if one tries to
extend the least-fixpoint and minimal-model semantics of datalog. We shall discuss the
proof-theoretic aspect later.
Fixpoint Semantics: Problems

Recall that, for a datalog program P, the fixpoint semantics of P on input I is the unique
minimal fixpoint of the immediate consequence operator T_P containing I. The immediate
consequence operator can be naturally extended to a datalog¬ program P. For a program
P, T_P is defined as follows(1): For each K over sch(P), A is in T_P(K) if A ∈ K|edb(P) or
if there exists some instantiation A ← A_1, …, A_n of a rule in P for which (1) if A_i is a
positive literal, then A_i ∈ K; and (2) if A_i = ¬B_i, where B_i is a positive literal, then B_i ∉ K.
[Note the difference from the immediate consequence operator Γ_P defined for datalog¬ in
Section 14.3: Γ_P is inflationary by definition (that is, K ⊆ Γ_P(K) for each K over sch(P)),
whereas T_P is not.] The following example illustrates several unexpected properties that
T_P might have.
Example 15.1.1
(a) T_P may not have any fixpoint. For the propositional program P_1 = {p ← ¬p},
T_{P_1} has no fixpoint.
(b) T_P may have several minimal fixpoints containing a given input. For example,
the propositional program P_2 = {p ← ¬q, q ← ¬p} has two minimal fixpoints
(containing the empty instance): {p} and {q}.
(c) Consider the sequence {T_P^i(∅)}_{i>0} for a given datalog¬ program P. Recall that
for datalog, the sequence is increasing and converges to the least fixpoint of T_P.
In the case of datalog¬, the situation is more intricate:
1. The sequence does not generally converge, even if T_P has a least fix-
point. For example, let P_3 = {p ← ¬r; r ← ¬p; p ← ¬p, r}. Then
T_{P_3} has a least fixpoint {p} but {T_{P_3}^i(∅)}_{i>0} alternates between ∅ and
{p, r} and so does not converge (Exercise 15.2).
2. Even if {T_P^i(∅)}_{i>0} converges, its limit is not necessarily a minimal
fixpoint of T_P, even if such fixpoints exist. To see this, let P_4 = {p ←
p, q ← q, p ← ¬p, q ← ¬p}. Now {T_{P_4}^i(∅)}_{i>0} converges to {p, q}
but the least fixpoint of T_{P_4} equals {p}.

(1) Given an instance J over a database schema R with S ⊆ R, J|S denotes the restriction of J to S.
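These claims are easy to check mechanically. Below is a small propositional evaluator for T_P (my own encoding, not the book's: a rule is a pair (head, body), and a body literal ("p", True) stands for p while ("p", False) stands for ¬p), applied to the programs P_1, P_3, and P_4 as I read them from Example 15.1.1.

```python
# Checking Example 15.1.1 with a tiny propositional T_P evaluator.
from itertools import chain, combinations

def t_p(program, K):
    # immediate consequences: heads of rules whose bodies hold in K
    return {h for (h, body) in program
            if all((a in K) == pos for (a, pos) in body)}

P1 = [("p", [("p", False)])]                         # p <- not p
P3 = [("p", [("r", False)]),                         # p <- not r
      ("r", [("p", False)]),                         # r <- not p
      ("p", [("p", False), ("r", True)])]            # p <- not p, r
P4 = [("p", [("p", True)]), ("q", [("q", True)]),    # p <- p,  q <- q
      ("p", [("p", False)]), ("q", [("p", False)])]  # p <- not p, q <- not p

def fixpoints(program, atoms):
    subsets = chain.from_iterable(combinations(sorted(atoms), r)
                                  for r in range(len(atoms) + 1))
    return [set(s) for s in subsets if t_p(program, set(s)) == set(s)]

print(fixpoints(P1, {"p"}))       # []: T_P1 has no fixpoint
K, seq = set(), []
for _ in range(4):                # iterate T_P3 from the empty instance
    K = t_p(P3, K)
    seq.append(K)
print(seq)                        # oscillates: {p, r}, {}, {p, r}, {}
print(fixpoints(P3, {"p", "r"}))  # only {p}: the least fixpoint is never reached
print(fixpoints(P4, {"p", "q"}))  # {p} and {p, q}: the limit {p, q} is not minimal
```

The encoding of P_3 and P_4 is my reconstruction of the garbled rules; it reproduces exactly the behaviors stated in the example.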
Remark 15.1.2 (Inflationary fixpoint semantics) The program P_4 of the preceding ex-
ample contains two rules of a rather strange form: p ← p and q ← q. In some sense, such
rules may appear meaningless. Indeed, their logical forms [e.g., (p ⇒ p)] are tautologies.
However, rules of the form R(x_1, …, x_n) ← R(x_1, …, x_n) have a nontrivial impact on
the immediate consequence operator T_P. If such rules are added for each idb relation R,
this results in making T_P inflationary [i.e., K ⊆ T_P(K) for each K], because each fact
is an immediate consequence of itself. It is worth noting that in this case, {T_P^i(I)}_{i>0} al-
ways converges and the semantics given by its limit coincides with the inflationary fixpoint
semantics for datalog¬ programs exhibited in Chapter 14.
To see the difference between the two semantics, consider again program P_TCcomp.
The sequence {T_{P_TCcomp}^i(I)}_{i>0} on input I over G converges to the desired answer (the
complement of transitive closure). With the inflationary fixpoint semantics, CT becomes
a complete graph at the first iteration (because T is initially empty) and P_TCcomp does not
compute the complement of transitive closure. Nonetheless, it was shown in Chapter 14 that
there is a different (more complicated) datalog¬ program that computes the complement of
transitive closure with the inflationary fixpoint semantics.
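To make the contrast concrete, here is a small simulation (an assumed set-based encoding, not from the book) of T_P for P_TCcomp, compared with the inflationary operator K ∪ T_P(K), on a three-node input.

```python
# Plain iteration of T_P for P_TCcomp versus its inflationary variant.

def t_p(G, K):
    # immediate consequences of K = (T, CT) for the rules of P_TCcomp
    T, CT = K
    adom = {v for e in G for v in e}
    newT = set(G) | {(x, y) for (x, z) in G for (z2, y) in T if z == z2}
    newCT = {(x, y) for x in adom for y in adom if (x, y) not in T}
    return (newT, newCT)

def iterate(step, K, n):
    for _ in range(n):
        K = step(K)
    return K

G = {(1, 2), (2, 3)}
# Plain iteration of T_P converges to the intended answer:
T, CT = iterate(lambda K: t_p(G, K), (set(), set()), 10)

# Inflationary iteration keeps the spurious CT facts of the first step,
# when T is still empty, so CT ends up containing every pair:
def inflationary(K):
    newT, newCT = t_p(G, K)
    return (K[0] | newT, K[1] | newCT)

T2, CT2 = iterate(inflationary, (set(), set()), 10)
print(len(CT), len(CT2))  # 6 9
```

With three active-domain values and a transitive closure of three edges, the correct complement has 6 pairs, while the inflationary iteration freezes all 9 pairs in CT, exactly as described above.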
Model-Theoretic Semantics: Problems

As with datalog, we can associate with a datalog¬ program P the set Σ_P of CALC
sentences corresponding to the rules of P. Note first that, as with datalog, Σ_P always has at
least one model containing any given input I. B(P, I) is such a model. [Recall that B(P, I),
introduced in Chapter 12, is the instance in which the idb relations contain all tuples with
values in I or P.]
For datalog, the model-theoretic semantics of a program P was given by the unique
minimal model of Σ_P containing the input. Unfortunately, this simple solution no longer
works for datalog¬, because uniqueness of a minimal model containing the input is not
guaranteed. Program P_2 in Example 15.1.1(b) provides one example of this: {p} and {q}
are distinct minimal models of Σ_{P_2}. As another example, consider the program P_TCcomp
and an input I for predicate G. Let J over sch(P_TCcomp) be such that J(G) = I, J(T) ⊇ I,
J(T) is transitively closed, and J(CT) = {⟨x, y⟩ | x, y occur in I, ⟨x, y⟩ ∉ J(T)}. Clearly,
there may be more than one such J, but one can verify that each one is a minimal model of
Σ_{P_TCcomp} satisfying J(G) = I.
It is worth noting the connection between T_P and models of Σ_P: An instance K over
sch(P) is a model of Σ_P iff T_P(K) ⊆ K. In particular, every fixpoint of T_P is a model of
Σ_P. The converse is false (Exercise 15.3).
When, for a program P, Σ_P has several minimal models, one must specify which
among them is the model intended to be the solution. To this end, various criteria of
"niceness" of models have been proposed that can distinguish the intended model from
other candidates. We shall discuss several such criteria as we go along. Unfortunately, none
of these criteria suffices to do the job. Moreover, upon reflection it is clear that no criterion
can exist that would always permit identification of a unique intended model among several
minimal models. This is because, as in the case of program P_2 of Example 15.1.1(b), the
minimal models can be completely symmetric; in such cases there is no property that would
separate one from the others using just the information in the input or the program.
In summary, the approach we used for datalog, based on equivalent least-fixpoint
or minimum-model semantics, breaks down when negation is present. We shall describe
several solutions to the problem of giving semantics to datalog¬ programs. We begin with
the simplest case and build up from there.
15.2 Stratified Semantics

This section begins with the restricted case in which negation is applied only to edb rela-
tions. The semantics for negation is straightforward in this case. We then turn to stratified
semantics, which extends this simple case in an extremely natural fashion.

Semipositive Datalog¬

We consider now semipositive datalog¬ programs, which only apply negation to edb rela-
tions. For example, the difference of R and R′ can be defined by the one-rule program

Diff(x) ← R(x), ¬R′(x).

To give semantics to ¬R′(x), we simply use the closed world assumption (see Chapter 2):
¬R′(x) holds iff x is in the active domain and x ∉ R′. Because R′ is an edb relation, its
content is given by the database and the semantics of the program is clear. We elaborate on
this next.
Definition 15.2.1 A datalog¬ program P is semipositive if, whenever a negative literal
¬R′(x) occurs in the body of a rule in P, R′ ∈ edb(P).

As their name suggests, semipositive programs are almost positive. One could elimi-
nate negation from semipositive programs by adding, for each edb relation R′, a new edb
relation R′′ holding the complement of R′ (with respect to the active domain) and replacing
¬R′(x) by R′′(x). Thus it is not surprising that semipositive programs behave much like
datalog programs. The next result is shown easily and is left for the reader (Exercise 15.7).
Theorem 15.2.2 Let P be a semipositive datalog¬ program. For every instance I over
edb(P),
(i) Σ_P has a unique minimal model J satisfying J|edb(P) = I.
(ii) T_P has a unique minimal fixpoint J satisfying J|edb(P) = I.
(iii) The minimum model in (i) and the least fixpoint in (ii) are identical and equal to
the limit of the sequence {T_P^i(I)}_{i>0}.
Remark 15.2.3 Observe that in the theorem, we use the formulation "minimal model
satisfying J|edb(P) = I," whereas in the analogous result for datalog we used "minimal
model containing I." Both formulations would be equivalent in the datalog setting, because
adding tuples to the edb predicates would result in larger models because of monotonicity.
This is not the case here because negation destroys monotonicity.

Given a semipositive datalog¬ program P and an input I, we denote by P_semipos(I)
the minimum model of Σ_P (or, equivalently, the least fixpoint of T_P) whose restriction to
edb(P) equals I.
An example of a semipositive program that is neither in datalog nor in CALC is given
by

T(x, y) ← ¬G(x, y)
T(x, y) ← ¬G(x, z), T(z, y).

This program computes the transitive closure of the complement of G. On the other hand,
the foregoing program for the complement of transitive closure is not a semipositive pro-
gram. However, it can naturally be viewed as the composition of two semipositive pro-
grams: the program computing the transitive closure followed by the program computing
its complement. Stratification, which is studied next, may be viewed as the closure of semi-
positive programs under composition. It will allow us to specify, for instance, the compo-
sition just described, computing the complement of transitive closure.
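As a quick illustration, the semipositive program above can be evaluated by first materializing the complement of G under the closed world assumption and then iterating the recursive rule to its least fixpoint. A Python sketch (assumed edge-set encoding, not from the book):

```python
# Evaluating the semipositive program
#   T(x, y) <- not G(x, y)
#   T(x, y) <- not G(x, z), T(z, y)
# i.e., the transitive closure of the complement of G.

def tc_of_complement(G):
    adom = {v for e in G for v in e}
    # closed world assumption: not G(x, y) holds for active-domain
    # pairs absent from G
    comp = {(x, y) for x in adom for y in adom if (x, y) not in G}
    T = set()
    while True:  # least-fixpoint iteration of the two rules
        new = comp | {(x, y) for (x, z) in comp for (z2, y) in T if z == z2}
        if new == T:
            return T
        T = new

# The complement of a directed 3-cycle is strongly connected, so its
# transitive closure contains all 9 active-domain pairs:
print(len(tc_of_complement({(1, 2), (2, 3), (3, 1)})))  # 9
```

Note that negation is evaluated once, against the fixed edb relation G; the recursion itself stays positive, which is exactly what makes semipositive programs well behaved.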
Syntactic Restriction for Stratification

We now consider a natural extension of semipositive programs. In semipositive programs,
the use of negation is restricted to edb relations. Now suppose that we use some defined
relations, much like views. Once a relation has been defined by some program, other
programs can subsequently treat it as an edb relation and apply negation to it. This simple
idea underlies an important extension to semipositive programs, called stratified programs.
Suppose we have a datalog¬ program P. Each idb relation is defined by one or more
rules of P. If we are able to read the program so that, for each idb relation R′, the portion
of P defining R′ comes before the negation of R′ is used, then we can simply compute
R′ before its negation is used, and we are done. For example, consider program P_TCcomp
introduced at the beginning of this chapter. Clearly, we intended for T to be defined by the
first two rules before its negation is used in the rule defining CT. Thus the first two rules
are applied before the third. Such a way of reading P is called a "stratification" of P and
is defined next.
Definition 15.2.4 A stratification of a datalog¬ program P is a sequence of datalog¬
programs P_1, …, P_n such that for some mapping σ from idb(P) to [1..n],
(i) {P_1, …, P_n} is a partition of P.
(ii) For each predicate R, all the rules in P defining R are in P_{σ(R)} (i.e., in the same
program of the partition).
(iii) If R(u) ← … R′(v) … is a rule in P, and R′ is an idb relation, then σ(R′) ≤ σ(R).
(iv) If R(u) ← … ¬R′(v) … is a rule in P, and R′ is an idb relation, then σ(R′) <
σ(R).
Given a stratification P_1, …, P_n of P, each P_i is called a stratum of the stratification, and
σ is called the stratification mapping.
Intuitively, a stratification of a program P provides a way of parsing P as a sequence of
subprograms P_1, …, P_n, each defining one or several idb relations. By (iii), if a relation R′
is used positively in the definition of R, then R′ must be defined earlier or simultaneously
with R (this allows recursion!). If the negation of R′ is used in the definition of R, then by
(iv) the definition of R′ must come strictly before that of R.
Unfortunately, not every datalog¬ program has a stratification. For example, there is
no way to read program P_2 of Example 15.1.1 so that p is defined before q and q before
p. Programs that have a stratification are called stratifiable. Thus P_2 is not stratifiable. On
the other hand, P_TCcomp is clearly stratifiable: The first stratum consists of the first two
rules (defining T), and the second stratum consists of the third rule (defining CT using ¬T).
Example 15.2.5 Consider the program P_7 defined by

r_1 : S(x) ← R′_1(x), R(x)
r_2 : T(x) ← R′_2(x), R(x)
r_3 : U(x) ← R′_3(x), ¬T(x)
r_4 : V(x) ← R′_4(x), ¬S(x), ¬U(x).

Then P_7 has 5 distinct stratifications, namely,

{r_1}, {r_2}, {r_3}, {r_4}
{r_2}, {r_1}, {r_3}, {r_4}
{r_2}, {r_3}, {r_1}, {r_4}
{r_1, r_2}, {r_3}, {r_4}
{r_2}, {r_1, r_3}, {r_4}.

These lead to five different ways of reading the program P_7. As will be seen, each of these
yields the same semantics.
There is a simple test for checking whether a program is stratifiable. Not surprisingly, it
involves testing for an acyclicity condition in definitions of relations using negation. Let P
be a datalog¬ program. The precedence graph G_P of P is the labeled graph whose nodes
are the idb relations of P. Its edges are the following:
[Figure 15.1: Precedence graphs for P_TCcomp, P_2, and P_7. For P_TCcomp there is a
positive edge from T to itself and a negative edge from T to CT; for P_2 there are negative
edges from p to q and from q to p; for P_7 there is a negative edge from T to U and negative
edges from S to V and from U to V.]
If R(u) ← … R′(v) … is a rule in P, then ⟨R′, R⟩ is an edge in G_P with label +
(called a positive edge).
If R(u) ← … ¬R′(v) … is a rule in P, then ⟨R′, R⟩ is an edge in G_P with label −
(called a negative edge).
For example, the precedence graphs for programs P_TCcomp, P_2, and P_7 are represented
in Fig. 15.1. It is straightforward to show the following (proof omitted):
Lemma 15.2.6 Let P be a program with stratification σ. If there is a path from R′ to R in
G_P, then σ(R′) ≤ σ(R); and if there is a path from R′ to R in G_P containing some negative
edge, then σ(R′) < σ(R).

We now show how the precedence graph of a program can be used to test the stratifia-
bility of the program.

Proposition 15.2.7 A datalog¬ program P is stratifiable iff its precedence graph G_P
has no cycle containing a negative edge.
Proof Consider the "only if" part. Suppose P is a datalog¬ program whose precedence
graph has a cycle R_1, …, R_m, R_1 containing a negative edge, say from R_m to R_1. Suppose,
toward a contradiction, that σ is a stratification mapping for P. By Lemma 15.2.6, σ(R_1) <
σ(R_1), because there is a path from R_1 to R_1 with a negative edge. This is a contradiction,
so no stratification mapping exists for P.
Conversely, suppose P is a program whose precedence graph G_P has no cycle with
negative edges. Let ≺ be the binary relation among the strongly connected components of
G_P defined as follows: C ≺ C′ if C ≠ C′ and there is a (positive or negative) edge in G_P
from some node of C to some node of C′.
We first show that
(*) ≺ is acyclic.
Suppose there is a cycle in ≺. Then by construction of ≺, this cycle must traverse two
distinct strongly connected components, say C and C′. Let A be in C. It is easy to deduce
that there is a path in G_P from some vertex in C′ to A and from A to some vertex in C′.
Because C′ is a strongly connected component of G_P, A is in C′. Thus C ∩ C′ ≠ ∅, so C = C′,
a contradiction. Hence (*) holds.
In view of (*), the binary relation ≺ induces a partial order among the strongly
connected components of G_P, which we also denote by ≺, by abuse of notation. Let
C_1, …, C_n be a topological sort with respect to ≺ of the strongly connected components
of G_P; that is, C_1, …, C_n is the set of strongly connected components of G_P, and if C_i ≺ C_j,
then i < j. Finally, for each i, 1 ≤ i ≤ n, let Q_i consist of all rules defining some rela-
tion in C_i. Then Q_1, …, Q_n is a stratification of P. Indeed, (i) and (ii) in the definition
of stratification are clearly satisfied. Conditions (iii) and (iv) follow immediately from the
construction of G_P and ≺ and from the hypothesis that G_P has no cycle with a negative edge.
Clearly, the stratifiability test provided by Proposition 15.2.7 takes time polynomial in
the size of the program P.
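The test can be sketched in a few lines of Python (my own rule encoding, not the book's): build the precedence graph, close it transitively, and reject exactly when some negative edge lies on a cycle.

```python
# Polynomial-time stratifiability test via the precedence graph.
# A rule is (head, [(relation, positive?), ...]); edges are drawn only
# for idb relations, as in the definition of G_P.

def stratifiable(rules):
    idb = {head for head, _ in rules}
    pos, neg = set(), set()
    for head, body in rules:
        for rel, positive in body:
            if rel in idb:
                (pos if positive else neg).add((rel, head))
    # transitive closure of all precedence edges
    reach = set(pos | neg)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(reach):
            for (b2, c) in list(reach):
                if b == b2 and (a, c) not in reach:
                    reach.add((a, c))
                    changed = True
    # a negative edge (a, b) lies on a cycle iff a == b or b reaches a
    return not any(b == a or (b, a) in reach for (a, b) in neg)

# P_TCcomp is stratifiable; P_2 = {p <- not q, q <- not p} is not.
p_tccomp = [("T", [("G", True)]), ("T", [("G", True), ("T", True)]),
            ("CT", [("T", False)])]
p2 = [("p", [("q", False)]), ("q", [("p", False)])]
print(stratifiable(p_tccomp), stratifiable(p2))  # True False
```

The naive closure loop is cubic in the number of idb relations, comfortably within the polynomial bound stated above.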
Verification of the following observation is left to the reader (Exercise 15.4).

Lemma 15.2.8 Let P_1, …, P_n be a stratification of P, and let Q_1, …, Q_m be ob-
tained as in Proposition 15.2.7. If Q_j ∩ P_i ≠ ∅, then Q_j ⊆ P_i. In particular, the partition
Q_1, …, Q_m of P refines all other partitions given by stratifications of P.
Semantics of Stratified Programs

Consider a stratifiable program P with a stratification σ = P_1, …, P_n. Using the strat-
ification σ, we can now easily give a semantics to P using the well-understood semi-
positive programs. Notice that for each program P_i in the stratification, if P_i uses the
negation of R′, then R′ ∈ edb(P_i) [note that edb(P_i) may contain some of the idb rela-
tions of P]. Furthermore, R′ is either in edb(P) or is defined by some P_j preceding P_i
[i.e., R′ ∈ ∪_{j<i} idb(P_j)]. Thus each program P_i is semipositive relative to previously de-
fined relations. Then the semantics of P is obtained by applying, in order, the programs
P_i. More precisely, let I be an instance over edb(P). Define the sequence of instances

I_0 = I
I_i = I_{i−1} ∪ P_i(I_{i−1}|edb(P_i)), 0 < i ≤ n.

Note that I_i extends I_{i−1} by providing values to the relations defined by P_i; and that
P_i(I_{i−1}|edb(P_i)), or equivalently P_i(I_{i−1}), is the semantics of the semipositive program
P_i applied to the values of its edb relations provided by I_{i−1}. Let us denote the final
instance I_n thus obtained by σ(I). This provides the semantics of a datalog¬ program under
a stratification σ.
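For P_TCcomp this stratum-by-stratum evaluation looks as follows in Python (assumed edge-set encoding; a sketch, not the book's algorithm): the first stratum computes T as a least fixpoint, and the second stratum then treats T as an edb relation and negates it over the active domain.

```python
# Stratified evaluation of P_TCcomp in two strata.

def stratified_tccomp(G):
    adom = {v for e in G for v in e}
    # stratum P_1: least fixpoint of
    #   T(x, y) <- G(x, y);  T(x, y) <- G(x, z), T(z, y)
    T = set()
    while True:
        new = set(G) | {(x, y) for (x, z) in G for (z2, y) in T if z == z2}
        if new == T:
            break
        T = new
    # stratum P_2: CT(x, y) <- not T(x, y), with T now a fixed relation
    CT = {(x, y) for x in adom for y in adom if (x, y) not in T}
    return T, CT

T, CT = stratified_tccomp({(1, 2), (2, 3)})
print(sorted(CT))  # [(1, 1), (2, 1), (2, 2), (3, 1), (3, 2), (3, 3)]
```

Unlike the inflationary evaluation discussed in Section 15.1, the complement is taken only after T has been fully computed, so CT is the intended answer.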
Independence of Stratication
As shown in Example 15.2.5, a datalog

program can have more than one stratication.


Will the different stratications yield the same semantics? Fortunately, the answer is yes.
382 Negation in Datalog
To demonstrate this, we use the following simple lemma, whose proof is left to the reader
(Exercise 15.10).
Lemma 15.2.9 Let P be a semipositive datalog

program and a stratication for P.


Then P
semipos
(I) =(I) for each instance I over edb(P).
Two stratications of a datalog

program are equivalent if they yield the same seman-


tics on all inputs.
Theorem 15.2.10 Let P be a stratiable datalog

program. All stratications of P are


equivalent.
Proof Let G_P be the precedence graph of P and σ_{G_P} = Q_1, …, Q_n be a stratification
constructed from G_P as in the proof of Proposition 15.2.7. Let σ = P_1, …, P_k be a strati-
fication of P. It clearly suffices to show that σ is equivalent to σ_{G_P}. The stratification σ_{G_P}
is used as a reference because, as shown in Lemma 15.2.8, its strata are the finest possible
among all stratifications for P.
As in the proof of Proposition 15.2.7, we use the partial order ≺ among the strongly
connected components of G_P and the notation introduced there. Clearly, the relation ≺ on
the C_i induces a partial order on the Q_i, which we also denote by ≺ (Q_i ≺ Q_j if C_i ≺ C_j).
We say that a sequence Q_{i_1}, …, Q_{i_r} of some of the Q_i is compatible with ≺ if for every
l < m it is not the case that Q_{i_m} ≺ Q_{i_l}.
We shall prove that
1. If σ′ and σ′′ are permutations of σ_{G_P} that are compatible with ≺, then σ′ and
σ′′ are equivalent stratifications of P.
2. For each P_i, 1 ≤ i ≤ k, there exists σ_i = Q_{i_1}, …, Q_{i_r} such that σ_i is a stratifica-
tion of P_i, and the sequence Q_{i_1}, …, Q_{i_r} is compatible with ≺.
3. σ_1, …, σ_k is a permutation of Q_1, …, Q_n compatible with ≺.
Before demonstrating these, we argue that the foregoing statements (1 through 3) are
sufficient to show that σ and σ_{G_P} are equivalent. By statement 2, each σ_i is a stratification
of P_i. Lemma 15.2.9 implies that P_i is equivalent to σ_i. It follows that σ = P_1, …, P_k is
equivalent to σ_1, …, σ_k, which, by statement 3, is a permutation of σ_{G_P} compatible with
≺. Then σ_1, …, σ_k and σ_{G_P} are equivalent by statement 1, so σ and σ_{G_P} are equivalent.
Consider statement 1. Note first that one can obtain σ′′ from σ′ by a sequence of
exchanges of adjacent Q_i, Q_j such that Q_i ⊀ Q_j and Q_j ⊀ Q_i (Exercise 15.9). Thus it
is sufficient to show that for every such pair, Q_i, Q_j is equivalent to Q_j, Q_i. Because
Q_i ⊀ Q_j and Q_j ⊀ Q_i, it follows that no idb relation of Q_i occurs in Q_j and conversely.
Then Q_i ∪ Q_j is a semipositive program [with respect to edb(Q_i ∪ Q_j)] and both Q_i, Q_j
and Q_j, Q_i are stratifications of Q_i ∪ Q_j. By Lemma 15.2.9, Q_i, Q_j and Q_j, Q_i are both
equivalent to Q_i ∪ Q_j (as a semipositive program), so Q_i, Q_j and Q_j, Q_i are equivalent.
Statement 2 follows immediately from Lemma 15.2.8.
Finally, consider statement 3. By statement 2, each σ_i is compatible with ≺. Thus it
remains to be shown that, if Q_m occurs in σ_i, Q_l occurs in σ_j, and i < j, then Q_l ⊀ Q_m.
Note that Q_l is included in P_j, and Q_m is included in P_i. It follows that for all relations R
defined by Q_m and R′ defined by Q_l, σ(R) < σ(R′), where σ is the stratification function
of P_1, …, P_k. Hence R′ ⊀ R, so Q_l ⊀ Q_m.
Thus all stratifications of a given stratifiable program are equivalent. This means
that we can speak about the semantics of such a program independently of a particular
stratification. Given a stratifiable datalog¬ program P and an input I over edb(P), we
shall take as the semantics of P on I the semantics σ(I) of any stratification σ of P. This
semantics, well defined by Theorem 15.2.10, is denoted by P_strat(I). Clearly, P_strat(I) can
be computed in time polynomial with respect to I.
Now that we have a well-defined semantics for stratified programs, we can verify that
for semipositive programs, this semantics coincides with the semantics already introduced.
If P is a semipositive datalog¬ program, then P is also stratifiable. By Lemma 15.2.9,
P_semipos and P_strat are equivalent.
Properties of Stratified Semantics

Stratified semantics has a procedural flavor because it is the result of an ordering of the
rules, albeit implicit. What can we say about P_strat(I) from a model-theoretic point of
view? Rather pleasantly, P_strat(I) is a minimal model of Σ_P containing I. However, no
precise characterization of stratified semantics in model-theoretic terms has emerged. Some
model-theoretic properties of stratified semantics are established next.

Proposition 15.2.11 For each stratifiable datalog¬ program P and instance I over
edb(P),
(a) P_strat(I) is a minimal model of Σ_P whose restriction to edb(P) equals I.
(b) P_strat(I) is a minimal fixpoint of T_P whose restriction to edb(P) equals I.
Proof For part (a), let σ = P_1, …, P_n be a stratification of P and I an instance over
edb(P). We have to show that P_strat(I) is a minimal model of Σ_P whose restriction to
edb(P) equals I. Clearly, P_strat(I) is a model of Σ_P whose restriction to edb(P) equals I.
To prove its minimality, it is sufficient to show that, for each model J of Σ_P,
(**) if I ⊆ J ⊆ P_strat(I), then J = P_strat(I).
Thus suppose I ⊆ J ⊆ P_strat(I). We prove by induction on k that
(†) P_strat(I)|sch(∪_{i≤k} P_i) = J|sch(∪_{i≤k} P_i)
for each k, 1 ≤ k ≤ n. The equality of P_strat(I) and J then follows from (†) with k = n.
For k = 1, edb(P_1) ⊆ edb(P), so
P_strat(I)|edb(P_1) = I|edb(P_1) = J|edb(P_1).
By the definition of stratified semantics and Theorem 15.2.2, P_strat(I)|sch(P_1) is the
minimum model of Σ_{P_1} whose restriction to edb(P_1) equals P_strat(I)|edb(P_1). On the
other hand, J|sch(P_1) is also a model of Σ_{P_1} whose restriction to edb(P_1) equals
P_strat(I)|edb(P_1). From the minimality of P_strat(I)|sch(P_1), it follows that
P_strat(I)|sch(P_1) ⊆ J|sch(P_1).
From (**) it then follows that P_strat(I)|sch(P_1) = J|sch(P_1), which establishes (†) for
k = 1. For the induction step, suppose (†) is true for k, 1 ≤ k < n. Then (†) for k + 1 is
shown in the same manner as for the case k = 1. This proves (†) for 1 ≤ k ≤ n. It follows
that P_strat(I) is a minimal model of Σ_P whose restriction to edb(P) equals I.
The proof of part (b) is left for Exercise 15.12.
There is another appealing property of stratied semantics that takes into account the
syntax of the program in addition to purely model-theoretic considerations. This property
is illustrated next.
Consider the two programs
P
5
={p q}
P
6
={q p}
From the perspective of classical logic,
P
5
and
P
6
are equivalent to each other and to
{p q}. However, T
P
5
and T
P
6
have different behavior: The unique xpoint of T
P
5
is {p},
whereas that of T
P
6
is {q}. This is partially captured by the notion of supported as follows.
Let datalog¬ program P and input I be given. As with pure datalog, J is a model of
Σ_P iff J ⊇ T_P(J). We say that J is a supported model if J ⊆ T_P(J) (i.e., if each fact in J is
justified or supported by being the head of a ground instantiation of a rule in P whose
body is all true in J). (In the context of some input I, we say that J is supported relative
to I, and the definition is modified accordingly.) This condition, which has both syntactic
and semantic aspects, captures at least some of the spirit of the immediate consequence
operator T_P. As suggested in Remark 15.1.2, its impact can be annulled by adding rules of
the form p ← p.
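The behavior of T_{P_5} and T_{P_6}, and the supported-model condition, can be spot-checked mechanically. The following is a minimal Python sketch (ours, not the book's; the helper names are assumptions):

```python
# A quick sketch (ours) of the immediate consequence operators of
# P_5 = {p <- ¬q} and P_6 = {q <- ¬p}, and of the supported-model test.
def t_p5(J):
    # T_{P_5}: p is an immediate consequence iff ¬q holds, i.e., q not in J.
    return {'p'} if 'q' not in J else set()

def t_p6(J):
    return {'q'} if 'p' not in J else set()

subsets = [set(), {'p'}, {'q'}, {'p', 'q'}]

# Unique fixpoints: {p} for T_{P_5}, {q} for T_{P_6}.
assert [J for J in subsets if t_p5(J) == J] == [{'p'}]
assert [J for J in subsets if t_p6(J) == J] == [{'q'}]

# Both {p} and {q} are classical models of p ∨ q, but among the models of
# P_5 (those with T_P(J) ⊆ J), only {p} also satisfies J ⊆ T_P(J): every
# fact in it heads a rule whose body is true.
supported_models = [J for J in subsets if t_p5(J) <= J and J <= t_p5(J)]
assert supported_models == [{'p'}]
```

The sketch makes the asymmetry concrete: as sets of classical models the two programs are indistinguishable, but the supported-model condition separates them.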
The proof of the following is left to the reader (Exercise 15.13).

Proposition 15.2.12   For each stratifiable program P and instance I over edb(P), P^strat(I) is a supported model of P relative to I.
We have seen that stratification provides an elegant and simple approach to defining
semantics of datalog¬ programs. Nonetheless, it has two major limitations. First, it does
not provide semantics to all datalog¬ programs. Second, stratified datalog¬ programs are
not entirely satisfactory with regard to expressive power. From a computational point of
view, they provide recursion and negation and are inflationary. Therefore, as discussed
in Chapter 14, one might expect that they express the fixpoint queries. Unfortunately,
stratified datalog¬ programs fall short of expressing all such queries, as will be shown
in Section 15.4. Intuitively, this is because the stratification condition prohibits recursive
application of negation, whereas in other languages expressing fixpoint this computational
restriction does not exist.
For these reasons, we consider another semantics for datalog¬ programs, called well-founded. As we shall see, this provides semantics to all datalog¬ programs and expresses
all fixpoint queries. Furthermore, well-founded and stratified semantics agree on stratified
datalog¬ programs.
15.3 Well-Founded Semantics
Well-founded semantics relies on a fundamental revision of our expectations of the answer
to a datalog¬ program. So far, we required that the answer must provide information on the
truth or falsehood of every fact. Well-founded semantics is based on the idea that a given
program may not necessarily provide such information on all facts. Instead, some facts may
simply be indifferent to it, and the answer should be allowed to say that the truth value
of those facts is unknown. As it turns out, relaxing expectations about the answer in this
fashion allows us to provide a natural semantics for all datalog¬ programs. The price is
that the answer is no longer guaranteed to provide total information.
Another aspect of this approach is that it puts negative and positive facts on a more
equal footing. One can no longer assume that ¬R(u) is true simply because R(u) is not
in the answer. Instead, both negative and positive facts must be inferred. To formalize this,
we shall introduce 3-valued instances, in which the truth value of facts can be true, false,
or unknown.
This section begins by introducing a largely declarative semantics for datalog¬ programs. Next an equivalent fixpoint semantics is developed. Finally, it is shown that stratified
and well-founded semantics agree on the family of stratified datalog¬ programs.
A Declarative Semantics for Datalog¬

The aim of giving semantics to a datalog¬ program P will be to find an appropriate
3-valued model I of Σ_P. In considering what "appropriate" might mean, it is useful to
recall the basic motivation underlying the logic-programming approach to negation, as
opposed to the purely computational approach. An important goal is to model some form
of natural reasoning process. In particular, consistency in the reasoning process is required.
Specifically, one cannot use a fact and later infer its negation. This should be captured in
the notion of appropriateness of a 3-valued model I, and it has two intuitive aspects:

• the positive facts of I must be inferred from P, assuming the negative facts in I; and
• all negative facts that can be inferred from I must already be in I.

A 3-valued model satisfying the aforementioned notion of appropriateness will be
called a 3-stable model of P. It turns out that, generally, programs have several 3-stable
models. Then it is natural to take as an answer the certain (positive and negative) facts that
belong to all such models, which turns out to yield, in some sense, the smallest 3-stable
model. This is indeed how the well-founded semantics of P will be defined.
Example 15.3.1   The example concerns a game with states a, b, .... The game is between two players. The possible moves of the game are held in a binary relation moves. A
tuple ⟨a, b⟩ in moves indicates that when in state a, one can choose to move to state b. A
player loses if he or she is in a state from which there are no moves. The goal is to compute
the set of winning states (i.e., the set of states such that there exists a winning strategy for
a player in this state). These are obtained in a unary predicate win.
Consider the input K with the following value for moves:

K(moves) = {⟨b, c⟩, ⟨c, a⟩, ⟨a, b⟩, ⟨a, d⟩, ⟨d, e⟩, ⟨d, f⟩, ⟨f, g⟩}

Graphically, the input is represented as a graph with an edge for each move: b → c, c → a, and a → b form a cycle; in addition a → d, d → e, d → f, and f → g.

It is seen easily that there are indeed winning strategies from states d (move to e) and
f (move to g). Slightly more subtle is the fact that there is no winning strategy from any of
states a, b, or c. A given player can prevent the other from winning, essentially by forcing
a nonterminating sequence of moves.
Now consider the following nonstratifiable program P_win:

win(x) ← moves(x, y), ¬win(y)

Intuitively, P_win states that a state x is in win if there is at least one state y that one can
move to from x, for which the opposing player loses. We now exhibit a 3-valued model J
of P_win that agrees with K on moves. As will be seen, this will in fact be the well-founded
semantics of P_win on input K. Instance J is such that J(moves) = K(moves) and the values
of the win atoms are given as follows:

true: win(d), win(f)
false: win(e), win(g)
unknown: win(a), win(b), win(c)
We now embark on defining formally the well-founded semantics. We do this in three
steps. First we define the notion of 3-valued instance and extend the notions of truth value
and satisfaction. Then we consider datalog and show the existence of a minimum 3-valued
model for each datalog program. Finally we consider datalog¬ and the notion of 3-stable
model, which is the basis of well-founded semantics.

3-valued Instances   Dealing with three truth values instead of the usual two requires
extending some of the basic notions, like instance and model. As we shall see, this is
straightforward. We will denote true by 1, false by 0, and unknown by 1/2.
Consider a datalog¬ program P and a classical 2-valued instance I. As was done in the
discussion of SLD resolution in Chapter 12, we shall denote by P_I the program obtained
from P by adding to P unit clauses stating that the facts in I are true. Then P(I) = P_I(∅).
For the moment, we shall deal with datalog¬ programs such as these, whose input is
included in the program. Recall that B(P) denotes all facts of the form R(a_1, ..., a_k),
where R is a relation and a_1, ..., a_k constants occurring in P. In particular, B(P_I) =
B(P, I).
Let P be a datalog¬ program. A 3-valued instance I over sch(P) is a total mapping
from B(P) to {0, 1/2, 1}. We denote by I^1, I^{1/2}, and I^0 the sets of atoms in B(P) whose truth
value is 1, 1/2, and 0, respectively. A 3-valued instance I is total, or 2-valued, if I^{1/2} = ∅.
There is a natural ordering ⊑ among 3-valued instances over sch(P), defined by

I ⊑ J iff for each A ∈ B(P), I(A) ≤ J(A).

Note that this is equivalent to I^1 ⊆ J^1 and I^0 ⊇ J^0, and that it generalizes containment for
2-valued instances.
Occasionally, we will represent a 3-valued instance by listing the positive and negative
facts and omitting the undefined ones. For example, the 3-valued instance I, where I(p) =
1, I(q) = 1, I(r) = 1/2, I(s) = 0, will also be written as I = {p, q, ¬s}.
Given a 3-valued instance I, we next define the truth value of Boolean combinations
of facts using the connectives ∧, ∨, ¬, ←. The truth value of a Boolean combination φ of
facts is denoted by Î(φ), defined by

Î(φ ∧ ψ) = min{Î(φ), Î(ψ)}
Î(φ ∨ ψ) = max{Î(φ), Î(ψ)}
Î(¬φ) = 1 − Î(φ)
Î(ψ ← φ) = 1 if Î(ψ) ≥ Î(φ), and 0 otherwise.

The reader should be careful: Known facts about Boolean operators in the 2-valued
context may not hold in this more complex one. For instance, note that the truth value of
p q may be different from that of p q (see Exercise 15.15). To see that the preceding
denition matches the intuition, one might want to verify that with the specic semantics
of used here, the instance J of Example 15.3.1 does satisfy (the ground instantiation
of) P
win,K
. That would not be the case if we dene the semantics of in a more standard
way; by using p q p q.
A 3-valued instance I over sch(P) satises a Boolean combination of atoms in B(P)
iff

I() =1. Given a datalog
()
program P, a 3-valued model of
P
is a 3-valued instance
over sch(P) satisfying the set of implications corresponding to the rules in ground(P).
Example 15.3.2   Recall the program P_win of Example 15.3.1 and the input instance K
and output instance J presented there. Consider these ground sentences:

win(a) ← moves(a, d), ¬win(d)
win(a) ← moves(a, b), ¬win(b).

The first is true for J, because Ĵ(¬win(d)) = 0, Ĵ(moves(a, d)) = 1, Ĵ(win(a)) = 1/2, and
1/2 ≥ 0. The second is true because Ĵ(¬win(b)) = 1/2, Ĵ(moves(a, b)) = 1, Ĵ(win(a)) =
1/2, and 1/2 ≥ 1/2.
Observe that, on the other hand, Ĵ(win(a) ∨ ¬(moves(a, b) ∧ ¬win(b))) = 1/2.
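The connective semantics and the computations of Example 15.3.2 can be replayed in a few lines. The Python sketch below is ours (names such as IMPL are assumptions, not the book's notation); it encodes the relevant truth values of J and checks both readings of the second sentence.

```python
# A minimal sketch (ours) of the 3-valued connectives defined above,
# replaying the checks of Example 15.3.2 on the instance J.
from fractions import Fraction

U = Fraction(1, 2)                  # unknown; true = 1, false = 0

AND, OR = min, max
def NOT(x): return 1 - x
def IMPL(head, body):               # head <- body
    return 1 if head >= body else 0

# Truth values in J (Example 15.3.1) for the facts that occur below.
J = {'moves(a,d)': 1, 'moves(a,b)': 1,
     'win(a)': U, 'win(b)': U, 'win(d)': 1}

# win(a) <- moves(a,d), ¬win(d): body value 0, head value 1/2: true.
assert IMPL(J['win(a)'], AND(J['moves(a,d)'], NOT(J['win(d)']))) == 1
# win(a) <- moves(a,b), ¬win(b): body value 1/2, head value 1/2: true.
assert IMPL(J['win(a)'], AND(J['moves(a,b)'], NOT(J['win(b)']))) == 1
# Under the classical reading p <- q as p ∨ ¬q, the second sentence
# would only get the value 1/2.
assert OR(J['win(a)'], NOT(AND(J['moves(a,b)'], NOT(J['win(b)'])))) == U
```

The last assertion is exactly the reason the nonstandard semantics of ← is used: with the classical reading, J would not satisfy the ground instantiation of P_{win,K}.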
3-valued Minimal Model for Datalog   We next extend the definition and semantics of
datalog programs to the context of 3-valued instances. Although datalog programs do not
contain negation, they will now be allowed to infer positive, unknown, and false facts.
The syntax of a 3-extended datalog program is the same as for datalog, except that the
truth values 0, 1/2, and 1 can occur as literals in bodies of rules. Given a 3-extended
datalog program P, the 3-valued immediate consequence operator 3-T_P of P is a mapping
on 3-valued instances over sch(P) defined as follows. Given a 3-valued instance I and
A ∈ B(P), 3-T_P(I)(A) is

1 if there is a rule A ← body in ground(P) such that Î(body) = 1,
0 if for each rule A ← body in ground(P), Î(body) = 0 (and, in particular, if there is
no rule with A in the head),
1/2 otherwise.
Example 15.3.3   Consider the 3-extended datalog program P = {p ← 1/2; p ← q, 1/2;
q ← p, r; q ← p, s; s ← q; r ← 1}. Then

3-T_P({p, q, r, s}) = {q, r, s}
3-T_P({q, r, s}) = {r, s}
3-T_P({r, s}) = {r}
3-T_P({r}) = {r}.
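The iterations of Example 15.3.3 can be reproduced mechanically. The sketch below is ours (the encoding and the name tp3 are assumptions, not the book's); it starts from the instance in which all four atoms are true and applies the operator four times.

```python
# A sketch (ours) of the 3-valued immediate consequence operator 3-T_P,
# applied to the 3-extended datalog program of Example 15.3.3.
from fractions import Fraction

U = Fraction(1, 2)

# Body literals are atom names or constant truth values.
rules = [('p', [U]),
         ('p', ['q', U]),
         ('q', ['p', 'r']),
         ('q', ['p', 's']),
         ('s', ['q']),
         ('r', [Fraction(1)])]
atoms = ['p', 'q', 'r', 's']

def tp3(inst):
    # 3-T_P(I)(A) is 1 if some rule for A has body value 1, 0 if every
    # rule for A has body value 0 (or A heads no rule), 1/2 otherwise;
    # equivalently, the max over A's rules of the min over the body.
    def val(lit):
        return inst[lit] if lit in inst else lit
    return {a: max((min(val(l) for l in body)
                    for h, body in rules if h == a), default=Fraction(0))
            for a in atoms}

inst = {a: Fraction(1) for a in atoms}     # {p, q, r, s}: all atoms true
for _ in range(4):
    inst = tp3(inst)
# at the fixpoint, r is true and p, q, s are unknown
```

Running the loop step by step reproduces the four applications shown in the example.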
In the following, 3-valued instances are compared with respect to ⊑. Thus least,
minimal, and monotonic are with respect to ⊑ rather than the set inclusion used for
classical 2-valued instances. In particular, note that the minimum 3-valued instance with
respect to ⊑ is the one in which all atoms are false. Let ⊥ denote this particular instance.
With the preceding definitions, extended datalog programs on 3-valued instances
behave similarly to classical programs. The next lemma can be verified easily (Exercise 15.16):

Lemma 15.3.4   Let P be a 3-extended datalog program. Then

1. 3-T_P is monotonic, and the sequence {3-T^i_P(⊥)}_{i>0} is increasing and converges
to the least fixpoint of 3-T_P;
2. P has a unique minimal 3-valued model that equals the least fixpoint of 3-T_P.

The semantics of an extended datalog program P is the minimum 3-valued model of P.
Analogous to conventional datalog, we denote this by P(⊥).
3-stable Models of Datalog¬

We are now ready to look at datalog¬ programs and formally define 3-stable models of a
datalog¬ program P. We bootstrap to the semantics of programs with negation, using the
semantics for 3-extended datalog programs described earlier. Let I be a 3-valued instance
over sch(P). We reduce the problem to that of applying a positive datalog program, as
follows. The positivized ground version of P given I, denoted pg(P, I), is the 3-extended
datalog program obtained from ground(P) by replacing each negative premise ¬A by
Î(¬A) (i.e., 0, 1, or 1/2). Because all negative literals in ground(P) have been replaced by
their truth value in I, pg(P, I) is now a 3-extended datalog program (i.e., a program without
negation). Its least fixpoint pg(P, I)(⊥) contains all the facts that are consequences of P
by assuming the values for the negative premises as given by I. We denote pg(P, I)(⊥)
by conseq_P(I). Thus the intuitive conditions required of 3-stable models now amount to
conseq_P(I) = I.
Definition 15.3.5   Let P be a datalog¬ program. A 3-valued instance I over sch(P) is
a 3-stable model of P iff conseq_P(I) = I.

Observe an important distinction between conseq_P and the immediate consequence
operator used for inflationary datalog¬. For inflationary datalog¬, we assumed that ¬A was
true as long as A was not inferred. Here we just assume in such a case that A is unknown
and try to prove new facts. Of course, doing so requires the 3-valued approach.
Example 15.3.6   Consider the following datalog¬ program P:

p ← ¬r
q ← ¬r, p
s ← ¬t
t ← q, ¬s
u ← ¬t, p, s

The program has three 3-stable models (represented by listing the positive and negative
facts and leaving out the unknown facts):

I_1 = {p, q, t, ¬r, ¬s, ¬u}
I_2 = {p, q, s, ¬r, ¬t, u}
I_3 = {p, q, ¬r}

Let us check that I_3 is a 3-stable model of P. The program P′ = pg(P, I_3) is

p ← 1
q ← 1, p
s ← 1/2
t ← q, 1/2
u ← 1/2, p, s

The minimum 3-valued model of pg(P, I_3) is obtained by iterating 3-T_{P′}(⊥) up to
a fixpoint. Thus we start with ⊥ = {¬p, ¬q, ¬r, ¬s, ¬t, ¬u}. The first application of
3-T_{P′} yields 3-T_{P′}(⊥) = {p, ¬q, ¬r, ¬t, ¬u}. Next (3-T_{P′})^2(⊥) = {p, q, ¬r, ¬t}. Finally
(3-T_{P′})^3(⊥) = (3-T_{P′})^4(⊥) = {p, q, ¬r}. Thus

conseq_P(I_3) = pg(P, I_3)(⊥) = (3-T_{P′})^3(⊥) = I_3,

and I_3 is a 3-stable model of P.
The reader is invited to verify that in Example 15.3.1, the instance J is a 3-stable model
of the program P_{win,K} for the input instance K presented there.
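The 3-stability checks can also be done mechanically: conseq_P(I) is the least fixpoint of the positivized program pg(P, I). The Python sketch below is ours (the encoding, the helper names, and the placement of the negation signs in the rule list follow our reading of the example, not the book's code); it verifies that all three instances are fixpoints of conseq_P.

```python
# A sketch (ours) of conseq_P for the program P of Example 15.3.6,
# used to verify that I_1, I_2, and I_3 are 3-stable models.
from fractions import Fraction

U = Fraction(1, 2)
atoms = 'pqrstu'

# Rules of P as (head, positive atoms, negated atoms):
rules = [('p', [], ['r']),          # p <- ¬r
         ('q', ['p'], ['r']),       # q <- ¬r, p
         ('s', [], ['t']),          # s <- ¬t
         ('t', ['q'], ['s']),       # t <- q, ¬s
         ('u', ['p', 's'], ['t'])]  # u <- ¬t, p, s

def conseq(I):
    # pg(P, I): each ¬B is replaced by its value 1 - I(B); conseq_P(I)
    # is the least fixpoint of the resulting positive program, from ⊥.
    cur = {a: Fraction(0) for a in atoms}
    while True:
        nxt = {a: max((min([cur[b] for b in pos] + [1 - I[b] for b in neg],
                           default=Fraction(1))
                       for h, pos, neg in rules if h == a),
                      default=Fraction(0))
               for a in atoms}
        if nxt == cur:
            return cur
        cur = nxt

def inst(true, false):
    # Build a 3-valued instance from its positive and negative facts.
    return {a: Fraction(1) if a in true else Fraction(0) if a in false else U
            for a in atoms}

I1 = inst('pqt', 'rsu')
I2 = inst('pqsu', 'rt')
I3 = inst('pq', 'r')
assert all(conseq(I) == I for I in (I1, I2, I3))   # each is 3-stable
```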
As seen from the example, datalog¬ programs generally have several 3-stable models.
We will show later that each datalog¬ program has at least one 3-stable model. Therefore
it makes sense to let the final answer consist of the positive and negative facts belonging
to all 3-stable models of the program. As we shall see, the 3-valued instance so obtained is
itself a 3-stable model of the program.
Definition 15.3.7   Let P be a datalog¬ program. The well-founded semantics of P is
the 3-valued instance consisting of all positive and negative facts belonging to all 3-stable
models of P. This is denoted by P^wf(∅), or simply P^wf. Given datalog¬ program P and
input instance I, (P_I)^wf(∅) is denoted P^wf(I).

Thus the well-founded semantics of the program P in Example 15.3.6 is P^wf(∅) =
{p, q, ¬r}. We shall see later that in Example 15.3.1, (P_win)^wf(K) = J.
A Fixpoint Definition

Note that the preceding description of the well-founded semantics, although effective, is
inefficient. The straightforward algorithm yielded by this description involves checking
all possible 3-valued instances of a program, determining which are 3-stable models, and
then taking their intersection. We next provide a simpler, efficient way of computing the
well-founded semantics. It is based on an alternating fixpoint computation that converges
to the well-founded semantics. As a side effect, the proof will show that each datalog¬
program has at least one 3-stable model (and therefore the well-founded semantics is
always defined), something we have not proven. It will also show that the well-founded
model is itself a 3-stable model, in some sense the smallest.
The idea of the computation is as follows. We define an alternating sequence {I_i}_{i≥0} of
3-valued instances that are underestimates and overestimates of the facts known in every
3-stable model of P. The sequence is as follows:

I_0 = ⊥
I_{i+1} = conseq_P(I_i).

Recall that ⊥ is the least 3-valued instance and that all facts have value 0 in ⊥. Also note
that each of the I_i just defined is a total instance. This follows easily from the following
facts (Exercise 15.17):

• if I is total, then conseq_P(I) is total; and
• the I_i are constructed starting from the total instance ⊥ by repeated applications of
conseq_P.
The intuition behind the construction of the sequence {I_i}_{i≥0} is the following. The
sequence starts with ⊥, which is an overestimate of the negative facts in the answer (it
contains all negative facts). From this overestimate we compute I_1 = conseq_P(⊥), which
includes all positive facts that can be inferred from ⊥. This is clearly an overestimate of
the positive facts in the answer, so the set of negative facts in I_1 is an underestimate of the
negative facts in the answer. Using this underestimate of the negative facts, we compute
I_2 = conseq_P(I_1), whose positive facts will now be an underestimate of the positive facts
in the answer. By continuing the process, we see that the even-indexed instances provide
underestimates of the positive facts in the answer and the odd-indexed ones provide underestimates of the negative facts in the answer. Then the limit of the even-indexed instances
provides the positive facts in the answer, and the limit of the odd-indexed instances provides
the negative facts in the answer. This intuition will be made formal later in this section.
It is easy to see that conseq_P is antimonotonic. That is, if I ⊑ J, then conseq_P(J) ⊑
conseq_P(I) (Exercise 15.17). From this and the facts that I_0 ⊑ I_1 and I_0 ⊑ I_2, it immediately follows that, for all i > 0,

I_0 ⊑ I_2 ⊑ ... ⊑ I_{2i} ⊑ I_{2i+2} ⊑ ... ⊑ I_{2i+1} ⊑ I_{2i−1} ⊑ ... ⊑ I_1.
Thus the even subsequence is increasing and the odd one is decreasing. Because there
are finitely many 3-valued instances relative to a given program P, each of these sequences becomes constant at some point. Let I_* denote the limit of the increasing sequence
{I_{2i}}_{i≥0}, and let I^* denote the limit of the decreasing sequence {I_{2i+1}}_{i≥0}. From the aforementioned inequalities, it follows that I_* ⊑ I^*. Moreover, note that conseq_P(I_*) = I^* and
conseq_P(I^*) = I_*. Finally, let I_*^* denote the 3-valued instance consisting of the facts known
in both I_* and I^*; that is,

I_*^*(A) = 1 if I_*(A) = I^*(A) = 1,
I_*^*(A) = 0 if I_*(A) = I^*(A) = 0, and
I_*^*(A) = 1/2 otherwise.

Equivalently, in the notation listing positive and negative facts, I_*^* = (I_*)^1 ∪ {¬A | A ∈ (I^*)^0}. As will be seen shortly, I_*^* = P^wf(∅). Before proving this,
we illustrate the alternating fixpoint computation with several examples.
Example 15.3.8
(a) Consider again the program in Example 15.3.6. Let us perform the alternating fixpoint computation described earlier. We start with I_0 = ⊥ = {¬p, ¬q,
¬r, ¬s, ¬t, ¬u}. By applying conseq_P, we obtain the following sequence of
instances:

I_1 = {p, q, ¬r, s, t, u},
I_2 = {p, q, ¬r, ¬s, ¬t, ¬u},
I_3 = {p, q, ¬r, s, t, u},
I_4 = {p, q, ¬r, ¬s, ¬t, ¬u}.

Thus I_* = I_4 = {p, q, ¬r, ¬s, ¬t, ¬u} and I^* = I_3 = {p, q, ¬r, s, t, u}. Finally
I_*^* = {p, q, ¬r}, which coincides with the well-founded semantics of P computed in Example 15.3.6.
(b) Recall now P_win and input K of Example 15.3.1. We compute I_*^* for the program
P_{win,K}. Note that for I_0 the value of all moves atoms is false, and for each j ≥ 1,
I_j agrees with the input K on the predicate moves; thus we do not show the moves
atoms here. For the win predicate, then, we have

I_1 = {win(a), win(b), win(c), win(d), ¬win(e), win(f), ¬win(g)}
I_2 = {¬win(a), ¬win(b), ¬win(c), win(d), ¬win(e), win(f), ¬win(g)}
I_3 = I_1
I_4 = I_2.

Thus

I_* = I_2 = {¬win(a), ¬win(b), ¬win(c), win(d), ¬win(e), win(f), ¬win(g)}
I^* = I_1 = {win(a), win(b), win(c), win(d), ¬win(e), win(f), ¬win(g)}
I_*^* = {win(d), ¬win(e), win(f), ¬win(g)},

which is the instance J of Example 15.3.1.
(c) Consider the database schema consisting of a binary relation G and a unary
relation good, and the following program defining bad and answer:

bad(x) ← G(y, x), ¬good(y)
answer(x) ← ¬bad(x)

Consider the instance K over G and good, where

K(G) = {⟨b, c⟩, ⟨c, b⟩, ⟨c, d⟩, ⟨a, d⟩, ⟨a, e⟩}, and
K(good) = {a}.

We assume that the facts of the database are added as unit clauses to P, yielding
P_K. Again we perform the alternating fixpoint computation for P_K. We start with
I_0 = ⊥ (containing all negated atoms). Applying conseq_{P_K} yields the following
sequence {I_i}_{i>0}:

      bad                   answer
I_0   ⊥
I_1   {¬a, b, c, d, e}      {a, b, c, d, e}
I_2   {¬a, b, c, d, ¬e}     {a, ¬b, ¬c, ¬d, ¬e}
I_3   {¬a, b, c, d, ¬e}     {a, ¬b, ¬c, ¬d, e}
I_4   {¬a, b, c, d, ¬e}     {a, ¬b, ¬c, ¬d, e}

We have omitted [as in (b)] the facts relating to the edb predicates G and good,
which do not change after step 1.
Thus I_* = I^* = I_*^* = I_3 = I_4. Note that P is stratified and its well-founded
semantics coincides with its stratified semantics. As we shall see, this is not
accidental.
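The alternating fixpoint of part (b) can be sketched in a few lines of Python (ours, not the book's; the instance encoding is an assumption). Since every I_i is total, each is represented by its set of true win atoms, so ⊥ is the empty set.

```python
# A sketch (ours) of the alternating fixpoint I_0 = ⊥, I_{i+1} =
# conseq_P(I_i) for P_win on the input K of Example 15.3.1.
moves = {('b', 'c'), ('c', 'a'), ('a', 'b'), ('a', 'd'),
         ('d', 'e'), ('d', 'f'), ('f', 'g')}
states = {s for edge in moves for s in edge}

def conseq(I):
    # In pg(P_win, I) the only win literal in a rule body is negative,
    # so one application of the rules already gives the least fixpoint:
    # win(x) holds iff some move x -> y has win(y) false in I.
    return {x for (x, y) in moves if y not in I}

seq = [set()]                        # I_0 = ⊥
while len(seq) < 3 or seq[-1] != seq[-3]:
    seq.append(conseq(seq[-1]))

last = len(seq) - 1
i_star = seq[last - (last % 2)]      # limit of the even subsequence, I_*
i_up = seq[last - 1 + (last % 2)]    # limit of the odd subsequence, I^*

wf_true = i_star
wf_false = states - i_up
wf_unknown = i_up - i_star
# These are exactly the values of the instance J of Example 15.3.1:
assert (wf_true, wf_false, wf_unknown) == \
       ({'d', 'f'}, {'e', 'g'}, {'a', 'b', 'c'})
```

The loop stops when I_i = I_{i-2}, at which point both subsequences are constant.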
We now show that the fixpoint construction yields the well-founded semantics for
datalog¬ programs.

Theorem 15.3.9   For each datalog¬ program P,

1. I_*^* is a 3-stable model of P.
2. P^wf(∅) = I_*^*.
Proof   For statement 1, we need to show that conseq_P(I_*^*) = I_*^*. We show that for every
fact A, if I_*^*(A) = α ∈ {0, 1/2, 1}, then conseq_P(I_*^*)(A) = α. From the antimonotonicity of
conseq_P, the fact that I_* ⊑ I_*^* ⊑ I^*, and conseq_P(I_*) = I^*, conseq_P(I^*) = I_*, it follows that

I_* ⊑ conseq_P(I_*^*) ⊑ I^*.

If I_*^*(A) = 0, then I^*(A) = 0, so conseq_P(I_*^*)(A) = 0; similarly for
I_*^*(A) = 1. Now suppose that I_*^*(A) = 1/2. It is sufficient to prove that conseq_P(I_*^*)(A) ≥
1/2. [It is not possible that conseq_P(I_*^*)(A) = 1. If this were the case, the rules used to
infer A would involve only facts whose value is 0 or 1. Because those facts have the same values
in I_* and I^*, the same rules can be used in both pg(P, I_*) and pg(P, I^*) to infer A, so
I_*(A) = I^*(A) = I_*^*(A) = 1, which contradicts the hypothesis that I_*^*(A) = 1/2.]
We now prove that conseq_P(I_*^*)(A) ≥ 1/2. By the definition of I_*^*, I_*(A) = 0 and
I^*(A) = 1. Recall that conseq_P(I_*) = I^*, so conseq_P(I_*)(A) = 1. In addition, conseq_P(I_*)
is the limit of the sequence {3-T^i_{pg(P,I_*)}(⊥)}_{i>0}. Let stage(A) be the minimum i such that
3-T^i_{pg(P,I_*)}(⊥)(A) = 1. We prove by induction on stage(A) that conseq_P(I_*^*)(A) ≥ 1/2. Suppose that stage(A) = 1. Then there exists in ground(P) a rule of the form A ←, or
one of the form A ← ¬B_1, ..., ¬B_n, where I_*(B_j) = 0, 1 ≤ j ≤ n. However, the first
case cannot occur, for otherwise conseq_P(I^*)(A) must also equal 1, so I_*(A) = 1 and
therefore I_*^*(A) = 1, contradicting the fact that I_*^*(A) = 1/2. By the same argument,
I_*^*(B_j) ≠ 1, so I_*^*(B_j) ≤ 1/2, 1 ≤ j ≤ n. Consider now pg(P, I_*^*). Because I_*^*(B_j) ≤
1/2, 1 ≤ j ≤ n, the second rule yields conseq_P(I_*^*)(A) ≥ 1/2. Now suppose that the statement is true for stage(A) = i, and suppose that stage(A) = i + 1. Then there exists a rule
A ← A_1, ..., A_m, ¬B_1, ..., ¬B_n such that I_*(B_j) = 0 and 3-T^i_{pg(P,I_*)}(⊥)(A_k) = 1 for each j and
k. Because I_*(B_j) = 0, I_*^*(B_j) ≤ 1/2, so the value 1 − I_*^*(B_j) substituted for ¬B_j in pg(P, I_*^*) is ≥ 1/2. In addition, by the induction hypothesis, conseq_P(I_*^*)(A_k) ≥ 1/2. It follows that conseq_P(I_*^*)(A) ≥ 1/2, and the induction
is complete. Thus conseq_P(I_*^*) = I_*^*, and I_*^* is a 3-stable model of P.
Consider statement 2. We have to show that the positive and negative facts in I_*^* are
those belonging to every 3-stable model M of P. Because I_*^* is itself a 3-stable model of
P, it contains the positive and negative facts belonging to every 3-stable model of P. It
remains to show the converse (i.e., that the positive and negative facts in I_*^* belong to every
3-stable model of P). To this end, we first show that for each 3-stable model M of P and
i ≥ 0,

(‡) I_{2i} ⊑ M ⊑ I_{2i+1}.

The proof is by induction on i. For i = 0, we have

I_0 = ⊥ ⊑ M.

Because conseq_P is antimonotonic, conseq_P(M) ⊑ conseq_P(I_0). Now conseq_P(I_0) = I_1,
and because M is 3-stable, conseq_P(M) = M. Thus we have

I_0 ⊑ M ⊑ I_1.

The induction step is similar and is omitted.
By (‡), I_* ⊑ M ⊑ I^*. Now a positive fact in I_*^* is in I_*, and so is in M because I_* ⊑ M.
Similarly, a negative fact in I_*^* is in I^*, and so is in M because M ⊑ I^*.
Note that the proof of statement 2 above formalizes the intuition that the I_{2i} provide
underestimates of the positive facts in all acceptable answers (3-stable models) and the
I_{2i+1} provide underestimates of the negative facts in those answers. The fact that P^wf(∅)
is a minimal model of P is left for Exercise 15.19.
Variations of the alternating fixpoint computation can be obtained by starting with
initial instances different from ⊥. For example, it may make sense to start with the content
of the edb relations as an initial instance. Such variations are sometimes useful for technical
reasons. It turns out that the resulting sequences still compute the well-founded semantics.
We show the following:
Proposition 15.3.10   Let P be a datalog¬ program. Let {I′_i}_{i≥0} be defined in the same
way as the sequence {I_i}_{i≥0}, except that I′_0 is some total instance such that

I′_0 ⊑ P^wf(∅).

Then

I′_0 ⊑ I′_2 ⊑ ... ⊑ I′_{2i} ⊑ I′_{2i+2} ⊑ ... ⊑ I′_{2i+1} ⊑ I′_{2i−1} ⊑ ... ⊑ I′_1,

and (using the same notation as before),

I′_*^* = P^wf(∅).
Proof   Let us compare the sequences {I_i}_{i≥0} and {I′_i}_{i≥0}. Because I′_0 ⊑ P^wf(∅) and I′_0 is
total, it easily follows that I′_0 ⊑ I_*. Thus ⊥ = I_0 ⊑ I′_0 ⊑ I_*. From the antimonotonicity of
the conseq_P operator and the fact that conseq_P^2(I_*) = I_*, it follows that I_{2i} ⊑ I′_{2i} ⊑ I_* for
all i, i ≥ 0. Thus I′_* = I_*. Then

I′^* = conseq_P(I′_*) = conseq_P(I_*) = I^*,

so I′_*^* = I_*^* = P^wf(∅).
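The proposition can be spot-checked on the game program of Example 15.3.1. In the sketch below (ours; total instances are represented by their sets of true win atoms, an encoding assumption), starting the alternation from the total instance in which only win(d) and win(f) are true, which is ⊑ P^wf(∅), produces the same limits as starting from ⊥.

```python
# Spot-checking Proposition 15.3.10 on P_win and the input K of
# Example 15.3.1 (a sketch, ours).
moves = {('b', 'c'), ('c', 'a'), ('a', 'b'), ('a', 'd'),
         ('d', 'e'), ('d', 'f'), ('f', 'g')}

def conseq(I):
    # win(x) holds iff some move x -> y has win(y) false in I.
    return {x for (x, y) in moves if y not in I}

def limits(start):
    # Run the alternation from a given total instance until both the
    # even and the odd subsequence are constant; return their limits.
    seq = [start]
    while len(seq) < 3 or seq[-1] != seq[-3]:
        seq.append(conseq(seq[-1]))
    last = len(seq) - 1
    even = seq[last - (last % 2)]
    odd = seq[last - 1 + (last % 2)]
    return even, odd

# I'_0 = {win(d), win(f)} is total and below the well-founded semantics
# (whose true facts are exactly win(d) and win(f)); the limits match:
assert limits({'d', 'f'}) == limits(set()) == \
       ({'d', 'f'}, {'a', 'b', 'c', 'd', 'f'})
```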
As noted earlier, the instances in the sequence {I_i}_{i≥0} are total. A slightly different
alternating fixpoint computation, formulated only in terms of positive and negative facts,
can be defined. This is explored in Exercise 15.25.
Finally, the alternating fixpoint computation of the well-founded semantics involves
looking at the ground rules of the given program. However, one can clearly compute the
semantics without having to explicitly look at the ground rules. We show in Section 15.4
how the well-founded semantics can be computed by a fixpoint query.
Well-Founded and Stratified Semantics Agree

Because the well-founded semantics provides semantics to all datalog¬ programs, it does
so in particular for stratified programs. Example 15.3.8(c) showed one stratified program
for which stratified and well-founded semantics coincide. Fortunately, as shown next,
stratified and well-founded semantics are always compatible. Thus if a program is stratified,
then the stratified and well-founded semantics agree.
A datalog¬ program P is said to be total if P^wf(I) is total for each input I over edb(P).

Theorem 15.3.11   If P is a stratified datalog¬ program, then P is total under the well-founded semantics, and for each 2-valued instance I over edb(P), P^wf(I) = P^strat(I).
Proof   Let P be stratified, and let input I_0 over edb(P) be fixed. The idea of the proof is
the following. Let J be a 3-stable model of P_{I_0}. We shall show that J = P^strat(I_0). This will
imply that P^strat(I_0) is the unique 3-stable model for P_{I_0}. In particular, it contains only the
positive and negative facts in all 3-stable models of P_{I_0} and is thus P^wf(I_0).
For the proof, we will need to develop some notation.
For the proof, we will need to develop some notation.
Notation for the stratication: Let P
1
, . . . , P
n
be a stratication of P. Let P
0
=
I
0
(i.e.,
the program corresponding to all of the facts in I
0
). For each k in [0, n],
let S
k
=idb(P
k
) (S
0
is edb(P));
S
[0,k]
=
i[0,k]
S
i
; and
396 Negation in Datalog
I
k
=(P
1
P
k
)
strat
(I
0
) =I
n
|S
[0,k]
(and, in particular, P
strat
(I
0
) =I
n
).
Notation for the 3-stable model: Let P̂ = pg(P_{I_0}, J). Recall that because J is 3-stable for
P_{I_0},

J = conseq_{P_{I_0}}(J) = lim_{i≥0} 3-T^i_{P̂}(⊥).

For each k in [0, n],

• let J_k = J|S_{[0,k]}; and
• P̂_{k+1} = pg(P_{k+1} ∪ J_k, J_k) = pg(P_{k+1} ∪ J_k, J).

[Note that pg(P_{k+1} ∪ J_k, J_k) = pg(P_{k+1} ∪ J_k, J) because all the negations in P_{k+1} are over predicates in S_{[0,k]}.]
To demonstrate the result, we will show by induction on k ∈ [0, n] that

(*) ∃ l_k ≥ 0 such that ∀ i ≥ 0, J_k = 3-T^{l_k+i}_{P̂}(⊥) | S_{[0,k]} = I_k.

Clearly, for k = n, (*) demonstrates the result.
The case where k = 0 is satisfied by setting l_0 = 1, because J_0 = 3-T^{1+i}_{P̂}(⊥)|S_0 = I_0
for each i ≥ 0.
Suppose now that (*) is true for some k ∈ [0, n − 1]. Then for each i ≥ 0, by the choice
of P̂_{k+1}, the form of P_{k+1}, and (*),

(1) T^i_{P_{k+1}}(I_k)|S_{k+1} ⊆ 3-T^{i+1}_{P̂_{k+1}}(⊥)|S_{k+1} ⊆ T^{i+1}_{P_{k+1}}(I_k)|S_{k+1}.

(Here and later, ⊆ denotes the usual 2-valued containment between instances; this is well
defined because all instances considered are total, even if J is not.) In (1), the 3-T^{i+1}_{P̂_{k+1}}
and T^{i+1}_{P_{k+1}} terms may not be equal, because the positive atoms of I_k = J_k are available
when applying T_{P_{k+1}} the first time but are available only during the second application of
3-T_{P̂_{k+1}}. On the other hand, the T^i_{P_{k+1}} and 3-T^{i+1}_{P̂_{k+1}} terms may not be equal (e.g., if there is
a rule of the form A ← in P_{k+1}).
By (1) and finiteness of the input, there is some m ≥ 0 such that for each i ≥ 0,

(2) I_n|S_{k+1} = T^{m+i}_{P_{k+1}}(I_k)|S_{k+1} = 3-T^{m+i}_{P̂_{k+1}}(⊥)|S_{k+1}.
This is almost what is needed to complete the induction, except that P̂_{k+1} is used instead
of P̂. However, observe that for each i ≥ 0,

(3) 3-T^i_{P̂}(⊥)|S_{k+1} ⊆ 3-T^i_{P̂_{k+1}}(⊥)|S_{k+1},

because 3-T^i_{P̂}(⊥)|S_{[0,k]} ⊆ J_k for each i ≥ 0 by the induction hypothesis. Finally, observe
that for each i ≥ 0,

(4) 3-T^i_{P̂_{k+1}}(⊥)|S_{k+1} ⊆ 3-T^{i+l_k}_{P̂}(⊥)|S_{k+1},
because 3-T^{l_k}_{P̂}(⊥)|S_{[0,k]} contains all of the positive atoms of J_k.
Then for each i ≥ 0 we have

3-T^{m+i}_{P̂_{k+1}}(⊥)|S_{k+1} ⊆ 3-T^{m+i+l_k}_{P̂}(⊥)|S_{k+1}       by (4)
                     ⊆ 3-T^{m+i+l_k}_{P̂_{k+1}}(⊥)|S_{k+1}     by (3)
                     ⊆ 3-T^{m+i}_{P̂_{k+1}}(⊥)|S_{k+1}         by (2).

It follows that

(5) 3-T^{m+i}_{P̂_{k+1}}(⊥)|S_{k+1} = 3-T^{m+i+l_k}_{P̂}(⊥)|S_{k+1}.

Set l_{k+1} = l_k + m. Combining (2) and (5), we have, for each i ≥ 0,

J|S_{k+1} = 3-T^{l_{k+1}+i}_{P̂}(⊥)|S_{k+1} = I_n|S_{k+1}.

Together with the inductive hypothesis, we obtain for each i ≥ 0 that

J|S_{[0,k+1]} = 3-T^{l_{k+1}+i}_{P̂}(⊥)|S_{[0,k+1]} = I_n|S_{[0,k+1]},

which concludes the proof.
As just seen, each stratifiable program is total under the well-founded semantics. However, as indicated by Example 15.3.8(b), a datalog¬ program P may yield a 3-valued model
P^wf(I) on some inputs. Furthermore, there are programs that are not stratified but whose
well-founded models are nonetheless total (see Exercise 15.22). Unfortunately, there can
be no effective characterization of those datalog¬ programs whose well-founded semantics
is total for all input databases (Exercise 15.23). One can find sufficient syntactic conditions
that guarantee the totality of the well-founded semantics, but this quickly becomes a tedious endeavor. It has been shown, however, that for each datalog¬ program P, one can
find another program whose well-founded semantics is total on all inputs and that produces
the same positive facts as the well-founded semantics of P.
15.4 Expressive Power
In this section, we examine the expressive power of datalog¬ with the various semantics
for negation we have considered. More precisely, we focus on semipositive, stratified, and
well-founded semantics. We first look at the relative power of these semantics and show
that semipositive programs are weaker than stratified ones, which in turn are weaker than well-founded ones. Then we look at the connection with languages studied in Chapter 14 that also
use recursion and negation. We prove that well-founded semantics can express precisely
the fixpoint queries.
Finally, we look at the impact of order on expressive power. An ordered database
contains a special binary relation succ that provides a successor relation on all constants
in the active domain. Thus the constants are ordered by succ and in fact can be viewed
as integers. The impact of assuming that a database is ordered is examined at length
in Chapter 17. Rather surprisingly, we show that in the presence of order, semipositive
programs are as powerful as programs with well-founded semantics. In particular, all three
semantics are equivalent and express precisely the fixpoint queries.
We begin by briefly noting the connection between stratified datalog¬ and relational
calculus (and algebra). To see that stratified datalog¬ can express all queries in CALC,
recall the nonrecursive datalog¬ (nr-datalog¬) programs introduced in Chapter 5. Clearly,
these are stratified datalog¬ programs in which recursion is not allowed. Theorem 5.3.10
states that nr-datalog¬ (with one answer relation) and CALC are equivalent. It follows that
stratified datalog¬ can express all of CALC. Because transitive closure of a graph can be
expressed in stratified datalog¬ but not in CALC (see Proposition 17.2.3), it follows that
stratified datalog¬ is strictly stronger than CALC.
Stratified Datalog Is Weaker than Fixpoint

Let us look at the expressive power of stratified datalog¬. Computationally, stratified programs provide recursion and negation and are inflationary. Therefore one might expect that they express the fixpoint queries. It is easy to see that all stratified datalog¬ programs express fixpoint queries (Exercise 15.28). In particular, this shows that such programs can be evaluated in polynomial time. Can stratified datalog¬ express all fixpoint queries? Unfortunately, no. The intuitive reason is that in stratified datalog¬ there is no recursion through negation, so the number of applications of negation is bounded. In contrast, fixpoint queries allow recursion through negation, so there is no bound on the number of applications of negation. This distinction turns out to be crucial. We next outline the main points of the argument, showing that stratified datalog¬ is indeed strictly weaker than fixpoint.

The proof uses a game played on so-called game trees. The game is played on a given tree. The nodes of the tree are the possible positions in the game, and the edges are the possible moves from one position to another. Additionally, some leaves of the tree are labeled black. The game is between two players. A round of the game starting at node x begins with Player I making a move from x to one of its children y. Player II then makes a move from y, etc. The game ends when a leaf is reached. Player I wins if Player II picks a black leaf. For a given tree (with labels), Player I has a winning strategy for the game starting at node x if he or she can win starting at x no matter how Player II plays. We are interested in programs determining whether there is such a winning strategy.

The game tree is represented as follows. The set of possible moves is given by a binary relation move and the set of black nodes by a unary relation black. Consider the query winning (not to be confused with the predicate win of Example 15.3.1), which asks if Player I has a winning strategy starting at the root of the tree. We will define a set 𝒢 of game trees such that

(i) the query winning on the game trees in 𝒢 is definable by a fixpoint query, and
(ii) for each stratified program P, there exist game trees G, G' ∈ 𝒢 such that winning is true on G and false on G', but P cannot distinguish between G and G'.

Clearly, (ii) shows that the winning query on game trees is not definable by a stratified
datalog¬ program. The set 𝒢 of game trees is defined next. It consists of the G_{l,k} and G'_{l,k} defined by induction as follows:

- G_{0,k} and G'_{0,k} have no moves and just one node, labeled black in G_{0,k} and not labeled in G'_{0,k}.
- G_{i+1,k} consists of a copy of G'_{i,k}, k disjoint copies of G_{i,k}, and a new root d_{i+1}. The moves are the union of the moves in the copies of G'_{i,k} and G_{i,k} together with new moves from the root d_{i+1} to the roots of the copies. The labels remain unchanged.
- G'_{i+1,k} consists of k + 1 disjoint copies of G_{i,k} and a new root d'_{i+1} from which moves are possible to the roots of the copies of G_{i,k}.

The game trees G_{4,1} and G'_{4,1} are represented in Fig. 15.2. It is easy to see that winning is true on the game trees G_{2i,k} and false on the game trees G'_{2i,k}, i > 0 (Exercise 15.30).
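As a sanity check on this construction, here is a minimal Python sketch (the tuple encoding of trees and the recursive evaluation of winning are our own assumptions, not the book's notation) that builds G_{i,k} and G'_{i,k} and evaluates the winning query directly:

```python
def G(i, k):
    # G_{i,k}: a black leaf for i = 0; otherwise a new root over one copy
    # of G'_{i-1,k} and k copies of G_{i-1,k}.
    if i == 0:
        return ("leaf", [], True)
    return ("node", [Gp(i - 1, k)] + [G(i - 1, k) for _ in range(k)], False)

def Gp(i, k):
    # G'_{i,k}: an unlabeled leaf for i = 0; otherwise a new root over
    # k + 1 copies of G_{i-1,k}.
    if i == 0:
        return ("leaf", [], False)
    return ("node", [G(i - 1, k) for _ in range(k + 1)], False)

def winning(x):
    # Player I moves to some child y; Player II answers with a child z of y.
    # Player I wins from x if, for some y, every reply z is a black leaf,
    # or every reply z is again a winning position (on finite trees the
    # fixpoint unfolds into this recursion).
    _, ys, _ = x
    return any(
        all(black for (_, _, black) in zs) or all(winning(z) for z in zs)
        for (_, zs, _) in ys
    )
```

The check agrees with the claim: winning holds on G_{2,1} and G_{4,1} but fails on G'_{2,1} and G'_{4,1}.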
We first note that the query winning on game trees in 𝒢 can be defined by a fixpoint query. Consider

φ(T) = ∃y[Move(x, y) ∧ ∀z(Move(y, z) → Black(z))]
     ∨ ∃y[Move(x, y) ∧ ∀z(Move(y, z) → T(z))].

Figure 15.2: Game trees G_{4,1} and G'_{4,1}
It is easy to verify that winning is defined by μT(φ(T))(root), where root is the root of the game tree (Exercise 15.30). Next we note that the winning query is not expressible by any stratified datalog¬ program. To this end, we use the following result, stated without proof.
Lemma 15.4.1 For each stratified datalog¬ program P, there exist i, k such that

P(G_{i,k})(winning) = P(G'_{i,k})(winning).

The proof of Lemma 15.4.1 uses an extension of Ehrenfeucht-Fraïssé games (the games are described in Chapter 17). The intuition of the lemma is that, to distinguish between G_{i,k} and G'_{i,k} for i and k sufficiently large, one needs to apply more negations than the fixed number allowed by P. Thus no stratified program can distinguish between all the G_{i,k} and G'_{i,k}. In particular, it follows that the fixpoint query winning is not equivalent to any stratified datalog¬ program. Thus we have the following result, settling the relationship between stratified datalog¬ and the fixpoint queries.


Theorem 15.4.2 The class of queries expressible by stratified datalog¬ programs is strictly included in the fixpoint queries.
Remark 15.4.3 The game tree technique can also be used to prove that the number of strata in stratified datalog¬ programs has an impact on expressive power. Specifically, let Strat_i consist of all queries expressible by stratified datalog¬ programs with i strata. Then it can be shown that for all i, Strat_i ⊊ Strat_{i+1}. In particular, semipositive datalog¬ is weaker than stratified datalog¬.
Well-Founded Datalog¬ Is Equivalent to Fixpoint

Next we consider the expressive power of datalog¬ programs with well-founded semantics. We prove that well-founded semantics can express precisely the fixpoint queries. We begin by showing that the well-founded semantics can be computed by a fixpoint query. More precisely, we show how to compute the set of false, true, and undefined facts of the answer using a while+ program (see Chapter 14 for the definition of while+ programs).
Theorem 15.4.4 Let P be a datalog¬ program. There exists a while+ program w with input relations edb(P), such that

1. w contains, for each relation R in sch(P), three relation variables R^ε_answer, where ε ∈ {0, 1/2, 1};
2. for each instance I over edb(P), u ∈ w(I)(R^ε_answer) iff P^wf(I)(R(u)) = ε, for ε ∈ {0, 1/2, 1}.
Crux Let P be a datalog¬ program. The while+ program mimics the alternating fixpoint computation of P^wf. Recall that this involves repeated applications of the operator conseq_P, resulting in the sequence

I_0 ⊆ I_2 ⊆ . . . ⊆ I_{2i} ⊆ I_{2i+2} ⊆ . . . ⊆ I_{2i+1} ⊆ I_{2i-1} ⊆ . . . ⊆ I_1.

Recall that the I_i are all total instances. Thus 3-valued instances are only required to produce the final answer from the limits of the even and odd subsequences at the end of the computation, by one last first-order query.
It is easily verified that while+ can simulate one application of conseq_P on total instances (Exercise 15.27). The only delicate point is to make sure the computation is inflationary. To this end, the program w will distinguish between results of even and odd iterations of conseq_P by having, for each R, an odd and an even version R^0_odd and R^1_even. R^0_odd holds at iteration 2i + 1 the negative facts of R in I_{2i+1}, and R^1_even holds at iteration 2i the positive facts of R in I_{2i}. Note that both R^0_odd and R^1_even are increasing throughout the computation.
We elaborate on the simulation of the operator conseq_P on a total instance I. The program w will have to distinguish between facts in the input I, used to resolve the negative premises of rules in P, and those inferred by applications of 3-T_P. Therefore for each relation R, the while+ program will also maintain copies R̄_even and R̄_odd to hold the facts produced by consecutive applications of 3-T_P in the even and odd cases, respectively. More precisely, the R̄_odd hold the positive facts inferred from input I_{2i} represented in R^1_even, and the R̄_even hold the positive facts inferred from input I_{2i+1} represented in R^0_odd. It is easy to write a first-order query defining one application of 3-T_P for the even or odd cases. Because the representations of the input are different in the even and odd cases, different programs must be used in the two cases. This can be iterated in an inflationary manner, because the set of positive facts inferred in consecutive applications of 3-T_P is always increasing. However, the R̄_odd and R̄_even have to be initialized to ∅ at each application of conseq_P. Because the computation must be inflationary, this cannot be done directly. Instead, timestamping must be used. The initialization of the R̄_odd and R̄_even is simulated by timestamping each relation with the current content of R^1_even and R^0_odd, respectively. This is done in a manner similar to the proofs of Chapter 14.
We now exhibit a converse of Theorem 15.4.4, showing that any fixpoint query can essentially be simulated by a datalog¬ program with well-founded semantics. More precisely, the positive portion of the well-founded semantics yields the same facts as the fixpoint query. Example 15.4.6 illustrates the proof of this result.

Theorem 15.4.5 Let q be a fixpoint query over input schema R. There exists a datalog¬ program P such that edb(P) = R, P has an idb relation answer, and for each instance I over R, the positive portion of answer in P^wf(I) coincides with q(I).
Crux We will use the definition of fixpoint queries by iterations of positive first-order formulas. Let q be a fixpoint query. As discussed in Chapter 14, there exists a CALC formula φ(T), positive in T, such that q is defined by μT(φ(T))(u), where u is a vector of variables and constants. Consider the CALC formula φ(T). As noted earlier in this section, there is an nr-datalog¬ program P' with one answer relation R' such that P' is equivalent to φ(T). Because φ(T) is positive in T, along any path in the syntax tree of φ(T) ending with atom T there is an even number of negations. This is also true of paths in G_{P'}.
Consider the precedence graph G_{P'} of P'. Clearly, one can construct P' such that each idb relation except T is used in the definition of exactly one other idb relation, and all idb relations are used eventually in the definition of the answer R'. In other words, for each idb relation R other than T, there is a unique path in G_{P'} from R to R'. Consider the paths from T to some idb relation R in P'. Without loss of generality, we can assume that all paths have the same number of negations (otherwise, because all paths to T have an even number of negations, additional idb relations can be introduced to pad the paths with fewer negations, using rules that perform redundant double negations). Let the rank of an idb relation R in P' be the number of negations on each path leading from T to R in G_{P'}.
Now let P be the datalog¬ program obtained from P' as follows:

- replace the answer relation R' by T;
- add one rule answer(v) ← T(u), where v is the vector of distinct variables occurring in u, in order of occurrence.

The purpose of replacing R' by T is to cause program P' to iterate, yielding μT(φ(T)). The last rule is added to perform the final selection and projection needed to obtain the answer μT(φ(T))(u). Note that, in some sense, P is almost stratified, except for the fact that the result T is fed back into the program.
Consider the alternating fixpoint sequence {I_i}_{i≥0} in the computation of P^wf(I). Suppose R' has rank q in P', and let R be an idb relation of P' whose rank in P' is r ≤ q. Intuitively, there is a close correspondence between the sequence {I_i}_{i≥0} and the iterations of φ, along the following lines: Each application of conseq_P propagates the correct result from relations of rank r in P' to relations of rank r + 1. There is one minor glitch, however: In the fixpoint computation, the edb relations are given, and even at the first iteration, their negation is taken to be their complement; in the alternating fixpoint computation, all negative literals, including those involving edb relations, are initially taken to be true. This results in a mismatch. To fix the problem, consider a variation of the alternating fixpoint computation of P^wf(I) defined as follows:

I_0 = I ∪ {¬R(a_1, . . . , a_n) | R ∈ idb(P), R(a_1, . . . , a_n) ∈ B(P, I)}
I_{i+1} = conseq_P(I_i).

Clearly, I_0 ⊆ P^wf(I). Then, by Proposition 15.3.10, the limit I_∗ of this sequence equals P^wf(I).
Now the following can be verified by induction for each idb relation R of rank r:

For each i, (I_{iq+r})^1 contains exactly the facts of R true in P'(φ^i(∅)).

Intuitively, this is so because each application of conseq_P propagates the correct result across one application of negation to an idb predicate. Because R' has rank q, it takes q applications to simulate a complete application of P'. In particular, it follows that for each i, (I_{iq})^1 contains in T the facts true in φ^i(∅). Thus (I_∗)^1 contains in T the facts true in μT(φ(T)). Finally answer is obtained by a simple selection and projection from T using the last rule in P and yields μT(φ(T))(u).
15.4 Expressive Power 403
In the preceding theorem, the positive portion of answer for P
wf
(I) coincides with
q(I). However, P
wf
(I) is not guaranteed to be total (i.e., it may contain unknown facts).
Using a recent result (not demonstrated here), a program Q can be found such that Q
wf
always provides a total answer, and such that the positive facts of P
wf
and Q
wf
coincide
on all inputs.
Recall from Chapter 14 that datalog¬ with inflationary semantics also expresses precisely the fixpoint queries. Thus we have converged again, this time by the deductive database path, to the fixpoint queries. This bears witness, once more, to the naturalness of this class. In particular, the well-founded and inflationary semantics, although very different, have the same expressive power (modulo the difference between 3-valued and 2-valued models).
Example 15.4.6 Consider the fixpoint query μ_good(φ(good))(x), where

φ(good) = ¬∃y(G(y, x) ∧ ¬good(y)).

Recall that this query, also encountered in Chapter 14, computes the good nodes of the graph G (i.e., those that cannot be reached from a cycle). The nr-datalog¬ program P' corresponding to one application of φ(good) is the one exhibited in Example 15.3.8(c):

bad(x) ← G(y, x), ¬good(y)
R'(x) ← ¬bad(x)

Note that bad is negative in P' and has rank one, and good is positive. The answer R' has rank two. The program P is as follows:

bad(x) ← G(y, x), ¬good(y)
good(x) ← ¬bad(x)
answer(x) ← good(x)

Consider the input graph

G = {⟨b, c⟩, ⟨c, b⟩, ⟨c, d⟩, ⟨a, d⟩, ⟨a, e⟩}.

The consecutive values of φ^i(∅) are

φ(∅) = {a}, φ^2(∅) = {a, e}, φ^3(∅) = {a, e}.
Thus μ_good(φ(good))(x) yields the answer {a, e}. Consider now the alternating fixpoint sequence in the computation of P^wf on the same input (only the positive facts of bad and good are listed, because G does not change and answer = good).

        bad           good
I_0     ∅             ∅
I_1     {b, c, d, e}  {a, b, c, d, e}
I_2     ∅             {a}
I_3     {b, c, d}     {a, b, c, d, e}
I_4     ∅             {a, e}
I_5     {b, c, d}     {a, b, c, d, e}
I_6     ∅             {a, e}

Thus

φ(∅) = (I_2)^1(good), φ^2(∅) = (I_4)^1(good)

and

(I_4)^1(answer) = μ_good(φ(good))(x).
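The table above can be reproduced mechanically. Below is a minimal Python sketch of the alternating fixpoint for this particular program (the set-based encoding is an assumption): conseq resolves the negative premises ¬good and ¬bad against the previous instance and then derives the positive facts.

```python
# Input graph of Example 15.4.6 and its active domain.
G = {("b", "c"), ("c", "b"), ("c", "d"), ("a", "d"), ("a", "e")}
nodes = {x for edge in G for x in edge}

def conseq(prev_bad, prev_good):
    # bad(x) <- G(y, x), not good(y): not-good is resolved against prev_good.
    bad = {x for (y, x) in G if y not in prev_good}
    # good(x) <- not bad(x): not-bad is resolved against prev_bad.
    good = {x for x in nodes if x not in prev_bad}
    return bad, good

# Alternating fixpoint sequence I_0, I_1, ..., I_6 (positive facts only).
seq = [(set(), set())]
for _ in range(6):
    seq.append(conseq(*seq[-1]))
```

The even-indexed instances underapproximate and the odd-indexed ones overapproximate the positive facts; the sequence stabilizes with good = {a, e} from I_4 on, matching the table.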
The relative expressive power of the various languages discussed in this chapter is summarized in Fig. 15.3. The arrows indicate strict inclusion. For a view of these languages in a larger context, see also Figs. 18.4 and 18.5 at the end of Part E.

Figure 15.3: Relative expressive power of datalog(¬) languages (datalog, semipositive datalog¬, stratified datalog¬, and datalog¬ under fixpoint and well-founded semantics, ordered by strict inclusion)

The Impact of Order

Finally we look at the impact of order on the expressive power of the various datalog¬ semantics. As we will discuss at length in Chapter 17, the assumption that databases are ordered can have a dramatic impact on the expressive power of languages like fixpoint or while. The datalog¬ languages are no exception. The effect of order is spectacular. With this assumption, it turns out that semipositive datalog¬ is (almost) as powerful as stratified datalog¬ and datalog¬ with well-founded semantics. The "almost" comes from a technicality concerning the order: We also need to assume that the minimum and maximum constants are explicitly given. Surprisingly, these constants, which can be computed with a first-order query if succ is given, cannot be computed with semipositive programs (see Exercise 15.29).

The next lemma states that semipositive programs express the fixpoint queries on ordered databases with min and max (i.e., databases with a predicate succ providing a successor relation among all constants, and unary relations min and max containing the smallest and the largest constant).
Lemma 15.4.7 The semipositive datalog¬ programs express precisely the fixpoint queries on ordered databases with min and max.
Crux Let q be a fixpoint query over database schema R. Because q is a fixpoint query, there is a first-order formula φ(T), positive in T, such that q is defined by μT(φ(T))(u), where u is a vector of variables and constants. Because T is positive in φ(T), we can assume that φ(T) is in prenex normal form Q_1 x_1 Q_2 x_2 . . . Q_k x_k (ψ), where ψ is a quantifier-free formula in disjunctive normal form and T is not negated in ψ. We show by induction on k that there exists a semipositive datalog¬ program P_φ with an idb relation answer defining μT(φ(T)) [the last selection and projection needed to obtain the final answer μT(φ(T))(u) pose no problem]. Suppose k = 0 (i.e., φ = ψ). Then P_φ is the nr-datalog¬ program corresponding to ψ, where the answer relation is T. Because ψ is quantifier free and T is not negated in ψ, P_φ is clearly semipositive. Next suppose the statement is true for some k ≥ 0, and let φ(T) have quantifier depth k + 1. There are two cases:

(i) φ = ∃x ψ(x, v), where ψ has quantifier depth k. Then P_φ contains the rules of P_ψ, where T is replaced in heads of rules by a new predicate T', and one additional rule

T(v) ← T'(x, v).

(ii) φ = ∀x ψ(x, v), where ψ has quantifier depth k. Then P_φ consists, again, of P_ψ, where T is replaced in heads of rules by a new predicate T', with the following rules added:

R'(x, v) ← T'(x, v), min(x)
R'(x', v) ← R'(x, v), succ(x, x'), T'(x', v)
T(v) ← R'(x, v), max(x),

where R' is a new auxiliary predicate. Thus the program steps through all x's using the successor relation succ, starting from the minimum constant. If the maximum constant is reached, then T'(x, v) is satisfied for all x, and T(v) is inferred.

This completes the induction.
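The scan in case (ii) is just a linear walk along succ. A minimal Python sketch (the dictionary encoding of succ and the relation names are assumptions) of how the three rules evaluate ∀x T'(x, v) on an ordered domain:

```python
# Ordered domain 1..4 encoded by its successor relation, with min and max.
succ = {1: 2, 2: 3, 3: 4}
mn, mx = 1, 4

def forall_holds(t_prime, v):
    # R'(x, v)  <- T'(x, v), min(x)                  : start the scan at min.
    # R'(x', v) <- R'(x, v), succ(x, x'), T'(x', v)  : extend it one step.
    # T(v)      <- R'(x, v), max(x)                  : the scan reached max.
    x = mn
    if (x, v) not in t_prime:
        return False
    while x in succ:
        x = succ[x]
        if (x, v) not in t_prime:
            return False
    return True  # every x between min and max satisfies T'(x, v)
```

A single missing witness anywhere between min and max breaks the chain of R' facts, so T(v) is never inferred; this mirrors why the construction needs both constants explicitly.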
As we shall see in Chapter 17, fixpoint expresses on ordered databases exactly the queries computable in time polynomial in the size of the database (i.e., qptime). Thus we obtain the following result. In comparing well-founded semantics with the others, we take the positive portion of the well-founded semantics as the answer.

Theorem 15.4.8 Stratified datalog¬ and datalog¬ with well-founded semantics are equivalent on ordered databases and express exactly qptime. They are also equivalent to semipositive datalog¬ on ordered databases with min and max and express exactly qptime.
15.5 Negation as Failure in Brief

In our presentation of datalog in Chapter 12, we saw that the minimal model and least fixpoint semantics have an elegant proof-theoretic counterpart based on SLD resolution. One might naturally wonder if such a counterpart exists in the case of datalog¬. The answer is yes and no. Such a proof-theoretic approach has indeed been proposed and is called negation as failure. This was originally developed for logic programming and predates stratified and well-founded semantics. Unfortunately, the approach has two major drawbacks. The first is that it results in a proof-building procedure that does not always terminate. The second is that it is not the exact counterpart of any other existing semantics. The semantics that has been proposed as a possible match is Clark's completion, but the match is not perfect and Clark's completion has its own problems. We provide here only a brief and informal presentation of negation as failure and the related Clark's completion.
The idea behind negation as failure is simple. We would like to infer a negative fact ¬A if A cannot be proven by SLD resolution. Thus ¬A would then be proven by the failure to prove A. Unfortunately, this is generally noneffective because SLD derivations may be arbitrarily long, and so one cannot check in finite time² that there is no proof of A by SLD resolution. Instead we have to use a weaker notion of negation by failure, which can be checked. This is done as follows. A fact ¬A is proven if all SLD derivations starting from the goal ← A are finite and none produces an SLD refutation for A. In other words, A finitely fails. This procedure applies to ground atoms A only. It gives rise to a proof procedure called SLDNF resolution. Briefly, SLDNF resolution extends SLD resolution as follows. Refutations of positive facts proceed as for SLD resolution. Whenever a negative ground goal ¬A has to be proven, SLD resolution is applied to A, and ¬A is proven if the SLD resolution finitely fails for A. The idea of SLDNF seems appealing as the proof-theoretic version of the closed world assumption. However, as illustrated next, it quickly leads to significant problems.

² Because databases are finite, one can develop mechanisms to bound the expansion. We ignore this aspect here.

Example 15.5.1 Consider the usual program P_TC for transitive closure of a graph:

T(x, y) ← G(x, y)
T(x, y) ← G(x, z), T(z, y)

Consider the instance I where G has edges {⟨a, b⟩, ⟨b, a⟩, ⟨c, a⟩}. Clearly, ⟨a, c⟩ is not in the transitive closure of G, and so not in T, by the usual datalog semantics. Suppose we wish to prove the fact ¬T(a, c), using negation as failure. We have to show that SLD resolution finitely fails on T(a, c), with the preceding program and input. Unfortunately, SLD resolution can enter a loop when applied to T(a, c). One obtains the following SLD derivation:

1. ← T(a, c);
2. ← G(a, z), T(z, c), using the second rule;
3. ← T(b, c), using the fact G(a, b);
4. ← G(b, z), T(z, c), using the second rule;
5. ← T(a, c), using the fact G(b, a).

Note that the last goal is the same as the first, so this can be extended to an infinite derivation. It follows that SLD resolution does not finitely fail on T(a, c), so SLDNF does not yield a proof of ¬T(a, c). Moreover, it has been shown that this does not depend on the particular program used to define transitive closure. In other words, there is no datalog¬ program that under SLDNF can prove the positive and negative facts true of the transitive closure of a graph.
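The looping branch can be exhibited by a small search over goals. A minimal Python sketch (the goal encoding and the leftmost selection rule are assumptions) explores the SLD tree for a goal ← T(x, y) with cycle detection:

```python
# Edges of the instance I from Example 15.5.1.
EDGES = {("a", "b"), ("b", "a"), ("c", "a")}

def branches(x, y, seen=frozenset()):
    # Outcomes of the SLD branches for the goal <- T(x, y).
    # Rule T(x,y) <- G(x,y) refutes the goal when (x, y) is an edge; rule
    # T(x,y) <- G(x,z), T(z,y) rewrites it to <- T(z, y) for each edge
    # (x, z). A goal already seen on the branch signals an infinite
    # derivation.
    out = []
    if (x, y) in EDGES:
        out.append("refuted")
    succs = [z for (u, z) in EDGES if u == x]
    if not succs:
        out.append("failed")
    for z in succs:
        if (z, y) in seen:
            out.append("loops")
        else:
            out.extend(branches(z, y, seen | {(x, y)}))
    return out
```

For T(a, c) the only branch loops (a → b → a), so SLD resolution neither refutes the goal nor finitely fails, and SLDNF proves neither T(a, c) nor ¬T(a, c).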
The preceding example shows that SLDNF can behave counterintuitively, even in some simple cases. The behavior is also incompatible with all the semantics for negation that we have discussed so far. Thus one cannot hope for a match between SLDNF and these semantics.
Instead a semantics called Clark's completion has been proposed as a candidate match for negation as failure. It works as follows. For a datalog¬ program P, the completion of P, comp(P), is constructed as follows. For each idb predicate R, each rule

ρ: R(u) ← L_1(v_1), . . . , L_n(v_n)

defining R is rewritten so there is a uniform set of distinct variables in the rule head and so all free variables in the body are existentially quantified:

ρ': R(u') ← ∃v'(x_1 = t_1 ∧ · · · ∧ x_k = t_k ∧ L_1(v_1) ∧ · · · ∧ L_n(v_n)).

(If the head of ρ has distinct variables for all coordinates, then the equality atoms can be avoided. If repeated variables or constants occur, then equality must be used.) Next, if the rewritten rules for R are ρ'_1, . . . , ρ'_l, the completion of R is formed by

∀u'(R(u') ↔ body(ρ'_1) ∨ · · · ∨ body(ρ'_l)).

Intuitively, this states that ground atom R(w) is true iff it is supported by one of the rules defining R. Finally the completion of P is the set of completions of all idb predicates of P, along with the axioms of equality, if needed.
The semantics of P is now defined by the following: A is true iff it is a logical consequence of comp(P). A first problem now is that comp(P) is not always consistent; in fact, its consistency is undecidable. What is the connection between SLDNF and Clark's completion? Because SLDNF is consistent (it clearly cannot prove A and ¬A) and comp(P) is not always so, SLDNF is not always complete with respect to comp(P). For consistent comp(P), it can be shown that SLDNF resolution is sound. However, additional conditions must be imposed on the datalog¬ programs for SLDNF resolution to be complete.

Consider again the transitive closure program P_TC and input instance I of Example 15.5.1. Then the completion of T is equivalent to

∀x∀y(T(x, y) ↔ G(x, y) ∨ ∃z(G(x, z) ∧ T(z, y))).

Note that neither T(a, c) nor ¬T(a, c) is a consequence of comp(P_TC, I).
In summary, negation as failure does not appear to provide a convincing proof-
theoretic counterpart to the semantics we have considered. The search for more successful
proof-theoretic approaches is an active research area. Other proposals are described briey
in the Bibliographic Notes.
Bibliographic Notes

The notion of a stratified program is extremely natural. Not surprisingly, it was proposed independently by quite a few investigators [CH85, ABW88, Lif88, VanG86]. The independence of the semantics from a particular stratification (Theorem 15.2.10) was shown in [ABW88].

Research on well-founded semantics, and the related notion of a 3-stable model, has its roots in investigations of stable and default model semantics. Although formulated somewhat differently, the notion of a stable/default model is equivalent to that of a total 3-stable model [Prz90]. Stable model semantics was introduced in [GL88], and default model semantics was introduced in [BF87, BF88]. Stable semantics is based on Moore's autoepistemic logic [Moo85], and default semantics is based on Reiter's default logic [Rei80]. The equivalence between autoepistemic and default logic in the general case has been shown in [Kon88]. The equivalence between stable model semantics and default model semantics was shown in [BF88].

Several equivalent definitions of the well-founded semantics have been proposed. The definition used in this chapter comes from [Prz90]. The alternating fixpoint computation we described is essentially the same as in [VanG89]. Alternative procedures for computing the well-founded semantics are exhibited in [BF88, Prz89]. Historically, the first definition of well-founded semantics was proposed in [VanGRS88, VanGRS91]. This is described in Exercise 15.24.

The fact that well-founded and stratified semantics agree on stratifiable datalog¬ programs (Theorem 15.3.11) was shown in [VanGRS88].

Both the stratified and well-founded semantics were originally introduced for general logic programming, as well as the more restricted case of datalog. In the context of logic programming, both semantics have expressive power equivalent to the arithmetic hierarchy [AW88] and are thus noneffective.
The result that datalog¬ with well-founded semantics expresses exactly the fixpoint queries is shown in [VanG89]. Citation [FKL87] proves that for every datalog¬ program P there is a total datalog¬ program Q such that the positive portions of P^wf(I) and Q^wf(I) coincide for every I. The fact that stratified datalog¬ is weaker than fixpoint, and therefore weaker than well-founded semantics, was shown in [Kol91], making use of earlier results from [Dal87] and [CH82]. In particular, Lemma 15.4.1 is based on Lemma 3.9 in [CH82]. The result that semipositive datalog¬ expresses qptime on ordered databases with min and max is due to [Pap85].
The investigation of negation as failure was initiated in [Cla78], in connection with general logic programming. In particular, SLDNF resolution as well as Clark's completion are introduced there. The fact that there is no datalog¬ program for which the positive and negative facts about the transitive closure of the graph can be proven by SLDNF resolution was shown in [Kun88]. Other work related to Clark's completion can be found in [She88, Llo87, Fit85, Kun87].

Several variations of SLDNF resolution have been proposed. SLS resolution is introduced in [Prz88] to deal with stratified programs. An exact match is achieved between stratified semantics and the proof procedure provided by SLS resolution. Although SLS resolution is effective in the context of (finite) databases, it is not so when applied to general logic programs, with function symbols. To deal with this shortcoming, several restrictions of SLS resolution have been proposed that are effective in the general framework [KT88, SI88].

Several proof-theoretic approaches corresponding to the well-founded semantics have been proposed. SLS resolution is extended from stratified to arbitrary datalog¬ programs in [Prz88], under well-founded semantics. Independently, another extension of SLS resolution called global SLS resolution is proposed in [Ros89], with similar results. These proposals yield noneffective resolution procedures. An effective procedure is described in [BL90].
In [SZ90], an interesting connection between nondeterminism and stable models of a program (i.e., total 3-stable models; see also Exercise 15.20) is pointed out. Essentially, it is shown that the stable models of a datalog¬ program can be viewed as the result of a natural nondeterministic choice. This uses the choice construct introduced earlier in [KN88]. Another use of nondeterminism is exhibited in [PY92], where an extension of well-founded semantics is provided, which involves the nondeterministic choice of a fixpoint of a datalog¬ program. This is called tie-breaking semantics. A discussion of nondeterminism in deductive databases is provided in [GPSZ91].

Another semantics in the spirit of well-founded is the valid model semantics introduced in [BRSS92]. It is less conservative than well-founded semantics, in the sense that all facts that are positive in well-founded semantics are also positive in the valid model semantics, but the latter generally yields more positive facts than well-founded semantics.

There are a few prototypes (but no commercial system) implementing stratified datalog¬. The language LDL [NT89, BNR+87, NK88] implements, besides the stratified semantics for datalog¬, an extension to complex objects (see also Chapter 20). The implementation uses heuristics based on the magic set technique described in Chapter 13. The language NAIL! (Not Yet Another Implementation of Logic!), developed at Stanford, is another implementation of the stratified semantics, allowing function symbols and a set construct. The implementation of NAIL! [MUG86, Mor88] uses a battery of evaluation techniques, including magic sets. The language EKS [VBKL89], developed at ECRC (European Computer-Industry Research Center) in Munich, implements the stratified semantics and extensions allowing quantifiers in rule bodies, aggregate functions, and constraint specification. The CORAL system [RSS92, RSSS93] provides a database programming language that supports both imperative and deductive capabilities, including stratification. An implementation of well-founded semantics is described in [CW92].

Nicole Bidoit's survey on negation in databases [Bid91b], as well as her book on datalog [Bid91a], provided an invaluable source of information and inspired our presentation of the topic.
Exercises

Exercise 15.1
(a) Show that, for datalog¬ programs P, the immediate consequence operator T_P is not always monotonic.
(b) Exhibit a datalog¬ program P (using negation at least once) such that T_P is monotonic.
(c) Show that it is decidable, given a datalog¬ program P, whether T_P is monotonic.
Exercise 15.2 Consider the datalog¬ program P_3 = {p ← ¬r; r ← ¬p; p ← p, ¬r}. Verify that T_{P_3} has a least fixpoint, but T_{P_3} does not converge when starting on ∅.
Exercise 15.3
(a) Exhibit a datalog¬ program P and an instance K over sch(P) such that K is a model of Σ_P but not a fixpoint of T_P.
(b) Show that, for datalog¬ programs P, a minimal fixpoint of T_P is not necessarily a minimal model of Σ_P and, conversely, a minimal model of Σ_P is not necessarily a minimal fixpoint of T_P.
Exercise 15.4 Prove Lemma 15.2.8.
Exercise 15.5 Consider a database for the Parisian metro and bus lines, consisting of two relations Metro[Station, Next-Station] and Bus[Station, Next-Station]. Write stratifiable datalog¬ programs to answer the following queries.

(a) Find the pairs of stations a, b such that one can go from a to b by metro but not by bus.
(b) A pure bus path from a to b is a bus itinerary from a to b such that for all consecutive stops c, d along the way, one cannot go from c to d by metro. Find the pairs of stations a, b such that there is a pure bus path from a to b.
(c) Find the pairs of stations a, b such that b can be reached from a by some combination of metro or bus, but not by metro or bus alone.
(d) Find the pairs of stations a, b such that b can be reached from a by some combination of metro or bus, but there is no pure bus path from a to b.
(e) The metro is useless in a bus path from a to b if by taking the metro at any intermediate point c one can return to c but not reach any other station along the path. Find the pairs of stations a, b such that the metro is useless in all bus paths connecting a and b.
Exercise 15.6 The semantics of stratifiable datalog¬ programs can be extended to infinite databases as follows. Let P be a stratifiable datalog¬ program and let σ = P^1, ..., P^n be a stratification for P. For each (finite or infinite) instance I over edb(P), σ(I) is defined similarly to the finite case. More precisely, consider the sequence

I_0 = I
I_i = P^i(I_{i-1} | edb(P^i))

where

P^i(I_{i-1} | edb(P^i)) = ∪_{j>0} T^j_{P^i}(I_{i-1} | edb(P^i)).

Note that the definition is now noneffective because P^i(I_{i-1} | edb(P^i)) may be infinite. Consider a database consisting of one binary relation succ providing a successor relation on an infinite set of constants. Clearly, one can identify these constants with the positive integers.

(a) Write a stratifiable datalog¬ program defining a unary relation prime containing all constants in succ corresponding to primes.
(b) Write a stratifiable datalog¬ program P defining a 0-ary relation Fermat, which is true iff Fermat's Last Theorem³ is true. (No shortcuts, please: The computation of the program should provide a proof of Fermat's Last Theorem, not just coincidence of truth value!)
Exercise 15.7 Prove Theorem 15.2.2.
Exercise 15.8 A datalog¬ program is nonrecursive if its precedence graph is acyclic. Show that every nonrecursive stratifiable datalog¬ program is equivalent to an nr-datalog¬ program, and conversely.
Exercise 15.9 Let (A, <) be a partially ordered set. A listing a_1, ..., a_n of the elements in A is compatible with < iff for i < j it is not the case that a_j < a_i. Let σ, σ′ be listings of A compatible with <. Prove that one can obtain σ′ from σ by a sequence of exchanges of adjacent elements a_l, a_m such that a_l ≮ a_m and a_m ≮ a_l.
Exercise 15.10 Prove Lemma 15.2.9.
Exercise 15.11 (Supported models) Prove that there exist stratified datalog¬ programs P_1, P_2 such that sch(P_1) = sch(P_2), Σ_{P_1} ≡ Σ_{P_2}, and there is a minimal model I of Σ_{P_1} such that I is a supported model for P_1, but not for P_2. (In other words, the notion of supported model depends not only on Σ_P, but also on the syntax of P.)
Exercise 15.12 Prove part (b) of Proposition 15.2.11.
Exercise 15.13 Prove Proposition 15.2.12.
Exercise 15.14 [Bid91b] (Local stratification) The following extension of the notion of stratification has been proposed for general logic programs [Prz86]. This exercise shows that local stratification is essentially the same as stratification for the datalog¬ programs considered in this chapter (i.e., without function symbols).
³ Fermat's Last Theorem: There is no n > 2 such that the equation a^n + b^n = c^n has a solution in the positive integers.
A datalog¬ program P is locally stratified iff for each I over edb(P), ground(P_I) is stratified. [An example of a locally stratified logic program with function symbols is {even(0) ←; even(s(x)) ← ¬even(x)}.] The semantics of a locally stratified program P on input I is the semantics of the stratified program ground(P_I).

(a) Show that, if the rules of P contain no constants, then P is locally stratified iff it is stratified.
(b) Give an example of a datalog¬ program (with constants) that is locally stratified but not stratified.
(c) Prove that, for each locally stratified datalog¬ program P, there exists a stratified datalog¬ program equivalent to P.
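Stratifiability itself, which this exercise compares against, is easy to test mechanically: a datalog¬ program is stratifiable iff no cycle of its precedence graph goes through a negative edge. The sketch below (the rule representation and example programs are ours) checks this condition by depth-first reachability:

```python
def stratifiable(rules):
    """rules: list of (head, [(pred, is_positive), ...]).
    Stratifiable iff no cycle of the precedence graph uses a negative edge."""
    edges = {(p, head, pos) for head, body in rules for p, pos in body}

    def reaches(src, dst):
        # Depth-first search along precedence edges from src to dst.
        seen, stack = set(), [src]
        while stack:
            u = stack.pop()
            if u == dst:
                return True
            if u in seen:
                continue
            seen.add(u)
            stack.extend(v for s, v, _ in edges if s == u)
        return False

    # A negative edge p -> head lies on a cycle iff head can reach p back.
    return all(not reaches(head, p) for p, head, pos in edges if not pos)

good = [("t", [("g", True)]), ("t", [("t", True), ("g", True)]),
        ("ct", [("t", False)])]      # negation above the recursion
bad = [("p", [("p", False)])]        # p <- not p
print(stratifiable(good), stratifiable(bad))   # True False
```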
Exercise 15.15 Let φ and ψ be propositional Boolean formulas (using ∧, ∨, ¬, →). Prove the following:

(a) If φ and ψ are equivalent with respect to 3-valued instances, then they are equivalent with respect to 2-valued instances.
(b) If φ and ψ are equivalent with respect to 2-valued instances, they are not necessarily equivalent with respect to 3-valued instances.
Exercise 15.16 Prove Lemma 15.3.4.
Exercise 15.17 Let P be a datalog¬ program. Recall the definition of positivized ground version of P given I, denoted pg(P, I), where I is a 3-valued instance. Prove the following:

(a) If I is total, then pg(P, I) is total.
(b) Let {I_i}_{i≥0} be the sequence of instances defined by

I_0 = ∅
I_{i+1} = pg(P, I_i)(∅) = conseq_P(I_i).

Prove that

I_0 ⊆ I_2 ⊆ ... ⊆ I_{2i} ⊆ I_{2i+2} ⊆ ... ⊆ I_{2i+1} ⊆ I_{2i-1} ⊆ ... ⊆ I_1.
Exercise 15.18 Exhibit a datalog¬ program that yields the complement of the transitive closure under well-founded semantics.
Exercise 15.19 Prove that for each datalog¬ program P and instance I over edb(P), P^{wf}(I) is a minimal 3-valued model of P whose restriction to edb(P) equals I.
Exercise 15.20 A total 3-stable model of a datalog¬ program P is called a stable model of P [GL88] (also called a default model [BF87, BF88]).

(a) Provide examples of datalog¬ programs that have (1) no stable models, (2) a unique stable model, and (3) several stable models.
(b) Show that P^{wf} is total iff all 3-stable models are total.
(c) Prove that, if P^{wf} is total, then P has a unique stable model, but the converse is false.
Exercise 15.21 [BF88] Let P be a datalog¬ program and I an instance over edb(P). Prove that the problem of determining whether P_I has a stable model is np-complete in the size of P_I.
Exercise 15.22 Give an example of a datalog¬ program P such that P is not stratified but P^{wf} is total.
Exercise 15.23 Prove that it is undecidable if the well-founded semantics of a given datalog¬ program P is always total. That is, it is undecidable whether, for each instance I over edb(P), P^{wf}(I) is total.
Exercise 15.24 [VanGRS88] This exercise provides an alternative (and historically first) definition of well-founded semantics. Let L be a ground literal. The complement of L is ¬A if L = A and A if L = ¬A. If I is a set of ground literals, we denote by ¬.I the set of complements of the literals in I. A set I of ground literals is consistent iff I ∩ ¬.I = ∅. Let P be a datalog¬ program. The immediate consequence operator T_P of P is extended to operate on sets of (positive and negative) ground literals as follows. Let I be a set of ground literals. T_P(I) consists of all literals A for which there is a ground rule of P, A ← L_1, ..., L_k, such that L_i ∈ I for each i. Note that T_P can produce an inconsistent set of literals, which therefore does not correspond to a 3-valued model. Now let I be a set of ground literals and J a set of positive ground literals. J is said to be an unfounded set of P with respect to I if for each A ∈ J and ground rule r of P with A in the head, at least one of the following holds:

- the complement of some literal in the body of r is in I; or
- some positive literal in the body of r is in J.

Intuitively, this means that if all atoms of I are assumed true and all atoms in J are assumed false, then no atom of J is true under one application of T_P.

Let the greatest unfounded set of P with respect to I be the union of all unfounded sets of P with respect to I, denoted U_P(I). Next consider the operator W_P on sets of ground literals defined by

W_P(I) = T_P(I) ∪ ¬.U_P(I).
Prove the following:
(a) The greatest unfounded set U_P(I) of P with respect to I is an unfounded set.
(b) The operator W_P is monotonic (with respect to set inclusion).
(c) The least fixpoint of W_P is consistent.
(d) The least fixpoint of W_P equals P^{wf}.
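These definitions can be executed directly for small propositional programs. The sketch below (the example program and helper names are ours) computes the extended T_P, the greatest unfounded set U_P, and iterates W_P from ∅ to its least fixpoint:

```python
# A ground rule is (head, [(atom, is_positive), ...]); a literal is
# (atom, is_positive).  Illustrative program: a and b support only each
# other, so {a, b} is unfounded; c follows once "not a" is established;
# p and q block each other and stay undefined.
P = [
    ("a", [("b", True)]),
    ("b", [("a", True)]),
    ("c", [("a", False)]),
    ("p", [("q", False)]),
    ("q", [("p", False)]),
]
ATOMS = {h for h, _ in P} | {a for _, body in P for a, _ in body}

def complement(lit):
    atom, pos = lit
    return (atom, not pos)

def T(I):
    """Extended immediate consequence on a set of ground literals."""
    return {(h, True) for h, body in P if all(lit in I for lit in body)}

def greatest_unfounded(I):
    """Greatest fixpoint: drop atoms that still have an unblocked rule."""
    J = set(ATOMS)
    changed = True
    while changed:
        changed = False
        for h, body in P:
            if h not in J:
                continue
            blocked = any(complement(lit) in I for lit in body) or \
                      any(pos and a in J for a, pos in body)
            if not blocked:          # this rule can still derive h
                J.discard(h)
                changed = True
    return J

def W(I):
    return T(I) | {(a, False) for a in greatest_unfounded(I)}

# Least fixpoint of W_P = the well-founded semantics of P.
I = set()
while W(I) != I:
    I = W(I)
print(sorted(I))   # [('a', False), ('b', False), ('c', True)]
```

Here c comes out true, a and b false, and p and q undefined, as the alternating fixpoint computation in the text would also give.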
Exercise 15.25 [VanG89] Let P be a datalog¬ program. If I is a set of ground literals, let P(I) = T^ω_P(I), where T_P is the immediate consequence operator on sets of ground literals defined in Exercise 15.24. Furthermore, P̄(I) denotes the complement of P(I) [i.e., B(P, I) − P(I)]. Consider the sequence of sets of negative facts defined by

N_0 = ∅,
N_{i+1} = ¬.P̄(¬.P̄(N_i)).

The intuition behind the definition is the following. N_0 is an underestimate of the set of negative facts in the well-founded model. Then P(N) is an underestimate of the positive facts, and the negated complement ¬.P̄(N) is an overestimate of the negative facts. Using this overestimate, one can infer an overestimate of the positive facts, P(¬.P̄(N)). Therefore ¬.P̄(¬.P̄(N)) is now a new underestimate of the negative facts containing the previous underestimate. So {N_i}_{i≥0} is an increasing sequence of underestimates of the negative facts, which converges to the negative facts in the well-founded model. Formally prove the following:
(a) The sequence {N_i}_{i≥0} is increasing.
(b) Let N be the limit of the sequence {N_i}_{i≥0} and K = N ∪ P(N). Then K = P^{wf}.
(c) Explain the connection between the sequence {N_i}_{i≥0} and the sets of negative facts in the sequence {I_i}_{i≥0} defined in the alternating fixpoint computation of P^{wf} in the text.
(d) Suppose the definition of the sequence {N_i}_{i≥0} is modified such that N_0 = ¬.B(P) (i.e., all facts are negative at the start). Show that for each i ≥ 0, N_i = ¬.(I_{2i})^0.
Exercise 15.26 Let P be a datalog¬ program. Let T_P be the immediate consequence operator on sets of ground literals, defined in Exercise 15.24, and let T̂_P be defined by

T̂_P(I) = I ∪ T_P(I).

Given a set I of ground literals, let P(I) denote the limit of the increasing sequence {T̂^i_P(I)}_{i>0}. A set I⁻ of negative ground literals is consistent with respect to P if P(I⁻) is consistent. I⁻ is maximally consistent with respect to P if it is maximal among the sets of negative literals consistent with P. Investigate the connection between maximal consistency, 3-stable models, and well-founded semantics:

(a) Is ¬.I^0 maximally consistent for every 3-stable model I of P?
(b) Is P(I⁻) a 3-stable model of P for every I⁻ that is maximally consistent with respect to P?
(c) Is ¬.(P^{wf})^0 the intersection of all sets I⁻ that are maximally consistent with respect to P?
Exercise 15.27 Refer to the proof of Lemma 15.4.4.

(a) Outline a proof that conseq_P can be simulated by a while+ program.
(b) Provide a full description of the timestamping technique outlined in the proof of Lemma 15.4.4.
Exercise 15.28 Show that every query definable by stratified datalog¬ is a fixpoint query.
Exercise 15.29 Consider an ordered database (i.e., with binary relation succ providing a
successor relation on the constants). Prove that the minimum and maximum constants cannot
be computed using a semipositive program.
Exercise 15.30 Consider the game trees and winning query described in Section 15.4.

(a) Show that winning is true on the game trees G_{2i,k} and false on the game trees G′_{2i,k}, for i > 0.
(b) Prove that the winning query on game trees is defined by the fixpoint query exhibited in Section 15.4.
PART E

Expressiveness and Complexity

Various query languages were presented in Parts B and D. Simple languages like
conjunctive queries were successively augmented with various constructs such as
union, negation, and recursion. The primary motivation for defining increasingly powerful
languages was the need to express useful queries not captured by the simpler languages. In
the presentation, the process was primarily example driven. The following chapters present
a more advanced and global perspective on query languages. In addition to their ability to
express specific queries, we consider more broadly the capability of languages to express
queries of a given complexity. This leads to establishing formal connections between
languages and complexity classes of queries. This approach lies on the border between
databases, complexity theory, and logic. It is related to characterizations of complexity
classes in terms of various logics.
The basic framework for the formal development is presented in Chapter 16, in which
we discuss the notion of a query and produce a formal definition. It turns out that it is relatively easy to define languages expressing all queries. Such languages are called complete. However, the real challenge for the language designer is not simply to define
increasingly powerful languages. Instead an important aspect of language design is to
achieve a good balance between expressiveness and the complexity of evaluating queries.
The ideal language would allow expression of most useful queries while guaranteeing that
all queries expressible in the language can be evaluated with reasonable complexity. To
formalize this, we raise the following basic question: How does one evaluate a query
language with respect to expressiveness and complexity? In an attempt to answer this
question, we discuss the issue of sizing up languages in Chapter 16.
Chapter 17 considers some of the classes of queries discussed in Part B from the
viewpoint of expressiveness and complexity. The focus is on the relational calculus of
Chapter 5 and on its extensions fixpoint and while defined in Chapter 14. We show the
connection of these languages to complexity classes. Several techniques for showing the
nonexpressibility of queries are also presented, including games and 0-1 laws.
Chapter 17 also explores the intriguing theoretical implications of one of the basic assumptions of the pure relational model, namely, that the underlying domain dom consists of uninterpreted, unordered elements. This assumption can be viewed as a metaphor for the data independence principle, because it implies using only logical properties of data as
opposed to the underlying implementation (which would provide additional information,
such as an order).
Chapter 18 presents highly expressive (and complex) languages, all the way up to com-
plete languages. In particular, we discuss constructs for value invention, which are similar
to the object creation mechanisms encountered in object languages (see Chapter 21).
For easy reference, the expressiveness and complexity of relational query languages
are summarized at the end of Chapter 18.
16 Sizing Up Languages
Alice: Do you ever worry about how hard it is to answer queries?
Riccardo: Sure, my laptop can only do conjunctive queries.
Sergio: I can do the while queries on my Sun.
Vittorio: I don't worry about it; I have a Cray in my office.
This chapter lays the groundwork for the study of the complexity and expressiveness of query languages. First the notion of query is carefully reconsidered and formally defined. Then, the complexity of individual queries is considered. Finally, definitions that allow comparison of query languages and complexity classes are developed.
16.1 Queries
The goal of Part E is to develop a general understanding of query languages and their
capabilities. The first step is to formulate a precise definition of what constitutes a query. The focus is on a fairly high level of abstraction and thus on the mappings expressible by queries rather than on the syntax used to specify them. Thus, unlike Part B, in this part we use the term query primarily to refer to mappings from instances to instances rather than to syntactic objects. Although there are several correct definitions for the set of permissible queries, the one presented here is based on three fundamental assumptions: well-typedness,
computability, and genericity.
The first assumption involves the schemas of the input and the answer to a query. A query is over a particular database schema, say R. It takes as input an instance over R and returns as answer a relation over some schema S. In principle, it is conceivable that the schema of the result may be data dependent. However, to simplify, it is assumed here (as in most query languages) that the schema of the result is fixed for a given query. This assumption is referred to as well-typedness. Thus, for us, a query is a partial mapping from inst(R) to inst(S) for fixed R and S. By allowing partially defined mappings, we account
for queries expressed by programs that may not always terminate.
Because we are only interested in effective queries, we also make the natural assumption that query mappings are computable. Query computability is defined using classical models of computation, such as Turing machines (TM). The basic idea is that the query must be implementable by a TM. Thus there must exist a TM that, given as input a natural encoding of a database instance on the tape, produces an encoding of the output. The formalization of these notions requires some care and is done next.
Figure 16.1: (a) An instance I with P = {⟨a, b⟩, ⟨b, a⟩} and Q = {⟨c, c⟩}; (b) its TM encoding P[0#1][1#0]Q[10#10] with respect to α = abc
The first question in developing the formalization is, How can input and output instances be represented on a TM tape that has a finite alphabet when the underlying domain dom is infinite? We resolve this by using standard encodings for dom. As we shall see later on, although this permits us to use conventional complexity theory in our study of query language expressiveness, it also takes us a bit outside of the pure relational model.
We focus on encodings of both dom and of subsets of dom, and we use the symbols 0 and 1. Let d ⊆ dom and let α = {d_0, d_1, ..., d_i, ...} be an enumeration of d. The encoding of d relative to α is the function enc_α, which maps d_i to the binary representation of i (with no leading zeros) for each d_i ∈ d. Note that |enc_α(d_i)| ≈ log i for each i.

We can now describe the encoding of instances. Suppose that a set d ⊆ dom, an enumeration α for d, source schema R = {R_1, ..., R_m}, and target schema S are given. The encoding of instances of R uses the alphabet {0, 1, [, ], #} ∪ R ∪ {S}. An instance I over R with adom(I) ⊆ d is encoded relative to α as follows:

1. enc_α(⟨a_1, ..., a_k⟩) is [enc_α(a_1)# ... #enc_α(a_k)].
2. enc_α(I(R)), for R ∈ R, is R enc_α(t_1) ... enc_α(t_l), where t_1, ..., t_l are the tuples in I(R) in the lexicographic order induced by the enumeration α.
3. enc_α(I) = enc_α(I(R_1)) ... enc_α(I(R_m)).
Example 16.1.1 Let R = {P, Q}, I be the instance over R in Fig. 16.1(a), and let α = abc. Then enc_α(I) is shown in Fig. 16.1(b).
Let α be a fixed enumeration of dom. In this case the encoding enc_α described earlier is one-to-one on instances and thus has an inverse enc_α^{-1} when considered as a mapping on instances. We are now ready to formalize the notion of computability relative to an encoding of dom.
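Following the definition literally gives a small implementation of the encoding. The sketch below (helper names are ours) reproduces the encoding of Example 16.1.1:

```python
def enc_const(alpha, d):
    """Binary representation, with no leading zeros, of d's index in alpha."""
    return format(alpha.index(d), "b")

def enc_tuple(alpha, t):
    return "[" + "#".join(enc_const(alpha, a) for a in t) + "]"

def enc_relation(alpha, name, tuples):
    # Tuples are listed in the lexicographic order induced by alpha.
    ordered = sorted(tuples, key=lambda t: [alpha.index(a) for a in t])
    return name + "".join(enc_tuple(alpha, t) for t in ordered)

def enc_instance(alpha, instance):
    # instance: list of (relation_name, set_of_tuples), in schema order
    return "".join(enc_relation(alpha, n, ts) for n, ts in instance)

I = [("P", {("a", "b"), ("b", "a")}), ("Q", {("c", "c")})]
print(enc_instance("abc", I))   # P[0#1][1#0]Q[10#10]
```

Note that the output depends on the chosen enumeration α; this is exactly the "extra information" that genericity, introduced below, rules out of query answers.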
encoding of dom.
Denition 16.1.2 Let be an enumeration of dom. A mapping q from inst(R) to
inst(S) is computable relative to if there exists a TM M such that for each instance I
over R
16.1 Queries 419
(a) if q(I) is undened, then M does not terminate on input enc

(I), and
(b) if q(I) is dened, M halts on input enc

(I) with output enc

(q(I)) on the tape.


As will be seen shortly, the third assumption about queries (namely, genericity) will permit us to reformulate the preceding definition to be independent of the encoding of dom used. Before introducing that notion, we consider more carefully the representation of database instances on TM tapes. In some sense, TM encodings on the tape are similar to the internal representation of the database on some physical storage. In both cases, the representation contains more information than the database itself. In the case of the TM representation, the extra information consists primarily of the enumeration of constants necessary to define enc_α. In the pure relational model, this kind of information is not part of the database. Instead, the database is an abstraction of its internal (or TM) representation.
This additional information can be viewed as noise associated with the internal representa-
tion and thus should not have any visible impact for the user at the conceptual level. This is
captured by the data independence principle in databases, which postulates that a database
provides an abstract interface that hides the internal representation of data.
We can now state the intuition behind the third and last requirement of queries, which
formalizes the data independence principle. Although computations performed on the internal representation may take advantage of all information provided at this level, it is explicitly prohibited, in the definition of a query, that the result depend on such information.
(In some cases this restriction may be relaxed; see Exercise 16.4.)
For example, consider a database that consists of a binary relation specifying the edges
of a directed graph. Consider a query that returns as answer a subset of the vertexes in the
graph. One can imagine queries that extract (1) all vertexes with positive in-degree, or (2)
all vertexes belonging to some cycle, or (3) the first vertex of the graph as presented in the
TM tape representation. Speaking intuitively, (1) and (2) are independent of the internal
representation used, whereas (3) depends on it. Queries such as (3) will be excluded from
the class of queries.
The property that a query depends only on information provided by the input instance
is called genericity and is formalized next. The idea is that the constants in the database
have no properties other than the relationships with each other specified by the database. (In particular, their internal representation is irrelevant.) Thus the database is essentially unchanged if all constants are consistently renamed. Of course, a query can always explicitly name a finite set of constants, which can then be treated differently from other constants. (The set of such constants is the set C in Definition 16.1.3.)
A permutation of dom is a one-to-one, onto mapping from dom to dom. As done
before, each mapping over dom is extended to tuples and database instances in the
obvious way.
Definition 16.1.3 Let R and S be database schemas, and let C be a finite set of constants. A mapping q from inst(R) to inst(S) is C-generic iff for each I over R and each permutation ρ of dom that is the identity on C, ρ(q(I)) = q(ρ(I)). When C is empty, we simply say that the query is generic.
The previous definition is best visualized using the following commuting diagram:

      I    --q-->   q(I)
      |              |
      ρ              ρ
      v              v
    ρ(I)  --q-->  ρ(q(I)) = q(ρ(I))

In other words, a query is C-generic if it commutes with permutations ρ (that leave C fixed).
Genericity states that the query is insensitive to renaming of the constants in the database (using the permutation ρ). It uses only the relationships among constants provided by the database and is independent of any other information about the constants. The set C specifies the exceptional constants named explicitly in the query. These cannot be renamed
without changing the effect of the query.
Permutations ρ for which ρ(I) = I are of special interest. Such ρ are called automorphisms for I. If ρ is an automorphism for I and ρ(a) = b, this says intuitively that a and b cannot be distinguished using the structure of I. Let q be a generic query, I an instance, and ρ an automorphism for I. Then, by genericity,

ρ(q(I)) = q(ρ(I)) = q(I),

so ρ is also an automorphism for q(I). In particular, a generic query cannot distinguish between constants that are undistinguishable in the input (see Exercise 16.5). Of course, this is not the case if the query explicitly names some constants.
We illustrate these various aspects of genericity in an example.
Example 16.1.4 Consider a database over a binary relation G holding the edges of a directed graph. Let I be the instance {⟨a, b⟩, ⟨b, a⟩, ⟨a, c⟩, ⟨b, c⟩}.

Let φ be the CALC query

{x | ∃y G(x, y)}.

Note that φ(I) = {a, b}. Let ρ be the permutation defined by ρ(a) = b, ρ(b) = c, and ρ(c) = d. Then ρ(I) = {⟨b, c⟩, ⟨c, b⟩, ⟨b, d⟩, ⟨c, d⟩}. Genericity requires that φ(ρ(I)) = {b, c}. This is true in this case.

Note also that a and b are undistinguishable in I. Formally, the renaming ρ defined by ρ(a) = b, ρ(b) = a, and ρ(c) = c has the property that ρ(I) = I and thus is an automorphism of I. Let q be a generic query on G. By genericity of q, either a and b both belong to q(I), or neither does. Thus a generic query cannot distinguish between a and b. Of course, this is not true for C-generic queries (for C nonempty). For instance, let q_b = π_1(σ_{2=b}(G)). Now q_b is {b}-generic, and q_b(I) = {a}. Thus q_b distinguishes between a and b.
It is easily verified that if a database mapping q is C-generic, then for each input instance I, adom(q(I)) ⊆ C ∪ adom(I) (see Exercise 16.1).
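The commuting-diagram condition can be checked mechanically on small instances. The sketch below (our own illustration) replays Example 16.1.4: the query {x | ∃y G(x, y)} commutes with a permutation ρ, while a representation-dependent mapping, in the spirit of query (3) of the earlier graph example, fails the test for a suitable renaming:

```python
def apply_perm(rho, G):
    return {(rho.get(x, x), rho.get(y, y)) for x, y in G}

def q(G):
    """The generic query {x | exists y G(x, y)}: vertexes with an outgoing edge."""
    return {x for x, _ in G}

def first_vertex(G):
    """Not generic: depends on an ordering of the constants."""
    return {min(x for x, _ in G)}

I = {("a", "b"), ("b", "a"), ("a", "c"), ("b", "c")}
rho = {"a": "b", "b": "c", "c": "d"}          # as in Example 16.1.4

# q commutes with rho: rho(q(I)) = q(rho(I)) = {b, c}.
assert q(apply_perm(rho, I)) == {rho[x] for x in q(I)} == {"b", "c"}

# first_vertex fails genericity for the renaming sigma below:
sigma = {"a": "z", "b": "a", "c": "b"}
assert {sigma[x] for x in first_vertex(I)} != first_vertex(apply_perm(sigma, I))
```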
In most cases we will ignore the issue of constants in queries because it is not central. Note that a C-generic query can be viewed as a generic query by including the constants in C in the input, using one relation for each constant. For instance, the {b}-generic query q_b over G in Example 16.1.4 is reduced to a generic query q′ over {G, R_b}, where R_b = {b}, defined as follows:

q′ = π_1(σ_{2=3}(G × R_b)).

In the following, we will usually assume that queries have no constants unless explicitly stated.
Suppose now that α and β are two enumerations of dom and that a generic mapping q from R to S is computed by a TM M using enc_α. It is easily verified that the same query is computed by M if enc_β is used in place of enc_α (see Exercise 16.2). This permits us to adopt the following notion of computable, which is equivalent to computable relative to enumeration α in the case of generic queries. This definition has the advantage of relying on finite rather than infinite enumerations.
Definition 16.1.5 A generic mapping q from inst(R) to inst(S) is computable if there exists a TM M such that for each instance I over R and each enumeration α of adom(I),

(a) if q(I) is undefined, then M does not terminate on input enc_α(I), and
(b) if q(I) is defined, M halts on input enc_α(I) with output enc_α(q(I)) on the tape.

We are now ready to define queries formally.
Definition 16.1.6 Let R be a database schema and S a relation schema. A query from R to S is a partial mapping from inst(R) to inst(S) that is generic and computable.
Note that all queries discussed in previous chapters satisfy the preceding definition
(modulo constants in queries).
Queries and Query Languages
We are usually interested in queries specified by the expressions (i.e., syntactic queries or programs) of a given query language. Given an expression E in query language L, the mapping between instances that E describes is called the effect of E. Depending on the language, there may be several alternative semantics (e.g., inflationary versus noninflationary) for defining the query expressed by an expression. A related issue concerns the specification of the output schema of an expression. In calculus-based languages, the output schema is unambiguously specified by the form of the expression. The situation is more ambiguous for other languages, such as datalog and while. Programs in these languages typically manipulate several relations and may not specify explicitly which is to be taken as the answer to the query. In such cases, the concepts of input, output, and temporary relations may become important. Thus, in addition to semantically significant input and output relations, the programs may use temporary relations whose content is immaterial outside the computation. We will state explicitly which relations are temporary and which constitute the output whenever this is not clear from the context.
A query language or computing device is called complete if it expresses all queries.
We will discuss such languages in Chapter 18.
16.2 Complexity of Queries
We now develop a framework for measuring the complexity of queries. This is done by
reference to TMs and classical complexity classes defined using the TM model.
There are several ways to look at the complexity of queries. They differ in the param-
eters with respect to which the complexity is measured. The two main possibilities are as
follows:
data complexity: the complexity of evaluating a xed query for variable database
inputs; and
expression complexity: the complexity of evaluating, on a xed database instance,
the various queries speciable in a given query language.
Thus in the data complexity perspective, the complexity is with respect to the database
input and the query is considered constant. Conversely, with expression complexity, the
database input is fixed and the complexity is with respect to the size of the query expression.
Clearly, the measures provide different information about the complexity of evaluating
queries. The usual situation is that the size of the database input dominates by far the size
of the query, and so data complexity is typically most relevant. This is the primary focus of
Part E, and we use the term complexity to refer to data complexity unless otherwise stated.
The complexity of queries is defined based on the recognition problem associated with the query. For a query q, the recognition problem is as follows: Given an instance I and a tuple u, determine if u belongs to the answer q(I). To be more precise, the recognition problem of a query q is the language

{enc_α(I)#enc_α(u) | u ∈ q(I), α an enumeration of adom(I)}.

The (data) complexity of q is the (conventional) complexity of its recognition problem. Technically, the complexity is with respect to the size of the input [i.e., the length of the word enc_α(I)#enc_α(u)]. Because for an instance I the size (number of tuples) in I is closely related to the length of enc_α(I) (see Exercise 16.12), the size of I is usually taken as the measure of the input.
For each Turing time or space complexity class c, one can define a corresponding complexity class of queries, denoted by qc. The class of queries qc consists of all queries whose recognition problem is in c. For example, the class qptime consists of all queries for which the recognition problem is in ptime.
There is another way to define the complexity of queries that is based on the complexity of actually constructing the result of the query rather than the recognition problem for individual tuples. The two definitions are in most cases interchangeable (see Exercise 16.13). In particular, for complexity classes insensitive to a polynomial factor, the definitions are equivalent. In general, the definition based on constructing the result distinguishes between a query with a large answer and one with a small answer, which is irrelevant to the definition based on recognition. On the other hand, the definition based on constructing the result may not distinguish between easy and hard queries with large results.
Example 16.2.1 Consider a database consisting of one binary relation G and the three queries cross, path, and self on G defined as follows:

cross(G) = π_1(G) × π_2(G),
path(G) = {⟨x, y⟩ | x and y are connected by a path in G},
self(G) = G.

Consider first cross and path. Both have potentially large answers, but cross is clearly easier than path, even though the time complexity of constructing the result is O(n²) for both cross and path. The time complexity of the recognition problem is O(n) for cross and O(n²) for path. Thus the measure based on constructing the result does not detect a difference between cross and path, whereas this is detected by the complexity of the recognition problem. Next consider cross and self. The time complexity of the recognition problem is in both cases O(n), but the complexity of computing the result is O(n) for self whereas it is O(n²) for cross. Thus the complexity of the recognition problem does not distinguish between cross and self, although cross can potentially generate a much larger answer. This difference is detected by the complexity of constructing the result.
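The three queries are easy to program, which makes the contrast between the two measures tangible. In the sketch below (ours), path is computed as a transitive-closure fixpoint:

```python
def cross(G):
    # pi_1(G) x pi_2(G): quadratic-size result, linear recognition.
    return {(x, y) for x, _ in G for _, y in G}

def path(G):
    # Pairs connected by a nonempty path: transitive closure of G.
    closure = set(G)
    while True:
        new = {(x, z) for x, y in closure for y2, z in G if y == y2}
        if new <= closure:
            return closure
        closure |= new

def self_(G):
    return set(G)

G = {(1, 2), (2, 3)}
print(sorted(cross(G)))   # [(1, 2), (1, 3), (2, 2), (2, 3)]
print(sorted(path(G)))    # [(1, 2), (1, 3), (2, 3)]
print(sorted(self_(G)))   # [(1, 2), (2, 3)]
```

Note that cross and path both produce results of quadratic size here, even though deciding membership of a single pair is much cheaper for cross.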
In Part E, we will use the definition of query complexity based on the associated
recognition problem.
16.3 Languages and Complexity

In the previous section we studied a definition of the complexity of an individual query. To measure the complexity of a query language L, we need to establish a correspondence between
- the class of queries expressible in L, and
- a complexity class qc of queries.

Expressiveness with Respect to Complexity Classes

The most straightforward connection between L and a class of queries qc is when L and qc are precisely the same.¹ In this case, it is said that L expresses qc. In every case, each query in L has complexity c, and conversely L can express every query of complexity c.

¹ By abuse of notation, we also denote by L the set of queries expressible in L.
Ideally, one would be able to perform complexity-tailored language design; that is, for a desired complexity c, one would design a language expressing precisely qc. Unfortunately, we will see that this is not always possible. In fact, there are no such results for the pure relational model for complexity classes of polynomial time and below, that are of most interest. We consider this phenomenon at length in the next chapter. Intuitively, the shapes of classes of queries of low complexity do not match those of classes of queries defined by any known language. Therefore we are led to consider a less straightforward way to match languages to complexity classes.
Completeness with Respect to Complexity Classes
Consider a language L that does not correspond precisely to any natural complexity class
of queries. Nonetheless we would like to say something about the complexity of queries in
L. For instance, we may wish to guarantee that all queries in L lie within some complexity
class c, even though L may not express all of qc. For the bound to be meaningful, we
would also like that c is, in some sense, a tight upper bound for the complexity of queries
in L. In other words, L should be able to express at least some queries that are among
the hardest in qc. The property of a problem being hardest in a complexity class c is
captured, in complexity theory, by the notion of completeness of the problem in the class
(see Chapter 2). By extension to a language, this leads to the following:
Definition 16.3.1 A language L is complete with respect to a complexity class c if
(a) each query in L is also in qc, and
(b) there exists a query in L for which the associated recognition problem is complete with respect to the complexity class c.
As in the classical definition of completeness of a problem in a complexity class, we qualify, when necessary, the notion of completeness in a complexity class by the complexity of the reduction. For instance, "L is logspace complete with respect to c" qualifies (b) by stating that the query expressible in L whose recognition problem is complete in c is in fact logspace complete in c.
In some sense, completeness without expressiveness says something negative about
the language L. L can express some queries that are as hard as any query in qc; on the
other hand, there may be easy queries in qc that are not expressible in L. This may at first
appear contradictory because L expresses some queries that are complete in c, and any
problem in c can be reduced to the complete problem. However, there is no contradiction.
The reduction of the easy query to the complete query may be computationally easy but
nevertheless not expressible in L. Examples of this situation involve the familiar languages fixpoint and while. As will be shown in Section 17.3, these languages are complete in ptime and pspace, respectively. However, neither can express the simple parity query on a unary relation R:
even(R) = true if |R| is even, and false otherwise.
Complexity and Genericity
To conclude this chapter, we consider the delicate impact of genericity on complexity.
The foregoing query even illustrates a fundamental phenomenon relating genericity to the
complexity of queries. As stated earlier, even cannot be computed by fixpoint or by while, both of which are powerful languages. The difficulty in computing even is due to the lack
of information about the elements of the set. Because the database only provides a set
of undifferentiated elements, genericity implies that they are treated uniformly in queries.
This rules out the straightforward solution of repeatedly extracting one arbitrary element
from the set until the set is empty while keeping a binary counter: How does one specify
the first element to be extracted?
On the other hand, consider the problem of computing even with a TM. The additional
information provided by the encoding of the input on the tape makes the problem trivial
and allows a linear-time solution.
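The linear-time solution mentioned here can be sketched as follows (not from the book; modeling the tape as a Python list of encoded constants is an illustrative assumption): once the input arrives as a sequence, a single scan maintaining one parity bit decides even.

```python
# A minimal sketch (tape modeled as a list of encoded constants, an
# assumption for illustration) of the linear-time computation of "even"
# once an encoding of the input is available.

def even_from_tape(tape):
    parity = True  # True means "an even number of elements seen so far"
    for _ in tape:
        parity = not parity
    return parity
```

The point of the surrounding discussion is precisely that this scan exploits the sequential layout of the encoding, something a generic device has no access to.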
This highlights the interesting fact that genericity may complicate the task of computing a query, whereas access to the internal representation may simplify this task considerably. Thus this suggests a trade-off between genericity and complexity. This can be formalized by defining complexity classes based on a computing device that is generic by definition in place of a TM. Such a device cannot take advantage of the representation of data in the same manner as a TM, and it treats data generically at all points in the computation. It can be shown that even is hard with respect to complexity measures based on such a device. The query even will be used repeatedly to illustrate various aspects of the complexity of queries.
Bibliographic Notes

The study of computable queries originated in the work of Chandra and Harel [CH80b, Cha81a, CH82]. In addition to well-typed languages, they also considered languages defining queries with data-dependent output schemas. The data and expression complexity of queries were introduced and studied in [CH80a, CH82] and further investigated in [Var82a]. Data complexity is most widely used and is based on the associated recognition problem. Data complexity based on constructing the result of the query is discussed in [AV90].
The notion of genericity was formalized in [AU79, CH80b] with different terminology. The term C-genericity was first used in [HY84]. Other notions related in spirit to genericity are studied in [Hul86]. The definition of genericity is extended in [AK89] to object-oriented queries that can produce new constants in the result (arising from new object identifiers); see also [VandBGAG92, HY90]. This is further discussed in Chapters 18 and 21.
A modified notion of Turing machine is introduced in [HS93] that permits domain elements to appear on the Turing tape, thus obviating the need to encode them. However, this device still uses an ordered representation of the input instance. A device operating directly on relations is the on-site acceptor of [Lei89a]. This extends the formal algorithmic procedure (FAP) proposed in [Fri71] in the context of recursion theory. Another variation of this device is presented in [Lei89b]. Further generalizations of TMs, which do not assume an ordered input, are introduced in [AV91b, AV94]. These are used to define nonstandard complexity classes of queries and to investigate the trade-off between genericity and complexity.
Informative discussions of the connection between query languages and complexity classes are provided in [Gur84, Gur88, Imm87b, Lei89a].
Exercises
Exercise 16.1 Let q be a C-generic mapping. Show that, for each input instance I, adom(q(I)) ⊆ C ∪ adom(I).
Exercise 16.2 (Genericity) Let q be a generic database mapping from R to S.
(a) Let α and β be enumerations of dom, and suppose that M computes q using enc_α. Prove that for each instance I over R,

enc_β⁻¹ ∘ M ∘ enc_β(I) = enc_α⁻¹ ∘ M ∘ enc_α(I).

Conclude that M computes q using enc_β.
(b) Verify that the definitions of computable relative to α and computable are equivalent for generic database mappings.
Exercise 16.3 Let R be a database schema and S a relation schema.
(a) Prove that it is undecidable to determine, given TM M that computes a mapping q from inst(R) to inst(S) relative to enumeration α of dom, whether q is generic.
(b) Show that the set of TMs that compute queries from R to S is co-r.e.
Exercise 16.4 In many practical situations the underlying domains used (e.g., strings, integers) have some structure (e.g., an ordering relationship that is visible to both user and implementation). For each of the following, develop a natural definition for generic and exhibit a nongeneric query, if there is one.
(a) dom is partitioned into several sorts dom₁, . . . , domₙ.
(b) dom has a dense total order ≤. [A total order is dense if ∀x, y(x < y → ∃z(x < z ∧ z < y)).]
(c) dom has a discrete total order ≤. [A total order is discrete if ∀x[(∃y(x < y) → ∃z(x < z ∧ ¬∃w(x < w ∧ w < z))) ∧ (∃y(y < x) → ∃z(z < x ∧ ¬∃w(z < w ∧ w < x)))].]
(d) dom is the set of nonnegative integers and has the usual ordering ≤.
Exercise 16.5 Let q be a C-generic query, and let I be an input instance. Let σ be an automorphism of I that is the identity on C, and let a, b be constants in I, such that σ(a) = b. Show that a occurs in q(I) iff b occurs in q(I).
The next several exercises use the following notions. Let R be a database schema. Let k be a positive integer and I an instance over R. Iᵏ denotes the set of k-tuples that can be formed using just constants in I. Define the following relation ≡ₖ on Iᵏ: u ≡ₖ v iff there exists an automorphism σ of I such that σ(u) = v. The k-type index of I, denoted #ₖ(I), is the number of equivalence classes of ≡ₖ.
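For small instances, the k-type index can be computed by brute force. The following sketch (not from the book; the binary-relation representation and the exhaustive search over permutations are illustrative assumptions) enumerates the automorphisms of I and counts the orbits of k-tuples.

```python
# A brute-force sketch (not from the book) of the k-type index #_k(I) for a
# small binary relation I: enumerate automorphisms as permutations of the
# active domain, then count orbits of k-tuples under the automorphism group.
from itertools import permutations, product

def automorphisms(adom, I):
    """All bijections of adom mapping the edge set I onto itself."""
    autos = []
    for perm in permutations(adom):
        sigma = dict(zip(adom, perm))
        if {(sigma[a], sigma[b]) for (a, b) in I} == set(I):
            autos.append(sigma)
    return autos

def k_type_index(adom, I, k):
    autos = automorphisms(adom, I)
    tuples = set(product(adom, repeat=k))
    classes = 0
    while tuples:
        u = tuples.pop()
        # Remove the whole orbit of u under the automorphism group.
        for s in autos:
            tuples.discard(tuple(s[x] for x in u))
        classes += 1
    return classes
```

On a complete directed graph, all vertices are interchangeable, so the 1-type index is 1 regardless of size (compare Exercise 16.7(a)); on a directed simple path only the identity automorphism survives, so every tuple is its own class.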
Exercise 16.6 (Equivalence induced by automorphisms) Let R be a database schema and I an instance of R.
(a) Show that ≡ₖ is an equivalence relation on Iᵏ.
(b) Let q be a generic query on R, whose output is a k-ary relation. Show that q(I) is a union of equivalence classes of ≡ₖ.
Exercise 16.7 (Type index) Let G be a binary relation schema corresponding to the edges of
a directed graph. Show the following:
(a) The k-type index of a complete graph is a constant independent of the size of the
graph, as long as it has at least k vertexes.
(b) The k-type index of graphs consisting of a simple path is polynomial in the size of
the graph.
(c) [Lin90, Lin91] The k-type index of a complete binary tree is polynomial in the depth
of the tree.
Exercise 16.8 Let k, n be integers, 0 < n < k, and I an instance over schema R.
(a) Show how to compute ≡ₙ from ≡ₖ.
(b) Prove that #ₙ(I) < #ₖ(I), unless I has just one constant.
Exercise 16.9 (Fixpoint queries and type index) Let φ be a fixpoint query on database schema R. Show that there exists a polynomial p such that, for each instance I over R, φ on input I terminates after at most p(#ₖ(I)) steps, for some k > 0.
Exercise 16.10 (Fixpoint queries on special graphs) Show that every fixpoint query terminates in
(a) a constant number of steps on complete graphs;
(b) [Lin90, Lin91] p(log(|I|)) number of steps on complete binary trees I, for some polynomial p. Hint: Use Exercises 16.7 and 16.9.
Exercise 16.11 [Ban78, Par78] Let R be a schema, I a fixed instance over R, and a₁, . . . , aₙ an enumeration of adom(I). For each automorphism σ on I, let t_σ = ⟨σ(a₁), . . . , σ(aₙ)⟩, and let

auto(I) = {t_σ | σ an automorphism of I}.

(a) Prove that there is a CALC query q with no constants (depending on I) such that q(I) = auto(I).
(b) Prove that for each relation schema S and instance J over S with adom(J) ⊆ adom(I), there is a CALC query q with no constants (depending on I and J) such that q(I) = J iff for each automorphism σ of I, σ(J) = J.
A query language is called bp-complete if it satisfies the if direction of part (b).
Exercise 16.12 (Tape encoding of instances) Let I be a nonempty instance of a database schema R. Let n_c be the number of constants in I, n_t the number of tuples, and α an enumeration of the constants in I. Show that there exist integers k₁, k₂, k₃ depending only on R such that
(a) n_c ≤ k₁ · n_t ≤ |enc_α(I)|,
(b) |enc_α(I)| ≤ k₂ · n_t · log(n_t),
(c) |enc_α(I)| ≤ (n_c)^k₃.
Exercise 16.13 (Recognition versus construction complexity) Let f be a time or space bound for a TM, and let q be a query. The notation r-complexity abbreviates the complexity based on recognition, and a-complexity stands for complexity based on constructing the answer. Show the following:
(a) If the time r-complexity of q is bounded by f, then there exists k, k > 0, such that the time a-complexity of q is bounded by nᵏ · f, where n is the number of constants in the input instance.
(b) If the space r-complexity of q is bounded by f, then there exists k, k > 0, such that the space a-complexity of q is bounded by nᵏ + f, where n is the number of constants in the input instance.
(c) If the time a-complexity of q is bounded by f, then there exists k, k > 0, such that the time r-complexity of q is bounded by k · f.
(d) If the space a-complexity of q is bounded by f, then the space r-complexity of q is bounded by f.
Exercise 16.14 (Data complexity of algebra) Determine the time and space complexity of
each of the relational algebra operations (show the lowest complexity you can).
Exercise 16.15
(a) Develop an algorithm for computing the transitive closure of a graph that uses only
the information provided by the graph (i.e., a generic algorithm).
(b) Develop algorithms for a TM to compute the transitive closure of a graph (starting
from a standard encoding of the graph on the tape) that use as little time (space) as
you can manage.
(c) Write a datalog program defining the transitive closure of a graph so that the number
of stages in the bottom-up evaluation is as small as you can manage.
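As a sketch of the idea behind part (c) (not from the book; the set-of-pairs representation is an assumption), the nonlinear rule T(x, z) ← T(x, y), T(y, z) doubles the length of discovered paths at every stage, so bottom-up evaluation reaches the fixpoint in logarithmically many stages on a simple path, versus linearly many for the linear rule T(x, z) ← G(x, y), T(y, z).

```python
# A sketch (not from the book) of bottom-up evaluation of transitive closure
# with the nonlinear rule T(x,z) <- T(x,y), T(y,z): path lengths double each
# stage, so only O(log n) stages are needed on an n-node path.

def transitive_closure_stages(G):
    T = set(G)
    stages = 0
    while True:
        # One bottom-up stage of T(x,z) <- T(x,y), T(y,z), keeping the base.
        new = T | {(x, z) for (x, y) in T for (y2, z) in T if y == y2}
        stages += 1
        if new == T:
            return T, stages
        T = new
```

On the path 1 → 2 → 3 → 4 → 5 the fixpoint is reached after three stages (lengths 2, then 4, then a stage detecting no change), whereas the linear rule would take a stage per path length.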
17 First Order, Fixpoint, and While

Alice: I get it, now we'll match languages to complexity classes.
Sergio: It's not that easy; data independence adds some spice.
Riccardo: You can think of it as not having order.
Vittorio: It's a lot of fun, and we'll play some games along the way.
In Chapter 16, we laid the framework for studying the expressiveness and complexity of query languages. In this chapter, we evaluate three of the most important classes of languages discussed so far (CALC, fixpoint, and while) with respect to expressiveness and complexity. We show that CALC is in logspace and ac⁰, that fixpoint is complete in ptime, and that while is complete in pspace.¹ We also investigate the impact of the presence of an ordering of the constants in the input.
We first show that CALC can be evaluated in logspace. This complexity result partly explains the success of relational database systems: Relational queries can be evaluated efficiently. Furthermore, it implies that these queries are within nc and thus that they have a high potential of intrinsic parallelism (not yet fully exploited in actual systems). We prove that CALC queries can be evaluated in constant time in a particular (standard) model of parallel computation based on circuits.
While looking at the expressive power of CALC and the other two languages, we study their limitations by examining queries that cannot be expressed in these languages. This leads us to introduce important tools that are useful in investigating the expressive power of query languages. We first present an elegant characterization of CALC based on Ehrenfeucht-Fraïssé games. This is used to show limitations in the expressive power of CALC, such as the nonexpressibility of the transitive closure query on a graph. A second tool related to expressiveness, which applies to all languages discussed in this chapter, consists of proving 0-1 laws for languages. This powerful approach, based on probabilities, allows us to show that certain queries (such as even) are not expressible in while and thus not in fixpoint or CALC.
As discussed in Section 16.3, there are simple queries that these languages cannot express (e.g., the prototypical example of even). Together with the completeness of fixpoint and while in ptime and pspace, respectively, this suggests that there is an uneasy relationship between these languages and complexity classes. As intimated in Section 16.3, the problem can be attributed to the fact that a generic query language cannot take advantage of the information provided by the internal representation of data used by Turing machines, such as an ordering of the constants. For instance, the query even is easily expressible in while if an order is provided.

¹ ac⁰ and nc are two parallel complexity classes defined later in this chapter.
A fundamental result of this chapter is that fixpoint expresses exactly qptime under the assumption that queries can access an order on the constants. It is especially surprising that a complexity class based on such a natural resource as time coincides with a logic-based language such as fixpoint. However, this characterization depends on the order in a crucial manner, and this highlights the importance of order in the context of generic computation. No language is known that expresses qptime without the order assumption; and the existence of such a language remains one of the main open problems in the theory of query languages.
This chapter concludes with two recent developments that shed further light on the interplay of order and expressiveness. The first shows that a while query on an unordered database can be reduced to a while query on an ordered database via a fixpoint query. The fixpoint query produces an ordered database from a given unordered one by grouping tuples into a sequence of blocks that are never split in the computation of the while query; the blocks can then be thought of as elements of an ordered database. This also allows us to clarify the connection between fixpoint and while: They are distinct, unless ptime = pspace.
The second recent development considers nondeterminism as a means for overcoming limitations due to the absence of ordering of the domain. Several nondeterministic extensions of CALC, fixpoint, and while are shown.
The impact of order is a constant theme throughout the discussion of expressive power. As discussed in Chapter 16, the need to consider computation without order is a consequence of the data independence principle, which is considered important in the database perspective. Therefore computation with order is viewed as a metaphor for an (at least partial) abandonment of the data independence principle.
17.1 Complexity of First-Order Queries

This section considers the complexity of first-order queries and shows that they are in qlogspace. This result is particularly significant given its implications about the parallel complexity of CALC and thus of relational languages in general. Indeed, logspace ⊆ nc. As will be seen, this means that every CALC query can be evaluated in polylogarithmic time using a polynomial number of processors. Moreover, as described in this section, a direct proof shows the stronger result that the first-order queries can in fact be evaluated in ac⁰. Intuitively, this says that first-order queries can be evaluated in constant time with a polynomial number of processors.
We begin by showing the connection between CALC and qlogspace.
Theorem 17.1.1 CALC is included in qlogspace.
Proof Let φ be a query in CALC over some database schema R. We will describe a TM M_φ, depending on φ, that solves the recognition problem for φ and uses a work tape with length logarithmic in the size of the read-only input tape.
Suppose that M_φ is started with input enc_α(I)#enc_α(u) for some instance I over R, some enumeration α of the constants, and some tuple u over adom(I) whose arity is the same as that of the result of φ. M_φ should accept the input iff u ∈ φ(I). We assume w.l.o.g. that φ is in prenex normal form. We show by induction on the number of quantifiers of φ that the computation can be performed using k log(|enc_α(I)#enc_α(u)|) cells of the work tape, for some constant k.
Basis. If φ has no quantifiers, then all the variables of φ are free. Let ν be the valuation mapping the free variables of φ to u. M_φ must determine whether I ⊨ φ[ν]. To determine the truth value of each literal L under ν occurring in φ, one needs only scan the input tape looking for ν(L). This can be accomplished by considering each tuple of I in turn, comparing it with relevant portions of u. For each such tuple, the address of the beginning of the tuple should be stored on the tape along with the offset to the current location of the tuple being scanned. This can be accomplished within logarithmic space.
Induction. Now suppose that each prenex normal form CALC formula with less than n quantifiers can be evaluated in logspace, and let φ be a prenex normal form formula with n quantifiers. Suppose φ is of the form ∃x ψ. (The case when φ is of the form ∀x ψ is similar.)
All possible values of x are tried. If some value is found that makes ψ true, then the input is accepted; otherwise it is rejected. The values used for x are all those that appear on the input tape in the order in which they appear. To keep track of the current value of x, one needs log(n_c) work tape cells, where n_c is the number of constants in I. Because n_c is less than the length of the input, the number of cells needed is no more than log(|enc_α(I)#enc_α(u)|). The problem is now reduced to evaluating ψ for each value of x. By the induction hypothesis, this can be done using k log(|enc_α(I)#enc_α(u)|) work tape cells for some k. Thus the entire computation takes (k + 1) log(|enc_α(I)#enc_α(u)|) work tape cells, which concludes the induction.
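The induction can be mirrored directly in code. The following sketch (not from the book; the callable matrix and the list-based active domain are illustrative assumptions) evaluates a prenex formula by cycling the outermost variable through the constants of the input and recursing, so the only state needed per quantifier is the current candidate value, which is exactly the counter the proof charges logarithmic space for.

```python
# A sketch (not from the book) mirroring the induction in the proof: evaluate
# a prenex CALC formula by trying, for the outermost variable, every constant
# in the active domain (in input order) and recursing on the rest.

def eval_prenex(quantifiers, matrix, adom, valuation=()):
    """quantifiers: list of ('exists'|'forall', var) pairs, outermost first;
    matrix: callable taking a dict of variable bindings (the quantifier-free
    part); adom: list of constants, standing in for the input tape."""
    if not quantifiers:
        return matrix(dict(valuation))
    q, var = quantifiers[0]
    rest = quantifiers[1:]
    results = (eval_prenex(rest, matrix, adom, valuation + ((var, c),))
               for c in adom)
    return any(results) if q == 'exists' else all(results)
```

For instance, with G = {(1, 1), (1, 2)} over the domain {1, 2}, the sentence ∃x ∀y G(x, y) holds (witness x = 1) while ∀x ∃y G(x, y) fails (x = 2 has no outgoing edge).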
Unfortunately, CALC does not express all of qlogspace. It will be shown in Section 17.3 that even, although clearly in qlogspace, is not a first-order query.
We next consider informally the parallel complexity of CALC. We are concerned with two parallel complexity classes: nc and ac⁰. Intuitively, nc is the class of problems that can be solved using polynomially many processors in time polynomial in the logarithm of the input size; ac⁰ also allows polynomially many processors but only constant time. The formal definitions of nc and ac⁰ are based on a circuit model in which time corresponds to the depth of the circuit and the number of gates corresponds to its size. The circuits use and, or, and not gates and have unbounded fan-in.² Thus ac⁰ is the class of problems definable using circuits where the depth is constant and the size polynomial in the input.
The fact that the complexity of CALC is logspace implies that its parallel complexity is nc, because it is well known that logspace ⊆ nc. However, one can prove a tighter result, which says that the parallel complexity of CALC is in fact ac⁰. So only constant time is needed to evaluate CALC queries. More than any other known complexity result on CALC, this captures the fundamental intuition that first-order queries can be evaluated in parallel very efficiently and that they represent, in some sense, primitive manipulations of relations.

² The fan-in is the number of wires going into a gate.
We sketch only the proof and leave the details for Exercise 17.2.
Theorem 17.1.2 Every CALC query is in ac⁰.
Crux Let us first provide an intuition of the result independent of the circuit model. We will use the relational algebra. We will argue that each of the operations σ, π, ×, ∪, − can be performed in constant parallel time using only polynomially many processors.
Let e be an expression in the algebra over some database schema R. Consider the following infinite space of processors. There is one processor for each pair ⟨f, u⟩, where f is a subexpression of e and u is a tuple of the same arity as the result of f, using constants from dom. Let us denote one such processor by p_{f,u}. Note that, in particular, for each relation name Q occurring in f and each u of the arity of Q, p_{Q,u} is one of the processors. Each processor has two possible states, true or false, indicating whether u is in the result of f.
At the beginning, all processors are in state false. An input instance is specified by turning on the processors corresponding to tuples in the input relations (i.e., processors p_{R,u} if u is in input relation R). The result consists of the tuples u for which p_{e,u} is in state true at the end of the computation. For a given input, we are only concerned with the processors formed from tuples with constants occurring in the input. Clearly, no more than polynomially many processors will be relevant during the computation.
It remains to show that each algebra operation takes constant time. Consider, for instance, cross product. Suppose f × g is a subexpression of e. To compute f × g, the processors p_{f,u} and p_{g,v} send the message true to processor p_{(f×g),uv} if their state is true. Processor p_{(f×g),uv} goes to state true when receiving two true messages. The other operations are similar. Thus e is evaluated in constant time in our informal model of parallel computation.
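The cross-product step just described can be sketched as follows (not from the book; representing each processor state as a dictionary entry is an illustrative assumption): every processor p_{(f×g),uv} performs a single and of its two incoming states, so one parallel round suffices.

```python
# A sketch (not from the book) of the processor argument for cross product:
# one boolean "processor state" per (subexpression, tuple) pair, with the
# product decided in a single round of and-gates.
from itertools import product

def eval_cross_product(f_tuples, g_tuples, adom, arity_f, arity_g):
    """States of the processors p_{(f x g), uv} after one parallel round."""
    state = {}
    for u in product(adom, repeat=arity_f):
        for v in product(adom, repeat=arity_g):
            # p_{(f x g), uv} fires iff it receives true from both
            # p_{f,u} and p_{g,v}.
            state[u + v] = (u in f_tuples) and (v in g_tuples)
    return {t for t, on in state.items() if on}
```

The sequential loop here simulates what the argument treats as simultaneous: each state entry depends only on the two input states, never on another entry, which is why the depth is constant.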
To formalize the foregoing intuition using the circuit model, one must construct, for each n, a circuit B_n that, for each input of length n consisting of an encoding over the alphabet {0, 1} of an instance I and a tuple u, outputs 1 iff u ∈ e(I). The idea for constructing the circuit is similar to the informal construction in the previous paragraph except that processors are replaced by wires (edges in the graph representing the circuit) that carry either the value 1 or 0. Moreover, each B_n has polynomial size. Thus only wires that can become active for some input are included. Figure 17.1 represents fragments of circuits computing some relational operations. In the figure, f is the cross product of g and h (i.e., g × h); f′ is the difference g − h; and f″ is the projection of h on the first coordinate. Observe that projection is the most tricky operation. In the figure, it is assumed that the active domain consists of four constants. Note also that because of projection, the circuits have unbounded fan-in.
We leave the details of the construction of the circuits B_n to the reader (see Exercise 17.2). In particular, note that one must use a slightly more cumbersome encoding than that used for Turing machines because the alphabet is now restricted to {0, 1}.
[Figure 17.1: Some fragments of circuits. And-gates compute f = g × h at tuples such as [a, b, a, b]; an and-gate with a not computes the difference f′ = g − h; an or-gate over [h, [a, a]], [h, [a, b]], [h, [a, c]], [h, [a, d]] computes the projection f″ at [a].]
One might naturally wonder if CALC expresses all queries in ac⁰. It turns out that there are queries in ac⁰ that are not first order. This is demonstrated in Section 17.4.
17.2 Expressiveness of First-Order Queries

We have seen that first-order queries have desirable properties with respect to complexity. However, there is a price to pay for this in terms of expressiveness: There are many useful queries that are not first order. Typical examples of such queries are even and transitive closure of a graph. This section presents an elegant technique based on a two-player game that can be used to prove that certain queries (including even and transitive closure) are not first order. Although the game we describe is geared toward first-order queries, games provide a general technique that is used in conjunction with many other languages.
The connection between CALC sentences and games is, intuitively, the following. Consider as an example a CALC sentence of the form

∀x₁ ∃x₂ ∀x₃ ψ(x₁, x₂, x₃).

One can view the sentence as a statement about a game with two players, 1 and 2, who alternate in picking values for x₁, x₂, x₃. The sentence says that Player 2 can always force a choice of values that makes ψ(x₁, x₂, x₃) true. In other words, no matter which value Player 1 chooses for x₁, Player 2 can pick an x₂ such that, no matter which x₃ is chosen next by Player 1, ψ(x₁, x₂, x₃) is true.
The actual game we use, called the Ehrenfeucht-Fraïssé game, is slightly more involved, but is based on a similar intuition. It is played on two instances. Suppose that R is a database schema. Let I and J be instances over R, with disjoint sets of constants. Let r be a positive integer. The game of length r associated with I and J is played by two players called Spoiler and Duplicator, making r choices each. Spoiler starts by picking a constant occurring in I or J, and Duplicator picks a constant in the opposite instance. This is repeated r times. At each move, Spoiler has the choice of the instance and a constant in it, and Duplicator must respond in the opposite instance.

[Figure 17.2: A syntax tree]
Let aᵢ be the ith constant picked in I (respectively, bᵢ in J). The set of pairs {(a₁, b₁), . . . , (a_r, b_r)} is a round of the game. The subinstance of I generated by {a₁, . . . , a_r}, denoted I/{a₁, . . . , a_r}, consists of all facts in I using only these constants, and similarly for J, {b₁, . . . , b_r} and J/{b₁, . . . , b_r}.
Duplicator wins the round {(a₁, b₁), . . . , (a_r, b_r)} iff the mapping aᵢ ↦ bᵢ is an isomorphism of the subinstances I/{a₁, . . . , a_r} and J/{b₁, . . . , b_r}.
Duplicator wins the game of length r associated with I and J if he or she has a winning strategy (i.e., Duplicator can always win any game of length r on I and J, no matter how Spoiler plays). This is denoted by I ≡_r J. Note that the relation ≡_r is an equivalence relation on instances over R (see Exercise 17.3).
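On small instances, I ≡_r J can be decided by brute force. The following sketch (not from the book; binary relations as sets of pairs and the explicit minimax recursion are illustrative assumptions) quantifies Spoiler's moves universally and Duplicator's responses existentially, checking the partial-isomorphism condition at every round.

```python
# A brute-force sketch (not from the book) deciding whether Duplicator has a
# winning strategy in the game of length r on two instances given as sets of
# pairs (binary relations).

def is_partial_iso(I, J, a_list, b_list):
    """Does a_i -> b_i define an isomorphism of the generated subinstances?"""
    pairs = set(zip(a_list, b_list))
    forward = dict(pairs)
    backward = {b: a for (a, b) in pairs}
    if len(forward) != len(pairs) or len(backward) != len(pairs):
        return False  # mapping ill defined or not injective
    I_sub = {(x, y) for (x, y) in I if x in forward and y in forward}
    J_sub = {(x, y) for (x, y) in J if x in backward and y in backward}
    return {(forward[x], forward[y]) for (x, y) in I_sub} == J_sub

def duplicator_wins(I, J, adomI, adomJ, r, a=(), b=()):
    if not is_partial_iso(I, J, a, b):
        return False
    if r == 0:
        return True
    # Whatever Spoiler picks (in I or in J), Duplicator has a good answer.
    spoiler_in_I = all(any(duplicator_wins(I, J, adomI, adomJ, r - 1,
                                           a + (x,), b + (y,))
                           for y in adomJ)
                       for x in adomI)
    spoiler_in_J = all(any(duplicator_wins(I, J, adomI, adomJ, r - 1,
                                           a + (x,), b + (y,))
                           for x in adomI)
                       for y in adomJ)
    return spoiler_in_I and spoiler_in_J
```

For example, an edgeless two-node instance and a single-edge instance are indistinguishable in one move (any single pair generates empty subinstances), but Spoiler wins in two moves by picking both endpoints of the edge.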
Intuitively, the equivalence I ≡_r J says that I and J cannot be distinguished by looking at just r constants at a time in the two instances. Recall that the quantifier depth of a CALC formula is the maximum number of quantifiers in a path from the root to a leaf in the representation of the sentence as a tree. The main result of Ehrenfeucht-Fraïssé games is that the ability to distinguish among instances using games of length r is equivalent to the ability to distinguish among instances using some CALC sentence of quantifier depth r.
Example 17.2.1 Consider the sentence ∀x (∃y R(x, y) ∧ ∃z P(x, z)). Its syntax tree is represented in Fig. 17.2. The sentence has quantifier depth 2. Note that, for a sentence in prenex normal form, the quantifier depth is simply the number of quantifiers in the formula.
The main result of Ehrenfeucht-Fraïssé games, stated in Theorem 17.2.2, is that if I and J are two instances such that Duplicator has a winning strategy for the game of length r on the two instances, then I and J cannot be distinguished by any CALC sentence of quantifier depth r. Before proving this theorem, we note that the converse of that result also holds. Thus if two instances are undistinguishable using sentences of quantifier depth r, then they are equivalent with respect to ≡_r. Although interesting, this is of less use as a tool for proving expressibility results, and we leave it as a (nontrivial!) exercise. The main idea is to show that each equivalence class of ≡_r is definable by a sentence of quantifier depth r (see Exercises 17.9 and 17.10).
Theorem 17.2.2 Let I and J be two instances over a database schema R. If I ≡_r J, then for each CALC sentence φ over R with quantifier depth r, I and J both satisfy φ or neither does.

Crux Suppose that I ⊨ φ and J ⊭ φ for some φ of quantifier depth r. We prove that I ≢_r J. We provide only a sketch of the proof in an example.
Let φ be the sentence ∀x₁ ∃x₂ ∀x₃ ψ(x₁, x₂, x₃), where ψ has no quantifiers, and let I and J be two instances such that I ⊨ φ, J ⊭ φ. Then

I ⊨ ∀x₁ ∃x₂ ∀x₃ ψ(x₁, x₂, x₃) and J ⊨ ∃x₁ ∀x₂ ∃x₃ ¬ψ(x₁, x₂, x₃).
We will show that Spoiler can prevent Duplicator from winning by forcing the choice of constants a₁, a₂, a₃ in I and b₁, b₂, b₃ in J such that I ⊨ ψ(a₁, a₂, a₃) and J ⊨ ¬ψ(b₁, b₂, b₃). Then the mapping aᵢ ↦ bᵢ cannot be an isomorphism of the subinstances I/{a₁, a₂, a₃} and J/{b₁, b₂, b₃}, contradicting the assumption that Duplicator has a winning strategy. To force this choice, Spoiler always picks witnesses corresponding to the existential quantifiers in φ and ¬φ (note that the quantifier for each variable is either ∃ in φ and ∀ in ¬φ, or vice versa).
Spoiler starts by picking a constant b₁ in J such that

J |= ∀x₂∃x₃ ¬ψ(b₁, x₂, x₃).
Duplicator must respond by picking a constant a₁ in I. Due to the universal quantification
in φ,

I |= ∃x₂∀x₃ ψ(a₁, x₂, x₃),
regardless of which a₁ was picked. Next Spoiler picks a constant a₂ in I such that

I |= ∀x₃ ψ(a₁, a₂, x₃).
Regardless of which constant b₂ in J Duplicator picks,

J |= ∃x₃ ¬ψ(b₁, b₂, x₃).
Finally Spoiler picks b₃ in J such that J |= ¬ψ(b₁, b₂, b₃); Duplicator picks some a₃ in I,
and I |= ψ(a₁, a₂, a₃).
436 First Order, Fixpoint, and While

[Figure omitted: the single cycle B with nodes a₁, a₂, a₃ marked, and the two disjoint
cycles B₁ and B₂ with nodes b₁, b₂, b₃ marked.]

Figure 17.3: Two indistinguishable graphs
Theorem 17.2.2 provides an important tool for proving that certain properties are not
definable by CALC. It is sufficient to exhibit, for each r, two instances I_r and J_r such that
I_r has the property, J_r does not, and I_r ≡_r J_r. In the next proposition, we illustrate the use
of this technique by showing that graph connectivity, and therefore transitive closure, is
not expressible in CALC.
Proposition 17.2.3 Let R be a database schema consisting of one binary relation. Then
the query conn defined by

conn(I) = true iff I is a connected graph

is not expressible in CALC.
Crux Suppose that there is a CALC sentence φ checking graph connectivity. Let r be
the quantifier depth of φ. We exhibit a connected graph I_r and a disconnected graph J_r
such that I_r ≡_r J_r. Then, by Theorem 17.2.2, the two instances both satisfy φ or neither
does, a contradiction.
For a sufficiently large n (depending only on r; see Exercise 17.5), the graph I_r consists
of a cycle B of 2n nodes and the graph J_r of two disjoint cycles B₁ and B₂ of n nodes each
(see Fig. 17.3). We outline the winning strategy for Duplicator. The main idea is simple:
Two nodes a, a′ in I_r that are far apart behave in the same way as two nodes b, b′ in J_r that
belong to different cycles. In particular, Spoiler cannot take advantage of the fact that a, a′
are connected but b, b′ are not. To do so, Spoiler would have to exhibit a path connecting a
to a′, which Duplicator could not do for b and b′. However, Spoiler cannot construct such
a path because it requires choosing more than r nodes.
For example, if Spoiler picks an element a₁ in I_r, then Duplicator picks an arbitrary
element b₁, say in B₁. Now if Spoiler picks an element b₂ in B₂, then Duplicator picks an
element a₂ in I_r far from a₁. Next, if Spoiler picks a b₃ in B₁ close to b₁, then Duplicator
picks an element a₃ in I_r close to a₁. The graphs are sufficiently large that this can proceed
for r moves with the resulting subgraphs isomorphic. The full proof requires a complete
case analysis on the moves that Spoiler can make.
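The game itself is easy to mechanize on small instances. The following brute-force sketch (ours, not the book's; all function names here are invented) decides whether Duplicator has a winning strategy in the r-move game on two finite undirected graphs. It confirms the flavor of the argument above: a large cycle and two short disjoint cycles survive a 2-move game, while a triangle and a square are separated in 3 moves (the square contains no triangle).

```python
from itertools import product

def cycle(nodes):
    """Undirected cycle on the given node list, as a set of frozenset edges."""
    n = len(nodes)
    return {frozenset({nodes[i], nodes[(i + 1) % n]}) for i in range(n)}

def partial_iso(EA, EB, avec, bvec):
    """Does a_i -> b_i preserve equality and edges (a partial isomorphism)?"""
    for i, j in product(range(len(avec)), repeat=2):
        if (avec[i] == avec[j]) != (bvec[i] == bvec[j]):
            return False
        if (frozenset({avec[i], avec[j]}) in EA) != (frozenset({bvec[i], bvec[j]}) in EB):
            return False
    return True

def duplicator_wins(r, A, EA, B, EB, avec=(), bvec=()):
    """True iff Duplicator has a winning strategy in the r-move game."""
    if not partial_iso(EA, EB, avec, bvec):
        return False
    if r == 0:
        return True
    # Spoiler moves in either structure; Duplicator must answer in the other.
    return all(
        any(duplicator_wins(r - 1, A, EA, B, EB, avec + (x,), bvec + (y,)) for y in B)
        for x in A
    ) and all(
        any(duplicator_wins(r - 1, A, EA, B, EB, avec + (x,), bvec + (y,)) for x in A)
        for y in B
    )

C3, C4 = list(range(3)), list(range(3, 7))
print(duplicator_wins(1, C3, cycle(C3), C4, cycle(C4)))  # True
print(duplicator_wins(3, C3, cycle(C3), C4, cycle(C4)))  # False

C8, two_C4 = list(range(8)), list(range(8, 16))
E2 = cycle(list(range(8, 12))) | cycle(list(range(12, 16)))
print(duplicator_wins(2, C8, cycle(C8), two_C4, E2))  # True
```

The search is exponential in r, which is precisely why the technique is used on paper for arbitrary r rather than by machine.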
The preceding technique can be used to show that many other properties are not
expressible in CALC: for instance, even, 2-colorability of graphs, or the property of being
Eulerian (i.e., having a cycle that passes through each edge exactly once) (see
Exercise 17.7).
17.3 Fixpoint and While Queries

That transitive closure is not expressible in CALC has been the driving force behind ex-
tending relational calculus and algebra with recursion. In this section we discuss the ex-
pressiveness and complexity of the two main extensions of these languages with recursion:
the fixpoint and while queries.

It is relatively easy to place an upper bound on the complexity of fixpoint and while
queries. Recall that the main distinction between languages defining fixpoint queries and
those defining while queries is that the first are inflationary and the second are not (see
Chapter 14). It follows that fixpoint queries can be implemented in polynomial time and
while queries in polynomial space. Moreover, these bounds are tight, as shown next.
Theorem 17.3.1

(a) The fixpoint queries are complete in ptime.
(b) The while queries are complete in pspace.

Crux The fact that each fixpoint query is in ptime follows immediately from the infla-
tionary nature of languages defining the fixpoint queries and the fact that the total number
of tuples that can be built from constants in a given instance is polynomial in the size of
the instance (see Chapter 14). For while, inclusion in pspace follows similarly (see Ex-
ercise 17.11). The completeness follows from an important result that will be shown in
Section 17.4. The result, Theorem 17.4.2, states that if an order on the constants of the do-
main is available, fixpoint expresses exactly qptime and while expresses exactly qpspace.
The completeness then follows from the fact that there exist problems that are complete in
ptime and problems that are complete in pspace (see Exercise 17.11).
The Parity Query

As was the case for the first-order queries, fixpoint and while do not match precisely with
complexity classes of queries. Although they are powerful, neither fixpoint nor while can
express certain simple queries. The typical example is the parity query even on a unary
relation. We next provide a direct proof that while (and therefore fixpoint) cannot express
even. The result also follows using 0-1 laws, which are presented later. We present the
direct proof here to illustrate the proof technique of hyperplanes.
Proposition 17.3.2 The query even is not a while query.

Proof Let R be a unary relation. Suppose that there exists a while program w that
computes the query even on input R. We can assume, w.l.o.g., that R contains a unary
relation ans so that, on input I, w(I)(ans) = ∅ if |I| is even, and w(I)(ans) = I otherwise. Let R
be the schema of w (so R contains R and ans). We will reach a contradiction by showing
that the computation of w on a given input is essentially independent of its size. More
precisely, for n large enough, the computations of w on all inputs of size greater than n
will in some sense be identical. This contradicts the fact that ans should be empty at the
end of some computations but not others.
To show this, we need a short digression related to computations on unary relations.
We assume here that w does not use constants, but the construction can be generalized to
that case (see Exercise 17.14). Let I be an input instance and k an integer. We consider a
partition of the set of k-tuples with entries in adom(I) into hyperplanes based on patterns of
equalities and inequalities between components as follows. For each equivalence relation
θ over {1, . . . , k}, the corresponding hyperplane is defined by³

H_θ(I) = {⟨u₁, . . . , u_k⟩ | for each i, j ∈ [1, k], u_i, u_j ∈ adom(I) and u_i = u_j iff i θ j}.
For instance, let adom(I) = {a, b, c}, k = 3, and

θ = {⟨1, 1⟩, ⟨2, 2⟩, ⟨1, 2⟩, ⟨2, 1⟩, ⟨3, 3⟩}.

Then

H_θ(I) = {⟨a, a, b⟩, ⟨a, a, c⟩, ⟨b, b, a⟩, ⟨b, b, c⟩, ⟨c, c, a⟩, ⟨c, c, b⟩}.
Finally there are two 0-ary hyperplanes, denoted true and false, that evaluate to {⟨⟩} and { },
respectively.

We will see that a while computation cannot distinguish between two k-tuples in the
same hyperplane, and so intermediate relations of arity k will always consist of a union of
hyperplanes.
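To make the partition concrete, here is a small sketch (ours, not from the book; the helper names are invented) that enumerates the hyperplanes of arity k by generating every equivalence relation over {1, . . . , k} as a set partition. Together the hyperplanes partition the set of all k-tuples over adom(I), and the partition with blocks {1, 2} and {3} reproduces the six tuples of the example above.

```python
from itertools import product

def partitions(elems):
    """All set partitions of a list, each as a tuple of blocks (tuples)."""
    if not elems:
        yield ()
        return
    head, rest = elems[0], elems[1:]
    for p in partitions(rest):
        # put head in its own block ...
        yield ((head,),) + p
        # ... or add head to one of the existing blocks
        for i, block in enumerate(p):
            yield p[:i] + ((head,) + block,) + p[i + 1:]

def hyperplane(blocks, adom, k):
    """H_theta(I): the k-tuples whose equality pattern matches the partition."""
    block_of = {i: bi for bi, block in enumerate(blocks) for i in block}
    return {
        u for u in product(adom, repeat=k)
        if all((u[i - 1] == u[j - 1]) == (block_of[i] == block_of[j])
               for i in range(1, k + 1) for j in range(1, k + 1))
    }

adom, k = ['a', 'b', 'c'], 3
planes = [hyperplane(p, adom, k) for p in partitions(list(range(1, k + 1)))]
print(len(planes))                              # Bell(3) = 5 hyperplanes
print(hyperplane(((1, 2), (3,)), adom, k))      # the six tuples of the example
```

The five hyperplanes are pairwise disjoint and cover all 27 triples, which is the partition property the proof relies on.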
Now consider the while program w. We assume that the condition guarding each while
loop has the form R ≠ ∅ for some R ∈ R, and that in each assignment R := E, E involves
a single application of some unary or binary algebra operator. We label the statements of
the program so we can talk about the program state (i.e., the label) after some number of
computation steps on input I. We include two labels in a while statement in the following
manner:

label1 while condition do label2 statement.
³Note that, in logic terminology, θ corresponds to the notion of equality type, and hyperplanes
correspond to realizations of equality types.
Let N be the maximum arity of any relation in R. To conclude the proof, we will show
by induction on the steps of the computation that there is a number b_w such that for each
input I with size ≥ N, w terminates on I after exactly b_w steps. Furthermore,

(*) for each step m ≤ b_w, there exists a label j_m and for each relation T of arity k a set
E_{T,m} of equivalence relations over {1, . . . , k} such that for each input I of size greater
than N

1. the control is at label j_m after m steps of the computation; and
2. each T then contains ⋃{H_θ(I) | θ in E_{T,m}}.

To see that this yields the result, suppose that it is true. Then for each I with size ≥ N, w
terminates with ans always empty or always nonempty, regardless of whether the size of I
is even or odd (a contradiction).
The claim follows from an inductive proof of (*). It is clear that this holds at the
0th step. At the start of the computation, all T are empty except for the input unary
relation R, which contains all constants and so consists of the hyperplane H_θ, where
θ = {⟨1, 1⟩}. Suppose now that (*) holds for each step less than m and that the program
has not terminated on any I with size ≥ N. We prove that (*) also holds for m. There are
two cases to consider:
Label j_{m−1} occurs before the keyword while. By induction, the relation controlling
the loop is empty after the (m−1)st step, for all inputs large enough, or nonempty for
all such inputs. Thus at step m, the control will be at the same label for all instances
large enough, so (*1) holds. No relations have been modified, so (*2) also holds.
Otherwise j_{m−1} labels an assignment statement. Then after the (m−1)st step, the
control will clearly be at the label of the next statement for all instances large enough,
so (*1) holds. With regard to (*2), we consider the case where the assignment is
T := Q₁ × Q₂ for some variables T, Q₁, and Q₂; the other relation operators are
handled in a similar fashion (see Exercise 17.12). By induction, (*2) holds for all
relations distinct from T because they are not modified. Consider T. After step m,
T contains
⋃{H_{θ₁}(I) | θ₁ in E_{Q₁,m−1}} × ⋃{H_{θ₂}(I) | θ₂ in E_{Q₂,m−1}}
= ⋃{H_{θ₁}(I) × H_{θ₂}(I) | θ₁ in E_{Q₁,m−1}, θ₂ in E_{Q₂,m−1}}.
Let k, l be the arities of Q₁, Q₂, respectively, and for each θ₂ in E_{Q₂,m−1}, let

θ₂^{+k} = {(x + k, y + k) | (x, y) ∈ θ₂}.
For an arbitrary binary relation β ⊆ [1, k + l] × [1, k + l], let β* denote the reflexive,
symmetric, and transitive closure of β. For θ₁, θ₂ in E_{Q₁,m−1}, E_{Q₂,m−1}, respec-
tively, set

θ₁ ⊗ θ₂ = {(θ₁ ∪ θ₂^{+k} ∪ γ)* | γ ⊆ [1, k] × [k + 1, k + l],
and for all i, i′, j, j′ such that ⟨i, j⟩ ∈ γ and ⟨i′, j′⟩ ∈ γ, i θ₁ i′ iff j θ₂^{+k} j′}.
It is straightforward to verify that for each pair θ₁, θ₂ in E_{Q₁,m−1}, E_{Q₂,m−1}, respec-
tively, and I with size ≥ N,

H_{θ₁}(I) × H_{θ₂}(I) = ⋃{H_θ(I) | θ ∈ θ₁ ⊗ θ₂}.
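The key point, that the Cartesian product of two hyperplanes is again a union of hyperplanes of higher arity, can be checked by brute force on small examples. In the sketch below (ours; all names invented), membership in the product depends only on a tuple's equality pattern, so every tuple in the product drags its entire hyperplane along with it.

```python
from itertools import product

def eq_pattern(u):
    """The equality type of a tuple: which coordinates hold equal values."""
    return tuple(u.index(x) for x in u)

def hyperplane_of(u, adom):
    """All tuples over adom with the same equality pattern as u."""
    return {v for v in product(adom, repeat=len(u))
            if eq_pattern(v) == eq_pattern(u)}

adom = ['a', 'b', 'c', 'd']
# H_theta1: pairs (x, x); H_theta2: pairs (x, y) with x != y.
H1 = {(x, x) for x in adom}
H2 = {(x, y) for x in adom for y in adom if x != y}
prod = {u + v for u in H1 for v in H2}
# The product of the two hyperplanes is a union of arity-4 hyperplanes.
print(all(hyperplane_of(w, adom) <= prod for w in prod))  # True
```

This only verifies the union shape, not the explicit ⊗ construction, but it is the shape that makes the induction go through.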
Note that this uses the assumption that the size of I is greater than N, the maximum
arity of relations in w. It follows that

E_{T,m} = ⋃{θ₁ ⊗ θ₂ | θ₁ in E_{Q₁,m−1} and θ₂ in E_{Q₂,m−1}}.

Thus (*2) also holds for T at step m, and the induction is completed.
The hyperplane technique used in the preceding proof is based on the fact that, in the
context of a (sufficiently large) unary relation input, there are families of tuples (in this
case the different hyperplanes) that travel together, and hence the intermediate and
final results are unions of these families of tuples. Although there are other cases in which
the technique of hyperplanes can be applied (see Exercise 17.15), in the general case the
input is not a union of hyperplanes, and so the members of a hyperplane do not travel
together. However, there is a generalization of hyperplanes based on automorphisms that
yields the same effect. Recall that an automorphism of I is a one-to-one mapping ρ on
adom(I) such that ρ(I) = I. For fixed I, consider the following equivalence relation ∼_k^I on
k-tuples of adom(I): u ∼_k^I v iff there exists an automorphism ρ of I such that ρ(u) = v.
(See Exercises 16.6 and 16.7 in the previous chapter.) It can be shown that if w is a while
query (without constants), then the members of equivalence classes of ∼_k^I travel together
when w is executed on input I. More precisely, suppose that J is an instance obtained at
some point in the computation of w on input I. The genericity of while programs implies
that if ρ is an automorphism of I, it is also an automorphism of J. Thus for each k-tuple u in
some relation of J and each v such that u ∼_k^I v, v also belongs to that relation. Thus each
relation in J of arity k is a union of equivalence classes of ∼_k^I. The equivalence relation
∼_k^I will be used in our development of 0-1 laws, presented next.
0-1 Laws

We now develop a powerful tool that provides a uniform approach to resolving in the
negative a large spectrum of expressibility problems. It is based on the probability that a
property is true in instances of a given size. We shall prove a surprising fact: All properties
expressible by a while query are "almost surely" true or "almost surely" false. More
precisely, we prove the result for while sentences:
Definition 17.3.3 A sentence is a total query that is Boolean (i.e., returns as answer
either true or false).

Let q be a sentence over some schema R. For each n, let μ_n(q) denote the fraction of
instances over R with entries in {1, . . . , n} that satisfy q. That is,

μ_n(q) = |{I | q(I) = true and adom(I) = {1, . . . , n}}| / |{I | adom(I) = {1, . . . , n}}|.
Definition 17.3.4 A sentence q is almost surely true (false) if lim_{n→∞} μ_n(q) exists and
equals 1 (0). If every sentence in a language L is almost surely true or almost surely false,
the language L has a 0-1 law.
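For tiny schemas, μ_n can be computed by exhaustive enumeration, which makes the definition concrete. The sketch below (ours; the test property "G is symmetric" and all names are illustrative, not from the book) enumerates the loop-free directed graphs whose active domain is exactly {1, . . . , n} and measures the fraction that are symmetric; the fraction visibly shrinks as n grows.

```python
from itertools import combinations, chain

def all_edges(n):
    return [(i, j) for i in range(1, n + 1) for j in range(1, n + 1) if i != j]

def instances(n):
    """All loop-free digraphs I with adom(I) = {1, ..., n} exactly."""
    edges = all_edges(n)
    for r in range(len(edges) + 1):
        for subset in combinations(edges, r):
            if set(chain.from_iterable(subset)) == set(range(1, n + 1)):
                yield set(subset)

def mu(n, q):
    """mu_n(q): the fraction of instances of active domain {1,...,n} satisfying q."""
    insts = list(instances(n))
    return sum(1 for I in insts if q(I)) / len(insts)

symmetric = lambda I: all((j, i) in I for (i, j) in I)
print(mu(2, symmetric))  # 1/3: of 3 instances, only the two-cycle is symmetric
print(mu(3, symmetric))  # 4/54: the 4 symmetric covers out of 54 instances
```

Enumeration is only feasible for very small n (there are 2^(n(n−1)) candidate edge sets), which is why the text works with limits rather than exact counts.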
To simplify the discussion of 0-1 laws, we continue to focus exclusively on constant-
free queries (see Exercise 17.19).

We will show that CALC, fixpoint, and while sentences have 0-1 laws. This provides
substantial insight into limitations of the expressive power of these languages and can
be used to show that they cannot express a variety of properties. For example, it follows
immediately that even is not expressible in either of these languages. Indeed, μ_n(even) is 1
if n is even and 0 if n is odd. Thus μ_n(even) does not converge, so even is not expressible
in a language that has a 0-1 law.
While 0-1 laws provide an elegant and powerful tool, they require the development
of some nontrivial machinery. Interestingly, this is one of the rare occasions when we will
need to consider infinite instances even though we aim to prove something about finite
instances only.

We start by proving that CALC has a 0-1 law and then extend the result to fixpoint
and while. For simplicity, we consider only the case when the input to the query is a binary
relation G (representing edges in a directed graph with no edges of the form ⟨a, a⟩). It is
straightforward to generalize the development to arbitrary inputs (see Exercise 17.19).
We will use an infinite set A of CALC sentences called extension axioms, which refer
to graphs. They say, intuitively, that every subgraph can be extended by one node in all
possible ways. More precisely, A contains, for each k, all sentences of the form

∀x₁ . . . ∀x_k((⋀_{i≠j} ¬(x_i = x_j)) → ∃y(⋀_i ¬(x_i = y) ∧ connections(x₁, . . . , x_k; y))),
where connections(x₁, . . . , x_k; y) is some conjunction of literals containing, for each x_i,
one of G(x_i, y) or ¬G(x_i, y), and one of G(y, x_i) or ¬G(y, x_i). For example, for k = 3,
one of the 2⁶ extension axioms is
∀x₁, x₂, x₃((¬(x₁ = x₂) ∧ ¬(x₂ = x₃) ∧ ¬(x₃ = x₁)) →
∃y (¬(x₁ = y) ∧ ¬(x₂ = y) ∧ ¬(x₃ = y)
∧ G(x₁, y) ∧ G(y, x₁) ∧ G(x₂, y) ∧ G(y, x₂) ∧ G(x₃, y) ∧ G(y, x₃)))

specifying the pattern of connections represented in Fig. 17.4.
[Figure omitted: the connection pattern between the vertex y and x₁, x₂, x₃.]

Figure 17.4: A connection pattern
A graph G satisfies this particular extension axiom if for each triple x₁, x₂, x₃ of
distinct vertexes in G, there exists a vertex y connected to x₁, x₂, x₃, as shown in Fig. 17.4.
Note that A consists of an infinite set of sentences and that each finite subset of A is
satisfied by some infinite instance. (The instance is obtained by starting from one node and
repeatedly adding nodes required by the extension axioms in the subset.) Then by the com-
pactness theorem there is an infinite instance satisfying all of A, and by the Löwenheim-
Skolem theorem (see Chapter 2) there is a countably infinite instance R satisfying A.
The following lemma shows that R is unique up to isomorphism.

Lemma 17.3.5 If R and P are two countably infinite instances over G satisfying all
sentences in A, then R and P are isomorphic.
Proof Suppose that a₁a₂ . . . is an enumeration of all constants in R, and b₁b₂ . . . is an
enumeration of those in P. We construct an isomorphism between R and P by alternat-
ingly picking constants from R and from P. We construct sequences a_{i_1} . . . a_{i_k} . . . and
b_{i_1} . . . b_{i_k} . . . such that a_{i_k} → b_{i_k} is an isomorphism from R to P. The procedure for pick-
ing the kth constants a_{i_k} and b_{i_k} in these sequences is defined inductively as follows. For the
base case, let a_{i_1} = a₁ and b_{i_1} = b₁. Suppose that sequences a_{i_1} . . . a_{i_k} and b_{i_1} . . . b_{i_k} have
been defined. If k is even, let a_{i_{k+1}} be the first constant in a₁, a₂, . . . that does not occur so
far in the sequence. Let σ_k be the sentence in A describing the way a_{i_{k+1}} extends the sub-
graph with nodes a_{i_1} . . . a_{i_k}. Because P also satisfies σ_k, there exists a constant b in P that
extends the subgraph b_{i_1} . . . b_{i_k} in the same manner. Let b_{i_{k+1}} = b. If k is odd, the procedure
is reversed (i.e., it starts by choosing first a new constant from b₁, b₂, . . .). This back-and-
forth procedure ensures that (1) all constants from both R and P occur eventually among
the chosen constants, and (2) the mapping a_{i_k} → b_{i_k} is an isomorphism.
Thus the foregoing proof shows that there exists a unique (up to isomorphism) count-
able graph R satisfying A. This graph, studied extensively by Rado [Rad64] and others,
is usually referred to as the Rado graph. We can now prove the following crucial lemma.
The key point is the equivalence between (a) and (c), called the transfer property: It relates
satisfaction of a sentence by the Rado graph to the property of being almost surely true.

Lemma 17.3.6 Let R be the Rado graph and φ a CALC sentence. The following are
equivalent:

(a) R satisfies φ;
(b) A implies φ; and
(c) φ is almost surely true.
Proof (a) ⇒ (b): Suppose (a) holds but (b) does not. Then there exists some instance P
satisfying A but not φ. Because P satisfies A, P must be infinite. By the Löwenheim-
Skolem theorem (see Chapter 2), we can assume that P is countable. But then, by
Lemma 17.3.5, P is isomorphic to R. This is a contradiction, because R satisfies φ but P
does not.
(b) ⇒ (c): It is sufficient to show that each sentence in A is almost surely true.
Suppose this is the case and A implies φ. By the compactness theorem, φ is implied
by some finite subset A′ of A. Because every sentence in A′ is almost surely true, the
conjunction ⋀A′ of these sentences is almost surely true. Because φ is true in every
instance where ⋀A′ is true, μ_n(φ) ≥ μ_n(⋀A′), so μ_n(φ) converges to 1 and φ is almost
surely true.
It remains to show that each sentence in A is almost surely true. Consider the following
sentence φ_k in A:

∀x₁ . . . ∀x_k((⋀_{i≠j} ¬(x_i = x_j)) → ∃y(⋀_i ¬(x_i = y) ∧ connections(x₁, . . . , x_k; y))).
Then ¬φ_k is the sentence

∃x₁ . . . ∃x_k((⋀_{i≠j} ¬(x_i = x_j)) ∧ ¬∃y(⋀_i ¬(x_i = y) ∧ connections(x₁, . . . , x_k; y))).
We will show the following property on the probability that an instance with n constants
does not satisfy φ_k:

(**) μ_n(¬φ_k) ≤ n · (n − 1) · . . . · (n − k) · (1 − 1/2^{2k})^{(n−k)}.

Because lim_{n→∞}[n · (n − 1) · . . . · (n − k) · (1 − 1/2^{2k})^{(n−k)}] = 0, it follows that
lim_{n→∞} μ_n(¬φ_k) = 0, so ¬φ_k is almost surely false, and φ_k is almost surely true.
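Numerically, the bound in (**) indeed vanishes: the polynomial factor n(n − 1) · · · (n − k) is eventually crushed by the exponentially decaying factor (1 − 1/2^{2k})^{n−k}. A quick sketch (ours; the sample values of k and n are arbitrary):

```python
def bound(k, n):
    """n(n-1)...(n-k) * (1 - 1/2**(2k))**(n-k), the right-hand side of (**)."""
    prod = 1.0
    for i in range(k + 1):           # the k+1 factors n, n-1, ..., n-k
        prod *= (n - i)
    return prod * (1 - 1 / 2 ** (2 * k)) ** (n - k)

for n in (50, 100, 500, 1000):
    print(n, bound(2, n))
# For k = 2 the bound first grows with the polynomial factor,
# then collapses toward 0 as the exponential decay takes over.
```

The crossover point moves out as k grows (the decay rate 1 − 1/2^{2k} approaches 1), but for every fixed k the limit is 0.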
Let N be the number of instances with constants in {1, . . . , n}. To prove (**), observe
the following:

1. For some fixed distinct a₁, . . . , a_k, b in {1, . . . , n}, the number of I satisfying some
fixed literal in connections(a₁, . . . , a_k; b) is (1/2)N.
2. For some fixed distinct a₁, . . . , a_k, b in {1, . . . , n}, the number of I satisfying
connections(a₁, . . . , a_k; b) is (1/2^{2k})N (because there are 2k literals in connections).
3. The number of I not satisfying connections(a₁, . . . , a_k; b) is therefore
N − (1/2^{2k})N = (1 − 1/2^{2k})N.
4. For some fixed a₁, . . . , a_k in {1, . . . , n}, the number of I satisfying
¬∃y(⋀_i ¬(a_i = y) ∧ connections(a₁, . . . , a_k; y))
is (1 − 1/2^{2k})^{n−k} N [because there are (n − k) ways of picking b distinct from
a₁, . . . , a_k].
5. The number of I satisfying ¬φ_k is thus at most
n · (n − 1) · . . . · (n − k) · (1 − 1/2^{2k})^{(n−k)} · N
(from the choices of a₁, . . . , a_k). Hence (**) is proven.

(See Exercise 17.16.)
(c) ⇒ (a): Suppose that R does not satisfy φ (i.e., R |= ¬φ). Because (a) ⇒ (c), ¬φ
is almost surely true. Then φ cannot be almost surely true (a contradiction).
The 0-1 law for CALC follows immediately.

Theorem 17.3.7 Each sentence in CALC is almost surely true or almost surely false.

Proof Let φ be a CALC sentence. The Rado graph R satisfies either φ or ¬φ. By the
transfer property [(a) ⇔ (c) in Lemma 17.3.6], φ is almost surely true or ¬φ is almost
surely true. Thus φ is almost surely true or almost surely false.
The 0-1 law for CALC can be extended to fixpoint and while. We prove it next for
while (and therefore fixpoint). Once again the proof uses the Rado graph and extends the
transfer property to the while sentences.
Theorem 17.3.8 Every while sentence is almost surely true or almost surely false.

Proof We use as a language for the while queries the partial fixpoint logic CALC+μ.
The main idea of the proof is to show that every CALC+μ sentence that is defined on all
instances is in fact equivalent almost surely to a CALC sentence, and so by the previous
result is almost surely true or almost surely false. We show this for CALC+μ sentences.
By Theorem 14.4.7, we can consider w.l.o.g. only sentences involving one application of
the partial fixpoint operator μ. Thus consider a CALC+μ sentence of the form

φ = ∃x(μ_T(σ(T))(t))

over schema R, where

(a) σ is a CALC formula, and
(b) t is a tuple of variables or constants of appropriate arity, and x is the tuple of
distinct free variables in t.
(We need the existential quantification for binding the free variables. An alternative is to
have constants in t but, as mentioned earlier, we do not consider constants when discussing
0-1 laws.)
Essentially, a computation of a μ query consists of iterating the CALC formula σ until
convergence occurs (if ever). Consider the sequence {σ^i(I)}_{i>0}, where I is an input. If I
is finite, the sequence is periodic [i.e., there exist N and p such that, for each n ≥ N,
σ^n(I) = σ^{n+p}(I)]. If p = 1, then the sequence converges (it becomes constant at some
point); otherwise it does not. Now consider the sequence {σ^i(R)}_{i>0}, where R is the Rado
graph. Because the set of constants involved is no longer finite, the sequence may or may
not be periodic. A key point in our proof is the observation that the sequence {σ^i(R)}_{i>0} is
indeed periodic, just as in the finite case.
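The dichotomy between periodicity and convergence is easy to visualize for any operator iterated over a finite space: remember when each state was first seen, and read off the pair (N, p). The sketch below is ours (the two toy operators are invented for illustration); an inflationary operator converges (p = 1), while a non-inflationary one can oscillate forever.

```python
def find_period(sigma, start):
    """Iterate sigma from start; return (N, p) with sigma^N = sigma^(N+p)."""
    seen = {}
    state, step = start, 0
    while state not in seen:
        seen[state] = step
        state = sigma(state)
        step += 1
    N = seen[state]
    return N, step - N

dom = frozenset(range(5))

# Inflationary: add one missing element per step until the domain is exhausted.
grow = lambda s: frozenset(s | {min(dom - s)} if dom - s else s)
print(find_period(grow, frozenset()))   # (5, 1): converges after 5 steps

# Non-inflationary: complementation flips between two states forever.
flip = lambda s: frozenset(dom - s)
print(find_period(flip, frozenset()))   # (0, 2): period 2, never converges
```

On an infinite structure such as the Rado graph no such pigeonhole argument applies directly, which is why the proof needs the automorphism classes introduced next.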
To see this, we use a technique similar to the hyperplane technique in the proof of
Lemma 17.3.5. Let k be some integer. We argue next that for each k, there is a finite number
of equivalence classes of k-tuples induced by automorphisms of R. For each pair u, v of k-
tuples with entries in adom(R), let u ∼_k^R v iff there exists an automorphism ρ of R such
that ρ(u) = v.

Let u ≈_k^R v if both the patterns of equality and the patterns of connection within u and
v are identical. More formally, for each u = ⟨a₁, . . . , a_k⟩, v = ⟨b₁, . . . , b_k⟩ (where a_i and
b_i are constants in R), u ≈_k^R v if

for each i, j, a_i = a_j iff b_i = b_j, and
for each i, j, ⟨a_i, a_j⟩ is an edge in R iff ⟨b_i, b_j⟩ is an edge in R.

We claim that

u ∼_k^R v iff u ≈_k^R v.

The only if part follows immediately from the definitions. For the if part, suppose that
u ≈_k^R v. To show that u ∼_k^R v, we must build an automorphism ρ of R such that ρ(u) = v.
This is done by a back-and-forth construction, as in Lemma 17.3.5, using the extension
axioms satisfied by R (see Exercise 17.18).
Because there are finitely many patterns of connection and equality among k vertexes,
there are finitely many equivalence classes of ≈_k^R, so of ∼_k^R. Due to genericity of the while
computation, each σ^i(R) is a union of such equivalence classes (see Exercise 16.6 in the
previous chapter). Thus there must exist m, l, 0 ≤ m < l, such that σ^m(R) = σ^l(R). Let
N = m and p = l − m. Then for each n ≥ N, σ^n(R) = σ^{n+p}(R). It follows that:

(1) {σ^i(R)}_{i>0} is periodic.
Using this fact, we show the following:

(2) The sequence {σ^i(R)}_{i>0} converges.
(3) The sentence φ is equivalent almost surely to some CALC sentence ψ.
Before proving these, we argue that (2) and (3) will imply the statement of the theorem.
Suppose that (2) and (3) hold. Suppose also that ψ is false in R. By Lemma 17.3.6, ψ is
almost surely false. Then μ_n(φ) ≤ μ_n(φ ∧ ¬ψ) + μ_n(ψ), and both μ_n(φ ∧ ¬ψ) and μ_n(ψ)
converge to 0, so lim_n(μ_n(φ)) = 0. Thus φ is also almost surely false. By a similar
argument, φ is almost surely true if ψ is true in R.
We now prove (2). Let σ_{ij} be the CALC sentence stating that σ^i and σ^j are equivalent.
Suppose {σ^i(R)}_{i>0} does not converge. Thus the period of the sequence is greater than 1,
so there exist m, j, l, m < j < l, such that

σ^m(R) = σ^l(R) ≠ σ^j(R).

Thus R satisfies the CALC sentence

ξ = σ_{ml} ∧ ¬σ_{mj}.

Let I range over finite databases. Because φ is defined on all finite inputs, {σ^i(I)}_{i≥0}
converges. On the other hand, by the transfer property (Lemma 17.3.6), ξ is almost surely
true. It follows that the sequence {σ^i(I)}_{i>0} diverges almost surely. In particular, there exist
finite I for which {σ^i(I)}_{i>0} diverges (a contradiction).
The proof of (3) is similar. By (1) and (2), the sequence {σ^i(R)}_{i>0} becomes constant
after finitely many iterations, say N. Then φ is equivalent on R to the CALC sentence ψ =
∃x(σ^N(t)). Suppose R satisfies φ. Thus R satisfies ψ. Furthermore, R satisfies σ_{N(N+1)}
because {σ^i(R)}_{i>0} becomes constant at the Nth iteration. Thus R satisfies ψ ∧ σ_{N(N+1)}.
By the transfer property for CALC, ψ ∧ σ_{N(N+1)} is almost surely true. For each finite
instance I where ψ ∧ σ_{N(N+1)} holds, {σ^i(I)}_{i>0} converges after N iterations, so φ is equiva-
lent to ψ. It follows that φ is almost surely equivalent to ψ. The case where R does not
satisfy φ is similar.
Thus we have shown that while sentences have a 0-1 law. It follows immediately
that many queries, including even, are not while sentences. The technique of 0-1 laws has
been extended successfully to languages beyond while. Many languages that do not have
0-1 laws are also known, such as existential second-order logic (see Exercise 17.21). The
precise border that separates languages that have 0-1 laws from those that do not has yet to
be determined and remains an interesting and active area of research.
17.4 The Impact of Order
In this section, we consider in detail the impact of order on the expressive power of query
languages. As mentioned at the beginning of this chapter, we view the assumption of order
as, in some sense, suspending the data independence principle in a database. Because
data independence is one of the main guiding principles of the pure relational model, it
is important to understand its consequences in the expressiveness and complexity of query
languages.

As illustrated by the even query, order can considerably affect the expressiveness of a
language and the difficulty of computing some queries. Without the order assumption, no
expressiveness results are known for the complexity classes of ptime and below; that is, no
P            succ
b a c        a b
b b d        b c
c a d        c d
d b a

Figure 17.5: An ordered instance
languages are known that express precisely the queries of those complexity classes. With
order, there are numerous such results. We present two of the most prominent ones.

At the end of this section, we present two recent developments that further explore
the interplay of order and expressiveness. The first is a normal form for while queries that,
speaking intuitively, separates a while query into two components: one unordered and the
second ordered. The second development increases expressive power on unordered input
by introducing nondeterminism in queries.
We begin by making the notion of an ordered database more precise. A database is
said to be ordered if it includes a designated binary relation succ that provides a successor
relation on the constants occurring in the database. A query on an ordered database is a
query whose input database schema contains succ and that ranges only over the ordered
instances of the input database schema.
Example 17.4.1 Consider the database schema R = {P, succ}, where P is ternary. An
ordered instance of R is represented in Fig. 17.5. According to succ, a is the first constant,
b is the successor of a, c is the successor of b, and d is the successor of c. Thus a, b, c, d
can be identified with the integers 1, 2, 3, 4, respectively.
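The identification of constants with integers in the example can be computed mechanically from succ: the first constant is the one with no predecessor, and the rest follow by chasing successors. A small sketch (ours; the function name is invented):

```python
def ranks(succ):
    """Map each constant to its 1-based position in the succ chain."""
    preds = {b for (_, b) in succ}
    nexts = dict(succ)
    first = next(a for (a, _) in succ if a not in preds)  # no predecessor
    out, cur, i = {}, first, 1
    while cur is not None:
        out[cur] = i
        cur, i = nexts.get(cur), i + 1
    return out

print(ranks({('a', 'b'), ('b', 'c'), ('c', 'd')}))
# {'a': 1, 'b': 2, 'c': 3, 'd': 4}
```

This assumes succ really is a single successor chain, which is exactly the ordered-database assumption.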
We now consider the power of fixpoint and while on ordered databases. In particular,
we prove the fundamental result that fixpoint expresses precisely qptime on ordered data-
bases, and while expresses precisely qpspace on ordered databases. This shows that order
has a far-reaching impact on expressiveness, well beyond isolated cases such as the even
query. More broadly, the characterization of qptime by fixpoint (with the order assump-
tion) provides an elegant logical description of what have traditionally been considered
the tractable problems. Beyond databases, this is significant to both logic and complexity
theory.
Theorem 17.4.2

(a) Fixpoint expresses qptime on ordered databases.
(b) While expresses qpspace on ordered databases.

Proof Consider (a). We have already seen that fixpoint ⊆ qptime (see Exercise 17.11),
and so it remains to show that all qptime queries on ordered databases are expressible in
fixpoint. Let q be a query on a database with schema R that includes succ, such that q is
in qptime on the ordered instances of R. Thus there is a polynomial p and Turing machine
M′ that, on input enc(I)#enc(u), terminates in time p(|enc(I)#enc(u)|) and accepts the
input iff u ∈ q(I). (In this section, encodings of ordered instances are with respect to the
enumeration of constants provided by succ; see also Chapter 16.) Because q(I) has size
polynomial in I, a TM M can be constructed that runs in polynomial time and that, on input
enc(I), produces as output enc(q(I)). We now describe the construction of a CALC+μ⁺
query q_M that is equivalent to q on ordered instances of R.
The fixpoint query q_M we construct, when given ordered input I, will operate in three
phases: (α) construct an encoding of I that can be used to simulate M; (β) simulate M;
and (γ) decode the output of M. A key point throughout the construction is that q_M is
inflationary, and so it must compute without ever deleting anything from a relation. Note
that this restriction does not apply to (b), which simplifies the simulation in that case.
We next describe the encoding used in the simulation of M. The encoding is centered
around a relation that holds the different configurations reached by M.

Representing a tape. Because the tape is infinite, we only represent the finite portion,
polynomial in length, that is potentially used. We need a way to identify each cell of the
tape. Let n_c be the number of constants in I. Because M runs in polynomial time, there
is some k such that M on input enc(I) takes time n_c^k, and thus n_c^k tape cells (see also
Exercise 16.12 in the previous chapter). Consider the world of k-tuples with entries in the
constants from I. Note that there are n_c^k such tuples and that they can be lexicographically
ordered using succ. Thus each cell can be uniquely identified by a k-tuple of constants
from I. One can define by a fixpoint query a 2k-ary relation succ_k providing the successor
relation on k-tuples, in the lexicographic order induced by succ (see Exercise 17.23a). The
ordered k-tuples thus allow us to represent a sequence of cells and hence M's tape.
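The relation succ_k is just counting in base n_c: the lexicographic successor of a k-tuple increments its last coordinate, and on overflow resets it to the first constant and carries leftward. The book builds succ_k inside fixpoint logic (Exercise 17.23a); the sketch below (ours; names invented) only materializes the same relation explicitly to show its shape.

```python
def succ_k(succ, k):
    """Successor relation on k-tuples in the lexicographic order induced by succ."""
    nexts = dict(succ)
    first = next(a for a in nexts if a not in set(nexts.values()))
    def succ_tuple(t):
        t = list(t)
        for i in reversed(range(k)):    # increment with carry, rightmost first
            if t[i] in nexts:
                t[i] = nexts[t[i]]
                return tuple(t)
            t[i] = first                # overflow: reset and carry left
        return None                     # t was the maximum tuple
    pairs, t = [], (first,) * k
    s = succ_tuple(t)
    while s is not None:                # enumerate all n_c**k tuples in order
        pairs.append((t, s))
        t, s = s, succ_tuple(s)
    return pairs

succ_rel = {('a', 'b'), ('b', 'c')}     # three constants: a < b < c
pairs = succ_k(succ_rel, 2)
print(pairs[0], pairs[-1], len(pairs))
```

With 3 constants and k = 2 there are 9 tuples and hence 8 successor pairs, from (a, a) → (a, b) up to (c, b) → (c, c).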
Representing all the configurations. Note that one cannot remove the tuples represent-
ing old configurations of M due to the inflationary nature of fixpoint computations. Thus
one represents all the configurations in a single relation. To distinguish a particular config-
uration (e.g., that at time i, i ≤ n_c^k), k columns are used as timestamp. Thus to keep track of
the sequence of configurations in a computation of M, one can use a (2k + 2)-ary relation
R_M where

1. the first k columns serve as a timestamp for the configuration,
2. the next k identify the tape cells,
3. column (2k + 1) holds the content of the cell, and
4. column (2k + 2) indicates the state and position of the head.

Note that now we are dealing with a double encoding: The database is encoded on the tape,
and then the tape is encoded back into R_M.
To illustrate this simple but potentially confusing situation, we consider an example. Let R = {P, succ}, and let I be the ordered instance of R represented in Fig. 17.5. Then enc(I) is represented in Fig. 17.6. We assume, without loss of generality, that symbols in the tape alphabet and the states of M are in dom. Parts of the first two configurations are represented in the relation shown in Fig. 17.7. The representation assumes that k = 4, so the arity of the relation is 10. Because this is a single-volume book, only part of the relation is shown. More precisely, we show the first tuples from the representation of the first two configurations.

17.4 The Impact of Order 449

P[1#0#10][1#1#11][10#0#11][11#1#0]succ[0#1][1#10][10#11]

Figure 17.6: Encoding of I and u on a TM tape

It is assumed that the original state is s and the head points to the first cell of the tape; and that in that state, the head moves to the right, changing P to 0, and the machine goes to state r. Observe that the timestamp for the first configuration is ⟨a, a, a, a⟩, and ⟨a, a, a, b⟩ for the second. Observe also the numbering of tape cells: ⟨a, a, a, a⟩, . . . , ⟨a, a, c, d⟩, etc.
We can now describe the three phases of the operation of q_M more precisely: For a given ordered instance I, q_M

(α) computes, in R_M, a representation of the initial configuration of M on input enc(I);
(β) computes, also in R_M, the sequence of consecutive configurations of M until termination; and
(γ) decodes the final tape contents of M, as represented in R_M, into the output relation.

We sketch the construction of the fixpoint queries realizing (α) and (β) here, and we leave (γ) as an exercise (17.23).
Consider phase (α). Recall that each constant is encoded on the tape of M as the binary representation of its rank in the successor relation succ (e.g., c as 10). To perform the encoding of the initial configuration, it is useful first to construct an auxiliary relation that provides the encoding of each constant. Because there are n_c constants, the code of each constant requires ⌈log(n_c)⌉ bits, and thus fewer than n_c bits. We can therefore use a ternary relation constant_coding to record the encoding. A tuple ⟨x, y, z⟩ in that relation indicates that the kth bit of the encoding of constant x is z, where k is the rank of constant y in the succ relation. For instance, the relation constant_coding corresponding to the succ in Fig. 17.5 is represented in Fig. 17.8. The tuples ⟨c, a, 1⟩ and ⟨c, b, 0⟩ indicate, for instance, that c is encoded as 10. It is easily seen that constant_coding is definable from succ by a fixpoint query (see Exercise 17.23b).
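As a concrete cross-check of Fig. 17.8, this Python sketch (ours, not the book's fixpoint query) computes constant_coding from succ: the constant of rank i contributes one tuple per bit of the binary representation of i, the bit in position j being paired with the constant of rank j.

```python
def constant_coding(succ):
    """succ: list of (x, y) successor pairs on the constants.
    Returns tuples (x, y, z): bit z of the code of x, at the position
    given by the rank of y (most significant bit first)."""
    nxt = dict(succ)
    order = [({x for x, _ in succ} - {y for _, y in succ}).pop()]
    while order[-1] in nxt:
        order.append(nxt[order[-1]])
    coding = set()
    for rank, x in enumerate(order):
        # format(rank, 'b') is the rank in binary: a->'0', b->'1', c->'10', ...
        for pos, bit in enumerate(format(rank, 'b')):
            coding.add((x, order[pos], int(bit)))
    return coding

print(sorted(constant_coding([('a', 'b'), ('b', 'c'), ('c', 'd')])))
# -> [('a', 'a', 0), ('b', 'a', 1), ('c', 'a', 1), ('c', 'b', 0),
#     ('d', 'a', 1), ('d', 'b', 1)]
```

The output reproduces exactly the six tuples of Fig. 17.8.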
With relation constant_coding constructed, the task of computing the encoding of I and u into R_M is straightforward. We will illustrate this using again the example in Fig. 17.5. To encode relation P, one steps through all 3-tuples of constants and checks if a tuple in P has been reached. To step through the 3-tuples, one first constructs the successor relation succ_3 on 3-tuples. The first tuple in P that is reached is ⟨b, a, c⟩. Because this is the first tuple encoded, one first inserts into R_M the identifying information for P (the first tuple in Fig. 17.7). This proceeds, yielding the next tuples in Fig. 17.7. The binary representation for each of b, a, c is obtained from relation constant_coding. This proceeds by moving to the next 3-tuple. It is left to the reader to complete the details of the fixpoint query constructing R_M (see Exercise 17.23c). Several additional relations have to be used for bookkeeping purposes. For instance, when stepping through the tuples in succ_3, one must keep track of the last tuple that has been processed.
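The whole encoding step can be mirrored procedurally. The sketch below (an illustration under our own conventions, with Python strings standing in for tape cells) reproduces the tape of Fig. 17.6 by stepping through the tuples of each relation in the lexicographic order induced by succ; the instance I is read off Fig. 17.6, and appending the encoding of u would add the trailing "#[10]".

```python
import itertools

def encode(instance, succ):
    """instance: dict relation name -> set of tuples (emitted in dict order);
    succ: successor pairs fixing the order of the constants.
    Constants are coded by the binary representation of their rank."""
    nxt = dict(succ)
    order = [({x for x, _ in succ} - {y for _, y in succ}).pop()]
    while order[-1] in nxt:
        order.append(nxt[order[-1]])
    code = {c: format(i, 'b') for i, c in enumerate(order)}  # a->'0', b->'1', c->'10', ...

    tape = []
    for name, tuples in instance.items():
        tape.append(name)
        arity = len(next(iter(tuples)))
        # Step through all tuples in lexicographic order; emit those in the relation.
        for t in itertools.product(order, repeat=arity):
            if t in tuples:
                tape.append('[' + '#'.join(code[c] for c in t) + ']')
    return ''.join(tape)

I = {'P': {('b','a','c'), ('b','b','d'), ('c','a','d'), ('d','b','a')},
     'succ': {('a','b'), ('b','c'), ('c','d')}}
print(encode(I, [('a','b'), ('b','c'), ('c','d')]))
# -> P[1#0#10][1#1#11][10#0#11][11#1#0]succ[0#1][1#10][10#11]
```

Note that ⟨b, a, c⟩ is indeed the first tuple of P emitted, as in the text.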
450 First Order, Fixpoint, and While

R_M
a a a a   a a a a   P   s
a a a a   a a a b   [   0
a a a a   a a a c   1   0
a a a a   a a a d   #   0
a a a a   a a b a   0   0
a a a a   a a b b   #   0
a a a a   a a b c   1   0
a a a a   a a b d   0   0
a a a a   a a c a   ]   0
a a a a   a a c b   [   0
a a a a   a a c c   1   0
a a a a   a a c d   #   0
...
a a a b   a a a a   0   0
a a a b   a a a b   [   r
a a a b   a a a c   1   0
a a a b   a a a d   #   0
a a a b   a a b a   0   0
a a a b   a a b b   #   0
a a a b   a a b c   1   0
a a a b   a a b d   0   0
a a a b   a a c a   ]   0
a a a b   a a c b   [   0
a a a b   a a c c   1   0
a a a b   a a c d   #   0
...

Figure 17.7: Coding of part of the (first two) configurations

We next outline the construction for (β). One must simulate the computation of M starting from the initial configuration represented in R_M. To construct a new configuration from the current one, one must simulate a move of M. This is repeated until M reaches a final state (accepting or rejecting), which, as we assumed earlier, happens after at most n_c^k steps. The iteration can be performed using the fixpoint operator in CALC+μ⁺. Each step consists of defining the new configuration from the current one, timestamping it, and adding it to R_M. This can be done with a CALC formula. For instance, suppose the current state of M is q, the content of the current cell is 0, and the corresponding move of M is to change 0 to 1, move right, and change states from q to r. Suppose also that
constant_coding
a a 0
b a 1
c a 1
c b 0
d a 1
d b 1

Figure 17.8: The relation constant_coding corresponding to a, b, c, d
• t̄ is the timestamp (in the example this is a 4-tuple) identifying the current configuration,
• R_M contains the tuple ⟨t̄, j̄, 0, q⟩, where j̄ specifies a tape cell (in the example again with a 4-tuple), and
• t̄′ is the next timestamp and j̄′ the next cell [i.e., succ_k(t̄, t̄′) and succ_k(j̄, j̄′)].
The tuples describing the new configuration of M are

(a) ⟨t̄′, ī, x, y⟩ if ī ≠ j̄, ī ≠ j̄′, and ⟨t̄, ī, x, y⟩ ∈ R_M;
(b) ⟨t̄′, j̄, 1, 0⟩;
(c) ⟨t̄′, j̄′, x, r⟩ if ⟨t̄, j̄′, x, 0⟩ ∈ R_M.
In other words, (a) says that the cells other than the j̄th cell and the next cell remain unchanged; (b) says that the content of cell j̄ changes from 0 to 1, and the head no longer points to the j̄th cell; finally, (c) says that the head points to the right adjacent cell, the new state is r, and the content of that cell is unchanged. Clearly, (a) through (c) can be expressed by a CALC formula (Exercise 17.23d). One such formula is needed for each move of M, and the formula corresponding to the finite set of possible moves is obtained by their disjunction.
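For intuition, the effect of rules (a) through (c) on one configuration can be traced in a few lines of Python (an illustration only: cells are plain integers here rather than k-tuples, and 0 marks a cell the head is not on):

```python
def apply_move(config):
    """config: dict cell -> (symbol, state); exactly one cell carries 'q'.
    Move simulated: in state q reading 0, write 1, move right, enter r."""
    j = next(cell for cell, (_, st) in config.items() if st == 'q')
    new = {}
    for cell, (sym, st) in config.items():
        if cell == j:
            new[cell] = ('1', 0)        # (b): cell j now holds 1, head gone
        elif cell == j + 1:
            new[cell] = (sym, 'r')      # (c): head on next cell, state r
        else:
            new[cell] = (sym, st)       # (a): all other cells unchanged
    return new

conf = {0: ('0', 'q'), 1: ('1', 0), 2: ('#', 0)}
print(apply_move(conf))
# -> {0: ('1', 0), 1: ('1', 'r'), 2: ('#', 0)}
```

In the query itself, the new tuples are added to R_M under the next timestamp rather than overwriting the old ones.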
We have outlined queries that realize (α) and (β) (i.e., perform the encoding needed to run M and then simulate the run of M). Using these fixpoint queries and their analog for phase (γ), it is now easy to construct the fixpoint query q_M that carries out the complete computation of q. This completes the proof of (a).

The construction for (b) is similar. The difference lies in the fact that a while computation need not be inflationary, unlike fixpoint computations. This simplifies the simulation. For instance, only the tuples corresponding to the current configuration of M are kept in R_M (Exercise 17.24).
Although ptime is considered synonymous with tractability in many circumstances, complexity classes lower than ptime are most useful in practice in the context of potentially large databases. There are numerous results that extend the logical characterization of qptime to lower complexity classes for ordered databases. For instance, by limiting the fixpoint operator in fixpoint to simpler operators based on various forms of transitive closure, one can obtain languages expressing qlogspace and qnlogspace on ordered databases.
Theorem 17.4.2 implies that the presence of order results in increased expressive power for the fixpoint and while queries. For these languages, this is easily seen (for instance, even can be expressed by fixpoint when an order is provided). For weaker languages, the impact of order may be harder to see. For instance, it is not obvious whether the presence of order results in increased expressive power for CALC. The query even is of no immediate help, because it cannot be expressed by CALC even in the presence of order (Exercise 17.8). However, a more complicated query based on even can be used to show that CALC does indeed become more expressive with an order (Exercise 17.27). Because the CALC queries on ordered instances remain in ac⁰, this shows in particular that there are queries in ac⁰ that CALC cannot express.
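The claim above that even becomes expressible by fixpoint once an order is available rests on a simple idea: walk the successor chain while flipping a parity bit. A Python rendering of that idea (ours; an inflationary fixpoint query would timestamp the parity instead of overwriting it):

```python
def even(R, succ):
    """R: set of constants; succ: list of successor pairs ordering R.
    Returns True iff R has an even number of elements."""
    if not R:
        return True
    nxt = dict(succ)
    x = next(a for a in R if a not in {b for _, b in succ})  # minimum
    parity_even = False   # one element counted so far
    while x in nxt:
        x = nxt[x]
        parity_even = not parity_even
    return parity_even

print(even({'a', 'b', 'c', 'd'}, [('a', 'b'), ('b', 'c'), ('c', 'd')]))  # -> True
print(even({'a', 'b', 'c'}, [('a', 'b'), ('b', 'c')]))                   # -> False
```

Without succ there is no way to enumerate the elements one by one, which is exactly what the genericity arguments against even exploit.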
From Chaos to Order: A Normal Form for While

We next discuss informally a normal form for the while queries that provides a bridge between computations without order and computations with order. This helps us understand the impact of order and the cost of computation without order.

The normal form says, intuitively, that each while query on an unordered instance can be reduced to a while query over an ordered instance via a fixpoint query. More precisely, a while program in the normal form consists of two phases. The first is a fixpoint query that performs an analysis of the input. It computes an equivalence relation on tuples that is a congruence with respect to the rest of the computation, in that equivalent tuples are treated identically throughout the computation. Thus each equivalence class is treated as an indivisible block of tuples that is never split later in the computation. The fixpoint query outputs the equivalence classes in some order, so that each class can be thought of abstractly as an integer. The second phase consists of a while query that can be viewed as computing on an ordered database obtained by replacing each equivalence class produced in the analysis phase by its corresponding integer.

The normal form also allows the clarification of the relationship between fixpoint and while. Because on ordered databases the two languages express qptime and qpspace, respectively, the languages are equivalent on ordered databases iff ptime = pspace. What about the relationship of these languages without the order assumption? It turns out that the normal form can be used to extend this result to the general case when no order is present.
We do not describe the normal form in detail, but we provide some intuition on how a query on an unordered database reduces to a query on an ordered database.

Consider a while program q and a particular instance. There are only finitely many CALC queries that are used in q, and the number of their variables is bounded by some integer, say k. To simplify, assume that the input instance consists of a single relation I of arity k and that all relations used in q also have arity k. We can further assume that all queries used in assignment statements are either conjunctive queries or the single algebra operations ∪, −, and that no relation name occurs twice in a query. For a query γ in q, γ(R_1, . . . , R_n) indicates that R_1, . . . , R_n are the relation names occurring in γ.

Consider the set J of k-tuples formed with the constants from I. First we can distinguish between tuples based on their presence in (or absence from) I. This yields a first partition of J. Now using the conjunctive queries occurring in q, we can iteratively refine this partition in the following way: If for some conjunctive query γ(R_1, . . . , R_n) occurring in q and some blocks B_1, . . . , B_n of the current partition, γ(B_1, . . . , B_n) and its complement both have nonempty intersection with some block B′ of the current partition, we refine the partition by splitting the block B′ into B′ ∩ γ(B_1, . . . , B_n) and B′ − γ(B_1, . . . , B_n). This is repeated until no further refinement occurs, yielding a final partition of J. Furthermore, the blocks can be numbered as they are produced, which provides an ordering J_1, . . . , J_m of the blocks of the partition. The entire computation can be performed by a fixpoint query constructed from q.
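The refinement loop just described can be prototyped directly. In this Python sketch (our own simplification: each query takes a single block as argument, standing in for γ(B_1, . . . , B_n)), blocks are split until no query image cuts across a block:

```python
import itertools

def refine(initial, queries):
    """initial: list of disjoint blocks (sets of tuples);
    queries: functions from one block to a set of tuples, standing in
    for the conjunctive queries of q. Splits blocks until every query
    image is a union of blocks."""
    partition = [frozenset(b) for b in initial]
    changed = True
    while changed:
        changed = False
        for q, B in itertools.product(queries, list(partition)):
            image = frozenset(q(B))
            for block in list(partition):
                inside = block & image
                if inside and inside != block:   # image cuts across block
                    partition.remove(block)
                    partition += [inside, block - inside]
                    changed = True
    return set(partition)

# Tuples abstracted to integers; initial partition: in I vs. not in I.
blocks = refine([{1, 2}, {3, 4}],
                [lambda B: {2 * x for x in B if 2 * x <= 4}])
print(blocks)  # the doubling query separates every element: four singletons
```

Each resulting block plays the role of one congruence class J_i, and the order in which blocks are produced yields the numbering used in the second phase.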
It is important to note that two tuples u, v in one block of the final partition cannot be separated by the computation of q on input I (i.e., at each step of this computation, each relation either contains both u and v or neither). In other words, each relation contains a union of blocks of the final partition. Then one can reduce the original computation to an abstract computation q′ on the integers by replacing the ith block of the partition by integer i. Thus the original query q can be rewritten as the composition of a fixpoint query f followed by a while query q′ that essentially operates on an ordered input.
Using this normal form, one can show the following:

Theorem 17.4.3 while = fixpoint iff ptime = pspace.

Crux The only if part follows from Theorem 17.4.2. The normal form is used for the if part as follows. Suppose ptime = pspace. Then qptime = qpspace. Let q be a while query. By the normal form, q is the composition of a fixpoint query f followed by a while query q′ whose computation is isomorphic to that of a while query on an ordered domain. Because q′ is in pspace and pspace = ptime, q′ is in ptime. By Theorem 17.4.2(a), there exists a fixpoint query f′ equivalent to q′ on the ordered domain. Thus q is equivalent to the composition of f followed by f′, and is a fixpoint query.
An Alternative to Order: Nondeterminism

Results such as Theorem 17.4.2 show that the presence of order can solve some of the problems of expressiveness of query languages. This can be interpreted as a trade-off between expressiveness and the data independence provided by the abstract interface to the database system. We conclude this section by considering an alternative to order for increasing expressive power. It is based on the use of nondeterminism.

We will use the following terminology. A deterministic query is a classical query that always produces at most one output for each input instance. A nondeterministic query is a query that may have more than one possible outcome on a given input instance. Generally we assume that all possible outcomes are acceptable as answers to the query. For example, the query "Find one cinema showing Casablanca" is nondeterministic.

Consider again the query even, which is not expressible by fixpoint or while. The query even is easily computed by fixpoint in the presence of order (see Exercise 17.25). Another way to circumvent the difficulty of computing even is to relax the determinism of the query language. If one could choose, whenever desired, an arbitrary element from the set, this would provide another way of enumerating the elements of the set and computing even.
I          I1         I2         I3         I4
A B        A B        A B        A B        A B
a b        a b        a b        a c        a c
a c        b b        b c        b b        b c
b b
b c

Figure 17.9: An application of witness
The drawback is that, with such a nondeterministic construct in the language, determinism of queries can no longer be guaranteed.

The trade-offs based on order and nondeterminism are not as unrelated as they may seem at first. Suppose that an order is given. As argued earlier, this comes down to suspending the data independence principle and accessing the internal representation. In general, the computation may depend on the particular order accessed. Then at the conceptual level, where the order is not visible, the mapping defined by the query appears as nondeterministic. Different outcomes are possible for the same conceptual-level view of the input. Thus the trade-offs based on order and on relaxing determinism are intimately connected.

To illustrate this, we exhibit nondeterministic versions of the while(+) and CALC+μ(+) queries. In both cases we obtain exactly the (deterministic and nondeterministic) queries computable in polynomial space (time). Analogous results can be shown for lower complexity classes of queries.
Consider first the algebraic setting. We introduce a new operator called witness that provides the nondeterminism. To illustrate the use of this operator, consider the relation I in Fig. 17.9. An application of witness_B to I may lead to several results [i.e., witness_B(I) is either I1, I2, I3, or I4]. Intuitively, for each x occurring in the A column, witness_B selects some tuple ⟨x, y⟩ in I, thus choosing nondeterministically a B value y for x. More generally, for each relation J over some schema U = XY, X ∩ Y = ∅, witness_Y(J) selects one tuple ⟨x̄, ȳ⟩ for each x̄ occurring in π_X(J). Observe that from this definition, witness_U(J) selects one tuple in J (if any).
It is also possible to describe the semantics of the witness operator using functional dependencies: For each instance J over some schema XY, X ∩ Y = ∅, a possible result of witness_Y(J) is a maximal subinstance J′ of J satisfying X → Y (i.e., such that the attributes in X form a key).
The witness operator provides, more generally, a uniform way of obtaining nondeter-
ministic counterparts for traditional deterministic languages.
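Concretely, the set of possible results of witness_Y(J) can be enumerated by grouping J on its X-attributes and keeping one tuple per group, as in this Python sketch (illustrative; attribute positions stand in for attribute names):

```python
import itertools

def witness_results(J, x_idx):
    """J: set of tuples; x_idx: positions of the X attributes.
    Yields every possible result of witness over the remaining positions:
    one tuple kept per distinct X-value, i.e., X becomes a key."""
    groups = {}
    for t in sorted(J):
        groups.setdefault(tuple(t[i] for i in x_idx), []).append(t)
    # One independent choice per group gives one maximal subinstance.
    for choice in itertools.product(*groups.values()):
        yield frozenset(choice)

I = {('a', 'b'), ('a', 'c'), ('b', 'b'), ('b', 'c')}
results = set(witness_results(I, x_idx=(0,)))
print(len(results))  # -> 4: exactly the instances I1..I4 of Fig. 17.9
```

With x_idx covering all positions, exactly one tuple of J survives, matching the observation about witness_U(J).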
The extension of while(+) with witness is denoted by while(+)+W. Following is a useful example that shows that an arbitrary order can be constructed using the witness operator.
Example 17.4.4 Consider an input instance over some unary relation schema R. The following while+W query defines all possible successor relations on the constants from the input (i.e., each run constructs some ordering of the constants from the input; we use the unnamed perspective):

succ := witness_12(σ_{1≠2}(R × R));
max := π_2(succ); R := R − (π_1(succ) ∪ π_2(succ));
while change do
begin
    succ := succ ∪ witness_12(max × R);
    max := π_2(succ) − π_1(succ);
    R := R − max
end

The result is constructed in a binary relation succ. A unary relation max contains the current maximum element in succ. Some steps of a possible computation on input R = {a, b, c, d} are shown in Fig. 17.10: (a) shows the state before the loop is first entered, (b) the state after the first execution of the loop, and (c) the final state. Note that the output is empty if R contains fewer than two constants. It is of interest to observe that the program uses only the ability of witness to pick an arbitrary tuple from a relation.

       R        succ                          max
(a)    b, d     ⟨c, a⟩                        a
(b)    b        ⟨c, a⟩, ⟨a, d⟩                d
(c)             ⟨c, a⟩, ⟨a, d⟩, ⟨d, b⟩        b

Figure 17.10: Some steps in the computation of an ordering

This query can also be expressed in while⁺+W. (See Exercise 17.31.)
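A run of the program of Example 17.4.4 can be simulated by playing witness with a random choice. This Python sketch (ours, mirroring the program statement by statement) builds one of the possible successor relations:

```python
import random

def build_succ(R):
    """One possible run of the while+W program on the unary relation R."""
    R = set(R)
    if len(R) < 2:
        return set()                            # output empty, as in the example
    # succ := witness_12(sigma_{1 != 2}(R x R))
    pair = random.choice([(x, y) for x in sorted(R) for y in sorted(R) if x != y])
    succ = {pair}
    mx = pair[1]                                # max := pi_2(succ)
    R -= set(pair)                              # R := R - (pi_1(succ) U pi_2(succ))
    while R:                                    # while change do
        nxt = random.choice(sorted(R))
        succ.add((mx, nxt))                     # succ := succ U witness_12(max x R)
        mx = nxt                                # max := pi_2(succ) - pi_1(succ)
        R.discard(nxt)                          # R := R - max
    return succ

s = build_succ({'a', 'b', 'c', 'd'})
print(s)  # e.g. {('c', 'a'), ('a', 'd'), ('d', 'b')}, as in Fig. 17.10
```

Every run yields a chain covering all the constants: a unique minimum, a unique maximum, and one successor per remaining element.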
To continue with the nondeterministic languages, we next consider the language CALC+μ(+). The nondeterminism is again provided by a logical operator called witness⁴ and denoted W. Suppose ϕ(x̄, ȳ) is a formula with free variables x̄, ȳ. Intuitively, Wȳ ϕ(x̄, ȳ) indicates that one witness ȳ_x̄ is chosen for each x̄ satisfying ∃ȳ ϕ(x̄, ȳ). For example, if R consists of the relation I in Fig. 17.9, the formula Wy R(x, y) defines the possible answers I1, I2, I3, I4 in the same figure. [Thus Wy R(x, y) is equivalent to witness_B(R).] More precisely, for each formula ϕ(x̄, ȳ) (where x̄ and ȳ are vectors of the variables that are free in ϕ), Wȳ ϕ(x̄, ȳ) is a formula (where the ȳ remain free) defining the set of relations I such that for some J defined by ϕ: I ⊆ J; and for each x̄ for which ⟨x̄, ȳ⟩ is in J for some ȳ, there exists a unique ȳ_x̄ such that ⟨x̄, ȳ_x̄⟩ is in I.

The extension of CALC+μ(+) with the witness operator is denoted by CALC+μ(+)+W. Following is a useful example that shows that an arbitrary order can be constructed using CALC+μ⁺+W.
Example 17.4.5 Consider the (unary) relation schema R of Example 17.4.4. The following CALC+μ⁺+W query defines, on each instance I of R, all possible successor relations on the constants in I. (The output is empty if I contains fewer than two constants.) The query uses a binary relation schema succ, which is used to construct the successor relation iteratively. The query is μ⁺_succ(ϕ(succ))(x, y), where ϕ = ϕ1 ∨ ϕ2 and

ϕ1(x, y) = ¬∃xy(succ(xy)) ∧ Wxy(R(x) ∧ R(y) ∧ x ≠ y),
ϕ2(x, y) = Wy(R(y) ∧ ¬∃z(succ(yz) ∨ succ(zy))) ∧ ∃z(succ(zx)) ∧ ¬∃z(succ(xz)).

The formula ϕ1 initializes the iteration when succ is empty; ϕ2 adds to succ a tuple ⟨x, y⟩, where y is an arbitrarily chosen element of I(R) not yet in succ and x is the current maximum element in succ.
The ability of while⁺+W and CALC+μ⁺+W to define nondeterministically a successor relation on the constants suggests that the impact of nondeterminism on expressive power is similar to that of order. This is confirmed by the following result.

Theorem 17.4.6 The set of deterministic queries that are expressed by while⁺+W or CALC+μ⁺+W is qptime.

Proof It is easy to verify that each deterministic query expressed by while⁺+W is in qptime. Conversely, let q be a query in qptime. By Theorem 17.4.2, there exists a while⁺ query w that expresses q if a successor relation succ on the constants is given. Then the while⁺+W query expressing q consists of the following:

(i) construct a successor relation succ on the constants, as in Example 17.4.5;
(ii) apply query w to the input instance together with succ.

⁴ The witness operator is related to Hilbert's ε-symbol [Lei69], but its semantics is different. In particular, the ε-symbol does not yield nondeterminism.
An analogous result holds for while+W and CALC+μ+W. Specifically, the set of deterministic queries expressible by these languages is precisely qpspace.

Note that Theorem 17.4.6 does not provide a language that expresses precisely qptime, because nondeterministic queries can also be expressed and it is undecidable if a while⁺+W or CALC+μ⁺+W query defines a deterministic query (Exercise 17.32). Instead the result shows the power of nondeterministic constructs and so points to a trade-off between expressive power and determinism.
Bibliographic Notes
The sequential data complexity of CALC was investigated by Vardi [Var82a], who showed that CALC is included in logspace. The parallel complexity of CALC, specifically the connection with ac⁰, was studied by Immerman [Imm87a]. In [DV91], a database model for parallel computation is defined, and CALC is shown to coincide exactly with its restriction to constant time and polynomial size. This differs from ac⁰ in that the match is precise. Intuitively, this is due to the fact that the model in [DV91] is generic and does not assume an ordered encoding of the input.

The first results on the expressiveness and complexity of fixpoint and while were obtained by Chandra and Harel, Vardi, and Immerman. In [CH80b] it is shown by a direct proof that fixpoint cannot express even. The result is extended to while in [Cha81a]. The fundamental result that fixpoint expresses qptime on ordered instances was obtained independently by Immerman [Imm86] and Vardi [Var82a]. The fact that while on ordered instances expresses qpspace is shown in [Var82a].
Languages expressing complexity classes of queries below qptime are investigated in [Imm87b]. They are based on augmenting CALC with operators providing limited recursion, such as various forms of transitive closure. The classes of queries expressed by the resulting languages on ordered databases include deterministic logspace, denoted logspace, nondeterministic logspace, denoted nlogspace, and symmetric logspace, denoted slogspace.

There has been a long quest for a language expressing precisely qptime on arbitrary (unordered) databases. The problem is formalized in a general setting in [Gur88], where it is also conjectured that no such language exists. The issue is further investigated in [Daw93], where, in particular, it is shown that there exists a language for qptime iff there exists some problem complete in p via an extended kind of first-order reductions. To date, the problem of the existence of a language for qptime remains open.

In the absence of a language for qptime, there have been several proposals to extend the fixpoint queries to capture more of qptime. Recall that queries involving counting (such as even) are not in fixpoint. Therefore it is natural to consider extensions of fixpoint with counting constructs. An early proposal by Chandra [Cha81a] is to add a bounded looping construct of the form "For |R| do", which iterates the body of the loop |R| times. Clearly, this construct allows us to express even. However, it has been shown that bounded looping is not sufficient to yield all of qptime, because tests |R_1| = |R_2| cannot be expressed (see [Cha88]). More recently, extensions of fixpoint with counting constructs have been considered and studied in [CFI89, GO93]. They allow access to the cardinality of relations as well as limited integer manipulation. These languages are more powerful than fixpoint but, as shown in [CFI89], still fall short of expressing all of qptime. Other results of this flavor are proven in [Daw93, Hel92]. They show that extending fixpoint with a finite set of polynomial-time computable constructs of certain forms (generalized quantifiers acting much like oracles) cannot yield a language expressing exactly qptime (see Exercise 17.35 for a simplified version of this result).
The normal form for while was proven in [AV91b, AV94]. It was also shown there, using the normal form, that fixpoint and while are equivalent iff ptime = pspace. The cost of computing without an order is also investigated in [AV91b, AV94]. This is formalized using an alternative model of computation called generic machine (GM). Unlike Turing machines, GMs do not require an ordered encoding of the input and use only the information provided by the input instance. Based on GM, generic complexity classes of queries are defined. For example, gen-ptime and gen-pspace are obtained by taking polynomial time and space restrictions of GM. As a typical result, it is shown that even is not in gen-pspace, which captures the intuition that this query is hard to compute without order. Another more restricted device, also operating without encodings, is the relational machine, also considered in [AV91b, AV94]. There is a close match between complexity classes defined using this device, called relational complexity classes, and various languages. For example, relational polynomial time coincides with fixpoint and relational polynomial space with while. Further connections between languages and relational complexity classes are shown in [AVV92].

Nondeterministic languages and their expressive power are investigated in [ASV90, AV91a, AV91c]. The languages include nondeterministic extensions of CALC+μ⁺ and CALC+μ and of rule-based languages such as datalog¬¬. Strong connections between these languages are shown (see Exercise 17.33). Nondeterministic languages that can express all the qptime queries are exhibited.
A construct related to the witness operator described in this chapter is the choice operator, first presented in [KN88]. This construct has been included in the language LDL, an implementation of datalog¬ [NT89] (see also Chapter 15). Variations of the choice operator, and its connection with stable models of datalog¬ programs, are further studied in [SZ90, GPSZ91]. The expressive power of the choice operator in the context of datalog is investigated in [CGP93] (see Exercise 17.34).
The Ehrenfeucht-Fraïssé games are due to Ehrenfeucht [Ehr61] and Fraïssé [Fra54]. Since their work, extensions of the games have been proposed and related to various languages such as datalog [LM89], fragments of infinitary logic [KV90c], fixpoint queries, and second-order logic [Fag75, AF90, dR87]. In [Imm82, CFI89], games are used to prove lower bounds on the number of variables needed to express certain graph properties. Typically, in the extensions of Ehrenfeucht-Fraïssé games, choosing a constant in an instance is thought of as placing a pebble over that constant (the games are often referred to as pebble games). Like the Ehrenfeucht-Fraïssé games, these are two-player games in which one player attempts to prove that the instances are not the same and the other attempts to prove the contrary by placing the pebbles such that the corresponding subinstances are isomorphic. The games differ in the rules for taking turns among players and instances, the number of pebbles placed in one move, whether the pebbles are colored, etc. In games corresponding to languages with recursion, players have more than one chance for achieving their objective by removing some of the pebbles and restarting the game. Our presentation of Ehrenfeucht-Fraïssé games was inspired by Kolaitis's excellent lecture notes [Kol83].

G[00#01][10#00][10#01][01#01]#[10]

Figure 17.11: Encoding of an instance and tuple
The study of 0-1 laws was initiated by Fagin and Glebskiĭ. The 0-1 law for CALC was proven in [Fag72, Fag76] and independently by Glebskiĭ et al. [GKLT69]. The 0-1 law for fixpoint was shown by Blass, Gurevich, and Kozen [BGK85] and Talanov and Knyazev [TK84]. This was extended to while by Kolaitis and Vardi, who proved further extensions of 0-1 laws for certain fragments of second-order logic [KV87, KV90b] and for infinitary logic with finitely many variables [KV92], both of which subsume while. For instance, 0-1 laws were proven for existential second-order sentences ∃Q_1 . . . ∃Q_k ψ, where the Q_i are relation variables and ψ is a CALC formula in prenex form whose quantifier portion has one of the shapes ∃*∀* or ∃*∀∃*. It is known that arbitrary existential second-order sentences do not have a 0-1 law (see Exercise 17.21). Infinitary logic is an extension of CALC that allows infinite disjunctions and conjunctions. Kolaitis and Vardi proved that the language consisting of infinitary logic sentences that use only finitely many variables has a 0-1 law. Note that this language subsumes while (Exercise 17.22). Another aspect of 0-1 laws that has been studied involves the difficulty of deciding whether a sentence in a language that has a 0-1 law is almost surely true or whether it is almost surely false. For instance, Grandjean proved that the problem is pspace-complete for CALC [Gra83]. The problem was investigated for other languages by Kolaitis and Vardi [KV87]. A comprehensive survey of 0-1 laws is provided by Compton [Com88].

Fagin [Fag93] presents a survey of finite model theory, including 0-1 laws, that inspired our presentation of this topic.
Exercises

Exercise 17.1 Consider the CALC query q on a database schema with one binary relation G:

q = {x | ∃y∃z(G(x, y) ∧ G(z, x))}.

Consider the instance I over G and the tuple u encoded on a Turing input tape, as shown in Fig. 17.11. Describe in detail the computation of the Turing machine M_q, outlined in the proof of Theorem 17.1.1, on this input.
Exercise 17.2 Prove Theorem 17.1.2.

Exercise 17.3 Prove that ≡_r is an equivalence relation on instances.

Exercise 17.4 Outline the crux of Theorem 17.2.2 for the case where

ϕ = ∀x(∃y(R(xy)) ∧ ∃z(R(zx))).

(Note that the quantifier depth of ϕ is 2, so this case involves games with two moves.)

Exercise 17.5 Provide a complete description of the winning strategy outlined in the crux of Proposition 17.2.3. Hint: For the game with r moves, choose cycles of size at least r(2^(r+1) − 1).
Exercise 17.6 Extend Proposition 17.2.3 by showing that connectivity of graphs is not first-order definable even if an order on the constants is provided. More precisely, let R be the database schema consisting of two binary relations G and ≤. Let I_≤ be the family of instances I over R such that I(≤) provides a total order on the constants of I(G). Outline a proof that there is no CALC sentence ϕ such that, for each I ∈ I_≤, ϕ(I) is true iff I(G) is a connected graph.
Exercise 17.7 [Kol83] Use Ehrenfeucht-Fraïssé games to show that the following properties
of graphs are not first-order definable:
(i) the number of vertexes is even;
(ii) the graph is 2-colorable;
(iii) the graph is Eulerian (i.e., there exists a cycle that passes through each edge exactly
once).
Exercise 17.8 Show that the property that the number of elements in a unary relation is even
is not first-order definable even if an order on the constants is provided.
The following two exercises lead to a proof of the converse of Theorem 17.2.2. It states that
instances that are indistinguishable by CALC sentences of quantifier depth r are equivalent
with respect to ≡_r. This is shown by proving that each equivalence class of ≡_r is definable
by a special CALC sentence of quantifier depth r, called the r-type of the equivalence class.
Intuitively, the r-type sentence describes all patterns that can be detected by playing games of
length r on pairs of instances in the equivalence class.
To define the r-types, one first defines formulas with m free variables, called (m, r)-types.
An r-type is defined as a (0, r)-type. The set of (m, r)-types is defined by backward induction
on m as follows.
An (r, r)-type is a satisfiable formula ψ with variables x_1, …, x_r such that ψ is
a conjunction of literals over R and, for each i_1, …, i_k, either R(x_{i_1}, …, x_{i_k}) or
¬R(x_{i_1}, …, x_{i_k}) is in ψ. Suppose the set of (m + 1, r)-types has been defined. Each set S
of (m + 1, r)-types gives rise to one (m, r)-type defined by

    ⋀ {∃x_{m+1} ψ | ψ ∈ S}  ∧  ⋀ {∀x_{m+1} (¬ψ) | ψ ∉ S}.
Exercise 17.9 [Kol83] Let r and m be integers such that 0 ≤ m ≤ r. Prove that
(a) every (m, r)-type is a CALC formula with free variables x_1, …, x_m and quantifier
depth (r − m);
(b) there are only finitely many distinct (m, r)-types; and
(c) for every instance I and sequence a_1, …, a_m of constants in I, there is exactly one
(m, r)-type ψ such that I satisfies ψ(a_1, …, a_m).
Exercise 17.10 [Kol83] Prove that each equivalence class of ≡_r is definable by a CALC
sentence of quantifier depth r. Hint: For a given equivalence class of ≡_r, consider an instance
in the class and the unique r-type satisfied by the instance.
Exercise 17.11 Complete the proof of Theorem 17.3.1; specifically show that
(a) fixpoint ⊆ qptime and while ⊆ qpspace, and
(b) fixpoint is complete in ptime and while is complete in pspace.
Exercise 17.12 In the proof of Proposition 17.3.2, the case of assignments of the form T :=
Q_1 ∪ Q_2 was discussed. Describe the constructions needed for the other algebra operators. Point
out where the assumption that the size of I is greater than N is used.
Exercise 17.13 Prove that the while queries collapse to CALC on unary relation inputs. More
precisely, let R be a database schema consisting of unary relations. Show that for each while
query w on R there exists a CALC query equivalent to it. Hint: Use the same approach as in
the proof of Proposition 17.3.2 to show that there is a constant bound on the length of runs of a
given while program on unary inputs.
Exercise 17.14 Describe how to generalize the proof of Proposition 17.3.2 so that it handles
while queries that have constants. In particular, describe how the notion of hyperplanes needs
to be generalized.
Exercise 17.15 Recall the technique of hyperplanes used in the proof of Proposition 17.3.2.
(a) Let D ⊆ dom be finite. For a relation schema R, the cross-product instance of R over
D is I_D^R = D × · · · × D (arity of R times). The cross-product instance of database
schema R over D is the instance I_D^R, where I_D^R(R) = I_D^R for each R ∈ R. Let P
be a datalog¬ program with no constants, input schema R, and output schema S with
arity k. Prove that there is an N > 0 and a set E_P of equivalence relations over [1, k]
such that for each set D ⊆ dom: if |D| ≥ N then

    P(I_D^R) = ⋃ {H_θ(D) | θ ∈ E_P}.

(b) Prove (a) for datalog¬¬ programs.
(c) Generalize your proofs to permit constants in P.
Exercise 17.16 In the proof of Lemma 17.3.6, prove more formally the bound established there.
Prove that its limit is 0 when n goes to ∞.
Exercise 17.17 Determine whether the following properties of graphs are almost surely true
or whether they are almost surely false.
(a) Existence of a cycle of length three
(b) Connectivity
(c) Being a tree
Exercise 17.18 Prove that there is a finite number of equivalence classes of k-tuples induced
by automorphisms of the Rado graph. Hint: Each class is completely characterized by the
pattern of connection and equality among the coordinates of the k-tuple. To see this, show that
for all tuples u and v satisfying this property, one can construct an automorphism ρ of the Rado
graph such that ρ(u) = v. The automorphism is constructed using the extension axioms, similarly
to the proof of Lemma 17.3.5.
Exercise 17.19 Describe how to generalize the development of 0-1 laws for arbitrary inputs
and for queries involving constants.
Exercise 17.20 Prove or disprove: The properties expressible in fixpoint are exactly the ptime
properties that have a 0-1 law.
Exercise 17.21 The language existential second-order logic, denoted ∃SO, consists of sen-
tences of the form ∃Q_1 … ∃Q_k ϕ, where the Q_i are relations and ϕ is a first-order sentence using the
relations Q_i (among others). Show that ∃SO does not have a 0-1 law. Hint: Exhibit a property
expressible in ∃SO that is neither almost surely true nor almost surely false.
Exercise 17.22 Infinitary logic with finitely many variables, denoted L^ω_∞ω, is an extension of
CALC that allows formulas with infinitely long conjunctions and disjunctions but using only
a finite number of variables. Show that each while query can be expressed in L^ω_∞ω. Hint: Start
with a specific example, such as transitive closure.
Exercise 17.23 The following refer to the proof of Theorem 17.4.2.
(a) Describe a fixpoint query that, given a successor relation succ on constants, con-
structs a 2k-ary successor relation succ_k on k-tuples of constants, in the lexicograph-
ical order induced on k-tuples by succ.
(b) Show that the relation constant_coding can be defined from succ using a fixpoint
query.
(c) Complete the details of the construction of R_M by a fixpoint query.
(d) Describe in detail the CALC formula corresponding to the move of M considered in
the proof of Theorem 17.4.2.
(e) Describe in detail the CALC formula used to perform a phase in the computation of
q_M.
(f) Show where the proof of Theorem 17.4.2 breaks down if it is not assumed that the
input instance is ordered.
Exercise 17.24 Spell out the differences in the proofs of (a) and (b) in Theorem 17.4.2.
Exercise 17.25 Write a fixpoint query that computes the parity query even on ordered data-
bases.
Exercise 17.26 Consider queries of the form
"Does the diameter of G have property P?"
where P is an exptime property of the integers (i.e., a property that can be checked, for integer
n, in time exponential in log n, or polynomial in n). Show that each query as above is a fixpoint
query.
Exercise 17.27 [Gur] This exercise shows that there is a query expressible in CALC in the
presence of order that is not expressible in CALC without order. Let R = {D, S}, where D is
unary and S is binary. Consider an instance I of R. Suppose the second column of I(S) contains
only constants from I(D). Then one can view each constant s in the first column of I(S) as
denoting a subset of I(D), namely {x | S(s, x)}. Call an instance I of R good if for each subset
of I(D), there exists a constant representing it. In other words, for each subset T of I(D), there
exists a constant s such that

    T = {x | S(s, x)}.

Consider the query q defined by q(I) = true iff I is a good input and |I(D)| is even.
(a) Show that q is not expressible by CALC.
(b) Show that q is expressible on instances extended with an order relation ≤ on the
constants.
(c) Note that in (b), an order ≤ is used instead of the usual successor relation on constants.
Explain the difficulty of proving (b) if a successor relation is used instead of ≤.
Hint: For (a), use Ehrenfeucht-Fraïssé games. Consider (b). To check that the input is good,
check that (1) all singleton subsets of I(D) are represented, and (2) if T_1 and T_2 are represented,
so is T_1 ∪ T_2. To check evenness of |I(D)| on good inputs, define first, from ≤, a successor
relation succ_D on the constants in I(D); then check that there exists a subset T of I(D) consisting
of the even constants according to succ_D and that the last element in succ_D is in T.
Exercise 17.28 (Expression complexity [Var82a])
(a) Show that the expression complexity of CALC is within pspace. That is, consider a
fixed instance I and tuple u, and a TM M_{I,u} depending on I and u that, given as input
some standard encoding of a query ϕ in CALC, decides if u ∈ ϕ(I). Show that there
is such a TM M_{I,u} whose complexity is within pspace with respect to |enc(ϕ)|, when
ϕ ranges over CALC.
(b) Prove that, in terms of expression complexity, CALC is complete in pspace. Hint:
Use a reduction to quantified propositional calculus (see Chapter 2 and [GJ79]).
(c) Consider the quantifier-free queries in CALC. Show that the expression
complexity of this fragment of CALC is within logspace.
Exercise 17.29 Show that
(a) Wx(WyR(x, y)) is not equivalent⁵ to WxyR(x, y);
(b) Wx(WyR(x, y)) is not equivalent to Wy(WxR(x, y)).
Exercise 17.30 Write a CALC+μ⁺+W formula defining the query even.
Exercise 17.31 Express the query of Example 17.4.4 in while⁺+W.
Exercise 17.32 [ASV90] Show that it is undecidable whether a given CALC+μ⁺+W formula
defines a deterministic query. Hint: Use the undecidability of satisfiability of CALC sentences.
Exercise 17.33 [AV91a, AV91c] As seen, the witness operator can be used to obtain nonde-
terministic versions of while^(+) and CALC+μ^(+). One can obtain nondeterministic versions of
datalog^(¬) as follows. The syntax is the same, except that heads of rules may contain several
literals, and equality may be used in bodies of rules. The rules of the program are fired one rule
at a time and one instantiation at a time. The nondeterminism is due to the choice of rule and
instantiation used in each firing. The languages thus obtained are denoted N-datalog^(¬).
(a) Prove that N-datalog¬ is equivalent to CALC+μ+W and while+W and expresses
all nondeterministic queries computable in polynomial space.⁶
(b) Show that N-datalog¬ cannot compute the query P − π_A(Q), where Q is of sort AB
and P is of sort A.
⁵ Two formulas are equivalent iff they define the same set of relations for each given instance.
⁶ This includes qpspace, the deterministic queries computable in polynomial space.
(c) Let N-datalog¬∀ be the language obtained by extending N-datalog¬ with universal
quantification in bodies of rules. For example, the program

    answer(x) ← ∀y[P(x), ¬Q(x, y)]

computes the query P − π_A(Q). Prove that N-datalog¬∀ is equivalent to
CALC+μ⁺+W and while⁺+W and expresses all nondeterministic queries com-
putable in polynomial time.
(d) Prove that N-datalog¬ and N-datalog¬∀ are equivalent on ordered databases.
Exercise 17.34 (Dynamic choice operator [CGP93]) The following extension of datalog=
with a variation of the choice operator (see Bibliographic Notes) is introduced in [CGP93].
Datalog= programs are extended by allowing atoms of the form choice(X, Y) in bodies of rules,
where X and Y are disjoint sets of variables occurring in regular atoms of the rule. Several
choice atoms can appear in one rule. The language obtained is called datalog=+choice. The
semantics is the following. The choice atoms render the immediate consequence operator of
a datalog=+choice program P nondeterministic. In each application of T_P, a subset of the
applicable valuations is chosen so that for each rule containing an occurrence of choice(X, Y), the
functional dependency X → Y holds. That is, one instantiation of the Y variables is chosen
for each instantiation of the X variables. Moreover, the nondeterministic choices operated at
each application of T_P for a given occurrence of a choice atom extend the choices made in
previous applications of T_P for that atom. (Thus choice has a more global nature than the
witness operator.) Although negation is not used in datalog=+choice, it can be simulated. The
following datalog=+choice program computes in P̄ the complement of a nonempty relation P
with respect to a universal relation T of the same arity [CGP93]:

    TAG(X, 0) ← P(X)
    TAG(X, 1) ← T(X), COMP(Y, 0)
    COMP(X, I) ← TAG(X, I), choice(X, I)
    P̄(X) ← COMP(X, 1)

The role of choice in the preceding program is simple. When first applied, it associates with
each X in P the tag I = 0. At the second application, it chooses a tag of 0 or 1 for all tuples in
T. However, tuples in P have already been tagged by 0 in the previous application of choice,
so the tuples tagged by 1 are precisely those in the complement.
(a) Exhibit a datalog=+choice program that, given as input a unary relation P, nondeter-
ministically defines a successor relation on the constants in P.
(b) Show that every N-datalog¬ query is expressible in datalog=+choice (see Exer-
cise 17.33).
(c) Prove that datalog=+choice expresses exactly the nondeterministic queries com-
putable in polynomial time.
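The complement program of this exercise can be traced in ordinary code. The sketch below is ours, not from the book: the function name is ad hoc, and dict.setdefault plays the role of choice extending the choices made in earlier applications of T_P.

```python
def complement_via_choice(P, T):
    """Replay of the datalog=+choice complement program. Candidate tags:
    0 for P-tuples (rule TAG(X,0) <- P(X)), 1 for all T-tuples (rule
    TAG(X,1) <- T(X), COMP(Y,0)). The choice atom keeps one tag per
    tuple, extending choices already made, so P-tuples stay tagged 0
    and exactly the tuples of T - P end up tagged 1."""
    comp = {x: 0 for x in P}      # first application: P-tuples choose tag 0
    for x in T:
        comp.setdefault(x, 1)     # later choices must extend earlier ones
    return {x for x, tag in comp.items() if tag == 1}   # P-bar
```

As in the program, the outcome is deterministic here: the earlier choices on P-tuples leave tag 1 as the only option for the remaining tuples of T.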
Exercise 17.35 [Daw93, Hel92] As shown in this chapter, the fixpoint queries fall short of
expressing all of qptime. For example, they cannot express even. A natural idea is to enrich
the fixpoint queries with additional constructs in the hope of obtaining a language expressing
exactly qptime. This exercise explores one (unsuccessful) possibility, which consists of adding
some finite set of ptime oracles to the fixpoint queries.
A property of instances over some database schema R is a subset of inst(R) closed under
isomorphisms of dom. Let Q be a finite set of properties, each of which can be checked in ptime.
Let while⁺(Q) be the extension of while⁺ allowing loops of the form while q(R_1, …, R_n) do,
where q ∈ Q and R_1, …, R_n are relation variables compatible with the schema of q. Intuitively,
this allows us to ask whether R_1, …, R_n have property q. Clearly, while⁺(Q) generally has
more power than while⁺. For example, the query even is trivially expressible in while⁺({even}).
One might wonder if there is a choice of Q such that while⁺(Q) expresses exactly qptime.
(a) Show that for every finite set Q of ptime properties, there exists a single ptime
property q such that while⁺(Q) ⊑ while⁺({q}).
(b) Let while⁺_1({q}) denote all while⁺({q}) programs whose input is one unary relation.
Let ptime[k] denote the set of properties whose time complexity is bounded by some
polynomial of degree k. Show that, for each ptime property q, the properties of unary
relations definable in while⁺_1({q}) are in ptime[k] for some k depending only on
q. Hint: Show that for each while⁺_1({q}) program there exist N > 0 and properties
q_1, …, q_m of the integers, where each q_i(n) can be checked in time polynomial in n, such
that the program is equivalent to a Boolean combination of tests n ≥ j, n = j, q_i(n),
where n is the size of the input, 0 ≤ j ≤ N, and 1 ≤ i ≤ m. Use the hyperplane
technique developed in the proof of Proposition 17.3.2.
(c) Prove that there is no finite set Q of ptime properties such that while⁺(Q) expresses
qptime. Hint: Use (a), (b), and the fact that ptime[k] ⊊ ptime by the time hierarchy
theorem.
18 Highly Expressive Languages

Alice: I still cannot check if I have an even number of shoes.
Riccardo: This will not stand!
Sergio: We now provide languages that do just that.
Vittorio: They can also express any query you can think of.
In previous chapters, we studied a number of powerful query languages, such as the
fixpoint and while queries. Nonetheless, there are queries that these languages cannot
express. As pointed out in the introduction to Chapter 14, fixpoint lies within ptime, and
while within pspace. The complexity bound implies that there are queries, of complexity
higher than pspace, that are not expressible in the languages considered so far. Moreover,
we showed simple, specific queries that are not in fixpoint or while, such as the query even.
In this chapter, we exhibit several powerful languages that have no complexity bound
on the queries they can express. We build up toward languages that are complete (i.e.,
they express all queries). Recall that the notion of query was made formal in Chapter 16.
Basically, a query is a mapping from instances of a fixed input schema to instances of a
fixed answer schema that is computable and generic. Recall that, as a consequence, answers
to queries contain only constants from the input (except possibly for some fixed, finite set
of new constants).
We begin with a language that extends while by providing arbitrary computing power
outside the database; this yields a language denoted while_N, in the style of embedded
relational languages like C+SQL. This would seem to provide the simplest cure for the
computational limitations of the languages exhibited so far. There is no complexity bound
on the queries while_N can express. Surprisingly, we show that, nonetheless, while_N is not
complete. In fact, while_N cannot express certain simple queries, including the infamous
query even. Intuitively, while_N is not complete because the external computation has lim-
ited interaction with the database. Complete languages are obtained by overcoming this
limitation. Specifically, we present two ways to do this: (1) by extending while with the
ability to create new values in the course of the computation, and (2) by extending while
with an untyped version of relational algebra that allows relations of variable arity.
For conciseness, in this chapter we do not pursue the simultaneous development of
languages in the three paradigms: algebraic, logic, and deductive. Instead we choose to
focus on the algebraic paradigm. However, analogous languages could be developed in the
other paradigms (see Exercise 18.22).
18.1 While_N: while with Arithmetic
The language while is the most powerful of the languages considered so far. We have seen
that it lies within pspace. Thus it does not have full computing power. Clearly, a complete
language must provide such power. In this section, we consider an extension of while that
does provide full computing power outside the database. Nonetheless, we will show that
the resulting language is not complete; it is important to understand why this is so before
considering more exotic ways of augmenting languages.
The extension of while that we consider allows us to perform, outside the database,
arbitrary computations on the integers. Specifically, the following are added to the while
language:
(i) integer variables, denoted i, j, k, …;
(ii) the integer constant 0 (zero);
(iii) instructions of the form increment(i), decrement(i), where i is an integer variable;
(iv) conditional statements of the form if i = 0 then s else s′, where i is an integer
variable and s, s′ are statements in the language;
(v) loops of the form while i > 0 do s, where i is an integer variable and s a program.
The semantics is straightforward. All integer variables are initialized to zero. The
semantics of the while change construct is not affected by the integer variables (i.e., the
loop is executed as long as there is a change in the content of a relational variable).
The resulting language is denoted by while_N.
Because the language while_N can simulate an arbitrary number of counters, it is
computationally complete on the integers (see Chapter 2). More precisely, the following
holds:

Fact For every computable function f(i_1, …, i_k) on the integers, there exists a while_N pro-
gram w_f that computes f(i_1, …, i_k) for every integer initialization of i_1, …, i_k. In partic-
ular, w_f stops on input i_1, …, i_k iff f is defined on (i_1, …, i_k).
In view of this fact, one can use in while_N programs, whenever convenient, statements
of the form n := f(i_1, …, i_k), where n, i_1, …, i_k are integer variables and f is a com-
putable function on the integers. This is used in the following example.
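The counter discipline of items (iii)–(v) can be mimicked in ordinary code. The following sketch (ours, not from the book) computes addition using only increment, decrement, and a while i > 0 loop, the way a while_N program would.

```python
def add_with_counters(i, j):
    """Addition in the style of while_N counters: only increment,
    decrement, and a 'while j > 0' loop are used."""
    while j > 0:
        j -= 1   # decrement(j)
        i += 1   # increment(i)
    return i
```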
Example 18.1.1 Let G be a binary relation with attributes AB. Consider the query on
the graph G:

    square(G) = ∅ if the diameter of G is a perfect square, and G otherwise.

The following while_N program computes square(G) (the output relation is answer; it is
assumed that G ≠ ∅):

    i := 0; T := G;
    while change do
    begin
        T := T ∪ π_AB(δ_{B→C}(T) ⋈ δ_{A→C}(G));
        increment(i);
    end;
    j := f(i);
    answer := G;
    if j > 0 then answer := ∅.

where f is the function such that f(x) = 1 if x is a perfect square and f(x) = 0 otherwise.
(Clearly, f is computable.) Note that, after execution of the while loop, the value of i is the
diameter of G.
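The effect of this program can be mirrored outside the language. The sketch below is ours (function name and step-counting convention included): it iterates relational composition to a fixpoint, counting the productive steps, then applies the perfect-square test f.

```python
import math

def square_query(G):
    """Sketch of Example 18.1.1: iterate T := T ∪ (composition of T
    with G) until no change, counting the steps that add tuples; then
    return the empty set if the count is a perfect square, and G
    otherwise. The count stands in for the program's variable i."""
    T = set(G)
    i = 0
    while True:
        step = T | {(a, c) for (a, b) in T for (b2, c) in G if b == b2}
        if step == T:
            break
        T = step
        i += 1                                 # increment(i)
    j = 1 if math.isqrt(i) ** 2 == i else 0    # f(i): perfect-square test
    return set() if j > 0 else set(G)
```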
It turns out that the preceding program can be expressed in while alone, and even in
fixpoint, without the need for arithmetic (see Exercise 18.2). However, this is clearly not
the case in general. For instance, consider the while_N program obtained by replacing f in
the preceding program by some arbitrary computable function.
Despite its considerable power, while_N cannot express certain simple queries, such
as even. There are several ways to show this, just as we did for while. Recall that, in
Chapter 17, it was shown that while has a 0-1 law. It turns out that while_N also has a
0-1 law, although proving this is beyond the scope of this book. Thus there are many
queries, including even, that while_N cannot express. One can also give a direct proof
that even cannot be expressed by while_N by straightforwardly extending the hyperplane
technique used in the direct proof that while cannot express even (Proposition 17.3.2; see
Exercise 18.3).
As in the case of other languages we considered, order has a significant impact on the
expressiveness of while_N. Indeed, while_N is complete on ordered databases.

Theorem 18.1.2 The language while_N expresses all queries on ordered databases.
Crux Let q be a query on an ordered database with schema R. Let I denote an input
instance over R and α the enumeration of the constants in I given by the relation succ. By the
definition of query, there exists a Turing machine M_q that, given as input enc_α(I), produces
as output enc_α(q(I)) (whenever q is defined on I). Because while_N manipulates integers,
we wish to encode I as an integer rather than as a Turing machine tape. This can be done easily
because each word over some finite alphabet with k symbols (with some arbitrary order
among the symbols) can be viewed as an integer in base k. For any instance J, let enc_N(J)
denote the integer encoding of J obtained by viewing enc_α(J) as an integer. It is easy to see
that there is a computable function f_q on the integers such that f_q(enc_N(I)) = enc_N(q(I))
whenever q is defined on I. Furthermore, because while_N can express any computable
function over the integers (see the preceding Fact), there exists a while_N program w_{f_q}
that computes f_q. It is left to show that while_N can compute enc_N(I) and can decode q(I)
from enc_N(q(I)). Recall that, in the proof of Theorem 17.4.2, it was shown that while can
compute a relational representation of enc_α(I) and, conversely, that it can decode q(I) from
the representation of enc_α(q(I)). A slight modification of that construction can be used to
    S          R
    a b        a b α
    a c        a c β
    c a        c a γ

Figure 18.1: An application of new
show that while_N can compute the desired integer encoding and decoding. Thus a while_N
program computes q in three phases:
1. compute enc_N(I);
2. compute f_q(enc_N(I)) = enc_N(q(I));
3. compute q(I) from enc_N(q(I)).
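The base-k view of words as integers used in the crux can be sketched as follows. This is our illustration, not the book's construction; using bijective base-k digits (digits 1..k) is an implementation choice that keeps the encoding invertible even for words beginning with the first symbol.

```python
def word_to_int(word, alphabet):
    """View a word over a k-symbol alphabet (with a fixed order on the
    symbols) as an integer, in bijective base k: distinct words get
    distinct integers, so the word can be recovered."""
    k = len(alphabet)
    digit = {s: i + 1 for i, s in enumerate(alphabet)}
    n = 0
    for symbol in word:
        n = n * k + digit[symbol]
    return n

def int_to_word(n, alphabet):
    """Inverse decoding: recover the word from its integer code."""
    k = len(alphabet)
    out = []
    while n > 0:
        n, d = divmod(n - 1, k)   # extract one bijective base-k digit
        out.append(alphabet[d])
    return "".join(reversed(out))
```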
18.2 While_new: while with New Values
Recall that, as discussed in the introduction to Chapter 14, while cannot go beyond pspace
because (1) throughout the computation it uses only values from the input, and (2) it uses
relations of fixed arity. The addition of integers as in while_N is one way to break the space
barrier. Another is to relax (1) or (2). Relaxing (1) is done by allowing the creation of new
values not present in the input. Relaxing (2) yields an extension of while with untyped
algebra (i.e., an algebra of relations with variable arities). In this and the next section, we
describe two languages obtained by relaxing (1) and (2) and prove their completeness.
We first present the extension of while denoted while_new, which allows the creation of
new values throughout the computation. The language while is modified as follows:
(i) There is a new instruction R := new(S), where R and S are relational variables
and arity(R) = arity(S) + 1;
(ii) The looping construct is of the form while R do s, where R is a relational variable.
The semantics of (i) is as follows: Relation R is obtained by extending each tuple of S
by one distinct new value from dom not occurring in the input, the current state, or in the
program. For example, if the value of S is the relation in Fig. 18.1, then R is of the form
shown in that figure. The values α, β, γ are distinct new values¹ in dom.
The semantics of while R do s is that statement s is executed while R is nonempty.
We could have used while change instead because each looping construct can simulate the
other. However, in our context of value invention, it is practical to have the more direct
control on loops provided by while R.
¹ If arity(S) = 0, then R is unary and contains one new value if S = {⟨⟩} and is empty if S = ∅. This
allows the creation of values one by one. One might wonder if this kind of one-by-one value creation
is sufficient. The answer is negative. The language with one-by-one value creation is equivalent to
while_N (see Exercise 18.6).
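The semantics of R := new(S) can be modeled directly. In this sketch of ours, fresh values are simulated with a counter, a representation we chose so that the invented values are visibly disjoint from ordinary database values.

```python
import itertools

_fresh = itertools.count()  # source of values occurring nowhere else

def new(S):
    """Model of R := new(S): extend each tuple of S by one distinct
    fresh value, here represented as a ('new', n) pair assumed
    disjoint from all input and program values."""
    return {t + (("new", next(_fresh)),) for t in S}
```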
Note that the new construct is, strictly speaking, nondeterministic. The new values
are arbitrary, so several outcomes are possible depending on the choice of values.
However, the different outcomes differ only in the choice of new values. This is formalized
by the following:

Lemma 18.2.1 Let w be a while_new program with input schema R, and let R be a relation
variable in w. Let I be an instance over R, and let J, J′ be two possible values of R at the
same point during the execution of w on I. Then there exists an isomorphism ρ from J to
J′ that is the identity on the constants occurring in I or w.

The proof of Lemma 18.2.1 is done by a straightforward induction on the number of
steps in a partial execution of w on I (Exercise 18.7).
Recall that our definition of query requires that the answer be unique (i.e., the query
must be deterministic). Therefore we must consider only while_new programs whose an-
swer never contains values introduced by the new statements. Such programs are called
well-behaved while_new programs. It is possible to give a syntactic restriction on while_new
programs that guarantees good behavior, can be checked, and yields a class of programs
equivalent to all well-behaved while_new programs (see Exercises 18.8 and 18.9).
We wish to show that well-behaved while_new programs can express all queries. First
we have to make sure that well-behaved while_new programs do in fact express queries. This
is shown next.

Lemma 18.2.2 Each well-behaved while_new program with input schema R and output
schema answer expresses a query from inst(R) to inst(answer).
Proof We need to show that well-behaved while_new programs define mappings from
inst(R) to inst(answer) (i.e., they are deterministic with respect to the final answer). Com-
putability and genericity are straightforward. Let w be a well-behaved while_new program
with input schema R and output answer. Let J, J′ be two possible values of answer after
the execution of w on an instance I of R. By Lemma 18.2.1, there exists an isomorphism
ρ from J to J′ that is the identity on values in I or w. Because w is well behaved, answer
contains only values from I or w. Thus ρ is the identity and J = J′.
Note that although well-behaved programs are deterministic with respect to their final
answer, they are not deterministic with respect to intermediate results, which may contain new
values.
We next show that well-behaved while_new programs express all queries. The basic idea
is simple. Recall that while_N is complete on ordered databases. That is, for each query q,
there is a while_N program w that, given an enumeration of the input values in a relation
succ, computes q. If, given an input, we were able to construct such an enumeration,
we could then simulate while_N to compute any desired query. Because of genericity, we
cannot hope to construct one such enumeration. However, constructing all enumerations
of values in the input would not violate genericity. Both while_new and the language with
variable arities considered in the next section can compute arbitrary queries precisely in
this fashion: They first compute all possible enumerations of the input values and then
simulate a while_N program on the ordered database corresponding to each enumeration.
These computations yield the same result for all enumerations because queries are generic,
so the result is independent of the particular enumeration used to encode the database (see
Chapter 16).
Before proving the result, we show how we can construct all the possible enumerations
of the elements in the active domain of the input.
Representation

Let I be an instance over R. Let Success be the set of all binary relations defining a
successor relation over adom(I). We can represent all the enumerations in Success with
a 3-ary relation:

    succ⋆ = ⋃_{J ∈ Success} J × {α_J},

where {α_J | J ∈ Success} is a set of distinct new values. [Each such α_J is used to denote
a particular enumeration of adom(I).] For example, Fig. 18.2 represents an instance I and
the corresponding succ⋆.
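Outside the language, this representation is easy to materialize. The sketch below is ours (the helper name is ad hoc): it enumerates the permutations of the active domain and pairs the successor pairs of each with a distinct marker standing in for the invented value α_J.

```python
from itertools import permutations

def succ_star(adom):
    """Build the 3-ary relation succ*: for each enumeration
    (permutation) of the active domain, its successor pairs tagged
    with a marker unique to that enumeration."""
    rel = set()
    for i, perm in enumerate(permutations(sorted(adom)), start=1):
        alpha = ("alpha", i)   # stands in for the invented value
        rel |= {(perm[j], perm[j + 1], alpha) for j in range(len(perm) - 1)}
    return rel
```

With an active domain of n elements, the result carries n! markers, each with n − 1 successor pairs, matching the figure.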
Computation of succ⋆

We now argue that there exists a while_new program w that, given I, computes succ⋆. Clearly,
there is a while_new program that, given I, produces a unary relation D containing all values
in I. Following is a while_new program w_succ⋆ that computes the relation succ⋆ starting from
D (using a query q explained next):
    I          succ⋆
    a b        a b α_1      (α_1 denotes the enumeration a, b, c)
    a c        b c α_1
    c a        a c α_2      (α_2: a, c, b)
               c b α_2
               b a α_3      (α_3: b, a, c)
               a c α_3
               b c α_4      (α_4: b, c, a)
               c a α_4
               c a α_5      (α_5: c, a, b)
               a b α_5
               c b α_6      (α_6: c, b, a)
               b a α_6

Figure 18.2: An example of succ⋆
    succ⋆ := new(σ_{1≠2}(D × D));
    Δ := q;
    while Δ do
    begin
        S := new(Δ);
        succ⋆ := { ⟨x, y, α′⟩ | ∃x′∃y′∃α [S(x′, y′, α, α′) ∧ succ⋆(x, y, α)]
                               ∨ ∃α [S(x, y, α, α′)] };
        Δ := q;
    end
The intuition is that we construct in turn enumerations of subsets of size 2, 3, etc., until
we obtain the enumerations of D. (To simplify, we assume that D contains more than two
elements.) An enumeration of a subset of D consists of a successor (binary) relation over
that subset. As mentioned earlier, the program associates a marking (invented value) with
each such successor relation.
During the computation, succ⋆ contains the successor relations of the subsets of size i
computed so far. A triple ⟨a, b, α⟩ indicates that b follows a in the enumeration denoted α.
The first instruction computes the enumerations of subsets of size 2 (i.e., the distinct
pairs of elements of D) and marks them with new values. At each iteration, Δ indicates
for each enumeration the elements that are missing in this enumeration. More precisely,
relation Δ must contain the following set of triples:

    { ⟨a, b, α⟩ | b does not occur in the successor relation corresponding to α,
                  and the last element of α is a }.

The relational query q computes the set Δ given a particular relation succ⋆. If Δ is not empty,
for each α a new value α′ is created for each element missing in α (i.e., the enumeration α
is extended in all possible ways with each of the missing elements). This yields as many
new enumerations from each α as missing elements.
This is iterated until Δ becomes empty, at which point all enumerations are complete.
Note that if D contains n elements, the final result succ⋆ contains n! enumerations.
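The extension strategy of w_succ⋆ — grow all partial enumerations by one missing element per round until Δ is empty — can be mirrored as follows. The representation is ours: an enumeration is kept as a tuple rather than as a marked successor relation, and, as in the text, at least two elements are assumed.

```python
def all_enumerations(D):
    """Replay of w_succ*'s loop: start from all size-2 enumerations,
    compute Delta (enumeration, missing-element) pairs, and extend
    every enumeration with every missing element until none remain."""
    D = set(D)
    partial = {(x, y) for x in D for y in D if x != y}  # size-2 enumerations
    while True:
        delta = {(e, b) for e in partial for b in D - set(e)}  # relation Δ
        if not delta:
            return partial
        partial = {e + (b,) for (e, b) in delta}  # extend in all possible ways
```

For a domain of n ≥ 2 elements, the loop terminates with all n! complete enumerations, matching the count noted above.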
Theorem 18.2.3 The well-behaved while
new
programs express all queries.
Crux Let q be a query from inst(R) to inst(answer). Assume the query is generic (i.e.,
C-generic with C = ∅). The proof is easily modified for the case when the query is
C-generic with C ≠ ∅. It is sufficient to observe that

(*) for each while_N program, there exists an equivalent well-behaved while_new program.

Suppose that (*) holds. Let w_succ be the while_new program computing succ from a given
I over R. By Theorem 18.1.2 and (*), there exists a while_new program w(succ) that computes
q using a successor relation succ. We construct another while_new program w′(succ)
that computes q given I and succ. Intuitively, w′(succ) is run in parallel for all possible
enumerations succ_α provided by succ. All computations produce the same result and are
placed in answer. The computations for different enumerations in succ are identified by
the marking α of the enumeration in succ. To this end, each relation R of arity k in w(succ)
is replaced by a relation R′ of arity k + 1. The extended database relations are first initialized
by statements of the form R′ := R × π_3(succ). Next the instructions of w(succ) are
modified as follows:
• R := {u | φ(u)} becomes R′ := {⟨u, α⟩ | ∃y∃z succ(y, z, α) ∧ φ′(u, α)}, where
φ′(u, α) is obtained from φ(u) by replacing each atom S(v) by S′(v, α);
• while change do remains unchanged.
Finally the instruction answer := π_{1..n}(answer′), where n = arity(answer), is appended at
the end of the program. The following can be shown by induction on the steps of a partial
execution of w′(succ) on I (Exercise 18.10):

(**) At each point in the computation of w′(succ) on I, the set of tuples in relation R′
marked with α coincides with the value of R at the same point in the computation
when w(succ) is run on I and succ is the successor relation corresponding to α.

In particular, at the end of the computation of w′(succ) on I,

answer′ = ⋃_α w(succ_α)(I) × {⟨α⟩},

where α ranges over the enumeration markers. Because w(succ_α)(I) = q(I) for each α, it follows
that answer contains q(I) at the end of the computation. Thus query q is computable
by a well-behaved while_new program.
Thus it remains to show (*). Integer variables are easily simulated as follows. An
integer variable i is represented by a binary relation variable R_i. If i contains the integer n, then
R_i contains a successor relation for n + 1 distinct new values:

{⟨α_j, α_{j+1}⟩ | 0 ≤ j < n}.

(The integer 0 is represented by an empty relation and the integer 1 by a singleton
{⟨α_0, α_1⟩}.) It is easy to find a while_new program for increment and decrement of i.
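This representation is easy to simulate concretely. In the Python sketch below (our names; invented values again modeled as fresh integers), increment appends a fresh value to the end of the chain, and decrement drops the chain's final edge:

```python
from itertools import count

fresh = count()  # a source of distinct invented values

def int_to_succ(n):
    """Represent integer n by a successor relation over n+1 new values:
    {<a_j, a_{j+1}> | 0 <= j < n}; 0 is the empty relation."""
    vals = [next(fresh) for _ in range(n + 1)]
    return {(vals[j], vals[j + 1]) for j in range(n)}

def succ_to_int(rel):
    return len(rel)  # a chain with k edges encodes the integer k

def increment(rel):
    if not rel:
        return int_to_succ(1)
    firsts = {a for a, _ in rel}
    seconds = {b for _, b in rel}
    (last,) = seconds - firsts  # the unique element with no successor
    return rel | {(last, next(fresh))}

def decrement(rel):
    if not rel:
        return rel  # 0 stays 0
    firsts = {a for a, _ in rel}
    return {(a, b) for a, b in rel if b in firsts}  # drop the final edge

assert succ_to_int(increment(int_to_succ(3))) == 4
```

Only the length of the chain matters, which is why increment and decrement need only touch its last edge.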
We showed that well-behaved while_new programs are complete with respect to our
definition of query. Recall that while_new programs that are not well behaved can compute
a different kind of query that we excluded deliberately, which contains new values in the
answer. It turns out, however, that such queries arise naturally in the context of object-oriented
databases, where new object identifiers appear in query results (see Chapter 21).
This requires extending our definition of query. In particular, the query is nondeterministic
but, as discussed earlier, the different answers differ only in the particular choice of new
values. This leads to the following extended notion of query:
[figure content not reproduced]
Figure 18.3: A query not expressible in while_new

Definition 18.2.4 A determinate query is a relation Q from inst(R) to inst(answer)
such that
• Q is computable;
• if ⟨I, J⟩ ∈ Q and ρ is a one-to-one mapping on constants, then ⟨ρ(I), ρ(J)⟩ ∈ Q;
and
• if ⟨I, J⟩ ∈ Q and ⟨I, J′⟩ ∈ Q, then there exists an isomorphism from J to J′ that is
the identity on the constants in I.

A language is determinate complete if it expresses only determinate queries and all determinate
queries.

Let Q be a determinate query. If ⟨I, J⟩ ∈ Q and ρ is a one-to-one mapping on constants
leaving I fixed, then ⟨I, ρ(J)⟩ ∈ Q.
The question arises whether while_new remains complete with respect to this extended
notion of query. Surprisingly, the answer is negative. Each while_new query is
determinate. However, we exhibit a simple determinate query that while_new cannot express.
Let q be the query with input schema R = {S}, where S is unary, and output G,
where G is binary. Let q be defined as follows: For each input I over S, if I = {a, b},
then

q(I) = {⟨α_0, α_1⟩, ⟨α_1, α_2⟩, ⟨α_2, α_3⟩, ⟨α_3, α_0⟩, ⟨α_0, b⟩, ⟨α_1, a⟩, ⟨α_2, b⟩, ⟨α_3, a⟩}

for some new elements α_0, α_1, α_2, α_3, and q(I) = ∅ otherwise (Fig. 18.3).
Theorem 18.2.5 The query q is not expressible in while_new.
Proof The proof is by contradiction. Suppose w is a while_new program expressing q.
Consider the sequence of steps in the execution of w on an input I = {a, b}. We can
assume without loss of generality that no invented value is ever deleted from the database
(otherwise modify the program to keep all invented values in some new unary relation).
For each invented value occurring in the computation, we define a trace that records
how the value was invented and uniquely identifies it. More precisely, trace(α) is defined
inductively as follows. If α is a constant, then trace(α) = α. Suppose α is a new
value created at step i with a new statement associating it with tuple ⟨x_1, . . . , x_k⟩. Then
trace(α) = ⟨i, trace(x_1), . . . , trace(x_k)⟩. Clearly, one can extend trace to tuples and relations
in the natural manner. It is easily shown (Exercise 18.11) by induction on the number
of steps in a partial execution of w on I that

(†) trace(α) = trace(β) iff α = β;

(‡) for each instance J computed during the execution of w on input I, trace(J) is closed
under each automorphism σ of I. In particular, for each α occurring in J, σ(trace(α))
equals trace(β) for some β also occurring in J.
Consider now trace(q(I)) and the automorphism σ of I [and therefore of trace(q(I))]
defined by σ(a) = b, σ(b) = a. Note that σ² = id (the identity) and σ = σ⁻¹. Consider
σ(trace(α_0)). Because ⟨α_0, b⟩ ∈ q(I), it follows that ⟨trace(α_0), b⟩ ∈ trace(q(I)). Because
σ(b) = a, it further follows that ⟨σ(trace(α_0)), a⟩ ∈ trace(q(I)), so σ(trace(α_0)) is
either trace(α_1) or trace(α_3). Suppose σ(trace(α_0)) = trace(α_1) (the other case is similar).
From the fact that σ is an automorphism of trace(q(I)) it follows that σ(trace(α_3)) =
trace(α_0), σ(trace(α_2)) = trace(α_3), and σ(trace(α_1)) = trace(α_2). Consider now σ².
First, because σ² = id, σ²(trace(α_i)) = trace(α_i), 0 ≤ i ≤ 3. On the other hand,
σ²(trace(α_0)) = σ(σ(trace(α_0))) = σ(trace(α_1)) = trace(α_2). This is a contradiction.
Hence q cannot be computed by while_new.
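The combinatorial heart of this argument — that no automorphism extending the swap of a and b can square to the identity on the 4-cycle — can be checked by brute force. The sketch below (our encoding: invented values as the integers 0–3) searches all candidate permutations:

```python
from itertools import permutations

# The 4-cycle alpha_0 -> alpha_1 -> alpha_2 -> alpha_3 -> alpha_0, with
# alpha_0, alpha_2 linked to b and alpha_1, alpha_3 linked to a.
q_of_I = {(0, 1), (1, 2), (2, 3), (3, 0),
          (0, 'b'), (1, 'a'), (2, 'b'), (3, 'a')}

swap = {'a': 'b', 'b': 'a'}

def is_automorphism(perm):
    """Check that extending the constant swap a <-> b by perm on the
    invented values maps q_of_I onto itself."""
    f = dict(perm)
    f.update(swap)
    return {(f[x], f[y]) for x, y in q_of_I} == q_of_I

good = [p for p in permutations(range(4))
        if is_automorphism({i: p[i] for i in range(4)})]
# Every automorphism extending the swap rotates the cycle by an odd step,
# so applying it twice always moves alpha_0: none of them is an involution.
print(all(p[p[0]] != 0 for p in good))
```

Exactly two permutations survive the check (the rotations by one and by three positions), and neither squares to the identity, which is the contradiction used in the proof.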
The preceding example shows that the presence of new values in the answer raises
interesting questions with regard to completeness. There exist languages that express all
queries with invented values in answers (see Exercise 18.14 for a complex construct that
leads to a determinate-complete language). Value invention is common in object-oriented
languages, in the form of object creation constructs (see Chapter 21).
18.3 While_uty: An Untyped Extension of while
We briefly describe in this section an alternative complete language obtained by relaxing
the fixed-arity requirement of the languages encountered so far. This relaxation is done
using an untyped version of relational algebra instead of the familiar typed version. We will
obtain a language allowing us to construct relations of variable, data-dependent arity in the
course of the computation. Although strictly speaking they are not needed, we also allow
integer variables and integer manipulation, as in while_N. Intuitively, it is easy to see why
this yields a complete language. Variable arities allow us to construct all enumerations of
constants in the input, represented by sufficiently long tuples containing all constants. The
ability to construct the enumerations and manipulate integers yields a complete language.
The first step in defining the untyped version of while is to define an untyped version
of relational algebra. This means that operations must be defined so that they work on
relations of arbitrary, unknown arity. Expressions in the untyped algebra are built from
relation variables and constants and can also use integer variables and constants. Let i, j
be integer variables, and for each integer k, let ∅_k denote the empty relation of arity k.
Untyped algebra expressions are built up using the following operations:
• If e, e′ are expressions, then e ∪ e′ and e − e′ are expressions; if arity(e) = arity(e′)
the semantics is the usual; otherwise the result is ∅_0.
• If e is an expression, then ¬e is an expression; the complement is with respect to the
active domain (not including the integers).
• If e, f are expressions, then e × f is an expression; the semantics is the usual cross-product
semantics.
• If e is an expression, then σ_{i=j}(e) is an expression, where i, j are integer variables
or constants; if arity(e) ≥ max{i, j} the semantics is the usual; otherwise the result
is ∅_0.
• If e is an expression, then π_{i..j}(e) is an expression, where i, j are integer variables or
constants; if i ≤ j and arity(e) ≥ max{i, j}, this projects e on columns i through j;
otherwise the result is ∅_{|j−i|}.
• If e is an expression, then ex_{ij}(e) is an expression; if arity(e) ≥ max{i, j}, this
exchanges in each tuple in the result of e the i and j coordinates; otherwise the
result is ∅_0.
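These conventions are easy to prototype. In the sketch below (our encoding, not the book's: a relation is a pair of an arity and a frozenset of tuples, and the complement takes the active domain as an explicit argument), each operation falls back to ∅_0 exactly as specified above:

```python
from itertools import product as iproduct

EMPTY0 = (0, frozenset())  # the empty relation of arity 0

def union(e1, e2):
    (k1, r1), (k2, r2) = e1, e2
    return (k1, r1 | r2) if k1 == k2 else EMPTY0

def difference(e1, e2):
    (k1, r1), (k2, r2) = e1, e2
    return (k1, r1 - r2) if k1 == k2 else EMPTY0

def complement(e, dom):
    k, r = e
    return (k, frozenset(t for t in iproduct(sorted(dom), repeat=k)
                         if t not in r))

def cross(e1, e2):
    (k1, r1), (k2, r2) = e1, e2
    return (k1 + k2, frozenset(s + t for s in r1 for t in r2))

def select_eq(i, j, e):
    k, r = e
    if k < max(i, j):
        return EMPTY0
    return (k, frozenset(t for t in r if t[i - 1] == t[j - 1]))

def project(i, j, e):
    k, r = e
    if i <= j and k >= max(i, j):
        return (j - i + 1, frozenset(t[i - 1:j] for t in r))
    return (abs(j - i), frozenset())

def exchange(i, j, e):
    k, r = e
    if k < max(i, j):
        return EMPTY0
    def ex(t):
        u = list(t)
        u[i - 1], u[j - 1] = u[j - 1], u[i - 1]
        return tuple(u)
    return (k, frozenset(ex(t) for t in r))

R = (2, frozenset({('a', 'b'), ('b', 'b')}))
print(union(R, (3, frozenset())))  # arity mismatch -> (0, frozenset())
```

Carrying the arity explicitly is what lets an operation detect a mismatch and degrade to ∅_0 instead of raising an error.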
We may also consider an untyped version of tuple relational calculus (see Exercise 18.15).
We can now define while_uty programs. They are concatenations of statements of the
form
• i := j, where i is an integer variable and j an integer variable or constant.
• increment(i), decrement(i), where i is an integer variable.
• while i > 0 do t, where i is an integer variable and t a program.
• R := e, where R is a relational variable and e an untyped algebra expression; the
semantics here is that R is assigned the content and arity of e.
• while R do t, where R is a relational variable and t a program; the semantics is that
the body of the loop is repeated as long as R is nonempty.
All relational variables that are not database relations are initialized to ∅_0; integer variables
are initialized to 0.
Example 18.3.1 Following is a while_uty program that computes the arity of a nonempty
relation R in the integer variable n:

S_0 := {⟨⟩}; S_1 := S_0 ∪ R; S_2 := ¬S_1;
while S_2 do
begin
    n := n + 1;
    S_0 := S_0 × D;
    S_1 := S_0 ∪ R;
    S_2 := ¬S_1;
end

where D abbreviates an algebra expression computing the active domain [e.g., π_{1..1}(R) ∪
¬π_{1..1}(R)]. The program tries out increasing arities for R starting from 0. Recall that
whenever R and S_0 have different arities, the result of S_0 ∪ R is ∅_0. This allows us to detect
when the appropriate arity has been found.
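The detection trick can be replayed in Python. The sketch below uses our names and assumes the reading of the example in which the loop tests the complement of S_0 ∪ R (so the loop exits exactly when S_0 and R have the same arity, since S_0 is then all of D^k):

```python
from itertools import product

def uty_union(e1, e2):
    (k1, r1), (k2, r2) = e1, e2
    return (k1, r1 | r2) if k1 == k2 else (0, frozenset())

def uty_complement(e, dom):
    k, r = e
    return (k, frozenset(t for t in product(sorted(dom), repeat=k)
                         if t not in r))

def arity_of(R, dom):
    """Mirror of Example 18.3.1: S_0 ranges over D^0, D^1, ...; the loop
    stops at the first arity where the complement of S_0 u R is empty,
    which happens exactly when S_0 and R have the same arity."""
    n = 0
    S0 = (0, frozenset({()}))  # {<>}: the full relation of arity 0
    S2 = uty_complement(uty_union(S0, R), dom)
    while S2[1]:  # while S_2 is nonempty
        n += 1
        S0 = (n, frozenset(t + (d,) for t in S0[1] for d in dom))
        S2 = uty_complement(uty_union(S0, R), dom)
    return n

print(arity_of((3, frozenset({('a', 'b', 'a')})), {'a', 'b'}))  # 3
```

On an arity mismatch, S_0 ∪ R collapses to ∅_0 and its complement is the nonempty 0-ary relation {⟨⟩}, so the loop keeps going; on a match, S_0 ∪ R = D^k and its complement is empty.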
Remark 18.3.2 There is a much simpler set of constructs that yields the same power as
while_uty. In general, programs are much harder to write in the resulting language, called QL,
than in while_uty. One can show that the set of constructs of QL is minimal. The language QL
is described next; it does not use integer variables. QL expressions are built from relational
variables and constant relations as follows (D denotes the active domain):
• equal is an expression denoting {⟨a, a⟩ | a ∈ D}.
• e ∪ e′ and ¬e are defined as for while_uty; the complement is with respect to the active
domain.
• If e is an expression, then e↓ is an expression; this projects out the last coordinate
of the result of e (and is ∅_0 if the arity is already zero).
• If e is an expression, then e↑ is an expression; this produces the cross-product of e
with D.
• If e is an expression, then e↔ is an expression; if arity(e) ≥ 2, then this exchanges
the last two coordinates in each tuple in the result of e. Otherwise the answer is ∅_0.
Programs are built by concatenations of assignment statements (R := e) and while statements
(while R do s). The semantics of the while is that the loop is iterated as long as R is
nonempty.

We leave it to the reader to check that QL is equivalent to while_uty (Exercise 18.17).
We briefly describe the simulation of integers by QL. Let Z denote the constant 0-ary
relation {⟨⟩}. We can have Z represent the integer 0 and Z followed by n applications of
the expansion operator represent the integer n. Then increment(n) is simulated by one
application of expansion, and decrement(n) is simulated by one application of projection.
A test of the form x = 0 becomes the test that the projection of e is empty, where e is the
untyped algebra expression representing the value of x. Thus we can simulate arbitrary
computations on the integers.
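A quick Python sketch of this integer encoding (our names; a relation is again an arity paired with a tuple set): expansion crosses with D and bumps the arity, projection drops the last coordinate, and the represented integer is simply the arity:

```python
def up(e, dom):   # one application of expansion: cross product with D
    k, r = e
    return (k + 1, frozenset(t + (d,) for t in r for d in dom))

def down(e):      # one application of projection: drop the last coordinate
    k, r = e
    if k == 0:
        return (0, frozenset())
    return (k - 1, frozenset(t[:-1] for t in r))

def to_int(e):
    return e[0]   # the represented integer is just the arity

Z = (0, frozenset({()}))      # the 0-ary relation {<>} represents 0
D = {'a', 'b'}
three = up(up(up(Z, D), D), D)
print(to_int(three), to_int(down(three)))  # 3 2
```

Note that projecting the representation of 0 yields the empty 0-ary relation, which is what makes the zero test expressible.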
Recall that our definition of query requires that both the input and output be instances
over fixed schemas. On the other hand, in while_uty relation arities are variable, so in general
the arity of the answer is data dependent. This is a problem analogous to the one we
encountered with while_new, which generally produces new values in the result. As in the
case of while_new, we can define semantic and syntactic restrictions on while_uty programs
that guarantee that the programs compute queries. Call a while_uty program well behaved if
its answer is always of the same arity regardless of the input. Unfortunately, it can be shown
that it is undecidable if a while_uty program is well behaved (Exercise 18.19). However, there
is a simple syntactic condition that guarantees good behavior and covers all well-behaved
programs. A while_uty program with answer relation answer is syntactically well behaved if
the last instruction of the program is of the form answer := π_{m..n}(R), where m, n are integer
constants. Clearly, syntactic good behavior guarantees good behavior and can be checked.
Furthermore, it is obvious that each well-behaved while_uty program is equivalent to some
syntactically well-behaved program (Exercise 18.19).
We now prove the completeness of well-behaved while_uty programs.

Theorem 18.3.3 The well-behaved while_uty programs express all queries.

Crux It is easily verified that all well-behaved while_uty programs define queries. The proof
that every query can be expressed by a well-behaved while_uty program is similar to the
proof of Theorem 18.2.3. Let q be a query with input schema R. We proceed in two steps:
First construct all orderings of constants from the input. Next simulate the while_N program
computing q on the ordered database corresponding to each ordering. The main difference
with while_new lies in how the orderings are computed. In while_uty, we use the arbitrary arity
to construct a relation R_< containing sufficiently long tuples, each of which provides an
enumeration of all constants. This is done by the following while_uty program, where D
stands for an algebra expression computing the active domain:

R_< := ∅_0;
C := D; arityC := 1;
while C do
begin
    R_< := C;
    C := C × D; increment(arityC);
    for i := 1 to (arityC − 1) do
        C := C − σ_{i=arityC}(C);
end

Clearly, the looping construct for i := 1 to . . . can be easily simulated. If the size of D
is n, the result of the program is the set of n-tuples with distinct entries in adom(D). Note
that each such tuple t in R_< provides a complete enumeration of the constants in D. Next
one can easily construct a while_uty program that constructs, for each such tuple t in R_<, the
corresponding successor relation. More precisely, one can construct

succ = ⋃_{t ∈ R_<} succ_t × {t},

where succ_t = {⟨t(i), t(i + 1)⟩ | 1 ≤ i < n} (see Fig. 18.2 and Exercise 18.20).
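The intended contents of R_< and succ can be computed directly for a small domain. In the Python sketch below (our names), the enumeration tuples themselves serve as the markers:

```python
from itertools import permutations

def build_R_less(D):
    """The result of the R_< program: all |D|-tuples with distinct entries,
    i.e. every enumeration of the active domain as one long tuple."""
    return set(permutations(sorted(D)))

def build_succ(R_less):
    """succ as in the crux: for each enumeration tuple t, the pairs
    <t(i), t(i+1)>, tagged with t itself playing the role of the marker."""
    return {(t[i], t[i + 1]) + t
            for t in R_less for i in range(len(t) - 1)}

R_less = build_R_less({'a', 'b', 'c'})
print(len(R_less))              # 3! = 6
print(len(build_succ(R_less)))  # 6 enumerations x 2 successor pairs = 12
```

This also makes concrete why variable arities matter: the marker component of succ is a tuple whose width depends on |D|, something a fixed-arity language cannot build.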
Untyped languages allow us to relax the restriction that the output schema is fixed.
This may have a practical advantage because in some applications it may be necessary to
have the output schema depend on the input data. However, in such cases one would likely
prefer a richer type system rather than no typing at all.

The overall results on the expressiveness and complexity of relational query languages
are summarized in Figs. 18.4 and 18.5. The main classes of queries and their inclusion
structure are represented in Fig. 18.4 (solid arrows indicate strict inclusion; the dotted
arrow indicates strict inclusion if ptime ≠ pspace). Languages expressing each class of
queries are listed in Fig. 18.5, which also contains information on complexity (first without
assumptions, then with the assumption of an order on the database). In Fig. 18.5,
[figure: inclusion diagram of the classes conjunctive queries, positive-existential, datalog, semipositive datalog, stratified datalog, first order, fixpoint, while, and all queries]
Figure 18.4: Main classes of queries
CALC(∧, ∃) denotes the conjunctive calculus and CALC(∧, ∨, ∃) denotes the positive-existential
calculus.
Bibliographic Notes
The first complete language proposed was the language QL of Chandra and Harel [CH80b].
Chandra also considered a language equivalent to while_N, which he called LC [Cha81a].
It was shown that LC cannot compute even. Several other primitives are considered in
[Cha81a] and their power is characterized. The language while_new was defined in [AV90],
where its completeness was also shown.

The languages considered in this chapter can be viewed as formalizing practical languages,
such as C+SQL or O2C, used to develop database applications. These languages
combine standard computation (C) with database computation (SQL in the relational world
or O2 in the object-oriented world). In this direction, several computing devices were defined
in [AV91b], and complexity-theoretic results are obtained using the devices. First
an extension of Turing machines with a relational store, called relational machine, was
shown to be equivalent to while_N. A further extension of relational machines equivalent to
while_new and while_uty, called generic machine, was also defined. In the generic machine,
Class of queries       Languages                               Complexity       Complexity with order
conjunctive            CALC(∧, ∃), SPJR algebra                logspace, ac0    logspace, ac0
positive-existential   CALC(∧, ∨, ∃), SPJUR algebra,           logspace, ac0    logspace, ac0
                       nr-datalog
datalog                datalog                                 monotonic ptime  monotonic ptime
semipositive           semipositive datalog¬ (with min, max)   ptime            = ptime
first order            CALC, ALG, nr-stratified datalog¬       logspace, ac0    logspace, ac0
stratified             stratified datalog¬                     ptime            = ptime
fixpoint               CALC+μ+, while+, datalog¬ (fixpoint     ptime            = ptime
                       and well-founded semantics)
while                  CALC+μ, while, datalog¬¬ (fixpoint      pspace           = pspace
                       semantics)
all queries            while_uty, while_new                    no bound         no bound

Figure 18.5: Languages and complexity
parallelism is used to allow simultaneous computations with all possible successor relations.

Queries with new values in their answers were first considered in [AK89], in the context
of an object-oriented deductive language with object creation, called IQL. The notion
of determinate query [VandBGAG92] is a recasting of the essentially equivalent notion of
db transformation, formulated in [AK89]. In [AK89], the query in Theorem 18.2.5 is also
exhibited, and it is shown that IQL without duplicate elimination cannot express it. Because
IQL is more powerful than while_new, their result implies the result of Theorem 18.2.5. The
issue of completeness of languages with object creation was further investigated in [AP92,
VandBG92, VandBGAG92, VandBP95, DV91, DV93].
Finally it is easy to see that each (determinate) query can be computed in some natural
nondeterministic extension of while_new (e.g., with the witness operator of Chapter 17)
[AV91c]. However, such programs may be nondeterministic, so they do not define only
determinate queries.
Exercises
Exercise 18.1 Let G be a graph. Consider a query "Does the shortest path from a to b in G
have property P?" where G is a graph, P is a recursive property of the integers, and a, b are
two particular vertexes of the graph. Show that such a query can be expressed in while_N.
Exercise 18.2 Prove that the query in Example 18.1.1 can be expressed (a) in while; (b) in
fixpoint.
Exercise 18.3 Sketch a direct proof that even cannot be expressed by while_N by extending the
hyperplane technique used in the proof of Proposition 17.3.2.
Exercise 18.4 [AV94] Consider the language L augmenting while_N by allowing mixing of
integers with data. Specifically, the following instruction is allowed in addition to those of
while_N: R := {⟨i_1, . . . , i_k⟩}, where R is a k-ary relation variable and i_1, . . . , i_k are integer
variables. It is assumed that the domain of input values is disjoint from the integers. Complement
(or negation) is taken with respect to the domain formed by all values in the database or
program, including the integer values present in the database. The well-behaved L programs
are those whose outputs never contain integers. Show that well-behaved L and while_N are
equivalent.
Exercise 18.5 Complete the proof of Theorem 18.1.2.
Exercise 18.6 [AV90] Consider a variation of the language while_new where the R := new(S)
instruction is replaced by the simpler instruction R := new, where R is unary. The semantics
of this instruction is that R is assigned a singleton {α}, where α is a new value. Denote the
new language by while_unary-new.
(a) Show that each query expressible in while_N is also expressible in while_unary-new.
Hint: Use new values to represent integers. Specifically, to represent the integers up
to n, construct a relation succ_int containing a successor relation on n new values. The
value of rank i with respect to succ_int represents integer i.
(b) Show that each query expressible in while_unary-new is also expressible in while_N.
Hint: Again establish a correspondence between new values and integers. Then use
Exercise 18.4.
Exercise 18.7 Prove Lemma 18.2.1.
Exercise 18.8 Prove that it is undecidable if a given while_new program is well behaved.
Exercise 18.9 In this exercise we define a syntactic restriction on while_new programs that
guarantees good behavior. Let w be a while_new program. Without loss of generality, we can
assume that all instructions contain at most one algebraic operation among ∪, −, ×, σ, π. Let
the not-well-behaved set of w, denoted Bad(w), be the smallest set of pairs of the form ⟨R, i⟩,
where R is a relation in w and 1 ≤ i ≤ arity(R), such that
(a) if S := new(R) is an instruction in w and arity(S) = k, then ⟨S, k⟩ ∈ Bad(w);
(b) if S := T ∪ R is in w and ⟨T, i⟩ ∈ Bad(w) or ⟨R, i⟩ ∈ Bad(w), then ⟨S, i⟩ ∈ Bad(w);
(c) if S := T − R is in w and ⟨T, i⟩ ∈ Bad(w), then ⟨S, i⟩ ∈ Bad(w);
(d) if S := T × R is in w and ⟨T, i⟩ ∈ Bad(w), then ⟨S, i⟩ ∈ Bad(w); and if ⟨R, j⟩ ∈
Bad(w), then ⟨S, arity(T) + j⟩ ∈ Bad(w);
(e) if S := π_{i_1...i_k}(T) is in w and ⟨T, i_j⟩ ∈ Bad(w), then ⟨S, j⟩ ∈ Bad(w);
(f) if S := σ_cond(T) is in w and ⟨T, i⟩ ∈ Bad(w), then ⟨S, i⟩ ∈ Bad(w).
A while_new program w is syntactically well behaved if

{⟨answer, i⟩ | 1 ≤ i ≤ arity(answer)} ∩ Bad(w) = ∅.

(a) Outline a procedure to check that a given while_new program is syntactically well
behaved.
(b) Show that each syntactically well-behaved while_new program is well behaved.
(c) Show that for each well-behaved while_new program, there exists an equivalent syntactically
well-behaved while_new program.
Exercise 18.10 Prove (**) in the proof of Theorem 18.2.3.
Exercise 18.11 Prove (†) and (‡) in the proof of Theorem 18.2.5.
Exercise 18.12 Consider the query q exhibited in the proof of Theorem 18.2.5. Let q² be the
query that, on input I = {a, b}, produces as answer two copies of q(I). More precisely, for each
α_i in q(I), let α′_i be a distinct new value. Let q′(I) be obtained from q(I) by replacing each
α_i by α′_i, and let q²(I) = q(I) ∪ q′(I). Prove that q² can be expressed by a while_new program.
Exercise 18.13 [DV91, DV93] Consider the instances I, J of Fig. 18.6. Consider a query q
that, on input of the same pattern as I, returns J (up to an arbitrary choice of the distinct
invented values) and otherwise returns the empty instance. Show that q is not expressible in
while_new.
Exercise 18.14 (Choose [AK89]) Let while_new^choose be obtained by augmenting while_new with the
following (determinate) choose construct. A program w may contain the instruction choose(R)
for some unary relation R. On input I, when choose(R) is applied in a state J, the next state J′
is defined as follows:
(a) if for each a, b in J(R), there is an automorphism of J that is the identity over
adom(I, w) and maps a to b, J′ is obtained from J by eliminating one arbitrary
element in J(R);
(b) otherwise J′ is just J.
Show that while_new^choose is determinate complete.
Exercise 18.15 One may consider an untyped version of tuple relational calculus. Untyped
relations are used just like typed relations, except that terms of the form t(i) are allowed, where
t is a tuple variable and i an integer variable. Equivalence of queries now means that the queries
yield the same answers given the same relations and values for the integer variables. Show that
untyped relational calculus and untyped relational algebra are equivalent.
Exercise 18.16 Show that ex_{ij} is not redundant in the untyped algebra.
[figure content not reproduced]
Figure 18.6: Another query not expressible in while_new
Exercise 18.17 Sketch a proof that while_uty and the language QL described in Remark 18.3.2
are equivalent.
Exercise 18.18 Write a QL program computing the transitive closure of a binary relation.
Exercise 18.19 This exercise concerns well-behaved while_uty programs. Show the following:
(a) It is undecidable whether a given while_uty program is well behaved.
(b) Each syntactically well-behaved while_uty program is well behaved.
(c) For each well-behaved while_uty program, there exists an equivalent syntactically
well-behaved while_uty program.
Exercise 18.20 Write a while_uty program that constructs the relation succ from R_< in the proof
of Theorem 18.3.3.
Exercise 18.21 [AV91b] Prove that any query on a unary relation computed by a while_new
or while_uty program in polynomial space is in FO. (For the purpose of this exercise, define the
space used in a program execution as the maximum number of occurrences of constants in some
instance produced in the execution of the program.) Note that, in particular, even cannot be
computed in polynomial space in these languages.
Exercise 18.22 [AV91a] Consider the following extension of datalog¬ with the ability to
create new values. The rules are of the same form as datalog¬ rules, but with a different
semantics than the active domain semantics used for datalog¬. The new semantics is the
following. When rules are fired, all variables that occur in heads of rules but do not occur
positively in the body are assigned distinct new values, not present in the input database,
program, or any of the other relations in the program. A distinct value is assigned for each
applicable valuation of the variables positively bound in the body in each firing. This is similar
to the new construct in while_new. For example, one firing of the rule

R(x, y, α) ← P(x, y)

has the same effect as the R := new(P) instruction in while_new. The resulting extension of
datalog¬ is denoted datalog¬_new. The well-behaved datalog¬_new programs are those that never
produce new values in the answer. Sketch a proof that well-behaved datalog¬_new programs
express all queries.
PART F FINALE

In this part, we consider four advanced topics. Two of them (incomplete information and
dynamic aspects) have been studied for a while, but for some reason (perhaps their difficulty)
they have never reached the maturity of more established areas such as dependency
theory. Interest in the other two topics (complex values and object databases) is more recent,
and our understanding of them is rudimentary. In all cases, no clear consensus has
yet emerged. Our choice of material, as well as our presentation, is therefore unavoidably
more subjective than in other parts of this book. However, the importance of these
issues for practical systems, as well as the interesting theoretical issues they raise, led us to
incorporate a discussion of them in this book.
In Chapter 19, we address the issue of incomplete information. In many database
applications, the knowledge of the real world is incomplete. It is crucial to be able to handle
such incompleteness and, in particular, to be able to ask queries and perform updates.
Chapter 19 surveys various models of incomplete databases, research directions, and some
results.
In Chapter 20, we present an extension of relations called complex values. These are
obtained from atomic elements using tuple and set constructors. The richer structure allows
us to overcome some limitations of the relational model in describing complex data. We
generalize results obtained for the relational model; in particular, we present a calculus
and an equivalent algebra.
Chapter 21 looks at another way to enrich the relational model by introducing a number
of features borrowed and adapted from object-oriented programming, such as objects,
classes, and inheritance. In particular, objects consist of a structural part (a data repository)
and a behavioral part (pieces of code). Thus the extended framework encompasses
behavior, a notion conspicuously absent from relational databases.
Chapter 22 deals with dynamic aspects. This is one of the less settled areas in databases,
and it raises interesting and difficult questions. We skim through a variety of issues:
languages and semantics for updates; updating views; updating incomplete information;
and active and temporal databases.
A comprehensive vision of the four areas discussed in Part F is lacking. The reader
should therefore keep in mind that some of the material presented is in flux, and its
importance pertains more to the general flavor than the specific results.
19 Incomplete Information

Somebody: What are we doing next?
Alice: Who are we? Who are you?
Somebody: We are you and the authors of the book, and I am one of them. This is an
instance of incomplete information.
Somebody: It's not much, but we can still tell that surely one of us is Alice and that
there are possibly up to three Somebodies speaking.
In the previous parts, we have assumed that a database always records information that
is completely known. Thus a database has consisted of a completely determined finite
instance. In reality, we often must deal with incomplete information. This can be of many
kinds. There can be missing information, as in "John bought a car, but I don't know which
one." In the case of John's car, the information exists but we do not have it. In other
cases, some attributes may be relevant only to some tuples and irrelevant to others. Alice is
single, so the spouse field is irrelevant in her case. Furthermore, some information may be
imprecise: "Heather lives in a large and cheap apartment," where the values of "large" and
"cheap" are fuzzy. Partial information may also arise when we cannot completely rely on the
data because of possible inconsistencies (e.g., resulting from merging data from different
sources).
As soon as we leave the realm of complete databases, most issues become much more
intricate. To deal with the most general case, we need something resembling a theory of
knowledge. In particular, this quickly leads to logics with modalities: Is it certain that John
lives in Paris? Is it possible that he may? What is the probability that he does? Does John
know that Alice is a good student? Does he believe so? etc.

The study of knowledge is a fascinating topic that is outside the scope of this book.
Clearly, there is a trade-off between the expressivity of the model for incomplete information
used and the difficulty of answering queries. From the database perspective, we are
primarily concerned with identifying this trade-off and understanding the limits of what is
feasible in this context. The purpose of this chapter is to make a brief foray into this topic.
We limit ourselves mostly to models and results of a clear database nature. We consider
simple forms of incompleteness represented by null values. The main problem we examine
is how to answer queries on such databases. In relation to this, we argue that for a representation
system of incomplete information to be adequate in the context of a query language,
it must also be capable of representing answers to queries. This leads to a desirable closure
property of representations of incomplete information with respect to query languages. We
observe the increase of complexity resulting from the use of nulls.
We also consider briefly two approaches closer to knowledge bases. The first is based
487
488 Incomplete Information
on the introduction of disjunctions in deductive databases, which also leads to a form of
incompleteness. The second is concerned with the use of modalities. We briefly mention
the language KL, which permits us to talk about knowledge of the world.
19.1 Warm-Up
As we have seen, there are many possible kinds of incomplete information. In this section,
we will focus on databases that partially specify the state of the world. Instead of com-
pletely identifying one state of the world, the database contents are compatible with many
possible worlds. In this spirit, we define an incomplete database simply as a set of possible
worlds (i.e., a set of instances). What is actually stored is a representation of an incomplete
database. Choosing appropriate representations is a central issue.
We provide a mechanism for representing incomplete information using null values.
The basic idea is to allow occurrences of variables in the tuples of the database. The
different possible values of the variables yield the possible worlds.
The simplest model that we consider is the Codd table (introduced by Codd), or table
for short. A table is a relation with constants and variables, in which no variable occurs
twice. More precisely, let U be a finite set of attributes. A table T over U is a finite set of
free tuples over U such that each variable occurs at most once. An example of a table is
given in Fig. 19.1. The figure also illustrates an alternative representation (using @) that is
more visual but that we do not adopt here because it is more difficult to generalize.
The preceding definition easily extends to database schemas. A database table T over
a database schema R is a mapping over R such that for each R in R, T(R) is a table
over sort(R). For this generalization, we assume that the sets of variables appearing in
each table are pairwise disjoint. Relationships between the variables can be stated through
R   A  B  C            R   A  B  C
    0  1  x                0  1  @
    y  z  1                @  @  1
    2  0  v                2  0  @

    Table T            Alternative representation of T

R   A  B  C    R   A  B  C    R   A  B  C    R   A  B  C
    0  1  2        0  1  2        0  1  2        0  1  1
    2  0  1        3  0  1        2  0  1        2  0  1
    2  0  0        2  0  5        2  0  0

      I1             I2             I3             I4

Figure 19.1: A table and examples of corresponding instances
global conditions (which we will introduce in the next section). In this section, we will
focus on single tables, which illustrate well the main issues.
To specify the semantics of a table, we use the notion of valuation (see Chapter 4). The
incomplete database represented by a table is defined as follows:

    rep(T) = {ν(T) | ν a valuation of the variables in T}.

Consider the table T in Fig. 19.1. Then I1, . . . , I4 all belong to rep(T) (i.e., are possible
worlds).
The preceding definition assumes the Closed World Assumption (CWA) (see Chapter 2). This is because each tuple in an instance of rep(T) must be justified by the presence
of a particular free tuple in T. An alternative approach is to use the Open World Assumption
(OWA). In that case, the incomplete database of T would include all instances that contain
an instance of rep(T). In general, the choice of CWA versus OWA does not substantially
affect the results obtained for incomplete databases.
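To make the semantics concrete, here is a small Python sketch. The encoding (variables as strings, constants as integers) and the finite domain are assumptions made only for illustration; actual valuations range over an infinite domain of constants.

```python
from itertools import product

# The table T of Fig. 19.1: variables encoded as strings, constants as ints.
T = [(0, 1, "x"), ("y", "z", 1), (2, 0, "v")]

def variables(table):
    return sorted({e for t in table for e in t if isinstance(e, str)})

def rep(table, domain):
    """Possible worlds of a Codd table under the CWA, restricted to a
    finite domain for illustration (in general the domain is infinite)."""
    worlds = set()
    vs = variables(table)
    for vals in product(domain, repeat=len(vs)):
        nu = dict(zip(vs, vals))  # a valuation of the variables
        worlds.add(frozenset(tuple(nu.get(e, e) for e in t) for t in table))
    return worlds

worlds = rep(T, range(4))
I1 = frozenset({(0, 1, 2), (2, 0, 1), (2, 0, 0)})  # via x=2, y=2, z=0, v=0
I4 = frozenset({(0, 1, 1), (2, 0, 1)})             # x=1, y=2, z=0, v=1: two tuples collapse
print(I1 in worlds, I4 in worlds)  # True True
```

Note how two free tuples may collapse to the same fact under a valuation, so a possible world can have fewer tuples than the table.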
We now have a simple way of representing incomplete information. What next? Nat-
urally, we wish to be able to query the incomplete database. Exactly what this means is
not clear at this point. We next look at this issue and argue that the simple model of tables
has serious shortcomings with respect to queries. This will naturally lead to an extension
of tables that models more complicated situations.
Let us consider what querying an incomplete database might mean. Consider a table T
and a query q. The table T represents a set of possible worlds rep(T). For each I ∈ rep(T),
q would produce an answer q(I). Therefore the set of possible answers of q is q(rep(T)).
This is, again, an incomplete database. The answer to q should be a representation of this
incomplete database.
More generally, consider some particular representation system (e.g., tables). Such a
system involves a language for describing representations and a mapping rep that associates
a set of instances with each representation. Suppose that we are interested in a particular
query language L (e.g., relational algebra). We would always like to be capable of representing the result of a query in the same system. More precisely, for each representation T
and query q, there should exist a computable representation q(T) such that

    rep(q(T)) = q(rep(T)).

In other words, q(T) represents the possible answers of q [i.e., {q(I) | I ∈ rep(T)}].
If some representation system has the property described for a query language L, we
say that it is a strong representation system for L. Clearly, we are particularly interested
in strong representation systems for relational algebra, and we shall develop such a system
later.
Let us now return to tables. Unfortunately, we quickly run into trouble when asking
queries against them, as the following example shows.
Example 19.1.1 Consider T of Fig. 19.1 and the algebraic query σ_{A=3}(T). There is no
table representing the possible answers to this query. A possible answer (e.g., for I1) is
the empty relation, whereas there are nonempty possible answers (e.g., for I2). Suppose
that there exists a table T′ representing the set of possible answers. Either T′ is empty and
σ_{A=3}(I2) is not in rep(T′); or T′ is nonempty and the empty relation is not in rep(T′). This
is a contradiction, so no such T′ can exist.
The problem lies in the weakness of the representation system of tables; we will
consider richer representation systems that lead to a complete representation system for
all of relational algebra. An alternative approach is to be less demanding; we consider this
next and present the notion of weak representation systems.
19.2 Weak Representation Systems
To relax our expectations, we will no longer require that the answer to a query be a
representation of the set of all possible answers. Instead we will ask which are the tuples
that are surely in the answer (i.e., that belong to all possible answers). Similarly, we may
ask for the tuples that are possibly in the answer (i.e., that belong to some possible answer).
We make this more precise next.
For a table T and a query q, the set of sure facts, sure(q, T), is defined as

    sure(q, T) = ∩{q(I) | I ∈ rep(T)}.

Clearly, a tuple is in sure(q, T) iff it is in the answer for every possible world. Observe
that the sure tuples in a table T [i.e., the tuples in every possible world in rep(T)] can be
computed easily by dropping all free tuples with variables. One could similarly define the
set poss(q, T) of possible facts.
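These definitions can be checked by brute force on a small example. The following sketch restricts valuations to a finite domain (an assumption made only so that the sets are computable) and already exhibits the non-compositionality discussed next:

```python
from itertools import product

T = [(0, 1, "x"), ("y", "z", 1), (2, 0, "v")]  # the table of Fig. 19.1

def worlds(table, domain):
    vs = sorted({e for t in table for e in t if isinstance(e, str)})
    for vals in product(domain, repeat=len(vs)):
        nu = dict(zip(vs, vals))
        yield {tuple(nu.get(e, e) for e in t) for t in table}

def sure(q, table, domain):
    """Intersection of q(I) over all possible worlds I (finite-domain sketch)."""
    return set.intersection(*(q(I) for I in worlds(table, domain)))

def poss(q, table, domain):
    """Union of q(I) over all possible worlds I."""
    return set.union(*(q(I) for I in worlds(table, domain)))

select_A2 = lambda I: {t for t in I if t[0] == 2}     # sigma_{A=2}
proj_AB   = lambda I: {t[:2] for t in I}              # pi_{AB}

dom = range(4)
print(sure(select_A2, T, dom))                        # set()
print(sure(lambda I: proj_AB(select_A2(I)), T, dom))  # {(2, 0)}
```

The two printed lines show that no single tuple is sure for the selection alone, yet the projection of the selection has a sure tuple.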
One might be tempted to require of a weak system just the ability to represent the
set of tuples surely in the answer. However, the definition requires some care due to
the following subtlety. Suppose T is the table in Fig. 19.1 and q the query σ_{A=2}(R),
for which sure(q, T) = ∅. Consider now the query q′ = π_{AB}(R) and the query q′ ∘ q.
Clearly, q′(sure(q, T)) = ∅; however, sure(q′, q(rep(T))) = {⟨2, 0⟩}. So q′ ∘ q cannot be
computed by first computing the tuples surely returned by q and then applying q′. This
is rather unpleasant because generally it is desirable that the semantics of queries be
compositional (i.e., the result of q′ ∘ q should be obtained by applying q′ to the result
of q). The conclusion is that the answer to q should provide more information than just
sure(q, T); the incomplete database it specifies should be equivalent to q(rep(T)) with
respect to its ability to compute the sure tuples of any query in the language applied to it.
This notion of equivalence of two incomplete databases is formalized as follows.
If L is a query language, we will say that two incomplete databases I, J are L-equivalent, denoted I ≡_L J, if for each q in L we have

    ∩{q(I) | I ∈ I} = ∩{q(I) | I ∈ J}.

In other words, the two incomplete databases are indistinguishable if all we can ask for is
the set of sure tuples in answers to queries in L.
We can now define weak representation systems. Suppose L is a query language. A
representation system is weak for L if for each representation T of an incomplete database,
and each q in L, there exists a representation denoted q(T) such that

    rep(q(T)) ≡_L q(rep(T)).

With the preceding definition, q(T) does not provide precisely sure(q, T) for tables
T. However, note that sure(q, T) can be obtained at the end simply by eliminating from
the answer all rows with occurrences of variables.
The next result indicates the power of tables as a weak representation system.
Theorem 19.2.1 Tables form a weak representation system for selection-projection (SP)
[i.e., relational algebra limited to selection (involving only equalities and inequalities) and
projection]. If union or join are added, tables no longer form a weak representation system.
Crux It is easy to see that tables form a weak representation system for SP queries.
Selections operate conservatively on tables. For example,

    σ_cond(T) = {t | t ∈ T and cond(ν(t)) holds for all valuations ν of the variables in t}.

Projections operate like classical projections. For example, if T is again the table in
Fig. 19.1, then

    σ_{A=2}(T) = {⟨2, 0, v⟩}

and

    (π_{AB}(R) ∘ σ_{A=2}(R))(T) = {⟨2, 0⟩}.
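A direct implementation of these two operations on Codd tables is immediate. In this sketch a free tuple passes a selection σ_{A=c} only if it carries the constant c in that column, since a variable there might not satisfy the condition under every valuation (the string-vs-integer encoding is illustrative):

```python
# Selection and projection evaluated directly on a Codd table.
# Variables are strings, constants are integers (illustrative encoding).
def select_eq(table, col, const):
    # Keep a free tuple only if the condition holds under ALL valuations,
    # i.e., the column holds exactly the constant.
    return [t for t in table if t[col] == const]

def project(table, cols):
    out = []
    for t in table:
        p = tuple(t[c] for c in cols)
        if p not in out:           # eliminate duplicate free tuples
            out.append(p)
    return out

T = [(0, 1, "x"), ("y", "z", 1), (2, 0, "v")]   # the table of Fig. 19.1
print(select_eq(T, 0, 2))                        # [(2, 0, 'v')]
print(project(select_eq(T, 0, 2), (0, 1)))       # [(2, 0)]
```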
Let us show that tables are no longer a weak representation system if join or union are
added to SP. Consider join first. So the query language is now SPJ. Let T be the table

    R   A   B   C
        a   x   c
        a′  x′  c′

where x, x′ are variables and a, a′, c, c′ are constants.
Let q = π_{AC}(R) ⋈ π_{B}(R). Suppose there is a table W such that

    rep(W) ≡_SPJ q(rep(T)),

and consider the query q′ = π_{AC}(π_{AB}(R) ⋈ π_{BC}(R)). Clearly, sure(q′ ∘ q, T) is

    A   C
    a   c
    a   c′
    a′  c
    a′  c′

Therefore sure(q′, W) must be the same. Because ⟨a′, c⟩ ∈ sure(q′, W), for each valuation ν of the variables in W there must exist tuples u, v ∈ W such that u(A) = a′, v(C) = c,
and ν(u)(B) = ν(v)(B). Let ν be a valuation such that ν(z) ≠ ν(y) for all distinct variables
z, y. If u = v, then u(A) = a′ and u(C) = c, so ⟨a′, c⟩ ∈ sure(π_{AC}(R), W). This cannot be
because, clearly, ⟨a′, c⟩ ∉ sure(π_{AC}(R), q(rep(T))). So u ≠ v. Because ν(u)(B) = ν(v)(B)
and W has no repeated variables, it follows that u(B) and v(B) equal some constant k. But
then ⟨a′, k⟩ ∈ sure(π_{AB}(R), W), which again cannot be because one can easily verify that
sure(π_{AB}(R), q(rep(T))) = ∅.
The proof that tables do not provide a weak representation system for SPU follows
similar lines. Just consider the table T

    R   A  B
        x  b

and the query q outputting two relations: σ_{A=a}(R) and σ_{A≠a}(R). It is easily seen that there
is no pair of tables W1, W2 weakly representing q(rep(T)) with respect to SPU. To see this,
consider the query q′ = π_{B}(W1 ∪ W2). The details are left to the reader (Exercise 19.7).
Naive Tables
The previous result shows the limitations of tables, even as weak representation systems.
As seen from the proof of Theorem 19.2.1, one problem is the lack of repeated variables.
We next consider a first extension of tables that allows repetitions of variables. It will
turn out that this will provide a weak representation system for a large subset of relational
algebra.
A naive table is like a table except that variables may repeat. A naive table is shown
in Fig. 19.2. Naive tables behave beautifully with respect to positive existential queries
(i.e., conjunctive queries augmented with union). Recall that, in terms of the algebra, this
is SPJU.
Theorem 19.2.2 Naive tables form a weak representation system for positive relational
algebra.
Crux Given a naive table T and a positive query q, the evaluation of q(T ) is extremely
simple. The variables are treated as distinct new constants. The standard evaluation of q is
then performed on the table. Note that incomplete information yields no extra cost in this
case. We leave it to the reader to verify that this works.
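The naive evaluation can be sketched in a few lines: variables are simply carried along as if they were fresh constants, so a repeated variable joins with itself but never with a constant or a different variable (string/integer encoding assumed for illustration):

```python
# Naive evaluation of a join on naive tables: variables (strings) are
# treated as distinct fresh constants, so equality is just equality.
def naive_join(r_ab, s_bc):
    """Natural join of R(A,B) and S(B,C) given as naive tables."""
    return [(a, b, c) for (a, b) in r_ab for (b2, c) in s_bc if b == b2]

R = [(0, "x"), (2, "z")]
S = [("x", 1), ("z", 0)]
print(naive_join(R, S))   # [(0, 'x', 1), (2, 'z', 0)]
```

The repeated variables make the join succeed symbolically, something a Codd table (with no repeated variables) could not express.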
R   A  B  C
    0  1  x
    x  z  1
    2  0  v

Figure 19.2: A naive table
Naive tables yield a nice representation system for a rather large language. But the
representation system is weak and the language does not cover all of relational algebra. We
introduce in the next section a representation that is a strong system for relational algebra.
19.3 Conditional Tables
We have seen that Codd tables and naive tables are not rich enough to provide a strong
representation system for relational algebra. To see what is missing, recall that when we
attempt to represent the result of a selection on a table, we run into the problem that
the presence or absence of certain tuples in a possible answer is conditioned by certain
properties of the valuation. To capture this, we extend the representation with conditions
on variables, which yields conditional tables. We will show that such tables form a strong
representation system for relational algebra.
A condition is a conjunction of equality atoms of the form x = y, x = c and of inequality
atoms of the form x ≠ y, x ≠ c, where x and y are variables and c is a constant. Note that
we use only conjunctions of atoms and that the Boolean constants true and false can be
encoded as the atoms x = x and x ≠ x, respectively.
If formula φ is a condition, we say that a valuation ν satisfies φ if its assignment of
constants to variables makes the formula true.
Conditions may be associated with a table T in two ways: (1) A global condition Φ_T is
associated with the entire table T; (2) a local condition φ_t is associated with one tuple t of
table T. A conditional table (c-table for short) is a triple (T, Φ_T, φ), where

    T is a table,
    Φ_T is a global condition, and
    φ is a mapping over T that associates a local condition φ_t with each tuple t of T.
A c-table is shown in Fig. 19.3. If we omit listing a condition, then it is by default the atom
true. Note also that the conditions Φ_T and φ_t for t in T may contain variables not appearing
in T or t, respectively.
For our purposes, the global conditions in c-tables could be distributed at the tuple level
as local conditions. However, they are convenient as shorthand and when dependencies are
considered.
For brevity, we usually refer to a c-table (T, Φ_T, φ) simply as T. A given c-table T
represents a set of instances as follows (again adopting the CWA):
    rep(T) = {I | there is a valuation ν satisfying Φ_T such that relation I
              consists exactly of those facts ν(t) for which ν satisfies φ_t}.

T   A  B                           J1   A  B    J2   A  B    J3   A  B    J4   A  B
    0  1                                0  1         0  1         0  1         0  1
    1  x   (y ≠ 0) ∧ (x ≠ y)            0  0         1  0                      0  3
    y  x   (x ≠ 2) ∧ (y ≠ 2)

global condition: z = z

Figure 19.3: A c-table and some possible instances

Consider the table T in Fig. 19.3. Then J1, J2, J3, J4 are obtained by valuating ⟨x, y, z⟩
to ⟨0, 0, 0⟩, ⟨0, 1, 0⟩, ⟨1, 0, 0⟩, and ⟨3, 0, 0⟩, respectively.
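The semantics just given can be prototyped directly. The sketch below encodes a small c-table resembling the one in Fig. 19.3 as (tuple, condition) pairs, with a condition a list of "="/"!=" atoms; the particular conditions, the omission of the (trivially true) global condition, and the finite domain are assumptions of this illustration:

```python
from itertools import product

# A c-table as (tuple, local condition) pairs; conditions are lists of
# (kind, lhs, rhs) atoms with kind "=" or "!=". Variables are strings.
CT = [
    ((0, 1), []),
    ((1, "x"), [("!=", "y", 0), ("!=", "x", "y")]),
    (("y", "x"), [("!=", "x", 2), ("!=", "y", 2)]),
]

def satisfies(cond, nu):
    val = lambda e: nu.get(e, e) if isinstance(e, str) else e
    return all((val(l) == val(r)) == (k == "=") for k, l, r in cond)

def rep(ctable, domain):
    vs = sorted({e for t, cond in ctable
                 for e in list(t) + [s for _, l, r in cond for s in (l, r)]
                 if isinstance(e, str)})
    worlds = set()
    for vals in product(domain, repeat=len(vs)):
        nu = dict(zip(vs, vals))
        worlds.add(frozenset(tuple(nu.get(e, e) for e in t)
                             for t, cond in ctable if satisfies(cond, nu)))
    return worlds

ws = rep(CT, range(4))
print(frozenset({(0, 1), (0, 0)}) in ws,   # via x=0, y=0
      frozenset({(0, 1), (1, 0)}) in ws,   # via x=0, y=1
      frozenset({(0, 1)}) in ws)           # via x=1, y=0
```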
The next example illustrates the considerable power of the local conditions of c-tables,
including the ability to capture disjunctive information.
Example 19.3.1 Suppose we know that Sally is taking math or computer science (CS)
(but not both) and another course; Alice takes biology if Sally takes math, and math or
physics (but not both) if Sally takes physics. This can be represented by the following
c-table, with global condition (x ≠ math) ∧ (x ≠ CS):

    Student  Course
    Sally    math       (z = 0)
    Sally    CS         (z ≠ 0)
    Sally    x
    Alice    biology    (z = 0)
    Alice    math       (x = physics) ∧ (t = 0)
    Alice    physics    (x = physics) ∧ (t ≠ 0)
Observe that there may be several c-table representations of the same incomplete
database. Two representations T, T′ are said to be equivalent, denoted T ≡ T′, if rep(T) =
rep(T′). Testing for equivalence of c-tables is not a trivial task. Just testing membership
of an instance in rep(T), apparently a simpler task, will be shown to be np-complete. To
test equivalence of two c-tables T and T′, one must show that for each valuation ν of the
variables in T there exists a valuation ν′ for T′ such that ν(T) = ν′(T′), and conversely.
Fortunately, it can be shown that one need only consider valuations into a set C consisting
of the constants occurring in T or T′ together with at most as many additional constants as
there are variables in the two tables (Exercise 19.11). This shows that equivalence of
c-tables is decidable.
In particular, finding a minimal representation can be hard. This may affect the
computation of the result of a query in various ways: The complexity of computing the answer
may depend on the representation of the input; and one may require the result to be somewhat compact (e.g., not to contain tuples with unsatisfiable local conditions).
It turns out that c-tables form a strong representation system for relational algebra.
Theorem 19.3.2 For each c-table T over U and relational algebra query q over U, one
can construct a c-table q(T) such that rep(q(T)) = q(rep(T)).
Crux The proof is straightforward and is left as an exercise (Exercise 19.13). The example in Fig. 19.4 should clarify the construction.¹ For projection, it suffices to project the
columns of the table. Selection is performed by adding new conjuncts to the local conditions. Union is represented by the union of the two tables (after making sure that they use
distinct sets of variables) and choosing the appropriate local conditions. Join and intersection involve considering all pairs of tuples from the two tables. For difference, we consider
a tuple in the first table and add a huge conjunct stating that it does not match any tuple
from the second table (disjunctions may be used as shorthand; they can be simulated using
new variables, as illustrated in Example 19.3.1).
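For selection and join the construction is mechanical. In the sketch below a c-table is a list of (tuple, condition) pairs and a condition a list of ("=", lhs, rhs) atoms; which strings play the role of constants ("a", "b", "c") and which of variables ("x", "y") is a convention of this illustration only:

```python
# Selection and join on c-tables, following the crux: selection adds a
# conjunct to each local condition; join pairs up tuples and conjoins
# their conditions with the equality needed to make them agree.
def c_select_eq(ctable, col, const):
    # sigma_{col=const}: keep every tuple, strengthening its condition.
    return [(t, cond + [("=", t[col], const)]) for t, cond in ctable]

def c_join(r_ab, s_bc):
    # Natural join of R(A,B) and S(B,C).
    return [((a, b, c), cr + cs + [("=", b, b2)])
            for (a, b), cr in r_ab for (b2, c), cs in s_bc]

T3 = [(("a", "y"), [])]          # over AB, as in Fig. 19.4
T1 = [(("x", "c"), [])]          # over BC
J = c_join(T3, T1)
print(J)                          # [(('a', 'y', 'c'), [('=', 'y', 'x')])]
print(c_select_eq(J, 1, "b"))     # adds the conjunct ('=', 'y', 'b')
```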
To conclude this section, we consider (1) languages with recursion, and (2) dependencies. In both cases (and for related reasons) the aforementioned representation system
behaves well. The presentation is by examples, but the formal results can be derived easily.
Languages with Recursion
Consider an incomplete database and a query involving a fixpoint. For instance, consider the
table in Fig. 19.5. The representation tc(T) of the answer to the transitive closure query tc
is also given in the same figure. One can easily verify that

    rep(tc(T)) = tc(rep(T)).

This can be generalized to arbitrary languages with iteration. For example, consider a
c-table T and a relational algebra query q that we want to iterate until a fixpoint is reached.
¹ The representations in the tables can be simplified; they are given in rough form to illustrate the
proof technique.
T1   B  C        T2   B  C                  T3   A  B
     x  c             y  c   (y = b)             a  y
                      z  w

π_B(T2)   B                      T1 ⋈ T3   A  B  C
          y   (y = b)                      a  y  c   (y = x)
          z

σ_{B=b}(T1 ⋈ T3)   A  B  C
                   a  y  c   (y = x) ∧ (y = b)

T1 ∪ T2   B  C                   T1 − T2   B  C
          x  c                             x  c   (y ≠ b) ∧ (w ≠ c)
          y  c   (y = b)                   x  c   (y ≠ b) ∧ (x ≠ z)
          z  w                             x  c   (y = b) ∧ (x ≠ b) ∧ (x ≠ z)
                                           x  c   (y = b) ∧ (x ≠ b) ∧ (w ≠ c)

Figure 19.4: Computing with c-tables
Then we can construct the sequence of c-tables

    q(T), q^2(T), . . . , q^i(T), . . . .

Suppose now that q is a positive query. We are guaranteed to reach a fixpoint on
every single complete instance. However, this does not a priori imply that the sequence
of representations {q^i(T)}_{i>0} converges. Nonetheless, we can show that this is in fact the
case: For some i,

    rep(q^i(T)) = rep(q^{i+1}(T)).
T   A  B        tc(T)   A  B
    a  b                a  b
    x  c                x  c
    c  d                c  d
                        a  c   (x = b)
                        x  d
                        c  c   (x = d)
                        a  d   (x = b)

Figure 19.5: Transitive closure of a table

(See Exercise 19.17.) It can also be shown easily that for such i, every I ∈ rep(q^i(T))
is a fixpoint of q. The proof is by contradiction: Suppose there is I ∈ rep(q^i(T)) such
that q(I) ≠ I, and consider one such I with a minimum number of tuples. Because
rep(q^i(T)) = rep(q^{i+1}(T)), I = q(J) for some J ∈ rep(q^i(T)). Because q is positive,
J ⊆ I; so because q(I) ≠ I, J ⊊ I. This contradicts the minimality of I. So q^i(T) is
indeed the desired answer.
Thus to find the table representing the result, it suffices to compute the sequence
{q^i(T)}_{i>0} and stop when two consecutive tables are equivalent.
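For naive tables (no conditions) the iteration is particularly simple, since equivalence of two consecutive results reduces to equality of sets of free tuples. A sketch with transitive closure, where the variable x is carried along as a fresh constant:

```python
# Iterating a positive query on a naive table until the fixpoint.
# Variables are strings treated as fresh constants (naive evaluation),
# so "two consecutive tables are equivalent" becomes set equality.
def tc_step(edges):
    return edges | {(a, d) for (a, b) in edges for (c, d) in edges if b == c}

T = {("a", "b"), ("x", "c"), ("c", "d")}   # x is a variable
cur = T
while True:
    nxt = tc_step(cur)
    if nxt == cur:                          # fixpoint reached
        break
    cur = nxt
print(sorted(cur))  # [('a', 'b'), ('c', 'd'), ('x', 'c'), ('x', 'd')]
```

Note that this computes only the unconditional (weak) part of the answer; the c-table tc(T) of Fig. 19.5 additionally contains tuples guarded by conditions such as x = b.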
Dependencies
In Part B, we studied dependencies in the context of complete databases. We now reconsider dependencies in the context of incomplete information. Suppose we are given an
incomplete database (i.e., a set I of complete databases) and are told, in addition, that
some set Σ of dependencies is satisfied. The question arises: How should we interpret the
combined information provided by I and by Σ?
The answer depends on our view of the information provided by an incomplete data-
base. Dependencies should add to the information we have. But how do we compare in-
complete databases with respect to information content? One common-sense approach, in
line with our discussion so far, is that more information means reducing further the set
of possible worlds. Thus an incomplete database I (i.e., a set of possible worlds) is more
informative than J iff I ⊆ J. In this spirit, the natural use of dependencies would be to
eliminate from I those possible worlds not satisfying Σ. This makes sense for egds (and
in particular fds).
A different approach may be more natural in the context of tgds. This approach stems
from a relaxation of the CWA that is related to the OWA. Let I be an incomplete database,
and let Σ be a set of dependencies. Recall that tgds imply the presence of certain tuples
based on the presence of other tuples. Suppose that for some I ∈ I, a tuple t implied by a
tgd in Σ is not present in I. Under the relaxation of the CWA, we conclude that t should
be viewed as present in I, even though it is not represented explicitly. More generally, the
chase (see Chapter 8), suitably generalized to operate on instances rather than tableaux,
I1   A  B  C       I2   A  B  C       I3   A  B  C
     a  b  c            e  f  g            a  b  c
     a  b′ c′           e  f′ g′           g  b  h

J1   A  B  C       J2   A  B  C
     a  b  c            e  f  g
     a  b′ c′           e  f′ g′
     a  b  c′           e  f  g′
     a  b′ c            e  f′ g

Figure 19.6: Incomplete databases and dependencies
can be used to complete the instance by adding all missing tuples implied by the tgds in
Σ. (See Exercise 19.18.)
In fact, the chase can be used for both egds and tgds. In contrast to tgds, the effect of
chasing with egds (and, in particular, fds) may be to eliminate possible worlds that violate
them. Note that tuples added by tgds may lead to violations of egds. This suggests that an
incomplete database I with a set Σ of dependencies represents

    {chase(I, Σ) | I ∈ I and the chase of I by Σ succeeds}.

For example, consider Fig. 19.6, which shows the incomplete database I = {I1, I2, I3}.
Under this perspective, the incorporation of the dependencies Σ = {A →→ B, B → A} in
this incomplete database leads to J = {J1, J2}.
Suppose now that the incomplete database I is represented as a c-table T. Can the
effect of a set Σ of full dependencies on T be represented by another c-table T′? The
answer is yes, and T′ is obtained by extending the chase to c-tables in the straightforward
way. For example, a table T1 and its completion T2 by Σ = {A →→ B, C → D} are given
in Fig. 19.7. The reader might want to check that

    chase_Σ(rep(T1)) = rep(T2).
T1   A  B  C  D        T2   A  B  C  D
     a  b  c  d             a  b  c  d
     x  e  y  g             x  e  y  g
     a  b  c  z             a  b  y  g   (x = a)
                            a  e  c  d   (x = a)

Figure 19.7: c-tables and dependencies
19.4 The Complexity of Nulls
Conditional tables may appear to be a minor variation from the original model of complete
relational databases. However, we see next that the use of nulls easily leads to intractability.
This painfully highlights the trade-off between modeling power and resources.
We consider some basic computational questions about incomplete information data-
bases. Perhaps the simplest question is the possibility problem: Given a set of possible
worlds (specied, for instance, by a c-table) and a set of tuples, is there a possible world
where these tuples are all true? A second question is the certainty problem: Given a set
of possible worlds and a set of tuples, are these tuples all true in every possible world?
Natural variations of these problems involve queries: Is a given set of tuples possibly (or
certainly) in the answer to query q?
Consider a (c-)table T, a query q, a relation I, and a tuple t. Some typical questions
include the following:

(Membership) Is I a possible world for T [i.e., I ∈ rep(T)]?
(Possibility) Is t possible [i.e., ∃I ∈ rep(T) (t ∈ I)]?
(Certainty) Is t certain [i.e., ∀I ∈ rep(T) (t ∈ I)]?
(q-Membership) Is I a possible answer for q and T [i.e., I ∈ q(rep(T))]?
(q-Possibility) Is t possibly in the answer [i.e., ∃I ∈ rep(T) (t ∈ q(I))]?
(q-Certainty) Is t certainly in the answer [i.e., ∀I ∈ rep(T) (t ∈ q(I))]?

Finally we may consider the following generalizations of the q-membership problem:

(q-Containment) Is T contained in q(T′) [i.e., rep(T) ⊆ q(rep(T′))]?
(q, q′-Containment) Is q(T) contained in q′(T) [i.e., rep(q(T)) ⊆ rep(q′(T))]?
The crucial difference between complete and incomplete information is the large number of possible valuations in the latter case. Because of the finite number of variables in a
set of c-tables, only a finite number of valuations are nonisomorphic (see Exercise 19.10).
However, the number of such valuations may grow exponentially in the input size. By simple reasoning about all valuations and by guessing particular valuations, we obtain some
easy upper bounds. For a query q that can be evaluated in polynomial time on complete
databases, deciding whether I ∈ q(rep(T)), or whether I is a set of possible answers, can
be answered in np; checking whether q(rep(T)) = {I}, or if I is a set of certain tuples, is
in co-np.
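The np upper bound for membership, for instance, comes from guessing a single valuation. A deterministic sketch simply enumerates them all, which is exponential in the number of variables (finite domain assumed for illustration):

```python
from itertools import product

def is_possible_world(instance, table, domain):
    """Membership test I in rep(T) for a Codd table, by exhausting all
    valuations -- exponentially many in the number of variables."""
    vs = sorted({e for t in table for e in t if isinstance(e, str)})
    target = set(instance)
    for vals in product(domain, repeat=len(vs)):
        nu = dict(zip(vs, vals))
        if {tuple(nu.get(e, e) for e in t) for t in table} == target:
            return True
    return False

T = [(0, 1, "x"), ("y", "z", 1), (2, 0, "v")]
print(is_possible_world({(0, 1, 2), (3, 0, 1), (2, 0, 5)}, T, range(6)))  # True
print(is_possible_world({(1, 1, 2), (3, 0, 1), (2, 0, 5)}, T, range(6)))  # False
```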
To illustrate such complexity results, we demonstrate one lower bound concerning the
q-membership problem for (Codd) tables.
Proposition 19.4.1 There exists a positive existential query q such that checking, given
a table T and a complete instance I, whether I ∈ q(rep(T)) is np-complete.
Proof The proof is by reduction of graph 3-colorability. For simplicity, we use a query
mapping a two-relation database into another two-relation database. (An easy modification
of the proof shows that the result also holds for databases with one relation. In particular,
increase the arity of the largest relation, and use constants in the extra column to encode
several relations into this one.)
We will use (1) an input schema R with two relations R, S of arity 5 and 2, respectively; (2) an output schema R′ with two relations R′, S′ of arity 3 and 1, respectively; and
(3) a positive existential query q from R to R′. The query q [returning, on each input I over
R, two relations q1(I) and q2(I) over R′ and S′] is defined as follows:

    q1 = {⟨x, z, z′⟩ | ∃y([∃v∃w(R(x, y, v, w, z) ∨ R(v, w, x, y, z))]
                       ∧ [∃v∃w(R(x, y, v, w, z′) ∨ R(v, w, x, y, z′))])}
    q2 = {⟨z⟩ | ∃x∃y∃v∃w(R(x, y, v, w, z) ∧ S(y, w))}.
For each input G = (V, E) to the graph 3-colorability problem, we construct a table
T over the input schema R and an instance I′ over the output schema R′, such that G is
3-colorable iff I′ ∈ q(rep(T)).
Without loss of generality, assume that G has no self-loops and that E is a binary
relation, where we list each edge once with an arbitrary orientation.
Let V = {a_i | i ∈ [1..n]} and E = {(b_j, c_j) | j ∈ [1..m]}. Let {x_j | j ∈ [1..m]} and
{y_j | j ∈ [1..m]} be two disjoint sets of distinct variables. Then T and I′ are constructed
as follows:
(a) T(R) = {t_j | j ∈ [1..m]}, where t_j is the tuple ⟨b_j, x_j, c_j, y_j, j⟩;
(b) T(S) = {⟨i, j⟩ | i, j ∈ {1, 2, 3}, i ≠ j};
(c) I′(R′) = {⟨a, j, k⟩ | a ∈ {b_j, c_j} ∩ {b_k, c_k}, where each (b, c) pair is an edge in
    E}; and
(d) I′(S′) = {⟨j⟩ | j ∈ [1..m]}.
Intuitively, for each tuple in I(R), the second column contains the color of the vertex in
the first column, and the fourth column contains the color of the vertex in the third column.
The edges are numbered in the fifth column. The role of query q2 is to check whether this
provides an assignment of the three colors {1, 2, 3} to vertexes such that the colors of the
endpoints of each edge are distinct. Indeed, q2 returns the edges z for which the colors
y, w of the endpoints are distinct colors among {1, 2, 3}. So if q2(I) = I′(S′), then all edges have
color assignments among {1, 2, 3} to their endpoints. Next, query q1 checks whether a vertex is
assigned the same color consistently in all edges where it occurs. It returns the triples ⟨x, z, z′⟩
where x is a vertex, z and z′ are edges in which x occurs as an endpoint, and x has the same color
assignment y in both z and z′. So if q1(I) = I′(R′), it follows that the color assignment
is consistent everywhere for all vertexes.
For example, consider the graph G given in Fig. 19.8; the corresponding I′ and T
are exhibited in Fig. 19.9. Suppose that f is a 3-coloring of G. Consider the valuation ν
defined by ν(x_j) = f(b_j) and ν(y_j) = f(c_j) for all j. It is easily seen that I′ = q(ν(T)).
Moreover, it is straightforward to show that G is 3-colorable iff I′ is in q(rep(T)).
(1, 2)   (2, 3)   (3, 4)   (4, 1)   (3, 1)

Figure 19.8: Graph G (edges listed with an arbitrary orientation)
T(R)                   T(S)     I′(R′)     I′(S′)
1  x1  2  y1  1        1 2      1 1 1      1
2  x2  3  y2  2        1 3      1 1 4      2
3  x3  4  y3  3        2 1      1 1 5      3
4  x4  1  y4  4        2 3      1 4 1      4
3  x5  1  y5  5        3 1      1 4 4      5
                       3 2      1 4 5
                                2 1 1
                                2 1 2
                                2 2 1
                                2 2 2
                                ...
                                4 3 3
                                4 3 4
                                4 4 3
                                4 4 4

Figure 19.9: Encoding for the reduction of 3-colorability
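The construction of items (a)-(d) is easy to mechanize. The following sketch builds T and I′ from an oriented edge list; the variable names "x1", "y1", . . . are strings, matching the proof's conventions:

```python
# Building the table T and the instance I' of the reduction from a
# graph given as an oriented edge list (items (a)-(d) of the proof).
def encode(edges):
    m = len(edges)
    T_R = [(b, "x%d" % j, c, "y%d" % j, j)
           for j, (b, c) in enumerate(edges, start=1)]
    T_S = [(i, j) for i in (1, 2, 3) for j in (1, 2, 3) if i != j]
    Ip_R = [(a, j, k)
            for j, e in enumerate(edges, start=1)
            for k, f in enumerate(edges, start=1)
            for a in sorted(set(e) & set(f))]   # vertices shared by edges j, k
    Ip_S = [(j,) for j in range(1, m + 1)]
    return T_R, T_S, Ip_R, Ip_S

G = [(1, 2), (2, 3), (3, 4), (4, 1), (3, 1)]   # the graph of Fig. 19.8
T_R, T_S, Ip_R, Ip_S = encode(G)
print(T_R[0])                      # (1, 'x1', 2, 'y1', 1)
print((1, 1, 4) in Ip_R)           # vertex 1 occurs in edges 1 and 4
```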
19.5 Other Approaches
Incomplete information often arises naturally, even when the focus is on complete data-
bases. For example, the information in a view is by nature incomplete, which in particular
leads to problems when trying to update the view (as discussed in Chapter 22); and we
already considered relations with nulls in the weak universal relations of Chapter 11.
In this section, we briefly present some other aspects of incomplete information. We
consider some alternative kinds of null values; we look at disjunctive deductive databases;
we mention a language that allows us to address the issue of incompleteness directly in
queries; and we briefly mention several situations in which incomplete information arises
naturally, even when the database itself is complete. An additional approach to representing
incomplete information, which stems from using explicit logical theories, will be presented
in connection with the view update problem in Chapter 22.
Other Nulls in Brief
So far we have focused on a specific kind of null value denoting values that are unknown.
Other forms of nulls may be considered. We may consider, for instance, nonexisting nulls.
For example, in the tuple representing a CEO, the field DirectManager has no meaning
and therefore contains a nonexisting null. Nonexisting nulls are at the core of the weak
universal model that we considered in Chapter 11.
It may also be the case that we do not know for a specific field if a value exists. For
example, if the database ignores the marital status of a particular person, the spouse field
is either unknown or nonexisting. It is possible to develop a formal treatment of such no-information nulls. An incomplete database consists of a set of sets of tuples, where each set
of tuples is closed under projection. This closure under projection indicates that if a tuple
is known to be true, the projections of this tuple (although less informative) are also known
to be true. (The reader may want to try, as a nontrivial exercise, to define tables formally
with such nulls and obtain a closure theorem analogous to Theorem 19.3.2.)
For each new form of null values, the game is to obtain some form of representation
with clear semantics and try to obtain a closure theorem for some reasonable language
(like we did for unknown nulls). In particular, we should focus on the most important
algebraic operations for accessing data: projection and join. It is also possible to establish
a lattice structure with the different kinds of nulls so that they can be used meaningfully in
combination.
Disjunctive Deductive Databases
Disjunctive logic programming is an extension of standard logic programming with rules
of the form

    A_1 ∨ · · · ∨ A_i ← B_1, . . . , B_j, ¬C_1, . . . , ¬C_k.
In datalog, the answer to a query is a set of valuations. For instance, the answer to a query
Q(x) is a set of constants a such that Q(a) holds. In disjunctive deductive databases,
an answer may also be a disjunction Q(a) ∨ Q(b).
Disjunctions give rise to new problems of semantics for logic programs. Although in
datalog each program has a unique minimal model, this is no longer the case for datalog
with disjunctions. For instance, consider the database consisting of a single statement
{Q(a) ∨ Q(b)}. Then there are clearly two minimal models: {Q(a)} and {Q(b)}. This
leads to semantics in terms of sets of minimal models, which can be viewed as incomplete
databases. We can develop a fixpoint theory for disjunctive databases, extending naturally
the fixpoint approach for datalog. To do this, we use an ordering ⊑ over sets of minimal
interpretations (i.e., sets I of instances such that there are no I, J in I with I ⊂ J).

Definition 19.5.1 Let I, J be sets of minimal interpretations. Then

    J ⊑ I iff ∀I ∈ I (∃J ∈ J (J ⊆ I)).
Consider the following immediate consequence operator. Let P be a datalog program
with disjunctions, and let I be a set of minimal interpretations. A new set J of interpre-
tations is obtained as follows. For each I in I, state_P(I) is the set of disjunctions of the
form A1 ∨ ... ∨ Ai that are immediate consequences of some facts in I using P. Then J
is the set of instances J such that for some I ∈ I, J is a model of state_P(I) contain-
ing I. Clearly, J is not a set of minimal interpretations. The immediate consequence of I,
denoted T_P(I), is the set of minimal interpretations in J. Now consider the sequence

    I_0 = {∅},    I_i = T_P(I_{i-1}).

It is easy to see that the sequence {I_i}_{i≥0} is nondecreasing with respect to the ordering ⊑,
so it becomes constant at some point. The semantics of P is the limit of the sequence.
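To make the ordering of Definition 19.5.1 and the minimality step of T_P concrete, here is a small Python sketch; the encoding of facts as pairs and the helper names `minimize` and `precedes` are ours, purely illustrative and not from the text:

```python
# Interpretations are frozensets of facts; a "set of interpretations"
# is a Python set of such frozensets. Encoding and names are illustrative.

def minimize(interps):
    """Keep only the subset-minimal interpretations."""
    return {I for I in interps if not any(J < I for J in interps)}

def precedes(Js, Is):
    """The ordering: Js is below Is iff every I in Is contains some J in Js."""
    return all(any(J <= I for J in Js) for I in Is)

# Models of the database {Q(a) or Q(b)}: each must contain Q(a) or Q(b).
models = {frozenset({('Q', 'a')}),
          frozenset({('Q', 'b')}),
          frozenset({('Q', 'a'), ('Q', 'b')})}

min_models = minimize(models)
# Exactly the two minimal models {Q(a)} and {Q(b)} survive.
assert min_models == {frozenset({('Q', 'a')}), frozenset({('Q', 'b')})}
assert precedes({frozenset()}, min_models)   # the empty instance is below all
```

Iterating T_P then amounts to generating, for each interpretation, the models of the newly derived disjunctions and minimizing the result.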
When negation is introduced, the situation, as usual, becomes more complicated. How-
ever, it is possible to extend semantics, such as stratified and well founded, to disjunctive
deductive databases.
Overall, the major difficulty in handling disjunction is the combinatorial explosion it
entails. For example, the fixpoint semantics of datalog with disjunctions may yield a set of
interpretations exponential in the input.
Logical Databases and KL
The approach to null values adopted here is essentially a semantic approach, because the
meaning of an incomplete database is a set of possible instances. One can also use a
syntactic, proof-theoretic approach to modeling incomplete information. This is done by
regarding the database as a set of sentences, which yields the logical database approach.
As discussed in Chapter 2, in addition to statements about the real world, logical
databases consider the following:
1. Uniqueness axioms: State that distinct constants stand for distinct elements in the
real world.
2. Domain closure axiom: Specify the universe of constants.
3. Completion axiom: Specify that no fact other than those recorded holds.
Missing in both the semantic and syntactic approaches is the ability to make more
refined statements about what the database knows. Such capabilities are particularly
important in applications where the real world is slowly discovered through imprecise data.
In such applications, it is in general impossible to wait for a complete state to answer queries,
and it is often desirable to provide the user with information about the current state of
knowledge of the database.
To overcome such limitations, we may use languages with modalities. We briefly
mention one such language: KL. The language KL permits us to distinguish explicitly
between the real world and the knowledge the database has of it. It uses the particular
modal symbol K. Intuitively, whereas the sentence φ states the truth of φ in the real world,
Kφ states that the database knows that φ holds.
For instance, the fact that the database knows neither that Alice is a student nor that
she is not is expressed by the statement

    ¬K Student(Alice) ∧ ¬K(¬Student(Alice)).

The following KL statement says that there is a teacher who is unknown:

    ∃x(Teacher(x) ∧ ¬K(Teacher(x))).

This language allows the database to reason and answer queries about its own knowledge
of the world.
Incomplete Information in Complete Databases
Incomplete information often arises naturally even when the focus is on complete
databases. The following are several situations that naturally yield incomplete information:
Views: Although a view of a database is usually a complete database, the information
it contains is incomplete relative to the whole database. For a user seeing the view,
there are many possible underlying databases. So the view can be seen as a
representation for the set of possible underlying databases. The incompleteness of the
information in the view is the source of the difficulty in view updating (see Chap-
ter 22).
Weak universal relations: We have already seen how relations with nulls arise in the
weak universal relations of Chapter 11.
Nondeterministic queries: Recall from Chapter 17 that nondeterministic languages
have several possible answers on a given input. Thus we can think of nondeter-
ministic queries as producing as an answer a set of possible worlds (see also Ex-
ercise 19.20).
Semantics of negation: As seen in Chapter 15, the well-founded semantics for
datalog¬ involves 3-valued interpretations, where some facts are neither true nor
false but unknown. Clearly, this is a form of incomplete information.
Bibliographic Notes
It was accepted early on that database systems should handle incomplete information
[Cod75]. After some interesting initial work on the topic (e.g., [Cod75, Gra77, Cod79,
Cod82, Vas79, Vas80, Bis81, Lip79, Lip81, Bis83]), the landmark paper [IL84] laid the
formal groundwork for incomplete databases with nulls of the unknown kind and intro-
duced the notion of representation system. That paper assumed the OWA, as opposed to
the CWA that was assumed in this chapter. Since then, there has been considerable work
on querying incomplete information databases. The focus of most of this work has been
a search for the correct semantics for queries applied to incomplete information databases
(e.g., [Gra84, Imi84, Zan84, AG85, Rei86, Var86b]).
Much of the material presented in this chapter is from [IL84] (although it was
presented there assuming the OWA), and we refer the reader to it for a detailed treatment.
Tables form the central topic of the monograph [Gra91]. Examples in Section 19.1 are
taken from there. The naive tables have been called V-tables and e-tables in [AG85,
Gra84, IL84]. The c-tables with local conditions are from [IL84]; they were augmented
with global conditions in [Gra84]. The fact that c-tables provide a strong representation
system for relational algebra is shown in [IL84]. That this strong representation property
extends to query languages with fixpoint on positive queries is reported in [Gra91]. Chasing
is applied to c-tables in [Gra91].
There are two main observations in the literature on certainty semantics. The first
observation follows from the results of [IL84] (based on c-tables) and [Rei86, Var86b]
(based on logical databases). Namely, under particular syntactic restrictions on c-tables
and using positive queries, the certainty question can be handled exactly as if one had a
complete information database. The second observation deals with the negative effects of
the many possible instantiations of the null values (e.g., [Var86b]).
Comprehensive data-complexity analysis of problems related to representing and
querying databases with null values is provided in [IL84, Var86b, AKG91]. The program
complexity of evaluation is higher by an exponential than the data complexity [Cos83,
Var82a]. Such problems were first noted in [HLY80, MSY81] as part of the study of nulls
in weak universal instances.
Early investigations suggesting the use of orderings in the spirit of denotational
semantics for capturing incomplete information include [Vas79, Bis81]. The first paper to
develop this approach is [BJO91], which focused on fds and universal relations. This
has spawned several papers, including an extension to complex objects (see Chapter 20)
[BDW88, LL90], mvds [Lib91], and bags [LW93b]. An important issue in this work
concerns which power domain ordering is used (Hoare, Smyth, or Plotkin); see [BDW91,
Gun92, LW93a].
The logical database approach has been largely influenced by the work of Reiter
[Rei78, Rei84, Rei86] and by that of Vardi [Var86a, Var86b]. The extension of the fixpoint
operator of logic programs to disjunctive logic programs is shown in [MR90]. Disjunctive
logic programming is the topic of [LMR92]. A survey on deductive databases with
disjunctions can be found in [FM92]. The complexity of datalog with disjunction is investigated
in [EGM94].
A related but simpler approach to incomplete information is the use of or-sets. As a
simple example, a tuple ⟨Joe, {20, 21}⟩ might be used to indicate that Joe has age either 20
or 21. This approach is introduced in [INV91a, INV91b] in the context of complex objects;
subsequent works include [Rou91, LW93a].
One will find in [Lev84b, Lev84a] entry points to the interesting world of knowledge
bases (from the viewpoint of incompleteness of information), including the language KL. A
related, active area of research, called reasoning about knowledge, extends modal operators
to talk about the knowledge of several agents about facts in the world or about each other's
knowledge. This may be useful in distributed databases, where sites may have different
knowledge of the world. The semantics of such statements is in terms of an extension of
the possible worlds semantics, based on Kripke structures. An introduction to reasoning
about knowledge can be found in [Hal93, FHMV95].
Finally, nonapplicable nulls are studied in [LL86]; open nulls are studied in [GZ88];
and weak instances with nonapplicable nulls are studied in [AB87b].
Exercises
Exercise 19.1 Consider the c-table in Example 19.3.1. Give the c-tables for the answers to
these queries: (1) Which students are taking Math? (2) Which students are not taking Math? (3)
Which students are taking Biology? In each case, what are the sets of sure and possible tuples
of the answer?
Exercise 19.2 Consider the c-table T′ in Fig. 19.3. Show that each I in rep(T′) has two
tuples. Is T′ equivalent to some 2-tuple c-table?
Exercise 19.3 Consider the naive table in Fig. 19.2. In the weak representation system
described in Section 19.1, compute the naive tables for the answers to the queries σ_{A=C}(R)
and π_{AB}(R) ⋈ π_{AC}(R). What are the tuples surely in the answers to these queries?
Exercise 19.4 A ternary c-table T represents a directed graph with blue, red, and yellow
edges. The first two columns represent the edges and the last the colors. Some colors are
unknown. The local conditions are used to enforce that a blue edge cannot follow a red one
on a path. Give a datalog query q stating that there is a cycle with no two consecutive edges of
the same color. Give c-tables such that (1) there is surely such a cycle; and (2) there may be one
but it is not sure. In each case, compute the table strongly representing the answer to q.
Exercise 19.5 Let T be the Codd table in Fig. 19.1. Compute strong representations of the
results of the following queries, using c-tables: (a) σ_{A=3}(R); (b) q1 = δ_{BC→AB}(π_{BC}(R))
(i.e., π_{BC}(R) with BC renamed to AB); (c) q1 ∪ π_{AB}(R); (d) q1 ∩ π_{AB}(R); (e) q1 −
π_{AB}(R); (f) q1 ⋈ π_{BC}(R).
Exercise 19.6 Consider the c-table T4 = T1 ∪ T2 of Fig. 19.4. Compute a strong representation
of the transitive closure of T4.
Exercise 19.7 Complete the proof that Codd tables are not a weak representation system with
respect to SPU, in Theorem 19.2.1.
Exercise 19.8 Example 19.1.1 shows that one cannot strongly represent the result of a
selection on a table with another table. For which operations of relational algebra applied to tables is
it possible to strongly represent the result?
Exercise 19.9 Prove that naive tables are not a weak representation system for relational
algebra.
Exercise 19.10 Prove that, given a c-table T without constants, rep(T ) is the closure under
isomorphism of a finite set of instances. Extend the result for the case with constants.
Exercise 19.11 Provide an algorithm for testing equivalence of c-tables.
Exercise 19.12 Show that there exists a datalog query q such that, given a naive table T and
a tuple t, testing whether t is possibly in the answer is np-complete.
Exercise 19.13 Prove Theorem 19.3.2.
Exercise 19.14 Prove that for each c-table T1 and each set Σ of fds and mvds, there exists a
table T2 such that chase_Σ(rep(T1)) = rep(T2). Hint: Use the chase on c-tables.
Exercise 19.15 Show that there is a query q in polynomial time for which deciding, given I
and a c-table T , (a) whether I ∈ q(rep(T )), or whether I is possible, are np-complete; and (b)
whether q(rep(T )) ⊆ {I}, or whether I is certain, are co-np-complete.
Exercise 19.16 Give algorithms to compute, for a c-table T and a relational algebra query q,
the set of tuples sure(q, T ) surely in the answer and the set of tuples poss(q, T ) possibly in the
answer. What is the complexity of your algorithms?
Exercise 19.17 Let T be a c-table and q a positive existential query of the same arity as T .
Show that the sequence q^i(T ) converges [i.e., that for some i, q^i(T ) ≡ q^{i+1}(T )]. Hint: Show
that the sequence converges in at most m stages, where m = max{i | q^i(I) ≠ q^{i+1}(I), I ∈ I}
and where I is a finite set of relations representing the nonisomorphic instances in rep(T ).
Exercise 19.18 Describe how to generalize the technique of chasing by full dependencies Σ
to apply to instances rather than tableaux. If an egd can be applied and calls for two distinct
constants to be identified, then the chase ends in failure. Show that for instance I, if the chase
of I by Σ succeeds, then chase(I, Σ) |= Σ.
Exercise 19.19 Show that for datalog programs with disjunctions in heads of rules, the
sequence {I_i}_{i≥0} of Section 19.5 converges. What can be said about the limit in model-theoretic
terms?
Exercise 19.20 [ASV90] There is an interesting connection between incomplete information
and nondeterminism. Recall the nondeterministic query languages based on the witness operator
W, in Chapter 17. One can think of nondeterministic queries as producing as an answer a
set of possible worlds. In the spirit of the sure and possible answers to queries on incomplete
databases, one can define for a nondeterministic query q the deterministic queries sure(q) and
poss(q) as follows:

    sure(q)(I) = ∩{J | J ∈ q(I)}
    poss(q)(I) = ∪{J | J ∈ q(I)}

Consider the language FO + W, where a program consists of a finite sequence of assignment
statements of the form R := E, where E is a relational algebra expression or an application of W
to a relation. Let sure(FO + W) denote all deterministic queries that can be written as sure(q)
for some FO + W query q, and similarly for poss(FO + W). Prove that
(a) poss(FO + W) = np, and
(b) sure(FO + W) = co-np.
20 Complex Values
Alice: Complex values?
Riccardo: We could have used a different title: nested relations, complex objects,
structured objects . . .
Vittorio: . . . N1NF, NFNF, NF², NF2, V-relation . . . I have seen all these names
and others as well.
Sergio: In a nutshell, relations are nested within relations; something like
Matriochka relations.
Alice: Oh, yes. I love Matriochkas.
Although we praised the simplicity of the data structure in the relational model, this
simplicity becomes a severe limitation when designing many practical database
applications. To overcome this problem, the complex value model has been proposed as a
significant extension of the relational one. This extension is the topic of this chapter.
Intuitively, complex values are relations in which the entries are not required to be
atomic (as in the relational model) but are allowed to be themselves relations. The data
structure in the relational model (the relation) can be viewed as the result of applying to
atomic values two constructors: a tuple constructor to make tuples and a set constructor
to make sets of tuples (relations). Complex values allow the application of the tuple and
set constructor recursively. Thus they can be viewed as finite trees whose internal nodes
indicate the use of the tuple and finite set constructors. Clearly, a relation is a special kind
of complex value: a set of tuples of atomic values.
At the schema level, we will specify a set of complex sorts (or types). These indicate
the structure of the data. At the instance level, sets of complex values corresponding to
these sorts are provided. For example, we have the following:
    Sort                        Complex Value
    dom                         a
    {dom}                       {a, b, c}
    ⟨A : dom, B : dom⟩          ⟨A : a, B : b⟩
    {⟨A : dom, B : dom⟩}        {⟨A : a, B : b⟩, ⟨A : b, B : a⟩}
    {{dom}}                     {{a, b}, {a}, { }}
An example of a more involved complex value sort and of a value of that sort is shown
in Fig. 20.1(a). The tuple constructor is denoted by ⊗ and the set constructor by ∗. An
[Figure 20.1: Complex value. (a) A sort and a value of that sort; (b) another representation
of the same value.]
alternative representation more in the spirit of our representations of relations is shown in
Fig. 20.1(b). Another complex value (for a CINEMA database) is shown in Fig. 20.2.
We will see that, whereas it is simple to add the tuple constructor to the traditional
relational data model, the set constructor requires a number of interesting new ideas. There
are similarities between this set construct and the set constructs used in general-purpose
programming languages such as Setl.
In this chapter, we introduce complex values and present a many-sorted algebra and
an equivalent calculus for complex values. The focus is on the use of the two constructors
of complex values: tuples and (finite) sets. (Additional constructors, such as lists, bags, and
    Director    Title                       Actors
    Hitchcock   The Trouble with Harry      Forsythe, Gwenn, MacLaine, Hitchcock
                The Birds                   Hedren, Taylor, Pleshette
                Psycho                      Perkins, Leigh
    Bergman     Cries and Whispers          Andersson, Sylwan, Thulin, Ullman
                The Seventh Seal            von Sydow, Björnstrand, Ekerot, Poppe

Figure 20.2: The CINEMA database revisited (with additional data shown)
union, have also been incorporated into complex values but are not studied here.) After
introducing the algebra and calculus, we present examples of these interesting languages. We
then comment on the issues of expressive power and complexity and describe equivalent
languages with fixpoint operators, as well as languages in the deductive paradigm. Finally
we briefly examine a subset of the commercial query language O₂SQL that provides an
elegant SQL-style syntax for querying complex values.
The theory described in this chapter serves as a starting point for object-oriented
databases, which are considered in Chapter 21. However, key features of the object-oriented
paradigm, such as objects and inheritance, are still missing in the complex value
framework and are left for Chapter 21.
20.1 Complex Value Databases
Like the relational model, we will use relation names in relname, attributes in att, and
constants in dom. The sorts are more complex than for the relational model. Their abstract
syntax is given by

    τ = dom | ⟨B1 : τ1, ..., Bk : τk⟩ | {τ},

where k ≥ 0 and B1, ..., Bk are distinct attributes. Intuitively, an element of dom is a
constant; an element of ⟨B1 : τ1, ..., Bk : τk⟩ is a k-tuple with an element of sort τi in entry
Bi for each i; and an element of sort {τ} is a finite set of elements of sort τ.
Formally, the set of values of sort τ (i.e., the interpretation of τ), denoted [[τ]], is defined
by
1. [[dom]] = dom,
2. [[{τ}]] = {{v1, ..., vj} | j ≥ 0, vi ∈ [[τ]], i ∈ [1, j]}, and
3. [[⟨B1 : τ1, ..., Bk : τk⟩]] = {⟨B1 : v1, ..., Bk : vk⟩ | vj ∈ [[τj]], j ∈ [1, k]}.
An element of a sort is called a complex value. A complex value of the form
⟨B1 : a1, ..., Bk : ak⟩ is said to be a tuple, whereas a complex value of the form
{a1, ..., aj} is a set.
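As an illustrative aside (not from the book), the recursive definition of [[τ]] translates directly into a membership test. In the sketch below, the encoding is ours: the string 'dom' stands for dom, a dict from attributes to sorts encodes a tuple sort, a one-element list encodes a set sort, and values use strings, dicts, and lists accordingly:

```python
# Sketch: does a value belong to [[sort]]? Encoding (ours): 'dom' = atomic
# sort; {'B1': s1, ...} = tuple sort; [s] = the set sort {s}.
# Values: str = atom, dict = tuple, list = finite set.

def in_sort(value, sort):
    if sort == 'dom':                        # [[dom]] = dom
        return isinstance(value, str)
    if isinstance(sort, list):               # [[{s}]]: finite sets over [[s]]
        return (isinstance(value, list)
                and all(in_sort(v, sort[0]) for v in value))
    if isinstance(sort, dict):               # [[<B1:s1,...>]]: tuples
        return (isinstance(value, dict)
                and value.keys() == sort.keys()
                and all(in_sort(value[b], sort[b]) for b in sort))
    return False

# The sort <A:dom, B:dom, C:{<A:dom, E:{dom}>}> and a tuple of that sort:
s = {'A': 'dom', 'B': 'dom', 'C': [{'A': 'dom', 'E': ['dom']}]}
v = {'A': 'a', 'B': 'b', 'C': [{'A': 'c', 'E': []}, {'A': 'd', 'E': []}]}
assert in_sort(v, s)
# Because the E entries are empty sets, the same value also conforms to a
# sort in which E has sort {{dom}}:
s2 = {'A': 'dom', 'B': 'dom', 'C': [{'A': 'dom', 'E': [['dom']]}]}
assert in_sort(v, s2)
```

The second assertion anticipates the observation below that, because of the empty set, a complex value may belong to more than one sort.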
Remark 20.1.1 For instance, consider the sort

    {⟨A : dom, B : dom, C : {⟨A : dom, E : {dom}⟩}⟩}

and the value

    { ⟨A : a, B : b, C : { ⟨A : c, E : { }⟩, ⟨A : d, E : { }⟩ }⟩,
      ⟨A : e, B : f, C : { }⟩ }

of that sort. This is yet again the value of Fig. 20.1. It is customary to omit dom and for
instance write this sort {⟨A, B, C : {⟨A, E : { }⟩}⟩}.
As mentioned earlier, each complex value and each sort can be viewed as a finite
tree. Observe the tree representation. Outgoing edges from tuple vertexes are labeled; set
vertexes have a single child in a sort and an arbitrary (but finite) number of children in a
value.
Finally note that (because of the empty set) a complex value may belong to more than
one sort. For instance, the value of Fig. 20.1 is also of sort

    {⟨A : dom, B : dom, C : {⟨A : dom, E : {{dom}}⟩}⟩}.
Relational algebra deals with sets of tuples. Similarly, complex value algebra deals
with sets of complex values. This motivates the following definition of sorted relation (this
definition is frequently a source of confusion):

    A (complex value) relation of sort τ is a finite set of values of sort τ.

We use the term relation for complex value relation. When we consider the classical
relational model, we sometimes use the phrase flat relation to distinguish it from complex
value relation. It should be clear that the flat relations that we have studied are special cases
of complex value relations.
We must be careful in distinguishing the sort of a complex value relation and the sort
of the relation viewed as one complex value. For example, a complex value relation of sort
⟨A, B, C⟩ is a set of tuples over attributes ABC. At the same time, the entire relation can be
viewed as one complex value of sort {⟨A, B, C⟩}. There is no contradiction between these
two ways of viewing a relation.
We now assume that the function sort (of Chapter 3) is from relname to the set of
sorts. We also assume that for each sort, there is an infinite number of relations having that
sort.
Note that the sort of a relation is not necessarily a tuple sort (it can be a set sort). Thus
relations do not always have attributes at the top level. Such relations whose sort is a set
are essentially unary relations without attribute names.
A (complex value) schema is a relation name; and a (complex value) database schema
is a finite set of relation names. A (complex value) relation over relation name R is a
finite set of values of sort sort(R); that is, a finite subset of [[sort(R)]]. A (complex value
database) instance I of a schema R is a function from R such that for each R in R, I(R) is
a relation over R.
Example 20.1.2 To illustrate this definition, an instance J of {R1, R2, R3}, where

    sort(R1) = sort(R3) = ⟨A : dom, B : {⟨A1 : dom, A2 : dom⟩}⟩ and
    sort(R2) = ⟨A : dom, A1 : dom, A2 : dom⟩,

is shown in Fig. 20.3.
Variations
To conclude this section, we briefly mention some variations of the complex value model.
The principal one that has been considered is the nested relation model. For nested
relations, set and tuple constructors are required to alternate (i.e., set of sets and tuple with a
tuple component are prohibited). For instance,

    τ1 = ⟨A, B, C : {⟨D, E : {⟨F, G⟩}⟩}⟩ and
    τ2 = ⟨A, B, C : {⟨E : {⟨F, G⟩}⟩}⟩

are nested relation sorts whereas
    J(R1):
        A     B
        d1    {⟨A1 : d1, A2 : d2⟩, ⟨A1 : d3, A2 : d4⟩}
        d1    {⟨A1 : d3, A2 : d4⟩, ⟨A1 : d5, A2 : d6⟩}
        d2    {⟨A1 : d1, A2 : d3⟩, ⟨A1 : d2, A2 : d4⟩}

    J(R2):
        A     A1    A2
        d1    d1    d2
        d1    d3    d4
        d1    d5    d6
        d2    d1    d3
        d2    d2    d4

    J(R3):
        A     B
        d1    {⟨A1 : d1, A2 : d2⟩, ⟨A1 : d3, A2 : d4⟩, ⟨A1 : d5, A2 : d6⟩}
        d2    {⟨A1 : d1, A2 : d3⟩, ⟨A1 : d2, A2 : d4⟩}

Figure 20.3: A database instance
    τ3 = ⟨A, B, C : ⟨D, E : {⟨F, G⟩}⟩⟩ and
    τ4 = ⟨A, B, C : {{⟨F, G⟩}}⟩

are not. (For τ3, observe two adjacent tuple constructors; there are two set constructors
for τ4.)
The restriction imposed on the structure of nested relations is mostly cosmetic. A more
fundamental constraint is imposed in so-called Verso-relations (V-relations).
As with nested relations, set and tuple constructors in V-relations are required to
alternate. A relation is defined recursively to be a set of tuples, such that each component
may itself be a relation but at least one of them must be atomic. The foregoing sort τ1 would
be acceptable for a V-relation whereas sort τ2 would not because of the sort of tuples in the
C component.
A further (more radical) assumption for V-relations is that for each set of tuples, the
atomic attributes form a key. Observe that as a consequence, the cardinality of each set in
a V-relation is bounded by a polynomial in the number of atomic elements occurring in the
V-relation. This bound certainly does not apply for a relation of sort {dom} (a set of sets)
or for a nested relation of sort

    ⟨A : {⟨B : dom⟩}⟩,

which is also essentially a set of sets. The V-relations are therefore much more limited data
structures. (See Exercise 20.1.) They can be viewed essentially as flat relational instances.
20.2 The Algebra
We now define a many-sorted algebra, denoted ALG^cv (for complex values). Like relational
algebra, ALG^cv is a functional language based on a small set of operations. This section first
presents a family of core operators of the algebra and then an extended family of operators
that can be simulated by them. At the end of the section we introduce an important subset
of ALG^cv, denoted ALG^{cv−}.
The Core of ALG^cv
Let I, I1, I2, ... be relations of sort τ, τ1, τ2, ..., respectively. It is important to keep in mind
that a relation of sort τ is a set of values of sort τ.
Basic set operations: If τ1 = τ2, then I1 ∪ I2, I1 ∩ I2, I1 − I2 are relations of sort τ1, and
their values are defined in the obvious manner.
Tuple operations: If I is a relation of sort τ = ⟨B1 : τ1, ..., Bk : τk⟩, then
• σ_γ(I) is a relation of sort τ. The selection condition γ is (with obvious restrictions on
sorts) of the form Bi = d, Bi = Bj, Bi ∈ Bj, or Bi = Bj.C, where d is a constant, and it is
required in the last case that τj be a tuple sort with a C field. Then

    σ_γ(I) = {v | v ∈ I, v |= γ},

where |= is defined by

    ⟨..., Bi : vi, ...⟩ |= Bi = d if vi = d,
    ⟨..., Bi : vi, ..., Bj : vj, ...⟩ |= Bi = Bj if vi = vj,
    ⟨..., Bi : vi, ..., Bj : vj, ...⟩ |= Bi ∈ Bj if vi ∈ vj, and
    ⟨..., Bi : vi, ..., Bj : ⟨..., C : v′j, ...⟩, ...⟩ |= Bi = Bj.C if vi = v′j.

• π_{B1,...,Bl}(I), l ≤ k, is a relation of sort ⟨B1 : τ1, ..., Bl : τl⟩ with

    π_{B1,...,Bl}(I) = {⟨B1 : v1, ..., Bl : vl⟩ |
                       ∃vl+1, ..., vk (⟨B1 : v1, ..., Bk : vk⟩ ∈ I)}.
Constructive operations:
• powerset(I) is a relation of sort {τ} and

    powerset(I) = {v | v ⊆ I}.

• If A1, ..., An are distinct attributes, tup_create_{A1,...,An}(I1, ..., In) is of sort
⟨A1 : τ1, ..., An : τn⟩, and

    tup_create_{A1,...,An}(I1, ..., In) = {⟨A1 : v1, ..., An : vn⟩ | ∀i (vi ∈ Ii)}.

• set_create(I) is of sort {τ}, and set_create(I) = {I}.
Destructive operations:
• If τ = {τ′}, then set_destroy(I) is a relation of sort τ′ and

    set_destroy(I) = ∪I = {w | ∃v ∈ I, w ∈ v}.

• If I is of sort ⟨A : τ′⟩, tup_destroy(I) is a relation of sort τ′, and

    tup_destroy(I) = {v | ⟨A : v⟩ ∈ I}.
We are now prepared to define the (core of the) language ALG^cv. Let R be a database
schema. A query returns a set of values of the same sort. By analogy with relations, a query
of sort τ returns a set of values of sort τ. ALG^cv queries and their answers are defined as
follows. There are two base cases:
Base values: For each relation name R in R, R is an algebraic query of sort sort(R). The
answer to query R is I(R).
Constant values: For each element a, {a} is a (constant) algebraic query of sort dom. The
answer to query {a} is simply {a}.
Other queries of ALG^cv are obtained as follows. If q1, q2, ... are queries, γ is a selection
condition, and A1, ... are attributes,

    q1 ∪ q2, q1 ∩ q2, q1 − q2, σ_γ(q1), π_{A1,...,Ak}(q1),
    tup_create_{A1,...,Ak}(q1, ..., qk), powerset(q1),
    tup_destroy(q1), set_destroy(q1), set_create(q1)

are queries if the appropriate restrictions on the sorts apply. (Note that because of the
sorting constraints, tup_destroy and set_destroy cannot both be applicable to a given q1.)
The sort of a query and its answer are defined in a straightforward manner.
To illustrate these definitions, we present two examples. We then consider other
algebraic operators that are expressible in the algebra. In Section 20.4 we provide several more
examples of algebraic queries.
Example 20.2.1 Consider the instance J of Fig. 20.3. Then one can find in Fig. 20.4

    J1 = [σ_{A=d2}(R1)](J),      J2 = π_B(J1),
    J3 = tup_destroy(J2),        J4 = set_destroy(J3),
    J5 = powerset(J4),           J6 = tup_create_C(J4).

Also observe that

    J5 = [powerset(set_destroy(tup_destroy(π_B(σ_{A=d2}(R1)))))](J).
    J1:
        A     B
        d2    {⟨A1 : d1, A2 : d3⟩, ⟨A1 : d2, A2 : d4⟩}

    J2:
        B
        {⟨A1 : d1, A2 : d3⟩, ⟨A1 : d2, A2 : d4⟩}

    J3:
        {⟨A1 : d1, A2 : d3⟩, ⟨A1 : d2, A2 : d4⟩}

    J4:
        ⟨A1 : d1, A2 : d3⟩
        ⟨A1 : d2, A2 : d4⟩

    J5:
        { }
        {⟨A1 : d1, A2 : d3⟩}
        {⟨A1 : d2, A2 : d4⟩}
        {⟨A1 : d1, A2 : d3⟩, ⟨A1 : d2, A2 : d4⟩}

    J6:
        C
        ⟨A1 : d1, A2 : d3⟩
        ⟨A1 : d2, A2 : d4⟩

Figure 20.4: Algebraic operations
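The pipeline of Example 20.2.1 can be replayed mechanically with the following Python sketch; the encoding is ours, not the book's (a tuple value is a tuple of (attribute, value) pairs and a set value is a frozenset), and the helper names merely mirror the core operators:

```python
from itertools import combinations

# Toy encoding: tuple value = tuple of (attribute, value) pairs,
# set value = frozenset. Illustrative only.

def select_eq(I, attr, d):          # sigma_{attr=d}(I)
    return frozenset(v for v in I if dict(v)[attr] == d)

def project(I, attrs):              # pi_{attrs}(I)
    return frozenset(tuple((a, dict(v)[a]) for a in attrs) for v in I)

def tup_destroy(I):                 # <A : t'> -> t' (single-attribute tuples)
    return frozenset(v[0][1] for v in I)

def set_destroy(I):                 # {{t'}} -> {t'}: union of the members
    return frozenset(w for v in I for w in v)

def powerset(I):                    # all subsets of I
    elems = list(I)
    return frozenset(frozenset(c) for r in range(len(elems) + 1)
                     for c in combinations(elems, r))

pair = lambda x, y: (('A1', x), ('A2', y))
R1 = frozenset([
    (('A', 'd1'), ('B', frozenset([pair('d1', 'd2'), pair('d3', 'd4')]))),
    (('A', 'd1'), ('B', frozenset([pair('d3', 'd4'), pair('d5', 'd6')]))),
    (('A', 'd2'), ('B', frozenset([pair('d1', 'd3'), pair('d2', 'd4')]))),
])

# J1 .. J5 of Example 20.2.1:
J4 = set_destroy(tup_destroy(project(select_eq(R1, 'A', 'd2'), ['B'])))
assert J4 == frozenset([pair('d1', 'd3'), pair('d2', 'd4')])
assert len(powerset(J4)) == 4       # J5 contains the 2^2 subsets of J4
```

The frozenset encoding matters here: powerset needs its input elements to be hashable so that sets of sets can be formed.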
Example 20.2.2 In this example, we illustrate the destruction and construction of a
complex value. Consider the relation

    I = {⟨A : a, B : {b, c}, C : ⟨A : d, B : {e, f}⟩⟩}.

Then

    tup_destroy(π_A(I))
    ∪ set_destroy(tup_destroy(π_B(I)))
    ∪ tup_destroy(π_A(tup_destroy(π_C(I))))
    ∪ set_destroy(tup_destroy(π_B(tup_destroy(π_C(I)))))
    = {a, b, c, d, e, f}.
We next reconstruct I from singleton sets:

    I = tup_create_{A,B,C}({a}, set_create({b} ∪ {c}),
            tup_create_{A,B}({d}, set_create({e} ∪ {f}))).
Additional Algebraic Operations
There are infinite possibilities in the choice of algebraic operations for complex values.
We chose to incorporate in the core algebra only a few basic operations to simplify the
formal presentation and the proof of the equivalence between the algebra and calculus.
However, making the core too reduced would complicate that proof. (For example, the
operator set_create can be expressed using the other operations but is convenient in the
proof.) We now present several additional algebraic operations. It is important to note that
all these operations can be expressed in complex value algebra. (In that sense, they can
be viewed as macro operations.) Furthermore, all but the nest operator can be expressed
without using the powerset operator.
We first generalize constant queries.
Complex constants: It is easy to see that the technique of Example 20.2.2 can be
generalized. So instead of simply {a} for a atomic, we use as constant queries arbitrary
complex value sets.
We also generalize relational operations.
Renaming: Renaming can be computed using the other operations, as illustrated in
Section 20.4 (which presents examples of queries).
Cross-product: For i in [1, 2], let Ii be a relation of sort

    τi = ⟨B^i_1 : τ^i_1, ..., B^i_{j_i} : τ^i_{j_i}⟩

and let the attribute sets in τ1, τ2 be disjoint. Then I1 × I2 is the relation defined by

    sort(I1 × I2) = ⟨B^1_1 : τ^1_1, ..., B^1_{j_1} : τ^1_{j_1},
                     B^2_1 : τ^2_1, ..., B^2_{j_2} : τ^2_{j_2}⟩

and

    I1 × I2 = {⟨B^1_1 : x^1_1, ..., B^1_{j_1} : x^1_{j_1}, B^2_1 : x^2_1, ..., B^2_{j_2} : x^2_{j_2}⟩ |
               ⟨B^i_1 : x^i_1, ..., B^i_{j_i} : x^i_{j_i}⟩ ∈ Ii for i ∈ [1, 2]}.

It is easy to simulate cross-product using the operations of the algebra. This is also
illustrated in Section 20.4.
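As a minimal illustrative sketch (our own encoding, not the book's: a tuple value is a tuple of (attribute, value) pairs, with the two attribute sets assumed disjoint), cross-product amounts to pairwise concatenation of tuples:

```python
# Sketch: cross-product of two complex-value relations. Tuples are
# encoded as tuples of (attribute, value) pairs; attributes are disjoint.

def cross(I1, I2):
    return frozenset(v1 + v2 for v1 in I1 for v2 in I2)

I1 = frozenset([(('A', 'a'),), (('A', 'b'),)])
I2 = frozenset([(('B', frozenset({'c', 'd'})),)])   # a set-valued column
out = cross(I1, I2)
assert len(out) == 2
assert (('A', 'a'), ('B', frozenset({'c', 'd'}))) in out
```

Note that nothing special is needed for set-valued entries: they are simply carried along as component values.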
Join: This can be defined in the natural manner and can be simulated using cross-product,
renaming, and selection.
It should now be clear that complex value algebra subsumes relational algebra when
applied to flat relations. We also have new set-oriented operations.
N-ary set_create: We introduced tup_create as an n-ary operation. We also allow n-ary
set_create with the meaning that

    set_create(I1, ..., In) = set_create(I1) ∪ ... ∪ set_create(In).

Singleton: This operator transforms a set of values {a1, ..., an} into a set {{a1}, ..., {an}}
of singletons.
Nest, unnest: Less primitive interesting operations such as nest, unnest can be considered.
For example, for J of Fig. 20.3 we have

    unnest_B(J(R1)) = J(R2) and
    nest_{B=(A1,A2)}(J(R2)) = J(R3).
More formally, suppose that we have R and S with sorts

sort(R) = ⟨A_1 : τ_1, . . . , A_k : τ_k, B : {⟨A_{k+1} : τ_{k+1}, . . . , A_n : τ_n⟩}⟩
sort(S) = ⟨A_1 : τ_1, . . . , A_k : τ_k, A_{k+1} : τ_{k+1}, . . . , A_n : τ_n⟩.
Then for instances I of R and J of S, we have

unnest_B(I) = { ⟨A_1 : x_1, . . . , A_n : x_n⟩ | ∃y (⟨A_1 : x_1, . . . , A_k : x_k, B : y⟩ ∈ I
    and ⟨A_{k+1} : x_{k+1}, . . . , A_n : x_n⟩ ∈ y) }

nest_{B=(A_{k+1},...,A_n)}(J) = { ⟨A_1 : x_1, . . . , A_k : x_k, B : y⟩ |
    ∅ ≠ y = { ⟨A_{k+1} : x_{k+1}, . . . , A_n : x_n⟩ | ⟨A_1 : x_1, . . . , A_n : x_n⟩ ∈ J } }.
Observe that

unnest_B(nest_{B=(A_1,A_2)}(J(R_2))) = J(R_2), but
nest_{B=(A_1,A_2)}(unnest_B(J(R_1))) ≠ J(R_1).
This is indeed not an isolated phenomenon. Unnest is in general the right inverse of nest (nest_{B=(...)} ∘ unnest_B is the identity), whereas unnest is in general not information preserving (one-to-one) and so has no right inverse (see Exercise 20.8).
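This asymmetry can be checked mechanically. The Python sketch below of unnest and nest is illustrative (the encoding is an assumption of the sketch: a tuple is a frozenset of (attribute, value) pairs, a set-valued field is a frozenset of such tuples); it shows in particular that a tuple whose set-valued component is empty is lost by unnest.

```python
def unnest(rel, b):
    """unnest_b: replace the set-valued attribute b by each of its members."""
    out = set()
    for t in rel:
        d = dict(t)
        for member in d[b]:
            flat = {k: v for k, v in d.items() if k != b}
            flat.update(dict(member))
            out.add(frozenset(flat.items()))
    return out

def nest(rel, b, attrs):
    """nest_{b=(attrs)}: group the attributes in attrs into a set-valued b."""
    out = set()
    for t in rel:
        d = dict(t)
        key = {k: v for k, v in d.items() if k not in attrs}
        group = frozenset(
            frozenset((k, dict(u)[k]) for k in attrs)
            for u in rel
            if all(dict(u)[k] == v for k, v in key.items()))
        key[b] = group
        out.add(frozenset(key.items()))
    return out
```

Nesting then unnesting a flat relation is the identity, but unnesting first loses any tuple whose B-component is the empty set, so no subsequent nest can recover it.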
Relational projection and selection were filtering operations in the sense that intuitively they scan a set and keep only certain elements, possibly modifying them in a uniform way. The filters in complex value algebra are more general. Of course, we shall allow Boolean expressions in selection conditions. More interestingly, we also allow set comparators in addition to =, such as ∈, ∋, ⊆, ⊇, and negations of these comparators (e.g., ∉). The inclusion comparator ⊆ plays a special role in the calculus. We will see in Section 20.4 how to simulate selection with ⊆.
Selection is a predicative filter in the sense that a predicate allows us to select some elements, leaving them unchanged. Other filters, such as projection, are map filters. They transform the elements. Clearly, one can combine both aspects and furthermore allow more complicated selection conditions or restructuring specifications. For instance, suppose I is
a set of tuples of sort

⟨A : dom, B : ⟨C : ⟨E : {dom}, E′ : dom⟩, C′ : {dom}⟩⟩.

We could use an operation that first filters all the values matching the pattern

⟨A : x, B : ⟨C : ⟨E : y, E′ : z⟩, C′ : {x}⟩⟩;

and then transforms them into

⟨A : (y ∪ {x}), B : y, C : z⟩.
This style of operations is standard in functional languages (e.g., apply-to-all in fp).
Remark 20.2.3 As mentioned earlier, all of the operations just introduced are expressible in ALG^cv. We might also consider an operation to iterate over the elements of a set in some order. Such an operation can be found in several systems. As we shall see in Section 20.6, iteration is essentially expressible within ALG^cv. On the other hand, an iteration that depends on a specific ordering of the underlying domain of elements cannot be simulated using ALG^cv unless the ordering is presented as part of the input.

In the following sections, we (informally) call extended algebra the algebra consisting of the operations of ALG^cv and allowing complex constants, renaming, cross-product, join, n-ary set_create, singleton, nest, and unnest.
An important subset of ALG^cv, denoted ALG^cv−, is formed from the core operators of ALG^cv by removing the powerset operator and adding the nest operator. As will be seen in Section 20.7, although the nest operator has the ability to construct sets, it is much weaker than powerset. When restricted to nested relations, the language ALG^cv− is usually called nested relation algebra.
20.3 The Calculus
The calculus is modeled after a standard, first-order, many-sorted calculus. However, as we shall see, calculus variables may denote sets, so the calculus will permit quantification over sets (something normally considered to be a second-order feature). For complex value calculus, the separation between first and second order (and higher order as well) is somewhat blurred. As with the algebra, we first present a core calculus and then extend it. The issues of domain independence and safety are also addressed.
For each sort, we assume the existence of a countably infinite set of variables of that sort. A variable is atomic if it ranges over the sort dom. Let R be a schema. A term is an atomic element, a variable, or an expression x.A, where x is a tuple variable and A is an attribute of x. We do not consider (yet) fancier terms. A positive literal is an expression of the form
R(t), t = t′, t ∈ t′, or t ⊆ t′,

where R ∈ R, t and t′ are terms, and the appropriate sort restrictions apply.^1 Formulas are defined from atomic formulas using the standard connectives and quantifiers: ∨, ∧, ¬, ∃, ∀. A query is an expression {x | φ}, where formula φ has exactly one free variable (i.e., x). We sometimes denote it by φ(x). The calculus is denoted CALC^cv.
The following example illustrates this calculus.
Example 20.3.1 Consider the schema and the instance of Fig. 20.3. We can verify that J(R_2) is the answer on instance J to the query

{x | ∃y, z, z′, u, v, w (R_1(y) ∧ y.A = u ∧ y.B = z
    ∧ z′ ∈ z ∧ z′.A_1 = v ∧ z′.A_2 = w
    ∧ x.A = u ∧ x.A_1 = v ∧ x.A_2 = w) },

where the sorts of the variables are as follows:

sort(x) = ⟨A, A_1, A_2⟩,  sort(y) = ⟨A, B : {⟨A_1, A_2⟩}⟩,
sort(u) = sort(v) = sort(w) = dom,  sort(z′) = ⟨A_1, A_2⟩,
sort(z) = {⟨A_1, A_2⟩}.
We could also have used an unsorted alphabet of variables and sorted them inside the formula, as in

{x : ⟨A, A_1, A_2⟩ | ∃y : ⟨A, B : {⟨A_1, A_2⟩}⟩,
    ∃z : {⟨A_1, A_2⟩}, ∃z′ : ⟨A_1, A_2⟩,
    ∃u : dom, ∃v : dom, ∃w : dom
    (R_1(y) ∧ y.A = u ∧ y.B = z
    ∧ z′ ∈ z ∧ z′.A_1 = v ∧ z′.A_2 = w
    ∧ x.A = u ∧ x.A_1 = v ∧ x.A_2 = w) }.
The key difference with relational calculus is the presence of the predicates ∈ and ⊆, which are interpreted as the standard set membership and inclusion. Another difference (of a more cosmetic nature) is that we allow only one free variable in relation atoms and in query formulas. This comes from the stronger sorts: A variable may represent an n-tuple.
The answer to a query q on an instance I, denoted q(I), is defined as for the relational model. As in the relational case, we may define various interpretations, depending on the underlying domain of base values used. As with relational calculus, the basis for defining the semantics is the notion

I satisfies φ for ν relative to d.

^1 Strictly speaking, the symbols =, ∈, and ⊆ are also many sorted.
[Recall that ν is a valuation of the free variables of φ and d is an arbitrary set of elements containing adom(φ, I).]
Consider the definition of this notion in Section 5.3. Cases (a) through (g) remain valid for the complex object calculus. We have to consider two supplementary cases. Recall that for equality, we had case (b):

(b) I |=_d φ[ν] if φ = (s = s′) and ν(s) = ν(s′).

In the same spirit, we add

(h-1) I |=_d φ[ν] if φ = (s ∈ s′) and ν(s) ∈ ν(s′)
(h-2) I |=_d φ[ν] if φ = (s ⊆ s′) and ν(s) ⊆ ν(s′).
This formally states that ∈ is interpreted as set membership and ⊆ as set inclusion (in the same sense that = is interpreted as equality).
The issues surrounding domain independence for relational calculus also arise with CALC^cv. We develop a syntactic condition ensuring domain independence, but we also occasionally use an active domain interpretation.
Extensions
As in the case of the algebra, we now consider extensions of the calculus that can be
simulated by the core syntax just given.
The standard abbreviations used for relational calculus, such as the logical connectives → and ↔, can be incorporated into CALC^cv. Using these connectives, it is easy to see the nonminimality of the calculus: Each literal x ⊆ y can be replaced by ∀z(z ∈ x → z ∈ y), where z is a fresh variable.
Arity In the core calculus, only relation atoms of the form R(t) are permitted. Suppose that the sort of R is ⟨A_1 : τ_1, . . . , A_n : τ_n⟩ for some n. Then R(u_1, . . . , u_n) is a shorthand for

∃y(R(y) ∧ y.A_1 = u_1 ∧ · · · ∧ y.A_n = u_n),

where y is a new variable. In particular, if R_0 is a relation of sort ⟨⟩ (n = 0), observe that the only value of that sort is the empty tuple ⟨⟩. Thus a variable y of that sort has only one possible value, namely ⟨⟩. Thus for such y, we can use the following expression:

R_0(⟨⟩) for ∃y(R_0(y)).
Constructed Terms Next we allow constructed terms in the calculus, such as

{x, b}, x.A.C, ⟨B_1 : a, B_2 : y⟩.

More formally, if t_1, . . . , t_k are terms and B_1, . . . , B_k are distinct attributes, then ⟨B_1 : t_1, . . . , B_k : t_k⟩ is a term. Furthermore, if the t_i are of the same sort, {t_1, . . . , t_k} is a term; and if t_1 is a tuple term with attribute C, then t_1.C is a term. The sorts of terms are defined in the obvious way. Note that a term may have several sorts because of the empty set. (We ignore this issue here.)
The use of constructed terms can be viewed as syntactic sugaring. For instance, suppose that the term {a, y} occurs in a formula φ. Then φ is equivalent to

∃x(φ′ ∧ ∀z(z ∈ x ↔ (z = a ∨ z = y))),

where φ′ is obtained from φ by replacing the term {a, y} by x (a fresh variable).
Complex Terms We can also view relations as terms. For instance, if R is a relation of sort ⟨A, B⟩, then R can be used in the language as a term of sort {⟨A, B⟩}. We may then consider literals such as x ∈ R, which is equivalent to R(x); or more complex ones such as S ∈ T, which essentially means

∃y(T(y) ∧ ∀x(x ∈ y ↔ S(x))).

The previous extension is based on the fact that a relation (in our context) can be viewed as a complex value. This is again due to the stronger sort system. Now the answer to a query q is also a complex value. This suggests considering the use of queries as terms of the language. We consider this now: A query q ≡ {y | φ(y)} is a legal term that can be used in the calculus like any other term. More generally, we allow terms of the form

{y | φ(y, y_1, . . . , y_n)},

where the free variables of φ are y, y_1, . . . , y_n. Intuitively, we obtain queries by providing bindings for y_1, . . . , y_n. We will call such an expression a parameterized query and denote it q(y_1, . . . , y_n) (where y_1, . . . , y_n are the parameters).
For instance, suppose that a formula liked(x, y) computes the films y that person x liked; and another one saw(x, y) computes those that x has seen. The set of persons who liked all the films that they saw is given by

{ x | {y | liked(x, y)} ⊇ {y | saw(x, y)} }.
The following form of literals will play a particular role when we study safety for this calculus:

x = {y | φ(y, y_1, . . . , y_n)},
x′ ∈ {y | φ(y, y_1, . . . , y_n)}, and
x″ ⊆ {y | φ(y, y_1, . . . , y_n)},

where y is a free variable of φ. Like the previous extensions, the parameterized queries can be viewed simply as syntactic sugaring. For instance, the three last formulas are, respectively, equivalent to

∀y(y ∈ x ↔ φ),
∃y(x′ = y ∧ φ), and
∀y(y ∈ x″ → φ).
In the following sections, we (informally) call extended calculus the calculus consisting of CALC^cv extended with the abbreviations described earlier (such as constructed and complex terms and, notably, parameterized queries).
20.4 Examples
We illustrate the previous two sections with a series of examples. The queries in the
examples apply to schema {R, S} with
sort(R) = ⟨A : dom, A′ : dom⟩,
sort(S) = ⟨B : dom, B′ : {dom}⟩.
For each query, we give an algebraic and a calculus expression.
Example 20.4.1 The union of R and a set of two constant tuples is given by
{r | R(r) ∨ r = ⟨A : 3, A′ : 5⟩ ∨ r = ⟨A : 0, A′ : 0⟩}

or

R ∪ {⟨A : 3, A′ : 5⟩, ⟨A : 0, A′ : 0⟩}.
Example 20.4.2 The selection of the tuples from S, where the first component is a member of the second component, is obtained with

{s | S(s) ∧ s.B ∈ s.B′}  or  σ_{B∈B′}(S).
Example 20.4.3 The (classical) cross-product of R and S is the result of

{t | ∃r, s(R(r) ∧ S(s) ∧ t = ⟨A : r.A, A′ : r.A′, B : s.B, B′ : s.B′⟩)}

or

π_{AA′BB′}(σ_{A=A″.A}(σ_{A′=A″.A′}(σ_{B=B″.B}(σ_{B′=B″.B′}(q))))),

where q is

tup_create_{A,A′,B,B′,A″,B″}(tup_destroy(π_A(R)),
    tup_destroy(π_{A′}(R)),
    tup_destroy(π_B(S)),
    tup_destroy(π_{B′}(S)), R, S).
Example 20.4.4 The join of R and S on A = B. This query is the composition of the cross-product of Example 20.4.3 with a selection. In Example 20.4.3, let the formula describing the cross-product be φ_3 and let (R × S) be the algebraic expression. Then the (A = B) join of R and S is expressed by

{t | φ_3(t) ∧ t.A = t.B}  or  σ_{A=B}(R × S).
Example 20.4.5 The renaming of the attributes of R to A_1, A_2 is obtained in the calculus by

{t | ∃r(R(r) ∧ t.A_1 = r.A ∧ t.A_2 = r.A′)}

with t of sort ⟨A_1 : dom, A_2 : dom⟩. In the algebra, it is given by

π_{A_1A_2}(σ_{A_0.A=A_1}(σ_{A_0.A′=A_2}(
    tup_create_{A_0,A_1,A_2}(R, tup_destroy(π_A(R)), tup_destroy(π_{A′}(R)))))).
Example 20.4.6 Flattening S means producing a set of flat tuples, each of which contains the first component of a tuple of S and one of the elements of the second component. This is the unnest operation unnest_{B′}(S) in the extended algebra, or in the calculus

{t | ∃s(S(s) ∧ t.B = s.B ∧ t.C ∈ s.B′)},

where t is of sort ⟨B, C⟩. In the core algebra, this is slightly more complicated. We first obtain the set of values occurring in the B′ sets using

E_1 = tup_create_C(set_destroy(tup_destroy(π_{B′}(S)))).

We can next compute (E_1 × S) (using the same technique as in Example 20.4.3). Then the desired query is given by

π_{BC}(σ_{C∈B′}(E_1 × S)).

Flattening can be extended to sorts with arbitrary nesting depth.
Example 20.4.7 The next example is a selection using ⊆. Consider a relation T of sort ⟨C : {dom}, C′ : {dom}⟩. We want to express the query

{t | T(t) ∧ t.C ⊆ t.C′}

in the algebra. We do this in stages:

F_1 = σ_{C″∈C}(T × tup_create_{C″}(set_destroy(tup_destroy(π_C(T))))),
F_2 = σ_{C″∈C′}(F_1),
F_3 = F_1 − F_2,
F_4 = T − π_{CC′}(F_3).

Observe that

1. A tuple ⟨C : U, C′ : V, C″ : u⟩ is in F_1 if ⟨C : U, C′ : V⟩ is in T and u is in U.
2. A tuple ⟨C : U, C′ : V, C″ : u⟩ is in F_2 if ⟨C : U, C′ : V⟩ is in T and u is in U and in V.
3. A tuple ⟨C : U, C′ : V, C″ : u⟩ is in F_3 if ⟨C : U, C′ : V⟩ is in T and u is in U − V.
4. A tuple ⟨C : U, C′ : V⟩ is in F_4 if it is in T and there is no u in U − V (i.e., U ⊆ V).
Example 20.4.8 This example illustrates the use of nesting and of sets. Consider the
algebraic query
nest_{C=(A)} ∘ nest_{C′=(A′)} ∘ σ_{C=C′} ∘ unnest_C ∘ unnest_{C′}(R).

It is expressed in the calculus by

{⟨x, y⟩ | ∃u(x ∈ u ∧ y ∈ u
    ∧ u = {x′ | R(x′, y)}
    ∧ u = {y′ | {x′ | R(x′, y′)} = u})}.
A consequence of Theorem 20.7.2 is that this query is expressible in relational calculus or
algebra. It is a nontrivial exercise to obtain a relational query for it. (See Exercise 20.24.)
Example 20.4.9 Our last example highlights an important difference between the flat relational calculus and CALC^cv. As shown in Proposition 17.2.3, the flat calculus cannot express the transitive closure of a binary relation. In contrast, the following CALC^cv query does:

{y | ∀x(closed(x) ∧ contains_R(x) → y ∈ x)},

where

closed(x) ≡ ∀u, v, w(⟨A : u, A′ : v⟩ ∈ x ∧ ⟨A : v, A′ : w⟩ ∈ x → ⟨A : u, A′ : w⟩ ∈ x);
contains_R(x) ≡ ∀z(R(z) → z ∈ x);
sort(x) = {sort(R)}, sort(y) = sort(z) = sort(R); and
sort(u) = sort(v) = sort(w) = dom.

Intuitively, the formula specifies the set of pairs y such that y belongs to each binary relation x containing R and transitively closed. This construction will be revisited in Section 20.6.
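The semantics of this query (intersect every transitively closed relation over the active domain that contains R) can be executed literally. The brute-force Python sketch below is illustrative only; it deliberately enumerates exponentially many candidate relations, matching the expense discussed for powerset-based evaluation in Section 20.6.

```python
from itertools import chain, combinations, product

def tc_by_intersection(r):
    """Transitive closure of r as the intersection of all transitively
    closed binary relations over adom(r) that contain r."""
    dom = {a for pair in r for a in pair}
    pairs = list(product(dom, repeat=2))
    # every candidate relation x of sort {sort(R)} -- exponentially many
    candidates = chain.from_iterable(
        combinations(pairs, k) for k in range(len(pairs) + 1))
    closed_supersets = []
    for cand in candidates:
        x = set(cand)
        closed = all((u, w) in x
                     for (u, v) in x for (v2, w) in x if v == v2)
        if closed and r <= x:
            closed_supersets.append(x)
    return set.intersection(*closed_supersets)
```

The full relation dom × dom is always closed and contains R, so the intersection is well defined; it is exactly the usual transitive closure.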
20.5 Equivalence Theorems
This section presents three results that compare the complex value algebra and calculus.
First we establish the equivalence of the algebra and the domain-independent calculus.
Next we develop a syntactic safeness condition for the calculus and show that it does not
reduce expressive power. Finally we develop a natural syntactic condition on CALC^cv that yields a subset equivalent to ALG^cv−.
Our first result is as follows:

Theorem 20.5.1 The algebra and the domain-independent calculus for complex values are equivalent.

In the sketch of the proof, we present a simulation of the core algebra by the extended calculus and the analogous simulation in the opposite direction. An important component of this proof, namely that the extended algebra (calculus) is no stronger than the core algebra (calculus), is left for the reader (see Exercises 20.6, 20.7, 20.8, 20.10, and 20.11).
From Algebra to Calculus
We now show that for each algebra query, there is a domain-independent calculus query
equivalent to it.
Let q be a named algebra query. We construct a domain-independent query {x | φ_q} equivalent to q. The formula φ_q is constructed by induction on subexpressions of q. For a subexpression E of q, we define φ_E as follows:

(a) E is R for some R ∈ R: φ_E is R(x).
(b) E is {a}: φ_E is x = a.
(c) E is σ_γ(E_1): φ_E is φ_{E_1}(x) ∧ ψ, where ψ is
    x.A_i = x.A_j if γ is A_i = A_j;  x.A_i = a if γ is A_i = a;
    x.A_i ∈ x.A_j if γ is A_i ∈ A_j;  x.A_i = x.A_j.C if γ is A_i = A_j.C.
(d) E is π_{A_{i_1},...,A_{i_k}}(E_1): φ_E is
    ∃y(x = ⟨A_{i_1} : y.A_{i_1}, . . . , A_{i_k} : y.A_{i_k}⟩ ∧ φ_{E_1}(y)).
(e) For the basic set operations, we have

    φ_{E_1∪E_2}(x) = φ_{E_1}(x) ∨ φ_{E_2}(x),
    φ_{E_1∩E_2}(x) = φ_{E_1}(x) ∧ φ_{E_2}(x),
    φ_{E_1−E_2}(x) = φ_{E_1}(x) ∧ ¬φ_{E_2}(x).

(f) E is powerset(E_1): φ_E is x ⊆ {y | φ_{E_1}(y)}.
(g) E is set_destroy(E_1): φ_E is ∃y(x ∈ y ∧ φ_{E_1}(y)).
(h) E is tup_destroy(E_1): φ_E is ∃y(⟨A : x⟩ = y ∧ φ_{E_1}(y)), where A is the name of the field (of y).
(i) E is tup_create_{A_1,...,A_n}(E_1, . . . , E_n): φ_E is
    ∃y_1, . . . , y_n(x = ⟨A_1 : y_1, . . . , A_n : y_n⟩ ∧ φ_{E_1}(y_1) ∧ · · · ∧ φ_{E_n}(y_n)).
(j) E is set_create(E_1): φ_E is x = {y | φ_{E_1}(y)}.
We leave the verification of this construction to the reader (see Exercise 20.13). The domain independence of the obtained calculus query follows from the fact that algebra queries are domain independent.
From Calculus to Algebra
We now show that for each domain-independent query, there is a named algebra query
equivalent to it.
Let q = {x | φ} be a domain-independent query over R. As in the flat relational case, we assume without loss of generality that associated with each variable x occurring in q (and also variables used in the following proof) is a unique, distinct attribute A_x in att. We use the active domain interpretation for the query, denoted as before with a subscript adom.
The crux of the proof is to construct, for each subformula ψ of φ, an algebra formula E_ψ that has the property that for each input I,

E_ψ(I) = { y | ∃x_1, . . . , x_n(y = ⟨A_{x_1} : x_1, . . . , A_{x_n} : x_n⟩ ∧ ψ(x_1, . . . , x_n)) }_adom(I),

where x_1, . . . , x_n is a listing of free(ψ).
This construction is accomplished in three stages.
Computing the Active Domain The first step is to construct an algebra query E_adom having sort dom such that on input instance I, E_adom(I) = adom(q, I). The construction of E_adom is slightly more intricate than the similar construction for the relational case. We prove by induction that for each sort τ, there exists an algebra operation F_τ that maps a set I of values of sort τ to adom(I). This induction was not necessary in the flat case because the base relations had fixed depth. For the base case (i.e., τ = dom), it suffices to use for F_τ an identity operation (e.g., tup_create_A followed by tup_destroy). For the induction, the following cases occur:
1. τ is ⟨A_1 : τ_1, . . . , A_n : τ_n⟩ for n ≥ 2. Then F_τ is

    F_{⟨A_1:τ_1⟩}(π_{A_1}) ∪ · · · ∪ F_{⟨A_n:τ_n⟩}(π_{A_n}).

2. τ is ⟨A_1 : τ_1⟩. Then F_τ is F_{τ_1}(tup_destroy).
3. τ is {τ_1}. Then F_τ is F_{τ_1}(set_destroy).

Now consider the schema R. Then for each R in R, F_{sort(R)} maps a relation I over R to adom(I). Thus adom(q, I) can be computed with the query

E_adom = F_{sort(R_1)}(R_1) ∪ · · · ∪ F_{sort(R_m)}(R_m) ∪ {a_1} ∪ · · · ∪ {a_p},

where R_1, . . . , R_m is the list of relations in R and a_1, . . . , a_p is the list of elements occurring in q.
Constructing Complex Values In the second stage, we prove by induction that for each sort τ, there exists an algebra query G_τ that constructs the set of values I of sort τ such that adom(I) ⊆ adom(q, I). For τ = dom, we can use E_adom. For the induction, two cases occur:

1. τ is ⟨A_1 : τ_1, . . . , A_n : τ_n⟩. Then G_τ is tup_create_{A_1,...,A_n}(G_{τ_1}, . . . , G_{τ_n}).
2. τ is {τ_1}. Then G_τ is powerset(G_{τ_1}).
Last Stage We now describe the last stage, an inductive construction of the queries E_ψ for subformulas ψ of φ. We assume without loss of generality that the logical connectives ∨ and ∀ do not occur in φ. The proof is similar to the analogous proof for the flat case. We also assume that relation atoms in φ do not contain constants or repeated variables. We only present the new case (the standard cases are left as Exercise 20.13). Let ψ be x ∈ y. Suppose that x is of sort τ, so y is of sort {τ}. The set of values of sort τ (or {τ}) within the active domain is returned by query G_τ, or G_{{τ}}. The query

σ_{A_x ∈ A_y}(tup_create_{A_x,A_y}(G_τ, G_{{τ}}))

returns the desired result.
Observe that with this construction, E_φ returns a set of tuples with a single attribute A_x. The query q is equivalent to tup_destroy(E_φ).
As we did for the relational model, we can define a variety of syntactic restrictions of the calculus that yield domain-independent queries. We consider such restrictions next.

Safe Queries

We now turn to the development of syntactic conditions, called safe range, that ensure domain independence. These conditions are reminiscent of those presented for relational calculus in Chapter 5. As we shall see, a variant of safe range, called strongly safe range, will yield a subset of CALC^cv, denoted CALC^cv−, that is equivalent to ALG^cv−.
We could define safe range on the core calculus. However, such a definition would be cumbersome. A much more elegant definition can be given using the extended calculus. In particular, we consider here the calculus augmented with (1) constructed terms and (2) parameterized queries.
Recall that intuitively, if a formula is safe range, then each variable is bounded, in the sense that it is restricted by the formula to lie within the active domain of the query or the input. We now define the notions of safe formulas and safe terms. To give these definitions, we define the set of safe-range variables of a formula using the following procedure, which returns either the symbol ⊥ (which indicates that some quantified variable is not bounded) or the set of free variables that are bounded. In this discussion, we consider only formulas in which universal quantifiers do not occur.
In the following procedure, if several rules are applicable, the one returning the largest set of safe-range variables (which always exists) is chosen.
procedure safe-range (sr)
input: a calculus formula φ
output: a subset of the free variables of φ or ⊥. (In the following, for each Z, ⊥ ∪ Z = ⊥ ∩ Z = ⊥ − Z = Z − ⊥ = ⊥.)
begin
(pred is a predicate in {=, ∈, ⊆})
if for some parameterized query {x | ψ} occurring as a term in φ, x ∉ sr(ψ) then
    return ⊥
case φ of
    R(t): sr(φ) = free(t);
    (t pred t′) ∧ ψ: if ψ is safe and free(t′) ⊆ free(ψ)
        then sr(φ) = free(t) ∪ free(ψ);
    t pred t′: if free(t′) = sr(t′) then sr(φ) = free(t′) ∪ free(t);
        else sr(φ) = ∅;
    φ_1 ∧ φ_2: sr(φ) = sr(φ_1) ∪ sr(φ_2);
    φ_1 ∨ φ_2: sr(φ) = sr(φ_1) ∩ sr(φ_2);
    ¬φ_1: sr(φ) = ∅;
    ∃x φ_1: if x ∈ sr(φ_1)
        then sr(φ) = sr(φ_1) − {x}
        else return ⊥
end;
We say that a formula φ is safe if sr(φ) = free(φ); and a query q is safe if its associated formula is safe.
It is important to understand how new sets are created in a safe manner. The next example illustrates two essential techniques for such creation.
Example 20.5.2 Let R be a relation of sort ⟨A, B⟩. The powerset of R can be obtained in a safe manner with the query

{x | x ⊆ {y | R(y)}}.

For {y | R(y)} is clearly a safe query (by the first case). Now letting t ≡ x, t′ ≡ {y | R(y)}, the formula is safe (by the third case).
Now consider the nesting of the B column of R. It is achieved by the following query:

{x | x = ⟨z, {y | R(z, y)}⟩ ∧ ∃y′(R(z, y′))}.

Let t ≡ x, t′ ≡ ⟨z, {y | R(z, y)}⟩, and ψ ≡ ∃y′(R(z, y′)). First note that sr(R(z, y)) contains y, so the parameterized query {y | R(z, y)} can be used safely. Next, the formula ψ is safe. Finally, the only free variable in t′ is z, which is also free in ψ. Thus x is safe range (by the second case) and the query is safe.
As detailed in Section 20.7, the complex value algebra and calculus can express mappings with complexity corresponding to arbitrarily many nestings of exponentiation. In contrast, as discussed in that section, the nested relation algebra ALG^cv−, which uses the nest operator but not powerset, has complexity in ptime. Interestingly, there is a minor variation of the safe-range condition that yields a subset of the calculus equivalent to ALG^cv−. Specifically, a formula is strongly safe range if it is safe range and the inclusion predicate ⊆ does not occur in it. In the previous example, the nesting is strongly safe range whereas powerset is not.
We now have the following:

Theorem 20.5.3
(a) The safe-range calculus, the domain-independent calculus, and ALG^cv coincide.
(b) The strongly safe-range calculus and ALG^cv− coincide.
Crux Consider (a). By inspection of the construction in the proof that ALG^cv ⊑ CALC^cv, each algebra query is equivalent to a safe-range calculus query. Clearly, each safe-range calculus query is a domain-independent calculus query. We have already shown that each domain-independent calculus query is an algebra query.
Now consider (b). Observe that in the proof that ALG^cv ⊑ CALC^cv, ⊆ is used only for powerset. Thus each query in ALG^cv− is a strongly safe-range query. Now consider a strongly safe-range query; we construct an equivalent algebra query. We cannot use the construction from the proof of the equivalence theorem, because powerset is crucial there for constructing complex domains. However, we can show that this can be avoided using the ranges of variables. (See Exercise 20.16.) More precisely, the brute force construction of the domain of variables using powerset is replaced by a careful construction based on the strongly safe-range restriction. The remainder of the proof stays unchanged.
Because of part (b) of the previous result, we denote the strongly safe-range calculus by CALC^cv−.
20.6 Fixpoint and Deduction
Example 20.4.9 suggests that the complex value algebra and calculus can simulate iteration. In this section, we examine iteration in the spirit of both fixpoint queries and datalog. In both cases, iteration does not increase the expressive power of the algebra or calculus. However, it allows us to express certain queries more efficiently.
Fixpoint for Complex Values
Languages with fixpoint semantics were considered in the context of the relational model to overcome limitations of relational algebra and calculus. In particular, we observed that transitive closure cannot be computed in relational calculus. However, as shown by Example 20.4.9, transitive closure can be expressed in the complex value algebra and calculus. Although transitive closure can be expressed in that manner, the use of powerset seems unnecessarily expensive. More precisely, it can be shown that any query in the complex value algebra and calculus that expresses transitive closure uses exponential space (assuming the straightforward evaluation of the query). In other words, the blowup caused by the powerset operator cannot be avoided. On the other hand, a fixpoint construct allows us to express transitive closure in polynomial space (and time). It is thus natural to develop fixpoint extensions of the calculus and algebra.
We can provide inflationary and noninflationary extensions of the calculus with recursion. As in the relational case, an inflationary fixpoint operator μ^+_T allows the iteration of a CALC^cv formula φ(T) up to a fixpoint. This essentially permits the inductive definition of relations, using calculus formulas. The calculus CALC^cv augmented with the inflationary fixpoint operator is defined similarly to the flat case (Chapter 14) and yields CALC^cv + μ^+. We only consider the inflationary fixpoint operator. (Exercise 20.19 explores the noninflationary version.)

Theorem 20.6.1 CALC^cv + μ^+ is equivalent to ALG^cv and CALC^cv.
The proof of this theorem is left for Exercise 20.18. It involves simulating a fixpoint in a manner similar to Example 20.4.9.
Before leaving the fixpoint extension, we show how powerset can be computed by iterating an ALG^cv− formula to a fixpoint. (We will see later that powerset cannot be computed in ALG^cv− alone.)
Example 20.6.2 Consider a relation R of sort dom (i.e., a set of atomic elements). The powerset of R is computed by {x | μ^+_T(φ(T))(x)}, where T is of sort {dom} and

φ(T)(y) ≡ [y = ∅ ∨ ∃x′, y′(R(x′) ∧ T(y′) ∧ y = y′ ∪ {x′})].
This formula is in fact equivalent to a query in ALG^cv−. (See Exercise 20.15.) For example, suppose that R contains {2, 3, 4}. The iteration of φ yields

J_0 = ∅
J_1 = φ(∅) = {∅}
J_2 = φ(J_1) = J_1 ∪ {{2}, {3}, {4}}
J_3 = φ(J_2) = J_2 ∪ {{2, 3}, {2, 4}, {3, 4}}
J_4 = φ(J_3) = J_3 ∪ {{2, 3, 4}},

and J_4 is a fixpoint and coincides with powerset({2, 3, 4}).
Datalog for Complex Values
We now briefly consider an extension of datalog to incorporate complex values. The basic result is that the extension is equivalent to the complex value algebra and calculus. We also consider a special grouping construct, which can be used for set construction in this context.
In the datalog extension considered here, the predicates ∈ and ⊆ are permitted. A rule is safe range if each variable that appears in the head also appears in the body, and the body is safe (i.e., the conjunction of the literals of the body is a safe formula). We assume henceforth that rules are safe. Stratified negation will be used. The language is illustrated in the following example.
Example 20.6.3 The input is a relation R of sort ⟨A, B : {⟨C, C′⟩}⟩. Consider the query defining an idb relation T, which contains the tuples of R, with the B-component replaced by its transitive closure. Let us assume that we have a ternary relation ins, where ins(w, y, z) is interpreted as "z is obtained by inserting w into y." We show later how to define this relation in the language. The program consists of the following rules:

S(x, y) ← R(x, y) (r1)
S(x, z) ← S(x, y), u ∈ y, v ∈ y, u.C′ = v.C, ins(⟨u.C, v.C′⟩, y, z) (r2)
S′(x, z) ← S(x, z), S(x, z′), z ⊆ z′, z ≠ z′ (r3)
T(x, z) ← S(x, z), ¬S′(x, z). (r4)

The first two rules compute in S pairs corresponding to pairs from R, such that the second component of a pair contains the corresponding component from the pair in R and possibly additional elements derived by transitivity. Obviously, for each pair ⟨x, y⟩ of R, there is a pair ⟨x, z⟩ in S, such that z is the transitive closure of y, but there are other tuples as well. To answer the query, we need to select for each x the unique tuple ⟨x, z⟩ of S, where z is maximal.^2 The third rule puts into S′ tuples ⟨x, z⟩ such that z is not maximal for that x. The last rule then selects those that are maximal, using negation.

^2 We assume, for simplicity, that the first column of R is a key. It is easy to change the rules for the case when this does not hold.
We now show the program that defines ins for some given sort τ (the variables are of sort {τ} except for w, which is of sort τ):

super(w, y, z) ← w ∈ z, y ⊆ z
not-min-super(w, y, z) ← super(w, y, z), super(w, y, z′), z′ ⊆ z, z′ ≠ z
ins(w, y, z) ← super(w, y, z), ¬not-min-super(w, y, z)

Note that the program is sort specific only through its dependence on the sorts of the variables. The same program computes ins for another sort τ′, if we assume that the sort of w is τ′ and that of the other variables is {τ′}. Note also that the preceding program is not safe. To make it safe, we would have to use derived relations to range restrict the various variables.
We note that although we used ⊆ in the example as a built-in predicate, it can be expressed using membership and stratified negation.
The proof of the next result is omitted but can be reconstructed reasonably easily using
the technique of Example 20.6.3.
Theorem 20.6.4 A query is expressible in datalog
cv
with stratied negation if and only
if it is expressible in CALC
cv
.
The preceding language relies heavily on negation to specify the new sets. We could
consider more set-oriented constructs. An example is the grouping construct, which is
closely related to the algebraic nest operation. For instance, in the language LDL, the rule:
S(x, y) R(x, y)
groups in S, for each x, all the ys related to it in R (i.e., S is the result of the nesting of R
on the second coordinate).
The grouping construct can be used to simulate negation. Consider a query q whose input consists of two unary relations R, S not containing some particular element a and that computes R − S. Query q can be answered by the following LDL program:

Temp(x, a) ← R(x)
Temp(x, x) ← S(x)
T(x, ⟨y⟩) ← Temp(x, y)
Res(x) ← T(x, {a})

Note that for an x in R − S, we derive T(x, {a}); but for x in R ∩ S, we derive T(x, {x, a}) ≠ T(x, {a}) because a is not in R.
From the previous example, it is clear that programs with grouping need not be monotone. This gives rise to semantic problems similar to those of negation. One possibility, adopted in LDL, is to define the semantics of programs with grouping analogously to stratification for negation.
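The effect of these grouping rules can be checked with a small Python transcription; relation contents and the special element a are invented for illustration (a occurs in neither R nor S, as the text requires).

```python
a = "a"
R = {"x1", "x2"}
S = {"x2", "x3"}

# Temp(x, a) <- R(x)   and   Temp(x, x) <- S(x)
temp = {(x, a) for x in R} | {(x, x) for x in S}

# T(x, <y>) <- Temp(x, y): grouping collects, for each x, all its y's
T = {x: frozenset(y for (x2, y) in temp if x2 == x) for (x, _) in temp}

# Res(x) <- T(x, {a})
res = {x for x, ys in T.items() if ys == frozenset({a})}
```

Here x2 lies in both R and S, so it is grouped with {a, x2} ≠ {a} and correctly excluded from the result.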
20.7 Expressive Power and Complexity
This section presents two results. First the expressive power and complexity of ALG^cv/CALC^cv is established: it is the family of queries computable in hyperexponential time. Second, we consider the expressive power of ALG^cv−/CALC^cv− (i.e., in algebraic terms, the expressive power of permitting the nest operator but not powerset). Surprisingly, we show that the nest operator can be eliminated from ALG^cv− queries with flat input/output.
Complex Value Languages and Elementary Queries
We now characterize the queries in ALG^cv in terms of the set of computable queries in a certain complexity class. First the notion of computable query is extended to the complex value model in the straightforward manner. The complexity class of interest is the class of elementary queries, defined next.
The hyperexponential functions hyp_i for i in N are defined by

1. hyp_0(m) = m; and
2. hyp_{i+1}(m) = 2^{hyp_i(m)} for i ≥ 0.
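The definition transcribes directly; the values grow astronomically fast, so only tiny arguments are usable in practice.

```python
# hyp_0(m) = m;  hyp_{i+1}(m) = 2 ** hyp_i(m)
def hyp(i, m):
    return m if i == 0 else 2 ** hyp(i - 1, m)
```

For instance, hyp(2, 3) = 2^(2^3) = 256, while hyp(4, 2) already has nearly twenty thousand decimal digits.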
A query is an elementary query if it is a computable query and has hyperexponential time data complexity³ w.r.t. the database size. By database size we mean the amount of space it takes to write the content of the database using some natural encoding. Note that, for complex value databases, size can be very different from cardinality. For example, the database could consist of a single but very large complex value.
It turns out that a query is in ALG^cv/CALC^cv iff it is an elementary query.
Theorem 20.7.1 A query is in ALG^cv/CALC^cv iff it is an elementary query.
Crux It is trivial to see that each query in ALG^cv/CALC^cv is elementary. All operations can be evaluated in polynomial time in the size of their arguments except for powerset, which takes exponential time.
Conversely, let q be of complexity hyp_n. We show how to compute it in CALC^cv. Suppose first that an enumeration of adom(I) is provided in some binary relation succ. (We explain later how this is done.) We prove that q can then be computed in CALC^cv+μ+.

Let X_0 = adom(I) and for each i, X_i = powerset(X_{i−1}). Observe that for each X_i, we can provide an enumeration as follows: First succ provides the enumeration for X_0; and for each i, we define V <_i U for U, V in X_i if there exists x in U − V such that each element larger than x (under <_{i−1}) is in both or neither of U, V. Clearly, there exists a query in CALC^cv+μ+ that constructs X_n and a binary relation representing <_n.
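The inductive construction of the order can be sketched in Python for one level (from X_0 to X_1); the helper name and the final check, against the binary encoding of sets, are ours.

```python
from itertools import combinations

def lift_less(prev_enum):
    """prev_enum: list enumerating X_{i-1}; returns the predicate V <_i U."""
    rank = {x: k for k, x in enumerate(prev_enum)}
    def less(V, U):
        # V <_i U iff some x in U - V has every larger element (under the
        # previous order) in both or neither of U, V
        return any(all((y in U) == (y in V)
                       for y in rank if rank[y] > rank[x])
                   for x in U - V)
    return less

X0 = [1, 2, 3]                        # enumeration of X_0, as given by succ
less1 = lift_less(X0)                 # induced order on X_1 = powerset(X_0)
subsets = [frozenset(c) for r in range(4) for c in combinations(X0, r)]
# the induced order coincides with ordering sets by their binary encoding
ordered = sorted(subsets, key=lambda s: sum(2 ** X0.index(x) for x in s))
```

The comparison amounts to looking at the highest-ranked element on which the two sets differ, which is why it agrees with the binary encoding.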
Now we view each element of X_n as an atomic element. The input instance together with X_n and the enumeration can be seen as an ordered database with size on the order of hyp_n. Query q is now polynomial in this new (much larger) instance. Finally, we can easily extend to complex values the result from the flat case that CALC+μ+ can express qptime on ordered databases (Theorem 17.4.2). Thus CALC^cv+μ+ can also express all qptime queries on ordered complex value databases, so q can be computed in CALC^cv+μ+ using <_n on X_n. By Theorem 20.6.1, CALC^cv+μ+ is equivalent to CALC^cv, so there exists a CALC^cv query computing q if an (arbitrary) enumeration of the active domain is given in some binary relation succ.

³ We are concerned exclusively with the data complexity. Observe that when considering the union of hyperexponential complexities, time and space coincide.
To conclude the proof, it remains to remove the restriction on the existence of an enumeration of the active domain. Let φ′ be the formula obtained from φ by replacing

1. succ by some fresh variable y (the sort of y is set of pairs); and
2. each literal succ(t, t′) by ⟨t, t′⟩ ∈ y.

Then q can be computed by

∃y(ψ ∧ φ′),

where ψ is the CALC^cv formula stating that y is the representation in a binary relation of an enumeration of the active domain. (Observe that it is easy to state in CALC^cv that the content of a binary relation is an enumeration.)
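The parenthetical remark can be made concrete: a binary relation is an enumeration exactly when it forms a single successor chain visiting every element of the domain once. A Python sketch of that check (the predicate name is ours, not the book's):

```python
def is_enumeration(succ, domain):
    """succ: set of pairs; True iff succ enumerates domain as one chain."""
    nxt = dict(succ)
    # functional (no element has two successors) and exactly n - 1 edges
    if len(nxt) != len(succ) or len(succ) != len(domain) - 1:
        return False
    # exactly one element with no predecessor: the start of the chain
    starts = set(domain) - {b for (_, b) in succ}
    if len(starts) != 1:
        return False
    # walk the chain and check that it covers the whole domain
    x, seen = starts.pop(), set()
    while x in nxt and x not in seen:
        seen.add(x)
        x = nxt[x]
    seen.add(x)
    return seen == set(domain)
```

The connectivity walk matters: a relation with n − 1 functional, injective edges can still consist of a cycle plus a chain, which is not an enumeration.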
On the Power of the nest Operator
The set-height of a complex sort is the maximum number of set constructors in any branch of the sort. We can exhibit hierarchies of classes of queries in CALC^cv based on the set-height of the sorts of variables used in the query. For example, consider all queries that take as input a flat relational schema and produce as output a flat relation. Then for each n > 0, the family of CALC^cv queries using variables that have sorts with set-height n is strictly weaker than the family of CALC^cv queries using variables that have sorts with set-height n + 1. A similar hierarchy exists for ALG^cv, based on the sorts of intermediate types used. Intuitively, these results follow from the use of the powerset operator, which essentially provides an additional exponential amount of scratch paper for each additional level of set nesting.
The bottom of this hierarchy is simply relational calculus. Recall that ALG^cv− can use the nest operator but not the powerset operator. It is thus natural to ask, where do ALG^cv−/CALC^cv− (assuming flat input and output) lie relative to the relational calculus and the first level of the hierarchy? Rather surprisingly, it turns out that the nest operator alone does not increase expressive power. Specifically, we show now that with flat input and output, ALG^cv−/CALC^cv− is equivalent to relational calculus.
Theorem 20.7.2 Let q be a CALC^cv−/ALG^cv− query over a relational database schema R with output of relational sort S. Then there exists a relational calculus query q′ equivalent to q.
Crux The basic intuition underlying the proof is that with a flat input in CALC^cv− or ALG^cv−, each set constructed at an intermediate stage can be identified by a tuple of atomic values. In terms of ALG^cv−, the intuitive reason for this is that sets can be created only in two ways:

• by nest, which builds a relation whose nonnested coordinates form a key for the nested one, and
• by set_create, which can build only singleton sets.

Thus all created sets can be identified using some flat key of bounded length. The sets can then be simulated in the computation by their flat representations. The proof consists of

• providing a careful construction of the flat representation of the sets created in the computation, which reflects the history of their creation; and
• constructing a new query, equivalent to the original one, that uses only the flat representations of sets.

The details of the proof are omitted.
Observe that an immediate consequence of the previous result is that neither transitive closure nor powerset is expressible in ALG^cv−.
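The key-forming behavior of nest, on which the proof rests, is easy to observe in a small Python sketch (the data is invented):

```python
# nest on the second coordinate of a flat binary relation: the nonnested
# column acts as a key for the nested one, so each created set is named
# by a flat value -- the intuition behind the theorem.
R = {("a", 1), ("a", 2), ("b", 1)}

def nest(rel):
    groups = {}
    for x, y in rel:
        groups.setdefault(x, set()).add(y)
    return {(x, frozenset(ys)) for x, ys in groups.items()}

def unnest(rel):
    return {(x, y) for (x, ys) in rel for y in ys}
```

Unnesting after nesting recovers the original flat relation, which is what allows the sets to be simulated by their flat representations.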
Remark 20.7.3 The previous results focus on relational queries. The same technique can be used for nonflat inputs. An arbitrary input I can be represented by a flat database I_f of size polynomial in the size of the input. Now an arbitrary ALG^cv− query on I can be simulated by a relational query on I_f to yield a flat database representing the result. Finally the complex object result is constructed in polynomial time. This shows in particular that ALG^cv− is in ptime.
20.8 A Practical Query Language for Complex Values
We conclude our discussion of languages for complex values with a brief survey of a fragment of the query language O₂SQL supported by the commercial object-oriented database system O₂ (see Chapter 21). This fragment provides an elegant syntax for accessing and constructing deeply nested complex values, and it has been incorporated into a recent industrial standard for object-oriented databases.
For the first example we recall the query

(4.3) What are the address and phone number of the Le Champo?

Using the CINEMA database (Fig. 3.1), this query can be expressed in O₂SQL as

element select tuple ( t.address, t.phone )
        from t in Location
        where t.name = "Le Champo"
The select-from-where clause has semantics analogous to those for SQL. Unlike SQL, the select part can specify an essentially arbitrary complex value, not just tuples. A select-from-where clause returns a set⁴; the keyword element here is a desetting operator that returns a runtime error if the set does not have exactly one element.
The next example illustrates how O₂SQL can work inside nested structures. Recall the complex value shown in Fig. 20.2, which represents a portion of the CINEMA database. Let the full complex value be named Films. The following query returns all movies for which the director does not participate as an actor.

select m.Title
from f in Films,
     m in f.Movies
where f.Director not in select a
                        from a in m.Actors
O₂SQL also provides a mechanism for collapsing nested sets. Again using the complex value Films of Fig. 20.2, the following gives the set of all directors that have not acted in any Hitchcock film.

select f.Director
from f in Films
where f.Director not in flatten select m.Actors
                                from g in Films,
                                     m in g.Movies
                                where g.Director = "Hitchcock"

Here the inner select-from-where clause returns a set of sets of actors. The keyword flatten has the effect of forming the union of these sets to yield a set of actors.
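For readers more comfortable with comprehensions, here are rough Python analogues of the last two queries, over an invented toy value shaped like Films (the field names follow the text; the data is ours):

```python
films = [
    {"Director": "Hitchcock",
     "Movies": [{"Title": "Psycho", "Actors": {"Perkins", "Leigh"}}]},
    {"Director": "Allen",
     "Movies": [{"Title": "Manhattan", "Actors": {"Allen", "Keaton"}}]},
]

# movies whose director does not appear among their actors
titles = [m["Title"]
          for f in films for m in f["Movies"]
          if f["Director"] not in m["Actors"]]

# 'flatten': the union of the sets of actors of Hitchcock's movies
hitchcock_actors = set().union(*(m["Actors"]
                                 for g in films if g["Director"] == "Hitchcock"
                                 for m in g["Movies"]))

# directors that have not acted in any Hitchcock film
directors = [f["Director"] for f in films
             if f["Director"] not in hitchcock_actors]
```

The nested generator plus `set().union(*...)` plays the role of flatten: it folds a set of sets of actors into a single set of actors.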
We conclude with an illustration of how O₂SQL can be used to construct a deeply nested complex value. The following query builds, from the complex value Films of Fig. 20.2, a complex value of the same type that holds information about all movies for which the director does not serve as an actor.

select tuple ( Director: f.Director,
               Movies: select tuple ( Title: m.Title,
                                      Actors: select a
                                              from a in m.Actors )
                       from m in f.Movies
                       where f.Director not in m.Actors )
from f in Films
⁴ In the full language O₂SQL, a list or bag might also be returned; we do not discuss that here. Furthermore, we do not include the keyword unique in our queries, although technically it should be included to remove duplicates from answer sets.
Bibliographic Notes
The original proposal for generalizing the relational model to allow entries in relations to be sets is often attributed to Makinouchi [Mak77]. Our presentation is strongly influenced by [AB88]. An extensive coverage of the field can be found in [Hul87]. The nested relation model is studied in [JS82, TF86, RKS88]. The V-relation model is studied in [BRS82, AB86, Ver89], and the essentially equivalent partition normal form (PNF) nested relation model is studied in [RKS88]. The connection of the PNF nested relations with dependencies has also been studied (e.g., in [TF86, OY87]). References [DM86a, DM92] develop a while-like language that expresses all computable queries (in the sense of [CH80b]) over directories; these are database structures that are essentially equivalent to nested relations.
There have been many proposals of algebras. In general, the earlier ones have essentially the power of ALG^cv− (due to obvious complexity considerations). The powerset operation was first proposed for the Logical Data Model of [KV84, KV93b].
The calculus presented in this chapter is based on Jacobs's calculus [Jac82]. This original proposal allowed noncomputable queries [Var83]. We use in this chapter a computable version of that calculus that is also used (with minor variations) in [KV84, KV93b, AB88, RKS88, Hul87].

Parameterized queries are close to the commonly used mathematical concept of set comprehension.
The equivalence of the algebra and the calculus has been shown in [AB88]. An equivalence result for a more general model had been previously given in [KV84, KV93b]. The equivalence result is preserved with oracles. In particular, it is shown in [AB88] that if the algebra and the calculus are extended with an identical set of oracles (i.e., sorted functions that are evaluated externally), the equivalence result still holds.
The strongly safe-range calculus, and the equivalence of ALG^cv− and CALC^cv−, are based on [AB88].
The fact that transitive closure can be computed in the calculus was noted in [AB88]. The result that any algebra query computing transitive closure requires exponential space (with the straightforward evaluation model) was shown in [SP94]. The equivalence between the calculus and various rule-based languages is from [AB88]. In the rule-based paradigm, nesting can be expressed in many ways. A main difference between various proposals of logic programming with a set construct is in their approach to nesting: grouping in LDL [BNR+87], data functions in COL [AG91], and a form of universal quantification in [Kup87]. In [Kup88], equivalence of various rule-based languages is proved. In [GG88], it is shown that various programming primitives are interchangeable: powerset, fixpoint, various iterators.
The correspondence between ALG^cv/CALC^cv queries and elementary queries is studied in [HS93, KV93a]. Hierarchies of classes of queries based on the level of set nesting are considered in [HS93, KV93a]. Related work is presented in [Lie89a]. Exact complexity characterizations are obtained with fixpoint, which is no longer redundant when the level of set nesting is bounded [GV91].
Theorem 20.7.2 is from [PG88], which uses a proof based on a strongly safe calculus. The proof of Theorem 20.7.2 outlined in this chapter suggests a strong connection between ALG^cv− and the V-relation model.
Reference [BTBW92] introduces a rich family of languages for complex objects,
extended to include lists and bags, that is based on structural recursion. One language in this
family corresponds to the nested algebra presented in this chapter. Using this, an elegant
family of generalizations of Theorem 20.7.2 is developed in [Won93].
An extension of complex values, called formats [HY84], includes a marked union construct in addition to tuple and finitary set. Abstract notions of relative information capacity are developed there; for example, it can be shown that two complex value types have equivalent information capacity iff they are isomorphic.
Exercises
Exercise 20.1 (V-relations) Consider the schema R of sort

⟨A, B : {⟨C, D⟩}⟩.

Furthermore, we impose the fd A → B (more precisely, the generalization of a functional dependency). (a) Prove that for each instance I of R, the size of I is bounded by a polynomial in |adom(I)|. (b) Show how the same information can be naturally represented using two flat relations. (One suffices with some coding.) (c) Formalize the notion of V-relation of Section 20.1 and generalize the results of (a) and (b).
Exercise 20.2 Consider a (flat) relation R of sort

name age address car child_name child_age

and the multivalued dependency name age address →→ car. Prove that the same information can be stored in a complex value relation of sort

⟨name, age, address, cars : {dom}, children : {⟨child_name, child_age⟩}⟩

Discuss the advantages of this alternative representation. (In particular, show that for the same data, the size of the instance in the second representation is smaller. Also consider update anomalies.)
Exercise 20.3 Consider the value

{⟨A : a, B : ⟨A : {a, b}, B : ⟨A : a, C : ⟨⟩⟩⟩⟩,
 ⟨A : a, B : ⟨A : {}, B : ⟨A : a, C : ⟨⟩⟩⟩⟩}.

Show how to construct it in the core algebra from {a} and {b}.
Exercise 20.4 Prove that for each complex value relation I, there exists a constant query in
the core algebra returning I.
Exercise 20.5 Let R be a database schema consisting of a relation R of sort

⟨A : dom, B : ⟨A : {dom}, B : ⟨A : dom, C : ⟨⟩⟩⟩⟩;

and let τ = {⟨A : dom, B : {{dom}}⟩}.

(a) Give a query computing for each I over R, adom(I).
(b) Give a query computing the set of values J of sort τ such that adom(J) ⊆ adom(I).
Exercise 20.6 Prove that set_create can be expressed using the other operations of the core
algebra. Hint: Use powerset.
Exercise 20.7 Formally define the following operations: (a) renaming, (b) singleton, (c) cross-product, and (d) join. In each case, prove that the operation is expressible in ALG^cv. Which of these can be expressed without powerset?
Exercise 20.8 (Nest, unnest)

(a) Show that nest is expressible in ALG^cv.
(b) Show that unnest is expressible in ALG^cv without using the powerset operator.
(c) Prove that unnest_A is a right inverse of nest_{A=(A_1,...,A_k)} and that unnest_A has no right inverse.
Exercise 20.9 (Map) The operation map_{C,q} is applicable to relations of sort τ where τ is of the form {⟨C : {τ′}, . . .⟩} and q is a query over relations of sort τ′. For instance, let

I = {⟨C : I_1, C′ : J_1⟩, ⟨C : I_2, C′ : J_2⟩, ⟨C : I_3, C′ : J_3⟩}.

Then

map_{C,q}(I) = {⟨C : q(I_1), C′ : J_1⟩, ⟨C : q(I_2), C′ : J_2⟩, ⟨C : q(I_3), C′ : J_3⟩}.

(a) Give an example of map and show how the query of this example can be expressed in ALG^cv.
(b) Give a formal definition of map and prove that the addition of map does not change the expressive power of the algebra.
Exercise 20.10 Show how to express

{x | {y | liked(x, y)} = {y | saw(x, y)}}

in the core calculus.
Exercise 20.11 The calculus is extended by allowing terms of the form z ∪ z′ and z ∩ z′ for each set term z, z′ of identical sort. Prove that this does not modify the expressive power of the language. More generally, consider introducing in the calculus terms of the form q(t_1, . . . , t_n), where q is an n-ary algebraic operation and the t_i are set terms of appropriate sort.
Exercise 20.12 Give five queries on the CINEMA database expressed in ALG^cv. Give the same queries in CALC^cv.
Exercise 20.13 Complete the proof that ALG^cv ⊑ CALC^cv for Theorem 20.5.1. Complete the proof of Last Stage for Theorem 20.5.1.
Exercise 20.14 This exercise elaborates the simulation of CALC^cv by ALG^cv presented in the proof of Theorem 20.5.1. In particular, give the details of

(a) the construction of E_adom,
(b) the construction of G_φ for each φ,
(c) the last stage of the construction.
Exercise 20.15 Show that the query in Example 20.6.2 is strongly safe range (e.g., give a query in ALG^cv or CALC^cv equivalent to it).
Exercise 20.16 Show that every strongly safe-range query is in ALG^cv− [one direction of (b) of Theorem 20.5.3].
Exercise 20.17 Sketch a program expressing the query even in CALC^cv+μ+.
Exercise 20.18 Prove that CALC^cv+μ+ = ALG^cv.
Exercise 20.19 Define a while language based on ALG^cv. Show that it does not have more power than ALG^cv.
Exercise 20.20 Consider a query q whose input consists of two relations blue, red of sort ⟨A, B⟩ (i.e., consists of two graphs). Query q returns a relation of sort ⟨A, B : {dom}⟩ with the following meaning. A tuple ⟨x, X⟩ is in the result if x is a vertex and X is the set of vertexes y such that there exists a path from x to y alternating blue and red edges. Prove in one line that q is expressible in ALG^cv. Show how to express q in some complex value language of this chapter.
Exercise 20.21 Generalize the construction of Example 20.6.2 to prove Theorem 20.6.1.
Exercise 20.22 Datalog with stratified negation was shown to be weaker than datalog with inflationary negation. Is the situation similar for datalog^cv with negation?
Exercise 20.23 Exhibit a query that is not expressible in CALC^cv− but is expressible in CALC^cv, and one that is not expressible in CALC^cv.
Exercise 20.24 Give a relational calculus formula or algebra expression for the query in
Example 20.4.8.
Exercise 20.25 Recall the language while_N from Chapter 18. The language allows assignments of relational algebra expressions to relational variables, looping, and integer arithmetic. Let while_N^cv be like while_N, except that the relational algebra expressions are in ALG^cv. Prove that while_N^cv can express all queries from flat relations to flat relations.
21 Object Databases
Minkisi are complex objects clearly not the product of a momentary impulse. . . . To do justice to objects, a theory of them must be as complex as them.¹

Wyatt MacGaffey in Astonishment and Power
Alice: What is a Minkisi?
Sergio: It is an African word that translates somewhat like "things that do things."
Vittorio: It is art, religion, and magic.
Riccardo: Oh, this sounds to me very object oriented!
In this chapter, we provide a brief introduction to object-oriented databases (OODBs). A complete coverage of this new and exciting area is beyond the scope of this volume; we emphasize the new modeling features of OODBs and some of the preliminary theoretical research about them. On the one hand, we shall see that some of the most basic issues concerning OODBs, such as the design of query languages or the analysis of their expressive power, can be largely resolved using techniques already developed in connection with the relational and complex value models. On the other hand, the presence of new features (such as object identifiers) and methods brings about new questions and techniques.
As mentioned previously, the simplicity of the data structure in the relational model often hampers its use in many database applications. A relational representation can obscure the intention and intricate semantics of a complex data structure (e.g., for holding the design of a VLSI chip or an airplane wing). As we shall see, OODBs remedy this situation by borrowing a variety of data structuring constructs from the complex value model (Chapter 20) and from semantic data models (considered in Chapter 11). At a more fundamental level, the relational data model and all of the data models presented so far impose a sharp distinction between data storage and data processing: The DBMS provides data storage, but data processing is provided by a host programming language with a relatively simple language such as SQL embedded in it. OODBs permit the incorporation of behavioral portions of the overall data management application directly into the database schema, using methods in the sense of object-oriented programming languages.
This chapter begins with an informal presentation of the underlying constructs of OODBs. Next a formal definition for a particular OODB model is presented. Two directions of theoretical research into OODBs are then discussed. First a family of languages
¹ Reprinted with permission. © Smithsonian Institution Press 1993.
for data access is presented, with an emphasis on how the languages interact with the novel modeling constructs (of particular interest is the impact of generalizing the notion of complete query language to accommodate the presence of object identifiers, OIDs). Next two languages for methods are described. The first is an imperative language allowing us to specify methods with side effects.² The second language brings us to a functional perspective on methods and database languages and allows us to specify side-effect-free methods. In both cases, we present some results on type safety and expressive power. Checking type safety is generally undecidable; we identify a significant portion of the functional language, monadic method schemas, for which type safety is decidable. With respect to expressive power, the imperative language is complete in an extended sense formalized in this chapter. The functional language expresses precisely qptime on ordered inputs and so turns out to express the by-now-famous fixpoint queries. The chapter concludes with a brief survey of additional research issues raised by OODBs.
21.1 Informal Presentation
Object-oriented database models stem from a synthesis of three worlds: the complex value model, semantic database models, and object-oriented programming concepts. At the time of writing, there is not widespread agreement on a specific OODB model, nor even on what components are required to constitute an OODB model. In this section, we shall focus on seven important ingredients of OODB models:
1. objects and object identifiers;
2. complex values and types;
3. classes;
4. methods;
5. ISA hierarchies;
6. inheritance and dynamic binding;
7. encapsulation.
In this section, we describe and illustrate these interrelated notions informally; a more formal definition is presented in the following section. We will also briefly discuss alternatives.
As a running example for this discussion, we shall use the OODB schema specified in
Fig. 21.1. This schema is closely related to the semantic data model schema of Fig. 11.1,
which in turn is closely related to the CINEMA example of Chapter 3.
As discussed in Chapter 11, a significant shortcoming of the relational model is that it must use printable values, often called keys, to refer to entities or objects-in-the-world. As a simple example, suppose that the first and last names of a person are used as a key to identify that person. From a physical point of view, it is then cumbersome to refer to a person, because the many bytes of his or her name must be used. A more fundamental

² Methods are said to have side-effects if they cause updates to the database.
(* schema and base definitions *)
create schema PariscopeSchema;
create base PariscopeBase;

(* class definitions *)
class Person
    type tuple ( name: string, citizenship: string, gender: string );
class Director inherit Person
    type tuple ( directs: set ( Movie ) );
class Actor inherit Person
    type tuple ( acts_in: { Movie },
                 award: { tuple ( prize: string, year: integer ) } );
class Actor_Director inherit Director, Actor
class Movie
    type tuple ( title: string, actors: set ( Actor ),
                 director: Director );
class Theater
    type tuple ( name: string, address: string, phone: string );

(* name definitions *)
name Pariscope: set ( tuple ( theater: Theater, time: string, price: integer,
                              movie: Movie ) );
name Persons_I_like: set ( Person );
name Actors_I_like, Actors_you_like: set ( Actor );
name My_favorite_director: Director;

(* method definitions *)
method get_name in class Person : string
    { if (gender = "male")
          return "Mr. " + self.name;
      else
          return "Ms. " + self.name }
method get_name in class Director : string
    { return ( "Director " + self.name ) };
method get_name in class Actor_Director : string
    { return ( "Director " + self.name ) };
(* we assume here that + denotes a string concatenation operator *)

Figure 21.1: An OODB Schema
problem arises if the person changes his or her name (e.g., as the result of marriage). When performing this update, conceptually there is a break in the continuity in the representation of the person. Furthermore, care must be taken to update all tuples (typically arising in a number of different relations) that refer to this person, to reflect the change of name.

Following the spirit of semantic data models, OODB models permit the explicit representation of physical and conceptual objects through the use of object identifiers (OIDs). Conceptually, a unique OID is assigned to each object that is represented in the database, and this association between OID and object remains fixed, even as attributes of the object (such as name or age) change in value. The use of objects and OIDs permits OODBs to share information gracefully; a given object o is easily shared by many other objects simply by referencing the OID of o. This is especially important in the context of updates; for example, the name of a person object o need be changed in only one place even if o is shared by many parts of the database.
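This update-in-one-place behavior can be mimicked in Python, where object identity plays the role of an OID (the class and data below are invented for illustration):

```python
# Sharing via object identity: updating the shared object once is
# immediately visible to every structure that references it.
class Person:
    def __init__(self, name):
        self.name = name

alice = Person("Alice Smith")
movie1 = {"title": "M1", "director": alice}   # both movies reference
movie2 = {"title": "M2", "director": alice}   # the very same object

alice.name = "Alice Jones"                    # one update suffices
```

Had the director been stored as a printable key (the name string) in each movie, every copy would have to be found and rewritten.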
In an OODB, a complex value is associated with each object. This complex value may
involve printables and/or OIDs (i.e., references to the same or other objects). For example,
each object in the class Movie in Fig. 21.1 has an associated triple whose second coordinate
contains a set of OIDs corresponding to actors. In this section, we focus on complex values
constructed using the tuple and set construct. In practical OODB models, other constructs
are also supported (including, for example, bags and lists). Some commercial OODBs are
based on an extension of C++ that supports persistence; in these models essentially any
C++ structure can serve as the value associated with an object.
Objects that have complex values with the same type may be grouped into classes, as happens in semantic data models. In the running example, these include Person, Director, and Movie. Classes also serve as a natural focal point for associating some of the behavioral (or procedural) components of a database application. This is accomplished by associating with each class a family of methods for that class. Methods might be simple (e.g., producing the name of a person) or arbitrarily complex (e.g., displaying a representation of an object to a graphical interface or performing a stress analysis of a proposed wing design). A method has a name, a signature, and an implementation. The name and signature serve as an external interface to the method. The implementation is typically written in a (possibly extended) programming language such as C or C++. The choice of implementation language is largely irrelevant and is generally not considered to be part of the data model.
As with semantic models, OODB models permit the organization of classes into a hierarchy based on what have been termed variously ISA, specialization, or class-subclass relationships. The term hierarchy is used loosely here: In many cases any directed acyclic graph (DAG) is permitted. In Fig. 21.1 the ISA hierarchy has Director and Actor as (immediate) specializations of Person and Actor_Director as a specialization of both Director and Actor. Following the tradition of object-oriented programming languages, a virtual class any is included that serves as the unique root of the ISA hierarchy.
In OODB models, there are two important implications of the statement that class c′ is a subclass of c. First it is required that the complex value type associated with c′ be a subtype (in the sense formally defined later) of the complex value type associated with c. Second it is required that if there is a method with name m associated with c, then there is also a method with name m associated with c′. In some cases, the implementation (i.e., the actual code) of m for c′ is identical to that for c; in this case the code of m for c′ need not be explicitly specified because it is inherited from c. In other cases, the implementation of m for c′ is different from that for c; in which case we say that the implementation of m for c′ overrides the implementation of m for c. (See the different implementations for method get_name in Fig. 21.1.) The determination of what implementation is associated with a given method name and class is called method resolution. A method is invoked with respect to an object o, and the class to which o belongs determines which implementation is to be used. This policy is called dynamic binding. As we shall see, the interaction of method calls and dynamic binding in general makes type checking for OODB schemas undecidable. (It is undecidable to check whether such a schema would lead to a runtime type error; on the other hand, it is clearly possible to find decidable sufficient conditions that will guarantee that no such error can arise.)
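Dynamic binding is exactly the dispatch rule of object-oriented programming languages. A Python sketch loosely mirroring the get_name overriding of Fig. 21.1 (the class bodies are a simplification for illustration, not the chapter's formal model):

```python
class Person:
    def __init__(self, name, gender):
        self.name, self.gender = name, gender
    def get_name(self):
        return ("Mr. " if self.gender == "male" else "Ms. ") + self.name

class Director(Person):
    def get_name(self):               # overrides Person.get_name
        return "Director " + self.name

p = Person("Alma", "female")
d = Director("Alfred", "male")
# the runtime class of the receiver, not its static type, selects the
# implementation: this is dynamic binding
names = [o.get_name() for o in (p, d)]
```

Both calls in the comprehension look identical at the call site; method resolution picks Person.get_name for p and the overriding Director.get_name for d.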
In the particular OODB model presented here, both values (in the style of complex
values) and objects are supported. For example, in Fig. 21.1 a persistent set of triples
called Pariscope is supported (see also Fig. 11.1). The introduction of values not directly
associated with OIDs is a departure from the tradition of object-oriented programming, and
not all OODBs in the literature support it. However, in databases the use of explicit values
often simplifies the design and use of a schema. Their presence also facilitates expressing
queries in a declarative manner.
The important principle of encapsulation in object orientation stems from the field of abstract data types. Encapsulation is used to provide a sharp boundary between how information about objects is accessed by database users and how that information is actually stored and provided. The principle of encapsulation is most easily understood if we distinguish two categories of database use: dba mode, which refers to activities unique to database administrators (including primarily creating and modifying the database schema), and user mode, which refers to activities such as querying and updating the actual data in the database. Of course, some users may operate in both of these modes on different occasions. In general, application software is viewed as invoked from the user mode.
Encapsulation requires that when in user mode, a user can access or modify
information about a given object only by means of the methods defined for that object; he or
she cannot directly examine or modify the complex value or the methods associated with
the object. In particular, then, essentially all application software can access objects only
through their methods. This has two important implications. First, as long as the same set
of methods is supported, the underlying implementation of object methods, and even of the
complex value representation of objects, can be changed without having to modify any ap-
plication software. Second, the methods of an object often provide a focused and abstracted
interface to the object, thus making it simpler for programmers to work with the objects.
In object-oriented programming languages, it is typical to enforce encapsulation
except in the special case of rewriting method implementations. In some OODB models, there
is an important exception to this in connection with query languages. In particular, it is
generally convenient to permit a query language to examine explicitly the complex values
associated with objects.
The reader with no previous exposure to object-oriented languages may now be utterly
overwhelmed by the terminology. It might be helpful at this point to scan through a book
or manual about an object-oriented programming language such as C++, or an OODB such
as O2 or ObjectStore. This will provide numerous examples and the overall methodology
of object-oriented programming, which is beyond the scope of this book.
21.2 Formal Definition of an OODB Model
This section presents a formal definition of a particular OODB model, called the generic
OODB model. (This model is strongly influenced by the IQL and O2 models. Many features
are shared by most other OODB models. While presenting the model, we also discuss
different choices made in other models.) The presentation essentially follows the preceding
informal one, beginning with definitions for the types and class hierarchy and then
introducing methods. It concludes with definitions of OODB schema and instance.
Types and Class Hierarchy
The formal definitions of object, type, and class hierarchy are intertwined. An object
consists of a pair (identifier, value). The identifiers are taken from a specific sort containing
OIDs. The values are essentially standard complex values, except that OIDs may occur
within them. Although some of the definitions on complex values and types are almost
identical to those in Chapter 20, we include them here to make precise the differences from
the object-oriented context. As we shall see, the class hierarchy obeys a natural restriction
based on subtyping.
To start, we assume a number of atomic types and their pairwise disjoint corresponding
domains: integer, string, bool, float. The set dom of atomic values is the (disjoint) union
of these domains; as before, the elements of dom are called constants. We also assume an
infinite set obj = {o1, o2, . . .} of object identifiers (OIDs), a set class of class names, and a
set att of attribute names. A special constant nil represents the undefined (i.e., null) value.
Given a set O of OIDs, the family of values over O is defined so that
(a) nil, each element of dom, and each element of O are values over O; and
(b) if v1, . . . , vn are values over O, and A1, . . . , An are distinct attribute names, then
the tuple [A1 : v1, . . . , An : vn] and the set {v1, . . . , vn} are values over O.
The set of all values over O is denoted val(O). An object is a pair (o, v), where o is an
OID and v is a value.
In general, object-oriented database models also include constructors other than tuple
and set, such as list and bag; we do not consider them here.
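The recursive definition of val(O) can be checked mechanically. The following Python sketch is ours, not part of the model: we choose to encode tuples as dicts with string attribute names, sets as frozensets, and OIDs as instances of a small OID class.

```python
# Values over a set O of OIDs: nil (None), atomic constants, OIDs, finite
# sets (frozensets), and tuples (dicts with string attribute names).
class OID:
    def __init__(self, name):
        self.name = name
    def __repr__(self):
        return self.name

def is_value(v, oids):
    """Test membership in val(O), following clauses (a) and (b)."""
    if v is None:                                  # nil
        return True
    if isinstance(v, (int, float, bool, str)):     # element of dom
        return True
    if isinstance(v, OID):                         # element of O
        return v in oids
    if isinstance(v, frozenset):                   # set value {v1, ..., vn}
        return all(is_value(x, oids) for x in v)
    if isinstance(v, dict):                        # tuple value [A1 : v1, ...]
        return (all(isinstance(a, str) for a in v)
                and all(is_value(x, oids) for x in v.values()))
    return False

oid7, oid22 = OID("oid7"), OID("oid22")
pariscope_entry = {"theater": oid7, "time": "16:45", "price": 45, "movie": oid22}
```

For instance, is_value(pariscope_entry, {oid7, oid22}) holds, while a value mentioning an OID outside O is rejected.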
Example 21.2.1 Letting oid7, oid22, etc. denote OIDs, some examples of values are as
follows:
[theater : oid7, time : 16:45, price : 45, movie : oid22]
{H. Andersson, K. Sylwan, I. Thulin, L. Ullman}
[title : The Trouble with Harry, director : oid77,
actors : {oid81, oid198, oid265, oid77}]
An example of an object is
(oid22, [title : The Trouble with Harry, director : oid77,
actors : {oid81, oid198, oid265, oid77}])
As discussed earlier, objects are grouped in classes. All objects in a class have complex
values of the same type. The type corresponding to each class is specied by the OODB
schema.
Types are defined with respect to a given set C of class names. The family of types
over C is defined so that
1. integer, string, bool, float are types;
2. the class names in C are types;
3. if τ is a type, then {τ} is a (set) type;³
4. if τ1, . . . , τn are types and A1, . . . , An are distinct attribute names,
then [A1 : τ1, . . . , An : τn] is a (tuple) type.
The set of types over C together with the special class name any are denoted types(C).
(The special name any is a type but may not occur inside another type.) Observe the close
resemblance with types used in the complex value model.
Example 21.2.2 An example of a type over the classes of the schema in Fig. 21.1 is
[name : string, citizenship : string, gender : string]
One may want to give a name to this type (e.g., Person_type). Other examples of types
(with names associated to them) include
Director_type = [name : string, citizenship : string, gender : string,
directs : {Movie}]
Theater_type = [name : string, address : string, phone : string]
Pariscope_type = [theater : Theater, time : string, price : integer, movie : Movie]
Movie_type = [title : string, actors : {Actor}, director : Director]
Award_type = [prize : string, year : integer]
In an OODB schema we associate with each class c a type σ(c), which dictates the
type of objects in this class. In particular, for each object (o, v) in class c, v must have the
exact structure described by σ(c).
³ In Fig. 21.1 we use keywords set and tuple as syntactic sugar when specifying the set and tuple
constructors.
Recall from the informal description that an OODB schema includes an ISA hierarchy
among the classes of the schema. The class hierarchy has three components: (1) a set
of classes, (2) the types associated with these classes, and (3) a specication of the ISA
relationships between the classes. Formally, a class hierarchy is a triple (C, σ, ≺), where
C is a finite set of class names, σ a mapping from C to types(C), and ≺ a partial order
on C.
Informally, in a class hierarchy the type associated with a subclass should be a
refinement of the type associated with its superclass. For example, a class Student is expected to
refine the information on its superclass Person by providing additional attributes. To
capture this notion, we use a subtyping relationship (≤) that specifies when one type refines
another.
Definition 21.2.3 Let (C, σ, ≺) be a class hierarchy. The subtyping relationship ≤ on
types(C) is the smallest partial order over types(C) satisfying the following conditions:
(a) if c ≺ c′, then c ≤ c′;
(b) if τi ≤ τ′i for each i ∈ [1, n] and n ≤ m, then
[A1 : τ1, . . . , An : τn, . . . , Am : τm] ≤ [A1 : τ′1, . . . , An : τ′n];
(c) if τ ≤ τ′, then {τ} ≤ {τ′}; and
(d) for each τ, τ ≤ any (i.e., any is the top of the hierarchy).
A class hierarchy (C, σ, ≺) is well formed if for each pair c, c′ of classes, c ≺ c′ implies
σ(c) ≤ σ(c′).
By way of illustration, it is easily verified that
Director_type ≤ Person_type and Director_type ≰ Movie_type.
Thus the schema obtained by adding the constraint Director ≺ Movie would not be well
formed.
Henceforth we consider only well-formed class hierarchies.
Example 21.2.4 Consider the class hierarchy (C, σ, ≺) of the schema of Fig. 21.1. The
set of classes is
C = {Person, Director, Actor, Actor_Director, Theater, Movie}
with Actor ≺ Person, Director ≺ Person, Actor_Director ≺ Director, Actor_Director ≺
Actor, and (referring to Example 21.2.2 for the definitions of Person_type, Theater_type,
etc.)
σ(Person) = Person_type,
σ(Theater) = Theater_type,
σ(Movie) = Movie_type,
σ(Director) = Director_type,
σ(Actor) = [name : string, citizenship : string,
gender : string, acts_in : {Movie},
award : {Award_type}]
σ(Actor_Director) = [name : string, citizenship : string,
gender : string, acts_in : {Movie},
award : {Award_type}, directs : {Movie}]
The use of type names here is purely syntactic. We would obtain the same schema if we
replaced, for instance, Person_type with the value of this type.
Observe that σ(Director) ≤ σ(Person) and σ(Actor) ≤ σ(Person), etc.
The Structural Semantics of a Class Hierarchy
We now describe how values can be associated with the classes and types of a class
hierarchy. Because the values in an OODB instance may include OIDs, the semantics of
classes and types must be defined simultaneously. The basis for these definitions is the
notion of OID assignment, which assigns a set of OIDs to each class.
Definition 21.2.5 Let (C, σ, ≺) be a (well-formed) class hierarchy. An OID assignment
is a function π mapping each name in C to a disjoint finite set of OIDs. Given OID
assignment π, the disjoint extension of c is π(c), and the extension of c, denoted π*(c),
is ∪{π(c′) | c′ ∈ C, c′ ≺ c}.
If π is an OID assignment, then π*(c′) ⊆ π*(c) whenever c′ ≺ c. This should be
understood as a formalization of the fact that an object of a subclass c′ may be viewed
also as an object of a superclass c of c′. From the perspective of typing, this suggests that
operations that are type correct for members of c are also type correct for members of c′.
Unlike the case for many semantic data models, the definition of OID assignment for
OODB schemas implies that extensions of classes of an ISA hierarchy without common
subclasses are necessarily disjoint. In particular, extensions of all leaf classes of the
hierarchy are disjoint (see Exercise 21.2). This is a simplifying assumption that makes it easier to
associate objects to classes. There is a unique class to whose disjoint extension each object
belongs.
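A small Python sketch (our encoding, not the book's; OIDs are plain strings here) makes the distinction between the disjoint extension π(c) and the extension π*(c) concrete:

```python
def extension(c, pi, prec):
    """pi*(c): the union of the disjoint extensions pi(c') over all classes
    c' below c in the hierarchy (including c itself)."""
    return set().union(*(pi[c2] for c2 in pi if c2 == c or (c2, c) in prec))

# Illustrative OID assignment: each class gets a disjoint set of OIDs.
pi = {"Person": {"oid1"}, "Actor": {"oid2", "oid3"}, "Director": {"oid4"}}
prec = {("Actor", "Person"), ("Director", "Person")}
```

Here extension("Person", pi, prec) contains every OID, while pi["Person"] holds only {"oid1"}; the disjoint extensions of the leaf classes Actor and Director never overlap.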
The semantics for types is now defined relative to a class hierarchy (C, σ, ≺) and
an OID assignment π. Let O = ∪{π(c) | c ∈ C}, and define π(any) = O. The disjoint
interpretation of a type τ, denoted dom(τ), is given by
(a) for each atomic type τ, dom(τ) is the usual interpretation of that type;
(b) dom(any) is val(O);
(c) for each c ∈ C, dom(c) = π*(c) ∪ {nil};
(d) dom({τ}) = {{v1, . . . , vn} | n ≥ 0, and vi ∈ dom(τ), i ∈ [1, n]}; and
(e) dom([A1 : τ1, . . . , Ak : τk]) = {[A1 : v1, . . . , Ak : vk] | vi ∈ dom(τi), i ∈ [1, k]}.
Remark 21.2.6 In the preceding interpretation, the type determines precisely the
structure of a value of that type. It is interesting to replace (e) by
(e′) dom([A1 : τ1, . . . , Ak : τk]) =
{[A1 : v1, . . . , Ak : vk, Ak+1 : vk+1, . . . , Al : vl] |
vi ∈ dom(τi), i ∈ [1, k], vj ∈ val(O), j ∈ [k + 1, l]}.
Under this alternative interpretation, for each τ, τ′ in types(C), if τ′ ≤ τ then dom(τ′) ⊆
dom(τ). This is why this is sometimes called the domain-inclusion semantics. From a
data model viewpoint, this presents the disadvantage that in a correctly typed database
instance, a tuple may have a field that is not even mentioned in the database schema. For
this reason, we do not adopt the domain-inclusion semantics here. On the other hand, from
a linguistic viewpoint it may be useful to adopt this more liberal semantics in languages to
allow variables denoting tuples with more attributes than necessary.
Adding Behavior
The final ingredient of the generic OODB model is methods. A method has three
components:
(a) a name
(b) a signature
(c) an implementation (or body).
There is no problem in specifying the names and signatures of methods in an OODB
schema. To specify the implementation of methods, a language for methods is needed.
We do not consider specic languages in the generic OODB model. Therefore only names
and signatures of methods are specied at the schema level in this model. In Section 21.4,
we shall consider several languages for methods and shall therefore be able to add the
implementation of methods to the schema.
Without specifying the implementation of methods, the generic OODB model
specifies their semantics (i.e., the effect of each method in the context of a given instance). This
effect, which is a function over the domains of the types corresponding to the signature of
the method, is therefore specied at the instance level.
We assume the existence of an infinite set meth of method names. Let (C, σ, ≺) be
a class hierarchy. For method name m, a signature of m is an expression of the form
m : c × τ1 × · · · × τn−1 → τn, where c is a class name in C and each τi is a type over
C. This signature is associated with the class c; we say that method m applies to objects of
class c and to objects of classes that inherit m from c. It is common for the same method
name to have different signatures in connection with different classes. (Some restrictions
shall be specified later.) The notion of signature here generalizes the one typically found in
object-oriented programming languages, because we permit the τi's to be types rather than
only classes.
It is easiest to describe the notions of overloading, method inheritance, and dynamic
binding in terms of an example. Consider the methods defined in the schema of Fig. 21.1.
All three share the name get_name. The signatures are given by
get_name : Person → string
get_name : Director → string
get_name : Actor_Director → string
Note that get_name has different implementations for these classes; this is an example of
overloading of a method name.
Recall that Actor is a subclass of Person. According to the informal discussion, if
get_name applies to elements of Person, then it should also apply to members of Actor.
Indeed, in the object-oriented paradigm, if a method m is defined for a class c but not for
a subclass c′ of c (and it is not defined anywhere else along a path from c′ to c), then the
definition of m for c′ is inherited from c. In particular, the signature of m on c′ is identical
to the one of m for c, except that the first c is replaced by c′. The implementation of m
for c′ is identical to that for c. In the schema of Fig. 21.1, the signature of get_name for
Actor is
get_name : Actor → string
and the implementation is identical to the one for Person. The determination of the correct
method implementation to use for a given method name m and class c is called method
resolution; the selected implementation is called the resolution of m for c.
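Method resolution can be pictured as a walk up the ISA hierarchy from c until a definition of m is found. The sketch below is ours, a simplification of the book's notion: it searches superclasses breadth-first and reports an ambiguity when two unrelated definitions are found at the same distance.

```python
def resolve(m, c, defs, parents):
    """Find the implementation of method m for class c.
    `defs` maps (method, class) to an implementation; `parents` maps a class
    to its immediate superclasses."""
    seen, frontier = set(), [c]
    while frontier:
        found = [cl for cl in frontier if (m, cl) in defs]
        if len(found) > 1:                # would violate the unambiguity rule
            raise ValueError(f"ambiguous resolution of {m} for {c}")
        if found:
            return defs[(m, found[0])]
        seen.update(frontier)
        frontier = list({p for cl in frontier for p in parents.get(cl, ())
                         if p not in seen})
    raise LookupError(f"{m} does not apply to {c}")

# The get_name example of Fig. 21.1 (implementations stand in as strings).
defs = {("get_name", "Person"): "get_name@Person",
        ("get_name", "Director"): "get_name@Director"}
parents = {"Actor": ["Person"], "Director": ["Person"],
           "Actor_Director": ["Actor", "Director"]}
```

Actor inherits get_name from Person, while Actor_Director resolves to the overriding definition on Director.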
Suppose that π is an OID assignment, that oid25 is in the extension π*(Person) of
Person, and that get_name is called on oid25. What implementation of get_name will
be used? In our OODB model we shall use dynamic binding (also called late binding,
or value-dependent binding). This means that the specific implementation chosen for
get_name on oid25 depends on the most specific class that oid25 belongs to, that is, the
class c such that oid25 ∈ π(c).
(An alternative to dynamic binding is static binding, or context-dependent binding.
Under this discipline, the implementation used for get_name depends on the type
associated with the variable holding oid25 at the point in the program where get_name is invoked.
This can be determined at compile time, and so static binding is generally much cheaper
than dynamic binding. In the language C++, the default is static binding, but dynamic bind-
ing can be obtained by using the keyword virtual when specifying the method.)
Consider a call m(o, v1, . . . , vn−1) to method m. This is often termed a message,
and o is termed the receiver. As described here, the implementation of m associated
with this message depends exclusively on the class of o. To emphasize the importance
of the receiver for finding the actual implementation, in some languages the message is
denoted o → m[v1, . . . , vn−1]. In some object-oriented programming languages, such as
CommonLoops (an object-oriented extension of LISP), the implementation depends on
all of the parameters of the call, not just the first. This is also the approach of the method
schemas introduced in Section 21.4.

Figure 21.2: Unambiguous definition
The set of methods applicable to an object is called the interface of the object. As noted
in the informal description of OODB models, in most cases objects are accessed only via
their interface; this philosophy is called encapsulation.
As part of an OODB schema, a set M of method signatures is associated to a class
hierarchy (C, σ, ≺). Note that a signature m : c × τ1 × · · · × τn−1 → τn can be viewed
as giving a particular meaning to m for class c, at least at a syntactic level. Because of
inheritance, a meaning for method m need not be given explicitly for each class of C nor
even for subclasses of a class for which m has been given a meaning. However, we make
two restrictions on the family of method signatures: The set M is well formed if it obeys
the following two rules:
Unambiguity: If c is a subclass of c′ and c′′ and there is a definition of m for c′ and c′′, then
there is a definition of m for a subclass of c′ and c′′ that is either c itself, or a superclass
of c. (See Fig. 21.2.)
Covariance⁴: If m : c × τ1 × · · · × τn−1 → τn and m : c′ × τ′1 × · · · × τ′m−1 → τ′m are two
definitions and c ≺ c′, then n = m and, for each i, τi ≤ τ′i and τn ≤ τ′m.
The first rule prevents ambiguity resulting from the presence of two method
implementations both applicable for the same object. A primary motivation for the second rule is
intuitive: We expect the argument and result types of a method on a subclass to be more
refined than those of the method on a superclass. This also simplifies the writing of type-
correct programs, although type checking leads to difficulties even in the presence of the
covariance assumption (see Section 21.4).
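The covariance rule is easy to check mechanically for a pair of signatures. In the Python sketch below (the names are ours; le stands for the subtyping test ≤, restricted here to class names for brevity), a signature is a triple (class, argument types, result type):

```python
def covariant(sig_sub, sig_super, le):
    """Check the covariance rule for two signatures of the same method name,
    where sig_sub is attached to a subclass of sig_super's class."""
    c, args, res = sig_sub
    c2, args2, res2 = sig_super
    return (le(c, c2)
            and len(args) == len(args2)                     # n = m
            and all(le(a, b) for a, b in zip(args, args2))  # arguments refine
            and le(res, res2))                              # result refines

# A toy reflexive class order standing in for <=.
order = {("Actor", "Person"), ("Director", "Person")}
le = lambda t, u: t == u or (t, u) in order
```

For instance, the pair get_name : Actor → string and get_name : Person → string passes, whereas a pair whose argument types vary contravariantly fails.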
Database Schemas and Instances
We conclude this section by presenting the definitions of schemas and instances in the
generic OODB model. An important subtlety here will be the role of OIDs in instances
⁴ In type theory, contravariance is used instead. Contravariance is the proper notion when functions
are passed as arguments, which is not the case here.
as placeholders; as will be seen, the specic OIDs present in an instance are essentially
irrelevant.
As indicated earlier, a schema describes the structure of the data that is stored in
a database, including the types associated with classes and the ISA hierarchy and the
signature of methods (i.e., the interfaces provided for objects in each class).
In many practical OODBs, it has been found convenient to allow storage of complex
values that are not associated with any objects and that can be accessed directly using
some name. This also allows us to subsume gracefully the capabilities of value-based
models, such as relations and complex values. It also facilitates writing queries. To reflect
this feature, we allow a similar mechanism in schemas and instances. Thus schemas may
include a set of value names with associated types. Instances assign values of appropriate
type to the names. Method implementations, external programming languages, and query
languages may all use these names (to refer to their current values) or a class name (to
refer to the set of objects currently residing in that class). In this manner, named values and
class names are analogous to relation names in the relational model and to complex value
relation names in the complex value model.
In the schema of Fig. 21.1, examples of named values are Pariscope (holding a set
of triples); Persons_I_like, Actors_I_like, and Actors_you_like (referring to sets of person
objects and actor objects); and, finally, My_favorite_director (referring to an individual
object as opposed to a set). These names can be used explicitly in method implementations
and in external query and programming languages.
We now have the following:
Definition 21.2.7 A schema is a 5-tuple S = (C, σ, ≺, M, G), where
G is a set of names disjoint from C;
σ is a mapping from C ∪ G to types(C);
(C, σ, ≺) is a well-formed class hierarchy⁵; and
M is a well-formed set of method signatures for (C, σ, ≺).
An instance of an OODB schema populates the classes with OIDs, assigns values to
these OIDs, gives meaning to the other persistent names, and assigns semantics to method
signatures. The semantics of method signatures are mappings describing their effect. From
a practical viewpoint, the population of the classes, the values of objects, and the values of
names are kept extensionally; whereas the semantics of the methods are specified by pieces
of code (intensionally). However, we ignore the code of methods for the time being.
Definition 21.2.8 An instance of schema (C, σ, ≺, M, G) is a 4-tuple I = (π, ν, γ, μ),
where
(a) π is an OID assignment (and let O = ∪{π(c) | c ∈ C});
(b) ν maps each OID in O to a value in val(O) of correct type [i.e., for each c and
o ∈ π(c), ν(o) ∈ dom(σ(c))];
⁵ By abuse of notation, we use σ here and later instead of σ|C.
(c) γ associates to each name in G of type τ a value in dom(τ);
(d) μ assigns semantics to method names in agreement with the method signatures
in M. More specifically, for each signature m : c × τ1 × · · · × τn−1 → τn,
μ(m : c × τ1 × · · · × τn−1 → τn) : dom(c) × dom(τ1) × · · · × dom(τn−1) → dom(τn);
that is, it is a partial function from dom(c) × dom(τ1) × · · · × dom(τn−1) to dom(τn).
Recall that a method m can occur with different signatures in the same schema. The
mapping μ can assign different semantics to each signature of m. The function μ(m :
c × τ1 × · · · × τn−1 → τn) is only relevant on objects associated with c and subclasses of c for which
m is not redefined.
In the preceding denitions, the assignment of semantics to method signatures is
included in the instance. As will be seen in Section 21.4, if method implementations
are included in the schema, they induce the semantics of methods at the instance level
(this is determined by the semantics of the particular programming language used in the
implementation).
Intuitively, it is generally assumed that elements of the atomic domains have univer-
sally understood meaning. In contrast, the actual OIDs used in an instance are not relevant.
They serve essentially as placeholders; it is only their relationship with other OIDs and
constants that matters. This arises in the practical perspective in two ways. First, in most
practical systems, OIDs cannot be explicitly created, examined, or manipulated. Second,
in some object-oriented systems, the actual OIDs used in a physical instance may change
over the course of time (e.g., as a result of garbage collection or reclustering of objects).
To capture this aspect of OIDs in the formal model, we introduce the notion of OID
isomorphism. Two instances I, J are OID isomorphic, denoted I ≡OID J, if there exists a
bijection on dom ∪ obj that maps obj to obj, is the identity on dom, and transforms I into J.
To be precise, the term object-oriented instance should refer to an equivalence class under
OID isomorphism of instances as dened earlier. However, it is usually more convenient to
work with representatives of these equivalence classes, so we follow that convention here.
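For tiny instances, OID isomorphism can be tested by brute force: try every bijection between the OID sets and check whether renaming one instance yields the other. The sketch below is ours and exponential in the number of OIDs, so it is only a specification made executable, not a practical algorithm; instances are modeled as dicts from OIDs (strings) to values.

```python
from itertools import permutations

def rename(v, f):
    """Apply an OID renaming f to a value; OIDs are strings in f's domain,
    and constants are left fixed."""
    if isinstance(v, str) and v in f:
        return f[v]
    if isinstance(v, dict):
        return {a: rename(x, f) for a, x in v.items()}
    if isinstance(v, frozenset):
        return frozenset(rename(x, f) for x in v)
    return v

def oid_isomorphic(I, J):
    """Try every bijection from the OIDs of I to the OIDs of J."""
    if len(I) != len(J):
        return False
    for perm in permutations(J):
        f = dict(zip(I, perm))
        if {f[o]: rename(v, f) for o, v in I.items()} == J:
            return True
    return False

# Two instances that differ only in their choice of OIDs.
I1 = {"oid1": {"name": "Ullman", "friend": "oid2"},
      "oid2": {"name": "Bergman", "friend": "oid1"}}
J1 = {"oid8": {"name": "Bergman", "friend": "oid9"},
      "oid9": {"name": "Ullman", "friend": "oid8"}}
```

Here I1 and J1 are OID isomorphic (map oid1 to oid9 and oid2 to oid8), illustrating that the particular OIDs serve only as placeholders.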
Remark 21.2.9 In the model just described, a class encompasses two aspects:
1. at the schema level, the class definition (its type and method signatures); and
2. at the instance level, the class extension (the set of objects currently in the class).
It has been argued that one should not associate explicit class extensions with classes. To
see the disadvantage of class extensions, consider object deletion. To be removed from
the database, an object has to be deleted explicitly from its class extension. This is not
convenient in some cases. For instance, suppose that the database contains a class Polygon
and polygons are used only in figures. When a polygon is no longer used in any figure of
the current database, it is no longer of interest and should be deleted. We would like this
deletion to be implicit. (Otherwise the user of the database would have to search all possible
places in which a reference to a polygon may occur to be able to delete a polygon.)
To capture this, some OODBs use an integrity constraint, which states that
every object should be accessible from some named value.
This integrity constraint is enforced by an automatic deletion of all objects that become
unreachable from the named values. In the polygon example, this approach would allow
defining the class Polygon, thus specifying the structure and methods proper to polygons.
However, the members of class Polygon would only be those polygons that are currently
relevant. Relevance is determined by membership in (or accessibility from) the named
values (e.g., My-Figures, Your-Figures) that refer to polygons. From a technical viewpoint,
this involves techniques such as garbage collection.
In these OODBs, the set of objects in a class is not directly accessible. For this
reason, the corresponding models are sometimes called models without class extension.
Of course, it is always possible, given a schema, to compute the class extensions or to
adapt object creation in a given class to maintain explicitly a named value containing that
class extension. In these OODBs, the named values are also said to be roots of persistence,
because the persistence of an object is dependent on its accessibility from these named
values.
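Persistence by reachability amounts to a mark phase from the named values followed by deletion of unmarked objects. The Python sketch below is ours (OIDs are modeled as strings appearing as keys of the object store):

```python
def reachable_oids(named_values, deref):
    """All OIDs reachable from the named values (the roots of persistence)."""
    def oids_in(v):
        if isinstance(v, str) and v in deref:
            return {v}
        if isinstance(v, (set, frozenset, list, tuple)):
            return set().union(*(oids_in(x) for x in v))
        if isinstance(v, dict):
            return set().union(*(oids_in(x) for x in v.values()))
        return set()
    seen = set()
    frontier = oids_in(list(named_values.values()))
    while frontier:
        o = frontier.pop()
        if o not in seen:
            seen.add(o)
            frontier |= oids_in(deref[o])      # follow references in the value
    return seen

def collect_garbage(named_values, deref):
    """Keep only the objects reachable from some named value."""
    live = reachable_oids(named_values, deref)
    return {o: v for o, v in deref.items() if o in live}

# The polygon example: oid_p2 occurs in no figure reachable from the roots.
figures_db = {"oid_p1": {"sides": 3}, "oid_p2": {"sides": 4},
              "oid_f1": {"parts": ["oid_p1"]}}
roots = {"My_Figures": ["oid_f1"]}
```

Here oid_p2 is unreachable from My_Figures and is collected, mirroring the implicit deletion of polygons that no longer occur in any figure.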
21.3 Languages for OODB Queries
This section briefly introduces several languages for querying OODBs. These queries
are formulated against the database as a whole; unlike methods, they are not associated
with specic classes. In the next section, we will consider languages intended to provide
implementations for methods.
In describing the OODB query languages, we emphasize how OODB features are
incorporated into them. The first language is an extension of the calculus for complex
values, which incorporates such object-oriented components as OIDs, different notions
of equality, and method calls. The second is an extension of the while language, initially
introduced in Chapter 14. Of primary interest here is the introduction of techniques for
creating new OIDs as part of a query. At this point we examine the notion of completeness
for OODB access languages. We also briey look at a language introducing a logic-based
approach to object creation. Finally, we mention a practical language, O2SQL. This is a
variant of SQL for OODBs that provides elegant object-oriented features.
Although the languages discussed in this section do provide the ability to call methods
and incorporate the results into the query processing and answer, we focus primarily
on access to the extensional structural portion of the OODB. The intensional portion,
provided by the methods, is considered in the following section. Also, we largely ignore the
important issue of typing for queries and programs written in these languages. The issue of
typing is considered, in the context of languages for methods, in the next section.
An Object-Oriented Calculus
The object-oriented calculus presented here is a straightforward generalization of the com-
plex value calculus of Chapter 20, extended to incorporate objects, different notions of
equality, and methods.
Let (C, σ, ≺, M, G) be an OODB schema, and let us ignore the object-oriented
features for a moment. Each name in G can be viewed as a complex value; it is straightforward
to generalize the complex value calculus to operate on the values referred to by G. (The
fact that in the complex value model all relations are sets whereas some names in G might
refer to nonset values requires only a minor modification of the language.)
Let us now consider objects. OIDs may be viewed as elements of a specic sort.
If viewed in isolation from their associated values, this suggests that the only primitive
available for comparing OIDs is equality. Recall from the schema of Fig. 21.1 the names
Actors_I_like and Actors_you_like. The query⁶
(21.1) ∃x, y(x ∈ Actors_I_like ∧ y ∈ Actors_you_like ∧ x = y)
asks whether there is an actor we both like. To obtain the names of such actors, we need
to introduce dereferencing, a mechanism to obtain the value of an object. Dereferencing is
denoted by ↑. The following query yields the names of actors we both like:
(21.2) {y | ∃x(x ∈ Actors_I_like ∧ x ∈ Actors_you_like ∧ x↑.name = y)}
In the previous query, x↑ denotes the value of x, in this case, a tuple with four fields. The
dot notation (.) is used as before to obtain the value of specific fields.
In query (21.1), we tested two objects for equality, essentially testing whether they
had the same OID. Although it does not increase the expressive power of the language, it
is customary to introduce an alternative test for equality, called value equality. This tests
whether the values of two objects are equal regardless of whether their OIDs are distinct.
To illustrate, consider the three objects having Actor_type:
(oid50, [name : Martin, citizenship : French, gender : male,
award : { }, acts_in : {oid33}])
(oid51, [name : Martin, citizenship : French, gender : male,
award : { }, acts_in : {oid33}])
(oid52, [name : Martin, citizenship : French, gender : male,
award : { }, acts_in : {oid34}])
Then oid50 and oid51 are value equal, whereas oid50 and oid52 are not. Yet another
form of equality is deep equality. If oid33 and oid34 are value equal, then oid50 and
oid52 are deep equal. Intuitively, two objects are deep equal if the (possibly infinite) trees
obtained by recursively replacing each object by its value are equal. The innite trees that
we obtain are called the expansions. They present some regularity; they are regular trees
(see Exercise 21.10).
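The three equalities can be contrasted in a few lines of Python. The sketch below is ours and simplified (objects carry single references rather than sets; OIDs are strings that appear as keys of the object store); the assumed argument caches OID pairs already under comparison so that cyclic, regular expansions still terminate.

```python
def value_equal(o1, o2, deref):
    """Equal associated values; the OIDs themselves may differ."""
    return deref[o1] == deref[o2]

def deep_equal(v1, v2, deref, assumed=frozenset()):
    """Equality of the (possibly infinite, but regular) expansions."""
    def is_oid(v):
        return isinstance(v, str) and v in deref
    if is_oid(v1) and is_oid(v2):
        if (v1, v2) in assumed:        # already comparing this pair: a cycle
            return True
        return deep_equal(deref[v1], deref[v2], deref, assumed | {(v1, v2)})
    if isinstance(v1, dict) and isinstance(v2, dict):
        return v1.keys() == v2.keys() and all(
            deep_equal(v1[a], v2[a], deref, assumed) for a in v1)
    return v1 == v2

# The oid50/oid51/oid52 example, simplified to single references.
movie_db = {"oid33": {"title": "The Trouble with Harry"},
            "oid34": {"title": "The Trouble with Harry"},
            "oid50": {"name": "Martin", "acts_in": "oid33"},
            "oid51": {"name": "Martin", "acts_in": "oid33"},
            "oid52": {"name": "Martin", "acts_in": "oid34"}}
```

As in the text, oid50 and oid51 are value equal, oid50 and oid52 are not, yet oid50 and oid52 are deep equal because oid33 and oid34 are value equal.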
The notion of deep equality highlights a major difference between value-based and
object-based models. In a value-based model (such as the relational or complex value
⁶ In this example, if name is a key for Actor, then one can easily obtain an equivalent query not using
object equality; this may not be possible if there is no key for Actor.
models), the database can be thought of as a collection of (finite) trees. The connections
between trees arise as a result of the contents of atomic elds. That is, they are implicit
(e.g., the same string may appear twice). In the object-oriented world, a database instance
can be thought of as a graph. Paths in the database are more explicit. That is, one may
view an (oid, value) pair as a form of logical pointer and a path as a sequence of pointer
dereferencing.
This graph-based perspective leads naturally to a navigational form of data access
(e.g., using a sequence such as o↑.director↑.citizenship to find the citizenship of the
director of a given movie object o). This has led some to view object-oriented models as
less declarative than value-based models such as the relational model. This is inaccurate,
because declarativeness is more a property of access languages than models. Indeed, the
calculus for OODBs described here illustrates that a highly declarative language can be
developed for the OODB model.
We conclude the discussion of the object-oriented calculus by incorporating
methods. For this discussion, it is irrelevant how the methods are specified or evaluated; this
evaluation is external to the query. The query simply uses the method invocations as
oracles. Method resolution uses dynamic binding. The value of an expression of the form
m(t1, . . . , tn) under a given variable assignment ν is obtained by evaluating (externally)
the implementation of m for the class of ν(t1) on input (ν(t1), . . . , ν(tn)). In this context, it
is assumed that m has no side-effects. Although not defined formally here, the following
illustrates the incorporation of methods into the calculus:
(21.3) {y | ∃x(x ∈ Persons_I_like ∧ y = get_name(x))}
If the set Persons_I_like contains Bergman and Liv Ullman, the answer would be
{Ms. Ullman, Liv Ullman}
The use of method names within the calculus raises a number of interesting typing and
safety issues that will not be addressed here.

Object Creation and Completeness

Relational queries take relational instances as input and produce relational instances as
output. The preceding calculus fails to provide the analogous capability, because the output
of a calculus query is a set of values or objects. Two features are needed for a query
language to produce the full-fledged structural portion of an object-oriented instance: the
ability to create OIDs, and the ability to populate a family of named values (rather than
producing a single set).
We first introduce an extension of the while language of Chapter 14 that incorporates
both of these capabilities. This language leads naturally to a discussion of completeness of
OODB access languages. After this we mention a second approach to object creation that
stems from the perspective of logic programming.
The extension of while introduced here is denoted while_obj. It will create new OIDs in
a manner reminiscent of how the language while_new of Chapter 18 invented new constants.
21.3 Languages for OODB Queries 559
The language while_obj incorporates object-oriented features such as dereferencing
and method calls, as in the calculus. To illustrate, we present a while_obj program that
collects all actors reachable from an actor I like, Liv Ullman. In this query, v_movies and
v_directors serve as variables, and reachable serves as a new name that will hold the output.

reachable := {x | x ∈ Actors_I_like ∧ x.name = "Liv Ullman"};
v_movies := { }; v_directors := { };
while change do
begin
    reachable := reachable ∪ {x | ∃y(y ∈ v_movies ∧ x ∈ y.actors)};
    v_directors := v_directors ∪ {x | ∃y(y ∈ v_movies ∧ x = y.director)};
    v_movies := v_movies ∪ {x | ∃y(y ∈ reachable ∧ x ∈ y.acts_in)}
                         ∪ {x | ∃y(y ∈ v_directors ∧ x ∈ y.directs)};
end;
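The fixpoint behavior of this program can be simulated directly in Python over a small, hypothetical object graph (actors with name and acts_in fields, movies with actors and director, directors with directs); Python object identity plays the role of OIDs. This is an illustrative sketch, not the semantics of while_obj itself, and the sample data is invented.

```python
class Obj:
    """A bare object whose Python identity plays the role of an OID."""
    def __init__(self, **fields):
        self.__dict__.update(fields)

# Hypothetical instance: two movies, two actors, one director.
movie1, movie2 = Obj(), Obj()
ullman = Obj(name="Liv Ullman", acts_in={movie1})
sydow = Obj(name="Max von Sydow", acts_in={movie1, movie2})
bergman = Obj(directs={movie1, movie2})
movie1.actors, movie1.director = {ullman, sydow}, bergman
movie2.actors, movie2.director = {sydow}, bergman

Actors_I_like = {ullman, sydow}

# reachable := {x | x in Actors_I_like and x.name = "Liv Ullman"}
reachable = {x for x in Actors_I_like if x.name == "Liv Ullman"}
v_movies, v_directors = set(), set()

while True:  # "while change do": iterate until nothing grows
    old = (len(reachable), len(v_movies), len(v_directors))
    reachable |= {x for y in v_movies for x in y.actors}
    v_directors |= {y.director for y in v_movies}
    v_movies |= {x for y in reachable for x in y.acts_in}
    v_movies |= {x for y in v_directors for x in y.directs}
    if (len(reachable), len(v_movies), len(v_directors)) == old:
        break

print(sorted(a.name for a in reachable))
```

Here both actors become reachable: Liv Ullman leads to movie1, movie1 to its director, and the director's other movie to its cast.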
We now introduce object creation. The operator new works as follows. It takes as input
a set of values (or objects) and produces one new OID for each value in the set. As a simple
example, suppose that we want to objectify the quadruples in the named value Pariscope
of the schema of Fig. 21.1. This may be accomplished with the commands

add_class Pariscope_obj
    type tuple (theater : Theater, time : string, price : integer, movie : Movie);
Pariscope_obj := new(Pariscope)

Of course, the new operator can be used in conjunction with arbitrary expressions that yield
a set of values, not just a named value.
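The semantics of new — one fresh OID per element of the input set — can be sketched as follows. The OID representation (strings from a counter) and the sample Pariscope quadruples are hypothetical stand-ins.

```python
import itertools

# Fresh-OID generator: each call yields an OID never used before.
_oid_counter = itertools.count()
def fresh_oid():
    return f"oid{next(_oid_counter)}"

def new(values):
    """One new OID per element of the input set (the 'new' operator)."""
    return {fresh_oid(): v for v in values}

# Hypothetical Pariscope quadruples (theater, time, price, movie).
Pariscope = {("Le Champo", "20:00", 9, "Persona"),
             ("Odeon", "22:00", 8, "Cries and Whispers")}

Pariscope_obj = new(Pariscope)  # Pariscope_obj := new(Pariscope)
print(len(Pariscope_obj))       # one object per quadruple
```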
The new operator used here is closely related to the new operator of the language
while_new of Chapter 18. Given that while_obj has iteration and the ability to create new
OIDs, it is natural to ask about the expressive power of this language. To set the stage,
we introduce the following analogue of the notion of (computable) query, which mimics
the one of Chapter 18. The definition focuses on the structural portion of the OODB model;
methods are excluded from consideration.
Definition 21.3.1  Let R and S be two OODB schemas with no method signatures. A
determinate query is a relation Q from inst(R) to inst(S) such that

(a) Q is computable;
(b) (Genericity) if ⟨I, J⟩ ∈ Q and ρ is a one-to-one mapping on constants, then
⟨ρ(I), ρ(J)⟩ ∈ Q;
(c) (Functionality) if ⟨I, J⟩ ∈ Q and ⟨I, J′⟩ ∈ Q, then J and J′ are OID isomorphic;
and
(d) (Well defined) if ⟨I, J⟩ ∈ Q and ⟨I′, J′⟩ is OID isomorphic to ⟨I, J⟩, then ⟨I′, J′⟩
∈ Q.

A language is determinate complete (for OODBs) if it expresses exactly the determinate
queries.
The essential difference between the preceding definition and the definition of determinate
query in Chapter 18 is that here only OIDs can be created, not constants. Parts (c)
and (d) of the definition ensure that a determinate query Q can be viewed as a function
from OID equivalence classes of instances over R to OID equivalence classes of instances
over S. So OIDs serve two purposes here: (1) They are used to compute, in the same way
that invented values were used to break the polynomial space barrier; and (2) they are now
essential components of the data structure and in particular of the result. With respect to
(2), an important aspect is that we are not concerned with the actual value of the OIDs,
which motivates the use of the equivalence relation. (Two results are viewed as identical if
they are the same up to the renaming of the OIDs.)
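OID isomorphism — equality up to a renaming of OIDs that fixes all constants — can be made concrete with a brute-force sketch. The dict-based instance encoding below is a hypothetical simplification (it ignores class extensions and assumes OIDs and constants are syntactically distinct).

```python
from itertools import permutations

def oid_isomorphic(inst1, inst2):
    """Brute-force test: is there a bijection on OIDs mapping inst1 to inst2?

    An instance is modeled (hypothetically) as a dict from OID to value,
    where a value is an atomic constant or a tuple of constants and OIDs.
    Constants must be preserved; only OIDs may be renamed.
    """
    oids1, oids2 = list(inst1), list(inst2)
    if len(oids1) != len(oids2):
        return False

    def rename(v, rho):
        if isinstance(v, tuple):
            return tuple(rename(x, rho) for x in v)
        return rho.get(v, v)  # OIDs are renamed, constants pass through

    for image in permutations(oids2):
        rho = dict(zip(oids1, image))
        if all(rename(inst1[o], rho) == inst2[rho[o]] for o in oids1):
            return True
    return False

i1 = {"o1": ("a", "o2"), "o2": ("b",)}
i2 = {"p9": ("b",), "p3": ("a", "p9")}
print(oid_isomorphic(i1, i2))  # True: rename o1 -> p3, o2 -> p9
```

The exponential search over all bijections reflects the remark later in this section that checking isomorphism is an intrinsically hard operation.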
Like while_new, while_obj is not determinate complete. There is an elegant characterization
of the determinate queries expressible in while_obj. This result, which we state next,
uses a local characterization of input-output pairs of while_obj programs. That characterization
is in the spirit of the notion of bp-completeness, relating input-output pairs of relational
calculus queries (see Exercise 16.11). For each input-output pair ⟨I, J⟩, the characterization
of while_obj queries requires a simple connection between the automorphism group of
I and that of J. For an instance K, let Aut(K) denote the set of automorphisms of K. For
a pair of instances K, K′, Aut(K, K′) denotes the bijections on adom(K ∪ K′) that are
automorphisms of both K and K′.
Theorem 21.3.2  A determinate query q is expressible in while_obj iff for each input-output
pair ⟨I, J⟩ in q there exists a mapping h from Aut(I) to Aut(I, J) such that for
each α, β ∈ Aut(I),

(i) α and h(α) coincide on I;
(ii) h(α ∘ β) = h(α) ∘ h(β); and
(iii) h(id_I) = id_⟨I,J⟩.
The only if part of the theorem is proven by an extension of the trace technique
developed in the proof of Theorem 18.2.5 (Exercise 21.14). The if part is considerably
more complex and is based on a group-theoretic argument.
A mapping h as just described is called an extension homomorphism from Aut(I) to
Aut(I, J). To see an example of the usefulness of this characterization, consider the
query q in Fig. 21.3. Recall that q was shown not to be expressible in the language while_new
by Theorem 18.2.5. The language while_obj is more powerful than while_new, so in principle
it may be able to express that query. However, we show that this is not the case, so while_obj
is not determinate complete.
Proposition 21.3.3  Query q (of Fig. 21.3) is not expressible in while_obj.

Proof  Let ⟨I, J⟩ be the input-output pair of Fig. 21.3. The proof is by contradiction.
Suppose there is a while_obj query that produces J on input I. By Theorem 21.3.2, there
is an extension homomorphism h from Aut(I) to Aut(I, J). Let α be the automorphism
of I exchanging a and b. Note that α⁻¹ = α, so α ∘ α = id_I. Consider h(α)(σ₀). Clearly,
h(α)(σ₀) ∈ {σ₁, σ₃}. Suppose h(α)(σ₀) = σ₁ (the other case is similar). Then clearly,
h(α)(σ₁) = σ₂. Consider now h(α ∘ α)(σ₀). We have, on one hand,

h(α ∘ α)(σ₀) = (h(α) ∘ h(α))(σ₀)
             = h(α)(σ₁)
             = σ₂

and on the other hand

h(α ∘ α)(σ₀) = h(id_I)(σ₀)
             = id_⟨I,J⟩(σ₀)
             = σ₀,

which is a contradiction because σ₀ ≠ σ₂. So q is not expressible in while_obj.

[Figure 21.3: A query not expressible in while_obj]
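The algebraic core of this argument can be replayed mechanically: since α ∘ α = id_I, conditions (ii) and (iii) together force h(α) ∘ h(α) to be the identity, which fails for any candidate h(α) that moves σ₀ around a cycle of length greater than 2. In the sketch below the permutation chosen for h(α), and the names s0, s1, s2, are hypothetical, mirroring the shape of the proof.

```python
def compose(f, g):
    """(f . g)(x) = f(g(x)) for permutations given as dicts."""
    return {x: f[g[x]] for x in g}

# alpha swaps a and b on I, so alpha . alpha = id_I.
alpha = {"a": "b", "b": "a"}
assert compose(alpha, alpha) == {"a": "a", "b": "b"}

# A candidate h(alpha) on the output OIDs, as in the proof:
# it maps s0 -> s1 and s1 -> s2 (a 3-cycle, hypothetical names).
h_alpha = {"s0": "s1", "s1": "s2", "s2": "s0"}

# Condition (ii) forces h(alpha . alpha) = h(alpha) . h(alpha),
# while condition (iii) forces h(alpha . alpha) = h(id_I) = identity.
lhs = compose(h_alpha, h_alpha)       # sends s0 to s2, not back to s0
identity = {x: x for x in h_alpha}
print(lhs == identity)                # False: no such h can exist
```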
It is possible to obtain a language expressing all determinate queries by adding to
while_obj a choose operator that allows the selection (nondeterministically but in a determinate
manner) of one object out of a set of objects that are isomorphic (see Exercise 18.14).
However, this is a highly complex construct, because it requires the ability to check for
isomorphism of graphs. The search for simpler, local constructs that yield a determinate-complete
language is an active area of research.
A Logic-Based Approach to Object Creation

We now briefly introduce an alternative approach for creating OIDs that stems from the
perspective of datalog and logic programming. Suppose that a new OID is to be created for
each pair ⟨t, m⟩, where movie m is playing at theater t according to the current value of
Pariscope. Consider the following datalog-like rule:

1. create_tm_object(x, t, m) ← Pariscope(t, s, m)

Note that x occurs in the rule head but not in the body, so the rule is not safe. Intuitively,
we would like to attach semantics to this rule so that a new OID is associated to x for each
distinct pair of (t, m) values. Using the symbol ∃! to mean "there exists a unique," the following
versions of (1) intuitively capture the semantics.

2. ∀t ∀m ∃!x ∀s [create_tm_object(x, t, m) ← Pariscope(t, s, m)]
3. ∀t ∀m ∃!x [create_tm_object(x, t, m) ← ∃s(Pariscope(t, s, m))]
This suggests that Skolem functions might be used. Specifically, let f_tm be a function
symbol associated with the predicate create_tm_object. We rewrite (2) as

∀t ∀m ∀s [create_tm_object(f_tm(t, m), t, m) ← Pariscope(t, s, m)]

or, leaving off the universal quantifiers as is traditional in datalog,

4. create_tm_object(f_tm(t, m), t, m) ← Pariscope(t, s, m)

Under this approach, the Skolem terms resulting from rule (4) are to be interpreted
as new, distinct OIDs. Under some formulations of the approach, syntactic objects such
as f_tm(oid7, oid22) (where oid7 is the OID of some theater and oid22 the OID of some
movie) serve explicitly as OIDs. Under other formulations, such syntactic objects are
viewed as placeholders during an intermediate stage of query evaluation and are (nondeterministically)
replaced by distinct new OIDs in the final stage of query evaluation (see
Exercise 21.13).
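The two-stage reading of rule (4) — Skolem terms first, fresh OIDs last — can be sketched as follows; the Pariscope facts are hypothetical samples.

```python
import itertools

# Hypothetical Pariscope facts: (theater, schedule, movie).
Pariscope = [("Le Champo", "20:00", "Persona"),
             ("Le Champo", "22:00", "Persona"),
             ("Odeon", "20:00", "Persona")]

# Stage 1 - evaluate rule (4): one Skolem term f_tm(t, m) per derived fact.
# The two Le Champo facts differ only in schedule, so they yield one term.
create_tm_object = {(("f_tm", t, m), t, m) for (t, s, m) in Pariscope}

# Stage 2 - (nondeterministically) replace each distinct Skolem term
# by a fresh OID.
counter = itertools.count()
oid_of = {}
def as_oid(term):
    if term not in oid_of:
        oid_of[term] = f"oid{next(counter)}"
    return oid_of[term]

result = {(as_oid(x), t, m) for (x, t, m) in create_tm_object}
print(sorted(result))
```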
The latter approach to OID creation, incorporated into complex value datalog extended
to include also OID dereferencing, yields a language equivalent to while_obj. As
with while_obj, this language is not determinate complete.
A Practical Language for OODBs

We briefly illustrate some object-oriented features of the language O₂SQL, which was
introduced in Section 20.8. Several examples are presented there that show how O₂SQL
can be used to access and construct deeply nested complex values. We now indicate how
the use of objects and methods is incorporated into the language. It is interesting to note that
methods and nested complex values are elegantly combined in this language, which has the
appearance of SQL but is essentially based on the functional programming paradigm.
For this example, we again assume the complex value Films of Fig. 20.2, but we
assume that Age is a method defined for the class Person (and thus for Director).

select tuple (f.Director, f.Director.Age)
from f in Films
where f.Director not in flatten select m.Actors
                         from g in Films,
                              m in g.Movies
                         where g.Director = "Hitchcock"

(Recall that here the inner select-from-where clause returns a set of sets of actors. The
keyword flatten has the effect of forming the union of these sets to yield a set of actors.)
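For readers more comfortable with comprehensions, the query can be paraphrased in Python over a hypothetical encoding of Films (directors and movies as plain records, Age as a stored field, and the comparison of a director against the actor set simplified to names):

```python
# Hypothetical encoding of the complex value Films of Fig. 20.2:
# each group has a Director and a set of Movies; each movie a set of Actors.
hitchcock = {"name": "Hitchcock", "age": 80}
bergman = {"name": "Bergman", "age": 60}
films = [
    {"Director": hitchcock,
     "Movies": [{"Title": "Psycho", "Actors": {"Perkins", "Leigh"}}]},
    {"Director": bergman,
     "Movies": [{"Title": "Persona", "Actors": {"Ullman", "Andersson"}}]},
]

# flatten select m.Actors from g in Films, m in g.Movies
# where g.Director = "Hitchcock"
hitchcock_actors = {a for g in films if g["Director"]["name"] == "Hitchcock"
                    for m in g["Movies"] for a in m["Actors"]}

# select tuple(f.Director, f.Director.Age) from f in Films
# where f.Director not in ...
answer = [(f["Director"]["name"], f["Director"]["age"])
          for f in films
          if f["Director"]["name"] not in hitchcock_actors]
print(answer)
```

The inner comprehension plays the role of flatten: it unions the per-movie actor sets into one set before the membership test.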
21.4 Languages for Methods
So far, we have used an abstraction of methods (their signature) and ignored their
implementations. In this section, we present two abstract programming languages for specifying
method implementations. Method implementations will be included in the specification
of methods in OODB schemas. In studying these languages, we emphasize two important
issues: type safety and expressive power. This focus largely motivates our choice of
languages and the particular abstractions considered.
The first language is an imperative programming language. The second, method
schemas, is representative of a functional style of database access. In the first language,
we gather a number of features present in practical object-oriented database languages
(e.g., side-effects, iteration, conditionals). We will see that with these features we get (as
could be expected) completeness, and we pay the obvious price for it: the undecidability
of many questions, such as type safety. With method schemas, we focus on the essence
of inheritance and methods. We voluntarily consider a limited language. We see that the
undecidability of type safety is a consequence of recursion in method calls. (We obtain
decidability in the restricted case of monadic methods.) With respect to expressiveness,
we present a surprising characterization of qptime in terms of a simple language with
methods.
For both languages, we study type safety and expressive power. We begin by discussing
briefly the meaning of these notions in our context, and then we present the two
languages and the results.
An OODB schema S (with method implementations assigned to signatures) is type
safe if for each instance I of S and each syntactically correct method call on I, the execution
of this method does not result in a runtime type error (an illegal method call). When
the imperative programming language is used in method implementations, type safety is
undecidable. (It is possible, however, to obtain decidable sufficient conditions for type
safety.) For method schemas, type safety remains undecidable. Surprisingly, type safety
is decidable for monadic method schemas.
To evaluate the expressive power of OODB schemas using a particular language for
method implementation, a common approach is to simulate relational queries and then
ask what family of relational queries can be simulated. If OID creation is permitted, then
all computable relational queries can be simulated using the imperative language. The
expressive power of imperative methods without OID creation depends on the complex
types permitted in OODB schemas. We also present a result for the expressive power of
method schemas, showing that the family of method schemas using an ordered domain of
atomic elements expresses exactly qptime.
A Model with Imperative Methods

To consider the issue of type safety in a general context, we present the imperative (OODB)
model, which incorporates imperative method implementations. This model simplifies the
OODB model presented earlier by assuming that the type of each class is a tuple of values
and OIDs. However, a schema in this model will include an assignment of implementations
to method signatures.
The syntax for method implementations is

par: u₁, . . . , uₙ;
var: x₁, . . . , x_l;
body: s₁; . . . ; s_q;
return x₁

where the uᵢ's are parameters (n ≥ 1), the xⱼ's are internal variables (l ≥ 1), and for each
p ∈ [1, q], s_p is a statement of one of the following forms (where w, y, z range over
parameters and internal variables):

Basic operations
(i) w := self.
(ii) w := self.a, for some field name a.
(iii) w := y.
(iv) w := m(y, . . . , z), for some method name m.
(v) self.a := w, for some field name a.

Class operations
(vi) w := new(c), where c is a class.
(vii) delete(c, w), where c is a class.
(viii) for each w in c do s′₁; . . . ; s′_t end, where c is a class and s′₁, . . . , s′_t are statements
having forms from this list.

Conditional
(ix) if y θ z then s, where θ is = or ≠ and s is a statement having a form in this list
except for the conditional.
It is assumed that all internal variables are initialized, before use, to some default value
depending on their type. The intended semantics for the forms other than (viii) should
be clear. (Here "clear" does not mean "easy to implement." In particular, object deletion
is complex because all references to this object have to be deleted.) The looping construct
executes for each element of the extension (not the disjoint extension) of class c. The execution
of the loop is viewed as nondeterministic, in the sense that the particular ordering used for
the elements of c is not guaranteed by the implementation. In general, we focus on OODB
schemas in which different orders of execution of the loops yield OID-equivalent results
(note, however, that this property is undecidable, so it must be ensured by the programmer).
An imperative schema is a 6-tuple S = (C, σ, ≺, M, G, μ), where (C, σ, ≺, M, G) is
a schema as before; where the range of σ is tuples of atomic and class types; and where μ is
an assignment of implementations to signatures. The notion of instance for this model is
defined in the natural fashion.
It is straightforward to develop an operational semantics for this model, where the execution
of a given method call might be successful, nonterminating, or aborted (as the result
of a runtime type error) (Exercise 21.15a).
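An operational semantics along these lines can be sketched as a toy interpreter. The statement encodings below are hypothetical and cover only forms (i), (ii), (iii), and (v); a runtime type error is raised when a field access fails.

```python
class RuntimeTypeError(Exception):
    pass

def run_method(self_obj, stmts, params):
    """Interpret a tiny fragment of the imperative method language.

    Statements (hypothetical encodings of forms i, ii, iii, v):
      ("getself", w)     - w := self
      ("getfield", w, a) - w := self.a
      ("copy", w, y)     - w := y
      ("setfield", a, w) - self.a := w
    Internal variables default to None; the result is variable x1.
    """
    env = dict(params)
    for s in stmts:
        if s[0] == "getself":
            env[s[1]] = self_obj
        elif s[0] == "getfield":
            if s[2] not in self_obj:
                raise RuntimeTypeError(f"no field {s[2]}")
            env[s[1]] = self_obj[s[2]]
        elif s[0] == "copy":
            env[s[1]] = env.get(s[2])  # default value if y uninitialized
        elif s[0] == "setfield":
            self_obj[s[1]] = env.get(s[2])
    return env.get("x1")

movie = {"title": "Persona", "director": "Bergman"}
body = [("getfield", "x1", "director"), ("setfield", "seen", "x1")]
print(run_method(movie, body, {}))  # "Bergman"; movie now has field 'seen'
```

An execution is "successful" when the loop finishes, and "aborted" when RuntimeTypeError escapes; nontermination would arise only with looping forms, which this fragment omits.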
Type Safety in the Imperative Model  There are two ways that a runtime type error can
arise: (1) if the type of the result of an execution of method m does not lie within the type
specified by the relevant method signature of m; or (2) if a method is called on a tuple
of parameters that does not satisfy the domain part of the appropriate signature of m. We
assume that the range of all method signatures is any, and thus we focus on case (2).
A schema S is type safe if for each instance over S and each method call m(o, v₁, . . . , vₙ)
that satisfies the signature of m associated with the class of o, the execution of this call is
either successful or nonterminating.
Given a Turing machine M, it is easy to develop a schema S in this model that can
simulate the operation of M on a suitable encoding of an input tape (Exercise 21.15c). This
shows that such schemas are computationally powerful and implies the usual undecidability
results. With regard to type safety, it is easy to verify the following (Exercise 21.16):

Proposition 21.4.1  It is undecidable, given an imperative schema S, whether S is type
safe. This remains true even if in method implementations conditional statements and the
new operator are prohibited and all methods are monadic (i.e., have only one argument).
A similar argument can be used to show that it is undecidable whether a given method
terminates on all inputs. Finally, a method m′ on class c′ is reachable from method m on
class c in OODB schema S if there is some instance I of S and some tuple ⟨o, v₁, . . .⟩ with
o in c such that the execution of m(o, v₁, . . .) leads to a call of m′ on some object in c′.
Reachability is also undecidable for imperative schemas.
Expressive Power of the Imperative Model

As discussed earlier, we measure the expressive power of OODB schemas in terms of the
relational queries they can simulate. A relational schema R = {R₁, . . . , Rₙ} is simulated
by an OODB schema S of this model if there are leaf classes c₁, . . . , cₙ in S, where the
number of attributes of cᵢ is the arity of Rᵢ for i ∈ [1, n] and where the type of each of
these attributes is atomic. We focus on instances in which no null values appear for such
attributes. Let R be a relational schema and S an OODB schema that simulates R. An
instance I of R is simulated by an instance J of S if for each tuple v ∈ I(Rᵢ) there is exactly
one object o in the extension of cᵢ such that the value associated with o is v, and all other
classes of S are empty. Following this spirit, it is straightforward to define what it means
for a method call in schema S to simulate a relational query from R to relation schema R.
We consider only schemas S for which different orders of evaluation of the looping
construct yield the same final result (i.e., generic mappings). We now have the following
(see Exercise 21.20):

Theorem 21.4.2  The family of generic queries corresponding to imperative schemas
coincides with the family of all relational queries.

The preceding result relies on the presence of the new operator. It is natural to ask
about the expressive power of imperative schemas that do not support new. As discussed in
Exercise 21.21, the expressive power depends on the complex types permitted for objects.
Note also that imperative schemas can express all determinate queries. This uses the
nondeterminism of the for each construct. Naturally, nondeterministic queries that are not
determinate can also be expressed.

Method Schemas

We now present an abstract model for side-effect-free methods, called method schemas.
In this model, we focus almost exclusively on methods and their implementations. Two
kinds of methods are distinguished: base and composite. The base methods do not have
implementations: Their semantics is specified explicitly at the instance level. The implementations
of composite methods consist of a composition of other methods.
We now introduce method schemas. In the next definition, we make the simplifying
assumption that there are no named values (only class names) in database schemas. In
fact, data is only stored in base methods. In the following, σ_[ ] denotes the type assignment
with σ_[ ](c) = [ ] for every class c. Because the type assignment provides no information
in method schemas (it is always σ_[ ]), this assignment is not explicitly specified in the
schemas.

Definition 21.4.3  A method schema is a 5-tuple S = (C, ≺, M_base, M_comp, μ), where
(C, σ_[ ], ≺) is a well-formed class hierarchy, M_base ∪ M_comp is a well-formed set of method
signatures for (C, σ_[ ], ≺), and

- no method name occurs in both M_base and M_comp;
- each method signature in M_comp is of the form m : c₁, . . . , cₙ → any (method signatures
for M_base are unrestricted, i.e., can have any class as range);
- μ is an assignment of implementations to the method signatures of M_comp, as follows:
For a signature m : c₁, . . . , cₙ → any in M_comp, μ(m : c₁, . . . , cₙ → any) is a
term obtained by composing methods in M_base and M_comp.

An example of an implementation for a method m : c₁, c₂ → any is

m(x, y) = m₁(m₂(x), m₁(x, y)).
The semantics of methods is defined in the obvious way. For instance, to compute m(o, o′),
one first computes o₁ = m₂(o) and then o₂ = m₁(o, o′); the result is m₁(o₁, o₂). The range
of composite methods is left unspecified (it is any) because it is determined by the domain
and the method implementation as a composition of methods. Because the range of
composite methods is always any, we will sometimes only specify their domain.
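Evaluation of such a composition, with the base-method semantics given explicitly at the instance level, can be sketched as follows; the objects and base-method tables are hypothetical.

```python
# Hypothetical base-method semantics nu, given explicitly at the instance
# level as finite tables (dicts over objects named by strings).
nu = {
    "m1": {("o", "op"): "o2", ("o1", "o2"): "result"},
    "m2": {("o",): "o1"},
}

def call(name, *args):
    table = nu[name]
    if args not in table:
        raise KeyError(f"{name} undefined on {args}")  # runtime type error
    return table[args]

# The composite method m(x, y) = m1(m2(x), m1(x, y)):
def m(x, y):
    o1 = call("m2", x)         # first compute o1 = m2(o)
    o2 = call("m1", x, y)      # then o2 = m1(o, o')
    return call("m1", o1, o2)  # the result is m1(o1, o2)

print(m("o", "op"))  # "result"
```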
Let S = (C, ≺, M_base, M_comp, μ) be a method schema. An instance of S is a pair
I = (π, ν), where π is an OID assignment for (C, ≺) and where ν assigns a semantics
to the base methods. Note the difference from the imperative schemas of the previous
section, where an instance together with the method implementations was sufficient to determine
the semantics of methods. In contrast, the semantics of the base methods must be specified
in instances of method schemas.
Inheritance of method implementations for method schemas is defined slightly differently
from that for the OODB model given earlier. Specifically, given an n-ary method m
and invocation m(o₁, . . . , oₙ), where oᵢ is in disjoint class cᵢ for i ∈ [1, n], the implementation
for m is inherited from the implementation of signature m : c′₁, . . . , c′ₙ → c′, where this
is the unique signature that is pointwise least above c₁, . . . , cₙ. [Otherwise m is undefined
on input (o₁, . . . , oₙ).]
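Resolution by the pointwise least signature can be sketched as a small search; the class hierarchy and signatures below are hypothetical.

```python
# Hypothetical class hierarchy: subclass_of[c] lists all superclasses of c,
# including c itself.
subclass_of = {
    "student": {"student", "person"},
    "person": {"person"},
}

def leq(c1, c2):
    return c2 in subclass_of[c1]  # c1 is a subclass of (or equals) c2

def resolve(signatures, arg_classes):
    """Pick the pointwise least signature above the argument classes."""
    candidates = [sig for sig in signatures
                  if len(sig) == len(arg_classes)
                  and all(leq(a, s) for a, s in zip(arg_classes, sig))]
    least = [sig for sig in candidates
             if all(all(leq(x, y) for x, y in zip(sig, other))
                    for other in candidates)]
    # None models "m is undefined on this input" (no unique least signature)
    return least[0] if len(least) == 1 else None

sigs = [("person", "person"), ("student", "person")]
print(resolve(sigs, ("student", "student")))  # ('student', 'person')
```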
An important special case is when methods take just one argument. Method schemas
using only such methods are called monadic. To emphasize the difference, unrestricted
method schemas are sometimes called polyadic.

Example 21.4.4  Consider the following monadic method schema. The classes in the
schema are

class c
class c′ ≺ c

The base method signatures are

method m₁ : c → c′
method m₂ : c → c
method m₂ : c′ → c′
method m₃ : c′ → c

The composite method definitions are

method m : c = m₂(m₂(m₁(x)))
method m′ : c = m₃(m′(m₂(x)))
method m′ : c′ = m₁(x)

Note that m′ is recursive and that calls to m′ on elements in c′ break the recursion.
Type Safety for Method Schemas  As before, a method schema S is type safe if for each
instance I of S no method call on I leads to a runtime type error.
The following example demonstrates that the schema of Example 21.4.4 is not type
safe. Note how the interpretation of the base methods can be viewed as an assignment of
values to objects.
Example 21.4.5  Recall the method schema of Example 21.4.4. An instance of this is
I = (π, ν), where

π(c) = {p, q}        π(c′) = {r}

and

ν(m₁)(p) = r        ν(m₂)(p) = q
ν(m₁)(q) = r        ν(m₂)(q) = r        ν(m₃)(r) = p
ν(m₁)(r) = r        ν(m₂)(r) = r

(We write ν(m₁)(p) rather than ν(m₁, c)(p) to simplify the presentation.)
Consider the execution of m(p). This calls for the computation of m₂(m₂(m₁(p))) =
m₂(m₂(r)) = r. Thus the execution is successful. On the other hand, m′(p) leads to
a runtime type error:

m′(p) = m₃(m′(m₂(p))) = m₃(m′(q)) = m₃(m₃(m′(m₂(q)))) =
m₃(m₃(m′(r))) = m₃(m₃(m₁(r))) = m₃(m₃(r)) = m₃(p),

which is undefined and raises a runtime type error. Thus the schema is not type safe.
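The two traces above can be executed directly. The sketch below encodes the schema of Example 21.4.4 and the instance of Example 21.4.5 as plain Python functions (an illustrative stand-in for method resolution), with an exception playing the role of the runtime type error.

```python
class RuntimeTypeError(Exception):
    pass

# Instance of Example 21.4.5: pi(c) = {p, q}, pi(c') = {r}, plus nu.
klass = {"p": "c", "q": "c", "r": "c_prime"}
nu = {
    "m1": {"p": "r", "q": "r", "r": "r"},
    "m2": {"p": "q", "q": "r", "r": "r"},
    "m3": {"r": "p"},          # m3 is defined only on c'
}

def base(name, o):
    if o not in nu[name]:
        raise RuntimeTypeError(f"{name} undefined on {o}")
    return nu[name][o]

def m(x):                      # method m : c = m2(m2(m1(x)))
    return base("m2", base("m2", base("m1", x)))

def m_prime(x):                # method m' overloaded on c and c'
    if klass[x] == "c_prime":  # m' : c' = m1(x)  (breaks the recursion)
        return base("m1", x)
    return base("m3", m_prime(base("m2", x)))  # m' : c = m3(m'(m2(x)))

print(m("p"))                  # succeeds: "r"
try:
    m_prime("p")               # m3 is eventually applied to p, not in c'
except RuntimeTypeError as e:
    print("runtime type error:", e)
```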
It turns out that type safety of method schemas permitting polyadic methods is undecidable
(Exercise 21.19). Interestingly, type safety is decidable for monadic method
schemas. We now sketch the proof of this result.

Theorem 21.4.6  It is decidable in polynomial time whether a monadic method schema
is type safe.
Crux  Let S = (C, ≺, M_base, M_comp, μ) be a monadic method schema. We construct a
context-free grammar (see Chapter 2) that captures possible executions of a method call
over all instances of S. The grammar is G_S = (V_n, V_t, A, P), where the set V_t of terminals
is the set of base method names (denoted N_base) along with the symbols {error, ignore},
and the set V_n of nonterminals includes the start symbol A and

{[c, m, c′] | c, c′ are classes, and m is a method name}

The set P of production rules includes

(i) A → [c, m, c′], if m is a composite method name and it is defined at c or a
superclass of c.
(ii) [c, m, c′] → error, if m is not defined at c or a superclass of c.
(iii) [c, m, c′] → m, if m is a base method name, the resolution of m for c is m : c₁ →
c₂, and c′ ≺ c₂. (Note that c′ = c₂ is just a particular case.)
(iv) [c, m, c] → ε, if m is a composite method name and the resolution of m for c is
the identity mapping.
(v) [c, m, cₙ] → [c, m₁, c₁][c₁, m₂, c₂] . . . [cₙ₋₁, mₙ, cₙ], if m is a composite method,
m on c resolves to a method with implementation mₙ(mₙ₋₁(. . . (m₂(m₁(x))) . . .)),
and c₁, . . . , cₙ are arbitrary classes.
(vi) [c, m, c′] → ignore, for all classes c, c′ and method names m.
Given a successful execution of a method call m(o), it is easy to construct a word in L(G_S)
of the form m₁ . . . mₙ, where the mᵢ's list the sequence of base methods called during the
execution. On the other hand, if the execution of m(o) leads to a runtime error, a word of the
form m₁ . . . mᵢ error . . . can be formed. The terminal ignore can be used in cases where
a nonterminal [c, m, c′] arises such that m is a base method name and c′ is outside the
range of m for c. The productions of type (vi) are permitted for all nonterminals [c, m, c′],
although they are needed only for some of them.
It can be shown that S is type safe iff

L(G_S) ∩ N_base* error V_t* = ∅.

Because it can be tested if the intersection of a context-free language with a regular language
is empty, the preceding provides an algorithm for checking type safety. However, a
modification of the grammar G_S is needed to obtain the polynomial time test (see Exercise
21.18).
Expressive Power of Method Schemas  We now argue that method schemas (with order)
simulate precisely the relational queries in qptime. The object-oriented features are
not central here: The same result can be shown for functional data models without such
features.
As for imperative schemas, we show that method schemas can simulate relational
queries. The encoding of these queries assumes an ordered domain, as is traditional in the
world of functional programming.
A relational database is encoded as follows:

(a) a class elem contains objects representing the elements of the domain, and it has
zero as a subclass containing a unique element, say 0;
(b) a function pred, which is included as a base method, provides the predecessor
function over elem − zero [pred(0) is, for instance, 0]; a base method 0 returns
the least element and another base method N the largest object in elem (the
function pred is a functional analog of the relation succ, which we have assumed
is available in every ordered database; a successor function could also have been
used);
(c) to have the Booleans, we think of 0 as the value false and all other objects in elem
as representations of true;
(d) an n-ary relation R is represented by an n-ary base method m_R of signature
m_R : elem, . . . , elem → elem, the characteristic function of R. [For a tuple t,
m_R(t) is true iff t is in R.]

Next we represent queries by composite methods. A query q is computed by method
m_q if m_q(t) is true (not in zero) iff t is in the answer to query q.
The following illustrates how to compute with this simple language.

Example 21.4.7  Consider a relation R with instance {⟨1, 1⟩, ⟨1, 2⟩}. The class zero is
populated with the object 0 and the class elem with 1, 2. The base method pred is defined
by pred(2) = 1, pred(0) = pred(1) = 0. The base method m_R is defined by m_R(1, 1) =
m_R(1, 2) = 1 and m_R(x, y) = 0 otherwise.
Recall that each object in class elem is viewed as true and the object 0 as false. We can
code the Boolean function and as follows:

for x, y in zero, zero:  and(x, y) = 0
for x, y in elem, zero:  and(x, y) = 0
for x, y in zero, elem:  and(x, y) = 0
for x, y in elem, elem:  and(x, y) = N.

The other standard Boolean functions can be coded similarly. We can code the intersection
of two binary relations R and S with and(m_R(x, y), m_S(x, y)). As a last example,
the projection of a binary relation R over the first coordinate can be coded by a method
π_R,1 defined by

π_R,1(x) = m(x, N),

where m is given by

for x, y in elem, zero:  m(x, y) = m_R(x, y)
for x, y in elem, elem:  m(x, y) = or(m_R(x, y), m(x, pred(y))).
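These definitions run as ordinary recursive functions. The following sketch executes the example; the Python encoding (plain functions instead of class-overloaded methods, with an explicit 0-test replacing resolution on zero versus elem) is an illustrative stand-in.

```python
ZERO, N = 0, 2                # the least and largest elements of elem
elem = {0, 1, 2}              # class elem; class zero = {0}

def pred(x):                  # base method pred, with pred(0) = 0
    return max(x - 1, 0)

def m_R(x, y):                # characteristic function of R = {(1,1), (1,2)}
    return 1 if (x, y) in {(1, 1), (1, 2)} else 0

def and_(x, y):               # the four resolution cases collapse to a test
    return N if x != ZERO and y != ZERO else ZERO

def or_(x, y):
    return N if x != ZERO or y != ZERO else ZERO

def m(x, y):                  # helper for the projection
    if y == ZERO:             # case: x, y in elem, zero
        return m_R(x, y)
    return or_(m_R(x, y), m(x, pred(y)))  # case: x, y in elem, elem

def pi_R_1(x):                # projection of R on the first coordinate
    return m(x, N)

print([x for x in sorted(elem) if pi_R_1(x) != ZERO])  # [1]
```

The recursion on pred(y) walks y down through the ordered domain, or-ing the membership tests; this is the functional-programming substitute for existential quantification over the second coordinate.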
We now state the following:

Theorem 21.4.8  Method schemas over ordered databases express exactly qptime.

Crux  As indicated in the preceding example, we can construct composite methods for the
Boolean operations and, or, and not. For each k, we can also construct k k-ary functions
pred_k^i for i ∈ [1, k] that compute, for each k-tuple u, the k components of the predecessor (in
lexicographical ordering) of u. Indeed, we can simulate an arbitrary relational operation
and, more generally, an arbitrary inflationary fixpoint. To see this, consider the transitive
closure query. It is computed with a method tc defined (informally) as follows. Intuitively,
a method tc(x, y) asks, "Is ⟨x, y⟩ in the transitive closure?" Execution of tc(x, y) first calls
a method m₁(x, y, N), whose intuitive meaning is "Is there a path of length N from x to y?"
This will be computed by asking whether there is a path of length N − 1 (a recursive call to
m₁), etc. This can be generalized to a construction that simulates an arbitrary inflationary
fixpoint query. Because the underlying domain is ordered, we have captured all qptime
queries. The converse follows from the fact that there are only polynomially many possible
method calls in the context of a given instance, and each method call in this model can
be answered in qptime. Moreover, loops in method calls can be detected in polynomial
time; calls giving rise to loops are assumed to output some designated special value. (See
Exercise 21.25.)
We have presented an object-oriented approach in the applicative programming style.
There exists another important family of functional languages based on typed λ-calculi.
It is possible to consider database languages in this family as well. These calculi present
additional advantages, such as being able to treat functions as objects and to use higher-order
functions (i.e., functions whose arguments are functions).
21.5 Further Issues for OODBs
As mentioned at the beginning of this chapter, the area of OODB is relatively young and
active. Much research is needed to understand OODBs as well as we understand relational
databases. A difculty (and richness) is that there is still no well-accepted model. We
conclude this chapter with a brief look at some current research issues for OODBs. These
fall into two broad categories: advanced modeling features and dynamic aspects.
Advanced Modeling Features
This is not an exhaustive list of new features but a sample of some that are being studied:
Views: Views are intended to increase the flexibility of database systems, and it is natu-
ral to extend the notion of relational view to the OODB framework. However, unlike
relational views, OODB views might redefine the behavior of objects in addition to
restructuring their associated types. There are also significant issues raised by the pres-
ence of OIDs. For example, to maintain incrementally a materialized view with created
OIDs, the linkage between the base data and the created OIDs must be maintained.
Furthermore, if the view is virtual, then how should virtual OIDs be specified and
manipulated?
Object roles: The same entity may be involved in several roles. For instance, a director
may also be an actor. It is costly, if not infeasible, to forecast all cases in which this
may happen. Although not as important in object-oriented programming, in OODBs it
would be useful to permit the same object to live in several classes (a departure from
the disjoint OID assignment from which we started) and at least conceptually maintain
distinct repositories, one for each role. This feature is present in some semantic data
models; in the object-oriented context, it raises a number of interesting typing issues.
Schema design: Schema design techniques (e.g., based on dependencies and normal forms)
have emerged for the relational model (see Chapter 11). Although the richer model in
the OODB provides greater flexibility in selecting a schema, there is a concomitant
need for richer tools to facilitate schema design. The scope of schema design is en-
larged in the OODB context because of the interaction of methods within a schema
and application software for the schema.
Querying the schema: In many cases, information may be hidden in an OODB schema.
Suppose, for example, that movies were assigned categories such as drama, western,
suspense, etc. In the relational model, this information would typically be repre-
sented using a new column in the Movies relation. A query such as "list all categories
of movie that Bergman directed" is easily answered. In an OODB, the category infor-
mation might be represented using different subclasses of the Movie class. Answering
this query now requires the ability of the query language to return class names, a fea-
ture not present in most current systems.
Classification: A related problem concerns how, given an OODB schema, to classify
new data for this schema. This may arise when constructing a view, when merging
two databases, or when transforming a relational database into an OODB one by
objectifying tuples. The issue of classification, also called taxonomic reasoning, has
a long history in the field of knowledge representation in artificial intelligence, and
some research in this direction has been performed for semantic and object-oriented
databases.
Incorporating deductive capabilities: The logic-programming paradigm has offered a
tremendous enhancement of the relational model by providing an elegant and (in
many cases) intuitively appealing framework for expressing a broad family of queries.
For the past several years, researchers have been developing hybrids of the logic-
programming and object-oriented paradigms. Although it is very different in some
ways (because the OO paradigm has fundamentally imperative aspects), the perspec-
tive of logic programming provides alternative approaches to data access and object
creation.
Abstract data types: As mentioned earlier, OODB systems come equipped with several
constructors, such as set, list, or bag. It is also interesting to be able to extend the
language and the system with application-specic data types. This involves language
and typing issues, such as how to gracefully incorporate access mechanisms for the
new types into an existing language. It also involves system issues, such as how to
introduce appropriate indexing techniques for the new type.
Dynamic Issues
The semantics of updates in relational systems is simple: Perform the update if the result
complies with the dependencies of the schema. In an OODB, the issue is somewhat trickier.
For instance, can we allow the deletion of an object if this object is referred to somewhere in
the database (the dangling reference problem)? This is prohibited in some systems, whereas
other systems will accept the deletion and just mark the object as "dead". Semantically, this
results in viewing all references to this object as nil.
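The two policies can be contrasted with a small sketch (the Store class and its methods are hypothetical, not an API of any system discussed here; values are tuples that may mention other OIDs):

```python
# Sketch of the two deletion policies discussed above, on a toy object store.

class Store:
    def __init__(self):
        self.objects = {}               # oid -> value (a tuple; may mention oids)
        self.dead = set()               # tombstones for deleted objects

    def delete_strict(self, oid):
        # Policy 1: refuse the deletion if the object is referenced anywhere.
        for other, value in self.objects.items():
            if other != oid and oid in value:
                raise ValueError("dangling reference: %s is still used" % oid)
        del self.objects[oid]

    def delete_mark(self, oid):
        # Policy 2: accept the deletion and mark the object as "dead";
        # dereferencing it from now on yields nil (None).
        del self.objects[oid]
        self.dead.add(oid)

    def deref(self, oid):
        return None if oid in self.dead else self.objects.get(oid)

s = Store()
s.objects["o1"] = ("o2",)               # o1 references o2
s.objects["o2"] = ("atom",)
s.delete_mark("o2")
print(s.deref("o2"))                    # None: all references now read as nil
```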
Another issue is object migration. It is easy to modify the value of an object. But
changing the status of an object is more complicated. For example, a person in the database
may act in a movie and overnight be turned into an actor. In object-oriented programming
languages, objects are often not allowed to change classes. Although such limitations also
exist in most present OODBs, object migration is an important feature that is needed in
many database applications. One approach, followed by some semantic data models, is
to permit objects to be associated with multiple classes or roles and also permit them to
migrate to different classes over time. This raises fundamental issues with regard to typing.
For example, how do we treat a reference to the manager of a department (that should be of
type Employee) when he or she leaves the company and is turned into a normal person?
Finally, as with the relational model, we need to consider evolution of the schema
itself. The OODB context is much richer than the relational, because there are many more
kinds of changes to consider: the class hierarchy, the type of a class, additions or deletions
of methods, etc.
Bibliographic Notes
Collections of papers on object-oriented databases can be found in [BDK92, KL89, ZM90].
The main characteristics of object-oriented database systems are described in [ABD+89].
An influential discussion of some foundational issues around the OODB paradigm is
[Bee90]. An important survey of subtyping and inheritance from the perspective of pro-
gramming languages, including the notion of domain-inclusion semantics, is [CW85].
Object-oriented databases are, of course, closely related to object-oriented program-
ming languages. The first of these is Smalltalk [GR83], and C++ [Str91] is fast becoming
the most widely used object-oriented programming language. Several commercial OODBs
are essentially persistent versions of C++. Several object-oriented extensions of Lisp have
been proposed; the article [B+86] introduces a rich extension called CommonLoops and
surveys several others.
There have been a number of approaches to provide a formal foundation [AK89,
Bee90, HTY89, KLW93] for OODBs. We can also cite as precursors attempts to formalize
semantic data models [AH87] and object-based models [KV84, HY84]. Recent graph-
oriented models, although they do not stress object orientation, are similar in spirit (e.g.,
[GPG90]).
The generic OODB model used here is directly inspired from the IQL model [AK89]
and that of O₂ [BDK92, LRV88]. The model and results on imperative method implementa-
tions are inspired by [HTY89, HS89a]. A similar model of imperative method implementa-
tion, which avoids nondeterminism and introduces a parallel execution model, is developed
in [DV91]. Method schemas and Theorem 21.4.6 are from [AKRW92]; the functional per-
spective and Theorem 21.4.8 are from [HKR93].
OIDs have been part of many data models. For example, they are called surrogates in
[Cod79], l-values in [KV93a], or object identiers in [AH87]. The notion of object and the
various forms of equalities among objects form the topic of [KC86]. Type inheritance and
multiple inheritance are studied in [CW85, Car88].
Since [KV84], various languages for models with objects have been proposed in the
various paradigms: calculus, algebra, rule based, or functional. Besides standard features
found in database languages without objects, the new primitives are centered around object
creation. Language-theoretic issues related to object creation were first considered in the
context of IQL [AK89]. Object creation is an essential component of IQL and is the main
reason for the completeness of the language. The need for a primitive in the style of copy
elimination to obtain determinate completeness was first noticed in [AK89]. The IQL
language is rule based with an inflationary fixpoint semantics in the style of datalog¬ of
Chapter 14.
The logic-based perspective on object creation based on Skolem was first informally
discussed in [Mai86] and refined variously in [CW89a, HY90, KLW93, KW89]. In partic-
ular, F-logic [KLW93] considers a different approach to inheritance. In our framework, the
classication of objects is explicit; in particular, when an object is created, it is within an
explicit class. In [KLW93], data organization is also specied by rules and thus may de-
pend on the properties of the objects involved. For instance, reasoning about the hierarchy
becomes part of the program.
Algebraic and imperative approaches to object creation are developed in [Day89].
Since then, object creation has been the center of much interesting research (e.g., [DV93,
HS89b, HY92, VandBG92, VandBGAG92, VandB93]). The characterization of queries ex-
pressible in while_obj (Theorem 21.3.2) is from [VandBG92]; this extends a previous result
from [AP92]. The proof of Proposition 21.3.3 is also from [VandBGAG92]. In [VandBGAG92,
VandB93], it is argued that the notion of determinate query may not be the
most appropriate one for the object-based context, and alternative notions, such as semide-
terministic queries, are discussed. A tractable construct yielding a determinate-complete
language is exhibited in [DV93]. However, the construct proposed there is global in nature
and is involved. The search for simpler and more natural local constructs continues.
As mentioned earlier, the OODB calculus and algebra presented here are mostly
variations of languages for non-object-based models and, in particular, of the languages
for complex values of Chapter 20. There have been several proposals of SQL extensions.
In particular, as indicated in Section 21.3, O₂SQL [BCD89] retains the flavor of SQL but
incorporates object orientation by adopting an elegant functional programming style. This
approach has been advanced as a standard in [Cat94].
Functional approaches to databases were considered rather early but attracted
only modest interest in the past [BFN82, Shi81]. The functional approach has become more
popular recently, both because of the success of object-oriented databases and due to re-
cent results on complex objects and types emphasizing the functional models [BTBN92,
BTBW92]. The use of a typed functional language similar to the λ-calculus as a formalism to
express queries is adapted from [HKM93]. Characterizations of qptime in functional terms
are from [HKM93, LM93]. The work in [AKRW92, HKM93, HKR93] provides interest-
ing bridges between (object-oriented) databases and well-developed themes in computer
science: applicative program schemas [Cou90, Gre75] and typed λ-calculi [Chu41, Bar84,
Bar63].
This chapter presented both imperative and functional perspectives on OODB meth-
ods. A different approach (based on rules and datalog with negation) has been used in
[ALUW93] to provide semantics to a number of variations of schemas with methods. The
connection between methods and rule-based languages is also considered in [DV91].
Views for OODBs are considered in [AB91, Day89, HY90, KKS92, KLW93]. The
merging of OODBs is considered in [WHW90]. Incremental maintenance of materialized
object-oriented views is considered in [Cha94]. The notion of object roles, or sharing ob-
jects between classes, is found in some semantic data models [AH87, HK87] and in recent
research on OODBs [ABGO93, RS91]. A query language that incorporates access to an
OODB schema is presented in [KKS92]. Classification has been central to the field of
knowledge representation in artificial intelligence, based on the central notion of taxo-
nomic reasoning (e.g., see [BGL85, MB92], which stem from the KL-ONE framework of
[BS85]); this approach has been carried to the context of OODBs in, for example, [BB92,
BS93, BBMR89, DD89]. Deductive object-oriented databases are the topic of a conference
(namely, the Intl. Conf. on Deductive and Object-Oriented Databases). Properties of object
migration between classes in a hierarchy are studied in [DS91, SZ89, Su92].
Exercises
Exercise 21.1 Construct an instance for the schema of Fig. 21.1 that corresponds to the
CINEMA instance of Chapter 3.
Exercise 21.2 Suppose that the class Actor_Director were removed from the schema of
Fig. 21.1. Verify that in this case there is no OID assignment for the schema such that there
is an actor who is also a director.
Exercise 21.3 Design an OODB schema for a bibliography database with articles, book
chapters, etc. Use inheritance where possible.
Exercise 21.4 Exhibit a class hierarchy that is not well formed.
Exercise 21.5 Add methods to the schema of Fig. 21.1 so that the resulting family of methods
violates rules unambiguous and covariance.
Exercise 21.6 Show that testing whether I ≡_OID J is in np and at least as hard as the graph
isomorphism problem (i.e., testing whether two graphs are isomorphic).
Exercise 21.7 Give an algorithm for testing value equality. What is the data complexity of
your algorithm?
Exercise 21.8 In this exercise, we consider various forms of equality. Value equality as dis-
cussed in the text is denoted =₁. Two objects o, o′ are 2-value equal, denoted o =₂ o′, if replac-
ing each object in ν(o) and ν(o′) by its value yields values that are equal. The relations =ᵢ for
each i are defined similarly. Show that for each i, =ᵢ₊₁ refines =ᵢ. Let n be a positive integer.
Give a schema and an instance over this schema such that for each i in [1, n], =ᵢ and =ᵢ₊₁ are
different.
Exercise 21.9 Design a database schema to represent information about persons, including
males and females with names and husbands and wives. Exhibit a cyclic instance of the schema
and an object o that has an infinite expansion. Describe the infinite tree representing the expan-
sion of o.
Exercise 21.10 Consider a database instance I over a schema S. For each o in I, let expand(o)
be the (possibly infinite) tree obtained by replacing each object by its value recursively. Show
that expand(o) is a regular tree (i.e., that it has a finite number of distinct subtrees). Derive from
this observation an algorithm for testing deep equality of objects.
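One way to exploit this regularity is a coinductive check that assumes pairs of objects equal until a structural mismatch is found. The following sketch illustrates the idea under an assumed encoding (not from the book): each object's value is a tuple whose entries are atoms or OIDs, and ν is a dict from OIDs to values.

```python
# Sketch of a deep-equality test exploiting the regularity of expansions:
# two objects are deep equal iff assuming equal every pair of objects
# reached in lockstep never leads to a mismatch of atoms or arities.

def deep_equal(nu, o1, o2, assumed=None):
    """nu maps each OID to its value (a tuple); non-OID entries are atoms."""
    if assumed is None:
        assumed = set()
    if (o1, o2) in assumed:             # coinductive hypothesis
        return True
    assumed.add((o1, o2))
    v1, v2 = nu[o1], nu[o2]
    if len(v1) != len(v2):
        return False
    for a, b in zip(v1, v2):
        if a in nu and b in nu:         # both entries are OIDs: recurse
            if not deep_equal(nu, a, b, assumed):
                return False
        elif a != b:                    # atoms must match exactly
            return False
    return True

# Two distinct cyclic objects with the same infinite expansion:
nu = {"p": ("p", "ann"), "q": ("q", "ann")}
print(deep_equal(nu, "p", "q"))         # True
```

A false recursive result propagates to the top immediately, so a pair assumed equal on a failing branch is never relied upon elsewhere.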
Exercise 21.11 In this exercise, we consider the schema S with a single class c that has type
σ(c) = [A : c, B : string]. Exhibit an instance I over S and two distinct objects in I that have the
same expansion. Exhibit two distinct instances over S with the same set of object expansions.
Exercise 21.12 Sketch an extension of the complex value algebra to provide an algebraic
simulation of the calculus of Section 21.3. Give algebraic versions of the queries of that section.
Exercise 21.13 Recall the approach to creating OIDs by extending datalog to use Skolem
function symbols. Consider the following programs:

T(f₁(x, y), x) ← S(x, y)            T(f₃(x, y), x) ← S(x, y)
T(f₂(x, y), x) ← S(x, y)            T(f₃(y, x), x) ← S(x, y)
T(f₁(x, y), y) ← S(x, y), S(y, x)   T(f₄(x, y), x) ← S(x, y), S(y, x)
              P                                     Q
(a) Two programs P₁, P₂ involving Skolem terms such as the foregoing are exposed
equivalent, denoted P₁ ≡_exp P₂, if for each input instance I having no OIDs, P₁(I) =
P₂(I). Show that P ≡_exp Q does not hold.
(b) Following the ILOG languages [HY92], given an instance J possibly with Skolem
terms, an obscured version of J is an instance J′ obtained from J by replacing each
distinct nonatomic Skolem term with a new OID, where multiple occurrences of a
given Skolem term are replaced by the same OID. (Intuitively, this corresponds to
hiding the history of how each OID was created.) Two programs P₁, P₂ are obscured
equivalent, denoted P₁ ≡_obs P₂, if for each input instance I having no OIDs, if J₁ is
an obscured version of P₁(I) and J₂ is an obscured version of P₂(I), then J₁ ≡_OID J₂.
Show that P ≡_obs Q.
(c) Let P and Q be two nonrecursive datalog programs, possibly with Skolem terms in
rule heads. Prove that it is decidable whether P ≡_exp Q. Hint: Use the technique for
testing containment of unions of conjunctive queries (see Chapter 4).
(d) A nonrecursive datalog program with Skolem terms in rule heads has isolated OID
invention if in each target relation at most one column can include nonatomic Skolem
terms (OID). Give a decision procedure for testing whether two such programs are
obscured equivalent. (Decidability of obscured equivalence of arbitrary nonrecursive
datalog programs with Skolem terms in rule heads remains open.)
Exercise 21.14 [VandBGAG92] Prove the "only if" part of Theorem 21.3.2. Hint: Associate
traces to new object ids, similar to the proof of Theorem 18.2.5. The extension homomorphism
is obtained via the natural extension to traces of automorphisms of the input.
Exercise 21.15 [HTY89]
(a) Define an operational semantics for the imperative model introduced in Section 21.4.
(b) Describe how a method in this model can simulate a while loop of arbitrary length.
Hint: Use a class c with associated type tuple(a : c, . . .), and let c′ be a subclass of c.
Construct the implementation of method m on c so that on input o, if the loop is to
continue, then it creates a new object o′ in c, sets o.a = o′, and calls m on o′. To
terminate the loop, create o′ in c′, and define m on c′ appropriately.
(c) Show how the computation of a Turing machine can be simulated by this model.
Exercise 21.16 Prove Proposition 21.4.1. Hint: Use a reduction from the PCP problem, sim-
ilar in spirit to the one used in the proof of Theorem 6.3.1. The effect of conditionals can be
simulated by putting objects in different classes and using dynamic binding.
Exercise 21.17 Describe how monadic method schemas can be simulated in the imperative
model.
Exercise 21.18 [AKRW92]
(a) Verify that the grammar G_S described in the proof of Theorem 21.4.6 has the stated
property.
(b) How big is G_S in terms of S?
(c) Find a variation of G_S that has size polynomial in the size of S. Hint: Break produc-
tion rules having form (v) into several rules, thereby reducing the overall size of the
grammar.
(d) Complete the proof of the theorem.
Exercise 21.19 [AKRW92]
(a) Show that it is undecidable whether a polyadic method schema is type safe. Hint: You
might use undecidability results for program schemas (see Bibliographic Notes), or
you might use a reduction from the PCP.
(b) A schema is recursion free if there are no two methods m, m′ such that m occurs in
some code for m′ and conversely. Show that type safety is decidable for recursion-
free method schemas.
Exercise 21.20
(a) Complete the formal definition of an imperative schema simulating a relational
query.
(b) Prove Theorem 21.4.2.
Exercise 21.21
(a) Suppose that the imperative model were extended to include types for classes that
have one level of the set construct (so tuple of set of tuple of atomic of class types is
permitted) and that the looping construct is extended to the sets occurring in these
types. Assume that the new command is not permitted. Prove that the family of
relational queries that this model can simulate is qpspace. Hint: Intuitively, because
the looping operates object at a time, it permits the construction of a nondeterministic
ordering of the database.
(b) Suppose that n levels of set nesting are permitted in the types of classes. Show that
this simulates qexp^(n−1)space.
Exercise 21.22
(a) Describe how the form of method inheritance used for polyadic method schemas can
be simulated using the originally presented form of method inheritance, which is
based only on the class of the first argument.
(b) Suppose that a base method m_R in an instance of a polyadic method schema is used
to simulate an n-ary relation R. In a simulation of this situation by an instance of a
conventional OODB schema, how many OIDs are present in the class on which m_R
is simulated?
Exercise 21.23 Show how to encode or, not, and equal using method schemas.
Exercise 21.24 Show how to encode pred_k^i and the join operation using method schemas.
Exercise 21.25 [HKR93] Prove Theorem 21.4.8. Hint: Show first that method schemas can
simulate relational algebra and then inflationary fixpoint. For the fixpoint, you might want to
use pred_k. For the other direction, you might want to simulate method schemas over ordered
databases by inflationary fixpoint.
22 Dynamic Aspects
Alice: How come we've waited so long to talk about something so important?
Riccardo: Talking about change is hard.
Sergio: We're only starting to get a grip on it.
Vittorio: And still have a long way to go.
At a fundamental level, updating a database is essentially imperative programming.
However, the persistence, size, and long life cycle of a database lead to perspec-
tives somewhat different from those found in programming languages. In this chapter, we
briefly examine some of these differences and sketch some of the directions that have been
explored in this area. Although it is central to databases, this area has received far less at-
tention from the theoretical research community than other topics addressed in this book.
The discussion in this chapter is intended primarily to give an overview of the important is-
sues raised concerning the dynamic aspects of databases. It therefore emphasizes examples
and intuitions much more than results and proofs.
This chapter begins by examining database update languages, including a simple
language that corresponds to the update capabilities of practical languages such as SQL,
and more complex ones expressed within a logic-based framework. Next optimization and
semantic properties of transactions built from simple update commands are considered,
including a discussion of the interaction of transactions and static integrity constraints.
The impact of updates in richer contexts is then considered. In connection with views,
we examine the issue of how to propagate updates incrementally from base data to views
and the much more challenging issue of propagating an update on a view back to the
base data. Next updates for incomplete information databases are considered. This includes
both the conditional tables studied in Chapter 19 and more general frameworks in which
databases are represented using logical theories.
The emerging field of active databases is then briefly presented. These incorporate
mechanisms for automatically responding to changes in the environment or the database,
and they often use a rule-based paradigm of specifying the responses.
This chapter concludes with a brief discussion of temporal databases, which support
the explicit representation of the time dimension and thus historical information.
A broad area related to dynamic aspects of databases (namely, concurrency control)
will not be addressed. This important area concerns mechanisms to increase the throughput
of a database system by interleaving multiple transactions while guaranteeing that the
semantics of the individual transactions is not lost.
22.1 Update Languages
Before embarking on a brief excursion into update languages, we should answer the fol-
lowing natural question: Why are update languages necessary? Could we not use query
languages to specify updates?
The difference between query and update languages is subtle but important. To specify
an update, we could indeed dene the new database as the answer to a query posed against
the old database. However, this misses an essential characteristic of updates: Most often,
they involve small changes to the current database. Query languages are not naturally suited
to speak explicitly about change. In contrast, update languages use as building blocks
simple statements expressing change, such as insertions, deletions, and modications of
tuples in the database.
In this section, we outline several formal update languages and point to some theoret-
ical issues that arise in this context.
Insert-Delete-Modify Transactions
We begin with a simple procedural language to specify insertions, deletions, and modifica-
tions. Most commercial relational systems provide at least these update capabilities.
To simplify the presentation, we suppose that the database consists of a single relation
schema R. Everything can be extended to the multirelational case. An insertion is an
expression ins(t), where t is a tuple over att(R). This inserts the tuple t into R. [We
assume set-based semantics, under which ins(t) has no effect if t is already present in
R.] A deletion removes from R all tuples satisfying some stated set of conditions. More
precisely, a condition is an (in)equality of the form A = c or A ≠ c, where A ∈ att(R) and
c is a constant. A deletion is an expression del(C), where C is a finite set of conditions.
This removes from R all tuples satisfying each condition in C. Finally, a modification is
an expression mod(C → C′), where C, C′ are sets of conditions, with C′ containing only
equalities A = c. This selects all tuples in R satisfying C and then, for each such tuple
and each A = c in C′, sets the value of A to c. An update over R is an insertion, deletion,
or modification over R. An IDM transaction (for insert, delete, modify) over R is a finite
sequence of updates over R. This is illustrated next.
Example 22.1.1 Consider the relation schema Employee with attributes N (Name), D
(Department), R (Rank). The following IDM transaction fires the manager of the parts
department, transfers the manager of the sales department to the parts department, and
hires Moe as the new manager for the sales department:
del({D = parts, R = manager});
mod({D = sales, R = manager} → {D = parts});
ins(⟨Moe, sales, manager⟩)
The same update can be expressed in SQL as follows:
delete from Employee
where D = 'parts' and R = 'manager';
update Employee
set D = 'parts'
where D = 'sales' and R = 'manager';
insert into Employee values ('Moe', 'sales', 'manager');
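The semantics just described can be made concrete with a small set-based interpreter (a sketch; the helper names and the initial Employee instance below are made up for illustration):

```python
# Minimal sketch of set-based IDM semantics over a single relation.
# Tuples are frozensets of (attribute, value) pairs; a condition is a
# triple (attribute, op, constant) with op in {'=', '!='}; a modification
# target C' is a dict of equalities A = c.

def satisfies(t, conds):
    d = dict(t)
    return all((d[a] == c) if op == '=' else (d[a] != c)
               for a, op, c in conds)

def ins(rel, t):                        # no effect if t already present
    return rel | {t}

def delete(rel, conds):                 # del(C)
    return {t for t in rel if not satisfies(t, conds)}

def mod(rel, conds, target):            # mod(C -> C')
    out = set()
    for t in rel:
        d = dict(t)
        if satisfies(t, conds):
            d.update(target)            # set each A to c for A = c in C'
        out.add(frozenset(d.items()))
    return out

def tup(n, d, r):                       # Employee tuples (made-up data)
    return frozenset({("N", n), ("D", d), ("R", r)})

# The transaction of Example 22.1.1 on a hypothetical instance:
emp = {tup("Ann", "parts", "manager"), tup("Bob", "sales", "manager")}
emp = delete(emp, [("D", "=", "parts"), ("R", "=", "manager")])
emp = mod(emp, [("D", "=", "sales"), ("R", "=", "manager")], {"D": "parts"})
emp = ins(emp, tup("Moe", "sales", "manager"))
print(tup("Bob", "parts", "manager") in emp)   # True
```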
As for queries, a question of central interest to update languages is optimization. To
see how IDM transactions can be optimized, it is useful to understand when two such
transactions are equivalent. It turns out that equivalence of IDM transactions has a sound
and complete axiomatization. Following are some simple axioms:
mod(C → C′); del(C′) ≡ del(C); del(C′)
ins(t); mod(C → C′) ≡ mod(C → C′); ins(t′)
    where t satisfies C and {t′} = mod(C → C′)({t})
and a slightly more complex one:
del(C₃); mod(C₁ → C₃); mod(C₂ → C₁); mod(C₃ → C₂)
≡ del(C₃); mod(C₂ → C₃); mod(C₁ → C₂); mod(C₃ → C₁),
where C₁, C₂, C₃ are mutually exclusive sets of conditions.
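The first of these axioms is easy to confirm mechanically. The following sketch checks mod(C → C′); del(C′) ≡ del(C); del(C′) on random instances of a two-attribute relation (restricted, for simplicity, to equality conditions; all helper names are illustrative):

```python
# Randomized sanity check of the axiom
#   mod(C -> C'); del(C') == del(C); del(C')
# over a relation of sort AB with a tiny domain.

import itertools
import random

def sat(t, C):                          # t = (a, b); C maps attribute -> constant
    vals = {"A": t[0], "B": t[1]}
    return all(vals[x] == c for x, c in C.items())

def apply_mod(rel, C, Cp):              # mod(C -> C')
    out = set()
    for (a, b) in rel:
        if sat((a, b), C):
            a, b = Cp.get("A", a), Cp.get("B", b)
        out.add((a, b))
    return out

def apply_del(rel, C):                  # del(C)
    return {t for t in rel if not sat(t, C)}

rng = random.Random(0)
dom = [0, 1, 2]
for _ in range(200):
    rel = {t for t in itertools.product(dom, dom) if rng.random() < 0.5}
    C = {x: rng.choice(dom) for x in rng.sample(["A", "B"], rng.randint(1, 2))}
    Cp = {x: rng.choice(dom) for x in rng.sample(["A", "B"], rng.randint(1, 2))}
    assert apply_del(apply_mod(rel, C, Cp), Cp) == apply_del(apply_del(rel, C), Cp)
print("axiom verified on 200 random instances")
```

The intuition matches the check: a tuple satisfying C ends up satisfying C′ after the modification and is deleted on the left, just as it is deleted by del(C) on the right; every other tuple is untouched by the modification and survives on either side exactly when it fails C′.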
We can dene criteria for the optimization of IDM transactions along two main lines:
Syntactic: We can take into account the length of the transaction as well as the kind of
operations involved (for example, it may be reasonable to assume that insertions are
simpler than modications).
Semantic: This can be based on the number of tuple operations actually performed when
the transaction is applied.
Various definitions are possible based on the preceding criteria. It can be shown that
there exists a polynomial-time algorithm that optimizes IDM transactions with respect
to a reasonable definition based on syntactic and semantic criteria. The syntactic criteria
involve the number of insertions, deletions, and modifications. The semantic criteria are
based on the number of tuples touched at runtime by the transaction. We omit the details
here.
Example 22.1.2 Consider the IDM transaction over a relational schema R of sort AB:
mod({A = 0, B = 1} → {B = 2}); ins(⟨0, 1⟩); ins(⟨3, 2⟩);
mod({A = 0, B = 1} → {B = 2}); mod({A = 0, B = 0} → {B = 1});
mod({A = 0, B = 0} → {B = 1}); mod({A = 0, B = 2} → {B = 0});
mod({A = 0, B = 2} → {B = 0}); del({A = 0, B = 0}).
Assuming that insertions are less expensive than deletions, which are less expensive than
modifications, an optimal IDM transaction equivalent to the foregoing is
del({A = 0, B = 1}); del({A = 0, B = 2});
mod({A = 0, B = 1} → {B = 2});
mod({B = 0} → {B = 1});
mod({A = 0, B = 2} → {B = 0});
ins(⟨0, 0⟩).
Thus the six modifications, one deletion, and two insertions of the original transaction were
replaced by three modifications, two deletions, and one insertion.
Another approach to optimization is to turn some of the axioms of equivalence into
simplification rules, as in
mod(C → C′); del(C′) ⇒ del(C); del(C′).
It can be shown that such a set of simplification rules can be used to optimize a restricted
set of IDM transactions that satisfy a syntactic acyclicity condition. For the other transac-
tions, applications of the simplification rules yield a simpler, but not necessarily optimal,
transaction. The simplification rules have the advantage that they are local and can be eas-
ily applied even online, whereas the complete optimization algorithm is global and has to
know the entire transaction in advance.
Rule-Based Update Languages
The IDM transactions provide a simple update language of limited power. This can be
extended in many ways. One possibility is to build another procedural language based
on tuple insertions, deletions, and modifications, which includes relation variables and
an iterative construct. Another, which we illustrate next, is to use a rule-based approach.
For example, consider the language datalog¬¬ described in Chapter 17, with its fixpoint
semantics. Recall that rules allow for both positive and negative atoms in heads of rules;
consistently with the fixpoint semantics, the positive atoms can be viewed as insertions
of facts and the negative atoms as deletions of facts. For example, the following program
removes all cycles of length one or two from the graph G:
¬G(x, y) ← G(x, y), G(y, x).
In the usual fixpoint semantics, rules are fired in parallel with all possible instantiations
for the variables. This yields a deterministic semantics. Some practical rule-based update
languages take an alternative approach, which yields a nondeterministic semantics: The
rules are fired one instantiation at a time. With this semantics, the preceding program
provides some orientation of the graph G. Note that generally there is no way to obtain an
orientation of a graph deterministically, because a nondeterministic choice of edges to be
removed may be needed.
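The one-instantiation-at-a-time semantics for the cycle-removal rule can be sketched as follows (the orient helper is hypothetical; the random choice stands in for the nondeterminism):

```python
# Sketch of the "one instantiation at a time" nondeterministic semantics
# for the rule  not G(x, y) <- G(x, y), G(y, x):  while the body has a
# satisfying instantiation, fire the rule once, deleting a single edge.

import random

def orient(G, rng=random):
    G = set(G)
    while True:
        matches = [(x, y) for (x, y) in G if (y, x) in G]
        if not matches:
            return G
        x, y = rng.choice(matches)      # the nondeterministic choice
        G.discard((x, y))               # negative head atom: delete the fact

G = {(1, 2), (2, 1), (2, 3), (3, 2), (4, 4)}
H = orient(G)
print(all((y, x) not in H for (x, y) in H))   # True: no 1- or 2-cycles remain
```

Each firing removes one edge, so the process terminates, and of every symmetric pair exactly one direction survives, giving one of the possible orientations.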
A deterministic language expressing all updates can be obtained by extending
datalog¬¬ with the ability to invent new values, in the spirit of the language while_new
in Chapter 18. This can be done in the manner described in Exercise 18.22. The same
language with nondeterministic semantics can be shown to express all nondeterministic
updates.
The aforementioned languages yield a bottom-up evaluation procedure. The body of
the rule is first checked, and then the actions in the head are executed. Another possibility
is to adopt a top-down approach, in the spirit of the assert in Prolog. Here the actions
to be taken are specified in rule bodies. A good example of this approach is provided
by Dynamic Logic Programming (DLP). Interestingly, this language allows us to test
hypothetical conditions of the form "Would ϕ hold if t were inserted?" This, and the
connection of DLP with Prolog, is illustrated next.
Example 22.1.3 Consider a database schema with relations ES of sort ⟨Emp, Sal⟩ (em-
ployees and their salaries), ED of sort ⟨Emp, Dep⟩ (employees and their departments), and
DA of sort ⟨Dep, Avg⟩ (average salary in each department).
Suppose that an update is intended to hire John in the toys department with a salary of
200K, under the condition that the average salary of the department stays below 50K. In
the language DLP, this update is expressed by
hire(emp1, sal1, dep1) ←
+ES(emp1, sal1)(+ED(emp1, dep1)(DA(dep1, avg1) & avg1 < 50K)).
(Other rules are, of course, needed to define DA.) A call hire(John, 200K, Toys) hires John in
the toys department only if, after hiring him, the average salary of the department remains
below 50K. The + symbol indicates an insertion. Here the conditions in parentheses should
hold after the two insertions have been performed; if not, then the update is not realized.
Testing a condition under the assumption of an update is a form of hypothetical reasoning.
It is interesting to contrast the semantics of DLP with that of Prolog. Consider the following Prolog program:

?- assert(ES(john, 200)), assert(ED(john, toys)),
   DA(toys, Avg1), Avg1 < 50.

In this program, the insertions into ES and ED will be performed even if the conditions are not satisfied afterward. (The reader familiar with Prolog is encouraged to write a program that has the desired semantics.)
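The intended semantics — perform the insertions, then test the condition and undo the insertions if it fails — can be sketched as follows. This is a Python stand-in for illustration only; the relation names follow Example 22.1.3, but the function and its rollback logic are ours, not DLP syntax:

```python
# Hypothetical-update sketch: insert, test the condition, roll back on failure.
# ES holds (employee, salary) pairs; ED holds (employee, department) pairs.

def hire(db, emp, sal, dep, cap=50_000):
    """Hire emp only if the department's average salary stays below cap."""
    db["ES"].add((emp, sal))
    db["ED"].add((emp, dep))
    salaries = [s for (e, s) in db["ES"] if (e, dep) in db["ED"]]
    if sum(salaries) / len(salaries) >= cap:
        # Condition fails after the insertions: undo both, update not realized.
        db["ES"].discard((emp, sal))
        db["ED"].discard((emp, dep))
        return False
    return True

db = {"ES": {("Ann", 40_000)}, "ED": {("Ann", "Toys")}}
print(hire(db, "John", 200_000, "Toys"))  # False: average would reach 120K
print(("John", 200_000) in db["ES"])      # False: the insertion was undone
```

Unlike the Prolog program above, the failed condition causes the insertions to be withdrawn.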
A similar top-down approach to updates is adopted in Logical Data Language (LDL).
Updates concern not only instances of a fixed schema. Sometimes the schema itself
needs to be changed (e.g., by adding an attribute). Some practical update languages include
constructs for schema change. The main problem to be resolved is how the existing data
can be fit to the new schema.
In deductive databases, some relations are defined using rules. Occasionally these
definitions may have to be changed, leading to updates of the rule base. There are
languages that can be used to specify such updates.
22.2 Transactional Schemas
Typically, database systems restrict the kinds of updates that users can perform. There are
three main ways of doing this:
(a) Specify constraints (say, fds) that the database must satisfy and reject any update
that leads to a violation of the constraints.
(b) Restrict the updates themselves by only allowing the use of a set of prespecified, valid updates.
(c) Permit users to request essentially arbitrary updates, but provide an automatic
mechanism for detecting and repairing constraint violations.
Object-oriented databases essentially embrace option (b); updates are performed only
by methods specified at the schema level, and it is assumed that these will not violate the
constraints (see Chapter 21). Both options (a) and (b) are present in the relational model.
Several commercial systems can recognize and abort on violation of simple constraints
(typically key and simple inclusion dependencies). However, maintenance of more complex
constraints is left to the application software. Option (c) is supported by the emerging
field of active databases, which is discussed in the following section.
We now briefly explore some issues related to approach (b) in connection with the
relational model. To illustrate the issues, we use simple procedures based on IDM transactions.
The procedures we use are parameterized IDM transactions, obtained by allowing
variables in addition to constants in conditions of IDM transactions. The variables are used
as parameters. A database schema R together with a finite set of parameterized IDM transactions
over R is called an IDM transactional schema.
Example 22.2.1 Consider a database schema R with two relations, TA (Teaching Assistant) of sort ⟨Name, Course⟩, and PHD (Ph.D. student) of sort ⟨Name, Address⟩. The following parameterized IDM transactions allow the hiring and firing of TAs (subscripts indicate the relation to which each update applies):

hire(x, y, z) = del_TA(Name = x); ins_TA(⟨x, y⟩);
                del_PHD(Name = x); ins_PHD(⟨x, z⟩)
fire(x) = del_TA(Name = x)

The pair T = ⟨R, {hire, fire}⟩ is an IDM transactional schema. Note in this simple example that once a name n is incorporated into the PHD relation, it can never be removed.
Clearly, we could similarly define transactional schemas in conjunction with any update language.
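For concreteness, the transactions of Example 22.2.1 can be mimicked as follows. This is a Python stand-in, not IDM syntax; relations are modeled as sets of tuples with positional attributes, and the dictionary layout is our own:

```python
# Parameterized-transaction sketch for Example 22.2.1.

def hire(db, x, y, z):
    # del_TA(Name = x); ins_TA(<x, y>); del_PHD(Name = x); ins_PHD(<x, z>)
    db["TA"] = {t for t in db["TA"] if t[0] != x} | {(x, y)}
    db["PHD"] = {t for t in db["PHD"] if t[0] != x} | {(x, z)}

def fire(db, x):
    # del_TA(Name = x)
    db["TA"] = {t for t in db["TA"] if t[0] != x}

db = {"TA": set(), "PHD": set()}
hire(db, "Eve", "CS101", "12 Oak St")   # a call to the parameterized transaction
fire(db, "Eve")
print(db["TA"])    # set(): Eve is no longer a TA
print(db["PHD"])   # {('Eve', '12 Oak St')}: her PHD tuple persists, as noted above
```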
Suppose T is an IDM transactional schema. To apply the parameterized transactions,
values must be supplied to the variables. A transaction obtained by replacing the variables
of a parameterized transaction t in T by constants is a call to t. The only updates allowed
by an IDM transactional schema are performed by calls to its parameterized transactions.
The set of instances that can be generated by such calls (starting from the empty instance)
is denoted Gen(T).
Transactional schemas offer an approach for constraint enforcement, essentially by preventing updates that violate them. So it is important to understand to what extent they can do so. First we need to clarify the issue. Suppose T is an IDM transactional schema and Σ is a set of constraints over a database schema R; Sat(Σ) denotes all instances over R satisfying Σ. If T is to replace Σ, we would expect the following properties to hold:

soundness of T with respect to Σ: Gen(T) ⊆ Sat(Σ); and
completeness of T with respect to Σ: Gen(T) ⊇ Sat(Σ).

Thus T is sound and complete with respect to Σ iff it generates precisely the instances satisfying Σ.
Example 22.2.2 Consider again the IDM transactional schema T in Example 22.2.1. Let Σ be the following constraints:

TA: Name → Course
PHD: Name → Address
TA[Name] ⊆ PHD[Name]

It is easily seen that T in Example 22.2.1 is sound and complete with respect to Σ. That is, Gen(T) = Sat(Σ) (Exercise 22.7).
This example also highlights a limitation in the notion of completeness: It can be seen
that there are pairs I and J of instances in Sat(Σ) where I cannot be transformed into J
using T. In other words, there are valid database states I and J such that when in state I,
J is never reachable. Such forbidden transitions are also a means of enriching the model,
because we can view them as temporal constraints on the database evolution. We will return
to temporal constraints later in this chapter.
Of course, the ability of transaction schemas to replace constraints depends on the update language used. For IDM transactional schemas, we can show the following (Exercise 22.8):

Theorem 22.2.3 For each database schema R and set Σ of fds and acyclic inclusion dependencies over R, there exists an IDM transactional schema T that is sound and complete with respect to Σ.

Thus IDM transactional schemas are capable of replacing a significant set of constraints. The kind of difficulty that arises with more general constraints is illustrated next.
Example 22.2.4 Consider a relation R of sort ABC and the following set Σ of constraints:

the embedded join dependency
  ∀x, y, z, x′, y′, z′ (R(x, y, z) ∧ R(x′, y′, z′) → ∃z″ R(x, y′, z″)),
the functional dependency AB → C,
the inclusion dependency R[A] ⊆ R[C],
the inclusion dependency R[B] ⊆ R[A],
the inclusion dependency R[A] ⊆ R[B].

It is easy to check that, for each relation satisfying the constraints, the number of constants in the relation is a perfect square (n², n ≥ 0). Thus there are unbounded gaps between instances in Sat(Σ). There is no IDM transactional schema T such that Sat(Σ) = Gen(T), because the gaps cannot be crossed using calls to parameterized transactions with a bounded number of parameters. Moreover, this problem is not specific to IDM transactional schemas; it arises with any language in which procedures can only introduce a bounded number of new constants into the database at each call.
Another natural question relating updates and constraints is, What about checking
soundness and/or completeness of IDM transactional schemas with respect to given constraints?
Even in the case of IDM transactional schemas, such questions are generally undecidable.
There is one important exception: Soundness of IDM transactional schemas with
respect to fds is decidable. These questions are explored in Exercise 22.12.
22.3 Updating Views and Deductive Databases
We now turn to the impact of updates on views. Views are an important aspect of databases.
The interplay between views and updates is intricate. We can mention in particular two
important issues. One is the view maintenance problem: A view has been materialized and
the problem is to maintain it incrementally when the database is updated. An important
variation of this is in the context of deductive databases when the view consists of idb
relations. The other is known as the view update problem: Given a view and an update
against a view, the problem is to translate the update into a corresponding update against
the base data. This section considers these two issues in turn.
View Maintenance
Suppose that a base schema B and view schema V are given along with a (total) view mapping f : Inst(B) → Inst(V). Suppose further that a materialized view is to be maintained [i.e., whenever the base database holds an instance I_B, then the view schema should be holding f(I_B)].
For this discussion, an update for a schema R is considered to be a mapping from Inst(R) to Inst(R). If constraints are present, it is assumed that an update cannot map to instances violating the constraints. The updates considered here might be based on IDM transactions or might be more general. We shall often speak of the update μ that maps instance I to instance I′, and by this we shall mean the set of insertions and deletions that need to be made to I to obtain I′.

[Figure 22.1: Relationship of views and updates. The view mapping f sends the base instance I_B to the view instance I_V and I′_B to I′_V; the base update μ maps I_B to I′_B, and the view update ν maps I_V to I′_V.]
Suppose that the base database B is holding I_B and that update μ maps this to I′_B (see Fig. 22.1). A naive way to keep the view up to date is to simply compute f(I′_B). However, I′_B is typically large relative to the difference between I_V and I′_V. It is thus natural to search for more efficient ways to find the update that maps I_V to I′_V = f(μ(I_B)). This is the view maintenance problem.
There are generally two main components to solutions of the view maintenance problem.
The first involves developing algorithms to test whether an update to the base data can
affect the view. Given such an algorithm, an update is said to be irrelevant if the algorithm
certifies that the update cannot affect the view, and it is said to be relevant otherwise.
Example 22.3.1 Let the base database schema be B = (R[AB], S[BC]), and consider the following views:

V1 = R ⋈ σ_{C>50}(S)
V2 = π_A(R)
V3 = R ⋈ S
V4 = π_{AC}(R ⋈ S).

Inserting ⟨b, 20⟩ into S cannot affect views V1 or V2. On the other hand, whether or not this insertion affects V3 or V4 depends on the data already present in the database.
Various algorithms have been developed for determining relevance with varying degrees of precision. A useful technique involves maintaining auxiliary information, as illustrated next.
Example 22.3.2 Recall view V2 of Example 22.3.1, and suppose that R currently holds

R   A    B
    a    20
    a    30
    a′   80

Deleting ⟨a, 20⟩ has no impact on the view, whereas deleting ⟨a′, 80⟩ has the effect of deleting a′ from the view. One way to monitor this is to maintain a count on the number of distinct ways that a value can arise; if this count ever reaches 0, then the value should be deleted from the view.
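A count-based maintenance scheme for the projection view V2 = π_A(R) can be sketched as follows (a Python illustration; the class and method names are ours):

```python
# Maintain pi_A(R) under deletions by counting derivations of each A-value.
from collections import Counter

class ProjectionView:
    def __init__(self, r):
        self.counts = Counter(a for (a, b) in r)   # derivations per A-value

    def view(self):
        return set(self.counts)

    def delete(self, a, b):
        self.counts[a] -= 1
        if self.counts[a] == 0:        # last derivation gone: leave the view
            del self.counts[a]

v = ProjectionView({("a", 20), ("a", 30), ("a2", 80)})
v.delete("a", 20)
print(v.view())   # {'a', 'a2'}: 'a' is still derived by ('a', 30)
v.delete("a2", 80)
print(v.view())   # {'a'}: 'a2' had a single derivation, so it is removed
```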
The other main component of solutions to the view maintenance problem concerns the
development of incremental evaluation algorithms. This is closely related to the seminaive
algorithm for evaluating datalog programs (see Chapter 13).
Example 22.3.3 Recall view V3 from Example 22.3.1, and let Δ⁺R and Δ⁺S denote sets of tuples that are to be inserted into R and S, respectively. It is easily verified that

(R ∪ Δ⁺R) ⋈ (S ∪ Δ⁺S) = (R ⋈ S) ∪ (R ⋈ Δ⁺S) ∪ (Δ⁺R ⋈ S) ∪ (Δ⁺R ⋈ Δ⁺S).

Thus the new join can be found by performing three (typically smaller) joins followed by some unions.
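The identity can be checked concretely with a small sketch (Python; `join` implements the natural join of R[AB] and S[BC] on the shared B attribute, and all names are ours):

```python
# Incremental maintenance of R join S: compute only the three delta joins.

def join(r, s):
    # natural join of R[AB] and S[BC] on B
    return {(a, b, c) for (a, b) in r for (b2, c) in s if b == b2}

def incremental_join(old_join, r, s, dr, ds):
    return old_join | join(r, ds) | join(dr, s) | join(dr, ds)

R, S = {(1, 2)}, {(2, 3)}
dR, dS = {(4, 2)}, {(2, 5)}
old = join(R, S)
new = incremental_join(old, R, S, dR, dS)
print(new == join(R | dR, S | dS))   # True: the identity holds on this instance
```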
It is relatively straightforward to develop incremental evaluation expressions, such as
in the preceding example, for all of the relational algebra operators (see Exercise 22.13).
In some cases, these expressions can be refined by using information about constraints,
such as key and functional dependencies, on the base data.
Incremental Update of Deductive Views
The view maintenance problem has also been studied in connection with views constructed with (stratified) datalog^(¬). In general, the techniques used are analogous to those discussed earlier but are generalized to incorporate recursion. In the context of stratified datalog¬, various heuristics have been adapted from the field of belief revision for incrementally maintaining supports (i.e., auxiliary information that holds the justifications for the presence of a fact in the materialized output of the program).
An interesting research direction that has recently emerged focuses on the ability of first-order queries to express incremental updates on views defined using datalog. The framework for these problems is as follows. The base schema B and view schema V are as before, except that V contains only one relation and the view f is defined in terms of a datalog program P. A basic question is, Given P, is there a first-order query φ such that φ(I_B, I_V, +R(t)) = P(I_B ∪ {R(t)}) for each choice of I_B, I_V = P(I_B), and insertion +R(t) where R ∈ B? If this holds, then P is said to be first-order incrementally definable (FOID) (without auxiliary relations).
Example 22.3.4 Consider a binary relation G[AB] and the usual datalog program P that computes the transitive closure of G in T[AB]. Suppose that I is an instance of G, and J is P(I). Suppose that tuple ⟨a, b⟩ is inserted into I. Then a tuple ⟨a′, b′⟩ will be inserted into J iff one of the following occurs:

(a) a′ = a and b = b′;
(b) a′ = a and ⟨b, b′⟩ ∈ J;
(c) ⟨a′, a⟩ ∈ J and b = b′; or
(d) ⟨a′, a⟩ ∈ J and ⟨b, b′⟩ ∈ J.

The preceding conditions can clearly be specified by a first-order query. It easily follows that P is FOID (see Exercise 22.21).
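Conditions (a) through (d) translate directly into an incremental step; the following Python sketch (names ours; sets of pairs stand in for relations) computes the new closure from J and the inserted edge:

```python
# First-order incremental step for transitive closure (Example 22.3.4).

def insert_edge(j, a, b):
    left = {a} | {x for (x, y) in j if y == a}    # a' with a' = a or (a', a) in J
    right = {b} | {y for (x, y) in j if x == b}   # b' with b' = b or (b, b') in J
    return j | {(x, y) for x in left for y in right}

# Closure of the single edge 1 -> 2 is {(1, 2)}; now insert the edge 2 -> 3.
print(sorted(insert_edge({(1, 2)}, 2, 3)))   # [(1, 2), (1, 3), (2, 3)]
```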
Several variations of FOIDs have been studied. These include FOIDs with auxiliary
relations (i.e., that permit the maintenance of derived relations not in the original datalog
program) and FOIDs that support incremental updates for sets of insertions and/or
deletions. FOIDs have been found for a number of restricted classes of datalog programs.
However, it remains open whether there is a datalog program that is not FOID with auxiliary
relations.
Basic Issues in View Update
The view update problem is essentially the inverse of the view maintenance problem. Referring again to Fig. 22.1, the problem now is, Given I_B, I_V, and update ν on I_V, find an update μ so that the diagram commutes.
The first obvious problem here is the potential for ambiguity.
Example 22.3.5 Recall the view V2 of Example 22.3.1. Suppose that the base value of R is {⟨a, b⟩} (and the base value of S is ∅). Thus the view holds {a}. Now consider an update ν to the view that inserts a′. Some possible choices for μ include

(a) Insert ⟨a′, b⟩ into R.
(b) Insert ⟨a′, b′⟩ into R for some b′ ∈ dom.
(c) Insert {⟨a′, b′⟩ | b′ ∈ X} into R, where X is a finite subset of dom.
(d) Insert ⟨a′, b′⟩ into R for some b′ ∈ dom, and replace ⟨a, b⟩ by ⟨a, b′⟩.

Possibility (d) seems undesirable, because it affects a tuple in a base relation that is, intuitively speaking, independent of the view update. Possibilities (a) and (b) seem more appealing than (c), but (c) cannot be ruled out. In any case, it is clear that there are a large number of updates μ that correspond to ν.
The fundamental problem, then, is how to select one update to the base data given that many possibilities may exist. One approach to resolving the ambiguity involves examining the intended semantics of the database and the view.
Example 22.3.6 Consider a schema Employee[Name, Department, Team_position], which records an employee's department and the position he or she plays in the corporate baseball league. The value no indicates that the employee does not play in the league. It is assumed that Name is a key. Consider the views defined by

Sales = σ_{Department='Sales'}(Employee)
Baseball = π_{Name,Team_position}(σ_{Team_position≠'no'}(Employee))

Typically, if tuple ⟨Joe, Sales, shortstop⟩ is deleted from the Sales view, then this tuple should also be deleted from the underlying Employee relation. In contrast, if tuple ⟨Joe, shortstop⟩ is deleted from the Baseball view, it is typically most natural to replace the underlying tuple ⟨Joe, d, shortstop⟩ in Employee by ⟨Joe, d, no⟩ (i.e., to remove Joe from the baseball league rather than forcing him out of the company).
As just illustrated, the correct translation of a view update can easily depend on the
semantics associated with the view as well as the syntactic definition. Research in this
area has developed notions of update translations that perform a minimal change to the
underlying database. Algorithms that generate families of acceptable translations of views
have been developed, so that the database administrator may choose at view definition time
the most appropriate one.
Another issue in view update is that a requested update may not be permitted on the
view, essentially because of constraints implicit to the view definition and algorithm for
choosing translations of updates.
Example 22.3.7 Recall the view V4 of Example 22.3.1, and suppose that the base data is

R   A    B        S   B    C
    a    20           20   c
    a′   20           20   c′

In this case the view contains {⟨a, c⟩, ⟨a, c′⟩, ⟨a′, c⟩, ⟨a′, c′⟩}.
Suppose that the user requests that ⟨a, c⟩ be deleted. Typically, this deletion is mapped into one or more deletions against the base data. However, deleting R(a, 20) results in a side-effect (namely, the deletion of ⟨a, c′⟩ from the view). Deletion of S(20, c) also yields a side-effect.
Formal issues surrounding such side-effects of view updates are largely unexplored.
Complements of Views
We now turn to a more abstract formulation of the view update problem. Although it is
relatively narrow, it provides an interesting perspective.
In this framework, a view over a base schema B is defined to be a (total) function f
from Inst(B) into some set. In practice this set is typically Inst(V) for some view schema V;
however, this is not required for this development. [The proof of Theorem 22.3.10, which
presents a completeness result, uses a view whose range is not Inst(V) for any schema V.]
A binary relation ⪯ on views is defined so that f ⪯ g if for all base instances I and I′, g(I) = g(I′) implies f(I) = f(I′). Intuitively, f ⪯ g if g can distinguish more instances than f. For view f, let ≡_f be the equivalence relation on Inst(B) defined by I ≡_f I′ iff f(I) = f(I′). It is clear that f ⪯ g iff ≡_g is a refinement of ≡_f, and thus ⪯ can be viewed as a partial order on the equivalence relations over Inst(B).
Two views f, g are equivalent, denoted f ≡ g, if f ⪯ g and g ⪯ f. This is an equivalence relation on views. In the following, the focus is primarily on the equivalence classes under ≡. Let ⊤ denote the view that is simply the identity, and let ⊥ denote a view that maps every base instance to ∅. It is clear that (the equivalence classes represented by) ⊤ and ⊥ are the maximal and minimal elements of the partial order ⪯. We use cross-product as a binary operator to create views: The product of views f and g is defined so that (f × g)(I) = (f(I), g(I)). View g is a complement of view f if f × g ≡ ⊤. Intuitively, this means that the base relations can be completely identified if both f and g are available. Clearly, each view f has a trivial complement: ⊤.
Example 22.3.8 (a) Let B = {R[ABC]} along with the fd R: A → B, and consider the view f = π_{AB}(R). Let g = π_{AC}(R). It follows from Proposition 8.2.2 that g is a complement of f.
(b) Let B = {R[AB]} and f = π_A(R). As mentioned earlier, ⊤ is a complement of f. It turns out that there are other complements of f, but they cannot be expressed using the relational algebra (see Exercise 22.25).
(c) Let B = {Employee(Name, Salary, Bonus, Total_pay)}, with the constraints that Name is a key and that for each tuple ⟨n, s, b, t⟩ in Employee we have s + b = t. Consider the view f = π_{Name,Salary}(Employee). Consider the views

g1 = π_{Name,Bonus}(Employee)
g2 = π_{Name,Total_pay}(Employee).

Both g1 and g2 are complements of f.
Thus each view has at least one complement (namely, ⊤) and may have more than one minimal complement.
In some cases, complements can be used to resolve ambiguity in the view update problem in the following way. Suppose that view f has complement g, and suppose that I_V = f(I_B) and update ν on I_V are given. An update μ is a g-translation of ν if f(μ(I_B)) = ν(f(I_B)) and g(μ(I_B)) = g(I_B) (see Fig. 22.2).

[Figure 22.2: Properties of a g-translation μ of view update ν on view f. The update μ maps I_B to I′_B; the product view f × g maps I_B to (f(I_B), g(I_B)) and I′_B to (ν(f(I_B)), g(I_B)), and (f × g)⁻¹ recovers I′_B from the updated pair.]

Intuitively, a g-translation accomplishes the update ν but leaves g(I_B) fixed. By the properties of complements, for an update ν there is at most one g-translation of ν.
Example 22.3.9 (a) Recall the base schema {R[ABC]}, view f, and complement g of Example 22.3.8(a). Suppose that ⟨a, b⟩ is in the view, and consider the update ν on the view that modifies ⟨a, b⟩ to ⟨a, b′⟩. The update μ defined to modify all tuples ⟨a, b, c⟩ of R into ⟨a, b′, c⟩ is a g-translation of ν. On the other hand, given an insertion or deletion ν to the view, there is no g-translation of ν.
(b) Recall the base schema, view f, and complementary views g1 and g2 of Example 22.3.8(c). Suppose that ⟨Joe, 200, 50, 250⟩ is in Employee. Consider the update ν that replaces ⟨Joe, 200⟩ by ⟨Joe, 210⟩ in the view. Consider the updates

μ1 = replace ⟨Joe, 200, 50, 250⟩ by ⟨Joe, 210, 50, 260⟩
μ2 = replace ⟨Joe, 200, 50, 250⟩ by ⟨Joe, 210, 40, 250⟩.

Then μ1 is the g1-translation of ν, and μ2 is the g2-translation of ν.
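Because s + b = t holds in every valid instance, fixing either complement forces the base update; a minimal Python sketch of the two translations on a single tuple (function names ours):

```python
# g1- and g2-translations of nu = "replace (Joe, 200) by (Joe, 210)" in the view.

def g1_translation(n, s, b, t, new_s):
    # hold g1 = (Name, Bonus) fixed; Total_pay follows from s + b = t
    return (n, new_s, b, new_s + b)

def g2_translation(n, s, b, t, new_s):
    # hold g2 = (Name, Total_pay) fixed; Bonus absorbs the change
    return (n, new_s, t - new_s, t)

print(g1_translation("Joe", 200, 50, 250, 210))   # ('Joe', 210, 50, 260)
print(g2_translation("Joe", 200, 50, 250, 210))   # ('Joe', 210, 40, 250)
```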
Finally, we state a result showing that a restricted class of view updates can be translated into base updates using complementary views. To this end, we focus on updates of a schema R that are total functions from Inst(R) to Inst(R). A family U of updates on R is said to be complete if

(a) it is closed under composition (i.e., if μ and μ′ are in U, then so is μ ∘ μ′);
(b) it is closed under inverse in the following sense: ∀I ∈ Inst(R), ∀μ ∈ U, there is μ′ ∈ U such that μ′(μ(I)) = I.

Intuitively, condition (b) says that a user can always undo an update just made. It is certainly natural to focus on complete sets of updates.
Let base schema B and view f be given, and let U_f be a family of updates on the view. Let U_B denote the family of all updates on the base schema. A translator for U_f is a mapping t : U_f → U_B such that for each base instance I_B and update ν ∈ U_f, f(t(ν)(I_B)) = ν(f(I_B)). Clearly, solving the view update problem consists of coming up with a translator.
If g is a complement for f, then a translator t is a g-translator if t(ν) is a g-translation of ν for each ν ∈ U_f.
We can now state the following (see Exercise 22.26):

Theorem 22.3.10 Let base schema B and view f be given, and let U_f be a complete set of updates on the view. Suppose that t is a translator for U_f. Then there is a complement g of f such that t is a g-translator for U_f.

Thus to find a translator for a complete set of view updates, it is sufficient to specify an appropriate complementary view g and take the corresponding g-translator. The theorem says that one can find such g if a translator exists at all.
The preceding framework provides an abstract, elegant perspective on the view update problem. Forming bridges to the more concrete frameworks in which views are defined by specific languages (e.g., relational algebra) remains largely unexplored.
22.4 Updating Incomplete Information
In a sense, an update to a view is an incompletely specified update whose completion must
be determined or selected. In this section, we consider more general settings for studying
updates and incomplete information.
First we return to the conditional tables of Chapter 19 and show a system for updating
such databases. We then introduce formulations of incomplete information that use theories
(i.e., sets of propositional or first-order sentences) to represent the (partial) knowledge
about the world. Among other benefits, this approach offers an interesting alternative for
resolving the view update problem. This section concludes by comparing these approaches
to belief revision.
Updating Conditional Tables
The problems posed by updating a c-table are similar to those raised by queries. A representation T specifies a set of possible worlds rep(T). Given an update u, the possible outcomes of the update are

u(rep(T)) = {u(I) | I ∈ rep(T)}.

As for queries, it is desirable to represent the result in the same representation system. If the representation system is always capable of representing the answer to any update in a language L, it is a strong representation system with respect to L.
Let us consider c-tables and simple insertions, deletions, and modifications, as in the language of IDM transactions. We know from Chapter 19 that c-tables form a strong representation system for relational algebra; and it is easily seen that IDM transactions can be expressed in the algebra (see Exercise 22.3). It follows that c-tables are a strong representation system for IDM transactions. In other words, for each c-table T and IDM transaction t, there exists a c-table t(T) such that rep(t(T)) = t(rep(T)).
Example 22.4.1 Consider the c-table in Example 19.3.1. Insertions ins(t) are straightforward: t is simply inserted in the table. Consider the deletion d = del({Student = Sally, Course = Physics}). The c-table d(T) representing the result of the deletion is

Student   Course     condition              [global condition: (x ≠ Math) ∧ (x ≠ CS)]
Sally     Math       (z = 0)
Sally     CS         (z ≠ 0)
Sally     x          (x ≠ Physics)
Alice     Biology    (z = 0)
Alice     Math       (x = Physics) ∧ (t = 0)
Alice     Physics    (x = Physics) ∧ (t ≠ 0)

Consider again the original c-table T in Example 19.3.1 and the modification

m = mod({Student = Sally, Course = Music} → {Course = Physics}).

The c-table m(T) representing the result of the modification is

Student   Course     condition              [global condition: (x ≠ Math) ∧ (x ≠ CS)]
Sally     Math       (z = 0)
Sally     CS         (z ≠ 0)
Sally     Physics    (x = Music)
Sally     x          (x ≠ Music)
Alice     Biology    (z = 0)
Alice     Math       (x = Physics) ∧ (t = 0)
Alice     Physics    (x = Physics) ∧ (t ≠ 0)
In the context of incomplete information, it is natural to consider updates that themselves
have partial information. For c-tables, it seems appropriate to define updates with
the same kind of incomplete information, using tuples with variables subject to conditions.
We can define extensions of insertions, deletions, and modifications in this manner. It can
be shown that c-tables remain a strong representation system for such updates.
Representing Databases Using Logical Theories
Conditional tables provide a stylized, restricted framework for representing incomplete information and are closed under a certain class of updates. We now turn to more general frameworks for representing and updating incomplete information. These are based on representing databases as logical theories.
Given a logical theory T (i.e., a set of sentences), the set of models of T is denoted by Mod(T). In our context, each model corresponds to a different possible instance. If |Mod(T)| > 1, then T can be viewed as representing incomplete information.
In general, these approaches use the open world assumption (OWA). Recall from
Chapter 2 that under the closed world assumption (CWA), a fact is viewed as false unless it
can be proved from explicitly stated facts or sentences. In contrast, under the OWA if a fact
is not implied or contradicted by the underlying theory, then the fact may be true or false.
As a simple example, consider the theory T = {p} over a language with two propositional
constants p and q. Under the CWA, there is only one model of T (namely, {p}), but under
the OWA, there are two models (namely, {p} and {p, q}).
Model-Based Approaches to Updating Theories
One natural approach to updating a logical theory T is model based; it focuses on how
proposed updates affect the elements of Mod(T). Given an update u and instance I, let
u(I) denote the set of possible instances that could result from applying u to I. We use a set
for the result to accommodate the case in which u itself involves incomplete information.
Now let T be a theory and u an update. Under the model-based approach, the result u(T) of applying u to T should be a theory T′ such that

Mod(T′) = ∪{u(I) | I ∈ Mod(T)}.
Example 22.4.2
(a) Consider the theory T = {p ∧ q}, where p and q are propositional constants, and the update [insert ¬p]. There is only one model of T (namely, {p, q}). If we take the meaning of insert ¬p to be make p false and leave other things unchanged, then updating this model yields the single model {q}. Thus the result of applying [insert ¬p] to T yields the theory {¬p, q}.
(b) Consider T′ = {p ∨ q} and the update [insert ¬p]. The models of T′ and the impact of the update are given by

{p} ↦ ∅
{q} ↦ {q}
{p, q} ↦ {q}.

Thus the result of applying the update to T′ is {¬p}.
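Both cases can be replayed mechanically with a small Python sketch (helper names ours; a theory is given as a satisfaction predicate, and a model is the set of propositional constants it makes true):

```python
# Model-based update [insert not-p]: make p false in every model of the theory.
from itertools import combinations

def models(theory_sat, atoms=("p", "q")):
    subsets = [set(c) for r in range(len(atoms) + 1)
               for c in combinations(atoms, r)]
    return [m for m in subsets if theory_sat(m)]

def insert_not_p(ms):
    return [m - {"p"} for m in ms]

# (a) T = {p and q}: the single model {p, q} becomes {q}
print(insert_not_p(models(lambda m: "p" in m and "q" in m)))
# (b) T' = {p or q}: the models {p}, {q}, {p, q} become {}, {q}, {q}
print(insert_not_p(models(lambda m: "p" in m or "q" in m)))
```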
The approach to updating c-tables presented earlier falls within the model-based paradigm (see Exercise 22.14). A family of richer model-based frameworks that supports null values and disjunctive updates has also been developed. An interesting dimension of variation in this approach concerns how permissive or restrictive a given update semantics is. This essentially amounts to considering how many models are associated with u(I) for given update u and instance I. As a simple example, consider starting with an empty database I_∅ and the update [insert (p ∨ q)]. Under a restrictive semantics, only {p} and {q} are in u(I_∅), but under a permissive semantics, {p, q} might also be included. The update semantics for c-tables given earlier is very permissive: All possible models corresponding to an update are included in the result.
Formula-Based Approaches to Updating Theories
Another approach to updating theories is to apply updates directly to the theories themselves. As we shall see, a disadvantage of this approach is that the same update may have a different effect on equivalent but distinct theories. On the other hand, this approach does allow us to assign priorities to different sentences (e.g., so that constraints are given higher priority than atomic facts).
We consider two forms of update: [insert φ] and [delete φ], where φ is a sentence (i.e., no free variables). Given theory T, a theory T′ accomplishes the update [insert φ] for T if φ ∈ (T′)*, and it accomplishes [delete φ] for T if φ ∉ (T′)*.¹ Observe that there is a difference between [insert ¬φ] and [delete φ]: In the former case ¬φ is true for all models of T′, whereas in the latter case ¬φ may hold in some model of T′.
In general, we are interested in accomplishing an update for T with minimal impact on T. Given theory T, we define a partial order ≤_T on theories with respect to the degree of change from T. In particular, we define T′ ≤_T T″ if T − T′ ⊆ T − T″, or if T − T′ = T − T″ and T′ − T ⊆ T″ − T. Intuitively, T′ ≤_T T″ if T′ has fewer deletions (from T) than T″, or both T′ and T″ have the same deletions but T′ has no more insertions than T″. (Exercise 22.16 considers the opposite ordering, where insertions are given priority over deletions.)
Intuitively, we are interested in theories T′ that accomplish a given update u for T and are minimal under ≤_T. We say that such theories T′ accomplish u for T minimally. The following characterizes such theories (see Exercise 22.15):
Proposition 22.4.3 Let T, T′ be theories and φ a sentence. Then

(a) T′ accomplishes [delete φ] for T minimally iff T′ is a maximal subset of T that is consistent with ¬φ.
(b) T′ accomplishes [insert φ] for T minimally iff T′ = T″ ∪ {φ}, where T″ is a maximal subset of T that is consistent with φ.

Thus T′ accomplishes [delete ¬φ] for T minimally iff T′ ∪ {φ} accomplishes [insert φ] for T minimally.
The following example shows that equivalent but distinct theories can be affected differently by updates.

Example 22.4.4 (a) Consider the theory T_0 = {p, q} and the update [insert ¬p]. Then {¬p, q} is the unique minimal theory that accomplishes this update.
(b) Let T_1 = {p ∧ q} and consider [insert ¬p]. The unique minimal theory that accomplishes this update for T_1 is {¬p} [i.e., ∅ ∪ {¬p}]. Note how this differs from the model-based update in Example 22.4.2(a).

¹ For a theory S, the (logical) closure of S, denoted (S)*, is the set of all sentences implied by S.
A problem at this point is that, in general, there are several theories that minimally
accomplish a given update. Thus an update to a theory may yield a set of theories, and so
the framework is not closed under updates. Given a set T_1, T_2, . . . , we would like to find a
theory T whose models are exactly the union of all models of the set of theories. In general,
it is not clear that there is a theory that has this property. However, if there is only a finite
number of theories that are possible answers, then we can use the disjunction operator ⊕
defined by

⊕{T_i | i ∈ [1, n]} = {σ_1 ∨ . . . ∨ σ_n | σ_i ∈ T_i for i ∈ [1, n]}.

It is easily verified that Mod(⊕{T_i | i ∈ [1, n]}) = ∪{Mod(T_i) | i ∈ [1, n]}. Of course,
there is a great likelihood of a combinatorial explosion if the disjunction operator is applied
repeatedly.
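The disjunction operator and its model-theoretic property can be checked directly on small propositional theories. This is a sketch under our own encoding (truth-function sentences over an assumed vocabulary {p, q}), not an implementation from the text:

```python
from itertools import product

def disjoin(theories):
    """The disjunction operator: each way of picking one sentence from every
    theory contributes the disjunction of the picks to the result."""
    return [lambda w, pick=pick: any(s(w) for s in pick)
            for pick in product(*theories)]

def models(theory, worlds):
    return [w for w in worlds if all(s(w) for s in theory)]

worlds = [{"p": a, "q": b} for a in (False, True) for b in (False, True)]
p = lambda w: w["p"]
q = lambda w: w["q"]
T1, T2 = [p], [q]

# Mod(disjoin({T1, T2})) equals Mod(T1) union Mod(T2).
lhs = models(disjoin([T1, T2]), worlds)
rhs = [w for w in worlds if w in models(T1, worlds) or w in models(T2, worlds)]
assert lhs == rhs

# The combinatorial explosion: the result has |T1| * |T2| * ... sentences.
assert len(disjoin([[p, q], [p, q]])) == 4
```

The last line makes the blowup remark concrete: repeated application multiplies theory sizes.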
Assigning Priorities to Sentences
We now explore a mechanism for giving priority to some sentences in a theory over other
sentences. Let n ≥ 0 be fixed. A tagged sentence is a pair (i, σ), where i ∈ [0, n] and σ
is a sentence. A tagged theory is a set of tagged sentences. Given tagged theory T and
i ∈ [0, n], T_i denotes {σ | (i, σ) ∈ T}.
The partial order for comparing theories is extended in the following natural fashion.
Given tagged theories T, T′ and T″, define T′ ≤_T T″ if for some i ∈ [1, n] we have

T_j ∖ T′_j = T_j ∖ T″_j, for each j ∈ [1, i − 1]

and

T_i ∖ T′_i ⊂ T_i ∖ T″_i

or we have

T_j ∖ T′_j = T_j ∖ T″_j, for each j ∈ [1, n]

and

T′ ∖ T ⊆ T″ ∖ T.

Intuitively, T′ ≤_T T″ if the deletions of T′ and T″ agree up to some level i and then T′
has fewer deletions at level i; or if the deletions match and T′ has no more insertions than
T″. In this manner, higher priority is given to the sentences having lower numbers.
598 Dynamic Aspects
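The level-by-level comparison just defined can be rendered as a small comparator. The encoding below is our own (tagged theories as dicts from tag to set of sentences, sentences as strings); it is a sketch of the ordering, not code from the text:

```python
def leq(T, T1, T2, n):
    """T1 <=_T T2: T1's deletions from T are strictly smaller at the first
    level where the deletion sets differ (lower tags compared first); if all
    deletion sets agree, compare the total sets of insertions."""
    def dels(Ti, j):
        return T.get(j, set()) - Ti.get(j, set())
    for i in range(1, n + 1):
        if (all(dels(T1, j) == dels(T2, j) for j in range(1, i))
                and dels(T1, i) < dels(T2, i)):
            return True
    if all(dels(T1, j) == dels(T2, j) for j in range(1, n + 1)):
        def ins(Ti):
            return set().union(*(Ti.get(j, set()) - T.get(j, set())
                                 for j in range(0, n + 1)))
        return ins(T1) <= ins(T2)
    return False

T = {1: {"a", "b"}, 2: {"c"}}
T1 = {1: {"a"}, 2: {"c"}}   # deletes only b at level 1
T2 = {1: set(), 2: {"c"}}   # deletes both a and b at level 1
assert leq(T, T1, T2, 2) and not leq(T, T2, T1, 2)
```

Because level 1 is compared before level 2, a theory that preserves more low-tagged sentences is always preferred, matching the intuition that lower-numbered tags carry higher priority.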
Example 22.4.5 Consider a relation R[ABC] that satisfies the functional dependency
A → B, and consider the instance

R   A    B    C
    a    b    c
    a    b    c′
    a′   b′   c″
    a″   b″   c‴

We now construct a tagged theory T to represent this situation and show how changing a
B value of a tuple is accomplished.

We assume three tag values and describe the contents of T_0, T_1, and T_2 in turn. T_0
holds the functional dependency and the unique name axiom (see Chapter 2). That is,

(0, ∀x, y, y′, z, z′(R(x, y, z) ∧ R(x, y′, z′) → y = y′)),
(0, a ≠ a′), (0, a ≠ a″), . . . , (0, a ≠ b), . . . , (0, c″ ≠ c‴).

T_1 holds the following existential sentences:

(1, ∃x(R(a, x, c))),
(1, ∃x(R(a, x, c′))),
(1, ∃x(R(a′, x, c″))),
(1, ∃x(R(a″, x, c‴)))

Finally, T_2 holds

(2, R(a, b, c)),
(2, R(a, b, c′)),
(2, R(a′, b′, c″)),
(2, R(a″, b″, c‴))

Consider now the update u = [insert σ], where σ = ∃y(R(a, b′, y)). Intuitively, this
insertion should replace all ⟨a, b⟩ pairs occurring in π_AB(R) by ⟨a, b′⟩. More formally, it is
easy to verify that the unique tagged theory (up to choice of i) that accomplishes u is (see
Exercise 22.17)

{(i, σ)} ∪ T_0 ∪ T_1 ∪ { (2, R(a, b′, c)),
                         (2, R(a, b′, c′)),
                         (2, R(a′, b′, c″)),
                         (2, R(a″, b″, c‴)) }

Thus the choice of sentences and tags included in the theory can influence the result
of an update.
The approach of tagged theories can also be used to develop a framework for accom-
plishing view updates. The underlying database and the view are represented using a tagged
theory, and highest priority is given to ensuring that the complement of the view remains
fixed. Exercise 22.18 explores a simple example of this approach.
In the approach described here, a set of theories is combined using the disjunction
operator. In this case, multiple deletions can lead to an exponential blowup in the size
of the underlying theory, and performing insertions is NP-hard (see Exercise 22.19). This
provided one motivation for developing a generalization of the approach, in which families
of theories, called flocks, are used to represent a database with incomplete information.
Update versus Revision
The idea of representing knowledge using theories is not unique to the field of databases.
The field of belief revision takes this approach and considers the issue of revising a knowl-
edge base. Here we briefly compare the approaches to updating database theories described
earlier with those found in belief revision.
A starting point for belief revision theory is the set of rationality postulates of Al-
chourrón, Gärdenfors, and Makinson, often referred to as the AGM postulates. These
present a general family of guidelines for when a theory accomplishes a revision, and they
include postulates such as
(R1) If T′ accomplishes [insert σ] for T, then T′ ⊨ σ.

(R2) If σ is consistent with T, then the result of [insert σ] on T should be equivalent to
T ∪ {σ}.

(R3) If T ≡ T′ and σ ≡ σ′, then the result of [insert σ] on T is equivalent to the result of
[insert σ′] on T′.
(This is a partial listing of the eight AGM postulates.) Other postulates focus on maintain-
ing satisfiability, relationships between the effects of different updates, and capturing some
aspects of minimal change.
It is clear from postulate (R3) that the formula-based approaches to updating database
theories do not qualify as belief revision systems. The relationship of the formula-based
approaches and belief revision is largely unexplored.
A key difference between belief revision and the model-based approach to updating
database theories stems from different perspectives on what a theory T is intended to
represent. In the former context, T is viewed as a set of beliefs about the state of the world.
If a new fact is to be inserted, this is a modification (and, it is hoped, improvement) of
our knowledge about the state of the world, but the world itself is considered to remain
unchanged. In contrast, in the model-based approaches, the theory T is used to identify a
set of worlds that are possible given the limited information currently available. If a fact
is inserted, this is understood to mean that the world itself has been modified. Thus T is
modified to identify a different set of possible worlds.
Example 22.4.6 Suppose that the world of interest is a room with a table in it. There is
an abacus and a (hand-held, electronic) calculator in the room. Let proposition a mean that
the abacus is on the table, and let proposition c mean that the calculator is on the table.
Finally, let T be (a ∧ ¬c) ∨ (¬a ∧ c).
From the perspective of belief revision, T indicates that according to our current
knowledge, either the abacus or the calculator is on the table, but not both. Suppose that
we are informed that the calculator is on the table (i.e., [insert c]). This is viewed as
additional knowledge about the unchanging world. Combining T with c, we obtain the new
theory T_1 = ((a ∧ ¬c) ∨ (¬a ∧ c)) ∧ c ≡ (¬a ∧ c). [Note that this outcome is required by
postulate (R2).]
From the model-based perspective, T indicates that either the world is {a} or it is {c}.
The request [insert c] is understood to mean that the world has been modified so that c has
become true. This can be envisioned in terms of having a robot enter the room and place the
calculator on the table (if it isn't already there) without reporting on the status of anything
except that the robot has been successful. As a result, the world {a} is replaced by {a, c},
and the world {c} is replaced by itself. The resulting theory is T_2 = c (which is interpreted
under the OWA).
A set of postulates for updates, analogous to the AGM postulates for revision, has been
developed. The postulate analogous to (R2) is

(U2) If T implies σ, then the result of [insert σ] on T should be equivalent to T.

This is strictly weaker than (R2). Other postulates enforce the intuition that the effect of
an update on a possible model is independent of the other possible models of a theory,
maintaining satisfiability and relationships between the effects of different updates.
22.5 Active Databases
As we have seen, object orientation provides one paradigm for incorporating behavioral
information into a database schema. This has the effect of separating a portion of the be-
havioral information from the application software and providing a more structured repre-
sentation and organization for that portion. In this section, we briefly consider a second,
essentially orthogonal, paradigm for separating a portion of the behavioral information
from the application software. This emerging paradigm, called activeness, stems from a
synthesis of techniques from databases, on the one hand, and expert systems and artificial
intelligence, on the other.
Active databases generally support the automatic triggering of updates in response to
internal or external events (e.g., a clock tick, a user-requested update, or a change in a
sensor reading). In a manner reminiscent of expert systems, forward chaining of rules is
generally used to accomplish the response. However, there are several differences between
classical expert systems and active databases. At the conceptual and logical level, the
differences are centered around the expressive power of rule conditions and the semantics
of rule application. (Some active database systems, such as POSTGRES, also support a
form of backward chaining or query rewriting; this is not considered here.)
Active databases have been shown to be useful in a variety of areas, including constraint
maintenance, incremental update of materialized views, mapping view updates to the base
data, and supporting database interoperability.

Suppliers   Sname           Address
            The Depot       1210 Broadway
            Builders Mart   100 Main

Prices      Part    Sname           Price
            nail    The Depot       .02
            bolt    The Depot       .05
            bolt    Builders Mart   .04
            nut     Builders Mart   .03

Figure 22.3: Sample instance for active database examples
Rules and Rule Application
There are three distinguishing components in an active database: (1) a subsystem for
monitoring events, (2) a set of rules, often called a rule base, and (3) a semantics for rule
application, typically called an execution model.
Rules typically have the following so-called ECA form:
on event if condition then action.
Depending on the system and application, the event may range over external phenomena
and/or over internal events (such as a method call or inserting a tuple to a relation). Events
may be atomic or composite, where these are built up from atomic events using, say, regular
expressions or a process algebra. Events may be essentially Boolean or may return a tuple
of values that indicate what triggered the event.
Conditions typically involve parameters passed in by the events, and the contents of the
database. As will be described shortly, several systems permit conditions to look at more
than one version of the database state (e.g., corresponding to the state before the event and
the state after the event). In some systems, events are not explicitly specified; essentially
any change to the database makes the event true and leads to testing of all rule conditions.
In principle, the action may be a call to an arbitrary routine. In many cases in relational
systems, the action will involve a sequence of insertions, deletions, and modications; and
in object-oriented systems it will involve one or more method calls. Note that this may in
turn trigger other rules.
The remainder of this discussion focuses on the relational model. A short example is
given, followed by a brief discussion of execution models.
Example 22.5.1 Suppose that the Inventory database includes the following relations:
Suppliers[Sname, Address]
Prices[Part, Sname, Price]
Suppliers and the parts they supply are represented in Suppliers and Prices, respectively. It
is assumed that Sname is a key of Suppliers and that ⟨Part, Sname⟩ is a key of Prices. An example
instance is shown in Fig. 22.3.
We now list some example rules. These rules are written in a pidgin language that
uses tuple variables. The variable T ranges over sets of tuples and is used to pass them
from the condition to the action. As detailed shortly, both (r1) considered in isolation and
the set (r2.a), . . . , (r2.d) taken together can be used to enforce the inclusion dependency
Prices[Sname] ⊆ Suppliers[Sname].
(r1)   on true
       if Prices(p) and p.Sname ∉ π_Sname(Suppliers)
       then Prices := Prices − {p}

(r2.a) on delete Suppliers(s)
       if T := σ_Sname=s.Sname(Prices) is not empty
       then Prices := Prices − T

(r2.b) on modify Suppliers(s)
       if old(s).Sname ≠ new(s).Sname
          and T = σ_Sname=old(s).Sname(Prices)
       then set p.Sname = new(s).Sname
            for each p in Prices
            where p ∈ T

(r2.c) on insert Prices(p)
       if p.Sname ∉ π_Sname(Suppliers)
       then issue supplier_warning(p)

(r2.d) on modify Prices(p)
       if new(p).Sname ∉ π_Sname(Suppliers)
       then issue supplier_warning(new(p))
Consider rule (r1). If ever a state arises that violates the inclusion dependency, then the
rule deletes violating tuples from the Prices relation. The event of (r1) is always true; in
principle the database must check the condition whenever an update is made. It is easy to
see in this case that such checking need only be done if the relations Suppliers or Prices are
updated, and so the event "Suppliers or Prices is updated" could be incorporated into
(r1). Although this does not change the effect of the rule, it provides a hint to the system
about how to implement it efficiently.
Rules (r2.a) . . . (r2.d) form an alternative mechanism for enforcing the inclusion
dependency. In this case, the cause of the dependency violation determines the reaction
of the system. Here a deletion from (r2.a) or modification (r2.b) to Suppliers will result
in deletions from or modications to Prices. In (r2.b), variable s ranges over tuples that
have been modied, old(s) refers to the original value of the tuple, and new(s) refers to the
modied value. On the other hand, changes to Prices that cause a violation [rules (r2.c) and
(r2.d)] call a procedure supplier_warning; this might abort the transaction and warn the
user or dba of the constraint violation, or it might attempt to use heuristics to modify the
offending Sname value.
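The effect of rule (r1) on an instance can be sketched in a few lines. The tuple layouts and function name below are our assumptions, chosen to match Fig. 22.3; this is an illustration of the rule's semantics, not a system implementation:

```python
def apply_r1(suppliers, prices):
    """One firing of (r1): remove every Prices tuple whose Sname does not
    appear in pi_Sname(Suppliers).  Assumed layouts:
    Suppliers = {(sname, address)}, Prices = {(part, sname, price)}."""
    valid = {sname for (sname, _addr) in suppliers}
    return {t for t in prices if t[1] in valid}

# After deleting Builders Mart from Suppliers, (r1) restores the inclusion
# dependency by deleting the dangling Prices tuple.
suppliers = {("The Depot", "1210 Broadway")}
prices = {("nail", "The Depot", 0.02), ("bolt", "Builders Mart", 0.04)}
assert apply_r1(suppliers, prices) == {("nail", "The Depot", 0.02)}
```

Note that this realizes the repair-by-deletion policy of (r1); the (r2.x) rules instead react to the specific update that caused the violation.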
Execution Models
Until now, we have considered rules essentially in isolation from each other. A fundamental
issue concerns the choice of an execution model, which specifies how and when rules will
be applied. As will be seen, a wide variety of execution models are possible. The true
semantics of a rule base stems both from the rules themselves and from the execution model
for applying them.
We assume for this discussion that there is only one user of the system, or that a
concurrency control protocol is enforced that hides the effect of other users.
Suppose that a user transaction t = c_1; . . . ; c_n is issued, where each of the c_i's is an
atomic command. In the absence of active database rules, application of t will yield a
sequence

I_0, I_1, . . . , I_n

of database states, starting with the original state I_0 and where each state I_{i+1} is the result
of applying c_{i+1} to state I_i. If rules are present, then a different sequence of states might
arise.
One dimension of variation between execution models concerns when rules are fired.
Under immediate firing, a rule is essentially fired as soon as its event and condition become
true; under deferred firing, rule application is delayed until after the state I_n is reached; and
under concurrent firing, a separate process is spawned for the rule action and is executed
concurrently with other processes. In the most general execution models, each rule is
assigned its own coupling mode (i.e., immediate, deferred, or concurrent), which may be
further refined by associating a coupling mode between event and condition testing and
between condition testing and action execution.
We now examine the semantics of immediate and deferred firing in more detail. We
assume for this discussion that the event of each rule is simply true.
To illustrate immediate firing, suppose that a rule r with action d_1; . . . ; d_m is triggered
(i.e., its condition has become true) in state I_1 of the preceding sequence of states. Then
the sequence of database states might start with

I_0, I_1, I′_1, I′_2, . . . , I′_m, . . . ,

where I′_1 is the result of applying d_1 to I_1 and I′_{j+1} is the result of applying d_{j+1} to I′_j.
After I′_m, the command c_2 would be applied. The semantics of immediate rule firing
is in fact more complex, for two reasons. First, another rule might be triggered during
the execution of the action of the first triggered rule. In general, this calls for a recursive
style of rule application, where the command sequences of each triggered rule are placed
onto a stack. Second, several rules might be triggered at the same time. One approach in
this case is to assume that the rules are ordered and that rules triggered simultaneously
are considered in that order. Another approach is to fire simultaneously triggered rules
concurrently; essentially this has the effect of firing them in a nondeterministic order.
In the case of deferred firing, the full user transaction is completed before any rules are
fired, and each rule action is executed in its entirety before another rule action is initiated.
This gives rise to a sequence of states having the form

I_orig, I_user, I_2, I_3, . . . , I_curr,
where now I_orig is the original state, I_user is the result of applying the user-requested
transaction, and the states I_2, I_3, . . . , I_curr are the results of applying the actions of fired
rules. The sequence shown here might be extended if additional rules are to be fired.
Several intricacies arise. As before, the order of rule firing must be considered if
multiple rules are triggered at a given state. Recall the (r2) rules of Example 22.5.1, whose
events were based on transitions between some former state and some latter state. What
states should be used? It is natural to use I_curr as the latter state. With regard to the former
state, some systems advocate using I_orig, whereas other systems support the use of one of
the intermediate states (where the choice may depend on a complex condition).
Suppose that two rules r and r′ are triggered at some state I_curr = I_i and that r is fired
first to reach state I_{i+1}. The event and/or condition of r′ may no longer be true. This raises
the question, Should r′ be fired? A consensus has not emerged in the literature.


As should be clear from the preceding discussion, there is a wide variety of choices
for execution models. A more subtle dimension of flexibility concerns the expressive power
of rule events and conditions: In addition to accessing the current state, should they be
able to access one or more previous ones? Several prototype active database systems have
been implemented; each uses a different execution model, and several permit access to
both current and previous states. It has been argued that different execution models may be
appropriate for different applications. This has given rise to systems that include a choice
of execution models and to languages that permit the specification of customized execution
models. An open problem at the time this book was written is to develop a natural syntax
that can be used to specify easily a broad range of execution models, including a substantial
subset of those described in the literature.
The while languages studied in Part E can serve as the kernel of an active database.
These languages do not use events; restrict rule actions to insertions, deletions, and value
creation; and examine only the current state in a rule firing sequence. If value creation is
supported, then these languages are complete for database mappings and so in some sense
can simulate all active databases. However, richer rules and execution models permit the
possibility of developing rule bases that enforce a desired set of policies in a more intuitive
fashion than a while program.
An Execution Model That Reaches a Unique Fixpoint
It should be clear that whatever execution model and form for rules is selected, most
questions about the behavior of an active database are undecidable. It is thus interesting
to consider more restricted execution models that behave in predictable ways. We now
present one such execution model, called the accumulating model; this forms a portion of
the execution model of AP5, a main-memory active database system that has been used in
research for over a decade.
To describe the accumulating execution model, we first introduce the notion of a delta.
Let R = {R_1, . . . , R_n} be a database schema. An atomic update over R is an expression of
the form +R_i(t) or −R_i(t), where i ∈ [1, n] and t is a tuple having the arity of R_i. A delta
over R is a finite set of atomic updates over R that does not contain both +R(t) and −R(t)
for any R and t, or the special value fail. (Modifies could also be incorporated into deltas,
but we do not consider that here.) A delta not containing the value fail is consistent. For
delta Δ, we define

Δ⁺ = {R(t) | +R(t) ∈ Δ}
Δ⁻ = {R(t) | −R(t) ∈ Δ}.

Given instance I and consistent delta Δ over R, the result of applying Δ to I is

apply(I, Δ) = (I ∪ Δ⁺) ∖ Δ⁻ = (I ∖ Δ⁻) ∪ Δ⁺.
Finally, the merge of two consistent deltas Δ_1, Δ_2 is defined by

Δ_1 & Δ_2 = Δ_1 ∪ Δ_2, if this is consistent;
            fail, otherwise.
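The definitions of apply and merge translate almost directly into code. The encoding below (atomic updates as `('+'/'-', relation, tuple)` triples, instances as sets of facts, `FAIL` as a sentinel) is our own assumption for illustration:

```python
FAIL = "fail"   # the distinguished inconsistent delta

def apply_delta(instance, delta):
    """apply(I, Delta) = (I union Delta+) minus Delta-.
    Atomic updates are ('+', relation, tuple) or ('-', relation, tuple);
    an instance is a set of (relation, tuple) facts."""
    plus = {(r, t) for (op, r, t) in delta if op == "+"}
    minus = {(r, t) for (op, r, t) in delta if op == "-"}
    return (instance | plus) - minus

def merge(d1, d2):
    """Delta1 & Delta2: the union if no fact is both inserted and deleted,
    and fail otherwise."""
    d = d1 | d2
    plus = {(r, t) for (op, r, t) in d if op == "+"}
    minus = {(r, t) for (op, r, t) in d if op == "-"}
    return FAIL if plus & minus else d

I = {("R", (1,)), ("R", (2,))}
delta = {("+", "R", (3,)), ("-", "R", (1,))}
assert apply_delta(I, delta) == {("R", (2,)), ("R", (3,))}
assert merge({("+", "R", (1,))}, {("-", "R", (1,))}) == FAIL
assert merge({("+", "R", (1,))}, {("+", "R", (2,))}) == {("+", "R", (1,)), ("+", "R", (2,))}
```

Because deletions are applied after insertions in `apply_delta`, the two forms (I ∪ Δ⁺) ∖ Δ⁻ and (I ∖ Δ⁻) ∪ Δ⁺ coincide exactly when the delta is consistent.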
The accumulating execution model uses deferred rule firing. Each rule action is viewed
as producing a consistent delta. The user-requested transaction is also considered to be the
delta Δ_0. Thus a sequence of states

I_orig = I_0, I_user = I_1, I_2, I_3, . . . , I_curr

is produced, where I_user = apply(I_orig, Δ_0) and, more generally, I_{i+1} = apply(I_i, Δ_i) for
some Δ_i produced by a rule firing.
At this point the accumulating model is quite generic. We now restrict the model
and develop some interesting theoretical properties. First we assume that rules have only
conditions and actions (i.e., that the event part is always true). Second, as noted before, we
assume that the action of each rule can be viewed as a delta. Furthermore, we assume that
these deltas use only constants from I_orig (i.e., there is no invention of constants). Third,
we insist that for each i ≥ 0, Δ_0 & . . . & Δ_i is consistent. More precisely, we modify the
execution model so that if for some i we have Δ_0 & . . . & Δ_i = fail, then the execution is
aborted. For each i ≥ 0, let Δ̂_i = Δ_0 & . . . & Δ_i.
Suppose that we are now in state I_curr with delta Δ_curr. We assume that rule conditions
can access only I_orig and Δ_curr. (If the rule conditions have the power of, for example, the
relational calculus, this means they can in effect access I_curr.) Given rule r, state I, and
delta Δ, the effect of r on I and Δ, denoted effect(r, I, Δ), is the delta corresponding to the
firing of r on I and Δ if the condition of r is satisfied, and is ∅ otherwise.
Execution proceeds as follows. The sequence Δ̂_0, Δ̂_1, . . . is constructed sequentially.
At the i-th step, if there is no rule whose condition is satisfied by I_orig and Δ̂_i, then execution
terminates successfully. Otherwise a rule r with condition satisfied by I_orig and Δ̂_i is
selected nondeterministically. If Δ̂_i & effect(r, I_orig, Δ̂_i) is fail, then execution terminates
with an abort; otherwise set Δ̂_{i+1} = Δ̂_i & effect(r, I_orig, Δ̂_i) and continue.
A natural question at this point is, Will execution always terminate? It is easy to see
that it does, because constants are not invented and the sequence of deltas being constructed
is monotonically increasing under set containment.
It is also natural to ask, Does the order of rule firing affect the outcome? In general, the
answer is yes. We now develop a semantic condition on rules that ensures independence of
rule firing order. A rule r is monotonic if for each instance I and pair Δ_1 ⊆ Δ_2 of deltas,
effect(r, I, Δ_1) ⊆ effect(r, I, Δ_2). The following can now be shown (see Exercise 22.23):
Theorem 22.5.2 If each rule in a rule base is monotonic, then the outcome of the
accumulating execution model on this rule base is independent of rule firing order.
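The accumulating loop and the order-independence claim can be exercised on a toy rule base. This sketch uses our own delta encoding (triples and a `merge` helper, both assumptions, not from the text) and two monotonic rules; running the loop under both firing orders reaches the same fixpoint:

```python
FAIL = "fail"

def merge(d1, d2):
    """Delta union if no +R(t)/-R(t) clash, else fail (as in the text)."""
    d = d1 | d2
    plus = {(r, t) for (op, r, t) in d if op == "+"}
    minus = {(r, t) for (op, r, t) in d if op == "-"}
    return FAIL if plus & minus else d

def run(rules, I_orig, delta0):
    """Accumulating execution with the firing order fixed by the rule list:
    repeatedly merge effect(r, I_orig, acc) into the accumulated delta."""
    acc = delta0
    changed = True
    while changed:
        changed = False
        for r in rules:
            merged = merge(acc, r(I_orig, acc))
            if merged == FAIL:
                return FAIL
            if merged != acc:
                acc, changed = merged, True
    return acc

# Two monotonic rules: each fires exactly when a given fact is in the delta,
# so a larger delta can only enlarge the effect.
r1 = lambda I, d: {("+", "R", 2)} if ("+", "R", 1) in d else set()
r2 = lambda I, d: {("+", "R", 3)} if ("+", "R", 2) in d else set()
start = {("+", "R", 1)}
out = {("+", "R", 1), ("+", "R", 2), ("+", "R", 3)}
assert run([r1, r2], set(), start) == run([r2, r1], set(), start) == out
```

Termination here mirrors the argument in the text: the accumulated delta grows monotonically under set containment over a fixed set of constants, so the loop must reach a fixpoint.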
Monitoring Events and Conditions
In Example 22.5.1, the events that triggered rules were primitive, in the sense that each
one corresponded to an atomic occurrence of some phenomenon. There has been recent
interest in developing languages for specifying and recognizing composite events, which
might involve the occurrence of several primitive events. For example, composite event
specification is supported by the ODE system, a recently released prototype object-oriented
active database system. The ODE system supports a rich language for specifying composite
events, which has essentially the power of regular expressions (see also Section 22.6
for examples of composite events specified by regular expressions). An implementation
technique based on finite state automata has been developed for recognizing composite
events specified in this language.
Other formalisms can also be used for specifying composite events (e.g., using Petri
nets or temporal logics). There appears to be a trade-off between the expressiveness of
triggers in rules and conditions. For example, some Petri-net-based languages for composite
events can be simulated using additional relations and rules based on simple events. The
details of such trade-offs are largely unexplored.
22.6 Temporal Databases and Constraints
Classical databases model static aspects of data. Thus the information in the database
consists of data currently true in the world. However, in many applications, information
about the history of data is just as important as static information. When history is taken
into account, queries can ask about the evolution of data through time; and constraints may
restrict the way changes occur. We briefly discuss these two aspects.
Temporal Databases
Suppose we are interested in a database over some schema R. Thus we wish to model and
query information about the content of the database through time. Conceptually, we can
associate to each time t the state I_t of the database at time t. Thus the database appears as a
sequence of states (snapshots) indexed by some time domain. Two basic questions come
up immediately:
What is the meaning of I_t? Primarily two possible answers have been proposed. The
first is that I_t represents the data that was true in the world at time t; this view of time
is referred to as valid time. The second possibility is that time represents the moment
when the information was recorded in the database; this is called transaction time.
Clearly, using valid time requires including time as a first-class citizen in the
data model. In many applications transaction time might be hidden and dealt with
by the system; however, in time-critical applications, such as air-traffic control or
monitoring a power plant, transaction time may be important and made explicit. A
particular database may use valid time, transaction time, or both. In our discussion,
we will consider valid time only.
What is the time domain? This can be discrete (isomorphic to the integers), contin-
uous (isomorphic to the reals), or dense and countable (isomorphic to the rationals).
In databases, time is usually taken to be discrete, with some fixed granularity for
the time unit. However, several distinct time domains with different granularities
are often used (e.g., years, months, days, hours, etc.). The time domain is usually
equipped with a total order and sometimes with arithmetic operations. A temporal
variable now may be used to refer to the present time.
To query a temporal database, relational languages must be extended to take into
account the time coordinate. To say that a tuple u is in relation R at time t , we could simply
extend R with one temporal coordinate and write R(u, t ). Then we could use CALC or
ALG on the extended relations. This is illustrated next.
Example 22.6.1 Consider the CINEMA database, indexed by a time domain consisting
of dates of the form month/day/year. The query

What were the movies shown at La Pagode in May, 1968?

is expressed in CALC by

{m | ∃s, t (Pariscope(La Pagode, m, s, t) ∧ 5/1/68 ≤ t ≤ 5/31/68)}.

The query

Since when has La Pagode been showing the current movie?

is expressed by

{t | ∃m[∃s(Pariscope(La Pagode, m, s, now))
        ∧ since(t, m) ∧ ∀t′(since(t′, m) → t ≤ t′)]},

where
since(t, m) ≡ ∀t′[t ≤ t′ ≤ now → ∃s′(Pariscope(La Pagode, m, s′, t′))].
Classical logics augmented with a temporal coordinate have been studied extensively,
mostly geared toward specification and verification of concurrent programs. Such logics
are usually referred to as temporal logics. There is a wealth of mathematical machinery
developed around temporal logics; unfortunately, little of it seems to apply directly to
databases.
Although the view of a temporal database as a sequence of instances is conceptually
clean, it is extremely inefficient to represent a temporal database in this manner. In prac-
tice, this information is summarized in a single database in which data is timestamped to
indicate the time of validity. The timestamps can be placed at the tuple level or at the
attribute level. Typically, timestamps are unions of intervals of the temporal domain. Such
representations naturally lead to nested structures, as in the nested relation, semantic, and
object-oriented data models.
Example 22.6.2 Figure 22.4 is a representation of temporal information about Pariscope
using attribute timestamps with nested relations. It would also be natural to represent
this using a semantic or object-oriented model.
The same information can be represented by timestamping at the tuple level, as
follows:

Pariscope   Theater         Title     Schedule
            La Pagode       Sleeper   19:00   [5/1/68–5/31/68]
            La Pagode       Sleeper   19:00   [7/15/74–7/31/74]
            La Pagode       Sleeper   19:00   [12/1/93–now]
            La Pagode       Sleeper   22:00   [8/1/74–8/14/75]
            La Pagode       Sleeper   22:00   [10/1/93–11/30/93]
            La Pagode       Psycho    19:00   [8/1/93–11/30/93]
            La Pagode       Psycho    22:00   [2/15/78–10/14/78]
            La Pagode       Psycho    22:00   [12/1/93–now]
            Kinopanorama    Sleeper   19:30   [4/1/90–10/31/90]
            Kinopanorama    Sleeper   19:30   [2/1/92–8/31/92]
In this representation, the time intervals are more fragmented. This may have some draw-
backs. For example, retrieving the information about when Sleeper was playing at La
Pagode (using a selection and projection) yields time intervals that are more fragmented
than needed. To obtain a more concise representation of the answer, we must merge some
of these intervals.
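The interval-merging step mentioned above is a standard coalescing pass. A minimal sketch (our own helper, using stdlib dates; the function name and encoding are assumptions):

```python
from datetime import date, timedelta

def coalesce(intervals):
    """Merge overlapping or adjacent closed date intervals [start, end] into
    a minimal union, sorted by start date."""
    merged = []
    for start, end in sorted(intervals):
        if merged and start <= merged[-1][1] + timedelta(days=1):
            # Overlaps or abuts the previous interval: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# The 19:00 and 22:00 fragments of (La Pagode, Sleeper) in 1974-75 rejoin
# into the single validity interval [7/15/74, 8/14/75]:
frags = [(date(1974, 7, 15), date(1974, 7, 31)),
         (date(1974, 8, 1), date(1975, 8, 14))]
assert coalesce(frags) == [(date(1974, 7, 15), date(1975, 8, 14))]
```

The adjacency test uses the discrete granularity of the time domain (one day); on a dense domain, only genuine overlap could be merged.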
Note also the difference between the timestamps and the attribute Schedule, which
also conveys some temporal information. The value of Schedule is user defined, and the
database may not know that this is temporal information. Thus from the point of view of
Pariscope   Theater: La Pagode
              Title: Sleeper   [5/1/68–5/31/68], [7/15/74–8/14/75], [10/1/93–now]
                Schedule: 19:00   [5/1/68–5/31/68], [7/15/74–7/31/74], [12/1/93–now]
                Schedule: 22:00   [8/1/74–8/14/75], [10/1/93–11/30/93]
              Title: Psycho    [2/15/78–10/14/78], [8/1/93–now]
                Schedule: 19:00   [8/1/93–11/30/93]
                Schedule: 22:00   [2/15/78–10/14/78], [12/1/93–now]
            Theater: Kinopanorama
              Title: Sleeper   [4/1/90–10/31/90], [2/1/92–8/31/92]
                Schedule: 19:30   [4/1/90–10/31/90], [2/1/92–8/31/92]

Figure 22.4: A representation of temporal information using attribute timestamps with
nested relations
the temporal database, the value of Schedule is treated just like any other nontemporal value
in the database.
Much of the research in temporal databases has been devoted to finding extensions of
SQL and other relational languages suitable for temporal queries. Most proposals assume
some representation based on tuple timestamping by intervals and introduce intuitive lin-
guistic constructs to compare and manipulate these temporal intervals. Sometimes this is
done without explicit reference to time, in the spirit of modal operators in temporal logic.
One such operator is illustrated next.
Example 22.6.3 Several temporal extensions of SQL use a when clause to express a
temporal condition. For example, consider the query on the CINEMA database:
Find the pairs of theaters that have shown some movie at the same date and hour.
This can be expressed using the when clause as follows:
select t1.theater, t2.theater
from Pariscope t1, t2
where t1.title = t2.title and t1.schedule = t2.schedule
when t1.interval overlaps t2.interval
The when clause is true for tuples t
1
, t
2
iff the intervals indicating their validity have
nonempty intersection. Other Boolean tests on intervals include before, after, during,
follows, precedes, etc., with the obvious semantics. The expressive power of such constructs
is not always well elucidated in the literature, beyond the fact that they can clearly be
expressed in CALC. A review of the many constructs proposed in the literature on temporal
databases is beyond the scope of this book. For the time being, it appears that a single
well-accepted temporal language is far from emerging, although there are several major
prototypes.
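The semantics of the when clause can be mimicked directly over interval-timestamped tuples. The following Python sketch is illustrative only (the tuple layout and the sample data are invented, not taken from any particular proposal); it evaluates the query from Example 22.6.3, treating intervals as closed integer pairs:

```python
# Sketch of a tuple-timestamped relation and the "when ... overlaps" test.
# Each Pariscope tuple carries a validity interval (start, end), here as ints.

def overlaps(i1, i2):
    """Two closed intervals have nonempty intersection."""
    (s1, e1), (s2, e2) = i1, i2
    return s1 <= e2 and s2 <= e1

# Hypothetical tuples: (theater, title, schedule, interval)
pariscope = [
    ("La Pagode", "Sleeper", "19:00", (100, 200)),
    ("Kinopanorama", "Sleeper", "19:00", (150, 250)),
    ("Kinopanorama", "Psycho", "22:00", (300, 400)),
]

# select t1.theater, t2.theater from Pariscope t1, t2
# where t1.title = t2.title and t1.schedule = t2.schedule
# when t1.interval overlaps t2.interval
# (we also require distinct theaters, to get genuine pairs)
answer = {
    (t1[0], t2[0])
    for t1 in pariscope
    for t2 in pariscope
    if t1[0] != t2[0] and t1[1] == t2[1] and t1[2] == t2[2]
    and overlaps(t1[3], t2[3])
}
print(sorted(answer))
```

Here La Pagode and Kinopanorama both show Sleeper at 19:00 with overlapping validity intervals, so the pair is returned (in both orders, as in the unmodified SQL semantics).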
Temporal Deductive Databases
An interesting recent development involves the use of deductive databases in the temporal
framework, yielding temporal extensions of datalog. This can be used in two main ways.

As a specification mechanism: Datalog-like rules allow the specification of some
temporal databases in a concise fashion. In particular, this allows us to specify
infinite temporal databases, with both past and future information.

As a query mechanism: Rules can be used to express recursive temporal queries.
Example 22.6.4 We first illustrate the use of rules in the specification of an infinite temporal
database. The database holds information on a professor's schedule; more precisely,
the times she meets her two Ph.D. students. The facts

meets-first(Emma, 0), follows(Emma, John), follows(John, Emma)

say that the professor's first meeting is with Emma, and then John and Emma take turns.
Consider the rules

meets(x, t) ← meets-first(x, t)
meets(y, t + 1) ← meets(x, t), follows(x, y)

The rules define the following infinite sequence of facts providing the professor's schedule:

meets(Emma, 0)
meets(John, 1)
meets(Emma, 2)
meets(John, 3)
...
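The two rules can be read operationally: the first seeds the relation, and the second advances it one time step using follows. A minimal Python sketch (the function name and the finite horizon are ours; the defined relation itself is infinite, so only a prefix is materialized):

```python
# Unfolding the rules
#   meets(x, t)     <- meets-first(x, t)
#   meets(y, t + 1) <- meets(x, t), follows(x, y)
# over the facts of Example 22.6.4.

meets_first = [("Emma", 0)]
follows = {"Emma": "John", "John": "Emma"}

def meets(horizon):
    """Return all meets(x, t) facts with t < horizon."""
    facts = list(meets_first)            # first rule
    x, t = meets_first[0]
    while t + 1 < horizon:               # second rule, applied repeatedly
        x, t = follows[x], t + 1
        facts.append((x, t))
    return facts

print(meets(4))  # [('Emma', 0), ('John', 1), ('Emma', 2), ('John', 3)]
```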
Another way to use temporal rules is for querying. Consider the query

Find the times t such that La Pagode showed Sleeper on date t and continued
to show it at least until the Kinopanorama started showing it.

The answer (given in the unary relation until) is defined by the following stratified program:

date(x, y, t) ← Pariscope(x, y, s, t)
until(t) ← date(Kinopanorama, Sleeper, t + 1), ¬date(Kinopanorama, Sleeper, t),
           date(La Pagode, Sleeper, t)
until(t) ← date(La Pagode, Sleeper, t), until(t + 1)
The expressiveness of several datalog-like temporal languages and the complexity of
query evaluation using such languages are active areas of research.
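A bottom-up evaluation of such a program over a finite set of facts can be sketched as follows (dates encoded as integers; the sample data is invented, and we assume the "Kinopanorama does not yet show the movie at t" literal is negated, which is what makes the program stratified):

```python
# Stratified bottom-up evaluation of the "until" program over finite facts.
# pariscope: (theater, title, schedule, date); dates are ints for brevity.
pariscope = [
    ("La Pagode", "Sleeper", "19:00", 1),
    ("La Pagode", "Sleeper", "19:00", 2),
    ("La Pagode", "Sleeper", "19:00", 3),
    ("Kinopanorama", "Sleeper", "22:00", 4),
]

# Stratum 1: date(x, y, t) <- Pariscope(x, y, s, t)
date = {(x, y, t) for (x, y, s, t) in pariscope}

# Stratum 2: negation applies only to the (already computed) stratum 1.
until = set()
# Base rule: Kinopanorama starts Sleeper at t+1, La Pagode shows it at t.
for (_, _, t) in date:
    if (("Kinopanorama", "Sleeper", t + 1) in date
            and ("Kinopanorama", "Sleeper", t) not in date
            and ("La Pagode", "Sleeper", t) in date):
        until.add(t)
# Recursive rule, iterated to fixpoint:
changed = True
while changed:
    changed = False
    for (x, y, t) in date:
        if (x, y) == ("La Pagode", "Sleeper") and t + 1 in until and t not in until:
            until.add(t)
            changed = True
print(sorted(until))  # [1, 2, 3]
```

Kinopanorama starts Sleeper on day 4, so day 3 enters until by the base rule, and the recursive rule then walks backward through La Pagode's run.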
Temporal Constraints
Classical constraints in relational databases are static: They speak about properties of the
data seen at some moment in time. This does not allow modeling the behavior of data.
Temporal (or dynamic) constraints place restrictions on how the data changes in time. They
can arise in the context of classical databases as well as in temporal databases. In temporal
databases, we can specify restrictions on the sequence of time-indexed instances using
temporal logics (extensions of CALC, or modal logics). These are essentially Boolean
(yes/no) temporal queries. For example, we might require that La Pagode not be a first-run
theater (i.e., every movie shown there must have been shown in some other theater at
some earlier time). An important question is how to enforce such constraints efficiently. A
step in this direction is suggested by the following example.
Example 22.6.5 Suppose that Pariscope is extended with a time domain ranging over
days, as in Example 22.6.1. The constraint that La Pagode is not a first-run theater can
be expressed in CALC as

∀m, s, t (Pariscope(La Pagode, m, s, t) →
    ∃x, s′, t′ (Pariscope(x, m, s′, t′) ∧ x ≠ La Pagode ∧ t′ < t))
A naive way to enforce this constraint involves maintaining the full history of the
relation Pariscope; this would require unbounded storage. A more efficient way involves
storing only the current value of Pariscope and maintaining a unary relation Shown_
Before[Title], which holds all movie titles that have been shown in the past at a theater
other than La Pagode. Note that the size of Shown_Before is bounded by the number of
titles that have occurred through the history of the database but is independent of how long
the database has been in existence. (Of course, if a new title is introduced each day, then
Shown_Before will have size comparable to the full history.)
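The history-less enforcement just described can be sketched in a few lines of Python (the function name is ours, and for simplicity we ignore the strict t′ < t subtlety; updating Shown_Before only at day boundaries would handle it):

```python
# History-less enforcement of "La Pagode is not a first-run theater":
# instead of the full history of Pariscope, keep one auxiliary set.

shown_before = set()  # titles shown in the past at a theater != La Pagode

def insert_pariscope(theater, title):
    """Accept or reject an insertion into the current Pariscope."""
    if theater == "La Pagode" and title not in shown_before:
        raise ValueError(f"constraint violated: {title!r} not shown elsewhere yet")
    if theater != "La Pagode":
        shown_before.add(title)

insert_pariscope("Kinopanorama", "Sleeper")  # ok; records the title
insert_pariscope("La Pagode", "Sleeper")     # ok; Sleeper was shown elsewhere
```

The auxiliary set grows with the number of distinct titles, not with the length of the history, matching the space bound noted above.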
A systematic approach has been developed to maintain temporal constraints in this
fashion.
For classical databases, in which no history is kept, temporal constraints can only
involve transitions from the current instance to the next; this gives rise to a subset of
temporal constraints, called transition constraints.

For instance, a transition constraint can state that salaries do not decrease or that
the new salary of an employee is determined by the old salary and the seniority. Such
transition constraints are by far the most common kind of temporal constraint considered
for databases. We discuss some ways to specify transition constraints. Clearly, these can
be stated using a temporal version of CALC that can refer to the previous and next state. A
notion of identity similar to object identity is useful here; otherwise we may have difficulty
speaking about the old and new versions of some tuple or entity. Such identity may be
provided by a key, assuming that it does not change in time.
Besides CALC, transition constraints may be stated in various other ways, including
pre- and postconditions associated with transitions;
extensions of classical static constraints, such as dynamic fds;
computational constraints on sequences of consecutive versions of tuples.
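For instance, the salary example can be checked with access to just the previous and next state, using a key for tuple identity. A minimal sketch (relation layout and names are illustrative):

```python
# Checking a transition constraint between consecutive states: salaries
# do not decrease. Tuples are identified across states by a key (the name).

old_emp = {"Alice": 100, "Bob": 90}                # state at time t
new_emp = {"Alice": 110, "Bob": 90, "Carol": 80}   # state at time t + 1

def salaries_nondecreasing(old, new):
    """Every employee present in both states keeps at least the old salary."""
    return all(new[k] >= v for k, v in old.items() if k in new)

assert salaries_nondecreasing(old_emp, new_emp)
```

Only the two consecutive instances are consulted, which is exactly what distinguishes transition constraints from general temporal constraints.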
Restrictions on updates, say by transactional schemas, also induce temporal constraints.
For instance, consider again the transactional schema in Example 22.2.1. It can be
verified that all possible sequences of instances obtained by calls to the transactions of that
schema satisfy the temporal constraint:

Nobody can be a PhD student without having been a TA at some point.

The following less desirable temporal constraint is also satisfied:

Once a PhD student, always a PhD student.
Overall, the connection between canned updates and temporal constraints remains largely
unexplored.
A related means of specifying temporal constraints is to identify a set of update events
and impose restrictions on valid sequences of events. This can be done using regular
expressions. For example, suppose that the events concerning an employee are
hire, transfer, promote, raise, fire, retire

The valid sequences of events are all prefixes of sequences specified by the regular
expression

hire [(transfer) + (promote raise) + (raise)]* [(retire) + (fire)]
Thus an employee is first hired, receives some number of promotions and raises, may be
transferred, and finally either retires or is fired. Everybody who is promoted must also
receive a raise, but raises may be received even without promotion. Such constraints appear
to be particularly well suited to object-oriented databases, in which events can naturally be
associated with method invocations. Some active databases (Section 22.5) can also enforce
constraints on sequences of events.
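Because the constraint is a regular language, it can be enforced with an ordinary regular-expression engine. The sketch below encodes each event as a word and checks complete careers only (valid histories would be all prefixes); the regular expression is our reading of the one above, with + as alternation and "promote raise" as a unit:

```python
import re

# Complete careers per the regular expression
#   hire [(transfer) + (promote raise) + (raise)]* [(retire) + (fire)]
# encoded over comma-separated event words.
career = re.compile(r"hire(,(transfer|promote,raise|raise))*,(retire|fire)$")

def valid_career(events):
    """True iff the event sequence is a complete valid career."""
    return career.match(",".join(events)) is not None

assert valid_career(["hire", "raise", "promote", "raise", "retire"])
assert not valid_career(["hire", "promote", "retire"])  # promote without raise
```

In practice the automaton for such a language would be run incrementally, one state transition per event, rather than re-matching the whole history.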
Bibliographic Notes
The properties of IDM transactions were formally studied in [AV88b]. The sound and
complete axiomatization for IDM transactions is provided in [KV91]. The results on
simplification rules are also presented there. The language datalog¬¬ and other rule-based
and imperative update languages are studied in [AV88c]. Dynamic Logic Programming
is discussed in [MW88b]. In particular, Example 22.1.3 is from there. The language LDL,
including its update capabilities, is presented in [NT89].
IDM transactional schemas are investigated in [AV89]. Transactional schemas based
on more powerful languages are discussed in [AV87, AV88a]. Patterns of object migration
in object-oriented databases are studied in [Su92], using results on IDM transactional
schemas. A simple update language is shown there to express the family of migration
patterns characterized by regular languages; richer families of patterns are obtained by
permitting conditionals in this language.
One of the earliest works on the view maintenance problem is [BC79], which focuses
on determining whether an update is relevant or not. References [KP81, HK89] study the
maintenance of derived data in the context of semantic data models, and [SI84] studies
the maintenance of a universal relation formed from an acyclic database family. Additional
works that use the approach of incremental evaluation include [BLT86, GKM92, Pai84,
QW91]. Heuristics for maintaining the materialized output of a stratified datalog¬ program
are developed in [AP87b, Küc91]. A comprehensive approach, which handles views
defined using stratified datalog and aggregate operators, is developed in [GMS93].
Reference [Cha94] addresses the issue of incremental update to materialized views in the
presence of OIDs.
Testing for relevance of updates in connection with view maintenance is related to
the problem of incremental maintenance of integrity constraints. References [BBC80,
HMN84] develop general techniques for this problem, and approaches for deductive
databases include [BDM88, LST87, Nic82].
The issue of first-order incremental definability of datalog programs was first raised
in [DS92] and [DS93]. Additional research in this area includes [DT92, DST94]. A more
general perspective on these kinds of problems is presented in [PI94].
An informative survey of research on the view update problem is [FC85]. One practical
approach to the view update problem is to consider the underlying database and the view
to be abstract data types, with the updating operations predefined by the dba [SF78, RS79].
The other practical approach is to perform a careful analysis of the syntax and semantics
of a view definition to determine a unique or a small set of update translation(s) that satisfy
a family of natural properties. This approach is pioneered in [DB82] and further developed
in [Kel85, Kel86]. Example 22.3.6 is inspired by [Kel86]. Reference [Kel82] considers the
issue of unavoidable side-effects from view updates.
The discussion of view complements and Theorem 22.3.10 is from [BS81]. Reference
[CP84] studies complexity issues in this area; for example, in the context of projective
views over a single relation possibly having functional dependencies, finding a minimal
complement is np-complete. Reference [KU84] examines some of the practical
shortcomings of the approach based on complementary views.
The semantics of updates on incomplete databases is investigated in [AG85] and
[Gra91].
The idea of representing a database as a logical theory, as opposed to a set of atomic
facts, has roots in [Kow81, NG78, Rei84]. A survey of approaches to updating logical
theories, which articulates the distinction between model-based and formula-based
approaches, is [Win88]. Reference [Win86] develops a model-based approach for updating
theories that extends the framework of [Rei84]. Complexity and expressiveness issues
related to this approach are studied in [GMR92, Win86]. A model-based approach has
recently been applied in connection with supporting object migration in object-oriented
databases in [MMW94].
An early formula-based approach to updating is discussed in [Tod77]. This chapter's
discussion of the formula-based approach is inspired largely by [FUV83]. The notion
of using flocks (i.e., families of theories) to describe incomplete information databases
is developed in [FKUV86]. Reference [Var85] investigates the complexity of querying
databases that are logical theories and shows that even in restricted cases, the complexity
of, for example, the relational calculus goes from logspace to co-np-complete.
References on belief revision include [AGM85], where the AGM postulates are
developed, and [Gär88, Mak85]. The contrast between belief revision and knowledge update
was articulated informally in [KW85] and formally in [KM91a], where postulates for
updating theories under the model-based perspective were developed; see also [GMR92,
KM91b]. The discussion in this chapter is inspired by [KM91a].
Active databases generally support the automatic triggering of updates as a response
to user-requested or system-generated updates. Most active database systems (e.g.,
[CCCR+90, Coh86, MD89, Han89, SKdM92, SJGP90, WF90]) use a paradigm of rules
to specify the actions to be taken, in a manner reminiscent of expert systems.
Active databases and techniques have been shown to be useful for constraint
maintenance [Mor83, CW90, CTF88], incremental update of materialized views [CW91], and
database security [SJGP90]; and they hold the promise of providing a new family of
solutions to the view and derived data update problem [CHM94] and issues in database
interoperability [CW93, Cha94, Wie92]. Another functionality associated with some active
databases is query rewriting [SJGP90], whereby a query q might be transformed into a
related query q′ before being executed.
As discussed in Section 22.5 (see also [HJ91b, HW92, Sto92]), each of the active
database systems described in the literature uses a different approach for event specification
and a different execution model. The execution models of several active database systems
are specified using deltas, either implicitly or explicitly [Coh86, SKdM92, WF90]. The
Heraclitus language [HJ91a, JH91, GHJ+93] elevates deltas to be first-class citizens in a
database programming language based on C and the relational model, thereby enabling the
specification, and thus implementation, of a wide variety of execution models. Execution
models that support immediate, deferred, and concurrent firing include [BM91, HLM88,
MD89].
The accumulating execution model forms part of the semantics of the AP5 active
database model [Coh86, Coh89] (see also [HJ91a]). Theorem 22.5.2 is from [ZH90], which
goes on to present syntactic conditions on rules that ensure the Church-Rosser property for
rule bases that are not necessarily monotonic.
An early investigation of composite events in connection with active databases is
[DHL91]. Reference [GJS92c] describes the event specification language of the ODE
active database system [GJ91]. Reference [GJS92b] presents the equivalence of ODE's
composite event specification language and regular expressions, and [GJS92a] develops an
implementation technique based on finite state automata for recognizing composite events
in the case where parameters are omitted. Reference [GD94] uses an alternative formalism
for composite events based on Petri nets and can support parameters.
A crucial issue with regard to efficient implementation of active databases is
determining incrementally when a condition becomes true. Early work in this area is modeled after
the RETE algorithm from expert systems [For82]. Enhancements of this technique biased
toward active database applications include [WH92, Coh89]. Reference [CW90] describes
a mechanism for analyzing rule conditions to infer triggers for them.
There is a vast amount of literature on temporal databases. The volume [TCG+93]
provides a survey of current research in the area. In particular, several temporal extensions
of SQL can be found there. Bibliographies on temporal databases are provided in
[Sno90, Soo91]. A survey of temporal database research, emphasizing theoretical aspects,
is provided in [Cho94]. Deductive temporal databases are presented in [BCW93].
Example 22.6.4 is from [BCW93].
Specification of transition constraints by pre- and postconditions is studied in [CCF82,
CF84]. Transition constraints based on a dynamic version of functional dependencies are
investigated in [Via87], where the interaction between static and dynamic fds is discussed.
Constraints of a computational flavor on sequences of objects (object histories) are
considered in [Gin93]. Temporal constraints specified by regular languages of events (where
the events refer to object migration in object-oriented databases) are studied in [Su92].
References [Cho92a, LS87] develop the approach of history-less checking of temporal
constraints, as illustrated in Example 22.6.5. This technique is applied to testing real-time
temporal constraints in [Cho92b], providing one approach to monitoring complex events
in an active database system.
Temporal databases are intimately related to temporal logic. Informative overviews of
temporal logic can be found in [Eme91, Gal87].
A survey of dynamic aspects in databases is provided in [Abi88].
Exercises
Exercise 22.1 Show that there are updates expressible by IDM transactions that are not
expressible by ID transactions (i.e., transactions with just insertions and deletions).
Exercise 22.2 Prove the soundness of the equivalence axioms
mod(C → C′) del(C′) ≡ del(C) del(C′)

ins(t) mod(C → C′) ≡ mod(C → C′) ins(t′)

where t satisfies C and {t′} = mod(C → C′)({t}),

and

del(C3) mod(C1 → C3) mod(C2 → C1) mod(C3 → C2)
    ≡ del(C3) mod(C2 → C3) mod(C1 → C2) mod(C3 → C1),

where C1, C2, C3 are mutually exclusive sets of conditions.
Exercise 22.3 Show that, for each IDM transaction, there exists a CALC query defining
the same result but that the converse is false. Characterize the portion of CALC (or ALG)
expressible by IDM transactions.
Exercise 22.4 [AV88b] Show that for every IDM transaction there exists an equivalent IDM
transaction of the form t_d; t_m; t_i, where t_d is a sequence of deletions, t_m is a sequence of
modifications, and t_i is a sequence of insertions.
Exercise 22.5 [VV92] Let t1, . . . , tk be IDM transactions over the same relation R. A schedule
s for t1, . . . , tk is an interleaving of the updates in the ti's, such that the updates of each ti occur
in s in the same order as in ti. The schedule s is serializable if it is equivalent to tσ(1) . . . tσ(k)
for some permutation σ of {1, . . . , k}.

(a) Prove that checking whether a schedule s for a set of IDM transactions t1, . . . , tk is
serializable is np-complete with respect to the size of s.

(b) Show that checking the serializability of a schedule can be done in polynomial time
if the transactions contain no modifications.
Exercise 22.6 [KV90a] Suppose m boxes B1, . . . , Bm are given. Initially, each box Bi is either
empty or contains some balls. Balls can be moved among boxes by any sequence of moves,
m(Bj, Bk), each of which consists of putting the entire contents of box Bj into box Bk. Suppose
that the balls must be redistributed among boxes according to a given mapping f from boxes
to boxes [f(Bj) = Bk means that the contents of box Bj must wind up in box Bk after the
redistribution].

(a) Show that redistribution according to a given mapping f cannot always be
accomplished by a sequence of moves. If it can, the mapping f is called realizable.
Characterize realizable redistribution mappings.

(b) A parallel schedule of moves is a partially ordered set of moves (M, ≤) such that
incomparable moves commute. (Thus incomparable moves are independent and can be
executed in parallel.) A parallel schedule takes time t if the depth of the partial order
is t. Show that the problem of testing if a parallel schedule of moves accomplishes
the redistribution in minimal time (according to a realizable redistribution mapping)
is np-complete with respect to m.

(c) Show that testing if a parallel schedule accomplishes the redistribution in time within
one unit from the minimal time can be done in time polynomial in m.

(d) What is the connection between moving balls and IDM transactions?
Exercise 22.7 Recall the transaction schema T of Example 22.2.1 and the set of constraints
Σ in Example 22.2.2.
(a) Prove that T is sound and complete with respect to Σ.

(b) Exhibit instances I and J in Sat(Σ), where I cannot be transformed into J using T.

(c) Write a transactional schema T′ that is sound and complete for Σ, such that whenever
I, J are in Sat(Σ), there is a transformation from I to J using T′. (Do not use a T′
that completely empties the database to make a change involving only one student.)
Exercise 22.8 [AV89] Prove Theorem 22.2.3.
Exercise 22.9 Prove the statements in Example 22.2.4.
Exercise 22.10 [AV89]

(a) Prove that it is undecidable whether I ∈ Gen(T) for given IDM transactional schema
T and instance I over a database schema. Hint: Reduce the question of whether
w ∈ L(M) for a word w and Turing machine M to the preceding problem.

(b) Show that (a) becomes decidable if T is an ID transactional schema (no
modifications). Hint: For I ∈ Gen(T), find a bound on the number of calls to transactions in
T needed to reach I and on the number of constants used in these calls.

(c) Prove that it is undecidable whether Gen(T) = Gen(T′) for given IDM transactional
schemas T and T′.
Exercise 22.11 [AV89]

(a) Show that there is a relation schema R and a join dependency g over R such that
Sat({g}) ≠ Gen(T) for each IDM transactional schema T over R.

(b) Prove that there is a database schema R and a set Σ of inclusion dependencies over
R, such that Sat(Σ) ≠ Gen(T) for each IDM transactional schema T over R.
Exercise 22.12 [AV89] Prove that it is undecidable whether Gen(T) equals all instances over
R for given IDM transaction schema T over R. What does this say about the decidability of
soundness and completeness of IDM transaction schemas with respect to sets of constraints?
Exercise 22.13 [QW91] Develop expressions for incremental evaluation of the relational
algebra operators, analogous to the expression for join in Example 22.3.3. Consider both
insertions and deletions from the base relations.
Exercise 22.14 Recast c-tables in terms of first-order theories. Observe that the approach to
updating c-tables is model based. Given a theory T corresponding to a c-table and an update,
describe how to change T in accordance with the update. Hint: To represent c-tables using a
theory, you will need to use variations of the equality, extension, unique name, and closure
axioms mentioned at the end of Chapter 2.
Exercise 22.15 Prove Proposition 22.4.3.
Exercise 22.16 [FUV83] Given theory T, define T′ ≤′_T T″ if T′ − T ⊂ T″ − T, or if T′ − T =
T″ − T and T − T′ ⊆ T − T″. Thus ≤′_T is like ≤_T, except that insertions are given priority over
deletions.

Let T be a closed theory, σ a sentence not in T, and T′ a closed theory that accomplishes
[insert σ] for T. Show that (T ∪ {σ})∗ ≤′_T T′.
Exercise 22.17 [FUV83] Verify the claim of Example 22.4.5.
Exercise 22.18 [FUV83] Let R[ABC] be a relation schema with functional dependency A →
B, and let I be the instance of Example 22.4.5.
Consider the view f over S[AB] defined by π_AB(R). A complement of this view is
π_AC(R). The idea of keeping this complement unchanged while updating the view is captured
by the sentences

∃x(R(a, x, c)), ∃x(R(a, x, c′)),
∃x(R(a′, x, c)), ∃x(R(a′, x, c′))

Let T0 be that set of sentences. Let T1 include the functional dependency and the unique name
axioms. Finally, let T2 include the four atoms of I. Verify that there is a unique tagged theory
that accomplishes the view update [insert S(a, b′)] with minimal change.
Exercise 22.19 [FUV83] Show that under the formula-based approach to updating theories
presented in Section 22.4,
(a) A sequence of deletions can lead to an exponential blowup in the size of the theory.
(b) Determining the result of an insertion is np-hard.
Exercise 22.20 [DT92, DS93] Give a formal definition of FOID and of FOID with auxiliary
relations. Include the cases in which sets of insertions and/or deletions are permitted.
Exercise 22.21 [DT92]
(a) Verify the claim of Example 22.3.4, that the transitive closure query is FOID.
(b) Consider the datalog program

R(z) ← R(x), S(x, y, z)
R(z) ← R(y), S(x, y, z)
R(x) ← T(x)

An intuitive interpretation of this is that the variables range over nodes in a graph,
and the predicate S(a, b, c) indicates that nodes a and b are connected by an or-gate
to node c. The relation R contains all nodes that have value true, assuming that the
nodes in the input relation T are initially set to true.

Prove that there is a FOID with auxiliary relations for R. Hint: Define a new
derived relation Q that holds paths of nodes with value true.

(c) Prove that there is no FOID without auxiliary relations for R.

(d) A regular chain program consists of a finite set of chain rules of the form

R(x, z) ← R1(x, y1), R2(y1, y2), . . . , Rn(yn−1, z),

where the only idb predicate occurring in the body (if any) is Rn. Show that each
regular chain program is FOID with auxiliary relations. In particular, describe an
algorithm that produces, for each regular chain program defining a predicate R, a
first-order query with auxiliary relations that incrementally evaluates the program.
Exercise 22.22 Specify in detail an active database execution model based on immediate rule
firing.
Exercise 22.23 [ZH90] Recall the accumulating execution model for active databases.
(a) Exhibit a rule base for which the outcome of execution depends on the order of rule
firing.

(b) Prove Theorem 22.5.2.
Exercise 22.24 [HJ91a] Recall that in the accumulating semantics, rule conditions can access
I_orig and Δ_curr. Consider an alternative semantics that differs from the accumulating semantics
only in that the rule conditions can access only I_orig and I_curr. Suppose that rule conditions have
the expressive power of the relational calculus (and in the case of the accumulating semantics,
the ability to access the sets Δ+_R = {R(t) | +R(t) ∈ Δ_curr} and Δ−_R = {R(t) | −R(t) ∈ Δ_curr}). Show
that the accumulating semantics is more expressive than the alternative semantics. Hint: It is
possible that Δ_curr may have redundant elements, e.g., an update +R(t), where R(t) ∈ I_orig.
Such redundant elements are not accessible to the alternative semantics.
Exercise 22.25 Consider a base schema B = {R[AB]} and a view f = π_A(R), as in
Example 22.3.8(b).

(a) Describe a complement g of f that is not equivalent to the identity view.

(b) Show that each complement g of f expressible in the relational algebra is equivalent
to the identity view.
Exercise 22.26 [BS81] Prove Theorem 22.3.10. Hint: Consider the equivalence relation ≡ on
Inst(B) defined by I ≡ I′ iff there is an update u ∈ U_f such that I′ = t(u)(I). Now define the
mapping g : Inst(B) → Inst(B)/≡ so that g(I) is the equivalence class of I under ≡.
Bibliography
[57391] ISO/IEC JTC1/SC21 N 5739. Database language SQL, April 1991.
[69392] ISO/IEC JTC1/SC21 N 6931. Database language SQL (SQL3), June 1992.
[A+76] M. M. Astrahan et al. System R: a relational approach to data management. ACM Trans. on
Database Systems, 1(2):97–137, 1976.
[AA93] P. Atzeni and V. De Antonellis. Relational Database Theory. Benjamin/Cummings
Publishing Co., Menlo Park, CA, 1993.
[AABM82] P. Atzeni, G. Ausiello, C. Batini, and M. Moscarini. Inclusion and equivalence between
relational database schemata. Theoretical Computer Science, 19:267–285, 1982.
[AB86] S. Abiteboul and N. Bidoit. Non first normal form relations: An algebra allowing
restructuring. Journal of Computer and System Sciences, 33(3):361–390, 1986.
[AB87a] M. Atkinson and P. Buneman. Types and persistence in database programming languages.
ACM Computing Surveys, 19(2):105–190, June 1987.
[AB87b] P. Atzeni and M. C. De Bernardis. A new basis for the weak instance model. In Proc. ACM
Symp. on Principles of Database Systems, pages 79–86, 1987.
[AB88] S. Abiteboul and C. Beeri. On the manipulation of complex objects. Technical Report,
INRIA and Hebrew University, 1988. (To appear, VLDB Journal.)
[AB91] S. Abiteboul and A. Bonner. Objects and views. In Proc. ACM SIGMOD Symp. on the
Management of Data, 1991.
[ABD+89] M. Atkinson, F. Bancilhon, D. DeWitt, K. Dittrich, D. Maier, and S. Zdonik. The object-
oriented database system manifesto. In Proc. of Intl. Conf. on Deductive and Object-Oriented
Databases (DOOD), pages 40–57, 1989.
[ABGO93] A. Albano, R. Bergamini, G. Ghelli, and R. Orsini. An object data model with roles. In
Proc. of Intl. Conf. on Very Large Data Bases, pages 39–51, 1993.
[Abi83] S. Abiteboul. Algebraic analogues to fundamental notions of query and dependency theory.
Technical Report, INRIA, 1983.
[Abi88] S. Abiteboul. Updates, a new frontier. In Proc. of Intl. Conf. on Database Theory, 1988.
[Abi89] S. Abiteboul. Boundedness is undecidable for datalog programs with a single recursive
rule. Information Processing Letters, 32(6):281–289, 1989.
[Abr74] J. R. Abrial. Data semantics. In Data Base Management, pages 1–59. North Holland,
Amsterdam, 1974.
[ABU79] A. V. Aho, C. Beeri, and J. D. Ullman. The theory of joins in relational databases. ACM
Trans. on Database Systems, 4(3):297–314, 1979.
[ABW88] K. R. Apt, H. Blair, and A. Walker. Towards a theory of declarative knowledge. In
J. Minker, editor, Foundations of Deductive Databases and Logic Programming, pages 89–148.
Morgan Kaufmann, Inc., Los Altos, CA, 1988.
[AC78] A. K. Arora and C. R. Carlson. The information preserving properties of relational data
base transformations. In Proc. of Intl. Conf. on Very Large Data Bases, pages 352–359, 1978.
[AC89] F. Afrati and S. S. Cosmadakis. Expressiveness of restricted recursive queries. In Proc.
ACM SIGACT Symp. on the Theory of Computing, pages 113–126, 1989.
[ACO85] A. Albano, L. Cardelli, and R. Orsini. Galileo: A strongly-typed, interactive conceptual
language. ACM Trans. on Database Systems, 10:230–260, June 1985.
[ACY91] F. Afrati, S. Cosmadakis, and M. Yannakakis. On datalog vs. polynomial time. In Proc.
ACM Symp. on Principles of Database Systems, pages 13–25, 1991.
[ADM85] G. Ausiello, A. D'Atri, and M. Moscarini. Chordality properties on graphs and minimal
conceptual connections in semantic data models. In Proc. ACM Symp. on Principles of
Database Systems, pages 164–170, 1985.
[AF90] M. Ajtai and R. Fagin. Reachability is harder for directed than for undirected finite graphs.
Journal of Symbolic Logic, 55(1):113–150, 1990.
[AG85] S. Abiteboul and G. Grahne. Update semantics for incomplete databases. In Proc. of Intl.
Conf. on Very Large Data Bases, pages 1–12, 1985.
[AG87] M. Ajtai and Y. Gurevich. Monotone versus positive. J. ACM, 34(4):1004–1015, 1987.
[AG89] M. Ajtai and Y. Gurevich. Datalog versus first order. In IEEE Conf. on Foundations of
Computer Science, pages 142–148, 1989.
[AG91] S. Abiteboul and S. Grumbach. A rule-based language with functions and sets. ACM Trans.
on Database Systems, 16(1):1–30, 1991.
[AGM85] C. E. Alchourrón, P. Gärdenfors, and D. Makinson. On the logic of theory change: partial
meet contraction and revision functions. Journal of Symbolic Logic, 50:510–530, 1985.
[AGSS86] A. K. Aylamazan, M. M. Gigula, A. P. Stolboushkin, and G. F. Schwartz. Reduction
of the relation model with infinite domains to the finite domain case. In Proceedings of USSR
Academy of Science (Dokl. Akad. Nauk. SSSR), vol. 286(2), pages 308–311, 1986. (In Russian.)
[AH87] S. Abiteboul and R. Hull. IFO: A formal semantic database model. ACM Trans. on
Database Systems, 12(4):525–565, 1987.
[AH88] S. Abiteboul and R. Hull. Data functions, datalog and negation. In Proc. ACM SIGMOD
Symp. on the Management of Data, pages 143–153, 1988.
[AH91] A. Avron and Y. Hirshfeld. Safety in the presence of function and order symbols. In Proc.
IEEE Conf. on Logic in Computer Science, 1991.
[AK89] S. Abiteboul and P. C. Kanellakis. Object identity as a query language primitive. In Proc.
ACM SIGMOD Symp. on the Management of Data, pages 159–173, 1989. To appear in J. ACM.
[AKG91] S. Abiteboul, P. Kanellakis, and G. Grahne. On the representation and querying of sets of
possible worlds. Theoretical Computer Science, 78:159–187, 1991.
[AKRW92] S. Abiteboul, P. Kanellakis, S. Ramaswamy, and E. Waller. Method schemas. Technical
Report CS-92-33, Brown University, 1992. (An earlier version appeared in Proceedings 9th
ACM PODS, 1990.)
[ALUW93] S. Abiteboul, G. Lausen, H. Uphoff, and E. Waller. Methods and rules. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 32–41, 1993.
[AP82] P. Atzeni and D. S. Parker. Assumptions in relational database theory. In Proc. ACM Symp. on Principles of Database Systems, pages 1–9, 1982.
[AP87a] F. Afrati and C. H. Papadimitriou. The parallel complexity of simple chain queries. In Proc. ACM Symp. on Principles of Database Systems, pages 210–213, 1987.
[AP87b] K. R. Apt and J.-M. Pugin. Maintenance of stratified databases viewed as a belief revision system. In Proc. ACM Symp. on Principles of Database Systems, pages 136–145, 1987.
[AP92] M. Andries and J. Paredaens. A language for generic graph-transformations. In Proc. Intl. Workshop WG 91, pages 63–74. Springer-Verlag, Berlin, 1992.
[APP+86] F. Afrati, C. H. Papadimitriou, G. Papageorgiou, A. Roussou, Y. Sagiv, and J. D. Ullman. Convergence of sideways query evaluation. In Proc. ACM Symp. on Principles of Database Systems, pages 24–30, 1986.
[Apt91] K. R. Apt. Logic programming. In J. Van Leeuwen, editor, Handbook of Theoretical Computer Science, pages 493–574. Elsevier, Amsterdam, 1991.
[Arm74] W. W. Armstrong. Dependency structures of data base relationships. In Proc. IFIP Congress, pages 580–583. North Holland, Amsterdam, 1974.
[ASSU81] A. V. Aho, Y. Sagiv, T. G. Szymanski, and J. D. Ullman. Inferring a tree from the lowest common ancestors with an application to the optimization of relational expressions. SIAM J. on Computing, 10:405–421, 1981. Extended abstract appears in Proc. 16th Ann. Allerton Conf. on Communication, Control and Computing, Monticello, Ill., Oct. 1978, pp. 54–63.
[ASU79a] A. V. Aho, Y. Sagiv, and J. D. Ullman. Efficient optimization of a class of relational expressions. ACM Trans. on Database Systems, 4(4):435–454, 1979.
[ASU79b] A. V. Aho, Y. Sagiv, and J. D. Ullman. Equivalence of relational expressions. SIAM J. on Computing, 8(2):218–246, 1979.
[ASV90] S. Abiteboul, E. Simon, and V. Vianu. Non-deterministic languages to express deterministic transformations. In Proc. ACM Symp. on Principles of Database Systems, pages 218–229, 1990.
[AT93] P. Atzeni and R. Torlone. A metamodel approach for the management of multiple models and the translation of schemas. Information Systems, 18:349–362, 1993.
[AU79] A. V. Aho and J. D. Ullman. Universality of data retrieval languages. In Proc. ACM Symp. on Principles of Programming Languages, pages 110–117, 1979.
[AV87] S. Abiteboul and V. Vianu. A transaction language complete for database update and specification. In Proc. ACM Symp. on Principles of Database Systems, pages 260–268, 1987.
[AV88a] S. Abiteboul and V. Vianu. The connection of static constraints with boundedness and determinism of dynamic specifications. In 3rd Intl. Conf. on Data and Knowledge Bases, pages 324–334, Jerusalem, 1988.
[AV88b] S. Abiteboul and V. Vianu. Equivalence and optimization of relational transactions. J. ACM, 35(1):130–145, 1988.
[AV88c] S. Abiteboul and V. Vianu. Procedural and declarative database update languages. In Proc. ACM Symp. on Principles of Database Systems, pages 240–250, 1988.
[AV89] S. Abiteboul and V. Vianu. A transaction-based approach to relational database specification. J. ACM, 36(4):758–789, October 1989.
[AV90] S. Abiteboul and V. Vianu. Procedural languages for database queries and updates. Journal of Computer and System Sciences, 41:181–229, 1990.
[AV91a] S. Abiteboul and V. Vianu. Datalog extensions for database queries and updates. Journal of Computer and System Sciences, 43:62–124, 1991.
[AV91b] S. Abiteboul and V. Vianu. Generic computation and its complexity. In Proc. ACM SIGACT Symp. on the Theory of Computing, pages 209–219, 1991.
[AV91c] S. Abiteboul and V. Vianu. Non-determinism in logic-based languages. Annals of Math. and Artif. Int., 3:151–186, 1991.
[AV94] S. Abiteboul and V. Vianu. Computing with first-order logic. Journal of Computer and System Sciences, 1994. To appear.
[AvE82] K. Apt and M. van Emden. Contributions to the theory of logic programming. J. ACM, 29(3):841–862, 1982.
[AVV92] S. Abiteboul, M. Y. Vardi, and V. Vianu. Fixpoint logics, relational machines, and computational complexity. In Conf. on Structure in Complexity Theory, pages 156–168, 1992.
[AW88] K. R. Apt and H. A. Walker. Arithmetic classification of perfect models of stratified programs. Technical Report TR-88-09, University of Texas at Austin, 1988.
[B+86] D. G. Bobrow et al. CommonLoops: Merging lisp and object-oriented programming. In Proc. ACM Conf. on Object-Oriented Programming Systems, Languages, and Applications, pages 17–29, 1986.
[B+88] D. S. Batory et al. Genesis: An extensible database management system. IEEE Transactions on Software Engineering, SE-14(11):1711–1730, 1988.
[Ban78] F. Bancilhon. On the completeness of query languages for relational data bases. In 7th Symposium on the Mathematical Foundations of Computer Science, pages 112–123. Springer-Verlag, Berlin, LNCS 64, 1978.
[Ban85] F. Bancilhon. A note on the performance of rule based systems. Technical Report DB-022-85, MCC, 1985.
[Ban86] F. Bancilhon. Naive evaluation of recursively defined relations. In M. L. Brodie and J. L. Mylopoulos, editors, On Knowledge Base Management Systems: Integrating Database and AI Systems, pages 165–178. Springer-Verlag, Berlin, 1986.
[Bar63] H. Barendregt. Functional programming and lambda calculus. In J. Van Leeuwen, editor, Handbook of Theoretical Computer Science, vol. B, pages 321–363. Elsevier, Amsterdam, 1990.
[Bar84] H. Barendregt. The Lambda Calculus: Its Syntax and Semantics. North Holland, Amsterdam, 1984.
[BB79] C. Beeri and P. A. Bernstein. Computational problems related to the design of normal form relational schemas. ACM Trans. on Database Systems, 4(1):30–59, March 1979.
[BB91] J. Berstel and L. Boasson. Context-free languages. In J. Van Leeuwen, editor, Handbook of Theoretical Computer Science, pages 102–163. Elsevier, Amsterdam, 1991.
[BB92] D. Beneventano and S. Bergamaschi. Subsumption for complex object data models. In Proc. of Intl. Conf. on Database Theory, pages 357–375, 1992.
[BBC80] P. A. Bernstein, B. T. Blaustein, and E. M. Clarke. Fast maintenance of semantic integrity assertions using redundant aggregate data. In Proc. of Intl. Conf. on Very Large Data Bases, pages 126–136, 1980.
[BBG78] C. Beeri, P. A. Bernstein, and N. Goodman. A sophisticate's introduction to database normalization theory. In Proc. of Intl. Conf. on Very Large Data Bases, pages 113–124, 1978.
[BBMR89] A. Borgida, R. J. Brachman, D. L. McGuinness, and L. A. Resnick. CLASSIC: A structural data model for objects. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 58–67, 1989.
[BC79] O. P. Buneman and G. K. Clemons. Efficiently monitoring relational databases. ACM Trans. on Database Systems, 4(3):368–382, September 1979.
[BC81] P. A. Bernstein and D. W. Chiu. Using semi-joins to solve relational queries. J. ACM, 28(1):25–40, 1981.
[BCD89] F. Bancilhon, S. Cluet, and C. Delobel. Query languages for object-oriented database systems: the O2 proposal. In Proc. Second Intl. Workshop on Data Base Programming Languages, 1989.
[BCW93] M. Baudinet, J. Chomicki, and P. Wolper. Temporal deductive databases. In A. U. Tansel et al., editors, Temporal Databases: Theory, Design, and Implementation, pages 294–320. Benjamin/Cummings Publishing Co., Menlo Park, CA, 1993.
[BDB79] J. Biskup, U. Dayal, and P. A. Bernstein. Synthesizing independent database schemas. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 143–152, 1979.
[BDFS84] C. Beeri, M. Dowd, R. Fagin, and R. Statman. On the structure of Armstrong relations for functional dependencies. J. ACM, 31(1):30–46, 1984.
[BDK92] F. Bancilhon, C. Delobel, and P. Kanellakis, editors. Building an Object-Oriented Database System: The Story of O2. Morgan Kaufmann, Inc., Los Altos, CA, 1992.
[BDM88] F. Bry, H. Decker, and R. Manthey. A uniform approach to constraint satisfaction and constraint satisfiability in deductive databases. In Proc. of Intl. Conf. on Extending Data Base Technology, pages 488–505, 1988.
[BDW88] P. Buneman, S. Davidson, and A. Watters. A semantics for complex objects and approximate queries. In Proc. ACM Symp. on Principles of Database Systems, pages 302–314, 1988.
[BDW91] P. Buneman, S. Davidson, and A. Watters. A semantics for complex objects and approximate answers. Journal of Computer and System Sciences, 43:170–218, 1991.
[Bee80] C. Beeri. On the membership problem for functional and multivalued dependencies in relational databases. ACM Trans. on Database Systems, 5:241–259, 1980.
[Bee90] C. Beeri. A formal approach to object-oriented databases. Data and Knowledge Engineering, 5(4):353–382, 1990.
[Ber76a] C. Berge. Graphs and Hypergraphs. North Holland, Amsterdam, 1976.
[Ber76b] P. A. Bernstein. Synthesizing third normal form relations from functional dependencies. ACM Trans. on Database Systems, 1(4):277–298, 1976.
[BF87] N. Bidoit and C. Froidevaux. Minimalism subsumes default logic and circumscription. In Proc. IEEE Conf. on Logic in Computer Science, pages 89–97, 1987.
[BF88] N. Bidoit and C. Froidevaux. General logic databases and programs: Default logic semantics and stratification. Technical Report, LRI, Université de Paris-Sud, Orsay, 1988. To appear in J. Information and Computation.
[BFH77] C. Beeri, R. Fagin, and J. H. Howard. A complete axiomatization for functional and multivalued dependencies. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 47–61, 1977.
[BFM+81] C. Beeri, R. Fagin, D. Maier, A. O. Mendelzon, J. D. Ullman, and M. Yannakakis. Properties of acyclic database schemes. In Proc. ACM SIGACT Symp. on the Theory of Computing, pages 355–362, 1981.
[BFMY83] C. Beeri, R. Fagin, D. Maier, and M. Yannakakis. On the desirability of acyclic database schemes. J. ACM, 30(3):479–513, 1983.
[BFN82] P. Buneman, R. Frankel, and R. Nikhil. An implementation technique for database query languages. ACM Trans. on Database Systems, 7:164–186, 1982.
[BG81] P. A. Bernstein and N. Goodman. The power of natural semi-joins. SIAM J. on Computing, 10(4):751–771, 1981.
[BGK85] A. Blass, Y. Gurevich, and D. Kozen. A zero-one law for logic with a fixed point operator. Information and Control, 67:70–90, 1985.
[BGL85] R. J. Brachman, V. P. Gilbert, and H. J. Levesque. An essential hybrid reasoning system: Knowledge and symbol level accounts of KRYPTON. In Intl. Joint Conf. on Artificial Intelligence, pages 532–539, 1985.
[BGW+81] P. A. Bernstein, N. Goodman, E. Wong, et al. Query processing in a system for distributed databases (SDD-1). ACM Trans. on Database Systems, 6:602–625, 1981.
[BHG87] P. A. Bernstein, V. Hadzilacos, and N. Goodman. Concurrency Control and Recovery in Database Systems. Addison-Wesley, Reading, MA, 1987.
[Bid91a] N. Bidoit. Bases de Données Déductives (Présentation de Datalog). Armand Colin, Paris, 1991.
[Bid91b] N. Bidoit. Negation in rule-based database languages: A survey. Theoretical Computer Science, 78:3–83, 1991.
[Bis80] J. Biskup. Inferences of multivalued dependencies in fixed and undetermined universes. Theoretical Computer Science, 10:93–105, 1980.
[Bis81] J. Biskup. A formal approach to null values in database relations. In H. Gallaire, J. Minker, and J.-M. Nicolas, editors, Advances in Data Base Theory, vol. 1, pages 299–341. Plenum Press, New York, 1981.
[Bis83] J. Biskup. A foundation of Codd's relational maybe-operations. ACM Trans. on Database Systems, 8(4):608–636, 1983.
[BJO91] P. Buneman, A. Jung, and A. Ohori. Using powerdomains to generalize relational databases. Theoretical Computer Science, 91:23–55, 1991.
[BK86] C. Beeri and M. Kifer. An integrated approach to logical design of relational database schemes. ACM Trans. on Database Systems, 11:134–158, 1986.
[BKBR87] C. Beeri, P. C. Kanellakis, F. Bancilhon, and R. Ramakrishnan. Bounds on the propagation of selection into logic programs. In Proc. ACM Symp. on Principles of Database Systems, pages 214–226, 1987.
[BL90] N. Bidoit and P. Legay. Well! an evaluation procedure for all logic programs. In Proc. of Intl. Conf. on Database Theory, pages 335–348. Springer-Verlag, Berlin, LNCS 470, 1990.
[BLN86] C. Batini, M. Lenzerini, and S. B. Navathe. A comparative analysis of methodologies for database schema integration. ACM Computing Surveys, 18:323–364, 1986.
[BLT86] J. A. Blakeley, P.-A. Larson, and F. W. Tompa. Efficiently updating materialized views. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 61–71, 1986.
[BM91] C. Beeri and T. Milo. A model for active object oriented databases. In Proc. of Intl. Conf. on Very Large Data Bases, pages 337–349, 1991.
[BMG93] J. A. Blakeley, W. J. McKenna, and G. Graefe. Experiences building the open OODB query optimizer. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 287–296, 1993.
[BMSU81] C. Beeri, A. O. Mendelzon, Y. Sagiv, and J. D. Ullman. Equivalence of relational database schemes. SIAM J. on Computing, 10(2):352–370, 1981.
[BMSU86] F. Bancilhon, D. Maier, Y. Sagiv, and J. D. Ullman. Magic sets and other strange ways to implement logic programs. In Proc. ACM Symp. on Principles of Database Systems, pages 1–15, 1986.
[BNR+87] C. Beeri, S. Naqvi, R. Ramakrishnan, O. Shmueli, and S. Tsur. Sets and negation in a logic database language (LDL1). In Proc. ACM Symp. on Principles of Database Systems, pages 21–37, 1987.
[Bor85] A. Borgida. Features of languages for the development of information systems at the conceptual level. IEEE Software, 2:63–72, 1985.
[BP83] P. De Bra and J. Paredaens. Conditional dependencies for horizontal decompositions. In Proc. Intl. Conf. on Algorithms, Languages and Programming, pages 67–82. Springer-Verlag, Berlin, LNCS 154, 1983.
[BPR87] I. Balbin, B. S. Port, and K. Ramamohanarao. Magic set computation for stratified databases. Technical Report TR 87/3, Dept. of Computer Science, University of Melbourne, 1987.
[BR80] C. Beeri and J. Rissanen. Faithful representation of relational database schemes. Technical Report RJ2722, IBM Research Laboratory, San Jose, CA, 1980.
[BR87a] C. Beeri and R. Ramakrishnan. On the power of magic. In Proc. ACM Symp. on Principles of Database Systems, pages 269–283, 1987.
[BR87b] I. Balbin and K. Ramamohanarao. A generalization of the differential approach to recursive query evaluation. In Journal of Logic Programming, 4(3), 1987.
[BR88a] F. Bancilhon and R. Ramakrishnan. An amateur's introduction to recursive query processing strategies. In M. Stonebraker, editor, Readings in Database Systems, pages 507–555. Morgan Kaufmann, Inc., Los Altos, CA, 1988. An earlier version of this work appears in Proc. ACM SIGMOD Conf. on Management of Data, pp. 16–52, 1986.
[BR88b] F. Bancilhon and R. Ramakrishnan. Performance evaluation of data intensive logic programs. In J. Minker, editor, Foundations of Deductive Databases and Logic Programming, pages 439–517. Morgan Kaufmann, Inc., Los Altos, CA, 1988.
[BR91] C. Beeri and R. Ramakrishnan. On the power of magic. J. Logic Programming, 10(3&4):255–300, 1991.
[BRS82] F. Bancilhon, P. Richard, and M. Scholl. On line processing of compacted relations. In Proc. of Intl. Conf. on Very Large Data Bases, pages 263–269, 1982.
[BRSS92] C. Beeri, R. Ramakrishnan, D. Srivastava, and S. Sudarshan. The valid model semantics for logic programs. In Proc. ACM Symp. on Principles of Database Systems, pages 91–104, 1992.
[Bry89] F. Bry. Query evaluation in recursive databases: Bottom-up and top-down reconciled. In Proc. of Intl. Conf. on Deductive and Object-Oriented Databases (DOOD), pages 20–39, 1989.
[BS81] F. Bancilhon and N. Spyratos. Update semantics of relational views. ACM Trans. on Database Systems, 6(4):557–575, 1981.
[BS85] R. J. Brachman and J. G. Schmolze. An overview of the KL-ONE knowledge representation system. Cognitive Science, 9:171–216, 1985.
[BS93] S. Bergamaschi and C. Sartori. On taxonomic reasoning in conceptual design. ACM Trans. on Database Systems, 17:385–422, 1993.
[BST75] P. A. Bernstein, J. R. Swenson, and D. C. Tsichritzis. A unified approach to functional dependencies and relations. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 237–245, 1975.
[BTBN92] V. Breazu-Tannen, P. Buneman, and S. Naqvi. Structural recursion as a query language. In Proc. of Intl. Workshop on Database Programming Languages, pages 9–19. Morgan Kaufmann, Inc., Los Altos, CA, 1992.
[BTBW92] V. Breazu-Tannen, P. Buneman, and L. Wong. Naturally embedded query languages. In Proc. of Intl. Conf. on Database Theory, pages 140–154. Springer-Verlag, Berlin, LNCS, 1992.
[BV80a] C. Beeri and M. Y. Vardi. On the complexity of testing implications of data dependencies. Technical Report, Department of Computer Science, Hebrew University of Jerusalem, 1980.
[BV80b] C. Beeri and M. Y. Vardi. A proof procedure for data dependencies (preliminary report). Technical Report, Department of Computer Science, Hebrew University of Jerusalem, August 1980.
[BV81a] C. Beeri and M. Y. Vardi. The implication problem for data dependencies. In Proc. Intl. Conf. on Algorithms, Languages and Programming, pages 73–85, 1981. Springer-Verlag, Berlin, LNCS 115.
[BV81b] C. Beeri and M. Y. Vardi. On the properties of join dependencies. In H. Gallaire, J. Minker, and J.-M. Nicolas, editors, Advances in Data Base Theory, vol. 1, pages 25–72. Plenum Press, New York, 1981.
[BV84a] C. Beeri and M. Y. Vardi. Formal systems for tuple and equality generating dependencies. SIAM J. on Computing, 13(1):76–98, 1984.
[BV84b] C. Beeri and M. Y. Vardi. On acyclic database decompositions. Inf. and Control, 61(2):75–84, 1984.
[BV84c] C. Beeri and M. Y. Vardi. A proof procedure for data dependencies. J. ACM, 31(4):718–741, 1984.
[BV85] C. Beeri and M. Y. Vardi. Formal systems for join dependencies. Theoretical Computer Science, 38:99–116, 1985.
[C+76] D. D. Chamberlin et al. Sequel 2: a unified approach to data definition, manipulation and control. IBM J. Research and Development, 20(6):560–575, 1976.
[Cam92] M. Campbell. Microsoft Access Inside and Out. Osborne McGraw-Hill, New York,
1992.
[Car88] L. Cardelli. A semantics of multiple inheritance. Information and Computation, 76:138–164, 1988.
[Cat94] R. G. G. Cattell, editor. The Object Database Standard: ODMG-93. Morgan Kaufmann, Inc., Los Altos, CA, 1994.
[CCCR+90] F. Cacace, S. Ceri, S. Crespi-Reghizzi, L. Tanca, and R. Zicari. Integrating object-oriented data modeling with a rule-based programming paradigm. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 225–236, 1990.
[CCF82] I. M. V. Castillo, M. A. Casanova, and A. L. Furtado. A temporal framework for database specification. In Proc. of Intl. Conf. on Very Large Data Bases, pages 280–291, 1982.
[CF84] M. A. Casanova and A. L. Furtado. On the description of database transition constraints using temporal logic. In H. Gallaire, J. Minker, and J.-M. Nicolas, editors, Advances in Data Base Theory, vol. 2. Plenum Press, New York, 1984.
[CFI89] J. Cai, M. Fürer, and N. Immerman. An optimal lower bound on the number of variables for graph identification. In IEEE Conf. on Foundations of Computer Science, pages 612–617, 1989.
[CFP84] M. A. Casanova, R. Fagin, and C. H. Papadimitriou. Inclusion dependencies and their interaction with functional dependencies. Journal of Computer and System Sciences, 28(1):29–59, 1984.
[CGKV88] S. S. Cosmadakis, H. Gaifman, P. C. Kanellakis, and M. Y. Vardi. Decidable
optimization problems for database logic programs. In Proc. ACM SIGACT Symp. on the
Theory of Computing, 1988.
[CGP93] L. Corciulo, F. Giannotti, and D. Pedreschi. Datalog with non-deterministic choice
computes NDB-PTIME. In Proc. of Intl. Conf. on Deductive and Object-Oriented Databases
(DOOD), 1993.
[CGT90] S. Ceri, G. Gottlob, and L. Tanca. Logic Programming and Databases. Springer-Verlag,
Berlin, 1990.
[CH80a] A. K. Chandra and D. Harel. Structure and complexity of relational queries. In IEEE Conf. on Foundations of Computer Science, pages 333–347, 1980.
[CH80b] A. K. Chandra and D. Harel. Computable queries for relational data bases. Journal of Computer and System Sciences, 21(2):156–178, 1980.
[CH82] A. K. Chandra and D. Harel. Structure and complexity of relational queries. Journal of Computer and System Sciences, 25(1):99–128, 1982.
[CH85] A. K. Chandra and D. Harel. Horn clause queries and generalizations. J. Logic Programming, 2(1):1–15, 1985.
[Cha81a] A. K. Chandra. Programming primitives for database languages. In Proc. ACM Symp. on Principles of Programming Languages, pages 50–62, 1981.
[Cha81b] C. Chang. On the evaluation of queries containing derived relations in relational databases. In H. Gallaire, J. Minker, and J.-M. Nicolas, editors, Advances in Database Theory, vol. 1, pages 235–260. Plenum Press, New York, 1981.
[Cha88] A. K. Chandra. Theory of database queries. In Proc. ACM Symp. on Principles of Database Systems, pages 1–9, 1988.
[Cha94] T.-P. Chang. On Incremental Update Propagation Between Object-based Databases. Ph.D. thesis, University of Southern California, Los Angeles, 1994.
[Che76] P. P. Chen. The entity-relationship model: Toward a unified view of data. ACM Trans. on Database Systems, 1:9–36, 1976.
[CHM94] I-M. A. Chen, R. Hull, and D. McLeod. Local ambiguity and derived data update. In Fourth Intl. Workshop on Research Issues in Data Engineering: Active Database Systems, pages 77–86, 1994.
[Cho92a] J. Chomicki. History-less checking of dynamic integrity constraints. In Proc. IEEE Intl.
Conf. on Data Engineering, 1992.
[Cho92b] J. Chomicki. Real-time integrity constraints. In Proc. ACM Symp. on Principles of
Database Systems, 1992.
[Cho94] J. Chomicki. Temporal query languages: A survey. In Proc. 1st Intl. Conf. on Temporal
Logic, 1994.
[Chu41] A. Church. The Calculi of Lambda-Conversion. Princeton University Press, Princeton, NJ,
1941.
[CK73] C. C. Chang and H. J. Keisler. Model Theory. North Holland, Amsterdam, 1973.
[CK85] S. S. Cosmadakis and P. C. Kanellakis. Equational theories and database constraints. In Proc. ACM SIGACT Symp. on the Theory of Computing, pages 273–284, 1985.
[CK86] S. S. Cosmadakis and P. C. Kanellakis. Functional and inclusion dependencies: A graph theoretic approach. In P. C. Kanellakis and F. Preparata, editors, Advances in Computing Research, vol. 3: The Theory of Databases, pages 164–185. JAI Press, Inc., Greenwich, CT, 1986.
[CKRP73] A. Colmerauer, H. Kanoui, P. Roussel, and R. Pasero. Un système de communication homme-machine en français. Technical Report, Groupe de Recherche en Intelligence Artificielle, Université Aix-Marseille, 1973.
[CKS86] S. S. Cosmadakis, P. C. Kanellakis, and N. Spyratos. Partition semantics for relations. Journal of Computer and System Sciences, 32(2):203–233, 1986.
[CKV90] S. S. Cosmadakis, P. C. Kanellakis, and M. Y. Vardi. Polynomial-time implication problems for unary inclusion dependencies. J. ACM, 37:15–46, 1990.
[CL73] C. L. Chang and R. C. T. Lee. Symbolic Logic and Mechanical Theorem Proving. Academic
Press, New York, 1973.
[CL94] D. Calvanese and M. Lenzerini. Making object-oriented schemas more expressive. In Proc.
ACM Symp. on Principles of Database Systems, 1994.
[Cla78] K. L. Clark. Negation as failure. In H. Gallaire and J. Minker, editors, Logic and Databases, pages 293–322. Plenum Press, New York, 1978.
[CLM81] A. K. Chandra, H. R. Lewis, and J. A. Makowsky. Embedded implicational dependencies and their inference problem. In Proc. ACM SIGACT Symp. on the Theory of Computing, pages 342–354, 1981.
[CM77] A. K. Chandra and P. M. Merlin. Optimal implementation of conjunctive queries in relational data bases. In Proc. ACM SIGACT Symp. on the Theory of Computing, pages 77–90, 1977.
[CM90] M. Consens and A. Mendelzon. GraphLog: A visual formalism for real life recursion. In Proc. ACM Symp. on Principles of Database Systems, pages 404–416, 1990.
[CM93a] M. Consens and A. Mendelzon. The hy+ hygraph visualization system. In Proc. ACM
SIGMOD Symp. on the Management of Data, 1993.
[CM93b] M. Consens and A. Mendelzon. Low complexity aggregation in GraphLog and Datalog. Theoretical Computer Science, 116(1):95–116, 1993. A preliminary version was published in the Proceedings of the Third International Conference on Database Theory, Springer-Verlag, Berlin, LNCS 470, 1990.
[Cod70] E. F. Codd. A relational model of data for large shared data banks. Comm. of the ACM, 13(6):377–387, 1970.
[Cod71] E. F. Codd. Normalized database structure: A brief tutorial. In ACM SIGFIDET Workshop
on Data Description, Access and Control, November 1971.
[Cod72a] E. F. Codd. Further normalization of the data base relational model. In R. Rustin, editor, Courant Computer Science Symposium 6: Data Base Systems, pages 33–64. Prentice-Hall, Englewood Cliffs, NJ, 1972.
[Cod72b] E. F. Codd. Relational completeness of database sublanguages. In R. Rustin, editor, Courant Computer Science Symposium 6: Data Base Systems, pages 65–98. Prentice-Hall, Englewood Cliffs, NJ, 1972.
[Cod74] E. F. Codd. Recent investigations in relational data base systems. In Information Processing 74, pages 1017–1021. North Holland, Amsterdam, 1974.
[Cod75] E. F. Codd. Understanding relations (installment #7). In FDT Bull. of ACM Sigmod 7, pages 23–28, 1975.
[Cod79] E. F. Codd. Extending the data base relational model to capture more meaning. ACM Trans. on Database Systems, 4(4):397–434, 1979.
[Cod82] E. F. Codd. Relational databases: A practical foundation for productivity. Comm. of the ACM, 25(2):102–117, 1982.
[Coh86] D. Cohen. Programming by specification and annotation. In Proc. of AAAI, 1986.
[Coh89] D. Cohen. Compiling complex database transition triggers. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 225–234, 1989.
[Coh90] J. Cohen. Constraint logic programming languages. Comm. of the ACM, 33(7):69–90, 1990.
[Com88] K. Compton. 0-1 laws in logic and combinatorics. In 1987 NATO Adv. Study Inst. on Algorithms and Order, pages 353–383, 1988.
[Coo74] S. A. Cook. An observation on a time-storage trade-off. Journal of Computer and System Sciences, 9:308–316, 1974.
[Cos83] S. S. Cosmadakis. The complexity of evaluating relational queries. Inf. and Control, 58:101–112, 1983.
[Cos87] S. S. Cosmadakis. Database theory and cylindric lattices. In IEEE Conf. on Foundations of Computer Science, pages 411–420, 1987.
[Cou90] B. Courcelle. Recursive applicative program schemes. In J. Van Leeuwen, editor, Handbook of Theoretical Computer Science, vol. B, pages 459–492. Elsevier, Amsterdam, 1990.
[CP84] S. S. Cosmadakis and C. H. Papadimitriou. Updates of relational views. J. ACM, 31(4):742–760, 1984.
[CRG+88] S. Ceri, S. Crespi Reghizzi, G. Gottlob, F. Lamperti, L. Lavazza, L. Tanca, and R. Zicari. The algres project. In Proc. of Intl. Conf. on Extending Data Base Technology. Springer-Verlag, Berlin, 1988.
[CT48] L. H. Chin and A. Tarski. Remarks on projective algebras. Bulletin AMS, 54:80–81, 1948.
[CT87] S. Ceri and L. Tanca. Optimization of systems of algebraic equations for evaluating datalog queries. In Proc. of Intl. Conf. on Very Large Data Bases, 1987.
[CTF88] M. A. Casanova, L. Tucherman, and A. L. Furtado. Enforcing inclusion dependencies and referential integrity. In Proc. of Intl. Conf. on Very Large Data Bases, pages 38–49, 1988.
[CV81] T. Connors and V. Vianu. Tableaux which define expression mappings. Technical Report, Computer Science Department, University of Southern California, 1981. Presented at XP2 Conf. on Theory of Relational Databases, Pennsylvania State University, June 1981.
[CV83] M. A. Casanova and V. M. P. Vidal. Towards a sound view integration methodology. In Proc. ACM Symp. on Principles of Database Systems, pages 36–47, 1983.
[CV85] A. K. Chandra and M. Y. Vardi. The implication problem for functional and inclusion dependencies is undecidable. SIAM J. on Computing, 14(3):671–677, 1985.
[CV92] S. Chaudhuri and M. Y. Vardi. On the equivalence of datalog programs. In Proc. ACM Symp. on Principles of Database Systems, pages 55–66, 1992.
[CV93] S. Chaudhuri and M. Y. Vardi. Optimization of real conjunctive queries. In Proc. ACM Symp. on Principles of Database Systems, pages 59–70, 1993.
[CV94] S. Chaudhuri and M. Y. Vardi. On the complexity of equivalence between recursive and nonrecursive datalog programs. In Proc. ACM Symp. on Principles of Database Systems, pages 107–116, 1994.
[CW85] L. Cardelli and P. Wegner. On understanding types, data abstraction and polymorphism. ACM Computing Surveys, 17:471–522, December 1985.
[CW89a] W. Chen and D. S. Warren. C-Logic of complex objects. In Proc. ACM Symp. on Principles of Database Systems, pages 369–378, 1989.
[CW89b] S. R. Cohen and O. Wolfson. Why a single parallelization strategy is not enough in knowledge bases. In Proc. ACM Symp. on Principles of Database Systems, pages 200–216, 1989.
[CW90] S. Ceri and J. Widom. Deriving production rules for constraint maintenance. In Proc. of Intl. Conf. on Very Large Data Bases, pages 566–577, 1990.
[CW91] S. Ceri and J. Widom. Deriving production rules for incremental view maintenance. In Proc. of Intl. Conf. on Very Large Data Bases, pages 577–589, 1991.
[CW92] W. Chen and D. S. Warren. A goal oriented approach to computing well founded semantics. In Proc. of the Joint Intl. Conf. and Symp. on Logic Programming, pages 589–606, 1992.
[CW93] S. Ceri and J. Widom. Managing semantic heterogeneity with production rules and persistent queues. In Proc. of Intl. Conf. on Very Large Data Bases, pages 108–119, 1993.
[DA83] C. Delobel and M. Adiba. Bases de Données et Systèmes Relationnels. Dunod Informatique, Paris, 1983.
[Dal87] E. Dahlhaus. Skolem normal forms concerning the least fixpoint. In E. Börger, editor, Computation Theory and Logic, vol. 270, pages 101–106. Springer-Verlag, Berlin, LNCS, 1987.
[Dat81] C. J. Date. Referential integrity. In Proc. of Intl. Conf. on Very Large Data Bases, pages 2–12, 1981.
[Dat86] C. J. Date. An Introduction to Database Systems. Addison-Wesley, Reading, MA, 1986.
[Daw93] A. Dawar. Feasible Computation through Model Theory. Ph.D. thesis, University of Pennsylvania, 1993.
[Day89] U. Dayal. Queries and views in an object-oriented data model. In Proc. of Intl. Workshop on Database Programming Languages, pages 80–102, 1989.
[DB82] U. Dayal and P. A. Bernstein. On the correct translation of update operations on relational views. ACM Trans. on Database Systems, 8(3):381–416, 1982.
[DC72] C. Delobel and R. C. Casey. Decomposition of a database and the theory of boolean switching functions. IBM J. Research and Development, 17(5):370–386, 1972.
[DD89] L. M. L. Delcambre and K. C. Davis. Automatic validation of object-oriented database structures. In Proc. IEEE Intl. Conf. on Data Engineering, pages 2–9, 1989.
[Dec86] H. Decker. Extending and restricting deductive databases. Technical Report KB-21, ECRC, Munich, 1986.
[Del78] C. Delobel. Normalization and hierarchical dependencies in the relational data model. ACM Trans. on Database Systems, 3(3):201–222, 1978.
[Dem82] R. Demolombe. Syntactical characterization of a subset of domain independent formulas. Technical Report, ONERA-CERT, Toulouse, 1982.
[DF92] C. J. Date and R. Fagin. Simple conditions for guaranteeing higher normal forms in relational databases. ACM Trans. on Database Systems, 17:465–476, 1992.
[DG79] B. S. Dreben and W. D. Goldfarb. The Decision Problem: Solvable Classes of Quantificational Formulas. Addison-Wesley, Reading, MA, 1979.
[DH84] U. Dayal and H. Y. Hwang. View definition and generalization for database integration in a multidatabase system. IEEE Trans. on Software Engineering, SE-10(6):628–644, 1984.
[DHL91] U. Dayal, M. Hsu, and R. Ladin. A transaction model for long-running activities. In Proc. of Intl. Conf. on Very Large Data Bases, pages 113–122, 1991.
[DiP69] R. A. DiPaola. The recursive unsolvability of the decision problem for a class of definite formulas. J. ACM, 16(2):324–327, 1969.
[DM86a] E. Dahlhaus and J. A. Makowsky. Computable directory queries. In 11th CAAP 86, pages 254–265. Springer-Verlag, Berlin, LNCS 214, 1986.
[DM86b] A. D'Atri and M. Moscarini. Recognition algorithms and design methodologies for acyclic database schemes. In P. C. Kanellakis and F. Preparata, editors, Advances in Computing Research, vol. 3, pages 164–185. JAI Press, Inc., Greenwich, CT, 1986.
[DM92] E. Dahlhaus and J. A. Makowsky. Query languages for hierarchic databases. Information and Computation, 101(1):1–32, November 1992.
[DMP93] M. A. Derr, S. Morishita, and G. Phipps. Design and implementation of the Glue-Nail database system. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 147–156, 1993.
[dMS88] C. de Maindreville and E. Simon. Modelling non-deterministic queries and updates in deductive databases. In Proc. of Intl. Conf. on Very Large Data Bases, 1988.
[Don92] G. Dong. Datalog expressiveness of chain queries: Grammar tools and characterizations. In Proc. ACM Symp. on Principles of Database Systems, pages 81–90, 1992.
[DP84] P. DeBra and J. Paredaens. Horizontal decompositions for handling exceptions to functional dependencies. In H. Gallaire, J. Minker, and J.-M. Nicolas, editors, Advances in Database Theory, vol. 2, pages 123–144. Plenum Press, New York, 1984.
[dR87] M. de Rougemont. Second-order and inductive definability of finite structures. Zeitschr. Math. Logik und Grundlagen d. Math., 33:47–63, 1987.
[DS91] G. Dong and J. Su. Object behaviors and scripts. In Proc. of Intl. Workshop on Database
Programming Languages, pages 2730, 1991.
[DS92] G. Dong and J. Su. Incremental and decremental evaluation of transitive closure by rst-
order queries. Technical Report TRCS 92-18, University of California, Santa Barbara, 1992. To
appear in Information and Computation.
[DS93] G. Dong and J. Su. First-order incremental evaluation of datalog queries (extended abstract).
In Proc. of Intl. Workshop on Database Programming Languages, 1993.
[DST93] G. Dong, J. Su, and R. Topor. Nonrecursive incremental evaluation of datalog queries.
Technical Report, Department of Computer Science, University of Melbourne, Australia, 1993.
To appear in Annals of Mathematics and Articial Intelligence.
[DT92] G. Dong and R. Topor. Incremental evaluation of datalog queries. In Proc. of Intl. Conf. on
Database Theory, pages 282296, 1992.
[DV91] K. Denninghoff and V. Vianu. The power of methods with parallel semantics. In Proc. of
Intl. Conf. on Very Large Data Bases, pages 221232, 1991.
[DV93] K. Denninghoff and V. Vianu. Database method schemas and object creation. In Proc. ACM
Symp. on Principles of Database Systems, pages 265275, 1993.
[DW85] S. W. Dietrich and D. S. Warren. Dynamic programming strategies for the evaluation of
recursive queries. Technical Report TR 85-31, Computer Science Department, SUNY at Stony
Brook, New York, 1985.
[DW87] S. W. Dietrich and D. S. Warren. Extension tables: Memo relations in logic programming.
In Proc. of the Symposium on Logic Programming, 1987.
[DW94] U. Dayal and J. Widom. Active Database Systems. Morgan Kaufmann Publishers, Inc., Los
Altos, CA. In preparation, to appear in 1994.
[EFT84] H. D. Ebbinghaus, J. Flum, and W. Thomas. Mathematical Logic. Springer-Verlag, Berlin,
1984.
[EGM94] T. Eiter, G. Gottlob, and H. Mannila. Adding disjunction to Datalog. In Proce. ACM
Symp. on Principles of Database Systems, pages 267278, 1994.
[EHJ93] M. Escobar-Molano, R. Hull, and D. Jacobs. Safety and translation of calculus queries with scalar functions. In Proc. ACM Symp. on Principles of Database Systems, pages 253–264, 1993.
[Ehr61] A. Ehrenfeucht. An application of games to the completeness problem for formalized theories. Fund. Math., 49:129–141, 1961.
[Eme91] E. A. Emerson. Temporal and modal logic. In J. Van Leeuwen, editor, Handbook of Theoretical Computer Science, pages 997–1072. Elsevier, Amsterdam, 1991.
[EN89] R. Elmasri and S. B. Navathe. Fundamentals of Database Systems. Benjamin/Cummings Publishing Co., Menlo Park, CA, 1989.
[End72] H. B. Enderton. A Mathematical Introduction to Logic. Academic Press, New York, 1972.
[Esw76] K. P. Eswaran. Aspects of a trigger subsystem in an integrated data base system. In Proceedings of the 2nd International Conference on Software Engineering, San Francisco, CA, pages 243–250, 1976.
[ESW78] R. Epstein, M. Stonebraker, and E. Wong. Distributed query processing in a relational database system. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 169–180, 1978.
[Fag72] R. Fagin. Probabilities on finite models. Notices of the American Mathematical Society, October:A-714, 1972.
[Fag75] R. Fagin. Monadic generalized spectra. Zeitschrift für Mathematische Logik und Grundlagen der Mathematik, 21:89–96, 1975.
[Fag76] R. Fagin. Probabilities on finite models. Journal of Symbolic Logic, 41(1):50–58, 1976.
[Fag77a] R. Fagin. The decomposition versus synthetic approach to relational database design. In Proc. of Intl. Conf. on Very Large Data Bases, pages 441–446, 1977.
[Fag77b] R. Fagin. Multivalued dependencies and a new normal form for relational databases. ACM Trans. on Database Systems, 2:262–278, 1977.
[Fag79] R. Fagin. Normal forms and relational database operators. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 153–160, 1979.
[Fag81] R. Fagin. A normal form for relational databases that is based on domains and keys. ACM Trans. on Database Systems, 6(3):387–415, 1981.
[Fag82a] R. Fagin. Armstrong databases. In Proc. IBM Symp. on Mathematical Foundations of Computer Science, 1982.
[Fag82b] R. Fagin. Horn clauses and database dependencies. J. ACM, 29(4):952–985, 1982.
[Fag83] R. Fagin. Degrees of acyclicity for hypergraphs and relational database schemes. J. ACM, 30(3):514–550, 1983.
[Fag93] R. Fagin. Finite-model theory: A personal perspective. Theoretical Computer Science, 116:3–31, 1993.
[FC85] A. L. Furtado and M. A. Casanova. Updating relational views. In W. Kim, D. S. Reiner, and D. S. Batory, editors, Query Processing in Database Systems. Springer-Verlag, Berlin, 1985.
[FHMV95] R. Fagin, J. Y. Halpern, Y. Moses, and M. Y. Vardi. Reasoning about Knowledge. MIT Press, Cambridge, MA, 1995.
[Fit85] M. Fitting. A Kripke-Kleene semantics of logic programs. Logic Programming, 4:295–312, 1985.
[FJT83] P. C. Fischer, J. H. Jou, and D. M. Tsou. Succinctness in dependency systems. Theoretical Computer Science, 24:323–329, 1983.
[FKL97] J. Flum, M. Kubierschky, and B. Ludaescher. Total and partial well-founded datalog coincide. To appear, Proc. of Intl. Conf. on Database Theory, 1997.
[FKUV86] R. Fagin, G. Kuper, J. D. Ullman, and M. Y. Vardi. Updating logical databases. In P. C. Kanellakis and F. Preparata, editors, Advances in Computing Research, vol. 3, pages 1–18. JAI Press, Inc., Greenwich, CT, 1986.
[FM92] J. A. Fernandez and J. Minker. Semantics of disjunctive deductive databases. In Proc. of Intl. Conf. on Database Theory, pages 21–50. Springer-Verlag, Berlin, LNCS 646, 1992.
[FMU82] R. Fagin, A. O. Mendelzon, and J. D. Ullman. A simplified universal relation assumption and its properties. ACM Trans. on Database Systems, 7(3):343–360, 1982.
[FNS91] C. Faloutsos, R. Ng, and T. Sellis. Predictive load control for flexible buffer allocation. In Proc. of Intl. Conf. on Very Large Data Bases, pages 265–274, 1991.
[For81] C. L. Forgy. OPS5 user's manual. Technical Report CMU-CS-81-135, Carnegie-Mellon University, 1981.
[For82] C. L. Forgy. Rete: A fast algorithm for the many pattern/many object pattern match problem. Artificial Intelligence, 19:17–37, 1982.
[Fra54] R. Fraïssé. Sur les classifications des systèmes de relations. Publ. Sci. Univ. Alger, I:1, 1954.
[Fre87] J. C. Freytag. A rule-based view of query optimization. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 173–180, 1987.
[Fri71] H. Friedman. Algorithmic procedures, generalized Turing algorithms, and elementary recursion theory. In R. O. Gandy and C. M. E. Yates, editors, Logic Colloquium '69, pages 361–389. North Holland, Amsterdam, 1971.
[FT83] P. C. Fischer and D.-M. Tsou. Whether a set of multivalued dependencies implies a join dependency is NP-hard. SIAM J. on Computing, 12:259–266, 1983.
[FUMY83] R. Fagin, J. D. Ullman, D. Maier, and M. Yannakakis. Tools for template dependencies. SIAM J. on Computing, 12(1):36–59, 1983.
[FUV83] R. Fagin, J. D. Ullman, and M. Y. Vardi. On the semantics of updates in databases. In Proc. ACM Symp. on Principles of Database Systems, pages 352–365, 1983.
[FV86] R. Fagin and M. Y. Vardi. The theory of data dependencies: A survey. In M. Anshel and W. Gewirtz, editors, Mathematics of Information Processing: Proceedings of Symposia in Applied Mathematics, vol. 34, pages 19–71. American Mathematical Society, Providence, RI, 1986.
[Fv89] C. C. Fleming and B. von Halle. Handbook of Relational Database Design. Addison-Wesley, Reading, MA, 1989.
[Gal87] A. Galton. Temporal logics and their applications. Academic Press, New York, 1987.
[Gar70] M. Gardner. The game of life. Sci. American, 223, 1970.
[Gär88] P. Gärdenfors. Knowledge in Flux: Modeling the Dynamics of Epistemic States. MIT Press, Cambridge, MA, 1988.
[GD87] G. Graefe and D. J. DeWitt. The EXODUS optimizer generator. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 160–172, 1987.
[GD94] S. Gatziu and K. R. Dittrich. Detecting composite events in active database systems using Petri nets. In Proc. Fourth Intl. Workshop on Research Issues in Data Engineering: Active Database Systems, pages 2–9, 1994.
[GdM86] G. Gardarin and C. de Maindreville. Evaluation of database recursive logic programs as recurrent function series. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 177–186, 1986.
[GG88] M. Gyssens and D. Van Gucht. The powerset algebra as a result of adding programming constructs to the nested relational algebra. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 225–232, 1988.
[GH83] S. Ginsburg and R. Hull. Characterizations for functional dependency and Boyce-Codd normal form families. Theoretical Computer Science, 27:243–286, 1983.
[GH86] S. Ginsburg and R. Hull. Sort sets in the relational model. J. ACM, 33:465–488, 1986.
[Gin66] S. Ginsburg. The Mathematical Theory of Context-Free Languages. McGraw-Hill, New York, 1966.
[Gin93] S. Ginsburg. Object and spreadsheet histories. In A. U. Tansel et al., editors, Temporal Databases: Theory, Design, and Implementation, pages 272–293. Benjamin/Cummings Publishing Co., Menlo Park, 1993.
[GJ79] M. R. Garey and D. S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. Freeman, San Francisco, 1979.
[GJ82] J. Grant and B. E. Jacobs. On the family of generalized dependency constraints. J. ACM, 29(4):986–997, 1982.
[GJ91] N. H. Gehani and H. V. Jagadish. ODE as an active database: Constraints and triggers. In Proc. of Intl. Conf. on Very Large Data Bases, pages 327–336, 1991.
[GHJ+93] S. Ghandeharizadeh, R. Hull, D. Jacobs, et al. On implementing a language for specifying active database execution models. In Proc. of Intl. Conf. on Very Large Data Bases, pages 441–454, 1993.
[GHJ94] S. Ghandeharizadeh, R. Hull, and D. Jacobs. [Alg,C]: Elevating deltas to be first-class citizens in a database programming language. Technical Report USC-CS-94-581, Computer Science Dept., University of Southern California, Los Angeles, September 1994.
[GJS92a] N. H. Gehani, H. V. Jagadish, and O. Shmueli. Composite event specification in active databases: Model & implementation. In Proc. of Intl. Conf. on Very Large Data Bases, pages 327–338, 1992.
[GJS92b] N. H. Gehani, H. V. Jagadish, and O. Shmueli. Event specification in an active object-oriented database. Technical memorandum, Bell Labs, Holmdel, NJ, 1992.
[GJS92c] N. H. Gehani, H. V. Jagadish, and O. Shmueli. Event specification in an active object-oriented database. In Proc. ACM SIGMOD Symp. on the Management of Data, 1992.
[GKLT69] Y. V. Glebskiĭ, D. I. Kogan, M. I. Liogonkiĭ, and V. A. Talanov. Range and degree of realizability of formulas in the restricted predicate calculus. Kibernetika, 2:17–28, 1969.
[GKM92] A. Gupta, D. Katiyar, and I. S. Mumick. Counting solutions to the view maintenance problem. In K. Ramamohanarao, J. Harland, and G. Dong, editors, Proc. of the JICSLP Workshop on Deductive Databases, 1992.
[GL82] Y. Gurevich and H. R. Lewis. The inference problem for template dependencies. In Proc. ACM Symp. on Principles of Database Systems, pages 221–229, 1982.
[GL88] M. Gelfond and V. Lifschitz. The stable model semantics for logic programs. In Intl. Conf. on Logic Programming, pages 1070–1080, 1988.
[GM78] H. Gallaire and J. Minker. Logic and Databases. Plenum Press, New York, 1978.
[GMN84] H. Gallaire, J. Minker, and J.-M. Nicolas. Logic and databases: A deductive approach. ACM Computing Surveys, 16(2):153–185, 1984.
[GMR92] G. Grahne, A. O. Mendelzon, and P. Z. Revesz. Knowledgebase transformations. In Proc. ACM Symp. on Principles of Database Systems, pages 246–260, 1992.
[GMS93] A. Gupta, I. S. Mumick, and V. S. Subrahmanian. Maintaining views incrementally. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 157–166, 1993.
[GMSV87] H. Gaifman, H. Mairson, Y. Sagiv, and M. Y. Vardi. Undecidable optimization problems for database logic programs. In Proc. IEEE Conf. on Logic in Computer Science, pages 106–115, 1987.
[GMSV93] H. Gaifman, H. Mairson, Y. Sagiv, and M. Y. Vardi. Undecidable optimization problems for database logic programs. J. ACM, 40:683–713, 1993.
[GMV86] M. H. Graham, A. O. Mendelzon, and M. Y. Vardi. Notions of dependency satisfaction. J. ACM, 33(1):105–129, 1986.
[GO93] E. Grädel and M. Otto. Inductive definability with counting on finite structures. In 6th Workshop on Computer Science Logic CSL '92, pages 231–247. Springer-Verlag, Berlin, LNCS 702, 1993.
[Goo70] L. A. Goodman. The multivariate analysis of qualitative data: Interactions among multiple classifications. J. Amer. Stat. Assn., 65:226–256, 1970.
[Got87] G. Gottlob. Computing covers for embedded functional dependencies. In Proc. ACM Symp. on Principles of Database Systems, pages 58–69, 1987.
[GPG90] M. Gyssens, J. Paredaens, and D. Van Gucht. A graph-oriented object database model. In Proc. ACM Symp. on Principles of Database Systems, pages 417–424, 1990.
[GPSZ91] F. Giannotti, D. Pedreschi, D. Saccà, and C. Zaniolo. Nondeterminism in deductive databases. In Proc. of Intl. Conf. on Deductive and Object-Oriented Databases (DOOD), pages 129–146. Springer-Verlag, Berlin, LNCS 566, 1991.
[GR83] A. Goldberg and D. Robson. Smalltalk-80: The Language and Its Implementation. Addison-Wesley, Reading, MA, 1983.
[GR86] G. Grahne and K.-J. Räihä. Characterizations for acyclic database schemes. In P. C. Kanellakis and F. Preparata, editors, Advances in Computing Research, vol. 3: The Theory of Databases, pages 19–42. JAI Press, Inc., Greenwich, CT, 1986.
[Gra77] J. Grant. Null values in relational databases. In Inf. Proc. Letters, pages 156–157, 1977.
[Gra79] M. H. Graham. On the universal relation. Technical Report, University of Toronto, Toronto, Ontario, Canada, 1979.
[Gra83] E. Grandjean. Complexity of the first-order theory of almost all structures. Information and Control, 52:180–204, 1983.
[Gra84] G. Grahne. Dependency satisfaction in databases with incomplete information. In Proc. of Intl. Conf. on Very Large Data Bases, pages 37–45, 1984.
[Gra91] G. Grahne. The Problem of Incomplete Information in Relational Databases. Springer-Verlag, Berlin, 1991.
[Gra93] G. Graefe. Query evaluation techniques for large databases. ACM Computing Surveys, 25(2):73–170, 1993.
[Gre75] S. Greibach. Theory of Program Structures: Schemes, Semantics, Verification. Springer-Verlag, Berlin, LNCS 36, 1975.
[GS82] N. Goodman and O. Shmueli. Tree queries: A simple class of queries. ACM Trans. on Database Systems, 7(4):653–677, 1982.
[GS84] N. Goodman and O. Shmueli. The tree projection theorem and relational query processing. Journal of Computer and System Sciences, 28(1):60–79, 1984.
[GS86] Y. Gurevich and S. Shelah. Fixed-point extensions of first-order logic. Annals of Pure and Applied Logic, 32:265–280, 1986.
[GS87] G. Gardarin and E. Simon. Les systèmes de gestion de bases de données déductives. Technique et Science Informatiques, 6(5), 1987.
[GS94] S. Grumbach and J. Su. Finitely representable databases. In Proc. ACM Symp. on Principles of Database Systems, 1994.
[GST90] S. Ganguly, A. Silberschatz, and S. Tsur. A framework for the parallel processing of datalog queries. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 143–152, 1990.
[GT83] N. Goodman and Y. C. Tay. Synthesizing fourth normal form relations from multivalued dependencies. Technical Report, Harvard University, 1983.
[Gun92] C. Gunter. The mixed powerdomain. Theoretical Computer Science, 103:311–334, 1992.
[Gur] Y. Gurevich. Personal communication.
[Gur66] Y. Gurevich. The word problem for certain classes of semigroups (in Russian). Algebra and Logic, 5:25–35, 1966.
[Gur84] Y. Gurevich. Toward a logic tailored for computational complexity. In M. M. Richter et al., editors, Computation and Proof Theory, pages 175–216. Springer-Verlag, Berlin, LNM 1104, 1984.
[Gur88] Y. Gurevich. Logic and the challenge of computer science. In E. Börger, editor, Trends in Theoretical Computer Science, pages 1–57. Computer Science Press, Rockville, MD, 1988.
[GV84] M. H. Graham and M. Y. Vardi. On the complexity and axiomatizability of consistent database states. In Proc. ACM Symp. on Principles of Database Systems, pages 281–289, 1984.
[GV91] S. Grumbach and V. Vianu. Tractable query languages for complex object databases. In Proc. ACM Symp. on Principles of Database Systems, 1991.
[GV92] G. Gardarin and P. Valduriez. ESQL2: An object-oriented SQL with F-logic semantics. In Intl. Conf. on Data Engineering, 1992.
[GW89] G. Graefe and K. Ward. Dynamic query evaluation plans. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 358–366, 1989.
[GW90] J. R. Groff and P. N. Weinberg. Using SQL. Osborne McGraw-Hill, New York, 1990.
[GZ82] S. Ginsburg and S. M. Zaiddan. Properties of functional dependency families. J. ACM, 29(4):678–698, 1982.
[GZ88] G. Gottlob and R. Zicari. Closed world databases opened through null values. In Proc. of Intl. Conf. on Very Large Data Bases, pages 50–61, 1988.
[Hab70] S. J. Haberman. The general log-linear model. Ph.D. thesis, Department of Statistics, University of Chicago, 1970.
[Hal93] J. Y. Halpern. Reasoning about knowledge: A survey circa 1991. In A. Kent and J. G. Williams, editors, Encyclopedia of Computer Science and Technology, Vol. 27 (Supplement 12). Marcel Dekker, New York, 1993.
[Han89] E. H. Hanson. An initial report on the design of Ariel: A DBMS with an integrated production rule system. SIGMOD Record, 18(3):12–19, 1989.
[Har78] M. A. Harrison. Introduction to Formal Language Theory. Addison-Wesley, Reading, MA, 1978.
[Har80] D. Harel. On folk theorems. Comm. of the ACM, 23:379–385, 1980.
[HCL+90] L. Haas, W. Chang, G. M. Lohman, J. McPherson, P. F. Wilms, G. Lapis, B. Lindsay, H. Pirahesh, M. Carey, and E. Shekita. Starburst mid-flight: As the dust clears. IEEE Transactions on Knowledge and Data Engineering, 2(1):143–160, 1990.
[Hel92] L. Hella. Logical hierarchies in PTIME. In Proc. IEEE Conf. on Logic in Computer Science, 1992.
[Her92] C. Herrmann. On the undecidability of implications between embedded multivalued database dependencies. Technical Report, Technische Hochschule Darmstadt, Germany, February 24, 1992.
[HH93] T. Hirst and D. Harel. Completeness results of recursive data bases. In Proc. ACM Symp. on Principles of Database Systems, pages 244–252, 1993.
[HJ91a] R. Hull and D. Jacobs. Language constructs for programming active databases. In Proc. of Intl. Conf. on Very Large Data Bases, pages 455–468, 1991.
[HJ91b] R. Hull and D. Jacobs. On the semantics of rules in database programming languages. In J. Schmidt and A. Stogny, editors, Next Generation Information System Technology: Proc. of the First International East/West Database Workshop, Kiev, USSR, October 1990, pages 59–85. Springer-Verlag, Berlin, LNCS 504, 1991.
[HK81] M. S. Hecht and L. Kerschberg. Update semantics for the functional data model. Technical Report, Bell Laboratories, Holmdel, NJ, January 1981.
[HK87] R. Hull and R. King. Semantic database modeling: Survey, applications, and research issues. ACM Computing Surveys, 19:201–260, 1987.
[HK89] S. E. Hudson and R. King. Cactis: A self-adaptive, concurrent implementation of an object-oriented database management system. ACM Trans. on Database Systems, 14:291–321, 1989.
[HKM93] G. Hillebrand, P. Kanellakis, and H. Mairson. Database query languages embedded in the typed lambda calculus. In Proc. IEEE Conf. on Logic in Computer Science, pages 332–343, 1993.
[HKR93] G. Hillebrand, P. Kanellakis, and S. Ramaswamy. Functional programming formalisms for OODB methods. In Proc. NATO ASI Summer School on OODBs, Kusadasi, Turkey, 1993.
[HLM88] M. Hsu, R. Ladin, and D. R. McCarthy. An execution model for active data base management systems. In Intl. Conf. on Data and Knowledge Bases: Improving Usability and Responsiveness, pages 171–179, 1988.
[HLY80] P. Honeyman, R. E. Ladner, and M. Yannakakis. Testing the universal instance assumption. Inf. Proc. Letters, 10(1):14–19, 1980.
[HM81] M. Hammer and D. McLeod. Database description with SDM: A semantic database model. ACM Trans. on Database Systems, 6(3):351–386, 1981.
[HMN84] L. J. Henschen, W. W. McCune, and S. A. Naqvi. Compiling constraint-checking programs from first-order formulas. In H. Gallaire, J. Minker, and J.-M. Nicolas, editors, Advances in Data Base Theory, vol. 2, pages 145–169. Plenum Press, New York, 1984.
[HMT71] L. Henkin, J. D. Monk, and A. Tarski. Cylindric Algebras. North Holland, Amsterdam, 1971.
[HN84] L. J. Henschen and S. A. Naqvi. On compiling queries in recursive first-order databases. J. ACM, 31(1):47–85, 1984.
[Hon82] P. Honeyman. Testing satisfaction of functional dependencies. J. ACM, 29(3):668–677, 1982.
[HS89a] R. Hull and J. Su. On accessing object-oriented databases: Expressive power, complexity, and restrictions. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 147–158, 1989.
[HS89b] R. Hull and J. Su. Untyped sets, invention, and computable queries. In Proc. ACM Symp. on Principles of Database Systems, pages 347–359, March 1989.
[HS93] R. Hull and J. Su. Algebraic and calculus query languages for recursively typed complex objects. Journal of Computer and System Sciences, 47:121–156, 1993.
[HS94] R. Hull and J. Su. Domain independence and the relational calculus. Acta Informatica, 31:513–524, 1994.
[HTY89] R. Hull, K. Tanaka, and M. Yoshikawa. Behavior analysis of object-oriented databases: Method structure, execution trees and reachability. In Proceedings 3rd International Conference on Foundations of Data Organization and Algorithms, pages 372–388, 1989.
[Hul83] R. Hull. Acyclic join dependency and data base projections. Journal of Computer and System Sciences, 27(3):331–349, 1983.
[Hul84] R. Hull. Finitely specifiable implicational dependency families. J. ACM, 31(2):210–226, 1984.
[Hul85] R. Hull. Non-finite specifiability of projections of functional dependency families. Theoretical Computer Science, 39:239–265, 1985.
[Hul86] R. Hull. Relative information capacity of simple relational schemata. SIAM J. on Computing, 15(3):856–886, August 1986.
[Hul87] R. Hull. A survey of theoretic research on typed complex database objects. In J. Paredaens, editor, Databases, pages 193–256. Academic Press, London, 1987.
[Hul89] G. Hulin. Parallel processing of recursive queries in distributed architectures. In Proc. of Intl. Conf. on Very Large Data Bases, pages 87–96, 1989.
[HW92] E. N. Hanson and J. Widom. An overview of production rules in database systems. Technical Report RJ 9023 (80483), IBM Almaden Research, October 1992.
[HY84] R. Hull and C. K. Yap. The Format model: A theory of database organization. Journal of the ACM, 31(3):518–537, 1984.
[HY90] R. Hull and M. Yoshikawa. ILOG: Declarative creation and manipulation of object identifiers (extended abstract). In Proc. of Intl. Conf. on Very Large Data Bases, pages 455–468, 1990.
[HY92] R. Hull and M. Yoshikawa. On the equivalence of data restructurings involving object identifiers. In J. D. Ullman, editor, Studies in Theoretical Computer Science (a festschrift for Seymour Ginsburg), pages 253–286. Academic Press, New York, 1992. See also article of same title in Proc. ACM Symp. on Principles of Database Systems, 1991.
[IK90] Y. E. Ioannidis and Y. C. Kang. Randomized algorithms for optimizing large join queries. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 312–321, 1990.
[IL84] T. Imielinski and W. Lipski. The relational model of data and cylindric algebras. Journal of Computer and System Sciences, 28(1):80–102, 1984.
[Imi84] T. Imielinski. On algebraic query processing in logical databases. In H. Gallaire and J. Minker, editors, Advances in Data Base Theory, vol. 2. Plenum Press, New York, 1984.
[Imm82] N. Immerman. Upper and lower bounds for first-order definability. Journal of Computer and System Sciences, 25:76–98, 1982.
[Imm86] N. Immerman. Relational queries computable in polynomial time. Inf. and Control, 68:86–104, 1986.
[Imm87a] N. Immerman. Expressibility as a complexity measure: Results and directions. Technical Report DCS-TR-538, Yale University, New Haven, CT, 1987.
[Imm87b] N. Immerman. Languages which capture complexity classes. SIAM J. on Computing, 16(4):760–778, 1987.
[IN88] T. Imielinski and S. Naqvi. Explicit control of logic programs through rule algebra. In Proc. ACM Symp. on Principles of Database Systems, pages 103–116, 1988.
[INSS92] Y. E. Ioannidis, R. T. Ng, K. Shim, and T. K. Sellis. Parametric query optimization. In Proc. of Intl. Conf. on Very Large Data Bases, pages 103–114, 1992.
[INV91a] T. Imielinski, S. Naqvi, and K. Vadaparty. Incomplete objects: A data model for design and planning applications. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 288–297, 1991.
[INV91b] T. Imielinski, S. Naqvi, and K. Vadaparty. Querying design and planning databases. In Proc. of Intl. Conf. on Deductive and Object-Oriented Databases (DOOD), pages 524–545, 1991.
[Ioa85] Y. E. Ioannidis. A time bound on the materialization of some recursively defined views. In Proc. of Intl. Conf. on Very Large Data Bases, pages 219–226, 1985.
[Jac82] B. E. Jacobs. On database logic. J. ACM, 29(2):310–332, 1982.
[JH91] D. Jacobs and R. Hull. Database programming with delayed updates. In Proc. of Intl. Workshop on Database Programming Languages, pages 416–428, 1991.
[JK84a] M. Jarke and J. Koch. Query optimization in database systems. ACM Computing Surveys, 16(2):111–152, 1984.
[JK84b] D. S. Johnson and A. Klug. Testing containment of conjunctive queries under functional and inclusion dependencies. Journal of Computer and System Sciences, 28:167–189, 1984.
[JL87] J. Jaffar and J. L. Lassez. Constraint logic programming. In Proc. ACM Symp. on Principles of Programming Languages, pages 111–119, 1987.
[Joh91] D. S. Johnson. A catalog of complexity classes. In J. Van Leeuwen, editor, Handbook of Theoretical Computer Science, pages 67–162. Elsevier, Amsterdam, 1991.
[Joy76] W. H. Joyner Jr. Resolution strategies as decision procedures. J. ACM, 23:398–417, 1976.
[JS82] G. Jaeschke and H.-J. Schek. Remarks on the algebra of non first normal form relations. In Proc. ACM Symp. on Principles of Database Systems, pages 124–138, 1982.
[Kam81] Y. Kambayashi. Database: A Bibliography. Computer Science Press, Rockville, MD, 1981.
[Kan88] P. C. Kanellakis. Logic programming and parallel complexity. In J. Minker, editor, Foundations of Deductive Databases and Logic Programming, pages 547–586. Morgan Kaufmann, Inc., Los Altos, CA, 1988.
[Kan91] P. C. Kanellakis. Elements of relational database theory. In J. Van Leeuwen, editor, Handbook of Theoretical Computer Science, pages 1074–1156. Elsevier, Amsterdam, 1991.
[KC86] S. Khoshafian and G. Copeland. Object identity. In Proc. OOPSLA, 1986.
[KDM88] A. M. Kotz, K. R. Dittrich, and J. A. Mülle. Supporting semantic rules by a generalized event/trigger mechanism. In Intl. Conf. on Extending Data Base Technology, pages 76–91, 1988.
[Kel82] A. M. Keller. Updates to relational databases through views involving joins. In Peter Scheuermann, editor, Improving Database Usability and Responsiveness. Academic Press, New York, 1982.
[Kel85] A. Keller. Algorithms for translating view updates to database updates for views involving selections, projections and joins. In Proc. ACM Symp. on Principles of Database Systems, pages 154–163, 1985.
[Kel86] A. M. Keller. The role of semantics in translating view updates. IEEE Computer, 19(1):63–73, January 1986.
[Ken78] W. Kent. Data and Reality. North Holland, Amsterdam, 1978.
[Ken79] W. Kent. Limitations of record-based information models. ACM Trans. on Database Systems, 4:107–131, 1979.
[Ken89] W. Kent. The many forms of a single fact. In Proc. of the IEEE Compcon Conf., 1989.
[Ker88] J.-M. Kerisit. La Méthode d'Alexander: Une Technique de Déduction. Ph.D. thesis, Université Paris VII, 1988.
[KG94] P. Kanellakis and D. Goldin. Constraint programming and database query languages. To appear in Proc. 2nd Conference on Theoretical Aspects of Computer Software (TACS), Springer-Verlag, Berlin, 1994.
[Kif88] M. Kifer. On safety, domain independence, and capturability of database queries. In C. Beeri, J. W. Schmidt, and U. Dayal, editors, Proc. 3rd Intl. Conf. on Data and Knowledge Bases, pages 405–415. Morgan Kaufmann, Inc., Los Altos, CA, 1988.
[KKR90] P. Kanellakis, G. Kuper, and P. Revesz. Constraint query languages. In Proc. 9th ACM Symp. on Principles of Database Systems, pages 299–313, Nashville, 1990.
[KKS92] M. Kifer, W. Kim, and Y. Sagiv. Querying object-oriented databases. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 393–402, 1992.
[KL86a] M. Kifer and E. Lozinskii. A framework for an efficient implementation of deductive databases. In Proc. of the Advanced Database Symposium, Tokyo, 1986.
[KL86b] M. Kifer and E. L. Lozinskii. Filtering data flow in deductive databases. In Proc. of Intl. Conf. on Database Theory, 1986.
[KL89] W. Kim and F. Lochovsky, editors. Object-Oriented Concepts, Databases, and Applications. Addison-Wesley, Reading, MA, 1989.
[Kle67] S. C. Kleene. Mathematical Logic. North Holland, Amsterdam, 1967.
[Klu80] A. Klug. Calculating constraints on relational tableaux. ACM Trans. on Database Systems, 5:260–290, 1980.
[Klu82] A. Klug. Equivalence of relational algebra and relational calculus query languages having aggregate functions. J. ACM, 29(3):699–717, 1982.
[Klu88] A. Klug. On conjunctive queries containing inequalities. J. ACM, 35(1):146–160, 1988.
[KLW93] M. Kifer, G. Lausen, and J. Wu. Logical foundations of object-oriented and frame-based languages. Technical Report 93/06, Computer Science Department, SUNY at Stony Brook, NY, 1993.
[KM91a] H. Katsuno and A. O. Mendelzon. On the difference between updating a knowledge base and revising it. In Proc. of the Second Intl. Conf. on Principles of Knowledge Representation and Reasoning, pages 387–394, 1991.
[KM91b] H. Katsuno and A. O. Mendelzon. Propositional knowledgebase revision and minimal change. Artificial Intelligence, 52:263–294, 1991.
[KN88] R. Krishnamurthy and S. A. Naqvi. Nondeterministic choice in datalog. In 5th Intl. Conf. on Data and Knowledge Bases, pages 416–424. Morgan Kaufmann, Inc., Los Altos, CA, 1988.
[Kni89] K. Knight. Unification: A multidisciplinary survey. ACM Computing Surveys, 21(1):93–124, 1989.
[Kol83] P. G. Kolaitis. Lecture notes on finite model theory, 1983.
[Kol91] P. G. Kolaitis. The expressive power of stratified logic programs. Information and Computation, 90(1):50–66, 1991.
[Kon88] S. Konolige. On the relation between default and autoepistemic logic. Artificial Intelligence, 35(3):343–382, 1988.
[Kow74] R. A. Kowalski. Predicate logic as a programming language. In Proc. IFIP.74, pages 569
574, 1974.
[Kow75] R. A. Kowalski. A proof procedure using connection graphs. J. ACM, 22:572595, 1975.
[Kow81] R. Kowalski. Logic as database language. Unpublished manuscript, Dept. of Computing,
Imperial College, London, 1981.
[KP81] S. Koenig and R. Paige. A transformational framework for the automatic control of derived
data. In Proc. of Intl. Conf. on Very Large Data Bases, pages 306318, 1981.
[KP82] A. Klug and R. Price. In determining view dependencies using tableaux. In ACM Trans. on
Database Systems, 7:361381, 1982.
[KP86] P. Kanellakis and C. H. Papadimitriou. Notes on monadic sirups. Unpublished manuscript,
1986.
[KP88] P. G. Kolaitis and C. H. Papadimitriou. Why not negation by fixpoint? In Proc. ACM Symp. on Principles of Database Systems, pages 231–239, 1988.
[KRS88a] M. Kifer, R. Ramakrishnan, and A. Silberschatz. An axiomatic approach to deciding query safety in deductive databases. In Proc. ACM Symp. on Principles of Database Systems, pages 52–60, 1988.
[KRS88b] R. Krishnamurthy, R. Ramakrishnan, and O. Shmueli. A framework for testing safety and effective computability of extended Datalog. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 154–163, 1988.
[KS91] H. F. Korth and A. Silberschatz. Database System Concepts, 2d ed. McGraw-Hill, New York, 1991.
[KT88] D. B. Kemp and R. W. Topor. Completeness of a top-down query evaluation procedure for stratified databases. In Proc. Fifth Intl. Symp. on Logic Programming, pages 195–211, 1988.
[KU84] A. Keller and J. D. Ullman. On complementary and independent mappings. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 143–148, 1984.
[Küc91] V. Küchenhoff. On the efficient computation of the difference between consecutive database states. In Proc. of Intl. Conf. on Deductive and Object-Oriented Databases (DOOD), pages 478–502, 1991.
[Kuh67] J. L. Kuhns. Answering questions by computer: a logical study. Technical Report RM-5428-PR, Rand Corp., 1967.
[Kun87] K. Kunen. Negation in logic programming. Logic Programming, 4:289–308, 1987.
[Kun88] K. Kunen. Some remarks on the completed database. In Intl. Conf. on Logic Programming, pages 978–992, 1988.
[Kup87] G. M. Kuper. Logic programming with sets. In Proc. ACM Symp. on Principles of Database Systems, pages 11–20, 1987.
[Kup88] G. M. Kuper. On the expressive power of logic programming languages with sets. In Proc. ACM Symp. on Principles of Database Systems, pages 10–14, 1988.
[Kup93] G. M. Kuper. Aggregation in constraint databases. In Proc. First Workshop on Principles and Practice of Constraint Programming, 1993.
[KV84] G. Kuper and M. Y. Vardi. A new approach to database logic. In Proc. ACM Symp. on Principles of Database Systems, pages 86–96, 1984.
[KV87] P. Kolaitis and M. Y. Vardi. The decision problem for the probabilities of higher-order properties. In Proc. ACM SIGACT Symp. on the Theory of Computing, pages 425–435, 1987.
[KV90a] D. Karabeg and V. Vianu. Parallel update transactions. Theoretical Computer Science, 76:93–114, 1990.
[KV90b] P. G. Kolaitis and M. Y. Vardi. 0-1 laws and decision problems for fragments of second-order logic. Information and Computation, 87:302–338, 1990.
[KV90c] P. G. Kolaitis and M. Y. Vardi. On the expressive power of Datalog: tools and a case study. In Proc. ACM Symp. on Principles of Database Systems, pages 61–71, 1990.
[KV91] D. Karabeg and V. Vianu. Simplification rules and axiomatization for relational update transactions. ACM Trans. on Database Systems, 16(3):439–475, 1991.
[KV92] P. G. Kolaitis and M. Y. Vardi. Infinitary logics and 0-1 laws. Information and Computation, 98:258–294, 1992.
[KV93a] G. Kuper and M. Y. Vardi. On the complexity of queries in the logical data model. Theoretical Computer Science, 116:33–58, 1993.
[KV93b] G. M. Kuper and M. Y. Vardi. The logical data model. ACM Trans. on Database Systems, 18:379–413, 1993.
[KW85] A. M. Keller and M. Winslett Wilkins. On the use of an extended relational model to handle changing incomplete information. IEEE Transactions on Software Engineering, SE-11:620–633, 1985.
[KW89] M. Kifer and J. Wu. A logic for object-oriented logic programming (Maier's O-logic revisited). In Proc. ACM Symp. on Principles of Database Systems, pages 379–393, 1989.
[Lan88] B. Lang. Datalog automata. In Proc. 3rd Intl. Conf. on Data and Knowledge Bases, pages 389–404. Morgan Kaufmann, Inc., Los Altos, CA, 1988.
[Lee91] J. Van Leeuwen, editor. Handbook of Theoretical Computer Science. Elsevier, Amsterdam,
1991.
[Lei69] A. C. Leisenring. Mathematical Logic and Hilbert's ε-symbol. Gordon and Breach, New York, 1969.
[Lei89a] D. Leivant. Descriptive characterization of computational complexity. Journal of Computer and System Sciences, 39:51–83, 1989.
[Lei89b] D. Leivant. Monotonic use of space and computational complexity over abstract structures. Technical Report CMU-CS-89-212, Carnegie-Mellon University, 1989.
[Lei90] D. Leivant. Inductive definitions over finite structures. Information and Computation, 89:95–108, 1990.
[Lel87] W. Leler. Constraint Programming Languages. Addison-Wesley, Reading, MA, 1987.
[Lev84a] H. J. Levesque. The logic of incomplete knowledge bases. In M. L. Brodie, J. L. Mylopoulos, and J. W. Schmidt, editors, On Conceptual Modeling, pages 165–189. Springer-Verlag, Berlin, 1984.
[Lev84b] H. J. Levesque. Foundations of a functional approach to knowledge representation. AI J., 23:155–212, 1984.
[Lib91] L. Libkin. A relational algebra for complex objects based on partial information. In LNCS 495: Proceedings of Symp. on Mathematical Fundamentals of Database Systems, pages 36–41. Springer-Verlag, Berlin, 1991.
[Lie80] Y. E. Lien. On the semantics of the entity-relationship model. In P. P. Chen, editor, Entity-Relationship Approach to Systems Analysis and Design, pages 155–167, 1980.
[Lie82] E. Lien. On the equivalence of database models. J. ACM, 29(2):333–363, 1982.
[Lif88] V. Lifschitz. On the declarative semantics of logic programs with negation. In J. Minker, editor, Foundations of Deductive Databases and Logic Programming, pages 177–192. Morgan Kaufmann, Inc., Los Altos, CA, 1988.
[Lin90] S. Lindell. An analysis of fixed-point queries on binary trees. Ph.D. thesis, University of California at Los Angeles, 1990.
[Lin91] S. Lindell. An analysis of fixed-point queries on binary trees. Theoretical Computer Science, 85:75–95, 1991.
[Lip79] W. Lipski. On semantic issues connected with incomplete information databases. ACM Trans. on Database Systems, 4(3):262–296, 1979.
[Lip81] W. Lipski. On databases with incomplete information. J. ACM, 28(1):41–70, 1981.
[LL86] N. Lerat and W. Lipski. Nonapplicable nulls. Theoretical Computer Science, 46:67–82, 1986.
[LL90] M. Levene and G. Loizou. The nested relation type model: An application of domain theory to databases. The Computer Journal, 33:19–30, 1990.
[Llo87] J. W. Lloyd. Foundations of Logic Programming, 2d ed. Springer-Verlag, Berlin, 1987.
[LM89] V. S. Lakshmanan and A. O. Mendelzon. Inductive pebble games and the inductive power of Datalog. In Proc. ACM Symp. on Principles of Database Systems, pages 301–311, 1989.
[LM93] D. Leivant and J.-Y. Marion. Lambda calculus characterizations of polytime. In Proceedings of the International Conference on Typed Lambda Calculi and Applications, 1993. (To appear in Fundamenta Informaticae.)
[LMG83] K. Laver, A. O. Mendelzon, and M. H. Graham. Functional dependencies on cyclic database schemes. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 79–91, 1983.
[LN90] R. J. Lipton and J. F. Naughton. Query size estimation by adaptive sampling (extended abstract). In Proc. ACM Symp. on Principles of Database Systems, pages 40–46, 1990.
[LO78] C. L. Lucchesi and S. L. Osborn. Candidate keys for relations. Journal of Computer and System Sciences, 17(2):270–279, 1978.
[Loh88] G. M. Lohman. Grammar-like functional rules for representing query optimization alternatives. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 18–27, 1988.
[Löw15] L. Löwenheim. Über Möglichkeiten im Relativkalkül. Math. Ann., 76:447–470, 1915.
[Loz85] E. Lozinskii. Evaluating queries in deductive databases by generating. In Proc. 11th Intl. Joint Conf. on Artificial Intelligence, pages 173–177, 1985.
[LP81] H. R. Lewis and C. H. Papadimitriou. Elements of the Theory of Computation. Prentice-Hall, Englewood Cliffs, NJ, 1981.
[LMR92] J. Lobo, J. Minker, and A. Rajasekar. Foundations of Disjunctive Logic Programming.
MIT Press, Cambridge, MA, 1992.
[LRV88] C. Lecluse, P. Richard, and F. Velez. O2, an object-oriented data model. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 424–434, 1988.
[LS87] U. W. Lipeck and G. Saake. Monitoring dynamic integrity constraints based on temporal logic. Information Systems, 12(3):255–269, 1987.
[LST87] J. W. Lloyd, E. A. Sonenberg, and R. W. Topor. Integrity constraint checking in stratified databases. Journal of Logic Programming, 4:331–343, 1987.
[LTK81] T. Ling, F. Tompa, and T. Kameda. An improved third normal form for relational databases. ACM Trans. on Database Systems, 6(2):326–346, 1981.
[LV87] P. Lyngbaek and V. Vianu. Mapping a semantic database model to the relational model. In
Proc. ACM SIGMOD Symp. on the Management of Data, 1987.
[LV89] A. Lefevre and L. Vieille. On deductive query evaluation in the DedGin* system. In Proc. 1st Internat. Conf. on Deductive and Object-Oriented Databases, pages 225–246, 1989.
[LW93a] L. Libkin and L. Wong. Semantic representations and query languages for or-sets. In Proc. ACM Symp. on Principles of Database Systems, pages 37–48, 1993.
[LW93b] L. Libkin and L. Wong. Some properties of query languages for bags. In Proc. of Intl. Workshop on Database Programming Languages, pages 97–114, 1993.
[Mai80] D. Maier. Minimum covers in the relational database model. J. ACM, 27(4):664–674, 1980.
[Mai83] D. Maier. The Theory of Relational Databases. Computer Science Press, Rockville, MD,
1983.
[Mai86] D. Maier. A logic for objects. From a Workshop on Foundations of Deductive Databases and Logic Programming held in Washington, D.C., pages 6–26, 1986.
[Mak77] A. Makinouchi. A consideration of normal form of not-necessarily-normalized relations in the relational data model. In Proc. of Intl. Conf. on Very Large Data Bases, pages 447–453, 1977.
[Mak81] J. A. Makowsky. Characterizing data base dependencies. In 8th Colloquium on Automata,
Languages and Programming. Springer-Verlag, Berlin, 1981.
[Mak85] D. Makinson. How to give it up: A survey of some formal aspects of the logic of theory change. Synthèse, 62:347–363, 1985.
[Mal86] F. M. Malvestuto. Modelling large bases of categorical data with acyclic schemes. In Proc.
of Intl. Conf. on Database Theory, 1986.
[MB92] R. M. MacGregor and D. Brill. Recognition algorithms for the Loom classifier. In Proc. Natl. Conf. on Artificial Intelligence, 1992.
[MBW80] J. Mylopoulos, P. A. Bernstein, and H. K. T. Wong. A language facility for designing database-intensive applications. ACM Trans. on Database Systems, 5:185–207, June 1980.
[MD89] D. McCarthy and U. Dayal. The architecture of an active database management system. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 215–224, 1989.
[ME92] P. Mishra and M. H. Eich. Join processing in relational databases. ACM Computing Surveys, 24:63–113, 1992.
[MFPR90] I. S. Mumick, S. Finkelstein, H. Pirahesh, and R. Ramakrishnan. Magic is relevant. In
Proc. ACM SIGMOD Symp. on the Management of Data, 1990.
[Min88a] J. Minker, editor. Foundations of Deductive Databases and Logic Programming. Morgan
Kaufmann, Inc., Los Altos, CA, 1988.
[Min88b] J. Minker. Perspectives in deductive databases. J. Logic Programming, 5(1):33–60, 1988.
[MIR93] R. Miller, Y. Ioannidis, and R. Ramakrishnan. The use of information capacity in schema integration and translation. In Proc. of Intl. Conf. on Very Large Data Bases, pages 120–133, 1993.
[MIR94] R. Miller, Y. Ioannidis, and R. Ramakrishnan. Schema equivalence in heterogeneous systems: bridging theory and practice. Information Systems, 19:3–31, 1994.
[Mit83a] J. C. Mitchell. The implication problem for functional and inclusion dependencies. Information and Control, 56:154–173, 1983.
[Mit83b] J. C. Mitchell. Inference rules for functional and inclusion dependencies. In Proc. ACM Symp. on Principles of Database Systems, pages 58–69, 1983.
[MM79] A. O. Mendelzon and D. Maier. Generalized mutual dependencies and the decomposition of database relations. In Proc. of Intl. Conf. on Very Large Data Bases, pages 75–82, 1979.
[MMS79] D. Maier, A. O. Mendelzon, and Y. Sagiv. Testing implications of data dependencies. ACM Trans. on Database Systems, 4(4):455–469, 1979.
[MMSU80] D. Maier, A. O. Mendelzon, F. Sadri, and J. D. Ullman. Adequacy of decompositions of relational databases. Journal of Computer and System Sciences, 21(3):368–379, 1980.
[MMW94] A. O. Mendelzon, T. Milo, and E. Waller. Object migration. In Proc. ACM Symp. on
Principles of Database Systems, 1994.
[MNS+87] K. Morris, J. F. Naughton, Y. Saraiya, J. D. Ullman, and A. Van Gelder. YAWN! (yet another window on NAIL!). Data Engineering, 10(4), 1987.
[Moo85] R. C. Moore. Semantical considerations on non-monotonic logic. Artificial Intelligence, 25:75–94, 1985.
[Mor83] M. Morgenstern. Active databases as a paradigm for enhanced computing environments. In Proc. of Intl. Conf. on Very Large Data Bases, pages 34–42, 1983.
[Mor88] K. Morris. An algorithm for ordering subgoals in NAIL! In Proc. ACM Symp. on Principles of Database Systems, pages 82–88, 1988.
[Mos74] Y. N. Moschovakis. Elementary Induction on Abstract Structures. North Holland,
Amsterdam, 1974.
[MR85] H. Mannila and K.-J. Räihä. Small Armstrong relations for database design. In Proc. ACM Symp. on Principles of Database Systems, pages 245–250, 1985.
[MR88] H. Mannila and K.-J. Räihä. Generating Armstrong databases for sets of functional and inclusion dependencies. Technical Report A-1988-7, University of Tampere, Department of Computer Science, Tampere, Finland, 1988.
[MR90] J. Minker and A. Rajasekar. A fixpoint semantics for disjunctive logic programs. J. Logic Programming, 1990.
[MR92] H. Mannila and K.-J. Räihä. The Design of Relational Databases. Addison-Wesley, Wokingham, England, 1992.
[MRW86] D. Maier, D. Rozenshtein, and D. S. Warren. Window functions. In P. C. Kanellakis and F. Preparata, editors, Advances in Computing Research, vol. 3, pages 213–246. JAI Press, Inc., Greenwich, CT, 1986.
[MS81] D. McKay and S. Shapiro. Using active connection graphs for reasoning with recursive rules. In Proc. 7th Intl. Joint Conf. on Artificial Intelligence, pages 368–374, 1981.
[MS92] V. M. Markowitz and A. Shoshani. Representing extended Entity-Relationship structures in relational databases. ACM Trans. on Database Systems, 17:385–422, 1992.
[MSPS87] A. Marchetti-Spaccamela, A. Pelaggi, and D. Saccà. Worst-case complexity analysis of methods for logic query implementation. In Proc. ACM Symp. on Principles of Database Systems, pages 294–301, 1987.
[MSY81] D. Maier, Y. Sagiv, and M. Yannakakis. On the complexity of testing implications of functional and join dependencies. J. ACM, 28(4):680–695, 1981.
[MUG86] K. Morris, J. D. Ullman, and A. Van Gelder. Design overview of the NAIL! system. In 3rd Int. Conf. on Logic Programming, LNCS 225, pages 554–568, Springer-Verlag, Berlin, 1986.
[MUV84] D. Maier, J. D. Ullman, and M. Y. Vardi. On the foundations of the universal relation model. ACM Trans. on Database Systems, 9(2):283–308, 1984.
[MUV86] K. Morris, J. D. Ullman, and A. Van Gelder. Design overview of the NAIL! system. In Proc. Third Intl. Conf. on Logic Programming, pages 554–568, 1986.
[MV86] J. A. Makowsky and M. Y. Vardi. On the expressive power of data dependencies. Acta Informatica, 23:231–244, 1986.
[MW88a] D. Maier and D. S. Warren. Computing with Logic: Logic Programming with Prolog. Benjamin/Cummings Publishing Co., Menlo Park, CA, 1988.
[MW88b] S. Manchanda and D. S. Warren. A logic-based language for database updates. In J. Minker, editor, Foundations of Deductive Databases and Logic Programming, pages 363–394. Morgan Kaufmann, Inc., Los Altos, CA, 1988.
[Nau86] J. F. Naughton. Data independent recursion in deductive databases. In Proc. ACM Symp. on Principles of Database Systems, pages 267–279, 1986.
[NCS91] R. Ng, C. Faloutsos, and T. Sellis. Flexible buffer allocation based on marginal gains. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 387–396, 1991.
[ND82] J.-M. Nicolas and R. Demolombe. On the stability of relational queries. Technical Report, ONERA-CERT, Toulouse, 1982.
[Nej87] W. Nejdl. Recursive strategies for answering recursive queries: The RQA/FQI strategy. In Proc. of Intl. Conf. on Very Large Data Bases, 1987.
[NG78] J.-M. Nicolas and H. Gallaire. Data base: Theory vs. interpretation. In H. Gallaire and J. Minker, editors, Logic and Databases, pages 33–54. Plenum Press, New York, 1978.
[Nic78] J.-M. Nicolas. First order logic formalization for functional, multivalued, and mutual dependencies. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 40–46, 1978.
[Nic82] J.-M. Nicolas. Logic for improving integrity checking in relational databases. Acta Informatica, 18(3):227–253, 1982.
[Nij76] G. M. Nijssen, editor. Modelling in Data Base Management Systems. North Holland,
Amsterdam, 1976.
[NK88] S. Naqvi and R. Krishnamurthy. Database updates in logic programming. In Proc. ACM
Symp. on Principles of Database Systems, 1988.
[NPS91] M. Negri, S. Pelagatti, and L. Sbattella. Formal semantics of SQL queries. ACM Trans. on Database Systems, 16(3):513–535, 1991.
[NRSU89] J. F. Naughton, R. Ramakrishnan, Y. Sagiv, and J. D. Ullman. Argument reduction by
factoring. In Proc. of Intl. Conf. on Very Large Data Bases, 1989. To appear in Theoretical
Computer Science.
[NS87] J. F. Naughton and Y. Sagiv. A decidable class of bounded recursions. In Proc. ACM Symp. on Principles of Database Systems, pages 227–236, 1987.
[NT89] S. Naqvi and S. Tsur. A Logical Language for Data and Knowledge Bases. Computer Science Press, Rockville, MD, 1989.
[Ora89] SQL Language Reference: ORACLE Server for OS/2. Oracle Corp. Redwood Shores, CA,
1989.
[Osb79] S. L. Osborn. Towards a universal relation interface. In Proc. of Intl. Conf. on Very Large Data Bases, pages 52–60, 1979.
[OW93] G. Özsoyoğlu and H. Wang. A survey of QBE languages. Computer, 26, 1993.
[OY87] Z. M. Özsoyoğlu and L.-Y. Yuan. A new normal form for nested relations. ACM Trans. on Database Systems, 12(1):111–136, 1987.
[Pai84] R. Paige. Applications of finite differencing to database integrity control and query/transaction optimization. In H. Gallaire, J. Minker, and J.-M. Nicolas, editors, Advances in Data Base Theory, vol. 2, pages 171–209. Plenum Press, New York, 1984.
[Pap85] C. H. Papadimitriou. A note on the expressive power of prolog. Bulletin of the EATCS, 26:21–23, 1985.
[Pap86] C. H. Papadimitriou. The Theory of Concurrency Control. Computer Science Press,
Rockville, MD, 1986.
[Pap94] C. Papadimitriou. Computational Complexity. Addison-Wesley, Reading, MA, 1994.
[Par78] J. Paredaens. On the expressive power of the relational algebra. Inf. Proc. Letters, 7(2):107–111, 1978.
[Par79] J. Paredaens. Transitive dependencies in a database scheme. Technical Report R387, MBLE,
Brussels, 1979.
[PBGG89] J. Paredaens, P. De Bra, M. Gyssens, and D. Van Gucht. The Structure of the Relational
Database Model. EATCS Monographs on Theoretical Computer Science No. 17. Springer-
Verlag, Berlin, 1989.
[Pea88] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, Inc., Los Altos,
CA, 1988.
[Per91] D. Perrin. Finite automata. In J. Van Leeuwen, editor, Handbook of Theoretical Computer Science, pages 1–58. Elsevier, Amsterdam, 1991.
[Pet89] S. V. Petrov. Finite axiomatization of languages for representation of system properties. Information Sciences, 47:339–372, 1989.
[PG88] J. Paredaens and D. Van Gucht. Possibilities and limitations of using flat operators in nested algebra expressions. In Proc. ACM Symp. on Principles of Database Systems, pages 29–38, 1988.
[PI94] S. Patnaik and N. Immerman. Dyn-FO: A parallel, dynamic complexity class. In Proc. ACM
Symp. on Principles of Database Systems, 1994.
[PJ81] J. Paredaens and D. Janssens. Decompositions of relations: a comprehensive approach. In H. Gallaire, J. Minker, and J.-M. Nicolas, editors, Advances in Data Base Theory, vol. 1, pages 73–100. Plenum Press, New York, 1981.
[PM88] J. Peckham and F. Maryanski. Semantic data models. ACM Computing Surveys, 20:153–190, 1988.
[Por86] H. H. Porter. Earley deduction. Technical Report TR CS/E-86-002, Oregon Graduate
Center, Beaverton, OR, 1986.
[Pos47] E. L. Post. Recursive unsolvability of a problem of Thue. J. of Symbolic Logic, 12:1–11, 1947.
[PPG80] D. S. Parker and K. Parsaye-Ghomi. Inference involving embedded multivalued dependencies and transitive dependencies. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 52–57, 1980.
[Prz86] T. Przymusinski. On the semantics of stratified deductive databases. In Proc. Workshop on the Foundations of Deductive Databases and Logic Programming, pages 433–443, 1986.
[Prz88] T. Przymusinski. Perfect model semantics. In Intl. Conf. on Logic Programming, pages 1081–1096, 1988.
[Prz89] T. Przymusinski. Every logic program has a natural stratification and an iterated least fixpoint model. In Proc. ACM Symp. on Principles of Database Systems, pages 11–21, 1989.
[Prz90] T. Przymusinski. Well-founded semantics coincides with three-valued stable semantics. Fundamenta Informaticae, XIII:445–463, 1990.
[PSV92] D. S. Parker, E. Simon, and P. Valduriez. SVP – A model capturing sets, streams, and parallelism. In Proc. of Intl. Conf. on Very Large Data Bases, pages 115–126, 1992.
[PV88] J. Pearl and T. Verma. The logic of representing dependencies by directed graphs. In Proceedings, AAAI Conference, Seattle, WA, July 1987, pages 374–379, 1988.
[PW80] F. C. N. Pereira and D. H. D. Warren. Definite clause grammars for language analysis – A survey of the formalism and a comparison with augmented transition networks. Artificial Intelligence, 13:231–278, 1980.
[PY92] C. H. Papadimitriou and M. Yannakakis. Tie-breaking semantics and structural totality. In Proc. ACM Symp. on Principles of Database Systems, pages 16–22, 1992.
[QW91] X. Qian and G. Wiederhold. Incremental recomputation of active relational expressions. IEEE Trans. on Knowledge and Data Engineering, 3:337–341, 1991.
[Rad64] R. Rado. Universal graphs and universal functions. Acta Arith., 9:331–340, 1964.
[Ram91] R. Ramakrishnan. Magic templates: A spellbinding approach to logic programs. J. Logic Programming, 11:189–216, 1991. See also Proc. Joint Symp. and Intl. Conf. on Logic Programming, 1988.
[RBS87] R. Ramakrishnan, F. Bancilhon, and A. Silberschatz. Safety of recursive horn clauses with infinite relations (extended abstract). In Proc. ACM Symp. on Principles of Database Systems, pages 328–339, 1987.
[RD75] J. Rissanen and C. Delobel. Decomposition of files, a basis for data storage and retrieval. Technical Report RJ1220, IBM Res. Lab, San Jose, CA, 1975.
[Rei78] R. Reiter. On closed world databases. In H. Gallaire and J. Minker, editors, Logic and Databases, pages 56–76. Plenum Press, New York, 1978.
[Rei80] R. Reiter. A logic for default reasoning. Artificial Intelligence, 13(1):80–132, 1980.
[Rei84] R. Reiter. Towards a logical reconstruction of relational database theory. In M. L. Brodie, J. L. Mylopoulos, and J. W. Schmidt, editors, On Conceptual Modeling, pages 191–238. Springer-Verlag, Berlin, 1984.
[Rei86] R. Reiter. A sound and sometimes complete query evaluation algorithm for relational databases with null values. J. ACM, 33(2):349–370, 1986.
[Ris77] J. Rissanen. Independent components of relations. ACM Trans. on Database Systems, 2(4):317–325, 1977.
[Ris78] J. Rissanen. Theory of relations for databases – A tutorial survey. In Proc. 7th Symp. on Mathematical Foundations of Computer Science, Zakopane, pages 536–551. Springer-Verlag, Berlin, LNCS 64, 1978.
[Ris82] J. Rissanen. On equivalence of database schemes. In Proc. ACM Symp. on Principles of Database Systems, pages 23–26, 1982.
[RKS88] M. A. Roth, H. F. Korth, and A. Silberschatz. Extended algebra and calculus for nested relational databases. ACM Trans. on Database Systems, 13(4):389–417, 1988.
[RLK86] J. Rohmer, R. Lescoeur, and J. M. Kerisit. The Alexander method – A technique for the processing of recursive axioms in deductive databases. New Generation Computing, 4(3):273–286, 1986.
[Rob65] J. A. Robinson. A machine oriented logic based on the resolution principle. J. ACM, 12(1):23–41, 1965.
[Roe87] D. Roelants. Recursive rules in logic databases. Technical Report R513, Philips Research
Laboratories, Bruxelles, 1987.
[Ros89] K. Ross. A procedural semantics for the well-founded negation in logic programs. In Proc. ACM Symp. on Principles of Database Systems, pages 22–33, 1989.
[Ros91] K. A. Ross. The Semantics of Deductive Databases. Ph.D. thesis, Stanford University,
1991.
[Rou91] B. Rounds. Situation-theoretic aspects of databases. In Proc. of Conf. on Situation Theory and Applications, CSLI vol. 26, pages 229–256, 1991.
[RS79] L. Rowe and K. A. Schoens. Data abstractions, views and updates in RIGEL. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 71–81, 1979.
[RS91] J. Richardson and P. Schwartz. Aspects: Extending objects to support multiple independent roles. In Intl. Conf. on Principles of Knowledge Representation and Reasoning, pages 298–307, 1991.
[RSB+87] K. Ramamohanarao, J. Shepherd, I. Balbin, G. Port, L. Naish, J. Thom, J. Zobel, and P. Dart. The NU-Prolog deductive database system. Data Engineering, 10(4):10–19, 1987.
[RSS92] R. Ramakrishnan, D. Srivastava, and S. Sudarshan. CORAL: control, relations and logic.
In Proc. of Intl. Conf. on Very Large Data Bases, 1992.
[RSSS93] R. Ramakrishnan, D. Srivastava, S. Sudarshan, and P. Seshadri. Implementation of the CORAL deductive database system. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 167–176, 1993.
[RSUV89] R. Ramakrishnan, Y. Sagiv, J. D. Ullman, and M. Y. Vardi. Proof-tree transformations and their applications. In Proc. ACM Symp. on Principles of Database Systems, pages 172–182, 1989.
[RSUV93] R. Ramakrishnan, Y. Sagiv, J. D. Ullman, and M. Y. Vardi. Logical query optimization by proof-tree transformation. J. Computer and System Sciences, 47:222–248, 1993.
[RU94] R. Ramakrishnan and J. D. Ullman. A survey of research on deductive database systems. In
J. of Logic Programming, to appear.
[SAC+79] P. Selinger, M. M. Astrahan, D. D. Chamberlin, R. A. Lorie, and T. G. Price. Access path selection in a relational database management system. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 23–34, 1979.
[Sag81] Y. Sagiv. Can we use the universal assumption without using nulls? In Proc. ACM SIGMOD Symp. on the Management of Data, pages 108–120, 1981.
[Sag83] Y. Sagiv. A characterization of globally consistent databases and their correct access paths. ACM Trans. on Database Systems, 8(2):266–286, 1983.
[Sag88] Y. Sagiv. Optimizing datalog programs. In J. Minker, editor, Foundations of Deductive Databases and Logic Programming, pages 659–698. Morgan Kaufmann, Inc., Los Altos, CA, 1988.
[Sag90] Y. Sagiv. Is there anything better than magic? In Proc. North American Conf. on Logic Programming, pages 235–254, 1990.
[Sci81] E. Sciore. Real-world MVDs. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 121–132, 1981.
[Sci82] E. Sciore. A complete axiomatization of full join dependencies. J. ACM, 29:373–393, 1982.
[Sci83] E. Sciore. Inclusion dependencies and the universal instance. In Proc. ACM Symp. on Principles of Database Systems, pages 48–57, 1983.
[Sci86] E. Sciore. Comparing the universal instance and relational data models. In P. C. Kanellakis and F. Preparata, editors, Advances in Computing Research, vol. 3: The Theory of Databases, pages 139–163. JAI Press, Inc., Greenwich, CT, 1986.
[SDPF81] Y. Sagiv, C. Delobel, D. S. Parker, Jr., and R. Fagin. An equivalence between relational database dependencies and a fragment of propositional logic. J. ACM, 28:435–453, 1981.
[Sek89] H. Seki. On the power of Alexander templates. In Proc. ACM Symp. on Principles of Database Systems, pages 150–159, 1989.
[SF78] K. C. Sevcik and A. L. Furtado. Complete and compatible sets of update operations. In Intl.
Conf. on Management of Data (ICMOD), Milan, Italy, 1978.
[SG85] D. E. Smith and M. R. Genesereth. Ordering conjunctive queries. Artificial Intelligence, 26:171–215, 1985.
[She88] J. Shepherdson. Negation in logic programming. In J. Minker, editor, Foundations of Deductive Databases and Logic Programming, pages 19–88. Morgan Kaufmann, Inc., Los Altos, CA, 1988.
[Shi81] D. Shipman. The functional data model and the data language DAPLEX. ACM Trans. on Database Systems, 6:140–173, 1981.
[Shm87] O. Shmueli. Decidability and expressiveness aspects of logic queries. In Proc. ACM Symp. on Principles of Database Systems, pages 237–249, 1987.
[SI88] H. Seki and H. Itoh. A query evaluation method for stratified programs under the extended CWA. In Proc. Fifth Intl. Symp. on Logic Programming, pages 195–211, 1988.
[SI84] O. Shmueli and A. Itai. Maintenance of views. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 240–255, 1984.
[Sic76] S. Sickel. A search technique for clause interconnectivity graphs. IEEE Trans. on Computers, C-25:72–80, 1976.
[Sie88] J. H. Siekmann. Unification theory. J. Symbolic Computation, 7:207–274, 1988.
[SJGP90] M. Stonebraker, A. Jhingran, J. Goh, and S. Potamianos. On rules, procedures, caching and views in data base systems. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 281–290, 1990.
[SKdM92] E. Simon, J. Kiernan, and C. de Maindreville. Implementing high level active rules on top of a relational DBMS. In Proc. of Intl. Conf. on Very Large Data Bases, pages 315–326, 1992.
[SL90] A. P. Sheth and J. A. Larson. Federated database systems for managing distributed, heterogeneous, and autonomous databases. ACM Computing Surveys, 22:184–236, 1990.
[SL91] J. Seib and G. Lausen. Parallelizing datalog programs by generalized pivoting. In Proc. ACM Symp. on Principles of Database Systems, pages 78–87, 1991.
[SLRD93] W. Sun, Y. Ling, N. Rishe, and Y. Deng. An instant and accurate size estimation method for joins and selection in a retrieval-intensive environment. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 79–88, 1993.
[SM81] A. M. Silva and M. A. Melkanoff. A method for helping discover the dependencies of a relation. In H. Gallaire, J. Minker, and J.-M. Nicolas, editors, Advances in Data Base Theory, pages 115–133. Plenum Press, New York, 1981.
[Sno90] R. Snodgrass. Temporal databases: status and research directions. ACM SIGMOD Record, 19(4):83–89, December 1990.
[Soo91] M. Soo. Bibliography on temporal databases. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 14–23, 1991.
[SP94] D. Suciu and J. Paredaens. Any algorithm in the complex object algebra with powerset needs exponential space to compute transitive closure. In Proc. ACM Symp. on Principles of Database Systems, pages 171–179, 1994.
[SR86] M. Stonebraker and L. Rowe. The design of Postgres. In Proc. ACM SIGMOD Symp. on the Management of Data, pages 340–355, 1986.
[SR93] S. Sudarshan and R. Ramakrishnan. Optimizations of bottom-up evaluation with non-ground terms. In Proc. of Intl. Logic Programming Symp., 1993.
[SS86] Y. Sagiv and O. Shmueli. The equivalence of solving queries and producing tree projections. In Proc. ACM Symp. on Principles of Database Systems, pages 160–172, 1986.
[Sto81] M. Stonebraker. Operating system support for database management. Comm. of the ACM, 24:412–418, 1981.
[Sto88] M. Stonebraker, editor. Readings in Database Systems. Morgan Kaufmann, Inc., Los Altos, CA, 1988.
[Sto92] M. Stonebraker. The integration of rule systems and database systems. IEEE Transactions on Knowledge and Data Engineering, 4:415–423, 1992.
[Str91] B. Stroustrup. The C++ Programming Language, 2d ed. Addison-Wesley, Reading, MA, 1991.
[SU82] U. F. Sadri and J. D. Ullman. Template dependencies: A large class of dependencies in relational databases and their complete axiomatization. J. ACM, 29(2):363–372, 1982.
[Su92] J. Su. Dynamic constraints and object migration. Technical Report TRCS-9202, Computer
Science Department, University of California, Santa Barbara, 1992. To appear, Theoretical
Computer Science; see also Proc. of Intl. Conf. on Very Large Data Bases, 1991.
[SV89] Y. Sagiv and M. Y. Vardi. Safety of datalog queries over infinite databases. In Proc. ACM Symp. on Principles of Database Systems, pages 160–172, 1989.
[SW82] Y. Sagiv and S. Walecka. Subset dependencies and a completeness result for a subclass of embedded multivalued dependencies. J. ACM, 29(1):103–117, 1982.
[SWKH76] M. Stonebraker, E. Wong, P. Kreps, and G. Held. The design and implementation of Ingres. ACM Trans. on Database Systems, 1(3):189–222, 1976.
[SY80] Y. Sagiv and M. Yannakakis. Equivalence among expressions with the union and difference operators. J. ACM, 27(4):633–655, 1980.
[SZ86] D. Saccà and C. Zaniolo. On the implementation of a simple class of logic queries for databases. In Proc. ACM Symp. on Principles of Database Systems, pages 16–23, 1986.
[SZ88] D. Saccà and C. Zaniolo. The generalized counting method for recursive logic queries. Theoretical Computer Science, 62:187–220, 1988.
[SZ89] L. A. Stein and S. B. Zdonik. Clovers: The dynamic behavior of type and instances.
Technical Report CS-89-42, Computer Science Department, Brown University, 1989.
[SZ90] D. Saccà and C. Zaniolo. Stable models and non-determinism in logic programs with
negation. In Proc. ACM Symp. on Principles of Database Systems, pages 205–217, 1990.
[Tan88] L. Tanca. Optimization of Recursive Logic Queries to Relational Databases. Ph.D. thesis,
Politecnico di Milano and Università di Napoli, 1988.
[Tar55] A. Tarski. A lattice theoretical fixpoint theorem and its applications. Pacific J. Math.,
5(2):285–309, 1955.
[TCG+93] A. U. Tansel, J. Clifford, S. Gadia, S. Jajodia, A. Segev, and R. Snodgrass. Temporal
Databases: Theory, Design, and Implementation. Benjamin/Cummings Publishing Co., Menlo
Park, CA, 1993.
[TF82] D.-M. Tsou and P. C. Fischer. Decomposition of a relation scheme into Boyce-Codd normal
form. SIGACT News, 14(3):23–29, 1982.
[TF86] S. J. Thomas and P. C. Fischer. Nested relational structures. In P. C. Kanellakis and
F. Preparata, editors, Advances in Computing Research, vol. 3, pages 269–307. JAI Press, Inc.,
Greenwich, CT, 1986.
[Tha91] B. Thalheim. Dependencies in Relational Databases. Teubner Verlagsgesellschaft, Stuttgart
and Leipzig, 1991.
[TK84] V. A. Talanov and V. V. Knyazev. The asymptotic truth value of infinite formulas. In
All-union seminar on discrete mathematics and its applications, pages 56–61, 1984.
[TL82] D. C. Tsichritzis and F. H. Lochovsky. Data Models. Prentice-Hall, Englewood Cliffs, NJ,
1982.
[Tod77] S. Todd. Automatic constraint maintenance and updating defined relations. In B. Gilchrist,
editor, Proc. IFIP 77, pages 145–148. North Holland, Amsterdam, 1977.
[Top87] R. Topor. Domain independent formulas and databases. Theoretical Computer Science,
52(3):281–307, 1987.
[Top91] R. Topor. Safe database queries with arithmetic relations. Technical Report, Computer
Science Department, University of Melbourne, 1991. Abstract appears as Safe Database
Queries with Arithmetic Relations, Proc. 14th Australian Computer Science Conf., Sydney,
1991, pp. 1–13.
[TS88] R. W. Topor and E. A. Sonenberg. On domain independent databases. In J. Minker, editor,
Foundations of Deductive Databases and Logic Programming, pages 217–240. Morgan
Kaufmann, Inc., Los Altos, CA, 1988.
[TT52] A. Tarski and F. B. Thompson. Some general properties of cylindric algebras. Bulletin of
the AMS, 58:65, 1952.
[TY84] R. E. Tarjan and M. Yannakakis. Simple linear-time algorithms to test chordality of
graphs, test acyclicity of hypergraphs, and selectively reduce acyclic hypergraphs. SIAM J. on
Computing, 13(3):566–579, 1984.
[TYF86] T. J. Teorey, D. Yang, and J. P. Fry. A logical design methodology for relational databases
using the extended entity-relationship model. ACM Computing Surveys, pages 197–222,
1986.
[UG88] J. D. Ullman and A. Van Gelder. Parallel complexity of logical query programs.
Algorithmica, 3(1):5–42, 1988.
[Ull82a] J. D. Ullman. The U.R. strikes back. In Proc. ACM Symp. on Principles of Database
Systems, pages 10–22, 1982.
[Ull82b] J. D. Ullman. Principles of Database Systems, 2d ed. Computer Science Press, Rockville,
MD, 1982.
[Ull85] J. D. Ullman. Implementation of logical query languages for databases. ACM Trans. on
Database Systems, 10(3):289–321, 1985.
[Ull88] J. D. Ullman. Principles of Database and Knowledge Base Systems, vol. I. Computer
Science Press, Rockville, MD, 1988.
[Ull89a] J. D. Ullman. Bottom-up beats top-down for datalog. In Proc. ACM Symp. on Principles of
Database Systems, pages 140–149, 1989.
[Ull89b] J. D. Ullman. Principles of Database and Knowledge Base Systems, vol. II: The New
Technologies. Computer Science Press, Rockville, MD, 1989.
[Van86] A. Van Gelder. A message passing framework for logical query evaluation. In Proc. ACM
SIGMOD Symp. on the Management of Data, pages 155–165, 1986.
[VandB93] J. Van den Bussche. Formal Aspects of Object Identity. Ph.D. thesis, University of
Antwerp, 1993.
[VandBG92] J. Van den Bussche and D. Van Gucht. Semi-determinism. In Proc. ACM Symp. on
Principles of Database Systems, pages 191–201, 1992. (Full version to appear in Journal of
Computer and System Sciences.)
[VandBGAG92] J. Van den Bussche, D. Van Gucht, M. Andries, and M. Gyssens. On the
completeness of object-creating query languages. In IEEE Conf. on Foundations of Computer
Science, pages 372–379, 1992.
[VandBP95] J. Van den Bussche and J. Paredaens. The expressive power of complex values in
object-based data models. Information and Computation, 120:220–236, August 1995.
[VanG86] A. Van Gelder. Negation as failure using tight derivations for general logic programs. In
IEEE Symp. on Logic Programming, pages 127–139, 1986.
[VanG89] A. Van Gelder. The alternating fixpoint of logic programs with negation. In Proc. ACM
Symp. on Principles of Database Systems, pages 1–11, 1989.
[VanGRS88] A. Van Gelder, K. A. Ross, and J. S. Schlipf. The well-founded semantics for general
logic programs. In Proc. ACM Symp. on Principles of Database Systems, pages 221–230, 1988.
[VanGRS91] A. Van Gelder, K. A. Ross, and J. S. Schlipf. The well-founded semantics for general
logic programs. J. ACM, 38:620–650, 1991.
[VanGT91] A. Van Gelder and R. Topor. Safety and translation of relational calculus queries. ACM
Trans. on Database Systems, 16:235–278, 1991.
[Var81] M. Y. Vardi. The decision problem for database dependencies. Inf. Proc. Letters,
12(5):251–254, 1981.
[Var82a] M. Y. Vardi. The complexity of relational query languages. In Proc. ACM SIGACT Symp.
on the Theory of Computing, pages 137–146, 1982.
[Var82b] M. Y. Vardi. On decomposition of relational databases. In IEEE Conf. on Foundations of
Computer Science, pages 176–185, 1982.
[Var83] M. Y. Vardi. Inferring multivalued dependencies from functional and join dependencies.
Acta Informatica, 19:305–324, 1983.
[Var84] M. Y. Vardi. The implication and finite implication problems for typed template
dependencies. Journal of Computer and System Sciences, 28:3–28, 1984.
[Var85] M. Y. Vardi. Querying logical databases. In Proc. ACM Symp. on Principles of Database
Systems, pages 57–65, 1985.
[Var86a] M. Y. Vardi. On the integrity of databases with incomplete information. In Proc. ACM
Symp. on Principles of Database Systems, pages 252–266, 1986.
[Var86b] M. Y. Vardi. Querying logical databases. Journal of Computer and System Sciences,
33:142–160, 1986.
[Var87] M. Y. Vardi. Fundamentals of dependency theory. In E. Börger, editor, Trends in Theoretical
Computer Science, pages 171–224. Computer Science Press, Rockville, MD, 1987.
[Var88] M. Y. Vardi. Decidability and undecidability results for boundedness of linear recursive
queries. In Proc. ACM Symp. on Principles of Database Systems, pages 341–351, 1988.
[Vas79] Y. Vassiliou. Null values in database management: a denotational semantics approach. In
Proc. ACM SIGMOD Symp. on the Management of Data, pages 162–169, 1979.
[Vas80] Y. Vassiliou. A Formal Treatment of Imperfect Information in Data Management. Ph.D.
thesis, University of Toronto, 1980.
[VBKL89] L. Vieille, P. Bayer, V. Kuchenoff, and A. Lefebvre. EKS-V1: A short overview. In Proc.
ACM SIGMOD Symp. on the Management of Data, 1989. Technical exhibition.
[vEK76] M. H. van Emden and R. A. Kowalski. The semantics of predicate logic as a programming
language. J. ACM, 23(4):733–742, 1976.
[Ver89] J. Verso. Verso: a database machine based on non-1NF relations. In H. Schek, S. Abiteboul,
P. Fischer, editors, Nested Relations and Complex Objects, LNCS 361. Springer-Verlag,
Berlin, 1989.
[Via87] V. Vianu. Dynamic functional dependencies and database aging. J. ACM, 34(1):28–59,
1987.
[Via88] V. Vianu. Database survivability under dynamic constraints. Acta Informatica, 25:55–84,
1988.
[Vie86] L. Vieille. Recursive axioms in deductive databases: The Query/Subquery approach. In
L. Kerschberg, editor, Proc. First Intl. Conf. on Expert Database Systems, pages 179–193,
1986.
[Vie87a] L. Vieille. A database-complete proof procedure based on SLD-resolution. In Proc. of the
Fourth Intl. Conf. on Logic Programming, pages 74–103, 1987.
[Vie87b] L. Vieille. Recursion in deductive databases: DedGin, a recursive query evaluator. In Des
Bases de Données aux Bases de Connaissances, Sophia-Antipolis, France, 1987. Also available
as Technical Report TR-KB-14, ECRC, Munich.
[Vie88] L. Vieille. From QSQ towards QoSaQ: Global optimization of recursive queries. In
L. Kerschberg, editor, Proc. Second Intl. Conf. on Expert Database Systems, pages 421–436,
1988.
[Vie89] L. Vieille. Recursive query processing: The power of logic. Theoretical Computer Science,
69:1–53, 1989.
[Vos91] G. Vossen. Data Models, Database Languages and Database Management Systems.
Addison-Wesley, Wokingham, England, 1991.
[VV92] V. Vianu and G. Vossen. Conceptual-level concurrency control for relational update
transactions. Theoretical Computer Science, 95:1–42, 1992.
[WF90] J. Widom and S. J. Finkelstein. Set-oriented production rules in relational database systems.
In Proc. ACM SIGMOD Symp. on the Management of Data, pages 259–264, 1990.
[WH92] Y.-W. Wang and E. N. Hanson. A performance comparison of the Rete and TREAT
algorithms for testing database rule conditions. In IEEE Conf. on Data Engineering,
pages 88–97, 1992.
[WHW90] S. Widjojo, R. Hull, and D. S. Wile. A specificational approach to merging persistent
object bases. In A. Dearle, G. Shaw, and S. Zdonik, editors, Implementing Persistent Object
Bases: Proc. of Fourth Intl. Workshop on Persistent Object Systems, pages 267–278. Morgan
Kaufmann, Inc., Los Altos, CA, 1990.
[Wie92] G. Wiederhold. Mediators in the architecture of future information systems. IEEE
Computer, 25(3):38–49, March 1992.
[Win86] M. Winslett. A model-theoretic approach to updating logical databases. In Proc. ACM
Symp. on Principles of Database Systems, pages 224–234, 1986.
[Win88] M. Winslett. A framework for comparison of update semantics. In Proc. ACM Symp. on
Principles of Database Systems, pages 315–324, 1988.
[WO90] O. Wolfson and A. Ozeri. A new paradigm for parallel and distributed rule-processing. In
Proc. ACM SIGMOD Symp. on the Management of Data, pages 133–142, 1990.
[Won93] L. Wong. Normal forms and conservative properties for query languages over collection
types. In Proc. ACM Symp. on Principles of Database Systems, pages 26–36, 1993.
[WS88] O. Wolfson and A. Silberschatz. Distributed processing of logic programming. In Proc.
ACM SIGMOD Symp. on the Management of Data, pages 329–336, 1988.
[WW75] C. P. Wang and H. H. Wedekind. Segment synthesis in logical data base design. IBM J.
Res. and Develop., 19:71–77, 1975.
[WY76] E. Wong and K. Youssefi. Decomposition: a strategy for query processing. ACM Trans.
on Database Systems, 1(3):223–241, 1976.
[Yan81] M. Yannakakis. Algorithms for acyclic database schemes. In Proc. of Intl. Conf. on Very
Large Data Bases, pages 82–94, 1981.
[YC84] C. T. Yu and C. C. Chang. Distributed query processing. ACM Computing Surveys, 16,
1984.
[YO79] C. T. Yu and M. Z. Özsoyoğlu. An algorithm for tree-query membership of a distributed
query. In Proc. IEEE COMPSAC, pages 306–312, 1979.
[YP82] M. Yannakakis and C. Papadimitriou. Algebraic dependencies. Journal of Computer and
System Sciences, 25(2):3–41, 1982.
[Zan76] C. Zaniolo. Analysis and Design of Relational Schemata for Database Systems. Ph.D.
thesis, University of California at Los Angeles, 1976. Technical Report UCLA-Eng-7669,
Department of Computer Science.
[Zan82] C. Zaniolo. A new normal form for the design of relational database schemata. ACM Trans.
on Database Systems, 7:489–499, 1982.
[Zan84] C. Zaniolo. Database relations with null values. Journal of Computer and System Sciences,
28(1):142–166, 1984.
[Zan87] C. Zaniolo, editor. IEEE Data Engineering 10(4), 1987. Special issue on databases and
logic.
[ZH90] Y. Zhou and M. Hsu. A theory for rule triggering systems. In Intl. Conf. on Extending Data
Base Technology, pages 407–421, 1990.
[Zlo77] M. Zloof. Query-by-example: A data base language. IBM Systems Journal, 16:324–343,
1977.
[ZM90] S. B. Zdonik and D. Maier, editors. Readings in Object-Oriented Database Systems.
Morgan Kaufmann, Inc., Los Altos, CA, 1990.
Index
Page numbers in italics indicate the location of definitions of terms.
ac0, 96, 431
Access, 36, 143, 150, 152–153, 155
access plan, 107
active database, 8, 600–606
action, 601
composite event, 606
condition, 601
coupling mode, 603
ECA, 601
event, 601
execution model, 601, 603–606
accumulating model, 604–606
concurrent firing, 603
deferred firing, 603, 604
immediate firing, 603–604
rule, 601
rule base, 601
vs. expert system, 600
active domain, 41, 46, 249
interpretation, 79
preservation, 249
active domain semantics
of relational calculus, 74, 79
vs. domain independence, 79
acyclic
vs. dependencies, 137
distributed databases, 136
hypergraph, 36, 132
inclusion dependencies, 208–210, 211
join, 105, 126, 128–135, 136
join dependency, 169, 182–183, 186
and mvds, 182
adom, 41, 46, 77, 249
adorned rule, 318, 321
adornment, 317, 318
A-egd, 218
aggregate function, 91–93
aggregate operator, 97, 153, 154
in query language, 155
AGM postulates, 599
agreement set, 188
Alexander method, 336
ALGcv, 514
ALGcv−, 519
algebra
complex value, 514, 519
conjunctive, 52–61
cylindric, 96, 103
named conjunctive, 56–59, 57
nested relation, 519
relational, 28, 35, 36, 64, 70, 71, 81
named, 71
unnamed, 71
translation into calculus, 80
SPC, 52–56, 54
SPCU, 62, 222
SPJR, 56–59, 57
SPJRU, 62
SPJU, 492
typed restricted SPJ, 64, 67
unnamed conjunctive, 52–56, 52
unsorted, 103
algebraic dependency, 228–233
axiomatization, 231
ALGRES, 337
allowed calculus query, 97, 101–102
alternating fixpoint, 390, 413
ancestor program, 63
nonlinear version, 314
anomaly
deletion, 162, 254
insertion, 162
modification, 162
update, 162
anonymous variable, 39, 44
ans, 40
ans_Rγ, 321
anti-symmetric, 11
any, 548
AP5, 605, 615
APEX, 335
arithmetic in query language, 153, 154
arity(), 31
of instance, 32
of relation name, 31
of tuple, 32
Armstrong relation, 168–169, 186, 232
for typed dependencies, 233
Armstrong's axioms for fds, 186
articulation set, 132
artificial intelligence (AI), 97
atom, 22, 33
constraint, 112
equality, 217
ground, 34
relation, 112, 217
att, 30
attribute, 29
in relational model, 30
in semantic data model, 243
attribute renaming, 58
autoepistemic logic, 408
automorphism, 12, 420, 426–428, 461
average in SQL, 91, 154
awk, 155
axiom, 24
vs. inference rule, 167
axiomatizable, 167
axiomatization, 167, 226
abstract formulation, 203
for algebraic dependencies, 231, 235
complete, 167
for fds, 166, 168, 186
for fds and mvds, 172–173, 186
finite, 202
for full typed dependencies, 227–228
Gentzen-style for jds, 186
IDM transaction for, 581
for inds, 193–195, 211
k-ary, 202, 204
proof using, 167
provable using, 167
sound, 167
for typed embedded dependencies, 226, 235
for uinds, 210, 215
for uinds and fds, 210
vs. fds and inds, 192, 202–207, 211
vs. fds and sort set dependencies, 213
vs. finite implication, 226
vs. jds, 169, 171, 186
B(P, I), 280
B(P_I), 387
B-tree, 107
bag, 92, 136
in SQL, 145, 155
BCNF, 250, 251–252
algorithm, 255
belief revision, 588, 599
Berge-acyclic, 131, 137
Bernays-Schönfinkel class, 219
Binary Data Model, 264
binary relation, 10
body of rule, 39, 41, 276
bottom-up datalog evaluation, 324–335
vs. top-down, 311, 327, 336
bound coordinate in datalog evaluation, 318
bound variable occurrence, 23, 45, 75
boundedness, 285, 304
Boyce-Codd normal form (BCNF), 250, 251–252
BP-completeness, 428, 560
buffering of main memory, 106, 107
C-genericity, 419–420
C+SQL, 466
c-table, 493
and dependencies, 501
update, 593–594
CALCadom, 79, 80, 100
CALCdi, 79, 80
CALCsr, 81, 86, 100
CALC+μ, 348–352, 349
normal form, 368
simultaneous induction, 351
CALC+μ+, 352–354, 353
normal form, 368
CALC+μ(+)+W, 456
CALCadom, 79
CALCdi, 79
CALCsr, 85
CALCcv, 519
CALCcv−, 528
calculus
complex value, 519, 523
conjunctive, 44–47, 45
domain, 39, 74
for OODBs, 557
positive existential, 68, 91
relational, 28, 35, 36, 39, 64, 70, 73–91
tuple, 39, 74, 101
calculus formula, 74–75
parse tree, 83
Cartesian product, 52
chain program, 303
chase(T, t, Σ), 176
chase, 43, 159, 163, 173–185, 186, 220, 263, 497
Church-Rosser, 183–185
complexity, 176, 190
fd rule, 175
generalized to embedded dependencies, 223–225
generalized to full dependencies, 220
incomplete database, 498
ind rule, 208
and inds, 208
jd rule, 175
and logical implication, 180–182, 186
query optimization, 163, 177–180
and tableau minimization, 177–180
of tableau queries, 173, 186
tgd-rule, 223
uniquely determined, 176
vs. datalog, 186
vs. resolution and paramodulation, 186
chase homomorphism, 184
chasing sequence, 175
infinite, 208, 223, 225
terminal, 175
vs. dependency satisfaction, 175
choice operator, 458
Church-Rosser property, 175, 176
chase, 183–185
CINEMA example, 31
circumscription vs. fixpoint operators, 354
Clark's completion. See datalog¬, negation as failure.
class, 543, 545, 547
in semantic data model, 243
class extension, 556
class hierarchy, 549
well formed, 549
classification, 572, 575
clause, 288
Closed World Assumption (CWA), 27, 283, 489,
497, 599
clustering, 107
CNF, 83
co-r.e., 16
Codd, 64
Codd-table
query, 488
update, 593–594
COL, 538
compactness theorem, 25
complement of views, 591–593
complement operator, 103, 104
complete axiomatization, 167
complete lattice, 286
completeness, 18
object-oriented language, 560–561, 560, 574
of a query language, 466
relational, 96, 147, 150, 151
update language, 583
of whileN, 468
of whilenew, 470–473
of whileuty, 478
completion in Query-Subquery (QSQ), 318
completion of program, 407
complex constant, 517
complex value, 508–541, 542, 543, 545
algebra, 514, 519
calculus, 519, 523
datalog, 532, 533
elementary query, 534
Equivalence Theorem, 526–531
fixpoint, 531–532
instance, 512
relation, 512
safe-range, 528
schema, 512
semantic data model, 243
sort, 511
strongly-safe-range, 530
term, 519
complex value model, 97, 548
complexity, 13–20
data vs. expression, 122
of query languages, 136
composition of tableaux, 226–227
composition of queries, 37, 48–52
conjunctive queries, 64
conjunctive queries with union, 64
conjunctive query program, 49
functional paradigm, 50
imperative paradigm, 50
relational algebra queries, 71
and user views, 51–52
computability, 13–20
condensation, 136
condition box in QBE, 150
conditional table. See c-table.
conjunction, 44
flatten, 83
and negation, 74
polyadic, 46
conjunctive calculus, 64
with disjunction, 91
with equality, 48
equivalence of formulas, 46
normal form, 46–47
rewrite rule, 46
semantics, 45
with union, 81
conjunctive normal form (CNF), 21, 83
conjunctive query, 36, 37–64
algebraic, 52–61
with arithmetic, 105
calculus, 44–47, 45, 64
normal form, 46–47
composition, 48–52, 50
containment, 105
complexity, 121–122
and decidability, 36, 37, 117, 118
with disjunction, 61–64
equality, 47–48, 50
equivalence, 47, 82, 105
Equivalence Theorem, 60
evaluation, 56
Homomorphism Theorem, 105, 115–118, 117, 127, 136
logic-based perspectives, 40–48
and Microsoft Access, 152
monotonic, 42
named algebra, 56–59, 57
optimization, 36, 56, 105
in practical systems, 105–115
using chase, 163
using dependencies, 163
program, 49
range restricted, with equality, 41, 48, 65
rule-based, 39, 40–42, 41
satisfiable, 42
and SQL, 143–146
static analysis, 105, 115–122
tableau, 43–44, 43
with union, 36, 37, 38, 61–64
unnamed algebra, 52–56, 52
vs. expert systems, 135
yes-no, 42
connectivity query, not first-order, 436, 460
conseqP, 389
consistent
globally, 128, 136
pairwise, 128, 136
constant in relational model, 30
constraint, 186
inequalities over rationals, 96, 98
integrity, 28, 185, 236
vs. dependency, 157
polynomial inequalities, 96
temporal, 611–613
transition, 612
vs. first-order logic, 186, 234
constraint atom, 112
constraint database, 36, 71, 94–96, 97–98
constraint programming, 97
constraint query language, 94–96, 97–98
containment
conjunctive queries, 105, 118
decidability, 117
differences of SPCU queries, 140
first-order queries
undecidability, 125
queries, 115
tableau queries, complexity of, 121–122
containment of queries
relative to dependencies, 175, 177
relative to family of instances, 174
context-free grammar, 19
context-free language, 20
continuous operator, 286
conventional perspective on relations, 32, 33
CORAL, 337
cost model for query evaluation, 106, 108–110
count, 91, 92, 154
counter machine, 15
counting vs. relational calculus, 154
counting technique, 327, 331–335, 336, 341
covariance, 553
cover, 254
minimal, 257
create in SQL, 145
cross product, 52, 54
physical implementation, 108
in SQL, 144
vs. equi-join, 108
vs. join, 58
cumulative assignment, 346
CWA. See Closed World Assumption.
cylindric algebra
vs. relational algebra, 96, 103
dangling reference, 572
data complexity, 122, 422–423
data definition language (DDL), 4, 28
data function, 306
data independence principle, 4, 9
data integrity, 162
data manipulation language (DML), 4, 28
data model. See database model.
data storage, 106
database access functional paradigm, 571
database instance, 29
conventional perspective, 32
logic-programming perspective, 32
database logic, 97
database management system, 3
database model, 4, 7, 28
complex value, 508541
directory, 97
Entity-Relationship (ER), 242
functional, 574
Functional Data Model, 264
generic semantic model (GSM), 242
hierarchy, 28, 97
IFO, 242
Logical Data Model (LDM), 97
network, 28, 97
object-oriented, 28; See object-oriented database.
relational, 28–34
semantic, 28, 207, 242–250
database schema, 29, 31
with dependencies, 241, 251
datalog, 39, 273–310
bottom-up, 312–316, 324–335
vs. top-down, 311, 327, 336
boundedness, 285, 304, 309
vs. first-order, 306
chain program, 303, 305, 309
clause, 288
definite, 288
empty, 288
goal, 288
ground, 288
unit, 288
complex value, 532, 533
containment, 301–304
uniform, 304, 305, 309
and domain independence, 97
evaluation, 112, 311–337
adorned rule, 318, 321
adornment, 318
Alexander method, 336
algebraic approaches, 336
annotated QSQ, 330
APEX, 335
bottom-up, 312–316, 324–335
bound coordinate, 318
connected atom, 338
counting, 327, 331–335, 336, 341
direct evaluation, vs. pre-compilation, 317
Earley Deduction, 335
extension tables, 335
factoring, 337
free coordinate, 318
generalizations to logic programming, 336
generalized supplementary magic set rewriting, 325, 336
incremental, 337
Iterative Query-Subquery (QSQI), 339
left-to-right, 318
magic set rewriting, vs. QSQ, 311, 324–335, 336, 340
memo-ing, 335
naive, 312
original magic set rewriting, 340
parallel, 337
pre-compilation, vs. direct evaluation, 317
Query-Subquery (QSQ), 311, 317–324, 335, 341
rectified subgoal, 328, 330–331, 336
Recursive Query-Subquery (QSQR), 323–324, 324
relevant fact, 317
rule-goal graph, 335
seminaive, basic algorithm, improved algorithm, 312–316, 335
sideways information passing, 318, 336, 340
sip graph, 340
SLD-AL, 335
stratification, 337
supplementary relation, 319–320
top-down, 316–324
extensional database (edb), 279
extensional relation, 277
extensional schema, 277
immediate consequence operator, 282, 375
intensional database (idb), 279
intensional relation, 277
intensional schema, 277
least fixpoint semantics, 276, 282–286
Knaster-Tarski's Theorem, 286
linear program, 305, 316
linear rule, 316
magic set rewriting, 311, 324–335, 336
generalized supplementary, 325, 336
original, 340
vs. QSQ, 324
minimum model semantics, 275, 278–282
Herbrand interpretation, 282
Herbrand model, 282
monadic programs, 305
negative literal, 288
nonrecursive
with negation, 70, 72–73
normal form, 68
nonrecursive (nr) program, 62
optimization, 36, 112, 311–337
parallel evaluation, 337
positive literal, 288
precedence graph, 315
program, 276
proof tree, 286
proof-theoretic semantics, 275, 286–300
prototype systems, 337
query, 317
Query-Subquery (QSQ), 311, 317–324, 335
annotated, 330
completion, 318
Iterative (QSQI), 339
Recursive (QSQR), 323–324
template, 319–320
vs. magic set rewriting, 324
rule, 276
body, 276
head, 276
instantiation, 277
satisfiability, 300–301
semipositive, 379
sirup, 305, 309
SLD-resolution, 289–298
completeness, 297
datalog¬, 400
derivation, 290
most general unifier (mgu), 293
refutation, 290
resolvent, 289, 295
selection rule, 298
SLD-derivation, 295
SLD-refutation, 295
soundness, 296
unifier, 293
SLD-tree, 298, 317
stratified evaluation, 337
syntax, 276
top-down vs. bottom-up, 311, 327, 336
and undecidability, 306, 308–311
vs. logic programming, 35, 278, 298
datalog¬, 308, 309, 355–360, 357, 374–414
default model semantics, 408
inflationary semantics, 356
locally stratified, 411
negation as failure, 406–408
Clark's completion, 406
finite failure, 406
SLDNF resolution, 406
noninflationary semantics, 357
nonrecursive, 70, 72–73
range-restricted, 372
rule algebra, 359, 373
semipositive program, 377
on ordered databases, 406
vs. fixpoint, 405
SLD-resolution, 400
stable model semantics, 408, 413
vs. choice, 409
stratified, 374
stratified semantics, 377–385
independence of stratification, 382
on infinite databases, 411
on ordered databases, 406
precedence graph, 379
SLS resolution, 409
stratifiable program, 379
stratification, 378
stratification mapping, 378
vs. Fermat's Last Theorem, 411
vs. fixpoint queries, 400
supported model, 384, 411
tie-breaking semantics, 409
update language, 582
valid model semantics, 409
well-founded, 374
well-founded semantics, 385–397, 413
3-stable model, 389
3-valued instance, 387
3-valued model, 387
alternating fixpoint, 390, 408, 413
global SLS-resolution, 409
greatest unfounded set, 413
on ordered databases, 406
total instance, 387
total program, 395
unfounded set, 413
vs. default, 412
vs. fixpoint queries, 400, 401
vs. stable, 412
datalog¬new, 483
DB2, 155
DBASE IV, 152, 155
dbms, 3
DDL, 28; See data definition language.
decidability, 16
of implication for full dependencies, 220, 234
declarative vs. procedural, 35, 53
decomposition, 162, 251–259, 252, 265–266
dependency preserving, 254
and functional dependency, 164, 171
and join dependency, 169–171
lossless join, 253
mapping, 253
multi-way join, 106, 114–115
reconstruction mapping, 254
vs. synthesis, 258, 265
DedGin, 337
deductive database, 8
disjunctive, 502
deductive object-oriented database, 572, 574, 575
deductive temporal query language, 610
deep equality, 557, 575
default logic, 408
definite clause, 288
definite query, 97
delete in SQL, 149
deletion, 580
implicit, 556
deletion anomaly, 162, 254
dense linear order, 96, 98
dependency, 157
afunctional, 234
algebraic, 228–233
axiomatization, 166, 171, 172, 186, 193, 202–207, 227, 231
capturing semantics, 159–163
classification, 218
conditional table, 497
and data integrity, 162
and domain independence, 97
dynamic, 234
embedded, 192, 217, 233
embedded implicational (eid), 233
embedded join (ejd), 218, 233
embedded multivalued (emvd), 218, 220, 233
equality-generating (egd), 217–228
extended transitive, 234
faithful, 232, 233, 239
finiteness, 306
full, 217
functional (fd), 28, 159, 163–169, 163, 186, 218, 250, 257, 260
general, 234
generalized dependency constraints, 234
generalized mutual, 234
implication
in view, 221
implication of, 160, 164, 193, 197
implicational (id), 233
implied, 234
inclusion (ind), 161, 192–211, 193, 218, 250
acyclic, 207, 208–210, 211, 250
key-based, 250, 260
typed, 213
unary (uind), 210–211
inference rule, 166, 172, 193, 227, 231
ground, 203
join (jd), 161, 169–173, 170, 218
key, 157, 163–169, 163, 267
logical implication of, 160, 164
finite, 197
unrestricted, 197
multivalued (mvd), 161, 169–173, 170, 186, 218
mutual, 233
named vs. unnamed perspectives, 159
order, 234
partition, 234
projected join, 233
and query optimization, 163
satisfaction, 160
satisfaction by tableau, 175
satisfaction family, 174
and semantic data models, 249–253
and schema design, 253–262
single-head vs. multi-head, 217
sort set, 191, 213, 234
subset, 233
tagged, 164, 221, 241
template, 233, 236
transitive, 234
trivial, 220
tuple-generating (tgd), 217–228
typed, 159
vs. untyped, 192, 217
unirelational, 217
and update anomalies, 162
and views, 221, 222
vs. first-order logic, 159, 234
vs. integrity constraint, 157
vs. tableaux, 218, 234
dependency basis, 172
dependency preserving decomposition, 254
dependent class, 246
dereferencing, 557, 558
derivation, 290
derived data, 246
determinate-completeness, 474, 561, 574
determinate query, 474, 559
diameter, 12
diff, 88
difference, 33, 36, 68
in relational algebra, 71
and SPCU algebra, 136
in SQL, 146
vs. negation, 70
direct product, 232, 238
directory model, 97
disjunction, 38
in conjunctive queries, 37, 38, 61, 64
flatten, 83
and negation, 74
in selection formulas, 62
disjunctive deductive database, 502
disjunctive normal form (DNF), 21, 83
disk, 106
distinct in SQL, 107, 145, 154
distributed database
query optimization, 128
division in relational algebra, 99
DML, 4, 28
DNF, 83
dom, 30, 72
Dom(), 30
domain
active, 46
in relational model, 29, 30
scalar, 153
time, 607
underlying, 74
domain calculus, 74
vs. tuple calculus, 39
Domain Closure axiom, 26
domain independence, 70, 74, 75–77, 79, 81–97
and algebra, 78
complex value, 526
and datalog, 97
and dependencies, 97
with functions, 97
and nr-datalog¬, 78
with order, 97
practical query languages, 153
relational calculus, 81
syntactic restrictions, 81–91
undecidability, 97, 125
vs. active domain semantics, 79
domain-inclusion semantics, 551
domain-key normal form, 265
dominance of query languages (⊑), 47
DOOD. See deductive object-oriented database.
duplicate elimination, 107
distinct, 107
duplicate tuples, 144
dynamic aspect of object-oriented database, 572
dynamic binding, 543, 546, 552
dynamic choice operator, 464
Dynamic Logic Programming (DLP), 583, 613
ear of hypergraph, 130
Earley Deduction, 335
edb, 42, 49, 277
edge of hypergraph, 130
egd, 217–228
A-egd, 218
Ehrenfeucht-Fraïssé games, 433–437, 460
eid, 233
ejd, 218
EKS, 410
elementary functions, 18
elementary query, 534
embedded dependency, 192, 217
embedded implicational dependency (eid), 233
embedded join dependency (ejd), 218
embedded multivalued dependency (emvd), 218,
220, 233
embedding of tableau, 43
empty clause, 288
emvd, 218, 220, 233
enc, 418
encapsulation, 543, 546, 553
entity, 543
Entity-Relationship (ER) model, 242, 264
equality atom, 217
equality-generating dependency (egd), 217–228
A-egd, 218
equi-join, 55, 108
physical implementation, 107–108
in SQL, 144
vs. natural join, 57
equivalence
algebraic, 106
calculus formulas, 82
conjunctive calculus formulas, 46
conjunctive queries, 47, 60, 64, 82, 105
decidability, 118
conjunctive queries with union, 63
differences of SPCU queries, 140
finite and unrestricted implication for full dependencies, 220, 234
first-order languages, 36, 80, 96
first-order queries, 74
undecidability, 125
of full typed and algebraic dependencies, 231
of hypergraph properties, 132
nr-datalog¬
and relational algebras, 73
queries, 37
relative to dependencies, 176, 177
query languages, 47
relational algebras, 71
SPC and SPJR algebras, 60
equivalence class, 10
equivalence relation, 10
Equivalence Theorem
conjunctive query languages, 60
conjunctive query languages with union, 63
first-order languages, 80
ER model, 242
ESQL, 368, 370
evaluable query, 97
evaluation
of conjunctive queries, 56
datalog, 112, 311–337
evaluation plan, 107, 108, 110, 135
generating, 110–111
parameterized, 135
exact cover problem, 121
existential quantification, 44
flatten, 83
vs. universal, 74
Exodus
and optimization, 135
and query evaluation plans, 111
expert system vs. conjunctive queries, 135
expression complexity, 122, 422–423, 463
expressive power of object-oriented database, 569,
577
extended relational theory, 26
extension axioms, 26
extension tables, 335
extensional database (edb), 42, 49, 279
extensional relation, 42, 48, 277
F-logic, 574
fact, 32
factoring, 337
faithful dependency, 232, 233, 239
vs. typed, 233
fd, 28, 159, 160, 163–169, 163, 186, 218. See functional dependency.
fd closure
algorithm, 165
of set of attributes, 165
of set of fds, 165
fd rule in chasing, 175
fd-schema, 251
field, real closed, 97
file systems, 3
filter, 518
finitary power set, 10
finite interpretation, 26
finite logical implication, 197–202, 219
vs. unrestricted, 197
finite model theory, 123, 197
finite representation of infinite database, 93–96, 97
finite-state automata, 13
finitely implies, 198
finiteness dependency, 306
first normal form, 265
first-order incremental definability, 588, 613
first-order language, 70–98
Equivalence Theorem, 80
and undecidability, 122–126
vs. SQL, 147–149, 155
first-order logic, 22, 35
vs. conjunctive queries, 40
vs. constraints, 234
vs. dependencies, 159, 234
vs. integrity constraints, 186
vs. relational calculus, 77, 105, 123, 136
first-order predicate calculus, 22, 35
first-order queries, 70–98, 70
and dependencies in views, 222
equivalence, 74
expressiveness, 433–437
Ehrenfeucht-Fraïssé games, 433–437, 460
on ordered databases, 462
logspace complexity, 430–431
parallel complexity, 431–433
static analysis, 105, 122–126
and undecidability, 105, 122–126
fixpoint
complex value, 531–532
datalog, 276
incomplete database, 495
semantics of datalog¬, 390
fixpoint of an operator, 283
fixpoint queries, 342, 367
on ordered databases, 447
ptime complexity, 437
vs. while queries, 453
flatten, 524
FOID, 588
format model, 539
formula, 22
conjunctive calculus, 45
conjunctive normal form (CNF), 83
disjunctive normal form (DNF), 83
interpretable, 77
matrix of, 82
prenex normal form (PNF), 82
relational calculus, 74–75
4NF, 252, 252, 259
fourth normal form (4NF), 252, 252, 259
Foxpro, 152
FQL, 264
free(), 45, 75
free coordinate
in datalog evaluation, 318
free tuple, 33
free variable occurrence, 23, 45, 75
fsa. See finite-state automata.
full dependency, 217
full reducer, 129, 136
full tuple generating dependency, 218
full typed dependencies
axiomatization, 227–228
function-based perspective on tuples, 32
Functional Data Model, 264
functional dependency (fd), 28, 163–169, 163, 186, 218
agreement set, 188
axiomatization, 166–168
with mvds, 172–173
vs. inds, 192, 202–207, 211
and chasing, 175
closure, 165
cover, 254
and decomposition, 162, 164, 171, 253–262, 255
dynamic, 615
independent of inds, 250
logical implication
with inds, 192, 199202
linear time, 165
satises, 163
saturated set, 188
and synthesis, 260–261
and two-element instances, 189
vs. decomposition, 164, 171
vs. join dependency, 171, 178
vs. multivalued dependency, 171
vs. propositional logic, 186, 189
vs. semantic data model, 249253
vs. unrestricted implication, 199
vs. propositional logic, 189
functional paradigm, 569
functional query language, 569
GP, 379
Galileo, 264
game-of-life, 343
garbage collection, 556
Gauss-Seidel algorithm, 335
generalized instance, 95
generalized SPC algebra, 55
generalized SPJR algebra, 59
generalized tuple, 94, 95
generic OODB model, 547556
generic semantic model (GSM), 242–250
genericity, 103, 419–421, 419, 425
globally consistent join, 128, 136, 261
GLUE-NAIL, 337
goal clause, 288
Gödel Completeness Theorem, 123, 136
graph, 11
graphical query language, 150–153
Graphlog, 369, 370
ground, 22
ground atom, 34
ground clause, 288
ground inference rule, 203
group by in SQL, 154
grouping, 533
GSM, 242–250
GYO algorithm, 130, 136
GYO reduction, 141
hash index, 107
head of rule, 39, 41, 276
Heraclitus, 614
Herbrand interpretations, 23
Herbrand model
datalog, 282
hierarchy model, 28, 97
homomorphism, 12
of tableau queries, 117, 127, 136
Homomorphism Theorem, 37, 105, 115–118, 117, 127, 136, 177, 178
Horn clause, 279
hyp, 18
hyperedge, 130
hypergraph, 130
acyclic, 132
articulation set, 132
connected, 132
cyclic, 132
of database schema, 130
ear, 130
edge, 130
GYO algorithm, 130
path, 132
reduced, 130
hyperplane, 438
I1, I1/2, I0, 387
I, 391
idb, 42, 49, 277
IDM transaction, 580–582, 613, 615–617
axiomatization, 581
condition, 580
deletion, 615
insertion, 615
modication, 615
optimization, 581
parallelization, 616
schedule, 616
serializability, 616
simplication rules, 582
IDM transactional schema, 584, 613, 617
vs. constraints, 585–586
completeness, 617
soundness, 617
vs. fds, 585
vs. inclusion dependencies, 585, 617
vs. jds, 617
IFO, 242, 264
ILOG, 576
image of calculus query, 78
immediate consequence operator, 282
imperative method, 564566, 573
implementation
cross product, 108
equi-join, 107108
multi-way join, 111115
physical, 106108
projection, 107
relational algebra, 107108
selection, 107
implication
and chase, 180182, 186
closed under, 204
closed under k-ary, 204
of dependencies, 158, 160, 164, 195
in view, 221
of fds and inds, 192
finite, 197–199, 226
finite vs. unrestricted, 202, 219, 234
of functional dependencies, 186
of inds, 192, 195–197
for two-element instances, 189
unrestricted, 197–199
vs. fds and inds, 199–202
implicational dependency (id), 233
implies. See implication.
finitely, 198
without restriction, 198
inclusion dependency (ind), 161, 192–211, 193, 218, 253
acyclic, 208, 210, 211, 250
vs. implication, 210
axiomatization, 193–195, 211
vs. fds, 192, 202–207, 211
and chasing, 208
independent of fds, 250
key-based, 250, 260
logical implication, 192, 195–197
with fds, 192, 199–202
repeats-permitted, 212
restricted classes, 192
satises, 193
typed, 211
vs. referential integrity, 211
vs. semantic data model, 207
vs. unrestricted implication, 199
incomplete database, 487–507
c-table, 493
update, 593–594
complexity, 499
fixpoint, 495
logical theory, 594–600
and nondeterminism, 507
table, 488
incomplete information
and update anomalies, 162
incremental update. See first-order incremental definability.
ind, 161. See inclusion dependency.
ind-rule in chasing, 208
independent component, 265
indexing, 106, 107
inequality atom
in selections, 69
inequality in constraint databases, 96
inference rule, 24, 158
ground, 202, 203
schema, 202
substitution, 167
inference rules
for fds and mvds, 172–173, 186
for functional dependency, 166–168, 186
for inclusion dependency, 193–195
proof using, 167
provable using, 167
for unary inds, 210, 215
vs. algorithm for testing implication, 166
vs. axiom, 167
infinitary logic, 458, 459, 462
infinite database, 97
finite representation, 36, 93–96, 97
infinite tree, 575
inflationary datalog¬, 356
inflationary fixpoint logic (CALC+μ+), 352, 353–354
inflationary fixpoint operator (μ+), 353
information capacity
relative, 265, 268–269
INGRES, 34, 111, 155
distributed, 135
query optimizer, 114–115, 127, 135, 137
inheritance, 546, 552, 553, 567, 573–575, 577
semantic data model, 245
input schema of query, 37
insert in SQL, 149
insertion, 580
insertion anomaly, 162
instance
complex value, 512
database, 29
conventional perspective, 32
logic-programming perspective, 32
generalized, 95
GSM, 245
object-oriented database, 554, 555
relation
conventional perspective, 32
logic-programming perspective, 32
relativized, 77
semantic data model, 245
unrestricted, 197
instantiation, 277
integrity constraint, 6, 28, 157, 186
vs. rst-order logic, 186, 234
intended model, 279
intensional database (idb), 42, 49, 279
intensional relation, 42, 48, 277
interpretable formula, 77
interpretation, 23
active domain, 79
natural, 78
relativized, 74, 7778
unrestricted, 78
intersection, 33
in relational algebra, 71
and SPC algebra, 55, 69
in SQL, 146
vs. join, 58
invented value, 469
IQL, 573
irreexive, 11
ISA, 543, 545
semantic data model, 245
isomorphic tableau queries, 120
isomorphism, 12
OID, 555
iterate, 518
Iterative QSQ (QSQI), 339
Jacobi algorithm, 335
jd, 161, 169–173, 218. See join dependency.
jd rule, in chasing, 175
join, 55, 57
acyclic, 105, 126, 128–135, 136
algorithms for binary join, 135
complex value, 517
decomposition, 106, 114
equi-join, 55, 57, 108
implementation, 111–115
left-to-right evaluation, 112
lossless, 164, 253
multi-way, 106, 108, 135
natural, 56, 57, 169
vs. equi-join, 57
pairwise consistent, 128, 136
physical implementation, 107–108
semi-join, 128, 135
in SQL, 144
tuple substitution, 115, 135
vs. cross product, 58
vs. intersection, 58
vs. tableau, 64
join decomposition, 114–115
join dependency (jd), 161, 169–173, 170, 218
acyclic, 169, 182–183, 186
and mvds, 182
and chasing, 175
complexity of implication, 169
and decomposition, 169–171
embedded, 233
Gentzen-style axiomatization, 186
n-ary, 170
projected, 233
satises, 170
vs. axiomatization, 171, 186
vs. functional dependency, 169, 171, 178
vs. multi-valued dependency, 170, 182
vs. natural join, 169
vs. SPJR algebra, 181
vs. unrestricted implication, 199
join detachment, 114, 135
join tree, 130, 136
k-ary axiomatization, 202, 204
key, 257, 543
attribute, 257
in semantic data model, 247
key dependency, 163
simple, 267
vs. functional dependency, 161
key-based inclusion dependency, 250, 260
KL, 503
Knaster-Tarski Theorem, 286
lambda-calculus, 574
language (formal), 13–20
late binding, 552
LDL, 337, 409, 533, 538, 613
update language, 583
left-to-right evaluation
datalog, 318
join, 112
linear bounded Turing machine, 196
linear datalog, 305, 316
linear programming, 97
Lisp, 573
literal, 21
in nr-datalog¬ rule, 72
local stratification, 411
logic. See mathematical logic.
temporal, 612, 619
three-valued, 389–391
logic programming, 97
constraints, 97
object-oriented database, 572
vs. datalog, 35
logic-programming perspective on relations, 32, 33
Logical Data Model (LDM), 97
logical database, 503
logical implication, 21
and chase, 180182, 186
closed under, 204
closed under k-ary, 204
of dependencies, 160, 164, 193
in view, 221
of fds, 165, 186
of fds and inds, 192
finite, 197–199
vs. unrestricted, 202, 219, 234
full dependencies
complexity, 221
of inds, 192, 195197
of mvds, 172–173
unrestricted, 197199
logical level of three-level architecture, 106
logical theory and updates, 594
logspace complexity
of first-order queries, 430–431
lossless join, 164, 253
Löwenheim-Skolem theorem, 25
magic set rewriting, 311, 324–335
generalized supplementary, 325, 336
original, 340
vs. QSQ, 324, 327
main-memory buffering, 106, 107
many-sorted query language, 153–154
map, 540
map filter, 518
materialized view, 51
mathematical logic, 20–27
matrix of formula, 82
maximum in SQL, 154
memo-ing, 335
message, 552
method, 543, 551
languages, 563571
method resolution, 546, 552
method schema, 563, 566571
monadic, 543, 563, 565, 567, 568, 577
polyadic, 567, 568, 577
mgu, 295
Microsoft Access, 36, 143, 150, 152–153, 155
minimal cover, 257
minimal tableau query, 118
minimization of tableau queries, 105, 119, 136
minimum in SQL, 154
minimum model, 275
modal operator, 503
model, 24
database, 28
datalog, 279
relational, 2834
semantic data, 243, 245253, 267
modication, 580
modication anomaly, 162
modified RANF, 88
modus ponens, 24
monadic datalog program, 305
monadic method schema, 543, 563, 565, 567, 568, 577
monoid, 199
monotone operator, 283
monotonic query, 42
monotonicity
and conjunctive queries, 42
and relational algebra, 71, 98
most general unifier (mgu), 293
multi-head dependency, 217
multi-way join
decomposition, 114–115
detachment, 114, 135
implementation, 106, 108, 111–115, 135
left-to-right evaluation, 112
tuple substitution, 115, 135
multiset, 92, 136, 145
multivalued dependency (mvd), 161, 169–173, 170, 186, 218
and acyclic jds, 182
axiomatization with fds, 172–173
dependency basis, 172
embedded, 218, 220, 233
original definition, 189
satises, 170
and two-element instances, 189
vs. functional dependency, 171
vs. join dependency, 170
vs. propositional logic, 189
mutual recursion, 315
mvd. See multivalued dependency.
N-datalog(¬), 463
N1NF. See nested relation.
NAIL!, 337, 409
naive evaluation
of datalog, 312
of SPC query, 109
naive table, 492
named perspective, 31, 32
and dependencies, 159
projection, 57
relational algebra, 71
selection, 57
SPJR algebra, 5659, 57
tuple, 32
vs. unnamed perspective, 32
named value, 554, 556
root of persistence, 556
natural interpretation, 78
natural join, 56, 57, 169
polyadic, 58
vs. equi-join, 57
vs. join dependency, 169
natural semantics of relational calculus, 78, 79
nc, 96, 431
negation, 36
in Microsoft Access, 153
pushing, 83
in QBE, 150
in selections, 68
in SQL, 143
stratied, 49
vs. set difference, 70
Negation as Failure, 27, 406
negative literal, 288
nest, 518
nested loop implementation of join, 107, 108
nested relation, 512
algebra, 519
nested SQL query, 143, 146–147
network model, 28, 97
new, 559
NF2. See nested relation.
no-information null, 502
non-existing null, 502
nondeterminism, 15
semantics of negation, 409
nondeterministic query. See query, nondeterministic.
noninflationary datalog¬, 357
nonrecursive (nr) datalog
with negation, 70, 72–73
program, 72
nonrecursive datalog program, 62
normal form, 158
Boyce-Codd (BCNF), 250,251
decomposition algorithm, 255
conjunctive (CNF), 83
conjunctive calculus, 4647
disjunctive (DNF), 83
domain-key, 265
rst, 265
fourth (4NF), 252, 252, 259
nr-datalog, 68
prenex (PNF), 82
project-join (PJ/NF), 265, 267
relational algebra (RANF), 86, 97
relational schema, 251259, 265
safe-range (SRNF), 83
SPC algebra, 55
SPCU algebra, 62
SPJR algebra, 59
SPJRU algebra, 62
third (3NF), 257
decomposition algorithm, 257
synthesis algorithm, 257
now, 607
np, 18
np-complete, 105, 121, 122, 127
np-hard, 121
npspace, 18
nr-datalog, 62
normal form, 68
nr-datalog¬, 70, 72–73
and domain independence, 78
with equality, 72, 73
equivalence to first-order languages, 80
literal, 72
program, 72
query, 73
range restricted, 72
with equality, 72
rule, 72
semantics, 72
translation into SQL, 147–149
and undecidability, 122–126
NU-Prolog, 337
null value, 488
O2, 562, 573
O2SQL, 510, 536–537, 562
obj, 547
object, 246, 543, 545, 547, 573
object creation, 573. See object-oriented database, object creation.
object equality, 557
object history, 615
object identifier (OID), 473, 543, 545–547
semantic data model, 243
object migration, 572, 613, 615
object-oriented data model, 28, 245, 477, 546
object-oriented database, 8, 242, 473, 542–578
calculus, 557–558
class, 545
class hierarchy, 549
well formed, 549
classication, 572, 575
completeness, 560–561, 560, 574
complex value, 545
consistency. See object-oriented database, type safety.
context-dependent binding, 552
covariance, 553
dangling reference, 572
dba mode, 546
deductive, 575
deep equality, 557, 575
dereferencing, 557, 558, 559
determinate query, 559
domain-inclusion semantics, 551
dynamic aspect, 572
dynamic binding, 543, 546, 552
encapsulation, 543, 546, 553
expansion of value, 558
formal definition, 547–555
generic OODB model, 547–556
ILDG, 580
imperative methods, 564566, 573
expressive power, 565–566, 577
inheritance, 546, 552, 553, 567, 573–575, 577
instance, 554, 555
IQL, 573
ISA, 543, 545
languages for methods, 563–571
late binding, 552
logic programming, 572, 574
message, 552
method, 551
signature, 551
well formed, 553
method resolution, 546, 552
method schema, 563, 566–571
expressive power, 569–571
monadic, 543, 563, 565, 567, 568, 577
polyadic, 567, 568, 577
named value, 554, 556
object, 543, 545, 547, 573
object creation, 558–562, 573, 574
object equality, 557
object identier, 543, 545, 547
object migration, 572
OID assignment, 550
OID isomorphism, 555, 560
overriding, 546
parallelism, 573
query semi-deterministic, 574
query language, 556–563
querying schema, 572
reachability, 565
receiver, 552
role, 571
schema, 554
schema design, 571
specialization, 545
static binding, 552
subtyping relationship, 549
type, 548
disjoint interpretation, 550
semantics, 550
type safety, 563, 565, 567, 573
user mode, 546
value, 547
value equality, 557
value-dependent binding, 552
view, 571
object-oriented programming languages, 573
ODE, 615
OID
-assignment, 550
-equivalence, 246
-isomorphism, 246, 560
semantic data model, 243
OODB, 242. See object-oriented database.
Open World Assumption (OWA), 489, 497, 595
operator
continuous, 286
monotone, 283
OPS5, 369, 370
optimization
conjunctive queries, 36, 105
using chase, 163
using dependencies, 163
datalog, 36, 112, 311–337
and Exodus, 135
in practical systems, 105, 106–115
relational algebra, 106
transaction, 581
using chase, 177–180
or-sets, 505
ORACLE, 34, 155
ordered database, 397, 447
output schema of query, 37
overriding, 546
OWA, 489, 497, 595
P(I), 280, 378, 383, 387
pg(P, I), 389
Pwf, 390
page fetch, 107
page size, 106
paging protocol, 106
pairwise consistent join, 128, 136
Paradox, 152, 155
parallel complexity
classes of circuits, 431
of first-order queries, 431–433
parameterized IDM transaction, 584
call, 584
parametrized query, 522
paramodulation vs. chase, 186
parity query
not first-order, 460
not in while, 437
partial fixpoint logic (CALC+μ), 348, 349–352
partial fixpoint operator (μ), 349
partial order, 11
partially ordered set, 11
path in hypergraph, 132
PCP, 16
and satisfiability of relational calculus, 123
permutation, 13
physical implementation, 106–108
cross product, 108
equi-join, 107–108
projection, 107
relational algebra, 107–108
selection, 107
physical level
of three-level architecture, 106
physical model of relational database, 106–107
PNF, 82
polyadic
conjunction, 46, 75, 83
disjunction, 75, 83
existential quantication, 83
natural join, 58
polyadic method schema, 567, 568, 577
polynomial inequalities constraint, 96, 97
positive existential calculus, 91, 97
decidability, 99
positive literal, 288
positive selection formula, 67
poss(T), 490
Post Correspondence Problem (PCP), 16
and satisfiability of relational calculus, 123
POSTGRES, 153, 600
powerset, 514
precedence graph
in datalog evaluation, 315
in datalog¬, 379
negative edge, 380
positive edge, 380
predicate, 277
prenex normal form (PNF), 82
procedural vs. declarative, 35, 53
product
Cartesian, 52
cross, 52, 54, 58, 108, 144
direct, 235, 240
production rule system, 369
program schema, 574
project-join expression
extended, 229
project-join normal form (PJ/NF), 267
project-join query, flat, 126
projection, 52
and aggregate functions, 93
named perspective, 57
physical implementation, 107
pushing, 109
in SQL, 144
unnamed perspective, 54
proof, 24
using inference rules, 167
proof tree, 286
propositional calculus, 21
propositional logic, 21
vs. fds and mvds, 186, 189
pspace, 17
pspace complexity
of while queries, 437
pspace-complete, 196
PI, 286
ptime, 17
ptime complexity
of fixpoint queries, 437
pure universal relation assumption (URA), 126, 130,
242, 252
pushing
negation, 83
projection, 109
selection, 109, 335
qadom, 79
qd(), 78
qnat(), 78
qc, 422
QL, 477
qptime, 406, 422
QSQ, 311, 317–324, 335
annotated, 330
completion, 318
Iterative (QSQI), 339
Recursive (QSQR), 323–324
algorithm, 324
template, 319–320
vs. magic set rewriting, 324, 327
QSQI, 339
QSQR, 323–324
algorithm, 324
Quel, 74, 112, 155
query, 421
complexity, 422–423
data complexity, 422–423
expression complexity, 422–423, 463
composition, 48–52, 71
computability, 417–421
conjunctive, 36, 37–64
conjunctive calculus, 44–47
containment relative to dependencies, 37, 177
definite, 97
determinate, 474
equivalence, 37
relative to dependencies, 176, 177
first-order, 70
genericity, 419–421, 419, 425
C-genericity, 419–420
input schema, 37
with invented values, 469
monotonic, 42
nondeterministic, 453–457
CALC+μ(+)+W, 456
choice operator, 458
dynamic choice operator, 464
N-datalog(¬), 463
while(+)+W, 454, 456
witness operator, 454–456
nr-datalog¬, 73
optimization, 36, 105–115, 112, 313–339
output schema, 37
parametrized, 522
project-join, flat, 126
relational calculus, 75
satisfiable, 42
schema query, 572
semi-deterministic, 574
statistical properties, 106
tableau, 43–44, 43
union-of-tableaux, 139
untyped, 475
vs. implementation, 110
vs. query mapping, 37
vs. update, 28
well-typedness, 417
yes-no, 42
query composition, 37
query decomposition, 114115
query evaluation
cost model, 106, 108–110
naive, 109
in practical systems, 106–115
query evaluation plan, 107, 108, 110, 135
and Exodus, 111
generating, 110–111
parameterized, 135
query language
aggregate operators, 153, 154, 155
with arithmetic, 153, 154
associative, 35
BP-completeness, 428
completeness, 466
completeness in a class, 424
conjunctive queries, 36, 37–64
with union, 36, 37, 38
constraint, 94–98
declarative, 29, 558
vs. procedural, 35, 53
determinate-completeness, 474
disjunction, 37, 38
dominated by (⊑), 47
embedded, 466
C+SQL, 466
whileN, 467
equivalence (≡), 47
expressive power, 106, 427
graphical, 150153
inflationary semantics, 342–344
many-sorted, 153–154
navigational, 558
noninflationary semantics, 342–344
object-oriented, 556–563
practical, 143–155
relational algebra, 28, 35, 36
relational calculus, 28, 35, 36
set-at-a-time, 35
static analysis, 36, 105, 122–126, 306–311
temporal, 606–613
three paradigms, 35–36
Query Management Facility (QMF), 155
query mapping vs. query, 37
query optimization, 36, 105
cost model, 106, 108–110
distributed database, 128
evaluation plan, 107, 108, 110–111, 135
and Exodus, 111
in INGRES, 114–115
join detachment, 114, 135
local vs. global, 115, 117
and negation, 106
in practical systems, 106–115
program transformation, 108
query rewriting, 108–110
query tree, 108–110, 108
and relational calculus, 126
rewrite rule, 110
and sampling, 111
in System R, 112114
by tableau minimization, 118–120
tuple substitution, 115, 135
using chase, 163, 177–180
using dependencies, 163
query rewriting, 108–110
query tree, 108–110, 108
Query-By-Example (QBE), 36, 40, 43, 143, 150–152, 155
condition box, 150
and domain independence, 153
and first-order languages, 151
negation, 150
relationally complete, 151
view denition, 151
vs. tableau queries, 150
Query-Subquery (QSQ), 311, 317–324, 335
annotated, 330
completion, 318
Iterative (QSQI), 339
Recursive (QSQR), 323–324
algorithm, 324
template, 319–320
vs. magic set rewriting, 324, 327
R[], 31
r.e. See recursively enumerable.
Rado graph, 442, 461
RANF, 86, 97
algorithm, 88
modified, 88
range restricted
algorithm, 84
calculus query, 97
calculus variable, 83, 84
conjunctive query, with equality, 41, 48
formula, 102
nr-datalog¬, 72
with equality, 65, 72
rule, 41
range separable query, 97
rank, 402
RDL, 369, 370
real closed field, 96, 97
receiver, 552
reconstruction mapping, 254
rectangle, representation, 95
rectified subgoal in datalog evaluation, 328, 330–331, 336
recursive (formal) language, 16
Recursive QSQ (QSQR), 323–324
algorithm, 324
recursively enumerable, 16
reduced hypergraph, 130
redundancy and update anomalies, 162
referential integrity constraint vs. inclusion
dependency, 161, 213
reflexive relation, 10
refutation, 290
regular language, 14
regular tree, 558, 575
relation
complex value, 512
extended, 229
extensional, 42, 48
intensional, 42, 48
relation (instance), 29
conventional perspective, 32
logic-programming perspective, 32, 33
over empty attribute set, 32
unrestricted, 197
relation atom, 112, 217
relation schema, 31
with dependencies, 241
relational algebra, 28, 35, 36, 70, 71, 81
aggregate operators, 97
with bags, 136
complement operator, 103, 104
composition, 71
conjunctive, 52–61
division, 99
and domain independence, 78
equivalence to first-order languages, 80
equivalences, 106
implementation, 106, 107108
and monotonicity, 71, 98
named, 64, 71
named conjunctive, 56–59
optimization, 106, 126
in practical systems, 105, 106–115
physical implementation, 106–115
and satisfiability, 98
semi-join, 128, 135
SPC, 52–56, 108, 118
SPCU, 62, 97, 136
SPJR, 56, 59, 118
vs. join dependency, 181
SPJRU, 62
translation into calculus, 80
typed restricted SPJ, 156
and undecidability, 122–126
unnamed, 71
unnamed conjunctive, 52–56
unrestricted, 103
untyped algebra, 475
relational algebra normal form (RANF), 86, 97
algorithm, 88
modified, 88
relational calculus, 28, 35, 36, 64, 70, 73–91, 85
active domain semantics, 74, 79
aggregate operators, 97
allowed query, 97, 101102
base formula, 74
conjunctive, 45
conjunctive normal form (CNF), 83
and counting, 154
disjunctive normal form (DNF), 83
domain calculus, 39, 74
domain independence, 70, 74, 75–77, 79, 81–97
equivalence to first-order languages, 80
evaluable query, 97
formula, 74–75
equivalence, 82
parse tree, 83
image of query, 78
inequalities constraint, 96, 97
natural semantics, 78, 79
negation, 7071
polynomial inequalities constraint, 96
positive existential, 68, 91, 97
prenex normal form (PNF), 82
query, 75
and query optimization, 126
range restricted
algorithm, 84
formula, 102
query, 97, 102
variable, 83, 84
range separable query, 97
relational algebra normal form (RANF), 86, 97
relativized interpretation, 74, 77–78
rewrite rule, 82
for RANF, 86–87
for SRNF, 83
safe DRC query, 97
safe query, 64, 97
safe-range, 81, 85, 83–85, 97
normal form (SRNF), 83
safety, 70, 75–77
and satisfiability, 123
semantics, relativized, 77
simulation of PCP, 123
static analysis, 105, 122–126
syntax, 74
translation into algebra, 97
active domain case, 80
safe-range case, 81, 86–91
tuple calculus, 39, 74, 101
and undecidability, 36, 97, 105, 122–126, 136
unrestricted semantics, 78
unsafe, 75
vs. first-order logic, 77, 105, 123, 136
vs. select-from-where clause, 145
relational completeness, 96
QBE, 151
SQL, 147
vs. Turing computability, 96
relational model, 2834
relative information capacity, 265, 268–269, 539
relativized instance, 77
relativized interpretation, 74, 77–78
relevant fact, 317
relname, 31
renaming
attribute, 58
complex value, 517, 524
operator, 57, 58
SPJR algebra, 57
rep(T), 489
repeat restricted tableau query, 67
representation system
strong, 489
weak, 490
representative instance, 263
resolution, 186, 552
vs. chase, 186
resolution theorem proving, 136
resolvent, 289, 294
RETE, 600
Reverse-Same-Generation (RSG)
program, 312
query, 317
revision vs. update, 599–600
rewrite rule
conjunctive calculus, 46
normal form vs. query optimization, 110
for optimization, 108, 110
relational calculus, 82
SRNF, 83
sound, 56
SPC algebra, 55–56, 110
SPJR algebra, 110
SRNF to RANF, 86–87
rewriting, query, 108–110
role, 571
root of persistence, 556
rule, 41
active database, 605
anonymous variable, 39
body, 39, 41
head, 39, 41
nr-datalog¬, 72
range restricted, 72
semantics, 72
range-restricted, 41
semantics, 41
update language, 582
rule-based conjunctive query, 39, 40–42, 41
with equality, 48
semantics, 41
with union, 62
rule-goal graph, 335
running intersection property, 141
safe, 64
DRC query, 97
query, 97
safe-range, 85
and aggregate functions, 93
complex value, 528
normal form (SRNF), 83
query, 97
relational calculus, 81, 83–85
and universal quantication, 85
safety, 70, 75–77
in SQL, 153
Same-Generation (SG)
program, 331
query, 331
Variant (SGV), 339
sampling in query optimization, 111
sat(R, Σ), sat(Σ), 174
satisfaction, 24
conjunctive calculus formula, 46
relative to a domain, 77
satisfaction family, 174, 186, 222
satisfiability
and conjunctive queries, 42
datalog, 300
and first-order queries, 123
and relational algebra, 71, 98
and relational calculus, 123
satisfiable formula, 21
satisfiable query, 42
satisfiable SPC algebra, 56
satisfiable SPJR algebra, 59
satisfy
dependency, 160
by tableau, 175
functional dependency, 163
inclusion dependency, 193
join dependency, 170
multivalued dependency, 170
saturated set, 188
scalar domain, 153
schema
complex value, 512
database, 29, 31
object-oriented database, 554
query, 572
relation, 31
schema design
decomposition, 162, 251–259, 252
object-oriented database, 571
synthesis, 257–258
SDD-1, 135
select-from-where clause, 112, 144
vs. projection, 144
vs. relational calculus, 145
selection, 52, 57
constant based, 66
named perspective, 57
physical implementation, 107
positive conjunctive, 55, 58
pushing, 109, 335
in SQL, 144
unnamed perspective, 53
selection formula
atomic, 53
disjunction, 62
inequality atom, 69
with negation, 68
positive, 67
positive conjunctive, 55, 58, 108
selection rule, 298
Semantic Binary Data Model, 264
semantic data model, 28, 157, 192, 240, 242–250, 264, 542
abstract class, 243
attribute, 243
multi-valued, 243
single-valued, 243
class, 243
complex value, 243
derived data, 246
Entity-Relationship (ER), 242
and functional dependencies, 249253
generic (GSM), 242
inheritance, 245
instance, 245
ISA, 245
object identier (OID), 243
printable class, 243
and relational model, 249–253
and schema design, 247–250
subclass, 243
vs. inclusion dependencies, 207, 251–253
semantics
conjunctive calculus, 45
conjunctive query, 41
nr-datalog¬ program, 72
nr-datalog¬ rule, 72
relational calculus
active domain, 79
natural, 78, 79
unrestricted, 78
rule-based conjunctive query, 41
SPC algebra, 54
SPJR algebra, 58
tableau query, 43
semi-deterministic query, 574
semi-join, 128, 135
program, 129
seminaive datalog evaluation, 312–316, 335
basic algorithm, 315
improved algorithm, 316
semipositive datalog, 377
sentence, 23
Sequel, 144
set comprehension, 538
set constructor, 508, 509
set difference, 68
in relational algebra, 71
and SPCU algebra, 136
vs. negation, 70
set membership, 514
set-at-a-time, 35
set_create, 515
set_destroy, 515
sideways information passing, 111, 112–114
in datalog evaluation, 318, 336, 340
graph, 113, 340
strategy, 113
signature, method, 551
simple key dependency, 267
simple tableau query, 140
simultaneous induction, 351
single rule programs (sirups), 305, 309
single-head dependency, 217
singleton, 518
sip graph, 113, 340
sip strategy, 113
sirup, 305–309
SLD datalog evaluation, 289–298
SLD-AL, 335
SLD-resolution, 295. See datalog, SLD-resolution.
datalog¬, 406
SLD-tree, 298, 317
SLDNF resolution, 406
SLS resolution, 409
sort
complex value, 511
of instance, 32
of relation name, 31
of tuple, 32
sort(), 31
sort set dependency, 191
vs. axiomatization with fds, 213
sort-merge implementation of join, 108
sound axiomatization, 167
spatial database, 95
SPC algebra, 52–56, 54, 108
base query, 54
generalized, 55
intersection, 55, 69
normal form, 55
rewrite rule, 55–56, 110
satisable, 56
unary singleton constant, 54
with union, 62
vs. SPJR algebra, 60
vs. tableau queries, 118
SPCU algebra, 62, 97
and dependencies in views, 222
and difference, 136, 140
normal form, 62
specialization, 545
SPJ algebra
typed restricted, 64, 67
SPJR algebra, 56–59, 57
base query, 58
generalized, 59
natural join, 56
normal form, 59
renaming, 57
rewrite rule, 110
satisable, 59
unary singleton constant, 58
with union, 62
vs. join dependency, 181
vs. SPC algebra, 60
vs. tableau queries, 118
SPJRU algebra, 62
normal form, 62
SQL, 23, 36, 70, 74, 112, 143–150, 155, 336, 370, 372, 536, 574
bags, 145, 155
and conjunctive queries, 143–146
contains, 146
count, 154
create, 145
delete, 149
distinct, 145, 154
and domain independence, 153
duplicate tuples, 144
from, 144
group by, 154
insert, 149
and negation, 143
nested query, 143–147
in personal computer DBMSs, 152
relationally complete, 147, 150
safety, 153
scalar types, 145
select, 144
set operators, 146
simulation of nr-datalog¬, 147–149
translation to algebra, 112
update, 149
update language, 580
views, 149
vs. cross product, 144
vs. first-order queries, 147–149, 155
vs. relational calculus, 145
vs. Sequel, 144
where, 144
SRNF, 83
stable model, 408, 413
stage(P, I), 285
Starburst, 368, 370
static analysis
conjunctive queries, 105, 115–122
datalog queries, 306–311
first-order queries, 105, 122–126
of queries, 36
relational calculus, 105, 122–126
static binding, 552
stored data, statistical properties, 106
stratified datalog¬, 378
stratified negation, 49
stratified semantics, 377–385. See datalog¬, stratified semantics.
stream of tuples, 106, 135
strongly-safe-range
complex value, 530
structured object. See complex value.
Structured Query Language (SQL), 143. See SQL
subclass, 545
semantic data model, 243
subquery
in datalog evaluation, 318
substitution, 24, 116
vs. valuation, 116
subsumption, 136
subtyping relationship, 549
succ, 397
sum, 91, 92
in SQL, 154
summary of tableau query, 43
superkey, 257
supplementary relation, 319–320
supported model, 384, 411
sure(T ), 490
surrogate, 247, 573
Sybase, 155
symmetric, 10
synthesis, 257–258
vs. decomposition, 258, 265
System R, 111
query optimizer, 112, 113–114, 122, 127, 135, 137
T_P, 375
table, 488–500. See Codd-table, naive table, c-table.
tableau, 43
complexity, 121–122
composition, 226–227
embedding, 43
typed, 44
vs. dependencies, 218, 234
vs. join, 64
tableau minimization, 105, 118–120, 136
and chasing, 177–180
vs. condensation, 136
vs. local optimization, 117
vs. number of joins, 118
vs. resolution theorem proving, 136
tableau query, 43–44, 43
chasing, 173, 186
complexity, 111–122
composition, 226
containment, 121–122
difference, 64
with equality, 48
of an fd, 181
homomorphism, 117, 127, 136
isomorphic, 120
of a jd, 181
minimal, 118
minimization, 119
repeat restricted, 67
semantics, 43
simple, 140
summary, 43
typed, 64, 121, 136
union-of-tableaux query, 63, 64, 139
vs. dependencies, 64
vs. QBE, 150
vs. SPC algebra, 118
vs. SPJR algebra, 118
tagged dependency, 164, 221, 241
Tarski's Algebraization Theorem, 96
Taxis, 264
taxonomic reasoning, 572, 575
template dependency, 233, 236
temporal constraint, 611–613
history-less checking, 615
temporal database, 95, 606–613
query language, 607–611
deductive, 610
TSQL, 609
representation, 608–609
temporal CALC, 607
temporal constraint, 611–613
on events, 612, 615
object histories, 615
object migration, 613
vs. transactional schemas, 612
time domain, 607
now, 607
transaction time, 607
transition constraint, 612
dynamic fds, 615
pre/post conditions, 615
valid time, 607
temporal logic, 608, 615
temporal query language, 607–611
term, 22, 34
complex value, 519
tgd, 217–228
tgd-rule in chasing, 223
third normal form (3NF), 257
3-T_P, 388
3-satisfiability, 19
3NF, 257
3NF Algorithm, 257
3-valued instance, 386, 387, 388, 389
three-level architecture, 3
logical level, 106
physical level, 106
3-SAT, 139
TI Open Object-Oriented Data Base, 135
timestamp, 401
top-down datalog evaluation, 316–324
vs. bottom-up, 311, 327, 336
topological sort, 11
total instance, 387
total order, 11
total program, 395
T_P, 283
transaction time, 607
transactional schema, 584–586, 584, 617
Gen(T), 585
IDM transactional schema, 584, 613, 617
parameterized IDM transaction, 584
vs. constraints, 585–586
completeness, 585
soundness, 585
vs. methods, 584
vs. temporal constraints, 612
transformation rule. See rewrite rule.
transition constraint, 612
transitive, 10
transitive closure query
generalized, 310
not first-order, 436
tree, 12
truth assignment, 21
TSQL, 609
tup_create, 514
tup_destroy, 515
tuple, 29
free, 33
generalized, 94, 95
named perspective, 32
with placeholders, 94
unnamed perspective, 32
tuple calculus, 74, 101
vs. domain calculus, 39
tuple generating dependency (tgd)
full, 218
tuple rewriting, 107
tuple substitution, 115, 135
tuple-generating dependency (tgd), 217–228
Turing machine, 15
linear bounded, 196
two-element instances
vs. fds and mvds, 189
two-way automata, 15
type in object-oriented database, 548
type safety, 563, 565, 567, 573
typed dependency, 159
vs. faithful, 233
vs. untyped, 217
typed inclusion dependency, 211
typed restricted SPJ algebra, 64, 67, 156
typed tableau, 44
query, 64, 121, 136
types(C), 548
unary inclusion dependency (uind), 207, 210–211
undecidability
of properties of datalog queries, 306, 308
of properties of first-order queries, 105, 122–126
of implication for embedded dependencies, 220, 234
of implication for emvds, 220
of implication of fds and inds, 199, 211
underlying domain, 74
unfounded set, 413
unification, 293
uniform containment, 304
union, 33, 37, 38
in conjunctive queries, 61–64
in Microsoft Access, 153
in relational algebra, 71
in rule-based conjunctive queries, 62
in SQL, 146
union-of-tableaux query, 63, 64, 139
unique name axioms, 26
unique role assumption, 261
unirelational dependency, 217
unit clause, 288
universal quantication
removing, 83
and safe-range, 85
vs. existential quantication, 74
universal relation
assumption (URA), 137, 266
pure, 126, 130, 242, 252
weak, 261264, 262
interface, 266
scheme assumption (URSA), 260
unique role assumption, 261
universe, 23
universe of discourse, 77
Unix, 155
unknown value, 488
unnamed perspective
on relations, 32
projection, 54
relational algebra, 71
selection, 53
SPC algebra, 52–56, 54
tuple, 32
vs. named perspective, 32
unnest, 518
unrestricted instance, 197
unrestricted interpretation, 78
unrestricted logical implication, 197–202, 219
vs. finite, 197
vs. functional dependency, 199
vs. inclusion dependency, 199
vs. join dependency, 199
unrestricted relational algebra, 103
unrestricted semantics of relational calculus, 78
untyped dependency, 192
vs. typed, 217
untyped relational algebra, 475
update
in SQL, 149–150
statistical properties, 106
vs. revision, 599–600
vs. query, 28
update in SQL, 149
update anomalies, 162, 241
and incomplete information, 162
and redundancy, 162
update language, 580–583
completeness, 583
IDM transaction, 580–582, 615–617
deletion, 615
insertion, 615
modification, 615
rule-based, 582–583
datalog¬, 582
Dynamic Logic Programming (DLP), 583, 613
LDL, 583
SQL, 580
URA, 126, 130, 137
pure, 242, 252
weak, 261–264, 262
URSA, 260
user view. See view.
V-relation, 513
val(O), 547
valid, 21
valid model semantics, 409
valid time, 607
valuation, 41
as syntactic expression, 45
of tableau, 43
vs. substitution, 116
value equality, 557
var, 33, 41
variable, 33
anonymous, 39, 44
bound occurrence, 45, 75
free occurrence, 45, 75
variable assignment, 24
variable substitution
rewrite rule, 46, 83
view, 4
complement, 583
and dependencies, 222
maintenance, 586–588, 586
materialized, 51
object-oriented database, 571
in QBE, 151
and query composition, 51–52
in SQL, 149
update, 586, 589–593
complement of views, 591–593
virtual, 51
weak instance, 262
weak universal model, 502
weak universal relation assumption (URA), 261–264, 262
well-formed formula
conjunctive calculus, 45
relational calculus, 74–75
well-founded semantics, 385–397
where in SQL vs. selection, 144
while, 344–346, 345
while queries, 342, 367
normal form, 452–453
on ordered databases, 447
pspace complexity, 437
vs. xpoint queries, 453
while+, 346, 346–347
while(+) + W, 456
while(+) + W, 454
whileN, 467
completeness on ordered databases, 468
whilenew, 469
completeness, 470–473
not determinate-complete, 474
well-behaved, 470
whileobj, 559
whileuty, 475
completeness, 478
well-behaved, 477
witness operator, 454–456
word problem for monoids, 199
yes-no query, 42
0-1 law, 441
for CALC, 441–444
for while, 444–446