
Gerstner Laboratory

for Intelligent Decision Making and Control


Czech Technical University in Prague

Series of Research Reports


Report No:

GL 126/01

Ontologies
Description and Applications
Marek Obitko
obitko@labe.felk.cvut.cz
http://cyber.felk.cvut.cz/gerstner/reports/GL126.pdf

Gerstner Laboratory, Department of Cybernetics


Faculty of Electrical Engineering, Czech Technical University
Technická 2, 166 27 Prague 6, Czech Republic
tel. (+420-2) 2435 7421, fax: (+420-2) 2492 3677
http://cyber.felk.cvut.cz/gerstner

Prague, 2001
ISSN 1213-3000

Ontologies: Description and Applications


Marek Obitko
obitko@labe.felk.cvut.cz
February 22, 2001

Abstract
The word ontology has gained considerable popularity within the AI
community. An ontology is usually viewed as a high-level description consisting of concepts that organize the upper parts of a knowledge base.
However, the meaning of the term ontology tends to be somewhat vague, as the
term is used in different ways. In this paper we will attempt to clarify
the meaning of ontology, including the philosophical view, and show
why ontologies are useful and important.
We will give an overview of ontology structures in several particular
systems. A field proposed within these ontological efforts, ontology engineering, will also be described.
The usage of ontologies in several particular areas will be discussed. These
include systems and ideas to support knowledge base sharing and reuse,
both for computers and humans; ontology based communication in multi-agent systems; applications of ontologies for natural language processing;
applications in document search and in the enrichment of knowledge bases,
both particularly in the World Wide Web environment; and the construction
of educational systems, particularly intelligent tutoring systems.

Contents
1 Introduction
  1.1 Motivation
  1.2 Philosophical View

2 What is an Ontology?
  2.1 Common Definitions
    2.1.1 Ontology as a Philosophical Term
    2.1.2 Ontology as a Specification of Conceptualization
    2.1.3 Ontology as a Representational Vocabulary
    2.1.4 Ontology as a Body of Knowledge
  2.2 Other Ontology Definitions

3 Ontology Structure
  3.1 CYC
  3.2 Russell & Norvig's General Ontology
  3.3 Ontology Engineering
    3.3.1 Structure of Usage
    3.3.2 Ontology Engineering Subfields

4 Using Ontologies
  4.1 Top-Level Ontology
  4.2 Knowledge Sharing and Reuse
    4.2.1 KIF
    4.2.2 Ontolingua
    4.2.3 Collaboration
    4.2.4 Particular Ontologies Reuse
  4.3 Communication in Multi-Agent Systems
    4.3.1 FIPA Agent Management Model
    4.3.2 Ontology Service by FIPA
    4.3.3 Ontologies Relationships
    4.3.4 FIPA Knowledge Model and Meta-Ontology
  4.4 Natural Language Understanding
    4.4.1 CYC NLP
    4.4.2 WordNet Database
  4.5 Document Search and Ontologies
    4.5.1 OntoSeek
    4.5.2 WebKB
    4.5.3 Knowledge Representation Techniques
    4.5.4 Document Enrichment
  4.6 Educational Systems
    4.6.1 EON
    4.6.2 ABITS
    4.6.3 Other Proposals

5 Conclusion
1 Introduction

The word ontology has gained considerable popularity within the AI community. An ontology is
usually viewed as a high-level description consisting of concepts that organize the upper parts
of a knowledge base. However, the meaning of the term ontology tends to be somewhat vague, as
the term is used in different ways. In this paper we will attempt to clarify the meaning of
ontology and show why ontologies are useful and important.
We will discuss the usage of ontologies in several particular areas, such as knowledge base
reuse, knowledge sharing, communication in multi-agent systems, and applications of ontologies
for the WWW, for natural language processing, and for intelligent tutoring systems.

1.1 Motivation

In the history of AI research, we can identify two types of research [31, 8]. One is form-oriented
research (mechanism theories) and the other is content-oriented research (content theories).
The former deals with logic and knowledge representation, while the latter deals with the content of
knowledge. Apparently, the former has dominated AI research to date. Recently, however,
content-oriented research has begun to gather much attention, because many real-world
problems, such as knowledge reuse, facilitation of agent communication, media integration through understanding, large-scale knowledge bases, and so on, require not only
advanced theories and reasoning methods, but also sophisticated treatment of the content of
knowledge.
Formal theories such as predicate logic provide us with a powerful tool to guarantee
sound reasoning and thinking. They even enable us to discuss the limits of our reasoning in a
principled way. However, they cannot answer questions such as what knowledge we
should have for solving given problems, what knowledge is at all, what properties a specific
piece of knowledge has, and so on.
Sometimes, the AI community gets excited by some mechanisms such as neural nets, fuzzy
logic, genetic algorithms, constraint propagation etc. These mechanisms are proposed as the
secret of making intelligent machines. At other times, it is realized that, however wonderful
the mechanism, it cannot do much without a good content theory of the domain on which it
is to work. Moreover, we often recognize that once a good content theory is available, many
different mechanisms might be used equally well to implement effective systems, all using
essentially the same content.
The importance of content-oriented research is being recognized more and more nowadays.
Unfortunately, it seems that there are currently no widely recognized, sophisticated methodologies for
content-oriented research; until recent years, the major results were only the development of knowledge bases. The reasons for this can be [31]:
content-oriented research tends to be ad hoc
there is no methodology that enables research results to be accumulated
It is necessary to overcome these difficulties in content-oriented research. Ontologies
are proposed for that purpose. Ontology engineering, as proposed e.g. in [31], is a research
methodology which gives us the design rationale of a knowledge base, a kernel conceptualization of
the world of interest, and strict definitions of the basic meanings of basic concepts, together with
sophisticated theories and technologies enabling the accumulation of knowledge which is indispensable for
modeling the real world.

Interest in ontologies has also grown as researchers and system developers have become
more interested in reusing or sharing knowledge across systems. Currently, one key impediment to sharing knowledge is that different systems use different concepts and terms for
describing domains. These differences make it difficult to take knowledge out of one system
and use it in another. If we could develop ontologies that could be used as the basis for multiple systems, they would share a common terminology that would facilitate sharing and reuse.
Developing such reusable ontologies is an important goal of ontology research. Similarly, if
we could develop tools that would support merging ontologies and translating between them,
sharing would be possible even between systems based on different ontologies.

1.2 Philosophical View

The term ontology was taken from philosophy. According to Webster's Dictionary, an ontology
is
a branch of metaphysics relating to the nature and relations of being
a particular theory about the nature of being or the kinds of existence
Ontology (the science of being) is a word, like metaphysics, that is used in many different
senses. It is sometimes considered to be identical to metaphysics, but we prefer to use it in a
more specific sense, as that part of metaphysics that specifies the most fundamental categories
of existence, the elementary substances or structures out of which the world is made. Ontology
will thus analyze the most general and abstract concepts or distinctions that underlie every
more specific description of any phenomenon in the world, e.g. time, space, matter, process,
cause and effect, system.
Recently, the term ontology has been taken up by researchers in Artificial Intelligence,
who use it to designate the building blocks out of which models of the world are made. An
agent (e.g. an autonomous robot) using a particular model will only be able to perceive the
part of the world that its ontology is able to represent. In this sense, only the things in
its ontology can exist for that agent. In that way, an ontology becomes the basic level of
a knowledge representation scheme. An example is a set of link types for a semantic network
representation, based on a set of ontological distinctions such as changing vs. invariant and
general vs. specific.

2 What is an Ontology?

The term ontology is used in many different ways. In this section we will discuss what an
ontology is, based on several definitions that are currently used.

2.1 Common Definitions

The most widespread definitions of ontology are given below.


1. Ontology is a term in philosophy and its meaning is theory of existence.
2. Ontology is an explicit specification of conceptualization [21].
3. Ontology is a theory of vocabulary or concepts used for building artificial systems [31].
4. Ontology is a body of knowledge describing some domain (e.g. a common sense knowledge domain in CYC [45]).
Definition 1 is radically different from all the others (including the additional ones discussed below). We will shortly discuss some implications of its meaning for the definition of
ontology for AI purposes. The second definition is generally proposed as a definition of
what an ontology is for the AI community. It may be classified as syntactic, but its precise
meaning depends on the understanding of the terms specification and conceptualization.
The third definition is a proposal for a definition within the knowledge engineering community.
The last, fourth definition differs from the previous ones: it views the ontology as an
inner body of knowledge, not as the way to describe the knowledge.
Although these definitions are compact, they are not sufficient for an in-depth understanding
of what an ontology is. We will try to give more comprehensive definitions and insights.
2.1.1 Ontology as a Philosophical Term

Following [24], we will use the convention that an uppercase initial letter distinguishes
Ontology as a philosophical discipline from other usages of this term. Ontology is a
branch of philosophy that deals with the nature and the organization of reality. It tries to
answer questions like "what is existence?" and "what properties can explain existence?".
Aristotle defined Ontology as the science of being as such. Unlike the special sciences,
each of which investigates a class of beings and their determinations, Ontology regards "all
the species qua being and the attributes that belong to it qua being" (Aristotle, Metaphysics,
IV, 1). In this sense Ontology tries to answer the question "what is being?" or, in a
meaningful reformulation, "what are the features common to all beings?".
This is what is today called General Ontology, in contrast with various Special or Regional Ontologies (e.g. Biological, Social). From this, Formal Ontology is defined as an area
that has to determine the conditions of the possibility of the object in general and the individualization of the requirements that every object's constitution has to satisfy. According
to [24], Formal Ontology can be defined as "the systematic, formal, axiomatic development of
the logic of all forms and modes of being". Hence, Formal Ontology is not concerned so
much with the existence of certain objects, but rather with the rigorous description of their forms
of being, i.e. their structural features. In practice, Formal Ontology can be intended as the
theory of the distinctions that can be applied independently of the state of the world, i.e.
the distinctions:
among the entities of the world (physical objects, events, regions, ...)
among the meta-level categories used to model the world (concept, property, quality,
state, role, part, ...)
In this sense, Formal Ontology, as a discipline, may be relevant to both Knowledge Representation and Knowledge Acquisition [24].
2.1.2 Ontology as a Specification of Conceptualization

The second definition of ontology mentioned above, "explicit specification of conceptualization", is briefly described in [20]. The definition comes from the work [22], where the ontology is
used in the context of knowledge sharing. According to Thomas Gruber, explicit specification
of conceptualization means that an ontology is a description (like a formal specification of
a program) of the concepts and relationships that can exist for an agent or a community of
agents. This definition is consistent with the usage of ontology as a set of concept definitions,
but is more general.
In this sense, ontology is important for the purpose of enabling knowledge sharing and
reuse. An ontology is in this context a specification used for making ontological commitments.
Practically, an ontological commitment is an agreement to use a vocabulary (i.e. ask queries
and make assertions) in a way that is consistent (but not complete) with respect to the theory
specified by an ontology. Agents are then built that commit to ontologies and ontologies are
designed so that the knowledge can be shared with and among these agents.
A body of knowledge is based on a conceptualization: the objects, concepts, and
other entities that are assumed to exist in some area of interest and the relationships that
hold among them. A conceptualization is an abstract, simplified view of the world that
we wish to represent for some purpose. Every knowledge base, knowledge-based system, or
knowledge-level agent is committed to some conceptualization, explicitly or implicitly. For
these systems, what exists is that which can be represented. When the knowledge of a
domain is represented in a declarative formalism, the set of objects that can be represented is
called the universe of discourse. This set of objects and the describable relationships among
them, are reflected in the representational vocabulary with which a knowledge-based program
represents knowledge. Thus, in the context of AI, we can describe the ontology of a program
by defining a set of representational terms. In such an ontology, definitions associate the
names of entities in the universe of discourse (e.g. classes, relations, functions, or other
objects) with human-readable text describing what the names mean, and formal axioms that
constrain the interpretation and well-formed use of these terms. Formally, it can be said that
an ontology is a statement of a logical theory [20].
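As a minimal sketch of what such a definition can look like (the vocabulary below is invented for illustration and written in a KIF-like notation), a term is paired with human-readable documentation and a formal axiom constraining its use:

;; term: component-of -- "?part is a physically connected constituent of ?whole"
;; axiom constraining the interpretation of the term: nothing is a component of itself
(forall (?part ?whole)
  (=> (component-of ?part ?whole)
      (not (= ?part ?whole))))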
Ontologies are often equated with taxonomic hierarchies of classes without class definitions
and the subsumption relation. Ontologies need not be limited to these forms. Ontologies
are also not limited to conservative definitions, that is, definitions in the traditional logic sense
that only introduce terminology and do not add any knowledge about the world. To specify
a conceptualization, one needs to state axioms that do constrain the possible interpretations
for the defined terms.
Pragmatically, a common ontology defines the vocabulary with which queries and assertions are exchanged among agents. The agents sharing a vocabulary need not share a
knowledge base. An agent that commits to an ontology is not required to answer all queries
that can be formulated in the shared vocabulary. In short, a commitment to a common ontology is a guarantee of consistency, but not completeness, with respect to queries and assertions
using the vocabulary defined in the ontology.
2.1.3 Ontology as a Representational Vocabulary

The third definition of ontology proposed above says that it is in fact a representational vocabulary [8, 31]. The vocabulary can be specialized to some domain or subject matter. More
precisely, it is not the vocabulary as such that qualifies as an ontology, but the conceptualization that the terms in the vocabulary are intended to capture. Thus, translating the
terms in an ontology from one language to another, for example from Czech to English, does
not change the ontology conceptually. In engineering design, one might discuss the ontology
of an electronic-devices domain, which might include vocabulary that describes conceptual
elements (transistors, operational amplifiers, and voltages) and the relations between
these elements (operational amplifiers are a type-of electronic device, and transistors are
a component-of operational amplifiers). Identifying such a vocabulary and the underlying
conceptualization generally requires careful analysis of the kinds of objects and relations that
can exist in the domain.
The term ontology is sometimes used to refer to a body of knowledge describing some
domain (see below), typically a common sense knowledge domain, using a representational
vocabulary. For example, CYC [45] often refers to its knowledge representation of some area
of knowledge as its ontology.
In other words, the representation vocabulary provides a set of terms with which one can
describe the facts in some domain, while the body of knowledge using that vocabulary is
a collection of facts about a domain. However, this distinction is not as clear as it might
first appear. In the electronic-device example, that transistor is a component-of operational
amplifier or that the latter is a type-of electronic device is just as much a fact about its
domain as a CYC fact about some aspect of space, time or numbers. The distinction is that
the former emphasizes the use of ontology as a set of terms for representing specific facts in
an instance of the domain, while the latter emphasizes the view of ontology as a general set
of facts to be shared.
2.1.4 Ontology as a Body of Knowledge

Sometimes, ontology is defined as a body of knowledge describing some domain, typically
a common sense knowledge domain, using a representation vocabulary as described above.
In this case, an ontology is not only the vocabulary, but the whole upper knowledge base
(including the vocabulary that is used to describe this knowledge base).
A typical example of this usage of the definition is the project CYC (http://www.cyc.com/, [45])
that defines its knowledge base as an ontology for any other knowledge based system. CYC
is the name of a very large, multi-contextual knowledge base and inference engine. The
development of CYC started during the early 1980s headed by Douglas Lenat. CYC is an
attempt to do symbolic AI on a massive scale. It is neither based on numerical methods
such as statistical probabilities, nor is it based on neural networks or fuzzy logic. All of the
knowledge in CYC is represented declaratively in the form of logical assertions. CYC contains
over 400,000 significant assertions [45], which include simple statements of fact, rules about
what conclusions to draw if certain statements of fact are satisfied (true), and rules about how
to reason with certain types of facts and rules. New conclusions are derived by the inference
engine using deductive reasoning.
The CYC team doesn't believe there is any shortcut toward being intelligent or creating
an artificial intelligence based agent. Addressing the need for a large body of knowledge with
content and context may only be done by manually organizing and collating information.
This knowledge includes heuristic, rule-of-thumb problem solving strategies, as well as facts
that can only be known to a machine if it is told.
Much of the useful common sense knowledge needed for life is prescientific and has therefore not been analyzed in detail. Thus a large part of the work of the CYC project is to
formalize common relationships and fill in the gaps between the highly systematized knowledge used by specialists.
It is necessary to divide such a large knowledge base into smaller pieces to enable
reasoning in reasonable time. Because of this, the CYC knowledge base uses a special context
space [29] that is divided along 12 dimensions into smaller pieces (contexts) that have something
in common and can be used to reason about a specific problem in that context. It is possible
to lift assertions from one context to another when the problem requires it.
The CYC common sense knowledge can be used as the body of a knowledge base for any
knowledge-intensive system. In this sense, this body of knowledge can be viewed as an
ontology of the knowledge base of the system.

2.2 Other Ontology Definitions

As we can see from the above discussions, the exact definition of ontology is not obvious;
however, it can be seen that the definitions have much in common. In addition to the above
definitions there are many other proposals for ontology definitions. Some other definitions
collected from [24] are:
1. informal conceptual system
2. formal semantic account
3. representation of a conceptual system via a logical theory
(a) characterized by specific formal properties
(b) characterized only by its specific purposes
4. vocabulary used by a logical theory
5. (meta-level) specification of a logical theory
Definitions 1 and 2 conceive an ontology as a conceptual semantic entity, either formal
or informal, while according to interpretations 3, 4 and 5 it is a specific syntactic object.
According to interpretation 1, an ontology is the conceptual system which may be assumed
to underlie a particular knowledge base. Under interpretation 2, instead, the ontology that
underlies a knowledge base is expressed in terms of suitable formal structures at the semantic
level. In both cases, we may say that the ontology of knowledge base A is different from
that of knowledge base B.
Under interpretation 3, an ontology is nothing else than a logical theory. The issue is
whether such a theory needs to have particular formal properties in order to be an ontology
or, rather, whether it is the intended purpose which lets us consider a logical theory as an
ontology. The latter position can be supported by arguing that an ontology is an annotated
and indexed set of assertions about something: leaving off the annotations and indexing, this
is a collection of assertions, which in logic is called a theory (Pat Hayes' statement in [24]).
According to interpretation 4, an ontology is not viewed as a logical theory, but just as the
vocabulary used by a logical theory. Such an interpretation collapses into 3.a if an ontology
is thought of as a specification of a vocabulary consisting of a set of logical definitions. We
may anticipate that Gruber's interpretation (specification of conceptualization) collapses
into 3.a as well when a conceptualization is intended as a vocabulary.
Finally, under interpretation 5, an ontology is seen as a specification of a logical theory
in the sense that it specifies the architectural components (or primitives) used within a
particular domain theory.

3 Ontology Structure

From the overview above we can see that an ontology can be perceived in basically two
ways. The first is an ontology as a representational vocabulary, where the
conceptual structure of terms should remain unchanged during translation. The other, which is discussed in this section, is an ontology as a body of knowledge describing
a domain, in particular a common sense domain.
An ontology can be divided in several ways. We will describe some of the proposals here.
Particularly interesting is the so-called upper ontology, which is intended to serve as the upper
part of the ontology of practically all knowledge based systems. Some of the divisions
presented here are intended to be merged to form an upper ontology standard in
the IEEE Standard Upper Ontology Study Group [39]. On pages linked from [39] there are
many other examples that could be used as some kind of upper ontology.
Figure 1: How ontologies differ in their analyses of the most general concepts [8]. (The figure compares the top levels of the CYC, WordNet, GUM, and Sowa ontologies, each rooted at a most general concept such as Thing.)
It is interesting that many authors agree that the upper class of the ontology is thing (when
a hierarchy of classes is used to describe the ontology; ontologies, however, need not be limited
to class hierarchies as taxonomies are); however, even at the second level they do not agree on
the separation, as can be seen in figure 1. The initiative [39] tries to unify these views.

3.1 CYC

The ontology of CYC is based on several terms that form the fundamental vocabulary of the
CYC knowledge base. The universal set is #$Thing (see figure 1; we use the notation of the
CYC language, which is explained in [10]). It is the set of everything.
Every CYC constant in the knowledge base is a member of this collection. In the prefix
notation of the language CycL [10], we express that fact as (#$isa CONST #$Thing). Thus,
too, every collection in the knowledge base is a subset of the collection #$Thing. In CycL,
that fact is expressed as (#$genls COL #$Thing).
The set #$Thing has some subsets, such as PathGeneric, Intangible, Individual, SimpleSegmentOfPath, PathSimple, MathematicalOrComputationalThing, IntangibleIndividual,
Product, TemporalThing, SpatialThing, Situation, EdgeOnObject, FlowPath, ComputationalObject, Microtheory, plus about 1500 more public subsets and about 13600 unpublished
subsets.
#$Individual is the collection of all things that are not sets or collections. Thus,
#$Individual includes (among other things) physical objects, temporal subabstractions of
physical objects, numbers, relations, and groups (#$Group). An element of #$Individual
may have parts or a structure (including parts that are discontinuous), but no instance of
#$Individual can have elements or subsets.
#$Collection is the collection of all CYC collections. CYC collections are natural kinds
or classes, as opposed to mathematical sets. Their elements have some common attribute(s).
Each CYC collection is like a set in so far as it may have elements, subsets, and supersets, and
may not have parts or spatial or temporal properties. Sets, however, differ from collections
in that a mathematical set may be an arbitrary set of things which have nothing in common
(#$Set-Mathematical). In contrast, the elements of a collection will all have in common
some feature(s), some intensional qualities. In addition, two instances of #$Collection can
be co-extensional (i.e. have all the same elements) without being identical, whereas if two
arbitrary sets had the same elements, they would be considered equal.
#$Individual and #$Collection are disjoint collections. No CYC constant can be an
instance of both.
#$Predicate is the set of all CYC predicates. Each element of #$Predicate is a truth-functional relationship in CYC which takes some number of arguments. Each of those arguments must be of some particular type. Informally, one can think of elements of #$Predicate
as functions that always return either true or false. More formally, when an element of
#$Predicate is applied to the legal number and type of arguments, an expression is formed
which is a well-formed formula (wff) in CycL. Such expressions are called atomic formulas if
they contain variables, or ground atomic formulas (gaf) if they contain no variables.
#$isa:<#$ReifiableTerm> <#$Collection> expresses the ISA relationship. (#$isa EL
COL) means that EL is an element of the collection COL. CYC knows that #$isa distributes
over #$genls. That is, if one asserts (#$isa EL COL) and (#$genls COL SUPER), CYC
will infer that (#$isa EL SUPER). Therefore, in practice one only manually asserts a small
fraction of the #$isa assertions; the vast majority are inferred automatically by CYC.
#$genls:<#$Collection> <#$Collection> expresses a similar relationship for collections
(generalization). (#$genls COL SUPER) means that SUPER is one of the supersets of COL.
Both arguments must be elements of #$Collection. Again, as with #$isa, CYC knows
that #$genls is transitive; therefore, in practice one only manually asserts a small fraction of
the #$genls assertions, since the rest is inferred automatically.
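For illustration, the following hypothetical CycL-style assertions (the constants #$Fido, #$Dog and #$Animal are chosen for this example and need not match the actual CYC vocabulary) show how #$isa distributes over #$genls:

(#$isa #$Fido #$Dog)        ;; Fido is an element of the collection #$Dog
(#$genls #$Dog #$Animal)    ;; #$Dog is a subcollection (subset) of #$Animal
;; from the two assertions above, CYC infers automatically:
(#$isa #$Fido #$Animal)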
More details about the structure of the CYC ontology and about how the CYC knowledge
base is constructed can be found at http://www.cyc.com.

3.2 Russell & Norvig's General Ontology

Yet another view of a general ontology structure is presented in Russell & Norvig's book [38].
Every category of their ontology (see figure 2) is discussed in detail with example axioms.
An example of this ontology in KIF [18] can be found at
http://ltsc.ieee.org/suo/ontologies/Russell-Norvig.txt.


Figure 2: Russell & Norvig's general ontology structure [38]. (The tree is rooted at Anything; categories shown include AbstractObjects, Sets, Numbers, Categories, RepresentationalObjects, Sentences, Measurements, Times, Weights, Events, Intervals, Moments, Places, Things, PhysicalObjects, Processes, Stuff, Animals, Agents, Humans, Solid, Liquid, and Gas.)

3.3 Ontology Engineering

Ontology engineering is a field in artificial intelligence or computer science that is concerned
with ontology creation and usage. Report [31], which proposes and comments on this field, declares
that the ultimate purpose of ontology engineering should be to provide a basis for building
models of all things in which computer science is interested.
3.3.1 Structure of Usage

According to [31], an ontology can be divided into the following subcategories from the knowledge
reuse and ontology engineering point of view. This is a structure of ontologies
from the point of view of their usage rather than a division of one general ontology. Some examples
are included.
Workplace Ontology
This is an ontology for workplace which affects task characteristics by specifying several
boundary conditions which characterize and justify problem solving behaviour in the
workplace. Workplace and task ontologies collectively specify the context in which
domain knowledge is intended and used during the problem solving.
Examples from circuit troubleshooting: fidelity, efficiency, precision, high reliability.
Task Ontology
Task ontology is a system of vocabulary for describing the problem solving structure of
all existing tasks domain-independently. It does not cover the control structure. It
covers components or primitives of the unit inferences taking place while performing tasks.
Task knowledge in turn specifies domain knowledge by giving roles to each object and
to the relations between them.
Examples from scheduling tasks: schedule recipient, schedule resource, goal, constraint,
availability, load, select, assign, classify, remove, relax, add.

Domain ontology
Domain ontology can be either task dependent or task independent. Task independent
ontology usually relates to activities of objects.
Task-dependent ontology
A task structure requires not all the domain knowledge but some specific domain
knowledge in a certain specific organization. This special type of domain knowledge
can be called task-domain ontology because it depends on the task.
Examples from job-shop scheduling: job, order, line, due date, machine availability,
tardiness, load, cost.
Task-independent ontology
Activity-related ontology

Object ontology. This ontology covers the structure, behaviour and
function of the object.
Examples from circuit boards: component, connection, line, chip, pin,
gate, bus, state, role.
Activity ontology.
Examples from enterprise ontology: use, consume, produce, release, state,
resource, commit, enable, complete, disable.

Activity-independent ontology

Field ontology. This ontology is related to theories and principles which


govern the domain. It contains primitive concepts appearing in the theories
and relations, formulas, and units constituting the theories and principles.
Units ontology.
Examples: mole, kilogram, meter, ampere, radian.
Engineering mathematics ontology.
Examples: linear algebra, physical quantity, physical dimension, unit of
measure, scalar quantity, physical components.

General or Common ontology
Examples: things, events, time, space, causality or behaviour, function, etc.
3.3.2 Ontology Engineering Subfields

We can also divide the ontology or ontologies from the point of view of ontology engineering
as a field. The subjects which should be covered by ontology engineering are demonstrated
in [31]. It includes basic issues in philosophy, knowledge representation, ontology design,
standardization, EDI, reuse and sharing of knowledge, media integration, etc., which are the
essential topics in future knowledge engineering. Of course, they should be constantly
refined through further development of ontology engineering.
Basic Subfield
Philosophy (Ontology, Meta-mathematics)
Ontology which philosophers have discussed since Aristotle is discussed as well as
logic and meta-mathematics.

Scientific philosophy
Investigation of Ontology from the physics point of view, e.g., time, space, process, causality, etc., is made.
Knowledge representation
Basic issues on knowledge representation, especially on representation of ontological stuff, are discussed.
Subfield of Ontology Design
General(Common) ontology
General ontologies such as time, space, process, causality, part/whole relation, etc.
are designed. Both in-depth investigation on the meaning of every concept and
relation and on formal representation of ontologies are discussed.
Domain ontologies
Various ontologies in, say, Plant, Electricity, Enterprise, etc. are designed.
Subfield of Common Sense Knowledge
Parallel to general ontology design, common sense knowledge is investigated and
collected and knowledge bases of common sense are built.
Subfield of Standardization
EDI (Electronic Data Interchange) and data element specification
Standardization of primitive data elements which should be shared among people
for enabling full automatic EDI.
Basic semantic repository
Standardization of primitive semantic elements which should be shared among
people for enabling knowledge sharing.
Conceptual schema modeling facility (CSMF)
Components for qualitative modeling
Standardization of functional components such as pipe, valve, pump, boiler, register, battery, etc. for qualitative model building.
Subfield of Data or Knowledge Interchange
Translation of ontology
Methodologies for translating one ontology into another are developed.
Database transformation
Transformation of data in a database into another with a different conceptual schema.
Knowledge base transformation
Transformation of a knowledge base into another built based on a different ontology.
Subfield of Knowledge Reuse
Task ontology
Design of ontology for describing and modeling human ways of problem solving.


T-domain ontology
Task-dependent domain ontology is designed under some specific task context.
Methodology for knowledge reuse
Development of methodologies for knowledge reuse using the above two ontologies.
Subfield of Knowledge Sharing
Communication protocol
Development of communication protocols between agents which can behave cooperatively under a specified goal.
Cooperative task ontology
Task ontology design for cooperative communication
Subfield of Media Integration
Media ontology
Ontologies of the structural aspects of documents, images, movies, etc. are designed.
Common ontologies of content of the media
Ontologies common to all media such as those of human behavior, story, etc. are
designed.
Media integration
Development of a meaning representation language for media, and media integration
through understanding of media representations, are carried out.
Subfield of Ontology Design Methodology
Methodology
Support environment
Subfield of ontology evaluation
Evaluation of the designed ontologies is made on real world problems by forming
a consortium.

4 Using Ontologies

From the above, we can see that an ontology can describe the upper part of a knowledge base.
The distinction between an ontology and a knowledge base is that an ontology provides the basic
structure or armature around which a knowledge base can be built. An ontology provides
a set of concepts and terms for describing some domain, while a knowledge base uses those
terms to represent what is true about some real or hypothetical world. Thus, a medical
ontology might contain definitions for terms such as leukemia or terminal illness, but it
would not contain assertions that a particular patient has some disease, although a knowledge
base might.
We can use the terms provided by the domain ontology to assert specific propositions
about a domain or a situation in a domain. For example, in the electronic-device domain,
we can represent a fact about a specific circuit, such as "circuit 35 has transistor 22 as a
component", where circuit 35 is an instance of the concept circuit and transistor 22 is an
instance of the concept transistor [8]. Another example, blocks on a table by Genesereth
and Nilsson, is discussed in depth in [24] and [23], including a discussion of possible ontologies.
Once we have the basis for representing propositions, we can also represent knowledge
involving propositional attitudes, such as hypothesize, believe, expect, hope, desire, fear, etc.
Propositional attitude terms take propositions as arguments. For example, for the electronic-device domain, we can assert that "the diagnostician hypothesizes or believes that
part 2 is broken", or "the designer expects or desires that the power plant has an output of 20
megawatts" [8].
Thus, an ontology can represent beliefs, goals, hypotheses, and predictions about a domain, in addition to simple facts. The ontology also plays a role in describing things such as plans
and activities, because these also require specification of world objects and relations. Propositional attitude terms are also part of a larger ontology of the world, useful especially in
describing the activities and properties of the special class of objects in the world called
intensional entities, for example agents like humans, who have mental states.

4.1 Top-Level Ontology

Ontologies range in abstraction, from very general terms that form the foundation for knowledge representation in all domains, to terms that are restricted to specific knowledge domains. For example, space, time, parts, and subparts are terms that apply to practically all
domains; malfunction applies to engineering or biological domains; and hepatitis applies only
to medicine.
Even in cases where a task might seem to be quite domain-specific, knowledge representation might include an ontology that describes knowledge at higher levels of generality. For
example, solving problems in the domain of turbines might require knowledge expressed using domain-general terms such as flows or causality. Such general-level descriptive terms are
called the upper-ontology or top-level ontology [8]. There are open research issues about the
correct ways to analyze knowledge at the upper level. Illustrations of different upper parts of
ontologies are shown in figure 1. For example, many ontologies have thing or entity as their
root class. Figure 1 illustrates that thing and entity start to diverge at the next level. Some
of the differences arise because not all of these ontologies are intended to be general-purpose
tools, or even explicitly to be ontologies.
Although differences exist among ontologies, general agreement exists between ontologies
on many issues [8]:
There are objects in the world.
Objects have properties or attributes that can take values.
Objects can exist in various relations with other objects.
Properties and relations can change over time.
There are events that occur at different time instances.
There are processes in which objects participate and that occur over time.
The world and its objects can be in different states.
Events can cause events or states as effects.
Objects can have parts.
The representational repertoire of objects, relations, states, events, and processes does not
say anything about which classes of these entities exist. The modeler of the domain makes
these commitments.

4.2 Knowledge Sharing and Reuse

To support the sharing and reuse of formally represented knowledge among AI systems, it is
useful to define the common vocabulary in which shared knowledge or ontology is represented.
There have been several attempts to create an engineering framework for constructing ontologies
to support knowledge sharing and knowledge base reuse.
4.2.1 KIF

Michael R. Genesereth and Richard E. Fikes describe KIF (Knowledge Interchange Format),
an enabling technology that facilitates expressing domain factual knowledge using a formalism
based on augmented predicate calculus [18]. Knowledge Interchange Format is a computer-oriented language for the interchange of knowledge among disparate programs. Some important
properties according to [18] are:
it has declarative semantics, i.e. the meaning of expressions in the representation can
be understood without appeal to an interpreter for manipulating those expressions
it is logically comprehensive, i.e. it provides for the expression of arbitrary sentences in
the first-order predicate calculus
it provides for the representation of knowledge about the representation of knowledge
it provides for the representation of nonmonotonic reasoning rules
it provides for the definition of objects, functions, and relations
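As a brief sketch of what such knowledge can look like in KIF (the predicate and constant names are chosen for this example, reusing the electronic-device vocabulary discussed earlier, and are not taken from [18]):

;; every operational amplifier is an electronic device
(forall (?x)
  (=> (operational-amplifier ?x) (electronic-device ?x)))
;; a ground fact: transistor 22 is a component of circuit 35
(component-of transistor-22 circuit-35)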
4.2.2 Ontolingua

Thomas R. Gruber has proposed a language called Ontolingua to help construct portable ontologies. In [22], an ontology, consisting of definitions of classes, relations, functions, and other objects, is
proposed as a specification of a representational vocabulary for a shared domain of discourse.
The paper describes a mechanism for defining ontologies that are portable over representation systems. Definitions written in a standard format for predicate calculus are translated
by a system called Ontolingua into specialized representations, including frame-based systems as well as relational languages (see figure 3). This allows researchers to share and reuse
ontologies, while retaining the computational benefits of specialized implementations.
The architecture of Ontolingua is shown in figure 3. Ontolingua enables translation
from a common ontology into several others. The same ontology can be used for different
purposes in different systems. Instances of common representation idioms are recognized and
transformed into a simpler, equivalent form using the second-order vocabulary from the Frame
Ontology. These and other transformations result in a canonical form, which is given to back-end translators that generate system-specific output. A pure KIF output is also available to
be given to other translators developed for KIF, such as KIF-to-Prolog.

Figure 3: Translation architecture for Ontolingua translation from a common ontology into several others using the canonical form. (Ontolingua definitions are parsed and syntax-checked into KIF sentences; representation idioms are recognized using the Frame Ontology; the result is canonicalized into a canonical form that is handed to back-end translators, such as the LOOM translator, the Epikit translator, other Ontolingua translators, and a pure KIF generator whose output can be given to other KIF translators.)
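A minimal sketch of an input definition for this architecture, loosely following the Ontolingua syntax of [22] (the class names are invented for the example), might look like:

;; a class definition that Ontolingua can translate into LOOM, Epikit, or pure KIF
(define-class Operational-Amplifier (?x)
  "An electronic device built from transistors that amplifies a voltage."
  :def (and (Electronic-Device ?x)
            (exists (?t) (and (Transistor ?t)
                              (component-of ?t ?x)))))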
4.2.3 Collaboration

Ontologies facilitate collaboration among computer systems, among people, and between
computers and people. Given an ontology for a particular domain it is possible to formalize
exchanged or stored messages and so enable easier cooperation. Without such an ontology the
terms used may not have the same meaning for all parties, and confusion may occur, which
can lead to misunderstanding. Also, when the information has a formal form, it is possible for computers to
assist, e.g. in searching for needed information.
The ontology for these purposes can be developed by one expert. The ontology is then
usually fixed for other users. This has the advantage that the ontology is more likely to be precise
[35]. On the other hand, this approach requires some amount of time from an expert and a
knowledge engineer. It is also possible to develop the ontology collaboratively. With this
approach, everyone can contribute to the ontology. This is usually supported by web-based
tools, such as the Ontolingua Server [22]. The ontology then evolves according to actual needs,
but this can lead to a wide and even inconsistent ontology.
One of the areas where common terms are needed to communicate efficiently is enterprise
modeling. An enterprise ontology can support integration with existing and new
tools and bring together different views. Such an ontology also facilitates communication and
information reuse, which can be supported by computer tools. An example of a carefully designed enterprise ontology is described in [43]. On applications it is shown that the enterprise
ontology encourages the use of terms in a unified form, so that the results of one group can
be more easily reused by another group.
A shared explicit conceptualization saves much effort whenever collaborators from different
areas have to work together. The effect of conceptualization is much bigger when the collaborators work together over large distances. The discussions usually consist of some proposals
that are then revised and accepted or refuted. The discussion is usually supported by sketches,
diagrams and other documents including formal structures. In Tadzebao World Wide Design
Lab [46] such a discussion is supported by a set of notepads that can contain documents
including ontologies. This tool even facilitates reusing previous solutions by retrieving them
from other clients; a similar solution is found by a central case-based reasoning engine.
4.2.4 Particular Ontologies Reuse

There have been some attempts to develop ontologies for particular cases that would enable
knowledge bases to be constructed from them. One of the areas being explored is the diagnostics
area; for example, a careful ontological analysis of the fault process and categories of faults is
described in [27]. Such an ontology can be the core of a knowledge base solving any related
problem.
Using such a common knowledge base we can develop a deeper model of the system, not
only a few shallow heuristic facts. This makes it possible to infer deeper causes of a fault. Another
advantage is that the system with a model can easily explain its inferences.
The ontology can be comparatively easily reused for diagnostics of other similar systems.
The knowledge engineer doesn't have to start from scratch, but can use concepts already predefined
in the ontology. Also, because the same terms from the same ontology are used, knowledge
bases from different sources can be more easily compared and possibly merged.

4.3 Communication in Multi-Agent Systems

Knowledge sharing and exchange is particularly important in multi-agent systems. An agent
is usually described as a persistent entity with some degree of independence or autonomy that
carries out some set of operations depending on what it perceives. An agent usually contains
some level of intelligence, so it has to have some knowledge about its goals or desires. In
multi-agent systems, an agent usually cooperates with other agents, so it should have some
social and communicative abilities.
Each agent has to know something about the domain it is working in and also has to
communicate with other agents. An agent is able to communicate only about things that can
be expressed by some ontology. This ontology must be agreed and understood among the
agent community (or at least among its part) in order to enable each agent to understand
messages from other agents.


Figure 4: FIPA Agent Management Reference Model [15]. (An agent platform hosts agents together with the Agent Management System, the Directory Facilitator, and the Agent Communication Channel, connected through the internal platform message transport.)

The ontology in a multi-agent system can be explicit or implicit. It is explicit when it is
specified in declarative form, e.g. as a set of axioms and definitions. It is implicit when
the assumptions on the meaning of its vocabulary are implicitly embedded in the agents, i.e. in
the software programs representing the agents. The explicit form enables and requires communication
about an ontology, which can be modified when the agents agree on that. The implicit form is
fixed, so no communication about the ontology is required, but changing it is impossible without
reprogramming the agents.
It is obvious that in open systems, where agents designed by different programmers or
organizations may enter into communication, the ontology must be explicit. In these environments, it is also necessary to have some standard mechanism to access and refer to
explicitly defined ontologies. We will describe a recommendation for this published by FIPA.
This recommendation is being widely accepted and is also implemented in some systems for
constructing agents (see for example FIPA Open Source at
http://www.nortelnetworks.com/products/announcements/fipa/index.html [37] or JADE at
http://sharon.cselt.it/projects/jade/ [4]).
4.3.1 FIPA Agent Management Model

The Foundation for Intelligent Physical Agents (FIPA, http://www.fipa.org/) is a nonprofit association registered in Geneva. It is formed by companies and organizations sharing
the effort to produce specifications of generic agent technologies. FIPA's purpose is to make
and publish internationally agreed specifications or recommendations for agent systems and
also to promote agent-based applications, services and equipment.
The basic structure of a multi-agent system compliant with FIPA [15] is shown in figure 4.
An agent is a fundamental actor on an agent platform. An agent platform (AP) consists of the
machine(s), operating system(s), agent support software, FIPA compliant agent management
components (DF, AMS, ACC; see below), a proprietary internal platform message transport,
and agents. The communication between agents on one proprietary platform can be carried
out in different ways; however, according to FIPA, at least the agent management components
must be able to communicate using FIPA compliant communication protocols and languages.
Agent management components (see figure 4) consist of these agents [15]:
Agent management system (AMS) is an agent that exerts supervisory control over
access to and use of the agent platform. Only one AMS must exist in a single AP.
The AMS maintains a directory of logical agent names and their associated transport
addresses for an agent platform. The AMS offers white pages services to other agents.
Agent communication channel (ACC) is an agent that provides the default communication methods between agents on different APs. One agent platform contains
exactly one ACC agent.
Directory facilitator (DF) is an agent that provides yellow pages service to other
agents. Agents may register their services with the DF and query the DF to find out
what services are offered by other agents. One platform contains one main DF and can
contain several other DFs that help the main DF.
These agents must be able to use the FIPA agent communication language (ACL) [14], which is
similar to KQML (Knowledge Query and Manipulation Language [2]). An example message
in this language can be:
(inform
:sender agent1
:receiver hpl-auction-server
:content (price (bid good02) 150)
:in-reply-to round-4
:reply-with bid04
:language sl
:ontology hpl-auction
)
As we can see, this language is content independent (content languages such as SL, CCL, KIF
and RDF are described in FIPA specifications; an overview can be found in [17]), so it can be
used in any system, but when communicating, an agent must specify how to perceive the content.
This specification is done via an ontology. As was already said, there can be several ontologies
and agents must be able to access and to refer to them.
4.3.2 Ontology Service by FIPA

For ontology services, a dedicated ontology agent (OA) is proposed in the FIPA agent platform. The role of such an agent is to provide some or all of the following services [16]:
discovery of public ontologies in order to access them
help in selecting a shared ontology for communication
maintain (e.g. register with the DF, upload, download, or modify) a set of public
ontologies
translate expressions between different ontologies and/or different content languages
respond to queries for relationships between terms or between ontologies
facilitate the identification of a shared ontology for communication between two agents
It is not mandatory that an ontology agent must provide all of these services, but every OA
must be able to participate in a communication about these tasks. Also, it is not mandatory
that every agent platform must contain an ontology agent, but when an ontology agent is
present, it must comply with the FIPA specifications.
Example scenarios [16] of using particular OA services are:
Querying the OA for definitions of terms
A user interface agent A wants to receive pictures from a picture-archiver agent B to
show them to a user. It asks agent B for "citrus". However, agent B discovers that
it doesn't have any picture with that description. So it asks the appropriate OA to
obtain sub-species of "citrus" within the given ontology. The OA answers B that "orange"
and "lemon" are sub-species of "citrus", so agent B can send pictures with these
descriptions to agent A and so satisfy its requirements.
Finding equivalent ontology
An ontology designer declares the ontology "car-product" to the ontology agent OA2
in the U.S. in English terms and translates the same ontology into French for the ontology
agent OA1 in France. Agent A2 uses the ontology from OA2 and wants to communicate
with agent A1 about cars in the ontology maintained by OA2. Because agent A1 doesn't
know the ontology of agent A2, it queries OA1 for an ontology equivalent to the one used by
A2 (and maintained by OA2). OA1 returns its French ontology about cars, and so A1
can inform A2 that these two ontologies are equivalent and that OA1 can be used as a
translator. After that, a dialogue between A1 and A2 can start. (An ontology naming
scheme described in [16] allows these ontologies to be identified; for example, the English
car ontology of agent OA2 can be named OA2@http://makers.ford.com/car-product.)
Translations of terms
An agent A1 wants to translate a given term from an ontology #1 into the corresponding
term in an ontology #2 (for example, the concept "the name of a part" can be called
"name" in ontology #1 and "nomenclature" in ontology #2; a sketch of such a mapping
is given below). A1 queries the DF for an OA which supports the translation between
these ontologies. The DF returns the name of an OA that knows the format of these
ontologies (e.g. XML) and has the capability to make translations between them. A1 can
then query this OA and request the translation of a term from ontology #1 to ontology #2.
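A minimal sketch of the mapping behind the last scenario (the two predicates come from the scenario itself; the bridging axiom and its KIF-like notation are hypothetical) is a rule stating that the two terms denote the same relation:

;; "name" in ontology #1 corresponds to "nomenclature" in ontology #2
(forall (?part ?n)
  (<=> (name ?part ?n)
       (nomenclature ?part ?n)))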
4.3.3 Ontologies Relationships

In an open environment, agents may benefit from knowing about the existence of relationships between ontologies, for instance to decide if and how to communicate with other agents. In the agent community, the ontology agent is the most appropriate place for this knowledge. It can then be queried for information about such relationships, and it can use that information for translation or for facilitating the selection of a shared ontology for agent communication. In the FIPA specification [16] the following relations are proposed:
Extension: ontology O1 extends ontology O2
The ontology O1 extends or includes the ontology O2. Informally, this means that all the symbols that are defined within O2 are found in O1, together with the restrictions, meanings and other axiomatic relations of these symbols from O2.
Identical: ontologies O1 and O2 are identical
The vocabulary, axiomatization and language are physically identical; only the names can differ.
Equivalent: ontologies O1 and O2 are equivalent
The logical vocabulary and logical axiomatization are the same, but the language is different (e.g. XML and Ontolingua). When O1 and O2 are equivalent, they are strongly translatable in both directions.
Strongly-translatable: the source ontology O1 is strongly translatable to the target ontology O2
The vocabulary of O1 can be totally translated to the vocabulary of O2, the axiomatization of O1 holds in O2, there is no loss of information from O1 to O2, and no inconsistency is introduced. Note that the representation languages can still be different.
Weakly-translatable: the source ontology O1 is weakly translatable to the target ontology O2
The translation permits some loss of information (e.g. the terms are simplified in O2), but does not permit the introduction of inconsistency.
Approx-translatable: the source ontology O1 is approximately translatable to the target ontology O2
The translation permits even the introduction of inconsistencies, i.e. some of the relations are no longer valid and some constraints no longer apply.
The problem of deciding whether two logical theories (which ontologies usually are) stand in one of these relationships is in general computationally very difficult. Therefore, knowing about these relationships often requires manual intervention, and so a FIPA ontology agent should at least be able to maintain a database of such relationships.
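One simple way to picture this role is as a registry of explicitly declared relationships that other agents can consult before attempting a translation. The following Python sketch is only an illustration under that assumption; FIPA does not prescribe any particular implementation or data structure.

# Minimal sketch of an ontology agent's registry of declared relationships
# between ontologies (illustrative only; not part of the FIPA specification).
relationships = set()    # triples (source, relation, target)

def declare(source, relation, target):
    relationships.add((source, relation, target))

def translatability(source, target):
    """Report the strongest declared translatability from source to target."""
    for rel in ("identical", "equivalent", "strongly-translatable",
                "weakly-translatable", "approx-translatable"):
        if (source, rel, target) in relationships:
            return rel
    return None            # nothing declared; manual intervention needed

declare("car-product-en", "equivalent", "car-product-fr")
declare("car-product-fr", "weakly-translatable", "vehicle-catalogue")

print(translatability("car-product-en", "car-product-fr"))     # equivalent
print(translatability("car-product-en", "vehicle-catalogue"))  # None (not declared)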
4.3.4 FIPA Knowledge Model and Meta-Ontology

To allow agents to talk about knowledge and about ontologies, for instance to query for the definition of a concept or to define a new concept, a standard meta-ontology and knowledge model is necessary. This meta-ontology and knowledge model must be able to describe primitives such as concepts, attributes, or relations.
For these purposes FIPA adopts the OKBC Knowledge Model [16]. OKBC, the Open Knowledge Base Connectivity [9], provides operations for manipulating knowledge expressed in an implicit representation formalism called the OKBC Knowledge Model. This knowledge model supports an object-oriented representation of knowledge and provides a set of representational constructs, and can thus serve as an interlingua for knowledge sharing and translation. The OKBC Knowledge Model includes constants, frames, slots, facets, classes, individuals, and knowledge bases. For a precise description of the model, KIF [18] is used.
The OKBC knowledge model assumes a universe of discourse consisting of all entities about which knowledge is to be expressed. In every domain of discourse it is assumed that all constants of the following basic types are always defined: integers, floating point numbers, strings, symbols, lists, and classes. It is also assumed that the logical constants true and false are included in every domain of discourse. Classes are sets of entities (the term class is used synonymously with the term concept as used in description logic), and all sets of entities are considered to be classes.
A frame is a primitive object that represents an entity in the domain of discourse. A frame is called a class frame when it represents a class, and an individual frame when it represents an individual. A frame has a set of slots associated with it, and each slot has an associated set of slot values. A slot in turn has an associated set of facets that place restrictions on the slot values. Slots and slot values can again be any entities in the domain of discourse, including frames. A class is a set of entities, each of which is an instance of that class (one entity can be an instance of multiple classes); the class is a type for these entities. Entities that are not classes are referred to as individuals. Class frames may have associated template slots and template facets that are considered to hold for the instances of that class and of its subclasses. Default values can also be defined. Each slot or facet may contain multiple values. There are three collection types: set, bag (unordered, multiple occurrences permitted), and list (ordered bag). A knowledge base (KB) is a collection of classes, individuals, frames, slots, slot values, facets, facet values, frame-slot associations, and frame-slot-facet associations. KBs are considered to be entities of the universe of discourse and are represented by frames. Standard classes, facets, and slots with specified names and semantics are defined for frequently used entities; these are described in detail in [9] and [16].
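As a rough illustration of this terminology (only of the terminology; this is not the OKBC API), the following Python sketch models class and individual frames, slots holding bags of values, and a facet restricting slot values:

# Rough illustration of OKBC-style frames, slots and facets (not the OKBC API itself).
class Frame:
    def __init__(self, name, is_class=False):
        self.name = name
        self.is_class = is_class          # class frame vs. individual frame
        self.slots = {}                   # slot name -> list of values (a "bag")
        self.facets = {}                  # (slot, facet) -> facet value

    def set_facet(self, slot, facet, value):
        self.facets[(slot, facet)] = value

    def add_value(self, slot, value):
        # enforce a simple VALUE-TYPE facet, if one has been declared for the slot
        required = self.facets.get((slot, "VALUE-TYPE"))
        if required is not None and not isinstance(value, required):
            raise TypeError("slot %s of %s requires %s" % (slot, self.name, required.__name__))
        self.slots.setdefault(slot, []).append(value)

person = Frame("Person", is_class=True)      # a class frame
john = Frame("john", is_class=False)         # an individual frame
john.set_facet("age", "VALUE-TYPE", int)     # facet restricting the slot values
john.add_value("age", 42)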
Knowledge bases and ontologies conforming to OKBC are often expressed in KIF [18] or RDF [28]. An example of an OKBC-compliant editor of knowledge bases and ontologies that supports both of these formats is Protege-2000 [19].
The FIPA specification [16] defines the ontology FIPA-meta-ontology, based on the OKBC Knowledge Model, for describing ontologies. This ontology must be used by an agent when it talks about ontologies. The ontology FIPA-Ontol-service-Ontology must be used when requesting the services of an ontology agent; it extends the basic FIPA-meta-ontology with symbols enabling the manipulation of ontologies. These ontologies are described in detail in [16].

4.4 Natural Language Understanding

One of the AI fields that depends on a rich body of knowledge is natural language understanding (NLU), or natural language processing (NLP). Ontologies are useful in NLU in two ways. First, domain knowledge often plays a crucial role in disambiguation, and a well designed domain ontology provides the basis for domain knowledge representation. In addition, an ontology of a domain helps identify the semantic categories that are involved in understanding discourse of the domain. For this use, the ontology plays the role of a concept dictionary. In general, for NLU we need both a general-purpose upper ontology and a domain-specific ontology that focuses on the domain of discourse (such as military communications or business stories). CYC [45], WordNet [3] and Sensus are examples of sharable ontologies that have been used for language understanding. NLU is one area that strongly motivates the work on ontologies; even CYC, which was originally motivated by the need for knowledge systems to have common world knowledge, has been tested more in natural language areas than in knowledge systems applications [8].

4.4.1 CYC NLP

The system for natural language developed with CYC at Cycorp is unique in having access to a very large, declaratively represented common sense knowledge base. CYC helps the natural language system handle word or phrase disambiguation, and also provides a target internal representation language (CycL [10]) that can be used to do interesting things, such as inference. A substantial portion of the CYC natural language processing system (the lexicon and many semantic rules) is actually represented in the CYC knowledge base. Syntactic parsing is carried out by applying phrase-structure rules to an input string; semantic rules are then applied to the output of the syntax module. It is in the application of the semantic rules that the knowledge in the knowledge base proves especially advantageous.
Most of the CYC pilot applications developed in the recent past have some NL component
in their interfaces. The captioned image retrieval application [45], for example, accepts queries
in English, and allows captioners to describe new images to the system using English sentences.
The CYC NL team is currently expanding the lexicon, extending the parser, and adding new
semantic capabilities to the system.
4.4.2 WordNet Database

Probably the most widely used linguistic database (and linguistic ontology) for natural language processing is WordNet [3], developed at Princeton University. WordNet is a lexical reference system whose design is inspired by current psycholinguistic theories of human lexical memory. English nouns, verbs, adjectives and adverbs are organized into synonym sets, each representing one underlying lexical concept. WordNet includes several environments for manipulating its database.

Each synonym set (called a synset) is usually expressed as a unique combination of synonymous words. In general, each word can be associated with more than one synset and more than one lexical category. Different relations link the synonym sets; examples are hypernymy, hyponymy, and antonymy. The first can be roughly assimilated to the usual subsumption relation, while the last links together opposite or inverse terms, such as tall/short. WordNet offers two distinct services: a vocabulary, which describes the various word senses, and an ontology, which describes the semantic relationships among senses. Both of these services can be used in natural language systems, such as WWW-based systems (see below).
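As an illustration, the WordNet database can also be queried programmatically. The following sketch uses the NLTK interface to WordNet, which is an assumption of this example and is not discussed in the report; it lists the synsets of a word together with their hypernyms and hyponyms:

# Querying WordNet synsets and relations via NLTK (illustrative; requires
# installing nltk and downloading the 'wordnet' corpus first).
from nltk.corpus import wordnet as wn

for synset in wn.synsets("citrus"):
    print(synset.name(), "-", synset.definition())
    print("  hypernyms:", [s.name() for s in synset.hypernyms()])
    print("  hyponyms: ", [s.name() for s in synset.hyponyms()][:5])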
Currently, an effort is being made to translate WordNet to several European languages in the EuroWordNet project (see http://www.hum.uva.nl/~ewn/). For some languages, such as Dutch, Italian, Spanish, German, French, Czech and Estonian, the WordNet database is already available.

4.5 Document Search and Ontologies

Finding a document in a large database is often not easy. For example, the World Wide Web is such a big knowledge base that it is sometimes very time-consuming to find the needed information. A search query is usually entered as a few keywords that are then looked up. This is, however, not sufficient for today's web; some kind of more content-based search is needed. Structured content representations coupled with linguistic ontologies can increase both the recall and the precision of the search.

Ontologies can be used in many ways for these purposes. They can support searching or knowledge mining from textual web sources, and they can be used to specify the content of pages or to standardize content and query vocabulary. In this section we will also briefly describe applications for the standardization of materials that need not necessarily be shared over the WWW, such as structured documents for business-to-business (B2B) information exchange.
4.5.1 OntoSeek

OntoSeek [25] is a tool intended for searching documents in product catalogs, such as yellow pages. Structured content representations coupled with linguistic ontologies are used to increase both the recall and the precision of the retrieval. OntoSeek adopts a language of limited expressiveness for content representation and uses a large ontology based on WordNet (see above) for content matching.
According to [25], we can distinguish three areas of information retrieval:
Text retrieval: the goal is to find relevant documents in a large collection in response to a user's query expressed as a sequence of words. The user does not know much about the collection content, therefore a precise semantic match between the query and the relevant documents is not very important. It is also assumed that the textual quality of the documents is good.
Data retrieval: both the queries and the data to be retrieved are encoded by a structured list of words, acting as values of a set of attributes established by the system's designer. These words usually belong to a fixed taxonomy. However, large taxonomies can be hard to design and difficult to maintain.
Knowledge retrieval: here the query and data-encoding language is much more expressive. This results in increased precision, because the user can accurately represent the data's content structure and formulate sophisticated queries. An arbitrary description can then be formed and matched on the basis of an ontology of primitive concepts and relations. However, this forces users to adopt a language that may be too expressive for their purposes.
The last area is particularly interesting. In knowledge retrieval systems, an ontology provides the primitives needed to formulate queries and resource descriptions. Simple ontologies, such as keyword hierarchies, may also be useful for text and data retrieval techniques.

OntoSeek uses the WordNet and Sensus ontologies for matching queries and content. For content description (derived from natural language and refined by the user), OntoSeek uses lexical conceptual graphs (LCGs), which are simplified variants of Sowa's conceptual graphs in that they permit only lexical natural language relations between concepts (no ad-hoc relations such as part-of are allowed).
OntoSeek works as follows. In the encoding phase, a resource description is converted into an LCG with the help of the user interface, and the resulting LCG is stored in a database. When querying, the query description is converted into an LCG with variables at the places where the user expects a URL as an answer. The database is then searched for matching graphs and the results are returned. Note that if the query language changes but the underlying ontology for constructing LCGs remains the same, the system continues to work correctly; for instance, replacing the WordNet database with the EuroWordNet database for some language effectively localizes the system.
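A minimal sketch of this matching idea, with graphs reduced to relation triples and a query variable standing for the expected answer, could look as follows. It is purely illustrative: the real OntoSeek matching also exploits the WordNet/Sensus subsumption hierarchy rather than requiring exact equality of terms, and the URLs and relation names below are made up.

# Toy matching of query graphs with variables against stored description graphs.
database = {
    "http://example.org/page1": [("car", "colour", "red"), ("car", "made-in", "italy")],
    "http://example.org/page2": [("bike", "colour", "red")],
}

def matches(query, description):
    """Every query triple must appear in the description; '?x' matches anything."""
    return all(
        any(all(q in ("?x", d) for q, d in zip(qt, dt)) for dt in description)
        for qt in query
    )

query = [("car", "colour", "?x")]   # "find pages describing a car of some colour"
print([url for url, descr in database.items() if matches(query, descr)])
# ['http://example.org/page1']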

4.5.2 WebKB

Similar techniques for content matching are used in the project WebKB [30] (available at http://meganesia.int.gu.edu.au/~phmartin/WebKB/). Here, however, the content description is embedded directly in a web page. The description of a (part of the) text is placed between two HTML markers <KR> and </KR>. Because it is possible to use several formal languages to describe the content, the representational language must also be specified (e.g. <KR language="CG"> for conceptual graphs). It is also possible to use alternative ways, such as embedding the description in the ALT attribute of images to describe image content.

The formal description languages in WebKB include KIF [18], conceptual graphs, and the Resource Description Framework (RDF) [28]. All of these assume that some ontology is imported to distinguish between different concepts; however, it is possible to suppress the check of whether the used concepts are already defined. Simpler, less expressive notations are also supported. These include structured text (with delimiters like ":", "=>", "<="), text structured with HTML tags (lists defined in HTML), and formalized English (an English-like expression of the underlying conceptual graph). Descriptions expressed in these languages can be translated into any other of these languages, so that the underlying representational schema can be the same.

The fact that "John believes that Mary has a cousin who is her age" can then be expressed in this way [30]:
<KR language="CG">
load "http://www.foo.com/topLevelOntology"; // import this ontology
Age < Property; // declare Age as a subtype of Property
Cousin(Person, Person) {Relation type Cousin};
[Person:"John"]<-(Believer)<-[Descr: [Person:"Mary"]->
{(Chrc)->[Age:*a];
(Cousin)->[Person]->(Chrc)->[Age:*a];
}];
</KR>
The WebKB tool provides a way to interpret these descriptions. It also provides an interface to Unix-like text processing commands to exploit web-accessible documents or databases and to process them, for example to query them.
4.5.3 Knowledge Representation Techniques

The current HTML standard in which WWW pages are written does not provide a way to embed a machine-understandable description of the content in the page; HTML was intended as a way to format documents so that they are readable for humans. There are, however, attempts to overcome this, either by extending the standard or by using existing standard markers for special purposes.

One of the oldest ways of using HTML-based semantic markup to briefly describe the content of a page is to use HTML <META> tags. In this way we can embed a description of the whole page that can then be processed by indexing or search agents. However, this does not provide any easily machine-usable description, and it is usually used only to describe the page as a whole. There is a way to describe parts of the text using these tags together with SPAN tags, as shown in [44], but this is not a standardized way. Another unstandardized proposal is to use Cascading Style Sheets (CSS), which could be used not only for formatting but also for semantic markup.
semantic markup.
A standard that enables any descriptive tree structure to be embedded into a document is the Extensible Markup Language, XML [5]. XML is in fact a meta-markup language, since it enables custom tags with the desired syntactic structure to be defined via its DTD (Document Type Definition). The DTD can be viewed as an ontology specification. There are many proposals of DTDs for specific areas; examples are CML, the Chemical Markup Language, and OML, the Ontology Markup Language. The lexical structure of an XML document is a tree, but using shared identifiers it is possible to encode an arbitrary graph.
The W3C-supported recommendation for semantic markup is the Resource Description Framework (RDF) [28]. The data model of RDF provides three entity types (a small sketch follows the list):
- subject: an entity that can be referred to by an address on the WWW; subjects are the elements that are described by RDF statements.
- predicate: defines a binary relation between subjects and/or atomic values provided by primitive data type definitions in XML.
- object: specifies the value of a property of a subject; that is, objects provide the actual characterization of the WWW documents.
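As a small illustration of this subject-predicate-object model, a single RDF statement can be built and serialized as follows. The sketch uses the rdflib Python library, which is an assumption of this example and is not mentioned in the report; the URIs are made up.

# One RDF statement: subject (a WWW resource), predicate (a property), object (a value).
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/terms/")
g = Graph()
g.add((URIRef("http://example.org/report.html"),   # subject
       EX.author,                                   # predicate
       Literal("Marek Obitko")))                    # object

print(g.serialize(format="xml"))   # RDF/XML rendering of the statement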
There are alternative approaches using XML, such as SHOE or Ontobroker. SHOE (http://www.cs.umd.edu/projects/plus/SHOE/), a derivative of OML, provides tags for constructing ontologies and tags for annotating web documents. It does not provide any standard top-level ontology, but it is possible to choose from the particular ontologies offered. Ontobroker (http://ontobroker.aifb.uni-karlsruhe.de/) uses an ontology based on frame logic to describe knowledge and to express queries. It includes an inference engine that can derive additional knowledge through inference rules. These two approaches have in common that they enable inferential knowledge to be expressed, i.e. relationships between entities in ontologies. These relationships can be used to form sophisticated queries or to infer new knowledge from existing web pages.
The recently started DARPA program called DAML (DARPA Agent Mark-Up Language) [1] has the goal of creating a mark-up language built upon XML that would allow users to provide machine-readable semantic annotations for specific communities of interest, together with agent-based tools using this language. The program builds on the results and proposals mentioned above. So far, the DAML results consist of the first proposal of the DAML language. The DAML language is a frame-based language based on RDF [28] and includes a basic top ontology for describing basic entities and ontologies.
4.5.4 Document Enrichment

So far we have been concerned mostly with techniques for representing document contents in a machine-readable form so that they can be used for search. Ontologies can, however, be used to express not only the contents of documents, but also relations between documents. These relations can explicitly express what is usually expressed only implicitly in the contents of the documents. For example, in scholarly publications [40] a publication may support or refute ideas expressed in another publication. Such a relationship with other documents adds new information to the document, and this process is therefore called document enrichment [41, 35].

This additional information can be used for discovering relevant documents or, for example, for finding potentially interesting gaps in the available documents, such as possibly interesting themes for articles in a news system [12].
The particular relations between documents have to conform to an explicit structure, both so that new documents can be added to the structure and so that the structure can be used. This explicit structure is expressed via an ontology for a particular document domain. The whole process of working with the documents is then ontology-driven [35]. The steps of the methodology described in [35] are:
1. Identify use scenario
2. Characterize viewpoint for ontology
3. Develop the ontology
4. Perform ontology-driven model construction
5. Customize query interface for semantic knowledge retrieval
6. Develop additional reasoning services on top of knowledge model
The first three steps are performed by system authoring experts, because the quality of the document relationship model depends crucially on the quality of the ontology, which determines everything that can be expressed. According to [35], the authors should focus on usability rather than on reusability. A reusable ontology often includes all potential aspects that could ever arise; however, considering all these aspects when submitting a new document to a document structure could easily discourage system users. A usable ontology, on the other hand, includes only highly relevant aspects that are easy to understand. The ontology is expressed in OCML, an operational knowledge modelling language [34], and is used for the rest of the steps above. For constructing ontologies, a special web-based tool, WebOnto, can be used. The fourth step, construction of the model based on the ontology (i.e. the document structure), is performed by casual users in a distributed way. When submitting a new document, an author uses an environment for defining the relations to other documents or to other aspects of the modeled world. This form-based environment, called Knote, is dynamically generated from the actual ontology. The fifth step, querying the database, is performed using Lois, a form-based interface for knowledge retrieval that is also created automatically once the key classes of a knowledge model have been specified.
This methodology, summarized in [35], has been used in several domains, such as enriching news stories [12], supporting scholarly debate [40], and the knowledge management of medical guidelines.
The news server Planet-Onto [12] uses an ontology of events and of classes like people, organizations, stories, projects and technologies. In addition to enriching and searching stories, two intelligent agents are designed (but not yet implemented). NewsHound would gather data about popular news items and thus could solicit potentially popular stories by identifying gaps in the knowledge base. NewsBoy would provide a personalized service for finding new stories according to a user profile.
The digital library server ScholOnto [40] enables ideas to be contextualized in relation to the literature. The ontology consists of contribution elements and relationships that are further divided into argumentation and non-argumentation links. The ontology is designed to support scholars in making claims by asserting relationships between concepts. Other scholars may support, raise-issue-with, or refute these claims.
Document enrichment can also be used to support organizational learning. The project Enrich [41] tries to support discussions structured around documents and concepts for learning. It is based on the fact that learning is more efficient when the acquired knowledge is immediately applied. The documents and domain concepts can be incrementally enriched by users, which supports further group learning.

4.6 Educational Systems

Computer educational or tutoring systems allow users to take a lesson without any time, place, or teacher availability constraints. Traditional computer-aided instruction tools provide just a set of static pages with text or, in better cases, some simulators usable for one specific purpose. The courses delivered by these systems can usually be adapted and personalized only by a teacher, the editor of the course. Also, these tools have the teaching strategies (if any) encoded directly in the taught content (such as "when the user presses button A, go to screen 23"). It is obvious that it is practically impossible to reuse knowledge encoded in such a system. It is necessary to have the content knowledge separated from the system, as well as from the teaching strategies and so on.
Intelligent tutoring systems (ITS, also called knowledge-based tutors) are computer-based instructional systems that have separate databases, or knowledge bases, for instructional content (specifying what to teach) and for teaching strategies (specifying how to teach), and that attempt to use inferences about a student's mastery of topics to dynamically adapt instruction. The adaptation of teaching materials for a particular student is usually performed using artificial intelligence techniques. An ITS usually requires modeling the student and, from the perceived model, planning the next actions that would be best for that particular student.
To date, these systems have usually been built from scratch, without reusing any knowledge or parts of other tutoring systems. Today, however, there are attempts to overcome this by establishing common methodologies or frameworks that enable knowledge reuse and so make the development of such systems faster and easier. Establishing common ontologies seems to be the best way to achieve this goal. Some other advantages are discussed below.
4.6.1 EON

One of the pioneering works is EON [36], a collection of tools for authoring content, instructional strategy, the student model and the interface design. These authoring tools enable all the tutoring materials to be stored in a separate and thus more reusable form.

When editing a network of topics for storing course content, it is necessary to define an ontology of the network for a particular course area. That ontology can then be reused for another course in the same area, and the content, with all of its properties, can easily be transferred to another course with the same ontology. Other parts of the tutoring system are treated in the same way: it is necessary to define the concepts and constraints under which one will work, and it is then possible to use them to specify the particular behaviour.


4.6.2 ABITS

Another attempt to create a reusable framework is ABITS [6], an agent-based intelligent tutoring system. The knowledge taught is organized and indexed according to the so-called Learning Object Metadata [26], which specifies properties and constraints of objects that can be considered teachable entities. For further description, the Resource Description Framework [28] is used. Using a common format with the same ontology facilitates sharing or transferring the knowledge to other tutoring systems. The topic structure is modeled via conceptual graphs.

User modeling consists of the cognitive state and the learning preferences, both of which are expressed as fuzzy numbers. The ABITS architecture consists of three types of agents: evaluation agents, which take care of the cognitive state of the student; affective agents, which evaluate learning preferences; and pedagogical agents, which update the curriculum (the current teaching plan). All these agents communicate with a database associated with their type and with agents of the other types.
4.6.3 Other Proposals

There are currently proposals and standards for some aspects of intelligent tutoring systems. The IEEE initiative to specify Learning Object Metadata [26] was already mentioned.

A partial task ontology for intelligent educational systems is proposed in [33]. Such an ontology provides a vocabulary in terms of which existing systems can be compared and enables research results to be accumulated. Other advantages are that educational tasks can be formalized and that it becomes possible to create reusable parts of tutoring systems. Also, it is possible to standardize the communication protocol among the component agents of an ITS. Various entities, methods and concepts are analyzed to create an ontology that could be used for any ITS. A collection of other proposals of ontologies and their usage for several aspects of intelligent educational systems, such as the student model and preferences, the curriculum ontology, the task ontology, and others, can be found in [32].

Other initiatives propose ontologies for particular areas that could be used for teaching special knowledge, such as the simulation of physical systems. These ontologies have much in common with the previously mentioned special ontologies and even with top-level ontologies; they are often derived from them.

Conclusion

From the presented survey it follows that ontologies will fundamentally change the way in which systems are constructed. Today, knowledge bases are still built with little sharing or reuse; almost every one starts from a blank slate. In the future, intelligent systems developers will have libraries of ontologies at their disposal. Rather than building from scratch, they will assemble knowledge bases from components drawn from these libraries. This should greatly decrease development time while improving the robustness and reliability of the resulting knowledge bases.

A new field within knowledge engineering, called ontology engineering, has been proposed [31]. This field is concerned with all theoretical and practical aspects of ontologies, such as ontology development, maintenance, reuse, etc. We have discussed some of the areas of interest within this discipline.

An ontology can be viewed (among other possibilities) as the topmost part of a knowledge base. This upper part usually encodes general or common-sense knowledge, such as the fact that objects can have properties. For larger applications that are not strictly focused on one simple thing, such knowledge can be very extensive and difficult to maintain. One of the examples shown is the project CYC [45], which claims to encode a substantial part of people's common-sense knowledge.
We have shown applications of ontologies in several areas. The first applications, which started the broader interest in ontologies, were those in knowledge sharing and reuse [22]. Ontologies here help to express a commonly accepted conceptualization of domains. The worlds in these domains can be described in various ways or languages, but if they all conform to one ontology, it is much easier to provide translation between the different descriptions. The ontology here serves as an interlingua, that is, a common language for translations (we do not have to construct translators for every pair of languages, since translators between each language and the common interlingua are enough to provide any translation). This facilitates the sharing of knowledge and also the reuse of the knowledge in various systems that do not have to use exactly the same internal representation of the world.
An area that is tightly related to knowledge sharing and reuse is communication in multi-agent systems. If agents have to communicate, they have to use a commonly agreed and understood syntax and semantics for the messages. If this is expressed explicitly as an ontology, it is possible to work with various forms of communication; for example, it is possible to find a translation between ontologies, to query meanings in other ontologies, to enrich the ontology of an agent, and so on. We have shown how this is proposed to be done in the FIPA [16] standards through a dedicated ontology agent.
In the area of natural language processing, an ontology is usually perceived as a structure of lexical word meanings; the ontology here plays the role of a conceptual dictionary. Probably the best-known linguistic database, WordNet [3], was described. It should be mentioned that such ontologies are not only vocabularies of words; they additionally include semantic and other relationships among the senses and meanings of each word.
An application that needs an explicit conceptualization is sophisticated document search in a large database, such as the World Wide Web. If we want to search for documents not only by entering a few keywords, but also by using a structure encoded in and between documents, we have to build a database of such structures together with the simple directory of the documents. Among the systems using conceptual structures to encode the internal semantic structure of documents we have mentioned OntoSeek [25] and WebKB [30]. Both of these projects use ontologies to define the possible structure of a document's content description; it is then possible to search with queries that use these structures. We have shown some attempts at standardizing such descriptions for the WWW, often built on XML [5]. The introduction of relations between documents is described as document enrichment [40, 35]. This way of working was described for domains like scholarly debate, news stories, medical guidelines, and organizational learning.

The last presented application that needs explicitly defined ontologies is the area of intelligent educational systems. We have briefly described two frameworks for intelligent tutoring systems (or knowledge-based tutors), EON [36] and ABITS [6]. Both of these systems show that an ontology can help to speed up the creation of a tutoring system for a particular domain as well as the reuse of already created parts of the systems. Committees for standards are already arising within these educational systems initiatives, similarly to the document search area mentioned above.

To conclude, we can see that the explicit conceptualization of a domain in the form of an ontology facilitates knowledge sharing, knowledge reuse, communication and collaboration, and the construction of knowledge-rich and knowledge-intensive systems. There are methodologies for constructing and maintaining ontologies and for using them in practical applications, but these are rather sets of recommendations than exact theories. Despite this, ontologies are more and more widely used for the construction of systems that require explicitly encoded knowledge.

Acknowledgements
This research has been supported by the MSMT Grant No. 212300013. I would like to thank professor Vladimír Mařík for his valuable comments on early versions of this work.

References
[1] DAML - DARPA Agent Mark-Up Language. http://www.daml.org/.
[2] UMBC KQML Web. http://www.cs.umbc.edu/kqml/.
[3] WordNet - a Lexical Database for English. http://www.cogsci.princeton.edu/~wn/.
[4] Fabio Bellifemine, Agostino Poggi, and Giovanni Rimassa. JADE: A FIPA-Compliant Agent Framework. In Proceedings of PAAM'99, pages 97-108, London, 1999.
[5] Tim Bray, Jean Paoli, and C. M. Sperberg-McQueen. Extensible Markup Language
(XML) 1.0. http://www.w3.org/TR/REC-xml, 1998.
[6] Nicola Capuano, Marco Marsella, and Saverio Salerno. ABITS: An Agent Based Intelligent Tutoring System for Distance Learning. In C. Peylo, editor, Proceedings of the
International Workshop on Adaptive and Intelligent Web-Based Educational Systems
Held in Conjunction with ITS 2000, Montreal, Canada, 2000.
[7] B. Chandrasekaran, J. R. Josephson, and V. Richard Benjamins. The Ontology of Tasks
and Methods. In Knowledge Acquisition, Modeling and Management, 1998.
[8] B. Chandrasekaran, John R. Josephson, and V. Richard Benjamins. What Are Ontologies, and Why Do We Need Them? IEEE Transactions on Intelligent Systems, pages 20-26, 1999.
[9] Vinay K. Chaudhri, Adam Farquhar, Richard Fikes, Peter D. Karp, and James P. Rice.
Open Knowledge Base Connectivity 2.0.3. http://www.ai.sri.com/ okbc/spec.html, 1998.
[10] CYCORP. The CycL Language. http://www.cyc.com/cycl.html.
[11] Randall Davis, Howard Shrobe, and Peter Szolovits. What is a Knowledge Representation? AI Magazine, 14:17-33, 1993.
[12] John Domingue and Enrico Motta. A Knowledge-Based News Server Supporting Ontology-Driven Story Enrichment and Knowledge Retrieval. In 11th European Workshop on Knowledge Acquisition, Modelling, and Management (EKAW'99), 1999.

[13] Edward Feigenbaum, Robert Engelmore, Thomas Gruber, and Yumi Iwasaki. Large Knowledge Bases for Engineering: The How Things Work Project of the Stanford Knowledge Systems Laboratory. Technical Report KSL 90-83, Stanford University, Knowledge Systems Laboratory, 1990.
[14] FIPA. Agent Communication Language Specification. http://www.fipa.org/specs/fipa00003/, 1997.
[15] FIPA. Agent Management Specification. http://www.fipa.org/specs/fipa00002/,
1998.
[16] FIPA. Ontology Service Specification. http://www.fipa.org/specs/fipa00006/, 1998.
[17] FIPA. Content Languages Specification. http://www.fipa.org/specs/fipa00007/, 2000.
[18] Michael R. Genesereth and Richard E. Fikes. Knowledge Interchange Format, Version 3.0, Reference Manual. http://logic.stanford.edu/kif/Hypertext/kif-manual.html, 1994.
[19] W. E. Grosso, H. Eriksson, R. W. Fergerson, J. H. Gennari, S. W. Tu, and M. A. Musen. Knowledge Modeling at the Millennium: The Design and Evolution of Protege-2000. Technical Report SMI-1999-0801, Stanford Medical Informatics, 1999. http://smi-web.stanford.edu/pubs/SMI_Abstracts/SMI-1999-0801.html.
[20] Thomas R. Gruber. What is an Ontology? http://www-ksl.stanford.edu/kst/what-is-an-ontology.html.

[21] Thomas R. Gruber. A Translation Approach to Portable Ontology Specifications. Knowledge Acquisition, 1993.
[22] Thomas R. Gruber. Toward Principles for the Design of Ontologies Used for Knowledge
Sharing. In Nicola Guarino and Roberto Poli, editors, Formal Ontology in Conceptual
Analysis and Knowledge Representation. Kluwer Academic Publishers, 1993.
[23] Nicola Guarino. Understanding, Building and Using Ontologies. International Journal
of Human-Computer Studies, special issue on Using Explicit Ontologies in KBS Development, 1997.
[24] Nicola Guarino and Pierdaniele Giaretta. Ontologies and Knowledge Bases - Towards
a Terminological Clarification. In N.J.I. Mars, editor, Towards Very Large Knowledge
Bases. IOS Press, Amsterdam, 1995.
[25] Nicola Guarino, Claudio Masolo, and Guido Vetere. OntoSeek: Content-Based Access to the Web. IEEE Transactions on Intelligent Systems, pages 70-80, 1999.
[26] Wayne Hodgins. Learning Objects Metadata, Working Draft. Technical report, IEEE Learning Technology Standardization Committee, 2000. http://ltsc.ieee.org/doc/wg12/LOM_WD4.htm.
[27] Yoshinobu Kitamura and Riichiro Mizoguchi. An Ontological Analysis of Fault Process and Category of Faults. In Proc. of the Tenth International Workshop on Principles of Diagnosis, pages 118-128, 1999.

[28] Ora Lassila and Ralph R. Swick. Resource Description Framework (RDF) Model and
Syntax Specification. http://www.w3.org/TR/PR-rdf-syntax/, 1999.
[29] Douglas Lenat. The Dimensions of Context-Space. CYCORP, Austin, Texas, 1998.
[30] Philippe Martin and Peter W. Eklund. Knowledge Retrieval and the World Wide Web. IEEE Transactions on Intelligent Systems, pages 18-25, 2000.
[31] Riichiro Mizoguchi and Mitsuru Ikeda. Towards Ontology Engineering. Technical Report
AI-TR-96-1, The Institute of Scientific and Industrial Research, Osaka University, 1996.
[32] Riichiro Mizoguchi and Tom Murray, editors. AI-ED 99 Workshop on Ontologies for Intelligent Educational Systems, 1999. http://www.ei.sanken.osaka-u.ac.jp/aied99/aied99-onto.html.
[33] Riichiro Mizoguchi, K. Sinitsa, and Mitsuru Ikeda. Knowledge Engineering of Educational Systems for Authoring System Design - A preliminary results of task ontology design. In Proceedings of EAIED, pages 329-335, Lisbon, 1996.
[34] Enrico Motta. An Overview of the OCML Modelling Language. In Proceedings of the
8th Workshop on Knowledge Engineering Methods and Languages (KEML98), 1998.
[35] Enrico Motta, Simon Buckingham Shum, and John Domingue. Ontology-Driven Document Enrichment: Principles and Case Studies. International Journal of Human-Computer Studies, 52:1071-1109, 2000.
[36] Tom Murray. Authoring Knowledge Based Tutors: Tools for Content, Instruction Strategy, Student Model, and Interface Design. Journal of the Learning Sciences, 7(1):5-64, 1998.
[37] Nortel Networks. FIPA-OS V1.3.3 Distribution Notes, 2000.
[38] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice-Hall International, Inc., 1995. ISBN 0-13-360124-2.
[39] James Schoening. IEEE Standard Upper Ontology Study Group Homepage. http://ltsc.ieee.org/suo/, 2000.
[40] Simon Buckingham Shum, Enrico Motta, and John Domingue. ScholOnto: An Ontology-Based Digital Library Server for Research Documents and Discourse. International Journal on Digital Libraries, 3(3), August/Sept. 2000.
[41] Tamara Sumner, John Domingue, Zdenek Zdrahal, Marek Hatala, Alan Millican, Jayne Murray, Knut Hinkelmann, Ansgar Bernardi, Stefan Wess, and Ralph Traphoner. Enriching Representations of Work to Support Organisational Learning. In Proceedings of the Interdisciplinary Workshop on Building, Maintaining, and Using Organizational Memories (OM-98), 13th European Conference on Artificial Intelligence (ECAI-98), Brighton, UK, 1998.
[42] William Swartout. Ontologies. IEEE Transactions on Intelligent Systems, pages 18-19, 1999.


[43] Mike Uschold, Martin King, Stuart Moralee, and Yannis Zorgios. The Enterprise Ontology. Technical Report AIAI-TR-195, Artificial Intelligence Applications Institute, University of Edinburgh, 1998.
[44] Frank Van Harmelen and Dieter Fensel. Practical Knowledge Representation for the
Web. In IJCAI99 Workshop on Intelligent Information Integration, 1999.
[45] David Whitten. The Unofficial, Unauthorized Cyc Frequently Asked Questions.
http://www.robotwisdom.com/ai/cycfaq.html, 1997.
[46] Zdenek Zdrahal and John Domingue. The World Wide Design Lab: An Environment
for Distributed Collaborative Design. In Proceedings of International Conference on
Engineering Design, Tampere, 1997.

