
Lexical Semantics: Overview

J. Pustejovsky, Brandeis University, Waltham, MA, USA

In The Encyclopedia of Language and Linguistics, p. 5775. © 2006 Elsevier Ltd. All rights reserved.


Word Knowledge
Semantic interpretation requires access to knowledge about words. The lexicon of a
grammar must provide a systematic and efficient way of encoding the information
associated with words in a language. Lexical semantics is the study of what words mean and
how they structure these meanings. This article examines word meaning from two different
perspectives: the information required for composition in the syntax and the knowledge
needed for semantic interpretation. The lexicon is not merely a collection of words with their
associated phonetic, orthographic, and semantic forms. Rather, lexical entries are structured
objects that participate in larger operations and compositions, both enabling syntactic
environments and acting as signatures to semantic entailments and implicatures in the
context of larger discourse.
There are four basic questions in modeling the semantic content and structure of the lexicon:
(1) What semantic information goes into a lexical entry?
(2) How do lexical entries relate semantically to one another?
(3) How is this information exploited compositionally by the grammar? and
(4) How is this information available to semantic interpretation generally?
This article focuses on the first two.
The lexicon and lexical semantics have traditionally been viewed as the most passive
modules of language, acting in the service of the more dynamic components of the grammar.
This view has its origins in the generative tradition (Chomsky, 1955) and has been an
integral part of the notion of the lexicon ever since. While the Aspects model of selectional
features (Chomsky, 1965) restricted the relation of selection to that between lexical items,
work by McCawley (1968) and Jackendoff (1972) showed that selectional restrictions must be
available to computations at the level of derived semantic representation rather than at deep
structure. Subsequent work by Bresnan (1982), Gazdar et al. (1985), and Pollard and Sag
(1994) extended the range of phenomena that can be handled by the projection and
exploitation of lexically derived information in the grammar. With the convergence of several
areas in linguistics (lexical semantics, computational lexicons, and type theories) several
models for the determination of selection have emerged that put even more compositional
power in the lexicon, making explicit reference to the paradigmatic systems that allow for
grammatical constructions to be partially determined by selection. Examples of this
approach are generative lexicon theory (Bouillon and Busa, 2001; Pustejovsky, 1995) and
construction grammar (Goldberg, 1995; Jackendoff, 1997, 2002). These developments have
helped to characterize the approaches to lexical design in terms of a hierarchy of semantic
expressiveness.
There are at least three such classes of lexical description: sense enumerative lexicons,
where lexical items have a single type and meaning, and ambiguity is treated by multiple
listings of words; polymorphic lexicons, where lexical items are active objects, contributing to
the determination of meaning in context, under well-defined constraints; and unrestricted
sense lexicons, where the meanings of lexical items are determined mostly by context and
conventional use. Clearly, the most promising direction seems to be a careful and formal
elucidation of the polymorphic lexicons, and this will form the basis of the subsequent
discussion of both the structure and the content of lexical entries.

Historical Overview
The study of word meaning has occupied philosophers for centuries, beginning at least with Aristotle's theory of meaning. Locke, Hume, and Reid all paid particular attention to the meanings of words, but it was not until the 19th century that philological and psychological investigations of word meaning arose, with Bréal (1897), Erdmann (1900), Trier (1931), Stern (1931/1968), and others focusing on word connotation, semantic drift, and word associations in the mental lexicon as well as in social contexts.
Interestingly, Russell, Frege, and other early analytic philosophers were interested in language not as a linguistic phenomenon but simply as the medium through which judgments are formed and expressed. Hence, they paid little regard to the relations between senses of words except where these affect the nature of a judgment, for example, within intensional contexts. Nineteenth-century semanticists and semasiologists, on the other hand, viewed polysemy as the life force of human language. Bréal, for example, considered it a necessary creative component of language and argued that this phenomenon, better than most in semantics, illustrates the cognitive and conceptualizing force of the human species. Despite this enthusiasm, however, semasiology produced no lasting legacy for the study of lexical semantics. In fact, there was no systematic research into lexical meaning until structural linguists extended the relational techniques of Saussure (1916/1983) and elaborated the framework of componential analysis for language meaning (Jakobson, 1970).
The idea behind componential analysis is the reduction of a word's meaning into its ultimate contrastive elements. These contrastive elements are structured in a matrix, allowing for dimensional analysis and generalizations to be made about lexical sets occupying the cells in the matrix. This technique developed into a general framework for linguistic description called distinctive feature analysis (Jakobson and Halle, 1956), which was essentially the inspiration for Katz and Fodor's (1963) theory of lexical semantics within transformational grammar. In this theory, usually referred to as 'markerese', a lexical entry consists of grammatical and semantic markers and a special feature called a semantic distinguisher. Weinreich (1972) and much subsequent discussion demonstrated that this model is far too impoverished to characterize the compositional mechanisms inherent in language.
In the late 1960s and early 1970s, alternative models of word meaning emerged (Fillmore,
1965; Gruber, 1965; Jackendoff, 1972; Lakoff, 1965) that respected the relational structure of
sentence meaning while encoding the named semantic functions in lexical entries. In Dowty
(1979), a model theoretic interpretation of the decompositional techniques of Lakoff,
McCawley, and Ross was developed. Recently, the role of lexical-syntactic mapping has
become more evident, particularly with the growing concern over projection from lexical
semantic form, the problem of verbal alternations and polyvalency, and the phenomenon of
polysemy.

Ambiguity and Polysemy


Given the compactness of a lexicon relative to the number of objects and relations in the
world, and the concepts we have for them, lexical ambiguity is inevitable. Add to this the
cultural, historical, and linguistic blending that contributes to the meanings of our lexical
items, and ambiguity can appear arbitrary as well. Hence, homonymy, where one lexical form has many meanings, is to be expected in a language. Examples of homonyms are illustrated in the following sentences:
(1a) Mary walked along the bank of the river.
(1b) Bank of America is the largest bank in the city.
(2a) Drop me a line when you are in Boston.
(2b) We built a fence along the property line.
(3a) First we leave the gate, then we taxi down the runway.
(3b) John saw the taxi on the street.
(4a) The judge asked the defendant to approach the bar.
(4b) The defendant was in the pub at the bar.

Weinreich (1964) calls such lexical distinctions contrastive ambiguity, where it is clear that
the senses associated with the lexical item are unrelated. For this reason, it is generally
assumed that homonyms are represented as separate lexical entries within the organization
of the lexicon. This accords with a view of lexical organization that has been termed a sense
enumeration lexicon (cf. Pustejovsky, 1995). That is, a lexicon is sense enumerative when every word that has multiple senses stores these senses as separate lexical entries.
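This storage model can be made concrete in a short sketch; the LexicalEntry class and senses() helper below are illustrative assumptions, not a published formalism:

```python
# A minimal sketch of a sense-enumerative lexicon: every sense of an
# ambiguous form is stored as an independent entry, and nothing in the
# organization relates the senses to one another.
from dataclasses import dataclass

@dataclass
class LexicalEntry:
    form: str        # orthographic form
    category: str    # syntactic category
    sense: str       # gloss standing in for a full semantic value

# The homonym "bank" from (1a)/(1b) requires two unrelated entries.
LEXICON = [
    LexicalEntry("bank", "N", "sloping land alongside a river"),
    LexicalEntry("bank", "N", "financial institution"),
    LexicalEntry("line", "N", "short written message"),
    LexicalEntry("line", "N", "boundary marking a property"),
]

def senses(form: str) -> list[str]:
    """Return every stored sense of a form; disambiguation is left
    entirely to later processing."""
    return [e.sense for e in LEXICON if e.form == form]

print(senses("bank"))  # two unrelated senses, listed separately
```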
This model becomes difficult to maintain, however, when we consider the phenomenon
known as polysemy. Polysemy is the relationship that exists between different senses of a
word that are related in some logical manner rather than arbitrarily, as in the previous
examples. We can distinguish three broad types of polysemy, each presenting a novel set of
challenges to lexical semantics and linguistic theory.
a. Deep semantic typing: single argument polymorphism
b. Syntactic alternations: multiple argument polymorphisms
c. Dot objects: lexical reference to objects that have multiple facets
The first class refers mainly to functors allowing a range of syntactic variation in a single
argument. For example, aspectual verbs (begin and finish), perception verbs (see, hear), and
most propositional attitude verbs (know, believe) subcategorize for multiple syntactic forms in complement position, as illustrated in (5)-(7):
(5a) Mary began to read the novel.
(5b) Mary began reading the novel.
(5c) Mary began the novel.
(6a) Bill saw John leave.
(6b) Bill saw John leaving.
(6c) Bill saw John.
(7a) Mary believes that John told the truth.
(7b) Mary believes what John said.
(7c) Mary believes John's story.

What these and many other cases of multiple selection share is that the underlying relation
between the verb and each of its complements is essentially identical. For example, in (7),
the complement to the verb believe in all three sentences is a proposition; in (5), what is
begun in each sentence is an event of some sort; and in (6), the object of the perception is
(arguably) an event in each case. This has led some linguists to argue for semantic selection
(cf. Chomsky, 1986; Grimshaw, 1979) and others to argue for structured selectional
inheritance (Godard and Jayez, 1993). In fact, these perspectives are not that distant from
one another (cf. Pustejovsky, 1995): in either view, there is an explicit lexical association
between syntactic forms that is formally modeled by the grammar.
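The intuition behind semantic selection can be rendered in a short sketch: the verb's requirement is stated over semantic types rather than syntactic categories, so all of the variants in (5) satisfy begin. The type tables and the selects_ok() helper are assumptions for this toy, not part of any of the cited theories:

```python
# An illustrative sketch of semantic selection: each syntactic complement
# form is paired with the semantic type it (arguably) denotes, and the
# verb checks only that type.
SEM_TYPE = {
    "to read the novel": "event",        # infinitival complement, (5a)
    "reading the novel": "event",        # gerundive complement, (5b)
    "the novel": "event",                # NP complement coerced to an event, (5c)
    "that John told the truth": "proposition",  # clausal complement, (7a)
}

SELECTS = {"begin": "event", "believe": "proposition"}

def selects_ok(verb: str, complement: str) -> bool:
    """Selection succeeds iff the complement denotes the required type."""
    return SEM_TYPE.get(complement) == SELECTS[verb]

# All three syntactic variants in (5) satisfy begin's single requirement.
for comp in ("to read the novel", "reading the novel", "the novel"):
    print(selects_ok("begin", comp))                       # True, True, True
print(selects_ok("believe", "that John told the truth"))   # True
```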
The second type of polysemy (syntactic alternations) involves verbal forms taking arguments
in alternating constructions, the so-called verbal alternations (cf. Levin, 1993). These are
true instances of polysemy because there is a logical (typically causal) relation between the
two senses of the verb. As a result, the lexicon must either relate the senses through lexical
rules (such as in head-driven phrase structure grammar (HPSG) treatments; cf. Pollard and
Sag, 1994) or assume that there is one lexical form that has multiple syntactic realizations
(cf. Pustejovsky and Busa, 1995).
(8a) The window opened suddenly.
(8b) Mary opened the window suddenly.
(9a) Bill began his lecture on time.
(9b) The lecture began on time.
(10a) The milk spilled onto the table.
(10b) Mary spilled the milk onto the table.
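A lexical-rule treatment of the causative/inchoative pair in (8) can be sketched as follows. This is only a toy rendering in the spirit of HPSG lexical rules; the dictionaries stand in for feature structures and are not the actual geometry of Pollard and Sag (1994):

```python
# A sketch of a lexical rule deriving the one-argument inchoative entry
# for "open" from the two-argument causative one, preserving the causal
# relation between the senses.
def causative_to_inchoative(entry: dict) -> dict:
    """cause(x, become(open(y)))  ==>  become(open(y)), with y promoted."""
    agent, theme = entry["args"]           # ("x", "y") in the causative frame
    return {
        "form": entry["form"],
        "args": (theme,),                  # the theme surfaces as subject
        "sem": entry["sem"]["effect"],     # keep only the change of state
    }

open_causative = {
    "form": "open",
    "args": ("x", "y"),
    "sem": {"cause": "x", "effect": ("become", ("open", "y"))},
}

open_inchoative = causative_to_inchoative(open_causative)
print(open_inchoative)
# {'form': 'open', 'args': ('y',), 'sem': ('become', ('open', 'y'))}
```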

The final form of polysemy reviewed here is encountered mostly in nominals and has been termed 'regular polysemy' (cf. Apresjan, 1973) and 'logical polysemy' (cf. Pustejovsky, 1991) in the literature; it is illustrated in the following sentences:
(11a) Mary carried the book home.
(11b) Mary doesn't agree with the book.
(12a) Mary has her lunch in her backpack.
(12b) Lunch was longer today than it was yesterday.
(13a) The flight lasted 3 hours.
(13b) The flight landed on time in Los Angeles.

Notice that in each of the pairs, the same nominal form is assuming different semantic
interpretations relative to its selective context. For example, in (11a) the noun book refers to
a physical object, whereas in (11b) it refers to the informational content. In (12a), lunch
refers to the physical manifestation of the food, whereas in (12b) it refers to the eating
event. Finally, in (13a) flight refers to the flying event, whereas in (13b) it refers to the plane.
This phenomenon of polysemy is one of the most challenging in the area and has stimulated much research (Bouillon, 1997; Bouillon and Busa, 2001). In order to understand how each of these cases of polysemy can be handled, we must first familiarize ourselves with the structure of individual lexical entries.

Lexical Relations
Another important aspect of lexical semantics is the study of how words are semantically
related to one another. Four classes of lexical relations, in particular, are important to
recognize: synonymy, antonymy, hyponymy, and meronymy.
Synonymy is generally taken to be a relation between words rather than concepts. One fairly
standard definition states that two expressions are synonymous if substituting one for the
other in all contexts does not change the truth value of the sentence where the substitution
is made (cf. Cruse, 1986, 2004; Lyons, 1977). A somewhat weaker definition makes reference
to the substitution relative to a specific context. For example, in the context of carpentry,
plank and board might be considered synonyms, but not necessarily in other domains (cf.
Miller et al., 1990). The relation of antonymy is characterized in terms of semantic opposition
and, like synonymy, is properly defined over pairs of lexical items rather than concepts.
Examples of antonymy are rise/fall, heavy/light, fast/slow, and long/short (cf. Cruse, 1986;
Miller, 1991). It is interesting to observe that co-occurrence data illustrate that synonyms do
not necessarily share the same antonyms. For example, rise and ascend as well as fall and
descend are similar in meaning, yet neither fall/ascend nor rise/descend are antonym pairs.
For further details see Miller et al. (1990).
The most studied relation in the lexical semantic community is hyponymy, the taxonomic
relationship between words, as defined in WordNet (Fellbaum, 1998) and other semantic
networks. For example, specifying car as a hyponym of vehicle is equivalent to saying that vehicle is a superconcept of the concept car, or that the set denoted by car is a subset of the set of individuals denoted by vehicle.
One of the most difficult lexical relations to define and treat formally is that of meronymy, the relation of parts to the whole. The relation is familiar from knowledge representation languages with predicates or slot-names such as part-of and made-of. For treatments of this relation in lexical semantics, see Miller et al. (1990) and Cruse (1986).
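These relations lend themselves to a simple computational rendering. The toy taxonomy below is illustrative of how hyponymy, a transitive is-a relation in WordNet-style networks (Fellbaum, 1998), is kept distinct from meronymy:

```python
# A sketch of hyponymy as a transitive is-a chain, with meronymy stored
# as a separate part-of relation; the tiny taxonomy is illustrative.
HYPERNYM = {"car": "vehicle", "vehicle": "artifact", "artifact": "physical_object"}
PART_OF = {"wheel": "car"}   # meronymy: a wheel is part of a car, not a kind of car

def is_hyponym_of(word: str, concept: str) -> bool:
    """Follow the is-a chain upward; hyponymy is transitive."""
    while word in HYPERNYM:
        word = HYPERNYM[word]
        if word == concept:
            return True
    return False

assert is_hyponym_of("car", "vehicle")          # direct hypernym
assert is_hyponym_of("car", "physical_object")  # inherited transitively
assert not is_hyponym_of("wheel", "vehicle")    # part-of does not feed is-a
```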

The Semantics of a Lexical Entry


It is generally assumed that there are four components to a lexical item: phonological,
orthographic, syntactic, and semantic information. Here, we focus first on syntactic features
and then on what semantic information must be encoded in an individual lexical entry.
There are two types of syntactic knowledge associated with a lexical item: its category and its subcategory. The former includes traditional classifications of both the major categories, such as noun, verb, adjective, and preposition, and the minor categories, such as adverbs, conjunctions, quantifier elements, and determiners. Knowledge of the
subcategory of a lexical item is typically information that differentiates categories into
distinct, distributional classes. This sort of information may be usefully separated into two
types, contextual features and inherent features. The former are features that may be
defined in terms of the contexts in which a given lexical entry may occur. Subcategorization
information marks the local syntactic context for a word. It is this information that ensures
that the verb devour, for example, is always transitive in English, requiring a direct object;
the lexical entry encodes this requirement with a subcategorization feature specifying that a
noun phrase (NP) appear to its right. Another type of context encoding is collocational
information, where patterns that are not fully productive in the grammar can be tagged. For
example, the adjective heavy as applied to drinker and smoker is collocational and not freely
productive in the language (Mel'čuk, 1988). Inherent features, on the other hand, are
properties of lexical entries that are not easily reduced to a contextual definition but, rather,
refer to the ontological typing of an entity. These include such features as count/mass (e.g.,
pebble vs. water), abstract, animate, human, physical, and so on.
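The two kinds of subcategory knowledge can be sketched as follows; the attribute names are assumptions for this toy, not a published feature geometry:

```python
# An illustrative sketch of contextual vs. inherent features: devour
# carries a contextual subcategorization feature (an NP must follow),
# while nouns carry inherent features such as count/mass and type.
DEVOUR = {
    "category": "V",
    "subcat": ["NP"],  # contextual: a direct object NP must appear to the right
}
PEBBLE = {"maximal": "NP", "inherent": {"count": True,  "type": "physical"}}
WATER  = {"maximal": "NP", "inherent": {"count": False, "type": "physical"}}

def frame_satisfied(verb: dict, complements: list[dict]) -> bool:
    """Check the local syntactic context against the subcat list."""
    return [c["maximal"] for c in complements] == verb["subcat"]

print(frame_satisfied(DEVOUR, [PEBBLE]))  # True: the transitive frame is met
print(frame_satisfied(DEVOUR, []))        # False: *'John devoured' is out
```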
Lexical items can be systematically grouped according to their syntactic and semantic
behavior in the language. For this reason, there have been two major traditions of word
clustering, corresponding to this distinction. Broadly speaking, for those concerned mainly
with grammatical behavior, the most salient aspect of a lexical item is its argument
structure; for those focusing on a word's entailment properties, the most important aspect is
its semantic class.
In this section, these two approaches are examined and it is shown how their concerns can
be integrated into a common lexical representation.
Lexical Semantic Classifications

Conventional approaches to lexicon design and lexicography have been relatively informal
with regard to forming taxonomic structures for the word senses in the language. For
example, the top concepts in WordNet (Miller et al., 1990) illustrate how words are
characterized by local clusterings of semantic properties. As with many ontologies, however,
it is difficult to discern a coherent global structure for the resulting classification beyond a
weak descriptive labeling of words into extensionally defined sets.
One of the most common ways to organize lexical knowledge is by means of type or feature
inheritance mechanisms (Carpenter, 1992; Copestake and Briscoe, 1992; Evans and Gazdar,
1990; Pollard and Sag, 1994). Furthermore, Briscoe et al. (1993) described a rich system of
types for allowing default mechanisms into lexical type descriptions. Similarly, type
structures, such as that shown in Figure 1, can express the inheritance of syntactic and
semantic features, as well as the relationship between syntactic classes and alternations (cf.
Alsina, 1992; Davis, 1996; Koenig and Davis, 1999; Sanfilippo, 1993) and other relations (cf.
Pustejovsky, 2001; Pustejovsky and Boguraev, 1993).
In the remainder of this section, we first examine the approach to characterizing the weak
constraints imposed on a lexical item associated with its arguments. Then, we examine
attempts to model lexical behavior by means of internal constraints imposed on the

predicate. Finally, it is shown how, in some respects, these are very similar enterprises and
both sets of constraints may be necessary to model lexical behavior.
Argument Structure

Once the base syntactic and semantic typing for a lexical item has been specified, its
subcategorization and selectional information must be encoded in some form. There are two
major techniques for representing this type of knowledge:
1. Associate named roles with the arguments to the lexical item (Fillmore, 1985; Gruber,
1965; Jackendoff, 1972).
2. Associate a logical decomposition with the lexical item; meanings of arguments are
determined by how the structural properties of the representation are interpreted (cf. Hale
and Keyser, 1993; Jackendoff, 1983; Levin and Rappaport, 1995).
Figure 1 Type structures.

One influential way of encoding selectional behavior has been the theory of thematic
relations (cf. Gruber, 1976; Jackendoff, 1972). Thematic relations are now generally defined
as partial semantic functions of the event being denoted by the verb or noun, and they
behave according to a predefined calculus of role relations (e.g., Dowty, 1989). For
example, semantic roles such as agent, theme, and goal can be used to partially determine
the meaning of a predicate when they are associated with the grammatical arguments to a
verb.
(14a) put <AGENT, THEME, LOCATION>
(14b) borrow <RECIPIENT, THEME, SOURCE>

Thematic roles can be ordered relative to each other in terms of an implicational hierarchy.
For example, there is considerable use of a universal subject hierarchy such as is shown in
the following (cf. Comrie, 1981; Fillmore, 1968):
(15) AGENT > RECIPIENT/BENEFACTIVE > THEME/PATIENT > INSTRUMENT > LOCATION
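One common use of the hierarchy in (15) is to predict which role surfaces as grammatical subject; a minimal sketch, under the simplified assumption that the highest-ranked role present in a verb's thematic grid wins:

```python
# An illustrative use of the subject hierarchy in (15): among the roles
# in a verb's grid, the one ranked highest is realized as subject.
HIERARCHY = ["AGENT", "RECIPIENT/BENEFACTIVE", "THEME/PATIENT",
             "INSTRUMENT", "LOCATION"]

def subject_role(grid: list[str]) -> str:
    """Return the role that outranks the others on the hierarchy."""
    return min(grid, key=HIERARCHY.index)

# put <AGENT, THEME, LOCATION> as in (14a): the agent outranks the rest.
print(subject_role(["AGENT", "THEME/PATIENT", "LOCATION"]))  # AGENT
# With no agent present, the theme is promoted to subject.
print(subject_role(["THEME/PATIENT", "LOCATION"]))           # THEME/PATIENT
```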

Many linguists have questioned the general explanatory coverage of thematic roles, however, and have chosen alternative methods for capturing the generalizations they promised. Dowty (1991) suggested that theta-role generalizations are best captured by entailments associated with the predicate.
A theta-role can then be seen as the set of predicate entailments that are properties of a
particular argument to the verb. Characteristic entailments might be thought of as prototype
roles, or proto-roles; this allows for degrees or shades of meaning associated with the
arguments to a predicate. Others have opted for a more semantically neutral set of labels to
assign to the parameters of a relation, whether it is realized as a verb, noun, or adjective. For
example, the theory of argument structure as developed by Williams (1981), Grimshaw
(1990), and others can be seen as a move toward a more minimalist description of semantic
differentiation in the verb's list of parameters. The argument structure for a word can be
seen as the simplest specification of its semantics, indicating the number and type of
parameters associated with the lexical item as a predicate. For example, the verb die can be represented as a predicate taking one argument, kill as taking two arguments, and give as taking three:
(16a) die (x)
(16b) kill (x,y)
(16c) give (x,y,z)

What originally began as the simple listing of the parameters or arguments associated with a
predicate has developed into a sophisticated view of the way arguments are mapped onto
syntactic expressions. Williams's (1981) distinction between external and internal arguments and Grimshaw's proposal for a hierarchically structured representation (cf. Grimshaw, 1990) provide us with the basic syntax for one aspect of a word's meaning. Similar remarks hold for the argument list structure in HPSG (Pollard and Sag, 1994) and Lexical Functional Grammar (Bresnan, 1994).
The interaction of a structured argument list and a rich system of types, such as that
presented previously, provides a mechanism for semantic selection through inheritance.
Consider, for instance, the sentence pairs in (17):
(17a) The man/the rock fell.
(17b) The man/*the rock died.

Now consider how the selectional distinction for a feature such as animacy is modeled so as
to explain the selectional constraints of predicates. For the purpose of illustration, the
arguments of a verb will be identified as being typed from the system shown previously.
(18a) λx: physical [fall(x)]
(18b) λx: animate [die(x)]

In the sentences in (17), it is clear that rocks cannot die and men can, but it is still not obvious how this judgment is computed, given what we would assume are the types associated with the nouns rock and man, respectively. What accomplishes this computation is a rule of subtyping, Ψ, that allows the type associated with the noun man (i.e., human) to also be accepted as the type animate, which is what the predicate die requires of its argument, as stated in (18b) (cf. Carpenter, 1992).
(19) Ψ [human ⊑ animate]: human → animate

The rule Ψ applies since the concept human is subtyped under animate in the type
hierarchy. Parallel considerations rule out the noun rock as a legitimate argument to die since
it is not subtyped under animate. Hence, one of the concerns given previously for how
syntactic processes can systematically keep track of which selectional features are entailed
and which are not is partially addressed by such lattice traversal rules as the one presented
here.
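The lattice traversal just described is easy to render as a sketch; the tiny type lattice below contains only the fragment needed for (17)-(19) and is purely illustrative:

```python
# A sketch of the subtyping rule in (19): an argument is licensed if its
# type is subsumed by the type the predicate requires.
SUBTYPE = {"human": "animate", "animate": "physical", "rock": "physical"}

def subsumed(t: str, required: str) -> bool:
    """Walk up the lattice: human <= animate licenses die(human)."""
    while t != required:
        if t not in SUBTYPE:
            return False
        t = SUBTYPE[t]
    return True

def die(arg_type: str) -> str:
    # (18b): the lambda binds an argument restricted to type animate.
    if not subsumed(arg_type, "animate"):
        raise TypeError(f"*die({arg_type}): selectional violation")
    return f"die({arg_type})"

print(die("human"))   # licensed: human is subtyped under animate, as in (17b)
try:
    die("rock")       # rock is physical but not animate
except TypeError as err:
    print(err)        # *die(rock): selectional violation
```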
Event Structure and Lexical Decomposition

The second approach to lexical specification mentioned previously is to define constraints internally to the predicate. Traditionally, this has been known as lexical decomposition. In this
this section, we review the motivations for decomposition in linguistic theory and the
proposals for encoding lexical knowledge as structured objects. We then relate this to the
way in which verbs can be decomposed in terms of eventualities (Tenny and Pustejovsky,
2000).
Since the 1960s, lexical semanticists have attempted to formally model the semantic
relations between lexical items such as between the adjective dead and the verbs die and kill
(cf. Lakoff, 1965; McCawley, 1968) in the following sentences:
(20a) John killed Bill.
(20b) Bill died.
(20c) Bill is dead.

Assuming the underlying form for a verb such as kill directly encodes the stative predicate in
(20c) and the relation of causation, generative semanticists posited representations such as
(21):
(21) (CAUSE (x, (BECOME (NOT (ALIVE y)))))

Here the predicate CAUSE is represented as a relation between an individual causer x and an
expression involving a change of state in the argument y. Carter (1976) proposes a quite similar representation, shown here for the causative verb darken:
(22) (x CAUSE ((y BE DARK) CHANGE))

Although there is an intuition that the cause relation involves a causer and an event, neither Lakoff nor Carter makes this commitment explicitly. In fact, it has taken several decades for Davidson's (1967) observations regarding the role of events in the determination of verb meaning to find their way convincingly into the major linguistic frameworks. A new synthesis has emerged that attempts to model verb meanings as complex predicative structures with
rich event structures (cf. Hale and Keyser, 1993; Parsons, 1990; Pustejovsky, 1991). This
research has developed the idea that the meaning of a verb can be analyzed into a
structured representation of the event that the verb designates, and it has furthermore
contributed to the realization that verbs may have complex, internal event structures. Recent
work has converged on the view that complex events are structured into an inner and an
outer event, where the outer event is associated with causation and agency, and the inner
event is associated with telicity (completion) and change of state (cf. Tenny and Pustejovsky,
2000).
Jackendoff (1990) developed an extensive system of what he calls conceptual
representations, which parallel the syntactic representations of sentences of natural
language. These employ a set of canonical predicates, including CAUSE, GO, TO, and ON, and
canonical elements, including Thing, Path, and Event. These approaches represent verb
meaning by decomposing the predicate into more basic predicates. This work owes obvious
debt to the innovative work within generative semantics, as illustrated by McCawley's (1968) analysis of the verb kill. Recent versions of lexical representations inspired by generative semantics can be seen in the lexical relational structures of Hale and Keyser (1993), where syntactic tree structures are employed to capture the same elements of causation and change of state as in the representations of Carter, Levin and Rapoport, Jackendoff, and Dowty. The work of Levin and Rappaport, building on Jackendoff's lexical conceptual structures, has been influential in further articulating the internal structure of verb meanings (Levin and Rappaport, 1995).
Pustejovsky (1991) extended the decompositional approach presented in Dowty (1979) by explicitly reifying the events and subevents in the predicative expressions. Unlike Dowty's treatment of lexical semantics, where the decompositional calculus builds on propositional or predicative units (as discussed previously), a syntax of event structure makes explicit reference to quantified events as part of the word meaning. Pustejovsky further introduced a
tree structure to represent the temporal ordering and dominance constraints on an event
and its subevents. For example, a predicate such as build is associated with a complex event
such as the following (cf. also Moens and Steedman, 1988):
(23) [transition[e1:PROCESS] [e2:STATE]]

The process consists of the building activity itself, whereas the state represents the result of
there being the object built. Grimshaw (1990) adopted this theory in her work on argument
structure, where complex events such as break are given a similar representation. In such
structures, the process consists of what x does to cause the breaking, and the state is the
resultant state of the broken item. The process corresponds to the outer causing event as
discussed previously, and the state corresponds in part to the inner change of state event.
Both Pustejovsky and Grimshaw differ from the previous authors in assuming a specific level
of representation for event structure, distinct from the representation of other lexical
properties. Furthermore, they follow Higginbotham (1985) in adopting an explicit reference to
the event place in the verbal semantics.
Rappaport and Levin (2001) adopted a large component of the event structure model for
their analysis of the resultative construction in English. Event decomposition has also been
employed for properties of adjectival selection, the interpretation of compounds, and stage
and individual-level predication.
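The subevent structure in (23) can be rendered as a small tree; a minimal sketch, assuming an illustrative Event class and ordering convention rather than any of the cited formalisms:

```python
# A sketch of the complex event in (23): build as a transition whose
# ordered subevents are the building process and the resultant state.
from dataclasses import dataclass, field

@dataclass
class Event:
    sort: str                  # PROCESS, STATE, or transition
    label: str = ""
    subevents: list["Event"] = field(default_factory=list)  # ordered: e1 < e2

build = Event("transition", "build", [
    Event("PROCESS", "x builds y"),     # e1: the building activity (outer event)
    Event("STATE", "y exists, built"),  # e2: the resultant state (inner event)
])

def culmination(e: Event) -> Event:
    """The inner change-of-state subevent: the last-ordered subevent."""
    return e.subevents[-1] if e.subevents else e

print(culmination(build).label)  # 'y exists, built'
```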
Qualia Structure

Thus far, we have focused on the lexical semantics of verb entries. All of the major categories, however, are encoded with syntactic and semantic feature structures that determine their constructional behavior and subsequent meaning at logical form. In generative lexicon theory (Pustejovsky, 1995), it is assumed that word meaning is structured
on the basis of four generative factors, or qualia roles, that capture how humans
understand objects and relations in the world and provide the minimal explanation for the
linguistic behavior of lexical items (these are inspired in large part by Moravcsik's (1975,
1990) interpretation of Aristotelian aitia). These are the formal role, the basic category that
distinguishes the object within a larger domain; the constitutive role, the relation between an
object and its constituent parts; the telic role, its purpose and function; and the agentive
role, factors involved in the objects origin or coming into being. Qualia structure is at the
core of the generative properties of the lexicon since it provides a general strategy for
creating new types. For example, consider the properties of nouns such as rock and chair.
These nouns can be distinguished on the basis of semantic criteria that classify them in
terms of general categories such as natural_kind and artifact_object. Although very useful, this
is not sufficient to discriminate semantic types in a way that also accounts for their
grammatical behavior. A crucial distinction between rock and chair concerns the properties
that differentiate natural_kinds from artifacts:
Functionality plays a crucial role in the process of individuation of artifacts but not of natural kinds. This is reflected in grammatical behavior, whereby 'a good chair' or 'enjoy the chair' are well-formed expressions reflecting the specific purpose for which an artifact is designed, but 'good rock' or 'enjoy a rock' are semantically ill formed since for rock the functionality (i.e., telic) is undefined. Exceptions exist when new concepts are referred to, such as when the object is construed relative to a specific activity, as in 'The climber enjoyed that rock'; rock takes on a new meaning by virtue of having telicity associated with it, and this is accomplished by integration with the semantics of the subject NP. Although chair and rock
are both physical_object, they differ in their mode of coming into being (i.e., agentive):
artifacts are man-made, rocks develop in nature. Similarly, a concept such as food or cookie
has a physical manifestation or denotation, but also a functional grounding, pertaining to the
relation of eating. These apparently contradictory aspects of a category are orthogonally
represented by the qualia structure for that concept, which provides a coherent structuring
for different dimensions of meaning.
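The chair/rock contrast can be sketched directly; this is a minimal illustration of the telic-based account, with invented class and helper names rather than the formal machinery of generative lexicon theory:

```python
# A sketch of qualia structure for the chair/rock contrast: the artifact
# has telic and agentive values, while the natural kind leaves them
# undefined, which is what blocks "enjoy a rock" out of context.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Qualia:
    formal: str                      # what kind of thing it is
    constitutive: str                # what it is made of
    telic: Optional[str] = None      # its purpose or function, if any
    agentive: Optional[str] = None   # how it comes into being

chair = Qualia("physical_object", "wood, legs, seat",
               telic="sit_in", agentive="make")     # artifact
rock = Qualia("physical_object", "mineral matter")  # natural kind

def enjoy_ok(noun: Qualia) -> bool:
    """'enjoy x' needs an activity; by default it is supplied by x's telic."""
    return noun.telic is not None

print(enjoy_ok(chair))  # True: "enjoy the chair" via its telic role
print(enjoy_ok(rock))   # False, unless context supplies an activity,
                        # as in "The climber enjoyed that rock"
```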
See also: Compositionality: Semantic Aspects; Lexicon, Generative; Semantics of Prosody; Syntax-Semantics Interface.

Bibliography

Alsina A (1992). On the argument structure of causatives. Linguistic Inquiry 23(4), 517-555.
Apresjan J D (1973). Regular polysemy. Linguistics 142, 5-32.
Bouillon P (1998). Polymorphie et sémantique lexicale: le cas des adjectifs. Lille: Presse du Septentrion.
Bouillon P & Busa F (2001). The language of word meaning. Cambridge [England], New York: Cambridge
University Press.
Bréal M (1897). Essai de sémantique (science des significations). Paris: Hachette.
Bresnan J (ed.) (1982). The mental representation of grammatical relations. Cambridge, MA: MIT Press.
Bresnan J (1994). Locative inversion and the architecture of universal grammar. Language 70, 72-131.
Briscoe T, de Paiva V & Copestake A (eds.) (1993). Inheritance, defaults, and the lexicon. Cambridge,
UK: Cambridge University Press.
Carpenter R (1992). Typed feature structures. Computational Linguistics 18(2).
Chomsky N (1955). The logical structure of linguistic theory. Chicago: University of Chicago Press
(Original work published 1975).
Chomsky N (1965). Aspects of the theory of syntax. Cambridge: MIT Press.
Comrie B (1981). Language universals and linguistic typology. Chicago, IL: The University of Chicago
Press.
Copestake A & Briscoe T (1992). Lexical operations in a unification-based framework. In Pustejovsky J & Bergler S (eds.) Lexical semantics and knowledge representation. Berlin: Springer Verlag.
Cruse D A (1986). Lexical semantics. Cambridge, UK: Cambridge University Press.
Cruse D A (2004). Meaning in language: an introduction to semantics and pragmatics (2nd edn.).
Oxford: Oxford University Press.
Davidson D (1967). The logical form of action sentences. In Rescher N (ed.) The logic of decision and
action. Pittsburgh: Pittsburgh University Press.

Davis A (1996). Lexical semantics and linking and the hierarchical lexicon. Ph.D. diss., Stanford
University.
Davis A & Koenig J-P (2000). Linking as constraints on word classes in a hierarchical lexicon. Language 76, 56-91.
Dowty D R (1979). Word meaning and Montague grammar. Dordrecht, The Netherlands: D. Reidel.
Dowty D R (1989). On the semantic content of the notion 'thematic role'. In Chierchia G, Partee B & Turner R (eds.) Properties, types, and meaning, vol. 2: Semantic issues. Dordrecht: D. Reidel.
Dowty D (1991). Thematic proto-roles and argument selection. Language 67, 547-619.
Erdmann K (1900). Die Bedeutung des Wortes: Aufsätze aus dem Grenzgebiet der Sprachpsychologie und Logik. Leipzig: Avenarius.
Evans R & Gazdar G (1990). The DATR papers: February 1990. Cognitive Science Research Paper CSRP
139, School of Cognitive and Computing Science, University of Sussex, Brighton, England.
Fillmore C (1965). Entailment rules in a semantic theory. POLA Report 10. Columbus, OH: Ohio State
University.
Fillmore C (1968). The case for case. In Bach E W & Harms R T (eds.) Universals in linguistic theory.
New York: Holt, Rinehart and Winston.
Gazdar G, Klein E, Pullum G & Sag I (1985). Generalized phrase structure grammar. Cambridge, MA:
Harvard University Press.
Goldberg A (1995). Constructions: a construction grammar approach to argument structure. Chicago:
University of Chicago Press.
Grimshaw J (1979). Complement selection and the lexicon. Linguistic Inquiry 10, 279-326.
Grimshaw J (1990). Argument structure. Cambridge: MIT Press.
Gruber J S (1965/1976). Lexical structures in syntax and semantics. Amsterdam: North-Holland.
Hale K & Keyser S J (1993). On argument structure and the lexical expression of syntactic relations. In Hale K & Keyser S J (eds.) The view from Building 20. Cambridge, MA: MIT Press.
Halle M, Bresnan J & Miller G (eds.) (1978). Linguistic theory and psychological reality. Cambridge: MIT
Press.
Higginbotham J (1985). On semantics. Linguistic Inquiry 16, 547-593.
Hjelmslev L (1961). Prolegomena to a theory of language. Whitfield F (ed.). Madison: University of Wisconsin Press (Original work published 1943).
Jackendoff R (1972). Semantic interpretation in generative grammar. Cambridge: MIT Press.
Jackendoff R (1983). Semantics and cognition. Cambridge, MA: MIT Press.
Jackendoff R (1990). Semantic structures. Cambridge: MIT Press.
Jackendoff R (1992). Babe Ruth homered his way into the hearts of America. In Stowell T & Wehrli E (eds.) Syntax and the lexicon. San Diego: Academic Press. 155-178.
Jackendoff R (2002). Foundations of language: brain, meaning, grammar. Oxford: Oxford University
Press.
Jakobson R (1970). Recent developments in linguistic science. Perennial Press.
Jakobson R (1974). Main trends in the science of language. New York: Harper & Row.
Jakobson R & Halle M (1956). Fundamentals of language. The Hague, The Netherlands: Mouton.
Katz J (1972). Semantic theory. New York: Harper & Row.
Katz J & Fodor J (1963). The structure of a semantic theory. Language 39, 170-210.
Lakoff G (1965/1970). Irregularity in syntax. New York: Holt, Rinehart, and Winston.
Levin B & Rappaport Hovav M (1995). Unaccusativity: at the syntax-semantics interface. Cambridge: MIT Press.
Lyons J (1977). Semantics (2 volumes). Cambridge: Cambridge University Press.
McCawley J (1968). Lexical insertion in a transformational grammar without deep structure.
Proceedings of the Chicago Linguistic Society 4.
Mel'čuk I A (1988). Dependency syntax. Albany, NY: SUNY Press.
Miller G (1991). The science of words. New York: Scientific American Library.
Miller G, Beckwith R, Fellbaum C, Gross D & Miller K J (1990). Introduction to WordNet: an on-line lexical database. International Journal of Lexicography 3, 235-244.
Moens M & Steedman M (1988). Temporal ontology and temporal reference. Computational Linguistics 14, 15-28.
Moravcsik J M (1975). Aitia as generative factor in Aristotle's philosophy. Dialogue 14, 622-636.
Moravcsik J M (1990). Thought and language. London: Routledge.
Parsons T (1990). Events in the semantics of English. Cambridge, MA: MIT Press.
Pinker S (1989). Learnability and cognition: the acquisition of argument structure. Cambridge: MIT
Press.

Pollard C & Sag I (1994). Head-driven phrase structure grammar. Chicago: University of Chicago Press; Stanford: CSLI.
Pustejovsky J (1991). The syntax of event structure. Cognition 41, 47-81.
Pustejovsky J (1995). The generative lexicon. Cambridge: MIT Press.
Pustejovsky J (2001). Type construction and the logic of concepts. In Bouillon P & Busa F (eds.) The
syntax of word meaning. Cambridge: Cambridge University Press.
Pustejovsky J & Boguraev P (1993). Lexical knowledge representation and natural language processing. Artificial Intelligence 63, 193-223.
Pustejovsky J & Busa F (1995). Unaccusativity and event composition. In Bertinetto P M, Bianchi V, Higginbotham J & Squartini M (eds.) Temporal reference: aspect and actionality. Turin: Rosenberg and Sellier.
Rappaport Hovav M & Levin B (2001). An event structure account of English resultatives. Language 77, 766-797.
Sanfilippo A (1993). LKB encoding of lexical knowledge. In Briscoe T, de Paiva V & Copestake A (eds.) Inheritance, defaults, and the lexicon. Cambridge: Cambridge University Press.
Saussure F de (1983). Course in general linguistics. Harris R (trans.). (Original work published 1916).
Stern G (1968). Meaning and change of meaning, with special reference to the English language. Bloomington: Indiana University Press (Original work published 1931).
Tenny C & Pustejovsky J (2000). Events as grammatical objects. Chicago: University of Chicago Press.
Trier J (1931). Der deutsche Wortschatz im Sinnbezirk des Verstandes: die Geschichte eines sprachlichen Feldes, Band I. Heidelberg: Winter.
Weinreich U (1972). Explorations in semantic theory. The Hague, The Netherlands: Mouton.
Williams E (1981). Argument structure and morphology. Linguistic Review 1, 81-114.
