International Phenomenological Society

The Problem of Robot Consciousness


Author(s): Dwight van de Vate, Jr.
Source: Philosophy and Phenomenological Research, Vol. 32, No. 2 (Dec., 1971), pp. 149-165
Published by: International Phenomenological Society
Stable URL: http://www.jstor.org/stable/2105945
Accessed: 24-12-2017 05:40 UTC

JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide
range of content in a trusted digital archive. We use information technology and tools to increase productivity and
facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org.

Your use of the JSTOR archive indicates your acceptance of the Terms & Conditions of Use, available at
http://about.jstor.org/terms

International Phenomenological Society is collaborating with JSTOR to digitize, preserve
and extend access to Philosophy and Phenomenological Research

This content downloaded from 103.231.241.233 on Sun, 24 Dec 2017 05:40:22 UTC
All use subject to http://about.jstor.org/terms
THE PROBLEM OF ROBOT CONSCIOUSNESS

1) The Conscious Machine as a Collective Achievement

Can we humans make a conscious machine, a machine with the self-
awareness each of us has? In the literature - by now embarrassingly
large - devoted to this problem, the conscious machine is usually thought
of as something produced the way other machines are produced, viz.,
by a solitary inventor or a laboratory team. I hold, however, that the
production of a conscious being is always the collective achievement
of society as a whole, nothing smaller having sufficient power. If one
assumes that conscious flesh-and-bone persons are actual social achieve-
ments, is then the conscious machine a possible social achievement? This
is the version of the problem of robot consciousness I propose to ex-
plore. I say "explore," for in the end I shall not settle the problem.
My argument depends on a functionalistic assumption. I assume that
"society" is an ideal of community life which has the power to sustain
itself through time in the minds of the members of society. Society's
only existence is the existence of its individual members, whom I shall
call "persons." Society, then, perpetuates itself by producing persons.
It may be looked upon as an engine for the production of persons from
the raw materials nature supplies. The human infant is processed by a
series of social institutions - the family, the neighborhood gang, the
schools, and so on - until he is shaped into a finished person with
whom other persons can live. We know society can produce persons in
this familiar way, by what we call the "socialization process." Can it
produce persons from hardware, from gears, wires, and transistors? That
is the question we shall explore.
Another way of expressing our functionalistic assumption - that
society perpetuates itself by producing persons - is to say that what
distinguishes persons from other things (hereafter I shall call them
simply "things") are concepts which involve a certain mutuality or
reciprocity. I shall call them "social concepts." A social concept is
reciprocal or mutual in the sense that it applies to a given person if,
and only if he knows how to apply it to others. "Politeness" is an
obvious example: someone is "polite" if, and only if he knows when
others are polite. His claim to the characteristic depends on his ability

to ascribe it to others. Indeed, politeness may be said to be socially
enforced in the sense that someone who cannot tell when others are
polite will himself be thought rude and penalized accordingly. Thus
while a polite person is both an individual and a group achievement,
the first depends on the second: society must declare and enforce rules
of etiquette before any given member of it can be polite. We shall see
that the concepts of consciousness and of "having a body" are "social"
in this sense.
The concept of a person is the most fundamental social concept, for
every other characteristic a member of society has will depend on his
having person status to begin with. Persons guard their privileges jeal-
ously. They demand to be treated altogether differently from things.
They, not things, may criticize one another's reasoning, pass judgment
on one another's conduct, sue one another in the courts. To be eligible
for these privileges, a person must be able to identify his fellows, to
distinguish them from the other items in his environment, for otherwise
he cannot give them the special treatment they deserve. Like politeness,
person status is enforced: whoever cannot recognize it in others cannot
have it himself.
When someone confers person status on something I shall say he
"personifies" it. This means he endows it with a certain significance,
whether or not he treats it well. Thus murder and torture are modes of
personification: murdering a man is not at all like shooting at a target,
nor is torturing a man like twisting a rope. At first glance, personifi-
cation seems a trivial and commonplace form of voluntary action. The
human imagination is lavish: men personify dogs, cats, images, corpora-
tions, athletic teams, abstractions, and so on. Society, however, exists
only in its members. It cannot permit them to be confused over who is
a member and who is not. It depends entirely on their ability to per-
sonify one another and nothing else. We ought to expect, then, that
underneath our apparent freedom to personify as we please, we shall
find that imagination has been quite pitilessly disciplined.
Surely this is what we do find. The child who personifies its doll
is playing. The adult who personifies dolls - really - is hallucinating
and will pay by the loss of his liberty. Each of us takes and must take
personification for a perilous affair, for each of us is absolutely de-
pendent on the ability of others to personify him. The correct word is
"ability," not "willingness," for real personifications are involuntary or
automatic. Each of us must demand of others that they be so well
socialized, so well trained in personification, that they recognize him
unthinkingly for the significant fellow that he is - even if only to abuse
him. What others are at liberty to give, others are at liberty to with-
hold. It would be self-contradictory to concede them the power to
withhold one's significance. Indeed, the freedom with which we create
imaginary persons is testimony to how deeply we trust one another not
to confuse them with real ones.
The fact that personification is socially disciplined is a barrier to
philosophical elucidation of the concept of a person. Personification is
too risky to be trusted to thought. Persons are required to recognize
one another emotionally and aesthetically: this is what we mean by
"being socialized." Deliberation has no place here. The man who has
to debate with himself whether there is a difference between torturing
a human and twisting a rope has no business at large, and we all know
it. We as philosophers find person status difficult to explain because
we have been so thoroughly trained as persons, that is, trained not to
need to think about it.
I have written as if there were no intermediate degrees between per-
son status and thinghood, as if the distinction between what should be
personified and what should not be were sharp. There is an obvious
sense in which the distinction is not sharp. Persons are privileged in
many ways, and things which are not in all respects persons, such as
children, corporations, the mentally ill, and domestic animals, share
many of these privileges. We say that they too have their rights. On
the other hand, they are not allowed to defend their rights: we persons
do that for them. We arrogate to ourselves the right to determine what
their rights shall be. In this more fundamental sense, then, the distinc-
tion is sharp.
Here the mental patient provides a philosophically revealing example.
We can learn how persons are made by watching how they are unmade.
The mental patient is no longer a person in the fundamental sense in
which you and I are persons. He has lost his rights. He cannot sue us
and he cannot move freely about. In keeping with the reciprocity of
the concept, we take person status away from those whom we think
incapable of recognizing it in others - the principle underlying the
M'Naghten Rule. We do this collectively, however. Insanity is a legal
term, not a medical one. It is the courts, representing society as a
whole, which have the power to unmake persons. We cannot permit
anything smaller than society to unmake a person, just as we cannot
permit anything smaller than society to make one.
One wants to object that human agencies are fallible, courts not
always trustworthy. But it is not the philosopher's business to legislate
what is our concept of a person. He finds this already done for him,
by legislators, in fact. His business is merely to record what he finds.
Whatever their imperfections, courts and legislatures are society's agen-
cies, the only ones it has. Persons and person status are no more
"natural" nor "artificial" than is society itself. If society is sometimes
a precarious affair, this is a misfortune which reflection alone will not
affect.
In this article, I want to bring to bear on the problem of robot con-
sciousness the general principles I have just sketched out. If only society
can make a person, then only society can make a machine a person -
if indeed society can. The computing machine which the logician in his
study conceives and the engineer in his laboratory builds cannot be-
come a person ("just like us") until we collectively declare it one, per-
haps by an act of Congress or something of the sort. The prospect is
a challenge to the disciplined imagination.

2) The First-Person Structure Problem

Our first task is to define "consciousness" and "robot." This will
enable us to break the general problem of robot consciousness apart
into more manageable subproblems. I begin by asking what a robot is.
I take it that as matters stand today, robots, even the best of them,
are not conscious, and we do distinguish persons sharply from machines.
We ought to ask what is the basis for this distinction, since our problem
is to evaluate the possibility of overcoming it. A robot, we say, is an
artifact, and it seems intuitively obvious that natural and artificial
processes generate materially different results. The robot is a mere
machine, not a person like us, for it has a different structure from ours:
it is made, say, from gears, wires, and transistors. But let the robot be
made from discarded cadaver parts. Would there then be any intuitive
barrier to its being thought a person just like us, not a "mere" machine?
Surely not! So the fact of being artificially made does not suffice to
make something a (mere) machine unless it results in a difference of
structure.
That cannot be the whole story, of course, for there is the additional
factor of control. Suppose the cadaver-part artifact is controlled in some
purely mechanical way - like a catatonic, say, it simply remains in
whatever position one puts it in. Then intuition will tell us it is a mere
machine, even though it has a human structure. If, on the other hand,
once we have manufactured it, we can no longer control it except by
the same means by which we control indisputably human persons, then
it seems unreasonable to deny that it too is a person. If we are willing
to say this of the cadaver-part robot, then surely we should be willing
to say it of the gear-wire-and-transistor robot too.
Intuitively, then, there seem to be two criteria of robot-hood, struc-
ture and control. ("Body" and "mind" are the conventional terms.)
Whatever something is made of, whatever structure it has, if it can be
controlled as we control machines, that seems to make it a "mere"
machine, a robot. And if something has the structure of a mechanical
artifact, if it is made, say, of gears, wires, and transistors, then however
it may be controlled, that seems to make it a robot too. Things that
are robots by both criteria are familiar today and present no problem.
Intuition no longer guides us, however, when we ask whether some-
thing can be a robot by one criterion but not by the other. We are, in
effect, changing the relation between mind (control) and body (struc-
ture) by varying the terms independently of one another.
If something has a human mind, but a mechanical body, then it is a
robot by the structure criterion but not by the control criterion. I shall
call the problem of the possibility of such a being "the structure prob-
lem." If something has a human body, but a mechanically controllable
mind, then it is a robot by the control criterion but not by the struc-
ture criterion. I shall call the problem of the possibility of such a being
"the control problem."
In the Foundations of the Metaphysics of Morals, Kant writes:
Now we cannot conceive of a reason which consciously responds to a bidding
from the outside with respect to its judgments, for then the subject would
attribute the determination of its power of judgment not to reason but to an
impulse.

While it makes sense to say of someone else that even though he looks
human, he just might possibly be a mere machine like a can opener,
it does not make sense to say this of oneself. When persons do make
such assertions about themselves ("I am a mere instrument in the hands
of the Party"), we take them to be resolving, not reporting. The control
problem is a third-person problem: one can raise it about someone else,
but not about oneself.
The structure problem, on the other hand, has both third-person and
first-person versions. The third-person version is familiar. Indeed, it
has monopolized the attention of students of the problem. Could a
mechanical artifact do things which would require me to call it con-
scious, to credit it with a mind like my own? How could it do this,
what would it have to do? This is the third-person version. The first
person version is: could I have my own mind in a nonhuman body, for
example, a gear, wire, and transistor body? I am not now a robot, but
could I become one, or be transformed into one?

1 Critique of Practical Reason and Other Writings in Moral Philosophy (tr.
L. W. Beck), p. 103.

Now this may seem a silly question. To see the point of it, we have
to recall the functionalism which is the controlling assumption of our
argument. Like "person," "mind" and "consciousness" are social con-
cepts. Whatever may be their substantive or material content, they are
formally reciprocal. As we observed, reciprocity is enforced: others will
judge one's own claim to have a mind or to be conscious in the light
of the accuracy with which one detects mind and consciousness in them.
Each of us necessarily thinks that he himself has a mind and is con-
scious. Whenever someone attributes a mind or consciousness to some-
thing else, then, he must mean at least that it has what he has. If, then,
one decides in advance that one could not possess one's own mind
except in the familiar flesh-and-bone body, one would appear to have
decided the third-person structure problem in advance. If I could not
possibly have what I have in a mechanical body, then it does not make
sense to suggest that something else could have what I have in a
mechanical body. We ought, therefore, to discuss the first-person version
of the structure problem before going on to the third-person version.
"I have what I have" (viz., mind, consciousness) is a tautology, a
necessary truth. Therefore it does not depend, either for being true or
for being known, on one's knowledge of one's body. If the transition
could be managed, there seems no reason why one could not have the
same self-consciousness in a different kind of body, in particular, a
mechanical body. Mechanical substitutes for individual organs are as
old as the wooden leg. Given such contemporary sophistications as
mechanical heart valves and artificial kidneys, it seems plausible to
suppose that it may someday be possible to replace every organ of the
body, even the brain, by a mechanical substitute - an up-to-date ver-
sion of the ancient concept of transmigration.
This transmigratory fable has to make sense as a first-person story:
one has to be able to put oneself in the place of the hero. Imagine,
then, that one has volunteered (perhaps out of scientific curiosity) to
become the world's first robot-convert, the first person to be trans-
formed into a machine. One cannot imagine the transition accomplished
at a single step - what would one imagine? - so we may suppose it
accomplished gradually, through a series of surgical operations in which
one by one the various organs of the body are replaced, the brain
coming last of all. If the operations are successful, one will emerge
converted into a robot. (Note that nothing is implied here about one's
freedom of the will. The question is, "Could I have what I now have
in a mechanical body?" Whether what I now have includes freedom of
the will is another question.)
I take it that no operation in the robot-conversion series except the
last poses an unfamiliar problem. We know what it would be like to
have one's flesh-and-bone legs replaced by artificial legs, one's fleshly
eyes by artificial eyes, and so on. When, however, the Chief Surgeon
announces, "Today we attempt the brain-replacement!" that is another
and a more disconcerting prospect. Will one ever awaken from the
anaesthetic, and if so, in what condition?
One can imagine the story having a happy ending. One awakens
feeling reasonably normal. The Chief Surgeon asks, "How do you feel?"
and one replies with one's mechanical voice, "Not bad!" One finds
one can solve the same problems, feel the same emotions, with one's
mechanical brain that one solved and felt with one's fleshly brain. The
operation is a success and one has become the first robot-convert.
One can imagine different endings for the story, however. Children
are sometimes troubled by vivid images of death as a condition in which
one suffers but cannot act. In his mind, the child sees the coffin closing,
hears the clods clatter on the lid, feels his members rot as he lies
impotently in the darkness. In a similar spirit, suppose that when one
awakens (or seems to awaken) from the brain-replacement, one sees
the Chief Surgeon peering down, hears him say, "I'm not getting any
response," and feels the operating table underneath one. But one can-
not respond. As in the child's conception of death, one has input, but
no output. One cannot crank a gear. One has become what I shall call
a "possibly paralyzed robot-convert" (PPRC).
The PPRC's situation is intriguing, for he seems to be one of those
few persons who really has the problem of deciding which body is his.
I assume he has to have a body. Is he, then, in the robot body, but
paralyzed, or is he still in the fleshly brain, but hallucinating? We may
assume him self-conscious in the sense that he feels in possession of
himself, he can as it were talk to himself, reason with himself about
his situation. Can he decide which body is his, on the basis of input
alone? I want to argue that he cannot decide, and that this tells us that
the concept of having a body is a social or reciprocal concept.
Now I presume that (as the word "organ" suggests) an attempted
organ-replacement succeeds when the new organ performs the function
of the old and fails when it does not. Thus when one received mechani-
cal eyes, the Chief Surgeon must have asked such questions as, "Can
you read this eye chart?" He could not determine the success or failure
of the operation except on the basis of one's replies. "Can you read
this eye chart?" "Yes, it says D - F - T ...." (To his assistant, "Since
I haven't yet picked up the eye chart, he must be hallucinating. The
operation has affected his mind.") To have mechanical eyes means to
be able to see with them. Since the common public world is what there
is to be seen, no one of us is the authority on whether he himself sees
or not. Before one has the right to say one has mechanical eyes, except
in the sense in which Anne Boleyn has her head (tucked underneath
her arm), one has to be able to use them successfully to interact with
others, specifically by giving visual reports which seem reasonable to
them. One retorts to the Chief Surgeon, "No matter what you say, I
do see an eye chart - you're holding it in your left hand." And he
replies, "Yes, if you say so, I suppose you do think you see me holding
an eye chart. But since I'm not holding one, you must be hallucinating.
The operation is a failure. It has given you novel hallucinations, but
it hasn't given you mechanical eyes."
The point of this example is simple. While different steps in the
robot-conversion process may test whether or not the convert has this
or that special ability - the ability to walk, to see, etc. - whether or
not there is such a person as the convert, whether or not the process
has killed him or driven him hopelessly mad, etc., is something equally
tested by every step of the process. The various steps test whether the
convert's altered body has this or that special ability - to walk, to see,
etc. But interaction or (using the term broadly) conversation is the test
of whether or not the convert has a body. Nor is this a peculiarity of
the robot-conversion process. The test of anyone's possession of his body
is his ability to use that body to interact with others. Moreover, con-
versation is not the test of one's possession of one's body in the sense
that there are two distinguishable items whose coincidence is to be
demonstrated - the person on the one hand, a mass of material on the
other - but in the sense that in order to be at all, in order to have
a body, one has to locate oneself in space and time. The social process
of interacting with others is the very process by which one's location
is determined.
The interest of the PPRC example lies in the fact that one can make
it seem as if indeed there could be three distinguishable items, one per-
son and two bodies, and a problem which goes with which. A totally
paralyzed flesh-and-bone person (input only) could have experiences
which he would correctly describe using egocentric particulars (expres-
sions such as "in front of me," "to my left," etc.) which locate him in
his paralyzed body. One can put oneself in the place of such a person
because persons have recovered from fits of total paralysis and told
what it was like: "When I seemed to you to be dead, I was really trying
to scream for help - but no sound came." We locate such a person in
his paralyzed body, but in retrospect, or on the basis of subsequent out-
put. We certainly do not locate him in it during the fit, when, for all
we know, he is dead. Accordingly, the PPRC will be able to distinguish
seeming to have a body from actually having one when, and only when,
someone thinks to reconnect his output wires (or what have you), so
that he can determine his location with respect to others. Before this
is done - if indeed it can be done - he cannot tell what is his location,
nor which is his body, nor can anyone else. We might just as well say
that he has neither location nor body, except potentially. Why? Because
we do not first have bodies and then interact with one another: we
have bodies by interacting with one another.

It seems clear that we understand the meaning of the question: "Does the
sequence 7 7 7 7 occur in the development of π?" It is an English sentence;
it can be shown what it means for 415 to occur in the development of π; and
similar things. Well, our understanding of that question reaches just so far,
one may say, as such explanations reach. (PI, par. 516. Cf. also pars. 352
and 426.)

In constructing the decimal expansion of π, we know what it would be
like to encounter the sequence 7 7 7 7. It would be like encountering
sequences which we know do occur in the expansion, such as 4 1 5.
That is all the question means. If you add, "But either 7 7 7 7 occurs
in the expansion or it does not, whether or not we work it out," then
Wittgenstein replies:

That is to say: "God sees - but we don't know." But what does that mean?
We use a picture; the picture of a visible series which one person sees the
whole of and another not. The law of excluded middle says here: It must
either look like this, or like that. So it really - and this is a truism - says
nothing at all, but gives us a picture....

Similarly when it is said, "Either he has this experience, or not" - what
primarily occurs to us is a picture which by itself seems to make the sense
of the expression unmistakable: "Now you know what is in question" - we
should like to say. And that is precisely what it does not tell him. (PI, par. 352.)

When we say that the PPRC's self-consciousness must be located either
in the fleshly brain or in the mechanical body, we entertain a certain
picture. We see him interacting with the surgical team from one or the
other of these locations. We imagine him defining his situation in this
way, thus enabling him also to change places with them. But this picture
is inapplicable. We are the ones who can, as it were, transfer ourselves
from one consciousness to another in this situation. The parties to the
situation cannot do so - that is just the point of the example. The PPRC
cannot use his body, in whatever sense he "has" one and whichever one
he "has," to determine his location with respect to others, nor can they
use theirs to determine their locations with respect to him. If you say,
"But nevertheless he must have a location," the proper rejoinder is:
How? You are trying to have the results of communicating without the
communicating.
Could I have what I have in a mechanical body? Does it make sense
to decide the question in advance? We have seen, at least, that by a
"body" one means something with which one interacts with others.
Moreover, by envisaging the robot-conversion process as a step-by-step
process (and again: how else could one imagine it?), we have surrepti-
tiously introduced a whole host of "interaction conditions." The mechan-
ical body is to be of a convenient size, neither too large nor too small.
It is to behave at a certain pace, neither too fast nor too slow. Perhaps
the further we pursue the question, the more concretely we imagine the
situation, the more the robot body will come to seem just a slight varia-
tion on the conventional flesh-and-bone body. Indeed, since personifica-
tion is enforced, and imagination disciplined, this should come as no
surprise.

3) Structure and Control: The Third-Person Problems

When persons consciously come into one another's presence, or, as
I shall say, "encounter" one another, each personifies the others. Each
puts himself in every other's place, imaginatively sees the world through
the other's eyes, roughly anticipates the other's actions. Thus they
reciprocally locate one another and apportion time and space among
them. Were they unable to do so, they would interrupt one another,
unexpectedly collide with one another, in general behave so irrelevantly
toward one another that they could not by any reasonable standard be
said to be conscious of one another at all. Persons endure through time.
One cannot encounter someone unless he constitutes a lasting focus for
one's attention. Therefore one must be able to anticipate his actions at
least sufficiently to keep him located in space over a period of time. Nor
can one regard oneself as having been encountered by someone else
unless one can attribute to him the same rough anticipating of one's
actions - otherwise one will take him not to have noticed one.
In this tautological sense, encounters are reciprocal or mutual. The
parties mutually agree upon an apportioning of space and time which
permits each to make his presence known to the others. This agreement
need not be amicable. What would occur in the absence of it is not
something less friendly than an encounter, but something less inclusive.
If A stabs the unsuspecting B in the back, this is not an encounter
between A and B, for B is left out of it. A is conscious of the presence
of B, but not conversely. On the other hand, if A stabs B from the
front, so that B sees him coming, then A and B do encounter one
another: the situation is well and mutually defined. Each knows roughly
what the other is going to do next, that is, each agrees that the other is
doing about what the other himself thinks he is doing.
Encounters in their reciprocity are the medium in which persons
exist. To be is to be defined, and persons define one another by defining
the situations in which they mutually participate. Surely the primary
function of the concepts we study in the philosophy of mind - person,
mind, body, self, consciousness - is to bring about this mutuality or
reciprocity. These concepts are not primarily instruments with which we
advance our interests in well-defined situations. Primarily, they are
instruments with which we define situations and ourselves as participants
in them. Therefore if we want to understand these concepts, we ought to
observe how they work in ill-defined situations, in situations that are
bizarre, uncomfortable, embarrassing, distasteful, stupid. There we shall
catch ourselves using them to pull and twist the situation into a shape
in which we can live. For this purpose, the laboratory is less revealing
than the madhouse, where reciprocity is not so much taken for granted.
To apply these principles to the problem of robot consciousness, we
need imaginatively to extend the problem further than do investigators
whose interests are narrowly logical or technological. Their problems are
programmatic. They wish to assess the (at least logically possible)
eventual outcomes of existing laboratory programs. Therefore they take
for granted the etiquette, the protocols, the apportionings of time and
space, which define the laboratory as a social situation. They mean by
a conscious robot something which fits smoothly into such a situation,
a situation which they have defined in advance. While this restriction of
the problem is fully in keeping with their interests, it is not in keeping
with ours, for we want to understand not how such social concepts as
person, mind, body, self, and consciousness are possibly or usefully
superadded to well-defined situations, but how they are used to define
situations in the first place. I want now to illustrate this, beginning with
the third-person version of the structure problem. I shall proceed at first
more or less intuitively, moving back and forth between structural
considerations and control considerations. My aim is to bring out how
our personificatory capacities are disciplined.
One sees only the outsides of other humans. But they put on perfor-
mances which demonstrate that they are human, and therefore one
personifies them. One imaginatively transfers one's first-person con-
sciousness to them, one thinks one knows what it would be like to be
them. These performances, however, involve moving limbs, making
noises, etc., that is, involve nothing which cannot be mechanically
simulated. Suppose a robot puts on an identical performance - which
we may suppose possible, for the time being. Still, one does not want to
personify the robot, to project oneself into the robot (which is made of
gears, wires, and transistors). One does not want to say one knows what
it is like to be the robot, for there simply is not any such thing as "what
it is like to be the robot." One cannot put oneself in the robot's place,
for that place is, as it were, empty. There is no one there.
And yet (we are supposing) the robot puts on an impeccably human
performance. He does the very things one's fellow humans do that
entitle them to the privilege of having one imaginatively change places
with them, but one does not want to change places with him. It must be
the structural difference, those cold, unfeeling gears, wires, and transis-
tors, that causes the trouble. But then that cannot be it, for nothing about
one's own self-consciousness depends upon one's having a knowledge of
one's internal workings. After all, not long ago men gave accounts of
their bodies which are drastically different from the accounts we give
today. But they still communicated with one another, and they would not
have wished to change places with robots, had they had any to change
places with. What we call a "human body" today is quite different from
what men once called a "human body" (no more medieval "humors,"
consciousness resides in the brain, not the stomach, etc.), but we still
want to say that a human mind can reside only in a human body -
which begins to look like nothing more than a prejudice.
But it cannot be just a prejudice, for think of the control problem,
"Can a human body be inhabited by a control mechanism different from
the human mind?" To see the problem, one has to distinguish two senses
of "control," on the one hand, self-control, on the other, control over a
utensil, machine, in general over a device which is a means to some
other end than itself. Self-control, surely, is not a means to anything
other than itself. We want a control-criterion robot to be controlled not
in the obvious ways a person deficient in self-control (small child, scared
for his life, too stupid, etc.) can be controlled by another person, but in
the ways in which persons control machines, that is, so that the possibility
of self-control does not even arise. In other words, to have a model for a
control mechanism different from the human mind, we use the concept
of a machine, there being only these two alternatives.
But suppose we assemble a human body from old cadaver parts. It is
a fully formed adult, indistinguishable from a natural-born human. But
we control it completely, it is a mere machine, a utensil. It is not a
formerly or potentially conscious human, something which had or might
have the self-consciousness each of us has. It just is a machine in the
same sense in which a can opener is a machine. When we gaze into its
eyes - what? Can one keep the picture in focus? Surely one has to
imagine it having a dull, lifeless stare and making jerky, mechanical
movements. By hypothesis, it "has" a human body. But what does that
mean? It cannot put on a human performance, it cannot "have" a human
body the way persons have human bodies, or it will trigger off one's
personificatory proclivities, and one will turn it into a degraded human
rather than a machine.
Murdering a man is not like shooting at a target, nor is torturing a
man like twisting a rope. Acts of the first kind have a different moral
quality from acts of the second. If by "person" we mean "sane person,"
then a person is not logically required to act virtuously, but he is logic-
ally required to be able to act virtuously. I assume the reader sane
(however vicious). Take, then, your attitude towards a machine, say, a
can opener. Now apply that attitude to a living human body. Can you
really do it? When one demolishes the can opener, one does not feel one
is insulting it or degrading it. Imagine yourself demolishing a living
human body - I leave the means unspecified. To make the picture at
all plausible, does one not have to imagine oneself intending the act as
a degradation of the victim?
It is tempting to object that whether one can or cannot take up an
attitude proves nothing. But just as we know the difference between real
and make-believe personifications, we also know the difference between
real and make-believe depersonifications. The attitudes we are required
to take toward that most potent of symbols, the human body, are
reinforced by the most obvious social norms, norms whose force is so
great that to ignore them is literally madness. So we may press the case:
the cadaver-part machine puts on a human performance. And yet we
control it, we think it a mere machine which we can destroy with
impunity. But how is it a machine, and how do we control it? The two
parts of the picture - human performance, nevertheless machine - will
not both come into focus at once. What puts on a human performance
with a human body simply is a person. So much for the control problem.
But what constitutes a human performance? Could a machine put one
on? One has to see that a certain sort of example is insufficient here,
temptation to the contrary. Our ingenious computer is bolted to the
floor and has "Property of US Govt" stamped on its console. We turn
it on and test its intelligence, let us say by playing Turing's "imitation
game" with it.2 Now the game is over, and the computer insists, "You
see? I'm human just like you! I won the game, I fooled you every time,

2 A. M. Turing, "Computing Machinery and Intelligence," Mind, Vol. LIX, No. 236 (1950).

I can do anything a human . . ." Click. "Let's go to lunch," says the
Head Scientist. To call the computer "theoretically human" is no better
than to call it "metaphorically human." Humans live in encounters, in
social situations, not merely in laboratories. The primary function of
the concept of a person is to define social situations. If the computer
cannot use its putative person status except in the laboratory and at the
pleasure of the scientist, who defines the laboratory situation, then the
computer is only a scientific curiosity, not a person. We mean by a
"person" what can put on a human performance in any social situation.
Human performances are put on for the benefit of us, who are human.
To be successful, a performance must be in phase with the way we are
human. It must make possible the reciprocity of encountering. We
apportion time. We train children not to interrupt. To pass for human,
one must present oneself at a certain pace. If the robot proclaims its
humanity too rapidly or too slowly, then it won't pass. If it takes a
hundred years to say, "I a m l i k e y o u," or if it spits it out in a
microsecond, "Iamlikeyou," then we will not even notice it. That much
is easy.
What is a machine, after all? What limits one's models? Could a
complex electromagnetic field be a conscious machine? It would have to
act at a point in space, say, at a radio receiver, but it need not itself be
located in space. "Good morning, I am like you," it says. "But where
are you?" we answer. "Everywhere." That will not do. How in the world
could we interact with something which is everywhere?
We are asking what sort of structure a thing must have if we are to
be able to personify it, put ourselves in its place, project ourselves into
it. We start out thinking the conditions for this are very general, very
loose. But then when we really examine how our imagination works,
when we distinguish serious from "play" personifications, we find we
have covertly tailored our examples to a human measure. The conditions,
which our imagination surreptitiously supplies, turn out to be not loose
at all, but the longer we look at them, the tighter, the more restrictive
they seem. If in the end they should reveal that whatever we personify
must have a human, flesh-and-bone body, this should not provoke sur-
prise. We are the products of a ruthlessly thorough training in personifi-
cation, a training which depends upon the conventional flesh-and-bone
body.3

3 For an excellent brief discussion of this, see Erving Goffman, "The Nature
of Deference and Demeanor," American Anthropologist, Vol. 58 (June, 1956),
pp. 473-502; reprinted as No. 97 in the Bobbs-Merrill Reprint Series in the Social
Sciences.

In order to put on a human performance, one must apportion time
reciprocally with others, now speaking, now being silent so someone else
can speak. Invariably, when we imagine amazingly intelligent robots, we
take this ability for granted. But just what is the ability? We think of the
robot in a laboratory context: we want to test its intelligence. But then
it cannot babble on endlessly, it must stop talking long enough for us to
submit questions to it. So we imagine the robot fitting into the conversa-
tional context, the social proprieties, of the laboratory. As we have seen,
however, it cannot have the ability to fit into conversations there, and
only there, if it is to pass for human. The objection that goes, "But we
can program a robot to fit into any conversational context - after all,
there must be objective rules, or humans themselves could not be trained
to fit into conversations. So we will just program those rules into the
robot," seems to me to ignore an important difficulty. Outside of such
tightly defined social situations as the laboratory intelligence test, in
order to interact conversationally with someone, one must be able to put
oneself in his place or project oneself into him. If he is sulky, or grumpy,
or obviously deeply troubled, one will pace one's conversation with him
differently than one would were he his normal self. So, consider the
reciprocity of the situation. For a robot successfully to change places
with a person, the person must be aware that the robot has done so -
otherwise he will think that the robot has not noticed him. To be aware
of this, however, the person must change places with the robot, must
project into the robot a self-consciousness like his own. But that is the
question at issue: can one project oneself into a robot? To imagine a
robot programmed to fit into any social or conversational context, rather
than merely into a context - and usually a "mechanical" context -
which we have rigged in advance, simply begs the question.4
Must a robot have a particular sort of structure, a particular sort of
inner workings or outer shape, in order to be a person, not "just a
machine?" One has to ask how the shape and workings affect the robot's
ability to fit into society, complimenting and insulting others, himself
receiving insults and compliments, helping and hindering, himself being

4 The error involved is subtle. For example, Hilary Putnam writes, "I will
imagine that we are confronted with a community of robots...." ("Robots:
Machines or Artificially Created Life?" J. Phil., Vol. LXI, No. 21 (Nov. 12, 1964),
p. 677). But how are we "confronted with a community?" If I understand how
several robots form a community, then I must have a criterion of robot identity,
a criterion which enables me to understand how, e.g., robot A identifies (projects
himself into, puts himself in the place of) robot B. But this must mean I put my-
self in the place of robot A, etc. Then the question of the consciousness or person
status of robots is already settled.

hindered and helped. We flesh-and-bone humans have been trained to
recognize one another through the rules that govern these behaviors,
rules that depend upon the flesh-and-bone body, which is hard and soft
in the right places. One of us may safely claim to transcend these rules
provided he tacitly assures the rest of us that he is not really in earnest,
that the claim is "merely theoretical," "just science fiction," "about some
indefinitely remote future time," etc. Personification is fundamentally
a pitiless social discipline, only afterwards a logical and psychological
fact. Apparently, robots are sexless. To claim one can personify a
genuinely neuter interactant seems to me unconvincing when one recalls
the part sex plays in the formation of one's ability to interact. When one
makes such a claim to (i.e., in front of) one's fellow flesh-and-bone
humans, one makes it in a context and manner sufficient to reassure
them that one is properly aware of their sexuality and intends to govern
oneself so as not to be a threat to it; otherwise the claim would be too
dangerous to make. One may say the same about claiming to imagine an
interactant who cannot suffer pain. What is important about a body is
the reciprocity of having one, that is, each of us has his own body in
relation to or with respect to how the rest have their bodies. As matters
stand today, therefore, only what has the conventional flesh-and-bone
body can really be thought to have a mind or to be a person, which is
so much as to say that a structure-criterion robot is not possible - now.
But we knew that when we began the argument.
As for the future, we may seek guidance from the past. Classes of non-
persons have been elevated to person status - women and Negroes, for
example. These newcomers had been disqualified on structural grounds,
just as robots are now disqualified. It seems to me that the process by
which they became persons is tolerably mysterious. Certainly they cannot
be said to have won person status by putting on human performances, by
demonstrating their abilities. What does a demonstration prove if no one
is obliged to witness it? ("Let's go to lunch.") Did they win person
status by force? But the power one person has over another is a moral
power, the power to make the other wrong in his own eyes (think of the
difference between an insult and an accident), and until they were
admitted to society, they did not have power in this sense. The decision
to think of women as human, as rightfully persons, was made, it seems
to me, by men and women alike, before, perhaps long before, it was
recognized by being incorporated into the structure of our institutions
and enforced; and the same may be said of Negroes. To say "Society
made it" is simply to say that nothing smaller than society made it, which
is no doubt helpful so far as it goes. Pursuing the analogy, if a decision
is ever made to admit robots to person status, robots themselves will
have as much voice in the decision as anyone - if anyone, flesh-and-bone
or gear-wire-and-transistor, can be said to have a voice in decisions such
as these, beyond recognizing them once they have been collectively made.
The examples of Negroes and women suggest that there may be a trend
in the direction of greater liberality; but it would be foolhardy to predict
its outcome.
DWIGHT VAN DE VATE, JR.
THE UNIVERSITY OF TENNESSEE.
