
The Good Our Field Can Hope to Do, the Harm It Should Avoid

Roddy Cowie, Member, IEEE

Abstract—This paper tries to achieve a balanced view of the ethical issues raised by emotion-oriented technology as it is, rather than
as it might be imagined. A high proportion of applications seem ethically neutral. Uses in entertainment and allied areas do no great
harm or good. Empowering professions may do either, but regulatory systems already exist. Ethically positive aspirations involve
mitigating problems that already exist by supporting humans in emotion-related judgments, by replacing technology that treats people
in dehumanized and/or demeaning ways, and by improving access for groups who struggle with existing interfaces. Emotion-oriented
computing may also contribute to revaluing human faculties other than pure intellect. Many potential negatives apply to technology as a
whole. Concerns specifically related to emotion involve creating a lie, by simulating emotions that the systems do not have, or promoting
mechanistic conceptions of emotion. Intermediate issues arise where more general problems could be exacerbated—helping systems
to sway human choices or encouraging humans to choose virtual worlds rather than reality. “SIIF” systems (semi-intelligent information
filters) are particularly problematic. These use simplified rules to make judgments about people that are complex, and have potentially
serious consequences. The picture is one of balances to recognize and negotiate, not uniform good or evil.

Index Terms—Ethics, emotion, affective computing

1 INTRODUCTION

People spend a huge part of their lives interfacing with computers, knowingly or unknowingly. Correspondingly, when research devises new ways for people to interface with computers, there may be substantial implications for what Arnold [1] memorably called our “weal or woe.” Affective computing envisages that kind of shift, and for that reason, there is a strong case for thinking through its likely effects on our weal and woe before it is launched on a large scale. That is a thoroughly interdisciplinary challenge because it involves coming to terms with several different kinds of issue: what the technology can achieve, the characteristics of humans that determine what new technologies will mean to them, and the moral and ethical frameworks relevant to evaluating the interactions. The aim of this paper is to provide a framework for addressing the new issues that gives due weight to these different perspectives.

It would be natural to assume that there was already a considerable literature which did exactly that. But although a good deal has been written about ethics and affective computing, much of it seems to deal with rather a different problem—the desirability in principle of things that we can imagine in an abstract sense, but that there is no obvious prospect of building in the foreseeable future, if ever. That is a fascinating subject, but it should not be confused with the more immediate one that this paper aims to address.

This paper engages with ethical questions at two levels. At a generic level, it is concerned with suspicions that the whole discipline of affective computing is ethically tainted—a technology too far. It argues that there is no need for generic condemnation of the technology that we actually have, or can reasonably imagine. That does not mean that it takes an anodyne view, though. The second level that the paper considers involves distinguishing between some application areas that present no ethical problems of note, and others that do. It does not dispute that some areas do pose problems. The outstanding examples arise not because new technology endows machines with dangerous powers, but because there is a risk of applying technology to tasks that it is not very good at—not because the tower of Babel may infringe on Heaven, but because a building as high as that, made with clay bricks, is likely not to be fit for purpose.

2 CORE JUDGMENTS

The outlook that the paper offers depends on judgments about issues that are open to debate—the subject matter of the technology, what it has achieved, and the relevant kind of moral framework. There is no question of resolving the debates here, but neither should they be glossed over. This section sets out summaries of the positions that are taken so that readers can judge whether they regard them as reasonable. The fact that the audience is interdisciplinary means that most readers will find at least some of the material too familiar to be worth mentioning.

2.1 The Natural Subject Matter of the Discipline

There is real difficulty in identifying the natural subject matter of what is sometimes called affective computing, sometimes emotion-oriented computing. “Affect” and “emotion” are the only words that could reasonably be used to identify it, but they set traps [2].

Historically, and in many contemporary writers, “affect” means pure, irreducibly subjective feeling—for instance, the glow of positiveness and energy that a happy person feels, or the negativeness and lethargy at the core of depression [3], [4]. It is easy for nonexperts to assume that the goal of a discipline called “affective computing” must be to build computers which have that kind of subjective experience. Alternatively, they might assume that the goal was to build machines capable of peering into those quintessentially private layers of experience. Somebody who believed that developments like that were under way might reasonably feel that they raised deep and disturbing ethical questions. The view taken here is that neither of those images actually captures what the discipline generally does—and correspondingly, although the ethical questions associated with them may be fascinating, they are not the ones that are practically pressing.

Similarly, the word “emotion” has a sense in which it refers specifically to brief, intense episodes evoked by dramatic situations, and substantial figures in the field prefer to use the word only in that sense [5]. It is easy for nonexperts to assume that the goal of a discipline called “emotion-oriented computing” must be to build computers which intervene in that kind of situation. If so, they might reasonably protest that interventions like that are notoriously difficult even for a sensitive human; and accordingly, emotions like that are a part of life that computers should be kept well away from. Again, the view taken here is that that is not what the discipline is about, at least not for the most part; and correspondingly, the associated ethical issues are not as prominent as the image suggests.

The obvious subject matter for the discipline is more mundane. Most of human life is emotionally colored, to at least some extent [6]. People feel something about the other people and things that they are dealing with or discussing; they give signs of the way they feel, they expect other people to register them to at least some extent, and the feelings influence their perception, judgments, and decisions in a variety of ways. The term “pervasive emotion” has been used to refer to these colorings, signs, and cognitive patterns, which are an integral part of everyday life [6].

Looking at standard sources such as the proceedings of ACII conferences [7], [8] and the HUMAINE handbook on emotion-oriented systems [9], it seems fair to say that the core research problem is to engage in a limited way with the emotional coloring that ordinarily pervades human interaction. That involves recognizing signs (not necessarily the ones that humans use), knowing how to generate appropriate signs, and being able to take some account of their implications. Clearly there are ethical questions associated with these tasks. However, they are not primarily about grand principles: They are about the uses that can ethically be made of techniques which may tread on sensitive ground, and which at present almost always fall far short of the levels of competence that humans take for granted. On that understanding, effective ethical discussion depends on being aware of the actual state of the art.

2.2 The State of the Art

It is a difficulty that the field lacks authoritative reviews of the state of the art. This section provides a brief (and necessarily personal) assessment. It emphasizes that the techniques we have, or can reasonably envisage, are very limited. That is no criticism of the research that has been done. It is more appropriate to say that it underlines how sophisticated human emotional competence is—a point that is taken up later.

A substantial part of the field is concerned with communication of emotion. Within that, probably the most developed area involves recognition of emotion from nonverbal signs—voice (as opposed to words spoken), facial expression, and gesture, singly or in combination [10], [11], [12]. Near-perfect recognition rates have been reported for all these modalities, but they are for artificial displays. For naturalistic displays of emotion, recognition rarely attempts to do more than assign episodes to one of half a dozen classes, and the success rates that it achieves are usually modest. Recognition is also context-bound—a system that recognizes emotion effectively in one kind of phone call will perform poorly in another [13].

The idea of recognizing emotion from physiological signs has captured the public imagination, but it is subject to a major problem, which is that the variables affected by emotion are also affected by many other things—physical movement, mental effort, temperature, etc. An interesting range of discriminations can be achieved when all of these variables are controlled (e.g., the person is sitting still in a regulated environment), but that is not a common application scenario [14].

One might also expect that emotion recognition would become easy when a person was speaking and the words could be taken into account. In fact, the standard task with text is to recognize whether the sentiment it is expressing is positive, negative, or neutral, and that is difficult and uncertain [15].
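
To make the coarseness of that task concrete, the sketch below shows the kind of lexicon-based polarity scoring that text sentiment systems often start from. It is illustrative only: the word lists and the scoring rule are invented, not drawn from the systems cited in [15], and the final example shows the sort of case (simple negation) that makes the task difficult and uncertain.

    # Minimal lexicon-based sentiment sketch (illustrative; not from [15]).
    # Real systems must handle negation, intensity, sarcasm, and domain
    # shift, which is where the difficulty noted above arises.
    POSITIVE = {"good", "great", "happy", "love", "pleased"}  # toy lexicon
    NEGATIVE = {"bad", "awful", "sad", "hate", "angry"}       # toy lexicon

    def sentiment(text: str) -> str:
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        if score > 0:
            return "positive"
        if score < 0:
            return "negative"
        return "neutral"

    print(sentiment("I love this great service"))  # -> positive
    print(sentiment("The delay made me angry"))    # -> negative
    print(sentiment("I am not happy"))             # -> positive: negation defeats the lexicon
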
Cutting across these areas is the challenge of representing everyday emotional states in a way that relates to detectable signs [16], [17]. A key reason is that automatic techniques can rarely assign a precise description of an emotional state with any reliability. Typical solutions involve assigning a “cover class” such as “negative” [18] or a few numbers indicating how well each of a small number of descriptors applies. The descriptors may specify emotion-related states [19] or dimensions [20].
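
In code, a representation of that kind might look like the sketch below. The dimensions (valence and arousal) are standard in the literature cited above, but the numeric scales, the descriptor set, and the quadrant-to-cover-class mapping are illustrative assumptions.

    # Illustrative representation of an everyday emotional state: a few
    # graded descriptors plus a coarse "cover class" derived from two
    # dimensional scores in [-1, 1].
    def cover_class(valence: float, arousal: float) -> str:
        if valence >= 0:
            return "positive-active" if arousal >= 0 else "positive-passive"
        return "negative-active" if arousal >= 0 else "negative-passive"

    state = {
        "valence": -0.4,  # mildly unpleasant
        "arousal": 0.7,   # quite activated
        "descriptors": {"frustrated": 0.6, "anxious": 0.3},  # graded, not all-or-none
    }
    state["cover_class"] = cover_class(state["valence"], state["arousal"])
    print(state["cover_class"])  # -> negative-active
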
On the output side, synthesis sets out to convey distinctions which are also coarse, but (at least until recently) more often based on “basic emotion” categories [21]. It can generate displays that people can identify reliably, but they rarely look or sound natural. Work with robots has added a different slant by exploring forms of expression that are not meant to be human (often animal-like) [22].

It is important for later arguments that although the problems sketched above are, so to speak, on the periphery of emotion—dealing with external signs rather than internal states—they are nevertheless extremely difficult, and have absorbed a large part of the effort in affective computing. Correspondingly, a core part of evaluating the technology is
gauging what can be done with rudimentary ability to detect and generate signals related to emotion in various modalities.

Efforts to move beyond that into central issues remain very fragmentary. Philosophers have evoked the rich network of connections and dispositions that go with being in a given emotional state [23]. Attempts to model that kind of network are at an early stage [24]. Without progress on that problem, it is very difficult for a system to respond with any sophistication to a user’s emotional state. Even choosing when to smile, frown, or nod in a routine interaction is a major challenge [25].

2.3 The Ethical Framework

When technologists debate ethics, they are often drawn to one of two positions. One equates ethics with an explicit code that has legal or quasi-legal force. The other assumes that the correct way to justify particular ethical conclusions is by derivation from fundamental philosophical principles [26]. This paper steers away from both, and adopts a position recommended by philosophers in the HUMAINE network [27].

Focusing on ethics in the sense of an explicit code is unsatisfactory for two reasons. First, it is unwise to invest effort in developing technologies that are likely to be prohibited as soon as codemakers realize that they are possible. Second, codes tend to focus on the negative and are not good at distinguishing between activities that are merely harmless and those that there is a case for undertaking because they do good. There are good reasons for affective computing to consider issues in both of those areas, which means looking beyond ethics in the narrow sense. In that context, the fact that “ethics” connotes codes and proscriptions sometimes means that the word sits uncomfortably, and “morality” is used instead.

Turning to fundamental principles, it is well known that the major positions (Kantian, Utilitarian, etc.) do not translate easily into practical ethical guidance, partly because there are conflicts between them. For that reason, philosophers have developed systems that are closer to applied questions. Döring et al. have argued that one of these, Principalism, provides a suitable framework for emotion-oriented technology [27], and this paper follows their recommendation. That means options are evaluated in terms of beneficence (in Arnold’s terms, whether they promote weal), maleficence (whether they inflict woe), autonomy, and equity.

Principalism highlights those values because their relevance to ethical judgments is widely accepted. However, there are other ethical stances which often seem to be implicit in critiques of emotion-oriented computing, but which are by no means universally shared. One is a disposition to value activities that meet indisputable needs and to be critical of others. The other is a disposition to value things and activities that are natural and to mistrust artificiality. For brevity, they can be called needs-oriented and nature-oriented ethics respectively. Both have long roots, but neither is by any means universally accepted. At least some objections to affective computing seem to be grounded at least partly in these personal stances rather than in more universally accepted values.

On the other side, an important claim that is made in favor of emotion-oriented computing also seems to stand outside Principalism. It hinges on the belief that valuing different aspects of humanity appropriately has moral value, and that emotion-oriented computing contributes to revaluing emotion. The belief is consistent with the effort that philosophers from Plato onward have invested into considering how we should evaluate emotion. If computing can contribute to the debate, it is making a morally significant contribution.

2.4 The Structure of the Paper

The discussion that follows contains four main sections. They deal with areas that seem to be essentially neutral from a moral point of view; with the positive moral goals that we can realistically hope to serve by working on emotion-oriented computing; with the negative moral consequences we should be seriously concerned about; and finally, with areas that seem to be mixed or controversial. After considering these, the paper revisits some of the more dramatic stances that are taken elsewhere, and considers how they relate to the more pragmatic assessment that has been outlined.

In many ways, the division is not particularly satisfying. It does not yield the kind of neat grid where each major section has similar subheadings, and so on. There are overlaps and gray areas. However, other divisions were explored, and they work even less well. If they are neat, they leave out research areas and problems that really should be considered; and if they cover the ground, they are structurally unsatisfying.

The problem of finding a satisfying structure may actually be informative, insofar as it signals the inherent messiness of ethical judgments in real cases. The same technology may be ethically neutral when it is used to do one thing, ethically positive when it is used to do another, and have ethically undesirable effects when it is used in some other ways. That kind of complexity is part of the reason for believing that serious ethical discussion needs to engage with specific technologies and specific applications.

3 MORALLY NEUTRAL APPLICATIONS

Looking across the applications of emotion-oriented computing as it currently stands, it seems reasonable to say that the great majority of them do not need a great deal of attention because they appear to be morally neutral or all but morally neutral. They can be grouped into two main categories.

3.1 Frivolous Applications

Probably the largest single category of applications for emotion-oriented technology can fairly be called frivolous. The new technology allows people to have experiences that are positive, but of no great significance. Very probably, there will be other means that could have produced equally positive experiences in the same circumstances; and so the effect is to increase the range of ways in which people can induce mildly positive experiences in themselves and others, not to affect the sum total of human weal or woe.

It is hard to argue that innovations like that will affect the moral status of the human race one way or the other. Arguably it has the positive effects of directing human ingenuity into harmless streams and creating employment, neither of which is to be sneered at.

There is a great range of “frivolous” options. Electronic postcards designed to convey the sender’s feelings have become so common as to be hardly worth a mention. Given the synthesis technologies mentioned above, they could easily be made far more elaborate, and that will presumably become more standard. Chatbots are now an established feature of the internet, and they are routinely programmed to express a degree of emotion [28]. Computer games have included characters that express emotion since the Sims over a decade ago. Ambitious new developments have been widely canvassed [29], though it is not clear how successful they have been [30].

A different strand of research involves attaching indicators of emotion to electronic messages. Smileys are ubiquitous. Subtler technologies enhance messages with colours and movements that convey their moods [31]. Related techniques have produced diaries designed to let users record (and reflect on) their emotional state throughout the day [32].

A distinctive set of emotion-oriented technologies has developed around music. They are very varied. At one extreme, there are technologies that allow performers to use emotion-related signals to shape performance [33]. At the other, there are efforts to offer iPod users music that suits their mood by monitoring signals that reflect their emotional state [34].

It is hard to argue that any of these applications adds much either to human weal or to human woe. Nor do the technologies trespass into areas that we might think should be reserved for humans; in general, they are relatively straightforward. The products are, so to speak, ethically lightweight, and ethicists should not make much of them.

The obvious ground for disagreeing relates to one of the stances mentioned earlier. According to a needs-oriented ethic, technology ought to focus on weightier matters, and the applications considered here are objectionable simply because they are frivolous. It seems fair to regard that as an ethic that people are entitled to adopt for themselves, but not one that should be enforced on others. For most people, promoting mild happiness is an unexceptionable occupation, and the emotion-related technologies that contribute to it are not a problem.

3.2 Resourcing Professions

It might at first sight be more surprising to suggest that a second group of applications should be considered broadly neutral. They involve providing various professions with resources that they can use to do things that they might do in other ways, but that would be more expensive, less flexible, harder to adapt to local cultures, and so on. It seems fair to suggest that if that is what an application does, it is not ethically problematic.

There is a large overlap between the technologies used in that way and the technologies used for entertainment. Chatbots increasingly feature on websites designed for practical use rather than entertainment. The aim is generally simple: to make the website more interesting to a user, and to create a generally positive atmosphere.

It is not totally clear yet how successful they are: Methods of evaluation are still under development [28]. However, an established company, Cantoche, offers web-marketing avatars that play roles such as brand ambassadors, interactive assistants, and dialog agents for customer care. Its website claims that 90 percent of clients prefer sites with avatars, and clients can expect a 30 percent increase in customer satisfaction with the website experience [35].

A variation on the theme includes one of affective computing’s most famous citizens, Max [36]. Max is a museum guide, and as such he is a successor to the tape recordings that guided earlier generations round their museums. Like chatbots on a website, he carries out a standard function with some added flair.

Allied to the museum, educators have long used simulations to bring histories, other societies, and so on to life. Emotion has been integral to their simulation of events like the great battles of history. As digital simulations have come into play, it is appropriate that they should simulate emotion too, as was done in a recent simulation of Pompeii [37].

Simulation, particularly interactive simulation, is also widely used in training, and has been since long before computers were invented. Here too, it is wholly natural that digital simulations should incorporate emotion into simulated scenarios—for instance, incorporating the emotions of the trainer and the patient into a scenario where the trainee is giving an injection [38].

There are two sides to the reason for doubting that resourcing professions in these various ways needs very much ethical debate. The first is that in this area, nothing that the affective computing community can realistically expect to do will change very much. The impact is not like the technologies that have transformed conception and death. Technology may be able to make teachers marginally more effective with marginally fewer resources. That is positive, but it is not earth-shattering, and not worth launching moral crusades about. The second side is that professions are already regulated. If technological developments did pass uncomfortable power into their hands, society would already have levers to claw it back. In areas like that, there is no ethical story—only the ongoing revision of codes that already keep the professions in tune with public values.

3.3 Coda

Currently, many applications, perhaps most, fall into the categories considered here. From an ethical point of view, the issues are minor—small pleasures, small improvements, minor risks, minor adjustments to practice.

There are issues to be considered. Chatbots can convey hurtful messages as well as benign ones. It is hard to see major issues involving autonomy, but there could be problems of equity if, for instance, avatars disproportionately represented particular ethnic types or personalities. Issues like that are not specific to the new media, though: They are well covered by existing regulations.

4 MORAL POSITIVES

From long experience in affective computing, it seems that moral aims often play a part in attracting people to the area. They tend to regard the enterprise as distinctively humane—a point reflected in the name of one of the influential projects in the area, HUMAINE. Some of their activities may be less beneficial than they hope, but others do seem to be relatively straightforward moral positives, where it seems reasonably clear that computing can do good by engaging with emotion. This section summarizes activities in that category under four headings.

4.1 Understanding Ourselves

Contemporary research on emotion has been much concerned with questions about the nature of humanity that Plato would recognize. They hinge on the relationship between intellect and other facets of human life, of which emotions are a salient example.

There is a long tradition of regarding intellect as the true essence of humanity, to which other attributes—including emotion—should be subordinated, if they cannot be eradicated altogether. Augustine, for example, believed that our intellects could enter Heaven, but our emotions would not: “they...belong to this life, not to the life we hope for in the future” [39]. Descartes famously regarded the thinking part of him as his essential self [40]. Academic psychology in the second half of the 20th century placed huge emphasis on cognition, and for most of that time it treated emotion as marginal.

In reaction, thinkers have protested that other attributes, particularly emotion, are an essential part of our humanity. Hume famously pronounced that “reason is, and ought only to be the slave of the passions” [41]; and the Romantics extolled passion as the center of both art and human relationships. The late 20th century saw a distinctive turn in the dialogue, hinging on the idea that emotion is not simply a swirl of feelings overlaid on the systems that allow us to function. On the contrary, it is a compendium of routines that solve massively complex problems and that are essential to normal functioning. The case was made in various ways: through neuroscientific studies suggesting that damage to areas associated with emotion seriously impaired the individual’s ability to function [42]; by showing that emotion-related individual differences predict practical success [43]; by showing that people in appropriate emotional states could perform various kinds of task more effectively [44]; by reaffirming the link between emotion and moral judgment [23]; and so on.

That debate has a clear moral component. It is a moral problem if the dominant intellectual tradition belittles parts of human nature that are actually integral to effective action, wellbeing, and moral functioning. Correspondingly, if computing can contribute to righting the balance, it is a morally significant activity.

It seems fair to claim that research on emotion-oriented computing makes distinctive inputs to the ongoing re-evaluation of emotion. Two levels are involved. First, artificial systems that have no emotional competences whatsoever demonstrate painfully clearly that those competences matter. Second, attempting to build systems that have even minimal emotional competence underscores the level of respect due to human abilities that we usually take for granted.

Computing provides an abundance of evidence that emotion-related competences are practically important, and their absence is a significant drawback. There are examples throughout this paper, but it is worth identifying a few key cases here. Emotional distress is a highly functional signal, and it is a serious limitation if an automatic answer system cannot recognize it [46]. Similarly, both learning support and interview systems are substantially more effective if they can use the emotional channels that normally operate alongside exchange of propositions to achieve rapport between two parties [47]. When rapport breaks down, emotion-related behaviors such as laughter are key to defusing the situation [48].

Evidence for the sophistication of apparently simple competences is even more pervasive. The introduction stressed how difficult it has proven to simulate the most peripheral aspects of emotional competence, perceptual and productive. From the standpoint of someone trying to match it, human ability to grasp how the other party feels, and to plan accordingly, is awe-inspiring.

Allied to these is a shift in people’s views of humans as a whole. We have learned to compare ourselves unfavorably with computers: They are wholly rational, free from emotion, and we are neither. Affective computing turns that round, and invites people to realize that the deficiency is on the machines’ side, and the rational path is to make them, to the very limited extent that we can or want to, more like us.

In these ways, and others related to them, it seems fair to claim that emotion-oriented computing can make a real contribution to rebalancing the ways that human beings understand themselves. That, as suggested earlier, is part of the good that the field can hope to do.

4.2 Humanizing Electronic Communication

There are links between the last area and another where it seems hard to deny that there is clear moral good to be done. It is already very common for electronic systems to interact with a person roughly as another person might, and it is all too common for the interaction to be bad. Websites and answering services bully, patronize, demean, confuse, infuriate, and simply don’t work in ways that cause people real problems. They also make statements that it is hard to regard as anything other than a lie—for example, “your call matters to us” repeated at regular intervals, apparently indefinitely.

The problem cuts in two directions. An emotionally insensitive website, for instance, does not simply inflict unpleasant experiences on users. It can also be understood as representing the company or individual who owns it, and systems can easily give an impression of their owners that is very far from reflecting their actual values and feelings. During the writing of this paragraph, an interaction with a utility company provided an example. A recorded message ended with “thank you,” but the intonation rose at the end, and the “you” was both clipped short and stressed. The impression was summary dismissal, and it did not leave a good feeling toward the company.

The point behind these examples is that technology is already used in ways that impact on people’s emotions, whether the designers intend it to or not. Given that it is used in those ways, it certainly seems fair to say that if emotion-oriented technologies can make malinteraction less common, then they are doing good. It is not far-fetched to go further and say that the designers have a moral obligation to consider the impact, and to seek out ways to ensure that it is not needlessly unpleasant.

There are abundant examples of the way that emotion-oriented technology does or could contribute to preventing needless unpleasantness. One which has already been mentioned involves detecting when a person is distressed, and a human being should be alerted [49]. An interestingly subtle example comes from the technology of in-car satellite navigation systems [50]. Nass et al. realized that because of the way directions were given, complying infringed on the driver’s self-respect. Changing the voice used to deliver them meant that the device could be much more effective.
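
The distress case lends itself to a simple human-in-the-loop pattern: the machine never acts on its judgment, it only refers the call upward. A minimal sketch follows, assuming a recognizer that emits a per-turn distress score in [0, 1]; the threshold and the consecutive-turn rule are illustrative choices, not taken from [49].

    # Sketch of an escalation rule for an automated answering system.
    # distress_scores come from a recognizer; thresholds are illustrative.
    def should_alert_human(distress_scores: list[float],
                           threshold: float = 0.8,
                           consecutive: int = 2) -> bool:
        """Alert once distress stays high for several turns in a row,
        so that a single noisy estimate does not trigger a transfer."""
        run = 0
        for score in distress_scores:
            run = run + 1 if score >= threshold else 0
            if run >= consecutive:
                return True
        return False

    print(should_alert_human([0.2, 0.9, 0.3]))   # False: isolated spike
    print(should_alert_human([0.5, 0.85, 0.9]))  # True: sustained distress
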
The example of the utility answering phone points to a related set of possibilities which is potentially quite large. When messages are recorded by humans, it is quite difficult to control the emotional coloring with any precision. The best contemporary text to speech systems convey emotional coloring quite effectively [51], and using them rather than conventional recording technology would make it relatively straightforward to generate a more emotionally appropriate “thank you.”
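
With a synthesis pipeline of that kind, the adjustment can be as light as markup. The sketch below builds a prompt using standard SSML prosody controls, which many commercial synthesizers accept; the particular rate and pitch values are illustrative guesses at a warmer delivery, and tuning them for a given voice would need listening tests.

    # Building an SSML prompt for a less clipped, more appreciative
    # "thank you". Prosody values are illustrative, not calibrated.
    def warm_thank_you() -> str:
        return (
            "<speak>"
            '<prosody rate="90%" pitch="-5%">thank</prosody> '
            '<prosody rate="85%" pitch="-10%">you</prosody>'  # falling, unstressed "you"
            "</speak>"
        )

    print(warm_thank_you())
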
All of these cases involve relatively simple kinds of electronic communication that are already part of people’s lives, where it is not difficult to improve the way emotional aspects of communication are dealt with. It hardly seems controversial to regard improvement in that kind of interaction as an ethical positive. Matters are not altogether simple, though. There are cases where it is difficult to separate appropriate emotional coloring from a role that one might want to argue machines should not take—for example, in the case of artificial companions or teachers. Those are taken up later.

4.3 Broadening Access

It is a recognized issue that technologies which facilitate access for most people reduce it for others. The classical example is the telephone, which accentuated the isolation of people with hearing difficulties because it became the norm for social interaction to use a medium where the communication channels that they had (sign, lipreading, writing, and facial expression) could not be used to supplement the one that they lacked [52].

Since the late 20th century, computer-based systems have become the default way of accessing more and more services—flights, taxis, shopping, news, in-home entertainment, and so on. As with deafness and the telephone, the corollary is that those who have difficulty with the systems, and the interfaces that they use, are excluded. As a result, the Principalist criterion of equity comes into play. Ensuring that those groups are not excluded becomes an ethical issue.

The link between emotion and inclusion is not conclusive, but there are several reasons for taking it seriously. At the most general level, it seems reasonable a priori to assume that the more natural the interface, the more widely usable it will be; and conversely, the more people have to adjust their communicative style to use an interface, the fewer people will be able to make the adjustment.

Expressing emotion certainly is part of human beings’ natural communicative repertoire, and that alone suggests integrating emotion into interfaces could help to make them more natural and more accessible. More specifically, emotional colouring is an integral part of “really natural language” [53], that is, language as people spontaneously use it among themselves. Hence, one might expect its absence to be one of the barriers to developing “really natural language” interfaces, which would seem to offer a very considerable broadening of access. That is quite a strong expectation in at least some areas: It has long been recognized that absence of affective coloring is one of the factors that makes synthesized speech unpalatable to human listeners [54].

More specific evidence exists in a few areas. Limited uptake of technology among older adults is recognized as an issue. Affect may well be a specific issue because age brings changes in the way cognition and emotion interact—for example, emotion-related information is used to compensate for cognitive decline, and emotion regulation is prioritized [55]. These appear to be reflected in direct studies of interactions between older adults and technology. Medeiros et al. [55, p. 33] hypothesize that the dominant level of response to technology changes with age: “sensory for youth, cognitive for younger adults and affective for older adults.”

Last but not least, it is very widely recognized that “computer anxiety” is a major factor in the uptake of computing technology. It is known to affect different groups and types of people differentially [57], [58]. It seems eminently reasonable to believe that their disadvantage would be reduced if we could develop interfaces that were less demanding, and that had some ability to respond to anxiety if they did provoke it.

Improvements in access are not something that can be guaranteed, but ethics does not depend on guarantees. It is reasonable to think that developing more natural interfaces with a degree of emotional competence as an integral part would address inequities that currently exist. Given that, it is hard to deny that the goal of making that kind of interface a reality is an eminently moral one.

4.4 Additional Resources to Make Judgments

A contrasting category includes the work that brought our group into emotion-oriented computing many years ago [59]. It involves supporting people who are already making judgments that involve emotion, which have substantial implications for the person about whom the judgment is made, by providing them with new kinds of instrumentation. In the most straightforward cases, the process involves automatic identification of features that an expert observer would use to make the judgment, but that someone who is not an expert observer might not register reliably.

Our work involved psychiatric issues, and specifically the diagnosis of flat affect. Flat affect is a feature of some kinds of schizophrenia, and it is indicated by the absence of signs of affective expression. The most prominent signs are in the face and the voice. There are large individual
differences in people’s ability to make analytic judgments about vocal patterns: Some people have “a good ear,” others do not. There is no guarantee that psychiatrists will be in the first category. Hence, instrumentation designed to supplement their judgment would appear to be a useful resource. The task is not simply a matter of signal processing because it is necessary to find the relevant features (i.e., features that characterize voices which expert judges regard as showing flat affect), and to establish reliable ways of extracting them. Early efforts encountered problems on both counts, but the general area has developed since. For example, recent studies on the related problem of vocal signs of depression have shown that the durations of particular speech sounds relate to particular aspects of depression more closely than more global (and more easily recovered) speech features do [60].
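
The contrast between global and segment-level measures is easy to state in code. The sketch below assumes a phone-level annotation of the kind produced by forced alignment; the data format, the invented timings, and the choice of vowel durations are illustrative assumptions, not the specific measures used in [60].

    from statistics import mean

    # Phone-level annotation: (label, start_sec, end_sec), e.g., from a
    # forced aligner. The values here are invented for illustration.
    phones = [("h", 0.00, 0.06), ("e", 0.06, 0.21), ("l", 0.21, 0.28),
              ("o", 0.28, 0.52), ("w", 0.60, 0.66), ("er", 0.66, 0.94)]
    VOWELS = {"a", "e", "i", "o", "u", "er"}

    # Global feature: phones per second over the whole utterance.
    speaking_rate = len(phones) / (phones[-1][2] - phones[0][1])

    # Segment-level feature: mean duration of vowels only, the kind of
    # specific measure that [60] reports relating more closely to
    # particular aspects of depression than global features do.
    vowel_durations = [end - start for label, start, end in phones if label in VOWELS]
    print(round(speaking_rate, 2), round(mean(vowel_durations), 3))
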
Similar work is being done with facial expression. So, for instance, depressed patients show reduced frequency of Duchenne smiles, and less overall facial animation [61]. These are variables that it is now possible to measure automatically so that clinicians can be given the relevant information.

It is worth stressing that what is being considered here is not allowing automatic systems to diagnose, for instance, flat affect or depression: It is using them to give clinicians information about patterns that are relevant to their diagnosis. Unlike a diagnosis, the identification of patterns is concerned with physical events that are directly observable; and confidence in it could theoretically approach 100 percent, though in practice various kinds of noise will usually mean that the level is lower. In logical terms, the task is classification, not abduction (reasoning to the most probable cause or explanation). The distinction can be blurred in practice, for instance if the machine output uses descriptors that invite interpretation (e.g., tense or trembling voice), or if the user treats theoretically objective descriptors as a coy way of specifying a mental state. That kind of blurring does raise ethical issues, but it is discussed later. It is also worth mentioning that human beings in general are very bad at identifying objective patterns in the face and voice, partly because they are so strongly disposed to jump directly to what they believe lies behind the patterns. For example, even phoneticians trained to code voice quality achieve quite low rates of agreement beyond coarse-grained description [62].
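
Agreement of that kind is usually quantified with a chance-corrected statistic such as Cohen’s kappa, which is also a reasonable way to sanity-check any automatic coder against human experts. A minimal sketch, with invented codes and ratings:

    from sklearn.metrics import cohen_kappa_score

    # Two trained coders labeling the same ten voice samples (invented
    # data). Kappa corrects raw agreement for agreement by chance.
    coder_a = ["tense", "modal", "breathy", "tense", "modal",
               "creaky", "modal", "breathy", "tense", "modal"]
    coder_b = ["tense", "modal", "modal", "creaky", "modal",
               "creaky", "breathy", "breathy", "modal", "modal"]
    print(cohen_kappa_score(coder_a, coder_b))  # about 0.44: well short of reliable
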
Interestingly, this is one of the areas where specialist research is being translated directly into systems that can be used more widely. Work by Picard on somatic signs of emotion and El Kaliouby on facial signs form the basis of products that have recently been released [63], for use in the first instance by teams who are studying emotion-related phenomena and want to reduce their reliance on impressionistic judgments. Of course, any instrumentation can be abused, but providing instrumentation that is valid if it is used appropriately is hard to classify as anything but a moral good.

4.5 Therapeutic Applications

This section partly overlaps with the last, but it brings different issues to the fore. Emotion-oriented technologies have various therapeutic uses that it is difficult to deny are ethically positive, not so much because of their logical status as because of their usefulness.
One example which seems relatively clear is pain detection. (Pain is not an emotion, but in the traditional use of the term, it is an archetypal affect.) Ingrained social norms inhibit patients from describing pain objectively even in situations where that is what the clinician needs, for instance to assess when movement in a damaged shoulder becomes painful. It has been shown that certain facial actions are characteristic of pain [61]. They can now be detected automatically, and automatic pain detectors constructed on that basis can indicate quite precisely when pain occurs [64], [65].
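
One published way of formalizing those facial actions is Prkachin and Solomon’s pain intensity score (PSPI), which combines the intensities of a handful of facial action units (AUs). The sketch below implements that formula; the AU intensities would come from an automatic detector of the kind cited in [64], [65], and the example values are invented.

    # Prkachin-Solomon pain intensity (PSPI) from facial action unit
    # intensities (0-5 scale; AU43, eye closure, is 0 or 1).
    def pspi(au: dict[str, float]) -> float:
        return (au["AU4"]                     # brow lowerer
                + max(au["AU6"], au["AU7"])   # cheek raiser / lid tightener
                + max(au["AU9"], au["AU10"])  # nose wrinkler / upper lip raiser
                + au["AU43"])                 # eye closure

    frame = {"AU4": 2.0, "AU6": 1.0, "AU7": 3.0,
             "AU9": 0.0, "AU10": 1.0, "AU43": 0.0}
    print(pspi(frame))  # -> 6.0, a clearly visible pain expression
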
A very different example comes from “cybertherapies,” where the reality is that two humans are interacting, but affective technology is used to present one—the “therapist”—as an avatar [66]. The technology is a device that allows socially anxious individuals to achieve self-disclosure that they would find difficult when sensory evidence made it impossible to ignore the fact that the other party was another human being. No deception is involved: The anxious individual knows what is happening. Nevertheless, interacting with the avatar avoids problems that would obstruct other forms of interaction. If that presentational device allows particular groups of people to deal with problems that would resist other forms of therapy, it is difficult to see why other people should doubt that it is ethically positive.

5 MORAL NEGATIVES

This section contains three parts, one dealing with negatives that include emotion-oriented computing as part of a much wider case against technology or computing; the second dealing with negatives that are very specifically related to emotion; and the third a middle ground, where the issues can arise when emotion is not involved, but are particularly likely to arise when it is.

5.1 General

It is useful to start with the general issues, mainly to clear the air. It seems reasonably clear that some arguments against emotion-oriented computing are really particular applications of arguments that stand or fall on much more general grounds. If that is the case, it simply confuses the issue to talk as if the problems lay with emotion-oriented computing in particular when in fact the arguments apply equally well to a wide range of technologies.

The most general of all the positions is the one that was described earlier as nature-oriented. Critical arguments often seem to be grounded, explicitly or implicitly, in Rousseau-like suspicion of any deviation from the state of nature. A computer behaving like a person is certainly not natural, and given that general perspective, it must therefore be a cause for concern. But if that is the premise, it should be brought out and debated explicitly. It is difficult to believe that an honest debate would reveal many people willing to divest themselves of everything in their surroundings that was artificial.

A second position is related, but narrower. It is concerned not with technology in general, but with
technology that blurs the line between humans and machines. Ideas like that are often raised in debates, but on inspection, are difficult to express in a defensible way.

One form of the argument is saying that artifacts which are too human-like are doomed to fall into the famous uncanny valley [67]. If that were true, it would seem to be a practical problem, not an ethical one—much as it is a practical problem that there are certain smells that humans dislike (presumably nobody thinks that plastics with an unpleasant smell are an ethical issue). That aside, it is not at all clear that uncanniness is associated with a valley dividing acceptably human-like things from everything else [68]. The problem may be more to do with things that mix features of the living and the dead.

Another argument claims that the way to meet human needs is to design things that fit human requirements, not things that mimic humans [69]. Again, that seems to be a practical issue, not an ethical one. It is also not obvious that there is an opposition: Why not do both? The logic would be clearer if it were being argued that it is God’s place to create human beings, and it is sacrilege when humans create human-like things. Presumably, though, that is not an argument that many people in computing would want to pursue. One is left feeling that there is something in this area that bothers people, but that it is difficult to know what it is.

Beyond all that, it is not obvious that emotion-oriented computing has to involve artifacts that are particularly human-like. A serious web page designer ought to take careful account of the way the page affects a user’s emotions. That does not mean it has to have a smiling face and a sweet voice. For example, offering to send an error message when the application crashes is one of the better ways of acknowledging the user’s emotions. So even if the objection were clear, it would only apply to some parts of emotion-oriented computing.

A third position is easier to understand. It is to do with creating slaves. It has been pointed out that our society uses computers much as the Romans used Greek slaves—except, of course, that the Romans required the slaves to speak Latin, not vice versa. The argument touches an uncomfortable intuition that something about human beings likes being able to turn to an agent with amazing abilities, and say “do this” and it is done, “find that” and it is found, “terminate” and it terminates. It is very obvious why society should worry about encouraging the part of us that likes to do that. It does seem right to worry about creating systems that encourage that attitude, and it is not obvious what to do about it. On the other hand, it applies whether or not the systems have emotional capabilities.

One particular variant of that problem does have particular ties to emotion, though. People do take particular pleasure in forcing another agent to take part in sex or to suffer pain. Nobody should seriously doubt that there could be a market in that area for intelligent systems with emotional capacities—as objects for sex and sadism. There may be grounds to worry about creating systems that encourage that side of humanity.

There is one other general negative that is perhaps harder to discount than any of the others. It is about concentration of resource. That kind of moral question is quite salient when the resource is devoted to making computer games marginally more entertaining. It is less worrying when the resource is being concentrated for a purpose that seems morally good in and of itself, such as letting everyone in our society access the resources that computing makes available. However, the question is not at all specific to emotion-oriented computing. It applies equally to improving processor speed or designing more alluring cars.

These are the general moral negatives that seem to recur. It is clearly reasonable to be concerned about creating mindsets that accept slavery, and about concentrating resources on already rich sections of humanity. However, these things do not make emotion-oriented computing a worse area to work in than most others. Society provides a host of ways for people to exercise power in unsavory ways, and to concentrate resources. Confronting those moral problems means confronting society, not confronting emotion-oriented computing.

5.2 Emotion-Specific

At the opposite extreme, there are moral negatives that seem to be very specific to emotion-oriented computing. Perhaps surprisingly, though, it is difficult to find many.

The first moral negative is that there is a peculiar kind of lie at the heart of the enterprise. A beaming Duchenne smile says, “I am really happy”; a trembling voice says, “I am afraid.” But if the signs are being given by an agent, there is no happiness or fear inside the box, just symbols being connected slightly differently or parameters being adjusted. The whole process is about creating a lie—the appearance of emotion without the reality.

That argument can be understood in various different ways, some of which carry much more weight than others. As might be expected, the bias here is to focus on the concrete issues.

The most abstract form of the argument is open to several rebuttals. People do not generally regard it as deception when artifacts convey an impression of something that is not there. It takes a philosopher to dwell on the idea that a picture constitutes a lie. In the specific case of emotion, it is not at all obvious that people in general care very much about the relationship between emotion-related signs and the irreducibly private experiences of the agents who give them. They would consider it a lie if an agent’s overt signals were seriously at odds with its private intentions. It is not at all clear that they would consider it a lie if they discovered that a colleague had spent the past decade unable to experience subjective feelings of emotion, but had continued to give signals that accurately reflected his/her evaluations and intentions.

Arguments like these undercut a strong kind of objection to affective computing, based on concern that it necessarily entails deception. However, they do not refute the objection that affective computing invites particular kinds of deception, which are particularly distasteful. That kind of argument can be cast at two levels.

The more benign level argues that the ability to manipulate emotional signs at will brings with it a particularly heavy moral obligation. People know how to discount words because they know that words can be used
to lie to them. However, the agents that they know—i.e., humans—do not have the same freedom to generate dishonest emotion-related signals. As a result, people cannot help treating emotion-related signals as trustworthy in a way that words are not. Therefore, if an agent does use emotion-related signals dishonestly, people are particularly vulnerable to it. For that reason, there is a special need to ensure that artificial agents do not generate misleading emotion-related signals.

The less benign level accepts the argument in the last paragraph, but extends it pessimistically. The extension is that the way people read emotion-related signals depends on the sender having particular, peculiarly human combinations of connections and dispositions. Therefore, what is required for an agent to give signals that will not grievously mislead a human is not just that the designer tries sincerely to avoid deception: It is that the designer somehow manages to endow the agent with the whole network of connections and dispositions that humans will presuppose when they try to interpret the signals.

From a pragmatic point of view, it seems fair to think that the truth is somewhere between the last two arguments, and its impact is qualified by practical considerations. It is hard to doubt that affective computing will open up opportunities for new kinds of deception: It will be technically possible for agents to give misleading signals, which uncritical people will be overwhelmingly disposed to accept. Some avatars will be confidence tricksters and some people will be conned.

It is another matter entirely to assume that the deception will be any less manageable than older forms. One reason for optimism is human flexibility: People do learn to deal with the emotion-related signals offered by species with significantly different kinds of emotional organization. A second has been repeated over and over: There is no foreseeable prospect of avatars matching the deceptive capabilities of a human being. The third is that humans are capable of policing new technologies. One of the fundamental reasons for writing about ethics and emotion-oriented computing is precisely to anticipate where policing may be required. Deception carried by emotional signs is one of the areas where policing is most obviously needed. But the fact that an activity needs to be policed does not mean that it should be banned.

The second specific moral negative is effectively a mirror image of a positive claim that was made earlier. Emotion-oriented computing can give concrete expression to the sense that emotion matters. However, it can also give concrete expression to the sense that emotion is fundamentally mundane—a hardwired coarse communication system. It can encourage people to see a smile not as a thing of wonder and richness, but as a particular kind of token in a communicative exchange.

It is not an issue that should be dismissed, but there is good reason to doubt that the problem is particularly grave. Comparable dehumanizing metaphors have been part of cognitive psychology for decades—attention is a filter, memory is a filing cabinet, and so on. However, it is not obvious that people who have been exposed to metaphors of that kind in the course of psychology training suffer particular ill effects.

5.3 Problems Exacerbated by Emotion

The previous section focused on concerns that are specific to emotion. However, there are many cases where the issue is that research on emotion could contribute to a more general problem. Key examples are sketched here, working from the issues where the risks seem least clear to those where there seems most.

5.3.1 Surrogacy

A widely cited concern was dramatized in Isaac Asimov’s description of the planet Solaria [70]. People there ceased to interact with other human beings: They only interacted with robots or via electronic communications. Now, Asimov could walk down the street and see people virtually indifferent to their real surroundings—listening to music on iPods or talking on mobile phones, looking up when satnavs tell them where to turn at the next corner.

There is no doubt that immersion in a virtual world is happening. It is reasonable to worry about developments that could encourage people to move deeper into it. What is not clear is whether new types of emotion-oriented technology are likely to exacerbate the problem. Current techniques do not allow us to create virtual worlds that are emotionally rich enough to compete with reality, and it is hard to imagine that changing in the foreseeable future, if ever. Refining old technologies (such as toys, music, films, TV, and action games), and making access to them easier, seems a more believable route to withdrawal from human-to-human contact.

5.3.2 Thought Control

It is clear that information technology has the potential to produce formidable instruments of thought control. People can be trained and taught very effectively using old computer-based learning techniques or new virtual reality systems. They can be shown things that happened long ago or never happened. They can be saturated with advertisements tuned to their individual style on nicely calculated schedules. It is hard to doubt that adding emotional sophistication enhances the potential of these technologies to convince and to sway people, though it is not clear how much difference it makes.

Some of the most obviously benevolent pieces of emotion-oriented computing have been in this area. An excellent example is FearNot, the program that tries to teach children how to deal with bullying [71]. Good teaching always has appealed effectively to emotion, and enhancing that tradition technologically has obvious potential for good. However, there is equally obvious potential for ill. Schools can teach pupils to hate minorities as easily as to stand up to bullies. Candidates with great resources can finance campaigns that slip telling emotional messages into web pages day in and day out.

The scenario being considered here is very like one that was considered earlier and classified as unproblematic. The difference is that the earlier scenario assumed both that there was professional regulation and that it was ethically sound. Clearly, though, regulation may break down, or be corrupted, and that changes things.
The underlying issue here is a general one. It is hard to think of an exception to the rule that technologies which empower humans may be used for ill as well as for good. The question is whether the threat of emotion-oriented technologies falling into manipulative hands has particularly worrying features.

There is no easy way to answer the question. That is partly because the real potential of emotion-oriented technology is so unclear, and partly because of uncertainties surrounding the question of how illegitimate control may be established and resisted. Among other things, if experts cannot be trusted to resist illegitimate control of technologies, it is not clear how much is gained if they refuse to develop them under more benign regimes. The issues deserve clear-headed debate. They impinge on emotion-oriented technology, but they are not restricted to it.

5.3.3 Misrepresenting Scientific Understanding
It is assumed in this paper that there are moral issues surrounding the way we understand or misunderstand humanity. If so, misrepresenting the depth of our understanding is a moral issue. The issue affects technology in general, because demonstrations can easily give a profoundly misleading impression of the extent to which their creators understand some aspect of humanity.

The archetypal example was Weizenbaum's ELIZA [72], which actually depended on simple tricks, but which is easily portrayed to a naive user as evidence of a fundamental breakthrough in understanding human intelligence in general, and language in particular. It is revealing that Weizenbaum himself reacted by making strong public statements about the shallowness of AI [73].
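The kind of trick involved is easy to demonstrate. The fragment below is a deliberately crude sketch in the spirit of ELIZA's keyword-and-template method (the rules shown are invented for illustration, and are far simpler than Weizenbaum's actual script mechanism); it produces superficially attentive replies without any understanding at all.

import re

# Invented rules in the spirit of ELIZA: spot a keyword pattern, reflect
# the user's pronouns, and slot the result into a canned template.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo sounds responsive."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(utterance: str) -> str:
    """Return a templated reply; no model of meaning or emotion is involved."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the default keeps up the illusion of attention

print(respond("I feel trapped by my job"))  # Why do you feel trapped by your job?

Rules of this kind, multiplied a few dozen times, are enough to sustain exchanges that naive users read as insight.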
Emotion particularly lends itself to ELIZA-like demonstrations. The toy industry testifies that it takes no intellectual understanding of emotion to make things that evoke it. Nevertheless, if a device engages emotional responses, it is difficult not to feel that it vindicates the creator's claim to understand something deep about emotion.

The result is that scientists have to strike a careful balance. It is perfectly legitimate to engage an audience by showing robots that flutter their eyelashes [74] or appear to nuzzle up to a human [75]. But it is an ethical problem if they use the spontaneous response to invite false evaluations of the understanding behind them.

5.3.4 SIIFs
The last category is the one that raises the clearest concerns. It links back to an earlier example. It seems thoroughly ethical to provide a psychiatrist with printouts that show what a patient's voice is actually doing. It would be entirely different if the computer monitored the voice, and sent word to the police that the speaker should be taken to a secure mental hospital.

The example illustrates a general class of systems that does need to be taken seriously. They have been called semi-intelligent information filters, or SIIFs for short [76]. The important point about SIIFs is that they try to interpret data and draw conclusions that have serious implications for the people concerned.

The area raises a multitude of concerns. The point is not that machines are making decisions that humans should be making—it is that machines are making decisions that they may not be very good at, and the consequences of a wrong decision are serious. One of the reasons why a judgment is likely to have serious consequences is that it has moral implications.

Emotions are not the only kind of case where that kind of issue is likely to arise, but they are an important one. It is reasonable to object if machines are left to make judgments about whether people were fighting or behaving suspiciously, and so on. Emotions belong in a similar bracket—areas where machines should not be trusted to make decisions for the foreseeable future.

That is a technical judgment. It is grounded in long experience of research on machine perception, which gives a sense of what the algorithms that we can write at present can and can't be expected to do. Unfortunately, not many people do understand what can and can't be expected from machine perception. That is part of the problem. People are not educated to judge how much weight should be put on a readout from a machine that says, "she was distressed/he was angry," and so on.

From a nonexpert's point of view, SIIFs are very attractive. There is long-standing interest in monitoring the states of drivers and pilots [77]. Retailers would welcome a machine that detected whether staff were behaving in a friendly way toward customers, and education authorities would love a machine that detected whether staff were showing inappropriate attraction to children. Unfortunately, that would almost certainly mean people losing their jobs because they had a gruff voice even if they were actually friendly, or a breathy voice which had nothing to do with their sexual orientation.

It is worth separating three levels of concern in this area. One is a general concern about too much surveillance. Like so much else, that is a wider debate, not specifically for emotion-oriented computing. The second is a technical concern about competence. There the emotion-oriented computing community has a very specific input to make. The key is adequate assessment of what a given machine will or will not achieve. That takes genuine knowledge of the area, on both the computational side and the emotional side. SIIFs need to be certified as fit for purpose by people who know what they are doing.

The third level follows from that. It is extremely difficult to communicate to a nonexpert what an emotion-oriented system can and cannot achieve. The problem is particularly acute with SIIFs, where there will usually be an accessible way of describing what the system does—it detects anger, or attraction, and so on. It is difficult for nonexperts given that kind of description to avoid feeling that they understand well enough what the system does. In reality, though, no system that exists or can be foreseen does anything like what a nonexpert infers from an accessible description. It is likely to detect, with doubtful reliability, some of the patterns that are associated with a particular expression of a particular state in a particular situation; and if they are present, to infer that the state is present, without testing the plausibility of the conclusion against the multitude of contextual and personal considerations that a human would bring into play.
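What that means can be made concrete with a deliberately naive sketch. Everything in it is invented for illustration (the features, thresholds, and labels correspond to no cited system); the point is the shape of the inference the passage above describes: surface patterns in, state label out, with no check on the plausibility of the conclusion.

from dataclasses import dataclass

@dataclass
class VoiceFeatures:
    """Surface patterns a SIIF-style filter might extract from audio."""
    mean_pitch_hz: float
    loudness_db: float
    speech_rate_wps: float  # words per second

def naive_siif_label(f: VoiceFeatures) -> str:
    """Map surface patterns straight to a state label.

    Hypothetical thresholds: if the patterns associated with one
    expression of one state are present, the state is declared present.
    """
    if f.mean_pitch_hz > 220 and f.loudness_db > 70:
        return "angry"
    if f.speech_rate_wps < 1.5 and f.loudness_db < 55:
        return "distressed"
    return "neutral"

# A human listener would also weigh context the filter never sees: who is
# speaking, to whom, about what, in what setting, with what history.
excited_fan = VoiceFeatures(mean_pitch_hz=250, loudness_db=78, speech_rate_wps=3.2)
print(naive_siif_label(excited_fan))  # "angry", although excitement fits just as well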
It may well be that in some situations, it is in a user's interest to deploy a system like that despite its limitations. However, the user cannot make the decision unless he or she genuinely understands what the limitations are. Similar points apply to others who may be drawn in, for example police and courts. Understanding the limitations depends on experts accepting responsibility not only for telling the unvarnished truth about their creations, but also for finding ways to convey it to potential users who may be deeply reluctant to go beyond what seems to them a perfectly adequate grasp.

There is a strong case for saying that after other avenues have been mapped, the single most difficult ethical issue is truth. People find it singularly difficult to state, or to absorb, what the devices do, because the vocabulary that comes to mind applies to systems of an entirely different order. If they respond by conspiring to give and to accept descriptions that misrepresent the reality, then there is a double ethical problem: a problem of honesty and a problem of unwanted consequences.

6 ISSUES FOR ETHICAL REGULATION
The discussion so far has prioritized the question of whether emotion-oriented technologies as they exist now, and as they can realistically be envisaged, are intrinsically pernicious. The broad conclusion is that they are not: There is good that they can do, and the harm that they might do can be forestalled. Clearly that does not end the matter, though. It indicates a need to consider what kind of action is required to forestall harm.

A key part of the argument is that existing kinds of regulation either cover a large proportion of the things that can be done or could easily be extended to cover them. That applies to resources in teaching, training, advertising, communication, and so on. Although regulation tends to lag behind new technologies, it tends to catch up, not least because reputable users want regulation [78]. The same applies to participants in the research needed to develop systems, where standard ethical committees are being brought into play [26].

There are issues that are nonstandard, though, and where real care is needed to avoid problems. Following on from the previous section, the single most difficult issue is misinterpretation. Central to the problem is the principle of "pars pro toto." It is difficult for people to avoid inferring that if a system shows fragments of behavior that are strikingly human, it has other characteristics that would be associated with those fragments in a human. The problem is exacerbated by the fact that most people do not have the technical background to understand what they observe in any other way.

That problem is a major factor in dealing with several difficulties that were mentioned above. One is ethical use of SIIF systems. If people are considering installing a system to detect distress, they need to know not only that it succeeds in 95 percent of cases in a trial, but also that, unlike a human, faced with an unforeseen form of distress, it will very probably misclassify the situation completely. Another involves signaling emotion. There is an immediate appeal about generating signs that convey warmth or concern, but it becomes highly dubious if users take them to mean everything that they would mean if a human being gave them. If people choose to have endearing artificial companions, the provider needs to find ways of ensuring that their endearing features do not lead people into relying on them in inappropriate ways. The same point goes for public understanding of science: The way the human mind works makes it all too easy for a superficial demonstration to suggest profound understanding. It seems quite possible in principle to find ways of dealing with the peculiar ethical problems that arise in these areas, but real work is needed to establish them.
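The first of those points lends itself to a toy calculation. The detector, the case counts, and the scenario below are all invented for illustration; what matters is the arithmetic: a rule can genuinely succeed in 95 percent of trial cases and still misclassify every instance of a form of distress that the trial never contained.

def toy_distress_detector(case: dict) -> bool:
    """Hypothetical rule learned from a trial: distress means audible crying."""
    return case["crying_audible"]

def accuracy(cases: list) -> float:
    hits = sum(toy_distress_detector(c) == c["distressed"] for c in cases)
    return hits / len(cases)

# Trial cases: in this invented trial, distressed people happened to cry audibly.
trial = (
    [{"crying_audible": True, "distressed": True}] * 48
    + [{"crying_audible": False, "distressed": False}] * 47
    + [{"crying_audible": False, "distressed": True}] * 5  # the 5% it misses
)

# Unforeseen form: silent, frozen distress. The pattern the rule relies on
# is absent, so every case is misclassified.
unforeseen = [{"crying_audible": False, "distressed": True}] * 50

print(f"trial accuracy: {accuracy(trial):.0%}")        # 95%
print(f"unforeseen form: {accuracy(unforeseen):.0%}")  # 0%

The headline figure is true, and it is still the wrong number on which to base a decision about deployment.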
There are strong links between that and another immediate challenge. It might be called lieutenancy—the extent to which an authority figure can or should invest part of that authority in a machine—a virtual teacher, diet coach, therapist, etc. The question here is how to draw the line between acceptable delegation and delegation that either infringes on the recipient's autonomy or abrogates the authority figure's responsibility.

There are intriguing general discussions about autonomy [79]. The point here, though, is more straightforward: This is an area where there is a need for regulation (formal or informal) to ensure that potentially useful applications are not abused. There is no obvious reason to doubt that workable regulation can be established. Some elements seem reasonably obvious, such as establishing that a professional cannot legally delegate responsibility to an automatic system; that would give professionals an incentive to confront the problem noted above, establishing exactly what a system does.

It would be dishonest not to mention applications that would be widely seen as morally repugnant, but that, history suggests, would be extremely difficult to regulate. It has been argued that sex robots are inevitable [80], and examples have already been reported [81]. The case is an exception that proves an important rule. It seems unlikely that many other applications will have the kind of appeal that would lead potential users to defy reasonably framed agreements on legitimate and illegitimate uses.

7 DARK IMAGININGS
Looking at the emotion-related technologies that currently exist, and that can realistically be envisaged, it is hard to argue that rational, well-informed people would regard them as threatening. However, that does not alter the fact that substantial numbers of people do find the idea of emotion-related technology disturbing. Considering how to react to that is an ethical problem in itself: What status has widespread disquiet with a technology that seems to have real (if undramatic) potential for good, and to pose few risks that cannot be controlled?

If the overall view that has been sketched in this paper is accepted, then part of the answer would seem to be simple. The field will not be able to do the good that it has the potential to do if it is dogged by public suspicion of the systems it makes. The suspicion would be reduced to reasonable and limited wariness if the public understood the systems for what they are, rather than believing that they are, or might easily become, quasi-living creatures with disturbing powers. On that argument, there is an
ethical burden on people in the field to portray the systems for what they are, and to avoid the kind of inflation that science, and science journalism, usually regard as legitimate sales talk. Conversely, talking up the risk that the technology will produce unnatural monsters is not a harmless intellectual sport: It is unethical, because it makes it less likely that good will be done.

It may be that, however clearly the reality is portrayed, people will be unable to resist the draw of "pars pro toto," and therefore will be unable to reach a modus vivendi with emotion-oriented systems—too prone to misunderstand them for comfort or safety. If so, then the enterprise has to be abandoned. However, the proliferation of harmless examples that is already occurring (discussed in Sections 3 and 4) suggests that that outcome is avoidable.

8 CONCLUSION
Debate in practical ethics is an interplay between specifics and generalities. Principles are articulated in the light of what seems right in particular cases, and judgments about particular cases are measured against more general principles. As Beauchamp and Childress put it, "Progressive specification often must occur to handle the variety of problems that arise, gradually reducing the dilemmas and conflicts that abstract principles lack sufficient content to resolve" [82].

Until recently, ethical debate about emotion and computing has been unavoidably skewed toward generalities: The shape of the specifics has simply not been clear enough to support very much argument. However, research has accumulated to the point where arguments from specifics can make a more substantial input. This paper has tried to contribute to that kind of development. It is to be expected that it will seem unbalanced to those whose arguments prioritize generalities: After all, the aim is to redress a skew toward debates that say rather little about actual systems and projects and the specific issues that they raise, for good or ill.

This paper offers a way of developing arguments from specifics that is undoubtedly crude, but that seems a natural starting point: to sort activities and aspirations into those that potentially do good, those that potentially do harm, and those that seem unlikely to do either; and to look for steps that might help to maximize the good and minimize the harm. It is to be expected that others will feel that key examples have been left out, and will question key judgments; but if the framework allows that kind of argument to develop, that is all to the good.

Bringing specifics to the fore is partly about achieving a logical balance, but only partly. Part of the motivation is an impression that arguments from principles tend not to reflect a fundamental feature of the field, which is the depth of good intention behind it. Specific projects do aim to humanize computing, promote access, support delicate decisions, and facilitate therapy; and there is a widespread sense that the discipline makes a distinctive contribution to the long project of bringing the emotional side of human nature in from the cold. To somebody whose daily work is oriented toward goals like those, it seems wrong that people should try to reach ethical judgments on it without understanding that those are the goals. It may be that the good intentions are the kind that pave the road to Hell, but it would be sad indeed if that conclusion were allowed to take hold without the public having understood what in reality was being done.

ACKNOWLEDGMENTS
The author wishes to thank Peter Goldie for insightful discussions of the topic. This paper draws on work that was supported by several grants from the European Community: HUMAINE, SEMAINE, and SSPnet.

REFERENCES
[1] M.B. Arnold, Emotion and Personality, Vol. 2: Physiological Aspects. Columbia Univ. Press, 1960.
[2] R. Cowie, N. Sussman, and A. Ben-Ze'ev, "Emotions: Concepts and Definitions," Emotion-Oriented Systems: The Humaine Handbook, P. Petta, C. Pelachaud, and R. Cowie, eds., pp. 9-30, Springer-Verlag, 2011.
[3] J. Russell and L. Feldman Barrett, "Core Affect, Prototypical Emotional Episodes, and Other Things Called Emotion: Dissecting the Elephant," J. Personality and Social Psychology, vol. 76, pp. 805-819, 1999.
[4] J. Panksepp, "At the Interface of the Affective, Behavioral, and Cognitive Neurosciences: Decoding the Emotional Feelings of the Brain," Brain and Cognition, vol. 52, pp. 4-14, 2003.
[5] K.R. Scherer, "What Are Emotions? How Can They Be Measured?" Social Science Information, vol. 44, no. 4, pp. 695-729, 2005.
[6] R. Cowie, "Describing the Forms of Emotional Colouring That Pervade Everyday Life," Oxford Handbook of Philosophy of Emotion, P. Goldie, ed., pp. 63-94, Oxford Univ. Press, 2010.
[7] Affective Computing and Intelligent Interaction, A. Paiva, R. Prada, and R. Picard, eds., Springer-Verlag, 2007.
[8] Proc. IEEE Third Int'l Conf. Affective Computing and Intelligent Interaction, J. Cohn, A. Nijholt, and M. Pantic, eds., vol. 1, 2009.
[9] Emotion-Oriented Systems: The Humaine Handbook, P. Petta, C. Pelachaud, and R. Cowie, eds., Springer-Verlag, 2011.
[10] A. Batliner, B. Schuller, D. Seppi, S. Steidl, L. Devillers, L. Vidrascu, T. Vogt, V. Aharonson, and N. Amir, "The Automatic Recognition of Emotions in Speech," Emotion-Oriented Systems: The Humaine Handbook, P. Petta, C. Pelachaud, and R. Cowie, eds., pp. 71-99, Springer-Verlag, 2011.
[11] G. Castellano, S.D. Villalba, and A. Camurri, "Recognising Human Emotions from Body Movement and Gesture Dynamics," Affective Computing and Intelligent Interaction, pp. 71-82, 2007.
[12] Z. Zeng, M. Pantic, G.I. Roisman, and T.S. Huang, "A Survey of Affect Recognition Methods: Audio, Visual, and Spontaneous Expressions," IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 31, no. 1, pp. 39-58, Jan. 2009.
[13] M. Brendel, R. Zacharelli, B. Schuller, and L. Devillers, "Towards Measuring Similarity between Emotional Corpora," Proc. LREC Workshop Emotion and Affect, 2010.
[14] R.B. Knapp, J. Kim, and E. André, "Physiological Signals and Their Use in Augmenting Emotion Recognition for Human-Machine Interaction," Emotion-Oriented Systems: The Humaine Handbook, P. Petta, C. Pelachaud, and R. Cowie, eds., pp. 133-159, Springer-Verlag, 2011.
[15] M. Shaikh, H. Prendinger, and I. Mitsuru, "Assessing Sentiment of Text by Semantic Dependency and Contextual Valence Analysis," Affective Computing and Intelligent Interaction, pp. 191-202, 2007.
[16] R. Cowie and R. Cornelius, "Describing the Emotional States That Are Expressed in Speech," Speech Comm., pp. 5-32, 2003.
[17] M. Schröder, H. Pirker, M. Lamolle, F. Burkhardt, C. Peter, and E. Zovato, "Representing Emotions and Related States in Technological Systems," Emotion-Oriented Systems: The Humaine Handbook, P. Petta, C. Pelachaud, and R. Cowie, eds., pp. 369-387, Springer-Verlag, 2011.
[18] B. Schuller, S. Steidl, A. Batliner, F. Burkhardt, L. Devillers, C. Müller, and S. Narayanan, "The INTERSPEECH 2010 Paralinguistic Challenge," Proc. Interspeech 2010, pp. 2794-2797, 2010.
[19] R. El Kaliouby and P. Robinson, "Real-Time Inference of Complex Mental States from Facial Expression and Head Gestures," Proc. IEEE Int'l Conf. Computer Vision and Pattern Recognition, vol. 3, pp. 181-200, 2004.
[20] M. Wöllmer, F. Eyben, S. Reiter, B. Schuller, C. Cox, E. Douglas-Cowie, and R. Cowie, "Abandoning Emotion Classes—Towards Continuous Emotion Recognition with Modelling of Long-Range Dependencies," Proc. Interspeech 2008, pp. 597-600, 2008.
[21] J.-C. Martin et al., "Coordinating the Generation of Signs in Multiple Modalities in an Affective Agent," Emotion-Oriented Systems: The Humaine Handbook, P. Petta, C. Pelachaud, and R. Cowie, eds., pp. 349-367, Springer-Verlag, 2011.
[22] http://www.parorobots.com/, 2012.
[23] P. Goldie, The Emotions: A Philosophical Exploration. Clarendon Press, 2000.
[24] F. de Rosis, C. Castelfranchi, P. Goldie, and V. Carofiglio, "Cognitive Evaluations and Intuitive Appraisals: Can Emotion Models Handle Them Both?" Emotion-Oriented Systems: The Humaine Handbook, P. Petta, C. Pelachaud, and R. Cowie, eds., pp. 459-481, Springer-Verlag, 2011.
[25] L.D. Riek, P.C. Paul, and P. Robinson, "When My Robot Smiles at Me: Enabling Human-Robot Rapport via Real-Time Head Gesture Mimicry," J. Multimodal User Interfaces, vol. 3, pp. 99-108, 2010, doi: 10.1007/s12193-009-0028-2.
[26] I. Sneddon, P. Goldie, and P. Petta, "Ethics in Emotion-Oriented Systems: The Challenges for an Ethics Committee," Emotion-Oriented Systems: The Humaine Handbook, P. Petta, C. Pelachaud, and R. Cowie, eds., pp. 756-767, Springer-Verlag, 2011.
[27] S. Döring, P. Goldie, and S. McGuinness, "Principalism: A Method for the Ethics of Emotion-Oriented Machines," Emotion-Oriented Systems: The Humaine Handbook, P. Petta, C. Pelachaud, and R. Cowie, eds., pp. 713-724, Springer-Verlag, 2011.
[28] S. Slater, R. Moreton, and K. Buckley, "Emotional Agents as Software Interfaces," Proc. Workshop Emotion in HCI—Designing for People, pp. 38-43, 2010.
[29] http://www.guardian.co.uk/technology/2010/sep/19/3d-games-xbox-playstation, 2012.
[30] http://en.wikipedia.org/wiki/Milo_and_Kate, 2012.
[31] P. Sundstrom, A. Stahl, and K. Hook, "In Situ Informants Exploring an Emotional Mobile Messaging System in Their Everyday Practice," Int'l J. Human Computer Studies, vol. 65, pp. 388-403, 2007.
[32] K. Höök, "Affective Loop Experiences: Designing for Interactional Embodiment," Philosophical Trans. Royal Soc. B, vol. 364, pp. 3585-3595, 2009.
[33] R.B. Knapp, J. Jaimovich, and N. Coghlan, "Measurement of Motion and Emotion during Musical Performance," Proc. Third IEEE Int'l Conf. Affective Computing and Intelligent Interaction, pp. 735-739, 2009.
[34] M.D. van der Zwaag and J.H.D.M. Westerink, "Deploying Music Characteristics for an Affective Music Player," Proc. Third IEEE Int'l Conf. Affective Computing and Intelligent Interaction, vol. 1, pp. 459-465, 2009.
[35] http://www.cantoche.com/en~AvatarGallery.html, 2012.
[36] S. Kopp and I. Wachsmuth, "Synthesizing Multimodal Utterances for Conversational Agents," J. Computer Animation of Virtual Worlds, vol. 15, pp. 39-52, 2004.
[37] G. Papagiannakis, S. Schertenleib, B. O'Kennedy, M. Arevalo-Poizat, N. Magnenat-Thalmann, A. Stoddart, and D. Thalmann, "Mixing Virtual and Real Scenes in the Site of Ancient Pompeii," J. Computer Animation of Virtual Worlds, vol. 16, pp. 11-24, 2005.
[38] D. Heylen, A. Nijholt, and R. op den Akker, "Affect in Tutoring Dialogues," Applied Artificial Intelligence, vol. 19, pp. 287-311, 2005.
[39] Augustine, City of God, H. Bettenson, trans., Penguin Classics, 1984.
[40] Descartes: Philosophical Writings, a selection translated and edited by E. Anscombe and P.T. Geach, with introduction by A.K. Nelson, 1954.
[41] David Hume: A Treatise of Human Nature, vol. 1, D.F. Norton and F.J. Norton, eds., Oxford Univ. Press, 2007.
[42] H. Damasio, T. Grabowski, R. Frank, A. Galaburda, and A.R. Damasio, "The Return of Phineas Gage: Clues about the Brain from the Skull of a Famous Patient," Science, vol. 264, pp. 1102-1105, 1994.
[43] V. Dulewicz and M. Higgs, "Emotional Intelligence—A Review and Evaluation Study," J. Managerial Psychology, vol. 15, pp. 341-372, 2000.
[44] A.M. Isen, "Positive Affect," Handbook of Cognition and Emotion, T. Dalgleish and M. Power, eds., pp. 521-539, 1999.
[45] J. Decety and W.J. Ickes, The Social Neuroscience of Empathy. MIT Press, 2009.
[46] L. Devillers, L. Vidrascu, and L. Lamel, "Challenges in Real-Life Emotion Annotation and Machine Learning Based Detection," Neural Networks, vol. 18, pp. 407-422, 2005.
[47] J. Cassell, "Modelling Rapport in Embodied Conversational Agents," Proc. Interspeech 2008, pp. 18-19, 2008.
[48] K. Murata, "Laughter for Defusing Tension: Examples from Business Meetings in Japanese and in English," New Frontiers in Artificial Intelligence, pp. 294-305, Springer, 2009.
[49] L. Devillers, I. Vasilescu, and L. Lamel, "Emotion Detection in Task-Oriented Dialog Corpus," Proc. IEEE Int'l Conf. Multimedia and Expo, vol. III, pp. 549-552, 2003.
[50] C. Nass, L. Takayama, and S. Brave, "Socializing Consistency: From Technical Homogeneity to Human Epitome," Advances in Management Information Systems, vol. 6, pp. 373-391, 2006.
[51] O. Turk and M. Schroeder, "Evaluation of Expressive Speech Synthesis with Voice Conversion and Copy Resynthesis Techniques," IEEE Trans. Audio, Speech, and Language Processing, vol. 18, pp. 965-973, 2010.
[52] Keating and Mirus, deaf phones, 2003.
[53] R. Cowie and M. Schroeder, "Piecing Together the Emotion Jigsaw," Machine Learning for Multimodal Interaction, S. Bengio and H. Bourlard, eds., pp. 305-317, Springer-Verlag, 2005.
[54] M. Bulut, S.S. Narayanan, and A.K. Syrdal, "Expressive Speech Synthesis Using a Concatenative Synthesizer," Proc. Int'l Conf. Spoken Language Processing, pp. 1265-1268, 2002.
[55] L.L. Carstensen and J.A. Mikels, "At the Intersection of Emotion and Cognition: Aging and the Positivity Effect," Current Directions in Psychological Science, vol. 14, pp. 117-121, 2005.
[56] A.C.B. Medeiros, N. Crilly, and P.J. Clarkson, "Affective Response to ICT Products in Old Age," Proc. 2008 Int'l Workshop Emotion in Human Computer Interaction—Designing for People, pp. 32-37, 2010.
[57] A.R. Korukonda, "Differences That Do Matter: A Dialectic Analysis of Individual Characteristics and Personality Dimensions Contributing to Computer Anxiety," Computers in Human Behavior, vol. 23, pp. 1921-1942, 2007.
[58] G. Conti-Ramsden, K. Durkin, and A.J. Walker, "Computer Anxiety: A Comparison of Adolescents with and without a History of Specific Language Impairment," Computers and Education, vol. 54, pp. 136-145, 2010.
[59] S. McGilloway, R. Cowie, and E. Douglas-Cowie, "Prosodic Signs of Emotion in Speech: Preliminary Results from a New Technique for Automatic Statistical Analysis," Proc. 13th Int'l Conf. Phonetic Sciences, vol. 1, pp. 250-253, 1995.
[60] A. Trevino, T. Quatieri, and N. Malyska, "Phonologically-Based Biomarkers for Major Depressive Disorder," EURASIP J. Advances in Signal Processing, to appear.
[61] J.F. Cohn, "Advances in Behavioral Science Using Automated Facial Image Analysis and Synthesis," IEEE Signal Processing Magazine, vol. 128, pp. 128-133, 2010.
[62] A.L. Webb, P.N. Carding, I.J. Deary, K. MacKenzie, N. Steen, and J.A. Wilson, "The Reliability of Three Perceptual Evaluation Scales for Dysphonia," European Archives of Oto-Rhino-Laryngology, vol. 261, pp. 429-434, 2004.
[63] http://www.affectiva.com/, 2012.
[64] P. Lucey, J.F. Cohn, S. Lucey, S. Sridharan, and K. Prkachin, "Automatically Detecting Pain Using Facial Actions," Proc. Third IEEE Int'l Conf. Affective Computing and Intelligent Interaction, vol. 1, pp. 12-18, 2009.
[65] A.B. Ashraf, S. Lucey, J.F. Cohn, T. Chen, K.M. Prkachin, and P. Solomon, "The Painful Face: Pain Expression Recognition Using Active Appearance Models," Image and Vision Computing, vol. 27, no. 12, pp. 1788-1796, 2009.
[66] S.-H. Kang and J. Gratch, "Virtual Humans Elicit Socially Anxious Interactants' Verbal Self-Disclosure," Computer Animation of Virtual Worlds, 2010.
[67] M. Mori, "The Uncanny Valley," Energy, vol. 7, pp. 33-35, 1970.
[68] D. Hanson, A. Olney et al., "Upending the Uncanny Valley," Proc. 20th Nat'l Conf. Artificial Intelligence, pp. 1728-1729, 2005.
[69] W. Gaver, "Designing for Emotion (among Other Things)," Philosophical Trans. Royal Soc. B, vol. 364, pp. 3597-3604, 2009.
[70] I. Asimov, The Naked Sun. Collins, 1993.
[71] L. Hall, M. Vala, M. Hall, M. Webster, S. Woods, A. Gordon, and R. Aylett, "FearNot's Appearance: Reflecting Children's Expectations and Perspectives," Proc. Int'l Conf. Intelligent Virtual Agents, pp. 407-419, 2006.
[72] J. Weizenbaum, "ELIZA—A Computer Program for the Study of Natural Language Communication between Man and Machine," Comm. ACM, vol. 9, no. 1, pp. 36-45, 1966.
[73] J. Weizenbaum, Computer Power and Human Reason: From Judgment to Calculation. Freeman & Co., 1976.
[74] C. Breazeal, "Role of Expressive Behaviour for Robots That Learn from People," Philosophical Trans. Royal Soc. B, vol. 364, pp. 3527-3538, 2009.
[75] A. Hiolle, L. Canamero, and A.J. Blanchard, "Learning to Interact with the Caretaker: A Developmental Approach," Affective Computing and Intelligent Interaction, pp. 422-433, Springer-Verlag, 2007.
[76] P. Goldie, S. Döring, and R. Cowie, "The Ethical Distinctiveness of Emotion-Oriented Technology: Four Long-Term Issues," Emotion-Oriented Systems: The Humaine Handbook, P. Petta, C. Pelachaud, and R. Cowie, eds., pp. 725-733, Springer-Verlag, 2011.
[77] Hadfield and Marks, "This Is Your Captain Dozing," New Scientist, vol. 1682, p. 267, 2000.
[78] http://www.telegraph.co.uk/finance/newsbysector/mediatechnologyandtelecoms/digital-media/6539402/Google-to-fund-ASAs-regulation-of-web-advertising.html, 2012.
[79] H. Bauman and S. Doring, "Emotion-Oriented Systems and the Autonomy of Persons," Emotion-Oriented Systems: The Humaine Handbook, P. Petta, C. Pelachaud, and R. Cowie, eds., pp. 735-752, Springer-Verlag, 2011.
[80] D. Levy, Love and Sex with Robots: The Evolution of Human-Robot Relationships. Harper Collins, 2007.
[81] http://www.huffingtonpost.com/2010/01/10/roxxxy-sex-robot-photo-wo_n_417976.html, 2012.
[82] T.L. Beauchamp and J.F. Childress, Principles of Biomedical Ethics, fifth ed. Oxford Univ. Press, 2001.

Roddy Cowie graduated in philosophy and psychology, and his PhD was on machine vision. He is a professor of psychology at Queen's, Belfast. His research has used computational methods to study a range of complex perceptual phenomena—perceiving pictures, the experience of deafness, what speech conveys about the speaker, and, in a series of EC projects, the perception of emotion, where he has developed methods of measuring perceived emotion and inducing emotionally coloured interactions. Key outputs include special editions of Speech Communication (2003) and Neural Networks (2005), and the HUMAINE Handbook on Emotion-Oriented Systems (2011). He is a member of the IEEE.