FIGURE 5 The approach. A diverging/converging process in four phases (Define Challenge, Define Insights, Define Concept, Define End Result), using methods such as desk research, de-structuring, anecdote stories, re-structuring, creative facilitation (workshops/sessions, brainstorming, synthesizing insights), and explorative research (involvement of young and older users, questionnaires, experiments), and producing context insights, a refined direction, a conceptual framework, an interaction vision, written scenarios, storyboards, prototypes, a machine-state diagram, a service concept, video of the concept, video footage of participants, qualitative data, an interaction goal, and concept and methodology recommendations.
This part summarizes learning points and notions from the analysis phase that are relevant to the development of this project. Theory collected mainly from the areas of aging, motivation, and human-robot interaction is used to define the challenge that will be addressed in this project. The byproduct of this phase is a new conceptual framework to tackle the complexity of the defined challenge.
Analyze
Chapter Overview
2.1 Robotics to HRI, a long history
2.2 Human-Robot Interaction towards a classification
2.2.1 Classification Framework
2.2.2 Human-Robot Ratio
2.2.3 Application
2.2.4 Aesthetics
2.2.5 Interaction
2.3 What is a robot, what is an artificial companion, and
what is it made of, after all?
Every interaction designer's ultimate goal is to design products (or services, or interfaces) whose interactions are natural, intuitive, and effortless, so that the user experience is unhindered, smooth, and pleasant. When it comes to humans interacting with robots, that principle can easily backfire. Researchers and engineers have worked for many years, approaching the goal from different perspectives and creating many different robots, only to realize:
there is no such thing as natural interaction in HRI.
People do not perceive robots as products. The more robots look like humans (anthropomorphic), the more people behave toward them as if they were humans. If robots look like animals (zoomorphic), people will treat them as if they were pets. So what is natural in that case, to begin with, and how do you design what the robot needs to be, in order to elicit a natural response back?
In this chapter, an attempt is made to de-structure knowledge that can be used by a designer, resulting in a T-shaped overview: breadth before depth.
2. Robots
De-structuring their nature
2.1 Robotics to HRI, a long history
One could say that robots have two kinds of histories: one that starts in the 20th century, when robot technology was able to bring to life the first industrial robots [W05], and another that starts way back, when people first started thinking of the whole idea of such creatures existing.
Before the 20th century, the notion of a robot is usually closer to that of a servant. There are also other words used for the same notion, such as automaton, android, and golem, which appear to have originated as an idea in the mythologies of ancient Egypt [B13], Greece [B11], and China [B12]. Other artificial, robotic devices (e.g. pets) are also reported [B12]. Robota, etymologically [W07] derived from Czech, means forced labor, compulsory service, drudgery: from an Old Czech source akin to Old Church Slavonic rabota, "servitude", from rabu, "slave", from Old Slavic *orbu-, *orbh-, "pass from one status to another", whence also "orphan". The word robot is also related to the German root arbeit, "work". To this day there still isn't a definition that everyone agrees upon, and the whole notion of what a robot is or does keeps changing; but it has definitely changed much from being a servant.
Leonardo Da Vinci is probably the first to have designed androids and other mechanics for automated machines, in the 15th century [W08].
The word robot first appeared in a play written by Karel Čapek, published in 1920. The play is about a factory that manufactures artificial living beings, simplified people, out of synthetic matter. These mass-produced, mechanical creatures are often mistaken for humans in appearance, but they have no emotions or capability of thinking [W06].
Isaac Asimov is generally credited with the popularization of the term Robotics, in 1942, marking a new field, and with being the first to formulate design guidelines for human-robot interaction (a sub-field that was established much later) with his Laws of Robotics [J11].
Early robotic implementations were just remotely-controlled devices that were not actually automated, but they contributed to the field of robot mechanics. For example, in 1899, Nikola Tesla demonstrated a remotely-controlled boat that had a "borrowed mind". In the patent of this boat, Tesla writes that someday mechanical men will do the work of the human race, this boat being just the first of such a race [J11]. It is reported that the public was not impressed, and press reports refer to the remotely-controlled boat as a result of "mind control" [W09].
Research in artificial intelligence followed the advancements in mechanics. In 1950, Alan Turing published a paper posing the question "can machines think?". He probably thought they could, since he proposed a test to determine whether a machine can think on its own: the machine must convince a human interrogator that it, too, is human (known as the Turing Test) [J12]. Until recently no machine was reported to have passed the test, and scientists are still divided about a case that appeared in 2014 [W10].
In 1954 the first programmable robotic arm was developed. This arm would lead to the first industrial robot, the Unimate, which was able to work on an assembly line of General Motors in 1961 [W09]. Later on, attempts were made to build fully autonomous robots. Perhaps the most famous result of those attempts, in terms of citations, was the first robot controlled by artificial intelligence, Shakey, made at the Stanford Research Institute in 1970. Shakey was able to navigate alone through a block world [W09].
In the mid-80s a breakthrough in robot technology occurred with research into the behavior of robots. Research initially focused on mobility, followed by research into developing lifelike anthropomorphic behaviors, acceptable behaviors, and desirable behaviors [J11].
Human-Robot Interaction (HRI) emerged in the early 1990s and can be described as:
"The field of study dedicated to understanding, designing, and evaluating robotic systems for use by or with humans" [J11].
The main goal of HRI is to:
"Understand and shape the interactions between one or more humans and one or more robots" [J11].
Since the 1990s, the focus has been on determining what a robot's behavior is made of, bringing new notions to the field: social intelligence, emotional behavior, personality [W02], [B01].
FIGURE 6 Indicative Robotics Milestones
800 BC: Homer describes automated walking tripods and golden maids that behave like real people.
1495: Leonardo Da Vinci designs the first android.
1899: Tesla builds a remotely controlled boat. The boat could be commanded to go, stop, turn left and right, turn its lights on, and submerge.
1920: The word robot is introduced to the public by the Czech interwar writer Karel Čapek in his play R.U.R. (Rossum's Universal Robots).
1942: Isaac Asimov publishes the three laws of Robotics, making the word Robotics well known to the public and marking a new field.
1950: Alan Turing proposes a test to determine whether or not a machine has gained the power to think for itself. Since then a contest is held each year to test candidates for the "Turing Test".
1961: A prototype Unimate arm is installed at a General Motors plant and soon becomes the first commercial industrial robot.
1968: Shakey, the first mobile robot with vision and AI. The aptly named robot is an unstable box on wheels that figures out how to get around obstacles.
1980s: R2D2 lookalikes are vacuuming floors and singing songs in Japan, marking an era where designing behavior comes first.
1987: Robocop, a cyborg, brings a dystopian, singularity future for the public to be afraid of, for when robots become technologically possible.
1993: MIT's Rodney A. Brooks starts building Cog, a robot that is being raised and educated like a human.
2001: Paro, an artificial companion intended to elicit emotional responses from its owners, older people with dementia.
(*) Milestones as selected from [J11], [W02]. Creating a complete overview of all milestones in robotics would probably require much more paper; hopefully this one sets the ground for the purposes and direction of this project.
2.2 Human-Robot Interaction: towards a Classification
In an attempt to de-structure the field of HRI so that the research questions can be answered, by gaining an understanding of artificial companions (where do they stand at the moment, what is an artificial companion, and what is a robot in general), it is crucial to map the field. There is no point in making categorizations of robots without a point of reference, and in such a complicated field there are many points that could serve as a reference from which to start classifying robots. From a designer's perspective, a holistic overview of HRI can be roughly mapped as in Figure 7, using information from various sources [W02], [J17]. The scheme will later help in classification.
Human-robot interaction (HRI) differs
significantly from human-computer
interaction (HCI), or human-product
interaction. Although there are
conceptual frameworks on the
components that result in natural
human-product interaction experiences
[B14], and human-computer interaction
experiences [J13], there is little evidence
(yet endless theories and research) on
what influences HRI experiences.
It is controversial to define whether a robot is more of a product or more of a computer, to begin with. There is even a dispute that human-robot interactions simply cannot be natural, and that naturalness is a goal that should be carefully reconsidered, as it has a different meaning when it comes to HRI. People have been reported to react to robots differently according to their looks: if the robots are human-like, people will treat them in a human-like manner and will also expect human-like behavior from them. Hence natural human-robot interaction is actually a reflection of the human-human interaction that is perceived as natural in people's minds [W02]. But what is natural when the robot is not human-like? It is therefore important to define morphology as a possible classifier of HRI; but in order to draw conclusions it is also important to make correlations and classifications of robots that differ in other ways.
On the side of human-product interaction, a modern approach [J14] to what the components of human-product experience are is:
aesthetic pleasure,
attribution of meaning, and
emotional response.
Product experience would thus be defined as "the entire set of affects that is elicited by the interaction between a user and a product, including the degree to which all our senses are gratified (aesthetic experience), the meanings we attach to the product (experience of meaning) and the feelings and emotions that are elicited (emotional experience)" [J15]. A definition of a natural experience would also include usability, as it seems to influence all three other dimensions mentioned [J13].
On the human-computer interaction side, an experience is natural, with flow, when four conditions are met [J13]:
the user perceives a sense of control over the computer interaction,
the user perceives that their attention is focused on the interaction,
the user's curiosity is aroused during the interaction,
the user finds the interaction intrinsically interesting.
FIGURE 7 Fields in HRI. Human-Robot Interaction is a synthetic field, combining the robot's view on the world with humans' view on robots, and drawing on psychology, the arts, HCI, human-product interaction, ethology, design, interaction design, aesthetics, sociology, engineering, computer science, economics, anthropology, and philosophy, with related notions including artificial life, intelligence, machine learning, sociobiology, perception, hardware, software, well-being, emotions, meaning, innovation, distributed AI, social and emotional behavior, logic, physicality, materials, sustainability, market gap/business, technology, and society experiencing HRI. Adapted from [W02], [J17].
Researchers [C03] claim that HRI is
different from HCI in four dimensions,
which can be considered in order to
make classifications for HRI and robots.
Those four dimensions are:
the levels of human interaction,
the necessity of environment
interaction for mobile robots,
the dynamic nature of robots in
their tendencies to develop
hardware problems,
the environment in which
interactions occur.
Attempts to create a classification framework have been made before [W02], [J11], [J16], [J18], [C04], [J19], [J20], but they do not give a holistic view. A new framework will be created here, to serve a designer's view.
2.2.1 Classification Framework
Creating a classification framework can have multiple applications. Firstly, it can help in marking where a robot belongs. Secondly, it can help in evaluating a specific robot against others in the same category or categories. Thirdly, it can help determine the ingredients a robot should have when one designs for a specific category. The last application is the most desired effect here, since it will help in the analysis of artificial companions.
The proposed classification is based on previous frameworks, models, criteria, and components [W02], [J11], [J16], [J18], [C04], [J19], [J20]. Taking Figure 7 into consideration, it should be mentioned that the components sometimes derive from how the robot perceives the world (e.g. application and aesthetics in Figure 8), sometimes from how people perceive the robot, the interaction, and the experience they have with it (e.g. interaction in Figure 8), and sometimes a meta-level of reflection is needed, observing such relations as a third party (the second-level areas that belong to interaction according to Figure 8, as seen in Figure 9). The ratio of humans to robots can be considered an external factor that affects all of the above.
As with everything, there are three challenges to take into account:
Complexity: taxonomies keep being updated and evolved all the time.
Overlap: factors may belong to more than one category, or affect/create other factors.
Subjectivity: it is almost everywhere, making things more difficult. Some factors can be answered logically; on others people might disagree, as preferences and tastes differ too.
FIGURE 8 Areas of Classification: (1) Application, (2) Aesthetics, (3) Interaction, (4) H-R ratio. These areas do not give independent factors: the H-R ratio (4) affects all three others (1, 2, 3); form and function follow interaction (3 to 1); form follows function (2 to 1).
The proposed framework (see Figure 9) has many possible layers. Some classifiers are primary, and it would be meaningful to define them for most kinds of robots. Other classifiers can be treated as secondary, or even tertiary, and shed more light on more subjective matters, like experience. Moving from Application to Interaction, the classifiers go from objective to subjective (see Figures 7, 8). The more objective a classifier is, the more it has to do with how the robot sees the world; the more subjective, the more it has to do with what people project upon a robot, or with what such a relation evokes at a meta-level of thinking.
FIGURE 9 Overall view of classifiers for HRI. Factors derive from the four main areas, (1) Application, (2) Aesthetics, (3) Interaction, (4) H-R ratio: human-robot numbers, composition of teams, and level of shared interaction among teams; environment, task type, and task criticality; skills (processing), intelligence (spatial, intra-personal, linguistic, kinesthetic, interpersonal, existential, logical/mathematical, musical, naturalist), and autonomy; morphology, likeability, awareness (sensors/input), responsiveness (communication means), modality (output), size, behavior, material, and type; role, proximity, type of proximity, perceived-as-what, and acceptability (?).
None of the above really matters if the robot fails to be accepted, but all of it plays a role.
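As an aside, and purely as an illustration (the field names and example instances below are assumptions for the sketch, not part of the framework itself), the four areas and their classifiers could be captured in a small data model:

```python
from dataclasses import dataclass
from enum import Enum

class Area(Enum):
    # The four main classification areas (Figures 8 and 9).
    APPLICATION = 1
    AESTHETICS = 2
    INTERACTION = 3
    HR_RATIO = 4

@dataclass
class Classifier:
    """One axis of the framework, e.g. morphology or task criticality."""
    name: str
    area: Area
    subjective: bool  # classifiers shift from objective (Application)
                      # toward subjective (Interaction)

@dataclass
class RobotProfile:
    """A robot positioned on a subset of the classifier axes."""
    robot: str
    values: dict  # classifier name -> position on its axis

# Illustrative instances, echoing classifiers named in Figure 9:
morphology = Classifier("morphology", Area.AESTHETICS, subjective=False)
acceptability = Classifier("acceptability", Area.INTERACTION, subjective=True)
paro = RobotProfile("Paro", {"morphology": "zoomorphic",
                             "task type": "companion"})
```

Such a model would mainly serve the third application mentioned above: listing the ingredients to be defined when designing for a specific category.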
2.2.2 Human-Robot Ratio
The ratio of people to robots directly affects the human-robot interaction, but as a classifier it simply states the numbers of each, without indicating how this affects the interaction [J16]. The human-robot ratio axis is not one-dimensional (although it can be simplified; see Figure 10).
FIGURE 10 Human-Robot Ratio robot classifier. Examples of classifying robots according to H-R ratio; the axis, from (number of) robots to (number of) humans, ranges over N to 1, N to some, few to few, few to 1, 1 to 1, and N to N, and could also be represented as a two-axis robot-human plot to include more cases. Swarm robots are many, and they are tiny: inspired to bio-imitate behaviors (e.g. how insects behave), they have collective intelligence (like flocks do) and can serve various purposes, such as assembling into one. A multi-participant receptionist system can distinguish the people it is currently interacting with from those currently waiting; receptionist robotic cameras can be one or more than one.
FIGURE Task type classifier. The task type axis ranges over Manipulators, Adaptable, Mobile, Transport, Telepresence, Assistive, Educational, Toys, and Companions, i.e. robots that manipulate, navigate, transport, assist, educate, entertain, or keep company. Examples: manipulators are usually robotic arms for specific tasks in factory spaces; Baxter, of the new generation of industrial robots, can adapt to the environment and learn; some robots, like NAO, are able to navigate around; besides being able to navigate independently, the Google car can also drive you somewhere; telepresence robots help (or mediate) communication; Riba is an assistive robot, helping people (who have fallen down or cannot move) by lifting them; Kaspar teaches autistic children social skills with repetitive movements; Furby is one of the oldest and most famous robotic toys for children; the teddy bear is designed just to keep people company.
The task criticality classifier
Task criticality indicates how important completion of the task is. It is, however, highly subjective, since criticality cannot be directly measured. For some task types it is clearer: in military environments, for example, it is important that the robot tracks mines in the ground. For other task types it gets vague: how critical is it if a Furby fails to keep a child's interest? To deal with this issue, it has been proposed that criticality be considered high when failure affects the life of a human [J16].
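The proposed rule collapses the subjective axis into a single test. As an illustrative reading of [J16] (not a formal definition from that source), it amounts to a one-line predicate:

```python
def task_criticality(failure_affects_human_life: bool) -> str:
    # Reading of the rule proposed in [J16]: criticality is "high"
    # only when failure of the task affects the life of a human;
    # everything else counts as "low".
    return "high" if failure_affects_human_life else "low"

# Hypothetical examples echoing the text:
print(task_criticality(True))   # a mine-tracking robot failing
print(task_criticality(False))  # a Furby failing to entertain
```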
FIGURE 14 Task criticality classifier. Examples of robots that have different criticality for their task completion; the axis ranges from Low (left) to High (right). Robotic toys, for example, probably won't hurt anyone if they fail to be entertaining enough.
FIGURE Autonomy examples. (Older) industrial robots are usually pre-programmed; adjustable robots such as Baxter, of the new generation of industrial robots, can adapt by performing calculations; CB2 learns and grows up like a child, mimicking a child's actions. There is no existing robot that is fully dynamic, proactive, and contextually aware.
FIGURE Morphology classifier. The morphology axis ranges over Abstract, Objectified, Caricatured, Biomorphic, Zoomorphic, and Anthropomorphic forms, with ambient and embedded variants. Examples: ambient intelligence and agents are used in many fields; Nest, for example, can learn and control a house's temperature. Romo is a robot embedded in an iPhone with some extra embodiment equipment. Giraff looks the way it does because it has to have a screen and a camera. Exoskeletons are close to functional morphology, in order to help people walk. Boxie doesn't exactly resemble any known animal, but it could have been one. Species of Illumination is a series of robots that look more like lamps than animals, yet exhibit a pet behavior. Philips' iCat is a robotic cat, apparently. Zoe is a virtual assistive avatar that can express emotions. Hiroshi Ishiguro's Geminoid HI-4 is an android that looks like Ishiguro himself.
The size classifier
As long as it has a physical form, a robot, like everything, has a size. A possible axis could range from small to big, but this is another classifier best attributed in relation to something else. It is nevertheless quite an important factor for specific types of robots, and the same form in different sizes is not the same. Moreover, selecting a specific size for a robot might enable or disable some interactions, or trigger a different feeling in the human confronting it (if it is too big, for example, it might seem threatening).
The material classifier
Analyzing and selecting materials for products is a field of its own: cheap or expensive, soft or tough, glossy or matte; many properties to define. Yet materials affect the overall impression you get of a robot: it is an altogether different encounter to meet a robot made of metal than one made of cardboard. Roughly put, this classifier will not be represented in full detail, though later on there will be a selection of material to work with (for prototyping reasons). Last but not least, the material of the robot also has an effect on its price, making it affordable to end users (or not).
FIGURE 19 The size classifier. Examples of robots that have different sizes; the axis ranges over Tiny, Small, Medium, Big, and Huge.
FIGURE Material examples. Mamoru, a robot that reminds the elderly to take their pills, is made of the usual plastic; robots can also be made from scrap materials and Arduinos.
The behavior classifier
How a robot behaves, and how it should, is probably the hottest question in HRI research of the last decade [B01], [W02], [J11], [B17], [J08]. Initially robots were not meant to interact with people at all; they were supposed to work fully autonomously in some industrial setup. Since they began to be more and more around people, behavior has become a classifier that is potentially inter-related with everything (appearance, first impressions, expectation breakdown, personality, consistency), and it highly affects how people perceive a robot and whether they will accept it after all [J08]. Behavior can be analyzed in many ways, for example by whether it seems predetermined, appropriate, intentional, competent enough, real-time, animalistic, human-like, etc. [J17], [J26]. Roughly, all of the above could be collapsed into a deterministic vs. stochastic behavior scale: the more a behavior seems stochastic (not repetitive, with intrinsic triggers), the more the robot will be perceived as alive [B17].
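To make the distinction concrete, the two ends of the scale can be sketched as behavior policies. This is an illustrative sketch only (the stimuli, responses, and trigger probability below are invented for the example, not taken from the literature): a deterministic policy always maps the same trigger to the same response, while a stochastic one samples its response and may also act on an intrinsic trigger, which is what reads as "a mind of its own".

```python
import random

# Deterministic policy: the same stimulus always yields the same response,
# so the behavior is controlled, repetitive, and predictable.
DETERMINISTIC_RULES = {"touch": "purr", "voice": "turn_head"}

def deterministic_behavior(stimulus):
    return DETERMINISTIC_RULES.get(stimulus, "idle")

# Stochastic policy: responses are sampled, and the robot occasionally acts
# spontaneously on an intrinsic trigger, independent of any input.
STOCHASTIC_RULES = {
    "touch": ["purr", "wiggle", "look_up"],
    "voice": ["turn_head", "blink", "approach"],
}

def stochastic_behavior(stimulus, rng=random):
    if rng.random() < 0.1:  # intrinsic trigger fires ~10% of the time
        return rng.choice(["stretch", "yawn"])
    return rng.choice(STOCHASTIC_RULES.get(stimulus, ["idle"]))
```

Repeatedly touching the deterministic robot always produces "purr"; the stochastic robot varies its response and sometimes stretches or yawns unprompted, shifting the perception toward the "alive" end of the scale.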
FIGURE 21 The behavior classifier. Examples of robots that are perceived as having different behavioral freedom, a mind of their own; the axis ranges from Deterministic (left: controlled, repetitive, predictable) to Stochastic (right: a mind of its own).