
COMPUTATIONAL MODELS AND ANALYSES OF
HUMAN MOTOR PERFORMANCE IN HAPTIC MANIPULATION

by

MICHAEL J. FU

Submitted in partial fulfillment of the
requirements for the degree of
Doctor of Philosophy

Department of Electrical Engineering and Computer Science

CASE WESTERN RESERVE UNIVERSITY

May 2011

CASE WESTERN RESERVE UNIVERSITY
SCHOOL OF GRADUATE STUDIES

We hereby approve the thesis/dissertation of

Michael John Fu

candidate for the Doctor of Philosophy degree*.

(signed) Prof. M. Cenk Cavusoglu (chair of the committee)
         Prof. Wyatt S. Newman
         Prof. Kenneth A. Loparo
         Prof. Wei Lin
         Prof. Roger D. Quinn

(date) March 31, 2011

*We also certify that written approval has been obtained for any
proprietary material contained therein.

© Copyright 2011
by Michael John Fu
All rights reserved

Contents

List of Tables

List of Figures

Acknowledgements

Abstract

1 Introduction
  1.1 What is Haptics?
  1.2 Contributions

2 Background
  2.1 Virtual Environment Immersion Techniques
      2.1.1 Fish Tank Display
  2.2 Effect of Immersion on Task Performance
      2.2.1 Stereographic Rendering
      2.2.2 Physical vs Virtual Tasks
      2.2.3 Visual and Haptic Workspace Co-location
      2.2.4 Effect of Visual Rotations on Task Performance
  2.3 Fitts' Law
      2.3.1 Comparing Experimental Conditions: Throughput
      2.3.2 3D Extensions
  2.4 Human Operator Models

3 Arm-and-Hand Dynamics and Variability Modeling
  3.1 Methods
      3.1.1 Input Signals Used in the Human Experiment
      3.1.2 Subjects
      3.1.3 Equipment
      3.1.4 Arm Model Experiment Paradigm
      3.1.5 Arm Dynamics Model Structure
      3.1.6 Structured Variability
      3.1.7 Unstructured Variability Model
  3.2 Measured-Dynamics Model Results
      3.2.1 Arm Dynamics Model Identification Results
      3.2.2 Variability Results
  3.3 Discussion
      3.3.1 Comparison with Previous Arm Model Parameters
      3.3.2 Grip-Force-Dependent Models
      3.3.3 Structured Variability
      3.3.4 Unstructured Variability

4 Arm Model ID Without Force Transducers
  4.1 Methods
      4.1.1 Phantom and Arm Dynamics Models Structure
      4.1.2 Structured Variability
      4.1.3 Unstructured Variability Model
  4.2 Derivation of Arm-Only Experimental Frequency Response
      4.2.1 Subjects
      4.2.2 Arm Model Experiment Paradigm
  4.3 Measured-Dynamics Model Results
      4.3.1 Arm Dynamics Model Identification Results
      4.3.2 Variability Results
  4.4 Discussion
      4.4.1 Compared to Results Using Force Sensors

5 Evaluation of 3D Fitts' Task in Physical and Virtual Environments
      5.0.2 Study Objectives
  5.1 Performance Measures for Analysis
      5.1.1 Throughput
      5.1.2 End-point Error
      5.1.3 Number of Corrective Movements
      5.1.4 Efficiency
      5.1.5 Initial Movement Error
      5.1.6 Peak Velocity
      5.1.7 Accounting for Effect of ID on Performance Measures
  5.2 Methods
      5.2.1 Equipment
      5.2.2 Subjects
      5.2.3 Experiment Paradigms
  5.3 Results
      5.3.1 Throughput
      5.3.2 End-Point Error
      5.3.3 Number of Corrective Movements
      5.3.4 Efficiency
      5.3.5 Peak Velocity
      5.3.6 Initial Movement Error
  5.4 Discussion
      5.4.1 Real vs. Non-colocated VE vs. Co-located VE
      5.4.2 Effect of Visual Rotations
      5.4.3 VE System Design Implications

6 Conclusions
  6.1 Arm-and-hand Dynamics Modeling
  6.2 Reaching in Virtual Environments
  6.3 Future Research Problems

Appendices

A Arm Model Derivation

B End-effector Inertia for the Phantom Premium 1.5a

Related Publications

Bibliography

List of Tables

3.1 Arm Structure Parameters - Grip Force Dependent Models
3.2 Nominal Arm Model Parameters
3.3 Structured Variability - Arm Structure Parameter Statistics
3.4 Unstructured Variability Model Poles and Zeroes
3.5 Arm Model Parameters from Literature
4.1 No F/T Sensor: Grip Force Dependent Arm Model Parameters
4.2 No F/T Sensor: Structured Variability Statistics
4.3 No F/T Sensor: Nominal Arm Model Parameters
4.4 No F/T Sensor: Unstructured Variability Model Parameters
5.1 Target List
5.2 Significant Multiple Comparisons - Throughput
5.3 Significant Multiple Comparisons - End-Point Error
5.4 Significant Means Comparisons - Corrective Movement Offsets
5.5 Significant Means Comparisons - Corrective Movement Slopes
5.6 Significant Means Comparisons - Efficiency
5.7 Significant Means Comparisons - Peak Velocity Offset
5.8 Significant Means Comparisons - Peak Velocity Slope
5.9 Significant Means Comparisons - Initial Movement Error
5.10 Performance Means and (Std. Dev.) Normalized to Real and 0°

List of Figures

1.1 Haptic Interface Devices
2.1 Fish Tank Display Setup
2.2 Throughput Example
3.1 System Identification Experiment Arm Configuration
3.2 System Identification Graphical User Interface
3.3 F/T Sensor Arm Modeling: Arm Model Free Body Diagram
3.4 F/T Sensor Arm Modeling: Coherence
3.5 F/T Sensor Arm Modeling: Bode Plots for the Grip-force Dependent Arm Dynamics Models
3.6 F/T Sensor Nominal Arm Model and Unstructured Variability Fits
3.7 F/T Sensor Arm Modeling: Unstructured Variability Models
3.8 F/T Sensor Arm Modeling: Comparison of Arm Models
4.1 Closed Loop Arm Model Block Diagram
4.2 Arm Model Free Body Diagram
4.3 Bode Plots for the Grip-force Dependent Arm Plus Phantom Dynamics Models
4.4 No F/T Sensor: Nominal Arm-Only Models
4.5 No F/T Sensor: Unstructured Variability Fits
4.6 No F/T Sensor Arm Modeling: Comparison with F/T Models
5.1 Fish Tank Display Setup
5.2 All Physical Targets
5.3 Fish Tank Display Setup
5.4 Fish Tank User Position
5.5 Experiment Setup for Physical Targets
5.6 Fish Tank Display Non-colocated Setup
5.7 Virtual User Interface (0°-315° rotation)
5.8 Fish Tank: Throughput Linear Regression
5.9 Fish Tank: Throughput Linear Regression R² Histogram
5.10 Fish Tank: Throughput Boxplots
5.11 Fish Tank: End-point Error Boxplots
5.12 Fish Tank: Corrective Movements Linear Regression
5.13 Fish Tank: Corrective Movements Linear Regression R² Histogram
5.14 Fish Tank: Corrective Movements Boxplots
5.15 Fish Tank: Efficiency Boxplots
5.16 Fish Tank: Peak Velocity Linear Regression
5.17 Fish Tank: Peak Velocity Linear Regression R² Histogram
5.18 Fish Tank: Peak Velocity Boxplots
5.19 Fish Tank: Initial Movement Error Boxplots

Acknowledgements

Thank you, Prof. M. Cenk Çavuşoğlu, for the opportunity of being part of such a great lab. Your steadfastness and encouragement have made all the difference in my career as a graduate student. I don't know how many times I've walked into your office in despair, but left with hope.

Teşekkür ederim, Drs. Özkan and Ebru Bebek, for setting a standard in my life as colleagues and friends.

A special thanks to Andrew D. Hershberger, Kumiko Sano, Fang Zhou, and Justin Lee for their significant contribution as undergraduate researchers.

Dr. John Erhlinger and Prof. Gregory S. Lee, thank you both for the enlightening discussions regarding statistical analysis, which have been put into use in this dissertation.

Thank you, Elizabethanne Fuller-Murray, for always treating me like I was the most important student ever to walk into your office.

To my wife, thank you for being my help in every way through this season.

And thank you, my parents, for believing in me.

This work was accomplished with the generous support of Case Western Reserve University and the National Science Foundation grants CNS-0423253, IIS-0805495, and IIS-0905344.


Computational Models and Analyses of
Human Motor Performance in Haptic Manipulation

Abstract

by

MICHAEL JOHN FU

Haptic interaction refers to interactivity with an environment based on the sense of touch. Haptics is a critical mode of human interface with real or virtual environments, as it is the only active form of perception. All other senses are passive and cannot directly act upon an environment.

Haptic interface devices connect users to real or virtual environments through the modality of touch and associated sensory feedback. As the user interacts with environments through the haptic system, it alters the user's perception and motor control, which can affect task performance. Therefore, understanding a haptic system's effects on the sensory-motor system and the implications of these interactions on task performance is important for the design of effective haptic interface systems.

This dissertation focused on characterization, modeling, and analysis of human motor performance in the context of stylus-based haptic interface devices. The current work combined human psychophysics experiments with analysis methods from system theory to model and study several aspects of human haptic interaction.

The first contribution of this work was the identification of 3D linear dynamics and variability models for the arm and hand configured in a stylus grip. The literature contains many human arm dynamics models, but lacks detailed associated variability analyses. Without them, variability is modeled in a very conservative manner, leading to less than optimal controller and system designs. The current work not only presented models for human arm dynamics, but also developed inter- and intra-subject variability models from human experiments.

The second contribution of this work was the analysis of a 3D point-to-point Fitts' reaching task in both real and virtual environments in order to determine the effect of visual field and haptic workspace co-location on task performance. A key finding was the significant decrease observed in end-point error for tasks performed in a co-located virtual environment. The results also confirmed cyclic performance degradations due to rotational visuo-haptic misalignments for a wide variety of task difficulties.

These findings expanded important understanding regarding the behavior of the human operator, which is arguably the most variable element in any haptic interface system.

Chapter 1

Introduction

Human-computer interaction is a rapidly expanding field, due in great part to increasingly important roles that computer systems play in our everyday lives. As the accessibility and ubiquity of computers increase, so does the complexity of our interactions with them. Thanks in large part to smartphones, the general public is now familiar with touch screen and gesture-based interfaces, but is perhaps not yet aware of the more general field of haptic interface systems.

1.1 What is Haptics?

The word haptics originates from haptikos, the Greek word for touch. Haptic interaction refers to the form of human interaction with a real or virtual environment based on the sense of touch. The sense of touch is important because it is the only active sensation mechanism that humans have for exploring or experiencing an environment. All of our other senses (such as olfactory, vision, and auditory) differ in that they are essentially passive and cannot directly act upon the physical environment.

A haptic interface device is a robotic or other electro-mechanical apparatus capable of connecting users to real or virtual environments through the modality of touch and the associated sensory feedback. As the user experiences the environment and interacts with it through the haptic system, the system directly or indirectly alters the user's perception and motor control, which can affect a user's task execution performance. Therefore, understanding a haptic system's effects on the sensory-motor system and the implications of these interactions on task performance is important for the design of effective haptic systems.

Figure 1.1: a) Sensable Technologies, Inc. Phantom Premium 1.5 haptic device and b) Phantom Omni.

A haptic interface system connects the user (referred to as the human operator) with a real or virtual environment through the modalities of touch and vision. Touch is provided via one or more haptic interface devices (Fig. 1.1) for measuring and transmitting sensory information and control commands, and associated algorithms. The algorithms typically include those for control of the interface devices, signal processing, data transmission, and haptic feedback generation.

The bi-directional nature of the exchange of haptic information with the environment, and the fact that the information exchange also involves substantial physical energy exchange, make haptic interfacing a challenging research problem. Since human beings are arguably the most variable element of a haptic interface system, an important area of haptics research is the precise modeling of human operator dynamics and understanding the implications of virtual environments' realization modalities on task performance.

This thesis focused on the characterization, modeling, and analysis of human motor performance in the context of stylus-based haptic interface devices. The current work combined human psychophysics methodologies with analysis methods from system and control theory to model and study several aspects of human haptic interaction.

1.2 Contributions

The two major contributions of this dissertation were: i) Identification of linear dynamics and variability models for the arm and hand configured in a stylus grip; ii) Quantitative analysis of 3D point-to-point reaching performance in both real and virtual environments with a stylus-based haptic interface device.

This work expands important understanding regarding the behavior of the human operator. The identified models and performance measures are designed to be easily integrated into the design cycle of haptic systems and facilitate quantitative analysis of design choices throughout system development.

In this thesis, Chapter 2 covers topics in haptics and human-computer interface research that are related to the current work. Chapter 3 contains the methods, results, and discussions for the system identification of a 3D 5-parameter human arm dynamics model along with comprehensive variability analyses for a stabilization task. In Chapter 4, the same human arm model system identification was performed for the case when it is not possible or practical to employ a force sensor in the measurements. Chapter 5 describes the methods, results, and analyses for a study of manual performance of a 3D reaching task in physical and virtual environments. Finally, this thesis concludes with several lessons gathered from the results of the three studies.

Chapter 2

Background

As the focus of this thesis is on the precise modeling of human operator dynamics and understanding the implications of virtual environments (VEs) on task performance, this chapter will cover topics relevant to haptic interaction and human performance in VEs.

2.1 Virtual Environment Immersion Techniques

Precise and realistic visualization is an important component of virtual environment simulations. Proper visualization can enhance the feeling of presence and the quality of the immersive experience. Several types of visualization modalities exist, and each has its advantages and drawbacks. Common VE visualization methods include standard video displays (ranging from computer monitors to large wall-sized displays) and head-mounted displays (HMDs). Head-mounted displays are able to track head movements and give the illusion of a limitless virtual visual space, but suffer from low image resolution capabilities and a propensity to cause strain on the user's eyes. Standard video displays have the benefit of high-resolution capabilities, but because the screen does not move with the user's gaze, they reduce the feeling of presence, or immersiveness.

2.1.1 Fish Tank Display

The fish tank display is another modality that was designed to balance the immersive qualities of HMDs with the visual fidelity of standard displays. Several variations exist, but fish tank displays typically employ a fixed display device (computer monitor) viewed either directly or indirectly using a mirror [1] (Fig. 2.1). Fish tank displays are advantageous over HMDs because monitors with larger screen sizes can be used, which allows for higher-resolution images and a greater range of depth that can be simulated using stereoscopic rendering. In order to avoid straining the eyes of the human operator, the magnitude of binocular disparity (defined as the distance between the rendered left- and right-eye images), and hence the magnitude of simulated depth, is limited by the distance between the eye and the image plane of the monitor. The farther the image plane is from the eye, the greater the range of simulated depths that avoid eye strain for the human operator.

Figure 2.1: Example fish tank display setup.

Another advantage of a fish tank display is the ability to easily co-locate, or align, the visual and haptic workspaces by placing a haptic interface device behind the image plane, as shown in Fig. 2.1. In this way, the hands appear to be operating on the environment similar to typical hand-eye interactions.

Early versions of fish tank displays used semi-transparent mirrors. However, this reduces depth perception because it interferes with occlusion cues. Occlusion cues, such as when a nearer object obstructs the view of a more distant one, cannot be properly rendered when the hand can be seen behind the virtual image plane at all times. Therefore, current fish tank setups use full mirrors, which prevent the operator from seeing and being distracted by the hand and haptic device located behind the mirror.

2.2 Effect of Immersion on Task Performance

Many perceptual factors can influence the quality of a VE visualization modality. However, the ones explored in this dissertation are more related to vision and co-location effects, so these topics are discussed below.

2.2.1 Stereographic Rendering

Stereographic rendering is implemented in visual displays to provide depth perception through the use of binocular disparity. Binocular disparity refers to the difference in the location of an object as perceived by the left and right eye. Objects farther away will appear to be located in the same location to both eyes, while closer objects appear to be located more to the left for the right eye and more to the right for the left eye.

It is important to note that binocular disparity is only one of several visual cues used by human proprioception to judge depth. Others include occlusion (the visual blocking of more distant objects by closer ones), relative motion (when the field of vision is moving, closer objects appear to move faster than more distant ones), accommodation (sensory cues provided by the eyes' need to focus on objects at different distances), perspective distortions (closer objects appear larger than distant ones of the same size), and the effect of lighting and shadows on 3D objects.

It has been well established that the inclusion of stereographic depth rendering has a positive effect on task performance measures for both co-located and non-colocated virtual environments. Arthur et al. studied the Sollenberger-Milgram tree tracing task and reported a 50% decrease in error when a stereographic display was used compared to a conventional computer display [1]. For a 3D pointing task using virtual spherical targets, [2] reported that introducing stereographic display reduced completion times. This was confirmed by [3], which reported that introducing stereographic display decreased completion times by 33%, and introducing head-tracking improved completion times by 11%, for a 3D tapping task performed on the circular tops of virtual cylinders by 19 subjects. Kim and Tendick, in separate studies, also reported that stereoscopic visualization significantly decreased task completion times (up to 2x) compared to monoscopic visualization in pick-and-place tasks with both laparoscopic surgeons and test subjects inexperienced with robot teleoperation [4, 5]. Therefore, it is generally held that stereographic displays decrease completion time and end-point error for reaching tasks and are a necessary element in human-computer interfaces.
Implementation Notes

It is important to note that the use of physiological interocular distances in the implementation of stereographic displays is not necessarily recommended. While it may seem intuitive to use the actual eye-separation distances, Rosenberg found that depth perception accuracy for a 3D depth perception test did not improve for interocular distances greater than 3 cm (compared to an average human interocular distance of 6.3 cm). The subjects were seated 80 cm away from a stereographic display and had to move a virtual peg to match the 3D location of a virtual target peg. He reasons that the eyes may not be able to fuse binocular image pairs of greater binocular disparity while they are also trying to focus on the image plane. This contradiction between focal depth and perceived depth can cause eye strain and user discomfort. Therefore, in order to prevent eye fatigue, he recommends using interocular distances significantly less than physiological values.

Another important method for reducing eye strain is the use of asymmetric frustums for setting the left and right eye perspectives. Asymmetric frustums allow for the simulation of more pleasing stereo images that make use of positive, zero, and negative parallax to simulate depth. Images at positive parallax appear to be located behind the display surface, images at zero parallax appear to rest on the display surface, and images at negative parallax appear to extend in front of the display surface. Binocular image pairs with positive parallax appear on the same side of center as the eye they are intended for, while negative parallax images appear on the opposite side of center to the intended eye (similar to how a finger held close to the nose appears). Images at zero parallax will appear to be the most in focus to the eyes and cause the least discomfort over long viewing periods. Therefore, important fixation points of the image are recommended to be at zero parallax. Also, negative parallax (images extending out in front of the display surface) causes more eye strain than positive parallax, so its use should be limited. Implementation details for this technique are described in [6].
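As a concrete illustration of the off-axis (asymmetric frustum) idea, the Python sketch below computes per-eye frustum boundaries for a screen placed at the zero-parallax distance. It is an illustrative sketch only, not the implementation used in this work or in [6]; the function name, the screen dimensions, and the reduced 3 cm eye separation in the example are assumptions.

# Illustrative sketch (not the thesis implementation): off-axis frustum bounds
# for stereo rendering with the screen plane at zero parallax.

def asymmetric_frustum(eye_x, screen_width, screen_height, screen_dist, near, far):
    """Return (left, right, bottom, top, near, far) for an eye offset eye_x
    (meters, +x toward the viewer's right) so that the frustum passes through
    the edges of a screen centered on the viewing axis at distance screen_dist."""
    scale = near / screen_dist                 # project screen edges onto the near plane
    left = (-screen_width / 2.0 - eye_x) * scale
    right = (screen_width / 2.0 - eye_x) * scale
    bottom = -screen_height / 2.0 * scale
    top = screen_height / 2.0 * scale
    return left, right, bottom, top, near, far

# Example: a 0.47 m x 0.30 m screen 0.60 m away, 3 cm effective eye separation
# (placeholder values; Rosenberg's result motivates the reduced separation).
sep = 0.03
for label, eye_x in (("left eye", -sep / 2.0), ("right eye", +sep / 2.0)):
    print(label, asymmetric_frustum(eye_x, 0.47, 0.30, 0.60, near=0.1, far=10.0))
# Each tuple could be handed to an OpenGL-style glFrustum call, with the modelview
# translated by -eye_x so the eye sits at the assumed offset.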

2.2.2 Physical vs Virtual Tasks

Contrasts between pointing and reaching under physical and virtual conditions have consistently shown significant differences, with the physical conditions reflecting improved task performance of 1.5-3x for completion time. The cause for this phenomenon is commonly attributed to impaired depth perception in VEs (even with stereographic visualization, haptic feedback, and accurate visual-haptic workspace co-location) since not all depth cues can be reproduced. However, it is possible that cognitive factors are also at play.

Graham and MacKenzie compared physical and virtual 2D Fitts' tasks with flat circular targets and found that mean movement time was significantly less (approximately 1.5 times higher task completion rate) for the physical condition [7]. Although not a significant difference, peak velocity was also higher for physical reaching than for virtual reaching.

Mason et al. also studied a 2D planar reaching task, but used a co-located fish tank display setup with virtual blocks compared against physical blocks with augmented reality images projected onto them [8]. Mean movement times were approximately 1.5x higher for the virtual blocks than the augmented reality blocks.

Sprague et al. studied 24 subjects performing a 2D Fitts' task in both physical and co-located virtual environments [9]. They reported that the task completion rate for the real environment was approximately twice that of the virtual environment.

A larger difference between real and virtual tasks was reported in [10], but for a more complicated peg-in-hole assembly task. It was reported that completion times for the virtual peg-in-hole task (stereographic, but not co-located) were increased over the physical task by approximately 3 times.

Blackmon et al. tested a whole-arm target reaching task along with its immersive and non-immersive virtual counterparts [11]. Compared to the physical task, the non-immersive virtual task required 2.4x longer completion time, 4.5x more corrective movements (resulting in jerkier, less smooth motion), and yielded 0.67x lower peak velocity. Performance for the immersive virtual task was even worse, with 5.6x longer completion times, 11x jerkier motion, and 0.78x lower peak velocity. The authors noted that the poor performance in the immersive condition could be attributed to the subject needing to search through the virtual environment for the target during every trial.

2.2.3 Visual and Haptic Workspace Co-location

Given the ease with which humans operate a keyboard and mouse in a typical, non-colocated computer display setup, many studies have investigated the value of co-locating the visual and haptic workspaces. However, it is not clear if visuo-haptic co-location has a significant effect on task performance. For example, Teather et al. tested 12 subjects on a 3D Fitts' task with spherical targets and reported that co-location resulted in lower mean completion times and target error, but the differences were not statistically significant [12]. Also, [9] compared three different cases of visual scaling (calibrated, small distance offset, and larger distance offset between the subject and the virtual board) for a virtual 2D Fitts' tapping task using a head-mounted display and found no significant difference in the task completion rates.

In contrast, Mine et al. reported a significantly higher task completion rate for co-located conditions [13]. Using head-mounted displays, 18 subjects were tested using a virtual 3D reaching task where the goal was to match the object in the subject's hand to an identical one located in a virtual environment. It was reported that completion times decreased when the object being manipulated was co-located with the virtual representation of the hand versus when the object was at a fixed offset from the hand. Also, Swapp et al. reported that a co-located setup significantly improved performance metrics for six subjects performing three types of virtual tasks: 3D reaching between fixed blocks, 3D maze navigation, and juggling of falling objects [14]. Three arbitrary levels of difficulty were tested for each task, and co-location involved physically aligning the haptic input device in front of a computer display.

A possible reason for the inconsistency is that humans can adapt to small misalignments between the visual and haptic workspaces for physical pointing tasks, but the level of adaptation is sensitive to task complexity and the amount of practice [15]. This phenomenon is known as prism adaptation, demonstrated by [16]. Held found that within minutes (usually less than 30), subjects looking through optical prisms, which offset their vision by several degrees, were able to adapt their motor control to compensate for the shift and perform pointing tasks with accuracy similar to their normal, sans-prism performance. Similar hand displacement adaptation effects were tested and reported to exist in virtual environments by [17]. Using a virtual object docking task, it was reported that no significant difference existed in completion times and error rates between when there was no visual dislocation between the virtual and physical hand positions versus when a constant displacement was present.

2.2.4 Effect of Visual Rotations on Task Performance

In addition to the translational dislocations in vision described above, rotational dislocations can occur if a camera or virtual display provides a viewpoint that does not match the human operator's. Results in the literature consistently report that performance measures such as completion times and error increase to a maximum when the azimuth (perpendicular to the ground) rotational difference between the visual and haptic workspaces is 90°. Findings also consistently indicated that the effect of visual misalignment may be symmetric about 0°. Also, studies that tested the 180° offset condition reported slightly improved performance at this condition versus 90° and 270°. However, the absence of rotation undoubtedly facilitated the best task performance.

Bernotat was one of the earliest to investigate the effect of rotational misalignment between the visual and haptic workspaces in VEs [18]. His experiments tested 30 soldiers' performance of a joystick-controlled virtual 2D targeting task. The task was to drive a cursor from a starting position to a target position, but under several experimental conditions where the visual display was rotated from 0° to 360° in 45° increments. Bernotat reported that errors were greatest for the 90° and 270° conditions, both of which had mean error approximately 3-5 times higher than the 0° case. Also, error was at a local minimum for the 180° condition.

Also using joystick controllers, Kim et al. studied the effect of visual perspective rotations for tracking and virtual 3D pick-and-place tasks [4, 19]. They reported results that were consistent with Bernotat's findings. Specifically, azimuth angle misalignments caused task completion time to increase to local maximums at 90° and 270° for both the tracking and pick-and-place tasks. Similarly, completion times decreased to a local minimum for the 180° condition.

Subsequently, Blackmon et al. investigated the effect of visual rotations for a virtual 3D whole-arm reaching task using 6 degree-of-freedom (DOF) position trackers for the hand. Four subjects were tested using the 0°, 45°, and 90° azimuth angle visual misalignment conditions. Similar to other studies, the mean task completion times and error magnitudes were highest for the 90° case [11].

Recently, Ware and Arsenault examined the effect of visual rotations (from -90° to 90°) on a 3D orientation-matching task in a co-located fish tank display setup [3]. They reported that 14 subjects' mean task completion times increased significantly after 45° of azimuth angle visual misalignment in either direction. Also, mean completion times were highest for the 90° cases, approximately 3 times higher than the 0°-45° conditions (for the 2nd-attempt means). They also reported the interesting result that performance improved in conditions where a visual rotation was presented along with a haptic workspace translation in the same direction. For instance, if the visual rotation was 45° to the right, a haptic workspace translation to the right improved performance. It is possible that performance improved because the haptic workspace translation effectively realigned the arm's reference frame with the visual rotation.

2.3 Fitts' Law

In 1954, Paul M. Fitts empirically developed a way to predict movement time for rapid 1D point-to-point reaching motions, now termed Fitts' task and Fitts' law [20, 21]. The basic Fitts' task involves a user using a stylus to start at rest at a specific location, and then moving the stylus to rest within a designated target area. The empirically identified Fitts' law formally models the speed/accuracy trade-offs in rapid, aimed movement. According to Fitts' law, the time it takes for a human to move and point to a target is a logarithmic function of the relative spatial error, as in

    MT = a + b log2(2D/W),                                              (2.1)

where MT is the movement time, D is the distance from the starting point to the center of the target, W is the width of the target, and the constant parameters a and b are identified by linear regression. The term log2(2D/W) is called the index of difficulty (ID). ID is a measure of the difficulty of the motor task, and carries the unit of bits, in reference to an information theoretic interpretation of Fitts' law. The constants a and b are empirically determined through linear regression of the movement time data for a given system. If MT is measured in seconds, a has a unit of seconds, and b has a unit of seconds/bit. Specifically, 1/b is called the index of performance and measures the information capacity of the system. Although the basic interpretation of Fitts' law is one-dimensional, Fitts' task is applicable to and can be executed in one, two, and three spatial dimensions [22]. Fitts' task has also been used to study a myriad of computer input devices, including digital pointers, computer mouse inputs, and haptic devices [23, 24].

Recently, Soukoreff and MacKenzie encouraged the use of Shannon's formulation of Fitts' law because it is truer to the original Shannon-Hartley channel capacity theorem that the law was based on and has been shown to produce better fits to empirical data [25]. The Shannon formulation is

    MT = a + b log2(1 + D/W),                                           (2.2)

where MT is the movement time, a is a constant time in seconds, b represents the slope of the line, W is the width of a target, and D is the distance of a target from the starting location. The term log2(1 + D/W) is referred to as Fitts' binary ID and is used to quantify the difficulty of a movement condition (target). This formulation guarantees positive values for ID.
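To make the formulation concrete, the short Python sketch below evaluates the Shannon index of difficulty of (2.2) and the corresponding predicted movement time. It is an illustrative sketch: the function names and the regression constants in the example are assumptions, not values identified in this thesis.

import math

def index_of_difficulty(distance, width):
    # Shannon formulation of Fitts' index of difficulty, Eq. (2.2), in bits.
    return math.log2(1.0 + distance / width)

def predicted_movement_time(distance, width, a, b):
    # MT = a + b * ID, with a in seconds and b in seconds/bit.
    return a + b * index_of_difficulty(distance, width)

# Hypothetical regression constants (a = 0.2 s, b = 0.15 s/bit) for a 20 cm reach
# to a 2 cm wide target: ID = log2(11) = 3.46 bits, so MT is about 0.72 s.
print(predicted_movement_time(distance=0.20, width=0.02, a=0.2, b=0.15))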

2.3.1 Comparing Experimental Conditions: Throughput

Soukoreff and MacKenzie also challenged the classical use of the index of performance = 1/b to compare different experimental conditions [25]. For one, they argued that ignoring the linear regression offset term can cause problems when regression lines intersect. For example, in Fig. 2.2, movement times for regression line A are lower than those for regression line B for ID < 2, but the opposite is true for ID > 2. However, if only 1/slope is taken into account when comparing the experimental conditions that produced lines A and B, then B would appear to indicate higher performance, but this is not true for all IDs. Also, in the case of regression lines B and C, considering only the slope will lead one to believe that the performance of both experimental conditions associated with the lines are equal, when obviously they are not.

Instead, Soukoreff and MacKenzie recommended that experimental conditions be compared using throughput (TP), defined as

    TP = (1/y) Σ_{i=1}^{y} [ (1/x) Σ_{j=1}^{x} ( IDe_ij / MT_ij ) ],            (2.3)

Figure 2.2: Throughput example.


where x is the number of unique movement conditions, y is the number of subjects, and IDe is the effective ID calculated from the actual distance traveled and the end-point errors measured in the human experiment. Since human subjects tend to miss the target or move to the edges of a wide target, IDe is defined for each unique movement condition as

    IDe = log2(1 + De/We),                                              (2.4)

where De is the average distance traveled for multiple repetitions of the same movement condition and We = sqrt(2πe)σ = 4.133σ, where σ is the standard deviation of the end-point locations. This formulation for IDe, detailed in [25], assumes that the end-point error has a normal random distribution since it is due to human error.
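The sketch below illustrates how (2.3) and (2.4) can be computed from trial data. It is a hedged Python illustration, not the analysis code used in this dissertation; the function names and the array layout (subjects by movement conditions) are assumptions.

import numpy as np

def effective_id(distances, endpoints):
    # Eq. (2.4): De is the mean distance actually travelled for one movement
    # condition; We = 4.133 * std of the end-point coordinates along the task axis.
    De = np.mean(distances)
    We = 4.133 * np.std(endpoints)
    return np.log2(1.0 + De / We)

def throughput(ide, mt):
    # Eq. (2.3): average IDe/MT over the x conditions for each subject,
    # then average over the y subjects. ide and mt have shape (y, x).
    ide = np.asarray(ide, dtype=float)
    mt = np.asarray(mt, dtype=float)
    return np.mean(np.mean(ide / mt, axis=1))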
The work in this thesis verified that the use of IDe and TP produced better fits to empirical data than ID, and therefore the recommendations of Soukoreff and MacKenzie were followed for the analyses of Fitts' task performance across different experimental conditions.


2.3.2 3D Extensions

Fitts' law was originally formulated for 1D motion, but has since been extended into 2D for use in evaluating computer input devices and graphical user interfaces. The applicability of Fitts' law to human-computer interface research is generally accepted, as evidenced by its adoption as ISO standard 9241-9 in 2000. For 3D tasks, there is strong evidence that Fitts' law applies and can even be adapted for the complications involved with 3D reaching.

Murata and Iwase proposed that a third parameter can be added to Fitts' law to better account for the target's angle with respect to the horizon [26]. Grossman and Balakrishnan proposed a version of Fitts' law that was parameterized for 3D rectangular-shaped targets and the azimuth angle of reach, which they studied using a volumetric virtual display [27]. Liu et al. proposed to add horizon angles, azimuth angles, and path curvature parameters to Fitts' law. They verified this using a virtual 3D tunnel steering task requiring subjects to move a cursor through paths with constant curvature on a stereoscopic display [28].

These extensions to Fitts' law have been shown to accurately predict task completion time. However, with so many parameters, it is not clear how to define the concept of task throughput using these new models. Therefore, the current work maintains the use of Shannon's formulation of Fitts' law and throughput, from (2.2) and (2.3), for comparing experimental conditions.

2.4 Human Operator Models

While task performance is important for understanding how a haptic system affects the user's experience of the environment, equally important is the quantification of human arm dynamics during haptic manipulation.

Dynamic models for the human arm originated with researchers investigating the body's biomechanics, joint dynamics, and mechanical impedance modeling [29, 30, 31, 32, 33]. As robotics and haptic technology became more mature, researchers began to develop single-input-single-output models based on mass-spring-damper (MSD) systems, which have been shown to accurately reflect arm dynamics and are more suitable for real-time computer implementation [34, 35, 36, 32]. More recently, human arm dynamics have been increasingly modeled using robots or manipulators that can be used for haptic feedback, in an effort to improve haptic system design and fidelity. For instance, [37] developed a hand grasping model while operating a haptic knob. Woo et al. characterized the inertia, stiffness, and viscosity of the arm exerting forces of 0-20 N using a 1 DOF haptic device [38]. Dong et al. described non-parametric frequency responses of human fingers using various grip configurations subjected to a random vibration [39]. Various others have modeled intrinsic and reflexive muscle parameters for the shoulder, elbow, and wrist joints using a 2D (horizontal plane) planar haptic device with a cylindrical grip handle [40, 41, 32]. Speich and Goldfarb characterized human arm parameters using a 1 DOF haptic device with a spherical handle and also a custom 3 DOF haptic device with a stylus handle [42]. Kuchenbecker et al. also used a stylus handle with a grip force sensor on a custom 1 DOF manipulator to characterize the hand and wrist [43].

Researchers have also made progress investigating the vibro-tactile responses of the human hand using haptic devices [44]. McMahan et al. identified a five-parameter MSD model of the hand interfaced with a stylus grip haptic device using a 1 DOF linear actuator custom-mounted onto the Phantom's stylus itself (for high frequency 10-200 Hz vibro-tactile feedback applications) [45]. Israr et al. have used both stylus-based devices and spherical actuators to shake the hand at 10-500 Hz [46, 47]. Also, [48] have investigated the vibration modes from 0.7-200 Hz in 1 DOF of the human operator using a racquet grip on the Phantom Premium 1.0 and a custom haptic interface.

Chapter 3

Arm-and-Hand Dynamics and Variability Modeling

The works mentioned above have all contributed greatly to haptics research, but what is currently not found in the literature are experimentally derived results describing the uncertainty and variation found in human arm dynamics. Human operator variability is frequently modeled as the set of all passive nonlinear impedances [49]. However, this approach typically results in over-conservative designs, which limit the haptic interface system's performance. More limited uncertainty sets are used in some studies (e.g., [50]); however, these models are not based on detailed human experiments. Indeed, many studies used human experiments and reported the amount of variance observed from their data collections and parameter identifications, but the variances are not modeled in a way that can be directly used for robust stability and performance analysis.

Haptic interfaces provide a human operator bilateral force interaction with a remote or virtual environment. The human arm, with its countless configurations and a multitude of applications, is by far the most complex and variable element in haptic interface systems. In order to develop a stable and useful haptic interface, accurate and relevant models of human arm dynamics are a necessity. They are critical for proper stability analysis, interface design, and improving haptic fidelity. However, because the human arm is so dextrous and reconfigurable, researchers have reported that small variations in arm configurations, grip forces, and application environments result in the arm exhibiting a wide range of dynamic behavior [51, 43, 52, 53]. Since the arm's configuration is constantly subjected to slight changes during a haptic manipulation task, this implies that in addition to accurate, task- and orientation-dependent models of human arm dynamics, researchers can also benefit from precise information on the variability of those dynamics during haptic manipulation. Without accurate arm dynamics variability models, haptic interface systems are conservatively designed to account for a larger set of variability than sometimes necessary [54]. In contrast, the availability of precise variability measurements will enable more efficient and higher-performance haptic interface systems targeted at subsets of possible human operator dynamics.

Therefore, the current study aimed to not only create models of the arm and hand dynamics, but also study the inter- and intra-subject variability observed in the dynamics and model parameters.

Haptic interfaces with a stylus handle were selected as the focus of this work because of their accessibility and relevance to many haptic manipulation tasks. Stylus handles are commonly found on commercially available haptic devices and are convenient for mimicking other tools that require a similar grasping style. Paintbrushes, dentistry tools, and surgical blades are just a few examples of objects that are held in a pinched-grasp manner similar to how one would hold a stylus.

The models developed in this study used the common convention of force at the hand as the model input and measured hand position as model output. This formulation was consistent with the impedance model for human interaction and the two-port framework for haptic interfaces [55, 56].

Study Objectives

The current work focuses on modeling not only the 3D arm dynamics, but also the inter- and intra-subject variability (due to human variation and grip force changes, respectively) as a function of frequency.

This study used data collected from human experiments to identify both grip-force-dependent 3D Cartesian-space models of the human arm and inter-subject variation, using force as the model input and measured position as output. The measured human experiment dynamics were modeled using five-parameter linear transfer functions based on the dynamics of one mass, two springs, and two dampers.

Variability of the dynamics was studied in two forms: as the statistics of the identified arm dynamics model parameters (referred to from here on as structured variability) and as multiplicative unstructured uncertainty (referred to as unstructured variability). The unstructured variability was modeled in a form consistent with robust stability theory, using transfer functions composed of up to five stable complex-conjugate pairs of poles and up to five minimum-phase complex-conjugate pairs of zeros. In this way, they can be directly applied to robust stability analysis of haptic interfaces. The structured variability, on the other hand, is consistent with and applicable to μ-synthesis stability analysis methods. Details for robust stability analysis can be found in texts such as [57].

3.1 Methods

The following methods were consistently applied to each of the three Cartesian axes.

3.1.1 Input Signals Used in the Human Experiment

For system identification, input signals such as frequency sweeps, discrete sinusoidal signals, and random noise typically produce comparable results [58]. However, when modeling the human arm, frequency sweeps and discrete sine waves are not suitable because at low frequencies (< 3 Hz), human anticipatory reflexes make it difficult to keep the arm passive to force disturbances. Fortunately, the more random the force disturbance is, the less likely it is to trigger the arm's reflexes. For this reason, the current study used Gaussian white noise inputs with a bandwidth of 30 Hz to render them unpredictable by the human subjects and still achieve frequency responses in the range of 0.6-30 Hz.

The Gaussian white noise was low-pass filtered to 30 Hz in line with the known limits imposed by neural signal delay for voluntary movement. During complex tasks, such as target reaching, humans take up to 110 ms to respond to changes in target position [59]. It takes approximately 75 ms for a neural signal to travel from the brain to the ankle muscles and back [15]. For the wrist, [60] found that it takes approximately 50 ms to resist an extension by an external force. Since the arm is closer to the brain than the ankle and the target in this study is static, 50 ms was assumed as the approximate time delay for the arm in the experimental task. Under this assumption, the bandwidth for the human arm was approximated to be 20 Hz, motivating the selection of the 30 Hz noise bandwidth.
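The following Python sketch illustrates one way such a band-limited stimulation signal could be generated. It is an assumption-laden illustration, not the signal generator used in the experiment: the filter order, the RMS scaling, and the function name are choices made for this example, while the 1 kHz rate and 5 N cap anticipate values given in Secs. 3.1.3 and 3.1.4.

import numpy as np
from scipy import signal

def band_limited_noise(duration_s=60.0, fs=1000.0, cutoff_hz=30.0,
                       rms_n=1.0, max_n=5.0, seed=None):
    # Gaussian white noise, low-pass filtered to cutoff_hz so that the
    # stimulation stays unpredictable but band-limited (0.6-30 Hz of interest).
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(int(duration_s * fs))
    b, a = signal.butter(4, cutoff_hz / (fs / 2.0), btype="low")  # 4th-order Butterworth
    x = signal.filtfilt(b, a, x)          # zero-phase filtering
    x *= rms_n / np.std(x)                # scale to the desired RMS force level
    return np.clip(x, -max_n, max_n)      # respect the 5 N limit on stylus forces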

3.1.2 Subjects

Fifteen subjects (6 female, 9 male, ages 20-32) were recruited with prior consent for this study and were not compensated for their participation. Each subject was free from any movement impairments that would have affected this study and was tested using their dominant arm. The experimental procedures were reviewed and given exemption status by the institution's Internal Review Board.

Figure 3.1: The experimental setup and arm configuration used for the human experiment data collections.

3.1.3 Equipment

Experiments were performed using a Phantom Premium 1.5a haptic device (Sensable Technologies Corp., Woburn, MA) equipped with both a Nano 17 6-DOF force/torque sensor (ATI Industrial Automation, Apex, NC) to measure end-effector forces and a FlexiForce force-sensitive resistor to measure grip forces (TekScan Corp., Boston, MA). The force/torque sensor was attached to the Phantom at the end effector. A custom stylus made of Delrin was attached via the Phantom's stock passive gimbal to the force/torque sensor. The stylus and gimbal together had a mass of 52 g. The grip force sensor was mounted to the surface of the stylus 4 cm from the gimbal's center, and a Phidgets Inc. (Calgary, Alberta, Canada) 1018 analog-to-digital interface was used to acquire data from the grip force sensor at 65 Hz. A dual-core 2.53 GHz Xeon workstation (Dell Corp., Round Rock, TX) ran a real-time servo loop at 1 kHz and acquired data from the motor encoders using a PCI-6602 counter and from the force sensor using a PCI-6031 analog-to-digital converter (National Instruments Corp., Austin, TX). Motor outputs were controlled using a PCI-DDA08/12 digital-to-analog converter (Measurement Computing Corp., Norton, MA). The user interface was programmed in OpenGL and displayed stereoscopically using a 120 Hz, 22-inch CRT monitor (Dell Corp., Round Rock, TX) and Crystal Eyes 3 active shutter glasses (RealD Corp., Beverly Hills, CA).

3.1.4

Arm Model Experiment Paradigm

During each experiment trial, the subject was instructed to wear stereographic shutter
glasses, sit approximately 60 cm from a computer monitor in a chair with no arm rests,
and use their hand to hold a stylus-shaped handle at the end eector of the Phantom
haptic device as one would hold a pen. Figure 3.1 shows the arm conguration and
experiment setup for the experiments. Figure 3.2 shows the graphical user interface
(GUI) presented to the subject. The stereographic GUI displays a spheroid cursor
that reects the motion of the stylus at the gimbal pivot point on a 1:1 scale in virtual
3D space. The subjects grip force was displayed in two ways: using a gauge and by
changing the color of the sphere to signal that a certain grip force was achieved (red
for 1 N, cyan for 2 N, and magenta for 3 N). Changing the color of the cursor with
respect to the grip force minimizes the need for subjects to divert their attention away
from the cursor to the force gauge.
Using the stereographic GUI and the Phantom stylus, the subject was instructed
to maintain one of the three tested grip forces (1, 2, and 3 N) and try their best
to stabilize the cursor (red sphere in Fig. 3.2) at the static target at the center of
the crossbars (inside the transparent green box shown in Fig. 3.2) throughout the
duration of the trial. Maintaining a static hand position served to stabilize the hand
about the center of the haptic device workspace and minimize any complex cognitive
strategies so that the observed dynamics would be largely the result of low-level motor
control. Once the subject achieved the desired grip force and centered the cursor at
the target, they vocally signaled an experimenter to initiate stimulation forces to
23

Figure 3.2: This was the on-screen view seen by the subjects. The blue cross bars
give the user a xed coordinate frame to judge 3D motion. The sphere is a cursor
controlled by moving the haptic devices stylus. The color of the sphere changes to
correspond to the label used for each grip force in the gauge located in the lower right
of the screen. The green transparent box at the intersection of the crossbars was the
static target position each subject was instructed to keep the cursor at during the
experiment.
the hand along one of the three tested Cartesian coordinate axes. The unstimulated
axes of the Phantom were unconstrained. When each trial was over, the subject was
given as much time as needed to rest and prevent fatigue to their hand and arm
caused by the trial. To minimize any order eects, the combinations of grip force and
stimulation direction were presented in random order to each subject.
During the experiments, the position at the stylus gimbals center was recorded
in all three degrees of movement (X being left and right, Y being up and down, and
Z being forward and backward) while the subjects arm was stimulated with random
forces in only one of the degrees of movement at a time (see Sec. 3.1.1). The duration
of stimulation lasted 60 s, and the ability of the subject to consistently maintain
a specic grip force was monitored by the experimenter via the experiment visual

24

Figure 3.3: This block diagram represents the identied system. The left most block
represents the haptic device that exerts a force on the human arm. A force sensor
(center block) was placed between the haptic device and the users arm. The dashed
box on the right contains the MSD model for the human arm. Mass M represents
the inertia of the of the arm. The spring k1 and damper b1 represent the hand grasp
stiness while spring k2 and damper b2 represent the arm stiness. FPhantom is the
measured force applied at the end eector of the haptic interface and xarm is the
measured position of the stylus gimbal center that is attached to the force sensor.

In order not to exceed the 3 A current limit on the Phantom's motors, the stimulation forces at the stylus were limited to 5 N. Nine sets of data were collected from each subject, one for every combination of the three grip forces and three directions of force stimulation (X, Y, and Z). The grip forces were the source of inter- and intra-subject variability and were selected to be 1–3 N because grip forces less than 1 N were insufficient for maintaining a hold on the stylus under the stimulation forces, while grip forces greater than 3 N were very difficult for the subjects to maintain consistently for longer than 60 s. Subjects were instructed to maintain a static cursor position at the center of the crossbars in order to trigger a consistent motor control strategy throughout the experiments.
A total of 135 trials were recorded for this study: 15 subjects × three grip forces × three stimulation axes.
3.1.5 Arm Dynamics Model Structure
Figure 3.3 represents the system that was identified. The human arm was conceptualized as an MSD model containing five parameters (1 mass, 2 springs, and 2 dampers), similar to those used in [61, 42, 62, 48]. Mass M represents the inertia of the arm, spring k1 and damper b1 represent the grasp stiffness, while spring k2 and damper b2 represent the arm stiffness.

A transfer function model for the arm, Harm, was then derived (detailed in Appx. A) from the five-parameter MSD model with the measured force Fsensor as input and the position of the hand Xarm (considered equal to the measured stylus gimbal center) as output. In Laplace notation, the transfer function (consistent with [62, 42]) was

$$H_{arm}(s) = \frac{X_{arm}(s)}{F_{sensor}(s)} = \frac{M s^2 + (b_1 + b_2)s + k_1 + k_2}{b_1 M s^3 + (b_1 b_2 + k_1 M)s^2 + (b_2 k_1 + b_1 k_2)s + k_1 k_2}. \tag{3.1}$$

This arm model transfer function was fitted to the measured human-experiment frequency response in each axis in order to identify the five parameters M, k1, k2, b1, and b2. The measured human-experiment frequency response (arm position as output and force-sensor-measured force as input) was computed using Welch's transfer function estimation (Matlab's tfestimate.m) with 32 Hamming-windowed segments and 50% overlap in order to minimize FFT artifacts. Each fit was performed using nonlinear constrained optimization (Matlab's fmincon.m function) in the frequency domain by minimizing the cost function
$$\sum_{n=1}^{p} W_t(n)\left| H_{exp}\!\left(j2\pi\frac{n}{N}\right) - H_{arm}\!\left(j2\pi\frac{n}{N}\right) \right|^2, \tag{3.2}$$

where Wt(n) was a weighting function, Hexp(s) was the frequency response of the force-input, position-output human experiment data, Harm(s) was the measured-dynamics model's frequency response calculated from (3.1) with the identified parameters, p = 57 was the number of data points up to 30 Hz, and N = 958 was the total number of frequency response points resulting from the 32-segment Welch frequency response estimation method. The weighting function, when used, was defined as the mean-squared coherence of the force input and position output, as in [41]. Coherence was calculated via Matlab's mscohere.m function with 958 FFT samples to match the frequency response data. In effect, each empirical frequency response sample was weighted by how closely the input and output signals corresponded at that frequency.
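As an illustration of this identification step, the following MATLAB sketch estimates the coherence-weighted frequency response and fits the five parameters of (3.1) by minimizing a cost of the form (3.2). It is a minimal sketch under assumed conditions: the placeholder signals, sampling rate, initial guess, and parameter bounds are illustrative and not taken from the study.

```matlab
% Minimal sketch of the Sec. 3.1.5 fit; placeholder data stand in for the
% recorded stimulation force and gimbal-center position signals.
fs = 1000;                                        % sampling rate (assumed)
t  = (0:1/fs:60-1/fs)';
force    = randn(size(t));                        % stand-in stimulation force (N)
position = 1e-3*filter(1, [1 -0.98], force);      % stand-in measured position (m)

nwin = floor(2*numel(force)/33);                  % 32 Hamming segments, 50% overlap
[Hexp, f] = tfestimate(force, position, hamming(nwin), floor(nwin/2), nwin, fs);
[Cxy,  ~] = mscohere(force, position, hamming(nwin), floor(nwin/2), nwin, fs);

band = f >= 0.6 & f <= 30;                        % fit over the 0.6-30 Hz range
s    = 1j*2*pi*f(band);

% Five-parameter arm model (3.1): th = [M k1 k2 b1 b2]
Harm = @(th) (th(1)*s.^2 + (th(4)+th(5))*s + th(2)+th(3)) ./ ...
             (th(4)*th(1)*s.^3 + (th(4)*th(5)+th(2)*th(1))*s.^2 + ...
              (th(5)*th(2)+th(4)*th(3))*s + th(2)*th(3));

cost = @(th) sum(Cxy(band).*abs(Hexp(band) - Harm(th)).^2);  % coherence-weighted error

th0   = [0.3 500 100 3 6];                        % initial guess (assumed)
thHat = fmincon(cost, th0, [], [], [], [], ...
                [1e-4 1 1 1e-4 1e-4], [5 5e3 2e3 50 100]);   % bounds (assumed)
```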
Equation (3.2) was used as the cost function to identify three sets of arm model
structure parameters.
Set 1: Grip-Force-Dependent Measured-Dynamics Model Parameters
Parameters for this set were derived from nine measured-dynamics model fits. One model fit was identified for each grip force at each axis. For this set, Hexp(s) was defined as the measured frequency response data averaged over all subjects, resulting in nine grip-force-dependent measured-dynamics model transfer functions. The weighting function used for each fit was the mean-squared coherence averaged over all subjects (Fig. 3.4). These models provide dynamic equations that are useful for simulating the arm's dynamics during haptic system design.
Set 2: Nominal Arm Model Parameters
Parameters in this set were derived from three measured-dynamics model fits, one for each axis. These fits were obtained by defining Hexp(s) as the central complex value of the minimum circle bounding the complex measured frequency response data for all subjects and all grip forces at each frequency sample. The minimum bounding circle center (found using the Crystal–Peirce algorithm in [63]) was necessary in order to find what was effectively the center frequency response of the range at each frequency sample, about which variability could be estimated. No weighting function was used for these fits because these models were used to calculate unstructured variability (Sec. 3.1.7).

Figure 3.4: Subject-averaged mean-squared coherence with force as input and position as output, shown for the X-, Y-, and Z-axis input/output pairs at the 1, 2, and 3 N grip forces (frequency in Hz).
Set 3: Individual Arm Model Parameters
Parameter set three was derived from 135 measured-dynamics model fits, one for each subject, grip force, and axis combination. For this set, Hexp(s) was defined as each of the 135 total sets of measured frequency response data. The weighting function used for these fits was the mean-squared coherence for each of the 135 experiments. These parameters were used to calculate the structured variability statistics presented in Sec. 3.1.6.

3.1.6 Structured Variability
Structured variability refers to the statistical characteristics of the five identified arm model parameters M, k1, k2, b1, and b2. Structured variability results were obtained from 135 measured-dynamics models using the arm model structure and methods described in Sec. 3.1.5. From these models, the following statistics were computed: standard deviation, mean, minimum, maximum, and the 95% and 67% confidence intervals.

3.1.7 Unstructured Variability Model

Unstructured variability refers to the inter- and intra-subject variability observed in the measured arm frequency responses, $H_{arm}^{exp}(s)$, with respect to the three nominal arm models, $\bar{H}_{arm}(s)$ (Sec. 3.1.5).

Variability was considered as unstructured multiplicative uncertainty. Under this assumption, the uncertainty model was defined as follows [57].
For a system with plant transfer function P,

$$P(j\omega) \in \left\{ \bar{P}(j\omega)\left[1 + W_u(j\omega)\Delta(j\omega)\right] \; : \; \sup_{\omega}|\Delta(j\omega)| \leq 1 \right\}, \qquad \Delta \in \mathcal{R}, \tag{3.3}$$
where $\bar{P}$ is the nominal plant transfer function, $W_u(j\omega)$ is the uncertainty weighting function, and $\mathcal{R}$ is the set of all proper real rational functions [57]. The uncertainty weighting function $W_u(j\omega)$ has the relationship

$$|W_u(j\omega)\Delta(j\omega)| = \left| \frac{P(j\omega)}{\bar{P}(j\omega)} - 1 \right| \tag{3.4}$$

and can be interpreted as the percentage uncertainty in the nominal plant $\bar{P}(j\omega)$ at frequency ω.
Therefore, the magnitude of the unstructured uncertainty function $|W_u(j\omega)|$ was considered to represent the unstructured variability of the measured frequency responses with respect to the nominal arm models. This was done by using the right side of (3.4) and defining the nominal arm models, $\bar{H}_{arm}(s)$, as the nominal plant transfer function $\bar{P}(j\omega)$ and the set of all individual measured frequency responses, $H_{arm}^{exp}(s)$, as $P(j\omega)$. Both the nominal arm models and the individual $H_{arm}^{exp}(s)$ frequency responses are plotted in Fig. 3.6a–c.


For each axis, a stable and minimum-phase transfer function in Laplace notation of the form

$$V(s) = K\,\frac{\prod_{i=1}^{N_n}(s - z_i)}{\prod_{i=1}^{N_d}(s - p_i)}, \tag{3.5}$$

with a scaling term K, stable poles $p_i$, numerator order $N_n$, minimum-phase zeroes $z_i$, and denominator order $N_d$, was fitted to envelope the maximum $W_u(j\omega)$ over all subjects and all grip forces using Matlab's fmincon.m function. Each transfer function was constrained to have $N_n \geq N_d$ so that the modeled uncertainty would not asymptotically approach zero. The cost function used was


$$\sum_{n=1}^{p} W_t(n)\left| W_u\!\left(j2\pi\frac{n}{N}\right) - V\!\left(j2\pi\frac{n}{N}\right) \right|^2, \tag{3.6}$$

where Wt(n) was a weighting function, V(jω) was the variability transfer function from (3.5), p = 57 was the total number of frequency samples up to 30 Hz, and N = 958 was the total number of frequency samples resulting from the 32-segment Welch frequency response estimation method. The weighting function was tuned visually in order to avoid local-minimum solutions that did not properly bound the computed unstructured uncertainty.

The 67% CI limits for unstructured variability were also examined in order to provide less conservative models for stability analysis. The 67% CI limits were computed using empirically estimated cumulative distribution functions gathered from the experimental data (Matlab's ecdf.m function).
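The following MATLAB sketch illustrates how the multiplicative uncertainty data and its 67% CI limit can be computed from (3.4); the placeholder frequency responses and variable names are assumptions for illustration, not the study's code.

```matlab
% Sketch of the unstructured-variability computation; placeholder FRFs stand in
% for the measured and nominal arm frequency responses on a common grid f.
f        = linspace(0.6, 30, 57).';                              % Hz, p = 57 samples
w        = 1j*2*pi*f;
Hnom     = 1 ./ (0.22*w.^2 + 5*w + 450);                         % placeholder nominal FRF
Hexp_all = Hnom .* (1 + 0.3*(randn(57,135) + 1j*randn(57,135))); % placeholder trial FRFs

relErr = abs(Hexp_all ./ Hnom - 1);        % |P/Pnom - 1| at each frequency, from (3.4)
WuMax  = max(relErr, [], 2);               % maximum multiplicative uncertainty

Wu67 = zeros(size(WuMax));                 % 67% CI limit at each frequency sample
for k = 1:numel(f)
    [F, x]  = ecdf(relErr(k, :).');        % empirical CDF across the 135 trials
    Wu67(k) = x(find(F >= 0.67, 1, 'first'));
end
% A stable, minimum-phase V(s) of the form (3.5) is then fitted (e.g., with
% fmincon) so that |V(j*2*pi*f)| envelopes WuMax or Wu67 over 0.6-30 Hz.
```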

Table 3.1: Arm Structure Parameters - Grip-Force-Dependent Models

X-axis   M (kg)   k1 (N/m)   k2 (N/m)   b1 (Ns/m)   b2 (Ns/m)
1 N      0.2892   428.4      99.45      2.998       5.802
2 N      0.2869   448.6      93.93      2.443       5.698
3 N      0.2731   455.5      96.17      2.325       5.629

Y-axis   M (kg)   k1 (N/m)   k2 (N/m)   b1 (Ns/m)   b2 (Ns/m)
1 N      0.4602   469.69     121.8      7.063       5.996
2 N      0.3892   625.94     122.2      5.996       6.005
3 N      0.4186   671.20     126.0      5.858       6.410

Z-axis   M (kg)   k1 (N/m)   k2 (N/m)   b1 (Ns/m)   b2 (Ns/m)
1 N      0.2115   843.1      323.9      0.7093      19.42
2 N      0.2525   868.3      332.8      0.5882      19.90
3 N      0.2353   855.1      355.1      0.4925      20.56


3.2 Measured-Dynamics Model Results

3.2.1 Arm Dynamics Model Identification Results

Three sets of arm dynamics models were identified, each with force as input and position as output (Sec. 3.1.5). This chapter presents the parameters from Sets 1 and 2; for conciseness, only the statistics from Set 3 (consisting of 135 model fits) are presented in Sec. 3.2.2.

Set 1 consists of nine measured-dynamics models, whose arm structure parameters are listed in Table 3.1. Bode plots for these model transfer functions are shown in Fig. 3.5.

Parameter Set 2 consisted of three nominal arm models, one representing the center of the range of measured frequency responses for each axis over all grip forces and all subjects. These were used for the calculation of the unstructured variability models in Sec. 3.1.7. The identified parameters for the nominal arm models are reported in Table 3.2, and the Bode plots for the model transfer functions are in Fig. 3.6a–c.

Table 3.2: Nominal Arm Model Parameters

Axis     M (kg)   k1 (N/m)   k2 (N/m)   b1 (Ns/m)   b2 (Ns/m)
X-axis   0.2179   379.5      78.75      1.839       4.645
Y-axis   0.2692   552.4      105.3      3.609       6.430
Z-axis   0.2041   769.9      271.7      0.7764      18.06

Table 3.3: Structured Variability - Arm Structure Parameter Statistics

X-axis       M (kg)   k1 (N/m)   k2 (N/m)   b1 (Ns/m)   b2 (Ns/m)
Minimum      0.0340   140.6      53.05      0.0020      3.148
Mean         0.3240   459.4      104.8      2.579       5.920
Maximum      0.8016   757.9      196.2      7.095       10.34
Std Dev      0.1464   144.5      27.59      1.337       2.192
95% CI Min   0.1433   228.0      63.20      0.8686      3.678
95% CI Max   0.5664   650.5      151.3      4.561       10.29
67% CI Min   0.2527   393.0      91.12      1.957       4.372
67% CI Max   0.3759   539.6      116.8      2.883       6.222

Y-axis       M (kg)   k1 (N/m)   k2 (N/m)   b1 (Ns/m)   b2 (Ns/m)
Minimum      0.2275   292.3      88.66      3.830       4.020
Mean         0.4763   620.2      132.5      6.094       5.890
Maximum      0.9115   926.5      199.7      9.898       9.721
Std Dev      0.1528   185.8      29.05      1.403       1.398
95% CI Min   0.2747   313.1      90.10      4.212       4.145
95% CI Max   0.7221   896.6      194.7      8.904       8.591
67% CI Min   0.3852   525.1      115.5      5.304       5.150
67% CI Max   0.5367   738.6      144.7      6.425       6.272

Z-axis       M (kg)   k1 (N/m)   k2 (N/m)   b1 (Ns/m)   b2 (Ns/m)
Minimum      0.0003   590.3      194.6      0.0002      10.66
Mean         0.2357   886.9      365.8      0.5241      20.14
Maximum      0.4810   1050       588.7      1.939       28.88
Std Dev      0.1261   105.4      105.8      0.4884      5.129
95% CI Min   0.0161   679.1      214.1      0.0002      13.09
95% CI Max   0.4252   1043       533.0      1.420       27.61
67% CI Min   0.1630   849.4      293.3      0.1091      16.28
67% CI Max   0.2968   941.0      421.6      0.8081      24.67

Each model was identified to accurately reflect the measured frequency response data across 0.6–30 Hz.

3.2.2 Variability Results

The observed inter- and intra-subject arm dynamics variability across all subjects and grip forces was identified in two forms: structured variability and unstructured variability.

Figure 3.5: A–C) Grip-force-dependent X-, Y-, and Z-axis model Bode plots (magnitude in dB and phase in degrees versus frequency in Hz). The thicker lines are the frequency responses of the grip-force-dependent measured-dynamics models calculated using (3.1); the thinner lines are the frequency responses of the measured arm dynamics for the 1, 2, and 3 N grip forces. The model parameters for the 1, 2, and 3 N models are in Table 3.1.
Structured Variability

Structured variability was characterized across all subjects and grip forces using statistics from the 135 individual arm dynamics model fits. For the sake of conciseness, the actual model parameters were not reported, but their statistics are reported in Table 3.3.
Unstructured Variability

For the unstructured variability models, multiplicative unstructured uncertainty was calculated using the nominal arm models (Table 3.2) and (3.4). For conciseness, the maximum and 67% CI data were reported and not the 95% CI data, as the 95% CI data differed by less than 5 dB from the maximum uncertainty in the 0.6–30 Hz frequency range. Each unstructured variability model was a transfer function consisting of up to five stable complex-conjugate pole pairs and five minimum-phase complex-conjugate zero pairs. Table 3.4 reports the poles and zeros for the transfer functions as fitted for the maximum and 67% CI limits. Each unstructured variability model closely enveloped the uncertainty observed from all 15 subjects and 1–3 N grip forces from 0.6–30 Hz, as seen in Fig. 3.7a–c.

The maximum unstructured uncertainty observed for all three axes was < 10 dB from 0.6–30 Hz. In the same frequency range, the 67% CI variability models were all < 0 dB and exhibited approximately 10 dB less multiplicative uncertainty than the maximum uncertainty.

3.3 Discussion

3.3.1 Comparison with Previous Arm Model Parameters

The proposed arm model structure produced transfer functions that accurately matched the frequency response of the measured data from 0.6–30 Hz for the X, Y, and Z axes.
Table 3.4: Unstructured Variability Model Poles and Zeroes

X-Axis        Max Variance          67% Confidence Interval
K             1.322                 0.4476
Zero Pair 1   -3.420 ± 12.88j       -68.65 ± 0.000j
Zero Pair 2   -67.28 ± 0.0024j      -2.349 ± 7.134j
Zero Pair 3   -2.714 ± 6.183j       -54.65 ± 133.3j
Zero Pair 4   -5.458 ± 27.34j       -6.627 ± 41.04j
Zero Pair 5   -29.81 ± 159.5j
Pole Pair 1   -2.294 ± 4.224j       -41.16 ± 0.0011j
Pole Pair 2   -53.48 ± 69.71j       -8.028 ± 42.57j
Pole Pair 3   -54.64 ± 160.1j       -4.297 ± 7.713j
Pole Pair 4   -1.971 ± 13.26j       -43.17 ± 167.8j
Pole Pair 5   -4.536 ± 27.25j

Y-Axis        Max Variance          67% Confidence Interval
K             1.856                 1.163
Zero Pair 1   -2.259 ± 7.250j       -2.379 ± 8.072j
Zero Pair 2   -23.53 ± 129.2j       -10.43 ± 18.00j
Zero Pair 3   -6.371 ± 39.57j       -43.20 ± 99.69j
Zero Pair 4   -16.45 ± 92.46j       -1547 ± 331.6j
Zero Pair 5   -5.785 ± 30.96j
Pole Pair 1   -56.82 ± 66.53j       -93.89 ± 128.0j
Pole Pair 2   -19.90 ± 49.73j       -8.189 ± 0.000j
Pole Pair 3   -3.334 ± 4.181j       -756.9 ± 1363j
Pole Pair 4   -19.81 ± 117.5j       -6.038 ± 15.00j
Pole Pair 5   -2.908 ± 34.03j

Z-Axis        Max Variance          67% Confidence Interval
K             2.592                 0.5320
Zero Pair 1   -1.312 ± 6.482j       -924.6 ± 122.0j
Zero Pair 2   -208.8 ± 0.5379j      -40.12 ± 84.57j
Zero Pair 3   -20.09 ± 38.93j       -1.842 ± 6.803j
Zero Pair 4   -5.210 ± 19.61j       -10.63 ± 39.00j
Zero Pair 5   -5.004 ± 40.79j       -6.796 ± 95.29j
Pole Pair 1   -388.7 ± 18.74j       -13.19 ± 44.69j
Pole Pair 2   -4.004 ± 17.33j       -2.594 ± 6.321j
Pole Pair 3   -7.647 ± 49.51j       -73.98 ± 0.0048j
Pole Pair 4   -2.042 ± 6.189j       -487.2 ± 964.86j
Pole Pair 5   -5.494 ± 35.932j      -8.092 ± 95.687j
Figure 3.6: A–C) X-, Y-, and Z-axis nominal arm model Bode plots (magnitude in dB and phase in degrees versus frequency in Hz). For each axis, the black dotted line representing the nominal arm model $\bar{H}_{arm}(s)$ (whose parameters are in Table 3.2) is plotted over the multi-colored thin lines showing the measured frequency responses for all subjects and all grip forces, $H_{arm}^{exp}(s)$. These arm models were used as the nominal model for calculating the unstructured uncertainty in (3.4).
Figure 3.7: A–C) X-, Y-, and Z-axis unstructured variability (1–3 N grip forces): magnitude responses (dB versus frequency in Hz) of the inter- and intra-subject unstructured variability models (dashed pink line for the maximum model and solid green line for the 67% model), plotted along with the maximum uncertainty and 67% CI limits they were modeled after (pink x markers and green circles, respectively).
Figure 3.8: Comparison with models from the literature. The frequency responses (magnitude in dB and phase in degrees versus frequency in Hz) of different models reported in the literature (solid colored lines: Speich stylus X-axis, Speich sphere 1-DOF, Kosuge, and Lawrence) are compared to the current study's nominal Z-axis arm model $\bar{H}_{arm}$ (black dashed line). All models correspond to forward and backward motion.
The identified models successfully captured the magnitude response plateaus that start around 10 Hz in all three axes (Fig. 3.5a–c). Similar behavior in the measured magnitude response was observed in [42], which also modeled the human arm using a 3-DOF stylus-based manipulator. However, their transfer functions were fitted from 0.5–10 Hz and therefore were not designed to capture the plateau characteristics present in the measured data. As a result, the current model behaves quite differently beyond 10 Hz than past models.

However, up to 10 Hz, the frequency responses of the current models are comparable to existing results. Figure 3.8 shows the current nominal Z-axis arm model $\bar{H}_{arm}$ plotted on the same scale with similar arm models from the literature that also modeled forward and backward motion. Speich et al. used a five-parameter MSD model with a transfer function similar to (3.1). Kosuge and Lawrence used three-parameter models (mass m, spring k, damper b), resulting in a second-order transfer function expressed by

$$H_{arm}(s) = \frac{Position(s)}{Force(s)} = \frac{1}{ms^2 + bs + k}. \tag{3.7}$$

Table 3.5: Arm Model Parameters from Literature

                M (kg)   k1 (N/m)   k2 (N/m)   b1 (Ns/m)   b2 (Ns/m)
Diaz [48]       0.22     3662       98.6       1.18        6.88
Speich [42] X   0.85     122        330        12.9        12.9
Speich Y        4.03     108        104        9.20        47.6
Speich Z        0.68     81.4       13.0       17.6        13.5
Speich 1DOF     1.46     48.8       375        4.5         7.9
Vlugt [61]      1.88     14998      733        178         37.3
Hogan [55]      0.8                 568                    5.5
Kosuge [36]     11.6                243                    17
Lawrence [35]   17.5                175                    175

Up to 10 Hz, the current model was most similar in frequency response to Speich et al.'s stylus handle model. Their other model used a 1-DOF sphere-handled manipulator, which exhibited a resonant peak at 3 Hz. The other models in Fig. 3.8 were identified for fidelity in the lower frequency ranges (< 10 Hz), assume a joystick handle grasp, and are second order, so they drop off at -40 dB/dec from 1–2 Hz. In contrast, the current model maintains valuable dynamics that occur past 10 Hz.

Table 3.5 lists model parameters from the literature for the models shown in Fig. 3.8, in addition to identified parameters from [55], [48], and [61]. The magnitude responses for [48] and [61] were not plotted in Fig. 3.8 because their human operator MSD parameters were coupled and identified along with other dynamics, such as neural delays and manipulator vibration modes. For the five-parameter models from the literature, it was assumed that k1, b1 = ks, bs and k2, b2 = kh, bh in [48], while k1, b1 = kh, bh and k2, b2 = ka, ba in [61]. Also, for the three-parameter arm models, it was assumed that the spring and damper correspond to k2 and b2 in the current model structure (implying a rigid link between the hand and the haptic device). Of the cited models, only Speich's X, Y, and Z models identified the hand in a stylus grip configuration; Speich's 1-DOF model used a spherical knob, [48] used a horizontal tennis racquet grip, and the rest used vertical joystick grip configurations.


It was also observed that the identified parameters of the current model structure were comparable to existing results. The mass parameters of the current models were identified to be between 0.0003–0.91 kg, which overlapped the range of 0.22–17.5 kg in past studies. This study's stiffness results ranged from 141–1050 N/m for k1 and 53–589 N/m for k2, which were within the 48.8–14998 N/m for k1 and 13–733 N/m for k2 reported in the literature. The current results also showed that the damping parameters ranged between 0.0002–9.9 Ns/m for b1 and 3.1–29 Ns/m for b2, which was lower than, but also overlapped, the range of 1.18–178 Ns/m for b1 and 5.5–47.6 Ns/m for b2 reported in the literature.

3.3.2 Grip-Force-Dependent Models

Some apparent trends were observed in the subject-averaged grip-force-dependent fits (Sec. 3.1.5 and Table 3.1), but statistical tests for grip-force trends on the 135 individual fits (Sec. 3.1.5) did not reach statistical significance. The statistical analysis performed was a one-way repeated-measures analysis of variance with Greenhouse–Geisser sphericity correction and Holm–Sidak multiple comparison tests (grip force as the factor).
The lack of clear trends was possibly because only three grip forces were examined in this study. A more appropriate study design for trend analysis would likely require a wider range and more grip-force levels. However, due to fatigue concerns during the 60 s of force input, the current study could test only three grip forces.
Also, not all parameters vary in the same direction with respect to grip force, as was reported in [43]. It is possible that the identified parameters represent local-minimum solutions. Since the current study did not investigate the possibility of local minima, future investigations may try the following techniques. One method may be to hold several parameters constant over all grip forces for a particular axis, while allowing only one set of stiffness and damping parameters to vary with the grip forces. Another may be to sample several arm model parameters from the literature and attempt the optimization using these parameters as the initial conditions, given relatively small bounding conditions. Finally, it may be necessary to investigate the bio-mechanical properties of the arm using electromyography (to measure actual muscle contraction intensities) or kinematic analysis to determine more precise initial conditions for optimization.
There were, however, notable differences in the Z-axis spring and damping parameters compared to the other axes (Table 3.1). Specifically, the Z-axis k1, k2, and b2 parameters were more than two times larger than the range of their counterparts for the X and Y axes, while b1 was approximately one order of magnitude smaller. One interpretation of this is that the Z-axis had decreased damping but higher stiffness near the stylus handle, and higher stiffness and damping further away from the stylus. The cause of these parameter discrepancies is not obvious, and there is no mention of a similar phenomenon in the literature.

However, Z-axis motion kinematics were observed to differ from those of the other axes and could be a contributing factor. For all three axes, since a grip force was maintained, the wrist joint was very rigid compared to the elbow and shoulder joints. Therefore, force inputs along the X (left/right) and Y (up/down) axes predominantly cause rotations about one joint, the shoulder or the elbow, respectively. But Z-axis force stimulation resulted in forward/backward motion that requires both the shoulder and elbow joints. Also, X- and Y-axis forces apply torques over the length of the forearm, while Z-axis forces apply torque over the length of the upper arm.

3.3.3 Structured Variability

It is noteworthy that some of the structured variability model parameters do not provide bounds on the range of parameter results from the literature, but this is not unexpected. The current study is relevant for a stylus grasp configuration similar to Fig. 3.1 while applying 1–3 N grip forces. In contrast, methods from the cited literature differ in significant ways, such as in model structure, grip forces used by subjects, and arm configuration, all of which can affect the arm's response. Thus, since the current variability results were not designed to encompass all of those variations, it is possible for the identified parameter ranges to exclude some of those from the literature.

3.3.4 Unstructured Variability

As seen in Fig. 3.7a–c, the proposed unstructured variability structure was successful in producing models that closely enveloped both the maximum and 67% CI limits from the measured data. Also, the unstructured variability models (Table 3.4) were computationally simple, minimum-phase, and stable transfer functions. These transfer functions can be used to compute multiplicative uncertainty bounds on the nominal arm model transfer functions (Table 3.2).

The developed nominal and variability models can be used in various robust control design and analysis techniques. Specifically, the multiplicative unstructured uncertainty models are used for robust performance and robust stability in the H-infinity analysis and control design framework. Arm models with unstructured uncertainty are constructed, consistent with (3.3), as

$$H_{arm}^{u}(s) \in \left\{ \bar{H}_{arm}(s)\left[1 + W_u(s)\Delta(s)\right] \; : \; \sup|\Delta(s)| \leq 1 \right\}, \qquad \Delta(s) \in \mathcal{R}, \tag{3.8}$$

where the nominal arm transfer function $\bar{H}_{arm}(s)$ is (3.1) with parameters from Table 3.2, the unstructured variability weight $W_u(s)$ is (3.5) with parameters from Table 3.4, and $\mathcal{R}$ is the set of all proper real rational functions.
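As a usage illustration, the MATLAB sketch below builds the Z-axis nominal arm model from Table 3.2 and checks a standard multiplicative-uncertainty robust-stability condition. The first-order weight Wu and the virtual-wall stiffness Ke are hypothetical examples, not the identified weight from Table 3.4 or a controller from this work.

```matlab
% Sketch of using the nominal arm model with a multiplicative-uncertainty bound.
M = 0.2041; k1 = 769.9; k2 = 271.7; b1 = 0.7764; b2 = 18.06;   % Z-axis, Table 3.2
s = tf('s');
Harm = (M*s^2 + (b1+b2)*s + k1 + k2) / ...
       (b1*M*s^3 + (b1*b2 + k1*M)*s^2 + (b2*k1 + b1*k2)*s + k1*k2);   % nominal (3.1)

Wu = 0.5*(s + 2*pi*1)/(s + 2*pi*20);       % illustrative first-order weight (assumed)
Ke = 500;                                  % hypothetical virtual-wall stiffness (N/m)
T  = feedback(Ke*Harm, 1);                 % complementary sensitivity of example loop

% Robust stability for multiplicative uncertainty: |T(jw)*Wu(jw)| < 1 for all w
w = 2*pi*logspace(-1, 2, 400);
robust = all(abs(squeeze(freqresp(T, w))) .* abs(squeeze(freqresp(Wu, w))) < 1);
```

If the product |T(jω)Wu(jω)| stays below one at all frequencies, the example loop tolerates every arm model in the uncertainty set (3.8).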
Both the structured and unstructured uncertainty models can be used for controller design using the μ-synthesis framework. Structured variability models can be used for robust stability analysis using Kharitonov's Theorem [64]. Previous work that used robust analysis methods that could employ the current models includes [65, 66, 50, 67, 68].
It is important to note that the identified uncertainty models are overbounds on the set of transfer function models of the arm dynamics and that the actual variability may only be a smaller subset. Such representations may also lead to somewhat conservative robustness analyses. Specifically, the obtained unstructured multiplicative uncertainty models for the maximum-variation case exceeded 0 dB for most of the 0.6–30 Hz frequency range and therefore may lead to conservative results.

Chapter 4

Arm Model ID Without Force Transducers
Force/torque sensors are often not standard components in commercially available haptic interface devices. Two reasons for this are cost and size constraints. For example, a force sensor that is precise and small enough to be used on a haptic interface such as the Phantom 1.5a costs approximately $5000, which is roughly 25% of the cost of the haptic device. In some cases, such as the Phantom Omni (which costs $2000), the cost and modifications required to fit a force sensor to the end effector are not practical. Therefore, the methods for system identification described in Chapter 3 need to be modified in order to be applicable to a wider range of haptic interface devices.

This chapter describes the necessary modifications to the arm model structures (described in Sec. 4.1.1 and 4.3) so that the same identification can be performed without force sensors. Also, results from a separate human experiment using the modified methods are presented and compared against those from Chapter 3. In the current version of the experiment, the same human subject testing procedures and equipment were used (except for the lack of a force sensor and the 3D stereographic display).

Figure 4.1: This block diagram of the human arm coupled to the Phantom haptic device represents the measured experimental arm dynamics. Harm represents a lumped model of the arm's passive and controlled dynamics, and HPhantom represents the position-input/force-output dynamics of the Phantom haptic interface as presented in [69].

4.1 Methods

The following methods were consistently applied to each of the three Cartesian axes.

4.1.1 Phantom and Arm Dynamics Model Structure

The closed-loop model structure in Fig. 4.1, referred to from here on as the measured-dynamics model, represents the measured experimental dynamics as a feedback loop between the Phantom haptic interface (HPhantom) and a lumped model of the arm's passive and active control dynamics (Harm, referred to as the arm-only dynamics from here on). The Phantom's dynamics were defined as the force-input, position-output frequency response models described by Cavusoglu et al. [69].
A transfer function for the measured-dynamics model structure was constructed by conceptualizing the coupled arm and Phantom as two masses in series separated by springs and dampers, as seen in Fig. 4.2. Conveniently, all three of the Phantom's degrees of freedom were shown in [69] to behave as a simple mass for the frequency bandwidth of interest in this study (≤ 30 Hz).

Figure 4.2: This free-body diagram illustrates the coupled relationship between the human arm and the Phantom. The mass in the dashed box on the left corresponds to the HPhantom block in Fig. 4.1, and the components in the dashed box on the right correspond to the Harm block in Fig. 4.1. Mass Mp represents the inertia of the Phantom and mass Ma the inertia of the arm. The spring k1 and damper b1 represent the interface between the hand and the Phantom, while spring k2 and damper b2 represent both the passive and the active control dynamics of the arm.

Thus, for system identification purposes, the Phantom's frequency response in Laplace notation for all 3 DOFs was modeled as

$$H_{Phantom}(s) = \frac{F_p(s)}{X_p(s)} = M_p s^2, \tag{4.1}$$

where Fp(s) is the force acting on the Phantom's stylus, Xp(s) is the position of the stylus tip, and Mp is the effective mass at the end effector for each axis (Fig. 4.2), approximated from the kinematics (and not the Cartesian-space frequency responses) in [69], localized around the configuration shown in Fig. 3.1 about the operating point for each axis, as (detailed in Appx. B)

$$M_{p_x} = 0.09\ \mathrm{kg}, \quad M_{p_y} = 0.095\ \mathrm{kg}, \quad M_{p_z} = 0.091\ \mathrm{kg}. \tag{4.2}$$
The arm-only model transfer function Harm, similar to (3.1), was derived from the five parameters Ma, k1, k2, b1, and b2 in Fig. 4.2, in Laplace notation, as

$$H_{arm}(s) = \frac{X_a(s)}{F_a(s)} = \frac{M_a s^2 + (b_1 + b_2)s + k_1 + k_2}{b_1 M_a s^3 + (b_1 b_2 + k_1 M_a)s^2 + (b_2 k_1 + b_1 k_2)s + k_1 k_2}, \tag{4.3}$$

where Xa(s) is the position of the hand and Fa(s) is the force applied to the hand.
Finally, the measured-dynamics model transfer function was defined as the closed-loop combination of the arm-only and Phantom transfer functions, resulting in a fourth-order transfer function

$$H_{CL}(s) = \frac{Position}{Force} = \frac{H_{arm}(s)}{1 + H_{arm}(s)H_{Phantom}(s)}. \tag{4.4}$$

This measured-dynamics model transfer function was fitted to the measured human-experiment frequency response in each axis in order to identify the five parameters Ma, k1, k2, b1, and b2. Each fit was performed using nonlinear constrained optimization (Matlab's fmincon.m function) in the frequency domain by minimizing the cost function

$$\sum_{n=1}^{p} W_t(n)\left( 20\log_{10}\!\left| H_{exp}\!\left(j2\pi\frac{n}{N}\right)\right| - 20\log_{10}\!\left| H_{CL}\!\left(j2\pi\frac{n}{N}\right)\right| \right)^2, \tag{4.5}$$

where Wt(n) was a weighting function used to fine-tune the fit at each data sample, Hexp(s) was the frequency response of the force-input, position-output human experiment data, HCL(s) was the measured-dynamics model's frequency response calculated from (4.4) with the identified parameters, p = 3000 was the total number of data samples up to 30 Hz (from 100 s of data sampled at 1 kHz), and N was 50000 (from performing a fast Fourier transform (FFT) equal in length to the time-domain data). Hexp(s) was calculated by taking the FFT of the measured time-domain position output data and dividing it by the FFT of the generated time-domain white-noise force input signal (Matlab's fft.m).
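For illustration, the following MATLAB sketch carries out this sensorless fit: an FFT-ratio estimate of the experimental frequency response followed by an unweighted dB-magnitude fit of the closed-loop model (4.4). The placeholder signals, initial guess, and bounds are assumptions, not values from the study.

```matlab
% Sketch of the sensorless identification step with placeholder data.
fs = 1000;  N = 100*fs;                          % 100 s at 1 kHz
Fin = randn(N,1);                                % generated white-noise force command
x   = 1e-3*filter(1, [1 -0.98], Fin);            % stand-in for measured hand position

Hexp = fft(x, N) ./ fft(Fin, N);                 % force-input, position-output FRF
f    = (0:N-1).'*(fs/N);
band = f >= 0.1 & f <= 30;                       % frequency samples used in the fit
w    = 1j*2*pi*f(band);

Mp   = 0.091;                                    % Phantom effective mass, Z-axis (4.2)
Hph  = @(s) Mp*s.^2;                             % Phantom model (4.1)
Harm = @(th,s) (th(1)*s.^2 + (th(4)+th(5))*s + th(2)+th(3)) ./ ...
               (th(4)*th(1)*s.^3 + (th(4)*th(5)+th(2)*th(1))*s.^2 + ...
                (th(5)*th(2)+th(4)*th(3))*s + th(2)*th(3));     % arm-only model (4.3)
Hcl  = @(th,s) Harm(th,s) ./ (1 + Harm(th,s).*Hph(s));          % closed loop (4.4)

cost = @(th) sum((20*log10(abs(Hexp(band))) - ...
                  20*log10(abs(Hcl(th,w)))).^2);                % dB-magnitude error
thHat = fmincon(cost, [0.3 500 100 3 6], [],[],[],[], ...
                [1e-4 1 1 1e-4 1e-4], [10 1e4 1e4 100 200]);    % bounds (assumed)
```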
Equation (4.5) was used as the cost function to identify three sets of arm model structure parameters Ma, k1, k2, b1, and b2.
Set 1: Grip-Force-Dependent Measured-Dynamics Model Parameters
Parameters for set one were derived from nine measured-dynamics model fits. One model fit was identified for each grip force at each axis. For this set, Hexp(s) was defined as the measured experimental data averaged over all subjects, resulting in nine grip-force-dependent measured-dynamics model transfer functions. These models are of the form (4.4), which includes both the Harm and HPhantom dynamics, and are presented in Sec. 4.3, Table 4.1, and Fig. 4.3. These models provide dynamic equations that are useful for simulating the arm's dynamics during haptic system design.
Set 2: Nominal Arm-only Model Parameters
Parameters in set two were derived from three measured-dynamics model fits, one for each axis. These fits were obtained by defining Hexp(s) as the experimental data averaged over all subjects and all grip forces. Then, the identified parameters were used to compute the arm-only transfer function $\bar{H}_{arm}(s)$ using (4.3), providing only the arm's frequency response for each axis. These models were used to calculate unstructured variability (Sec. 4.1.3) and do not include the HPhantom(s) dynamics that represent the haptic device.
Set 3: Individual Arm Model Parameters
Parameter set three was derived from 81 measured-dynamics model fits, one for each subject, grip force, and axis combination. For this set, Hexp was defined as each of the 81 total sets of measured experimental data. These parameters were used to calculate the structured variability statistics presented in Sec. 4.1.2.

4.1.2 Structured Variability

Structured variability refers to the statistical characteristics of the five identified arm dynamics model parameters Ma, k1, k2, b1, and b2. Structured variability results were obtained from 81 measured-dynamics models using the arm dynamics structure and methods described in Sec. 4.1.1. From the parameters of the 81 models, the following statistics were computed: standard deviation, mean, minimum, maximum, and the 95% and 67% CI. These statistics can be used with (4.3) to generate a variety of arm-only models for use in haptic system analysis and design.
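As an illustration of that use, the MATLAB sketch below draws parameter sets from the Table 4.2 X-axis statistics and assembles a family of arm-only transfer functions via (4.3). The Gaussian sampling assumption, the clamping to the observed extremes, and the number of samples are illustrative choices rather than a procedure from the study.

```matlab
% Sketch: generate a family of arm-only models from Table 4.2 (X-axis) statistics.
mu = [0.2078 770.7 89.68 2.011 7.453];       % X-axis means  [Ma k1 k2 b1 b2]
sd = [0.0498 181.2 43.36 0.9933 2.097];      % X-axis standard deviations
lo = [0.04396 442.8 37.74 0.02026 3.972];    % observed minima (used as clamps)
hi = [0.2872 1090 200.5 4.052 13.00];        % observed maxima

s = tf('s');  models = cell(20, 1);
for i = 1:numel(models)
    th = min(max(mu + sd.*randn(1, 5), lo), hi);      % clamped Gaussian sample
    [Ma, k1, k2, b1, b2] = deal(th(1), th(2), th(3), th(4), th(5));
    models{i} = (Ma*s^2 + (b1+b2)*s + k1 + k2) / ...
                (b1*Ma*s^3 + (b1*b2 + k1*Ma)*s^2 + ...
                 (b2*k1 + b1*k2)*s + k1*k2);          % arm-only model (4.3)
end
```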

4.1.3 Unstructured Variability Model

The unstructured variability concepts used here were identical to those described in Sec. 3.1.7. However, the difference here was that $H_{arm}^{exp}(s)$ was computed from the measured experimental data after removing the Phantom dynamics, as described at the end of this section. In the previous chapter, this step was not necessary since the force sensor was mounted between the Phantom's end effector and the hand.

Variability was considered as unstructured multiplicative uncertainty. Under this assumption, the uncertainty model was defined as follows [57].
For a system with plant transfer function P,

$$P(j\omega) \in \left\{ \bar{P}(j\omega)\left[1 + W_u(j\omega)\Delta(j\omega)\right] \; : \; \sup_{\omega}|\Delta(j\omega)| \leq 1 \right\}, \qquad \Delta \in \mathcal{R}, \tag{4.6}$$

where $\bar{P}$ is the nominal plant transfer function, $W_u(j\omega)$ is the uncertainty weighting function, and $\mathcal{R}$ is the set of proper real rational functions [57]. The uncertainty weighting function $W_u(j\omega)$ has the relationship

$$|W_u(j\omega)\Delta(j\omega)| = \left| \frac{P(j\omega)}{\bar{P}(j\omega)} - 1 \right| \tag{4.7}$$

and can be interpreted as the percentage uncertainty in the nominal plant $\bar{P}(j\omega)$ at frequency ω.
Therefore, the magnitude of the unstructured uncertainty function $|W_u(j\omega)|$ was considered to represent the unstructured variability of the arm-only (sans Phantom dynamics) response with respect to the nominal arm models. This was done by using the right side of (4.7) and defining the nominal arm-only models, $\bar{H}_{arm}(s)$, as the nominal plant transfer function $\bar{P}(j\omega)$ and the set of all individual arm-only experimental frequency responses, $H_{arm}^{exp}(s)$, as $P(j\omega)$. Both the nominal arm-only models and the individual $H_{arm}^{exp}(s)$ frequency responses are plotted in Fig. 4.5a–c.

4.2 Derivation of Arm-Only Experimental Frequency Response

The arm-only experimental frequency response, $H_{arm}^{exp}(s)$, differs from the measured-dynamics frequency response in that the latter was measured directly, while $H_{arm}^{exp}(s)$ is computed from it by removing the dynamics of the Phantom. $H_{arm}^{exp}(s)$ was computed using Welch's transfer function estimation (Matlab's tfestimate.m) with four Hamming-windowed segments and 50% overlap in order to minimize FFT artifacts when using (4.7) to calculate unstructured variability. Consistent with Fig. 4.1, the time-domain output signal for computing $H_{arm}^{exp}(s)$ by Welch's method was defined as the measured hand position, and the time-domain input signal was defined as the arm-only force

$$F_{arm}(t) = F_{in}(t) - F_{Phantom}(t), \tag{4.8}$$

where Fin was the random-noise force input used for system identification. FPhantom was approximated from the second derivative of the measured hand position, x(t), by

$$F_{Phantom}(t) = M_p\,\frac{d^2 x(t)}{dt^2}, \tag{4.9}$$

with Mp as defined in (4.2) and the second derivative approximated by

$$\frac{d^2 x[n]}{dt^2} = \frac{x[n+1] - 2x[n] + x[n-1]}{\Delta t^2}, \tag{4.10}$$

with Δt = 0.001 s.
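The MATLAB sketch below shows one way to apply (4.8)–(4.10) and then estimate the arm-only frequency response with the four-segment Welch method; the placeholder signals and the sampling rate are assumptions for illustration.

```matlab
% Sketch of the Phantom-dynamics removal in (4.8)-(4.10); placeholder signals
% are used so the snippet runs.
fs = 1000;  dt = 1/fs;
Fin = randn(100*fs, 1);                        % recorded force command (placeholder)
x   = 1e-3*filter(1, [1 -0.98], Fin);          % measured hand position (placeholder)
Mp  = 0.091;                                   % Z-axis effective Phantom mass, (4.2)

acc      = [0; diff(x, 2)/dt^2; 0];            % central second difference, (4.10)
Fphantom = Mp*acc;                             % inertial force of the device, (4.9)
Farm     = Fin - Fphantom;                     % force actually applied to the arm, (4.8)

nwin = floor(2*numel(x)/5);                    % four Hamming segments, 50% overlap
[HexpArm, f] = tfestimate(Farm, x, hamming(nwin), floor(nwin/2), nwin, fs);
% HexpArm is then used as P(jw) on the right side of (4.7).
```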
Having computed $H_{arm}^{exp}(s)$, the right side of (4.7) was then used to compute the uncertainty weighting data by defining P as $H_{arm}^{exp}(s)$ for each subject's data (81 trials) and $\bar{P}$ as the identified nominal arm-only transfer functions $\bar{H}_{arm}(s)$.
For each axis, a stable and minimum-phase transfer function of the form

$$V(s) = K\,\frac{\prod_{i=1}^{N_n}(s - z_i)}{\prod_{i=1}^{N_d}(s - p_i)}, \tag{4.11}$$

with a scaling term K, stable poles $p_i$, numerator order $N_n$, minimum-phase zeroes $z_i$, and denominator order $N_d$, was fitted to envelope the maximum $W_u(j\omega)$ over all subjects and all grip forces using Matlab's fmincon.m function. Each transfer function was constrained to have $N_n \geq N_d$ so that the modeled uncertainty would not asymptotically approach zero. The cost function used was



$$\sum_{n=1}^{p} W_t(n)\left( 20\log_{10}\!\left| V\!\left(j2\pi\frac{n}{N}\right)\right| - 20\log_{10}\!\left| H_{fit}\!\left(j2\pi\frac{n}{N}\right)\right| \right)^2, \tag{4.12}$$

where Wt(n) was a weighting function, Hfit was the frequency response of the identified nominal arm model, p = 57 was the total number of data samples up to 30 Hz, and N = 958 was the total number of frequency response samples for 500 Hz of data estimated by the Welch method.


The current study also examined the 67% CI limits for unstructured variability in order to provide less conservative models for stability analysis. The 67% CI
limits were computed using empirically estimated cumulative distribution functions
gathered from the experimental data (Matlabs ecdf.m function).

4.2.1 Subjects

Nine right-hand-dominant subjects (4 female, 5 male, ages 20–30) were recruited with prior consent for this study and were not compensated for their participation. Each subject was free from any movement impairments that would have affected this study and was tested using their right arm. The experimental procedures were reviewed and given exemption status by the institution's Institutional Review Board.

4.2.2 Arm Model Experiment Paradigm

The same experiment paradigm from Chapter 3, Sec. 3.1.4, was used. A total of 9 subjects × 3 grip forces × 3 axes = 81 data trials were recorded for this study.

4.3 Measured-Dynamics Model Results

4.3.1 Arm Dynamics Model Identification Results

Three sets of arm dynamics models were identified, each with force as input and position as output. This chapter presents the parameters from two sets and, for conciseness, only the statistics from the third are presented in Sec. 4.3.2.
The first set consists of nine measured-dynamics models, whose arm structure parameters are listed in Table 4.1. The frequency responses for these model transfer functions include the dynamics of the Phantom and are shown in Fig. 4.3.

Table 4.1: Arm Structure Parameters - Grip-Force-Dependent Models

X-axis     Ma (kg)   k1 (N/m)   k2 (N/m)   b1 (Ns/m)   b2 (Ns/m)
1 N grip   0.1925    704.2      85.48      2.477       7.410
2 N grip   0.2037    785.4      76.29      2.532       7.598
3 N grip   0.2057    784.3      88.91      2.525       7.592

Y-axis     Ma (kg)   k1 (N/m)   k2 (N/m)   b1 (Ns/m)   b2 (Ns/m)
1 N grip   0.2775    649.4      91.48      4.314       7.217
2 N grip   0.2984    779.8      84.85      4.919       7.632
3 N grip   0.2954    775.1      86.84      4.541       7.719

Z-axis     Ma (kg)   k1 (N/m)   k2 (N/m)   b1 (Ns/m)   b2 (Ns/m)
1 N grip   4.374     200.4      4877       32.96       55.85
2 N grip   2.250     181.1      2196       30.56       40.83
3 N grip   3.107     120.8      3289       29.70       50.16

Parameter set two consisted of three nominal arm-only models, one representing each axis, averaged over all grip forces and all subjects. These were used for the calculation of the unstructured variability models in Sec. 4.1.3. The identified parameters for the nominal arm-only models are reported in Table 4.3, and the Bode plots for the model transfer functions are in Fig. 4.5a–c. These transfer functions do not include the dynamics of the Phantom.

Each model was identified to accurately reflect the experimental data across 0.1–30 Hz and also to capture the resonant peaks observed at approximately 15 Hz for the X and Y axes and 5 Hz for the Z-axis in the experimentally measured frequency responses (which include the Phantom). For the grip-force-dependent models, no conclusive parameter variation trends were observed with respect to the grip force.

4.3.2 Variability Results

The observed inter- and intra-subject arm-only dynamics variability across all subjects and grip forces was identified in two forms: structured variability and unstructured variability.

Figure 4.3: A–C) X-, Y-, and Z-axis grip-force-dependent arm-plus-Phantom dynamics: Bode plots (magnitude in dB and phase in degrees versus frequency in Hz). The thicker lines are the frequency responses of the grip-force-dependent measured-dynamics models calculated using (4.4), which include the Phantom dynamics. The thinner lines are the frequency responses of the experimentally measured arm-plus-Phantom dynamics for the 1, 2, and 3 N grip forces. The model parameters for the 1, 2, and 3 N models are in Table 4.1.

Table 4.2: Structured Variability - Arm Structure Parameter Statistics

X-axis       Ma (kg)   k1 (N/m)   k2 (N/m)   b1 (Ns/m)   b2 (Ns/m)
Minimum      0.04396   442.8      37.74      0.02026     3.972
Mean         0.2078    770.7      89.68      2.011       7.453
Maximum      0.2872    1090       200.5      4.052       13.00
Std Dev      0.0498    181.2      43.36      0.9933      2.097
95% CI Min   0.1193    484.9      40.67      0.02026     4.064
95% CI Max   0.2623    1076       154.4      3.119       12.12
67% CI Min   0.2021    669.9      63.08      1.374       6.417
67% CI Max   0.2272    867.5      110.2      2.639       7.541

Y-axis       Ma (kg)   k1 (N/m)   k2 (N/m)   b1 (Ns/m)   b2 (Ns/m)
Minimum      0.1498    379.6      47.75      1.671       3.259
Mean         0.2918    825.4      96.72      3.420       7.176
Maximum      0.4135    1455       199.2      5.837       9.675
Std Dev      0.07518   305.6      40.75      0.9866      1.787
95% CI Min   0.1529    406.1      47.80      1.899       4.102
95% CI Max   0.4111    1280       175.3      5.010       9.632
67% CI Min   0.2481    688.1      71.75      3.234       5.985
67% CI Max   0.3333    1064       96.88      3.695       8.557

Z-axis       Ma (kg)   k1 (N/m)   k2 (N/m)   b1 (Ns/m)   b2 (Ns/m)
Minimum      0.5661    1.358      79.65      10.69       18.07
Mean         3.105     176.4      3224       36.90       48.60
Maximum      9.435     1036       9920       83.79       649.0
Std Dev      2.084     124.2      2581       15.03       55.82
95% CI Min   0.6921    1.358      176.8      13.17       22.53
95% CI Max   7.532     418.6      9312       59.63       155.3
67% CI Min   1.931     107.4      1943       32.64       30.23
67% CI Max   3.678     207.0      3592       42.21       34.37

Structured Variability

The structured variability observed in the identified parameters across all subjects and grip forces was computed from the 81 arm dynamics model fits. For the sake of conciseness, the actual model parameters were not reported, but their statistics are reported in Table 4.2.
Unstructured Variability

For the unstructured variability models, multiplicative uncertainty was calculated using the arm-only nominal dynamics models (Table 4.3) and (4.7).
Table 4.3: Nominal Arm-Only Model Parameters

Axis     Ma (kg)   k1 (N/m)   k2 (N/m)   b1 (Ns/m)   b2 (Ns/m)
X-axis   0.2187    787.0      77.15      2.250       8.3752
Y-axis   0.2929    727.23     87.79      4.674       7.489
Z-axis   3.160     180.6      3319       30.81       48.87

The maximum and 67% CI data were reported and not the 95% CI data because the latter differed by less than 5 dB from the maximum uncertainty from 0.1–30 Hz. Each unstructured variability model was a transfer function consisting of up to five stable complex-conjugate pole pairs and five minimum-phase complex-conjugate zero pairs. Table 4.4 reports the poles and zeros for the transfer functions as fitted for the maximum and 67% CI limits. Each unstructured variability model closely enveloped the uncertainty observed from all nine subjects and 1–3 N grip forces from 0.1–30 Hz, as seen in Fig. 4.5d–f.

The maximum unstructured uncertainty observed was < 10 dB for the X and Y axes and < 15 dB for the Z axis from 0.1–30 Hz. In the same frequency range, the 67% CI variability models reflected approximately 10 dB less multiplicative uncertainty than the maximum uncertainty.

4.4 Discussion

The proposed measured-dynamics model structure produced transfer functions that accurately matched the overall frequency response of the experimental data between 0.1–30 Hz for the X and Y axes and 0.1–10 Hz for the Z-axis. The Z-axis was not fitted to the experimental data between 10–30 Hz because, in this frequency range, the experimental magnitude response was observed to rise at a rate of approximately 15 dB/dec. This raises the possibility of a resonant peak existing beyond 30 Hz. Since the force input bandwidth was limited to 30 Hz, further study is required to determine how best to model the Z-axis frequency response past 10 Hz. Therefore, the Z-axis model's frequency response was designed to be dominated by the Ma mass parameter at frequencies past 10 Hz, which is why the model's frequency response falls off at -40 dB/dec between 10–30 Hz in Fig. 4.3c and the phase response is off by approximately 90 deg in Fig. 4.5c. The effect of this design decision is apparent in that the high-frequency unstructured uncertainty is highest in the Z-axis, where it is approximately 5 dB, compared to approximately 0 dB for the X and Y axes (Fig. 4.5f).
Figure 4.4: A–C) X-, Y-, and Z-axis nominal arm-only models: Bode plots (magnitude in dB and phase in degrees versus frequency in Hz). For each axis, the black dotted line representing the nominal arm-only model $\bar{H}_{arm}(s)$ (whose parameters are in Table 4.3) is plotted over the multiple thin lines showing the arm-only frequency responses $H_{arm}^{exp}(s)$ for all subjects and all grip forces, as calculated in Sec. 4.1.3. These arm models were used as the nominal model for calculating the unstructured uncertainty in (4.7).
Figure 4.5: A–C) X-, Y-, and Z-axis unstructured variability models: magnitude responses (dB versus frequency in Hz) of the inter- and intra-subject unstructured variability models (solid lines), plotted along with the maximum uncertainty and 67% CI limits they were modeled after (pink and green dots, respectively).
Table 4.4: Unstructured Variability Model Poles and Zeroes

X-Axis        Max Variance           67% Confidence Interval
K             2.350 × 10^6           1.076
Zero Pair 1   -1.841 ± 7.013j        -2.517 ± 4.746j
Zero Pair 2   -18.66 ± 92.78j        -8.222 ± 22.44j
Zero Pair 3   -6.680 ± 24.10j        -23.85 ± 90.79j
Zero Pair 4   -42.13 ± 741.1j        -7.692 ± 45.82j
Zero Pair 5                          -396.9 ± 0.03940j
Pole Pair 1   -16.17 ± 9.531j        -19.51 ± 34.36j
Pole Pair 2   -28.27 ± 51.95j        -54.90 ± 3.152j
Pole Pair 3   -9.740 ± 0.7106j       -16.01 ± 41.88j
Pole Pair 4                          -5.231 ± 0j
Pole Pair 5                          -369.7 ± 93.24j

Y-Axis        Max Variance           67% Confidence Interval
K             1.316                  1.259
Zero Pair 1   -10.58 ± 0j            -2.327 ± 3.397j
Zero Pair 2   -9.142 ± 5.834j        -40.12 ± 103.5j
Zero Pair 3   -30.73 ± 103.6j        -11.37 ± 27.65j
Zero Pair 4   -11.38 ± 28.95j
Pole Pair 1   -6.803 ± 14.58j        -3.127 ± 0j
Pole Pair 2   -16.05 ± 40.70j        -17.98 ± 29.35j
Pole Pair 3   -51.90 ± 93.82j        -85.37 ± 85.05j
Pole Pair 4   -39.56 ± 0.1183j

Z-Axis        Max Variance           67% Confidence Interval
K             2.330                  1.742
Zero Pair 1   -4.686 ± 13.29j        -5.106 ± 24.383j
Zero Pair 2   -1.991 ± 3.381j        -2.288 ± 4.620j
Zero Pair 3   -11.03 ± 39.46j        -9.919 ± 39.41j
Zero Pair 4   -103.5 ± 98.00j
Pole Pair 1   -2.354 ± 1.851j        -30.98 ± 0.0004682j
Pole Pair 2   -3.589 ± 11.58j        -9.485 ± 30.47j
Pole Pair 3   -24.33 ± 42.73j        -5.343 ± 0j
Pole Pair 4   -118.1 ± 4.981j
Again, like the grip-force-dependent parameters derived from the force-sensor measurements in Chapter 3, not all parameters varied in the same direction with respect to grip force, as was reported in [43]. It is possible that the identified parameters represent local-minimum solutions. Since the current study did not investigate the possibility of local minima, future investigations may try the following techniques.

Figure 4.6: Comparison of arm model identification with and without force sensors. The nominal arm model frequency responses identified with force sensors (solid lines, X, Y, and Z axes) are compared to the current study's nominal arm models identified without force sensors (dashed lines); magnitude in dB and phase in degrees versus frequency in Hz.
One method may be to hold several parameters constant over all grip forces for a particular axis, while allowing only one set of stiffness and damping parameters to vary with the grip forces. Another may be to sample several arm model parameters from the literature and attempt the optimization using these parameters as the initial conditions, given relatively small bounding conditions. Finally, it may be necessary to investigate the bio-mechanical properties of the arm using EMG (to measure actual muscle contraction intensities) or kinematic analysis to determine more precise initial conditions for optimization.

4.4.1 Compared to Results Using Force Sensors

Compared to the models identified using force sensors in Chapter 3, the identified X- and Y-axis models had very similar frequency responses, but the Z-axis response showed significant differences between the two methods (Fig. 4.6). Specifically, the Z-axis models began to deviate significantly at around 10 Hz. Both Z-axis models fit well to the data, as seen in Fig. 3.6c when force sensors were used and Fig. 4.4c when no force sensors were used. This indicates that the discrepancy was not due to the model structure or fitting, but rather to differences in the empirical arm frequency responses. And in fact, the empirical magnitude responses seen in Figs. 3.6c and 4.4c were notably different for frequencies > 3 Hz. The difference in magnitude response was most apparent from 10–20 Hz, where the sensor-measured magnitude response stays relatively constant around -60 dB, compared to an approximately -10 dB drop over the same frequency range for the derived, no-force-sensor data.

These observations indicated that the assumption of simple mass-like dynamics for the Z-axis was not a good approximation. Without a force sensor, the empirical arm frequency responses were derived by assuming that the Phantom dynamics were those of a simple mass, as described in Sec. 4.2. However, in light of the differences in the Z-axis frequency responses, this method was only acceptable for the X and Y axes. Further investigation of the mechanical dynamics of the Phantom would be necessary to determine the true cause of the discrepancy in the Z axis.

Chapter 5

Evaluation of 3D Fitts' Task in Physical and Virtual Environments

Optimization of human task performance in haptic interface systems is desirable from an engineering standpoint, but it is also crucial in applications such as surgical robot control, where impaired performance can lead to costly consequences. The effect of immersion modalities on task performance is a well-studied area, but there are still gaps in the literature that can be filled.
As covered in Sec. 2.2, many have investigated the value of immersive technologies over typical, non-colocated computer interaction where the visual field and haptic workspace are not aligned (e.g., a common computer display and mouse interface). There are two general causes of misalignment: rotational and translational dislocation of the visual display from the input device. For the current study, a co-located interface was defined as the condition in which the visual and haptic workspace scales, origins, and orientations are all aligned (similar to our physical hand-eye interactions). Therefore, translational misalignment refers to the condition where only the scales and orientations are aligned, and rotational misalignment refers to the case where only the scales and origins are aligned.
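As a concrete illustration of the rotational case, the small MATLAB sketch below rotates the haptic-device position about the vertical axis by an azimuth angle before it is rendered; the function name and the assumption that Y is the vertical axis are illustrative, not the study's implementation.

```matlab
function pView = rotateAzimuth(pHaptic, thetaDeg)
% Apply an azimuth (vertical-axis) rotation to a haptic position [x y z]
% before rendering it, producing a rotational visuo-haptic misalignment.
    c = cosd(thetaDeg);  s = sind(thetaDeg);
    R = [ c 0 s;                  % rotation about the vertical (Y) axis
          0 1 0;
         -s 0 c];
    pView = (R*pHaptic(:)).';     % cursor position shown to the subject
end
% Example: rotateAzimuth([10 0 5], 90) renders the cursor as if the scene
% were rotated 90 degrees about the azimuth.
```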

Investigations into the effect of visuo-haptic misalignments on task performance are rooted in motor control studies regarding the physiological processes behind adaptation to optical prisms (described in Sec. 2.2.4). Since then, the increased accessibility of computers has brought the field into intersection with the study of human-computer interaction, where the focus is on the optimization of human performance in virtual environments, which is also the focus of the current work.
Previous findings regarding the effect of rotational misalignment on virtual task performance are surprisingly consistent, despite the notable variation in the types of tasks that were tested. The tasks include 2D point-to-point targeting with a joystick interface [18], 3D pick-and-place and tracking using two joysticks [4, 19], whole-arm 3D point-to-point reaching [11], and 3D object orientation matching [24]. Across all the tested tasks, the results, usually quantified by task completion time and error rate, showed that visual rotations about the azimuth affected performance in a quasi-symmetric manner about the unrotated, 0° condition. Performance was generally at a maximum for the 0° condition, from which it decreased to a minimum at 90° and increased to a local maximum at 180°. In short, completion times and error rates were lowest for the rotation-free condition and highest for rotations of 90°, regardless of task type.
In contrast, previous findings on the effect of translational misalignments on task performance are conflicting. For instance, Swapp et al. reported that co-location significantly improved performance metrics for a set of 3D tasks [14]. Their method of co-locating the visual and haptic workspaces was to physically align and stereographically calibrate a haptic device located at eye level between the user and the computer display. Three virtual tasks (3D reaching, 3D maze navigation, and object juggling) were tested, each over three arbitrarily defined difficulty levels. Similarly, Lev et al. reported that a virtual endoscopic surgery suturing task was performed significantly faster using a stereographic, co-located fish tank display modality than a monoscopic, non-colocated monitor mounted 2.4 m away from the haptic device [70]. Their co-located modality placed the mirrored display between the user and the haptic device used for input. In contrast, Teather et al. tested the effect of co-location using a 3D Fitts' task and did not find a significant improvement in task completion time or end-point error [12]. Instead of a haptic device, they used an optically tracked stylus that was operated between the user and the stereoscopic display for the co-located condition. For the non-colocated condition, the stylus workspace was shifted to the right of the display so the two did not overlap.
The cited works have provided useful information regarding human performance in VE, but several gaps in the literature are apparent. First, the large variation of task paradigms in the literature makes repeatability and inter-study comparisons difficult. Second, task difficulty is known to affect task performance, but only two studies have taken this into consideration [14, 12], and of those two, only one specified how difficulty was defined [12]. Most importantly, although rotation and translation misalignments have been shown to impact task performance with respect to the same task in a physical environment, only one attempt has been made to investigate all three factors using the same task [11]. The attempt by Blackmon et al. was a small study of 4 subjects that examined 0°, 45°, and 90° rotations. Also, the authors noted that the head-mounted display used for the co-located condition caused excessively long movement times because subjects had to search for the target by moving their head around the virtual environment.
Therefore, the motivation for the current study was to characterize human performance of a point-to-point reaching task under the physical, co-located/non-colocated VE, and rotated VE visualization conditions. Also, the reaching tasks should span a range of difficulties while still facilitating inter-study comparison and repeatability.
Fitts' point-to-point reaching task stands out as an appropriate motor task for this goal. Fitts' task (covered in Sec. 2.3) is an established motor task for testing manual performance that has a well-defined method of adjusting task difficulty [25]. The current study kept the following aspects consistent across all experiment conditions: stereographic visualization, the haptic interface device (Phantom Omni haptic device), the placement of the haptic device, and the distance between the eye location and the image planes (set to 50 cm in both the co-located and non-colocated configurations via a headrest). In addition, targets with a range of difficulties (as measured by Fitts' index of difficulty) were tested.

5.0.2 Study Objectives

This work examined human performance of a 3D variation of Fitts' point-to-point reaching task performed using a stylus-based haptic interface device under various experimental conditions. A total of ten conditions of the reaching task were considered: physical targets (real), non-colocated (NC) virtual targets, co-located virtual targets (0°) using a stereographic fish tank display modality, and virtual targets using the fish tank display with azimuth perspective rotations of 45°, 90°, 135°, 180°, 225°, 270°, and 315°.
Objective 1: Examine the effect of visualization paradigms (real, NC, and 0°) on task performance measures.
Objective 2: Examine the effect of visual rotations (0°–315°) on task performance measures.

5.1 Performance Measures for Analysis

The following six quantitative measures were calculated from the reaching trajectories after removing data unrelated to movement. To prevent dwell time (the time between movement termination and computer registration of the end-point position) and movement onset delays from interfering with the analysis, only data with velocity greater than 1.5 mm/s were analyzed. This threshold was consistent with [11] and was based on the hand tremor frequency response of expert retinal surgeons using a stylus grip, which was measured to have an amplitude of 0.03 mm at a fundamental frequency of 9 Hz [71]. Because encoder pulse widths cannot be measured from the Phantom Omni haptic device, velocity profiles were estimated from the first difference of the trajectory data after it had been low-pass filtered at 5 Hz with a 3rd-order Butterworth filter (Matlab's filtfilt).
All of the following performance measures were scalars calculated from composite position, velocity, and acceleration signals. The composite was defined as the square root of the sum of squares of the data from each axis.
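As an illustration of this preprocessing, the sketch below (a Python stand-in for the Matlab filtfilt pipeline described above; the array name and 1 kHz sample rate are taken from the text, everything else is illustrative) low-pass filters a trajectory, forms the first-difference velocity, and computes the composite magnitude:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                                  # sampling rate (Hz); data were sampled at 1 kHz
b, a = butter(3, 5.0, btype='low', fs=fs)    # 3rd-order Butterworth low-pass, 5 Hz cutoff

def composite_speed(xyz):
    """xyz: (N, 3) array of stylus positions in mm; returns (N-1,) composite speed in mm/s."""
    smoothed = filtfilt(b, a, xyz, axis=0)   # zero-phase filtering, as with Matlab's filtfilt
    vel = np.diff(smoothed, axis=0) * fs     # first difference -> per-axis velocity
    return np.linalg.norm(vel, axis=1)       # square root of the sum of squares across axes

# movement-only samples: keep data where the composite speed exceeds the 1.5 mm/s threshold
# moving = composite_speed(xyz) > 1.5
```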

5.1.1 Throughput

The measure of task throughput used in this experiment was IDe / (movement time to target), which is consistent with [25] and described in detail in Sec. 2.3.1. Throughput, also referred to as the task completion rate, is inversely proportional to the task completion times measured across a range of target difficulties. Therefore, increased throughput for an experimental condition is considered to reflect increased task performance on Fitts' task.

5.1.2 End-point Error

End-point error was defined as the Euclidean distance from the location of movement termination to the target's central location, without regard for the width of the target. Since healthy, unimpaired individuals were tested in this study, increased end-point error was equated with decreased task performance.


5.1.3 Number of Corrective Movements

The number of corrective movements was defined as the number of local maxima of the acceleration signal during each trial and indicates the smoothness of a reaching motion. Since each corrective movement signifies a direction change, an ideally smooth reaching motion would have 0 corrections.
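A minimal sketch of this count, assuming the composite acceleration is formed the same way as the composite velocity above (scipy's peak finder stands in for whatever peak detection was actually used):

```python
from scipy.signal import find_peaks

def corrective_movements(acc_composite):
    """acc_composite: 1-D composite acceleration for one trial (movement samples only)."""
    peaks, _ = find_peaks(acc_composite)   # indices of local maxima
    return len(peaks)
```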
This measure was used in [11] to quantify human reaching performance in virtual and real environments. They found that corrective movements were minimized for real environments with physical targets and increased for the virtual environment cases. Therefore, an increased number of corrections was equated with decreased task performance.

5.1.4 Efficiency

Efficiency is a measure of how far a subject's trajectory deviated from the shortest, straight-line path to the target. A form of it was first defined in [72] to quantify the ability of subjects to perform a 6 DOF orientation-matching task. In the current work, efficiency is defined as
\[
\text{Efficiency} = \frac{D_{\mathrm{endpoint}}}{D_{\mathrm{path}} - D_{\mathrm{endpoint}}}, \tag{5.1}
\]

where D_endpoint is the Euclidean distance from the location of movement onset to the endpoint position and D_path is the length of the actual reaching motion. Therefore, efficiency is higher if a subject reaches in a straight line from the starting point to the endpoint than if the motion is curved. Efficiency equals infinity if the path taken is exactly a straight line; however, this is not expected to happen for human reaching motions.
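A short sketch of this computation under the definitions above (the trajectory array name is illustrative):

```python
import numpy as np

def efficiency(traj):
    """traj: (N, 3) array of positions for one reaching movement (onset to endpoint)."""
    d_endpoint = np.linalg.norm(traj[-1] - traj[0])                  # onset-to-endpoint distance
    d_path = np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()     # length of the actual path
    return d_endpoint / (d_path - d_endpoint)                        # Eq. (5.1); -> inf for a perfectly straight path
```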
The current analysis assumed that increased efficiency implies increased performance. This is consistent with the use of efficiency in [72], where it served as a benchmark for two 6 DOF input devices: the input device that facilitated lower task completion times was shown to also have increased efficiency compared to an alternative device that facilitated higher task completion times.

5.1.5 Initial Movement Error

Initial movement error was defined as the magnitude of the difference between two normalized vectors: the target vector and the initial movement vector. The target vector points from the location of motion onset to the target location, while the initial movement vector points from the location of motion onset to the location where the first corrective movement occurred (the first local maximum of acceleration).
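A sketch of this measure, assuming the three points are available as 3-vectors:

```python
import numpy as np

def initial_movement_error(onset, target, first_correction):
    """Magnitude of the difference between the normalized target and initial movement vectors (range [0, 2])."""
    t = (target - onset) / np.linalg.norm(target - onset)
    m = (first_correction - onset) / np.linalg.norm(first_correction - onset)
    return np.linalg.norm(t - m)
```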
Increased initial movement error was considered to indicate degraded performance. This was based on findings in [11] that reported a more than 4.5x increase in initial movement error for VE reaching tasks compared to the same physical task.

5.1.6 Peak Velocity

Peak velocity was defined as the highest magnitude of velocity measured during each reaching motion. Higher peak velocities have been found when reaching in physical environments than in virtual environments [7, 11]. Therefore, higher peak velocity in an experiment condition was considered to indicate motor control confidence and higher performance.
It was also reported in [73, 74, 8] that peak velocity was positively correlated with target distance; that is, the farther away a target is, the higher the peak velocity tends to be. Since target difficulty is a function of target distance, the current study used the method described below to account for the possible effect of parameters such as target distance or ID on this or any other performance measure.


5.1.7 Accounting for the Effect of ID on Performance Measures

If target difficulty was observed to have a significant effect on a performance measure, linear regression was performed on the performance measure values as a function of ID for each experiment condition. As a result, each experiment condition has two parameters instead of one: a regression slope and an offset. The offset was interpreted as the performance measure at the minimum tested target difficulty, while the slope was considered an ID-independent measure that was interpreted in the same way as the original measure from which it was derived.
As an illustration, consider the case where target ID was found to have a significant effect on peak velocity, and linear regression of peak velocity as a function of ID yielded an offset of 0.4 mm/s and a slope of 1.0 in the 180° condition. In this case, 0.4 mm/s was considered the minimum observed peak velocity for the 180° condition and the slope of 1.0 mm/(s·bit) was considered the ID-independent measure of peak velocity for that condition. The slope was then analyzed in the same way as the original performance measure, so a higher peak velocity slope was considered an indicator of higher performance.
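A minimal sketch of this per-condition regression (numpy's least-squares polynomial fit stands in for the actual fitting routine, and the reported offset is interpreted as the fitted value at the lowest tested ID, matching the figure titles later in this chapter; variable names are illustrative):

```python
import numpy as np

def id_regression(ids, values):
    """Fit measure = slope*ID + intercept for one subject and condition.
    Returns the slope and the fitted value at the lowest tested ID (the reported 'offset')."""
    slope, intercept = np.polyfit(ids, values, 1)
    offset = np.polyval([slope, intercept], np.min(ids))
    return slope, offset

# e.g., pv_slope, pv_offset = id_regression(target_ids, peak_velocities)
```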

5.2 Methods

The following experiments were reviewed and granted exemption status by the institution's Institutional Review Board.

5.2.1 Equipment

The fish tank display modality (Sec. 2.1.1) was selected for its ability to provide a high-fidelity virtual environment that aligns the visual and haptic workspaces while minimizing user fatigue. The visual and haptic workspaces were co-located by placing a haptic device behind the image plane of the calibrated fish tank setup. In this way, the haptic device's representation in the virtual environment appears to match both the motion and location of the physical device.

Figure 5.1: The experimental setup for the fish tank VE experiments.
A custom fish tank display (Fig. 5.1) was designed to be reconfigurable for the physical task, co-located VE, and non-colocated VE configurations. It supports both a 22-inch CRT monitor (Dell Corp., Round Rock, TX) and a Phantom Omni haptic device (Sensable Technologies Corp., Woburn, MA). The same haptic device and workspace were used for all the experiment conditions.


Figure 5.2: All the physical targets used for the real task.

OpenGL was used to develop a VE user interface that was rendered on a dual-core workstation computer (Dell Corp.) running Windows XP (Microsoft Corp., Redmond, WA). Data were sampled at 1 kHz using the OpenHaptics API (Sensable Technologies Corp.). Stereographic images were rendered using non-symmetric frustums and viewed using CrystalEyes 3 active shutter glasses and transmitter (RealD Corp., Beverly Hills, CA). The physical targets used in the experiments were custom fabricated using hollow half-spheres mounted on telescoping stems (Fig. 5.2).
Stereographic calibration between the virtual and physical workspaces was performed manually, using physical objects seen through a half mirror as references with respect to a fixed forehead rest used by all subjects (Fig. 5.3). The actual experiments were conducted using a full mirror in order to improve image visibility and maintain the occlusion depth cues important in human depth perception (described in Sec. 2.2.1).

Figure 5.3: First-person view of the co-located fish tank setup through a semi-transparent mirror. The semi-transparent mirror was used for calibration only.

The distance between the eyes and the image plane (the screen surface for the non-colocated configuration and the reflected image for the co-located condition) was approximately 20 in (50 cm) for both the co-located and non-colocated conditions (Fig. 5.4). The first-person viewpoint for the non-colocated condition is shown in Fig. 5.6. A custom headrest was used for the non-colocated condition in order to restrict head motion and prevent stereo swim, the effect in which the fused image appears to move due to head motion. For the physical task, the mirror was removed, but the forehead rest was still used in order to maintain a consistent viewpoint across all experiment conditions. The physical targets were placed at various locations within a workspace measuring approximately 26 cm wide, 15 cm deep, and 15 cm high.


Figure 5.4: User positioning for the fish tank display setup used for the 0°–315° conditions (tilted monitor with the user facing downward) and for the non-colocated condition (upright monitor with the subject looking forward and the head stabilized by a custom headrest). The setup for the physical condition required removing the mirror, but the forehead rest was still used.


5.2.2 Subjects

Twenty-two subjects (11 male and 11 female, ages 20–32) were recruited and compensated for their participation in this study. All subjects were right-handed and tested using their dominant hand. The experiments used a repeated-measures design in which each subject performed each of the experiment paradigms once.

5.2.3 Experiment Paradigms

A variation of Fitts' discrete task was used for the experiments [25]. Fitts' discrete task requires a subject to move from a home position to each target. An alternative is Fitts' serial version of the task, which requires the subject to move back and forth between two identical targets. However, the discrete task was chosen in order to reduce variability by having a defined initial position for each trial and to ensure that all targets remained within the confines of the physical and virtual workspaces.
Ten conditions were tested: physical targets in a real environment (real), virtual targets displayed on a non-colocated computer monitor (NC), and eight rotation conditions with virtual targets. Rotations were about the azimuth at 0° (the co-located condition), 45°, 90°, 135°, 180°, 225°, 270°, and 315°. The NC condition viewpoint was not rotated.
For each condition, subjects were asked to sit at the fish tank display station, grip the haptic device stylus like a pen, and perform the following task quickly, but as accurately as possible. Each session tested one experimental condition and consisted of a set of 40 practice trials (40 targets) followed immediately by a set of 40 recorded trials. During each trial, a home position and one target were displayed simultaneously to the subject. Each subject was then instructed to first set the tip of the stylus at the home position, press a button on the stylus when ready to move to the target, and press again when the tip of the stylus was within the target volume. Each button press triggered a chime sound effect.
Table 5.1: Target List

ID (bits)   Distance (cm)   Diameter (mm)
1.9         5.6             20
2.3         8.0             20
2.7         6.7             12
3.2         9.6             12
3.6         9.2             8.0
4.1         13              8.0
4.6         18              8.0
5.1         9.7             3.0
5.6         14              3.0
6.0         19              3.0

Ample rest was provided to subjects between the different paradigms, but no rest was provided between the practice and actual test runs in order to maintain the subject's familiarity with the specific paradigm. The entire experiment took approximately 3 hours per subject. If necessary, some subjects performed the entire set of tests over 2 separate days.
The home position was laterally centered near the edge of the workspace closest to the subject. Each set of 40 targets was randomly constructed from 10 unique targets spanning IDs of 2–6 (Table 5.1), each repeated four times. Two of the repetitions were placed on the opposite lateral side from the other two with respect to the home position in order to minimize the effect of direction bias. Performance measures for all four repetitions were averaged for the analyses.
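The IDs in Table 5.1 are reproduced by the Shannon formulation of the index of difficulty covered in Sec. 2.3; the quick check below assumes that formulation (distances in cm, diameters in mm):

```python
import numpy as np

distance_cm = np.array([5.6, 8.0, 6.7, 9.6, 9.2, 13, 18, 9.7, 14, 19])
diameter_mm = np.array([20, 20, 12, 12, 8.0, 8.0, 8.0, 3.0, 3.0, 3.0])

ids = np.log2(10 * distance_cm / diameter_mm + 1)   # ID = log2(D/W + 1), D and W in mm
print(np.round(ids, 1))                             # [1.9 2.3 2.7 3.2 3.6 4.1 4.6 5.1 5.6 6. ]
```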
Real Task

For the real task (Fig. 5.5), the subject used the haptic device stylus to point at physical targets that the experimenter manually changed. During the real paradigm, subjects were additionally instructed to judge accuracy by vision and not by contacting the stylus tip with the target. One peculiarity of the Phantom Omni haptic device is that the gimbal attached to the stylus can obstruct the view of the stylus tip when a right-handed user points toward a left-sided target (and vice versa). To account for this during the real target paradigm, the hollow faces of the right-sided targets were rotated toward the subject, while the left-sided targets were rotated 45° about the target stem to face just to the right of the user, so that the stylus could be rotated just enough for its tip to be visible to the user.

Figure 5.5: The experimental setup for the physical target experiment condition. The home target is the small stem centered farthest from the haptic device's base, and an example target is the hollow half-sphere resting on a stem.

This alteration to the targets was not needed for the virtual task paradigms, since a virtual pointer representing the stylus tip was displayed that cannot be obstructed by the physical gimbal. Also, separate trial runs were analyzed to ensure that the rotation of the targets did not result in significantly different completion times between targets located on opposite sides.
Non-colocated

Figure 5.6 shows the first-person view of the non-colocated task condition.
Co-located and Rotations

Figure 5.7 shows screen shots from all eight rotation conditions. The 0° rotation case was the co-located condition. The virtual targets were generated as hollow half-spheres in order to match the appearance of the physical targets.
Figure 5.6: First-person view of the experimental setup in the configuration for the non-colocated experiment condition.

Figure 5.7: The co-located and rotated conditions, rotated about the azimuth at 45°, 90°, 135°, 180°, 225°, 270°, and 315°.

Additionally, the virtual targets were made semi-transparent for the rotated conditions so that the cursor would not be obstructed. Separate trial runs were made to ensure that the transparent targets did not result in significantly different completion times or error.
Force feedback was not provided for the virtual targets in order to evoke vision-based motor control from the subjects and to record error rates that are not affected by contact-based strategies, in which subjects might search for haptic contact with the target before deciding to register the end-point click.

5.3 Results

The effect of each task paradigm on the six computed performance measures was studied. Statistical testing for mean differences was performed using repeated-measures analysis of variance (ANOVA) with Greenhouse-Geisser epsilon corrections and Holm-Sidak multiple comparisons (performed in OriginPro 8.5, OriginLab Corp., Northampton, MA). All performance measures were statistically tested in two groups, one consisting of the real, NC, and 0° conditions (referred to as the visualization paradigms) and another with only the fish tank display 0°–315° conditions (referred to as the rotations).
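For readers replicating the analysis outside OriginPro, a roughly equivalent repeated-measures ANOVA can be set up as below (a sketch using statsmodels; the data-frame layout and file name are assumed, and the Greenhouse-Geisser correction and Holm-Sidak post hoc comparisons reported here are not produced by this call):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# long-format table: one row per subject x condition, with columns such as
# 'subject', 'condition' ('real', 'NC', '0', ..., '315'), and 'throughput'
df = pd.read_csv('throughput_long.csv')      # hypothetical file name

res = AnovaRM(df, depvar='throughput', subject='subject', within=['condition']).fit()
print(res.anova_table)                       # uncorrected F and p for the within-subjects factor
```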
Statistical power for the visualization paradigms was computed to be 0.70 (as calculated by G*Power 3.1 [75]; sample size of 22, 3 repeated measurements, α = 0.05, Cohen's f medium effect size of 0.25, 1 group) and 0.99 for the rotations analysis (sample size of 22, 8 repeated measurements, α = 0.05, Cohen's f medium effect size of 0.25, 1 group).

5.3.1 Throughput

Throughput, as described in Sec. 2.3.1, was calculated by fitting a one-parameter linear slope to each subject's movement time data as a function of IDe and taking the inverse of the fitted slope (Fig. 5.8).
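A compact sketch of this computation (a least-squares regression through the origin of movement time on IDe, with throughput taken as the inverse slope; the per-subject data layout is assumed):

```python
import numpy as np

def throughput(ide, movement_time):
    """ide, movement_time: arrays over one subject's targets; returns throughput in bits/s."""
    # one-parameter fit MT = slope * IDe (least squares through the origin)
    slope = np.dot(ide, movement_time) / np.dot(ide, ide)
    return 1.0 / slope
```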

Figure 5.8: For conciseness, several examples of a one-parameter slope fit to the movement time data (movement time in seconds versus effective index of difficulty, IDe; the example fits shown had R² of 0.64, 0.78, and 0.83). Each line and set of points denoted by a marker type represents the movement time data for one subject as a function of IDe.

Table 5.2: Significant Multiple Comparisons – Throughput

0°–45°, 0°–90°, 0°–135°, 0°–180°, 0°–225°, 0°–270°, 0°–315°,
45°–90°, 45°–135°, 45°–180°, 45°–225°, 45°–270°, 45°–315°,
90°–315°, 135°–180°, 135°–270°, 135°–315°, 180°–225°, 180°–315°,
225°–270°, 225°–315°, 270°–315°

A histogram of all linear regression R² values is shown in Fig. 5.9. This regression approach was used because target difficulty was found to have a significant effect on movement time for the paradigms (p < 0.001, F(9,189) = 253.1) and the rotations (p < 0.001, F(9,189) = 169.6). The computed throughput parameters for each subject are reported as boxplots in Fig. 5.10. Statistical tests for significant mean differences were computed using experiment condition as the within-subjects factor and throughput as the dependent variable. Significant multiple comparison results for the rotations are reported in Table 5.2.
The paradigm was found to have a significant effect on throughput (p = 0.0016, F(2,42) = 15.96).

Figure 5.9: Histogram of R² results from all throughput linear regressions as a function of IDe.

The highest mean throughput was observed for the real target case (4.71 b/s), which was found to be significantly greater than both the NC (3.26 b/s) and 0° (3.51 b/s) cases. The NC and 0° mean throughput values were not found to be significantly different.
Rotations were also found to significantly affect throughput (p < 0.001, F(7,147) = 81.66). The 0° condition exhibited significantly higher throughput than all the other rotations. Throughput for the 0°, 45° (2.73 b/s), and 90° (1.55 b/s) conditions was significantly different between each pair of these conditions. In addition, throughput at 45° was significantly different from all the other rotations. There was no significant difference between throughput for the 90° case versus 135°–270°, but it was significantly lower than throughput at 315°.
Throughput decreased from 0° to a local minimum at 135° (1.30 b/s), peaked at a local maximum at 180° (1.79 b/s), and then decreased to another local minimum at 225° (1.28 b/s). From 225° to 315° (3.05 b/s), throughput increased almost to the 0° level. It is noteworthy that throughput at 180° was significantly higher than in both the 135° and 225° conditions, which is where the lowest throughput values occurred.
Figure 5.10: Boxplots of throughput (IDe/movement time) computed for each experimental condition (real targets, non-colocated VE, and fish tank VE with rotations 0°–315°). Higher throughput implies better performance. Mean values (bit/s): real 4.71, NC 3.26, 0° 3.51, 45° 2.73, 90° 1.55, 135° 1.30, 180° 1.79, 225° 1.28, 270° 1.73, 315° 3.05. The bold number below each boxplot is the mean value denoted by the blue circle markers. The red line inside each box is the median, the lower and upper edges of the box mark the 25% and 75% quartiles, respectively, and the lower and upper horizontal bars represent 1.5x below the 25% quartile and 1.5x above the 75% quartile, respectively. The red cross markers represent data outside the 1.5x quartile ranges.

Mean throughput for the 135° and 225° conditions was not significantly different from each other.

5.3.2 End-Point Error

End-point error was calculated for every subject and plotted in Fig. 5.11 without distinguishing targets by difficulty. Statistical tests for significant mean differences were computed using experiment condition as the within-subjects factor and end-point error as the dependent variable. Significant multiple comparison results for the rotations are reported in Table 5.3.
Paradigm had a significant effect on end-point error (p < 0.001, F(2,42) = 54.9). End-point error was significantly higher for the NC condition than for the 0° case. From Fig. 5.11, it would appear that error was highest for the real targets, but this is in fact not true and was due to a calibration limitation of the Phantom Omni haptic device. The Phantom Omni calibration is hard-coded into the device based on a well that serves as both a holder for the stylus and a calibration point every time the pen is inserted into it. However, the joints of the haptic device can shift slightly even when the stylus tip is within the well, which causes slight calibration errors between the actual and estimated joint angles. Therefore, the end-point error for the real target condition is not reliable and was not included in the analysis.
Rotations were also found to significantly impact end-point error (corrected p = 0.025, F(4,147) = 2.905). Mean end-point error was observed to increase from 0° (7.24 mm) to a local maximum at 135° (11.11 mm). After 135°, end-point error significantly decreased to 8.18 mm at 180° before rising to another significantly higher local maximum of 11.73 mm at 225°. From 225° to 315°, mean end-point error decreased down to 7.25 mm, which was not significantly different from the 0° condition. The highest mean end-point error (not including the real target case) occurred in the 225° condition and the lowest in the 0° condition.
Table 5.3: Significant Multiple Comparisons – End-Point Error

0°–90°, 0°–135°, 0°–225°, 45°–90°, 45°–135°, 45°–225°, 45°–270°,
90°–180°, 90°–315°, 135°–180°, 135°–315°, 180°–225°, 180°–270°,
225°–270°, 270°–315°

5.3.3 Number of Corrective Movements

Figure 5.14 reports the linear regression results for each subject's number of corrective movements as a function of target difficulty. This was done because the measure appeared to depend on target ID, which was confirmed by a significant effect of the ID factor on the number of corrective movements (corrected p = 0.01, F(5.1,378) = 3.07). The linear regression parameters for each subject are reported as boxplots in Fig. 5.14, examples of the linear regressions are shown in Fig. 5.12, and a histogram of all linear regression R² values is shown in Fig. 5.13. Statistical tests for significant mean differences were computed using experiment condition as the within-subjects factor and the regression parameters as the dependent variable.
Paradigm did not have a significant effect on the linear regression offset, which represents the number of corrective movements for the target with the lowest ID (p = 0.0626, F(2,42) = 2.96). However, a significant effect of paradigm was found on the linear regression slope (p < 0.001, F(2,42) = 22.83). Multiple comparisons revealed that the real target condition (0.92) had a significantly lower mean slope than both the NC (1.78) and 0° (1.94) cases. The NC and 0° conditions did not have significantly different mean slopes.
Rotation was found to have a significant effect on the linear regression offset (p < 0.001, F(7,147) = 12.95). The significant means comparisons are reported in Table 5.4. The lowest mean offset occurred for the 0° condition (3.47 corrections). Local maximum offsets occurred at 90° (8.82 corrections) and 225° (8.36 corrections), and a local minimum offset of 6.58 corrections occurred at 180°.
Figure 5.11: Boxplots of the composite end-point error (distance from the end location to the target) for each experimental condition (real targets, non-colocated VE, and fish tank VE with rotations 0°–315°). Lower error implies better performance. Mean values (mm): real* 12.67, NC 10.67, 0° 7.24, 45° 8.28, 90° 10.32, 135° 11.11, 180° 8.18, 225° 11.73, 270° 10.20, 315° 7.25. The bold number below each boxplot is the mean value denoted by the blue circle markers. The red line inside each box is the median, the lower and upper edges of the box mark the 25% and 75% quartiles, respectively, and the lower and upper horizontal bars represent 1.5x below the 25% quartile and 1.5x above the 75% quartile, respectively. The red cross markers represent data outside the 1.5x quartile ranges. The real condition (*) was excluded from analysis due to calibration issues.

Figure 5.12: Several examples of a linear regression of the number of corrective movements with respect to target ID (least-mean-squares fits; the example fits shown had R² of 0.91, 0.92, and 0.94). Each line and set of points denoted by a marker type represents the linear regression and the number of corrections, respectively, for one subject.

Figure 5.13: Histogram of R² results from all corrective-movements linear regressions as a function of ID.

Table 5.4: Significant Means Comparisons – Corrective Movement Offsets

0°–90°, 0°–135°, 0°–180°, 0°–225°, 0°–270°, 45°–90°, 45°–135°, 45°–225°,
45°–270°, 90°–315°, 135°–315°, 225°–315°, 270°–315°

Table 5.5: Significant Means Comparisons – Corrective Movement Slopes

0°–90°, 0°–135°, 0°–180°, 0°–225°, 0°–270°, 45°–90°, 45°–135°, 45°–180°,
45°–225°, 45°–270°, 90°–315°, 135°–180°, 135°–315°, 180°–225°, 180°–315°,
225°–270°, 270°–315°

Rotation also had a significant effect on the linear regression slope (p < 0.001, F(7,147) = 22.51). The significant means comparisons for the slopes are in Table 5.5. Similar to the offsets, the 0° condition exhibited the minimum slope (1.94). Unlike the offsets, local maximum slopes occurred at 135° (6.29) and 225° (6.49). A local minimum also occurred for the 180° condition (4.36), which was significantly different from only the 0° and 315° conditions.

5.3.4 Efficiency

Efficiency measures, shown as boxplots in Fig. 5.15, were calculated for each subject without distinguishing targets by difficulty. Statistical tests for significant mean differences were computed using experiment condition as the within-subjects factor and efficiency as the dependent variable. Significant multiple comparison results for the rotations are reported in Table 5.6.
Paradigm significantly impacted efficiency (corrected p = 0.0065, F(1.7,42) = 6.24). Efficiency was highest for the real target condition, which was significantly greater than both the NC and 0° cases. A significant difference was not detected between the NC and 0° conditions.
Figure 5.14: Boxplots of the linear regression parameters fitted to each subject's number of corrective movements as a function of target difficulty. The upper plot shows the constant terms (fits at the lowest ID) for each experiment condition (means: real 4.10, NC 4.06, 0° 3.47, 45° 4.89, 90° 8.82, 135° 8.49, 180° 6.58, 225° 8.36, 270° 8.11, 315° 4.34 corrections), while the lower plot shows the slopes versus ID for each experimental condition (means: real 0.92, NC 1.78, 0° 1.94, 45° 2.33, 90° 5.31, 135° 6.29, 180° 4.36, 225° 6.49, 270° 4.47, 315° 2.35). An increased number of corrections implies higher task difficulty. The bold number below each boxplot is the mean value denoted by the blue circle markers. The red line inside each box is the median, the lower and upper edges of the box mark the 25% and 75% quartiles, respectively, and the lower and upper horizontal bars represent 1.5x below the 25% quartile and 1.5x above the 75% quartile, respectively. The red cross markers represent data outside the 1.5x quartile ranges.

Table 5.6: Significant Means Comparisons – Efficiency

0°–45°, 0°–90°, 0°–135°, 0°–180°, 0°–225°, 0°–270°, 45°–90°, 45°–135°,
45°–180°, 45°–225°, 45°–270°, 90°–180°, 90°–315°, 135°–180°, 135°–315°,
180°–225°, 180°–270°, 180°–315°, 225°–270°, 225°–315°, 270°–315°

Rotations were also found to exert a significant effect on efficiency (corrected p < 0.001, F(3.34,147) = 21.11). Efficiency was highest for the 0° condition (8.80) compared to any other rotation. Like the other performance measures, efficiency decreased from 0° to a local minimum at 135° (2.71) before peaking at the local maximum observed at the 180° condition (3.91). Efficiency again decreased at 225° (2.84) before increasing from 270° to 315° back to 6.75, which was not significantly different from the 45° level of 6.11.

5.3.5 Peak Velocity

Figure 5.18 reports the linear regression results for each subject's peak velocity as a function of target difficulty. This was done because the measure appeared to depend on target ID, which was confirmed by a significant effect of the ID factor on peak velocity for the paradigms (p < 0.001, F(9,189) = 320.4) and for the rotations (p < 0.001, F(9,189) = 166.3). Linear regressions were also performed as a function of target distance, but this produced lower correlations. The linear regression parameters for each subject are reported as boxplots in Fig. 5.18 and examples of the linear regressions are shown in Fig. 5.16. Also, a histogram of all linear regression R² values is shown in Fig. 5.17. Statistical tests for significant mean differences were computed using experiment condition as the within-subjects factor and the regression parameters as the dependent variable.
Paradigm exhibited a significant effect on both the offsets (p = 0.0013, F(2,42) = 14.48) and the slopes (p < 0.001, F(2,42) = 29.04). In both analyses, the real target condition (0.29 m/s and slope 0.06) exhibited significantly higher mean peak velocity than the NC (0.23 m/s and slope 0.04) and 0° (0.23 m/s and slope 0.05) cases. In both the offset and slope analyses, there was no significant difference between the NC and 0° conditions.
Figure 5.15: Boxplots of the composite efficiency for each experimental condition (real targets, non-colocated VE, and fish tank VE with rotations 0°–315°). Efficiency is defined in (5.1), with higher values meaning less deviation from the shortest path to the endpoint; higher efficiency implies better performance. Mean values: real 10.66, NC 8.25, 0° 8.80, 45° 6.11, 90° 3.03, 135° 2.71, 180° 3.91, 225° 2.84, 270° 3.51, 315° 6.75. The bold number below each boxplot is the mean value denoted by the blue circle markers. The red line inside each box is the median, the lower and upper edges of the box mark the 25% and 75% quartiles, respectively, and the lower and upper horizontal bars represent 1.5x below the 25% quartile and 1.5x above the 75% quartile, respectively. The red cross markers represent data outside the 1.5x quartile ranges.

Figure 5.16: Several examples of a linear regression of peak velocity with respect to target ID (least-mean-squares fits; the example fits shown had R² of 0.58, 0.60, and 0.76). Each line and set of points denoted by a marker type represents the linear regression and the peak velocities, respectively, for one subject.

Figure 5.17: Histogram of R² results from all peak velocity linear regressions as a function of ID.

Table 5.7: Significant Means Comparisons – Peak Velocity Offset

0°–90°, 0°–135°, 0°–180°, 0°–225°, 0°–270°, 45°–90°, 45°–135°, 45°–180°,
45°–225°, 90°–315°, 135°–315°, 180°–270°, 180°–315°, 225°–270°, 225°–315°

Table 5.8: Significant Means Comparisons – Peak Velocity Slope

0°–45°, 0°–90°, 0°–135°, 0°–180°, 0°–225°, 45°–90°, 45°–135°, 90°–315°,
135°–315°, 225°–270°, 225°–315°, 270°–315°

Rotations exhibited a significant effect on the offsets (p < 0.001, F(7,147) = 14.63). A maximum mean offset of 0.23 m/s occurred in the 0° case. Mean offsets decreased from 0° to a minimum of 0.15 m/s at the 180° condition and then increased up to 0.21 m/s at 315°. Unlike the other measures, there was no local maximum at 180°.
Rotations also had a significant effect on the slopes (p < 0.001, F(7,147) = 11.18). The maximum slope occurred for the 0° case (0.047) and decreased to a local minimum of 0.028 at the 90° and 135° conditions. A local maximum mean slope occurred at 180° (0.033), but it was not significantly different from the mean slopes at 135° and 225°. Another local minimum occurred at 225° (0.024), after which the mean slope increased to significantly higher values at 270° (0.030) and 315° (0.040).

5.3.6 Initial Movement Error

Initial movement error measures, shown as boxplots in Fig. 5.19, were calculated for each subject without distinguishing targets by difficulty; this was done because there was no apparent trend in the individual initial movement errors with respect to target difficulty.

Figure 5.18: Boxplots of the linear regression parameters fitted to each subject's peak velocity as a function of target difficulty. The upper plot shows the constant terms (fits at the lowest ID) for each experiment condition (means, m/s: real 0.29, NC 0.23, 0° 0.23, 45° 0.21, 90° 0.18, 135° 0.16, 180° 0.15, 225° 0.17, 270° 0.20, 315° 0.21), while the lower plot shows the slopes versus ID for each experimental condition (means: real 0.064, NC 0.043, 0° 0.047, 45° 0.037, 90° 0.028, 135° 0.028, 180° 0.033, 225° 0.024, 270° 0.030, 315° 0.040). Increased peak velocity implies higher task performance. The bold number below each boxplot is the mean value denoted by the blue circle markers. The red line inside each box is the median, the lower and upper edges of the box mark the 25% and 75% quartiles, respectively, and the lower and upper horizontal bars represent 1.5x below the 25% quartile and 1.5x above the 75% quartile, respectively. The red cross markers represent data outside the 1.5x quartile ranges.

The range of initial movement errors was [0, 2] because the target and initial movement vectors were normalized prior to taking their difference. Statistical tests for significant mean differences were computed using experiment condition as the within-subjects factor and initial movement error as the dependent variable. Significant multiple comparison results for the rotations are reported in Table 5.9.
Paradigm did not significantly affect mean initial movement error (corrected p = 0.07, F(2,42) = 4.19). Again, as in the end-point error measurements, the real target condition was not included in the analysis due to possible calibration errors.
Rotations, however, did significantly impact initial movement error (p < 0.001, F(7,147) = 20.38). The lowest mean initial movement error occurred for the 0° condition (0.86), which was significantly different from all other rotations except 45° and 315°, and error increased up to a maximum mean of 1.18 at 135°. However, the initial movement error at 135° was only significantly different from the 0° and 315° conditions. Similar to the other measures, a local minimum occurred at 180° (1.09), but it was not significantly different from 135° or 225° as it was for the other performance measures. Also, initial movement error decreased from 1.15 at 225° down to 0.91 at 315°, which was not significantly different from the initial movement error at 0°.
Table 5.9: Significant Means Comparisons – Initial Movement Error

0°–90°, 0°–135°, 0°–180°, 0°–225°, 0°–270°, 45°–90°, 45°–180°, 45°–225°,
45°–270°, 90°–315°, 135°–315°, 180°–315°, 225°–315°, 270°–315°

5.4 Discussion

Several consistent trends were observed across all six of the performance measures. First, mean performance significantly differed between the physical condition and both the non-colocated and co-located (0°) conditions (the only exception being the linear regression offset for the number of corrective movements).

Figure 5.19: Boxplots of initial movement error for each experimental condition (real targets, non-colocated VE, and fish tank VE with rotations 0°–315°). Initial movement error is the magnitude of the difference between the normalized target and initial movement vectors; lower values imply better performance. Mean values: real* 1.01, NC 0.86, 0° 0.86, 45° 0.92, 90° 1.13, 135° 1.18, 180° 1.09, 225° 1.15, 270° 1.13, 315° 0.91. The bold number below each boxplot is the mean value denoted by the blue circle markers. The red line inside each box is the median, the lower and upper edges of the box mark the 25% and 75% quartiles, respectively, and the lower and upper horizontal bars represent 1.5x below the 25% quartile and 1.5x above the 75% quartile, respectively. The red cross markers represent data outside the 1.5x quartile ranges. The real condition (*) was excluded from analysis due to calibration issues.

Second, the 135° and 225° visual rotation conditions exhibited performance measures with similar local maxima (for end-point error, corrective movements, and initial movement error) and local minima (for throughput, efficiency, and peak velocity). Third, visual rotations of 180° produced performance measures that were consistently at a local maximum (for end-point error, corrective movements, and initial movement error) and a local minimum (for throughput, efficiency, and peak velocity).


Table 5.10: Performance Means and (Std. Dev.) Normalized to Real and 0°

Condition   TP          End Err.    Eff.        Init Err.   CM offset   CM slope    PV offset   PV slope
Real        1.0(1.0)    N/A         1.0(1.0)    N/A         1.0(1.0)    1.0(1.0)    1.0(1.0)    1.0(1.0)
NC          0.69(0.68)  1.0(1.0)    0.77(0.80)  1.0(1.0)    0.99(1.12)  1.94(1.08)  0.81(0.75)  0.67(0.67)
0°          0.75(0.52)  0.68(0.36)  0.83(0.79)  1.00(0.99)  0.85(1.21)  2.11(1.48)  0.82(0.75)  0.73(0.72)

0°          1.0(1.0)    1.0(1.0)    1.0(1.0)    1.0(1.0)    1.0(1.0)    1.0(1.0)    1.0(1.0)    1.0(1.0)
45°         0.78(0.83)  1.14(1.73)  0.70(0.84)  1.07(1.00)  1.28(1.48)  1.25(1.18)  0.78(1.05)  0.88(1.17)
90°         0.44(0.99)  1.42(2.61)  0.34(0.61)  1.32(0.94)  2.12(3.90)  2.72(5.20)  0.58(1.15)  0.74(1.31)
135°        0.37(0.60)  1.53(2.87)  0.31(0.44)  1.37(0.92)  2.09(2.98)  3.23(5.48)  0.58(1.02)  0.66(1.14)
180°        0.51(0.54)  1.13(1.00)  0.44(0.62)  1.26(0.98)  1.67(1.49)  2.29(2.46)  0.69(1.53)  0.65(1.06)
225°        0.37(0.65)  1.62(2.76)  0.32(0.51)  1.34(0.94)  2.07(2.48)  3.28(5.71)  0.52(1.38)  0.69(1.49)
270°        0.49(0.85)  1.41(2.57)  0.40(0.72)  1.32(0.96)  1.98(3.71)  2.27(3.53)  0.66(1.37)  0.82(1.49)
315°        0.87(0.78)  1.00(0.95)  0.77(0.83)  1.06(1.02)  1.15(1.33)  1.20(1.07)  0.87(1.00)  0.90(0.91)

(The upper rows are normalized to the real condition; the lower rows are normalized to the 0° condition.)

5.4.1 Real vs. Non-colocated VE vs. Co-located VE

The results indicated that subjects were able to accomplish the physical reaching task at a faster rate and with smoother, more direct motion than in the non-colocated and co-located conditions. Mean throughput for the real task was 1.4x higher than for the NC condition and 1.3x higher than for the co-located condition (Table 5.10). Similarly, efficiency was approximately 1.3x higher than for the NC condition and 1.2x higher than for the 0° condition. The mean peak velocity linear regression offset was 1.3x higher than for both the NC and 0° conditions, and the mean peak velocity regression slope was 1.5x higher than for the NC condition and 1.4x higher than for the 0° condition. Finally, the linear regression slope for corrective movements was 2x lower for the physical task than for both the NC and 0° conditions. Although the mean linear regression offset for corrective movements was slightly lower than for the NC and 0° conditions, the difference did not reach statistical significance.
These results agree with the literature in finding that mean performance measures for the physical task were significantly different (by 1.2–2x) from the virtual tasks (both co-located and non-colocated). This is consistent with the 1.5x decrease in completion time for 2D physical reaching tasks versus virtual tasks reported by [7, 8] and the 2x decrease reported by [9].

The results also showed that the only performance measure significantly different between the co-located and non-colocated conditions was end-point error, which was 1.5x lower for the co-located condition than for the non-colocated condition. With the exception of end-point error magnitude, no other performance measure showed significant mean differences between the NC and 0° conditions. However, the mean values for throughput, efficiency, and peak velocity regression slope were higher, and the corrective movements regression offset was lower, for the co-located condition than for the non-colocated condition. Mean initial movement error and the corrective movement regression slope were equal for the co-located and non-colocated conditions.
The significant effect of co-location on end-point error differs from the findings of [12, 9]. The lack of significant differences for task completion rate and peak velocity was in line with [12] and [9], but appears to contrast with the findings of [70]. However, the differences in completion time and error reported in [70] may be influenced by viewpoint scaling and stereographic effects, since there were significant differences between the test conditions: the non-colocated condition used a 19-inch monoscopic display located 2.4 m away from the haptic device, compared to a co-located condition using a stereoscopic fish tank display with the haptic device located directly behind the mirrored display.
In contrast, the difference in end-point error detected in the current study likely cannot be attributed to viewpoint scaling, because the distance between the eyes and the image plane was kept approximately constant, as were the stereoscopic display parameters and the haptic device location.
It is possible that changes in visuo-motor processes are responsible for the difference in end-effector error. Humans spend years unconsciously tuning their motor control strategies to the ideal condition in which the visual field and haptic workspace are aligned. So, it is likely that reaching tasks in the NC condition require some sort of cognitive re-mapping.

The NC condition may require a re-mapping of the perceptual processes responsible for converting visual differences between the hand and the target into the muscle forces necessary to make a movement that closes the gap [15]. This re-mapping may be minimal for the co-located condition, assuming that the main new mapping is the conversion of hand movements into virtual cursor movements. However, re-mapping for the NC condition may require an unusual visual field-to-haptic workspace transform that adds to cognitive load. Assuming this is true, and noting that task completion rates did not suffer, it is not implausible that accuracy does. This theory was echoed in [13], which suggests that VE interfaces should take advantage of highly accurate body-relative proprioception by keeping manipulated virtual objects within arm's reach. They found that, when objects were outside of the arm's workspace, completion times significantly increased for a virtual object docking task in which error was not possible.
Though verification of the above theory is beyond the scope of the current work, evidence from the psychophysics literature suggests that error rates caused by artificially imposed visual and haptic misalignments can be overcome with practice. It has been demonstrated through optical prism experiments that subjects who practice for several days can overcome initially significant errors and throw balls at targets with the same accuracy regardless of whether or not they are wearing prisms [15, 76].

5.4.2 Effect of Visual Rotations

The effect of visual rotations on task performance yielded performance measure trends that were in line with the literature, exhibiting the same quasi-periodic pattern, symmetric about 180°. All performance measures except throughput showed no statistically significant differences between the means for the 135° and 225°, 90° and 270°, and 45° and 315° conditions. This symmetry was also observed in the results for various tasks reported by [18, 4, 19]. It is not apparent why, unlike the other measures, throughput was not symmetric about 180°.
The current results also showed that task performance was highest for the 0° condition. Performance then degraded from 45° to 135°, where performance was usually worst. Statistical tests showed that throughput, efficiency, peak velocity slope, and initial movement error reflect significant performance decreases between 0° and 45°. The other measures (peak velocity offset, end-point error, and corrective movements, both offset and slope) showed significant performance decreases starting at 90°. Although performance appeared to be lowest at 135° based on the mean measures, there was no statistically significant difference between any measured means for 135° and 90°. Next, performance tended to improve slightly from 135° to 180°, before degrading again from 180° to what appeared to be another point of lowest performance at 225°. In fact, a statistically significant peak at 180° was detected for throughput, end-point error, and efficiency, and statistical analysis confirmed that performance degraded to a minimum at 225° for throughput, end-point error, corrective movement offsets, and efficiency. Finally, from 225° to 315°, performance improved back to levels similar to the performance at 45°. The only mean measure that differed significantly between 45° and 315° was throughput. This finding was in line with [24], which reported a significant effect on the completion times of an orientation-matching task beyond visual rotations of 45°.
The major difference between the current findings and the literature was that all six measures indicated that the lowest task performance occurred for the 90°, 135°, and 225° conditions, compared to 90° and 270° in previous findings. However, this is in line with the psychophysics literature, which reported that the poorest manual performance occurs for visual rotations in the range of 90°–135° or 225°–270° for physical tasks and camera rotations [15]. It is important to note, though, that since task throughput was not symmetric about 180°, the lowest throughput occurred at 225°.


5.4.3 VE System Design Implications

Several system-design recommendations can be drawn from the results of this study. First, if end-point error is of concern, a co-located VE configuration is recommended over a non-colocated modality. Second, for rotated perspectives, if task throughput, movement efficiency, peak velocity, and initial movement error are of concern, then the haptic and visual perspectives should be aligned, as visual rotations of 45° in either direction significantly impacted these measures. On the other hand, if only end-point error and movement smoothness are of concern, then visual rotations of up to 90° might be acceptable before significant effects on performance occur.


Chapter 6
Conclusions
This dissertation centered on the dynamics and performance of the human operator in haptic interface systems. The emphasis was on understanding the effects of visuo-haptic co-location and rotation on task performance and on modeling human arm dynamics.

6.1 Arm-and-hand Dynamics Modeling

The developed models of the arm and hand dynamics were based on a five-parameter linear MSD model. These models are relevant in the context of stylus-based haptic devices operated by the human arm in a configuration similar to that depicted in Fig. 3.1, for grip forces of 1–3 N. Empirical data from 15 individuals were used to identify both grip-force-dependent and nominal arm models. The parameters and frequency responses were consistent with the literature. All models were force-input, position-output transfer functions that were accurate to the measured data in the frequency range of 0.6–30 Hz. In addition, the current work presented inter- and intra-subject model variability data in the form of both structured and unstructured variability. The structured variability comprised statistics computed from 135 individually identified arm dynamics models. The unstructured variability comprised empirically derived transfer functions that accurately modeled the unstructured multiplicative uncertainty.

These results provide experimentally derived uncertainty bounds that are useful for designing precise controllers targeted to a subset of possible human operator dynamics.
In addition, an alternative system identification method that does not require force sensors was proposed. The results of this study showed that models for two of the three axes of motion were comparable in behavior to models derived using a force sensor. However, the findings indicated that the use of force sensors is still ideal for high-frequency system identification of human arm dynamics.

6.2 Reaching in Virtual Environments

The second major focus of this dissertation was to better understand the effects of visual field and haptic workspace co-location on a 3D point-to-point reaching task. The study methods were designed around a general and established test of manual performance, Fitts' task, in order to promote future inter-study comparisons and repeatability.
A key finding of this study was that a co-located fish tank display facilitates significantly reduced end-point error for the 3D Fitts' task relative to the non-colocated task condition. This result is important because it confirms, with good statistical power, the anecdotal evidence that co-location improves performance, which previous work studying co-location effects using Fitts' task did not confirm.
In addition, six performance measures gathered from the literature were used to analyze the effects of rotational visual dislocation on Fitts' task performance. The results showed that, even with a wide variety of task difficulties, all six task performance measures appeared to be symmetric about the 180° condition. They also all followed the same trend, indicating the best performance at 0°, the poorest performance between 90° and 135°, and then slightly less impaired performance at 180°.

This finding confirmed the trend seen in the literature for tasks of consistent difficulty. One deviation from the literature, though, was that the poorest performance occurred at the 225° condition for all measures in the current work, instead of at 270° as seen in previous studies.

6.3 Future Research Problems

In the more immediate term, one research problem to address is how physiologically appropriate the identified grip-force-dependent arm models are. Since not all parameters vary in the same direction with respect to grip force, it is possible that the identified parameters represent local-minimum solutions. Since the current work did not investigate the possibility of local minima, future investigations may try the following techniques. One method may be to hold several parameters constant over all grip forces for a particular axis, while allowing only one set of stiffness and damping parameters to vary with grip force. Another may be to sample several arm model parameters from the literature and attempt to perform the optimization using these parameters as initial conditions, given relatively small bounding conditions. Finally, it may ultimately be necessary to investigate the bio-mechanical properties of the arm using techniques such as electromyography (to measure actual muscle contraction intensities) or detailed kinematic analysis of the arm in order to determine more precise initial conditions for the optimization and thereby avoid local minima.
As multi-modal human-computer interfaces steadily advance and gain popularity in the mainstream, it is likely that their growth will continue to be tied to the level of understanding we have of the human operator. Currently, the majority of haptic interface control systems have been conservatively designed because the variability bounds of human operators were not well understood. The presented findings and methods will facilitate future efforts to produce higher-performance control methods that take into account the relevant ranges of human performance. Of course, continued investigation of human performance bounds in tasks and configurations relevant to the systems being designed is also necessary. On the far horizon is the design of human-computer interfaces that can autonomously gauge the performance bounds of the human operator and intelligently adjust the relevant interaction parameters to provide an optimal user experience. Such a capability may not be necessary for the general population, which can adapt to small inefficiencies in a human-computer interface, but it might be crucial for populations of cognitively or physically impaired individuals. This is a lofty goal, but the presented work serves as a foundation upon which to build such efforts.


Appendix A
Arm Model Derivation
Equation (3.1) was derived from the MSD model in Fig. 3.3 as follows in Laplace notation (leaving out the dependency of F_sensor(s), X_arm(s) = X_1(s), and X_2(s) on the Laplace variable s for legibility). First, the differential equation for the mass M is transformed into the Laplace domain and X_2(s) is found as
\[
M\ddot{x}_2 = k_1(x_1 - x_2) + b_1(\dot{x}_1 - \dot{x}_2) - k_2 x_2 - b_2 \dot{x}_2
\]
\[
\xrightarrow{\;\mathcal{L}\;}\quad M s^2 X_2 = k_1(X_1 - X_2) + b_1 s (X_1 - X_2) - X_2(k_2 + b_2 s)
\]
\[
X_2 = \frac{X_1 (k_1 + b_1 s)}{M s^2 + (b_1 + b_2)s + k_1 + k_2}. \tag{A.1}
\]
Then, the measured force F_sensor(s) is solved for as
\[
0 = F_{sensor} - k_1(x_1 - x_2) - b_1(\dot{x}_1 - \dot{x}_2)
\]
\[
\xrightarrow{\;\mathcal{L}\;}\quad 0 = F_{sensor} - k_1(X_1 - X_2) - b_1 s (X_1 - X_2)
\]
\[
F_{sensor} = X_1(k_1 + b_1 s) - (k_1 + b_1 s) X_2. \tag{A.2}
\]
Finally, (A.1) was substituted into (A.2) to find the transfer function H_arm(s) in (3.1), with x_1 = x_arm.
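For completeness, the substitution can be checked symbolically; the sketch below (sympy, not part of the original derivation) solves (A.1) and (A.2) for the force-input, position-output transfer function X_1 / F_sensor:

```python
import sympy as sp

s, M, k1, k2, b1, b2 = sp.symbols('s M k_1 k_2 b_1 b_2', positive=True)
X1, X2 = sp.symbols('X_1 X_2')

# (A.1): solve the Laplace-domain equation of motion for X2
X2_sol = sp.solve(sp.Eq(M*s**2*X2, k1*(X1 - X2) + b1*s*(X1 - X2) - X2*(k2 + b2*s)), X2)[0]

# (A.2): measured force in terms of X1 and X2
F_expr = X1*(k1 + b1*s) - (k1 + b1*s)*X2_sol

# force-input, position-output transfer function H_arm(s) = X1 / F_sensor
H_arm = sp.simplify(X1 / F_expr)
print(H_arm)
```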


Appendix B
End-effector Inertia for the Phantom Premium 1.5a

One method of finding the end-effector inertia is to approximate it from the Cartesian-space transfer function of the Phantom 1.5a. However, it is perhaps more accurate to use the kinematics detailed in [69]. Also, there appears to be a scaling error (of around a factor of 10³) in the frequency response functions in [69], so using the kinematics is preferable.
The home position of the Phantom Premium haptic interface for the experiments in Ch. 4 was (x, y, z) = (0.01, 0.24, 0.02) m in Cartesian coordinates. From the inverse kinematics in [69], the Phantom's joint angles (θ1–θ3, numbered in order from the base of the haptic device upward) for this home position were calculated to be

\[
\theta_1 = -\tan^{-1}\!\left(\frac{x}{z + l_1}\right) \tag{B.1}
\]
\[
d = \sqrt{x^2 + (z + l_1)^2} \tag{B.2}
\]
\[
r = \sqrt{x^2 + (y - l_2)^2 + (z + l_1)^2} \tag{B.3}
\]
\[
\theta_2 = \cos^{-1}\!\left(\frac{l_1^2 + r^2 - l_2^2}{2 l_1 r}\right) + \tan^{-1}\!\left(\frac{y - l_2}{d}\right) \tag{B.4}
\]
\[
\theta_3 = \theta_2 + \cos^{-1}\!\left(\frac{l_1^2 + l_2^2 - r^2}{2 l_1 l_2}\right) - \frac{\pi}{2} \tag{B.5}
\]
where l1 = 0.21 m and l2 = 0.2095 m.
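As a sanity check on the expressions above (a short numerical sketch, not part of the original appendix; it assumes the reconstructed forms of (B.1)–(B.5) and the forward map in (B.12)):

```python
import numpy as np

l1, l2 = 0.21, 0.2095
x, y, z = 0.01, 0.24, 0.02                     # home position used in Ch. 4

d = np.sqrt(x**2 + (z + l1)**2)
r = np.sqrt(x**2 + (y - l2)**2 + (z + l1)**2)
th1 = -np.arctan(x / (z + l1))
th2 = np.arccos((l1**2 + r**2 - l2**2) / (2*l1*r)) + np.arctan((y - l2) / d)
th3 = th2 + np.arccos((l1**2 + l2**2 - r**2) / (2*l1*l2)) - np.pi/2

# forward kinematics (B.12) should reproduce the Cartesian home position
R = l1*np.cos(th2) + l2*np.sin(th3)
F = np.array([-np.sin(th1)*R,
              l2 - l2*np.cos(th3) + l1*np.sin(th2),
              -l1 + np.cos(th1)*R])
print(np.round(F, 3))                          # -> [0.01 0.24 0.02]
```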


The Phantom's dynamic inertia matrix is then
\[
M = \begin{bmatrix} M_{1,1} & 0 & 0 \\ 0 & M_{2,2} & M_{2,3} \\ 0 & M_{3,2} & M_{3,3} \end{bmatrix} \tag{B.7}
\]

where
\[
\begin{aligned}
M_{1,1} = {} & \tfrac{1}{8}\bigl(4I_{a_{yy}} + 4I_{a_{zz}} + 8I_{base_{yy}} + 4I_{be_{yy}} + 4I_{be_{zz}} + 4I_{c_{yy}} + 4I_{c_{zz}} + 4I_{df_{yy}} + 4I_{df_{zz}} \\
& \quad + 4l_1^2 m_a + l_1^2 m_c + l_2^2 m_a + 4l_3^2 m_c\bigr) \\
& + \tfrac{1}{8}\cos(2\theta_3)\bigl(4I_{a_{yy}} - 4I_{a_{zz}} + 4I_{df_{yy}} - 4I_{df_{zz}} - l_2^2 m_a - 4l_3^2 m_c\bigr) \\
& + \tfrac{1}{8}\cos(2\theta_2)\bigl(4I_{be_{yy}} - 4I_{be_{zz}} + 4I_{c_{yy}} - 4I_{c_{zz}} + l_1^2(4m_a + m_c)\bigr) \\
& + l_1 \cos(\theta_2)\sin(\theta_3)\,(l_2 m_a + l_3 m_c),
\end{aligned} \tag{B.8}
\]
\[
M_{2,2} = \tfrac{1}{4}\bigl(4(I_{be_{xx}} + I_{c_{xx}} + l_1^2 m_a) + l_1^2 m_c\bigr), \tag{B.9}
\]
\[
M_{2,3} = M_{3,2} = \tfrac{1}{2}\, l_1 (l_2 m_a + l_3 m_c)\sin(\theta_2 - \theta_3), \tag{B.10}
\]
\[
M_{3,3} = \tfrac{1}{4}\bigl(4I_{a_{xx}} + 4I_{df_{xx}} + l_2^2 m_a + 4l_3^2 m_c\bigr), \tag{B.11}
\]

with l_3 = 0.0325 m, m_a = 0.0202 kg, m_c = 0.0249 kg, I_{a,yy} = 0.0018 × 10^-4, I_{a,zz} = 0.4864 × 10^-4, I_{base,yy} = 11.87 × 10^-4, I_{be,yy} = 10.06 × 10^-4, I_{be,zz} = 0.591 × 10^-4, I_{c,yy} = 0.959 × 10^-4, I_{c,zz} = 0.0051 × 10^-4, I_{df,yy} = 0.629 × 10^-4, I_{df,zz} = 6.246 × 10^-4, I_{be,xx} = 11.09 × 10^-4, I_{c,xx} = 0.959 × 10^-4, I_{a,xx} = 0.4864 × 10^-4, and I_{df,xx} = 7.11 × 10^-4. These values are identical to those in [69], except for l_1, l_2, and l_3, which were modified to account for minor differences and to include the gimbal length.
Therefore, the inertia in end-effector coordinates is defined as (J^{-1})^T M J^{-1}, where J is the Jacobian matrix of the translation-only component of the forward kinematics mapping (only the position of the end effector is of concern, since a gimbal was installed between the end effector and the stylus handle). With

    F = \begin{bmatrix} -\sin\theta_1 \, (l_1 \cos\theta_2 + l_2 \sin\theta_3) \\ l_2 - l_2 \cos\theta_3 + l_1 \sin\theta_2 \\ -l_1 + \cos\theta_1 \, (l_1 \cos\theta_2 + l_2 \sin\theta_3) \end{bmatrix},    (B.12)

we can find the Jacobian matrix as

    J = \begin{bmatrix} \frac{\partial F}{\partial \theta_1} & \frac{\partial F}{\partial \theta_2} & \frac{\partial F}{\partial \theta_3} \end{bmatrix}
      = \begin{bmatrix}
          -\cos\theta_1 (l_1 \cos\theta_2 + l_2 \sin\theta_3) & l_1 \sin\theta_1 \sin\theta_2 & -l_2 \cos\theta_3 \sin\theta_1 \\
          0 & l_1 \cos\theta_2 & l_2 \sin\theta_3 \\
          -\sin\theta_1 (l_1 \cos\theta_2 + l_2 \sin\theta_3) & -l_1 \cos\theta_1 \sin\theta_2 & l_2 \cos\theta_1 \cos\theta_3
        \end{bmatrix},    (B.13)

and its inverse as

    J^{-1} = \begin{bmatrix}
        -\frac{\cos\theta_1}{l_1 \cos\theta_2 + l_2 \sin\theta_3} & 0 & -\frac{\sin\theta_1}{l_1 \cos\theta_2 + l_2 \sin\theta_3} \\
        \frac{\sin\theta_1 \sin\theta_3 \sec(\theta_2 - \theta_3)}{l_1} & \frac{\cos\theta_3 \sec(\theta_2 - \theta_3)}{l_1} & -\frac{\cos\theta_1 \sin\theta_3 \sec(\theta_2 - \theta_3)}{l_1} \\
        -\frac{\sin\theta_1 \cos\theta_2 \sec(\theta_2 - \theta_3)}{l_2} & \frac{\sin\theta_2 \sec(\theta_2 - \theta_3)}{l_2} & \frac{\cos\theta_1 \cos\theta_2 \sec(\theta_2 - \theta_3)}{l_2}
      \end{bmatrix}.    (B.14)
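
As a sanity check on the algebra (an added sketch, not part of the original appendix), the closed-form inverse in (B.14) can be compared against a numerical inversion of (B.13). The joint angles below are the approximate home values from the earlier sketch; any non-singular configuration would do:

import numpy as np

l1, l2 = 0.21, 0.2095
t1, t2, t3 = np.radians([-2.5, 63.8, 41.1])   # approximate home joint angles

R = l1*np.cos(t2) + l2*np.sin(t3)

# Jacobian of the translational forward kinematics, (B.13).
J = np.array([
    [-np.cos(t1)*R,  l1*np.sin(t1)*np.sin(t2), -l2*np.cos(t3)*np.sin(t1)],
    [ 0.0,           l1*np.cos(t2),             l2*np.sin(t3)],
    [-np.sin(t1)*R, -l1*np.cos(t1)*np.sin(t2),  l2*np.cos(t1)*np.cos(t3)],
])

# Closed-form inverse, (B.14).
sec = 1.0 / np.cos(t2 - t3)
J_inv = np.array([
    [-np.cos(t1)/R,                  0.0,                -np.sin(t1)/R],
    [ np.sin(t1)*np.sin(t3)*sec/l1,  np.cos(t3)*sec/l1,  -np.cos(t1)*np.sin(t3)*sec/l1],
    [-np.sin(t1)*np.cos(t2)*sec/l2,  np.sin(t2)*sec/l2,   np.cos(t1)*np.cos(t2)*sec/l2],
])

print(np.allclose(J @ J_inv, np.eye(3)))      # expected: True
print(np.allclose(J_inv, np.linalg.inv(J)))   # expected: True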

Now, we can find the end-effector frame inertia approximation by assuming that the handle, which weighed 51.95 g including the gimbal, acts as a constant inertia at the end effector. The result was

    M_{ee} = \begin{bmatrix} 0.05195 & 0 & 0 \\ 0 & 0.05195 & 0 \\ 0 & 0 & 0.05195 \end{bmatrix} \mathrm{kg} \; + \; (J^{-1})^T M J^{-1}
           = \begin{bmatrix} 0.09 & 0.00077 & 0.0014 \\ 0.00077 & 0.095 & 0.0185 \\ 0.0014 & 0.0185 & 0.091 \end{bmatrix} \mathrm{kg}.    (B.15)

The diagonals of M_{ee} were considered to be the approximate end-effector mass for each Cartesian axis about the home position, which is where the subjects were attempting to stabilize the haptic device.
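
The final step in (B.15) is a congruence transform plus the handle mass. The sketch below is an added illustration; end_effector_inertia is a hypothetical helper, M_joint stands for the joint-space matrix obtained by evaluating (B.8)–(B.11) at the home angles, and J is the Jacobian from the previous sketch:

import numpy as np

def end_effector_inertia(M_joint, J, m_handle=0.05195):
    # Mee = m_handle * I + (J^-1)^T M_joint J^-1, as in (B.15).
    J_inv = np.linalg.inv(J)
    return m_handle * np.eye(3) + J_inv.T @ M_joint @ J_inv

# Example usage (with M_joint and J evaluated at the home configuration):
#   Mee = end_effector_inertia(M_joint, J)
#   np.diag(Mee)   -> per-axis apparent masses near the home position

With the parameter values listed above, the diagonal of the result should come out close to the 0.09-0.095 kg entries reported in (B.15).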


Related Publications
1. Michael J. Fu and M. Cenk Cavusoglu, Three-dimensional human arm and hand dynamics and variability model for a stylus-based haptic interface, Proceedings of the 2010 IEEE International Conference on Robotics and Automation, Anchorage, AK, pp. 1339–1346, 2010.
2. Michael J. Fu and M. Cenk Cavusoglu, Human Arm-and-Hand Dynamics Model with Variability Analyses for a Stylus-Based Haptic Interface, International Journal of Robotics Research, submitted January 2011.
3. Michael J. Fu and M. Cenk Cavusoglu, Effect of Visuo-Haptic Co-location on 3D Fitts Task Performance in Real and Virtual Environments, Presence: Teleoperators and Virtual Environments, submitted March 2011.
4. Michael J. Fu and M. Cenk Cavusoglu, Effect of Visuo-Haptic Co-location on 3D Fitts Task Performance, 2011 IEEE International Conference on Intelligent Robots and Systems, submitted March 2011.


Bibliography
[1] K. W. Arthur, K. S. Booth, and C. Ware, Evaluating 3D task performance for fish tank virtual worlds, ACM Transactions on Information Systems, vol. 11, no. 3, pp. 239–265, July 1993.
[2] S. W. Karin, S. A. Wall, K. Paynter, A. M. Shillito, M. Wright, and S. Scali, The effect of haptic feedback and stereo graphics in a 3D target acquisition task, Proc. of EuroHaptics 2002, 2002.
[3] R. Arsenault and C. Ware, The importance of stereo and eye-coupled perspective for eye-hand coordination in fish tank VR, Presence, vol. 13, no. 5, pp. 549–559, October 2004.
[4] W. S. Kim, F. Tendick, and L. W. Stark, Visual enhancements in pick-and-place tasks: Human operators controlling a simulated cylindrical manipulator, IEEE Journal of Robotics and Automation, vol. 3, no. 5, pp. 418–425, 1987.
[5] F. Tendick, R. W. Jennings, G. Tharp, and L. Stark, Sensing and manipulation problems in endoscopic surgery: Experiment, analysis, and observation, Presence, vol. 2, no. 1, pp. 66–79, 1993.
[6] B. Akka, Writing stereoscopic software for StereoGraphics systems using Microsoft Windows OpenGL, StereoGraphics Corporation, Tech. Rep., 1998.
[7] E. D. Graham and C. L. MacKenzie, Physical versus virtual pointing, Proc. of the 1996 ACM Conference on Human Factors in Computing Systems, pp. 292–299, April 1996.
[8] A. H. Mason, M. A. Walji, E. J. Lee, and C. L. MacKenzie, Reaching movements to augmented and graphic objects in virtual environments, Proc. of the 2001 ACM Conference on Human Factors in Computing Systems, pp. 426–433, March 2001.
[9] D. W. Sprague, B. A. Po, and K. S. Booth, The importance of accurate VR head registration on skilled motor performance, in Proc. of Graphics Interface 2006, 2006, pp. 131–137.

[10] T. Lim, J. M. Ritchie, J. R. Corney, R. G. Dewar, K. Schmidt, and K. Bergsteiner, Assessment of a haptic virtual assembly system that uses physics-based interactions, in Proc. of the IEEE International Symposium on Assembly and Manufacturing, 2007, pp. 147–153.
[11] T. T. Blackmon, M. C. Cavusoglu, F. Lai, and L. W. Stark, Human Hand Trajectory Analysis in Point-and-Direct Telerobotics, in Proceedings of the 8th International Conference on Advanced Robotics (ICAR '97), July 1997, pp. 927–932.
[12] R. J. Teather, R. S. Allison, and W. Stuerzlinger, Evaluating visual/motor co-location in fish-tank virtual reality, in Proceedings of the IEEE Toronto International Conference - Science & Technology for Humanity. IEEE, 2009, pp. 624–629.
[13] M. R. Mine, F. P. Brooks, Jr., and C. H. Sequin, Moving objects in space: exploiting proprioception in virtual-environment interaction, in Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH '97. New York, NY, USA: ACM Press/Addison-Wesley Publishing Co., 1997, pp. 19–26.
[14] D. Swapp, V. Pawar, and C. Loscos, Interaction with co-located haptic feedback in virtual reality, Virtual Reality, no. 10, pp. 24–30, 2006.
[15] R. Shadmehr and S. P. Wise, The Computational Neurobiology of Reaching and Pointing: A Foundation for Motor Learning, ser. Computational Neuroscience Series, T. J. Sejnowski and T. A. Poggio, Eds. Cambridge, MA: The MIT Press, 2005.
[16] R. Held, Plasticity in sensory-motor systems, Scientific American, vol. 213, no. 5, pp. 84–94, 1965.
[17] J. Groen and P. J. Werkhoven, Visuomotor adaptation to virtual hand position in interactive virtual environments, Presence: Teleoper. Virtual Environ., vol. 7, pp. 429–446, October 1998.
[18] R. K. Bernotat, Rotation of visual reference systems and its influence on control quality, IEEE Transactions on Man-Machine Systems, vol. 11, no. 2, pp. 129–131, 1970.
[19] W. S. Kim, S. R. Ellis, M. E. Tyler, B. Hannaford, and L. W. Stark, Quantitative evaluation of perspective and stereoscopic displays in three-axis manual tracking tasks, IEEE Transactions on Man-Machine Systems, vol. 17, no. 1, pp. 61–72, 1987.
[20] P. M. Fitts, The information capacity of the human motor system in controlling the amplitude of movement, Journal of Experimental Psychology, vol. 47, pp. 381–391, 1954.

[21] P. M. Fitts and J. R. Peterson, Information capacity of discrete motor responses, Journal of Experimental Psychology, vol. 67, pp. 103–113, 1964.
[22] I. S. MacKenzie and W. Buxton, Extending Fitts' law to two-dimensional tasks, in CHI '92: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. New York, NY, USA: ACM Press, 1992, pp. 219–226.
[23] C. Ware and R. Balakrishnan, Reaching for objects in VR displays: lag and frame rate, ACM Transactions on Computer-Human Interaction, vol. 1, no. 4, pp. 331–356, December 1994.
[24] C. Ware and R. Arsenault, Frames of reference in virtual object rotation, in Proceedings of the 1st Symposium on Applied Perception in Graphics and Visualization. New York, NY, USA: ACM, 2004, pp. 135–141.
[25] R. W. Soukoreff and I. S. MacKenzie, Towards a standard for pointing device evaluation: Perspectives on 27 years of Fitts' law research in HCI, International Journal of Human-Computer Studies, no. 61, pp. 751–789, 2004.
[26] A. Murata and H. Iwase, Extending Fitts' law to a three-dimensional pointing task, Human Movement Science, vol. 20, pp. 791–805, 2001.
[27] T. Grossman and R. Balakrishnan, Pointing at trivariate targets in 3D environments, in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, ser. CHI '04. ACM, 2004, pp. 447–454.
[28] L. Liu, J.-B. Martens, and R. van Liere, Revisiting Path Steering for 3D Manipulation Tasks, in Proc. of the IEEE Symposium on 3D User Interfaces 2010, 2010, pp. 39–46. [Online]. Available: http://dx.doi.org/10.1109/VRAIS.1993.380802
[29] J. Winters, L. Stark, and A. H. Seif-Naraghi, An analysis of the sources of musculoskeletal system impedance, Journal of Biomechanics, vol. 21, no. 12, pp. 1011–1025, 1988.
[30] J. B. MacNeil, R. E. Kearney, and I. W. Hunter, Time-varying identification of human joint dynamics, Proc. of the 11th IEEE EMBS Inter. Conf., 1989.
[31] C. C. Gielen and J. C. Houk, Nonlinear viscosity of human wrist, Journal of Neurophysiology, vol. 52, no. 3, pp. 553–569, September 1984.
[32] T. Tsuji, K. Goto, M. Moritani, M. Kaneko, and P. Morasso, Spatial characteristics of human hand impedance in multi-joint arm movements, in Proceedings of the IEEE International Conference on Intelligent Robots and Systems, vol. 1, September 1994, pp. 423–430.
[33] R. Gurram, S. Rakheja, and A. J. Brammer, Driving-point mechanical impedance of the human hand-arm system: Synthesis and model development, Journal of Sound and Vibration, vol. 180, no. 3, pp. 437–458, 1995.

[34] R. E. Kearney and I. W. Hunter, System identification of human joint dynamics, Critical Reviews in Biomedical Engineering, vol. 18, no. 1, pp. 55–87, 1990.
[35] D. A. Lawrence, Stability and transparency in bilateral teleoperation, IEEE Transactions on Robotics and Automation, vol. 9, no. 5, pp. 624–637, October 1993.
[36] K. Kosuge, Y. Fujisawa, and T. Fukuda, Control of mechanical system with man-machine interaction, Proc. of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 87–92, 1992.
[37] C. J. Hasser and M. R. Cutkosky, System identification of the human hand grasping a haptic knob, Proc. of the 10th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Orlando, FL, pp. 117–180, March 2002.
[38] H. Woo and D. Lee, Exploitation of the impedance and characteristics of the human arm in the design of haptic interfaces, IEEE Transactions on Industrial Electronics, vol. 56, no. 9, in press, 2009.
[39] R. G. Dong, D. E. Welcome, T. W. McDowell, and T. Z. Wu, Biodynamic response of human fingers in a power grip subjected to a random vibration, Journal of Biomechanical Engineering, vol. 126, pp. 447–457, August 2004.
[40] E. de Vlugt and A. C. Schouten, Identification of intrinsic and reflexive muscle parameters of the human arm in 3D joint space, Proc. of the 2004 IEEE Int. Conf. on Systems, Man, and Cybernetics, pp. 2471–2478, 2004.
[41] E. J. Perreault, R. F. Kirsch, and P. E. Crago, Effects of voluntary force generation on the elastic components of endpoint stiffness, Experimental Brain Research, vol. 141, pp. 312–323, 2001.
[42] J. E. Speich, L. Shao, and M. Goldfarb, Modeling the human hand as it interacts with a telemanipulation system, Mechatronics, vol. 15, no. 9, pp. 1127–1142, November 2005.
[43] K. J. Kuchenbecker, J. G. Park, and G. Niemeyer, Characterizing the human wrist for improved haptic interaction, Proc. of the 2003 International Mechanical Engineering Congress and Exposition, pp. 1–8, November 2003.
[44] R. G. Dong, S. Rakheja, A. W. Schopper, B. Han, and W. P. Smutz, Hand-transmitted vibration and biodynamic response of the human hand-arm: A critical review, Critical Reviews in Biomedical Engineering, vol. 29, no. 4, pp. 393–439, 2001.
[45] W. McMahan and K. J. Kuchenbecker, Haptic display of realistic tool contact via dynamically compensated control of a dedicated actuator, Proc. of the 2009 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 3170–3177, October 2009.

[46] A. Israr, S. Choi, and H. Z. Tan, Detection threshold and mechanical impedance of the hand in a pen-hold posture, Proc. of the 2006 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 472–477, 2006.
[47] ——, Mechanical impedance of the hand holding a spherical tool at threshold and suprathreshold stimulation levels, in WHC '07: Proceedings of the Second Joint EuroHaptics Conference and Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems. Washington, DC, USA: IEEE Computer Society, 2007, pp. 56–60.
[48] I. Díaz and J. J. Gil, Influence of vibration modes and human operator on the stability of haptic rendering, IEEE Transactions on Robotics, vol. 26, no. 1, pp. 160–165, 2010.
[49] G. Niemeyer and J. J. E. Slotine, Stable adaptive teleoperation, IEEE Journal of Oceanic Engineering, vol. 16, no. 1, pp. 152–162, January 1991.
[50] M. C. Cavusoglu, A. Sherman, and F. Tendick, Design of bilateral teleoperation controllers for haptic exploration and telemanipulation of soft environments, IEEE Transactions on Robotics and Automation, vol. 18, no. 4, pp. 641–647, August 2002.
[51] H. Gomi, Y. Koike, and M. Kawato, Human hand stiffness during discrete point-to-point multi-joint movement, Proc. of the Annual International Conference of the IEEE EMBS, vol. 14, pp. 1628–1629, October–November 1992.
[52] L. A. Jones and I. W. Hunter, Influence of the mechanical properties of a manipulandum on human operator dynamics: elastic stiffness, Biological Cybernetics, vol. 62, no. 4, pp. 299–307, 1990.
[53] ——, Influence of the mechanical properties of a manipulandum on human operator dynamics: viscosity, Biological Cybernetics, vol. 69, no. 4, pp. 295–303, 1993.
[54] A. Haddadi and K. Hashtrudi-Zaad, Bounded-Impedance Absolute Stability of Bilateral Teleoperation Control Systems, IEEE Transactions on Haptics, vol. 3, no. 1, pp. 15–27, January–March 2010.
[55] N. Hogan, Controlling impedance at the man/machine interface, Proc. of the IEEE International Conference on Robotics and Automation, Scottsdale, AZ, vol. 3, pp. 1621–1631, May 1989.
[56] R. J. Adams and B. Hannaford, A two-port framework for the design of unconditionally stable haptic interfaces, Proc. of the 1998 IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, pp. 1254–1259, 1998.
[57] K. Zhou, J. C. Doyle, and K. Glover, Robust and Optimal Control. Englewood Cliffs, NJ: Prentice Hall, 1996.

[58] L. Ljung, System Identification: Theory for the User, 2nd ed. Upper Saddle River, NJ: PTR Prentice Hall, 1999.
[59] E. Brenner and J. B. J. Smeets, Fast responses of the human hand to changes in target position, Journal of Motor Behavior, vol. 29, no. 4, pp. 297–321, December 1997.
[60] J. E. Marsden and T. J. R. Hughes, Mathematical Foundations of Elasticity. Englewood Cliffs, NJ, USA: Prentice-Hall, Inc., 1983.
[61] E. de Vlugt, A. C. Schouten, and F. C. van der Helm, Adaptation of reflexive feedback during arm posture, Biological Cybernetics, vol. 87, no. 1, pp. 10–26, 2002.
[62] K. B. Fite, L. Shao, and M. Goldfarb, Loop shaping for transparency and stability robustness in bilateral telemanipulation, IEEE Transactions on Robotics and Automation, vol. 20, no. 3, pp. 620–624, 2004.
[63] D. W. Hearn and J. Vijay, Efficient algorithms for the (weighted) minimum circle problem, Operations Research, vol. 30, no. 4, pp. 777–795, 1982.
[64] W. S. Levine, Ed., The Control Handbook. Boca Raton, FL: CRC Press LLC, 1996.
[65] H. Kazerooni, T.-I. Tsay, and K. Hollerbach, A controller design framework for telerobotic systems, IEEE Transactions on Control Systems Technology, vol. 1, no. 1, pp. 50–62, March 1993.
[66] J. Yan and S. E. Salcudean, Teleoperation controller design using H-infinity optimization with application to motion-scaling, IEEE Transactions on Control Systems Technology, vol. 4, no. 3, pp. 244–258, May 1996.
[67] K. Kim, M. C. Cavusoglu, and W. K. Chung, Quantitative comparison of bilateral teleoperation systems using μ-synthesis, IEEE Transactions on Robotics, vol. 23, no. 4, pp. 776–789, August 2007.
[68] A. Shahdi and S. Sirouspour, Adaptive/Robust Control for Time-Delay Teleoperation, IEEE Transactions on Robotics, vol. 25, no. 1, pp. 196–205, February 2009.
[69] M. C. Cavusoglu, D. Feygin, and F. Tendick, A critical study of the mechanical and electrical properties of the PHANToM haptic interface and improvements for high performance control, Presence, vol. 11, no. 6, pp. 555–568, December 2002.
[70] D. D. Lev, R. Rozengurt, T. Gelfeld, A. Tarkhnishvili, and M. Reiner, The effects of 3D collocated presentation of visuo-haptic information, in Proc. of EuroHaptics 2010, 2010, pp. 432–437.

[71] C. N. Riviere, W. T. Ang, and P. K. Khosla, Toward active tremor canceling in handheld microsurgical instruments, IEEE Transactions on Robotics, vol. 19, no. 5, pp. 793–800, 2003.
[72] S. Zhai and P. Milgram, Quantifying coordination in multiple DOF movement and its application to evaluating 6 DOF input devices, Proc. of the 1998 ACM Conference on Human Factors in Computing Systems, pp. 320–327, April 1998.
[73] C. L. MacKenzie, R. G. Marteniuk, C. Dugas, D. Liske, and B. Eickmeier, Three-dimensional movement trajectories in Fitts' task: Implications for control, The Quarterly Journal of Experimental Psychology, no. 39A, pp. 629–647, 1987.
[74] A. H. Mason, An experimental study on the role of graphical information about hand movement when interacting with objects in virtual reality environments, Interacting with Computers, vol. 19, pp. 370–381, 2007.
[75] F. Faul, E. Erdfelder, A.-G. Lang, and A. Buchner, G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences, Behavior Research Methods, vol. 39, no. 2, pp. 175–191, 2007.
[76] T. A. Martin, J. G. Keating, H. P. Goodkin, A. J. Bastian, and W. T. Thach, Throwing while looking through prisms, Brain, no. 119, pp. 1199–1211, 1996.
