Professor Peter Allen
Low-Dimensional Data-Driven Grasping ..... 5-6
Prof. George Bekey
The history of grasping with robotic and prosthetic hands ..... 7
Dr. Lynette Jones
Human Hand Function: The Coupling of Sensory and Motor Systems ..... 8-9
Prof. Derek G. Kamper
Transmission of musculotendon forces to the index finger ..... 10-11
Prof. Haruhisa Kawasaki
Force Sensation of the Human Finger when Using a Multi-Fingered Haptic Interface ..... 12-13
Prof. Susan Lederman
Scientific Approaches to the Study of Human Hand Function ..... 14-15
Prof. Gerry Loeb
Robust Biomimetic Tactile Sensing and Grip Control ..... 16-17
Prof. Marco Santello
Synergistic Control of Hand Muscles Through Common Neural Input ..... 18-20
Prof. Luigi Villani
Original approaches to interpretation, learning, and modelling, from the observation of human manipulation ..... 21-22
Prof. Veronica Santos
Development of artificial grip reflexes that utilize the adduction/abduction capabilities of human fingers ..... 23-24
Prof. Francisco Valero-Cuevas
The neuromuscular system does ordinary manipulation tasks the hard way: Lessons for robotic manipulators? ..... 35-36
M. Ittyerah
Does practice affect hand ability? The reliance on frames of reference ..... 52-53
W. Merlo
Three-dimensional, Continuous-motion Ball Joint Technologies for Prosthetic Wrist Applications ..... 65-67
Recent advances in the human sciences have energized the field of robotics toward personal assistants and brain-machine interfaces. There is an increased interest in solving the robotic manipulation question: Can we build robotic hands that can accomplish our daily manipulation tasks? The human hand is adept at many diverse tasks in varied contexts, including power and precision grasping, twisting, and tapping. But we still do not know what features of the human hand enable such capability. For example, do biomechanical features like the complex tendon hood, synergistic muscle actuation, and bone shapes make the difference? Or is it the neural coding of movement? Importantly for robotics, we need to understand what features should be included in future robotic hands. This workshop is a forum for researchers to discuss manipulation viewed in light of the human hand's features, and it hopes to push the state of the art of the robotic hand. We expect that the workshop will bring together researchers from diverse areas such as robotics, biomechanics, neuroscience, and anthropology.
Low-Dimensional Data-Driven Grasping
Peter K. Allen Matei T. Ciocarlie Corey Goldfeder Hao Dang
Department of Computer Science, Columbia University, New York, USA
robotic grasping becomes a pre-computed database lookup. While we have not yet achieved this level of performance, it is our directional goal.
The most direct way to construct a grasp database is to collect grasping data from human volunteers. We could gather a large set of example objects and outfit an army of graduate students with grasp-capture devices.
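The database-lookup idea can be illustrated with a toy sketch. Everything below — the shape descriptor, the database entries, and the field names — is our own hypothetical example, not the authors' actual representation or implementation:

```python
import math

# Hypothetical grasp database: a coarse object shape descriptor
# (height, width, depth in cm) maps to grasp parameters that worked
# for that object in some precomputed capture or simulation session.
GRASP_DB = {
    (12.0, 6.5, 6.5): {"hand_pose": "wrap",   "spread_deg": 15.0},
    (3.0,  3.0, 3.0): {"hand_pose": "pinch",  "spread_deg": 40.0},
    (20.0, 2.0, 2.0): {"hand_pose": "tripod", "spread_deg": 25.0},
}

def lookup_grasp(descriptor):
    """Return the stored grasp for the nearest object in the database."""
    def dist(key):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(key, descriptor)))
    nearest = min(GRASP_DB, key=dist)   # nearest-neighbor retrieval
    return GRASP_DB[nearest]

# A novel object close in shape to the 12 x 6.5 x 6.5 cm entry
print(lookup_grasp((11.0, 6.0, 7.0)))
```

At run time, grasp planning then reduces to computing a descriptor for the observed object and retrieving (and possibly refining) the stored grasp, rather than searching the full grasp space online.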
The history of grasping
with robotic and prosthetic hands
George A. Bekey
Abstract
The human hand is an amazing and complex instrument, capable of grasping objects of
arbitrary shape (as well as having numerous other functions). Attempts to replicate some
of these functions with both amputees and robots make clear how difficult it is to replace
the hand by an electromechanical (or pneumatic) device. In this talk we will review the
fascinating history of these devices. Prosthetic hands began with a simple hook
(associated with pirates in the Middle Ages), but grasping originated with the invention
of the Dorrance parallel hook in 1912, which is still in common use. Robot hands
began with parallel jaw grippers in industry in the 1960s, followed by the introduction of
systems with 3 or more fingers in the 1980s.
During the past 20 years both robot hands and prosthetic hands have undergone dramatic
improvements. Commercial grippers for robots with 4 and 5 fingers are now available,
and some are incorporated in humanoids. For industrial grasping a number of non-
anthropomorphic grippers have been developed. Dramatic improvements in prosthetic
hands have come about during the past decade, largely in research laboratories. Recently
these developments have been accelerated by the DARPA program in upper extremity
prosthetics. The parallel history of robot and prosthetic hands will be examined with
attention to such issues as control vs. autonomy, cost, reliability, and applications. There
will be few if any equations, so anyone will be able to grasp this talk.
Human Hand Function: The Coupling of Sensory and Motor Systems
Lynette Jones
Department of Mechanical Engineering,
Massachusetts Institute of Technology
From an evolutionary perspective the hands of primates are often considered the absolute
bedrock of mammalian primitiveness with a skeletal structure that changed little over 65 million
years. The hand evolved relatively early with capabilities that preceded the development of the
cerebral structures required to make use of its potential. The importance of the evolution of
cortical neuronal hardware that processes incoming sensory information from the hand and via
corticospinal neurons directly controls movements of the fingers cannot be overstated in
analyzing the versatility of primate hands. One major evolutionary change that resulted in a
remarkable increase in the functionality of the hand was the development of the opposable thumb.
Opposition involves flexion, abduction and medial rotation of the thumb so that the top of the
thumb can make contact with the tips of the fingers. This movement is essential for exploring
and manipulating objects and in the absence of the thumb, for example following amputation, it
is estimated that the hand loses 40% of its functional capacity. The extensive area of contact
between the pulps of the thumb and index finger is a uniquely human characteristic that reflects
the more distal position of the thumb and its longer length relative to that of the index finger.
Another feature of the human hand that distinguishes it from other primate hands is the presence
of rough horseshoe-shaped tuberosities on the distal phalanges to which the soft tissues of the
palmar pads are attached. These pads facilitate the distribution of pressure during grasping and
allow the fingertips to conform to uneven surfaces. Further evidence of the specialization of the
hand comes from the relative mass of the musculature devoted to the thumb, which makes up
about 39% of the weight of the intrinsic muscles within the hand.
Human dexterity has been studied experimentally from a number of perspectives ranging
from detailed analyses of the sensorimotor control involved in object manipulation, to kinematic
studies of skilled activities such as typing and piano playing. The manipulative skill of the
human hand is most apparent in the tight coupling that occurs between the motor and sensory
systems when the hand reaches and grasps an object. Within 70 ms of contact, the forces used to
grasp an object are modified by feedback from mechanoreceptors in the skin. Numerous studies
have recorded grip forces as the index finger and thumb grasp and lift an object and shown how
these forces vary as a function of the geometry and weight of the object, the shape and texture of
the contact surface, and the friction between the hand and the object. The crucial role played by
cutaneous mechanoreceptors in the modulation of grip forces has been highlighted in
microneurographic studies in which the activity of isolated tactile afferent fibers is recorded as
people grasp and lift objects. These experiments show that tactile afferents encode the timing,
magnitude, direction and spatial distribution of fingertip forces, and the friction between the skin
and object. When these inputs are eliminated following local anesthesia, the hand is unable to
compensate rapidly as an object slips between the digits, and grip forces are considerably larger
than those used normally. In addition, the predictive control of grip force based on previous
experience is impaired when the expected sensory feedback is not available.
Kinematic analyses of more complex manual tasks such as typing and piano playing have
offered insight into the internal representation of motor activities and have shown that muscles
controlling movements of the fingers are not controlled independently. Although keyboard tasks
are often thought of as serial activities, analyses of the sequencing of finger movements indicate
that there is an anticipatory component to performance particularly when consecutive keystrokes
are executed by different hands. Skilled typists commit themselves to typing a particular letter
approximately three characters in advance of the current keystroke and can visually process up to
eight characters in advance of the character being typed. For expert typists, the interval between
successive keystrokes is typically 100-200 ms, but intervals as short as 60 ms are not infrequent.
Piano playing and sending Morse code are performed at rates comparable to those reported for
typing. Experienced Morse code operators can send code at a rate of 20-30 words per minute,
and at faster tempos the inter-note intervals of pianists are often 80-100 ms, and for brief periods
of time, such as when playing trills, they can produce 20-30 notes per second. The similarity in
peak movement speeds recorded from skilled typists and pianists suggests that they are
performing near the mechanical and neural limits of the human hand.
Skilled manual activities rely on sensory feedback from cutaneous, muscle and joint
mechanoreceptors for successful execution. The ability to perceive finger movements and sense
the position of the fingers is not limited to a single class of receptor, but is derived from a
number of redundant sources. In muscle, spindle receptors respond to changes in muscle length,
with spindle primary afferents being more sensitive to the velocity of muscle contraction and
secondary afferents displaying much greater position sensitivity. The discharge rates of these
receptors are not simply a function of changes in muscle length, but also reflect the activity of
their own motor innervation, called the fusimotor system, which can regulate the sensitivity of
muscle spindles. To decode signals from muscle spindle afferents the central nervous system
must have access to information regarding the level of fusimotor activity to distinguish between
afferent discharges that are proprioceptively significant from those that are the result of
fusimotor activity. Mechanoreceptors (SA II) in hairy skin also discharge in response to finger
movements which typically stretch the loosely connected skin on the dorsum of the hand; the
majority of these receptors respond to movements of more than one joint and so the information
that they provide the CNS is ambiguous with respect to movement direction and amplitude.
However, the ensemble response from a population of SA II receptors can probably provide a
population vector that differentiates individual finger movements.
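A population-vector readout of this kind can be sketched numerically. The receptor tuning values below are invented purely for illustration; the point is only that a rate-weighted sum of preferred-direction vectors recovers the movement direction even though each individual afferent is ambiguous:

```python
import math

# Invented example: each SA II afferent is assigned a preferred movement
# direction; the population vector is the firing-rate-weighted sum of the
# corresponding unit vectors.
receptors = [
    # (preferred direction, degrees; firing rate, spikes/s)
    (0.0,    5.0),
    (45.0,  30.0),
    (90.0,  55.0),   # responds most strongly: movement is near 90 degrees
    (135.0, 28.0),
    (180.0,  4.0),
]

def population_vector(cells):
    """Decode movement direction (degrees) from an afferent population."""
    x = sum(rate * math.cos(math.radians(d)) for d, rate in cells)
    y = sum(rate * math.sin(math.radians(d)) for d, rate in cells)
    return math.degrees(math.atan2(y, x))

print(round(population_vector(receptors), 1))  # close to 90 degrees
```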
In contrast to the tactile sensory system in which higher densities of mechanoreceptors are
associated with superior tactile acuity, higher spindle densities do not appear to be associated
with superior sensory acuity. For the muscles of the hand, the number of spindles has been
estimated to vary from 12 to 356. There is no evidence indicating that the detection of
movements or changes in limb position is superior as one goes from proximal to distal joints, and
when expressed in terms of the absolute angular rotation of the joint, the ability to detect
movements is better at more proximal joints such as the elbow (0.08° at 20°/s) than at the distal
joints of the hand (0.88° at 20°/s). The superior performance of more proximal joints is not
surprising, as they move more slowly than distal joints and rotation of these joints results in a
larger displacement of the end-point of the limb than the same angular rotation of a distal joint.
The forces generated by muscles are sensed by Golgi tendon organs normally found at the
junction between muscle tendon and a small group of muscle fibers. These receptors are less
numerous than spindle receptors and some muscles of the hand do not appear to have any tendon
organ receptors. Results from a number of experiments suggest that the perception of force is
derived from internal neural correlates of the descending motor command, and that peripheral
feedback calibrates these signals to indicate whether the motor command is adequate for the task.
Collectively, peripheral receptors together with central feedback systems provide the information
required for the dexterous performance of the hand.
Transmission of musculotendon forces to the index finger
Sang-Wook Lee, Hua Chen, and Derek G. Kamper, Member, IEEE

This work was supported in part by grants from the Coleman Foundation, the Whitaker Foundation, and the National Institutes of Health (NINDS R01 NS052369). Sang-Wook Lee is with the Sensory Motor Performance Program of the Rehabilitation Institute of Chicago. Hua Chen was with the Rehabilitation Institute of Chicago; he is now a graduate student at Harvard University. Derek G. Kamper is with the Biomedical Engineering Department of the Illinois Institute of Technology and the Sensory Motor Performance Program of the Rehabilitation Institute of Chicago (kamper@iit.edu).

Abstract: This article reviews work completed by the authors to examine how musculotendon forces from index finger muscles impact finger kinetics and kinematics. Results were obtained from in vivo studies, computer simulations, and cadaver investigations. Passive joint impedance and finger posture had particularly strong effects on force transmission in the fingers.

I. INTRODUCTION
For the hand, the mapping of musculotendon forces to joint torques and forces is quite complex. Typically, 21 degrees-of-freedom (DOF) are ascribed to the fingers and thumb. If one distinguishes among the different compartments, controlling different digits, of some of the extrinsic muscles, over 30 different muscles control the hand.
The index finger alone is comprised of three joints: metacarpophalangeal (MCP), proximal interphalangeal (PIP), and distal interphalangeal (DIP), with a total of four DOF. Motion about these joints is controlled by 7 different muscles: the extrinsic finger flexors {flexor digitorum superficialis (FDS) and flexor digitorum profundus (FDP)}, the extrinsic finger extensors {extensor digitorum communis (EDC) and extensor indicis (EI)}, and the three intrinsic hand muscles {first dorsal interosseous (FDI), first lumbrical (LUM), and the first palmar interosseous (FPI)}. All of these muscles are multi-articular, with some of them crossing all three joints.
On the palmar side of the index finger, both FDS and FDP travel through a series of anatomical pulleys which shape the tendon path. These structures impact force transmission, as can be observed in cases in which they are damaged. On the dorsal side, the situation is even more complicated, as the five other muscles (EI, EDC, FPI, LUM, and, arguably, FDI; [1] but see [2]) interact with the extensor hood, a dorsal aponeurosis conveying muscle force to the finger bones and joints. The extensor hood, in turn, has multiple insertion sites into the finger phalanges. In the past, researchers have used a force net, Winslow's tendinous rhombus, to describe the distribution of forces in the extensor hood [3-4]. The orientations of the bands in the net model are assumed fixed at the outset and are not re-configured during finger motion. Contributions of specific muscles to the different bands are estimated and assumed to remain constant. In actuality, however, the bands of the extensor hood move with respect to the phalanges [4], and the transmission of force throughout the hood is not known.
Thus, examination of how musculotendon forces are transmitted to index finger segments and joints is warranted. The results of a series of such studies conducted in our laboratory are presented.

II. METHODS AND PROCEDURES
In our laboratory, we have used a combination of in vivo studies, computer modeling, and cadaver experiments to examine the transmission of tendon forces into finger kinetics and kinematics.
The in vivo studies involve either percutaneous stimulation or fine-wire electromyography of the muscles of the index finger. Two 55-µm stainless steel intramuscular electrodes are inserted into each muscle with a 27-gauge hypodermic needle. Kinematics are recorded with a video-based infrared motion capture system (OptoTrak 3020, Northern Digital, Inc., Waterloo, Ontario, Canada). Kinetics are recorded with a 6-DOF load cell (20E12A, JR3, Inc., Woodland, CA).
Computer models simulating finger dynamics have been developed using Working Model (Design Simulation Technologies, Canton, MI), visualNastran (MSC Software, Santa Ana, CA), and Simulink (MathWorks, Natick, MA). These models include finger structures such as anatomical pulleys.
Cadaver experiments are performed with fresh-frozen hand specimens, fixed with the Agee WristJack (Hand Biomechanics Lab, Sacramento, CA). Known loads can be applied to nylon threads attached to the tendons of the index finger muscles (see Fig. 1). Strain in structures such as the extensor hood can be measured using a camera-based system and ultraviolet light.

III. RESULTS AND DISCUSSION
Both computer simulations and in vivo testing have confirmed that activation of the extrinsic flexors, FDS and FDP, does lead to concurrent flexion of all 3 finger joints. This occurs despite the existence of only a single attachment point for each muscle [5-6].
Passive joint stiffness and damping are crucial to obtaining this concurrent flexion. When the passive joint impedance is removed in the computer models, flexion of the MCP joint, for example, is greatly reduced or even eradicated [5-6].
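The force-to-torque mapping these studies probe can be sketched as tau = R(q) f, where the moment-arm matrix R depends on posture. All numbers below are invented for illustration; only the qualitative posture dependence (the MCP moment arm of the extensors roughly doubling with interphalangeal flexion) is taken from the reported results:

```python
# Toy sketch, not the laboratory's model: joint torques are the product of
# posture-dependent moment arms and tendon force, tau = R(q) * f.
def moment_arms_mm(pip_flexion_deg):
    """[MCP, PIP, DIP] moment arms (mm) of a single extensor tendon."""
    base_mcp = 5.0  # assumed MCP moment arm in the extended posture
    # invented posture dependence: MCP arm doubles from 0 to 90 deg flexion
    mcp = base_mcp * (1.0 + pip_flexion_deg / 90.0)
    return [mcp, 3.0, 1.5]

def joint_torques_nmm(tendon_force_n, pip_flexion_deg):
    """Torque (N*mm) at each joint for a given tendon force and posture."""
    return [r * tendon_force_n for r in moment_arms_mm(pip_flexion_deg)]

print(joint_torques_nmm(10.0, 0.0))   # extended posture
print(joint_torques_nmm(10.0, 90.0))  # flexed posture: MCP torque doubles
```

The sketch makes the paper's point concrete: a fixed-moment-arm model predicts the same MCP torque in every posture, whereas a posture-dependent R(q) does not.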
The anatomical pulleys also impact finger movement. Removal of pulley models from the simulations altered simulation output, although to a lesser extent than removal of joint impedance [5-6]. This effect is in accordance with the description of the impact of pulley releases [8].
The observed flexion pattern, however, does not follow from use of the geometrically obtained moment arms for the muscles at each joint [9]. That technique leads to overestimation of MCP flexion for a given amount of force input. Additionally, the effective moment arms measured directly in static finger postures show a very strong dependence on joint posture [10]. For example, the EDC and EI moment arms about the MCP joint doubled as the PIP and DIP joints became more flexed (Fig. 2).
In contrast, forces in the central and terminal slips of the extensor hood decreased with PIP and DIP flexion. For a given level of EDC loading, force levels in these two slips dropped by more than 50% as the interphalangeal joints were flexed [11].

Fig. 1. Cadaver forearm secured with the Agee WristJack. The fingertip is connected to a 6-DOF load cell. Threads are connected to muscle tendons and run back through a metal plate [7].

ACKNOWLEDGMENT
The authors thank Mr. Erik Cruz, Ms. Heidi Fischer, and Dr. Joseph Towles for their invaluable help with these studies.

REFERENCES
[1] D. G. Kamper, H. C. Fischer, and E. G. Cruz, "Impact of finger posture on mapping from muscle activation to joint torque," Clin Biomech, vol. 21, pp. 361-369, 2006.
[2] C. Long, "Intrinsic-extrinsic muscle control of the fingers: electromyographic studies," J Bone Joint Surg, vol. 50-A, pp. 973-984, 1968.
[3] F. J. Valero-Cuevas, F. E. Zajac, and C. G. Burgar, "Large index-fingertip forces are produced by subject-independent patterns of muscle excitation," J Biomech, vol. 31, pp. 693-703, 1998.
[4] M. Garcia-Elias, K. N. An, L. Berglund, R. L. Linscheid, W. P. Cooney, 3rd, and E. Y. Chao, "Extensor mechanism of the fingers. I. A quantitative geometric study," J Hand Surg [Am], vol. 16, pp. 1130-1136, 1991.
[5] D. G. Kamper, T. G. Hornby, and W. Z. Rymer, "Extrinsic flexor muscles generate concurrent flexion of all three finger joints," J Biomech, vol. 35, pp. 1581-1589, 2002.
[6] S. Lee and D. G. Kamper, "Modeling of multi-articular muscles: importance of inclusion of tendon-pulley interactions in the finger," IEEE Trans BME, in press.
[7] F. J. Valero-Cuevas, J. D. Towles, and V. R. Hentz, "Quantification of fingertip force reduction in the forefinger following simulated paralysis of extensor and intrinsic muscles," J Biomech, vol. 33, pp. 1601-1609.
[8] C. K. Low, B. P. Pereira, R. T. H. Ng, Y. P. Low, and H. P. Wong, "The effect of the extent of A1 pulley release on the force required to flex the digits," J Hand Surg [Br], vol. 23B, pp. 46-49, 1998.
[9] K. N. An, Y. Ueba, E. Y. Chao, W. P. Cooney, and R. L. Linscheid, "Tendon excursion and moment arm of index finger muscles," J Biomech, vol. 16, pp. 419-425, 1983.
[10] S. W. Lee, H. Chen, J. D. Towles, and D. G. Kamper, "Estimation of the effective static moment arms of the tendons in the index finger extensor mechanism," J Biomech, vol. 41, pp. 1567-1573, 2008.
[11] S. W. Lee, H. Chen, J. D. Towles, and D. G. Kamper, "Effect of finger posture on the tendon force distribution within the finger extensor mechanism," J Biomech Eng, vol. 130, 2008.
Title
Force Sensation of the Human Finger when Using a Multi-Fingered Haptic Interface
Speaker
Haruhisa Kawasaki
Professor
Faculty of Engineering, Gifu University
E-mail: h_kawasa@gifu-u.ac.jp, Tel & Fax: +81-58-293-2546
Abstract
Haptic interfaces that present force and tactile feelings have been utilized in the areas of
telemanipulation, interaction with micro-nanoscale phenomena, medical training and
evaluation, and others. A multifingered haptic interface (MHI) has greater potential for
effectiveness in these applications than does a single-point haptic interface. In particular,
an MHI is necessary for telemanipulation with a humanoid hand robot. When a human manipulates
an object by using his or her hand, the human receives a tactile sensation through a contact
plane between the finger surface and the object surface as a result of the grasping force,
vibration and heat. However, when a human manipulates an object in a virtual environment or
telemanipulates a humanoid hand robot, he or she uses an MHI and receives a tactile
sensation through a finger holder, which connects the human fingertip and the haptic finger of
the MHI. This means that the tactile sensation is derived from all of the contact surfaces
between the human fingertip and finger holder. Therefore, for the MHI to be further
developed, it is necessary to clarify the human tactile sensation that occurs under these
conditions.
The characteristics of the force sensation of the human finger have been reported in several
papers. In most of these studies, the force sensation was evaluated upon performance of the
flexion/extension motion. However, humans also feel a force sensation during the adduction/abduction motion, as well as a moment sensation, which often occurs when grasping a corner of a long object. These characteristics have not been previously evaluated.
Our group has developed a multi-fingered haptic interface called HIRO III[1], as shown in
Figure 1. HIRO III consists of an interface arm with 6 degrees of freedom (dof), a haptic hand
with five haptic fingers, and a controller. Each finger has three joints allowing 3 dof. The first
joint, relative to the hand base, allows abduction/adduction. The second and third joints allow
flexion/extension. A three-axis force sensor is mounted in a tip link of each haptic finger. To
manipulate the haptic interface, the operator has to wear a finger holder on his/her fingertips.
The finger holder contains an iron sphere, which is attached to the permanent magnet at the
force sensor tip to form a passive spherical joint. This passive spherical joint makes
adjustments between the human and haptic finger orientations and ensures safety if the MHI
malfunctions. When the operator moves his/her hand, the haptic interface follows the motion
of the operator's fingertips and presents a sensation of force. We have evaluated the
characteristics of the following force sensations experimentally using the HIRO III:
1) Ability to recognize the direction and magnitude of a three-dimensional force;
2) Ability to recognize the direction and magnitude of the friction moment.
The experimental results showed that the ability to recognize the force varied according to the
displayed force direction. Moreover, the ability to recognize the frictional moment was not
affected by the roughness of the object surface or the angle between the fingertip and object
surface.
Finally, a small device that presents the frictional moment, which was developed based on the
experiment results for the force and moment sensations, and a virtual object handling system
which can present the normal force and the friction force, are presented.
Reference
1) T. Endo, H. Kawasaki, T. Mouri, Y. Doi, T. Yoshida, Y. Ishigure, H. Shimomura, M. Matsumura, and K. Koketsu, "Five-Fingered Haptic Interface Robot: HIRO III," Proc. of World Haptics 2009, pp. 458-463, 2009.
Scientific Approaches to the Study of Human Hand Function
Susan Lederman
Psychology
Center for Neuroscience
School of Computing
The human hand is a critical component of a highly complex system that includes, and is
controlled by, a very powerful nervous system. This highly flexible system is used to
perform an impressive range of manual functions.
Human hand performance critically depends upon the structure and function of the
human hand, the properties of the physical environment, the nature of the
hand/environment interactions, and finally, how sensorimotor inputs are processed and
represented, both functionally and neurally.
Within this conceptual framework, I will address selected issues relating to the sensory
side of human hand function that have been investigated by touch scientists. For each, I
will outline some of the more successful methodologies that have been employed and
illustrate the types of conclusions that may be drawn, using examples from the scientific
literature.
1) What is the sensitivity and spatiotemporal resolution capacity of the human hand?
This aspect of psychophysical investigations focuses on the detection and
discrimination of threshold-level events. Human hand function is constrained in part
by whether the somatosensory system can detect the occurrence of mechanical,
thermal and/or electrical events. It is further limited by the precision with which it can
discriminate spatial or temporal events. Selective adaptation and masking paradigms
have been used to behaviorally determine the relative contribution of different
sensory channels.
3) What is the nature and role of manual exploration in human haptic processing of
objects and their properties?
Video analysis has shown that manual exploration is highly systematic.
Behavioral experiments have revealed the costs and benefits of performing one (or
more) patterns of manual exploration (exploratory procedures) for haptic object
perception. When manual exploration of unfamiliar objects is unconstrained, their
material properties are more perceptually salient than their geometric properties.
Simultaneous execution of two or more exploratory procedures allows perceivers to
integrate valuable redundant property information about the identity of multi-attribute
objects.
4) What are the contributions of spatial and temporal information to haptic object
processing?
It is possible to assess the contribution of different types of information by
manually constraining haptic exploration in space and/or during its time-course,
thereby eliminating certain sources of information. The resulting decrement in
performance signals the contribution of the missing information.
5) What are the psychological dimensions that underlie complex human tactile/haptic
percepts?
Complementary methods involving controlled psychophysical experiments and
multidimensional-scaling procedures have been used to determine the perceptual
spaces that underlie complex touch-related experiences (e.g., surface texture), and
associated physical dimensions.
6) When both vision and touch are used, how is information from both modalities
integrated?
Human hand performance is frequently affected by the simultaneous availability
of information to more than one sensory modality (e.g., touch, vision, audition). How
are multisensory inputs about a common physical event combined? A number of
scientific methodologies have been used to address this question, usually by
comparing data from unimodal to that of bimodal (or multimodal) conditions.
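Classical multidimensional scaling, one of the procedures mentioned in question 5, can be sketched in a few lines. The pairwise dissimilarity ratings below are invented; the method recovers a low-dimensional perceptual space in which rated-similar surfaces sit close together:

```python
import numpy as np

# Invented dissimilarity judgments for four surfaces (symmetric matrix,
# e.g. averaged ratings): surfaces 1-2 and 3-4 are judged similar pairs.
D = np.array([
    [0.0, 1.0, 4.0, 5.0],
    [1.0, 0.0, 3.0, 4.0],
    [4.0, 3.0, 0.0, 1.0],
    [5.0, 4.0, 1.0, 0.0],
])

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n   # centering matrix
B = -0.5 * J @ (D ** 2) @ J           # double-centered squared distances
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]        # largest eigenvalue first
coords = vecs[:, order[:1]] * np.sqrt(vals[order[:1]])  # 1-D embedding

print(coords.ravel())  # the two similar pairs separate along one axis
```

The recovered axis is then interpreted by correlating it with candidate physical dimensions (e.g. roughness), which is how perceptual spaces for texture have been related to surface properties.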
A number of these topics are also relevant to how cutaneous and kinesthetic cues
contribute to human manual kinesthesis and to dextrous grasping and manipulation, as
will presumably be noted in other presentations at this workshop.
Robust Biomimetic Tactile Sensing and Grip Control
Gerald E. Loeb, M.D., Professor of Biomedical Engineering, University of Southern California, Los
Angeles, CA 90089, gloeb@usc.edu, in collaboration with Nicholas Wettels, Jeremy Fishel, Gary Lin,
and Nora Nelson at USC and Roland Johansson, Umea University, Sweden
Rehabilitation therapists refer to the hand as "the third eye" because its function is highly dependent on the ability of touch to create a mind's-eye image of the objects that it encounters even without direct vision. That internal representation includes not just the object's shape but those mechanical properties
that are needed to develop useful strategies for its manipulation: mass, rotational inertia, texture,
hardness, friction, etc. Much of the data to develop that image comes from the tactile sensors of the
glabrous skin of the fingers. The actual signals result from the deformation of the skin and pulp of the
finger tips as they interact with objects according to forces applied via the tendons of the muscles that
actuate the fingers. The shape and viscoelastic properties of the fingertips are a critical determinant of the
transduction process as well as the mechanical interactions that occur during manipulation.
We have incorporated a primitive version of an impedance sensing TAC with 6 electrodes onto the thumb
of the Proto1 hand made by Otto Bock HealthCare (Vienna, Austria) for the DARPA Revolutionizing
Prosthetics 2009 program. Normal and tangential forces were extracted by a Kalman filter and used to
drive a simple algorithm for adjusting grip force in a manner similar to that employed by humans
(Johansson and Flanagan, 2007). The hand was able to grip a fragile Styrofoam coffee cup and to adjust
grip force incrementally as water was poured rapidly into it, thereby avoiding crush or slip failures.
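A grip rule of this human-like kind can be sketched minimally. The constants below are invented, not the controller's actual parameters: grip (normal) force is held a safety margin above the slip limit implied by the estimated tangential load and friction, so grip ramps up automatically as water is added:

```python
# Hedged sketch of grip-load coupling; constants are assumptions.
FRICTION_MU = 0.6      # assumed fingertip/cup friction coefficient
SAFETY_MARGIN = 1.3    # grip overshoot above the minimum slip-free force

def grip_command(tangential_load_n):
    """Normal force needed to prevent slip for this load, plus a margin."""
    slip_limit = tangential_load_n / FRICTION_MU
    return SAFETY_MARGIN * slip_limit

# As water is poured in, the tangential (weight) load grows and the
# commanded grip force grows proportionally with it.
for load in (0.5, 1.0, 2.0):   # newtons
    print(f"load {load:.1f} N -> grip {grip_command(load):.2f} N")
```

In the real controller the tangential and normal components come from the Kalman-filtered sensor estimates rather than being given; the proportional coupling with a modest margin is what avoids both crushing and dropping the cup.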
The multimodal sensing provided by the TAC produces synergistic benefits. For example, the thermal
and vibration signals obtained during exploration of an object depend on the location, magnitude and
direction of the applied force from the fingertip, which can be determined from the impedance sensors.
Some exploratory movements (such as texture discrimination) actually seem to be driven by their ability
to excite selectively a particular class of afferents. Rather than imitating the exact procedures and
classifications described for humans, it may be more productive to imitate the process whereby any
intelligent organism comes to know and categorize its environment through its senses. We hypothesize
that machines with a sufficiently rich set of receptors and coordinated movements can use artificial
intelligence methods to develop their own exploratory procedures and object classification schemes.
Robots were originally conceived as machines that would work side-by-side with humans in our
environments, rather than the glorified NC-milling machines now used on industrial assembly lines or the
engaging but clumsy anthropomorphic robots demonstrated at trade-shows. Despite these obvious
limitations, however, manufacturing companies have made and profited from huge investments in
industrial robots, starting with the Unimate of the 1950s and progressing to the almost completely
automated assembly lines for automobile chassis (Kamm, 2005). Yet industrial robots are almost
completely absent from the rest of the labor intensive steps of manufacturing automobiles such as
installation of wiring harnesses, upholstery, carpets, etc. Industrial robots are almost invariably
segregated from human workers lest they unwittingly cause injuries. They generally cannot identify or
handle common human tools. They function best in highly structured environments. The opportunities
for robots in military and police missions are obvious but their inability to deal with even the simplest
common objects such as doors and latches is legendary. While many tactile sensing technologies have
been developed, none has provided the combination of robustness, dynamic range and multimodality
required to support humanlike dexterity in realistic environments. We believe that the TAC technology
meets those requirements but will now require substantial development of biomimetic artificial
intelligence to make use of the distributed set of nonlinear sensors that it provides.
REFERENCES
Johansson, R.S. and Flanagan, J.R. Tactile sensory control of object manipulation in humans. In: Handbook of the
Senses, Vol: Somatosensation, 2007.
Wettels, N., Popovic, D., Santos, V.J., Johansson, R.S. and Loeb, G.E. Biomimetic tactile sensor array.
Advanced Robotics. 22(8):829-849, 2008.
ACKNOWLEDGEMENT
Initial development of TAC technology was supported by a grant from the National Academy Keck
Futures Initiative. Authors Loeb, Wettels, Fishel and Lin are principals in SynTouch LLC, a start-up
company developing TAC technology with SBIR grant support from the US NIH and Department of
Agriculture. Various aspects of this technology are covered by pending patent applications.
Synergistic Control of Hand Muscles Through Common Neural
Input
Marco Santello
Arizona State University
In the past two decades important features of the kinetics and kinematics of
human object grasping and manipulation have been characterized, providing significant
insight into how the Central Nervous System (CNS) controls the hand. In particular,
coordination patterns, or synergies, emerge in hand-object interactions and constrain
the behavior of the hand's multiple degrees of freedom. Although their origin
remains controversial, it is clear that they are the by-product of the interaction between
mechanical and neural mechanisms or constraints (for review see Schieber and
Santello, 2004). Linkages among the tendons of finger muscles, as well as the multi-
tendoned architecture of the extrinsic hand muscles, act as mechanical
constraints limiting the extent to which joints can be independently controlled. With
regard to neural constraints, the poor somatotopy of the cortical representation of hand
muscles has been identified as evidence for a synergistic, rather than individual, control
of hand muscles.
In the past few years we have studied common neural input (CNI) to motor units
of hand muscles to quantify the neural bases of digit force and movement synergies.
We reasoned that if common input were distributed across the motor nuclei of extrinsic muscles
(i.e., the long flexors of the fingers) it would enhance the coupling of forces exerted by
the digits. In this talk I will review the main findings from the literature and my laboratory on
the synergistic control of hand muscles, assessed through analyses in the time and
frequency domains, as well as EMG amplitude correlations. The main focus of my talk
is evidence that common neural input is distributed in a heterogeneous and muscle-pair
specific fashion, i.e., the neural coupling between
certain muscle pairs is stronger than that between other muscle pairs (Winges et al.,
2004, 2006; Johnston et al., 2005). Specifically, common neural input across motor
units of two antagonist intrinsic muscles of the index finger (First Dorsal Interosseus,
FDI; First Palmar Interosseus; FPI; Winges et al., 2008) is weaker than across intrinsic-
extrinsic muscle pairs (P < 0.001) as well as synergistic extrinsic flexors of the thumb
and index finger (Flexor Pollicis Longus, index finger compartment of Flexor Digitorum
Profundus; Winges et al., 2004, 2006). In contrast, common neural input
within each of these intrinsic muscles is ~3 times stronger than across them (Winges et
al., 2008). These findings have been confirmed by other studies indicating different
degrees of neural coupling across intrinsic vs. extrinsic muscle pairs (Johnston et al.,
submitted). This evidence suggests that the distribution of common neural input across
hand muscles may reflect the degree of independent vs. dependent neural control of
specific muscles and muscle pairs, and possibly their long-term adaptation for their role
in grasping and manipulation.
[Figure: strength of common neural input within and across hand muscle pairs (FDI-FDI, FPI-FPI, FPI-FDI, FPL-FPI, FPL-FDI, FPL-FDP2) during 2-digit and 5-digit grasping.]
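The frequency-domain analyses summarized above are typically based on coherence between motor-unit spike trains. A minimal, purely illustrative sketch (synthetic signals sharing a 10 Hz common drive, standing in for a muscle pair receiving common neural input) using `scipy.signal.coherence`:

```python
import numpy as np
from scipy.signal import coherence

# Illustrative only: two motor-unit-like signals that share a common 10 Hz
# drive plus independent noise. Real analyses use recorded motor-unit spike
# trains; the signals and drive frequency here are made up.
rng = np.random.default_rng(0)
fs = 1000                                       # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)                    # 30 s of data
common = np.sin(2 * np.pi * 10 * t)             # shared oscillatory drive
unit_a = common + rng.standard_normal(t.size)   # "motor unit" signal A
unit_b = common + rng.standard_normal(t.size)   # "motor unit" signal B

f, coh = coherence(unit_a, unit_b, fs=fs, nperseg=1024)
peak = f[np.argmax(coh)]
print(round(peak, 1))   # coherence peaks near the shared 10 Hz input
```

Independent drives would leave coherence near zero at all frequencies; the peak appears only where the two units receive common input.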
Original approaches to interpretation learning, and
modelling, from the observation of human
manipulation
Francesco Corato, Pietro Falco, Martin Lösch, Emilio Maggio, Rainer Jäkel, and Luigi Villani
Dip. di Ingegneria dell'Informazione, Seconda Università di Napoli, Aversa, Italy
Email: pietro.falco@unina2.it, fcorato@unina.it
FZI, Karlsruhe, Germany
Email: loesch@ira.uka.de, jaekel@ira.uka.de
OMG plc, Oxford, UK
Email: Emilio.Maggio@vicon.com
Dip. di Informatica e Sistemistica, Università di Napoli Federico II, Napoli, Italy
Email: lvillani@unina.it
Regarding interpretation learning and modelling of the human hand, the challenge is to build systems which are able to learn from the observation of manipulations demonstrated by a human. Different robot learning tasks in this context reach from simple recording and replaying of observed trajectories to the abstraction and generalization of the results of observed manipulations. Often the observation of the hand(s) only is not sufficient, but also the context, e.g. manipulated objects, has to be considered depending on the complexity of the learning goal: more advanced sensor fusion algorithms are needed, which integrate joint angular positions and marker positions.
Marker movements due to skin motion are a real problem for hand motion capture, as the error can have the same order of magnitude as the bone lengths. Existing models treat the marker positional error as a residual covariance. This assumption simplifies the mathematical formulation. However, residual error analysis conducted within DEXMART showed that it may be possible to improve the predictive power of the kinematic model by explicitly accounting for marker movements. To this end, we developed a novel kinematic calibration procedure that accounts for soft tissue artifacts by allowing the markers to move according to polynomial functions of the joint angles. To validate the marker motion model and to increase the knowledge about marker slipping, an MRI system has been used to capture the human hand in two different static poses, shown in Fig. 1. The MR images have been used to reconstruct a three-dimensional model of the hand bones according to a co-registration procedure whose details can be found in [4]. The measurement results show that the marker slipping over the bones is quite significant, ranging between 2.5 mm and 10 mm along the hand axial direction.
Fig. 1. MRI measurements for the marker RMM4 on open (left column) and closed (right column) hand. (a)-(b): marker spatial position (yellow arrow).
Missing marker measurements due to occlusions can heavily affect the reliability of the joint angle measurements. This problem was tackled from two directions: (i) investigation about the optimal number of markers and their positioning; (ii) study about the fusion of data-glove angular information with the marker measurements. Regarding the marker set, two possible configurations were tested: a minimal marker set with approximately one marker per segment and a redundant marker configuration with three or more markers per segment. To handle the occlusion problem using the minimal marker set, a sensor fusion algorithm is presented in [1]. It fuses, through a Kalman-like algorithm, the spatial positions of a set of reflective markers and measurements from low-cost angular sensors disposed on the finger joints. The algorithm allows real-time tracking of complex hand movements.
B. Kinematic model of the human hand
Another key topic is the question of what is the best kinematic model for hand data. There is no generally accepted methodology for making such a choice. Ideally, an objective comparison between different kinematic models would require the actual joint angle values measured with a procedure that guarantees higher accuracy than motion capture measurements. Unfortunately, acquiring accurate ground truth data almost always involves invasive procedures. Many have used intra-cortical (bone) pin-mounted markers or capitalized on external bone fixation already in place [2]. As direct measurements are impractical, researchers have evaluated other desirable model qualities such as repeatability, or have used synthetic or semi-synthetic data [3]. Although repeatability is a well-established technique for model validation, it does not tell us how well the model explains the data. To this end, OMG studied statistical model selection for hand kinematics and tested four different algorithms, namely: Bayesian Information Criterion (BIC), Akaike Information Criterion (AIC), Consistent AIC (CAIC), and a procedure based on bootstrapping the data. Among these methods, the results show that BIC, CAIC and bootstrapping can provide a good insight on the number of DoFs to assign to each articulation of the hand.
After the selection of the number of DoFs, a quantitative analysis of joint interdependencies has been done, aimed at studying kinematic synergies during reaching and hand grasping activity. Four male subjects have been asked to perform two tasks: a cylinder grasp with the right hand and a voluntary flexion and extension of each individual finger. From the acquired data, the correlation coefficient matrix for all the DoFs has been computed, aimed at quantifying the degree of correlation of each DoF with all the others. The results obtained in this study will be used both for reducing the complexity of trajectory planning and for improving the sensor fusion algorithms of kinetostatic data, owing to the increased a priori knowledge about the hand kinematics.
III. LEARNING FROM OBSERVATION
To find the most relevant features for the learning, an approach in two steps is investigated. In the first step, the available sensors are exploited to extract and derive as many features as possible. These features include, but are not limited to, joint angles, trajectories of all joints and fingertips, their velocities, and statistical features of these points. To accommodate two-handed manipulations, features like correlations of movements of both hands and temporal and spatial synchronization points are also investigated. Using training data from fundamentally different actions, ranging from simple to complex, all features are analyzed and ranked according to their utility for separating between different action classes. In the second stage, from the complete set of features only the most relevant for a current learning task are selected, using an information content-based method, for the actual training process. This minimizes the noise in the training data. A complementary method for data reduction is to abstract from sets of fingers, which serve the same purpose in a grasp or manipulation action, to a single finger which combines force and contact information of the whole set.
ACKNOWLEDGMENT
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 216239 (DEXMART project).
REFERENCES
[1] P. Falco. Sensor Fusion based on the EKF for human and robotic hand tracking application. Master Degree Thesis, 2008.
[2] J. Fuller, L. Liu, M.C. Murphy, and R.W. Mann. A comparison of lower-extremity skeletal kinematics measured using skin- and pin-mounted markers. In 3-D Analysis of Human Movement, volume 16, pages 219-242, 1997.
[3] I.W. Charlton, P. Smyth, and L. Roren. Repeatability of an optimized lower body model. Gait and Posture, 20:213-221, 2004.
[4] The DEXMART consortium. Kinematic Modelling of the Human Hand. DEXMART Deliverable D1.1, 2008.
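The information-criterion step in the abstract above (scoring candidate models so that extra degrees of freedom must earn their keep) can be sketched on synthetic data. The "joint model" here is a hypothetical polynomial coupling between two angles, not one of the DEXMART models:

```python
import numpy as np

# Hypothetical sketch of BIC-based model selection: candidate models of
# increasing complexity (polynomial order, standing in for the number of
# DoFs) are scored by BIC = n*ln(RSS/n) + k*ln(n); the penalty term keeps
# the score from rewarding overfitting. Data are synthetic.
rng = np.random.default_rng(1)
x = np.linspace(-1.0, 1.0, 200)                     # e.g. a driving joint angle
y = 0.8 * x + 0.3 * x**2 + 0.05 * rng.standard_normal(x.size)  # coupled angle

def bic(order):
    coeffs = np.polyfit(x, y, order)
    resid = y - np.polyval(coeffs, x)
    n, k = x.size, order + 1                        # samples, free parameters
    return n * np.log(np.mean(resid**2)) + k * np.log(n)

best = min(range(1, 7), key=bic)
print(best)   # order selected by BIC (the true generating order is 2)
```

AIC differs only in the penalty (2k instead of k ln n), which is why it tends to admit more degrees of freedom than BIC or CAIC on the same data.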
Development of artificial grip reflexes that utilize
the adduction/abduction capabilities of human fingers
Man and machine can work together on levels ranging from the earthbound remote control of a
satellite-repair robot to the intimate connection between an amputee and his prosthetic hand.
One common feature for any man-machine system, independent of machine proximity, is the
existence of delays in information flow between man and machine. In the context of a
neuroprosthesis, delays can exist because of afferent and efferent signal limitations.
Techniques such as targeted muscle reinnervation (Kuiken, et al., J Amer Med Assoc, 2009) hold the
promise of bringing a conscious perception of tactile feedback to the user and increasing the
number of channels with which a user can control a complex hand and arm. One open question
is whether users will be able to respond quickly enough through man-machine interfaces in the
face of unexpected perturbations.
I. Introduction
Dexterous function arises from the ability of the nervous system to orchestrate numerous muscles and joints to
meet mechanical demands. We find that the neural controller is severely taxed even for ordinary tasks like
making contact with a surface or rubbing a surface. These results have implications for (i) longstanding questions
about neuroanatomy and notions of muscle redundancy, and they begin to explain the vulnerability of dexterous
function to development, aging and neuromuscular pathology; and (ii) the design of versatile robotic
manipulators.
II. Motion-to-force transitions
Mathematical modeling and analysis revealed that the underlying muscle coordination pattern switches abruptly, in both direction and magnitude, between motion and static force production (Fig. 1). There was no change in initial force magnitude for catch trials where the tapping surface was surreptitiously lowered and raised (p=0.93). We conclude that the nervous system predictively switches between muscle coordination patterns. This neurally demanding and time-critical strategy for routine motion-to-force transitions with the fingertip may explain the existence of specialized neural circuits for the human hand.
Figure 1. Switch in direction (a: angular deviation from the reference vector, deg) and magnitude (b: normalized vector magnitude of the muscle coordination pattern) of the muscle coordination pattern vector between motion and static force production; x-axes: time relative to contact (ms).
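The switch between control laws described above can be illustrated with a toy hybrid controller. All gains, masses, stiffnesses and thresholds below are invented for illustration; this is not the authors' model. A position law drives the "fingertip" to the surface, and at the sensed moment of contact the controller switches to a force law:

```python
# Toy illustration of switching control laws at a motion-to-force transition
# (all parameters hypothetical). A point-mass "fingertip" is servoed toward
# a surface with a position law, then switches to a force law the moment
# contact force is sensed.
KP, KD = 40.0, 8.0      # position-law gains (pre-contact)
F_TARGET = 2.0          # desired fingertip force after contact, N
TARGET_Z = -0.01        # aim slightly below the surface to guarantee contact
K_SURF = 500.0          # surface stiffness, N/m
m, dt = 0.1, 0.001      # fingertip mass (kg) and time step (s)

def control(z, dz, f_sensed):
    """Motion law in free space; force law once contact is sensed."""
    if f_sensed <= 0.0:
        return KP * (TARGET_Z - z) - KD * dz    # drive toward the surface
    return -F_TARGET - KD * dz                  # press with the target force

z, dz, f = 0.05, 0.0, 0.0                       # start 5 cm above the surface
for _ in range(4000):                           # 4 s of simulation
    u = control(z, dz, f)
    f = K_SURF * max(0.0, -z)                   # surface pushes back when z < 0
    ddz = (u + f) / m
    dz += ddz * dt
    z += dz * dt
print(round(f, 2))   # settles near the 2 N force target
```

The alternative the conclusions warn against, relying purely on passive endpoint impedance, would use a single unswitched law and accept whatever contact force the spring-like behavior produces.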
III. Combination of motion and force
Numerous studies of limbs and fingers propose that force-velocity
properties of muscle limit maximal voluntary force production during anisometric tasks, i.e. when muscles are
shortening or lengthening. Although this proposition appears logical,
our study on the simultaneous production of fingertip motion and force disagrees with this commonly held
notion (Keenan et al., In Press). We asked eight consenting adults to use their dominant index fingertip to
maximize voluntary downward force against a horizontal surface at specific postures (static trials), and also
during an anisometric rubbing task of rhythmically moving the fingertip along a target line at slow, moderate,
and fast speeds (Fig. 2).
IV. Conclusions
1. The nervous system responded to the evolutionary pressures (i.e., mechanical constraints) of transitioning
from motion to well-directed force production while maintaining finger posture by developing a time-critical
strategy of switching control laws. This suggests roboticists should consider this option carefully before
favoring passive solutions that rely on peripheral endpoint impedance to make this transition.
2. While the biomechanical system of the fingers is over-actuated for some tasks, we find that it is likely not
redundant for ordinary tasks. Creating over-actuated systems may be a key to creating versatile manipulators.
Figure 2. Maximal downward force diminished when motion was added to the task, but was not affected by
movement speed or position along the target line. Absolute (A) and normalized (B) maximal downward forces
are shown for static (circles) and dynamic anisometric (squares) trials.
V. References
Keenan KG, Santos VJ, Venkadesan M, and Valero-Cuevas FJ. Maximal voluntary fingertip force production is not limited by
movement speed in combined motion and force tasks. J Neuroscience, In Press.
Venkadesan M and Valero-Cuevas FJ. Neural control of motion-to-force transitions with the fingertip. J Neuroscience 28: 1366-1373,
2008.
VI. Acknowledgements
We thank R. V. McNamara III for technical support and experiment execution, F. A. Medina for EMG processing, S. Song for
machining and instrumentation, Drs. M. Price and S. Backus for fine-wire electrodes placement. This material is based upon work
supported by the Whitaker Foundation, NSF Grants 0312271 & 0237258, and NIH Grants HD048566 AR050520 and AR052345 to
FVC. Its contents are solely the responsibility of the authors and do not necessarily represent the official views of the National
Institute of Arthritis and Musculoskeletal and Skin Diseases (NIAMS), the National Institute of Childhood and Human Development
(NICHD), the NIH, the Whitaker Foundation or the NSF.
Human hand kinematics based on MRI imaging
Georg Stillfried and Patrick van der Smagt
Institute of Robotics and Mechatronics
German Aerospace Center (DLR Oberpfaffenhofen)
Anthropomorphic robot hands have come to a technical level where understanding exact human hand kinematics
becomes relevant, e.g. the hand/arm system that is presently being developed at DLR (Fig. 2, Grebenstein and van der
Smagt, 2008). Human hand kinematics have been investigated through cadaver hands (e.g. Hollister et al., 1992 and
1995) and optical motion tracking of surface markers (e.g. Cerveri et al., 2005). A problem with the former is that
tissue properties might be altered due to tissue necrosis. With the latter, the motion of the skin relative to the bones
leads to so called soft tissue artifacts (STA) that negatively influence the quality of the results (Ryu et al., 2006). To
allow in vivo measurements and to avoid STA, we recorded finger postures by magnetic resonance imaging (MRI,
Table 1 and Fig. 2). We used a method similar to the one described by Miyata et al. (2005), but with a much larger
number of hand postures, resulting in a model with multi-degree-of-freedom (multi-DoF) joints.
MRI images of the hand were taken in fifty different static postures (Fig. 3). The postures were chosen so that for
each joint, the extreme poses as well as intermediate ones are included; furthermore, the opposition movement
between thumb and fingers is covered extensively. One hand posture is defined as reference. The pose of each bone in
the other postures is described by a rotation and translation from the reference posture (Fig. 4). The first step for
determining the pose is the segmentation of each MRI image, i.e. identification of the point set of each bone. Next, a
statistical method by Hillenbrand (2008) is used to find the rotation and
translation parameters that are necessary to match the point sets of the
same bone. From this, the relative poses with respect to the neighbor
bone are calculated (Fig. 5).
Seven joint models with varying degrees of freedom and intersecting
and non-intersecting axes of rotation are defined to be valid (Fig. 6): A
joint with one rotation axis (1DoF), a joint with one DoF but two
coupled rotation axes (1DoF_2c), a joint with two rotation axes (2DoF),
a joint with two rotation axes that are orthogonal to each other
(2DoF_o), a joint with two non-intersecting rotation axes (2DoF_ni) and
two joints with three mutually orthogonal axes that are oriented with the
bone geometry (3DoF and 3DoF_ni).
For each joint, the parameters of the joint models are adapted
numerically to fit the measured bone poses. This is done by numerically
minimizing the orientational and the translational discrepancy between
the modeled and the measured bone poses, using the fminsearch function
within Matlab. The orientational discrepancy is defined as the rotation
angle of a rotation that is necessary to match the orientation of the
modeled and the measured bone pose (Fig. 7). The translational
discrepancy is defined as the mean distance of the bone surface point in
the modeled and the measured pose (Fig. 8).
By setting a limit for the discrepancy values, each joint was assigned one of the seven joint models. For
example, a limit of 6 degrees mean rotational discrepancy and 3 mm mean displacement leads to a kinematic
hand model with 21 DoF in the fingers plus 3 DoF in the palm, shown in Fig. 9. In this model, the following
joint models are used: a two-DoF joint with non-intersecting axes (2DoF_ni) for the thumb saddle joint,
two-DoF joints with orthogonal, intersecting axes (2DoF_o) for the metacarpophalangeal joints and one-DoF
joints (1DoF) for the interphalangeal and the intermetacarpal joints.
Fig. 9 DLR's kinematic hand model with mean errors smaller than 6° and 3 mm. With 2-DoF joints, the first
joint axis is drawn as a red arrow and the second one as a green arrow.
Virtual grasping experiments will be conducted with this hand model to see if it has advantages over the simplified
kinematics that were used so far for the design of the robotic hand/arm system.
Fig. 7 Orientational discrepancy between two poses.
Fig. 8 Translational discrepancy is defined as the mean distance between surface points. Here five example points are shown.
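The numerical fitting described above can be sketched in a few lines. This is a hypothetical one-DoF example with synthetic bone orientations rather than MRI-derived poses, using scipy's Nelder-Mead minimizer (the analogue of Matlab's fminsearch):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation as R

# Hypothetical 1-DoF example of the fitting procedure: recover a fixed hinge
# axis from synthetic "measured" bone orientations by minimizing the mean
# orientational discrepancy, i.e. the angle of the rotation that aligns the
# modeled and the measured orientation.
true_axis = np.array([0.6, 0.8, 0.0])                 # synthetic ground truth
measured = [R.from_rotvec(a * true_axis) for a in np.deg2rad([10, 25, 40, 60])]

def mean_discrepancy(params):
    az, el = params[:2]                               # axis direction (angles)
    axis = np.array([np.cos(el) * np.cos(az),
                     np.cos(el) * np.sin(az),
                     np.sin(el)])
    thetas = params[2:]                               # one joint angle per pose
    errs = [(R.from_rotvec(th * axis).inv() * meas).magnitude()
            for th, meas in zip(thetas, measured)]
    return np.mean(errs)                              # radians

x0 = np.concatenate([[0.0, 0.0], np.full(len(measured), 0.5)])
res = minimize(mean_discrepancy, x0, method="Nelder-Mead",
               options={"maxiter": 10000, "xatol": 1e-9, "fatol": 1e-12})
print(round(np.rad2deg(res.fun), 3))   # mean discrepancy in degrees, near zero
```

The full procedure additionally minimizes the translational discrepancy and repeats the fit for each of the seven candidate joint models before applying the 6°/3 mm acceptance limits.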
Cerveri, P.; Lopomo, N.; Pedotti, A. & Ferrigno, G. (2005), 'Derivation of Centers of Rotation for Wrist and Fingers in a Hand Kinematic Model: Methods
and Reliability Results', Annals of Biomedical Engineering 33, 402-412.
Grebenstein, M. & van der Smagt, P. (2008), 'Antagonism for a Highly Anthropomorphic Hand-Arm System', Advanced Robotics 22(1), 39-55.
Hillenbrand, U. (2008), Pose Clustering From Stereo Data, in 'Proceedings VISAPP International Workshop on Robotic Perception', pp. 23-32.
Hollister, A.; Buford, W. L.; Myers, L. M.; Giurintano, D. J. & Novick, A. (1992), 'The Axes of Rotation of the Thumb Carpometacarpal Joint', Journal of
Orthopaedic Research 10, 454-460.
Hollister, A.; Giurintano, D. J.; Buford, W. L.; Myers, L. M. & Novick, A. (1995), 'The Axes of Rotation of the Thumb Interphalangeal and
Metacarpophalangeal Joints', Clinical Orthopaedics and Related Research 320, 188-193.
Miyata, N.; Kouchi, M.; Mochimaru, M. & Kurihaya, T. (2005), Finger Joint Kinematics from MR Images, in 'IEEE/RSJ International Conference on
Intelligent Robots and Systems'.
Ryu, J. H.; Miyata, N.; Kouchi, M.; Mochimaru, M. & Lee, K. H. (2006), 'Analysis of skin movement with respect to flexional bone motion using MR
images of a hand', Journal of Biomechanics 39, 844-852.
Dynamic Hand Simulation with Strands
Shinjiro Sueda Dinesh K. Pai
Sensorimotor Systems Laboratory, University of British Columbia
e-mail: {sueda, pai}@cs.ubc.ca
2 Our Approach
Previous approaches based on lines-of-force (e.g. [Delp and Loan 2000]), while effective and robust for
large-scale muscles and skeleton, are not well-suited for complex mechanical systems like the hand, where
multiple contact constraints make it difficult to route muscles and tendons effectively. Our strand-based
simulation framework combines the robustness and simplicity of lines-of-force models with the accuracy of
solid mechanics models. Our strand-based simulation framework has the following desirable attributes:
3 Results
Using the strand-based framework, we are able to simulate various clinical tests, such as the retinacular-plus
test. We are currently working on the simulation of various pathological deformities, such as the swan-neck
deformity or the Boutonniere deformity, and their treatments (see Fig. 1).
Figure 1: Flexion (left), extension (top), Swan-neck (middle) and Boutonniere (bottom) deformities. Included
in the simulation are the extensor mechanism and Landsmeer's oblique band. The three extrinsic tendons
(EDC, FDP, FDS) and the two interossei (DI, PI) are pulled using an external load, and the lumbrical is
modeled as a contractile strand, originating from the FDP and inserting into the extensor mechanism.
References
Delp, S. L., and Loan, J. P. 2000. A computational framework for simulating and analyzing human and animal movement. Computing in Science & Engineering 2, 5, 46-55.
Sueda, S., Kaufman, A., and Pai, D. K. 2008. Musculotendon simulation for hand animation. In ACM Trans. Graph. (Proceedings of SIGGRAPH), vol. 27, 83:1-83:8.
Valero-Cuevas, F., Yi, J.-W., Brown, D., McNamara, R., Paul, C., and Lipson, H. 2007. The tendon network of the fingers performs anatomical computation at a macroscopic scale. IEEE Transactions on Biomedical Engineering 54, 6, 1161-1166.
Wilkinson, D. D., Weghe, M. V., and Matsuoka, Y. 2003. An extensor mechanism for an anatomical robotic hand. In Proceedings of the 2003 IEEE International Conference on Robotics & Automation.
Zancolli, E. 1979. Structural and Dynamic Bases of Hand Surgery, 2nd ed. J. B. Lippincott Company.
Movement Intermittency and Variability
in Human Arm Movements
Ozkan Celik, Qin Gu, Zhigang Deng, Marcia K. O'Malley
[Figure data traces omitted: thumb and index fingertip force components (Fx, Fy, Fz), EMG of the First Dorsal Interosseous (V), and test object and joint rotation angles (deg) versus time (s) across five trials, with response lags annotated.]
Figure 2: Test object rotation angle, index finger joint angles (top) and thumb joint angles (bottom) are shown for the 100g, CW
case for a single right-hand dominant subject. The object rotation angle was used to set t=0 (vertical line). The black markers
indicate the moment of maximum object rotation due to the perturbation. Positive joint angles indicate flexion or adduction
while negative joint angles indicate extension or abduction. The lag between maximum object rotation and joint angle response
(peak or valley) across the five trials is shown as mean(std) wherever possible. Joints: CMC = carpometacarpal, MCP =
metacarpophalangeal, IP = interphalangeal, PIP = proximal interphalangeal, DIP = distal interphalangeal; Motions: FE =
flexion/extension, AA = adduction/abduction.
CONCLUSIONS
This study suggests that each digit plays a different role depending on the direction of the rotational
perturbation. Furthermore, it was shown that these perturbations elicit kinematic responses involving
adduction/abduction in addition to flexion/extension. However, additional analysis is still required to
further understand the delayed thumb kinematic responses. Future studies may look at the effects that
cutaneous anesthesia has on the response of the fingertips to rotational perturbations and the
randomization of our experimental trials through the use of stepper motors. Characterizing reflexive grip
responses for adduction/abduction in conjunction with flexion/extension will be critical to designing
autonomous reflexes for artificial hands, especially as prosthetic and robotic finger kinematics become
more anthropomorphic.
Better Manipulation with Human Inspired Tactile Sensing
Ravinder S. Dahiya, Monica Gori, Giorgio Metta, Giulio Sandini
Robotics, Brain and Cognitive Sciences Department,
Italian Institute of Technology, Via Morego 30, Genoa, 16163, Italy
Introduction: Understanding what properties of the human hand should be incorporated in robotic hands has been
an active area of investigation for a long time. Good strides have been made in designing robotic hands, and a number
of working dexterous robotic hands have been built [1, 2]. However, the use of touch sensory (both
cutaneous/tactile/extrinsic as well as kinesthetic/intrinsic) information for dexterous manipulation still lags the
mechanical capability of such hands. This work presents how extrinsic touch sensing, the robotic analogue of the
human cutaneous sense of touch, can help improve the manipulation capability of robotic hands. Some features
of the human cutaneous sense, namely the role of skin biomechanics and skin microstructures, the spatio-temporal
response, and information coding and transfer, are presented, as they may help extend the usage of tactile sensing in
robotic manipulation. If introduced, such features can help extend the intrinsic touch sensing and tendon-driven
gross manipulation capability of present-day robotic hands to precise manipulation. The discussion is followed
by a presentation of the POSFET (Piezoelectric Oxide Semiconductor Field Effect Transistor) based tactile sensing
arrays, inspired by the human cutaneous/tactile sense of touch, for the fingertips of the humanoid robot iCub [3].
Tactile sensing for Manipulation by humans and robots: It is difficult to hold or safely manipulate real-world
objects without physically touching them; the sense of touch is essential to any manipulative task. Robot
guidance and force-based control have mainly depended on kinaesthetic information from intrinsic tri-axial or 6D
force sensors located close to the wrist, and on the actuation of tendon-driven fingers by motors located in the forearm.
However, transmission dynamics such as friction, backlash, compliance, and inertia make it difficult to accurately
sense and control endpoint positions and forces based on intrinsic sensors and actuator signals alone, which points
to the insufficiency of kinaesthetic information for manipulation in robotics [4]. Thus, there is a need to
augment the kinesthetic information with tactile information. In humans, impaired tactile sensibility makes
manipulation difficult because the brain lacks the information about mechanical contact needed to plan and control
manipulation, which is centered on the mechanical events that mark transitions between consecutive action phases
[5]. Signals from tactile afferents play a decisive role during such transitions. As an example, the various phases of a
grasping action, namely reaching, loading, lifting, holding, replacing and unloading, are characterized as discrete
sensory events by specific tactile afferent responses. The FA (fast adapting) receptors respond to transient stimulation:
FA-I afferents respond at the end of the reaching and unloading phases, and FA-II afferents at the beginning of the
loading and unloading phases. Similarly, SA-I (slow adapting) and SA-II afferents respond when static forces are
applied to the object. The activity of these receptors during the various phases of a grasp gives an idea of the contact
timing, contact site, direction of contact forces and shape of the contact zone. The brain uses such tactile afferent
information when humans manipulate objects, and similar information is needed by robots. Measuring material
properties such as hardness and temperature, in addition to contact forces, with tactile sensors can also be useful for
manipulating real-world objects.
Human tactile sensing for better design of Tactile Sensors: The design of a meaningful robotic tactile
sensing system should be guided by a broad but integrated knowledge of how tactile information is encoded and
transmitted at the various stages of interaction via touch. In this context, the various studies on human tactile sensing
provide a good starting point. Such studies are also important given the lack of any rigorous robotic tactile sensing
theory that could help in specifying important system parameters such as sensor density, resolution, location, and
bandwidth - parameters that are also likely to be task or application dependent.
The human skin structure is quite complex, with tactile information elaborated by different kinds of mechanoreceptors
embedded in the skin at specific locations and depths, transducing signals with specific spatio-temporal
characteristics [6]. The density of the various receptors also varies with body site. As an example, FA-I receptors have
a higher density than SA-I receptors on the fingertips [5], reflecting the importance of extracting the spatial features of
dynamic mechanical events and supporting the need for dynamic tactile sensors in robotics.
The mechanoreceptors are not just transducers; both independently and as a group, they also perform some local
processing. The different firing rates of the mechanoreceptors enable their independent coding of contact events. When
considered as a group, the relative timing of their first spikes provides precise information about the shape of the
contacted surface as well as the direction of the force exerted on the hand, and it does so fast enough to account for the
speed with which tactile signals are used in object manipulation tasks [5]. Such processing of data is useful as it makes
optimum use of the limited throughput of the nervous system. For robotics, these results underline the importance not
only of having tactile sensing arrays on a robotic hand, but also of locally processing the data they collect.
Minimizing the data through local processing not only makes optimum use of a robot's limited computational resources,
but also facilitates the speedy transfer of contact information for any control task.
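The relative first-spike timing idea can be sketched as a toy rank-order code; the integrate-to-threshold scheme and every parameter here are illustrative assumptions, not a model of real afferents:

```python
import numpy as np

def first_spike_order(drive, thresh=1.0, dt=0.001, t_max=0.2):
    """Integrate each taxel's drive to a fixed threshold and return the
    taxel indices ordered by first 'spike' time (earliest first). The
    rank order alone carries stimulus information, so only one event
    per taxel needs to be transmitted downstream."""
    drive = np.asarray(drive, dtype=float)
    v = np.zeros(len(drive))
    t_first = np.full(len(drive), np.inf)
    for step in range(int(t_max / dt)):
        v += drive * dt
        newly = (v >= thresh) & np.isinf(t_first)
        t_first[newly] = step * dt
    return np.argsort(t_first)

# A tilted contact drives taxel 0 hardest and taxel 3 least:
order = first_spike_order([40.0, 30.0, 20.0, 10.0])
# order -> [0, 1, 2, 3]: the most strongly driven taxel fires first
```

Because only the order of events is transmitted, rather than full analog waveforms, such a scheme mirrors the data-reduction benefit attributed to local processing above.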
The elasticity of the skin varies with depth, which can influence the intensity of the tactile signal that the receptors
receive. Further, the skin contains ridge-like patterns, comprising papillary ridges (fingerprints), intermediate
ridges and limiting ridges [7]. Both papillary and intermediate ridges are believed to affect the response of the
various receptors in the skin to different degrees [7-9]. At their tips, intermediate ridges house the Merkel cell
complex and hence are believed to influence the response of SA-I receptors [8]. Similarly, in addition to
enabling better grip [10], the fingerprints or papillary ridges are also believed to enhance the tactile sensitivity of
Pacinian corpuscles and hence help in feeling fine textures [9]. Mimicking the complex human skin structure - with
receptors embedded at specific depths and locations, performing different functions and responding optimally in
different frequency ranges - is a challenging technological issue. Nonetheless, adding a functional equivalent of a
mechanoreceptor (e.g. the Pacinian corpuscle) to an ordinary tactile sensor, by using a soft protective rubber cover
patterned with fingerprint-like microstructures, can help broaden the usage of tactile sensing and bring the
level of tactile sensitivity and acuity that humans possess to robotic devices.

Fig. 1: Concept of the POSFET tactile sensing array and an SEM picture of the implemented POSFET tactile
sensing device. Like the mechanoreceptors in human skin, each POSFET device is capable of sensing and
processing the touch information at the same site. The output of the POSFET touch sensing device is linear
(with a slope of 50 mV/N) over the tested range of normal forces (0.15-5 N).
Human Inspired POSFET Tactile Sensing Arrays: It is desirable to have tactile arrays or distributed tactile
sensors with the density and spatial distribution of taxels (tactile elements) varying with body site. For sites like the
fingertips, a large number of fast-responding (of the order of a few milliseconds) taxels are needed in a small space
(~1 mm spatial resolution). Local processing and the use of fewer wires are also desired. With these
requirements in view, tactile sensing arrays using POSFET touch sensing devices have been developed. The designs of
both the POSFET devices and the array are inspired by the cutaneous sense in humans. The tactile sensing device is
fabricated by spin-coating a piezoelectric polymer (PVDF-TrFE) film on the gate area of MOS (Metal Oxide
Semiconductor) devices. A force applied to the piezoelectric polymer generates charge, which in turn modulates the
charge in the induced channel - thereby converting force into voltage. Contrary to conventional approaches - where
transducers and conditioning electronics are separate entities connected through wires - each POSFET touch sensing
element is an integral unit comprising a transducer and a transistor. As shown in Fig. 1, each POSFET element, as an
integral sensotronic unit, is capable of sensing and partially processing the touch signal at the same site, as is done
by the receptors in human skin. Further, the absence of any wire between the transducer and the transistor can help
solve the wiring complexity that is one of the major issues hindering the wide usage of distributed tactile sensing. A
system on chip or system in package with on-chip conditioning electronic circuitry and local processing will further
improve the overall performance of POSFET tactile sensing arrays and the utilization of the tactile data in a control
task. To match the spatial resolution and acuity of human fingertips, the size of each touch element is 1 mm x 1 mm
and the center-to-center distance between adjacent elements is 1 mm. POSFET elements have a linear response up to
5 N and constant gain over the tested frequency range of 2.13 kHz. In their present form, POSFET tactile sensing
arrays use a plain thin rubber cover, i.e. one without any microstructure like fingerprints or intermediate ridges.
Future POSFET tactile sensing arrays, however, will have a cover patterned with microstructures, as in human skin.
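Given the reported linear response (a 50 mV/N slope over the 0.15-5 N tested range, per Fig. 1), converting a POSFET reading to force is a single scaling; the function name and the out-of-range flag below are our own additions for illustration:

```python
SLOPE_V_PER_N = 0.050          # reported device slope: 50 mV/N
F_MIN_N, F_MAX_N = 0.15, 5.0   # tested normal-force range (N)

def posfet_voltage_to_force(v_out):
    """Convert a POSFET output voltage (V) to normal force (N) and flag
    whether the estimate lies inside the characterized linear range."""
    force = v_out / SLOPE_V_PER_N
    return force, F_MIN_N <= force <= F_MAX_N

force, in_range = posfet_voltage_to_force(0.100)  # 100 mV -> 2.0 N
```

Readings mapping outside the characterized range would need a separate calibration, since linearity is only claimed for the tested interval.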
Conclusion: The ways in which biological sensory systems are structured and process information to control
behavior may not always lead to the best engineering solutions for robots. Nevertheless, they provide useful insights
into how behaving organisms respond to dynamically changing environments, and a comprehensive
multi-level conceptual framework within which to organize the overall task of designing sensors for robotic
systems. This approach may bring up new ideas that can help raise the tactile sensitivity and acuity of
robots to the human range. With this premise, human inspired POSFET tactile sensing arrays have been developed
and presented here. The POSFET tactile sensing arrays are well suited to dynamic contact events, like the FA
receptors in the skin. However, the structure can be modified to include other modes of transduction that are
sensitive to static or quasi-static contact events.
References:
[1] G. Bekey, Autonomous Robots: from Biological Inspiration to Implementation and Control. Cambridge: MIT Press, 2005.
[2] Touch Bionics, "The i-LIMB Hand," http://www.touchbionics.com/, 2007.
[3] www.robotcub.org, "iCub," 2009.
[4] R. S. Dahiya, G. Metta, M. Valle, and G. Sandini, "Tactile Sensing: From Humans to Humanoids," IEEE Transactions on
Robotics, 2009 (in press).
[5] R. S. Johansson and J. R. Flanagan, "Coding and use of tactile signals from the fingertips in object manipulation tasks," Nature
Reviews Neuroscience, vol. Advance Online Publication, pp. 1-15, 2009.
[6] K. O. Johnson, T. Yoshioka, and F. Vega-Bermudez, "Tactile functions of mechanoreceptive afferents innervating the hand," J
Clin Neurophysiol, vol. 17, pp. 539-58, Nov 2000.
[7] G. J. Gerling and G. W. Thomas, "Fingerprint Lines may not directly affect SA-I mechanoreceptor response," Somatosensory
and Motor Research, vol. 25, pp. 61-76, 2008.
[8] N. Cauna, "Nature and functions of the papillary ridges of the digital skin," Anatomical Record, vol. 119, pp. 449-468, 1954.
[9] J. Scheibert, S. Leurent, A. Prevost, and G. Debregeas, "The Role of Fingerprints in the Coding of Tactile Information Probed
with a Biomimetic Sensor," Science, vol. 323, pp. 1503-1506, 2009.
[10] T. Maeno, K. Kobayashi, and N. Yamazaki, "Relationship between the Structure of Human Finger Tissue and the Location of
Tactile Receptors," Bulletin of JSME International Journal, vol. 41, pp. 94-100, 1998.
Title: Quantification of wrist/finger flexion forces and EMGs as a function of limb loading in chronic
hemiparetic stroke
Authors: Laura C. Miller1,2, Julius P.A. Dewald1-4
Departments of Biomedical Engineering1, Physical Therapy and Human Movement Sciences,2 Interdepartmental
Neuroscience Program3, and Physical Medicine and Rehabilitation4, Northwestern University, Chicago, IL, USA
Introduction
The expression of the flexion synergy in the paretic upper limb of moderately to severely affected
hemiparetic stroke survivors may cause disabling coupling between shoulder abductors and elbow, wrist
and finger flexors [1]. This would explain the hypertonia in the wrist/finger flexors and weakness in the
wrist/finger extensors frequently observed following stroke. Robotic assistive devices for the hand, such
as extension-aiding gloves and exoskeletons, have the potential to recover hand function, as do robot-
aided rehabilitation protocols, which can involve sophisticated and precise active or passive mechanical
manipulations of the hand/wrist. However, better identification of the physiological and neural
mechanisms underlying stroke-induced movement synergies must be made before these technologies can
reach their full potential. Understanding the underlying mechanisms and exact expression of synergies is
crucial for the creation of novel therapeutic interventions that are physiologically and impairment based,
and possibly more effective than conventional therapies.
Studies that have quantified elements of hand dysfunction following stroke have focused on the hand
in isolation, so the behavior of the hand during movements of more proximal joints in the upper limb has
not been adequately characterized [2-8]. Sukal et al. quantified reductions in active elbow extension for
greater levels of shoulder abduction, according to the flexion synergy [9]. These results agree with the
abnormal coupling described under static conditions between shoulder abduction and elbow flexion in
earlier studies [10, 11]. If abnormal torque coupling exists between the elbow and shoulder then it would
be reasonable to hypothesize that this abnormal coupling would also be present at the wrist and fingers,
consistent with clinically described stereotypical movement patterns following hemiparetic stroke [1, 12].
This study measured isometric forces generated by the paretic, non-paretic, and healthy control fingers
and wrist during lifting and reaching, at 7 levels of shoulder abduction loading, using the ACT3D robotic
device [13].
Methods
Eight individuals with chronic hemiparetic stroke (mean age 62 ± 9 yrs; range 22-270 months post-
stroke) and three healthy controls (mean age 42 ± 14 yrs) participated in this study. All stroke-affected
participants had severe to moderate upper limb impairment according to the Upper Limb Fugl-Meyer
Motor Assessment, with scores ranging from 10 to 37 (mean 20.6) out of a possible 66, and had severe to
moderate hand impairment according to the Chedoke-McMaster Hand Assessment, with scores ranging
from 2 to 5 (mean 2.9) out of a possible 7.
Each participant placed the arm in the ACT3D robotic device [13]. The rigid forearm orthosis of the
ACT3D had been modified to include the Wrist/Finger Force Sensing module (WFFS) [14], which can
measure isometric forces generated by the fingers and wrist and by the thumb. Participants were
instructed to move the limb into a home position of 85° shoulder abduction, 90° elbow flexion, and 40°
shoulder flexion and to hold it for 1 s. Timing and limb positioning were verified using real-time visual
feedback on the ACT3D monitor. After holding the home position, participants were instructed to reach as
fast as possible to a virtual target displayed on the ACT3D feedback monitor, while keeping their hand
relaxed during the reach. The virtual target was located in the sagittal plane through the shoulder. Eleven
trials were completed at each of 7 active shoulder support levels: while supported on the ACT3D haptic
table and while lifting up at 0, 5, 15, 25, 35, and 50% of maximum shoulder abduction force. Surface
electromyographic (EMG) data were also recorded from 10 muscles in the upper limb.
Results
Forces generated during two time windows were selected for analysis: while holding the home
position (LIFT) and at maximal reach (REACH). Forces were smoothed using a 250 ms moving average
filter and were normalized to maximum volitional flexion force. Normalized wrist/finger flexion forces
generated by the paretic and non-paretic limbs are shown below. Separate two-way repeated measures
(RM) ANOVAs for LIFT and for REACH
revealed significant main effects of support
condition and of limb on flexion forces
generated, as well as significant
interactions between the main effects (p <
0.0001 for all). Additionally, when the
effect of task (LIFT vs. REACH) was
included in the two-way RM ANOVA with
support level, there was a significant main
effect of task for the paretic limb (p =
0.007). A two-way RM ANOVA also
revealed significant main effects of support
level (p = 0.0002) and limb (p = 0.03) on
EMG data recorded over the flexor
digitorum superficialis muscle during LIFT.
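The smoothing and normalization applied to the force traces (a 250 ms moving average, then normalization to maximum volitional flexion force) can be sketched as follows; the sampling rate and the function name are assumptions, not taken from the paper:

```python
import numpy as np

def smooth_and_normalize(force, fs_hz, win_s=0.25, f_max=None):
    """Smooth a force trace with a moving-average filter of win_s seconds
    and express it as a fraction of the maximum volitional force f_max."""
    n = max(1, int(round(win_s * fs_hz)))
    smoothed = np.convolve(force, np.ones(n) / n, mode="same")
    if f_max is None:
        f_max = np.max(np.abs(force))  # fallback: per-trial maximum
    return smoothed / f_max

# 2 s of a constant 10 N trace at an assumed 1 kHz sampling rate,
# normalized against a 20 N maximum volitional flexion force:
norm = smooth_and_normalize(np.full(2000, 10.0), fs_hz=1000.0, f_max=20.0)
```

Note that `mode="same"` attenuates the first and last half-window of samples, so in practice the analysis windows should be chosen away from trial edges.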
Discussion
Wrist/finger flexion forces and EMG generated by the paretic limb drastically increased with required
shoulder abduction force (LIFT) and increased further with concurrent elbow extension and shoulder
flexion (REACH). When participants had to generate 50% of their maximum shoulder abduction force
and then reach out, wrist/finger flexion forces were generated that were near volitional maximum levels
(91.7 ± 40.8%, mean ± SD). Therefore, studies using robotic devices that focus on the hand and wrist in
isolation provide an incomplete picture of movement dysfunction following stroke, because they do not
consider the abnormal finger/wrist flexion coupling that occurs with increasing levels of shoulder
abduction activity in the paretic upper limb.
1. Brunnstrom S, Movement therapy in hemiplegia: a neurophysiological approach. 1970, New York: Harper and Row.
2. Cruz E G, Waldinger H C, and Kamper D G, Kinetic and kinematic workspaces of the index finger following stroke. Brain,
2005. 128(5): p. 1112-1121.
3. Kamper D G, Fischer H C, Cruz E G, and Rymer W Z, Weakness Is the Primary Contributor to Finger Impairment in
Chronic Stroke. Archives of Physical Medicine and Rehabilitation, 2006. 87(9): p. 1262-1269.
4. Kamper D G, Harvey R, Suresh S, and Rymer W Z, Relative contributions of neural mechanisms versus muscle mechanics
in promoting finger extension deficits following stroke. Muscle & Nerve, 2003. 28(3): p. 309-318.
5. Kamper D G and Rymer W Z, Quantitative features of the stretch response of extrinsic finger muscles in hemiparetic stroke.
Muscle & Nerve, 2000. 23(6): p. 954-961.
6. Kamper D G and Rymer W Z, Impairment of voluntary control of finger motion following stroke: Role of inappropriate
muscle coactivation. Muscle & Nerve, 2001. 24(5): p. 673-681.
7. Kamper D G, Schmit B D, and Rymer W Z, Effect of Muscle Biomechanics on the Quantification of Spasticity. Annals of
Biomedical Engineering, 2001. 29(12): p. 1122-1134.
8. Li S, Kamper D, and Rymer W, Effects of changing wrist positions on finger flexor hypertonia in stroke survivors. Muscle
& Nerve, 2006. 33(2): p. 183-190.
9. Sukal T, Ellis M, and Dewald J, Shoulder abduction-induced reductions in reaching work area following hemiparetic
stroke: neuroscientific implications. Exp Brain Res, 2007. 176: p. 594-602.
10. Beer R, Given J, and Dewald J, Task-dependent weakness at the elbow in patients with hemiparesis. Arch Phys Med
Rehabil, 1999. 80(7): p. 766-772.
11. Dewald J and Beer R, Abnormal joint torque patterns in the paretic upper limb of subjects with hemiparesis. Muscle &
Nerve, 2001. 24: p. 273-283.
12. Twitchell T, The restoration of motor function following hemiplegia in man. Brain, 1951. 74: p. 443-480.
13. Sukal T, Ellis M, and Dewald J, Use of a Novel Robotic System for Quantification of Upper Limb Work Area Following
Stroke., in 27th IEEE EMBS annual international conference. 2005: Shanghai, China.
14. Miller L, Ruiz Torres R, Stienen A, and Dewald J, A Wrist and Finger Force Sensor Module for Use During Movements of
the Upper Limb in Chronic Hemiparetic Stroke. IEEE Transactions on Biomedical Engineering, 2009. In Press.
Human reach-to-grasp generalization
strategies: a Bayesian approach
Diego R. Faria, Ricardo Martins and Jorge Dias
Institute of Systems and Robotics
Department of Electrical Engineering
University of Coimbra - Polo II
3030-290 Coimbra, Portugal
Email: {diego,rmartins,jorge}@isr.uc.pt
Abstract - In this work we present the general structure of an initial Bayesian framework to describe the
mechanisms underlying the human strategies that define the appropriate characteristics of reach-to-grasp
movements for specific contexts and objects, and how these strategies can be extended and replicated to
other contexts and objects. The Bayesian framework uses information extracted from data about the pose
of the hand, fingers and head acquired by a magnetic tracker device, finger flexure data acquired by a
data glove, as well as data about the eye gaze and saccade movements of the subject.
I. INTRODUCTION
II. BAYESIAN APPROACH
Based on studies about the grasp model [8] and studies of the neuroscience of grasping [4], we intend to
develop a Bayesian framework to generalize the human strategies to perform reach-to-grasp tasks and to
transfer and integrate this knowledge to robotic platforms.
C. Simulation Process
The collected EMG data was used to drive the forward dynamics simulations. The muscle-contraction
dynamics were implemented such that the muscle-tendon force produced by each muscle was computed from
the system states: the muscle fiber lengths, muscle activations, and the generalized coordinates and speeds
that influence muscle-tendon length and velocity. Muscle activation dynamics were modeled by a first-order
differential equation. Passive joint torques and damping were implemented at each joint in order to simulate
ligaments and to limit joint motion at extreme joint angles. The state variables for the simulation include the
22 generalized coordinates, 22 generalized velocities, 24 muscle fiber lengths and 24 muscle activations, for
a total of 92 states (Fig. 1). The 24 input signals from the 16 recorded EMGs of the extrinsic muscles defined
the neural excitation input to each muscle. The initial conditions were defined by the zero position of the
musculoskeletal model at rest. The equations of motion were numerically integrated using a variable-step
integrator over the time range of the movement, solving for the motion of each moving segment at each time
step. This process resulted in a time-series output file that contains each of the state variables.
Fig. 1. Flowchart of the forward dynamic simulation process.
III. SIMULATION RESULTS
The final poses for the L and W ASL posture simulations illustrate the potential for the forward dynamic
simulation approach to predicting hand movement (Fig. 2). These poses reflect the ability to flex or extend
individual fingers along with producing complicated thumb movements. The decision to use the recorded
EMG signal of one slip for all four slips of certain finger muscles appeared to be well founded for the W
posture. However, the final pose for the L posture was achieved through modifying this assumption for one
of the flexor muscles of the index finger. Initially, the final pose resulted in the index finger flexed similar to
the other three fingers. But when the input to this particular finger flexor was changed, the reported pose was
achieved. This highlights the importance of obtaining EMG signals for each individual slip of each individual
muscle in order to achieve dexterous motion.
Fig. 2. Final poses for L and W ASL posture simulations.
The negative effects of not including the intrinsic muscles of the hand include inaccurate
abduction/adduction angles for the fingers as well as the inability to fully extend the distal segments of the
fingers.
IV. CONCLUSION
The observed differences between the individual finger and thumb movements required to achieve the final
L and W poses support the choice to utilize ASL postures as a useful metric of dexterous hand function.
Examining a variety of differing ASL postures will aid us in our final goal of understanding the control of
the human hand. However, we learned that achieving an even greater level of control of the hand will require
EMG data for each individual slip of each extrinsic muscle as well as a method to incorporate the effects of
the intrinsic muscles into simulations based solely on extrinsic muscles. It was also noted that the time
needed to complete the forward dynamic simulations was extensive, and the development of real-time
methods would be required to implement these results in a real-time prosthetic controller. EMG-driven
forward dynamic simulations using only extrinsic muscles represent a method of studying hand movements
that will lead to a better understanding of dexterous hand motion, along with the development of a more
sophisticated control paradigm for prostheses as well as robotic or mechanical hands of any nature. In
general, this is a positive first step towards understanding the control of hand mechanics utilizing only
muscles present in the amputee limb.
REFERENCES
[1] D. Childress and R. F. Weir, "Control of Limb Prostheses," in Atlas of Amputations and Limb Deficiencies, 3rd ed.
Rosemont, IL: American Academy of Orthopaedic Surgeons, 2003.
[2] T. E. Jerde, J. F. Soechting, and M. Flanders, "Biological constraints simplify the recognition of hand shapes," IEEE
Transactions on Biomedical Engineering, vol. 50, no. 2, pp. 265-269, Feb. 2003.
[3] K. R. Holzbaur, W. M. Murray, and S. L. Delp, "A model of the upper extremity for simulating musculoskeletal surgery
and analyzing neuromuscular control," Annals of Biomedical Engineering, vol. 33, no. 6, pp. 829-840, June 2005.
[4] A. B. Ajiboye, Neuromotor Muscle Synergies for EMG Pattern Recognition of Prehension Grasps for Control of
Multifunctional Myoelectric Prostheses, PhD Dissertation, Evanston, IL: Department of Biomedical Engineering,
Northwestern University, 2007.
Design Optimization of a Hand Exoskeleton
Rehabilitation Device
Jamshed Iqbal1,2, Nikos Tsagarakis1, Darwin Caldwell1
1Italian Institute of Technology, Genova, Italy
2University of Genova, Italy
Corresponding author: jamshed.iqbal@iit.it
[...] alternative therapies using robotic hand exoskeletons or assistive devices could have considerable
potential in [...] of the fingers without any constraints. The device should be [...] how the device design has
been optimized to maximize the workspace and minimize the motion constraints.
Figure 1. Finger model: (a) Full extension (b) Intermediate (c) Partial flexion (d) Full flexion
II. METHODOLOGY
An anatomical hand model has been developed and used to obtain the finger angle ranges during full
grasping and [...] DIP = 0.46·PIP + 0.083·PIP² [...] (EXO) system attached to a finger model. The link
lengths are denoted by L1E, L2E and L3E. The performance of such a device can be represented by its
capability to exert forces as perpendicular as possible to the finger phalanges [2].
Figure 3. (a) EXO and finger model (b) Kinematic iteration
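The DIP-PIP relation that appears in the methodology (DIP = 0.46·PIP + 0.083·PIP²) can be wrapped in a small helper; the radian units and the function name are our assumptions, not stated in the text:

```python
def dip_from_pip(pip_angle):
    """DIP joint angle from the PIP angle via the model's quadratic
    coupling relation (angle units assumed to be radians)."""
    return 0.46 * pip_angle + 0.083 * pip_angle ** 2

# At PIP = 1.0 the relation gives DIP = 0.46 + 0.083 = 0.543
```

A coupling of this kind reduces the finger model to fewer independent joint angles, which in turn shrinks the search space the exoskeleton link-length optimization has to cover.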
III. RESULTS AND DISCUSSION
This work focuses on finding the link lengths which maximize the performance criterion mentioned in the
last section. The angle values from the finger model are used in the kinematic analysis and optimization
algorithm. The EXO link lengths were iterated, and the kinematic analysis selected those combinations
which maximize the performance criterion. Figure 3b shows one such valid iteration, for L1E, L2E, L3E
equal to 8, 7, 1.5 cm [...] (MCP) and PIP links of the human finger, whereas the EXO links are represented
by pink lines. This valid combination of lengths is then subjected to optimization. The parameters of the
optimization included the Perpendicular Impact Force (PIF) factor and the Global Isotropy Index (GII). The
PIF factor gives how much of the exerted force is perpendicular to the finger phalanges, whereas the GII
gives the worst-case workspace dexterity and isotropy throughout the workspace [3]. Based on these
parameters, an Overall Impact Factor (OIF) has been calculated by assigning equal weights to PIF and GII.
Figure 4 shows the OIF results as a function of L1E and L2E, while Figure 5 shows the flow chart of the
optimization procedure used. The algorithm also incorporates worst-case collision avoidance over the entire
workspace. Table 1 shows the optimization results based on the selected input parameters A, B and T (see
Figure 3a).
Figure 5. Flow chart
Table 1. Optimization results
INPUT PARAMETERS: A = 1.5 cm; B = 5.5 cm; T = 0.5 cm; HAND SIZE: MEDIUM
OPTIMIZED LENGTHS: L1E = 6.5 cm; L2E = 7.5 cm; L3E = 1.5 cm
REFERENCES
[1] Kessler G.D, Hodges L.F., Walker N., Evaluation of the CyberGlove
as a whole hand input device, ACM Transactions on Computer-
Human Interaction 2(4), 1995, pp. 263-283.
[2] Y. Fu, P. Wang, and S. Wang, Development of a multi-DOF exoskeleton based machine for injured
fingers, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sept. 2008, pp.
1946-1951.
[3] L. Stocco, S.E. Salcudean, F. Sassani, Fast constrained global
minimax optimization of robot parameters, Robotica, vol. 16, issue
6, Nov. 1998, pp. 595-605.
A loss of a limb is a common phenomenon among both humans and animals. Where animals have to cope
with such a loss, humans have pushed attempts towards limb replacements for quite a long time. It is only
relatively recently that such devices, in addition to providing a cosmetic replacement, have become
sophisticated enough to account for some of the functions of the lost limb. Upper-limb replacements in
particular are inherently difficult to realize, as the human hand is the most complex and important tool man
possesses.
The prosthetics and robotics research fields have marked increasing progress towards functional hand
replacements. By satisfying requirements such as (1) numerous sensory modalities, (2) high processing
capabilities, and (3) high output torque, current robotic/prosthetic prototypes are close to attaining
functionalities similar to those of the human hand (Balasubramanian et al., 2008). In the prosthetics
industry, however, such advancements are usually not realized. Upper limb prostheses used in the real
world pose additional, inherent requirements which are usually not taken into consideration by robot
designers. Such requirements, vital to the adoption of the prosthetic device by its user (Carrozza et al.,
2006), are: (1) reduced power consumption, (2) low device weight, (3) high dexterity, and (4) ease of use.
In contrast, prosthetics research is carried out using robotic hand prototypes that only partially fulfill the
above requirements. Even so, most prosthetic and arguably robotic hand prototypes cannot be considered
optimal. Size and weight considerations are rarely taken into account, and power consumption issues are
usually not dealt with; when they are, the usual approaches involve either smaller or fewer motors,
disregarding any dexterity requirements.
Clearly, for upper-limb amputees to benefit directly from prosthetics research, the devices being used to
conduct this research should themselves adhere to these real world requirements. Had the biomechanics of
the human hand been taken into consideration, could such requirements be met in a better fashion than
with traditional engineering approaches? We argue that by studying the morphology of the human hand,
we can identify and exploit principles that underlie its functions, creating a device that can be of use to
patients in real world circumstances.
Towards identifying the underlying functional principles of the human hand, we focus our attention on two
important properties: (1) the phalangeal curvature and (2) the friction between tendons and their sheathing
under high loads. Even though both properties have been well documented in the anthropological and
medical fields, to our knowledge they have not yet been exploited in robotics.
The human finger consists of three segments or phalanges - the distal, middle and proximal - and a palm
segment, the metacarpal. All of the above segments are so similar in morphology that further discussion
will include the metacarpal segment. Despite the many evolutionary changes the human finger has
undergone, a characteristic that is common to numerous arboreal creatures is the curvature of the finger
segments. Such phalangeal curvature is discussed in paleontology and primate morphology, where the
association between this curvature and arboreality has been extensively documented (Richmond et al.,
2003; Stern et al., 1995). Arboreality itself, however, is not a direct indicator of the reasons behind such a
curved finger structure, despite the fact that most arboreal creatures possess it. A recent study has
identified, using finite element methods, the benefits of this morphology (Richmond, 2007). The results
from this study indicate that a curved bone segment like the one present in siamang mammals, but also in
humans, can provide significant strain benefits over a similar but non-curved segment. The strain present
on the curved segment is roughly half that of the straight one, despite equivalence in lengths, areas,
mechanical properties and loading force conditions in the two models. This difference in strains can be
attributed to the higher compression forces present in the curved model. Such a result is quite important in
prosthetics research, as it indicates that a curved finger segment with less volume than a straight segment
will in fact withstand the same strain with the same loading forces present.
A decrease in bone volume will be beneficial in at least two ways. (1) Given a constant bone density, the
weight of the device scales with its volume, satisfying the according requirement. (2) We can identify a
volume metric demonstrating that a minimal bone volume is preferred. The empty volume gained can be
efficiently employed by other structures, such as a cosmetic skin or a large volume of sensory modalities
presently absent from most prosthetic hands in use, whilst maintaining the same weight as a device with a
larger bone volume.
A simple curved bone segment was created in Solidworks. The curvature of the bone segment is identified
by a radius R of a circle tangent to the segment's joints, denoted by two smaller circles of radii R1 and R2.
To design a complete robotic finger, the original segment was replicated, changing only its length and the
diameters of the joints accordingly. Our initial model shows that, when fully realized, a complete hand will
provide sufficient structural strength to be used in everyday activities while at the same time helping to
satisfy the weight constraints of the device.
The presence of friction in most mechanical devices is unwanted and usually associated with energetic
losses. Such is also the case with all robotic and prosthetic hands to date, where hands employing a tendon-
driven transmission mechanism aim towards frictionless transmission, e.g. by utilizing Teflon cables and
sheaths. In the human tendon-sheath mechanisms, however, frictional forces might be beneficial. It has
been experimentally shown that during high-load flexion of the interphalangeal joints, eccentric and
concentric forces differ by 9%, a difference that can be directly attributed to tendon-sheath friction
(Schweizer et al., 2003).
[...] it could be achieved using smaller motors, thus saving both weight and energy.
Designing a human hand replacement cannot be treated as a purely engineering problem, as is usually the
case with prosthetic research prototypes. Embodiment is well known to lead to surprising insights (Pfeifer
et al., 2007), and there is much to be gained by studying and exploiting the biomechanics of the human
hand. We can extract principles and concepts that can be used towards advancing prosthetic research. In
turn, we can realize devices that will directly benefit prosthesis users by better satisfying real use
requirements. Our research aims at paving the way to exposing principles of the human hand morphology,
and using them directly to implement prosthetic hands. Showcasing the benefits of this approach will
highlight the fundamental importance of morphology, not only of the hand but of the human body as a
whole.
REFERENCES
Balasubramanian R. and Matsuoka Y. (2008). Biological stiffness control strategies for the Anatomically
Correct Testbed (ACT) Hand. IEEE International Conference on Robotics and Automation, 737-742.
Carrozza M.C., Cappiello G., Micera S., Edin B.B., Beccai L., Cipriani C. (2006). Design of a cybernetic
hand for perception and action. Biological Cybernetics, 95(6):629-644.
Pfeifer R., Lungarella M., Iida F. (2007). Self-organization, embodiment, and biologically inspired
robotics. Science, 318(5853):1088-1093.
Richmond, B.G. (2003). Early hominin locomotion and the ontogeny of phalangeal curvature in primates.
American Journal of Physical Anthropology, 36 (Suppl.):178-179.
Richmond B. G. (2007). Biomechanics of phalangeal
In addition, the property of tendons to compress curvature. Journal of Human Evolution, 53(6):678
when tensioned, but also the orientation of 690.
tendon and sheath fibers further simplify the
appearance of frictional forces (Walbeehm et al., Schweizer A., Frank O., Ochsner P. E., Jacob H. A. C.
(2003). Friction between human finger flexor tendons
1995). Such utilization of friction is present in a
and pulleys at high loads. Journal of Biomechanics,
more specialized form in chiropterans (bats), 36(1):6371.
which employ frictional forces to dangle on their
fingers without the application of muscular force. Stern Jr., J.T., Jungers, W.L., Susman, R.L., (1995).
Preliminary results on a robotic finger indicate a Quantifying phalangeal curvature: an empirical
comparison of alternative methods. American Journal
higher mechanical advantage of a frictional
of Physical Anthropology, 97:110.
(rubber sheaths) tendon transmission system
over a frictionless (Teflon sheaths) one during a Walbeehm ET, McGrouther DA., (1995). An
pre-caging task. Utilizing a friction mechanism anatomical study of the mechanical interactions of
in robotic hands can have two alternative flexor digitorum superficialis and profundus and the
flexor tendon sheath in zone 2. Journal of Hand
advantages: (1) finger force output under high
Surgery, 20(3):26980.
loads could be increased at no motor cost; or (2)
Learning of the anticipatory mapping between digit positions and
forces in two-digit object manipulation
Figure 1: The grip device (left), sensor placement (center), and experimental setup (right)
Subjects showed significant performance improvement only between the first and second
trials, through significant adjustment of both digit force and digit placement (post hoc
comparison with Bonferroni correction, p<0.005, Fig. 2). This was achieved mainly by
estimation of the external torque from sensory input and generation of a compensatory
moment at object lift onset that approximated the magnitude of the external moment (~80%)
with reversed direction. The compensatory moment was analyzed across trials 4-10 for
each subject. We found that it is produced by concurrent modulation of two torque
components, regulated by digit position and load forces respectively. Importantly,
although significant trial-to-trial variability was found in digit placement and forces,
covariation of the load torque and the digit position-dependent normal torque resulted in a
relatively stable compensatory torque at object lift onset (the average ratio between the
standard deviation of the distribution of its two components and the standard deviation of
the total compensatory moment was 1.96).
These results indicate that the CNS is able to quickly generate sensorimotor memories
that map digit positions and forces to the dynamics of the object. Nevertheless, trial-to-
trial variability associated with digit placement is accompanied by concurrent force
modulation and therefore does not affect the compensatory torque necessary for accurate
object manipulation. We propose that this is accomplished by using sensory information
about digit position shortly after contact to update digit forces such that the required
compensatory torque can be generated.
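The stabilizing covariation described above can be sketched numerically. The toy model below is our own illustration, not the authors' analysis: `Tn` is a digit position-dependent normal-force torque, and the load-force torque `Tl` is assumed to be modulated so that it takes up whatever `Tn` leaves over; every number is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200
target_moment = 0.25  # external moment to be compensated (N*m), hypothetical

# Trial-to-trial variability in digit placement:
d = rng.normal(0.02, 0.008, n_trials)  # vertical offset between digit centers of pressure (m)
Fn = 10.0                              # grip (normal) force (N), held constant in this sketch
Tn = d * Fn                            # digit position-dependent normal-force torque

# Assumed anticipatory force modulation: the load-force torque compensates
# placement variability, up to small execution noise.
Tl = target_moment - Tn + rng.normal(0.0, 0.01, n_trials)

Tcom = Tn + Tl  # total compensatory torque

# The components vary far more than their sum, and covary negatively.
print(np.std(Tn) / np.std(Tcom) > 1.0)
print(np.corrcoef(Tn, Tl)[0, 1] < 0.0)
```

Because the two components covary negatively, each component's standard deviation exceeds that of their sum, mirroring the component-to-total ratio reported above.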
Figure 2: Object peak roll (A), relative placement of center of pressure (B), load force sharing (C),
and grip force (D).
References:
Cohen RG, Rosenbaum DA (2004) Where grasps are made reveals how grasps are
planned: generation and recall of motor plans. Exp Brain Res 157:486-495.
Friedman J, Flash T (2007) Task-dependent selection of grasp kinematics and stiffness in
human object manipulation. Cortex 43:444-460.
Johansson RS, Flanagan JR (2009) Coding and use of tactile signals from the fingertips
in object manipulation tasks. Nature Rev Neurosci 10:345-359.
Lukos J, Ansuini C, Santello M (2007) Choice of contact points during multidigit
grasping: effect of predictability of object center of mass location. J Neurosci
27:3894-3903.
Lukos J, Ansuini C, Santello M (2008) Anticipatory control of grasping: independence of
sensorimotor memories for kinematics and kinetics. J Neurosci 28:12765-12774.
ANALYSIS OF FINGER POSITION DURING TWO AND THREE-FINGERED
GRASP: POSSIBLE IMPLICATIONS FOR TERMINAL DEVICE DESIGN
METHODS: Seven subjects participated in the study (2 male, 5 female). A 6-camera video
motion analysis system (Qualisys, Gothenburg, Sweden) was used to collect marker
locations in a calibrated volume of approximately 1 m³. The hand biomechanical model was
modified from previous studies.3 Fourteen reflective markers were taped to the
interphalangeal joints, metacarpophalangeal joints and the carpals of the subjects. Seated
subjects were asked to grasp and lift an object, using their preferred two fingers or three
fingers. The 9 objects were chosen based on standard geometric shapes and sizes. The
objects included discs, rods, cubes, rectangles and spheres. The smallest object was a 1.9
cm diameter sphere and the largest object was a 9.5 X 12.5 X 1.7 cm rectangular prism.
Data were collected at 120 frames/s and the markers' 3-dimensional coordinates were
exported for further analysis using mathematical software. The three dimensional joint
angles of each of the finger articulations were calculated.
Pearson product moment correlation coefficients (r) were used to determine the
degree of association between distal to proximal joints as well as parallel joints between the
fingers.
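As a minimal sketch of this correlation analysis, Pearson's r between two joint-angle series can be computed with NumPy; the angle values below are invented for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical joint-angle excursions (degrees) across eight grasps for two
# "parallel" joints, e.g. index MCP vs. long-finger MCP.
index_mcp = np.array([12.0, 25.0, 31.0, 18.0, 40.0, 22.0, 35.0, 28.0])
long_mcp = np.array([15.0, 22.0, 35.0, 20.0, 37.0, 25.0, 33.0, 30.0])

# Pearson product-moment correlation coefficient r
r = np.corrcoef(index_mcp, long_mcp)[0, 1]
print(round(r, 2))
```

Joints that flex together across grasps yield r near 1, the pattern reported below for the parallel index- and long-finger joints.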
RESULTS: For two- and three-fingered grasps of this specific set of objects, the summed
range of motion of the thumb interphalangeal and metacarpophalangeal joints fell between
20° and 43° for 80% of grasps. In other words, a range of motion of 23° was sufficient to
position the contact surface of the thumb for 80% of grasps in this sample. The largest
variability of any of the finger joints was at the index MCP joint (a range of -44° to 49°).
There appeared to be a predisposition for the distal segments of the thumb and index finger
to be angled rather than parallel to each other at the point of contact with the object.
The average angle between the index and thumb distal segments was about 30°.
Distal to proximal articulations in separate fingers (thumb, index and long fingers)
had relatively low correlations (r= -0.20 to 0.40). Thumb total range of motion had low
correlation to index and long finger motion. Higher correlations were found with index
finger and long finger parallel joints (r= 0.41 to 0.65) and total ranges of motion (r= 0.74).
DISCUSSION: The available sagittal plane range of motion of index and long finger distal
phalanges relative to the metacarpals is approximately 320 degrees and the distal phalanx
of the thumb can move 180 degrees relative to the 1st metacarpal, but grasping patterns use
only a small proportion of the total range. To develop an effective grasping terminal
device, it might be possible to reproduce the necessary range of motion more easily than
the entire potential range of motion of the hand. For example, data from the current study
suggest that for these objects the range of motion of a terminal device contact surface
should be about 90 degrees.
The current study identified greater correlations between parallel joints than between
proximal and distal joints. This finding suggests that degrees of freedom in the hand could
be reduced by coupling parallel fingers rather than by coupling proximal to distal joints.
While it is possible that only the biomimetic approach can accomplish all possible
grips, other approaches with simpler controls or mechanisms have not been explored
thoroughly. In creating an artificial hand for grasping, the optimal configuration may differ
from the anatomical configuration. Alternative articular designs may lead to contact
geometry that is similar to the normal hand but have desirable features in some other way.
CONCLUSIONS: Thumb, index and long finger positions during two- and three-finger
grasp of standard geometric objects were relatively consistent for this sample of subjects.
The subjects utilized less than half of the available ranges of motion. Index and long finger
metacarpophalangeal joints used the largest ranges of motion. Design criteria for grasping
devices could incorporate these findings, depending on the goals for the device. Improved
performance, robustness, cost, and ease of manufacturing might be goals of non-
biomimetic terminal device designs.
Reference List
1. Baud-Bovy G, Soechting JF: Two virtual fingers in the control of the tripod grasp . Journal of
Neurophysiology 2001;86:604-615.
2. Kamper DG, Cruz EG, Siegel MP: Stereotypical fingertip trajectories during grasp. Journal of
Neurophysiology 2003;90:3702-3710.
3. Lee S-W, Zhang X: Development and evaluation of an optimization-based model for power-grip posture
prediction. Journal of Biomechanics 2005;38:1591-1597.
4. Mason CR, Gomez JE, Ebner TJ: Hand synergies during reach-to-grasp. Journal of Neurophysiology
2001;86:2896-2910.
Does practice affect hand ability? The reliance on frames of reference
Miriam Ittyerah
Department of Psychology
Christ University
Bangalore 560029, India
Early attempts to explore the effects of practice in a certain hand action have emerged
from relating hand preference to a variety of tasks (Fleishman 1958; Fleishman &
Ellison 1962). Since the degree and direction of hand preference does not differ in the
blind and sighted children, it may be inferred that vision does not determine hand
preference (Ittyerah 1993, 2000; McManus et al 1988). The question of interest in this
investigation is to know the role of vision in the effect of practice with the preferred
and non preferred hands over a period of development, both in blind and sighted
blindfolded conditions. Further, whether practice will show gains only from the
preferred hand remains to be established. In tasks such as typewriting, piano playing
or braille reading (Millar 1987), the hands perform as well as each other. These are
however non prehensile movements where the object is manipulated by the hand or the
fingers and not grasped in the hand. In tasks using prehensile movements when the
object is partly or wholly held, the evidence is sparse.
In the present study the hypothesis that blind children will improve in ability with
practice in spatial tasks was tested in a group of 90 congenitally blind and blindfolded
sighted children between the ages of 5 and 15 years. All the children were tested for
hand preference. The hand preference test comprised 10 items selected from
unimanual and bimanual tasks. The selected tasks assessed the common repertoire of
hand actions in daily life. Four tasks were selected to assess hand ability (sorting,
stacking, finger dexterity, Minnesota rate of manipulation). The tests required an
object to be held partly or wholly within the hand. The instructions required the child
to perform each task separately with the left and right hands and the performance times
were recorded in seconds. This was followed by a practice period of four months for
each group during which time each child was required to practice each task three times
with each hand in front of the experimenter. The children were post tested and the
performance times of each task and hand were recorded in seconds. The blindfolded
sighted children practiced the tasks with their blindfolds. The order of the two groups
of children tested was the same in pretest as in the posttest.
The results of the handedness task showed that most children had a right hand
preference; blind (mean=97.8) and blindfolded sighted groups (mean = 97.8) (F (1,
144) = .0001; p> 0.05), and lateralization increased with age in development (mean
age groups 5-7 = 93.72; 9-11 = 99.6; 13-15 = 100), (F (8, 144) = 4.338; p< .001)
(Ittyerah, 1993, McManus et al 1988). Analysis of variance computed for each task
separately indicated a percentage gain with practice during development for the left
and the right hands of the blind children for the sort task (F (1, 162)= 307.9; p<0.001),
the stack task (F (1, 162) = 48.8; p< 0.001), the finger dexterity task (F (1, 162)=
1400.2; p< .001) and the Minnesota rate of manipulation task (F (1, 162) = 44.7; p<
.001). The left and right hands of both groups of children did not differ in percentage
gain, indicating little or no relationship between hand preference and hand ability.
Thus general laterality does not affect ability (Ittyerah 1993, 2000). Percentage gains
were more for the blind than the blindfolded sighted children, indicating that practice
improves skill in the absence of vision (Millar & Ittyerah, 1991). The blindfolded
sighted children had more losses than gains in the post-test, probably because the blindfolds
hampered performance, indicating their dependence on vision in interacting with the
environment.
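For concreteness, the percentage gain compared across groups above can be computed from pre- and post-test completion times; the times below are invented for illustration.

```python
# Hypothetical completion times (seconds) for one task performed by one hand.
pre_time = 48.0   # before the four-month practice period
post_time = 36.0  # after practice

# A positive gain means faster (better) performance after practice; a negative
# value is a "loss", as seen for some blindfolded sighted children.
percent_gain = 100.0 * (pre_time - post_time) / pre_time
print(percent_gain)  # 25.0
```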
The finding that the congenitally blind children gained from practice indicates
that performance is not solely confined to benefits from vision. The use of self referent
coding is evident in the blind and to a lesser extent in the sighted blindfolded children.
Self referent coding refers to the use of the body midline as an anchor from which
estimates of distances to the left or the right side of the body can be inferred. Body
centred reference cues are more reliable than information from external cues in blind
conditions. In principle geometric rules can be applied to body centred reference axes
just as they can be applied to external reference cues in sighted conditions. It also
indicates that the losses of the blindfolded sighted children may have been caused by
the difficulty of having to rely on body centred self referent codes that are usually
ignored in their sighted existence (Millar 1981). This indicates that long term haptic
experience tends to elicit more self referent coding than prior visual experience.
Evidence (Millar and Al-Attar, 2005) indicates that vision improves performance in a
haptic spatial task only in so far as it adds cues that are potentially relevant to spatial
discrimination and reference. Millar and Al-Attar (2003) have for example
demonstrated that the left and right hands do not differ in a spatial processing task that
assessed distance. Thus comparisons between the hands in the processing of sensory
inputs (Millar and Al-Attar 2003) must be considered separately from comparisons
between the hands for spatial versus sequential processes (Langdon and Warrington
2000). Therefore processing of sensory inputs from touch and movement is not
expected to differ between the hands. Thus hand actions require a frame of reference to
perform effectively and are independent of vision.
References
Fleishman, E. A. (1958). Dimensional analysis of movement reactions. Journal of
Experimental Psychology, 55, 438-453.
Fleishman, E. A. & Ellison, G. D. (1962). A factor analysis of fine manipulative tests.
Journal of Applied Psychology, 46, 96-105.
Ittyerah, M. (2000). Hand skill and hand preference in blind and sighted children.
Laterality, 5, (3), 221-235.
Ittyerah, M. (1993) Hand preferences and hand ability in congenitally blind
children. Quarterly Journal of Experimental Psychology, 46B, 35-50.
Langdon, D. & Warrington, E.K. (2000). The role of the left hemisphere in verbal and
spatial reasoning tasks. Cortex, 36, 691-702.
McManus, I. C., Suk, G., Cole, D. R. Mellon, A. F., Wong, J.& Kloss, J.(1988). The
development of handedness in children. British Journal of Developmental
Psychology, 6, 257-272.
Millar, S. & Al-Attar (2005). What aspects of vision facilitate haptic processing?
Brain and Cognition, 59, 258-268.
Millar, S. & Al-Attar (2003). Spatial reference and scanning with the left and right
hand. Perception, 32, 1499-1511.
Millar, S. & Ittyerah, M. (1991) Movement imagery in young and congenitally blind
children. Mental practice without visuospatial information. International
Journal of Behavioural Development, 15,135-146.
Video survey of pre-grasp interactions in natural hand activities
Lillian Y. Chang and Nancy S. Pollard
Carnegie Mellon University, Pittsburgh PA 15213
The first step in many manipulation actions is object acquisition, where the hand is formed
around the object to achieve a task-specific grasp. A large body of previous research has
investigated object acquisition in the context of reach-to-grasp actions. The primary focus has
been on the motor coordination of the upper limb and hand shape in the process of reaching
toward the object which is grasped. In many of the reach-to-grasp actions studied previously, the
target of the coordinated reach and preshape motion is a presented object whose placement is
considered fixed in the environment.
In several natural task settings, however, the object is movable in the environment, and the
task does not require the object to be grasped exactly from its presented placement. The object
interaction during acquisition may be more complex, such that the object is moved prior to the
complete formation of the desired grasp. This pre-grasp interaction may serve to adjust the object
configuration to improve the goal grasp. For example, a person may use non-prehensile contact to
slide and re-orient a mug on a table before grasping it by the handle. When grasping a pen off of
a table, the fingers may quickly pivot the pen to orient the tip for the subsequent writing task.
Our previous work [Chang et al., 2008, 2009] has investigated preparatory object rotation as
a specific example of pre-grasp interaction. In preparatory object rotation, the object is pivoted
in the plane of the support surface in order to re-orient the object handle prior to grasping, as in
the mug example above. Instead of completely re-planning a new direct reach-to-grasp action for
novel object orientations, a preparatory rotation strategy first adjusts the object orientation with
pre-grasp interaction and then completes the hand formation of the final desired object grasp.
The present work surveys the broader class of pre-grasp interaction strategies beyond the specific
example of preparatory rotation. Our goal was to develop a taxonomy for classifying the variety
of pre-grasp action primitives which are integrated into complex reach-to-grasp tasks. We were
specifically interested in surveying human hand activity in natural settings in contrast to instructed
tasks within a laboratory environment. In this way, we could capture the richness of pre-grasp
interactions beyond the direct reach-to-grasp actions studied previously in the literature.
In the video survey of human hand activity, we filmed people performing manipulation tasks in
natural settings such as the home or place of occupation. All participants provided informed
consent. In all observations, the participants performed manipulation skills which had been practiced
previously as part of their regular occupation. There were a total of 10 sessions of both individual
and group manipulation activities, such that overall 38 people were filmed. The sessions covered
activities for housekeeping, food preparation, office work, and mechanical repair. Specific tasks
include sorting office supplies, washing dishes, and moving furniture.
We found that there is indeed a broad class of pre-grasp interactions where the object is not
grasped directly from its presented placement in the environment. Our framework describes the
survey examples according to two main aspects of the pre-grasp interaction. The first aspect is the
type of object re-configuration resulting from the interaction. The second aspect is the underlying
intent of the interaction to improve the posture quality or grasp quality of the manipulation action.
First, the object reconfiguration is classified by a taxonomy based on the degrees of freedom
which were adjusted by the pre-grasp interaction (Fig. 1). The object motion may consist
entirely of planar displacement. This is common in examples of non-prehensile pre-grasp
interaction where the object is primarily supported on a horizontal surface. Alternatively, for a bulky
RSS 2009 Workshop Understanding the Human Hand for Advancing Robotic Manipulation
Figure 1: Taxonomy for the object reconfiguration aspect of pre-grasp interaction. Examples of rigid planar
transformations included rotation of a cup by its handle and sliding books off the top of a stack. General
rigid tumbling was used to achieve a whole body grasp of a bulky piece of furniture. Pre-grasp interaction
was also observed for non-rigid objects. A hinged bucket handle was rotated to achieve a cylinder grasp, and
a piece of paper was curled to achieve a pinch grasp. Multiple objects were also rearranged as a set, such as
in the scooping interaction with a pile of peelings.
piece of furniture, the pre-grasp tumbling interaction may result in general 6-degree-of-freedom
rigid displacement. In more complex cases, the pre-grasp interaction may cause a morphological
reconfiguration of a deformable or articulated object, such as a bucket with a hinged handle.
Second, the pre-grasp interaction is described by the intent of the object adjustment in relation
to the final grasp. The presented configuration of the object in the environment could be suboptimal
for direct reach-to-grasp object acquisition due to preferences for a particular body posture and/or
grasp. When the handle on a cooking pan is oriented away from the person, a direct grasp of the
handle may be feasible but could require lifting the heavy pan from an uncomfortable body posture
with limited lifting capability. In other scenarios, the intent of the pre-grasp interaction may be to
improve the grasp quality rather than the posture quality. This is especially relevant to situations where
environmental clutter occludes the desired grasp contact surfaces, as in the case of a shelved book
where only the spine is exposed in the initial task condition. The observed examples suggest that
the intent of pre-grasp interaction was often a combination of preferences for both posture quality
and grasp quality, and potentially other optimization metrics.
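The two aspects of the framework (object reconfiguration and intent) lend themselves to a compact encoding. The sketch below uses our own class and label names; only the category names come from the survey.

```python
from enum import Enum

class ObjectReconfiguration(Enum):
    RIGID_PLANAR = "planar displacement on a support surface"
    RIGID_TUMBLING = "general 6-DoF rigid displacement"
    NON_RIGID = "morphological change of a deformable or articulated object"
    MULTI_OBJECT = "rearrangement of a set of objects"

class Intent(Enum):
    POSTURE_QUALITY = "improve body posture for the final grasp"
    GRASP_QUALITY = "expose better grasp contact surfaces"

# One observed example: sliding and re-orienting a mug before grasping its handle.
mug_example = (ObjectReconfiguration.RIGID_PLANAR, Intent.POSTURE_QUALITY)
print(mug_example[0].name, mug_example[1].name)
```

A classifier over observed episodes could assign one value from each enum per episode; the survey suggests many episodes mix both intents, so the intent field could equally be a set.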
The presented pre-grasp interaction framework suggests several approaches for improving the
dexterity of robotic manipulators. Taking advantage of object movability may extend the effective
workspace by changing the environmental constraints when direct reach-to-grasp actions are of
insufficient posture and/or grasp quality. Non-prehensile pre-grasp interaction could reduce the
load on the manipulator by using shared support with the work surface during the initial interaction
with the object. Moreover, the expense of tuning control parameters for complex manipulators can
be reduced if pre-grasp object reconfiguration enables the reuse of a single well-tuned grasp action
for multiple initial placements. Finally, because pre-grasp strategies are part of natural human
manipulation, incorporating them in the repertoire of assistive or teleoperated manipulators could
facilitate more intuitive control for human operators.
Chang, L. Y., Klatzky, R. L., and Pollard, N. S., 2009. Selection criteria for preparatory object rotation in manual
lifting actions. Journal of Motor Behavior. In review.
Chang, L. Y., Zeglin, G. J., and Pollard, N. S., 2008. Preparatory object rotation as a human-inspired grasping
strategy. In IEEE-RAS International Conference on Humanoid Robots (Humanoids 2008), pages 527-534.
Abstract: The goal of this work is to overview and summarize the grasping taxonomies reported in the literature. Our long-term goal is to understand how to reduce the mechanical complexity of anthropomorphic hands and still preserve their dexterity. On the basis of a literature survey, 33 different grasp types are taken into account. They were then arranged in a hierarchical manner, resulting in 17 grasp types.

I. INTRODUCTION

The design of an anthropomorphic hand is always a compromise between hand complexity and the tasks it is supposed to accomplish. In general, sophisticated hands with many degrees of freedom are dexterous but pose significant requirements in terms of control. Many of the reported taxonomies have been made with the goal of understanding what types of grasps humans commonly use in everyday tasks and of using this as an inspiration for designing robotic and prosthetic hands. The goal of our research is in the same direction: understanding how to minimize the complexity and maximize the dexterity of a mechanical hand.

Since there is little consensus in the existing literature on the grasp types humans use, the first step was to review the existing literature. The goal was to find the maximal number of grasp types, which will act as a basis for further research.

II. METHOD

A. Definition of a grasp

Since grasping in humans is a very broad area, it was necessary to find a definition of a grasp relevant to our work. We thus propose the following:

A grasp is every static hand posture with which an object can be held securely with one hand.

The definition also implies that grasp stability has to be guaranteed irrespective of the relative force direction between hand and object. Therefore, intrinsic movements are excluded, because the object is not in a constant relationship to the hand. Bimanual tasks are not relevant, because they use both hands. Gravity-dependent grasps are ruled out, because the hand orientation is vital to the grasp stability: if one turns the hand, the object may fall down, which shows that the grasp is not independent of the force direction. The grasps excluded on these grounds are the Hook Grasp and the Flat Hand Grasp.

B. Comparison of Taxonomies

To develop the comprehensive taxonomy, several literature sources were compared. They range from the fields of robotics, developmental medicine, and occupational therapy to biomechanics [1]-[14]. An excerpt of the comparison table is shown in Fig. 1. Columns store equal grasps, whereas rows store all grasps defined by an author. Grasps that are defined by the author as power, precision or intermediate are marked with a color code: yellow denotes a power grasp, green a precision grasp, and yellow/green an intermediate grasp as defined in [15],[16],[17]. Red marks grasps that do not conform to our definition of a grasp.

Fig. 1. The sheet used for comparison of different grasp taxonomies. This is just a small excerpt of the whole sheet; the complete table can be downloaded via the Human Grasping Database [18].

III. RESULTS

A. The Taxonomy

In total, we have found 147 grasp examples in the considered literature sources. Out of those 147 examples, we have detected only 45 different grasp types. A further classification based on our grasp definition has revealed only 33 valid grasp types.

The grasps were then arranged in the taxonomy depicted in Fig. 2. The classification in the columns is done by the power/precision requirements. The next finer differentiation is done depending on whether the opposition type is Palm, Pad or Side Opposition. The opposition type also defines VF 1: in the case of Palm Opposition the palm is mapped into VF 1, while in Pad and Side Opposition the thumb is VF 1. The only exception to this rule is the Adduction Grasp, where the thumb is not in contact with the object.

To differentiate between the two rows, the position of the thumb is used: the thumb CMC joint can be in either an adducted or an abducted position. This is a new feature introduced in our taxonomy.

Fig. 2. Comprehensive Grasp Taxonomy, which includes 33 grasp types.

B. Merging of grasps within one cell

Since many grasps have similar properties (opposition type, thumb position etc.), some cells are populated with more than one grasp. In the literature examples, the sole difference between such grasps is the shape of the object the hand is manipulating. This offers the possibility to reduce the set of all 33 grasps down to 17 grasps by merging the grasps within one cell into a corresponding standard grasp. Depending on the task, this makes it possible to choose between two different levels of accuracy of the grasp classification.

As a comparison, the classification of Cutkosky [1] has 15 different grasp types that fit our definition of a grasp. This is very close to the number of grasps in the reduced taxonomy. However, our comparison shows that even though the number of grasps is nearly the same, the classification is very different. When one classifies the grasps from [1] according to our scheme, they populate only 7 cells, a reduction by more than half. This is only natural, since [1] mainly differentiates grasps by object properties, which in our case is done within one cell.

IV. CONCLUSION AND FUTURE WORK

Compared to the existing taxonomies, the present one is more sophisticated, since it incorporates a larger number of grasp types than the reviewed literature sources. This should allow a better description of human grasping capabilities and will therefore be an excellent starting point for further research. As a next step, the grasp types will be modeled in a 20-DoF hand model and the corresponding joint angles will be determined. This will then act as a basis for an analysis of how the complex hand model can be simplified while still preserving much of its dexterity, which will be defined via the grasp types it can accomplish. Finally, this will lead to an anthropomorphic hand setup which is dexterous and simple in design.

ACKNOWLEDGMENT

This research is supported by the EC project GRASP, IST-FP7-IP-215821.

REFERENCES

[1] M. R. Cutkosky, "On grasp choice, grasp models, and the design of hands for manufacturing tasks," IEEE Transactions on Robotics and Automation, vol. 5, no. 3, pp. 269-279, 1989.
[2] S. B. Kang and K. Ikeuchi, "Grasp recognition using the contact web," in Proceedings of the 1992 IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 1, 1992, pp. 194-201.
[3] D. Lyons, "A simple set of grasps for a dextrous hand," in Proceedings of the 1985 IEEE International Conference on Robotics and Automation, vol. 2, 1985, pp. 588-593.
[4] J. M. Elliott and K. J. Connolly, "A classification of manipulative hand movements," Developmental Medicine and Child Neurology, vol. 26, no. 3, pp. 283-296, June 1984.
[5] S. J. Edwards, D. J. Buckland, and J. D. McCoy-Powlen, Developmental and Functional Hand Grasps, 1st ed. Slack Incorporated, October 2002.
[6] C. E. Exner, "Development of hand skills," in Occupational Therapy for Children, C. J. Smith, Ed., 2001, pp. 289-328.
[7] N. Kamakura, M. Matsuo, H. Ishii, F. Mitsuboshi, and Y. Miura, "Patterns of static prehension in normal hands," The American Journal of Occupational Therapy, vol. 34, no. 7, pp. 437-445, July 1980.
[8] I. A. Kapandji, Funktionelle Anatomie der Gelenke. Schematisierte und kommentierte Zeichnungen zur menschlichen Biomechanik. Thieme Georg Verlag, January 2006.
[9] G. Lister, The Hand: Diagnosis and Surgical Indications. Churchill Livingstone, 1984.
[10] C. L. Taylor and R. J. Schwarz, "The anatomy and mechanics of the human hand," Artificial Limbs, vol. 2, no. 2, pp. 22-35, May 1955.
[11] W. P. Cooney and E. Y. Chao, "Biomechanical analysis of static forces in the thumb during hand function," J Bone Joint Surg Am, vol. 59, no. 1, pp. 27-36, January 1977.
[12] D. B. Slocum and D. R. Pratt, "Disability evaluation of the hand," J. Bone Joint Surg, vol. 28, pp. 491-495, 1946.
[13] K. H. Kroemer, "Coupling the hand with the handle: an improved notation of touch, grip, and grasp," Human Factors, vol. 28, no. 3, pp. 337-339, June 1986.
[14] C. M. Light, P. H. Chappell, and P. J. Kyberd, "Establishing a standardized clinical assessment tool of pathologic and prosthetic hand function: normative data, reliability, and validity," Archives of Physical Medicine and Rehabilitation, vol. 83, no. 6, pp. 776-783, June 2002.
[15] J. R. Napier, "The prehensile movements of the human hand," The Journal of Bone and Joint Surgery, British volume, vol. 38-B, no. 4,
A comprehensive human grasp taxonomy, on the basis pp. 902913, November 1956.
of a comparative literature research, was developed. A total [16] T. Iberall, G. Bingham, and M. A. Arbib, Opposition space as a structur-
of 33 different grasp types were identified and arranged in ing concept for the analysis of skilled hand movements, Experimental
Brain Research Series, vol. 15, pp. 158173, 1986.
a taxonomy. The position of the thumb was introduced as [17] T. Iberall, Human prehension and dexterous robot hands, The Interna-
additional attribute, which can be either abducted or adducted. tional Journal of Robotics Research, vol. 16, no. 3, pp. 285299, June
Depending on the need for precision, the taxonomy offers a 1997.
[18] Human grasping database, http://web.student.tuwien.ac.at/
second level of classification which includes only 17 gross e0227312/, April 2009.
grasp types.
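The reduction from 33 grasp types down to 17 described above amounts to grouping grasps by the attributes that define a taxonomy cell and keeping one standard grasp per cell. A minimal sketch of that grouping, using hypothetical grasp names and attribute values (the actual taxonomy's assignments differ):

```python
from collections import defaultdict

# Each grasp is described by the attributes defining its taxonomy cell:
# (opposition type, thumb position). Names and values are illustrative only.
grasps = [
    ("large diameter",     ("palm", "abducted")),
    ("small diameter",     ("palm", "abducted")),
    ("medium wrap",        ("palm", "abducted")),
    ("prismatic 2-finger", ("pad",  "abducted")),
    ("prismatic 3-finger", ("pad",  "abducted")),
    ("lateral pinch",      ("side", "adducted")),
]

# Merge: group grasps by cell, then keep the first grasp in each cell
# as that cell's standard grasp.
cells = defaultdict(list)
for name, cell in grasps:
    cells[cell].append(name)

standard = {cell: names[0] for cell, names in cells.items()}
print(len(grasps), "grasps ->", len(standard), "standard grasps")
# -> 6 grasps -> 3 standard grasps
```

The choice of representative per cell is a task-dependent decision; the grouping itself is what yields the coarser second level of classification.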
Quantification and Reproduction of Human Hand Skin Stretch and its Effects on Proprioception

Elliot Greenwald1*, Jeffrey Pompe1*, Steve Hsiao2, and Allison Okamura3
1Dept. of Biomedical Engineering, 2Dept. of Neuroscience, 3Dept. of Mechanical Engineering
The Johns Hopkins University, Baltimore, MD, USA
*Both authors contributed equally to this work

ABSTRACT
Celia Herrera-Rincon1, Beatriz Jorrin1, Abel Sanchez-Jimenez1, Carlos Avendaño2 and Fivos Panetsos1
1Neurocomputing and Neurorobotics Research Group, Complutense University of Madrid.
Amputation of a limb implies the loss of sensory perception and motor skills for the patient. Neural prostheses and artificial limbs, devices aimed at restoring these lost capabilities, will be directly connected to the Central Nervous System (CNS) and, in order to be effective, they have to code sensory information as the human body does and to communicate it by means of electrical pulses through some interface with a peripheral nerve.
Any peripheral modification of the sensory input (from a temporary local anesthesia to the amputation of a limb) provokes modifications to both the anatomical substrate and the physiology of the brain structures that were receiving this sensory input [1]. Our first hypothesis was that electrical stimulation from artificial limbs constitutes a further manipulation of the peripheral input (now an artificial one), in addition to the peripheral nerve sectioning, and consequently provokes additional modifications to the CNS. Our second hypothesis was that CNS alterations after nerve sectioning would be less pronounced when artificial electrical stimulation was applied than when no stimulation was delivered.
To test our hypotheses we manipulated the tactile sensory input from the whiskers of young adult rats: we sectioned the left trigeminal sensory nerve of three animals and applied electrical stimulation for 12 hours/day, using 100 µs-long pulses of 3.0 V at 20 Hz, through chronically implanted microelectrodes. After 4 weeks, the animals were sacrificed and their brains were removed and sectioned. In rodents, the whiskers on the animal's snout are topographically represented, in a one-to-one relationship, by clusters of neurons called barrels in the contralateral cortex (barrels are functional modules that process input signals arriving from the periphery) [2]. To evaluate the alterations, in each animal we compared the cortex receiving input from the manipulated nerve to the cortex receiving input from the intact nerve. We also compared the two cortices of control animals (intact, n=3) and the two cortices of animals with a sectioned peripheral nerve (equivalent to common amputees, n=3).
As a measure we used the cortical volume of the barrel cortex (whiskers A1-A5 and E1-E5) expressing cytochrome oxidase, a key mitochondrial enzyme that catalyzes the final step of oxidative metabolism, generating the ATP necessary for neuronal functioning [3]. Because of the tight coupling between neuronal activity and energy metabolism, activity alterations within the CNS are reflected by regional changes in the level of energy metabolism [4]. As such, cytochrome oxidase serves as a marker for direct visualization of the oxidative capacity of specific cells and their processes. In control animals, we obtained 4.36 ± 0.20 and 4.28 ± 0.03 for the right and left cortex respectively, a difference between cortices of 1.8 ± 3.81%. Animals with a sectioned peripheral nerve show a very significant loss of cortical volume (3.31 ± 0.00 versus 3.98 ± 0.00), a difference of -16.9 ± 3.35%. Finally, animals with a sectioned peripheral nerve submitted to electrical stimulation show a clear reduction of the loss of cortical volume (-6.6 ± 3.46%; 4.03 ± 0.21 for the right versus 4.31 ± 0.01 for the left cortex). Measures were obtained using stereological methods [5]; values are mean ± SD in mm3. Variance analysis shows significant differences between the three groups: Control-Sectioned p < 0.001; Control-Stimulated p = 0.009; Sectioned-Stimulated p = 0.003.
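The reported percentage differences are consistent with computing the relative difference of the deprived (right) cortex against the intact (left) one from the group means. A quick check (pct_diff is our own helper; the published values differ slightly in the last digit, presumably because percentages were computed per animal before averaging):

```python
def pct_diff(deprived_mm3, intact_mm3):
    """Percentage difference of the deprived (right) cortical volume
    relative to the intact (left) cortex."""
    return (deprived_mm3 - intact_mm3) / intact_mm3 * 100.0

# Group mean volumes in mm^3 taken from the text (right, left):
groups = {
    "control":    (4.36, 4.28),  # ~ +1.9%; reported as  1.8 +/- 3.81%
    "sectioned":  (3.31, 3.98),  # ~ -16.8%; reported as -16.9 +/- 3.35%
    "stimulated": (4.03, 4.31),  # ~ -6.5%; reported as  -6.6 +/- 3.46%
}

for name, (right, left) in groups.items():
    print(f"{name}: {pct_diff(right, left):+.1f}%")
```

This also confirms the closing claim that stimulation reduces the volume loss by more than half (roughly -16.8% down to -6.5%).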
Our results show that, in the short term, electrical stimulation has an effect on the maintenance of organization in cortical structures. In animals with a transected nerve, the comparison of the contralateral and ipsilateral hemispheres, with respect to the transected nerve, shows a very significant loss of cortical volume. Electrical stimulation of the transected peripheral nerve reduces this loss by more than 50%. Consequently, advanced neuroprostheses could help maintain cortical activity near its normal values, preventing the direct effects of the deafferentation. We are currently studying the effect of long-term stimulation, as well as the effect of neural stimulation with a delay between amputation and electrode implantation.
[2] ME Waite and DJ Tracey: Trigeminal sensory system. In: The Rat Nervous System, edited by G. Paxinos. San Diego: Academic Press, 1995, pp. 705-724.
[3] MT Wong-Riley: Cytochrome oxidase: an endogenous metabolic marker for neuronal activity. Trends in Neurosci., 1989, vol. 12, pp. 94-101.
[4] R Machín, B Blasco, R Bjugn, and C Avendaño: The size of the whisker barrel field in adult rats: minimal nondirectional asymmetry and limited modifiability by chronic changes of the sensory input. Brain Res., 2004, 1025:130-138.
[5] HJG Gundersen, P Bagger, TF Bendtsen, SM Evans, L Korbo, N Marcussen, A Møller, K Nielsen, JR Nyengaard, B Pakkenberg, FB Sørensen, A Vesterby, MJ West: The new stereological tools: disector, fractionator, nucleator and point sampled intercepts and their use in pathological research and diagnosis. APMIS 96, 1988, pp. 857-881.
Three-dimensional, Continuous-motion Ball Joint
Technologies for Prosthetic Wrist Applications
Werner Merlo
Medical Bionics Inc., 51203 Range Road 265
Spruce Grove, Alberta, Canada T7Y1E7
Telephone: (780) 987 3245, Fax: (780) 987 0075
E-Mail: werner@medicalbionics.com
http://medicalbionics.com
In this abstract, two revolutionary joint technologies that have been developed at Medical
Bionics Inc. are introduced. The CIVIC robotic hybrid and the Passive Locking (PL) joint
technology are based on the Robo-Flex core and can be applied to revolutionize any robotic
or manually operated joint system.
The hybrid version CIVIC (Continuous Internally Variable Interlocking Concept), shown in Figure 1 and best described as a three-dimensional rotational mechanical gear-drive mechanism, is a variant of the prior Robo-Flex PL joint technology described below and shown in Figure 2. Unlike its PL predecessor, which cannot move into a new locking position without first being disengaged, CIVIC features a permanent linkage between the two engaging surfaces. This linkage enables a continuous flow between locking positions and significantly increases the potential of the CIVIC concept for other innovative applications, including robotics.
CIVIC Know-How: A ball joint, connected to a Terminal Device (TD) and held in place by a shell-like enclosure, is able to rotate completely around its own axis and/or tilt in all directions within its confines. A portion of the ball consists of ball-race protuberances that are permanently interlinked with a crown of pressure-sensitive, spaced actuators. Bound only to the race protuberances, the actuators control the movement of the ball joint. When the actuator assembly is moved within a small horizontal plane, it directs the angular and rotational deviations of the ball race and consequently of the TD attached to it. Every degree of tilt and/or axial rotation of the TD is directly proportional to the directional changes made by the actuators; in essence, the two components move in tandem.
This system incorporates complete and reliable three-dimensional function in a single unit, and can be used for a multitude of robotic applications. CIVIC can be powered through a complete range of locking positions to accurately and continuously control pitch, yaw and roll while maintaining its 3D interlock. The current prototype spans 26-degree rotations in the pitch and yaw directions over a 360-degree perimeter; however, it is possible to extend the range of lateral rotations to 180 degrees by increasing the surface of the ball race and the area covered by the actuators.
Current robotic joint mechanisms have limited dexterity. The robotic wrists of such sophisticated projects as the Mars Spirit Rover, the Dexterous Manipulator Arm attached to the Space Station, or unmanned reconnaissance vehicles still comprise three individual, albeit highly complex, unidirectional joint systems to control pitch, yaw and roll. Besides extremely high manufacturing and maintenance costs, the controls of these systems are difficult to synchronize and operate efficiently, and their extreme weight and bulk are often a hindrance. By contrast, CIVIC could revolutionize the robotic joint industry as a technology capable of emulating lifelike 3D movements while controlling pitch, yaw and roll from one single, compact joint module.
Medical Bionics Inc. has developed the production-ready Robo-Flex prosthetic wrist. Its ball-joint-based mechanism enables smooth motion through a wide angular range, and a stepless 3D locking mechanism can be activated at the desired working angle. The wrist has been tested extensively by amputee soldiers at the Walter Reed Medical Center, Washington, DC, over the past two years.
The Robo-Flex passive-locking ball joint technology (Figure 2) works as follows. A rounded object, such as a ball or part thereof, has a surface covered with polygonal patterns of spaced-apart protuberances; the spaces between protuberances are cavities. When an assembly of closely spaced, spring-loaded actuators is pressed against the protuberances, the actuators emulate a mirror image of the opposing surface: actuators contacting protuberances are pushed back, while unobstructed actuators penetrate into the cavities between them. Once trapped in a cavity, the actuators are unable to move in any direction, thereby freezing the spatial orientation between actuators and ball surface against forces of pitch, yaw and roll, and enabling a large load-carrying capacity. As soon as the actuator assembly is retracted from the protuberances, the unhindered ball can be freely moved to another angle. The actuators, compressed to various depths while engaged with the protuberances, regain their fully extended position, ready to imprint the protuberances at a different locking position.
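The passive-locking cycle described above — engage to freeze the orientation, retract to move freely — can be captured in a toy state model. The class below and its one-dimensional angle are our own abstraction for illustration, not part of the Robo-Flex design:

```python
class PassiveLockingJoint:
    """Toy state model of a passive-locking ball joint cycle (our abstraction)."""

    def __init__(self):
        self.engaged = False  # actuator assembly pressed against the ball
        self.angle = 0.0      # current orientation (degrees, 1-D for brevity)

    def engage(self):
        # Actuators imprint the protuberance pattern; unobstructed actuators
        # drop into cavities and freeze the orientation.
        self.engaged = True

    def retract(self):
        # Spring-loaded actuators regain full extension; the ball moves freely.
        self.engaged = False

    def move(self, delta_deg):
        # While engaged, trapped actuators resist any reorientation, so the
        # joint cannot reach a new locking position without disengaging first.
        if self.engaged:
            raise RuntimeError("joint is locked; retract actuators first")
        self.angle += delta_deg


joint = PassiveLockingJoint()
joint.move(15.0)   # free motion to the working angle
joint.engage()     # lock against pitch/yaw/roll forces at 15 degrees
```

The model makes the key contrast with CIVIC explicit: here any move() while engaged fails, whereas CIVIC's permanent linkage permits continuous motion between locking positions.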
Fig. 2: A biplot of the dimensions of dexterity.
Ravi Balasubramanian and Yoky Matsuoka