Department of Psychological and Brain Sciences and Department of Music, Dartmouth College, Hanover, NH 03755
Edited by Dale Purves, Duke-National University of Singapore Graduate Medical School, Singapore, and approved November 7, 2012 (received for review May 28, 2012)
Author contributions: B.S., L.P., and T.W. designed research; B.S. performed research; B.S.,
M.C., and T.W. analyzed data; B.S. and T.W. wrote the paper; and B.S. wrote the MaxMSP
program and recorded the supporting media.
The authors declare no conflict of interest.
This article is a PNAS Direct Submission.
indicating their function (SI Text). Each of the cross-modal (music–movement) mappings represented by these slider bars constituted a hypothesis about the relationship between music and movement. That is, based on their uses in emotional music and movement (2, 23), we hypothesized that rate, jitter, direction of movement, and step size have equivalent emotional functions in both music and movement. Additionally, we hypothesized that both dissonance and spikiness would have negative valence, and that in equivalent cross-modal emotional expressions the magnitude of one would be positively correlated with the magnitude of the other.
United States
Methods. Our first experiment took place in the United States
[Figure: "Distribution of distances" (histogram; x axis, Distance; y axis, Frequency; a value of 4.24 is marked).]
ANOVA. We z-scored the data for each parameter (slider) separately within each population. We then combined all z-scored data into a single repeated-measures ANOVA with Emotion and Sliders as within-subjects factors and Modality and Population as between-subjects factors. Emotion had the largest main effect on slider position [F(3.76, 492.23) = 40.60, P < 0.001, partial η² = 0.24], accounting for 24% of the overall (effect plus error) variance. There were no significant main effects of Modality (music vs. movement) [F(1,131) = 0.004, P = 0.95] or Population (United States, Kreung) [F(1,131) < 0.001, P = 0.99] and no interaction between the two [F(1,131) = 1.15, P = 0.29].
This main effect of Emotion was qualified by an Emotion × Sliders interaction, indicating that each emotion was expressed by different slider settings [F(13.68, 1791.80) = 38.22, P < 0.001; partial η² = 0.23]. Emotion also interacted with Population, albeit more modestly [F(3.76, 492.23) = 11.53, P < 0.001, partial η² = 0.08], and both were qualified by the three-way Emotion × Sliders × Population interaction, accounting for 7% of the overall variance in slider bar settings [F(13.68, 1791.80) = 10.13, P < 0.001, partial η² = 0.07]. This three-way interaction can be understood as the extent to which participants' emotion configurations could be predicted from their population identity. See SI Text for the z-scored means.
Emotion also interacted with Modality [F(3.76, 492.23) = 2.84, P = 0.02; partial η² = 0.02], which was qualified by the three-way Emotion × Modality × Sliders [F(13.68, 1791.80) = 4.92, P < 0.001; partial η² = 0.04] and the four-way Emotion × Modality × Sliders × Population interactions [F(13.68, 1791.80) = 2.8, P < 0.001; partial η² = 0.02]. All of these Modality interactions were modest, accounting for between 2% and 4% of the overall variance.
In summary, the ANOVA revealed that the slider bar configurations depended most strongly on the emotion being conveyed (Emotion × Sliders interaction, partial η² = 0.23), with significant but small influences of modality and population (partial η²s < 0.08).
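To make the normalization step concrete, here is a minimal Python sketch of z-scoring each slider within each population before fitting the ANOVA. The data layout, column names, and file name are hypothetical, and the commented model call illustrates only the within-subjects part of the design; this is not the authors' actual analysis code.

```python
import pandas as pd

# Hypothetical long-format data: one row per participant x emotion x slider,
# with columns: participant, population, modality, emotion, slider, value.
df = pd.read_csv("slider_data.csv")  # hypothetical file name

# Z-score each slider separately within each population, as described above.
df["z"] = df.groupby(["population", "slider"])["value"].transform(
    lambda v: (v - v.mean()) / v.std(ddof=1)
)

# The within-subjects factors (Emotion, Sliders) could then be fit with,
# e.g., statsmodels; the full design with the between-subjects factors
# (Modality, Population) would require a more general mixed-model package.
# from statsmodels.stats.anova import AnovaRM
# res = AnovaRM(df, depvar="z", subject="participant",
#               within=["emotion", "slider"], aggregate_func="mean").fit()
# print(res)
```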
We also performed a Fisher's linear discriminant analysis examining the importance of each feature (slider) in distinguishing any given emotion from the other emotions (SI Text).
1. Baily J (1985) Musical Structure and Cognition, eds Howell P, Cross I, West R (Academic, London).
2. Juslin PN, Laukka P (2004) Expression, perception, and induction of musical emotions: A review and a questionnaire study of everyday listening. J New Music Res 33(3):217–238.
3. Winkler I, Háden GP, Ladinig O, Sziller I, Honing H (2009) Newborn infants detect the beat in music. Proc Natl Acad Sci USA 106(7):2468–2471.
4. Zentner MR, Eerola T (2010) Rhythmic engagement with music in infancy. Proc Natl Acad Sci USA 107(13):5768–5773.
5. Bergeson TR, Trehub SE (2006) Infants' perception of rhythmic patterns. Music Percept 23(4):345–360.
6. Phillips-Silver J, Trainor LJ (2005) Feeling the beat: Movement influences infant rhythm perception. Science 308(5727):1430.
7. Trehub S (2000) The Origins of Music, Chapter 23, eds Wallin NL, Merker B, Brown S (MIT Press, Cambridge, MA).
8. Higgins KM (2006) The cognitive and appreciative import of musical universals. Rev Int Philos 2006/4(238):487–503.
9. Fritz T, et al. (2009) Universal recognition of three basic emotions in music. Curr Biol 19(7):573–576.
10. Ekman P (1993) Facial expression and emotion. Am Psychol 48(4):384–392.
11. Izard CE (1994) Innate and universal facial expressions: Evidence from developmental and cross-cultural research. Psychol Bull 115(2):288–299.
12. Scherer KR, Banse R, Wallbott HG (2001) Emotion inferences from vocal expression correlate across languages and cultures. J Cross Cult Psychol 32(1):76–92.
13. Darwin C (2009) The Expression of the Emotions in Man and Animals (Oxford Univ Press, New York).
14. Brown S, Jordania J (2011) Universals in the world's musics. Psychol Music, 10.1177/0305735611425896.
15. Huron D (1994) Interval-class content in equally tempered pitch-class sets: Common scales exhibit optimum tonal consonance. Music Percept 11(3):289–305.
16. Parncutt R (1989) Harmony: A Psychoacoustical Approach (Springer, Berlin).
17. McDermott JH, Lehr AJ, Oxenham AJ (2010) Individual differences reveal the basis of consonance. Curr Biol 20(11):1035–1041.
18. Bidelman GM, Heinz MG (2011) Auditory-nerve responses predict pitch attributes related to musical consonance-dissonance for normal and impaired hearing. J Acoust Soc Am 130(3):1488–1502.
19. Köhler W (1929) Gestalt Psychology (Liveright, New York).
20. Ramachandran VS, Hubbard EM (2001) Synaesthesia: A window into perception, thought and language. J Conscious Stud 8(12):3–34.
21. Maurer D, Pathman T, Mondloch CJ (2006) The shape of boubas: Sound-shape correspondences in toddlers and adults. Dev Sci 9(3):316–322.
22. Krumhansl CL (2002) Music: A link between cognition and emotion. Curr Dir Psychol Sci 11(2):45–50.
23. Bernhardt D, Robinson P (2007) Affective Computing and Intelligent Interaction, eds Paiva A, Prada R, Picard RW (Springer, Berlin), pp 59–70.
24. Hevner K (1936) Experimental studies of the elements of expression in music. Am J Psychol 48(2):246–268.
25. United Nations Development Programme Cambodia (2010) Kreung Ethnicity: Documentation of Customary Rules (UNDP Cambodia, Phnom Penh, Cambodia). Available at www.un.org.kh/undp/media/files/Kreung-indigenous-people-customary-rules-Eng.pdf. Accessed June 29, 2011.
26. Friberg A, Sundberg J (1999) Does music performance allude to locomotion? A model of final ritardandi derived from measurements of stopping runners. J Acoust Soc Am 105(3):1469–1484.
27. Moelants D, Van Noorden L (1999) Resonance in the perception of musical pulse. J New Music Res 28(1):43–66.
28. Styns F, van Noorden L, Moelants D, Leman M (2007) Walking on music. Hum Mov Sci 26(5):769–785.
29. Leman M (2007) Embodied Music Cognition and Mediation Technology (MIT Press, Cambridge, MA).
30. Molnar-Szakacs I, Overy K (2006) Music and mirror neurons: From motion to emotion. Soc Cogn Affect Neurosci 1(3):235–241.
31. Fujioka T, Trainor LJ, Large EW, Ross B (2012) Internalized timing of isochronous sounds is represented in neuromagnetic oscillations. J Neurosci 32(5):1791–1802.
32. Nozaradan S, Peretz I, Missal M, Mouraux A (2011) Tagging the neuronal entrainment to beat and meter. J Neurosci 31(28):10234–10240.
33. Janata P, Tomic ST, Haberman JM (2012) Sensorimotor coupling in music and the psychology of the groove. J Exp Psychol Gen 141(1):54–75.
34. Koelsch S, Siebel WA (2005) Towards a neural basis of music perception. Trends Cogn Sci 9(12):578–584.
35. Bryant GA, Barrett HC (2008) Vocal emotion recognition across disparate cultures. J Cogn Cult 8(1-2):135–148.
36. Sauter DA, Eisner F, Ekman P, Scott SK (2010) Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations. Proc Natl Acad Sci USA 107(6):2408–2412.
37. Banse R, Scherer KR (1996) Acoustic profiles in vocal emotion expression. J Pers Soc Psychol 70(3):614–636.
38. Thompson WF, Balkwill LL (2006) Decoding speech prosody in five languages. Semiotica 2006(158):407–424.
39. Elfenbein HA, Ambady N (2002) On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychol Bull 128(2):203–235.
40. Marie C, Delogu F, Lampis G, Belardinelli MO, Besson M (2011) Influence of musical expertise on segmental and tonal processing in Mandarin Chinese. J Cogn Neurosci 23(10):2701–2715.
41. Zatorre RJ, Baum SR (2012) Musical melody and speech intonation: Singing a different tune. PLoS Biol 10(7):e1001372.
42. Smalley D (1996) The listening imagination: Listening in the electroacoustic era. Contemp Music Rev 13(2):77–107.
43. Susemihl F, Hicks RD (1894) The Politics of Aristotle (Macmillan, London), p 594.
44. Meyer LB (1956) Emotion and Meaning in Music (Univ of Chicago Press, Chicago, IL).
45. Truslit A (1938) Gestaltung und Bewegung in der Musik [Shape and Movement in Music] (Chr Friedrich Vieweg, Berlin-Lichterfelde). German.
46. Iyer V (2002) Embodied mind, situated cognition, and expressive microtiming in African-American music. Music Percept 19(3):387–414.
47. Gagnon L, Peretz I (2003) Mode and tempo relative contributions to "happy-sad" judgements in equitone melodies. Cogn Emotion 17(1):25–40.
48. Eitan Z, Granot RY (2006) How music moves: Musical parameters and listeners' images of motion. Music Percept 23(3):221–247.
49. Juslin PN, Lindström E (2010) Musical expression of emotions: Modelling listeners' judgments of composed and performed features. Music Anal 29(1-3):334–364.
50. Phillips-Silver J, Trainor LJ (2007) Hearing what the body feels: Auditory encoding of rhythmic movement. Cognition 105(3):533–546.
51. Fitch WT (2006) On the biology and evolution of music. Music Percept 24(1):85–88.
52. Fleming W (1946) The element of motion in baroque art and music. J Aesthet Art Crit 5(2):121–128.
53. Roseman M (1984) The social structuring of sound: The Temiar of Peninsular Malaysia. Ethnomusicology 28(3):411–445.
54. Ames DW (1971) Taaken samarii: A drum language of Hausa youth. Africa 41(1):12–31.
55. Vines BW, Krumhansl CL, Wanderley MM, Dalca IM, Levitin DJ (2005) Dimensions of emotion in expressive musical performance. Ann N Y Acad Sci 1060:462–466.
56. Russell JA (1980) A circumplex model of affect. J Pers Soc Psychol 39(6):1161–1178.
57. Laban R (1975) Laban's Principles of Dance and Movement Notation, 2nd Ed, ed Lange R (MacDonald and Evans, London).
58. Stalinski SM, Schellenberg EG (2012) Music cognition: A developmental perspective. Top Cogn Sci 4(4):485–497.
59. Trehub SE, Hannon EE (2006) Infant music perception: Domain-general or domain-specific mechanisms? Cognition 100(1):73–99.
60. Gabrielsson A (2002) Emotion perceived and emotion felt: Same or different? Music Sci Special Issue 2001–2002:123–147.
61. Juslin PN, Västfjäll D (2008) Emotional responses to music: The need to consider underlying mechanisms. Behav Brain Sci 31(5):559–575, discussion 575–621.
62. Hagen EH, Bryant GA (2003) Music and dance as a coalition signaling system. Hum Nat 14(1):21–51.
63. Phillips-Silver J, Keller PE (2012) Searching for roots of entrainment and joint action in early musical interactions. Front Hum Neurosci, 10.3389/fnhum.2012.00026.
64. Wheatley T, Kang O, Parkinson C, Looser CE (2012) From mind perception to mental connection: Synchrony as a mechanism for social understanding. Soc Personal Psychol Compass 6(8):589–606.
65. Janata P, Grafton ST (2003) Swinging in the brain: Shared neural substrates for behaviors related to sequencing and music. Nat Neurosci 6(7):682–687.
66. Zatorre RJ, Chen JL, Penhune VB (2007) When the brain plays music: Auditory-motor interactions in music perception and production. Nat Rev Neurosci 8(7):547–558.
67. Dehaene S, Cohen L (2007) Cultural recycling of cortical maps. Neuron 56(2):384–398.
Supporting Information
Sievers et al. 10.1073/pnas.1209023110
SI Text
Data. Spreadsheets containing the raw data are attached. Dataset
S1 includes all of the data from the United States experiment.
Dataset S2 includes all of the data from the Kreung experiment,
as well as discretized data from the United States experiment. The
raw means by emotion are provided in Table S1, for each population. Table S2 provides the z-scored means of the discrete data
by Emotion, Slider, and Population (corresponding to the cross-cultural ANOVA). These means were z-scored within each slider
(using discrete data), for each population separately. These means
are graphically portrayed in Fig. S1.
Fisher's Linear Discriminant Analysis. In an attempt to estimate which
sliders were most important to the emotion categorization effect,
we performed a Fishers linear discriminant analysis between the
slider values and their associated emotions for each population, for
each modality. The numbers represent the importance of each
feature (slider) in distinguishing the given emotion from all of the
other emotions (higher numbers mean more importance). These
data are presented in Table S3.
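As an illustrative sketch of this kind of one-vs-rest feature scoring (the exact computation behind Table S3 may differ), the following Python computes a per-feature Fisher criterion, the ratio of the squared between-class mean difference to the within-class variance, for one emotion against the rest. All variable names here are hypothetical.

```python
import numpy as np

def fisher_scores(X, y, target):
    """One-vs-rest Fisher criterion per feature (column of X): squared
    difference of class means divided by the sum of class variances.
    Higher scores mean the feature better separates `target` trials
    from trials of all other emotions."""
    in_cls = y == target
    a, b = X[in_cls], X[~in_cls]
    num = (a.mean(axis=0) - b.mean(axis=0)) ** 2
    den = a.var(axis=0, ddof=1) + b.var(axis=0, ddof=1)
    return num / den

# Hypothetical usage: X holds slider settings (rows = trials; columns =
# rate, jitter, consonance, step size, direction), y holds emotion labels.
# scores = fisher_scores(X, y, "angry")
```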
In the United States data, consonance was the most effective
feature for discriminating each emotion from the other four
emotions in both modalities, accounting for 60% of the total
discrimination between emotions. The second most effective feature was direction (up/down), accounting for 26% (music) and
35% (movement). In the Kreung music data, rate and step size
were most effective for discriminating each emotion compared
with the other four; rate and consonance were most effective for
the Kreung movement data.
It is important to note that low discriminant values do not necessarily imply unimportance for two reasons. First, this analysis only
reveals the importance of each feature as a discriminant for each
emotion when that emotion is compared with the other four
emotions. That is, any other comparison (e.g., each emotion
compared with a different subset of emotions) would yield different
values. For example, although jitter may seem relatively unimportant for discriminating emotions based on the data in Table S3,
jitter was a key feature for discriminating between particular
emotion dyads (e.g., scared vs. sad in the United States data).
Second, whether a parameter (slider) was redundant in this dataset
is impossible to conclude because of potential interactions between
parameters. Additional research is necessary to determine whether
one or more features could be excluded without significant cost to emotion recognition. Such research would benefit from testing
each feature in isolation (e.g., by holding others constant) to better
elucidate its contribution to emotional expression within and across
modalities and cultures.
Multimedia. Audio and visual files of the emotional prototypes are provided as supporting files (Movies S1–S10 and Audio S1–S12).
The character was designed to appear animate, such that it could be capable of communicating or experiencing happiness, sadness, and so forth. The movement of our character (henceforth referred to as the Ball) was limited to bouncing up and down, rotating forward and backward, and modulating the spikiness of its surface. Technical details follow.
The Ball was drawn as a red 3D sphere composed of a limited number of triangular faces, which was transformed into an ellipsoid by scaling its y axis by a factor of 1.3. The Ball was positioned such that it appeared to be resting on a rectangular floor beneath it. Its base appeared to flatten where it made contact with the floor. The total visible height of the Ball when it was above the floor was 176 pixels; this was reduced to 168 pixels when the Ball was making contact with the floor. Its eyes were small white cubes located about 23% downward from the top of the ellipsoid. The Ball and the floor were rotated about the y axis such that it appeared the Ball was looking somewhere to the left of the viewer.
Every time the current position on the number line changed, the Ball would bounce. A bounce is the translation of the Ball to a position somewhere above its resting position and back down again. Bounce duration was equal to 93% of the current period of the metronome. The 7% reduction was intended to create a perceptible landing between each bounce. Bounce height was determined by the difference between the current position on the number line and the previous position. A difference of 1 resulted in a bounce height of 20 pixels, and each additional increment of 1 to the difference increased the bounce height by 13.33 pixels (e.g., a difference of 5 would result in a bounce height of 73.33 pixels). The Ball reached its translational apex when the bounce was 50% complete. The arc of the bounce followed the first half of a sine curve.
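The bounce trajectory can be summarized in a short sketch (illustrative Python, not the original MaxMSP program; the 13.33-pixel increment is assumed to be exactly 40/3 so that the worked example of 73.33 pixels holds):

```python
import math

def bounce_height(diff):
    # 20 px for a number-line difference of 1, plus 40/3 (~13.33) px
    # for each additional unit of difference.
    return 20.0 + (40.0 / 3.0) * (abs(diff) - 1)

def bounce_offset(elapsed, period, diff):
    # The bounce lasts 93% of the metronome period, peaks at its
    # midpoint, and follows the first half of a sine curve.
    duration = 0.93 * period
    if elapsed >= duration:
        return 0.0  # the Ball has landed
    return bounce_height(diff) * math.sin(math.pi * elapsed / duration)
```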
The Ball would rotate, leaning forward or backward, depending on the current number-line value. High values caused the Ball to lean backward, such that it appeared to look upward, and low values caused the Ball to lean forward or look down. When the current value of the number line was 60, the Ball's angle of rotation was 0°. An increase of 1 on the number line decreased the Ball's angle of rotation by 1°; conversely, a decrease of 1 on the number line increased the Ball's angle of rotation by 1°. For example, if the current number-line value were 20, the Ball's angle of rotation would be 40°; if the current number-line value were 90, the Ball's angle of rotation would be −30°.
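This rotation rule reduces to the linear mapping angle = 60 − position, as in this one-line sketch (illustrative Python):

```python
def rotation_angle(position):
    # 0 degrees at number-line position 60; higher positions tilt the
    # Ball backward (negative angles), lower positions tilt it forward.
    return 60 - position

# Matches the worked examples above: positions 20 and 90.
assert rotation_angle(20) == 40
assert rotation_angle(90) == -30
```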
The Ball could also be more or less spiky. The amplitude of the spikes, or perturbations of the Ball's surface, was analogically mapped to musical dissonance. The visual effect was achieved by adding noise to the x, y, and z coordinates of each vertex in the set of triangles comprising the Ball. Whenever a new position on the number line was chosen, the aggregate dyadic consonance of the interval formed by the new position and the previous position was calculated. The maximum aggregate dyadic consonance was 0.8; the minimum was −1.428. The results were scaled such that when the consonance value was 0.8, the spikiness value was 0, and when the consonance value was −1.428, the spikiness value was 0.2. A change in consonance of 0.01 thus resulted in a change of about 0.0008977 in the spikiness value. For each vertex on the Ball's surface, spikiness offsets for each of the three axes were calculated. Each spikiness offset was a number chosen uniformly at random between −1 and 1, which was then multiplied by the Ball's original spherical radius times the current spikiness value.
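The consonance-to-spikiness mapping is a linear rescaling between the stated endpoints, followed by per-vertex uniform noise; a sketch (illustrative Python, not the original MaxMSP code):

```python
import random

C_MAX, C_MIN = 0.8, -1.428   # aggregate dyadic consonance extremes
S_MAX = 0.2                  # spikiness at minimum consonance

def spikiness(consonance):
    # Linear map: consonance 0.8 -> spikiness 0; -1.428 -> 0.2.
    # A 0.01 change in consonance shifts spikiness by ~0.0008977.
    return S_MAX * (C_MAX - consonance) / (C_MAX - C_MIN)

def vertex_offsets(radius, consonance):
    # For each axis, draw uniformly from [-1, 1] and scale by the
    # Ball's original spherical radius times the current spikiness.
    s = spikiness(consonance)
    return tuple(random.uniform(-1.0, 1.0) * radius * s for _ in range(3))
```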
For the emotion labels, care was taken to avoid using words etymologically related to either music or movement (e.g., "upbeat" for happy or "downtrodden" for sad). See Fig. S3A for a screenshot of the United States experiment interface and the lists of emotion words presented to participants.
In the United States experiment, the labels for the slider bars changed between tasks as described in Fig. S3B. In the Kreung experiment, slider bars were not labeled in the music task and were accompanied by icons in the movement task. See Fig. S4 for a screenshot of the slider bars during the Kreung movement task.
Fig. S1. Average values of z-scores (with SE bars) for each slider (i.e., feature) by each emotion and by population. The sign for the up/down slider was flipped for visualization purposes (positive values indicate upward tilt/pitch).
Fig. S2. The Ball.
Fig. S3. (A) Interface for the United States music task. (B) Slider labels by task modality.
Fig. S4. Interface for the Kreung movement task with icons as a mnemonic aid.
Table S1. Raw means for each slider by emotion for each population
[Table: United States and Kreung means for Rate, Jitter, Consonance, Size (big/small), and Direction (up/down) by emotion, with the range of each slider; the cell values are not recoverable from the source.]
Table S2. Z-scored, discrete means for each slider by emotion, for each population
[Table: z-scored means for each slider by emotion (columns include Sad and Scared) for each population; the cell values are not recoverable from the source.]
Table S3. Fisher's linear discriminant values for each feature (slider) by emotion, for each population and modality
[Table: per-feature discriminant values (Rate, Jitter, Consonance, Direction (up/down)) by emotion, with totals, plus an adjacent table of Low/Medium/High counts; the cell values are not recoverable from the source.]
Movie S1.
Movie S2.
Movie S5.
Movie S7.
Movie S8.
Movie S10.
Audio S1.
Audio S2.
Audio S5.
Audio S6.
Audio S7.
Audio S8.
Audio S9.
Audio S10.
Audio S11.
Audio S12. Example of Kreung mem music, Bun Hear, courtesy of Cambodian Living Arts.
Dataset S1. All data from the United States experiment, in continuous format.
Dataset S2. All data from the United States and Kreung experiments. Kreung data were collected as discrete values (each slider had three positions: 0, 1, or 2). Continuous values from the United States data were converted to discrete values as described in the main text.