
Neuropsychologia, Vol. 30, No. 3, pp. 277-292, 1992
Printed in Great Britain
© 1992 Pergamon Press plc

THE ROLE OF CONTOUR AND INTERVALS IN THE


RECOGNITION OF MELODY PARTS: EVIDENCE FROM
CEREBRAL ASYMMETRIES IN MUSICIANS

ISABELLE PERETZ* and MYRIAM BABAÏ

Department of Psychology, University of Montreal, Montreal, Quebec, Canada

(Received 25 March 1991; accepted 28 October 1991)

Abstract: The left hemisphere (LH) has been shown to be involved in tasks requiring interval-based
procedures, and the right hemisphere (RH) in tasks allowing a contour-based approach in melody
recognition. Support for this distinction was obtained by studying contour properties at the level of
whole melodies and interval characteristics at the level of individual tones. The purpose of the present
study was to extend the validity of this two-component model to the level of melody parts. It was
predicted that both the LH interval-based procedure and the RH contour-based approach contribute
to melody part recognition, but that their respective efficiency will depend on the structure of the parts
used as recognition probes. To address this question along the lines of prior work [3] (BEVER and
CHIARELLO, Science 185, 537-539, 1974), a probe recognition task was presented monaurally to right-
handed musicians. The recognition probes that corresponded to one of the melody parts were found
to be far more accurately and quickly recognized than the probes that bridged across the contour-
defined boundary. On the latter, subjects initially performed at about chance level. They improved,
however, after some exposure to the task, that is, on the second half of the test material in Experiment 1
and on the second half of Experiment 2, and displayed the predicted interaction between laterality and
probe type. Subjects recognized more easily in the left ear (i.e. the RH) the probes that coincided with a
part delineated by contour boundaries, whereas they recognized more easily in the right ear (i.e. the
LH) the probes that crossed contour boundaries. These findings justify the consideration of contour
as an important grouping factor in pitch sequences and emphasize the usefulness of laterality effects.

ONE PARTICULAR domain that has greatly benefited from the progress made in cognitive
psychology is the study of functional differences between the cerebral hemispheres in melody
processing. The interplay between the two disciplines found its original impulse in the study
of BEVER and CHIARELLO [3]. These authors demonstrated that the recognition of pitch
sequences was subserved by two distinct processing components implemented in different
cerebral structures. The distinction stemmed from the comparison of musicians’ and
nonmusicians' performance. In a melody recognition task, musicians exhibited a right-ear
advantage (REA, taken to reflect a left-hemisphere (LH) superiority) whereas nonmusicians
displayed a left-ear advantage (LEA, taken to reflect a right-hemisphere (RH) superiority).
Support for relating the observed laterality effects to the use of distinct
processing components was gathered in the same study by considering the success with
which the two groups could identify a two-tone probe as part of the whole melody. Only
musicians were able to perform above chance on this subsidiary task. This was taken as

*Address all correspondence to: Isabelle Peretz, Département de Psychologie, Université de Montréal, C.P. 6128,
Succ. A, Montréal (Québec) H3C 3J7, Canada.

evidence that musicians used an analytic route for melody recognition, which would be
typical of the LH mode of functioning, and that nonmusicians recognized melodies
holistically, which characterizes the RH mode of processing.
The major cognitive implication of Bever and Chiarello’s seminal study is the notion that
melody recognition can be achieved via independent routes. However, the fact that they
framed the discussion of this dissociation in terms of analytic vs holistic processes attracted
various criticisms (see open peer commentaries to Refs [5] and [6]). The major criticism was
that the functional distinction was not specified in operational terms. It is at this level that the
progress made independently within an information-processing approach has been most
constructive. In effect, DOWLING [13] presented a two-component model of memory for
melodies that captured well the distinction between analytic and holistic processes, while
providing the necessary specifications of their respective operations. In Dowling’s model,
which has been substantiated further in a large number of studies (see Refs [14] and [15], for
reviews), melodies are stored independently into two different formats: one that contains
information about the specific intervals * between the individual tones and one that
characterizes the overall contour of the melody, defined by pitch directions. This two-
component model of melody processing, by incorporating weighting effects due to expertise,
can account well for Bever and Chiarello’s findings. Indeed, it is a well-documented fact that
musicians are more prone than nonmusicians to use an interval-based procedure rather than a
contour-based approach [1, 7, 14].
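Dowling's two storage formats lend themselves to a compact sketch (our illustration, not code from any of the cited studies; the pitch values are hypothetical MIDI-style numbers): the interval code keeps the signed semitone distance of each step, while the contour code keeps only its direction.

```python
def intervals(pitches):
    """Interval code: signed semitone distance between adjacent tones."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def contour(pitches):
    """Contour code: direction of each step (+1 up, -1 down, 0 repeat)."""
    return [(b > a) - (b < a) for a, b in zip(pitches, pitches[1:])]

melody     = [60, 62, 64, 62, 60]   # C-D-E-D-C in MIDI numbers (hypothetical)
transposed = [67, 69, 71, 69, 67]   # the same melody played 7 semitones higher
imitation  = [60, 63, 67, 63, 60]   # same ups and downs, different interval sizes

assert intervals(melody) == intervals(transposed)  # intervals survive transposition
assert contour(melody) == contour(imitation)       # contour ignores interval sizes
assert intervals(melody) != intervals(imitation)   # intervals tell the two apart
```

The assertions make the dissociation concrete: the interval code, but not the contour code, distinguishes the imitation from the original melody.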
The work that we conducted subsequently has been quite successful in testing the validity
of the two-component model of melody processing as a conceptualization of cerebral
asymmetries in processing pitch sequences (see Ref. [32], for a review). In this work, we
adopted experimental conditions which were most likely to highlight the differential use of
one type of representation over the other and then tested them against the functional
asymmetries between the hemispheres. These conditions were first assessed in nonmusicians
in order to ascertain that superiority effects of the LH, which is also the speech-dominant
hemisphere, were due to the encoding of melodic intervals and not to the translation of these
features into a verbal code. Such a distinction is difficult to draw in the case of musicians,
unless an interaction between cerebral dominance and experimental conditions can be
observed within the same subjects. To illustrate our approach, we will briefly report the
results of three studies that are most relevant to the present study, in that they also involved
manipulation of stimulus characteristics as a way to distinguish and identify the nature of the
lateralized processing components.
The first two studies were carried out with normal right-handed subjects by using
monaural presentation of the melodies, as in Bever and Chiarello’s study. Unlike their study,
the stimulus parameters were manipulated to provide a direct test of the notion that the RH
operates more effectively on the overall contour representation of the melody and the LH on
an interval-based representation. To this end, and following the standard procedure
employed by Dowling and his collaborators, melodies were presented in a "same-different"

*Whether successive pitches are encoded as a function of the pitch distance between adjacent tones or as scale-steps, that
is, as a distance function from a tonal center, is an open question; both are presently equally plausible definitions of the type of features
abstracted by the LH. Indeed, all the studies that have led to the observation of a LH superiority that cannot be
attributed to a verbalization component have used tonally structured melodies (see Ref. [32], for a review), thus
lending themselves to both kinds of pitch encoding. It is obviously of important theoretical value to better specify the
nature of the pitch encoding that is subserved by the LH, as will be discussed later on in Experiment 2. However,
clarification of this issue is not necessary for the present argument.

classification task, so that the contour of the melodies to be compared was violated, and thus
available for discrimination, in half the trials and preserved in the other half. A clear
facilitative effect of contour was observed, contour-violated melodies being far better
distinguished than contour-preserved melodies. Moreover, a LEA was obtained in the first
case and a REA in the second case [31]. Taken together, these results indicate that subjects
tended to rely on contour representation to discriminate the melodies and in doing so
involved predominantly their RH. When contour cues were not available and interval
information was required, i.e. with the contour-preserved melodies, subjects shifted to the
LH. In that study, the melodies to be compared were untransposed, in that they were played
at the same pitch level. Yet, under these circumstances a contour superiority effect is
sometimes difficult to observe [15], for interval information is easily available. In order to
further restrict the use of interval information, we exploited melody transposition in another
study. It is well-documented that this situation hinders access to interval information while
making contour the privileged route to melody discrimination [1, 13, 19]. Thus, in a
neuropsychological perspective, discrimination of melodies after transposition should
promote the contribution of the RH compared to a condition where the melodies are
untransposed. An increased LEA was indeed found in such conditions [27].
The lateralization of the underlying mechanisms that subserve the computations of these
various melody properties was examined more recently with brain-damaged patients who
had sustained a vascular lesion either in the LH or in the RH [29]. A lesion in the LH was
found to spare the ability to represent melodies in terms of their contour but to interfere
with an interval-based procedure, whereas a lesion in the RH was found to disrupt both
procedures. Taken in isolation, this pattern of results suggests that only the contour-based
procedure would be lateralized to the RH whereas the interval-based approach would recruit
bilateral neural structures. This latter conclusion is, however, inconsistent with the LH
superiorities associated with use of interval information in melodies that have been observed
in normal subjects, such as in the studies described previously and in several other studies
[2 (Experiment 1), 30, 33]. The results obtained with brain-damaged patients suggest rather
that melodic contour serves as a necessary anchorage frame for encoding interval
information, as hypothesized by some authors [18, 40]. According to this view, a lesion in the
RH is detrimental because it both disrupts the processing subsystem required for
representing the melody contour and deprives the intact LH of the necessary outline for
encoding interval information (see also Ref. [36] for an analogous model of cooperation
between the cerebral hemispheres but in vision). Therefore, the study of brain-damaged
patients provided evidence consistent with the notion that interval-based and contour-based
approaches of melodies are not only functionally but anatomically distinct. The data further
suggest a hierarchical organization between these two processing components.
The main conclusion to be drawn from the foregoing presentation is that the work carried
out within a cognitive approach has provided fruitful guidelines for understanding the nature
of the functional division between the cerebral hemispheres in melody processing. The fact
that both lines of research yielded similar conclusions constitutes strong evidence for the
claim that organization of pitch patterns in melodies is not unitary but involves at least two
separate processing components, that are distinguishable by the kind of features on which
they operate. If the neuropsychological line of investigation owes most of its progress to
cognitive psychology so far, the situation seems ripe for reversal, that is to develop new ideas
about the processing of melodies from the exploitation of neuropsychological evidence. The
present study represents such an attempt.

The neuropsychological data that prompted the present work were a set of negative
findings that were obtained with probe recognition tasks. In numerous independent studies
[3,21,28,39], the recognition of melody probes failed to elicit ear-asymmetries. If such a task
requires consideration of melody intervallic structure, one should expect a LH advantage.
The fact that no laterality effects were observed might be considered inconclusive, since
negative findings can be due to a series of uncontrolled factors. However, when negative
results appear as the rule rather than the exception, they may point to a genuine
contribution of both hemispheres to the task. We thus contemplated the possibility that the
two-component account of hemispheric contribution to melody recognition previously
described may contribute to the recognition of probes from melodies, by being valid at the
level of melody parts.
We reasoned that a RH involvement in melody part recognition was very likely. Indeed,
contour characteristics can be indicative of melody part structure. The fact that this
possibility was ignored in previous work can be related to the manner in which contour had
been traditionally defined, and thus studied, that is as a property of the whole melody
regardless of its internal structure. Such a conceptualization would have obscured one of its
most fundamental functions in melody processing. Contour might well be an essential factor
for grouping adjacent tones together into small structural units. These units, which are larger
than the individual intervals but probably smaller than the entire melody, are certainly
crucial in processing music, especially in memorizing it for later recognition. When faced
with a stream of rapidly changing ephemeral events, a well-established and useful propensity
of the perceiving human organism is to group these events in small chunks in order to
increase the amount of information that can be retained by the limited capacity of our
processing system. Contour might contribute to the formation of these chunks. If this
hypothesis is correct, recognition of melody parts contained in probes would not rely solely
on an interval-based representation of a melody but on contour representations as well.
The role of contour as an essential grouping factor in pitch sequence organization is an
intuitively appealing notion, and if the present study were to support it, that would provide a
useful contribution to both neuropsychological and cognitive theories of melody processing.
With respect to the former, it would provide an explanation of why ear-asymmetries have
been difficult to elicit in probe recognition tasks. By the same token, it would support the
notion that neuropsychological evidence constitutes a reliable database from which one can
develop new hypotheses. From a cognitive perspective, showing that contour contributes to
the parsing of melodic sequences allows substantiation of a novel aspect of its function in
music organization and, thus, may help to better understand why contour is so perceptually
salient in melodies.
In effect, the role of contour as a parsing device has largely been neglected in current
theories of how musical events are initially grouped together, although most consider this
level of music organization fundamental to music understanding. For example, LERDAHL
and JACKENDOFF [23], who have provided the most explicit formalization of how this basic
organization is derived from the musical surface, have not considered contour, or pitch
direction, as a grouping factor. In some models, such as the one presented by DEUTSCH and
FEROE [11], pitch directions have been considered. Yet, in subsequent work aimed at testing
their model, DEUTSCH [9] overlooked contour reversals and repetitions which, as noted by
JONES [22], may explain the observed facilitative effects of structured sequences over
unstructured ones in written recall performance. These contour features have been
controlled in more recent studies and have been found to facilitate perceiving and

remembering of melodies [4, 25]. To our knowledge, however, no study has directly
examined these contour features as the basis for melody chunking or melody part
recognition. The present research addresses this issue.
To specify where boundaries between contour-defined parts should be perceived in a
melody, we combined elements from these former studies and applied them to pitch
sequences that were equitemporal (that is, with pitches of equal duration). The boundaries
were created here at every three tones in a sequence of 12 tones by a contour reversal
supplemented by a contour repetition through transposition and a pitch skip. Adding the
latter factor was found necessary in order to specify with precision where a contour boundary
was to be heard in the melody. Indeed, depending on context or parsing idiosyncrasies, as
verified in several pilot studies, a boundary created by a contour reversal can be heard in two
different locations, such as between D and E or between E and D in the C-D-E-D-C sequence. Since
elimination of such ambiguities was indispensable in the present study, as will become clear
when task characteristics are described, pitch skips and contour repetitions were used here to
reinforce the boundaries created by a contour reversal. These melodies with contour-defined
parts served as stimuli in half of the experimental trials. In the remaining trials, melodies of
the same length but having a different partition or having no clear part structure served as
distractors.
The melodies were arranged so that a probe recognition task with monaural presentations
could be used, ensuring that the observed results could be directly related to prior work [3,
28, 39]. The preference for monaural over dichotic presentations was also motivated by the
fact that, when presented dichotically, computer-generated melodies may create spurious
effects that are unrelated to hemispheric asymmetries (see, e.g. Ref. [10]). Since ear effects
have already been observed with monaural presentations of musical stimuli [31, 351,
adopting the same procedure here was most appropriate. Thus, on each trial, subjects heard
monaurally a melody of 12 tones followed by a three-tone probe. Their task was to decide
accurately but quickly whether or not the probe was part of the preceding melody. The
critical manipulation, that makes the present study different from prior ones, consisted of
presenting true probes that either corresponded to one of the middle parts of the melody
(part probe) or that cut across these parts (across-part probe). Probes that corresponded to
contour-defined parts were expected to be easily recognized, for they derive from
representing the melodies according to the grouping rules. Across-
part probes, in contrast, should be very difficult to match because they violate the part
structure held in memory. In the latter case, a match will be eventually found by consulting
an alternate representation of the melody that is relatively unmarked by contour
characteristics, that is the one specifying the interval sequence independently of pitch
directions.
The neuropsychological concomitants of these predictions are relatively straightforward.
Assuming that the contour device lateralized to the RH is not merely confined to
drawing an outline of the entire melody but is actively involved in parsing it for later
recognition, a LEA on both accuracy and time measures should be obtained on part probes.
In contrast, the LH interval-based procedure would be the only way to recognize probes that
violate the contour-defined structure of the melody. However, this precise encoding of the
intervals is assumed to be initiated as soon as it receives elements from the contour parsing
performed in the RH, following the two-stage model suggested earlier on the basis of
pathological evidence [29]. On the across-part probes, then, a REA was predicted. Thus, in
contrast with former views that would expect the recognition of any part in a melody to be an

instance of analytic processing likely to involve left hemispheric structures, we predict that
both the LH interval-based procedure and the RH contour-based approach contribute to
melody organization, but that their respective efficiency will depend on the probe structure.

EXPERIMENT 1
Method
Subjects. Sixteen subjects* (eight men and eight women), without known hearing loss, were recruited from our Music
Department and among professional musicians. They were all strongly right-handed, as assessed by a French
version of OLDFIELD's [26] questionnaire, and their age ranged from 22 to 35 (mean = 27 years). All were trained
musicians; they had played an instrument for a minimum of 5 years (mean = 17 years) and had at least 5 years of
formal musical education (mean = 11 years). None had absolute pitch. They had never participated in a
psychological laboratory study and were not aware of the purpose of the present study.
Stimuli and equipment. The experimental stimuli were eight equitemporal pitch sequences arranged into four
groups on the pitch dimension. The main grouping principle, referred to here as a contour reversal, was applied to
the successive tones by adapting LERDAHL and JACKENDOFF's [23] rules to pitch direction characteristics. Thus, it
was expected that for a sequence of four tones, t1 t2 t3 t4, the transition t2-t3 will be heard as a boundary if it involves
a pitch direction change with respect to both the directions of t1-t2 and of t3-t4. The actual application of this
contour rule to the experimental stimuli followed two simple patterns: the pitch directions of the groups were either
all ascending (see A in Fig. 1) or all descending (see B in the same Figure). The second parameter, referred to here as
a pitch skip, comes directly from Lerdahl and Jackendoff's set of rules. This rule states that for a sequence of four
tones, t1 t2 t3 t4, the transition t2-t3 will be heard as a boundary if it involves a greater intervallic distance than both
t1-t2 and t3-t4. In the stimuli, the pitch skip was at least twice as large as any of the surrounding interval sizes in
terms of semitones (see Fig. 1). Since this type of boundary has been shown to be a weak determinant of grouping
when considered in isolation [2,28], its main use was to disambiguate, in a formal way, the location of the boundary
created by a contour reversal. The last grouping factor that aimed at emphasizing contour as the main structural
defining feature of the successive parts was contour repetition. This was achieved by transposing each part
delineated by a contour reversal and a pitch skip to a different pitch level, so that exact repetition did not occur.
Contour repetition can be considered as an instantiation of the parallelism and symmetry rules formulated by
Lerdahl and Jackendoff, since it created subdivisions of equal length (symmetry) having the same pitch directions
and containing similar interval sizes (parallelism).
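Read as a procedure, the two boundary rules can be sketched as follows (a minimal reading of the rules as stated above, not the authors' implementation; the 12-tone sequence is a made-up example in the spirit of the stimuli):

```python
def sign(x):
    return (x > 0) - (x < 0)

def boundaries(pitches):
    """Scan windows t1 t2 t3 t4 and return the transitions (index i,
    0-based, between tone i and tone i+1) heard as boundaries under
    the contour-reversal rule and under the pitch-skip rule."""
    reversal, skip = [], []
    for i in range(1, len(pitches) - 2):
        d1 = pitches[i] - pitches[i - 1]      # t1 -> t2
        d2 = pitches[i + 1] - pitches[i]      # t2 -> t3, candidate boundary
        d3 = pitches[i + 2] - pitches[i + 1]  # t3 -> t4
        if sign(d2) != sign(d1) and sign(d2) != sign(d3):
            reversal.append(i)                # direction change on both sides
        if abs(d2) > abs(d1) and abs(d2) > abs(d3):
            skip.append(i)                    # larger interval than both sides
    return reversal, skip

# Hypothetical 12-tone sequence: three-tone ascending groups, each group
# transposed, with a large descending step at every group edge.
seq = [60, 62, 64, 55, 57, 59, 50, 52, 54, 45, 47, 49]
assert boundaries(seq) == ([2, 5, 8], [2, 5, 8])
```

In such a sequence the two cues coincide, so either rule alone marks the same boundaries, one after every third tone, as in the experimental stimuli.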
The choice of Lerdahl and Jackendoff’s formalism was motivated because it defines rules independently of any
particular code attributed to pitch values. In order to control for potential influence of tonal interpretation on the
effects of the grouping rules, half of the eight experimental sequences were written in accordance with the Western
tonal system, and the remaining ones were tonally ambiguous. The tonal sequences involved pitches from the same
major diatonic scale (either C or G major) that did not disconfirm the first tone as the tonal center (see A in Fig. 1).
The nontonal sequences were highly ambiguous in this respect (see B in Fig. 1). The ratings provided by eight
musicians, who did not participate in the present study, confirmed this classification: on a five-point scale where
1 meant clearly tonal and 5 clearly non-tonal, the tonal sequences were rated with a mean of 1.7 and the non-tonal
sequences with a mean of 3.6.
Eight distractor sequences were also created in order to avoid strategic biases. Four of these sequences were made
of three parts, each containing four tones, according to the same grouping principles as the ones used for the
experimental sequences (see C in Fig. 1). The other four distractor sequences were created so that they did not lend
themselves to any consistent internal grouping (see D in Fig. 1). Verification of these structural manipulations on
subjects' segmentation was done through several pilot studies, in which subjects were required to indicate what were
the "natural" parts of the melodies by partitioning a visuo-spatial array (as in Ref. [28]). For the final set of stimuli
used here, 95.0% of subjects’ segmentations coincided with the theoretical boundaries for the structured distractor
sequences and 97.5% for the experimental sequences. The unstructured distractor sequences failed to motivate
segmentation by three of the eight subjects; the others segmented the sequences arbitrarily (responses were
distributed across eight of the eleven locations without any apparent consistency).
The eight experimental sequences and the eight distractor sequences served equally often as target melodies in a
set of 160 trials, preceded by eight practice trials. Each trial consisted of a warning signal that preceded a target
melody which, in turn, was followed after a 2-sec silent interval by a probe. Of the 160 such trials, 80 were created by
pairing a target melody with a true probe and 80 consisted of a target melody paired with a false probe. All probes

*Initially, nonmusicians were also recruited. Their data had to be discarded from the present study, because their
performance was at chance on the across-part probes (45% correct responses) and, above all, because they showed
no sign of improvement through the experimental session.

[Musical notation of the four stimulus types appears here, annotated with contour signs, interval sizes in semitones, and marked part boundaries.]

Fig. 1. Musical notation, contour and interval characteristics of the stimuli. Part boundaries are
marked at each serial position at which a contour reversal reinforced by a contour repetition and a
pitch skip occurred, for each type of experimental melody (A and B) and distractor melody (C
and D).

contained three tones. The false probes were made up of a unique three-tone series that did not occur in the target
melody but which was matched with the true probes for contour and pitch skips. For each of the eight experimental
sequences, the true probes were taken from four locations around the central boundary. As illustrated in Fig. 2, this
procedure generated two types of true probes: part probes that contained tones inside boundaries and across-part
probes that comprised tones from both sides of the boundary. Thus, four different true probes were associated with
each experimental target, resulting in a total of 32 different trials presented twice, once to each ear.
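The slicing of true probes around the central boundary can be sketched as follows (offsets and pitch values are illustrative only; the actual locations are those shown in Fig. 2):

```python
def three_tone_probes(melody, boundary):
    """Slice three-tone probes around a boundary located between
    melody[boundary-1] and melody[boundary] (0-based). Part probes sit
    wholly within a part; across-part probes straddle the boundary."""
    part_probes = [melody[boundary - 3:boundary],        # part ending at the boundary
                   melody[boundary:boundary + 3]]        # part starting at the boundary
    across_probes = [melody[boundary - 2:boundary + 1],  # two tones before, one after
                     melody[boundary - 1:boundary + 2]]  # one tone before, two after
    return part_probes, across_probes

seq = [60, 62, 64, 55, 57, 59, 50, 52, 54, 45, 47, 49]  # hypothetical stimulus
part, across = three_tone_probes(seq, 6)  # central boundary after the 6th tone
assert part == [[55, 57, 59], [50, 52, 54]]
assert across == [[57, 59, 50], [59, 50, 52]]
```

Each part probe reproduces a whole three-tone part, whereas each across-part probe mixes tones from the two parts flanking the boundary.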
It should be noted that the experimental probes were located in the middle of the tunes, in a way that minimized
primacy as well as recency effects. Such well-known effects have been shown to exist in tone recognition [24] and
might have biased the results in favor of the hypotheses, since part probes (locations 1 and 2) occur in the earliest and
latest portions of the sequence. This control measure might, however, encourage subjects to focus their attention on
the central portion of the experimental target melodies, and thus to counteract the expected grouping effect. In order

[A sample melody with its central boundary and the four probe locations appears here.]
Fig. 2. The experimental probes were taken from four locations. Locations 1 and 2 correspond to part
probes and locations 3 and 4 to across-part probes with respect to the central boundary.

to prevent subjects from adopting such a strategy, the true probes taken from the distractor melodies were chosen
with equal frequency from the beginning and the end portions of the sequences.
Presentation of each combination of target melody and probe pair was pseudo-randomized in eight blocks of 20
trials each, so that each block contained an equal number of “yes” and “no” trials and that two successive blocks
contained the same trials but in a different order. This particular arrangement allows between-ear comparisons per
two blocks of trials, since ear of presentation alternates every block.
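The block arrangement can be sketched as follows (a reconstruction of the design as described, not the original presentation script; trial labels, the seed, and the function name are ours):

```python
import random

def arrange_blocks(yes_trials, no_trials, first_ear="left", seed=0):
    """Arrange 40 'yes' and 40 'no' melody-probe pairings into 8 blocks
    of 20 trials: each block holds 10 'yes' and 10 'no' trials, two
    consecutive blocks contain the same 20 trials in a different order,
    and the ear of presentation alternates every block, so each pairing
    is heard once per ear (160 trials in all)."""
    assert len(yes_trials) == 40 and len(no_trials) == 40
    other = "right" if first_ear == "left" else "left"
    rng = random.Random(seed)
    blocks = []
    for pair in range(4):                       # 4 pairs of blocks
        chunk = (yes_trials[pair * 10:(pair + 1) * 10] +
                 no_trials[pair * 10:(pair + 1) * 10])
        for b in range(2):                      # same 20 trials, two orders
            order = chunk[:]
            rng.shuffle(order)
            ear = first_ear if (pair * 2 + b) % 2 == 0 else other
            blocks.append((ear, order))
    return blocks
```

Because the two blocks of each pair hold the same trials under different ears, between-ear comparisons are possible within every pair, as noted above.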
The stimuli were all produced at the same tempo, with each tone lasting 600 msec and simulating a piano timbre.
They were generated on an IBM AT compatible microcomputer operating a Yamaha TX-81Z synthesizer. The
analog output was recorded with a Revox B77 stereo tape-recorder, which was also used to play back the tapes
during the experimental sessions. Subjects listened via Uher W770 earphones. The test material was recorded on one
channel of the tapes. On the other channel, which was not delivered to the subject, a high pitch tone signalling probe
onset was recorded in order for the computer to register accuracy and latency of each response. Feedback was
provided to the subject by switching a red light on whenever the response was inaccurate.
Procedure. Each subject was tested individually. On each trial, subjects were instructed to decide as accurately
and as quickly as possible whether or not the probe was part of the target melody. No information about the stimuli
was provided. Ear of presentation changed after each block of 20 trials. Half of the subjects started with the left ear
while the other half with the right ear. Subjects responded by moving a two-way switch either away from or towards
themselves. Response hand was alternated every 40 trials and was counterbalanced in each group defined by a
particular ear-track order and response direction.

Results and comments


Since the hypotheses bear on the lateralized performance obtained in the recognition of the
true probes as a function of their structure, only the data for “yes” responses to the
experimental trials (i.e. the trials with four-part melodies) were subjected to analyses. On
these responses, preliminary analyses of variance were performed in order to examine
potential effects of two secondary variables: sex and responding hand. Since neither was
found to have an effect that interacted with ear of presentation, the results were combined
over these two factors in subsequent analyses.
The error data and the median response times (for correct responses) were then analyzed
separately as a function of probe type and ear of presentation, both within-subjects factors. A
very large effect of probe was obtained on both measures [F (1, 15) = 27.6 and 40.5, for
accuracy and latency, respectively, P < 0.001]. In accordance with our predictions, subjects
had little difficulty recognizing part probes; they made on average 11.9% of errors and
responded in 273 msec when measured from probe offset. However, subjects were far worse
than expected for matching the across-part probes. They performed close to chance (with
42.8% of errors and 504 msec); 11 of the 16 subjects had error rates that did not differ
significantly (P > 0.05) from the chance level of 50% by a binomial test. The fact that subjects
performed close to ceiling level on probes that corresponded to contour-defined parts of the
melody and at chance level on probes that violated that structure demonstrates a drastic

effect of the grouping principles studied. These conditions are, however, inappropriate for
yielding reliable ear effects on either measure. Such effects were found to be negligible (the ear
by probe interaction yielded an F (1, 15) of 2.0 on accuracy and an F < 1 on latency).
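The chance-level criterion used above can be reproduced with a two-sided exact binomial test. The sketch below is generic; the trial counts in the final assertions are illustrative and are not the subjects' actual data.

```python
from math import comb

def binomial_two_sided_p(k, n, p=0.5):
    """Two-sided exact binomial test: sum the probabilities of every
    outcome no more likely than the observed count k out of n."""
    pmf = [comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(n + 1)]
    return sum(q for q in pmf if q <= pmf[k] + 1e-12)

# Illustrative counts (not the subjects' data): out of 32 across-part
# trials, 17 errors cannot be told apart from guessing at 50% ...
assert binomial_two_sided_p(17, 32) > 0.05
# ... whereas 4 errors out of 32 clearly can.
assert binomial_two_sided_p(4, 32) < 0.05
```

A subject is thus classed as "at chance" whenever the observed error count is not improbable under Binomial(n, 0.5).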
Since subjects received continuous feedback on accuracy during the experimental session,
they might have adjusted their approach to optimize performance, in particular on the
across-part probes. To assess this possibility, the data were divided equally into the first and
the second half of the test. As can be seen in Fig. 3, the results were consistent with the idea

[Figure 3 appears here: error rates (%) for part and across-part probes, plotted separately for the first and second half (Half 1, Half 2) of the test.]

Fig. 3. Subjects made a yes/no decision about a probe presented after a melody in Experiment 1.
Probes that fell within boundaries were more accurately recognized than those that bridged across the
boundary. The latter were at chance level in the first half of the test material; performance improved
substantially in the second half, on which subjects exhibited the expected REA on the across-part
probes and LEA for the part probes.

that a floor effect had obscured evidence for the predicted ear-asymmetry pattern. In the first
half, across-part probe recognition was at chance (with 51.5% of errors), and ear differences
were unreliable (the ear factor yielded an F (1, 15) of 1.0, and the ear by probe type interaction
an F<1). In the second half, however, performance on across-part probes improved
substantially (with a little less than 34% of errors, F (1, 15)=20.54, P<0.001) and yielded
the expected REA (F (1, 15)=4.3, P<0.06). In contrast, part probe recognition was as good
in the second half as in the first half (F<1; the interaction between probe type and half was
significant, F (1, 15)=9.74, P<0.01) and gave rise to a tendency toward a LEA
[F (1, 15)=2.1; the interaction between ear and probe reached significance on the second
half, F (1, 15)=7.1, P<0.05]. The shift in laterality pattern between the first and the second
half of the test as a function of probe type was close to significance [F (1, 15)=3.6, P<0.08].
A similar analysis on the response times could not be computed because there were too few
data per cell.
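The cell-wise analysis described above (median correct response times and error rates per ear by probe-type cell, within one half of the test) can be sketched as follows. The trial tuples are invented for illustration, not taken from the actual data.

```python
from statistics import median
from collections import defaultdict

# Hypothetical trial records: (half, ear, probe_type, correct, rt_ms).
trials = [
    (1, "left", "part", True, 255), (1, "right", "part", True, 280),
    (2, "left", "across", True, 470), (2, "right", "across", False, 610),
    (2, "right", "across", True, 430), (2, "left", "part", True, 250),
]

def cell_stats(trials, half):
    """Median correct RT and error rate for each (ear, probe) cell of one half."""
    rts = defaultdict(list)      # correct-response RTs per cell
    errors = defaultdict(int)    # error count per cell
    counts = defaultdict(int)    # trial count per cell
    for h, ear, probe, correct, rt in trials:
        if h != half:
            continue
        counts[(ear, probe)] += 1
        if correct:
            rts[(ear, probe)].append(rt)
        else:
            errors[(ear, probe)] += 1
    return {cell: {"median_rt": median(rts[cell]) if rts[cell] else None,
                   "error_rate": errors[cell] / counts[cell]}
            for cell in counts}

stats = cell_stats(trials, half=2)
# e.g. stats[("right", "across")] -> {"median_rt": 430, "error_rate": 0.5}
```

With few trials per cell, some cells may contain no correct responses at all (here `median_rt` is then `None`), which is the "too few data per cell" problem noted above.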
Thus, support for our hypotheses was obtained once subjects could perform at better-
than-chance levels. Subjects recognized the across-part probes more accurately in the right
ear than in the left one, and exhibited the reverse pattern on the part probes. What remains to

be specified is the nature of the processing adjustment that allowed subjects to perform above
chance on the across-part probes after some exposure to the task. One possibility which
derives directly from the theoretical context of the present study is that subjects relied at first
predominantly on the contour characteristics of melodic parts and that they realized after
making many errors, but not necessarily explicitly, that this approach was not sufficient to
accomplish the task. Thus, it is only after being confronted with their “misses” that the
subjects would have paid more attention to the intervallic structure.
One way to improve precise interval encoding is to exploit the knowledge about musical
scale structure. Indeed, it is a well-established fact that recognition of tones in a tonal
sequence is advantageous because the embedded pitches or intervals can be more accurately
encoded than a collection of non-scale tones that fail to activate a musical scale structure [8,
12, 20]. In the present study, the use of tonal constraints should be apparent in yielding a
superior matching performance for probes in a tonal context over the ones in a nontonal
context. Moreover, this effect would be expected to emerge on the second half of the
experiment where subjects showed signs of improvement on the across-part probe
recognition. The analysis of the data provided partial support for this hypothesis. The
expected superiority of tonal trials over nontonal ones was indeed confined to the second half
of the material, with an advantage of 11.5% of correct responses [F (1, 15) = 9.4, P<0.01]
and of 101 msec; probes were not better matched in a tonal context than in a nontonal one in
the first half of the material (F< 1). The interaction between tonal context and half was
significant on accuracy measures [F (1, 15)=4.5, P<0.05]. However, there was no
interaction with probe type (F<1), thus suggesting that tonal encoding of pitch was not
limited to the matching of across-part probes. Nevertheless, the emergence of a tonal
organization effect on the second half of the material supports the notion that prior
experience with the task may yield qualitative changes rather than merely quantitative ones.
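What "activating a musical scale structure" could mean computationally can be sketched in a few lines. This is a minimal illustration assuming major-scale membership of pitch classes; the actual stimuli and scale definitions used in the experiments are not specified here.

```python
# Semitone classes of a major scale relative to its tonic (an assumption
# made for illustration; the experiments' tonal materials are not given here).
MAJOR_SCALE = {0, 2, 4, 5, 7, 9, 11}

def fits_major_scale(pitches, tonic):
    """True if every pitch (MIDI number) belongs to the major scale on `tonic`."""
    return all((p - tonic) % 12 in MAJOR_SCALE for p in pitches)

tonal = [60, 64, 62, 67, 65]       # C, E, D, G, F: all within C major
nontonal = [60, 63, 66, 61, 68]    # contains out-of-scale pitch classes

print(fits_major_scale(tonal, 60))     # True
print(fits_major_scale(nontonal, 60))  # False
```

The idea is that pitches of a tonal sequence can be encoded relative to a small, overlearned set of scale degrees, whereas a nontonal collection offers no such reference frame.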
Yet, support for the hypotheses was undermined by the fact that it was obtained on only
half the material, thus substantially reducing the number of observations for each factor
considered. To remedy these limitations, while keeping all task characteristics constant,
we ran a second experiment in which a new group of subjects was pre-exposed to the test in a
preliminary session. It was expected that this procedure would enable the subjects to perform
well above chance and display clear evidence of ear differential sensitivity to probe structure
on the entire set of trials in the second session, particularly on response times.

EXPERIMENT 2
Method
Sixteen subjects (eight men and eight women), whose ages ranged from 19 to 32 (mean = 25 years), were selected
with the same criteria as in Experiment 1. They had not participated, as pilot or experimental subjects, in any of our
former experiments. They had played an instrument for at least 5 years (mean = 10 years) and had received formal
training for at least 5 years (mean = 7 years). They were presented with the same material as the one used in
Experiment 1. As mentioned earlier, the only difference between the present experiment and the former one is that
here subjects were familiarized with the test in a preliminary session, occurring the day before the experimental
session. They were tested under the same conditions on both occasions. At the beginning of the experimental session
they were told that they would have to perform the same kind of test as on the day before, but that this time they
should respond more quickly and more accurately than previously. No information was provided about the
construction of the material in either session.

Results and comments


In the training session, the subjects encountered the same difficulty in matching
across-part probes as the subjects of Experiment 1. Twelve out of 16 subjects did not perform

significantly better than chance by a binomial test. In the experimental session, however,
subjects reached a reasonable level of performance with an error rate of 29.3% on the across-
part probes. In view of this performance level and the fact that the subjects made very few
errors on the part probes (10.2% of errors), a greater emphasis will be placed on response
times as the dependent variable.
Thus, for the “yes” responses obtained on the experimental trials of the second session, the
median correct response times and the error scores were analyzed separately as a function of
probe type and ear of presentation. As can be seen in Fig. 4, a LEA emerged on the part
probes, whereas a REA emerged on the across-part probes. This ear by probe interaction was
in the same direction on the response times as on the accuracy measures, although it was
found to be statistically significant only on the response times [F (1, 15)=9.31; P<0.01].

[Figure 4 appears here: response times in msec (top panel) and error rates in % (bottom panel) for part and across-part probes.]


Fig. 4. In Experiment 2, subjects performed the same task as in Experiment 1 but after having been
exposed to the material in a preliminary session. They were faster (top) and made fewer errors
(bottom) for probes that fell within boundaries than for those that crossed the boundaries. Moreover,
on the response times (top), they exhibited a LEA for matching part probes and a REA for recognizing
the across-part probes.

The observation that the expected interaction between ear of presentation and probe
structure was obtained here on response times should be related to the fact that latencies of
responses are more reliable the more practice subjects are given and the more measurements
on which the estimates are based. The fact that the pattern of ear-asymmetries is identical to

the one obtained in Experiment 1, which was observed on half the material and on accuracy
measures, greatly increases the confidence that laterality depends on probe structure. In this
respect, it should be emphasized that the effect of probe structure was still present, despite
prior familiarization with the task: part probes were still matched far more easily than across-
part probes [F (1, 15) = 29.18 and 13.18, P < 0.005, for latency and accuracy, respectively].
Thus, both the probe structure effect and the laterality pattern appear to be robust
phenomena.
Given that the present results replicated those of Experiment 1 on twice as many trials, it
was worth re-examining here the effect of tonal organization on the ease with which the two
kinds of probes were matched in their appropriate context. Thus, the data were further
analyzed as a function of tonal context, probe type and ear of presentation. Since ear of
presentation was not found to interact with tonal context on either measure, the data
combined over this factor are presented in Fig. 5. As can be seen, tonal context facilitated
only the across-part probe recognition [F (1, 15) = 12.4, P<0.005]; it did not influence
recognition of part probes (F<1). The interaction between probe type and tonal context was
significant [F (1, 15) = 6.4, P<0.01]. All these effects were obtained on the accuracy
measures. The response times showed trends consistent with these effects but failed to reach
statistical significance.

[Figure 5 appears here: response times in msec (top panel) and error rates in % (bottom panel) for part and across-part probes, in tonal vs nontonal contexts.]


Fig. 5. In Experiment 2, subjects recognized more accurately (bottom) but not faster (top) probes that
crossed the boundary in a tonal context than in a nontonal context, while they were not influenced by
tonal structure for matching probes that fell within boundaries.

These effects of tonal context, although obtained on a single dependent variable, strongly
support the notion that matching the two kinds of probe requires consultation of different
representations of the melody. Given that tonal encoding of intervals or individual pitches
enhances their memorability and given that evidence for the use of such code was obtained on
the recognition of across-part probes and not of part probes, the results are consistent with
the hypothesis that across-part probes require examination of the interval structure of the
melody. Conversely, the observation that part probe recognition was barely affected by the
availability of a tonal structure for encoding the successive tones is highly consistent with the
claim that these probes do not require examination of interval structure for they can be
directly matched with the contour-defined parts of the melody that are held in short-term
memory.
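The two candidate representations can be made explicit in a few lines. The pitches, and the "same-contour, different-intervals" lure below, are invented for illustration; the sketch only shows why a contour code accepts a probe that an interval code rejects.

```python
def contour(pitches):
    """Direction of each successive step: +1 up, -1 down, 0 repeat."""
    return [(b > a) - (b < a) for a, b in zip(pitches, pitches[1:])]

def intervals(pitches):
    """Exact successive intervals in semitones."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

melody = [60, 64, 62, 67, 65]  # hypothetical melody part (MIDI numbers)
lure = [60, 65, 62, 68, 64]    # same ups and downs, different interval sizes

print(contour(lure) == contour(melody))      # True: a contour code matches
print(intervals(lure) == intervals(melody))  # False: an interval code does not
```

A probe recognizer consulting only the contour code would accept the lure; one consulting the interval code would reject it, which is the dissociation the two-component model exploits.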
Although consistent with the hypotheses, the effects of tonal context and of laterality both
reached significance on only one of the two dependent variables considered. Accuracy was
found to be most sensitive to tonal context while latency was most sensitive to ear-
asymmetries. Yet, in neither case were the patterns on the two measures in a trade-off
relationship, nor were the values so extreme as to suggest that ceiling or floor effects had
obscured the data, as was partially the case in Experiment 1. Thus, one may consider these
slight discrepancies as indicators that tonal organization and ear-asymmetries are reliable
but weak determinants of probe recognition scores, particularly in comparison with the
drastic effect of probe location with respect to the contour-defined boundary. This is a
common observation in laterality research. There is no simple explanation for this fact and
hence, there is no clear solution for its remediation. Confidence in the data stems primarily
from the fact that laterality effects are consistent and recurrent.

GENERAL DISCUSSION
The results of the present study provide new evidence that the cerebral hemispheres
operate on two different representations of sequential pitch patterns: one that assigns part
structure and one that specifies the component elements. Support for this functional
dissociation was obtained by the observation of distinctive effects of probe location, tonal
structure and ear-asymmetries on the ease with which probes were matched in their
appropriate melodic context. These findings will first be discussed in relation to the
functional distinction that they highlight, while implications for the cognitive and
neuropsychological study of melody processing will be addressed later.
The basic finding was the observation that the part structure of the melody had a very
strong impact on the later recognition of some of its tone subsets. When probes corresponded
to a part of the melody, they were very easily recognized. In contrast, when the probe
contained tones that violated this part organization, by straddling the boundary between
two adjacent parts of the melody, performance dropped to chance level. This drastic effect of
probe location was found to be a robust phenomenon, since it was not eliminated when
subjects’ performance improved through familiarization with the task, either in the second
half of the experimental session (Experiment 1) or in the repetition of the whole session
(Experiment 2). This main effect attests clearly to the psychological relevance of the
contour-defining rules used to impose partitioning of the target melody. Given these rules for
combining the successive pitches in a melody, it seems relatively easy and possibly
mandatory to construct a multi-part representation. What appears rather difficult is to go in
the reverse direction and discover the elements from which the melody has been constructed.

That the melody parts are retained as contour-defined wholes and that violation of this
integrity requires consideration of the intervallic structure of the melody is supported by the
selective effect of tonal organization obtained and, above all, by the specific pattern of ear-
asymmetries observed. Tonal conformity of the pitch set used in the melodic patterns was
found to facilitate performance, but only under certain conditions. Tonal constraints
appeared to be considered only for matching the across-part probes in Experiment 2 and
only on the second half of the trials, in Experiment 1. The fact that exploitation of tonal
constraints emerged in conditions where subjects were able to match some of the probes that
violated the part structure of the melody is highly consistent with the notion that across-part
probes require a greater degree of consideration of individual pitches or intervals. The
observation that tonal organization had little impact on the ease with which part probes were
matched argues for the idea that recognition of these probes is not drawn by an interval-
based description of the melody, but could be achieved on a different kind of representation.
The most decisive argument for considering that part probes and across-part probes were
recognized by consulting different representations of the same melody arises, however, from
the double dissociation observed between laterality and probe type.
As we hypothesized, part probes were found to elicit a LEA, taken to reflect a RH
predominance, whereas across-part probes elicited a REA, or LH predominance. This
interaction between ear-asymmetry and probe type is consistent with former data showing
that the RH operates on a contour-based and the LH on an interval-based representation of
the same melody. This new pattern of ear-asymmetry can thus be viewed as an extension of
the functional distinction reported previously at the level of the whole melody and the
individual tones to the processing of melody parts. Given the stimuli and task used in the
present experiments as well as in prior ones, at least three levels of structural units seem
relevant for distinguishing the cerebral hemispheres: the whole melody, the parts, and the
individual tones. With more complex musical passages, many more levels may be required.
Presently, the interval vs contour distinction studied here may be conceived as holding at
each of the first two of these hierarchical levels.
One might argue that contour characteristics were not the sole partitioning factors in the
present study. Indeed, in order to disambiguate boundaries between contour-defined parts,
the principle of pitch proximity was exploited as well. Therefore, part representations of the
melodies might contain markers of pitch proximity alongside contour ones. And, similarly, the
hemispheres might be regarded as being differentially sensitive to pitch interval sizes, with the
left hemisphere being superior for large skips and the right one for small intervals. Although a
definite answer to this question will have to await further study, there are empirical reasons
to consider contour as the main, and probably the only, parsing device that was effective here
on the pitch dimension. In effect, boundaries marked only by pitch proximity factors have
repeatedly failed to motivate segmentation in tasks similar to the one used here [2,28].
Moreover, if pitch proximity were indeed effective for inducing part formation, we should
have observed a larger grouping effect for the tonal sequences in the present study than for
the nontonal ones, since the former contained smaller pitch skips than the latter. However,
when a performance difference was noted between these two types of pitch sequences, it was
in the reverse direction with tonal sequences yielding a less marked probe location effect than
the nontonal ones.
Therefore, contour appears as the best, if not the only, determinant of part structure of
melodies varying on the pitch dimension. This suggestion has important implications for the
cognitive study of pitch structure processing and remembering, for it highlights the role of

contour as a parsing or chunking device. By viewing pitch directions as essential cues for
grouping adjacent tones into structural units, one understands better why contour is so
salient and sometimes misleading in short-term retention [16, 37]. However, some authors
have pointed out recently another function of contour cues [4, 17, 25, 38]. In their view,
contour reversals create melodic accents, that is points of greater perceptual salience. In fact,
we do not see any incongruity between these two views. Rather, we conceive contour-induced
accents as a by-product of the contour grouping operation, much in the same way as POVEL
and ESSENS [34] conceive temporal accents as deriving from grouping adjacent tones on the
temporal dimension. Thus, considering contour as a grouping factor allows integration of
various accounts of its perceptual effects within a single construct.
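The grouping operation discussed here can be sketched as segmentation at contour reversals. How boundary tones are assigned (here, shared between adjacent parts) is an assumption made for illustration; the paper's actual part-defining rules also exploited pitch proximity.

```python
def parts_at_reversals(pitches):
    """Split a pitch sequence into parts wherever the pitch direction
    reverses sign; the boundary tone is shared between adjacent parts."""
    if len(pitches) < 3:
        return [list(pitches)]
    parts, start = [], 0
    for i in range(1, len(pitches) - 1):
        before = pitches[i] - pitches[i - 1]
        after = pitches[i + 1] - pitches[i]
        if before * after < 0:  # contour reversal at tone i
            parts.append(pitches[start:i + 1])
            start = i
    parts.append(pitches[start:])
    return parts

# Hypothetical six-tone melody: rise, fall, rise -> three contour-defined parts.
print(parts_at_reversals([60, 62, 64, 61, 59, 63]))
# [[60, 62, 64], [64, 61, 59], [59, 63]]
```

On this view, the tones at reversal points are both part boundaries and points of perceptual salience, which is how contour-induced accents fall out of the grouping operation.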
In general, the above propositions were treated as if they apply to both nonmusicians and
musicians. However, the present findings do not allow such a generalization, since only
musicians’ data were interpretable and thus reported. It is our contention, however, that
similar conclusions would be drawn with nonmusicians, provided that task demands were
better tailored to these subjects’ limited listening flexibility. It should be noted that
performance was considered interpretable to the extent that subjects achieved both
formation and decomposition of melody parts above chance. The latter requirement
necessitated prolonged exposure to the task in order for musicians to perform better than
chance, probably because part decomposition is not a natural procedure for organizing
music. For nonmusicians to achieve this sort of reverse engineering, the procedure may
require explicit tutoring, as we suggested in our earlier work [33]. Such a hypothesis should
be tested in future work. Thus, for the present, we cannot assert that the present findings
generalize to all listeners, although the available evidence makes it very likely.
In summary, the present study provided further evidence consistent with the hypothesis
that the left and right cerebral hemispheres compute two different kinds of pitch-relation
representations, which are well captured by a contour-based vs interval-based distinction.
Furthermore, by documenting this distinction with a probe recognition task, the present
investigation establishes new conditions under which the two processes may be separated. It
is probably this latter information which has the most utility in the understanding of the
organization of music processes.

Acknowledgements: This research was supported by a research grant and fellowship from the Natural Sciences and
Engineering Research Council of Canada to the first author. We thank Serge Larochelle and Sylvie Belleville for
helpful comments on a previous draft of this article.

REFERENCES
1. BARTLETT, J. and DOWLING, W. Recognition of transposed melodies: A key-distance effect in developmental perspective. J. exp. Psychol.: Hum. Percept. Perform. 6, 501-515, 1980.
2. BERGER, C. Différences latérales et reconnaissance d'extraits. Unpublished Master's thesis. Free University of Brussels, Brussels, 1986.
3. BEVER, T. and CHIARELLO, R. Cerebral dominance in musicians and nonmusicians. Science 185, 537-539, 1974.
4. BOLTZ, M. and JONES, M. R. Does rule recursion make melodies easier to reproduce? If not, what does? Cognitive Psychology 18, 389-431, 1986.
5. BRADSHAW, J. and NETTLETON, N. The nature of hemispheric specialization in man. Behav. Brain Sci. 4, 51-91, 1981.
6. COHEN, G. Theoretical interpretation of lateral asymmetries. In Divided Visual Field Studies of Cerebral Organization, J. BEAUMONT (Editor). Academic Press, London, 1982.
7. CUDDY, L. and COHEN, A. Recognition of transposed melodic sequences. Q. J. exp. Psychol. 28, 255-270, 1976.
8. CUDDY, L., COHEN, A. and MILLER, J. Melody recognition: The experimental application of musical rules. Can. J. Psychol. 33, 148-157, 1979.
9. DEUTSCH, D. The processing of structured and unstructured tonal sequences. Percept. Psychophys. 28, 381-389, 1980.
10. DEUTSCH, D. Dichotic listening to melodic patterns and its relationship to hemispheric specialization of function. Music Percept. 3, 127-154, 1985.
11. DEUTSCH, D. and FEROE, J. The internal representation of pitch sequences in tonal music. Psychol. Rev. 88, 503-522, 1981.
12. DEWAR, K., CUDDY, L. and MEWHORT, D. Recognition memory for single tones with and without context. J. exp. Psychol.: Hum. Learn. Mem. 3, 60-67, 1977.
13. DOWLING, W. Scale and contour: Two components of a theory of memory for melodies. Psychol. Rev. 85, 341-354, 1978.
14. DOWLING, W. Melodic information processing and its development. In The Psychology of Music, D. DEUTSCH (Editor). Academic Press, New York, 1982.
15. DOWLING, W. and FUJITANI, D. Contour, interval and pitch recognition in memory for melodies. J. acoust. Soc. Am. 49, 524-531, 1971.
16. DOWLING, W. and HARWOOD, D. Music Cognition. Series in cognition and perception. Academic Press, New York, 1986.
17. DYSON, M. and WATKINS, A. A figural approach to the role of melodic contour in melody recognition. Percept. Psychophys. 35, 477-488, 1984.
18. EDWORTHY, J. Melodic contour and musical structure. In Musical Structure and Cognition, P. HOWELL, I. CROSS and R. WEST (Editors). Academic Press, London, 1985.
19. EDWORTHY, J. Interval and contour in melody processing. Music Percept. 2, 375-388, 1985.
20. FRANCÈS, R. La Perception de la Musique. Vrin, Paris, 1972.
21. GATES, A. and BRADSHAW, J. Music perception and cerebral asymmetries. Cortex 13, 390-401, 1977.
22. JONES, M. R. A tutorial on some issues and methods in serial pattern research. Percept. Psychophys. 30, 492-504, 1981.
23. LERDAHL, F. and JACKENDOFF, R. A Generative Theory of Tonal Music. M.I.T. Press, Cambridge, 1983.
24. LESHOWITZ, B. and HANZI, R. Serial position effects for tonal stimuli. Mem. Cognit. 2, 112-116, 1974.
25. MONAHAN, C., KENDALL, R. and CARTERETTE, E. The effect of melodic and temporal contour on recognition memory for pitch change. Percept. Psychophys. 41, 576-600, 1987.
26. OLDFIELD, R. Handedness in musicians. Br. J. Psychol. 60, 91-99, 1969.
27. PERETZ, I. Shifting ear-asymmetry in melody comparison through transposition. Cortex 23, 317-323, 1987.
28. PERETZ, I. Clustering determinants of music: An appraisal of task factors. Int. J. Psychol. 24, 157-178, 1989.
29. PERETZ, I. Processing of local and global musical information in unilateral brain-damaged patients. Brain 113,
1185-1205, 1990.
30. PERETZ, I. and MORAIS, J. Modes of processing melodies and ear-asymmetry in nonmusicians. Neuropsychologia 20, 477-489, 1980.
31. PERETZ, I. and MORAIS, J. Analytic processing in the classification of melodies as same or different. Neuropsychologia 25, 645-652, 1987.
32. PERETZ, I. and MORAIS, J. Determinants of laterality for music: Towards an information processing account. In Handbook of Dichotic Listening: Theory, Methods and Research, K. HUGDAHL (Editor). Wiley, New York, 1988.
33. PERETZ, I., MORAIS, J. and BERTELSON, P. Shifting ear differences in melody recognition through strategy inducement. Brain Cognit. 6, 202-215, 1987.
34. POVEL, D. and ESSENS, P. Perception of temporal patterns. Music Percept. 2, 411-440, 1985.
35. RASTATTER, M. and GALLAHER, A. Reaction-times of normal subjects to monaurally presented verbal and tonal stimuli. Neuropsychologia 20, 465-473, 1982.
36. SERGENT, J. The role of input in visual hemispheric asymmetries. Psychol. Bull. 93, 481-514, 1983.
37. SLOBODA, J. The Musical Mind: The Cognitive Psychology of Music. Oxford University Press, London, 1985.
38. THOMASSEN, J. Melodic accent: Experiments and a tentative model. J. acoust. Soc. Am. 71, 1596-1605, 1982.
39. WAGNER, M. and HANNON, R. Hemispheric asymmetries in faculty and student musicians and nonmusicians during melody recognition tasks. Brain Lang. 13, 379-388, 1981.
40. WATKINS, A. and DYSON, M. On the perceptual organization of tone sequences and melodies. In Musical
Structure and Cognition, P. HOWELL, I. CROSS and R. WEST (Editors). Academic Press, London, 1985.
