
The Psychological Record, 2013, 63, 409–426

Auditory Stimulus Equivalence and Non-Arbitrary Relations
Ian Stewart and Niamh Lavelle
National University of Ireland, Galway

This study extended previous research on stimulus equivalence with all auditory
stimuli by using a methodology more similar to conventional match-to-sample
training and testing for three 3-member equivalence relations. In addition,
it examined the effect of conflicting non-arbitrary relations on auditory
equivalence. Three conditions (n=11 participants each) were trained and tested
for formation of equivalence using recorded auditory nonsense syllable stimuli. In
the Same Voice (SV) condition, participants were exposed to stimuli pronounced
by the same voice in training and testing. For the Different Voice Test (DVT)
condition, in training, stimuli were all pronounced by the same voice, while in
testing, they were pronounced by three different voices, with the sample always
in a different voice from the equivalent comparisons. This established potentially
competing sources of stimulus control, since participants might respond either
in accordance with non-arbitrary auditory relations or with equivalence. In the
third condition (Different Voice; DV), participants were given testing identical to
the DVT condition but were trained with stimuli pronounced by different voices,
such that voice was unrelated to the programmed contingencies. As predicted,
the DVT condition produced less equivalence responding and more non-arbitrary matching than the DV condition. These data are broadly consistent
with previous findings with visual stimuli.
Key words: stimulus equivalence, auditory stimuli, non-arbitrary relations,
interference, humans
Stimulus equivalence is perhaps the most well known example of the phenomenon of
derived or emergent stimulus relations. In a typical stimulus equivalence preparation, match-to-sample (MTS) training in a series of appropriately related conditional discriminations
results in a set of derived performances characterized by reflexivity, symmetry, and
transitivity (Sidman & Tailby, 1982). To pass tests of reflexivity, the participant is required to
conditionally relate each stimulus to itself (e.g., by selecting comparison A in the presence of
sample A). Symmetry refers to the functional reversibility of conditional discriminations
(e.g., if the selection of comparison B, given A as sample, is taught, then the selection of A as
a comparison, given B as sample, is derived). Transitivity refers to the combination of taught
relations (e.g., if the selection of B, given A, and C, given B, are taught, then the selection of
C, given A, is derived). A performance that combines all three patterns is held to indicate a
relation of equivalence between the three stimuli.
Correspondence concerning this article should be addressed to Ian Stewart, School of Psychology,
St. Anthony's Building, National University of Ireland, Galway; E-mail: ian.stewart@nuigalway.ie
DOI:10.11133/j.tpr.2013.63.3.001
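The three derived patterns described above can be made concrete with a short sketch. This is purely an illustration of the logic, not part of any study procedure, and all names in it are hypothetical: starting from trained A-B and B-C pairs, applying reflexivity, symmetry, and transitivity yields the full equivalence relation, including the combined C-A performance.

```python
# Illustrative only: derive the relations that define stimulus equivalence
# from a set of trained conditional discriminations.
def equivalence_closure(trained):
    """Reflexive, symmetric, transitive closure of a set of stimulus pairs."""
    stimuli = {s for pair in trained for s in pair}
    relations = set(trained) | {(s, s) for s in stimuli}   # reflexivity: A-A
    changed = True
    while changed:
        changed = False
        for x, y in list(relations):
            if (y, x) not in relations:                    # symmetry: A-B -> B-A
                relations.add((y, x))
                changed = True
        for x, y in list(relations):
            for y2, z in list(relations):
                if y == y2 and (x, z) not in relations:    # transitivity: A-B, B-C -> A-C
                    relations.add((x, z))
                    changed = True
    return relations

# Train A-B and B-C, as in a three-member equivalence preparation.
derived = equivalence_closure({("A", "B"), ("B", "C")})
print(("C", "A") in derived)  # True: the combined (equivalence) test relation
```

With three stimuli the closure contains all nine ordered pairs, which is why passing reflexivity, symmetry, and combined tests together is taken to indicate a single relation of equivalence among the stimuli.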


Stimulus equivalence and derived relations more generally have been extensively
studied by behavior analytic researchers. One of the main reasons for this is that they
appear to be closely linked with human language. For instance, while verbally able humans
readily show derived relations, studies with non-humans have failed to produce robust and
unequivocal demonstrations of this phenomenon (e.g., Dugdale & Lowe, 2000; Sidman,
Rauzin, Lazar, Cunningham, Tailby, & Carrigan, 1982; see also Schusterman & Kastak,
1998). In human participants, the ability to show derived relations has been shown to
correlate with language ability (e.g., Barnes, McCullagh, & Keenan, 1990; Devany, Hayes,
& Nelson, 1986; O'Hora, Pelaez, & Barnes-Holmes, 2005). In addition, empirical effects
produced by language-based tasks (e.g., semantic priming) are paralleled by tasks
involving derived relations (e.g., Barnes-Holmes et al., 2005; Bissett & Hayes, 1988).
Given the close link between equivalence and human language, it might seem
important to investigate this phenomenon using auditory stimuli. In fact, however, there
has been much less research into stimulus equivalence with auditory stimuli than with
visual stimuli. A number of studies have included both auditory and visual stimuli in the
conditional discrimination training used to establish equivalence, including the very first
empirical demonstration of the latter by Sidman (1971), amongst others (e.g., Almeida-Verdu et al., 2008; De Rose, De Souza, & Hanna, 1996; Gast, VanBiervliet, & Spradlin,
1979; Green, 1990; Groskreutz, Karsina, Miguel, & Groskreutz, 2010; Kelly, Green, &
Sidman, 1998; Sidman & Cresson, 1973; Smeets & Barnes-Holmes, 2005; Ward & Yu,
2000). However, with respect to the investigation of equivalence using solely auditory
stimuli, there is only one example thus far.
The study in question is Dube, Green, and Serna (1993). In this study, participants
were trained in A-B and A-C conditional discriminations using a two-comparison MTS
preparation and subsequently tested for derived B-C and C-B relations.
The computer-based protocol for assessing trained and derived relations was slightly nonstandard, which the authors argued was necessary due to the use of auditory stimuli
(recordings of spoken nonsense syllables, e.g., "cug," "zid"). On each trial, prior to the
presentation of each of the auditory stimuli, a round, white spot would appear at the center
of the screen. After the participant had touched this, it disappeared and the auditory sample
stimulus was presented. First the sample was presented and then one of the two
comparisons, followed by a similar second presentation of the sample, followed by the
other comparison. The order of presentation of the two comparisons was counterbalanced
across trials. In addition, as each comparison was presented, a grey rectangle would appear
briefly in either the upper left or upper right of the screen, and after the second comparison
had been presented, both rectangles would appear on screen together and remain on screen
until the participants responded by touching one. The results were that six out of seven
participants acquired the conditional discrimination baseline and four demonstrated the
formation of two 3-member (A-B-C) equivalence relations. Four participants received
additional training and subsequently demonstrated extension of the relations from three to
four members.
As the first study to demonstrate stimulus equivalence with exclusively auditory
stimuli, Dube et al. (1993) was an important step forward. One of the aims of the current
study was to extend this work in accordance with the recommendations of Dube et al. by
showing auditory stimulus equivalence with more than two comparisons. In order to
achieve this aim, this study employed an alternative format for training and testing derived
equivalence that was arguably closer to conventional MTS procedures in certain respects.
Most notably, in the current procedure, each auditory stimulus on a trial, including the
sample and each of the three comparisons, was associated with an on-screen button that
appeared in a visual array similar to the array of stimuli seen in a conventional all-visual
MTS task. A second aim of the present study was to extend the auditory equivalence
paradigm in another potentially useful direction by investigating the effect on derived
equivalence in the auditory domain of a potentially competing source of stimulus control
based on non-arbitrary (physical) relations.


Previous research demonstrated this effect (referred to as non-arbitrary interference)
with equivalence relations using visual stimuli (Stewart, Barnes-Holmes, Roche, &
Smeets, 2002). The concept underlying this work was based on the relational frame theory
(RFT; Hayes, Barnes-Holmes, & Roche, 2001) distinction between non-arbitrary relational
responding in which stimuli are related based on their physical properties (e.g., identity
matching) and arbitrarily applicable relational responding, in which stimuli are related in a
particular way based on control by stimuli that lie outside the relata. An example of the
latter is equivalence, which RFT argues is made more likely by certain aspects of the
training and testing context to which participants have been exposed in their pre-experimental histories. Arbitrarily applicable relational responding, the nature and origins
of which are described in detail elsewhere (see, e.g., Stewart & McElwee, 2009), is
proposed by RFT advocates to be a form of responding that only humans appear to learn to
an advanced degree. Furthermore, it is argued that this uniquely human repertoire is what
enables humans alone to readily demonstrate not simply stimulus equivalence but in fact
all forms of complex language, and indeed this is how RFT explains the link between
these two phenomena.
Relational frame theorists suggest that although arbitrarily applicable relational
responding is more abstract and complex than non-arbitrary relational responding, learning
the former is probably based to an important extent on learning the latter. For example,
humans almost certainly learn to match physically similar things before they become able
to respond in accordance with an abstract relation of sameness between physically
dissimilar stimuli, such as happens in stimulus equivalence, and the former may well
facilitate the development of the latter. In fact, RFT-based research procedures in which
non-arbitrary relational training is used to establish contextual cues that later control
abstract arbitrarily applicable relations between stimuli (see, e.g., Steele & Hayes, 1991)
are based on this theoretical relationship. Given the latter, however, it is also possible that
under certain circumstances, non-arbitrary relational stimulus control might compete with
and make arbitrarily applicable relational stimulus control less likely. This was the
rationale underlying Stewart et al. (2002).
Stewart et al. (2002) investigated whether conflicting non-arbitrary color relations
could interfere with equivalence relations. This study involved three conditions (n=8
participants per condition), in each of which participants were trained and tested for the formation of three
3-member equivalence relations using three-letter (CVC) nonsense syllables as stimuli. In
the No Color condition, all stimuli were in black lettering (the background was white for
all three conditions). The other two conditions (i.e., All Color and Color Test) were identical
to the No Color condition in all respects except that some or all of the nonsense syllable
stimuli they employed appeared in a variety of colors (red, blue, and green). Furthermore,
during the testing phase for both these conditions, the sample stimulus was always a
different color from the experimenter-designated correct (i.e., equivalent) comparison and
always the same color as one of the incorrect (non-equivalent) comparisons; hence, for
both these conditions, testing trials represented a conflict between non-arbitrary color
relations and equivalence relations as potential sources of stimulus control. However, in
the All Color condition, the stimuli during the training phase were also in color. The
difference was that during training there was no consistent relationship (either congruency
or lack of it) between the trained arbitrary relations and the color match relations. As a
result, it was predicted that participants would learn to ignore color as an aspect of the
format and thus would show roughly comparable levels of equivalence to the No Color
group. Participants in the Color Test condition, in contrast, were trained with black lettered
nonsense syllables, and thus it was predicted that there would be maximal conflict between
non-arbitrary and arbitrary relational stimulus control for this group.
The results of the study were consistent with these predictions in that, while the No
Color and All Color conditions showed roughly comparable mean levels of equivalence
responding and equal numbers of participants who passed the equivalence test (i.e., three
per condition), the mean levels of equivalence in the Color Test condition were significantly


lower, and there were no equivalence passes. At the same time, mean levels of color
matching were significantly higher for the Color Test condition than for the other two
conditions, indicating that participants in this condition were showing lower levels of
equivalence than those in the other conditions because participants in the former were
tending to color match more than those in the latter. These results supported the RFT
predictions of non-arbitrary relational interference with equivalence responding.
Apart from testing for stimulus equivalence with auditory stimuli with an alternative
protocol and using three comparisons in the training/testing format, the current study also
examined whether the non-arbitrary interference with equivalence effect shown by Stewart
et al. (2002) in the context of visual stimuli might also occur in the context of auditory
stimuli. Similar to the Stewart et al. study, three conditions were trained and tested for the
formation of three 3-member equivalence relations. However, of course, in this study, the
stimuli were all auditory stimuli, and more specifically, they were recordings of spoken
nonsense syllables as used by Dube et al. (1993). In one condition (Same Voice), all
nonsense syllables in both training and testing were spoken by the same voice, and thus no
conflict was expected. However, the other two conditions were exposed to testing in which
the nonsense syllables were spoken by varying types of voice and in which the correct (i.e.,
equivalent) comparison stimulus was always produced by a different voice from that
producing the sample stimulus, while one of the incorrect (i.e., non-equivalent) comparison
stimuli was produced by the same voice as the sample. One of the conditions (Different
Voice) also received training with varying voices, and thus participants in this condition
were trained to ignore the non-arbitrary relation of voice. Hence, during testing, the
voices of the stimuli were predicted to have little or no impact on equivalence performance.
However, in the other condition (Different Voice Test), participants were trained with the
stimuli all being spoken in the same voice. Thus, these participants were not exposed to
reinforcement for ignoring voice during training. It was therefore predicted that
the level of interference in equivalence responding would be highest for participants in this
condition. It was also predicted that non-arbitrary (voice) matching would be highest for these participants. This
study aimed to examine interference with auditory stimulus equivalence analogous to the
way in which Stewart etal. (2002) examined interference with visual stimulus equivalence,
and thus the foregoing predictions were based on the patterns observed in the Stewart et al.
study. However, given the difference in the modality of the stimuli employed, it was
expected that there might also be some differences between the two studies.

Method
Participants
Participants were 33 undergraduate students attending the National University of
Ireland, Galway, with a mean age of 21.3 years (SD=5.6) who volunteered to take part in
the study in exchange for course credit. Informed consent was appropriately obtained from
each participant (see the Procedure section). Participants were randomly assigned to one of
three experimental conditions (i.e., 11 per condition).

Apparatus
Stimulus presentation and recording of responses were controlled by custom software
programmed in Visual Basic and presented on an Advent K200 laptop. Participants were
provided with headphones to hear the auditory experimental stimuli, which included the
following nine spoken nonsense syllables: ZID (A1), MAU (B1), JOM (C1), VEK (A2),
WUG (B2), BIF (C2), YIM (A3), DAX (B3), and PUK (C3). The alphanumeric labels
accompanying each nonsense syllable are used here for ease of communication.
Participants were unaware of these labels.
The Visual Basic program used for the experiment drew on pre-recorded sound files to
produce the auditory stimuli employed during the protocol. The sound files were


pre-recordings of each of the nine nonsense syllables listed above spoken in each of three
different voices: an adult male voice, an adult female voice, and a female child's voice.
Whenever a sample or comparison button was clicked (within pre-designated parameters; see
the Procedure section), a particular pre-recording would be played by the program. In the
Same Voice condition, all nonsense syllable stimuli were presented in the child's voice. In the
Different Voice condition, during training and during testing, nonsense syllables were
presented in all three pre-recorded voices. In the training phase of the Different Voice Test
condition, all nonsense syllable stimuli were presented in the child's voice, while in the
testing phase, all nonsense syllable stimuli were presented in all three pre-recorded voices.

Procedure
General. At the beginning of the procedure, participants were seated at a desk in a
small experimental room facing the laptop computer and were provided with a typed
information sheet and a typed consent form to sign. After they signed the consent form, the
experimental procedure proper could begin.
Each participant was exposed to two separate sessions of training and testing (see
Table 1). At the start of the procedure, the following instructions appeared on the computer
screen:
In the following trial, you will see a button at the top of the screen. When you
click this button using the mouse, you will hear a nonsense word and you
will see three further buttons appear at the bottom of the screen, and hear
three further nonsense words. You need to choose one of the three nonsense
words by clicking on one of the three buttons at the bottom. In the first part
of the experiment, the computer will always tell you whether your choice
was correct or wrong. In the latter part of the experiment, however, the
computer will no longer provide you with feedback. Click START to begin.
Figure 1 shows the auditory successive conditional discrimination protocol for the
MTS trials used in both training and testing. On each trial, a red button first appeared in
the top center of the screen. When the participant clicked on it, a black box appeared
around it for 0.5 s and an auditory stimulus, namely, a recording of a spoken nonsense
syllable, was presented. After this, three additional buttons appeared in order from left to
right along the lower half of the screen. The first button appeared 1.5 s after the sample was
presented, a black box surrounded it for 0.5 s, and an auditory stimulus (spoken nonsense
syllable) was presented. A second button appeared 1 s later, a black box surrounded this
button for 0.5 s, and an auditory stimulus (spoken nonsense syllable) was presented.
Finally, 1 s later again, a third button appeared, a black box surrounded it for 0.5 s, and an
auditory stimulus (spoken nonsense syllable) was presented.
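As a compact summary of the trial structure just described, the event timings can be sketched as follows. The code is illustrative only (the original software was written in Visual Basic, and these event names are hypothetical); the offsets, in seconds, are taken from the text.

```python
# Illustrative timeline of one successive auditory MTS trial.
# Offsets (in seconds) follow the timing parameters given in the text.
def trial_timeline():
    events = [(0.0, "sample clicked: black box for 0.5 s, sample syllable plays")]
    onset = 1.5                  # first comparison appears 1.5 s after the sample
    for i in (1, 2, 3):
        events.append((onset, f"comparison {i} appears: black box for 0.5 s, syllable plays"))
        onset += 1.0             # comparisons 2 and 3 follow at 1-s intervals
    events.append((3.5, "after the third syllable, clicks on comparisons take effect"))
    return events

for offset, event in trial_timeline():
    print(f"{offset:4.1f} s  {event}")
```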

Figure 1. The auditory successive conditional discrimination protocol.


Until all three comparison buttons had appeared on the screen and the auditory stimuli
had been presented, a click on any of the on-screen buttons produced no further effect.
From that point on, a click on one of the three comparisons produced further effects. First,
a black box appeared around that particular button again, but this time no auditory stimulus
was presented. In addition, if this was training, then appropriate feedback was provided. If
a correct response was made, the stimulus display cleared and the word "correct" appeared
in the center of the screen for 0.5 s accompanied by an auditory tone (i.e., a beep). If an
incorrect response was made, the display cleared and the word "wrong" appeared in the
center of the screen for 0.5 s accompanied by a different auditory tone (i.e., a buzzing
sound). Then, after an intertrial interval (ITI) of 1 s, the next trial began. During testing,
no feedback was provided and the ITI began immediately.
Training. During the training phase, the format for the experimental stimuli was as
follows.
Same Voice group: Same voice for all stimuli
Different Voice group: Different voices across stimuli
Different Voice Test group: Same voice for all stimuli
During this phase of the experiment, participants were trained on three A-B and three
B-C MTS tasks. For the three A-B tasks, participants were presented with A1, A2, or A3
as the sample stimulus and B1, B2, and B3 as auditory comparisons. A correct response
was B1 given A1, B2 given A2, and B3 given A3. For the three B-C tasks, participants
were presented with B1, B2, or B3 as the sample and C1, C2, and C3 as comparisons. A
correct response was C1 given B1, C2 given B2, and C3 given B3.
Tasks were presented in a repeating cycle of 36 trials, the order of which was the same
for every participant (see Appendix A). First, the three A-B tasks were presented six times
each in a quasi-randomly ordered block of 18 trials; the three B-C tasks were then
presented six times each in another quasi-randomly ordered block of 18 trials. Across both
of these blocks, each of the following elements was counterbalanced: (a) the order of
presentation of the three A-B, and then the three B-C MTS tasks; (b) the spatial positioning
of the buttons whose appearance accompanied particular comparison auditory stimuli
(left, middle, or right); and (c) the spatial positioning of the button that accompanied the
experimenter-designated correct match (left, middle, or right). In the case of the Different
Voice condition, one extra element of counterbalancing was included: the spatial
positioning of the buttons whose appearance accompanied particular voice types, so that
no one particular voice would be associated with any one particular position. In addition,
across one third of the trials, as would be expected by chance, the correct match was
presented in the same voice as the sample stimulus, while on the remaining trials the
correct comparison was presented in a different voice from the sample.
When participants had responded correctly on 36 consecutive MTS training trials
(which could be achieved at any point during a training block) the testing phase for that
session began.
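The advancement rule just described, 36 consecutive correct responses reached at any point during a training block, amounts to tracking a running streak. A minimal sketch, with hypothetical function and variable names:

```python
# Illustrative check of the training mastery criterion: advance once the
# participant has produced 36 consecutive correct responses.
def trials_to_criterion(outcomes, run_length=36):
    """Return the trial number at which the criterion is first met, else None."""
    streak = 0
    for n, correct in enumerate(outcomes, start=1):
        streak = streak + 1 if correct else 0
        if streak == run_length:
            return n
    return None

# An error on trial 5 resets the streak, so the criterion is met on trial 41.
outcomes = [True] * 4 + [False] + [True] * 50
print(trials_to_criterion(outcomes))  # 41
```

This also accounts for the floor of 36 training trials seen for errorless participants in Table 1.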
Testing. Participants first read the following instructions:
During this phase the computer will no longer give you feedback.
The testing phase then began. The format for the experimental stimuli during the
testing phase was as follows.
Same Voice group: Same voice for all stimuli
Different Voice group: Different voices across stimuli
Different Voice Test group: Different voices across stimuli
In this phase of the experiment, participants were tested on the three C-A MTS tasks.
In these tasks, participants were presented with C1, C2, or C3 as the sample stimulus and
had to choose from among the three comparison stimuli A1, A2, and A3. Responding in


accordance with an equivalence relation was defined as choosing A1 given C1, A2 given
C2, and A3 given C3. A score of at least 32/36 correct responses (89%, or approximately 9 of 10) was used as
the criterion for passing the equivalence test.
Each of the three C-A MTS tasks was presented 12 times in one quasi-randomly
ordered block of 36 trials (see Appendix B). The predetermined quasi-random order of
presentation was the same for every participant. Each of the following elements was
counterbalanced: (a) the order of presentation of the three MTS tasks; (b) the spatial
positioning of the buttons whose appearance accompanied particular comparison auditory
stimuli (left, middle, or right); and (c) the spatial positioning of the button that accompanied
the experimenter-designated correct match (left, middle, or right). In the case of the
Different Voice and Different Voice Test groups, which presented different voices during
testing, one extra element of counterbalancing was included: the spatial positioning of
the buttons whose appearance accompanied particular voice types, so that no one particular
voice would be associated with any one particular position. In addition, in both conditions,
the correct comparison stimulus choice, in terms of the equivalence relation, was never the
same voice as the sample stimulus.
At the end of each experimental session, the following message appeared on screen:
Thank you, that's the end of that part of the experiment. Please call the
experimenter.
If this was the participants first experimental session, he or she was exposed to a
second session, either immediately or within 48 hr of the first exposure. During the second
session, the participant was exposed to exactly the same training and testing procedures.
After the second session, the participant was thanked and debriefed.

Results
Training
Table 1 provides an overview of individual and average results by both condition and
exposure for both training (number of trials required to meet criterion) and testing
(numbers of equivalence and matching responses, respectively). The mean number of
training trials required during the first exposure was 108.73 (SD=46.95) for the Same
Voice group, 155.45 (SD=67.50) for the Different Voice group, and 138.73 (SD=69.90)
for the Different Voice Test group. The corresponding figures for the second exposure were
51.10 (SD=23.31), 53.18 (SD=14.93), and 52.10 (SD=27.97), respectively (see Figure 2).
A 3 × 2 repeated measures analysis of variance (ANOVA) revealed a highly significant
main effect for exposure, F(1, 30)=47.74, p<0.0001, partial η²=0.61. However, there
were no significant differences between the conditions, and there was no interaction effect.
A Pearson's product-moment correlation test was conducted to check for a possible
correlation between number of training trials required in the first exposure and the number
of equivalence responses in either the first or the second exposure to testing. There were no
significant correlations in either case (Exposure 1: r=.079, p=.663; Exposure 2: r=.049,
p=.786). Overall, then, these results indicate the absence of significant differences
between the conditions with respect to number of training trials required to reach criterion,
and that the quantity of training trials received did not systematically affect equivalence
performance.
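These figures can be checked against the per-participant counts in Table 1. As an illustrative verification (not part of the original analysis), the Same Voice group's first-exposure mean and standard deviation can be recomputed with the Python standard library:

```python
from statistics import mean, stdev

# Training trials to criterion, Same Voice group, Exposure 1 (from Table 1).
sv_exposure_1 = [65, 111, 80, 224, 101, 91, 148, 141, 72, 92, 71]

# stdev uses the sample (n - 1) formula, matching the reported SD.
print(round(mean(sv_exposure_1), 2))   # 108.73
print(round(stdev(sv_exposure_1), 2))  # 46.95
```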

Testing
Equivalence responding. The test data were first analyzed in terms of the number of
responses emitted by participants in each condition that were in accordance with the
designated equivalence relations (defined hereafter as "correct" responses; see Table 1). In
the Same Voice condition, four individuals passed equivalence by showing 32/36 (89%) or
more correct responses in Exposure 1, and seven did so in Exposure 2. In the Different

Table 1
Numbers of Training Trials and Numbers and Percentages of Correct (Equivalent) Testing Trials for Individual Participants and Means and Standard Deviations for Each of the Three Conditions Across Both Exposures

Training (trials to criterion; P1 through P11, then M (SD))
Exposure 1
SV: 65, 111, 80, 224, 101, 91, 148, 141, 72, 92, 71; M = 108.7 (46.9)
DV: 75, 254, 170, 117, 255, 150, 240, 116, 95, 156, 82; M = 155.5 (67.5)
DVT: 76, 206, 253, 109, 204, 224, 60, 85, 138, 95, 76; M = 138.7 (69.9)
Exposure 2
SV: 97, 63, 36, 36, 36, 36, 90, 36, 60, 36, 36; M = 51.1 (23.3)
DV: 57, 67, 52, 36, 36, 56, 68, 73, 68, 36, 36; M = 53.2 (14.9)
DVT: 114, 36, 36, 36, 89, 36, 36, 40, 36, 78, 36; M = 52.1 (28.0)

Testing (Equivalence), M (SD)
Exposure 1: SV = 24.1 (10.2); DV = 17.5 (10.6); DVT = 8.5 (4.0)
Exposure 2: SV = 28.0 (11.5); DV = 18.7 (11.9); DVT = 8.0 (6.0)

Testing (Matching), M (SD)
Exposure 1: DV = 9.5 (6.2); DVT = 16.6 (7.4)
Exposure 2: DV = 7.8 (6.2); DVT = 16.6 (7.5)

Note. SV = Same Voice condition; DV = Different Voice condition; DVT = Different Voice Test condition.

Figure 2. Means and standard deviations for number of training trials required for Exposures 1 and
2 across the three conditions, Different Voice (dv), Different Voice Test (dvt), and Same Voice (sv).

Voice condition, the corresponding figures were two and three, whereas in the Different
Voice Test condition, no participant showed this level of equivalence responding. In fact,
in the Different Voice Test condition, the highest achieved in either exposure was 50%,
and even in the second exposure, two people recorded a score of 0 and a third recorded a
score of just 2.
The mean numbers of correct (equivalence) responses in the first exposure were 24.10
(SD=10.2) for the Same Voice group, 17.5 (SD=10.6) for the Different Voice group, and
8.5 (SD=4.0) for the Different Voice Test group. The corresponding figures for the second
exposure were 28.0 (SD=11.5) for the Same Voice group, 18.7 (SD=11.9) for the
Different Voice group, and 8.0 (SD=5.98) for the Different Voice Test group (see Figure
3). A 3 × 2 mixed repeated measures ANOVA, with the three voice conditions as Factor 1
and the two test exposures as Factor 2, showed a highly significant main effect for
condition, F(2, 30)=10.49, p<0.0001, partial η²=0.41, but not for exposure. No
significant interaction effect was identified. Post-hoc analyses (Fisher PLSD) revealed
significant differences in mean equivalence responses between the Different Voice Test
conditions and both the Same Voice (p<0.0001) and Different Voice conditions (p=.017).
The difference in mean equivalence responses for the Same Voice and Different Voice
conditions was nonsignificant, but only marginally so (p=.051).
Matching. Another area for analysis was the extent of non-arbitrary relational
(matching) type responding by participants in the Different Voice and Different Voice Test
conditions (see Table 1 and Figure 4). For the first exposure, mean levels of matching-type
responses were 9.5 (SD=6.2) for the Different Voice condition and 16.6 (SD=7.4) for the
Different Voice Test condition. The corresponding means for the second exposure were 7.8
(SD=6.2) for the Different Voice condition and 16.6 (SD=7.5) for the Different Voice Test
condition. Levels of matching responses for the Different Voice and Different Voice Test
conditions were analyzed using a 2 × 2 mixed repeated measures ANOVA, with voice
groups as Factor A and exposures as Factor B. This analysis showed a significant effect for
condition, F(1, 20)=7.792, p=.0113, partial η²=0.25, with no significant effect for
exposures (Factor B) and no interaction effect.

Discussion
One key aim of the current study was to extend previous work by Dube et al. (1993) on
stimulus equivalence with auditory stimuli. As discussed earlier, research into stimulus
equivalence has typically used visual stimuli exclusively, with relatively less work
conducted that has included auditory stimuli as well, and only one study thus far (i.e.,

Figure 3. Means and standard deviations for number of responses in accordance with equivalence
for Exposures 1 and 2 across the Different Voice (dv), Different Voice Test (dvt), and Same Voice
(sv) conditions. The dashed line indicates the maximum score possible on the equivalence test.

Figure 4. Means and standard deviations for number of responses in accordance with the non-
arbitrary relation of voice per individual exposure for the Different Voice (dv) and Different Voice
Test (dvt) conditions.

Dube et al., 1993) that has used solely auditory stimuli. However, derived relations such as
equivalence are seen by many behavior analytic researchers as providing a model of
language, and everyday language often involves either exclusively or nearly exclusively
auditory stimuli. Hence, if the link between derived relations and language is to be
comprehensively explored, then it would appear important that more attention be paid to
the generation of derived relations based on auditory stimuli alone. Dube et al. (1993)
provided an initial demonstration of stimulus equivalence using auditory stimuli. The
current study replicates the Dube et al. effect by providing a similar demonstration. The


results of this study were that a total of 11 participants demonstrated equivalence
responding with all auditory stimuli. However, whereas Dube et al. used a two-comparison
format to produce two 3-member equivalence relations, the current study used a three-
comparison format to produce three 3-member equivalence relations.
With regard to the number of participants showing equivalence, in Dube
et al. (1993), three of six participants passed the first equivalence test (including one person
whose equivalence relations were argued to be based on reject relations), and five of six
passed the second test. At first blush, the results of the procedure used in the current study
do not appear as strong. In the SV condition, in which all stimuli were presented in the same
voice and which thus allows the most direct comparison with Dube et al., only 3 out of 11
participants passed the first equivalence test, while 7 out of 11 passed the second.
However, a number of factors need to be considered. First, the current study was testing for
three 3-member relations, whereas Dube et al. tested for two 3-member relations. Previous
research suggests that equivalence formation with visual stimuli is a function of class or
relation size (Arntzen & Holth, 2000), and this may be the same, or perhaps even more
pronounced, in the case of auditory stimuli. Second, because the present study sought to
explore non-arbitrary interference and to compare findings with Stewart et al. (2002), it
provided an equivalence test only, whereas Dube et al. tested for a range of derived
relations, including symmetry and transitivity. Further research into the effect of these
variables on equivalence formation with auditory stimuli is needed. For the present,
however, it is certainly possible that their influence affected the number of participants
demonstrating equivalence in this study.
A major aim of this study was to test for non-arbitrary relational interference with
equivalence relational responding. A previous study (Stewart et al., 2002) reported non-arbitrary interference with equivalence responding in the context of visual stimuli. The
present study attempted to extend this work by examining this effect in the context of
auditory stimuli. In one condition (Same Voice), auditory stimuli were presented in the
same pre-recorded voice throughout equivalence training and testing. In a second condition
(Different Voice Test), stimuli in training were presented in the same pre-recorded voice,
while those in testing were presented in three different voices. On every trial, there was a
potential conflict of stimulus control based on the fact that the correct (equivalent)
comparison stimulus was in a different voice from the sample, while one of the two
incorrect comparisons was in the same voice as the sample and thus presented an
opportunity for non-arbitrary matching. In a third condition, participants received the
same testing as the second condition but were provided with a history of training in which
different voices were employed but voice was irrelevant with respect to the contingencies
for correct responding.
Findings were similar to those of Stewart et al. (2002) in a number of ways. First,
there was a higher incidence of individuals passing criteria for equivalence and also
significantly higher average levels of equivalence responding in the Same Voice and
Different Voice conditions relative to the Different Voice Test condition. This is analogous
to the Stewart et al. finding of higher levels of equivalence in the Same Color and All Color
conditions compared with the Color Test condition. Furthermore, in the current study,
there was a significantly higher level of non-arbitrary relational responding, or matching,
observed in the Different Voice Test condition than in the Different Voice condition. These
results are also analogous to those of Stewart et al. (the relationship between the conditions
is similar to that between the Color Test and All Color conditions) and are consistent with
the suggestion that the lower levels of equivalence in the Different Voice Test condition are
the result of interference by non-arbitrary relations.
Despite broad similarities, there were also some differences in findings between this
study and that of Stewart et al. (2002). First, the mean number of training trials required to
meet criterion for equivalence testing in Exposure 1 was considerably lower in the current
study than in Stewart et al.'s for all three conditions. More specifically, in the current study,
the numbers of trials required for the Same Voice, Different Voice, and Different Voice


Test conditions were, respectively, 108.7, 155.5, and 138.7, while in the Stewart et al. study,
the numbers of trials required for the No Color, All Color, and Color Test conditions were,
respectively, 347.4, 499.1, and 443.8. This suggests that the use of auditory stimuli resulted
in more efficient, albeit slower (i.e., more time was required per trial), learning of the
conditional discriminations than the use of visual stimuli. Both the greater efficiency in
terms of number of trials required and the slower pace of the training might be functions of
the successive nature of the conditional discriminative training involved in the case of
auditory stimuli. Future research might explore this issue further by, for example,
examining efficiency and time needed in the case of training analogous successive visual
conditional discriminations.
A second difference between findings for this study and that of Stewart et al. (2002)
has to do with levels of equivalence. Table 2 shows the proportions (and percentages) of
individuals passing equivalence in each equivalence test exposure in each of the three
conditions in both Stewart et al. and the current study. In Stewart et al., the No Color group
had just 3/8 (37.5%) individuals passing equivalence across both exposures, whereas in the
current study, the analogous Same Voice condition had a total of 7/11 (64%) individuals
passing equivalence across both exposures. This greater facilitation of the derivation of
equivalence relations might, again, be a function of the use of auditory stimuli or, perhaps
more specifically, of successive discrimination procedures. However, this facilitation did
not seem to apply in the case of the other two conditions, for which levels of equivalence
are similar to their analogous conditions in Stewart et al. For the condition in which non-
arbitrary interference was expected to be maximal (i.e., the Different Voice Test condition),
and in which there were no instances of passing equivalence, this is perhaps not that
surprising. However, for the Different Voice condition, which yielded less than half the
equivalence passes of the Same Voice condition (i.e., 3/11 versus 7/11), when its analogue
in Stewart et al. (i.e., All Color) did as well as the analogue of the Same Voice condition
(i.e., 3/8 versus 3/8), there seems to be something additional to be explained. In this case, it
may be that the presence of different voices was proportionately more disruptive in the
auditory context than the presence of different colors was in the visual context.
Table 2
Proportions (and Percentages) of Equivalence Passes in Equivalence
Test Exposures 1 and 2 for the Three Conditions in Stewart et al. (2002)
and Their Counterparts in the Current Study

          Stewart et al. (2002) conditions      Current study conditions
Exposure  No Color   All Color  Color Test     Same Voice  Different Voice  Different Voice Test
1         2/8 (25%)  2/8 (25%)  0/8 (0%)       4/11 (36%)  2/11 (18%)       0/11 (0%)
2         3/8 (38%)  3/8 (38%)  0/8 (0%)       7/11 (64%)  3/11 (27%)       0/11 (0%)

Note. Criterion for equivalence passes was 32 out of 36, or 90%, correct.

The reason that the presence of different voices might be more disruptive in the
auditory than in the visual case is unknown. Again, some aspect of the use of auditory
stimuli themselves or the use of successive conditional discrimination procedures, or both,
may be responsible. At the present time, given the relative lack of empirical work in the
domain of auditory conditional discriminations, especially as a basis for equivalence,
attempted explanations of these effects can only be speculative. More work will be needed
to investigate this area. Such research might systematically compare auditory conditional
discrimination procedures with both simultaneous and successive visual conditional
discrimination procedures as methods of producing equivalence and/or other derived
relations. Furthermore, one dimension of this research might involve investigation of
possible differences between the auditory and visual domains with respect to the influence
of non-arbitrary dimensions. Stimulus control topography (McIlvane & Dube, 2003; Ray
& Sidman, 1970) analyses of the influence of topographical aspects of visual and auditory
conditional discrimination formats might be particularly useful in this regard.


In regard to the outcomes of the present study, if the mere presence of different
voices is more disruptive in the auditory than in the visual domain, then the disruption
of equivalence in the current study might be relatively more influenced by the mere
presence of different formats (e.g., voice types) than was the case in Stewart et al. (2002).
In other words, perhaps the disruption of equivalence in the current study is
proportionately more based on mere presence of different formats throughout training
and testing and proportionately less based on non-arbitrary relations as an alternative
to equivalence responding. The evidence concerning patterns of non-arbitrary
matching in testing bears this out. Though there certainly is evidence in the current
study of interference with equivalence by non-arbitrary matching, since there was a
significantly higher average level of matching in the Different Voice Test condition
than in the Different Voice condition, there seemed to be relatively less such
interference in this study than in Stewart et al.'s. This can be seen by comparing across
studies for the average level of matching occurring across both exposures in the two
conditions in which matching would have been predicted to be maximal, namely, the
Different Voice Test and Color Test conditions. Levels of matching for these two
conditions were 16.6 out of 36 (46%) and 25.2 out of 36 (70%), respectively, suggesting
that much more matching was occurring in the Color Test than in the Different Voice
Test condition and, thus, that the lower level of equivalence in the current study was
proportionately more a result of disruption caused by the mere presence of different
formats than non-arbitrary relational interference. As previously suggested, further
research analyzing stimulus control aspects of auditory conditional discrimination
procedures, especially as the basis for equivalence or other derived relations and in
particular as influenced by non-arbitrary relations, is warranted. Such research may
allow a better understanding of these effects.
The current findings suggest a number of other directions for future work also. For
example, the current study adopted only one of many different training protocols that have
been reported in the literature on derived stimulus relations. It has been shown that testing
for symmetry before testing for transitivity or equivalence may facilitate the emergence of
derived relations (see, e.g., Adams, Fields, & Verhave, 1993). Accordingly, Stewart et al.
(2002) suggested that testing for symmetry before testing for equivalence relations in a
context such as they employed in their research might affect testing outcomes substantially.
Further exploration of this or related variables (e.g., prior transitivity testing) might also be
conducted with respect to the current protocol. In addition, as interference has now been
demonstrated in both visual and auditory domains, another possible extension of the
current research might be to examine whether and to what extent interference might occur
cross-modally. For example, to what extent might potentially conflicting non-arbitrary
relations in the visual realm affect equivalence with auditory stimuli, and to what extent
might potentially conflicting non-arbitrary relations in the auditory realm affect
equivalence with visual stimuli?
In conclusion, the current research has extended previous work by Dube et al. (1993)
by demonstrating derived equivalence with auditory stimuli. This study employed a
protocol more similar to conventional MTS procedures than Dube et al. and used it to
facilitate derivation of three 3-member equivalence relations. It also investigated non-
arbitrary interference with auditory equivalence using a design similar to that used
previously by Stewart et al. (2002) in the visual domain and provided evidence that non-
arbitrary relations can interfere with equivalence relations in the auditory domain, just as
Stewart et al. had done in the visual. These findings cohere with the RFT account of the
historical importance of non-arbitrary relations and extend previous work on non-arbitrary
interference into the auditory domain. Given the link between derived relations and
language and the fact that auditory stimuli play such an important role in the latter,
investigation of derived relations between auditory stimuli is a potentially important area
of research to pursue, and it is hoped that the methods and findings described here will
guide further work in this domain.


References
Adams, B. J., Fields, L., & Verhave, T. (1993). Effects of test order on intersubject
variability during equivalence class formation. The Psychological Record, 43,
133–152.
Almeida-Verdu, A. C., Huziwara, E. M., De Souza, D. G., de Rose, J. C., Bevilacqua, M. C.,
Lopes, J., Jr., … McIlvane, W. J. (2008). Relational learning in children with
deafness and cochlear implants. Journal of the Experimental Analysis of Behavior,
89(3), 407–424. doi:10.1901/jeab.2008.89-407
Arntzen, E., & Holth, P. (2000). Probability of stimulus equivalence as a function of
class size versus number of classes. The Psychological Record, 50, 79–104.
Barnes, D., McCullagh, P. D., & Keenan, M. (1990). Equivalence class formation in
non-hearing impaired children and hearing impaired children. The Analysis of Verbal
Behaviour, 8, 19–30.
Barnes-Holmes, D., Staunton, C., Whelan, R., Barnes-Holmes, Y., Commins, S., Walsh, D.,
… Dymond, S. (2005). Derived stimulus relations, semantic priming, and event-
related potentials: Testing a behavioral theory of semantic networks. Journal of
the Experimental Analysis of Behavior, 84(3), 417–433. doi:10.1901/jeab.2005.78-04
Bissett, R. T., & Hayes, S. C. (1998). Derived stimulus relations produce mediated and
episodic priming. The Psychological Record, 48, 617–630.
Devany, J. M., Hayes, S. C., & Nelson, R. O. (1986). Equivalence class formation in
language-able and language-disabled children. Journal of the Experimental
Analysis of Behavior, 46, 243–257. doi:10.1901/jeab.1986.46-243
De Rose, J., De Souza, D. G., & Hanna, E. S. (1996). Teaching reading and spelling:
Exclusion and stimulus equivalence. Journal of Applied Behavior Analysis, 29,
451–469. doi:10.1901/jaba.1996.29-451
Dube, W. V., Green, G., & Serna, R. W. (1993). Auditory successive conditional
discrimination and auditory stimulus equivalence classes. Journal of the
Experimental Analysis of Behavior, 59, 103–114. doi:10.1901/jeab.1993.59-103
Dugdale, N., & Lowe, C. F. (2000). Testing for symmetry in the conditional
discriminations of language-trained chimpanzees. Journal of the Experimental
Analysis of Behavior, 73, 5–22. doi:10.1901/jeab.2000.73-5
Gast, D. L., VanBiervliet, A., & Spradlin, J. E. (1979). Teaching number-word equivalences:
A study of transfer. American Journal of Mental Deficiency, 83, 524–527.
Green, G. (1990). Differences in development of visual and auditory-visual
equivalence relations. American Journal on Mental Retardation, 95, 260–270.
Groskreutz, N. C., Karsina, A. J., Miguel, C. F., & Groskreutz, M. P. (2010). Using
conditional discrimination training with complex auditory-visual samples to
produce emergent relations in children with autism. Journal of Applied Behavior
Analysis, 43, 131–136.
Hayes, S. C., Barnes-Holmes, D., & Roche, B. (2001). Relational frame theory: A post-
Skinnerian account of human language and cognition. New York, NY: Plenum.
Kelly, S., Green, G., & Sidman, M. (1998). Visual identity matching and auditory-visual
matching: A procedural note. Journal of Applied Behavior Analysis, 31, 237–243.
doi:10.1901/jaba.1998.31-237
McIlvane, W. J., & Dube, W. V. (2003). Stimulus control topography coherence theory:
Foundations and extensions. The Behavior Analyst, 26(2), 195–213.
O'Hora, D., Pelaez, M., & Barnes-Holmes, D. (2005). Derived relational responding
and performance on verbal subtests of the WAIS-III. The Psychological Record,
55(1), 155–175.
Ray, B. A., & Sidman, M. (1970). Reinforcement schedules and stimulus control. In
W. N. Schoenfeld (Ed.), The theory of reinforcement schedules (pp. 187–214).
New York, NY: Appleton-Century-Crofts.
Schusterman, R. J., & Kastak, D. (1998). Functional equivalence in a California sea lion:
Relevance to animal social and communicative interactions. Animal Behaviour,
55, 1087–1095.
Sidman, M. (1971). Reading and auditory-visual equivalences. Journal of Speech and
Hearing Research, 14, 5–13.
Sidman, M., & Cresson, O., Jr. (1973). Reading and crossmodal transfer of stimulus
equivalences in severe retardation. American Journal of Mental Deficiency, 77,
515–523.
Sidman, M., Rauzin, R., Lazar, R., Cunningham, S., Tailby, W., & Carrigan, P. (1982).
A search for symmetry in the conditional discriminations of rhesus monkeys,
baboons and children. Journal of the Experimental Analysis of Behavior, 37,
23–44. doi:10.1901/jeab.1982.37-23
Sidman, M., & Tailby, W. (1982). Conditional discrimination vs. matching to sample:
An expansion of the testing paradigm. Journal of the Experimental Analysis of
Behavior, 37, 5–22. doi:10.1901/jeab.1982.37-5
Smeets, P. M., & Barnes-Holmes, D. (2005). Auditory-visual and visual-visual
equivalence relations in children. The Psychological Record, 55(3), 483–503.
Steele, D., & Hayes, S. C. (1991). Stimulus equivalence and arbitrarily applicable
relational responding. Journal of the Experimental Analysis of Behavior, 56(3),
519–555. doi:10.1901/jeab.1991.56-519
Stewart, I., Barnes-Holmes, D., Roche, B., & Smeets, P. (2002). Stimulus equivalence
and non-arbitrary relations. The Psychological Record, 52, 77–88.
Stewart, I., & McElwee, J. (2009). Relational responding and conditional discrimination
procedures: An apparent inconsistency and clarification. The Behavior Analyst, 32(2),
309–317.
Ward, R., & Yu, D. C. T. (2000). Bridging the gap between visual and auditory
discrimination learning in children with autism and severe developmental
disabilities. Journal on Developmental Disabilities, 7, 142–155.


Appendix A
This table shows the 36-trial block of trials that was cycled during training for each of
the three conditions. Alphanumerics (e.g., A1) represent the particular nonsense syllables
that were presented in auditory mode. For participants in two of the conditions (Same
Voice and Different Voice Test), all nonsense syllables were pronounced in the same voice
(i.e., a child's voice). For the third condition (Different Voice), the nonsense syllables were
pronounced in three different voices. The letters M, W, and C indicate the type of voice in
which particular nonsense syllables were pronounced on particular trials and stand for
man, woman, and child, respectively.
[Trials 1–36: each trial listed the sample nonsense syllable and the three comparison
nonsense syllables, with each stimulus marked M, W, or C for the pronouncing voice.]


Appendix B
This table shows the 36-trial block of testing trials presented to participants in each of
the three conditions. Alphanumerics (e.g., A1) represent the particular nonsense syllables
that were presented in auditory mode. For participants in one of the conditions (Same
Voice), all nonsense syllables were pronounced in the same voice (i.e., a child's voice), as
during training. For the other two conditions (Different Voice and Different Voice Test),
the nonsense syllables were pronounced in three different voices. The letters M, W, and C
indicate the type of voice in which particular nonsense syllables were pronounced on
particular trials and stand for man, woman, and child, respectively.
[Trials 1–36: each trial listed the sample nonsense syllable and the three comparison
nonsense syllables, with each stimulus marked M, W, or C for the pronouncing voice.]

