This study extended previous research on stimulus equivalence with all auditory
stimuli by using a methodology more similar to conventional match-to-sample
training and testing for three 3-member equivalence relations. In addition,
it examined the effect of conflicting non-arbitrary relations on auditory
equivalence. Three conditions (n=11 participants each) were trained and tested
for formation of equivalence using recorded auditory nonsense syllable stimuli. In
the Same Voice (SV) condition, participants were exposed to stimuli pronounced
by the same voice in training and testing. For the Different Voice Test (DVT)
condition, in training, stimuli were all pronounced by the same voice, while in
testing, they were pronounced by three different voices, with the sample always
in a different voice from the equivalent comparisons. This established potentially
competing sources of stimulus control, since participants might respond either
in accordance with non-arbitrary auditory relations or with equivalence. In the
third condition (Different Voice; DV), participants were given testing identical to
the DVT condition but were trained with stimuli pronounced by different voices,
such that voice was unrelated to the programmed contingencies. As predicted,
the DVT condition produced less equivalence responding and more non-arbitrary matching than the DV condition. These data are broadly consistent
with previous findings with visual stimuli.
Key words: stimulus equivalence, auditory stimuli, non-arbitrary relations,
interference, humans
Stimulus equivalence is perhaps the best-known example of the phenomenon of
derived or emergent stimulus relations. In a typical stimulus equivalence preparation, match-to-sample (MTS) training in a series of appropriately related conditional discriminations
results in a set of derived performances characterized by reflexivity, symmetry, and
transitivity (Sidman & Tailby, 1982). To pass tests of reflexivity, the participant is required to
conditionally relate each stimulus to itself (e.g., by selecting comparison A in the presence of
sample A). Symmetry refers to the functional reversibility of conditional discriminations
(e.g., if the selection of comparison B, given A as sample, is taught, then the selection of A as
a comparison, given B as sample, is derived). Transitivity refers to the combination of taught
relations (e.g., if the selection of B, given A, and C, given B, are taught, then the selection of
C, given A, is derived). A performance that combines all three patterns is held to indicate a
relation of equivalence between the three stimuli.
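To make the three derived patterns concrete, the relations implied by symmetry and transitivity over a trained A-B, B-C baseline can be computed mechanically. The following sketch is illustrative only and is not part of the original study:

```python
# Illustrative sketch: derive the relations implied by symmetry and
# transitivity from a set of trained (sample, comparison) pairs.

def derived_relations(trained):
    """Return the closure of the trained pairs under symmetry and
    transitivity (reflexive pairs such as A-A are excluded)."""
    relations = set(trained)
    changed = True
    while changed:
        changed = False
        # Symmetry: if A-B is in the set, add B-A.
        for (s, c) in list(relations):
            if (c, s) not in relations:
                relations.add((c, s))
                changed = True
        # Transitivity (combined with symmetry): if A-B and B-C are
        # in the set, add A-C.
        for (s1, c1) in list(relations):
            for (s2, c2) in list(relations):
                if c1 == s2 and s1 != c2 and (s1, c2) not in relations:
                    relations.add((s1, c2))
                    changed = True
    return relations

trained = {("A", "B"), ("B", "C")}
derived = derived_relations(trained) - trained
# Symmetry yields B-A and C-B; their combination (equivalence)
# yields A-C and, as tested in the present study, C-A.
```

Training only A-B and B-C thus suffices for the C-A performance probed in equivalence tests, which is the logic behind the training and testing design described below.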
Correspondence concerning this article should be addressed to Ian Stewart, School of Psychology,
St. Anthony's Building, National University of Ireland, Galway; E-mail: ian.stewart@nuigalway.ie
DOI:10.11133/j.tpr.2013.63.3.001
Stimulus equivalence and derived relations more generally have been extensively
studied by behavior analytic researchers. One of the main reasons for this is that they
appear to be closely linked with human language. For instance, while verbally able humans
readily show derived relations, studies with non-humans have failed to produce robust and
unequivocal demonstrations of this phenomenon (e.g., Dugdale & Lowe, 2000; Sidman,
Rauzin, Lazar, Cunningham, Tailby, & Carrigan, 1982; see also Schusterman & Kastak,
1998). In human participants, the ability to show derived relations has been shown to
correlate with language ability (e.g., Barnes, McCullagh, & Keenan, 1990; Devany, Hayes,
& Nelson, 1986; O'Hora, Pelaez, & Barnes-Holmes, 2005). In addition, empirical effects
produced by language-based tasks (e.g., semantic priming) are paralleled by tasks
involving derived relations (e.g., Barnes-Holmes et al., 2005; Bissett & Hayes, 1988).
Given the close link between equivalence and human language, it might seem
important to investigate this phenomenon using auditory stimuli. In fact, however, there
has been much less research into stimulus equivalence with auditory stimuli than with
visual stimuli. A number of studies have included both auditory and visual stimuli in the
conditional discrimination training used to establish equivalence, including the very first
empirical demonstration of the latter by Sidman (1971), amongst others (e.g., Almeida-Verdu et al., 2008; De Rose, De Souza, & Hanna, 1996; Gast, VanBiervliet, & Spradlin,
1979; Green, 1990; Groskreutz, Karsina, Miguel, & Groskreutz, 2010; Kelly, Green, &
Sidman, 1998; Sidman & Cresson, 1973; Smeets & Barnes-Holmes, 2005; Ward & Yu,
2000). However, with respect to the investigation of equivalence using solely auditory
stimuli, there is only one example thus far.
The study in question is Dube, Green, and Serna (1993). In this study, participants
were trained in A-B and A-C conditional discriminations using a two-comparison MTS
preparation and subsequently tested for the derivation of B-C and C-B derived relations.
The computer-based protocol for assessing trained and derived relations was slightly non-standard, which the authors argued was necessary due to the use of auditory stimuli
(recordings of spoken nonsense syllables, e.g., cug, zid). On each trial, prior to the
presentation of each of the auditory stimuli, a round, white spot would appear on the center
of the screen. After the participant had touched this, it disappeared and the auditory sample
stimulus was presented. First the sample was presented and then one of the two
comparisons, followed by a similar second presentation of the sample, followed by the
other comparison. The order of presentation of the two comparisons was counterbalanced
across trials. In addition, as each comparison was presented, a grey rectangle would appear
briefly in either the upper left or upper right of the screen, and after the second comparison
had been presented, both rectangles would appear on screen together and remain on screen
until the participants responded by touching one. The results were that six out of seven
participants acquired the conditional discrimination baseline and four demonstrated the
formation of two 3-member (A-B-C) equivalence relations. Four participants received
additional training and subsequently demonstrated extension of the relations from three to
four members.
As the first study to demonstrate stimulus equivalence with exclusively auditory
stimuli, Dube et al. (1993) was an important step forward. One of the aims of the current
study was to extend this work in accordance with the recommendations of Dube et al. by
showing auditory stimulus equivalence with more than two comparisons. In order to
achieve this aim, this study employed an alternative format for training and testing derived
equivalence that was arguably closer to conventional MTS procedures in certain respects.
Most notably, in the current procedure, each auditory stimulus on each trial, the
sample and each of the three comparisons alike, was associated with an on-screen button that
appeared in a visual array similar to the array of stimuli seen in a conventional all-visual
MTS task. A second aim of the present study was to extend the auditory equivalence
paradigm in another potentially useful direction by investigating the effect on derived
equivalence in the auditory domain of a potentially competing source of stimulus control
based on non-arbitrary (physical) relations.
lower, and there were no equivalence passes. At the same time, mean levels of color
matching were significantly higher for the Color Test condition than for the other two
conditions, indicating that participants in this condition were showing lower levels of
equivalence than those in the other conditions because participants in the former were
tending to color match more than those in the latter. These results supported the RFT
predictions of non-arbitrary relational interference with equivalence responding.
Apart from testing for stimulus equivalence with auditory stimuli with an alternative
protocol and using three comparisons in the training/testing format, the current study also
examined whether the non-arbitrary interference with equivalence effect shown by Stewart
et al. (2002) in the context of visual stimuli might also occur in the context of auditory
stimuli. Similar to the Stewart et al. study, three conditions were trained and tested for the
formation of three 3-member equivalence relations. In this study, however, the
stimuli were all auditory stimuli, and more specifically, they were recordings of spoken
nonsense syllables as used by Dube et al. (1993). In one condition (Same Voice), all
nonsense syllables in both training and testing were spoken by the same voice, and thus no
conflict was expected. However, the other two conditions were exposed to testing in which
the nonsense syllables were spoken by varying types of voice and in which the correct (i.e.,
equivalent) comparison stimulus was always produced by a different voice from that
producing the sample stimulus, while one of the incorrect (i.e., non-equivalent) comparison
stimuli was produced by the same voice as the sample. One of the conditions (Different
Voice) also received training with varying voices, and thus participants in this condition
were trained to ignore the non-arbitrary relation of voice. Hence, during testing, the
voices of the stimuli were predicted to have little or no impact on equivalence performance.
However, in the other condition (Different Voice Test), participants were trained with the
stimuli all being spoken in the same voice. Thus, these participants were not exposed to
reinforcement for ignoring voice during training procedures. It was therefore predicted that
the level of interference in equivalence responding would be highest for participants in this
condition. It was also predicted that matching would be highest for these participants. This
study aimed to examine interference with auditory stimulus equivalence analogous to the
way in which Stewart et al. (2002) examined interference with visual stimulus equivalence,
and thus the foregoing predictions were based on the patterns observed in the Stewart et al.
study. However, given the difference in the modality of the stimuli employed, it was
expected that there might also be some differences between the two studies.
Method
Participants
Participants were 33 undergraduate students attending the National University of
Ireland, Galway, with a mean age of 21.3 years (SD=5.6) who volunteered to take part in
the study in exchange for course credit. Informed consent was appropriately obtained from
each participant (see the Procedure section). Participants were randomly assigned to one of
three experimental conditions (i.e., 11 per condition).
Apparatus
Stimulus presentation and recording of responses were controlled by custom software
programmed in Visual Basic and presented on an Advent K200 laptop. Participants were
provided with headphones to hear the auditory experimental stimuli, which included the
following nine spoken nonsense syllables: ZID (A1), MAU (B1), JOM (C1), VEK (A2),
WUG (B2), BIF (C2), YIM (A3), DAX (B3), and PUK (C3). The alphanumeric labels
accompanying each nonsense syllable are used here for ease of communication.
Participants were unaware of these labels.
The Visual Basic program used for the experiment drew on pre-recorded sound files to
produce the auditory stimuli employed during the protocol. The sound files were
pre-recordings of each of the nine nonsense syllables listed above spoken in each of three
different voices: an adult male voice, an adult female voice, and a female child's voice.
Whenever a sample or comparison button was clicked (within pre-designated parameters; see
the Procedure section), a particular pre-recording would be played by the program. In the
Same Voice condition, all nonsense syllable stimuli were presented in the child's voice. In the
Different Voice condition, during training and during testing, nonsense syllables were
presented in all three pre-recorded voices. In the training phase of the Different Voice Test
condition, all nonsense syllable stimuli were presented in the child's voice, while in the
testing phase, all nonsense syllable stimuli were presented in all three pre-recorded voices.
Procedure
General. At the beginning of the procedure, participants were seated at a desk in a
small experimental room facing the laptop computer and were provided with a typed
information sheet and a typed consent form to sign. After they signed the consent form, the
experimental procedure proper could begin.
Each participant was exposed to two separate sessions of training and testing (see
Table 1). At the start of the procedure, the following instructions appeared on the computer
screen:
In the following trial, you will see a button at the top of the screen. When you
click this button using the mouse, you will hear a nonsense word and you
will see three further buttons appear at the bottom of the screen, and hear
three further nonsense words. You need to choose one of the three nonsense
words by clicking on one of the three buttons at the bottom. In the first part
of the experiment, the computer will always tell you whether your choice
was correct or wrong. In the latter part of the experiment, however, the
computer will no longer provide you with feedback. Click START to begin.
Figure 1 shows the auditory successive conditional discrimination protocol for the
MTS trials used in both training and testing. On each trial, a red button first appeared in
the top center of the screen. When the participant clicked on it, a black box appeared
around it for 0.5 s and an auditory stimulus, namely, a recording of a spoken nonsense
syllable, was presented. After this, three additional buttons appeared in order from left to
right along the lower half of the screen. The first button appeared 1.5 s after the sample was
presented, a black box surrounded it for 0.5 s, and an auditory stimulus (spoken nonsense
syllable) was presented. A second button appeared 1 s later, a black box surrounded this
button for 0.5 s, and an auditory stimulus (spoken nonsense syllable) was presented.
Finally, 1 s later again, a third button appeared, a black box surrounded it for 0.5 s, and an
auditory stimulus (spoken nonsense syllable) was presented.
Until all three comparison buttons had appeared on the screen and the auditory stimuli
had been presented, a click on any of the on-screen buttons produced no further effect.
From that point on, a click on one of the three comparisons produced further effects. First,
a black box appeared around that particular button again, but this time no auditory stimulus
was presented. In addition, if this was training, then appropriate feedback was provided. If
a correct response was made, the stimulus display cleared and the word "correct" appeared
in the center of the screen for 0.5 s accompanied by an auditory tone (i.e., a beep). If an
incorrect response was made, the display cleared and the word "wrong" appeared in the
center of the screen for 0.5 s accompanied by a different auditory tone (i.e., a buzzing
sound). Then, after an intertrial interval (ITI) of 1 s, the next trial began. During testing,
no feedback was provided and the ITI began immediately.
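The timing of the successive presentations on each trial can be laid out as an event schedule. The following sketch is purely illustrative (the original software was written in Visual Basic; the function name and event strings here are hypothetical), with timings taken from the description above:

```python
# Hypothetical sketch of one successive-MTS trial, with times in seconds
# measured from the participant's click on the sample button.

def trial_schedule():
    """Return the (time, event) schedule for a single trial."""
    events = [(0.0, "sample: black box for 0.5 s, sample syllable plays")]
    t = 1.5  # first comparison button appears 1.5 s after the sample
    for i in range(3):
        events.append(
            (t, f"comparison {i + 1}: button appears, black box 0.5 s, syllable plays")
        )
        t += 1.0  # each subsequent comparison follows 1 s after the previous one
    return events
```

Under this schedule the three comparisons sound at 1.5 s, 2.5 s, and 3.5 s after the sample, and only after the last of these does a click on a comparison button have any programmed effect.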
Training. During the training phase, the format for the experimental stimuli was as
follows.
Same Voice group: Same voice for all stimuli
Different Voice group: Different voices across stimuli
Different Voice Test group: Same voice for all stimuli
During this phase of the experiment, participants were trained on three A-B and three
B-C MTS tasks. For the three A-B tasks, participants were presented with A1, A2, or A3
as the sample stimulus and B1, B2, and B3 as auditory comparisons. A correct response
was B1 given A1, B2 given A2, and B3 given A3. For the three B-C tasks, participants
were presented with B1, B2, or B3 as the sample and C1, C2, and C3 as comparisons. A
correct response was C1 given B1, C2 given B2, and C3 given B3.
Tasks were presented in a repeating cycle of 36 trials, the order of which was the same
for every participant (see Appendix A). First, the three A-B tasks were presented six times
each in a quasi-randomly ordered block of 18 trials; the three B-C tasks were then
presented six times each in another quasi-randomly ordered block of 18 trials. Across both
of these blocks, each of the following elements was counterbalanced: (a) the order of
presentation of the three A-B, and then the three B-C MTS tasks; (b) the spatial positioning
of the buttons whose appearance accompanied particular comparison auditory stimuli
(left, middle, or right); and (c) the spatial positioning of the button that accompanied the
experimenter-designated correct match (left, middle, or right). In the case of the Different
Voice condition, one extra element of counterbalancing was included: the spatial
positioning of the buttons whose appearance accompanied particular voice types, so that
no one particular voice would be associated with any one particular position. In addition,
across one third of the trials, as would be expected by chance, the correct match was
presented in the same voice as the sample stimulus, while on the remaining trials the
correct comparison was presented in a different voice from the sample.
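The position counterbalancing described above can be sketched as follows. This is an illustrative construction only, not the authors' actual algorithm, and it models just the balancing of the correct comparison's screen position within one 18-trial block:

```python
# Hedged sketch of a quasi-random 18-trial block: each of three tasks is
# presented six times, with the correct comparison's position (left,
# middle, right) occurring twice per task.
import random

def build_block(tasks=("A1-B1", "A2-B2", "A3-B3"), seed=0):
    """Return a shuffled list of (task, correct_position) trials."""
    positions = ("left", "middle", "right")
    trials = [(task, pos) for task in tasks for pos in positions * 2]
    random.Random(seed).shuffle(trials)
    return trials
```

A fixed seed would reproduce the same quasi-random order for every participant, mirroring the predetermined orders used in the study.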
When participants had responded correctly on 36 consecutive MTS training trials
(which could be achieved at any point during a training block) the testing phase for that
session began.
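The mastery rule above, 36 consecutive correct responses reachable at any point within a block, can be sketched with a hypothetical helper (not the authors' software):

```python
# Sketch of the training mastery criterion: training ends once the
# participant produces 36 consecutive correct responses.

def trials_to_criterion(responses, criterion=36):
    """Return the trial number on which `criterion` consecutive correct
    responses is first reached, or None if it never is."""
    streak = 0
    for n, correct in enumerate(responses, start=1):
        streak = streak + 1 if correct else 0
        if streak == criterion:
            return n
    return None
```

An error-free participant would therefore finish in exactly 36 trials, which is the minimum training score that appears in Table 1.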
Testing. Participants first read the following instructions:
During this phase the computer will no longer give you feedback.
The testing phase then began. The format for the experimental stimuli during the
testing phase was as follows.
Same Voice group: Same voice for all stimuli
Different Voice group: Different voices across stimuli
Different Voice Test group: Different voices across stimuli
In this phase of the experiment, participants were tested on the three C-A MTS tasks.
In these tasks, participants were presented with C1, C2, or C3 as the sample stimulus and
had to choose from among the three comparison stimuli A1, A2, and A3. Responding in
accordance with an equivalence relation was defined as choosing A1 given C1, A2 given
C2, and A3 given C3. A response rate of 32/36 (89% or approximately 9/10) was used as
the criterion for passing equivalence.
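The arithmetic behind this pass criterion is simple enough to state as code (an illustrative helper, not part of the original protocol):

```python
# Sketch of the equivalence pass criterion: at least 32 of the 36 test
# trials must accord with the experimenter-defined equivalence relations.

CRITERION, TOTAL = 32, 36

def passes_equivalence(correct_responses):
    """True if the count of equivalence-consistent choices meets 32/36."""
    return correct_responses >= CRITERION

# 32/36 = 0.888..., which the authors round to 89% (roughly 9 in 10).
```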
Each of the three C-A MTS tasks was presented 12 times in one quasi-randomly
ordered block of 36 trials (see Appendix B). The predetermined quasi-random order of
presentation was the same for every participant. Each of the following elements was
counterbalanced: (a) the order of presentation of the three MTS tasks; (b) the spatial
positioning of the buttons whose appearance accompanied particular comparison auditory
stimuli (left, middle, or right); and (c) the spatial positioning of the button that accompanied
the experimenter-designated correct match (left, middle, or right). In the case of the
Different Voice and Different Voice Test groups, which presented different voices during
testing, one extra element of counterbalancing was included: the spatial positioning of
the buttons whose appearance accompanied particular voice types, so that no one particular
voice would be associated with any one particular position. In addition, in both conditions,
the correct comparison stimulus choice, in terms of the equivalence relation, was never the
same voice as the sample stimulus.
At the end of each experimental session, the following message appeared on screen:
Thank you, that's the end of that part of the experiment. Please call the
experimenter.
If this was the participant's first experimental session, he or she was exposed to a
second session, either immediately or within 48 hr of the first exposure. During the second
session, the participant was exposed to exactly the same training and testing procedures.
After the second session, the participant was thanked and debriefed.
Results
Training
Table 1 provides an overview of individual and average results by both condition and
exposure for both training (number of trials required to meet criterion) and testing
(numbers of equivalence and matching responses, respectively). The mean number of
training trials required during the first exposure was 108.73 (SD=46.95) for the Same
Voice group, 155.45 (SD=67.50) for the Different Voice group, and 138.73 (SD=69.90)
for the Different Voice Test group. The corresponding figures for the second exposure were
51.10 (SD=23.31), 53.18 (SD=14.93), and 52.10 (SD=27.97), respectively (see Figure 2).
A 3 × 2 repeated measures analysis of variance (ANOVA) revealed a highly significant
main effect for exposure, F(1, 30) = 47.74, p < .0001, partial η² = 0.61. However, there
were no significant differences between the conditions, and there was no interaction effect.
A Pearson's product-moment correlation test was conducted to check for a possible
correlation between number of training trials required in the first exposure and the number
of equivalence responses in either the first or the second exposure to testing. There were no
significant correlations in either case (Exposure 1: r=.079, p=.663; Exposure 2: r=.049,
p=.786). Overall, then, these results indicate the absence of significant differences
between the conditions with respect to number of training trials required to reach criterion,
and that the quantity of training trials received did not systematically affect equivalence
performance.
Testing
Equivalence responding. The test data were first analyzed in terms of the number of
responses emitted by participants in each condition that were in accordance with the
designated equivalence relations (defined hereafter as correct responses; see Table 1). In
the Same Voice condition, four individuals passed equivalence by showing 32/36 (89%) or
more correct responses in Exposure 1, and seven did so in Exposure 2. In the Different
Voice condition, the corresponding figures were two and three, whereas in the Different
Voice Test condition, no participant showed this level of equivalence responding. In fact,
in the Different Voice Test condition, the highest achieved in either exposure was 50%,
and even in the second exposure, two people recorded a score of 0 and a third recorded a
score of just 2.

Table 1
Numbers of Training Trials and Numbers and Percentages of Correct (Equivalent) Testing Trials for Individual Participants and Means and
Standard Deviations for Each of the Three Conditions Across Both Exposures

Training (trials to criterion; columns P1-P11, then M (SD))
Exposure 1
SV:  65, 111, 80, 224, 101, 91, 148, 141, 72, 92, 71;  M (SD) = 108.7 (46.9)
DV:  75, 254, 170, 117, 255, 150, 240, 116, 95, 156, 82;  M (SD) = 155.5 (67.5)
DVT: 76, 206, 253, 109, 204, 224, 60, 85, 138, 95, 76;  M (SD) = 138.7 (69.9)
Exposure 2
SV:  97, 63, 36, 36, 36, 36, 90, 36, 60, 36, 36;  M (SD) = 51.1 (23.3)
DV:  57, 67, 52, 36, 36, 56, 68, 73, 68, 36, 36;  M (SD) = 53.2 (14.9)
DVT: 114, 36, 36, 36, 89, 36, 36, 40, 36, 78, 36;  M (SD) = 52.1 (28.0)

Testing (Equivalence)
Exposure 1:  SV M (SD) = 24.1 (10.2);  DV 17.5 (10.6);  DVT 8.5 (4.0)
Exposure 2:  SV M (SD) = 28.0 (11.5);  DV 18.7 (11.9);  DVT 8.0 (6.0)

Testing (Matching)
Exposure 1:  DV M (SD) = 9.5 (6.2);  DVT 16.6 (7.4)
Exposure 2:  DV M (SD) = 7.8 (6.2);  DVT 16.6 (7.5)

[Individual participants' testing scores are garbled in the source extraction and are omitted here; the condition means and standard deviations shown follow those reported in the text.]

Note. SV = Same Voice condition; DV = Different Voice condition; DVT = Different Voice Test condition.

Stewart and Lavelle

[Figure 2 here: bar graph, y-axis "Mean Number Trials Required" (0-250), x-axis Exposure (Exp 1, Exp 2).]
Figure 2. Means and standard deviations for number of training trials required for Exposures 1 and
2 across the three conditions, Different Voice (dv), Different Voice Test (dvt), and Same Voice (sv).
The mean numbers of correct (equivalence) responses in the first exposure were 24.10
(SD=10.2) for the Same Voice group, 17.5 (SD=10.6) for the Different Voice group, and
8.5 (SD=4.0) for the Different Voice Test group. The corresponding figures for the second
exposure were 28.0 (SD=11.5) for the Same Voice group, 18.7 (SD=11.9) for the
Different Voice group, and 8.0 (SD=5.98) for the Different Voice Test group (see Figure
3). A 3 × 2 mixed repeated measures ANOVA, with the three voice conditions as Factor 1
and the two test exposures as Factor 2, showed a highly significant main effect for
condition, F(2, 30) = 10.49, p < .0001, partial η² = 0.41, but not for exposure. No
significant interaction effect was identified. Post-hoc analyses (Fisher PLSD) revealed
significant differences in mean equivalence responses between the Different Voice Test
condition and both the Same Voice (p < .0001) and Different Voice (p = .017) conditions.
The difference in mean equivalence responses for the Same Voice and Different Voice
conditions was nonsignificant, but only marginally so (p=.051).
Matching. Another area for analysis was the extent of non-arbitrary relational
(matching) type responding by participants in the Different Voice and Different Voice Test
conditions (See Table 1 and Figure 4). For the first exposure, mean levels of matching-type
responses were 9.5 (SD=6.2) for the Different Voice condition and 16.6 (SD=7.4) for the
Different Voice Test condition. The corresponding means for the second exposure were 7.8
(SD=6.2) for the Different Voice condition and 16.6 (SD=7.5) for the Different Voice Test
condition. Levels of matching responses for the Different Voice and Different Voice Test
conditions were analyzed using a 2 × 2 mixed repeated measures ANOVA, with voice
groups as Factor A and exposures as Factor B. This analysis showed a significant effect for
condition, F(1, 20) = 7.792, p = .0113, partial η² = 0.25, with no significant effect for
exposures (Factor B) and no interaction effect.
Discussion
One key aim of the current study was to extend previous work by Dube et al. (1993) on
stimulus equivalence with auditory stimuli. As discussed earlier, research into stimulus
equivalence has typically used visual stimuli exclusively, with relatively less work
conducted that has included auditory stimuli as well, and only one study thus far (i.e.,
Dube et al., 1993) that has used solely auditory stimuli. However, derived relations such as
equivalence are seen by many behavior analytic researchers as providing a model of
language, and everyday language often involves either exclusively or nearly exclusively
auditory stimuli. Hence, if the link between derived relations and language is to be
comprehensively explored, then it would appear important that more attention be paid to
the generation of derived relations based on auditory stimuli alone. Dube et al. (1993)
provided an initial demonstration of stimulus equivalence using auditory stimuli. The
current study replicates the Dube et al. effect by providing a similar demonstration. The

Figure 3. Means and standard deviations for number of responses in accordance with equivalence
for Exposures 1 and 2 across the Different Voice (dv), Different Voice Test (dvt), and Same Voice
(sv) conditions. The dashed line indicates the maximum score possible on the equivalence test.

Figure 4. Means and standard deviations for number of responses in accordance with the non-
arbitrary relation of voice per individual exposure for the Different Voice (dv) and Different Voice
Test (dvt) conditions.
Test conditions were, respectively, 108.7, 155.5, and 138.7, while in the Stewart et al. study,
the numbers of trials required for the No Color, All Color, and Color Test conditions were,
respectively, 347.4, 499.1, and 443.8. This suggests that the use of auditory stimuli resulted
in more efficient, albeit slower (i.e., more time was required per trial), learning of the
conditional discriminations than the use of visual stimuli. Both the greater efficiency in
terms of number of trials required and the slower pace of the training might be functions of
the successive nature of the conditional discriminative training involved in the case of
auditory stimuli. Future research might explore this issue further by, for example,
examining efficiency and time needed in the case of training analogous successive visual
conditional discriminations.
A second difference between findings for this study and that of Stewart et al. (2002)
has to do with levels of equivalence. Table 2 shows the proportions (and percentages) of
individuals passing equivalence in each equivalence test exposure in each of the three
conditions in both Stewart et al. and the current study. In Stewart et al., the No Color group
had just 3/8 (37.5%) individuals passing equivalence across both exposures, whereas in the
current study, the analogous Same Voice condition had a total of 7/11 (64%) individuals
passing equivalence across both exposures. This greater facilitation of the derivation of
equivalence relations might, again, be a function of the use of auditory stimuli or, perhaps
more specifically, of successive discrimination procedures. However, this facilitation did
not seem to apply in the case of the other two conditions, for which levels of equivalence
are similar to their analogous conditions in Stewart et al. For the condition in which non-arbitrary interference was expected to be maximal (i.e., the Different Voice Test condition),
and in which there were no instances of passing equivalence, this is perhaps not that
surprising. However, for the Different Voice condition, which yielded less than half the
equivalence passes of the Same Voice condition (i.e., 3/11 versus 7/11), when its analogue
in Stewart et al. (i.e., All Color) did as well as the analogue of the Same Voice condition
(i.e., 3/8 versus 3/8), there seems to be something additional to be explained. In this case, it
may be that the presence of different voices was proportionately more disruptive in the
auditory context than the presence of different colors was in the visual context.
Table 2
Proportions (and Percentages) of Equivalence Passes in Equivalence
Test Exposures 1 and 2 for the Three Conditions in Stewart et al. (2002)
and Their Counterparts in the Current Study

Exposure | Stewart et al. (2002): No Color, All Color, Color Test | Current study: Same Voice, Different Voice, Different Voice Test
1        | 2/8 (25%), 2/8 (25%), 0/8 (0%)                         | 4/11 (36%), 2/11 (18%), 0/11 (0%)
2        | 3/8 (38%), 3/8 (38%), 0/8 (0%)                         | 7/11 (64%), 3/11 (27%), 0/11 (0%)
Note. Criterion for equivalence passes was 32 out of 36, or 89%, correct.
The reason that the presence of different voices might be more disruptive in the
auditory than in the visual case is unknown. Again, some aspect of the use of auditory
stimuli themselves or the use of successive conditional discrimination procedures, or both,
may be responsible. At the present time, given the relative lack of empirical work in the
domain of auditory conditional discriminations, especially as a basis for equivalence,
attempted explanations of these effects can only be speculative. More work will be needed
to investigate this area. Such research might systematically compare auditory conditional
discrimination procedures with both simultaneous and successive visual conditional
discrimination procedures as methods of producing equivalence and/or other derived
relations. Furthermore, one dimension of this research might involve investigation of
possible differences between the auditory and visual domains with respect to the influence
of non-arbitrary dimensions. Stimulus control topography (McIlvane & Dube, 2003; Ray
& Sidman, 1970) analyses of the influence of topographical aspects of visual and auditory
conditional discrimination formats might be particularly useful in this regard.
In regard to the outcomes of the present study, if the mere presence of different
voices is more disruptive in the auditory than in the visual domain, then the disruption
of equivalence in the current study might be relatively more influenced by the mere
presence of different formats (e.g., voice types) than was the case in Stewart et al. (2002).
In other words, perhaps the disruption of equivalence in the current study is
proportionately more based on mere presence of different formats throughout training
and testing and proportionately less based on non-arbitrary relations as an alternative
to equivalence responding. The evidence concerning patterns of non-arbitrary
matching in testing bears this out. Though there certainly is evidence in the current
study of interference with equivalence by non-arbitrary matching, since there was a
significantly higher average level of matching in the Different Voice Test condition
than in the Different Voice condition, there seemed to be relatively less such
interference in this study than in Stewart et al.'s. This can be seen by comparing across
studies for the average level of matching occurring across both exposures in the two
conditions in which matching would have been predicted to be maximal, namely, the
Different Voice Test and Color Test conditions. Levels of matching for these two
conditions were 16.6 out of 36 (46%) and 25.2 out of 36 (70%), respectively, suggesting
that much more matching was occurring in the Color Test than in the Different Voice
Test condition and, thus, that the lower level of equivalence in the current study was
proportionately more a result of disruption caused by the mere presence of different
formats than non-arbitrary relational interference. As previously suggested, further
research analyzing stimulus control aspects of auditory conditional discrimination
procedures, especially as the basis for equivalence or other derived relations and in
particular as influenced by non-arbitrary relations, is warranted. Such research may
allow a better understanding of these effects.
The current findings suggest a number of other directions for future work also. For
example, the current study adopted only one of many different training protocols that have
been reported in the literature on derived stimulus relations. It has been shown that testing
for symmetry before testing for transitivity or equivalence may facilitate the emergence of
derived relations (see, e.g., Adams, Fields, & Verhave, 1993). Accordingly, Stewart et al.
(2002) suggested that testing for symmetry before testing for equivalence relations in a
context such as they employed in their research might affect testing outcomes substantially.
Further exploration of this or related variables (e.g., prior transitivity testing) might also be
conducted with respect to the current protocol. In addition, as interference has now been
demonstrated in both visual and auditory domains, another possible extension of the
current research might be to examine whether and to what extent interference might occur
cross-modally. For example, to what extent might potentially conflicting non-arbitrary
relations in the visual realm affect equivalence with auditory stimuli, and to what extent
might potentially conflicting non-arbitrary relations in the auditory realm affect
equivalence with visual stimuli?
In conclusion, the current research has extended previous work by Dube et al. (1993) by demonstrating derived equivalence with auditory stimuli. This study employed a protocol more similar to conventional MTS procedures than Dube et al. and used it to facilitate derivation of three 3-member equivalence relations. It also investigated non-arbitrary interference with auditory equivalence using a design similar to that used previously by Stewart et al. (2002) in the visual domain and provided evidence that non-arbitrary relations can interfere with equivalence relations in the auditory domain, just as Stewart et al. had shown in the visual domain. These findings cohere with the RFT account of the
historical importance of non-arbitrary relations and extend previous work on non-arbitrary
interference into the auditory domain. Given the link between derived relations and
language and the fact that auditory stimuli play such an important role in the latter,
investigation of derived relations between auditory stimuli is a potentially important area
of research to pursue, and it is hoped that the methods and findings described here will
guide further work in this domain.
References
Adams, B. J., Fields, L., & Verhave, T. (1993).
Almeida-Verdu, A. C., Huziwara, E. M., De Souza, D. G., de Rose, J. C., Bevilacqua, M. C., Lopes, J., Jr., & McIlvane, W. J. (2008). Relational learning in children with
Appendix A
This table shows the 36-trial block of trials that was cycled during training for each of
the three conditions. Alphanumerics (e.g., A1) represent the particular nonsense syllables
that were presented in auditory mode. For participants in two of the conditions (Same
Voice and Different Voice Test), all nonsense syllables were pronounced in the same voice
(i.e., a child's voice). For the third condition (Different Voice), the nonsense syllables were
pronounced in three different voices. The letters M, W, and C indicate the type of voice in
which particular nonsense syllables were pronounced on particular trials and stand for
man, woman, and child, respectively.
[Table: the 36-trial training block. Each trial lists a sample and its comparison nonsense syllables (e.g., A2, B1, C3), each followed by its voice code (M, W, or C).]
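The trial structure described in this appendix can be sketched as a simple data structure. This is a hypothetical illustration only, not the software used in the study; the stimulus labels and the rule that the correct comparison shares the sample's numeric member are assumptions drawn from the appendix description.

```python
from dataclasses import dataclass

# Voice codes as used in the appendix tables.
VOICES = {"M": "man", "W": "woman", "C": "child"}


@dataclass
class Trial:
    """One successive match-to-sample training trial.

    Stimuli are labeled like 'B2': the letter gives the stimulus set
    (A/B/C) and the digit gives the class member. Each stimulus also
    carries a voice code (M, W, or C).
    """
    sample: tuple            # e.g., ("A2", "C")
    comparisons: list        # e.g., [("B2", "C"), ("B1", "M"), ("B3", "W")]

    def correct_comparison(self):
        # Under the programmed contingencies, the correct comparison
        # shares the sample's numeric member (sample A2 -> comparison B2),
        # regardless of the voice in which either is pronounced.
        member = self.sample[0][1]
        return next(c for c in self.comparisons if c[0][1] == member)


# An illustrative A-B training trial (values chosen for the example).
trial = Trial(sample=("A2", "C"),
              comparisons=[("B2", "C"), ("B1", "M"), ("B3", "W")])
```

Note that the voice tag plays no role in `correct_comparison`, mirroring the Different Voice condition, in which voice was unrelated to the programmed contingencies.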
Appendix B
This table shows the 36-trial block of testing trials presented to participants in each of
the three conditions. Alphanumerics (e.g., A1) represent the particular nonsense syllables
that were presented in auditory mode. For participants in one of the conditions (Same
Voice), all nonsense syllables were pronounced in the same voice (i.e., a child's voice), as
during training. For the other two conditions (Different Voice and Different Voice Test),
the nonsense syllables were pronounced in three different voices. The letters M, W, and C
indicate the type of voice in which particular nonsense syllables were pronounced on
particular trials and stand for man, woman, and child, respectively.
[Table: the 36-trial testing block. Each trial lists a sample and its comparison nonsense syllables (e.g., A1, C2), each followed by its voice code (M, W, or C).]