
Journal of School Psychology 42 (2004) 343–357

Training school personnel to implement a universal


school-based prevention of depression program
under real-world conditions
Paul H. Harnett a,*, Mark R. Dadds b
a School of Psychology, University of Queensland, Brisbane, Queensland, Australia
b School of Psychology, University of New South Wales, Sydney, New South Wales, Australia

Received 6 May 2003; received in revised form 3 June 2004; accepted 9 June 2004

Abstract
The present study evaluated the impact of a universal prevention of depression program [the
Resourceful Adolescent Program (RAP)] when implemented under real-world conditions in a
school setting. Prior research has found the RAP program to be beneficial for high-school
students when the program was implemented by university staff selected, trained, and supervised
by a research team. The present study evaluated the RAP program when implemented by
existing school personnel. Separately, we measured the impact of a training program for
facilitators, the quality of subsequent program implementation, and the students' response to the
RAP Program. Results showed that, in response to the training program, facilitators believed
they had acquired the knowledge and confidence to implement the program and that the quality
of program implementation was acceptable. The study did not demonstrate a beneficial impact of
the RAP program for the students. The results raise important questions regarding the extent of
training and ongoing supervision facilitators require if the beneficial outcomes for students are to
be maintained when interventions are implemented under real-world conditions in school
settings.
© 2004 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
Keywords: Prevention; Depression; Adolescent development; High schools

* Corresponding author. School of Psychology, Faculty of Social and Behavioural Sciences, University of
Queensland, St Lucia, Brisbane, Queensland 4072, Australia. Tel.: +61 7 3365 6723; fax: +61 7 3365 4466.
E-mail address: p.harnett@psy.uq.edu.au (P.H. Harnett).
0022-4405/$ - see front matter © 2004 Society for the Study of School Psychology. Published by Elsevier Ltd.
All rights reserved.
doi:10.1016/j.jsp.2004.06.004


Introduction
Recent studies have suggested that, under relatively ideal conditions, adolescent
depression is treatable (Harrington, Whittaker, & Shoebridge, 1998) and preventable
(Clarke et al., 1995; Gillham, Reivich, Jaycox, & Seligman, 1995; Jaycox, Reivich,
Gillham, & Seligman, 1994; Shochet et al., 2001). A challenge to researchers is to
demonstrate that the beneficial outcomes of interventions can be replicated under real-world conditions (Domitrovich & Greenberg, 2000). Of particular concern is that the
quality of program implementation is maintained. Studies that fail to demonstrate that the
quality of program implementation is maintained when implemented in a naturalistic
setting run the risk of incorrectly concluding that a program was ineffective when, in fact,
the negative findings were the result of inadequate service delivery (Domitrovich &
Greenberg, 2000). If this is the case, potentially effective interventions may be
discontinued (Dane & Schneider, 1998).
Shochet et al. (2001) reported the results of a universal prevention of depression
program targeting high-school students. Promising results were found when the program
was implemented in a school setting by staff selected, trained, and supervised by the
university research team. The authors of this study noted evidence from previous studies
that indicated prevention programs targeting adolescents at risk of depression could be
successful in reducing levels of depression, but pointed out that these studies suffered from
low recruitment rates and high attrition (Clarke et al., 1995; Gillham et al., 1995; Hains &
Ellman, 1994; Jaycox et al., 1994). The Resourceful Adolescent Program (RAP) was
implemented on a universal basis as part of the school curriculum during class time to
address problems with recruitment and retention. The content of the RAP program
(discussed in more detail below) includes cognitive-behavioural strategies and exercises
influenced by Interpersonal Psychotherapy (Mufson, Moreau, Weissman, & Klerman,
1993). Results of the study showed a statistically significant decrease in levels of
depression for the intervention, but not the control group. A clinically important
prevention effect was argued from the observation that the proportion of nonsymptomatic
students who displayed depressive symptoms at follow-up was significantly lower in the
intervention group compared to the comparison group. It was not possible for Shochet et
al. to employ a randomised controlled design as the study was conducted in one school and
included all students of the Year 9 cohort. In order to compare the impact of the
intervention with a group of students who did not receive the program, the Year 9 cohort in
the previous year was used as the comparison group. In addition to determining the impact
of the intervention on levels of depression, the Shochet et al. study aimed to establish
whether the constraints of the school environment, such as timetabling issues, would
reduce program fidelity. A measure of the quality of program implementation involved
facilitators completing a checklist ("yes"/"no" response) to indicate that they had covered
specified content areas of the program. Facilitators reported that they presented the
majority of the key concepts, with the mean percentage of key concepts covered being
85% to 94% across sessions. Ratings from independent observers confirmed the reliability
of the facilitators' self-ratings. The study also measured the acceptability of the program
for high-school students, an important issue given that the program addressed personal
issues and was implemented during school time at the expense of other aspects of the
curriculum. The overall rating of the program's usefulness on a five-point scale found the
students considered the program to be useful (M=3.89; S.D.=.91). External validity was
addressed by delivering it in a classroom setting (as it would be if accepted by the school
as part of the routine curriculum). However, a significant deviation from real-world
conditions was that the facilitators were psychologists, including experienced clinicians
and postgraduate psychology students on a clinical placement. Under real-world
conditions, it is likely that teachers would be recruited as facilitators. Furthermore, the
facilitators in the study received training and supervision sessions totalling 25 h, including
two 4-h training workshops and 11 supervision and session-planning meetings of
approximately 1.5 h each. These training and supervision sessions were held after every
RAP session and were led by experienced university-based clinical psychologists who
were members of the research team that developed the RAP program. Whether this level of
training and supervision of facilitators is essential for ensuring positive outcomes for
students has yet to be established.
The present study aimed to determine whether the promising results obtained by
Shochet et al. (2001) would be replicated when the RAP program was implemented by
school personnel, mainly teachers. Dane and Schneider (1998) pointed out that
professionals unfamiliar with psychosocial interventions may feel uncomfortable and
lack confidence in their role as program facilitators. If the intervention program adds to
their already busy schedule, teachers may not devote the time and energy necessary to
maintain the integrity of the program, particularly if they do not share the researchers'
belief in the importance of the program (Dane & Schneider, 1998; Meyer, Miller, &
Herman, 1993). In the light of these problems, Dane and Schneider recommended that
the promotion and verification of program integrity be addressed in effectiveness
studies. Promotion of integrity refers to the effort researchers put into ensuring an
intervention is implemented as intended (Dane & Schneider, 1998; Domitrovich &
Greenberg, 2000) and includes the provision of a manual, formal training, and ongoing
consultation and supervision (Domitrovich & Greenberg, 2000). Determining the level
of promotion required to maintain integrity is important as redundancy in training and
supervision is costly and may reduce the uptake of programs if the investment of time
for training and supervision is perceived to be too high. This requires balancing against
the possibility that inadequately trained and supervised professionals may not
implement the program competently, decreasing the effectiveness of the program.
While little research has been conducted on the level of training and supervision
needed to ensure facilitators maintain program integrity in real-world settings, it is
reasonable to assume that this would vary according to the facilitators' prior
professional training and experience (Henggeler, Schoenwald, Liao, Letourneau, &
Edwards, 2002).
Verification of integrity refers to procedures and measures that demonstrate program
integrity was actually achieved. Dane and Schneider (1998) define program integrity as a
multidimensional construct covering: (a) the degree to which the content of the program
was delivered as intended (adherence), (b) the frequency and duration of the program
administered (dosage), (c) qualitative aspects of the program delivery, (d) participant
responsiveness, and (e) program differentiation, i.e., determining that the comparison
group was not exposed to any form of program that resembles the intervention, for
example, a life skills program that was part of the existing curriculum of the school. The
aim of the present study was to assess procedures developed to address both the promotion
and verification of integrity when the RAP program was implemented by school
personnel.
In the present study, procedures to promote integrity included the development of a
RAP training program specifically targeting school personnel. Specifically, two factors
identified by Dane and Schneider (1998) to be related to the effectiveness of a program,
the facilitators' knowledge and confidence in implementing the program, were investigated.
Furthermore, the impact of training on the quality of program implementation was
assessed. Finally, the impact of the RAP program on participant outcomes was
investigated. In addition to the hypothesis that knowledge and confidence in implementing
the program would be related to higher quality implementation, it was predicted that a
higher quality of program implementation would lead to more positive outcomes for
students.

Method
Participants
The study was conducted in two independent girls' schools in Brisbane, Australia.
Participants were eight facilitators from School A and a total of 212 female students from
the two schools. The facilitators included one school psychologist and seven teachers.
Teachers participated in the program if they were allocated to teach the part of the
curriculum in which the RAP program was included. Two facilitators were male and six
were female. Their years of professional experience ranged from 1 to 24 years, with a
mean of 14 years. The students included 96 students from School A (who participated in
the RAP Program) and 116 students from School B, who acted as a usual-care comparison
group. Students ranged in age from 12 to 16 years, with a mean age at the time of first
testing of 13.58 years (S.D.=.61). As the schools were private and independent, the
majority of the students were from middle to high SES backgrounds. The majority of the
students were of Anglo-Saxon origin, with a small proportion of Asian and Polynesian
descent.
Measures
The RAP Training Program Questionnaire (RAP-TPQ), a 37-item self-report
questionnaire, was used to evaluate the training program (Harnett, 2000). The RAP-TPQ
includes: (a) a Knowledge Scale consisting of 34 items (1 point for a correct answer)
designed to test knowledge on a range of topics relevant to the program (the nature of
depression in adolescents, principles of early intervention, and the content of the RAP
Program); and (b) a Confidence Scale consisting of three items (rated 0–4) that provides a
measure of the facilitators' confidence to implement the program, a factor identified as an
important contributor to successful outcomes in effectiveness studies (Dane & Schneider,
1998). The internal consistency of the scales is acceptable, with alpha levels of the two
RAP-TPQ scales both above .70. The test–retest reliability of the scales was found to be
high, with correlations between two administrations .96 or above (Harnett, 2000). The
eight facilitators completed the RAP-TPQ before and after participating in the RAP
Training Program.
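The internal-consistency statistic cited above (coefficient alpha) can be illustrated with a short computation; the item responses below are invented for illustration and are not RAP-TPQ data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses to a short scale (rows = respondents, cols = items)
scores = np.array([
    [3, 4, 3],
    [2, 2, 1],
    [4, 4, 4],
    [1, 2, 2],
    [3, 3, 4],
])
alpha = cronbach_alpha(scores)
```

Alpha rises when items covary strongly relative to their individual variances, which is why scales with values above .70 are conventionally treated as acceptably consistent.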
Shochet et al.'s (2001) procedure for addressing integrity verification was broadened to
include other aspects of this multidimensional construct (Dane & Schneider, 1998).
Facilitators' adherence to the program was evaluated using the checklist developed by
Shochet et al. that provides a measure of the number of key concepts covered in each
session, and a measure developed for the present study that involved a four-point rating
(0–3) of the extent to which facilitators believed they deviated from the manual during a
specific activity. Facilitators requested that independent observers not be used in the
present study. Thus, the reliability of the adherence measures could not be determined.
However, previous studies in which independent observers were employed have found
agreement between facilitators and independent observers for these measures (r=.65 for
deviation from the manual and r=.73 for the percentage of core concepts presented).
Student participation was measured by keeping a record of attendance. Student feedback
on individual sessions was obtained by asking all students to complete a 10-point Likert
scale at the end of each session that measured how enjoyable and useful the participants
found that session. Following completion of the entire program, students completed a final
program feedback questionnaire (Shochet et al., 2001). This included 13 items rated on a
five-point scale measuring the students' reaction to the program (e.g., "How much do you
think the program has helped your confidence about yourself?", "Would you recommend
the program to your friends?") and a single item providing an overall rating of the program
("Overall, how would you rate this program?").
Assessment of participant outcomes included measures of known risk and protective
factors for depression. Risk factors included measures of depression [Reynolds Adolescent
Depression Scale (RADS); Reynolds, 1987], anxiety [Revised Childhood Manifest
Anxiety Scale (RCMAS); Reynolds & Richmond, 1978], poor coping strategies
[Adolescent Coping Scale (ACS); Frydenberg & Lewis, 1993], and family conflict
[family conflict scale of the Family Environment Scale (FES); Moos & Moos, 1986].
Protective factors included social competence [social competence scale of the Youth Self-Report Form of the Child Behavior Checklist (SCS); Achenbach, 1991a, 1991b], self-esteem [Harter Self-Perception Profile for Adolescents (HSPPA); Harter, 1988], positive
coping skills [Adolescent Coping Scale (ACS); Frydenberg & Lewis, 1993], and family
cohesion [family cohesion scale of the Family Environment Scale (FES); Moos & Moos,
1986]. The reliability and validity of these measures have been extensively reported.
The use of teacher reports of participant outcomes was ruled out in the current study by
the school authorities, who were concerned about the time teachers would need to complete
the questionnaires. Parent reports were included in the original design, but after two
complaints from parents about the personal content of the questionnaires, School A withdrew
permission to involve parents in further assessments.
Procedure
The eight staff from School A participated in the 1-day RAP Training Program. The
training program was delivered in the school by the first author. The training program,
originally developed to be conducted over 2 days, was shortened to 1 day as it was not
acceptable to school management that school personnel be released for 2 days of
training. A video outlining the key features of the program and a procedures and
evaluation manual were produced to supplement the 1-day training program (Harnett,
Hoge, & Shochet, 1998). After participating in the RAP Training Program, each
facilitator implemented the RAP Program with one class of students in the school
setting.
All students in Year 9 of School A (n=98) were invited to participate in the RAP
Program. The program was optional, but was implemented during school hours as part of
the school curriculum. Only one student declined to participate. One other student who
was receiving treatment for a mental health problem participated in the program but data
for this student were not included in the analyses. The remaining 96 students of School A
received the RAP program during class time. There were eight classes of students with
class sizes ranging between 10 and 14 students. The eight facilitators each implemented
the RAP program with one class of students.
Students and their parents were provided with information about the RAP program
and both were asked to sign a consent form. Students and parents in School B (n=116)
acted as a comparison group. All students from both schools were assessed on four
occasions: preintervention, postintervention (within 3 weeks of the end of the program),
1-year follow-up (FU1), and 3-year follow-up (FU3). The majority of the students were
reassessed at postintervention (98.9% and 93.1% of the intervention and comparison
groups, respectively), at FU1 (89.1% and 83.6%), and at FU3 (82.6% and 81.0%). The
small decline in the numbers of participants completing postintervention and follow-up
assessments were mainly due to absences from school and a small number of children
transferring to other schools. One-year follow-up assessment occurred around midyear
1999, approximately 10 months after the postintervention, while FU3 occurred in
midyear 2001.
The RAP training program for implementers was conducted in the school over 6 h. The
training program included details on the theoretical orientation of the RAP program, the
specific content of the program including an overview of the major issues to be addressed
in each session, process issues, and the evaluation procedures.
The RAP program is a manualised group intervention consisting of eleven 40- to 50-min sessions that includes intervention techniques influenced by both cognitive-behaviour
therapy (Clarke et al., 1995) and interpersonal psychotherapy (Mufson et al., 1993). The
program was implemented according to an intervention manual. The specific sessions
included in the manual were as follows: Session 1, establishing rapport; Session 2,
affirmation of existing strengths; Sessions 3 and 4, promoting self-management and
emotional regulation skills in the face of stress; Sessions 5 and 6, cognitive restructuring;
Session 7, problem solving; Session 8, building and accessing psychological support
networks; Sessions 9 and 10, interpersonal components designed to promote family
harmony and avoid escalation of conflict; Session 11, summary and termination (Shochet
et al., 2001). The interpersonal components address role transitions during adolescence,
and encourage the use of skills to avoid the escalation of conflict with friends and family,
the ability to take another person's perspective, and the importance of broadening social
support networks (Shochet et al., 2001).


Results
The RAP training program was found to significantly increase facilitator knowledge.
Paired sample t-tests found a significant increase (t=3.33, p<.05) on the Knowledge Scale
of the RAP-TPQ from the pretraining level of 20.63 (S.D.=3.25) to the posttraining level
of 25.75 (S.D.=1.17). No significant increase was found on the Confidence Scale in
response to training. On this scale, which has a maximum score of 12, the pretraining score
was found to be high (M=9.38, S.D.=.52) and remained close to this level posttraining
(M=10.13, S.D.=2.17).
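A paired-sample t-test of the kind reported above can be sketched as follows; the pre/post scores are hypothetical values for eight facilitators, not the study's raw data (the paper reports only the group means and standard deviations).

```python
import numpy as np
from scipy import stats

# Hypothetical pre- and posttraining Knowledge Scale scores for 8 facilitators
# (illustrative values; the paper reports pretraining M=20.63, posttraining M=25.75)
pre = np.array([18, 22, 20, 24, 17, 21, 23, 20])
post = np.array([25, 26, 24, 27, 25, 26, 27, 26])

# Paired-sample t-test: each facilitator serves as their own control
t_stat, p_value = stats.ttest_rel(pre, post)
```

Because the same respondents are measured twice, the test operates on the within-person differences, which is what gives it power with only eight facilitators.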
Facilitators reported minimal deviation from the manual (M=.64, S.D.=.37) and
reported that they delivered the majority of the key concepts of the program (M=80.8%,
S.D.=13.1%). However, a decline in the percentage of key concepts presented in
sessions was reported over the course of the program. Specifically, in early sessions,
90.5% of key concepts were covered, but this fell to 77.3% in the middle sessions, and
66.4% in the late sessions. Students attended the majority of sessions (89%),
representing an average of 9.8 out of the 11 sessions. This compared favourably with
Shochet et al. (2001) who found all students attended at least 9 of the 11 sessions.
Students rated the program to be moderately enjoyable (M=6.60, S.D.=1.52) and useful
(M=5.90, S.D.=1.65). Final feedback from the students indicated they were moderately
satisfied with the program. The mean rating of the 13 items on the final feedback
questionnaire was 2.14 (S.D.=.67) while the mean rating of the item providing an overall
rating was 2.82 (S.D.=1.07). These results are slightly lower, but comparable to those
reported by Shochet et al.
There was no systematic relationship between the training measures (knowledge
and confidence) and program integrity measures. One exception was that posttraining
levels of knowledge and confidence were both significantly correlated with the
number of key concepts delivered in the early sessions of the program (r=.83, p=.02;
and r=.88, p=.03, respectively). Thus, higher levels of knowledge and confidence
were related to more key concepts being delivered in the initial sessions of the
program.
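The small-sample correlations reported above can be illustrated with a Pearson correlation over eight paired observations; the values below are invented for illustration, not the study's facilitator data.

```python
import numpy as np
from scipy import stats

# Hypothetical posttraining knowledge scores and percentages of key concepts
# delivered in early sessions for eight facilitators (illustrative values only)
knowledge = np.array([24, 27, 25, 26, 23, 26, 27, 28])
early_concepts = np.array([82, 95, 88, 90, 78, 92, 94, 96])

# Pearson correlation and its two-tailed p-value
r, p = stats.pearsonr(knowledge, early_concepts)
```

With n=8, only very large correlations reach significance, which is consistent with the study finding isolated significant coefficients in the .8 range.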
To investigate the impact of the intervention on participant outcomes, repeated-measures multivariate analyses of covariance (MANCOVAs) were conducted with the
risk and resilience measures using the data summarised in Table 1. A significant group
by time interaction using this procedure tests the hypotheses that the intervention group
would show lower levels of risk and higher levels of resilience than the comparison
group following exposure to the intervention, while the comparison group would show
no such change over time. Statistically significant differences between the two groups on
several risk and resilience measures were found at preintervention. Specifically, the
comparison group scored higher than the intervention group on measures of depression
and anxiety [RADS, F(1,206)=3.88, p<.05, and RCMAS, F(1,205)=3.88, p<.05] and
lower on social competence and positive coping strategies [SCS, F(1,206)=10.26, p<.01,
and ACS(+), F(1,206)=4.76, p<.05]. As a group by time interaction is difficult to
interpret when the preintervention measures vary, the MANCOVAs were carried out
using the measures at postintervention, 1-year follow-up, and 3-year follow-up,
controlling for the effects of the preintervention differences by including all
preintervention measures as covariates. A treatment effect would be indicated by
significantly lower levels of risk and significantly higher levels of resilience within the
intervention group at each time point. A group by time interaction would indicate that
one group showed changes over this interval not observed in the other group, as would
be expected if the benefits of the intervention were to increase for the intervention group
and/or the comparison group was to deteriorate over time. In addition, the covariate by
group interactions were entered into the MANCOVA design to test for group differences
that may have resulted from differences in preintervention levels of the risk and
resilience measures.

Table 1
Means, standard deviations, and sample sizes for demographic and experimental measures at pre-, post-, and follow-up assessments for the whole sample. Each cell shows M (S.D.), n.

Measure        Group          Pre                  Post                 FU1                  FU3
Age            Intervention   13.52 (.65), 92      13.87 (.63), 91      14.70 (.64), 82      17.21 (.56), 76
               Comparison     13.63 (.57), 116     14.01 (.57), 108     14.79 (.55), 97      17.16 (.36), 94

Risk measures
RADS           Intervention   55.45 (13.08), 92    55.41 (14.13), 91    57.74 (14.73), 81    57.26 (12.02), 76
               Comparison     59.32 (14.84), 116   59.94 (16.13), 108   61.07 (16.19), 97    59.58 (14.73), 94
RCMAS          Intervention   9.46 (6.00), 92      9.00 (6.61), 91      10.32 (6.52), 82     9.67 (5.76), 76
               Comparison     11.10 (5.92), 115    10.16 (6.47), 108    10.68 (6.74), 96     10.53 (6.19), 94
FES-conflict   Intervention   2.88 (2.26), 92      3.16 (2.30), 90      3.18 (2.42), 82      3.12 (2.09), 76
               Comparison     3.38 (2.60), 115     3.82 (2.63), 106     3.45 (2.54), 97      3.49 (2.48), 94
ACS(−)         Intervention   284.79 (69.89), 92   286.34 (60.51), 90   291.89 (63.63), 81   298.60 (58.23), 75
               Comparison     290.06 (68.80), 116  296.24 (79.04), 108  305.93 (84.00), 95   309.04 (80.90), 94

Resilience measures
SCS            Intervention   14.64 (3.00), 92     14.64 (3.48), 91     14.64 (3.60), 82     14.30 (3.76), 76
               Comparison     13.38 (2.67), 116    13.38 (3.34), 108    14.45 (3.29), 97     13.48 (2.82), 94
HSPPA          Intervention   47.87 (6.86), 91     48.11 (7.44), 90     48.14 (7.60), 80     48.73 (7.42), 75
               Comparison     45.76 (8.24), 111    46.22 (9.14), 107    48.51 (8.24), 92     48.28 (7.97), 92
FES-cohesion   Intervention   6.95 (2.15), 92      6.93 (1.94), 90      7.02 (2.23), 82      7.20 (2.00), 76
               Comparison     6.78 (2.45), 115     6.24 (2.73), 106     6.80 (2.47), 97      6.59 (2.60), 94
ACS(+)         Intervention   505.41 (69.22), 92   493.88 (70.28), 91   509.32 (73.81), 82   520.66 (75.47), 74
               Comparison     484.26 (69.60), 116  467.78 (78.30), 108  495.54 (69.99), 95   488.34 (65.79), 94
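The covariate-adjustment logic described above can be sketched for a single outcome as an ANCOVA fitted by ordinary least squares; this is a simplified, simulated single-outcome version of the study's repeated-measures MANCOVA, and none of the numbers are the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (hypothetical): postintervention scores modelled from
# preintervention scores plus a group indicator (0 = comparison, 1 = intervention)
n = 200
group = np.repeat([0, 1], n // 2)
pre = rng.normal(57, 14, n)                      # RADS-like preintervention scores
post = 10 + 0.8 * pre - 3.0 * group + rng.normal(0, 5, n)

# ANCOVA via a design matrix: intercept, preintervention covariate, group dummy
X = np.column_stack([np.ones(n), pre, group])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)
adjusted_group_effect = beta[2]  # group difference adjusted for preintervention scores
```

Entering the preintervention score as a covariate removes baseline differences between groups, so the group coefficient estimates the treatment effect rather than a mixture of treatment and preexisting differences.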
The first MANCOVA conducted with the risk measures showed a significant main
effect for time [F(8,612)=3.07, p=.002]. No significant effects were found for group or the
group by time interaction. Univariate results of the MANCOVA showed the time effect
was significant for RADS [F(2,308)=5.14, p=.006], RCMAS [F(2,308)=5.45, p=.005],
and FES-Conflict [F(2,308)=8.15, p<.001]. A series of paired sample t-tests was
conducted to determine the changes in the risk measures over time. As there was no main
effect for group, these analyses were conducted for the sample as a whole. Depression and
anxiety both increased between postintervention and 1-year follow-up, as indicated by a
significant increase on RADS [t(204)=−4.12, p<.001] and RCMAS [t(204)=−4.12,
p<.001] over this period. There was no change in depression or anxiety between 1-year
follow-up and 3-year follow-up. Family conflict (FES-Conflict) did not vary between
postintervention and 1-year follow-up, but showed a significant increase between the 1-
and 3-year follow-up assessments [t(201)=−2.26, p=.03]. Negative coping strategies
[ACS(−)] showed a significant increase between postintervention and 1-year follow-up
[t(203)=−3.05, p=.003] and a further increase between the 1- and 3-year follow-up
assessments [t(199)=−2.03, p=.04]. There were no significant group by covariate
interactions.
The second MANCOVA conducted with the resilience measures showed a significant
main effect for group [F(4,144)=2.75, p=.03]. No significant effects were found for time
or the group by time interaction. A series of independent sample t-tests found higher
social competence scores (SCS) in the intervention group compared to the comparison
group at postintervention [t(247)=2.52, p=.01]. Positive coping was higher in the
intervention group compared to the comparison group at 3-year follow-up [t(240)=2.91,
p=.004]. However, there were no differences between the two groups in self-concept or
family cohesion at any assessment. There were no significant group by covariate
interactions.
Clinical significance was not assessed as mean scores were within the normal range
on measures at all assessment times. However, further analyses were conducted on the
intervention group data to investigate whether there were systematic relationships
between implementation measures and participant outcomes. A one-way ANOVA was
conducted to determine whether there were differences between the eight classes in the
extent of change they showed on the key dependent variable (RADS) between
preintervention and 3-year follow-up. The ANOVA found no significant difference
between the eight groups in the extent to which RADS scores changed over the 3-year
period of the study. Across all groups, there was an average increase of 3.2 points on the
RADS. There was considerable variability between classes in the mean change in RADS
scores. For example, one class showed a slight decrease in RADS scores (mean
change=−1.0, S.D.=13.3), while another showed an increase in RADS scores of 7.2
points (S.D.=5.7). However, there was also a high level of variability in the RADS
change scores within each class. This was reflected in standard deviations for the RADS
scores ranging between 5.7 and 14.2, with a mean standard deviation for the eight
classes of 11.5. Given the extent of this variability within classes and the small number
of groups, it was not surprising that RADS change scores were not systematically
associated with program integrity measures.
The relationship between implementation measures and student outcomes was further
investigated by comparing the outcomes of students exposed to varying levels of the key
concepts of the program. First, cutpoints were calculated to collapse the eight classes
into three groups according to the number of key concepts delivered. This procedure

352

P.H. Harnett, M.R. Dadds / Journal of School Psychology 42 (2004) 343357

resulted in three groups that received 65.2% (low), 82.0% (moderate), and 93.8% (high)
of the key concepts. Second, the means and standard deviations of the change in risk
and resilience scores between preintervention and each subsequent assessment time was
calculated. In order to facilitate comparison across measures, the changes scores were
calculated after converting the outcome measures to z-scores. The results of the change
scores for the three groups varying in the percentage of key concepts received are
presented in Table 2. A positive change score in Table 2 represents an increase in the
outcome measure between the preintervention and subsequent assessment, while a
negative number represents a decrease. A series of ANOVAs was conducted using the
data in Table 2. No significant differences were found between the three groups on any
measure, suggesting that the number of key concepts delivered did influence student
outcomes. However, on closer inspection of Table 2, it can be seen that the pattern of
results was in the expected direction. Specifically, the students in the group that received
the most key concepts showed the most improvement on the risk measures. A similar
pattern was less evident for the resilience measures. It can also be seen that the standard
deviation of the change scores was high. This indicates considerable variability in the
response to the program within each group and reduces the likelihood of a statistically
significant result. Thus, while not statistically significant, the pattern of results is
nevertheless of interest.

Table 2
Means and standard deviations of change in outcome measures between preintervention and postintervention,
12-month (FU1), and 3-year (FU2) follow-up for groups receiving a low, moderate, and high percentage of key
concepts

                                 Percentage of key concepts received
Measure             Interval     Low           Moderate      High
Depression          pre-post     .00 (.57)a    .13 (.58)     .06 (.67)
                    pre-FU1      .23 (.88)     .12 (.64)     .11 (.88)
                    pre-FU2      .12 (.72)     .18 (.95)     .08 (1.06)
Anxiety             pre-post     .14 (.55)     .06 (.63)     .30 (.72)
                    pre-FU1      .11 (.70)     .11 (.70)     .20 (.91)
                    pre-FU2      .39 (.90)     .02 (.91)     .05 (.94)
Family conflict     pre-post     .05 (.70)     .01 (.86)     .12 (.91)
                    pre-FU1      .07 (.58)     .01 (.85)     .01 (.70)
                    pre-FU2      .15 (1.07)    .04 (.76)     .10 (.89)
Negative coping     pre-post     .06 (.61)     .04 (.73)     .00 (1.02)
                    pre-FU1      .22 (.73)     .12 (.86)     .21 (.84)
                    pre-FU2      .24 (.53)     .17 (.94)     .20 (1.09)
Social competence   pre-post     .03 (.76)     .02 (.79)     .36 (.66)
                    pre-FU1      .10 (1.33)    .22 (1.13)    .37 (.83)
                    pre-FU2      .19 (1.05)    .09 (1.16)    .30 (.91)
Self-concept        pre-post     .05 (.69)     .16 (.64)     .08 (.78)
                    pre-FU1      .30 (.79)     .34 (.84)     .08 (1.14)
                    pre-FU2      .13 (.81)     .28 (.92)     .15 (.85)
Family cohesion     pre-post     .10 (.63)     .07 (.73)     .16 (.93)
                    pre-FU1      .08 (.55)     .09 (.90)     .07 (1.21)
                    pre-FU2      .04 (.69)     .09 (.91)     .14 (1.21)
Positive coping     pre-post     .07 (.62)     .03 (.69)     .04 (.75)
                    pre-FU1      .13 (.98)     .13 (.98)     .02 (1.04)
                    pre-FU2      .14 (1.03)    .15 (.96)     .21 (.67)

a Positive change scores represent an increase on the outcome measure while negative scores represent a
decrease on the outcome measure. As change scores were calculated using z-scores, the numbers in the table
represent change in standard deviation units.
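As a rough sketch of the analysis described above, the z-score change metric and the one-way ANOVA comparing the low, moderate, and high groups can be expressed as follows. The data, the choice to standardise against the preintervention distribution, and the function names are illustrative assumptions, not the study's actual procedure or values:

```python
from statistics import mean, stdev

def change_scores(pre, post):
    # Convert both assessments to z-scores using the preintervention
    # distribution, then subtract: a positive value means the measure
    # increased, expressed in preintervention standard-deviation units.
    # (The exact standardisation used in the study is assumed here.)
    m, s = mean(pre), stdev(pre)
    return [(b - m) / s - (a - m) / s for a, b in zip(pre, post)]

def one_way_anova_f(groups):
    # Classic one-way ANOVA F ratio: between-group mean square
    # divided by within-group mean square, for k groups of change scores.
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean([x for g in groups for x in g])
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative (made-up) depression change scores for the three groups
low = [0.00, 0.23, 0.12, -0.10]
moderate = [0.13, 0.12, 0.18, -0.05]
high = [0.06, 0.11, 0.08, -0.20]
f_ratio = one_way_anova_f([low, moderate, high])
```

Note that large within-group standard deviations inflate the within-group sum of squares and shrink the F ratio, which is precisely the mechanism invoked above: high variability within each group makes a statistically significant between-group difference harder to detect.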

Discussion
The aim of the present study was to investigate the impact of a universal school-based
prevention of depression program when implemented by school personnel under real-world conditions. The first question addressed was whether school personnel would feel
confident and have the knowledge to implement a psychosocial intervention in response to
a 1-day training program. Results showed that facilitators recruited within the school,
mainly teachers, reported moderate to high levels of confidence in their ability to
implement the RAP program prior to training and that this did not change greatly in
response to the training program. Facilitators did, however, show an increase in their
knowledge of the program as indicated by a significant increase in scores on the
Knowledge Test administered before and after training. Facilitators in this study had many
years' experience working with high-school students, and it is possible that their experience in
delivering a predefined curriculum contributed to their confidence in implementing the
RAP program, while the training facilitated the acquisition of specific knowledge of the
program. This result is important in demonstrating that, at least in this sample of school
personnel, the RAP program was not considered too difficult to implement, a factor that
could limit the uptake of the intervention in real-world settings. Furthermore,
demonstrating that a 1-day training program can successfully facilitate the acquisition
of relevant knowledge is important, since short training programs will be met with less
resistance from school personnel, further increasing the accessibility of the program.
Empirical support to guide the duration of training programs is preferable to assuming
more is better in the dissemination stage of interventions (Henggeler et al., 2002).
Adherence to the program was found to vary. While little deviation was reported for
activities presented, there was a decline in the percentage of key concepts presented over
the course of the program. The low level of deviation is consistent with the training of
teachers in the delivery of a structured curriculum. However, the measure used in this
study does not clearly distinguish deviations due to incorrect administration of the
program from flexible decisions to modify the program in response to the group process
(Kendall & Chu, 2000). As flexible deviation can be a potential advantage for successful
outcomes, it cannot be ruled out that the minimal deviation was at the expense of
important process issues, such as engagement of students in the program or focusing on
the key concepts. The decline reported in the percentage of key concepts delivered over
the course of the program suggests facilitators may, in fact, have been more focused on
ensuring exercises were completed than on getting across the key concepts of the session.
Future studies should therefore measure both flexible and inaccurate deviation
(Kendall & Chu, 2000).
A further aim of the present study was to evaluate the relationship between the
quality of program implementation and student outcomes. Results showed a high level


of variability between students within classes, making it difficult to detect differences
between classes that could be attributed to facilitator variables. However, collapsing
the eight classes into three groups based on the percentage of key concepts delivered
showed that students exposed to more key concepts responded better to the
intervention than students exposed to a lower percentage of the key concepts. While
not statistically significant, this finding highlights the importance of ensuring a high
percentage of the key concepts is delivered. Two important factors in maximising
program integrity are training and supervision. In the present study, the facilitators
with higher posttraining levels of knowledge and confidence regarding the
implementation of the program delivered more concepts in the early sessions. This
suggests that posttraining competency is important in achieving greater compliance
with the manual and can also influence student outcomes. It can be
speculated that the decline over the course of the intervention in the percentage of key
concepts delivered was due to the absence of supervision. Supervision has been
identified as an important factor contributing to the external validity of intervention
programs (Dane & Schneider, 1998) and the absence of supervision in the present
study was a major departure from the Shochet et al. (2001) study that reported
beneficial effects of the RAP program.
A weakness of the present study was the lack of independent observers due to the
facilitators requesting that they not be observed. The request not to be observed may
have reflected some performance anxiety, although facilitators did not report a lack of
confidence in their ability to implement the program. The reluctance to be observed is an
example of a methodological problem encountered under real-world conditions, where
facilitators recruited from the pool of existing staff may be less committed to the
research study than the researchers. Nevertheless, a number of comments
are warranted that suggest the results may be meaningful. First, previous studies in
which independent observers were employed have found agreement between facilitators
and independent observers on these measures (r=.65 for deviation from the manual and
r=.73 for the percentage of core concepts presented; Harnett, 2000; Shochet et al.,
2001). Second, there was consistency in the reported scores with previous studies, with
the mean scores on the percentage of key concepts delivered, in initial sessions, only
slightly below those reported by Shochet et al. (2001). Third, the finding that the
number of key concepts reported declined as the program progressed suggests that
facilitators were not simply attempting to present themselves in a favourable light, which
would be a source of bias.
A disappointing aspect of the present study was that no evidence was found for the
effectiveness of the intervention in this sample. A possible factor may have been that
the level of psychological functioning displayed by the two samples was within
normal limits. It has been argued that universal prevention programs are potentially
useful for all students given that no individual is immune from adversity (Shochet et
al., 2001). It is argued that, over time, students are likely to be faced with some level
of adversity that they should be prepared to handle. While exposure to life
stressors was not measured in the present study, it is noted that one major source of
stress, family conflict, did not increase for this sample. Thus, a possible interpretation
of the present results is that the program was redundant for this group of students. If
this is the case, the usefulness of a universal prevention model is called into question,
and selective or indicated prevention approaches may be more justified in the attempt to
prevent depression in young people (Harrington, 1997; Mrazek & Haggerty, 1994).
The present results could be taken to shed doubt on the external validity of the RAP
program. However, it is important to acknowledge that intervention studies conducted in
real-world settings encounter difficulties that compromise methodological rigor. For
example, the universal application of a prevention program does not lend itself to a
random allocation of participants to conditions. While attempts were made to recruit a
comparison school matched on relevant demographic variables to minimise preintervention
differences between the groups, differences in several preintervention measures were
found. Thus, selection bias and procedures for assigning students to the intervention
condition may have worked against detecting an intervention effect.
A further methodological issue concerns the nature of the comparison group. In
intervention trials, the control or comparison group is not offered the intervention. The
assumption is that apart from the intervention, the comparison group and the
intervention group are exposed to similar influences that could affect outcome
measures. However, this assumption has been challenged by Weissberg and Greenberg
(1998) who argued that comparison groups may be exposed to influences that can
affect students in ways similar to the intervention (e.g., the existing school curricula or
media influences). Future research should attempt to measure exposure to naturally
occurring interventions for both intervention and comparison groups (Weissberg &
Greenberg, 1998).
Weissberg and Greenberg (1998) pointed out that interventions developed from a
prevention science framework may not necessarily generalise to real-world contexts,
even if the internal validity of the program has been determined in more controlled
settings and the methodology of the trial is sound. Contextual influences from different
ecological levels can affect the way an intervention is employed in a particular setting.
They argue it is important to understand and take into account the ecological context in
which an intervention is to be implemented if the program is to be relevant and
optimally effective for a particular setting. For example, supervision may have been a
critical ingredient of the success reported by Shochet et al. (2001). Unfortunately,
providing supervision in a school setting can be difficult to arrange, expensive, and
time consuming. Thus, providing supervision may not be a priority for school
administrators, and finding time to receive supervision may be a challenge for teachers
with busy workloads. Nevertheless, if programs such as the RAP Program are to hold
their promise in preventing depression in young people, further research and
collaboration with schools is needed to understand the necessary conditions to ensure
successful outcomes in the real world.

Acknowledgement
We would like to thank the anonymous reviewers of this paper for their helpful comments
and suggestions.


References
Achenbach, T. M. (1991a). Integrative guide for the 1991 CBCL/4-18, YSR, and TRF profiles. Burlington, VT:
University of Vermont, Department of Psychiatry.
Achenbach, T. M. (1991b). Manual for the Youth Self-Report and 1991 profile. Burlington, VT: University of
Vermont.
Clarke, G. N., Hawkins, W., Murphy, M., Sheeber, L. B., Lewinsohn, P. M., & Seeley, J. R. (1995). Targeted
prevention of unipolar depressive disorder in an at-risk sample of high school adolescents: A randomized trial
of a group cognitive intervention. Journal of the American Academy of Child and Adolescent Psychiatry,
34(3), 312–321.
Dane, A. V., & Schneider, B. H. (1998). Program integrity in primary and secondary prevention: Are
implementation effects out of control? Clinical Psychology Review, 18, 23–45.
Domitrovich, C., & Greenberg, M. T. (2000). The study of implementation: Current findings from effective
programs that prevent mental disorders in school-aged children. Journal of Educational and Psychological
Consultation, 11, 193–221.
Frydenberg, E., & Lewis, R. (1993). Manual: The Adolescent Coping Scale. Melbourne, Australia: Australian
Council for Educational Research.
Gillham, J. E., Reivich, K. J., Jaycox, L. H., & Seligman, M. E. P. (1995). Prevention of depressive symptoms in
schoolchildren: Two-year follow-up. Psychological Science, 6(6), 343–351.
Hains, A. A., & Ellman, S. W. (1994). Stress inoculation training as a preventative intervention for high school
youths. Journal of Cognitive Psychotherapy, 8(3), 219–232.
Harnett, P. H. (2000). The prevention of depression in adolescents: Training school personnel to implement a
school-based program in a real world context. Unpublished doctoral dissertation, Griffith University,
Brisbane, Australia.
Harnett, P. H., Hoge, R., & Shochet, I. M. (1998). Resourceful Adolescent Project: Procedures and evaluation
manual. Brisbane, Australia: Griffith University.
Harrington, R. (1997). The role of the child and adolescent mental health service in preventing later depressive
disorder: Problems and prospects. Child Psychology and Psychiatry Review, 2(2), 46–57.
Harrington, R., Whittaker, J., & Shoebridge, P. (1998). Psychological treatment of depression in children and
adolescents: A review of treatment research. British Journal of Psychiatry, 173, 291–298.
Harter, S. (1988). Manual for the Self-Perception Profile for Adolescents. Denver, CO: University of Denver.
Henggeler, S. W., Schoenwald, S. K., Liao, J. G., Letourneau, E. J., & Edwards, D. L. (2002). Transporting
efficacious treatments to field settings: The link between supervisory practices and therapist fidelity in MST
programs. Journal of Clinical Child Psychology, 31(2), 155–167.
Jaycox, L. H., Reivich, K. J., Gillham, J., & Seligman, M. E. (1994). Prevention of depressive symptoms in
school children. Behaviour Research and Therapy, 32(8), 801–816.
Kendall, P. C., & Chu, B. C. (2000). Retrospective self-reports of therapist flexibility in a manual-based treatment
for youths with anxiety disorders. Journal of Clinical Child Psychology, 29(2), 209–220.
Meyer, A., Miller, S., & Herman, M. (1993). Balancing the priorities of evaluation with the priorities of the
setting: A focus on positive youth development programs in school settings. Journal of Primary Prevention,
14, 95–113.
Moos, R. H., & Moos, B. S. (1986). The Family Environment Scale: The manual. Palo Alto, CA: Consulting
Psychologists Press.
Mrazek, P. J., & Haggerty, R. J. (1994). Reducing risks for mental disorders: Frontiers for preventive intervention
research. Washington, DC: National Academy Press.
Mufson, L., Moreau, D., Weissman, M. M., & Klerman, G. L. (1993). Interpersonal psychotherapy for adolescent
depression. In M. M. Weissman (Ed.), New applications of interpersonal psychotherapy. Washington, DC:
American Psychiatric Association Press.
Reynolds, C. R., & Richmond, B. O. (1978). What I think and feel: A revised measure of children's manifest
anxiety. Journal of Abnormal Child Psychology, 6(2), 271–280.
Reynolds, W. M. (1987). Reynolds Adolescent Depression Scale: Professional manual. Odessa, FL: Psychological
Assessment Resources.
Shochet, I. M., Dadds, M. R., Holland, D., Whitefield, K., Harnett, P. H., & Osgarby, S. (2001). The efficacy of a
universal school-based program to prevent adolescent depression. Journal of Clinical Child Psychology, 30(3),
303–315.
Weissberg, R. P., & Greenberg, M. T. (1998). Prevention science and collaborative community action research:
Combining the best from both perspectives. Journal of Mental Health, 7(5), 479–492.
