
 

Project Oracle Synthesis Study, January 2015

 
 
 
 
 
Randomised Controlled Trials (RCTs) for Projects with Children and Young People
Key  Insights  
 
This synthesis study examines when and where Randomised Controlled Trial (RCT) evaluations have been carried out with young people, and which intervention outcomes they measured. The focus is on the RCT as a methodology rather than on the interventions themselves. Within our criteria, we found only 11 evaluations of children and youth interventions in London. As a result, drawing general conclusions is problematic. However, we can identify the common benefits and challenges that these evaluations faced.
 
The  evidence  
 
The RCTs included in this study were carried out with projects of varying sizes, from barely 100 to thousands of participants. The studies also took place in a variety of environments, from foster care settings to schools and children's centres, although most were in schools.
 
The majority of studies measured a health-related factor as their primary outcome. There was an absence of evidence on the relevance of RCTs for understanding broader social outcomes with children and young people: there was only one RCT in the criminal justice area, compared with seven on health issues.
 
Caveats  
 
Although RCTs are considered capable of providing strong conclusions on programme impact, where the findings have shown no clear effect, evaluators have needed to employ additional methodologies (e.g. interviews or questionnaires) in order to understand the difference a programme makes. In other words, where RCT studies have failed to find evidence of impact, additional research activities have been required to explain why.
 
It is also important to note that the RCTs included in this study were all externally funded and usually carried out by whole teams of researchers from one or more university centres, as well as, in some cases, public or private sector organisations. This indicates the complexity and cost sometimes associated with carrying out an RCT.
 
Recommendations  
 
The low number of studies found here and the dominance of health-related evaluations illustrate how the much hoped-for spread of RCTs into new, broader areas of social policy is yet to happen. Those leading the push for more RCT evaluations should be aware of the lack of evidence of RCTs effectively measuring non-health-related outcomes for children and young people.
 

©Project  Oracle  Evidence  Hub™            
 
To obtain the best results, an RCT requires a thorough methodology, a systematic randomisation process and consequent in-depth data gathering. This often means that significant time and resources must be allocated to the study. In this sense, commissioners and funders encouraging RCT evaluations should also be aware of the need for resources to fund such studies.
 
Finally, this is an exploratory synthesis, and it offers a basis for further research. We recommend a further study to review the usefulness of this methodology for understanding the effectiveness of children and youth interventions in the capital. A suggested focus would be to examine how RCTs are or are not being used to influence policy-making processes.
 
At Project Oracle we look forward to working to develop greater understanding of how the right support can ensure that RCTs are used to the best of their potential.

 
 
Contents

Key Insights
    The evidence
    Caveats
    Recommendations
Glossary of key terms
Introduction
Project Oracle Synthesis Studies: Aims and Approach
    Synthesis study aims
    Our realist approach to synthesis sets out to:
This study: What are RCTs and why do they matter?
    What are the benefits of RCTs?
    What are the shortcomings?
The material: what evidence is there?
    Eligibility criteria
    The selected material*
Mapping out RCTs in London
    Participants
    Case Study 1: Brandon Centre Multisystemic Therapy Randomised Controlled Trial
    Programme types
    Contexts
    Case Study 2: The RIPPLE study of peer-led sex education in secondary schools
Creating confidence in and gaining useful learning from RCTs
    Case Study 3: A Randomised Controlled Trial of ‘Teens and Toddlers’
Conclusions
References
 

 
 
 
 
 
 
Glossary  of  key  terms  
 
 
Context – the external factors which influence how individuals participate in or organise and carry out an intervention. The context affects the availability of resources and the opportunities for a project to be carried out. For example, the arrangement of key institutions for young people, such as schools, colleges, and young offender institutions, can facilitate the building of networks of support. Interventions and their participants are embedded in social contexts characterised by, for example, dynamics of inclusion and exclusion, material inequality, and racial and ethnic diversity.
 
Evidence  –  the  range  of  available  robust  and  reliable  information  which  demonstrates  the  impact  of  a  
policy,  public  service  or  programme.  Evidence  is  knowledge  which  has  been  substantiated  through  
empirical  or  theoretical  research.  Evidence  is  not  the  only  type  of  knowledge,  but  it  is  often  claimed  
to  be  more  certain  because  of  the  rigour  with  which  its  claims  have  been  tested.    
 
Indicator  –  a  measurable  characteristic  or  process  which  reflects  (indicates)  that  change  has  occurred.  
 
Intervention  –  a  planned  action  which  aims  to  bring  about  positive  change.  This  may  aim  to  improve  
social  inclusion,  empowerment  and  equality.  The  term  intervention  refers  to  person-­‐centred  
approaches  which  aim  to  change  knowledge,  understanding,  attitudes  and  behaviour.  
   
Methodology – the framework and assumptions which enable the design of research, evaluation or secondary data analysis. This is different from a research or evaluation method, which is a tool for collecting and analysing data.
 
Outcomes – the end result of a process is its outcome. The outcomes of a policy, a project or an intervention are the changes that it has caused, or is aiming to cause. This is not the same as an output, which refers to the products offered by a policy, a project or an intervention, such as the number of workshops delivered, the number of people trained, or a website built. Outcomes are the changes, benefits, learning or other effects that happen as a result of a policy, project or intervention. These can be immediate, intermediate or long term.
 
Participant  –  an  individual  actively  involved  with  an  intervention  which  is  aimed  at  bringing  about  
change  for  them,  such  as  in  their  behaviour  or  attitudes.  
 
Programme – an ongoing series of activities to address an issue or problem. Whereas a project has a defined beginning and end, a programme may be more open-ended: an overarching programme can be composed of various projects.
 



 
Introduction  
 
In recent years considerable energy and resources have been invested in developing evidence- and data-informed approaches to public policy and practice. However, despite there being considerable consensus on the value of evidence, there is ongoing debate and uncertainty regarding the best ways to go about generating evidence. The Randomised Controlled Trial (RCT) has often been at the heart of these debates.
 
This synthesis study offers insight into the topic by examining when and where RCT evaluations have been carried out, with what kind of interventions and what outcomes were measured. A further study can uncover the extent to which these RCTs are useful to funders, commissioners and project providers from London's children and youth sector.

What is a randomised controlled trial?

A randomised controlled trial (RCT) is an evaluation methodology which aims to draw an objective picture of the difference that a project, programme or policy makes by comparing its effect on separate groups.

Participants are allocated to groups which are either affected by the project or policy (the intervention group) or not (the control group). The selection of people to each group is random, to eliminate as much bias as possible.

The difference in outcomes between the groups is considered to demonstrate the effect of the project or policy.

The Cabinet Office's seminal report ‘Test, Learn, Adapt’ claims that ‘Randomised controlled trials (RCTs) are the best way of determining whether a policy is working … [and] by enabling us to demonstrate just how well a policy is working, RCTs can save money in the long term’ (Haynes et al. 2012: 5). As a result, the report advocates running more RCTs on a broader range of social policies in order to judge their effectiveness.

So, the RCT appears to fit neatly into the current requirement for evidence to underpin policy and practice, and in particular to inform the way that these are funded. That said, there is a problem. There is no universal agreement that RCTs represent the gold standard in evaluation. Indeed, the approach is positively rejected by some and feared by others.
 
All  of  this  means  that  there  is  a  real  need  to  gain  a  clearer  understanding  of  who  requests  and  uses  
RCTs,  when;  and  how  useful  and  practical  they  are  for  children  and  youth  initiatives  in  London.  This  
synthesis  study  reviews  the  available  evidence  to  contribute  towards  examination  of  these  issues.  
 



 
Project   Oracle   Synthesis   Studies:   Aims   and  
Approach  
 
The   Project   Oracle   synthesis   studies   aim   to   create   a   better   understanding   of  
what   works   in   youth   policy.   They   each   bring   together,   assess   and   analyse   the  
available  evidence  on  a  particular  issue.  Together  the  studies  build  a  library  of  
synthesised   evidence   to   aid   providers   and   funders   in   designing   and  
commissioning  projects  that  have  a  greater  likelihood  of  succeeding.  
 
 
Synthesis  study  aims  
 
1.  To  identify  which  projects  work,  for  whom  and  under  what  conditions  by  
- Focusing  on  evaluations  of  previous  interventions  to  draw  broad  conclusions  regarding  how  
intervention  mechanisms  influence  outcomes  in  different  contexts.  
 
2.  To  assess  the  type  and  quality  of  evaluation  data  currently  available  by  
- Analysing  the  evaluations  being  conducted  in  terms  of  their  underlying  theory,  
methodological  approach  and  data  collection  strategies.  
- Outlining  indicators  of  positive  and  negative  outcomes.    
- Identifying  gaps  and  shortcomings  in  the  evidence  base  for  future  work.  
 
We approach the examination of ‘what works’ from the perspective of realist evaluation (Pawson and Tilley 1997). This means that we accept that projects are found in complex social realities, and that they proceed within a context (Pawson 2006; Pawson and Tilley 2009). A realist synthesis of evidence therefore involves investigating how social programmes (or, in this case, approaches to evaluation) work in practice in certain times and places, looking at the mechanisms that underpin interventions and the way that these evolve in specific contextual settings.
 
Our  realist  approach  to  synthesis  sets  out  to:  
 
1.  Identify  an  underlying  theory  of  change  
Do  the  interventions  aim  to  achieve  change  through  an  approach  which  is  theoretically  
grounded?  
 
2.  Examine  interventions  in  their  context  
Do  interventions  have  the  same  impact  on  all  participant  groups  in  all  places?  Which  
institutional  and  social  contexts  facilitate  positive  outcomes?  
 
3.  Be aware that interventions and contexts are open and changeable
Do intervention providers borrow from or compete with each other? Do positive or negative outcomes create or restrict opportunities for future programmes? Can synthesis studies facilitate knowledge exchange among intervention providers and commissioners?



 
This  study:  What  are  RCTs  and  why  do  they  matter?  
 
For many years, the RCT has been viewed as the ‘gold standard’ methodology for evaluating interventions (Torgerson and Torgerson 2008). RCTs have been particularly popular for evaluating health care interventions, especially in clinical settings. However, within the social sciences and social policy, the effectiveness and validity of the RCT has been questioned and contested.
 
It  is  not  the  place  for  this  study  to  enter  into  a  discussion  about  the  relative  merits  of  RCTs.  However,  
the   academic   and   political   interest   in   this   methodology   means   that   it   is   important   to   have   a   clear  
understanding  of  when,  where  and  with  what  effect  they  are  used.  In  this  section,  we  set  the  scene  
for  the  analysis  that  follows  by  looking  at  arguments  for  and  against  the  RCT.  
 
What  are  the  benefits  of  RCTs?  
 
RCTs  are  considered  to  offer  the  clearest  and  strongest  signal  of  what  an  intervention’s  effect  is  while  
aiming  to  eliminate  as  much  bias  as  possible  from  the  researcher  and  intervention  providers  
(Grossman  &  Mackenzie  2005).  

Through random assignment of people to one group or another, an evaluator aims to largely remove the risk of bias in study findings and to evenly distribute the characteristics of participants across groups. For example, a test of the effectiveness of a new flu medicine may randomly split 10,000 people into two groups and be confident that each group will contain a similar number of individuals with similar characteristics, such as age or gender.
 
Randomisation,   it   is   argued,   allows   for   statistically   identical   separate   groups   to   be   created.   If   an  
intervention,  e.g.  a  social  programme,  is  then  administered  to  one  group  but  not  the  other  (assuming  
a   two   group   design),   any   differences   in   outcome   between   the   two   groups   can   be   attributed   to   the  
intervention   and   alternative   explanations   for   these   differences   ruled   out   since   the   two   groups   only  
differ  in  whether  or  not  they  received  the  programme.    
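The logic described above can be made concrete with a short simulation. The sketch below is illustrative only (it is not taken from any of the trials reviewed here, and all numbers are invented): it randomly assigns hypothetical participants to two groups, shows that their baseline characteristics balance, and recovers a known effect from the difference in mean outcomes.

```python
import random
import statistics

random.seed(42)

# Hypothetical participants, each with a baseline 'risk' score.
participants = [{"risk": random.gauss(50, 10)} for _ in range(10_000)]

# Random assignment: shuffle, then split in half.
random.shuffle(participants)
half = len(participants) // 2
intervention, control = participants[:half], participants[half:]

# Baseline characteristics are near-identical across the groups.
print(round(statistics.mean(p["risk"] for p in intervention), 1))
print(round(statistics.mean(p["risk"] for p in control), 1))

# Simulate outcomes: the invented intervention lowers risk by 5 points.
TRUE_EFFECT = -5.0
outcome_i = [p["risk"] + TRUE_EFFECT + random.gauss(0, 5) for p in intervention]
outcome_c = [p["risk"] + random.gauss(0, 5) for p in control]

# The difference in mean outcomes estimates the intervention's effect,
# because randomisation has ruled out other systematic differences.
estimated_effect = statistics.mean(outcome_i) - statistics.mean(outcome_c)
print(round(estimated_effect, 1))  # close to the true effect of -5
```

Because assignment is random, the estimated effect sits close to the true effect in a trial of this size; with much smaller cohorts the estimate becomes noisier, which is part of the concern about cohort sizes discussed later in this study.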
 
As a result of this approach, RCTs are considered a powerful tool to help policymakers and practitioners decide which of several policies, programmes or projects is the most effective, and to distinguish which interventions are not as effective as they should be or have the potential to be (Haynes, Service, Goldacre and Torgerson 2012).
 
“The   task   of   isolating   the   effects   of   treatments   or   programs   from   other   confounding   aspects  
of   selection   or   design   is   the   researcher’s   most   significant   challenge   in   coming   to   a   valid  
policy   conclusion.   Through   randomization   of   treatment   and   control   or   comparison   conditions,  
the   researcher   can   assume   that   such   threats   to   valid   conclusions   are   distributed   equally  
between   the   treatment   and   control   conditions   …   Accordingly,   in   a   randomized   study   the  
effect   of   treatment   is   disentangled   from   the   confounding   effects   of   other   factors   …  
Randomized   studies   are   thus   the   most   powerful   tool   that   crime   and   justice   evaluators   have  
for making valid conclusions about whether programs or treatments are effective.”
(Weisburd  2003:  338)  
   



 
What  are  the  shortcomings?  
 
Much criticism has been targeted at RCTs: that they are inappropriate to the setting, address only a limited number of questions, are too expensive and too complex, and may be applied carelessly, with poor data as the outcome (Black 1996; Haynes et al. 2012).
 
RCTs  are  viewed  as  unrealistic  for  many  small  and  often  voluntary  sector  organisations  because  they  
fear   the   evaluation   will   add   extra   expense,   time   and   effort   to   the   intervention.   Many   organisations  
perceive   RCTs   as   complicated   especially   where   they   are   already   struggling   to   deliver   their   own  
activities  within  set  timeframes.    
 
However, perhaps the biggest concern for these organisations is that conducting an RCT, which intentionally withholds a programme or project with potential benefits from individuals who are placed in a control group for research purposes, is unethical. At the same time, it can be argued that it may be even more damaging to roll out interventions and services without really knowing whether an intervention's impact is positive or not.
 
The extent to which conclusions from RCTs can be transferred is also contested. A trial may well show a strong and clear indication of a project or programme's impact, but this positive effect has occurred at one time and in one setting. Without knowing the mechanisms that underpin how it works, it is difficult to replicate the intervention elsewhere (Cartwright and Hardie 2012).
 
 
“Whereas an RCT would be able to compare the effectiveness of different variations of [a] programme … the methodological requirements to do so exclude picking up the interplay between programme implementation, the individuals who are targeted, the programme’s context and the wider social context … [it] cannot prove the underlying causal mechanism”
(Marchal et al. 2013: 125-6)
 
[Figure omitted: taken from Haynes et al. (2012), ‘Test, Learn, Adapt: Developing Public Policy with Randomized Controlled Trials’]



 
The  material:  what  evidence  is  there?  
 
A  synthesis  study  is  a  meta-­‐analysis  of  the  available  evidence.  It  evaluates  and  
summarises   previous   evaluations,   rather   than   examining   the   detail   of   specific  
projects.   The   framework   outlined   for   this   synthesis   study   is   open   to   diverse  
forms   of   evidence,   both   qualitative   and   quantitative,   that   can   be   drawn  
together  to  help  understand  what  influences  an  intervention’s  outcomes.    
 
Eligibility  criteria    
 
We drew upon a wide range of potential sources, not limited to academic research papers, and focused in particular on the ‘grey’ and often unpublished evaluation literature. These were accessed through bibliographic searches, internet searches through Google, and by consulting experts.
 
We have included evaluations that used a randomised controlled trial methodology, were based in London, and were written in English. The evaluations had to include evidence of programme outcomes. The criteria placed no restriction on the size of the trial or the number of participants.
 
Furthermore,   in   order   to   focus   on   projects   and   programmes   of   particular   relevance   to   the   children  
and  youth  sector,  we  excluded  clinical  and  medical  trials.  The  outcomes  of  the  work  being  evaluated  
had   to   focus   on   the   lives   and   behaviour   of   children   and   young   people,   although   they   could   also  
include  other  additional  outcomes  relating  to,  e.g.  children’s  parents  and  families.  
 
 
The  selected  material*  
 
This search identified eleven studies that matched our criteria. All are based in London, although some include London as only one intervention area among others in the UK. We tried to focus on the changes happening within the London context.
 
1. Randomized controlled trial of support from volunteer counsellors for mothers considering breast feeding.
Graffy, J., Taylor, J., Williams, A. and Eldridge, S. British Medical Journal, 2004. http://www.bmj.com/content/328/7430/26
Funded by: The Royal College of General Practitioners Scientific Foundation Board and NHS North Thames responsive funding program.

2. Exploratory and developmental trial of a family centred nutrition intervention delivered in Children's Centres.
Watt, R.G., Hayter, A.K.M., Ohly, H.R., Pikhart, H., Draper, A.K., Crawley, H., McGlone, P., Cooke, L., Moore, L. and Pettinger, C. University College London and the University of Plymouth, 2012. http://www.ucl.ac.uk/dph/research/finalreport
Funded by: Department of Health.
*Special  thanks  to  Stephen  Morris,  Chris  Bonell  and  David  Pritchard  for  their  kind  contributions  to  this  study.  



 
3. Randomized controlled trial of ‘teens and toddlers’: A teenage pregnancy prevention intervention combining youth development and voluntary service in a nursery.
Bonell, C., Maisey, R., Speight, S., Purdon, S., Keogh, P., Wollny, I., Sorhaindo, A. and Wellings, K., 2013. http://www.ncbi.nlm.nih.gov/pubmed/24011102
Funded by: Department for Education.

4. Two-Year Impact of Personality-Targeted, Teacher-Delivered Interventions on Youth Internalizing and Externalizing Problems: A Cluster-Randomized Trial.
O'Leary-Barrett, M., Topper, L., Al-Khudhairy, N., Pihl, R.O., Castellanos-Ryan, N., Mackie, C.J. and Conrod, P.J., 2013. http://www.ncbi.nlm.nih.gov/pubmed/23972693
Funded by: Action on Addiction.

5. Randomized Controlled Trial of the Fostering Changes Program.
Briskman, J., Castle, J., Blackeby, K., Bengo, C., Slack, K., Stebbens, C., Leaver, W. and Scott, S., 2012. https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/183398/DFE-RR237.pdf
Funded by: Department for Education, to the National Academy for Parenting Research.

6. Peer-led sex education in England (RIPPLE study) – characteristics of peer educators and their perceptions of the impact on them of participation in a peer education programme.
Strange, V., Forrest, S., Oakley, A. and the RIPPLE Study Team. Health Education Research, 2004. http://www.ncbi.nlm.nih.gov/pubmed/15276393
Funded by: Medical Research Council.

7. Factors influencing hand washing behaviour in primary schools: process evaluation within a randomized controlled trial.
Chittleborough, C.R., Nicholson, A.L., Basker, E., Bell, S. and Campbell, R., 2012. http://www.ncbi.nlm.nih.gov/pubmed/22623617
Funded by: The National Institute for Health Research (NIHR).

8. A randomized controlled trial of Multisystemic Therapy and statutory therapeutic intervention for young offenders.
Butler, S., Baruch, G., Hickey, N. and Fonagy, P., 2011. http://www.ncbi.nlm.nih.gov/pubmed/22115143
Funded by: EPPI-Centre, Social Science Research Unit, Institute of Education, University of London.

9. Randomized controlled trial of motivational interviewing compared with drug information and advice for early intervention among young cannabis users.
McCambridge, J., Slym, R.L. and Strang, J., 2008. http://onlinelibrary.wiley.com/doi/10.1111/j.1360-0443.2008.02331.x/abstract
Funded by: Wellcome Trust, for a Health Services Research Fellowship.

10. A Randomised Controlled Trial of Cognitive Behaviour Therapy for Psychosis in a routine clinical service.
Peters, E., Landau, S., McCrone, P., Cooke, M., Fisher, P., Steel, C., Evans, E., Carswell, K., Dawson, K., Williams, S., Howard, A. and Kuipers, E., 2010. http://www.ncbi.nlm.nih.gov/pubmed/20491720
Funded by: Culyer funding from the Department of Health, U.K., and subsequently by a donation from Eli Lilly.

11. Effectiveness of out-of-home day care for disadvantaged families: randomised controlled trial.
Toroyan, T., Roberts, I., Oakley, A., Laing, G., Mugford, M. and Frost, C., 2003. http://www.bmj.com/content/327/7420/906
Funded by: Department of Health.
 



 
Mapping  out  RCTs  in  London  
 
This  section  maps  out  RCTs  in  London  according  to  the  people  that  they  work  
with,  the  programme  types  and  the  contexts  in  which  they  take  place.  
 
 
Participants  
 
The   RCTs   included   in   this   study   were   carried   out   with   projects   of   varying   sizes,   from   around   60   to  
thousands  of  participants.  The  spread  of  project  sizes  can  be  seen  in  the  diagram  below.  
 
A common concern about RCTs is that large group sizes are needed to measure statistically significant effects, but this is not always the case. In fact, the Brandon Centre RCT of Multisystemic Therapy is an example of how an RCT can effectively provide data with relatively small cohorts (see Case Study 1).
 
Nevertheless, compared to other evaluation methodologies (ethnographic observation, qualitative interviews and focus groups, participatory video and photography, or even simple questionnaires), all of the RCT cohort sizes can be considered large. We are also aware that for some small, targeted or specialised projects (many of which are supported by Project Oracle), the number of participants may be considerably smaller than those found in the trials examined here.
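The link between cohort size and detectable effects can be illustrated with a back-of-the-envelope power calculation. This sketch is an illustration added for this synthesis, not a calculation from any of the trials reviewed; it uses the common normal-approximation formula for a two-group comparison of means at 5% significance and 80% power.

```python
import math

def n_per_group(effect_size: float) -> int:
    """Approximate participants needed per arm to detect a standardised
    effect size (Cohen's d) in a two-group comparison of means, using
    n = 2 * (z_alpha + z_beta)^2 / d^2."""
    z_alpha = 1.96  # two-sided 5% significance level
    z_beta = 0.84   # 80% power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# Small effects demand large trials; large effects can be detected with
# cohorts comparable to the smaller RCTs in this study.
for d in (0.2, 0.5, 0.8):
    print(f"effect size {d}: about {n_per_group(d)} participants per group")
```

By this approximation a small effect (d = 0.2) needs roughly 400 participants per arm, while a large effect (d = 0.8) needs only around 25, which is consistent with the observation that relatively small trials can still produce usable findings.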
 

[Chart omitted: Size of RCTs in London (intervention and control group participants). Reported trial sizes: 8,000; 1,024; 644; 449; 394; 326; 263; 152; 108.]


 
 
Case   Study   1:   Brandon   Centre   Multisystemic   Therapy   Randomised  
Controlled  Trial  
 
Project: Multisystemic Therapy aims to reduce offending among young people through an intervention combining improved parenting capacity, increased engagement with education and training, and the tackling of underlying health or mental health problems, including substance misuse. It involves children and young people aged 11-17 years and their families.
 
Evaluation: The programme pilot involved 108 young offenders and found that they were less likely to re-offend than those who received the usual support. The evaluators also found that the therapy was particularly helpful for boys. The project was also considered cost-effective because young people are kept out of custody or local authority care, and parents are encouraged to use the voluntary sector and local support systems instead.
 
The   clear   conclusions   of   the   trial   supported   the   UK   government’s   decision   to   set   up   multisystemic  
therapy  programmes  elsewhere  in  the  country,  involving  families  in  Hackney,  Greenwich,  and  Merton  
and   Kingston   in   London,   plus   Barnsley,   Leeds,   Peterborough,   Plymouth,   Reading,   Sheffield   and  
Trafford.  
 
The   trial   has   also   had   international   interest.   Building   on   evaluations   in   the   USA,   cost   effectiveness  
studies  from  the  Washington  State  Institute  for  Public  Policy  suggested  that  £5  is  saved  for  every  £1  
invested  in  the  programme.  
 
 
Programme  types  
 
Despite calls for RCTs to be used to evaluate a wider range of social interventions, we found that most of the RCTs included in this synthesis study were health-related interventions and therefore measured health-related outcomes. There was only one RCT in the criminal justice area, for example, compared with seven on health issues. Further research would be needed to explain the dominance of health outcomes in RCTs of children and youth programmes. The full range of outcome areas measured can be seen in the chart below.
 
 

 
[Chart: Outcomes measured by RCTs in London — number of studies measuring outcomes in each of four areas: health, mental health, family welfare and socio-economic]
 
The  health  outcomes  measured  ranged  from  levels  of  nutrition  in  children’s  centres  to  cigarette  and  
cannabis   use   among   teenagers   in   further   education.   Two   projects   also   concentrated   on   rates   of  
teenage  pregnancy  and  participants’  knowledge  of  sex  education.  
 
However,   there   is   an   important   caveat   here.   The   interventions   being   tested   were   predominantly  
health-­‐related   and   therefore   it   is   difficult   to   draw   conclusions   across   different   policy   areas   or   to  
compare  them  across  distinct  programme  types.  As  a  result,  this  study  focuses  on  drawing  out  lessons  
to  be  learned  from  the  evaluations  included.  
 
Contexts  
 
The studies included here have taken place in varied environments, from foster care settings to schools and children's centres. However, by far the majority were in schools (see the chart below).
 
An advantage of carrying out RCTs in schools is that there is a large population of students from which to select participants, and they may already be divided into classes or year groups, allowing a ready distinction between intervention and control groups. This is particularly helpful for cluster and multi-level RCT designs.
 
However,   working   in   schools   also   raises   challenges.   In   particular,   it   is   difficult   to   ensure   that   the  
groups   of   students   remain   separate   during   the   intervention   and   that   there   is   no   contamination,   a  
requirement   of   the   pure   RCT.   According   to   West   and   Spring   (2014),   contamination   occurs   when  
individuals   are   exposed   to   the   wrong   conditions   through   having   contact   with   each   other,   either  
inadvertently   or   intentionally   as   people   discuss   their   experiences.   For   example,   a   group   of   students  
taking  part  in  sex  education  lessons  may  then  talk  to  others  not  in  their  group  once  the  lesson  is  over.    
 
Bias   in   groups   was   a   concern   raised   elsewhere   too   and   the   source   of   bias   was   a   common   topic   of  
discussion   in   the   studies.   For   example,   a   day-­‐care   health   programme   for   disadvantaged   families   in  
Hackney   mentioned   that   ‘at   the   paediatric   assessments,   some   parents   talked   about   their   childcare  
arrangements,   and   so   we   cannot   exclude   the   possibility   of   bias   in   the   assessment   of   child  
development’  (Toroyan  et.al.  2003:  4).  
 

 
[Chart: RCT locations — general practitioner's surgery, school, foster care, children's centre, other]
 
A   distinct   approach   identified   here   was   to   randomly   place   whole   schools   or   boroughs   into  
intervention  or  control  groups  and  to  compare  the  effects  across  them,  rather  than  creating  groups  of  
individual  students.  One  example  can  be  seen  in  Case  Study  2,  an  evaluation  of  peer-­‐led  sex  education  
in   secondary   schools.   Nevertheless,   to   randomise   in   this   way   may   require   larger   projects   which   are  
implemented  in  multiple  settings.  
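The cluster allocation described above can be sketched in a few lines of code. This is a generic illustration, not the procedure used by any of the trials reviewed; the school names and the fixed seed are invented:

```python
import random

def randomise_clusters(clusters, seed=2015):
    """Randomly allocate whole clusters (e.g. schools) to intervention or
    control, half to each arm, rather than randomising individual pupils."""
    rng = random.Random(seed)      # fixed seed so the allocation is reproducible
    shuffled = list(clusters)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": shuffled[:half], "control": shuffled[half:]}

# Hypothetical example: six schools allocated three-and-three.
schools = ["School A", "School B", "School C", "School D", "School E", "School F"]
arms = randomise_clusters(schools)
print(arms["intervention"], arms["control"])
```

Because whole settings are the unit of allocation, a trial of this design needs enough clusters (not just enough pupils) to compare the arms, which is why it tends to suit larger, multi-site projects.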
 
Finally,   it   is   important   to   note   that   the   RCTs   included   here   involved   a   range   of   individuals   and  
organisations.  They  were  all  externally  funded,  usually  by  public  bodies  including  the  Department  of  
Health   and   the   Department   for   Education,   but   also   by   other   organisations   such   as   Action   on  
Addiction.  The  evaluations  were  also  usually  carried  out  by  researchers  from  universities,  as  well  as  in  
some  cases,  public  or  private  sector  research  organisations.    
 
This emphasis on external funding and the involvement of a range of evaluation stakeholders, in the context of the constraints and challenges of working in settings such as primary and secondary schools, illustrates the complexity associated with carrying out an RCT. Despite recent calls for simpler, smaller trials to be carried out in a wider range of policy areas (Haynes, Service, Goldacre and Torgerson 2012), the findings from this study suggest that these perceived difficulties are yet to be overcome in the children and youth sector.
 
 
Case   Study   2:   The   RIPPLE   study   of   peer-­‐led   sex   education   in  
secondary  schools  
 
Project:   RIPPLE   is   ‘a   Randomized   Intervention   of   Pupil-­‐led   Sex   Education’   funded   by   the   Medical  
Research  Council.  It  explores  the  effectiveness  of  peer  education  among  young  people  on  the  topic  of  
sex  education.  In  each  school,  students  in  Year  12  (16/17  years  old)  took  part  in  standardised  training  
with  health  practitioners.  The  students  then  delivered  three  sex  education  sessions  to  Year  9  (13/14  
years  old)  students  as  a  way  of  improving  their  knowledge  and  attitudes  to  sexual  health  issues.  
 

 
Evaluation: Twenty-seven schools were recruited and randomly allocated either to a programme of peer-led sex education or to act as control schools.
 
Outcomes   were   measured   through   analysis   of   a   pre-­‐   and   post-­‐questionnaire   designed   to   judge   the  
impact  of  the  process  on  the  peer  educators,  although  problems  were  reported  with  maintaining  high  
levels   of   data   collection   due   to   the   different   characteristics,   curricula   and   timetables   of   the   distinct  
schools.  
 
The   Year   12   students   reported   changes   in   sexual   knowledge   and   more   liberal   attitudes,   as   well   as  
believing  that  the  programme  would  have  a  positive  impact  on  their  confidence  in  relationships  and  
their  sexual  behaviour.  The  evaluation  concludes  that  ‘participation  in  a  peer  education  programme  
benefits  peer  educators.’  
 
 
 
 

 
Creating   confidence   in   and   gaining   useful   learning  
from  RCTs    
 
The principal purpose of an RCT is to identify the causal effects of interventions and programmes while minimising bias, provided the trial is implemented well (Torgerson and Torgerson 2008). This section looks at the RCTs examined here, and how many of them were able to draw strong conclusions on the impact of the evaluated programmes.
 
Our review shows that only a minority of the RCTs found significant evidence of a change in outcomes: four presented significant evidence of change after the intervention, four found no significant evidence that the programme achieved its primary outcomes, and two showed partial evidence of change (see the chart below). A statistically non-significant result is informative in itself: provided the trial was large enough to detect the effect it was looking for, it offers evidence that the intervention is not effective.
 
The  small  number  of  studies  included  here  and  their  predominant  focus  on  health  outcomes  means  
that   we   cannot   draw   firm   conclusions   regarding   the   usefulness   of   RCTs   in   the   children   and   youth  
sector.  Regardless  of  this,  it  is  more  generally  difficult  to  determine  whether  RCTs  always  give  a  clear  
indication   of   a   programme’s   impact.   Where   there   has   been   no   indication   of   programme   impact,  
evaluators   have   needed   to   use   additional   evaluation   methodologies   in   order   to   draw   useful  
conclusions  (see  Case  Study  3).    
 
This suggests that while RCTs can give insight on the extent to which a given programme works, they cannot entirely explain why it works. In this sense, the evidence suggests that an RCT is not always the most useful approach to evaluation. Even in cases where it does help our understanding, additional methodologies may be required to acquire a holistic understanding of an intervention's effectiveness. Shadish, Cook and Campbell (2002), for instance, point out that RCTs can only address causal descriptions, and not causal explanations. For the latter, it is necessary to include a process evaluation alongside an RCT.
 
     

 
[Chart: Conclusions drawn by RCTs in London — proportions showing clear positive effects, some positive effects and no effects]


 
 
 
Case  Study  3:  A  Randomised  Controlled  Trial  of  ‘Teens  and  Toddlers’  
 
 
Project: Teens and Toddlers took place in 2012 and aimed to reduce sexual risk behaviour, and consequently teenage pregnancies, by promoting girls' overall personal development and building awareness of the responsibility involved in caring for a child. The intervention targeted young people considered at risk of teenage parenthood, with weekly three-hour sessions held in local pre-school nurseries over 18–20 weeks.
 
Evaluation:  The  evaluation  used  a  matched-­‐pair  individual-­‐allocation  randomized  trial  with  449  girls  
allocated   to   the   intervention   or   control   group   (the   latter   continuing   with   their   usual   education   in  
school).   Data   for   all   participants   were   collected   by   questionnaire   at   three   points   in   time:   prior   to  
random   allocation   (baseline),   immediately   post-­‐intervention   at   22   weeks   post-­‐baseline   (follow-­‐up  
one)  and  one  year  after  the  intervention  (follow-­‐up  two).  
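A matched-pair individual-allocation design of this kind can be sketched as follows. This is a generic illustration rather than the trial's actual procedure; the baseline risk scores and the pairing rule are invented for the example:

```python
import random

def matched_pair_allocation(participants, match_key, seed=42):
    """Sort participants by a baseline characteristic, pair adjacent ones,
    then randomly assign one member of each pair to the intervention arm."""
    rng = random.Random(seed)
    ordered = sorted(participants, key=match_key)
    intervention, control = [], []
    # Walk through adjacent pairs; within each pair, a coin flip decides
    # which member receives the intervention.
    for i in range(0, len(ordered) - 1, 2):
        a, b = ordered[i], ordered[i + 1]
        if rng.random() < 0.5:
            a, b = b, a
        intervention.append(a)
        control.append(b)
    return intervention, control

# Hypothetical participants matched on an invented baseline risk score.
girls = [{"id": n, "risk": r} for n, r in
         [(1, 0.9), (2, 0.2), (3, 0.8), (4, 0.3), (5, 0.5), (6, 0.6)]]
arm_i, arm_c = matched_pair_allocation(girls, match_key=lambda g: g["risk"])
print(len(arm_i), len(arm_c))  # 3 participants in each arm
```

Pairing on a baseline characteristic before randomising helps to keep the two arms comparable, which matters in smaller trials where chance imbalances are more likely.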
 
At follow-up two, which was the analytical focus for primary outcomes (last sex without contraception in the previous three months; more than one episode of sex without contraception in the previous three months; expectation of teenage parenthood; youth development score), there was no evidence of intervention benefits for the programme's four primary outcomes. However, there was evidence of benefits for three secondary outcomes: reductions in low self-esteem, in low sexual health knowledge and in difficulty with discussing the pill.
 
These positive findings were consistent with participants' and stakeholders' self-reported perceptions of intervention benefits, identified through an additional process evaluation that explained how the benefits were brought about. This enabled the provider organisation to learn from the evaluation and adapt its programme accordingly, highlighting the value of methodologies used alongside the RCT.

 
 
Teens and Toddlers is the only youth development project in the UK designed to raise the aspirations of young people by pairing them as a mentor and role model with a child in a nursery who is in need of extra support. The organisation measures the impact of the project on both the teenagers and the toddlers taking part. For the toddlers, a matched comparison group study showed that those taking part made significantly greater improvements in some of the key outcomes than their peers in the comparison group. A comparison group study of the impact on the teenagers' attendance and attainment at school is also underway.
 
Based   on   their   evaluation   of   the   impact   of   the   programme   on   the   toddlers,   Teens   and   Toddlers   are  
now  validated  at  Standard  3  of  Project  Oracle’s  Standards  of  Evidence.    
 
 
 

 
 

Conclusions  
 
This  synthesis  study  has  examined  when  and  where  randomised  controlled  trial  
evaluations  have  been  carried  out,  with  which  outcomes.  We  found  only  eleven  
evaluations  of  children  and  youth  interventions  in  London  that  fitted  our  criteria  
so   drawing   general   conclusions   is   problematic.   Nevertheless,   we   can   put  
forward   some   key   considerations   for   these   and   future   RCT   studies   in   children  
and  youth  interventions.  
 
• A strong trial showing non-significant results offers evidence that an intervention is not working. Less rigorous methodologies can overestimate a programme's effectiveness.

• However,  an  RCT,  depending  on  the  design,  can  also  help  to  demonstrate  which  aspects  of  a  
programme  are  having  the  greatest  effect,  and  how  it  could  be  further  improved.  

• RCTs are not magic bullets: they still need a robust trial design.

• In some cases, this means considering whether the evaluation is of sufficient size to detect a programme's effect.

• In many cases, it is best to combine an RCT with a process evaluation, as this will help to understand both what actually happened during the trial and what resulted from the intervention.

• RCTs  that  show  no  impact  are  also  important  because  they  can  help  to  understand  an  
intervention  better  and  ensure  that  scarce  public  resources  are  directed  at  effective  
programmes  or  interventions.    

 
 

 
References  
 

Berk,R.A.  (2005)  Randomized  experiments  as  the  bronze  standard  Journal  of  Experimental  
Criminology  1:417-­‐33.    

Black,N.  (1996)  Why  we  need  observational  studies  to  evaluate  the  effectiveness  of  health  care  British  
Medical  Journal,  312,  1215  -­‐1218.  

Bonell, C., Maisey, R., Speight, S., Purdon, S., Keogh, P., Wollny, I., Sorhaindo, A. and Wellings, K. (2013) "Randomized controlled trial of 'teens and toddlers': a teenage pregnancy prevention intervention combining youth development and voluntary service in a nursery", Journal of Adolescence 36(5): 859–70. doi: 10.1016/j.adolescence.2013.07.005.

Briskman,J.,  Castle,J.,  Blackeby,K.,  Bengo,C.,  Slack,K.,  Stebbens,C.,  Leaver,W.,  &  Scott,  S.  (2012)  
Randomized  Controlled  Trial  of  the  Fostering  Changes  Program  Available  at  
https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/183398/DFE-­‐
RR237.pdf  

Butler, S., Baruch, G., Hickey, N. and Fonagy, P. (2011) "A randomized controlled trial of multisystemic therapy and a statutory therapeutic intervention for young offenders", Journal of the American Academy of Child and Adolescent Psychiatry 50(12): 1220–35. doi: 10.1016/j.jaac.2011.09.017.
 
Cartwright,N.  and  Hardie,J.  (2012)  Evidence-­‐Based  Policy  A  Practical  Guide  to  Doing  It  Better  OUP  USA  
 
Chittleborough,C.R.,  Nicholson,A.L.,  Basker,E.,  Bell,S.,  and  Campbell,R.  (2012)  Factors  influencing  hand  
washing  behaviour  in  primary  schools:  process  evaluation  within  a  randomised  controlled  trial  Health  
Educ  Res.,  27(6),  1055–1068.  doi:10.1093/her/cys061.  

Graffy,J.,  Taylor,J.,  Williams,A.,  Eldridge,S.  (2004)  Randomized  controlled  trial  of  support  from  
volunteer  counsellors  for  mothers  considering  breast  feeding  accessed  August  24,  2014  British  
Medical  Journal  http://www.bmj.com/content/328/7430/26  

Haynes, L., Service, O., Goldacre, B. and Torgerson, D. (2012) "Test, Learn, Adapt: Developing Public Policy with Randomised Controlled Trials", Cabinet Office Behavioural Insights Team. Available at http://ssrn.com/abstract=2131581 or http://dx.doi.org/10.2139/ssrn.2131581
 
Kavanagh,  J.,  Trouton,  A.,  Oakley,  A.,  Powell,  C.  (2006).  A  systematic  review  of  the  evidence  for  
incentive  schemes  to  encourage  positive  health  and  other  social  behaviours  in  young  people.  London:  
EPPI-­‐Centre,  Social  Science  Research  Unit,  Institute  of  Education,  University  of  London.  
 
McCambridge, J., Slym, R.L. and Strang, J. (2008) "Randomized controlled trial of motivational interviewing compared with drug information and advice for early intervention among young cannabis users", Addiction. doi: 10.1111/j.1360-0443.2008.02331.x.
 
O'Leary-­‐Barrett,  M.,  Topper,  L.,  Al-­‐Khudhairy,  N.,  Pihl,  R.O.,  Castellanos-­‐Ryan,  N.,  Mackie,  C.J.,  Conrod,  
P.J.  (2013).  “Two-­‐Year  Impact  of  Personality-­‐Targeted,  Teacher-­‐Delivered  Interventions  on  Youth  

 
Internalizing  and  Externalizing  Problems:  A  Cluster-­‐Randomized  Trial.”  
http://www.ncbi.nlm.nih.gov/pubmed/23972693  
 
Peters,  E.,  Landau,  S.,  McCrone,  P.,  Cooke,  M.,  Fisher,  P.,  Steel,  C.,  Evans,  R.,  Carswell,  K.,  Dawson,  K.,  
Williams,  S.,  Howard,  A.  and  Kuipers,  E.  (2010).  “A  randomised  controlled  trial  of  cognitive  behaviour  
therapy  for  psychosis  in  a  routine  clinical  service.”  Acta  Psychiatr  Scand.  2010  Oct;  122(4):302-­‐18.  doi:  
10.1111/j.1600-­‐0447.2010.01572.x.  Epub  2010  Jun  28.  
 
Shadish,W.R.,   Cook,D.C.,   Campbell,D.T.   (2002)   Experimental   and   Quasi-­‐experimental   Designs   for  
Generalized  Causal  Inference  Houghton  Mifflin  

Stephenson,  J.,  Strange,  V.,  Forrest,  S.,  Oakley,  A.,  Copas,  A.,  Allen,  E.,  Babiker,  A.,  Black,  S.,  Ali,  M.,  
Monteiro,  H.,  Johnson,  A.M.,  RIPPLE  study  team.  (2004)  “Pupil-­‐led  sex  education  in  England  (RIPPLE  
study):  cluster-­‐randomised  intervention  trial.”  Lancet.  2004  Jul  24-­‐30;  364(9431):338-­‐46.  
http://www.ncbi.nlm.nih.gov/pubmed/15276393  
 
Torgerson,  D.  J.,  and  Torgerson,  C.  (2008)  Designing  randomised  trials  in  health,  education,  and  the  
social  sciences:  An  introduction.  New  York:  Palgrave  Macmillan.  

Toroyan, T., Roberts, I., Oakley, A., Laing, G., Mugford, M. and Frost, C. (2003) "Effectiveness of out-of-home day care for disadvantaged families: randomised controlled trial", British Medical Journal. Available at http://www.bmj.com/content/327/7420/906

Watt  R.G.,  Hayter,  A.,  Ohly,  H.,  Pikhart,  A.,  Draper,K.,  Crawley,H.,  McGlone,P.,  Cooke,L.,  Moore,L.  and  
Pettinger,C.  (2012)  “Exploratory  and  developmental  trial  of  a  family  centred  nutrition  intervention  
delivered  in  Children's  Centres”,  accessed  August  27,  2014,  University  College  London  and  the  
University  of  Plymouth  (http://www.ucl.ac.uk/dph/research/finalreport)  

West  A.,  Spring  B.  (2014)  “Randomized  Controlled  Trials”,  Evidenced-­‐Based  Behavioral-­‐Practice  
[EBBP],  accessed  April  17,  2014,  http://www.ebbp.org/course_outlines/rcts.pdf.  

 
 
 

           Project  Oracle  -­‐  Hub  Westminster,  80  Haymarket,  London,  SW1Y  4TE,  UK
         www.project-­‐oracle.com,  info@project-­‐oracle.com,  twitter  @project  oracle,  020  7148  6726  
Project  Oracle  Evidence  Hub™  is  a  registered  company  in  England  and  Wales.  Company  number:  9131843.  All  Rights  
Reserved.  
   Any  intellectual  property  rights  arising  in  the  Project  Oracle  methodology  are  the  exclusive  property  of  the  
Project  Oracle  delivery  team.    ©Project  Oracle  2014