Randomised Controlled Trials (RCTs) for
Projects with Children and Young People
©Project Oracle Evidence Hub™
To obtain the best results, an RCT requires a thorough methodology, a systematic randomisation
process and consequent in-depth data gathering. This often means that significant time and
resources must be allocated to the study. Commissioners and funders who encourage RCT
evaluations should therefore also be aware of the need to fund such studies adequately.
Finally, this is an exploratory synthesis, and it offers a basis for further research. We
recommend a further study to review the usefulness of this methodology for understanding the
effectiveness of children and youth interventions in the capital. A suggested focus would be to
examine how RCTs are or are not being used to influence policy-making processes.
At Project Oracle we look forward to working to develop a greater understanding of
how the right support can ensure that RCTs are used to their full potential.
Contents
Key Insights
    The evidence
    Caveats
    Recommendations
Glossary of key terms
Introduction
Project Oracle Synthesis Studies: Aims and Approach
    Synthesis study aims
    Our realist approach to synthesis sets out to:
This study: What are RCTs and why do they matter?
    What are the benefits of RCTs?
    What are the shortcomings?
The material: what evidence is there?
    Eligibility criteria
    The selected material*
Mapping out RCTs in London
    Participants
    Case Study 1: Brandon Centre Multisystemic Therapy Randomised Controlled Trial
    Programme types
    Contexts
    Case Study 2: The RIPPLE study of peer-led sex education in secondary schools
Creating confidence in and gaining useful learning from RCTs
    Case Study 3: A Randomised Controlled Trial of 'Teens and Toddlers'
Conclusions
References
Glossary of key terms
Context – the external factors which influence how individuals participate in or organise and carry out
an intervention. The context affects the availability of resources and the opportunities for a project to
be carried out. For example, the arrangement of key institutions for young people, such as schools,
colleges and young offender institutions, can facilitate the building of networks of support.
Interventions and their participants are embedded in social contexts characterised by, for example,
dynamics of inclusion and exclusion, material inequality, and racial and ethnic diversity.
Evidence – the range of available robust and reliable information which demonstrates the impact of a
policy, public service or programme. Evidence is knowledge which has been substantiated through
empirical or theoretical research. Evidence is not the only type of knowledge, but it is often claimed
to be more certain because of the rigour with which its claims have been tested.
Indicator – a measurable characteristic or process which reflects (indicates) that change has occurred.
Intervention – a planned action which aims to bring about positive change. This may aim to improve
social inclusion, empowerment and equality. The term intervention refers to person-centred
approaches which aim to change knowledge, understanding, attitudes and behaviour.
Methodology – the framework and assumptions, which enable the design of research, evaluation or
secondary data analysis. This is different to a research or evaluation method, which is a tool for
collecting and analysing data.
Outcomes – the end result of a process is its outcome. The outcomes of a policy, a project or an
intervention are the changes that it has caused, or is aiming to cause. This is not the same as an
output, which refers to the products offered by a policy, a project or an intervention, such as the
number of workshops delivered, the number of people trained, or a website built. Outcomes are the
changes, benefits, learning or other effects that happen as a result of a policy, a project or an
intervention. These can be immediate, intermediate or long term.
Participant – an individual actively involved with an intervention which is aimed at bringing about
change for them, such as in their behaviour or attitudes.
Programme – an ongoing series of activities intended to address an issue or problem. Whereas a project has a
defined beginning and end, a programme may be more open-ended: an overarching programme can
be composed of various projects.
A randomised controlled trial (RCT) is an evaluation methodology which aims to draw an objective
picture of the difference that a project, programme or policy makes by comparing its effect on
separate groups. Participants are allocated to groups which are either affected by the project or
policy (the intervention group) or not (the control group). The selection of people to each group is
random, to eliminate as much bias as possible. The difference in outcomes between the groups is
considered to demonstrate the effect of the project or policy.

…measured. A further study can uncover the extent to which these RCTs are useful to funders,
commissioners and project providers from London's children and youth sector.

The Cabinet Office's seminal report 'Test, Learn, Adapt' claims that 'Randomised controlled trials
(RCTs) are the best way of determining whether a policy is working … [and] by enabling us to
demonstrate just how well a policy is working, RCTs can save money in the long term' (Haynes et al.
2012: 5). As a result, the report advocates running more RCTs on a broader range of social policies
in order to judge their effectiveness.
So, the RCT appears to fit neatly into the
current requirement for evidence to underpin
policy and practice, and in particular to inform
the way that these are funded. That said, there
is a problem. There is no universal agreement
that RCTs represent the gold standard in
evaluation. Indeed, the approach is positively
rejected by some and feared by others.
All of this means that there is a real need to gain a clearer understanding of who requests and uses
RCTs, and when; and of how useful and practical they are for children and youth initiatives in London.
This synthesis study reviews the available evidence to contribute to an examination of these issues.
Through random assignment of people to one group or another, an evaluator aims to largely remove
the risk of bias in study findings and to evenly distribute the characteristics of participants across
groups. For example, a test of the effectiveness of a new flu medicine may put 10,000 people into two
groups and be confident that there will be a similar number of individuals in each group and that they
will have similar characteristics, such as their age or gender.
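The balance that randomisation produces can be seen in a short simulation. This is an illustrative sketch only: the cohort, the seed and the recorded characteristics are invented for the example, not drawn from any trial in this study.

```python
import random
import statistics

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical cohort of 10,000 people, each with an age and a recorded gender.
people = [{"age": random.randint(18, 80),
           "gender": random.choice(["F", "M"])} for _ in range(10_000)]

# Random allocation: shuffle, then split into two equal groups.
random.shuffle(people)
intervention, control = people[:5_000], people[5_000:]

# With groups this large, the averages of background characteristics
# come out very close in the two groups, without any deliberate matching.
for name, group in [("intervention", intervention), ("control", control)]:
    mean_age = statistics.mean(p["age"] for p in group)
    share_female = sum(p["gender"] == "F" for p in group) / len(group)
    print(f"{name}: mean age {mean_age:.1f}, share female {share_female:.2f}")
```

Running this shows the two groups ending up with near-identical average age and gender composition, which is the sense in which randomisation "evenly distributes" participant characteristics.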
Randomisation, it is argued, allows statistically identical separate groups to be created. If an
intervention, e.g. a social programme, is then administered to one group but not the other (assuming
a two-group design), any differences in outcome between the two groups can be attributed to the
intervention, and alternative explanations for these differences can be ruled out, since the two groups
differ only in whether or not they received the programme.
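The two-group logic above can be sketched as a simulation: give a hypothetical programme a known effect, randomise who receives it, and check that the simple difference in group means recovers that effect. All numbers here (the effect size, the outcome scale, the sample) are invented for the illustration.

```python
import random
import statistics

random.seed(7)  # reproducible illustration

N = 2_000
TRUE_EFFECT = 5.0  # hypothetical boost the programme gives to an outcome score

# Each participant has an underlying baseline outcome score.
baselines = [random.gauss(50, 10) for _ in range(N)]

# Random allocation: half receive the programme, half do not.
indices = list(range(N))
random.shuffle(indices)
treated = set(indices[: N // 2])

# Observed outcome: the baseline, plus the effect only for the treated group.
outcomes = [b + (TRUE_EFFECT if i in treated else 0.0)
            for i, b in enumerate(baselines)]

mean_treated = statistics.mean(outcomes[i] for i in range(N) if i in treated)
mean_control = statistics.mean(outcomes[i] for i in range(N) if i not in treated)

# Because allocation was random, the difference in group means estimates
# the programme's effect without needing to model the baselines at all.
estimated_effect = mean_treated - mean_control
print(f"estimated effect: {estimated_effect:.2f}")
```

The estimate lands close to the true effect of 5.0: the baseline variation averages out across the two randomly formed groups, leaving the intervention as the only systematic difference.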
As a result of this approach, RCTs are considered a powerful tool to help policymakers and
practitioners decide which of several policies, programmes or projects is the most effective, and
to distinguish which interventions are not as effective as they should be or have the potential to be
(Haynes, Service, Goldacre and Torgerson 2012).
“The task of isolating the effects of treatments or programs from other confounding aspects
of selection or design is the researcher’s most significant challenge in coming to a valid
policy conclusion. Through randomization of treatment and control or comparison conditions,
the researcher can assume that such threats to valid conclusions are distributed equally
between the treatment and control conditions … Accordingly, in a randomized study the
effect of treatment is disentangled from the confounding effects of other factors …
Randomized studies are thus the most powerful tool that crime and justice evaluators have
for making valid conclusions about whether programs or treatments are effective.”
(Weisburd 2003: 338)
[Figure: Size of RCTs in London (intervention and control group participants). Trial sizes: 8,000; 1,024; 644; 449; 394; 326; 263; 152; 108 participants.]
Conclusions
This synthesis study has examined when and where randomised controlled trial
evaluations have been carried out, and with what outcomes. We found only eleven
evaluations of children and youth interventions in London that fitted our criteria,
so drawing general conclusions is problematic. Nevertheless, we can put
forward some key considerations for these and future RCT studies in children
and youth interventions.
• A strong trial showing non-significant results would offer evidence that an intervention is not
working. Less rigorous methodologies can overestimate an intervention's effectiveness.
• However, an RCT, depending on the design, can also help to demonstrate which aspects of a
programme are having the greatest effect, and how it could be further improved.
• RCTs are not magical tools; they still need a robust trial design.
• In some cases, this means considering whether the evaluation is of sufficient size to detect a
programme’s effect.
• In many cases, it is best to combine an RCT with a process evaluation, as this will help to
understand what actually happened during the trial as well as what resulted from the
intervention.
• RCTs that show no impact are also important because they can help to understand an
intervention better and ensure that scarce public resources are directed at effective
programmes or interventions.
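The point above about whether a trial is of sufficient size can be made concrete with a standard back-of-the-envelope power calculation. This is a sketch using conventional normal-approximation constants for a two-arm trial comparing means; the function and the example effect sizes are ours, not figures from this study.

```python
import math

def n_per_group(effect_size: float, power: float = 0.80) -> int:
    """Approximate participants needed per group to detect a standardised
    effect size (difference in means divided by the SD) in a two-arm trial,
    at a two-sided 5% significance level."""
    z_alpha = 1.96                      # two-sided alpha = 0.05
    z_beta = {0.80: 0.84, 0.90: 1.28}[power]  # normal quantile for the power
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return math.ceil(n)

# A small effect needs far more participants per group than a large one:
for d in (0.2, 0.5, 0.8):
    print(f"effect size {d}: ~{n_per_group(d)} per group")
```

Under these conventional assumptions, detecting a small standardised effect (0.2) takes roughly 400 participants per group, while a large effect (0.8) needs only around 25. This is why several of the smaller London trials discussed here may have been too small to detect modest but real programme effects.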
References

Berk, R.A. (2005) “Randomized experiments as the bronze standard”. Journal of Experimental
Criminology 1: 417-33.

Black, N. (1996) “Why we need observational studies to evaluate the effectiveness of health care”.
British Medical Journal 312: 1215-1218.

Bonell, C., Maisey, R., Speight, S., Purdon, S., Keogh, P., Wollny, I., Sorhaindo, A. and Wellings, K. (2013)
“Randomized controlled trial of 'teens and toddlers': a teenage pregnancy prevention intervention
combining youth development and voluntary service in a nursery”. Journal of Adolescence 36(5):
859-70. doi: 10.1016/j.adolescence.2013.07.005.

Briskman, J., Castle, J., Blackeby, K., Bengo, C., Slack, K., Stebbens, C., Leaver, W. and Scott, S. (2012)
“Randomized Controlled Trial of the Fostering Changes Program”. Available at
https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/183398/DFE-RR237.pdf

Butler, S., Baruch, G., Hickey, N. and Fonagy, P. (2011) “A randomized controlled trial of multisystemic
therapy and a statutory therapeutic intervention for young offenders”. Journal of the American
Academy of Child and Adolescent Psychiatry 50(12): 1220-35.e2. doi: 10.1016/j.jaac.2011.09.017.

Cartwright, N. and Hardie, J. (2012) Evidence-Based Policy: A Practical Guide to Doing It Better. OUP USA.

Chittleborough, C.R., Nicholson, A.L., Basker, E., Bell, S. and Campbell, R. (2012) “Factors influencing hand
washing behaviour in primary schools: process evaluation within a randomised controlled trial”.
Health Education Research 27(6): 1055-1068. doi: 10.1093/her/cys061.

Graffy, J., Taylor, J., Williams, A. and Eldridge, S. (2004) “Randomized controlled trial of support from
volunteer counsellors for mothers considering breast feeding”. British Medical Journal.
http://www.bmj.com/content/328/7430/26 (accessed 24 August 2014).

Haynes, L., Service, O., Goldacre, B. and Torgerson, D. (2012) “Test, Learn, Adapt: Developing
Public Policy with Randomised Controlled Trials”. Cabinet Office - Behavioural Insights Team. Available
at SSRN: http://ssrn.com/abstract=2131581 or http://dx.doi.org/10.2139/ssrn.2131581

Kavanagh, J., Trouton, A., Oakley, A. and Powell, C. (2006) A systematic review of the evidence for
incentive schemes to encourage positive health and other social behaviours in young people. London:
EPPI-Centre, Social Science Research Unit, Institute of Education, University of London.

McCambridge, J., Slym, R.L. and Strang, J. (2008) “Randomized controlled trial of motivational
interviewing compared with drug information and advice for early intervention among young
cannabis users”. doi: 10.1111/j.1360-0443.2008.02331.x.

O'Leary-Barrett, M., Topper, L., Al-Khudhairy, N., Pihl, R.O., Castellanos-Ryan, N., Mackie, C.J. and
Conrod, P.J. (2013) “Two-Year Impact of Personality-Targeted, Teacher-Delivered Interventions on Youth …”

Stephenson, J., Strange, V., Forrest, S., Oakley, A., Copas, A., Allen, E., Babiker, A., Black, S., Ali, M.,
Monteiro, H., Johnson, A.M. and the RIPPLE study team (2004) “Pupil-led sex education in England
(RIPPLE study): cluster-randomised intervention trial”. Lancet 364(9431): 338-46.
http://www.ncbi.nlm.nih.gov/pubmed/15276393

Torgerson, D.J. and Torgerson, C. (2008) Designing randomised trials in health, education, and the
social sciences: An introduction. New York: Palgrave Macmillan.

Toroyan, T., Roberts, I., Oakley, A., Laing, G., Mugford, M. and Frost, M. (2003) “Effectiveness of
out-of-home day care for disadvantaged families: randomised controlled trial”. Available at
http://www.bmj.com/content/327/7420/906

Watt, R.G., Hayter, A., Ohly, H., Pikhart, A., Draper, K., Crawley, H., McGlone, P., Cooke, L., Moore, L.
and Pettinger, C. (2012) “Exploratory and developmental trial of a family centred nutrition intervention
delivered in Children's Centres”. University College London and the University of Plymouth.
http://www.ucl.ac.uk/dph/research/finalreport (accessed 27 August 2014).

West, A. and Spring, B. (2014) “Randomized Controlled Trials”. Evidence-Based Behavioral Practice
[EBBP]. http://www.ebbp.org/course_outlines/rcts.pdf (accessed 17 April 2014).
Project Oracle - Hub Westminster, 80 Haymarket, London, SW1Y 4TE, UK
www.project-oracle.com, info@project-oracle.com, twitter @project oracle, 020 7148 6726
Project Oracle Evidence Hub™ is a registered company in England and Wales. Company number: 9131843. All Rights
Reserved.
Any intellectual property rights arising in the Project Oracle methodology are the exclusive property of the
Project Oracle delivery team. ©Project Oracle 2014
Project Oracle is funded by