Walking the talk: the need for a trial register for development interventions
Abstract: Recent advances in the use of randomisation within the field of development promise to greatly enhance our knowledge of what works and why. A core argument supporting randomised studies is the claim that they have high internal validity. We argue that this claim is weak as long as a trial register of development interventions is not in place. This deficiency is due to the possibilities for data mining created by analyses of multiple outcomes and subgroups. Drawing on experience from the medical sciences and a recent example from microfinance, we argue that a trial registry would also enhance external validity and foster innovative research.
Keywords: Impact assessment, randomised controlled trials, trial registry.
JEL: C93, O12
Draft, 25 November 2010
Introduction
During the last 15 years, randomised controlled trials (RCTs) have become extremely popular for assessing the impact of social interventions. There are several reasons for this: general dissatisfaction with using regression techniques on non-experimental data (Freedman, 1991), as well as the theoretical advantages of randomisation in overcoming the selection biases common to other forms of impact assessment (Angrist and Pischke, 2009; Banerjee and Duflo, 2009). However, the approach has also been criticised, for example for low external validity and the consequent inability to generalise findings to other contexts (Deaton, 2009; Rodrik, 2008).
In this paper we do not wish to take sides in that debate. Randomisation is indeed a promising approach for obtaining causal inference in the social sciences, although it comes with some limitations. Instead, we simply argue that current practice often falls short of the stated aims. To remedy this, we propose the establishment of a trial registry for development interventions. As we shall explain below, this would greatly improve the internal validity of the randomisation approach in development and would contribute to higher external validity as well.
This is not a completely novel suggestion. Duflo et al. (2006) have previously suggested the establishment of a database of RCTs which should also "include the salient features of the ex-ante design (outcome variables to be examined, subgroups to be considered, etc.)" (Duflo et al., 2006: 18). Moreover, Ron Bose of the International Initiative for Impact Evaluation has suggested that the Consolidated Standards of Reporting Trials (CONSORT) for controlled medical trials should be adopted (with some extensions) for development interventions (Bose, 2010). The CONSORT checklist specifies the pieces of information which should be included when reporting on medical RCTs. Item number 23 in the original CONSORT list is "Registration number and name of trial registry" (Moher et al., 2010). Hence, adopting CONSORT would automatically call for a trial registry.[1]

[1] Requiring authors reporting on RCTs to follow an augmented CONSORT would also lead to a much needed improvement in reporting on other issues, e.g., the method of randomisation (Bruhn and McKenzie, 2009).
However, neither Duflo et al. (2006) nor Bose (2010) elaborate on these suggestions, and to the best of our knowledge they have not been picked up by other researchers or practitioners in the field. We believe that the benefits of introducing a trial registry are both nontrivial and far too important to be mentioned only in passing. Or to put it differently: without a trial registry, we fail to see how the randomisation approach can deliver the gold standard its proponents envision.

We elaborate on these conjectures in what follows and also provide some specific suggestions for the most important features of such a registry. While drawing on research and practice within economics, medicine and epidemiology, we rely on recent randomised studies from microfinance to illustrate our points.
The immediate benefits of a trial registry

The implementation of a trial registry for development interventions has two immediate desirable consequences. First, and most importantly, registering a trial in advance will greatly improve internal validity, which is often considered the mainstay of RCTs. To illustrate why this is the case, consider the generally accepted advantage of RCTs over other types of studies: the ability to draw causal inference.
The issue at stake is attribution: how do we establish that the observed effect stems from our intervention and not from some excluded confounder? In non-experimental studies using regression analysis, one attempts to isolate the causal effect by controlling for a potentially long list of possible confounders or by using instrumental variable (IV) techniques. The obvious problem is that, in theory, it is impossible to know which confounders are relevant (or which instruments are valid). A regression analysis of the connection between asbestos in drinking water and cancer controlled for a large number of background variables but forgot to include smoking. Results were "highly statistically significant", but for men only. Men smoke more than women, and smoking thus appeared to be an unmeasured confounder (Freedman, 1999; Kanarek et al., 1980). Moreover, the researchers seem to have carried out over a hundred different specifications. This is what we in this context call data mining: analysing a large number of subgroups and variables and reporting only the significant results.
In a randomised study, on the other hand, a number of people or areas are divided randomly into two or more groups and the intervention in question is implemented in only one of the groups. In this way, all confounders will in expectation be evenly distributed across the groups, and any difference on the outcome measure stems from the intervention alone. The researcher does not have to decide which confounders to include or exclude, for which reason the possibilities for data mining are eliminated. The attribution problem is solved, the data have spoken, and data mining is impossible.
Correct? Not entirely. Data mining can be a problem in randomised studies as well. Most studies look at more than one outcome measure and often at several subgroups. If conventional significance levels are used for the individual tests, there is a substantial chance of committing a type-I error, i.e., incorrectly rejecting a true null hypothesis. In other words, studies of this kind are likely to find statistically significant effects on at least one of the outcomes or for one of the subgroups, even in the absence of such effects, simply due to sampling variation. At the same time, they are prone to various forms of more or less conscious data mining: a researcher might report only on the subgroups and outcomes where there are significant results, or simply leave out certain insignificant findings.
Naturally, proponents of randomisation are aware of this. As Duflo et al. phrase it in their toolkit for RCTs: "A researcher testing ten independent hypotheses at the 5% level will reject at least one of them with a probability of approximately 40%." (2006: 63). The same toolkit gives two typical, but in our view incomplete, suggestions for ways out of this. First, significance levels should be adjusted so that the overall probability of rejecting at least one outcome (or subgroup) in a "family of outcomes" is held below, for example, 5%. The exact method depends on the assumed dependence between the different outcomes (or subgroups); if the outcomes are assumed to be fully independent, the significance level used for each individual test can simply be divided by the number of tests (the Bonferroni correction; Abdi, 2007). Second, one can test the overall treatment effect on the different outcomes (subgroups) in a family by constructing a mean standardised outcome which averages the effects on the individual outcomes and takes into account that the outcomes are correlated. One example is Ashraf et al. (2010), who report results for an index comprising nine different factors of empowerment; the index itself is calculated in two different ways.
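To fix the magnitudes involved, the following sketch reproduces the calculations with simulated data: the roughly 40% family-wise rejection probability from the quotation, the adjusted per-test significance levels, and one common construction of a mean standardised index (an equally weighted average of z-scores) of the kind reported by Ashraf et al. (2010). All sample sizes and variable names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Family-wise error rate: with m independent tests at level alpha, the
# probability of at least one false rejection is 1 - (1 - alpha)^m.
alpha, m = 0.05, 10
print(1 - (1 - alpha) ** m)        # ~0.40, the figure quoted above

# Per-test significance levels that hold the family-wise rate at alpha:
print(alpha / m)                   # Bonferroni: valid under any dependence
print(1 - (1 - alpha) ** (1 / m))  # Sidak: exact under independence

# One common construction of a mean standardised index: standardise each
# outcome by the control group's mean and standard deviation, then average,
# giving a single summary measure (and hence a single test) per unit.
outcomes = rng.normal(size=(500, 9))  # 500 households, 9 outcomes (simulated)
treated = rng.integers(0, 2, size=500).astype(bool)
control = outcomes[~treated]
z = (outcomes - control.mean(axis=0)) / control.std(axis=0)
index = z.mean(axis=1)
print(index[treated].mean() - index[~treated].mean())  # effect on the index
```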
These are purely mathematical operations, and whereas they do make data mining more difficult, it remains entirely possible. The decision about which outcomes or subgroups to eventually include in a "family" is left to the researcher, and hence it can be influenced, more or less consciously, by preliminary findings. Basically, the suggested solutions place the internal validity of the results in the hands of the researcher.[2] A trial registry where the relevant outcomes and subgroups are specified and registered ex ante would eliminate this risk, thereby greatly improving the internal validity of the findings.

[2] To some extent, this is of course always the case. Any randomised experiment in social science must at some point rely on the subjective judgement of "experts" with local and/or context-specific information (Cartwright, 2007).
A second benefit of a trial registry is that it will contain both RCTs that find an effect and RCTs that find no effect. Thus, the results from RCTs that find little or no effect are also accumulated. This can mitigate the publication bias whereby only significantly positive or negative effects are reported in journal articles.
This is especially important for external validity: RCTs have commonly been criticised for the limited possibility of extrapolating the results to other settings and/or scaling the results to relevant intervention levels (Deaton, 2009; Rodrik, 2008). An obvious way of improving the external validity is to accumulate as much evidence as possible from diverse settings and types of interventions. Only then can we hope to learn whether the findings from one RCT are robust to changes in the settings and/or the level of intervention.
Is a trial registry really needed?

Is it possible that we do not need a trial registry in the social sciences? One could point out that the choice of outcome variables and subgroups is trivial, that the experimental design itself restricts which variables can be tested, or that alternative safeguards are already in place. Glaeser thus argues that using an experimental design limits the number of possible outcome variables and that this makes data mining "essentially disappear" (Glaeser, 2006: 20).
The choice of outcome variables, however, is far from trivial. As an example, Karlan and Zinman note that there is no "natural summary statistic" for household utility and thus choose to "measure treatment effects on a range of household survey variables that capture economic behavior and subjective well-being" (Karlan and Zinman, 2010: 4). Likewise, the advance of initiatives like Measuring the Progress of Societies, supported by the OECD, EU and UN, shows that welfare indicators are not straightforward (OECD, 2010).
But could other steps do the trick? Glaeser argues in favor of making data publicly available, and some economics journals have taken steps in this direction, e.g., the American Economic Journal: Applied Economics, which has made it mandatory for submissions to make data available together with calculation procedures to ensure the possibility of replicating the results. While this is certainly a good idea, it is no safeguard against data mining. Ironically, availability of data and computation procedures is likely to be more relevant in non-RCT studies: RCTs usually have fairly simple regression frameworks, so the exact programming of the analysis tends to matter less than in non-experimental approaches.
Letting researchers research: other benefits of a trial registry

If a trial registry is implemented, unsubstantiated results will be less common and easier to discover. Moreover, and just as importantly, substantiated yet surprising and small effects would be given more weight. Consider the situation without a trial registry: when reviewers of academic journals receive a paper reporting on a randomised trial, they need to judge whether the reported results are genuine or whether they could be a result of data mining. Most likely, they will look at whether the outcome measures in question are obvious choices as dependent variables. The basis for doing this is the reviewers' sense of the field, the received knowledge in the field and possibly their own understanding of the causality in question. Conversely, with a trial registry in place this decision can be made by the researcher in advance, who is then free to choose the outcome variable(s) of interest, including more novel outcomes and hence variables that are "uncommon" according to the received wisdom in the field. It is the registry that provides the a priori credibility of such outcome variables, not the common sense of the reviewers.
Recent evidence in microfinance provides an illustrative example. Banerjee et al. (2009) carry out a randomised experiment on access to microfinance and find no effect on female empowerment or total consumption, which in the general debate are the typically imagined outcomes of microfinance. At the same time, Banerjee et al. find a negative effect on spending on temptation goods and a positive effect on spending on durables. Also, the share of people opening businesses was 1.7 percentage points higher in the treatment areas than in the control areas, an increase from 5.3% to 7.0%; one in five of the loans that came from the opening of new microfinance branches resulted in the start of a new business (Banerjee et al., 2009). The problem with these results is not so much that they are small, but that the outcome variables are non-trivial. In the light of standard economic theory, the expectation would be that increased credit availability leads to business startups among previously credit-constrained clients. In this view, a rise of 1.7 percentage points seems a very small effect, and the absence of effects on consumption and female empowerment is worrying.
But there are other frameworks for understanding microfinance. Microfinance can be seen as a commitment device for sophisticated agents with time-inconsistent preferences (Ashraf et al., 2006). Such agents have a tendency to spend income immediately when they receive it, but because they are aware of this, it is possible for them to commit to not spending future income; microfinance loan or savings services can serve as such commitment devices. In this framework, moving spending from temptation goods to investment goods is a significant result, and a rise in business startups of 1.7 percentage points might be quite a change.
Thus, the two frameworks yield two different predictions. If the relevant outcome measures are decided upon by a broad range of social science researchers, the advance of new and surprising findings is unlikely. In our opinion, this is the case when there is no trial registry in place. With a trial registry, on the other hand, researchers can decide upon the framework themselves, and new and groundbreaking evidence will stand a better chance of being acknowledged. This is the only way to move the research frontier.
What are the important features of a trial registry?

A trial registry is different from a database of evaluations in a number of respects. To ensure that the trial registry would work as a guard against data mining as described above, it would need to have a number of salient features, most of which are drawn and adapted from the WHO standards for trial registries in the medical field (WHO, 2010).

• Content. The registry should be open to all prospective registrants, should publicly display the key information of each trial and should never delete a registered trial. It should at least contain the following information about each trial: a unique ID number, registration date, sources of monetary support, contact details, country, intervention, type of study, date of first enrollment, primary outcome (of which there can be only one), secondary outcomes and planned subgroup analyses.[3] A sketch of such a record follows this list.

• Unambiguous identification. The registry should have processes to identify double registrations, and it should work towards bridging to other registries to enable cross-checks. This is important to avoid multiple ex-ante entries of the same trial of which only the most suitable entry is invoked ex post.

• Governance. To ensure that the trial registry, its mandate and its procedures are perceived as legitimate, it should be governed by a board with broad representation from actors with a stake in impact assessments. Representation should span low- and high-income countries, and the most important players in the field should be represented; examples of the latter include the OECD, 3ie, the World Bank, IPA and J-PAL.

[3] This list was inspired by the WHO Trial Registration Data Set, version 1.2.1.
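To fix ideas, the sketch below shows what a minimal record carrying these fields could look like. It is only an illustration: the field names and the example entry are hypothetical, and an actual schema should follow the WHO Trial Registration Data Set directly.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# frozen=True: a registered entry is never edited in place; amendments would
# be stored as new, separately visible versions.
@dataclass(frozen=True)
class TrialRegistration:
    trial_id: str                  # unique ID number
    registration_date: date
    funding_sources: List[str]     # sources of monetary support
    contact: str
    country: str
    intervention: str
    study_type: str
    first_enrollment: date
    primary_outcome: str           # exactly one primary outcome
    secondary_outcomes: List[str] = field(default_factory=list)
    subgroup_analyses: List[str] = field(default_factory=list)

# A purely hypothetical example entry:
entry = TrialRegistration(
    trial_id="DEV-2010-0001",
    registration_date=date(2010, 11, 25),
    funding_sources=["Example Foundation"],
    contact="pi@example.org",
    country="Philippines",
    intervention="Access to a commitment savings product",
    study_type="Individually randomised trial",
    first_enrollment=date(2011, 3, 1),
    primary_outcome="Household savings balance after 12 months",
    secondary_outcomes=["Female empowerment index"],
    subgroup_analyses=["By gender of account holder"],
)
```

The immutability of the record is the crucial design choice: it is what makes changes following the first registration clearly visible, and thus what distinguishes a registry from the existing databases discussed next.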
The development sector already has some databases of randomised trials (e.g. 3ie's database; OECD's DEReC), but these do not meet the list of requirements above: in particular, they do not allow for clear identification of the original registration by making changes following the first registration clearly visible. The same is true for registries created within the fields of education (WWC, 2010) and criminology (C2-SPECTR, 2010). Needless to say, the institutions that currently maintain databases of randomised trials for development, like the OECD and 3ie, would certainly be good candidates as hosts for a trial registry.
Apart from the registry itself, additional initiatives are needed to promote its usage and to build a credibility-promoting infrastructure. A lot can be learned from the medical sciences, where registration of an experiment in a trial registry is required for publication of original research; currently, registration is required by the more than 850 journals which follow the Uniform Requirements for Manuscripts issued by the International Committee of Medical Journal Editors. Similarly, funders, editors and publishers who promote RCTs should agree on a set of minimum standards for reporting, and these should include trial registration. The current practice of submitting trials to institutional review boards for ethical approval could provide a starting point for this work.
Conclusions

Recent advances in the use of randomisation as a means of assessing the effects of development interventions have contributed greatly to our knowledge of what works and why. By taking causality seriously, the approach has enabled researchers to provide evidence of effects as well as advances in theory development, for example within development economics. In doing this, proponents have pledged allegiance to hard scientific ideals and claimed a "credibility revolution in development economics" (Angrist and Pischke, 2010).

We argue that development work and development economics still lack much of the infrastructure that is needed to generate this type of credibility. If randomised studies of development interventions are to stay true to their claims of internal validity, we need to establish a bulwark against the data mining options that pose a threat to credibility also for randomised studies. A trial registry will enable readers and reviewers to judge whether the results reported by a randomised study are credible or whether they could be a result of chance. Results on outcome variables and subgroups not mentioned in the ex-ante registration of the RCT could still be reported, but it should be possible to distinguish primary outcomes from secondary outcomes. Apart from making the claim of internal validity more trustworthy, a trial registry would also increase external validity by facilitating comparisons of trials across different contexts. Finally, a trial registry would place the decision on relevant outcome variables solely with the researcher, allowing her to include novel outcome measures while still being able to report results credibly. In this way, a trial registry would promote innovation in theory and practice.
References

3ie, Database of Impact Evaluations. International Initiative for Impact Evaluation, http://www.3ieimpact.org/database_of_impact_evaluations.html, accessed October 2010.
Abdi, H. (2007) The Bonferonni and Šidák corrections for multiple comparisons, in: N.J. Salkind (ed.), Encyclopedia of Measurement and Statistics (New York: Sage Publications), pp. 103-107.
Angrist, J.D. and Pischke, J.-S. (2009) Mostly Harmless Econometrics: An Empiricist's Companion (Princeton: Princeton University Press).
Angrist, J.D. and Pischke, J.-S. (2010) The credibility revolution in empirical economics: how better research design is taking the con out of econometrics. Journal of Economic Perspectives, 24(2), pp. 3-30.
Ashraf, N., Karlan, D. and Yin, W. (2006) Tying Odysseus to the mast: evidence from a commitment savings product in the Philippines. Quarterly Journal of Economics, pp. 635-672.
Ashraf, N., Karlan, D. and Yin, W. (2010) Female empowerment: impact of a commitment savings product in the Philippines. World Development, 38(3), pp. 333-344.
Banerjee, A. and Duflo, E. (2009) The experimental approach to development economics. NBER Working Paper.
Banerjee, A., Duflo, E., Glennerster, R. and Kinnan, C. (2009) The miracle of microfinance? Evidence from a randomized evaluation. Financial Access Initiative and Innovations for Poverty Action, Working Paper.
Bose, R. (2010) CONSORT extensions for development effectiveness: guidelines for the reporting of randomised control trials of social and economic policy interventions in developing countries. Journal of Development Effectiveness, 2(1), pp. 173-186.
Bruhn, M. and McKenzie, D. (2009) In pursuit of balance: randomization in practice in development field experiments. American Economic Journal: Applied Economics, 1(4), pp. 200-232.
C2-SPECTR (2010) The Campbell Collaboration Trials Register (C2-SPECTR). http://geb9101.gse.upenn.edu/RIS/RISWEB.ISA.
Deaton, A. (2009) Instruments of development: randomization in the tropics, and the search for the elusive keys to economic development. NBER Working Paper.
Duflo, E., Glennerster, R. and Kremer, M. (2006) Using randomization in development economics research: a toolkit. National Bureau of Economic Research, Cambridge MA, Technical Working Paper 333.
Freedman, D. (1991) Statistical models and shoe leather. Sociological Methodology, 21, pp. 291-313.
Freedman, D. (1999) From association to causation: some remarks on the history of statistics. Statistical Science, 14(3), pp. 243-258.
Glaeser, E. (2006) Researcher incentives and empirical methods. NBER Working Paper.
Kanarek, M., Conforti, P., Jackson, L., Cooper, R. and Murchio, J. (1980) Asbestos in drinking water and cancer incidence in the San Francisco Bay Area. American Journal of Epidemiology, 112(1), pp. 54-72.
Karlan, D. and Zinman, J. (2010) Expanding credit access: using randomized supply decisions to estimate the impacts. Review of Financial Studies, 23(1), pp. 433-464.
Moher, D., Hopewell, S., Schulz, K., Montori, V., Gøtzsche, P. et al. (2010) CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. British Medical Journal, 340, c869.
OECD (2010) Istanbul Declaration. OECD, Istanbul, http://www.oecd.org/dataoecd/23/54/39558011.pdf, retrieved October 2010.
OECD's DEReC, DAC Evaluation Resource Centre. OECD, Paris, http://www.oecd.org/pages/0,2966,en_35038640_35039563_1_1_1_1_1,00.html, accessed October 2010.
Rodrik, D. (2008) The new development economics: we shall experiment, but how shall we learn? Working Paper, Kennedy School, Harvard University, Boston.
WHO (2010) WHO Registry Criteria (Version 2.1, April 2009). http://www.who.int/ictrp/network/criteria_summary/en/index.html, accessed October 2010.
WWC (2010) What Works Clearinghouse. http://ies.ed.gov/ncee/wwc/references/registries/index.asp?NoCookie=yes.