Chapter 2
Ethics in Educational Research
KENNETH R. HOWE AND MICHELE S. MOSES
University of Colorado at Boulder
The ethics of social and educational research has been significantly complicated
over the last several decades as a consequence of the "interpretive turn" and
the ever-increasing use of qualitative research methods that have accompanied
it. In this chapter, we identify what came before and after the interpretive turn
with the traditional and contemporary approaches to research ethics, respectively.
The distinction is a heuristic one. We do not mean to suggest that the interpretive
turn occurred at any precise point in time or that it has completely won out. In
this vein, the traditional approach is no doubt still in currency.
Embedded in the distinction between traditional and contemporary approaches
is another between the protection of research participants ("research subjects"
in the traditional vocabulary) and research misconduct. This, too, is a heuristic
distinction, because it involves significant overlaps. In particular, research misconduct largely subsumes the protection of research participants. Nonetheless, it
is a distinction that has the virtue of familiarity, since it parallels the way federal
regulations and universities divide the issues in research ethics.
We should observe here at the outset that medical research has been at the
forefront of the ethics of research involving humans, both with respect to the
development of vocabularies and frameworks and with respect to the formulation
of federal policy. Social research in general and educational research in particular
have generally followed this lead. We do not make this observation to suggest
that social and educational researchers have remained on the sidelines, simply
applying the precepts of medical ethics. On the contrary, as we shall see, for at
least some theorists an adequate approach to the ethics of social and educational
research requires significantly modifying the vocabularies and frameworks that
have come down to them through the ethics of medical research. We make this
observation instead to apprise readers of why we borrow so heavily from sources
outside education and to alert them to an important part of the history of the
ethics of educational research.
Venturing deeply enough into moral philosophy to fully develop this point
would take us too far afield from the task at hand. We thus adopt the more
modest tasks of examining the most outstanding difficulty for utilitarianism in
the context of research involving human participants and then showing how,
whatever the ultimate theoretical foundations might be, the principles employed
to govern the treatment of research participants are de facto Kantian.
The most outstanding difficulty for utilitarianism is specifying the benefits
and harms that are to go into its calculations. Not only are people likely to
disagree about what these are; utilitarianism also requires that all morally relevant
considerations be cast in terms of benefits and harms, such that, for instance,
the harms done to slaves who must fight to the death are put on the same scale as
the benefits that accrue to those who enjoy watching such a spectacle. Otherwise,
utilitarian calculations would not be possible.
MacIntyre contends that confinement to utilitarian benefits-harms calculations
eliminates two additional kinds of morally relevant considerations in the context
of social research: "wrongs" and "moral harms" (1982). Take the famous (or
infamous) Tearoom Trade study. Keeping his identity as a researcher secret,
Laud Humphreys assumed the role of a lookout, a "watchqueen," in public
restrooms as men engaged in homosexual acts. Arguably, the balance of benefits
over harms in this study was positive, if not for the men actually involved in
the study, then for gay men overall. (There has been much actual discussion
along these lines, and Humphreys saw himself as producing overall beneficial
effects by reducing homophobic stereotyping [see, for example, Beauchamp et
al., 1982].) But restricting the relevant considerations to benefits and harms
circumscribes the analysis in a way that excludes the question of whether deceiving these men did them a moral wrong, independent of the calculation of overall
harms and benefits. It may be argued that Humphreys's deception of these men
disregarded their dignity and their agency, and, in general, treated them as mere
means for achieving other persons' ends. The response that treating persons as
mere means is just one kind of harm to be entered into the benefit-harm calculation
misses the point of the objection and begs the question in favor of utilitarianism's
premise that all morally relevant considerations can be put on the same scale.
The Tearoom Trade example may also be used to illustrate the issue of "moral
harms," the other morally relevant consideration eliminated by confinement to
utilitarian benefit-harm calculations. According to MacIntyre, "Moral harm is
inflicted on someone when some course of action produces in that person a
greater propensity to commit wrongs" (1982, p. 178). It is a plausible conjecture
that, as a result of Humphreys's study, the men involved in it were made more
cynical and distrustful and more inclined to treat others as mere means to pursuing
their own ends. (The Tuskegee study provides a more dramatic example and one
for which "moral harms" have been documented [Haworth, 1997].)
If inflicting moral harm is something that social research ought to avoid, then
the justification for doing so has to be sought beyond utilitarian benefits-harms
calculations. Moral harms cannot be routinely plugged into utilitarian benefitharm calculations; rather, avoiding them places a fundamental constraint on the
use to which such calculations can be put. This is true for moral wrongs as well
because they involve the rights to self-determination and privacy, rights that, in
Dworkin's (1978) suggestive phraseology, "trump" utilitarian calculations.
As mentioned earlier, there is a version of utilitarianism that putatively avoids
the kinds of criticisms just advanced, namely rule utilitarianism. Kelman (1982),
a self-described rule utilitarian, provides a good example of such a view applied
specifically to the ethics of social research.
The benefit that Kelman ultimately seeks to maximize is the "fulfillment of
human potentialities" (1982, p. 41). He concedes, however, the extreme difficulty
involved in determining whether this applies in specific circumstances and, for
this reason, rejects act utilitarianism. He goes on to use "consistency with human
dignity" as his criterion for moral evaluation (1982, p. 42), which he subsequently
identifies (in language almost straight from Kant) with treating "individuals as
ends in themselves, rather than as means to some extraneous ends" (1982, p. 43).
In a related vein, under the rubric of "wider social values" (1982, p. 46), Kelman
embraces the idea that social research should avoid engendering "diffuse harm,"
the "reduction of private space," and the "erosion of trust."
The parallel between Kelman's and MacIntyre's views is striking. Corresponding to MacIntyre's admonition to avoid "moral wrongs," we have Kelman's
admonition to treat persons as "ends in themselves"; corresponding to MacIntyre's admonition to avoid "moral harms," we have Kelman's to avoid "diffuse
harms." In both cases, confinement to utilitarian benefit-harm calculations is
viewed as morally inadequate. If moral justification is to be ultimately utilitarian,
to ultimately fall under the rule of benefit-harm calculations, then it is not only
individually defined benefits and harms that must be taken into account but also
benefits or harms to the moral health of the human community overall.
This should explain why we would say that thinking about the ethical treatment
of participants in social research is de facto Kantian: There is rather widespread
agreement that whatever the ultimate justification for moral conclusions regarding
the treatment of research participants might be, certain ethical principles should
constrain the manner in which researchers may treat research participants in
meeting the traditional utilitarian goals of advancing knowledge and otherwise
benefitting society.
Informed consent is the most central of such ethical principles, and it is
prominent in federal regulations governing social research. The basic idea is that
it is up to research participants to weigh the risks and benefits associated with
participating in a research project and up to them to then decide whether to take
part. And they can do this only if they are informed about and understand what
their participation in the research involves. In this way, their autonomy is protected
in a way it was not in the Tuskegee, Tearoom Trade, and Milgram studies.
Informed consent is de facto Kantian because refusal to participate on the part
of research participants is binding, even if their refusal results in a failure to
maximize presumed benefits.
the risks to their privacy might be and what measures will be taken to ensure
anonymity or confidentiality. In this way, how important privacy might be, and
why, largely devolves to individuals' exercise of autonomy.
Research Misconduct
While the issue of research misconduct encompasses both the treatment of
research participants and fraudulent or deceptive practices of research and reporting, this section focuses primarily on the latter. Even when having no direct
effect on research participants, research misconduct nonetheless wrongs others
within the research community and damages the research enterprise overall. Thus,
in this section, we explore issues of research misconduct among researchers. We
begin with a discussion of the general nature of the scholarly endeavor that
frames how to think about research misconduct. We then examine plagiarism
and data fabrication/misrepresentation. We end with a few observations about how
pressures facing contemporary researchers may contribute to research misconduct.
(Haworth, 1997). As for the harms to researchers and the research professions,
Warwick cites the development of a deceptive, manipulative attitude toward
others; increased restrictions on research activities; and lowered overall quality
of research.
Plagiarism
Instances of plagiarism are perhaps the most common of all research misconduct, in any field. Plagiarism can take different forms: copying another researcher's work verbatim, which is the most blatant form; using intellectual property
without the express permission of the owner of those ideas; or lifting substantial
portions of another's work without any citation of that author. While it often
may be obvious when someone actually copies the work of another, what makes
plagiarism especially complicated to contend with is that it is often very difficult
to locate the exact origins of ideas. Two prominent cases from the biomedical
sciences, the Alsabti case and the Soman case, illustrate these issues.
Elias Alsabti came to the United States in 1977 from Jordan to pursue postgraduate medical education. He was hired by a cancer research laboratory within
Temple University's medical school, where he supposedly did cancer research.
He ended up publishing more than 60 articles within 2 years of his arrival in the
United States, some of which appeared in prestigious journals such as the Journal
of Cancer Research and Clinical Oncology (Broad, 1980a). However, as he
moved from one lab to another, his work became suspect, until he finally was
accused publicly of severe plagiarism and of making up the names of various
listed coauthors. For example, one article by Alsabti published in a European
journal was found to have been copied almost word for word from a 2-year-old
article in a Japanese journal. As the investigation of Alsabti continued, it was
found that he never even had received a medical degree in Jordan (Broad,
1980a). This incident shook the world of medical research and publishing. People
wondered how so many fraudulent articles could have slipped by the screening
review systems. Apparently, even those who had noticed something fishy with
Alsabti's work did not have him investigated. Rather, they just terminated him,
which gave him the opportunity to move to other research laboratories and
continue his plagiarism (Broad & Wade, 1982).
Another prominent case occurred at around the same time. In 1979, a National
Institutes of Health (NIH) medical researcher accused two Yale medical researchers of plagiarizing a manuscript that she had submitted to the New England Journal
of Medicine. She had been asked to review a paper submitted for publication by
Philip Felig, vice-chair of Yale's Department of Medicine, and his junior coauthor,
Vijay Soman. The NIH researcher, Helena Rodbard, recognized the data and a
portion of the writing as her own (Broad, 1980b). Concerned about priority of
publication, she contacted the dean of the Yale University School of Medicine,
who responded by asking the researchers whether they had conducted the study
on which their paper was based. Felig and Soman said yes, and once the dean saw
their data sheets, he considered the matter closed. His high respect for Felig, a senior
researcher, led him to give the pair the benefit of the doubt. Still, Rodbard
pushed for further investigation and was eventually satisfied. It turned out that
Felig had not been supervising Soman very closely and Soman had actually used
Rodbard's study as his own, plagiarized her writing and that of others, and fudged
some of his own data (Broad, 1980b).
After these two sensational misconduct cases, it became apparent that the
traditional system of self-regulation was not working. In the Soman case, Rodbard
had brought up the ethical questions only because of a competitive threat to her
work. Moreover, the investigation took an inordinate amount of time because
she had no ethics board to which to turn. Instead, she had to appeal to a dean
who happened to be a close colleague of one of the alleged plagiarizers (Broad,
1980b). The result of these incidents was not only that the researchers involved
were penalized (even Rodbard soured to a career in research); the research
community as a whole faced increased scrutiny. Most specifically, the government
looked to increase its role in the oversight of research conduct (Broad, 1980c).
Data Fabrication/Misrepresentation
In addition to the research misconduct issues surrounding authorship and
plagiarism, issues of the integrity of data are also salient ethical matters. For
both quantitative and qualitative research studies, the integrity of the research is
determined by the authenticity of data, proper data representation, and political
issues surrounding research findings.
When data are fabricated and peer reviewers do not catch on, it is clear that
something is amiss in the system of scholarly publication. John Darsee was a
Harvard University cardiologist who published more than 100 articles between
1978 and 1982 based on fabricated data (Chubin, 1985). When Harvard officials
were first notified of the suspicion of Darsee's misconduct, they did not notify
anyone at NIH, his funding agency. Instead, it was seen as an isolated incident,
and Darsee was given the benefit of the doubt and allowed to continue his work
in the Cardiac Research Laboratory, although an offer of an assistant professorship
was rescinded (Chubin, 1985; Greene et al., 1985). The officials did not want
to ruin Darsee's life or the reputation of their lab. It was not until NIH itself
questioned some of Darsee's submitted data that an investigation occurred. The
investigation showed a clear pattern of fabricated data over a 4-year period (Broad
& Wade, 1982).
Different from pure fabrication, the misrepresentation of data includes "massaging" data to favor a preferred hypothesis or outcome or omitting relevant
sources present in the literature. Cyril Burt, a prominent British psychologist,
was accused after his death of misrepresenting his data on identical twins who
were raised apart as well as completely fabricating some of the data (Chubin,
1985). Whether or not Burt actually engaged in research misconduct remains
contested; prominent scholars fall on both sides of the debate (Hattie, 1991).
Burt's defenders, such as J. Philippe Rushton (1994) and Robert Joynson (1994),
say that the main reason that Burt has been accused of misconduct is racial
sampled, the lead author(s) had some sort of conflict of interest. The conflict
was usually a financial one, for example, holding investments in a company
connected to the research in some way (Cho, 1997). Another recent investigation
revealed that 98% of research studies that were funded by the pharmaceutical
industry found new drug therapies to be more effective than the current drug,
whereas 79% of studies that were not funded by the pharmaceutical industry
found the new drug to be more effective. All of these studies were published in
peer-reviewed biomedical journals (Cho, 1997).
Third, the long-standing pressure to "publish or perish" within academia
continues to put strain on young researchers. The top-level research institutions
require numerous publications as a condition of tenure. Consider Alsabti, a junior
researcher. When asked why he engaged in such misconduct and fraud, he blamed
the pressure to move up within academia. He said that his "actions . . . were
done in the midst of significant pressure to publish these data as fast as possible
so as to obtain priority" (Broad, 1980b, p. 39). According to Alsabti, the cutthroat
research atmosphere had compelled him into fraud. For Felig's part, although it
was found that he had not been aware of Soman's unethical behavior, he made
sure that, as a senior researcher, his name appeared on Soman's papers even
when he was not involved in the research project. Certainly, an interest in adding
to his list of publications played into Felig's ethical negligence.
Finally, the various pressures just mentioned can also lead to abuse of power.
One prominent example from the biomedical research community is the Baltimore
case at the Massachusetts Institute of Technology (MIT). In 1988, David Baltimore, a Nobel Prize-winning biologist and director of a laboratory at MIT, was
indirectly accused of data misrepresentation because the evidence presented in
an article by Baltimore and five other colleagues did not support the conclusions
drawn (Goodstein, 1991). The primary researcher and author of the article was
Thereza Imanishi-Kari, the director of another laboratory at MIT. Baltimore's
role was as senior scientist; it was Imanishi-Kari's lab team who did the primary
work for this particular paper. Baltimore's lab team was collaborating with
Imanishi-Kari's on a larger project from which the research in question came.
After repeated denials from both Baltimore and Imanishi-Kari, the Office of
Research Integrity's Commission on Research Integrity found that Imanishi-Kari
had indeed falsified data to help support research findings that were published
in the journal Cell. Much later, in 1996, Imanishi-Kari won her appeal to the
Department of Health and Human Services' Integrity Adjudications Panel, when
they decided that the Office of Research Integrity had never adequately proven
its charges of intentional data falsification (Kevles, 1998).
What makes this case fall under the abuse of power category is that it was
two junior scholars, postdoctoral fellow Margot O'Toole and graduate student
Charles Maplethorpe, who first questioned the Imanishi-Kari/Baltimore research.
There were two other graduate students in the laboratory who also suspected
data falsification, but when O'Toole brought the accusations, they refused to
support her because they feared jeopardizing their degrees (LaFollette, 1994a).
It turns out that their fear was justified; O'Toole has accused both Baltimore and
Imanishi-Kari, as well as other senior scientists who became involved with the
controversy, of damaging her reputation and making it very difficult for her to
obtain a position in academe. She insists that it was her position as a junior
scientist that made her first hesitate to make official accusations, but when she
finally did, she was fired from her post in the MIT lab and later from her position
as a research professor at Tufts University (O'Toole, 1991).
Faculty-student relations are very complex. While in some sense they are often
collegial, there is always an imbalance of power and often a dependent relationship
(Penslar, 1995). In social and educational research, where the "lab-chief" system
is less prevalent than in biomedical research, the general problem of the imbalance
of power among graduate students and professors nonetheless remains a constant
source of ethical worries.
and operating principles. The former refers to the broad moral-political frameworks that undergird social and educational research; the latter involves the more
specific principles used to govern and evaluate social and educational research
vis-à-vis ethics.
Fundamental Perspectives
The interpretive perspective jettisons the positivistic fact-value distinction and,
along with it, the idea that social and educational researchers can confine themselves to neutral descriptions and effective means toward "technical control"
(Fay, 1975). Rather, value-laden descriptions and ends are always pertinent and
always intertwined. Because both are part and parcel of social science research,
the researcher has no way to avoid moral-political commitments by placing
ethics and politics in one compartment and scientific merit in another. As stated
by MacIntyre:
The social sciences are moral sciences. That is, not only do social scientists explore a human universe
centrally constituted by a variety of obediences to and breaches of, conformities to and rebellions
against, a host of rules, taboos, ideals, and beliefs about goods, virtues, and vices . . . their own
explorations of that universe are no different in this respect from any other form of human activity.
(1982, p. 175, italics added)
Communitarianism
Communitarianism locates morality within a given community and its shared
norms and "practices" (MacIntyre, 1981). Accordingly, what is conceived as
the morally good life has to be known from the inside and varies from one
community (or culture) to another. Because social and educational research cuts
across communities that may differ from the social researcher's own, ensuring the
ethical treatment of research participants who are members of such communities is
doubly problematic. Not only are the normal problems involved in protecting
autonomy potentially complicated by a lack of mutual understanding; a commitment to the fundamental values that undergird social research may not be shared.
For example, certain communities do not place a high value on individual
autonomy (the Amish perhaps being the most well-known case). As such, it is
not up to individual community members to give their informed consent to have
social researchers peering into the social life of the community, for it is not
always theirs to give. The community may reject the way of explaining and
rendering community life transparent associated with social science and may not
want its practices understood and portrayed in these terms. True, an individual
community member who agreed to participate in developing such a portrayal
might be viewed as a rogue who was wronging the community, but the social
researcher could not avoid the charge that it was he or she who was the true
instigator of such an "act of aggression" (MacIntyre, 1982, p. 179). The social
researcher has no wholly neutral position from which to conduct research. "The
danger" in believing otherwise, according to MacIntyre, "is that what is taken
to be culturally neutral by the [social researcher] may be merely what his or her
own culture takes to be culturally neutral" (1982, pp. 183-184).
The ethical predicament for social and educational researchers raised here is
close to the one historically raised under the anthropological concept of "cultural
relativism." The difference is that it may now be recognized as a pervasive
problem that applies to the broad range of "qualitative" social and educational
research conducted across a broad range of cultural contexts and groups, not
only exotic ones.
Care Theory
Care theory is a close cousin of communitarianism insofar as both emphasize
concrete circumstances and specific demands on individuals ("the view from
here") over ideal circumstances and the demands placed on individuals by abstract
principles ("the view from nowhere") (Nagel, 1986). On the other hand, care
theory embraces, if not a culturally neutral ideal, one that nonetheless is to be
applied across cultural encounters; for Noddings (1984), caring is the ethical universal.
Noddings (1986) applies the ethics of care specifically to educational research.
Her first thesis is that the relationship between researchers and participants ought
to exemplify caring, particularly trust and mutual respect; her second thesis
broadens the first so as to apply to the educational research enterprise as a whole.
According to Noddings, the choice of research questions and the overall conduct
of the research ought to be based on their potential to contribute to caring school
communities. Educational research should not be conducted on the basis of mere
intellectual curiosity; much less should it be conducted in a way that is likely to
be harmful to individual students or groups of students or destructive of school
communities. Educational research should be "for teaching," Noddings says,
not simply "on teaching" (1986, p. 506). Ignoring these concerns renders the
traditional emphasis on autonomy and privacy incomplete at best.
Postmodernism
Postmodernism shares the premise found in communitarianism and care theory
that social and educational research cannot, first, isolate the descriptive component
of social research from its moral component and, second, ensure the ethical
treatment of research participants by obtaining their informed consent and protecting their privacy. But the postmodernist critique is more radical. Whereas communitarianism and care theory identify dangers in, and lacunae within, the traditional
conception, postmodernism questions the very existence of the integral selves
on which the traditional conception is based.
In the postmodern analysis, individuals are not capable of freely directing their
own lives; rather, they are always enmeshed in and shaped by relationships of
knowledge/power. These "regimes of truth," as labeled by Foucault, serve to
"normalize" individual selves and render them acquiescent and "useful" vis-à-vis the institutions of modern society (Foucault, 1970). Traditional forms of
social and educational research foist such regimes of truth on participants, however masked the nature of their activity might be. When practiced unreflectively,
these forms of research create a situation in which, far from fostering autonomy
or even respecting it, social and educational researchers are accomplices in social
domination.
Given a "strong" version of this thesis (Benhabib, 1995), postmodernism,
ironically it would seem, can provide little or no guidance about what direction
social and educational research should take to avoid domination. If there are no
criteria of truth, justice, and reason independent of the perspective of a given
regime of truth and the position of power researchers occupy within it, then there
are no criteria for distinguishing abuses of power from its (unavoidable) uses
(see also Burbules & Rice's, 1991, characterization of "anti-modernism").
In educational research, postmodernism typically takes a less extreme, or
"weak," form. As Stronach and MacLure (1997) put it, a "positive reading"
is required. The basic idea is that researchers must be alert to the often subtle
asymmetrical relationships of power that threaten to oppress participants. Accordingly, participants must take a much more active role than they have traditionally
in shaping the research process and in challenging its methods and findings as
it unfolds. In general, educational researchers should be much more suspicious
than they typically are of the idea that educational research is per se a progressive
force. Not unrelated to this, the validity of the findings of educational research
cannot be divorced from how it treats relationships of power (e.g., Lather,
1991, 1994).
Critical Theory
The sine qua non of critical theory is its characterization of and opposition to
"technical control" as the primary or only role for social and educational research
(e.g., Fay, 1975). Technical control is closely associated with positivist social
research; it is the goal educational research adopts when it proceeds by bracketing
moral and political ends and investigating only the means of achieving them. The
current testing/accountability movement launched by A Nation at Risk (National
Commission on Excellence in Education, 1983) is illustrative. First, the end,
economic competitiveness, is bracketed and left to politicians and policymakers
(presumably, it is unimpeachable). Second, coming up with effective means in
for liberal thinking. The kind of "liberal-egalitarian" view (e.g., Kymlicka, 1990)
that Rawls formulated constrains the principle of maximizing utility in the name
of justice. That is, not only aggregate utility is morally relevant. How utility
(benefit) is distributed is paramount: Stated most generally, Rawls's principle of
justice is that distributions (or redistributions) should tend toward equality.
Although providing an advance over utilitarianism, Rawls's theory has nonetheless been criticized for making several of the same general mistakes, including
(a) presupposing a certain Western (and male) conception of rationality (i.e.,
maximize utility within constraints) and (b) conceiving of policy-making on
the model of technical control (merely operating with different principles than
utilitarianism).
The most general difficulty is liberal-egalitarianism's commitment to the "distributivist paradigm" (Young, 1990). The basic criticism is that liberalism defines
and identifies the disadvantaged and then goes about the task of compensation.
Compensation takes the form of various social welfare programs, including
educational ones. Insofar as those targeted for compensation have been excluded
from participation, what counts as rational and good is foisted upon them, and
they are the pawns of technical control. And compensation, so called, can come
at a cost. Consider a sexist curriculum in which girls fare poorly relative to
boys. It is hardly a benefit to girls to compensate them so that they, too, can
become sexist.
Contemporary liberal-egalitarians have taken these difficulties seriously and
have proffered remedies aimed at preserving the viability of liberalism. The
general strategy is to tilt liberalism's emphasis on equality away from the distribution of predetermined goods and toward participation in determining what those
goods should be. As stated by Kymlicka:
It only makes sense to invite people to participate in politics (or for people to accept that invitation)
if they are treated as equals. . . . And that is incompatible with defining people in terms of roles they
did not shape or endorse. (1991, p. 89)
The "participatory paradigm" (Howe, 1995) exemplified in Kymlicka's admonition is much more attuned to the "interpretive turn" in social and educational
research than the "distributivist paradigm." It fits with a model of research in
which justice and equality are sought not only in the distribution of predetermined
goods but also in the status and voice of research participants.
The five perspectives we have portrayed differ in the ways we have indicated
and, no doubt, in further ways we have not developed. We do not wish to deny
that these differences can be deep, perhaps even irreconcilable. Still, there are
several shared themes across these perspectives regarding the ethics of social
and educational research.
First, as we have indicated, there is a strong tendency in what we call the
"traditional view" to distinguish the "descriptive" (scientific-methodological)
component of social research from the "prescriptive" (moral-political) component. Each of the five alternative perspectives denies that social and educational
research can be (ought to be) divided up in this way. On the contrary, social and
educational research is (ought to be) framed by self-consciously chosen moral-political ends, for example, fostering caring communities or fostering equality
and justice.
It follows that all social and educational research is advocacy research, by its
very nature, and it is thus no criticism of a given study that it adopts some moral-political perspective. Criticism arises instead with respect to just what that moral-political perspective is as well as the consequences of framing research in terms
of it. This casts a different light on research like Murray and Herrnstein's (1994)
The Bell Curve. The problem is not that they are engaged in advocacy research
in virtue of making policy recommendations. The problem is with the moral-political basis of such recommendations, combined with the consequences of
their recommendations and their claim that they are simply following science
where it leads.
Second, and related especially to this last point, the research questions deemed
worth asking are circumscribed by the moral-political framework in which they
are couched. Educational researchers might (and many no doubt do) ostensibly
conduct research on teaching rather than for teaching, to use Noddings's (1986)
distinction once again. But rather than getting rid of the question of what research
might be for, they are merely closing their eyes to it. Any research that is used
at all is used for something, and the range of uses is limited from the outset by
how the research is conceived and designed.
Third, social and educational research ought to have points of contact with
the insiders' perspectives, with their "voices." In this way, the moral-political
aims of social and educational research affect its methodology. Interpretive, or
"qualitative," methods are best suited for getting at what these voices have to
say and what they mean.
Finally, and dovetailing with each of the preceding three observations, contemporary perspectives militate against the race, gender, and class biases that have
historically plagued social and educational research--forms of bias that grow
out of the assumed premise that the attitudes, beliefs, and reasoning of mainstream
White males are the norm against which all other social groups must be measured
(Stanfield, 1993).
We have seen a shift from social and educational research that asks how
diverse groups are either similar to or different from mainstream groups to
research concerned with finding out about those diverse groups in their own
right. A prominent example of such a contemporary perspective is Carol Gilligan's
(1982) landmark study of girls' and young women's psychological development.
In it, her findings and discussion challenge the developmental theories of psychological researchers, such as Lawrence Kohlberg, who excluded female voices
from their research studies, yet generalized their findings to both males and
females. This type of sex bias resulted, in Kohlberg's case, in a tendency to label
women as deficient in moral development. In the attempt to fit women into a
theory of moral development that came out of research conducted exclusively
Operating Principles
The distinction between research ethics in the sense of operating principles
and in the broader, fundamental sense is not hard and fast. What questions are
worth asking and how researchers are to conduct themselves in the process of
answering them cannot be divorced from the overarching aims that research seeks
to achieve, one of the fundamental premises of the "contemporary approach."
Nonetheless, there exists a "looseness of fit" between operating principles and
competing perspectives, such that reasonable agreement on what constitutes
ethical conduct is (or should be) possible in the face of broader theoretical
disagreements. Bearing in mind, then, that broader ethical obligations associated
with broader moral-political perspectives are always lurking in the background,
there remain general ethical implications of the interpretive (qualitative) turn in
educational research that may be best understood in terms of the methodological
nitty-gritty of "techniques and procedures" (a description that owes to Smith &
Heshusius, 1986).
1982). Wax, who is exemplary of this view, contends that informed consent "is
both too much and too little" to require of interpretivist research ("fieldwork,"
to be precise):
Informed consent is too much . . . in requesting formal and explicit consent to observe that which
is intended to be observed and appreciated. Formal and explicit consent also appears overscrupulous
and disruptive in the case of many of the casual conversations that are intrinsic to good fieldwork,
where respondents (informants) are equal partners to interchange, under no duress to participate, and
free either to express themselves or to withdraw into silence. On the other hand, informed consent
is too little because fieldworkers so often require much more than consent; they need active assistance
from their hosts, including a level of research cooperation that frequently amounts to colleagueship.
(1982, p. 44)
was a student and, furthermore, was doing research were revealed later in the
course of her research. Facio believed that the culture and social histories of
these women required this kind of procedure, and citing Punch (1986), she
observed that participant observation "always involves impression management," including "alleviating suspicion." Nonetheless, she confessed to feeling
"uncomfortable with the deceit and dissembling," as she put it, that "are part
of the research role" (1993, p. 85).
Was Facio's incremental approach to consent ethically defensible? We think
it was. But saying this does not provide social and educational research with any
rule that will apply in all cases. What to do in specific cases is very often not
going to be an easy call, and misgivings like Facio's often cannot be eliminated.
To further complicate matters, in addition to differing concrete circumstances,
differences in fundamental frameworks can also contribute to ethical complexity.
Contra Lincoln, whatever benefits the interpretive (qualitative) turn has brought,
an ethically simpler life for researchers is quite clearly not among them.
Research Misconduct
This leads to a further kind of ethical complexity engendered by the interpretive
(qualitative) turn in social and educational research: how to report results. We
include this issue under the heading of research misconduct because it involves
the possible misrepresentation of data and possible researcher incompetence and
because, for the most part, it is one step removed from the face-to-face interactions
with participants that are central to issues falling under the rubric of the protection
of research participants. Of course, the line between research misconduct and
the protection of research participants vis-à-vis reporting results is a fuzzy one,
all the more so for the "contemporary approach," which generally blurs the
traditional dividing lines.
As before, experimentalist (quantitative) researchers can face some of the same
difficulties as interpretivist (qualitative) researchers in writing their reports. But
also as before, they are more numerous and more acute for the latter. The general
source of the difficulties is the "thick description" that characterizes interpretive
research. Because such descriptions are judged for accuracy, at least in part, by
how well they square with the insider's or "emic" perspective, researchers
must negotiate or "construct" these descriptions in collaboration with research
participants. (Compare negotiating statistical analyses with participants.) This
raises the questions of who owns the data (e.g., Noddings, 1986) and how the
data may be used subsequently (e.g., Johnson, 1982), as well as the question of
how much power participants should have to challenge, edit, and change written
reports. Except by adopting the extreme of providing participants either absolute
power or none, crafting a defensible report is a thorny ethical problem.
Thick description in reporting also complicates the protection of privacy. In
contrast to survey researchers, for instance, interpretive researchers can rarely
(never?) provide anonymity to research participants. Instead, they must rely
on maintaining confidentiality as the means to protect privacy. The possibility
ous, more philosophical analyses. We also expand the discussion, raising some
issues for the first time (e.g., the special ethical problems associated with student
researchers). Once again, we entertain the issues under the two general categories
of the protection of research participants and research misconduct.
whether a woman should accept a slightly greater risk of death from breast cancer
by opting for radiation therapy over a mutilating and debilitating mastectomy,
educational researchers qua educational researchers have no special expertise
regarding whether parents should be given the opportunity to refuse to have their
children involved in a given educational research project. Indeed, given their
aims and interests, physicians and educational researchers are probably in the
worst position to make these judgments. It is for this reason that 45 CFR 46
requires IRBs to be staffed by persons who represent a range of perspectives
and interests, including at least one member of the community who is not affiliated
with the university and at least one member whose chief interests are nonscientific
(e.g., clergy, lawyer, or ethicist).
In the second place, although IRBs are often overly bureaucratic and discharge
their duties in a rather perfunctory manner that takes too lightly the ethical
complexities involved (Christakis, 1988; Dougherty & Howe, 1990), they are
the only formal mechanism in the United States for overseeing social research
(McCarthy, 1983). The shortcomings in the practices exemplified by IRBs are
insufficient to abandon or radically change this oversight tool. The alternative
of no policing or self-policing has proven to have worse consequences, on balance,
than those associated with the institution of IRBs. Furthermore, remedies for
these shortcomings are not altogether lacking (for example, Silva and Sorrell,
1988, suggest ways for IRBs to enhance informed consent by focusing on the
process of consent rather than the wording of the consent form). Finally, IRBs
can serve an important educational function. In our experience (which we suspect
reflects what is generally true), the IRB is the chief, and often only, locus of
reflection and debate about the ethics of social research.
problematic. Thus, in the 1970s a national commission was set up for the protection of human participants that thoroughly reviewed policies for the social sciences, including education.
Following essentially the same model as that governing medical research, the new policies
on social research retained the idea of an independent review board and the emphasis on
the need for informed consent.
The commission made provisions in its final recommendations to allow some
discretion on the part of IRBs to reduce the burden placed on them. Specifically,
a series of thresholds were developed that defined three levels of review: exempt
(no IRB review), expedited (review by a representative of the IRB), and full IRB
review. The commission also reduced the burden placed on IRBs by giving
prospective research participants, through the vehicle of informed consent, a
significant role in determining the worth and moral acceptability of research
projects for which they are recruited. (Partly because of this, the issue of informed
consent has become of paramount concern for research in the social sciences.)
The commission believed that educational research, in particular, required less
stringent oversight than other varieties of social research, both because the risks
were perceived as slight and because district- and school-based procedures were
believed to already exist to screen and guide research. Thus, the commission
believed that the area of educational research was one place where the IRB's
role could be minimized, especially since it believed that mechanisms of accountability for educational research were already in place at the local level. Accordingly, it crafted 45 CFR 46 so as to provide explicit exemptions for educational research.
The commission, nonetheless, mandated in 45 CFR 46 that some sort of
administrative review (e.g., by department or college) would take place in every
case of research involving human participants. As a consequence, the apparent
wide latitude afforded educational research was significantly narrowed by many
universities as they went about the task of articulating the purview and responsibilities of their IRBs. In particular, IRBs typically do not permit educational researchers to decide for themselves whether their research is exempt from the 45 CFR
46 regulations. In many universities, "exempt" has come to mean exempt from
certain requirements and full committee review, not exempt from IRB oversight altogether.
That IRBs, not educational researchers, are responsible for determining when
educational research qualifies as exempt from the normal requirements of 45
CFR 46 engenders potential conflicts between educational researchers and IRBs.
Taking the responsibility for determining what educational research satisfies the
exemptions in 45 CFR 46 out of the hands of educational researchers and placing
it in the hands of IRBs makes the latter the arbiter of key questions such as what
constitutes "normal educational practice." This is problematic for educational
researchers because IRBs are composed mostly of university faculty who have
little knowledge of the workings of public schools.
We share the concern of other educational researchers about whether the typical
IRB is composed of individuals who are in a good position to determine when
educational research should qualify as exempt (i.e., qualify as "normal educational practice"). In our view, there is an answer to the question of how to make
such a determination that stops short of the extremes of permitting educational
researchers to decide for themselves, on the one hand, or of placing the decision
exclusively in the hands of IRBs, on the other. Our suggestion is the simple and
straightforward one to formally include school people in the review process,
particularly regarding the judgment of what is to count as "normal educational
practice" (Dougherty & Howe, 1990).
and students alike must familiarize themselves with the ethical requirements of
research involving human participants, particularly regarding the different levels
of review associated with different kinds of research activities. Such issues
typically receive too little attention, and too late. (Students often don't give ethics
a thought until they learn they must have their dissertation proposals approved
by the IRB.)
Insofar as more sophisticated and ethically complex research requires normal
IRB review, this policy will no doubt inhibit instructors from encouraging and
students from conducting such research. But this is not a bad thing, for students
just learning to conduct research involving human participants are the least
prepared to successfully grapple with ethically complex situations that arise in
the course of planning and carrying it out.
Research Misconduct
Until quite recently, the general consensus between research communities
within higher education and the federal government was that scientific and social
scientific researchers, including educational researchers, did not need regulations
to ensure ethical conduct. Rather, there was an implicit ethical code that called
for professional self-regulation and honesty in one's research conduct and data
reporting. Misconduct was thought to be a rare event (Steneck, 1984). Research
communities enjoyed considerable autonomy in directing the conduct of research
(LaFollette, 1994a). As Deborah Cameron and her colleagues put it, "All social
researchers are expected to take seriously the ethical questions their activities
raise" (Cameron, Frazer, Harvey, Rampton, & Richardson, 1993, p. 82). In the
cases in which research turned out to be fraudulent in some way, it was presumed
that members of the community would sanction their own.
This presumption was not borne out. As research institutions grew, competition
between scholars stiffened, and the pressure to produce new scholarship and
procure funding intensified. With the increased competition came more frequent
and visible cases of research misconduct. Official regulations on the conduct of
scientific and social scientific research soon followed (Price, 1994; Steneck,
1994). While ethical conduct concerning research participants had been monitored
more closely (especially in the wake of the Tuskegee debacle), scrutiny of research
misconduct concerning data collection and representation and the originality of
research ideas and writing has been more recent. Most documented cases of
research fraud and misconduct have come from the biomedical research community. Of the 26 cases of serious misconduct reported between 1980 and 1987,
21 were biomedical research cases (Goodstein, 1991). Sensational cases of misconduct in social research have arisen more often around the issue of deceptive
research practices, such as Milgram's obedience experiments in the 1960s and
Humphreys's Tearoom Trade study in the 1970s. Thus, the medical and scientific
communities were the first to prompt worries about research misconduct and to
take the lead in formulating specific regulations. Social and educational research
communities have been catching up.
Yale, and Emory were among the first institutions of higher education to begin
the process of formalizing ethical codes and procedures (Steneck, 1984). In fact,
by the middle of 1983, 80% of all medical schools had begun establishing rules
for investigating research misconduct (Chubin, 1985).
Major organizations such as NIH (in 1988) and the National Science Foundation
(in 1989) published their own formal regulations on scientific research misconduct
in the Federal Register. NIH had established the Office of Scientific Integrity
and stipulated that research proposals from institutions without formal regulations
on scientific misconduct would not be accepted (Goodstein, 1991).
Although the movement toward increased ethical regulation of research
stemmed from the biomedical sciences, it strongly affected those in the social
science and educational research communities as well. The federal government,
via the Health Research Extension Act of 1985, imposed regulations stipulating
that all applications for research funding and sponsorship from both the biomedical and behavioral sciences had to include a plan for examining allegations
of research misconduct. In addition, institutions of higher education became
responsible for promptly reporting any research misconduct to the federal government (LaFollette, 1994a; Steneck, 1994). These federal regulations, in combination with institutional policies on research misconduct, affect educational research
in the same way they affect social research in general. Unlike the federal regulations that protect human research participants, the government outlines no provisions that specifically concern educational research.
In addition to the federal regulations, the social science research community
as a whole and the educational research community in particular have established
their own ethical codes to govern the research conduct of their members. We
move now to a discussion of the major educational research professional organization, the American Educational Research Association (AERA), and its code of
research ethics.
Research Misconduct
The AERA code has clear standards regarding researchers' responsibilities
to the field of education. Two of the guiding standards directly involve how
proper and improper conduct of research affect the field of education. These standards are "Responsibilities to the Field" and "Intellectual
Ownership."
The standards regarding researchers' responsibilities to the field focus on
researcher behavior and how inappropriate conduct could negatively affect the
public standing of the field and its future research endeavors. Most important,
these responsibilities stipulate that "educational researchers must not fabricate,
falsify, or misrepresent authorship, evidence, data, findings, or conclusions"
(AERA, 1992, p. 2). They should also monitor the uses of their research to avoid
its use for any fraudulent purposes.
Regarding issues of authorship, the section on intellectual ownership centers
on making sure that credit for research contributions goes where it is properly
due. Both plagiarism and assuming credit for research to which one did not
contribute in a significant creative way are prohibited. In a related vein, researchers are to be wary of any undue influence from government or other sponsoring
agencies regarding the conduct of the research, its findings, or the reporting of it.
REFERENCES
American Educational Research Association. (1992). American Educational Research
Association ethical standards. Washington, DC: American Educational Research
Association.
Anderson, J. A. (1992). On the ethics of research in a socially constructed reality. Journal
of Broadcasting & Electronic Media, 36, 353-357.
Beauchamp, T. L., Faden, R. R., Wallace, R. J., & Walters, L. (1982). Introduction. In T.
Beauchamp, R. Faden, R. Wallace, & L. Walters (Eds.), Ethical issues in social science
research (pp. 3-39). Baltimore: Johns Hopkins University Press.
Benhabib, S. (1995). Feminism and postmodernism. In L. Nicholson (Ed.), Feminist
contentions (pp. 17-34). New York: Routledge.
Broad, W. J. (1980a). Would-be academician pirates papers. Science, 208, 1438-1440.
Broad, W. J. (1980b). Imbroglio at Yale (I): Emergence of a fraud. Science, 210, 38-41.
Broad, W. J. (1980c). Imbroglio at Yale (II): A top job lost. Science, 210, 171-173.
Broad, W., & Wade, N. (1982). Betrayers of the truth. New York: Simon & Schuster.
Burbules, N., & Rice, S. (1991). Dialogue across difference: Continuing the conversation.
Harvard Educational Review, 61, 396-416.
Burgess, R. G. (Ed.). (1989). The ethics of educational research (Vol. 8). New York:
Falmer Press.
Cameron, D., Frazer, E., Harvey, P., Rampton, B., & Richardson, K. (1993). Ethics,
advocacy and empowerment: Issues of method in researching language. Language and
Communication, 13, 81-94.
Caplan, A. (1982). On privacy and confidentiality in social science research. In T. Beauchamp, R. Faden, R. Wallace, & L. Walters (Eds.), Ethical issues in social science
research (pp. 315-328). Baltimore: Johns Hopkins University Press.
Cho, M. K. (1997, August 1). Secrecy and financial conflicts in university-industry research
must get closer scrutiny. Chronicle of Higher Education, p. B4.
Christakis, N. (1988). Should IRB's monitor research more strictly? IRB: A Review of
Human Subjects Research, 10(2), 8-9.
Chubin, D. E. (1985). Misconduct in research: An issue of science policy and practice.
Minerva, 23, 175-202.
Code of Federal Regulations for the Protection of Human Subjects, 45 CFR 46 (1991,
as amended).
Cornett, J., & Chase, S. (1989, March). The analysis of teacher thinking and the problem
of ethics: Reflections of a case study participant and a naturalistic researcher. Paper
presented at the annual meeting of the American Educational Research Association,
San Francisco.
Dennis, R. (1993). Participant observations. In J. Stanfield & R. Dennis (Eds.), Race and
ethnicity in research methods (pp. 53-74). Newbury Park, CA: Sage.
Dougherty, K., & Howe, K. (1990). Policy regarding educational research: Report to
the Subcommittee on Educational Research of the Human Research Committee. Unpublished manuscript.
Dworkin, R. (1978). Taking rights seriously. Cambridge, MA: Harvard University Press.
Facio, E. (1993). Ethnography as personal experience. In J. Stanfield & R. Dennis (Eds.),
Race and ethnicity in research methods (pp. 74-91). Newbury Park, CA: Sage.
Fay, B. (1975). Social theory and political practice. Birkenhead, England: George Allen
& Unwin.
Foucault, M. (1970). Discipline and punish: The birth of the prison. New York: Vintage Books.
Gilbert, N. (1994). Miscounting social ills. Society, 31(3), 18-26.
Gilligan, C. (1982). In a different voice: Psychological theory and women's development.
Cambridge, MA: Harvard University Press.
Goodstein, D. (1991). Scientific fraud. American Scholar, 60, 505-515.
Greene, P. J., Durch, J. S., Horwitz, W., & Hooper, V. S. (1985). Policies for responding
to allegations of fraud in research. Minerva, 23, 203-215.
Griswold v. Connecticut, 381 U.S. 479, 507 (1965).
Hamnett, M. P., Porter, D. J., Singh, A., & Kumar, K. (1984). Ethics, politics, and
international social science research: From critique to praxis. Honolulu: University
of Hawaii Press.
Hattie, J. (1991). The Burt controversy. Alberta Journal of Educational Research, 37,
259-275.
Haworth, K. (1997, May 30). Clinton starts efforts to recruit minority volunteers for
federal research projects. Chronicle of Higher Education, p. A39.
Hecht, J. B. (1996, April). Educational research, research ethics and federal policy: An
update. Paper presented at the annual meeting of the American Educational Research
Association, New York City.
House, E., & Howe, K. (1999). Values in evaluation and social research. Thousand Oaks,
CA: Sage.
Howe, K. (1995). Democracy, justice and action research: Some theoretical developments.
Educational Action Research, 3, 347-349.
Howe, K., & Dougherty, K. (1993). Ethics, IRB's, and the changing face of educational
research. Educational Researcher, 22(9), 16-21.
Johnson, C. (1982). Risks in the publication of fieldwork. In J. Sieber (Ed.), The ethics
of social research: Fieldwork, regulation, and publication (pp. 71-92). New York:
Springer-Verlag.
Jones, J. H. (1993). Bad blood: The Tuskegee syphilis experiment. New York: Free Press.
Joynson, R. B. (1994). Fallible judgments. Society, 31(3), 45-52.
Kelman, H. (1982). Ethical issues in different social science methods. In T. Beauchamp,
R. Faden, R. Wallace, & L. Walters (Eds.), Ethical issues in social science research
(pp. 40-100). Baltimore: Johns Hopkins University Press.
Kevles, D. (1998). The Baltimore case: A trial of politics, science, and character. New
York: Norton.
Kymlicka, W. (1990). Contemporary political philosophy: An introduction. New York: Oxford
University Press.
Kymlicka, W. (1991). Liberalism, community and culture. New York: Oxford University Press.
LaFollette, M. C. (1994a). The politics of research misconduct: Congressional oversight,
universities, and science. Journal of Higher Education, 65, 261-285.
LaFollette, M. C. (1994b). Research misconduct. Society, 31(3), 6-10.
Lather, P. (1991). Getting smart: Feminist research and pedagogy with/in postmodernism.
New York: Routledge.
Lather, P. (1994). Fertile obsession: Validity after poststructuralism. In A. Gitlin (Ed.),
Power and method: Political activism and educational research (pp. 36-60). New
York: Routledge.
Lincoln, Y. (1990). Toward a categorical imperative for qualitative research. In E. Eisner
& A. Peshkin (Eds.), Qualitative inquiry in educational research: The continuing debate
(pp. 277-295). New York: Teachers College Press.
Lincoln, Y. S., & Denzin, N. K. (Eds.). (1994). Handbook of qualitative research. Beverly
Hills, CA: Sage.
MacIntyre, A. (1981). After virtue. Notre Dame, IN: University of Notre Dame Press.
MacIntyre, A. (1982). Risk, harm, and benefit assessments as instruments of moral evaluation. In T. Beauchamp, R. Faden, R. Wallace, & L. Walters (Eds.), Ethical issues in
social science research (pp. 175-192). Baltimore: Johns Hopkins University Press.
McCarthy, C. (1983). Experiences with boards and commissions concerned with research
ethics in the U.S. In K. Berg & K. Tranoy (Eds.), Research ethics. New York: Alan
R. Liss.
Milgram, S. (1974). Obedience to authority. New York: Harper & Row.
Murphy, M., & Johannsen, A. (1990). Ethical obligations and federal regulations in
ethnographic research and anthropological education. Human Organization, 49,
127-134.
Murray, C., & Herrnstein, R. (1994). The bell curve. New York: Free Press.
Nagel, T. (1986). The view from nowhere. New York: Oxford University Press.
National Commission on Excellence in Education. (1983). A nation at risk. Washington,
DC: U.S. Government Printing Office.
Noddings, N. (1984). Caring: A feminine approach to ethics and moral education. Berkeley: University of California Press.
Noddings, N. (1986). Fidelity in teaching, teacher education, and research on teaching.
Harvard Educational Review, 56, 496-510.
Ogbu, J. U., & Matute-Bianchi, M. E. (1986). Understanding sociocultural factors: Knowledge, identity, and school adjustment. In Beyond language: Social and cultural factors in
schooling language minority students (pp. 73-142). Sacramento: Bilingual Educational
Office, California State Department of Education.
O'Toole, M. (1991). The whistle-blower and the train wreck. New York Times, p. A29.
Pattullo, E. L. (1982). Modesty is the best policy: The federal role in social research. In
T. L. Beauchamp, R. R. Faden, R. J. Wallace, & L. Walters (Eds.), Ethical issues in
social science research (pp. 373-390). Baltimore: Johns Hopkins University Press.
Penslar, R. L. (Ed.). (1995). Research ethics: Cases and materials. Bloomington: Indiana
University Press.
Price, A. R. (1994). Definitions and boundaries of research misconduct: Perspectives from
a federal government viewpoint. Journal of Higher Education, 65, 287-297.
Punch, M. (1986). The politics and ethics offieldwork. Beverly Hills, CA: Sage.
Rabinow, P., & Sullivan, W. (1987). The interpretive turn: Emergence of an approach.
In P. Rabinow & W. Sullivan (Eds.), Interpretive social science (pp. 1-21). Los Angeles:
University of California Press.
Rawls, J. (1971). A theory of justice. Cambridge, MA: Belknap Press.
Roe v. Wade, 410 U.S. 113 (1973).