Exploitation and Developing Countries: The Ethics of Clinical Research
Ebook · 483 pages · 7 hours


About this ebook

When is clinical research in developing countries exploitation? Exploitation is a concept in ordinary moral thought that has not often been analyzed outside the Marxist tradition. Yet it is commonly used to describe interactions that seem morally suspect in some way. A case in point is clinical research sponsored by developed countries and carried out in developing countries, with participants who are poor and sick, and lack education. Such individuals seem vulnerable to abuse. But does this, by itself, make such research exploitative?



Exploitation and Developing Countries is an attempt by philosophers and bioethicists to reflect on the meaning of exploitation, to ask whether and when clinical research in developing countries counts as exploitative, and to consider what can be done to minimize the possibility of exploitation in such circumstances. These reflections should interest clinical researchers, since locating the line between appropriate and inappropriate use of subjects--the line between exploitation and fair use--is the central question at the heart of research ethics. Reflection on this rich and important moral concept should also interest normative moral philosophers of a non-Marxist bent.


In addition to the editors, the contributors are Richard J. Arneson, Alisa L. Carse, Margaret Olivia Little, Thomas Pogge, Andrew W. Siegel, and Alan Wertheimer.

Language: English
Release date: Aug 4, 2008
ISBN: 9781400837328


Reviews for Exploitation and Developing Countries

I still can't decide where I fall: while it seems pretty clear exploitation is what's taking place, is it preferable to allow it to proceed, given the paternalistic nature of disallowing it and, yes, the benefits it can lead to, or should it be regulated against due to the precedent of immorality it sets? Three arguments resonated with me, although one is a daydream.

1) "We should not confuse our moral worries about the objective circumstances in which people find themselves with worries about their capacity or right to make decisions about their lives within those circumstances." (Wertheimer)

2) "The only time we should be conducting clinical research trials among impoverished communities that offer less than the world's best treatment in the control arm is when doing so is needed to help those in similar health circumstances (and not, say, because using this population provides an especially cheap way to collect clean data for those who do not suffer the 'useful' deprivation)." (Carse & Little)

3) A la Kant, "We need to show humility in our condemnations of those who exploit the impoverished, as it is our indifference to the true needs of the poor that sows much of the ground of exploitation. We must also exercise caution when legislating against exploitation, for the perverse reality is that the best prospect some persons currently have for improving their lives is to submit to exploitative exchanges. While our indignation at those who exploit the poor is certainly warranted, it is not until the day arrives that our indignation is accompanied by shame that we can hope for real progress." (Siegel)

Book preview

Exploitation and Developing Countries - Jennifer S. Hawkins

1

Research Ethics, Developing Countries, and Exploitation: A Primer

Jennifer S. Hawkins

As the AZT trials demonstrated, questions about the ethical conduct of clinical research in developing countries are anything but simple. Informed judgments about the cases in this book, and about future directions for policy, require familiarity with both the details of clinical research and significant moral distinctions. This chapter aims to provide some background material relevant to understanding the complexities of the current debates.

The first section provides a basic overview of the fundamental concepts of research ethics as they relate to developing country research—informed consent, randomized controlled trials (RCTs), standard of care, and clinical equipoise. It is intended for readers who are familiar with ethical theory but less familiar with the peculiar concerns of clinical research. It aims to familiarize them with the concepts and clearly articulate the disputed questions currently relevant for the discussions of international research that follow.

The second section provides an overview of the concept of exploitation to help orient those less familiar with philosophy to the ways in which philosophers have approached the topic. Exploitation is commonly associated with Marxism, but there is, in addition to the Marxist tradition, a robust tradition of non-Marxist accounts of exploitation of which the essays in this book are representative. This section will briefly introduce readers to both the Marxist and the non-Marxist understandings of the concept, and set out the central questions about exploitation that remain open for debate.

Brief Background to Clinical Research Ethics

Medical research aims to generate knowledge that will hopefully, in turn, lead to improvements in medical practice. It is a long process. Often there is an initial phase of basic scientific research conducted in laboratories. Animal studies are then usually conducted before new agents are tried out in a human body. Clinical research refers to the later phases of medical research, when human bodies enter the equation. Even here there are multiple stages. Phase I is the name traditionally given to the first studies of an agent in humans. Such studies are usually very small, and hence cannot reveal anything general about how effective an agent is at curbing disease. Instead, the aim is to first learn about toxicity, metabolism, and other drug dynamics.¹ Many such studies are carried out on healthy volunteers, though research on new cancer drugs is an exception.

Phase II and Phase III studies are generally randomized controlled trials where the new agent is compared with something else. Unlike Phase I trials, they are usually conducted on subjects with the illness for which an improvement in treatment is being sought. Phase II studies are generally much smaller than Phase III, and still largely concerned with gaining understanding about safety and side effects. It is the large Phase III clinical trials, which enroll hundreds and sometimes thousands of human subjects, that aim to determine whether a new agent will indeed prove to be a useful general treatment for the illness in question. The controversies described here are all related to the conduct of Phase III trials.

Informed Consent

Informed consent is almost universally recognized as a requirement for the ethical conduct of clinical research. Indeed, the more common danger in discussions of research ethics is for people to reduce all moral concerns about research to concerns about consent. While consent is important, it is only one of the key components of ethical research.²

Valid informed consent has four requirements. First, only those potential subjects who pass the requirements for decision-making competence should be asked to give consent. When it is necessary to enroll incompetent subjects, consent must be obtained from an appropriate surrogate. Three additional requirements must be satisfied with respect to either the competent subject or the incompetent subject’s surrogate. There must be full disclosure of all the relevant information, the subject or surrogate must understand the information, and he or she must then consent freely or voluntarily. Force, coercion, and undue inducement are all recognized as undermining the voluntariness, and hence the validity, of informed consent.

In the context of research in developing countries, a number of distinct and sometimes contradictory concerns have been raised about informed consent. Most recent discussion has emphasized the difficulties of obtaining genuine informed consent in developing countries. Another, older concern is that informed consent may not be culturally appropriate in all parts of the world. Whereas the first group is concerned that consent is not being obtained often enough, the second thinks that consent may not even be necessary. Let us consider first the concerns of those who think genuine consent is not obtained often enough.

Some of the concerns that arise here are concerns about understanding. In 1997, for example, a New York Times article that focused specifically on one of the controversial AZT trials in Côte d’Ivoire sponsored by the National Institutes of Health (NIH) and the Centers for Disease Control (CDC) raised worries about the true level of subject understanding.³ One subject singled out by the reporter clearly did not understand the concept of a placebo despite repeated explanations and questionings. She is quoted as saying:

They gave me a bunch of pills to take, and told me how to take them. Some were for malaria, some were for fevers, and some were supposed to be for the virus. I knew that there were different kinds, but I figured that if one of them didn’t work against AIDS, then one of the other ones would.

Although there is no evidence to suggest that this woman’s poor level of understanding is a widespread phenomenon, such anecdotal evidence is understandably troubling to those who see full understanding among subjects as a central requirement for ethical research. No doubt the challenges of obtaining informed consent among poor, uneducated populations can be daunting. There are often significant language barriers. Not only must the information be translated, but many languages lack words for important medical-scientific concepts.⁵ It can be extremely difficult to find a way to explain randomization or the concept of a placebo. In authoritarian cultures, it may also be difficult to get subjects to appreciate that they really are free not to participate, and that their health care will not be jeopardized if they refuse.⁶ Then there is the problem—familiar from developed world settings as well—of the therapeutic misconception. This is the tendency of some patients to believe, despite being told otherwise, that the research has a therapeutic purpose.⁷

Perhaps the most pressing theoretical question in relation to informed consent and understanding is how much understanding must be achieved if research is to go forward while remaining ethical. For this reason, the requirement of understanding is one of the more worrisome, and hence contentious, aspects of the traditional analysis. At one extreme is the view that understanding is not actually necessary for consent to be valid.⁸ At the other end is the equally extreme view that complete understanding of all details is required. Taken literally, this view would lead to the conclusion that much clinical research—not just clinical research conducted in developing countries—is unethical, because what evidence we have suggests that full understanding is rarely achieved in any setting. Most theorists appear to accept a view somewhere in the middle. But where to draw an appropriate line remains unsettled.

Another set of extremely common concerns involves voluntariness. For example, in a newspaper article detailing some of the worries about consent to research in developing countries, the law professor and bioethicist George Annas is quoted as saying:

I’d argue you can’t do studies ethically in a country where there is no basic health care. You can tell a person there that this is research, but they hear they have a chance to get care or else refuse their only good chance at care. How can you put them in that position and then say they are giving informed consent?

Although Annas does not say precisely what it is that undermines informed consent in these cases, his reference to the choice subjects face suggests that he is worried about voluntariness. He also suggests not just that obtaining informed consent is difficult but that, because of the circumstances in developing countries, it is actually impossible. This suggests, in turn, that he must either believe that all offers made in such circumstances are necessarily coercive, or that all such offers necessarily count as undue inducement. However, neither claim has any merit.

Unfortunately, it is not uncommon for theorists to assume that whenever a person facing a grim set of alternatives makes a choice—even if she chooses what under the circumstances is clearly in her best interests—it must be coerced.¹⁰ Such misconceptions are further ingrained because we often speak loosely of the force of circumstances. However, the identification of coercion proper requires that we consider two features: the choice set of the agent (B), and the actions (if any) of other parties (A) that have unfairly created that choice set. The first requirement for coercion is that B’s set of options be altered unfavorably in the sense of being narrowed. This much is common to many cases of being forced by circumstances. The second, equally important requirement for coercion, however, is that some agent A (the coercer) must have deliberately brought about the narrowing of the set so as to get B to do what he wants. This may come about as a result of A forcibly eliminating options from B’s set. Alternatively (and much more commonly), it involves A threatening to make B worse off if B does not do as A wishes. (Here the threat serves to narrow the set of good options B has by altering the payoffs associated with them.) It is this second crucial condition that distinguishes coercion from mere harsh choice situations.

Once we abandon the simplistic equation of coercion with harsh choice situations, there is no reason to suppose that the offers researchers extend to potential subjects in developing countries are necessarily coercive as Annas suggests. There is simply no evidence to suggest that researchers routinely force or threaten subjects. They simply make them offers that are, in the sad context where subjects have few options for good health care, extremely attractive.

Those who think genuine informed consent is impossible in impoverished settings might, alternatively, be worried that all such transactions—made against a backdrop of poverty and desperation—constitute undue inducement. For example, in a recent policy brief, Lindegger and Bull note that economic factors may undermine voluntary consent. In particular, if subjects have few other options for health care, these authors worry that research may be too attractive.¹¹ They give as an example the following statement from a Thai subject who claimed:

The study staff gives good advice and when this project is over I hope I can enroll in another study. For that matter, I hope there will be new studies for me to participate in all the time. If there would be no more studies, I don’t know if I would have the strength to go on, as I would not know where to get drugs outside of clinical trials.¹²

The idea behind undue inducement is that some offers are excessive and hence irresistible in a way that is morally problematic. But how are we to understand this more precisely? While it is sometimes claimed that an offer is excessive whenever it leads a person to choose something she would not otherwise choose, this is a highly implausible account of the badness of undue inducement. That would suggest that every case of accepting a job for a good salary would count as undue inducement.¹³ A better account of what is morally problematic with undue inducement is that certain highly attractive offers may lead individuals to make choices that are not in their long-term interests, by, for example, leading them to accept risks that are not really worth it. However, if that is correct, it is hard to see how the offers made in developing world research necessarily induce in a bad way. For in the cases that concern us, the sad fact is that, by participating in research, subjects really are choosing something that does—given their current options—serve their long-term interests.¹⁴ As Annas admits in the quotation above, it may be their only chance for effective care.

In short, since there is no reason to believe that either coercion or undue inducement is a necessary feature of offers made to subjects in developing countries, there is no reason to think that voluntary informed consent is impossible to obtain.

Let us now consider briefly the claims of those who think that informed consent in developing countries may not be ethically required and may even be culturally inappropriate. In the years prior to the dispute about the AZT transmission trials, a number of commentators argued that, for cultural reasons, individual informed consent might not be a necessary ethical requirement of research in developing countries.¹⁵ The claim was that informed consent is a uniquely Western ideal, one grounded in the peculiarly Western fascination with personal autonomy understood generally as individual decision making.¹⁶ Where this value is not shared, it was argued, it may be inappropriate to impose such practices. This does not seem to be a widely held view, and it has been explicitly challenged.¹⁷ But it does reflect genuine concerns people have had about how to approach informed consent in cultures where the practice does not neatly fit with local norms. Without being able to settle the issues here, it is worth making a few points about the general weakness of such arguments. The problem derives from placing too much emphasis on the autonomy-based justification for informed consent, as if that were the only reason to obtain informed consent. However, there are other justifications for the practice besides that one.

First, even if informed consent is a Western practice reflecting Western values, it is worth remembering that RCTs are also a Western practice that reflect Western values. Assuming that people are interested in importing the practice of RCTs, it may not be inappropriate to insist that informed consent be imported along with RCTs. At any rate, we should not take the mere fact that individuals in other places find the practice odd or cumbersome to be reason for giving it up. Their puzzlement may simply reflect their unfamiliarity with the realities of research as opposed to treatment. Informed consent is part of the apparatus developed in the West for safeguarding participants in a particular type of Western practice that strongly resembles but nonetheless differs from the practice of medicine. If it makes sense to think that ethical practices develop in accordance with the prevailing nonethical context, it would not be surprising to discover that a community with no tradition of research will have had no prior need to develop a practice of individual consent. But it would be too hasty to assume that it would not benefit from having a practice such as informed consent in place when it begins to participate in research.

Second, such arguments assume that the only ethical considerations that speak in favor of informed consent are considerations pertaining to decision making. In essence, the claim is that if individuals in a particular community do not value the opportunity to make decisions for themselves, then there is no need to obtain individual informed consent. This argument assumes that informed consent is merely a vehicle for allowing people to exercise decision-making power.

However, informed consent is much more than that, as experience even in the West has shown. Numerous empirical studies have shown that people in the developed world are frequently not as interested in making decisions for themselves as theorists assume.¹⁸ Yet they are still frequently interested in being informed about what is going on, even when they are letting someone else decide for them.¹⁹ It is important that people understand what is going on, even if they defer decision-making authority. There is more that matters to people than simply being able to exercise choice. It is at least possible, therefore, that informed consent serves an ethical function, even in cases where people are quite happy to defer the final decision about whether to participate to someone else (perhaps a senior family member or a community leader). Having the practice of individualized consent in place simultaneously serves to offer those few who might want to make a decision for themselves the opportunity to do so, while ensuring that those who do not care as much about exercising choice are still well-informed about what is happening.

Finally, the practice of obtaining informed consent serves another important function that is all too often ignored: it yields the benefits that derive from transparency. The practice of informed consent can benefit subjects indirectly because of the ways it influences the behavior of researchers. It is plausible to suppose that the general transparency that results when each subject has to be carefully informed of all aspects of the research heightens the awareness and sensitivity of the researchers. Knowing that others know what you are doing, and why, can be a useful way of instigating a sense of accountability that may itself serve to reinforce ethical conduct among researchers.

Randomized Controlled Trials

Theorists frequently like to point out that the history of human subjects research is longer than often supposed—that human beings have been trying things out on one another for a long time. While this is certainly the case, it is also true that the modern era of clinical research using randomized controlled trials is breathtakingly new. This is particularly amazing when we consider the sheer volume of such trials occurring in the world today. Yet, according to one expert, this era of RCTs only really began in the early 1950s with the evaluation of streptomycin in patients with tuberculosis.²⁰

A randomized controlled trial involves dividing subjects into groups, or arms, each of which will be treated in a different way for the sake of controlled comparison. The subjects are assigned to particular arms randomly—by lottery, as it were—in order to ensure that unacknowledged bias does not lead researchers to cluster similar people together in groups. If all or most of the people with a particular medically significant characteristic are clustered in one arm of a trial, then differences in the responses of subjects in that arm cannot be reliably attributed simply to the general difference in study drug or intervention. The differences might also have arisen because all of these people differ from the general population in some important way. Because bias can creep in in very subtle ways, randomization is considered extremely important for arriving at generalizable comparisons of one group to another.

In most cases, RCTs are also double blinded, which means that neither individual subjects nor researchers know, during the course of the trial, who is in which group. Only certain members of the research team, who have no interactions with subjects, have the information that links particular subjects with particular interventions. To accomplish a truly blind trial, the different interventions being studied must be made to appear identical. To give a rather trivial example, if a study drug is given as a set of five pink pills, then any other medications used in other arms of the trial (or a placebo, if there is one) would also be presented to subjects as five pink pills.
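For readers who want a concrete picture of how assignment and blinding fit together, here is a minimal sketch, not drawn from the book; the function name, subject IDs, and coded labels are hypothetical. Real trials use validated randomization systems, usually with blocking or stratification to keep the arms balanced, which simple coin-flip assignment like this does not guarantee.

import random

def assign_arms(subject_ids, arms=("drug", "placebo"), seed=None):
    """Randomly assign each subject to one arm of a two-arm trial.

    Returns a blinded mapping (subject -> coded label) for the clinical team,
    and an unblinding key (coded label -> true arm) held only by staff who
    never interact with subjects.
    """
    rng = random.Random(seed)
    codes = {arm: label for arm, label in zip(arms, ("A", "B"))}
    blinded = {sid: codes[rng.choice(arms)] for sid in subject_ids}
    key = {label: arm for arm, label in codes.items()}
    return blinded, key

blinded, key = assign_arms([f"S{i:03d}" for i in range(1, 9)], seed=42)
print(blinded)  # e.g. {'S001': 'B', 'S002': 'A', ...} -- clinicians see only the codes
print(key)      # {'A': 'drug', 'B': 'placebo'} -- held by the unblinded statistician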

The point of such elaborate hiding of information is to control for placebo effects. It is well documented that many people will experience some level of improvement whenever they believe they are receiving an effective treatment (whether or not they are). In other words, the epistemic state of the subject has a measurable effect that is not part of the effect a drug produces on its own. Hence, it is important to ensure that all participants are in the same epistemic state, so that any measured differences in subject response can be reliably attributed to differences in the medical efficacy of the different intervention agents. To accomplish this, researchers ensure that all participants are in an epistemic state of ignorance.

Placebo-Controlled Trials and Active-Controlled Trials

One of the key disputes in discussions of research ethics concerns study design. While clinical trials can, for various reasons, assume a bewildering variety of forms, only two basic designs need concern us here: placebo-controlled trials (PCTs) and active-controlled trials (ACTs), sometimes referred to as equivalency trials. For the sake of simplicity, I shall assume that we are always dealing with trials with only two arms. In reality, many trials have multiple arms, and in some cases incorporate both placebo and active controls. Also for the sake of simplicity, I will write as if all RCTs are trials of single drugs, though of course RCTs are used to test novel drug combinations, novel drug regimens (as in the AZT trials, where AZT itself was not new), vaccines (as in the Havrix case presented in this volume), and all sorts of medical procedures, even including complex forms of surgery. The term drug is more convenient than the correct but cumbersome phrase investigational agent or intervention.

An active-controlled (or equivalency) trial compares a test drug to an already established drug that has been used successfully for the same condition. In an ACT, the question being asked is: Is the new drug better than, or at least equivalent to, the existing drug? An ACT compares two types of treatment to one another, usually on the assumption that, if the new drug is not at least as good as the old, the new one will not be worth developing.²¹ Of course the goodness of a drug (or any intervention) is not as simple as I am making it seem here, for it is an all-things-considered assessment on the basis of efficacy, safety, and side effects. However, keeping this important qualification in mind, it is still true that if the new drug is not (all things considered) as good as the old, it would not be used in a context where the old drug is readily available.

An ACT is not designed to answer the question, for less effective drugs, whether those drugs are still better than nothing. All it tells us is whether or not the new drug is at least as good as the old. In cases where the new drug is not as good as the old, there is no direct way, with an ACT, to gauge how much better than nothing the new drug is. Of course it is sometimes possible to answer such a question indirectly with data from an ACT if one also possesses data from a PCT of the old therapy. But it must be possible to generalize from one trial to the other (if too many variables are different, this may cause problems), and of course this also assumes that a PCT was once done. This is not always true; many standard therapies have never been tested against placebo.
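To make the indirect inference concrete, here is a small sketch with purely hypothetical numbers (not taken from the book); the crucial, and often shaky, assumption is flagged in the comments: that the old drug's advantage over placebo observed in the historical trial carries over to the new setting.

# Hypothetical figures from an earlier placebo-controlled trial of the old drug.
historical_placebo_response = 0.30
historical_old_drug_response = 0.60

# Suppose the ACT shows the new drug is no more than 10 percentage points
# worse than the old drug (a non-inferiority margin).
noninferiority_margin = 0.10

old_drug_effect_vs_placebo = historical_old_drug_response - historical_placebo_response
# Worst case consistent with the ACT result: the new drug is a full margin worse
# than the old drug. The inference only works if the historical effect generalizes.
implied_new_drug_effect_vs_placebo = old_drug_effect_vs_placebo - noninferiority_margin

print(round(old_drug_effect_vs_placebo, 2))          # 0.3 (old drug beat placebo by 30 points)
print(round(implied_new_drug_effect_vs_placebo, 2))  # 0.2 (new drug still ~20 points better than nothing)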

Placebo-controlled trials are RCTs in which one arm of participants receives a placebo (i.e., some type of medically inert substance). A well-conducted placebo-controlled trial enables researchers to answer the question: Is the test drug more effective than nothing, and if so, how much better is it? The idea that one is comparing a drug to nothing is not literally correct. But, as we shall see, it is a simplifying fiction that need not undermine the basic rationale for such trials.

The reason it is not nothing is that placebos have a measurable effect. Indeed, controlling for this effect is what makes having some form of control important. Placebo effects appear to have attitudinal (mental) origins, and as such they will enter into the active arm of the trial anyway. Hence, the use of a placebo arm is preferable to just giving the test drug to a bunch of people and observing how well they respond compared with the general population. Researchers want to control for those observable health improvements that are simply the result of changes in participant attitudes (i.e., placebo effects). In many cases, it is important to answer the question: To what degree are observed beneficial effects attributable to the test drug alone—as opposed to other factors, whatever these may be? This matters, as it is only the benefits of the drug itself that should be considered when deciding whether the drug is worth developing and distributing.

As a general rule, placebo controls are not used in the developed world once some form of effective therapy exists for a condition. However, many people think that there are ethically justified exceptions to this rule. A large part of the rationale for the rule is to guard against inadequate attention to individual subject welfare. In many cases, attention to welfare is extremely important either because the welfare of the particular subjects in question is independently quite fragile (as in the case of very sick people used as research subjects), or because the research drug itself places individual welfare at high risk, or for both reasons together. But not all research is like this, and many RCTs are not very dangerous. For example, Emanuel and Miller point out that in trials involving conditions such as baldness, or some types of headaches, it may be perfectly okay to use placebo controls.²² The real issue arises when the health risks involved are significant. It is easy to forget such points precisely because the cases that attract the attention of research ethicists are usually the dramatic cases in which life itself is at risk.

Furthermore, it is often the case that PCTs, because of their design, involve fewer subjects overall than ACTs. In some cases, this fact itself can, arguably, make it ethical to use placebos. For example, where the health dangers of being assigned to placebo are not too great, and the health profile of a new drug is highly uncertain (there is some chance that it will have quite negative side effects), it may well be better to test the new drug against placebo, since by doing so you will expose many fewer people to the unknown risks of the test drug. Emanuel and Miller explain the point well:

Equivalence trials, which evaluate the hypothesis that one drug is equivalent to another, typically require larger samples to achieve sufficient power because the delta, or the difference between the rates of response to the two drugs, is likely to be smaller than that between the rates of response to an investigational treatment and placebo. Consider an equivalence trial in which an investigational drug is compared with a standard drug that is known to have a 60 percent response rate. With a delta of 10 percent (if they were equivalent, the difference between the standard and investigational drugs would be less than 10%) and a one-sided statistical test to show equivalence, each group must contain 297 participants. Conversely, if a placebo is hypothesized to have a 30% response rate and the investigational drug a 60% response rate, then only 48 participants are needed in each group. With the sample required for the equivalence trial—larger by a factor of six than the sample required for the placebo-controlled trial—many more subjects will be exposed to an investigational drug that may be ineffective or even more toxic than the standard drug.²³
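To see where numbers of this kind come from, here is a minimal sketch, not taken from Emanuel and Miller or from this book, of the standard normal-approximation sample-size formula for comparing two proportions. The significance level and power below are assumptions chosen for illustration; under them the equivalence-trial figure of 297 is reproduced, and the general point stands that the equivalence design needs several times more subjects per arm.

from math import ceil
from statistics import NormalDist

def n_per_arm(p1, p2, delta, alpha=0.05, power=0.80, one_sided=True):
    """Approximate subjects per arm for a two-proportion comparison, where
    `delta` is the difference to detect (or the equivalence margin) given
    assumed response rates p1 and p2 (normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha if one_sided else 1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / delta ** 2)

# Equivalence trial: both drugs assumed to have a 60% response rate, 10-point
# margin, one-sided test, 80% power -> 297 per arm, matching the quotation.
print(n_per_arm(0.60, 0.60, 0.10))

# Placebo-controlled trial: 30% placebo response vs. 60% drug response ->
# about 31 per arm under these same assumptions. Emanuel and Miller report 48
# under their own assumptions; either way, far fewer subjects are needed than
# in the equivalence design.
print(n_per_arm(0.30, 0.60, 0.30))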

These examples serve to remind us that the issue of placebo use is quite complex. Not all trials pose significant risks to subjects. Moreover, it is not always safe to assume that the risks posed to subject well-being by nontreatment are greater than the risks posed by new
