JuLeigh Petty
Center for Medicine, Health, and Society, Vanderbilt University, Nashville, TN, USA
Carol A. Heimer
Department of Sociology, Northwestern University, Evanston, IL, USA
American Bar Foundation, Chicago, IL, USA
Abstract
The purpose of clinical research is to create the scientific foundation for medical practice. In this
way of thinking, the effect on medical practice occurs after the research has been completed.
Social studies of science have debunked the standard model of scientific research, observing that
changes in practice associated with research occur not just because of the results of research
but also because of the practice of research. Drawing on fieldwork in HIV clinics in the US,
South Africa, Thailand, and Uganda, we argue that clinical trials shape medical practice by altering
the organizations in which both medical treatment and clinical trials take place. Three general
processes are central to this transformation: the modification of material environments, the
reorganization of bureaucratic relations, and the prioritization of research values. These processes
unfold somewhat differently in the clinics of poorer countries than in those of wealthier ones.
Keywords
clinical research, diffusion, HIV, medical practice, standardization
Corresponding author:
Carol Heimer, Department of Sociology, Northwestern University, 1810 Chicago Ave,
Evanston, IL 60208-1330, USA.
Email: c-heimer@northwestern.edu
According to Timmermans and Berg (1997: 297), the resuscitation protocols and
oncology research protocols they studied were "the means through which facts can be
produced" and, at the same time, "a crucial part of the networks through which the facts can
be performed" (italics in original). In clinics that conduct research, then, research results
enter a world that has been re-made in ways that ease their implementation.
Research program: (1) Major research facility; (2) Sub-site of another research program; (3) Began as research facility with treatment program added later; (4) Began as treatment program with research added later; (5) Began as research facility with treatment program added later.
Location of research and treatment activities: (1) Outpatient HIV clinic in hospital building; (2) Stand-alone HIV clinic not on main hospital grounds; (3) Stand-alone HIV clinic not on hospital grounds; (4) Outpatient HIV clinic in hospital building; (5) Stand-alone HIV clinic not on hospital grounds.
Medical school affiliation: (1) Physicians hold medical school appointments; research nurses are employees of university; (2) Physicians hold faculty appointments in one of two medical schools; (3) Top tiers of physicians hold medical school appointments (in Thailand or other countries); (4) No formal tie between clinic and local medical school, but collaborations with university faculty; formal ties with US medical school; (5) Top tiers of physicians hold medical school appointments (in Uganda or the US).
Hospital affiliation: (1) Research formally part of medical school and treatment formally part of university hospital; (2) Clinic is sub-unit of hospital; (3) Clinic is a clinical center of the medical school hospital; also has ties to other local health organizations; (4) Clinic is sub-unit of hospital; (5) Clinic not sub-unit of hospital, but clinic research often carried out in hospital and hospital staff are also employed by research facility.
Major sources of funding: (1) Research funds from US; (2) Research funds from; (3) Research funds from US; (4) Research funds from US; (5) Research funds from US
although the constraints imposed by treatment programs are also rather severe. In the
Thai clinic, care continues to be given under the umbrella of research, but that umbrella
has been enlarged to accommodate research subjects who cycle off research projects but
have no other way to access HIV/AIDS treatment.2 In US1, there was some overlap
between research and care. Both took place within the same clinic, and the research
nurses and physicians did both clinical and research work. Most research subjects were
also clinic patients. When the study subjects required HIV treatment beyond what a study
provided, they usually got that care in the clinic and it was paid for with their own insur-
ance. Whenever changes had to be made to their medications, the research staff con-
sulted with subjects HIV care providers to encourage them to make changes that
complied with the research protocol if at all possible. However, medical needs came first,
and the research staff would never discourage or prevent a subject from receiving needed
medical treatment even if that meant losing a subject from a study. In US2, research and
care were treated as discrete activities. The research and treatment units were housed in
the same building but operated separately. In fact, to prevent research subjects from confusing research with treatment, the clinic's policy was that study subjects were not to be
seen by their primary care physicians when they came for study visits. How the South
African clinic was going to manage the overlap between research and care was less clear,
because the clinic had not yet fully developed its research program.
In each field site, one or two members of our team conducted the bulk of the research
while other researchers visited the site, usually for a couple of weeks.3 The fieldwork in
the two American clinics was of longer duration (just short of 2 years in US1; 13 months
in US2) but was less intensive (we were not in the field every day). We spent 4 months
doing very intensive fieldwork in the clinics in Thailand, Uganda, and South Africa, with
multiple visits of a couple of weeks before and after. We first began fieldwork in US1 in
the fall of 2003 and last revisited our sites in Uganda, Thailand, and South Africa in the
summer of 2007.
Because organizing for research often leads to tension between research and clinical
staff, many topics we were interested in were rather sensitive. In our first days in each
site, staff members nervously joked about whether we were really there to see whether
they were doing things properly. As our research unfolded, we gained access to the for-
mal meetings and informal discussions that surround research and caregiving. We shad-
owed staff as they went about their work, which included study visits, clinical
examinations, phone calls, monitor visits, and meetings. We watched them complete
paperwork and requested copies of forms and policies. We attended weekly research
team meetings, training sessions, clinical care meetings, meetings about standard operating
procedures, research grant application meetings and even prayer meetings. We interviewed
and shadowed staff in a wide variety of positions at all levels of the hierarchy: principal
investigators, doctors, nurses, administrators, social workers and counselors, reception
and clerical staff, data entry staff, and so forth.
entities such as the sponsors of clinical trials or the scientific protocol teams that design
procedures to ensure the comparability of data collected in many sites. Other changes
arise as researchers, caregivers, and administrators attempt to balance research and care-
giving under local constraints.
Berg's (1997) studies of the rationalization of medical work offer interesting parallels
and suggest mechanisms that might account for this broad impact of clinical research on
medical practice. Although decision tools are intended to be universally applicable, in
fact they are typically only very locally useable, and then only with a lot of work to fit
them into the setting (Berg, 1997). In the course of double-fitting clinical research to the
organization and the organization to clinical research, the material environment is altered
with the introduction of new tools of research, bureaucratic relations are reorganized
with the introduction of new and retrained staff, and priorities are changed with adoption
of scientific practices. A broad correlate of these processes is standardization within and
among HIV clinics as clinics conduct studies developed and funded by a small set of
large sponsors and governed by an increasingly shared set of ethical and technical rules
(Petty, 2008). Standardization is important not only as an effect (uniformity that makes
HIV clinics all over the world mutually comprehensible), but also as a technique. Below
we examine these three general processes, observe how they vary with levels of development and clinics' existing social and technical networks, and show how research-related
changes in clinics pave the way for the adoption of research findings.
patients and for children. With this evidence about levels of drugs in patients' blood,
clinicians lowered doses, saving valuable medicines and reducing side effects. As one of
the lead researchers emphasized, though, it would have been inappropriate to lower doses
without the data from the research.
These synergies between laboratory facilities, research, and caregiving also showed
up in decisions about pap [cervical] smears, which were initially added to improve one
study but later became standard of care for all research and treatment programs in the
Thai clinic. When researchers wanted additional ways of measuring disease progression
in one long-term study, one researcher suggested in the weekly doctors' meeting that
annual pap smears for the women would provide another clinical end point (cervical
cancer is a correlate of HIV infection). As they discussed whether and how to provide
pap smears, clinic staff worried both that they could not offer the test at their sub-sites
and that Thailand might actually be "behind the Western sites if all of the other countries
are doing annual pap smears, and [we] are not." Noting that Latin American sites had not
been doing pap smears, they decided to tell the research sponsor about their plan and to
suggest that other countries like the Latin American ones might wish to follow
Thailand's example. In a project meeting later that day, they debated whether to do
cheaper "conventional" or more expensive, but more accurate, "liquid" pap smears, settling on liquid smears for initial examinations and conventional smears for subsequent
examinations. The same schedule was later adopted for the observational study that the
Thai clinic used as a vehicle for providing care to patients who did not qualify for other
studies.
In addition to wanting to keep up with wealthy countries and be a role model for other
research clinics, the Thai clinicians were clearly pleased to provide high quality care to
their patients. In the clinic meeting mentioned above, one caregiver readily agreed to
write up notes from the examinations and track any necessary follow-up; because "providing this care is very important to her ... she was willing to take on quite a bit to be able
to do it." During clinic hours, another caregiver seemed pleased about providing this
service, even though it makes the visits very long for the patients. Improved laboratory
facilities and findings from research studies created opportunities for providing better
care. Indeed, once a higher standard of care became possible, one staff member argued,
they were "required by GCP [good clinical practice] to provide the best care for the
patient including pay[ing] for all the tests."
In exploring the relationship between research and the material environment of the
clinic, we make two related arguments. First, research entails importing new technolo-
gies or using old technologies in new ways. These technologies include medications,
tests, equipment, paper or computerized forms, and even exam rooms. In our study, we
observed that research-mandated technologies have bigger immediate effects but smaller
long-term consequences in poorer clinics. Second, researchers make non-mandated
changes to the material environment of the clinic to ease the enactment of study protocols. According to Berg (1997: 93), materializing a tool's demands, or in this case,
materializing research's demands, requires that those advocating the use of a tool
change the material environment so that the decision technique becomes an "unavoidable (and often unnoticed)" part of daily practice. Both of these processes, adopting
research-mandated materials and changing the material environment to facilitate the
and scarcer in the African clinics. Researchers in Uganda, particularly, were keenly
aware of the material barriers to complying with research protocols. Internet connections
were unreliable in Uganda, making it difficult to file serious adverse event (SAE)
reports in a timely way.4 The lack of reliable medical records in the Ugandan hospital
sometimes meant that more leg-work was required to track SAEs. When the Ugandan
clinic instituted a quality assessment/quality control (QA/QC) program (described more
fully below) to reduce protocol violations and documentation errors, the trainer
instructed QA/QC reviewers to pay attention not just to documentation but also to the
conditions that make it possible to do documentation:
If people don't have any place to sit while they do their work, they are less likely to fill out
forms correctly or to check them. If there is not a clock, they can't document the time if they
don't themselves own watches. If there is not a locking cabinet, they can't store their work
securely. You can't tell people not to leave forms in the pocket of the binder (as they have) if
you don't supply them with a hole punch so that they can prepare the forms to insert in the right
sections of the binder.
The QA/QC program provided what equipment it could. Where rule infractions were
related to working conditions that could not be changed, QA/QC reviewers were to document the local working conditions for external monitors. In contrast to the subjective
form in US1, much of the work of implementing research protocols is never made
"unavoidable and unnoticed" in poor settings.
When routines have already been reconfigured to accommodate research protocols,
the subsequent work of implementing research results, now translated into new tools
(such as clinical practice guidelines), is likely to be less onerous, less noticeable, and less
avoidable. The ratcheting up of rigor, detail, and pace that comes with research may thus
make the subsequent implementation of research results, which requires some of this
same extra work, seem relatively easier. Some costs of implementing research results
have merely been displaced into the research phase. Research may entail more work than
simply caring for patients, but taking on board new therapies and administering new
drugs also entails extra work unless, of course, one has already learned appropriate
practices by participating in the research on those therapies and drugs.
In materializing the clinic's environment, research projects alter the way that people
interact with objects and so make it easy for them to continue to use those new tools and
technologies even after research projects end. Healthcare institutions that have not partici-
pated in the research have to bear the start-up costs of acquiring and learning to use the
technologies required to implement research results. The rapid adoption of research results
in research clinics is thus partly accounted for by the continuing presence of the tools,
technologies, and infrastructure of research, as well as the staff's recently acquired skills
for using the new materials. Techniques and skills, as Knorr Cetina (1999: 220) has noted,
travel not just through laboratory protocols, but through packages of arrangements that
incorporate scientists and material objects and that need to be recreated in local contexts.
different: "As a quality management team, you see that the problem is there is no form.
That is a major problem!" So they waited until the unit doctor arrived and then discussed
the situation. To make the doctor see the problem, the regulatory specialist role-played
with him, asking him to pretend that she had failed to record the data and he was inspecting her work. Watching his reaction to the error ("he was actually harsher than I was,"
the regulatory specialist confided) made things click for her team. They now understood
that they were evaluating the job, not the person, and that poor work sometimes required
harsh assessments. The special care nursery team ultimately proposed some recommen-
dations on their own and warmly welcomed the QA/QC group on a follow-up visit,
consolidating the QA/QC team's emerging sense that it really was acceptable to be critical when making inspections.
A few days later, another documentation problem turned up in an unannounced site
visit to a post-natal ward. This time, a confession occurred quickly. As they discussed
the problem that "the doctors sometimes forget to summarize the case in the discharge
section at the bottom of the form ... [the lowest level QA/QC staffer] tells them that actually the nurses end up writing the discharge summary even though it is supposed to be
the doctors who do it. ... [They all agree, though] that the nurse can't sign the form once
she has filled it out because the line for the signature clearly indicates that it is supposed
to be the doctor's signature. The professor (QA/QC team member and also a doctor)
seems baffled about why it should be a problem for the doctors to write given that
all they have to do is say things like the sex of the child, that it was a simple vaginal
delivery, the date and time, and so on. [He runs through the couple of sentences that
would be required.]" In this reordered workplace, the lowest level QA/QC staff members
are now quickly reporting errors or violations of protocol, even when it is their hierarchi-
cal superiors who are not correctly and fully documenting.
As it shapes their activities, these examples show, research also reshapes relationships
among colleagues. The non-relational parts of work (skills and tasks) nearly always spill
over into the relational aspects of work (Barley, 1990): the QA/QC tasks changed the
relations between research staff inspecting medical and research records and the nurses
and doctors who write those records. When they take up QA/QC work, nurses, who cus-
tomarily follow doctors orders, must learn to correct the doctors whose medical records
are being used for research. Research inevitably confers new duties on clinical staff, as
nurses become study nurses and physicians become physician-investigators. Some of
the changes required by research are superficial, as Fisher (2009) showed in her study of
pharmaceutical clinical trials conducted by non-academic physicians, but many are not.
Research has led to the creation of wholly new job categories, including study coordina-
tor, data monitor, and regulatory affairs coordinator. Although new staff members were
hired and existing staff retrained and redeployed in order to conduct clinical research in
all five of the clinics we studied, the wealthier clinics were better able to meet staffing
requirements.
Nurses, doctors, and administrators compete for authority in a clinic; the introduction
of new tools shifts the kinds of capital that are valued, fostering interpersonal conflict as
some seek to conserve the current authority system while others attempt to subvert it
(Bourdieu, 1975; Hong, 2008). Bergs work (1997) emphasizes that tools reinforce
bureaucratic hierarchies. Support of the established medical hierarchy is particularly
salient during the backstage work of clinical research when protocols are written by
experts (Mueller, 1997). Study protocols reinforce established clinical jurisdictions by
dictating that certain categories of professionals be represented. For example, because
ACTG research units are required to have on staff experts in a long list of fields, includ-
ing virology and immunology, primary care clinics are unlikely to be selected as research
units. Other sites are excluded from particular studies because they lack relevant exper-
tise. US2 ruled out some studies because it had no on-site hepatologist. Within clinics,
research protocols support the established medical hierarchy by defining who is allowed
to examine patients, prescribe and dispense drugs, and interpret diagnostic tests.
These professional jurisdictions and other procedures of the organization are codified
in standard operating procedures (SOPs), which are required for participation in research.
According to the DAIDS ACTG, SOPs "spell out exactly how you are expected to do
things."5 SOPs define how procedures, examinations, and measurements are to be done
and who should do them; they dictate how to manage samples (for example, how to array
blood samples in a box for shipment, as we learned in Thailand). They lay out procedures
for filling out case report forms and laboratory requests, for correcting errors, and for
quality assuring forms. When monitors inspect a research sites work, they routinely
verify that the site has SOPs and then sometimes check whether the site is actually fol-
lowing its SOPs (Heimer and Gazley, 2010).
Although the imposition of SOPs can be experienced as a hassle, SOPs can also be
welcome infrastructure, particularly when they clarify relations among staff and do not
simply duplicate or supplant existing infrastructure. During our fieldwork, the Ugandan
clinic conducted a big SOP writing program as it launched a large study, funded by the NIH
and the Ugandan Ministry of Health. It was a moment of liberation when the Ugandans
realized they could write their own SOPs rather than adopting the NIH templates. It is one
thing to be expected to follow a regular procedure, but quite another to be expected to fol-
low an American procedure in a Ugandan environment. SOPs must actually fit the site, and
lengthy weekly meetings were required to craft workable SOPs. But once these jurisdic-
tional and procedural issues were worked out, the SOPs were adopted for general clinic
use, reshaping relations well beyond the project for which they were originally created.
Although SOPs and research protocols reinforce some authority relations, they recon-
figure others. Conducting research disrupts established relations when local research
staff members follow research protocols and perform work they are not usually autho-
rized to do. Protocols require that research staff perform particular tests at specified
study visits or in specified situations, much like nurses are required to follow the standing orders written by doctors. A simple example is the standing order to give an analgesic if a patient's oral temperature rises above 101°F (38°C). The distinction between
standing orders and research protocols is distance. Research protocols are written by
experts who may live and work far away from those implementing the protocol. As with
the standing order, the authority of a research protocol formally arises from the expertise
of those who wrote it. Nevertheless, because of the distance between the authors and
implementers of the research protocol, the apparent authority of local implementers
increases. Thus, established notions of medical authority are called into question in the
implementation of research protocols when, for instance, nurses are asked to do work
usually performed by physicians.
Authority relations are also challenged by the layers of external and internal scrutiny
introduced by research. To ensure uniformity, a protocol team reviews key site decisions,
answers questions, and decides how to manage unexpected events. Because external
study monitors regularly review the work of researchers, medical decisions must be doc-
umented in excruciating detail. Monitors appointed by study sponsors verify that study
data are accurate, complete, and verifiable, that the trial is in compliance with the proto-
col and other regulatory requirements, and that the rights and well-being of human sub-
jects are protected (International Conference on Harmonization, 1997). Unable actually
to watch the daily conduct of research, monitors primarily review records, and clinics
establish record-keeping routines to avoid getting "dinged" by monitors. Those whose
medical record-keeping is deemed inadequate may even receive remedial training,
quite an affront to those at the top of the medical hierarchy (Heimer, 2008).
Although the intervention of research staff in caregiving was largely about the pro-
duction of records, there were also consultations about the care itself, particularly when
local procedures would seem inappropriate to external referees reviewing the manage-
ment of serious adverse events (SAEs). Some adverse events (for example, malaria) that
are categorized as serious by the research protocols may seem routine to Ugandan doc-
tors; similarly, care that would be deemed inadequate by American researchers and doc-
tors (for example, transfusing without first doing blood tests) may be viewed as standard
in the Ugandan context. Because of these cross-national differences, the researchers in
the Ugandan clinic intervened more frequently in the work of other caregivers than did
the researchers in the American or Thai clinics. And Ugandan caregivers were generally
more willing to accept the authority of researchers, even on matters of care, than were
their American and Thai counterparts.
This was a matter of degree, however. Although Ugandan caregivers were quite will-
ing to comply with documentation requirements, where researchers had clear authority,
they were less amenable to intervention in medical decisions, where the authority of
research doctors was more precarious. For example, the Ugandan clinic staff worried
about the appropriateness of the hospital staff's admission and discharge decisions. They
were troubled that sick people were being sent home, but concerned that sending people
back to the hospital from the clinic would alienate hospital staff. Saying that the situation
made him "jittery," one doctor worried that hospital staff would think "here comes Dr
[name] to tell us that this patient is not ready to go home. ... Our relations with them will
deteriorate, and the patients will suffer." During our fieldwork, clinic doctors were developing a policy on when and how to intervene when they disagreed with hospitalization
decisions. Similar concerns about alienating other clinicians came up in Thailand, where
research doctors worried that less aggressive specialists in the affiliated hospital would
be irritated if clinic research doctors pushed for particular tests. To avoid conflicts, they
sometimes sent patients to private clinics for specific cultures and tests or worked through
clinic research doctors who had especially good relations with doctors in one of the uni-
versity hospitals.
The Ugandan researchers were more likely to intervene in hospital decisions than the
researchers in our other field sites because of variations in the standard of care, which we
take up now, as well as the shape and rigidity of the authority structure, which we turn to
below. HIV treatment was relatively standardized in four of the five clinics we studied.
Participation in research raised the status of doctors, elevating them above other doctors
who still needed to be honored as equals. Often, as in South Africa, doctors from other
facilities sent their patients to the HIV experts who were also conducting research. But
caregiving is no respecter of institutional boundaries and research doctors often found
that they had to share the care of their patient-subjects with other doctors when patients
had to be hospitalized or needed treatment from other specialists. A good bedside manner
had to be augmented with strong diplomatic skills if these clinic research doctors were to
induce their colleagues to adopt the latest scientific findings rather than continuing with
business as usual. These tensions across organizational boundaries underscore the orga-
nizational accomplishment of the clinics: where work relations have been fully reorga-
nized to recognize research expertise and where staff have learned the meta-technology
of standardization, as they have in the clinics themselves, research results take root and
flourish without the aid of diplomacy.
records and for reviewing the information, for instance to help locate potential research
participants (with unusual mixes of characteristics).
In response to the requirements of specific research projects and funding agencies,
clinics developed practices for ensuring the comparability of records. For example, the
Ugandan clinic brought in a consultant to help them make their records useable by the
US Food and Drug Administration (FDA) because they anticipated that the data from
one study would be used as part of an FDA approval process. The Ugandan manager of
the health visitors (outreach workers) described how the new standards of record-keeping increased and changed her work. Before the consultant's visit, they used index cards;
afterward, they used forms. They had to "legalize" everything, including even the writing.
If someone scribbled, they now had to cross out the illegible inscription, correct it, date
it, and sign it. Research staff in American clinics, indeed in all the clinics, also com-
plained about these nit-picky research requirements. However, in the US similar stan-
dards of record-keeping are defined and enforced by non-research entities such as the
Joint Commission. Although pre-existing rules increase the potential for conflicting stan-
dards, they also increase the odds that the organization will be able to follow additional,
similar rules.
Multi-center research spreads Western expectations about medical record-keeping.
The practice of recording medical information to share across locations is less routine in
Uganda or Thailand than in the US. Implementing a standard medical record in early
20th century American hospitals required constructing a network of doctors, administra-
tors, and buildings (Timmermans and Berg, 2003). Similarly, maintaining research
records requires a network of personnel, objects (binders, file cabinets, computers), and
locations (laboratories, work space for completing forms and entering data, storage
rooms). While participating in treatment programs such as the US President's Emergency
Plan for AIDS Relief (PEPFAR) also increases record-keeping duties, as we saw in sev-
eral of our clinics, evaluations of treatment programs are typically less deep than corre-
sponding evaluations of research. Indeed, the extra documentation work required for
research compared with treatment was a source of tension among the Ugandan staff.
Donors funding treatment programs certainly expect reports and often give exceedingly
elaborate instructions for the preparation of those reports, but unlike research monitors,
donor staff and site visitors rarely review individual medical records. Research encour-
ages a legalistic orientation to medical record-keeping because of the anticipation that
outsiders will use the record to make judgments about work quality. This orientation is
already common in the US, where the audience for medical records has expanded beyond
local caregivers (for example, in the same clinic or hospital) to include outsiders who use
the records for non-caregiving functions, such as justification for payment or evidence of
malpractice.
Medical research entails a reorientation toward patient information. Numerous schol-
ars have observed the objectification of patients in medical care. During a medical exam-
ination, an individual is subjected to the clinical gaze and established as an analyzable
object, a case (Foucault, 1973). Clinicians depersonalize patients by treating them as
biological processes (Anspach, 1988). Research goes even further, because transforming
patients and their complaints into data requires removal of identifying and extraneous
information. Depersonalization conflicts with the ethos of nursing. As one US1 study
nurse pointed out, nurses are charged all the way through their training with the obligation to think of "Mrs Jones in room 235," rather than depersonalizing her.
Through a process of "reshuffling spokesmanship" (Berg, 1997), research shifts what
kind of information is relevant and trustworthy. In particular, quantitative data are privi-
leged over qualitative data. An important example of quantification is the grading of
symptoms on a four-point severity scale to make symptoms ranging from nausea to
blood pressure comparable (DAIDS/RSC, 2004). Although symptoms are graded in
research records, they often are not graded in ordinary clinical records. Research staff
drawing on clinical records to fill out case report forms often had to ask clinical staff to
grade symptoms after the fact.
Research also tends to privilege machine measurement. As Anspach (1988) observed,
different levels of authority are attributed to technology, professional staff, and patients;
the x-ray "shows" but the patient "alleges." The measurement of fat redistribution, a side
effect of antiretrovirals, illustrates this privileging of machine measurement. Fat redistribution
(lipodystrophy or lipoatrophy, in medical terminology) can radically alter a
patient's appearance, resulting in sunken cheeks, extra fat on the neck and/or abdomen
(the "buffalo hump"), or skinny legs. Yet patients' reports that they look different often seem
imprecise or inaccurate to clinicians and are insufficient for the purposes of research.
Instead, data collectors are trained in anthropometrics so they can measure fat loss
and gain more precisely. Even this method is suspect, though, and some protocols insist on
the use of expensive DEXA scans to measure changes in fat distribution.6
In many medical settings, preference for precise, machine-produced measurements
has transformed the way people work. In poor countries, such expensive substitutions are
less common and when they occur, it is often through the largesse of research projects.
Thus in the Ugandan clinic, it was a research funder that supplied the mercury manometer
blood pressure kit for the maternity ward. Mercury manometers are used in clinical
trials because they are generally thought to be more accurate than aneroid manometers.
Although this blood pressure kit brought Ugandan practices into line with research
standards, the effects of research were short-lived. As we noted above, before the arrival
of the new kit, blood pressure measurements were not made in the maternity ward. Nor
were they measured after the kit broke. If the aneroid manometers were in fact sturdier,
the choice of greater accuracy came at a high cost. Large leaps in technology often have
less long-term impact than shorter leaps when sustainability is an issue.
Most remarkably, though, participating in research changed how clinic staff thought
about scientific and medical knowledge. The doctors in our study critically evaluated
published research, taking on what Timmermans and Berg (2003) call a "researcher
orientation." Because most medical research is oriented toward answering questions
important to wealthy countries, doctors in Uganda, Thailand, and South Africa were especially
critical consumers of research. In Uganda, clinic staff discussed research on mother-to-
child transmission of HIV, asking penetrating questions about when antiretrovirals had
become common in the research sites. Noting that elective cesareans reduced transmis-
sion, they were troubled that such procedures were not possible in their own facility,
where staff shortages meant that any elective cesarean section was inevitably delayed
until it became an emergency. What would the research findings predict about Ugandan
outcomes under these conditions? In Thailand, clinic staff were intensely aware that
some side effects of antiretrovirals were more common among Thai patients than among
Westerners, perhaps because of some combination of genetic factors and lower body
weight. They were also unhappy that the local government-produced generic GPOvir
often could not be included as a comparison point in multi-site studies. To answer their
questions, they would need to conduct their own research. The South African clinic, an
early participant in the government antiretroviral roll-out, questioned some components
of the government first-line regimen. In particular, staff were deeply worried about the
danger of lactic acidosis (a life-threatening condition) when patients with a high body
mass index were given stavudine, one of the drugs included in the first-line regimen. But
what made this clinic so confident that stavudine was the problem? Well, they had done
what proper scientists do, keeping good records and analyzing their data to see what was
going on. As a result of their early findings, they purchased a machine to make more
accurate and more rapid measures of lactic acidosis so they could protect their patients.
Eventually the government agreed that the rates of lactic acidosis had been unexpectedly
high and agreed to carry out an audit (Wilson, 2006).
This impulse to do some research is quite fully institutionalized in the two American
clinics, where staff fairly often conduct small research projects using existing medical
records (chart reviews) and even carry out rather substantial investigator-initiated stud-
ies. Often these studies are either required or strongly encouraged for staff on certain
career paths or enrolled in degree programs. All five clinics welcomed outside research-
ers (including us) and felt obliged to assist them in their work. But the enthusiasm about
research described above goes well beyond this essentially routine endorsement and sup-
port of research. What we are describing is instead a habit of noting where the uncertain-
ties lie and a deep commitment to scientific inquiry as the way to answer open questions.
This lesson was brought home to us when the Ugandan staff's immediate response to an
early presentation of our research (where we hazarded an explanation about why one of
their guidelines was not working as intended) was to propose a research project (subse-
quently funded and carried out). In addition to turning clinic staff into enthusiastic
believers in science, participation in clinical research also made them eager to adopt the
results of the enterprise that they so whole-heartedly endorsed.
Conclusion
How institutions organize care and research is consequential because the more care and
research overlap, the more each is altered in the course of double-fitting the clinic and the
research protocol. Conducting clinical research is not only a means for testing new treat-
ments or a means for poor patients to get access to drugs and therapies; it is also a way to
increase the likelihood that new therapies will fit local conditions. Translating clinical
research into medical care involves complex articulation (Epstein, 1996; Rosengarten et al.,
2004) because new therapies enter a "full world" (Löwy, 1996) of constraining routines,
practices, and knowledge. However, this articulation occurs not only after the research is
complete, but also while it is being conducted. Even before the results of clinical studies
are announced, the "full world" of medicine has intruded on clinical research and vice versa.
If clinical trials shape organizational routines as much as they shape medical routines,
this may help explain why scientific knowledge penetrates other arenas with more
difficulty. If one of the main routes of influence is the indirect one through organizational
practices, then less elite organizations that do not also conduct research may be less able
to adopt scientific knowledge because they lack the organizational and material cultures
that support such knowledge. This is not to say that clinics that are not engaged in
research provide poor care. What it means is that clinics that segregate research and, to
an even greater extent, clinics that do no research will have much more infrastructural
work to do before new research results can be implemented. In US2, the director had
strong ties to the research community; he also required clinic staff to follow treatment
guidelines that were updated regularly. Staff members were required to discuss certain
kinds of treatment decisions in a group meeting. In other words, mechanisms were in
place to distribute and enforce the use of new clinical tools in a way that compensated for
the partial wall between research and treatment. Given these mechanisms, what US2
mainly had to worry about was getting the other parts of its network, namely state
funders, to keep up.
How, when, and whether technologies affect the content of work is an empirical ques-
tion. In order to meet research requirements, clinics create new routines, hire new staff,
retrain existing staff, adopt new technologies, and establish new relationships with tech-
nicians and experts. The extent and permanency of these effects varies. Some research
procedures, such as the exceedingly cumbersome technique for taking blood pressure
required by some ACTG protocols, are learned only temporarily.7 A procedure that
makes nurses feel like Catholic nuns ("keep both feet on the floor") is unlikely to be
widely adopted.
During the course of a study, research protocols are resources for action, but once the
study is over, clinic staff have no reason to use these protocols, except perhaps as tem-
plates for future studies. The techniques as well as the norms of research, such as the
commitment to standardization, quantification, and even the rights of individual partici-
pants, are potentially ephemeral. In the language of Sewell (1992), such schemas only
become structural when supported by human or non-human resources. Although the
effects of some protocols are fleeting, other protocols are built into the structure of the
clinic through paper and computerized forms, machines, and employment categories.
Because it is labor intensive to modify forms and job categories, once altered, they are
unlikely to be changed back.
If research and the adoption of research results are really two parts of the same enter-
prise, as our findings indicate, those eager to diffuse research results more quickly might
consider a different strategy. Rather than diffusing results, they should instead try diffus-
ing research. But although research transformed clinics in both rich and poor countries,
the effects varied, as we have noted. In effect, standardization, learned as meta-strategy
in research, is often redeployed to treatment. Clinics that do research have mastered the
standardization necessary for reworking research templates for treatment. But the effect
of standardization as a meta-strategy was larger in poorer sites where standardization of
medical care was less common and where it brought considerable legitimacy (especially
in the eyes of donors).
We also argued that research expertise is more readily accepted and drawn upon in
facilities conducting research, where bureaucratic relations have already been reconfig-
ured to recognize that new expertise. The effects of reorganizing bureaucratic relations
also seem to be greater in poorer sites than in richer ones. Because trained staff are in
short supply, research funding alters staff composition much more in poor countries than
rich ones. Moreover, the disturbance of existing authority relations is especially pro-
nounced where research-trained workers are charged with bringing their site into confor-
mity with international standards and must instruct and correct others who may be
hierarchically superior.
In addition, we reasoned that organizations already adept at using the technologies
required by new medication regimens and new therapies face lower costs in adopting
research results. In poorer clinics, the material objects brought by research projects are
especially valuable because they are less likely to be duplicates of (or near substitutes
for) existing materials. At the same time, though, they are less easily replenished when
used up or replaced when damaged, so the overall effect of altering the material environ-
ment is likely to be modest unless the flow of research (and treatment) funds is very
stable.
Finally, we suggested that research participation makes clinic staff more sensitive to
the evidentiary basis of recommendations, and so more enthusiastic about adopting
research results. The new sensitivity to research values has a larger effect in poorer clin-
ics than richer ones because most research results can be applied off the shelf to richer
clinics but need to be scrutinized for applicability in poorer sites.
We do not wish to be too sanguine about the consequences of research participation,
particularly in poor countries. Others, especially Petryna (2009), have carefully analyzed
the risks as well as the benefits, both to individuals and to societies, that come with
research participation. Not least, the costs of new pharmaceuticals can easily overwhelm
the healthcare systems of poor countries, when investing in the lower end of healthcare
would surely be wiser.
Conducting research is likely to have its most lasting effects when the network of ties
and the infrastructure built and reconfigured in the course of doing a research project are
later appropriated by subsequent research projects and care programs. Healthcare orga-
nizations are deeply transformed by conducting clinical research. Material environments
are changed as new objects are brought in and workers learn to use those objects. The
relations among workers are altered, with scientific workers gaining prestige and author-
ity. Staff members come to value precise measurement and reliable records. Research
extends the rails that allow scientific research results to be driven into the clinic. Of
course more rails can be laid and connected, but we should perhaps not be surprised that
the locomotives make their first trips to the places where rails have already been laid.
Nor should we be surprised that eager conductors start driving those locomotives while
the first rails are still being laid.
Acknowledgements
An early version of this article, titled "How research shapes medical work: Organizational effects
of clinical trials," was presented at the 2006 meetings of the American Sociological Association.
The data for the article were collected as part of a larger research project, "Clinic-Level
Law: The Legalization of Medicine in AIDS Treatment and Research" (principal investigator,
Carol Heimer), supported by the National Science Foundation (NSF SES 0319560), the Russell
Sage Foundation, and the American Bar Foundation. We are grateful for helpful comments from
Rebecca Culyba, Lynn Gazley, Michael Lynch, Sergio Sismondo, Arthur Stinchcombe, and three
anonymous reviewers.
Notes
1. Because the world of HIV research is rather small in some countries, we omitted explicit
details about funding to conceal clinic identities.
2. We should not forget that research subjects are an important constituency. Their demand for
treatment, when it is not available elsewhere, creates an additional pressure for research clinics
to adopt the results of clinical trials.
3. The fieldwork was conducted by Carol Heimer (US1, Thailand, Uganda, South Africa; site
visits to US2), JuLeigh Petty (US1; site visits to US2, Uganda, South Africa), Rebecca Culyba
(US2), and Lynn Gazley (Thailand; site visits to US1, South Africa). Enid Wamani and Dusita
Pheungsamran assisted with fieldwork in Uganda and Thailand, respectively.
4. An SAE is "any undesirable event, associated with a medical product, resulting in death,
life-threatening illness, hospitalization (initial or prolonged), disability, congenital anomaly
or requiring treatment to prevent permanent damage" (US Code of Federal Regulations
21CFR312.32). SAEs are tracked by funding agencies and institutional review boards (IRBs).
5. "Defining the Terms: Policy, Guidelines, and Standard Operating Procedures (SOPs)." 5 October
2006. Available at: www.aactg.org/sites/default/files/definitions.pdf, accessed 19 January
2008.
6. Dual-energy x-ray absorptiometry (DEXA) scans use two x-ray beams of differing energy
levels.
7. A US1 study nurse laughingly explained that the protocol requires that the study patient raise
both arms during part of the procedure; that the nurse do a blood pressure test on each arm,
document on which arm the measure is higher, and then always do that arm first during sub-
sequent study visits; and that the nurse leave the study patient alone in the examination room
for 5 minutes between the measurements on the two arms so that he or she is relaxed. The final
requirement is that the patient keep both feet on the floor.
References
Anspach RR (1988) Notes on the sociology of medical discourse: The language of case presentation. Journal of Health and Social Behavior 29(4): 357–375.
Barley SR (1990) The alignment of technology and structure through roles and networks. Administrative Science Quarterly 35(1): 61–103.
Berg M (1997) Rationalizing Medical Work: Decision Support Techniques and Medical Practices. Cambridge: MIT Press.
Bourdieu P (1975) The specificity of the scientific field and the social conditions of the progress of reason. Social Science Information 14(6): 19–47.
Bowker GC (1994) Science on the Run: Information Management and Industrial Geophysics at Schlumberger, 1920–1940. Cambridge: MIT Press.
Clarke A (1990) A social worlds research adventure: The case of reproductive science. In: Cozzens S and Gieryn T (eds) Theories of Science in Society. Bloomington: Indiana University Press, 23–50.
Coleman JS, Katz E and Menzel H (1966) Medical Innovation: A Diffusion Study. Indianapolis: Bobbs-Merrill.
DAIDS/RSC (2004) Division of AIDS Table for Grading the Severity of Adult and Pediatric Adverse Events. Bethesda, MD: Division of AIDS, Regulatory Support Center, National Institutes of Health. Available at: http://rsc.tech-res.com/safetyandpharmacovigilance/
DiMaggio PJ and Powell WW (1983) The iron cage revisited: Institutional isomorphism and collective rationality in organizational fields. American Sociological Review 48(2): 147–160.
Epstein S (1996) Impure Science: AIDS, Activism, and the Politics of Knowledge. Berkeley: University of California Press.
Fisher JA (2009) Medical Research for Hire: The Political Economy of Pharmaceutical Clinical Trials. New Brunswick, NJ: Rutgers University Press.
Foucault M (1973) The Birth of the Clinic: An Archaeology of Medical Perception. New York: Vintage Books.
Friedman LM, Furberg CD and DeMets DL (1998) Fundamentals of Clinical Trials, 3rd edn. New York: Springer.
Heimer CA (2008) Thinking about how to avoid thought: Deep norms, shallow rules, and the structure of attention. Regulation and Governance 2: 30–47.
Heimer CA and Gazley JL (2010) Performing regulation: Transcending regulatory ritualism in HIV clinics. Unpublished paper, Northwestern University, Evanston, IL, USA.
Hong W (2008) Domination in a scientific field: Capital struggle in a Chinese isotope lab. Social Studies of Science 38: 543–570.
International Conference on Harmonization (1997) Guidance for industry: E6 good clinical practice. Federal Register 62: 25692–25709.
Katz E and Lazarsfeld PF (1955) Personal Influence: The Part Played by People in the Flow of Mass Communication. New York: Free Press.
Knorr Cetina K (1999) Epistemic Cultures: How the Sciences Make Knowledge. Cambridge, MA: Harvard University Press.
Latour B (1983) Give me a laboratory and I will raise the world. In: Knorr Cetina K and Mulkay M (eds) Science Observed: Perspectives on the Social Study of Science. London: Sage Publications, 140–170.
Layton ET Jr (1974) Technology as knowledge. Technology and Culture 15(1): 31–41.
Löwy I (1996) Between Bench and Bedside: Science, Healing, and Interleukin-2 in a Cancer Ward. Cambridge: Harvard University Press.
Majumdar SR, Roe MT, Peterson ED, Chen AY, Gibler WB and Armstrong PW (2008) Better outcomes for patients treated at hospitals that participate in clinical trials. Archives of Internal Medicine 168(6): 657–662.
Mueller MR (1997) Science versus care: Physicians, nurses and the dilemma of clinical research. In: Elston MA (ed.) The Sociology of Medical Science and Technology. Oxford: Blackwell Publishers, 57–78.
Petryna A (2009) When Experiments Travel: Clinical Trials and the Global Search for Human Subjects. Princeton: Princeton University Press.
Petty J (2008) Science in the clinic: HIV research in the era of evidence-based medicine. Unpublished doctoral dissertation, Northwestern University, Evanston, IL, USA.
Porter TM (1995) Trust in Numbers: The Pursuit of Objectivity in Science and Public Life. Princeton: Princeton University Press.
Rogers EM (2003 [1962]) Diffusion of Innovations, 5th edn. New York: Free Press.
Rosenberg N (1976) Perspectives on Technology. Cambridge: Cambridge University Press.
Rosengarten M, Imrie J, Flowers P, Davis MD and Hart GJ (2004) After the euphoria: HIV medical technologies from the perspective of their prescribers. Sociology of Health and Illness 26(5): 575–596.
Sewell WH Jr (1992) A theory of structure: Duality, agency, and transformation. American Journal of Sociology 98(1): 1–29.
Sismondo S (2010) Linking research and marketing: A pharmaceutical innovation. In: Quirke V and Slinn J (eds) Perspectives on Twentieth-Century Pharmaceuticals. Oxford: Peter Lang, 241–256.
Strang D and Soule SA (1998) Diffusion in organizations and social movements: From hybrid corn to poison pills. Annual Review of Sociology 24: 265–290.
Straus SE (2005) Evidence-Based Medicine: How to Practice and Teach EBM, 3rd edn. Edinburgh and New York: Elsevier/Churchill Livingstone.
Timmermans S and Berg M (1997) Standardization in action: Achieving local universality through medical protocols. Social Studies of Science 27: 273–305.
Timmermans S and Berg M (2003) The Gold Standard: The Challenge of Evidence-Based Medicine and Standardization in Health Care. Philadelphia: Temple University Press.
Turner BJ, Newschaffer CJ, Zhang D, Fanning T and Hauck WW (1999) Translating clinical trial results into practice: The effect of an AIDS clinical trial on prescribed antiretroviral therapy for HIV-infected pregnant women. Annals of Internal Medicine 130(12): 979–986.
Wejnert B (2002) Integrating models of diffusion of innovations: A conceptual framework. Annual Review of Sociology 28: 297–326.
Wilson D (2006) Lactic Acidosis Audit: February to December 2005. Department of Medicine, Edendale Hospital, Pietermaritzburg, KZN, South Africa. Available at: www.kznhealth.gov.za/medicine/lactic.pdf
Biographical notes
JuLeigh Petty is Senior Lecturer in the Center for Medicine, Health, and Society at
Vanderbilt University. Her research focuses on the standardization of medicine, the role
of epidemiological evidence in medical decision-making, and the regulation and ethics of
research and HIV/AIDS. She is the co-author of "Bureaucratic Ethics: IRBs and the Legal
Regulation of Human Subjects Research" (co-authored with Carol Heimer, Annual Review
of Law and Social Science 6, 2010), "Preparing Students to Navigate Cross-National
Differences in the Research Environment: The Role of Research Integrity Education"
(co-authored with Elizabeth Heitman, in International Research Collaborations, edited by
M. Anderson and N. Steneck, Routledge, 2010), "Risk and Rules: The Legalization of
Medicine" (co-authored with Carol Heimer and Rebecca Culyba, in Organizational
Encounters with Risk, edited by B. Hutter and M. Power, Cambridge University Press,
2005), and "The Ethnographic Turn: Fact, Fashion, or Fiction" (co-authored with Rebecca
Culyba and Carol Heimer, Qualitative Sociology 27(4), 2004).