Position Paper

ANTHONY D. HARRIS, MD, MPH, JESSINA C. MCGREGOR, PHD, ELI N. PERENCEVICH, MD, MS,
JON P. FURUNO, PHD, JINGKUN ZHU, MS, DAN E. PETERSON, MD, MPH, JOSEPH FINKELSTEIN, MD
possesses, because the evidence of the potential causation between the intervention and the outcome is strengthened.4

We then performed a systematic review of four years of publications from two informatics journals. First, we determined the number of quasi-experimental studies. We then classified these studies on the above-mentioned hierarchy. We also classified the quasi-experimental studies according to their application domain. The categories of application domain employed were based on the categorization used by the Yearbooks of Medical Informatics 1992-2005 and were similar to the categories of application domain employed by the Annual Symposia of the American Medical Informatics Association.11 The categories were (1) health and clinical management; (2) patient records; (3) health information systems; (4) medical signal processing and biomedical imaging; (5) decision support, knowledge representation, and management; (6) education and consumer informatics; and (7) bioinformatics. Because the quasi-experimental study design has recognized limitations, we sought to determine whether authors acknowledged the potential limitations of this design. Examples of acknowledgment included mention of the lack of randomization, the potential for regression to the mean, the presence of temporal confounders, and mention of another design that would have had more internal validity.

All original scientific manuscripts published between January 2000 and December 2003 in the Journal of the American Medical Informatics Association (JAMIA) and the International Journal of Medical Informatics (IJMI) were reviewed. One author (ADH) reviewed all the papers to identify the number of quasi-experimental studies. Other authors (ADH, JCM, JF) then independently reviewed all the studies identified as quasi-experimental. The three authors then convened as a group to resolve any disagreements in study classification, application domain, and acknowledgment of limitations.

Results and Discussion

What Is a Quasi-experiment?

Quasi-experiments are studies that aim to evaluate interventions but that do not use randomization. Like randomized trials, quasi-experiments aim to demonstrate causality between an intervention and an outcome. Quasi-experimental studies can use both preintervention and postintervention measurements as well as nonrandomly selected control groups.

Using this basic definition, it is evident that many published studies in medical informatics use the quasi-experimental design. Although the randomized controlled trial is generally considered to have the highest level of credibility with regard to assessing causality, researchers in medical informatics often choose not to randomize the intervention for one or more reasons: (1) ethical considerations, (2) difficulty of randomizing subjects, (3) difficulty of randomizing by location (e.g., by ward), and (4) small available sample size. Each of these reasons is discussed below.

Ethical considerations typically will not allow the random withholding of an intervention with known efficacy. Thus, if the efficacy of an intervention has not been established, a randomized controlled trial is the design of choice to determine efficacy. But if the intervention under study incorporates an accepted, well-established therapeutic intervention, or if the intervention has either questionable efficacy or safety based on previously conducted studies, then the ethical issues of randomizing patients are sometimes raised. In medical informatics, it is often believed prior to an implementation that an informatics intervention will likely be beneficial, and thus medical informaticians and hospital administrators are often reluctant to randomize medical informatics interventions. In addition, there is often pressure to implement the intervention quickly because of its believed efficacy, leaving researchers insufficient time to plan a randomized trial.

For medical informatics interventions, it is often difficult to randomize the intervention to individual patients or to individual informatics users. While such randomization is technically possible, it is underused, and this compromises the eventual strength of any conclusion that an informatics intervention resulted in an outcome. For example, randomly allowing only half of the medical residents to use pharmacy order-entry software at a tertiary care hospital is a scenario that hospital administrators and informatics users may not agree to for numerous reasons.

Similarly, informatics interventions often cannot be randomized to individual locations. Using the pharmacy order-entry system example, it may be difficult to randomize use of the system to only certain locations in a hospital, or to portions of certain locations. For example, if the pharmacy order-entry system involves an educational component, then people may apply the knowledge learned to nonintervention wards, thereby potentially masking the true effect of the intervention. Even when a design using randomized locations is employed successfully, the locations may differ in other respects (confounding variables), which further complicates the analysis and interpretation.

In situations where it is known that only a small sample size will be available to test the efficacy of an intervention, randomization may not be a viable option. Randomization is beneficial because, on average, it tends to distribute both known and unknown confounding variables evenly between the intervention and control groups. However, when the sample size is small, randomization may not adequately accomplish this balance, so alternative design and analytical methods are often used in its place.

What Are the Threats to Establishing Causality When Using Quasi-experimental Designs in Medical Informatics?

The lack of random assignment is the major weakness of the quasi-experimental study design. Associations identified in quasi-experiments meet one important requirement of causality, since the intervention precedes the measurement of the outcome. Another requirement is that the outcome can be demonstrated to vary statistically with the intervention. Unfortunately, statistical association does not imply causality, especially if the study is poorly designed. Thus, in many quasi-experiments, one is most often left with the question: are there alternative explanations for the apparent causal association? If these alternative explanations are credible, the evidence of causation is less convincing. These rival hypotheses, or alternative explanations, arise from principles of epidemiologic study design.
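To make this threat concrete, the short simulation below (our illustration, not part of the original article; all numbers are invented) shows how a temporal confounder, here a secular downward trend in costs, can produce a convincing-looking pre-post "effect" for an intervention that does nothing at all.

```python
import numpy as np

# Hypothetical illustration: monthly pharmacy costs decline due to a
# secular trend only; the "intervention" at month 12 has no true effect.
rng = np.random.default_rng(0)
months = np.arange(24)
costs = 100_000 - 800 * months + rng.normal(0, 1_500, size=24)

pre, post = costs[:12], costs[12:]
# A naive one-group pretest-posttest comparison credits the intervention
# with the entire secular decline.
naive_effect = post.mean() - pre.mean()
print(f"naive pre/post difference: {naive_effect:,.0f}")  # an artifact of the trend
```

The naive difference simply recovers the secular trend; none of it is attributable to the intervention, which is exactly the kind of rival hypothesis a reader should probe for.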
characterized by 11 of 17 designs, with six study designs in category A, one in category B, three in category C, and one in category D, because the other study designs were not used or were not feasible in the medical informatics literature. Thus, for simplicity, we have summarized the 11 study designs most relevant to medical informatics research in Table 2. This nomenclature and relative hierarchy were used in the systematic review of four years of JAMIA and the IJMI. Similar to the relative hierarchy in the evidence-based literature, which ranks randomized controlled trials, cohort studies, case-control studies, and case series, the hierarchy in Table 2 is not absolute: in some cases it may be infeasible to perform a higher level study. For example, there may be instances where an A6 design established stronger causality than a B1 design.15-17

Table 2. Relative Hierarchy of Quasi-experimental Designs

A. Quasi-experimental designs without control groups
   1. The one-group posttest-only design: X O1
   2. The one-group pretest-posttest design: O1 X O2
   3. The one-group pretest-posttest design using a double pretest: O1 O2 X O3
   4. The one-group pretest-posttest design using a nonequivalent dependent variable: (O1a, O1b) X (O2a, O2b)
   5. The removed-treatment design: O1 X O2 O3 remove X O4
   6. The repeated-treatment design: O1 X O2 remove X O3 X O4
B. Quasi-experimental designs that use a control group but no pretest
   1. Posttest-only design with nonequivalent groups: Intervention group: X O1; Control group: O2
C. Quasi-experimental designs that use control groups and pretests
   1. Untreated control group with dependent pretest and posttest samples: Intervention group: O1a X O2a; Control group: O1b O2b
   2. Untreated control group design with dependent pretest and posttest samples using a double pretest: Intervention group: O1a O2a X O3a; Control group: O1b O2b O3b
   3. Untreated control group design with dependent pretest and posttest samples using switching replications: Intervention group: O1a X O2a O3a; Control group: O1b O2b X O3b
D. Interrupted time-series design*
   1. Multiple pretest and posttest observations spaced at equal intervals of time: O1 O2 O3 O4 O5 X O6 O7 O8 O9 O10

O = observational measurement; X = intervention under study. Time moves from left to right.
*In general, studies in category D are of higher study design quality than studies in category C, which are higher than those in category B, which are higher than those in category A. Also, as one moves down within each category, the studies become of higher quality; e.g., study 5 in category A is of higher study design quality than study 4.

Quasi-experimental Designs without Control Groups

The One-Group Posttest-Only Design: X O1

Here, X is the intervention and O is the outcome variable (this notation is continued throughout the article). In this study design, an intervention (X) is implemented and a posttest observation (O1) is taken. For example, X could be the introduction of a pharmacy order-entry intervention and O1 could be the pharmacy costs following the intervention. This design is the weakest of the quasi-experimental designs discussed in this article. Without any pretest observations or a control group, there are multiple threats to internal validity. Unfortunately, this study design is often used in medical informatics when new software is introduced, since it may be difficult to obtain pretest measurements due to time, technical, or cost constraints.

The One-Group Pretest-Posttest Design: O1 X O2

This is a commonly used study design. A single pretest measurement is taken (O1), an intervention (X) is implemented, and a posttest measurement is taken (O2). In this instance, period O1 frequently serves as the control period. For example, O1 could be pharmacy costs prior to the intervention, X could be the introduction of a pharmacy order-entry system, and O2 could be the pharmacy costs following the intervention. Including a pretest provides some information about what the pharmacy costs would have been had the intervention not occurred.

The One-Group Pretest-Posttest Design Using a Double Pretest: O1 O2 X O3

The advantage of this study design over A2 is that adding a second pretest prior to the intervention helps provide evidence against regression to the mean and confounding as alternative explanations for any observed association between the intervention and the posttest outcome. For example, in a study where a pharmacy order-entry system led to lower pharmacy costs (O3 < O2 and O1), if one had two preintervention measurements of pharmacy costs (O1 and O2) and both were elevated, this would suggest a decreased likelihood that O3 is lower because of confounding or regression to the mean. Similarly, extending this study design by increasing the number of postintervention measurements could also help provide evidence against confounding and regression to the mean as alternative explanations for observed associations.

The One-Group Pretest-Posttest Design Using a Nonequivalent Dependent Variable: (O1a, O1b) X (O2a, O2b)

This design involves the inclusion of a nonequivalent dependent variable (b) in addition to the primary dependent variable (a). Variables a and b should assess similar constructs; that is, the two measures should be affected by similar factors and confounding variables, except for the effect of the intervention. Variable a is expected to change because of the intervention X, whereas variable b is not. Taking our example, variable a could be pharmacy costs and variable b could be the length of stay of patients. If our informatics intervention is aimed at decreasing pharmacy costs, we would expect to observe a
decrease in pharmacy costs but not in the average length of stay of patients. However, a number of important confounding variables, such as severity of illness and knowledge of software users, might affect both outcome measures. Thus, if the average length of stay did not change following the intervention but pharmacy costs did, the data are more convincing than if only pharmacy costs had been measured.

The Removed-Treatment Design: O1 X O2 O3 remove X O4

This design adds a third posttest measurement (O3) to the one-group pretest-posttest design and then removes the intervention before a final measurement (O4) is made. The advantage of this design is that it allows one to test hypotheses about the outcome both in the presence and in the absence of the intervention. Thus, if one predicts a decrease in the outcome between O1 and O2 (after implementation of the intervention), then one would predict an increase in the outcome between O3 and O4 (after removal of the intervention). One caveat is that if the intervention is thought to have persistent effects, then O4 needs to be measured after these effects are likely to have disappeared. For example, a study would be more convincing if it demonstrated that pharmacy costs decreased after the introduction of a pharmacy order-entry system (O2 and O3 less than O1) and that when the order-entry system was removed or disabled, the costs increased (O4 greater than O2 and O3, and closer to O1). In addition, there are often ethical issues with this design, since it requires removing an intervention that may be providing benefit.

The Repeated-Treatment Design: O1 X O2 remove X O3 X O4

The advantage of this design is that it demonstrates reproducibility of the association between the intervention and the outcome. For example, the association is more likely to be causal if one demonstrates that a pharmacy order-entry system results in decreased pharmacy costs when it is first introduced and again when it is reintroduced following an interruption of the intervention. As for design A5, the assumption must be made that the effect of the intervention is transient, which is most often applicable to medical informatics interventions. Because subjects may serve as their own controls in this design, it may yield greater statistical efficiency with fewer subjects.

Quasi-experimental Designs That Use a Control Group but No Pretest

Posttest-Only Design with Nonequivalent Groups:
Intervention group: X O1
Control group: O2

An intervention X is implemented for one group and compared to a second group. The use of a comparison group helps address certain threats to validity and makes it possible to statistically adjust for confounding variables. Because the two groups in this study design may not be equivalent (assignment to the groups is not by randomization), confounding may exist. For example, suppose that a pharmacy order-entry intervention was instituted in the medical intensive care unit (MICU) and not the surgical intensive care unit (SICU). O1 would be pharmacy costs in the MICU after the intervention and O2 would be pharmacy costs in the SICU after the intervention. The absence of a pretest makes it difficult to know whether a change has occurred in the MICU. Also, the absence of pretest measurements comparing the SICU to the MICU makes it difficult to know whether differences in O1 and O2 are due to the intervention or to other differences between the two units (confounding variables).

Quasi-experimental Designs That Use Control Groups and Pretests

The reader should note that with all the studies in this category, the intervention is not randomized; the control groups chosen are comparison groups. Obtaining pretest measurements on both the intervention and control groups allows one to assess the initial comparability of the groups. The assumption is that the more similar the intervention and control groups are at pretest, the smaller the likelihood that important confounding variables differ between the two groups.

Untreated Control Group with Dependent Pretest and Posttest Samples:
Intervention group: O1a X O2a
Control group: O1b O2b

The use of both a pretest and a comparison group makes it easier to avoid certain threats to validity. However, because the two groups are nonequivalent (assignment to the groups is not by randomization), selection bias may exist. Selection bias exists when selection results in differences in unit characteristics between conditions that may be related to outcome differences. For example, suppose that a pharmacy order-entry intervention was instituted in the MICU and not the SICU. If preintervention pharmacy costs in the MICU (O1a) and SICU (O1b) are similar, it is less likely that there are differences in important confounding variables between the two units. If MICU postintervention costs (O2a) are less than preintervention MICU costs (O1a), but SICU costs (O1b and O2b) are similar, this suggests that the observed outcome may be causally related to the intervention.

Untreated Control Group Design with Dependent Pretest and Posttest Samples Using a Double Pretest:
Intervention group: O1a O2a X O3a
Control group: O1b O2b O3b

In this design, the pretests are administered at two different times. The main advantage of this design is that it controls for potentially different time-varying confounding effects in the intervention group and the comparison group. In our example, measuring points O1 and O2 would allow for the assessment of preintervention time-dependent changes in pharmacy costs, e.g., due to differences in the experience of residents, in both the intervention and control groups, and for determining whether these changes were similar or different.

Untreated Control Group Design with Dependent Pretest and Posttest Samples Using Switching Replications:
Intervention group: O1a X O2a O3a
Control group: O1b O2b X O3b

With this study design, the researcher administers an intervention at a later time to a group that initially served as a nonintervention control. The advantage of this design over design C2 is that it demonstrates reproducibility in two different settings. This study design is not limited to two groups; in fact, the study results have greater validity if the intervention effect is replicated in different groups at multiple times.
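A difference-in-differences contrast is one common way to analyze data from the untreated control group design with pretest and posttest samples; this is our choice of analysis, not one prescribed by the article, and while the unit names follow the article's MICU/SICU example, the cost figures below are invented.

```python
# Hypothetical difference-in-differences reading of the untreated control
# group design: the intervention unit (MICU) receives pharmacy order
# entry, the control unit (SICU) does not. All figures are invented.
micu = {"pre": 120_000.0, "post": 95_000.0}   # O1a, O2a
sicu = {"pre": 118_000.0, "post": 117_000.0}  # O1b, O2b

# The control unit's change estimates what would have happened anyway
# (shared temporal effects); the remainder is the effect estimate.
did = (micu["post"] - micu["pre"]) - (sicu["post"] - sicu["pre"])
print(f"difference-in-differences estimate: {did:,.0f}")  # -24,000
```

Subtracting the control unit's change removes temporal effects shared by both units, which is precisely what the untreated control group is included to capture.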
Journal of the American Medical Informatics Association Volume 13 Number 1 Jan / Feb 2006 21
In the example of a pharmacy order-entry system, one could implement or intervene in the MICU and then, at a later time, intervene in the SICU. This latter design is often very applicable in medical informatics, where new technology and new software are often introduced or made available gradually.

Interrupted Time-Series Designs: O1 O2 O3 O4 O5 X O6 O7 O8 O9 O10

An interrupted time-series design is one in which a string of consecutive observations equally spaced in time is interrupted by the imposition of a treatment or intervention. The advantage of this design is that, with multiple measurements both pre- and postintervention, it is easier to address and control for confounding and regression to the mean. In addition, the analysis is statistically more robust: one can detect changes in the slope or intercept attributable to the intervention, in addition to a change in mean values.18 A change in intercept could represent an immediate effect, while a change in slope could represent a gradual effect of the intervention on the outcome. In the example of a pharmacy order-entry system, O1 through O5 could represent monthly pharmacy costs preintervention and O6 through O10 monthly pharmacy costs after the introduction of the pharmacy order-entry system. Interrupted time-series designs can be further strengthened by incorporating many of the design features previously mentioned in other categories (such as removal of the treatment, inclusion of a nonequivalent dependent variable, or the addition of a control group).

Systematic Review Results

The results of the systematic review are shown in Table 3. In the four-year period of JAMIA publications that the authors reviewed, 25 quasi-experimental studies among 22 articles were published. Of these 25, 15 studies were of category A, five of category B, two of category C, and none of category D. Although there were no studies of category D (interrupted time-series analyses), three of the studies classified as category A had collected data that could have been analyzed as an interrupted time series.
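For category D designs, the intercept and slope changes can be estimated with segmented regression, as described by Wagner et al.18 The sketch below is our own minimal version on simulated data (all values are invented), fitting y = b0 + b1*t + b2*post + b3*t_post by ordinary least squares, where post indicates the postintervention period and t_post counts months since the intervention.

```python
import numpy as np

# Simulated monthly pharmacy costs: 10 pre and 10 post observations,
# built with an immediate drop (level change) and a steeper decline
# (slope change) after the intervention. All values are invented.
rng = np.random.default_rng(1)
t = np.arange(20, dtype=float)
post = (t >= 10).astype(float)           # indicator: after intervention
t_post = np.where(t >= 10, t - 10, 0.0)  # months since intervention

true = 100_000 - 200 * t - 8_000 * post - 600 * t_post
y = true + rng.normal(0, 500, size=t.size)

# Design matrix for: y = b0 + b1*t + b2*post + b3*t_post
X = np.column_stack([np.ones_like(t), t, post, t_post])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]
print(f"level change ~ {b2:,.0f}, slope change ~ {b3:,.0f}")
```

Here b2 estimates the immediate (level) effect and b3 the gradual (slope) effect; confidence intervals and autocorrelation adjustments, which a real analysis would need, are omitted for brevity.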
Nine of the 25 studies (36%) mentioned at least one of the potential limitations of the quasi-experimental study design.

In the four-year period of IJMI publications reviewed by the authors, nine quasi-experimental studies among eight manuscripts were published. Of these nine, five studies were of category A, one of category B, one of category C, and two of category D. Two of the nine studies (22%) mentioned at least one of the potential limitations of the quasi-experimental study design.

In addition, three studies from JAMIA were based on a counterbalanced design. A counterbalanced design is a higher-order study design than the other studies in category A. The counterbalanced design is sometimes referred to as a Latin-square arrangement. In this design, all subjects receive all the different interventions, but the order of intervention assignment is not random.19 This design can be used only when the intervention is compared against some existing standard, for example, when a new PDA-based order entry system is to be compared to a computer terminal-based order entry system. In this design, all subjects receive both the new PDA-based order entry system and the old computer terminal-based order entry system. The counterbalanced design is a within-participants design in which the order of the intervention is varied (e.g., one group is given software A followed by software B, and another group is given software B followed by software A). The counterbalanced design is typically used when the available sample size is small, thus preventing the use of randomization. This design also allows investigators to study the potential effect of the ordering of the informatics intervention.

Conclusion

Although quasi-experimental study designs are ubiquitous in the medical informatics literature, as evidenced by 34 studies in the past four years of the two informatics journals, little has been written about the benefits and limitations of the quasi-experimental approach. As we have outlined in this paper, a relative hierarchy and nomenclature of quasi-experimental study designs exist, with some designs more likely than others to permit causal interpretations of observed associations. Strengths and limitations of a particular study design should be discussed when presenting data collected in the setting of a quasi-experimental study. Future medical informatics investigators should choose the strongest design that is feasible given the particular circumstances.

References

1. Rothman KJ, Greenland S. Modern epidemiology. Philadelphia: Lippincott-Raven Publishers, 1998.
2. Hennekens CH, Buring JE. Epidemiology in medicine. Boston: Little, Brown, 1987.
3. Szklo M, Nieto FJ. Epidemiology: beyond the basics. Gaithersburg, MD: Aspen Publishers, 2000.
4. Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin, 2002.
5. Trochim WMK. The research methods knowledge base. Cincinnati: Atomic Dog Publishing, 2001.
6. Cook TD, Campbell DT. Quasi-experimentation: design and analysis issues for field settings. Chicago: Rand McNally Publishing Company, 1979.
7. MacLehose RR, Reeves BC, Harvey IM, Sheldon TA, Russell IT, Black AM. A systematic review of comparisons of effect sizes derived from randomised and non-randomised studies. Health Technol Assess. 2000;4:1-154.
8. Shadish WR, Heinsman DT. Experiments versus quasi-experiments: do they yield the same answer? NIDA Res Monogr. 1997;170:147-64.
9. Grimshaw J, Campbell M, Eccles M, Steen N. Experimental and quasi-experimental designs for evaluating guideline implementation strategies. Fam Pract. 2000;17(Suppl 1):S11-6.
10. Zwerling C, Daltroy LH, Fine LJ, Johnston JJ, Melius J, Silverstein BA. Design and conduct of occupational injury intervention studies: a review of evaluation strategies. Am J Ind Med. 1997;32:164-79.
11. Haux R, Kulikowski C, editors. Yearbook of medical informatics 2005. Stuttgart: Schattauer Verlagsgesellschaft, 2005, p. 563.
12. Morton V, Torgerson DJ. Effect of regression to the mean on decision making in health care. BMJ. 2003;326:1083-4.
13. Bland JM, Altman DG. Regression towards the mean. BMJ. 1994;308:1499.
14. Bland JM, Altman DG. Some examples of regression towards the mean. BMJ. 1994;309:780.
15. Guyatt GH, Haynes RB, Jaeschke RZ, Cook DJ, Green L, Naylor CD, et al. Users' guides to the medical literature: XXV. Evidence-based medicine: principles for applying the users' guides to patient care. Evidence-Based Medicine Working Group. JAMA. 2000;284:1290-6.
16. Harris RP, Helfand M, Woolf SH, Lohr KN, Mulrow CD, Teutsch SM, et al. Current methods of the US Preventive Services Task Force: a review of the process. Am J Prev Med. 2001;20:21-35.
17. Harbour R, Miller J. A new system for grading recommendations in evidence based guidelines. BMJ. 2001;323:334-6.
18. Wagner AK, Soumerai SB, Zhang F, Ross-Degnan D. Segmented regression analysis of interrupted time series studies in medication use research. J Clin Pharm Ther. 2002;27:299-309.
19. Campbell DT. Counterbalanced design. In: Experimental and quasi-experimental designs for research. Chicago: Rand McNally College Publishing Company, 1963, p. 505.
20. Staggers N, Kobus D. Comparing response time, errors, and satisfaction between text-based and graphical user interfaces during nursing order tasks. J Am Med Inform Assoc. 2000;7:164-76.
21. Schriger DL, Baraff LJ, Buller K, Shendrikar MA, Nagda S, Lin EJ, et al. Implementation of clinical guidelines via a computer charting system: effect on the care of febrile children less than three years of age. J Am Med Inform Assoc. 2000;7:186-95.
22. Patel VL, Kushniruk AW, Yang S, Yale JF. Impact of a computer-based patient record system on data collection, knowledge organization, and reasoning. J Am Med Inform Assoc. 2000;7:569-85.
23. Borowitz SM. Computer-based speech recognition as an alternative to medical transcription. J Am Med Inform Assoc. 2001;8:101-2.
24. Patterson R, Harasym P. Educational instruction on a hospital information system for medical students during their surgical rotations. J Am Med Inform Assoc. 2001;8:111-6.
25. Rocha BH, Christenson JC, Evans RS, Gardner RM. Clinicians' response to computerized detection of infections. J Am Med Inform Assoc. 2001;8:117-25.
26. Lovis C, Chapko MK, Martin DP, Payne TH, Baud RH, Hoey PJ, et al. Evaluation of a command-line parser-based order entry pathway for the Department of Veterans Affairs electronic patient record. J Am Med Inform Assoc. 2001;8:486-98.
27. Hersh WR, Junium K, Mailhot M, Tidmarsh P. Implementation and evaluation of a medical informatics distance education program. J Am Med Inform Assoc. 2001;8:570-84.
28. Makoul G, Curry RH, Tang PC. The use of electronic medical records: communication patterns in outpatient encounters. J Am Med Inform Assoc. 2001;8:610-5.
29. Ruland CM. Handheld technology to improve patient care: evaluating a support system for preference-based care planning at the bedside. J Am Med Inform Assoc. 2002;9:192-201.
30. De Lusignan S, Stephens PN, Adal N, Majeed A. Does feedback improve the quality of computerized medical records in primary care? J Am Med Inform Assoc. 2002;9:395-401.
31. Mekhjian HS, Kumar RR, Kuehn L, Bentley TD, Teater P, Thomas A, et al. Immediate benefits realized following implementation of physician order entry at an academic medical center. J Am Med Inform Assoc. 2002;9:529-39.
32. Ammenwerth E, Mansmann U, Iller C, Eichstadter R. Factors affecting and affected by user acceptance of computer-based nursing documentation: results of a two-year study. J Am Med Inform Assoc. 2003;10:69-84.
33. Oniki TA, Clemmer TP, Pryor TA. The effect of computer-generated reminders on charting deficiencies in the ICU. J Am Med Inform Assoc. 2003;10:177-87.
34. Liederman EM, Morefield CS. Web messaging: a new tool for patient-physician communication. J Am Med Inform Assoc. 2003;10:260-70.
35. Rotich JK, Hannan TJ, Smith FE, Bii J, Odero WW, Vu N, Mamlin BW, et al. Installing and implementing a computer-based patient record system in sub-Saharan Africa: the Mosoriot Medical Record System. J Am Med Inform Assoc. 2003;10:295-303.
36. Payne TH, Hoey PJ, Nichol P, Lovis C. Preparation and use of preconstructed orders, order sets, and order menus in a computerized provider order entry system. J Am Med Inform Assoc. 2003;10:322-9.
37. Hoch I, Heymann AD, Kurman I, Valinsky LJ, Chodick G, Shalev V. Countrywide computer alerts to community physicians improve potassium testing in patients receiving diuretics. J Am Med Inform Assoc. 2003;10:541-6.
38. Laerum H, Karlsen TH, Faxvaag A. Effects of scanning and eliminating paper-based medical records on hospital physicians' clinical work practice. J Am Med Inform Assoc. 2003;10:588-95.
39. Devine EG, Gaehde SA, Curtis AC. Comparative evaluation of three continuous speech recognition software packages in the generation of medical reports. J Am Med Inform Assoc. 2000;7:462-8.
40. Dunbar PJ, Madigan D, Grohskopf LA, Revere D, Woodward J, Minstrell J, et al. A two-way messaging system to enhance antiretroviral adherence. J Am Med Inform Assoc. 2003;10:11-5.
41. Lenert L, Munoz RF, Stoddard J, Delucchi K, Bansod A, Skoczen S, et al. Design and pilot evaluation of an Internet smoking cessation program. J Am Med Inform Assoc. 2003;10:16-20.
42. Koide D, Ohe K, Ross-Degnan D, Kaihara S. Computerized reminders to monitor liver function to improve the use of etretinate. Int J Med Inf. 2000;57:11-9.
43. Gonzalez-Heydrich J, DeMaso DR, Irwin C, Steingard RJ, Kohane IS, Beardslee WR. Implementation of an electronic medical record system in a pediatric psychopharmacology program. Int J Med Inf. 2000;57:109-16.
44. Anantharaman V, Swee Han L. Hospital and emergency ambulance link: using IT to enhance emergency pre-hospital care. Int J Med Inf. 2001;61:147-61.
45. Chae YM, Heon Lee J, Hee Ho S, Ja Kim H, Hong Jun K, Uk Won J. Patient satisfaction with telemedicine in home health services for the elderly. Int J Med Inf. 2001;61:167-73.
46. Lin CC, Chen HS, Chen CY, Hou SM. Implementation and evaluation of a multifunctional telemedicine system in NTUH. Int J Med Inf. 2001;61:175-87.
47. Mikulich VJ, Liu YC, Steinfeldt J, Schriger DL. Implementation of clinical guidelines through an electronic medical record: physician usage, satisfaction and assessment. Int J Med Inf. 2001;63:169-78.
48. Hwang JI, Park HA, Bakken S. Impact of a physician's order entry (POE) system on physicians' ordering patterns and patient length of stay. Int J Med Inf. 2002;65:213-23.
49. Park WS, Kim JS, Chae YM, Yu SH, Kim CY, Kim SA, et al. Does the physician order-entry system increase the revenue of a general hospital? Int J Med Inf. 2003;71:25-32.