Computing in Science & Engineering is a peer-reviewed, joint publication of the IEEE Computer Society and the American Institute of Physics
http://cise.aip.org www.computer.org/cise
May/June 2004
ALSO: Simulated Bite Marks, p. 4; Multisensory Perception, p. 61; Biological Aging and Speciation, p. 72

Frontiers of Simulation
PART II
Statement of Purpose
Computing in Science & Engineering
aims to support and promote the
emerging discipline of computational
science and engineering and to
foster the use of computers and
computational techniques in scientific
research and education. Every issue
contains broad-interest theme articles,
departments, news reports, and
editorial comment. Collateral materials
such as source code are made available
electronically over the Internet. The
intended audience comprises physical
scientists, engineers, mathematicians,
and others who would benefit from
computational methodologies.
All theme and feature articles in
CiSE are peer-reviewed.
Copublished by the IEEE
Computer Society and the
American Institute of Physics
FRONTIERS OF SIMULATION, PART II
MAY/JUNE 2004, Volume 6, Number 3
Guest Editors' Introduction:
Frontiers of Simulation, Part II
Douglass Post
16
Virtual Watersheds:
Simulating the Water Balance of the Rio Grande Basin
C.L. Winter, Everett P. Springer, Keeley Costigan, Patricia Fasel,
Sue Mniewski, and George Zyvoloski
18
Large-Scale Fluid-Structure Interaction Simulations
Rainald Löhner, Juan Cebral, Chi Yang, Joseph D. Baum, Eric Mestreau,
Charles Charman, and Daniele Pelessone
27
Simulation of Swimming Organisms: Coupling Internal
Mechanics with External Fluid Dynamics
Ricardo Cortez, Lisa Fauci, Nathaniel Cowen, and Robert Dillon
38
Two- and Three-Dimensional
Asteroid Impact Simulations
Galen Gisler, Robert Weaver, Charles Mader, and Michael Gittings
46
Cover illustration: Dirk Hagner
President:
CARL K. CHANG*
Computer Science Dept.
Iowa State University
Ames, IA 50011-1040
Phone: +1 515 294 4377
Fax: +1 515 294 0258
c.chang@computer.org
President-Elect:
GERALD L. ENGEL*
Past President:
STEPHEN L. DIAMOND*
VP, Educational Activities:
MURALI VARANASI*
VP, Electronic Products and Services:
LOWELL G. JOHNSON (1ST VP)*
VP, Conferences and Tutorials:
CHRISTINA SCHOBER*
VP, Chapters Activities:
RICHARD A. KEMMERER (2ND VP)
VP, Publications:
MICHAEL R. WILLIAMS
VP, Standards Activities:
JAMES W. MOORE
VP, Technical Activities:
YERVANT ZORIAN
Secretary:
OSCAR N. GARCIA*
Treasurer:
RANGACHAR KASTURI
2003-2004 IEEE Division V Director:
GENE H. HOFFNAGLE
2003-2004 IEEE Division VIII Director:
JAMES D. ISAAK
2004 IEEE Division VIII Director-Elect:
STEPHEN L. DIAMOND*
Computer Editor in Chief:
DORIS L. CARVER
Executive Director:
DAVID W. HENNAGE
* voting member of the Board of Governors
† nonvoting member of the Board of Governors
BOARD OF GOVERNORS
Term Expiring 2004: Jean M. Bacon, Ricardo
Baeza-Yates, Deborah M. Cooper, George V.
Cybenko, Haruhisha Ichikawa, Thomas W.
Williams, Yervant Zorian
Term Expiring 2005: Oscar N. Garcia, Mark A.
Grant, Michel Israel, Stephen B. Seidman, Kathleen
M. Swigger, Makoto Takizawa, Michael R. Williams
Term Expiring 2006: Mark Christensen, Alan
Clements, Annie Combelles, Ann Gates, Susan
Mengel, James W. Moore, Bill Schilit
Next Board Meeting: 12 June 2004, Long Beach, CA
EXECUTIVE STAFF
Executive Director: DAVID W. HENNAGE
Assoc. Executive Director:
ANNE MARIE KELLY
Publisher: ANGELA BURGESS
Assistant Publisher: DICK PRICE
Director, Finance & Administration:
VIOLET S. DOAN
Director, Information Technology & Services:
ROBERT CARE
Manager, Research & Planning: JOHN C. KEATON
COMPUTER SOCIETY OFFICES
Headquarters Office
1730 Massachusetts Ave. NW
Washington, DC 20036-1992
Phone: +1 202 371 0101 Fax: +1 202 728 9614
E-mail: hq.ofc@computer.org
Publications Office
10662 Los Vaqueros Cir., PO Box 3014
Los Alamitos, CA 90720-1314
Phone: +1 714 821 8380
E-mail: help@computer.org
Membership and Publication Orders:
Phone: +1 800 272 6657 Fax: +1 714 821 4641
E-mail: help@computer.org
Asia/Pacific Office
Watanabe Building
1-4-2 Minami-Aoyama, Minato-ku,
Tokyo 107-0062, Japan
Phone: +81 3 3408 3118 Fax: +81 3 3408 3553
E-mail: tokyo.ofc@computer.org
IEEE OFFICERS
President:
ARTHUR W. WINSTON
President-Elect:
W. CLEON ANDERSON
Past President:
MICHAEL S. ADLER
Executive Director:
DANIEL J. SENESE
Secretary:
MOHAMED EL-HAWARY
Treasurer:
PEDRO A. RAY
VP, Educational Activities:
JAMES M. TIEN
VP, Publication Services and Products:
MICHAEL R. LIGHTNER
VP, Regional Activities:
MARC T. APTER
VP, Standards Association:
JAMES T. CARLO
VP, Technical Activities:
RALPH W. WYNDRUM JR.
IEEE Division V Director:
GENE H. HOFFNAGLE
IEEE Division VIII Director:
JAMES D. ISAAK
President, IEEE-USA:
JOHN W. STEADMAN
AVAILABLE INFORMATION
To obtain more information on any of the
following, contact the Publications Office:
Membership applications
Publications catalog
Draft standards and order forms
Technical committee list
Technical committee application
Chapter start-up procedures
Student scholarship information
Volunteer leaders/staff directory
IEEE senior member grade application (requires 10 years practice and significant performance in five of those 10)
To check membership status or report a
change of address, call the IEEE toll-free
number, +1 800 678 4333. Direct all other
Computer Society-related questions to the
Publications Office.
PUBLICATIONS AND ACTIVITIES
Computer. An authoritative, easy-to-read
magazine containing tutorial and in-depth
articles on topics across the computer field,
plus news, conferences, calendar, industry
trends, and product reviews.
Periodicals. The society publishes 12
magazines and 10 research transactions.
Refer to membership application or request
information as noted at left.
Conference Proceedings, Tutorial
Texts, Standards Documents.
The Computer Society Press publishes
more than 160 titles every year.
Standards Working Groups. More
than 200 groups produce IEEE standards
used throughout the industrial world.
Technical Committees. Thirty TCs publish newsletters, provide interaction with peers in specialty areas, and directly influence standards, conferences, and education.
Conferences/Education. The society
holds about 100 conferences each year
and sponsors many educational activities,
including computing science accreditation.
PURPOSE The IEEE Computer Society is the world's largest association of computing professionals, and is the leading provider of technical information in the field.
MEMBERSHIP Members receive the monthly magazine Computer, discounts, and opportunities to serve (all activities are led by volunteer members). Membership is open to all IEEE members, affiliate society members, and others interested in the computer field.
COMPUTER SOCIETY WEB SITE
The IEEE Computer Society's Web site, at www.computer.org, offers information and samples from the society's publications and conferences, as well as a broad range of information about technical committees, standards, student activities, and more.
OMBUDSMAN Members experiencing problems (magazine delivery, membership status, or unresolved complaints) may write to the ombudsman at the Publications Office or send an e-mail to help@computer.org.
CHAPTERS Regular and student chapters
worldwide provide the opportunity to
interact with colleagues, hear technical
experts, and serve the local professional
community.
EXECUTIVE COMMITTEE
DEPARTMENTS

From the Editors, p. 2
Francis Sullivan
Computational Science and Pathological Science

News, p. 4
Simulated Bite Marks
New Cloud Animation Software on the Horizon

Technology News & Reviews, p. 8
Norman Chonacky
Stella: Growing Upward, Downward, and Outward

Computing Prescriptions, p. 56
Eugenio Roanes-Lozano, Eugenio Roanes-Macías, and Luis M. Laita
Some Applications of Gröbner Bases

Visualization Corner, p. 61
Jonathan C. Roberts
Visualization Equivalence for Multisensory Perception: Learning from the Visual

Your Homework Assignment, p. 66
Dianne P. O'Leary
Fitting Exponentials: An Interest in Rates

Computer Simulations, p. 74
Suzana Moss de Oliveira, Jorge S. Sá Martins, Paulo Murilo C. de Oliveira, Karen Luz-Burgoa, Armando Ticona, and Thadeau J.P. Penna
The Penna Model for Biological Aging and Speciation

Education, p. 82
Guy Ashkenazi and Ronnie Kosloff
String, Ring, Sphere: Visualizing Wavefunctions on Different Topologies

Scientific Programming, p. 87
Glenn Downing, Paul F. Dubois, and Teresa Cottom
Data Sharing in Scientific Simulations

www.computer.org/cise/
http://cise.aip.org

How to Contact CiSE, p. 17
Advertiser/Product Index, p. 37
AIP Membership Info, p. 45
Subscription Card, p. 88 a/b
Computer Society Membership Info, Inside Back Cover
Copublished by the IEEE CS and the AIP, 1521-9615/04/$20.00 © 2004 IEEE
FROM THE EDITORS

COMPUTATIONAL SCIENCE AND PATHOLOGICAL SCIENCE
By Francis Sullivan, Editor in Chief

Every now and then, a peculiar kind of news story appears about some scientific topic. On first reading, it looks like startling new results or the answer to everything about some perpetually hot topic, such as the age of the universe, the origin of mankind, or the best diet for a healthy life. One characteristic these examples all share is that they fade quickly, only to be replaced by a new ultimate answer. Sometimes rather than fading, the thrilling discovery has a second life in checkout-line tabloids. A few of these items are hoaxes, some are merely consequences of over-enthusiasm about preliminary results, but many are honest mistakes carried to the point of pathology.

To be fair, let me say at the outset that computational science is not immune from this pathology. But a point I hope to make is that widespread availability of fairly high-end computing has shortened the life span of the science pathologies that occur in computing.

The term "pathological science" goes back at least as far as Irving Langmuir's famous 1953 General Electric lecture, in which he discussed things like N-rays and ESP. He described pathological science this way:

"These are cases where there is no dishonesty involved but where people are tricked into false results by a lack of understanding about what human beings can do to themselves in the way of being led astray by subjective effects, wishful thinking or threshold interactions. These are examples of pathological science. These are things that attracted a great deal of attention. Usually hundreds of papers have been published on them. Sometimes they have lasted for 15 or 20 years and then gradually have died away."

Langmuir also identified six features that he thought characterized pathological science:

• The maximum effect observed is produced by a causative agent of barely detectable intensity; the effect's magnitude is substantially independent of the cause.
• The effect is of a magnitude that remains close to the limit of detectability; otherwise, many measurements are necessary because of the very low significance of the results.
• Claims of great accuracy.
• Fantastic theories contrary to experience.
• Criticisms are met by ad hoc excuses thought up on the spur of the moment.
• The ratio of supporters to critics rises up to somewhere near 50 percent and then falls gradually to oblivion.

Langmuir's lecture did not put an end to pathological science. In 1966, the Soviet scientists Boris Vladimirovich Derjaguin and N.N. Fedyakin discovered a new form of water that came to be known as "polywater." It had a density higher than normal water, a viscosity 15 times that of normal water, a boiling point higher than 100 degrees Centigrade, and a freezing point lower than zero degrees. After more experiments, it turned out that these strange properties were all due to impurities in the samples. An amusing sidenote is that the polywater episode occurred a few years after Kurt Vonnegut's book Cat's Cradle, which imagined a form of water, and more importantly a form of ice, with strange properties. The most well-publicized pathological case in recent years is arguably the cold fusion story.

Why do these things happen? Imagine working late into the night on a new algorithm that you feel sure will be much more efficient than existing methods, but it somehow doesn't seem to work. After many hours of effort, you make a few more changes to the code, and suddenly it works amazingly well. The results begin to appear almost as soon as you hit the enter key. Next you try another case, but that example doesn't work well at
all. You go back to re-run the original wonderful case, and that doesn't work either! This is the danger point: you either find the error that made the one good case work, or you decide that there's a subtle effect here that can only be produced by doing things just so. If you choose the second path and get one more good result, you might end up believing you have an excellent method that only you know how to use. This is one way that legitimate science can descend into pathology.

Fortunately, your experiment was done with a computer rather than a complicated lab setup, which means that, in principle, others can repeat the experiment quickly and easily. And unless you're very stubborn indeed, you'll soon discover that your error was a fluke, perhaps something like branching to a routine where the correct answer was stored for testing purposes.

A final caution: to guard against becoming too complacent about the use of computing as immunization against pathological science, recall the many instances where easily generated and beautiful gratuitous graphics are used in lieu of content in computational science presentations. I don't know if this is pathological science in the old sense, but it's a symptom of something spawned by the ease of computing.
Scalable Input/
Output
Achieving System Balance
edited by Daniel A. Reed
A summary of the major research results from the Scalable Input/Output Initiative, exploring software and algorithmic solutions to the I/O imbalance.
Scientific and Engineering Computation
series
392 pp. $35 paper
Imitation of Life
How Biology Is Inspiring Computing
Nancy Forbes
"This book will appeal to technophiles, interdisciplinarians, and broad thinkers of all stripes."
George M. Church, Harvard Medical School
176 pp., 48 illus. $25.95 cloth
To order call 800-405-1619.
Prices subject to change without notice.
New from The MIT Press
http://mitpress.mit.edu
SIAM/ACM Prize in
Computational Science
and Engineering
CALL for NOMINATIONS
The prize will be awarded for the second time at the SIAM Conference on Computational Science and Engineering (CSE05), February 12-15, 2005, in Orlando, Florida.
The prize was established in 2002 and first awarded in 2003.
It is awarded every other year by SIAM and ACM in the area
of computational science in recognition of outstanding
contributions to the development and use of mathematical and
computational tools and methods for the solution of science
and engineering problems.
The prize is intended to recognize either one individual or a
group of individuals for outstanding research contributions to
the field of computational science and engineering. The
contribution(s) for which the award is made must be publicly
available and may belong to any aspect of computational
science in its broadest sense.
The award will include a total cash prize of $5,000 and a
certificate. SIAM and ACM will reimburse reasonable travel
expenses to attend the award ceremony.
A letter of nomination, including a description of the
contribution(s), should be sent by July 31, 2004, to:
Chair, SIAM/ACM Prize in CS&E
c/o Joanna Littleton
SIAM
3600 University City Science Center
Philadelphia, PA 19104-2688
littleton@siam.org (215) 382-9800 ext. 303 www.siam.org/prizes
NEWS
News Editor: Scott L. Andresen, sandresen@computer.org

SIMULATED BITE MARKS
By Pam Frost Gorder

For the first time in 11,000 years, the fearsome saber-toothed tiger's canines will tear into fresh meat, if scientists at the University of Buffalo get their way.

Though real Sabertooth cats are long extinct, anatomist Frank Mendel and his team plan to build a scale model of the head and jaws of a 700-pound Smilodon fatalis to reproduce the predator's deadly bite. They want to measure the forces necessary for the teeth to penetrate the skin, muscle, and other tissues of a recently dead herbivore, and use the data in a new computer-aided design (CAD) program they're developing.

The CAD program, the Vertebrate Analyzer (VA), could do for muscle and bone what similar programs have done for bridges, buildings, and automobiles: let scientists probe the form and function of a complex object on the computer. Ultimately, it could shed light on human bone and muscle ailments, as well as the lives of long-gone exotic creatures.

Mendel wants to be careful not to oversell the technology. He and Kevin Hulme of the project's engineering team have only just begun to show the beta version of the VA at scientific conferences, and they've just applied for US$1 million of federal funding to develop it further. But everyone from paleontologists to orthopedists wants a finished product.

"Whenever I talk about the Vertebrate Analyzer, someone says, 'that sounds great, when can we have it?'" Mendel says.

Larry Witmer, an anatomist at Ohio University, echoes that sentiment. "The software sounds really exciting. It looks like they still have a ways to go before they have a really sophisticated tool, but they're on the right track," he says.

The Software
Witmer currently uses the 3D visualization program Amira from TGS to analyze computed tomography scans of fossil skulls, the same kind of data set that Mendel's team uses. Recently, Witmer changed the face of Tyrannosaurus rex by suggesting the dinosaur's nostrils rested lower on its snout than once thought; he's also reconstructed a Pterodactyl brain and inner ear. He wants a program like the VA, which promises to let users virtually apply tissue to bone quickly and easily.

With the VA, the 3D skull rotates and translates by using the arrow keys; two mouse clicks attach the ends of a muscle bundle. During jaw movement, the muscle glows green when it's relaxed, then yellow, and finally, red as it fully extends. The goal is for the virtual muscles to move like real ones. Users can hasten the simulation by lowering the resolution. A supercomputer could speed things up, but Mendel wants the software to run on a PC.

What Mendel and Hulme hope will set the VA apart from similar software is what they plan to do with it. They want to maintain it as open-source code and create a publicly available online vertebrate anatomy library, comparable in scope to the National Center for Biotechnology Information's GenBank DNA database. Modeling Smilodon is the first step.

Toothy Test Case
When scientists study prehistoric animals, they don't often have the luxury of complete specimens. Smilodon is an exception, due to large clusters of remains such as the 2,000 cats preserved in California's La Brea Tar Pits. Those skeletons suggest that adults were about the size of an African lion, but with longer forelegs that were more powerful than its hind legs. The cats' infamous fangs (skinny and serrated like steak knives, and up to 7 inches long) prompted experts to debate whether they were used for hunting or for competition among males (see Figure 1).

For Mendel, that question is settled. "At La Brea, we can't tell males from females," he says. "They all have enlarged canines, even the kittens." This suggests that the teeth did something other than advertise age or gender.

But how Smilodon used those teeth is still a mystery. Did it clamp down on an animal's throat to suffocate it, as big cats do today, or simply tear the throat out and let its prey bleed to death? Maybe its strong front legs could have pinned down a suffocating Ice Age herbivore such as a deer, but could those relatively thin teeth, which lack a full coat
of the enamel that strengthens human teeth, have held on without breaking?

"We assume the teeth were used to kill, yet we have to account for the lack of heft and enamel, so it's a mechanical problem," Mendel explains. What's more, fossil skulls offer only the barest clues of the muscle architecture that made wielding such teeth possible.

He was considering this puzzle when news reports of Boeing's computer-designed 777 aircraft prompted him to contact engineers at his institution. "I thought, wouldn't it be great if we could bring CAD to bear on the things I want to look at? But modeling soft tissue is a complex problem. An airplane is great technology, but it pales in comparison to what humans do walking around every day."

Once they build a skull and replicate its bite on animal carcasses from a butcher shop, scientists might know more about Smilodon. But the real payoff could go beyond that.
Potential Value
One benefit would be a clearer picture of extinct animals' biomechanics. "If you just look at modern times, you're missing the diversity of most of the life that has existed on this planet," Witmer says. "Understanding animals from the past helps us better understand animals today."
Stuart Sumida, a functional morphologist at California State University, San Bernardino, who also works with the film industry, sees two other ways for this technology to reach people: movies and video games. Today, animators move virtual skeletons called "rigs" inside animated skin to create movement. Using virtual muscles to pull on these rigs realistically is "a kind of Holy Grail of special effects," Sumida says.
Medicine, too, could benefit, as doctors could use the software to study joint problems. For instance, the work on Smilodon could lend insight to temporomandibular joint disorder, which causes headaches and jaw pain in an estimated
Figure 1. Frank Mendel holding the Smilodon cast.
EDITOR IN CHIEF
Francis Sullivan, IDA Ctr. for Computing Sciences
fran@super.org
ASSOCIATE EDITORS IN CHIEF
Anthony C. Hearn, RAND
hearn@rand.org
Douglass E. Post, Los Alamos Natl Lab.
post@lanl.gov
John Rundle, Univ. of California at Davis
rundle@physics.ucdavis.edu
EDITORIAL BOARD MEMBERS
Klaus-Jürgen Bathe, Mass. Inst. of Technology, kjb@mit.edu
Antony Beris, Univ. of Delaware, beris@che.udel.edu
Michael W. Berry, Univ. of Tennessee, berry@cs.utk.edu
John Blondin, North Carolina State Univ., john_blondin@ncsu.edu
David M. Ceperley, Univ. of Illinois, ceperley@uiuc.edu
Michael J. Creutz, Brookhaven Natl Lab., creutz@bnl.gov
George Cybenko, Dartmouth College, gvc@dartmouth.edu
Jack Dongarra, Univ. of Tennessee, dongarra@cs.utk.edu
Rudolf Eigenmann, Purdue Univ., eigenman@ecn.purdue.edu
David Eisenbud, Mathematical Sciences Research Inst., de@msri.org
William J. Feiereisen, Los Alamos Natl Lab, bill@feiereisen.net
Sharon Glotzer, Univ. of Michigan, sglotzer@umich.edu
Charles J. Holland, Office of the Defense Dept., charles.holland@osd.mil
M.Y. Hussaini, Florida State Univ., myh@cse.fsu.edu
David Kuck, KAI Software, Intel, david.kuck@intel.com
David P. Landau, Univ. of Georgia, dlandau@hal.physast.uga.edu
B. Vincent McKoy, California Inst. of Technology, mckoy@its.caltech.edu
Jill P. Mesirov, Whitehead/MIT Ctr. for Genome Research,
mesirov@genome.wi.mit.edu
Cleve Moler, The MathWorks Inc., moler@mathworks.com
Yoichi Muraoka, Waseda Univ., muraoka@muraoka.info.waseda.ac.jp
Kevin J. Northover, Open Text, k.northover@computer.org
Andrew M. Odlyzko, Univ. of Minnesota, odlyzko@umn.edu
Charles Peskin, Courant Inst. of Mathematical Sciences,
peskin@cims.nyu.edu
Constantine Polychronopoulos, Univ. of Illinois, cdp@csrd.uiuc.edu
William H. Press, Los Alamos Natl Lab., wpress@lanl.gov
John Rice, Purdue Univ., jrr@cs.purdue.edu
Ahmed Sameh, Purdue Univ., sameh@cs.purdue.edu
Henrik Schmidt, MIT, henrik@keel.mit.edu
Donald G. Truhlar, Univ. of Minnesota, truhlar@chem.umn.edu
Margaret H. Wright, Bell Lab., mhw@bell-labs.com
10,000 Americans (see Figure 2). Better artificial limbs could also result.

Mendel is staying patient. "If in three or four years we have a part of what I've been dreaming about, it'll be a great thing."
Pam Frost Gorder is a freelance science writer living in Columbus, Ohio.
BRIEF

NEW CLOUD ANIMATION SOFTWARE ON THE HORIZON
By Lissa E. Harris

A cirrus cloud wisp hovers on a brooding sky, glowing gold and vermilion with the last rays of the setting sun. But this cloud isn't made of dust and vapor; it's made of pixels. It's the product of Swell, a new software program that creates animated clouds with unprecedented speed.
Swell and Prime, two new programs that render animated, three-dimensional (3D) clouds, are the Purdue University Rendering and Perceptualization Lab's latest innovations. At the lab, directed by David Ebert, researchers are developing software that brings scientific and medical data sets to life as 3D models, computer-generated illustrations, and photorealistic images.
EDITORIAL OFFICE
COMPUTING in SCIENCE & ENGINEERING
10662 Los Vaqueros Circle, PO Box 3014
Los Alamitos, CA 90720-1314
phone +1 714 821 8380; fax +1 714 821 4010;
www.computer.org/cise/

DEPARTMENT EDITORS
Book & Web Reviews: Bruce Boghosian, Tufts Univ., bruce.boghosian@tufts.edu
Computing Prescriptions: Isabel Beichl, Nat'l Inst. of Standards and Tech., isabel.beichl@nist.gov, and Julian Noble, Univ. of Virginia, jvn@virginia.edu
Computer Simulations: Dietrich Stauffer, Univ. of Köln, stauffer@thp.uni-koeln.de
Education: Denis Donnelly, Siena College, donnelly@siena.edu
Scientific Programming: Paul Dubois, Lawrence Livermore Nat'l Labs, dubois1@llnl.gov, and George K. Thiruvathukal, gkt@nimkathana.com
Technology News & Reviews: Norman Chonacky, Columbia Univ., chonacky@chem.columbia.edu
Visualization Corner: Jim X. Chen, George Mason Univ., jchen@cs.gmu.edu, and R. Bowen Loftin, Old Dominion Univ., bloftin@odu.edu
Web Computing: Geoffrey Fox, Indiana State Univ., gcf@grids.ucs.indiana.edu
Your Homework Assignment: Dianne P. O'Leary, Univ. of Maryland, oleary@cs.umd.edu

STAFF
Senior Editor: Jenny Ferrero, jferrero@computer.org
Group Managing Editor: Gene Smarte
Staff Editors: Scott L. Andresen, Kathy Clark-Fisher, and Steve Woods
Contributing Editors: Cheryl Baltes and Joan Taylor
Production Editor: Monette Velasco
Magazine Assistant: Hazel Kosky, cise@computer.org
Design Director: Toni Van Buskirk
Technical Illustration: Alex Torres
Publisher: Angela Burgess
Assistant Publisher: Dick Price
Advertising Coordinator: Marian Anderson
Marketing Manager: Georgann Carter
Business Development Manager: Sandra Brown

AIP STAFF
Jeff Bebee, Circulation Director, jbebee@aip.org
Charles Day, Editorial Liaison, cday@aip.org

IEEE ANTENNAS AND PROPAGATION SOCIETY LIAISON
Don Wilton, Univ. of Houston, wilton@uh.edu

IEEE SIGNAL PROCESSING SOCIETY LIAISON
Elias S. Manolakos, Northeastern Univ., elias@neu.edu

CS MAGAZINE OPERATIONS COMMITTEE
Michael R. Williams (chair), Michael Blaha, Mark Christensen, Sorel Reisman, Jon Rokne, Bill Schilit, Linda Shafer, Steven L. Tanimoto, Anand Tripathi

CS PUBLICATIONS BOARD
Bill Schilit (chair), Jean Bacon, Pradip Bose, Doris L. Carver, George Cybenko, John C. Dill, Frank E. Ferrante, Robert E. Filman, Forouzan Golshani, David Alan Grier, Rajesh Gupta, Warren Harrison, Mahadev Satyanarayanan, Nigel Shadbolt, Francis Sullivan

Figure 2. Vertex-based model of the Smilodon skull.

Swell
Swell isn't the first cloud-animation program to be developed, or the most realistic. But many simulators, like the software used to make virtual clouds for cinematic special effects, take hours or days to run. Those that function in real time, such as the weather-predicting simulators that meteorologists use, tend to produce bloblike, unrealistic images that don't possess a real cloud's depth or complexity.
In the animation trade, Swell's clouds are known as volumetric objects, meaning they have internal structures, not just a surface. Many computer-generated images are hollow shells composed of a kind of digital chicken wire, a mesh of triangles that approximates a curved surface. But to interact convincingly with solid objects in computer-generated animation, a cloud must be truly 3D (see Figure 3).
Volumetric phenomena are difficult to render. "You're not just working with a surface," says Swell author Joshua Schpok, who wrote the software as an undergraduate in Ebert's lab. "To illuminate things, you need to consider that any point can illuminate any other point."
To create a virtual cloud structure, Swell begins with sets
of points, called vertices, arrayed on a series of stacked planes
in 3D space. The software then assigns values for cloud
properties, such as opacity and brightness, to each point, and
interpolates between them to form a seamless texture.
"Think of sheets of glass lined up perpendicular to the direction you're looking at the cloud from. You look at the color and opacity of each of those points on those planes," Ebert says. "The reason you do them in planes, rather than random points, is that it allows you to do quicker processing."
Running a simulation with this level of detail typically in-
volves massive amounts of data-crunching, hence the long
computing times required for most simulators. But many of
the data manipulations involve computing the same function
on a large group of similar data points, for example, adjusting the opacity of a set of points, all by the same factor.

Swell sidesteps this dilemma by harnessing recent improvements in the speed and efficiency of graphics processing units (GPUs), which perform computations in parallel to the CPU. The new breed of graphics cards, used primarily by gamers, handles single-instruction, multiple-data computations far more swiftly than the pace at which software can issue instructions to the CPU.
Unlike its CPU-based competitors, Swell can render
complex, visually realistic clouds quickly enough to react to
a mouse. Swell lacks the sophistication of the very best cloud simulators, but its dramatic speed, combined with an impressive level of realism, might soon make cloud modeling accessible for real-time applications.
Prime
For now, Swell seems to be more of an artist's than a meteorologist's tool; those most interested in it are video-game developers and special-effects studios. But the Purdue lab is developing similar software that merges the art and science realms.
One promising program, Prime, has emerged from a lab effort to create software that takes scientific data sets and renders them more visually realistic. Prime's author, doctoral student Kirk Riley, has developed a program that takes data from weather-predicting simulation software and upgrades its images from solid blobs to realistic, volumetric clouds.
"The numerical weather prediction models that run daily in Washington, DC, produce the kind of data that would allow you to view the data in a photorealistic sense, if you had the software to do it," says Jason Levit, a meteorologist for the National Oceanic and Atmospheric Administration, who collaborated with Ebert's lab on the Prime project. "But up until now, we haven't had that software."
Like Swell, Prime uses parallel processing on the GPU to speed up rendering. But while Swell builds and manipulates virtual clouds from scratch, Prime takes its cloud's underlying structure from the simulator data.
"We're trying to take the simulation data and make it look the way someone would see it, if it were actually there," Riley says. "Now, all programs can do are surface approximations that look like plastic blobs in the sky. This handles the light in a more realistic fashion."
Crude as they might appear, simulators are invaluable to weather forecasters. But they haven't replaced storm spotters: meteorologists trained in field observation still make predictions based on how clouds look in the sky.
Prime soon could train new storm spotters to recognize many different types of conditions, without having to wait for them to occur in the field. It also could find applications in public education about meteorology or make television weather forecasts more visually appealing.
Ultimately, Prime's developers hope that the software will enhance forecasting's speed and accuracy by giving simulation data the look and feel of real-world weather conditions that meteorologists could instantly recognize.
"It might help us predict things faster, because we can visualize things in the model with greater accuracy," Levit says. "Will it enhance scientific discovery? That remains to be seen."
Lissa E. Harris is a freelance writer based in Boston, Massachusetts.
Figure 3. Screen shot of a Swell cloud model.
TECHNOLOGY NEWS & REVIEWS

STELLA: GROWING UPWARD, DOWNWARD, AND OUTWARD
By Norman Chonacky

Editor: Norman Chonacky, chonacky@columbia.edu
Copublished by the IEEE CS and the AIP, 1521-9615/04/$20.00 © 2004 IEEE

As an experimental physicist working in an environmental-engineering research group, I often get requests to introduce graduate students to computational tools to help them conduct thesis research.

Thus, it was not exceptional when a student recently asked me for a numerical integrator. After some probing, I established that this student wanted to simulate the time course for a complex chemical process, given the rate constants for various component reactions. In short, he wanted to build a model; so why not use a modeling tool?

Indeed, our department already has a license for Aspen, a sophisticated system for modeling unit processes in chemical engineering. Rather than a detailed path-to-process optimization, however, this student wanted a quick answer to whether a certain process would proceed and, if so, how fast. Scientists and engineers often want to do this type of "back of the envelope" calculation, where an envelope is inadequate for the task. On such occasions, we want to reach for a modeling scratch pad.

Stella (www.hps-inc.com) is a modeling application that can serve such needs, although it makes a relatively expensive scratch pad. Fortunately, it also provides other capabilities that add to its value as a productivity tool, just as a spreadsheet application lets you both build certain kinds of models and formulate a budget. Stella has several component toolsets, and its user interface is organized in layers. As such, the test of the application's total value is not just in its range of functionality but also in how well its toolsets and layers are integrated. The premise underlying Stella's design is that "systems thinking" is important for solving a wide class of problems and that there is a need for tools that support and cultivate this methodology. As professionals, most scientists and engineers seem to heartily agree with this premise, but it is less obvious that, as academics, they find it fit or feasible to include this methodology in standard curricular practice, particularly for undergraduate students (especially those who aren't mathematically sophisticated). There thus seems to be a need for a product to help engineering and science students learn to model systems.

In this article, I'll review Stella's modeling capabilities for both research and instruction. I'll describe the basic modeling tools using my student's quest as a simple, illustrative case study, exploring how these tools contribute to speed and efficiency in creating models for concept testing. I will also examine some of Stella's broader research capabilities in the context of how they support and connect with more specific and scalable modeling systems implemented in high-end systems. Finally, I will comment on Stella's range of educational applications.

Basic Modeling for Science and Engineering
As testament to the powerful, efficient, and well-integrated features that High Performance Systems (now isee systems) has engineered into Stella, my graduate student started with no knowledge of the system and learned the basic modeling functions in about an hour or two. This investment earned him the ability to create his first working model (although a correct model required the usual debugging, and more time). Stella's features are not only easy to learn and intuitive to use, but they also support good modeling practices such as documentation and unit consistency, good things for students to learn and for experts to follow.

My student wanted to emulate the process of hydrocarbon radicals reacting with nitrogen in an air-sustained, oxygen-depleted part of a flame to produce hydrogen cyanide, nitric oxide, and other things. Starting with a collection of rate constants for component reactions, he wanted to determine the time courses of selected parts of the process under various initial and ambient conditions. He knew that these reactions were described by differential equations, and that the solution lay in integration; hence his initial quest for
a numerical integrator. But it did not
occur to him that, rather than simply
doing a computation, he really needed
to create a model like the one in Figure
1, which shows the reactions of inter-
est in a graphical rendering he first
produced using Stella.
In a larger context, Figure 1 shows a
Stella window containing an iconic
map of the chemical process model in
its view area. The window margins
contain icons representing the Stella
modeling objects and various interface
tools that control the modeling process
and appearance. Four of these objects,
whose icons appear in the top left-hand
corner of this window, are of funda-
mental significance. Figure 2 shows
these icons for Stella's modeling vocabulary objects in closer detail:
The stock (Figure 2a) is a material accumulator. In the language of mathematics, it is a quantitative variable. In this particular model, the stocks are all molecular concentrations of reactants.

The flow (Figure 2b) is a material connector. Mathematically, it is an integrator. In this model, the flows are chemical reactions.

The converter (Figure 2c) is an information translator. Mathematically, it is an algebraic operator. In this model, the converters introduce and control the reaction processes' rate constants.

The action connector (Figure 2d) is an information connector. Mathematically, it is a logical relationship. In this model, the action connectors define dependencies and relationships in the chemical reactions.
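Taken together, the four objects specify an ordinary differential equation and its numerical integration. A minimal Python sketch of the correspondence, using a hypothetical one-step reaction A + B → C with rate constant k (not the student's actual nitrogen chemistry), might look like this:

```python
# Stocks: concentrations, i.e., the state variables.
A, B, C = 1.0, 0.5, 0.0   # initial concentrations (hypothetical units)

# Converter: the rate constant and the algebraic rate law.
k = 2.0
def rate(a, b):
    # Action connectors: the rate law reads from the stocks A and B.
    return k * a * b

# Flow: the integrator, here a simple Euler time step.
dt, steps = 0.001, 1000
for _ in range(steps):
    r = rate(A, B) * dt   # material moved during this time slice
    A, B, C = A - r, B - r, C + r

# Mass is conserved: whatever leaves A and B accumulates in C.
print(A + C, B + C)
```

The stock-and-flow map the student drew is, in effect, a graphical specification of exactly this loop, with Stella supplying the integrator.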
Note the (somewhat naïve) choice that my student made for modeling the three coupled processes in this reaction: he represented these explicitly as three separate flows, not coupled with one another. Instead, he implicitly specified the actual coupling via action connectors (here represented by the red arrows) from the stocks to the Rate Converter and back. I will comment on his approach in the last section, noting this revelation's educational value. In essence, this approach requires material to exit and enter the system, appropriately. The "cloud" objects in the diagram achieve this. They represent input and output portals across the boundaries of the system required by this choice of model. Stella automatically inserts these at each end of a flow pipe when it is first created, and maintains each until the modeler makes a positive connection to a stock. This is one of many similar cueing mechanisms, part of the Stella design's guided-learning strategy. These are useful for tutoring new users, but they also serve as a debugging preprocessor to catch incompleteness and inconsistencies in a model's specification while it is being created. They greatly facilitate the model production/debugging process as well as being excellent auto-instructional aids.
The buttons in the left-hand margin of Figure 1 control model operations and visualizations. At the bottom, the button with an icon of a running person pops up a Run Controller for starting, stopping, and modifying the model calculation. The +/– buttons below it are zoom controls for the graphical window. At the upper end of the left-hand margin, the up and down arrows navigate among three levels for presenting the model to users: starting here at the graphical model view, the <up-arrow> takes us to the interface view, and the <down-arrow> to the
equation view. I illustrate some details and the value of these in the next section.

Figure 1. The Stella graphical modeling environment, holding a model for nitrogen fixation by free radicals in a hydrocarbon flame. This window of the Stella user interface contains objects and connectives that the developer drags and drops into position, and then uses pop-up windows for setting their internal parameters.

Figure 2. Iconic cluster representing Stella's basic modeling vocabulary: (a) stock, (b) flow, (c) converter, and (d) action connector. This is a minimal set for modeling processes represented by ordinary differential equations.
The button bearing the chi-squared icon switches between two modes of the model view:

In the model mode, the user can modify parameters and relations.

In the map mode, the user can't modify them.

I mention this to illustrate that there are limits to the intuitiveness of the Stella design. Despite trying, I couldn't understand the map mode's value; I found only its annoyances when trying to create a model.
As mentioned previously, in the model view, clicking on any model object opens a pop-up window that lets you view and change the object's operational details, that is, its configuration. To illustrate in our example, clicking on the Rate Converter (remember to be in the model, not map, mode!) brings up the window pictured in Figure 3.
This window explicitly shows which input values the model topology requires, as depicted by the red lines in the graphical rendering; in this case, three action connectors pointing inward (see Figure 1). This particular configuration window shows which connections (here, inputs) are required for this object (here, a converter) in order for the model to be complete; it lets the developer set the (algebraic) relation among these inputs to be used for calculating the rate's state value. For our simple chemical model's topology, an algebraic expression in the Rate Converter's box (across the bottom of the window in Figure 3) must refer to three items, one for each of the inputs listed in the required-inputs box (at the window's upper left). Note that each input is pictured with an icon indicating what type of object is involved (here, stocks and other converters). The appropriate expression for our model is a trilinear product of the three required input values. A keyboard tool and a scroll box of built-in functions provide support for formulating appropriate expressions. If the modeler fails to create an acceptable expression based on these completeness criteria, a question mark appears on the object's icon in the model diagram and remains until the incompleteness is resolved. To further assist in maintaining consistency in the model, these configuration windows include units and documents utilities to remind and facilitate unit coherence and model documentation, respectively.
Icons across the graphical modeling window's top margin (Figure 1) indicate some of Stella's other basic modeling tools, including a button object and a sector frame for implementing execution controls. You can program a button to step through a model's computation, for example, or you can use a sector frame to partition, isolate, and run sections of the model piecemeal. There are also graphical and tabular pads, as well as numerical windows for rendering outputs in various ways.
Figure 4 shows a typical graphical
display for our simple chemical reac-
tion model on the Stella graph pad.
Note that the ordinates in this display have different scales and ranges, which are clearly indicated. Consistent with the program's quick-prototyping approach, it selects the scales and ranges automatically, but the user can override autoscaling and autoranging to facilitate flexibility in communicating modeling results.
eling results. In these senses, and con-
sistent with modern productivity
software, Stella is reasonably self-
contained. Unless you want particu-
larly fancy graphical displays, there is
no need to export results. These integrated capabilities are consistent with the scratch-pad usage of Stella. Its integration even goes further in this cause. For example, the run controller launched through the model view (described above) from a marginal button is really a floating palette of drop-down menus that can be dragged on top of the graph pad. One drop-down is a Specifications menu that lets the user set the range and scale parameters for the displays. It also contains selections for setting computational run parameters, selecting the integration algorithm and step size, evaluating the results' sensitivity to variations in the initial conditions and run-parameter values, and controlling sector switching for models that are partitioned into sectors. This facilitates exploring the parameter space for these computational variables by providing a compact way to control and observe repeated tests of the model.

Figure 3. Configuration window for the Rate Converter in the chemical reaction model, for the model mode of the model view. This configuration window is typical of that for other objects, showing such things as required connections, allowing the developer to fix relations among inputs, setting initial values in the case of a stock, and so on.
This type of well-thought-out design is evident in many segments of Stella. The design's economy of features and their deft integration are other hallmarks of the application, reflecting a philosophical consistency and a great deal of use experience. I find this quite remarkable in this age of hastily drafted bloatware that is fatally afflicted with feature-itis.
Added-Value Capabilities
Moving upward and beyond Stella's basic modeling capabilities and its use as a scratch pad, we can best discuss some of the features that add value by looking at more complex modeling examples. To that end, consider the reversible chemical reaction in Figure 5, borrowed from examples that come bundled with the current Stella distribution, version 8.1.
Equation View Features
The up and down arrows near the upper left-hand corner of the now-familiar window in this figure, like that in Figure 1, let the user navigate to the model presentation's equation view (down) or interface view (up). The former takes you "under the hood" to see a representation of the equations Stella automatically generates to depict the relations among the objects in the model.
Figure 6 shows these codes for the
model in Figure 5, gathered from each
of the individual objects into one place.
In effect, this listing summarizes all the relations and values fixed by the modeler for the individual objects, as required by the model's topology. These determine the time evolution dictated by its flows. The objects in the list are organized first by stocks, with sublists of inflows, outflows, and constants under each. For the novice, it illustrates the integration's computational logic, implied by the concept of time-stepped flow values. It's a first step toward understanding the deeper issues of computational algorithms implemented in the actual computational codes. To the expert, the listing provides a single comprehensive summary of all the structures, relations, and fixed values included in the model's underlying object-oriented computational machinery.

Figure 4. Graph-pad window. This window displays results for one run of our simple chemical model.

Figure 5. Graphical model of the hydrogen-iodide dissociation, a reversible chemical reaction. This model of a straightforward chemical reaction has been carefully drawn to reflect the system symmetries and employs a more highly detailed form of object icons, which intimate Stella's more sophisticated depths.
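For a concrete sense of what such a listing encodes, here is a Python rendering of the time-stepped logic for the reversible reaction 2HI ⇌ H2 + I2 shown in Figure 5. The rate laws are the standard mass-action forms and the rate constants are invented for illustration; this paraphrases the structure of Stella's equation view rather than reproducing its output:

```python
# Euler updates in the time-stepped form the equation view expresses:
#   stock(t) = stock(t - dt) + (inflows - outflows) * dt
# Rate laws for 2HI <-> H2 + I2 are the usual mass-action forms;
# the constants below are hypothetical, not those in the bundled example.

kf, kr = 0.2, 0.1           # forward and reverse rate constants
HI, H2, I2 = 1.0, 0.0, 0.0  # stocks: concentrations
dt = 0.01

for _ in range(5000):
    dissociation  = kf * HI * HI   # flow out of the HI stock
    recombination = kr * H2 * I2   # flow back into the HI stock
    HI = HI + (2 * recombination - 2 * dissociation) * dt
    H2 = H2 + (dissociation - recombination) * dt
    I2 = I2 + (dissociation - recombination) * dt

# Total hydrogen (HI + 2*H2) is conserved as the system relaxes
# toward equilibrium between the two opposing flows.
print(HI, H2, I2)
```

Each line of Stella's generated listing corresponds to one of these assignments, gathered stock by stock with its inflows and outflows.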
Interface View Features
The <up-arrow> icon takes you from
the model view on top to the inter-
face view to create as a developer, or
see as a user, a rendering of the model
intended to facilitate the communica-
tion of its results. The objective is to let
the model speak for itself by delivering
an easily operated version to the puta-
tive audience. The Stella distribution
package comes with run-only engines
for Mac and Windows platforms.
These runtime engines are well suited
to educational applications. By letting users manipulate but not modify the models, they enable those who don't have the Stella package to still operate the models through interfaces such as
the one in Figure 7. Constructed to op-
erate the model depicted in Figure 5,
this interface uses graphical input de-
vices, such as sliders and knobs, to fa-
cilitate exploratory use of the model. It
lets users conduct runs using different
values for selected parameters over re-
stricted ranges.
This interface provides virtual knobs that let the model user set a value for each reactant's initial concentration. Similarly, the user can set values for the two reaction rates via sliders. This interface includes a run controller and a predefined graph that displays resulting time courses for each of the reactants over a selected range. The developer can also design an interface that restricts users' control to certain model functions and predefined range values. In this sense, the model creator conveys information to the user in an operational, rather than a declarative, way.
Nonetheless, the interface view provides many ways to communicate declaratively as well. The Instructions, View the Model, and Run Control palettes are all button objects. The first invokes page linking for tutorial purposes. The second is coupled to the Stella modeling environment's view-shifting machinery. The third is coupled to the computational engine's execution control. As a collection, these capabilities let you build stand-alone tutorials of considerable flexibility and power and should be ideal for computer-assisted instruction using models. They also support professional scientific and engineering communication that can be suitably tailored for peers in the same or other disciplines, as well as for those outside the technical sphere who must be able to appreciate and understand the model's consequences. The details of such applications are outside this review's scope, but they abound on the High Performance Systems Web site (www.hps-inc.com).
Advanced Features
For the professional scientist or engi-
neer, Stella offers other modeling capa-
bilities that are suited to sophisticated
applications.

Figure 6. Equation view for the chemical-dissociation reaction. This view shows equations that define the relations among objects, expressed as algebraic formulae that determine how their values are computed from one another's, organized starting with the stocks and including parametric data.

The stock objects described thus far have acted like simple reservoirs, but Stella lets you configure them to behave in more complex ways than simple storage. Indeed, we can describe three variants of reservoir behavior:
A conveyor receives inflow material that it holds for outflow in one of two conditions: either normally, after a specified residence time, or as a leakage with specified probabilities. Both capacity and inflow-rate restrictions can be imposed, and conveyor operations can be arrested (suspended temporarily) subject to a programmed logic condition.

A queue holds portions of multiple inflows on a first-in, first-out basis. One portion is admitted to a queue for each time slice from among the multiple inflow possibilities, whose priorities the system sets and alters according to various explicit and implicit criteria.

An oven is a reservoir that processes discrete batches of its inflow, which are set by capacity or time. Outflow is off during this time, and subsequent to inflow shut-off, the outflow remains closed until a cook time has elapsed (the duration is set by logic programmed into the outflow), at which point the stock outputs the entire contents at once.
Most of the logic conditions and time values described in these three additional stock varieties can be drawn by the software from user-specified sampling distributions, thus adding statistical character to the resulting simulations.
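As a rough sketch of the first variant, a conveyor behaves like a first-in, first-out delay line. The Python fragment below illustrates only that core behavior, with an invented residence time and without the leakage, capacity, and arrest features described above:

```python
from collections import deque

class Conveyor:
    """Stock that releases material a fixed number of time slices
    after it arrives: a first-in, first-out delay line."""

    def __init__(self, residence_steps):
        # Pre-load the belt with empty slots, one per time slice.
        self.belt = deque([0.0] * residence_steps)

    def step(self, inflow):
        """Push this slice's inflow on; pop what has finished its ride."""
        self.belt.append(inflow)
        return self.belt.popleft()

belt = Conveyor(residence_steps=3)
outflows = [belt.step(x) for x in [5.0, 0.0, 0.0, 0.0, 0.0]]
print(outflows)  # the 5.0 emerges three steps after it entered
```

A queue differs in admitting competing inflows by priority, and an oven in holding everything back until a whole batch is released at once.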
Stella has many other sophisticated
features that subscribe to the spirit of
these modeling capabilities. The list of
built-in functions is substantial, in-
cluding conventional math, trig, and
logic functions, but also cycle-time
functions and those capable of pro-
cessing arrays. In fact, array capability
is also built into the basic modeling
objects, which we have already de-
scribed, so the developer can econom-
ically represent parallel processing of
different cases or different classes of
materials. Stella also offers the ability
to obtain cycle-time information that
helps support computational perfor-
mance optimization, as well as a sub-
model capability that helps control
complexity in building and testing
complicated applications.
Subversive Values
In evaluating the potential significance of a modeling tool like Stella to scientific and engineering computation, we should consider its utility: the degree to which its functionality aligns with a user's modus operandi. Does the tool's form fit the way scientists and engineers, whether students or practitioners, function? I believe that the answer is a resounding "yes." Does it also foster best practices, surreptitiously, by its intrinsic design and not by making an explicit issue of these? You can judge its subversive value for yourself.
Why for Students
and Teaching Faculty?
As organized and represented by my
student, each component of the sec-
ond-order chemical reaction was fully
described by an ordinary differential
equation, from which follows a well-
known form of analytic solution. In
light of this fact, the student might
simply have emulated the solution for
these coupled component reactions on
a spreadsheet and fit them to various
boundary conditions introduced as pa-
rameters. In a subsequent interview
with him, however, I substantiated that
this was a line of attack of which he was
completely unaware.
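For reference, the textbook form he might have exploited is the second-order rate law and its analytic solution (a standard result, not taken from his model; for simplicity, assume equal initial concentrations, so the two concentrations remain equal throughout):

```latex
\frac{d[A]}{dt} = -k\,[A][B]
\;\xrightarrow{\;[A]_0 = [B]_0\;}\;
\frac{d[A]}{dt} = -k\,[A]^2
\qquad\Longrightarrow\qquad
\frac{1}{[A](t)} = \frac{1}{[A]_0} + k\,t .
```

Evaluating this closed form on a spreadsheet, with the rate constant and initial concentration as parameters, is precisely the alternative line of attack he had not considered.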
For this student, an operational per-
spective on such problems was most
naturalconsidering each reaction
concretely as an operating mechanism
governed by a differential equation to
be integrated, rather than as a coupled
network of reactions abstractly de-
scribed by a set of coupled differential
equations. The modeling exercise us-
ing Stella explicitly demonstrated to
me how my student was thinking about the problem: to wit, the cognitive construction of his analyses. For instructors who are able and willing to use such information in constructing their instructional approach, this is an enormous advantage.
If you doubt the conventional wisdom or the results of cognitive research on problem-solving protocols upon which I base these assertions, consider the last time you discussed a reaction process with a chemist. I doubt that the grammar was of equations, differential or otherwise. Chemists' thoughts most frequently unfold by diagramming reaction mechanisms, and the results generally resemble the Stella model in Figure 1, though in a more sophisticated form.

Figure 7. Illustrative interface. This interface provides virtual, graphical input devices to let users explore the chemical-dissociation reaction by changing the reaction-parameter values.
Students, and novices learning to be
experts, must progressively increase
their sophistication in solving prob-
lems and designing systems. Because
Stella serves the proclivities of both
novices and experts, it supports a
seamless transition between levels.
Other good reasons to introduce en-
gineering students to modeling tools lie
in the recently rewritten standards for
accreditation of undergraduate engi-
neering programs by the Accreditation
Board for Engineering and Technology
(ABET). The new criteria for curricular evaluation are cast in terms of learning outcomes: what graduating students must be capable of doing. Among other things, ABET's 2004–2005 Criteria for Accrediting Engineering Programs (www.abet.org/criteria.html) requires engineering schools to demonstrate that graduates have the ability to:
(3c) design a system, component, or
process to meet desired needs,
(3d) function on multidisciplinary
teams,
(3e) identify, formulate, and solve
engineering problems, and
(3g) communicate their results.
Because Stella requires the developer to explicate a system's logical construction in an external medium that can be read and studied by others, it is a natural tool for assisting system-design work done collaboratively in teams. Moreover,
its simple language facilitates clear com-
munication of both process and results,
appropriate for the educational process
and supportive of the ABET outcomes
listed above. In addition, Stella:
has a low-profile learning curve for achieving the ability to construct models, that is, to render a hypothesis in computational form, enabling exploratory simulation of model performance;
uses operational representations of
systems, facilitating the student
learning process;
facilitates the process of experiential
collaboration (supporting ABET
outcomes goal 3d);
embodies the ability to simulate, al-
lowing students to learn to identify
critical parts in a component or
process steps and to solve engineer-
ing problems such as optimization
(supporting ABET goal 3e); and
supports clear communication, espe-
cially among those with differing
preparations and disciplinary back-
grounds (supporting ABET goal 3g).
This kind of explication of student thinking is an important investigative capability for college teachers who follow the results of recent research on teaching and learning in science and engineering, conducted by a new generation of university faculty whose research is dedicated to this end. In this sense, Stella is also in the tradition of these new research professionals, whose work will help determine better ways of training the next generation of scientists and engineers.
Why for Research Professionals?
It remains to point out why professional computational scientists and engineers might wish to use Stella. A good deal of what has already been said sheds light on this question. Considering the operational approach to specifying systems that Stella uses, I contend that many experienced chemists prefer to think about chemical reactions in this way. Considering numerical as compared to analytic descriptions, the obvious comment is that the former are generally applicable while the latter apply just to special cases, for example, where simplifying assumptions can be employed. But beyond this advantage, many experimental scientists and engineers prefer working in a mode of interaction with a model that closely resembles laboratory work, even when an analytic alternative is available. As experts, scientists and engineers more naturally do their analytic thinking in object-oriented rather than relational frameworks. Part of Stella's effectiveness is that it supports operational thinking, and it is object-oriented, thus making it useful for both experimental and theoretical types.
I haven't made much explicit mention of what Stella proffers as one of its strengths: its communications capability. Clearly, being able to pass a
model that colleagues can operate,
rather than just describing certain re-
sults of that model, has much to rec-
ommend it. Today, there is much call
to make the details of our work known
to those outside of our profession in
forms that let them understand the
consequences in their own terms. I
believe that providing an easily oper-
ated simulation model might provide
a great advantage here, for the ex-
ported model permits its recipient to
invent appropriate and relevant impli-
cations by experimentation. This is
guided active learning for the profes-
sional! But can Stella do this?
Here, I must resort more to per-
sonal deduction than generalized
knowledge. The list of Stella users is
impressively large and varied.

TECHNOLOGY NEWS & REVIEWS
MAY/JUNE 2004 15

There is a related product, iThink, that is
used by professionals in business and
other non-science/engineering profes-
sions as well. The single most pre-
scient conclusion I can draw from this
broad appeal is that "modeling literacy" is a tool that facilitates cross-disciplinary collaborations. In Stella,
we have a tool that can be used by, and
results that can be shared among, a
wide spectrum of professionals. As
widely understood in industry, gov-
ernment, and academe, such collabo-
rations drive the cutting edge of re-
search and development these days.
Stella thus seems well positioned to
help such collaborations share re-
search knowledge across disciplines.
Stella is used by all sorts of professionals, from high-school teachers to middle corporate managers, for activities from instruction
to production engineering. For sci-
entists and engineers, it facilitates
quick paste-up and sanity checks of
technical ideas and prepares certain
modeling ideas for the transition
from small to large-scale applica-
tions. It also serves as a tool for
teaching students about solving sys-
tems problems and as a transitional
tool for taking simpler system con-
cepts to a more complex level of
analysis prior to attacking them with
high-level simulation tools.
By this point, it should be clear
that Stella is much more than a
modeling scratch pad, although I
would maintain that it excels at that.
At its base, Stella is for modeling
problems that can be described by
ordinary differential equations.
That means it is not designed for attacking problems that involve spatial distributions and their time evolution, that is,
partial differential equations. Yet, in
the appropriate regime, it is an excel-
lent system for treating and commu-
nicating the nature and results of
problems with many dependent vari-
ables, very complicated topologies,
and a wide range of logical rules for
interactions. It is truly a simulation
package as well, because it lets you in-
troduce statistical effects into a
model's operation in several helpful
ways. For a deeper appraisal of Stellas
capabilities and a realistic experience
of its look and feel, you can download
a demo and try it yourself.
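Stella's operational, stock-and-flow specification maps directly onto ordinary differential equations. As a rough illustration of that mapping (a hypothetical first-order reaction A to B, written out by hand rather than exported from Stella), a stock-and-flow model amounts to:

```python
# Stock-and-flow sketch of the kind of ODE problem Stella targets.
# Hypothetical first-order reaction A -> B; not generated by Stella itself.

def simulate(a0=1.0, b0=0.0, k=0.5, dt=0.01, t_end=10.0):
    """Integrate dA/dt = -k*A, dB/dt = +k*A with explicit Euler steps."""
    a, b = a0, b0
    steps = int(round(t_end / dt))
    for _ in range(steps):
        flow = k * a * dt   # material moved from stock A to stock B this step
        a -= flow
        b += flow
    return a, b

a, b = simulate()
# The flows conserve mass (a + b stays at the initial total), and stock A
# decays toward zero.
```

Rerunning such a model while varying k or dt is exactly the laboratory-like mode of interaction with a model that the column describes.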
Norman Chonacky is a senior research sci-
entist at Columbia University. His research in-
terests include cognitive processes in research
and education, environmental sensors and
the management of data derived from sensor
arrays, the physico-chemical behavior of ma-
terial in environmental systems, and applied
optics. He received a PhD in physics from the
University of Wisconsin, Madison. He is a
member of the American Association of
Physics Teachers (AAPT), the American Phys-
ical Society (APS), and the American Associa-
tion for the Advancement of Science (AAAS).
Contact him at chonacky@columbia.edu.
GUEST EDITOR'S INTRODUCTION
FRONTIERS OF SIMULATION, PART II
DOUGLASS POST
Los Alamos National Laboratory
1521-9615/04/$20.00 © 2004 IEEE
Copublished by the IEEE CS and the AIP

In this second of two issues devoted to the frontiers of simulation, we feature four articles that illustrate the diversity of computational applications involving complex physical phenomena. A major challenge for computational simulations is how to accurately calculate the effects of interacting phenomena, especially when such phenomena evolve with different time and distance scales and have very different properties.

When time scales for coupling different effects are long, compared with those that determine each effect's evolution separately, the system is loosely coupled. It is then possible to couple several existing calculations together through an interface and obtain accurate answers.

Two of the articles, "Virtual Watersheds: Simulating the Water Balance of the Rio Grande Basin," by Winter et al., and "Large-Scale Fluid-Structure Interaction Simulations," by Löhner et al., discuss how to do this for specific loosely coupled systems and give example codes and results. A third article, "Simulation of Swimming Organisms: Coupling Internal Mechanics with External Fluid Dynamics," by Cortez et al., describes methods for calculating how deformable animals ranging in size from microbes to large vertebrates swim through fluids. The fourth article, "Two- and Three-Dimensional Asteroid Impact Simulations," by Gisler et al., describes a closely coupled calculation of hydrodynamics and radiation transport for asteroids striking the Earth. The coupling time for the radiation and material is much shorter than the time step, so the radiation transport and hydrodynamic motion must be solved simultaneously.
Linking together existing modules has tremen-
dous advantages compared to developing new ones
with a similar capability. If the modules already ex-
ist, the time between defining the problem and
solving it can be much shorter. Second, the modules have already been tested and thus have undergone substantial verification and validation. Third, code developers
and users already have experience with how to use
the modules correctly. The largest remaining issue
is how to pass data among modules and how to
handle different types of adjacent meshes. The cal-
culation in Winter et al.s article employs a gener-
alized software infrastructure that connects sepa-
rate parallel applications and couples three existing
software packages. This method appears to be par-
ticularly powerful for calculating fluid flows
through a fixed geometry. Löhner et al. discuss
their solutions for how to enforce accurate cou-
pling between packages with very different mesh
types and geometries. Their simulations include
deformation of a solid object due to force loading
from the fluid. Cortez et al. examine how to treat
the interaction of highly deformable objects (such
as bacteria and nematodes) within the fluids
through which they move via an immersed bound-
ary framework. This powerful technique helps cal-
culate self-consistent solutions for the force balance
between the swimming organism and the fluid
through which it moves.
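The loose-coupling pattern the articles rely on can be sketched in a few lines: advance each component independently on its own time step, and exchange interface data only at synchronization points. The solvers, step sizes, and exchanged quantity below are all made up for illustration; none of this is taken from the codes in this issue.

```python
# Loose coupling through an interface: two stand-in solvers with different
# time steps exchange data only at synchronization times.

def solver_a_step(state, dt):
    # stand-in for a fast component (e.g., a fluid solver)
    return state + 1.0 * dt

def solver_b_step(state, forcing, dt):
    # stand-in for a slower component driven by A's output
    return state + 0.1 * forcing * dt

def run_coupled(n_sync=10, t_sync=0.1, dt_a=0.001, dt_b=0.01):
    a, b = 0.0, 0.0
    for _ in range(n_sync):
        # each component advances independently to the next synchronization time
        for _ in range(round(t_sync / dt_a)):
            a = solver_a_step(a, dt_a)
        forcing = a   # data passed through the coupling interface
        for _ in range(round(t_sync / dt_b)):
            b = solver_b_step(b, forcing, dt_b)
    return a, b

a, b = run_coupled()
```

The accuracy of this pattern rests on the assumption in the text: the coupling time scale must be long compared with each component's internal dynamics.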
Obviously, the coupling between the constituent parts of asteroid impacts (matter and radiation) occurs on a time scale much shorter than practical time steps. Gisler et al. calculate the radiation-matter interaction implicitly. The material and radiation both move through the same fixed Cartesian
mesh. Although the common mesh simplifies the
treatments of different phenomena, it does so at a
potential cost of numerical diffusion if the resolution is inadequate. They achieve additional resolution by adaptive mesh refinement (AMR), that is, by increasing the number of mesh cells locally wherever increased accuracy is needed.
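To see why such stiff coupling is treated implicitly, consider two quantities (stand-ins for material and radiation energy) that relax toward each other on a time scale 1/c far shorter than the time step. This toy model is only an illustration of the stability argument, not the Gisler et al. scheme:

```python
# Backward Euler handles a coupling time scale (1/c) much shorter than the
# time step dt; explicit Euler at the same dt is violently unstable.

def implicit_step(tm, tr, c, dt):
    # Solve tm' = tm + dt*c*(tr' - tm') and tr' = tr + dt*c*(tm' - tr')
    # simultaneously; the 2x2 linear system has a closed form.
    a = c * dt
    denom = 1.0 + 2.0 * a
    tm_new = ((1.0 + a) * tm + a * tr) / denom
    tr_new = (a * tm + (1.0 + a) * tr) / denom
    return tm_new, tr_new

def explicit_step(tm, tr, c, dt):
    return tm + dt * c * (tr - tm), tr + dt * c * (tm - tr)

tm, tr = 1.0, 0.0
for _ in range(10):
    tm, tr = implicit_step(tm, tr, c=1000.0, dt=1.0)  # c*dt = 1000: very stiff
# tm and tr relax stably toward the equilibrium (0.5, 0.5), conserving tm + tr.

em, er = explicit_step(1.0, 0.0, c=1000.0, dt=1.0)
# A single explicit step at the same dt overshoots to (-999.0, 1000.0).
```

Solving the coupled update simultaneously, as in `implicit_step`, is what "implicit" means in the paragraph above.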
Douglass Post is an associate editor in chief of CiSE
magazine. He has 30 years of experience with compu-
tational science in controlled magnetic and inertial fu-
sion. His research interests center on methodologies
for the development of large-scale scientific simulations
for the US Department of Defense and for the con-
trolled-fusion program. Contact him at post@lanl.gov.
Detailed computational models of complex natural-human systems can help decision makers allocate scarce natural resources such as water. This article describes a virtual watershed
model, the Los Alamos Distributed Hydrologic
System (LADHS), which contains the essential
physics of all elements of a regional hydros-
phere and allows feedback between them. Un-
like real watersheds, researchers can perform
experiments on virtual watersheds, produce
them relatively cheaply (once a modeling
framework is established), and run them faster
than real time. Furthermore, physics-based vir-
tual watersheds do not require extensive tuning
and are flexible enough to accommodate novel
boundary conditions such as land-use change
or increased climate variability. Essentially, vir-
tual watersheds help resource managers evalu-
ate the risks of alternatives once uncertainties
have been quantified.
LADHS currently emphasizes natural processes,
but its components can be extended to include such
anthropogenic effects as municipal, industrial, and
agricultural demands. The system is embedded in
the Parallel Applications Work Space (PAWS), a
software infrastructure for connecting separate parallel applications within a multicomponent model.1 LADHS is composed of four interacting components: a regional atmospheric model, a land-surface
hydrology model, a subsurface hydrology model,
and a river-routing model. Integrated atmosphere-land/surface-groundwater models such as LADHS and those described elsewhere2-4 provide
a realistic assessment of regional water balances by
including feedback between components. Realistic
simulations of watershed performance require dy-
namically coupling these components because
many of them are nonlinear, as are their interac-
tions. Boundary conditions from global climate
models, for example, can be propagated through a
virtual watershed; interaction effects can then be
evaluated in each component.
The level of resolution a virtual watershed re-
quires depends on the questions asked. Grid res-
olutions of 5 km or less on a side seem necessary
for atmospheric simulations to represent the
convective storms and high-relief topography
common in semi-arid regions, whereas resolu-
tions of less than 100 m are needed to represent
the spatial variability inherent in soil and vegetation.

VIRTUAL WATERSHEDS: SIMULATING THE WATER BALANCE OF THE RIO GRANDE BASIN
C.L. WINTER, EVERETT P. SPRINGER, KEELEY COSTIGAN, PATRICIA FASEL, SUE MNIEWSKI, AND GEORGE ZYVOLOSKI
Los Alamos National Laboratory
1521-9615/04/$20.00 © 2004 IEEE
Copublished by the IEEE CS and the AIP

Managers of water resources in arid and semi-arid regions must allocate increasingly variable surface water supplies and limited groundwater resources. This challenge is leading to a new generation of detailed computational models that can link multiple sources to a wide range of demands.

Simulations of regional water balances generally require high resolution because they are
meant to support analysis of fine-scaled processes
such as land-use change, soil moisture distribu-
tion, localized groundwater recharge, and soil
erosion. Many water resource decisions are based
on data from 1 m to 1 km in scale; the smallest
grid in LADHS's regional atmosphere component is 5 km on a side. The land-surface component uses 100-m spacing, whereas the groundwater component concentrates processing on key
volumes via an unstructured grid of about 100 m
characteristic length.
This article focuses on LADHS's computational aspects: primarily, its system design and implementation, and basic measures of its performance when simulating interactions between the land surface and the regional atmosphere. We also give results of initial simulations of the water balance between the land surface and atmosphere in the upper Rio Grande basin to illustrate the promise of this approach.
LADHS Functional Decomposition
Our computational approach links a regional at-
mospheric component with terrestrial hydrologic
components in a dataflow corresponding to ex-
changes of mass and energy among elements of re-
gional water cycles (see Figure 1). We implemented
the individual component models as loosely cou-
pled processes on several shared- and distributed-
memory parallel computers at Los Alamos Na-
tional Laboratory. Because legacy applications exist
for each component, we use PAWS to link the ap-
plications with minimal additional code. Each com-
ponent process is assigned a fixed number of physical processors before runtime. The processes run
independently, but are synchronized by exchang-
ing data in parallel via message passing. Data are
geographically referenced to a location for passing
between applications.
Table 1 summarizes the detailed physics of re-
gional watershed elements along with the resolu-
tions we use in our model. Fluxes are basically
driven by dissipative waves operating at multiple
scales. Scaling the links between components is
one of the major modeling challenges in a system
like LADHS. For example, the atmospheric component solves the Navier-Stokes equations and operates at characteristic rates of meters per second, whereas the groundwater element uses Darcy's law and has a time resolution of meters per day. The relative time steps
of these components differ by four orders of mag-
nitude, with their spatial resolutions differing by
an order of magnitude (see Table 1). The difference in spatial resolution is managed by a statistical downscaling technique that transforms relatively coarsely resolved atmospheric data to more highly resolved hydrologic scales. Differences in temporal resolution are handled by summing mass quantities like precipitation over many short time steps. Energetic quantities such as temperature are scaled up by averaging atmospheric data over time.

Figure 1. LADHS dataflow. The system consists of four software objects corresponding to the major components of basin-scale water cycles: the regional atmosphere, the land surface, the groundwater system, and the network of rivers and streams. Global-scaled general circulation data enters the system through the regional atmospheric model.

Table 1. Physics of model elements.
Model element            Physical model             Characteristic time scales   Spatial resolution
Groundwater              Darcy's equation           mm to m/day                  ~100 m
Unsaturated subsurfaces  Multiphase flow            mm to cm/min                 100 m
Atmosphere               Navier-Stokes equations    mm to m/sec                  1 to 5 km
Overland flow            St. Venant equations       cm to m/sec                  100 m
Snowmelt                 Diffusion (heat and mass)  m/hr                         100 m
Stream                   St. Venant equations       m/sec                        By reach
Evapotranspiration       Diffusion                  m/sec                        100 m
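The temporal scaling just described, summing mass fluxes while averaging energetic quantities over the fast component's short steps, can be sketched as follows. The field names are illustrative, not the actual LADHS interface:

```python
# Sum mass quantities (precipitation) and average energetic quantities
# (temperature) over many short atmospheric steps before handing them to
# the slower hydrology component. Field names are illustrative only.

def aggregate_for_hydrology(atmos_steps):
    """atmos_steps: list of per-step records from the fast component."""
    n = len(atmos_steps)
    return {
        "precip_mm": sum(s["precip_mm"] for s in atmos_steps),  # mass: sum
        "temp_c": sum(s["temp_c"] for s in atmos_steps) / n,    # energy: average
    }

steps = [{"precip_mm": 0.1, "temp_c": 10.0},
         {"precip_mm": 0.0, "temp_c": 12.0},
         {"precip_mm": 0.3, "temp_c": 14.0}]
out = aggregate_for_hydrology(steps)
# Total precipitation is about 0.4 mm; mean temperature is 12.0 C.
```

The distinction matters physically: mass must be conserved across the exchange, whereas an intensive quantity like temperature must not be summed.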
The model physics are instantiated in four com-
putational modules. The physics of the atmos-
phere, including precipitation, is computed in the
Regional Atmospheric Modeling System (RAMS).
We use the finite-element heat and mass (FEHM) transport code to calculate groundwater flow; overland flow and river routing are separate responsibilities of LADHS's land-surface module. In addition to the physics, an auxiliary module couples the
land surface to the atmosphere through statistical
downscaling, with PAWS providing the computa-
tional glue needed to link components.
Regional Atmosphere
The mesoscale atmosphere component of the
LADHS is RAMS,5 which estimates meteorological fields by solving the Navier-Stokes equations
with finite differencing methods. The RAMS
model consists of modules that allow for many
possible configurations of parameterizations for
processes such as radiation calculations and cloud
microphysics. Potentially nonstationary global cli-
mate effects enter LADHS via boundary condi-
tions affecting the regional atmosphere. We can
set these boundary conditions from observed sea-
surface temperatures and atmospheric fields or
from a global climate models output; RAMS pro-
vides precipitation, temperature, humidity, radia-
tion, and wind data to the surface-water hydrology component. A master-slave model and
domain decomposition of nested grids are used to
parallelize RAMS.
Land Surface
The LADHS surface hydrology module is a grid-
based water-balance model based on the land-
surface representation presented elsewhere.6 This module uses finite differencing to approximate surface and subsurface flows in two dimensions. It includes routines for snow accumulation and
snowmelt, infiltration, overland flow, evapotran-
spiration, saturated subsurface lateral flow, and
groundwater recharge. The surface hydrology
module is parallelized by domain decomposition.
River Routing
Stream flow routing is based on the St. Venant
equations to account for multiple flow conditions
that occur in watersheds. Reservoirs and other fea-
tures such as diversion dams create backwater con-
ditions that affect channel flows. Reservoirs and
their operations must be represented realistically
because they can dominate stream flow in a basin.
Subsurface Hydrology
Groundwater represents a major water resource
not included in current climate models. LADHS
uses the FEHM code to model both shallow sub-
surface and regional aquifers.7 FEHM is a three-dimensional multiphase flow code that uses control
volume finite elements to solve mass and energy
flow equations in a porous medium. The upper
boundary condition for FEHM is supplied by a Los
Alamos surface hydrology module, a surface flow
module within FEHM, or a computational module
that simulates streambed recharge. FEHM is par-
allelized by domain decomposition.
Coupling Components
A key challenge for integrated modeling is to cou-
ple physical domains operating at different scales
of space and time. However, we do not emphasize
coupling here because its main challenges are phys-
ical, not computational. We use a statistical algo-
rithm based on kriging to downscale regional at-
mospheric data at 1 to 5 km resolutions to the
100-m resolution of the Los Alamos surface hy-
drology module.8 The approach uses an elevation
covariate to represent topography's effects. Coupling from the land surface to the atmosphere is presently based on RAMS's internal submodels.

Table 2. Computational requirements of high-resolution basin-scale land-surface/atmosphere simulation.
                                         RAMS            Los Alamos surface hydrology module
Basin size (km^2)                        92,000 (upper Rio Grande)
Duration of simulation                   One year
Resolution                               1 km            100 m
Number of grid cells                     92,000          9,200,000
Number of vertical layers and themes     22              80
Floating-point operations per grid cell  300             100
Time step                                One second      One minute
Total number of operations               2.E+16          4.E+16
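A heavily simplified stand-in for the downscaling step, fitting a linear elevation trend at the coarse-grid points and spreading the residuals by inverse-distance weighting rather than true kriging, shows the role the elevation covariate plays. Everything here is illustrative, not the cited algorithm:

```python
# Simplified downscaling with an elevation covariate: regress values on
# elevation at coarse points, then interpolate the residuals to fine points
# by inverse-distance weighting (a stand-in for kriging).

def downscale(coarse_pts, fine_pts):
    """coarse_pts: [(x, y, elev, value)]; fine_pts: [(x, y, elev)]."""
    # 1) least-squares fit: value ~ a + b * elevation
    n = len(coarse_pts)
    me = sum(p[2] for p in coarse_pts) / n
    mv = sum(p[3] for p in coarse_pts) / n
    cov = sum((p[2] - me) * (p[3] - mv) for p in coarse_pts)
    var = sum((p[2] - me) ** 2 for p in coarse_pts)
    b = cov / var
    a = mv - b * me
    resid = [(p[0], p[1], p[3] - (a + b * p[2])) for p in coarse_pts]

    # 2) inverse-distance interpolation of the residuals onto the fine grid
    out = []
    for x, y, elev in fine_pts:
        w = [1.0 / ((x - rx) ** 2 + (y - ry) ** 2 + 1e-9) for rx, ry, _ in resid]
        r = sum(wi * rv for wi, (_, _, rv) in zip(w, resid)) / sum(w)
        out.append(a + b * elev + r)
    return out

coarse = [(0, 0, 100.0, 1.0), (10, 0, 300.0, 3.0), (0, 10, 500.0, 5.0)]
vals = downscale(coarse, [(5, 5, 400.0)])
# With these made-up points the trend is exact, so the fine value is 4.0.
```

True kriging would replace step 2 with weights derived from a fitted spatial covariance model rather than raw distances.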
Parallel Applications Work Space
PAWS takes a data-centric view of coordination be-
tween applications, which makes it well-suited to
implement dataflows between legacy codes. In general, applications are loosely coupled and opaque
to each other within PAWS. They can have differ-
ent numbers of processors and data layout strate-
gies, and can be written in different languages.
PAWS consists of two main elements: a central
controller, which coordinates the creation of con-
nections between components and data structures,
and an application program interface (API). Appli-
cations register the parallel layout of shared data
with the API and identify points where data can be
transferred in parallel. PAWS can work coopera-
tively with an application's existing parallel communication mechanism.
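The data-centric coordination pattern described here, a central controller plus per-application registration of shared data, can be sketched as below. The class and method names are illustrative, not the actual PAWS API:

```python
# Sketch of data-centric coupling: applications register the layout of a
# shared array with a central controller, which moves data between
# registered endpoints at named transfer points.

class Controller:
    def __init__(self):
        self.registry = {}   # data name -> {application: (layout, data)}

    def register(self, app, name, layout, data):
        self.registry.setdefault(name, {})[app] = (layout, data)

    def transfer(self, name, src, dst):
        src_layout, src_data = self.registry[name][src]
        dst_layout, dst_data = self.registry[name][dst]
        # A real controller would redistribute in parallel between the two
        # processor layouts; here we simply copy the data in place.
        dst_data[:] = src_data
        return src_layout, dst_layout

ctl = Controller()
ctl.register("atmosphere", "precip", layout="row-blocks", data=[0.2, 0.0, 0.1])
ctl.register("hydrology", "precip", layout="col-blocks", data=[0.0, 0.0, 0.0])
ctl.transfer("precip", src="atmosphere", dst="hydrology")
```

The point of the pattern is that neither application needs to know the other's processor count, layout, or language; only the controller sees both registrations.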
In this article, we concentrate on the coupled
performance of RAMS and the Los Alamos surface
hydrology module, which are standalone legacy
codes. Nevertheless, the resolutions of their data
structures differ, the regional atmosphere compo-
nent runs in a master-slave style, and they have
different grid orientations. We use three different
communication strategies: land-surface elevation
data is broadcast from the regional atmosphere
master node to every node in the surface-
hydrology module, each surface-hydrology node
then gathers partial precipitation arrays from
RAMS, and, finally, the remaining arrays are transferred in parallel and reoriented.
Implementation
We selected the upper Rio Grande for our simu-
lation because the Rio Grande is a major river sys-
tem in the southwestern United States and northern Mexico, providing water for flora, fauna,
agriculture, domestic consumption, recreation,
business, and industry. Analysis indicates high-res-
olution simulation of a single year of the upper Rio
Grande basin's water balance requires on the order of 10^16 arithmetical operations, with the computation fairly evenly balanced between the components (see Table 2).
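Table 2's totals follow from back-of-the-envelope arithmetic; the cell counts, per-cell operation counts, and step sizes below are taken directly from the table:

```python
# Rough check of Table 2's operation counts for a one-year run.
SECONDS_PER_YEAR = 365 * 24 * 3600

# RAMS: 92,000 cells x 22 vertical layers x 300 flops/cell, one-second steps
rams_ops = 92_000 * 22 * 300 * SECONDS_PER_YEAR             # about 2e16

# Surface hydrology: 9,200,000 cells x 80 themes x 100 flops, one-minute steps
lash_ops = 9_200_000 * 80 * 100 * (SECONDS_PER_YEAR // 60)  # about 4e16
```

Both products land within a factor of about two of each other, consistent with the statement that the computation is fairly evenly balanced between components.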
Performance experiments confirm this. Because
a coupled model using our data-transfer module
runs at the speed of the slowest component (due to
data-transfer synchronization), we evaluated the
performance of RAMS and the Los Alamos surface
hydrology module separately and later investigated
performance of the coupled models. RAMS ran
fastest on 25 or 29 processes in standalone mode on
an SGI Origin 2000 Nirvana cluster using a 94 × 74 grid with 22 vertical layers (see Figure 2). The falloff in performance beyond 29 processors is due
to the message-passing overhead associated with
the master-slave arrangement.
Runtime for one iteration of the standalone sur-
face-hydrology module on a PC Linux cluster
does not show a decrease in performance with the number of processors over the range investigated,
although performance essentially plateaus at 15
processors (see Figure 3). The surface-hydrology
module ran on a 3,650 × 2,550 grid with 100-m
spatial resolution. Time per iteration is 5.5 sec-
onds for 15 processors and 3.0 for 25. Communi-
cations overhead goes up with the number of
processors, with 15 percent overhead for message
passing. Performance is maintained when com-
ponents are linked.
Figure 2. RAMS timing. The decrease in performance beyond 29 processors is due to message-passing overhead.

Figure 3. Land-surface hydrology (LASH) timing. Performance levels off at 15 processors.

The coupled RAMS-surface hydrology model using PAWS was run for one day's simulated time, with varying numbers of processors for the regional atmosphere and 25 processors for surface hydrology (see Figure 4). The wait time is due to the difference in speed between the components, with PAWS data-transfer time constant over different numbers of processors, typically 2 to 4 percent of total runtime.
Rio Grande Simulations
The upper Rio Grande basin extends from head-
waters in the San Juan and Sangre de Cristo moun-
tains of southern Colorado to where it runs dry at
Fort Quitman, Texas, about 40 miles downstream
from El Paso/Juarez (see Figure 5). The upper
basin covers around 90,000 km^2 and includes the
cities of Santa Fe and Albuquerque and the Las
Cruces/El Paso/Juarez metropolitan area.
Water moves through the basin along multiple
pathways, the most important of which are pre-
cipitation, surface runoff, infiltration, groundwa-
ter recharge and discharge, and evapotranspiration
(see Figure 6). River discharge and the atmosphere
are the main mechanisms for transporting water
out of the basin: about 95 percent of precipitation
is evaporated or transpired by plants back to the
atmosphere. Annual flows have averaged about a
million acre-feet per year in the upper Rio
Grande, but variability is high, and the river has
been subject to lengthy droughts. A major drought
in the 1950s caused a rapid shift in forest and
woodland. The system may be entering another
such period now.
Spring snowmelt and summer rains are the main
sources of water in the basin.9 Spring snowmelt accumulated from winter storms contributes about
70 percent of annual flows in the northern Rio
Grande and its tributaries. Further south in the
basin, thunderstorms contribute a greater propor-
tion of the precipitation feeding the river. Stream flow interacts with groundwater in some areas, with
gains and losses highly localized. Additional
groundwater recharge occurs through fractures
within mountain blocks, in ephemeral streams
along mountain fronts, and through agricultural fields. Groundwater is the primary source of water
for metropolitan areas. The Rio Grande is a highly
regulated stream, and the operation of diversion
and storage dams reduces stream flow as the river
passes through New Mexico.
So far, our modeling efforts have concentrated
on the spatial extent and timing of the influence of
precipitation on soil moisture during the
1992-1993 water year (October 1992 through September 1993). Our precipitation estimates are
based on high-resolution simulations using RAMS,
with three nested grids of size 80 km, 20 km, and 5 km on a side. The largest grid covers most of the western United States, along with parts of Canada, Mexico, and the Pacific Ocean. We need it for simulating synoptic-scale flow features in the region. The 20-km grid contains the states of Utah, Arizona, Colorado, and New Mexico. Terrain features, such as mountain ranges, are resolved well enough at this resolution to affect regional atmospheric dynamics. The 5-km grid more fully describes the rapid changes in topography and land use that affect the regional atmosphere, especially precipitation. We compared our simulations of precipitation in 1992-1993 to observed data9 and ran the atmospheric simulations on an SGI Origin 2000 Nirvana cluster using 17 processors.

Figure 4. RAMS/PAWS/LASH timing. Timings are based on a fixed number of processors for LASH (25) and varying numbers of processors for RAMS. The wait time is due to the difference in speed between RAMS and LASH. The PAWS data-transfer time is constant over different numbers of processors and is a small percentage of total runtime.

Figure 5. The upper Rio Grande basin has its headwaters in the San Juan Mountains near Creede, Colorado, and ends near Fort Quitman, Texas.
We ran the simulations with a 120-second-long
time step (24 seconds for acoustic terms) on the
coarsest (80-km) grid, with proportionally shorter
time steps on the smaller grids for the winter
months. The time step was halved during the warmer
seasons. The model produced one day of the sim-
ulation for each one to four hours of wall clock
time, depending on the complexity of the micro-
physical processes taking place at any given time.
Simulated and observed monthly precipitation to-
tals compare fairly well, although they are far from
perfect. It should be noted that we performed the
simulations without calibration. In general, the
1992-1993 water year was wetter than normal, but
even so, our model had a tendency to overestimate
precipitation at some locations. For instance, ob-
servations from July 1993 indicate that the great-
est precipitation totals for the month occurred in
southern and eastern New Mexico, a feature that
our model captures (see Figure 7).
We can demonstrate the coupled land-surface/atmosphere model's capability by simulating the effect of snow water equivalent on soil
moisture from October through November 1992.
The atmospheric simulation used observed sea-surface temperatures and the US National Oceanic and Atmospheric Administration (NOAA) National Centers for Environmental Prediction reanalysis data as global boundary conditions. Simulated temperature, radiation, and wind data were sent from
RAMS to the Los Alamos surface hydrology mod-
ule every 20 minutes of simulated time; precipita-
tion was sent at two-minute intervals when it oc-
curred. Snow water equivalent is the amount of
water contained in snow; its extent is the same as that of the snowpack. RAMS produced it at 5-km resolution, and it was statistically downscaled to the surface-hydrology module at 100-m resolution. The number of land-surface grid cells in this simulation is
9,307,500. We obtained soils data from the State
Soil Geographic (STATSGO) database,10 and estimated hydrologic parameters by using soil texture.
We obtained spatial distributions of vegetation type
from the Vegetation/Ecosystem Modeling and
Analysis Project (VEMAP) database.
11
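The downscaling step can be sketched in a few lines. The actual system uses a statistical best-linear-unbiased-prediction method (see reference 8); here plain bilinear refinement stands in for it, and the grid values and refinement factor are purely illustrative:

```python
def downscale_bilinear(coarse, factor):
    """Interpolate a coarse 2D field (list of rows) onto a grid
    `factor` times finer -- a stand-in for the statistical
    downscaling applied between the 5-km RAMS grid and the
    100-m surface-hydrology grid."""
    ny, nx = len(coarse), len(coarse[0])
    fine = []
    for j in range(ny * factor):
        # Fine-cell center in coarse-grid index coordinates,
        # clipped so edge cells extrapolate flatly.
        y = min(max((j + 0.5) / factor - 0.5, 0.0), ny - 1)
        y0 = min(int(y), ny - 2) if ny > 1 else 0
        wy = y - y0
        row = []
        for i in range(nx * factor):
            x = min(max((i + 0.5) / factor - 0.5, 0.0), nx - 1)
            x0 = min(int(x), nx - 2) if nx > 1 else 0
            wx = x - x0
            v = ((1 - wy) * (1 - wx) * coarse[y0][x0]
                 + (1 - wy) * wx * coarse[y0][x0 + 1]
                 + wy * (1 - wx) * coarse[y0 + 1][x0]
                 + wy * wx * coarse[y0 + 1][x0 + 1])
            row.append(v)
        fine.append(row)
    return fine

# A 2x2 patch of snow water equivalent (cm), refined 2x.
swe_fine = downscale_bilinear([[0.0, 10.0], [20.0, 30.0]], factor=2)
```

Unlike a statistical method, bilinear refinement cannot add sub-grid variability; it only removes the blocky cell edges discussed below.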
Estimates of snow water equivalent (see Figure 8)
and surface soil moisture estimates (see Figure 9) il-
lustrate the relative effects of 5-km and 100-m grid
cell representations. The blocky nature of the snow
distribution at the 5-km resolution is obvious in
Figure 8, but the highly resolved soil moisture
process smoothes the edges of the snow distribution
(see Figure 9). The distribution of soil moisture
ranges from very dry in the San Luis Valley around
Alamosa, Colorado, where there is little precipita-
tion on an annual basis, to very wet conditions in
higher elevation zones, where snow accumulation
and melt usually occur. The detail presented in Fig-
ure 9 is important when simulating processes such
as soil erosion and contaminant transport, which
depend on local information to determine the wa-
ter velocities used in transport calculations.
Although we cannot conduct actual experiments with a system as large and valuable as the hydrosphere of the Rio Grande basin, computational science has advanced to a point where simulations of river basins can be highly realistic. Although coupling theory, additional component modeling, and data gaps that affect parameterizations and validation are the main limits on distributed
[Figure 6 schematic: the terrestrial hydrologic model links channel flow, overland flow, evapotranspiration, infiltration, channel/groundwater interaction, and the soil and groundwater hydrologic models.]
Figure 6. Water moves through the basin along multiple pathways, the most important of which are precipitation, surface runoff, infiltration, groundwater recharge and discharge, and evapotranspiration.
24 COMPUTING IN SCIENCE & ENGINEERING
basin-scale simulations, the basic framework for addressing them exists in LADHS and similar physics-based systems.2–4
That said, we still have some progress to make.
An immediate need of LADHS is to link land-
surface output directly to the atmosphere. Once
this is done, we can evaluate the impact of the horizontal redistribution of soil moisture on the atmosphere. Enhancements also are in order for existing components. Improved models of plant-water
interactions in riparian areas can lead to better
evaluation of their impacts on aquifer recharge
and streamflow. In the future, a more compre-
hensive model based on the FEHM groundwater
code will replace LADHS's subsurface dynamics:
the grid-based computational model will be re-
placed by a tree-based data structure that can take
computational advantage of specific physical fea-
tures of flow through watersheds. Domain de-
composition based on watersheds can reduce mes-
sage sizes to the output of a single point, the
stream outlet, because surface flows do not cross
watershed divides. Domain decomposition of
FEHM also can take advantage of similar limita-
tions on flows between groundwater basins.
LADHS's modular structure allows for the interchange of atmospheric components, raising the possibility of using other atmospheric models.
Remote sensing, especially satellite-based, and
new geological and geophysical characterization
techniques could eventually fill many gaps in ini-
tialization and parameterization data, but issues of
resolution and scaling must still be resolved. Most
remotely sensed data is too coarse to be the direct
source of parameters. We plan to investigate alter-
native atmospheric data sets for boundary condi-
tions and large-scale forcing because they can significantly affect model results.
[Figure 7: three map panels over west longitude 105–123 degrees and north latitude 31–41 degrees, showing July 1993 precipitation from the 20-km simulation, the 5-km simulation, and observations; circle sizes bin totals into 0–50 mm, 50–125 mm, 125–250 mm, and > 250 mm.]
Figure 7. The effect of resolution on simulations of precipitation. The light-blue rectangles indicate the extent of coverage of the 20-km and 5-km grids. The area of a circle is the total amount of precipitation as simulated at the two resolutions (lower) and as observed (upper) for July 1993. The more highly resolved 5-km simulation does a better job of capturing the observed pattern of variability.
MAY/JUNE 2004 25
Validation is a challenge for distributed models of
environmental systems such as virtual watersheds.
Most hydrologic state variables have not been ob-
served consistently for long periods, and they are
usually restricted to point data. We can use point
data to evaluate distributed models, but it is not sufficient by itself. Streamflow measured at a point is often used to validate hydrologic models, but the method is ill-posed because very different parameterizations can lead to the same estimates of streamflow (streamflow integrates hillslope processes). We
plan to explore better methods for comparing the
gridded model predictions to observation points.
One method is to convert the point observations to gridded fields, using a model that combines physically based simulation submodels with three-dimensional spatial interpolation to reduce the topographic, geographic, and observational bias in
station networks. A weaker alternative is to compare
two models; we plan to do this especially with re-
gard to the regional atmosphere component. Some
day, remotely sensed data could be the source of
spatially distributed observations of system state
variables as well as the source of distributed system
parameters. Most progress has been made in esti-
mating snow-covered areas.
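As a minimal sketch of converting station observations into a gridded field, inverse-distance weighting can stand in for the physically based interpolation model described above; the gauge locations, values, and weighting exponent below are invented for illustration:

```python
def idw_grid(stations, xs, ys, power=2.0):
    """Grid point observations by inverse-distance weighting.
    stations: list of (x, y, value); xs, ys: grid coordinates."""
    grid = []
    for y in ys:
        row = []
        for x in xs:
            num = den = 0.0
            for sx, sy, v in stations:
                d2 = (x - sx) ** 2 + (y - sy) ** 2
                if d2 == 0.0:          # grid node coincides with a gauge
                    num, den = v, 1.0
                    break
                w = 1.0 / d2 ** (power / 2.0)
                num += w * v
                den += w
            row.append(num / den)
        grid.append(row)
    return grid

# Three hypothetical precipitation gauges: (x km, y km, mm).
gauges = [(0.0, 0.0, 10.0), (10.0, 0.0, 30.0), (5.0, 8.0, 20.0)]
field = idw_grid(gauges, xs=[0.0, 5.0, 10.0], ys=[0.0, 4.0, 8.0])
```

Because the weights form a convex combination, every gridded value stays between the smallest and largest observation, which is exactly why such schemes cannot correct the topographic bias the text mentions.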
Observations contain both systematic and ran-
dom errors, either of which can affect conclusions
drawn from simulations. Coupled basin-scale mod-
els require methods of quantifying uncertainty be-
cause no data set will ever be exact. Uncertainty in
physics-based models can be represented through
stochastic partial differential equations and quantified by either Monte Carlo simulation or the di-
rect evaluation of moment equations. We have de-
veloped moment equations for the groundwater
pressure head12 that we expect to extend to other
components of the system, especially the land sur-
face. Because most decision-makers recognize un-
certainty is a byproduct of every simulation, quan-
tifying uncertainty systematically is a critical basis
for establishing their trust.
Trust also arises when a model can respond to a
wide range of scenarios, including ones that have
not been observed. Decision-makers need esti-
mates of what the Rio Grande basin will look like
if urban populations double, if land use changes, if
climate becomes much more variable, or if we en-
ter a new climate regime entirely. Physics-based
models such as LADHS are not restricted to ob-
served ranges of variability, nor do they rely on cal-
ibration. Virtual watersheds help us predict the
continued long-term behavior of regional hydros-
pheres under circumstances that will not be ob-
served for many years, if ever.
Acknowledgments
This study was supported by the Los Alamos National Laboratory's Directed Research and Development project, Sustainable Hydrology, in cooperation with the US
National Science Foundation Science and Technology
Center for Sustainability of Semi-Arid Hydrology and
Riparian Areas (SAHRA).
References
1. K. Keahey, P. Fasel, and S. Mniszewski, "PAWS: Collective Interactions and Data Transfers," Proc. 10th IEEE Int'l Symp. High-Performance Distributed Computing (HPDC-10), IEEE CS Press, 2001, pp. 47–54.
2. Z. Yu et al., "Simulating the River-Basin Response to Atmospheric Forcing by Linking a Mesoscale Meteorological Model and Hydrologic Model System," J. Hydrology, vol. 218, nos. 1–2, 1999, pp. 72–91.
3. J.P. York et al., "Putting Aquifers into Atmospheric Simulation Models: An Example from the Mill Creek Watershed, Northeastern Kansas," Advances in Water Resources, vol. 25, no. 2, 2002, pp. 221–238.
4. G. Seuffert et al., "The Influence of Hydrologic Modeling on the Predicted Local Weather: Two-Way Coupling of a Mesoscale Weather Prediction Model and a Land Surface Hydrologic Model," J. Hydrometeorology, vol. 3, no. 5, 2002, pp. 505–523.
5. R.A. Pielke et al., "A Comprehensive Meteorological Modeling
[Figure 8 map: simulated snow water equivalent (cm) over southern Colorado and northern New Mexico at 00:00 UTC on 19 November 1992, binned from 0 to > 240 cm, with Creede, Alamosa, Taos, Espanola, Los Alamos, and Santa Fe marked.]
Figure 8. The distribution of simulated snow on 19 Nov. 1992. The estimates come from RAMS using a 5-km resolution. Snow occurs in the mountains, where it should, but note the blocky nature of the pattern.
System: RAMS," Meteorology and Atmospheric Physics, vol. 49, nos. 1–4, 1992, pp. 69–91.
6. Q.-F. Xiao, S.L. Ustin, and W.W. Wallender, "A Spatial and Temporal Continuous Surface-Subsurface Hydrologic Model," J. Geophysical Research, vol. 101, no. 29, 1996, pp. 565–584.
7. G.A. Zyvoloski et al., User's Manual for the FEHM Application: A Finite-Element Heat- and Mass-Transfer Code, tech. report LA-13306-M, Los Alamos Nat'l Laboratory, 1997.
8. K. Campbell, "Linking Meso-Scale and Micro-Scale Models: Using BLUP for Downscaling," Proc. Section on Statistics and the Environment, Am. Statistical Assoc., 1999.
9. K.R. Costigan, J.E. Bossert, and D.L. Langley, "Atmospheric/Hydrologic Models for the Rio Grande Basin: Simulations of Precipitation Variability," Global and Planetary Change, vol. 25, nos. 1–2, 2000, pp. 83–110.
10. State Soil Geographic (STATSGO) Database, publication no. 1492, US Dept. of Agriculture, Natural Resources Conservation Service, Nat'l Soil Survey Center, Aug. 1991.
11. T.G.F. Kittel et al., "The VEMAP Integrated Database for Modeling United States Ecosystem/Vegetation Sensitivity to Climate Change," J. Biogeography, vol. 22, nos. 4–5, 1995, pp. 857–862.
12. C.L. Winter and D.M. Tartakovsky, "Groundwater Flow in Heterogeneous Composite Aquifers," Water Resources Research, vol. 38, no. 8, 2002, pp. 23-1–23-11.
C.L. Winter, an applied mathematician and ground-
water hydrologist, was a member of the Theoretical Di-
vision at Los Alamos National Laboratory and principal
investigator on Los Alamos's project to model the water cycles of regional basins. He is currently the deputy
director of the National Center for Atmospheric Re-
search in Boulder, Colorado. Winter also is an adjunct
professor in the Department of Hydrology and Water
Resources at the University of Arizona. He has a PhD in
applied mathematics from the University of Arizona.
Contact him at lwinter@ucar.edu.
Everett P. Springer is a technical staff member with the
Atmospheric, Climate, and Environmental Dynamics
Group at Los Alamos National Laboratory. His research
interests include numerical modeling of surface and sub-
surface hydrologic systems, applying high-performance
computing to hydrologic modeling, and hydrologic
model testing. He has a BS and an MS in forestry from
the University of Kentucky and a PhD from Utah State
University. Contact him at everetts@lanl.gov.
Keeley Costigan is a technical staff member in the At-
mospheric, Climate, and Environmental Dynamics
Group at Los Alamos National Laboratory. Her research
interests include regional climate modeling and moun-
tain meteorology. She has a BS in meteorology from
Iowa State University and an MS and PhD in atmos-
pheric science from Colorado State University. She is a
member of the American Meteorological Society. Con-
tact her at krc@lanl.gov.
Patricia Fasel is a technical staff member with the
Computer and Computational Sciences Division at Los
Alamos National Laboratory. Her interests include par-
allel programming, anomaly detection, feature extrac-
tion, and algorithm development in all areas of science.
She has a BS in mathematics and computer science
and an MS in computer science from Purdue Univer-
sity. Contact her at pkf@lanl.gov.
Sue Mniszewski is a staff member at Los Alamos Na-
tional Laboratory. Her research interests include paral-
lel coupling of large-scale models, bio-ontologies, and
computational economics. She has a BS in computer
science from Illinois Institute of Technology in Chicago.
Contact her at smm@lanl.gov.
George Zyvoloski is a subsurface flow specialist at the Los Alamos National Laboratory. His interests include numerical algorithms for coupled groundwater flow at large scales and the development of linear equation
solvers for unstructured grids. He has a PhD in me-
chanical engineering from the University of California,
Santa Barbara. Contact him at gaz@lanl.gov.
[Figure 9 map: simulated volumetric soil moisture content over the same region at 00:00 UTC on 19 November 1992, binned from 0.00 to 0.60, with Creede, Alamosa, Taos, Espanola, Los Alamos, and Santa Fe marked.]
Figure 9. The distribution of simulated soil moisture on 19 Nov. 1992. The estimates come from LASH using 100-m cell resolutions. Soil moisture arises from rain as well as snow, hence its greater spatial extent. The much higher resolution of LASH leads to a smoother distribution than that of snow.
FRONTIERS OF SIMULATION
Over the past two decades, the disciplines required to predict the behavior of processes or products (fluid dynamics, structural mechanics, combustion, heat transfer, and so on) have followed the typical bottom-up trend. Starting from geometries and equations sufficiently simple to have an impact on design decisions and be identified as computational, more and more realism was added at the geometrical and physics levels. Whereas the engineering process, outlined in Figure 1, follows a line from project to solution of partial differential equations (PDEs) and evaluation, the developments (in particular of software) in the computational sciences tend to run in the opposite direction: from solvers to complete database.
With the advancement of numerical techniques and the advent, first, of affordable 3D graphics workstations and scalable compute servers, and, more recently, PCs with sufficiently large memory and 3D graphics cards, public-domain and commercial software for each of the computational core disciplines has matured rapidly and received wide acceptance in the design and analysis process. Most of these packages are now at the mesh-generator/preprocessor threshold. This has prompted the development of the next logical step: multidisciplinary links of codes, a trend that is clearly documented by the growing number of publications and software releases in this area.
In principle, interesting problems exist for any combination of the disciplines listed previously. Here, we concentrate on fluid-structure and fluid-structure-thermal interaction, in which changes of geometry due to fluid pressure, shear, and heat loads considerably affect the flowfield, changing the loads in turn. Problems in this category include

• steady-state aerodynamics of wings under cruise conditions;
• aeroelasticity of vibrating (that is, elastic) structures, such as flutter and buzz (aeroplanes and turbines), galloping (cables and bridges), and maneuvering and control (missiles and drones);
• weak and nonlinear structures, such as wetted membranes (parachutes and tents) and biological tissues (hearts and blood vessels); and
• strong and nonlinear structures, such as shock-structure interaction (command and control centers, military vehicles) and hypersonic flight vehicles.

LARGE-SCALE FLUID-STRUCTURE INTERACTION SIMULATIONS

RAINALD LÖHNER, JUAN CEBRAL, AND CHI YANG
George Mason University
JOSEPH D. BAUM AND ERIC MESTREAU
Science Applications International Corporation
CHARLES CHARMAN
General Atomics
DANIELE PELESSONE
Engineering and Software Systems Solutions

1521-9615/04/$20.00 © 2004 IEEE
Copublished by the IEEE CS and the AIP

Combining computational-science disciplines, such as in fluid-structure interaction simulations, introduces a number of problems. The authors offer a convenient and cost-effective approach for coupling computational fluid dynamics (CFD) and computational structural dynamics (CSD) codes without rewriting them.
The most important question is how to combine these disciplines in order to arrive at an accurate, cost-effective, and modular simulation approach that can handle an arbitrary number of disciplines at the same time. Considering the fluid-structure-thermal interaction problem as an example, we see from the list of possibilities displayed in Figure 2 that any multidisciplinary capability must be able to quickly switch between approximation levels, models, and ultimately codes. Clearly, only those approaches that allow a maximum of flexibility will survive. Such approaches enable

• linear and nonlinear computational fluid dynamics (CFD), computational structure dynamics (CSD), and computational thermal dynamics (CTD) models;
• different, optimally suited discretizations for CFD, CSD, and CTD domains;
• modularity in CFD, CSD, and CTD models and codes;
• fast multidisciplinary problem definition; and
• fully automatic grid generation for arbitrary geometrical complexity.
In this article, we focus only on such approaches.
Coupling Schemes
The question of how to couple CSD and CFD codes has been treated extensively in the literature.1–6 Two main approaches have been pursued to date: strong coupling and loose coupling. The strong (or tight) coupling technique solves the discrete system of coupled, nonlinear equations resulting from the CFD, CSD, CTD, and interface conditions in a single step. Thornton and Dechaumphai present an extreme example of the tight coupling approach, in which even the surface discretization was forced to be the same.1 The loose coupling
technique, illustrated in Figure 3, solves the same
system using an iterative strategy of repeated CFD
solution followed by CTD solution followed by
CSD solution until convergence is achieved.
Special cases of the loose coupling approach in-
clude the direct coupling in time of explicit CFD
and CSD codes and the incremental-load approach
of steady aero- and hydro-elasticity. The variables
on the boundaries are transferred back and forth
between codes by a master code that directs the
multidisciplinary run. Each code (CFD, CSD,
[Figure 1 diagram: the engineering line runs from project, through objectives (performance, cost, ...), optimization (critical parameters, ...), disciplines (CSD, CFD, CTD, CEM, CDM, and so on), problem definition (models, PDEs, BCs, and so on), and grid, to solver and data reduction; the historic development line runs the other way.]
Figure 1. Design and analysis process in engineering. Developments in the computational sciences tend to go in the reverse direction.
[Figure 2 diagram: a matrix of model fidelities. Computational structural dynamics ranges over rigid walls, rigid body (6 DOF), modal analysis, linear finite-element method, nonlinear finite-element method, and rupture/tearing; computational fluid dynamics over no fluid, potential/acoustics, full potential, Euler, Reynolds-averaged Navier-Stokes, large-eddy simulation, and direct simulation of Navier-Stokes; computational thermodynamics over prescribed heat flux/temperature/sinks and linear and nonlinear heat conduction. Classic aeroelasticity, advanced aeroelasticity, and current efforts occupy different regions of the matrix.]
Figure 2. Fluid-structure-thermal interaction. Researchers in the computational sciences must develop flexible approaches to combining disciplines to create accurate, cost-effective, and modular simulations.
CTD, and so on) is seen as a subroutine, or object,
that is called by the master code, or as a series of
processes that communicate via message passing.
This implies that the transfer of geometrical and
physical information is performed between codes
without affecting their efficiency, layout, basic
functionality, or coding styles.
At the same time, CSD, CTD, and CFD codes
can easily be replaced, making this a modular ap-
proach. The loose coupling approach allows for a
straightforward reuse of existing codes and the
choice of the most suitable model for a given ap-
plication. The information transfer software can be
developed, to a large extent, independently of the
CSD, CTD, and CFD codes involved, again lead-
ing to modularity and software reuse. For this reason, this approach is favored for widespread use in academia and industry. Indeed, considerable effort has been devoted to developing general, scalable information transfer libraries.4,7,8
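The master-code loop described above can be sketched as follows; the solver callables, the shared-state dictionary, and the convergence measure are placeholders (a real coupling exchanges surface fields through transfer libraries rather than scalars):

```python
def loosely_coupled_run(cfd_step, ctd_step, csd_step,
                        state, tol=1e-6, max_iter=50):
    """Iterate CFD -> CTD -> CSD until the shared interface state
    settles; only boundary data crosses the code boundaries."""
    for it in range(max_iter):
        prev = dict(state)
        state = cfd_step(state)   # fluid: loads f, heat fluxes q
        state = ctd_step(state)   # thermal: temperatures T
        state = csd_step(state)   # structure: deformations u
        change = max(abs(state[k] - prev[k]) for k in state)
        if change < tol:
            return state, it + 1
    return state, max_iter

# Toy single-DOF aeroelastic fixed point: the load depends on the
# deflection, and the deflection relaxes toward load / stiffness.
cfd = lambda s: {**s, "f": 1.0 - 0.3 * s["u"]}
ctd = lambda s: s                 # no thermal coupling in this toy
csd = lambda s: {**s, "u": 0.5 * s["u"] + 0.5 * s["f"] / 2.0}
state, iters = loosely_coupled_run(cfd, ctd, csd, {"f": 0.0, "u": 0.0})
```

Because the load/deflection map here is a contraction, the iteration converges geometrically to the coupled fixed point; under-relaxation in the CSD step plays the stabilizing role it often plays in practice.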
Information Transfer
Optimal discretizations for the CSD, CTD, and
CFD problem will, in all probability, differ. For ex-
ample, consider a commercial aircraft wing under-
going aeroelastic loads. For a reliable CFD solu-
tion using the Euler equations, an accurate surface
representation with 60 to 120 points in the chord
direction will be required. For the CSD model, a 20 × 40 mesh of plate elements might be more than sufficient to capture the dominant eigenmodes. Any general fluid-structure coupling strategy must be able to efficiently handle the information transfer between surface representations. This is not only a matter of fast interpolation techniques, but also of accuracy, load conservation, geometrical fidelity, and temporal synchronization.
One of the main aims of the loose coupling approach is to achieve multidisciplinary runs in such a way that each one of the codes used is modified in the least possible way. Moreover, the option of having different grids for different disciplines, as well as adaptive grids that vary in time, implies that in most cases no fixed common variables will exist at the boundaries. Therefore, fast and accurate interpolation techniques are required. Because the grids can be refined or coarsened during time steps, and the surface deformations can be severe, the interpolation procedures must combine speed with generality.
Consider the problem of fast interpolation be-
tween two surface triangulations. Other types of sur-
face elements can be handled by splitting them into
triangles, so that what follows can be applied to such
grid types as well. The basic idea is to treat the topol-
ogy as 2D while the interpolation problem is given
in 3D space. This implies that further criteria, such
as relative distances normal to the surface, will have
to be used to make the problem unique. Many search
and interpolation algorithms have been devised over
the years. Experience indicates that, for generality, a
layered approach of different interpolation tech-
niques works best. Wherever possible, a vectorized
advancing front neighbor-to-neighbor algorithm is
used as the basic procedure.4 If this fails, octrees are used. Finally, if this approach also fails, an exhaustive search over all surface faces is performed.
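A planar sketch of this layered idea, neighbor-to-neighbor walking with an exhaustive fallback (the octree stage is omitted; the mesh, seed, and tolerance are illustrative, and the real algorithm works on surface triangulations embedded in 3D):

```python
def barycentric(tri, pts, p):
    """Barycentric coordinates of p in triangle tri (vertex indices)."""
    (ax, ay), (bx, by), (cx, cy) = (pts[i] for i in tri)
    d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    l1 = ((by - cy) * (p[0] - cx) + (cx - bx) * (p[1] - cy)) / d
    l2 = ((cy - ay) * (p[0] - cx) + (ax - cx) * (p[1] - cy)) / d
    return l1, l2, 1.0 - l1 - l2

def locate(p, tris, pts, neighbors, seed=0):
    """Walk neighbor-to-neighbor from a seed triangle toward p;
    fall back to an exhaustive scan if the walk leaves the mesh.
    neighbors[t][k] is the triangle across from vertex k, or -1."""
    t, visited = seed, set()
    while t != -1 and t not in visited:
        visited.add(t)
        lams = barycentric(tris[t], pts, p)
        k = min(range(3), key=lambda i: lams[i])
        if lams[k] >= -1e-12:
            return t                    # containing triangle found
        t = neighbors[t][k]             # step across the offending edge
    for t, tri in enumerate(tris):      # exhaustive fallback
        if min(barycentric(tri, pts, p)) >= -1e-12:
            return t
    return -1                           # point outside the mesh

# Unit square split along its diagonal into two triangles.
pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
tris = [(0, 1, 2), (0, 2, 3)]
nbrs = [[-1, 1, -1], [-1, -1, 0]]
```

The walk touches only a path of triangles between seed and target, which is what makes the neighbor-to-neighbor stage fast; the fallback guarantees a result when the path hits a mesh boundary or a cycle.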
For realistic 3D surface geometries, a number of
factors can complicate the interpolation of surface
grid information.
The first of these factors is the proper answer to the question, "How close must a face be to a point to be acceptable?" This is not a trivial question for situations in which narrow gaps exist in the CFD mesh, and when there is a large discrepancy of face sizes between surface grids.
A second complication often encountered arises due to the fact that interpolation may be impossible (for convex ridges) or multivalued (for concave ridges).4
A third complication arises for cases in which thin shells are embedded in a 3D volumetric fluid mesh. For these cases, the best face might actually lie on the opposite side of the face being interpolated. This ambiguity is avoided by defining a surface normal, and then only considering the faces and points whose normals are aligned.
A fourth complication arises for the common case of thin structural elements (for example, roofs, walls, and stiffeners) surrounded by a fluid medium.
The structural elements will be discretized using
[Figure 3 diagram: a master code exchanges mesh position x, mesh velocity w, temperature T, and heat fluxes q with the CFD code, and forces f, heat fluxes q, and temperatures T with the CSD and CTD codes; u denotes deformations.]
Figure 3. Loose coupling for fluid-structure-thermal simulations. The technique uses an iterative strategy, with the master code transferring geometrical and physical information between codes.
shell elements. These shell elements will be affected
by loads from both sides. Most CSD codes require
a list of faces on which loads are exerted. This im-
plies that the shell elements loaded from both sides
will appear twice in this list. To be able to incorpo-
rate thickness and interpolate between CSD and
CFD surface grids in a unique way, these doubly de-
ned faces are identied and, should this check re-
veal the existence of doubly dened faces, new points
are introduced using an unwrapping procedure.
4
Position and Load Transfer
Another important question that needs to be ad-
dressed is how to make the different grids follow
one another when deforming surfaces are present.
Consider again the aeroelastic case of a wing de-
forming under aerodynamic loads. For accuracy, the CFD discretization will be fine on the surface, and the surface will be modeled as accurately as possible from the CAD/CAM data at the start of
the simulation. On one hand, a CSD discretization
that models the wing as a series of plates might be
entirely appropriate. If one would force the CFD
surface to follow the CSD surface, the result would
be a wing with no thickness, clearly inappropriate
for an acceptable CFD result. On the other hand,
for strong shock/object interactions with large plas-
tic deformations and possible tearing, forcing the
CFD surface to follow exactly the CSD surface is
the correct way to proceed. These two examples in-
dicate that more than one strategy might have to
be used to interpolate and move the surface of the
CFD mesh as the structure moves. To date, a number of techniques have been explored, including

• exact tracking with linear interpolation,4
• exact tracking with quadratic interpolation,9 and
• tracking with an initial distance vector.10
An important unsolved problem (at least to our knowledge) is how to handle, in an efficient and automatic way, models that exhibit incompatible dimensionalities. An example of such a reduced model is an aeroelastic problem in which the wing structure is modeled by a torsional beam (perfectly acceptable for the lowest eigenmodes), and the fluid by a 3D volumetric mesh. Clearly, the proper specification of movement for the CFD surface based on the 1D beam, as well as the load transfer from the fluid to the beam, represent nontrivial problems for a general, user-friendly computing environment.
During each global cycle, the CFD loads must be
transferred to the CSD mesh. Simple point-wise in-
terpolation can be used for cases in which the CSD
surface mesh elements are smaller than or of simi-
lar size to the elements of the CFD surface mesh.
However, this approach is not conservative and will
not yield accurate results for the common case of
CSD surface elements being larger than their CFD
counterpart. Considering, without loss of generality, the pressure loads only, it is desirable to attain

$$p_s(\mathbf{x}) \approx p_f(\mathbf{x}), \qquad (1)$$

while being conservative in the sense of

$$\mathbf{f} = \int p_s \,\mathbf{n}\, d\Gamma = \int p_f \,\mathbf{n}\, d\Gamma, \qquad (2)$$

where $p_f$ and $p_s$ denote the pressures on the fluid and solid material surfaces, and $\mathbf{n}$ is the normal vector. These requirements can be combined using a weighted residual method. With the approximations (summation over repeated indices implied)

$$p_s = N_i^s p_i^s, \qquad p_f = N_j^f p_j^f, \qquad (3)$$

we have

$$\int N_i^s N_j^s \, d\Gamma \; p_j^s = \int N_i^s N_j^f \, d\Gamma \; p_j^f, \qquad (4)$$

which can be rewritten as

$$\mathbf{M} \mathbf{p}_s = \mathbf{r} = \mathbf{L} \mathbf{p}_f. \qquad (5)$$

Here $\mathbf{M}$ is a consistent-mass matrix, and $\mathbf{L}$ a loading matrix. This weighted residual method is conservative in the sense of Equation 2.9,10 The most problematic part of the weighted residual method is the evaluation of the integrals appearing on the right-hand side of Equation 4. When the CFD and CSD surface meshes are not nested, this is a formidable task. Adaptive Gaussian quadrature techniques9,10 have been able to solve this problem reliably even for highly complex geometries.
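In one dimension with piecewise-constant pressures, the weighted residual transfer reduces to exact overlap integration with a diagonal mass matrix; the sketch below is that special case (the mesh intervals and pressures are illustrative, and the higher-order shape functions and adaptive quadrature of the cited work are omitted):

```python
def transfer_loads(fluid_edges, p_fluid, solid_edges):
    """Project piecewise-constant fluid pressures onto a coarser
    structural surface mesh by exact overlap integration -- the
    weighted residual method with piecewise-constant shape
    functions, for which M is diagonal.
    Each mesh is a list of (x_left, x_right) intervals."""
    p_solid = []
    for sl, sr in solid_edges:
        r = 0.0
        for (fl, fr), p in zip(fluid_edges, p_fluid):
            overlap = max(0.0, min(sr, fr) - max(sl, fl))
            r += overlap * p           # right-hand side, L * p_f
        p_solid.append(r / (sr - sl))  # solve the diagonal M
    return p_solid

# Four fine fluid faces vs. two coarse structural faces on [0, 1].
fluid = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]
pf = [1.0, 2.0, 4.0, 8.0]
solid = [(0.0, 0.5), (0.5, 1.0)]
ps = transfer_loads(fluid, pf, solid)
```

The key property carries over from the general method: the total force on the structural mesh equals the total force on the fluid mesh, which point-wise interpolation onto coarse CSD faces would not guarantee.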
Treatment of Moving Surfaces/Bodies
Any fluid-structure interaction simulation with considerable structural deformation will require a flow solver that can handle arbitrary surface deformation in time. The treatment of these moving surfaces differs depending on the mesh type chosen. For body-conforming grids, the external mesh faces match up with the surface (body surfaces, external surfaces, and so on) of the domain. This is not the case for the embedded approach (also known as the fictitious domain, immersed boundary, or Cartesian method), in which the surface is placed inside a large mesh (typically a box), with special treatment of the elements near the surfaces. For moving or deforming surfaces with topology change, both approaches have complementary strengths and weaknesses.
Body-Conforming Moving Meshes
The PDEs describing the flow need to be cast in an arbitrary Lagrangian-Eulerian (ALE) frame of reference, the mesh is moved in such a way as to minimize distortion, if required the topology is reconstructed, the mesh is regenerated, and the solution is reinterpolated. All of these steps have been optimized over the last decade, and this approach has been used extensively.6,11–14
The body-conforming solution strategy exhibits several shortcomings:

• The topology reconstruction can sometimes fail for singular surface points.
• There is no way to remove subgrid features from surfaces, leading to small elements due to geometry.
• Reliable parallel performance on more than 16 processors has proven elusive for most general-purpose grid generators.
• The interpolation required between grids invariably leads to some loss of information.
• There is an extra cost associated with the recalculation of geometry, wall distances, and mesh velocities as the mesh deforms.

On the other hand, the imposition of boundary conditions is natural, the precision of the solution is high at the boundary, and this approach still represents the only viable solution for problems with boundary layers.
Embedded Fixed Meshes
An embedded fixed mesh is not body conforming and does not move. Hence, the PDEs describing the flow can remain in the simpler Eulerian frame of reference. At every time step, the edges crossed by CSD faces are identified and proper boundary conditions are applied in their vicinity. Although used extensively (see Löhner and colleagues,15 Murman, Aftosmis, and Berger,16 and the references cited therein), this solution strategy also exhibits some shortcomings:

• The boundary, which has the most profound influence on the ensuing physics, is also where the worst elements are found.
• At the same time, near the boundary, the embedding boundary conditions must be applied, reducing the local order of approximation for the PDE.
• Stretched elements cannot be introduced to resolve boundary layers.
• Adaptivity is essential for most cases.
• There is an extra cost associated with the recalculation of geometry (when adapting) and the crossed edge information.
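Identifying the crossed edges can be sketched in 2D with orientation tests; a real code does this in 3D with edge/triangle intersections and spatial search structures, and the grid and segment below are purely illustrative:

```python
def segments_cross(p, q, a, b):
    """True if segments pq and ab properly intersect
    (strict 2D orientation tests; touching does not count)."""
    def orient(u, v, w):
        return (v[0] - u[0]) * (w[1] - u[1]) - (v[1] - u[1]) * (w[0] - u[0])
    return (orient(p, q, a) * orient(p, q, b) < 0
            and orient(a, b, p) * orient(a, b, q) < 0)

def crossed_edges(seg, nx, ny, h=1.0):
    """List the edges of an nx-by-ny Cartesian grid (spacing h)
    crossed by an embedded CSD face seg = ((x0, y0), (x1, y1))."""
    hits = []
    for j in range(ny + 1):
        for i in range(nx + 1):
            a = (i * h, j * h)
            if i < nx and segments_cross(seg[0], seg[1], a, ((i + 1) * h, j * h)):
                hits.append((a, ((i + 1) * h, j * h)))   # horizontal edge
            if j < ny and segments_cross(seg[0], seg[1], a, (i * h, (j + 1) * h)):
                hits.append((a, (i * h, (j + 1) * h)))   # vertical edge
    return hits

# A diagonal structural face embedded in a 2x2 grid.
edges = crossed_edges(((0.2, 0.4), (1.8, 1.5)), nx=2, ny=2)
```

The brute-force double loop stands in for the spatial data structures a production code would use; the boundary conditions mentioned in the text are then applied at exactly these flagged edges.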
Efficient Use of Supercomputing Hardware
Despite the striking successes reported to date, only the simplest solvers (explicit time-stepping or implicit iterative schemes, perhaps with added multigrid) have been ported without major changes or problems to massively parallel machines with distributed memory. Many code options essential for realistic simulations are difficult to parallelize on this type of machine, for example, local and global remeshing,2,17 fluid-structure interaction with topology change, and, in general, applications with rapidly varying load imbalances. Even if 99 percent of all operations required by these codes can be parallelized, the maximum achievable gain would be 1:100.
If we accept as fact that for most large-scale codes we might not be able to parallelize more than 99 percent of all operations, the shared-memory paradigm, discarded for a while as nonscalable, will make a comeback. It is far easier to parallelize some of the more complex algorithms, as well as cases with large load imbalance, on a shared-memory machine. In addition, it is within technological reach to achieve a 100-processor, shared-memory machine (128 processors have been a reality since 2000).
Figure 4 shows the performance of the authors' Finite Element Flow Code (Feflo), the fluid code used in the work presented here, on a variety of common US Department of Defense high-performance-computing platforms. One can see that the speedup obtained using shared- and distributed-memory approaches is similar.
[Figure 4: speedup versus number of processors (1 to 32) for the ideal case, SGI-O2K SHM, SGI-O2K MPI, IBM-SP2 MPI, and HP-DAX MPI.]

Figure 4. Performance of the Finite Element Flow Code (Feflo) on different platforms. Shared- and distributed-memory approaches gave similar results.
32 COMPUTING IN SCIENCE & ENGINEERING
Examples
The loose coupling methodology has been applied to a number of problems over the past five years. We include here some recent examples, from simple rigid-body CSD motion to highly nonlinear, fragmenting (that is, topology-changing) solids. Additional examples, including validation and comparison to experiments, are available elsewhere.5,6,12,13,17,18
Series-60 Hull
The first example considers the steady (incompressible) flow past a typical ship hull. The hull is allowed to sink and trim due to the fluid forces. The final position and inclination (trim) of the hull are obtained iteratively. In each iteration, the steady flow is computed, the forces and moments evaluated, and the ship repositioned. The mesh is typically moved. Should the need arise, a local or global remeshing is invoked to remove elements with negative volumes.
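The iteration just described is a plain fixed-point loop. A schematic sketch, in which `solve_steady_flow` and `reposition` stand in for the actual CFD solve and rigid-body update (both hypothetical callables, as is the relaxation factor):

```python
def sink_and_trim(solve_steady_flow, reposition, tol=1e-6, max_iter=50, relax=1.0):
    """Fixed-point iteration: solve the steady flow, evaluate forces and
    moments, reposition the hull; stop when sinkage and trim settle."""
    sinkage, trim = 0.0, 0.0
    for _ in range(max_iter):
        forces, moments = solve_steady_flow(sinkage, trim)
        new_sinkage, new_trim = reposition(forces, moments)
        ds, dt = new_sinkage - sinkage, new_trim - trim
        sinkage += relax * ds   # under-relaxation damps oscillations if relax < 1
        trim += relax * dt
        if abs(ds) < tol and abs(dt) < tol:
            break
    return sinkage, trim
```

In the actual computation, each `solve_steady_flow` call is a full flow solve, and remeshing is triggered inside it whenever the moved mesh produces elements with negative volumes.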
Figure 5a shows the geometry considered. The mesh consisted of approximately 400,000 elements. Figures 5b and 5c depict the convergence of the computed sinkage and trim with respect to the number of iterations. Figures 5d and 5e compare the computed sinkage and trim with experimental data. Figures 5f and 5g compare the computed wave drag coefficient with experimental data for the fixed model and the free-to-sink-and-trim model, respectively. A run of this kind can be obtained in less than an hour on a leading-edge PC. Details are available elsewhere.18
Nose Cone
Figure 6 shows results for a proposed nose-cone experiment. The CFD part of the problem was computed using Feflo98, and the CSD and CTD with Cosmic-Nastran. More on the flow solver is available elsewhere.2,11,19,20
The incoming flow was set to M∞ = 3.0 at an angle of attack of α = 10°. The Reynolds number, based on the length of the cone, was approximately Re = 2 × 10^6. The solution was initiated by converging the fluid-thermal problem without any structural deformation. Thereafter, the fluid-structure-thermal problem was solved. Convergence was achieved after 10 cycles. The convergence is markedly slower than that achieved for fluid-structure (aeroelastic) problems. This is due to the interplay of temperature advection in the flow domain and conduction in the solid, whose counteracting effects must be balanced.
Fragmenting Weapon
The third case considered was a fragmenting weapon. The detonation and shock propagation were modeled using a Jones-Wilkins-Lee equation of state with Feflo. The structural response, which included tearing and failure of elements, was computed using GA-DYNA, General Atomics' version of DYNA3D.
At the beginning, the walls of the weapon separate two flow domains: the inner domain, consisting of high explosives, and the outer domain, consisting of air. As the weapon's structure begins to fail, fragments are shrunk and the ensuing gaps are automatically remeshed, leading to one continuous domain. The topology reconstruction from the discrete data passed to Feflo from GA-DYNA is completely automatic, requiring no user intervention at any stage of the simulation. The mesh in the fluid domain was adapted using sources for geometric fidelity and a modified H2-seminorm error indicator. The sources required for geometric fidelity are constructed automatically from the CSD surface faces during the topology reconstruction. At the end of the run, the flow domain contains approximately 750 independently flying bodies and 16 million elements.
Figures 7a, 7b, and 7c show the development of the detonation. The fragmentation of the weapon is clearly visible. Figure 7d shows the correlation with the observed experimental evidence.
Blast Interaction with a Generic Ship Hull
Figure 8 shows the interaction of an explosion with a generic ship hull. For this fully coupled CFD/CSD run, the structure was modeled with quadrilateral shell elements and the fluid as a mixture of high explosives and air, and mesh embedding was used.15 The structural elements were assumed to fail once the average strain in an element exceeded 60 percent. As the shell elements failed, the fluid domain underwent topological changes.
Figure 8 shows the structure and the pressure contours in a cut plane at two times during the run. Note the failure of the structure, and the invasion of high pressure into the chamber. The distortion and interpenetration of the structural elements is such that the traditional moving-mesh approach (with topology reconstruction, remeshing, ALE formulation, and so on) will invariably fail for this class of problems. In fact, it was this type of application that led the authors to consider the development of an embedded CSD capability in Feflo.15
MAY/JUNE 2004 33

The methodologies and software required for fluid-structure-(thermal) interaction simulations have progressed rapidly over the last decade. Several packages offer the possibility of fully nonlinear coupled CSD, CFD, and CTD in a production environment. Looking toward the future, we envision a multidisciplinary, database-linked framework that is accessible from anywhere on demand, simulations with unprecedented detail and realism carried out in fast succession, virtual meeting spaces where geographically dispersed designers and engineers collaboratively discuss and analyze new ideas, and first-principles-driven virtual reality.
[Figure 5 panels (a)–(g): sinkage (s) and trim (t) versus number of iterations for Fr = 0.18, 0.25, 0.32, 0.368, and 0.388; sinkage, trim, and wave drag coefficient (Cw) versus Froude number (0.15 to 0.40), with present results compared against experimental results from IHHI, SRS, and UT.]

Figure 5. Series-60 hull. (a) Surface mesh, (b) sinkage convergence, (c) trim convergence, (d) sinkage versus experimental data as a function of Froude number, (e) trim versus Froude number, (f) wave drag for the fixed model, and (g) wave drag for the free model.
Acknowledgments
This research was partially supported by AFOSR and DTRA. Leonidas Sakell, Michael Giltrud, and Darren Rice acted as technical monitors.

References
1. E.A. Thornton and P. Dechaumphai, "Coupled Flow, Thermal and Structural Analysis of Aerodynamically Heated Panels," J. Aircraft, vol. 25, no. 11, 1988, pp. 1052–1059.
2. R. Löhner, "Three-Dimensional Fluid-Structure Interaction Using a Finite Element Solver and Adaptive Remeshing," Computer Systems in Eng., vol. 1, nos. 2–4, 1990, pp. 257–272.
3. G.P. Guruswamy and C. Byun, Fluid-Structural Interactions Using Navier-Stokes Flow Equations Coupled with Shell Finite Element Structures, paper no. 93-3087, Am. Inst. of Aeronautics and Astronautics, 1993.
[Figure 6 panels: (1) pressure, (2) temperature, (3) deformation, and (4) temperature.]

Figure 6. Nose cone. The (a) surface grids for computational fluid dynamics (CFD) and computational structure dynamics/computational thermal dynamics (CSD/CTD), and the (b) CFD/CSD/CTD results obtained.
4. R. Löhner et al., Fluid-Structure Interaction Using a Loose Coupling Algorithm and Adaptive Unstructured Grids, paper no. 95-2259, Am. Inst. of Aeronautics and Astronautics, 1995.
5. R. Löhner et al., Fluid-Structure-Thermal Interaction Using a Loose Coupling Algorithm and Adaptive Unstructured Grids, paper no. 98-2419, Am. Inst. of Aeronautics and Astronautics, 1998.
6. J.D. Baum et al., A Coupled CFD/CSD Methodology for Modeling Weapon Detonation and Fragmentation, paper no. 99-0794, Am. Inst. of Aeronautics and Astronautics, 1999.
7. N. Maman and C. Farhat, "Matching Fluid and Structure Meshes for Aeroelastic Computations: A Parallel Approach," Computers and Structures, vol. 54, no. 4, 1995, pp. 779–785.
8. "COCOLIB Deliverable 1.1: Specification of the Coupling Communications Library," Cispar Esprit Project 20161, 1997.
9. J.R. Cebral and R. Löhner, "Conservative Load Projection and Tracking for Fluid-Structure Problems," AIAA J., vol. 35, no. 4, 1997, pp. 687–692.
10. J.R. Cebral and R. Löhner, Fluid-Structure Coupling: Extensions and Improvements, paper no. 97-0858, Am. Inst. of Aeronautics and Astronautics, 1997.
11. J.D. Baum, H. Luo, and R. Löhner, A New ALE Adaptive Unstructured Methodology for the Simulation of Moving Bodies, paper no. 94-0414, Am. Inst. of Aeronautics and Astronautics, 1994.
12. J.D. Baum et al., A Coupled Fluid/Structure Modeling of Shock Interaction with a Truck, paper no. 96-0795, Am. Inst. of Aeronautics and Astronautics, 1996.
13. J.D. Baum et al., Application of Unstructured Adaptive Moving Body Methodology to the Simulation of Fuel Tank Separation From an F-16 C/D Fighter, paper no. 97-0166, Am. Inst. of Aeronautics and Astronautics, 1997.
14. D. Sharov et al., "Time-Accurate Implicit ALE Algorithm for Shared-Memory Parallel Computers," Proc. 1st Int'l Conf. Computational Fluid Dynamics, Springer Verlag, 2000, pp. 387–392.
15. R. Löhner et al., Adaptive Embedded Unstructured Grid Methods, paper no. 03-1116, Am. Inst. of Aeronautics and Astronautics, 2003.
16. S.M. Murman, M.J. Aftosmis, and M.J. Berger, Simulations of 6-DOF Motion with a Cartesian Method, paper no. 03-1246, Am. Inst. of Aeronautics and Astronautics, 2003.
17. R. Löhner et al., "The Numerical Simulation of Strongly Unsteady Flows With Hundreds of Moving Bodies," Int'l J. Numerical Methods in Fluids, vol. 31, 1999, pp. 113–120.

[Figure 7 panels: pressure, mesh velocity, and fragment velocity at t = 0.131, 0.310, and 0.500 ms; radial velocity Vr (thousands of cm/sec) versus fragment weight (kg), comparing the computed mass-average velocity with fragmentation data.]

Figure 7. Fragmenting weapon. The figure shows fragmentation (a) at 131 μsec, (b) at 310 μsec, (c) at 500 μsec, and (d) radial velocity as a function of fragment weight.
18. C. Yang and R. Löhner, "Calculation of Ship Sinkage and Trim Using a Finite Element Method and Unstructured Grids," Int'l J. CFD, vol. 16, no. 3, 2002, pp. 217–227.
19. H. Luo, J.D. Baum, and R. Löhner, "Edge-Based Finite Element Scheme for the Euler Equations," AIAA J., vol. 32, no. 6, 1994, pp. 1183–1190.
20. H. Luo, J.D. Baum, and R. Löhner, "An Accurate, Fast, Matrix-Free Implicit Method for Computing Unsteady Flows on Unstructured Grids," Comp. and Fluids, vol. 30, 2001, pp. 137–159.
Rainald Löhner is a professor in the School of Computational Sciences at George Mason University, where he is also head of the Fluid and Materials Program. His research interests include field solvers based on unstructured grids, fluid-structure-thermal interaction, grid generation, parallel computing, and visualization. Löhner has an MS in mechanical engineering from the Technical University of Braunschweig, Germany, and a PhD in civil engineering from the University of Wales. He is a member of the American Institute of Aeronautics and Astronautics (AIAA) and Sigma-Chi. Contact him at rlohner@gmu.edu.
Chi Yang is an associate professor in the School of Computational Sciences at George Mason University. Her research interests include field solvers based on unstructured grids for compressible and incompressible flows, incompressible flows with free surface, field solvers based on the boundary element method for free surface flows, ship hydrodynamics and hull optimization, and fluid-structure interaction. Yang has a BS and a PhD in naval architecture and ocean engineering from Shanghai Jiao Tong University. She is a member of the AIAA and an associate member of the Society of Naval Architects and Marine Engineers. Contact her at cyang@gmu.edu.
Figure 8. Results in a cut plane for the interaction of an explosion with a generic ship hull: (a) surface at 20 msec, (b) pressure at 20 msec, (c) surface at 50 msec, and (d) pressure at 50 msec.

Juan R. Cebral is an assistant professor in the School of Computational Sciences at George Mason University and a research physicist at Inova Fairfax Hospital. His research interests include image-based modeling of blood flows; distributed, multidisciplinary visualization; applications to cerebral aneurysms, carotid artery disease, and cerebral perfusion; and fluid-structure interaction in the context of biofluids. Cebral has an MS in physics from the University of Buenos Aires and a PhD in computational sciences from George Mason University. He is a member of the AIAA. Contact him at jcebral@gmu.edu.
Joseph D. Baum is director of the Center for Applied Computational Sciences at the Science Applications International Corporation. His research interests include unsteady internal and external flows, shock and blast dynamics, and blast-structure interaction. Baum has an MSc and a PhD in aerospace engineering from Georgia Tech. He is an associate fellow of the AIAA. Contact him at joseph.d.baum@saic.com.

Charles Charman is senior technical advisor at General Atomics. His research interests include nonlinear structural mechanics, soil-structure and fluid-structure interaction, parallel computing, and discrete particle mechanics. Charman has a BS in engineering from San Diego State University and an MS in civil engineering from the Massachusetts Institute of Technology. He is a professional civil engineer in the State of California. Contact him at charman@gat.com.

Eric L. Mestreau is a senior research scientist at the Center for Applied Computational Sciences at the Science Applications International Corporation. His research interests include fluid/structure coupling, shock and blast dynamics, and graphical display of large models. Mestreau has an MSc in mechanical engineering from the Ecole Centrale de Paris. He is a member of the AIAA. Contact him at eric.l.mestreau@saic.com.

Daniele Pelessone is chief scientist and founding partner of Engineering and Software Systems Solutions (ES3). His research interests include development of advanced analytical modeling techniques in structural dynamics, including theoretical continuum mechanics, applications of finite-element programs, and software installation and optimization on vector processing computers. Pelessone has an MSc in applied mechanics from the University of California, San Diego, and a DSc in aeronautical engineering from the University of Pisa. Contact him at peless@home.com.
Computational simulation, in conjunction with laboratory experiment, can provide valuable insight into complex biological systems that involve the interaction of an elastic structure with a viscous, incompressible fluid. This biological fluid-dynamics setting presents several more challenges than those traditionally faced in computational fluid dynamics: specifically, dynamic flow situations dominate, and capturing time-dependent geometries with large structural deformations is necessary. In addition, the shape of the elastic structures is not preset: fluid dynamics determines it.
The Reynolds number of a flow is a dimensionless parameter that measures the relative significance of inertial forces to viscous forces. Due to the small length scales, the swimming of microorganisms corresponds to very small Reynolds numbers (10^-6 to 10^-2). Faster and larger organisms such as fish and eels swim at high Reynolds numbers (10^2 to 10^5), but organisms such as nematodes and tadpoles experience inertial forces comparable to viscous forces: they swim at Reynolds numbers of order one.
Modern methods in computational fluid dynamics can help create a controlled environment in which we can measure and visualize the fluid dynamics of swimming organisms. Accordingly, we designed a unified computational approach, based on an immersed boundary framework,1 that couples internal force-generation mechanisms of organisms and cells with an external, viscous, incompressible fluid. This approach can be applied to model low, moderate, and high Reynolds number flow regimes.
Analyzing the fluid dynamics of a flexible, swimming organism is very difficult, even when the organism's waveform is assumed in advance.2,3 In the case of microorganism motility, the low Reynolds number simplifies mathematical analysis because the equations of fluid mechanics in this regime are linear. However, even at low Reynolds numbers, a microorganism's waveform is an emergent property of the coupled nonlinear system, which consists of the organism's force-generation mechanisms, its passive elastic structure, and external fluid dynamics. In the immersed boundary framework, the force-
SIMULATION OF SWIMMING ORGANISMS: COUPLING INTERNAL MECHANICS WITH EXTERNAL FLUID DYNAMICS

RICARDO CORTEZ AND LISA FAUCI
Tulane University

NATHANIEL COWEN
Courant Institute of Mathematical Sciences

ROBERT DILLON
Washington State University

1521-9615/04/$20.00 © 2004 IEEE
Copublished by the IEEE CS and the AIP

FRONTIERS OF SIMULATION
Problems in biological fluid dynamics typically involve the interaction of an elastic structure with its surrounding fluid. A unified computational approach, based on an immersed boundary framework, couples the internal force-generating mechanisms of organisms and cells with an external, viscous, incompressible fluid.
generating organism is accounted for by suitable contributions to a force term in the fluid-dynamics equations. The force of an organism on the fluid is a Dirac delta-function layer of force supported only by the region of fluid that coincides with the organism's material points; away from these points, this force is zero. After including this force distribution on the fluid, we can solve the fluid equations by using either a finite-difference grid-based method or the regularized Stokeslets grid-free method developed specifically for zero Reynolds number regimes.4
This article presents our recent progress on coupling the internal molecular motor mechanisms of beating cilia and flagella with an external fluid, as well as the three-dimensional (3D) undulatory swimming of nematodes and leeches. We expect these computational models to provide a testbed for examining different theories of internal force-generation mechanisms.
Immersed Boundary Framework
Charles Peskin1 introduced the immersed boundary method to model blood flow in the heart. Since then, many researchers have advanced this method to study other biological fluid dynamics problems, including platelet aggregation, 3D blood flow in the heart, inner-ear dynamics, blood flow in the kidneys, limb development, and deformation of red blood cells; a recent overview appears elsewhere.1
For this article's purposes, we describe the immersed boundary method in the context of swimming organisms. We regard the fluid as viscous and incompressible, and the filaments that comprise the organisms as elastic boundaries immersed in this fluid. In our 3D simulations (Figure 1 shows a typical example), many filaments join to form the organism. The nematode, tapered at both ends, is built out of three families of filaments: circular, longitudinal, and right- and left-handed helical filaments.
We assume that the flow is governed by the incompressible Navier-Stokes equations (conservation of momentum and conservation of mass):

ρ(∂u/∂t + u · ∇u) = −∇p + μΔu + F(x, t)
∇ · u = 0.
Here, ρ is fluid density, μ is dynamic viscosity, u is fluid velocity, p denotes pressure, and F is the force per unit volume the organism exerts on the fluid. This force is split into the contributions from each of the filaments comprising the organism: F = Σ_k F_k. The forces F_k due to the kth filament include elastic forces from individual filament structures and passive elastic forces caused by links between filaments; they also may include active forces due to muscle contractions (in the case of nematode or leech swimming) or active forces caused by the action of dynein molecular motors (in the case of ciliary and flagellar beating). F is a δ-function layer of force supported only by the region of fluid that coincides with the filaments' material points; away from these points, the force is zero.
Let X_k(s, t) denote the kth filament as a function of a Lagrangian parameter s and time t, and let f_k(s, t) denote the boundary force per unit length along the kth filament. The boundary force depends on
Figure 1. Three-dimensional nematode. (a) An immersed boundary nematode, and (b) a snapshot of a swimming nematode suppressing all but the circular filaments. Notice that these filaments are elastic and deform in response to the viscous fluid.
the biological system being modeled; we'll discuss the general form later. We assume the elastic boundary has the same density as the surrounding fluid, and that its mass is attributed to the mass of the fluid in which it sits; thus, the forces are transmitted directly to the fluid. The force field F_k from the filament X_k(s, t) is therefore

F_k(x, t) = ∫ f_k(s, t) δ(x − X_k(s, t)) ds.

Here, the integration is over the kth one-dimensional filament comprising an immersed boundary, and δ is the 3D Dirac delta-function. The total force F(x, t) is calculated by adding the forces from each filament.
Each filament of the immersed boundary is approximated by a discrete collection of points. This boundary exerts elastic forces on the fluid near each of these points. We imagine that between each pair of successive points on a filament, an elastic spring or link generates forces to push the link's length toward a specified resting length. The force arising from the spring on a short filament segment of length ds is the product of a stiffness constant and the deviation from rest length. This force is approximated by the force density at a single point in the segment multiplied by ds. In addition to the forces caused by springs along individual filaments, forces due to passive or active interactions between filaments contribute to force density. Each spring may have a time-dependent rest length as well as a time-dependent stiffness. The coupled fluid-immersed boundary system is closed by requiring the velocity of a filament's material point to be equal to the fluid velocity evaluated at that point.
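The spring construction described above is easy to write down. The following sketch (the stiffness value, rest length, and closed-filament assumption are ours, for illustration) computes the elastic force at each discrete point of a closed filament from the Hookean tensions in its two adjacent links:

```python
import numpy as np

def spring_force(X, k, rest_len):
    """Elastic forces on a closed filament discretized as points joined by
    springs. Each link pushes its length toward rest_len with stiffness k;
    the force on a point is the sum of the tensions in its two links."""
    d_next = np.roll(X, -1, axis=0) - X            # vector to the next point
    lengths = np.linalg.norm(d_next, axis=1)
    tension = k * (lengths - rest_len)             # Hooke's law per link
    f_next = tension[:, None] * (d_next / lengths[:, None])
    # Newton's third law: subtract the pull exerted through the previous link.
    return f_next - np.roll(f_next, 1, axis=0)
```

In a time-dependent simulation, `rest_len` and `k` may themselves be functions of time; this is one way the active muscle contractions discussed later can be imposed.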
In the next two sections, we provide brief descriptions of two numerical methods used in the simulation of immersed boundary motion in flows corresponding to a wide range of Reynolds numbers.
Grid-Based Immersed Boundary Algorithm
We can summarize the immersed boundary algorithm as follows: Suppose that at the end of time step n, we have fluid velocity field u^n on a grid and the configuration of the immersed boundary points on the filaments comprising the organism, (X_k)^n. To advance the system by one time step, we must

1. Calculate the force densities f_k from the boundary configuration.
2. Spread the force densities to the grid to determine the forces F_k on the fluid.
3. Solve the Navier-Stokes equations for u^(n+1).
4. Interpolate the fluid velocity field to each immersed boundary point (X_k)^n and move the point at this local fluid velocity.

The Navier-Stokes equations are solved on a regular grid with simple boundary conditions in Step 3; Steps 2 and 4 involve the use of a discrete delta-function that communicates information between the grid and the immersed boundary points.1 This algorithm's crucial feature is that the immersed boundary is not the computational boundary in the Navier-Stokes solver; rather, it is a dynamic force field that influences fluid motion via the force term in the fluid equations. This modular approach lets us choose a fluid solver best suited to the problem's Reynolds number. Furthermore, we can base whatever solver we choose on a variety of formulations, including finite-difference and finite-element methods.
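Steps 2 and 4 are the communication kernel of the method. The 2D fragment below (our illustration, using Peskin's common four-point discrete delta; the Navier-Stokes solve of Step 3 is omitted) shows how force spreading and velocity interpolation use the same kernel:

```python
import numpy as np

def delta_1d(r, h):
    """Peskin's four-point discrete delta (one direction), integrating to 1."""
    r = np.abs(r) / h
    phi = np.zeros_like(r)
    near = r < 1
    phi[near] = (3 - 2*r[near] + np.sqrt(1 + 4*r[near] - 4*r[near]**2)) / 8
    far = (r >= 1) & (r < 2)
    phi[far] = (5 - 2*r[far] - np.sqrt(-7 + 12*r[far] - 4*r[far]**2)) / 8
    return phi / h

def spread_forces(X, f, gx, gy, h, ds):
    """Step 2: spread boundary force densities f to grid forces F."""
    F = np.zeros((len(gy), len(gx), 2))
    for (x, y), fk in zip(X, f):
        w = np.outer(delta_1d(gy - y, h), delta_1d(gx - x, h))  # 2D delta
        F += w[:, :, None] * fk * ds
    return F

def interpolate_velocity(X, u, gx, gy, h):
    """Step 4: evaluate the grid velocity u at the boundary points."""
    U = np.zeros_like(X)
    for i, (x, y) in enumerate(X):
        wx = delta_1d(gx - x, h) * h    # weights now sum to 1
        wy = delta_1d(gy - y, h) * h
        U[i] = [wy @ u[:, :, c] @ wx for c in (0, 1)]
    return U
```

Using the same kernel in both directions keeps the scheme conservative: the total force put on the grid equals the total boundary force, and interpolating a constant velocity field returns that constant at every boundary point.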
Grid-Free Method of Regularized Stokeslets
At the low Reynolds number regime of swimming microorganisms, we can describe the fluid dynamics via the quasi-steady Stokes equations:

μΔu = ∇p − F(x, t)
∇ · u = 0.

A fundamental solution of these equations is called a Stokeslet, which represents the velocity due to a concentrated force acting on the fluid at a single point in an infinite domain of fluid.3 In fact, F(x, t) is the sum of such point forces. Ricardo Cortez considered the smoothed case in which the concentrated
[Figure 2: velocity vectors plotted on two planes; axes x, y (−1 to 1) and z (0 to 6).]

Figure 2. A bacterium swimming because of a helical wave's propagation. Fluid velocity vectors are shown on two planes perpendicular to the swimming axis. The simulation demonstrates the grid-free method of regularized Stokeslets.
force is applied not at a single point, but over a small ball of radius ε centered at the immersed boundary point.4 We can compute a regularized fundamental solution, or regularized Stokeslet, analytically.
The method of regularized Stokeslets is a Lagrangian method in which the trajectories of fluid particles are tracked throughout the simulation. This method is particularly useful when the forces driving the fluid motion are placed along the surface of a swimming organism that deforms because of its interaction with the fluid. The forces on the surface are given by regularized delta-functions, and the resulting velocity represents the exact solution of Stokes equations for the given forces.
Because the incompressible Stokes equations are linear, we can use direct summation to compute the velocity at each immersed boundary point to advance a time step. This method of regularized Stokeslets is related to boundary integral methods, but it has the advantage that forces may be applied at any discrete collection of points; these points need not approximate a smooth interface.
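For concreteness, the direct summation can be sketched as follows. This uses the 3D regularized Stokeslet kernel for one common blob choice; treat the exact blob, and hence the precise algebraic form, as an assumption of this sketch:

```python
import numpy as np

def stokeslet_velocity(x, X, f, eps, mu=1.0):
    """Velocity at evaluation points x (M,3) induced by regularized point
    forces f (N,3) located at X (N,3); eps is the regularization radius."""
    u = np.zeros((len(x), 3))
    for Xk, fk in zip(X, f):
        dx = x - Xk
        r2 = np.sum(dx * dx, axis=1)
        denom = (r2 + eps**2) ** 1.5
        # Regularized Stokeslet: finite everywhere, classical 1/r far field.
        u += (fk * ((r2 + 2 * eps**2) / denom)[:, None]
              + dx * (dx @ fk / denom)[:, None]) / (8 * np.pi * mu)
    return u
```

Because the sum is exact for the regularized forces, no volumetric grid is needed; this is what makes the method grid-free, at the cost of an O(MN) summation.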
We have successfully implemented this algorithm for ciliary beating in two dimensions and helical swimming in three. Figure 2 shows a snapshot of a helical swimmer with fluid velocity fields computed along two planes perpendicular to the axis of the helix.
Undulatory Swimming
Nematodes are unsegmented roundworms with elongated bodies tapered at both ends. The most famous nematode is C. elegans, a model organism for genetic, developmental, and neurobiological studies. Nematodes possess a fluid-filled cavity, longitudinal muscles, and a flexible outer cuticle composed of left- and right-handed helical filaments, yet they still maintain a circular cross-section. The alternate contractions of their dorsal and ventral longitudinal muscles cause these worms to swim with an eel-like, undulatory pattern.5 A typical nematode is roughly 0.5 to 1 millimeter long, undulating with a wave speed between 0.8 and 4 millimeters per second. Therefore, in water, a Reynolds number (based on wavelength and wave speed) between 0.4 and 4 governs nematode swimming.
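That estimate is easy to reproduce with Re = ρUL/μ and water's properties; taking the wavelength equal to the body length is our assumption here:

```python
def reynolds_number(speed, length, rho=1000.0, mu=1.0e-3):
    """Re = rho * U * L / mu; defaults are water in SI units (kg/m^3, Pa*s)."""
    return rho * speed * length / mu

# Nematode: body length 0.5 to 1 mm, wave speed 0.8 to 4 mm/s.
re_low = reynolds_number(0.8e-3, 0.5e-3)    # slow, small end
re_high = reynolds_number(4.0e-3, 1.0e-3)   # fast, large end
print(re_low, re_high)   # spans roughly 0.4 to 4, as quoted in the text
```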
We chose the filaments comprising our computational organism to reflect the nematode's anatomy, including the longitudinal muscle fibers and the helical filaments of its cuticle. The stiffness constants of the springs making up these filaments reflect the tissue's elastic properties. In the simulation depicted in Figure 1, sinusoidal undulatory waves are passed along the body of the immersed organism by imposing appropriate muscle contractions along its longitudinal and helical filaments. Figure 3 shows a 3D perspective of the worm along with the velocity field of the fluid depicted in the plane that contains the worm's centerline. (Here, we used a grid-based immersed boundary algorithm.) The flow field shows vortices with alternating directions supported along the length of the organism. A previous study experimentally observed this characteristic flow pattern for the nematode Turbatrix.5 We computed the swimming speed of our simulated nematode, whose amplitude of oscillation we chose to be about one half of that reported for Turbatrix, to be 5 percent of the propulsive wave speed along its body. These calculations compare very well with the experimentally observed swimming speed of 20 percent of wave speed reported for Turbatrix;5 swimming speed is proportional to the square of the wave's amplitude.2
We now turn to modeling another undulatory swimmer: the leech. Leeches are larger and faster than nematodes, and have an elliptical rather than circular cross-section. We focus on 2-centimeter-long juvenile leeches, with propulsive wave speeds of approximately 5 centimeters per second, undulating in water. In this case, the Reynolds number based on wavelength and wave speed is about 1,000; inertial effects are significantly more important than viscous effects.⁶
Using the same immersed boundary construct as we did for the nematodes (longitudinal muscle filaments and right- and left-helical filaments), but replacing the circular filaments with elliptical cross-sectional filaments, we examine the leech's undulatory swimming in a 3D fluid.

Figure 3. Snapshot of a swimming nematode shown within the rectangular computational domain. The velocity field is depicted in the plane that contains the worm's centerline.

42 COMPUTING IN SCIENCE & ENGINEERING

Figure 4 shows four snapshots of the leech as viewed from the side, along with fluid markers for flow visualization. Each of the four snapshots depicts the leech at the same phase in its undulation, during successive periods. A wave passes over the body from left to right; note the forward swimming progression and the wake that is left behind. We initially placed the red fluid markers in the foreground far enough from the side of the leech that they don't get carried along with the organism. Figure 5 shows four snapshots of the leech from a different perspective; note the complex 3D particle mixing that occurs.
For our simulated leech, we used experimental data on waveform and wave speed originally reported by Chris Jordan.⁶ Because of accuracy constraints that require enough grid points within a cross-section of the leech, the aspect ratio of the simulated leech's elliptical cross-section is 2:1, not the actual 5:1 Jordan reported.⁶ We believe that this difference causes the simulated leech to swim about five times slower than the real leech.
Cilia and Flagella
Cilia and flagella are the prominent organelles associated with microorganism motility. Although the patterns of flagellar movement are distinct from those of ciliary movement, and flagella are typically much longer than cilia, their basic ultrastructure is identical. A core, called the axoneme, produces the bending of cilia and flagella. The typical axoneme consists of a central pair of single microtubules surrounded by nine outer doublet microtubules and encased by the cell membrane.⁷,⁸ Radial spokes attach to the peripheral doublet microtubules and span the space toward the central pair of microtubules. The outer doublets are connected by nexin links between adjacent pairs of doublets. Two rows of dynein arms extend from the A-tubule of an outer doublet toward the B-tubule of an adjacent doublet at regularly spaced intervals. The bending of the axoneme is caused by sliding between pairs of outer doublets, which in turn is due to the unidirectional adenosine triphosphate (ATP)-induced force generation of the dynein molecular motors. The precise nature of the spatial and temporal control mechanisms regulating the various waveforms of cilia and flagella is still unknown.
Considerable interest has focused on the development of mathematical models for the hydrodynamics of individual as well as rows of cilia and on individual flagellated organisms. Gray and Hancock's resistive-force theory⁹ and Sir James Lighthill's slender-body theory³ are particularly noteworthy. More detailed hydrodynamic analyses, such as refined slender-body theory and boundary element methods, have produced excellent simulations of both two- and three-dimensional flagellar propulsion and ciliary beating in an infinite fluid domain or in a domain with a fixed wall. In all these fluid dynamical models, researchers take the shape of the ciliary or flagellar beat as given. More recent work by Shay Gueron and Konstantin Levit-Gurevich includes a model that addresses the internal force generation in a cilium¹⁰ but does not explicitly model the individual microtubule-dynein interactions.

Figure 4. Snapshots of leech and surrounding fluid markers at the same phase in its undulation during successive temporal periods. The actual organism is mostly obscured in the first panel by the fluid markers placed around it.
Our model for an individual cilium or flagellum incorporates discrete representations of the dynein arms, passive elastic structures of the axoneme (including the microtubules and nexin links), and the surrounding fluid. This model couples the internal force generation of the molecular motors through the passive elastic structure with external fluid mechanics. The computational model can track detailed geometric information, such as the spacing and shear between the microtubules, the local curvature of individual microtubules, and the stretching of the nexin links. In addition, the explicit representation of the dynein motors gives us the flexibility to incorporate a variety of activation theories. The ciliary beat or flagellar waveform is not preset; it is an emergent property of the interacting components of the coupled fluid-axoneme system.
In other articles,¹¹,¹² we present a model of a simplified axoneme consisting of two microtubules, with dynein motors being dynamic, diagonal elastic links between the two microtubules. To achieve beating in the simplified two-microtubule model, we allow two sets of dyneins to act between the microtubules: one set is permanently attached to fixed nodes on the left microtubule, the other to fixed nodes on the right. Contraction of the dynein generates sliding between the two microtubules; in either configuration, one end of a dynein can attach, detach, and reattach to attachment sites on the microtubule. As the microtubules slide, a dynein link's endpoint can jump, or ratchet, from one node of the microtubule to another.

We model each microtubule as a pair of filaments with diagonal cross-links. The diagonal cross-links' elastic properties govern the resistance to microtubule bending. Linear elastic springs representing the nexin and/or radial links of the axoneme interconnect adjacent pairs of microtubules. In the case of ciliary beating, the axoneme is tethered to fixed points in space via strong elastic springs at the base. The entire structure is embedded in a viscous incompressible fluid.
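The passive part of this construction can be sketched as a network of linear springs; everything below (node layout, stiffness value, link pattern) is illustrative rather than the authors' actual discretization:

```python
import numpy as np

def spring_force(x_a, x_b, k, rest_len):
    """Linear spring force on node a due to the spring joining a and b."""
    d = x_b - x_a
    dist = np.linalg.norm(d)
    return k * (dist - rest_len) * d / dist

# Two parallel filaments, a crude stand-in for one microtubule,
# with diagonal cross-links whose stiffness resists bending.
n = 5
left = np.array([[0.0, i] for i in range(n)])
right = np.array([[1.0, i] for i in range(n)])

forces = np.zeros_like(left)
k_diag, rest_diag = 10.0, np.sqrt(2.0)
for i in range(n - 1):
    # diagonal cross-link from left[i] to right[i+1]
    forces[i] += spring_force(left[i], right[i + 1], k_diag, rest_diag)

# In the undeformed configuration every diagonal sits at its rest
# length, so the net passive force vanishes; bending the filament
# pair would stretch some diagonals and generate restoring forces.
print(np.allclose(forces, 0.0))  # True
```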
Figure 6 shows a cilium during the power stroke (note the two microtubules) and a ciliary waveform showing a single filament at equally spaced time intervals. This waveform was not preset; it resulted from the actions of individual dynein motors. In particular, the cilium's local curvature determined the activation cycle of each dynein motor along the cilium. Figure 7 shows the swimming of a model sperm cell whose waveform is also the result of a curvature control model. The beating cilium does indeed result in a net displacement of fluid in the direction of the power stroke, and the sperm cell does indeed swim in the direction opposite that of the wave. We have shown elsewhere¹² that making different assumptions about the internal dynein activation mechanisms does result in different swimming behavior. In particular, when we altered the curvature control model to change the effective time scale of dynein kinetics, the time of a single beat changed significantly, along with the entire waveform of the flagellum.

Figure 5. Snapshots of leech and surrounding fluid markers. From this perspective, the wave is moving back over the body, and the swimming progression is toward the viewer. Note the complex 3D fluid mixing depicted by the evolution of the fluid markers.
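A toy version of such a curvature control rule, with all names and the specific activation criterion being assumptions for illustration, might look like:

```python
import numpy as np

def discrete_curvature(x):
    """Signed curvature proxy at interior nodes of a 2D filament:
    the z-component of the cross product of successive segments."""
    seg = np.diff(x, axis=0)
    return seg[:-1, 0] * seg[1:, 1] - seg[:-1, 1] * seg[1:, 0]

def active_dynein_family(x):
    """Toy curvature-control rule: where the filament bends one way,
    activate the 'right' family of dyneins; where it bends the other
    way, the 'left' family (cf. the red/blue coloring in Figure 7)."""
    kappa = discrete_curvature(x)
    return np.where(kappa > 0.0, "right", "left")

# A sinusoidal centerline: the active family alternates along the body,
# which is what drives the propagating bend in the real model.
s = np.linspace(0.0, 2.0 * np.pi, 9)
filament = np.column_stack([s, 0.1 * np.sin(s)])
print(active_dynein_family(filament))
```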
Combining computational fluid dynamics with biological modeling provides a powerful means for studying the internal force-generation mechanisms of a swimming organism. The integrative approach presented here lets us use computer simulations to examine theories of physiological processes such as dynein activation in a beating cilium and muscle dynamics in invertebrates. The success of these models depends on both the continued development of robust and accurate numerical methods and the interdisciplinary collaboration of computational scientists and biologists. We expect that this work will have an impact on understanding biomedical systems such as sperm motility in the reproductive tract and mucus-ciliary transport in both healthy and diseased respiratory tracts, as well as the complex coupling of electrophysiology, muscle mechanics, and fluid dynamics in aquatic animal locomotion.

Figure 6. Cilium. (a) A two-microtubule cilium nearing the end of its power stroke. Asterisks denote fluid markers, which we initially placed directly above the base of the cilium in a rectangular array. The displacement to the right is the result of the net fluid flow induced by the beating cilium. (b) A ciliary waveform showing a single filament at equally spaced time intervals.

Figure 7. A sequence of a two-microtubule sperm cell swimming upwards as a wave passes from base to tip. The red (blue) color indicates that the right (left) family of dyneins is activated at that position of the flagellum. Asterisks denote fluid markers.
References
1. C.S. Peskin, "The Immersed Boundary Method," Acta Numerica, vol. 11, 2002, pp. 479–517.
2. S. Childress, Mechanics of Swimming and Flying, Cambridge Univ. Press, 1981.
3. J.L. Lighthill, Mathematical Biofluiddynamics, SIAM Press, 1975.
4. R. Cortez, "The Method of Regularized Stokeslets," SIAM J. Scientific Computing, vol. 23, no. 4, 2001, pp. 1204–1225.
5. J. Gray and H.W. Lissmann, "The Locomotion of Nematodes," J. Experimental Biology, vol. 41, 1964, pp. 135–154.
6. C.E. Jordan, "Scale Effects in the Kinematics and Dynamics of Swimming Leeches," Canadian J. Zoology, vol. 76, 1998, pp. 1869–1877.
7. M. Murase, The Dynamics of Cellular Motility, John Wiley & Sons, 1992.
8. G.B. Witman, "Introduction to Cilia and Flagella," Ciliary and Flagellar Membranes, R.A. Bloodgood, ed., Plenum, 1990, pp. 1–30.
9. J. Gray and G. Hancock, "The Propulsion of Sea-Urchin Spermatozoa," J. Experimental Biology, vol. 32, 1955, pp. 802–814.
10. S. Gueron and K. Levit-Gurevich, "Computation of the Internal Forces in Cilia: Application to Ciliary Motion, the Effects of Viscosity, and Cilia Interactions," Biophysical J., vol. 74, 1998, pp. 1658–1676.
11. R. Dillon and L.J. Fauci, "An Integrative Model of Internal Axoneme Mechanics and External Fluid Dynamics in Ciliary Beating," J. Theoretical Biology, vol. 207, 2000, pp. 415–430.
12. R. Dillon, L.J. Fauci, and C. Omoto, "Mathematical Modeling of Axoneme Mechanics and Fluid Dynamics in Ciliary and Sperm Motility," Dynamics of Continuous, Discrete and Impulsive Systems, vol. 10, no. 5, 2003, pp. 745–757.
Ricardo Cortez is an associate professor of mathematics at Tulane University and associate director of the Center for Computational Science at Tulane and Xavier Universities. His research interests include numerical analysis, scientific computing, and mathematical biology. He has a PhD in applied mathematics from the University of California, Berkeley. Contact him at rcortez@tulane.edu.

Nathaniel Cowen is a PhD candidate in mathematics at the Courant Institute of Mathematical Sciences. His research interests include computational biofluid dynamics, which involves mathematical modeling of biological systems (including both swimming organisms and internal physiological flows), computational fluid dynamics, and parallel computing. He is a member of the Society for Industrial and Applied Mathematics. Contact him at cowen@cims.nyu.edu.

Robert Dillon is an associate professor of mathematics at Washington State University. His research interests include mathematical modeling of tumor growth, limb development, and flagellar and ciliary motility. He has a PhD in mathematics from the University of Utah. He is a member of the Society for Mathematical Biology, the Society for Industrial and Applied Mathematics, and the American Mathematical Society. Contact him at dillon@math.wsu.edu.

Lisa Fauci is a professor of mathematics at Tulane University and an associate director of the Center for Computational Science at Tulane and Xavier Universities. Her research interests include scientific computing and mathematical biology. She received her PhD in mathematics from the Courant Institute of Mathematical Sciences in 1986. She is a member of the Council of the Society for Industrial and Applied Mathematics. Contact her at fauci@tulane.edu.
TWO- AND THREE-DIMENSIONAL ASTEROID IMPACT SIMULATIONS

GALEN R. GISLER, ROBERT P. WEAVER, AND CHARLES L. MADER
Los Alamos National Laboratory
MICHAEL L. GITTINGS
Science Applications International

1521-9615/04/$20.00 © 2004 IEEE
Copublished by the IEEE CS and the AIP

FRONTIERS OF SIMULATION

Performing a series of simulations of asteroid impacts using the SAGE code, the authors attempt to estimate the effects of tsunamis and other important environmental events.

On a geological time scale, science must consider the impacts of asteroids and comets with Earth a relatively frequent occurrence, causing significant disturbances to biological communities and strongly perturbing evolution's course.¹ Most famous among known catastrophic impacts, of course, is the one that ended the Cretaceous period and the dominance of the dinosaurs: what researchers now believe was a shallow-water impact event at the Chicxulub site in Mexico's Yucatan Peninsula. (See the "Chicxulub Site Impact" sidebar for specifics on this event and its importance.)

In preparation for a definitive simulation of a large event like Chicxulub, we developed a program for modeling smaller impacts, beginning with impacts in the deep ocean, where the physics is somewhat simpler. Smaller impacts happen more frequently than dinosaur-killer events.²,³ Besides seafloor cratering, these events give rise to tsunamis⁴ that leave traces many kilometers inland from a coast facing the impact point.

In this article, we report on a series of simulations of asteroid impacts we performed using the SAGE code from Los Alamos National Laboratory (LANL) and Science Applications International Corporation (SAIC), developed under the US Department of Energy's Accelerated Strategic Computing Initiative (ASCI). With our ocean-impact simulations, we estimate impact-generated tsunami events as a function of the size and energy of the projectile, partly to aid further studies of potential threats from modest-sized Earth-crossing asteroids.

We also present a preliminary report on a simulation of the impact that created the Chicxulub crater in Mexico's Yucatan Peninsula. This is a rich test because of the stratigraphy's complexity at Chicxulub, involving rocks like calcite and anhydrite that are highly volatile at the pressures reached during impact. (The Chicxulub strata's volatility is what made this event so dangerous to the megafauna of the late Cretaceous.) To model this volatility's effects and to better understand what happened, we must use good equations of state and constitutive models for these materials. We report on progress in developing better constitutive models for the geological materials involved in this impact and in cratering processes in general.
SAGE Code
The SAGE hydrocode is a multimaterial adaptive-grid Eulerian code with a high-resolution Godunov scheme, originally developed by Michael Gittings for SAIC and LANL. It uses continuous adaptive mesh refinement (CAMR), meaning that the decision to refine the grid is made cell by cell and cycle by cycle, continuously throughout the problem run. Refinement occurs when gradients in physical properties (density, pressure, temperature, and material constitution) exceed user-defined limits, down to a minimum cell size the user specifies for each material in the problem. With the computing power concentrated on the regions of the problem that require higher resolution, we can simulate very large computational volumes and substantial differences in scale at low cost.
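A minimal sketch of this kind of cell-by-cell refinement decision (the threshold test and function signature below are illustrative assumptions, not SAGE's actual criterion):

```python
# CAMR-style refinement test: a cell is flagged for splitting when
# the relative jump in a physical property (density, pressure, ...)
# across its faces exceeds a user-set limit, unless the cell is
# already at the minimum size allowed for its material.
def needs_refinement(cell_value, neighbor_values, cell_size,
                     limit=0.1, min_size=1.0):
    if cell_size <= min_size:
        return False  # already at the finest level allowed
    for v in neighbor_values:
        jump = abs(v - cell_value) / max(abs(cell_value), 1e-30)
        if jump > limit:
            return True
    return False

# A steep density gradient triggers refinement...
print(needs_refinement(1.0, [1.5, 1.0], cell_size=4.0))  # True
# ...but not once the cell has reached the minimum size.
print(needs_refinement(1.0, [1.5, 1.0], cell_size=1.0))  # False
```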
We can run SAGE in several modes of geometry and dimensionality: explicitly 1D Cartesian and spherical, 2D Cartesian and cylindrical, and 3D Cartesian. The RAGE code is similar to SAGE but incorporates a separate module for implicit, gray, nonequilibrium radiation diffusion. Both codes are part of LANL's Crestone project, in turn part of the Department of Energy's ASCI program.
Because scientists commonly do modern supercomputing on machines or machine clusters containing many identical processors, the code's parallel implementation is supremely important. For portability and scalability, SAGE uses the widely available message-passing interface (MPI). It accomplishes load leveling using an adaptive cell pointer list, in which newly created daughter cells are placed immediately after the mother cells. Cells are redistributed among processors at every time step, while keeping mothers and daughters together. If there are M cells and N processors, this technique gives nearly M/N cells per processor. As neighbor-cell variables are necessary, MPI's gather and scatter routines copy those neighbor variables into local scratch memory.
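The pointer-list partitioning can be sketched as follows (illustrative only; the real code also keeps mother-daughter pairs from straddling chunk boundaries and handles neighbor communication):

```python
# Daughters sit immediately after their mother in the global cell
# list, so cutting the list into nearly equal contiguous chunks,
# one per processor, tends to keep families on the same processor
# while giving each processor close to M/N cells.
def partition_cells(cell_ids, n_procs):
    m = len(cell_ids)
    chunks, start = [], 0
    for p in range(n_procs):
        # spread the remainder over the first (m % n_procs) processors
        size = m // n_procs + (1 if p < m % n_procs else 0)
        chunks.append(cell_ids[start:start + size])
        start += size
    return chunks

cells = list(range(10))          # 10 cells, mothers followed by daughters
parts = partition_cells(cells, 3)
print([len(c) for c in parts])   # [4, 3, 3]: nearly M/N cells each
```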
In a multimaterial code like SAGE, every cell in the computational volume can contain all the materials defined in the problem, each with its own equation of state (and strength model, as appropriate). A number of equations of state are available, analytical and tabular. In our impact problems, we use the LANL Sesame tables for air, basalt, calcite, granite, iron, and garnet (as a rather stiff analog to mantle material); for water, we use a somewhat more sophisticated table (including a good treatment of the vapor dome) from SAIC. When we judged strength to be important, we used a simple elastic-plastic model with pressure hardening (with depth) for the crustal material: basalt for the water impacts, and calcite and granite for the K/T impact, that is, the impact at the Cretaceous–Tertiary (K/T) boundary.

Chicxulub Site Impact

Scientists now widely accept that the worldwide sequence of mass extinctions at the Cretaceous–Tertiary (K/T) boundary 65 million years ago was directly caused by the collision of an asteroid or comet with Earth.¹,² Evidence for this includes the large (200-km diameter) buried impact structure at Chicxulub in Mexico's Yucatan Peninsula, the worldwide iridium-enriched layer at the K/T boundary, and the tsunamic deposits well inland in North America, all dated to the same epoch as the extinction event.

Consensus is building that the K/T impactor was a bolide of diameter roughly 10 km; its impact was oblique (not vertical), either from the southeast at 30 degrees to the horizontal or from the southwest at 60 degrees; and its encounter with layers of water, anhydrite, gypsum, and calcium carbonate (all highly volatile materials at the pressures of impact) lofted many hundreds of cubic kilometers of these materials into the stratosphere. These materials then resided there for many years and produced a global climate deterioration that was fatal to many large-animal species on Earth. All these points are still under discussion, however, and researchers still need to address several scientific questions:

- How is the energy of impact (in the realm of hundreds of teratons TNT equivalent) partitioned among the vaporization of volatiles, the lofting of other materials, the generation of tsunamis, and the cratering of the substrate? How is this partition of energy reflected in the observables detectable after 65 million years?
- What is the projectile's fate?
- What is the distribution of proximal and distal ejecta around the impact site?
- How do these questions depend on the problem's unknown parameters, namely bolide mass, diameter, velocity, and impact angle?

References
1. J.V. Morgan et al., "Peak-Ring Formation in Large Impact Craters: Geophysical Constraints from Chicxulub," Earth and Planetary Science Letters, vol. 183, 2000, pp. 347–354.
2. E. Pierazzo, D.A. Kring, and H.J. Melosh, "Hydrocode Simulation of the Chicxulub Impact Event and the Production of Climatically Active Gases," J. Geophysical Research, vol. 103, 1998, pp. 28607–28625.
The boundary conditions we use in these calculations allow unhindered outflow of waves and material. We accomplish this by using freeze regions around the computational box's edges, which are updated normally during the hydrodynamic step, then quietly restored to their initial values of pressure, density, internal energy, and material properties before the next step. This technique has proven to be extremely effective at minimizing the deleterious effect of artificial reflections.
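In one dimension, the freeze-region idea reduces to a few lines; the array layout and the stand-in hydro update below are assumptions for illustration:

```python
import numpy as np

# Freeze regions: boundary strips are updated normally by the hydro
# step, then quietly restored to their initial state before the next
# step, so outgoing waves are absorbed rather than reflected.
def step_with_freeze(state, initial_state, hydro_step, n_freeze=2):
    state = hydro_step(state)                      # normal update everywhere
    state[:n_freeze] = initial_state[:n_freeze]    # restore left strip
    state[-n_freeze:] = initial_state[-n_freeze:]  # restore right strip
    return state

initial = np.ones(10)
fake_hydro = lambda s: s + 0.5          # stand-in for a real hydro update
state = step_with_freeze(initial.copy(), initial, fake_hydro)
print(state[0], state[5], state[-1])    # 1.0 1.5 1.0
```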
By far the best technique for dealing with unwanted boundary effects is to put the boundaries far away from the regions of interest, or to place the boundary beyond a material interface that truly exists in the problem and can be expected to interact with waves appropriately, that is, through reflection, transmission, and absorption. In the ocean-impact simulations, the most important physical boundary is of course the seafloor, which partly reflects and partly transmits the waves that strike it. The crust–mantle interface provides further impedance to waves that propagate toward the computational box's bottom boundary. For land (or continental shelf) impact simulations, the sediment–crust and crust–mantle interfaces play similar roles. With these material interfaces and our freeze-region boundary conditions, reflections from the computational boundaries are insignificant.
3D Water-Impact Simulations
We performed 3D simulations of a 1-km-diameter iron asteroid impacting the ocean at 45- and 30-degree angles at 20 km/s on the ASCI White machine at LLNL, using 1,200 processors for several weeks. We used up to 200 million computational cells, and the total computational time was 1,300,000 CPU hours. The computational volume was a rectangular box 200 km long in the direction of the asteroid trajectory, 100 km wide, and 60 km tall. We divided the vertical extent into 42 km of atmosphere, 5 km of ocean water, 7 km of basalt crust, and 6 km of mantle material. Using bilateral symmetry, we simulated a half-space only, the boundary of the half-space being the vertical plane containing the impact trajectory.
[Figure 1 panel annotations: asteroid initial position at 30-km altitude; frame times t = 0.5, 1.0, 1.5, 2.0, 3.0, 5.0, 10.0, 37.0, and 101.0 seconds; impact speed 20 km/s; layers labeled atmosphere 47 km, ocean water 5 km, basalt crust 7 km, mantle 5 km.]

Figure 1. Montage of 10 separate images from the 3D run of the impact of a 1-km-diameter iron bolide at an angle of 45 degrees with an ocean 5 km deep. These are density raster graphics in a 2D slice in the vertical plane containing the asteroid trajectory. Note the initial uprange–downrange asymmetry and its disappearance in time. The maximum transient crater diameter of 25 km is achieved at about 35 seconds. The maximum crown height reaches 30 km, and the jet seen forming in the last frame eventually approaches 60 km.
[Figure 2 overlay: SAGE run ast308, t = 5.00 seconds; |pressure gradient| isosurface at 0.01 mbar/cm; grid spacing 10 km; pressure color scale spanning 0.10 to 10,000 bar.]

Figure 2. Perspective plot of an isosurface of the pressure gradient at a time five seconds after the beginning of a 3D run of the impact of a 1-km-diameter iron bolide at an angle of 30 degrees with an ocean 5 km deep. The pressure-gradient isosurface is colored by the value of pressure, with a color palette chosen to highlight interfaces between mantle and basalt as well as basalt and water in the target. The isosurface shows both the atmospheric shock accompanying the incoming trajectory of the projectile (right) and the explosively driven downrange shock (left) that carries the horizontal component of the projectile's momentum. Also visible are seismic waves generated in the mantle and crust and the expanding transient crater in the water.
The asteroid starts at a point 30 km above the water's surface (see Figure 1). The atmosphere we used in this simulation is a standard exponential atmosphere with a scale height of 10 km, so the medium surrounding the bolide is tenuous (with a density of approximately 1.5 percent of sea-level density) when the calculation begins. During the 2.1 seconds of the bolide's atmospheric passage at approximately Mach 60, a strong shock develops (see Figure 2), heating the air to temperatures upwards of 1 eV (1.2 × 10⁴ K). Less than 1 percent of the bolide's kinetic energy (roughly 200 gigatons high-explosive equivalent yield) is dissipated in the atmospheric passage.
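The transit time and Mach number quoted above are mutually consistent, as a quick check shows (the 45-degree slant-path geometry and a sea-level sound speed of about 340 m/s are assumptions of this sketch):

```python
import math

# Atmospheric-passage check: a bolide released 30 km up on a
# 45-degree trajectory at 20 km/s covers the slant path in ~2.1 s,
# and 20 km/s corresponds to roughly Mach 60 in air.
altitude_km = 30.0
angle_deg = 45.0
speed_km_s = 20.0

slant_path_km = altitude_km / math.sin(math.radians(angle_deg))  # ~42.4 km
transit_s = slant_path_km / speed_km_s
mach = speed_km_s * 1000.0 / 340.0   # vs ~340 m/s sea-level sound speed

print(round(transit_s, 1), round(mach))  # 2.1 59
```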
The water is much more effective at slowing the asteroid; essentially all its kinetic energy is absorbed by the ocean and seafloor within 0.7 seconds. The water immediately surrounding the trajectory vaporizes, and the rapid expansion of the resulting vapor cloud excavates a cavity in the water that eventually expands to a diameter of 25 km. This initial cavity is asymmetric because of the asteroid's inclined trajectory, and the splash, or crown, is markedly higher on the downrange side (see Figures 1 and 3). The crown's maximum height is nearly 30 km at 70 seconds after impact. The collapse of the crown's bulk makes a rim wave, or precursor tsunami, that propagates outward, somewhat higher on the downrange side (see Figures 1 and 4). The crown's higher portion breaks up into fragments that fall back into the water, giving this precursor tsunami an uneven and asymmetric profile.
The rapid conversion of the asteroid's kinetic energy into thermal energy produces a rapid expansion in the volume occupied by the newly vaporized water and bolide material. This is much like an explosion and acts to symmetrize the subsequent development. Shocks propagate outward from the cavity in the water, in the basalt crust, and in the mantle beneath (Figure 2). Subsequent shocks are generated as the cavity refills and by cavitation events that occur in the turbulence that accompanies the development of the large-amplitude waves. The shocks are partly reflected and partly transmitted by the material interfaces, and the interactions of these shocks with each other and with the waves make the dynamics complicated.

The hot vapor from the initial cavity expands into the atmosphere, mainly in the downrange direction because of the horizontal component of the asteroid's momentum (Figure 2). When the vapor's pressure in the cavity has diminished sufficiently, at about 35 seconds after the impact, water begins to fill the cavity from the bottom, driven by gravity. This filling has a high degree of symmetry because of the uniform gravity responsible for the water pressure. An asymmetric fill could result from nonuniform seafloor topography, but we do not consider that here. The filling water converges on the cavity's center, and the implosion produces another series of shock waves and a jet that rises vertically in the atmosphere to a height in excess of 20 km at 150 seconds after impact. The collapse of this central vertical jet produces the principal tsunami wave (see Figure 5). This wave has an initial height of 1.5 km and a propagation velocity of 170 meters per second (m/s).

[Figure 3 overlay: SAGE run ast304, LANL, t = 30.00 seconds; density isosurfaces at 0.075, 0.50, and 1.50 g/cm³.]

Figure 3. Perspective plot of three isosurfaces of the density from the 3D run of a 45-degree impact of a 1-km-diameter iron bolide into an ocean 5 km deep, 30 seconds after the beginning of the calculation (27.5 seconds after impact). We chose the isosurfaces to show the basalt underlayment, the ocean water's bulk, and the cells containing water spray (mixed air and water). The crown splash's asymmetry is evident, as is its instability to fragmentation. Cratering in the basalt is seen, to a depth of approximately 1 km. The transient cavity's diameter is at this time approximately 25 km.

[Figure 4 overlay: SAGE run ast304, LANL, t = 115.00 seconds; density isosurfaces at 0.075, 0.50, and 1.50 g/cm³.]

Figure 4. Perspective plot of three isosurfaces of the density from the 3D run of a 45-degree impact of a 1-km-diameter iron bolide into an ocean 5 km deep, 115 seconds after impact. The transient cavity has collapsed under the surrounding water's pressure to form a central jet, and the crown splash has collapsed almost completely, pock-marking the water's surface and generating the first precursor wave.
We follow this wave's evolution in three dimensions for 400 seconds after impact and find that the inclined impact eventually produces a tsunami that is nearly circularly symmetric at late times (see Figure 6). The tsunami declines to a height (defined as a positive vertical excursion above the initial water surface) of 100 meters at a distance of 40 km from the initial impact, and its propagation speed continues at roughly 170 m/s.
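For comparison, linear shallow-water theory (an assumption of this sketch, and one that strictly applies only to waves much smaller than the depth, which is not the case near the source) gives a long-wave speed of the same order:

```python
import math

# Linear long-wave (shallow-water) speed c = sqrt(g * h) for the
# 5-km-deep ocean used in the simulation.
g = 9.81          # m/s^2
depth = 5000.0    # m

c = math.sqrt(g * depth)
print(round(c))   # ~221 m/s, the same order as the simulated ~170 m/s
```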
2D Water-Impact Simulations
Because of the high degree of symmetry achieved late in the 3D calculations, we can learn much about the physics of impact events by performing 2D simulations. These are, of course, much cheaper than full 3D calculations, so we can undertake parameter studies to isolate the phenomena's dependence on the impactor's properties. We have therefore performed a series of supporting calculations in two dimensions (cylindrical symmetry) for asteroids impacting the ocean vertically at 20 km/s, using the ASCI Blue Mountain machines at LANL. We took the asteroid's composition to be either dunite (3.32 grams per cubic centimeter [g/cc]), as a mockup for typical stony asteroids, or iron (7.81 g/cc), as a mockup for nickel-iron asteroids. For these projectiles, instead of the Sesame tables, we used the simpler analytical Mie-Grüneisen equation of state to avoid time-step difficulties during the atmospheric passage. The strength model used for the crust and asteroid is the same in all cases, namely an elastic-plastic model with shear moduli and yield stress similar to experimental values for aluminum. For the known increase of strength with depth, we use a linear pressure-hardening relationship.
We designed these simulations to follow an asteroid's passage through the atmosphere, its impact with the ocean, the cavity generation and subsequent recollapse, and the generation of tsunamis. The parameter study included six different asteroid masses. We used stony and iron bodies of diameters 250 meters, 500 meters, and 1,000 meters, all at speeds of 20 km/s. The impacts' kinetic energies ranged from 1 gigaton to 200 gigatons (high-explosive equivalent yield).
Table 1 gives a tabular summary of our parameter study and lists the bolides' input characteristics (composition, diameter, density, mass, velocity, and kinetic energy) and the impacts' measured characteristics (maximum depth and di-
[Figure 5 image legend: SAGE run ast304, LANL, t = 150.00 s; density isosurfaces at ρ = 0.075, 0.50, and 1.50 g/cm³.]
Figure 5. Similar to Figure 4, but 150 seconds after impact. The central
jet has now collapsed, and both the pock-marked precursor wave and
the somewhat smoother principal wave are evident. The latter wave is
~1.5 km in initial amplitude, and moves with a speed of ~175 m/s.
[Figure 6 image residue: panels (a) and (b), overhead maps with both axes in cm spanning roughly -6e+06 to 6e+06; color scale gives wave height (cm) from 0 to 10,000, with the water surface traced at ρ = 0.9 g/cm³.]
Figure 6. Overhead plots at a late time showing wave height as a function of distance along the trajectory (horizontal) and perpendicular to the trajectory (units of centimeters). The asteroid entered from the right. At 270 seconds, (a) the irregular precursor wave has declined to a few meters in height and strongly bears the asymmetry of the crown splash, while the much more regular principal wave, at an amplitude significantly greater than 100 meters, is much more symmetrical. The wavelength, measured as the crest-to-crest distance from precursor to principal wave, is 34 km. At 385 seconds, (b) the precursor wave has left the box, and the principal wave has a mild quadrupole asymmetry with the maximum wave height roughly 100 meters, at a distance of 40 km from the impact point.
ameter of the transient cavity, quantity of water
displaced, time of maximum cavity, maximum jet
and jet rebound, tsunami wavelength, and
tsunami velocity).
The amount of water displaced during cavity formation is found to scale nearly linearly with the asteroid's kinetic energy, as Figure 7 illustrates. A fraction of this displaced mass (ranging from 5 percent for the smaller impacts to 7 percent for the largest ones) is vaporized during the encounter's explosive phase, while the rest is pushed aside by the vapor's pressure to form the transient cavity's crown and rim.
Figure 7 indicates that the linear scaling with kinetic energy differs from the scaling predicted by Keith Holsapple.5 Holsapple, using dimensional analysis informed by experimental results over many decades in scaled parameters, found that the ratio of the displaced mass to the projectile mass scales as the Froude number, u^2/(ga), to the two-thirds power, where u is the projectile velocity, g is the acceleration due to gravity, and a is the projectile radius. The difference between the Holsapple scaling and our results is most likely due to the effect of vaporization, which the dimensional analysis does not include. We also note that our two projectile compositions differ from each other by a factor greater than two in density, and this is also omitted in the dimensional analysis. We have begun a new series of 27 runs to investigate the scaling issue further. These runs are similar to the six runs we report here, but they also include bolides of ice and velocities of 10 and 15 km/s.
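The two scalings can be compared directly against Table 1's columns. The following sketch (ours, not the authors' analysis script) fits a log-log slope to the displaced-mass versus kinetic-energy data and contrasts it with the exponent implied by Holsapple's Froude-number scaling:

```python
import math

# Observed displaced mass (g) versus impact kinetic energy (GT), from Table 1
kinetic_gt = [1.3, 3.0, 10.0, 24.0, 83.0, 195.0]
displaced_g = [4.41e16, 9.13e16, 3.53e17, 7.11e17, 1.79e18, 4.84e18]

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) against log(x)."""
    lx = [math.log(v) for v in xs]
    ly = [math.log(v) for v in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

slope = loglog_slope(kinetic_gt, displaced_g)

# Holsapple: M_disp / m_proj ~ Fr^(2/3), with Fr = u^2 / (g a).
# At fixed u and g, m_proj ~ a^3 and Fr^(2/3) ~ a^(-2/3), so
# M_disp ~ a^(7/3) ~ E^(7/9): a shallower index than the near-linear
# trend (index close to 1) in the simulation data.
holsapple_index = 7.0 / 9.0
print(f"fitted index {slope:.2f} vs Holsapple {holsapple_index:.2f}")
```

The fitted index comes out just above 0.9, visibly steeper than the Froude prediction of about 0.78, consistent with the vaporization argument in the text.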
We used Lagrangian tracer particles to measure the amplitude, velocity, and wavelength of the waves produced by these impacts. These measures are somewhat uncertain because the wave trains are highly complex and the motions are turbulent. There are multiple shock reflections and refractions at the water-crust and water-air interfaces, as well as cavitation events. For the larger impacts, the tracer particles execute highly complex motions, while for the smaller impacts, the motions are superpositions of approximately closed elliptical orbits. In all cases, we measure wave amplitudes by taking half the difference of adjacent maxima and minima in the vertical excursions executed by the tracer particles, and we measure wave speeds by plotting the radial positions of these maxima and minima as a function of time.
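The extremum bookkeeping behind that amplitude definition is easy to sketch (a toy reconstruction of the procedure, not the authors' diagnostic code), using a synthetic tracer record of known amplitude:

```python
import math

def local_extrema(samples):
    """Indices of strict local maxima and minima of a 1D record."""
    out = []
    for i in range(1, len(samples) - 1):
        if samples[i - 1] < samples[i] > samples[i + 1]:
            out.append(i)          # local maximum
        elif samples[i - 1] > samples[i] < samples[i + 1]:
            out.append(i)          # local minimum
    return out

def wave_amplitudes(samples):
    """Half the difference between each pair of adjacent extrema."""
    idx = local_extrema(samples)
    return [abs(samples[a] - samples[b]) / 2.0
            for a, b in zip(idx, idx[1:])]

# Synthetic tracer record: vertical excursion with a 3-m amplitude
z = [3.0 * math.sin(0.1 * k) for k in range(200)]
amps = wave_amplitudes(z)
```

On this clean record every recovered amplitude is within sampling error of 3 m; on the real tracer records the same bookkeeping inherits the turbulence-induced uncertainty the text describes.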
With these warnings, we find the tsunami amplitude to evolve in a complex manner, eventually decaying faster than 1/r, where r is the distance of propagation from the impact point (see Figure
Table 1. Summary of parameter-study runs.
Asteroid material Dunite Iron Dunite Iron Dunite Iron
Asteroid diameter 250 m 250 m 500 m 500 m 1,000 m 1,000 m
Asteroid density 3.32 g/cc 7.81 g/cc 3.32 g/cc 7.81 g/cc 3.32 g/cc 7.81 g/cc
Asteroid mass 2.72e13 g 6.39e13 g 2.17e14 g 5.11e14 g 1.74e15 g 4.09e15 g
Asteroid velocity 20 km/s 20 km/s 20 km/s 20 km/s 20 km/s 20 km/s
Kinetic energy 1.3 GT 3 GT 10 GT 24 GT 83 GT 195 GT
Maximum cavity diameter 4.4 km 5.2 km 10.0 km 12.6 km 18.6 km 25.2 km
Maximum cavity depth 2.9 km 4.3 km 4.5 km 5.7 km 6.6 km 9.7 km
Observed displacement 4.41e16 g 9.13e16 g 3.53e17 g 7.11e17 g 1.79e18 g 4.84e18 g
Time of maximum cavity 13.5 s 16.0 s 22.5 s 28.0 s 28.5 s 33.0 s
Time of maximum jet 54.5 s 65.0 s 96.5 s 111 s 128.5 s 142 s
Time of rebound 100.5 s 118.5 s 137.5 s 162 s 187.5 s 218.5 s
Tsunami wavelength 9 km 12 km 17 km 20 km 23 km 27 km
Tsunami velocity 120 m/s 140 m/s 150 m/s 160 m/s 170 m/s 175 m/s
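As a consistency check on Table 1, the mass and kinetic-energy rows follow directly from the diameter, density, and velocity rows (a sketch, assuming spherical bolides and the conversion 1 gigaton TNT = 4.184e25 erg):

```python
import math

ERG_PER_GT = 4.184e25   # 1 gigaton TNT equivalent, in ergs
VELOCITY = 2.0e6        # 20 km/s, in cm/s

def mass_g(diameter_cm, density_g_cc):
    """Mass of a spherical bolide, in grams."""
    radius = diameter_cm / 2.0
    return density_g_cc * (4.0 / 3.0) * math.pi * radius**3

def kinetic_gt(diameter_cm, density_g_cc):
    """Impact kinetic energy, in gigatons of TNT equivalent."""
    return 0.5 * mass_g(diameter_cm, density_g_cc) * VELOCITY**2 / ERG_PER_GT

# The six Table 1 bolides: dunite at 3.32 g/cc, iron at 7.81 g/cc
for rho in (3.32, 7.81):
    for d_m in (250, 500, 1000):
        d_cm = d_m * 100.0
        print(f"{d_m:5d} m, {rho} g/cc: m = {mass_g(d_cm, rho):.2e} g, "
              f"KE = {kinetic_gt(d_cm, rho):.1f} GT")
```

The printed values reproduce Table 1's entries to rounding, from 2.72e13 g and 1.3 GT for the 250-m dunite bolide up to 4.09e15 g and 195 GT for the 1-km iron bolide.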
[Figure 7 image residue: log-log axes, asteroid kinetic energy (ergs) from 1.0E+25 to 1.0E+28 versus mass of water displaced (grams) from 1.0E+16 to 1.0E+19.]
Figure 7. The mass of water displaced in the initial cavity formation scales with the asteroid's kinetic energy. The squares are the results from the parameter-study simulations, as Table 1 tabulates, and the solid line illustrates direct proportionality. About 5 to 7 percent of this mass is vaporized in the initial encounter. The circles are predictions of the crater scaling formula from Keith Holsapple.5
8). We found the steepest declines for the smaller projectiles (as expected from linear theory4), and we have greater confidence in the amplitudes measured for these than in the amplitudes measured for the larger projectiles because of the more complex motions executed by the tracer particles in the large-projectile simulations. Geometrical effects account for a pure 1/r decline, and the remainder of the decline is due partly to wave dispersion and partly to dissipation via turbulence. Realistic seafloor topography will also influence the waves' development, of course. We also remark that our first measured amplitude points are well outside the transient cavity. Tracers from within the cavity execute much larger excursions (indeed, some of them join the jet), and we cannot measure reliable amplitudes from them.
We expect that the tsunami waves will eventually evolve into classic shallow-water waves6 because the wavelengths are long compared to the ocean depth. However, the initial wave train's complexity and the wave-breaking associated with the interaction of shocks reflected from the seafloor do not permit the simplifications associated with shallow-water theory. Much previous work on impact-generated tsunamis7 has used shallow-water theory, which gives a particularly simple form for the wave velocity: namely, v = (gD)^(1/2), where g is the acceleration due to gravity and D is the water depth. For an ocean 5 km deep, the shallow-water velocity is 221 m/s. In Figure 9, we show the wave-crest positions as a function of time for the simulations in our parameter study, along with constant-velocity lines at 150 and 221 m/s. From this, we see that the wave velocities are substantially lower than the shallow-water limit, although there is some indication of an approach to that limit at late times. This asymptotic approach is only observed for the largest impactors because the waves from the smaller impactors die off too quickly for reliable measurement of the far-field limit in our simulations.
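The quoted shallow-water limit is simply v = (gD)^(1/2); a minimal check (assuming g = 9.81 m/s^2):

```python
import math

def shallow_water_speed(depth_m, g=9.81):
    """Long-wavelength (shallow-water) wave speed, v = sqrt(g * D)."""
    return math.sqrt(g * depth_m)

v_limit = shallow_water_speed(5000.0)        # 5-km-deep ocean
print(f"shallow-water limit: {v_limit:.0f} m/s")  # -> 221 m/s

# Every measured crest speed in Table 1 sits below this limit
measured_m_s = [120, 140, 150, 160, 170, 175]
assert all(m < v_limit for m in measured_m_s)
```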
To illustrate the complications we encountered
in our large-projectile runs, we show in Figure 10
a close-up snapshot of density and pressure from
the wave train produced by a 1-km iron projec-
tile. This snapshot is taken 300 seconds after im-
pact and about 35 km from the impact point. The
wave moves to the right, and the impact point is
to the left. The vertical excursion of the bulk wa-
ter above the original surface is about 1 km at this
point. The dense spray above the wave (up to 1
percent water density) extends 3.5 km up into the
atmosphere, while the lighter spray goes up more
than twice as far. Apparently, the surrender of
wave energy to the atmosphere is a significant
loss mechanism. The bottom frame shows pres-
sure with a banded palette to highlight differ-
ences. Besides the turbulent pressure field in the
atmosphere, two significant features are a decay-
ing cavitation event just aft of the main
peak/trough system, and a shock propagating
backwards from that event and scraping the wa-
[Figure 8 plot residue: log-log axes, distance from impact (km) from 10 to 1,000 versus amplitude (m) from 1 to 10,000; legend pairs tracer measurements (tr) and least-squares fits (lsq) for the Dn and Fe runs at 250 m, 500 m, and 1 km, plus a 1/r reference line.]
Figure 8. The tsunami amplitude declines with propagation distance faster than 1/r. The legend identifies the points associated with individual runs, where the notation signifies the asteroid's composition (Dn for dunite and Fe for iron) and diameter in meters. We also show lines indicating least-squares power-law fits, with the power-law indices varying from -2.25 to -1.3.
[Figure 9 plot residue: time (sec) on the horizontal axis (0 to 600) versus wave-crest position (km) on the vertical axis (0 to 900); constant-velocity reference lines at 150 m/s and at 221 m/s (shallow-water theory); legend lists the six runs (Dn and Fe at 250 m, 500 m, and 1 km).]
Figure 9. We plot the tsunami wave-crest positions as a function of time here for the six runs of the parameter study. The notation in the legend is similar to Figure 8, with the solid lines at constant velocity to illustrate that these waves are substantially slower than the shallow-water theory's prediction. There is an indication, however, that the waves may be accelerating toward the shallow-water limit at late times.
ter-crust interface. A new series of runs we are
planning incorporates new diagnostics to better
interpret the energy flows.
Preliminary Study of a Major Terrestrial Impact
When the projectile diameter is large compared to the depth of water in the target, the deceleration is accomplished almost entirely by the rock beneath. We therefore need to deal directly with the issues of the instantaneous fluidization of target rock and its subsequent evolution through regimes of visco-plastic flow through freeze-out. Because this is a rather new regime for our code, we decided to begin by examining a well-studied event.8 In extending our impact study to larger diameters, we accordingly chose to focus on the shallow-water impact event at the Chicxulub site in Mexico's Yucatán Peninsula, and we anticipate that our early effort on this will not do very well with the final, strength-dependent, phases of the crater evolution.
Scientists discovered the Chicxulub impact structure with Petróleos Mexicanos (Pemex), the Mexican national oil company.9,10 This discovery supported the suggestion that an impact was responsible for the mass extinction at the end of the Cretaceous period, as Luis and Walter Alvarez and their colleagues proposed11 on the basis of the anomaly in abundances of iridium and other platinum-group elements in the boundary bedding plane.
Paleogeographic data suggest that the crater site, which presently straddles the Yucatán coastline, was submerged at the end of the Cretaceous on the continental shelf. The substrate consisted of fossilized coral reefs over continental crust. In our simulation, we therefore constructed a multilayered target consisting of 300 meters of water, 3 km of calcite, 30 km of granite, and 18 km of mantle material. It is likely that the Chicxulub target contained multiple layers of anhydrites and other evaporites as well as calcite, but for simplicity (and because of access to good equations of state), we simplified the structure to calcite above granite. Above this target, we included a standard atmosphere up to 106 km altitude and started the asteroid's plunge at 45 km altitude. We performed 3D simulations with impact angles of 30, 45, and 60 degrees to the horizontal as well as a 2D vertical-impact simulation. In the horizontal plane, our computational domain extended 256 km by 128 km because we elected to simulate a half-space.
We ran these simulations on the new ASCI Q
computer at Los Alamos, a cluster of ES45-alpha
boxes from HP/Compaq. Generally, we ran on
1,024 processors at a time and used about 1 million
CPU hours over the course of these runs. Our
adaptive mesh included up to a third of a billion
computational cells.
The simulation illustrates three prominent features for a 45-degree impact. First, the impact produced a "rooster tail" that carries much of the horizontal component of the asteroid's momentum in
Figure 10. A snapshot in density (top) and pressure (bottom) for a small part of the simulation of the 1-km-diameter iron projectile vertical impact. This snapshot is taken 300 seconds after impact and illustrates the principal wave train 35 km out from the impact point, which is to the left. This frame's horizontal dimension is 28 km, and the vertical dimension is 15 km. The wave is traveling to the right. In the top frame, the height of the principal wave above the original water surface is 1.2 km, the maximum extent of the dense spray (about 1 percent water density) is 3.5 km above the original water surface, and the light spray extends almost to the tropopause at 10 km altitude. The bottom frame uses a banded palette to highlight pressure differences. A cavitation event is seen just aft of the principal wave, and a decaying shock produced by this event is seen propagating backward (toward the impact point to the left) and scraping the ocean bottom.
the downrange direction (see Figure 11). This ma-
terial, consisting of vaporized fragments of the pro-
jectile mixed with the target, is extremely hot, and
will ignite vegetation many hundreds of kilometers
away from the impact site. Second is the highly tur-
bulent and energetic plume of ejecta directed pre-
dominantly upward (see Figure 12). Ballistic trajec-
tories carry some of this material back to Earth in
the conical debris curtain that gradually moves away
from the crater lip and deposits a blanket of ejecta
around the forming crater (see Figure 13). Some
material is projected into orbits that have ground
termini far outside the computational volume, even
extending to the antipodal point and beyond.
We found the blanket of ejecta to be strongly asymmetrical around the crater, with the uprange portion much thinner than the rest. This owes partly to the coupling of the horizontal component of the asteroid's momentum to the debris, and partly to the ionized and shocked atmosphere in the asteroid's wake producing a zone of avoidance for the entrained debris. The ejecta blanket's lobate structure seen in Figure 13 is a second-order effect, due to the break-up of the unstable flow in the debris curtain. The hot structure seen within the crater in Figure 13 is the incipient formation of a central peak.
We are conducting further analysis of the simulation results from these runs, with the aim of determining material and energy partitions among the resultant features as functions of the impact's parameters.
We are continuing the study we outline here, with an aim toward including better physics for the later stages of the crater's development. For this, it is important we include a proper characterization of the material strength of the geological strata in which the impact occurs and the dependence of those strength properties on depth, temperature, strain, and strain rate. The data for these studies are still not readily available for many of the geological materials of interest, and some controversy exists over the best way to implement strength breakdown in hydrocodes. Our intention is to use a few choices for strength degradation (for example, acoustic fluidization and damage mechanics) in our code and include visco-elastic models as well as the elastic-plastic models we have already used. Applying our code to other geologic scenarios that involve rock mobilization (for example, volcanic eruptions and landslides) will guide us in appropriately implementing and validating these models.
Figure 12. Forty-two seconds after impact, the rooster tail has left the simulation volume and gone far downrange. The dissipation of the asteroid's kinetic energy, some 300 teratons TNT equivalent, produces a stupendous explosion that melts, vaporizes, and ejects a substantial volume of calcite, granite, and water. The dominant feature in this picture is the curtain of the debris that has been ejected and is now falling back to Earth. The ejecta follows ballistic trajectories, with its leading edge forming a conical surface that moves outward from the crater as the debris falls to form the ejecta blanket. The turbulent material interior to the debris curtain is still being accelerated upward by the explosion produced during the crater's excavation.
[Figure 11 color scale: temperature (eV) from 0.01 to 0.50.]
Figure 11. Seven seconds after a 10-km-diameter granite asteroid strikes Earth, billions of tons of hot material are lofted into the atmosphere. This material consists of asteroid fragments, mixed with vaporized water, calcite, and granite from Earth. Much of this debris is directed downrange (to the right and back of this image), carrying the horizontal momentum of the asteroid in this 45-degree impact. This image is a perspective rendering of a density isosurface colored by material temperature (0.5 eV = 5,800 K). We chose the isosurface, at density 0.005 g/cm^3, to show everything denser than air. This picture's scale is set by the back boundary, which is 256 km long. The maximum height of the rooster tail at this time is 50 km.
Acknowledgments
We thank Bob Greene for assistance with the
visualization of the 3D runs and Lori Pritchett for help
with executing the simulations. We had helpful
conversations with Eileen Ryan, Jay Melosh, Betty
Pierazzo, Frank Kyte, Erik Asphaug, Steve Ward, and Tom
Ahrens on the impact problem in general. We also thank
the anonymous reviewers for comments that helped
improve this article.
References
1. E. Pierazzo and H.J. Melosh, "Understanding Oblique Impacts from Experiments, Observations, and Modeling," Ann. Rev. Earth and Planetary Sciences, vol. 28, 2000, pp. 141-167.
2. F.T. Kyte, "Iridium Concentrations and Abundances of Meteoritic Ejecta from the Eltanin Impact in Sediment Cores from Polarstern Expedition ANT XII/4," Deep Sea Research II, vol. 49, 2002, pp. 1049-1061.
3. S.A. Stewart and P.J. Allen, "A 20-km-Diameter Multi-Ringed Impact Structure in the North Sea," Nature, vol. 418, 2002, pp. 520-523.
4. S.N. Ward and E. Asphaug, "Impact Tsunami Eltanin," Deep Sea Research II, vol. 49, 2002, pp. 1073-1079.
5. K.A. Holsapple, "The Scaling of Impact Processes in Planetary Sciences," Ann. Rev. Earth and Planetary Sciences, vol. 21, 1993, pp. 333-373.
6. C.L. Mader, Numerical Modeling of Water Waves, Univ. of Calif. Press, 1988.
7. D.A. Crawford and C.L. Mader, "Modeling Asteroid Impact and Tsunami," Science of Tsunami Hazards, vol. 16, 1998, pp. 21-30.
8. E. Pierazzo, D.A. Kring, and H.J. Melosh, "Hydrocode Simulation of the Chicxulub Impact Event and the Production of Climatically Active Gases," J. Geophysical Research, vol. 103, 1998, pp. 28607-28625.
9. A.R. Hildebrand et al., "Chicxulub Crater: A Possible Cretaceous/Tertiary Boundary Impact Crater on the Yucatán Peninsula, Mexico," Geology, vol. 19, 1991, pp. 867-871.
10. V.L. Sharpton et al., "New Links Between the Chicxulub Impact Structure and the Cretaceous/Tertiary Boundary," Nature, vol. 359, 1992, pp. 819-821.
11. L. Alvarez et al., "Extraterrestrial Cause for the Cretaceous/Tertiary Extinction," Science, vol. 208, 1980, pp. 1095-1108.
Galen R. Gisler is an astrophysicist at the Los Alamos
National Laboratory. He has many years of experience
in modeling and understanding complex phenomena
in Earth, space, and astrophysical contexts. His research
interests include energetic phenomena in geosciences
and using the SAGE and RAGE codes of the Los Alamos
Crestone Project. He has a BS in physics and astronomy
from Yale University and a PhD in astrophysics from
Cambridge University. Contact him at grg@lanl.gov.
Robert P. Weaver is an astrophysicist at Los Alamos and leader of the Crestone Project, part of the Department of Energy's Advanced Simulation and Computing Initiative. This project develops and uses sophisticated 1D, 2D, and 3D radiation-hydrodynamics codes for challenging problems of interest to the DOE. He has a BS in astrophysics and mathematics from Colgate University, an MS in physics from the University of Colorado, and a PhD in astrophysics from the University of Colorado.
Michael L. Gittings is an assistant vice president and
chief scientist at Science Applications International. He
works full time on a multiyear contract with the Los
Alamos National Laboratory to support and improve
the SAGE and RAGE codes that he began developing
in 1990. He has a BS in mechanical engineering and
mathematics from New Mexico State University.
Charles L. Mader is a fellow emeritus of the Los Alamos National Laboratory, president of Mader Consulting, fellow of the American Institute of Chemists, and editor of the Science of Tsunami Hazards journal. He also has authored Numerical Modeling of Water Waves, Second Edition (CRC Press, 2004) and Numerical Modeling of Explosives and Propellants (CRC Press, 1998). He has a BS and MS in chemistry from Oklahoma State University and a PhD in chemistry from Pacific Western University.
Figure 13. Two minutes after impact, the debris curtain has separated from the rim of the still-forming crater as material in the curtain falls to Earth. The debris from the curtain is deposited in a blanket of ejecta that is asymmetric around the crater, with more in the downrange than in the uprange direction. The distribution of material in the ejecta blanket can be used as a diagnostic to determine the direction and angle of the asteroid's impact.
In this article, we give more details about Gröbner bases and describe their main application (algebraic system solving) along with some surprising derived ones: inclusion of varieties, automatic theorem-proving in geometry, expert systems, and railway interlocking systems.
Reduced Gröbner Bases
In the previous article, we introduced Gröbner bases of ideals (an ideal being the set of algebraic linear combinations of a given set of polynomials) as a tool for algebraic system solving (that is, general polynomial system solving). We solved such systems using simple commands in a computer algebra system such as Maple. Let's review an example from the previous article.

Example 1. The solution set of the system

  x^2 - y^2 - z = 0   (hyperbolic paraboloid)
  x^2 + y^2 - z = 0   (elliptic paraboloid)

are the points in the intersection curve of both surfaces. We emulate Maple's notation by preceding inputs with a ">", closing them with a ";", and including outputs centered in the following line:

> gbasis( {x^2 - y^2 - z, x^2 + y^2 - z} , plex(y,x,z) );

[x^2 - z, y^2]

Consequently, we also can express this system's solution set as the intersection of the parabolic cylinder x^2 - z = 0 with the vertical plane y = 0.
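Readers without Maple can reproduce the session with SymPy's groebner routine (a sketch; SymPy is our stand-in for the article's Maple commands, with plex(y,x,z) mapped to lex order over the generators listed as y, x, z):

```python
from sympy import groebner, symbols

# plex(y, x, z) in Maple corresponds to lex order with y > x > z
y, x, z = symbols('y x z')
G = groebner([x**2 - y**2 - z, x**2 + y**2 - z], y, x, z, order='lex')

# The reduced basis matches the Maple output: x^2 - z and y^2
print(G.exprs)
```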
To really delve into algebraic system solving, though, we first must explain term orders (such as plex) and reduced Gröbner bases.
Term Orders
The polynomial ring A[x_1, ..., x_n] is the set of polynomials in the variables x_1, ..., x_n with coefficients in A. A usually is a field (known as the base field), and in our examples, it is the set of real numbers (ℝ). However, this is not necessarily always the case.
A product of variables, such as x_1 x_3^3 x_4, is known as a power product or monomial. The product of an element in the base field with a power product, such as 7 x_1 x_3^3 x_4, is known as a polynomial term.
To be able to say when a polynomial is simpler (meaning that it is smaller) than other polynomials in the chosen ordering, we first must order polynomial terms. But before ordering terms, we must fix a variable order, which is similar to a letter order. For instance, our dictionaries are ordered lexicographically according to letter order: a > b > c > ... > z.
Two possible term orders are lexicographical (also denoted plex) and total degree (also denoted tdeg). In the lexicographical order, with x > y > z as an example, x^2 y > x y^3 because the word "xxy" would appear before the word "xyyy" in a dictionary. In the total degree order, with x > y > z as an example, x^2 y < x y^3 because the degrees of these monomials are 2 + 1 = 3 and 1 + 3 = 4, respectively. Ties are usually broken in tdeg by using lexicographic order.
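Both orders are easy to prototype on exponent vectors (a sketch with hypothetical helper names, over the fixed variable order x > y > z):

```python
def lex_key(mono):
    """Lexicographic order: compare exponent vectors position by position."""
    return mono

def tdeg_key(mono):
    """Total-degree order: compare total degree first, break ties with lex."""
    return (sum(mono), mono)

# Exponent tuples over (x, y, z): x^2*y -> (2, 1, 0), x*y^3 -> (1, 3, 0)
x2y, xy3 = (2, 1, 0), (1, 3, 0)

assert lex_key(x2y) > lex_key(xy3)    # x^2*y > x*y^3 under plex
assert tdeg_key(x2y) < tdeg_key(xy3)  # x^2*y < x*y^3 under tdeg (3 < 4)
```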
So how can we order polynomials? Let's use lc(p) to denote the leading coefficient of polynomial p (that is, the coefficient of the greatest term for the chosen term and variable orders). We can say that polynomial p_1 is simpler than p_2 if lc(p_1) < lc(p_2). If they have the same value, we can recursively compare p_1 - lc(p_1) and p_2 - lc(p_2) instead.
When we use Maple's gbasis command, we must specify a variable ordering (such as y > x > z) and a term order (like tdeg or plex), as Example 1 shows. Which term order is best depends on the particular case; it's not always easy to decide.
Main Property of Reduced Gröbner Bases
Just as in the theory of vector spaces, in which bases that contain perpendicular vectors of unit length are particu-
56 Copublished by the IEEE CS and the AIP 1521-9615/04/$20.00 2004 IEEE COMPUTING IN SCIENCE & ENGINEERING
SOME APPLICATIONS OF GRÖBNER BASES
By Eugenio Roanes-Lozano, Eugenio Roanes-Macías, and Luis M. Laita

In the March/April issue of CiSE, we discussed the geometry of linear and algebraic systems. We also defined ideals and bases so that we could introduce the concept of Gröbner bases for algebraic system solving.

Editors: Isabel Beichl, isabel.beichl@nist.gov
Julian V. Noble, jvn@virginia.edu

COMPUTING PRESCRIPTIONS
larly important, so it is that some Gröbner bases are particularly important: we call these reduced Gröbner bases. We say that a Gröbner basis is reduced if and only if the leading coefficient of all its polynomials is 1 and we can't simplify any of its polynomials by adding a linear algebraic combination of the rest of the polynomials in the basis.
The Buchberger algorithm is what allows Maple and other computer algebra systems to compute Gröbner bases. The input to Buchberger's algorithm is a polynomial set, a term order (for instance, tdeg), and a variable order (for instance, x > y > z). The algorithm's output is the ideal's reduced Gröbner basis with respect to the specified term and variable orders. The key point is that such a reduced Gröbner basis completely characterizes the ideal: any ideal has a unique reduced Gröbner basis.1 Consequently,

• two sets of polynomials generate the same ideal if and only if their reduced Gröbner bases are the same, and
• {1} is the only reduced Gröbner basis for the ideal that is equal to the whole ring (remember that the ideal generated by {1} is always the whole ring, because any element of the ring can be generated as the product of 1 and an element of the ring; the property of the reduced Gröbner bases mentioned earlier implies the uniqueness of such a basis).
Because we'll often refer to reduced Gröbner bases, we should introduce an abbreviation. Let C be a set of polynomials, and use GB(C) to denote the reduced Gröbner basis of the ideal generated by C with respect to certain term and variable orders.
Gröbner Bases and Algebraic System Solving
Gröbner bases deal with polynomial ideals, but as the previous article showed, we also can use them in algebraic system solving.

Algebraic Systems with the Same Solutions
A first application in algebraic system solving would be to check for the equality of solutions. As a consequence of the previous section's theoretical results, if GB(pol_1, ..., pol_n) = GB(pol'_1, ..., pol'_m), then the systems

  pol_1 = 0, ..., pol_n = 0    and    pol'_1 = 0, ..., pol'_m = 0

have the same solutions.1
A result close to the converse is true if the base field is algebraically closed. (To describe this in full detail, though, we would have to introduce the so-called radical of an ideal and mention Hilbert's Nullstellensatz, which is beyond this brief introduction's scope.) Let's use the direct result: we'll prove that three systems have the same solutions because the reduced Gröbner bases of the corresponding ideals coincide.
Example 2. The following three systems have the same solutions:

  x^2 + y^2 - 1 = 0,  z - 1 = 0

  x^2 + y^2 - z^2 = 0,  z - 1 = 0

  x^2 + y^2 - z = 0,  -x^2 - y^2 - z + 2 = 0,  x^2 + y^2 + z^2 - 2 = 0

(The example shows the intersection of a cylinder and a plane orthogonal to its axis; the intersection of a cone and the same plane; and the intersection of an elliptic paraboloid, another elliptic paraboloid, and a spherical surface, respectively.) We'll check it by computing the corresponding Gröbner bases in Maple:

> gbasis( {x^2 + y^2 - 1, z - 1} , plex(x,y,z) );

[z - 1, x^2 + y^2 - 1]

> gbasis( {x^2 + y^2 - z^2, z - 1} , plex(x,y,z) );

[z - 1, x^2 + y^2 - 1]

> gbasis( {x^2 + y^2 - z, -x^2 - y^2 - z + 2, x^2 + y^2 + z^2 - 2} , plex(x,y,z) );

[z - 1, x^2 + y^2 - 1]
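The same check carries over to SymPy (again our substitution for the Maple session):

```python
from sympy import groebner, symbols

x, y, z = symbols('x y z')
systems = [
    [x**2 + y**2 - 1, z - 1],
    [x**2 + y**2 - z**2, z - 1],
    [x**2 + y**2 - z, -x**2 - y**2 - z + 2, x**2 + y**2 + z**2 - 2],
]

# One reduced Groebner basis per ideal, all under lex with x > y > z
bases = [set(groebner(F, x, y, z, order='lex').exprs) for F in systems]

# All three coincide, so the three systems share their solution set
assert bases[0] == bases[1] == bases[2]
print(bases[0])
```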
Distinguishing Real and Complex Solutions
Whether an algebraic equation has solutions clearly depends on the set in which we are looking for such solutions. A field is algebraically closed if each polynomial with coefficients in the field also has a root in the field. A field's algebraic closure is the minimum algebraically closed field that contains the given field. For instance, the fields ℚ (the set of rational numbers) and ℝ are not algebraically closed, because x^2 - 2 ∈ ℚ[x] has no rational root, and x^2 + 1 ∈ ℝ[x] has no real root either. However, ℂ (the set of complex numbers) is algebraically closed. Moreover, ℂ is the algebraic closure of ℚ and ℝ.
Whether an algebraic system has solutions will also depend on the set in which we're looking for such solutions. For example, some algebraic systems have exactly the same real or complex solutions, but this is not always the case, as the next example shows.
Example 3. Consider the algebraic
system below, also used as an example
58 COMPUTING IN SCIENCE & ENGINEERING
in the previous article (see Figure 1):
Computing the Gröbner basis with Maple, we get

> gbasis( {x^2 + y^2 + z^2 - 2, x^2 + y^2 - z, x - y}, plex(x,y,z) );
        [z^2 - 2 + z, 2y^2 - z, x - y]
The first polynomial has two roots: z = 1 and z = -2. Substituting 1 for z in the second polynomial and by substitution in the third polynomial, we get two real solutions (points):

(x = 1/√2, y = 1/√2, z = 1),  (x = -1/√2, y = -1/√2, z = 1)

Nevertheless, two other imaginary solutions (points of ℂ³) correspond to the other root of the first polynomial (z = -2):

(x = i, y = i, z = -2),  (x = -i, y = -i, z = -2)
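The four points can be checked directly with complex arithmetic; the sketch below substitutes each claimed solution into the system (plain Python, no CAS assumed).

```python
import math

# The system of Example 3 and its four claimed solutions.
system = (
    lambda x, y, z: x*x + y*y + z*z - 2,   # sphere
    lambda x, y, z: x*x + y*y - z,         # elliptic paraboloid
    lambda x, y, z: x - y,                 # plane
)

r = 1 / math.sqrt(2)
solutions = [
    (r, r, 1), (-r, -r, 1),                # real points (z = 1)
    (1j, 1j, -2), (-1j, -1j, -2),          # imaginary points (z = -2)
]

for point in solutions:
    for f in system:
        assert abs(f(*point)) < 1e-12, (point, f(*point))
print("all four points satisfy the system")
```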
Algebraic Systems with No Solutions
We also can use Gröbner bases in algebraic system solving to check the existence of solutions (in the algebraic closure of the base field).1 For instance, the system

pol_1 = 0, ..., pol_n = 0

has no solution in the algebraic closure of the base field if and only if GB(pol_1, ..., pol_n) = {1}.
Example 4. The following system has no real solution (see Figure 2):

x^2 - y = 0,  x^2 - y + 1 = 0

We can now check that it has no complex solutions either:

> gbasis( {x^2 - y, x^2 - y + 1}, plex(x,y,z) );
        [1]
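A direct certificate of incompatibility exists here, too. Taking the system as x^2 - y = 0, x^2 - y + 1 = 0 (as reconstructed above), the constant 1 is literally the difference of the two generators, which is why the Gröbner basis collapses to {1}; the sketch below verifies the identity on an integer grid.

```python
# Certificate that {x^2 - y = 0, x^2 - y + 1 = 0} is incompatible:
# 1 = (x^2 - y + 1) - (x^2 - y), so 1 lies in the ideal and GB = {1}.
p = lambda x, y: x*x - y
q = lambda x, y: x*x - y + 1

for x in range(-3, 4):
    for y in range(-3, 4):
        assert q(x, y) - p(x, y) == 1
print("1 = (x^2 - y + 1) - (x^2 - y), so the ideal is the whole ring")
```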
However, this is not always the case:
a polynomial system with no real solu-
tions can have complex solutions, as
the next example shows.
Example 5. Consider the surfaces of ℝ³ given by the system

x^2 + y^2 + z^2 - 4 = 0,  x^2 + y^2 - z + 5/2 = 0,

that is, a spherical surface below plane z = 9/4 and an elliptic paraboloid above the same plane (see Figure 3). Clearly, the two surfaces do not intersect in ℝ³. Nevertheless, the reduced Gröbner basis is not {1}:

> gbasis( {x^2 + y^2 + z^2 - 4, x^2 + y^2 - z + 5/2}, plex(x,y,z) );
        [2z - 13 + 2z^2, 2x^2 + 2y^2 - 2z + 5]
This is because although the two surfaces don't intersect in ℝ³, they do intersect in ℂ³! The roots of the first polynomial are z = -1/2 ± (3√3)/2, and substituting these values for z in the second polynomial, we get two imaginary circles:

In plane z = -1/2 + (3√3)/2: 2x^2 + 2y^2 + 6 - 3√3 = 0.
In plane z = -1/2 - (3√3)/2: 2x^2 + 2y^2 + 6 + 3√3 = 0.
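The situation is easy to confirm numerically: both roots of the univariate Gröbner basis element are real, but on each corresponding plane the remaining equation describes a circle of negative squared radius. A stdlib-only sketch:

```python
import math

# Reduced GB of Example 5: [2z^2 + 2z - 13, 2x^2 + 2y^2 - 2z + 5].
roots = [(-1 + 3 * math.sqrt(3)) / 2, (-1 - 3 * math.sqrt(3)) / 2]
for z in roots:
    assert abs(2 * z * z + 2 * z - 13) < 1e-9   # really a root
    # On the plane z = z0, the second polynomial becomes
    # 2x^2 + 2y^2 = 2z0 - 5, a circle of squared radius z0 - 5/2.
    assert z - 5 / 2 < 0                         # negative: an imaginary circle
print("both intersection circles are imaginary; no real intersection")
```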
Other Applications of Gröbner Bases
Apart from obvious direct polynomial system solving, different fields have some surprising applications of Gröbner bases.
COMPUTING PRESCRIPTIONS
Figure 3. A nonlinear polynomial system. The system has no real solution (as can be seen in the figure), but it does have complex solutions.
Figure 2. A nonlinear polynomial system. Its solution set is the intersection of two parabolas. It has neither real nor complex solutions (that it has no real solution can be deduced from the figure).
Figure 1. A nonlinear polynomial system. Its solution set is the intersection of a spherical surface, an elliptic paraboloid, and a plane. The figure shows two real solutions, but two other imaginary solutions (that we can't draw) exist.
Inclusion of Varieties
Although Emmy Noether and Wolfgang Krull developed the basic theory of algebraic geometry in the 1930s, its applications were very limited until the implementation of Gröbner bases in computer algebra systems: the examples that could be managed were almost trivial.
A straightforward application of Gröbner bases is the difficult task of deciding whether an algebraic variety is included within another one (an algebraic variety is the solution set of an algebraic system). We can easily check, for instance, that the curve in ℝ³ given by

z - x^3 = 0,  y - x^2 = 0

is contained in the surface xz - y^2 = 0 (a cone). We simply prove that the equation xz - y^2 = 0 doesn't add any constraint to the equations in the first system:

> gbasis( {z - x^3, y - x^2}, plex(x,y,z) );
        [-z^2 + y^3, xz - y^2, xy - z, -y + x^2]
> gbasis( {z - x^3, y - x^2, x * z - y^2}, plex(x,y,z) );
        [-z^2 + y^3, xz - y^2, xy - z, -y + x^2]
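The unchanged Gröbner basis means xz - y^2 belongs to the ideal of the curve, so an explicit cofactor representation must exist. The identity below is one such certificate, worked out by hand for this example; since the degrees are at most 4 per variable, agreement on a 6-point grid per variable proves it exactly.

```python
from itertools import product

# Cofactor representation behind the Maple check:
#   x*z - y^2 = x*(z - x^3) - (x^2 + y)*(y - x^2),
# exhibiting x*z - y^2 as a member of the ideal <z - x^3, y - x^2>.
lhs = lambda x, y, z: x * z - y * y
rhs = lambda x, y, z: x * (z - x**3) - (x*x + y) * (y - x*x)

assert all(lhs(x, y, z) == rhs(x, y, z)
           for x, y, z in product(range(-3, 3), repeat=3))
print("x*z - y^2 lies in the ideal of the curve")
```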
A related field of application of these techniques is computer-aided geometric design (CAGD).2
Automatic Theorem Proving in Geometry
It is possible to automatically prove geometric theorems the same way by using Gröbner bases.3 Both the hypotheses and the thesis are usually statements like "a point lies on a geometric object" and "three lines share a point," which we can write as polynomial equations. We can express the theorem as

hyp_1 = 0, ..., hyp_k = 0  ⟹  thesis = 0.

But to prove such an implication, it is enough to prove that

thesis ∈ ⟨hyp_1, ..., hyp_k⟩ (the ideal generated by the hypotheses),

which we can check by comparing GB(thesis, hyp_1, ..., hyp_k) and GB(hyp_1, ..., hyp_k).
Figure 4. The control desk of the railway interlocking at a railway station. (a) This interlocking has a mixed technology: it is computer-controlled, but compatibility is decided by a combination of relay arrays. (b) Part of the huge room that contains the relay arrays.
Expert Systems
We can apply a GB-based method to knowledge extraction and verification of rule-based expert systems.4 To do so, logic formulae can be translated into polynomials, and the following result relating tautological consequence to polynomial ideal membership is obtained:

if (¬A) denotes the polynomial translation of the negation of a formula A, then A_0 can be deduced from a set of facts F_1, ..., F_n and a set of rules R_1, ..., R_m if and only if (¬A_0) ∈ ⟨(¬F_1), ..., (¬F_n), (¬R_1), ..., (¬R_m)⟩.
And, as mentioned earlier, it's easy enough to compare two GBs to check for ideal membership. Moreover, this result holds both when the underlying logic is Boolean and when it is modal multivalued.
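As a toy illustration of the Boolean case, the sketch below uses one common polynomial translation of propositional logic (truth values 0/1, with t(¬A) = 1 - t(A) and t(A → B) = 1 - t(A) + t(A)·t(B)); this specific encoding is an assumption for illustration, not the article's full construction, which also covers modal multivalued logics. Deducing B from the fact A and the rule A → B amounts to the system {A true, A → B true, ¬B true} having no common Boolean zero, which a Gröbner basis would detect as GB = {1}.

```python
from itertools import product

# Polynomials asserting: A holds, A -> B holds, not B holds.
polys = [
    lambda a, b: a - 1,                      # fact: A true
    lambda a, b: (1 - a + a * b) - 1,        # rule: A -> B true
    lambda a, b: (1 - b) - 1,                # negated goal: not B true
]

# Brute-force check over {0,1}^2 that the system is incompatible.
common_zeros = [(a, b) for a, b in product((0, 1), repeat=2)
                if all(p(a, b) == 0 for p in polys)]
assert common_zeros == []                    # incompatible: B is a consequence
print("B follows from {A, A -> B}")
```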
Railway Interlocking Systems
We also applied a GB-based method to checking the safety of switch position, semaphore color, and train position in a railway station (see Figure 4). Our decision-making model is topology-independent; that is, it doesn't depend on track layout.5
The key idea is to identify trains via integer numbers, sections of the lines via polynomial variables, and the connectivity among the different sections via polynomials (trains can pass from one section to another if they are physically connected and the position of the switches and the color of the semaphores allow it). Let's consider an algebraic system constructed as follows:

If section y is reachable from section x, we add x · (x - y) = 0 to the system.
If train 3 is in section x, we add x - 3 = 0 to the system.

Notice that the values propagate along reachable sections: for instance, if train 3 is in section x, and it's possible to pass from section x to section y, we have

x - 3 = 0,  x · (x - y) = 0  ⟹  y - 3 = 0,

which means section y is reachable by train 3. We thought about this problem for a long time until we could find polynomials, x · (x - y) and x - j, that translated this behavior. Because a situation is unsafe if and only if two different trains could reach the same section, the situation's safeness is equivalent to the algebraic system's compatibility.
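A two-section toy instance of this model can be sketched in plain Python. In general, compatibility is decided with a Gröbner basis computation; here, for illustration only, the check is a brute-force search over a small range of section values (an assumption that keeps the sketch self-contained).

```python
from itertools import product

# Sections x and y; train 3 occupies x, train 4 occupies y.
# A connection from x to y contributes the polynomial x*(x - y);
# occupation contributes x - 3 and y - 4.
def compatible(polys, values=range(0, 6)):
    """Brute-force stand-in for the Groebner-basis compatibility test."""
    return any(all(p(x, y) == 0 for p in polys)
               for x, y in product(values, repeat=2))

occupation = [lambda x, y: x - 3, lambda x, y: y - 4]
connection = [lambda x, y: x * (x - y)]

assert compatible(occupation)                    # no connection: safe
assert not compatible(occupation + connection)   # train 3 could reach y: unsafe
print("connected layout flagged unsafe")
```

With the connection present, x = 3 forces y = 3, contradicting y = 4, so the system is incompatible, exactly the unsafe situation the interlocking must flag.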
Algebraic systems are usually solved by using numerical methods, but these methods are not appropriate when dealing with decision-making problems. In such cases, the Gröbner bases method is the key. Although some knowledge of commutative algebra is required to know how to calculate them, why the reduction process always finishes, and why they completely identify an ideal, just using them can be intuitive and useful. In fact, the symbolic solve commands in computer algebra systems internally apply a Gröbner basis algorithm if the system is nonlinear. As this article shows, a wide variety of applications arise. One future direction under development now is the application to CAGD, in particular, to the geometry of a car body's pressed steel pieces.6
Acknowledgments
Research project TIC-2000-1368-C03 (MCyT, Spain) partially supported this work.
References
1. D. Cox, J. Little, and D. O'Shea, Ideals, Varieties, and Algorithms, Springer-Verlag, 1992.
2. L. González-Vega, "Computer Aided Design and Modeling," Computer Algebra Handbook, J. Grabmeier, E. Kaltofen, and V. Weispfenning, eds., Springer-Verlag, 2003, pp. 234–242.
3. B. Buchberger, "Applications of Gröbner Bases in Non-Linear Computational Geometry," Mathematical Aspects of Scientific Software, vol. 14, J.R. Rice, ed., Springer-Verlag, 1988, pp. 60–87.
4. E. Roanes-Lozano et al., "A Polynomial Model for Multi-Valued Logics with a Touch of Algebraic Geometry and Computer Algebra," Mathematics and Computers in Simulation, vol. 45, nos. 1–2, 1998, pp. 83–99.
5. E. Roanes-Lozano and L.M. Laita, "Railway Interlocking Systems and Gröbner Bases," Mathematics and Computers in Simulation, vol. 51, no. 5, 2000, pp. 473–481.
6. L. González-Vega and J.R. Sendra, "Algebraic-Geometric Methods for the Manipulation of Curves and Surfaces," Actas del 7º Encuentro de Álgebra Computacional y Aplicaciones (EACA-2001), J. Rubio, ed., Universidad de La Rioja, 2001, pp. 45–60.
Eugenio Roanes-Lozano is an associate professor in the algebra department of the Universidad Complutense de Madrid. He has a PhD in mathematics from the Universidad de Sevilla and a PhD in computer science from the Universidad Politécnica de Madrid. He is a member of the Real Sociedad Matemática Española, the Sociedad Matemática Puig Adam, and the IMACS society. Contact him at eroanes@mat.ucm.es.
Eugenio Roanes-Macías is an associate professor in the algebra department of the Universidad Complutense de Madrid. He has a PhD in mathematics from the Universidad Complutense de Madrid. He is a member of the Real Sociedad Matemática Española and the Sociedad Matemática Puig Adam.
Luis M. Laita is a full professor in the artificial intelligence department of the Universidad Politécnica de Madrid. He has an Lltd. in physics, a PhD in mathematics from the Universidad Complutense de Madrid, and a PhD in history and philosophy of science from Notre Dame University. He is a correspondent academician of the Real Academia de Ciencias de España.
MAY/JUNE 2004. Copublished by the IEEE CS and the AIP. 1521-9615/04/$20.00 © 2004 IEEE.
Editors: Jim X. Chen, jchen@cs.gmu.edu; R. Bowen Loftin, bloftin@odu.edu
VISUALIZATION CORNER
Visualization is a process of presentation and discovery. When a graphic presentation is effective, users perceive relationships, quantities, and categories within the information. They also might interact with and manipulate various information aspects, dynamically changing a rendering's appearance, which could confirm or contradict their hypotheses' development. Users want to understand the underlying phenomena via the visualization; they don't (necessarily) need to understand individual values. An effective visualization should convey the data's meaning and increase the information's clarity through the user's natural perception abilities.
In addition to visual mappings, or as an alternative to them, we could map information into nonvisual forms: any form that stimulates any of our senses, from auditory, haptic, olfactory, and gustatory to vestibular.1 For example, we could map month-long stock-market data onto a line graph, with the x-axis representing time and the y-axis the stock price (we could then plot multiple stocks using various colored or textured lines). Alternatively, we could use sound graphs, in which each stock sounds a different timbre, with higher stock value represented by a higher pitch and the days and weeks represented by time.2
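The sound-graph idea above reduces to a value-to-pitch mapping. A minimal sketch, assuming a linear mapping onto MIDI note numbers (the note range 48–84 is an arbitrary illustrative choice, not from the cited work):

```python
import math

def value_to_frequency(value, lo, hi, note_lo=48, note_hi=84):
    """Map a data value linearly onto a MIDI note, then to hertz."""
    note = note_lo + (value - lo) / (hi - lo) * (note_hi - note_lo)
    return 440.0 * 2 ** ((note - 69) / 12)   # equal-tempered pitch

prices = [101.0, 104.5, 99.2, 107.3]          # one stock, four days
lo, hi = min(prices), max(prices)
freqs = [value_to_frequency(p, lo, hi) for p in prices]

# Higher value -> higher pitch (a positive-polarity mapping).
assert freqs[3] == max(freqs) and freqs[2] == min(freqs)
```

Playing the resulting frequencies over time, one per day, yields the sound graph; multiple stocks would get distinct timbres, as the text describes.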
Presenting the information in these nonvisual forms offers many advantages:

• They are more accessible to partially or nonsighted users.
• Some modalities might be more effective at representing data (for example, sonification is useful when temporal features are important).
• Multiple different modalities are useful when one modality is already overloaded with numerous variables.
• In situations where a display screen is too small to encapsulate an intricate visualization, or users can't view a screen because they're monitoring something else (for example, in a machine room in which an engineer constantly monitors the material being cut and machined), a nonvisual form (such as sonification) could be more appropriate.
These nonvisual visualizations create a range of challenges: How can we effectively represent information using these various modalities? Can users actually, and accurately, perceive the information? These are difficult questions, and there is much research ahead to work out effective multimodal visualization designs. In contrast, much research has been completed in the areas of visual perception and representation of data. For example, researchers have employed empirical studies to assemble design rules and theories,3 such as Gestalt principles of similarity, James Gibson's affordance theory, or Jacques Bertin's semiology of graphics. Although many are merely guidelines, they do aid us (as data-presentation engineers) to create good visualizations.
So, what can we learn from one modality to another? Is there equivalence? Can we apply ideas in one modality to gain effective and understandable realizations in another? Bar charts are extremely popular visualizations, but what would a multimodal bar chart look like? What would an audible bar chart sound like? What about a haptic bar chart? Can we learn from one modality's design principles and apply that knowledge to another? An obvious advantage is that because users effortlessly understand the visual bar-chart concept, they should instinctively understand an equivalent design in another modality. Additionally, good design principles in one modality might help us generate an effective realization in another. We'll return to our audible bar chart later. For now, let's try to answer some of the other questions I raised.

Equivalence Chart Designs
Many current multiperceptual designs are equivalence designs. For instance, work by Wai Yu and colleagues demonstrated haptic line graphs.4 Like their visual counterparts, the researchers
visual counterparts, the researchers
placed the haptic line graphs on a 2D
grid, with lines representing ridges and
VISUALIZATION EQUIVALENCE
FOR MULTISENSORY PERCEPTION
LEARNING FROMTHE VISUAL
By Jonathan C. Roberts
I
N OUR INFORMATION-RICH WORLD, COMPUTERS GENERATE SO
MUCH DATA THAT COMPREHENDING AND UNDERSTANDING IT
IN ITS RAW FORM IS DIFFICULT. VISUAL REPRESENTATIONS ARE IM-
PERATIVE IF WE ARE TO UNDERSTAND EVEN A SMALL PART OF IT.
62 COMPUTING IN SCIENCE & ENGINEERING
valleys. Users traced the line graph path
by following a pointer alongside a ridge
or down a valley.
In this example, Yu's team utilized a Phantom force-feedback joystick (see Figure 1), which lets users feel 3D objects, finding that users more successfully followed the valleys because they could more easily keep the pointer on the line. Users could effectively understand the graph data, but problems occurred when the graph became detailed and when multiple lines crossed on the graph (users didn't know which one they were following). We can envisage various strategies to overcome these problems: making individual lines feel different (for example, by changing their frictions) or staying on the line that the user started investigating (much like a train crossing through a railroad switch), which could be implemented by the geometry configuration or magnetic forces.
In effect, such a haptic graph mimics swell paper, or tactile graphics, on which users can feel and follow raised areas on the paper (albeit using valleys rather than ridges). However, the main and important difference is that the Phantom device is point-based, with the kinesthetic force realized at a single point in space, whereas human fingers are much more versatile in their sensitivity. They can feel surrounding information; multiple fingers can mark points of interest; and the fingers' separations can gauge distances. In fact, an ideal system would stimulate a larger part of the human finger (effecting more realistic rendering of the graph by letting the user perceive surrounding elements), stimulate multiple fingers, and let the user dynamically feel and explore the information. Devices with such capabilities do exist, such as Immersion's CyberTouch glove (see Figure 2), which uses vibro-tactile stimulators placed on each finger and one on the palm, or dynamic Braille displays (in which the pattern of six Braille dots changes); but the resolution, and therefore the information detail these devices portray, is not as accurate as the human finger can sense.
Various examples of sonification equivalence designs go beyond the sound graphs I've mentioned. David Bennett visualized home heating schematics by sounding out each node's position on a graph.5 Each node was represented by two musical motifs played on different instruments (one each for the x,y coordinates), and the number of notes in the scale corresponded to the coordinate position. In this way, various 2D objects were sounded.
In 2003, Keith Franklin and I realized sonified pie charts.6 Our example used 3D sound sources, simulated surround sound on headphones using head-related transfer functions (HRTFs), which are functions that create an illusion of sounds at particular locations based on a human model (timing of a source to the left and right ears and modifications by our ears, head, or torso), and a surround-sound speaker setup to position the pie segments. We positioned a user in the azimuth plane, with the pie segments surrounding the user. We used various strategies to sound out the pie segments, from placing the segments around the user to normalizing the segments to the front. The results showed that the user easily understood how the information was being represented, but had difficulty in accurately gauging the segments' values. In fact, mapping the pie segments to spatial sound is much less accurate than the visual equivalence. This problem is further exacerbated by the error's nonlinearity, which depends on the sound's position surrounding the user (the so-called minimum audible angle7).
Another example of using position in the graphic to represent position in the sonification is by Rameshsharma Ramloll and colleagues, who describe an audio version of tabular data.8 In their example, they map the value to pitch and the horizontal position of each cell to a localized sound source: a user hears the left-most cell in the left ear and the right-most cell in the right ear, while intermediary cell values are interpolated between the left and right positions. In the same way, other researchers have developed several systems that nonvisually represent a computer's GUI. Some systems are speech-based, while others use nonspeech sounds. For example, Earcons,9 which are unique and identifiable rhythmic pitch sequences, can represent the interface's menus.
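The column-to-position part of the Ramloll-style mapping can be sketched with constant-power stereo panning (an assumption for illustration; the cited work used localized sound sources rather than simple stereo gains):

```python
import math

def column_to_gains(col, ncols):
    """Map a cell's column to (left, right) stereo gains."""
    pan = col / (ncols - 1)                  # 0 = far left, 1 = far right
    theta = pan * math.pi / 2
    return math.cos(theta), math.sin(theta)  # constant-power pan law

row = [3, 1, 4, 1, 5]                        # one table row; values map to pitch
for col, value in enumerate(row):
    left, right = column_to_gains(col, len(row))
    assert abs(left**2 + right**2 - 1) < 1e-12   # perceived loudness is constant

assert column_to_gains(0, 5) == (1.0, 0.0)       # leftmost cell: left ear only
left, right = column_to_gains(4, 5)
assert abs(left) < 1e-12 and abs(right - 1) < 1e-12  # rightmost: right ear only
```

Interior columns receive intermediate gain pairs, which is the interpolation between left and right positions that the text describes.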
Figure 1. Phantom Desktop haptic device provides 3D positional sensing. (Reproduced courtesy of SensAble Technologies. Phantom and Phantom Desktop are trademarks or registered trademarks of SensAble Technologies.)
Figure 2. CyberTouch vibro-tactile glove, with stimulators placed on each finger and one on the palm. (Reproduced by permission of Immersion Corporation, copyright 2004. All rights reserved.)
Elizabeth Mynatt and Gerhard Weber describe two systems: Textual and Graphical User Interfaces for Blind People (GUIB) translates the screen into tactile information, and the Mercator project exchanges the interface with nonspeech auditory cues.10
It is obviously useful and, indeed, possible to implement equivalence designs. Developers are gaining inspiration from one traditional mapping to instigate an effective mapping in another modality. Although idea transference is an important strategy, it might not be wise to follow it unconditionally, because by focusing on an equivalent design, the temptation (and perhaps the consequence) is to recreate the design itself rather than representing the underlying phenomena's aspects. Thus, in practice, the process is necessarily more complex than applying a one-to-one design translation. Of course, extracting design principles from one and applying them to another is useful, but it might be that the equivalent presentation in another modality will not look like its equivalent. Gaining inspiration from the visual design equivalent relies on users implicitly understanding the design and knowing how to interpret the information. In reality, a user might not be so familiar with the original form; for instance, Yu mentioned that nonsighted users found it slower to comprehend the realization compared with sighted users using the same haptic graphs, mentioning that this could be due to unfamiliarity with certain graph layouts.4
Inspiration from the Workplace
Rather than gaining inspiration strictly from the visual domain, perhaps we should look to the real world or the workplace. Since the dawn of the visual interface, designers have applied noncomputerized-workplace or everyday-living ideas to help develop understandable user interfaces. The idea of the desktop comes from the office, with documents spread over a desk or workspace and cabinets or folders in which to store them. Tools such as spreadsheets are inspired from the tabular columns of numbers found in ledger sheets. We take for granted these and other concepts, such as cut and paste, in our day-to-day computing use, but they were inspired from a noncomputerized world.
Currently, there are various metaphors for nonvisual interfaces. We might exchange graphical icons with auditory icons (using familiar real-world sounds), or Earcons, which also encode similarities among an assortment of objects. We can shade out visual interface elements or icons when they are unavailable in a particular configuration. Similarly, we can use sound effects, or filtears, to manipulate and perturb auditory cues.11 For example, we could represent a musical motif more quietly, or it could sound more dull or bright depending on whether it is iconized. Finally, instead of implementing a sonified version of the desktop metaphor, Mynatt and Edwards describe a metaphor called audio rooms10 (an extension of previous ideas from Xerox PARC). The rooms metaphor groups activities together (much like rooms in a house: kitchen for cooking, bedroom for sleeping, and so on). Thus, we can group files and applications in a room for similar activities. The rooms also include doors to traverse into adjacent rooms.
As a consequence, we might ask: What more can we learn from everyday visual interfaces and metaphors? Consider various aspects of the user interface. For example, what would be the nonvisual counterparts for highlighted text, popups, or multiple windows?
Looking at the Variables
The equivalent chart designs I mentioned succeed because they employ perceptual variables with similar traits. For instance, sonified pie charts6 map each pie segment (usually represented by an angle) into a position surrounding a user; the visual and haptic line graphs represent data through a perceptual variable that demonstrates the data value by showing distance from a fixed axis. This is consistent with Ben Challis and Alistair Edwards, who say, "A consistence of mapping should be maintained such that descriptions of actions remain valid in both the visual and the non-visual representations."12 This is polarity mapping.
There are two types of mapping polarities: positive (a variable increases in the same direction as the change of the underlying data) and negative, the converse. Bruce Walker and David Lane summarized that the majority of (sighted and nonsighted) users allocated the same polarities to data, with the exception of monetary values, particularly when mapped to pitch.13 They conjecture that sighted users might associate higher pitches with faster forms of transport, whereas nonsighted users relate the values to the everyday sounds of the money itself (dropped coins make a higher-pitched sound, whereas a stack of paper money makes a lower pitch, although the stack holds a higher monetary value).
It also is worth looking further at the variables. For example, Jacques Bertin recommends mapping the content to the container using a component analysis.3 First, analyze the original data's individual components and note whether they are variant or invariant (the range is small, medium, or large) and whether the quantities are nominal, ordinal, or quantitative. Then, evaluate the container's components for the same traits, and map one into the other. Although Bertin originally was inferring graphics and charting information, the same general principle is relevant for multimodal information presentation.
Consequently, just as there is a role for evaluating perception issues and investigating rules and guidelines for using retinal variables, there also is a similar need for nonvisual variables. Some graphics researchers have automated the design of graphical presentations.14 However, few guidelines currently exist for the use of nonvisual variables.
We shouldn't be surprised that, as we learn about the limitations and problems with designing visual interfaces, we must learn about the peculiarities associated with nonvisual perception. For example, when we use the same color in different contexts, with different adjacent colors, our perception of that color can radically change. We know some of the issues in multisensory perception, such as the minimum audible angle and that absolute pitch judgment is difficult (only about 1 percent of the population has perfect pitch). But we need to do more empirical research to decipher the interplay between various modalities and also various parameters.
The Engineering Dataflow
Another aspect to contemplate is the mapping process itself. It is only one of many procedures needed to generate an appropriate presentation. Over the years, researchers have proposed various modus operandi,15 but many of the fundamental principles are the same. In visualization, the dataflow model predominates. It describes how the data flows through a series of transformation steps: the data is enhanced (which could consist of filtering, simplifying, or selecting an information subset), then this processed information is mapped into an appropriate form that can be rendered into an image. Perception engineers must go through similar steps, whatever the target modality. They must select and, perhaps, summarize and categorize the information before mapping it into effective perceptual variables. Thus, it is useful that developers think about this engineering dataflow and consider the perceptual implications at each step.
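The enhance, map, render stages above can be sketched as three composable functions; the function names and the pitch target are illustrative assumptions, not part of any particular visualization toolkit.

```python
def enhance(data, keep_every=2):
    """Filter/simplify: keep a subset of the raw samples."""
    return data[::keep_every]

def map_to_pitch(samples, note_lo=48, note_hi=84):
    """Map each sample onto a perceptual variable (here, a MIDI note)."""
    lo, hi = min(samples), max(samples)
    span = (hi - lo) or 1
    return [note_lo + (s - lo) * (note_hi - note_lo) // span for s in samples]

def render(notes):
    """Stand-in renderer: emit the note sequence a synthesizer would play."""
    return " ".join(str(n) for n in notes)

raw = [12, 15, 11, 18, 14, 20, 13, 17]
print(render(map_to_pitch(enhance(raw))))   # prints: 60 48 84 72
```

Swapping `map_to_pitch` and `render` for haptic or tactile equivalents changes the target modality while leaving the dataflow itself intact, which is the point of thinking in these stages.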
Abstract Realizations
All the previously mentioned designs are really presentation graphics. They don't represent the intricacies that, say, information visualization does for the sighted user. Thus, an important part of visualization is abstraction. Often, users more easily understand the underlying information if the information is simplified.
At the 2004 Human Vision and Electronic Imaging banquet for SPIE's Electronic Imaging conference (www.spie.org), Pat Hanrahan, Canon USA Professor at Stanford University, spoke about "Realism or Abstraction: The Future of Computer Graphics." In his presentation, he said that much effort has gone into generating realistic graphics, and more should go into the process of generating abstract representations. There are many instances when line, sketchy, or even cartoon drawings are easier to perceive. Indeed, he mentioned Ryan and Schwartz, who evaluated users' responses to photographic and cartoon images in 1956.16 They found that people could more quickly identify a cartoon hand than a photograph of one. However, abstract renderings often are hard to achieve; in one respect, realistic renderings merely require the application of mathematics to the problem, whereas abstract realizations rely on ingenious and clever mappings (which is a harder process, because they can't be mathematically defined). For instance, an artist can change an obscure painting into an understandable picture by merely adding a few precisely placed lines.
An excellent example of an abstract realization is the London Underground Map designed by Harry Beck in 1933 (http://tube.tfl.gov.uk/guru/index.asp). It depicts the stations' logical positions rather than their exact positions. Users aren't confused by additional, unnecessary information because they see only the important information. Obviously, this is task-dependent; the map makes it much easier to understand how to navigate the railway and work out where to change stations, but it is impossible to calculate exact distances between different stations.
If abstract mappings are useful in visual information presentation, then perhaps they should be equally important in nonvisual perception. This is a strong principle to adhere to when developing nonvisual realizations. In our group, we have tried to apply some of these ideas. For example, we recognized that if the user only needs to perceive particular features of the presentation, such as maximum and minimum values, then we only need to present this abstract information.17 In this case, we abstracted important graph facets (maximum and minimum points, turning points, and gradient) and displayed them in an abstract tactile work surface. Other abstract renderings include sonification of sorting algorithms18 and oil and gas well-log sonification.19 Indeed, sorting algorithm sonification is interesting, because it displays both the current state of the sorted list and the process of how two elements are swapped; the well-log sonification uses a Geiger-counter metaphor to abstractly represent the information.
So, what can we learn from this? First, abstraction is important, and perception engineers should think about how to extract and display the most data-significant features. Second, we should consider that abstract renderings might be better than realistic representations. Finally, many of the current nonvisual representations are realistic and accurate; perhaps we need to start thinking about nonrealistic and nonaccurate renderings that portray the underlying information's essence: a cartoon-style nonrealistic rendering. Indeed, neat and precise drawings give a perception of being complete and accurate, while stylistic and sketchy diagrams give the appearance of being incomplete or rough; can we utilize these ideas to generate more effective nonvisual visualization?
Now let's finish the thought experiment on the visual bar chart. Visual bar charts are popular because they are convenient, easy to create, and, most important, easy to understand. A user quickly eyeballs the graphic, perceives the overall trend, and immediately realizes different categories: information encoded in the bars' lengths. After a while (perhaps only a few milliseconds), a user might investigate further to determine which bar is largest, to which category it belongs, and its magnitude. This gives us some targets to design effective nonvisual representations.
First, we still can learn a lot from direct equivalent designs and metaphor equivalences. Most users will instantly understand the presentation's aim and get on with the task of understanding the underlying phenomena. Second, there is a need for tools that enhance the user's discovery. In other words, there is a need for exploration of and interaction with these nonvisual realizations. This is starting to happen,8,11 but we need to learn from Ben Shneiderman's mantra of "Overview first, zoom and filter, then details-on-demand."20 This is an important and effective visualization idiom, and we should be able to apply it to nonvisual perception. Third, there is a need for more abstract nonvisual representations. Abstraction is important; it helps users easily understand the information's structure. Think about how stylistic, cartooning, or nonaccurate ideas might generate more-effective nonvisual forms. Evaluation and empirical testing is imperative if we are to understand what is effective and how variables interplay and interfere with each other.
References
1. R.B. Loftin, "Multisensory Perception: Beyond the Visual in Visualization," Computing in Science & Eng., vol. 5, no. 4, 2003, pp. 56–58.
2. D.L. Mansur, M.M. Blattner, and K.I. Joy, "Sound-Graphs: A Numerical Data Analysis Method for the Blind," J. Medical Systems, vol. 9, no. 3, 1985, pp. 163–174.
3. C. Ware, Information Visualization: Perception for Design, Morgan Kaufmann, 2000.
4. W. Yu et al., "Exploring Computer-Generated Line Graphs Through Virtual Touch," Proc. IEEE ISSPA 2001, IEEE CS Press, 2001, pp. 72–75.
5. D.J. Bennett, "Effects of Navigation and Position on Task when Presenting Diagrams to Blind People Using Sound," Proc. 2nd Int'l Conf. Diagrams, M. Hegarty et al., eds., LNCS 2317, Springer, 2002, pp. 161–175.
6. K. Franklin and J.C. Roberts, "Pie Chart Sonification," Proc. Information Visualization (IV03), Ebad Banissi et al., eds., IEEE CS Press, 2003, pp. 4–9.
7. A.W. Mills, "On the Minimum Audible Angle," J. Acoustical Soc. Am., vol. 30, no. 4, 1958, pp. 237–246.
8. R. Ramloll et al., "Using Non-speech Sounds to Improve Access to 2D Tabular Numerical Information for Visually Impaired Users," Proc. People and Computers XV: Interaction Without Frontiers, Springer, 2001, pp. 515–530.
9. M. Blattner, D. Sumikawa, and R. Greenberg, "Earcons and Icons: Their Structure and Common Design Principles," Human–Computer Interaction, vol. 4, no. 1, 1989, pp. 11–44.
10. E.D. Mynatt and G. Weber, "Nonvisual Presentation of Graphical User Interfaces: Contrasting Two Approaches," ACM CHI 94 Conf. Proc., ACM Press, 1994, pp. 166–172.
11. L.F. Ludwig, N. Pincever, and M. Cohen, "Extending the Notion of a Window System to Audio," Computer, vol. 23, no. 8, 1990, pp. 66–72.
12. B.P. Challis and A.D.N. Edwards, "Design Principles for Tactile Interaction," Haptic Human–Computer Interaction, S. Brewster and R. Murray-Smith, eds., LNCS 2058, Springer-Verlag, 2001, pp. 17–24.
13. B.N. Walker and D.M. Lane, "Psychophysical Scaling of Sonification Mappings: A Comparison of Visually Impaired and Sighted Listeners," Proc. Int'l Conf. Auditory Displays (ICAD), 2001, pp. 90–94.
14. J. Mackinlay, "Automating the Design of Graphical Presentations of Relational Information," ACM Trans. Graphics, vol. 5, no. 2, 1986, pp. 110–141.
15. J.C. Roberts, "Display Models: Ways to Classify Visual Representations," Int'l J. Computer Integrated Design and Construction, D. Bouchlaghem and F. Khosrowshahi, eds., vol. 2, no. 4, 2000, pp. 241–250.
16. T.A. Ryan and C.B. Schwartz, "Speed of Perception as a Function of Mode of Representation," Am. J. Psychology, vol. 69, 1956, pp. 60–69.
17. J.C. Roberts, K. Franklin, and J. Cullinane, "Virtual Haptic Exploratory Visualization of Line Graphs and Charts," The Engineering Reality of Virtual Reality 2002, M.T. Bolas, ed., vol. 4660B, Int'l Soc. Optical Engineering (SPIE), 2002, pp. 401–410.
18. M.H. Brown and J. Hershberger, "Color and Sound in Algorithm Animation," Computer, vol. 25, no. 2, 1992, pp. 52–63.
19. S. Barrass and B. Zehner, "Responsive Sonification of Well-Logs," Proc. Int'l Conf. Auditory Displays (ICAD), 2000; www.icad.org/websiteV2.0/Conferences/ICAD2000/PDFs/Barrass.pdf.
20. B. Shneiderman, "The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations," Proc. IEEE Visual Languages, IEEE Press, 1996, pp. 336–343.
Jonathan C. Roberts is a senior lecturer at the Computing Laboratory, University of Kent, UK. His research interests include exploratory visualization, nonvisual and multimodal visualization, visualization in virtual environments, multiple views, visualization reference models, and Web-based visualization. He received a BSc and PhD in computer science from the University of Kent. He is a member of the ACM, the IEEE, and Eurographics societies. Contact him at j.c.roberts@kent.ac.uk.
Y O U R   H O M E W O R K   A S S I G N M E N T

FITTING EXPONENTIALS: AN INTEREST IN RATES
By Dianne P. O'Leary
Editor: Dianne P. O'Leary, oleary@cs.umd.edu
Copublished by the IEEE CS and the AIP, 1521-9615/04/$20.00 © 2004 IEEE

In this issue, we investigate the problem of fitting a sum of exponential functions to data. This problem occurs in many real-world situations, but we will see that getting a good solution requires care.

Suppose we have two chemical reactions occurring simultaneously. A reactant's amount y changes because of both processes and behaves as a function of time t as

    y(t) = x_1 e^{λ_1 t} + x_2 e^{λ_2 t},

where x_1, x_2, λ_1, and λ_2 are fixed parameters. The negative values λ_1 and λ_2 are rate constants; in time 1/|λ_1|, the first exponential term drops to 1/e of its value at t = 0. Often we can observe y(t) fairly accurately, so we would like to determine the rate and amplitude constants. This involves fitting the parameters of the sum of exponentials.

In this project, we study efficient algorithms for solving this problem, but we'll see that for many data sets, the solution is not well determined.

How Sensitive Are the x Parameters to Errors in the Data?
In this section, we investigate how sensitive the y function is to choices of parameters x, assuming that we are given the λ parameters exactly.

Typically, we observe the function y(t) for m fixed t values, perhaps t = 0, Δt, 2Δt, ..., t_final. For a given parameter set λ and x, we can measure the goodness of the model's fit to the data by calculating the residual

    r_i = y(t_i) − y_e(t_i),   i = 1, ..., m,   (1)

where y_e(t) = x_1 e^{λ_1 t} + x_2 e^{λ_2 t} is the model prediction. Ideally, the residual vector r = 0, but due to noise in the measurements, we never achieve this. Instead, we compute model parameters that make the residual as small as possible; we often choose to measure size using the 2-norm: ||r||_2^2 = r^T r.

If the λ parameters are given, we can find the x parameters by solving a linear least-squares problem, because r_i is a linear function of x_1 and x_2. Thus, we minimize the norm of the residual, expressed as

    r = y − Ax,

where A_{ij} = e^{λ_j t_i}; j = 1, 2; i = 1, ..., m; and y_i = y(t_i).

We can easily solve this problem by using matrix decompositions, such as the QR decomposition of A into the product of an orthogonal matrix times an upper triangular matrix, or the singular value decomposition (SVD). We'll focus on the SVD because even though it's somewhat more expensive, it's generally less influenced by round-off error, and it gives us a bound on the problem's sensitivity to small changes in the data.

The SVD factors A = UΣV^T, where the m × m matrix U satisfies UU^T = U^T U = I (the m × m identity matrix), the n × n matrix V satisfies VV^T = V^T V = I, and the m × n matrix Σ is zero except for the entries σ_1 ≥ σ_2 ≥ ... ≥ σ_n on its main diagonal. Because ||r||_2^2 = r^T r = (U^T r)^T (U^T r) = ||U^T r||_2^2, we can solve the linear least-squares problem by minimizing the norm of

    U^T r = U^T y − U^T A x = β − ΣV^T x,

where β_i = u_i^T y, i = 1, ..., m, and u_i is the ith column of U. If we change the coordinate system by letting w = V^T x, then our problem is to minimize

    (β_1 − σ_1 w_1)^2 + ... + (β_n − σ_n w_n)^2 + β_{n+1}^2 + ... + β_m^2.
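Given the λ parameters, this linear solve takes only a few lines. Here is a minimal sketch in NumPy (the article itself works in Matlab; the variable names, sample values, seed, and noise level below are our choices for illustration):

```python
import numpy as np

# Model: y(t) = x1*exp(lam1*t) + x2*exp(lam2*t), with lam1, lam2 < 0.
t = np.arange(0.0, 6.005, 0.01)            # 601 samples: t = 0, 0.01, ..., 6
lam = np.array([-0.3, -0.4])               # rate constants (assumed known here)
x_true = np.array([0.5, 0.5])              # amplitudes we hope to recover

A = np.exp(t[:, None] * lam[None, :])      # A[i, j] = exp(lam_j * t_i)
rng = np.random.default_rng(1)
y = A @ x_true + 1e-4 * rng.uniform(-1.0, 1.0, t.size)   # noisy observations

# Linear least squares via the SVD: minimize ||beta - Sigma V^T x||.
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
x = Vt.T @ ((U.T @ y) / sigma)             # w = Sigma^{-1} beta, then x = V w

print(x)                                   # close to x_true
print(sigma[0] / sigma[-1])                # condition number kappa(A)
```

The thin SVD (`full_matrices=False`) suffices because only the first n components of β enter the solution; the remaining components measure the unavoidable residual.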
In Problem 1, we see that the SVD gives us not only an
algorithm for solving the linear least-squares problem, but
also a measure of the sensitivity of the solution x to small
changes in the data y.
Problem 1.
a. The columns of the matrix V = [v_1, ..., v_n] form an orthonormal basis for n-dimensional space. Let's express the solution x_true as

    x_true = w_1 v_1 + ... + w_n v_n.

Determine a formula for w_i (i = 1, ..., n) in terms of U, y_true, and the singular values of A.

b. Justify the reasoning behind these two statements:

    A(x − x_true) = y − y_true − r   means   ||x − x_true|| ≤ (||y − y_true − r||)/σ_n;

    y_true = A x_true   means   ||y_true|| = ||A x_true|| ≤ ||A|| ||x_true||.

c. Use these two statements and the fact that ||A|| = σ_1 to derive an upper bound on ||x − x_true||/||x_true|| in terms of the condition number κ(A) = σ_1/σ_n and ||y − y_true − r||/||y_true||.
The solution to Problem 1 shows that the sensitivity of the parameters x to changes in the observations y depends on the condition number κ(A). With these basic formulas in hand, we can investigate this sensitivity in Problem 2.
Problem 2. Generate 100 problems with data x_true = [0.5, 0.5]^T, λ = [−0.3, −0.4], and

    y = y_true + εz,

where ε = 10^{−4}, y_true contains the true observations y(t), t = 0, 0.01, ..., 6.00, and the elements of the vector z are uniformly distributed on the interval [−1, 1]. In a figure, plot the computed solutions x^{(i)}, i = 1, ..., 100, obtained via your SVD algorithm, assuming that λ is known. In a second figure, plot the components w^{(i)} of the solution in the coordinate system determined by V. Interpret these two plots using Problem 1's results. The points in the first figure are close to a straight line, but what determines the line's direction? What determines the shape and size of the second figure's point cluster? Verify your answers by repeating the experiment for λ = [−0.3, −0.31], and also try varying ε to be ε = 10^{−2} and ε = 10^{−6}.
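Problem 2's experiment can be sketched directly (NumPy; the seed and names are ours). The spread of the w components should scale like ε/σ_i, so the component along the smallest singular direction scatters far more than the other:

```python
import numpy as np

t = np.arange(0.0, 6.005, 0.01)
lam = np.array([-0.3, -0.4])
x_true = np.array([0.5, 0.5])
A = np.exp(t[:, None] * lam[None, :])
y_true = A @ x_true

U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
rng = np.random.default_rng(2)
eps = 1e-4

X = np.empty((100, 2))                 # computed solutions x^(i)
W = np.empty((100, 2))                 # the same solutions in V coordinates
for i in range(100):
    y = y_true + eps * rng.uniform(-1.0, 1.0, t.size)
    w = (U.T @ y) / sigma              # w = Sigma^{-1} U^T y
    W[i] = w
    X[i] = Vt.T @ w

# The w cluster is axis-aligned, stretched by eps/sigma_i along axis i;
# mapped back through V, the x's hug a line in the direction of v_n.
print(W.std(axis=0), sigma)
```

Plotting `X` and `W` with a scatter plot reproduces the two figures the problem asks for.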
How Sensitive Is the Model to Changes in the λ Parameters?
Now we need to investigate the sensitivity to the nonlinear parameters λ. In Problem 3, we display how fast the function y changes as we vary these parameters, assuming that we compute the optimal x parameters using least squares.

Problem 3. Suppose that the reaction results in

    y(t) = 0.5 e^{−0.3t} + 0.5 e^{−0.7t}.

Next, suppose that we observe y(t) for t ∈ [0, t_final], with 100 equally spaced observations per second. Compute the residual norm as a function of various λ estimates, using the optimal values of x_1 and x_2 for each choice of λ values. Make six contour plots of the log of the residual norm, letting the observation interval be t_final = 1, 2, ..., 6 seconds. Plot contours of −2, −6, and −10. How helpful is it to gather data for longer time intervals? How well determined are the λ parameters?

From the results of Problem 3, we learn that the λ parameters are not well determined; a broad range of λ values leads to small residuals. This is an inherent limitation in the problem, and we cannot change it. Nonetheless, we want to develop algorithms to compute approximate values of λ and x as efficiently as possible, and we next turn our attention to this computation.
Solving the Nonlinear Problem
If we are not given the parameters λ, then minimizing the norm of the residual r defined in Equation 1 is a nonlinear least-squares problem. For our model problem, we must determine four parameters. We could solve the problem by using standard minimization software, but taking advantage of the least-squares structure is more efficient. In addition, because two parameters occur linearly, taking advantage of that structure is also wise. One very good way to do this is to use a variable projection algorithm. The reasoning is as follows: our residual vector is a function of all four parameters, but given the two λ parameters, determining optimal values of the two x parameters is easy if we solve the linear least-squares problem we considered in Problem 1. Therefore, we express our problem as a minimization problem with only two variables:

    min_λ ||r(λ)||_2^2,

where the computation of r requires us to determine the x parameters by solving a linear least-squares problem using, for instance, the SVD.

Although this is a very neat way to express our minimization problem, we pay for that convenience when we evaluate the derivative of the function f(λ) = r^T r. Because the derivative is quite complicated, we can choose either to use special-purpose software to evaluate it (see the Tools sidebar) or a minimizer that computes a difference approximation to it.
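The variable projection loop itself is short. In this sketch the outer minimizer is a crude refining grid search over λ, chosen only to keep the example self-contained; in practice a derivative-free or Gauss–Newton minimizer (such as Matlab's lsqnonlin) would take its place. The inner step projects out the linear parameters with an SVD-based pseudoinverse:

```python
import numpy as np

t = np.arange(0.0, 6.005, 0.01)
rng = np.random.default_rng(3)
y = (0.5 * np.exp(-0.3 * t) + 0.5 * np.exp(-0.4 * t)
     + 1e-4 * rng.standard_normal(t.size))       # synthetic noisy data

def f(lam):
    """Residual norm after projecting out the linear parameters x.
    The pseudoinverse keeps f finite even if lam1 == lam2 (rank-deficient A)."""
    A = np.exp(t[:, None] * np.asarray(lam)[None, :])
    x = np.linalg.pinv(A) @ y
    return float(np.linalg.norm(y - A @ x))

# Outer minimization over lambda only: repeatedly refine a 9x9 grid.
center, span = np.array([-0.5, -0.5]), 0.45
for level in range(6):
    grid = [center + span * np.array([a, b])
            for a in np.linspace(-1, 1, 9) for b in np.linspace(-1, 1, 9)]
    center = min(grid, key=f)
    span /= 3.0

print(center, f(center))    # lambda estimate and its residual norm
```

The flat valley that Problem 3 exposes shows up here too: many λ pairs reach essentially the noise-floor residual, so the located minimum should be judged by its residual norm, not by how closely it matches any particular λ.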
Tools

In a previous problem, we studied exponential fitting to determine directions of arrival of signals.1 This problem was somewhat better posed, because the data did not decay.

Fitting a sum of exponentials to data is necessary in many experimental systems, including molecule fluorescence,2 voltage formation kinetics,3 studies of scintillators using X-ray excitation,4 drug metabolism, and predator–prey models. Often, though, the publication of a set of rate constants elicits a storm of letters to the editor, criticizing the methods used to derive them. It is important to do the fit carefully and document the methods used.

A good source on perturbation theory, singular value decomposition (SVD), and numerical solution of least-squares problems is Åke Björck's book.5

Looking at a function's contours is a useful way to understand it. The Matlab function contour is one way to construct such a plot.

Gene Golub and Victor Pereyra described the variable projection algorithm Varpro, which solves nonlinear least-squares problems by eliminating the linear variables. Linda Kaufman noticed that each iteration would run faster if certain negligible but expensive terms in the derivative computation are omitted. Golub and Pereyra wrote a recent review of the literature on the algorithm and its applications.6

In Problems 4 and 5, if no standard nonlinear least-squares algorithm is available (such as lsqnonlin in Matlab), use a general-purpose minimization algorithm.

Although bad computational practices often appear in published papers involving fitting exponentials, many sources discuss the pitfalls quite lucidly. See, for example, Richard Shrager and Richard Hendler's work7 and Bert Rust's series of tutorials.8-10

References
1. D.P. O'Leary, "The Direction of Arrival Problem: Coming at You," Computing in Science & Eng., vol. 5, no. 6, 2003, pp. 60–70.
2. A.H. Clayton and W.H. Sawyer, "Site-Specific Tryptophan Dynamics in Class A Amphipathic Helical Peptides at a Phospholipid Bilayer Interface," Biophysical J., vol. 79, no. 2, 2000, pp. 1066–1073.
3. R.W. Hendler et al., "On the Kinetics of Voltage Formation in Purple Membranes of Halobacterium Salinarium," European J. Biochemistry, vol. 267, no. 19, 2000, pp. 5879–5890.
4. S.E. Derenzo et al., "Measurements of the Intrinsic Rise Times of Common Inorganic Scintillators," IEEE Trans. Nuclear Science, vol. 47, no. 3, 2000, pp. 860–864.
5. Å. Björck, Numerical Methods for Least Squares Problems, SIAM Press, 1996.
6. G. Golub and V. Pereyra, "Separable Nonlinear Least Squares: The Variable Projection Method and Its Applications," Inverse Problems, vol. 19, no. 2, 2003, pp. R1–R26.
7. R.I. Shrager and R.W. Hendler, "Some Pitfalls in Curve-Fitting and How to Avoid Them: A Case in Point," J. Biochemical and Biophysical Methods, vol. 36, nos. 2 and 3, 1998, pp. 157–173.
8. B.W. Rust, "Fitting Nature's Basic Functions," Computing in Science & Eng., vol. 3, no. 5, 2001, pp. 84–89.
9. B.W. Rust, "Fitting Nature's Basic Functions," Computing in Science & Eng., vol. 4, no. 4, 2002, pp. 72–77.
10. B.W. Rust, "Fitting Nature's Basic Functions," Computing in Science & Eng., vol. 5, no. 2, 2003, pp. 74–79.
Problem 4.
a. Use a nonlinear least-squares algorithm to determine the sum of two exponential functions that approximates the data set generated with λ = [−0.3, −0.4], x = [0.5, 0.5]^T, and normally distributed error with mean zero and standard deviation σ = 10^{−4}. Provide 601 values of (t_i, y(t_i)) with t = 0, 0.01, ..., 6.0. Experiment with the initial guesses x^{(0)} = [3, 4]^T, λ^{(0)} = [−5, −6] and x^{(0)} = [3, 4]^T, λ^{(0)} = [−1, −2]. Next, plot the residuals obtained from each solution, and then repeat the experiment with λ = [−0.30, −0.31]. How sensitive is the solution to the starting guess?

b. Repeat the runs of part (a), but use variable projection to reduce to two parameters, the two components of λ. Discuss the results.
To finish our investigation of exponential fitting, let's try dealing with some given data.

Problem 5. Suppose that we gather data from a chemical reaction involving two processes: one process produces a species and the other depletes it. We have measured the concentration of the species as a function of time. (If you prefer, consider the amount of a drug in a patient's bloodstream while the intestine is absorbing it and the kidneys are excreting it.) Figure 1 shows the data; it is also available at www.computer.org/cise/homework. Suppose your job (or even the patient's health) depends on determining the two rate constants and a measure of uncertainty in your estimates. Find the answer and document your computations and reasoning.

Finding rate constants is an example of a problem that is easy to state and often critically important to solve, but devilishly difficult to answer with precision.
Figure 1. Data for Problem 5. Given these measurements of species concentration (mg/ml) versus time (sec), or drug concentration (mg/liter) versus time (hours), find the rate constants.
Partial Solution to Last Issue's Homework Assignment
MORE MODELS OF INFECTION: IT'S EPIDEMIC
By Dianne P. O'Leary

Problem 1. Model 1 consists of the differential equations

    dI(t)/dt = τ I(t)S(t) − I(t)/k,
    dS(t)/dt = −τ I(t)S(t),
    dR(t)/dt = I(t)/k.

We start the model by assuming some proportion of infected individuals, for example, I(0) = 0.005, S(0) = 1 − I(0), and R(0) = 0. Run Model 1 for k = 4 and τ = 0.8 until either I(t) or S(t) drops below 10^{−5}. Plot I(t), S(t), and R(t) on a single graph. Report the proportion of the population that became infected and the maximum difference between I(t) + S(t) + R(t) and 1.

Answer: We've posted sample programs at www.computer.org/cise/homework. Figure A shows the results; 95.3 percent of the population becomes infected.

Problem 2. Instead of using the equation dR/dt = I/k, we could have used the conservation principle

    I(t) + S(t) + R(t) = 1

for all time. Substituting this for the dR/dt equation gives us an equivalent system of differential algebraic equations (DAEs); we will call this Model 2.

Redo Problem 1 using Model 2 instead of Model 1. To do this, differentiate the conservation principle and express the three equations of the model as My′ = f(t, y), where M is a 3 × 3 matrix.

Answer: Figure A shows the results, which, as expected, are indistinguishable from those of Model 1.

Problem 3.
a. Redo Problem 1 using Model 3, the delay differential equation model

    dI(t)/dt = τ I(t)S(t) − τ I(t − k)S(t − k),
    dS(t)/dt = −τ I(t)S(t),
    dR(t)/dt = τ I(t − k)S(t − k),

instead of Model 1. For t ≤ 0, use the initial conditions I(t) = 0, S(t) = 1, R(t) = 0, and let I(0) = 0.005, S(0) = 1 − I(0), and R(0) = 0. Note that these conditions match our previous ones at t = 0. Compare the results of the three models.

Answer: Figure B shows the results; 94.3 percent of the population becomes infected, slightly less than in the first models. The epidemic dies out in roughly half the time.

Problem 4. Let S, I, and R depend on a spatial coordinate (x, y) as well as t, and consider the model

    ∂I(t, x, y)/∂t = τ [I(t, x, y) + δ(∂²I(t, x, y)/∂x² + ∂²I(t, x, y)/∂y²)] S(t, x, y) − I(t, x, y)/k,
    ∂S(t, x, y)/∂t = −τ [I(t, x, y) + δ(∂²I(t, x, y)/∂x² + ∂²I(t, x, y)/∂y²)] S(t, x, y),
    ∂R(t, x, y)/∂t = I(t, x, y)/k.

To solve this problem, we will discretize and approximate the solution at the points of a grid of size n × n. Let h = 1/(n − 1), and let x_i = ih, i = 0, ..., n − 1, and y_j = jh, j = 0, ..., n − 1. Our variables will be our approximations I(t)_{ij} ≈ I(t, x_i, y_j), and similarly for S(t)_{ij} and R(t)_{ij}.
a. Use Taylor series expansions to show that we can approximate

    ∂²I(t, x_i, y_j)/∂x² = [I(t)_{i−1,j} − 2I(t)_{ij} + I(t)_{i+1,j}]/h² + O(h²).

We can derive a similar expression for ∂²I(t, x_i, y_j)/∂y².

b. Form a vector Î(t) from the approximate values I(t)_{ij} by ordering the unknowns as I_{00}, I_{01}, ..., I_{0,n−1}; I_{10}, I_{11}, ..., I_{1,n−1}; ...; I_{n−1,0}, I_{n−1,1}, ..., I_{n−1,n−1}. In the same way, form the vectors Ŝ(t) and R̂(t), and derive the matrix A so that our discretized equations become Model 4:

    dÎ(t)/dt = τ Î(t) .* Ŝ(t) − Î(t)/k + (A Î(t)) .* Ŝ(t),
    dŜ(t)/dt = −τ Î(t) .* Ŝ(t) − (A Î(t)) .* Ŝ(t),
    dR̂(t)/dt = Î(t)/k,

where the notation Î(t) .* Ŝ(t) means the vector formed from the product of each component of Î(t) with the corresponding component of Ŝ(t). To form the approximation near the boundary, assume that the (Neumann) boundary conditions imply I(t, −h, y) = I(t, h, y) and I(t, 1 + h, y) = I(t, 1 − h, y) for 0 ≤ y ≤ 1, and similarly for S and R. Make the same type of assumption at the two other boundaries.
Answer:
a. Since Taylor series expansion yields

    I(t)_{i−1,j} = I(t, x, y) − h I_x(t, x, y) + (h²/2) I_xx(t, x, y) − (h³/6) I_xxx(t, x, y) + O(h⁴)

and

    I(t)_{i+1,j} = I(t, x, y) + h I_x(t, x, y) + (h²/2) I_xx(t, x, y) + (h³/6) I_xxx(t, x, y) + O(h⁴),

we see that

    [I(t)_{i−1,j} − 2I(t)_{ij} + I(t)_{i+1,j}]/h² = I_xx(t, x, y) + O(h²).

b. The matrix A can be expressed as

    A = T ⊗ I + I ⊗ T,

where T = (τδ/h²) times the tridiagonal matrix with rows (1, −2, 1), except that its first row is (−2, 2, 0, ..., 0) and its last row is (0, ..., 0, 2, −2) to account for the Neumann boundary conditions, and T and I are matrices of dimension n × n. (The notation C ⊗ D denotes the matrix whose (i, j)th block is c_{ij}D. The Matlab command to form this matrix is kron(C,D), which means "Kronecker product of C and D.")

Figure A. Proportion of individuals infected by the epidemic from the ODE Model 1 or the DAE Model 2. (The plot shows the proportion of the population versus time, with curves for the infected, susceptible, and recovered groups.)

Figure B. Proportion of individuals infected by the epidemic from the DDE Model 3. (The plot shows the proportion of the population versus time, with curves for the infected, susceptible, and recovered groups.)
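The Kronecker construction takes one line per term; a minimal NumPy sketch follows (n, τ, and δ as in Problem 5). Because each row of the Neumann-modified T sums to zero, A annihilates constant grid vectors, which makes a quick sanity check:

```python
import numpy as np

n, h = 11, 0.1
tau, delta = 0.8, 0.2

# Tridiagonal T with the Neumann (reflecting) boundary rows.
T = (np.diag(np.full(n, -2.0))
     + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1))
T[0, 1] = T[-1, -2] = 2.0
T *= tau * delta / h**2

I = np.eye(n)
A = np.kron(T, I) + np.kron(I, T)   # acts on the stacked n*n grid vector

# A discrete Laplacian with Neumann conditions annihilates constants:
print(np.abs(A @ np.ones(n * n)).max())   # → 0.0
```

`np.kron` mirrors Matlab's `kron`, so the expression reads exactly like the formula A = T ⊗ I + I ⊗ T above.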
Problem 5.
a. Set n = 11 (so that h = 0.1), k = 4, τ = 0.8, and δ = 0.2, and use an ODE solver to solve Model 4. For initial conditions, set S(0, x, y) = 1 and I(0, x, y) = R(0, x, y) = 0 at each point (x, y), except that S(0, 0.5, 0.5) = I(0, 0.5, 0.5) = 0.5. (For simplicity, you need only use I and S in the model, and you may derive R(t) from these quantities.) Stop the simulation when the average value of either Î(t) or Ŝ(t) drops below 10^{−5}. Form a plot similar to that of Problem 1 by plotting the average value of I(t), S(t), and R(t) versus time. Compare the results.

b. Let's vaccinate the susceptible population, at a rate ν S(t, x, y) I(t, x, y)/(I(t, x, y) + S(t, x, y)). This rate is the derivative of the vaccinated population V(t, x, y) with respect to time, and this term is subtracted from ∂S(t, x, y)/∂t. Run this model with ν = 0.7 and compare the results with those of Model 4.
Answer: Figure C shows the results of Problem 5a, and Figure D shows those for Problem 5b. The infection rate without vaccination is 95.3 percent (very similar to Model 1), but with vaccination, it drops to 38.9 percent. Vaccination also significantly shortens the epidemic's duration.
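For reference, the baseline Model 1 that anchors these comparisons runs in a few lines. This is a self-contained sketch using a fixed-step RK4 integrator rather than a production ODE solver, with τ = 0.8 and k = 4 as in Problem 1:

```python
# Model 1: dI/dt = tau*I*S - I/k, dS/dt = -tau*I*S, dR/dt = I/k.
tau, k = 0.8, 4.0

def rhs(u):
    i, s, r = u
    return (tau * i * s - i / k, -tau * i * s, i / k)

def rk4_step(u, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = rhs(u)
    k2 = rhs(tuple(x + 0.5 * dt * d for x, d in zip(u, k1)))
    k3 = rhs(tuple(x + 0.5 * dt * d for x, d in zip(u, k2)))
    k4 = rhs(tuple(x + dt * d for x, d in zip(u, k3)))
    return tuple(x + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(u, k1, k2, k3, k4))

u, dt = (0.005, 0.995, 0.0), 0.01           # (I, S, R) at t = 0
while u[0] > 1e-5 and u[1] > 1e-5:          # stop when I or S is tiny
    u = rk4_step(u, dt)

infected = 1.0 - u[1]       # everyone who is no longer susceptible was infected
print(infected)             # about 0.95, matching the value reported for Figure A
```

Because the three right-hand sides sum to zero, I + S + R is conserved to rounding error, which is exactly the check Problem 1 asks for.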
Acknowledgments
I'm grateful to G.W. Stewart for helpful comments on this project.

Dianne P. O'Leary is a professor of computer science and a faculty member in the Institute for Advanced Computer Studies and the Applied Mathematics Program at the University of Maryland. Her interests include numerical linear algebra, optimization, and scientific computing. She received a BS in mathematics from Purdue University and a PhD in computer science from Stanford. She is a member of SIAM, ACM, and AWM. Contact her at oleary@cs.umd.edu; www.cs.umd.edu/users/oleary/.
Figure C. Proportion of individuals infected by the epidemic from the differential equation model of Problem 5a. (The plot shows the proportion versus time, with curves for the infected and recovered groups.)

Figure D. Proportion of individuals infected by the epidemic from the differential equation model of Problem 5b, including vaccinations. (The plot shows the proportion versus time, with curves for the infected, recovered, and vaccinated groups.)
C O M P U T E R   S I M U L A T I O N S

THE PENNA MODEL FOR BIOLOGICAL AGING AND SPECIATION
By Suzana Moss de Oliveira, Jorge S. Sá Martins, Paulo Murilo C. de Oliveira, Karen Luz-Burgoa, Armando Ticona, and Thadeu J.P. Penna
Editor: Dietrich Stauffer, stauffer@thp.uni-koeln.de

Biological evolution1 presents the same fundamental ingredient that characterizes statistical mechanical systems in their route to equilibrium: namely, the order–disorder conflict. Order, represented by a minimum of the free energy in physical systems, is related to the Darwinian principle of survival of the fittest; disorder, or entropy maximization, is driven by temperature in physical systems and genetic mutations in biological systems.

The Penna model for biological aging2 is based entirely on Darwinian evolution with mutations and is a representation of the Darwinian conflict particularly well suited for computer simulations. It has played a role similar to the Ising model for magnetic systems in the sense that it is a minimal model that retains only the essentials of Darwinian dynamics. Like the Ising model, it uses binary variables to represent genes: zeros for ordinary genes and ones for harmful ones. Although it originally focused on problems of biological aging, its application to several different evolutionary problems has substantially increased its scope. Our purpose here is to provide an updated review of recent results researchers have obtained with this model.

The Penna Model
In the original asexual version of the Penna model, each individual's genome is represented by a computer word (bit string) of 32 bits (each bit can be 0 or 1). Each bit corresponds to one year in the individual's lifetime; consequently, each individual lives for 32 years at most. A bit set to 1 means that the individual will suffer from the effects of a deleterious inherited mutation (genetic disease) in that and all following years. As an example, an individual with a genome 10100... would start to become sick during its first year of life and would become worse during its third year, when a new disease appears. In this way, the bit string represents a chronological genome. Alzheimer's disease is a good example of the biological motivation for such a representation: its effects generally appear in old age, although the corresponding defective gene is present in the genetic code from birth.

The extremely short size of the 32-bit string used in the model would be totally unrealistic if all our genes were related to life-threatening diseases. However, of the 10^4 to 10^5 genes in the human genome, only a subgroup will give rise to a serious disease at some moment in the individual's lifetime. Besides, there is no qualitative difference when 32, 64, or 128 bits are taken into account.3

One step of the simulation corresponds to reading one bit (locus) of all genomes. Whenever a new bit of a given genome is read, we increase the individual's age by one. For the individual to stay alive:

1. The number of inherited diseases (bits set to 1) already accumulated up to its current age must be smaller than a threshold T, which is the same for the whole population. In the example given earlier, if T = 2, the individual would live for only two years.

2. There is a competition for space and food given by the logistic Verhulst factor V = 1 − N(t)/N_max, where N_max is a parameter that characterizes the maximum population size the environment can support, and N(t) is the current population size. We usually take N_max to be 10 times larger than the initial population N(0).

At each time step and for each individual, the code generates a random number between 0 and 1 and compares it with V: if this random number is greater than V, the individual dies independently of age or genome. The smaller the population size, the greater the probability of any individual escaping from this random killing factor.

If the individual succeeds in staying alive up to a minimum reproduction age R, it generates b offspring in that and all following years (unless we decide to also set some maximum reproduction age). The offspring's genome is a copy of the parent's, except for M randomly chosen mutations introduced at birth. Although the model allows good and bad mutations, generally we consider only the
bad ones. In this case, if a locus that carries a bit 1 is randomly tossed in the parent's genome, it remains 1 in the offspring's genome; however, if this locus carries a bit 0, it is set to 1 in the mutated offspring's genome. In this way, the offspring is always as good as or worse than the parent in asexual reproduction. This fact does not forbid a stable population to be obtained, provided the birth rate b is greater than some minimum value (which Thadeu Penna and Suzana Moss de Oliveira obtained analytically).4 In fact, the population is sustained by those cases in which no new mutation occurs (when a bit already set to 1 in the parent genome is chosen). Such cases are frequent enough to avoid mutational meltdown, that is, extinction due to the accumulation of deleterious mutations. The reason for considering only harmful mutations is that they are 100 times more frequent than the backward ones (reverse mutations deleting harmful ones5).
In the sexual version of the model,6,7 individuals are diploid, with their genomes represented by two bit strings read in parallel. One bit string contains genetic information inherited from the mother, and the other has data from the father. To count the accumulated number of mutations and compare it with the threshold T, we must distinguish between recessive and dominant mutations. A mutation is counted if two bits set to 1 appear at the same position in both bit strings (inherited from both parents), or if it appears in only one of the bit strings but at a dominant position (locus). The code randomly chooses the dominant positions at the beginning of the simulation; they are the same for all individuals.
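Counting the active mutations in a diploid genome reduces to a few bitwise operations. The sketch below is illustrative (the function name and mask-based encoding are our own assumptions, not the authors' code):

```python
def active_mutations(strand_a, strand_b, dominant_mask):
    """Count active mutations: a locus counts if both strands carry a 1
    there, or if a single 1 sits at a dominant locus."""
    homozygous = strand_a & strand_b                      # 1 inherited from both parents
    dominant_single = (strand_a | strand_b) & dominant_mask
    return bin(homozygous | dominant_single).count("1")
```

The result is what gets compared against the threshold T at each age.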
The population is now divided into males and females. After reaching the minimum reproduction age R, a female randomly chooses a male with age also equal to or greater than R to breed. To construct the offspring's genome, we cut the mother's two bit strings in a random position, producing four bit-string pieces (crossing). Next, we choose two complementary pieces to form the female gamete (recombination). Finally, we randomly introduce m_f deleterious mutations (see Figure 1a). The same process occurs with the male's genome, producing the male gamete with m_m harmful mutations. The resulting bit strings form the offspring genome. The baby's sex is randomly chosen, with a probability of 50 percent for either one. This whole strategy repeats b times to produce the b offspring. The Verhulst killing factor already mentioned works in the same way as for asexual reproduction.
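The crossing-and-recombination step that produces one gamete can be sketched as follows (a hypothetical helper on integer bit strings; names and encoding are our choices):

```python
import random

def make_gamete(strand_a, strand_b, n_mutations, bits=32, rng=random):
    """Cut the two parental bit strings at one random locus (crossing),
    join two complementary pieces (recombination), then set n_mutations
    randomly chosen bits to 1 (deleterious mutations)."""
    cut = rng.randrange(1, bits)
    low_mask = (1 << cut) - 1
    # complementary pieces: low bits from one strand, high bits from the other
    gamete = (strand_a & low_mask) | (strand_b & ~low_mask)
    for _ in range(n_mutations):
        gamete |= 1 << rng.randrange(bits)   # deleterious: a bit is set to 1
    return gamete & ((1 << bits) - 1)
```

An offspring genome then combines one such gamete from each parent, with m_f and m_m mutations, respectively.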
Fundamental Issues
A very important parameter of the Penna model is the minimum reproduction age R. According to the mutation accumulation theory, Darwinian selection pressure tries to keep genomes as clean as possible until reproduction starts. For this reason, we age: Mutations that appear early in life are not transmitted and disappear from the population, while those that become active later in life, after the individual has already reproduced, can accumulate. This decreases our survival probability, but it doesn't risk the perpetuation of the species. One of the most striking examples of such a mechanism is the catastrophic senescence of the Pacific salmon and other species, called semelparous. In such species, all individuals reproduce only once over a lifetime, all at the same age. We can easily simulate this4,8 simply by setting a maximum reproduction age equal to the minimum reproduction one, R. After many generations, the inherited mutations have accumulated in such a way that as soon as reproduction occurs, the individuals die. It might seem cruel, but it's just Darwinian: nature is solely interested in the perpetuation of the species, which means reproduction. Those that have already fulfilled this goal can disappear from the population, and in so doing, not compete for food and space with the youngsters.
Questions related to the evolution of recombination and the modes of reproduction that they entail (in particular, the evolutionary advantage of diploid sexual reproduction) are of particular interest here. Earlier results, as well as Fortran programs for the asexual and sexual versions of the Penna model, appear elsewhere.9
The Advantage of Recombination
For many years, researchers have used several different models to study the question of why sex evolved. Some of these models justify sexual reproduction from intrinsic
Figure 1. Gamete generation during the reproduction process. The arrows indicate where a mutation has occurred: (a) diploid sexual reproduction, (b) triploid sexual gamete formation, and (c) diploid sexual individuals with a phenotypic trait.
genetic reasons, others from extrinsic or social reasons such as child protection, changing environments, or protection against parasites.10
Parasex investigates an intermediate strategy (between asexual-haploid and sexual-diploid reproduction). Parasexuality is any process in which more than one parent participates, without meiosis and fertilization, and gives as a result a new cell. Three phenomena lead to parasexual recombination in bacteria: conjugation, transduction, and transformation. Bacteria cannot live without at least one of these mechanisms.11 Parasex might have been an intermediate step in the evolutionary road from asexual to sexual reproduction; the purpose is to show that this strategy provides an advantage over simple asexual populations.
To simulate parasex,12 we add a new ingredient to the standard asexual model. At each time step, each individual changes its genome with some probability p. This new genome is generated as follows: The model randomly selects another individual in the population, and for each of the 32 bits, it makes a random choice whether the old bit is kept or the other individual's bit is inserted instead. We then find the number of active deleterious mutations (1 bits) from the newly formed bit string.
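A possible bitwise rendering of this genome-mixing step (illustrative names; a 32-bit integer stands for the genome):

```python
import random

def parasex(own, partner, bits=32, rng=random):
    """For each of the 32 loci, keep the old bit or take the partner's,
    each choice made independently at random."""
    mask = (1 << bits) - 1
    mix = rng.getrandbits(bits)              # 1 bits: keep own; 0 bits: take partner's
    return (own & mix) | (partner & ~mix & mask)
```

Every bit of the result comes from one of the two parents, so the mixed genome always lies "between" them: it contains their common 1 bits and no 1 bit that neither carries.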
Figure 2 compares the population from the standard asexual Penna model with that from the parasex model. This is a simple way to compare the fitness of species under the same environment: the larger the stationary population N is for the same N_max, the fitter the species. We see a fitness maximum at a probability p ≈ 0.1; these organisms also live longer with parasex than without it (not shown). If we allow positive mutations in the model, we lose parasex's advantages.
The Advantage of Diploid over Triploid Organisms
From the results just described, we can conclude that mixing different genomes presents a clear evolutionary advantage. In fact, researchers have used the Penna model to study the ability of sexual reproduction to generate larger diversity, thus promoting an evolutionary protection against genetic catastrophes,13 parasites,10 and other obstacles. But would polyploidal organisms enhance this diversity? Simulations of a triploidal Penna population14 make the argument against this possibility: triploid populations do not have better survival probabilities or larger population sizes, nor do they show larger diversity than their sexual counterpart.14 Genetic reproduction has to recombine material in the correct amount to balance the extra cost of reproduction involved when multiple parents are needed; more is not necessarily better!
To make this comparison, we must change the Penna model's rules for survival according to a recent study15 in which the researchers adopted a modified survival probability, thus generating sexual populations with sizes comparable to those of the asexual ones. Due to the competition induced by the Verhulst factor, if we don't introduce such a modification, the asexual population always dominates the sexual one, because the former produces twice the offspring of the latter (where only females give birth).
This modification consists of assuming that harmful mutation reduces the survival probability. At each iteration, or year, each individual survives with probability exp(−εm) if it has a total of m harmful mutations (taking into account dominant positions) in its whole genome. (It is killed if a random number greater than the survival probability is tossed.) ε is a parameter of the simulation, fixed from the start. An individual can now die for any one of three reasons:

• randomly, due to the Verhulst logistic factor,
• if its actual number of accumulated diseases reaches the limit T, or
• if its survival probability becomes too small.
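The modified survival test might be sketched as follows (EPSILON is an illustrative value for the parameter ε; the rest follows the rule above):

```python
import math
import random

EPSILON = 0.05   # illustrative value; the parameter is fixed at the start of a run

def survives_mutation_load(m, rng=random):
    """Survive with probability exp(-epsilon * m) for m active harmful
    mutations; die if the tossed random number exceeds that probability."""
    return rng.random() < math.exp(-EPSILON * m)
```

With m = 0 the individual always passes this test; each additional active mutation multiplies its yearly survival probability by exp(−ε).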
In the triploid population, individuals have genomic material in three different bit strings read in parallel. Mating is assumed to involve three individuals (two males and one female or vice versa). Homozygous positions are those with three equal bits at homologous loci; harmful mutations are active only if three 1 bits are at the same position or at a heterozygous locus at which harmful mutations dominate. Only females generate offspring. Crossing and recombination are performed by a random choice of a locus at which the three strings are cut, generating six pieces. The model
Figure 2. Variation of equilibrium population with parasex probability p. The figure shows at least 10^4 iterations at N_max = 2 million, averaged over the second half of the simulation. The horizontal line gives the result for the standard Penna model without parasex but with the same parameters otherwise.
randomly chooses two complementary pieces from the six to form one gamete from each of the three parents. It then randomly introduces deleterious mutations in each one of the three gametes (see Figure 1b). The baby is male or female with equal probability.
Figure 3 presents the time evolution of a diploid sexual population and two different triploid ones, showing that the diploid sexual population is larger than either of the other two. We also can calculate the survival rates, as well as the diversity, which we measure as a distribution of the Hamming distance between the genomes of any pair of individuals. The diploid population presents a higher diversity and a slightly better survival rate, with comparable longevities.14
Sympatric Speciation
Speciation involves the division of a species on an adaptive peak so that each part moves onto a new peak without either one going against the upward force of natural selection. If a physical barrier were to subdivide the habitat of a species, it would be easy to understand how speciation could occur: each part experiences different mutations, population fluctuations, and selective forces in what is called the allopatric model of speciation. In contrast, conceiving a single population's division and radiation onto separate peaks without geographical isolation (called sympatric speciation) is intuitively more difficult. Through which mechanism can a single population of interbreeding organisms be converted into two reproductively isolated segments in the absence of spatial barriers or hindrances to gene exchange? The models we present next can help us better understand how microscopic representations of Darwinian evolution generate this process.
Speciation Defined by a Single Bit
For speciation defined by a single bit, we first obtain speciation in the sexual Penna model by defining one bit position, taken at position 11, as influencing mating.16 Each individual has n = 0, 1, or 2 bits set at this position. A female with n such bits at position 11 selects only males with the same number n of such speciation bits. Due to the randomness of mutations and crossover, the children do not necessarily have n speciation bits set to 1; this randomness allows the emergence of a new species out of the original one, for which all n were 0. At every time step t, we have three populations N_n, depending on the number n = 0, 1, 2 of speciation bits set to 1, now co-evolving, and each of these three subpopulations is half male and half female.
To get speciation in this model, starting with one population and changing it into another population via random mutations is not sufficient. Instead, the goal is to start with one population and at the end have two populations coexisting with each other in stable equilibrium but without cross mating.
We get coexistence by turning the Verhulst factor into three separate Verhulst factors for the separate populations n = 0, 1, 2. Imagine, for example, that the original population n = 0 is vegetarian, and that the second population n = 2 emerging out of it consists of carnivores. Both populations are limited by the amount of food, but their food sources differ completely, thus there is no competition. That said, meat-eating females would not select any herbivore males for mating, and vice versa. We can regard the small population with n = 1 as one that feeds in both niches. We add half of it to n = 0 and half to n = 2 for the evaluation of the two intraspecific Verhulst factors V_0 = (N_0 + N_1/2)/N_max and V_2 = (N_2 + N_1/2)/N_max. The small population has the arithmetic average of these two Verhulst factors as its own food-limiting Verhulst factor.
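The three niche-dependent Verhulst factors follow directly from the subpopulation counts (a sketch with our own names; here, as in the text's expressions, V is written in the N/N_max form of a death probability rather than the survival factor 1 − N/N_max used earlier):

```python
def niche_verhulst(n0, n1, n2, n_max):
    """Intraspecific Verhulst factors: the n = 1 subpopulation is split
    half-and-half between the two food niches and itself feels the
    arithmetic average of the two factors."""
    v0 = (n0 + n1 / 2) / n_max   # V_0 = (N_0 + N_1/2)/N_max
    v2 = (n2 + n1 / 2) / n_max   # V_2 = (N_2 + N_1/2)/N_max
    v1 = (v0 + v2) / 2           # average for the mixed population
    return v0, v1, v2
```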
Figure 4 shows, for nearly 10^8 individuals, how the new species N_2 emerges from the old species N_0 within about 100 iterations. The intermediate population N_1 is only about 1 percent of the total, so we see two separate populations clearly emerging: sympatric speciation. Shifting the speciation position from 11 to 21 or 1 does not change the results much. If the birth rate changes from 1 to 1 + n, the new species ends up with a larger population than the original, but both may still coexist.16
Speciation with Phenotypic Selection
By adding features to the standard Penna model, we can represent phenotypic selection. The first modification deals with the Verhulst factor, which now depends on genetically acquired material. We add one or more extra pairs of bit
Figure 3. Time evolution of a diploid population (upper curve) and two triploid populations. In the central curve, reproduction involves one male and two females, whereas in the lower one it involves one female and two males.
strings to the original age-structured one to represent the individual phenotype. Each extra pair does not have age structure and stands for a particular multilocus phenotypic trait, such as size or color, which could have selective value. Reproduction and mutation dynamics are the same for both age-structured and new strings; for the latter, a mutation that changes a bit from 1 to 0 is also allowed (see Figure 1c).
A final addition refers to mating selectiveness. We introduce a locus into the genome that codes for this selectiveness, being sure to obey the Penna model's general rules of genetic heritage and mutation. If we set it to 0, the individual will not be selective in mating (panmictic mating); it will be selective (assortative mating) if we set this locus to 1. We set the mutation probability for this locus to 0.001 in all simulations. Selective females will choose mating partners that satisfy some criterion related to the sexual selection trait.
Assortative mating is essentially equivalent to speciation in this context. One purpose of these simulations is to follow the rise of the fraction of the population that becomes sexually selective.
Model with a single phenotypic trait. Field observations motivated the simulation of a model that uses a single trait.17 The intention was to mimic rainfall's seasonal effect on the availability of different-sized seeds in the Galapagos Islands and this availability's impact on beak size in the population of ground finches feeding on those seeds.18 Beak size is encoded by a single pair of bit strings added to the genome of each individual, by counting the number of recessive bit positions (chosen as 16) where both bits are set to 1, plus the number of dominant positions with at least one of the two bits set. It will therefore be a number k between 0 (meaning a very small beak) and 32 (a very large one). Its selective value is given by a fitness function F(k). For a given value of beak size k, F(k) quantifies the availability of resources for individuals with that particular morphology.
The researchers did the simulations with two different functional forms for the function F(k). At the beginning of the simulations, F(k) is a single-peaked function with a maximum at k = 16, representing large availability of medium-sized seeds. This means the whole finch population will compete for the same resources. After some number of iterations N_step, the function F(k) changes to a two-peaked shape, with maxima at k = 0 and k = 32. Now the food resources are either small or large seeds, with a vanishing number of medium-sized ones.
The probability of death by intraspecific competition at each time step is V(t) = N(t)/(N_max · F(k)), where N(t) accounts for the population that competes for resources available to individuals of beak size k. From step 0 up to step N_step, the whole population competes for the same general food resources. After step N_step, only small (large)-beaked individuals, those with k < 16 (k > 16), can compete for the small (large) seeds. For that reason, we can compute the death probability V(t) of an individual with k < 16 (k > 16) by assigning to N(t) the number of individuals with k < 16 (k > 16) plus half the population that has k = 16. An individual with k = 16 competes either for small or large seeds; this choice is random.
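A sketch of this niche-dependent death probability after the ecology becomes bimodal (function and argument names are our own; F is passed in as the fitness function):

```python
import random

def competition_death_prob(k, n_small, n_large, n_medium, n_max, F, rng=random):
    """V(t) = N(t) / (N_max * F(k)), where N(t) is the niche population:
    individuals on the same side of k = 16, plus half of those exactly at
    k = 16 (who pick one of the two niches at random)."""
    if k == 16:
        side_small = rng.random() < 0.5     # random niche choice at k = 16
    else:
        side_small = k < 16
    n = (n_small if side_small else n_large) + n_medium / 2
    return n / (n_max * F(k))
```

Here n_small and n_large count the individuals with k < 16 and k > 16, and n_medium those with exactly k = 16.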
Sexual selection also depends on this single trait. When the fitness function F(k) is single-peaked, there is no selective pressure for mating selectiveness, and the population is panmictic. After F(k) becomes double-peaked, females that mutate into selectiveness will choose mating partners that have beak sizes similar to their own: if a female has k < 16 (k > 16) and is selective, she will only mate with a partner that also has k < 16 (k > 16).
Figure 5 shows the distribution of phenotypes (in this case, beak size) of the population at time step N_step = 12,000 (up to which the fitness function was single-peaked) and at time step 50,000 (after it has been double-peaked for 30,000 time steps). This clearly establishes a stable polymorphism as a result of the split in food resources. The fraction of selective females in the population, which was 0 at the start of the run, has also increased to nearly 1.0 after the establishment of a double-peaked F(k). Now two distinct populations exist, each of which does not mate with a partner from the other because hybrids are poorly fit to the environment. Evolutionary dynamics made it advantageous to develop assortative mating in this bimodal ecology: as a consequence of reproductive isolation, one single species has split into two.
Model with two phenotypic traits. The model with two
Figure 4. Variation in time of N_0 (line, original species), N_1 (x, mixed genomes), and N_2 (+, new species), with N_max = 300 million. We start with 30 million males and as many females of the original species.
traits16 has two additional pairs of nonstructured bit strings in each individual's genome. One of them is related to fitness, like in the previous version, and the second introduces a representation of a trait that drives sexual selection. The purpose here is to make the model more realistic by assigning different traits to different functions and to study the interactions between these traits,19 as well as to address issues raised by recent observations of speciation in fish.20 Death and birth dynamics follow the rules already stated, and the phenotype space for this new sexual trait is mapped again onto an integer between 0 and 32. For mating, a female chooses, from among a random selection of a fixed number of males (chosen to be six in the simulations), a suitable mating partner whose phenotype for this second trait (color, say) matches her own. For the results we show in this article, the authors defined the mating strategy as follows: Call f the phenotype of the female and m the one for the male, for the sexual selection trait. Then, if the female has mutated into selective, she follows these rules:

• If f < 16, then she selects the male with the smallest m.
• If f > 16, then she selects the male with the largest m.
• If f = 16, then she randomly chooses to act as one of the above.
These rules amount to choosing a mating partner that further enhances the phenotype into which the trait is mapped. If we think about this mating trait as color, for instance, and assign f < 16 (f > 16) to a blue (red) character, a blue (red) female will choose the male that lies deepest in the blue (red) region.
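These mating rules can be sketched as follows (names are illustrative; `males` is the list of candidate males' sexual-trait values m, from which six are sampled):

```python
import random

def choose_mate(female_trait, males, rng=random):
    """Assortative choice among a random sample of six males, based on
    the sexual-selection trait m of each candidate."""
    sample = rng.sample(males, 6)
    # f < 16 prefers the smallest m; f > 16 the largest; f = 16 picks a rule at random
    prefer_small = (female_trait < 16) or (female_trait == 16 and rng.random() < 0.5)
    return min(sample) if prefer_small else max(sample)
```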
The simulations run with the same parameters as the ones for the single-trait version. Figure 6 shows that, as in the former case, the distribution of the fitness trait is single-peaked at k = 16 up to step N_step, as a consequence of the number of loci (16) in which the 1 allele is dominant, and moves into a polymorphism after the ecology becomes bimodal. The sexual selection trait also shows a single peak until step N_step, where it then splits the population into two groups. A strong correlation now develops between these traits, and the individuals with sexual selection phenotype < 16 (> 16) have their fitness trait < 16 (> 16). In other words, a female chooses a mate because of color, and its correlation with size lets the two of them generate viable offspring.
Sexual selectiveness also develops as a result of evolutionary dynamics. At the end of the simulation, all females are selective. Assortative mating and reproductive isolation are the proxies in this model to the development of two separate species out of the single one that existed at the beginning.
The Handicap Principle
Darwin documented the development of exaggerated secondary ornaments for sexual selection, for example, the size of the male peacock's tail. He suggested that the disadvantage to male survival induced by such characters (which also call the attention of predators) is compensated for by the preference of females for males bearing them. The handicap principle appears as a hypothesis for the origin or maintenance of these preferences,21 in which secondary sexual ornaments act as handicaps for males to test their quality, mainly in a population at its optimal fitness among genetically similar individuals.

Figure 5. Population morphology. This figure shows the population's morphology when only middle-sized seeds are abundant (circles) and when only small- and large-sized seeds are left (squares). We show the fraction of the population with each beak size.

Figure 6. Distribution of fitness (squares) and sexual (*) traits at the end of the simulation. These traits are correlated, and the population with the fitness trait to the left has its sexual trait also to the left of the plot.
The results we present next are an attempt to better understand how the handicap principle works in sexual selection.22 The version of the sexual Penna model used in the simulations considers the following main aspects: male qualities that females cannot observe, the signal males use to show this quality, and the information that females use to infer the quality of males when choosing one of them.
Instead of representing individuals by two age-structured bit strings, the model uses only one (haploid sexual individuals), which corresponds to the usual diploid Penna model with all the positions taken as dominant. It adds a nonstructured string to represent the secondary ornament,17 mapped onto an integer k that counts its number of 1 bits. Reproduction follows the usual strategy for the age-structured string: the strings from mother and father are cut in a random position, with one piece of each combined with the complementary piece of the other. The rule to generate the string that represents the newborn offspring's phenotype is to copy it from the father if the offspring is a male or from the mother if it is a female, with M_p random mutations. This rule represents some trait in the species that shows differences among sexes, such as the already mentioned tails in peacocks or antlers in deer.
The probability of death by intraspecific competition and the action of predators at each time is given by V/f(k) if k > 0, and 1 otherwise. V is the standard Verhulst factor N(t)/N_max, and the function f is given by f(k) = 1 for the females and f(k) = 1 − (A − k)/A = k/A for the males, where A is the size of the bit string, taken as 16 in this simulation. This fitness function expresses an environment in which the ideal male phenotype would be composed entirely of 1 bits, meaning, for instance, a short tail for the male peacock.
Sexual selection is simulated by having females prefer males with a small number of bits set to 1 in their phenotype. In the simulations, each female chooses, between two randomly selected males fit to reproduce, the one with the smaller value of k. This is the way in which the handicap principle is introduced in the model: a male with few 1 bits in its phenotype has its survival probability reduced but has the preference of females at the time of reproduction.
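Putting the two opposing pressures together, a sketch of the male death probability and the female's choice (names are our own; f(k) = k/A follows the males' fitness function described in the text, and the cap at 1 is our addition so the result stays a probability):

```python
A = 16   # length of the ornament bit string

def male_death_prob(k, verhulst):
    """Natural selection: death probability V/f(k) with f(k) = k/A;
    a male with k = 0 dies with certainty."""
    if k == 0:
        return 1.0
    f = k / A                        # f(k) = 1 - (A - k)/A = k/A
    return min(1.0, verhulst / f)    # capped at 1 (our addition)

def choose_male(k1, k2):
    """Sexual selection: between two males fit to reproduce, the female
    prefers the one with the smaller ornament value k (the handicap)."""
    return min(k1, k2)
```

Note the tension the model encodes: natural selection favors large k (f(k) = 1, the same as for females), while female choice favors small k.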
The simulation starts from a population with a random genetic load both in the age-structured and nonstructured parts of the genome. Figure 7 shows the main result, with the distribution of the phenotypes compared for three different situations. For the females, who are not subject to a phenotype-dependent environmental pressure or sexual selection, this distribution reflects only the random nature of the nonbiased mutation rule. For males suffering from the phenotype-dependent pressure of the environment alone, the distribution shows a peak at high values of k, which are favored by the fitness function mentioned earlier. When we put sexual selection into action, it more than balances the previous trend, and the phenotype distribution shifts toward smaller values of k.
The Penna model is by far the most used computational aging model for studying and simulating evolutionary phenomena. It can be easily implemented in a computationally efficient way due to the representation of the individual's genome by a computer word and Boolean operations. For those who prefer analytical results, a recent solution appears elsewhere.23 Although the results presented here describe attempts to better understand evolutionary phenomena, the Penna model can also be used to predict some important features of real populations. For instance, Penna, Adriana Racco, and Adriano Sousa used real data about the weight, size, and fertility of red lobsters to propose a new rule for fishing that guarantees an increase of available stock without decreasing profits.24 This same group is developing a study using data about Brazilian mortality rates to quantify mortality decreases due to medical improvement in the last century.
Acknowledgments
We thank Adriano O. Sousa and Dietrich Stauffer for helpful discussions on this subject over the last years, and the
Figure 7. Distribution of 1 bits in a phenotype string. Gaussian-like fits are plotted for each case (full lines) for females (open circles), males with only natural selection (full squares), and males with natural and sexual selection (full triangles).
MAY/JUNE 2004 81
Brazilian agencies Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), Fundação de Amparo à Pesquisa do Estado do Rio de Janeiro (FAPERJ), and Coordenação de Aperfeiçoamento de Pessoal de Ensino Superior (CAPES) for financial support.
References
1. C. Darwin, On the Origin of Species by Means of Natural Selection, Murray, 1859.
2. T.J.P. Penna, "A Bit-String Model for Biological Aging," J. Statistical Physics, vol. 78, nos. 5 and 6, 1995, pp. 1629-1633.
3. T.J.P. Penna and D. Stauffer, "Bit-String Ageing Model and German Population," Zeitschrift Phys. B, vol. 101, no. 3, 1996, pp. 469-470.
4. T.J.P. Penna and S. Moss de Oliveira, "Exact Results of the Bit-String Model for Catastrophic Senescence," J. Physique I, vol. 5, no. 12, 1995, pp. 1697-1703.
5. P. Pamilo, M. Nei, and W.H. Li, "Accumulation of Mutations in Sexual and Asexual Populations," Genetic Research, vol. 49, no. 2, 1987, pp. 135-146.
6. A.T. Bernardes, "Mutational Meltdown in Large Sexual Populations," J. Physique I, vol. 5, no. 11, 1995, pp. 1501-1515.
7. D. Stauffer et al., "Monte Carlo Simulations of Sexual Reproduction," Physica A, vol. 231, no. 4, 1996, pp. 504-514.
8. T.J.P. Penna, S. Moss de Oliveira, and D. Stauffer, "Mutation Accumulation and the Catastrophic Senescence of the Pacific Salmon," Physical Rev. E, vol. 52, no. 4, 1995, pp. R3309-R3312.
9. S. Moss de Oliveira, P.M.C. de Oliveira, and D. Stauffer, Evolution, Money, War and Computers, Teubner, 1999.
10. J.S. Sá Martins, "Simulated Coevolution in a Mutating Ecology," Physical Rev. E, vol. 61, no. 3, 2000, pp. R2212-R2215.
11. F.J. Ayala and J.A. Kiger Jr., Modern Genetics, Benjamin/Cummings Publishing Co., 1980.
12. S. Moss de Oliveira, P.M.C. de Oliveira, and D. Stauffer, "Bit-String Models for Parasex," Physica A, vol. 322, nos. 1-4, 2003, pp. 521-530.
13. J.S. Sá Martins and S. Moss de Oliveira, "Why Sex? Monte Carlo Simulations of Survival After Catastrophes," Int'l J. Modern Physics C, vol. 9, no. 3, 1998, pp. 421-432.
14. A.O. Sousa, S. Moss de Oliveira, and J.S. Sá Martins, "Evolutionary Advantage of Diploidal Over Polyploidal Sexual Reproduction," Physical Rev. E, vol. 67, no. 3, 2003, Art. No. 032903.
15. J.S. Sá Martins and D. Stauffer, "Justification of Sexual Reproduction by Modified Penna Model of Ageing," Physica A, vol. 294, nos. 1 and 2, 2001, pp. 191-194.
16. K. Luz-Burgoa et al., "Computer Simulation of Sympatric Speciation with Penna Ageing Model," Brazilian J. Physics, vol. 33, no. 3, 2003, pp. 623-627.
17. J.S. Sá Martins, S. Moss de Oliveira, and G.A. de Medeiros, "Ecology-Driven Sympatric Speciation," Physical Rev. E, vol. 64, no. 2, 2001, Art. No. 021906.
18. P.T. Boag and P.R. Grant, "Intense Natural Selection in a Population of Darwin's Finches (Geospizinae) in the Galapagos," Science, vol. 214, no. 4516, 1981, pp. 82-85.
19. A.S. Kondrashov and F.A. Kondrashov, "Interactions Among Quantitative Traits in the Course of Sympatric Speciation," Nature, vol. 400, no. 6742, 1999, pp. 351-354.
20. A.B. Wilson, K. Noack-Kunnmann, and A. Meyer, "Incipient Speciation in Sympatric Nicaraguan Crater Lake Cichlid Fishes: Sexual Selection Versus Ecological Diversification," Proc. Royal Soc. London B, vol. 267, no. 1458, 2000, pp. 2133-2141.
21. A. Zahavi, "Mate Selection: Selection for a Handicap," J. Theoretical Biology, vol. 53, no. 1, 1975, pp. 205-214.
22. A. Ticona and T.J.P. Penna, "Simulation of Zahavi's Handicap Principle," Brazilian J. Physics, vol. 33, no. 3, 2003, pp. 619-622.
23. J.B. Coe, Y. Mao, and M.E. Cates, "Solvable Senescence Model Showing a Mortality Plateau," Physical Rev. Letters, vol. 89, no. 28, 2002, Art. No. 288103.
24. T.J.P. Penna, A. Racco, and A.O. Sousa, "Can Microscopic Models for Age-Structured Populations Contribute to Ecology?," Physica A, vol. 295, nos. 1 and 2, 2001, pp. 31-37.
Suzana Moss de Oliveira is an associate professor at the Department of
Physics of Universidade Federal Fluminense (UFF), Brazil, where she also
holds a joint appointment at the Division of Strategic Planning. Her re-
search interest is the statistical mechanics of evolutionary systems. She is
a member of the Brazilian Physical Society (SBF). Contact her at
suzana@if.uff.br.
Jorge S. Sá Martins is an associate professor at the Department of
Physics of Universidade Federal Fluminense (UFF), Brazil. His research in-
terests range from the statistical mechanics of evolutionary systems and
nonlinear threshold systems to phase transitions in nuclear matter. He is
a member of the Brazilian Physical Society (SBF). Contact him at
jssm@if.uff.br.
Paulo Murilo C. de Oliveira is a professor at the Department of Physics
of Universidade Federal Fluminense (UFF), Brazil. His research interest is
the statistical mechanics of complex systems, with special focus on evo-
lutionary systems. He is a member of the Brazilian Academy of Sciences
and vice president of the Brazilian Physical Society (SBF). Contact him at
pmco@if.uff.br.
Karen Luz-Burgoa is finishing her dissertation work at the Department
of Physics of Universidade Federal Fluminense (UFF), Brazil. Contact her
at karen@if.uff.br.
Armando Ticona is finishing his dissertation work at the Department of
Physics of Universidade Federal Fluminense (UFF), Brazil. Contact him at
ticona@if.uff.br.
Thadeu J.P. Penna is an associate professor at the Department of
Physics of Universidade Federal Fluminense (UFF), Brazil. His research in-
terest is the statistical mechanics of complex systems, with special focus
on evolutionary systems. He is the author of the model discussed in this
article, and a member of the Brazilian Physical Society (SBF). Contact him
at tjpp@if.uff.br.
EDUCATION
Editor: Denis Donnelly, donnelly@siena.edu

STRING, RING, SPHERE: VISUALIZING WAVEFUNCTIONS ON DIFFERENT TOPOLOGIES
By Guy Ashkenazi and Ronnie Kosloff

82 Copublished by the IEEE CS and the AIP 1521-9615/04/$20.00 © 2004 IEEE COMPUTING IN SCIENCE & ENGINEERING

Quantum phenomena have penetrated new scientific fields and emerging technologies. Thus, there is an increasing demand for fresh approaches toward understanding quantum theory fundamentals in scientific communities that previously have not been exposed to the subject. Traditional teaching methods have relied heavily on a profound knowledge of the mathematical structure; students are usually able to incorporate quantum mechanics principles only when they are at such an advanced stage of study that the mathematical basis makes sense to them. Unfortunately, many beneficiaries of quantum theory, such as chemists, engineers, biologists, and computer scientists, who traditionally lack a more rigorous mathematical foundation, have been left behind.

New Teaching Methods
The emerging challenge, then, is to develop new teaching methods for quantum mechanics that address its principles in students' early stages of study without compromising a more advanced approach at later stages. At present, it is natural to partially base such an approach on the use of computers because of their great ability to simulate and animate. This task seems straightforward because it relies on the fact that potential students have become accustomed to using advanced graphical and computational tools that apply to their field.

Simulations and visualization of microscopic encounters are commonly used in teaching molecular dynamics science. Because most people find the motion of solid bodies intuitive, they can follow complex chemical events by visually observing the atoms' classical motion. Examples of this approach range from simple inorganic gas-phase reactions to complex systems such as enzymatic reactions. However, quantum mechanics lacks such an intuitive basis. Its basic entity, the wavefunction, is complex; therefore, it excludes even a direct connection to physical wave motion. Faced with these difficulties, we must devote considerable attention to developing and applying a visual language for teaching quantum mechanics. In this article, we address the design principles employed in the visualization of quantum wavefunctions, and their application in teaching the superposition principle.

The tools we describe here are only part of a larger set used to teach elementary quantum mechanics to chemistry and physics students for the last six years at the Hebrew University. Teaching experience supports the assumption that visualization enhances students' understanding of basic principles without compromising their mathematical rigor. Due to the tools' dynamical character, full appreciation of the methods requires online interaction. Readers should try these tools for themselves; students and interested parties can get free access through standard Web tools, such as those at www.fh.huji.ac.il/~guy/ChemBond/.

In this article, we will discuss the tools used to represent the quantum state of a single particle, which is described by a wavefunction. The wavefunction can be expressed as a sum of other wavefunctions (the superposition principle), each of which has a distinct property (such as a specific energy or momentum value). The composition of the quantum state determines the probability of measuring the specific value of the distinct property. The number and type of degrees of freedom a system possesses determines the topology on which the wavefunction is defined. For example, a wavefunction can describe a system in a one-dimensional (1D) open space, a system residing on a closed 1D ring, or a system on a 2D spherical surface.

Linear 1D Wavefunctions and Their Superposition
Let's start by visualizing a complex function of a single variable x, which represents a particle constrained to move in one dimension, a particle on a string, for instance. Popular examples using this topology are a particle in a 1D box, scattering by a step potential, and the harmonic oscillator. Think of a 1D complex function as a collection of vectors perpendicular to the x-axis, each one characterized by its length (magnitude) and by its angle in the complex
plane (phase). This requires the ability to visualize a 3D representation, which is harder to interpret for people accustomed to working with 2D graphs. To reduce the representation's dimensionality, the angle of the vectors in the complex plane can be color-coded.
This approach is demonstrated for the plane wave f(x) = A e^{ikx}, an eigenfunction of the free-particle 1D Hamiltonian. To simplify, the complex function is first reduced to a discrete matching rule between selected x values of the function's 1D domain and their complex f(x) values (see Figure 1a). Arrows that originate from the appropriate x position represent the complex values; their length represents the complex values' magnitude. Because all the f(x) values have the same magnitude (= A), all the arrows are the same length. The phase is represented in two ways: first, the arrows are rotated in the 2D plane perpendicular to the x-axis according to their phase value (zero phase is straight up, and positive phase change is counterclockwise); second, the arrows are color-coded (red denotes zero phase value, pink is π/2, blue is π, and violet is −π/2).
We chose these colors carefully to render this scheme more intuitive. Red and blue are primary colors widely associated with positive and negative (for example, colors on hot- and cold-water taps), so we chose them to represent the positive real and negative real phases. For the imaginary phases, we used less saturated ("more imaginary") colors, which are reminiscent of the primary colors: pink or "quasi-red" for the positive imaginary, and violet or "quasi-blue" for the negative imaginary. To keep the scale simple, other phase colors are generated by a continuous graduation between these four colors.
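To make the scheme concrete, here is a small C++ sketch of such a phase-to-color mapping. The interpolation between the four anchors follows the text, but the RGB anchor values are our own illustrative choices, not taken from the authors' applets:

```cpp
#include <array>
#include <cassert>
#include <cmath>

const double PI = 3.14159265358979323846;

// Map a phase angle (radians) to an RGB triple by linear interpolation
// among the four anchor colors used in the text: red (phase 0),
// pink (pi/2), blue (pi), and violet (3*pi/2, i.e., -pi/2).
std::array<double, 3> phaseToRGB(double phase) {
    const std::array<std::array<double, 3>, 4> anchors = {{
        {1.0, 0.0, 0.0},   // red    : phase 0
        {1.0, 0.6, 0.8},   // pink   : phase pi/2   (illustrative RGB)
        {0.0, 0.0, 1.0},   // blue   : phase pi
        {0.6, 0.3, 0.8}    // violet : phase 3*pi/2 (illustrative RGB)
    }};
    double p = std::fmod(phase, 2.0 * PI);
    if (p < 0) p += 2.0 * PI;          // normalize to [0, 2*pi)
    double t = p / (PI / 2.0);         // position in quarter turns
    int i = static_cast<int>(t) % 4;   // lower anchor index
    double frac = t - static_cast<int>(t);
    std::array<double, 3> rgb;
    for (int c = 0; c < 3; ++c)        // continuous graduation between anchors
        rgb[c] = (1.0 - frac) * anchors[i][c] + frac * anchors[(i + 1) % 4][c];
    return rgb;
}
```

A phase of 0 maps exactly to the red anchor and π to the blue anchor; intermediate phases blend smoothly, mirroring the continuous graduation described above.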
Because each value has a different phase (= kx), the arrows form a spiral with a wavelength of 2π/k. The next step is to represent a continuous, rather than a discrete, function. This is achieved by connecting all the arrows to form a 3D spiraling band (see Figure 1b). The band's width at each x value equals f(x)'s magnitude, and f(x)'s phase value determines its direction and color. Because there is redundancy in having two kinds of phase representations, we can discard one without losing any information. By unfolding the spiraling band, the 3D representation is reduced to a 2D representation, but the phase information remains in the band's color (see Figure 1c). In the region where the band is colored red, the arrows point upward (zero phase); in the regions where it is pink, the arrows
Figure 1. Coding phase by color. (a) A vector representation of complex numbers, (b) a 3D representation of a complex function,
(c) reducing the representation to 2D, and (d) using the 2D representation to visualize wave superposition. Educational Java
applets generated these graphical representations; they are available at www.fh.huji.ac.il/~guy/links/CISE2004.html.
point perpendicular to the band (π/2 phase), and so on.
Another common method of color-coding uses the entire color-wheel spectrum and encodes the phase angle as the hue component.1–3 We discovered that our four-color representation is easier to interpret than the full-spectrum representation, especially when dealing with wavefunctions' superposition. Figure 1d illustrates the superposition of two plane waves with equal weights: f(x) = A e^{ik1x} + A e^{ik2x}. The two upper bands are the plane waves (as can be seen from their constant magnitude and periodically changing phase), and the lower band is the superposition. It is easy to identify the regions of constructive and destructive interference. At x values where the two phases match (red with red or blue with blue), the superposition exhibits constructive interference (the three maxima); where the phases oppose (pink with violet), the superposition exhibits destructive interference (the four nodes). The representation clearly shows the resulting beat pattern in the envelope (the height of the band) and the harmonic carrier wave in the phase (the periodic change in color).
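The beat pattern in the envelope can be checked numerically. The following is a minimal C++ sketch of the two-wave superposition (our own illustration, not from the article), using the identity |A e^{ik1x} + A e^{ik2x}| = 2A|cos((k1 − k2)x/2)|:

```cpp
#include <cassert>
#include <cmath>
#include <complex>

const double PI = 3.14159265358979323846;

// Superposition of two equal-weight plane waves, f(x) = A e^{i k1 x} + A e^{i k2 x}.
std::complex<double> superpose(double A, double k1, double k2, double x) {
    const std::complex<double> I(0.0, 1.0);
    return A * std::exp(I * (k1 * x)) + A * std::exp(I * (k2 * x));
}

// The beat envelope: |f(x)| = 2A |cos((k1 - k2) x / 2)|.
// Nodes (destructive interference) occur where the cosine vanishes;
// maxima (constructive interference) where it reaches +/-1.
double envelope(double A, double k1, double k2, double x) {
    return 2.0 * A * std::fabs(std::cos(0.5 * (k1 - k2) * x));
}
```

At a node, x = π/(k1 − k2), the two waves are exactly out of phase and the sum vanishes, just as the pink-with-violet regions of Figure 1d suggest.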
Angular Wavefunctions and Their Superposition
The wavefunction representation we introduced in the previous section is easily extended into more complicated topologies. A different possible topology is encountered in particles that are constrained to move in a circular orbit, a particle on a ring, for instance. The most notable example of this topology is the 2D rigid rotor. This system is represented as a function of a single angular variable, φ. The conventional representation of an angular function f(φ) is by using polar diagrams, in which φ is the angle with the positive x-axis in the xy plane, and |f(φ)| is the distance from the origin to the point on the graph. Figure 2 shows the polar diagram of the function f(φ) = sin(φ), which is the angular part of a p-type atomic orbital (p_y in this case). This type of representation is very misleading for many students, who misinterpret the diagram's shape as that of the orbital, and believe the electron is orbiting the nucleus in a figure-eight orbit or is confined in the area of the two lobes.4 Both these misinterpretations include an additional dimension of motion (along the r polar coordinate) that the diagram does not represent. The polar diagram is also limited to representing only real functions and requires additional captioning in case the function has negative values (because it represents only the function's absolute value).
To emphasize the 1D topology of the wavefunction (a single variable φ), we propose an alternative representation. This representation is obtained from the linear color-coded representation by bending the x-axis in Figure 1c into a ring. The result is a colored band wrapped around a ring, whose width corresponds to the wavefunction's amplitude (its color corresponds to the phase).
This approach is demonstrated for the wavefunction family f(φ) = e^{imφ}, which are eigenfunctions of the free particle on a ring Hamiltonian. We can use these functions to study the reason for quantization in angular wavefunctions. Closing the string into a ring imposes a constraint on the wavefunction: because φ = 0 and φ = 2π represent the same point on the ring, f(0) must equal f(2π). When m = 1/2 (see Figure 3a), the wavefunction has different phase values at φ = 0 and φ = 2π, as is apparent from the abrupt color change at φ = 0. To obtain a continuous function, m must have an integer value, as Figure 3b shows. Therefore, the quantization of the angular wavefunctions (and consequently that of angular momentum) is an outcome of the ring's topology.
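The single-valuedness argument can be verified numerically: the ring demands e^{im·2π} = 1, which only integer m satisfies. A minimal sketch (our own illustration, not from the article):

```cpp
#include <cassert>
#include <cmath>
#include <complex>

const double PI = 3.14159265358979323846;

// f(phi) = exp(i m phi) on the ring.
std::complex<double> ringWave(double m, double phi) {
    return std::exp(std::complex<double>(0.0, m * phi));
}

// Mismatch between the two "ends" of the ring, |f(2 pi) - f(0)|.
// Single-valuedness requires this to vanish, which happens only
// when m is an integer; for m = 1/2 the band changes color abruptly.
double seamMismatch(double m) {
    return std::abs(ringWave(m, 2.0 * PI) - ringWave(m, 0.0));
}
```

For m = 1/2 the mismatch is |e^{iπ} − 1| = 2, the abrupt red-to-blue jump visible in Figure 3a; for any integer m it is zero.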
When dealing with superposition of angular wavefunctions, color-coding helps distinguish between positive and negative m values. When m is positive (see Figure 3b), the order of the colors for increasing φ is red → pink → blue → purple. When m is negative (see Figure 3c), the order of the imaginary colors is reversed: red → purple → blue → pink. When superimposing the two wavefunctions by summing them together, the real parts interfere constructively, while the imaginary parts interfere destructively (see Figure 3d). This results in increasing amplitude along the x-axis and a node along the y-axis, which is the angular part of the p_x atomic orbital. Because the energy of free angular functions (with no angular dependence in the potential) depends only on |m|, p_x is also an eigenfunction of the Hamiltonian. In a similar way, the two eigenfunctions with m = ±1 can be subtracted, causing the real parts to interfere destructively and the imaginary parts to interfere constructively (see Figure 3e). The resulting wavefunction is pointing along the y-axis and resembles the p_y orbital, except that it is purely imaginary
Figure 2. Polar diagram of the function sin(φ) = (1/2i)(e^{iφ} − e^{−iφ}). φ is the angle with the positive x-axis; the − sign on the lower circle indicates that sin(φ) is negative for π < φ < 2π.
rather than real. By multiplying the function by a constant phase of e^{−iπ/2} (see Figure 3f), we get the real function p_y. To show the similarity between this representation and the conventional polar diagram, we decreased the ring's radius in Figure 3f. As the ring's arbitrary radius approaches zero, the new representation reduces to a polar diagram with the added value of phase color, which allows the presentation of complex functions (compare Figure 3f to Figure 2).
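These superpositions reduce to the familiar trigonometric identities e^{iφ} + e^{−iφ} = 2 cos φ and (e^{iφ} − e^{−iφ})/i = 2 sin φ, which a short sketch can confirm (our own illustration, not from the article):

```cpp
#include <cassert>
#include <cmath>
#include <complex>

// Superpositions of the m = +1 and m = -1 ring eigenfunctions:
//   e^{i phi} + e^{-i phi}       = 2 cos(phi)  (angular part of p_x)
//   (e^{i phi} - e^{-i phi}) / i = 2 sin(phi)  (angular part of p_y)
std::complex<double> pX(double phi) {
    const std::complex<double> I(0.0, 1.0);
    return std::exp(I * phi) + std::exp(-I * phi);   // sum: real parts add
}

std::complex<double> pY(double phi) {
    const std::complex<double> I(0.0, 1.0);
    // Difference is purely imaginary (2i sin phi); dividing by i,
    // i.e., multiplying by the constant phase e^{-i pi/2}, makes it real.
    return (std::exp(I * phi) - std::exp(-I * phi)) / I;
}
```

Both results come out with zero imaginary part, matching the statement that dividing by i (a constant phase of e^{−iπ/2}) turns the purely imaginary difference into the real function p_y.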
Spherical Harmonics and Their Superposition
A closely related topology to the ring is the sphere, which adds a second angular variable, θ, the angle with the positive z-axis. The most notable example of this topology is the 3D rigid rotor, which is part of the solution of all central force systems, including the hydrogen atom. Again, to emphasize the system's 2D topology and avoid improper inclusion of distance from the origin as a variable (which often arises when using polar diagrams), the wavefunction is drawn on a sphere's surface. The wavefunction's phase is denoted by color, as before, but the amplitude is now encoded as opacity. The wavefunction is opaque at the maximum amplitude, partially transparent at medium amplitudes, and completely transparent at the nodes. The physical basis of this encoding comes from viewing opacity as a measure of probability.
We demonstrate this approach for the three spherical harmonics with l = 1, Y_1^m(θ, φ), which are eigenfunctions of the free particle on a sphere Hamiltonian. The spherical topology imposes two constraints on the wavefunction. The first concerns φ, and is the same as in the case of the ring. The second constraint concerns the poles (θ = 0 and π), in which the function's value must be the same for all φ values. This can be achieved either by setting m = 0 (see Figure 4a) or by having a node at the poles (Figures 4b and 4c). It is instructive to note the resemblance between Figures 3b and 3c and Figures 4b and 4c when viewed from the direction of the z-axis.
Encoding the amplitude with opacity does not provide a quantitative measure of it. However, the important features of the spherical wavefunctions, the direction of maximum amplitude and the existence of nodal planes, are easily observed. These features are also sufficient for determining the result of superposition of the spherical wavefunctions. Using similar arguments to those used in the previous section, it is easy to see that p_x = Y_1^{+1} + Y_1^{−1} (see Figure 4d), and p_y = (Y_1^{+1} − Y_1^{−1})/i (see Figures 4e and 4f). The third orbital in this set, p_z, is Y_1^0 (see Figure 4a). All three orbitals have
Figure 3. Wavefunctions on a ring. (a) f(φ) = e^{iφ/2}; (b) f(φ) = e^{iφ}; (c) f(φ) = e^{−iφ}; (d) f(φ) = e^{iφ} + e^{−iφ}; (e) f(φ) = e^{iφ} − e^{−iφ}; and (f) f(φ) = (e^{iφ} − e^{−iφ})/i.
maximum amplitude along the corresponding axis and a perpendicular nodal plane through the origin. These manipulations relate the three possible values for m (0 and ±1), when l = 1 in the hydrogen atom solution, to the three p orbitals used in chemistry in a simple, graphical way. Many students, who are never presented with images of Y_1^{+1} and Y_1^{−1}, fail to see the difference between the two sets.4
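The relation p_x = Y_1^{+1} + Y_1^{−1} is easy to check numerically, provided we use the same sign convention as the figures, Y_1^{±1} ∝ sin θ e^{±iφ} (the Condon–Shortley phase is omitted here; with it, the combinations acquire extra signs). The normalization constants below are the standard ones and are our own addition:

```cpp
#include <cassert>
#include <cmath>
#include <complex>

const double PI = 3.14159265358979323846;

// l = 1 spherical harmonics in the convention used in the figures:
//   Y_1^0       = sqrt(3/(4 pi)) cos(theta)
//   Y_1^{+/-1}  = sqrt(3/(8 pi)) sin(theta) e^{+/- i phi}
// (no Condon-Shortley sign on m = +1).
std::complex<double> Y1(int m, double theta, double phi) {
    const std::complex<double> I(0.0, 1.0);
    if (m == 0)
        return std::sqrt(3.0 / (4.0 * PI)) * std::cos(theta);
    return std::sqrt(3.0 / (8.0 * PI)) * std::sin(theta) *
           std::exp(I * (static_cast<double>(m) * phi));
}

// p_x = Y_1^{+1} + Y_1^{-1}: the e^{+i phi} and e^{-i phi} factors
// combine into 2 cos(phi), so the result is real and proportional
// to sin(theta) cos(phi), i.e., to x/r.
std::complex<double> px(double theta, double phi) {
    return Y1(+1, theta, phi) + Y1(-1, theta, phi);
}
```

The sum is purely real with magnitude 2·sqrt(3/(8π))·sin θ·cos φ: maximal along the x-axis, zero on the yz nodal plane, as the figures show.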
Using advanced computer graphics, we have demonstrated the ability of a graphical applet to illustrate the superposition principle in different topologies. The interactive ability to change parameters and follow their influence is a crucial aspect of the tool set we've described. Time evolution and its influence on the wavefunctions' superposition is an integral part of the proposed approach. As a result, the concept of unitary evolution becomes intuitive and is employed as a base for an axiomatic approach to quantum mechanics. Once the student masters the elementary steps, we can extend the wavefunction description to higher dimensions using the same principles and visual language. Thus, we have developed a tool for illustrating the superposition principle in the hydrogen atom and the hydrogen molecular ion in three dimensions as the natural next step. The insight gained by visualizing the same phenomena in different contexts contributes to the student's ability to abstract, which is key to understanding quantum phenomena in higher dimensions.
References
1. J.R. Hiller, I.D. Johnston, and D.F. Styer, Quantum Mechanics Simulations, John Wiley & Sons, 1995.
2. B. Thaller, Visual Quantum Mechanics, Springer-Verlag, 2000.
3. M. Belloni and W. Christian, "Physlets for Quantum Mechanics," Computing in Science & Eng., vol. 5, no. 1, 2003, pp. 90–96.
4. G. Tsaparlis and G. Papaphotis, "Quantum-Chemical Concepts: Are They Suitable for Secondary Students?," Chemistry Education: Research and Practice in Europe, vol. 3, no. 2, 2002, pp. 129–144.
Guy Ashkenazi is a lecturer of science education at the Hebrew University of Jerusalem. His research interests include chemical education and the integration of technology into higher education. He received his PhD at the Hebrew University. Contact him at guy@fh.huji.ac.il.
Ronnie Kosloff is professor of theoretical chem-
istry at the Hebrew University of Jerusalem. His
research interests include quantum molecular
dynamics and quantum thermodynamics. He
received his PhD at the Hebrew University. He
is a member of the International Academy of
Quantum Molecular Science. Contact him at
ronnie@fh.huji.ac.il.
Figure 4. Wavefunctions on a sphere. (a) Y_1^0, (b) Y_1^{+1}, (c) Y_1^{−1}, (d) Y_1^{+1} + Y_1^{−1}, (e) Y_1^{+1} − Y_1^{−1}, and (f) (Y_1^{+1} − Y_1^{−1})/i. The latitude lines designate θ = π/4, π/2, and 3π/4; the longitude lines designate φ = 0 to 7π/4 in π/4 increments.
SCIENTIFIC PROGRAMMING
Editors: Paul F. Dubois, paul@pfdubois.com
George K. Thiruvathukal, gkt@cs.luc.edu

DATA SHARING IN SCIENTIFIC SIMULATIONS
By Glenn Downing, Paul F. Dubois, and Teresa Cottom

Several physics processes modify the state of a scientific simulation over time. In fact, researchers often divide a simulation's development into areas called packages, according to physics specialization. In this article, we use the word "package" primarily to mean a portion of scientific software whose components communicate internally much more than they do with outside routines, but packages can take the form of third-party libraries for common mathematical or computer-science functions. Most parts of a simulation refer to the infrastructure portion of the state, so we can think of this portion as a package with lots of customers. How we share data within and between these packages is crucial to developer productivity. In this installment of Scientific Programming, we explore some of the pros and cons of the different ways to share data in C++ code.

Existing Issues with Sharing
Fortran 77 had only two scopes: common or local (modern Fortran added modules). C has external, file, or local scope. In actual practice in both languages, though, groups of variables are described in header files, which are to be included when compiling the translation units that hold the code's executable part. When a given header file is changed in any way, all of its "customers" must be recompiled. In addition to being concerned with how much of the program to recompile, we face additional issues:

• As new packages are introduced, name conflicts become increasingly likely.
• More source-code control merges occur as multiple people modify the same file.
• Each variant of a given physics process declares data to be allocated dynamically if that package is invoked, but to remain unused otherwise. This occupies space and clutters the visual and mental landscape if combined in a single data structure.

An object-oriented program in C++ encapsulates the information to be shared in a class in order to provide abstraction and encapsulation. Namespaces help eliminate clashes between packages, but the language provides a bewildering set of choices for actually sharing data. We want to share data without excessive entanglements, in ways that promote clarity and safety and are least subject to difficult-to-track errors.

Basic Sharing Techniques
In C++, we can implement global data as a class object that can have a public and nonpublic interface to provide abstraction and encapsulation, but how we initialize these objects is somewhat problematic.

C++ offers the following implementations of a global object:

• a global or class static object,
• a global or class static pointer to a heap object,
• a local static object, and
• a local static pointer to a heap object.

In deciding how to implement a shared object, we need to understand how our choice of implementation determines the answers to the following questions:

• When does C++ construct and destruct objects? C++ can construct a global or class static object before the first statement of main() and destruct it after main() returns, but it constructs a local static object the first time it calls the function it's in, again destructing it after main() returns.1
• Can we specify the initialization order among objects? C++ guarantees the initialization order of global and class statics within a translation unit, but not among different translation units.1 C++ initializes local statics in the order that it calls their respective functions.1
• Will C++ construct and destruct an object, even if it's never used? It does so for global and class static objects, but not for local static objects.
• Are there performance overheads when C++ constructs and destructs a global object or when it is used?
This section presents four approaches to creating sharable
global objects, each of which provides a different set of
trade-offs in terms of these issues.
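The construction-timing rules in the questions above can be observed directly with a small tracer class (our own illustration, not one of the column's examples): a namespace-scope object is constructed during static initialization, before main() runs, while a function-local static is constructed only on the first call, and only once.

```cpp
#include <cassert>
#include <string>

// Records the order in which Tracer objects are constructed.
// Declared before globalTracer in the same translation unit, so C++
// guarantees it is initialized first.
std::string initLog;

struct Tracer {
    explicit Tracer(const char* name) {
        initLog += name;          // note each construction as it happens
        initLog += ";";
    }
};

Tracer globalTracer("global");    // constructed before main() starts

Tracer& localTracer() {
    static Tracer t("local");     // constructed on the first call only
    return t;
}
```

Before localTracer() is ever called, the log contains only "global;"; the first call appends "local;", and later calls append nothing, since the local static is constructed exactly once.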
The Global Static Object Approach
The global static object approach consists of declaring a global static object extern in the header file and defining it in the source file. To minimize coding, we'll define a class Global with just one method f and invite the reader to imagine this fleshed out into a full-fledged abstraction. We'll also leave off the standard parts of the header files. We will be sharing an instance of Global and calling its method f(), which returns a string.

First, we declare the shared object in a header file:

// Global.hh
struct Global {
  std::string f () const {
    return "You called Global::f";}
};
extern Global x;

Naturally, we need to instantiate our shared object x:

// Global.cc
#include "Global.hh"
Global x;

When we use x, say, in our main program, we simply refer to x:

#include "Global.hh"
std::cout << x.f() << std::endl;

C++ will construct the Global object x before main() and destruct it after main().

If there were an analogous definition of another instance x2 of class Global2, we would not be able to specify an order of initialization between x and x2, short of defining both of them in the same translation unit.

C++ will construct and destruct the Global object, even if it's never used.

There are no performance overheads when C++ constructs or destructs the Global object, or when it's used.

This approach is both simple and adequate in many cases, but having C++ construct and destruct the Global object outside the bounds of main() poses some restrictions. In particular, the Global constructor or destructor must not depend on anything that is not also well defined outside the bounds of main(). For example, we might turn on a memory checker as the first statement of main() and turn it off as the last statement. In this case, the memory checker would be unable to meter the constructor or destructor's work. Also, some class libraries provide classes that are not well defined outside the bounds of main(); this can cause problems if we're using instances of such classes in our object. In short, this method is simple, but it gives us very little control, and it can be dangerous.
The Class Static Pointer with Implicit Initialization Approach
The class static pointer with implicit initialization approach uses reference counting to let us specify an initialization order among global objects, even if they are defined in different translation units. Scott Meyers describes this approach more completely in his book.2

Again, we first declare our class, but this time with an internal static pointer p rather than an external object x:

// Global.hh
struct Global {
  static Global* p;
  std::string f () const {
    return "You called Global::f";}
};

C++ rules require us to define the static pointer:

// Global.cc
#include "Global.hh"
Global* Global::p;

Using reference counting, a class GlobalInit will create and destroy the Global object and store it in Global::p:

// GlobalInit.hh
#include "Global.hh"
class GlobalInit {
public:
  GlobalInit () {
    if (!c++)
      Global::p = new Global();}
  ~GlobalInit () {
    if (!--c)
      delete Global::p;}
private:
  static int c;};
static GlobalInit x;

Of course, we also need to define the counter c:

// GlobalInit.cc
#include "GlobalInit.hh"
int GlobalInit::c;

This time we use our object via references through Global::p:

#include "GlobalInit.hh"
std::cout << Global::p->f() << std::endl;

C++ will construct the Global object before main() and destruct it after main().

If there were an analogous definition of Global2, we would specify the initialization order at compile time using the declared dependency between Global and Global2.

C++ will construct and destruct the Global object, even if it's never used.

Reference counting and use of the heap incur minimal performance overhead when C++ constructs and destructs the Global object. There's also a slight performance overhead when it's used because of the pointer's indirection to the heap.

This approach poses similar restrictions to those of the global static approach. The Global constructor and destructor must not depend on anything that is not also well defined outside the bounds of main().
The Class Static Pointer with Explicit Initialization Approach
The class static pointer with explicit initialization approach does not construct or destruct the Global object outside the bounds of main(). It uses a class static pointer for the Global object and creates and destroys the Global object as the first and last statements of main(). The files Global.hh and Global.cc are the same as in the last example, as is the use of the object via Global::p:

// main.c++
#include "Global.hh"
int main () {
  Global::p = new Global();
  std::cout << Global::p->f() << std::endl;
  delete Global::p;
  return 0;
}

C++ constructs and destructs the Global object within the bounds of main().

If there were an analogous definition of Global2, we would specify whether the Global object or the Global2 object is created or destroyed first by our explicit ordering of their creation and destruction statements.

C++ will construct and destruct the Global object, even if it's never used.

There's a minimal performance overhead when C++ constructs and destructs the Global object. Because of the pointer indirection to the heap, however, there is a slight performance overhead when the object is used.

This approach addresses the problem of construction and destruction outside the bounds of main(), but it is a very manual solution in that we must explicitly create the global objects in the correct order at the beginning of main() and then destroy them in the reverse order at the end of main().
The Local Static Object Approach
The local static object approach involves a lazy evaluation strategy, a term coined by functional-programming designers. The idea is to not create the Global object until we need it. We can do this by creating a local static object:

// Global.hh
struct Global {
  static Global& get () {
    static Global x;
    return x;}
  std::string f () const {
    return "You called Global::f";}};

Now we can reference through Global::get():

#include "Global.hh"
std::cout << Global::get().f() << std::endl;

C++ constructs and destructs the Global object within the bounds of main().

If there were an analogous definition of Global2, we would specify the initialization order by our order of use of Global and Global2 at runtime.

C++ will not construct and destruct the Global object if it's never used.

There is a small performance overhead when C++ constructs and destructs the Global object because the compiler must generate code to check if the Global object has already been created.
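The fourth implementation listed earlier, a local static pointer to a heap object, is not shown in the examples above; the following is our own sketch in the column's style. Like the local static object, it is created lazily on first use, but because the heap object is never deleted, its destructor never runs, which sidesteps destruction-order problems at program exit at the cost of a deliberate leak:

```cpp
#include <cassert>
#include <string>

// Local static pointer to a heap object: built on the first call to
// get(), like the local static object, but the object lives on the
// heap and is intentionally never deleted, so its destructor never
// runs and cannot collide with other objects' destruction at exit.
struct Global {
    static Global& get() {
        static Global* p = new Global();  // initialized on first call only
        return *p;
    }
    std::string f() const { return "You called Global::f"; }
};
```

Every call to Global::get() returns a reference to the same heap instance, and nothing is constructed at all if get() is never called.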
Discussion
The global static object approach is the simplest solution
S C I E N T I F I C P R O G R A M M I N G
Caf Thiruvathukal
I
n my rst ever Caf Thiruvathukal, Im serving up a three-
course menu featuring your favorite ingredients: pro-
gramming languages, operating systems, and the Internet.
I hope youre hungry!
Open-Sourcing Java
I have long been an advocate of Sun turning Java into an open-source language, having argued in favor of this idea in my books, articles, and lectures. I think this would be a win not only for the open-source community but also for Sun Microsystems itself. The move would galvanize the Java community's interest (again). As Java grows older, its evolution is beginning to follow a pattern known all too well to die-hard C++ programmers: reluctant evolution. We need not look further than the recent introduction of the JDK 1.5 beta 1, which has been more than 1.5 years in the making. Language features, such as generics (templates, to C++ programmers), are only now appearing formally in a Java implementation, after almost nine years!
I'm not optimistic that Sun is about to turn the Java effort into an open-source one, despite the best efforts of Eric Raymond (president of the Open Source Initiative, www.opensource.org, and author of The Cathedral and the Bazaar) and IBM, who both issued public letters to Sun arguing why Java should become yet another open-source initiative. However, the reasoning behind both letters is compelling. Raymond argues that Sun would be able to leverage the open-source community, not only as a development resource, but also as a distribution resource. Downloading and installing the JDK onto any platform (let alone Linux) has become an exercise somewhat akin to performing a root canal. There is no ostensible reason for this, except to make the lawyers rich, and the worst aspect is that none of the major Linux distributions can distribute the JDK legally, making Java one of the few language distributions (unlike Python or Perl) not available legally on Linux systems by default. Raymond is correct in raising the distribution issue, although he might be overestimating the impact on the overall potential for Java via Linux. Linux still accounts for a trivial percentage of desktop usage, and it's unclear how much server-side impact Java enjoys compared to other languages like Perl and Python, especially on open-source platforms.
IBM, in turn, argues that it is the largest Java developer and is contributing significantly to the language's future via its Eclipse effort (which I plan to comment on in a future Café). Furthermore, efforts such as Eclipse have garnered significant market share in the Java tool space and represent at least one example of a commercial venture turned open source (anyone remember IBM VisualAge?) that not only became better than the original but also created a major impact, even by open-source standards.
My view is that Sun should transition Java to a true open-source and community effort at its earliest opportunity. Even in economic terms, the prospect appears to be a no-brainer. Java contributes insignificantly to Sun's bottom line; competitors (such as IBM) have been more successful in this regard. Given the number of Java efforts outside Sun that continually bear more fruit, Sun's best efforts should be designed to support Java and make it easy for others, even competitors, to deploy. IBM gave away the crown
and, within its limitations, very nice.
The class static pointer with implicit initialization approach is more complex, but necessary if we need to define global objects outside the bounds of main().
We don't recommend the class static pointer with explicit initialization approach because it's too manual and, therefore, error-prone.
Overall, we recommend the local static object approach because it's very dynamic. Local static objects are created in the order dictated by their interdependencies, and we can specify that order at runtime. Another benefit of this approach is that the global object is never created if it isn't used.
Once Functions
While the techniques just described are appropriate for most cases, there is an alternative for a special but frequent case: we want a singleton object in our program that may or may not need to be created. When the singleton is needed, we might have several potential places that need the object and not know which of these will be called first, but in all cases, we want just one object to share.
In a variation on the local static object approach, the Eiffel programming language lets you explicitly define a "once function,"3 which calculates a return value the first time it's called. Subsequent calls return the same object. In Eiffel, a
jewels (its development tools) and is making money hand over fist on Java consulting services and innovative server products. Sun needs to think beyond Java itself and remove the constraints on its evolution to ensure a brighter future.
Gentoo Linux
I've recently been switching almost all my research, development, and teaching systems over to Gentoo Linux (www.gentoolinux.org), one of the few truly free Linux distributions left after the recent migration of both RedHat and Mandrake Linux to pure enterprise plays. Granted, both continue to offer free versions via the Fedora and Cooker initiatives, respectively; but I consider these efforts unfriendly to academic and experimental users such as myself who want a free option that in no way limits our ability to trust the software for serious work.
Gentoo Linux represents one of the best options available to hackers today. It is available for virtually all architectures, including x86 and its variants, PowerPC, SPARC; the list goes on (and seems to keep growing). It would be impossible to go into every detail in this Café, but you can also do many things that are unique to the Gentoo platform:
Optimized kernel configuration. You can build a kernel that takes advantage of the processor actually running on your system. Although this is not totally to Gentoo's credit, its platform does not limit you to a default kernel build like so many other Linux distributions. And reconfiguring the kernel at any time without having to figure out and remove complex dependencies is easy.
Portage. The Portage system, which has its roots in the FreeBSD project, is a metadata-driven approach to package management, configuration, and installation. Once a Gentoo system is configured, Portage maintains it. It keeps all packages up to date, including the kernel, by using the emerge command, which, in consultation with local metadata, goes to the Internet and fetches the package to be upgraded (along with any dependent packages). You seldom need to reboot when working with Gentoo, and (in theory) installation occurs once on any given system.
Live CD. When things go wrong, you can always boot with something called a Gentoo Live CD and perform regular system maintenance with well-known Unix commands. Live CDs have recently gained popularity in the Linux community. The idea is that the CD has a fully functional Linux distribution, which is a superior approach to having a recovery CD (the preferred approach in most major Linux distributions).
I cannot say enough about this splendid Linux distribution. It's the first distribution that truly lets you keep in touch with the latest and greatest open-source software while simultaneously staying focused on a consistent user experience and an intuitive system-administration model.
Content Management Systems
Many of my students and readers have commented on how beautiful and easy to use my site is. Thanks. :-) However, I cannot take full credit. Over the years, I have grown tired of maintaining HTML statically and have often found myself with out-of-date content. During the past year, I began an investigation into content management systems with my friend and colleague, Konstantin Laufer. One so-called CMS that we found particularly impressive was the Plone CMS (www.plone.org). Think of it as a file manager (like Windows and other desktops) that supports typed content. The various content types include folders, documents (in plaintext, structured text, or HTML), links, and events; it also supports others (such as photo albums and photos) via plug-ins.
I'm not sure whether I could live without CMS software anymore. It lets me make a more or less complete transformation from chaos (a predominant characteristic in most realms in which HTML is still crafted by hand or via authoring tools such as DreamWeaver) to sensibility. The best aspect, of course, is that almost none of the content is maintained in HTML, but rather in a variant of plaintext called structured text. We'll tell you more about Plone in a future article.
routine that takes no arguments can be called without writing an empty set of parentheses after it, so such a function is indistinguishable from a data object as far as the consumer is concerned. This is ideal: you end up with a name that just looks like data and doesn't exist until someone uses it.
In C++, we can simulate this to a degree, especially if no finalization is required. Suppose, for example, that we want to share one instance of class B under the name theB. Assume class B is defined in header file B.hh. This time, the static pointer is internal to a function:
time, the static pointer is internal to a function:
#include B.hh
B *theB ()
{
static B *pTheOneTrueB = 0;
if (!pTheOneTrueB) {
pTheOneTrueB= new B (...);
}
return pTheOneTrueB;
}
We access the object via the function. Assuming B is similar to our first class Global, we might do

std::cout << theB()->f() << std::endl;

The object returned by theB() would be created the first time it was used, and only then. The function theB() could also return a B& if you wish, although this method is not suitable for objects that must be finalized upon exit.
The once function method separates the existence of theB from the design of class B. You might want a future program with lots of unshared Bs and four shared Bs. The once function method easily enables this without modifying the class B; the only global item it creates is a function, which can be declared static if all the sharing is to occur within one translation unit.
Sharing Across Packages
All the techniques we have explored so far involve sharing objects by sharing header files. Such an approach runs into scaling problems as the number of packages in a program increases.
Having more than one variant for a given physics process is quite common. Imagine, for example, alternate hydrodynamic algorithms that advance a variable representing density in time. Other parts of the program need to use this density, but if we have a variable density declared in both Hydro1.hh and Hydro2.hh, we can't include both headers in a consumer package. It is therefore tempting to make a header file Hydro.hh that declares density, and to have the two hydro packages use it. However, these packages will have additional variables peculiar to their own implementations, so what typically happens in practice is that people place everything either package might need into Hydro.hh. The worse temptation is to create some sort of global area that holds all the variables to be shared.
Assuming we can solve this problem with some discipline, we also must look at this situation from the consumer's viewpoint. Sharing header files means increasing the risk of compulsory recompilation when a supplier package changes its header file. Unless we use namespaces carefully, with explicit qualification, we also increase the risk of name collisions. Some developers pass needed values around via long argument lists, but this practice has many bad properties with respect to scaling, readability, maintenance, and safety.
Conceptually, a data registry is a middleman that keeps track of objects by name. A producer package registers the object with the registry, which might be as simple as a standard map object mapping names to void pointers. A consumer supplies the object name to request the pointer from the registry, casts the void pointer into a pointer of the correct type, and uses it. Project members then agree on what objects to publish under what names.
Typically, a consumer of a given quantity accesses the registry to get the pointer once at the beginning of the package's turn in the cycle, so the overhead is insignificant. In programming our package, the object we are sharing almost appears to be something local.
An Unsafe Registry
We can approach the registry concept in different stages. First, let's look at a registry that is simple but has no safeguards:
// registry.hh
#include <iostream>
#include <map>
#include <string>

namespace Registry {
class Register
{
private:
    std::map<const std::string, void*> m;
public:
    void put(const std::string& name, void* t)
    {
        m[name] = t;
    }
    void* get(const std::string name) const {
        std::map<const std::string,
                 void*>::const_iterator entry = m.find(name);
        if (entry == m.end()) {
            std::cerr << name << " not registered, check name.";
            throw "registry error"; // needs a real exception here
        }
        return entry->second;
    }
};
} // end namespace Registry
In our program, we use a once function to share a single registry object. Producers put names and pointers to objects they wish to publish, and consumers get these pointers by name. There must be explicit casting between the actual types and the void*s to satisfy C++'s type system. We can use some macros to make it more elegant; we'll show examples in a more complex setting later.
Note that the type of object being shared must still be available to the consumer package. There is less point to sharing an object through a registry if its type is unique to the producing package. However, this is not typically the case for the major variables in a simulation, which tend to represent fields of various sorts. Some small number of infrastructure classes declaring things like scalar and vector fields, possibly with variants for different centerings, is often defined for use by all physics packages. Those types are declared in header files everyone uses, not in the specific producer packages.
Such an approach can be quite successful, and we can do it in C, C++, and Fortran with pointers. However, this simple registry has two weaknesses:
• If the consumer is incorrect about what type the object really is, a disaster will occur (a type error).
• If the consumer code misspells the name of the item to be retrieved from the registry, either a runtime error will occur when the registry cannot find it or the registry will return the wrong item (a typo error).
In short, a simple name-to-void* mapping is not type or typo safe. We can overcome both these weaknesses in C++.
A Basic Templated Registry
Can we do something about the lack of type and typo safety in our registry? In a word: yes. Let's look at a simple implementation of a registry that detects type errors and increases the chance of detecting a typo at runtime.
First, to deal with the type issue, we could make the registry object itself templated on the type it holds:
// registry.hh
#include <iostream>
#include <map>
#include <string>

namespace Registry {
template <class T>
class Register
{
private:
    std::map<const std::string, T*> m;
public:
    void put(const std::string& name, T& t)
    {
        m[name] = &t;
    }
    T& get(const std::string name) const {
        typename std::map<const std::string,
                          T*>::const_iterator entry = m.find(name);
        if (entry == m.end()) {
            std::cerr << name << " not registered, check name and type.";
            throw "registry error"; // needs a real exception here
        }
        return *(entry->second);
    }
};
We also define a templated once function to access the registry object for a given type:

template <class T>
Register<T>* registry(T* t) {
    static Register<T>* p = 0;
    if (!p) { p = new Register<T>; }
    return p;
}
} // end namespace Registry
Then we add some macros to make it look nice for the user:

#define EXPORTAS(name, regname) \
    Registry::registry(&name)->put(#regname, name)
#define EXPORT(name) EXPORTAS(name, name)
#define IMPORTAS(typename, name, regname) \
    typename& name = \
    Registry::registry(static_cast< typename* >(0))->get(#regname)
#define IMPORT(typename, name) IMPORTAS(typename, name, name)
#endif
The package that wants to publish the data uses these macros like this:

#include "registry.hh"
static int w = 3;
static std::vector<float> x;
static int* wp;

void somephysics (void) {
    // code giving these variables values omitted ...
    EXPORT(w);
    EXPORT(x);
    EXPORT(wp);
}
Finally, the consumer can get these values as follows. In the case of wp, the user decides to give it a different name, wmine, locally:

#include "registry.hh"

void consumer(void)
{
    IMPORT(int, w);
    IMPORT(std::vector<float>, x);
    IMPORTAS(int*, wmine, wp);
}
Unfortunately, we can introduce either of the following errors and the code compiles without error:

IMPORT(double*, wp);
IMPORT(int*, wpspellingerror);

We say "unfortunately" because both would create runtime errors. In the first case, nothing named wp has been registered with the registry templated on double*, which means we're looking for the correct name in the wrong registry. In the second case, we're looking for the wrong name in the correct registry.
Obviously, it would be better to detect both these errors at compile time. Also, while providing a renaming capability, we haven't really used namespaces properly.
Compile-Time Detection of Type and Typo Errors
We have developed a registry that makes both type and typo errors detectable at compile time, but it contains some specializations to our own field classes and is too complex to present in this article. However, we can present a simplified sketch of the basic ideas. The first step to exposing data to the registry is to fully define the data's type and name as they will be accessed from the registry. The registry includes some convenient macros for this purpose:

DECLARATION(namespace, dataname, typeparam);
  namespace - namespace to assist in uniquely qualifying the data
  dataname  - name which the data is registered under
  typeparam - type of the data registered
Basic example:

// SimulationVars.hh
#include "registry.hh"
DECLARATION(Simulation, Time, double);

The DECLARATION creates a unique data type that the registry uses as the template parameter for the storage and retrieval functions. The above statement expands to this:

namespace Simulation {
    struct Time {
        typedef double Type;
        typedef double ReturnType;
        static const char* Name() { return "Simulation::Time"; }
    };
}
What we're doing takes a little getting used to: we've declared the struct Time inside namespace Simulation. This is now a unique type, which we can use as the template argument to get and put functions in a registry. In our basic example, we used the variable's name and type for this purpose, but now we are going to use this artificial construct. Note that Simulation::Time is just a compiler artifact and is never instantiated. We can use the value Name() as the key in our registry's map.
This type Simulation::Time in turn defines the type of the variable to be registered (Type), and the type as available to consumers (ReturnType). A similar macro CONSTDECLARATION makes ReturnType const double; use of that macro to publish makes the data read-only for other packages.
The second step to making data available through the data registry is to physically register the data's address. The REGISTER macro is available for this purpose:

REGISTER(name, variable);
  name     - name, a unique type created by a DECLARATION
  variable - variable to be registered as entry name

Basic example:

/* Simulation.hh declares package stuff,
   including double time; */
// Simulation.cc
#include "registry.hh"
#include "SimulationVars.hh"
#include "Simulation.hh"
....
time = /* some value */;
REGISTER(Simulation::Time, time);
This time we use a single registry with templated member functions put and get. The use of the REGISTER macro really makes a call to the registry's put method, templated on the type Simulation::Time. Our DECLARATION macro created this type in the Simulation namespace. Our REGISTER statement expands into something like this:

registry.put< Simulation::Time >((void*) &time);
To access the variables that have been registered, we use the IMPORT macro. Assume the owner of our time variable puts the DECLARATION in a header file SimulationVars.hh:

IMPORT(name, variable);
  name     - name, a unique type created by a DECLARATION
  variable - name of local variable for accessing the registered variable

Basic example:

#include "registry.hh"
#include "SimulationVars.hh"
IMPORT(Simulation::Time, time);
... now use time ...
/* note, we could have used a different name than time */
The IMPORT statement is a macro that retrieves the data from the registry and makes it accessible via the name supplied as the variable argument. IMPORT creates the following line of code:

Simulation::Time::ReturnType & time
    = registry.get< Simulation::Time >();

The consumer cannot get the type declaration wrong. Operating on this variable in a manner inconsistent with its type means it won't compile. If the user spells Simulation::Time incorrectly, the compiler will issue a "not a type" error. If the consumer does everything right, but the publisher does not actually publish the value, the result is an exception. This is the most we can ask for.
Note that the consumer here includes only the header file SimulationVars.hh, which is where the author of the Simulation package uses DECLARATION macros to publish the list of things available to consumers. This header file will be changed much less frequently than the details of the Simulation.hh header, which developers of that package actively use.
Given someone else's package that we don't want to modify, we can write some wrapper code to use the registry to connect it to our code by importing and exporting via the registry.
Of course, in this article, we haven't dealt at all with the thorny issue of sharing data between processors in a parallel job or between processes in a distributed job. Those concepts seem today to be really above the level of the programming language. Future research on programming languages is needed to let us talk about what we're actually doing in such programs. We conceptually talk about objects as existing across the set of processors, but we have no way to talk about their global properties within our existing languages. In the single-processor case, C++ has probably gone, as they say in the musical Oklahoma!, "about as fur as they c'n go."
References
1. B. Stroustrup and M. Ellis, The Annotated C++ Reference Manual, Addison-Wesley, 1990.
2. S. Meyers, Effective C++, Addison-Wesley, 1992.
3. B. Meyer, Eiffel: The Language, Prentice Hall, 1992.
Glenn Downing is on the faculty of the Department of Computer Sciences at the University of Texas, where he teaches CS315: Algorithms and Data Structures in Java and CS378: Generic Programming and the STL in C++. He also has taught generic and object-oriented design and programming. Contact him at downing@cs.utexas.edu.
Paul F. Dubois is a mathematician turned computer scientist at Lawrence Livermore National Laboratory. He also serves as the coeditor for this department. Contact him at paul@pfdubois.com.
Teresa Cottom is a computer scientist at Lawrence Livermore National Laboratory. Her current research involves development on a parallel granular flow simulation code, as well as involvement on a large-scale data warehouse. In the past, she has worked on parallel computational fluid dynamics codes at Pratt & Whitney. Contact her at cottom1@llnl.gov.