
FY10

Engineering Innovations,
Research & Technology Report

April 2011
Lawrence Livermore National Laboratory
Lawrence Livermore National Laboratory
PO Box 808, L-151
Livermore, CA 94551-0808
http://www-eng.llnl.gov/

LLNL-TR-468271
Manuscript Date: April 2011
Distribution Category UC-42

Acknowledgments

Scientific Editors
Don McNichols
Camille Minichino

Graphic Designers
Jeffrey B. Bonivert
Lucy C. Dobson
Debbie A. Ortega
Kathy J. Seibert

This report has been reproduced directly from the best copy available.

Available from
National Technical Information Service
5285 Port Royal Road
Springfield, VA 22161
Or online at www-eng.llnl.gov/pubs.html

This document was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor
Lawrence Livermore National Security, LLC, nor any of their employees, makes
any warranty, express or implied, or assumes any legal liability or responsibility
for the accuracy, completeness, or usefulness of any information, apparatus,
product, or process disclosed, or represents that its use would not infringe
privately owned rights. Reference herein to any specific commercial product,
process, or service by trade name, trademark, manufacturer, or otherwise,
does not necessarily constitute or imply its endorsement, recommendation, or
favoring by the United States Government or Lawrence Livermore National
Security, LLC. The views and opinions of authors expressed herein do not
necessarily state or reflect those of the United States Government or
Lawrence Livermore National Security, LLC, and shall not be used for
advertising or product endorsement purposes.

This work was performed under the auspices of the U.S. Department of Energy
by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
ST-10-0070
Introduction

A Message From

Monya A. Lane
Associate Director for Engineering

This report summarizes key research, development, and technology advancements in Lawrence Livermore National Laboratory’s Engineering Directorate for FY2010. These efforts exemplify Engineering’s nearly 60-year history of developing and applying the technology innovations needed for the Laboratory’s national security missions, and embody Engineering’s mission to “Enable program success today and ensure the Laboratory’s vitality tomorrow.”

Leading off the report is a section featuring compelling engineering innovations. These innovations range from advanced hydrogen storage that enables clean vehicles, to new nuclear material detection technologies, to a landmine detection system using ultrawideband ground-penetrating radar. Many have been recognized with R&D Magazine’s prestigious R&D 100 Award; all are examples of the forward-looking application of innovative engineering to pressing national problems and challenging customer requirements.

Engineering’s capability development strategy includes both fundamental research and technology development. Engineering research creates the competencies of the future where discovery-class groundwork is required. Our technology development (or reduction to practice) efforts enable many of the research breakthroughs across the Laboratory to translate from the world of basic research to the national security missions of the Laboratory. This portfolio approach produces new and advanced technological capabilities, and is a unique component of the value proposition of the Lawrence Livermore Laboratory.

The balance of the report highlights this work in research and technology, organized into thematic technical areas: Computational Engineering; Micro/Nano-Devices and Structures; Measurement Technologies; Engineering Systems for Knowledge Discovery; and Energy Manipulation. Our investments in these areas serve not only known programmatic requirements of today and tomorrow, but also anticipate the breakthrough engineering innovations that will be needed in the future.

Computational Engineering
Computational Engineering efforts focus on the research, development, and deployment of computational engineering technologies that provide the foundational capabilities to address most facets of Engineering’s mission, ranging from fundamental advances to enable accurate modeling of full-scale DOE and DoD systems performing at their limits, to advances for treating photonic and microfluidic systems.

FY2010 research projects encompassed in situ observations of twinning and phase transformations at the crystal scale; work on developing a new Lagrange embedded mesh technique for multiphysics simulations; and adding improvements to our existing lattice-Boltzmann polymer code to enable fully turbulent, multiscale simulations of drag reduction. Technology projects included maintaining and adding new capabilities to vital engineering simulation tools such as DYNA3D and NIKE3D.

ii FY10 Engineering Innovations, Research & Technology Report


Micro/Nano-Devices & Structures
Micro/Nano-Scale Manufacturing encompasses technology efforts that fuel the commercial growth of microelectronics and sensors, while simultaneously customizing these technologies for unique, noncommercial applications that are mission-specific to the Laboratory and DOE. The Laboratory’s R&D talent and unique fabrication facilities have enabled highly innovative and custom solutions to technology needs in Stockpile Stewardship, Homeland Security, and Intelligence.

FY2010 research projects included characterizing phenomena of DNA microarray hybridization, regeneration, and selective release; improving the performance of cadmium–zinc–telluride gamma radiation detectors; creating transparent ceramic optics with unique properties based on tailored nanostructures; and advancing 3-D micro- and nanofabrication using Projection Micro-Stereolithography (PµSL). Technology projects included introducing a process to fabricate polydimethylsiloxane (PDMS) multilayer soft lithography chips; developing read-out electronics for our pillar thermal neutron detector; and using acoustic focusing in a microfluidic device to separate white blood cells from whole blood.

Measurement Technologies
Measurement Technologies comprise activities in nondestructive characterization, metrology, sensor systems, and ultrafast technologies for advanced diagnostics. The advances in this area are essential for the future experimental needs in Inertial Confinement Fusion, High-Energy-Density Physics, Weapons, and Department of Homeland Security programs.

FY2010 research featured advanced Bayesian model-based statistical processing algorithms for illicit radionuclide detection; and optimized, volumetric scanning for x-ray array sources for nondestructive evaluation. Technology projects included feasibility studies for using low-energy, fast-pulsed, power-driven dense plasma focus for experiments; applying our Statistical Radiation Detection System (SRaDS) processing to data collected from NaI detectors; and establishing a flexible testbed for 95-GHz impulse imaging radar.

Engineering Systems for Knowledge & Inference
Knowledge Discovery encompasses a wide variety of technologies with the goal to broadly generate new understanding or knowledge of relevant situations, thereby allowing anticipation or prediction of possible outcomes. With this understanding, a more comprehensive solution may be possible for problems as complex as the prediction of disease outbreaks or advance warning of terrorist threats.

FY2010 research efforts were centered on better understanding of higher-adaptive systems, especially in adversarial relationships; event extraction from text using error-driven aggregation methodologies; and developing new, dynamic classifier algorithms for detection problems with unequal and evolving error costs. Technology efforts included improvements to an entity extractor aggregation system, and improving optimization capabilities for electric grid energy modeling by using high-performance computing.

Energy Manipulation
Energy Manipulation encompasses the fundamental understanding and technology deployment for many modern pulsed-power applications. This area has broad applications for magnetic flux compression generators, components for modern accelerators, and high-performance apparatus for high-energy-density physics experiments.

FY2010 research focused on developing a computer model of electrical breakdown at the dielectric/vacuum interface, leading to a computational methodology for designing high-voltage vacuum insulators for pulsed-power devices.


Contents

Introduction
A Message from Monya A. Lane

Engineering Innovations
Advanced Fuel Storage for Tomorrow's Clean Hydrogen Vehicles
Salvador M. Aceves

Landmine Detection Using Ultrawideband Ground-Penetrating Radar Technology
Christine N. Paulson

Fast Detection of Illicit Radioactive Materials to Prevent Nuclear Terrorism
James V. Candy

Capturing Waveforms in a Quadrillionth of a Second
Corey V. Bennett

Transforming the Weapons Complex Through a Modern Manufacturing Infrastructure
Keith Carlisle

Ultrahigh-Resolution Adaptive Optics Optical Coherence Tomography
Diana C. Chen

Multiphysics Engineering Simulations with ALE3D
Daniel A. White

Computational Engineering
Direct Observation of Phase Transformations and Twinning Under Extreme Conditions: In Situ Measurements at the Crystal Scale
Joel V. Bernier

Lagrange Multiplier Embedded Mesh Method
Michael A. Puso

Multiscale Polymer Flows and Drag Reduction
Todd H. Weisgraber

Finite Element Analysis Visualization and Data Management
Bob Corey

Modeling Enhancements in DYNA3D
Jerry I. Lin

NIKE3D Enhancement and Support
Michael A. Puso

Electromagnetics Code Enhancement and Maintenance
Daniel A. White

Micro/Nano-Devices and Structures
Hybridization, Regeneration, and Selective Release of DNA Microarrays
Elizabeth K. Wheeler

Cadmium–Zinc–Telluride Sandwich Detectors for Gamma Radiation
Adam M. Conway

Enabling Transparent Ceramic Optics with Nanostructured Materials Tailored in Three Dimensions
Joshua D. Kuntz

High-Resolution Projection Micro-Stereolithography (PµSL) for Advanced Target Fabrication
Christopher M. Spadaccini

Three-Dimensional Polymer Fabrication Techniques
Christopher M. Spadaccini

PDMS Multilayer Soft Lithography for Biological Applications
Dietrich A. Dehlinger

Embedded Sensors for Gas Monitoring in Complex Systems
Jack Kotovsky

Neutron Cookoff: Read-Out Electronics for LLNL Pillar Detector
Rebecca Nikolić

Isotachophoretic Separation of Actinides
Raymond P. Mariella, Jr.

Extraction of White Blood Cells from Whole Blood Through Acoustic Focusing
Elizabeth K. Wheeler

Measurement Technologies
Detection, Classification, and Estimation of Radioactive Contraband from Uncertain, Low-Count Measurements
James V. Candy

Optimized Volumetric Scanning for X-Ray Array Sources
Angela M. K. Foudray

Low-Energy, Fast-Pulsed, Power-Driven Dense Plasma Focus for WCI and NIF Relevant Experiments
Vincent Tang

Applying High-Resolution Time-Domain Radiation Detection Techniques to Low-Resolution Data
Brian L. Guidry

Flexible Testbed for 95-GHz Impulse Imaging Radar
Christine N. Paulson

Engineering Systems for Knowledge and Inference
Toward Understanding Higher-Adaptive Systems
Brenda M. Ng

Enhanced Event Extraction from Text via Error-Driven Aggregation Methodologies
Tracy D. Lemmond

Robust Ensemble Classifier Methods for Detection Problems with Unequal and Evolving Error Costs
Barry Y. Chen

Entity Extractor Aggregation System
Tracy D. Lemmond

Improving Optimization Capabilities for Energy Modeling via High-Performance Computing
Carol A. Meyers

Energy Manipulation
High Voltage Vacuum Insulator Flashover
Timothy L. Houck

Author Index


Engineering Innovations

Advanced Fuel Storage for Tomorrow’s Clean Hydrogen Vehicles

For more information contact:
Salvador M. Aceves
(925) 422-0864
aceves6@llnl.gov

Insulated cryogenic fuel tanks could be used in mass-produced, hydrogen-powered vehicles.

The U.S. transportation sector is almost 100 percent dependent on fossil fuels, and the results of that dependency are evident all around us, from the effects of global warming to rising prices at the gas pump. Hydrogen (H2) is a leading candidate to supplant petroleum as a universal transportation fuel. It has the highest combustion energy by weight of any fuel. Burning 1 kilogram of hydrogen produces 2.6 times more energy than 1 kilogram of gasoline. Additionally, hydrogen can be generated from water and any energy source, and has the potential to ultimately eliminate petroleum dependence, associated air pollutants, and greenhouse gases. When burning hydrogen, vehicles generate zero greenhouse gases and only small amounts of nitrogen oxides. Water vapor is the only emission.

Limitations of Current Hydrogen Storage Systems
Despite hydrogen’s stellar fuel efficiency, it is difficult to store compressed hydrogen in the large quantities needed to provide the driving range achieved by gasoline- and diesel-powered vehicles. Most prototype hydrogen vehicles use compressed hydrogen stored at room temperature and high pressure (35 to 70 megapascals, or 350 to 700 atmospheres). The energy density of compressed hydrogen at 35 megapascals is only about one-twelfth that of gasoline. As a result, hydrogen cars must use large, high-pressure tanks, which are often located in the trunk. Thus, the predominant technical barrier limiting widespread use of hydrogen automobiles is storing enough hydrogen fuel onboard to achieve an acceptable driving range (500+ kilometers) in a compact, lightweight, rapidly refuelable, and cost-effective system. Current hydrogen storage systems present fundamental limitations:

• Low-pressure liquid hydrogen storage systems have high density and reasonably low cost; however, a major drawback is the significant electricity required to liquefy the hydrogen (about equal to 30 percent of the heating value of the hydrogen molecule). In addition, liquid hydrogen is extremely sensitive to heat; it expands significantly when warmed only a few degrees. As a result, vehicles that use low-pressure tanks are usually not filled to maximum capacity and must have a system to release some of the hydrogen vapor that accumulates in the tank when the car is not driven for several days. Evaporative losses after longer periods of inactivity (1–2 weeks) can leave the driver without fuel.

• Compressed hydrogen gas is bulky and difficult to package within a vehicle. The large storage vessels necessary to achieve a 500-km (300-mile) range are expensive because of the amount of material needed for current carbon-fiber tanks. Additionally, the size of such tanks limits the passenger and cargo capacity of the vehicles in which they are installed.

• Storage systems based on metal hydrides and sorbents can be heavy and may suffer from slow refueling (about 15 minutes, considerably longer than the approximately 3 minutes required to refuel a gasoline vehicle).

Figure 1. Design of cryogenic pressure vessel installed in a Prius hydrogen hybrid vehicle. The inner vessel is an aluminum-wound, carbon fiber-wrapped pressure vessel typically used for storage of compressed gases. This vessel is surrounded by a vacuum space filled with numerous sheets of highly reflective plastic (minimizing heat transfer into the vessel), and an outer jacket of stainless steel. The outer tank measures 47 inches long with an outer diameter of 23 inches. (Callouts: composite support rings; stainless steel vacuum shell; gaseous H2 fill line; carbon-fiber, high-pressure vessel; liquid H2 fill line.)

Advantages of a Cryogenic System
Over the last decade, LLNL has pioneered an approach that combines existing storage technologies to capture the advantages of both cryogenic and high-pressure storage: cryogenic high-pressure vessels. The team had earlier focused on liquid hydrogen (–253 °C) because it does not require a high-pressure tank, and it takes up one-third the volume of compressed hydrogen at room temperature. Like all gases, compressed hydrogen can be stored more compactly at colder temperatures. Pressurized hydrogen at 35 megapascals becomes twice as dense when cooled from ambient temperature to –150 °C. Cooling it further to –210 °C (close to the temperature of liquid nitrogen) triples the energy density. Cooling hydrogen also lowers the hydrogen expansion energy during a sudden tank rupture, potentially mitigating the consequences of vessel failure.

The cryogenic pressure vessel developed by the LLNL team comprises a high-pressure inner vessel made of carbon-fiber-coated aluminum similar to those used for storage of compressed gas, a vacuum space filled with numerous sheets of highly reflective plastic (for high-performance thermal insulation), and an outer jacket of stainless steel (see Figure 1). The vacuum minimizes the conduction of heat between the outer steel jacket and the inner pressure vessel, which would cause liquid hydrogen to evaporate quickly. In addition, the multiple layers of reflective material almost eliminate heat transfer from radiation, much like a Thermos bottle. The outer steel jacket has cutouts for thermocouples and sensors to measure pressure, temperature, and the fuel level within the inner vessel. In addition, the system is equipped with two safety devices to prevent catastrophic failure in case of overpressure.

For a pressure vessel of given size and cost, a cryogenic vessel stores substantially more hydrogen than a vessel at ambient temperature, doing so without the additional weight and cost of hydrogen-absorbent materials but with far greater thermal endurance than conventional (i.e., low-pressure) cryogenic liquid H2 tanks. Cryogenic pressure vessels present key advantages:

• High density, superior to liquid hydrogen vessels, but with reduced evaporative losses. A driver will have a limited reserve and range regardless of evaporative losses, as compared to liquid hydrogen vessels.

• More compact, and therefore smaller and less expensive, than compressed hydrogen systems.

• Light weight and capable of fast refueling, about 3 minutes (see Figure 2).

Figure 2. Technicians Vern Switzer and Tim Ross prepare to fuel the hydrogen-powered test car with liquid hydrogen for the first time.

Record-Setting Technology
The LLNL group put their cryogenic tank design to the test in 2007 and set a world record for the longest distance driven on one tank of hydrogen fuel, albeit at reduced speeds and utilizing the trunk for storage space. The group drove a Toyota Prius hybrid vehicle, modified to run on hydrogen, 1050 kilometers (653 miles) on one tank of liquid hydrogen (150 liters, or almost 40 gallons). The overall fuel economy was about 105 kilometers per kilogram of hydrogen, equivalent to about 65 miles per gallon of gasoline, when driven at 40 to 56 kilometers (25 to 35 miles) per hour (see Figure 3).

Figure 3. The cryogenic pressure vessel installed onboard a Toyota Prius hydrogen hybrid vehicle, which set a world record for the longest distance driven on one tank of hydrogen fuel.

A subsequent cryogenic pressure vessel design uses LLNL proprietary technology to shrink the insulation thickness considerably (to about 1.5 cm versus about 3 cm in the previous design) while still maintaining adequate dormancy (see Figure 4). The resulting cryogenic pressure vessel is the most compact automotive hydrogen storage vessel ever built, and is the only system that meets DOE’s very challenging 2015 targets for weight and volume performance at higher storage capacities (10 kg versus 5 kg).

Figure 4. The latest cryogenic pressure vessel prototype (right) is 23% more compact than the previous (left), a feat achieved through reduced insulation thickness (1.5 cm versus 3 cm). This reduction in external volume suffices for meeting the very challenging 2015 DOE weight and volume targets.

Hydrogen Vehicles in Our Future?
Although cryogenic pressure vessels demonstrate superior performance for storing hydrogen, LLNL researchers envision further potential for lighter, more compact, and lower-cost designs. In collaboration with automobile manufacturer BMW and a major pressure vessel manufacturer (SCI), LLNL continues to advance the technology toward near-term demonstration and commercialization.

The Livermore team is playing an important role in making possible both clean and sustainable transportation fueled by the simplest element in the universe.
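The temperature and fuel-economy figures quoted in the article can be sanity-checked with a short back-of-envelope calculation. The sketch below is illustrative only and rests on assumptions not taken from the article: ideal-gas density scaling, which overstates the gain for real hydrogen at 35 megapascals and cryogenic temperatures (hence the article's more conservative "twice" and "triples"), and lower-heating values of roughly 33.3 kWh per kilogram of hydrogen and 33.7 kWh per gallon of gasoline.

```python
# Back-of-envelope check of the figures quoted in the article.
# Assumptions (not from the article): ideal-gas behavior and the
# lower-heating values given in the constants below.

T_AMBIENT_K = 293.0  # ambient temperature, kelvin

def ideal_gas_density_gain(t_cold_c):
    """Density ratio relative to ambient at fixed pressure (ideal gas)."""
    return T_AMBIENT_K / (t_cold_c + 273.15)

# Cooling 35-MPa hydrogen from ambient to -150 C and -210 C.
# Real hydrogen is non-ideal at these conditions, so the true gains
# are smaller, closer to the article's "twice" and "triples".
print(f"-150 C: {ideal_gas_density_gain(-150):.1f}x denser")
print(f"-210 C: {ideal_gas_density_gain(-210):.1f}x denser")

# Fuel economy: 105 km per kg of hydrogen, converted to a
# gasoline-equivalent mpg on an energy basis.
KM_PER_MILE = 1.609
LHV_H2_KWH_PER_KG = 33.3         # lower heating value of hydrogen
LHV_GASOLINE_KWH_PER_GAL = 33.7  # lower heating value of gasoline

mpg_equivalent = (105 / KM_PER_MILE) * (LHV_GASOLINE_KWH_PER_GAL / LHV_H2_KWH_PER_KG)
print(f"about {mpg_equivalent:.0f} mpg gasoline-equivalent")
```

The ideal-gas estimate gives density gains of roughly 2.4x and 4.6x, bracketing the article's real-gas "twice" and "triples" from above, and about 66 mpg gasoline-equivalent, consistent with the article's "about 65 miles per gallon."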



Landmine Detection Using


Ultrawideband Ground-Penetrating
For more information contact:
Radar Technology Christine N. Paulson
(925) 423-7362
paulson4@llnl.gov
Advances in several technology areas could soon
yield a ruggedized, fieldable system for safely detecting
buried explosive devices.

D ecades after armed conflicts end,


hidden landmines continue to
maim and kill thousands of innocent
Detecting buried landmines and
improvised explosive devices (IEDs) also
presents a daunting challenge for U.S.
civilians. Today, minefields plague 79 forces, a challenge that remains inad-
countries, and most of these nations equately addressed by today’s technolo-
have limited resources to remove them. gy. Buried roadbed and roadside bombs
Metal detectors and manual prodding continue to pose a significant threat
of soil remain amongst the most reliable to U.S. forces in the Afghan and Iraqi
and trusted demining techniques. As theaters. Route clearance patrols often
a result, humanitarian mine detection must rely on their sense of intuition and
efforts are time-consuming, costly, and familiarity with the terrain to detect
dangerous. surface abnormalities that may indicate
a potential buried or concealed threat.
Whether it be cleaning up the
explosive remnants of previous wars or
protecting our in-theater troops from
surprise attack, better tools are needed
to detect and identify buried explosive
threats. To address this technology gap,
LLNL is developing ultrawideband (UWB)
impulse radar (iRadar) platforms that are
combined with sophisticated subsurface
image reconstruction algorithms.

A Lasting, Widespread, and


Insidious Threat
Landmines are explosive devices
placed in the ground and triggered
by mechanical or electronic proximity
sensors (Figure 1). After conflicts cease,
unexploded landmines can remain in-
tact for decades, killing and maiming ci-
vilians, impeding reconstruction efforts,
and rendering agricultural lands useless.
The Landmine Monitor Report 2008
assesses that many thousands of square
kilometers of land are contaminated
by up to 100 million mines and other
explosives. In 2007, about 1400 people
were killed and 4000 injured by mines
or other explosive remnants of war.
Figure 1. Three examples of antipersonnel landmines. These mines commonly use the An estimated 100,000 mines are re-
pressure of a person’s foot as a trigger, but tripwires also are frequently employed. Often, moved each year. At that rate, clearing
mines can be set to detonate if someone attempts to lift, shift, or disarm them. 40 to 50 million mines would require

6 FY10 Engineering Innovations, Research & Technology Report


450 to 500 years, assuming no new Current demining methods are decades clutter. Some antipersonnel mines are
mines are laid. However, the pace of old and are extremely tedious because mostly plastic except for small metal
mine removal is far slower than the rate metal detectors cannot discriminate parts. Detecting these metal parts
at which new mines are being placed. metallic mines from innocuous metallic requires turning up the metal detector’s
sensitivity. As a result, most “mines”
turn out to be harmless objects such as
bottle caps, bullet casings, nails, or tin
cans. Demining teams typically uncover
100 to 1000 innocuous metal objects for
each mine found.
In 2009, the Joint Improvised Explo-
sive Device Defeat Organization (JIEDDO)
sponsored a large, multiyear effort at
LLNL to develop a route-clearance,
vehicle-mounted IED detection system
capable of detecting buried IEDs in real
time. In the JIEDDO project, LLNL is tran-
sitioning the archetype ultrawideband
iRadar technology into an integrated,
rugged, and automated system ready for
forward operational assessment. In col-
laboration with two industrial partners,
we have developed a landmine locator
system, which in 2009 was recognized
with an R&D 100 Award.

Evolution of iRadar
For more than a decade, Livermore
researchers have been working on
applying their patented ultrawideband
technology to the worldwide problem
of demining. In 1993, LLNL invented
the micropower impulse radar (MIR), a
technology that could be leveraged to
build low-cost, low-power, and compact
radar devices based on advances in
transient digitizing circuits needed for
LLNL laser programs. Two years later,
LLNL began two separate multiyear ef-
forts to develop ground-penetrating ra-
dar (GPR) systems using ultrawideband
radar technology: the High-performance
Electromagnetic Roadway Mapping and
Evaluation System (HERMES), designed
to identify structural deterioration in
bridge decks, and the Landmine Detec-
tion Advanced Radar Concept (LAND-
MARC) funded by the Department
of Defense. These efforts helped to
accelerate MIR development to where
it is today.
The LANDMARC system was devel-
oped as a small array system mounted
Figure 2. The Landmine Detection Advanced Radar Concept showed the feasibility of on a cart, as shown in Figure 2. This
discriminating buried landmines from innocuous clutter. portable system collected data at a

Lawrence Livermore National Laboratory 7


Innovations

Figure 3. The landmine locator employed a high-speed, multistatic array of 16 iRadar elements.

slower rate than the truck-mounted HERMES array but was capable of collecting data at much higher resolution. Traditional GPR systems face difficulties detecting mines buried at or near the surface of the ground. This is because the strong radar reflection from the surface, combined with stray reflections from innocuous surface clutter, tends to obscure the return signal from the hidden landmine. One of the key innovations in using micropower impulse radar for this application was the ability to remove the effects of surface reflections through time-gating and better discrimination of rocks, roots, voids, and other subsurface clutter from mines.

While both efforts yielded promising results, the prototype UWB GPR technology was not yet ready for operational use. One of the most significant hurdles was that collected data had to be processed offline, due to limitations on data acquisition speed, data storage capacity, and processing speed. HERMES and LANDMARC data were captured and stored on hard drives for post-processing, which could take several hours to produce a result and required human adjustment of model parameters to match the dielectric conditions and thicknesses of subsurface soil layers.

Over the next several years, subsequent developments in other related technology projects significantly improved LLNL core capabilities in the area of ultrawideband radar imaging. These developments included a super-resolution imaging technique, advanced time-reversal reconstruction algorithms, model-based tomography schemes, a self-contained digital synthetic aperture radar imaging system that enabled electronic beam-forming and beam-steering of the UWB signal, and advances in parallel computation technology to create a dedicated real-time image reconstruction circuit. This series of technology projects enabled LLNL to advance the ultrawideband radar imaging capability from a monostatic array with post-processing to a digitally controlled, multistatic array with real-time image reconstruction processing. This ability attracted several externally funded efforts, which further advanced the GPR effort.

Today’s iRadar GPR Technology
In 2008, First Alliance Technologies, LLC, entered into a cooperative research and development agreement (CRADA) with LLNL to develop an iRadar landmine detection system designed for deployment on an aerial platform. To achieve rapid ground coverage, sufficient resolution, and superior imaging quality, a 16-element linear multistatic array was developed, as shown in Figure 3. This technology formed the initial technological basis of what would later become the JIEDDO vehicle-mounted iRadar IED detection platform.

The 2010 JIEDDO system, whose precursor is shown in Figure 4, consists of a ruggedized multistatic iRadar array with military standard (MIL-SPEC) components and sophisticated positioning sensors. The JIEDDO project had similar imaging requirements to previous

8 FY10 Engineering Innovations, Research & Technology Report


efforts and could leverage existing hardware technology, but unlike previous efforts its results needed to be calculated and presented to the operator in real time. iRadar image reconstruction and detection algorithms are computed on the vehicle platform, and automated detection results are displayed to the user in real time. The system is currently being readied for an operational assessment.

The iRadar sensor is compact, low power, inexpensive, and unusually versatile. The sensor can send out extremely short electromagnetic pulses over an exceptionally wide range of frequencies, permitting much finer resolution of materials than other sensing systems. The JIEDDO system represents the pinnacle of many years of LLNL array and imaging research and will likely serve as an important stepping-stone for the systems that will follow.

Opportunities for Future Development
There are opportunities to further improve the iRadar GPR capability in several areas, including radar hardware, antenna design, the use of optimal transmit waveform shapes, electromagnetic modeling, and algorithms.
• Immediate benefits could be achieved by using existing engineering capabilities to transition time-reversal analysis, migration imaging with refraction correction, and model-based layer estimation algorithms into real-time tools that can be deployed during data collection.
• Through the use of an arbitrary waveform generator system, custom pulse shapes could be evaluated for their utility in the GPR application.
• Development of multiple frequency band systems could improve resolution and dynamic range.
• For route clearance applications, further research and development in automated change detection will help to better identify alterations in the environment and identify innocuous clutter objects, thereby reducing the false alarm rate.
• Enhanced sensor fusion using thermal or radiation detectors could enhance surface anomaly detection.
• The development of an optimal iRadar antenna for GPR applications could improve iRadar sensitivity, robustness, and form factor.

Once the landmine locator is in use, the world will at last have a safer method to detect mines. As a result, nations will be able to confidently reclaim millions of square kilometers from this long-lasting scourge of war.
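The time-gating idea mentioned earlier (suppressing the strong ground-surface reflection so a shallow target echo is no longer swamped) can be illustrated with a toy sketch. Everything below is invented for illustration: the synthetic A-scan, the soft gate shape, and the gate delay. It is not LLNL's iRadar processing, only the generic principle of attenuating early-time samples.

```python
import math

def time_gate(trace, times_ns, gate_ns, taper_ns=0.1):
    """Softly zero all samples earlier than gate_ns, suppressing the strong
    surface bounce while leaving later (deeper) echoes untouched."""
    return [s * 0.5 * (1 + math.tanh((t - gate_ns) / taper_ns))
            for s, t in zip(trace, times_ns)]

# Synthetic A-scan (illustrative, not real data): a unit-amplitude surface
# bounce at 1 ns and a 20-times-weaker buried-target echo at 4 ns.
times = [i * 0.01 for i in range(801)]                       # 0 .. 8 ns
trace = [math.exp(-((t - 1.0) / 0.2) ** 2)
         + 0.05 * math.exp(-((t - 4.0) / 0.2) ** 2) for t in times]

gated = time_gate(trace, times, gate_ns=2.0)
print(max(gated[:150]))     # surface-bounce region: driven to nearly zero
print(max(gated[350:450]))  # target-echo region: preserved near 0.05
```

The smooth hyperbolic-tangent ramp is used instead of a hard cutoff so the gate itself does not introduce sharp edges into the gated waveform.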

Figure 4. An early prototype of the JIEDDO vehicle-mounted IED detection platform scans for buried targets in a desert environment.


Fast Detection of Illicit Radioactive


Materials to Prevent Nuclear Terrorism
For more information contact:
James V. Candy
(925) 422-8675
candy1@llnl.gov

SRaDS provides a fast and reliable radionuclide detection and identification capability that can dramatically enhance the utility of existing detection systems.

Each year, some 48 million cargo containers move among the world’s transportation portals, with more than 16 million containers arriving in the U.S. by ship, truck, and rail. Illicit radioactive materials could be hidden in any one of these cargo-filled containers. Yet, physically searching every container would bring shipping to a halt. Improving security at U.S. transportation portals is thus one of the nation’s most difficult technical and practical challenges because the systems developed for screening cargo must operate in real time without disrupting legitimate commercial shipping activities.

Working at this intersection of commerce and national security, a team of Livermore scientists and engineers led by principal investigator James Candy applied its expertise in radiation science and gamma detection to develop the statistical radiation detection system (SRaDS), an innovative software solution that nonexperts can use to rapidly and reliably detect radionuclides (Figure 1). The team, along with ICx® Technologies, Inc., in Arlington, Virginia, has won an R&D 100 Award for the technology. According to Candy, who derived early support from Livermore’s Laboratory Directed Research and Development Program, “the team cross-fertilized the areas of statistical signal processing with radiation transport physics, enabling a unique and breakthrough solution to a long-troubling problem, especially in today’s climate of terrorist threats.”

Figure 1. The statistical radiation detection system (SRaDS) is an innovative software solution that can be integrated easily into any
gamma-detection system to combat illicit trafficking of radioactive material through customs, border crossings, and limited-access areas.
SRaDS identifies radionuclides in low-count situations when measurement time is short and demand for reliability is high. The processed
data are displayed in intuitive plots showing results that a nontechnical user can interpret.



Rapid and Reliable Radionuclide Detection
Identifying radioactive material in a moving target is a difficult problem primarily because of the very low counts of gamma-ray signals that occur during the short time interval available for detection. In low-count situations such as these, conventional spectrometry techniques do not have enough time to collect the number of photons required to calculate the pulse-height spectra (PHS) that identify radioactive materials. For example, a vehicle moving through a gamma-detection system at a transportation portal is screened for less than 10 seconds. Accurate radionuclide detection is even more difficult when radioactive material is shielded by lead, packaging, or adjacent cargo.

SRaDS speeds up identification by automatically rejecting extraneous and nontargeted photons during the process. Exploiting Bayesian algorithms, the smart processor examines each photon—one by one—as it arrives and then “decides” whether a detected radionuclide is present based on selected parameters. This capability is not available in conventional detection systems, yet it is essential in the successful identification of radionuclides in low-count situations when measurement time is short and demand for reliability is high.

A Closer Look
The generic sequential detection technique is depicted conceptually in Figure 2, illustrating each photon arrival along with the corresponding decision function and thresholds. At each arrival the decision function is sequentially updated and compared to thresholds to perform the detection, photon by photon. The thresholds are selected from a receiver-operating characteristic (ROC) curve (detection versus false-alarm probability) for each individual radionuclide decision function. An operating point is selected from the ROC that corresponds to specific desired probabilities, specifying the required thresholds that are calculated for each radionuclide.

Figure 2. The graph shows a conceptual implementation of the sequential Bayesian radionuclide detection technique. As each individual photon is extracted, it is discriminated, estimated, and the decision function calculated and compared to thresholds to “decide” if the targeted radionuclide is present or not. Quantitative performance and sequential thresholds are determined from estimated receiver-operating characteristic (ROC) curves and the selected operating point (detection/false alarm probability).
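The two-threshold, photon-by-photon decision logic described above can be caricatured with a sequential probability ratio test (SPRT), the classical form of this kind of test. The likelihood models below (a Gaussian 662-keV photopeak plus a flat background, a 20% source fraction) and every number in them are illustrative assumptions, not the SRaDS models or parameters.

```python
import math

def sprt_thresholds(p_detect=0.95, p_false_alarm=1e-3):
    """Wald-style stopping thresholds playing the role of the ROC operating
    point: chosen detection and false-alarm probabilities fix both limits."""
    upper = math.log(p_detect / p_false_alarm)              # declare "target"
    lower = math.log((1 - p_detect) / (1 - p_false_alarm))  # declare "no target"
    return lower, upper

def photon_llr(e_kev, line=662.0, sigma=2.0, frac=0.2, e_max=1500.0):
    """Log-likelihood ratio for one photon energy: 'source present' is a
    photopeak-plus-background mixture; 'source absent' is flat background.
    The 662-keV line, peak width, and source fraction are illustrative."""
    flat = 1.0 / e_max
    peak = math.exp(-0.5 * ((e_kev - line) / sigma) ** 2) / (
        sigma * math.sqrt(2 * math.pi))
    return math.log((frac * peak + (1 - frac) * flat) / flat)

def sequential_detect(energies_kev):
    """Update the decision function photon by photon; stop at a threshold."""
    lower, upper = sprt_thresholds()
    llr = 0.0
    for n, e in enumerate(energies_kev, 1):
        llr += photon_llr(e)
        if llr >= upper:
            return "target", n
        if llr <= lower:
            return "no target", n
    return "undecided", len(energies_kev)

print(sequential_detect([662.0, 661.5, 663.0]))  # on-peak photons: fast alarm
print(sequential_detect([100.0] * 20))           # background only: fast clear
```

On-peak photons drive the running log-likelihood ratio up to the "target" threshold within a couple of arrivals, while off-peak background photons drift it steadily down to "no target"; no fixed counting time is ever set in advance.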
Instead of accumulating a PHS or energy histogram as is usually done in current detection systems, each photon is processed individually upon arrival and then discarded. The structure of this processor is shown in Figure 3. After a photon is preprocessed by the acquisition system, the energy and arrival time measurements are passed to the energy/rate discriminators to determine the photon’s status (accepted or rejected). If accepted, the parameter estimates are sequentially updated and provided as input to update the decision function for detection and eventual identification. If rejected, the photon is discarded. Detection is declared when such a decision is statistically justified based on estimated detection and false alarm probabilities specified by the ROC curve obtained during calibration. The result is a system that has improved

Figure 3. The basic SRaDS design paradigm, showing discrimination of both photoelectrons and downscatter (Compton) photons and estimation of the targeted radionuclide, which provides parameters for the sequential decision function leading to radionuclide identification. Background and extraneous photons are rejected.


detection performance with high reliability and short decision times.

Each unique energy/arrival component of the target radionuclide is processed individually in a separate channel, resulting in the parallel/distributed processor structure. After the photon is acquired, the distributed processor
• discriminates the individual monoenergetic (single-energy) arrival, identifying one of the parallel channels;
• discriminates the corresponding detection rate (interarrival) parameter for that particular channel;
• enhances the channel energy and rate (interarrival) parameters;
• updates the corresponding decision function; and
• detects/identifies the target radionuclide by thresholding the decision function.

As diagrammed in Figure 4, the SRaDS processor consists of a discriminator for both energy (amplitude) and rate (interarrival time). If the photon does not pass this test, it is sent to the Compton (downscatter) processor or rejected (photoelectron only). If accepted, it is processed further to improve the estimates of its energy, rate, and emission probability and then used to update the decision function.

Results of this photon-by-photon processor with downscatter are shown in Figure 5. In this three-column figure, the first column is the composite PHS (which is not used). The second column shows the measured photon energies (arrivals) as red dots. Circles (green) represent the discriminator output photoelectrons, and squares (purple) represent the discriminated downscatter photons. Notice that these align with the PHS column’s energy “lines.” The third column shows the decision function for each of the targeted radionuclides. As each photon is processed, the decision function is sequentially updated until one of the thresholds, target or nontarget, is crossed (indicated by solid red boxes in the figure), declaring a threat or nonthreat.

When a cargo container arrives at an SRaDS detector, the decision function in the software is refreshed, updated, and refined based on the energies and arrival times of the accepted photons. Detection is declared only when statistically justified according to the three factors—the Bayesian algorithms, the updated decision function, and the conditions defined by the specific ROC curve obtained during initial calibration. In contrast, conventional techniques require manually setting a specific counting time in advance with the hope that the data acquired can justify the decision. By encompassing the statistical nature of radiation transport physics and sequential Bayesian processing techniques, SRaDS provides highly developed quantitative statistical analysis of the data received in real time.

What’s more, basic and advanced processor options are available with SRaDS. Both processor options provide complete statistical analysis of radionuclide data obtained from any type of gamma detector. The basic and advanced processors gather information from unscattered photons that deposit full photon energy. The advanced processor also gathers information from Compton-scattered photons that exhibit

Figure 4. The statistical Bayesian design is shown (simply) as a photon-by-photon processor enhancing the raw detector measurement while rejecting instrumentation noise and estimating the photoelectrons and downscatter photons through energy/rate discrimination and parameter estimation. This information is input to a function used to detect which of the target radionuclides are present.



Figure 5. Results of sequential Bayesian detection and identification. (a) Pulse-height spectrum (after calibration). (b) Photon arrivals
(red) with targeted photoelectron discrimination (green circles) and downscatter photons (purple squares). (c) Decision functions for
60Co (detection time: 5.76 s), 137Cs (detection time: 0.47 s) and 133Ba (detection time: 0.46 s) with thresholds for radionuclide
detection/identification.

diminished energy—a major breakthrough in time-domain, low-count detection technology.

Integrates into Any Gamma-Detector System
The Livermore team took special care to ensure that SRaDS can easily be integrated into any gamma-detection system, including large stationary detectors at transportation portals that help search for radioactive contraband material in moving vehicles, cargo containers, and railroad cars. SRaDS works equally well in pedestrian monitors used to combat illicit trafficking of radioactive material through customs, border crossings, and limited-access areas. The technology can also be installed in portable gamma detectors used by first responders to determine radiation risks associated with local nuclear emergencies. The algorithms are easily embedded in programmable gate arrays that users in the field can adjust to a location’s specifications and detection requirements.

Depending on the hardware setup, the processed data can be graphically displayed on a computer monitor or portable unit. While conventional gamma-detection systems require a highly trained practitioner to analyze the results, refine the data, and guide the interpretation procedure, SRaDS displays data in intuitive plots, showing results that a nontechnical user can interpret. Alternatively, SRaDS can be configured to simply provide audio and visual alerts indicating the presence of targeted radionuclides at user-selected confidence levels. Users can also select false-alarm probabilities to reduce or eliminate the occurrence of false positives depending on the level of detection required for a given situation. SRaDS is a comprehensive software system that combines outstanding radionuclide-detection performance with high reliability and a short acquisition time. The system can be implemented easily in existing infrastructure to protect the nation from the insidious threat of illicit radioactive materials.
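As a minimal illustration of the energy/rate discrimination stage shown in Figures 3 and 4, the sketch below accepts a photon only if its energy falls within a window around a target line and its interarrival time is consistent with the expected emission rate. The window width, rate, and gate factors are invented for illustration and are not SRaDS parameters.

```python
def accept_photon(energy_kev, dt_s, line_kev, sigma_kev=2.0,
                  expected_rate_hz=50.0, k_sigma=3.0, k_rate=5.0):
    """Energy/rate discrimination: keep a photon only if its energy sits in a
    window around the target line AND its interarrival time is plausible for
    the target's emission rate (all parameters here are illustrative)."""
    energy_ok = abs(energy_kev - line_kev) <= k_sigma * sigma_kev
    # For a Poisson source the mean interarrival time is 1/rate; reject
    # arrivals that are wildly slower than that (stray background counts).
    rate_ok = dt_s <= k_rate / expected_rate_hz
    return energy_ok and rate_ok

# A near-line photon arriving 5 ms after the previous one is kept; an
# off-line photon is handed to the downscatter test or rejected.
print(accept_photon(661.0, 0.005, line_kev=662.0))
print(accept_photon(500.0, 0.005, line_kev=662.0))
```

In the full design, rejected photons are not simply thrown away at this point; as the article notes, they are first routed to the Compton (downscatter) discriminator before a final accept/reject decision.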


Capturing Waveforms in a
Quadrillionth of a Second
For more information contact:
Corey V. Bennett
(925) 422-9394
bennett27@llnl.gov

The FemtoScope time microscope provides dramatic improvements in instruments’ resolution and dynamic range.

How will scientists “see” what happens inside the National Ignition Facility (NIF), the world’s largest laser, when it creates the extreme temperature and pressure conditions found in stars? Instruments such as oscilloscopes and streak cameras cannot capture all the details of fast-moving, complex events such as fusion burn. Their dynamic range (the ratio between the smallest and largest possible values) and their temporal resolution (the precision of a measurement with respect to time) are coupled. As a result, these conventional instruments lose dynamic range with faster temporal resolution or lose temporal resolution with more dynamic range.

To meet the emerging need for greater dynamic range and temporal resolution, scientists can turn to the new FemtoScope—a “time microscope” that is attached to the front end of a conventional recording instrument to dramatically improve its performance. Livermore researchers (see Figure 1), in collaboration with colleagues from Stanford University, the University of Southampton, and the University of California at Davis, won an R&D 100 Award for their invention of the FemtoScope. Initial efforts for this work were funded by Livermore’s Laboratory Directed Research and the Engineering Directorate’s Technology Development Program for single-shot,

Figure 1. Livermore development team for the FemtoScope (from left): Bryan Moran, Vincent Hernandez, Alex Drobshoff, and Corey Bennett.



high-dynamic-range applications. The
Defense Advanced Research Projects
Agency (DARPA) then funded develop-
ment for advanced light detection and
ranging (LIDAR) applications where
high repetition rate frames containing
sub-picosecond detail each needed to
be recorded in a single shot. Today, with
support from Livermore’s Weapons and
Complex Integration (WCI) organization,
this time lens technology is again being
applied to single-event, high-dynamic-
range diagnostic applications on NIF.

Slowing Down the Signal


The FemtoScope (Figure 2) improves
the performance of an oscilloscope or
streak camera much in the same way
that a high-performance lens improves
a camera’s output. It is not a recording
instrument in itself. Rather, it dramati-
cally enhances the performance of any
conventional recording instrument to
which it is connected by ultrafast pro-
cessing of waveforms. The FemtoScope
improves the dynamic range of these
instruments and their time resolution
from tens of picoseconds (trillionths of
a second) to hundreds of femtoseconds
(quadrillionths of a second).

Figure 2. FemtoScope equipment.
“The temporal imaging technology on which the FemtoScope is based is fundamentally a time-scale transformation tool that can be configured to magnify, compress, reverse, and even Fourier-transform ultrafast waveforms,” says Livermore scientist Corey Bennett. “We have concentrated our past efforts on developing a time-magnification system.” Just as a scanning electron microscope’s powers of magnification can reveal nanometer-size details of an object’s structure not viewable with an ordinary light microscope, so the FemtoScope’s powers of time magnification can reveal the peaks and valleys in a 1-picosecond signal not detectable by a standalone oscilloscope or streak camera.

In the past, other instruments have obtained very high resolution by conducting repetitive waveform sampling and averaging with ultrashort time intervals. However, because NIF will be fired a maximum of four times a day, diagnostics must operate in a single-shot mode, and repetitive sampling approaches are not an option.

By slowing down or “magnifying” the time scale of the signal before it enters the recording instrument, the FemtoScope allows the capture of signals that otherwise would be too fast to record in any detail. This process not only improves the resolution of the recording system but also increases


the available dynamic range at a given speed. In Figure 3, a simulation shows how three optical pulses separated by 6 picoseconds (first to last) can be “time magnified” so that they occur over 18 picoseconds at the output.

The FemtoScope uses a single-shot process in real time to capture each window of time (or frame) of interest and stretches out the waveform so that greater detail is revealed. Furthermore, this process can be repeated at a rate of more than 100 million frames per second to record the real-time evolution of a signal. With ultrafast resolution and nearly endless recording length, this instrument can uncover waveform data with peaks and valleys never before detectable.

When combined with a 20-GHz real-time oscilloscope, the FemtoScope produces an instrument capable of recording 850-GHz waveforms in 100-picosecond frames at 155 million frames per second until its memory is full. When combined with an optical streak camera, the FemtoScope produces an instrument with a 20-times increase in temporal resolution and a 30-times increase in dynamic range, resulting in an overall improvement of 600 times compared with the performance of the streak camera alone. The same time-lens technology can also be configured to Fourier-transform an input waveform. This produces an output spectrum that looks like the input temporal profile and which can be recorded in a single shot with a conventional imaging spectrometer.

Emerging Needs
The FemtoScope represents a fundamental paradigm shift in high-speed imaging technology. As researchers improve their understanding of physical phenomena, they will need to examine processes on shorter and shorter time scales. The FemtoScope will be an invaluable tool for collecting detailed dynamic data at faster temporal resolution.

The Laboratory plans to use the FemtoScope on NIF experiments, which will need diagnostics with time resolutions on the scale of 1 picosecond or less to determine when high-energy photons first appear and what happens from their first appearance to their peak production. The FemtoScope will also be useful for detecting and recording a broad range of signal strengths—from very weak signal intensities to very strong.

The time-lens technology is fundamentally an optical signal manipulation tool. NIF has applications in which the fundamental measurement desired is that of an x-ray signal. We have been developing picosecond-resolution x-ray sensors, which are fundamentally optical modulators driven by the x-ray signal being recorded. A continuous wave or long pulse optical probe is reflected off these sensors with its amplitude modulated by the incident x-ray flux, thereby performing an x-ray-to-optical conversion of the waveform we wish to record. These sensors are being integrated with the time-lens based recording system. In the past the output was time-magnified using fiber Bragg gratings at the output to stretch the waveform in time. In this case, the incident x-ray signal is mapped to a magnified time trace, which can be recorded with higher fidelity on a streak camera or oscilloscope than without

Figure 3. A false-color image shows three pulses propagating through a temporal imaging system with a magnification of three times. Color here represents intensity or brightness, with red being the brightest. The simulation shows how three optical pulses occurring in a 6-picosecond time frame can be “time magnified” so that, at the output, they occur over 18 picoseconds. The angle of propagation in this figure also represents the path of a particular color of light relative to the carrier frequency. The time lens not only magnifies waveforms in time but it can be used to Fourier-transform the waveform, converting the temporal profile to a scaled spectral profile.




Figure 4. System conceptual diagram integrating an ultrafast x-ray sensor that produces a modulated optical response with a time-lens
based processor. Past work focused on time magnification using fiber Bragg gratings at the output to create a magnified waveform recorded
in time with streak cameras or oscilloscopes. Today, our system utilizes a conventional grating-based spectrometer to map the waveform to
space where it can be recorded on a conventional high-performance CCD camera.
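The numbers quoted in the article can be sanity-checked with a line of arithmetic. A time lens with magnification M maps an input time t to M·t, so a magnification of 3 stretches pulses spanning 6 picoseconds over 18 picoseconds; the specific pulse times below are made up for illustration. Likewise, the streak-camera gains (20 times in temporal resolution, 30 times in dynamic range) multiply to the quoted 600-times overall improvement.

```python
def time_magnify(times_ps, magnification):
    """A time lens with magnification M maps each input time t to M * t."""
    return [magnification * t for t in times_ps]

# Three pulses spanning 6 ps first-to-last (illustrative spacing):
pulses_ps = [0.0, 3.0, 6.0]
print(time_magnify(pulses_ps, 3))  # the stretched pulses span 18 ps

# Combined streak-camera improvement quoted in the text:
resolution_gain = 20
dynamic_range_gain = 30
print(resolution_gain * dynamic_range_gain)  # 600-times overall
```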

the magnification. Today we are using a conventional grating-based spectrometer to map the waveform out in space, and record it with a charge-coupled device (CCD) camera (Figure 4).

The true potential of temporal imaging is just beginning to be explored. The FemtoScope could also be applied to several other research facilities and experiments with diagnostic needs similar to those of the NIF. The Defense Advanced Research Projects Agency co-funded Livermore to develop the technology for LIDAR, which measures the properties of scattered light to gather information about a distant target. The same high repetition rate system has applications in particle counting and sub-picosecond resolution time-of-flight measurements at the Large Hadron Collider at CERN, where scientists are recreating in the lab conditions just after the Big Bang and studying the basic nature of subatomic particles.

The FemtoScope will also be a valuable tool for Livermore researchers who are beginning development of a new energy concept known as the Laser Inertial Fusion Engine, or LIFE, which is based on physics and technology developed for NIF. LIFE has the potential to meet future worldwide energy needs in an inherently safe, sustainable manner without carbon dioxide emissions, while dramatically shrinking the planet’s stockpile of spent nuclear fuel.

Related References
1. Hernandez, V. J., C. V. Bennett, B. D. Moran, A. D. Drobshoff, C. Langrock, D. Chang, M. M. Fejer, and M. Ibsen, “745 fs Resolution Single-shot Recording at 2.1 TSample/s and 104 Mframes/s Using Temporal Imaging,” OSA Nonlinear Optics Conference, PDNFA2, Honolulu, HI, July 17, 2009.
2. Bennett, C. V., B. D. Moran, C. Langrock, M. M. Fejer, and M. Ibsen, “640 GHz Real-Time Recording Using Temporal Imaging,” OSA Conference on Lasers and Electro-Optics, CTuA6, San Jose, CA, May 6, 2008.
3. Lowry, M. E., C. V. Bennett, S. P. Vernon, R. Stewart, R. Welty, J. Heebner, O. Landen, and P. M. Bell, “X-ray Detection by Direct Modulation of an Optical Probe Beam–RadSensor: Progress on Development for Imaging Applications,” Rev. Scientific Instruments, 75, 10, pp. 3995–3997, 2004.
4. Bennett, C. V., and B. H. Kolner, “Principles of Parametric Temporal Imaging—Part I: System Configurations,” IEEE J. Quantum Electronics, 36, 6, pp. 430–437, 2000.
5. Bennett, C. V., and B. H. Kolner, “Principles of Parametric Temporal Imaging—Part II: System Performance,” IEEE J. Quantum Electronics, 36, 6, pp. 649–655, 2000.
6. Bennett, C. V., and B. H. Kolner, “Upconversion Time Microscope Demonstrating 103× Magnification of Femtosecond Waveforms,” Optics Letters, 24, 11, pp. 783–785, 1999.


Transforming the Weapons Complex


Through a Modern Manufacturing Infrastructure

For more information contact:
Keith Carlisle
(925) 424-3495
carlisle4@llnl.gov

Collaboration between LLNL, other Nuclear Weapons Complex labs, and private industry yields a modern, flexible pit manufacturing capability.

In December 2001, the government released the comprehensive Nuclear Posture Review, a report that recommends overall U.S. nuclear policy and strategy and the capabilities and forces needed to ensure that the nation’s nuclear weapons remain safe, secure, and effective far into the future. While the report recommends strategies for retaining the smallest possible nuclear stockpile consistent with our need to deter adversaries and reassure our allies, it also calls for substantial investments to rebuild America’s aging nuclear infrastructure.

One area identified as requiring modernization is the manufacture of weapon pits (plutonium shells). Current pit manufacturing processes depend on an unreliable infrastructure that is becoming costly to maintain and will not meet the future needs of the Nuclear Weapons Complex (NWC). The required machines and equipment were custom-designed in the 1950s, when the U.S. had a strong machine tool supply chain. This has changed as manufacturing has moved offshore, leading to the demise of our special-purpose machine suppliers and of their support in maintaining production equipment. Fortunately, advances in manufacturing technology have led to the development of commercial machines that can now meet the fabrication tolerances and operational flexibility required for pit manufacture.
Recognizing the significant cost
advantages commercial machine supply
could offer over a custom design, the
National Nuclear Security Administra-
tion (NNSA) directed the Plutonium Sus-
tainment Enterprise to identify, develop,
and deploy a commercial machining
center for future pit manufacture. LANL
was directed to manage the project as
they were the potential customer. To
ensure success, the entire NWC col-
laborated and converged on a machine
requirements specification that would
meet their future needs. All participants
agreed that the new machining center
must:
• Consolidate and reduce the number
of machines into one machining
center.
• Have the required capacity, preci-
sion, and flexibility.
• Have a small footprint.
• Enable flexibility through multiple tooling, turning, drilling, and milling.

Figure 1. The Hardinge TS350.

18 FY10 Engineering Innovations, Research & Technology Report


• Support radiation-safe glove-box operation.
• Have an ergonomic design for loading and unloading, material management, ease of cleaning, and maintenance.

After an exhaustive search involving over 100 companies, the NWC concluded that a standard commercial machine would not meet the demanding requirements for glove-box operation. A combined approach was required, which would use commercial hardware configured for glove-box operation. Experts from across the NWC generated designs for a machine selection process to meet the pit requirements specification. An important aspect of this process was working with commercial machine manufacturing companies to take advantage of their existing commodity subsystems (spindles, slides, turrets, controllers, etc.) that could be configured to meet NWC needs.

At the conclusion of the selection process, six designs were submitted and evaluated against the requirements specification. From these six, the Hardinge TS350, designed by Keith Carlisle, Director of LLNL's Center for Precision Engineering, was selected (Figure 1).

Livermore's Precision Engineering group has been designing and building special-purpose machines since the 1970s, including the Large Optics Diamond Turning Machine (LODTM), the world's most accurate machine for fabricating meter-sized optics. Today, the group provides a range of machines for the manufacture and inspection of large, meter-sized laser optics for the National Ignition Facility (NIF) and the French Laser Mégajoule (LMJ). Precision Engineering also supports the Stockpile Stewardship program, having designed and built the Precision Inspection Shell Measuring Machine (PrISMM) shown in Figure 2, the primary shell-measuring machine at LLNL and potentially the entire NWC.

Figure 2. PrISMM.

LLNL's unique special-purpose machine design and build capability was a major factor in the design of the TS350 and the partnership with Hardinge, a company recognized as a world leader in the supply of precision lathes. This combination optimized the design to meet the operational requirements for a glove-box machine and had all the advantages of a commercial build.

The design configuration for the TS350 was based on two basic machining operations for pit manufacture: external and internal profile forming. Each operation requires special work-holding fixtures that are both heavy and time-consuming to change. Swapping out work-holding fixtures can expose the machine operator to radiation and risk of injury, so a major design consideration for the TS350 was to eliminate or minimize those risks. This was achieved by incorporating two work spindles in the design. Each work spindle is dedicated to either internal or external work, thus eliminating the constant need for work-holding change-out. Hardinge high-precision lathe spindles met the machine requirements and had proven and reliable lifetimes.

Another factor in the design process was facilitating the handling of multiple cutting tools, including fixed tools for turning and boring, live tooling for drilling and milling, and part probing. To reduce the need for manual tool change, modern commercial machines use automated tool-changing units, which are flexible and robust. However, these units can be a risk when used in a glove-box environment because of maintainability and the difficulty of maintaining cleanliness for criticality management. An indexing tool turret offered a more suitable alternative, and Hardinge had considerable experience in their use for this type of machining operation.

A major design constraint was minimizing the overall size of the machine to achieve a small footprint to reduce facility space requirements in a radiation zone. The cost of decontamination and eventual disposal of a machine at the end of its life was also a major concern, requiring design for ease of cleaning and disassembly.

Figure 3. Machine and support equipment.

Many configurations were reviewed before reaching the final design of the TS350. The vertical column design provided the smallest footprint. The
work-head spindles and turret are positioned at an operational height determined by glove-box ergonomics. Mounting these units on horizontal slide-ways maintained the required operational height for all machining operations. Good operator access is also achieved by keeping the glove-box profile parallel and close to the machine axis.

To maximize machine flexibility, a vertical servo axis was incorporated to allow raising and lowering the turret. This provided greater flexibility for the live tooling (drilling and milling), use of a three-axis "tool-setting station," tool-setting after indexing (which eliminated turret indexing errors), and a parking zone for better access for cleaning and work fixture change-out.

An important aspect of the machine design is its integration with the glove-box design, which is essential for the machine operation, including loading, unloading, part transfer, tool change, machining, chip management, cleaning, and maintenance. All operations are performed through glove ports, so positioning of these ports relative to the machine was important, as was positioning of windows. Arm reach through the glove port has a critical effective length: if too short or long to carry out an operation, operator strain and injury could result. Awkward reach is also the main cause of a glove breach. Reducing operator injury and glove port breaches was a major objective of the system design, so the design team comprised engineers, machine operators, maintenance technicians, and an occupational physiotherapist with knowledge of glove-box injuries. Experts in the glove-box industry, the NWC, and the UK Atomic Weapons Establishment (AWE) also were consulted.

Proposed designs were reviewed and optimized using Delmia VNC (a software package that allowed combination of both the machine and glove-box design models), which allowed simulation of the machine's kinematics and its controller, including part-program simulation for part fabrication. The same software was used to model operator ergonomics. The designs were also evaluated by AWE using their own specially developed 3-D simulation package. Using computer-aided design models of the machine and glove box, a full-size, three-dimensional image of the machine was generated. Figure 3 shows a typical model of the machine and support equipment.
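The reach criterion described above amounts to a range check per operation. A toy sketch of that check (all distances are hypothetical, not from the TS350 glove-box study):

```python
# Toy ergonomic check for glove-port reach: an operation is flagged
# when its required reach falls outside a comfortable band, since
# both too-short and too-long reaches can cause strain or a breach.
COMFORT_MIN_CM = 20.0   # below this: cramped posture (assumed value)
COMFORT_MAX_CM = 55.0   # above this: overextension, strain (assumed value)

def reach_ok(required_reach_cm: float) -> bool:
    return COMFORT_MIN_CM <= required_reach_cm <= COMFORT_MAX_CM

# Hypothetical operations and their required reaches through a port:
operations = {"load part": 35.0, "change tool": 62.0, "clean chips": 48.0}
flagged = [op for op, reach in operations.items() if not reach_ok(reach)]
print(flagged)  # ['change tool']
```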



Figure 4. TS350 undergoing machining trials.

Using only a head-mounted vision system, the operators and designers were able to move 360° around the virtual machine. This enabled glove ports to be evaluated for position and reach, and ensured window placement was optimal for visibility. Machine operability was also tested, providing essential feedback to the machine design without the need for building a full-scale model. This system provided instant operational feedback for design changes, greatly reducing the risk of discovering design problems after manufacture and avoiding cost and schedule overruns.

In parallel with the TS350 machine design, LLNL worked within the Plutonium Sustainability program to develop a casting process for plutonium shells that achieved near net-shape. This work resulted in major reductions in both unit cost and waste material and also eliminated a thermal process that removed a production bottleneck. The near-net-shape casting enabled a 50% reduction in material to be removed by the TS350 machine and a potential 50% reduction in machining time and collection of waste material, which in turn greatly reduced operator radiation exposure. The reduction in mass reduces the potential for operator injuries during part handling and transfer within the glove box.

With the award of the manufacturing contract to Hardinge, the TS350 is now undergoing trials at their site (Figure 4), where the machine is meeting all expectations.

This project owes its success to the collaboration of many engineers across the NWC and AWE. The machine tool builder has been an integral part of the process, selecting and integrating their proven products into the design and producing a cost-effective solution they can maintain and support in the future. The TS350 provides NNSA with a robust, modern manufacturing center that is fully capable and sufficiently flexible to resolve technical problems in the stockpile, and able to respond to adverse geopolitical change. AWE indicated they intend to purchase the TS350 to meet their future needs, and the machine tool builder may offer a commercial version of the machine. Keith Carlisle received an Award of Excellence for the planning, managing, and execution of the modern turning center evaluation and selection process and for significant contributions to the Stockpile Stewardship Program.
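The 50% figures above follow from simple proportionality: at a fixed material-removal rate, machining time and collected waste both scale with the volume of stock to be removed. A toy illustration (the stock volume and removal rate are hypothetical, not TS350 values):

```python
# Machining time scales linearly with removal volume at a fixed
# material-removal rate (MRR), so halving the stock via near-net-shape
# casting halves both the machining time and the waste collected.
def machining_time_hr(removal_volume_cm3: float, mrr_cm3_per_hr: float) -> float:
    return removal_volume_cm3 / mrr_cm3_per_hr

baseline = machining_time_hr(1000.0, 50.0)        # hypothetical stock and MRR
near_net = machining_time_hr(1000.0 * 0.5, 50.0)  # 50% less material to remove
print(baseline, near_net)  # 20.0 10.0
```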




Ultrahigh-Resolution Adaptive Optics Optical Coherence Tomography

For more information contact:
Diana C. Chen
(925) 423-5664
chen47@llnl.gov

Enabling early diagnosis and treatment of blinding eye disease.

Around the world, millions of people suffer from eye diseases that degrade the retina, the light-processing component of the eye, causing blindness. Ophthalmologists observe the retina to diagnose and monitor a wide variety of blinding diseases; however, conventional instrumentation does not provide sufficient resolution to reveal the cellular-level details that would allow detection and monitoring of ocular diseases at their earliest stages. Funded by the National Eye Institute, Livermore scientists and engineers, in collaboration with the University of California at Davis, Indiana University, and Boston Micromachines Corporation in Cambridge, Massachusetts, have created an optical coherence tomography (OCT) system that incorporates microelectromechanical systems (MEMS) and adaptive optics (AO) to noninvasively observe and record ultrahigh-resolution, three-dimensional (3-D) retinal images in real time. MEMS-AO-OCT allows precise, in vivo visualization and characterization of all the cellular layers in the human retina. It also provides a permanent, digitized record of clinical observations for monitoring disease progression and the effectiveness of therapeutic treatments.

The optical structures in the eye, particularly the cornea and lens, can produce ocular conditions such as myopia, hypermetropia, and astigmatism that many of us encounter in our natural vision. As a result of these aberrations, we see blurred images. During retinal scans, these aberrations also degrade the image quality obtained from traditional OCT or ophthalmoscopes.
MEMS-AO-OCT incorporates an AO system similar to the one pioneered at Livermore, with initial support from the Laboratory Directed Research and Development Program, for use in large, high-powered telescopes, such as those at W. M. Keck Observatory in Hawaii. In this capacity, AO systems correct wavefront aberrations caused by atmospheric distortion, which blur our view from Earth of stars, galaxies, and other celestial objects. The same principle is applied to MEMS-AO-OCT, except that the optics correct and compensate for aberrations from ocular conditions. AO compensates for optical aberrations by controlling the phase of the lightwaves, or wavefronts. It continuously samples optical aberrations and then automatically corrects them.

(Figure 1 components: AO control, deformable mirror and deformable-mirror control, wavefront reconstruction, Hartmann-Shack wavefront sensor, scanners, beamsplitters, mirrors, OCT light source/laser beacon, eye, OCT detector.)
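The sample-then-correct loop just described can be sketched numerically. In the sketch below (an illustration, not the instrument's control code), the aberration is a vector of phase values, the "sensor" measures the residual, and the "deformable mirror" pushes against it with a loop gain:

```python
import numpy as np

# Toy closed-loop AO correction: repeatedly measure the residual
# wavefront aberration and subtract a gain-weighted correction, as a
# deformable mirror driven by a wavefront sensor would.
rng = np.random.default_rng(0)
aberration = rng.normal(0.0, 1.0, 64)  # static ocular aberration (arbitrary units)
dm_shape = np.zeros(64)                # deformable-mirror command
gain = 0.5

for _ in range(20):
    residual = aberration + dm_shape   # what the wavefront sensor sees
    dm_shape -= gain * residual        # drive the mirror against the residual

rms_before = np.sqrt(np.mean(aberration**2))
rms_after = np.sqrt(np.mean((aberration + dm_shape)**2))
print(rms_before, rms_after)  # residual RMS drops by about six orders of magnitude
```

With a constant gain g, each iteration multiplies the residual by (1 − g), so the loop converges geometrically for 0 < g < 2; real systems add a reconstruction step between sensor slopes and mirror commands.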
Figure 1. Schematic of a MEMS-AO-OCT system.

As shown schematically in Figure 1, the MEMS-AO-OCT uses an AO system to automatically measure the optical aberrations in the eye with a wavefront sensor and rapidly compensate for these aberrations with a wavefront



Figure 2. Example of the AO-OCT application for clinical imaging. (Panels at retinal eccentricities 9˚N 9˚SR, 8˚T 2˚SR, 4.5˚TR, 4.5˚NR, 8˚N 8˚IR, 4.5˚T 4.5˚IR.)

corrector. The aberration-free signals are then integrated with the OCT for 3-D image acquisition. Implementing AO in the system increases lateral resolution by approximately an order of magnitude. Without aberration correction from the AO, the lateral resolution of a clinical OCT system is not sufficient for imaging individual cellular structure.

The wavefront corrector is a critical component in the system. In traditional AO systems, the wavefront corrector can be both large and expensive. The MEMS-AO-OCT was designed using a MEMS-deformable mirror to reduce the size and cost of the system without sacrificing speed or accuracy. Using this state-of-the-art AO technology enables the implementation of an instrument suitable for clinical practice. The MEMS-AO-OCT constructed at LLNL is the first instrument that has been optimized for clinical use. This optimization involved a number of critical features including compact design and automating several components to enable the instrument to be operated efficiently by a clinician.

OCT systems are based on interferometry, where light from a single source is split into a sample and a reference beam. These two separate beams travel along different paths until they ultimately reunite in a detector that measures their interference. In MEMS-AO-OCT, an ultrabroadband light is generated using a superluminescent diode, and the sample beam propagates through a series of telescopes, mirrors, and horizontal and vertical scanners before reaching the patient's eye. The light beam is focused onto the patient's retina in a raster, or uniform, pattern, creating individual "snapshots" of each layer. A wavefront sensor automatically measures the patient's optical aberrations. A MEMS deformable mirror working in conjunction with a Badal optometer and a pair of rotating cylinders then compensates for the distortions. These components make the device effective even for patients who have large refractive errors, obviating the need to fit patients with trial lenses. The light reflected off the retina is then relayed back through the system to the detector. The reference beam, whose path length matches that of the sample beam, reflects off a pair of mirrors into the detector.

Compact afocal telescopes align the system components with the patient's pupil to achieve precise measurements. Inside the detector, a spectrometer and a charge-coupled-device camera record the sample and reference signatures. Custom computer software interprets the recorded signals and produces high-resolution, 3-D, digital images. The device has a total footprint of approximately 0.5 cubic meters and can be easily placed and moved within a physician's office. In addition, its commercial components make the system a financially feasible option for practices, and its cost is competitive with existing instruments that have much lower resolution.

Clinical Results
To date, more than 100 individuals have been clinically tested with AO-OCTs built and operated at the University of California, Davis and Indiana University. These instruments have been used to image both healthy and diseased eyes with different amounts of ocular aberration.

The AO-OCT system can acquire data at different retinal eccentricities to allow high-resolution sampling of retinal features of interest that are not resolvable by other methods. Figure 2 shows results of an AO-OCT imaging




Figure 3. Example of AO-OCT imaging of the optic nerve head. (Left) Fundus photo with superimposed OCT-fundus reconstructed from the AO-OCT volumes. (Right) Representative B-scans and C-scans. (Scan locations: 15N 2SR, 16N 6SR, 12NR, 17N 1SR, 14N 2IR.)

session during which several retinal eccentricities were imaged with 1 × 1 mm sampling volumes at the fovea (dark region) and optical nerve head (ONH, bright region).

Chromatic aberration effects increase with retinal eccentricity, which makes the imaging of eccentric locations such as the ONH more difficult. We find that retinal structures are resolved and revealed using AO-OCT that were not visible using OCT without AO correction. Imaging these microscopic ONH structures is critical for diagnosing and monitoring the progression of ONH diseases. Figure 3 shows results of ONH imaging with AO-OCT.

Figure 4. Micro-traction was not detected with other imaging modalities.

Figure 5. Microscotoma cannot be detected with other imaging modalities.

Figure 4 shows micro-traction in the center of the fovea. Because of its small size, it was misdiagnosed using other image modalities including commercial



Fourier domain OCT and fundus photography. Because AO-OCT has a much higher resolution, it clearly reveals the problematic structures and provides ophthalmologists with an effective imaging tool for diagnoses.

Figure 5 represents a case of microscotoma (a small blind spot close to the fovea) that could not be detected by any standard imaging instrument. However, AO-OCT indicates that structural disruptions in the outer nuclear layer extending to the photoreceptor layers are good candidates to account for the reduction in vision reported by the patient.

The AO-OCT system can acquire 3-D, ultrahigh-resolution images over small retinal areas (approximately 300 × 250 × 600 µm), which reveal cellular structures within the retina, as shown in Figure 6. Volumetric rendering allows detailed insight into microscopic structures not possible with 2-D scans. For instance, an abnormal appearance of the photoreceptor cell layer is indicative of numerous blinding diseases, such as macular degeneration, diabetic retinopathy, and retinitis pigmentosa, while abnormalities in the retinal nerve fiber layer can be an indicator of glaucoma.

Impact of Innovation
The MEMS-AO-OCT combines AO technology, OCT, and confocal scanning laser ophthalmoscopy to resolve features that cannot be detected by conventional imaging tools. MEMS-AO-OCT provides real-time, high-resolution, cellular-level images of the living human retina, and the technology could be adapted for use in other medical fields. Because biological tissues absorb and reflect light differently, the intensity and wavelength of the light source must be gauged to specific tissues to optimize image resolution. MEMS-AO-OCT can be easily adjusted to accommodate these varying light parameters, making it a valuable tool for diagnosing and treating many health conditions, including cardiovascular disease. In addition, dentists could image both hard (teeth) and soft (gums) tissues, and oncologists could identify cancer cells well before they develop into tumors. The system could ultimately help medical professionals accurately diagnose diseases, dramatically reducing the cost of medical treatment and improving the quality of life for millions of people.

Related References
1. Chen, D. C., S. Olivier, S. Jones, R. J. Zawadzki, J. Evans, S. Choi, and J. Werner, "Compact MEMS-Based Adaptive Optics: Optical Coherence Tomography for Clinical Use," SPIE, 6888-OF, 2008.
2. Zawadzki, R. J., Y. Zhang, S. M. Jones, R. Ferguson, S. S. Choi, B. Cense, J. Evans, D. C. Chen, D. Miller, S. Olivier, and J. Werner, "Ultrahigh-Resolution Adaptive Optics—Optical Coherence Tomography: Towards Isotropic 3 µm Resolution for In Vivo Retinal Imaging," SPIE, 6429–09, 2007.
3. Zawadzki, R. J., S. Jones, S. Olivier, M. Zhao, B. Bower, J. Izatt, S. Choi, S. Laut, and J. Werner, "Adaptive-Optics Optical Coherence Tomography for High-Resolution and High-Speed 3D Retinal In Vivo Imaging," Optics Express, 13, 21, pp. 8532–8546, 2005.
4. Zawadzki, R. J., S. Choi, A. Fuller, J. Evans, B. Hamann, and J. Werner, "Cellular Resolution Volumetric In Vivo Retinal Imaging with Adaptive Optics–Optical Coherence Tomography," Optics Express, 17, 5, pp. 4084–4094, 2009.

Figure 6. Example of ultrahigh-resolution volume acquisition with AO-OCT.
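The spectral-domain detection described earlier, in which a spectrometer records the sample-reference interference, can be illustrated numerically: a reflector at depth z modulates the spectrum as cos(2kz) over wavenumber k, and a Fourier transform recovers the depth. A minimal sketch (idealized numbers, not the instrument's software):

```python
import numpy as np

# A reflector at path-length difference z modulates the spectral
# interferogram as cos(2*k*z); an FFT over wavenumber k turns that
# modulation back into a peak at depth z.
k = np.linspace(6.0e6, 8.0e6, 2048)   # wavenumber samples, rad/m (assumed band)
z_true = 150e-6                       # reflector depth, m (assumed)
spectrum = 1.0 + 0.5 * np.cos(2.0 * k * z_true)

profile = np.abs(np.fft.rfft(spectrum - spectrum.mean()))
dk = k[1] - k[0]
depth_axis = np.fft.rfftfreq(k.size, d=dk) * np.pi  # maps FFT bins to depth z
z_est = depth_axis[np.argmax(profile)]
print(z_est)  # close to 150e-6, within one FFT bin
```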




Multiphysics Engineering Simulations with ALE3D

For more information contact:
Daniel A. White
(925) 422-9870
white37@llnl.gov

Implementation of magnetohydrodynamics in ALE3D provides new and exciting capabilities for pulse power, microfluidics, and other applications.

Simulation is a key element of modern engineering, complementing experimentation and "back-of-the-envelope" calculations. We define simulation as the solution of a mathematical model of a system using numerical methods, with those numerical methods being implemented and executed on a computer. Simulation, therefore, is not just visualization or animation; it is a quantitatively correct approximation of reality. Engineers have long used single-physics simulations, which employ computer codes optimized for a single area. For example, an electrical engineer would use a computer to solve the equations of electromagnetics when designing an antenna, a structural engineer would use a computer to solve the equations of elasticity when designing a bridge, or a mechanical engineer would use a computer to solve the equations of heat transfer when designing a cooling system. There are many engineering problems, however, whose complexity requires fully coupled multiphysics simulations. A multiphysics simulation incorporates multiple, disparate mathematical models, which are coupled, and solves these models self-consistently. Developing a multiphysics simulation code requires a close collaboration between physicists, engineers, mathematicians, and computer scientists, and is a large undertaking. The effort to develop a three-dimensional, massively parallel, multiphysics code can exceed 100 man-years. However, the end result is the ability to gain insight into complex phenomena, the ability to "see" quantities like electric fields and thermal fluxes that cannot be measured experimentally, and the ability to predict and optimize the performance of next-generation devices.

ALE3D is an acronym for Arbitrary Lagrangian Eulerian in 3 Dimensions. This LLNL-developed multiphysics simulation code incorporates hydrodynamics, heat transfer, chemistry, incompressible flow, and electromagnetics. This article focuses on recently implemented magnetohydrodynamics applications; these are applications that require a fully coupled solution of hydrodynamics, heat transfer, and electromagnetics. Magnetohydrodynamic phenomena are most commonly associated with the study of electrically conducting gases (i.e., plasmas) such as those that are studied in the realms of astrophysics or magnetic fusion energy.

Figure 1 module equations: EM, ∂B/∂t = −∇×((1/σ)∇×((1/μ)B)) + g + ∇×(v×B); Thermal, c ∂T/∂t = ∇·(k∇T) + J•E; Mechanics, ρ ∂²u/∂t² = ∇·S + J×B.

Figure 1. Electromagnetics is represented by the white module, heat transfer is represented by the yellow module, and hydrodynamics is represented by the green module. The arrows show the physical quantities that are the input and the output of each module. For example, the electromagnetics module produces Joule heating, J•E, which is input to the heat transfer module, and the heat transfer module produces temperature, T, which is input to both the electromagnetics and hydrodynamics modules.
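The coupling shown in Figure 1 is advanced with an operator-splitting algorithm, as the text below describes: each module is solved separately within a timestep, exchanging coupling terms with the others. A toy sketch of the idea on two coupled equations (illustrative only, not ALE3D's solver or equations):

```python
# Toy operator-splitting timestep for two coupled equations:
#   dI/dt = -R(T) * I / L     (current decay in a circuit)
#   dT/dt =  R(T) * I**2 / C  (Joule heating of the conductor)
# Each "module" is advanced separately within the step, passing its
# output (I or T) to the other module.
def R(T, R0=1e-3, alpha=4e-3):
    return R0 * (1.0 + alpha * T)   # resistance rises with temperature

def step(I, T, dt, L=1e-6, C=500.0):
    I = I - dt * R(T) * I / L       # "EM" module (explicit Euler), T frozen
    T = T + dt * R(T) * I**2 / C    # "thermal" module, using the updated I
    return I, T

I, T = 1.0e3, 20.0                  # 1-kA seed current, 20-degree start (assumed)
dt = 1e-6
for _ in range(1000):
    I, T = step(I, T, dt)
print(I, T)  # the current decays while the temperature creeps upward
```

Splitting trades some accuracy per step for the freedom to use a tailored solver in each module, which is why the coupled system in Figure 1 can reuse specialized finite element discretizations module by module.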



Figure 2. Simulation of an explosive magnetic flux compression generator. The graphic on the left is a snapshot just as the high explosive (HE) is detonated; the graphic on the right is at a later time when the magnetic field has been compressed. The vectors represent the magnetic field; the contours represent pressure.
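The amplification visible in Figure 2 can be bounded with an ideal flux-conservation estimate: for a perfectly conducting cavity the magnetic flux Φ = L·I is constant, so as compression reduces the cavity inductance, the seed current rises in proportion. A quick sketch (all values hypothetical):

```python
# Ideal magnetic flux compression: with a perfectly conducting cavity
# the flux Phi = L * I is conserved, so reducing the cavity inductance
# L amplifies the seed current I. Real generators fall short of this
# bound because of resistive losses.
def compressed_current(seed_current_A: float, L0_H: float, L_H: float) -> float:
    """Current after the cavity inductance drops from L0 to L."""
    return seed_current_A * L0_H / L_H

I0 = 10e3          # 10-kA seed current (assumed)
amplified = compressed_current(I0, L0_H=1e-6, L_H=2e-8)  # 50x inductance drop
print(amplified)   # ~5e5 A
```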

Computer codes for simulating this class of magnetohydrodynamics problems have existed for some time. The important feature of our magnetohydrodynamic implementation in ALE3D is that it excels for problems that involve solid materials that undergo gross deformation, melting, and transition to plasma. The multiphysics nature of such problems is illustrated in Figure 1, where electromagnetics is represented by a white module, heat transfer is represented by yellow, and hydrodynamics is represented by green. The arrows show the physical quantities that are the input and the output of each module. The coupled equations are solved using an operator-splitting algorithm, and within each module the equations are solved using the finite element method. The illustration in Figure 1 is deceptive; it looks simple, but the development of this magnetohydrodynamics capability required hundreds of thousands of lines of software. Simulations using magnetohydrodynamics often require large-scale computing: a typical problem will require a week of processing time using 256 processors, while the largest problems require thousands of processors.

Opening New Horizons in Pulse Power Research
The primary motivation for developing an ALE3D magnetohydrodynamics capability was to simulate explosive magnetic flux compression generators. These generators are energy conversion devices, in that they convert the chemical energy in a high explosive to mechanical energy, and the mechanical energy is in turn converted to electrical energy. The purpose of the device is to generate a large pulse of electric current with a faster rise time than can be achieved using other means (such as capacitor banks). To function, the device is seeded with an electric current, generating a magnetic field in a cavity; the high explosive is then detonated, compressing the cavity, and as the cavity is compressed, the initial seed current is amplified in order to maintain the magnetic flux in the cavity (in accordance with Lenz's law). An example simulation of this process is shown in Figure 2. Such simulations have enabled engineers to visualize fields and currents that are difficult to measure in experiments, thus giving insight into the operation of the device. As a specific example, the device shown in Figure 2 is designed to be axially symmetric, but the effect of deviations from perfect symmetry (due to manufacturing tolerances, material imperfections, etc.) can be investigated via full three-dimensional simulation by incorporating the imperfections into the simulation. The ability of ALE3D to simulate the detonation of a high explosive, combined with material motion, electromagnetic fields and currents, and heating and melting of conductors, is a very powerful and unique capability that is having a tremendous impact on the design of explosive magnetic flux compression generators.

A railgun is another energy conversion device, in some sense the opposite of a magnetic flux compression generator. In a railgun, electric current is input into the device, and the output is mechanical energy in the form of a high-velocity projectile. A railgun is essentially a linear electric motor, and the projectile is the armature of the motor. Railguns have application in defense as an alternative to traditional explosive-based guns. One advantage of a railgun is the very high velocity that can be




Figure 3. Simulation of a generic railgun. The armature is between two rails; current flows into one rail, through the armature, and out the other rail. This graphic shows only one-half of the geometry so that the stress within the rails can be seen. (Color maps: magnetic field, stress.)

achieved by the projectile compared to that of a traditional gun. A second advantage is that the velocity can be "dialed" by the electrical power supply. Figure 3 is a simulation of a generic railgun, with the graphic showing a cross-section of the gun. The purpose of this particular simulation was to investigate stress in the rails, which is caused by the enormous magnetic pressure in the gun. A high magnetic field is required to achieve high acceleration of the armature, but a high magnetic field also causes stress in the rails. If the gun is not properly designed (i.e., if the stress is too high) it can explode; hence accurate estimation of stress via full three-dimensional simulation is essential. A novel capability of ALE3D is its ability, via an algorithm, to correctly simulate a sliding electromagnetic contact. This algorithm allows two materials to slide past each other while allowing electric current to flow across the material interface. The sliding electrical contact algorithm is quite sophisticated, involving advanced mortar finite element methods with implicit constraints to satisfy conservation of current and continuity of the magnetic field across the interface. Currently, LLNL is using the ALE3D code to assist the Office of Naval Research in their goal to design a naval railgun. ALE3D simulations are being used to investigate alternative rail geometries, alternative rail materials, and the effect of Joule heating and viscous heating on armature erosion.

Simulation Enables Virtual Prototyping
The two previous examples of magnetohydrodynamic simulation were pulse power applications involving possibly millions of amperes of current. A third example, simulating a microfluidic pump (Figure 4), is quite different and illustrates the broad utility of



multiphysics codes such as ALE3D. The key element of the pump is a magnetoelastic membrane that is manufactured by dispersing iron particles in silicone. In a traditional membrane pump, the membrane is flexed via a mechanical crank or gear system or by hydraulics. The unique feature of this pump is that it is driven by an external magnetic field: an applied magnetic field deforms the membrane, and as the membrane is deformed, fluid is pulled into one side of the pump and ejected out the other side. The advantage of this magnetoelastic pump is that it does not require any external connections. This pump may have applications in microfluidics chemistry-lab-on-a-chip and in in vivo medical equipment. Simulations are being used to optimize the geometry of the device and to determine the required magnetic field strength. This application highlights a key payoff of LLNL's investment in multiphysics simulation: novel ideas can be virtually prototyped via simulation prior to manufacturing.

There are numerous other applications for the ALE3D magnetohydrodynamics capability. The code is currently being used to investigate the effects of aging on the performance of detonators and the effect of lightning on detonators, an important component of LLNL's Stockpile Stewardship mission. As another example, ALE3D was used successfully to investigate magnetic steering of injected fuel pellets for the Laser Inertial Fusion Energy (LIFE) project. Research and development of the ALE3D magnetohydrodynamic module continues. Current development efforts include incorporation of a model for arcing and development of models for boundary layer effects.

Related References
1. White, D., R. Rieben, and B. Wallin, "Coupling Magnetic Fields and ALE Hydrodynamics for 3D Simulations of MFCG's," Proceedings of the 2006 IEEE International Conference on Megagauss Magnetic Field Generation, pp. 371–376, Santa Fe, NM, November 5–10, 2006.
2. Rieben, R., D. White, B. Wallin, and J. Solberg, "An Arbitrary Lagrangian-Eulerian Discretization of MHD on 3D Unstructured Grids," J. Comp. Phys., 226, pp. 534–570, 2007.
3. Barham, M., and D. White, "Finite Element Modeling of the Deformation of a Thin Magnetoelastic Film Compared to a Membrane Model," IEEE Transactions on Magnetics, 45, 10, pp. 4124–4127, 2009.
4. Barham, M., D. White, and D. Steigmann, "Finite Element Modeling of the Deformation of Magnetoelastic Film," J. Comp. Phys., 229, pp. 6193–6207, 2010.


Figure 4. Simulation of a magnetoelastic-based microfluidic pump. The applied magnetic field (magenta vectors) deforms a magnetoelastic
membrane (solid black), which pulls fluid into one side of the pump and expels fluid out the other side of the pump. The pseudocolor scalar
field is the speed of the fluid, blue being slow and red being fast.
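The “enormous magnetic pressure” cited for the railgun example has a simple closed form, P = B²/2μ₀. A quick order-of-magnitude check (the field value below is illustrative, not taken from the report):

```python
# Magnetic ("Maxwell") pressure exerted on current-carrying rails:
# P = B^2 / (2 * mu0). The 25-T bore field used here is illustrative only.
MU0 = 4e-7 * 3.141592653589793  # vacuum permeability, T*m/A


def magnetic_pressure(b_tesla):
    """Magnetic pressure in pascals for a flux density of b_tesla."""
    return b_tesla ** 2 / (2.0 * MU0)


p = magnetic_pressure(25.0)
print(f"{p / 1e6:.0f} MPa")  # hundreds of MPa, comparable to structural yield stresses
```

Pressures of this magnitude on the rail surfaces are why an improperly designed gun can fail structurally.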

Lawrence Livermore National Laboratory 29


Computational Engineering
Research

Direct Observation of Phase Transformations and Twinning Under Extreme Conditions: In Situ Measurements at the Crystal Scale

For more information contact:
Joel V. Bernier
(925) 423-3708
bernier2@llnl.gov

Mechanical twinning (MT) and displacive phase transformations (DPT) are complex deformation-driven mechanisms observed in polycrystalline materials including metals, ceramics, minerals, and high explosives. Both MT and DPT can have a significant effect on equation-of-state and critical mechanical properties such as strength and stiffness. In situ observations of MT and DPT at the crystal scale are essential for motivating, validating, and verifying advanced constitutive models. The recent availability of large, fast flat-panel detectors at synchrotron x-ray sources, such as the Advanced Photon Source (APS), has enabled the development of novel experimental techniques. We are leveraging these capabilities and developing a fully 3-D, in situ characterization technique having unprecedented resolutions that will accommodate quasistatic thermomechanical loading in situ using a uniaxial loadframe as well as a diamond anvil cell (DAC).

Project Goals
We propose to develop a novel x-ray diffraction-based experimental capability for making direct observations of MT and DPT mechanisms in individual grains embedded in polycrystalline samples subject to high pressure and temperature in situ. A schematic of the experimental setup is shown in Fig. 1. As a sample case, the α↔ε DPT in iron will be thoroughly characterized at the crystal scale, including the non-ambient material properties of the high-pressure ε-iron phase, which will in turn yield a calibrated crystal-scale constitutive model. Additional materials of programmatic interest, such as zirconium and cerium, will subsequently be studied. We propose to extend the method to heavily deformed/higher defect content materials, and to develop a combined angular (strain) resolved and spatially (grain/domain boundary) resolved technique that forms the basis for a dedicated 3-D x-ray microscopy instrument at the APS 1-ID beamline. This includes development of algorithms and software for data analysis, including graphical user interfaces, to make the technique available to the DOE user community.

Figure 1. (a) Schematic of the diffraction instrument geometry, showing the specimen
position on the rotation stage, transmitted x-ray beam (red), a diffracted x-ray beam
(green), the associated scattering vector (blue), and the flat panel detector. (b) Sample
diffraction image from a magnesium AZ31 specimen rotated over Δω = 1°.
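The geometry in Fig. 1 ties each detector position to a scattering angle 2θ, and Bragg’s law converts that angle to a lattice-plane spacing. A minimal illustration (the wavelength used below is an assumption for the example, typical of high-energy synchrotron beams, and is not a value from the report):

```python
import math


def d_spacing(two_theta_deg, wavelength_angstrom):
    """Lattice-plane spacing d from Bragg's law, lambda = 2 * d * sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_angstrom / (2.0 * math.sin(theta))


# High-energy beams have wavelengths near 0.15 angstrom (illustrative value),
# so Bragg angles are small and diffraction is recorded in transmission.
d = d_spacing(two_theta_deg=5.0, wavelength_angstrom=0.15)
print(f"d = {d:.3f} angstrom")
```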



Relevance to LLNL Mission
This work strengthens our Science, Technology, and Engineering pillars, encompassing Materials on Demand and Measurement Science and Technology. High-fidelity material models also comprise a critical piece of the multiphysics simulation codes employed at LLNL in support of Stockpile Stewardship Science goals. This work will enable rigorous verification and validation of cutting-edge constitutive models at the relevant, crystal length scale.

FY2010 Accomplishments and Results
A successful experiment was performed at APS 1-ID in which the α↔ε phase transformation was observed in situ using DACs. Figure 2 shows pole figures for two ε-phase reflections, calculated post transformation at ~17 GPa. The variant selection corresponds well with the Burgers path, which corroborates reported findings. These are unique data: first-of-kind 3-D in situ observations of this phase transformation. It was also shown that in addition to directly observing the parent/daughter orientation relationships, it is possible to separate the hydrostatic and deviatoric strain responses throughout the loading history. This has incredible potential for providing higher-fidelity equation-of-state measurements on a wide range of materials.

Building on fundamental tools produced under Institutional funding in FY2008 and FY2009, a complete software package has been developed that facilitates:
• angular calibration of flat-panel detectors;
• diffraction image segmentation;
• grain identification through orientation indexing; and
• precise centroid/orientation/strain determination for indexed grains, including parent/daughter association under specified transformation pathways.

Figure 2. Results from experiment in which the α↔ε phase transformation was observed in situ. The location of Bragg reflections for the two indicated types of ε-phase lattice planes ({203} and {102}) are plotted as pole figures in an equal area projection. The different glyphs denote the predicted locations of reflections from each variant under the Burgers path. The red and green colors correspond to the extrema of the orientation spread of the parent α phase (~15° misorientation). The correspondence of the strongest variants is quite good, even in this cursory analysis, and all variants are observed.

FY2011 Proposed Work
The experimental program of FY2011 involves applications of this technique to additional materials of interest, including phase transformations in zirconium and cerium. In both cases, the specific orientation relationships and strain/stress at transformation (full tensors) are desired.
Continued development of the software includes exploration of more sophisticated segmentation algorithms, addition of basic crystallography routines for calculating structure factors, optimization of the most computationally intensive functions, and enhancement of the user interface based on feedback from users at APS.

Related References
1. Lee, J. H., C. C. Aydiner, J. Almer, J. V. Bernier, K. W. Chapman, P. J. Chupas, D. Haeffner, K. Kump, P. L. Lee, U. Lienert, A. Miceli, and G. Vera, “Synchrotron Applications of an Amorphous Silicon Flat-Panel Detector,” J. Synchrotron Rad., 15, 5, pp. 477–488, 2008.
2. Lienert, U., J. Almer, B. Jakobsen, W. Pantleon, H. F. Poulsen, D. Hennessy, C. Xiao, and R. M. Suter, “3-Dimensional Characterization of Polycrystalline Bulk Materials Using High-Energy Synchrotron Radiation,” Mater. Sci. Forum, 539–543, pp. 2353–2358, 2007.
3. Aydiner, C. C., J. V. Bernier, B. Clausen, U. Lienert, C. N. Tomé, and D. W. Brown, “Evolution of Stress in Individual Grains and Twins in a Magnesium Alloy Aggregate,” Phys. Rev. B, 80, 2, 024113, 2009.
4. Merkel, S., H. R. Wenk, P. Gillet, H. K. Mao, and R. J. Hemley, “Deformation of Polycrystalline Iron Up To 30 GPa and 1000 K,” Phys. Earth Planet. Inter., 145, pp. 239–251, 2004.
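The pole figures in Fig. 2 use an equal-area projection. The standard Lambert azimuthal equal-area mapping of a unit scattering direction onto a plane can be sketched as follows (a textbook formula, not the project’s analysis code):

```python
import math


def equal_area_projection(n):
    """Lambert azimuthal equal-area projection of a unit vector n = (x, y, z)
    on the upper hemisphere onto the plane z = 0. A direction at angle theta
    from the pole lands at radius 2*sin(theta/2), which preserves area."""
    x, y, z = n
    r = math.sqrt(2.0 / (1.0 + z))  # radial scale factor
    return (r * x, r * y)


# The pole (0, 0, 1) maps to the origin; equatorial directions map to radius sqrt(2).
print(equal_area_projection((0.0, 0.0, 1.0)))
print(equal_area_projection((1.0, 0.0, 0.0)))
```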



Research

Lagrange Multiplier Embedded Mesh Method

For more information contact:
Michael A. Puso
(925) 422-8198
puso1@llnl.gov

Multiphysics simulations are in growing demand as LLNL’s Engineering addresses challenging issues for internal and external customers. One broad class from our national security customers is the effect of blast loadings on structures. Developing appropriate computational technologies continues as an area of active research. Needs exist both to simplify the model generation for the user and to increase the robustness and accuracy of the numerical techniques. Furthermore, we wish to leverage the breadth of LLNL’s simulation codes and seek approaches that will effectively combine multiple codes’ strengths to work in concert on a single, overall problem.

Project Goals
The objective is to develop a new embedded mesh technique for using superposed meshes within a common simulation. Figure 1 shows a simple example where a fluid is flowing past, or through, a moving, deforming solid. The two meshes could be processed by distinct simulation codes. The method can drastically simplify the meshing process as the fluid mesh need not smoothly mate with the surface of a complex structure. Separate meshes for the two physics subdomains can help avoid mesh tangling: simulation termination arising from excess deformation.

We are developing a software tool to accurately interface embedded finite element meshes. This goes beyond most published research that has focused on finite volume techniques common to fluid dynamics. The formulation will have robust numerical stability and as implemented should attain the computational speed expected of an explicitly time integrated program. It should also be extensible to handle different physics and finite element discretizations including Arbitrary Lagrangian-Eulerian (ALE) representations. Two model problems are the focus of this work: fluid-structure interaction and electromagnetics in moving, deforming solids.

Figure 1. Typical application of the embedded mesh method. (a) A Lagrange solid mesh is moving in an Eulerian fluid mesh. Our software couples the fluid and solid physics across their common boundary. This is particularly helpful when the solid is undergoing large deformations due to loadings from the surrounding fluid (b).

Figure 2. Circular charge placed next to submerged plate. Meshes shown are coarsest used in convergence study. (a) Embedded mesh model with initial plate and charge shown in dotted line. (b) ALE model requiring conformal, common mesh of fluid and solid domains. (c–d) Pressure field arriving at surface of plate for (c) embedded mesh and (d) ALE methods.
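The Lagrange multiplier coupling named in the title enforces interface constraints by augmenting the discrete equations into a saddle-point (KKT) system. A toy two-degree-of-freedom sketch of that structure, tying one fluid dof to one solid dof (illustrative only; not the FEusion implementation):

```python
# Toy saddle-point (KKT) system of the kind produced by Lagrange-multiplier
# mesh coupling: minimize 0.5*u^T K u - f^T u  subject to  C u = g.
# Values are invented for illustration; not the FEusion library.


def solve_dense(a, b):
    """Tiny Gaussian elimination with partial pivoting (mutates a and b)."""
    n = len(a)
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))
        a[k], a[p] = a[p], a[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
            b[i] -= m * b[k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x


# Block system [[K, C^T], [C, 0]] with K = [[4,1],[1,3]], f = [1,2],
# and one constraint u0 - u1 = 0 (the "tied" interface dofs):
kkt = [[4.0, 1.0, 1.0],
       [1.0, 3.0, -1.0],
       [1.0, -1.0, 0.0]]
rhs = [1.0, 2.0, 0.0]
u0, u1, lam = solve_dense(kkt, rhs)
print(u0, u1, lam)  # the tied dofs come out equal
```

The multiplier `lam` is the interface force required to hold the two discretizations together, which is how current and field continuity constraints are imposed in the mortar setting described above.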



Relevance to LLNL Mission
The technologies being developed can support a variety of multiphysics simulations for the Laboratory. The initial focus on blast effects upon structures is consistent with many LLNL national security missions. Its generality will be demonstrated by extension to include fluid-structural-electromagnetic coupling for moving solid conductors such as flux compression generators and rail guns.

FY2010 Accomplishments and Results
We completed the first version of the FEusion software library that identifies the overlap of foreground and background meshes and generates constraints between the solid and fluid regions. The library was ported to three codes: DYNA3D, ALE3D and NIKE3D. In this process we identified and implemented modifications to the ALE3D advection routines for transporting material response quantities between elements. Initial computational results with multiple model problems across all three codes are highly encouraging.

The ALE3D implementation is demonstrated in Fig. 2 where its standard discretization is used as a benchmark for the embedded mesh method. A two-inch-thick steel plate is exposed to an underwater blast and an analysis using the new embedded mesh method (Fig. 2a) is compared to the basic ALE approach (Fig. 2b). Figure 2 shows the coarsest mesh in the study; two additional mesh refinements were made to establish convergence. The embedded mesh exploits an Eulerian mesh of the fluid and a Lagrangian mesh of the plate. The ALE model uses the default ALE3D relaxation with advection in the fluid and a Lagrangian mesh of the plate. The blast bends the plate and forces the plate to flow through the fluid a significant distance (Figs. 2 and 3). While the Eulerian fluid background mesh is stationary, the ALE fluid mesh advects with the moving plate. Figure 4a compares the pressure at the plate surface on the centerline, and Fig. 4b compares the velocity at the center and ends of the plate for embedded mesh and ALE models. Both pressure and displacements match well for long times (10 ms). Figure 4c shows the difference in plate center displacement for different element sizes h and demonstrates that the results appear to converge.

FY2011 Proposed Work
In FY2011, we plan to: 1) improve the advection scheme; 2) incorporate a new method for handling potential locking; 3) add a shell element capability; and 4) extend our work to electromagnetic analysis.

Figure 3. Pressure field in fluid and effective plastic strain in plate for the (a) embedded mesh and (b) ALE simulations at successive, identical times.


Figure 4. (a) Comparison of centerline pressure in plates. (b) Comparison of displacement at center and ends. (c) Difference in center of
mass displacement for three mesh sizes. Results (a–b) are in good agreement and (c) demonstrates some convergence.
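Given a quantity computed on three uniformly refined meshes, as in Fig. 4c, the observed convergence order can be estimated from successive differences. A generic sketch of that check (synthetic numbers, not the study’s data):

```python
import math


def observed_order(q_coarse, q_mid, q_fine, ratio=2.0):
    """Estimate convergence order p from a quantity q computed on three
    meshes refined by a constant factor `ratio`:
        p = log(|q_coarse - q_mid| / |q_mid - q_fine|) / log(ratio)
    """
    return math.log(abs(q_coarse - q_mid) / abs(q_mid - q_fine)) / math.log(ratio)


# Synthetic data with a first-order error term, q(h) = 1.0 + 0.3*h:
print(observed_order(1.0 + 0.3 * 2.0, 1.0 + 0.3 * 1.0, 1.0 + 0.3 * 0.5))
```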



Research

Multiscale Polymer Flows and Drag Reduction

For more information contact:
Todd H. Weisgraber
(925) 423-6349
weisgraber2@llnl.gov

Suspensions and polymer solutions exhibit a variety of complex physical phenomena and have applications across multiple disciplines, including blood flow and materials processing. In particular, drag reduction in bounded turbulent flows by the addition of long-chain polymers is a well-established phenomenon. However, despite decades of research, there is still a lack of understanding of the fundamental mechanisms. We believe that a complete description must incorporate wall roughness, a coarse-grained molecular representation of the polymer, and hydrodynamic fluctuations at the polymer length scale.

We are developing new algorithms, including an unconditionally stable, fluctuating lattice-Boltzmann (LB) solver coupled with molecular dynamics (MD), to enable fully turbulent, multiscale simulations of drag reduction.

Project Goals
Our ultimate goal is to perform a series of large-scale simulations of dilute polymer solutions in turbulent flows with a detailed model of the polymer chains and the hydrodynamic interactions. To resolve the relevant scales we are incorporating the following improvements to our existing LB polymer code: 1) enhanced numerical stability; 2) accurate hydrodynamic fluctuations; and 3) integration of the solver with an adaptive mesh refinement (AMR) framework.

Relevance to LLNL Mission
Our research aligns with LLNL’s focus on high-performance computing and simulation. Specifically, we seek to address fundamental scientific questions in hydrodynamics. The interaction between flow and suspended macromolecules is also relevant to the development of the next generation of emerging pathogen detection and analysis systems, an important component of the Laboratory’s biosecurity strategic mission thrust.

FY2010 Accomplishments and Results
The emphasis during this first year was code development and initial

Figure 1. Polymer center of mass distribution in a confined channel with height, h = 2.5Rg, as a function of distance (z) from the lower wall. Rg is the polymer radius of gyration, which is a measure of the average size of the chain. Note the curves are symmetric so only half the channel is shown.

36 FY10 Engineering Innovations, Research & Technology Report


validation. One of the main drawbacks of the LB method is that it becomes unstable for high Reynolds number flows relevant to drag reduction. Fortunately, a new generation of methods, known as entropic LB, has overcome this stability issue and we have developed an algorithm for fluctuating hydrodynamics that incorporates these entropic solvers. We have successfully run simulations of turbulent channel flow for Reynolds numbers up to 12,000 using our new solver. These calculations are not realizable with traditional LB methods.

In addition, we performed benchmarking laminar-flow simulations of a single polymer migration in a microchannel using the new entropic code. We want to understand how the polymer will migrate relative to the wall in different flow conditions. In a narrow channel the chain will migrate toward the wall if the flow rate is high enough. Our simulations reproduced this behavior as the shift in the distribution curves of the polymer center of mass in Fig. 1 demonstrates. The blue line is the probability of finding the polymer at a certain height, z, above the lower wall for the case without any flow in the channel. With a flow present, the polymer can stretch and sample regions closer to the wall, as is indicated by the purple line. These results provided additional validation for our algorithm.

One of the key challenges was incorporating mesh refinement for the LB method, which required developing the methodology to transfer information between coarse and fine grids, as shown in Fig. 2. We chose the Chombo library developed at LBNL for the AMR framework. Since LB evolves a discrete velocity distribution function and not the macroscopic hydrodynamic variables (i.e., mass and momentum), the standard finite difference flux conservation approach at grid interfaces was not appropriate. Instead we developed a novel algorithm that minimizes the numerical error associated with refinement. The plot in Fig. 3 compares the errors from both our method and a popular state-of-the-art refinement scheme (from ref. 5) for a uniformly accelerating flow parallel to the coarse–fine interface. Our error is orders of magnitude lower.

Figure 2. Example mesh for flow in a channel with grid refinement near the walls.

Figure 3. Mesh refinement error near coarse–fine interface (z = 0) for uniformly accelerating flow parallel to the interface. The fine mesh is located at negative z values and the coarse at positive z.

FY2011 Proposed Work
In FY2011 we plan to: 1) complete the adaptive mesh refinement development; 2) begin Newtonian turbulent channel simulations to validate the AMR code; 3) continue the laminar flow migration simulations with wall roughness; and 4) investigate how many-core graphics processors could accelerate our hybrid LB/MD computations.

Related References
1. Toms, B. A., “Some Observations on the Flow of Linear Polymer Solutions Through Straight Tubes at Large Reynolds Numbers,” Proc. 1st Int. Cong. Rheol., N. Holland, Amsterdam, 2, pp. 135–141, 1948.
2. Zhang, Y., A. Donev, T. Weisgraber, B. J. Alder, M. D. Graham, and J. J. de Pablo, “Tethered DNA Dynamics in Shear Flow,” J. Chem. Phys., 130, 23, pp. 234902–234913, 2009.
3. Usta, O. B., A. J. C. Ladd, and J. E. Butler, “Lattice-Boltzmann Simulations of the Dynamics of Polymer Solutions in Periodic and Confined Geometries,” J. Chem. Phys., 122, 2005.
4. Ansumali, S., I. V. Karlin, and H. C. Ottinger, “Minimal Entropic Kinetic Models for Hydrodynamics,” Europhys. Lett., 63, 6, pp. 798–804, 2003.
5. Rohde, M., D. Kandhai, J. J. Derksen, and H. E. A. van den Akker, “A Generic, Mass Conservative Local Grid Refinement Technique for Lattice-Boltzmann Schemes,” Int. J. Numer. Methods Fluids, 51, 4, pp. 439–468, 2006.
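The lattice-Boltzmann update underlying the solver alternates collision toward a local equilibrium with streaming along lattice directions. A minimal textbook BGK step on a 1-D, three-velocity (D1Q3) lattice gives the flavor; the project’s solver adds entropic stabilization and thermal fluctuations, which are not shown here:

```python
# Minimal D1Q3 lattice-Boltzmann BGK step (textbook scheme, for illustration).
W = [2.0 / 3.0, 1.0 / 6.0, 1.0 / 6.0]  # weights for lattice velocities 0, +1, -1
C = [0, 1, -1]
CS2 = 1.0 / 3.0                         # lattice speed of sound squared


def equilibrium(rho, u):
    """Second-order Maxwell-Boltzmann expansion on the lattice."""
    return [w * rho * (1.0 + c * u / CS2 + (c * u) ** 2 / (2.0 * CS2 ** 2)
                       - u * u / (2.0 * CS2)) for w, c in zip(W, C)]


def collide_and_stream(f, tau=1.0):
    """One BGK relaxation step followed by periodic streaming."""
    n = len(f)
    post = []
    for node in f:
        rho = sum(node)
        u = sum(c * fi for c, fi in zip(C, node)) / rho
        feq = equilibrium(rho, u)
        post.append([fi - (fi - fe) / tau for fi, fe in zip(node, feq)])
    f_new = [[0.0] * 3 for _ in range(n)]
    for x in range(n):
        for i in range(3):                 # population i hops C[i] nodes
            f_new[(x + C[i]) % n][i] = post[x][i]
    return f_new


f = [equilibrium(1.0, 0.05) for _ in range(8)]  # uniform flow, density 1
f = collide_and_stream(f)
print(sum(sum(node) for node in f))             # total mass is conserved (~8.0)
```

Because the scheme evolves the populations `f` rather than density and momentum directly, transferring data across AMR coarse–fine interfaces requires care, which is the difficulty the text describes.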



Technology

Finite Element Analysis Visualization and Data Management

For more information contact:
Bob Corey
(925) 423-3271
ircorey@llnl.gov

Support for and enhancement of several visualization and postprocessing tools is a key component of LLNL’s efforts. These tools include the Griz finite element visualization postprocessor, the Mili data management library, and a data file manipulation tool called XmiliCS. These tools are used by engineering analysts and other modelers across the Laboratory to interpret data from a variety of simulation codes such as DYNA, ParaDyn, Nike and Diablo. We also provided support in the area of data translation tools and processes for performing intracode calculations.

Griz is our primary tool for visualizing finite element analysis results on 2- and 3-D unstructured grids. Griz provides advanced 3-D visualization techniques such as isocontours and isosurfaces, cutting planes, and vector field display. Functionality is expanding for Smooth Particle Hydrodynamics (SPH) and other “mesh-less” modeling entities. Mili is a high-level mesh I/O library intended to support computational analysis and post-processing on unstructured meshes. It provides the primary data path between analysis codes and visualization tools such as Griz. Mili databases are also viewable with the LLNL VisIt postprocessor. XmiliCS is a utility used to combine results from multiple processors that are generated by our large parallel computing platforms.

Project Goals
The project provides ongoing user support and software maintenance for LLNL’s visualization and postprocessing tools and adds new capabilities to these tools to support evolving, multiprogrammatic requirements.

Figure 1. “Mesh-less” particle representation of soil being ejected by a buried explosive charge, as simulated by ParaDyn. Griz is being extended to support additional types of discretization technologies.

38 FY10 Engineering Innovations, Research & Technology Report


Relevance to LLNL Mission
These postprocessing tools provide important user interfaces for LLNL’s engineering simulation capabilities and are critical elements in our tool suite. Analysts would be severely limited in their ability to interpret the vast amounts of data generated by simulation and to synthesize key results without graphical representations providing interactive feedback.

FY2010 Accomplishments and Results
Our baseline commitment to LLNL users, the annual release of updated production software, was achieved with the issue of Version 10.1 of Griz, Mili, and XmiliCS. This is a necessary complement to annual releases of the DYNA3D and ParaDyn simulation tools. The growing “mesh-less” capability in those codes for representing highly deforming material creates new needs for support from the visualization and data management tools. Figure 1 provides an example of how Griz provides color contour plots of basic solution quantities over the ensemble of material points used in a “mesh-less” scheme. Efforts are now underway to robustly extend Griz’s capabilities for standard continuum elements, e.g., cutting planes to expose the interior of a body, to this added simulation option.

We are halfway through an effort to re-architect and re-implement the XmiliCS utility for consolidating the very large results databases created with ParaDyn. This is being used as an opportunity to redefine the boundary between Mili and XmiliCS so that common operations are consolidated in the Mili interface and better leveraged (re-used) by XmiliCS. This major evolution of the products also provides an opportunity to incorporate a dedicated “particle” data type to more specifically support definition of the particle sets arising from the new “mesh-less” simulations.

The Laboratory’s VisIt data analysis tool is a world-class tool for rendering extremely large scientific data sets. This capability is of interest to engineering analysts as our own models grow, but adoption has been slow because the operational flow of VisIt is so different from that of Griz. We are now creating a capability so users can type the concise commands familiar to them from Griz while the computer automatically issues the more complex stream of multiple instructions required to have VisIt generate the specific image the user desires. We have named this lightweight add-on GrizIt. A modern object-oriented design (Fig. 2) allowing for flexible class manipulation and reusable code will permit the set of commands supported to smoothly expand. Proof of concept has been demonstrated and a number of specific commands have been implemented.

Figure 2. GrizIt Command Line Interface. The common “factory design pattern” software construct provides engineering users with familiar, highly productive ways of accessing and investigating their simulation results while leveraging the high-performance visualization capabilities of the VisIt tool and the flexibility of its Python command interface.

FY2011 Proposed Work
Targets for next year include 1) delivering the new XmiliCS combiner tool; 2) having a user-testable (“beta”) version of GrizIt exercised sufficiently to support a decision on delivery with Version 11.1 of the ParaDyn Suite; and 3) initial operational capability for end-to-end regression testing of the ParaDyn Suite workflow.
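The “factory design pattern” of Fig. 2 can be sketched as follows. The `vis`/`invis` command names come from the figure; the class names and the emitted VisIt instructions are invented for illustration and are not the actual GrizIt code:

```python
# Sketch of a command factory in the style GrizIt uses to map terse Griz
# commands onto VisIt operations. Names below are hypothetical.
class GrizCommand:
    """Base class: every command knows how to perform itself via do()."""
    def do(self):
        raise NotImplementedError


class VisCommand(GrizCommand):
    """e.g. 'vis <material>' -- make a material visible."""
    def __init__(self, args):
        self.args = args

    def do(self):
        return f"visit: make material {self.args[0]} visible"


class InvisCommand(GrizCommand):
    """e.g. 'invis <material>' -- hide a material."""
    def __init__(self, args):
        self.args = args

    def do(self):
        return f"visit: make material {self.args[0]} invisible"


REGISTRY = {"vis": VisCommand, "invis": InvisCommand}


def create_command(key, args):
    """Factory: look up the command class for `key` and instantiate it."""
    try:
        return REGISTRY[key](args)
    except KeyError:
        raise ValueError(f"unknown command: {key}")


# One terse user command expands into an instruction stream for VisIt:
print(create_command("vis", ["steel"]).do())
```

Extending the supported command set then reduces to adding a class and a registry entry, which is the “smoothly expand” property the design aims for.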



Technology

Modeling Enhancements in DYNA3D

For more information contact:
Jerry I. Lin
(925) 423-0907
lin5@llnl.gov

The explicit finite element code DYNA3D is a main Engineering tool for simulation of the transient response of structures to fast, impulsive loadings. Its use has extended beyond our historical core mission to broader applications, such as infrastructure vulnerability and protection analysis, vehicle impact simulation, and integrity assessment for various structures. The code also serves as the mechanics foundation of the highly parallel ParaDyn simulation tool. This project represents an institutional investment in the continued usability and vitality of DYNA3D.

Project Goals
This project funds the ongoing implementation of user-requested features, general technical support, documentation updates, and Software Quality Assurance (SQA) compliance for DYNA3D. It also supports Collaborator Program activities. The Collaborator Program grants selected users licensed access to LLNL’s computational mechanics/thermal codes in exchange for their results and acknowledgement. These collaborative members include our sister laboratories, U.S. government agencies, and other education/research institutions.

Figure 1. First principal material axes automatically generated at points of a generic conical/cylindrical structure. The dots represent mesh points and the line segment the orientation of the local material axis.

Figure 2. Second principal material axes at points of a generic conical/cylindrical structure, locally orthogonal to the axes shown in Fig. 1.



Computational Engineering

Relevance to LLNL Mission great hindrances is assigning local mate- by tracking a set of discrete points and
Engineering analysts supporting rial directions at the various locations assessing the local stress state in the
a variety of programs require new throughout the material. For parts of body via their relative motions.
structural modeling functionalities and special geometric shapes that align with Figure 3 shows a simple example of
technical support to complete their the material directions, computational two bars striking end-on-end. We are
missions. Some of these programs algorithms can be created to automati- leveraging an SPH capability originally
and projects involve the Laboratory's collaboration with other institutions and federal agencies, including the Los Alamos National Laboratory (LANL), the Missile Defense Agency, the Naval Surface Warfare Center, the U.S. Army Corps of Engineers, and the Department of Homeland Security.

FY2010 Accomplishments and Results
The use of composite materials such as graphite-epoxy has risen significantly in structures and protective gear. Fiber-reinforced materials are typically anisotropic, having non-uniform mechanical and thermal behaviors in different directions. For model preparation involving these materials, one of the […]cally calculate the material orientations. Two such geometric configurations, conical and ellipsoidal, were added to the existing choices for all anisotropic continuum materials in DYNA3D. Figures 1 and 2 depict the automatically generated two principal material axes, the first along the structure's longitudinal direction and the second along the transverse direction, at points of a merged conical/cylindrical part.

The drive to model more extreme structural deformations requires sustained investment in additional numerical representations in DYNA3D. One option mechanical analysts have found useful for high-speed impacts is Smooth Particle Hydrodynamics (SPH). This technique approximates material behavior […] created as a standalone code at LLNL. During its prove-in phase, the module remained nearly a separate code, as it used an independent (small) input file and output a separate visualization database. The capability is now more seamlessly integrated from a user perspective, as the SPH input parameters are defined within the DYNA3D input file. Similarly, the response quantities computed within the SPH region are now written as part of the standard DYNA3D output database. This both simplifies the collection of files the analyst must manage and facilitates visualization and interpretation of the simulation results.

FY2011 Proposed Work
General technical support for DYNA3D users, the addition of user-requested capabilities, and SQA-compliance work will continue under FY2011 funding. For users modeling more complex shapes constructed of anisotropic materials, we are introducing a general auxiliary file that can be read for the local material orientation specifications. This will free them to write or adapt small, problem-specific utilities.
Code modifications leading to a completely keyword-driven input file will continue, as our goal is to provide users with a more conveniently read and modified problem specification. We will use a regression test suite traceability matrix completed in FY2010 to guide and prioritize further test problems to assure the performance and stability of DYNA3D's features.

Figure 3. Hopkinson bar high-speed impact modeled with the SPH option now integrated with DYNA3D.
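The geometry behind the automatically generated material axes can be illustrated with a short sketch. This is a hypothetical reconstruction of the conical case only (apex at the origin, axis along +z), not the DYNA3D implementation:

```python
import numpy as np

def cone_material_axes(p, half_angle):
    """Illustrative geometry for the two principal material axes at a
    point p on a cone (apex at origin, axis along +z): a1 is the
    longitudinal (meridional) unit vector, a2 the transverse (hoop)
    unit vector.  A reconstruction, not the DYNA3D algorithm."""
    x, y, _ = p
    phi = np.arctan2(y, x)                          # azimuth of the point
    r_hat = np.array([np.cos(phi), np.sin(phi), 0.0])
    z_hat = np.array([0.0, 0.0, 1.0])
    # Longitudinal direction: along the cone surface, tilted from the
    # axis toward r_hat by the half-angle.
    a1 = np.sin(half_angle) * r_hat + np.cos(half_angle) * z_hat
    # Transverse (hoop) direction: tangent to the circular cross section.
    a2 = np.array([-np.sin(phi), np.cos(phi), 0.0])
    return a1, a2

a1, a2 = cone_material_axes(np.array([1.0, 0.0, 1.0]), half_angle=np.pi / 4)
```

For a cylinder the same sketch applies with half_angle = 0, which reduces a1 to the axial direction.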

Lawrence Livermore National Laboratory 41


Technology

NIKE3D Enhancement and Support


For more information contact:

Michael A. Puso
(925) 422-8198
puso1@llnl.gov

The objective of this work is to enhance, maintain, and support LLNL's implicit structural mechanics finite element code, NIKE3D. New features are added to accommodate engineering analysis needs. Maintenance includes bug fixes and code porting to the various platforms available to engineering analysts. User support includes assisting analysts in model debugging and general analysis recommendations.

Project Goals
Ongoing code enhancement requires new features to meet our engineering community's user demands. The following are highlights for FY2010:
1. Add and benchmark a new threaded, distributed-memory direct equation solver;
2. Port a prototype model for high explosive deformation under long-term loading, and make significant modifications for fast computation; and
3. Add diagnostics to evaluate sources of rigid body motion.

Relevance to LLNL Mission
Structural analysis is one of the most important functions of Engineering, and the in-house maintenance, support, and code enhancement we provide for our suite of codes is crucial for meeting Engineering's analysis needs. NIKE3D, in particular, is a premier code for handling difficult nonlinear static structural analysis problems.

FY2010 Accomplishments and Results
In contrast to explicit finite element codes like DYNA3D, implicit codes require the solution of systems of coupled linear equations, which are typically the bottleneck in large calculations. Many national laboratory implicit codes rely solely on iterative solvers, whereas NIKE3D and Diablo include a highly robust and fast distributed-memory parallel direct linear solver (WSMP). We find the direct solver is often the desired approach in many applications due to reliability and speed.
of rigid body motion. The biggest drawback to using direct
solves is the required memory. The typi-
cal strategy to achieving large memory
is to run the analysis on as many nodes
6000
of a parallel computer as necessary. To
1 subdomain Wall time for further economize memory usage, one
Wall time for factorization (s)

x 1 thread factorization
5000 to four subdomains per node are used
for very large analyses (i.e., greater than
4000
1 subdomain x 2 thread
3000
1 subdomain x 4 thread
2000

1000
4 subdomains η1 η2 ηk
x 4 thread

0 σ σ
0 5 10 15 20
# of processors per node E1 E2 Ek

Figure 1. Results from 6-million-element Figure 2. A Kelvin-Voigt spring damper


model run on 256 nodes of a Linux model for visoelastic-plastic response.
high-performance Linux cluster (Hera).

42 FY10 Engineering Innovations, Research & Technology Report


Computational Engineering

1 million elements). Unfortunately, this approach sacrifices processing power for memory. To alleviate this, the Linux version of the WSMP solver in NIKE3D and Diablo has recently been upgraded to include threaded parallelism. This capability has been available for some time with the AIX version. Now one, two, or four subdomains can be specified on machines such as LLNL's Hera or Juno, and each subdomain can run with multiple threads.

An example showing the factorization times for a 6-million-element analysis is shown in Fig. 1. It should be emphasized that only the linear solver is parallel in the NIKE3D code, whereas Diablo is fully parallel.

A nonlinear creep model for plastic-bonded explosive was added to the production version of NIKE3D. While porting the model, a significant effort was made to make it more efficient. The model uses a nonlinear Kelvin-Voigt-type spring-damper model with many dependent spring and damper components to capture recoverable creep (Fig. 2). The basic 1-D model is nonlinear:

σ + cσ² = E_i ε_i + η_i ε̇_i ,  (1)

where ε_i represents the ith strain (spring displacement), and E_i and η_i are the spring and damper stiffnesses. The total strain for the Kelvin-Voigt system plus an additional history-dependent, nonrecoverable creep strain ε_nr is given by:

ε̇ = Σ_{i=1}^{k} ε̇_i + ε̇_nr(σ, t) .  (2)

Nonrecoverable implies that the strain is fixed after unloading (i.e., σ → 0). Time integration of (1) while evaluating (2) requires many computations, including exponential function evaluations. Appropriate compiler options cut CPU time in half for the model. OpenMP was then used to apply threaded parallelism to the material model. A small example (Fig. 3) has a constant 2-ksi load held over 15 years (150 time steps). Figure 4 shows the resultant creep strain to be logarithmic in time, as is characteristic of many materials over long time spans. Standard "viscoplastic" creep models are typically linear in time and are thus not valid over such long durations. The original material implementation required 1400 s of wall time, whereas the new threaded parallel version required 114 s of wall time using 16 threads.

Figure 3. Cylindrical creep specimen for a constant load test.
Figure 4. Top displacement of creep cylinder versus time. Note logarithmic response of the new creep model compared to the classical creep model (Model 27).

FY2011 Proposed Work
In FY2011, our plans include 1) supporting engineering analysts using NIKE2D; 2) modularizing material models for ultimate migration into DIABLO; and 3) upgrading the quasi-Newton solver for solution of contact problems.
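To make the cost of evaluating (1) and (2) concrete, a single explicit-Euler sweep over a chain of Kelvin-Voigt elements under constant stress looks roughly like the following. The parameter values are arbitrary illustrations; this is a sketch of the integration pattern, not the NIKE3D material model:

```python
import numpy as np

def integrate_kelvin_chain(sigma, c, E, eta, t_end, steps):
    """Explicit-Euler integration of a Kelvin-Voigt chain under constant
    stress, following Eq. (1): sigma + c*sigma**2 = E_i*eps_i + eta_i*deps_i/dt.
    Returns the history of the recoverable part of the total strain in
    Eq. (2).  Illustrative parameters, not the NIKE3D implementation."""
    E, eta = np.asarray(E, float), np.asarray(eta, float)
    eps = np.zeros_like(E)                    # per-element strains eps_i
    dt = t_end / steps
    load = sigma + c * sigma**2               # left-hand side of Eq. (1)
    history = []
    for _ in range(steps):
        eps += dt * (load - E * eps) / eta    # deps_i/dt solved from Eq. (1)
        history.append(eps.sum())             # recoverable sum in Eq. (2)
    return np.array(history)

hist = integrate_kelvin_chain(sigma=2.0, c=0.05, E=[10.0, 50.0],
                              eta=[5.0, 200.0], t_end=20.0, steps=2000)
```

Each step costs one pass over the elements, which is why compiler options and threaded parallelism over the material model pay off for long load histories.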




Electromagnetics Code Enhancement and Maintenance

For more information contact:
Daniel A. White
(925) 422-9870
white37@llnl.gov

L LNL Engineering’s EIGER code is a


3-D, parallel, boundary element
code for solving Maxwell’s equations
capability also provides LLNL a competi-
tive edge when considering additional
DOE and WFO projects. Increasing the
of electromagnetics (EM). Since EIGER accuracy and efficiency of our tools
uses a boundary element method benefits all customers.
(also known as an integral equation
method, or method of moments), the FY2010 Accomplishments
EM problem of interest is described by a and Results
surface mesh; there is no need to mesh We created a MOR algorithm,
the entire volume. Another advantage implemented it, and applied it to
of the boundary element method is that
it provides an exact radiation boundary
condition. The EIGER code is currently
being used on several Global Security
projects, and can play a large role in Calculate solution
x(s) at center of
possible future work-for-others (WFO)
box, add to V matrix
projects.

Project Goals Multiply solution by


The goal of this project is to gener- A(s) at previous
sample points
ate an automatic Model Order Reduc-
tion (MOR) algorithm for use with
EIGER. We are collaborating with staff Interpolate reduced
at Ohio State University. Consider the system VTA(s)V
following example problem: We need using RBFs
to compute electrical currents on cables
for 100 different RF frequencies, 100
Split box
different angles of incidence, and 100
different cable locations. A brute force
approach results in one million distinct Estimate error at
EIGER calculations, which is not practi- center of each box
cal. Automatic MOR is an approach to
reduce the total number of calculations Select box
Is global error No
by 1) sampling the parameter space; acceptable?
with largest
2) using the samples to build a model of error
the quantities of interest; and 3) iterat- Yes
ing until the model meets a specified
accuracy. End

Relevance to LLNL Mission


Figure 1. Iterative algorithm for automatic
EIGER performs EM analyses that
Model Order Reduction. The algorithm
cannot be performed by commercial performs successive sampling of the
codes. This allows engineering staff to original problem space to construct a
better support Laboratory programs. smaller model having sufficient accuracy
Having a unique computational EM to represent all the behavior of interest.




several EM problems of interest to Global Security projects. The outline of the MOR algorithm is as follows.

Let A(s) x = b be the boundary element discretization of Maxwell's equations, where A(s) is the N × N boundary element matrix, x is the vector of unknowns (surface current and charge), b is the source vector, and s = {s1, s2, ..., sk} is the vector of parameters. The parameters could be, for example, frequency, angle of incidence, material properties, or positions of cables or apertures. The number of unknowns, N, is determined by the computational mesh and can be quite large: N ≈ 10^4 is common, and N ≈ 10^6 is possible when a fast multipole algorithm is used for the boundary element representation.

The reduced system is given by a(s) = V^T A(s)V, where V is an N × M matrix with M « N. The columns of V are the basis vectors of the reduced order model; the reduced order model is essentially a "change of basis" resulting in a significantly more compact representation than the full model. The reduced system is evaluated at M sample points (M different values of the parameter vector s); between sample points the reduced system is defined by linear interpolation using Radial Basis Functions (RBF). The basis function expansion is

a(s) = Σ_i a_i φ(|s – s_i|) + θ(s),

where the RBFs are of the form φ(r) = r^2p, and θ(s) is a polynomial of order p. The coefficients of the basis function expansion are determined by solving a Vandermonde system of equations; this system is only M × M.

The process of choosing the sample points is iterative, and is like structured adaptive mesh refinement, but in k-dimensional sampling space rather than the physical spatial domain. An error tolerance is specified by the user, and new samples are chosen to minimize the error. The process and an example are shown in Figs. 1 and 2.

As an example, consider the motivating problem of computing electrical currents on cables, where a brute-force approach would result in one million EIGER calculations. Using the automatic MOR algorithm outlined above, we were able to completely explore this parameter space to within an error of 1% by performing only about 800 EIGER calculations, a reduction of over 1000×. It is also important to note that the MOR software implementation is parallel. The EIGER calculations were executed using 8 nodes (32 processors) and completed in just one day.

Related References
1. Sharpe, R., et al., "EIGER: Electromagnetic Interactions Generalized," IEEE Ant. Prop. Int. Symp., pp. 2366–2369, July 1997.
2. Buhmann, M. D., "Radial Basis Functions," Acta Numerica, 9, pp. 1–38, 2000.

Figure 2. Example calculation using our automatic Model Order Reduction algorithm. (a) Current density computed on a circuit for a single combination of two input parameters, s1 and s2. (b) Graphic showing the limited number of cases sampled in the (s1, s2) plane needed to capture the behavior over the entire parameter range.



Micro/Nano-Devices & Structures Research

Hybridization, Regeneration, and Selective Release of DNA Microarrays

For more information contact:
Elizabeth K. Wheeler
(925) 423-6245
wheeler16@llnl.gov

DNA microarrays contain sequence-specific probes arrayed in distinct spots numbering from 10,000 to over 1,000,000, depending on the platform. This tremendous degree of multiplexing gives microarrays great potential for environmental background sampling, broad-spectrum clinical monitoring, and continuous biological threat detection. In practice, their use in these applications is not common due to limited information content, long processing times, and high cost.

Our work seeks to characterize the phenomena of microarray hybridization, regeneration, and selective release that will allow these limitations to be addressed. This will revolutionize the ways that microarrays can be used for LLNL's Global Security missions.

One study area is selective release. Microarrays easily generate hybridization patterns and signatures, but there still is an unmet need for methodologies enabling rapid and selective analysis of these patterns and signatures. Detailed analysis of individual spots by subsequent sequencing could potentially yield significant information for rapidly mutating and emerging (or deliberately engineered) pathogens. In the selective release work, optical energy deposition with coherent light is being explored to quickly provide the thermal energy to single spots to release hybridized DNA.

The second study area involves hybridization kinetics and mass-transfer effects. The standard hybridization protocol uses an overnight incubation to achieve the best possible signal for any sample type, as well as for convenience in manual processing. There is potential to significantly shorten this time based on better understanding and control of the rate-limiting processes and knowledge of the progress of the hybridization. In the hybridization work, a custom microarray flow cell will be used to manipulate the chemical and thermal environment of the array and image the changes over time during hybridization.

Figure 1. Detail image of a microarray scan after autonomously hybridizing E. coli, E. faecalis, and S. aureus on the Virulence Array.
Figure 2. R&D system, showing user loading sample vial onto system prior to beginning integrated hybridization experiments. The microarray flow cell shown on the right-hand side is connected to fluidics lines.

Project Goals
The goals of the selective release work are to characterize the phenomena involved in high-resolution energy




deposition with an IR laser and to demonstrate selective release of DNA from a microarray. This includes assessing the effects of wavelength, absorption, spot size, materials, pulse energy, and fluid flow.

The goal of the hybridization work is to quantify the rate-limiting processes in microarray hybridization and to demonstrate improvement in hybridization time by controlling the process.

Relevance to LLNL Mission
LLNL has ongoing efforts in detection methods against biological terrorism. The next stage of molecular diagnostics for biological threats is to look much more broadly for emerging threat bio-signatures, such as virulence elements or natural and engineered mutations. This capability is targeted against new natural pandemics and engineered biological warfare agents, while still detecting the full set of known bio-threat agents, to enable prompt countermeasures.

FY2010 Accomplishments and Results
We extended the DNA Release and Capture Laser (DRACULA) to a multiple-wavelength system, allowing spot-size reduction as well as more uniform energy deposition in the fluid column due to the wavelength dependency of absorption. We have conducted thermal energy deposition tests to calibrate the elution temperature. Since we do not want to elute all DNA hybridized to the array, we had to investigate non-standard aqueous elution solutions. To quantify how much DNA is selectively released, we have developed a quantitative PCR assay for off-line detection of eluted oligonucleotides and demonstrated high sensitivity.

For the hybridization platform, we developed the software needed to translate the benchtop process to the integrated automated system. Most importantly, we performed our first integrated flow-cell hybridization experiment with a biological sample of E. coli, E. faecalis, and S. aureus, using the LLNL Virulence Array. Analysis of the data taken at different time points during the hybridization is ongoing with the analysis tool that we developed this year. Also, an array to study the kinetics of hybridization has been designed and is in the process of being fabricated, ready for testing in FY2011.

Figures 1 through 4 illustrate the results of our work.

Figure 3. Photograph of new laser system for selective release.
Figure 4. Characterization of pulse power and duration on spot size for new laser system (power: 1 W; pulse durations of 100, 250, 500, and 1000 ms).

Related References
1. Jain, C., S. Gardner, K. McLoughlin, N. Mulakken, M. Alegria-Hartman, P. Banda, P. Williams, P. Gu, M. Wagner, C. Manohar, and T. Slezak, "A Functional Gene Array for Detection of Bacterial Virulence Elements," PLoS ONE, 3, 5, e2163, 2008.
2. Wang, D., A. Urisman, Y. Liu, M. Springer, et al., "Viral Discovery and Sequence Recovery Using DNA Microarrays," PLoS Bio., 1, pp. 257–260, 2003.
3. Wang, Z., L. T. Daum, G. J. Vora, D. Metgar, E. A. Walter, L. C. Canas, A. P. Malanoski, B. Lin, and D. A. Stenger, "Identifying Influenza Viruses with Resequencing Microarrays," Emerging Infectious Diseases, 12, 4, pp. 638–646, 2008.

FY2011 Proposed Work
For selective release, the key milestone will be release, capture, and quantification of an undamaged SARS target. After the initial demonstration of selective release, we will focus on characterizing the effects of laser power and fluid flow conditions on release selectivity and yield. After the initial characterization of hybridization rates using an artificial kinetics array, the hybridization experiments will determine rates using biological samples on the LLNL Virulence Array.




Cadmium–Zinc–Telluride Sandwich Detectors for Gamma Radiation

For more information contact:
Adam M. Conway
(925) 422-2412
conway8@llnl.gov

Detectors to sense the presence of nuclear and radioactive weapons concealed in transit through borders, airports, and seaports are crucial for the international struggle against terrorism and the proliferation of weapons of mass destruction. Currently, high-purity germanium detectors offer the best performance in detecting gamma rays; however, they must be operated at cryogenic temperatures.

A room-temperature detector is greatly preferred because of cost and ease of use, but the only available alternative is based on cadmium zinc telluride (CZT) technology, which offers inferior performance. Here we propose a pathway for CZT gamma detectors to achieve the desired energy resolution of better than 1% at 662 keV. We will use a multilayered structure, as shown schematically in Fig. 1, to allow signal collection while simultaneously rejecting noise. By applying energy bandgap engineering to CZT gamma detectors, we believe detector performance can be improved.

Project Goals
With this project, we expect to demonstrate a pathway toward a gamma detector with better than 1% energy resolution at 662 keV that will operate at room temperature. To achieve this goal, we will design a novel structure using bandgap engineering concepts that will result in a 90% reduction in leakage current (which is the dominant noise mechanism at the energies of interest) relative to a resistive device. We will also provide leadership to the detector community through a technical roadmap for the demonstration of 0.5% energy resolution (at 662 keV) within five years.

Figure 1. Schematic diagram of a-Se/CZT/a-Si detector layer structure (Al anode, a-Se, CZT, a-Si, Pt cathode).

Relevance to LLNL Mission
The solution to the radiation-detector materials problem is expected to have significant impact on efforts to develop detectors that are compact, efficient, inexpensive, and operate at ambient temperature for the detection of special nuclear materials as well as radiological dispersal devices. The




multidisciplinary nature of this work and the relevance to national and homeland security align well with LLNL capabilities and missions.

FY2010 Accomplishments and Results
Over the course of this project we have 1) developed finite element modeling capabilities for amorphous-CZT heterojunctions to understand electronic conduction mechanisms; 2) studied amorphous-CZT heterojunctions using current vs. voltage vs. temperature measurements for characterization of Schottky barrier height; 3) fabricated amorphous-Se/CZT/amorphous-Si heterojunction detectors (Fig. 2) that have reduced the leakage current by 100×, resulting in an effective resistivity of greater than 10^12 ohm-cm in bulk material that is too conductive for typical CZT gamma detectors (Fig. 3); and 4) demonstrated proof-of-principle detectors with improved energy resolution (Fig. 4).

Figure 2. Simulated energy band diagram of a-Se/CdZnTe/a-Si:H layered structure.
Figure 3. Comparison of current versus voltage characteristics with and without amorphous contacts.
Figure 4. Comparison of Am-241 gamma spectra with and without amorphous contacts (FWHM values of 11.8, 15.5, and 21.8 keV).

Related References
1. Voss, L. F., P. R. Beck, A. M. Conway, R. T. Graff, R. J. Nikolic, A. J. Nelson, and S. A. Payne, "Surface Current Reduction in (211) Oriented CdZnTe Crystals by Ar Bombardment," J. Appl. Phys., 108, 014510, 2010.
2. Conway, A. M., B. W. Sturm, L. F. Voss, P. R. Beck, R. T. Graff, R. J. Nikolic, A. J. Nelson, and S. A. Payne, "Amorphous Semiconductor Blocking Contacts on CdZnTe Gamma Detectors," International Semiconductor Device Research Symposium, December 2009.
3. Voss, L. F., A. M. Conway, B. W. Sturm, R. T. Graff, R. J. Nikolic, A. J. Nelson, and S. A. Payne, "Amorphous Semiconductor Blocking Contacts on CdTe Gamma Detectors," IEEE Nuclear Science Symposium, October 2009.
4. Nelson, A. J., A. M. Conway, C. E. Reinhardt, J. J. Ferreira, R. J. Nikolic, and S. A. Payne, "X-Ray Photoemission Analysis of CdZnTe Surfaces for Improved Radiation Detectors," Materials Lett., 63, 180, 2009.




Enabling Transparent Ceramic Optics with Nanostructured Materials Tailored in Three Dimensions

For more information contact:
Joshua D. Kuntz
(925) 423-9593
kuntz2@llnl.gov

We are developing a novel nanomanufacturing technique, based on the electrophoretic deposition (EPD) process, to create transparent ceramic optics with unique properties based on tailored nanostructures. The EPD process uses electric fields to deposit charged nanoparticles from a solution onto a substrate. We are expanding current EPD capabilities to enable controlled deposition in three dimensions by automating the injection of nanoparticle suspensions into the deposition chamber and dynamically modifying the electrode pattern on the deposition substrate. We can also use the electric field to control the orientation of non-spherical particles during deposition to orient grain structures prior to sintering.

To enable this new functionality, we are synthesizing ceramic nanoparticles as our precursor material, implementing new instrumentation for the benchtop deposition experiments, and developing modeling capabilities to predict deposition kinetics and deposited structures based on the particle, solution, and system properties.

To guide our research and development efforts, we have identified transparent ceramic optics as a major area in which nanostructured functionally graded materials can have a significant impact. Laser physicists and optical system engineers are currently hindered by the small subset of materials available for their designs. The only crystalline materials open to them are those that can be grown as single crystals and isotropic cubic materials that can be formed into transparent ceramics. By depositing nanorods of a noncubic material in the same orientation, the resulting green body can theoretically be sintered to a transparent ceramic. Additionally, current optics designs are material- and process-limited to uniform composition profiles across optical components and laser gain media. To date, only coarse step-function composition changes have been produced in the most advanced transparent ceramic optics. Our EPD platform will enable us to create new transparent ceramic optics with doping profiles tailored in three dimensions.

Figure 1. Examples of multilayer films deposited with either (a) sharp or (b) smooth transitions between material layers.
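Deposition kinetics in EPD are often estimated with the Hamaker relation, in which the deposited mass per unit area grows as the product of field, particle mobility, suspension loading, and time; the sketch below applies that textbook relation with assumed parameter values, not the process model being developed here:

```python
def epd_deposit_thickness(E, mobility, conc, t, efficiency=1.0, rho_green=2.5):
    """Hamaker estimate of electrophoretic deposit growth:
        mass/area = efficiency * mobility * E * conc * t,
    converted to thickness via an assumed green density.  Units:
    E in V/cm, mobility in cm^2/(V s), conc in g/cm^3, t in s,
    rho_green in g/cm^3; returns thickness in cm.  Illustrative only."""
    return efficiency * mobility * E * conc * t / rho_green

# Order-of-magnitude check against a millimeter-scale deposit:
# a 10-V/mm (100-V/cm) field applied for 10 min, with assumed
# mobility and suspension loading.
thick_cm = epd_deposit_thickness(E=100.0, mobility=1e-4, conc=0.15, t=600.0)
```

With these assumptions the estimate lands at a few millimeters, the same order as the 1.4-mm deposit described for Fig. 3 below; real suspension parameters would be fit from experiments.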




Figure 2. Deposition of 70-nm gold particles onto a fixed electrode pattern that is gelled in situ using resorcinol formaldehyde (RF). The white areas are gold and the red areas are the gelled RF. Minimum feature size at the center of the pattern is approximately 10 µm.
Figure 3. Example of a transparent ceramic fabricated using EPD. The ceramic material is Nd:GYSAG and was deposited to a thickness of 1.4 mm in 10 min using a 10-V/mm electric field. The part shown in (a) was vacuum sintered at 1675 °C and then hot isostatically pressed to create the final transparent part (b).

Project Goals
The goals of this project are: 1) demonstrate the fabrication of functionally graded materials with composition profiles tailored in three dimensions while maintaining desired bulk properties; 2) demonstrate the use of the EPD deposition field to simultaneously align nanorod particles of precursor material as they are deposited; and 3) demonstrate the fabrication of composite structures with controlled material composition and create smooth or sharp material transitions along the z-axis of a composite structure.

Relevance to LLNL Mission
The project is intended to establish LLNL leadership in bottom-up nanofabrication of functionally graded materials. Our dynamic EPD system will position us to deliver the next generation of nanomanufacturing capabilities for projects throughout the Laboratory. Using these capabilities, we are working to produce a number of novel materials and structures. These structures will both illustrate the capabilities of the new process and demonstrate materials and structures of relevance to LLNL missions and programs. The main demonstrations for this project align with current and future needs in NIF as well as the LIFE and ALOSA thrust areas. These are: 1) to create transparent ceramic optics with doping profiles tailored in three dimensions to enable new high-powered laser designs (NIF/LIFE); and 2) to deposit aligned nanoparticles of noncubic ceramics to create a new family of transparent ceramics (NIF/LIFE/ALOSA).

FY2010 Accomplishments and Results
Accomplishments and results in the second year include the following:
1. Demonstrated the ability to change material composition and thickness of deposition layers, and created both sharp and gradual material transitions during deposition (Fig. 1).
2. Successfully deposited a 2-D extruded pattern with 10-µm resolution (Fig. 2). The material is deposited onto a photolithographically patterned metal electrode.
3. Developed a new gelation process that enables deposition in an aqueous solution and then "locks in" the pattern by gelling the solution at elevated temperature.
4. Synthesized near-monodisperse fluorapatite nanorods and demonstrated alignment of the rods in a 300-V/cm electric field.
5. Demonstrated transparent ceramic structures fabricated using the EPD process (Fig. 3).

FY2011 Proposed Work
In FY2011 we will 1) demonstrate combined orientation and deposition control and fabricate a transparent optic from a noncubic material; 2) implement fixed-mask x-y control and demonstrate a transparent sintered part with a planar composition gradient; and 3) use in-situ AFM to monitor the EPD process and validate our process model.




High-Resolution Projection Micro-Stereolithography (PµSL) for Advanced Target Fabrication

For more information contact:
Christopher M. Spadaccini
(925) 423-3185
spadaccini2@llnl.gov

Our objective is to advance the state of the art in 3-D micro- and nanofabrication by using Projection Micro-Stereolithography (PµSL). PµSL is a low-cost, high-throughput, microscale stereolithography technique that uses a spatial light modulator (typically a Digital Micromirror Device (DMD) or a Liquid Crystal on Silicon (LCoS) chip) as a dynamically reconfigurable digital photomask.

PµSL is capable of fabricating complex, 3-D microstructures in a bottom-up, layer-by-layer fashion. A CAD model is first sliced into a series of closely spaced horizontal planes. These 2-D slices are digitized in the form of a bitmap image and transmitted to the LCoS. A UV LED illuminates the LCoS, which acts as a dynamically reconfigurable photomask and transmits the image through a reduction lens into a bath of photosensitive resin. The resin that is exposed to the UV light is then cured and anchored to a platform and z-axis motion stage. The stage is lowered a small increment, and the next 2-D slice is projected into the resin and cured on top of the previously exposed structure. This layered fabrication continues until the 3-D part is complete. Figure 1 shows a schematic of our system's optical path.

The process has been shown to have the capability to rapidly generate complex 3-D geometries. Applying this concept to LLNL programmatic problems and advancing PµSL capability with respect to these issues constitutes the primary focus of the research. PµSL performance, such as resolution, materials, geometries, and substrates, has been greatly improved.

Figure 1. Schematic of PµSL system optical path (beam-splitting cube, light slit, pellicle beam splitter, mirror, LCoS, CCD, LED with heat sink, focus/reduction optics, substrate holder, and material container).
Figure 2. SEM images of (a) cylinders, (b) lattice structures, and (c) 3-D components with overhanging features.

Project Goals
Overall project goals included:
1. Establishing a high-functioning PµSL system at LLNL that can produce 3-D components on demand.
2. Improving the resolution of this system by integrating a plasmonic superlens.
3. Broadening the range of materials that can be used with PµSL to



include metals, ceramics, and a range of polymers.
4. Establishing a coupled optical-chemical-fluidic model to better understand the physics of the process and to use as a design tool.

Specific goals for FY2010 were:
1. Exercise the LLNL PµSL system to rapidly fabricate 3-D components of interest to the target community.
2. Fabricate and test a plasmonic superlens to demonstrate the capability for improved resolution, and begin integration with PµSL.
3. Validate the baseline coupled optical-chemical model by comparing the model's predicted polymerization depths to those measured from parts fabricated with the LLNL system.
4. Include fluid motion in the coupled model to study the impact of moving components in the liquid resin bath.
5. Incorporate microfluidics with PµSL to demonstrate the capability to fabricate with multiple materials.

Relevance to LLNL Mission
Target fabrication for NIF and other stockpile stewardship physics experiments has been a critical factor in limiting the scope of tests that can be conducted. Research efforts across LLNL have focused on developing new fabrication techniques that can generate meso- to microscale targets with micro- to nanoscale precision and features. Although much progress has been made, several key target features have been difficult to achieve and would benefit from advances in 3-D microfabrication techniques.
High-resolution PµSL has the potential to directly address target fabrication limitations and may also be of great benefit to the newly emerging LIFE program at LLNL, which has its own set of challenges. An ancillary impact of this work is to enable a host of new MicroElectroMechanical Systems (MEMS) devices never before conceived, made possible by the rapid, high-resolution, fully 3-D nature of the technique.

FY2010 Accomplishments and Results
Significant progress was made during FY2010, including:
1. Fabrication of many 3-D components of interest to the target community. Figure 2 highlights some of these pieces, including cylinders, lattice structures, and fully 3-D parts with overhanging features. Features as small as 5 µm have been demonstrated.
2. Fabrication and demonstration of a working plasmonic superlens. The superlens device was integrated into a PµSL system; however, the light attenuation caused by the superlens did not allow any features to be fabricated. As a result, the path to generating subwavelength features includes a stronger light source and a defect-free superlens structure.
3. Validation of the optical-chemical model. Figure 3 shows a fabricated part compared to the geometry predicted by the numerical model. Fluid motion is also incorporated into the model, and the movement of the substrate as it is lowered into the resin is captured.
4. Generation of a multi-material lattice structure, shown in Fig. 4. This demonstrates both the three-dimensionality and the heterogeneous nature of the technique in a single structure, and made use of microfluidic resin delivery and removal systems.
Our success is evidenced by a recent WFO DARPA award in which PµSL will be used as one of the fabrication tools for generating Materials with Controlled Microstructural Architectures.

Figure 3. (a) Fabricated structure next to (b) the polymerization profile predicted by the numerical model (projected image, cured part, and uncured resin indicated).

Figure 4. Lattice of heterogeneous unit cell structures (scale bar: 1 mm). Two different polymer materials were used in the same structure.

Lawrence Livermore National Laboratory 55


Technology

Three-Dimensional Polymer Fabrication Techniques

For more information contact:
Christopher M. Spadaccini
(925) 423-3185
spadaccini2@llnl.gov

Our objective for this work was to demonstrate the fabrication of several important 3-D polymer shapes of interest to LLNL programs: hollow microspheres and dimple-like structures. This was attempted via two microfabrication techniques: 1) microfluidic-based double emulsions; and 2) wide-area, layer-by-layer lithography.
Based on the published literature and guidance from an academic collaborator, we established the capability to generate hollow microspheres in microfluidic channels. Our existing Projection MicroStereolithography System (PµSL) layer-by-layer fabrication tool was used to obtain the dimpled structures.

Project Goals
There were three primary goals for this project: 1) assemble an apparatus for fabrication of microspheres; 2) demonstrate polymeric shell fabrication and encapsulation of liquids; and 3) demonstrate fabrication of dimpled microstructures.

Relevance to LLNL Mission
This project has important programmatic implications for Global Security's E-Program and for the various laser programs at LLNL.
Encapsulating amine solvents in permeable polymer spheres is useful for the advanced carbon capture technologies used in E-Program. This technique will enable the use of much higher amine concentrations than are used today, offering the potential to lower costs and improve carbon capture. In addition, assembling a microencapsulation apparatus and demonstrating its use was instrumental in securing WFO funding from ARPA-E for future carbon capture work.
Dimpled meso- and microstructures made from polymer materials provide excellent light-scattering surfaces for use in LLNL's optical systems. Smooth surfaces can cause undesirable reflections of laser light in certain experiments, and a means of rapidly generating complex, 3-D, dimpled structures for light-scattering applications is useful in many experimental configurations.

FY2010 Accomplishments and Results
Microfluidic assembly techniques have recently emerged as a new platform for creating droplets, core-shell beads, polymer particles, colloid-filled granules, and microcomponents of controlled shape, size, and polydispersity.

Figure 1. Schematic of flow-focusing microfluidic geometry for microencapsulation (outer, middle, and inner fluids; injection and collection tubes).

Figure 2. Optical micrograph of microfluidic geometry (scale bar: 500 µm).


We targeted a microfluidic approach for microcapsule fabrication based on a flow-focusing geometry. We identified this approach because it offers exquisite control of shell diameter and wall thickness, which are important parameters for mass-transfer considerations and mechanical robustness. In addition, this approach conserves expensive materials since it encapsulates nearly 100% of the desired solution, a significant improvement over other batch processes.
In this approach, a tapered capillary nozzle with a circular cross-section is aligned within a square capillary (Fig. 1). To do this, the outer diameter of the circular capillary must be approximately equal to the inner diameter of the square capillary. A second circular capillary with a larger opening is inserted into the square capillary from the opposite end, which forces alignment. This configuration allows the flow of three fluids: the inner fluid (to be encapsulated), the middle fluid (the shell material), and the outer fluid, which focuses and pinches off the microcapsule droplets in the collection tube (Fig. 2). The dimensions of the capsules are controlled by the fluid viscosities and flow rates as well as the channel size.
We have assembled an apparatus for microcapsule fabrication (Fig. 3). Our setup consists of an inverted microscope and high-speed camera for visualization, three syringe pumps for independent flow control of the three fluids, a UV LED for curing of the photosensitive shell material, and a laptop computer for control. The microfluidic device in Fig. 4 shows the three inlets for the fluids and the outlet for collection of the microcapsules. We used a photopolymerizable material for the shell. The shells were exposed to UV light after collecting in a vial (Fig. 5). We encapsulated a fluorescent dye within the shells in order to confirm that they are capsules. This also shows that the encapsulated material does not diffuse out of the shell.
Dimpled structures were fabricated out of hexanediol diacrylate (HDDA) using our PµSL system. This technique uses UV light and polymerizes 3-D shapes in a layer-by-layer fashion. An example of a dimpled structure fabricated using this technique is shown in Fig. 6.

Related References
1. Thorsen, T., et al., "Dynamic Pattern Formation in a Vesicle-Generating Microfluidic Device," Physical Review Letters, 86, 4163, 2001.
2. Utada, A., et al., "Monodisperse Double Emulsions Generated from a Microcapillary Device," Science, 308, 537, 2005.
3. Dendukuri, D., et al., "Controlled Synthesis of Nonspherical Microparticles Using Microfluidics," Langmuir, 21, 2113, 2005.
4. Shepherd, R. F., et al., "Microfluidic Assembly of Homogeneous and Janus Colloidal Granules," Langmuir, 22, 8616, 2006.

Figure 3. Apparatus for microencapsulation (syringe pumps, inverted microscope, high-speed camera, UV LED, and power supply; inner-, middle-, and outer-fluid inputs and capsule output).

Figure 4. Photograph of microfluidic geometry.

Figure 5. (a) Photograph of microcapsules under UV illumination; (b) optical micrograph of capsule cross-sections (scale bar: 500 µm).

Figure 6. Dimpled microstructure fabricated with PµSL for light scattering.
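The statement that capsule dimensions are set by the flow rates can be made concrete with a simple mass-conservation estimate: each double-emulsion drop's core-to-total volume ratio roughly follows the inner-to-(inner + middle) flow-rate ratio. The sketch below is an idealized estimate under that stated assumption, not a model of the actual LLNL apparatus.

```python
def shell_geometry(q_inner, q_middle, r_outer_um):
    """Estimate core radius and wall thickness of a double-emulsion capsule.
    Assumes core volume / total volume == q_inner / (q_inner + q_middle),
    i.e., the two inner streams are divided between core and shell by flow rate."""
    core_fraction = q_inner / (q_inner + q_middle)
    r_core_um = r_outer_um * core_fraction ** (1.0 / 3.0)
    return r_core_um, r_outer_um - r_core_um  # (core radius, wall thickness) in µm
```

Under this assumption, equal inner and middle flow rates give a core radius of about 0.79 of the outer radius and a wall of about 0.21 of it; raising the middle (shell) flow rate thickens the wall.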



Technology

PDMS Multilayer Soft Lithography for Biological Applications

For more information contact:
Dietrich A. Dehlinger
(925) 422-4030
dehlinger1@llnl.gov

Rapid and accurate characterization of biological pathogens is crucial for the response to new and emerging medical threats and for effective countermeasures. Large reductions in the analytical time for characterization of novel biological pathogens require the demonstration of advanced instrumentation that completely automates complicated laboratory procedures and data analysis. Such automation will enable the determination of growth conditions, metabolic requirements, and effective treatments for novel pathogens in time to cut short incipient epidemics.

Project Goals
The primary goal of this project is to bring to LLNL the capability of fabricating polydimethylsiloxane (PDMS) multilayer soft-lithography chips.
The secondary goal is to build a complete system to demonstrate their value both for reducing labor-intensive operations in biological research and for improving biological methods. For example, microfluidic systems provide better environmental control for culturing cells than standard laboratory methods and also simplify continuous monitoring of individual cells. These advances are enabled by PDMS soft lithography, which allows for microfluidic channels that become instantly reconfigurable through the use of appropriately placed valves. Each valve is simply an intersection of one microfluidic channel above another, with each channel fabricated in a different stack of thin, parallel silicone rubber. By pressurizing one microfluidic channel, the thin membrane separating it from another channel deflects, creating a small hydraulic valve. This reconfigurability can create new chambers, pump fluids through peristaltic action, and route those fluids to different parts of a device. The devices were made using the LLNL microfabrication facilities.

Relevance to LLNL Mission
Rapid detection and characterization tools are essential for identifying novel pathogens in time to limit the scale of an incipient epidemic, and for developing countermeasures. Soft-lithography capability will allow for the rapid creation of experimental platforms for screening, analysis, and enclosed long-term monitoring of biological systems, thereby enhancing the nation's ability to prevent epidemics.

FY2010 Accomplishments and Results
In FY2010 we have:
1. Introduced the complete PDMS fabrication process to LLNL. This includes both the capability to manufacture the molds required for fabrication and the processes required for complete device assembly. With the capacity to manufacture chips using PDMS soft lithography in house, we have demonstrated an experimental platform tailored to meet an experimenter's specific needs while reducing experimental turnaround time.
2. Successfully and repeatedly fabricated robust, high-yield devices. Using our established processes, we have demonstrated our capability to consistently make devices that hold up to long-term experimental conditions. To test the viability of our devices, we ran complicated biological procedures that did not require operator input for days. Ultimately, similar experiments will operate unattended for weeks or months.
3. Assembled an automated microscope workstation for monitoring of devices under controlled conditions.

Figure 1. A close-up view of PDMS microchannels and valves used for cell cultures. The culture loop holds approximately 5 nL of fluid and thousands of cells.

Figure 2. PDMS microfluidic chip containing four separate devices.
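The peristaltic pumping mentioned above is conventionally achieved by actuating three in-line membrane valves in a repeating phase pattern (see Unger et al. in the Related References). The sketch below only illustrates that actuation pattern; it is not the LLNL control code, and the valve states are abstract placeholders rather than a real hardware interface.

```python
from itertools import islice

# Valve states for three in-line valves (1 = pressurized/closed, 0 = open).
# Stepping through the six phases pushes a bolus of fluid along the channel.
PERISTALSIS = [(1, 0, 0), (1, 1, 0), (0, 1, 0), (0, 1, 1), (0, 0, 1), (1, 0, 1)]

def pump_phases():
    """Endlessly cycle the six-phase peristaltic actuation pattern."""
    while True:
        for phase in PERISTALSIS:
            yield phase

first_cycle = list(islice(pump_phases(), 6))
```

Each step changes exactly one valve, so the closed region sweeps smoothly down the channel; pump rate is set by how fast the controller steps through the phases.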


The primary use of the devices is to run experiments under highly controlled environmental conditions with optical measurements of the results. Our experimental platform took both fluorescent and bright-field optical images at controlled points in space and at specified times. We gained high-resolution data about cellular systems over long periods of time without disrupting the experiment or requiring operator input.
4. Wrote a flexible program to control and monitor chips in an automated fashion. Both our experimental chip and the monitoring platforms are controlled via automation. We implemented a software package in LabVIEW that provides integrated control of all platform components through one interface, and allows the scripting of automatic operation and monitoring of experiments for days.
5. Integrated all the components to form an automated cell-culture system on a chip. By integrating our control software and our experimental monitoring and control system with our PDMS microfluidic laboratories, we have demonstrated a fully automatic cell-culture system with an unprecedented ability to collect time-sequenced bright-field and fluorescent images under various operating conditions without operator intervention.
Figures 1 through 5 are photographs of our system and controls.

Figure 3. Chip interface system.

Figure 4. Microscope and environmental control system used to monitor and maintain growth conditions on the microfluidic system.

Figure 5. System control software.

Related References
1. Balagadde, F. K., H. Song, C. H. Collins, M. Barnet, F. H. Arnold, S. R. Quake, and L. You, "A Synthetic Escherichia Coli Predator-Prey Ecosystem," Mol. Sys. Biol., 4, 187, pp. 1–8, 2008.
2. Unger, M. A., H.-P. Chou, T. Thorsen, A. Scherer, and S. Quake, "Monolithic Microfabricated Valves and Pumps by Multilayer Soft Lithography," Science, 288, 7, pp. 113–116, April 2000.
3. Duffy, D. C., J. C. McDonald, O. J. A. Schueller, and G. Whitesides, "Rapid Prototyping of Microfluidic Systems in Poly(dimethylsiloxane)," Analytical Chemistry, 70, 23, pp. 4974–4984, December 1998.
4. Melin, J., and S. R. Quake, "Microfluidic Large-Scale Integration: The Evolution of Design Rules for Biological Automation," Annu. Rev. Biophys. Biomol. Struct., 36, pp. 213–231, 2007.
5. Balagadde, F. K., L. You, C. L. Hansen, F. H. Arnold, and S. R. Quake, "Long-term Monitoring of Bacteria Undergoing Programmed Population Control in a Microchemostat," Science, 309, 5731, pp. 137–140, July 2005.

FY2011 Proposed Work
In FY2011, we will continue to use our results to support energy and medical applications as well as new areas.
We propose to 1) introduce this capability to several collaborating groups at LLNL; 2) implement a series of modules that can be used to rapidly assemble new devices as required; 3) work toward making these systems more portable and user-friendly; and 4) implement devices and procedures to improve the input/output connections of the microfluidic devices with the macroscale world.



Research

Embedded Sensors for Gas Monitoring in Complex Systems

For more information contact:
Jack Kotovsky
(925) 424-3298
kotovsky1@llnl.gov

The change of a complex gas mixture within a closed system indicates the status of aging materials. Emission monitoring of trace gases reveals the onset and evolution of chemical indicators that offer an early warning of ensuing problems. This project advances technologies relevant to complex-mixture gas detection for advanced state-of-health system assessments.

Project Goals
The overall goal of this project is to develop broad-specie gas detection systems relevant for embedded trace-gas detection. These systems must be small, embeddable, and safe, while offering long-term fingerprint gas detection. To accomplish these goals, PhotoAcoustic Spectroscopy (PAS) and Surface Enhanced Raman Spectroscopy (SERS) are both being pursued. Both methods offer the potential of meeting the stated goals with fiber-optic-addressable systems, and both offer similar features: fingerprint gas detection capability, very small form factor, and remote, fiber-optic interrogation. Each has technical challenges that must be overcome to meet its intended function.

Relevance to LLNL Mission
As a national security laboratory, LLNL is responsible for ensuring that the nation's nuclear weapons remain safe, secure, and reliable through the application of advances in science and engineering. Persistent surveillance through embedded sensing is a new paradigm for the efficient determination of the overall state-of-health of the stockpile. This effort contributes tools needed for this challenging instrumentation effort.

FY2010 Accomplishments and Results
PAS. PAS is capable of detecting infrared-absorbing gas molecules in concentrations as low as parts per trillion. Molecules that absorb pulsed light at unique wavelengths will partially release the absorbed energy in non-radiative modes, which produces local gas heating and a resultant pressure pulse. If the target molecule is present in a complex gas sample, a pressure wave is produced at the light's pulse frequency. To be adapted to LLNL mission relevance, a fiber-optic-based acoustic detector must be created and the system must be reduced in size by orders of magnitude.
In this first year, an electronic acoustic detector was used to score optical-acoustic-detector prototypes. Absorption-line and thermal-noise investigations guided the design and implementation of a proprietary laser system that addresses background noise issues, such as thermal noise and a broadly absorbing acoustic chamber, with a novel optical compensation. A first-generation optical acoustic detector was designed, built, and tested. A second-generation acoustic detector was designed with excellent flexibility and tunability to optimize the system's sensitivity (Fig. 1).
SERS. SERS has been used for many years, principally for liquid and solid sample analyses. At LLNL we have been able to detect femtomolar-concentration solutions reproducibly on wide-area, Ag- and Au-coated Si or SiO2 nanopillar substrates. The high sensitivity of our substrates enables the pursuit of the much harder problem of gas-phase detection, and their compatibility with optical fiber leads to consideration for an in-situ, compact detection device (Fig. 2).
A confocal Raman setup (Fig. 3) was used to detect O2 in ambient air and in a nitrogen-purged atmosphere over different temperature cycles, as shown in Fig. 4. Most of the observed peaks are within 10 cm−1 of assigned peaks due to O2 species reported in the literature, the presence of which was also confirmed by XPS.

Figure 1. Second-generation photoacoustic spectrometer. The design includes an acoustic chamber and is powered and interrogated with a pair of optical fibers for remote, embedded trace-gas detection.

Figure 2. Conception of fiber-based SERS using LLNL SERS templates for sensing (nanopillar gas-sensing tip) and fiber bundles for laser excitation and delivery of signal to the spectrometer.
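Because the photoacoustic pressure wave appears at the laser's pulse frequency, its strength can be recovered by demodulating the microphone record at that frequency, as a lock-in amplifier does. The sketch below is a generic illustration of that principle, not the LLNL detector electronics.

```python
import math

def lockin_amplitude(samples, sample_rate_hz, mod_freq_hz):
    """Amplitude of the signal component at mod_freq_hz, via quadrature
    demodulation over (ideally) an integer number of modulation cycles."""
    n = len(samples)
    i = sum(s * math.cos(2 * math.pi * mod_freq_hz * k / sample_rate_hz)
            for k, s in enumerate(samples))
    q = sum(s * math.sin(2 * math.pi * mod_freq_hz * k / sample_rate_hz)
            for k, s in enumerate(samples))
    return 2.0 * math.hypot(i, q) / n
```

For example, a photoacoustic tone of amplitude 0.5 at the pulse frequency is recovered as approximately 0.5 even when an unrelated tone is superimposed, since components at other frequencies average out of the quadrature sums.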


An independent fiber-based Raman system was set up in a fume hood with controlled gas delivery of CO, various NOx, and VOCs, along with a low-noise fiber probe and a high-power, ultra-stable laser for highly sensitive measurements. Initial tests on toluene have been performed with detection by SERS.

Figure 3. (a) Schematic of the confocal Raman system (laser, dichroic mirror, Raman filter, objective lens, gratings-based spectrometer, and LN-cooled CCD camera); (b) real setup with pressure- and temperature-controlled gas cell (−196 °C to 600 °C), with liquid N2 and gas connections and X-Y stage movement.

Figure 4. (a) SERS spectra from the LLNL pillars substrate under N2 and O2 flow at room temperature; (b) SERS spectra of O2 using the LLNL pillars substrate at various substrate temperatures (25 °C to 205 °C, plus cool-down), with labeled peaks between 335 and 1018 cm−1.

Related References
1. Gartia, M., et al., "Surface Enhanced Raman Spectral Characterization of Large-Area High-Uniformity Silver-Coated Tapered Silica Nanopillar Arrays," Nanotechnology, 21, 39, 2010.
2. Gartia, M., et al., "Large-Area Vertical Nanopillar Arrays," FACSS, Raleigh, North Carolina, October 2010.
3. Bora, M. G., et al., "Vertical Pillar Array Plasmon Cavities," Nanoletters, 10, 8, 2010.
4. Mosier-Boss, P., and S. H. Lieberman, "Detection of Volatile Organic Compounds Using Surface Enhanced Raman Spectroscopy Substrates Mounted on a Thermoelectric Cooler," Anal. Chim. Acta, 488, pp. 15–25, 2003.
5. Smythe, E. J., et al., "Optical Antenna Arrays on a Fiber Facet for In Situ Surface-Enhanced Raman Scattering Detection," Nanoletters, 9, 3, pp. 1132–1138, 2009.

FY2011 Proposed Work
PAS. In FY2011, the new laser system and the second-generation acoustic detector (Fig. 1) will be tested. These features will be housed in a series of shrinking acoustic chambers in pursuit of an embedded system of less than 3 mm. If successful, design iterations will work toward a balance of improved limits of detection and reduced system size. Tests with varied CO2 concentrations and complex mixtures will be used to benchmark system performance.
SERS. In FY2011, we plan to test different gas mixtures in a controlled environment to determine cross-sensitivity, specificity, and limit of detection. We will also introduce chronocoulometry to characterize the surface dynamics.



Technology

Neutron Cookoff: Read-Out Electronics for LLNL Pillar Detector

For more information contact:
Rebecca Nikolić
(925) 423-7389
nikolic1@llnl.gov

This project is constructing a high-efficiency, pillar-structured thermal neutron detector. As we are now moving from the proof-of-principle phase toward instrumentation, the ability to read out our device is critical for continued growth in this area.
Specifically, as there is a pressing need for a 3He tube replacement, our project is under increased time pressure to show that a field-ready device is possible in a short time frame. LLNL has been requested to participate in the next "neutron cookoff," in which the Domestic Nuclear Detection Office (DNDO) will perform laboratory tests of the detector modules with neutron- and gamma-ray-emitting sources to characterize parameters such as inherent efficiency, response/dead time, environmental/mechanical performance, and gamma rejection.

Project Goals
Under this project we will build the read-out electronics for one element and test its electrical and radiation performance. Then we will tile up to nine of these elements together, integrate again with the read-out components, and characterize our instrument.

Relevance to LLNL Mission
LLNL is funded by DHS-DNDO to construct a high-efficiency, pillar-structured thermal neutron detector, consistent with its homeland and global security missions.

FY2010 Accomplishments and Results
A schematic of our device is shown in Fig. 1.
The read-out electronics is based on a well-known topology, an arrangement originally constructed to provide germanium-quality performance. In this case, such a level of performance is not required, and several compromises could be made for low power while preserving adequate bandwidth (20-ns rise time). The fast response is needed to provide adequate timing for coincidence measurements. The noise profile of the system differs from most standard applications in that the dominant noise in the read-out is given by the detector leakage current. Therefore, the input JFET was not chosen, as is common practice, to capacitively match the detector, but to give negligible overall noise contributions (1/f, white, and current noise) to the overall noise figure for the lowest reasonable current consumption (~1-mA drain current).
For simplicity, and to allow the capability of collecting hole or electron signals, the power supply rails have been kept at ±12 V, but further optimization of their value could be done to further lower the power consumption. The preamplifier is followed by a fast shaper using a semi-Gaussian, fourth-order bipolar shape with a 250-ns peaking time. This choice minimized the dominant noise component, the detector's current noise associated with its leakage current. Bipolar shaping poses a penalty on the white (high-frequency) noise components, but is more tolerant of the lower-frequency components and offers intrinsically better baseline return than unipolar signals, mitigating the need for a baseline restoration circuit.
The shaper optimizes the signal-to-noise ratio, and its output is fed to a simple low-power, leading-edge comparator that detects the presence of a signal; the shaper output is also capable of driving 50-ohm loads for different applications.
The fabrication of the read-out electronics board, integration with our detectors, and system characterization were done at LLNL.

Figure 1. Schematic of LLNL pillar detector integrated with read-out (bias, AC-coupling caps, gain adjust, simple shaper, leading-edge discriminators, summing stage, and TTL output).
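As a rough illustration of the semi-Gaussian shaping described above: in a generic CR-RC^n model (an assumption made here for illustration, not the actual LLNL circuit), the unipolar step response is proportional to (t/τ)^n · exp(−t/τ), which peaks at t = n·τ; a fourth-order shape with a 250-ns peaking time therefore corresponds to τ ≈ 62.5 ns.

```python
import math

def semi_gaussian(t_ns, tau_ns, order=4):
    """Unipolar semi-Gaussian (CR-RC^n) pulse shape: (t/τ)^n · exp(−t/τ)."""
    x = t_ns / tau_ns
    return x ** order * math.exp(-x)

def peaking_time_ns(tau_ns, order=4, dt_ns=0.05):
    """Locate the pulse peak by a coarse scan; analytically it falls at t = n·τ."""
    best_t, best_v = 0.0, 0.0
    t = dt_ns
    while t < 10.0 * order * tau_ns:
        v = semi_gaussian(t, tau_ns, order)
        if v > best_v:
            best_t, best_v = t, v
        t += dt_ns
    return best_t
```

A bipolar shape, as used in the actual read-out, can be viewed as the derivative of such a unipolar pulse, which is what gives the fast return to baseline noted in the text.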


Figure 2. Nine-element detector. Each detector has an active area of 2 mm × 2 mm. Current vs. voltage and thermal neutron response (counts vs. channel) are shown for the first column of the 3 × 3 set (elements 1, 4, and 7).

Figure 2 shows a picture of a 3 × 3 array of detectors integrated into the 9-channel read-out system, along with the electrical and radiation response. Figure 3 shows the final integrated system that was fully characterized under this project. The next step is the participation of our detector in the "DNDO Neutron Cookoff," tentatively scheduled for fall 2010.

Related References
1. Fabris, L., N. W. Madden, and H. Yaver, "A Fast, Compact Solution for Low Noise Charge Preamplifiers," NIM-A, 424, pp. 545–551, 1999.
2. Conway, A. M., R. J. Nikolic, and T. F. Wang, "Numerical Simulations of Carrier Transport in Pillar Structured Solid State Thermal Neutron Detector," International Semiconductor Device Research Conference, College Park, Maryland, December 12–14, 2007.
3. Nikolic, R. J., A. M. Conway, C. E. Reinhardt, R. T. Graff, T. F. Wang, N. Deo, and C. L. Cheung, "Pillar Structured Thermal Neutron Detectors," International Conference on Solid State and Integrated Circuit Technology, Beijing, China, October 20–23, 2008.
4. Nikolic, R. J., C. L. Cheung, C. E. Reinhardt, and T. F. Wang, "Roadmap for High Efficiency Solid-State Neutron Detectors," SPIE – International Symposium on Integrated Optoelectronic Devices, 6013, 1, pp. 36–44, 2005.
5. Nikolic, R. J., A. M. Conway, C. E. Reinhardt, R. T. Graff, T. F. Wang, N. Deo, and C. L. Cheung, "Pillar Structured Thermal Neutron Detector With 6:1 Aspect Ratio," Appl. Phys. Lett., 93, p. 133502, 2008.

Figure 3. Nine-channel COTS read-out integrated with nine-element pillar detector array.



Research

Isotachophoretic Separation of Actinides

For more information contact:
Raymond P. Mariella, Jr.
(925) 422-8905
mariella1@llnl.gov

Isotachophoresis (ITP), also known as ionic migration, electromigration, steady-state-stacking electrophoresis, and disc electrophoresis, is an electrophoretic technique that separates and stacks mixtures of two or more ions (charged particles) in solution according to their electrophoretic mobilities, µi, where the subscript "i" refers to the "ith" ion, either "A" or "B" in Figure 1. The ultimate resolution of two ions, A and B, is a nonlinear function of the differences between their electrophoretic mobilities, µA and µB, as well as of their diffusion coefficients in the solution at the interface between the separated regions. Influenced by the concentration of the leading electrolyte, "L," ions A and B can be both separated and concentrated, with concentrations possibly exceeding the Debye-Hückel assumptions of dilute solute. A reduction of the overall channel length has been demonstrated with the use of a counter flow.
Figure 1 is a schematic of the ITP technique, showing the separation of a mixture of "A" and "B," with time progressing from t0 to t6; "L" is the leading electrolyte, and "T" is the trailing electrolyte.

Figure 1. Schematic of the ITP technique (sampling and separation compartments; zones T, B, A, and L shown at times t0 through t6).

Figure 2. Apparatus for ITP.

Project Goals
Our goal was to determine whether ITP could be applied to the analysis of numerous 10-g samples of debris/rubble from an event. ITP was identified for such an application because, as opposed to chromatography and electrophoresis, ITP is capable of rivaling state-of-the-art separations that use ion-exchange columns for separation and concentration of minority constituents in multi-gram samples. Our literature search also found that the apparatus for ITP can be very simple (Figs. 2 and 3).
As can be seen from equations (1) and (2), the key physical parameters that are needed to model the ITP


process are diffusion coefficients Di and electrophoretic mobilities µi. Although this feasibility project did not have sufficient funds to perform ab initio or similar modeling, we have found literature examples that have modeled the ions themselves, which leaves us optimistic that Di and µi can be calculated using semi-empirical calculations:

Simplified equation of continuity:

$$\frac{\partial c_i}{\partial t} = D_i\,\frac{\partial^2 c_i}{\partial x^2} \;-\; j\,\frac{\partial}{\partial x}\!\left(\frac{c_i\,\mu_i}{k}\right) \tag{1}$$

Kohlrausch regulating function:

$$C_i = C_L\,\frac{\mu_i\,(\mu_L + \mu_{LC})}{\mu_L\,(\mu_i + \mu_{LC})} \tag{2}$$

where µ is the mobility of an ion (i), leading electrolyte (L), or leading counter ion (LC).

Figure 3. Large-scale separation of proteins from serum, using the apparatus pictured in Fig. 2 (figure labels indicate the starting sample size and the narrow final "discs" of different haptoglobins). Note the narrow zones (discs) that formed for each constituent over time, notwithstanding the relatively large diameters of the separation columns. Flow was left to right, and the reproducibility of the process for multiple tubes in parallel is clearly visible.
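As a quick numerical illustration, the adapted zone concentration in equation (2) can be evaluated directly. The mobility and concentration values below are illustrative placeholders, not measured actinide data:

```python
# Numerical sketch of the Kohlrausch regulating function, Eq. (2).
# All values below are hypothetical, for illustration only.

def kohlrausch_concentration(c_L, mu_i, mu_L, mu_LC):
    """Adapted zone concentration C_i behind a leading electrolyte of
    concentration c_L, given the mobilities of the sample ion (mu_i),
    leading ion (mu_L), and leading counter ion (mu_LC)."""
    return c_L * (mu_i * (mu_L + mu_LC)) / (mu_L * (mu_i + mu_LC))

c_L = 10e-3                                   # 10 mM leading electrolyte
mu_L, mu_i, mu_LC = 7.9e-8, 5.0e-8, 3.6e-8    # mobilities, m^2/(V s)
c_i = kohlrausch_concentration(c_L, mu_i, mu_L, mu_LC)
print(f"adapted zone concentration: {c_i * 1e3:.2f} mM")
```

Because the adapted concentration is set by the leading-electrolyte concentration and the mobility ratios, a dilute sample zone adapts toward this value regardless of its starting concentration, which is the origin of the concentration (stacking) effect described above.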
Figure 4 is a QM calculation of UO2(CO3)3^(4-)•28(H2O) that includes the double layer and even the outer (third) layer of associated solvent molecules. It seems likely that the "non-slip" assumption, in which such an ion is modeled as a hard sphere, will fail, since the "free" solvent molecules can be expected to exchange readily with the third layer of "bound" solvent molecules.

Figure 4. QM calculation of UO2(CO3)3^(4-)•28(H2O) that includes the double layer and the outer (third) layer of associated solvent molecules. Green = oxygen atom; red = hydrogen atom; pink = uranium atom; and gray = carbon atom.

Relevance to LLNL Mission
Rapid, quantitative sample preparation (separation and purification) is a key step in post-detonation nuclear forensic analysis, an important component of the Laboratory's mission. A procedure that uses ITP offers increased throughput over state-of-the-art procedures.

FY2010 Accomplishments and Results
The FY2010 literature search provided information that strongly supports the application of ITP to analyze heavy metals in debris/rubble. We conclude that ITP is appropriate for three reasons:
1. Majority constituents, such as ions from the elements Ca, Mg, Al, Na, and K, can be discarded from the sample, due to their significantly differing electrophoretic mobilities from ions such as UO2^(2+).
2. Many-gram quantities are handled via ITP in relatively simple apparatus.
3. Minority constituents of a mixture are routinely concentrated during the ITP process, reaching concentration factors of one million-fold in optimized cases.
It would be very important to make experimental measurements of the separation factors of actinide ions and ion complexes as the heart of a follow-on project.

Related References
1. Kendall, J., Science, 67, 163, 1928.
2. Brewer, A. K., et al., Journal of Research of the National Bureau of Standards, 38, 137, 1947.
3. Kubicki, J., G. Halada, P. Jha, and B. Phillips, Chemistry Central Journal, 3, 10, 2009.
4. Choppin, G. R., and R. J. Silva, Journal of Inorganic & Nuclear Chemistry, 3, 153, 1956.
5. Jung, B., R. Bharadwaj, and J. G. Santiago, Analytical Chemistry, 78, 2319, April 2006.

Lawrence Livermore National Laboratory 65


Technology

Extraction of White Blood Cells from Whole Blood Through Acoustic Focusing

For more information contact:
Elizabeth K. Wheeler
(925) 423-6245
wheeler16@llnl.gov

In this project, we used acoustic focusing within a microfluidic device to attempt to separate white blood cells from a sample of whole blood. White blood cells provide important biomarkers that can be used for presymptomatic detection of infections and disease. To isolate and detect these biomarkers, the white blood cells must first be extracted from whole blood to remove the significant background material.

The microfluidic device used for these separations was created previously as a large cell/particle separator for performing automated sample preparation on complex biological samples such as blood, sputum, and urine. The device works by using a piezoelectric transducer bonded to a silicon/glass chip with a microchannel etched into the silicon. When the transducer is driven with a high-frequency AC voltage, it generates acoustic standing waves in the microfluidic channel. These waves produce a force field that moves particles to nodes or antinodes of the acoustic wave, depending on the relative compressibility and density between the particle and the suspending liquid. The magnitude of the acoustic forces scales with the volume of the particle, providing a natural size cutoff for fractionation, while the node and antinode locations depend on the fluid channel geometry and the acoustic driving frequency. This system, shown in Fig. 1, was previously demonstrated to separate polystyrene particles as small as 2 µm at a throughput of up to 100 µL/min.

Project Goals
The goals of this project are: 1) create a surrogate blood sample with white blood cells spiked into whole animal

Figure 1. Image of the front (top) and back (bottom) of the microfluidic acoustic chip. The gray region in the lower image is the piezoelectric transducer that generates the acoustic waves within the etched microchannels.
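The size scaling noted above can be made concrete with the standard expression for the primary acoustic radiation force on a small particle in a 1-D standing wave (the Gor'kov result). The material properties and field values below are generic illustrative numbers, not measurements from this device:

```python
import math

# Sketch: primary acoustic radiation force on a small spherical particle
# in a 1-D standing wave (Gor'kov form). Property values are illustrative.

def contrast_factor(rho_p, rho_f, kappa_p, kappa_f):
    """Acoustic contrast factor: positive values drive particles to
    pressure nodes, negative values to antinodes."""
    f1 = 1.0 - kappa_p / kappa_f                        # monopole term
    f2 = 2.0 * (rho_p - rho_f) / (2.0 * rho_p + rho_f)  # dipole term
    return f1 / 3.0 + f2 / 2.0

def radiation_force(a, freq, c_f, e_ac, phi, z):
    """Force (N) on a sphere of radius a at position z in the wave;
    note the a**3 (particle volume) scaling."""
    k = 2.0 * math.pi * freq / c_f
    return 4.0 * math.pi * phi * k * a**3 * e_ac * math.sin(2.0 * k * z)

# Cell-like particle in water at the 870-kHz drive noted in Fig. 3.
phi = contrast_factor(rho_p=1050.0, rho_f=1000.0,
                      kappa_p=4.0e-10, kappa_f=4.5e-10)
f_8um = radiation_force(4e-6, 870e3, 1480.0, 10.0, phi, 50e-6)
f_2um = radiation_force(1e-6, 870e3, 1480.0, 10.0, phi, 50e-6)
print(phi > 0)        # cell-like particles migrate toward pressure nodes
print(f_8um / f_2um)  # volume scaling favors the larger cells
```

The cubic dependence on radius is the "natural size cutoff" described in the text: an 8-µm cell feels a force 64 times larger than a 2-µm particle under otherwise identical conditions.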




blood for device testing; 2) demonstrate that the acoustic forces in the existing devices are sufficient to focus and separate white blood cells in a purified sample (1X phosphate buffered saline (PBS) solution); and 3) demonstrate separation of white blood cells from whole blood.

Relevance to LLNL Mission
This work directly impacts ongoing and future efforts within LLNL's Global Security Principal Directorate for new platforms for biosecurity applications. Preparation of complex samples, such as the removal and extraction of target white blood cells, is a critical step to maximize the selectivity and sensitivity of downstream biological assays to detect important markers for infection and/or disease. To date, the best selective preparation results have been achieved in laboratories where effective sample preparation using bench-top techniques, such as membrane filtration, centrifugation, and chemical methods, has been applied to the samples. Our goal is to demonstrate a robust, automated sample preparation that reduces preparation time, improves performance of downstream detection assays, and can be integrated into an end-to-end detection platform.

FY2010 Accomplishments and Results
Accomplishments and results for this year include the following:
1. Implemented protocols for culturing and staining Raji cells, which are B lymphocytes from a leukemia cell line with an average size of 7–8 µm (Fig. 2).
2. Demonstrated focusing and separation of white blood cells out of a phosphate buffered saline stream using the microfluidic acoustic device shown in Fig. 3.
3. Identified the need for a coating (e.g., heparin) in the microfluidic separator and associated fluidics to reduce clogging due to adsorption of material from the blood onto the exposed surfaces.

Figure 2. (a) Stained white blood cells (Raji cells) used in acoustic separation experiments. The cells are generally 7–8 µm in diameter and are shown next to (b) 5.78-µm fluorescent particles for comparison.

Figure 3. Acoustic focusing and extraction of white blood cells from an input sample stream (cells suspended in PBS). (a) Top view of the microfluidic chip near the bifurcating exit. The sample flows in from the left side in the upper half of the channel and exits the top if the acoustic field is turned off and the bottom when the field is on (acoustic field on at 870 kHz). (b) Raji cells exiting through the upper outlet with the acoustic field turned off. (c) Raji cells focused and exiting the lower stream when the acoustic field is turned on.



Measurement Technologies
Research

Detection, Classification, and Estimation of Radioactive Contraband from Uncertain, Low-Count Measurements

For more information contact:
James V. Candy
(925) 422-8675
candy1@llnl.gov

Radionuclide (RN) detection is a critical first-line defense used by Customs and Border Protection (CBP) to detect the transportation of radiological materials by potential terrorists. Detection of these materials is particularly difficult due to the inherent low-count emissions produced. RN detection from low-count gamma-ray emissions is a critical capability that is very difficult to achieve.

This project is focused on the detection, classification, and estimation of special nuclear material (SNM) from highly uncertain, low-count RN measurements. We apply innovative sequential Bayesian model-based statistical processing algorithms that take advantage of the statistical nature of radiation transport physics by incorporating a priori knowledge of nuclear physics. This effort encompassed theory, simulation, experiments, and application. It enabled the development of advanced signal/image-processing techniques for the next generation of processors.

Project Goals
The goal is to develop an innovative radiation-detection solution uniquely qualified to provide rapid and reliable performance in applications that require accurate detection of radioactive material. Thus, our goal is a reliable detection with a 95% detection probability at a 5% false alarm rate in less than a minute.

Relevance to LLNL Mission
The detection of illicit SNM is a top priority of LLNL in furthering its national security mission. RN detection, classification, and identification are critical for detecting the transportation of radiological materials by terrorists, an important goal in national and international security.

FY2010 Accomplishments and Results
Our FY2010 accomplishments included the development of a "smart" Statistical Radiation Detection System (SRaDS) software capable of automatically detecting the presence of targeted SNM by incorporating a parallel/distributed sequential processor. The following was accomplished:
1. Theoretically developed an optimal (physics-based) decision function incorporating both photoelectric and downscattered photons (Compton scattering).
2. Applied these theoretical results to develop a "smart" algorithm capable of automatically providing detection of targeted SNM.
3. Implemented a simple signal processing transport (1-D geometry) model enabling downscatter photon discrimination.
4. Estimated the required physics parameters, such as channel energy, interarrival parameters, and emission probabilities, using modern Bayesian sequential techniques (Kalman and particle filters).
5. Implemented a parallel/distributed algorithmic structure capable of being realized by field-programmable gate arrays (FPGAs) for high computational speeds.

Figure 1. Schematic of statistical Bayesian design (sequential radiation detection processor: each photon is discriminated into photoelectric or downscatter channels, parameter estimation updates the corresponding decision function, and a composite decision function alarms when the targeted RN is detected).

The basic structure of the processor implementation is shown in Fig. 1. After the photon information (energy/rate) is extracted from the photon by the measurement electronics, it is discriminated to determine if it is associated with the target RN. If so, the parametric information is enhanced by performing parameter estimation and input to update the sequential decision function to decide




(detection) whether or not the targeted RN is present.

Results of this photon-by-photon processor with downscatter are shown in Fig. 2. In the figure, three columns are shown. The first column is the composite (not used) pulse-height spectrum (PHS); the second is the measured photon energies (arrivals) in red circles, with the green circles representing the discriminator output photoelectrons and the purple squares the discriminated downscatter photons. Notice that they align with the PHS energy "lines." The final column is the decision function for each of the targeted RNs. As each photon is processed, the decision function is sequentially updated until one of the thresholds (target/non-target) is crossed (solid red box in figure), declaring a targeted threat or non-threat.

The performance of the processor was substantiated by extracting a 100-member ensemble of controlled experimental data and comparing it to the GAMANAL software solution: the processor's detection rate of 98% easily exceeded GAMANAL's 47%, both at essentially 0% false alarm rate. These results demonstrate the potential capability of the sequential Bayesian model-based approach to solving a variety of radiation-detection problems.

Related References
1. Candy, J. V., E. F. Breitfeller, B. L. Guidry, D. Manatt, K. E. Sale, D. H. Chambers, M. A. Axelrod, and A. M. Meyer, "Physics-Based Detection of Radioactive Contraband: A Sequential Bayesian Approach," IEEE Trans. Nuclr. Sci., 56, 6, pp. 3694–3711, 2009.
2. Candy, J. V., D. H. Chambers, E. F. Breitfeller, B. L. Guidry, J. M. Verbeke, M. A. Axelrod, K. E. Sale, and A. M. Meyer, "Threat Detection of Radioactive Contraband Incorporating Compton Scattering Physics: A Model-Based Processing Approach," IEEE Trans. Nuclr. Sci., 57, 6, 2010.
3. Candy, J. V., D. H. Chambers, E. F. Breitfeller, B. L. Guidry, J. M. Verbeke, M. A. Axelrod, K. E. Sale, and A. M. Meyer, "Model-Based Detection of Radioactive Contraband for Harbor Defense Incorporating Compton Scattering Physics," Proc. OCEANS09, IEEE OES Soc., 2010.
4. Candy, J. V., D. H. Chambers, E. F. Breitfeller, B. L. Guidry, J. M. Verbeke, M. A. Axelrod, K. E. Sale, and A. M. Meyer, "Radioactive Threat Detection with Scattering Physics: A Model-Based Application," Proc. CIP, IEEE Comp. Soc., 2010.

FY2011 Proposed Work
Teaming with ICx, a well-known manufacturer of portable radiation detection systems, this project received an R&D 100 award, with high anticipation of bringing it to market in the future.

Figure 2. Sequential Bayesian detection and identification. (a) Pulse-height spectrum (after calibration). (b) Photon arrivals (red circles) with photoelectron discrimination (green circles) and downscatter photons (purple squares). (c) Decision functions for 60Co, 137Cs, and 133Ba, with detections declared at 5.76 s, 0.47 s, and 0.46 s, respectively.
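The photon-by-photon updating described above has the flavor of a sequential (Wald-type) likelihood-ratio test. A minimal sketch of such a two-threshold sequential decision function, using simple exponential interarrival models rather than the project's full physics-based models, is given below; all rates and simulation values are hypothetical:

```python
import math
import random

# Sketch: sequential two-threshold decision on photon interarrival times.
# H1: target present (rate lam1); H0: background only (rate lam0).
# This is a plain Wald SPRT with exponential interarrival models; the
# actual SRaDS decision function is physics-based and much richer.

def sprt(interarrivals, lam0=50.0, lam1=80.0, pd=0.95, pfa=0.05):
    upper = math.log(pd / pfa)                  # declare target present
    lower = math.log((1 - pd) / (1 - pfa))      # declare target absent
    llr = 0.0
    for n, dt in enumerate(interarrivals, 1):
        # log-likelihood-ratio increment for one exponential interarrival
        llr += math.log(lam1 / lam0) - (lam1 - lam0) * dt
        if llr >= upper:
            return "target", n
        if llr <= lower:
            return "no target", n
    return "undecided", len(interarrivals)

random.seed(0)
arrivals = [random.expovariate(80.0) for _ in range(2000)]  # simulated target
decision, n_used = sprt(arrivals)
print(decision, n_used)
```

The key property, mirrored in Fig. 2(c), is that the number of photons processed is itself random: the test terminates as soon as the accumulated evidence crosses either threshold, which is why the strong 137Cs and 133Ba sources above are declared in well under a second.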



Research

Optimized Volumetric Scanning for X-Ray Array Sources

For more information contact:
Angela M. K. Foudray
(925) 422-1509
foudray1@llnl.gov

X-ray measurement systems are used for nondestructive evaluation (NDE) to determine noninvasively the internal structure of objects. NDE application areas include medicine, industrial manufacturing, military, homeland security, and airport luggage screening. X rays are most widely used because of their ability to penetrate a wide range of materials. In a traditional x-ray system, a single source and detector system rotates and/or translates with respect to the object under evaluation, gathering projections from only a single perspective. Mathematical algorithms are used to invert the detected forward-attenuated ray projections to form images of the object.

More recently, arrays of sources have been used to gather projections from multiple perspectives per detection location. The spatially diverse nature of x-ray array sources has the potential of reducing data collection time, reducing imaging artifacts, and increasing the resolution of the resultant images. Most of the existing CT algorithms were developed assuming a single source, some making approximations that take advantage of the simplified traditional system configuration. We are investigating methods to determine if these sources would change the way LLNL would acquire NDE computed tomography (CT) data.

Single-source x-ray CT data collection, processing, and imaging methods and algorithms are not applicable when the source location is expanded from one dimension (a rotating and/or translating point source) to two (a rotating and/or translating array). There are four tasks in the research to achieve the project goal: 1) develop forward array source analytic and computational models; 2) research and develop array source reconstruction algorithms; 3) perform experiments and simulations; and 4) evaluate systems' performances.

Project Goals
The goal of this project is to determine the applicability of x-ray array sources to problems of interest to LLNL and its customers. It is believed array source data collection will be faster, while yielding higher resolution reconstructions with fewer artifacts.

Relevance to LLNL Mission
X-ray tomography is a workhorse in LLNL's NDE capabilities. X-ray array sources may constitute leading-edge technology in NDE. This project is underway to determine if there is a role for array sources and what it would be in terms of maintaining LLNL's NDE leadership.

Figure 1. Photograph of the x-ray array source and array detector system designed and developed by industrial collaborators NovaRay and Triple Ring.

FY2010 Accomplishments and Results
The acquisition of x-ray array source data was obtained through a collaboration with Triple Ring Technologies, a research, development, and array source and detector system manufacturing




company. The partnership was formalized in FY2010. A Triple Ring Technologies system is presented in Fig. 1.

The project identified and tested three reconstruction algorithms on 2-D array source data for algorithm testing and comparison. The three algorithms were: a traditional analytic method (Filtered Back-Projection (FBP)); the Adjoint Method Conjugate Gradient (AMCG) algorithm developed at LLNL; and the Ordered Subset Expectation Maximization (OSEM) method, which is used widely in the medical imaging field for emission tomographic reconstruction. OSEM showed a significant impact on reducing artifacts. Particularly evident was a reduction in streak artifacts (>65% reduction in streak artifacts in regions, as shown in Fig. 2). Although OSEM was specifically included in this study because no modification was necessary to reconstruct array source data, it also improved the reconstruction of traditionally acquired data (reduction in streak artifacts, also shown in Fig. 2).

Two test objects were identified and scanned with the array source: a cylindrically symmetric "As-Built" phantom, and a Contrast and Resolution Interleaved Stacked Plate (CRISP) phantom. The CRISP phantom was designed and manufactured within this project, and metrology was performed to determine the ground-truth feature sizes and locations for system comparison. The data collected using the Triple Ring array source for both the "As-Built" and CRISP phantoms were reconstructed with OSEM (Fig. 3).

Multiplexing sources within the array consists of simultaneously activating sources in a predetermined fashion during acquisition. Although currently available systems are not able to acquire data in a multiplexed acquisition mode, to understand whether multiplexing would be useful for LLNL's NDE efforts, the Triple Ring system was modeled and multiplexing simulated. The noise was analyzed in various multiplexing methods as well as in the available single-source-at-a-time (single-series) illumination method. Single-series collection methods were found to contain less noise for the data collection rates currently achievable by array source systems.

Figure 2. Comparison of algorithms for traditional 2-D cross-sectional data (reconstruction code proof of concept for a traditional fan-beam system: analytic, AMCG, and RSEM reconstructions of a 2-ft carry-on bin containing a steel bar and a jar of jelly; lineouts through the steel bar and jelly show 63–95% error reduction using RSEM).

Figure 3. OSEM reconstructed images for array source-acquired cross-sectional data on both the (a) as-built and (b) CRISP phantoms.

Related Reference
De Man, B., S. Basu, D. Bequé, B. Claus, P. Edic, M. Iatrou, J. LeBlanc, B. Senzig, R. Thompson, M. Vermilye, C. Wilson, Z. Yin, and N. Pelc, "Multi-Source Inverse Geometry CT: A New System Concept for X-Ray Computed Tomography," Medical Imaging: Physics of Medical Imaging, Proc. of SPIE, 6510, 2007.

FY2011 Proposed Work
Instead of acquiring data by rotation and only in a single plane, project goals for FY2011 will investigate a portable, field-ready system by acquiring select views of objects from any vantage (i.e., region-of-interest (ROI) acquisition). Simulations will provide direction for object construction as well as for choosing which views to use for ROI imaging. To collect these data, new equipment and acquisition gantries will be necessary, which will be developed and used through a collaboration with Triple Ring and Stanford University.
We will also expand on our work accomplished in FY2010 by acquiring data on the previously investigated phantoms using a 1-D array source system developed by XinRay. All systems (cone-beam, Triple Ring (2-D) array, and XinRay array) will then be evaluated.



Technology

Low-Energy, Fast Pulsed-Power-Driven Dense Plasma Focus for WCI and NIF Relevant Experiments

For more information contact:
Vincent Tang
(925) 422-0126
tang23@llnl.gov

In this work we constructed and analyzed a new modular, tabletop, low-energy (up to tens of kJ) pulsed-power driver for Dense Plasma Focus (DPF) z-pinch loads that can provide a low-cost means of generating plasma conditions relevant to LLNL's WCI and NIF: the study of primary and secondary boost and laser-plasma interactions (LPI), respectively. Our preliminary models show that relevant plasma conditions with average electron densities greater than 10^21/cc and temperatures up to ~10 keV can be reached in a 55-kJ machine approximately 2 m in radius, powered by tens of modular arms arranged in a circle. Lower-energy machines were found to reach some of the relevant conditions. Additionally, a small experiment was constructed and operated with the end goal of validating our models.

Project Goals
The goal of this feasibility project was to examine whether modular, tabletop, low-energy (up to ~tens of kJ) pulsed-power systems, using modern off-the-shelf technology, can drive DPF z-pinches to plasma conditions that enable WCI and NIF relevant experiments. The enabling new technologies include new low-inductance, high-voltage capacitors and transmission lines, along with fast high-current switches. Figure 1 illustrates the device and concept, along with some of the enabling technologies.

Relevance to LLNL Mission
The ability to reach plasma conditions useful for LLNL missions in a tabletop format will enable significantly more experiments to be performed and more data to be collected. A secondary objective is the possibility of using these devices for neutron or accelerator sources for Global Security applications.

FY2010 Accomplishments and Results
In FY2010, we built and exercised models of modular tabletop Marx pulsed-power drivers to study this topic.

Figure 1. Left: DPF gun driven by a switched capacitor bank. Right: New modular Marx driver concept using new pulsed-power technologies (a sample 200-kV marxed, 0.8-kJ folded single-arm driver with spark-gap switch, capacitors, and fast transmission line to the DPF gun). By shortening the pulse-width through low inductance and resistance, higher peak currents can be delivered at lower total energy and driver size.

Figure 2. Key parameters of the modular driver concept in Fig. 1 as a function of total pulser radius (number of arms, peak current, peak-current time, and stored energy for device radii of 1–4 m). The different curves on each plot are for drivers with different Marx setups (100, 200, 300, and 400 kV).




We studied a multi-arm circular driver to maximize current as a function of size at 100 to 400 kV. We showed analytically and through full circuit simulations that a multi-arm driver can be represented as a lumped RLC circuit.

Figure 2 provides short-circuit performance of several drivers as a function of size and voltage using modern components. These models were coupled to a version of our three-phase DPF z-pinch model, which allowed us to perform scans of engineering parameters to determine achievable plasma conditions.

Figure 3 shows relevant plasma regimes that can be achieved with a 2-m radius, 69-arm, 200-kV driver with various gun geometries. Data for plasma regimes using loads for only 2 and 8 of the 69 arms are also shown. Overall, the modeling shows that it should be possible to achieve relevant plasma conditions with a low-energy machine at tabletop dimensions on the order of ~3–4 m due to newer pulsed-power technology. This is an improvement over previous leading machines with 8-m diameter footprints.

We constructed, from existing components, a small, up to 200-J DPF device to enable experiments to validate our model. Figure 4 shows the 1-arm, 100-kV pulsed-power driver equipped with a simple self-break oil switch and a starting DPF gun load. Figure 5 shows sample short-circuit data from the driver with a simple RLC fit, along with dI/dt data indicating pinch formation and particle emission from initial experiments.

Figure 3. Plasma temperature and densities reached in our simulations for various gun geometries with dimensions on the ~cm scale. The black dotted line is the electronic coupling parameter; below the line the plasma is strongly coupled. The red lines are the electron degeneracy parameter; below Θ = 0.1 degeneracy effects dominate, and above Θ = 10 degeneracy effects are negligible. The green dashed box indicates conditions of interest to NIF LPI studies.

Figure 4. (a) Small, 100-kV, 200-J pulsed-power driver (~9 in.) and (b) DPF gun (~1 in.). The experiment used existing equipment with the objective of validating our code at low energies.

Figure 5. (a) Short-circuit current profile of the driver in Fig. 4 at 64 kV and 82 J. A peak current of ~25 kA was reached with a rise time of ~100 ns. (b) Raw dI/dt data from the DPF gun in Fig. 4; a dip in the trace indicates pinch formation.

Related References
1. Tang, V., M. L. Adams, and B. Rusnak, "Dense Plasma Focus Z-pinches for High Gradient Particle Acceleration," IEEE Transactions on Plasma Science, 38, 4, pp. 719–727, 2010.
2. Decker, G., W. Kies, M. Malzig, et al., "High Performance 300 kV Driver Speed 2 for MA Pinch Discharges," Nuclear Instruments and Methods, A249, 1986.
3. Soto, L., "New Trends and Future Perspectives on Plasma Focus Research," Plasma Phys. Cont. Fusion, 47, 5A, p. A361, 2005.



Technology

Applying High-Resolution Time-Domain Radiation Detection Techniques to Low-Resolution Data

For more information contact:
Brian L. Guidry
(925) 422-1661
guidry1@llnl.gov

The Statistical Radiation Detection System (SRaDS) is a software system for detecting specific radioactive materials from their characteristic gamma-ray emissions. Each year more than 16 million cargo containers arrive in the United States, and the detection of illicit radioactive materials hidden within the cargo is perhaps the most technically and logistically challenging problem facing the Homeland Security community.

The SRaDS system processes gamma-ray photon arrivals individually as they arrive at a detector. For each photon arrival, a decision function is calculated based on its energy and interarrival time. The value of the decision function is then compared to thresholds set by the user for a given radionuclide. The thresholds are calculated from the detection and false alarm probabilities required by the user. If the function value goes below the lower threshold, the radionuclide is declared not present. If it goes above the upper threshold, the radionuclide is declared present. If neither occurs, more data is required to reach a decision at the desired level of performance (Fig. 1).

Project Goals
To date, SRaDS processing has been successfully applied only to data collected from high-resolution HPGe detectors. Our goal was to apply the underlying approach to data collected using lower-resolution sodium iodide (NaI) detectors, by far the most ubiquitous type of radiation detector on the market today.

Relevance to LLNL Mission
LLNL has a long history of working with others in government as well as

Figure 1. Operation of a sequential radionuclide detection processor. A decision function is calculated at each discriminated photon arrival. This is compared to two thresholds, one for positive detection and the other for negative detection; between the thresholds, no decision is made and another sample is taken. Confidence is set by the detection/false alarm probabilities; performance is described by the receiver operating characteristic.
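The two thresholds in Fig. 1 can be derived from the user-specified detection and false-alarm probabilities; for a Wald-type sequential test, the standard threshold approximations are sketched below (a generic construction for illustration, not the specific SRaDS formulation):

```python
import math

# Sketch: Wald-approximation thresholds for a two-sided sequential test,
# derived from a user-specified detection probability (pd) and false
# alarm probability (pfa). Generic construction, not the SRaDS internals.

def sequential_thresholds(pd, pfa):
    """Return (lower, upper) log-likelihood-ratio thresholds.
    Crossing `upper` declares the radionuclide present; crossing
    `lower` declares it absent; in between, take another photon."""
    upper = math.log(pd / pfa)                   # threshold 1: target
    lower = math.log((1.0 - pd) / (1.0 - pfa))   # threshold 0: no target
    return lower, upper

# Operating point comparable to the region in Tables 1 and 2
# (pd > 90%, pfa < 20%):
lower, upper = sequential_thresholds(pd=0.90, pfa=0.20)
print(f"declare absent below {lower:.2f}, present above {upper:.2f}")
```

Tightening the requested probabilities pushes the thresholds apart, so a more confident decision simply costs more photon arrivals, which is exactly the "no decision (take another sample)" branch in Fig. 1.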




our partners in industry to find solutions for the problems involved in monitoring for illicit proliferation activities as well as monitoring for treaty verification. To that end, the SRaDS system was created to help address these challenging issues.

FY2010 Accomplishments and Results
NaI data collected during previous work was extracted and arranged in a format suitable for use by SRaDS. Two types of event-mode sequence (EMS) data were used: data with a target source present and data consisting of only background. The high-resolution SRaDS algorithm was then re-targeted for the lower-resolution NaI data. Receiver Operating Characteristic (ROC) curves describing probabilities of detection and false alarm are shown in Fig. 2. An operating region for the detection software was selected. Using threshold values derived from the operating region, SRaDS was run again on the extracted data, and comparisons were made between the target performance of the system and the actual performance (Figs. 3 and 4, and Tables 1 and 2).

FY2011 Proposed Work
The true strength of the SRaDS processing technique lies in our ability to effectively use the information contained in down-scattered photon arrivals. The next step is to continue working on our low-resolution detector capabilities, with a particular interest in using photons in the Compton continuum.

Related References
1. Candy, J. V., E. Breitfeller, B. L. Guidry, D. Manatt, K. Sale, D. Chambers, M. A. Axelrod, and A. Meyer, “Radioactive Contraband Detection: A Bayesian Approach,” Proc. OCEANS09, IEEE OES Soc., 2009.
2. Candy, J. V., E. Breitfeller, B. L. Guidry, D. Manatt, K. Sale, D. H. Chambers, M. A. Axelrod, and A. M. Meyer, “Physics-Based Detection of Radioactive Contraband: A Sequential Bayesian Approach,” IEEE Trans. on Nuclear Science, 56, 6, 2, pp. 3694–3711, 2009.

Figure 2. Receiver Operating Characteristic curve for NaI data. [Plot: Pd vs. Pfa.]

Figure 3. SRaDS detection results using “Target Present” data. [Plot: target present/absent decisions vs. trial number (n), 0–100.]

Figure 4. SRaDS detection results using “Background Only” data. [Plot: target present/absent decisions vs. trial number (n), 0–100.]

Table 1. Comparison of desired operating characteristics with actual SRaDS performance using “Target Present” data.

                                          Operating Region    Actual Performance
Probability of detection of
target present (Pd)                       >90%                98%
Probability of miss (Pmiss)               <5%                 2%

Table 2. Comparison of desired operating characteristics with actual SRaDS performance using “Background Only” data.

                                          Operating Region    Actual Performance
Probability of false alarm (Pfa)          <20%                12%
Probability of correctly
identifying background only               >90%                88%
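The two-threshold structure in Fig. 1 is that of a classical sequential probability ratio test: a log-likelihood ratio is accumulated photon by photon until it crosses a threshold set by the target error rates. The sketch below shows the idea for Poisson photon counts; the rates, error targets, and Poisson model are illustrative assumptions, not SRaDS parameters.

```python
import math

def sprt_thresholds(pfa, pmiss):
    """Wald's approximate decision thresholds from target error rates."""
    upper = math.log((1.0 - pmiss) / pfa)   # cross upward: declare "target"
    lower = math.log(pmiss / (1.0 - pfa))   # cross downward: declare "background"
    return lower, upper

def sequential_detect(counts, rate_bg, rate_tgt, pfa=0.05, pmiss=0.05):
    """Accumulate a log-likelihood ratio one sample at a time.

    Each sample is a photon count in a fixed time bin, modeled as
    Poisson(rate_bg) under H0 and Poisson(rate_tgt) under H1.
    Returns ('target' | 'background' | 'undecided', samples used).
    """
    lower, upper = sprt_thresholds(pfa, pmiss)
    llr = 0.0
    for n, k in enumerate(counts, start=1):
        # Poisson log-likelihood ratio for one bin; the k! terms cancel.
        llr += k * math.log(rate_tgt / rate_bg) - (rate_tgt - rate_bg)
        if llr >= upper:
            return "target", n
        if llr <= lower:
            return "background", n
    return "undecided", len(counts)
```

The appeal of the sequential form is visible here: strong sources cross the upper threshold after only a few bins, so the decision time adapts to the evidence rather than being fixed in advance.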

Lawrence Livermore National Laboratory 77


Technology

Flexible Testbed for 95-GHz Impulse Imaging Radar

For more information contact:
Christine N. Paulson
(925) 423-7362
paulson4@llnl.gov

During the 1990s, Lawrence Livermore National Laboratory (LLNL) was funded by the Naval Air Warfare Center to develop a 95-GHz high-frequency, ultra-wideband (UWB) radar as a diagnostic tool for extending the lifetime of the rotor bearings and blades of the V-22 Osprey tiltrotor aircraft. Numerous emerging applications could benefit from this technology.

Project Goals
The primary objective is to establish a flexible and reconfigurable high-frequency radio frequency testbed that will enable feasibility studies for new applications relating to DOE, DoD, and LLNL programs. To implement this, we restored the 95-GHz UWB radar prototype to an operational state and upgraded its obsolete timing and processing systems.

Figure 1. V-22 diagnostic radar prototype hardware system.

Figure 2. Functional block diagram for V-22 diagnostic radar prototype. [Diagram: timing board and control processor → 95-GHz IMPATT diode transmitter → circulator → antenna (Tx RF signal); antenna → circulator → envelope detector → sampling receiver → video board → diagnostic computer (Rx RF signal).]

Relevance to LLNL Mission
There is renewed interest in the use of high-frequency pulsed radar systems for a variety of DOE, DoD, and LLNL applications. As an initial target goal for this effort we chose to focus the retooling of the 95-GHz UWB radar technology to support feasibility studies in standoff explosive detection. Other potential uses of pulsed 95-GHz radar systems, which can offer high (i.e., submillimeter) resolution, include




standoff imaging, nondestructive evaluation, vibration detection and analysis, materials characterization, surface mapping, vital signs monitoring, and small projectiles tracking.

The construction of fieldable radar systems operating near 100 GHz is costly and requires technicians skilled in the art. Many outside entities lack the resources to perform feasibility studies involving high-frequency radar. However, the restoration of LLNL’s 95-GHz radars will provide a flexible and reconfigurable high-frequency radio frequency testbed and enable LLNL to work with outside groups such as government agencies and universities on new applications in a cost-effective manner.

FY2010 Accomplishments and Results
The original V-22 diagnostic radar prototype and its functional block diagram are shown in Figs. 1 and 2. The most significant modification required to make a versatile high-frequency radar testbed from the existing prototype was to replace the existing timing control, data acquisition, and data processing hardware with modern, software-reconfigurable circuitry. In its original form, the timing signals, data acquisition, and data processing functions were self-contained within a Microchip PIC 17C756 microcontroller and a digital timing board for the very specific V-22 application requirements. This made it difficult to change radar settings or access the received radar data for other applications and studies. In addition, while it was the state of the art at the time, this 33-MHz, 8-bit microcontroller with limited data and programming memory is obsolete today.

By upgrading the timing system, we gained the capability to electronically control the sweep position and range parameters of the radar. We replaced some of the functionality of the digital board with an add-on radar controller board, shown in Fig. 3. This controller taps into the timing signals on the original digital control board inside the radar, enabling the configuration of the radar to be rapidly changed using a software application. In addition, the board contains a USB interface, allowing connection to modern computer systems and improving the data acquisition functionality of the radar. Now, instead of processing the data using hard-coded algorithms within the PIC processor, data can be exported to software applications running on a computer for more advanced and more easily reconfigurable processing algorithms. These changes enable the customization of the radar operating parameters and provide easy access to the raw data captured by the radar.

Recently, LLNL teamed with a group at the University of South Carolina (USC) to conduct feasibility studies combining laser spectroscopy with microwave technologies for standoff identification of high-explosive vapors and residues (e.g., radar resonance-enhanced multiphoton laser ionization, or RADAR-REMPI). Microwave devices ranging from 10.5 to 26 GHz were used to detect in situ the formation of laser-induced plasmas by microwave reflection and scattering over ranges of tens of centimeters, as shown in Fig. 4. To increase the sensitivity of the USC studies and achieve longer standoff detection ranges, LLNL is investigating the use of higher RF frequencies for their REMPI-based standoff detection work. Certain LLNL elements have significantly improved the signal-to-noise ratio of the RADAR-REMPI setup. Currently, we are conducting in-house plasma measurements, shown in Fig. 5, with plans to combine the newly restored 95-GHz radar testbed with the USC group’s REMPI technique for longer-range standoff detection of REMPI plasmas.

Figure 3. Modified high-frequency radar with external controller.

Figure 4. USC microwave plasma detection apparatus being tuned.

Figure 5. Standoff signal-to-noise characterization data collection on a plasma using LLNL’s 95-GHz flexible UWB radar testbed.
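Electronic control of the radar's sweep position and range parameters ultimately comes down to programming the delay between the transmit trigger and the receive sample. The following is a generic pulsed-radar timing calculation for illustration; it is not the controller firmware and the values are not V-22 system parameters.

```python
C = 299_792_458.0  # speed of light, m/s

def gate_delay_ns(range_m):
    """Two-way propagation delay to a target at range_m meters, in ns.

    Sampling the receiver this long after the transmit trigger selects
    returns from the chosen range (the "sweep position").
    """
    return 2.0 * range_m / C * 1e9

def range_resolution_m(pulse_width_ns):
    """Approximate range resolution for a pulse of the given width.

    Shorter (wider-bandwidth) pulses resolve more closely spaced returns,
    which is the motivation for ultra-wideband operation.
    """
    return C * (pulse_width_ns * 1e-9) / 2.0
```

For example, a target at 15 m corresponds to a gate delay of about 100 ns, and a 1-ns pulse to roughly 15 cm of range resolution, which is why the sampling timing must be controllable at sub-nanosecond precision.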



Engineering Systems for
Knowledge & Inference
Research

Toward Understanding Higher-Adaptive Systems

For more information contact:
Brenda M. Ng
(925) 422-4553
ng30@llnl.gov

This project is a two-year research effort that seeks to understand higher-adaptive systems, which are systems that can modify their structures and behaviors in response to attempts at detection or regulation. These systems are ubiquitous: in the real world, there are many entities, such as money launderers and cyber intruders, whose fundamental behavior changes upon probing or intervention by an observer. Such a system outputs observations (e.g., an unintentional trail of evidence connected to its activities) and adversarial actions (e.g., direct assaults/countermoves against its opponent). In particular, these actions can span a spectrum of aggression, from limiting information available to its opponent to misleading the opponent into making the wrong moves or decisions.

Project Goals
The objective of this work is to explore and extend as necessary current decision-theoretical frameworks and algorithms for solving real-world adversarial problems, especially those involving adversaries that are higher-adaptive. The results of this work can provide foundational knowledge for building a computationally efficient framework that can characterize and respond to dynamically changing, deceptive adversarial systems. This knowledge will be invaluable for future studies of even more adaptive and aggressive adversarial systems, such as those that limit resources as well as information from their opponents. This type of study will have scientific merit in both the artificial intelligence and game theory communities; it also provides the basis for addressing significant national security threats.

Relevance to LLNL Mission
This research effort directly supports the Cyber, Space, and Intelligence thrust area in LLNL’s Five-year Strategic Roadmap. This work can provide important insights about real-world adversarial modeling and higher-adaptive systems, with applications in law enforcement (e.g., money laundering and drug trafficking), homeland security (e.g., terrorist networks), cyber security, and nonproliferation.

FY2010 Accomplishments and Results
In FY2009, our work was driven by a money laundering application, an extremely pervasive crime in which the funds from illegal activity are disguised to appear legitimate. We formulated a simplified model of money laundering using an interactive partially observable Markov decision process (IPOMDP).
In an IPOMDP, each agent maintains
beliefs about the physical states of the
environment, and the models of other
agents (e.g., how each of them might
perceive or act in the same environ-
ment). This makes IPOMDP especially
relevant for adversarial modeling in that
it incorporates the notion of nested
intent into the belief of each agent, al-
lowing for the modeling of agents that
“game” against each other (see Fig. 1).
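The nested-intent idea can be made concrete with a toy data structure: agent i's belief covers both the physical state and a model of agent j's belief one level down. This is a heavily simplified sketch; a full I-POMDP update would also recurse into the opponent model, weighted by the opponent's predicted actions, and the state names and probabilities here are invented.

```python
from dataclasses import dataclass

@dataclass
class Belief:
    """One level of nesting: probabilities over physical states, plus a
    (possibly absent) belief attributed to the other agent."""
    over_states: dict                  # state -> probability
    opponent: "Belief | None" = None   # nested belief one level down

def bayes_update(belief, likelihood):
    """Condition a belief on an observation.

    likelihood: state -> P(observation | state). Only the top level is
    updated here, to keep the sketch short; the nested opponent belief
    is carried through unchanged.
    """
    posterior = {s: p * likelihood[s] for s, p in belief.over_states.items()}
    z = sum(posterior.values())
    return Belief({s: p / z for s, p in posterior.items()}, belief.opponent)

# Agent i suspects the "dirty" state and attributes to agent j a
# noncommittal belief (j does not yet suspect anything).
b_j = Belief({"clean": 0.5, "dirty": 0.5})
b_i = Belief({"clean": 0.3, "dirty": 0.7}, opponent=b_j)
b_i_post = bayes_update(b_i, {"clean": 0.2, "dirty": 0.8})
```

The nesting is what lets an agent reason about how its own moves will shift the opponent's belief, which is the essence of modeling agents that "game" against each other.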
From this work, we gained insights
about the capability gaps required for
IPOMDPs to be applied in real-world
settings. One deficiency with the stan-
dard IPOMDP is its assumption that
agents are privy to the knowledge of
model parameters, when these param-
eters are often unknown in real-world
scenarios. Our FY2010 contribution
is the proposal of a new framework
called the Bayes-Adaptive IPOMDP
(BA-IPOMDP), which augments the IPOMDP with model-learning capabilities. Our approach is to assume that the

Figure 1. Illustration of human adversaries “gaming” against each other. To adequately respond in a realistic adversarial situation, it is important to model the adversary as an intentional agent, who anticipates its opponent’s counteractions in its strategies.




Figure 2. Representation of beliefs, approximated by samples over the physical states and “counts” that parametrize the model. Beliefs are updated via interactive particle filtering, which entails a recursive procedure. [Diagram: belief particles over [s, θ] are propagated, weighted, and resampled; each agent's policy solve and belief update invokes a nested belief update for the other agent.]

Figure 3. Nine scenarios are considered, differing according to which agent is learning and what it assumes of its opponent’s capabilities. [Diagram: decision tree over which agents learn (none, agent i only, or agents i and j) and whether each models its opponent as learning and as using the correct static model for itself.]

state, action, and observation spaces are finite and known, but the model parameters (namely, the state transition probabilities and the observation probabilities) are not fully known. We include these parameters as part of the state space and maintain beliefs, in the form of probability distributions, to represent our uncertainty about these parameters and the physical state of the environment.

At each time step, an agent searches for and executes the optimal action given its current belief. When it receives an observation, it uses this observation to iteratively refine its estimate of the state transition and observation models, and applies these learned models to update its belief about the physical state of the environment. During this process, the agent must anticipate the action, observation, and belief update of its opponent. Thus, belief updates are recursive.

To solve an IPOMDP is to determine the optimal policy, which, for every possible observation, produces an optimal action that maximizes the agent’s expected reward. Previous work has shown that value iteration can be used to solve IPOMDPs.

In FY2009, we introduced approximations such as the interactive particle filter (I-PF) to address the curse of dimensionality (belief complexity that increases with the number of states), and reachability tree sampling (RTS) to address the curse of history (policy complexity that increases with the number of time steps, or horizons, in the decision process).

In FY2010, we optimized I-PF to compute an approximation to the BA-belief, and we used RTS to construct a pruned version of the finite-horizon reachability tree and applied dynamic programming to search this tree for the optimal policy (see Fig. 2).

To evaluate the performance of these methods, we tested our algorithms against a popular academic problem, known as the multiagent Tiger problem, over multiple scenarios (see Fig. 3). Runtime was recorded for varying resolutions of the approximation algorithms; average rewards and model parameter convergence were compared for different learning scenarios.

Baselining against the standard IPOMDP, we found that 1) the agent quickly converges on a model that is slightly off the true model; 2) the average rewards and model convergence depend on how accurately the agent models its opponent’s learning process; and 3) the parameter learning introduced by our BA-IPOMDP significantly improved the agent’s initial estimate of the model parameters while maintaining reasonable runtime. The ability to incorporate learning into these models brings us one step closer to their applicability in realistic adversarial modeling.

Related References
1. Ng, B., C. Meyers, K. Boakye, and J. Nitao, “Towards Applying Interactive POMDPs to Real-World Adversary Modeling,” Proceedings of the 22nd Innovative Applications of Artificial Intelligence Conference, pp. 1814–1820, 2010.
2. Doshi, P., and P. Gmytrasiewicz, “Monte Carlo Sampling Methods for Approximating Interactive POMDPs,” Journal of Artificial Intelligence Research, 34, pp. 297–337, 2009.
3. Ross, S., B. Chaib-draa, and J. Pineau, “Bayes-Adaptive POMDPs,” Proceedings of the 21st Neural Information Processing Systems, pp. 1225–1232, 2008.
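The "counts" that parametrize the model in Fig. 2 are Dirichlet parameters: the unknown transition probabilities are learned by counting observed transitions and folding the counts into the belief. Below is a minimal sketch of that bookkeeping, using standard Bayes-adaptive machinery rather than the project's actual code; state and action names are invented.

```python
from collections import defaultdict

class TransitionLearner:
    """Dirichlet counts over next-states, one vector per (state, action).

    The `prior` pseudo-count encodes initial uncertainty; observed
    transitions are added on top of it.
    """

    def __init__(self, states, prior=1.0):
        self.states = list(states)
        self.counts = defaultdict(lambda: {s: prior for s in self.states})

    def observe(self, state, action, next_state):
        """Record one observed transition (state, action) -> next_state."""
        self.counts[(state, action)][next_state] += 1.0

    def estimate(self, state, action):
        """Posterior-mean estimate of P(next_state | state, action)."""
        c = self.counts[(state, action)]
        total = sum(c.values())
        return {s: v / total for s, v in c.items()}
```

As transitions accumulate, the posterior mean moves from the uniform prior toward the empirical frequencies, which is the "agent quickly converges on a model which is slightly off the true model" behavior reported above.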



Research

Enhanced Event Extraction from Text via Error-Driven Aggregation Methodologies

For more information contact:
Tracy D. Lemmond
(925) 422-0219
lemmond1@llnl.gov

Knowledge discovery systems are designed to construct massive data repositories using text and information extraction methodologies, and then infer knowledge from the ingested data, allowing analysts to “connect the dots.” The extraction of relational information (e.g., triples, events) and related entities (e.g., people, organizations) often forms the basis for data ingestion. Unfortunately, these systems are particularly vulnerable to errors introduced during the ingestion process, frequently resulting in misleading or unreliable analysis. Though state-of-the-art extraction tools may achieve insufficient accuracy rates for practical use, not all extractors are prone to the same types of error. This suggests that substantial improvements might be achieved through appropriate combinations of existing extraction tools, provided their behavior can be accurately characterized and quantified. Our research is addressing this problem through the aggregation of extraction tools based on a general inferential framework that exploits their strengths and mitigates their weaknesses.

Project Goals
The objective of this effort is to develop a significantly improved entity/event extraction system that enables 1) greater insight into the downstream effects of extraction errors; 2) more accurate automatic text extraction; 3) better estimates of uncertainty in extracted data; 4) effective use of investments by the Natural Language Processing community; and 5) rapid incorporation of future advancements in extraction technologies.
An extensive analysis of the error processes of individual extractors has yielded insights into their synergistic and conflicting behaviors that have been leveraged to configure a collection of base extractors, through a general inferential framework, into an aggregate meta-extractor with substantially improved extraction performance (Fig. 1).

Figure 1. Meta-extraction system. [Diagram: language-specific extractor plugins feed a hypothesis generator; learner and hypothesis-ranker plugins apply likelihood, pattern, sequence, and BMA components for calibration and aggregation, producing the most likely truth (probabilistically ranked) for new data of unknown truth, trained on existing data of known truth.]

Relevance to LLNL Mission
Nonproliferation, counterterrorism, and other national security missions rely on the acquisition of knowledge that is buried in unstructured text documents too numerous to be manually processed. Systems are under development by LLNL and its customers that must automatically extract critical information from these sources. To enable effective knowledge discovery,




however, extraction error rates must be driven down. Probabilistic aggregation of extractors is a promising and innovative approach to accomplishing this goal. This effort directly supports the Engineering Systems for Knowledge and Inference (ESKI) Text to Inference area and the Cyber, Space, and Intelligence strategic mission thrust in the LLNL Five-year Strategic Roadmap. Successful completion of this research will provide highly valued and unprecedented inference and decision-making capabilities to internal programs, such as IOAP and CAPS, and to external customers such as DHS, DoD, and the intelligence community.

Figure 2. Box plots showing bootstrapped samples of the weighted mean of F measure. Extractors and X-Man were trained on MUC6 (Message Understanding Conferences). Results are shown for testing on (a) MUC6 and (b) CoNLL-2003. [Plots: F measure for BAILE, GATE, LP, SNER, and X-Man on each test set.]
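The box plots in Fig. 2 summarize bootstrap replicates of a weighted mean F measure. The resampling behind such plots is routine; the sketch below illustrates it with invented per-document scores and weights, not the actual evaluation data.

```python
import random

def f_measure(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2.0 * precision * recall / (precision + recall)

def bootstrap_mean(values, weights, n_boot=1000, seed=0):
    """Bootstrap replicates of the weighted mean of per-document scores.

    Each replicate resamples (score, weight) pairs with replacement and
    recomputes the weighted mean; the spread of the replicates is what a
    box plot like Fig. 2 visualizes.
    """
    rng = random.Random(seed)
    pairs = list(zip(values, weights))
    reps = []
    for _ in range(n_boot):
        sample = [rng.choice(pairs) for _ in pairs]
        total_w = sum(w for _, w in sample)
        reps.append(sum(v * w for v, w in sample) / total_w)
    return reps
```

Non-overlapping boxes across systems are then evidence that one extractor's advantage is not a sampling artifact.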

FY2010 Accomplishments and Results
Insights gained in event extraction error analyses performed in FY2008 motivated a graduated approach to triple/event aggregation that is founded fundamentally on the aggregation of extracted entities.

To this end, we have developed a novel text extraction management methodology (“X-Man”) focused on entity extraction that can be generalized to multiscale triples (i.e., simple events) and more complex event aggregation solutions. X-Man is a flexible, generalized framework for the aggregation of named entity extraction technologies that uses the joint characteristics of its constituent extractors’ output to aggregate extracted text. Hence, existing extraction tools (e.g., commercial, academic) can be readily incorporated to enhance the quality of extracted data. Moreover, the methodology has been designed to enable the incorporation of new extractor evaluation and aggregation methodologies, as well as language modules that can leverage language-specific resources such as gazetteers, stop word lists, and parsers. This unprecedented level of flexibility makes X-Man highly customizable and adaptable to a wide range of applications and problem domains.

X-Man incorporates machine learning and probabilistic methods, ranging from classical probability to Bayesian Model Averaging, into several novel algorithms, each consisting of a calibration component coupled with an aggregation component (Fig. 1).

For each of these algorithms, calibration hinges on the estimation of probability distributions over a joint hierarchical error space arising from the suite of underlying extractors. The generated error distributions characterize each extractor’s performance relative to disjoint regions of contiguous text. Performance takes into account joint extractor characteristics, as well as the statistical behaviors (both individual and joint) of the errors occupying the defined error space. These algorithms are particularly distinguished by their reliance on a range of underlying models and/or assumptions. Accordingly, X-Man’s final stage of calibration involves using state-of-the-art machine learning methods to determine an optimal deployment strategy for incoming data.

When newly extracted output data are encountered, the X-Man system constructs a space of hypotheses over ground truth for each piece of contiguous text and then deploys the text to its optimal aggregation algorithm, which assigns a probability to each ground truth hypothesis. The resulting relative ranking of hypotheses for each piece of text provides not only an ordered list of the most probable ground truths, but a mechanism for determining those hypotheses that are significantly more likely in a statistical sense. Thus, the ranking informs downstream decision-making and analysis by enabling confidence assessments of extracted data. Figure 2 shows sample test results.

X-Man has been shown to 1) produce statistically significant improvements in extraction relative to standard performance metrics (up to 120% improvement under certain operating conditions); 2) be able to reconstruct truth when all of its constituent extractors fail; and 3) provide a framework for quantifying uncertainty in extracted output. Moreover, mechanisms have been developed to help X-Man adapt to sparse data conditions.

Related Reference
Lemmond, T., N. Perry, J. Guensche, J. Nitao, R. Glaser, P. Kidwell, and W. Hanley, “Enhanced Named Entity Extraction via Error-Driven Aggregation,” International Conference on Data Mining, Las Vegas, Nevada, July 2010.
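A much-simplified stand-in for the calibrate-then-aggregate idea is to weight each extractor's vote for a hypothesis by its log-odds of being correct, estimated on held-out data. The real system's hierarchical error model and Bayesian Model Averaging are far richer than this; the extractor names and precision values below are invented.

```python
import math

def rank_hypotheses(votes, precision):
    """Score ground-truth hypotheses from extractor votes.

    votes: extractor name -> hypothesized entity string
    precision: extractor name -> estimated precision in (0, 1)

    Each vote contributes log-odds weight log(p / (1 - p)), so one highly
    reliable extractor can outweigh several mediocre ones. Returns
    (hypothesis, score) pairs sorted from most to least likely.
    """
    scores = {}
    for extractor, hypothesis in votes.items():
        p = precision[extractor]
        scores[hypothesis] = scores.get(hypothesis, 0.0) + math.log(p / (1.0 - p))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

With this weighting, a single extractor of precision 0.9 (weight ln 9 ≈ 2.2) outvotes two extractors of precision 0.7 (combined weight 2 ln(7/3) ≈ 1.7), which captures in miniature why calibration must precede aggregation.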



Research

Robust Ensemble Classifier Methods for Detection Problems with Unequal and Evolving Error Costs

For more information contact:
Barry Y. Chen
(925) 423-9429
chen52@llnl.gov

Successful analysis in real-world detection applications often hinges upon the automatic collection of massive amounts of data over time. However, the pace of automatic data collection far exceeds our manual processing and analysis capabilities, making automated pattern detection in streaming data critical. Machine learning classifiers capable of detecting patterns in datasets have been developed to address this need, but none can simultaneously address the many challenging characteristics of real-world detection problems.

The costs associated with false alarms and missed detections are frequently unequal, extreme (demanding near-zero false alarm or miss rates), or changing over time. Moreover, the underlying data distribution modeled by the classifiers may also evolve over time, resulting in progressively degraded classification performance. We are addressing these deficiencies through the development of new dynamic ensemble classifier algorithms that leverage diverse cost-sensitive base classifiers.

Project Goals
The ultimate goal of this two-year effort focuses on the understanding and development of new ensemble learning algorithms that can effectively address the considerable challenges presented by detection problems of national significance. The developed methodologies will yield significantly improved performance at near-zero false alarm (or missed detection) rates and be able to adapt to changing costs and data distributions in a dynamic environment. This research will lead to greater insight into the factors that interact to govern classification performance, including ensemble size, feature dimensionality, and data sampling.

Relevance to LLNL Mission
This research effort directly supports the Cyber, Space, and Intelligence thrust area in LLNL’s Five-year Strategic Roadmap. As threats continue to evolve, and relevant data grow in size and complexity, the capability to automatically detect threat signatures in data becomes increasingly more critical. Real-time situational awareness is paramount and requires dynamic learning systems that maintain high detection performance as misclassification costs and data distributions change over time. Our research explicitly addresses these needs in cyber, counterterrorism, nonproliferation, and national security missions for a broad range of customers, including the IC, DHS, DOE, DoD, and NNSA.

FY2010 Accomplishments and Results
In FY2010, we extended the traditional ensemble classifier error bound to a class-specific error bound that allowed us to develop the fundamental understanding of the conditions in which ensemble classifiers can be expected to achieve higher detection rates at lower false alarm rates (and vice versa). This led to a counterintuitive finding that there are times when increasing classifier agreement in an ensemble results in higher detection rates at lower false alarm rates (Fig. 1). This theory bore




fruit on a very difficult hidden signal detection application, where our new classifiers were able to achieve a previously unattainable level of performance: non-zero detection rates at 0% false alarms.

In applications where computational resources are scarce, it is extremely important to have ensemble classifiers that are compact and fast. We developed a Bayesian Random Forest to provide high-throughput classification using orders of magnitude fewer bytes. Unlike traditional Random Forests, whose trees are grown until homogeneity in data class is reached, Bayesian Random Forests stop growing when the data no longer justify the increased complexity of adding another layer of split nodes. The Bayesian Random Forest achieved a 32.5× speedup using two orders of magnitude fewer bytes while maintaining high classification accuracy on an important cyberthreat detection application (see Table).

To address the challenge of changing data distributions, we have developed several new approaches for dynamic density estimation. The Forest Based Density Estimator (FBDE) is an approach inspired by Random Forests whose trees randomly partition feature space into appropriately sized hyper-rectangles. An ensemble of these trees allows for a robust estimate of the probability density of high-dimensional data. As data distributions change, the trees in the forest evolve to track changes in the data. Using dynamic density estimators, we developed a classification system that is able to maintain high detection rates at low false alarm rates even as the underlying detection pattern changes.

In Fig. 2, we compare a standard static classifier to our dynamic classification system on a cyberthreat detection application where the threat class undergoes six different changes during the experiment. Our dynamic classifier successfully tracks the changing threat class and significantly outperforms the static classifier over the low false alarm rate regions of the receiver operating characteristic (ROC) curve.

Figure 1. Class-specific error bounds governing ensemble classification performance. Increasing classifier agreement in the ensemble can lead to higher detection rates at extremely low false alarm rates. [Diagram: Class 0 and Class 1 score distributions about a decision threshold, with false alarm and detection regions; increasing Class 1 classifier agreement using the error bound results lifts the cost-sensitive ensemble's ROC curve above the conventional one.]

Figure 2. Our dynamic classifier outperforms the traditional static classifier in cyberthreat detection when the threat class undergoes six distinct changes. [Plot: cyberthreat detection ROC curves, detection rate vs. false alarm rate (10^-3 to 10^0), for static and dynamic two-class classifiers.]

Table. Bayesian Random Forest compared to a standard Random Forest for cyberthreat detection.

                              Standard RF    Bayesian RF
Memory                        16 MB          0.021 MB
Classification speed (s)      195            6
Avg. class accuracy           99.1%          98.7%

Related References
1. Chen, B. Y., T. D. Lemmond, and W. G. Hanley, “Building Ultra-Low False Alarm Rate Support Vector Machine Ensembles Using Random Subspaces,” Proc. IEEE Symposium on Computational Intelligence and Data Mining, 2009.
2. Lemmond, T. D., W. G. Hanley, L. J. Hiller, D. A. Knapp, M. J. Mugge, and B. Y. Chen, “Discriminant Random Forest,” U.S. Provisional Patent Filed, May 2008.
3. Lemmond, T. D., A. O. Hatch, B. Y. Chen, D. A. Knapp, L. J. Hiller, M. J. Mugge, and W. G. Hanley, “Discriminant Random Forests,” Proceedings of 2008 International Conference on Data Mining, 2008.
4. Lemmond, T. D., B. Y. Chen, A. O. Hatch, and W. G. Hanley, “An Extended Study of the Discriminant Random Forest,” Data Mining Special, Annals of Information Systems, 8, 2010.
5. Prenger, R. J., T. D. Lemmond, K. R. Varshney, B. Y. Chen, and W. G. Hanley, “Class-Specific Error Bounds for Ensemble Classifiers,” Proc. ACM SIGKDD Conference on Knowledge Discovery and Data Mining, July 2010.
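The agreement effect of Fig. 1 can be reproduced in a toy ensemble: requiring more of the n member classifiers to agree before declaring a detection sweeps out an operating curve, trading detections for false alarms. The sketch below assumes independent members with fixed per-member firing rates, an idealization that real (correlated) ensembles violate; all rates are invented.

```python
from math import comb

def ensemble_rates(n, k, p_detect, p_false):
    """P(at least k of n independent members fire) under each class.

    p_detect / p_false are the per-member firing probabilities when the
    target is present / absent. Returns (ensemble Pd, ensemble Pfa) for
    an "at least k votes" detection rule.
    """
    def at_least_k(p):
        # Binomial tail: sum of P(exactly m fire) for m = k .. n.
        return sum(comb(n, m) * p**m * (1 - p)**(n - m) for m in range(k, n + 1))
    return at_least_k(p_detect), at_least_k(p_false)
```

For n = 15 members firing with probability 0.8 under the target class and 0.1 under background, raising the agreement threshold k from 8 to 12 drives the ensemble false alarm rate down by several orders of magnitude while the detection rate falls far more slowly, which is the regime where demanding more agreement helps at extremely low false alarm rates.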



Technology

Entity Extractor Aggregation System


For more information contact:

Tracy D. Lemmond
(925) 422-0219
lemmond1@llnl.gov

The extraction of relational information (e.g., triples, events) and entities (e.g., people, organizations) from unstructured text often forms the basis for data ingestion by Knowledge Discovery (KD) systems. These systems enable analysis and inference on massive sets of data and are particularly vulnerable to errors introduced during the ingestion process.

Though state-of-the-art extraction tools often achieve insufficient accuracy rates for practical use, not all extractors are prone to the same types of error. This suggests that improvements may be achieved via appropriate combinations of existing extraction tools, provided their behavior can be accurately characterized and quantified. Several methodologies that combine pattern-based and probabilistic approaches with state-of-the-art machine learning technologies address the entity extraction problem through the aggregation of extraction tools (i.e., base extractors).

In FY2009, a prototype of many of these algorithms was constructed within a robust, extensible framework that enables the generation of detailed results and performance assessment data. The focus of the FY2010 effort has been to extend that framework with more advanced aggregation algorithms and to enhance its efficiency, usability, and flexibility for future use in an operational environment.

Project Goals
Key objectives for the system included 1) extending the first-generation "plug-and-play" aggregation framework to incorporate ongoing algorithmic advancements; 2) enhancing the flexibility of the framework to allow users greater freedom in experimentation; and 3) streamlining the graphical user interface for greater efficiency and usability.
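Because the base extractors make different types of error, even a simple vote over their outputs can beat any single tool. The sketch below is purely illustrative (the ((start, end), type) entity tuples and the vote threshold are hypothetical placeholders, not X-Man's actual aggregation logic):

```python
from collections import Counter

def aggregate_entities(extractor_outputs, min_votes=2):
    """Keep an entity hypothesis -- here a ((start, end), type) tuple --
    if at least `min_votes` base extractors proposed it."""
    votes = Counter()
    for output in extractor_outputs:
        for entity in set(output):   # count each extractor at most once
            votes[entity] += 1
    return {entity for entity, n in votes.items() if n >= min_votes}

# Three hypothetical base extractors disagreeing on the same sentence:
base_outputs = [
    {((0, 3), "PERSON"), ((10, 14), "ORG")},
    {((0, 3), "PERSON")},
    {((0, 3), "LOCATION"), ((10, 14), "ORG")},   # one extractor mislabels
]
merged = aggregate_entities(base_outputs)
# -> {((0, 3), 'PERSON'), ((10, 14), 'ORG')}: the mislabel is outvoted
```

The more sophisticated aggregators named later in this article (pattern-based, Bayesian Model Averaging) replace this flat vote with weighted, error-driven combinations.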

Relevance to LLNL Mission
Nonproliferation, counterterrorism, and other national security missions rely on the acquisition of knowledge that is buried within unstructured text documents too numerous to be manually processed. Systems are being constructed by LLNL and its customers that must automatically extract entities from these sources, and methodologies have been produced that significantly advance entity extraction capabilities. Bringing these capabilities to an operational status is critical to the timely deployment of knowledge discovery technologies that will impact LLNL mission goals.

Figure 1. Manual extracted data deployment in X-Man. [The screenshot shows the Algorithm Dispatcher: push criteria, applied in order of precedence (e.g., pattern not found in the Pattern Dictionary, meta-entity length thresholds, the relative difference between the top two hypotheses, and vote-count limits), route extracted data to algorithms Alg 1 through Alg 3.]
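The precedence-ordered push criteria in Figure 1 amount to a list of predicates, each routing data to a named algorithm, with the first match winning. A minimal sketch (the predicate fields and thresholds are hypothetical stand-ins, not the actual X-Man conditions):

```python
def make_dispatcher(rules, default_alg):
    """Evaluate (predicate, algorithm) rules in order of precedence;
    the first predicate that fires routes the item to its algorithm."""
    def dispatch(item):
        for predicate, alg in rules:
            if predicate(item):
                return alg
        return default_alg
    return dispatch

# Hypothetical push conditions, ordered by precedence as in the GUI:
rules = [
    (lambda it: not it["in_pattern_dict"], "Alg 1"),      # pattern miss
    (lambda it: it["meta_entity_len"] > 2, "Alg 3"),      # long meta-entity
    (lambda it: it["top_two_margin"] <= 1, "Alg 2"),      # near-tie between hypotheses
]
dispatch = make_dispatcher(rules, default_alg="Alg 0")

dispatch({"in_pattern_dict": True, "meta_entity_len": 1, "top_two_margin": 5})   # -> "Alg 0"
dispatch({"in_pattern_dict": False, "meta_entity_len": 9, "top_two_margin": 0})  # -> "Alg 1"
```

Keeping the rules as an ordered list makes the precedence explicit and lets a user reorder or disable conditions without touching the dispatch code, matching the spirit of the configurable GUI.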



Engineering Systems for Knowledge & Inference


Figure 2. Sample news article with extracted and ground truth entities highlighted.

This effort directly supports the Engineering Systems for Knowledge and Inference (ESKI) Text to Inference R&D area and the Cyber, Space, and Intelligence strategic mission thrust in the LLNL Five-Year Strategic Roadmap. The completed system will provide highly valued and unprecedented entity extraction capabilities to internal programs, such as IOAP and CAPS, and to external customers such as DHS, DoD, and the intelligence community.

FY2010 Accomplishments and Results
The entity extractor aggregation tool was originally constructed to serve as both a prototype of first-generation aggregation methodologies (i.e., meta-extraction algorithms) and as an environment for the incorporation of these advancements. In FY2010, the final prototype, called the Extraction Manager (X-Man), contains not only more advanced aggregation algorithms that improved the effectiveness of entity extractor aggregation, but also additional features that enhanced the flexibility provided to its users. Specifically, the new meta-extraction system consists of various modules, linked through a central component called the dispatcher, that collectively allow 1) joint characterization of extracted data relative to various features of interest (e.g., entity length, data sparseness, entity type); 2) performance evaluation (in terms of F-measure, exact match, miss, and/or false alarm) of aggregation algorithms and extractors relative to these data characteristics; and 3) either manual or automatically optimized deployment of data to the corresponding aggregation algorithms.

Figure 1 shows an example of manual dispatching of extracted entity data to four different algorithms that may be defined by the user. These algorithms may include any of several aggregation algorithms (e.g., pattern-based, Bayesian Model Averaging) or the base extractors themselves, if desired. Automatic deployment, when selected, is optimized using state-of-the-art machine learning techniques (e.g., ensemble predictors) to determine an optimal mapping of extracted data features to appropriate aggregation and/or base algorithms.

Performance estimation takes place through cross-validation, in which the data are partitioned into multiple folds with associated performance estimates that are typically averaged to obtain an overall estimate. The X-Man tool runs these folds in parallel for increased efficiency. When an experiment has been completed, the user is provided with an array of statistics associated with the execution. These include 1) error counts and probability estimates for the base and meta-extractor algorithms; 2) detailed output of the entities extracted by the base extractors, the space of hypothesized ground truths proposed by the meta-extractor, and the corresponding meta-extractor result; 3) the rate at which events of interest occur (e.g., the frequency with which the meta-extractor recreates the truth when all base extractors fail); and 4) the original text with extracted and ground truth entities highlighted (Fig. 2).

This information collectively provides substantial insight into the behaviors and performance of the base extractors, as well as of X-Man itself, enabling the potential for algorithm optimization and enhancement.

Related Reference
Lemmond, T., N. Perry, J. Guensche, J. Nitao, R. Glaser, P. Kidwell, and W. Hanley, "Enhanced Named Entity Extraction via Error-Driven Aggregation," International Conference on Data Mining, Las Vegas, Nevada, July 2010.
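The fold-averaged F-measure estimation can be sketched as follows; the span-level F1 and the round-robin folding below are a simplified stand-in for X-Man's evaluation pipeline, not its actual implementation:

```python
def f_measure(predicted, truth):
    """Harmonic mean of precision and recall over exact entity matches."""
    tp = len(predicted & truth)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(predicted), tp / len(truth)
    return 2 * precision * recall / (precision + recall)

def cross_validated_f1(documents, run_extractor, k=2):
    """Partition (text, truth_entities) pairs into k folds and average
    the per-fold mean F1, mirroring the fold-averaged estimates above.
    The folds are independent, so they could be scored in parallel."""
    folds = [documents[i::k] for i in range(k)]
    fold_scores = []
    for fold in folds:
        scores = [f_measure(run_extractor(text), truth) for text, truth in fold]
        fold_scores.append(sum(scores) / len(scores))
    return sum(fold_scores) / len(fold_scores)

# Toy data: a dummy extractor that always emits the same entity, so it is
# right on even-numbered documents and wrong on odd-numbered ones.
docs = [(f"doc{i}", {("E", i % 2)}) for i in range(4)]
score = cross_validated_f1(docs, lambda text: {("E", 0)})   # -> 0.5
```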




Improving Optimization Capabilities for Energy Modeling via High-Performance Computing

For more information contact:
Carol A. Meyers
(925) 422-1252
meyers14@llnl.gov

Energy systems within the United States are expected to undergo a significant transition in the coming decades. These changes are due in large part to the goal of creating a more energy-efficient infrastructure by increasing renewable energy generation and introducing Smart Grid technologies, electric vehicles, and smart appliances. This in turn generates the need for grid models capable of capturing this new paradigm, as well as the need for improved optimization capabilities that can handle the massive size and complexity associated with these future energy scenarios. Currently, grid and dispatch problems are limited to a small number of generators and/or a simplified network topology, which will not be suitable to understand or solve the energy challenges of the future.

As a byproduct of interactions with the California Public Utilities Commission, LLNL was alerted to an energy model currently used for planning in California that suffers from exactly this kind of computational bottleneck. This model is an electric generation dispatch planning model for studying the operational requirements and market impacts of transitioning to a 33% renewable energy standard across the western United States (Figs. 1 and 2). Because many of the decisions associated with grid operations (such as turning a generator on or off) are binary, it is formulated as a mixed-integer linear program. The model's stakeholders have expressed significant interest in using LLNL expertise and supercomputing hardware to get the model to solve more quickly and eventually add more complexity.

Project Goals
The primary objective of this project is to greatly speed the execution time of this electric generation dispatch planning model. In the process, we hope to demonstrate how LLNL energy modeling expertise and high-performance computing resources can be successfully leveraged to solve large-scale energy problems. Within this context, our specific milestones are as follows: 1) identify where in the planning model the computational bottlenecks are occurring; 2) investigate strategies for reformulating the underlying model to speed execution time; 3) improve performance by demonstrating the ability to run multiple copies of the model in parallel; and 4) improve performance

Figure 1. Composition of the grid dispatch model currently used for planning in California: 2100 generators across the western U.S., together with load, storage, reserves, and transmission requirements, feed a grid dispatch mixed-integer programming model with 225,000 variables (34,000 integer) and 400,000 constraints.


by demonstrating the ability to run a single copy of the model across many nodes, in a massively parallel setting.

Relevance to LLNL Mission
This work directly aligns with the Energy Security and Regional Climate Change Impacts pillar of the LLNL Institutional Science and Technology Five-Year Roadmap to the Future. It also supports the growing Livermore Valley Open Campus initiative, specifically the intended high-performance computing center. The successful completion of this project will also support the proposed Partnership for 21st Century Energy Systems between LLNL and the utilities in the state of California.

FY2010 Accomplishments and Results
We accomplished our first milestone of identifying the computational bottleneck in the planning model. We were able to determine that the vast majority of the time is spent solving a daily unit commitment model, which is formulated as a mixed-integer linear program by the front-end energy modeling software and solved by the back-end optimization engine. Formulation of the problem itself, inputs, and outputs are computationally trivial compared to the length of time necessary to solve this unit commitment problem for every day.

We were able to identify a number of improvements to the original problem formulation (our second milestone), primarily concerning the representation of certain generator dispatch constraints. These improvements enabled the unit commitment model to be solved nearly four times faster than before.

Next, we demonstrated that the front-end software could be successfully run on our supercomputers, using the Hyperion testbed (Fig. 3). This included porting the software package (a .NET executable natively run in Windows) to Linux, and scripting the software calls to function in this environment. Our third milestone was completed via the implementation of a parallel job-launching capability, whereby many copies of the software could be executed simultaneously on different nodes of Hyperion. On a yearly timeframe, this parallelism alone caused the model to solve an additional three times faster than before.

Altogether, we have thus demonstrated a 12× speedup in the original planning model via the use of our supercomputing resources (see Table). Our fourth milestone, which is to more fully leverage these resources by implementing the model in a massively parallel framework, is ongoing work in conjunction with the creators of the underlying optimization engine.

Figure 2. Structure of the power grid in the western United States. [The map legend distinguishes generation > 500 MW by type (coal, natural gas, nuclear, renewable, hydro) and major transmission at 230–300 kV, 300–400 kV, > 400 kV, and DC lines.]

Figure 3. Hyperion cluster at LLNL, a high-performance computing testbed.

Table. Sources and degrees of improvement in solution times of the grid dispatch model.

Source of speedup                                             Degree of speedup
Reformulation of the mixed integer programming model          4x
Parallel execution over months in the year                    3x
Massively parallel implementation of underlying algorithms    Not yet determined
TOTAL                                                         12x (so far)

FY2011 Proposed Work
We will continue leveraging our supercomputing resources by attempting to implement the grid planning model using MPI distributed-memory parallelism. Additionally, we plan on engaging the model's stakeholders directly, by supporting their production runs using the existing framework from FY2010.

Related References
1. Crainic, T., B. Le Cun, and C. Roucairol, "Parallel Branch-and-Bound Algorithms," Parallel Combinatorial Optimization, Chapter 1, E. Talbi, Ed., John Wiley & Sons, New Jersey, 2006.
2. DeMeo, E., G. Jordan, C. Kalich, J. King, M. Milligan, C. Murley, B. Oakleaf, and M. Schuerger, "Accommodating Wind's Natural Behavior," IEEE Power and Energy Magazine, pp. 59–67, November/December 2007.
3. Lamont, A., "Assessing the Long-Term System Value of Intermittent Electric Generation Technologies," Energy Economics, 30, pp. 1208–1231, 2008.
4. Phillips, C., J. Eckstein, and W. Hart, "Massively Parallel Mixed-Integer Programming: Algorithms and Applications," Parallel Processing for Scientific Computing, Chapter 17, M. Heroux, P. Raghavan, and H. Simon, Eds., SIAM, Philadelphia, Pennsylvania, 2006.
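The combinatorial character of the daily unit-commitment problem is easy to see in miniature. The toy model below (two generators, two periods, invented costs; nothing here comes from the actual planning model) enumerates every binary on/off commitment, which is exactly the explosion that forces the real problem into a MILP solver:

```python
from itertools import product

# Illustrative generators: (fixed cost if committed, marginal $/MWh, capacity MW)
GENERATORS = [(100.0, 20.0, 60.0), (10.0, 50.0, 100.0)]
DEMAND = [50.0, 90.0]  # MW to serve in each period

def dispatch_cost(on_flags, demand):
    """Cheapest dispatch for one period given on/off commitments,
    loading the cheapest committed unit first; None if infeasible."""
    units = sorted((mc, cap, fc) for (fc, mc, cap), on
                   in zip(GENERATORS, on_flags) if on)
    cost, remaining = sum(fc for _, _, fc in units), demand
    for mc, cap, _ in units:
        take = min(cap, remaining)
        cost += mc * take
        remaining -= take
    return cost if remaining <= 1e-9 else None

def unit_commitment():
    """Brute-force every 2^(units x periods) commitment pattern -- fine
    for a toy, hopeless at 2100 generators, hence the MILP formulation."""
    best = None
    n = len(GENERATORS)
    for flags in product((0, 1), repeat=n * len(DEMAND)):
        total = 0.0
        for t, d in enumerate(DEMAND):
            c = dispatch_cost(flags[t * n:(t + 1) * n], d)
            if c is None:
                break            # this commitment cannot serve demand
            total += c
        else:
            if best is None or total < best[0]:
                best = (total, flags)
    return best

cost, commitments = unit_commitment()
# Cheapest plan: the low-marginal-cost unit alone in period 1, both units
# on in period 2 (the 60-MW unit cannot cover 90 MW by itself).
```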

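The parallel job-launching idea, independent daily problems fanned out over many workers, can be sketched locally with a thread pool. The real capability launched separate copies of the model on Hyperion nodes; `solve_day` below is a placeholder workload, not the actual model invocation:

```python
from concurrent.futures import ThreadPoolExecutor

def solve_day(day):
    """Stand-in for one daily unit-commitment solve; in production each
    call would launch the ported front end and its optimization back end."""
    return day, sum(i * i for i in range(1000))  # placeholder computation

def solve_year(days, workers=4):
    """Days are independent, so they can be solved concurrently and the
    yearly result reassembled from the (day, solution) pairs."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(solve_day, days))

results = solve_year(range(6))   # six independent "days" solved in parallel
```

Because the daily solves share no state, the observed 3× yearly speedup comes purely from concurrency; a process pool or a cluster job launcher substitutes for the thread pool without changing the structure.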


Energy Manipulation
Research

High Voltage Vacuum Insulator Flashover

For more information contact:
Timothy L. Houck
(925) 423-7905
houck1@llnl.gov

High-performance pulsed-power systems are used in numerous applications related to national security. The vacuum insulator is a critical component of such systems, often limiting peak performance. If designed incorrectly, the insulator can be the weak link, leading to failure of the entire system. Scientific knowledge developed from simple experiments provides understanding of important physics involved in insulator performance, but is not readily transformed into a reliable tool for predicting operational performance. We are developing a computer model of electrical breakdown at the dielectric/vacuum interface, leveraging LLNL's advances in computational resources to bridge the gap between knowledge and application.

Project Goals
We wish to produce a computational methodology for designing high-voltage vacuum insulators for pulsed-power devices. We have demonstrated during this project that a few basic physics phenomena can explain the initiation of electrical breakdown across the dielectric/vacuum interface, known as vacuum insulator flashover. Varying the geometry, materials, and environment used in simulations shows how the different initiation mechanisms evolve. This tool will make it possible to study complex insulator designs in realistic operational conditions and to predict performance. We have proposed insulator designs and processing for the next generation of magnetic flux-compression generators.

Relevance to LLNL Mission
This project directly supports the Energy Manipulation pillar of the Science, Technology, and Engineering foundation. Our computational model enhances LLNL's status as a world-class center for high-voltage vacuum insulator design, development, and testing. Improved vacuum insulators will have immediate impact on explosively driven flux compressors and compact accelerator designs.

FY2010 Accomplishments and Results
In FY2009 we concentrated on improving the algorithms related to the field-emission and secondary-electron-emission models in the commercial particle-in-cell code Vorpal. The prevailing theories assumed that the initiating event for flashover was an avalanche of electrons created when a few seed electrons impacted the insulator surface, causing additional (secondary) electron emission. As we refined our model and ran more simulations, we discovered that the secondary electrons were not sufficient by themselves to cause flashover and, in the case of positive-angled insulators, did not return to the surface. We submitted a Record of Invention (ROI), IL-12057, related to an insulator design, and changed the protocol for insulator testing using knowledge that came from these studies.

In FY2010 we added a gas layer on the insulator surface to the simulations. The light or flash seen during the electrical breakdown of a vacuum insulator, which led to the phrase "flashover," is due to the ionization of gas near the surface of the insulator. Whether this gas ionization was important to the initiation of breakdown, or simply a result of the breakdown, had not been determined. Our initial model used a static gas to simplify computations. Published data from insulator experiments provided an idea of the gas layer thickness, while Paschen's curve provided information on the gas density.

Our next step was to improve the gas collision algorithms used in the code. This involved adding elastic and inelastic scattering to the ionization collision algorithm. We were then able to do simulations that mimic flashover behavior noted for positive-angle insulators (Fig. 1).

Figure 2 depicts the orientation and particle orbits for two of the simulations. The positive-angle orientation was expected to be the most difficult to model, as the orientation of the electric field prevented secondary electrons from returning to the insulator in a true vacuum situation. However, a combination of electron scattering from gas molecules and electrodes, surface charging, and geometrical field enhancement led to localized electric fields near the surface that favored an electron avalanche, or flashover. The angle-independent, fast flashover behavior for negative angles indicated that we needed a better gas model than the static layer that we imposed on the surface.

Our finishing work on the project was developing a dynamic gas model in which electron impact led to desorption of a neutral gas. Figure 3 illustrates the new gas model.

Figure 1. Comparison of empirical data with simulations over insulator angles from –80° to +80°. Vertical axis reflects ability to withstand flashover, either field strength or time to fail.
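Paschen's curve relates breakdown voltage to the pressure-gap product pd; its classic form is V(pd) = B·pd / ln[A·pd / ln(1 + 1/γ)]. The coefficients below are textbook-style values for air and are purely illustrative, not the parameters used in the Vorpal simulations:

```python
import math

# Illustrative Paschen coefficients for air: A in 1/(cm*Torr), B in V/(cm*Torr),
# GAMMA = secondary-emission coefficient of the cathode surface.
A, B, GAMMA = 15.0, 365.0, 0.01

def paschen_voltage(pd):
    """Breakdown voltage (V) for pressure-distance product pd in Torr*cm;
    only defined where the logarithmic denominator is positive."""
    denom = math.log(A * pd / math.log(1.0 + 1.0 / GAMMA))
    if denom <= 0.0:
        raise ValueError("pd below the validity range of the formula")
    return B * pd / denom

# The curve has a minimum (near pd ~ 0.8 Torr*cm for these coefficients):
# breakdown gets harder on either side of it, which is why the density and
# thickness of a thin desorbed gas layer matter so much to flashover initiation.
```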


Figure 2. Simulation results for positive 55° (top) and negative 30° (bottom) angle insulators. The red dots represent ionized gas and the blue are electrons. The two views are looking across (left) and directly at (right) the insulator surface. The graph on the right is a color contour of the electric field magnitude (red is low and blue is high). The anode is at the top and the cathode is at the bottom of each plot.

The understanding we gained of the effect of a gas layer on flashover led to another ROI, IL-12236, related to processing the insulator to avoid gas desorption. We anticipate this work will continue with programmatic support.

Figure 3. Simulation testing the gas desorption model due to electron impact. A group of test electrons enters from the left and strikes the angled insulator surface; successive panels (first, second, fifth, and last electron impacts) show an expanding layer of neutral desorbed gas forming above the surface.

Related References
1. Tang, J., et al., "Process of Surface Flashover in Vacuum Under Nanosecond Pulse," IEEE Trans. Plasma Sci., 38, 1, January 2010.
2. Baglin, V., N. Hilleret, et al., "The Secondary Electron Yield of Technical Materials and Its Variation With Surface Treatments," Proceedings of the European Particle Accelerator Conference, Austria, pp. 217–221, June 26–30, 2000.
3. Perkins, M. P., T. L. Houck, A. R. Marquez, and G. E. Vogtlin, "FDTD-PIC Modeling for Initiation of Vacuum Insulator Flashover," 37th IEEE International Conference on Plasma Science, Norfolk, Virginia, 2010.
4. Anderson, R. A., "Anode-Initiated Surface Flashover," Conf. Electr. Insul. Dielec. Phen., pp. 173–179, 1979.
5. Perkins, M. P., T. L. Houck, A. R. Marquez, and G. E. Vogtlin, "Simulations for Initiation of Vacuum Insulator Flashover," 29th IEEE International Power Modulator and High Voltage Conference, Atlanta, Georgia, 2010.



Author Index

Aceves, Salvadore M. ................................................. 2


Bennett, Corey V. ..................................................... 14
Bernier, Joel V. ......................................................... 32
Candy, James V. ................................................. 10, 70
Carlisle, Keith ........................................................... 18
Chen, Barry Y. .......................................................... 86
Chen, Diana C. ......................................................... 22
Conway, Adam M. .................................................... 50
Corey, Bob ............................................................... 38
Dehlinger, Dietrich A. ............................................... 58
Foudray, Angela M. K. .............................................. 72
Guidry, Brian L. ........................................................ 76
Houck, Timothy L. .................................................... 94
Kotovsky, Jack........................................................... 60
Kuntz, Joshua D. ....................................................... 52
Lemmond, Tracy D. ............................................ 84, 88
Lin, Jerry I. ............................................................... 40
Mariella, Raymond P., Jr. .......................................... 64
Meyers, Carol A. ...................................................... 90
Ng, Brenda M........................................................... 82
Nikolić, Rebecca J. .................................................... 62
Paulson, Christine N. ........................................... 6, 78
Puso, Michael A. ................................................ 34, 42
Spadaccini, Christopher M. ................................ 54, 56
Tang, Vincent ........................................................... 74
Weisgraber, Todd H. ................................................. 36
Wheeler, Elizabeth K. ......................................... 48, 66
White, Daniel A. ................................................. 26, 44


