
Experimental Aerodynamics

Edited by
Stefano Discetti and Andrea Ianiro
Cover image credit: Andrea Sciacchitano, Giuseppe Carlo Alp Caridi, and Rakesh Yuvaraj

CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2017 by Taylor & Francis Group, LLC


CRC Press is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works

Printed on acid-free paper

Version Date: 20161115

International Standard Book Number-13: 978-1-4987-0401-4 (Hardback)

This book contains information obtained from authentic and highly regarded sources. Reasonable efforts have been made
to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all
materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all
material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not
been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in
any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized
in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying,
microfilming, and recording, or in any information storage or retrieval system, without written permission from the
publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.
copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-
750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data

Names: Discetti, Stefano, editor. | Ianiro, Andrea, editor.


Title: Experimental aerodynamics / [edited by] Stefano Discetti and
Andrea Ianiro.
Description: Boca Raton : CRC Press, 2017.
Identifiers: LCCN 2016040406 | ISBN 9781498704014 (hardback : alk. paper)
Subjects: LCSH: Aerodynamics–Mathematical models. | Aerodynamics–
Experiments. | Experimental design.
Classification: LCC TA358 .E97 2017 | DDC 629.132/300724--dc23
LC record available at https://lccn.loc.gov/2016040406

Visit the Taylor & Francis Web site at


http://www.taylorandfrancis.com

and the CRC Press Web site at


http://www.crcpress.com
Contents

Preface vii
Editors ix
Contributors xi

Section I
Fundamental aspects of experimental aerodynamics

1 Theoretical fundamentals of experimental aerodynamics 3

ANDREA IANIRO AND STEFANO DISCETTI

2 Statistical data characterization and elements of data processing 25

STEFANO DISCETTI AND ANDREA IANIRO

3 Experimental facilities: Wind tunnels 55

ANDREA SCIACCHITANO

4 Principles of flow visualization 91

JAVIER RODRÍGUEZ-RODRÍGUEZ

Section II
Scalar measurements

5 Pressure measurements 109

DANIELE RAGNI


6 Temperature and heat flux measurements 143

FRANCESCO PANERAI

7 Density-based methods 195

FYODOR GLAZYRIN

8 From interferometry to color holography 223

JEAN-MICHEL DESSE

Section III
Velocity measurements

9 Thermal anemometry 257

RAMIS ÖRLÜ AND RICARDO VINUESA

10 Laser velocimetry 305

JOHN J. CHARONKO

11 Volumetric velocimetry 357

FILIPPO COLETTI

Section IV
Wall shear and force measurement

12 Measurement of wall shear stress 393

RICARDO VINUESA AND RAMIS ÖRLÜ

13 Force and moment measurements 429

MARIOS KOTSONIS

Index 449
Preface

From the very first days of aeronautics, the leading role of experimentation was immediately clear. While Newton's sine-squared law for predicting aerodynamic drag slowed down the rush toward human flight for several decades, as it was used as an argument to support the impossibility of designing high-lift low-drag devices, the enthusiasm and the experimental evidence achieved in the eighteenth and nineteenth centuries contested the theory and paved the way to the era of aeronautics. Starting from the historical moment of the first powered flight in 1903 on Kitty Hawk Beach, the role of experimentation has never been disputed: on one side the struggle to closely reproduce realistic flow conditions in a controlled environment, and on the other side the commitment to extract the most complete and reliable information about the flow—these have been the leading incentives for the continuous evolution of experimental aerodynamics over the last century. The increasing availability of high-performance computers for computational fluid dynamics, which was expected to supplant experimentation with relatively low-cost simulation compared to the burden of extensive experimental test campaigns, has instead had the counter-effect of pushing toward more and more sophisticated experimental techniques. The ubiquitous nature of turbulence, the limits of direct numerical simulation of the Navier–Stokes equations at relatively large Reynolds numbers, and the urgent need to set benchmarks for the validation of turbulence closure models have provided an incomparable thrust to the development of measurement tools.
We firmly believe that, since experimental aerodynamics is a branch of science that is far from fading, a well-rounded background for the next generation of specialists in aerodynamics cannot stand without a deep knowledge of the current limits and potentialities of the experimental techniques, as well as of the principles of real data characterization and treatment. This belief originated the idea of this book, directed to students in the final stages of their university career. The ambitious task is to provide a panoramic view of the fundamentals of the main measurement techniques while simultaneously keeping a weather eye on leading-edge research. This target pushed us toward the idea of including contributions from specialists in the presented measurement techniques. The final consortium is composed of 13 contributors, from continental Europe, Russia, and the United States, with active lines of research and development in the discussed measurement techniques.
This book is divided into four main sections. Section I provides a general introduction to the problem of measuring experimental quantities in aerodynamics. The background scenario relies on the fundamentals of the Navier–Stokes equations and on the flow properties of interest (Chapter 1). In this section, the student is also made aware of a powerful tool for the design of experiments: dimensional analysis. Some rudiments of statistical data characterization (measurement uncertainty, statistical representation of turbulent flow fields, etc.) are provided in Chapter 2. In this section, the student is also guided through some tools for data processing, such as Fourier analysis, Proper Orthogonal Decomposition, and conditional averaging. Furthermore, since an experiment is in the first place an attempt to reproduce flow conditions in a controlled environment, an overview of wind tunnel facilities is provided in Chapter 3. Eventually, as direct visualization used as an instrument to understand flow motion can be considered the dawn of experimental fluid mechanics, a place of honor is reserved for flow visualization techniques (Chapter 4).
Section II focuses on the measurement of scalar thermodynamic properties. Pressure measurements are commonly used to infer other fluid dynamic properties, such as wall shear stresses, fluid velocity, and, more recently, aeroacoustic noise sources. In Chapter 5, the traditional methods based on static pressure tubes, wall tappings, and pressure-sensitive paints are integrated with the most recent horizons opened by the advancements of highly time-resolved measurements with microphones. Chapter 6 focuses on methods for pointwise and surface temperature measurements. In the latter case, particular attention is devoted to full-field techniques for heat flux measurement. The section concludes with an overview of density-based techniques (Chapters 7 and 8), which rely on index-of-refraction changes along the optical path to outline features of the flow field. Even though these techniques are well established as optical flow visualization methods, active research is ongoing on the extraction of quantitative 3D information.
Section III is centered on velocity measurement techniques. In Chapter 9, the fundamentals of thermal anemometry are described, as well as the most recent advancements with respect to near-wall measurements. Chapter 10 covers the basics of optical laser velocimetry methods, with particular focus on particle image velocimetry. Chapter 11 provides a panoramic view of the most recent 3D velocimetry methods. The conceptual pathway underlying this section is somehow twofold: on one side, the workhorses of turbulence investigation are presented, with their relative strengths and weaknesses, and with some insights on developments expected in the coming years; on the other side, the evolution of velocimetry toward results getting closer and closer to those of numerical simulations (at least in the 4D format and in the declaration of intent) is described.
Section IV closes the book with a description of methods to measure the effects of momentum transfer from the flowing fluid to bodies immersed in it. The discussion in Chapter 12 covers the techniques for the measurement of wall shear stresses, which are of fundamental importance for the analysis of drag near a solid surface or for the study of wall turbulence. Recent advances in measurement techniques such as oil film interferometry are also discussed. In Chapter 13, methods for the extraction of forces and moments are described. The focus is on traditional invasive methods (balances, strain gauges, load cells, etc.) as well as on the most recent developments in force extraction from velocimetry data.
Editors

Stefano Discetti received his BSc (2007), MSc (2009), and PhD (2013) in aerospace engineering from the University of Naples Federico II. His PhD thesis focused on the development of tomographic PIV and its application to turbulent flows. As a part of his PhD studies, in 2010 and 2012 he worked in the Laboratory for Energetic Flow and Turbulence at Arizona State University on the development of 3D particle image velocimetry for the investigation of the turbulence generated by fractal grids. After receiving his PhD, he joined the Department of Bioengineering and Aerospace Engineering at Universidad Carlos III de Madrid, where he currently holds a visiting professorship in the area of experimental aerodynamics and propulsion. He also served as test-case provider and referee in the team of the 4th International PIV Challenge. His research interests include the development of nonintrusive measurement techniques, unsteady aerodynamics, and wall-bounded turbulent flows.

Andrea Ianiro received his BSc (2006), MSc (2008), and PhD (2012) in aerospace engineering from the University of Naples Federico II. His PhD was on nonintrusive diagnostics of impinging jets with IR thermography and tomographic PIV. During his PhD studies, in 2010 and 2011 he joined the Aerodynamics Labs at TU Delft for the development of tomographic PIV measurements on impinging jets. After receiving his PhD, Dr. Ianiro worked as a postdoctoral research fellow at the University of Naples, developing tomographic PIV diagnostics for swirl flows in geometries representative of aero engine combustors. In 2013, Dr. Ianiro joined the Department of Bioengineering and Aerospace Engineering at Universidad Carlos III de Madrid, where he currently is a visiting professor, teaching courses on aero engines and experimental aerodynamics. His research interests include wall-bounded flows, unsteady aerodynamics, and reduced order modeling techniques.

Contributors

John J. Charonko received his BS in engineering science in mechanics and MS in engineering mechanics from Virginia Tech in 2002 and 2005. After receiving his PhD in biomedical engineering from the Virginia Tech–Wake Forest School of Biomedical Engineering in 2009, he worked first as a postdoc and then as a research assistant professor of mechanical engineering at Virginia Tech. His research has focused on applications of particle image velocimetry to traditional and biomedical flows, as well as advancements in methodology and uncertainty analysis. Professor Charonko received the 2010 Outstanding Paper award in the Fluid Mechanics category of the journal Measurement Science and Technology for his research on "Assessment of pressure field calculations from particle image velocimetry measurements." He is currently employed as a research scientist at Los Alamos National Laboratory.

Filippo Coletti earned his bachelor's and master's degrees in mechanical engineering at the University of Perugia (Italy) in 2003 and 2005, respectively, and a diploma in fluid dynamics at the von Karman Institute (Belgium) in 2006. He performed his doctoral studies at the von Karman Institute and at the University of Stuttgart (Germany), where he earned his PhD in aerospace engineering in 2010. From 2011 to 2013 he was a postdoctoral fellow at Stanford University, where he worked in the Flow Physics group and collaborated with the Center for Turbulence Research. In 2014, Dr. Coletti joined the faculty at the University of Minnesota in the Aerospace Engineering and Mechanics Department and became a member of the St. Anthony Falls Laboratory. His interests lie in the areas of single- and multiphase transport in complex flows, relevant to human health (respiratory and cardiovascular fluid mechanics) and the environment (particle transport in turbulence).

Jean-Michel Desse joined ONERA in 1979. He is in charge of the development of optical metrological tools for analyzing unsteady flows based on shadow and schlieren techniques, interferometry, and holography. As a senior research scientist, he has worked on color differential interferometry using Wollaston prisms and polarized white light. The technique was applied to 2D and axisymmetric unsteady wake flows, hypersonic flows, gaseous mixtures, and oil film interferometry skin friction measurement. He then developed three-color interferometry and color holographic interferometry using panchromatic plates by transmission and reflection. Currently, digital color holographic interferometry replaces plate holography and is implemented successfully for studying flows. Several other applications of digital holography are also tested, such as stochastic digital holography for visualizing inside strongly refracting transparent objects, auto-referenced digital holography, and double-reference digital holography.


Fyodor Glazyrin received his specialist degree in physics from Lomonosov Moscow State University in 2012, and his PhD degree in 2016. He is a member of the Laboratory of Plasma-Gas Dynamics and Flow Visualization of the Faculty of Physics in Lomonosov MSU. His scientific specializations are optical methods of flow diagnostics and their application to unsteady, shock-containing flows.

Marios Kotsonis received his BSc and MSc in mechanical and aerospace engineering from the University of Patras, Greece, in 2007. He received his PhD from the Department of Aerodynamics of Delft University of Technology in 2012 with a thesis on plasma actuators. He is currently an assistant professor at the same department. His research interests involve applied aerodynamics, active flow control, plasma actuators, and hydrodynamic stability.

Ramis Örlü received his MSc (Dipl.-Ing.) in mechanical engineering in 2003 from the Ruhr University of Bochum, Germany, and holds a PhD in fluid mechanics (2009) from KTH Royal Institute of Technology, Stockholm, Sweden. His research is focused on experimental methods and wall-bounded turbulent flows. Since 2009 he has worked as a researcher, and since 2015 as a docent (in Experimental Fluid Physics), at the Linné FLOW Centre and at the Competence Centre for Gas Exchange (CCGEx), both located at KTH.

Francesco Panerai serves as a materials scientist at NASA Ames Research Center (ARC) in Moffett Field, California, with Analytical Mechanics Associates, Inc. His research covers advanced materials for extreme environments, heat and mass transport in porous media, and hypersonic aerothermodynamics. Before moving to NASA, he spent five years at the von Karman Institute for Fluid Dynamics (VKI) in Belgium, where he earned a Research Master in Aeronautics and Aerospace in 2008, and a PhD in 2012. At VKI, he investigated the behavior of high-temperature ceramic composites and developed measurement techniques for high-temperature materials and reactive flows. He also designed and qualified in-flight experiments for hypersonic spacecraft, most notably the catalysis in-flight experiment for the European Space Agency Intermediate eXperimental Vehicle (IXV).

Daniele Ragni graduated in 2007 from Università Politecnica delle Marche (Ancona, Italy) with a bachelor's degree in mechanical engineering and a master's degree in thermomechanical engineering. In February 2012, after an internship at the DLR specializing in Background Oriented Schlieren, he earned a PhD in aerospace engineering at the TU Delft High Speed Laboratories under the supervision of Professor F. Scarano and Dr. B. W. van Oudheusden. Currently, Dr. Ragni is an assistant professor at the Aerodynamics, Wind Energy, Flight Performance and Propulsion (AWEP) department of TU Delft, leading the new group for aeroacoustic studies in rotors.

Javier Rodríguez-Rodríguez is an aeronautical engineer from the School of Aeronautics at the Polytechnic University of Madrid. He earned his PhD at Universidad Carlos III de Madrid (2004), working on the turbulent breakup of drops and bubbles. After a two-year postdoctoral period at the University of California San Diego, he moved back to Universidad Carlos III de Madrid, where he is now an associate professor. His research interests vary from the physics of bubbles to the mechanics of soft animals and cells, including topics as varied as the physics of beer tapping.

Andrea Sciacchitano earned his degree in aerospace engineering in 2010 from the Sapienza University of Rome and his doctorate in aerospace engineering in 2014 from the Aerodynamics section of Delft University of Technology. During his PhD, Dr. Sciacchitano investigated uncertainty quantification methods and advanced image analysis for particle image velocimetry. Since 2014, he has been an assistant professor in the Aerodynamics section of Delft University of Technology. Dr. Sciacchitano is the author of several publications in international journals and has participated in international projects in collaboration with NLR-DNW, BMW, Siemens Wind Power, LaVision GmbH, and Utah State University.

Ricardo Vinuesa received his BS in mechanical engineering from the Polytechnic University of Valencia (Spain) and holds an MS and a PhD in mechanical and aerospace engineering from the Illinois Institute of Technology (USA). His research is focused on pressure-gradient turbulent boundary layers, including the flow around wings. He combines high-order spectral-element DNSs and LESs with wind-tunnel measurements, including oil-film interferometry and hot-wire anemometry. Since 2014, he has worked as a postdoctoral research fellow at the Linné FLOW Centre at KTH (Stockholm).
Section I
Fundamental aspects of experimental aerodynamics

Chapter One

Theoretical fundamentals of
experimental aerodynamics

Andrea Ianiro and Stefano Discetti

Contents

1.1 Introduction: Theory and experiments in aerodynamics 3
1.2 Dimensional analysis 4
1.3 Buckingham Π theorem 5
Example: nondimensional parameters for aerodynamic forces 6
1.4 Air as a continuum 7
The continuum hypothesis 7
Peculiar velocities and compressibility effects 8
Continuum hypothesis: Is it still valid in the small scales of turbulent flows? 10
1.5 Navier–Stokes equations 10
Lagrangian and Eulerian specification of the flow field 11
Conservation of mass 11
Newton's second law 12
Conservation of energy (first law of thermodynamics) 13
Second law of thermodynamics 13
1.6 Nondimensional numbers 14
1.7 Some types of flows 15
Inviscid incompressible flows 15
Inviscid compressible flows 16
Hypersonic reentry flow 17
Boundary layers 17
1.8 Laminar versus turbulent flows 19
Laminar and turbulent regimes 19
Turbulent boundary layer 20
1.9 Aerodynamic forces: Lift and drag 20
Problems 22
References 22

1.1 Introduction: Theory and experiments in aerodynamics

Aerodynamics is a branch of physics that studies the motion of air and other gases and the forces acting on solid objects interacting with them. Since its origins, aerodynamics has been strongly connected to aeronautics, and a great part of early aerodynamic studies was devoted to the development of heavier-than-air flight (see, e.g., [1]). Modern aerodynamics maintains an intimate connection with aeronautics, in particular to model the principles governing the flight of aircraft, rockets, and missiles and to improve their performance; moreover, aerodynamics is fundamental for the design of wind turbines, automobiles, high-speed trains, and civil structures that must withstand strong winds, such as bridges and tall buildings.


Among the physical sciences, aerodynamics is one of those with the strongest mathematical basis. As will be shown in the following sections, it is possible to write a well-posed system of differential equations (Navier–Stokes) describing the temporal and spatial variation of all the quantities of interest, such as velocity, pressure, temperature, and density. Nevertheless, turbulence remains one of the greatest unsolved problems in physics, despite its relevance in scientific and technological applications. Theoretical understanding of the solutions of the Navier–Stokes equations is still incomplete, and even basic properties of the Navier–Stokes equations have never been proven. As a matter of fact, the Millennium Prize Problems in mathematics, proposed by the Clay Mathematics Institute in 2000, include the Navier–Stokes existence and smoothness problem [2], which concerns basic mathematical properties of solutions of the Navier–Stokes equations.
Solutions for aerodynamic flows have been obtained by neglecting or approximating the contribution of turbulence; thus, they are valid only under strong assumptions, most often far from reality.
Accordingly, the contribution of experimentalists has been, and still is, fundamental to solve practical industrial problems (such as aircraft design and certification) and to validate numerical models and theoretical analyses. Nevertheless, even when it is not possible to solve a problem analytically, theory will always help to discern which elements are most important to reproduce in an experiment. A good experimentalist should never overlook the importance of theory and mathematics for the design and scaling of an experiment and for the analysis of its results.
In this chapter, the reader is provided with the main mathematical tools he/she will need for the design of a sound experiment. The fundamentals of dimensional analysis are given and the equations of fluid mechanics are derived. An appropriate dimensional scaling is presented and some special flow conditions are reviewed. Particular attention is given to special cases in which the Navier–Stokes equations can be simplified. Finally, the chapter is closed with a note on the generation of aerodynamic forces.

1.2 Dimensional analysis

Extracting useful information from experiments may be a very difficult task. For instance, measuring the force acting on a sphere of diameter d in a wind tunnel at a given speed and for given air properties will return the aerodynamic force relative only to those experimental conditions. If the experimental conditions are changed, the absolute value of the acting force will be different. Extracting the relevant information on the dependence on all the parameters of the problem (sphere diameter, flow velocity, air dynamic viscosity, air density, etc.) may be extremely costly and would require an overwhelming number of experiments across a huge parametric space.
As shown in the following, a given experimental result in aerodynamics can generally be related to other flows with different scales, or even different fluids, if the experimental results are conveniently expressed in nondimensional form by dividing their dimensional values by appropriate reference quantities. The technique for the choice and definition of the appropriate nondimensional scaling is referred to as dimensional analysis.
Dimensional analysis is a direct consequence of the principle of dimensional homogeneity, which expresses the basic characteristic of any meaningful equation: all terms must have the same dimensions (already in our childhood we were all told that we are not allowed to sum beans and potatoes!). The magnitudes of the quantities involved in a certain equation are generally expressed according to some chosen scales, which are taken as units for the physical quantities such as length L, mass M, time t, and temperature T. The measurement units corresponding to each quantity depend on the chosen system of units (e.g., SI units or imperial units). In particular, the units of several physical quantities are expressed as the product of a few fundamental units (see Table 1.1).
In geometry, two objects are defined as similar if they both have the same shape or, more precisely, if one can be obtained from the other by uniformly scaling the geometrical dimensions; this concept, which is intuitive in the physical space, applies as is in a general metric space. In our case, if we consider the parameters characterizing a given flow (in the Rⁿ space of the n parameters of the equation), two systems are similar if all the relevant parameters scale uniformly.

Table 1.1 Physical quantities of interest in aerodynamics

Quantity                  Dimensions    Derived units in SI
Acceleration              Lt⁻²          m/s²
Angle (plane)             1             rad
Angle (solid)             1             sterad
Angular acceleration      t⁻²           rad/s²
Angular velocity          t⁻¹           rad/s
Angular momentum          ML²t⁻¹        kg·m²/s
Area                      L²            m²
Curvature                 L⁻¹           m⁻¹
Density                   ML⁻³          kg/m³
Dynamic viscosity         ML⁻¹t⁻¹       kg/(m·s)
Elastic modulus           ML⁻¹t⁻²       kg/(m·s²)
Energy and enthalpy       ML²t⁻²        J
Entropy                   ML²t⁻²T⁻¹     J/K
Force                     MLt⁻²         N
Frequency                 t⁻¹           Hz
Mass                      M             kg
Momentum                  MLt⁻¹         kg·m/s
Power                     ML²t⁻³        W
Pressure                  ML⁻¹t⁻²       N/m²
Specific heat capacity    L²t⁻²T⁻¹      J/(kg·K)
Temperature               T             K
Temperature gradient      L⁻¹T          K/m
Thermal conductivity      MLt⁻³T⁻¹      W/(m·K)
Thermal diffusivity       L²t⁻¹         m²/s
Time                      t             s
Velocity                  Lt⁻¹          m/s
Volume                    L³            m³

It is possible to reproduce an experiment with geometrical similarity, kinematic similarity, and dynamic similarity. Geometrical similarity requires that the two geometries are correctly scaled, kinematic similarity requires that the fluid streamlines are similar, and dynamic similarity requires similarity of the resulting forces acting on fluid particles and solid surfaces. All the relevant parameters must be correctly scaled to achieve dynamic similarity. In the following paragraph, it is shown that through the Buckingham Π theorem it is possible to identify the relevant nondimensional numbers involved in a certain problem, which need to be reproduced to correctly scale an experiment.

1.3 Buckingham Π theorem

A dimensionally homogeneous equation can be made nondimensional just by dividing all the terms by a given one; the equation will then be a combination of nondimensional numbers. It appears now clear that the appropriate nondimensional scaling of our physical quantities is fundamental to define the similarity between two systems under study or to generalize the results of a given experiment.
The Vaschy–Buckingham Π (pi) theorem [3,4] is the fundamental theorem of dimensional analysis (the interested reader is referred to the book by Yarin [5] on the application of the Π theorem to fluid mechanics problems). This theorem also provides a method for the definition of the nondimensional parameters, even if the underlying equation is unknown. The use of

such a method requires, nevertheless, a robust theoretical background of the experimenter, since the choice of the relevant nondimensional parameters is not unique and the Π theorem is not capable of distinguishing nondimensional parameters with or without physical meaning.
The cornerstone of the Π theorem is to start from a functional relation between the physical quantity under investigation and n physical magnitudes or variables Aᵢ (e.g., force, area, fluid density, fluid viscosity). This functional relation can be formulated as
$$f(A_1, A_2, \ldots, A_n) = 0 \qquad (1.1)$$

If these n variables can be expressed with k dimensionally independent physical quantities (e.g., M, L, t, T), then the original equation can be written as an equation composed of n − k nondimensional numbers obtained from the original variables Aᵢ:

$$\tilde{f}(\Pi_1, \Pi_2, \ldots, \Pi_{n-k}) = 0 \qquad (1.2)$$

where $\Pi_i = A_1^{m_1} A_2^{m_2} \cdots A_n^{m_n}$, with mᵢ being integer numbers. The choice of the n − k nondimensional numbers can be made very easily by choosing k of the original variables as "fundamental variables", which will appear in all the nondimensional numbers Πᵢ, and n − k "dependent" variables, each of which will appear in only one nondimensional number.

Example: nondimensional parameters for aerodynamic forces
Consider the problem of studying the aerodynamic force Fa acting on a body, for example, a sphere. The most relevant dimensional parameters involved in the problem, at a first glance, appear to be the diameter of the sphere d, the fluid density and dynamic viscosity ρ and μ, and the relative velocity U between the flow and the sphere. As such, we can assume that there exists a mathematical relation of the type

$$f(F_a, d, \rho, \mu, U) = 0 \qquad (1.3)$$

in which the number of relevant variables is n = 5 and which, according to Table 1.1, have the dimensions of [MLt⁻²], [L], [ML⁻³], [ML⁻¹t⁻¹], [Lt⁻¹], respectively; thus, the involved physical quantities are M, L, and t, with k = 3. For the moment it can be assumed that temperature changes are not relevant; thus, T is not included as a parameter. This is true if the flow speed is sufficiently low (see "Inviscid incompressible flows" section). We should now be able to write n − k = 2 nondimensional numbers to reduce Equation 1.3 to

$$\tilde{f}(\Pi_1, \Pi_2) = 0 \qquad (1.4)$$

It is possible to characterize the value of Π₁ for various values of Π₂ through a simple set of experiments. In the dimensional space, in order to obtain empirically the magnitude of the aerodynamic forces acting on any sphere in any flow condition, we would need a much bigger set of experiments than is actually needed when using the Buckingham Π theorem!
To find Π₁ and Π₂, the physical quantities ρ, U, and d can be chosen as "fundamental variables" and Fa and μ as "dependent variables", so that Π₁ = ρᵅUᵝdᵞFa and Π₂ = ρᵅ′Uᵝ′dᵞ′μ. The exponents α, β, γ, α′, β′, γ′ can be calculated by imposing that Π₁ and Π₂ are nondimensional, thus getting two systems of three equations in three unknowns for the three independent physical quantities: mass, length, and time. Solving, α = −1, β = −2, γ = −2, α′ = −1, β′ = −1, and γ′ = −1. Equation 1.4 can be rewritten as

$$\tilde{f}\left(\frac{F_a}{\rho U^2 d^2},\ \frac{\mu}{\rho U d}\right) = 0 \qquad (1.5)$$

Equation 1.5 states the existence of a relation Φ such that $F_a = \rho U^2 d^2\, \Phi\!\left(\frac{\mu}{\rho U d}\right)$. This is analogous to the classical expression for aerodynamic forces $F_a = \frac{1}{2}\rho U^2 S C_F$, in which the surface S is proportional to d² and C_F is the force coefficient, which is a function of the Reynolds number Re (Re = ρUd/μ is the inverse of Π₂).

Figure 1.1 Drag coefficient C_F of a sphere versus the Reynolds number Re (log–log axes, Re from 10² to 10⁷; curves for smooth and rough spheres). (Adapted from Schlichting, H., Boundary Layer Theory, 7th edn., McGraw-Hill, 1979.)
C_F represents the ratio between the aerodynamic force and the dynamic pressure (1/2)ρU² of the fluid times the area of the surface of the body "seen" by the flow, while the Reynolds number is the ratio between inertia and viscous forces in a flow, as will be shown in Section 1.6.
As expected from Equation 1.5, experimental data collected over a wide range of conditions for a smooth sphere collapse on the continuous curve in Figure 1.1, C_F being a function of the Reynolds number only. The curve is not simply linear because the 3D flow past a sphere, according to the importance of viscous effects (Reynolds number and surface finish), experiences a transition regime. It has also to be remarked that Figure 1.1 shows the importance of a further parameter that was not taken into account in our analysis, that is, the surface roughness of the sphere, which modifies the aerodynamic behavior of the sphere near the transitions. This would have led to the introduction in Equation 1.5 of a further parameter, that is, the nondimensional surface roughness of the sphere, obtained by dividing the surface roughness by the sphere diameter.
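The exponent calculation above is just a small linear system on the dimension exponents. The following sketch (not from the book; a minimal illustration using NumPy) reproduces the values α = −1, β = −2, γ = −2 and α′ = β′ = γ′ = −1 found above.

```python
import numpy as np

# Rows: exponents of M, L, t; columns: fundamental variables rho, U, d.
# From Table 1.1: rho ~ M L^-3, U ~ L t^-1, d ~ L.
A = np.array([[ 1.0,  0.0, 0.0],   # mass
              [-3.0,  1.0, 1.0],   # length
              [ 0.0, -1.0, 0.0]])  # time

# Pi_1 = rho^alpha U^beta d^gamma F_a must be dimensionless, so the
# fundamental-variable exponents must cancel those of F_a ~ M L t^-2.
alpha, beta, gamma = np.linalg.solve(A, -np.array([1.0, 1.0, -2.0]))
print(alpha, beta, gamma)                                 # -1.0 -2.0 -2.0

# Same system for the dependent variable mu ~ M L^-1 t^-1:
print(np.linalg.solve(A, -np.array([1.0, -1.0, -1.0])))   # [-1. -1. -1.]
```

The two solutions are exactly the groups Π₁ = Fa/(ρU²d²) and Π₂ = μ/(ρUd) appearing in Equation 1.5.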

1.4 Air as a continuum

The continuum hypothesis
In the broader world of fluid dynamics, aerodynamics concerns the motion of gases; it is thus mandatory to characterize the physical properties of gases and then consider their evolution and dynamics. A gas is composed of molecules that are in continuous, random motion. The molecules in motion collide with each other and with the bodies immersed in or containing the gas. A gas in which molecules do not interact except when they collide elastically, and in which other intermolecular forces can be neglected, is defined as a perfect gas. In this section, air will be considered as a perfect gas. The impact of the molecules against a surface results in a change in their velocity (i.e., in a force applied by the molecules to the surface). For an ideal gas, it holds that

$$pV = N R_0 T \qquad (1.6)$$

where
p is the pressure
V is the volume occupied by the gas
N is the number of moles
R₀ = 8.314 J/(mol·K) is the universal gas constant
T is the absolute gas temperature

To have an idea of the number of molecules typically involved in an aerodynamic problem, at ambient pressure equal to 1 atm = 101,300 Pa and temperature equal to 273.15 K, 1 mol (6.023 × 10²³ molecules) of air occupies a volume of 22.4 L; that is, a cubic volume of 1 m³ contains almost 3 × 10²⁵ molecules.
It appears quite intuitive that air can be considered in most common applications as a continuous medium; thus, its properties (density, temperature, pressure) and the flow features, such as velocity, change continuously in space without singularities and can be probed in every volume, arbitrarily small down to a certain limit. The definition of this limit requires a deeper analysis of the behavior of gases, reported in the following.
A quantity to be considered to ascertain the validity of the continuum assumption is the mean free path l, that is, the average distance traveled by a molecule between two collisions with other moving molecules. The comparison of the mean free path with the characteristic length of the problem is done through the Knudsen number Kn = l/L, with l being the mean free path of the molecules and L being the characteristic length of the system under study. A very small Kn (<0.01) means that, given a reference element of volume V₀ ≈ O(L³), there exists in the system an elementary control volume ΔV₀ ≪ V₀ that is still much greater than the cube with edge length equal to the mean free path of the molecules. In other words, fluid dynamic properties can be averaged over small volumes ΔV₀, which are sufficiently small to be treated as "points" if compared to the scale of the flow field under analysis, and contain a sufficiently large number of molecules to obtain a continuous description of the quantities of interest. Under this condition, the fluid can be considered as a continuum.
Given the relevance of the mean free molecular path in supporting the cornerstone assumption of the fluid as a continuum, a path to estimate it is reported here. Let us consider for simplicity gas molecules as spherical particles. If we pick a molecule in a gas with a number density of n molecules/m³ moving at an average speed c̄, and assume as a first approximation that all the other molecules are not moving, it will collide with all the molecules whose center is at a distance equal to the molecule diameter from its own center. The molecule impact section spans in a time Δt a volume equal to πd²c̄Δt, which will contain nπd²c̄Δt molecules. This represents the number of collisions of the molecule over its path. The mean free path is thus of the order of l = 1/(nπd²). Typical values of the mean free path at room temperature for various pressures are reported in Table 1.2. The reader can thus easily understand that in typical applications in aerodynamics Kn is small enough to ensure that the flow can be considered as a continuum, while for applications such as satellites this assumption no longer holds true.
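As a quick order-of-magnitude check of the numbers above, the sketch below (illustrative only; the effective molecular diameter of air, d ≈ 3.7 × 10⁻¹⁰ m, is an assumed value) estimates n, l, and Kn at ambient conditions.

```python
import math

k_B = 1.380649e-23           # Boltzmann constant, J/K
p, T = 101300.0, 273.15      # ambient pressure (Pa) and temperature (K)
d_mol = 3.7e-10              # assumed effective molecular diameter of air, m

n = p / (k_B * T)                      # number density from the ideal gas law
l = 1.0 / (n * math.pi * d_mol**2)     # mean free path, l = 1/(n pi d^2)

print(f"n = {n:.2e} molecules/m^3")    # ~2.7e25, as quoted above
print(f"l = {l * 1e9:.0f} nm")         # tens of nm, same order as Table 1.2

L = 0.1                                # assumed characteristic model length, m
print(f"Kn = {l / L:.1e}")             # << 0.01: the continuum hypothesis holds
```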

Peculiar velocities and compressibility effects
Even if the fluid is macroscopically quiescent, air molecules move freely and interact during collisions in which they exchange energy and momentum. In a gas in equilibrium, the molecules' speeds c (speed means, from now on, the magnitude of the velocity in a given inertial reference system) assume random values with a probability defined according to the Maxwell–Boltzmann probability distribution. Given that R₀ = 8.314 J/(K·mol) is the universal gas constant and m the molar mass, the Maxwell–Boltzmann function is reported in Equation 1.7 and plotted in Figure 1.2, showing that the probability distribution moves toward higher speeds if the temperature increases or if the molecular mass is decreased:

$$f(c) = 4\pi c^2 \left(\frac{m}{2\pi R_0 T}\right)^{3/2} e^{-\frac{m c^2}{2 R_0 T}} \qquad (1.7)$$

Table 1.2 Mean free path of molecules

Vacuum range             Pressure (Pa)    Molecules/m³    Mean free path
Ambient pressure         101,300          2.7 × 10²⁵      68 nm
Low vacuum               30,000–100       10²⁵–10²²       0.1–100 μm
High vacuum              10⁻¹–10⁻⁵        10¹⁹–10¹⁵       10 cm–1 km
Extremely high vacuum    <10⁻¹⁰           <10¹⁰           >10⁵ km

Figure 1.2 Maxwell–Boltzmann probability distribution of the molecular speeds in a gas (plotted as the nondimensional distribution f·√(2R₀T/m) versus the nondimensional speed c/√(2R₀T/m)).

In Figure 1.2, we note that the speed with the highest probability is equal to √(2R₀T/m), while via simple algebra from Equation 1.7 the average speed can be computed and is equal to √(8R₀T/(πm)). The most probable and the average speeds are equal to 396 and 447 m/s, respectively, for air at a temperature of 273.15 K. From thermodynamics, it can also be demonstrated that small pressure disturbances propagate at a slightly lower speed than the average and most probable ones, the Laplacian speed of sound a = √(γR₀T/m), where γ is the ratio of the specific heat at constant pressure to that at constant volume, equal to 1.4 for air (the speed of sound in air at 273.15 K is thus equal to 331 m/s). Using the equations of state of a gas, it can be shown that the speed of sound is equal to the square root of the derivative of the pressure with respect to the density with entropy held constant, a = √((∂p/∂ρ)ₛ).
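The speeds quoted above follow directly from these expressions; a minimal numerical check (the molar mass of air, 28.96 g/mol, is an assumed value):

```python
import math

R0 = 8.314       # universal gas constant, J/(mol K)
m = 0.02896      # molar mass of air, kg/mol (assumed value)
T = 273.15       # temperature, K
gamma = 1.4      # ratio of specific heats for air

c_mp  = math.sqrt(2 * R0 * T / m)              # most probable speed
c_avg = math.sqrt(8 * R0 * T / (math.pi * m))  # average speed
a     = math.sqrt(gamma * R0 * T / m)          # Laplacian speed of sound

print(f"{c_mp:.0f}, {c_avg:.0f}, {a:.0f} m/s")  # ~396, ~447, ~331 m/s
```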
The discussion can now be transferred to air moving macroscopically at a velocity of magnitude v with respect to a given reference frame. For simplicity of discussion, and given the relevance of the propagation of small pressure disturbances (such as sound), it can be assumed that all the particles have a velocity with magnitude equal to a and random orientation (Figure 1.3a). At this point, a new nondimensional number, the Mach number M = v/a, can be immediately introduced. The value of M determines two possible situations, observed in Figure 1.3, in which the gray arrows indicate the bulk velocity of the fluid, the dotted lines are relative to a reference frame moving at the bulk velocity of the fluid, and the continuous arrows are the velocity vectors in the reference frame with respect to which the air is moving. If M < 1 (Figure 1.3b, referred to as the subsonic regime), some molecules are capable of moving upstream against the air macroscopic velocity, while if M > 1 (Figure 1.3c, referred to as the supersonic regime), this cannot happen. In the latter regime, it is not possible for the fluid to transmit information upstream via small pressure disturbances. The Mach number is thus capable of distinguishing between two different situations in which the fluid upstream is or is not informed of any small pressure disturbance.

Figure 1.3 Velocities of molecules for (a) steady air, (b) subsonic flow, and (c) supersonic flow.

The Knudsen number, the Mach number, and the Reynolds number are interrelated. In fact, since the dynamic viscosity of a perfect gas is μ = (1/2)ρc̄l, it is immediate to show that

$$Kn = \frac{l}{L} = \frac{M}{Re}\sqrt{\frac{\gamma \pi}{2}} \qquad (1.8)$$

This means that the continuum hypothesis is verified both for high values of the Reynolds number and for low values of the Mach number, as well as for a combination of these two conditions. The Knudsen number might be, instead, of the order of unity or more for reentry aerodynamics problems, in which the Mach number is greater than 10 and densities are very low, resulting in low Reynolds numbers.
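Equation 1.8 follows by combining the expressions introduced so far; a short derivation sketch, using μ = (1/2)ρc̄l, c̄ = √(8R₀T/(πm)), and a = √(γR₀T/m):

```latex
% Derivation sketch of Equation 1.8
\[
Kn \;=\; \frac{l}{L}
   \;=\; \frac{2\mu}{\rho\,\bar{c}\,L}
   \;=\; \underbrace{\frac{\mu}{\rho U L}}_{1/Re}\,\frac{2U}{\bar{c}}
   \;=\; \frac{1}{Re}\,\frac{2U}{a}\sqrt{\frac{\pi\gamma}{8}}
   \;=\; \frac{M}{Re}\sqrt{\frac{\gamma\pi}{2}}
\]
```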

Continuum hypothesis: Is it still valid in the small scales of turbulent flows?
In turbulent flows, in which vortices appear along a wide spectrum of scales and interact with each other, one might question whether at the smallest scales the flow can still be treated as a continuum. Consider a turbulent flow, decomposed à la Reynolds into a mean flow and fluctuations superimposed on it. The kinetic energy corresponding to the fluctuating velocity u′ flows from the mean flow into large energy-containing scales with characteristic wavelength ℓ (comparable to the scale of the macroscopic problem). These large structures (broadly referred to as eddies) break up into smaller ones, and from them into smaller eddies, and so on, until a scale is reached at which the Reynolds number, based on the eddy size, is of order unity, thus leading to dissipation of energy into heat by viscous forces. In statistically steady turbulence, the amount of turbulent kinetic energy per unit mass dissipated per unit time must be equal to the amount of energy that enters the "spectral pipeline", ε = u′³/ℓ (given that the kinetic energy is u′² and the eddy lifetime, or eddy turnover time, is of the order of ℓ/u′; the student interested in the topic of turbulence may find several specialized textbooks such as [7]).
According to the celebrated Kolmogorov theory [8], the smallest scales of turbulence are universal and must depend only on ε and on the kinematic viscosity ν = μ/ρ. Kinematic viscosity has dimensions of L²/t, and the energy dissipation rate per unit mass has dimensions of L²/t³; thus, from dimensional analysis it can be shown that the characteristic time of viscous dissipation is τ_η = (ν/ε)^(1/2) (referred to as the Kolmogorov timescale). With a similar argument, the characteristic length scale of dissipation, which is the smallest turbulent length scale, is referred to as the Kolmogorov microscale η = (ν³/ε)^(1/4). The Kolmogorov velocity scale is thus u_η = (εν)^(1/4).
Defining the turbulent Reynolds number Re_ℓ = u′ℓ/ν, it is found that

$$\frac{\eta}{\ell} = \left(\frac{\nu^3}{\varepsilon}\right)^{1/4} \frac{1}{\ell} = \left(\frac{\nu^3 \ell}{u'^3}\right)^{1/4} \frac{1}{\ell} = Re_\ell^{-3/4} \qquad (1.9)$$

When comparing η to the mean free path l, from Equations 1.8 and 1.9 it results that l/η = M_t/Re_ℓ^(1/4), where M_t is the turbulent fluctuating Mach number, typically much smaller than one. Additionally, turbulent flows of interest in aerodynamics are generally characterized by large values of Re_ℓ; consequently, air in the flows of interest of aerodynamics can be considered as a continuum throughout the whole spectral range of turbulence.
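As a worked example under assumed conditions (u′ = 1 m/s and ℓ = 0.1 m in air, illustrative values only), the Kolmogorov scales and the ratio of Equation 1.9 can be evaluated as follows:

```python
# Kolmogorov scales from the dissipation estimate eps = u'^3 / l.
nu = 1.5e-5             # kinematic viscosity of air, m^2/s
u_p, ell = 1.0, 0.1     # assumed fluctuating velocity (m/s) and integral scale (m)

eps   = u_p**3 / ell            # dissipation rate per unit mass, m^2/s^3
eta   = (nu**3 / eps)**0.25     # Kolmogorov length scale, m
tau   = (nu / eps)**0.5         # Kolmogorov time scale, s
u_eta = (eps * nu)**0.25        # Kolmogorov velocity scale, m/s
Re_l  = u_p * ell / nu          # turbulent Reynolds number

print(f"eta = {eta*1e6:.0f} um, tau = {tau*1e3:.1f} ms, u_eta = {u_eta:.3f} m/s")
print(f"eta/ell = {eta/ell:.2e} = Re_l^(-3/4) = {Re_l**-0.75:.2e}")  # Equation 1.9
```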

1.5 Navier–Stokes equations

The governing principles in fluid mechanics and aerodynamics are the conservation laws (for mass, momentum, and energy) and the second law of thermodynamics. As will be shown in the "Lagrangian and Eulerian specification of the flow field" section, it is possible to write these laws both in integral and in differential form, referring to a certain spatial volume (Eulerian description) or to a certain fluid mass (Lagrangian description). The reader interested in a complete derivation is referred to fluid mechanics textbooks such as [9].

Lagrangian and Eulerian specification of the flow field
It is possible to classify physical quantities into intensive and extensive ones. Extensive properties increase with increasing system size, as happens with mass or energy. Intensive properties, on the contrary, are bulk properties (such as temperature) that do not depend on the amount of material considered. Several intensive quantities are obtained as the ratio of two extensive quantities in order to remove the dependence on the system size, as in the case of the density (mass per unit volume).
Once air is defined as a continuum, it is possible to define a certain subsystem, arbitrarily small, in which the intensive properties have a finite value. The intensive flow quantities can thus be depicted as a function of position and time. The system under study can be defined either in terms of a control volume (Eulerian specification) or in terms of a control mass (Lagrangian specification). In the Eulerian specification, the system of interest is defined by the air contained in a fixed control volume; the mass contained in the control volume is instead a function of time, since it will enter into, and exit out of, the volume through its boundary surface. In the case of the Lagrangian specification, the system is defined by a control mass that will occupy a volume changing with time; analogously, the delimiting surface of this volume will also vary with time due to the flow field, which will change the shape of the control volume.
Consider a certain continuous quantity q(x, t) that is probed continuously in time in a certain Eulerian volume. Indicating with x the spatial coordinates in the Eulerian reference frame and with u the flow velocity, it may be interesting to quantify the total rate of change of q(x, t) in a Lagrangian fluid element.
As a fluid element moves through a flow field, the total rate of change of the quantity q(x, t) described by its Eulerian specification is equal to the sum of the local rate of change in time (∂q/∂t) and the convective rate of change of q. It is thus possible to introduce the Lagrangian derivative:

$$\frac{Dq}{Dt} = \frac{\partial q}{\partial t} + (\mathbf{u} \cdot \nabla) q \qquad (1.10)$$

The derivative ∂q/∂t is also referred to as the local or Eulerian derivative. Equation 1.10 allows one to pass from a Lagrangian to a Eulerian specification of a flow field through the Lagrangian derivative Dq/Dt (also referred to as the substantial derivative or material derivative), in which ∇ denotes the gradient operator in the Eulerian frame.
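A minimal numerical sketch of Equation 1.10 (all fields and grids below are hypothetical): for a field that is purely advected, q(x, t) = sin(2π(x − ut)), the material derivative must vanish even though the local derivative does not.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)     # 1D Eulerian grid
dt = 1e-4                           # time step, s
u = 2.0                             # uniform advection velocity, m/s

q_old = np.sin(2 * np.pi * x)                 # field at time t
q_new = np.sin(2 * np.pi * (x - u * dt))      # same field advected by u*dt

dq_dt = (q_new - q_old) / dt        # local (Eulerian) rate of change
dq_dx = np.gradient(q_new, x)       # spatial gradient
DqDt = dq_dt + u * dq_dx            # Lagrangian (material) derivative

print(np.abs(dq_dt).max())          # ~4*pi: the local field does change in time
print(np.abs(DqDt).max())           # near zero: the advected field is unchanged
```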

Conservation of mass
Consider a Lagrangian description of the flow field: the mass contained in the material volume will not change (the volume V(t), instead, will change with time), that is,

$$\frac{\partial}{\partial t} \int_{V(t)} \rho \, dV = 0 \qquad (1.11)$$

Passing to a Eulerian description of the flow field, the control volume will be constant and the mass contained in it will increase (decrease) according to the amount of mass entering into it (exiting out of it) through the volume surface A. This is equal to the surface integral over A of the convective mass flux ρu·n, in which n is the direction normal to the surface A:


$$\int_V \frac{\partial \rho}{\partial t} \, dV + \int_A \rho \, \mathbf{u} \cdot \mathbf{n} \, dA = 0 \qquad (1.12)$$

The derivation of the differential form requires the application of the divergence theorem and
results in obtaining

$$\int_V \left( \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) \right) dV = 0 \qquad (1.13)$$

Equation 1.13 is valid for any volume (arbitrarily small). This is possible only if the argument of the integral is null everywhere; thus, Equation 1.13 can be written in differential form by equating the integrand to zero, ∂ρ/∂t + ∇·(ρu) = 0. This equation is referred to as the continuity equation.

Newton’s The variation of the momentum of a certain control mass is equal to the resultant of the
second law external forces applied to it; in particular, forces can be divided into body forces (such as
the gravity g, that is, gravitational force referred per unit mass) and surface forces acting on
the boundary of the material volume:

¶t ò rudV = ò( )rgdV + ò( ) fdA
V (t ) V t A t
(1.14)

where
ρu is the momentum per unit volume of the flowing fluid
f is the force per unit area acting on the surface of the material volume

Switching from the Lagrangian to the Eulerian description, the transport term of the momentum flux through the surface of the control volume should be included:

$$\int_V \frac{\partial}{\partial t} (\rho \mathbf{u}) \, dV + \int_A \rho \mathbf{u} (\mathbf{u} \cdot \mathbf{n}) \, dA = \int_V \rho \mathbf{g} \, dV + \int_A \mathbf{f} \, dA \qquad (1.15)$$

Surface forces act on a fluid element through direct contact on the surface, and f has units of a pressure or stress (force per unit area). If n is the local surface normal, then f = τ·n, in which τ is the stress tensor (see [9] for a complete description of the terms composing τ). By applying the divergence theorem, Equation 1.15 becomes

$$\int_V \left( \frac{\partial}{\partial t}(\rho \mathbf{u}) + \nabla \cdot (\rho \mathbf{u} \mathbf{u}) - \nabla \cdot \boldsymbol{\tau} - \rho \mathbf{g} \right) dV = 0 \qquad (1.16)$$

The stress tensor τ is symmetric and has nine components. Surface stresses include pressure, which acts normal to the element surface, and viscous stresses. Pressure can be further divided into the thermodynamic pressure (defined in Equation 1.6) and a pressure related to the volumetric strain rate, that is, the divergence of u. Deformations and stresses (the rate of change of the deformation over time) in a fluid element are related by the fluid's constitutive equation. Air is a Newtonian fluid, that is, the viscous stress tensor is linearly proportional to the local strain rate. This is equivalent to stating that the viscous part of the surface forces is proportional to the rate of change of the fluid's velocity vector as one moves away from the point of observation. We can thus write that

$$\tau_{i,j} = -p \delta_{i,j} + \mu \left( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \right) + \left( \mu_v - \frac{2}{3} \mu \right) \nabla \cdot \mathbf{u} \, \delta_{i,j} \qquad (1.17)$$

in which μ_v is the coefficient of bulk viscosity, typically found to be nonzero in polyatomic gases due to the effect of relaxation related to molecular rotation. The term δ_{i,j} is the Kronecker delta, equal to 1 for i = j and to zero for i ≠ j. The second term in Equation 1.17 is the viscous term due to the symmetric part of the stress tensor (the antisymmetric part only produces "solid-body" rotation), while the third term is the one related to compressibility.
Replacing the constitutive equation for τ_{i,j} into Equation 1.16 and rearranging the first two terms considering the continuity equation, it is possible to derive the Navier–Stokes momentum equation for Newtonian fluids:

$$\rho \frac{D\mathbf{u}}{Dt} = -\nabla p + \rho \mathbf{g} + \mu \nabla^2 \mathbf{u} + \left( \mu_v + \frac{1}{3} \mu \right) \nabla (\nabla \cdot \mathbf{u}) \qquad (1.18)$$
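As a concrete check of the constitutive relation (Equation 1.17), the sketch below (illustrative only, with a made-up velocity gradient) assembles the stress tensor and verifies that it is symmetric.

```python
import numpy as np

mu, mu_v, p = 1.81e-5, 0.0, 101300.0       # viscosity, bulk viscosity, pressure (assumed)
grad_u = np.array([[100.0,  20.0, 0.0],    # du_i/dx_j, an arbitrary example, 1/s
                   [-40.0,  50.0, 0.0],
                   [  0.0,   0.0, 0.0]])

div_u = np.trace(grad_u)                   # volumetric strain rate, div(u)
tau = (-p * np.eye(3)                      # thermodynamic pressure term
       + mu * (grad_u + grad_u.T)          # viscous term (symmetric velocity gradient)
       + (mu_v - 2.0 * mu / 3.0) * div_u * np.eye(3))  # compressibility term

print(np.allclose(tau, tau.T))             # True: the stress tensor is symmetric
```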

Conservation of energy (first law of thermodynamics)
The internal energy contained in a material volume increases (decreases) by the amount of heat supplied to (extracted from) it and decreases by the amount of work done by (onto) the fluid contained in the volume onto (by) the surroundings; thus, if a certain material volume is considered,

$$\frac{\partial}{\partial t} \int_{V(t)} \rho \left( e + \frac{1}{2} u^2 \right) dV = \int_{V(t)} \rho \mathbf{g} \cdot \mathbf{u} \, dV + \int_{A(t)} \mathbf{f} \cdot \mathbf{u} \, dA - \int_{A(t)} \mathbf{q} \cdot \mathbf{n} \, dA \qquad (1.19)$$

where
e is the internal energy per unit mass
(1/2)u² is the kinetic energy per unit mass
q is the heat flux through the volume boundary A(t)

Notice that normally the gravitational potential energy (the first term on the right-hand side of Equation 1.19) is neglected, since gravity forces are negligible compared to inertia forces in typical aerodynamics problems; buoyancy-driven flows, of course, constitute an exception.
Equation 1.19 can be rewritten in the Eulerian specification and, using the divergence theorem, it becomes

$$\int_V \left\{ \frac{\partial}{\partial t}\left[\rho\left(e + \frac{1}{2}u^2\right)\right] + \nabla \cdot \left[\rho\left(e + \frac{1}{2}u^2\right)\mathbf{u}\right] - \rho \mathbf{g} \cdot \mathbf{u} - \nabla \cdot (\boldsymbol{\tau} \cdot \mathbf{u}) + \nabla \cdot \mathbf{q} \right\} dV = 0 \qquad (1.20)$$

Equation 1.20 contains both thermal energy and mechanical energy. The equation for mechani-
cal energy can be obtained by scalar multiplication of the Navier–Stokes equation with the
velocity vector. Subtracting the mechanical energy equation from the total energy conservation
equation leads to the internal energy equation. After some manipulation, it can be shown that

$$\rho \frac{D}{Dt}\left(\frac{u^2}{2}\right) = -\nabla p \cdot \mathbf{u} + \rho \mathbf{g} \cdot \mathbf{u} + \mu \nabla^2 \mathbf{u} \cdot \mathbf{u} + \left( \mu_v + \frac{1}{3}\mu \right) \nabla (\nabla \cdot \mathbf{u}) \cdot \mathbf{u}$$

$$\frac{De}{Dt} = -p \frac{D}{Dt}\left(\frac{1}{\rho}\right) - \frac{1}{\rho} \nabla \cdot \mathbf{q} + \frac{1}{\rho} \langle \boldsymbol{\tau}, \nabla \mathbf{u} \rangle_F \qquad (1.21)$$

in which the operator $\langle a, b \rangle_F = \sum_i \sum_j a_{i,j} b_{i,j}$ is the Frobenius inner product. Since the stress tensor is symmetric, obviously only the symmetric part of the velocity gradient contributes to the last term of Equation 1.21. This viscous term is commonly referred to as the turbulent kinetic energy dissipation rate ε, which accounts for the kinetic energy dissipated into heat per unit mass through fluid element deformation, as already mentioned in Section 1.4. It is now clear that in turbulent flows the dissipation term represents the irreversible conversion, due to viscosity, of mechanical energy into thermal energy at the dissipative length scales of the order of η.

Second law of thermodynamics
According to the Planck statement of the second law of thermodynamics [10], every process occurring in nature proceeds in the sense in which the sum of the entropies of all bodies taking part in the process is increased. In the limit, that is, for reversible processes, the sum of the entropies remains unchanged and T ds = dq.
Since de = dq − p d(1/ρ), the entropy variation can be written as ds = (1/T)(de + p d(1/ρ)); thus, from Equation 1.21, it is possible to write

$$\frac{Ds}{Dt} = -\frac{1}{\rho T} \nabla \cdot \mathbf{q} + \frac{\varepsilon}{T} \qquad (1.22)$$

The term (1/ρT)∇·q can be further written as (1/ρ)∇·(q/T) + q·∇T/(ρT²), from which it is clear that entropy is transferred through heat conduction and is produced through both viscous dissipation of mechanical energy into heat and heat conduction, which is a nonreversible process.

1.6 Nondimensional numbers

An extremely powerful tool to understand the relevance of the terms appearing in the previous equations is dimensional analysis. The previous equations can be nondimensionalized by defining characteristic scales for the physical quantities such as length, velocity, etc. In this way, all the equations can be expressed in terms of nondimensional variables (of order 1, if the characteristic scales are properly chosen) multiplied by the corresponding dimensional coefficient that determines the relative order of magnitude of each term. The ratios of such coefficients are nondimensional parameters that set the relative importance of the various terms in the governing differential equations.
Applying this process to the Navier–Stokes equations, the nondimensional variables (denoted by an asterisk superscript) are

$$x^* = \frac{x}{L}, \quad t^* = f t, \quad u^* = \frac{u}{U}, \quad p^* = c_p = \frac{p - p_\infty}{\frac{1}{2} \rho U^2}, \quad g^* = \frac{\mathbf{g}}{g} \qquad (1.23)$$

with L, f, U being, respectively, the reference length, frequency, and velocity. Here, it has been found convenient to express the nondimensional pressure as a pressure difference with respect to a reference value, divided by the dynamic pressure; shear stresses are typically scaled analogously, the nondimensional term being referred to as c_f. Dividing all the terms of the momentum equation by the coefficient of the convective term and neglecting, for simplicity, the bulk viscosity term, Equation 1.18 becomes

$$\left[ \frac{f L}{U} \right] \frac{\partial \mathbf{u}^*}{\partial t^*} + (\mathbf{u}^* \cdot \nabla^*) \mathbf{u}^* = -\nabla^* p^* + \left[ \frac{g L}{U^2} \right] \mathbf{g}^* + \left[ \frac{\mu}{\rho U L} \right] \nabla^{*2} \mathbf{u}^* \qquad (1.24)$$

The terms between square brackets are three nondimensional numbers, detailed in the
following.
The Reynolds number, already mentioned earlier, is the ratio of the inertia force to the
viscous force:

$$Re = \frac{\rho U L}{\mu} \qquad (1.25)$$

Reproducing Re is a requirement for the similarity of flows in which viscous forces are important. In this case, matching Re between the two conditions is needed to obtain dynamic similarity. It is now clear why, in Figure 1.1, the drag coefficient of the sphere does not depend on Re if Re ≫ 1: in this regime, viscous forces are much smaller than inertia forces and dynamic similarity is achieved even if Re is not perfectly reproduced.
The Strouhal number St is the ratio between the unsteady acceleration and the convection of momentum, the two parts of the Lagrangian derivative. It is relevant in flows with natural oscillations (think, e.g., of the well-known Kármán vortex shedding in the wake of bluff bodies) or with a mechanically imposed oscillating motion (such as the flapping wings of birds):

$$St = \frac{fL}{U} \quad (1.26)$$

The Froude number is the ratio between inertia forces and gravity forces; it is typically of minor importance in aerodynamics, while it can be of great importance in hydrodynamics and naval engineering applications:

$$Fr = \frac{U}{\sqrt{gL}} \quad (1.27)$$

The nondimensional form of the continuity equation can be used to estimate whether the flow is compressible or incompressible, that is, whether pressure and density variations are large enough to induce a significant difference with respect to incompressible flow conditions. The mass conservation equation (Equation 1.13) can be rewritten in terms of the substantial derivative of the density and the divergence of velocity:

$$\frac{1}{\rho}\frac{D\rho}{Dt} = -\nabla\cdot\mathbf{u} \quad (1.28)$$

Equation 1.28 can be further simplified by assuming that density changes occur isentropically (i.e., observing that the speed of sound is the square root of the derivative of the pressure with respect to the density at constant entropy, so $dp = a^2 d\rho$):

$$\frac{1}{\rho a^2}\frac{Dp}{Dt} = -\nabla\cdot\mathbf{u} \quad (1.29)$$

which, writing now t* as Ut/L, becomes in nondimensional form

$$\left[\frac{U^2}{a^2}\right]\frac{1}{\rho^*}\frac{Dp^*}{Dt^*} = -\nabla^*\cdot\mathbf{u}^* \quad (1.30)$$

The previously introduced Mach number is thus shown to be the square root of the ratio between inertia and compressibility forces. If the Mach number is small enough, $\nabla\cdot\mathbf{u} = 0$ and the density ρ is constant over time and space (see Equation 1.28). In aerodynamics, flows are broadly considered incompressible if M < 0.3, as the fluid dynamic conditions then differ from the ideal incompressible flow (M = 0) by less than 10%. As shown earlier, flows with M < 1 are called subsonic and those with M > 1 are called supersonic. Matching those conditions in dynamic similarity is almost always mandatory if the flow is compressible.

$$M = \frac{U}{a} \quad (1.31)$$

1.7 Some types of flows

Depending on the values of the nondimensional numbers involved in the problem, it is possible to identify some special flow conditions. Understanding the characteristics of such flows allows for defining simplified equations and for identifying/reducing the requirements to reproduce similarity in an experiment.

Inviscid incompressible flows
If the Mach number is small enough and the Reynolds number is large enough, viscous forces and compressibility forces can be neglected: the flow is thus inviscid and incompressible (note that incompressibility is a property of the flow, not of the fluid). In the applications of interest in aerodynamics, the term related to gravity acceleration can also be omitted, since Fr ≫ 1. The momentum equation, expressed for a Lagrangian mass element under these conditions, is

$$\nabla p = -\rho\,\frac{D\mathbf{u}}{Dt} \quad (1.32)$$

This relation is called Euler's equation. A very useful form of the equation is then obtained by integrating the Euler equation over a trajectory starting at point 1 and ending at point 2:

$$p_1 + \frac{\rho u_1^2}{2} = p_2 + \frac{\rho u_2^2}{2} \quad (1.33)$$

In other words, $p + \rho u^2/2$ is constant for a certain Lagrangian element. It can be shown that for inviscid incompressible flows $p + \rho u^2/2$ is constant over the entire flow field.
The absence of compressibility and viscous terms in Equation 1.33 shows that, in a certain experiment, if the Mach number is sufficiently smaller than one it is not necessary to exactly reproduce its value; the same holds true for the Reynolds number if it is sufficiently large. The design of an experiment will, in general, require an appropriate literature review and an order-of-magnitude analysis to estimate whether compressibility and viscous effects need to be reproduced or not.
As an example, the reader could think of the case of a glider with a chord of 1 m flying at a speed of 20 m/s in air at standard conditions (temperature equal to 273.15 K and pressure equal to 100,000 Pa). The Mach number is small enough to neglect compressibility effects, which thus do not need to be reproduced. On the other hand, the size of the glider might imply a Reynolds number based on the chord of the order of 1 · 10⁶. The effect of the Reynolds number on the aerodynamic force coefficients is, in this case, negligible only at small angles of attack, as reported for airfoil sections in the classical book by Abbott et al. [11].
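To put these orders of magnitude in numbers, a minimal Python sketch (not part of the original text; the gas constant, specific heat ratio, and dynamic viscosity of air at 273.15 K are standard values assumed here) evaluates the Reynolds and Mach numbers for the glider example:

```python
import numpy as np

T = 273.15        # temperature, K
p = 100_000.0     # pressure, Pa
R = 287.05        # specific gas constant of air, J/(kg K) (assumed standard value)
gamma = 1.4       # specific heat ratio of air
mu = 1.72e-5      # dynamic viscosity of air at 273.15 K, Pa s (assumed standard value)

U = 20.0          # flight speed, m/s
L = 1.0           # wing chord, m

rho = p / (R * T)               # ideal gas law
a = np.sqrt(gamma * R * T)      # speed of sound
Re = rho * U * L / mu           # Reynolds number (Equation 1.25)
M = U / a                       # Mach number (Equation 1.31)

print(f"rho = {rho:.3f} kg/m^3, a = {a:.1f} m/s")
print(f"Re = {Re:.2e}, M = {M:.3f}")   # Re of the order of 10^6, M of the order of 0.06
```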

Inviscid compressible flows
According to Equation 1.30, if the Mach number is sufficiently large, the flow experiences non-null divergence; velocity changes are then associated with density changes. As previously stated, a common choice is to consider the flow compressible for M > 0.3 (the density change is greater than 5% in this case). The study of compressible flows is relevant for high-speed aircraft and jet engines, as well as for several industrial applications.
For a Lagrangian fluid element, the following still holds:

$$\int_{p_1}^{p_2}\frac{dp}{\rho} + \frac{u^2}{2} = \mathrm{const} \quad (1.34)$$

With good approximation, several processes of interest can be considered as isentropic (i.e., reversible and adiabatic, such as the external flow around an airfoil at high Reynolds number). They are thus characterized by the relation $p/\rho^\gamma = \mathrm{const}$ (this can be derived from the definition of the speed of sound in the "Peculiar velocities and compressibility effects" section), where γ is the specific heat ratio of the gas. Equation 1.34, once integrated, then gives

$$\frac{\gamma}{\gamma-1}\frac{p}{\rho} + \frac{u^2}{2} = \mathrm{const} \quad (1.35)$$

This quantity is commonly referred to as the total enthalpy of the flow.


These conditions allow analyzing the acceleration of air from rest ($u_0 = 0$). Recalling the state law for ideal gases and the definition of the Mach number, Equation 1.35 simplifies to

$$T_0 = T\left(1 + \frac{\gamma-1}{2}M^2\right) \quad (1.36)$$

The state 0 is referred to as stagnation conditions. Thermodynamic properties at stagnation can be measured if the flow is decelerated adiabatically and isentropically (even though for Equation 1.36 the hypothesis of isentropic flow is not needed). For the stagnation pressure

and the stagnation density, relations analogous to Equation 1.36 can be obtained by using the properties of adiabatic isentropic processes:

$$p_0 = p\left(1 + \frac{\gamma-1}{2}M^2\right)^{\frac{\gamma}{\gamma-1}}, \quad \rho_0 = \rho\left(1 + \frac{\gamma-1}{2}M^2\right)^{\frac{1}{\gamma-1}} \quad (1.37)$$
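As an illustration, a short Python sketch (an addition, assuming a perfect gas with γ = 1.4) evaluates the isentropic relations of Equations 1.36 and 1.37 for a few Mach numbers:

```python
import numpy as np

gamma = 1.4  # specific heat ratio of air (assumed)

def stagnation_ratios(M):
    """Return T0/T, p0/p, rho0/rho at Mach M (Equations 1.36 and 1.37)."""
    r = 1.0 + 0.5 * (gamma - 1.0) * M**2
    return r, r**(gamma / (gamma - 1.0)), r**(1.0 / (gamma - 1.0))

for M in (0.3, 1.0, 2.0):
    T_ratio, p_ratio, rho_ratio = stagnation_ratios(M)
    print(f"M = {M:3.1f}: T0/T = {T_ratio:.3f}, "
          f"p0/p = {p_ratio:.3f}, rho0/rho = {rho_ratio:.3f}")
```

At M = 0.3, the density ratio returned by this sketch is close to 1.05, consistent with the 5% density change quoted above.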

As shown in Figure 1.3, if the flow is subsonic the fluid upstream is informed of any small pressure disturbance and can accommodate the presence of a body immersed in the flow. If the flow is supersonic, small pressure disturbances (which travel at the speed of sound) cannot travel upstream; thus, the flow has to react abruptly to the disturbances given, for example, by the presence of a body. A shock wave is a type of propagating large-pressure disturbance; it has a relative speed with respect to the fluid that is larger than the speed of sound. A shock wave causes an abrupt increase in density, temperature, and pressure, as well as an increase of entropy related, for nonreacting flows, to a stagnation pressure decrease (the interested reader is referred to [12] for a complete reference on compressible gas dynamics). Similarly, expansion in the supersonic regime can be obtained through expansion fans, which are ensembles of isentropic waves with normal Mach number equal to 1.
It might appear obvious to the reader that correctly reproducing the Mach number and the geometry in a compressible flow study is mandatory; as will be shown in Chapter 3, this requires the design of special wind tunnels.

Hypersonic reentry flows
Further increasing the Mach number in the supersonic regime, the hypersonic regime is reached. The definition of the Mach number at which a flow is considered hypersonic varies depending on the phenomenon considered; in any case, all the "hypersonic effects" are present for Mach numbers higher than 5, and this is the definition commonly accepted in the community. The book by Anderson provides a complete reference on the phenomena involved in hypersonic aerodynamics [13]. This regime typically applies to flows related to spacecraft during their reentry into the atmosphere.
What really catches the interest of the researcher is not the flow at high Mach number (which, although complex, is relatively well known), but the flow after the shock wave caused by a body immersed in a very high Mach number stream. The high temperatures (close to the stagnation temperature) reached by the low-speed high-enthalpy flow after the shock cause nonequilibrium phenomena, such as the excitation of the molecular vibrational states and the dissociation and/or ionization of molecules, resulting in convective and radiative heat fluxes that challenge the design capabilities of thermal protection systems. This is the reason why hypersonic flows require intensive studies on heat flux, as discussed in Chapter 6.
A further complication for such flows is that they are typically experienced in the upper layers of the atmosphere, where the density is quite small, so the relatively large mean free molecular path challenges the application of the continuum hypothesis.

Boundary layers
For high Reynolds number flows, viscous forces are negligible with respect to inertia forces. Nevertheless, due to the continuum hypothesis, a flow adjacent to a wall cannot have a slip velocity. Prandtl [14] first presented the concept of the boundary layer, stating that close to a solid boundary there must exist a region where the flow decelerates from the freestream velocity to zero speed in order to satisfy the no-slip condition. The flow is decelerated by viscous forces, producing shear stresses on the wall (see discussion in Chapter 12).
Consider a 2D flow with the velocity component $u_x$ defined as parallel to the wall and the velocity component $u_y$ defined as perpendicular to the wall: the steady momentum equation for $u_x$ is

$$u_x\frac{\partial u_x}{\partial x} + u_y\frac{\partial u_x}{\partial y} = -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu\left(\frac{\partial^2 u_x}{\partial x^2} + \frac{\partial^2 u_x}{\partial y^2}\right) \quad (1.38)$$

FIGURE 1.4 Schematic representation of a boundary layer.

Equation 1.38 can be put in nondimensional form, choosing as characteristic velocity the freestream velocity $U_\infty$ and as characteristic length for the wall-parallel direction the characteristic length of the considered problem (e.g., for an airfoil, its chord). The characteristic length to be chosen for the derivative along the wall-normal direction has to be equal to the thickness of the region through which the flow is decelerated to zero speed. In this region, the first and the last terms of Equation 1.38 must be of the same order of magnitude.
This layer of fluid in which viscous forces are not negligible is referred to as the boundary layer, its thickness being indicated with the Greek letter δ.
The first and the last terms of Equation 1.38 represent, respectively, inertia and viscous effects. Since their approximate sizes are $U_\infty^2/L$ and $\nu U_\infty/\delta^2$, they are of the same order of magnitude only if $\delta \cong L/\sqrt{Re}$.
Within the boundary layer thickness δ, the flow velocity varies from zero at the wall up to U at the boundary, U being the velocity outside of the boundary layer, which approximately corresponds to the freestream velocity (see Figure 1.4).
More precisely, the value of δ is arbitrary, since the friction force decreases with the distance from the wall and becomes equal to zero only at infinity. A broadly accepted definition of the boundary layer edge corresponds to the location where the velocity is 99% of the external velocity.
The presence of the boundary layer also distorts the surrounding nonviscous flow. In fact, another measure of the boundary layer can be defined, which is the displacement thickness δ*, the thickness of a layer of zero-velocity fluid corresponding to the same velocity deficit as the actual boundary layer:

$$\delta^* = \int_0^\infty\left(1 - \frac{u}{u_\infty}\right)dy \quad (1.39)$$

The momentum loss in the flow due to the presence of the boundary layer (the integral effect of the shear stresses) can be accounted for through the momentum thickness θ:

$$\theta = \int_0^\infty\frac{u}{u_\infty}\left(1 - \frac{u}{u_\infty}\right)dy \quad (1.40)$$

Differently from the boundary layer thickness, the definition of the displacement and momentum thicknesses is not arbitrary.
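As an illustration of how Equations 1.39 and 1.40 can be evaluated in practice from a sampled velocity profile, the following Python sketch (an addition, not from the original text; the sinusoidal profile is purely an assumed stand-in for measured data) integrates a profile with the trapezoidal rule:

```python
import numpy as np

def trapz(f, x):
    """Trapezoidal integration of samples f over the grid x."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

delta = 6e-3                           # boundary layer thickness, m (assumed)
y = np.linspace(0.0, 3 * delta, 500)   # wall-normal coordinate, m
# assumed illustrative profile: u/u_inf = sin(pi/2 * y/delta) inside the layer
u_ratio = np.where(y < delta, np.sin(0.5 * np.pi * y / delta), 1.0)

delta_star = trapz(1.0 - u_ratio, y)            # displacement thickness (Eq. 1.39)
theta = trapz(u_ratio * (1.0 - u_ratio), y)     # momentum thickness (Eq. 1.40)

print(f"delta* = {delta_star * 1e3:.2f} mm, theta = {theta * 1e3:.2f} mm")
```

Note that, consistently with the definitions, the integration can be truncated a short distance above the boundary layer edge, since the integrands vanish where $u = u_\infty$.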
At the same time, from the steady momentum equation along the wall-normal direction it can be shown that ∂p/∂y = 0 through the boundary layer, so the static pressure within the boundary layer is constant at a given streamwise location and is equal to the static pressure outside of the boundary layer. Assuming that the flow field is incompressible, the Bernoulli theorem can be applied outside of the boundary layer and the streamwise pressure variation can be obtained as

$$\frac{1}{\rho}\frac{\partial p}{\partial x} = -U_x\frac{\partial U_x}{\partial x} \quad (1.41)$$

This result can be further analyzed by evaluating Equation 1.38 at the wall (where u = 0):

$$\nu\left.\frac{\partial^2 u_x}{\partial y^2}\right|_{y=0} = \frac{1}{\rho}\frac{\partial p}{\partial x} = -U_x\frac{\partial U_x}{\partial x} \quad (1.42)$$

This means that the curvature of the boundary layer profile depends on the outer pressure gradient, that is, on the flow field outside of the boundary layer. The example in Figure 1.4 is representative of a flat plate with zero pressure gradient; thus, there is no curvature of the velocity profile at the wall. An accelerated flow results in a favorable (negative) pressure gradient, with negative curvature of the velocity profile at the wall, while a decelerated flow results in an adverse (positive) pressure gradient, with a velocity profile with positive curvature at the wall. The positive pressure gradient is referred to as adverse because, considering the flow around a body, the boundary layer may separate from the surface before reaching the rear part of the body, forming a thick low-momentum wake in which the flow field is unsteady and might exhibit counterflow.

1.8 Laminar versus turbulent flows

Laminar and turbulent regimes
If the Reynolds number is sufficiently low, momentum transfer is dominated by diffusion and the flow is referred to as laminar. This means that the motion of the fluid is ordered and any lateral mixing and swirl are absent, as if the fluid were composed of laminae sliding one past another.
As the Reynolds number increases, the momentum transfer starts being dominated by convection and the flow develops instabilities even to infinitesimal disturbances. The flow turns over to what is referred to as turbulent motion: the velocity at a given point in the flow is not constant over time and, if several measurements are taken under identical conditions, different values are obtained. Turbulent motions make the velocity assume instantaneously random values independent of the macroscopic characteristics of the flow, which can instead maintain steady statistical properties (tools for the analysis of turbulent data are reported in Chapter 2).
Turbulence is easily observed in nature and in daily life: the smoke plume rising from a cigarette represents an example of turbulent flow (see Figure 4.2). A small flow of hot gas is accelerated by buoyancy while entraining ambient air and, as it moves upward, both its velocity and characteristic length (the width of the smoke plume) increase. Initially, the flow is laminar; then the smoke experiences a transition to turbulence as its Reynolds number increases, due to the increase in the flow velocity and characteristic length (the plume width). The observation of the cigarette smoke shows the presence of vortical motion on several scales. The interaction between vortices always results in vortex stretching and tilting, which causes the spread of velocity fluctuations over a continuous spectrum of wavelengths. This spectrum is delimited by a maximum wavelength, which depends on the flow boundary conditions and, as said before, ends with the Kolmogorov microscale η, where the energy of the turbulent cascade is finally dissipated into heat. Several outstanding scientists have attempted a definition of the turbulent phenomena ([15] reports a list of these definitions). Here, the definition by Liepmann [16] is reported, which looks at turbulence from a thermodynamic point of view, anticipating the final effect of turbulent dissipation of kinetic energy into heat with consequent entropy production.
Turbulence can be defined by a statement of impotence reminiscent of the second law of thermodynamics: flow at a sufficiently high Reynolds number cannot be decelerated to rest in a steady fashion. The deceleration always produces vorticity, and the resulting vortex interactions are apparently so sensitive to the initial conditions that the resulting flow pattern changes in time and usually in stochastic fashion.

With our focus on turbulent flows, the equations of motion (Section 1.5) can be rewritten considering all the quantities q as composed of a statistically steady time average $\bar{q}$ and a fluctuating component $q'$, that is, $q = \bar{q} + q'$. By time averaging the Navier–Stokes equations written with this decomposition, it is possible to obtain the Reynolds-averaged Navier–Stokes equations (or RANS equations), here reported for the case of incompressible flow with negligible gravity acceleration:

$$\bar{\mathbf{u}}\cdot\nabla\bar{\mathbf{u}} = -\frac{1}{\rho}\nabla\bar{p} + \nu\nabla^2\bar{\mathbf{u}} - \nabla\cdot\overline{\mathbf{u}'\mathbf{u}'} \quad (1.43)$$

The term $\overline{\mathbf{u}'\mathbf{u}'}$ in Equation 1.43 has the dimensions of a stress per unit density; thus, it is referred to as the Reynolds stress tensor. The Reynolds stresses, arising from the nonlinearity of the Navier–Stokes equations, constitute further unknowns of the problem, thus requiring additional equations to close it. However, even though balance equations can be written for all the components of the Reynolds stress tensor, it can be easily shown that this leads to adding further unknowns. This is the famous closure problem of turbulence [7]. The randomness of turbulent motion, together with its nonlinearity and irreversibility, makes the problem challenging, complicated, and fascinating.

Turbulent boundary layer
Transition to turbulence arises in practically all engineering problems related to aeronautics, concerning wings, aerodynamic surfaces, and even parts of engines, such as turbine blades. When testing an aerodynamic body in a wind tunnel, it is important to reproduce the transition location by correctly reproducing the operating Reynolds number or by imposing transition with a disturbance in the boundary layer.
The transition to turbulence in a boundary layer deserves special attention, especially considering the existence of the Reynolds stresses of Equation 1.43 in the case of the turbulent boundary layer. Writing Equation 1.38 averaged à la Reynolds, one obtains

$$\bar{u}_x\frac{\partial\bar{u}_x}{\partial x} + \bar{u}_y\frac{\partial\bar{u}_x}{\partial y} = -\frac{1}{\rho}\frac{\partial\bar{p}}{\partial x} + \nu\frac{\partial^2\bar{u}_x}{\partial y^2} - \frac{\partial\overline{u_x'u_y'}}{\partial y} \quad (1.44)$$

The shear stress in a turbulent boundary layer is expressed as the sum of the shear stresses due to the mean flow velocity and the Reynolds stresses, the latter being typically much greater than the former. This greatly changes the velocity profile through the boundary layer and the boundary layer thickness: as a comparison, the reader is addressed to [6], where it is reported that the boundary layer thickness on a flat plate at a distance x from the leading edge is equal to $\delta = 5x/\sqrt{Re_x}$ for a laminar boundary layer and can be estimated as being of the order of $\delta \cong 0.37x/\sqrt[5]{Re_x}$ for a turbulent boundary layer, with $Re_x$ being the Reynolds number based on x as characteristic length. Similarly, the friction coefficient, defined as the ratio between the wall shear stress and the flow dynamic pressure, is equal to $c_{f_x} = 0.664/\sqrt{Re_x}$ for a laminar boundary layer and $c_f \cong 0.0592/\sqrt[5]{Re_x}$ for a turbulent boundary layer (for a more detailed description of the features of a turbulent boundary layer the reader is referred to Chapter 12).
This means that on a flat plate with air speed equal to 10 m/s, at a distance of 1 m from the leading edge, the boundary layer thickness is of the order of 6 mm for a laminar and 25 mm for a turbulent boundary layer, while the friction coefficient is of the order of 7.6 · 10⁻⁴ for a laminar and 3.9 · 10⁻³ for a turbulent boundary layer.
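These estimates can be reproduced with a few lines of Python (a sketch added here for illustration; the kinematic viscosity ν ≈ 1.35 · 10⁻⁵ m²/s assumed below corresponds to air at about 0 °C):

```python
import numpy as np

U = 10.0        # freestream velocity, m/s
x = 1.0         # distance from the leading edge, m
nu = 1.35e-5    # kinematic viscosity of air, m^2/s (assumed value)

Re_x = U * x / nu

delta_lam = 5.0 * x / np.sqrt(Re_x)     # laminar boundary layer thickness
delta_turb = 0.37 * x / Re_x**0.2       # turbulent boundary layer thickness
cf_lam = 0.664 / np.sqrt(Re_x)          # laminar friction coefficient
cf_turb = 0.0592 / Re_x**0.2            # turbulent friction coefficient

print(f"Re_x = {Re_x:.2e}")
print(f"laminar:   delta ~ {delta_lam * 1e3:.1f} mm, cf ~ {cf_lam:.1e}")
print(f"turbulent: delta ~ {delta_turb * 1e3:.1f} mm, cf ~ {cf_turb:.1e}")
```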
This last result is of crucial importance in aeronautics, since it is important to control the friction, which results in resistance (drag) to the advancement of an aircraft through the still air.

1.9 Aerodynamic forces: Lift and drag

As already stated, the interactions between air and a body in relative motion are the main object of aerodynamic studies. Considering the surface of the body under investigation, at each point the fluid acts with pressure and tangential stresses, which, integrated, provide a resulting force. This force can be decomposed into its components orthogonal (positive as opposite to gravity) and parallel to the flow velocity: the lift and the drag. The lift, as shown in the following, is mostly due to the fluid pressure distribution over the wing, while the drag is due to both pressure and friction contributions.
From a global point of view, according to Newton's laws, F = m(du/dt), which means that the force results from a variation of the momentum of the flow. According to the principle of Galilean invariance, it is possible to consider either a body moving into still air or a steady body immersed in an air stream, as happens in a wind tunnel (see Chapter 3). Using the latter description, in an aircraft the production of lift is thus associated with a downward acceleration of the flow approaching the wing, while the drag is associated with a decrease in the streamwise component of the flow momentum.
Surface interactions (which, integrated, provide the aerodynamic forces) depend on the fact that fluid elements change their velocity in order to comply with the presence of the body and that, due to the continuum hypothesis and to viscosity, the flow must have zero slip velocity on the body surface.
The mechanism of generation of lift can be understood by thinking of the case of a cylinder immersed in a freestream. The reader, observing Figure 1.5a, expects that the flow should be decelerated to zero speed at point A, accelerated until point B, and eventually decelerated between points B and C (if the boundary layer does not separate from the cylinder wall due to the adverse pressure gradient). For such a condition, the cylinder should not experience any aerodynamic force, except in the case of boundary layer separation, which would produce a low-momentum region with consequent drag.
If a clockwise rotation is added to the cylinder, assuming that the two solutions could be linearly added, a rotational velocity (with tangential velocity inversely proportional to the distance from the cylinder axis) should be added to the flow field of the previous solution (Figure 1.5b). The stagnation point A would move to the bottom part of the cylinder, determining lower pressure on the upper side and higher pressure on the bottom side of the cylinder. It is quite intuitive that a rotating cylinder can produce a lifting force. From potential flow theory [17], it can be shown that lift is proportional to the circulation Γ, which is the contour integral of the tangential velocity of the fluid on a closed curve surrounding the cylinder. This is true for whatever body and, according to the Kutta–Joukowski theorem, the lift per unit span L′ can be calculated as

$$L' = \rho V \Gamma \quad (1.45)$$


FIGURE 1.5 Schematic of flow fields: (a) flow around a cylinder, (b) flow around a rotating cylinder, and (c) flow around an airfoil.

FIGURE 1.6 Schematic of the generation mechanism of induced drag over an aircraft wing.

Wing sections produce lift in a way analogous to the cylinder. The circulation around the wing is determined through the Kutta condition, which states that the flow must leave the trailing edge smoothly (Figure 1.5c).
The generation of lift has as a counterpart the production of induced drag in finite 3D wings. In fact, the pressure difference between the upper and lower sides of the wing results, at the wing tips, in the production of an induced motion from the bottom to the upper side (as sketched in Figure 1.6): the fluid on the high-pressure side is accelerated outward and the fluid on the suction side is accelerated inward, resulting in what is referred to as the tip vortex. This motion, in practice, results in a lower effective angle of attack seen by the wing. As a consequence, especially near the wing tips, the aerodynamic force is tilted backward with respect to the freestream velocity, resulting in a force component contributing to the drag.
As seen in the "Turbulent boundary layer" section, the wall shear stresses are further sources of momentum losses in the freestream, typically contributing about 30% of the total drag. Although the prediction of lift and induced drag with potential theory is well assessed and sits on a solid mathematical background, it is still important to correctly estimate the viscous phenomena at the wall, especially for what concerns turbulence. In particular, the transition from laminar to turbulent boundary layer is an extremely important piece of design information, since it affects not only the drag but also the displacement thickness, that is, the body shape that the flow outside of the boundary layer actually "sees," especially in flow conditions in which separation might occur. Understanding turbulence in wall-bounded flows is, moreover, of paramount importance for the design of control strategies to face the challenge of reducing aviation fuel consumption.

Problems

1.1 Consider the drag coefficient of a sphere reported in Figure 1.1. Using the Buckingham Π theorem, explain the reason for the linear decrease of C_F at low Reynolds number and its constant behavior at high Re.
1.2 Using the Buckingham Π theorem, identify the nondimensional numbers to be considered for the study of the performance of a propeller.
1.3 A sparrow flies at an average speed V of 40 km/h. The wing chord c is about 7 cm and the typical reduced frequency k = πfc/V is 0.2, with f being the flapping frequency. We are planning to study the 2D aerodynamics of its wing midsection in a water tunnel with maximum speed equal to 1.5 m/s. Assuming that the flapping amplitude is equal to the chord, propose the best scaling choice to preserve the dynamic similarity. Set air temperature and pressure, respectively, at 300 K and 1 atm.

References

1. Von Mises R (1959). Theory of Flight, Courier Dover Publications, New York, NY.
2. Fefferman CL (2000). Existence and smoothness of the Navier–Stokes equation, The Millennium Prize Problems, pp. 57–67.
3. Vaschy A (1892). Sur les lois de similitude en physique, Annales Télégraphiques, 19, 25–28.
4. Buckingham E (1914). On physically similar systems: Illustrations of the use of dimensional equations, Physical Review, 4, 345–376.
5. Yarin LP (2012). The Pi-Theorem: Applications to Fluid Mechanics and Heat and Mass Transfer, Springer-Verlag, Berlin Heidelberg, Germany.
6. Schlichting H (1979). Boundary Layer Theory, 7th edn., McGraw-Hill, New York, NY.
7. Davidson PA (2004). Turbulence: An Introduction for Scientists and Engineers, Oxford University Press, Oxford, UK.
8. Kolmogorov AN (1941). The local structure of turbulence in incompressible viscous fluid for very large Reynolds numbers, Doklady Akademii Nauk SSSR, 30(4), 301–305.
9. Kundu P, Cohen I, Dowling D (2015). Fluid Mechanics, 6th edn., Academic Press, Waltham, MA.
10. Planck M (1926). Sitzungsberichte der Preussischen Akademie der Wissenschaften, pp. 453–463.
11. Abbott IH, Von Doenhoff AE (1959). Theory of Wing Sections, Including a Summary of Airfoil Data, Courier Dover Publications, New York, NY.
12. Zucrow MJ, Hoffman JD (1976). Gas Dynamics, Wiley, New York, NY.
13. Anderson JD (2000). Hypersonic and High Temperature Gas Dynamics, AIAA, Reston, VA.
14. Prandtl L (1904). Über Flüssigkeitsbewegung bei sehr kleiner Reibung, Verhandl. III Int. Math. Kongr., Heidelberg, Germany.
15. Tsinober A (2009). An Informal Conceptual Introduction to Turbulence, Vol. 483, Springer, Berlin, Germany.
16. Liepmann HW (1979). The rise and fall of ideas in turbulence, American Scientist, 67, 221.
17. Katz J, Plotkin A (2001). Low-Speed Aerodynamics, Cambridge University Press, Cambridge, UK.
CHAPTER TWO

Statistical data characterization and elements of data processing

Stefano Discetti and Andrea Ianiro

Contents

2.1 Introduction 25
2.2 Statistical characterization of fluid-flow measured variables 26
    Statistical data characterization 26
    Stationarity and ergodicity 28
    Joint random variables 29
    The Gaussian distribution and the central limit theorem 31
    Error, precision, accuracy, uncertainty 31
    Uncertainty quantification methods 32
    Data regression 33
    Analog-to-digital conversion of experimental data 35
2.3 Fundamentals of data processing 37
    Decomposing turbulent data in a lower-dimensional space 37
    Fourier analysis 38
    Proper orthogonal decomposition 44
    Dynamic mode decomposition 46
    Conditional averages 48
Problems 51
References 52

2.1 Introduction

Measurement science, in general, does not involve only questions like "which are the relevant quantities to measure?" and "how do we perform the measurement?" but also "how do we handle our data?" While intuition might tempt us to think that data treatment is the last step of the measurement chain, and thus that it should play a minor role, most often the success of an experimental test is all about it. The efforts of setting up an experiment properly might be frustrated by improper data acquisition, conditioning, and analysis. Furthermore, when dealing with turbulent flows, the full amount of information might be so overwhelming that it is necessary to obtain a low-order representation of the phenomena based on a few simpler parameters and models. Additionally, low-order models of complex phenomena, such as the turbulent flow over a wing, are needed to plan strategies for performance improvement and flow control.
In this chapter, a sharp focus will be directed on basic mathematical instruments to extract statistical information from the acquired data. The contents treated herein are far from being exhaustive. The attempt is to guide the reader with simple intuitive explanations through an extremely complex topic so that, given the foundations, he/she will be able to reach a deeper understanding by referring to specific literature.


In the first part of the chapter, some concepts of statistics and probability theory are recalled. Since experimental data are unavoidably affected by measurement noise, even the measurement of a fully deterministic process results in a random process, thus needing some statistical treatment to infer the desired information. Moreover, under proper assumptions, turbulent flows can be analyzed from the standpoint of turbulent statistics; thus, it is of paramount importance to have a clear idea of the instruments that can be used to extract this information properly.
Before inferring any conclusion from the acquired data, the experimentalist has to be able to quantify the uncertainty of the data, that is, the width of the interval (centered on the measured value) in which the true value (unknown in general) is supposed to lie with a certain probability. Along with this section on statistical data treatment, some notions on measurement uncertainty estimation methods are reported.
Furthermore, data regression methods, which are of crucial importance when attempting to extract reduced-order models from experimental data, are outlined.
The second part of the chapter is dedicated to instruments for the decomposition of turbulent data, with a look at the objective of determining the structure of turbulent flows. This section covers Fourier analysis, proper orthogonal decomposition, dynamic mode decomposition, and conditional averaging.

2.2 Statistical characterization of fluid-flow measured variables

Statistical data characterization
Data classification is the first step to define the appropriate handling method to extract the desired information from an experiment. Experimental data are classified into two broad categories: deterministic and random. Data are referred to as deterministic if the process can be described unambiguously by mathematical relations. Conversely, random data are instantaneously unpredictable and can be described only in terms of statistical features. An ideal measurement with zero uncertainty of the velocity field in a low Reynolds number flow (for instance, the flow around a sphere at Re ≪ 1 [1]) is an example of a deterministic process, as it can be described with a solution in closed form. On the other side, even though turbulent flows can in principle be modeled by the Navier–Stokes equations (which are fully deterministic), they constitute a classical example of the random processes encountered in experimental aerodynamics. The reason behind the random nature of turbulent phenomena stands in the high sensitivity of turbulent flows at high Reynolds number to perturbations, which inevitably occur during an experiment. Suppose, for example, to set up an experiment to measure the pressure distribution over an airfoil in a wind tunnel under certain specified conditions. Unfortunately, full control of the boundary conditions is unfeasible; possible "contaminating agents" are nonuniformity of the incoming flow, perturbations of the boundary layer on the airfoil surface or the tunnel walls due to nonperfect surface finish, etc. As a result, the pressure will oscillate around a mean value with instantaneously unpredictable fluctuations, whose intensity is related to the perturbations and the flow Reynolds number. The sensitivity of systems to small perturbations of the initial conditions is often encountered in nature (and, consequently, in any experimenter's life!) and is well documented in books on chaos theory [2–4].
Some fundamental concepts on random data characterization are reported in Section 2.2. The interested reader can refer to specialized literature on the topic (such as the book by Rice [5]) or to the adapted view for turbulent flows by Pope [6]. Random data are characterized in terms of the probability that a certain event can occur; for example, the probability that the velocity U in a certain point $\mathbf{x} = \{x, y, z\}$ of a flow field at the instant t is equal to a prescribed value V can be written as

$$p = \Pr\left\{U(\mathbf{x}, t) = V\right\} \quad (2.1)$$

The probability p is a real number ranging between 0 and 1 (0 stands for an impossible event, 1 for a sure event).
The statistical description of a random variable (in this case, without any loss of generality, the flow velocity at the point x at the instant t, that is, $U(\mathbf{x}, t)$) can be provided via the cumulative distribution function (cdf):

$$F(\mathbf{x}, V) = \Pr\left\{U(\mathbf{x}, t) < V\right\} \quad (2.2)$$

or in terms of the probability density function (pdf), defined as the derivative of the cdf:

$$f(\mathbf{x}, V) = \frac{dF(\mathbf{x}, V)}{dV} \quad (2.3)$$

so that $f(\mathbf{x}, V)\,dV$ can be interpreted as the probability that $V \leq U(\mathbf{x}, t) \leq V + dV$.


Equations 2.2 and 2.3 can be used to fully characterize the random process in terms of statistical moments. The moment of order n (with n integer and positive) of the random process $U(\mathbf{x}, t)$ is defined as the mathematical expectation $E\left[U^n(\mathbf{x}, t)\right]$:

$$E\left[U^n(\mathbf{x}, t)\right] = \int V^n f(\mathbf{x}, V)\,dV \quad (2.4)$$

The mathematical expectation can be computed as the average, over a sufficiently large set of realizations, of the values assumed by the random variable at the time instant t after a reference time (e.g., the beginning of that realization). This operation is referred to as ensemble averaging.
The mean of the random variable $U(\mathbf{x}, t)$ is the first-order moment:

$$E\left[U(\mathbf{x}, t)\right] = \int V f(\mathbf{x}, V)\,dV \quad (2.5)$$

The central moment of order n (i.e., the moment calculated around the mean) is defined as

$$\mu_n(\mathbf{x}, t) = E\left[\left(U(\mathbf{x}, t) - E\left[U(\mathbf{x}, t)\right]\right)^n\right] \quad (2.6)$$

The second-order central moment is referred to as the variance, which quantifies the degree of variability of the data with respect to the mean value:

$$\mu_2(\mathbf{x}, t) = \mathrm{var}\left[U(\mathbf{x}, t)\right] = \int\left\{V - E\left[U(\mathbf{x}, t)\right]\right\}^2 f(\mathbf{x}, V)\,dV \quad (2.7)$$

The square root of the variance is referred to as the standard deviation:

$$\mathrm{std}\left[U(\mathbf{x}, t)\right] = \sqrt{\mathrm{var}\left[U(\mathbf{x}, t)\right]} = \sigma_U(\mathbf{x}, t) \quad (2.8)$$


Standardized random variables, obtained by subtraction of the mean value and normalization by the standard deviation, are often easier to handle. A standardized random variable, by definition, has zero mean and unitary standard deviation.
The third- and fourth-order standardized central moments are commonly referred to as the skewness γ₁ (which quantifies the asymmetry with respect to the mean value) and the kurtosis γ₂ (which determines how flat the probability distribution function is), respectively:

$$\gamma_1(\mathbf{x}, t) = E\left[\left(\frac{U(\mathbf{x}, t) - E\left[U(\mathbf{x}, t)\right]}{\sigma_U(\mathbf{x}, t)}\right)^3\right] = \frac{\mu_3(\mathbf{x}, t)}{\left[\sigma_U(\mathbf{x}, t)\right]^3} \quad (2.9)$$

$$\gamma_2(\mathbf{x}, t) = E\left[\left(\frac{U(\mathbf{x}, t) - E\left[U(\mathbf{x}, t)\right]}{\sigma_U(\mathbf{x}, t)}\right)^4\right] = \frac{\mu_4(\mathbf{x}, t)}{\left[\sigma_U(\mathbf{x}, t)\right]^4} \quad (2.10)$$
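In practice, these moments are estimated from a sufficiently large set of samples. The following Python sketch (an illustrative addition, with a synthetic Gaussian signal standing in for measured data) computes the sample mean, standard deviation, skewness, and kurtosis:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic velocity signal, m/s: 10 m/s mean with Gaussian fluctuations (assumed)
u = 10.0 + 0.5 * rng.standard_normal(100_000)

mean = u.mean()
sigma = u.std()                  # standard deviation (Equation 2.8)
u_std = (u - mean) / sigma       # standardized random variable
skewness = np.mean(u_std**3)     # sample estimate of Equation 2.9
kurtosis = np.mean(u_std**4)     # sample estimate of Equation 2.10

print(f"mean = {mean:.3f}, std = {sigma:.3f}")
# for a Gaussian signal, skewness ~ 0 and kurtosis ~ 3
print(f"skewness = {skewness:.3f}, kurtosis = {kurtosis:.3f}")
```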

The temporal autocorrelation and the autocovariance of a random process are defined, respectively, as

$$R(t_1, t_2) = E\left[U(\mathbf{x}, t_1)\,U(\mathbf{x}, t_2)\right] \quad (2.11)$$

$$C(t_1, t_2) = E\left[\left(U(\mathbf{x}, t_1) - E\left[U(\mathbf{x}, t_1)\right]\right)\left(U(\mathbf{x}, t_2) - E\left[U(\mathbf{x}, t_2)\right]\right)\right] \quad (2.12)$$

where t₁ and t₂ are two generic time instants. The autocorrelation and the autocovariance indicate the degree of coherence over time of the velocity and of the velocity fluctuations, respectively. They can be interpreted, for example, as the persistence over time of a certain flow structure.
If the statistical properties are independent of position shifts, the random process is defined as statistically homogeneous. Thinking of a random process that is a vector field (such as the velocity field in a turbulent flow), if its statistics are independent of the direction, then the process is referred to as isotropic. This concept is particularly significant in turbulence investigation, since isotropy assumptions are extremely useful in simplifying turbulence theories and in developing closure models.

Stationarity and ergodicity
The aforementioned cdf and pdf can be used to describe the distribution of probability of an event; however, they do not provide any information about the time evolution of the events. The N-time joint cumulative distribution $F_N(V_1, V_2, \ldots, V_N)$ of the process $U(\mathbf{x}, t)$ can be defined as

$$F_N\left(V(t_1), V(t_2), \ldots, V(t_N)\right) = \Pr\left\{U(\mathbf{x}, t_1) < V(t_1);\ U(\mathbf{x}, t_2) < V(t_2);\ \ldots;\ U(\mathbf{x}, t_N) < V(t_N)\right\} \quad (2.13)$$

Note that from this point on, the space-location dependence of the cdfs and the pdfs has been dropped for ease of notation. The function $f_N(V_1, V_2, \ldots, V_N)$, referred to as the joint probability density function, can be computed as the partial derivative of the joint cdf.
A random process is referred to as stationary if the joint cdf is independent of time shifts τ, that is,

$$F_N\left(V(t_1), V(t_2), \ldots, V(t_N)\right) = F_N\left(V(t_1+\tau), V(t_2+\tau), \ldots, V(t_N+\tau)\right) \quad (2.14)$$

In other words, the statistical moments (mean, variance, etc.) of the process do not change over time.

A random process is ergodic if the statistical moments can be extracted by observing the random process evolution over a sufficiently large time interval. If $\bar{U}(\mathbf{x})$ is the time average of the signal, defined as

$$\bar{U}(\mathbf{x}) = \lim_{T\to\infty}\frac{1}{T}\int_T U(\mathbf{x}, t)\,dt \quad (2.15)$$

the process is ergodic if $E\left[U(\mathbf{x}, t)\right] = \bar{U}(\mathbf{x})$ (with this relation valid for all the statistical moments of the random process).
In most cases, experimental data in aerodynamics are ergodic; this property is intensively exploited in experimental aerodynamics, as it allows extracting the statistical characterization of the quantity to be measured from a single (sufficiently long over time) experiment, without repeating it several times.
In order to clarify the difference between the concepts of stationarity and ergodicity, consider an experiment on a laminar Couette flow, that is, the constant shear flow of a viscous fluid between two parallel plates, one of which is in relative motion with respect to the other. Suppose that a class of students is performing the experiment, and each student takes an independent sample of the velocity by inserting a probe at a random point between the two plates, as illustrated in Figure 2.1. Assume that the perturbation induced by the probe is negligible and that the acquired data are noise free. Independently of the time instant, if the average value over the set of realizations is computed (dashed line in Figure 2.1), the final result is the velocity at mid-height, provided that the number of students is sufficiently large, as addressed in the "Error, precision, accuracy, uncertainty" section; consequently, the random process in question (i.e., taking independent measurements at random points) is stationary. However, if we consider the single realization of each student, the average value is "realization dependent"; thus, the process is not ergodic. Of course, the random process of capturing the measurement always at the same point between the two plates is both stationary and ergodic.
In a stationary ergodic process, the autocorrelation and the autocovariance functions are independent of the initial instant and depend only on the time separation:

$$R(\tau) = E\left[U(\mathbf{x}, t)\,U(\mathbf{x}, t+\tau)\right] \quad (2.16)$$

$$C(\tau) = E\left[\left(U(\mathbf{x}, t) - \bar{U}(\mathbf{x})\right)\left(U(\mathbf{x}, t+\tau) - \bar{U}(\mathbf{x})\right)\right] \quad (2.17)$$

Joint random variables
During an experiment, more than one physical quantity can be measured at the same time (two or three velocity components, pressure, temperature, etc.). Considering the simplified case of two velocity components measured simultaneously at the same point x, $U_1(\mathbf{x}, t)$ and $U_2(\mathbf{x}, t)$, the joint cdf $F_{12}(V_1, V_2)$ is defined as

$$F_{12}(V_1, V_2) = \Pr\left\{U_1(\mathbf{x}, t) < V_1,\ U_2(\mathbf{x}, t) < V_2\right\} \quad (2.18)$$

Equation 2.18 reads as the probability that, at the same time, $U_1(\mathbf{x}, t)$ is smaller than V₁ and $U_2(\mathbf{x}, t)$ is smaller than V₂.
The joint pdf $f_{12}(V_1, V_2)$ can be deduced as

$$f_{12}(V_1, V_2) = \frac{\partial^2}{\partial V_1 \partial V_2} F_{12}(V_1, V_2) \quad (2.19)$$

A conditional pdf can be extracted by imposing that one of the variables assumes a certain value. For example,

$$f_{12}(V_1 \mid V_2) = \frac{f_{12}(V_1, V_2)}{f_2(V_2)} \quad (2.20)$$


FIGURE 2.1 Example of an experiment on a laminar Couette flow repeated by a set of N students.

which indicates the probability density of the process $U_1(\mathbf{x}, t)$ under the condition $U_2(\mathbf{x}, t) = V_2$. If $U_1(\mathbf{x}, t)$ and $U_2(\mathbf{x}, t)$ are independent, the pdf of $U_1(\mathbf{x}, t)$ is not influenced by the events occurring on $U_2(\mathbf{x}, t)$ and vice versa. As a consequence,

$$f_{12}(V_1 \mid V_2) = \frac{f_{12}(V_1, V_2)}{f_2(V_2)} = f_1(V_1) \quad (2.21)$$

$$f_{12}(V_1, V_2) = f_1(V_1)\,f_2(V_2) \quad (2.22)$$

The effects of a conditional relation between two variables can be outlined through the mixed second-order central moment, referred to as the covariance:

$$\mathrm{cov}\left[U_1(\mathbf{x}, t), U_2(\mathbf{x}, t)\right] = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\left\{V_1 - E\left[U_1(\mathbf{x}, t)\right]\right\}\left\{V_2 - E\left[U_2(\mathbf{x}, t)\right]\right\} f_{12}(V_1, V_2)\,dV_1\,dV_2 \quad (2.23)$$

The correlation coefficient $-1 \leq r_{12} \leq 1$ is defined as

$$r_{12} = \frac{\mathrm{cov}\left[U_1(\mathbf{x}, t), U_2(\mathbf{x}, t)\right]}{\sqrt{\mathrm{var}\left[U_1(\mathbf{x}, t)\right]}\sqrt{\mathrm{var}\left[U_2(\mathbf{x}, t)\right]}} \quad (2.24)$$

When $r_{12} = 1$ (−1), the two variables are perfectly (negatively) correlated; in case $r_{12} = 0$, the two variables are uncorrelated. The correlation coefficient can be interpreted in the following way: if it is positive (negative), then fluctuations of one random variable around its mean are dominantly associated with fluctuations of the other random variable of the same (opposite) sign.
Uncorrelation is a necessary but not sufficient condition for statistical independence between two random variables.

The Gaussian distribution and the central limit theorem
The pdf of the normal distribution (referred to also as Gaussian) of mean μ and standard deviation σ is defined as

$$f(V) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(V-\mu)^2}{2\sigma^2}} \quad (2.25)$$

The normal distribution is extremely relevant in the experimental scenario. Indeed, the central limit theorem [5] states that the sum of a sufficiently large sequence of independent and identically distributed random variables with mean μ and variance σ² will be normally distributed, independently of the underlying probability distribution.
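The theorem is easy to verify numerically. The following Python sketch (an illustrative addition) averages increasing numbers of uniformly distributed samples and shows that the skewness and excess kurtosis of the standardized means approach the Gaussian values (0 and 0):

```python
import numpy as np

rng = np.random.default_rng(2)

for n in (1, 2, 10, 100):
    # 100,000 realizations of the mean of n uniform samples on [0, 1]
    means = rng.uniform(size=(100_000, n)).mean(axis=1)
    standardized = (means - means.mean()) / means.std()
    # the excess kurtosis of a uniform variable is -1.2; it tends to 0 as n grows
    print(f"n = {n:3d}: skewness = {np.mean(standardized**3):+.3f}, "
          f"excess kurtosis = {np.mean(standardized**4) - 3:+.3f}")
```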
Jointly normal (Gaussian) random variables are described by a pdf of this form:

$$f_{12}(V_1, V_2) = \frac{1}{2\pi\sigma_1\sigma_2\sqrt{1-r_{12}^2}}\,\exp\left\{-\frac{1}{2\left(1-r_{12}^2\right)}\left[\frac{\left(V_1-\bar{U}_1\right)^2}{\sigma_1^2} + \frac{\left(V_2-\bar{U}_2\right)^2}{\sigma_2^2} - \frac{2r_{12}\left(V_1-\bar{U}_1\right)\left(V_2-\bar{U}_2\right)}{\sigma_1\sigma_2}\right]\right\} \quad (2.26)$$

where $\bar{U}_1$ and $\bar{U}_2$ denote the means of the two variables.

Error, precision, accuracy, uncertainty
Since measured data might be contaminated by spurious effects and the experimental setup is not an exact representation of the conditions to be reproduced, the acquired data are affected by measurement errors. In this section, some definitions and fundamentals on error quantification are reported, following the Guide to the expression of uncertainty in measurement [7] and the discussion in [8].
The difference between the true value, that is, the ideal value of the physical quantity to be measured, and the actual measured value is referred to as the absolute measurement error. Clearly, since the true value cannot be determined unambiguously, quantifying the measurement error would make no sense. Nevertheless, the statistical tools illustrated in the previous sections can be used to characterize the quality of a measurement, intended as its reliability.
Measurement errors are broadly classified as bias and precision errors. The precision errors are generated by random changes in the experiment (for instance, fluctuations of the incoming velocity field in a wind tunnel due to irregular functioning of the fan, vibrations in the experimental setup, etc.) or in the measurement system (background noise in image-based flow diagnostics, sampling/truncation errors, etc.). According to the central limit theorem, independently of the underlying distribution, the precision errors should have a Gaussian distribution. This statistical feature enables their suppression from first-order statistical moments, since their expected value is zero. Suppose, for example, that the precision error on the velocity measurement U at a point can be modeled as a normally distributed random variable, with zero mean and standard deviation σ. The standard deviation of the error on the mean obtained using N samples is

$$\sigma_M = \sqrt{\frac{\sigma^2}{N}} = \frac{\sigma}{\sqrt{N}} \quad (2.27)$$

An immediate consequence of Equation 2.27 is that doubling the number of samples corresponds to a reduction of the error on the mean by a factor $1/\sqrt{2}$, that is, only about 30%. This notion has to be kept in mind every time the experimenter has to find a trade-off between reducing the uncertainty of the mean quantities and increasing the number of samples (with all its consequences on storage, handling, and processing of data).
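The scaling of Equation 2.27 can be checked with a simple Monte Carlo experiment, sketched below in Python (an illustrative addition with assumed values):

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 1.0            # standard deviation of the precision error (assumed)
n_experiments = 2000   # number of repeated experiments

for N in (10, 100, 1000):
    # each row is one experiment of N samples; the mean is taken per experiment
    sample_means = rng.normal(0.0, sigma, size=(n_experiments, N)).mean(axis=1)
    # the scatter of the sample means should follow sigma/sqrt(N)
    print(f"N = {N:5d}: std of the mean = {sample_means.std():.4f}, "
          f"sigma/sqrt(N) = {sigma / np.sqrt(N):.4f}")
```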
The bias errors affect the measurement systematically. A typical example of bias error might be due to improper calibration of the instrument, which shifts all the measured quantities by a certain amount. Bias errors are more subtle to suppress and contribute to the


FIGURE 2.2 Target shooting example to outline the different features of a measurement in terms of error distribution: (a) biased and imprecise, (b) biased and precise, (c) unbiased and imprecise, and (d) unbiased and precise.

whole uncertainty of the measurement. Again, referring to the case of the velocity measurement at a point, and supposing that the true mean value $\mu_U$ is known from an independent unbiased measurement, the bias error is defined as

$$b = \bar{U} - \mu_U \quad (2.28)$$

The classic "target shooting" example to qualify a measurement is reported in Figure 2.2. Suppose that measuring a physical quantity can be assimilated to shooting arrows at a target and that the distance between the point in which the arrows hit the target and its center is the measurement error. Then, a measurement would be biased if on average the target is hit away from the center. The precision, on the other hand, refers to the repeatability of the shooting, that is, how close to each other the arrows hit the target. The case of a biased and precise measurement might be very misleading, in the sense that the low precision error (which can be easily estimated with statistical tools) would lead the experimenter to think that he/she has performed a high-quality measurement. Unfortunately, bias errors are more difficult to suppress; the main strategies to detect and eliminate them include a detailed measurement system calibration, comparison with theory, etc.
The accuracy quantifies the difference between the result of a measurement and the true value of the quantity to be measured, so it is directly related to the measurement error (i.e., the sum of precision and bias errors). As argued previously, it is not possible to quantify the accuracy of a measurement, since the true value is in principle unknown. For this reason, accuracy is sometimes confused with the concept of uncertainty, which characterizes the expected dispersion of measurement data of the measurand [7]. The measurement uncertainty can be interpreted as the range in which the true value of a measured quantity is expected to fall. In other words, while the accuracy indicates how close the measured value is to the true value, the uncertainty can be intended as the quantification of the accuracy in the real experiment, in which the exact value of the quantity to be measured is unknown (otherwise, it would make no sense to perform the experiment!).

Uncertainty quantification methods
The Guide to the expression of uncertainty in measurement [7] outlines a procedure for uncertainty quantification. The main focal points are reported in the following text; the reader is referred to the guide for a more detailed and exhaustive overview. Other relevant references on the topic are [9–11].
The first step consists in determining a mathematical model of the quantity Y to be measured as a function of the other quantities X₁, X₂, …, X_N from which Y is determined:

$$Y = f(X_1, X_2, \ldots, X_N) \quad (2.29)$$

The function f has to be intended in a broad sense, that is, most often it is not trivial to identify a dependency in closed form. The parameters X₁, X₂, …, X_N can be quantities whose values and uncertainties could be either directly determined in the current measurement (for instance, the uncertainty of a pressure reading on a manometer) or

extracted from external sources (e.g., the thermal conductivity of a certified material, which could be provided by the manufacturer).
Let us now indicate the uncertainty of the quantity $x_i$ (lower case letters indicate the expected values of the corresponding upper case quantities) with the symbol $\mathrm{unc}(x_i)$. For each quantity $X_i$, the uncertainty has to be quantified in order to obtain the final uncertainty of the quantity Y to be measured. The uncertainty quantification strategy depends on the uncertainty type. The Guide to the expression of uncertainty in measurement [7] classifies the uncertainty into two categories:
1. Type A: The variance characterizing the uncertainty is estimated via statistical tools from a set of measured data (e.g., the statistical variance of a measured quantity over an ensemble of experiments). In this case, the uncertainty of the expected value of the generic quantity $X_i$ can be obtained as the standard deviation of the mean, as from Equation 2.27. For example, if we need to estimate the pressure in a tank with a pressure tap, and we collect 10,000 statistically independent samples having a standard deviation of 1 Pa, the estimated uncertainty of the expected value of the pressure would be $1\,\mathrm{Pa}/\sqrt{10{,}000} = 0.01\,\mathrm{Pa}$.
2. Type B: The estimated variance is evaluated by judgment of the information on the possible variability of the generic quantity $X_i$ (e.g., previously measured data, manufacturer's specifications, etc.).
Under the assumption that the uncertainties of the quantities $X_i$ are uncorrelated, it is possible to combine them to build the combined standard uncertainty, which can be computed with the following relation:

$$\left[\mathrm{unc}(y)\right]^2 = \sum_{i=1}^{N}\left(\frac{\partial f}{\partial x_i}\right)^2\left[\mathrm{unc}(x_i)\right]^2 \quad (2.30)$$
The derivatives $\partial f/\partial x_i$ are referred to as sensitivity coefficients, as they weight the relative importance of the uncertainty of a certain quantity on the measured one. For instance, if we aim at measuring the potential difference ΔV created across a resistor by measuring the continuous current I passing through it and its resistance R with Ohm's law ΔV = RI, it is evident that a 1% uncertainty on the current will lead to a 1% uncertainty on the potential difference. On the other hand, an uncertainty on the temperature of the resistor would also have an impact. Indeed, the electrical resistance depends on the temperature according to the relation R(T) = R₀[1 + α(T − T₀)], where R₀ is the resistance at the reference temperature T₀, T is the generic temperature, and α is the temperature coefficient of the resistance. In this case, a 1% uncertainty on the temperature variation with respect to the reference one would lead to an α · 1% uncertainty on the resistance as well as on the potential difference measurement.
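As a worked illustration of Equation 2.30, the following Python sketch (an addition; all numerical values and uncertainties are assumed for illustration only) propagates the uncertainties of R₀, the temperature rise, and the current to the potential difference:

```python
import numpy as np

R0 = 100.0      # resistance at the reference temperature, ohm (assumed)
alpha = 3.9e-3  # temperature coefficient of the resistance, 1/K (assumed)
dT = 10.0       # temperature rise above reference, K (assumed)
I = 0.1         # current, A (assumed)

unc_R0 = 0.1    # uncertainty on R0, ohm (assumed)
unc_dT = 0.5    # uncertainty on the temperature rise, K (assumed)
unc_I = 1e-3    # uncertainty on the current, A (assumed)

R = R0 * (1.0 + alpha * dT)    # resistance at the operating temperature
V = R * I                      # Ohm's law

# sensitivity coefficients dV/dx_i, obtained by differentiating V(R0, dT, I)
dV_dR0 = (1.0 + alpha * dT) * I
dV_ddT = R0 * alpha * I
dV_dI = R

# combined standard uncertainty (Equation 2.30)
unc_V = np.sqrt((dV_dR0 * unc_R0)**2 + (dV_ddT * unc_dT)**2 + (dV_dI * unc_I)**2)
print(f"Delta_V = {V:.3f} V +/- {unc_V:.3f} V")
```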

Data regression
The process of data regression consists in fitting data with a model function (for instance, linear, polynomial, logarithmic, or a more complex function derived from physical arguments). Data regression is extremely useful in determining simple models to relate trends between two or more joint variables. For example, the calibration of an infrared camera (see Chapter 6) is a process of data regression, which correlates the radiation intensities detected by the camera sensor with the temperature of a reference body. Usually, the regression is performed with the least squares method, that is, minimization of the sum of the squared differences between the fitting function and the original data set. Consider, for instance, a data set $(x_i, y_i)$, with i = 1, …, N, and a fitting function of the kind

$$Y = f\left(X, \mathbf{p}\right) \quad (2.31)$$

with f being a generic function of the r parameters $\mathbf{p} = \left[p_1, p_2, \ldots, p_r\right]$. The sum of the squared differences between the measured values of the data set and the fitting function is

$$e = \sum_{i=1}^{N}\left(f(x_i, \mathbf{p}) - y_i\right)^2 \quad (2.32)$$

The least squares minimization consists in identifying the minimum of Equation 2.32 with respect to the parameters of the model function:

$$\frac{\partial e}{\partial p_1} = 0, \quad \frac{\partial e}{\partial p_2} = 0, \quad \ldots, \quad \frac{\partial e}{\partial p_r} = 0 \quad (2.33)$$

When f is a linear function of the parameters p, Equation 2.33 constitutes a system of linear equations, which can be solved in closed form with direct methods. This does not mean that f has to be a linear function of X: $f(X, \mathbf{p}) = p_1 X + p_2 X^2 + p_3 e^X$ is an example of a model function that can be minimized with linear least squares.
In general, in the case of linear fitting,

$$Y = \sum_{j=1}^{r} p_j\,\varphi_j(X) \quad (2.34)$$

where $\varphi_j(X)$ are generic functions of the independent variable X. Applying Equation 2.34 to the points of the data set, the following system of linear equations is obtained:

$$\mathbf{y} = \boldsymbol{\varphi}\,\mathbf{p}$$

$$\mathbf{y} = \left[y_1, \ldots, y_N\right], \quad \boldsymbol{\varphi} = \begin{bmatrix} \varphi_1(x_1) & \cdots & \varphi_r(x_1) \\ \vdots & \ddots & \vdots \\ \varphi_1(x_N) & \cdots & \varphi_r(x_N) \end{bmatrix}, \quad \mathbf{p} = \left[p_1, \ldots, p_r\right] \quad (2.35)$$

Provided that the system is overdetermined, the problem can be solved, for example, by premultiplying $\boldsymbol{\varphi}$ by its transpose $\boldsymbol{\varphi}^T$ and then inverting the system. It can be demonstrated that

$$\mathbf{p} = \left(\boldsymbol{\varphi}^T\boldsymbol{\varphi}\right)^{-1}\boldsymbol{\varphi}^T\mathbf{y} \quad (2.36)$$

is the least squares solution of the proposed linear fitting problem. A numerically more stable method consists in calculating the pseudo-inverse matrix of $\boldsymbol{\varphi}$ via singular value decomposition (SVD) [12], as shown in the "Dynamic mode decomposition" section.
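A minimal Python sketch of this procedure is reported below (an illustrative addition, using the model function quoted above; NumPy's `np.linalg.pinv` computes the SVD-based pseudo-inverse):

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic data set built from the model Y = p1*X + p2*X^2 + p3*exp(X)
x = np.linspace(0.0, 2.0, 50)
p_true = np.array([1.0, -0.5, 0.2])            # assumed "true" parameters
y = p_true[0] * x + p_true[1] * x**2 + p_true[2] * np.exp(x)
y += 0.05 * rng.standard_normal(x.size)        # measurement noise

# design matrix phi of Equation 2.35: one column per basis function
phi = np.column_stack([x, x**2, np.exp(x)])

# least squares solution via the SVD-based pseudo-inverse (Equation 2.36)
p_fit = np.linalg.pinv(phi) @ y
print("fitted parameters:", np.round(p_fit, 3))
```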
The most common case is the it with a linear function of one independent variable
Y = p1X + p2. It is left to the reader the useful exercise of obtaining from Equation 2.36 that

$$p_1 = \frac{N \sum_{i=1}^{N} x_i y_i - \sum_{i=1}^{N} x_i \sum_{i=1}^{N} y_i}{N \sum_{i=1}^{N} x_i^2 - \left( \sum_{i=1}^{N} x_i \right)^2} \qquad (2.37)$$

$$p_2 = \frac{\sum_{i=1}^{N} x_i^2 \sum_{i=1}^{N} y_i - \sum_{i=1}^{N} x_i \sum_{i=1}^{N} x_i y_i}{N \sum_{i=1}^{N} x_i^2 - \left( \sum_{i=1}^{N} x_i \right)^2} \qquad (2.38)$$

An example of linear fitting is reported in Figure 2.3 for the case of a set of values of the quantity y obtained as a function of the random variable x. The data are generated by imposing a linear dependence between x and y and contaminating the values of y with Gaussian random noise. The squared correlation coefficient r₁₂² (Equation 2.24) provides a quantitative estimate of the scattering of the data around the fitting function and, consequently, useful information on the quality of the fitting.
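A short sketch of this fitting procedure, assuming Python with NumPy and synthetic data analogous to those of Figure 2.3, could be the following; the closed-form coefficients of Equations 2.37 and 2.38 are cross-checked against the matrix solution of Equation 2.36:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0, 50)
y = 0.8 * x - 0.5 + rng.normal(0.0, 0.2, x.size)   # linear trend + Gaussian noise

N = x.size
# Slope and intercept from Equations 2.37 and 2.38
p1 = (N * np.sum(x * y) - np.sum(x) * np.sum(y)) / (N * np.sum(x**2) - np.sum(x)**2)
p2 = (np.sum(x**2) * np.sum(y) - np.sum(x) * np.sum(x * y)) / (N * np.sum(x**2) - np.sum(x)**2)

# Squared correlation coefficient as a quality indicator
r12 = np.corrcoef(x, y)[0, 1]
print(f"p1 = {p1:.3f}, p2 = {p2:.3f}, r12^2 = {r12**2:.3f}")

# Cross-check with the overdetermined matrix form of Equation 2.36
phi = np.column_stack([x, np.ones(N)])
p = np.linalg.lstsq(phi, y, rcond=None)[0]   # same values as (p1, p2)
```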

(Figure 2.3 panels: y plotted versus x for r₁₂² = 0.96, 0.75, and 0.41.)

FIGURE 2.3 Linear regression of data for different levels of noise contamination. The square of the correlation coefficient is reported for reference.

The process of linear data regression relies on a set of assumptions, whose strength depends on the particular application:
• The modeled relations are linear and additive, that is, the function depends linearly on each independent variable, the slope of the linear relation is independent of the value of the other variables, and the relative effects can be superposed.
• The errors on the data points are statistically uncorrelated with each other.
• The perturbation (say, for instance, noise) is present only in the yi, so that the xi are considered fixed points. For instance, considering the example of the infrared camera calibration, it is assumed that the temperature of the reference body is measured with negligible error.
• The statistical distribution of the errors is a normal distribution. This is often justified with the central limit theorem.
• The standard deviation of the errors with respect to the fitting function is independent of the fitting point. This property is normally referred to as homoscedasticity.
If f is nonlinear in the parameters pi (for instance, if terms such as $p_i^2$ or $e^{p_i}$ appear in the expression of f), then there might be no closed-form solution and the problem may have to be solved with an iterative method. While in the case of linear data regression the solution is unique, in nonlinear fitting there might exist several local minima of Equation 2.32; thus, the results of iterative processes might depend on the chosen initial value for the least square optimization (see [13]).
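As an illustration of the role of the initial value, the following sketch (assuming Python with SciPy; the model and data are purely illustrative) fits a function that is nonlinear in one of its parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
x = np.linspace(0.0, 2.0, 40)
y = 2.0 * np.exp(0.7 * x) + rng.normal(0.0, 0.1, x.size)

def model(x, a, b):
    # Nonlinear in the parameter b: no closed-form least square solution
    return a * np.exp(b * x)

# The iterative optimization starts from the initial guess p0; a poor p0
# may drive the solver to a different local minimum or to non-convergence
p_opt, p_cov = curve_fit(model, x, y, p0=[1.0, 1.0])
print("fitted parameters:", p_opt)       # close to [2.0, 0.7]
```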

Analog-to-digital conversion of experimental data
All the instruments presented so far can be applied either to analog or to digital signals. The word analog indicates that the signal is stored in a manner that is analogous to the real one (e.g., the imprint of light on a photographic film). In order to store and process the data, it is very practical to convert the signal into digital format, which is portable, cheaper, less prone to time degradation, and can be easily manipulated to perform post-processing analysis. In many cases, the analog output of transducers is an electric signal, such as a voltage, which can be converted into a digital signal and stored on a hard drive. This operation is referred to as A/D (analog/digital) conversion. It can be described as a three-stage process: sampling, quantization, and encoding. The first two steps of the process are sketched in Figure 2.4 and detailed in the following.
The sampling process consists in extracting the values of the analog signal x(t) at discrete time instants. In principle, the user is free to set the time spacing of the samples; however, it is often very convenient to use uniform time separation so that it is not necessary to store the sampling instants. It is however possible that the chosen measurement device provides information with nonuniform time separation (see, e.g., the case of Laser Doppler Anemometry in Chapter 10).


FIGURE 2.4 A/D process. (a) Analog signal, (b) sampling, (c) quantization, and (d) final digital signal (with the original analog signal included with a dashed line for reference).

Obviously, in the sampling process some information is lost. Consider, for instance, the case of Figure 2.5, in which two signals with equal values at the sampling instants are represented. In general, an infinite number of signals can match the sampled one. At this point, the natural question is: how can we univocally estimate the approximate shape of the original analog signal? Is it possible to get more from less? The answer is given by the Shannon theorem, outlined in the following.
Given the sampling frequency of a signal, there is a critical frequency beyond which sine
oscillations cannot be resolved. It is referred to as Nyquist frequency and it is equal to half of
the sampling frequency fs:

$$f_{\mathrm{Nyq}} = \frac{f_s}{2} \qquad (2.39)$$


FIGURE 2.5 Example of two signals that, although being different, have the same digital representation after the sampling process.

(Figure 2.6 rows: fs = 4f; fs = 2f; fs = 3f/2.)

FIGURE 2.6 Imaging of a spinning wheel with a reference bar sampled at frequencies higher, equal, and lower than the Nyquist limit. The aliasing effect is evident in the last row.

The Shannon theorem states that if the sampled signal x(t) is bandwidth limited within the range 0 ≤ f ≤ fNyq (i.e., it only contains frequencies included within the limits of the Nyquist frequency), then x(t) is unambiguously described by its Fourier transform (see "Fourier analysis" section) and is thus univocally determined by the set of samples. If instead the signal does not fulfill this condition, the information relative to frequencies with magnitude larger than fNyq is spuriously moved into the range 0 ≤ f ≤ fNyq. This effect is known as aliasing.
In order to understand the effect of aliasing, suppose you are capturing images at sampling
frequency fs of a wheel rotating at variable frequency f, as in Figure 2.6 and that a reference
bar is placed on the wheel to identify the wheel angular rotation. If f is lower than the Nyquist
frequency (for instance, fs = 4f so that four pictures of the bar for each cycle are sampled),
the sense of rotation is properly captured. Now, suppose that the wheel accelerates up to the
Nyquist limit (fs = 2f ). The bar in subsequent images will move into opposite positions along
the same diameter, and the sense of rotation is undetermined: it could be either clockwise or
counterclockwise. Finally, if the rotation frequency is further increased beyond the Nyquist
limit, then the wheel appears rotating slowly in the direction opposite with respect to the real
one. The rotation frequency has been aliased into a negative (and lower in absolute value)
frequency of the range 0 ≤ f ≤ fNyq.
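The aliasing mechanism can also be reproduced numerically; the following sketch (Python with NumPy assumed, illustrative frequencies) samples an 80 Hz sine at 100 Hz and recovers an apparent peak at 20 Hz:

```python
import numpy as np

fs = 100.0                      # sampling frequency, Hz
t = np.arange(0, 1.0, 1.0 / fs)
f_true = 80.0                   # above the Nyquist frequency fs/2 = 50 Hz

x = np.sin(2 * np.pi * f_true * t)
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)

f_peak = freqs[np.argmax(np.abs(X))]
# An 80 Hz sine sampled at 100 Hz appears at |fs - f| = 20 Hz
print(f"true frequency: {f_true} Hz, apparent (aliased) frequency: {f_peak} Hz")
```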
Real signals are virtually never bandwidth limited. Contamination due to noise, for instance, is likely to be modeled by a uniform distribution along the frequency spectrum (referred to as white distribution). The experimenter must thus take care of sampling the signal at a frequency large enough to track its time changes and, in order to suppress the effects of aliasing, the analog signal should ideally be filtered with an analog low-pass filter with sharp cutoff at fNyq.
The second step of the A/D conversion consists in assigning a numerical value to the sampled
signal. This process is referred to as quantization. The values assumed by the analog signal are
grouped in quantization intervals, each one represented by a single value. For example, if the out-
put voltage of a voltmeter has to be discretized in quantization intervals of 0.1 V width, each value
included within the interval [−0.05 V, 0.05 V] would be quantized with the level 0 V. The obtained
values are then converted in binary digits (bits) for storing, that is, in sequences of 0 and 1. This
process is referred to as encoding. Digital data typically have length ranging between 8 and 16 bits. A 16-bit array can represent integers between 0 and 2¹⁶ − 1, thus corresponding to 65,536 quantization levels and to a resolution of 1 part in 65,536 (i.e., 0.0015% resolution).
Evidently, the quantization and encoding process determines a loss of information on the
analog signal amplitude (while the sampling process loses information on the time evolution).
The absolute value of the error on the quantized amplitude in case of rounding off to the
nearest integer is, for internal recorded values (i.e., contained within the minimum and the
maximum measured value), at most equal to half of the amplitude of the quantization interval
(while for external values it is in principle unbounded). Since this error is randomly distrib-
uted, it is commonly referred to as quantization noise.
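A minimal sketch of the quantization step (Python with NumPy assumed; the 0.1 V interval mirrors the voltmeter example above) verifies that the rounding error is bounded by half of the quantization interval:

```python
import numpy as np

# Quantization of a sampled signal with interval q (rounding to nearest level)
q = 0.1                                  # quantization interval, e.g., 0.1 V
t = np.linspace(0.0, 1.0, 200)
x = np.sin(2 * np.pi * 2 * t)            # analog signal, already sampled

x_quant = q * np.round(x / q)            # each value mapped to the nearest level
err = x - x_quant

# For internal values the quantization error is at most q/2
print(f"max |error| = {np.abs(err).max():.4f} (bound: {q/2})")
```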

2.3 Fundamentals of data processing

Decomposing turbulent data in a lower-dimensional space
In experimental aerodynamics, dealing with large data ensembles is more the rule than the exception. Most often, the parameters entering into action in the observed process describe high-dimensional spaces, in which the experimentalist might get lost. The main reason behind that is turbulence, which involves a continuous spectrum of wavelengths and frequencies

down to the dissipation scales, as outlined in Chapter 1. For this reason, tools to extract the most relevant information from turbulent flows are of fundamental importance in order to achieve low-dimensional representations and models of complex phenomena. The idea is to reduce a complex set of data, involving the interaction of many different turbulent structures acting over a range of scales, to a simplified low-order view of the problem to spot the most significant features of the phenomenology.
One possible step to start walking along this path is to decompose the velocity field into the mean (assumed, for simplicity, stationary) part and the fluctuating part, and to project the latter on a properly chosen set of basis functions.
Suppose that a function $U(\mathbf{x}, t)$ is approximated by
$$U(\mathbf{x}, t) = \overline{U}(\mathbf{x}, t) + u(\mathbf{x}, t) \approx \overline{U}(\mathbf{x}, t) + \sum_{n=1}^{N_m} a_n(t)\, \varphi_n(\mathbf{x}) \qquad (2.40)$$

Without any loss of generality, let the symbols $\overline{U}(\mathbf{x}, t)$ and $u(\mathbf{x}, t)$ indicate the mean and the fluctuating part of the velocity field, respectively, with $\mathbf{x}$ and t being the spatial and time coordinates. The fluctuating part can be approximated as a linear combination of a set of basis functions $\varphi_n(\mathbf{x})$, with coefficients $a_n$ depending on time; the symbol $N_m$ is used to indicate the number of modes, that is, the rank of the algebraic space. Evidently, in the limit $N_m \to \infty$ the approximation becomes exact. The decomposition in Equation 2.40 is not unambiguously determined until a set of basis functions $\varphi_n(\mathbf{x})$ is chosen.
In addition to this problem, determining the time coefficients $a_n$ given the set of basis functions might not be straightforward. A path to obtain the time coefficients in closed form can be outlined when using orthonormal basis functions, that is,
$$\left\langle \varphi_i(\mathbf{x}) \cdot \varphi_j(\mathbf{x}) \right\rangle = \delta_{ij} \qquad (2.41)$$

where δij is the Kronecker delta symbol (1 if i = j, 0 if i ≠ j) and the angular brackets 〈⋯〉 indicate spatial integration over all the positions $\mathbf{x}$ of the measurement domain. By applying this peculiar choice (not that peculiar, though; the sinusoidal harmonically related functions of the Fourier transform are mutually orthogonal, as shown in the next section) a simple relation to compute the time coefficients can be extracted:
$$a_n(t) = \left\langle u(\mathbf{x}, t) \cdot \varphi_n(\mathbf{x}) \right\rangle \qquad (2.42)$$

The time coefficients an(t) depend only on the corresponding $\varphi_n(\mathbf{x})$ if a set of orthonormal functions forming a basis is chosen.
A new question arises now: provided that we restrict our choice to mutually orthogonal functions, how do we choose the set of basis functions? An answer to this question is proposed in the next paragraphs.

Fourier analysis
Fundamentals  The Fourier Transform (FT) decomposes signals into a linear combination of orthogonal sinusoidal basis functions at different frequencies. The coefficients of the linear combination (alias the amplitudes of the sinusoidal functions of the basis) contain what is commonly referred to as spectral information of the signal.
FT can be regarded as the generalization to nonperiodic functions of the concept of Fourier series, which instead applies to periodic signals. Consider, for example, a continuous periodic signal x(t) with period T0 (i.e., x(t) = x(t + T0) independently of the chosen t). The Fourier series of the signal is the decomposition in harmonically related (i.e., with frequencies multiples of the base frequency f = 1/T0) sinusoidal functions:
$$x(t) = A_0 + \sum_{n=1}^{\infty} a_n \cos\left(\frac{2\pi n t}{T_0}\right) + \sum_{n=1}^{\infty} b_n \sin\left(\frac{2\pi n t}{T_0}\right) \qquad (2.43)$$

where the coefficients of Equation 2.43 are defined as
$$a_n = \frac{2}{T_0} \int_0^{T_0} x(t) \cos\left(\frac{2\pi n t}{T_0}\right) dt, \qquad A_0 = \frac{a_0}{2}, \qquad b_n = \frac{2}{T_0} \int_0^{T_0} x(t) \sin\left(\frac{2\pi n t}{T_0}\right) dt \qquad (2.44)$$

Note that the first coefficient A0 is the average value of the signal x(t) over a single cycle. For this reason, it is commonly referred to as the continuous component of the signal.
Equations 2.43 and 2.44 can be formulated in a more synthetic form using the relation between trigonometric and complex exponential functions (generally referred to as Euler's formula):
$$e^{ix} = \cos(x) + i \sin(x) \qquad (2.45)$$
where i is the imaginary unit (i² = −1).

The Fourier series of the periodic signal x(t) is then expressed as


$$x(t) = \sum_{n=-\infty}^{+\infty} c_n\, e^{\frac{2\pi i n t}{T_0}} \qquad (2.46)$$

$$c_n = \frac{1}{T_0} \int_0^{T_0} x(t)\, e^{-\frac{2\pi i n t}{T_0}}\, dt \qquad (2.47)$$

If x(t) is a nonperiodic signal, it can still be treated as periodic by assuming that T0 → ∞. Consequently, the sums in Equations 2.43 and 2.46 will become integrals (the spacing between different frequencies of the sinusoids of the basis tends to zero), thus leading to the definition of the Fourier transform:
$$X(f) = \int_{-\infty}^{+\infty} x(t)\, e^{-2\pi i f t}\, dt \qquad (2.48)$$

and its inverse transform:
$$x(t) = \int_{-\infty}^{+\infty} X(f)\, e^{2\pi i f t}\, df \qquad (2.49)$$

In a more synthetic form, F{x(t)} = X(f) indicates that X(f) is the Fourier transform of x(t). Now consider two functions of time x(t) and y(t) and the corresponding Fourier transforms X(f) and Y(f). The convolution x(t) ⊗ y(t) is
$$x(t) \otimes y(t) = \int_{-\infty}^{+\infty} x(\tau)\, y(t - \tau)\, d\tau \qquad (2.50)$$

The convolution theorem states that the convolution product in the time domain corresponds to a simple product in the frequency space:
$$F\left\{ x(t) \otimes y(t) \right\} = X(f)\, Y(f) \qquad (2.51)$$

The cross-correlation of two signals x(t) and y(t) is defined as
$$R_{xy}(\tau) = \int_{-\infty}^{+\infty} x^*(t)\, y(t + \tau)\, dt \qquad (2.52)$$

where the superscript * indicates the complex conjugate of the random process. It follows immediately from the convolution theorem:
$$R_{xy}(\tau) = F^{-1}\left\{ X(f)^*\, Y(f) \right\} \qquad (2.53)$$

Recalling that for a stationary ergodic random process the autocorrelation is given by Equation 2.16, an immediate consequence of the convolution theorem is
$$R_{xx}(\tau) = \int_{-\infty}^{+\infty} x(t + \tau)\, x(t)\, dt = F^{-1}\left\{ X(f)^* X(f) \right\} = F^{-1}\left\{ \left| X(f) \right|^2 \right\} \qquad (2.54)$$

Discrete Fourier transform  As outlined in "Analog-to-digital conversion of experimental data" section, in most cases we deal with digital data, which can be described as finite sequences of values xn. Suppose that xn is a digital data ensemble of N equally time-spaced samples. The time separation is Δt (the period is T0 = NΔt; the sampling frequency is fs = 1/Δt). The discrete Fourier transform (DFT) is defined with the following formula to extract the spectral coefficients Xk (corresponding to the amplitudes of the harmonic sinusoidal functions at equally spaced frequencies with spacing Δf = 1/T0):
$$X_k = \sum_{n=0}^{N-1} x_n\, e^{-\frac{2\pi i n k}{N}}, \quad k = 0, 1, \ldots, N - 1 \qquad (2.55)$$

Note that the spacing between the spectral frequencies is
$$\Delta f = \frac{1}{T_0} = \frac{1}{N \Delta t} = \frac{f_s}{N} \qquad (2.56)$$

The number of samples and the sampling frequency determine the spectral resolution of the DFT: the higher N, the smaller the spacing between the spectral frequencies, that is, the better resolved the spectrum. Notice that the spectral coefficients are provided at the frequencies fk = kΔf = kfs/N. As outlined in "Analog-to-digital conversion of experimental data" section, the maximum frequency that can be represented with data sampled at fs is the Nyquist frequency, that is, fNyq = fs/2. This means that the spectral information is included in the coefficients with frequency below fNyq, while the remaining coefficients embed phase information, since for a discrete signal the trigonometric functions with frequencies higher than the Nyquist one correspond to low (negative) frequency trigonometric functions (for instance, a signal with frequency equal to fs is a constant signal).
As in the continuous version, an interesting aspect is the interconnection between the DFT and the discrete correlation of two discrete periodic signals xn and yn, given by
$$R_{xy}(j) = \frac{1}{N} \sum_{n=0}^{N-1} x_n\, y_{n+j} \qquad (2.57)$$

The discrete version of the convolution theorem states that the discrete correlation in Equation 2.57 and the product of the DFTs of the two periodic signals form a DFT pair, as in Equation 2.53.
One remarkable feature of the DFT is related to the computational cost to calculate it. In principle, the direct computation of Equation 2.55 would require O(N²) operations. Actually,

efficient algorithms to noticeably reduce the computational cost of the DFT have been known since the work by Cooley and Tukey [14] and are referred to as fast Fourier transform (FFT). FFTs are now in common use (see, for instance, the open-source package available on http://www.fftw.org/ [15]) as the required computational cost is O(N log₂ N). For this reason, auto- and cross-correlation are generally computed in the frequency domain by applying the convolution theorem.
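The equivalence between the direct evaluation of Equation 2.57 and its FFT-based computation can be verified with a short sketch (Python with NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=1024)

# Direct evaluation of the circular correlation, Equation 2.57 (O(N^2))
N = x.size
R_direct = np.array([np.mean(x * np.roll(x, -j)) for j in range(N)])

# Same result through the convolution theorem (O(N log N))
X = np.fft.fft(x)
R_fft = np.real(np.fft.ifft(np.conj(X) * X)) / N

print(np.allclose(R_direct, R_fft))   # True
```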

Power spectral density  While Equation 2.54 indicates the total power of the signal, one might be interested in extracting information on the power distribution across the spectral frequencies. The Power Spectral Density (PSD) relates to the square of the magnitude of the spectral coefficients (thus, according to Equation 2.54, to the Fourier transform of the autocorrelation function):
$$S(f_k) = \frac{1}{f_s N} \left| X_k \right|^2, \quad k = 0, 1, \ldots, N - 1 \qquad (2.58)$$

For real signals, the PSD is periodic with period equal to the number of samples N and symmetric with respect to k = N/2 + 1 for N even (k = (N + 1)/2 for N odd). The two-sided PSD is obtained by representing Equation 2.58 as symmetric with respect to k = 0 in the range of frequencies −fNyq ≤ f ≤ fNyq. Given the symmetry around k = 0, the two-sided PSD is also often represented as a one-sided PSD, which is obtained by multiplying the right-hand side of Equation 2.58 by 2, with k = 0, 1, …, N/2 ((N − 1)/2 for N odd).
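A possible implementation of the one-sided PSD (a sketch assuming Python with NumPy; the folding of the negative frequencies follows the text above) is the following:

```python
import numpy as np

def one_sided_psd(x, fs):
    """One-sided PSD of a real signal, following Equation 2.58 and the text."""
    N = x.size
    X = np.fft.rfft(x)
    S = (np.abs(X) ** 2) / (fs * N)      # two-sided density, Equation 2.58
    if N % 2 == 0:
        S[1:-1] *= 2                     # keep DC and Nyquist unscaled
    else:
        S[1:] *= 2                       # keep DC unscaled
    f = np.fft.rfftfreq(N, d=1.0 / fs)
    return f, S

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
f, S = one_sided_psd(x, fs)
print(f[np.argsort(S)[-2:]])             # dominant frequencies: 200 and 50 Hz
```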
The PSD is an extremely useful instrument to extract the spectral behavior, for instance, of turbulent flows (the reader is referred to [6]). However, care must be taken when computing the PSD. One of the most relevant problems affecting the computation of the PSD on finite discrete data sets is the spectral leakage. It can be easily demonstrated that the DFT and its inverse are periodic functions with period equal to the number of samples N. This means that the DFT acts on a periodic version of the signal, which would be obtained by replicating it infinite times (Figure 2.7). Suppose that the signal is a sine wave with a frequency f0. If N/f0 is an integer number (i.e., an integer number of cycles has been captured),

(Figure 2.7 panels: sinusoidal signal; rectangular window; pseudo-periodic signal.)

FIGURE 2.7 Boundary effects due to imposed periodicity on signals observed over a finite window of time.

then the frequency f0 will appear within the set of frequencies of Equation 2.58 and the energy pertaining to it will be correctly represented in the spectrum. On the contrary, if a noninteger number of cycles has been recorded, the spectrum will be represented on frequencies among which f0 does not appear. Since the energy is conserved by the DFT, it will leak to frequencies other than the original one. The origin of this phenomenon can be explained by observing Figure 2.7: the discontinuity induced by sampling a noninteger number of cycles will introduce a spurious abrupt variation, which affects the spectrum over a certain range of frequencies. More rigorously, one can imagine a finite length signal as an infinite one multiplied by a rectangular window with width equal to the number of samples N (i.e., a function equal to 1 during the sampling time and 0 outside of it). In principle, we are implicitly doing this operation with all finite length signals, which appear to the "eyes" of the DFT as a single cycle of a periodic signal.
The DFT of the product of two signals is the convolution of the Fourier transforms:
$$F\left\{ x(t)\, y(t) \right\} = X(f) \otimes Y(f) = \int_{-\infty}^{+\infty} X(\phi)\, Y(f - \phi)\, d\phi \qquad (2.59)$$

The frequency response of a rectangular window is the following:
$$\Psi(k) = \frac{\sin(\pi k)}{N \sin(\pi k / N)}, \quad k = 0, 1, \ldots, N/2 \qquad (2.60)$$
whose spectrum is characterized by "lobes," which determine the spectral leakage.


Evidently, capturing longer sequences (large N) reduces the effects of spectral leakage. This is an intuitive consequence of the fact that border effects become less important if the signal is long enough. Furthermore, if the number of samples N is increased while keeping the sampling frequency constant, the spectral resolution (i.e., the spacing between the spectral frequencies given by Equation 2.56) is improved, thus increasing the probability that the generic frequency f0 of the previous example will appear in the set of harmonic frequencies, or, at least, will be better approximated. This concept is indeed intuitive: observing a periodic phenomenon over a larger observation time (thus including more periods) leads to a better definition of the periodicity itself.
These aspects have to be taken into account when setting the acquisition time of our instrument to obtain a proper representation of the spectra. Suppose, for instance, that the phenomenon under investigation is the (relatively) low-frequency shedding of a circular cylinder in crossflow. The release of Kármán vortices occurs with a Strouhal number St = fd/U ≈ 0.2. Considering a cylinder with diameter d = 5 cm and a crossflow speed of U = 1.3 m/s, this results in a shedding frequency of about 5.2 Hz. If we are capturing the wake fluctuations with, for example, a hot wire anemometer capturing at 1 kHz for 1 s (thus recording 1000 samples), then the frequency spacing would be, according to Equation 2.56, of 1 Hz, and the spectrum would be represented at the frequencies [0, 1, 2, 3, …, 500] Hz. Within this set, the sought frequency does not appear, thus determining the spreading of its energy on the neighboring frequencies. If instead 10,000 samples are captured at the same sampling frequency, then the spacing of the frequencies would be 0.1 Hz, thus significantly improving the spectral resolution.
Among the possible solutions to reduce the effect of the spectral leakage, two have achieved more success: zero-padding and windowing. Both ideas aim to smear out the border effects due to the periodicity imposed by the DFT. Zero-padding consists in adding zeros to the end of a signal to artificially increase its length. This is normally used with the additional task of reaching the closest power of 2 to the original value of N, since FFTs on basis 2 are very efficient from the computational viewpoint. Windowing consists in pre-multiplying the signal by a weighting function, which smears out the signal at the borders, thus reducing

the edge effects related to the imposed periodicity. If wn (with n = 0, 1, …, N − 1) is the generic weighting function, the DFT of the windowed signal is
$$X_{w_k}(f_k = k \Delta f) = \sum_{n=0}^{N-1} w_n\, x_n\, e^{-\frac{2\pi i n k}{N}}, \quad k = 0, 1, \ldots, N - 1 \qquad (2.61)$$

and the PSD will be
$$S(f_k) = \frac{1}{f_s W} \left| X_{w_k} \right|^2, \quad k = 0, 1, \ldots, N - 1 \qquad (2.62)$$

where the norm used for the normalization at the denominator is
$$W^2 = N \sum_{n=0}^{N-1} w_n^2 \qquad (2.63)$$

For the case of the rectangular window, we obtain the term Nfs appearing in Equation 2.58. Some
examples of windows for data windowing are reported in Table 2.1 and illustrated in Figure 2.8.

Table 2.1 Some examples of window functions for data windowing

Window function    Analytical expression
Rectangular        w(n) = 1
Bartlett           w(n) = 1 − |n − (N − 1)/2| / ((N − 1)/2)
Hann               w(n) = (1/2)[1 − cos(2πn/(N − 1))]
Blackman           w(n) = 0.42 − 0.5 cos(2πn/(N − 1)) + 0.08 cos(4πn/(N − 1))

(Figure 2.8: weighting windows — rectangular, Hann, Bartlett, and Blackman.)

FIGURE 2.8 Examples of window functions.
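The effect of windowing on leakage can be explored with a short sketch (Python with NumPy assumed; the Hann and Blackman expressions are those of Table 2.1, and the normalization follows Equations 2.62 and 2.63 as reconstructed above):

```python
import numpy as np

Nw = 1024
n = np.arange(Nw)

# Window functions from Table 2.1 (Hann and Blackman shown)
w_hann = 0.5 * (1.0 - np.cos(2 * np.pi * n / (Nw - 1)))
w_black = 0.42 - 0.5 * np.cos(2 * np.pi * n / (Nw - 1)) \
          + 0.08 * np.cos(4 * np.pi * n / (Nw - 1))

# A sine with a noninteger number of cycles in the record leaks energy
fs = 1024.0
t = n / fs
x = np.sin(2 * np.pi * 100.5 * t)        # 100.5 Hz falls between DFT bins

S_rect = np.abs(np.fft.rfft(x)) ** 2 / (fs * Nw)            # Equation 2.58
W = np.sqrt(Nw * np.sum(w_hann ** 2))                       # Equation 2.63
S_hann = np.abs(np.fft.rfft(w_hann * x)) ** 2 / (fs * W)    # Equation 2.62

# Far from the peak, the Hann-windowed spectrum decays much faster
print(S_rect[300] / S_rect.max(), S_hann[300] / S_hann.max())
```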



An additional source of leakage is due to the presence of very low-frequency information in the data (for instance, a linear trend). In case a frequency component lower than the fundamental one (i.e., the inverse of the time length of the signal) is present, its pertaining energy leaks into adjacent frequencies. In order to suppress this effect, it is good practice to identify and suppress trends before performing the Fourier analysis to compute the spectrum. This process is referred to as detrending.

Proper orthogonal decomposition
Fundamentals of proper orthogonal decomposition  The Proper Orthogonal Decomposition (POD) is a mathematical procedure that identifies a set of orthonormal basis functions computed as the solution of the integral eigenvalue problem referred to as the Fredholm equation. A rigorous formulation of the problem goes beyond the scope of this chapter; please refer to [16–18] for a more detailed overview of formulations and applications.
POD was originally conceived independently for purposes other than turbulent flow analysis [19–22]; it is also commonly referred to as Karhunen–Loève decomposition or principal component analysis. The use of POD to perform low-order modeling of turbulent flows was first proposed by Lumley [23] with the aim of identifying coherent structures in the flow. A turbulent coherent structure is a spatially coherent vortical motion, which maintains its coherence over a sufficiently large time. The identification of the coherent structures and their interaction is of primary importance, as they typically contain the bulk of turbulent kinetic energy and play a leading role in processes like mixing, noise generation, dynamic loads in fluid–structure interaction, etc.
As outlined in the previous section, Equation 2.40 provides an approximation of the fluctuating velocity field $u(\mathbf{x}, t)$ for a finite number of modes Nm. Among the infinite possible sets of basis functions, we could select the set that is optimal in the least square sense. The approximation problem then corresponds to finding the set of $\varphi_n(\mathbf{x})$ such that the norm of the difference between $u(\mathbf{x}, t)$ and its approximation would be minimum. Suppose that we have captured Nt realizations of the flow field, that is, t = ti, i = 1, 2, …, Nt; the problem would correspond to finding the minimum:
$$\min\left\{ \sum_{i=1}^{N_t} \left\| u(\mathbf{x}, t_i) - \sum_{n=1}^{N_m} \big( u(\mathbf{x}, t_i),\, \varphi_n(\mathbf{x}) \big)\, \varphi_n(\mathbf{x}) \right\|^2 \right\} \qquad (2.64)$$
where (⋯, ⋯) is used to indicate the inner product. The set of $\varphi_n(\mathbf{x})$ satisfying Equation 2.64 is energetically optimal, in the sense that the approximation is as good as possible for each value of the number of modes Nm in the least square sense, that is, in terms of the energy (Frobenius) norm.
The data are arranged in an Np × Nt matrix (where Np is the number of points $\mathbf{x}$ on which velocity is measured), in which data from the same time instants (also referred to as snapshots) are arranged in columns. The matrix has the following form:
$$U = \begin{bmatrix} u(\mathbf{x}_1, t_1) & \cdots & u(\mathbf{x}_1, t_{N_t}) \\ \vdots & \ddots & \vdots \\ u(\mathbf{x}_{N_p}, t_1) & \cdots & u(\mathbf{x}_{N_p}, t_{N_t}) \end{bmatrix} \qquad (2.65)$$

The matrix U is defined as the snapshot matrix. It can be demonstrated that the solution of the minimization problem of Equation 2.64 corresponds to the computation of the SVD of the matrix $U^* U$ (with $U^*$ being the conjugate transpose of U). The SVD is the decomposition of the matrix in the following product:
$$U = \Psi\, \Sigma\, \Phi^* \qquad (2.66)$$
with $\Sigma$ being a diagonal Np × Nt matrix, whose nonnegative diagonal elements σi are referred to as singular values; $\Psi$ and $\Phi$ are unitary matrices (i.e., $\Psi \Psi^* = I_{N_p}$ and $\Phi \Phi^* = I_{N_t}$,

where $I_N$ indicates a square N × N matrix with all the diagonal elements equal to 1). The columns of $\Psi$ and $\Phi$ contain the left and right singular vectors. It follows from Equation 2.66 that
$$U^* U = \Phi\, \Sigma^*\, \Psi^* \Psi\, \Sigma\, \Phi^* = \Phi\, \Sigma^2\, \Phi^* \qquad (2.67)$$

Note that $\Sigma^2$ is a diagonal square matrix with size Nt × Nt. Since $U^* U$ (also referred to as the two-point temporal correlation matrix, since it is composed of the products of velocities at different points and times) is a nonnegative Hermitian matrix, its eigenvalue decomposition can be expressed as
$$U^* U = Q\, \Lambda\, Q^* \qquad (2.68)$$

From the comparison of Equations 2.67 and 2.68 it follows that $Q = \Phi$ is the matrix containing the right singular vectors, while $\Lambda = \Sigma^2$ is a diagonal matrix containing the eigenvalues, which are evidently related to the singular values by $\sigma_i = \sqrt{\lambda_i}$.
The same reasoning can be applied to $U U^*$. However, it has to be underlined that the computational burden of the two problems is quite different, depending on the aspect ratio of the snapshot matrix: if the number of snapshots is significantly smaller than the number of measurement points, solving the eigenvalue problem for $U^* U$ is cheaper. Moreover, for a rectangular matrix with Nt < Np, performing the SVD of $U U^*$ leads to Np modes, of which only the first Nt would be orthogonal, while the following Np − Nt would be a linear combination of the others. The matrix $\Phi$ is composed of the eigenvectors of the two-point correlation matrix. The advantage of solving this problem via the SVD is that the columns (i.e., the eigenvectors) will be sorted according to the value of the respective eigenvalues λi. In other words, the first column will be the eigenvector corresponding to the largest eigenvalue, the second column to the second largest eigenvalue, and so on. It can be demonstrated that the eigenvalues measure the relative importance of the different basis functions in terms of the turbulent kinetic energy of the flow; consequently, the SVD determines a set of basis functions sorted by their energy contribution.
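A minimal snapshot-POD sketch via the SVD (Python with NumPy assumed; the snapshot matrix is filled with synthetic data in place of measured velocity fields) could read:

```python
import numpy as np

# Snapshot POD via SVD: U is the Np x Nt matrix of velocity fluctuations
# (Equation 2.65), here filled with synthetic data for illustration
rng = np.random.default_rng(3)
Np, Nt = 2000, 100
U = rng.normal(size=(Np, Nt))

Psi, sigma, Phi_h = np.linalg.svd(U, full_matrices=False)

# Spatial modes (columns of Psi), sorted by energy: lambda_i = sigma_i^2
energy = sigma ** 2
print("relative energy of first 5 modes:", energy[:5] / energy.sum())

# Temporal coefficients a_n(t_i) and a rank-K low-order reconstruction
K = 10
a = np.diag(sigma) @ Phi_h               # mode coefficients over time
U_lor = Psi[:, :K] @ a[:K, :]            # reconstruction with K modes
```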

Low-order reconstruction  As outlined in the previous paragraph, POD is an optimal decomposition from the energetic viewpoint (see Equation 2.64). This feature suggests that a relatively small number of modes would be sufficient to describe a turbulent flow field in an optimal sense in terms of energy. In other words, a low-order approximation can be achieved using the modes containing the bulk of the energy, that is, the eigenvectors corresponding to the largest eigenvalues. If we use a subset of the first K modes (supposing they are sorted by their energy content), the residual error in the low-order reconstruction would be
$$e(K) = \sum_{i=1}^{N_t} \left\| u(\mathbf{x}, t_i) - \sum_{n=1}^{K} \big( u(\mathbf{x}, t_i),\, \varphi_n(\mathbf{x}) \big)\, \varphi_n(\mathbf{x}) \right\|^2 = \sum_{i=1}^{N_t} \left\| \sum_{n=K+1}^{N_m} \big( u(\mathbf{x}, t_i),\, \varphi_n(\mathbf{x}) \big)\, \varphi_n(\mathbf{x}) \right\|^2 \qquad (2.69)$$

The main concern in performing low-order reconstructions of experimental data is the contamination due to measurement errors. The energetic optimality of POD guarantees that the real flow field information is contained in the first modes and rapidly decays as K increases. On the other side, the measurement error distribution is more complex to predict, and it requires a deep knowledge of the measurement uncertainty. Generally speaking, a consequence of the central limit theorem is that precision errors will have a spectrally white distribution (i.e., error magnitude independent of the local scale). The choice of the optimal value of K to recover most of the information of the real signal with minimum noise contamination is critical; nevertheless, it has long been left to empirical judgment. It is typical to introduce modes until the turbulent kinetic energy of the reconstructed snapshots reaches a certain percentage of the total, for example, 90%. However, intuition tells us that for data

with strong noise contamination this cutoff should be set to a lower value, while "cleaner" data allow the insertion of a larger percentage of modes without significant degradation due to noise. Raiola et al. [24] modeled theoretically the effect of noise contamination on velocity field data obtained with Particle Image Velocimetry (Chapter 10). Furthermore, they proposed an empirical criterion to identify the optimum number of modes to achieve error minimization in POD-based low-order reconstructions. The criterion is based on the observation that when the effect of random errors is dominant, the energy of the pertaining modes assumes the feature of a white noise, that is, the residual decreases linearly with the number of modes. This can be easily detected by observing the relative decrease rate of the reconstruction error and setting a threshold on its value to identify the optimum number of modes:
$$F(K) = \frac{e^2(K+1) - e^2(K)}{e^2(K) - e^2(K-1)}, \qquad F(K_{\mathrm{opt}}) \approx 0.999 \qquad (2.70)$$

Evidently, F(K) approaches the value of 1 in case of a fully white spectral behavior, while it would be monotonically increasing when inserting the first modes, whose contribution is mainly related to the signal and follows the path of the turbulent cascade (i.e., energy decreasing monotonically as the turbulent structures decrease in size).
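A sketch of this criterion (Python with NumPy assumed; the interpretation of Equation 2.70 as the first K for which F(K) reaches the threshold is an assumption of this sketch) is the following:

```python
import numpy as np

def optimal_mode_number(sigma, threshold=0.999):
    # Residual energy after retaining K modes: res[K] = sum_{i>K} sigma_i^2
    res = np.concatenate([np.cumsum((sigma ** 2)[::-1])[::-1], [0.0]])
    K = np.arange(1, sigma.size)                        # candidate mode numbers
    F = (res[K + 1] - res[K]) / (res[K] - res[K - 1])   # Equation 2.70
    above = K[F >= threshold]
    return int(above[0]) if above.size else int(sigma.size)

# Usage with mock singular values (decreasing, as returned by the SVD)
sigma = np.sort(np.random.default_rng(6).random(100))[::-1]
print(optimal_mode_number(sigma))
```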
In case of phenomena characterized by a strong large-scale shedding (like the wake of a cylinder), POD can be used to extract phase information. The basic principle is that if the shedding motion contains the bulk of the energy, the first two POD modes are representative of it. Ben Chiekh et al. [25] proposed to extract a low-order representation of the flow field according to the following decomposition:
$$U_{\mathrm{LOR}}(\mathbf{x}, t) = \overline{U}(\mathbf{x}, t) + a_1(t)\, \varphi_1(\mathbf{x}) + a_2(t)\, \varphi_2(\mathbf{x})$$
$$a_1 = \sqrt{2 \lambda_1} \sin\theta, \qquad a_2 = \sqrt{2 \lambda_2} \cos\theta \qquad (2.71)$$
where θ is the shedding angle.


In order to verify that the first two modes represent the coherent harmonics related to the shedding motion, the scatter plot of the coefficients normalized with their respective eigenvalues, $a_1 / \sqrt{2\lambda_1}$ and $a_2 / \sqrt{2\lambda_2}$, should be observed. If the points distribute in the neighborhood of a circle with unit radius, Equations 2.71 are respected.
An example of application of this technique is proposed in [26]. The velocity field of a turbulent swirling jet at Re = 50 · 10³, generated by a cold flow model aero engine lean burn injector, is measured with tomographic particle image velocimetry (Chapter 11). Jets at high swirl are characterized by the formation of a recirculation region along the jet centerline. This region sheds periodically in a gyroscopic-like motion around the jet axis itself; this phenomenon is referred to as precessing vortex core. Since it is a very large-scale motion, it is expected to have a high energy contribution characterizing the first modes. A modal analysis is carried out with POD; subsequently, a low-order reconstruction of the velocity field is performed to reconstruct the dynamics of the periodic motion of the precessing vortex core. Figure 2.9 reports the results obtained for a swirling jet in conditions of free outflow and when confined in a cylindrical pipe with diameter equal to three times that of the nozzle.

Dynamic mode decomposition
The main limitation of POD is that, since second-order statistics are used to decompose the flow field ($U^* U$), the information on the phase is lost in the process, apart from the very special cases of shedding-dominated flows outlined earlier. In recent years, the DMD proposed by Schmid [27] for fluid dynamics analysis has gained popularity due to its simplicity and its "matrix-free" computation (i.e., it relies directly on collected data input, such as velocity fields, without assuming any knowledge of an underlying system matrix, which is available only in numerical computations, to perform a stability analysis). DMD can be applied to time-resolved experimental and numerical data. A brief description of the technique is provided in the following. The reader is referred to [27] and subsequent papers for a more detailed dissertation.

(Figure 2.9: scatter plots of a₂/√(2λ₂) versus a₁/√(2λ₁) (panels a, b) and velocity iso-contours V/Vj on X/D–Y/D axes (panels c through f).)

FIGURE 2.9 (See color insert.) Left: Scatter plot of the normalized time coefficients a1, a2 for free (a) and confined (b) swirling jets. The circumference with radius 1 is plotted for reference. Right: Iso-contours with velocity vectors of the instantaneous velocity maps V/Vj (left) and of the low-order reconstruction velocity (right) at the jet mid-plane and isosurfaces of positive second invariant of the velocity gradient tensor for free swirling jet (c, e) and confined swirling jet (d, f). (Reprinted from Experimental Thermal and Fluid Science, 52, Ceglia, G., Discetti, S., Ianiro, A., Michaelis, D., Astarita, T., and Cardone, G., Three-dimensional organization of the flow structure in a non-reactive model aero engine lean burn injection system, 164–173, Copyright 2014, with permission from Elsevier.)

The data are arranged in a data matrix, with the columns representing the individual data samples. Suppose we have Nt snapshots ui (data are arranged in columns), with equal time spacing Δt, and subscript i = 1, …, Nt indicating that ui is the snapshot acquired at the instant (i − 1)Δt. A data matrix is built, whose columns are the snapshots of the sequence (in perfect analogy with the snapshot matrix introduced in Equation 2.65):
$$U = \left[ u_1, \ldots, u_{N_t} \right] \qquad (2.72)$$

A first-order approximation of the transition between one snapshot and the following one is given by
$$u_{i+1} = A\, u_i, \quad i = 1, \ldots, N_t - 1 \qquad (2.73)$$

For a linear time-discrete time-invariant system, the mapping matrix A describes exactly the time evolution of the system and does not change over time. If the flow field evolution is generated by a nonlinear process, Equation 2.73 leads to a tangential linear approximation. Assuming that within this "linearity" approximation the mapping matrix is constant, the data sequence can be written as a Krylov sequence (see [28,29]):
$$U = \left[ u_1,\; A u_1,\; A^2 u_1,\; \ldots,\; A^{N_t - 1} u_1 \right] \qquad (2.74)$$

By using the more synthetic notation $U_i^N$, where the subscript i indicates the time instant relative to the first column and the superscript N indicates the time instant of the last column, this can be written in compact form as
$$U_2^N = A\, U_1^{N-1} \qquad (2.75)$$

Equation 2.75 can be approximated using a companion matrix S:
$$U_2^N = A\, U_1^{N-1} \approx U_1^{N-1} S \qquad (2.76)$$

It can be easily shown that the matrix S simply shifts the snapshots (indeed, the first column of $U_2^N$ is equal to the second column of $U_1^{N-1}$, and so on through Nt − 1), while the last column determines the snapshot Nt as a linear combination of the snapshots 1 through Nt − 1 contained in $U_1^{N_t - 1}$:
$$S = \begin{bmatrix} 0 & 0 & \cdots & 0 & a_1 \\ 1 & 0 & \cdots & 0 & a_2 \\ 0 & 1 & & 0 & a_3 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & \cdots & 1 & a_{N_t - 1} \end{bmatrix} \qquad (2.77)$$

It can be demonstrated that the eigenvalues of S approximate fairly well those of the mapping matrix A. The last column of S is the vector $\mathbf{a} = [a_1, a_2, \ldots, a_{N_t - 1}]$, which is such that
$$u_{N_t} = a_1 u_1 + a_2 u_2 + \cdots + a_{N_t - 1} u_{N_t - 1} + r \qquad (2.78)$$

with r being a residual to be minimized. It can be demonstrated that the least square solution of Equation 2.78 is
$$\mathbf{a} = R^{-1} Q^*\, u_{N_t} \qquad (2.79)$$
with Q and R given by the QR decomposition $U_1^{N_t - 1} = Q R$.


This procedure, even if formally correct, leads to ill-posed problems, especially in the case of experimental data, which are often noise contaminated. Schmid [27] proposes a more robust implementation, in which the companion matrix S is estimated starting from the singular value decomposition of the snapshot matrix $U_1^{N_t - 1}$, as in Equation 2.66:
$$U_1^{N_t - 1} = \Psi\, \Sigma\, \Phi^* \qquad (2.80)$$

After substitution in Equation 2.76 and some manipulation,
$$\tilde{S} = \Psi^*\, U_2^{N_t}\, \Phi\, \Sigma^{-1} \qquad (2.81)$$
where $\tilde{S}$ is an approximation of S (in the sense that it well approximates its eigenvalues and eigenvectors). If $y_i$ are the eigenmodes of $\tilde{S}$, the dynamic modes are obtained as
$$\Phi_i = \Psi\, y_i \qquad (2.82)$$

It can be demonstrated that the eigenvalues μi of the matrix $\tilde{S}$ (i.e., $\tilde{S} y_i = \mu_i y_i$) contain information about the growth/decay rates of the dynamic modes [30]. The real and the imaginary parts of the logarithms of the eigenvalues represent the damping/growth rate and the frequency of the modes, respectively.
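The procedure of Equations 2.80 through 2.82 translates into a few lines of linear algebra; the following sketch (Python with NumPy assumed, synthetic snapshots in place of time-resolved measurements) extracts modes, growth rates, and frequencies:

```python
import numpy as np

# Minimal DMD sketch following Equations 2.80-2.82 (synthetic data assumed;
# in practice the columns of U are time-resolved velocity snapshots)
rng = np.random.default_rng(4)
Np, Nt, dt = 500, 60, 0.01
U = rng.normal(size=(Np, Nt))

U1 = U[:, :-1]                            # snapshots 1 ... Nt-1
U2 = U[:, 1:]                             # snapshots 2 ... Nt

Psi, sigma, Phi_h = np.linalg.svd(U1, full_matrices=False)
S_tilde = Psi.conj().T @ U2 @ Phi_h.conj().T @ np.diag(1.0 / sigma)  # Eq. 2.81

mu, y = np.linalg.eig(S_tilde)            # eigenvalues/eigenmodes of S~
modes = Psi @ y                           # dynamic modes, Equation 2.82

# Growth rates and frequencies from the logarithm of the eigenvalues
log_mu = np.log(mu.astype(complex))
growth = np.real(log_mu) / dt
freq = np.imag(log_mu) / (2 * np.pi * dt)
```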

Conditional averages
What is a conditional average?  The operator "expectation" E[⋯] introduced in "Statistical data characterization" section acts unconditionally on a random signal to extract its statistical features. In other words, if you have to calculate the mean value assumed by a random process over

a set of realizations, the mean operator of Equation 2.5 will take the following form for a generic discrete signal xi (where the subscript i indicates the individual realization), with i = 1, …, N:
$$E\left[ x_i \right] = \frac{1}{N} \sum_{i=1}^{N} x_i \qquad (2.83)$$

In this process, each realization will have the same importance in determining the mean, and
in this sense this operation is unconditioned.
Suppose now that we are interested in obtaining the statistical information of a random
process only when a particular event occurs. For example, suppose that you want to measure
the average level of turbulence on a runway of an airport in the two minutes after the takeoff of
an aircraft, and the only available instruments are an anemometric system to measure continu-
ously the air turbulence intensity and a clock to cross-check the takeoff time. In order to obtain
the desired average intensity in the allocated time slot, you want to use the samples only in the
two minutes right after the takeoff of an aircraft, that is, the average has to be performed under
the condition that an aircraft has passed on the runway in the last 2 minutes. The result is a
conditional average, in the sense that it is obtained only considering a subset of realizations in
which a certain condition is verified.
Suppose again, without any loss of generality, that the observed random process is the velocity field $U(\mathbf{x}, t)$, which can be decomposed as the sum of its temporal mean $\overline{U}(\mathbf{x}, t)$ and a fluctuating part $u(\mathbf{x}, t)$. If $\boldsymbol{\xi} = [\xi_1, \xi_2, \ldots, \xi_M]$ is a vector of events, the conditional average is the sum of the unconditioned average of the random process and the conditional average of its fluctuating part:
$$E\left[ U(\mathbf{x}, t)\, \middle|\, \boldsymbol{\xi} \right] = \overline{U}(\mathbf{x}, t) + E\left[ u(\mathbf{x}, t)\, \middle|\, \boldsymbol{\xi} \right] \qquad (2.84)$$

Conditional averages in experimental aerodynamics  In many fundamental problems of fluid mechanics, the identification of the flow field features occurring in the presence of particular events is of great interest. For example, for purposes of active control of the boundary layer on a wing, the investigation of the flow field features when particular perturbations of the flow occur, such as events determining strong shear stresses or abrupt pressure fluctuations, might be of interest. If we are interested, instead, in determining the phase-averaged flow field of a shedding phenomenon (say, for instance, the turbulent wake past a cylinder) using a field velocity measurement technique (such as particle image velocimetry), it might be reasonable to equip the experimental setup with an auxiliary point-wise instrument (for instance, an anemometer or a pressure probe), which will detect the fluctuations due to the shedding wake and trigger the acquisition of the main instrumentation.
One interesting path that has been followed in wall-bounded shear flows is to perform a quadrant analysis to identify the mechanisms occurring in the formation of the Reynolds shear stress $\overline{u_1 u_2}$, that is, the events corresponding to the quadrants in the Cartesian plane u1 − u2 are observed [31]. This method has been particularly used in wall turbulence, with the first and second velocity components being, respectively, in the streamwise and wall-normal directions. For example, a negative value of $\overline{u_1 u_2}$ might be due either to u1 < 0, u2 > 0 (i.e., in the second quadrant, corresponding to negative streamwise fluctuations ejected away from the wall; these events are commonly referred to as ejections) or to u1 > 0, u2 < 0 (i.e., in the fourth quadrant, corresponding to high momentum flow moved toward the wall; these motions are called sweeps). The two events, from the point of view of the Reynolds shear stress, are indiscernible. However, by setting the events ξ1 = [u1 < 0, u2 > 0] and ξ2 = [u1 > 0, u2 < 0] it is possible to isolate the two different phenomenologies.

Fundamentals of stochastic estimation  The conditional average is a powerful instrument, but on the downside, under some conditions it might be very difficult to apply. For instance, conditional sampling (such as in the example of the cylinder wake) is not always possible. Furthermore, the number of useful samples to calculate the conditional average depends on how frequent the event is.

Suppose, for instance, that you want to measure the average speed of cars passing at a certain point of a highway with a fixed sampling frequency. The uncertainty on the estimated value depends on the inverse of the square root of the number of samples, as outlined in Equation 2.27; so, according to the traffic intensity, a certain observation time would be set in order to obtain the desired accuracy. Suppose now that you want to identify the average velocity of red cars (or, in other words, the average velocity of the cars when the event "the car is red" is verified). If the same accuracy is to be achieved, the observation time now will depend on the number of red cars passing per unit time, that is, on the probability that a velocity measurement is relative to a red car or to a car with a different color. If we add more restrictions, such as that the car should also be of a particular brand, the event will become more and more infrequent and the observation time to achieve a sufficient number of samples will be larger.
A different approach to estimate conditional averages is based on stochastic estimation, which is the approximation of a random variable in terms of other known random variables (e.g., the occurrence of a set of events). Following the approach in the review by Adrian [32], consider, without any loss of generality, that the random variable to be estimated is the fluctuating velocity field $u(\mathbf{x}, t)$ and that the events to be verified are collected in the vector $\boldsymbol{\xi}$. In general, a stochastic estimate $\hat{u}$ would be a generic function $F(\boldsymbol{\xi}, \mathbf{x}, t)$. It can be demonstrated that, among all the possible estimates $F(\boldsymbol{\xi}, \mathbf{x}, t)$, the best mean square estimate is the conditional average $E[u(\mathbf{x}, t) | \boldsymbol{\xi}]$ (see [18] for a derivation). In general, the conditional average is a nonlinear function of $\boldsymbol{\xi}$; nonetheless, it is linear in the case $u(\mathbf{x}, t)$ and $\boldsymbol{\xi}$ have a joint-normal probability distribution [33]. In this case, the stochastic estimation of u is referred to as linear mean square estimation.
In general, it is reasonable to linearize the problem around the point $\mathbf{x}$. The linear mean square estimation of the ith velocity component ui at a certain point, given a set of M events $\boldsymbol{\xi} = [\xi_1, \xi_2, \ldots, \xi_M]$, is
$$\hat{u}_i = \sum_{j=1}^{M} A_{ij}\, \xi_j, \quad i = 1, 2, 3 \qquad (2.85)$$

The mean square error estimate is obtained by minimizing the residual:
$$\mathrm{res} = \overline{ \left( u_i - \sum_{j=1}^{M} A_{ij}\, \xi_j \right)^2 }, \quad i = 1, 2, 3 \qquad (2.86)$$

In this system, the coefficients Aij are the unknowns to be extracted. A necessary condition to minimize the residual in Equation 2.86 is the orthogonality principle, that is, the errors are statistically orthogonal to the data set. This can be obtained by equating to zero the derivative of the residual with respect to the coefficients Aij:
$$\overline{ \left( u_i - \sum_{j=1}^{M} A_{ij}\, \xi_j \right) \xi_k } = 0, \quad i = 1, 2, 3, \quad k = 1, \ldots, M \qquad (2.87)$$

Equation 2.87 can be rearranged to obtain the following system of equations in the unknowns Aij:
$$\sum_{j=1}^{M} \overline{\xi_j\, \xi_k}\, A_{ij} = \overline{\xi_k\, u_i}, \quad i = 1, 2, 3, \quad k = 1, \ldots, M \qquad (2.88)$$
Equation 2.88 is a linear system of M equations in M unknowns for each velocity component i, which can be solved to obtain the Aij. The coefficients Aij depend on the position $\mathbf{x}$ and on the event vector. A similar approach can be used to extract conditional averages. The interested reader is referred to [18,32] for a more detailed description.
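A minimal sketch of linear stochastic estimation on synthetic data (Python with NumPy assumed; the event variables and the underlying linear relation are purely illustrative) is the following:

```python
import numpy as np

# Linear stochastic estimation sketch (Equations 2.85 and 2.88) on synthetic
# data: estimate a velocity component u from M = 2 event variables xi
rng = np.random.default_rng(5)
Ns, M = 10000, 2
xi = rng.normal(size=(Ns, M))                      # realizations of the events
u = 1.5 * xi[:, 0] - 0.7 * xi[:, 1] + 0.1 * rng.normal(size=Ns)

# Build and solve the linear system <xi_j xi_k> A_j = <xi_k u>, Equation 2.88
C = (xi.T @ xi) / Ns                               # event correlation matrix
b = (xi.T @ u) / Ns                                # event-velocity correlations
A = np.linalg.solve(C, b)

u_hat = xi @ A                                     # estimate, Equation 2.85
print("coefficients:", A)                          # close to [1.5, -0.7]
```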

Problems

2.1 An experiment concerning the convective heat transfer measurement of a jet impinging on a flat plate is performed. The flat plate is a constantan foil, heated uniformly by an electric power source and cooled on one side by the impinging jet. The temperature of the plate, supposed to be thermally thin, is measured by an infrared camera. An energy balance is then applied on the slab to measure the convective heat transfer coefficient. Supposing that tangential conduction and natural convection terms can be neglected, the adopted function is
$$h_c = \frac{\dot{q}_J - \dot{q}_r}{T_w - T_{\mathrm{jet}}}$$
where
$\dot{q}_J = VI/A$ is the heat input by Joule effect (V and I are the voltage and current of the electric power input, A is the area of the constantan plate);
$\dot{q}_r = \sigma_{SB}\, \varepsilon \left( T_w^4 - T_{\mathrm{amb}}^4 \right)$ is the radiative heat transfer (σSB is the Stefan–Boltzmann constant, Tw and Tamb are the wall and the ambient temperature, ε is the surface emissivity);
Tjet is the jet temperature (supposed to be in the incompressible regime).
Determine the combined uncertainty on hc using the information in the following table. Consider that the wall temperature measurement is obtained after averaging over 100 statistically independent samples.
Can we state that the uncertainty of hc expressed in nondimensional form as the Nusselt number Nu = hd/k (where d is the jet diameter and k is the thermal conductivity of air) is going to be equal to that of hc?

Quantity   Typical value              Uncertainty information
Tw         305 K                      Randomly distributed error with standard deviation of 300 mK
Tjet       293 K                      Measured by an operator with a thermometer. Error range of ±100 mK
Tamb       293 K                      Measured by an operator with a thermometer. Error range of ±100 mK
ε          0.95                       Obtained by calibration. Error range ±0.01
V          1.40 V                     1% of the reading
I          60 A                       1% of the reading
A          0.2 × 0.15 m²              No uncertainty
σSB        5.67 · 10⁻⁸ W/(m² K⁴)      No uncertainty

2.2 In an experiment the following data are collected:

X (arbitrary units) Y (arbitrary units)


0.1 18.47
0.4 10.01
0.7 8.29
1.0 7.35
1.3 7.18
1.6 8.18
1.9 9.00
2.2 8.55
2.5 9.14
2.8 9.97
3.1 12.60
3.4 14.60
3.7 18.06
4.0 21.45

A function has to be itted through the data for calibration purposes. Suppose that the
function to be itted is

p2
y fit = p0 + p1e X + + p3e - X
X

Is it possible to identify the coeficients through a linear least square procedure? Quantify
the value of the coeficients and the correlation factor.
Would the function still be linear if the third term of the right-hand side is Xp2?
2.3 Write a short code to calculate the PSD of a generic signal. Suppose, then, that N = 1024
samples of signal are captured at a frequency of 1000 Hz. The signal is

$$x(t) = \sin(2\pi f_1 t) + 0.5 \sin(2\pi f_2 t) + \mathrm{noise}$$

that is, the sum of sinusoids at frequencies f1, f2 and a noise contribution, which can be
tuned by the user to quantify the contamination effects on the signal. Plot the measured
spectrum for the cases of f1 = 50  Hz, f2 = 200  Hz, and N = 256, 1024, 4096 samples.
Which are the changes you observe on the spectra? What happens to the spectra if you
window the signal with a Hann window? Supposing now that f2 = 600 Hz, which are the
expected changes of the spectrum?
2.4 A data set of 2D velocity fields in the wake of a cylinder in crossflow close to a wall is provided as supplementary material, available at the webpage https://www.crcpress.com/Experimental-Aerodynamics/Discetti-Ianiro/p/book/9781498704014. The database contains
(a) Grid.m with the measurement points grid (variables XP and YP, in mm)
(b) 300 files with format Cylinder_XXXXXX, containing the snapshots in matrix form (U, V in m/s). The freestream velocity is 2.8 m/s and the cylinder diameter is D = 32 mm. The gap between the bottom of the cylinder and the wall is G = 3D (thus the cylinder axis, parallel to the wall, is at a distance of 3.5D from the wall itself).
Calculate the mean flow field $\overline{U} = \left( \overline{U}_x, \overline{U}_y \right)$ and the in-plane Reynolds stresses $\overline{u_x^2}$, $\overline{u_y^2}$, $\overline{u_x u_y}$. Represent your results in nondimensional form, both for the grid and for the measured quantities.
2.5 Write a code to perform a POD analysis of the data set provided as supplementary material. Analyze the scatter plot of the time coefficients of the first two modes to verify whether a low-order reconstruction of the wake shedding can be performed or not.

References

1. Batchelor GK (2000). An Introduction to Fluid Dynamics, Cambridge University Press, Cambridge, UK.
2. Moon FC (2008). Chaotic and Fractal Dynamics: Introduction for Applied Scientists and Engineers, John Wiley & Sons, Weinheim, Germany.
3. Gaspard P (2005). Chaos, Scattering and Statistical Mechanics, Vol. 9, Cambridge University Press, Cambridge, UK.
4. Ott E (2002). Chaos in Dynamical Systems, Cambridge University Press, Cambridge, UK.
5. Rice J (2006). Mathematical Statistics and Data Analysis, Cengage Learning, Belmont, CA.
6. Pope SB (2001). Turbulent Flows, Cambridge University Press, Cambridge, UK.
7. JCGM (2008). Evaluation of measurement data—Guide to the expression of uncertainty in measurement, JCGM 100:2008.
8. Tavoularis S (2005). Measurement in Fluid Mechanics, Cambridge University Press, Cambridge, UK.
9. Kline SJ, McClintock FA (1953). Describing uncertainties in single-sample experiments, Mechanical Engineering, 75(1), 3–8.
10. Moffat RJ (1985). Using uncertainty analysis in the planning of an experiment, Journal of Fluids Engineering, 107(2), 173–178.
11. Moffat RJ (1988). Describing the uncertainties in experimental results, Experimental Thermal and Fluid Science, 1(1), 3–17.
12. Golub GH, van Loan CF (1996). Matrix Computations, Johns Hopkins University Press, Baltimore, MD.
13. Kelley CT (1999). Iterative Methods for Optimization, Vol. 18, SIAM, Philadelphia, PA.
14. Cooley JW, Tukey JW (1965). An algorithm for the machine calculation of complex Fourier series, Mathematics of Computation, 19(90), 297–301.
15. Fastest Fourier Transform in the West: http://www.fftw.org.
16. Sirovich L (1987). Turbulence and the dynamics of coherent structures. Part I: Coherent structures, Quarterly of Applied Mathematics, 45(3), 561–571.
17. Berkooz G, Holmes P, Lumley JL (1993). The proper orthogonal decomposition in the analysis of turbulent flows, Annual Review of Fluid Mechanics, 25(1), 539–575.
18. Tropea C, Yarin AL, Foss JF (2007). Springer Handbook of Experimental Fluid Mechanics, Vol. 1, Springer Science & Business Media, Berlin Heidelberg, Germany.
19. Karhunen K (1947). Über lineare Methoden in der Wahrscheinlichkeitsrechnung, Vol. 37, Universitat Helsinki.
20. Loève M (1948). Fonctions aléatoires du second ordre, in: Lévy P, Processus stochastiques et mouvement brownien, Gauthier-Villars, Paris, France.
21. Pougachev VS (1953). General theory of the correlations of random functions, Izvestiya Akademii Nauk USSR, 17, 1401–1402.
22. Obukhov MA (1954). Statistical description of continuous fields, Transactions of the Geophysical International Academy Nauk USSR, 24, 3–42.
23. Lumley JL (1967). The structure of inhomogeneous turbulent flows, in: Atmospheric Turbulence and Radio Wave Propagation, pp. 166–178.
24. Raiola M, Discetti S, Ianiro A (2015). On PIV random error minimization with optimal POD-based low-order reconstruction, Experiments in Fluids, 56(4), 1–15.
25. Ben Chiekh M, Michard M, Grosjean N, Bera JC (2004). Reconstruction temporelle d'un champ aérodynamique instationnaire à partir de mesures PIV non résolues dans le temps, 9ème Congrès Français de Vélocimétrie Laser, Brussels, Belgium, paper D, p. 8.
26. Ceglia G, Discetti S, Ianiro A, Michaelis D, Astarita T, Cardone G (2014). Three-dimensional organization of the flow structure in a non-reactive model aero engine lean burn injection system, Experimental Thermal and Fluid Science, 52, 164–173.
27. Schmid PJ (2010). Dynamic mode decomposition of numerical and experimental data, Journal of Fluid Mechanics, 656, 5–28.
28. Greenbaum A (1997). Iterative Methods for Solving Linear Systems, Vol. 17, SIAM, Philadelphia, PA.
29. Trefethen LN, Bau III D (1997). Numerical Linear Algebra, Vol. 50, SIAM, Philadelphia, PA.
30. Jovanović MR, Schmid PJ, Nichols JW (2014). Sparsity-promoting dynamic mode decomposition, Physics of Fluids (1994–present), 26(2), 024103.
31. Adrian RJ (2007). Hairpin vortex organization in wall turbulence, Physics of Fluids (1994–present), 19(4), 041301.
32. Adrian RJ (1994). Stochastic estimation of conditional structure: A review, Applied Scientific Research, 53(3–4), 291–303.
33. Papoulis A, Pillai SU (2002). Probability, Random Variables, and Stochastic Processes, Tata McGraw-Hill Education, New York, NY.
Chapter Three

Experimental facilities
Wind tunnels

Andrea Sciacchitano

Contents

3.1 Relevant testing parameters 56


3.2 Wind tunnel classifications 57
3.3 Low-speed subsonic wind tunnels 57
The test section 59
The diffuser 61
The fan 61
The contraction cone 62
Turbulence reduction devices 63
Evaluation of power losses 64
Wind tunnel boundary corrections 66
3.4 High-speed subsonic and transonic wind tunnels 70
Wall effects in transonic wind tunnels 71
3.5 Supersonic wind tunnels 71
Ideal flow in a supersonic wind tunnel 72
Sizing of the second throat 73
Actual flow in a supersonic tunnel 73
Tunnel start-up with model in the test section 74
The need for drying to avoid condensation 75
The need for heating to avoid liquefaction 76
Supersonic wind tunnel classification 76
3.6 Hypersonic wind tunnels 79
Shock tubes 79
Shock wind tunnels 80
Ludwieg tube wind tunnels 81
Hot-shot wind tunnels 81
Plasma wind tunnels 82
3.7 Special wind tunnels 82
High Reynolds number wind tunnels 82
Anechoic wind tunnels 85
Water tunnels 86
Meteorological wind tunnels 86
Automotive wind tunnels 88
Problems 89
References 89


Wind tunnels are tools used in aerodynamic research to investigate the flow around solid objects. They are structures where a flow is produced, usually by means of a fan, under controlled conditions. The test model is placed in the tunnel test section. Their working principle relies upon the concept of Galilean invariance, according to which the laws of motion are the same in all inertial frames. This means that the same flow field is produced whether the model is in motion with respect to the fluid (as happens in actual flight) or the fluid is in motion with respect to the model (as occurs in wind tunnel tests).
Measurement systems are used to measure, record, or visualize the flow around the model and/or the forces acting upon it.
Wind tunnels offer an economical, rapid, and accurate means for aerodynamic research: in the aerospace sector, they make it possible to investigate the air flow around an aircraft while reducing the number of flight tests, thus saving time, costs, and even lives!
Since wind tunnels are facilities designed mainly for scale-model testing, it is worthwhile starting the discussion from the relevant testing parameters that need to be selected to achieve measurements that are representative of the flow over full-scale bodies.

3.1 Relevant testing parameters

As stated in Chapter 1, a body moving through a medium is subjected to several forces; for aerodynamic applications the most relevant are

• Inertia force $\sim \rho L^3 \times \dfrac{V}{t} = \rho L^2 V^2$
  where $\rho L^3$ is the mass of fluid contained in a volume $L^3$ and $V/t$ represents the characteristic flow acceleration.
• Viscous force $\sim \mu V L$
  with $\mu$ the fluid dynamic viscosity.
• Elastic force $\sim \rho a^2 L^2$
  with $a$ the speed of sound. The product $\rho a^2$ is the bulk modulus of elasticity of a gas and represents the stress needed to develop a unit change in volume.

From the forces mentioned earlier, two relevant force ratios can be defined:

• Reynolds number: $Re = \dfrac{\text{Inertia force}}{\text{Viscous force}} = \dfrac{\rho L V}{\mu}$
• Mach number: $M = \dfrac{V}{a}$, whose square is the ratio of the inertia force to the elastic force

If the wind tunnel model is geometrically similar to the full-scale body and the wind tunnel flow has the same Reynolds and Mach numbers as the full-scale flow, then the flow about the model will be dynamically similar to that about the full-scale vehicle (see Chapter 1). As a consequence, forces and moments developed by the model can be directly scaled to full scale. For free-flight models, the Froude number must also be matched. However, since most wind tunnel tests are carried out with rigid models held by a strut, the matching of the Froude number is usually not required.
The matching of the Mach number is typically relevant for high-speed regimes, where the compressibility effects are predominant with respect to Reynolds number effects. Instead, in the low-speed regime, matching the Mach number is not as critical because Reynolds number effects predominate. Nevertheless, in all flow regimes it is recommended to carefully evaluate the effect of Reynolds and Mach numbers to ensure the validity of the results.
validity of the results.

Although it is often difficult, it is not impossible to match these two nondimensional parameters with those of full-scale conditions. Possible approaches for increasing the flow Reynolds number are given in the "High Reynolds number wind tunnels" section.

3.2 Wind tunnel classifications

Unfortunately, a wind tunnel that can be used for all flow regimes of interest in aerodynamic tests does not exist. Therefore, wind tunnels are usually classified according to the flow regime they can yield:
• Low-speed subsonic wind tunnels, which operate from very low speed up to M ≅ 0.3
• High-speed subsonic and transonic wind tunnels, with a maximum Mach number of 1.3
• Supersonic wind tunnels, which can reach M = 4–5
• Hypersonic wind tunnels, used for tests at Mach numbers exceeding 5
The configuration of the duct and test section, as well as the driving devices, strongly depends on the flow regime. The main features of wind tunnels for different flow regimes are discussed in the following sections.

3.3 Low-speed subsonic wind tunnels

Low-speed subsonic wind tunnels operate in the incompressible regime, with speeds in the test section below 100 m/s (M = 0.3). In these wind tunnels, the drive system (which determines how the working fluid is moved through the test section) is typically an axial or centrifugal fan or blower that pushes or pulls air through the test section.
The working principle of these wind tunnels relies upon the mass conservation (continuity) and Bernoulli equations. Consider the Eiffel-type wind tunnel illustrated in Figure 3.1. Let us indicate with 1 the section located before the contraction and with 2 the test section. The continuity and Bernoulli equations read, respectively,

$$A_1 V_1 = A_2 V_2 \quad (3.1)$$

$$p_1 + \frac{1}{2}\rho V_1^2 = p_2 + \frac{1}{2}\rho V_2^2 \quad (3.2)$$

Figure 3.1 Schematic of an Eiffel-type open-return wind tunnel (inlet, contraction, test section with model, diffuser, fan).



where A is the section area and V is the bulk velocity. In the Bernoulli equation 3.2, it has been assumed that the pressure losses are negligible. Indicating with C the contraction ratio (the ratio between the areas of the sections upstream and downstream of the contraction) and combining Equations 3.1 and 3.2, we obtain

$$V_2^2 = \frac{2 C^2}{C^2 - 1}\,\frac{p_1 - p_2}{\rho} \quad (3.3)$$

Equation 3.3 shows that the velocity in the test section is controlled by the applied pressure difference between the entry and the exit of the contraction.
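As an illustration of Equation 3.3, the following Python sketch computes the test-section velocity from a measured contraction pressure drop (the numerical values are purely illustrative):

```python
import math

rho = 1.225        # air density (kg/m^3), illustrative
C = 9.0            # contraction ratio A1/A2, illustrative
dp = 500.0         # measured pressure difference p1 - p2 (Pa), illustrative

# Equation 3.3: test-section velocity from the contraction pressure drop
V2 = math.sqrt(2.0 * dp / rho * C**2 / (C**2 - 1.0))
print(f"Test-section velocity: {V2:.1f} m/s")    # ~28.7 m/s
```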
Low-speed wind tunnels are usually classified into two basic configurations:
1. Open-return tunnel: In this configuration, the air flow follows a straight path from the inlet to the exhaust. The main components are the inlet, which guides the flow into the tunnel; the contraction, where the flow is accelerated up to the speed desired in the test section; the test section, where the model is placed and measurements are carried out; the diffuser, which reduces the flow velocity by expanding the flow and recovering the static pressure; and the fan, which drives the motion of the fluid in the wind tunnel. The test section may have solid boundaries (closed jet or NPL type, from the National Physics Laboratory in Teddington, England) or no solid boundaries (open jet or Eiffel type; see Figure 3.1).
2. Closed-return tunnel (Prandtl or Göttingen type), where the air continuously circulates within the tunnel (Figure 3.2). The components of these tunnels are essentially the same as in the open-return tunnels, with the addition of the return duct, which allows the air exiting the fan section to return to the contraction section and to the test section. The return duct must be properly designed to reduce the pressure losses and to ensure smooth flow in the test section. Corners typically consist of 90° bends; to limit the pressure losses at the corners and avoid the formation of secondary recirculating flows, the corners are usually equipped with guide vanes. Also in this case, the test section can be open or closed.
An open-circuit tunnel has several advantages and drawbacks:
Advantages
1. Low construction cost, because no return duct must be built.
2. Superior design for smoke visualization, use of flow tracer particles, and propulsion. There is no accumulation of flow tracers or exhaust products in an open tunnel.

Figure 3.2 Schematic of a closed-return (Göttingen-type) low-speed wind tunnel (guide vanes, nozzle, model supporting frame, actuator fan, return duct).



Drawbacks
1. The quality of the flow entering the inlet and moving toward the test section may be low. To enhance the flow quality, screens and flow straighteners may be introduced upstream of the test section.
2. High operating costs, because the fan must continuously accelerate the flow through the tunnel.
3. Noisy operation, mainly associated with the fan noise.
Due to the low construction costs, the open-return configuration is often the first choice for universities and research institutes.
The closed-return tunnel also presents advantages and drawbacks:
Advantages
1. High flow quality in the test section, which can be controlled by means of turning vanes in the corners and flow straighteners near the test section.
2. Low operating costs, because once the air is circulating in the tunnel, the fan needs to provide only the energy needed to overcome the pressure losses.
3. Low noise during operation.
Drawbacks
1. High construction costs associated with the presence of the return duct and guide vanes.
2. Inferior design for smoke visualization, use of flow tracer particles, and propulsion. The tunnel must be equipped with a purge system to remove products that would otherwise accumulate inside the wind tunnel.
3. Heat exchangers may be required to control the fluid temperature, since the fans continuously provide power to the fluid, which is dissipated into heat.

The following sections discuss in more detail the characteristics of the main components of subsonic wind tunnels.

The test section

The aim of wind tunnels is to provide a uniform, steady, and controllable flow in the test section, where the test model is placed. Both open-return and closed-return tunnels can have either an open or a closed test section. An open test section used in an open-return wind tunnel as in Figure 3.1 requires an enclosure around the test section to prevent air from being drawn from the test section rather than from the inlet.
In general, the closed test section should be the first choice because of the enhanced flow quality. In fact, when an open test section is employed, unsteady shear layers are formed at the test-section boundaries, thus inducing unsteadiness in the flow.
The choice of the size and shape of the test section is a crucial step in the design of a wind tunnel. The cross-sectional area of the test section determines the overall dimensions of the tunnel. Furthermore, test-section size, speed, and design define the required power.
The size of the tunnel determines the initial construction costs, while the required power and operating hours affect the energy portion of the operational costs (although wind tunnel personnel salaries are usually a larger portion of the operational costs). Both initial and operational costs should be taken into account in the design phase.
For small tunnel designs, the size of the test section is often defined based on the size of the room that will house the wind tunnel.
Over more than one hundred years of wind tunnel development, many test-section shapes have been used, including round, elliptical, square, rectangular, hexagonal, and octagonal. It has been shown that the shape of the test section has negligible effects on the pressure losses. As a result, the shape should be selected based on utility and aerodynamic considerations. For ease in installing and changing models or splitter plates for half models, or in installing windows to guarantee optical access into the test section, flat walls are by far the
preferred choice. Rectangular test sections with a width-to-height ratio of 1.5 have been shown to yield minimum flow corrections.

Figure 3.3 Effect of the wall boundary layer in the test section: the boundary layer thickens from section A1 to section A2, so that V2 > V1.

As the flow advances along the test section, the boundary layer at the wall thickens. This
effect reduces the effective cross-sectional area and therefore yields an increase of velocity (based on the continuity equation 3.1; see Figure 3.3). Furthermore, based on the Bernoulli equation 3.2, the static pressure decreases, inducing an additional drag on the model, named horizontal buoyancy.
A possible solution to this problem is increasing the cross-sectional area along the longitudinal direction to compensate for the thickening of the boundary layer. When this is done properly, constant values of velocity and static pressure are maintained throughout the test section. Unfortunately, to date no exact design method exists that ensures a constant static pressure. Nevertheless, as a first approximation one could design test-section walls with 0.5 degree of divergence each; after the wind tunnel is built, pressure measurements may be conducted along the longitudinal direction to verify whether a uniform static pressure condition has been achieved (if not, fine adjustments may be required).
The length of the test section varies typically from one to two times the largest dimension of the cross section, with some exceptions (for instance, very long test sections may be required to reproduce the characteristics of an atmospheric boundary layer). As will be discussed in the "Evaluation of power losses" section, the energy losses are proportional to the velocity to the power three and are therefore rather large in the test section. Consequently, a short test section is beneficial for reducing the energy losses.
A practical detail in test-section design is the installation of a sufficient number of windows to guarantee optical access to the model. This is required for flow visualization techniques (e.g., smoke visualization; see Chapter 4) and optical measurement techniques such as Schlieren, holography, particle image velocimetry, and laser Doppler velocimetry (Chapters 7, 8, 10).
If the test section were clear, that is, with no struts holding the model or fairings, the flow outside the boundary layer would have the following characteristics:
• Uniform velocity profile at each longitudinal station
• No velocity components in the lateral and vertical directions
• No turbulence
These conditions are ideal and difficult to meet. In practice, an "acceptable" flow quality should be achieved in the test section.
Velocity variations across the test section of 0.2%–0.3% from the average velocity are typically attained. As a consequence, the dynamic pressure variations range between 0.4% and 0.6%. The angular variations are of the order of 0.1° from the average flow angle.
A crucial requirement is steady flow. Any unsteady flow fluctuation should have small magnitude and sufficiently low frequency so as to produce a negligible effect on pressure or balance measurements. Usually, unsteady flow is a result of separated flow, either continuous or intermittent. The only cure for this is to locate the source and eliminate it. The main locations where separation may occur are the first diffuser, the first corner, the fan nacelle, and the contraction. Other sources of unsteady flow can stem from the fan, which may experience nonuniform inflow. In this case, the unsteadiness will occur at the blade frequency.
Nonuniform velocity distribution may arise from flow separation or from poorly designed guide vanes which underturn or overturn the flow at the corners. To achieve uniform flow, the test section should be preceded by a constant-area duct of sufficient settling length (about twice the major dimension of the cross section).

The diffuser

Since the power losses are proportional to the velocity to the third power (see the "Evaluation of power losses" section), the aim of the diffuser is to reduce the velocity by expanding the flow and recovering the static pressure. It is desired to decelerate the flow in the shortest possible distance; therefore, a correct design of the diffuser is crucial for the success of the wind tunnel.
For closed-return tunnels, the diffuser typically extends from the downstream end of the test section to the third corner of the tunnel. The tunnel fan divides the diffuser into two parts, where the first diffuser usually extends only to the first corner past the test section.
Diffusers are usually described in terms of the area ratio (ratio of the cross-sectional areas at the end and at the beginning of the diffuser) and of the equivalent cone angle, which is the angle of an imaginary conical section with the same length and inlet and exit areas as the actual diffuser. This wind tunnel component is rather sensitive to design errors, which may induce steady or unsteady flow separations. In theory, when a uniform flow enters the diffuser inlet, the only constraint on the cone angle is that the turbulent boundary layer does not undergo separation. In practice, the flow entering the diffuser is far from uniform due to the wake of the model and holding struts. As a rule of thumb, the equivalent cone angle should not exceed 7°. However, it should be kept in mind that the area ratio also plays an important role, because it determines the pressure recovery and pressure gradient, and therefore the risk of separation. In fact, when a very long diffuser with a moderate (e.g., 5°) cone angle is used to achieve a large area ratio, separation may occur. As a result, the total area ratio (considering both parts of the diffuser) is typically limited to five or six to one, where half of the area ratio is achieved in each part of the diffuser. The area ratio limits the contraction ratio of the wind tunnel. When larger contraction ratios are required, a wide-angle diffuser with an area ratio of about four to one and an equivalent cone angle of 45° may be installed before the settling chamber. The latter is the component of largest section in the whole wind tunnel; its aim is to "straighten" the flow before the contraction, so as to reduce the turbulent fluctuations in the test section and guarantee the flow quality. Typically, turbulence reduction devices (see the "Turbulence reduction devices" section) are installed in the settling chamber.
The second diffuser is located between the fan section and the third corner. Its role is to continue the expansion to the desired total area ratio. The equivalent cone angle is usually below 5°. The fan at the entrance yields an approximately constant total pressure profile.
Two main sources of trouble may occur in the second diffuser:
• Flow separation on the aft portion of the fan nacelle
• Nonuniform velocity distribution downstream of the fan
The former issue is typically tackled via a proper design of the nacelle (length-to-diameter ratio above 3, closing cone angle below 5°) to avoid an excessive pressure gradient over the rear portion. In the presence of separation, vortex generators may be considered to reduce the width of the separated region.
The second problem usually occurs in rectangular tunnels and yields lower velocities in the corners. A possible solution is the use of antiswirl vanes, which counteract the flow rotation induced by the fan.

The fan

The fan has the aim of driving the flow through the wind tunnel by producing an increase of pressure in the flow. It is typically modeled as an actuator disk through which the flow passes at constant velocity, gaining an increase in static pressure. In closed-return tunnels, the increase in static pressure should compensate for the total pressure losses in the rest of the circuit.

The location where the fan is placed has important consequences on its performance. The fan develops the highest efficiency when located in a stream of sufficiently high velocity. Additionally, the fan cost is at least partially proportional to its diameter squared. For these two reasons, placing the fan in the settling chamber or in a large part of the return passage is not convenient. On the other hand, possible debris from failing models and the nonuniform flow distribution preclude a position in the diffuser right after the test section. As a result, in a Göttingen-type tunnel, the fan is typically placed in the diffuser downstream of the second corner. At this location, the flow velocity is relatively high and its distribution uniform, because the flow has passed through a section of constant area for a considerable time.
The fan induces a rotating motion in the flow, which should be removed to achieve high flow quality. Three fan-straightener systems are usually employed:
1. Straightener vanes behind the fan
2. Pre-rotating vanes ahead of the fan
3. Counter-rotating fans, where the second fan removes the rotation induced by the first
Two counter-rotating fans develop more thrust than a single fan; hence, this solution is suitable for large tunnels where high thrust is required. However, the drive system becomes more complicated because equal torques should be applied to the two fans.
The use of flow straighteners behind the fan and pre-rotating vanes ahead of it is a rather inexpensive solution to obtain a uniform flow stream. Pre-rotation vanes are designed to produce a swirl opposite to the fan swirl so that the total swirl after the fan is zero. However, this may not occur at all fan speeds. Thus, it is more effective to use flow straighteners and antiswirl vanes after the fan.
The area ratio between the fan section and the test section is usually about 3 to 1. A larger area ratio may cause a poor velocity profile before the fan; furthermore, the fan cost increases due to its larger size. A smaller area ratio would increase the flow velocity in the fan section; as a result, a higher fan rotational speed would be required to drive the flow. However, the fan speed is limited by keeping a sufficiently low Mach number at the blade tip to avoid the formation of shock waves, which would drastically reduce the flow quality and fan efficiency.
The fan motor can be mounted either in the nacelle or outside the tunnel. In the former case, a cooling system is required. The cooling air is usually ducted through the nacelle supports.
The number of blades on the fan is somewhat arbitrary: the product of the number of blades and their chord determines the total blade area, which must be selected based on thrust requirements. At least four blades are required to avoid pulsations in the flow stream. The maximum number of blades is limited by structural strength considerations: the maximum value of the sum of the blade chords (N · c, where N is the number of blades and c the chord of a blade) must not exceed the local circumference at the root to avoid excessive interference among blades.

The contraction cone

The purpose of the contraction cone is to accelerate the flow to the desired velocity in the test section. The increase in velocity follows the continuity equation 3.1: the cross-sectional area is decreased to achieve a higher velocity. Aside from determining the ratio between exit and inlet velocity, the contraction ratio C governs the reduction of the turbulent velocity components. According to [1], the longitudinal turbulence component decreases with $C^2$, while the lateral turbulent components decrease with $\sqrt{C}$.
The design of the contraction cone presents two main issues. First, while the pressure decreases along the contraction cone as a result of the flow acceleration, an adverse pressure gradient is encountered at its entrance and exit as a consequence of its solid edges. Such an adverse pressure gradient may cause boundary layer separation, yielding a dramatic degradation of the flow quality in the test section and an increase in the required drive power. Second, when the contraction has a rectangular cross section, a secondary flow is generated at the corners, which has lower flow velocity and yields a higher risk of flow separation. The latter issue is alleviated by making the contraction octagonal, which has the effect of "cutting out" the corners, thus avoiding the secondary flow.

Until the advent of digital computers, no satisfactory method for the design of the contraction cone existed. The contraction was usually designed by eye, based on past experience. Experience has shown that the radius of curvature should be smaller at the exit than at the entrance.
It is desirable to keep the contraction length (the sum of the settling chamber length, the contraction itself, and the settling length at the exit) as short as possible to limit the overall size of the tunnel. A settling chamber length of about 0.5 times the inlet diameter is often used. Such a length allows the use of honeycombs and/or screens to reduce turbulence.
Theoretical (potential flow) constructions of the nozzle shape exist that guarantee a uniform velocity profile at the nozzle exit and a small boundary layer thickness, such as the popular Witoszynski nozzle [2]:

$$r(x) = r_2 \left\{ 1 - \left[ 1 - \left( \frac{r_2}{r_1} \right)^{2} \right] \frac{\left[ 1 - \left( x/L \right)^{2} \right]^{2}}{\left[ 1 + \left( x/L \right)^{2} \right]^{3}} \right\}^{-0.5} \quad (3.4)$$

where
x is the distance along the jet axis
L is the contraction cone length
r1 and r2 are the radii at its inlet and exit, respectively
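Equation 3.4 is straightforward to evaluate numerically. A minimal Python sketch, with illustrative inlet and exit radii, is

```python
import numpy as np

def witoszynski_radius(x, r1, r2, L):
    """Nozzle radius r(x) from Equation 3.4; x runs from 0 (inlet,
    radius r1) to L (exit, radius r2)."""
    xi = x / L
    term = (1.0 - (r2 / r1)**2) * (1.0 - xi**2)**2 / (1.0 + xi**2)**3
    return r2 * (1.0 - term)**-0.5

# Sanity check: the profile must match the inlet and exit radii.
x = np.linspace(0.0, 1.0, 5)
print(witoszynski_radius(x, r1=0.5, r2=0.2, L=1.0))   # 0.5 at x = 0, 0.2 at x = L
```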

Turbulence reduction devices

Ideally, the test section should feature a uniform flow with no turbulent fluctuations. In practice, the rotation of the fan, the corners in a closed-return tunnel, and other factors induce turbulent velocity components that need to be attenuated or eliminated. Turbulence in the test section is usually reduced by the installation of screens or honeycombs upstream of the contraction.
Honeycombs are series of tubes laid lengthwise in the air stream. They yield a low pressure drop; therefore, their effect on the axial velocity is negligible. In contrast, due to their length (typically exceeding 6–8 times the cell size), they reduce the lateral velocity components. Figure 3.4 shows some honeycombs used in wind tunnels and their pressure loss coefficient, defined as the ratio between the pressure drop across the device (Δp) and the mean flow dynamic pressure (q): K = Δp/q.
Screens primarily reduce the streamwise turbulence. They produce a relatively large pressure drop in the flow direction, which reduces the higher velocities more than the lower ones, resulting in a more uniform axial velocity across the cross section. Despite the screens being located in the lowest-speed portion of the wind tunnel, the pressure drop they produce increases the power required to run the tunnel.
Screens used for turbulence reduction should have a porosity β ≥ 0.57 [4]. Screens with lower porosities suffer from flow instabilities that degrade the flow quality in the test section. For Reynolds numbers based on the wire diameter exceeding 80, the wire wakes are turbulent and damp out quickly.

Figure 3.4 Some honeycombs and their losses (K = 0.30, 0.22, and 0.20). (From Rae, W.H. and Pope, A., Low-Speed Wind Tunnel Testing, 2nd edn., John Wiley & Sons, Inc., 1984.)

Screens tend to accumulate dust quickly. Since dust distributes nonuniformly, it significantly alters the screen's porosity and pressure drop, resulting in a nonuniform velocity distribution that changes arbitrarily in time. The problem is even more relevant in tunnels where smoke or oil is used for flow visualization or for laser-based measurement techniques. As a consequence, screens should be installed in a way that makes them accessible for cleaning. Furthermore, the flow quality in the test section should be monitored regularly.
The pressure loss coefficient K is typically used to characterize the performance of a screen. Following the formulation of De Vahl [5]:

$$K = K_0 + \frac{55.2}{Re_d} \quad (3.5)$$

where $Re_d$ is the Reynolds number based on the wire diameter and $K_0 = \left(\frac{1 - 0.95\beta}{0.95\beta}\right)^2$.
The turbulence reduction factor f is defined as the ratio between the turbulent velocity components with and without screens. For isotropic turbulence, the following expressions of f are reported in the literature [6,7]:

$$f = \frac{1}{1 + K} \quad \text{for axial reduction} \quad (3.6)$$

$$f = \frac{1}{\sqrt{1 + K}} \quad \text{for lateral reduction} \quad (3.7)$$

When multiple turbulence reduction devices are used, the total turbulence reduction factor f is the product of those of the individual screens, while the total pressure drop coefficient K is the sum of those of the individual screens. In this case, the screens should be placed at a minimum distance from each other, so that the turbulence induced by the preceding screen damps out before the successive screen.
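Equations 3.5 through 3.7 can be combined to estimate the effect of a cascade of identical screens; a minimal Python sketch (porosity, wire Reynolds number, and number of screens are illustrative values) is

```python
import math

def screen_K(beta, Re_d):
    """Pressure loss coefficient of a single screen (Equation 3.5)."""
    K0 = ((1.0 - 0.95 * beta) / (0.95 * beta))**2
    return K0 + 55.2 / Re_d

beta, Re_d, n_screens = 0.60, 200.0, 3      # illustrative values
K = screen_K(beta, Re_d)

# Total loss coefficient: sum of the individual K's; total turbulence
# reduction: product of the individual factors (Equations 3.6 and 3.7).
K_total = n_screens * K
f_axial = (1.0 / (1.0 + K))**n_screens
f_lateral = (1.0 / math.sqrt(1.0 + K))**n_screens
print(f"K_total = {K_total:.3f}, f_axial = {f_axial:.4f}, f_lateral = {f_lateral:.4f}")
```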

Evaluation of power losses

For evaluating the power losses in a return-type wind tunnel, it is customary to break down the tunnel into four parts:
1. Cylindrical (constant-area) sections
2. Corners
3. Expanding sections
4. Contracting sections
A power loss occurs in each section and is typically expressed in terms of a pressure loss coefficient. Wattendorf [8] refers the local losses to the jet dynamic pressure q0 (which occurs at the test section), defining the loss coefficient as

$$K_0 = \frac{\Delta p}{q_0} = \frac{\Delta p}{q} \cdot \frac{q}{q_0} = K \frac{q}{q_0} \quad (3.8)$$

Since, for the conservation of mass, the dynamic pressure varies inversely as the fourth power of the tunnel equivalent diameter (defined as $\sqrt{4A/\pi}$), Equation 3.8 can be written as

$$K_0 = K \frac{D_0^4}{D^4} \quad (3.9)$$

where
D is the local tunnel diameter
D0 is the test-section diameter

The power loss in each section is proportional to the cube of the local velocity:

$$\Delta P = K \frac{1}{2} \rho A V^3 = K_0 \frac{1}{2} \rho A_0 V_0^3 \quad (3.10)$$

where
A and V are the local area and velocity, respectively
A0 and V0 are the test-section area and velocity, respectively

If the drive system has a fan efficiency ηf and a motor efficiency ηm, then the installed power has to be

$$P = \frac{\sum_{i=1}^{N} K_{0,i}}{\eta_f \eta_m}\, \frac{1}{2} \rho A_0 V_0^3 \quad (3.11)$$

where N is the number of components where power losses occur. The power factor λ of a tunnel, defined as the ratio of the drive power to the rate of kinetic energy of the test-section flow, is

$$\lambda = \frac{P}{(1/2)\rho A_0 V_0^3} = \frac{\sum_{i=1}^{N} K_{0,i}}{\eta_f \eta_m} \quad (3.12)$$

In cylindrical sections, the pressure drop per unit length L is proportional to the skin friction coefficient Cf: $\Delta p / L = (C_f / D) \cdot (1/2)\rho V^2$, resulting in $K = \Delta p / q = C_f L / D$. Hence, the pressure loss coefficient is

$$K_0 = C_f \frac{L}{D} \frac{D_0^4}{D^4} \quad (3.13)$$

For smooth pipes at high Reynolds numbers, the skin friction coefficient can be determined using the relationship proposed by Von Kármán [9]:

$$\frac{1}{\sqrt{C_f}} = 2 \log_{10}\!\left( Re \sqrt{C_f} \right) - 0.8 \quad (3.14)$$

For open cylindrical sections (e.g., open jet), the skin friction coefficient may be set to Cf = 0.08 [3].
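Equation 3.14 is implicit in Cf, but it converges quickly under a simple fixed-point iteration. A minimal Python sketch is

```python
import math

def skin_friction(Re, n_iter=50):
    """Solve the implicit Von Karman relation (Equation 3.14) for Cf by
    fixed-point iteration: Cf <- [2 log10(Re sqrt(Cf)) - 0.8]**-2."""
    Cf = 0.01                                    # initial guess
    for _ in range(n_iter):
        Cf = (2.0 * math.log10(Re * math.sqrt(Cf)) - 0.8) ** -2
    return Cf

for Re in (1e5, 1e6, 1e7):
    print(f"Re = {Re:.0e}: Cf = {skin_friction(Re):.5f}")
```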
In divergent sections, power losses are caused both by skin friction at the wall and by expansion. Accounting for both effects, the pressure loss coefficient becomes [3]

$$K_0 = \left[ \frac{C_f}{8 \tan(\alpha/2)} + 0.6 \tan\!\left(\frac{\alpha}{2}\right) \right] \left( 1 - \frac{D_1^4}{D_2^4} \right) \frac{D_0^4}{D_1^4} \quad (3.15)$$

where
α is the divergence angle between opposite walls
D1 is the smaller diameter
D2 is the larger diameter

Figure 3.5 shows the typical trend of the pressure loss coefficient for varying divergence angle. By differentiating Equation 3.15, the pressure losses are found to be minimum for $\tan(\alpha/2) = \sqrt{C_f / 4.8}$.

Figure 3.5 Pressure loss coefficient K0 in a divergent section as a function of the divergence angle α (calculated with Equation 3.15) for Cf = 0.01, D1/D2 = 0.5, D0/D1 = 1.

For reasonable values of Cf (as obtained from Equation 3.14 and shown in Figure 3.5), the optimum α is around 5°. Higher divergence angles increase the expansion losses, while a smaller α increases the skin friction losses. However, space limitations and construction costs may dictate the use of a slightly larger divergence angle.
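A short Python sketch of Equation 3.15 and of the optimum-angle condition, with the same parameter values used in Figure 3.5, is

```python
import math

def K0_divergent(alpha_deg, Cf=0.01, D1_D2=0.5, D0_D1=1.0):
    """Pressure loss coefficient of a divergent section (Equation 3.15)."""
    t = math.tan(math.radians(alpha_deg) / 2.0)
    return (Cf / (8.0 * t) + 0.6 * t) * (1.0 - D1_D2**4) * D0_D1**4

# Optimum divergence angle: tan(alpha/2) = sqrt(Cf/4.8)
alpha_opt = 2.0 * math.degrees(math.atan(math.sqrt(0.01 / 4.8)))
print(f"optimum alpha = {alpha_opt:.1f} deg")            # ~5.2 deg
for a in (3.0, alpha_opt, 10.0):
    print(f"alpha = {a:5.1f} deg:  K0 = {K0_divergent(a):.4f}")
```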
In the corners, the rotation typically accounts for two-thirds of the loss, and the friction in the guide vanes for the remaining one-third. For 90° corners, the following semiempirical relation for the pressure losses may be used [3]:

$$K_0 = \left[ 0.10 + \frac{4.55}{\left( \log_{10} Re \right)^{2.58}} \right] \frac{D_0^4}{D^4} \quad (3.16)$$

In the contraction cone, the losses are due only to friction; assuming a mean value for the skin friction coefficient Cf and a contraction length Lc, the pressure loss coefficient is

$$K_0 = 0.32\, C_f \frac{L_c}{D_0} \quad (3.17)$$

Losses in honeycombs and screens have been discussed in the "Turbulence reduction devices" section; in general, these components are responsible for less than 5% of the total tunnel loss.
An example of the losses incurred in a closed-return wind tunnel, without accounting for grids and honeycombs, is reported in Table 3.1.

Table 3.1 Example of losses in a closed-return wind tunnel

Section          K0       Percentage of total loss (%)
1. The jet       0.0093    5.1
2. Divergence    0.0391   21.3
3. Corner        0.046    25.0
4. Cylinder      0.0026    1.4
5. Corner        0.046    25.0
6. Cylinder      0.002     1.1
7. Divergence    0.016     8.9
8. Corner        0.0087    4.7
9. Corner        0.0087    4.7
10. Cylinder     0.0002    0.1
11. Cone         0.0048    2.7
Total            0.1834   100.0

Source: Data from Rae, W.H. and Pope, A., Low-Speed Wind Tunnel Testing, 2nd edn., John Wiley & Sons, Inc., 1984.
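Given the total loss coefficient of Table 3.1, Equations 3.11 and 3.12 yield the installed power and the power factor directly. A minimal Python sketch, with illustrative test-section size, velocity, and drive efficiencies, is

```python
rho = 1.225                  # air density (kg/m^3)
A0, V0 = 4.0, 60.0           # test-section area (m^2) and velocity (m/s), illustrative
K0_total = 0.1834            # sum of the loss coefficients of Table 3.1
eta_f, eta_m = 0.90, 0.95    # fan and motor efficiencies, illustrative

# Equations 3.12 and 3.11: power factor and installed power
lam = K0_total / (eta_f * eta_m)
P = lam * 0.5 * rho * A0 * V0**3
print(f"Power factor: {lam:.3f}, installed power: {P / 1e3:.0f} kW")
```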

Wind tunnel boundary corrections

The flow conditions in a wind tunnel are not the same as those in an unbounded airstream or "free air" for an aircraft. Based on the concept of Galilean invariance introduced before, there is no difference between having the model at rest and the air moving with respect to the model, or vice versa. However, the presence of walls in the test section produces lateral flow boundaries at a finite distance from the model, which yield several effects on the flow. Those include
• Horizontal buoyancy: This is a variation of the static pressure along the test section when no model is present. In many wind tunnels it is zero, which is the desired condition. When present, it produces a drag force analogous to the hydrostatic force on objects in a stationary fluid under a uniform gravitational field. In closed test sections, it is usually small and in the drag direction; it may be negligible in open test sections, where in some cases it becomes a thrust.

• Solid blockage: The ratio of the frontal area of a craft to the stream cross-sectional area is approximately zero in most actual operations. In wind tunnels, this ratio is finite and depends on the relative size of the model with respect to the test section. It is usually chosen in the range of 0.01–0.10. In a closed test section, it produces larger surface stresses than for the corresponding free-air condition; for an open jet, the surface stresses are lower. In closed wind tunnels, the solid blockage yields an increase in dynamic pressure, producing larger forces and moments at a given angle of attack. In open test sections, the effect is usually negligible, because the airstream is free to expand in the lateral directions.
• Wake blockage: This effect results from the finite size of the model wake and is similar to the solid blockage. It is more complicated than the latter because the wake size depends on the body shape and on the cross-sectional area. In a closed test section, the wake blockage increases the measured drag; in an open test section, it is often considered negligible because the airstream is free to expand.
• Streamline curvature: This is an alteration of the curvature of the streamlines of the flow about the model with respect to the corresponding curvature in an infinite stream. For a wing in a closed test section, it yields higher lift, moment coefficient, and effective angle of attack.
The corrections typically applied to account for the effects mentioned earlier are discussed hereafter. For simplicity, a 2D test case is considered. For corrections in 3D test cases, the interested reader is referred to the book of Barlow et al. [10].

Horizontal buoyancy The presence of a longitudinal pressure gradient dp/dl induces an increase of the model drag. According to Allen and Vincenti [11], the correction to the model drag reads as

$$\Delta D_{hb} = -6 \frac{h^2}{\pi}\, \Lambda \sigma \frac{dp}{dl} \quad (3.18)$$

where
h is the tunnel height
$\sigma = (\pi^2/48)(c/h)^2$
$\Lambda = 4\lambda_2 (t/c)^2$
c and t are the model chord and thickness, respectively
$\lambda_2$ is the body shape factor (a nondimensional coefficient that depends on the body shape)
Typical values of the shape factor as a function of the thickness ratio t/c are shown in Figure 3.6.

Figure 3.6 Shape factor λ2 as a function of the thickness ratio t/c for selected forms (Rankine oval, ellipse, symmetrical airfoil). (Readapted from Barlow, J.B. et al., Low-Speed Wind Tunnel Testing, 3rd edn., John Wiley & Sons, Inc., 1999.)

Solid blockage The presence of a model in the test section reduces the area through which the air must flow. By the continuity equation 3.1, the velocity of the air increases in the sections containing the model. The velocity correction due to solid blockage is quantified by the coefficient εsb, defined as

$$\varepsilon_{sb} = \frac{V - V_u}{V_u} \quad (3.19)$$

where
Vu is the uncorrected velocity (which would be obtained in the test section with no model installed)
V is the corrected velocity

Based on the derivation of Allen and Vincenti [11], the expression of εsb reads as

$$\varepsilon_{sb} = \Lambda \sigma \quad (3.20)$$

Wake blockage Any real body produces a wake of finite size behind it, where the mean velocity is lower than the freestream value. Similarly to the solid blockage, the velocity outside the wake in a closed test section must be higher than the freestream velocity in order to fulfill the continuity law. The velocity correction due to the wake blockage is [11]

$$\varepsilon_{wb} = \frac{V - V_u}{V_u} = \frac{c/h}{2} C_{Du} \quad (3.21)$$

where CDu is the model uncorrected drag coefficient. The wake blockage also induces an increase of drag, which is often negligible:

$$\Delta C_{D,wb} = \Lambda \sigma \quad (3.22)$$

Streamline curvature The presence of the ceiling and floor prevents the streamlines from curving as they do in free air. As a result, the body appears to have more camber (typically around 1%) than it actually has. Accordingly, the effective angle of attack, lift, and pitching moment about the quarter chord are larger than in free air.

Following the formulation of [11], the corrections to the angle of attack (in radians), lift coefficient, and moment coefficient read as

$$\Delta \alpha_{sc} = \frac{\sigma}{2\pi} \left( C_{Lu} + 4 C_{M(1/4)u} \right) \quad (3.23)$$

$$\Delta C_{L,sc} = -\sigma C_{Lu} \quad (3.24)$$

$$\Delta C_{M(1/4),sc} = -\frac{1}{4} \Delta C_{L,sc} = \frac{\sigma}{4} C_{Lu} \quad (3.25)$$

where CLu and CM(1/4)u are the uncorrected lift coefficient and moment coefficient with respect to the quarter chord, respectively.

Summary of two-dimensional boundary corrections The low-speed wall effects for 2D wind tunnel testing are summarized in the following text. The corrections account for solid blockage, wake blockage, and streamline curvature. The subscript "u" indicates uncorrected data.
Adding together the solid blockage and wake blockage velocity corrections (ε = εsb + εwb), the corrected velocity is

$$V = (1 + \varepsilon) V_u \quad (3.26)$$

From Equation 3.26, the corrected values of the Reynolds number and dynamic pressure (neglecting higher-order terms) are easily computed:

$$Re = (1 + \varepsilon) Re_u \quad (3.27)$$

$$q = (1 + 2\varepsilon) q_u \quad (3.28)$$

The expressions for the angle of attack (in radians), lift, and moment coefficients read as (from Equations 3.23 through 3.25)

$$\alpha = \alpha_u + \frac{\sigma}{2\pi} \left( C_{Lu} + 4 C_{M(1/4)u} \right) \quad (3.29)$$

$$C_L = C_{Lu} \left( 1 - \sigma - 2\varepsilon \right) \quad (3.30)$$

$$C_{M(1/4)} = C_{M(1/4)u} \left( 1 - 2\varepsilon \right) + \frac{\sigma C_L}{4} \quad (3.31)$$

For the drag coefficient, accounting for the dynamic pressure effect and the wake gradient term, we obtain

$$C_D = C_{Du} \left( 1 - 3\varepsilon_{sb} - 2\varepsilon_{wb} \right) \quad (3.32)$$
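The complete 2D correction chain (Equations 3.19 through 3.32) lends itself to a compact implementation. A minimal Python sketch, where the input values are purely illustrative and λ2 must be read from Figure 3.6 for the actual body shape, is

```python
import math

def correct_2d(CLu, CDu, CMu, alpha_u, c_h, t_c, lam2):
    """Apply the 2D boundary corrections of Equations 3.19-3.32.
    c_h = c/h (chord-to-tunnel-height ratio), t_c = t/c (thickness ratio),
    lam2 = body shape factor lambda_2 read from Figure 3.6."""
    sigma = math.pi**2 / 48.0 * c_h**2
    Lam = 4.0 * lam2 * t_c**2
    eps_sb = Lam * sigma                     # solid blockage, Eq. 3.20
    eps_wb = c_h / 2.0 * CDu                 # wake blockage, Eq. 3.21
    eps = eps_sb + eps_wb
    alpha = alpha_u + sigma / (2.0 * math.pi) * (CLu + 4.0 * CMu)  # Eq. 3.29
    CL = CLu * (1.0 - sigma - 2.0 * eps)                           # Eq. 3.30
    CM = CMu * (1.0 - 2.0 * eps) + sigma * CL / 4.0                # Eq. 3.31
    CD = CDu * (1.0 - 3.0 * eps_sb - 2.0 * eps_wb)                 # Eq. 3.32
    return alpha, CL, CD, CM

# Illustrative inputs (angle of attack in radians, per Equation 3.29):
print(correct_2d(CLu=0.80, CDu=0.020, CMu=-0.05, alpha_u=0.052,
                 c_h=0.25, t_c=0.12, lam2=0.30))
```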

Adaptive wall test section In order to minimize the wall interference on the flow, a possible solution is to adapt the test-section boundaries to the streamline shapes. This is typically achieved by means of a sophisticated control system that iteratively adjusts the position of the test-section flexible walls. In this way, the test-section walls become nearly invisible to the

model under test. Aside from the primary benefit of minimizing the wall interference, adaptive walls yield other benefits [12]:
• With wall interference minimized, the size of the model can be increased for a given test section, thus achieving a larger test Reynolds number.
• With solid adaptive walls, the test-section boundaries are smooth, minimizing the noise generated by the flow on the wall.
• As an alternative to increasing the Reynolds number, the test-section size can be shrunk for a given model size, yielding a reduction of operating costs.
Obviously, the use of adaptive walls implies some shortcomings, which include
• Hardware complexity: The test-section boundaries need to be adjusted for each test condition. The complexity of the system is comparable to that of flexible-walled supersonic nozzles.
• Operational complexity: A sophisticated control system is required to iteratively streamline the walls. Using such a control system requires training of the users.
• Lower productivity: Since the wall adaptation is never instantaneous, the number of measurements per unit time is lower than when fixed walls are used.
As Wolf [12] remarks, none of these shortcomings is crucial, and all of them can be made insignificant by intelligent use of adaptive wall technology.
Examples of wind tunnels using adaptive test-section wall technology are the High-Speed Wind Tunnel (HKG) at DLR Göttingen and the Transonic Cryogenic Tunnel (TCT) at NASA Langley Research Center.

3.4 High-speed subsonic and transonic wind tunnels

High-speed subsonic wind tunnels are those used to achieve Mach numbers between 0.4 and 0.75, where the flow is in the compressible regime. Transonic wind tunnels are able to achieve speeds close to the speed of sound (0.75 < M < 1.2). In transonic tunnels, the flow typically features both subsonic and supersonic regions. Both types of tunnel are designed on the same principles as subsonic tunnels.
The working principle is discussed hereafter. The equations of conservation of mass and momentum along a streamline can be written as, respectively,

$$\rho A V = \text{const} \;\rightarrow\; \frac{d\rho}{\rho} + \frac{dA}{A} + \frac{dV}{V} = 0 \quad (3.33)$$

$$dp + \rho V\, dV = 0 \quad (3.34)$$

Considering an isentropic flow ($a^2 = dp/d\rho$), the following relation holds:

$$d\rho = \frac{dp}{a^2} = -\frac{\rho V\, dV}{a^2} \quad (3.35)$$

Substituting the expression of dρ in Equation 3.33, we obtain

$$\frac{dA}{A} = \left( M^2 - 1 \right) \frac{dV}{V} \quad (3.36)$$

From Equation 3.36, it can be deduced that the cross-sectional area has a minimum at sonic flow conditions (M = 1). According to this equation, to achieve a subsonic speed in the test section the cross-sectional area upstream of the test section should be converging (dA/A < 0).

Figure 3.7 Shock wave reflection at the test-section wall for different Mach numbers (M = 1.1, 1.3, 1.5): the shock wave generated by the model is reflected by the wall.

Instead, to accelerate the flow up to supersonic conditions, a convergent–divergent nozzle is required, where the flow is subsonic in the converging part, sonic at the throat section, and supersonic in the diverging part.
Combining the previous relations with the isentropic relations (Chapter 1), one obtains the area–Mach relation, which for a perfect gas reads as

$$\frac{A}{A_{throat}} = \frac{1}{M} \left[ \frac{2}{\gamma + 1} \left( 1 + \frac{\gamma - 1}{2} M^2 \right) \right]^{\frac{\gamma + 1}{2(\gamma - 1)}} \quad (3.37)$$

Equation 3.37 shows that the Mach number in the test section is controlled by the expansion ratio A/Athroat.
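Equation 3.37 gives the area ratio explicitly as a function of M; obtaining M from a given area ratio requires a numerical inversion on the desired branch. A minimal Python sketch using bisection is

```python
def area_ratio(M, gamma=1.4):
    """A/A_throat as a function of Mach number (Equation 3.37)."""
    e = (gamma + 1.0) / (2.0 * (gamma - 1.0))
    return (2.0 / (gamma + 1.0) * (1.0 + (gamma - 1.0) / 2.0 * M**2)) ** e / M

def mach_from_area(A_ratio, gamma=1.4, supersonic=True):
    """Invert Equation 3.37 by bisection on the chosen flow branch."""
    lo, hi = (1.0, 50.0) if supersonic else (1e-4, 1.0)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        below = area_ratio(mid, gamma) < A_ratio
        if supersonic:            # A/A* grows with M for M > 1
            lo, hi = (mid, hi) if below else (lo, mid)
        else:                     # A/A* decreases with M for M < 1
            lo, hi = (lo, mid) if below else (mid, hi)
    return 0.5 * (lo + hi)

print(area_ratio(2.0))            # ~1.687: expansion ratio for M = 2
print(mach_from_area(1.687))      # ~2.0 on the supersonic branch
```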

Wall effects in transonic wind tunnels

Testing at transonic speeds presents additional problems due to the reflection of the shock waves from the walls of the test section. An example of shock wave reflections is illustrated in Figure 3.7 for different Mach numbers. At low supersonic Mach numbers, the shock wave is almost normal to the freestream direction. As a result, the test-section walls may reflect the shock onto the model itself or onto its near wake: both cases are not representative of the free-flight conditions.
Transonic tunnel walls are typically equipped with slots or perforations to minimize the wall effects on the shock waves and on the shape of the streamlines in the vicinity of the model. It has been shown that the wind tunnel velocity correction can be greatly reduced by the proper arrangement of solid and open wall elements [13]. However, the use of slotted walls has a very limited potential for cancelling shock wave reflections.
Investigations around the 1950s showed that, for effective cancellation of shock waves at wind tunnel walls, it is necessary to use porous or perforated walls. Porous walls are those made of a porous material, typically with grains of micrometric size, so as to allow air to flow through them. A pressure drop is produced mainly by friction in the narrow channels, which practically eliminates any dynamic effects. These walls are not very practical from an operational point of view: not only do the pores frequently undergo clogging during operation, but the wall porosity should also be continuously changed with the Mach number for effective shock cancellation. Perforated walls consist of a large number of small discrete openings in the wind tunnel wall. Numerous experiments have indicated that such perforated walls are effective in cancelling shock wave reflections. However, they are not suitable for subsonic flows. In fact, in high-pressure regions of the flow (e.g., upstream of the model) air flows through the openings from the test section into the plenum chamber; conversely, in the low-pressure regions (e.g., at the maximum thickness of the model) air flows from outside into the test section, which is clearly not representative of what occurs in free flight.

3.5 Supersonic wind tunnels

In order for a flow to reach supersonic conditions in the test section, it must first pass through sonic conditions. According to Equation 3.36, sonic conditions (M = 1) can occur only where the cross-sectional area of a duct is minimum (dA = 0). Hence, a supersonic nozzle is composed of a converging duct where the subsonic flow is accelerated, a throat where sonic velocity is reached, and a diverging duct where the flow becomes supersonic.

Ideal flow in a supersonic wind tunnel

As already stated, Equation 3.37 regulates the Mach number in the test section for a given area ratio. However, the simple choice of test section and throat does not assure a uniform supersonic flow. In fact, disturbances in a supersonic flow propagate along characteristic lines (or Mach lines [14]), which are inclined at an angle arcsin(1/M) with respect to the flow direction. Hence, planes normal to the duct axis do not have uniform flow conditions (Figure 3.8). In particular, regions of the flow upstream of the Mach lines are not influenced by the disturbance. In the example of Figure 3.8, an increase in the duct area occurs at A and B, but this is not felt at the duct centerline until point C, with xC > xA = xB, x being the coordinate along the duct axis. Because of this delay in the propagation of disturbances, care must be taken in the design of the divergent part of the duct to obtain a uniform flow.

Figure 3.8 Supersonic flow affected by diverging duct walls. The flow upstream of the Mach lines AC and BC is not affected by the divergence.

The shock wave is the primary mechanism by which supersonic flows are decelerated. When a supersonic flow passes through a shock wave, total pressure losses occur. The higher the Mach number upstream of the shock, the larger the total pressure losses. In supersonic wind tunnels, the total pressure losses due to the shock wave compose a large portion (up to 90%) of the power required to run the tunnel.
For this reason, most supersonic wind tunnels feature a diffuser with a converging section, a region where the cross-sectional area is minimum (named the second throat), and a diverging section (Figure 3.9). The supersonic flow leaving the test section is decelerated in the converging duct and passes through the second throat at a speed considerably lower than that at the test section. In this way, the normal shock occurs in the diverging part of the diffuser at a much lower Mach number than in the test section, resulting in a significant reduction of the total pressure losses.
In theory, it would be desirable to have M = 1.0 at the second throat so as to minimize the power losses. However, for practical reasons the Mach number at the second throat is often well above 1.0.
well above 1.0.
Figure 3.9 Scheme of a supersonic tunnel featuring a diffuser with a second throat (nozzle with throat 1, test section, diffuser with throat 2). The normal shock positions a–f during the tunnel starting process are indicated.

When the supersonic tunnel is started, at first the flow is subsonic throughout the tunnel circuit. The highest Mach number, still below 1.0, occurs at the nozzle throat. Increasing the power, the speed rises throughout the circuit until sonic conditions are reached at the nozzle throat; a normal shock develops a short distance downstream of the throat (station a in Figure 3.9). A slight increase in the drive power does not change the Mach number at the nozzle throat (which remains 1.0) but produces a displacement of the normal shock further downstream (station b), where M > 1. Here, finite total pressure losses occur. Further increasing the power, the shock moves downstream and occurs at a larger Mach number, yielding larger losses (stations c, d, e). For sufficiently high drive power, the shock moves to the test section, where the Mach number is the highest (station f). Note that the power requirements to start the supersonic tunnel are not influenced by the diffuser design and correspond to the normal shock losses at the design Mach number. Obviously, higher power is required for starting tunnels that operate at larger design Mach numbers.
When the normal shock is located in the test section, a slight increase in power allows moving the shock to the second throat of the diffuser. In fact, the Mach number, and therefore the power losses, decrease as the shock moves along the converging part of the diffuser.

Sizing of the second throat

When the normal shock is in the test section, the flow downstream of the shock is subsonic and accelerates in the converging part of the diffuser, reaching a maximum speed in the second throat. Since the Mach number in the second throat cannot exceed 1.0, the second throat must be sized to pass the nozzle mass flow with a Mach number not greater than 1.0.
The following steps are taken to size the second throat. First, the Mach number M2 downstream of the normal shock in the test section is computed as

$$M_2^2 = \frac{\dfrac{2}{\gamma - 1} + M_1^2}{\dfrac{2\gamma}{\gamma - 1} M_1^2 - 1} \quad (3.38)$$

where M1 is the Mach number upstream of the normal shock, which is the design Mach number in the test section. Second, the Mach number in the second throat is assumed to be 1.0. Under the assumption of isentropic expansion from the conditions downstream of the normal shock (located in the test section) to the second throat (where M = 1.0), Equation 3.37 can be used to determine the ratio of the test-section area to the second throat area as a function of the Mach number downstream of the shock.
After the tunnel has started, the Mach number in the second throat is larger than 1.0. This can be calculated using the isentropic relation (Equation 3.37) between the two throats. Typically, it is found that, after the tunnel is started, the Mach number in the second throat is well above 1.0; the shock that forms here causes significant pressure losses.
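The two steps of the sizing procedure (Equations 3.38 and 3.37) can be scripted directly. A minimal Python sketch for an illustrative design Mach number of 2.5 is

```python
def mach_behind_shock(M1, gamma=1.4):
    """Mach number downstream of a normal shock (Equation 3.38)."""
    M2sq = (2.0 / (gamma - 1.0) + M1**2) / (2.0 * gamma / (gamma - 1.0) * M1**2 - 1.0)
    return M2sq ** 0.5

def area_ratio(M, gamma=1.4):
    """A/A_throat from the area-Mach relation (Equation 3.37)."""
    e = (gamma + 1.0) / (2.0 * (gamma - 1.0))
    return (2.0 / (gamma + 1.0) * (1.0 + (gamma - 1.0) / 2.0 * M**2)) ** e / M

M1 = 2.5                        # design (test-section) Mach number, illustrative
M2 = mach_behind_shock(M1)      # ~0.513 behind the starting normal shock
# Isentropic expansion from M2 down to M = 1.0 at the second throat:
print(f"M2 = {M2:.3f},  A_test/A_throat2 = {area_ratio(M2):.3f}")
```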
In theory, if the second throat could be opened enough during the starting process and closed down after that, the Mach number in the second throat could be reduced to 1.0 and the total pressure losses would drop considerably. In practice, this solution of an adjustable second throat has been used in several high-speed wind tunnels, with limited success.
In summary, we can distinguish three compression ratios (the ratio of the total pressure in the settling chamber to that at the diffuser exit) that are relevant for a supersonic tunnel:
1. The smallest ratio is that required to run the tunnel after an adjustable second throat has been closed down to the minimum area. In theory, the Mach number in the second throat is 1.0; therefore, the pressure losses approach zero and the compression ratio approaches one.
2. The intermediate ratio is that required to run the tunnel when a fixed second throat is employed. In this case, the normal shock occurs at a Mach number exceeding one and the pressure losses are finite.
3. The largest compression ratio is that required to start the tunnel. This is achieved when the normal shock is in the test section.

Actual flow in a supersonic tunnel

The sections "Ideal flow in a supersonic wind tunnel" and "Sizing of the second throat" have dealt with ideal flows in a supersonic tunnel. Real flows are characterized by nonzero viscosity, which may have relevant effects on the tunnel operation. Due to viscosity, a boundary layer is formed at the tunnel walls. Obviously, the boundary layer thickness increases moving downstream from the first throat of the nozzle; it becomes relevant in the test section, especially at high Mach numbers.
During steady-state operation of the tunnel, viscous effects between the first throat and the test section are usually negligible. The growth of the boundary layer thickness is fairly predictable and can be accounted for in the nozzle design.
During the tunnel starting process, viscous effects play a crucial role. They are so important that the compression ratio required to start the tunnel is typically 100% greater than the normal shock pressure ratio. In other terms, during the starting process the losses due to viscosity are comparable to or greater than the normal shock losses.
Boundary layers are usually stable in the presence of a favorable pressure gradient, that is, when the pressure decreases in the direction of growth of the boundary layer. In the presence of an adverse pressure gradient (pressure increasing in the direction of boundary layer growth), they may become unstable and separate. When the normal shock passes through the nozzle, it imposes a strong adverse pressure gradient on the boundary layer. If the boundary layer separates, the flow downstream of the shock is severely altered, as illustrated in Figure 3.10. Even if the boundary layer does not separate, the flow may be altered, because the high pressure in the boundary layer downstream of the shock will cause air to flow upstream into the subsonic part of the boundary layer upstream of the shock.

Figure 3.10 Mean flow topology of a shock wave/boundary layer interaction (incident and reflected shock waves, expansion fan, slow-moving subsonic fluid near the wall). Experimental result from [15] at Mach 2.1 via particle image velocimetry (PIV; see Chapter 10). Mean velocity streamlines are shown along with mean vertical velocity contours. The boundary layer thickness (up to 99% of the external velocity) is δ = 20 mm.
In conclusion, the following general items concerning lows (both ideal and real) in super-
sonic nozzles should be remarked:
1. The Mach number in the test section is set by the nozzle area ratio and does not change
(as long as it remains supersonic) when changing the compression ratio.
2. If the downstream pressure is decreased while keeping constant the upstream pressure,
the test-section low does not change, while the losses in the diffuser increase. This hap-
pens because the normal shock is moved further downstream and occurs at higher Mach
number, yielding higher shock losses.
3. If the upstream pressure is increased, the flow in the test section will occur at higher
pressure (and therefore higher Reynolds number), but at the same Mach number.

Tunnel start-up So far we have considered an empty test section, with no model mounted in it. The presence
with model in of the model has important effects on the starting of a supersonic tunnel.
the test section It can be shown that, according to mass conservation, the area of a second throat sized for sonic
conditions during the tunnel start-up varies with the total pressure losses in the test section [14]:

\frac{A_2^*}{A_1^*} = \frac{p_{t1}}{p_{t2}} \qquad (3.39)

where
A_1^* and A_2^* are the areas of the two throats, respectively, to achieve sonic conditions at the
tunnel start-up
p_{t1}/p_{t2} represents the total pressure losses in the test section

Equation 3.39 implies that the higher the pressure losses, the larger the second throat area.
Since the presence of the model in a supersonic flow produces a shock wave upstream of it,
the pressure losses with a model are larger than for the clear tunnel. Hence also the second
throat area is larger.
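As a rough numerical illustration (a sketch, not taken from [14]: it assumes a calorically perfect gas with γ = 1.4 and that the start-up losses reduce to those of a normal shock at the design Mach number, which is chosen arbitrarily here), Equation 3.39 can be evaluated in a few lines of Python:

```python
# Hedged sketch (not from [14]): size the second throat via Equation 3.39,
# assuming a calorically perfect gas (gamma = 1.4) and that the start-up
# losses reduce to those of a normal shock at the design Mach number.
gamma = 1.4

def pt2_over_pt1(M):
    """Total-pressure ratio p_t2/p_t1 across a normal shock at upstream Mach M."""
    a = ((gamma + 1) * M**2 / ((gamma - 1) * M**2 + 2)) ** (gamma / (gamma - 1))
    b = ((gamma + 1) / (2 * gamma * M**2 - (gamma - 1))) ** (1 / (gamma - 1))
    return a * b

M_design = 2.0                                 # illustrative design Mach number
area_ratio = 1.0 / pt2_over_pt1(M_design)      # Eq. 3.39: A2*/A1* = pt1/pt2
print(f"A2*/A1* = {area_ratio:.3f}")           # ~1.39 at Mach 2
```

At Mach 2, for instance, the sketch indicates that the second throat must be roughly 40% larger than the first to swallow the starting normal shock.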
To start the tunnel, the model blockage (and therefore the model size) cannot exceed a
certain value. When the tunnel is started, a normal shock is formed ahead of the model; as
a consequence, the flow immediately upstream of the model is subsonic. A minimum cross-
sectional area occurs where the thickness of the model is greatest. Here, the Mach number
cannot exceed 1.0. Therefore, the model must be small enough to allow the nozzle mass flow
to pass through that minimum cross-sectional area at M ≤ 1. If the normal shock does not pass
through the model, the tunnel is said to be choked and the design Mach number is not reached
in the test section. Figure 3.11 reports a sketch of the progress of the normal shock through a
test section with a model (the flow is from left to right).
Occasionally, when the model is mounted in the test section, the tunnel may not start. Pope
and Goin [16] suggest taking the following actions to start the tunnel:
• Increase the diffuser area.
• Increase the tunnel pressure ratio.
• Move the model forward in the test section.
• Add an afterbody to the model.
• Add a removable sharp nose to the model.
• Blow air out of holes near the nozzle throat.

The need for Due to the isentropic expansion of the flow from the settling chamber to the test section,
drying to avoid very low temperatures are reached in the latter. Hence, the air in the test section may become
condensation supercooled, that is, cooled to a temperature below the dew point temperature. As a result, the
moisture contained in the air may condense out.
Condensation in a supersonic tunnel must be rigorously avoided because it yields changes
in the local Mach number and other flow properties so that data taken in the wind tunnel


FIGURE 3.11 Sketch of the progress of the normal shock through a test section with a model.
Illustration readapted from [16]. At the beginning of the start-up process, a normal shock is
located upstream of the model (a). Increasing the drive power, the normal shock moves to the
model section (b). If the model size is sufficiently small, increasing the power allows moving the
shock downstream of the model (c, d).

may become meaningless. At supersonic speeds, condensation causes the Mach number to
decrease and the static pressure to increase; the opposite is true for subsonic speeds.
Whether condensation occurs depends upon four parameters: the static temperature of the
stream, the static pressure of the stream, the amount of moisture in the stream, and the time
during which the stream is at low temperature.
The static temperature in the test section may be well below the dew point temperature.
For the sake of example, consider an air flow of total temperature T0 = 288 K. When the flow
is expanded isentropically up to M = 3, the static temperature drops to 102 K.
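The quoted value follows from the isentropic relation T = T0/(1 + (γ − 1)M²/2); a minimal sketch, assuming a calorically perfect gas with γ = 1.4:

```python
# Minimal sketch: static temperature after an isentropic expansion to Mach M,
# T = T0 / (1 + (gamma - 1)/2 * M**2), assuming a calorically perfect gas.
gamma = 1.4
T0 = 288.0  # total temperature in the settling chamber, K

for M in (1.0, 2.0, 3.0):
    T = T0 / (1.0 + 0.5 * (gamma - 1.0) * M**2)
    print(f"M = {M:.0f}: T = {T:.1f} K")
# M = 3 gives T ~ 102.9 K, i.e., the ~102 K quoted in the text.
```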
The static pressure drops with the Mach number more rapidly than does the static tem-
perature. Since the dew point temperature decreases with decreasing static pressure, the static
pressure drop is a desirable effect to prevent condensation.
Two main approaches exist to avoid condensation in supersonic tunnels. The first is to
heat the air before the expansion in the nozzle so that the static temperature in the test section
exceeds the dew point temperature. This approach is not practicable because it would require
excessively high temperatures in the settling chamber. The second approach consists in dry-
ing the air to remove the moisture. This is the common procedure used by supersonic tunnel
operators. Equipment for drying air is commercially available and relatively inexpensive.

The need for In an analogous way to the condensation of moisture in an air stream cooled below the dew
heating to avoid point, the components of air liquefy when the proper pressure and temperature conditions are
liquefaction met. Experience on existing high-speed tunnels has shown that temperatures sufficiently low
to yield liquefaction of air are seldom reached. Nevertheless, when doubts on possible air
liquefaction exist in a supersonic tunnel, the air flow is heated to avoid liquefaction.

Supersonic The design of any high-speed wind tunnel aims at providing air with the following
wind tunnel characteristics:
classification
1. With enough pressure ratio across the tunnel to achieve the desired Mach number in the
test section
2. With enough flow rate and total mass to meet the requirements on tunnel size and
run time
3. Sufficiently dry to avoid condensation
4. Sufficiently hot to avoid liquefaction
To achieve the objectives mentioned earlier, four basic types of wind tunnels can be used:
1. Intermittent blowdown tunnels
2. Intermittent indraft tunnels
3. Intermittent pressure–vacuum tunnels
4. Continuous tunnels
The first distinction to be made is between intermittent and continuous tunnels. The former
feature several advantages with respect to the latter:
• Simpler to design and less costly to build.
• Failure of a model will usually not damage the tunnel.
• Faster start-up.
However, continuous tunnels allow long run times with constant testing conditions.

Intermittent blowdown tunnels The basic circuit of a blowdown tunnel is composed of a
compressor, an air storage tank, a stagnation pressure control system, a test section, and an exhaust
(Figure 3.12).
The working fluid is compressed by a compressor and stored in the high-pressure tank.
When the control valve is opened, the flow expands in the nozzle up to the design (supersonic)
Mach number in the test section and then exhausts to the atmosphere. Since the flow exhausts

[Figure 3.12: high-pressure tank, pressure control, screens, settling chamber, supersonic nozzle with sonic throat, test section with model, diffuser, and exit.]
FIGURE 3.12 Schematic drawing of a generic blowdown facility.

to the atmosphere, the tunnel exit pressure is known; hence, the minimum compression ratio
for the design Mach number can be easily computed [16].
The sizing of a blowdown tunnel is conducted based on the consideration that, within rea-
sonable cost and space limitations, we would like the largest test section possible to maximize
the test Reynolds number. The run time is strictly dependent on the air flow rate and test-
section size; most blowdown tunnels are designed for minimum run times between 20 and
40 seconds.
An example of intermittent blowdown facility is the TST-27 of Delft University of
Technology (TU Delft), which is shown in Figure 3.13. The tunnel test section has a width
of 280 mm, while the height can be varied between 250 and 270 mm based on the freestream
Mach number. The Mach number in the test section can be either subsonic (from 0.5 to 0.85)
or supersonic (from 1.15 to 4.2). Dry air stored at 40 bars in a 300 m³ storage vessel allows
intermittent operation of the wind tunnel for 300 seconds. The air contained in the vessel is
dried and filtered in order to achieve a condensation-free airstream. Supersonic Mach num-
bers are set by means of a continuously variable throat and flexible upper and lower nozzle
walls; the Mach number may be varied during a run. Subsonic Mach numbers are controlled
using a variable choke section in the outlet diffuser. For transonic tests, a test section with
either slotted or perforated walls may be inserted downstream of the closed-wall test section.

FIGURE 3.13 Schematic drawing of the TU Delft TST-27 blowdown wind tunnel. (Courtesy of the Aerodynamics Section
of TU Delft, Delft, the Netherlands.)

[Figure 3.14: inlet, screens, settling chamber, supersonic nozzle with sonic throat, test section with model, diffuser, and vacuum chamber.]
FIGURE 3.14 Schematic drawing of an intermittent indraft wind tunnel.

The comparatively long running time of the wind tunnel (up to 300 seconds) allows exploring
in detail the flow field over a model.

Intermittent indraft tunnels Intermittent indraft tunnels store energy as a pressure differ-
ence between the atmosphere and a low-pressure tank. During the operation, the air flows
from the atmosphere through the tunnel and finally into the vacuum tank, causing the tank
pressure to rise. The schematic drawing of an indraft tunnel is shown in Figure 3.14.
Indraft tunnels present several advantages with respect to blowdown tunnels, including the
following:
• The air temperature and pressure at supply condition are constant during a run. However,
the total pressure is lower than in blowdown tunnels.
• The airstream is free from contaminants such as compressor oil.
• Handling the vacuum is safer than handling high pressure.
• Lower noise level.
• Easy to achieve low density in the test section, which is representative of high-altitude
flight.
However, since the indraft tunnel is fed with atmospheric air (ρ ≈ 1.2 kg/m³ in standard
conditions), low Reynolds numbers are typically achieved. In contrast, blowdown tunnels
allow a wide variation of the Reynolds number for a given Mach number. Furthermore, blow-
down tunnels are less expensive (up to one-fourth of the total cost) than indraft tunnels of
equal Reynolds number. For these reasons, blowdown tunnels are more commonly used than
indraft tunnels.

Intermittent pressure–vacuum tunnels Pressure–vacuum tunnels combine the high-pressure
storage upstream of the nozzle as in blowdown tunnels with the vacuum storage vessel
downstream of the diffuser as in indraft tunnels. The high-pressure air is introduced into the
tunnel and the tunnel exhausts into the vacuum chamber (Figure 3.15). These are used when

[Figure 3.15: high-pressure tank, pressure control, screens, settling chamber, supersonic nozzle with sonic throat, test section with model, diffuser, and vacuum chamber.]
FIGURE 3.15 Schematic drawing of a pressure–vacuum wind tunnel.



the pressure required to operate blowdown tunnels becomes excessive. By exhausting the tun-
nel to a low pressure, the overall pressure ratio required for running the tunnel can be achieved
with a much lower total pressure upstream of the nozzle. These tunnels are widely used also as
hypersonic facilities.
Pressure–vacuum tunnels are usually the same as blowdown tunnels from the air compres-
sor through the pressure regulator valve. Downstream of the valve, heaters are often installed
to avoid liquefaction. Mixers may be installed in the settling chamber to provide a uniform
temperature of the air entering the nozzle. Downstream of the diffuser, an air cooler and a
valve are installed for isolating the tunnel from the vacuum chamber.
Pressure–vacuum tunnels can feature either a 2D or an axisymmetric nozzle up to the low hyper-
sonic speeds. At higher Mach numbers, for which air needs to be heated to avoid liquefaction,
the axisymmetric nozzle is preferred for several reasons: it is easy to fabricate and to cool, yields
less flow distortion in the throat, and provides better sealing for high temperatures.

Continuous tunnels The main difference between continuous and intermittent tunnels has
already been discussed in the introduction of “Supersonic wind tunnel classification” section.
It is important to remark that continuous tunnels allow minutes of uniform flow test time
instead of the seconds normally available in intermittent tunnels.
In a continuous tunnel, the compressor continuously adds energy to the flow to allow the
continuous air flow through the tunnel. As a result, the air is continuously heated. Compressors
used for continuous tunnels are usually not equipped with aftercoolers for removing the com-
pression heat; hence, a special cooler is required to avoid a continuous increase of the air
temperature in the test section.
Continuous tunnels may operate both in the supersonic regime (as blowdown and indraft
tunnels) and in the hypersonic regime. For hypersonic operation, the air must be heated
upstream of the nozzle to prevent liquefaction during the expansion in the nozzle. Downstream
of the nozzle, the air flow is cooled to be safely handled by the compressor.

3.6 Hypersonic wind tunnels

Hypersonic tunnels operate at design Mach numbers exceeding 5. Typical values of stagnation
pressure and temperature are 10–100 atm and above 350 K, respectively. They feature solid-
walled test sections and require contoured nozzles, which are usually axially symmetric rather
than two dimensional.
Models tested in hypersonic wind tunnels are typically larger than those tested in super-
sonic tunnels and can reach up to 10% of the test-section area, since the inclination of the
shock waves is lower. The tunnel wall has negligible effects on the flow over the model.
Compared to supersonic tunnels, hypersonic tunnels feature greater complexity due to the
need for heating the air to avoid liquefaction during the expansion to high Mach number. The
air should also be dry to avoid condensation. The latter is a less serious issue than in super-
sonic tunnels, because most of the water is squeezed out in the process of compressing the air.
The high pressure and temperature required to run the hypersonic tunnel can be achieved in
different ways. In blowdown tunnels, as for the supersonic regime, an electric heating system
is used to achieve storage temperature up to 1000 K. In order to increase the total temperature
further, other types of tunnels have been developed. The most common hypersonic tunnels
are shock tunnels. These are evolutions of the shock tube, where a diaphragm separating
high-pressure and low-pressure fluids is ruptured to create a shock that propagates through
the stagnant gas. Other types of facility capable of generating a high-enthalpy flow are the
Ludwieg tube wind tunnel, the hot-shot wind tunnel, and the plasma wind tunnel.

Shock tubes The classical shock tube consists of two chambers of equal and constant cross-sectional area,
which contain gases at different pressures, initially separated by a diaphragm (Figure 3.16a).
The bursting of the diaphragm (typically by pressure overloading or mechanical cutting)
causes the high-pressure gas (or driver gas) to expand into the driven tube, compressing the

[Figure 3.16: (a) driver tube, diaphragm, and driven tube; (b) x–t wave diagram showing the shock wave, contact surface, reflected shock, expansion waves, regions 1–5, and the test time Δt.]
FIGURE 3.16 Schematic drawing of a shock tube. (a) Initially, the driver tube is filled with
high-pressure gas, while the driven tube is filled with low-pressure gas. (b) Wave diagram of
the flow in the shock tube.

low-pressure gas (or driven gas). Ideally, the test time depends only on the shock tube length:
the maximum test time Δt is determined by the interception of the contact surface (between
driver and driven gases) by the reflected shock (Figure 3.16b). In practice, the effects of viscos-
ity, interface mixing, and other flow nonidealities further reduce the test time.
The shock tube flow, which results from the acceleration of the driven gas by a normal
shock, is limited to low Mach numbers. For an ideal gas with γ = 1.4, the maximum achiev-
able Mach number is 1.89 [17]. For real air, higher values may be attained, but typically below
Mach 3. To overcome this limitation, shock wind tunnels have been developed.
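The 1.89 limit can be reproduced from the standard relations for a normal shock moving into still gas (a hedged sketch under the calorically perfect gas assumption, not the book's derivation; see [17] for the rigorous treatment):

```python
import math

# Sketch (calorically perfect gas, gamma = 1.4): flow Mach number M2 = u2/a2
# behind a normal shock moving at Mach Ms into still gas, from the standard
# moving-shock relations. M2 saturates at sqrt(2/(gamma*(gamma - 1))) ~ 1.89.
gamma = 1.4

def M2_behind_shock(Ms):
    u2_over_a1 = 2.0 / (gamma + 1.0) * (Ms - 1.0 / Ms)     # induced velocity / a1
    T2_over_T1 = ((2.0 * gamma * Ms**2 - (gamma - 1.0))
                  * ((gamma - 1.0) * Ms**2 + 2.0)) / ((gamma + 1.0)**2 * Ms**2)
    return u2_over_a1 / math.sqrt(T2_over_T1)              # u2 / a2

for Ms in (2.0, 5.0, 10.0, 100.0):
    print(f"Ms = {Ms:5.0f}: M2 = {M2_behind_shock(Ms):.3f}")
print("limit:", math.sqrt(2.0 / (gamma * (gamma - 1.0))))  # ~1.890
```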

Shock wind Shock wind tunnels have been developed based on the same principle of shock tubes to
tunnels achieve higher Mach numbers. Here, the working gas is compressed by a normal shock and
subsequently expanded to high Mach number in a nozzle at the end of the driven section of
the shock tube. A low-pressure vessel is located downstream of the test section (Figure 3.17).
The flow in the tube is initiated by the rupture of the primary diaphragm; the nozzle flow
starts with the rupture of the second diaphragm, located at the driven tube end and nozzle
entrance. The second diaphragm is required to maintain an initial pressure in the nozzle lower
than the initial driven tube pressure to accelerate the starting process. The working fluid
expands in the nozzle. The formation of the steady-state flow is preceded by a starting shock,
a contact surface between driver and driven fluids, and a system of expansion waves. The useful
testing time is terminated when the contact surface passes through the nozzle (provided that
this event is not preceded by the arrival of the first expansion wave reflected by the driver end)
and is typically limited to a few milliseconds.

[Figure 3.17: driver tube, first diaphragm, driven tube, second diaphragm, supersonic nozzle, test section with model, and vacuum chamber.]
FIGURE 3.17 Schematic drawing of a shock wind tunnel.



[Figure 3.18: storage tube, fast-acting valve, nozzle, test section, and vacuum chamber.]
FIGURE 3.18 Schematic drawing of a Ludwieg tube wind tunnel.

The Mach number attainable in the test section of a shock wind tunnel typically ranges
between 5 and 20.

Ludwieg tube Ludwieg tube facilities were conceived as a low-cost alternative for subsonic and transonic
wind tunnels testing at high Reynolds number [18]. Subsequently, they were extended to hypersonic appli-
cations. The advantage of a Ludwieg tube facility is the ability to generate a low-turbulence
uniform flow by placing a fast-acting valve downstream of the test section.
The Ludwieg tube consists of four main elements (Figure 3.18):

• Storage tube, where the fluid is stored at high pressure and temperature.
• Axisymmetric nozzle, which is separated from the storage tube by a fast-acting valve.
The nozzle is convergent–divergent to accelerate the flow up to supersonic and hyper-
sonic speeds.
• Test section, where the flow reaches the design Mach number.
• Vacuum discharge tank, which allows a large pressure ratio across the nozzle to operate
the tunnel.

The expansion ratio from the nozzle throat to the test section determines the design Mach number in the test
section.
When the valve opens, air flows from the storage tube through the nozzle. As a conse-
quence, an expansion wave travels upstream into the storage tube. The flow conditions behind
the expansion wave are the reference stagnation conditions for the flow in the test section.
When the expansion wave reaches the upstream end of the storage tube, it is reflected down-
stream. The total running time of the facility is determined by the time it takes before the
reflected expansion wave reaches the fast-acting valve.

Hot-shot wind Hot-shot tunnels are short-duration high-speed test facilities where the high pressures and
tunnels temperatures required for operation are achieved by rapidly discharging a large amount of
electrical energy into an enclosed small volume of air, which then expands through a nozzle
and test section (Figure 3.19).
The arc chamber is filled with air at pressure up to 2000 atm and temperature exceed-
ing 4000 K; the remainder of the circuit is evacuated to a very low pressure (usually a few
pascals). The high- and low-pressure portions of the tunnel are separated by a thin diaphragm
located slightly upstream of the nozzle throat. Energy is discharged into the arc chamber over
a few milliseconds: the energy added to the air yields an increase in temperature and pressure,
causing the rupture of the diaphragm between arc chamber and nozzle. When the diaphragm
breaks, high-pressure and high-temperature air expands from the arc chamber through the
nozzle, reaching high velocity (typically M > 10) in the test section. The high-speed flow typi-
cally lasts 10–100 ms. The velocity in the test section is not constant in time as a result of the
pressure and temperature drop in the arc chamber. The high-velocity flow is terminated when
the starting shock passing through the tunnel is reflected from the downstream edge of the
vacuum tank and arrives back to the model.

[Figure 3.19: arc chamber, diaphragm, nozzle, test section, and vacuum chamber.]
FIGURE 3.19 Schematic drawing of a hot-shot wind tunnel.

[Figure 3.20: gas inlet, electric arc between electrodes (+/−), settling chamber with cooling water, nozzle, model, and test chamber.]
FIGURE 3.20 Sketch of a plasma arc tunnel.

Plasma wind Plasma wind tunnels (also called plasma arc tunnels) use a high-current electric arc to heat
tunnels the test gas to very high temperature (exceeding 14,000 K). The testing time typically reaches
several minutes, using either direct or alternating current.
The plasma arc tunnel is composed of an arc chamber, a supersonic nozzle for Mach num-
bers typically below 3, a test chamber, and a vacuum system for maintaining the test chamber
at low pressure (see Figure 3.20). In operation, a flow of cold test gas takes place through
the arc chamber and nozzle. An electric arc is established through the gas between an insulated
electrode in the arc chamber and some surface of the arc chamber. The electric arc raises the
temperature of the test gas to ionization level, yielding a plasma, that is, a mixture of free elec-
trons, positively charged ions, and neutral atoms. Argon is frequently used as test gas instead
of air due to the higher degree of ionization achievable with a given power input.
Plasma arc tunnels are especially useful in the study of materials for reentry vehicles due
to the high heating rates that can be developed. Surface material ablation tests, which are not
possible in low-temperature tunnels or high-temperature short-duration tunnels, can be made.
Examples of plasma wind tunnels are the SCIROCCO tunnel at the Italian Aerospace
Research Centre (CIRA) and the PLASMATRON at Von Karman Institute (VKI).

3.7 Special wind tunnels

High Reynolds It is often not possible to achieve the same Reynolds number as in the flow to be simulated
number wind without using a full-scale model. Such a problem is evident when the flow around commercial aircraft,
tunnels which may have a wingspan larger than 50 m, is investigated. However, several approaches
can be adopted to increase the Reynolds number with small-scale models.
One of the oldest methods consists in using pressurized wind tunnels. The working prin-
ciple of this approach can be easily explained from the equation of state for a perfect gas and
the definition of the Reynolds number.
The viscosity is very weakly dependent on pressure and increases with temperature. When
the pressure is increased keeping the temperature constant, the density increases proportionally,

yielding a higher Reynolds number. For example, increasing the pressure of the air in the wind
tunnel from 1 to 15 atm would yield a Reynolds number increase by a factor of 15.
Although this approach is effective in increasing the Reynolds number by more than one
order of magnitude, it presents several drawbacks:
1. The wind tunnel shell must be designed and built to withstand the high internal pressure,
which increases the initial cost.
2. The process of pressurizing the air requires time and costs.
3. Increasing the air density also increases the dynamic pressure, resulting in higher aero-
dynamic loads on the model.
4. The wind tunnel test section must be sealed and a depressurization system must be
present to allow accessing the model.
Despite these drawbacks, pressurized wind tunnels are often considered for several test
conditions that require high Reynolds number.
A second possible approach to achieve large Reynolds numbers with scaled models con-
sists in changing the working fluid. Using heavy gases with lower kinematic viscosity than air
results in increased Reynolds number and Mach number. For instance, Freon 12 has a molecular
mass more than 4 times bigger than that of air (thus a gas constant 4 times smaller), so that at
equal pressure and temperature its density is more than 4 times that of air. The use of Freon 12
as a working fluid allows significantly increasing the Reynolds number and the Mach number
while preserving the same wind tunnel power and dimension. This approach also involves
several of the drawbacks presented for pressurized wind tunnels, such as the large initial costs,
the cost of pumps and of the fluid, and the cost and difficulty of making the test section
accessible for model changes.
A further approach for high-Re tests is that of cryogenic tunnels. These are wind tunnels
where the fluid stagnation temperature is decreased to 100–150 K to decrease the fluid viscos-
ity and increase its density. The use of low temperature in wind tunnels was first proposed by
Smelt in 1945 [19] to reduce the tunnel drive power. Subsequent studies have shown that a
major increase in Reynolds number may be achieved at Mach numbers up to about 3 by operat-
ing at cryogenic temperatures.
Cryogenic wind tunnels often employ nitrogen gas as working fluid. To achieve low
temperatures down to 110 K in continuously operating facilities, liquid nitrogen (with a
temperature of about 85 K) is injected into the tunnel, where it vaporizes immediately and forms
the cold gas flow. Examples of such cryogenic wind tunnels are the National Transonic Facility
at the NASA Langley Research Center in Hampton, Virginia, and the European Transonic
WindTunnel (ETW) in Cologne, Germany (Figure 3.21).
It should be kept in mind that in some applications it is not possible to match the Reynolds
number of the full-scale flow, not even using pressurized or cryogenic tunnels. A possible way

[Figure 3.21: (a) aerodynamic circuit with turning vanes, internally insulated stainless steel pressure shell, compressor (50 MW), GN2 blow-off, LN2 injection, stilling chamber, plenum, second throat, and test section; (b) detail photograph.]
FIGURE 3.21 (See color insert.) Aerodynamic circuit of the ETW (a) and detail of the liquid nitrogen injector rakes
(b). (From www.etw.de.)

to overcome this issue relies on the consideration that the Reynolds number primarily affects
the boundary layer pattern. Therefore, in order to achieve the same boundary layer character-
istics for nonmatching Reynolds numbers, trip strips may be placed on the scale model. These
are artificial roughness strips (typically sandpaper or sparsely spread carborundum particles),
located at the position where the boundary layer transition is expected to occur in the full-scale
flow, which force the boundary layer transition on the scale model.

Power gains of cryogenic, pressurized tunnels The maximum gain obtained by using
cryogenic wind tunnels can be evaluated from the power required to reach a given Reynolds
number. For simplicity, it is assumed that the cryogenic tunnel is used to reduce the power
requirement and not to increase the Reynolds number.
In a conventional wind tunnel (index 1), the required power is (see also “Evaluation of
power losses” section)
P_{conv} = \lambda \frac{1}{2} \rho_1 A_1 V_1^3 = \lambda \frac{1}{2} \frac{p_1}{R T_1} A_1 V_1^3 \qquad (3.40)

where the equation of perfect gas (p = ρRT) has been used.


Suppose that the cryogenic tunnel (index 2) produces an increase in pressure by a factor kp
and a decrease in temperature by a factor kT:
k_p = \frac{p_2}{p_1}; \quad k_T = \frac{T_1}{T_2} \qquad (3.41)

To achieve the same Reynolds number as in the conventional tunnel, the velocity in the test
section of the cryogenic tunnel must be

V_2 = V_1 \frac{p_1}{p_2} \frac{T_2}{T_1} \frac{\mu_2}{\mu_1} \qquad (3.42)

For cryogenic temperatures, the following dependence of the viscosity on temperature can be
used:
\frac{\mu}{\mu_0} = \left( \frac{T}{T_0} \right)^{0.9} \qquad (3.43)

The power required in a cryogenic tunnel is

P_{cryo} = \lambda \frac{1}{2} \rho_2 A_2 V_2^3 = \lambda \frac{1}{2} \frac{p_2}{R T_2} A_2 V_2^3 \qquad (3.44)

Combining Equations 3.40 through 3.44 and assuming a constant power factor λ, one obtains
\frac{P_{cryo}}{P_{conv}} = k_p^{-2} k_T^{-4.7} \qquad (3.45)

Hence, for a cryogenic tunnel where the test-section pressure is 10 bars and the temperature
120 K, the required power decreases by a factor above 6000!
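Equation 3.45 is easy to check numerically; the sketch below simply plugs in the example values quoted above (the 4.7 exponent already embeds the μ ∝ T^0.9 law of Equation 3.43):

```python
# Sketch: power saving of a cryogenic, pressurized tunnel at matched Reynolds
# number (Eq. 3.45), using the example values quoted in the text:
# p2 = 10 bar, T2 = 120 K versus p1 = 1 bar, T1 = 288 K.
kp = 10.0 / 1.0       # pressure increase factor p2/p1
kT = 288.0 / 120.0    # temperature decrease factor T1/T2

power_ratio = kp**-2 * kT**-4.7             # Pcryo / Pconv
print(f"Pcryo/Pconv = {power_ratio:.2e}")   # ~1.6e-4, i.e., a factor above 6000
```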

Reynolds gains of cryogenic, pressurized tunnels Usually, pressurized tunnels are used to
achieve large Reynolds number at acceptable operating cost (viz., drive power). The gain in
Reynolds number for a given power and Mach number can be computed in the following way.
First, the power is expressed in terms of Mach number:

P = \lambda \frac{1}{2} p D^2 M^3 \gamma \sqrt{\gamma R T} \qquad (3.46)

In Equation 3.46, a square test section of area A = D² has been assumed. The characteristic
length D is substituted in the expression of the Reynolds number:

Re = \sqrt{\frac{2 P p}{\lambda}} \, \frac{T_0^{0.9}}{\mu_0 T^{1.65} \gamma^{0.25} R^{0.75} M^{0.5}} \qquad (3.47)

At fixed M, P, and λ, the gain in Reynolds number is

\frac{Re_{cryo}}{Re_{conv}} = k_p^{0.5} k_T^{1.65} \qquad (3.48)

For the same example as before (p₂ = 10 bar, T₂ = 120 K), the gain in Reynolds number is
equal to 13.4.
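The same two factors, inserted in Equation 3.48, reproduce this figure (a minimal sketch):

```python
# Sketch: Reynolds-number gain at fixed power and Mach number (Eq. 3.48),
# for the same example values (p2 = 10 bar, T2 = 120 K).
kp = 10.0
kT = 288.0 / 120.0

re_gain = kp**0.5 * kT**1.65              # Recryo / Reconv
print(f"Recryo/Reconv = {re_gain:.1f}")   # ~13.4, as quoted in the text
```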

Anechoic wind Anechoic wind tunnels have been recently implemented to investigate the sources of noise in
tunnels the flow and the fluid dynamic phenomena producing these sources. These topics have become
very relevant due to the rapid increase in vehicle noise (e.g., airplanes, trains, automobiles,
and ships) and machinery noise (e.g., compressors, pumps, turbines, fans, propellers), result-
ing from fluid–structure interactions. The main mechanisms yielding noise in these applica-
tions are flow separation, vortex shedding, vortex breakdown, and boundary layer–related
phenomena. Recent theoretical and computational studies have produced significant advances
in understanding aerodynamic noise. However, direct experimental validation of these results
has been accomplished only partly due to the lack of appropriate experimental facilities. This
is especially true at universities, where a large amount of basic research in aeroacoustics is
conducted.
An example of anechoic wind tunnel is the Anechoic Flow Facility at the David Taylor
Research Center (Figure 3.22).

[Figure 3.22: closed-return circuit with screens and wide-angle diffuser, anechoic chamber enclosing the open test section, closed test section, breather, mufflers #1 and #2, cooler, and fan drive.]
FIGURE 3.22 Schematic drawing of the anechoic flow facility at the David Taylor Research
Center (United States). (Readapted from Rae, W.H. and Pope, A., Low-Speed Wind Tunnel Testing,
2nd edn., John Wiley & Sons, Inc., 1984.)

While this wind tunnel is essentially a closed-return tunnel with a closed test section
upstream of an open one, it has several specific features that make it a low-noise facility suit-
able for aeroacoustic investigation. These features include

1. The use of a wide-angle diffuser to allow a contraction ratio of 10:1 without a long
diffuser and return path.
2. The use of two 100° turns and two 80° turns instead of the more conventional 90° turns.
This permits the length required for the fan noise suppressors (mufflers), again within a
short return duct.
3. Very heavy concrete constructions to avoid wall vibrations plus the use of noise
suppression materials on walls, ceilings, and turning vanes.
4. An anechoic chamber surrounding the open test section to strongly reduce the reflected
noise.
5. Each section of the tunnel is acoustically insulated; furthermore, the entire tunnel is
insulated from the ground through several feet.

For further details on the design and construction of an anechoic wind tunnel, the interested
reader is referred to the acoustic measurements book by Mueller [20].
Many general-purpose wind tunnels have been modified to include noise absorption mate-
rials and other features to reduce the tunnel noise and allow aeroacoustic measurements. More
thorough understanding of aeroacoustic principles, better material availability, and improved
instrumentation of acoustic measurements even in the presence of high background noise have
been developed to meet the demand of users for quieter vehicles.

Water tunnels Water tunnels allow direct investigation of cavitation phenomena that cannot be conducted in
a wind tunnel. They are typically used in the same way and under the same physical principles
as low-speed wind tunnels. Water tunnels are typically smaller than wind tunnels with com-
parable Reynolds numbers. However, this apparent advantage is cancelled out by the greater
difficulty in employing water as working fluid instead of air.
Only a few “large” water tunnels exist. An example is the 48 in. (1.2 m) tunnel at the
Navy’s Applied Physics Laboratory at State College in Pennsylvania, United States. This
tunnel is mainly used for the development and design of underwater vehicles. Smaller
water tunnels are widely used for flow visualization studies and laser-based quantitative
measurements, for example, via particle image velocimetry (Chapter 10). The latter is
easy to implement in water tunnels because the higher density of water with respect to
air allows using larger seeding particles, which are more easily detected by the imaging
system.

Meteorological Meteorological wind tunnels are designed to simulate testing in the natural boundary layer,
wind tunnels which can be as tall as 500 m. These tunnels are used to determine wind loads on buildings and
their surrounding area, air pollution, soil erosion, and snow drifts. According to Cermak [21],
the main requirements for these wind tunnels are five:

1. Proper scaling of buildings and topographic features
2. Matching Reynolds numbers
3. Matching Rossby numbers
4. Kinematic simulation of air flow, boundary layer velocity distribution, and turbulence
5. Matching the zero pressure gradient found in the real world

For buildings, Reynolds numbers are based on the building width. In practice, Reynolds
number effects are typically negligible due to the sharp edges of most objects under
investigation.

The Rossby number is concerned with the effect of the rotation of the earth on its wind. It is
defined as the ratio between inertial and Coriolis forces:
Ro = \frac{U}{L f} \qquad (3.49)

where
L and U are the characteristic length scale and velocity of the phenomenon, respectively
f is the Coriolis frequency, which depends on both latitude and angular frequency of rota-
tion of the earth

In most applications, the Rossby number is of little significance and would be hard to simu-
late if it were necessary, since it would require putting the tunnel in rotation.
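For a feel of the orders of magnitude involved, the sketch below evaluates Equation 3.49 with assumed, purely illustrative values (10 m/s wind, 100 m length scale, 45° latitude):

```python
import math

# Sketch with assumed, illustrative values: Rossby number Ro = U/(L*f),
# where f = 2*Omega*sin(latitude) is the Coriolis frequency.
Omega = 7.292e-5           # Earth's angular rotation rate, rad/s
lat = math.radians(45.0)   # assumed latitude
f = 2.0 * Omega * math.sin(lat)

U = 10.0    # assumed characteristic wind speed, m/s
L = 100.0   # assumed characteristic length (building scale), m
print(f"Ro = {U / (L * f):.0f}")   # ~970 >> 1: Coriolis effects negligible
```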
The velocity distribution in the natural boundary layer should be simulated as completely
as possible. For example, at a scale of 1:400, a 200 m building will be 0.5 m high. In this case,
the boundary layer must be matched to at least 0.75 m high, and preferably all the way to
the test-section ceiling. The boundary layer velocity distribution and turbulence can be well
reproduced by installing spires upstream of the test section followed by a roughness run of
10–15 test-section heights, often made with small cubes on the floor (Figure 3.23). Typically,
the buildings to be tested lie on a turntable that may be rotated to allow testing the effect of the
wind from different directions.
The longitudinal pressure gradient normally found in a wind tunnel and increased by the
rather thick boundary layer can be significantly reduced by means of an adjustable test-section
roof that may be tuned to provide the extra cross-sectional area needed.
Some applications (e.g., tests of pollution) require the air flow to be cooled or heated with
respect to ambient temperature. Two types of “atmospheric” wind tunnels have been devel-
oped, both with long test sections:
1. Meteorological wind tunnels, with test-section length to height ratio of about 15. These
have the capability of both cooling and heating the air and the test-section floor.
2. Environmental wind tunnels use wind only, with no possibility of cooling or heating the
flow. These usually have test sections about 10 test-section heights long.

FIGURE 3.23 (See color insert.) Example of meteorological wind tunnel showing roughness
elements upstream of the buildings in the test section to simulate the proper conditions for study-
ing the wind pressures and pedestrian-level velocities. (Image credit to BMT Fluid Mechanics
Limited.)

Automotive Tests for aerodynamic parameters that affect a road vehicle's performance and stability are
wind tunnels conducted with either scale models or full-scale cars in large tunnels. Wind tunnels designed
for testing road vehicles (automobiles and small trucks) have the same general layout as
conventional tunnels, with the following peculiar features usually provided [22]:

1. A lower than conventional maximum speed. Tests are typically conducted at speeds
between 20 and 50 m/s.
2. A test section as large as possible within space and tunnel cost considerations. Desirable
would be at least 7 m wide and 5 m high.
3. A moving belt on the test-section floor to remove the effect of the ground boundary
layer (Figure 3.24). The moving belt is usually inadequate to support the vehicle in the
test section. The model is therefore suspended from a vertical (or sometimes lateral) strut,
streamlined to minimize the aerodynamic interference. The wheels are in contact with
the belt and are driven by it.
4. A turntable to simulate lateral wind conditions for the investigation of the vehicle lateral
stability.
5. A cooling system to remove engine-heated air and a special exhaust removal system to
keep the tunnel air free of contaminants.
6. The capability to run at very low speeds and yet remove the engine heat and exhaust.
7. A slot across the test-section floor near the entrance cone to remove the boundary layer,
or at least part of it.
8. A tunnel refrigeration system adequate to keep the tunnel cool enough that the modeling
clay may be used for styling changes.
9. A rain simulator so that windshield wiper operation may be checked, and design changes
made to keep the side windows clear of water. Some operators also require tests with
freezing rain.

FIGURE 3.24 Car model in the test section of an automotive wind tunnel at the Aerodynamic
Test Centre of BMW in Munich, Germany. The figure shows the moving belt on the test-section
floor and the struts holding the model. (From https://www.press.bmwgroup.com/global/photo/
detail/P90070560/the-new-bmw-6-series-convertible-in-the-bmw-group-wind-tunnel-12/2010.)

Problems

3.1 A regional aircraft flies at a cruise speed of 720 km/h at an altitude of 8000 m, where the
freestream pressure and temperature equal p_flight = 36 kPa and T_flight = 236 K, respectively.
A wind tunnel test needs to be designed in atmospheric conditions (p_wt = 101 kPa and
T_wt = 288 K). Determine the wind tunnel freestream velocity and the geometric scaling
factor (ratio between characteristic lengths) between real aircraft and wind tunnel model
to fulfill flow similarity. Assume the dynamic viscosity proportional to T^0.5.
3.2 A wind tunnel test is conducted on a NACA 0015 airfoil model (λ₂ = 3) of chord c = 1 m.
The wind tunnel test section has size w × h = 1 m × 2 m. The freestream velocity is
V_u = 40 m/s. A test in standard air at angle of attack α_u = 4° yields uncorrected values of
lift, drag, and moment with respect to the quarter chord equal to L_u = 300 N, D_u = 30 N,
and M_{1/4,u} = 10 Nm. Considering a longitudinal pressure gradient dp/dl = −0.3 Pa/m,
determine the corrected values of angle of attack and coefficients of lift, drag, and
moment (with respect to the quarter chord) accounting for the wall boundary effects.
3.3 Consider a supersonic wind tunnel having test-section area A_ts = 0.01 m². The design
Mach number is M_ts = 6.0. Considering an ideal working fluid, determine
(a) The cross-sectional area of the first throat
(b) The cross-sectional area of the second throat to allow starting the tunnel
(c) The Mach number in the second throat after the tunnel has started
3.4 A subsonic wind tunnel operates at atmospheric conditions (p_conv = 1 atm, T_conv = 288 K);
the maximum achievable Reynolds number per unit length is Re_conv = 2 × 10⁶ m⁻¹.
Suppose that the wind tunnel is modified into a cryogenic tunnel to increase the maximum
Reynolds number. Assuming a total temperature of the cryogenic facility of T_cryo = 130 K,
determine the total pressure p_cryo required to achieve a Reynolds number (per unit length)
of Re_cryo = 30 × 10⁶ m⁻¹.

References

1. Ramjee V, Hussain AKMF (September 1976). Influence of the axisymmetric contraction ratio
on free-stream turbulence, Journal of Fluids Engineering, 98, 505–515.
2. Witoszynski C (1924). Vorträge aus der Gebiet der Hydro- und Aerodynamik (Lectures in the
Field of Hydrodynamics and Aerodynamics), Springer, Berlin, Germany (in German).
3. Rae WH, Pope A (1984). Low-Speed Wind Tunnel Testing, 2nd edn., John Wiley & Sons, Inc.
4. Morgan PG (1960). The stability of flow through porous screens, Journal of the Royal
Aeronautical Society, 64, 359.
5. De Vahl DG (1964). The flow of air through wire screens, in: Sylvester R, ed., Hydraulics and
Fluid Mechanics, Pergamon, New York, pp. 191–212.
6. Prandtl L (1933). Attaining a steady air stream in wind tunnels, NACA TM 726.
7. Dryden HL, Schubauer GB (1947). The use of damping screens for the reduction of wind tunnel
turbulence, Journal of the Aeronautical Sciences, 14, 221–228.
8. Wattendorf FL (1938). Factors influencing the energy ratio of return flow wind tunnels,
Fifth International Congress for Applied Mechanics, Cambridge, MA.
9. Von Kármán T (1934). Turbulence and skin friction, Journal of the Aeronautical Sciences.
10. Barlow JB, Rae WH, Pope A (1999). Low-Speed Wind Tunnel Testing, 3rd edn., John Wiley &
Sons, Inc.
11. Allen JH, Vincenti WG (1944). Wall interference in a two-dimensional-flow wind tunnel
with consideration of the effect of compressibility, NACA TR 782.
12. Wolf SWD (1995). Adaptive wall technology for improved wind tunnel testing techniques—A
review, Progress in Aerospace Sciences, 31, 85–136.
13. Goethert BH (1961). Transonic Wind Tunnel Testing, AGARDograph 49, Pergamon Press.
14. Zucrow MJ, Hoffman JD (1976). Gas Dynamics, Vol. 1, John Wiley & Sons, Australia.
15. Humble RA, Scarano F, van Oudheusden BW (2007). Particle image velocimetry
measurements of a shock wave/turbulent boundary layer interaction, Experiments in Fluids, 43,
173–183.
16. Pope A, Goin KL (1965). High-Speed Wind Tunnel Testing, John Wiley & Sons, Inc., New York.
17. Lukasiewicz J (1973). Experimental Methods of Hypersonics, Marcel Dekker, Inc., New York.
18. Ludwieg H (1955). Der Rohrwindkanal, Zeitschrift für Flugwissenschaften, 3, 206–216.
19. Smelt R (1945). Power economy in high speed wind tunnels by choice of working fluid and
temperature, Report No. Aero. 2081, Royal Aircraft Establishment, Farnborough, England.
20. Mueller TJ (2002). Acoustic Measurements, Springer-Verlag GmbH, Berlin, Germany.
21. Cermak JE (1997). Wind tunnel testing of structures, ASCE Journal of the Engineering
Mechanics Division, 103, 1125–1140.
22. Kelly KB, Provencher LG, Schenkel FK (1982). The General Motors Aerodynamics
Laboratory, a full scale automotive wind tunnel, SAE 820371.
CHAPTER FOUR

Principles of flow visualization

Javier Rodríguez-Rodríguez

Contents

4.1 Physical basis of flow visualization 91
Tracer dynamics 91
4.2 Wall-bounded flows 93
4.3 Outer flows 97
Streamlines 98
Recirculation bubbles and detached vortical structures 99
4.4 Velocity profiles 100
Problems 101
Acknowledgment 104
References 104

Every person interested in aerodynamics has surely been captured at some point by the beauty of a
picture showing a flow visualization like, for instance, those appearing in the seminal van
Dyke's Album of Fluid Motion [1]. Besides the obvious appeal of the images they produce,
flow visualization techniques constitute a powerful tool in experimental aerodynamics to
obtain qualitative as well as quantitative information about a flow: streamlines, surface pres-
sure distribution, and regions of occurrence of laminar–turbulent transition in a boundary layer
are only a few examples of the flow features that can be determined through flow visualization.
Rather than enumerating different visualization techniques, the aim of this chapter is to
introduce the student to the physical principles on which some representative visualization
techniques rely. Thus, armed with this knowledge, the student will be prepared to understand
a full variety of methodologies commonly employed to investigate flow features.

4.1 Physical basis of flow visualization

Tracer dynamics Perhaps the most popular flow visualization technique consists in continuously releasing smoke
at a point upstream of the region where one desires to visualize the flow. When the smoke is
advected downstream, it marks the flow lines defined as streak lines: by definition, lines made
up of all the fluid particles that passed through the same point at some earlier time. These lines
only coincide with the streamlines under some particular conditions, the most relevant one being
when the flow is steady ([2], p. 72), which is the case in many flows of interest in aerodynamics.
The adequacy of smoke to produce neat streak lines resides in the small size of the particles
that compose it, usually around the micron. This smallness guarantees that tracers faithfully
follow the flow; thus, they effectively behave as fluid particles. Besides flow visualization,
investigating under which conditions tracer particles behave as fluid particles (i.e., follow
the flow) is essential to other techniques based on measuring their velocity, rather than the
actual fluid one, such as particle image velocimetry (PIV) or laser Doppler anemometry (LDA)
(see Chapter 10). In order to perform an order of magnitude analysis of the particle's dynam-
ics, we will use a simplified model in which a solid spherical particle of diameter d and
density ρp is immersed in a uniform flow with velocity u, impulsively started at t = 0. In a

first approximation, and assuming the particle is so small that the Reynolds number of its
motion is small, Re = ud/ν ≪ 1, the only force acting on the particle is Stokes' drag, thus*

\frac{\pi}{6} d^3 \rho_p \frac{dv}{dt} = 3 \pi d \mu \left( u - v \right), \qquad (4.1)

where v is the particle's velocity. After some algebra, this equation can be expressed as

\frac{dv}{dt} = \frac{18 \nu}{d^2} \frac{\rho}{\rho_p} \left( u - v \right). \qquad (4.2)

This equation can be made free of parameters by defining a dimensionless velocity ṽ = v/u
and time τ = t/tc, with tc = (d²/18ν)(ρp/ρ), finally resulting in

\frac{d\tilde{v}}{d\tau} = 1 - \tilde{v}. \qquad (4.3)

The solution of this equation is, under the assumption that the particle started from rest,

v = u \left( 1 - e^{-t/t_c} \right). \qquad (4.4)

For the sake of clarity in the interpretation, dimensional variables have been used to write this
expression. Some important conclusions can be obtained from this solution (see Figure 4.1).
The first one is that the time it takes for the particle to acquire a velocity close to that of the
flow is of the order of the characteristic time tc. Indeed, after one characteristic time the par-
ticle moves already at 63.2% of the flow velocity, whereas after twice that time, this value
rises to 86.5%.
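These percentages follow directly from Equation 4.4, as the short sketch below confirms:

```python
import math

# Sketch: fraction of the flow velocity acquired by the tracer after n
# characteristic times, from Eq. 4.4: v/u = 1 - exp(-t/tc).
for n in (1, 2, 3):
    print(f"t = {n} tc: v/u = {1.0 - math.exp(-n):.3f}")
# t = tc -> 0.632 and t = 2 tc -> 0.865, the percentages quoted above.
```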

[Figure 4.1: plot of v/u versus t/tc, rising from 0 toward 1.]
FIGURE 4.1 Time evolution of the velocity of a tracer particle exposed to an impulsively started
uniform flow. The point marked corresponds to t = tc, where the particle's speed is already 63.2%
of the final one.

* A comment is in place here: Equation 4.1 is a very simplified version of the actual equation describing the motion
of a particle immersed in a flow, namely, the Maxey–Riley equation [3]. Several effects have been neglected: Basset
history force, inertia of the surrounding fluid, etc. Nevertheless, the approach used here is enough to obtain an order
of magnitude estimation of the conditions under which particles behave as good flow tracers.

Coming back now to the application at hand, investigating under which conditions a par-
ticle is a good tracer, it seems clear that when the flow experiences temporal variations much
slower than the time tc, the particle will have plenty of time to adapt to the new velocity and
thus will follow the flow. Contrarily, if the flow changes over times shorter than, or of the order
of, tc, the particle will not be able to follow the flow. With this idea in mind, a dimensionless
parameter can be built, namely, the Stokes number, comparing the characteristic time taken
by the particle to adapt to the flow velocity with the timescale of the variations of the latter,
denoted by the inverse of their frequency, f⁻¹:

St = \frac{f d^2}{18 \nu} \frac{\rho_p}{\rho}. \qquad (4.5)

In summary, when the Stokes number of a tracer in a given flow is small, St ≪ 1, we can
guarantee that it will behave as a fluid particle. Examining Equation 4.5, it is clear that small,
light particles will make much better tracers, as is the case of smoke. More interestingly,
Equation 4.5 can be used to compute the highest frequency of flow variations that a tracer is
able to follow: frequencies faster than fc = 18ν/d² · ρ/ρp will lead to Stokes numbers of order
unity or larger, implying particles that do not adapt to the flow variations.
As an illustrative example, let us compute this cutoff frequency for typical smoke particles
with diameter d = 1 μm and density ρp = 2000 kg m⁻³ immersed in an air flow, ρ = 1.2 kg m⁻³
and ν = 1.5 × 10⁻⁵ m² s⁻¹:

f_c = \frac{18 \nu}{d^2} \frac{\rho}{\rho_p} = 162 \ \mathrm{kHz}.

A very high frequency! In fact, ranging well into the ultrasonic spectrum. This justifies the
usage of smoke as the tracer of choice in many flow visualization applications.
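The same arithmetic is easily scripted, which makes it straightforward to re-evaluate fc for other candidate tracers (a sketch; the second particle is an assumed, illustrative case, not from the text):

```python
# Sketch: cutoff frequency fc = (18*nu/d**2)*(rho/rho_p) above which the
# Stokes number is of order one and the tracer stops following the flow.
# First case: the smoke particle quoted in the text; second case: an assumed,
# illustrative coarser particle for comparison.
nu = 1.5e-5    # air kinematic viscosity, m^2/s
rho = 1.2      # air density, kg/m^3

for d, rho_p in ((1e-6, 2000.0), (10e-6, 2000.0)):  # diameter (m), density (kg/m^3)
    fc = 18.0 * nu / d**2 * rho / rho_p
    print(f"d = {d * 1e6:4.0f} um: fc = {fc / 1e3:8.1f} kHz")
# 1 um -> 162 kHz (as in the text); 10 um -> 1.6 kHz, since fc scales as 1/d^2.
```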
Notice that in the earlier discussion, the velocity of the flow, u, does not appear at all. This
has a very important consequence: the ability of a tracer to follow the flow is independent of
the velocity of the latter, being mostly influenced by its frequency.
The analysis performed earlier is, strictly speaking, valid for a uniform flow, where the
velocity field only depends on time, not on space. Despite this, in many situations this analysis
is a reasonable tool to determine the goodness of a particle to serve as a tracer. The exceptions
are those regions where the velocity experiences large variations in relatively small distances,
such as near walls. The reader is referred to Problem 4.1 for a deeper explanation of this
phenomenon.
Another effect that must be taken into account is the widening of the smoke trails due to
diffusive effects or turbulence. A popular example of this effect can be seen in the smoke
plume coming out of the tip of a cigarette (Figure 4.2). In this case, the hot plume that carries
the smoke thickens as it rises due to the mixing induced by the entrainment of fresh air from
the surroundings (see Chapter 12.1 of Reference 4 for a physical explanation of entrainment).
This effect would be even more pronounced in turbulent flows, as they exhibit stronger mixing
(see Problem 4.2).

4.2 Wall-bounded flows

In aerodynamics, the structure of the boundary layer developing on a body is of utmost inter-
est (see Chapter 1). For instance, the nature of the boundary layer, that is, laminar or turbulent,
greatly affects its ability to withstand a given adverse pressure gradient without separating.
When separation occurs on the upper surface of a lifting body, the section downstream from
the separation line no longer contributes to the generation of lift, which can lead to a stall in the
case of a flying vehicle. Furthermore, the shear stress exerted on the surface increases signifi-
cantly when the boundary layer transitions from laminar to turbulent. Due to this importance,

FIGURE 4.2 Smoke plume rising from the tip of a burning cigarette illuminated with a white
light.

a number of techniques have been developed to characterize the structure of the boundary
layer and even to determine quantitatively the wall shear stress. In what follows, some repre-
sentative techniques are described with the aim of illustrating the physical principles behind
boundary layer characterization.
Historically, the first techniques to visualize the behavior of a boundary layer addressed
the occurrence of phenomena such as laminar–turbulent transition, separation, and reattach-
ment [5]. Besides using the direction of the streamlines near the surface as an indicator, some
techniques exploit the changes in the wall shear stress to visualize the different regions of a
boundary layer. Indeed, the wall shear stress significantly increases for the same flow condi-
tions when the boundary layer becomes turbulent. As an illustrative example, let us consider
the friction coefficient on a flat plate immersed in a uniform stream of speed U∞, defined as
c_f = \frac{\tau_w}{(1/2) \rho U_\infty^2} \qquad (4.6)
In the laminar case, the Blasius solution yields [4]

c_{f,\mathrm{lam}} = 0.664 \, Re_x^{-1/2} \qquad (4.7)

whereas for the turbulent case, the Schultz–Grunow formula predicts [4]
c_{f,\mathrm{tur}} = 0.370 \left( \log_{10} Re_x \right)^{-2.584} \qquad (4.8)

Thus, for instance, at the location where Re_x = 10⁶, we have c_{f,tur}/c_{f,lam} ≈ 5.4. This situation
is sketched in Figure 4.3a, where the evolution of the shear stress is depicted when the flow
transitions to turbulence.
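A quick evaluation of Equations 4.7 and 4.8 reproduces this ratio (a sketch, valid only in the Re_x range where the Schultz–Grunow fit applies):

```python
import math

# Sketch: flat-plate friction coefficients from Eq. 4.7 (Blasius, laminar)
# and Eq. 4.8 (Schultz-Grunow, turbulent), evaluated at Re_x = 1e6.
Re_x = 1e6
cf_lam = 0.664 * Re_x**-0.5
cf_tur = 0.370 * math.log10(Re_x)**-2.584
print(f"cf_lam = {cf_lam:.2e}, cf_tur = {cf_tur:.2e}, "
      f"ratio = {cf_tur / cf_lam:.1f}")   # ratio ~ 5.4, as quoted above
```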

[Figure 4.3: sketches of the wall shear stress τw versus x along a plate; (a) laminar region, transition, turbulent region; (b) separation, transition, and reattachment.]
FIGURE 4.3 Sketches of the different flow structures that could be found in a boundary layer.
(a) Laminar–turbulent transition. (b) Separation bubble with further reattachment.

FIGURE 4.4 Main features of the boundary layer detachment, transition to turbulence, and reat-
tachment observed with an evaporative paint. (Image taken at the wind tunnel of the Spanish
National Institute of Aerospace Technology [INTA], Madrid, Spain, courtesy of Andrés Vázquez.)

In a qualitative way, this increase in the wall shear stress can be attributed to the greater
efficiency of turbulent flows in transporting fluid magnitudes. In particular, in the case of the
boundary layer, if the wall is seen as a sink of momentum, turbulence is more efficient than
molecular diffusion in supplying that sink with mean momentum from the free stream, which
results in larger values of the mean velocity at a given distance from the wall, compared to the
laminar case, which, in turn, yields larger velocity gradients and thus shear stresses. Naturally,
this is not only true for momentum but for any other intensive magnitude such as the inter-
nal energy. For this reason, heat conduction at a wall is also enhanced when the boundary
layer becomes turbulent. This principle is at the heart of some of the diagnostic techniques
described in Chapter 6.
More importantly for flow visualization, this enhancement in the transport of fluid proper-
ties can contribute to accelerate the evaporation or sublimation of a material layer coating
the aerodynamic surface. Thus, upon a wind-tunnel test, the examination of the distribution
of an evaporative coating reveals those regions where transition occurred. This is illustrated
in Figure 4.4a, where a 1:1 model of the bulb keel of a high-performance competition boat
can be observed after being tested. Before the test, the fin was coated with a thin naphthalene

layer that sublimates at high shear stresses as a consequence of the enhanced mass transfer
rate (see [6] for an illustrative example of an analogous process and Problem 4.5 for a deeper
explanation). White regions indicate the presence of the coating, whereas darker ones have
been nearly depleted of it. The occurrence of transition at about 60% of the chord length is
clearly marked. Quite remarkably, a number of dark wedges starting from the leading edge are
also observed. These structures indicate an early transition to turbulence due to imperfections
of the surface near the leading edge.
A more quantitative technique that exploits the variations of the shear stress found in
different flow regions is that of oil-film interferometry. In this method, a thin oil film is
applied on the surface before the experiment. The sheet is then allowed to evolve under the
influence of the shear stress and pressure gradient distributions occurring at the surface,
which modifies the thickness of the film in a way that can be quantitatively computed from
fluid mechanical first principles [7]. This technique permits highlighting structures such as
laminar–turbulent transition, recirculation bubbles, reattachment, etc., albeit it is usually limited
to smooth surfaces. The quantitative measurement of the wall shear stress using this effect
is beyond the scope of this chapter; thus, the interested reader is referred, for instance, to
Reference 8 and to Chapter 12.
As an example, Figure 4.4b shows the same keel fin of panel (a) during a test where the model was coated with a thin oil film containing a fluorescent dye. The turbulent region is revealed by the white patches close to the trailing edge. Interestingly, there is also a light–dark strip that runs parallel to the edge, interrupted by two bright wedges. This strip is caused by a recirculation bubble, like that sketched in Figure 4.3b, which induces the transition and later reattachment of the boundary layer.
Besides transitions between different boundary layer regimes, a liquid surface coating can be used to visualize the overall structure of the flow near the wall, including streamlines, attached vortices, recirculation bubbles, etc. Figure 4.5 shows an example of how, using two liquids with different dissolved fluorescent dyes, it is possible to make visible a number of important features of the flow. Indeed, in the first place, the breakup of the liquid film into droplets and the later dragging of these droplets by the flow clearly mark the streamlines both in the outer flow (yellow region) and in the recirculating bubble downstream from the plate (pink region) (see Color Insert). It is interesting to see how the usage of two dyes emitting light at different wavelengths allows the experimenter to determine the dividing streamline between the outer flow and the recirculation bubble.
Moreover, developing along this dividing streamline, the footprint of a detached corner vortex can be inferred [9]: the thick yellow line that separates the outer flow from a relatively darker region, thus depleted of dye, is a stagnation line where the coating liquid accumulates (see Figure 4.5d).
This is just an example of how liquid coatings can be used to obtain a detailed overall picture of the structure of a complex wall-bounded flow.
But perhaps the most widespread (and oldest) visualization technique in experimental aerodynamics consists in the use of small tufts with their tips pinned to an aerodynamic surface. Although the information that they yield is less quantitative than that obtained using liquid and/or sublimating coatings, the ease of implementing the technique has granted tufts their popularity. In the laboratory, they can be used to determine the flow direction at a surface and to spot regions of flow separation. For instance, Figure 4.6 shows a test model of an airplane being tested in a wind tunnel, using tufts to assess flow separation in different flow configurations: clean configuration, low angle of attack (a); clean configuration, high angle of attack (b); and with flaps partially deployed (c). In this example, it can be observed how the flow remains attached in configurations (a) and (c) but exhibits separation at different locations near the trailing edge at high angle of attack (see (d) for a close-up view of the near-tip region).
Indeed, in this subfigure, dark regions can be observed near the trailing edge. This occurs because the tufts no longer orient parallel to the surface but point perpendicularly to it. Moreover, due to the usually unsteady character of separation bubbles, tufts oscillate violently, which constitutes telltale evidence of flow detachment.


FIGURE 4.5 (See color insert.) Fluorescent dyes can be used to denote separation and streamlines. The figure shows the dye pattern at a plate oriented streamwise near a corner at different flow speeds (increasing from (a) to (c)). Notice how the thickness of the boundary separating the detached corner vortex increases with the speed, even giving birth to a complex pattern with two nested vortices (panel (c)). These configurations are explained in Reference 9. Panel (d) shows a sketch of the test model.

To conclude this section, it should be pointed out that all the techniques described here, not only the tufts, can be used with minimal or no modification, not only in experimental facilities, but more importantly on actual flying vehicles. Some of these techniques, such as oil visualization, require the analysis of the flow pattern upon landing, whereas others, such as tufts, allow the pilot to obtain aerodynamic information in real time. A comprehensive review of the information that can be extracted using these methods in flight research can be found in [10]. As a curiosity, in gliders, a tuft is actually used as a "flight indicator" of the yaw angle. This tuft is taped to the cockpit canopy to help the pilot determine whether the plane is side-slipping.

4.3 Outer flows

Perhaps the first image that comes to our minds when we hear about flow visualization is that of smoke streaks surrounding a model car, or plane, being tested in a wind tunnel. Indeed, one of the most common applications of flow visualization techniques is to make apparent to the sight the main features of the fluid flow around an aerodynamic body, including the direction of the fluid velocity (by definition, tangent to the streamlines) at different points. Besides the streamlines, experimental aerodynamicists are usually interested in finding regions of flow separation, recirculation bubbles, and vortical structures. Thus, this chapter deals with some physical considerations that need to be taken into account to visualize, and interpret, these structures in aerodynamic applications.


FIGURE 4.6 Tufts mark the appearance of different regions in a model of an EADS CN-235 at different angles of attack and wing configurations. Panels (a) and (b) show configurations, clean and flaps deployed respectively, in which no separation occurs. Panel (c) illustrates a situation at high angle of attack exhibiting separation near the tips (see panel (d) for a close-up). (Courtesy of Andrés Vázquez, INTA Wind Tunnel, Madrid, Spain.)

Streamlines
By definition, a streamline is a curve tangent to the velocity field at all its points. However, when smoke or any other tracer is used to visualize a flow, what is actually seen is a collection of streak lines that, in general, does not coincide with the streamlines (see "Tracer dynamics" section). Besides the tracers being able to follow the flow, it was stated in that section that the velocity field must be steady for streak lines and streamlines to coincide. Actually, this condition can be relaxed if the rate at which the flow changes is sufficiently slow.
An illustrative example is shown in Figure 4.7, where paths and streak lines have been computed for a set of perfect tracers immersed in a uniform flow whose direction rotates clockwise with angular speed ω = 2π/T, that is, completing a whole turn in a time T. The paths of particles released at the origin every Δt = 0.05 time units are shown as solid lines, with the dots denoting the final position of the particle at t = 1 time unit. Thus, the line connecting these dots would be a streak line. The left panel corresponds to a flow that rotates with a period T = 1000, whereas the right one to one rotating with T = 10.


FIGURE 4.7 Paths (solid lines) and streak lines (denoted by the dots) for a uniform flow where the direction rotates clockwise with an angular speed 2π/1000 (a) and 2π/10 (b).

Notice that, in both cases, the streamlines are straight lines forming an angle −ωt with respect to the horizontal direction. In the left panel, since the characteristic time of variation of the flow, T = 1000 time units, is much longer than the integration time (t = 1), no appreciable deflection of the trajectories is observed. Conversely, in the right plot, paths significantly deviate from the straight direction because the direction of the flow has changed noticeably within a time of the order of the integration time (it would complete a whole turn in a time T = 10). It can be inferred that, even if the flow is unsteady, when the time it takes to change, T in our example, is sufficiently longer than the time during which the tracer is going to be visualized, all the tracers released within the observation time will follow nearly the same trajectory. In general, if the flow has a characteristic speed U and size L, this so-called residence time will be t_r = L/U. Consequently, paths, streak lines, and streamlines will approximately coincide when the ratio St = t_r/T ≪ 1. This dimensionless number is referred to as the Strouhal number.
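As a brief illustration of this criterion, the following Python sketch (an addition for illustration, not part of the original text) integrates perfect tracers in the rotating uniform flow of Figure 4.7; the unit flow speed, the release cadence, and the forward-Euler step are illustrative assumptions.

import numpy as np

def streak_line(T, n_particles=20, t_end=1.0, dt=1e-3):
    # Perfect tracers released at the origin at regular intervals in a
    # uniform flow of unit speed whose direction rotates clockwise with
    # period T (the flow of Figure 4.7); returns their positions at t_end,
    # whose connecting line is the streak line.
    release_times = np.arange(n_particles) * (t_end / n_particles)
    endpoints = []
    for t0 in release_times:
        x = y = 0.0
        t = t0
        while t < t_end:
            angle = -2.0 * np.pi * t / T   # instantaneous flow direction
            x += np.cos(angle) * dt        # forward-Euler path integration
            y += np.sin(angle) * dt
            t += dt
        endpoints.append((x, y))
    return np.array(endpoints)

print(streak_line(T=1000)[0])  # slow rotation (St small): ends near (1, 0)
print(streak_line(T=10)[0])    # fast rotation (St ~ 0.1): visibly deflected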
Albeit this condition is usually satisfied in many practical applications in aerodynamics, there are some important cases where it will not be possible to fulfill St ≪ 1 in the whole flow. One of these important situations is described in the next subsection.

Recirculation bubbles and detached vortical structures
In high-Reynolds-number flows, dimensional analysis dictates that recirculation regions and shed vortices (such as those found in the von Kármán street) usually exhibit frequencies corresponding to Strouhal numbers of order unity. Indeed, under these circumstances, the characteristic frequency of these flow regions (shedding period for von Kármán vortices or turnover time for recirculation regions) must depend solely on the flow speed, U, and length scale, L; thus fc ≈ U/L and consequently St ≈ fcL/U ≈ O(1). This implies that it is not possible to visualize the instantaneous streamlines in these flows.
Nevertheless, as can be observed in Figure 4.8, the visualization using smoke streak lines still serves to make apparent important features of the flow. For instance, the presence of vortices is marked by the presence of smoke tongues that penetrate into the wake (see [11]). The turbulent nature of the vortices can also be appreciated by the smearing of the smoke lines as the vortices are advected downstream. Also, the lack of smoke at all times (not seen in a single picture, naturally) in the immediate vicinity of the wake suggests the presence of attached vortices (recirculation bubble). In summary, the analysis of the streak lines can be used to infer a number of important flow properties, even though the exact instantaneous direction of the velocity field is not known.
Naturally, this argument holds when the direction of rotation of the vortical structure is perpendicular to the streamwise direction of the flow. When the vorticity vector is nearly aligned with the main flow direction, then the central line of the vortex is transported as any other fluid particle and thus would coincide with a streak line. Such is the case of the so-called wingtip vortices (see Figure 4.9), which form in the wake of an airplane's wings. Even without seeding the flow, these vortices are evident to the naked eye thanks to the condensation of water vapor that occurs at the core of the vortex when the plane is flying in humid air (usually close to the ground).


FIGURE 4.8 Kármán street downstream of a cylinder visualized with smoke streak lines. The smoke is produced by heating a wire coated with a thin olive oil film.

FIGURE 4.9 Vortices detaching from the wingtips of a plane flying at low altitude. (Copyright by Stephen Edmonds, https://www.flickr.com/photos/popcorncx/, last accessed September 8, 2015. Commercial use allowed.)

This condensation is a consequence of the low pressures found at the center of the vortex, where the fluid can spin at very high velocities.

4.4 Velocity profiles

A more quantitative way of using smoke tracers consists in analyzing the deformation of a tracer line to obtain the velocity profile as its fluid particles are advected downstream. This method is widely used with liquids where, instead of smoke, other tracers such as hydrogen bubbles generated by electrolysis are used. For instance, in Figure 4.10, this method is used to visualize the velocity profiles in the boundary layer developing along a flat plate. The reader is referred to the book by Merzkirch [13] for a review of the usage of these techniques in liquids.
In the case of smoke trails, in most situations, buoyancy precludes their usage in the same way as hydrogen bubbles, which, due to their small size (usually tens of microns), exhibit very little effect of buoyancy during the residence time (see Problem 4.3). However, in some atmospheric measurements, smoke trails might be the only means to quantify flow velocities. As an example, Figure 4.11 shows the visualization of the flow induced by a very intense explosion using smoke.

FIGURE 4.10 Hydrogen bubbles generated by electrolysis are used to visualize the velocity profiles in a boundary layer. (Reprinted from Introduction to Fluid Mechanics, Nakayama, Y. and Boucher, R.F., Figure 6.19, Copyright 1998, with permission from Elsevier.)

FIGURE 4.11 Smoke "filaments" used to visualize the flow induced by a nuclear explosion. (Taken from the Nuclear Weapon Archive, http://nuclearweaponarchive.org/Usa/Tests/SmokeTrails.html, Retrieved on May 5, 2015.)

Problems

4.1 Let us consider a spherical particle of density ρp and radius R immersed in a 2D straining flow given by the velocity field:
$$V_{f,x} = Mx$$
$$V_{f,y} = -My$$

where M is the rate of strain, which physically corresponds to the inverse of the timescale of the flow. As sketched in Figure 4.12, this flow consists of two vertical streams that collide at the plane y = 0 and is commonly used to model the behavior of more complex flows near stagnation points. Also, in experimental combustion, it is often used to create a laminar diffusion flame at this plane, by letting one stream be the oxidant and the other the fuel. In the present problem, we investigate under which circumstances a small particle can be used as a tracer in this flow.
(a) Show that the dynamics of the particle is controlled by a dimensionless number, the Stokes number, given by
$$S = \frac{9\mu}{2\rho_p M R^2},$$

where μ is the viscosity of the fluid, which has a density ρ ≪ ρp.

FIGURE 4.12 Sketch of the flow field.



Hint: make the coordinates (x, y) dimensionless with the radius of the particle, R, and the velocity with MR.
(b) Calculate the critical value of the Stokes number such that the particle crosses the interface separating both streams.
(c) Show that, in the limit of very large Stokes number, the particle perfectly follows the flow.
4.2 We consider here the spreading rate of a smoke trail in uniform grid turbulence of mean velocity U. Neglecting the molecular diffusion of the smoke particles, the downstream evolution of the mean concentration of smoke, C, can be modeled with the following transport equation (see, for instance, Tennekes and Lumley [15]):
$$U\,\frac{\partial C}{\partial x} = \frac{\partial}{\partial y}\left(D_T(x)\,\frac{\partial C}{\partial y}\right),$$

where DT(x) is the so-called eddy diffusivity. Its value can be estimated on dimensional grounds as
$$D_T \sim u'\ell,$$
with u′ being the characteristic intensity of the velocity fluctuations and ℓ the length scale of the turbulence. Although both u′ and ℓ evolve over x, for the particular case of grid turbulence the product u′ℓ remains constant, as u′ decays as $x^{-1/2}$, whereas ℓ grows as $x^{1/2}$. With these hypotheses
(a) If, close to the grid, u′ = 10 cm/s and ℓ = 1 mm, use order of magnitude analysis to determine the spreading rate of a thin smoke trail in a flow with uniform mean velocity U = 1 m/s.
(b) Compare this spreading rate with that observed if the flow were fully laminar. In this case, the diffusion of smoke particles would be purely due to Brownian motion; thus, the diffusivity may be estimated using the Stokes–Einstein relation:

$$D = \frac{k_B T}{6\pi\mu R}.$$

Here,
kB = 1.38 × 10⁻²³ J/K is Boltzmann's constant
T is the absolute temperature (say 300 K)
μ is the air viscosity (μ = 2 × 10⁻⁵ Pa s)
R is the radius of the particle (R = 1 μm)
4.3 A small spherical particle (either a bubble or a solid particle) immersed in a fluid flow will experience a drift with respect to the trajectory of a perfect tracer due to the effect of gravity. The aim of this problem is to evaluate how important this effect is in the motion of a small hydrogen bubble of diameter d = 10 μm compared to that of a smoke particle of diameter d = 1 μm and density ρp = 2000 kg m⁻³. In both cases, we will assume that the density of the gas (hydrogen or air) is negligible.
(a) Write down the equations for the velocity of a particle (bubble in water and solid particle in air) that starts from rest and is immersed in a horizontal, uniform flow of velocity U. The surrounding fluid (water or air) has density ρ and viscosity μ.
(b) Simplify the previous equations using the assumption that the density of the gas is zero.
(c) Compute the terminal velocity of the particle in the two cases considered. In particular, evaluate the ratio between the vertical terminal velocities. Which one is faster?

4.4 We shall consider here the thickness distribution of a thin film of very viscous oil, deposited on a flat plate, under the action of a given wall shear stress, τ(x), that may depend on the distance along the plate, x. This analysis constitutes the physical principle on which the oil-film techniques used to determine the wall shear stress are based.
We start from the equations of motion for a thin, very viscous liquid film, neglecting unsteady, inertia, and gravity effects:

$$\frac{\partial V_x}{\partial x} + \frac{\partial V_y}{\partial y} = 0$$
$$0 = -\frac{\partial p_w}{\partial x} + \mu\,\frac{\partial^2 V_x}{\partial y^2}$$

where
μ is the liquid's viscosity
pw is the pressure at the upper boundary of the film

These equations must be completed with the boundary conditions for Vx and Vy: $V_x(x, y=0) = V_y(x, y=0) = 0$ and $\tau = \mu\,\frac{\partial V_x}{\partial y}(x, y=h)$. Here, h denotes the height of the free surface; thus, y − h(x, t) = 0.
(a) Use the previous equations to obtain the flow rate of liquid crossing a given section of constant x, Q, as well as the velocity at the surface y = h, Vs. Give the condition to neglect the effect of the pressure gradient. Assume that this condition holds hereafter.
(b) Using continuity and the condition that the free surface is a fluid line, write a differential equation that relates h with τ.
(c) Under the assumption of thin film thickness, we can evaluate the derivative of a fluid property for a fluid particle moving with the velocity at the free surface in the following way:
$$\frac{d}{dt} = \frac{\partial}{\partial t} + V_s\,\frac{\partial}{\partial x}$$

Starting from this definition, prove that following a point at the interface, the product $h\tau^{1/2}$ is conserved.
(d) Compute the time taken for a particle at the surface, starting from x = 0, to arrive at a given downstream location x. Combine this equation with the result of the previous point to obtain* a relation between the thickness of the film at this location and the history of shear stresses experienced by the particle during its downstream evolution.
(e) Modeling the wall shear stress near a separation point, x = xs, as
$$\tau = \tau_c\left(1 - \frac{x}{x_s}\right),$$
with τc a known constant, prove that the thickness of the oil film diverges to infinity at the separation point. What is the physical interpretation of this result?
* The analysis described in this problem is based on that by Tanner and Blows [7].

4.5 In this problem, we illustrate with a canonical example how to quantitatively relate the rate of evaporation of a thin naphthalene layer to the shear stress acting on a solid surface. Let us consider a laminar, low-Mach-number, uniform flow of air parallel to a semi-infinite flat plate (Blasius flow) over which a thin layer of naphthalene has been deposited. The system of boundary layer equations and boundary conditions that determine both the velocity (u, v) and concentration of naphthalene, C, are

$$\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0$$
$$u\,\frac{\partial u}{\partial x} + v\,\frac{\partial u}{\partial y} = \nu\,\frac{\partial^2 u}{\partial y^2}$$
$$u\,\frac{\partial C}{\partial x} + v\,\frac{\partial C}{\partial y} = D\,\frac{\partial^2 C}{\partial y^2}$$

with u(x, 0) = v(x, 0) = u(0, y) = 0; u(x, ∞) = U; C(x, ∞) = C(0, y) = 0; C(x, 0) = Cw.
Here, D = 8.6 × 10⁻⁶ m² s⁻¹ is the diffusivity of naphthalene in air and Cw is the saturation concentration. In writing these equations, it has been assumed that the rate of sublimation is small enough not to perturb the velocity field, that is, the impermeability boundary condition at the plate, v = 0, still holds. Moreover, the negative heat flux that the sublimation of naphthalene imposes at the plate has been assumed to be small enough to assume that the flow temperature, T0, is constant and uniform everywhere.
(a) If the vapor pressure of naphthalene at T0 = 20°C is pv = 6.5 Pa, compute the wall
concentration, Cw, assuming thermodynamic equilibrium.
(b) Knowing that the velocity field, in particular the horizontal velocity u, can be expressed as a function of a similarity variable,
$$\eta = \frac{y}{\sqrt{\nu x / U}},$$

in the following form:
$$\frac{u}{U} = \hat{u}(\eta),$$
show that the rate of sublimation per unit surface at the plate, $\dot{m}$, can be expressed as a product of the shear stress, τw, times a function of the Schmidt number.

Acknowledgment

The author acknowledges Andrés Vázquez Losada, from Vazquez y Torres Ingeniería (VTI, S.L.)
for his assistance in the writing of this chapter and for providing many of the pictures used.

References

1. Van Dyke M. An Album of Fluid Motion. Parabolic Press, Stanford, CA (1982).
2. Batchelor GK. An Introduction to Fluid Mechanics. Cambridge University Press, Cambridge, U.K. (2000).
3. Maxey MR and Riley JJ (1983). Equation of motion for a small rigid sphere in a nonuniform flow. Phys. Fluids 26: 883–889.
4. Schlichting H and Gersten K. Boundary Layer Theory. 8th edn. Springer, Berlin, Heidelberg, Germany (2004).
5. Squire LC (1961). The motion of a thin oil sheet under the steady boundary layer on a body. J. Fluid Mech. 11: 161–169.
6. Mac Huang J, Moore MNJ, and Ristroph L (2015). Shape dynamics and scaling laws for a body dissolving in a fluid flow. J. Fluid Mech. 765: R3.
7. Tanner LH and Blows LG (1976). A study of the motion of oil films on surfaces in air flow, with application to the measurement of skin friction. J. Phys. E Sci. Inst. 9: 194–202.
8. Klewicki JC, Saric WS, Marusic I, and Eaton JK. Wall-bounded flows. In Handbook of Experimental Fluid Mechanics (C. Tropea, A. Yarin, and J.F. Foss, eds.). Springer, Berlin, Germany (2007).
9. Simpson RL (2001). Junction flows. Annu. Rev. Fluid Mech. 33: 415–443.
10. Fisher DF and Meyer RR. Flow visualization techniques for flight research. NASA TM-100455. NASA Ames Research Center, Moffett Field, CA (1988).
11. Zdravkovich MM (1969). Smoke observations of the formation of a Kármán vortex street. J. Fluid Mech. 37: 491–496.
12. Nakayama Y and Boucher RF. Introduction to Fluid Mechanics. Elsevier, Amsterdam, Netherlands (1998).
13. Merzkirch W. Flow Visualization. 2nd edn. Academic Press, Cambridge, MA (1987).
14. Nuclear Weapon Archive. http://nuclearweaponarchive.org/Usa/Tests/SmokeTrails.html. Retrieved on May 5, 2015.
15. Tennekes H and Lumley JL. A First Course in Turbulence. MIT Press, Cambridge, MA (1972).
SECTION II
Scalar measurements

CHAPTER FIVE

Pressure measurements

Daniele Ragni

Contents

5.1 Introduction 109


The concept of pressure 109
Pressure units and standards 110
Atmospheric pressure, static pressure, and total pressure 112
5.2 Direct pressure measurement devices 118
Deadweight gauges 118
Manometers and barometers 119
McLeod gauge 123
5.3 Indirect pressure measurement devices 124
Elastic transducers 124
Bourdon tubes 124
Diaphragms 124
Strain gauges 125
Emission-based techniques: pressure-sensitive paint 127
5.4 Dynamic pressure measurements 128
Resonant transducers and vibrating cylinder 130
Microphones, capacitor type 132
Inductive and reluctive transducers 135
5.5 Some aspects on measurement procedures 136
Problems 138
Additional exercises 140
References 141

5.1 Introduction

The concept of pressure
The "pressure" concept is very relevant in physics and aerodynamics, since it lays the basis for the generation of fluid forces and loads on objects. "Pressure" is a derived quantity defined as force per unit area. "Force" and "area" are in turn derived from three fundamental quantities, "length," "mass," and "time"; therefore, pressure is usually written as a combination of these three. Typically, the concept of pressure is wrongly confused with that of "force," mainly due to the historical ways of measuring pressures by applying forces on known areas. While pressure stresses can be measured in solids by exploiting the stiffness characteristics of materials and structures, in fluids (i.e., gases and liquids) the measuring procedure is rather different. When considering fluids indeed, the movement of their constituting molecules cannot be neglected. An interesting relation can be derived that states how the statistical kinetic motion of the molecules constituting the fluid medium can be linked to its resultant pressure. Understanding the connection between the statistical characteristics of the constituting elements of matter and the macroscopic characteristics of the medium is a critical step to avoid many misunderstandings. Together with the fluid pressure, other properties such as temperature and density can be linked to the kinetic motion of the fluid molecules, due to the connection with the random motion of the particles moving in the medium.



FIGURE 5.1 (a) Example of an airfoil with manufactured pressure holes. (b) Scanivalve system for acquisition of steady/unsteady pressure measurements. (From Daniele Ragni, TUDelft, Delft, the Netherlands.)

In the following paragraphs, the concept of pressure will be explained with respect to many different applications, first introducing it at the molecular level and then studying it in several pressure transducers. Different types of sensors can be chosen to measure unsteady/steady or high/low pressures (see the example in Figure 5.1). The following paragraphs will be dedicated to the presentation of the best-known configurations in experimental aerodynamics, with specific interest in their implementation and characteristics.
Most of the known sensors transduce pressure stresses through the force exerted by common fluids on known areas (e.g., membranes and diaphragms), typically due to the versatility of this particular setup. In addition, different measurement techniques have been developed in the last decade to retrieve the pressure distribution around complicated shapes such as aircraft propeller blades tested at transonic speed (Figure 5.2). Before discussing the latest developments in experimental aerodynamics, however, the basic aspects of the pressure concept are introduced.

Pressure units and standards
Pressure can be measured with respect to a vast range of "standard units," a fact that, at first, might seem confusing and terribly redundant. Ultimately, the variety of pressure units has developed together with the invention of different design solutions for pressure transducers. Pressure units are mostly obtained from the following physical definition or equivalence:
$$p \overset{\mathrm{def}}{=} \frac{F}{A} \;\rightarrow\; [p] = \frac{[F]}{[A]} = \frac{\mathrm{N}}{\mathrm{m}^2} = \frac{\mathrm{kg}}{\mathrm{m\,s}^2} = \mathrm{Pa} \quad (5.1)$$

In the International System of Units, also referred to as "SI" (Le Système International d'Unités [1]), the fundamental unit of pressure (derived from fundamental quantities) is the "Pascal" = kg/(m s²), named in honor of the seventeenth-century scientist Blaise Pascal and corresponding, from Equation 5.1, to a Newton of force per square meter of area. The Pascal unit is indeed a reference in physics; however, many other possibilities can be found in engineering, all historically related to characteristics of particular measurement instruments. A summary of conventional pressure units is shown in Table 5.1, together with their conversion to the reference one. Two main categories can be found in the available standards panorama: manometric units (metric, not SI) and Imperial ones (English standards). The interested reader is encouraged to browse the NIST Guide for the Use of the International System of Units [2], in order to get a flavor of the amount of work that has been accomplished in the definition of such units.
Every pressure conversion reported in Table 5.1 (if not prescribed by definition to a certain value) has to be carried out at a prescribed temperature, usually at "standard conditions" (273.15 K or 0°C) as referred to by the International Union of Pure and Applied Chemistry (IUPAC [3]) and the National Institute of Standards and Technology (NIST [2]).


FIGURE 5.2 (See color insert.) (a) Pressure distribution on a wind tunnel model, measured with PSP (From French-German Research Institute of Saint-Louis, France). (b) 3D flow and pressure coefficient visualization of a flying aircraft propeller blade model. (From Daniele Ragni, TUDelft, Delft, the Netherlands. Reproduced with permission from Springer.)

Table 5.1 Brief summary of the most common manometric and imperial pressure units with their symbol, name, and conversion to the fundamental Pascal reference

Symbol            Name                                  Conversion
Manometric
Torr = mm of Hg   Torr(a) = millimeters of mercury(b)   1 Torr ≈ 133.322 Pa
mm H2O            Millimeters of water(b)               1 mm H2O ≈ 9.807 Pa
bar               Bar(c) and multiples                  1 bar = 100,000 Pa
atm               Standard atmosphere unit              1 atm = 101,325 Pa
Imperial system
psi               Pound force per square inch           1 psi ≈ 6,894.757 Pa
in. Hg or "Hg"    Inches of mercury(b)                  1 in. Hg ≈ 3,386.389 Pa
in. H2O or "H2O"  Inches of water(b)                    1 in. H2O ≈ 249.089 Pa
oz/in.²           Ounce force per square inch           1 oz/in.² ≈ 430.922 Pa

(a) Note the symbol's uppercase initial, in comparison to the name in lowercase, since it derives from Torricelli and his first demonstration of the mercury barometer. It has to be remembered that 760 mm Hg or 760 Torr corresponds to the height of a mercury fluid column needed to balance the atmospheric pressure.
(b) Pressure needed to displace an equivalent amount of fluid column.
(c) Bar and millibar were introduced by the British meteorologist William Napier Shaw in 1909, director of the Meteorological Office in London.
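As a small practical aid (a sketch added here for illustration, not part of the original table), the conversion factors above translate directly into a lookup-based converter in Python:

# Conversion factors to Pascal, taken from Table 5.1; values marked
# with an approximation sign in the table are rounded as given there.
PA_PER_UNIT = {
    "Pa": 1.0,
    "Torr": 133.322,
    "mmH2O": 9.807,
    "bar": 100000.0,     # exact by definition
    "atm": 101325.0,     # exact by definition
    "psi": 6894.757,
    "inHg": 3386.389,
    "inH2O": 249.089,
    "oz/in2": 430.922,
}

def convert(value, from_unit, to_unit):
    # Convert a pressure reading between any two units of Table 5.1.
    return value * PA_PER_UNIT[from_unit] / PA_PER_UNIT[to_unit]

print(convert(760.0, "Torr", "atm"))  # ~1.0: a 760 mm Hg column balances 1 atm
print(convert(1.0, "psi", "Pa"))      # 6894.757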

FIGURE 5.3 Schematic of a gas confined in a box.

Atmospheric pressure, static pressure, and total pressure
Despite the uniqueness of the concept of pressure, many "pressure definitions" can be found in engineering, depending on the particular application. Before embarking on this long journey, it is appropriate to briefly explore how pressure is generated at the molecular level. Fluids are taken as reference in the following paragraphs, together with the concepts of atmospheric, static, and total pressure. The pressure exerted by a solid in contact with a surface is relatively less interesting here, since fluids are mostly used in conventional static pressure measurement devices. Let us imagine an ideal fluid confined in a container (for example, the fluid can be "air" with good approximation). The "box of air" in Figure 5.3 is a good representation of such a closed system. The gas is supposed to be constituted by molecules at a temperature above 0 K, therefore possessing a positive internal energy, allowing them to move randomly and to interact elastically with the container walls and with each other.
The pressure that the gas exerts on the box walls is determined by Newton's second law, with the normal force per unit area determined by the rate of change of momentum per unit area of all the particles elastically (following the ideal gas assumption) colliding with the walls. On one of the vertical walls, the infinitesimal contribution dFx to the normal force Fx is given by the rate of change of momentum in the x direction:

$$dF_x = \frac{dq_x}{dt} \approx \frac{\Delta q_x}{\Delta t} \quad (5.2)$$

with the operator Δ indicating finite differences and q the momentum of colliding particles. The change of momentum due to a single colliding particle of mass m and velocity V = {u, 0, 0} is given by Δqx = 2mu. The rate of change of momentum per unit area is more complex, since the total number of particles hitting the box walls per unit of time and their velocities need to be known. From the total number of particles per unit volume, it can be stated, using the Maxwell–Boltzmann probability distribution (Chapter 1), that

$$\underbrace{n}_{\text{particles per unit volume}}\;\cdot\;\underbrace{u\,\Delta t}_{\text{max distance from the wall for particles with } u}\;\cdot\;\underbrace{f(u)\,du}_{\text{fraction of particles with positive } u} \quad (5.3)$$

In this respect, the flow pressure is given by integrating the rate of change of momentum per unit area:
$$p = \int_0^\infty 2mu\,\frac{n\,u\,\Delta t\,f(u)}{\Delta t}\,du = 2mn\int_0^\infty u^2 f(u)\,du = nm\overline{u^2} \quad (5.4)$$

An isotropic distribution of particle velocities is assumed for simplicity; that is, the movement of each particle in the box is supposed to be equally probable in the three spatial directions. The statistical distributions of the velocity components of the particles and their average values are thus all the same, allowing Equation 5.4 to be written as a function of the average velocity magnitude as
$$\overline{V^2} = 3\overline{u^2} = 3\overline{v^2} = 3\overline{w^2} \;\rightarrow\; p = \frac{nm\overline{V^2}}{3} \quad (5.5)$$
where
n is the number of molecules contained in the box per unit volume
$\overline{V^2}$ is their mean-square velocity
m is the mass of a single molecule
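A quick numerical check of Equation 5.5 can be written in a few lines of Python; the sketch below models air as nitrogen and uses the standard-condition number density together with the kinetic-theory result $\overline{V^2} = 3k_BT/m$, all illustrative assumptions not given in the text.

# Mean-square molecular speed from kinetic theory, then Eq. 5.5.
k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 273.15              # standard-condition temperature, K
m = 4.652e-26           # mass of one N2 molecule, kg (assumed air model)
n = 2.687e25            # molecules per cubic meter at standard conditions

V2_mean = 3.0 * k_B * T / m     # mean-square speed, m^2/s^2
p = n * m * V2_mean / 3.0       # Eq. 5.5: pressure from molecular motion
print(round(V2_mean ** 0.5))    # ~493 m/s rms molecular speed
print(round(p))                 # ~101325 Pa, i.e., about 1 atm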

It has to be noted that the pressure of the gas in the box depends on the total mass of the molecules therein contained and on their kinetic energy. The reader can already speculate that, whenever the gas is heated, the average kinetic energy of the gas increases with two consequences:

1. The gas expands in the available space/volume and its pressure increases.
2. The container is subjected to an equally distributed pressure, or equivalently, the pressure distribution of the gas acts uniformly on the container.

The reason why the air box does not buckle under the pressure of the internal gas lies in the counteracting force of the gas outside the box, which balances the effect of the inner pressure, as can be seen from Figure 5.4.
The pressure that balances the inner one of the box is the pressure of the "gaseous stratum" of air that surrounds the planet Earth, where all the objects and people therein immersed can be considered as more complex versions of the air box, this time immersed in a relatively thick* atmospheric layer. The pressure that this layer of air exerts on the Earth's surface and objects is typically referred to as "ambient air pressure" or "atmospheric pressure." Since there is nothing confining the atmospheric air layer close to the Earth's surface, besides Earth's gravity, it might be difficult to understand what the source of such an ambient pressure is.

mv
mv
A
A

FIGUre 5.4 Counter pressure exerted by the ambient contribution.

* The reader can judge the validity/sarcasm of the "relatively thick" statement by comparing its dimension to the Earth's radius.

Indeed, the pressure that can be measured at "sea level" or at any other height on the Earth's surface is the result of the weight of the column of air that stratifies on top of that location. Considering that the atmosphere surrounding our planet extends up to an altitude of about 700 km (beginning of the so-called exosphere [4], where the atmosphere starts merging with outer space), the relation that subsists between pressure and air density (or gravity) can be easily derived. An extremely interesting derivation of the relation between barometric pressure and altitude is shown in the International Standard Atmosphere model formulated by the International Civil Aviation Organization [5]. The model assumes the following atmospheric characteristics:

1. The atmosphere has a constant decrease in temperature with increase in altitude.
2. The gravity is independent of altitude.
3. The atmosphere is in hydrostatic equilibrium.
4. The air constituting the atmosphere is a perfect gas.
Although the model is only a fair approximation, its results are quite interesting. The constant decrease in temperature is fairly accurate up to 11 km and, if more accuracy is needed, the pressure/altitude relationship can be improved by considering separate atmospheric layers, each one with its own characteristic constants. These accuracy refinements of the single-constant approximation are left to the enthusiastic reader. The assumption of constant gravity with altitude is generally accurate for engineering applications, given the relatively small thickness of the atmospheric layer with respect to the Earth's radius. The assumption of hydrostatic equilibrium is also accurate for engineering applications on the entire globe. Finally, the perfect gas assumption is among the most accurate for air, although the specific gas constant of air Rair (287.14 J/(kg K) = R/Mair, the ratio of the ideal gas constant R = 8.314 J/(mol K) and the molar mass of air Mair = 0.02895 kg/mol) varies with the composition of the atmosphere when outer layers are considered.
Under hydrostatic conditions, at a certain altitude z0, the infinitesimal change of pressure follows the weight of the infinitesimal layer at that precise location:
$$dp = -\rho g\,dz \quad (5.6)$$
where
dp is the infinitesimal change in gas pressure
dz is the infinitesimal change in altitude
ρ is the gas density
g is the gravitational acceleration (all calculated at z0)

Note that the minus sign ensures that the change of pressure over the infinitesimal layer at z0 compensates for the downward-directed gravitational force (increase in height → lowering of the pressure). From the ideal gas law, the gas density can be replaced with
$$\rho = \frac{p}{R_{\mathrm{air}} T} \quad (5.7)$$

Substituting Equation 5.7 in Equation 5.6, a differential relation can be obtained, from which both pressure and altitude variations can be integrated between two arbitrary positions 1, 2:
$$\int_{p_1}^{p_2}\frac{dp}{p} = -\int_{z_1}^{z_2}\frac{g}{R_{\mathrm{air}} T}\,dz \quad (5.8)$$

Solving the previous equation for a constant temperature T,
$$\log\frac{p_2}{p_1} = -\frac{g}{R_{\mathrm{air}} T}\,(z_2 - z_1) \quad (5.9)$$

Equation 5.9 offers one of the most interesting ways of evaluating changes in pressure due to changes of altitude. In fact, it can be used either for the determination of the average pressure profile with altitude or for the altitude as a function of the pressure variation, a relation normally employed in many aircraft altimeters:
$$p_2 = p_1\,e^{-\left(g/(R_{\mathrm{air}} T)\right)(z_2 - z_1)} \;\leftrightarrow\; z_2 = z_1 - \frac{R_{\mathrm{air}} T}{g}\log\frac{p_2}{p_1} \quad (5.10)$$

A more refined equation is derived, as in the Portland State Aerospace Society [6], by letting the temperature change linearly as a function of altitude with T = βz, where β is a proper constant. The procedure remains the same as in Equation 5.8:
$$\int_{p_1}^{p_2}\frac{dp}{p} = -\int_{z_1}^{z_2}\frac{g}{R_{\mathrm{air}}\beta z}\,dz \quad (5.11)$$

Performing again the integration, the previous equation reads
$$\log\frac{p_2}{p_1} = -\frac{g}{R_{\mathrm{air}}\beta}\left[\log\left(\frac{z_2}{z_1}\right)\right] \quad (5.12)$$

With an opportune choice of the integration constants, the previous relation can be written as
$$z_2 = \frac{T_1}{\beta}\left(\frac{p_2}{p_1}\right)^{-(R_{\mathrm{air}}\beta/g)} \quad (5.13)$$
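The isothermal relation of Equation 5.10 is easily evaluated numerically; the Python sketch below (an illustration, not part of the original text) assumes sea-level values purely for demonstration.

import math

R_AIR = 287.14   # specific gas constant of air, J/(kg K), as in the text
G = 9.80665      # gravitational acceleration, m/s^2

def pressure_at_altitude(p1, T, dz):
    # Eq. 5.10 (left): pressure after an altitude gain dz at constant T.
    return p1 * math.exp(-G / (R_AIR * T) * dz)

def altitude_from_pressure(p1, p2, T, z1=0.0):
    # Eq. 5.10 (right): the altimeter relation at constant T.
    return z1 - (R_AIR * T / G) * math.log(p2 / p1)

p_1km = pressure_at_altitude(101325.0, 288.15, 1000.0)
print(round(p_1km))                                            # ~90000 Pa
print(round(altitude_from_pressure(101325.0, p_1km, 288.15)))  # recovers 1000 m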

A gas can thus exert a nonzero pressure both through the physical characteristics related to the kinetics of its molecules and through its own weight. One important question is related to the effect of the internal kinetic characteristics of the fluid, that is, what happens when the fluid is in motion at the macroscopic level. This is an important step to understand the definitions of "static" and "total" pressure. As mentioned earlier, although in physics there is "one and only one" pressure concept, related to the action of multiple microscopic molecule/atom collisions, the engineering distinction between "static pressure" and "total pressure" might seem confusing. Once again, the two definitions historically date back to the particular way the pressure of moving fluids was originally measured. In Figure 5.5, a cloud of gas moving macroscopically with uniform velocity V is represented.
Let us imagine flying at the same speed as the cloud, or equivalently, being in a moving frame of reference with origin O′, as in Figure 5.5, translating at the same speed V of the cloud. Since the pressure measured when "flying with the gas" is merely associated with the internal characteristics of the gas rather than with its motion, this pressure is typically referred to as "static."
A second approach considers a second frame of reference, stationary while the gas moves past it (the one with origin in O in Figure 5.5), where the pressure of the fluid in motion with velocity V is measured. It might seem obvious that the pressure is a property of the gas and therefore independent of the linear velocity of the cloud. However, this would entail absolutely no difference between a stationary gas having pressure p and a moving gas with pressure p and velocity V. Indeed, there is a substantial difference between the two cases: the "thermodynamic potentials" of the two gases, or their total enthalpies,* are different. If the gas could be brought down from its velocity V to 0 while fully converting its motion into pressure, it would possess a higher pressure than its companion with zero velocity.

* Recall the concept of enthalpy, the state variable given by $h_T = e + p/\rho + V^2/2$, where e is the internal energy of the gas, ρ is the density of the gas, and V the macroscopic velocity at which it is moving.




FIGURE 5.5 Gaseous cloud moving with macroscopic velocity V. For a stationary observer {x, O, y}, the cloud moves with V. An observer in motion with the gas {x′, O′, y′} sees the single particles moving with random velocity contributions in any of the three directions.

This procedure is referred to as an "isentropic and monoenergetic deceleration," and the pressure that the gas would assume once isentropically (with no losses) decelerated down to 0 is defined as "total pressure." In symbols,
$$p_{\mathrm{tot}} = p_{\mathrm{stat}} + p_{\mathrm{dyn}} \quad (5.14)$$

Equation 5.14 states that the total pressure, measured by isentropically slowing down the gas to zero velocity, is equal to the pressure that the moving cloud of gas possesses plus a dynamic pressure contribution accounting for its motion. At this point, a mathematical expression for pdyn is needed. Recalling some basic concepts of aerodynamics [7] and considering an inviscid gas flow with no body forces having velocity V = {u, v, w}, the momentum equation can be written as
$$\begin{cases}
-\dfrac{\partial p}{\partial x} = \rho\dfrac{Du}{Dt} \;\rightarrow\; \rho\left(\dfrac{\partial u}{\partial t} + u\dfrac{\partial u}{\partial x} + v\dfrac{\partial u}{\partial y} + w\dfrac{\partial u}{\partial z}\right)\\[2ex]
-\dfrac{\partial p}{\partial y} = \rho\dfrac{Dv}{Dt} \;\rightarrow\; \rho\left(\dfrac{\partial v}{\partial t} + u\dfrac{\partial v}{\partial x} + v\dfrac{\partial v}{\partial y} + w\dfrac{\partial v}{\partial z}\right)\\[2ex]
-\dfrac{\partial p}{\partial z} = \rho\dfrac{Dw}{Dt} \;\rightarrow\; \rho\left(\dfrac{\partial w}{\partial t} + u\dfrac{\partial w}{\partial x} + v\dfrac{\partial w}{\partial y} + w\dfrac{\partial w}{\partial z}\right)
\end{cases} \quad (5.15)$$

where p and ρ are gas pressure and density, while the material derivative D/Dt is defined as $\partial/\partial t + V\cdot\nabla$. Considering the gas in steady motion for simplicity and multiplying by the infinitesimal elements dx, dy, and dz, the three equations read
$$\begin{cases}
-\dfrac{1}{\rho}\dfrac{\partial p}{\partial x}\,dx = u\dfrac{\partial u}{\partial x}\,dx + v\dfrac{\partial u}{\partial y}\,dx + w\dfrac{\partial u}{\partial z}\,dx\\[2ex]
-\dfrac{1}{\rho}\dfrac{\partial p}{\partial y}\,dy = u\dfrac{\partial v}{\partial x}\,dy + v\dfrac{\partial v}{\partial y}\,dy + w\dfrac{\partial v}{\partial z}\,dy\\[2ex]
-\dfrac{1}{\rho}\dfrac{\partial p}{\partial z}\,dz = u\dfrac{\partial w}{\partial x}\,dz + v\dfrac{\partial w}{\partial y}\,dz + w\dfrac{\partial w}{\partial z}\,dz
\end{cases} \quad (5.16)$$

FIGURE 5.6 Streamline paths for the integration of the flow total pressure.

Equation 5.16 is further integrated on a selected path. The integration has to be carried out on an arbitrarily chosen path; for simplicity, a flow path called a "flow streamline" is chosen. The flow streamline is a path with the property of being at every time instant tangent to the velocity vector of the flow. This particular flow path is chosen because it ensures conservation of certain kinetic flow properties that will be explained in the next paragraph.
The tangent function in a steady reference system with x, y, z components is

ìu dz = w dx
í (5.17)
î v dx = u dy
The second relation, for instance, can be obtained by simple geometrical reasoning on Figure 5.6. Substituting the streamline condition in Equation 5.16, rewriting the contributions u du as d(u²)/2, etc., and summing up the three equations, the following can be obtained:
$$\frac{1}{2}\,d\left(u^2 + v^2 + w^2\right) = -\frac{dp}{\rho} \;\rightarrow\; dp = -\rho V\,dV \quad (5.18)$$

The previous equation is the well-known Euler equation for steady inviscid flows (see Chapter 1). With another integration along the streamline path, and assuming an incompressible fluid, the final conservation expression reads
$$\int_{p_1}^{p_2} dp = -\int_{V_1}^{V_2}\rho V\,dV \;\rightarrow\; p_1 + \frac{1}{2}\rho V_1^2 = p_2 + \frac{1}{2}\rho V_2^2 \quad (5.19)$$

Equation 5.19 is also referred to as Bernoulli's equation for incompressible flows. It relates the pressure to the kinetic status of the fluid at two different points of a streamline (Bernoulli's equation can be generalized, for isentropic or conservative flows, to any path). As can be seen, the contribution of the "static" pressure of the gas is combined with the "dynamic/kinetic" contribution (note that (1/2)ρV² possesses units of pressure as well) and conserved through the medium. As Equation 5.20 shows, when the flow is isentropically brought to rest (V = 0), the contribution due to the fluid pressure plus the kinetic energy is converted into the so-called total pressure:
$$p_{\mathrm{tot}} = p_{\mathrm{stat}} + \frac{1}{2}\rho V^2 \quad (5.20)$$

Bernoulli's principle, at the basis of many pressure measurement devices, exploits the conservation characteristics of the flow. However, going back to the original definitions, it is always possible to define a total, a static, and a dynamic pressure, independently of what the flow characteristics are. This of course entails that along a flow streamline ptot is constant, while in the presence of viscous dissipation ptot decreases.
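A minimal sketch of Equation 5.20 in Python (with illustrative values, not from the original text) shows how a measured difference between total and static pressure is inverted for the flow velocity in the incompressible limit:

def velocity_from_dynamic_pressure(p_tot, p_stat, rho=1.225):
    # Invert Eq. 5.20, p_tot = p_stat + 0.5*rho*V**2, for V
    # (incompressible flow; rho defaults to sea-level air density).
    return (2.0 * (p_tot - p_stat) / rho) ** 0.5

# Example: a 613 Pa dynamic pressure in sea-level air
print(velocity_from_dynamic_pressure(101938.0, 101325.0))  # ~31.6 m/s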

5.2 Direct pressure measurement devices

Devices for direct pressure measurements are transducers that directly read pressure either from a force acting on a known area or from the displacement of a column/layer of fluid due to gravity (Equation 5.1 or Equation 5.6). These are usually considered direct pressure transducers since they do not use an intermediate loaded material with a deformation pattern that is a function of the applied pressure, for example, membranes and Bourdon tubes. In several direct pressure measurement devices, the pressure acting on the reference fluid is usually balanced by extra weights or by additional columns of fluid.

Deadweight gauges
Deadweight gauges are fairly accurate and reliable instruments for steady pressure measurements. They can offer accuracy and repeatability within 0.05% of full range and seldom need recalibration.
The first two configurations in Figure 5.7 show two typical devices constituted by a balancing weight of known mass and by a calibrated spring, which provides a reaction force. The pressure p to be measured is applied to the bottom of the device, creating a pressure-tight chamber exposed to a piston. In several constructions, the piston protrudes into the chamber to allow the uniform pressure action to automatically align the piston in the chamber itself. The pressure p acting on the piston area A1 determines a force F, which can be balanced either by the reaction of a calibrated spring or by a known mass (in the gravitational field g) as
$$p = \frac{mg}{A_1}, \qquad p = \frac{k(x - x_0)}{A_1} \quad (5.21)$$

FIGURE 5.7 Schematic of a deadweight gauge: standard configuration on the left, with preloaded spring in the middle, and differential configuration on the right.

FIGURE 5.8 Deadweight gauge with multiple accesses for pump and valve calibration.

In Figure 5.7, a third, differential configuration is shown, also called a differential manometer, where the effective area is A = A2 − A1. It can be noted that the device can work either via calibration of the known masses m meant to balance a certain pressure, or by the use of a preloaded spring, which gives the possibility of reading the displacement x − x0 on a graduated scale, directly relating it to pressure. Due to friction, which typically persists between the piston and the surrounding cylinder, these devices are not suitable for dynamic loading; however, depending on the tightness of the seal, these devices can be used up to 2000 bar. For the reader interested in the dynamic response of such a device, it is recommended to read Reference 8, which shows how the system can be approximated with a second-order response function.
Deadweight testers can also be used together with calibrated weights for testing pumps or pistons. In Figure 5.8, an example of such an application is presented. A pumping device is activated to provide a supply pressure balanced by the calibration weight on the second column. Once equilibrium is reached at a certain height with a known mass m, the indicated pressure can be read on the manometer in the first column. The configuration can indeed be reversed by allowing the pump to be calibrated/inspected once a setting pressure is chosen via the manometer and the calibrated weight.
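As a short numerical sketch of Equation 5.21 (with illustrative, not manufacturer, numbers):

def deadweight_pressure(mass_kg, piston_area_m2, g=9.80665):
    # Gauge pressure balanced by a known mass on a piston (Eq. 5.21, left).
    return mass_kg * g / piston_area_m2

def spring_pressure(k, x, x0, piston_area_m2):
    # Gauge pressure balanced by a calibrated spring (Eq. 5.21, right).
    return k * (x - x0) / piston_area_m2

A1 = 1.0e-4                                      # a 1 cm^2 piston (assumed)
print(deadweight_pressure(5.0, A1))              # ~4.9e5 Pa (~4.9 bar) for 5 kg
print(spring_pressure(2.0e4, 2.45e-3, 0.0, A1))  # the same reading from a spring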

Manometers and barometers
The measuring principle of manometers and barometers is based on the displacement of a column/layer of fluid of known density, as indicated in Equation 5.6. The height of the column of liquid is typically indicated by a graduated scale or, in modern devices, transduced into a digital signal. Depending on the pressure range to be measured, different fluids can be employed (from alcohol to mercury), with densities corresponding to different column heights.
One of the most common configurations is the U-tube manometer, where two different pressures are applied at the ends of a tube with a specific reference fluid inside (Figure 5.9a). Considering two different applied pressures p1 and p2 on a fluid of density ρ, it can be written from momentum balance that
$$p_2 = p_1 - \rho g\,\Delta h \quad (5.22)$$

where Δh is the differential height between the two tube ends. A more interesting configuration can be built as in Figure 5.9b, called the "inclined manometer," inclined at an angle α < 90° with respect to the horizontal direction. The difference with respect to the previous configuration is that, for a given fluid height h, the effective distance traveled by the fluid along the tube is h/sin(α), therefore larger than h itself. Since the graduated scale is typically placed on the tube, the sensitivity of the system can be changed by varying the inclination α of the tube itself.


FIGURE 5.9 (a) U-tube and (b) inclined manometers.

in the fact that once given a certain luid height h, the effective distance traveled in the tube
by the luid is h/sin(α), therefore larger than h itself. Since the graduated scale is typically
placed on the tube, the sensitivity of the system can be changed by varying the inclination α
of the tube itself.
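A brief sketch of Equation 5.22 and of the inclined-tube sensitivity gain (with illustrative numbers, a water column read at 30° inclination, assumed here for demonstration):

import math

def u_tube_dp(delta_h, rho=1000.0, g=9.80665):
    # Pressure difference read from a U-tube column height (Eq. 5.22).
    return rho * g * delta_h

def inclined_scale_travel(delta_h, alpha_deg):
    # Distance traveled along the inclined tube for a column height delta_h:
    # h / sin(alpha), i.e., the sensitivity gain of the inclined manometer.
    return delta_h / math.sin(math.radians(alpha_deg))

dh = 0.05                               # 5 cm of water column (assumed)
print(round(u_tube_dp(dh), 1))          # ~490.3 Pa
print(inclined_scale_travel(dh, 30.0))  # 0.1 m of scale travel: doubled sensitivity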
An interesting application of the differential pressure manometer, used mainly in fluid dynamics, is the "Pitot tube." The device is constituted by a tube with two openings connected to a differential manometer. Depending on the complexity of the device, the two openings can be used to measure two absolute pressures or the differential pressure between the two. The system is schematically depicted in Figure 5.10, with the two openings at different locations.
The fluid pressure is measured at two different locations, one having an opening perpendicular to the flow direction and one having an opening parallel to it. While the fluid particles approaching the opening aligned with the flow direction are supposed to slow down to zero velocity in the tube (supposedly an isentropic compression) until they recover their stagnation or total pressure, the flow that approaches the other opening perpendicularly is supposed to exert only its static pressure contribution.

Total Static
pressure pressure
port port
V dA2

dA1 A2 Static
A1 pressure
V Total
pressure

FIGURE 5.10 Schematic of a Pitot tube used to measure the flow velocity. The measurement system uses a differential manometer to determine the dynamic pressure of the fluid, which is related to the fluid velocity by Bernoulli's principle or by compressibility laws.

Therefore, the connection of the two ends of the Pitot tube to a differential manometer allows determining the dynamic pressure of the flow, as from Equation 5.20. The device is usually meant to measure the velocity of the flow by using the incompressibility assumption in Equation 5.20, but in several other applications it is used in simpler configurations to measure either the static or the total pressure contribution.
Since the device works with a particular orientation of the two sensing surfaces, it is quite crucial to respect the canonical alignment of the instrument with the flow. If this requirement is not met, a correction factor can be applied to recover the actual flow pressure. The effects of compressibility can also be accounted for, either using the isentropic equation as a function of the Mach number of the flow,
$$\frac{p_{\mathrm{stat}}}{p_{\mathrm{tot}}} = \left[1 + \frac{(\gamma - 1)}{2}M^2\right]^{-\gamma/(\gamma-1)} \quad (5.23)$$

or using the Rayleigh formula for supersonic Mach numbers and perfect gases,
$$\frac{p_{\mathrm{tot,Pitot}}}{p_{\mathrm{tot,flow}}} = \left(\frac{\gamma+1}{2}\right)^{(\gamma+1)/(\gamma-1)}\left[\frac{2M^{2\gamma}}{2\gamma M^2 - (\gamma - 1)}\right]^{1/(\gamma-1)} \quad (5.24)$$
where
M is the flow Mach number
γ is the specific heat ratio of the gas in which the instrument is immersed
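For the subsonic regime, Equation 5.23 can be inverted for the Mach number in closed form; the Python sketch below assumes γ = 1.4 and illustrative pressures (not values from the text).

def mach_from_pressures(p_stat, p_tot, gamma=1.4):
    # Invert Eq. 5.23 for M (subsonic, isentropic flow assumed).
    ratio = (p_tot / p_stat) ** ((gamma - 1.0) / gamma)
    return (2.0 * (ratio - 1.0) / (gamma - 1.0)) ** 0.5

# Example: p_tot/p_stat of about 1.276 corresponds to M ~ 0.6
print(mach_from_pressures(100000.0, 127600.0))

For supersonic conditions, Equation 5.24 has no closed-form inverse for M and is typically solved iteratively.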

An interesting application of the Pitot tube is the pressure-type airspeed-altitude system of the X-15 airplane, operating from subsonic to supersonic flight [8]. The device can be installed either in a nose boom or in a ball nose (Figure 5.11).
Pilot information about airspeed and altitude is usually provided by pressure sensors, especially important during landing. However, in the case of supersonic cruise speed, a shock wave is formed in front of the aircraft, creating calibration errors for both Mach number and angle of attack effects. In Figures 5.12 and 5.13, such corrections are presented after calibration of the systems in the X-15 aircraft by NASA. While Figure 5.12 shows the first and second Pitot probe designs for the aircraft, Figure 5.13 presents the final Mach effects given a certain static pressure coefficient.

FIGUre 5.11 X-15 photograph with Pitot installed in a ball nose (a) and in a nose boom (b). (Taken from
Terry, J.L. and Lannie, D.W., Calibration and comparisons of pressure-type airspeed-altitude systems of the
X-15 airplane from subsonic to high supersonic speeds, NASA technical note D-1724, National Aeronautics
and Space Administration, Washington, DC, 1963. With permission of NASA.)


FIGURE 5.12 Variation of subsonic total pressure errors with angle of attack for the basic and new probes. (Taken from Terry, J.L. and Lannie, D.W., Calibration and comparisons of pressure-type airspeed-altitude systems of the X-15 airplane from subsonic to high supersonic speeds, NASA technical note D-1724. National Aeronautics and Space Administration, Washington, DC, 1963. With permission of NASA.)


FIGURE 5.13 Mach number associated with various values of static pressure coefficient. (From Terry, J.L. and Lannie, D.W., Calibration and comparisons of pressure-type airspeed-altitude systems of the X-15 airplane from subsonic to high supersonic speeds, NASA technical note D-1724. National Aeronautics and Space Administration, Washington, DC, 1963.)

The combined effect of these two main parameters is rather important, especially during aircraft maneuvering at relatively high speed/altitude.

McLeod gauge
The McLeod gauge is an instrument used to measure very low pressures, down to 10⁻⁴ Pa. The device employs a known reference gas entrapped in a small volume to determine the unknown pressure p1 in Figure 5.14. The system is typically constituted by two columns C1 and C2, both connected to a piston via a reservoir of fluid of known density ρ (mercury and alcohol are the most commonly employed fluids). The device is initially connected so as to allow both columns of fluid to reach equilibrium at the pressure p1. By small adjustments, the reference fluid is brought to the cutoff point, without changing the pressure equilibrium. Once the piston is operated and pushed down, the reference fluid entraps a known volume of gas V2 at the pressure p2, initially equal to the original pressure p1. The piston keeps on compressing the gas in the volume, until the liquid reaches the reference 0 on the first column C1. At the same time, the pressure of the trapped gas in C2 increases much more rapidly than in C1. Once the fluid in the first column C1 has reached the reference value 0, the difference in height between the two columns gives an indication of the difference in pressure between the new p1* (which is usually assumed, with good approximation, to be equal to the original p1) and the final pressure p3 in C2. The final volume of the trapped gas is h·A, where h is the height of the gas in the second column C2 and A is the cross-sectional area. The initial volume of the trapped gas is known and equal to the volume in the first column C1 from the cutoff point. Therefore, by using the ideal gas law, it can be written that

p1V2 = p3(Ah),   p1V1 = p1*(V1 − VM)   (5.25)

Exploiting the difference in heights between the two fluids, the final expression for the
originally unknown pressure p1 can be derived:

p3 = p1* + ρgΔh = p1V1/(V1 − VM) + ρgΔh = p1V2/(Ah)

p1 = ρgΔh/[V2/(Ah) − V1/(V1 − VM)]   (5.26)


FIGUre 5.14 McLeod gauge scheme, based on the trapping mechanism of a reference fluid
brought to the cutoff point to enclose a precise volume of gas, the pressure of which is measured
after isothermal compression.

from which the pressure p1 is directly linked to the different volumes. It has to be noted that
in many applications the remaining volume VM in C1 is neglected by taking V1 − VM ≈ V1,
an approximation especially valid for low pressures p1.

5.3 Indirect pressure measurement devices

Among the vast range of devices meant for indirect pressure measurements, a few examples
that do not directly transduce the force exerted by the fluid on a known area are going to be
discussed. The following examples indirectly exploit the properties of certain materials and
structures when subjected to an external pressure. In the present section, attention will be
given to typical elastic transducers, namely, diaphragms and Bourdon tubes. At the end of the
section, the basic principles of strain gauges will be introduced; although sensitive to pressure
stresses, they are mainly used for force determination.

elastic transducers Elastic transducers rely on the deformation of a known structure made of an elastic mate-
rial when loaded by pressure. Many formulas are available in the literature to compute the
deflection of a known structure under a uniform pressure distribution; however, three special
cases will be considered here for conciseness, namely, membranes (or diaphragms), bellows,
and Bourdon tubes. The main deflection of these elements can either be transduced to a digital
readout signal or directly indicated by a pointer on a graduated scale.

Bourdon tubes Bourdon tubes are commonly used as indicators in many simple valve manometers or as
barometric pressure indicators. Their name comes from their developer E. Bourdon, who
made them famous from 1849 onward. A conventional Bourdon tube consists of a curve-shaped
hollow tube (several configurations exist, including C-shaped, helical, spiral, and twisted
tubes), mostly made out of brass, copper, or bronze. The hollow tube is firmly clamped
on one side and internally connected to the unknown pressure. The other end of the tube is
left free to move, connected to a deflecting mechanism such as an indicator or a scale pointer. The
cross section of the tube is asymmetric, typically resembling an ellipse, while its elongation is
generally twisted in one direction. Once the tube is pressurized, the uniform internal pressure
determines an uneven stress distribution that produces a straightening force on the tube. The
indicator connected to the free end follows the imposed deformation, displacing itself on a
graduated scale, originally calibrated as a function of the applied pressure. An exploded view of
a Bourdon tube is reported in Figure 5.15.

Diaphragms Diaphragms are thin structures held tight by an enclosure separating two media with differ-
ent pressures. Thin plates and membranes are conventionally employed for the purpose. These
structures are used as pressure sensors by relating their elastic deformation to the applied pressure.
The deformation is typically nonlinear, with stretching of the material linked to stiffening
effects, which has to be taken into account when dealing with the device. Thin plates are
typically used for high pressures, while membranes have a stronger sensitivity to pressure
changes. Several empirical formulas are found in the literature that relate the pres-
sure difference across the membrane to the displacement at the center xc.
Given a thin circular plate of radius R and thickness s, with Young's modulus E and
Poisson's ratio ν, subjected to a uniform Δp, the deflection at the center xc can be written
as an expansion of the type

Δp = [16Es³/(3R⁴(1 − ν²))] · Σ(i=1..N) Ci xcⁱ   (5.27)

where [9] reports a third-order expansion with C1 = 1/s, C2 = 0, C3 = 0.488/s³.
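A quick numerical check of Equation 5.27 with the third-order coefficients above; this is a sketch only, but the chosen values happen to reproduce the configuration of Problem 5.8 at the end of the chapter:

```python
def plate_dp(xc, R, s, E, nu):
    """Pressure difference (Pa) across a clamped circular plate for a center
    deflection xc (m), using the third-order expansion of Equation 5.27
    with C1 = 1/s, C2 = 0, C3 = 0.488/s**3 [9]."""
    k = 16.0 * E * s**3 / (3.0 * R**4 * (1.0 - nu**2))
    return k * (xc / s + 0.488 * (xc / s) ** 3)

# Steel plate: R = 10 cm, s = 0.1 mm, center deflection xc = 1 mm
print(f"{plate_dp(xc=1e-3, R=0.10, s=1e-4, E=200e9, nu=0.3):.2e} Pa")  # ~5.8e6 Pa (58 bar)
```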


FIGUre 5.15 Drawing to show the principle of a Bourdon tube pressure gauge. (From creator
DStaiger, licence Attribution-Share-Alike 3.0 Unported, no changes made to the picture.)

Different design solutions can be found to optimize the sensitivity of the membrane with
respect to the nonlinear deflections, which are a function of xc/s. From fatigue analysis, it can be
demonstrated that the maximum stress should not exceed s²/R², the maximum stress allowed by
the material (unless the material is brought to its limit tensile strength). Membrane deflections
are generally larger and show a severely nonlinear response. One of the best-known formulas,
valid for Poisson's ratios ≈ 0.3, is

Δp = 3.58 (Es³/R⁴) · Σ(i=1..N) Ci xcⁱ   (5.28)

where the major contribution is given by C3 = 1. The maximum deflection can either be
measured by a displacement transducer or obtained by using the membrane as a plate of a
condenser, which relates its relative displacement to a voltage. In a conventional membrane, the stress
distribution consists of both compression and tension regions, always coexisting in the same
membrane (Figure 5.16a). These regions are typically employed to create a system with
temperature compensation, by multiple strain gauges mounted on a Wheatstone bridge, as
explained in the following sections. As a modification of the diaphragm configuration, bellows
act differently, displacing the material in the same direction as the stress; they are generally
complemented by a number of springs, which balance the displacement of the
material (Figure 5.16b).

Strain gauges Strain gauges are conventional measurement devices used to obtain the stress distri-
bution in a certain material, by directly measuring the deflection of a thin piece of additional
material bonded to the first. Strain gauges are employed for the measurement of a specific
stress distribution, which is related via constitutive relations to forces or loads in the mate-
rial itself. What is actually measured is the transducer deformation, which is related to


FIGUre 5.16 Deformation of a membrane (a) and bellows (b) under pressure loading. The pressure loading typically
results in a nonlinear displacement of the structure.

the stress/force acting on the device when the effective cross-sectional area and Young's
modulus of the material are known. One of the most important hypotheses is the one-to-one
connection of the thin metallic wire to the loaded material. The material deforms, and so does
the metallic wire connected to it, changing its own electric resistance with its deformation.
Considering a long straight metallic wire of length L and area A, the change of resistance
due to strain is derived from the well-known expression of the wire resistance R:

R = ρE L/A   (5.29)

where ρE is the electric resistivity of the material. From the previous expression, the variation
of electric resistance due to change in length and shape is obtained from finite differentiation,
which yields

ΔR/R = ΔL/L − ΔA/A + ΔρE/ρE = ΔL/L − 2(ΔD/D) + ΔρE/ρE   (5.30)

where D is the diameter of the wire. The ratio between the relative change of diameter and
of length is proportional to Poisson's ratio; therefore, a gauge factor G can be defined as

G ≝ (ΔR/R)/(ΔL/L) = 1 + 2ν + (ΔρE/ρE)/(ΔL/L)   (5.31)

As evident from the previous equation, the gauge factor depends only on the characteristics of
the material and not on the wire geometry. The gauge factor of metallic strain gauges typically
varies in the range from 1.8 to 2.6, while strain gauges of the semiconductor type can reach
G = 100–150. Most metallic strain gauges are of two types: “unbonded” and “bonded.”
An unbonded strain gauge is conventionally used for measuring strain on moving parts, while
the bonded ones are permanently fixed to the deforming material. The most commonly used
bonded strain gauges are manufactured via a photoetching technique on a metal foil, which
becomes the active element as the two ends of the strip are connected to the wiring. Multiple
strain gauges can be assembled to measure strain in more directions or to compensate for
temperature variations. Microelectromechanical pressure sensors are nowadays becoming
increasingly popular for pressure measurements, as they are essentially a small silicon
diaphragm with four piezoresistive strain gauges mounted on it.
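As an order-of-magnitude sketch of Equation 5.31, the resistance change of a metallic gauge can be evaluated as follows (the 120 Ω resistance and 500 microstrain are illustrative values only):

```python
def resistance_change(R0, strain, gauge_factor=2.0):
    """Resistance change dR = G * (dL/L) * R0, from the definition of the
    gauge factor in Equation 5.31."""
    return gauge_factor * strain * R0

# A 120-ohm metallic gauge (G ~ 2) under 500 microstrain
dR = resistance_change(R0=120.0, strain=500e-6)
print(f"dR = {dR * 1e3:.0f} mOhm")   # 120 mOhm: tiny, hence the bridge circuits below
```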


FIGUre 5.17 Wheatstone bridge and its installation with a strain gauge. Wheatstone bridges
can be used to compensate for temperature biases, by exploiting the multiple compression/
tension states available in the material. The different strain gauges can be connected directly as
resistances R.

Wheatstone bridges are typically employed to transduce the material deformation into a
readable voltage output; temperature compensation is obtained by mounting four strain gauges
as in Figure 5.17. Suppose that one of the strain gauges is connected to a bridge with three
fixed arms: with an unexpected temperature rise, the strain gauge will change its resistance,
unbalancing the bridge even without the presence of strain. If two identical gauges are
mounted in opposite compression/tension states on adjacent arms of the bridge,
a perfect temperature compensation is achieved.
A secondary advantage of using this configuration is the increased sensitivity of the system,
once the nominal voltage is measured:

V0 = (V1 + V3 − V2 − V4) = (E/2)[R1/(R1 + R2) + R3/(R3 + R4) − R2/(R1 + R2) − R4/(R3 + R4)]
   = (EG/4)(ε1 + ε3 − ε2 − ε4)   (5.32)

where
Vi are the potentials at the four bridge nodes
εi are the strains of the gauges in the bridge
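A minimal sketch of the linearized form of Equation 5.32, showing how a common thermal strain cancels between adjacent arms (the strain values are made up for illustration):

```python
def bridge_output(E, strains, G=2.0):
    """Linearized Wheatstone bridge output of Equation 5.32:
    V0 = (E*G/4) * (e1 + e3 - e2 - e4)."""
    e1, e2, e3, e4 = strains
    return E * G / 4.0 * (e1 + e3 - e2 - e4)

# Gauges 1 and 2 on adjacent arms see +eps and -eps of mechanical strain,
# plus the same thermal strain eT, which drops out of the difference.
eps, eT = 300e-6, 50e-6
print(bridge_output(E=5.0, strains=(eps + eT, -eps + eT, 0.0, 0.0)))  # 1.5e-3 V
```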

emission-based In several cases, there might be the need to extend the locally measured pressure obtained by
techniques: one of the strain gauges into a continuous pressure distribution, perhaps over a surface. A few
pressure- techniques use the natural light emission of a luminescent dye dispersed in a gas-permeable
sensitive paint binder. Typical applications use oxygen-permeable binders, which create a paint layer on top of the


FIGUre 5.18 Schematic of a typical PSP measurement procedure.

surface to be measured. The dye is excited by ultraviolet light and emits photons by return-
ing to the ground state. A fraction of these photons is typically absorbed by the oxygen
molecules in air, while the rest is collected by a camera or a receptor. In locations where the
pressure increases, a statistical increase in the number of oxygen molecules will be
present, causing more photons to be absorbed and fewer to be collected by the camera (Figure 5.18).
Two main hypotheses allow relating the partial pressure of oxygen to the air pressure:
1. The molar fraction of oxygen in air is constant.
2. The oxygen molecule, after photon absorption, returns to its ground state without
further emission of photons.
The common law that relates the ratio between the maximum luminescence intensity in the
absence of oxygen I0 and the actual emitted intensity I is the Stern–Volmer equation:

I/I0 = (1 + xO2 CSV CH p)⁻¹   (5.33)

where
p is the unknown pressure, related to the oxygen pressure by the mole fraction xO2
CSV is the Stern–Volmer constant
CH is Henry's constant, used to relate the concentration of oxygen in the binder to the
mechanism of Stern–Volmer quenching

Due to the temperature dependency of the previous constants, the Stern–Volmer equa-
tion is written in a more useful manner by polynomially expanding the ratio of intensities
of the two states in terms of the ratio of the respective pressures. This removes nonuni-
form illumination effects and avoids the need to use the maximum intensity, which is typically
difficult to measure.
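In practice, the expanded calibration often takes a polynomial ratio-of-ratios form such as Iref/I = A + B(p/pref); the sketch below inverts a two-coefficient version of it. The coefficients A and B are invented for illustration and would normally come from an in situ calibration of the painted model:

```python
import numpy as np

A, B = 0.18, 0.82   # hypothetical calibration coefficients (A + B = 1 at p = p_ref)

def pressure_from_ratio(ratio, p_ref=101_325.0):
    """Pressure (Pa) from the intensity ratio I_ref/I, inverting the
    polynomial Stern-Volmer calibration I_ref/I = A + B*(p/p_ref)."""
    return p_ref * (ratio - A) / B

ratios = np.array([0.95, 1.00, 1.08])   # I_ref/I at three pixels
print(pressure_from_ratio(ratios))      # ~[95147, 101325, 111210] Pa
```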
A typical measurement system is composed of a painted model, a suitable illumination
source, and a photodetector or a camera sensitive to the emitted light. Collimated light
can be obtained from lasers, ultraviolet/light-emitting diode/xenon lamps, etc.
One of the most important procedures in pressure-sensitive paint (PSP) measurements is the
calibration of the system, needed to automatically convert the camera reading into pressure
(Figure 5.19). Many imaging techniques are applied to correct for 3D imaging, background
noise, and intensity calibration, several of them within the same framework as the PIV image
processing ones (see Chapter 10). More information about those can be found in [10].

5.4 Dynamic pressure measurements

The discussion has so far been carried out by considering the unknown pressure as a steady
variable, but in many aerodynamic cases the temporal fluctuations of pressure might be of
interest. It is then important to realize that the time response of the pressure sensor (consisting
of the pressure device together with its measurement chain, including connecting lines and
tubing) has to be taken into account to avoid errors in the acquisition line. The flow pres-
sure can typically vary due to an unsteady random behavior (turbulence), due to a sinusoidal


FIGUre 5.19 (See color insert.) Aircraft model in the DNW-HST tunnel (a) and PSP results (b):
pressure distribution on a completely coated model, used to calculate forces and moments; results
from the German Aerospace Center, Goettingen (Germany).

repetitive movement, or due to a sudden step-wise change. In general, even under ideal condi-
tions (i.e., in the absence of measurement noise), depending on the measurement device, the
measured pressure pmeas(t) is different from the real p(t).
The measurement device needs to be studied either in the time or in the frequency domain,
and its response characteristics to a predetermined loading have to be derived. In the pres-
ent section, a few details will be given on the different system response possibilities under a
sudden step change of the input variable, with particular emphasis on second-order systems.
More information on different applications can be found in [8]. In order to continue with
the temporal characterization of pressure devices, it has to be considered that the response of
a typical second-order system possessing both inertial and elastic characteristics is
given by the second-order transfer function H. This function determines the output response
of the system once an input time-dependent variable p(t) is applied to the system:

H(s) = ωn²/(s² + 2ζωn s + ωn²)   (5.34)

where
s is a complex number (the Laplace variable)
ωn is the natural frequency of the system
ζ is the damping coefficient

The transfer function of the system is often referred to in the frequency domain as

H(ω) = ωn²/(ωn² − ω² + 2jζωnω) = [ωn²/√((ωn² − ω²)² + 4ζ²ωn²ω²)] e^(jφ),
tan φ = −2ζωnω/(ωn² − ω²)   (5.35)

The equation shows that three possible system typologies can be found (see Figure 5.20). The
most important ones are the overdamped one with ζ > 1, the critically damped configuration
with ζ = 1, and the underdamped typology with ζ < 1. The particular definitions rely on the
type of response that the system exhibits once subjected to an input forcing function,
for example, a step function (Figure 5.21). The response of the instrument is dictated by the
damping coefficient, a function of the inertial and elastic characteristics of the material and
of the particular design of the device itself. It is interesting to note that, depending on the


FIGUre 5.20 Amplitude and phase response of a second-order transfer function with respect to different
damping ratios.

FIGUre 5.21 Amplitude response in time of a second-order system to a step function. Depending
on the damping coefficient, the system can over- or underreact to the sudden change in time of
the input.

damping ratio, a resonance effect might be seen, where the response of the device to the input
is substantially amplified with respect to the input itself.
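For reference, the underdamped step response plotted in Figure 5.21 can be generated with a few lines. This is a generic sketch of the second-order system of Equation 5.34, not tied to any specific transducer:

```python
import numpy as np

def step_response(t, wn, zeta):
    """Unit-step response of the second-order system of Equation 5.34
    for the underdamped case (zeta < 1)."""
    wd = wn * np.sqrt(1.0 - zeta**2)            # damped natural frequency
    phi = np.arccos(zeta)
    return 1.0 - np.exp(-zeta * wn * t) / np.sqrt(1.0 - zeta**2) * np.sin(wd * t + phi)

t = np.linspace(0.0, 15.0, 500)
y = step_response(t, wn=1.0, zeta=0.8)
print(f"peak = {y.max():.3f}")   # ~1.015: a mild overshoot, cf. Figure 5.21
```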

resonant All mechanical structures have particular structural and inertial characteristics that allow them
transducers and to vibrate at their own natural frequencies. It is important to note that in most structures,
vibrating cylinder these frequencies are called “natural” in that they are not affected by the loading on the system
itself (force/pressure magnitude), but ideally depend on the structural characteris-
tics only. There are, though, some systems, such as membranes and strings, whose natural frequen-
cies are strongly dependent on the applied forces and pressures, due to stiffening effects given
by the load distribution. The bending stiffness of strings and membranes is conven-
tionally neglected, a good approximation when their thickness-to-length (or radius) ratio
is small. In this respect, it is possible to write a function of the kind

ωn = f(p, F, material)   (5.36)

where the natural frequencies of the structure become a function of the applied pressure p,
force F, and material characteristics. This is the case for some gas pressure transducers of very
high accuracy and stability constructed via the “vibrating cylinder” concept (Figure 5.22).
In this measurement system, a cylinder with thin walls is kept vibrating in one of its natural


FIGUre 5.22 Schematic of a vibrating cylinder configuration.

modes by a feedback loop constituted by amplifiers and pickup coils. Changes in the frequency
of oscillation in the selected mode are related to changes of pressure in the inner part of the
cylinder.
The vibrational modes of the cylinder can be selected by using flexible walls and stiff
ends, so as to have a one-to-one relation between frequency change and change in pressure.
A pickup coil and a forcing one are built inside the cylinder to stabilize the frequency modes.
The outside shell is typically kept at a reference pressure, while the inside is connected to the
unknown pressure. Nominal vibrating frequencies range between 5 and 15 kHz, depending on
the applied pressure. One of the main designs is based on the following equation:

ω² = [Eg/((1 − ν²)γc r²)] · [(1 − ν²)λ⁴/(n² + λ²)² + (s²/(12r²))(n² + λ²)² + (pr/(Es))(n² + λ²/2)],

λ = πr(m + 0.3)/L   (5.37)

where
ω is the natural frequency of the cylinder
L is the cylinder length
r is the cylinder mean radius
E is Young's modulus of the material
p is the pressure to be measured
s is the cylinder wall thickness
γc is the specific cylinder weight
ν is Poisson's ratio of the cylinder material
g is the gravitational acceleration
n and m are the circumferential and longitudinal mode numbers

Most transducers do not exceed 2–3 cm in cylinder length, with radii of about 1 cm. This
entails that, for frequencies as high as 5–10 kHz, the typical thickness of the active cylinder
becomes less than 0.1 mm. Several modifications of the material properties can be
made to compensate for temperature changes, based on the employment of nickel–iron
alloys.
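A numerical sketch of Equation 5.37, as reconstructed above, for an assumed steel-like cylinder; the dimensions and mode numbers are illustrative, and the result shows both the absolute frequency and its sensitivity to the applied pressure:

```python
import numpy as np

def cylinder_freq(p, r, L, s, E, nu, gamma_c, n, m, g=9.81):
    """Natural frequency (Hz) of a thin-walled cylinder under internal
    pressure, from Equation 5.37 as reconstructed above."""
    lam = np.pi * r * (m + 0.3) / L
    shell = ((1 - nu**2) * lam**4 / (n**2 + lam**2) ** 2
             + s**2 / (12 * r**2) * (n**2 + lam**2) ** 2)
    press = p * r / (E * s) * (n**2 + lam**2 / 2)
    w2 = E * g / ((1 - nu**2) * gamma_c * r**2) * (shell + press)
    return np.sqrt(w2) / (2 * np.pi)

# Steel-like shell: r = 1 cm, L = 2.5 cm, s = 0.1 mm, mode n = 4, m = 1
args = dict(r=0.01, L=0.025, s=1e-4, E=200e9, nu=0.3, gamma_c=7800 * 9.81, n=4, m=1)
print(cylinder_freq(p=0.0, **args), cylinder_freq(p=1e5, **args))  # ~12.4 and ~12.6 kHz
```

With these assumed values, the frequency falls within the 5–15 kHz range quoted above, and the shift of a few hundred hertz per bar is the quantity tracked by the feedback electronics.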
Other configurations rely on the use of vibrating membranes (or diaphragms) kept in vibra-
tion by positive feedback loops, again with a pickup and an amplifier. The sensor usually uses
a second diaphragm, as shown in Figure 5.23.
A pressure change at the diaphragm is felt as a change of the vibration mode in the material,
which is measured and related to the pressure in the chamber.


FIGUre 5.23 Schematic of a vibrating membrane device used for measurements of pressure
fluctuations.

Microphones, Microphones are transducers able to follow pressure fluctuations at high frequency, mainly
capacitor type known for sound-recording applications. The most common type is the capacitor one, shown
in Figure 5.24.
The main assumption in a microphone device is the existence of a uniform surround-
ing pressure pi, through which sound waves simply travel. This is strictly true for low-pressure
variations and sound pressure waves. A diaphragm acts as one of the ends of a capacitor built
into the device. Every displacement of the diaphragm is seen as a change in the voltage at the
capacitor and is thus digitally recorded. Some configurations have a perforated plate at the other
end of the capacitor, in order to make the diaphragm continuously displace fluid through the


FIGUre 5.24 Example of a Brüel & Kjær condenser microphone.




FIGUre 5.25 Preamplified response curves of Brüel & Kjær condenser microphones as a function of the frequency of the
input sound waves.

cavities, providing a damping force to the microphone. Due to a capillary aperture connecting
the internal membrane to the outside pressure, microphones are not able to measure absolute
pressure. This aperture allows the diaphragm to equalize slow variations of the outside pressure,
avoiding internal damage to the membrane. In the following lines, the second-order response
function for a microphone is derived (see Figure 5.25 for an example of a response curve). More
details on the dynamic behavior of second-order systems can be found in [8].
The gas contained in the microphone, with the aperture acting as a capillary leak, behaves as a
volume of gas under pressure. If the volume of fluid is small enough, it is possible to consider its
reaction as a whole, instead of modeling it as a medium through which waves travel. This allows
the volume V contained in the microphone to be considered as a mass-spring system, described
by a differential equation of the kind

I1 d²pV/dt² + I2 dpV/dt + I3 pV = f(pi)   (5.38)
dt dt

where
pV is the pressure in the volume
pi is the pressure in the surroundings

I1 is typically referred to as the inertia term (the mass in a mass-spring second-order system),
while I2 and I3 are, respectively, the damping and the elastic coefficients. Equation 5.38 can be
rewritten by considering a volume V with a capillary of length L, a fluid of viscosity μ, a bulk
modulus Bm = −V dp/dV, and a capillary diameter dc (Figure 5.26).
The pressure loss through a capillary connecting pV and pi, in a fluid where the inertia terms
are negligible compared to the viscous ones (a good approximation for the fluid in a microphone;
thus, the equation obtained neglects I1), can be written by using the steady laminar equation:

pV − pi = 32μLV/dc²   (5.39)


FIGUre 5.26 Schematic of the working principle of a condenser microphone together with its theoretical model on
the right.

where V is the fluid velocity through the capillary. The change in volume due to the capillary
flow changes the pressure in the fluid through the bulk modulus Bm:

Bm = −V dpV/dV = −V dpV/(AV dt) = −[4V/(πdc²V)] dpV/dt   (5.40)

where dt is the infinitesimal time change (note that in Equation 5.40 the leading V denotes the
gas volume, while the V in the denominator is the capillary velocity of Equation 5.39). Combining
the last two equations, the final expression of the kind of Equation 5.38 is derived:

Bm = −[128μLV/(πdc⁴(pV − pi))] dpV/dt  →  [128μLV/(Bm πdc⁴)] dpV/dt + pV = pi   (5.41)

The I2 coefficient is referred to as the time constant of the first-order system, and it depends
on the characteristics of the gas in the microphone and on the cavity dimensions. The
total force on the diaphragm is given by (pi − pV)AD = FD. From simple operator analysis, the
previous expression can be written as

τA DpV + pV = pi   (5.42)

where
τA is the time constant previously introduced, equal to 128μLV/(Bm πdc⁴)
D is the derivative operator in the Fourier space

Now, the operational relation that exists between the force applied to the diaphragm and the
pressure is given by

(FD/pV)(D) = τA D/(τA D + 1)  →  pV = Ae^(i(ωt+φ))  →  D = iω  →  (FD/pV)(iω) = τA iω/(τA iω + 1)   (5.43)

A nice result of treating the time derivative D as an operator is obtained when considering
sinusoidal inputs. In this case, the entire sinusoid can be written (by using the equivalence e^(iωt)
= cos(ωt) + i·sin(ωt)) in compact exponential form, while the operator acts as a multiplication
by iω. In this respect, Equation 5.43 allows studying the response of the cavity over a large
frequency range (remember ω = 2πν). For example, for ω → 0, FD/pV → 0, which means that
the capillary compensates for low-frequency changes in pressure, avoiding damage to the
membrane. So far in the discussion, only the response of the cavity has been treated, exclud-
ing the circuit and the inertia characteristics of the diaphragm. It can be demonstrated [8] that
the voltage at the microphone arms ΔV0, in the presence of a resistance R, an equivalent
diaphragm mass mD and stiffness KD, and an extra polarizing voltage E0, is

(ΔV0/pV)(D) = K D²/[(τA D + 1)(τE D + 1)(D²/ωn² + 2ζ(D/ωn) + 1)]

K = (KD − εAE0²/x0³),  τA = 128μLV/(Bm πdc⁴),  τE = εAR/x0   (5.44)

where
ε is the permittivity of the material between the arms
A is the equivalent condenser plate area
R is the resistance
x0 is the distance between the plates of the circuit in which the microphone constitutes the
condenser
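Using the form of Equation 5.44 reconstructed above, the normalized band-pass behavior can be evaluated numerically. The time constants and natural frequency below are illustrative placeholders, not the parameters of any real microphone:

```python
import numpy as np

def mic_response(f, tau_A, tau_E, fn, zeta):
    """Normalized magnitude of the Equation 5.44 transfer function,
    evaluated by substituting D -> i*omega (flat band scaled to ~1)."""
    D = 1j * 2 * np.pi * f
    wn = 2 * np.pi * fn
    H = D**2 / ((tau_A * D + 1) * (tau_E * D + 1)
                * (D**2 / wn**2 + 2 * zeta * D / wn + 1))
    return np.abs(tau_A * tau_E * H)

f = np.array([0.1, 10.0, 1_000.0, 20_000.0])      # Hz
print(mic_response(f, tau_A=0.5, tau_E=1e-3, fn=20_000.0, zeta=0.7))
# Rises through the band (capillary and circuit high-pass terms) and
# rolls off again near the diaphragm resonance.
```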

Another important characteristic of microphones, besides their frequency response, is the free-
field response (Figure 5.27).
As previously said, at high frequencies, microphone reflection and diffraction effects might
disturb the diaphragm, in that the pressure wave impinging on the material is distorted and
therefore no longer similar to the free-field one. For sufficiently low frequencies, microphone
reflections have rather negligible effects (due to the large wavelength compared to the micro-
phone size). However, at high frequencies, their effect might become stronger, and a direction-
ality calibration has to be performed by varying the angle of incidence of the incoming
pressure waves on the microphone.
The size of the microphone has an evident effect on its sensitivity and on the maximum
allowed range. Microphones can be of the capacitor type as well as of the piezoelectric one,
depending on the way the deformation of the diaphragm is transduced into electric signals.
The piezoelectric types have the piezoelectric material built into the diaphragm; due to the
proportionality of the deformation with the potential field created by the material, they do not
need a polarizing voltage, and therefore their circuit and device can be much smaller.

Inductive Inductive pressure transducers work similarly to piezoelectric or capacitor ones; the only dif-
and reluctive ference is in the transduction of the diaphragm deformation into a voltage signal. In an induc-
transducers tive transducer, in fact, the pressure difference between a measurement pressure p1 and the
reference p2 causes a diaphragm deformation, which consequently determines a change in the
self-inductance of a single coil (Figure 5.28).
In a reluctive transducer, there are usually multiple coils with a predetermined magnetic
coupling, which is imbalanced by the movement of the diaphragm. In an inductive transducer,
the moving material is typically a conductor, which is subjected to an induced electric field
every time it moves with respect to the magnetic field of a single coil. Due to the magnetic
coupling between the two coils in a reluctive transducer, an external alternating current excita-
tion is needed to determine the imbalance. The principle is similar to that of loudspeakers,
only reversed. In these configurations, single dynamic membranes do not respond linearly at
all sound and pressure variation frequencies. Many of these devices therefore combine the
signals derived from different membranes, especially when the pressure fluctuations have to be
measured accurately over a vast range of frequencies.

FIGUre 5.27 Free-field response of typical Brüel & Kjær condenser microphones with protection
grid: the correction (dB) to be added to the actuator response, as a function of frequency (kHz)
and angle of incidence.

5.5 Some aspects on measurement procedures

In the previous sections, a few examples of pressure transducers meant for the determination
of both steady and unsteady pressures have been discussed. With respect to unsteady pressure
fluctuations, the importance of the system response with respect to the input frequency of the
pressure fluctuations has been discussed. Up to now, the frequency at which the input pressure
varies has been assumed to be unknown; however, it is still rather unclear how to choose a
precise sensor once the characteristics of the pressure to


FIGUre 5.28 Inductive transducer schematic.

be measured are known. This becomes an important choice, given the vast availability of
brands and transducer types. In the present section, a few general but fundamental con-
cepts will be discussed that should help in choosing a single device for a specific
application. It has to be considered that most sensor manufacturers provide a first
categorization of their devices by dimension and frequency of application. Once the most suit-
able characteristics for the needed application are chosen, two main extra parameters become
relevant:
• The specifications of the exciting instrument: that is, the device that has to provide the
instrument with the power and the voltage to create an output
• The output characteristics of the transducer itself
These parameters of the measuring system typically define the main limiting factors in the
overall performance. The pressure transducer can be chosen by fitting the output pressure
requirements in the design phase. Once the input and the output characteristics match the
excitation and the recording system used, several other parameters have to be considered.
Following the terminology adopted by the Instrument Society of America, the characteristics
of the instrument to be taken into account are reported in the following.
Consider that the objective of the measurement campaign is to collect measurements of three
pressures p1, p2, p3, where p2 and p3 are respectively equal to 2p1 and 3p1. The three pressures
are measured over time and their variations are reported on the graph in Figure 5.29.


FIGUre 5.29 Drift effect with respect to a linear and a nonlinear response system.


FIGUre 5.30 Hysteresis effect on a device for measuring pressure.

As can be seen, there is a change in the measured pressure over time that is not caused by the
“measurand.” This phenomenon is called “drift,” defined as a (typically undesired) change in
output over a period of time that is not a function of the variable to be measured (Figure 5.29).
The drift can affect either the reference (zero) value or the slope of the pressure to be measured;
accordingly, it is called zero drift or sensitivity drift.
The sensitivity of a transducer or system is defined as the ratio between the change in the
value measured by the transducer and the change in the input measurand value, Δpmi/Δpi.
The sensitivity of a system is typically nonlinear (as seen for microphones) and varies with the
frequency of the input.
It is generally treated as an error if a proper calibration, taking into account the system
requirements, is not performed. The measurement system might also suffer from “hystere-
sis” (Figure 5.30); that is, the device is sensitive to the derivative of the variable to be measured,
which typically shows up as two different curves in ascending and descending measurement pro-
cedures. Several of the previous drifts are typically corrected through an appropriate calibration
correction, which is often supplied with the transducer. Hysteresis effects are instead more
difficult to deal with, since they are strongly dependent on the way the loading is applied to
the system itself.
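A short numerical sketch of how zero and sensitivity drift are extracted from calibration data; it uses the readouts of Table 5.2 (additional exercise 5AE.1 below) and reproduces the linear fit quoted in that exercise's hint:

```python
import numpy as np

masses = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])           # kg
readout = np.array([11_500, 56_000, 98_000, 181_000,
                    552_000, 1_015_000], dtype=float)         # Pa, Table 5.2

slope, intercept = np.polyfit(masses, readout, 1)
print(f"sensitivity = {slope:.0f} Pa/kg, zero offset = {intercept:.1f} Pa")
# -> sensitivity = 102855 Pa/kg, zero offset = 65.0 Pa
# The intercept is the zero drift; a slope differing from the nominal
# calibration line is the sensitivity drift.
```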

problems

5.1 A manometer is designed by a manufacturer to measure in the range 1–10 bar of pres-
sure. The manufacturer is able to create three different systems, respectively, having a
resolution (minimum difference of pressure that can be measured in the range) of
(a) 1/1000 of the full-scale value
(b) 1/10 of a psi
(c) 1/100 of in. Hg
Which system will have the best resolution (minimum value)? Give the answer in Pascal
and discretize the range mentioned earlier. Write the effective minimum and maximum
value for each range that will be recorded by each system with its own resolution.
(Sol. Third system, (a) 1e3 Pa:1e6 Pa; (b) 689.5 Pa: 999,775 Pa; (c) 33.86 Pa: 999,987.38 Pa)
5.2 Two different altimeters are manufactured with the use of a barometer. The manufac-
turer has to choose between Equation 5.10 and Equation 5.13. Plot the two curve profiles
in a range of pressure from p1 = 1 atm (at z1 = 0 m) to p2 = 0.7 atm. Assume a constant
temperature of 20°C for Equation 5.10 and a linear profile from T1 = 20°C (at z1 = 0 m)
to T2 = −50°C (at z2 = 11 km) for Equation 5.13. What is the largest variation of alti-
tude measured by the system in the range?
(Sol. The two curves: z2 = −c · log(p2/1.013e5) and
z2 = [293/(7/1100)] · [1 − (1 + 1e−8 · log(p2/1.013e5))⁻¹])

5.3 It is pretty cold in your home street today, but you were perfectly fine in the large square
beside it. You are sure that there is an extra wind of about 5 m/s blowing in
the street, unusually channeled by the buildings on the road. Compute the
pressure drop causing it in incompressible, inviscid, and steady conditions (assume the
air density from the ideal gas law at a temperature of 20°C). Calculate the wind speeds that
would cause pressure drops equal to 5, 10, and 20 Pa.
(Sol. 15 Pa, 2.9 m/s, 4.1 m/s, 5.8 m/s)
5.4 An inclined manometer is kept at a pressure difference of 0.1 bar. Compute the three densities
of the fluid needed to have a change of inclined height d, at an angle of 60 degrees, of
50/100/200 cm.
(Sol. 2356.5 kg/m³, 1178.3 kg/m³, 589.1 kg/m³)
5.5 A McLeod gauge has to be designed for really low-pressure applications. The first study
is made by determining the response of the system as a function of the final volume
Ah and of the dead volume VM. Plot the resulting pressure response as a function of the
two parameters. Assume V2 = V1 = 1 L, water as reference fluid at 20°C.
(Hint: Use Equation 5.26.)
5.6 A Pitot tube has been manufactured in the wrong way, with a static pressure hole at an
angle of 88° with respect to the needed 90°. The hole direction slightly points into the
flow direction, determining an extra component of dynamic pressure to be measured at
that location. Estimate the error in the dynamic pressure, and in the derived velocity,
for such a configuration if not properly calibrated.
(Sol. In the first approximation, an extra component of (1/2)ρ(V∞ sin α)² is measured
in the pressure.)
5.7 Demonstrate Equation 5.32 for a Wheatstone bridge with four resistances and explain
how the four strain gauges connected in such a bridge can operate in case of tempera-
ture drifts.
(Hint: Use resistances in series and in parallel to get the final voltage.)
5.8 A plate of steel with radius R = 10 cm has to be used as an indicator of the pressure in a tank
and eventually as a safety valve. The maximum deflection can reach 1 mm with a maximum
pressure variation of 58 bar. Determine an appropriate thickness according to Equation 5.27
to be used as a safety valve, and plot the obtained response of the deflection to pressure
variations from 1 to 50 bar. Assume 0.3 as Poisson's ratio and 200 GPa as Young's modulus.
(Sol. 0.1 mm thickness)
5.9 A Bourdon tube is manufactured with three different shapes:
(a) Circular
(b) Ellipsoidal
(c) Rectangular
Given a certain pressure change in the inner part of the tube, discuss in a qualitative man-
ner which cross-sectional shape gives the most sensitive response in terms of straighten-
ing force.
(Hint: What kind of realigning stresses do you expect from the different shapes?
Consider tubes of those shapes under pressure. A simple drawing of the stress diagram
along the edges gives a lot of information on the change in shape.)

5.10 Replot the graphs in Figure 5.20 for Equation 5.44 by employing conventional charac-
teristics of microphones for voice-recording applications. Discuss the optimal
range of damping factors you could use to have a flat response between 1 and 5 kHz
(the most sensitive range for the human ear).
(Hint: What are the relevant parameters for the microphone damping? Accounting for
Equation 5.38, focus on Equation 5.43, which gives the response of the microphone.)
5.11 Derive, in a similar manner as done for the capacitive microphone, a transfer function for
the inductive microphone. Consider a symmetrical design with the two coils at the same
distance from a circular membrane.
(Hint: Considering a symmetrical setup allows having just one distance parameter in
the formulation, and a formula similar to a second-order system can be derived.)
5.12 Consider a resonant cylinder to be designed in a range of frequencies between 1 and
5 kHz. Discuss how it could be designed in terms of thickness, radius, and length.
(Hint: Consider Equation 5.37 and analyze the order of magnitude of the separate con-
tributions. For example, selecting n, m = 1, for r → ∞, ωn² ~ const · (g/(γc s))(r/L²)p.)

additional exercises

5AE.1 A linear pressure device based on the use of deadweights has been brought
back to the manufacturer for recalibration. The system is first operated with several
weights, yielding the results given in Table 5.2.
Knowing that the device is a linear system, compute the zero drift and the sensitivity
drift. Given the uncertainty on the measured values, compute a confidence level for
the two drift values.
(Hint: Linear system equation: pressure (Pa) = 102855 · mass (kg) + 65.014; the drift
can be computed from this.)
5AE.2 A nonlinear pressure measurement device gives an output voltage readout for
rising and falling cycles of loading.
Imagine that the system is used in the wrong way, to measure an unsteady pressure
varying from 0 to 1 atm with a 2 Hz frequency in time:
(1) Plot the varying pressure in time as the beginning input.
(2) Assume the signal is directly translated into voltage by the linear function
volt = 5 V/atm × pressure (atm) and plot the voltage signal.
(3) Plot the new signal after using the two functions in Table 5.3, one in rising cycles
and one in falling ones.
(4) Compute the maximum and minimum deviation in voltage compared to assump-
tion (2).
(Hint: Assume a sinusoidal signal and input the values in a numerical algorithm.
Output the signal by conditioning the processing with the signal derivative.)

Table 5.2 Values obtained from multiple readouts of a linear system

Mass (kg)  Read pressure (Pa)  Uncertainty (Pa)


0.1 11,500 ±1500
0.5 56,000 ±1000
1 98,000 ±500
2 181,000 ±1000
5 552,000 ±2000
10 1,015,000 ±5000

Table 5.3 Voltage output per pressure loading for a
nonlinear pressure measurement device

Pressure (atm)  Voltage out, rising cycle (V)  Voltage out, falling cycle (V)
0.01 0.50 0.00
0.13 1.80 0.10
0.25 2.50 0.30
0.37 3.00 0.70
0.49 3.50 1.20
0.61 3.90 1.90
0.73 4.30 2.70
0.85 4.61 3.60
0.97 4.92 4.70
1.00 5.00 5.00

references

1. Page C, Vigoureux P (1975). The International Bureau of Weights and Measures 1875–1975,
Vol. 420, NBS Special Publication. U.S. Dept. of Commerce, National Bureau of Standards,
Washington, DC.
2. Thompson A, Taylor N (2008). The NIST Guide for the Use of the International System of
Units, NIST Special Publication 811.
3. IUPAC, International Union of Pure and Applied Chemistry, Research Triangle Park, NC.
4. International Organization for Standardization (1975). Standard Atmosphere, ISO 2533:1975.
5. U.S. Government Printing Office, Washington, DC (1976). U.S. Standard Atmosphere.
6. Portland State Aerospace Society (2004). A quick derivation relating altitude to air pressure.
Model based upon: International Organization for Standardization, Standard Atmosphere,
ISO 2533:1975, 1975.
7. Anderson J (1991). Fundamentals of Aerodynamics, McGraw-Hill, New York, NY.
8. Terry JL, Lannie DW (1963). Calibration and comparisons of pressure-type airspeed-altitude
systems of the X-15 airplane from subsonic to high supersonic speeds, NASA technical note
D-1724, National Aeronautics and Space Administration, Washington, DC.
9. Doebelin E (2003). Measurement Systems: Application and Design, McGraw-Hill,
New York, NY.
10. Jahanmiri M (2011). Pressure sensitive paints: The basics & applications, Division of Fluid
Dynamics, Chalmers University of Technology, Göteborg, Sweden.
Chapter Six

Temperature and heat flux measurements

Francesco Panerai

Contents

6.1 Introduction 144


Concepts of temperature and heat flux 144
Structure of this chapter 145
6.2 Gas temperature measurements with immersed sensors 146
Velocity effect and recovery factor 146
Conductive, radiative, and convective heat transfer 147
Transient effects 149
Nusselt number 150
Practical considerations 150
6.3 Thermocouples 151
Principles of operations 151
Laws of thermoelectricity 152
Type of thermocouples and considerations on their practical application 153
Sources of errors 155
Applications 155
6.4 Resistance thermometry 156
Resistance temperature detectors 156
Thermistors 157
6.5 Optical surface thermometry 158
Thermochromic liquid crystals 158
Temperature-sensitive paints 161
6.6 Radiation thermometry 162
Fundamentals of thermal radiation 162
Radiation thermometers 169
Applications 172
6.7 Infrared thermography 172
Infrared scanning radiometer 173
Performance of an infrared scanning radiometer 175
Applications 179
6.8 Heat flux sensors 183
Slug calorimeter 183
Coaxial thermocouple 184
Null-point calorimeter 185
Thin-film gauge 186
Water-cooled calorimeter 187
Gardon gauge 188
Problems 189
References 190


6.1 Introduction

In aerodynamic systems, fluids and surfaces interact with each other and exchange energy.
Flow/surface viscous interactions occur on any flying body, be it an airplane, a bird, an
asteroid penetrating a planetary atmosphere, or a missile. When a test article is immersed in the
gas stream of a wind tunnel, viscous interactions take place at the model's surface, determining
a certain response of the model-flow system and changing the energy content of the flow. In all
these and many other cases, energy is exchanged in the form of aerodynamic heating.
Viscous interactions are boundary layer processes. They ensure the nonisentropic slow-
down of the flow to zero slip velocity at the body's surface. As the fluid is brought to rest,
kinetic energy is converted into heat. Besides the chemical processes that occur in the pres-
ence of very high-enthalpy flows (e.g., at hypersonic speeds), the heat convected to the surface
is mostly dissipated by conduction into the material and reradiation to the surrounding envi-
ronment. The result is an increase in the body temperature and a decrease in the flow energy
content.
The relative importance of viscous and thermal processes in boundary layer flows is
described by the dimensionless Prandtl number (Pr), defined as the ratio of momentum
diffusivity and thermal diffusivity:

Pr = ν/α = cpμ/k   (6.1)

In Equation 6.1, ν = μ/ρ is the kinematic viscosity in (m²/s) and α = k/(ρcp) is the ther-
mal diffusivity, with units of length squared over time, with μ being the dynamic viscosity
in (Pa·s), k the thermal conductivity in (W/[m·K]), cp the specific heat in (J/[kg·K]),
and ρ the density in (kg/m³). For small Prandtl numbers (Pr ≪ 1), thermal diffusion
dominates over momentum diffusivity. In this case, the heat diffuses quickly compared to
the momentum, and the thermal boundary layer thickness is much larger than the thick-
ness of the velocity boundary layer. Conversely, for large Prandtl number (Pr ≫ 1) flows,
momentum transport prevails; hence, the thermal boundary layer is smaller than the veloc-
ity boundary layer.
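As a simple numerical example of Equation 6.1, using standard textbook property values for air at about 300 K:

```python
def prandtl(cp, mu, k):
    """Prandtl number from Equation 6.1: Pr = cp * mu / k."""
    return cp * mu / k

# Air at ~300 K: cp in J/(kg K), mu in Pa s, k in W/(m K)
print(f"Pr = {prandtl(cp=1005.0, mu=1.846e-5, k=0.0263):.3f}")   # ~0.705
```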
Characterizing the energetics of aerodynamic phenomena is fundamental to understanding
both the behavior of the flow and that of the body interacting with it. In experimental aero-
dynamics, such characterization confronts the experimenter with two typical exercises: first,
measuring the energy content of the fluid, typically its stagnation temperature, and/or sec-
ond, quantifying the surface heat transfer by measuring either the surface temperature or the
amount of heat exchanged at the wall.
This chapter provides a general overview of the measurement techniques used to quantify
temperature and heat flux in aerodynamics. Summarizing decades, or rather centuries, of
inventions, developments, and refinements of experimental techniques in temperature mea-
surement is an overwhelming exercise. The goal of this chapter is to describe the funda-
mental principles of the most popular methods, discussing at the same time precautions
and best practices to be considered during practical implementations, as well as the suitability,
advantages, and limitations of each measurement technique for different types of flows and
applications.

Concepts of Temperature is a basic intensive variable used to objectively quantify the concepts of hot and
temperature cold. Kinetic theory describes the temperature of a gas as a measure of the “agitation” of
and heat flux its constituting particles. The temperature is directly proportional to the average translational
kinetic energy of its molecules and atoms. For practical measurements, temperature is defined
based on fundamental principles of thermodynamics, by means of temperature scales of refer-
ence substances with known fixed temperature points and interpolating instruments.

Attempts to quantify heating phenomena with thermometers date back to Galileo Galilei
and witnessed substantial improvements along the seventeenth and eighteenth centuries with
the works of Boyle, Fahrenheit, and Celsius. It was not until 1848 that William Thomson
(later Lord Kelvin) presented the first rigorous thermodynamic definition of temperature,
based on the efficiency of the Carnot cycle and the triple point of water (273.16 K). The
cornerstone works of the 1700s–1800s led to the definition of temperature units that are still
in use nowadays. A comprehensive historical overview can be found in [1]. The limitations
of a finite number of reliable thermodynamic fixed points and the need for standard inter-
polation systems led in 1927 to the establishment of the International Practical Temperature
Scale (IPTS), successively revised in 1948, 1954, and 1960. The IPTS is composed of a
series of calibration standards to approximate the Kelvin and Celsius scales, for comparability of
temperature measurements. The 1990 revision, referred to as the International Temperature
Scale of 1990 (ITS-90), is the active standard [2–4] that provides calibration between 0.65
and 1358 K in multiple, overlapping temperature ranges. Thermodynamic fixed points in the
ITS-90 are the triple point of water, phase transition points (freezing points) of pure metals
for higher temperatures, and triple points of gases for lower temperatures. Calibration stan-
dards are helium isotope vapor pressure thermometers for cryogenic temperatures between
0.65 and 5 K, the helium gas thermometer between 3 and 24.5561 K (neon triple point),
the standard platinum resistance thermometer for temperatures between 13.8033 K (hydrogen
triple point) and 1234.93 K (silver freezing point), and the optical pyrometer for higher
temperatures.
While the concept of temperature quantifies in a general manner the energetic content of a
system, the concept of heat describes the transfer of thermal energy within a body or between
different bodies. Particularly, the notion of heat flux describes the rate at which thermal energy
is transferred through a given unit surface area and has units of W/m². As is well known,
heat is transferred through three modes: conduction, convection, and radiation. Here, the
reader is assumed to be already familiar with these concepts and to be knowledgeable of the
basic heat and mass transfer theory of classical textbooks [5,6].

Structure of This chapter covers operating principles and practical aspects of experimental methods
this chapter for temperature and heat flux measurements in aerodynamics. It is possible to distinguish
between two types of techniques: intrusive ones, which are based on temperature sensors
immersed into or in the vicinity of fluid streams, and nonintrusive ones, which instead
operate at a distance from the test section or the measured models. Before describing
the intrusive techniques, Section 6.2 reports considerations that are needed when apply-
ing immersed sensors to measure the temperature of moving flows. Among the intrusive
methods, deeper attention is given to thermocouple sensors (Section 6.3) and resistance
thermometers (Section 6.4), which are simple and robust devices that allow point measure-
ments of flows or surfaces. Methods based on thermal expansion, such as common liquid-
in-glass thermometers or bimetallic thermometers, are not treated in this chapter. Despite
being extensively used for calibration purposes or for monitoring ambient temperatures,
they are rarely used in aerodynamic measurements. The reader is invited to consult dedi-
cated literature on general temperature measurements; References 7,8 provide a thorough
analysis of their operating principles and applications. Section 6.5 is dedicated to optical
surface temperature measurements, such as liquid crystals and temperature-sensitive paints
(TSPs). Despite these techniques being responsible for alterations of the flow field, as they
are applied directly over surfaces, they can be listed among the nonintrusive methods. Differently
from thermocouples and resistance devices, they enable temperature field measurements of
surfaces. Sections 6.6 and 6.7 cover temperature measurements with radiation thermometry.
Basic principles of thermal radiation are recalled, and the general features of pyrometers and
radiometers for point measurements are illustrated (Section 6.6). Surface thermal mapping
with infrared (IR) thermography is discussed separately in Section 6.7. An overview of the
techniques used for direct heat flux measurements is reported in Section 6.8.

6.2 Gas temperature measurements with immersed sensors

The concept of thermodynamic temperature used to define temperature scales and to
describe the operating principles of intrusive instruments is an idealized concept that only
applies to systems in thermal equilibrium. Challenges arise in the presence of fluids in
motion. In the attempt to measure the temperature of a moving flow with an immersed
temperature probe, one needs to consider the heat transfer processes between the fluid
and the sensor itself. This is a common situation in aerodynamic experiments (especially
in high-speed wind tunnels), when intrusive thermometry is performed with the objective
of measuring the static temperature of the gas. In the following sections, the main heat
transfer processes are analyzed, with focus on thermocouple-type sensors and gas flows.
Similar considerations can easily be extended to any other type of intrusive probe and
fluid.

Velocity effect and Consider the ideal case of a perfect gas at static temperature T decelerated from freestream
recovery factor velocity to the stagnation point (zero velocity) at total temperature Tt.* If all the kinetic energy
is converted into internal energy adiabatically and without work (isentropic deceleration),
the temperature of the gas will increase to the total (stagnation) temperature according to the
following relation:

Tt = T + v²/(2cp)   (6.2)

with
cp being the specific heat at constant pressure
v being the fluid velocity

The term Td = v²/(2cp) is referred to as the dynamic temperature. Equation 6.2 can also be
written as

Tt/T = 1 + ((γ − 1)/2)M²   (6.3)

and the gas total temperature can be computed by measuring the Mach number with a Pitot
probe, if the static temperature is known. For low-speed airflows (M < 0.22), the steady-
state flow temperature can be approximated by the stagnation temperature with errors
below 1%; thus, the response of an immersed probe at rest can be directly used as the mean
flow temperature.
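A one-line check of Equation 6.3 confirms the statement above: at M = 0.22, the dynamic contribution to the total temperature of air is just under 1%:

```python
def total_to_static(M, gamma=1.4):
    """Total-to-static temperature ratio of Equation 6.3."""
    return 1.0 + 0.5 * (gamma - 1.0) * M**2

print(f"{total_to_static(0.22) - 1.0:.4f}")   # 0.0097, i.e., <1% at M = 0.22
```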
The previously mentioned considerations apply to ideal gases, where viscous dissipa-
tion can be neglected. Recalling the concept of Pr as the ratio between the fluid properties
governing the transport of momentum by viscous effects and the fluid properties governing
the transport of heat by thermal diffusion (see Section 6.1), one realizes that flows dealt with
in practical applications are often characterized by a Pr number different from one. Hence,
when measuring temperature using an immersed probe, the wall temperature is different
from the stagnation temperature due to the heat transport through the thermal boundary
layer. To take this into account, a recovery factor is introduced (being r < 1 for Pr < 1 and
r > 1 for Pr > 1):

r = (Tt,meas − T)/(Tt − T)   (6.4)

* The total temperature is the temperature sensed by an idealized probe at rest with respect to the system boundaries.
It is also referred to as stagnation temperature Tstag.

The “recovery” denomination signiies that Tt, meas − T is the “recovered” portion of dynamic
temperature. With Equation 6.4, 6.2, and 6.3 becoming, respectively,

Tt, meas = T + rTd (6.5)

and

Tt, meas æ g -1 ö 2
= 1+ r ç ÷M (6.6)
T è 2 ø

Tt, meas is also called adiabatic temperature, Tad. Subtracting Equation 6.5 from Equation 6.2,
the velocity error is obtained as

v2
ε v = Tt - Tad = (1 - r ) (6.7)
2c p

For gases (Pr < 1), the recovery factor lies between 0 and 1. The actual value of the
recovery factor for a real system depends not only on the Prandtl number but also on the char-
acteristics of the actual sensor head (stagnation point, cylinder, flat plate, etc.).*
For the design of bare head thermocouples, Moffat recommends an r of 0.68 ± 0.07 for wires
normal to the flow and 0.86 ± 0.09 for wires parallel to the flow [9]. In order to reduce veloc-
ity errors in gas temperature measurements of high-speed flows (e.g., high-speed subsonic
or transonic conditions), shielded thermocouple sensors are an effective solution. The shield
reduces the internal velocity v_int in the vicinity of the sensors. In this case, the overall recovery
factor can be computed as [9,10]

\[ r' = 1 - (1 - r)\,\frac{v_{int}^2}{v_{ext}^2} \tag{6.8} \]

Shields designed for wires parallel to the flow, such that the velocity ratio is of the order of 1/8,
enable a recovery factor very close to 1.
The actual recovery factor, where possible, should be characterized for each implemented
immersed sensor system, using a dedicated calibration setup.
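As a minimal numerical illustration of Equations 6.3 through 6.6 (not from the original text; gas properties, probe readings, and the recovery factor are invented for the example), the following Python sketch recovers the true total temperature from a probe reading:

gamma = 1.4   # specific heat ratio of air (illustrative)

def total_temperature(T_meas, M, r):
    """Recover the true total temperature T_t from a probe reading T_meas.

    Combines Eq. 6.6, T_meas/T = 1 + r*(gamma-1)/2*M^2, with
    Eq. 6.3, T_t/T = 1 + (gamma-1)/2*M^2.
    """
    T_static = T_meas / (1.0 + r * 0.5 * (gamma - 1.0) * M**2)   # from Eq. 6.6
    return T_static * (1.0 + 0.5 * (gamma - 1.0) * M**2)         # Eq. 6.3

# Example: bare wire normal to the flow (r ~ 0.68, after Moffat) at M = 0.8
print(total_temperature(T_meas=310.0, M=0.8, r=0.68))

For r = 1 the correction vanishes, consistent with the velocity error of Equation 6.7 going to zero.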

Conductive, radiative, and convective heat transfer
At steady state, the heat transfer within a temperature probe installed in a moving fluid
confined by walls is characterized by a balance between the heat exchanged by convection
between the sensor and the fluid \(\dot{q}_c\), the heat exchanged by conduction within the sensor and
its support \(\dot{q}_k\), and the heat exchanged by radiation between the sensor, the fluid, and the
enclosing walls \(\dot{q}_r\):

\[ \dot{q}_c = \dot{q}_r + \dot{q}_k \tag{6.9} \]

In real systems, all the processes mentioned earlier occur simultaneously and interact with
each other. Coupled interactions are mostly strengthened in extreme environments, like high-
speed, high-temperature flows. In certain cases, some of the effects have a minor influence on
the measured temperature and can be neglected to simplify the solution of the heat transfer
problem.
A study of the different terms in Equation 6.9 has been proposed in [1], analyzing a dif-
ferential element dx of a temperature sensor, immersed in a flow at static temperature T,

* An analogous concept can be introduced for liquid flows [1].



surrounded by enclosing walls at T_w. For the sensor differential element, assuming that x is the
main direction of conductive transfer, Equation 6.9 becomes

\[ dq_c = dq_r + \frac{dq_k}{dx}\,dx \tag{6.10} \]

The convective term can be written as

\[ dq_c = h_c \left(T_{ad} - T_x\right) dA_c \tag{6.11} \]

where the adiabatic temperature T_ad is given by an equation analogous to Equation 6.5 for a
right cylinder. Here, h_c and dA_c are the convective heat transfer coefficient and the area of heat
exchange.
The radiation term combines wall, gas, and sensor emission and can be expressed as

\[ dq_r = h_r \left(T_x - T_w\right) dA_r \tag{6.12} \]

where dA_r is the area of heat exchange and the radiative transfer coefficient h_r is written as

\[ h_r = \frac{\sigma_{SB}\,\varepsilon' \left(T_x^4 - T_w^4\right)}{T_x - T_w} \tag{6.13} \]

Here, ε′ is a corrected emissivity factor that accounts for wall and sensor emissivities, gas
absorptivity, and view factors [1].
The conduction term is given by Fourier's law:

\[ \frac{dq_k}{dx}\,dx = -kA_k \frac{d^2T_x}{dx^2}\,dx - k\,\frac{dT_x}{dx}\frac{dA_k}{dx}\,dx \tag{6.14} \]

Combining Equations 6.11, 6.12, and 6.14 into Equation 6.10, with dA_c ≈ dA_r, one gets

\[ \frac{d^2T_x}{dx^2} + a_1(x)\,\frac{dT_x}{dx} - a_2(x,y)\,T_x = -a_2(x,y)\,a_3(x,y) \tag{6.15} \]

where

\[ a_1(x) = \frac{dA_k}{A_k\,dx}, \qquad a_2(x,y) = \frac{dA_c\,(h_c + h_r)}{kA_k\,dx}, \qquad a_3(x,y) = \frac{h_c T_{ad} + h_r T_w}{h_c + h_r} \]

Solving Equation 6.15 is a particularly involved problem, as one needs to deal with a second-
order nonlinear differential equation (a_2 and a_3 depend on T_x, as clearly evinced from Equation
6.13). Benedict proposes three solution methods, known as tip solution, overall linearization,
and stepwise linearization [1,10]. The first method consists in restricting the solution to the
tip of the probe, neglecting conduction effects, the result being an overestimation of the tem-
perature. The overall linearization is based on an approximation of T_x with an average value
between T_w and T_ad, allowing h_r to be calculated. This method provides good results when the gas
is transparent to radiation but leads to large errors in the case of optically thick fluids. More
accurately, the stepwise linearization uses a finite difference approximation, solving the linear
problem within small elements of the sensor.
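A possible numerical reading of the linearization idea is sketched below. This is an illustrative construction, not Benedict's original procedure: the radiative coefficient h_r of Equation 6.13 is frozen at the current temperature iterate, the resulting linear form of Equation 6.15 is solved by finite differences, and the process is repeated until convergence. All property values are invented.

import numpy as np

sigma_SB, eps = 5.6704e-8, 0.8          # Stefan-Boltzmann constant, corrected emissivity
k, d, L = 20.0, 0.5e-3, 20e-3           # wire conductivity W/(m K), diameter, immersed length
hc, T_ad, T_w = 150.0, 800.0, 500.0     # convective coeff., adiabatic and wall temperatures, K
N = 101
x = np.linspace(0.0, L, N); dx = x[1] - x[0]
P, A = np.pi * d, np.pi * d**2 / 4      # exchange perimeter and conduction cross section

T = np.full(N, T_w)
for _ in range(50):
    # Eq. 6.13 with T frozen at the current iterate (guard against 0/0 at T = T_w)
    hr = sigma_SB * eps * (T**4 - T_w**4) / np.where(T != T_w, T - T_w, 1.0)
    a2 = P * (hc + hr) / (k * A)                 # per-unit-length form of a2
    a3 = (hc * T_ad + hr * T_w) / (hc + hr)
    # tridiagonal system for T'' - a2*T = -a2*a3
    main = -2.0 / dx**2 - a2
    A_mat = (np.diag(main) + np.diag(np.ones(N - 1) / dx**2, 1)
             + np.diag(np.ones(N - 1) / dx**2, -1))
    b = -a2 * a3
    A_mat[0, :] = 0.0; A_mat[0, 0] = 1.0; b[0] = T_w                    # T(0) = T_w
    A_mat[-1, :] = 0.0; A_mat[-1, -1] = 1.0; A_mat[-1, -2] = -1.0; b[-1] = 0.0  # dT/dx = 0 at tip
    T_new = np.linalg.solve(A_mat, b)
    if np.max(np.abs(T_new - T)) < 1e-6:
        T = T_new
        break
    T = T_new

print(f"tip temperature: {T[-1]:.1f} K (adiabatic {T_ad:.1f} K)")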

For practical applications, simplified expressions of Equation 6.15 can be provided. These
are generally applicable to those systems where environmental effects are small enough that
they do not influence each other. Let us consider for instance the case where radiation effects
are negligible. Under the assumption that A_c and A_k are constant and that the convective heat
transfer coefficient and thermal conductivity do not vary with temperature, Equation 6.15
becomes

\[ \frac{d^2T_x}{dx^2} - \frac{h_c A_c}{kA_k}\left(T_x - T_{ad}\right) = 0 \tag{6.16} \]

This can be solved with the boundary conditions T_x = T_w for x = 0 and dT_x/dx = 0 for x = l
(sensor tip), yielding

\[ \frac{T_x - T_{ad}}{T_w - T_{ad}} = \frac{e^{mx}}{1 + e^{2ml}} + \frac{e^{-mx}}{1 + e^{-2ml}} \tag{6.17} \]

with

\[ m = \left(\frac{h_c A_c}{kA_k}\right)^{1/2} \tag{6.18} \]

As done earlier for the velocity error, by evaluating Equation 6.17 at x = l, an expression for the
conduction error can be obtained as

\[ \varepsilon_k = T_t - T_{ad} = \frac{T_t - T_w}{\cosh ml} \tag{6.19} \]

In the case of negligible conduction, for constant A_r, A_c, and h_c, the solution of Equation 6.15
can be easily found from

\[ \frac{T_x^4 - T_w^4}{T_{ad} - T_x} = \frac{h_c A_c}{\sigma \varepsilon' A_r} \tag{6.20} \]

and the radiation error reads

\[ \varepsilon_r = T_t - T_{ad} = \frac{\sigma \varepsilon' A_r \left(T_t^4 - T_w^4\right)}{h_c A_c} \tag{6.21} \]
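The two simplified error expressions lend themselves to quick order-of-magnitude checks. The short Python sketch below (all values illustrative) evaluates Equations 6.18, 6.19, and 6.21 for a cylindrical wire, taking the exchange area per unit length as the perimeter and the conduction area as the cross section:

import math

hc, k = 150.0, 20.0            # convective coeff. W/(m^2 K), wire conductivity W/(m K)
d, l = 0.5e-3, 20e-3           # wire diameter and immersion length, m
eps, sigma = 0.8, 5.6704e-8    # corrected emissivity, Stefan-Boltzmann constant
T_t, T_w = 800.0, 500.0        # gas total and wall temperatures, K

# Eq. 6.18 with Ac -> perimeter and Ak -> cross section (per unit length)
m = math.sqrt(hc * (math.pi * d) / (k * (math.pi * d**2 / 4)))
eps_k = (T_t - T_w) / math.cosh(m * l)           # conduction error, Eq. 6.19
eps_r = sigma * eps * (T_t**4 - T_w**4) / hc     # radiation error, Eq. 6.21 with Ar ~ Ac
print(f"conduction error ~ {eps_k:.1f} K, radiation error ~ {eps_r:.1f} K")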

Transient effects
A further correction to consider in gas temperature measurements is related to the transient
nature of the measured heat transfer phenomena. Because of the inertia of the sensor, an
immersed instrument does not respond instantaneously to a variation of the measurand, but
lags in time with respect to the actual environment temperature. A simplified first-order
formulation of the problem can be written as

\[ h_c A_c \left(T_{ad} - T\right) = \rho V c_p \frac{dT}{dt} \tag{6.22} \]

where c_p, ρ, and V are, respectively, the specific heat, density, and volume of the sensor material
at temperature T, and T_ad is the temperature of the flow. The equation can be solved by
separation of variables, yielding

\[ T_{ad} - T = \tau\frac{dT}{dt} \;\Rightarrow\; e^{t/\tau}\left(\frac{dT}{dt} + \frac{T}{\tau}\right) = \frac{T_{ad}}{\tau}\,e^{t/\tau} \;\Rightarrow\; \frac{d}{dt}\left(T\,e^{t/\tau}\right) = \frac{T_{ad}}{\tau}\,e^{t/\tau} \]

\[ \int d\left(T\,e^{t/\tau}\right) = \frac{1}{\tau}\int T_{ad}\,e^{t/\tau}\,dt \;\Rightarrow\; T = C\,e^{-t/\tau} + \frac{e^{-t/\tau}}{\tau}\int_0^t T_{ad}\,e^{t'/\tau}\,dt' \tag{6.23} \]

Here, τ = ρVc_p/(h_c A_c) is the time constant of the sensor, namely, the ratio of its thermal capaci-
tance to the thermal resistance of the convecting flow around it. The determination of the
integration constant C is treated in dedicated literature for different types of transient behav-
iors. A simplified case, useful as a first estimation in practical applications, allows computing an
expression of the transient error for a cylindrical sensor head of diameter d as

\[ \varepsilon_t = T_t - T = \frac{\rho c_p d}{4h_c}\frac{dT}{dt} \tag{6.24} \]

Equation 6.24 is obtained from Equation 6.22, considering that for a thin cylindrical section
V/A_c ≈ d/4.
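A minimal sketch of the first-order behavior of Equation 6.22 (sensor properties are illustrative) computes the time constant for a thin cylindrical head and the classical exponential response to a step change in flow temperature:

import numpy as np

rho, cp, d, hc = 8500.0, 440.0, 0.5e-3, 150.0   # sensor density, specific heat, diameter, hc
tau = rho * cp * (d / 4.0) / hc                  # time constant, s, using V/Ac ~ d/4

T0, T_ad = 300.0, 350.0                          # initial sensor and flow temperature, K
t = np.linspace(0.0, 5.0 * tau, 200)
T = T_ad + (T0 - T_ad) * np.exp(-t / tau)        # solution of Eq. 6.22 for a step change
print(f"tau = {tau:.2f} s; reading after one tau: {T[np.searchsorted(t, tau)]:.1f} K")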

Nusselt number
To determine the velocity (Equation 6.7), conduction (Equation 6.19), radiation (Equation
6.21), and transient (Equation 6.24) errors, one needs to know the convective heat transfer
coefficient h_c. When a gas stream flows around an immersed probe, a boundary layer is
established in the surroundings of the sensor. The amount of heat transferred to the probe
depends on the thermal transport through the boundary layer thickness.
A dimensionless form of h_c is the Nusselt number Nu = h_c d/k, which relates convective and
conductive transport. Since convective heat transfer occurs within the boundary layer, it has to
be dependent on the nondimensional numbers characteristic of it, that is, the Reynolds number
and the Pr. Typical empirical correlations are of the form

\[ Nu = a\,Re^b\,Pr^c \tag{6.25} \]

The coefficients a, b, and c are to be determined for each specific configuration and gas
mixture. Useful relationships for cylindrical thermocouple wires of diameter d are given by
Moffat [11], for air or dilute combustion products at 100 < Re < 10,000 (the Reynolds
number being based on the wire diameter):

\[ Nu = (0.44 \pm 0.06)\,Re^{0.5} \quad \text{for wires normal to the flow} \]
\[ Nu = (0.085 \pm 0.009)\,Re^{0.674} \quad \text{for wires parallel to the flow} \tag{6.26} \]
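For a quick estimate of h_c from Equation 6.26, one may proceed as in the following sketch (air properties and wire data are illustrative; the stated correlation uncertainty is ignored here):

rho, mu, k_air = 1.18, 1.85e-5, 0.026   # density, viscosity, conductivity of air at ~300 K
d, v = 0.5e-3, 30.0                      # wire diameter (m) and flow velocity (m/s)

Re = rho * v * d / mu                    # Reynolds number based on the wire diameter
Nu = 0.44 * Re**0.5                      # Eq. 6.26, wires normal to the flow
hc = Nu * k_air / d                      # from Nu = hc*d/k
print(f"Re = {Re:.0f}, Nu = {Nu:.2f}, hc = {hc:.0f} W/(m^2 K)")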

Practical considerations
In the design of intrusive temperature probes for gas measurements, the combined effects
analyzed earlier must be taken into account. The level of analysis in each application depends
on the level of accuracy required, on the severity of the environmental conditions, and on the
amount and type of data to be acquired. No matter the effort spent on optimizing the mea-
surement sensor, every setup is prone to errors, due to the intrinsic variability of any
real flow system. There are typically three experimental approaches to cope with errors in the
design of gas measuring probes: (1) install a bare wire thermocouple and correct the direct

measurement for environmental effects, (2) design probes with constant correction factors
over a wide range of test conditions, or (3) design a probe to minimize the environmental
effects under the specified test conditions.
A first good estimation of the environmental effects is obtained using the simplified error
relationships provided earlier. More detailed analysis can be achieved by building dedicated
setups to study the response of the sensor, which is nevertheless a very expensive and involved
practice. A relatively convenient approach is to use numerical simulations of the sensor and
its surrounding flow stream in a decoupled manner [10]. Advances in conjugate heat transfer
simulations today offer an effective method to study with high fidelity the response of a spe-
cific probe design in a simulated measurement environment.

6.3 Thermocouples

Principles of operation
Thermocouples are widespread temperature sensors, offering a simple, inexpensive, and ver-
satile solution for temperature measurement and control. They are used in both scientific and
industrial applications. Despite their simplicity, great attention must be paid to ensure proper
usage and to obtain accurate measurements.
Thermocouples consist of two different conductors assembled with contact at one or
more junction locations. When a temperature variation exists across the circuit, a voltage
(or electromotive force, E or emf) is produced. The voltage is proportional to the tem-
perature difference T_hot − T_amb between the two junctions. This operating principle is known
as the Seebeck effect, from the name of the Estonian–German physicist Thomas Johann
Seebeck, who first observed it in 1821. In 1834, Jean Charles Athanase Peltier discov-
ered the reversibility of the Seebeck effect, namely, that when an electrical current is sent
through a circuit of materials with different conductivity, heat is absorbed at one
junction and given up at the other. In 1851, William Thomson (later Lord Kelvin) extended
the Seebeck effect to a single thermoelectric material in the presence of a thermal gradi-
ent, observing the reversibility of thermal gradient and emf in a homogeneous conductor.
The Seebeck, Peltier, and Thomson effects are the three fundamental effects describing
the behavior of any thermoelectric circuit. In the case of thermocouple circuits, the Peltier
effect is concentrated at the junctions, while the Thomson effect is distributed along the
wires. However, if a thermocouple is well designed, these are negligible with respect to the
Seebeck effect.
A schematic of the Seebeck effect is presented in Figure 6.1. The graphical analysis used
in the figure and adopted for the following illustrations is that originally proposed by Moffat
[9,12]. The advantage of this approach is that complex, multimaterial circuits can be analyzed
with no ambiguity.
In practical applications, thermocouples are obtained from junctions of metals or alloys. The
combination of the two materials depends on the temperature range of application and the type
of environment. The Seebeck coefficient α_S, also known as the thermocouple sensitivity, defines
the output voltage produced for a given temperature difference: E = α_S (T_hot − T_amb). The origin
of the electromotive force has been extensively discussed in the literature [7,12].
of the electromotive force has been extensively discussed in the literature [7,12].
In the schematic presented in Figure 6.1, the thermocouple circuit measures the tem-
perature T_hot relative to the ambient temperature T_amb of its terminals. If an absolute mea-
surement is desired, the ambient temperature must be known. To provide high accuracy,
thermocouple systems make use of a known reference temperature. Simple configurations
for a reference junction are presented in Figure 6.2, where the thermocouple metal terminals
are inserted into a controlled environment at known T_ref and connected to the ambient by a
third conductor.
Other configurations can be assembled by introducing a homogeneous thermoelectric material
between the two metallic wires and placing the two junctions in the measurement and refer-
ence environments, respectively [13]. There exist several solutions for the reference junction,*

* Occasionally, the reference junction is referred to as the "cold junction."



FIGURE 6.1 Schematic of the Seebeck effect.

FIGURE 6.2 Schematic of a thermocouple circuit using a reference temperature zone.

a common example being the ice point of water, practically implemented using a bath of water
and ice. Alternative methods are triple points of known substances or electronic compensation
boxes based on isothermal blocks and thermistors [7].

Laws of thermoelectricity
As all thermoelectric circuits, thermocouples are characterized by the three fundamental laws
of thermoelectricity. These can be regarded as empirical laws to be accounted for in designing
the measurement system to obtain accurate measurements:
I. Law of homogeneous materials. The voltage across a thermocouple is unaffected by
temperatures elsewhere in the circuit, provided the two metals used are both homoge-
neous (Figure 6.3). This is important, as it allows using thermocouple metals as lead
wires, irrespective of the temperatures to which they are exposed along their paths.
II. Law of intermediate materials. If a third conductor C is inserted in either A or B and if
the two new junctions are at the same temperature, no effective voltage is generated by
the third metal, independently of the temperature to which C is subjected outside the
new junctions (Figure 6.4). In practical applications, this law allows the use of an ampli-
fier made of a third metal, with terminals close together to ensure the same temperature.
III. Law of intermediate temperatures. If a metal C is inserted in one of the AB junctions,
then no net voltage is generated provided that the junctions AC and BC are at the same
temperature (Figure 6.5). This means that two wires or a junction can be soldered
together, and the presence of the third metal (solder) will not affect the voltage if there
is no temperature gradient across the solder junction. In practical applications, this law
allows the temperature to be computed from the voltage if the temperature of a reference
junction is known.

ε
Tamb 3 T* Thot – 3
A (+)
4 B
1
ε V
ε 1
2 4
B (–)
2 A
ΔT +
Tamb T* Thot T

FIGUre 6.3 Illustration of law of homogeneous materials.



FIGURE 6.4 Illustration of the law of intermediate materials.

FIGURE 6.5 Illustration of the law of intermediate temperatures.

Two corollaries can be derived from the previous laws:

IV. If E_AC is the electromotive force produced by the two metals A and C between two tem-
peratures and E_BC is that produced between B and C between the same temperatures,
then the electromotive force produced by A and B between the same temperatures is
E_AB = E_AC + E_CB (Figure 6.6a).
V. If a thermocouple produces an electromotive force E_1 with its junctions at T_1 and T_2,
and an electromotive force E_2 with its junctions at T_2 and T_3, then the electromotive
force produced when its junctions are at T_1 and T_3 is E_3 = E_1 + E_2 (Figure 6.6b).
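The law of intermediate temperatures (and corollary V) is exactly what software-based cold-junction compensation implements. The sketch below is a hypothetical illustration: emf_of and temp_of stand in for the standard reference characteristics of the chosen thermocouple type (e.g., published ITS-90 tables), replaced here by a crude constant-sensitivity model:

ALPHA = 41e-6  # V/°C, placeholder sensitivity for a type K-like pair

def emf_of(T_C):
    # emf of the junction pair between 0°C and T_C (stand-in for a reference table)
    return ALPHA * T_C

def temp_of(E):
    # inverse characteristic, referenced to 0°C (stand-in for an inverse table)
    return E / ALPHA

def measured_temperature(E_meas, T_ref_C):
    # corollary V: E(0 -> T_hot) = E(0 -> T_ref) + E(T_ref -> T_hot)
    return temp_of(emf_of(T_ref_C) + E_meas)

print(measured_temperature(E_meas=3.28e-3, T_ref_C=25.0))  # ~105°C with this model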

Type of thermocouples and considerations on their practical application
An important aspect in the practical implementation of a thermocouple measurement is the selec-
tion of the appropriate material combination for the desired application. Overall, thermo-
couples can operate over a wide temperature range, from as low as −270°C up to nearly 2500°C.
The materials used for assembly are characterized by a positive or negative thermoelectric
polarity, that is, whether they produce an increase or a decrease in voltage for a given temperature
variation. The combination of the two materials' polarities determines the thermocouple
sensitivity.

FIGURE 6.6 Illustration of the (a) IV and (b) V laws of thermocouples.



Table 6.1 Composition, range, and sensitivity of the most common thermocouple types

Type  Alloy Pair                                      Temperature Range (°C)  Sensitivity (μV/°C)
E     Chromel vs. constantan                          −50 to 740              68
J     Iron vs. constantan                             −40 to 750              50
K     Chromel vs. alumel                              −200 to 1250            41
N     Nicrosil vs. nisil                              −270 to 1300            39
M     Nickel–18% molybdenum vs. nickel–0.8% cobalt    −270 to 1350            50
T     Copper vs. constantan                           −200 to 350             43
B     Platinum–30% rhodium vs. platinum–6% rhodium    50 to 1800              10
R     Platinum–13% rhodium vs. platinum               0 to 1450               7
S     Platinum–10% rhodium vs. platinum               0 to 1450               7
C     Tungsten–5% rhenium vs. tungsten–26% rhenium    0 to 2320               15

Although combinations are virtually unlimited, certain alloy pairs have become a standard
in practical applications and are given a conventional letter label, defining the thermocouple
type. A list of the most common types is provided in Table 6.1, together with their tempera-
ture range and an indicative value of the sensitivity. The table, certainly not comprehensive
of the state of the art, is to be used as a general indication only. Some of the thermocouple
types have standard calibration tables and assigned color codes [14].
It is remarked that in most cases the temperature–voltage relationship is not lin-
ear, that is, the sensitivity might vary over the temperature range. In practice, this means
that the output voltage cannot be directly translated into a temperature. For instance, type
K thermocouples have a constant Seebeck coefficient between 0°C and 1000°C. In this
range, the temperature can be determined with an accuracy of a couple of degrees directly from
the measured voltage. At higher temperatures, however, the sensitivity drops, easily lead-
ing to wrong measurements if the ~40 μV/°C value is used. Hence, the best practice is to rely on
a calibration of the thermocouple with the associated measurement chain (amplifier, lead
wires, etc.) within the whole temperature range of interest for the specific application.
Standard practice for calibration is found in [15]. An example of a simple calibration bench
is a uniform temperature oil bath, where the "true" temperature can be measured with
a calibrated thermometer submerged in the oil in the vicinity of the thermocouple head.
High-temperature calibration furnaces can also be found on the market, capable of provid-
ing accuracies of the order of 0.1°C.
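A calibration of the complete measurement chain reduces, in its simplest form, to fitting a function T(E) through calibration points. A minimal sketch with invented oil-bath data:

import numpy as np

E = np.array([0.0, 1.0, 2.1, 3.3, 4.6]) * 1e-3   # measured emf at calibration points, V
T = np.array([0.0, 25.0, 51.0, 79.0, 108.0])     # reference thermometer readings, °C

coeffs = np.polyfit(E, T, deg=3)                 # cubic fit of T versus E
T_of_E = np.poly1d(coeffs)
print(f"E = 2.5 mV -> T = {T_of_E(2.5e-3):.1f} °C")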
Several other considerations need to be made when choosing a thermocouple or designing
the measurement system. For example, sensors operating in oxidizing and corrosive environ-
ments need suitable protection or shielding to guarantee reliable measurements. Some alloys
are simply not compatible with certain gases, and their properties are immediately altered
when exposed to such conditions.
Parameters, such as the choice of configurations, attachments, and type of circuits, are vir-
tually unlimited. The experimentalist can count on the support of most thermocouple
manufacturers, who have nowadays achieved a sound maturity and are able to suggest the
optimal solution for any dedicated application.
The arrangement for embedding thermocouples in solid temperature measurements, or
for attaching the sensing junction in surface measurements, needs to be carefully evaluated
as well.
If very high-precision measurements are desired, it is mandatory to study the com-
plete heat transfer problem at the sensing junction, including the coupled effects of the
thermocouple elements and the measured medium. For measurements in fluids, further
complexities arise due to the combined effects of conduction, radiation, and convection, as
discussed in Section 6.2.

Sources of errors
Aside from the quoted accuracy of the data acquisition system, possible sources of errors
affecting the measurements have to be taken into account when performing thermocouple
thermometry. Errors are mostly due to the production of spurious emfs caused by faulty
parts in the thermocouple system or emf noise picked up along the measurement chain.
Cases in which a poor soldering or welding of the junction head causes an open circuit are
easy to identify. A subtler situation occurs when the thermocouple keeps providing a mean-
ingful voltage, which is affected by spurious sources and hence actually wrong. The following
are typical sources of error that can be identified:

• Decalibration is the alteration of the physical makeup of the thermocouple wire, causing
deviation of the thermocouple emf response. It can result from inhomogeneities in the
original manufacturing of the thermocouple, from plastic deformations due to strain-
ing, from contamination of the alloys' chemical compositions due to the diffusion of
atmospheric particles under temperature extremes, or from high-temperature annealing.
The best practice to cope with decalibration is replacing the thermocouple.
• Galvanic actions in the presence of electrolytes generate spurious emfs that can exceed
the Seebeck effect by orders of magnitude. Galvanic effects may occur when using
thermocouples in water or other liquid substances, where electrolytes can be generated
from the dyes used in wire insulations. Good practice is to use adequate protection and
shielding.
• Straining of the thermoelectric wires may generate spurious emfs. This may occur when
measurements of vibrating systems are performed. Type K thermocouples are particu-
larly affected by this issue. Less severe effects are instead obtained when using type
E or J devices, which are preferable solutions in the presence of vibrations.
• Cold junction compensation errors are mostly due to the temperature gradient between
the cold junction and the sensor. This is best minimized by maintaining the thermocouple
in a uniform and stable environment. In modern devices, if the cold temperature is mea-
sured electronically or with alternative methods (such as thermistors), further effects
may arise from the errors intrinsic to these methods.

Applications
Due to their competitiveness in terms of cost, robustness, and ease of applicability, thermo-
couples are a widespread method for temperature measurements and monitoring. They can
be applied as immersed sensors for direct measurements of fluid temperatures. Alternatively,
they can be either attached by spot welding (or other procedures) to surfaces, installed right
underneath, or embedded into material samples for measurements in solids subjected to
aerodynamic or aerothermal heating.
A large contribution to the development of thermocouple thermometry over the last four
decades has been due to the turbomachinery industry, with focus on the aerodynamics of blades
and gas temperatures in challenging aerothermal environments. Very fine thermocouple probes
have been designed for gas measurements in short-duration facilities with accuracies below
1 K. Efforts have been dedicated to the development of thermocouple rakes for temperature
measurement of gas turbine exhausts.
In hypersonic facilities, thermocouple sensors have been largely used for the measure-
ment of short-duration phenomena. In particular, coaxial thermocouples (further described in
Section 6.8) have been developed for fast heat flux measurements in the presence of very-
high-speed flows.
In aerothermodynamic testing of high-temperature materials for space applications, tech-
niques have been refined for the attachment of thermocouple sensors to hot structures and for
the placement of in-depth thermocouples into material models, in order to minimize the errors
due to temperature gradients and conduction [16].

Countless other applications could be mentioned, from the aviation industry to manufactur-
ing and from chemical processes to power generation, where temperatures associated with
aerothermal phenomena are measured by means of thermocouples.

6.4 Resistance thermometry

Resistance thermometers are based on the repeatable change with temperature of the electric
resistance of a conductor. Instruments consist of an electrical circuit acting as a sensing ele-
ment, a casing frame, a protecting sheath, and a bridge converting the resistance variation
into a voltage. Differently from thermocouples, which require a reference temperature to be
known, resistance thermometers are absolute temperature devices. Depending on the type
of material, two classes of resistance thermometers are distinguished: resistance tempera-
ture detectors (RTDs) use metallic conductors, while thermally sensitive resistors, or simply
thermistors, are manufactured upon semiconductor materials. The two types are described in
the following.

Resistance temperature detectors
RTDs are fast-response devices suitable for temperature measurements in short-
duration facilities, shock tubes, or hypervelocity tunnels (see Chapter 3). In this type of
facility, one is often confronted with total test times of the order of milliseconds or less.
The instrumentation for temperature detection must be able to provide a response fast enough
to follow the transient nature of the flow.
A typical RTD consists of a sensing element encapsulated in a protective sheath or case in
different arrangements. Wire-wound sensing elements are assembled using a very thin (10–40 μm
diameter) metallic wire, usually in platinum, wound into a coil and packaged inside a ceramic
mandrel (Figure 6.7a). Alternatively, the wire can be coiled around a glass or ceramic core
and coated with a glassy insulating material (Figure 6.7b). The sensing wire is connected to
larger-diameter (200–400 μm) lead wires, in platinum or platinum alloys, departing from the
back face of the sensing element. The thin-film configuration (Figure 6.7c) is instead made of
a thin metallic layer deposited over a ceramic substrate. A resistive pattern is etched or cut into
the metal film. Lead wires are bonded to the metallic coating using an epoxy or glass substrate.
The thin-film concept is further analyzed in the "Thin-film gauge" section as a method for heat
flux measurements.
Platinum is the most commonly used metal for precision RTDs [17], because of its wide
temperature range (from 3 to 1370 K), accuracy, linearity, and stability. The standard plati-
num resistance thermometer, used to define the ITS-90, enables measurements as accurate as
±0.0001°C. Other metals are also suitable, such as nickel (used from 80 to 700 K), copper
(from 80 to 530 K), or rhodium–iron, with the latter being commonly used in cryogenic envi-
ronments, thanks to its stability and good sensitivity at temperatures as low as 0.5 K.
The conversion of the change in resistance into a voltage variation is done with a modified
version of the Wheatstone bridge called the Mueller bridge [1,18].

FIGURE 6.7 RTD sensing element configurations: (a) wire wound (coiled design), (b) wire wound (outer wound design), (c) thin film.

FIGURE 6.8 Relative resistance versus temperature of typical RTD materials and a thermistor.

For most metals, the resistance can be expressed as a polynomial function of the
temperature [19]:

\[ R(T) = R_0\left(1 + a_1 T + a_2 T^2 + \cdots + a_n T^n\right) \tag{6.27} \]

Typical values of R(T)/R_0 are shown in Figure 6.8.


The number of a_i constants is adopted based on the temperature range of application and
the required accuracy. The resistance of a platinum RTD follows the Callendar–Van Dusen
equation [1,20,21], valid from 75 up to ~933 K:

\[ T = \frac{R(T) - R(0)}{R(100) - R(0)}\,100 + \delta\left(\frac{T}{100} - 1\right)\left(\frac{T}{100}\right) + \beta\left(\frac{T}{100} - 1\right)\left(\frac{T}{100}\right)^3 \tag{6.28} \]

where
R(0) is the ohmic resistance of platinum measured in a saturated water–ice mixture (273.15 K)
R(100) is that measured in saturated steam at atmospheric pressure (373.15 K)

Temperatures in Equation 6.28 shall be input in degrees Celsius. The coefficients δ and β
should be determined experimentally for each RTD, through a proper characterization of the
sensor [1,22,23].
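Since T appears on both sides of Equation 6.28, inverting a resistance reading requires iteration. A minimal fixed-point sketch (the δ and β values are illustrative, of the order found for industrial platinum sensors; β is normally set to zero above 0°C):

R0, R100 = 100.0, 138.5        # ice-point and steam-point resistances, ohm
delta, beta = 1.5, 0.11        # sensor-specific coefficients (illustrative)

def cvd_temperature(R, tol=1e-8):
    T = (R - R0) / (R100 - R0) * 100.0            # linear first guess, °C
    for _ in range(100):
        T_new = ((R - R0) / (R100 - R0) * 100.0
                 + delta * (T / 100.0 - 1.0) * (T / 100.0)
                 + beta * (T / 100.0 - 1.0) * (T / 100.0) ** 3)   # Eq. 6.28
        if abs(T_new - T) < tol:
            return T_new
        T = T_new
    return T

print(f"R = 119.4 ohm -> T = {cvd_temperature(119.4):.2f} °C")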
Materials used for RTDs have resistance values ranging from 10 to 25,000 Ω. An important
aspect to be considered in practical applications is that the resistance characteristics are affected
by strain, pressure, and other environmental effects. This is partly avoided by the casing and
the protections in which the sensing element is embedded but should be further taken into
account when installing sensors in wind tunnel walls or test models, especially in the presence
of challenging flows.

Thermistors
Thermistors are electrical circuits assembled using semiconductor materials, such as metal-
lic oxides. Typical sensors are made of chromium, cobalt, nickel, titanium, or manganese
oxides. Compared to the conductors used in RTD sensors, they rely on the same working principle
but are characterized by a large negative coefficient of resistivity (Figure 6.8). These are
usually classified as negative temperature coefficient thermistors. Positive temperature coef-
ficient thermistors exist as well, but as they are mostly switching-type sensors, their use in fluid
dynamics measurements is limited.

Thermistors exhibit a monotonic decrease of resistance with increasing temperature, which
is usually expressed in the following form:

\[ R(T) = R_0\,e^{\beta\left(\frac{1}{T} - \frac{1}{T_0}\right)} \tag{6.29} \]

where
R_0 is the resistance at a reference temperature T_0 (usually 298 K)
β is a characteristic constant of the material

Their main advantage is a great temperature sensitivity, up to ten times higher than that of ther-
mocouples. Reference 8 provides further details on thermistor characteristics and operations.
Despite being mostly used in industrial and commercial applications, thermistors have been
applied for measurements of aerodynamic heating, in sounding rocketry, and as anemometers
in wind tunnel measurements [24,25].
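Inverting Equation 6.29 gives 1/T = 1/T_0 + ln(R/R_0)/β, which is how a thermistor reading is converted in practice. A minimal sketch with typical, but illustrative, commercial NTC values:

import math

R0, T0, beta = 10e3, 298.15, 3950.0   # reference resistance (ohm), temperature (K), beta (K)

def ntc_temperature(R):
    # inverse of Eq. 6.29 for the beta model
    return 1.0 / (1.0 / T0 + math.log(R / R0) / beta)

print(f"R = 5 kohm -> T = {ntc_temperature(5e3):.1f} K")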

6.5 Optical surface thermometry

The point measurements discussed thus far, which are of common use in aerodynamics exper-
iments, have major limitations when complex configurations are investigated. Even using
temperature sensors at high spatial frequency, with the impracticalities that this implies, only
limited information, confined to discrete locations, can be achieved. In order to provide a
resolved temperature mapping of surfaces and test models, full-field measurement techniques
have been developed. A widespread example, which is described later in Section 6.7, is IR
thermography, providing 2D temperature information based on the thermal radiative properties
of real bodies. In this section, optical techniques based on effects produced by temperature
changes at visual wavelengths (~0.4–0.7 μm) are discussed. The focus is on liquid crystals and
TSPs, which have become a popular method for heat transfer and temperature measurements
in fluid mechanics. Other techniques, such as phase-change coatings, are also available but are not
treated here.
Optical surface temperature methods deliver detailed, qualitative information. Quantitative
temperature field data can also be obtained if a proper calibration exercise is performed and
if computerized true color analysis of digital images is implemented. Differently from IR
thermography, which is a fully nonintrusive method, these techniques are based on optical
detection at a distance but require physical contact with surfaces and flows. Although altera-
tions of the flow features are very limited, limitations exist in terms of temperature and type
of atmosphere that can be handled.

Thermochromic liquid crystals
Liquid crystals are a mesomorphic state of matter, exhibiting properties of both liquids and
solids. Their mechanical behavior may present the viscosity and surface tension of a typical
fluid. Conversely, as optically anisotropic solids, they have a birefringent nature, that is, their
refractive index depends on the polarization and direction of light. Several classifications
can be adopted for liquid crystals, based on the chemical formulation, on the crystalline
structure, on the optical behavior, and on other parameters. A comprehensive description is
provided in [26].
Both cholesterol-derived mesophases, called cholesteric, and nonsterol components,
referred to as chiral nematics, or chemical formulations combining the two, exhibit sensitiv-
ity to temperature and change color as they are subjected to temperature variations. These are
classified as thermochromic liquid crystals (TLCs).
The sensitivity of TLCs to temperature occurs in the form of phase changes that depend
on their chemical composition. The reversibility and repeatability of these changes are attrac-
tive features for temperature measurements. Chemical composition modifications manifest
in the form of color changes at visible (VIS) wavelengths that can be related to temperature
or to other relevant flow quantities (Figure 6.9). Four phases are typically identified. Below a

FIGURE 6.9 (See color insert.) An example of TLCs' application to the study of the aerodynamic and thermal performance of a rotor blade cascade. (Adapted from Barigozzi, G. et al., Int. J. Heat Fluid Flow, 44(0), 563, 2013.)

certain temperature, TLCs have a crystalline arrangement near to that of solid crystals. Their
molecules, elongated and relatively rigid, are organized in a compacted fashion, with long
axes parallel to each other. The anisotropic structure is commonly described by a unit, dimen-
sionless vector, called the director. In such a nearly crystalline status, liquid crystals are optically
inactive and highly viscous. As the temperature increases, TLCs' molecules keep their paral-
lel organization but tend to arrange into a layered structure of planes or sheets. In this meso-
morphic phase, called smectic, they are still optically inactive. Optical activity is achieved
at higher temperatures, when TLCs enter the cholesteric mesophase. At these conditions, the
molecules' planes are twisted with respect to each other and arranged in a layered helicoidal
structure. Due to this architecture, the cholesteric mesophase acts as a diffraction grating for
the incident light at VIS wavelengths. The condition of light scattering by the lattice planes
(at interplanar spacing d) with maximum constructive interference is described by Bragg's
law. This postulates that the reflected wavelength λ is proportional to the chiral pitch p = 2d
of the helicoidal structure:

\[ \lambda = 2d\,n\sin(\theta) \tag{6.30} \]

where
θ is the angle of the incident light
n is the effective refractive index

The chiral pitch p is the distance for a full 360° rotation of the liquid crystal molecules.
As the director is the same at 0° and 180°, the periodicity of the phase is actually half a pitch.
A single color will be reflected for each pitch value. As p decreases with increasing tempera-
ture, light rays at shorter wavelengths (from red to violet) will be reflected. Beyond the
cholesteric phase, TLCs lose their optical activity again, as higher temperatures break the
crystalline structure, turning them into an isotropic liquid phase. The temperature at which
this occurs is known as the clearing point.
TLCs can be tailored to different operating bands, between 240 and 400 K. The color
play range depends on their composition and is typically between 1° and 5° for narrowband
formulations, up to 10°–20° for wideband ones. The former provide a higher accuracy and
are a convenient solution in transient measurements characterized by the passage of a single,
well-defined isotherm. Conversely, the latter are suitable to measure larger temperature gra-
dients over surfaces, at the expense of a lower accuracy. In selecting commercially available
crystals tailored to a defined temperature range, one shall remember that nominal specifications
usually assume a null angle between illumination and observation, which should be properly
converted using Equation 6.30 according to the actual configuration.
TLCs are usually available in three different forms: unsealed pure cholesteric materials,
slurries of encapsulated TLCs, and mechanically protected thin liquid crystal films. Protected
or encapsulated versions have the advantage of being less sensitive to chemical contamina-
tion, to mechanical effects (like wall shear stresses), and to electromagnetic interferences.
Using pure materials offers instead a twofold advantage: a better signal/noise (S/N) ratio
and the flexibility of adjusting the color play range by mixing TLCs with different clearing
points [28]. Moreover, they can be dissolved in organic solvents and conveniently applied by
spraying onto surfaces with complex geometrical features.
In several applications, TLCs are adopted as flow tracers. Pure materials can provide a good
solution for this purpose if opportunely dissolved in homogeneous and very diluted suspen-
sions and if a correct trade-off on the tracers' dimensions is adopted. Small tracers help in
minimizing buoyancy effects and guarantee correct transport with the flow pattern; large trac-
ers provide better detectability and a higher S/N ratio [28]. An alternative solution consists in
using encapsulated TLCs. These are realized by encapsulation of liquid crystal droplets into
microspheres of polymeric shells.
As TLC measurements are based on the detection of reflected light, particular attention
must be dedicated to the illumination of the setup and to the relative position of the captur-
ing camera. In order to provide homogeneous illumination and stable spectral characteristics,
bright halogen lamps or xenon flash tubes are used to provide collimated light. Light pulses
are used when performing particle image thermometry and particle image velocimetry (PIV)
(Chapter 10) based on TLCs as tracers for combined temperature and velocity measurements.
The angular position of the observing camera with respect to the incident light is handled
through a dedicated calibration process.
Calibrating TLCs for accurate, quantitative measurements is a challenging task. Early
developed methods were based on the identification of isothermal lines by means of interfer-
ential band-pass filters [29]. However, these methods are unsuitable for transient and turbulent
flows. More recently, since the common practice is to use CCD cameras for image detection,
the calibration essentially consists in analyzing the light image captured by the CCD and
decomposing the color field into the basic components of the trichromatic red–green–blue
(RGB) signal. The hue (color) identification is performed by converting the RGB decomposi-
tion into the corresponding HSI (hue, saturation, intensity) decomposition. The temperature
is then determined by a calibration function relating hue and T. A good practice to overcome
the sensitivity of the technique to the several influencing factors, like the color of the light source,
observation angle, and scattering properties of the TLCs, is to determine several calibration
functions restricted to small portions of the measurement domain. Further details on different
calibration methods are reviewed in [28,30].
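In its simplest form, the hue-based conversion can be sketched as follows (the hue-temperature calibration points are invented; real calibrations are built by imaging the TLC layer at known temperatures, possibly per subregion as discussed above):

import colorsys
import numpy as np

hue_cal = np.array([0.02, 0.10, 0.25, 0.40, 0.55])   # hue at calibration points (invented)
T_cal = np.array([30.0, 31.0, 32.5, 34.0, 35.5])     # corresponding temperatures, °C

def pixel_temperature(r, g, b):
    # convert an 8-bit RGB triplet to hue, then interpolate the calibration curve
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return np.interp(h, hue_cal, T_cal)               # monotonic hue-T relation assumed

print(f"{pixel_temperature(40, 200, 90):.2f} °C")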
Liquid crystal thermography is widely applied in aerodynamic measurements, thanks
to its capability of providing 2D field measurements of temperature and heat transfer. Its
primary advantage over competing techniques such as IR thermography is the lower cost.

The technique has been extensively reviewed in the literature [12,28,31–37]. Examples of
applications include measurements of cylinders in cross-flow, impinging jets, turbine blades,
ribbed configurations, and transient measurements in hypersonic flows.
Figure 6.9 shows an example of the application of TLCs to the study of the aerodynamic and
thermal performance of a gas turbine rotor blade cascade [27]. TLCs are used to map film
cooling effectiveness and to study how this is affected by purge flow discharged from an axial
gap between the rotor and stator platforms, by means of a finned configuration. The adiabatic effec-
tiveness η is expressed as the ratio (T_ad − T_∞)/(T_c − T_∞), where T_ad, T_c, and T_∞ are the adiabatic,
cooling flow, and freestream temperatures, respectively. The η contours shown in the bottom
chart of Figure 6.9 for one of the studied configurations highlight the capability of TLCs to
map 2D cooling phenomena.

Temperature-sensitive paints
The method of TSP, also known as phosphor thermometry, is based on the thermal sensitivity
of the luminescence of phosphors. In its basic form, it involves the observation of a surface
where phosphors are deposited, by means of an optical system like a CCD camera.
Phosphors are usually doped transition metal compounds or rare earth compounds. Common
examples are europium-doped yttria (Eu:Y2O3) or yttrium orthovanadate (Eu:Y2VO4), which were
used to give the red color in old television tubes. Phosphor luminescence exhibits temperature
dependency in the spectral distribution of emitted energy, that is, different temperatures lead
to different colors of emitted light. The analysis of the emitted spectrum allows, therefore, the
quantification of temperature through an opportune calibration. Intensity methods are based
on the evaluation of the total energy of the emission spectrum at different temperatures. A
valid alternative consists in evaluating the temperature-dependent fluorescence lifetime, that
is, the temporal decay of luminescence.
As an alternative to phosphors, other TSP methods use organic sensors. These exhibit
optical behaviors similar to those of phosphors but cannot be used in harsh environments, such
as hypersonic wind tunnels or combustion facilities.
Together with the imaging system and the paint, a TSP measurement chain is equipped
with a high-energy light source, such as xenon flashlamps or light-emitting diode arrays, to
excite the paint. Luminescent molecules of TSPs, also called luminophors, are embedded into
a binding matrix. When struck by incident light in a certain wavelength range, they are
brought from the ground state to an excited electronic state. As the excited electrons go back to
the ground state, radiation is emitted at a shifted (longer) wavelength with respect to the exciting
light (Stokes shift). The deactivation process of excited states is called thermal quenching.
The luminescence of the TSP layer decreases with increasing temperature because of the
increased frequency of collisions, turning higher-temperature regions into darker areas in an
image. Deactivation of excited states can also occur through oxygen quenching, as in the case
of pressure-sensitive paints (Chapter 5), requiring, however, the binder to be permeable to
oxygen.
Different imaging methods such as RGB color evaluation, filtered black/white image ratios,
or wind-on versus wind-off image comparisons can be used. A comprehensive description is
provided by Kowalewski et al. [28].
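As a hypothetical illustration of the wind-on versus wind-off ratio method (the calibration data are invented; real TSP calibrations are paint- and setup-specific):

import numpy as np

ratio_cal = np.array([1.00, 0.90, 0.80, 0.70])    # I/I_ref at calibration points (invented)
T_cal = np.array([293.0, 300.0, 308.0, 317.0])    # corresponding temperatures, K

def tsp_temperature(I, I_ref):
    # thermal quenching: the intensity ratio drops as temperature rises
    ratio = I / I_ref
    # negate both axes so the abscissa is increasing, as np.interp requires
    return np.interp(-ratio, -ratio_cal, T_cal)

I = np.array([[0.95, 0.85], [0.88, 0.75]])         # wind-on image (arbitrary units)
I_ref = np.ones_like(I)                            # wind-off image at the reference temperature
print(tsp_temperature(I, I_ref))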
TSP measurements offer sensitivities of 0.05 K and an accuracy of 0.1%–5%, and they have the
appealing property of providing emissivity-independent measurements.
Different paint formulations can be used to work from cryogenic temperatures as low as 80 K
up to temperatures of 2000 K. A typical bandwidth is of the order of 100 K. TSPs' resolution
is usually related to the resolution of the detecting instrument.
TSPs are a very powerful tool for aerodynamic measurements in wind tunnels, as they pro-
vide temperature distributions over 3D surfaces and are able to work under a broad spectrum
of temperatures. They are largely used in cryogenic measurements and in laminar-to-turbulent
transition experiments. They are also commonly used in turbomachinery, for combustion
testing, and in turbine aerodynamics. For those applications, they offer very attractive
advantages over IR thermography, such as the possibility of eliminating from the temperature mea-
surements the contributions of the gas radiation, by detecting luminescence at wavelengths at
which that radiation is negligible [28].

6.6 Radiation thermometry

Radiation temperature measurements are based on the detection of the thermal energy emitted by
an object, without the need of physical contact. Thermal radiation is generated by the motion
of charged particles in matter and is transported in the form of electromagnetic waves, indif-
ferently through a medium or empty space. Quantum theory describes thermal energy as
carried by discrete particles named photons. The energy e of a single photon is proportional
to the speed of light and inversely proportional to the radiation wavelength λ. The majority of
the thermal energy dealt with in terrestrial applications lies between 100 nm and 1 m wave-
lengths. In the radiation spectrum (Figure 6.10) this region is occupied, for the most part, by
VIS (390–700 nm) and IR (0.7–1000 μm) radiation. The IR band is further divided into near
infrared (NIR, 0.75–1.4 μm), short-wavelength infrared (1.4–3 μm), midwavelength infrared
(MWIR, 3–8 μm), long-wavelength infrared (LWIR, 8–15 μm), and far infrared (15–100 μm).
Radiation thermometry is performed from VIS to LWIR frequencies, with some instruments
working up to ~40 μm.
This section recalls the underlying principles of thermal radiation and discusses the main
methods used to perform radiation temperature measurements. A comprehensive reference on
the topic is the book by Modest [29]. The chief technique in radiation thermometry, that is,
IR thermography, is treated separately in Section 6.7.

Fundamentals of thermal radiation
The basic principle of radiation measurements is that any body at a temperature above
absolute zero (0 K) emits energy in the form of electromagnetic radiation. An ideal radiator,
which is a body able to absorb all the radiation received and to emit the maximum possible
thermal radiation at one temperature, is called a blackbody. The radiative behavior of a black-
body, under vacuum conditions, is described by Planck's law, providing the spectral radiance*
L^0_λ per unit solid angle as

\[ L^0_\lambda = \frac{2h_P c_0^2}{\lambda^5\left(e^{h_P c_0/(k_B \lambda T)} - 1\right)} \tag{6.31} \]

where h_P = 6.6260693 × 10^−34 J·s is the Planck constant, k_B = 1.380658 × 10^−23 J/K the
Boltzmann constant, and c_0 = 2.99792458 × 10^8 m/s the speed of light in vacuum. Note that

FIGURE 6.10 Radiation spectrum.

* In the literature on thermal radiation, the radiance is also referred to as radiative intensity, emissive power, or radiant
energy.

the subscript "0" refers to vacuum conditions, while the superscript "0" refers to the black-
body. For a generic medium, the speed of light depends on the refractive index n of the
medium, according to

\[ c = \frac{c_0}{n} \tag{6.32} \]

Expressing the wavelength in m, the blackbody spectral radiance L^0_λ in Equation 6.31 has
the dimension of W/(m^3·sr). It is also often referred to as the emissive power and noted as E^0_λ.
Integrating Planck's law over the entire solid angle, one obtains the spectral hemispherical
radiance L^{0∩}_λ in W/m^3 shown in Figure 6.11. It can be noticed that the maximum radiation
intensity moves toward longer wavelengths for decreasing body temperatures. This behavior
is described by Wien's law:

\[ \lambda_{max} = \frac{b}{T} \tag{6.33} \]

where T is the temperature in K and b = 2897.773 μm·K is Wien's displacement con-
stant. At the peak wavelength, the maximum emissive power is proportional to the fifth power of
temperature:

\[ L^0_{max} = 4.0958 \times 10^{-12}\,T^5 \tag{6.34} \]

The integration of Planck's law over the entire spectrum (i.e., ∀λ ∈ [0, ∞]) and over the
whole hemisphere (i.e., for every polar angle ∀θ ∈ [0, π/2] and every circumferential angle
∀ϕ ∈ [0, 2π]) yields the well-known Stefan–Boltzmann law for the blackbody, which states the
proportionality of the total radiance L^{0∩}_tot to the fourth power of the temperature:

\[ L^{0\cap}_{tot} = \int_{\phi=0}^{2\pi}\int_{\theta=0}^{\pi/2}\int_{\lambda=0}^{\infty} L^0_\lambda \cos\theta\,\sin\theta\; d\lambda\, d\theta\, d\phi = \sigma_{SB}\,T^4 \tag{6.35} \]

where \(\sigma_{SB} = \frac{2k_B^4\pi^5}{15c_0^2 h_P^3} = 5.670400 \times 10^{-8}\ \mathrm{W/(m^2\,K^4)}\) is the Stefan–Boltzmann constant. L^{0∩}_tot
is dimensionally a power per unit area. It is also indicated as L_tot to underline that it is a total
quantity.
FIGURE 6.11 Spectral hemispherical radiance for a blackbody at different temperatures.



Note that the integration mentioned earlier is only valid for a radiative source whose
radiance is independent of the angle (the emitted power per unit area, per unit solid angle, is the
same). Such a source is called Lambertian, from Lambert's cosine law of reflection. An ideal
blackbody is perfectly Lambertian. Often the superscript "∩" is omitted in the nomenclature of
hemispherical quantities. One shall always pay attention to the units used to express radiance val-
ues, carefully evaluating whether they refer to a single direction or to the hemisphere and whether
they are spectral or total quantities.
Since radiation instruments are usually in-band operating devices, rather than integrat-
ing over the whole spectrum, it is useful to evaluate the total radiance within a waveband
Δλ = [λ_a, λ_b]:

\[ L^0_{\Delta\lambda} = \int_{\lambda_a}^{\lambda_b} L^0_\lambda\, d\lambda = \int_{\lambda_a}^{\infty} L^0_\lambda\, d\lambda - \int_{\lambda_b}^{\infty} L^0_\lambda\, d\lambda \tag{6.36} \]

In order to compute the RHS integrals, one can follow the integration by Widger and Woodall
[38] and express Planck's law as a function of the wavenumber ν = 1/λ. From L^0_λ dλ = L^0_ν dν,
Equation 6.31 becomes

\[ L^0_\nu = \frac{2h_P c_0^2\,\nu^3}{e^{h_P c_0 \nu/(k_B T)} - 1} \tag{6.37} \]

and Equation 6.36

\[ L^0_{\Delta\nu} = \int_{\nu_b}^{\infty} L^0_\nu\, d\nu - \int_{\nu_a}^{\infty} L^0_\nu\, d\nu \tag{6.38} \]

Let

\[ x = \frac{h_P c_0 \nu}{k_B T}, \qquad dx = \frac{h_P c_0}{k_B T}\, d\nu \tag{6.39} \]

one has

\[ \int_\nu^\infty L^0_\nu\, d\nu = 2h_P c_0^2 \left(\frac{k_B T}{h_P c_0}\right)^3 \int_\sigma^\infty \frac{x^3}{e^x - 1}\,\frac{k_B T}{h_P c_0}\, dx \tag{6.40} \]

with σ = ξ(ν). Noting that \(1/(e^x - 1) = \sum_{n=1}^{\infty} e^{-nx}\), one gets

\[ \int_\nu^\infty L^0_\nu\, d\nu = \frac{2k_B^4 T^4}{h_P^3 c_0^2} \sum_{n=1}^{\infty} \int_\sigma^\infty x^3 e^{-nx}\, dx \tag{6.41} \]

The remaining integral can be integrated by parts:

\[ \int_\sigma^\infty x^3 e^{-nx}\, dx = \left(\frac{x^3}{n} + \frac{3x^2}{n^2} + \frac{6x}{n^3} + \frac{6}{n^4}\right) e^{-nx}\,\bigg|_{x=\sigma} \tag{6.42} \]

Hence,

\[ \int_\nu^\infty L^0_\nu\, d\nu = \frac{2k_B^4 T^4}{h_P^3 c_0^2} \sum_{n=1}^{\infty} \left[\left(\frac{x^3}{n} + \frac{3x^2}{n^2} + \frac{6x}{n^3} + \frac{6}{n^4}\right) e^{-nx}\right] \tag{6.43} \]

or, substituting x = h_P c_0 ν/(k_B T),

\[ \int_\nu^\infty L^0_\nu\, d\nu = \frac{2k_B^4 T^4}{h_P^3 c_0^2} \sum_{n=1}^{\infty} \left[\left(\frac{\left(h_P c_0 \nu/(k_B T)\right)^3}{n} + \frac{3\left(h_P c_0 \nu/(k_B T)\right)^2}{n^2} + \frac{6\,h_P c_0 \nu/(k_B T)}{n^3} + \frac{6}{n^4}\right) e^{-n h_P c_0 \nu/(k_B T)}\right] \tag{6.44} \]

From Equations 6.38 and 6.44, one can directly calculate the in-band radiance. A more
straightforward approach, still sufficiently accurate in practical applications, consists in using
the intermediate value theorem to compute the mean radiance [39].
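In practice, the in-band radiance can also be checked by direct numerical quadrature of Planck's law, which avoids the series altogether. A minimal sketch:

import numpy as np

h, kB, c0 = 6.6260693e-34, 1.380658e-23, 2.99792458e8

def planck(lam, T):
    """Blackbody spectral radiance of Eq. 6.31, W/(m^3 sr)."""
    return 2.0 * h * c0**2 / (lam**5 * (np.exp(h * c0 / (kB * lam * T)) - 1.0))

def inband_radiance(lam_a, lam_b, T, n=20000):
    # trapezoidal quadrature of Eq. 6.36 over [lam_a, lam_b]
    lam = np.linspace(lam_a, lam_b, n)
    y = planck(lam, T)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(lam)))   # W/(m^2 sr)

# Example: 8-15 um (LWIR) band of a 300 K blackbody
L = inband_radiance(8e-6, 15e-6, 300.0)
L_tot = 5.670400e-8 * 300.0**4 / np.pi   # total radiance of a Lambertian blackbody
print(f"in-band: {L:.1f} W/(m^2 sr), fraction of total: {L / L_tot:.2f}")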
The blackbody concept is a theoretical abstraction. Real bodies instead emit less at the
same temperature. This is taken into account through the concept of emissivity. The spectral
emissivity ε_λ of a certain body is the ratio between its actual radiance L_λ and the blackbody
one L^0_λ, at the same temperature and wavelength:

\[ \varepsilon_\lambda = \frac{L_\lambda}{L^0_\lambda} \tag{6.45} \]

The emissivity of real bodies is nondimensional and always lower than one. A similar expres-
sion can be written for hemispherical and total quantities. With the introduction of emissivity,
Planck's law reads

\[ L_\lambda = \frac{2h_P c_0^2\,\varepsilon_\lambda}{\lambda^5\left(e^{h_P c_0/(k_B \lambda T)} - 1\right)} \tag{6.46} \]
A body whose spectral emissivity is constant for all wavelengths is called a graybody. Many
materials behave like graybodies within certain bandwidths. In such cases, the total emissivity
can be determined independently from the spectral one. Given a wavelength band Δλ = [λ_a, λ_b],
the in-band emissivity can be determined as

\[ \varepsilon_{\Delta\lambda} = \frac{\displaystyle\int_{\lambda_a}^{\lambda_b} \varepsilon_\lambda \left[\lambda^5\left(e^{h_P c_0/(k_B \lambda T)} - 1\right)\right]^{-1} d\lambda}{\displaystyle\int_{\lambda_a}^{\lambda_b} \left[\lambda^5\left(e^{h_P c_0/(k_B \lambda T)} - 1\right)\right]^{-1} d\lambda} \tag{6.47} \]

For a graybody, this is equivalent to the spectral emissivity.


An accurate measurement of spectral and total emissivity of surfaces is a dificult exercise.
Dedicated instruments or laboratory setups exist for such a purpose. Further complications
arise when high temperatures are involved. The reader is addressed to dedicated articles and
textbooks available in the literature.
In practical radiometric measurements, a value of emissivity needs to be chosen to correct
for a nonblackbody behavior of the target. The type of emissivity used should be suitable for
that particular type of instrument. A correct choice of emissivity implies several issues, since
it depends on different factors as body temperature, wavelength, polar angle, and roughness
of the surface. Best practice is, when possible, to consider emissivity through a calibration of
the measurement device.
According to Maxwell's electromagnetic theory, the spectral emissivity of real bodies is
related to the optical constants of the material, namely, the refractive index n and the extinction
coefficient κ. n and κ are the real and imaginary parts of the complex refractive index n̄ = n + iκ
and depend on temperature, wavelength, and electrical properties of the material. Dielectric
materials, like paints, oxides, and most liquids, have an infinite electrical resistance

(insulators), and their extinction coefficient κ is zero. For these materials, the normal spectral
emissivity is given by [6]

\[ \varepsilon_{\lambda\perp} = \frac{4n}{(n + 1)^2} \tag{6.48} \]
In this case, if n(λ) is known (a good reference is [42]), one can easily extract ε_λ⊥ and, integrat-
ing over the wavelength, compute the total normal value ε⊥. However, for radiative transfer
calculations, one uses the total hemispherical value ε∩, thus needing the dependence of ε_λ on
the polar angle θ (the dependence on the circumferential angle ϕ can usually be neglected).
From the electromagnetic theory, this is given by

\[ \varepsilon_\lambda(\theta) = \frac{2a\cos\theta}{\left(a + \cos\theta\right)^2}\left[1 + \left(\frac{n}{a\cos\theta + \sin^2\theta}\right)^2\right] \tag{6.49} \]

with \(a = \sqrt{n^2 - \sin^2\theta}\).
Equations 6.48 and 6.49 for metallic materials read [6]

\[ \varepsilon_{\lambda\perp} = \frac{4n}{(n + 1)^2 + \kappa^2} \tag{6.50} \]

and

\[ \varepsilon_\lambda(\theta) = 2n\cos\theta\left[\frac{1}{(\kappa\cos\theta)^2 + (1 + n\cos\theta)^2} + \frac{1}{\kappa^2 + (n + \cos\theta)^2}\right] \tag{6.51} \]
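Equations 6.48 through 6.51 are straightforward to evaluate numerically; the sketch below (optical constants are illustrative) reproduces the normal-incidence limits of Equations 6.48 and 6.50 at θ = 0:

import numpy as np

def eps_dielectric(theta, n):
    # Eq. 6.49 with a = sqrt(n^2 - sin^2(theta)); reduces to Eq. 6.48 at theta = 0
    a = np.sqrt(n**2 - np.sin(theta)**2)
    return (2.0 * a * np.cos(theta) / (a + np.cos(theta))**2
            * (1.0 + (n / (a * np.cos(theta) + np.sin(theta)**2))**2))

def eps_metal(theta, n, kappa):
    # Eq. 6.51; reduces to Eq. 6.50 at theta = 0
    c = np.cos(theta)
    return 2.0 * n * c * (1.0 / ((kappa * c)**2 + (1.0 + n * c)**2)
                          + 1.0 / (kappa**2 + (n + c)**2))

theta = np.radians([0.0, 30.0, 60.0, 80.0])
print(eps_dielectric(theta, n=1.5))        # glass-like directional behavior
print(eps_metal(theta, n=3.0, kappa=4.0))  # metal-like directional behavior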

Figure 6.12 shows typical distributions of directional emissivity for metallic and nonmetallic
surfaces. Together with emissivity, other important concepts need to be taken into consideration
when evaluating the radiative transfer for real bodies. These are the transmissivity τ, reflectivity
ρ, and absorptivity α, also referred to as transmittance, reflectance, and absorptance, respec-
tively. They quantify the fractions of incident radiation that are transmitted through, reflected, and
absorbed by the body. Obeying conservation of energy, one can write

\[ \alpha + \rho + \tau = 1 \tag{6.52} \]

Similar to the emissivity, these properties depend on several parameters, such as surface finish,
temperature, wavelength, and direction. For a blackbody, the transmissivity and reflectivity
are null; hence α^0 = ε^0 = 1. A medium with zero transmissivity is instead defined as opaque.
When a body has an emissivity coefficient that does not depend on the angle between the nor-
mal direction to the surface and the direction of the emitted radiation, it is defined as a diffuse
emitter. For such a body Kirchhoff's law applies, stating the equivalence between spectral
emissivity and spectral absorptivity, ε_λ = α_λ. Thus, for an opaque diffuse emitter one can write

\[ \varepsilon_\lambda + \rho_\lambda = 1 \tag{6.53} \]

In the design of radiative thermometry measurements, reflectivity and transmissivity play a
fundamental role. Reflectivity represents a source of disturbance in many setups, where the ther-
mal radiation of a target model can be significantly contaminated by the reflected surroundings.
Polished or nonoxidized metal surfaces, which often appear in wind tunnel models or walls,
typically have high reflectivities (and therefore low emissivities, according to Equation 6.53). As a
consequence, attempts to measure their surface temperature with radiation methods are a chal-
lenging exercise, easily prone to large errors. In such cases, an effective solution is to coat the
target surface with a thin layer of high-emissivity paint. Reflective properties of a surface are

Figure 6.12 Direction distributions of emissivity for (a) nonmetallic and (b) metallic surfaces. (From Cardone, G. et al., Exp. Fluids, 52(2), 375, 2012; Schmidt, E. and Eckert, E.R.G., Über die Richtungsverteilung der Wärmestrahlung von Oberflächen, Forschung Geb. D. Ingenieurwes., Vol. 6, 1935.)

nonetheless very desirable in certain situations. For instance, in the absence of direct optical access to a test section, the reflective properties of mirrors can be exploited to build alternative measurement pathways. In both cases, a correct calibration process needs to be implemented to account for reflection disturbances and the signal attenuation of mirrors.
Spectral reflectivities of typical materials used as mirrors are shown in Figure 6.13. It is remarked that the quality of a metal reflector is very sensitive to the atmospheric conditions of operation. A polluted atmosphere, overheating, or intense exposure to UV radiation can


Figure 6.13 Spectral reflectivity of IR mirrors. (Data from www.newport.com, Newport Corporation, Irvine, CA.)


Figure 6.14 Spectral transmissivity of materials used for IR windows. (Data from www.crystaltechno.com, Crystaltechno Ltd., Moscow, Russia.)

cause deterioration of the reflective properties and require frequent recalibration or replacement of the optics.
The transmissivity of crystals is a fundamental property for choosing the optical access windows to the test section of confined wind tunnels. It is necessary to select a suitable material with sufficiently high transmissivity over the whole wavelength range of the instrument used. Attenuation of the thermal radiation shall be quantified by means of a proper calibration. The transmissivity of materials commonly used as windows in radiation measurements is shown in Figure 6.14. As for mirrors, particular attention should be paid to the degradation of the optical properties of windows when these operate in polluted atmospheres.
The transmissivity of crystals is also important for building the optical system (lenses, filters, etc.) of the instruments (see also “Optical system” section). Radiation thermometers are usually accompanied by a manufacturer calibration that accounts for the transmissivity of the lenses and filters in front of the detector.
For the purpose of actual measurements, the transmissivity of the gaseous medium through which the thermal radiation is observed should be considered as well. Gas molecules may absorb and reradiate at different wavelengths a significant amount of thermal radiation. Scattering effects may also occur in the presence of particle-laden flows. This process, known as Rayleigh scattering, scales with λ⁻⁴ (meaning that shorter wavelengths are scattered more strongly than longer wavelengths), thus being negligible for wavelengths longer than ~2 μm.
With the exception of very specific applications, radiation thermometry is usually performed in atmospheric air. An example of atmospheric air IR spectral transmissivity is shown in Figure 6.15. The presence of water vapor and carbon dioxide is responsible for high-absorption regions where IR measurements would not be feasible. The majority of radiation thermometers work in the 8–14 μm or the 3–5 μm wavebands, where the air is sufficiently


Figure 6.15 Transmissivity of atmospheric air and main absorbing molecules.



transparent. The former is preferred for high-performance thermal imaging; however, shorter wavelengths offer advantages in terms of smaller optics and better performance at very high temperatures.
Having provided a definition of the radiation properties, the thermal balance of an object subjected to an incident radiant flux E can be considered. This flux is the sum of the emitted, reflected, and transmitted radiation. Nonblackbodies emit a fraction of the blackbody radiation corresponding to εL⁰, while the remaining fraction is either reflected, if the object is opaque, or partly reflected for a transmitting medium. For a diffuse, opaque graybody, the sum of the emitted radiation εL⁰ and the reflected portion of E is referred to as the radiosity J, given as

$$J = \varepsilon L^0 + (1-\varepsilon)E \tag{6.54}$$

In the presence of surfaces facing each other at arbitrary angles and exchanging thermal radiation, the incident flux E_i received by a surface i needs to account for the “way” the surface itself “sees” the surrounding surfaces. This is done by introducing the view factor F_{j→i}, which quantifies the portion of radiation that leaves a surface j of area A_j and strikes the surface i, given as

$$E_{j\to i} = F_{j\to i}\,A_j\,J_j \tag{6.55}$$

As view factors satisfy the reciprocity condition, which is F_{j→i} A_j = F_{i→j} A_i, the total flux received by the surface i from all the surrounding j surfaces is

$$E_i = \frac{\sum_j F_{j\to i}\,A_j\,J_j}{A_i} = \sum_j F_{i\to j}\,J_j \tag{6.56}$$

With Equation 6.56, Equation 6.54 becomes

$$J_i = \varepsilon_i L^0 + (1-\varepsilon_i)\,E_i = \varepsilon_i L^0 + (1-\varepsilon_i)\sum_j F_{i\to j}\,J_j \tag{6.57}$$

Finally, the net radiative heat transfer Q_i at a surface i of area A_i is the difference between the outgoing energy flux J_i and the incident one E_i, that is,

$$Q_i = A_i\,(J_i - E_i) = \frac{A_i\,\varepsilon_i}{1-\varepsilon_i}\left(L^0 - J_i\right) \tag{6.58}$$
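For an enclosure of gray, diffuse, opaque surfaces, Equation 6.57 is a linear system in the radiosities, and Equation 6.58 then yields the net heat transfer at each surface. A minimal sketch follows (Python; the areas, emissivities, temperatures, and view factors of the three-surface enclosure are hypothetical example values chosen to satisfy the reciprocity condition, and L⁰ is taken as the blackbody emissive power σ_SB T⁴).

```python
import numpy as np

SIGMA_SB = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

# Hypothetical three-surface enclosure: areas (m^2), emissivities, temperatures (K)
A   = np.array([1.0, 1.0, 2.0])
eps = np.array([0.8, 0.5, 0.9])
T   = np.array([1000.0, 600.0, 300.0])
L0  = SIGMA_SB * T**4  # blackbody emissive power of each surface

# Hypothetical view factors F[i, j] = F_{i->j}; rows sum to 1 and
# reciprocity F_{i->j} A_i = F_{j->i} A_j holds
F = np.array([[0.0, 0.4, 0.6],
              [0.4, 0.0, 0.6],
              [0.3, 0.3, 0.4]])

# Equation 6.57 rearranged: (I - diag(1 - eps) F) J = eps * L0
J = np.linalg.solve(np.eye(3) - (1.0 - eps)[:, None] * F, eps * L0)

# Equation 6.58: net radiative heat transfer at each surface
Q = A * eps / (1.0 - eps) * (L0 - J)
print(J)
print(Q, Q.sum())  # the net exchanges balance to ~0 for a closed enclosure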

Radiation thermometers

Compared to the methods discussed in the previous sections, such as thermocouples or resistance thermometers, radiation instruments present a unique advantage: they are nonintrusive devices, that is, they do not require physical contact with the target medium. Being able to measure at a distance implies several advantages. For example, in the case of very hot surfaces, the instrument does not need to be at the same temperature and hence does not need to withstand the temperature of the measurand. Other benefits exist when dealing with fast-moving bodies or when the scanning of temperature gradients is desired.
Radiation thermometers are broadly called radiometers. While they all operate under the same principles, technical solutions can be very different from device to device, and different nomenclatures can be adopted. The main classification is between point-measurement instruments, providing information restricted to a very small region (such as pyrometers), and mapping instruments (like IR thermocameras), providing instead a 2D distribution of the temperature. This section focuses on the first class, recalling a few of the applications where point instruments can be used in aerodynamic experiments, while a dedicated section is reserved later to discuss IR thermography. The second broad classification is based on the operating spectral range of each instrument.

Large-band and narrowband pyrometers

Large-band pyrometers measure the temperature over broad wavelength intervals of the radiation spectrum. Examples of typical bands utilized are 0.1–5 μm and 8–12 μm. Some instruments can cover the whole 0.6–39 μm range. Nonmodulated models make use of blackened thermopiles as sensors and internal lenses or mirrors to focus the radiation. The thermopile sensor can be made either of only a few junctions, providing a very high sensitivity, or of up to 20–30 sensing elements if the need to measure high temperatures is privileged over a fast response. Lenses and/or mirrors provide these instruments with the ability to resolve small targets at far distances. A more popular version is represented by broadband radiometers using rotating choppers to interrupt the radiative flux to the sensor at a predetermined frequency. Sensors can be of either thermopile or photonic type. The modulation of the incoming radiation allows easier signal amplification and improved sensitivity.
Narrowband pyrometers use photonic sensors, providing an electrical output proportional to the flux of photons carried by the incident radiation (further details on the working principles of photonic sensors can be found in the “Optical system” section, where IR detectors are discussed). These instruments typically work over 0.2–0.6 μm wide bands, centered at different wavelengths (~1 μm, 2 μm, 5 μm, etc.) depending on the type of detector. Compared to large-band instruments, they enable very short response times, with the possibility of detecting transients down to 10 μs.
Both large-band and narrowband instruments cover large temperature ranges, from room temperature up to 3300 K, with an accuracy as high as 0.5% of the full scale. They are usually accompanied by factory calibrations, performed using blackbody radiation sources, assuming ε∩ = 1. The correct (in-band) emissivity of the target surface needs to be taken into account when converting the instrument output in order to obtain the temperature with high accuracy. Further details on narrowband and broadband radiometers can be found in dedicated textbooks [7,43]. Operational options (wavelength, temperature, distance, etc.) are virtually unlimited and can be advised ad hoc by the manufacturer.

Monochromatic optical pyrometers

The operating principles of optical pyrometers, also known as brilliance pyrometers, date back to the early nineteenth century. The classical instrument is based on the optical brightness of a lamp filament in the visible red spectrum (at ~0.655 μm).
A schematic of a disappearing filament pyrometer is presented in Figure 6.16 [44]. The operator adjusts the power of the filament, observed through the eyepiece, changing its color until it matches that of the target. An alternative design maintains a constant current through the filament and adjusts the target’s brightness by means of an adjustable absorbing optic. The object temperature is related to the amount of energy absorbed by the optic. Obviously, the accuracy of the measurement strictly depends on the stability of the lamp and on the individual characteristics of the observer’s eye. Both are usually sources of significant errors.


Figure 6.16 Schematic of a manual disappearing filament pyrometer. (From Prokhorov, A.M., Bol’shaia sovetskaia entsiklopediia [The Great Soviet Encyclopedia], Izd-vo “Sovetskaia entsiklopediia”, Moskva, Russia, 1970.)


Figure 6.17 Schematic of a photoelectric pyrometer. (From Prokhorov, A.M., Bol’shaia sovetskaia entsiklopediia [The Great Soviet Encyclopedia], Izd-vo “Sovetskaia entsiklopediia”, Moskva, Russia, 1970.)

Today’s optical pyrometers use an electrical radiation detector, adapted to the IR range,
comparing the amount of incident radiation with that emitted by an internally controlled ref-
erence source. The output is proportional to the difference in radiation between the target
and the reference. A chopper, driven by a motor, is used to alternately expose the detector to
incoming and reference radiation.
An example of an automatic pyrometer is the photoelectric pyrometer shown in Figure 6.17. The photoelectric cell is exposed alternately to the radiation of the measurement target and to a reference radiation source (lamp). As long as the respective brightnesses of the two are different, an alternating current is produced in the circuit of the photoelectric cell; the amplitude of this component is proportional to the difference in brightness. To obtain a measurement of the actual target temperature, the filament current of the lamp is regulated in such a way that the alternating component of the photocurrent becomes equal to zero.
Optical pyrometers provide typical accuracies on the order of 1%–2% of full scale, enabling measurements at temperatures as high as 4000 K. As for narrow- and large-band devices, the accuracy is related to the knowledge of the target’s emissivity at the reference wavelength.

Two-color pyrometer

A suitable solution to perform measurements independent of the emissivity is to use the two-color working scheme. The operating principle is based on the graybody assumption. The spectral radiance Lλ is detected at two different wavelengths λ1 and λ2, and their ratio is used as a measurement of temperature. For temperatures below 3200 K, the approximation $e^{h_P c_0/(k_B \lambda T)} \gg 1$ holds true with errors below 1%; thus, from Equation 6.46, one can write

$$L_{\lambda_1} = \frac{2 h_P c_0^2\,\varepsilon_{\lambda_1}}{\lambda_1^5\,e^{h_P c_0/(k_B \lambda_1 T)}}, \qquad L_{\lambda_2} = \frac{2 h_P c_0^2\,\varepsilon_{\lambda_2}}{\lambda_2^5\,e^{h_P c_0/(k_B \lambda_2 T)}} \tag{6.59}$$

Dividing the two quantities, one has

$$\frac{L_{\lambda_1}}{L_{\lambda_2}} = \frac{\varepsilon_{\lambda_1}}{\varepsilon_{\lambda_2}}\left(\frac{\lambda_2}{\lambda_1}\right)^5 e^{\frac{h_P c_0}{k_B T}\left(\frac{1}{\lambda_2}-\frac{1}{\lambda_1}\right)} \tag{6.60}$$

Under the graybody assumption ελ1 = ελ2, hence

$$\frac{L_{\lambda_1}}{L_{\lambda_2}} = \left(\frac{\lambda_2}{\lambda_1}\right)^5 e^{\frac{h_P c_0}{k_B T}\left(\frac{1}{\lambda_2}-\frac{1}{\lambda_1}\right)} \tag{6.61}$$

from which one can directly extract the temperature without needing the emissivity. Two-color pyrometers are usually designed to work over two partially overlapping narrowbands (typically around 1 μm), where the graybody approximation is satisfied with negligible errors. Nongray behaviors of some materials can be dealt with by biasing the ratio of the wavelengths accordingly. Modern devices allow multiwavelength measurements.
Ratio thermometers cover wide ranges up to 3500 K with accuracies on the order of 1%–2% of full scale.
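Equation 6.61 can be inverted in closed form for the temperature. The sketch below (Python; the working wavelengths near 1 μm and the 2000 K test temperature are assumed example values) performs the inversion and verifies it by round-tripping a synthetic radiance ratio.

```python
import numpy as np

H_P = 6.626e-34  # Planck constant, J s
C_0 = 2.998e8    # speed of light in vacuum, m/s
K_B = 1.381e-23  # Boltzmann constant, J/K

def two_color_temperature(ratio, lam1, lam2):
    """Invert Equation 6.61: temperature from the measured ratio
    L_lambda1 / L_lambda2, under the graybody assumption."""
    return (H_P * C_0 / K_B) * (1.0 / lam2 - 1.0 / lam1) / (
        np.log(ratio) - 5.0 * np.log(lam2 / lam1))

# Round-trip check with a synthetic ratio built from Equation 6.61
lam1, lam2, T_true = 0.95e-6, 1.05e-6, 2000.0
ratio = (lam2 / lam1)**5 * np.exp(
    H_P * C_0 / (K_B * T_true) * (1.0 / lam2 - 1.0 / lam1))
print(two_color_temperature(ratio, lam1, lam2))  # ~2000.0 K
```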

Fiber optic radiation thermometers

Among the point radiation methods, it is finally worth mentioning fiber optic thermometers. In these instruments, the measured radiation is transported from the sensing head to the transducer by means of an optical fiber. The measuring head consists of a high-quality sapphire crystal rod protected with a thin sapphire film. Other variants have a sensor tip made of a gallium arsenide crystal. The connection to a near-infrared silicon detector is made by means of a quartz fiber that can be several meters long. The relevance of these instruments to fluid measurements is that they are suitable for applications in high-temperature gases and harsh atmospheres (like plasmas), thanks to their good resistance to thermal, chemical, and electromagnetic interference. Working temperature ranges are typically from a few hundred to 2300 K.

Applications

Modern radiation-based devices were originally developed for military applications, such as night vision devices and the IR homing guidance systems of air-to-air missiles. Nowadays, radiation thermometers are very accessible instruments, allowing noncontact temperature measurements in several fields of science and technology. In industrial applications, they are largely adopted for the control of production processes. An interesting example of application is hot air balloons, which use pyrometric sensors for monitoring the temperature of the fabric at the top of the envelope.
In aerodynamics, or better in aerothermodynamics, the use of point radiation thermometers is primarily related to environments involving high temperatures. Test models or components facing hot flows require temperature monitoring that is very difficult or impractical using intrusive techniques. Furthermore, contact measurement sensors are often unsuitable for the temperature levels encountered in practical experiments: first, because they are simply out of range, and second, because they hardly survive a reactive and highly oxidizing environment. Radiation thermometers offer the possibility of detecting hot surfaces at a distance and measuring temperatures of several thousand degrees with high accuracy. In turbomachinery experiments, for instance, pyrometers and fiber optic thermometers have been extensively used to map turbine blades [45–48]. In combustion experiments, the working wavelengths of pyrometers can be tailored to measure the temperature of flames and combustion products containing high CO2 concentrations. In high-enthalpy wind tunnels, two-color or corrected single-color pyrometry is the baseline technique for stagnation temperature measurements of material samples exposed to plasma jets. In such applications, temperatures in the range of 1200–3400 K [49–53] are easily reached at the surface. Authors have exploited two-color pyrometers in combination with a broadband radiometer to perform in situ emissivity measurements of high-temperature surfaces [54–56]. New frontiers in the field of pyrometry aim at developing improved methods for the measurement of low-emissivity surfaces, such as metals and alloys. Examples are the recent developments of pyroreflectometry techniques for measuring plasma-facing metals in new-generation fusion reactors [57–60].

6.7 Infrared thermography

IR thermography belongs to the radiation methods for temperature measurement described in Section 6.6. Here, it is chosen to treat the topic in a dedicated section, owing to its chief importance among other methods in thermodynamic and fluid dynamic applications. Indeed, IR thermography presents a unique feature compared to most of the other techniques treated in this chapter: it provides, at the same time, nonintrusive information on the temperature with spatial and temporal resolution.
IR thermography measurements are based on the concepts outlined in the “Fundamentals of thermal radiation” section. Its origins date back to the 1960s in the frame of military applications. The appearance of commercial and research-dedicated IR scanning radiometers started in the 1970s with the development of liquid-cooled optomechanical detectors, using oscillating or rotating optics to achieve spatially resolved measurements. The 1980s witnessed the first developments of focal plane array (FPA) detectors, when IR cameras became effectively 2D devices, while the diffusion of noncooled systems dates back to the mid-1990s.
During the last two decades, IR thermography measurements have benefited from the advancement of silicon-based electronics, allowing superior acquisition and thermal resolution capabilities, along with a dramatic price reduction for industrial and commercial devices. Today, the technique is so popular in countless applications that even personal smartphones and tablets provide decent IR imagery features.

Infrared scanning radiometer

IR temperature measurements are performed with an IR scanning radiometer, also called an IR scanner or IR camera or (more loosely) thermocamera. Despite the numerous specific features and technical solutions applied in today’s instruments, three basic components can be identified in an IR scanner: an optical system of windows and lenses that scans, focuses, and filters the incoming thermal radiation; a temperature detector collecting the thermal radiation and transforming it into an electrical signal; and a processing unit that receives the electrical signal and converts it into a temperature map of the field of view. Optical systems and detectors are briefly described in the following sections, based on the detailed overview provided in [61]. Dedicated textbooks discuss the techniques for electronic acquisition and signal processing. Most of the modern devices used for both commercial and laboratory applications come with integrated units performing most of the operations that convert the signals into mapped images to be processed with dedicated software.

Optical system

The optical system of early IR cameras used a scanning mechanism based on moving mirrors. In the horizontal direction, a rotating mirror polygon provided a continuous scan of the image by lines, while in the vertical direction an oscillating mirror was step-moved each time a complete horizontal line had been acquired and was then brought back to its original position after the completion of a full frame. Mechanical devices allowed blocking the vertical mirror to operate in a line-scanning mode, increasing the acquisition frequency. More modern devices used electronic scanning, which provided improved performance, especially for fast transient measurements. Nowadays, most IR cameras use an FPA detector.
The lens is the main component of the optical system. It allows focusing the incident radiation onto the detector, by transmission and diffraction. In order to better understand its operating principles, it is useful to recall a few optical concepts. While the focusing distance can normally be adjusted, IR thermography lenses are generally characterized by a fixed focal ratio f# = f/Da, where f and Da are the focal length and the effective aperture of the lens, respectively. The fixed focal ratio imposes a minimum useful pixel size of the detector. Indeed, if one considers the circular diffraction pattern (or Airy pattern [62]) of the incoming thermal radiation on the image plane, the diffraction diameter, that is, the diameter of the first Airy ring, is defined by the focal ratio and the wavelength as [62]

$$D_A = 2.44\,f_\#\,\lambda\,(M+1) \cong 2.44\,f_\#\,\lambda \tag{6.62}$$
Here, M is the lens magnification, usually very small for IR lenses. Since common IR scanners operate at wavelengths greater than ~3 μm, even with a large aperture (e.g., f# = 2.8), using a detector pixel size smaller than 20 μm would not provide better spatial resolution, due to the diffraction limitation. To characterize the spatial resolution, one can consider that the minimum distance criterion (or Rayleigh criterion [63]) for resolving two points in an image prescribes that the center of the Airy disk of the first point occur at the first minimum of the Airy disk of the second. Hence, the minimum resolvable angle θ at a focal distance f is given by tan(θ/2) = D_A/4f or, using Equation 6.62 and the small-angle approximation,

$$\theta \cong 1.22\,\frac{\lambda}{D_a} \tag{6.63}$$
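A short numerical check of these optical limits is given below (Python; the f-number, wavelength, and effective aperture are assumed example values).

```python
def airy_diameter(f_number, lam, magnification=0.0):
    """Diffraction diameter on the image plane, Equation 6.62 (meters)."""
    return 2.44 * f_number * lam * (magnification + 1.0)

def min_resolvable_angle(lam, aperture):
    """Rayleigh criterion, Equation 6.63: ~1.22 lambda / D_a (radians)."""
    return 1.22 * lam / aperture

# f/2.8 optics at 10 um with an assumed 25 mm effective aperture
print(airy_diameter(2.8, 10e-6))           # ~6.8e-5 m: ~68 um Airy diameter
print(min_resolvable_angle(10e-6, 25e-3))  # ~4.9e-4 rad
```

With a ~68 μm diffraction diameter, shrinking the detector pixels much below ~20 μm indeed brings no resolution benefit, as stated above.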

The transmissivity of the lens depends on the material used to manufacture it. The concept has already been introduced in the “Fundamentals of thermal radiation” section. The choice of material is based on the operating wavelength envelope of the IR scanner. Calcium fluoride (CaF2), sapphire (Al2O3), and silicon (Si) are examples of suitable solutions for MWIR systems. In the LWIR band, optics are usually made of zinc selenide (ZnSe) or germanium (Ge). As seen in Equation 6.48, the reflectivity depends on the refractive index n:

$$\rho_\lambda^\perp = 1-\varepsilon_\lambda^\perp = \left(\frac{n-1}{n+1}\right)^2 \tag{6.64}$$

Approximating an IR lens as a slab with a small absorption coefficient [61], the transmissivity can be expressed as

$$\tau_\lambda^\perp \cong \frac{\left(1-\rho_\lambda^\perp\right)^2}{1-\left(\rho_\lambda^\perp\right)^2} = \frac{2n}{n^2+1} \tag{6.65}$$

Most IR lenses use antireflection coatings to improve transmissivity [64].
The optical system of some IR devices can also be equipped with filters that allow attenuating or masking the thermal energy transmitted to the detector. Examples are gray filters to prevent saturation of the IR image in measurements of high-temperature sources, or low- or high-pass filters to measure semitransparent materials.

Detector

As already mentioned, modern IR cameras make use of an FPA detector, capturing the radiation transmitted by the lens on a 2D array of sensitive elements. The number of sensitive elements (pixels) defines the dynamic range of the camera.
Depending on the technique used to convert the thermal radiation into an electrical signal, two classes of detectors are distinguished: thermal and photonic.
Thermal detectors are based on thermoelectric effects, which have been previously treated when illustrating the working principles of thermocouples and resistance thermometers (see Sections 6.3 and 6.4). The incident radiation is absorbed by a heat capacitor, whose electrical properties change proportionally to the temperature variation imposed by the absorbed energy. The output produced can be a differential voltage, as in the case of thermopile sensors, or a resistance variation. Common thermal detectors make use of microbolometers. These are very small sensors consisting of an absorptive element, such as a thin layer of metal, connected to a thermal reservoir at constant temperature. The radiation impinging on the absorptive element raises its temperature above that of the reservoir, producing a change in the electrical resistance. The response of such devices is proportional to the ratio between the heat capacity of the absorber and the absorber–reservoir thermal conductivity. Modern microbolometers enable acquisition frequencies up to 60 Hz with good thermal resolution. Materials used for manufacturing thermal detectors include amorphous silicon and vanadium oxide (V2O5).
In photonic (or quantum) detectors, the incoming radiation interacts with the electrons of the detector material, producing an electrical output. Photons in the incoming thermal radiation carry an energy e = hPc/λ. If this energy is higher than the energy required by an electron of the detector material to change quantum level, then the electron undergoes a transition. Since the energy required to promote an electronic transition decreases with temperature, cooling the detector to a very low temperature helps confine the electrons to very low energy states. Then, when a sufficient energy flux is provided by the incident photons, the electrons move to

the conduction band, producing an electrical signal. Photonic detectors are classified as photoconductors or photodiodes according to whether they produce an electrical resistance or a voltage change. Common quantum detector technologies in modern IR cameras use mercury cadmium telluride (HgCdTe), indium antimonide, or quantum well IR photodetectors with aluminum gallium arsenide wells alternated with gallium arsenide semiconductors. The reader is referred to [61] for further details. Operation of these materials requires cooling at temperatures between 60 and 100 K, depending on the type of performance desired. For moderately low temperatures, a cooling system based on the Seebeck–Peltier effect can be used, while higher performance, at the expense of practical difficulties in routine operations, can be achieved using Dewar flasks filled with liquid nitrogen. Nowadays, efficient cryocooling is mostly performed with Stirling technologies, allowing miniaturized, efficient, and robust systems.

Performance of an infrared scanning radiometer

Detectivity and thermal contrast

The ability of an IR detector to measure thermal radiation with an acceptable S/N ratio is quantified by means of the detectivity. The noise level can be characterized with a noise equivalent power (NEP), which is the total radiative power (in watts) needed to produce an output equal to the detector noise. According to Jones’s definition [65,66], the detectivity increases with the size of the detector (of surface A, in cm²) and the equivalent noise bandwidth Δf (in Hz), while it is inversely proportional to the NEP. Hence, the normalized detectivity is defined as [65,66]

$$D^* = \frac{\sqrt{A\,\Delta f}}{NEP} \tag{6.66}$$

An overview of normalized detectivity for different detector materials is shown in Figure 6.18, from the comprehensive review of Rogalski [67].
By comparing the detectivity of Figure 6.18 with the spectral transmissivity of atmospheric air shown in Figure 6.15, one can identify the two suitable bands at mid (3–5 μm) and long (8–10 μm) wavelengths where IR thermography measurements are performed.
Within a given working range, the detection capability of the IR sensor depends on the thermal contrast. This is defined as

$$C_{\Delta\lambda} = \frac{1}{L^0_{\Delta\lambda}}\,\frac{\partial L^0_{\Delta\lambda}}{\partial T} \tag{6.67}$$
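Since L⁰Δλ follows from integrating Planck’s law over the waveband, Equation 6.67 can be evaluated numerically. A minimal sketch (Python; the band edges and temperature are example values) reproduces the trend of Figure 6.19.

```python
import numpy as np

H_P, C_0, K_B = 6.626e-34, 2.998e8, 1.381e-23

def band_radiance(T, lam_lo, lam_hi, n_pts=2000):
    """Blackbody spectral radiance (Planck's law) integrated over a band."""
    lam = np.linspace(lam_lo, lam_hi, n_pts)
    L = 2.0 * H_P * C_0**2 / lam**5 / (np.exp(H_P * C_0 / (K_B * lam * T)) - 1.0)
    return np.trapz(L, lam)

def thermal_contrast(T, lam_lo, lam_hi, dT=0.1):
    """Equation 6.67 with a central finite difference in temperature."""
    dLdT = (band_radiance(T + dT, lam_lo, lam_hi)
            - band_radiance(T - dT, lam_lo, lam_hi)) / (2.0 * dT)
    return dLdT / band_radiance(T, lam_lo, lam_hi)

# At room temperature the MWIR band gives roughly twice the LWIR contrast
print(thermal_contrast(300.0, 3e-6, 5e-6))   # ~0.04 K^-1
print(thermal_contrast(300.0, 8e-6, 12e-6))  # ~0.017 K^-1
```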

Figure 6.18 Normalized detectivity D* for various detectors. (From Rogalski, A., Progr. Quant. Electron., 27(2–3), 59, 2003.)

Figure 6.19 Thermal contrast as a function of temperature for different operative wavebands.

The thermal contrast decreases with increasing temperature (Figure 6.19). Working in the MWIR favors the detection of small ΔT. However, for temperatures higher than 1000 K, this advantage is negligible.

Thermal resolution

The thermal resolution of IR detectors can be defined using two quantities: the noise equivalent temperature difference (NETD) and the minimum resolvable temperature difference (MRTD).
The NETD is defined as the temperature difference that produces an output equivalent to the peak-to-peak noise from a uniform background temperature field. The procedure for calculating the NETD is given in [68]. It uses a blackbody target at temperature T₀ behind a background target plate at temperature T, with an aperture not exceeding one-tenth of the plate dimensions in height and width (Figure 6.20). Denoting by ∆V the peak-to-peak (standard deviation) noise detected when measuring the uniform background (by closing the aperture with a cover of the same properties) and by V − V₀ the output measured observing the blackbody target through the aperture, the S/N ratio is (V − V₀)/∆V and the NETD can be calculated as

$$NETD = \frac{T - T_0}{S/N} \tag{6.68}$$

Figure 6.20 Schematic of the NETD determination method.



Typical NETD values are on the order of 100 mK for uncooled detectors and 10 mK for cooled devices. The NETD of an actual instrument is usually quoted by the manufacturer. Periodic measurements of the NETD are a useful way to perform sanity checks of an IR camera and monitor eventual drift with time.
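Following the procedure of [68], the NETD is readily computed from the recorded outputs; a minimal sketch is given below (Python, with assumed example readings).

```python
def netd(T_target, T_background, V_signal, V_background, V_noise_std):
    """Equation 6.68: NETD from the blackbody-target procedure of Figure 6.20.
    V_noise_std is the noise measured over the uniform background;
    V_signal - V_background is the response to the blackbody aperture."""
    snr = (V_signal - V_background) / V_noise_std
    return (T_target - T_background) / snr

# Assumed readings: a 5 K temperature step producing a 100:1 S/N ratio
print(netd(305.0, 300.0, 1.10, 1.00, 0.001))  # 0.05 K, i.e., 50 mK
```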
The MRTD is the minimum detectable temperature difference of a target body behind a series of 4 horizontal or vertical slits [69]. As such, it allows relating the thermal and spatial resolution of an IR camera. The MRTD increases exponentially with decreasing slit aperture, coinciding at its lower limit with the NETD.

Spatial resolution

A first characterization of the spatial resolution of an IR camera can be based on the instantaneous field of view (IFOV). The IFOV is the angular area viewed by the detector (or by a single pixel of the detector for an FPA) through the optics of the IR camera (Figure 6.21). It is proportional to the square of the ratio of the detector (or pixel) size l_d to the focal length f:

$$IFOV \propto \left(\frac{l_d}{f}\right)^2 \tag{6.69}$$

It defines the size of the smallest object that can be viewed/resolved at a specific distance from the camera. The IFOV is expressed in rad. The projection of the IFOV onto the target plane is the instantaneous projected area (IPA):

$$IPA \propto l^2\,IFOV \tag{6.70}$$

As a rule of thumb, the thermal spot to be measured shall be at least 5 times the IPA, in order to prevent significant signal attenuation [13].
A more rigorous way to characterize the spatial resolution of an IR camera is obtained by considering the slit response function (SRF). The SRF is defined with reference to Figure 6.22, which shows the case of a target background at uniform temperature Tb observed through a slit of angular aperture α in a foreground plate at uniform temperature Tf. The SRF is defined as the ratio between the detected bell-shaped amplitude T′b − Tf and the actual x-wise

Figure 6.21 Schematic of the IFOV. (From Arts, T. et al., Introduction to Measurement Techniques, 2nd revised edn., von Karman Institute for Fluid Dynamics, Rhode-Saint-Genese, Belgium, 2007.)


Figure 6.22 Schematic of the SRF.



Figure 6.23 Schematic of the MTF.

temperature square pulse Tb − Tf. The SRF depends on the slit angular aperture and converges to 1 for large apertures. This dependency is modeled accurately by the Gauss error function as [13]

$$SRF = \frac{T_b' - T_f}{T_b - T_f} = \operatorname{erf}\!\left(0.48\,\frac{\alpha}{\alpha_{0.5}}\right) \tag{6.71}$$

where α₀.₅ is the slit aperture for an SRF equal to 0.5. The 0.48 coefficient comes from the ~96% confidence level (2σ) on the Gaussian response of the IR camera [70].
An alternative quantity to be considered is the modulation transfer function (MTF), which describes the response of the IR camera to periodic slits at spatial frequency ν, as shown in Figure 6.23. The MTF for a slit aperture α′ is defined as [13]

$$MTF = \frac{T_b' - T_f'}{T_b - T_f} = e^{-f(\nu_{IR})\,\nu} \tag{6.72}$$

It is modeled by a negative exponential law, depending on the slit spatial frequency ν and on a function f(νIR) of the IR camera sampling frequency. Analyses of the MTF for different IR camera technologies can be found in [71–73].
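Because Equation 6.71 is a simple error-function model, it can also be inverted to estimate the slit aperture needed for a given response level. A minimal sketch (Python, using scipy; α₀.₅ is treated as a known, normalized camera parameter):

```python
from scipy.special import erf, erfinv

def srf(alpha, alpha_05):
    """Slit response function, Equation 6.71."""
    return erf(0.48 * alpha / alpha_05)

def aperture_for_srf(level, alpha_05):
    """Slit aperture at which the SRF reaches the requested level."""
    return erfinv(level) * alpha_05 / 0.48

print(srf(1.0, 1.0))                # ~0.50: the defining property of alpha_0.5
print(aperture_for_srf(0.98, 1.0))  # ~3.4: aperture (in alpha_0.5 units) for SRF = 0.98
```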

Acquisition frequency and temperature ranges

The acquisition frequency of an IR camera is strictly related to the performance of the detector, namely, to the integration time needed by a pixel to capture the temperature of the measurand.
This parameter is of particular interest for those applications where IR thermography is applied to investigate very fast, transient phenomena. With integration times down to 200–500 ns, achieved via cryocooling at temperatures below 100 K, modern cameras enable frame rates between 30 and 60 Hz at full frame, with performance up to thousands of Hz for the most advanced laboratory instruments. A cooled FPA instrument can achieve kilohertz rates by framing the detector, that is, by reducing the number of pixels acquired per frame and, consequently, the data rate per frame to the acquisition system.
A broad spectrum of solutions exists for the temperature range. Simple commercial devices are usually limited to maximum temperatures of a few hundred kelvin. More advanced instruments can measure temperatures as low as 200 K and span up to 1700 K or more by reducing the integration time or using proper filters. Most broad-range cameras are equipped with filters for the incoming radiation to restrict the operating range.

Calibration

As for the point radiation thermometers described earlier, measurements performed with an IR camera are influenced by the optical properties of the target surface, as well as by the environmental conditions of the actual test setup where the measurements are performed. These contributions must be handled by a proper calibration of the IR sensor. The detector receives the radiation $\tau_{atm}\,\varepsilon\,\sigma_{SB}T^4$ emitted by the target object at temperature T, the radiation $\tau_{atm}(1-\varepsilon)\,\sigma_{SB}T_\infty^4$ emitted by the surrounding hemisphere at temperature T∞ and reflected by the target, and the radiation $(1-\tau_{atm})\,\sigma_{SB}T_{atm}^4$ emitted by the atmosphere (at Tatm) between the detector and the target body. The total incident radiation reads

$$E = \tau_{atm}\,\varepsilon\,\sigma_{SB}T^4 + \tau_{atm}(1-\varepsilon)\,\sigma_{SB}T_\infty^4 + (1-\tau_{atm})\,\sigma_{SB}T_{atm}^4 \tag{6.73}$$

When windows and lenses are present in the optical path between the target and the detector, other terms need to be added to the above expression to account for the transmissivity (and eventually absorptivity) of each optical element. Equation 6.73 is a simplified version of the actual problem, neglecting the view factors between the target object and the detector and assuming blackbody emission for the surroundings. Moreover, in the real case one needs to account for the dependence of the optical constants on wavelength, temperature, etc. Despite the availability of complex radiation models, solving Equation 6.73 while accounting for all the variables of an actual setup is an impractical exercise. A simplified form for the calibration function of an IR camera is given in [74], assuming a transparent atmosphere:

$$U = \varepsilon\,\frac{R}{e^{B/T}-F} + (1-\varepsilon)\,\frac{R}{e^{B/T_\infty}-F} \tag{6.74}$$

where
U is the output voltage (or current)
R, B, and F are the calibration coefficients coming from a Planck’s law approximation

The calibration law can be easily solved for the target temperature T. The calibration constants can be either determined prior to the actual experiments by means of a blackbody calibration source, or based on an in situ calibration during operation of the actual setup, by means of alternative temperature transducers placed at significant locations in the field of view. The two approaches are further discussed in [61], together with the additional considerations to be made when dealing with pixel calibration in FPAs.
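Because Equation 6.74 is algebraic in e^{B/T}, it can be inverted directly once the reflected ambient contribution is subtracted. A minimal sketch follows (Python; the constants R, B, F, the emissivity, and the temperatures are assumed example values, not factory calibration data).

```python
import numpy as np

def temperature_from_output(U, eps, T_amb, R, B, F):
    """Invert the calibration law of Equation 6.74 for the target temperature T."""
    U_refl = (1.0 - eps) * R / (np.exp(B / T_amb) - F)  # reflected ambient term
    U_obj = U - U_refl                                  # emission from the target alone
    return B / np.log(eps * R / U_obj + F)

# Round-trip check with assumed constants (B ~ h_P c_0 / (k_B lambda) at ~10 um)
R, B, F = 15000.0, 1430.0, 1.0
eps, T_amb, T_true = 0.9, 293.0, 350.0
U = eps * R / (np.exp(B / T_true) - F) + (1 - eps) * R / (np.exp(B / T_amb) - F)
print(temperature_from_output(U, eps, T_amb, R, B, F))  # ~350 K
```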
In practical measurements of real objects, the map detected by the camera represents a projection of the 3D space onto the plane of the image. Except for the comfortable (and seldom encountered) situation where the camera stares perpendicularly at the target surface, in most cases an optical calibration must be provided to complement the thermal calibration just discussed. This is commonly experienced when performing IR thermography of wind tunnel models, which might have a complex 3D shape. The purpose of the optical calibration is to find a mapping function that transforms the physical 3D coordinates of the object into the 2D ones of the thermogram. A consolidated method consists in moving a heated, perforated calibration plate through the test section and tracking the identifiable spots over the surface [40,75]. As an alternative, an in situ calibration can be performed, using traceable markers on the actual test model [76].

Applications

Today, IR thermography has countless applications in many different fields, from industry to architecture and from military operations to medical diagnostics. Over the last three decades, it has turned into a fundamental method for studying heat and flow transport phenomena in thermo-fluid dynamics.
When performing IR thermography in experiments involving fluids and heat transfer, one is confronted with two possible situations. The first occurs in those experiments where a target surface or test model is heated as a consequence of its aero- and thermodynamic interactions with the flow. This is defined as passive heating. Active heating occurs instead when the target object is heated by a source independent of the flow features.

IR thermography can be used for both qualitative and quantitative measurements. The first case is typical of experiments where flow patterns are of significant interest. For example, in separating/reattaching flows or in the presence of transition to turbulence, the onset location of those phenomena can be identified with fairly good accuracy using IR thermograms.
Quantitative measurements are certainly more challenging. Tackling the problem from a very general perspective, in a typical fluid dynamics experiment involving the use of IR thermography, one is after quantifying the convective heating exchanged between the solid surface and the wetting flow. A general representation is given by Newton’s law:

$$q_w = h_c\,(T_w - T_{ref}) \tag{6.75}$$

In this equation, IR thermography allows one to quantify the wall temperature Tw. Tref is the characteristic temperature of the flow at the considered regime. For example, it corresponds to the freestream total temperature in iposonic flows, or, in the compressible regime, to the adiabatic wall temperature already discussed in Section 6.2. To close the problem and characterize the convective heat transfer coefficient, the wall heat flux can either be calculated with analytical solutions, when certain assumptions and simplifications are possible, or be determined using wall heat flux measurement techniques. These are discussed in the next section.
Since nondimensional analysis is often very effective in examining the behavior of aerodynamic experiments, the heat transfer coefficient is expressed in terms of the Nusselt number (already introduced in the “Nusselt number” section) and the Stanton number for internal and external flows, respectively:

$$Nu = \frac{h_c\,l}{k_f} \tag{6.76}$$

$$St = \frac{h_c}{\rho_f\,c_P\,v} \tag{6.77}$$

Here,
l is a characteristic length of the system (e.g., diameter of the model, boundary layer thickness, throat diameter)
kf, ρf are the fluid thermal conductivity and density, respectively
v is the flow speed

Needless to say, the quantification of those parameters and the form of Equation 6.75 can become very complicated when dealing with high-temperature, highly viscous flows [77].
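The chain from a measured wall temperature and heat flux to the nondimensional groups of Equations 6.75 through 6.77 is short; the sketch below makes it explicit (Python; the heat flux, temperatures, characteristic length, flow speed, and air properties are assumed example values).

```python
def convective_coefficient(q_w, T_w, T_ref):
    """Newton's law, Equation 6.75, solved for h_c (W/(m^2 K))."""
    return q_w / (T_w - T_ref)

def nusselt(h_c, length, k_fluid):
    """Equation 6.76."""
    return h_c * length / k_fluid

def stanton(h_c, rho_fluid, c_p, velocity):
    """Equation 6.77."""
    return h_c / (rho_fluid * c_p * velocity)

# Assumed example: 2 kW/m^2 through a wall 40 K above the reference temperature
h_c = convective_coefficient(2000.0, 340.0, 300.0)  # 50 W/(m^2 K)
print(nusselt(h_c, 0.05, 0.026))                    # ~96, with l = 5 cm in air
print(stanton(h_c, 1.18, 1005.0, 20.0))             # ~2.1e-3 at 20 m/s
```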
Since the 1990s, IR thermography has been applied with success in numerous aerodynamic experiments, ranging from very low speed, subsonic flows up to hypersonic regimes involving high speeds and high enthalpies, in laminar and turbulent flows, as well as multiphase, combusting, and reacting flows. A certainly limited list of examples includes separation studies in the presence of bluff bodies, ribbed channel flows, jets, laminar-to-turbulent transition and separation, and catalytic discontinuities in dissociated flows. Comprehensive reviews of thermo-fluid mechanics experiments using IR thermography can be found in the literature [28,61,74,78].
Figure 6.24 provides examples of the application of IR thermography to fluid dynamics experiments at different regimes. Figures 6.24a and b are from investigations of convective heat transfer in subsonic flows. IR measurements of an impinging synthetic jet generated through a pipe orifice by a membrane separating the two cavities of a Helmholtz resonator were reported in [79,80]. Nusselt maps rebuilt for different membrane phase angles (Figure 6.24a) show the ability of IR measurements to capture complex flow features, such as ring vortex sweeping at the wall and consequent unsteady separation. The Nusselt contours in Figure 6.24b for a 180° turn channel flow are an effective way to quantify the heat transfer enhancement generated by V-distributed ribs, compared to a smooth wall configuration [81,82]. Figures 6.24c and d are


Figure 6.24 (See color insert.) Examples of IR thermography applications at different flow regimes and configurations. (a) Phase average Nusselt number maps for a synthetic jet. (From Carlomagno, G.M. and Ianiro, A., Exp. Therm. Fluid Sci., 58(0), 15, 2014; Adapted from Greco, C.S. et al., Int. J. Heat Mass Transf., 73(0), 776, 2014.) (b) Nusselt number distributions for smooth (top) and V-ribbed (bottom) 180° turn channel flow at Re = 30,000. The labels on the x axis are in equivalent diameter units. (Adapted from Astarita, T. et al., Exp. Fluids, 33(1), 90, 2002; Astarita, T. et al., Opt. Lasers Eng., 44(3–4), 261, 2006.) (Continued)


Figure 6.24 (Continued) (See color insert.) Examples of IR thermography applications at different flow regimes and configurations. (c) Temperature map of finned heat exchangers in a transonic bypass flow. Schematic of the curved geometry (left) and of the IR map transformation (right). (Adapted from Sousa, J. et al., Energy, 64(0), 961, 2014.) (d) Three-dimensional temperature maps for a double cone configuration in Ma = 9.3 hypersonic flow. The two right insets display the raw IR planar acquisition (top) and an IR map of the perforated calibration plate used for 3D reconstruction (bottom). (Adapted from Cardone, G. et al., Exp. Fluids, 52(2), 375, 2012.) (e) Stanton number maps for a Ma ≅ 3 flow around an isolated ramp roughness element over a flat plate model at Re = 5.2 × 10⁶ (top) and Re = 2.7 × 10⁶ (bottom). (Adapted from Tirtey, S.C. et al., Exp. Fluids, 50(2), 407, 2011.)

from measurements of finned heat exchangers in transonic flow and from the testing of a double cone model at hypersonic conditions, respectively. The two examples show how IR thermography can be applied to the thermal imaging of 3D surfaces, if proper calibration and image transformation are implemented. Finally, the study of heat transfer phenomena due to laminar-to-turbulent transition in high-speed flows is an additional example of the effective application of IR thermography to complex aerodynamic phenomena. Highlights are presented for roughness-induced transition in Figure 6.24e.

6.8 Heat flux sensors

This section is dedicated to an overview of experimental methods for direct measurements of heat flux. In wind tunnels and laboratory experiments, the capability of locally evaluating the heat flux carried by the flow and experienced by a test model is a fundamental step in the characterization of the testing environment. Direct measurements of the heat flux at the wall can be very informative in order to understand flow regimes, flow and heat transfer features, chemical effects, and several other phenomena.
Transducers for heat flux measurements operate based either on energy balances or on rate equations. Common types of sensors and their working principles are reported in the following text. Nowadays, several sophisticated variations of these basic concepts can be found on the market, customized and adapted to specific applications and environments.

Slug calorimeter

A slug calorimeter, also referred to as a capacitance calorimeter, is an energy balance transducer that uses a metallic core embedded in the surface. The core is insulated at its sidewalls and back face. The temperature response at the back face is monitored using a standard thermocouple sensor. A schematic is presented in Figure 6.25.
The heat balance equation for a slug of cross-sectional area A, specific heat cp, mass M, length L, and radius R can be written as

$$\dot q\,A - M\,T\,\frac{\partial c_p}{\partial T}\,\frac{\partial T}{\partial t} - \dot q_{loss,k}\,2\pi R L - \dot q_{loss,re\text{-}r}\,A = M\,c_p\,\frac{\partial T}{\partial t} \tag{6.78}$$

Here, the slug back surface is considered adiabatic. Neglecting the losses due to reradiation q̇_loss,re-r and conduction q̇_loss,k through the insulation, and assuming a specific heat constant with temperature, the heat flux can be calculated from the measured temperature slope as

$$\dot q = \frac{M\,c_p}{A}\,\frac{\partial T}{\partial t} \tag{6.79}$$

The slope can be graphically determined from the T(t) signal (see Figure 6.25). The easiest situation occurs when measuring a constant incoming heat flux with negligible heat losses, producing a linear temperature increase. An easy technique to estimate the heat losses is to determine the temperature slope during the cooling phase of the sensor. The annular insulation around the


Figure 6.25 Schematic of the heat balance of a slug calorimeter and type of responses for the back face sensor.

capacitance element serves the purpose of minimizing the heat transfer to or from the body of the calorimeter, thus minimizing conduction losses and approximating a 1D heat flow.
Equation 6.79 holds for a spatially uniform temperature of the slug, that is, for small Biot numbers (Bi = hcL/k < 0.1, L being the length of the core). A good solution to satisfy this condition is to use high-conductivity copper to manufacture the core.
For more accurate applications, to compute the heat flux with a slug calorimeter one needs to account for the conductive transfer through the core:

$$\dot q = \int_0^L \rho c_p\,\frac{\partial T}{\partial t}\,dx + k\left.\frac{\partial T}{\partial x}\right|_{x=L} \tag{6.80}$$

In this case, both temporal and spatial temperature gradients need to be measured.
Slug calorimeters can be used to measure stagnation region heat transfer and sidewall or flat-body heat transfer rates. They are instruments tailored to short-exposure measurements and need to be restored to an initial condition before being reused. Common applications are very high, steady heat fluxes, like those experienced in the plasma jets of high-enthalpy wind tunnels.
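In the simplest constant-flux case, Equation 6.79 reduces to fitting the slope of the back-face temperature history. A minimal sketch (Python; the slug mass, specific heat, sensing area, and the synthetic linear temperature ramp are assumed example values):

```python
import numpy as np

def slug_heat_flux(t, T, mass, c_p, area):
    """Equation 6.79: heat flux from the back-face temperature slope,
    with dT/dt obtained from a least-squares linear fit."""
    slope = np.polyfit(t, T, 1)[0]    # K/s
    return mass * c_p * slope / area  # W/m^2

# Assumed copper slug: 20 g, c_p = 385 J/(kg K), 1 cm^2 face, 13 K/s ramp
t = np.linspace(0.0, 2.0, 200)
T = 300.0 + 13.0 * t
print(slug_heat_flux(t, T, 0.020, 385.0, 1e-4))  # ~1.0e6 W/m^2
```

In practice the same fit applied to the cooling phase provides the loss estimate mentioned above.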

Coaxial thermocouple

Coaxial thermocouples are used to measure heat flux from a direct temperature measurement. They commonly employ type E or K thermoelectric pairs in a special design arrangement, where one thermocouple element is press fitted around the second element with an electrical insulation (~10 μm thick) in between. A schematic is presented in Figure 6.26. Figure 6.27 shows a picture of an actual sensor developed at the Shock Wave Laboratory of Rheinisch-Westfälische Technische Hochschule (RWTH) Aachen University. The hot junction at the surface is made by different techniques, such as the application of a thin layer (~1 μm) of one of the thermoelectric materials, or of a better suited one in case the instrument is operated in very reactive, oxidizing atmospheres. A simple and very effective method consists in grinding the front surface with sandpaper, whose action turns the micro-scratches into very small active junctions with very short response times. This procedure makes this type of gauge very robust and particularly suitable for harsh environments, also allowing the reactivation of the junction by simply repeating the operation in case of failure.
The working principle of coaxial thermocouples is described by the 1D heat conduction equation for a semi-infinite slab, under the assumption that the heat pulse through the sensor during a measurement does not influence the temperature of its back face. This allows determining the heat flux from a direct measurement of the temperature. The solution of

$$\frac{\partial T}{\partial t} = \frac{k}{\rho c_p}\,\frac{\partial^2 T}{\partial x^2} \tag{6.81}$$

can be written as [85,86]

$$\dot q(t) = \frac{b}{\sqrt{\pi}}\left[\frac{T(t)}{\sqrt{t}} + \frac{1}{2}\int_0^t \frac{T(t)-T(\tau)}{(t-\tau)^{3/2}}\,d\tau\right] \tag{6.82}$$


Figure 6.26 Schematic of a type E coaxial thermocouple.



Figure 6.27 A coaxial thermocouple sensor developed at the Shock Wave Laboratory at RWTH Aachen (Germany). (Image courtesy of Prof. H. Olivier.)

where $b = \sqrt{\rho c k}$ defines the dependency on the thermocouple material properties. Here, t is the time at which the heat flux is being determined and τ is a time variable. The same equation in terms of the electromotive force reads

$$\dot q(t) = \frac{b}{\sqrt{\pi}\,\alpha_S}\left[\frac{E(t)}{\sqrt{t}} + \frac{1}{2}\int_0^t \frac{E(t)-E(\tau)}{(t-\tau)^{3/2}}\,d\tau\right] \tag{6.83}$$

A useful form of Equation 6.83 is obtained by evaluating E(τ) at τ = i∆t, i = 0, 1, …, n, with ∆t = t/n, approximating the electromotive force with a piecewise linear function:

$$E(\tau) = E(t_{i-1}) + \frac{E(t_i) - E(t_{i-1})}{\Delta t}\,(\tau - t_{i-1}) \tag{6.84}$$

where t_{i−1} < τ < t_i, i = 1, 2, 3, …, n. One gets

$$\dot q(t) = \frac{b}{\sqrt{\pi}\,\alpha_S}\left\{\frac{E(t)}{\sqrt{t}} + \frac{E(t)-E(t-\Delta t)}{\sqrt{\Delta t}} + \sum_{i=1}^{n-1}\left[\frac{E(t)-E(t_i)}{\sqrt{t-t_i}} - \frac{E(t)-E(t_{i-1})}{\sqrt{t-t_{i-1}}} + 2\,\frac{E(t_i)-E(t_{i-1})}{\sqrt{t-t_i}+\sqrt{t-t_{i-1}}}\right]\right\} \tag{6.85}$$

This equation allows a direct calculation of the convective flux at the thermocouple surface from its output voltage signal. For the semi-infinite body assumption to hold and the method to work properly, it is critical that the rear surface temperature be maintained constant during the measurement.
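Equation 6.82 and its discretized counterparts (Equations 6.85 and 6.87) are straightforward to implement. The sketch below (Python) uses the temperature form with the same piecewise-linear quadrature and checks it against the exact semi-infinite-slab response to a constant heat flux, T(t) = 2q̇₀√(t/π)/b; the effusivity value is an assumed example of the order of a type E thermocouple pair.

```python
import numpy as np

def cook_felderman(t, T, b):
    """Heat flux from a surface temperature history on a semi-infinite slab:
    piecewise-linear discretization of Equation 6.82 (cf. Equation 6.87).
    b = sqrt(rho * c * k) is the thermal effusivity of the sensor material."""
    n = len(t)
    q = np.zeros(n)
    for j in range(1, n):
        dT = T[1:j + 1] - T[0:j]  # successive differences T_i - T_{i-1}
        denom = np.sqrt(t[j] - t[1:j + 1]) + np.sqrt(t[j] - t[0:j])
        q[j] = 2.0 * b / np.sqrt(np.pi) * np.sum(dT / denom)
    return q

# Verification: a constant flux q0 gives the exact surface history 2 q0 sqrt(t/pi) / b
b, q0 = 8900.0, 5.0e5                 # assumed effusivity (W s^0.5 m^-2 K^-1), flux (W/m^2)
t = np.linspace(0.0, 0.01, 500)
T_s = 2.0 * q0 * np.sqrt(t / np.pi) / b
print(cook_felderman(t, T_s, b)[-1])  # ~5.0e5 W/m^2
```

Working from the electromotive force instead of the temperature only adds the division by the Seebeck coefficient αS, as in Equations 6.83 through 6.85.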

Null-point calorimeter

The null-point calorimeter measures the heat flux to the surface of a disturbed solid body based on the transient rise detected by a temperature sensor. A schematic is presented in Figure 6.28. The null point is realized by drilling a circular blind cavity of radius R at the back face of a copper cylinder. Oxygen-free high-conductivity copper is the preferred choice of material. By definition, the null point is the unique position on the axial centerline of a disturbed body that experiences the same transient temperature history as that on the surface of a solid body in the absence of the physical disturbance (the hole). The temperature of the null point is measured by means of a thermocouple sensor, usually type K. The error between the temperature at the surface and that at the measurement location decreases proportionally to the ratio R/δ [87].


Figure 6.28 Schematic of a null-point calorimeter.

Several studies have been performed in the literature to find the optimal value of R/δ. An agreed value for practical design is 1.4 [87]. In particular, the correct choice of the thickness of the copper above the null-point cavity is critical. A too thick δ would limit the instrument response time, preventing the capture of important flow features, while a too thin δ would lead to readings significantly larger than the actual incident flux.
Null-point calorimeters are usually installed in stagnation point models. The front and back of the copper cylinder are usually flanged in order to provide a thermally insulating air gap between the sensor and its hosting body.
The null-point calorimeter operates on the same principle as a coaxial thermocouple. Equations 6.82 through 6.85 are used to compute the heat flux from the measured temperature, based upon semi-infinite solid heat conduction. A finite-length null-point calorimeter can be considered a semi-infinite body if [88]

$$\frac{\alpha t}{L^2} \le 0.3 \tag{6.86}$$

A simplified version of Equation 6.85 is usually adopted for heat flux data reduction [87] in practical applications:

$$\dot q(t) = \frac{2b}{\sqrt{\pi}\,\alpha_S}\sum_{i=1}^{n}\frac{E(t_i)-E(t_{i-1})}{\sqrt{t-t_i}+\sqrt{t-t_{i-1}}} \tag{6.87}$$

The calculation of heat flux data using this equation is preceded by smoothing of the timewise temperature data, by sectional fitting of second-order polynomial functions using the least squares method.
Null-point calorimeters are chiefly used in high-energy facilities like arc-jet plasma wind tunnels for measuring the stagnation point heat flux (see Figure 6.29).
They can be operated in a destructive mode, where the probe is brought to rest in the flow and subjected to ablation as the measurement progresses. A careful selection of the useful data must then be made during data processing if this technique is used. Alternatively, they are operated in a sweep mode, swinging the probe through the plasma stream and changing the exposure time as a compromise between a suitable response time and the time to burnout.

Thin-film gauge

A thin-film gauge consists of a thin metal layer, usually nickel or platinum, bonded by sputter deposition onto an insulating substrate. The metal film is typically less than 1 μm thick. Due to its small heat capacity, it is assumed to be at the same temperature as the substrate’s surface. The thin-film concept has already been presented in Section 6.4, as a resistance thermometer for direct surface temperature measurements. Nevertheless, it finds most of its application in surface heat flux measurement, based on the temperature detection.
During operation, a constant current (in the range of 7–10 mA) is supplied. A change in the surface temperature of the substrate is measured as a variation of the resistance of the device, thus of the voltage across it. From the voltage measurement, the thin-film resistance is

Figure 6.29 (See color insert.) (a) Teflon probe measurement in the arc-heated wind tunnel of the University of Texas at Arlington. (b) Picture of the Teflon sample prior to (left) and after (right) ablation. (Reproduced with permission from Gulli, S. et al., Exp. Fluids, 55(2), 1647, 2014.)

calculated. In practical applications, to obtain the surface temperature from the resistance, a simplified version of Equation 6.27 is used, assuming a linear dependency:

$$R(T) = R_0\left[1 + \alpha\,(T - T_0)\right] \tag{6.88}$$

Here, R0 can be considered as the resistivity at the temperature T0 at the beginning of the mea-
surement, while the sensitivity α is to be determined during the sensor calibration. For determin-
For determining the heat flux, the semi-infinite body assumption is adopted and Equation 6.82 can be used.
Thin-film resistance thermometers are widely applied in turbomachinery to measure the
heat flux to turbine blades [89]. As they allow very fast measurements, on the order of microseconds,
they are suitable for testing in impulse facilities, piston engines, and other types of
applications where highly transient phenomena occur.
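A minimal sketch of the resistance-to-temperature step, assuming a constant supply current and a calibrated sensitivity (function and argument names are illustrative, not from the original):

```python
def thin_film_temperature(V, I, R0, alpha, T0):
    """Surface temperature from a constant-current thin-film gauge, Equation 6.88.

    V     : measured voltage across the film (V)
    I     : constant supply current (A), typically 7-10 mA
    R0    : film resistance (ohm) at the reference temperature T0 (K)
    alpha : calibrated sensitivity (1/K)
    """
    R = V / I                          # Ohm's law gives the film resistance
    return T0 + (R / R0 - 1.0) / alpha
```

The heat flux then follows from the resulting temperature history through Equation 6.82, exactly as for the null-point calorimeter discussed earlier.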

Water-cooled calorimeter  Heat flux measurements with water-cooled calorimeters are based on the temperature rise
of a coolant liquid, usually water, flowing at the back face of a heat exchanging surface
(Figure 6.30). The heat balance of the system imposes that the energy crossing the sensing surface
area A is equal to the energy absorbed by the cooling water. Hence, the heat flux is obtained
by measuring the water mass flow rate with a rotameter or a mass flow meter [54] and the
temperature difference between the cooling water supply and return lines, using type E or K
thermocouples or resistance thermometers:

$$ \dot{q} = \frac{\dot{m} c_p \left( T_{out} - T_{in} \right)}{A} \qquad (6.89) $$

where ṁ and cp are the mass flow rate and the specific heat of the cooling water.

FIGURE 6.30 Schematic of a water-cooled calorimeter.

Depending on the size of the calorimeter surface, the heat flux may vary significantly
over the sensing area. Therefore, the measurement represents an average flux over the
active surface of the calorimeter. The sensor should be designed small enough compared
to the flow features (like the size of an impinging jet) to avoid nonuniformities. In order
to limit the heat conduction losses through the sidewall, a Teflon or nylon insulation is
installed between the sensor and the hosting cavity. The choice of the cooling flow rate
depends on the target heat flux to be measured. It is a trade-off between a value small
enough to ensure sufficient sensitivity (i.e., a ΔT high enough to be accurately measured
with thermocouples) and large enough to avoid bubble formation (boiling) during the heat
exchange. A good practice in assembling a water-cooled calorimetric system is to limit
the length of the feeding and return lines, placing the thermocouples as close as possible
to the sensing surface.
One of the main drawbacks of the water-cooled calorimeter is the long transient time to
steady state. In order to have reliable measurements, a steady state must be achieved both in
the measured thermal environment and in the calorimeter cooling circuit.
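A short sketch of Equation 6.89 and of the flow-rate trade-off just discussed (Python; names and the nominal water properties are illustrative assumptions):

```python
def water_calorimeter_flux(m_dot, T_in, T_out, area, cp=4186.0):
    """Average heat flux (W/m^2) over the sensing surface, Equation 6.89."""
    return m_dot * cp * (T_out - T_in) / area

def required_flow_rate(q_target, area, dT_target, cp=4186.0):
    """Mass flow rate (kg/s) that yields a measurable rise dT_target (K)
    at the expected flux q_target (W/m^2); dT_target must remain well
    below the margin to boiling in the cooling circuit."""
    return q_target * area / (cp * dT_target)
```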

Gardon gauge  Gardon gauges are rate-equation sensors, based on Fourier's law. They consist of a constantan
(or other thermoelectric metal) thin disk connected at its circumference to a massive metallic
support kept at constant temperature (for instance, a copper well). Two thermoelectric wires
are used to measure the temperature difference across the disk radius, which is proportional to
the average heat flux over the sensor surface. A schematic and a picture of an actual sensor are
shown in Figures 6.31 and 6.32, respectively.
If T is the temperature of the disk (of thickness δ) at radius r and time t, and k and ρcp are the
thermal conductivity and volumetric specific heat of its material, respectively, then the heat
transfer along the disk can be modeled as [90]

$$ \frac{\rho c_p}{k} \frac{\partial T}{\partial t} = \frac{\dot{q}}{k \delta} + \frac{1}{r} \frac{\partial T}{\partial r} + \frac{\partial^2 T}{\partial r^2} \qquad (6.90) $$

with T = T* at t = 0, 0 < r < R, and T = T* at 0 < t < ∞, r = R as boundary conditions for a foil of
radius R and a copper heat sink at constant T*. The solution of Equation 6.90 at steady-state
conditions (and for constant conductivity) reads

$$ T(r) - T^* = \dot{q}\,\frac{R^2 - r^2}{4 k \delta} \qquad (6.91) $$

Considering that the foil, the copper heat reservoir, and the two wires act like a thermocouple
system, a more useful working equation is obtained by expressing the measured steady-state
electromotive force for a given incoming heat flux as

$$ E = \alpha_S\,\dot{q}\,\frac{R^2}{4 k \delta} \qquad (6.92) $$

where α_S is the thermoelectric sensitivity of the foil/wire pair.

FIGURE 6.31 Schematic of a Gardon gauge sensor.

FIGURE 6.32 A Gardon-type calorimeter used at the Plasmatron facility at the von Karman Institute
for Fluid Dynamics.

More accurate models can be found in the literature to account for variable material properties
and nonlinearities [91].
The transient behavior of a Gardon-type sensor is commonly characterized in terms of the response
to a step change in incident flux. This can be simply modeled by a rising exponential law as

$$ \frac{T}{T^*} = 1 - e^{-t/\tau} \qquad (6.93) $$

The characteristic time constant τ is found by computing the temperature at the center of the
circular foil (r = 0):

$$ \tau = \frac{\rho c_p R^2}{4 k} \qquad (6.94) $$
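Both working relations are easy to evaluate; a brief sketch (Python; argument names are illustrative):

```python
def gardon_heat_flux(E, alpha_S, R, k, delta):
    """Steady-state heat flux (W/m^2) from the measured emf, inverting Equation 6.92.

    E        : electromotive force (V)
    alpha_S  : thermoelectric sensitivity of the foil/wire pair (V/K)
    R, delta : foil radius and thickness (m)
    k        : foil thermal conductivity (W/(m K))
    """
    return 4.0 * k * delta * E / (alpha_S * R**2)

def gardon_time_constant(rho, cp, R, k):
    """Characteristic response time (s) of the foil, Equation 6.94."""
    return rho * cp * R**2 / (4.0 * k)
```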

Problems

6.1 Consider the case of a spherical (d = 0.5 mm) bare bead thermocouple sensor (density
ρb = 8700 kg/m3, thermal conductivity kb = 30 W/[m K], specific heat capacity cp,b = 446
J/[kg K]) at 293.15 K suddenly immersed in an M = 0.75 airflow at a total temperature of
Tt = 313.15 K (density ρ = 1.2 kg/m3, kinematic viscosity ν = 16.97 × 10−6 m2/s, specific
heat at constant pressure cp = 1005 J/[kg K], thermal conductivity k = 0.0257 W/[m K],
Prandtl number Pr = 0.796). Determine the velocity error and the response of the thermocouple
for a recovery factor r = 0.815. Discuss a suitable solution to minimize the
velocity error and analyze how this would affect the thermocouple response. Assume

FIGURE 6.33 Schematic of the setup and measured temperature (T, in K, versus the vertical coordinate y, in mm).

that errors due to radiation and conduction are negligible and that the sensor obeys a
lumped capacitance response, [T(t) − T∞]/[T0 − T∞] = e^{−t/τ}, valid for very small Biot
numbers. Whitaker's correlation for flow past a sphere can be used [92]:

$$ Nu_d = 2 + \left( 0.4\, Re_d^{1/2} + 0.06\, Re_d^{2/3} \right) Pr^{2/5} \left( \frac{\mu}{\mu_s} \right)^{1/4} \qquad (6.95) $$

Here, μs is the flow viscosity evaluated at the sphere surface temperature. Equation 6.95
is valid for 3.5 ≤ Red ≤ 76,000 and 0.7 ≤ Pr ≤ 380.
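As a starting point for the problem, the sketch below evaluates the correlation and a lumped-capacitance time constant (Python; the free-stream velocity is an assumed round value consistent with M = 0.75 at the stated temperatures, and the viscosity ratio is taken as unity):

```python
def whitaker_nusselt(Re_d, Pr, mu_ratio=1.0):
    """Nusselt number for flow past a sphere, Equation 6.95.
    Valid for 3.5 <= Re_d <= 76,000 and 0.7 <= Pr <= 380."""
    return 2.0 + (0.4 * Re_d**0.5 + 0.06 * Re_d**(2.0 / 3.0)) \
        * Pr**0.4 * mu_ratio**0.25

# Illustrative numbers for the bead of Problem 6.1:
U, d, nu = 250.0, 0.5e-3, 16.97e-6   # assumed velocity (m/s), diameter (m), kin. viscosity (m^2/s)
Re_d = U * d / nu                    # Reynolds number based on the bead diameter
h = whitaker_nusselt(Re_d, 0.796) * 0.0257 / d   # convective coefficient, W/(m^2 K)
tau = 8700.0 * 446.0 * (d / 6.0) / h # lumped capacitance rho_b c_p,b (V/A)/h, with V/A = d/6 for a sphere
```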
6.2 The hot surface of a gray, nearly Lambertian sample is detected by means of a two-
color pyrometer through a pure quartz window and a broadband radiometer through
a KRS-5 window. The view angle of both instruments is nearly perpendicular to the
surface. The pyrometer is working within two overlapping narrowbands around 1 μm.
The radiometer measures the sample radiance between 0.6 and 39 μm. Both instruments
have been calibrated using a blackbody source with the windows in place and with the
same distance from the source as that of the actual measurement of the sample. During
the measurement, the devices are used with the same settings and arrangement as for the
calibration. The output at steady state reads Tpyro = 1700 K and Lradio = 410.2 kW/m2 for
pyrometer and radiometer, respectively.
Determine the total hemispherical emissivity of the sample at its actual temperature.
6.3 A stainless steel plate of thickness δ = 5 mm is heated by Joule heating, applying
a current of 10 A at 220 V through a circular (radius R = 200 mm) resistive heating
element installed in contact with one face of the plate. The temperature of the plate
can be varied by adjusting the current supplied to the heating element. The opposite face
is coated in correspondence with the heated area (gray area in the figure) using a high
emissivity paint (ε = 0.95) of negligible thickness in order to image its surface using
an IR camera. The plate has a thermal conductivity of 35 W/(m K), which we assume
to be constant within the temperature range of the experiment.
Describe a suitable method for calibrating the IR camera imaging the plate. If the
experiments are performed at an ambient temperature of 293 K and the temperature
along the vertical axis (y) of the plate is that shown in Figure 6.33, determine the natural
convection heat transfer coefficient for the plate along y.

References

1. Benedict RP (1977). Fundamentals of Temperature, Pressure, and Flow Measurements,
2nd edn., John Wiley & Sons, New York.
2. Consultative Committee for Thermometry (1990). The International Temperature Scale of 1990
(ITS-90), International Committee for Weights and Measures, Sèvres, France.

3. Bedford RE, Bonnier G, Maas H, Pavese F (1996). Recommended values of temperature
on the International Temperature Scale of 1990 for a selected set of secondary reference points,
Metrologia, 33(2), 133.
4. Pavese F, Molinar G (2013). Modern Gas-Based Temperature and Pressure Measurements,
Springer, New York.
5. Incropera FP, Dewitt DP, Bergman TL, Lavine AS (2007). Fundamentals of Heat and
Mass Transfer, 6th edn., John Wiley & Sons, Inc., New York.
6. Baehr HD, Stephan K (2011). Heat and Mass Transfer, 3rd edn., Springer-Verlag, Berlin,
Germany.
7. Doebelin EO (1975). Measurement Systems, McGraw-Hill.
8. Webster J (1999). Mechanical Variables Measurement—Solid, Fluid, and Thermal, CRC Press,
Boca Raton, FL.
9. Moffat RJ (1961). The Gradient Approach to Thermocouple Circuitry. Temperature, Its
Measurement and Control in Science and Industry, Van Nostrand Reinhold, Princeton, NJ.
10. Villafañe L, Paniagua G (2013). Aero-thermal analysis of shielded fine wire thermocouple
probes, International Journal of Thermal Sciences, 65(0), 214–223.
11. Moffat RJ (1961). Gas temperature measurements, General Motors Research Laboratory,
Report no. 0894-1777.
12. Moffat RJ (1990). Some experimental methods for heat transfer studies, Experimental Thermal
and Fluid Science, 3(1), 14–32.
13. Arts T, Boerrigter H, Buchlin JM, Carbonaro M, Denos R, Degrez G et al.
(2007). Introduction to Measurement Techniques, 2nd revised edn., von Karman Institute for Fluid
Dynamics, Rhode-Saint-Genese, Belgium.
14. ASTM International (2012). Standard specification and temperature-electromotive force (emf)
tables for standardized thermocouples, ASTM E230, West Conshohocken, PA.
15. ASTM International (2013). Standard test method for calibration of thermocouples by comparison
techniques, ASTM E220-13, West Conshohocken, PA.
16. Milos FS, Chen Y-K (2010). Ablation and thermal response property model validation for
phenolic impregnated carbon ablator, Journal of Spacecraft and Rockets, 47(5), 786–805.
17. ASTM International (1995). Standard specification for industrial platinum resistance
thermometers, ASTM E1137, West Conshohocken, PA.
18. Bentley RE (1998). Handbook of Temperature Measurement, Vol. 2, Resistance and Liquid-in-
Glass Thermometry, Springer, New York.
19. McGee TD (1988). Principles and Methods of Temperature Measurement, John Wiley & Sons,
Inc., New York.
20. Van Dusen MS (1925). Platinum-resistance thermometry at low temperatures, Journal of the
American Chemical Society, 47(2), 326–332.
21. Callendar HL (1887). On the practical measurement of temperature: Experiments made at
the Cavendish Laboratory, Cambridge, Philosophical Transactions of the Royal Society of London
(A), 178, 161–230.
22. Eggenberger DN (1951). Correction. Converting platinum resistance to temperature,
Analytical Chemistry, 23(5), 803.
23. Eggenberger DN (1950). Converting platinum resistance to temperature, Analytical
Chemistry, 22(10), 1335.
24. Wagner NK (1964). Theoretical accuracy of a meteorological rocketsonde thermistor, Journal
of Applied Meteorology, 3(4), 461–469.
25. Sanford ER (1951). A wind-tunnel investigation of the limitations of thermistor anemometry,
Journal of Meteorology, 8(3), 182–190.
26. LCR Hallcrest Ltd (1991). Handbook of Thermochromic Liquid Crystal Technology, LCR Hallcrest Ltd.,
Connah's Quay, U.K.
27. Barigozzi G, Franchini G, Perdichizzi A, Maritano M, Abram R (2013). Purge
flow and interface gap geometry influence on the aero-thermal performance of a rotor blade
cascade, International Journal of Heat and Fluid Flow, 44(0), 563–575.
28. Kowalewski T, Ligrani P, Dreizler A, Schulz C, Fey U (2007). Temperature and
heat flux, in: Tropea C, Yarin A, Foss J, eds., Springer Handbook of Experimental Fluid
Mechanics, Springer, Berlin, Germany, pp. 487–561.
29. Akino N, Kunugi T, Ichimiya K, Mitsushiro K, Ueda M (1989). Improved
liquid-crystal thermometry excluding human color sensation, Journal of Heat Transfer, 111(2),
558–565.
30. Modest MF (2013). Radiative Heat Transfer, 3rd edn., Academic Press, New York.
31. Dabiri D (2009). Digital particle image thermometry/velocimetry: A review, Experiments in
Fluids, 46(2), 191–241.

32. Baughn JW (1995). Liquid crystal methods for studying turbulent heat transfer, International
Journal of Heat and Fluid Flow, 16(5), 365–375.
33. Roberts GT, East RA (1996). Liquid crystal thermography for heat transfer measurement in
hypersonic flows—A review, Journal of Spacecraft and Rockets, 33(6), 761–768.
34. Wozniak G, Wozniak K, Siekmann J (1996). Non-isothermal flow diagnostics using
microencapsulated cholesteric particles, Applied Scientific Research, 56(2–3), 145–156.
35. Stasiek J (1997). Thermochromic liquid crystals and true colour image processing in heat
transfer and fluid-flow research, Heat and Mass Transfer, 33(1–2), 27–39.
36. Ireland PT, Jones TV (2000). Liquid crystal measurements of heat transfer and surface shear
stress, Measurement Science and Technology, 11(7), 969.
37. Behle M, Schulz K, Leiner W, Fiebig M (1996). Color-based image processing to measure
local temperature distributions by wide-band liquid crystal thermography, Applied Scientific
Research, 56(2–3), 113–143.
38. Widger WK, Woodall MP (1976). Integration of the Planck blackbody radiation function,
Bulletin of the American Meteorological Society, 57(10), 1217–1219.
39. Ianiro A, Cardone G (2010). Measurement of surface temperature and emissivity with stereo
dual-wavelength IR thermography, Journal of Modern Optics, 57(18), 1708–1715.
40. Cardone G, Ianiro A, dello Ioio G, Passaro A (2012). Temperature maps measurements
on 3D surfaces with infrared thermography, Experiments in Fluids, 52(2), 375–385.
41. Schmidt E, Eckert ERG (1935). Über die Richtungsverteilung der Wärmestrahlung von
Oberflächen, Forschung Geb. D. Ingenieurwes., Vol. 6.
42. Palik ED (1997). Handbook of Optical Constants of Solids, Academic Press, Burlington, MA.
43. Harrison TR (1960). Radiation Pyrometry and Its Underlying Principles of Radiant Heat
Transfer, John Wiley & Sons Inc., New York.
44. Prokhorov AM (1970). Bol'shaia sovetskaia entsiklopediia (The Great Soviet Encyclopedia),
Izd-vo "Sovetskaia entsiklopediia", Moskva, Russia.
45. Gao S, Wang L, Feng C (2014). Multi-spectral pyrometer for gas turbine blade temperature
measurement, in: Proc. SPIE 9202, Photonic Applications for Aviation, Aerospace, Commercial
and Harsh Environments V, San Diego, CA.
46. Rohy DA, Compton WA (1972). Radiation pyrometer for gas turbine blades. NASA Contractor
Report. NASA Marshall Space Flight Center, Report No.: 2232, Huntsville, AL.
47. de Lucia M, Lanfranchi C (1994). An infrared pyrometry system for monitoring gas
turbine blades: Development of a computer model and experimental results, Journal of Engineering
for Gas Turbines and Power, 116(1), 172–177.
48. Kerr C, Ivey P (2004). Exploratory design modifications for enhancing
pyrometer purge air system performance, International Journal of Turbo and Jet Engines, 21(3),
203–210.
49. Savino R, De Stefano Fumo M, Paterna D, Di Maso A, Monteverde F (2010).
Arc-jet testing of ultra-high-temperature-ceramics, Aerospace Science and Technology, 14(3), 178–187.
50. Loesener O, Neuer G (1994). A new far-infrared pyrometer for radiation temperature
measurement on semitransparent and absorbing materials in an arc-heated wind tunnel, Measurement,
14(2), 125–134.
51. Marschall J, Pejakovic D, Fahrenholtz WG, Hilmas GE, Panerai F, Chazot O
(2012). Temperature jump phenomenon during plasmatron testing of ZrB2-SiC ultrahigh-temperature
ceramics, Journal of Thermophysics and Heat Transfer, 26(4), 559–572.
52. Panerai F, Marschall J, Thömel J, Vandendael I, Hubin A, Chazot O (2014).
Air plasma-material interactions at the oxidized surface of the PM1000 nickel-chromium superalloy,
Applied Surface Science, 316(0), 385–397.
53. Panerai F, Helber B, Chazot O, Balat-Pichelin M (2014). Surface temperature jump
beyond active oxidation of carbon/silicon carbide composites in extreme aerothermal conditions,
Carbon, 71, 102–119.
54. Panerai F, Chazot O (2012). Characterization of gas/surface interactions for ceramic matrix
composites in high enthalpy, low pressure air flow, Materials Chemistry and Physics, 134(2–3),
597–607.
55. Alfano D, Scatteia L, Cantoni S, Balat-Pichelin M (2009). Emissivity and catalycity
measurements on SiC-coated carbon fibre reinforced silicon carbide composite, Journal of the
European Ceramic Society, 29(10), 2045–2051.
56. Balat-Pichelin M, Robert JF, Sans JL (2006). Emissivity measurements on carbon–carbon
composites at high temperature under high vacuum, Applied Surface Science, 253(2), 778–783.
57. Hernandez D, Badie JM, Escourbiac F, Reichle R (2008). Development of two-colour
pyroreflectometry technique for temperature monitoring of tungsten plasma facing components,
Fusion Engineering and Design, 83(4), 672–679.

58. Reichle R, Andrew P, Balorin C, Brichard B, Carpentier S, Corre Y et al.
(2009). Concept and development of ITER divertor thermography diagnostic, Journal of
Nuclear Materials, 390–391(0), 1081–1085.
59. Hernandez D, Sans JL, Netchaieff A, Ridoux P, Le Sant V (2009). Experimental
validation of a pyroreflectometric method to determine the true temperature on opaque surface
without hampering reflections, Measurement, 42(6), 836–843.
60. Reichle R, Brichard B, Escourbiac F, Gardarein JL, Hernandez D, Le Niliot
C et al. (2007). Experimental developments towards an ITER thermography diagnostic, Journal of
Nuclear Materials, 363–365(0), 1466–1471.
61. Astarita T, Carlomagno GM (2013). Infrared Thermography for Thermo-Fluid-Dynamics,
Springer, New York.
62. Airy GB (1835). On the diffraction of an object-glass with circular aperture, Transactions of the
Cambridge Philosophical Society, 5, 283–291.
63. Rayleigh (1879). XXXI. Investigations in optics, with special reference to the spectroscope,
Philosophical Magazine Series 5, 8(49), 261–274.
64. Cox JT, Hass G, Jacobus GF (1961). Infrared filters of antireflected Si, Ge, InAs, and InSb,
Journal of the Optical Society of America, 51(7), 714–718.
65. Jones RC (1953). Performance of detectors for visible and infrared radiation, in: Marton L, ed.,
Advances in Electronics and Electron Physics, Academic Press, pp. 1–96.
66. Jones RC (1959). Phenomenological description of the response and detecting ability of
radiation detectors, Proceedings of the IRE, 47(9), 1495–1502.
67. Rogalski A (2003). Infrared detectors: Status and trends, Progress in Quantum Electronics,
27(2–3), 59–210.
68. ASTM International (2011). Standard test method for noise equivalent temperature difference of
thermal imaging systems, ASTM E1543, West Conshohocken, PA.
69. ASTM International (1997). Standard test method for minimum resolvable temperature difference
of thermal imaging systems, ASTM E1213, West Conshohocken, PA.
70. Carlomagno G, De Luca L (1991). Infrared thermography for flow visualization and heat
transfer measurements. Stato dell'arte del rilevamento con camere termiche nella banda 8–15
micron, Firenze.
71. Gunapala SD, Ting DZ, Soibel A, Rafol SB, Khoshakhlagh A, Mumolo JM et al.
(2013). Modulation transfer function of infrared focal plane arrays, Photonics Conference (IPC),
2013, IEEE, Piscataway, NJ, pp. 600–601.
72. de Luca L, Cardone G (1991). Modulation transfer function cascade model for a sampled IR
imaging system, Applied Optics, 30(13), 1659–1664.
73. Boreman GD (2001). Modulation Transfer Function in Optical and Electro-Optical Systems,
SPIE Press, Bellingham, WA.
74. Carlomagno G, Cardone G (2010). Infrared thermography for convective heat transfer
measurements, Experiments in Fluids, 49(6), 1187–1218.
75. Cardone G, Discetti S (2008). Reconstruction of 3D surface temperature from IR images,
Ninth International Conference on Quantitative Infrared Thermography, Krakow, Poland.
76. Le Sant Y, Marchand M, Millan P, Fontaine J (2002). An overview of infrared
thermography techniques used in large wind tunnels, Aerospace Science and Technology, 6(5),
355–366.
77. Anderson JD (2000). Hypersonic and High Temperature Gas Dynamics, American Institute of
Aeronautics and Astronautics, Reston, VA.
78. Narayanan V, Page RH, Seyed-Yagoobi J (2003). Visualization of air flow using infrared
thermography, Experiments in Fluids, 34(2), 275–284.
79. Greco CS, Ianiro A, Cardone G (2014). Time and phase average heat transfer in single and
twin circular synthetic impinging air jets, International Journal of Heat and Mass Transfer, 73(0),
776–788.
80. Carlomagno GM, Ianiro A (2014). Thermo-fluid-dynamics of submerged jets impinging at
short nozzle-to-plate distance: A review, Experimental Thermal and Fluid Science, 58(0), 15–35.
81. Astarita T, Cardone G, Carlomagno G (2002). Convective heat transfer in ribbed
channels with a 180° turn, Experiments in Fluids, 33(1), 90–100.
82. Astarita T, Cardone G, Carlomagno GM (2006). Infrared thermography: An optical
method in heat transfer and fluid flow visualisation, Optics and Lasers in Engineering, 44(3–4),
261–281.
83. Sousa J, Villafañe L, Paniagua G (2014). Thermal analysis and modeling of surface heat
exchangers operating in the transonic regime, Energy, 64(0), 961–969.
84. Tirtey SC, Chazot O, Walpot L (2011). Characterization of hypersonic roughness-induced
boundary-layer transition, Experiments in Fluids, 50(2), 407–418.

85. Hollis BR (1995). User's manual for the one-dimensional hypersonic experimental
aerothermodynamic (1DHEAT) data reduction code.
86. Menezes V, Bhat S (2010). A coaxial thermocouple for shock tunnel applications, Review of
Scientific Instruments, 81(10), 104905.
87. ASTM International (2008). Standard test method for measuring extreme heat-transfer rates
from high-energy environments using a transient, Null-Point Calorimeter, ASTM E598, West
Conshohocken, PA.
88. Dicristina V, Howey DC (1968). Advanced calorimetric techniques for arc plasma heat
transfer diagnostics in the heat flux range up to 20 kW/cm2, Third Aerodynamics Testing Conference,
American Institute of Aeronautics and Astronautics, Reston, VA.
89. Schultz DL, Jones TV (1973). Heat transfer measurements in short-duration hypersonic
facilities. AGARDograph report AD0758590, Advisory Group for Aerospace Research and
Development, Paris, France.
90. Gardon R (1953). An instrument for the direct measurement of intense thermal radiation,
Review of Scientific Instruments, 24(5), 366–370.
91. Keltner NR, Wildin MW (1975). Transient response of circular foil heat-flux gauges to
radiative fluxes, Review of Scientific Instruments, 46(9), 1161–1166.
92. Whitaker S (1972). Forced convection heat transfer correlations for flow in pipes, past flat
plates, single cylinders, single spheres, and for flow in packed beds and tube bundles, AIChE
Journal, 18(2), 361–371.
93. Gulli S, Ground C, Crisanti M, Maddalena L (2014). Teflon probing for the flow
characterization of arc-heated wind tunnel facilities, Experiments in Fluids, 55(2), 1–18.
CHAPTER SEVEN

Density-based methods

Fyodor Glazyrin

Contents

7.1 Introduction
7.2 Light refraction in inhomogeneous media
7.3 Equations of state
7.4 Shadowgraph
    General principles
    Experimental geometry
    Light source and illumination
    Processing and interpreting the images
7.5 Schlieren
    General principles
    Experimental geometry
    Light source
    Schlieren knife
    Processing and interpreting the images
7.6 Background-oriented schlieren
    General principles
    Experimental geometry
    Background
    Light source and illumination
    Image-capturing scheme
    Image processing
    Example experiment
Problems
References

7.1 Introduction

In the field of flow visualization, density-based techniques constitute a vast and important
array of methods. They have a long historical record—one of the first techniques employed
for scientific visualization of fluid dynamics was shadowgraph [1]—and ever since, they have
been successfully employed in different applications, with more advanced equipment but
based on the very same physical laws. When the time came for scientists to dive into the
mysteries of supersonic flows, density-based methods proved absolutely indispensable
in these studies. Figure 7.1 shows one of the first images of a flow around a supersonic
projectile. The schlieren technique used in that truly remarkable experiment is still being
employed. Of course, the photographic plates used in this experiment have become obsolete,
and the atmospheric spark discharge is hardly found as a light source anymore.


FIGURE 7.1 Schlieren photograph of a flying supersonic bullet, taken by Peter Salcher in
collaboration with Ernst Mach, 1888. (Image courtesy of P. Krehl.)

In modern experimental aerodynamics, most flows under investigation tend to include features
that fall under the scope of density-based visualization techniques:
• Convective heat transfer, with gas density changing with temperature
• Turbulence and vorticity
• Mixing processes, for example, on the borders of submerged jets
• Compressible flows, usually showing up at transonic and supersonic velocities, including
shock waves as the most intense case
• Plasma flows, characterized by the impact of ionization on the optical properties
• Combustion processes, combining several of the features mentioned earlier [2]
All of these phenomena can be captured by density-based optical techniques, which explains
the fact that schlieren, shadowgraph, and techniques derived from them remain ubiquitous
tools in aerodynamic laboratories even centuries after their invention. Note that this list describes
a class of flows wider than the conventional category of "compressible flows."

7.2 Light refraction in inhomogeneous media

The ability to visualize flows of liquids and gases by density-based methods rests on light
refraction and on the fact that changes in the density of a fluid lead to changes in its refractive
index (aptly named optical density). So a heterogeneous body of gas with varying density is
also an optical object with refractive index varying throughout its volume. The propagation of
light basically obeys the Huygens–Fresnel principle, which states that at each moment of time,
every point of the wave front of light becomes a source of secondary spherical light waves, and the
wave front at the next moment of time is formed by the interference of all such (infinite) secondary
waves. For a more extended explanation, one can turn to Chapter 3 of the Handbook of Optics under
the editorship of Bass [3], and readers skilled at mathematics can find an in-depth description in
Chapter 7 of the classical book by Landau and Lifshitz [4]. As a consequence of this fundamental
principle, light rays traveling through media with varying optical density change not only their
speed but also their direction of propagation, which is essentially the principle of refraction.

FIGURE 7.2 Distortion of the background due to refraction.

A light ray passing through areas of a flow with a higher optical density travels a longer
optical path than a ray traveling in an undisturbed fluid. The difference in the optical path
results in a difference of phase between the two light rays. Because of this, objects that are
mostly transparent but have varying optical density are called phase objects. Of course, under
certain conditions, a gas flow starts absorbing light significantly (flows containing vapor or
droplets, extreme ionization, etc.), but in most cases, aerodynamic flows can be considered
phase objects as per the definition given earlier.
Light passing through a flow of uneven density undergoes deflection according to the density
distribution in the flow. By analyzing the results of this deflection, the density distribution
in the flow can be studied.
Figure 7.2 shows a light ray deflecting inside a heterogeneous phase object.
If we direct the Z-axis along the main optical axis of the system (i.e., an axis parallel to the
light ray direction), then the angular deflection εy of the light ray due to the gradient of refractive
index grows as it passes through the medium:

$$ \frac{\partial \varepsilon_x}{\partial z} = \frac{1}{n(z)} \frac{\partial n(z)}{\partial x}, \qquad \frac{\partial \varepsilon_y}{\partial z} = \frac{1}{n(z)} \frac{\partial n(z)}{\partial y} \qquad (7.1) $$

So, finally, after integrating Equation 7.1 along the optical path,

$$ \varepsilon_x \approx \int_{z_1}^{z_2} \frac{1}{n(z)} \frac{\partial n(z)}{\partial x}\, dz \approx \frac{1}{\langle n \rangle} \left\langle \frac{\partial n}{\partial x} \right\rangle (z_2 - z_1), \qquad \varepsilon_y \approx \int_{z_1}^{z_2} \frac{1}{n(z)} \frac{\partial n(z)}{\partial y}\, dz \approx \frac{1}{\langle n \rangle} \left\langle \frac{\partial n}{\partial y} \right\rangle (z_2 - z_1) \qquad (7.2) $$

Here, ε = ε(z) is the angle between the initial direction of the light ray and its deflected part,
and εx and εy are its x- and y-components. n(z) is the local value of the refractive index, and 〈n〉 and
〈∂n/∂y〉 are its value and its spatial derivative, respectively, averaged along the Z-axis from z1
to z2. (z2 − z1) = L is the width of the refracting body. In practical applications, it coincides with
the size of the investigated flow along the optical axis of the imaging system.
These expressions are written under the assumption that the deflection of the light rays is
sufficiently small so that ε ≪ 1. This is the case for most applications. For instance, the deflection
angle ε generated in a thermal plume above a candle is around 200 arcseconds or 0.06°.
A 5 cm vortex ring, generated behind an expanding shock wave, deflects light rays by ≈0.1°.
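Equation 7.2 is also straightforward to evaluate numerically when the refractive-index field is known on a grid. A minimal sketch (Python; the grid layout and function name are assumptions for illustration):

```python
import numpy as np

def deflection_angles(n_field, dx, dy, dz):
    """Small-angle ray deflections through a refractive-index field, Equation 7.2.

    n_field : 3D array n[ix, iy, iz] on a regular grid; rays travel along z
    dx, dy, dz : grid spacings (m)
    Returns eps_x, eps_y (rad) for every (x, y) ray position.
    """
    dn_dx = np.gradient(n_field, dx, axis=0)
    dn_dy = np.gradient(n_field, dy, axis=1)
    # integrate (1/n) dn/dx and (1/n) dn/dy along the line of sight
    eps_x = np.trapz(dn_dx / n_field, dx=dz, axis=2)
    eps_y = np.trapz(dn_dy / n_field, dx=dz, axis=2)
    return eps_x, eps_y
```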

FIGURE 7.3 "Wet asphalt" optical illusion on an intercity highway, Belarus. (Photo by A. Kasyan,
www.mpravda.by.)

From the form of Equation 7.2, it is obvious that the light rays are deflected toward
the areas of greater density (or, strictly speaking, refractive index). For instance, this is the
reason for the well-known mirages of "wet asphalt" occurring on summer days on roads
(Figure 7.3). In the air above the road, the layer closest to the asphalt is the hottest and thus the
most rarefied. Air density increases with height, and light rays deflect upward, in a way very
similar to reflection from a liquid surface.

7.3 Equations of state

The exact relation between density and refractive index may be expressed in several ways.
One of the most well-known variants describing the dependence of gas refractive index on its
density is the Gladstone–Dale equation:

$$ n = \rho K + 1 \qquad (7.3) $$

where
n is the refractive index
ρ is the gas density
K is the Gladstone–Dale constant specific to the given medium

Using Equation 7.3, Equation 7.2 can be rewritten as

$$ \varepsilon_x \approx \frac{K}{\rho K + 1} \frac{\partial \rho}{\partial x} L, \qquad \varepsilon_y \approx \frac{K}{\rho K + 1} \frac{\partial \rho}{\partial y} L \qquad (7.4) $$

Here, the angle of deflection is explicitly related to the gas density inside the investigated
object.
The specific Gladstone–Dale constant is not strictly constant, as it depends on the gas conditions.
But when the properties of the gas do not vary too much in one experiment (or in one
part of the flow), it can be considered constant, providing sufficient precision in measurements
[5]. For instance, K = 2.257 × 10−4 m3/kg for dry air at normal conditions and a light wavelength
of 633 nm (typical for He–Ne lasers), and it changes negligibly with temperature below
4000 K (K ≈ 2.254 × 10−4 m3/kg at 100°C). Table 7.1 gives the values of the Gladstone–Dale
constant of air for different wavelengths of light, and Table 7.2 those of several often encountered
gaseous chemicals, at a single laser wavelength of 633 nm.

Table 7.1 Gladstone–Dale constant of air at 288 K for different wavelengths

Light wavelength (nm)    K (×10−4 m3/kg)
356.2                    2.330
407.9                    2.304
480.1                    2.281
509.7                    2.274
567.7                    2.264
607.4                    2.259
633                      2.257
644                      2.255
703.4                    2.250
912.5                    2.239

Table 7.2 Gladstone–Dale constants of various gases at 288 K for 633 nm wavelength

Gas species              K (×10−4 m3/kg)
Oxygen (O2)              1.89
Nitrogen (N2)            2.39
Argon (Ar)               1.57
Carbon dioxide (CO2)     2.27
Water vapor (H2O)        3.12

For the more complex cases of quantitative measurements, when the exact value of K is
necessary to extract density data, the gas should be treated as a mixture of its components.
The optical parameters of a gas mixture can be calculated as

$$ n = \sum_i K_i \rho_i + 1, \qquad K = \sum_i K_i \frac{\rho_i}{\rho} \qquad (7.5) $$

where ρi denotes the density of the pure ith component of the mixture and Ki is its Gladstone–Dale
constant.

With the help of tabulated data [6], a specific K can be calculated for test objects consisting
of various gases, or to account for the presence of water vapor and, in some cases, flow
ionization. It should be noted that variations in the chemical contents of the gas may lead to
gradients in the refractive index even in a flow with homogeneous density.
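As a sketch of Equation 7.5 (Python; the mass fractions below are rough values assumed for illustration, with the K_i values taken from Table 7.2):

```python
def mixture_gladstone_dale(partial_densities, K_values):
    """Refractive index and effective Gladstone-Dale constant of a gas
    mixture, Equation 7.5, from per-species partial densities (kg/m^3)
    and tabulated constants K_i (m^3/kg)."""
    rho = sum(partial_densities)
    n = 1.0 + sum(K * r for K, r in zip(K_values, partial_densities))
    K_mix = sum(K * r / rho for K, r in zip(K_values, partial_densities))
    return n, K_mix

# Dry air at 288 K approximated as an O2/N2/Ar mixture by mass fraction:
rho_air = 1.225                               # kg/m^3, assumed
fractions = [0.232, 0.755, 0.013]             # approximate mass fractions
K_i = [1.89e-4, 2.39e-4, 1.57e-4]             # from Table 7.2, 633 nm
n, K = mixture_gladstone_dale([f * rho_air for f in fractions], K_i)
# K comes out near the tabulated 2.257e-4 m^3/kg for air
```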

7.4 Shadowgraph

General principles  Shadowgraph is one of the earliest and simplest techniques used for visualizing fluid flows.
In fact, in its simplest form, it does not need any optical component and can therefore be
observed in many real-world situations.
If light from a point light source I (Figure 7.4a) passes through a phase object S and is then
projected on a screen C, the resulting image will be unevenly illuminated, as the light rays passing
through optical inhomogeneities deflect. This effect can be seen outdoors on a clear day, when
the sun itself serves as a point light source.

FIGURE 7.4 Optical schemes of shadowgraph technique: (a) direct shadowgraph in diverging
light, (b) direct shadowgraph in parallel light, and (c) focused parallel-light shadowgraph. I, point
light source; L1, L2, condenser lenses; S, investigated phase object; C, screen.

An optical inhomogeneity (historically called a "schliere", hence the name of the schlieren
method) effectively redistributes the luminance on the screen, increasing the brightness of
some points while decreasing it in other areas. The total amount of falling light remains the
same, except for possible light absorption inside the phase object. Outlines of the optical
inhomogeneities form a corresponding shape on the screen, similar to a solid object casting a
shadow on the wall. This is a shadowgraph image, or simply shadowgram.
Then, if we add a focusing lens L1 to the system (Figure 7.4b), the light beams become
parallel before reaching the phase object. The image projected on the screen then geometrically
matches the optical inhomogeneities creating it and provides more accurate information about
the features of the flow. This variant of the technique is called parallel-light shadowgraph.
If we want to capture the image on a smaller scale, the recording plane can be focused by means
of a camera lens onto a film or plate of reduced size (Figure 7.4c). This approach is generally dubbed
"focused" shadowgraph and is frequently used when the shadowgraph is captured on camera.
The quantity measured is the field of light intensity in the screen plane, which reflects the
distribution of refractive index in the light path. Figure 7.5 presents four general kinds of
such distribution. They can be pictured as glass planes with different cross sections. For
simplicity, it is considered here that light deflection happens only along the Y-axis. In general
cases, the angle of deflection will have X- and Y-components, which are defined by the X- and
Y-variations of refractive index, respectively.
If the refractive index does not change along Y, the light passes the test section undisturbed,
providing uniform illumination on the screen (Figure 7.5a, plain glass sheet). If the refractive index

FIGURE 7.5 Deflections of light by different distributions of refractive index: (a) ∂n/∂y = 0, (b) ∂n/∂y = const ≠ 0,
(c) ∂²n/∂y² = const ≠ 0, and (d) ∂³n/∂y³ ≠ 0.

has a linear variation and the gradient of the refractive index ∂n/∂y is constant (Figure 7.5b, glass
wedge), the deflection angle remains the same for all rays passing that region of the flow. The
plane of observation will again show a uniform illumination for this region. When the density
gradient is represented by a glass block with a constant curvature (Figure 7.5c), it corresponds
to a density field with constant (∂²n/∂y²) ≠ 0. A density field with a constant second derivative
will also lead to a uniformly illuminated region, though of lower exposure, since the light rays
diverge approximately uniformly. Only when the refractive index has a complex distribution for
which (∂³n/∂y³) ≠ 0 and ∂²n/∂y² changes with y will the light be unevenly deflected in different
regions and the brightness of the image on the screen be uneven. Thus, shadowgraph is unable
to produce any distinguishable effect where the test object has an area of constant (but nonzero)
gradient of the refractive index—for example, a glass wedge.
If we consider the shadowgraph image as a field of illuminance E(x, y) in the image plane, then
the contrast C(x, y) at a given point of the image is defined as the ratio of the illuminance gradient to
the local illuminance value. It can be shown explicitly [5] that the image contrast is indeed
linearly proportional to the second derivative of the optical density:

$$ C_x = \frac{1}{E} \frac{\partial E}{\partial x} \sim \frac{\partial^2 n}{\partial x^2}, \qquad C_y = \frac{1}{E} \frac{\partial E}{\partial y} \sim \frac{\partial^2 n}{\partial y^2} \qquad (7.6) $$
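Equation 7.6 also suggests a simple numerical check: the contrast field of a digitized shadowgram can be computed directly from the recorded illuminance. A sketch (Python; the array orientation is an assumption):

```python
import numpy as np

def shadowgraph_contrast(E, dx=1.0, dy=1.0):
    """Contrast field of a shadowgram, Equation 7.6: C = (grad E) / E.

    E : 2D array of recorded illuminance (rows = y, columns = x)
    Bright/dark banding appears where the second derivative of the
    refractive index along the line of sight is nonzero.
    """
    dE_dy, dE_dx = np.gradient(E.astype(float), dy, dx)
    return dE_dx / E, dE_dy / E          # (C_x, C_y)
```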

Compared to the schlieren technique (described and discussed later), shadowgraph is less
sensitive; thin, sharp-edged inhomogeneities are shown best in shadowgraph.
For further reading, perhaps our first recommendation would be the excellent monograph
by Prof. Gary Settles [7], a comprehensive work describing the basics of the schlieren
methods in detail.

Experimental geometry  The three main variants of shadowgraph techniques depicted in Figure 7.4 are used in modern
experiments, chosen according to the specific needs.
"Diverging-light shadowgraph" (Figure 7.6) is easily the most unsophisticated of them.
The schlieren object S of height d is located at distance g from the plane of the screen. If
illuminated by a quasi-point source of light I at distance h from that plane (the effective total
size of the setup), the schlieren object casts a "shadow" of height d′.
A light ray IA, which would be straight if no schlieren object were present, is deflected by the object
by the angle ε and falls on the screen at point A′ instead of A, displaced by the distance Δa ≅ ε ⋅ g.
The contrast of the resulting shadowgram, defined previously, can be shown to be equal to

$$ C = \frac{\Delta E}{E} = \frac{\partial \varepsilon}{\partial y} \frac{g (h - g)}{h} \qquad (7.7) $$

FIGURE 7.6 Detailed scheme of shadowgraph in diverging light: I, light source; S, investigated
phase object; C, screen.

Here, the term ∂ε/∂y describes the schlieren object itself, while the second term describes the
dependency on the scheme geometry. It can easily be found that for a given h, the best contrast
(and, consequently, sensitivity) is achieved when the schlieren object is placed halfway from
the light source to the screen: g = h/2.
If the deflection angles are considered small, then the magnification of the shadowgram
relative to the schlieren object is given simply by

$$ m = \frac{h}{h - g} \qquad (7.8) $$
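These two geometric relations are easy to explore numerically; the sketch below (Python, with an assumed setup size) confirms that the contrast of Equation 7.7 peaks at g = h/2:

```python
def diverging_shadowgraph(h, g, deps_dy=1.0):
    """Contrast (Equation 7.7) and magnification (Equation 7.8) for a
    diverging-light setup of total size h with the object at distance g
    from the screen; deps_dy is the object property d(epsilon)/dy."""
    C = deps_dy * g * (h - g) / h
    m = h / (h - g)
    return C, m

h = 4.0                                   # m, an assumed source-to-screen distance
contrasts = [(g, diverging_shadowgraph(h, g)[0])
             for g in [0.1 * h * i for i in range(1, 10)]]
best_g = max(contrasts, key=lambda gc: gc[1])[0]   # -> h/2
```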

Diverging-light shadowgraph is prominent in that it is not limited by the size of its optical
elements: the only limiting dimension is the size of the screen on which the image is projected.
As there are very modest demands for the material and quality, screens can be made very large
at a relatively low cost. This allows building shadowgraph setups with a very large field of
view, almost unattainable in other visualization techniques. It can be applied to studies of large-scale
flows, for example, aircraft jet engines or field explosion tests. This is the most typical
application of diverging-light shadowgraph, since in other qualities it is inferior to the more complex
setups discussed later. However, a high-intensity light source is necessary for such applications.
Practice shows that shadowgrams may be projected on photographic film, on ground-glass
or projection screens, or virtually on any reasonably flat, diffusely reflecting surface such as a
wall, sandy soil, or snow. It is recommended, however, to use material with high reflectivity if
available. Specialized screens used for projectors, especially the ones with intensified reflection,
make perhaps the best screens for direct shadowgrams, but it is rather difficult to obtain
a separate patch of smaller size.
The image on the screen is easily observed by the eye but can also be captured by a camera.
This can be done in two ways: the screen is photographed either in reflected light (when the
camera is positioned on the same side as the falling light beam) or in transmitted light (a semitransparent
screen is photographed from behind). Generally, the second case gives better
contrast, but finding and mounting an appropriate screen is more difficult. Of course, both
methods give images of lower quality than shadowgrams directly projected on the camera
film/chip. But unless focused shadowgraph is employed (see below), the latter approach limits
the scheme geometry severely, since the available field of view is somewhat smaller than the
size of the camera lens used.
"Parallel-light shadowgraph" needs a more complicated optical setup but avoids the distortions
associated with nonparallel light.
An optical element is added to the setup (a lens or a parabolic mirror), and the light source
is placed in its focal point. Then, the light is transformed into a parallel beam, which is then
directed onto the test section. The diameter of the beam is defined by the diameter of the main
optical element, and so is the field of view of the scheme.
The behavior of the light then becomes unrelated to the exact position of the focusing field
element and the light source, as if the light source were moved to an infinite distance. The contrast
of the shadowgram becomes

$$ C = \frac{\Delta E}{E} = \frac{\partial \varepsilon}{\partial y}\, g \qquad (7.9) $$

It can easily be observed that for the same distance g the sensitivity of parallel-light shadowgraph
is twice the optimum achievable in diverging light. Because of this, parallel-light
shadowgraph is preferable, except for cases where unreasonably large collimating
elements would be required. Parallel light also avoids shadow distortion and better matches the sort
of 2D phenomena often studied in wind tunnels [8].
The optical quality requirements are quite simple. Single-element lenses, Fresnel lenses, and
inexpensive mirrors can be used in shadowgraph setups, providing images of sufficient quality.
"Focused shadowgraph" goes one step further, manipulating the beam of light not only
before it reaches the test section but also after it. A second field lens is added after the test

FIGURE 7.7 Optical scheme of focused parallel-light shadowgraph with a camera: I, point light
source; L1, L2, condenser lenses; S, investigated phase object; F, camera lens; C, camera film/chip.

section that collimates the beam. Technically, it can be used to scale the shadowgraph image
onto smaller or bigger screens, but nowadays its most frequent use is to capture the
shadow image directly with a camera (Figure 7.7).
In this case, the camera is focused on a "virtual screen" M, situated at distance g from
the test object S. The second field lens serves to scale the light beam to fit it appropriately within
the dimensions of the camera's lens, creating an image of the shadowgram at M on the film/chip
of the camera. The position of M defines the sensitivity of the resulting image: Equation 7.9
applies here unchanged. Adjusting the focusing lens allows changing the sensitivity of the
scheme without disturbing any other optical elements, while observing the image in the process.
Usually, a long-focus lens is necessary to combine an appropriate demagnification with the required
focusing distance. Telephoto zoom lenses (200–300 mm) fit nicely in this concept, if attached to cameras
in a standard way. In some schemes, a longer focal length may be required. Technically, the lens
and the camera body can be mounted separately at a chosen distance.

Light source and illumination  Technically, every real light source has a finite size D and can be imagined as an array of point
light sources. When used in the shadowgraph, the resulting image is a superposition of a multitude
of weak "elementary" shadowgrams created by different points of the light source. The
light beams appear to be not strictly parallel in the test section: there is an aperture angle D/h
associated with the finite source size. This directly causes the shadowgram to be blurred by a
circle of confusion with a diameter

$$ d_{CoC} = \frac{g D}{h - g} \qquad (7.10) $$

In the case of parallel-light shadowgraph, the aperture angle becomes equal to D/f1 and the
corresponding image blur equals

$$ d_{CoC} = \frac{g D}{f_1} \qquad (7.11) $$

where f1 denotes the focal length of the collimating field element (L1 in Figure 7.7). Since the
geometric blur grows linearly with the light source diameter, the light source for the shadowgraph
must be reasonably small to minimize distortions. Producing a shadowgram at high sensitivity
also benefits from a light source having sharp edges. Since producing such a light source is
not an easy task, a usual workaround is to create an intermediate source image, which is then
cut off by a diaphragm (Figure 7.8).

FIGURE 7.8 A light source setup with condenser lens and cutoff: I, light source; Lc, condenser lens;
P, pinhole or slit diaphragm.

The use of lasers as light sources, though favorable at first sight, is, in fact, limited. Lasers
provide intense and spatially coherent light, but intense diffraction and coherent artifact noise
degrade the resulting image. One of the most useful traits of lasers is their ability to produce
very short light pulses, giving the system a high temporal resolution. When using a laser, it
is highly recommended to put a diffusor in the beam to reduce the effects of light coherency.

Processing and interpreting the images  Since digital cameras are as applicable for shadowgraph imaging as film-based ones,
shadowgraph can benefit from all the possibilities of digital image manipulation. The simplest
procedure is to enhance the contrast of the shadowgram.
Also, if a reference shadowgram has been taken (without any schlieren object in the test
section), it can later be subtracted from the experimental images. This can reduce the influence of
scheme defects, especially the defects of optical elements. For such a correction, it is better to
translate the images into digital matrices. The base intensity level of the shadowgraph will then
correspond to zero, with positive and negative values representing the shadowgraph effect. The
matrix can then be translated back into a grayscale image with the desired contrast.
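A sketch of this reference-subtraction procedure (Python; the rescaling convention is one reasonable choice, not a prescribed one):

```python
import numpy as np

def correct_shadowgram(image, reference, out_range=255):
    """Subtract a no-flow reference shadowgram and rescale to grayscale.

    image, reference : 2D arrays of the same shape
    The difference is zero-centred, so fixed scheme defects cancel; it is
    then mapped back to [0, out_range] with a symmetric contrast stretch.
    """
    diff = image.astype(float) - reference.astype(float)   # base level -> 0
    span = np.abs(diff).max()
    if span == 0.0:
        span = 1.0                                         # identical frames
    gray = (diff / span + 1.0) * 0.5 * out_range
    return gray.astype(np.uint8)
```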
Extraction of accurate data from shadowgrams is mostly possible for the spatial parameters
of certain flow features. The shadowgraph method has been used extensively in the study
of supersonic and transonic flows, in particular because of its ability to easily observe such
structures as shocks, Prandtl–Meyer expansions, and boundary layers in compressible flows.
For instance, let us consider a model case of viewing a bow shock in front of a blunt body
immersed in a supersonic flow (Figure 7.9). Light passing through the test section upstream of
the shock remains undeflected, since there is no flow disturbance upstream of the shock front.
As the light rays traverse the curved bow shock, they curve toward the denser flow region
downstream of the shock wave (Figure 7.9a). As the light rays passing the shock are deflected,
a dark band appears on the screen or image (Figure 7.9b). The deflected rays converge to form
a caustic (a region of high brightness). The frontmost edge of the shadow represents an
accurate position of the leading edge of the shock front. In some cases, the deflected rays may
distort the shadow of the model. The position of the imaging plane can be adjusted to be closer
to or farther from the test section in order to decrease or increase the width of the shadow
image on the screen. Often, when strong gradients such as shocks are imaged, the imaging
plane is positioned close to the test section, since high sensitivity is unnecessary and the exact
position of the flow features is important. Such a technique is often called contact shadowgraph.
Prandtl–Meyer expansion fans, also often encountered in supersonic applications, act as
negative or concave lenses and produce an intensity distribution that has a bright band at the
leading part of the fan, followed by a less bright region.
Compressible boundary layers may also be visualized with the shadowgraph technique. As
the gas density is lower near the wall (assuming an adiabatic wall condition), the collimated
light rays entering parallel to the wall will be deflected away from it. Also, the light near the
wall will be deflected to a greater extent than the rays entering the outer region of the boundary
layer. The result is a caustic, or bright band, at the outer part of the boundary layer.
Remember that there is no 1:1 correspondence between the object and its shadow, as there
is between object and image in schlieren optics, where a lens generates an optically conjugate

FIGURE 7.9 Shadowgraph of a bow shock in front of a body in supersonic flow: (a) scheme of
light deflection and (b) shadowgraph of a sphere at M = 1.53. (Photo by A.C. Charters.)

relationship between them. Shadowgrams are not true-to-scale in general. Basically, only the
dark regions of a shadowgram can be used as an undistorted representation of the schlieren
object, since they mark the points where the deflected rays originate.
Such semiquantitative measurements of the positions and angles of flow features can still
be made with an accurate experimental technique. In general, though, shadowgraph is not well
suited to quantitative evaluation of the refractive index.
A double integration is required [5] to compute the fields of refractive index from
quantitative shadowgraph data, which amplifies all experimental errors and inaccuracies
significantly. Because of that, the effort of producing a shadowgram of the necessary quality is
better spent setting up a schlieren assembly and performing quantitative schlieren deflectometry.
However, examples of application can be found in published articles [9,10], starting as early
as 1987 with the work by Lewis et al. [11]. There, a linear diode array camera was used to
capture the shadowgraph signal from an axially symmetric disturbance produced by igniting a
fuel–air mixture with a laser spark. As a result, gas temperatures were determined with
high accuracy and temporal and spatial resolution.

7.5 Schlieren

General principles  The schlieren method is a technique widely employed nowadays for qualitative and quantitative
analysis of fluid flows. The major difference that turns a shadowgraph setup into a schlieren
setup is the introduction of the schlieren knife K (Figure 7.10). The knife in its simplest form
is a solid nontransparent plate with a smooth and even edge placed in the focal plane of the second
condenser lens L2. At this plane, the second lens forms a sharp image of the light source.
Now, let us consider advancing the knife toward this image in the focal plane. When the test
section is empty and nothing disturbs the light beam, the knife linearly obscures the image of the
light source, and the image on the screen evenly loses luminance. However, when the phase
object is in place and deflects the light, some of the deflected rays pass clear of the knife's
edge and illuminate the screen, creating lighter points in some regions of the image and darker
points in neighboring ones.
Let us consider the distribution of light in the schlieren image. First, when both the knife-edge
and the schlieren object are absent, the screen is evenly illuminated by the light source.
Common practice (discussed further) is to use a rectangular light source, so let us consider it to
be rectangular, with dimensions b × a (along the knife-edge and normal to it). If B is the luminance
emitted by every point of the light source, and f1 and f2 are the focal lengths of the first and second
field optical elements L1 and L2, respectively, then the illuminance falling on the first field element is

$$ E = \frac{B \cdot b \cdot a}{f_1^2} \qquad (7.12) $$

As the light is parallel between the main elements, then, neglecting any losses, this flux of
light also falls on the test area and the second field element. The illuminance of the schlieren
image is the same as well, except for a magnification factor m that describes the relation of the
image size to the cross section of the test area:

$$ E = \frac{B \cdot b \cdot a}{m^2 f_1^2} \qquad (7.13) $$

FIGURE 7.10 Principal scheme of parallel-light schlieren technique: I, point light source; L1, L2,
field lenses; S, investigated phase object; K, schlieren knife; C, screen.

FIGURE 7.11 Distribution of light on the schlieren knife with a single refracting point in
the field.

Now let us move in the horizontal knife-edge so that it blocks a part of the light source image
at the focus of the second lens/mirror. If the unobstructed part has height a′ (see Figure 7.11),
then the resulting illuminance can be found by replacing a in Equation 7.13 with (f1/f2)a′:

$$ E_0 = \frac{B \cdot b \cdot a'}{m^2 f_1 f_2} \qquad (7.14) $$

This is the background illuminance of the schlieren image. It is usually visible as a middle
shade of gray. The brightness of any point in the schlieren image is judged relative to the
background illuminance.
Now if the schlieren object is present in the test section, it deflects a certain light ray at
an angle ε with vertical component εy. In the plane of the knife-edge, the elemental image of
the light source corresponding to this ray gets shifted by a distance Δa = εy f2. The incremental
gain (or loss) of illuminance at the corresponding point of the resulting image can be found as

$$ \Delta E = \frac{B \cdot b \cdot \varepsilon_y}{m^2 f_1} \qquad (7.15) $$

The contrast in the schlieren image, then, is

$$ C \equiv \frac{\Delta E}{E} = \frac{f_2 \varepsilon_y}{a'} \qquad (7.16) $$

The contrast in the schlieren image is the value measured in the output of the experiment.
Specific experimental realizations of the scheme may process the contrast differently (see
"Processing and interpreting the images" section), but the overall result is that the schlieren
technique in principle allows visualizing the first derivative (gradient) of the refractive index, as
εy ~ (∂n/∂y).
The preceding considerations also explain why the light source for schlieren does not have
to be as small as possible. If the unobstructed size a′ of the source image is close to zero, the
contrast rises infinitely, and even small disturbances produce either a black point or a point
of maximum brightness. An extended schlieren light source allows producing a continuous
grayscale schlieren image rather than a merely binary black-and-white one. As the dimensions
parallel to the knife-edge do not influence the contrast explicitly, it is useful to
make the light source elongated horizontally to significantly increase the overall light flux of
the scheme for a given emitted luminance.
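Equation 7.16 is the basis of quantitative schlieren deflectometry: with a calibrated setup, the local contrast can be inverted for the deflection angle. A minimal sketch (Python; names are illustrative):

```python
def schlieren_deflection(C, f2, a_prime):
    """Deflection angle from measured schlieren contrast, inverting Equation 7.16.

    C       : local contrast Delta E / E of the calibrated image
    f2      : focal length of the second field element (m)
    a_prime : unobstructed height of the source image at the knife (m)
    The usable range is roughly |eps_y| < a_prime / f2, beyond which the
    image saturates fully black or fully white.
    """
    return C * a_prime / f2
```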
A visual comparison of three different refractometric techniques is presented in Figure 7.12.
A flow past an airfoil, including attached shocks, is viewed as a shadowgraph image, a schlieren
image, and an interferogram. It can be seen how the shadowgraph (Figure 7.12a) represents
the system of shocks clearly, while the schlieren field (Figure 7.12b) is indistinct
in some regions due to its higher sensitivity. Schlieren also provides a more pronounced effect
in the area of the wake flow behind the airfoil.

FIGURE 7.12 Comparison of density-based visualizations of subsonic flow past an airfoil:
(a) diverging-light shadowgraph, (b) schlieren, and (c) Mach–Zehnder interferometry. (Modified
image from M.Ya. Yudelovich.)

The third image (Figure 7.12c) is obtained by Mach–Zehnder interferometry, a high-sensitivity
refractometric technique requiring more complicated and precise optical setups than
most schlieren techniques. In the image, variations of density in the flow manifest themselves
as distortions of an otherwise regular pattern of light and dark bands (interference fringes). See
Chapter 8 for an extensive explanation of the principles and practice of interferometry techniques.

experimental geometry
The light path in a schlieren scheme can be schematically divided into three parts, separated by the main optical elements. The illuminator section consists of a light source and accompanying optics, such as lenses, a pinhole or slit diaphragm, and a diffusor. The test section is essentially the part of the light path that traverses the test object. The analyzer section is formed by the optics necessary to focus the light onto the schlieren knife and then to capture the resulting schlieren image.

The scheme presented in Figure 7.10 is one of the simplest possible. It is also relatively easy to align and tune. Although it is sometimes used in real-world applications, it has several important drawbacks. The main drawback is the difficulty of obtaining high-quality lenses with large diameters. This makes it difficult to build visualization systems with a large field of view, which is often needed in experimental aerodynamics.

FIGURE 7.13 Z-type (mirror-based) schlieren assembly. I, light source; Lc, condenser lens; P, slit diaphragm; L1, L2, main field elements; S, test object; K, schlieren knife; L3, focusing lens; C, camera chip/film. (With kind permission from Springer Science+Business Media: Schlieren and Shadowgraph Techniques: Visualizing Phenomena in Transparent Media, 2001, Settles, G.S.)

Figure 7.13 presents an outline of the scheme most often used for conducting schlieren experiments on shock tubes, wind tunnels, and jets. This scheme is called the Z-type setup because of its characteristic shape. The main (field) optical elements of this setup are two parabolic mirrors tilted slightly from the optical axis of the scheme by the angle θ (shown in Figure 7.13 at L1). Understandably, the angle at which the mirrors reflect a light ray directed along the main optical axis is 2θ.
The mirrors employed are usually symmetrical, on-axis parabolic mirrors. The quality of the scheme may be improved by the use of off-axis parabolic mirrors designed for a specific tilt angle, but they are quite expensive. Also, if on-axis mirrors with large focal lengths are chosen and carefully aligned, the possible improvement is quite minor. The advantages of parallel light, stated previously for the lens-type shadowgraph, apply as well to this schlieren setup. The use of mirrors in place of lenses generally results in a smaller cost for a given field of view; the absence of chromatic aberrations is also an advantage.

If common on-axis mirrors are used in the scheme, optical aberrations in the form of coma and astigmatism are present. Both are reduced by reducing the tilt angle θ, and coma can be virtually eliminated if the two mirrors are identical and aligned in a strictly symmetrical way.
To provide space for the test area, a minimum distance between the field mirrors of about 2f, where f is the mirror focal length, is required. Longer distances between the mirrors do not matter, save for more possibilities to disturb the light on its path. To shorten the overall space or fit the setup on a complicated facility, plain "folding" mirrors can be used in the illuminator and analyzer beams. Note that additional mirrors increase the difficulty of aligning the system and are separate elements vulnerable to vibration. Introducing folding reflectors also amplifies the optical aberrations by increasing the off-axis angles [12]. Nevertheless, they are unavoidable in some cases. Figure 7.14 presents an example of such a setup, employed on a wind tunnel.

FIGURE 7.14 Z-type schlieren assembly with beam folding on a wind tunnel facility.

It must be noted that mirrors used for scheme folding must be first-surface mirrors or image-rotation prisms; otherwise, multiple reflections from the front and back surfaces of the mirror will introduce fatal distortions in the schlieren image.

Parabolic mirrors are often used as field elements for coincident and Z-type schlieren systems up to a meter in diameter or more. Astronomical telescope mirrors continue to make the best available field elements for traditional schlieren systems, as the requirements are similar for both applications.
Spherical mirrors are ideal elements for single-mirror coincidence schlieren systems, but not so appropriate for parallel-beam setups. However, at f/10 or higher, the difference between a spheroid and a paraboloid is within λ/2 [13], making them indistinguishable for schlieren use. Spherical primary mirrors with correctors can be found in some telescopes and in some commercial schlieren instruments.
Like shadowgraph systems, schlieren setups allow a "focused" variation, added at the end part of the light path. Even without a focusing lens, the second schlieren field lens or mirror still forms a real image of the test area if the distance between them (s + f2 in Figure 7.13) is greater than f2, the focal length of the second field element. In this case, the image magnification m, that is, the ratio of the image diameter to the diameter of the test area, is equal to

m = \frac{f_2}{s}    (7.17)

If the image has to be scaled down, this becomes inconvenient, since s must grow large in order to demagnify the image, and the overall dimensions of the optical setup become unwieldy. Including a focusing lens allows one to control the image size independently. The image diameter for direct viewing on a ground-glass screen should be on the order of 10 cm, but for projection onto photo or video recording, it may be only 1 cm. Again, as for the shadowgraph, a set of focusing lenses or an adjustable zoom lens may be a useful part of the equipment. For a given magnification m, the thin-lens approximation yields the following expression [14] for the focal length f3 of the focusing lens:

f_3 = \frac{m \left( f_2^2 - s g \right)}{f_2 - m s}    (7.18)
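A minimal numerical sketch of Equations 7.17 and 7.18 follows; all dimensions are assumed values, the desired magnification m is chosen independently of f2/s (which is the point of adding the focusing lens), and g is simply treated as a given setup distance entering Equation 7.18.

# Sketch of Equations 7.17 and 7.18 (all dimensions are assumptions).
f2 = 2.0    # focal length of the second field element, m
s  = 1.0    # distance s defined in Figure 7.13, m
g  = 0.5    # setup distance entering Equation 7.18, m (treated as given)
m  = 0.1    # desired magnification of the final image (chosen freely)

m_no_lens = f2 / s                        # Equation 7.17, without focusing lens
f3 = m * (f2**2 - s * g) / (f2 - m * s)   # Equation 7.18, focusing-lens focal length
print(f"m without lens = {m_no_lens:.1f}, required f3 = {f3:.3f} m")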

Light source
Practically in all cases, the light source for schlieren must be relatively small, usually a few mm at most. High luminous exitance (light flux emitted from a surface per unit area, measured in lux) is thus an important characteristic of the lamp that should be considered in order to ensure sufficient illuminance in the final image. A rectangular light source with both dimensions of several mm is most suitable for schlieren imaging, regarding sensitivity, technical implementation, and measuring range. For instance, a typical tungsten–halogen automobile lamp has a 1.5 × 5 mm filament. Such lamps with coiled filaments are often used in schlieren systems.

Schlieren knife
As mentioned earlier, the simplest variation of the optical knife is a plain nontransparent edge. It can be aligned in different directions in the knife plane, allowing one to visualize gradients of the refractive index in the corresponding perpendicular direction. It is often considered that two schlieren images with perpendicular orientations of the knife-edge are necessary to grasp the structure of the flow. However, in many cases, one direction can be selected, based on the specific features of the flow.

More sophisticated variations of the schlieren knife include circular and double cutoff knife-edges. The circular cutoff allows one to visualize the magnitude of the density gradient, regardless of its direction.

A widely employed variation is a knife-edge formed by gradual optical density variations. Replacing the conventional knife-edge with a filter having a gradual variation of light transmission may reduce the unwanted diffractive effects and provide a further increase in sensitivity.

FIGURE 7.15 (See color insert.) Examples of color schlieren images. (a) Thermal plumes from candles, with horizontal RGB color filter. (Image by © Andrew Davidhazy, andpph.com.) (b) A propane torch lighting a Bunsen burner, with circular color filter. (Image by A. Sailer.)

A cunning example of manufacturing such a filter is given in [15], where a gradient displayed on an LCD screen is photographed by a conventional camera, and a section of the developed film frame is then used as the schlieren knife.

An interesting variation is the so-called color schlieren, where the knife-edge is a colored semitransparent filter and deflections of light are marked with colors instead of intensity (Figure 7.15).
One approach is to replace the knife-edge with a filter formed by several parallel, transparent, colored strips. Most often three colored sheets are used; this "tricolor filter" is oriented parallel to the light source slit, with the width of the central filter section approximately equal to that of the slit image. The choice of the colors for the three strips depends on their appearance and visual discrimination. The color sensitivity of the film material should also be taken into account, as the three color sections should have approximately the same transparency. A combination of red, blue, and yellow seems to yield the best contrast.
Using color strips has the advantage that the eye is more sensitive to changes in color than to shades of gray. Obviously, the color strips will only work well with white or broadband light sources. The edge of a circular cutoff may also be made multicolored to distinguish the exact direction of light deflection.

processing and interpreting the images
The principal quantity in examining schlieren sensitivity is the minimum discernible contrast Cmin = (ΔE/E)min in a schlieren image. Its value differs significantly with the way the images are registered. For instance, the human eye and photographic film have a complex nonlinear response to light, which can generally be described as exponential. The practical, empirically determined threshold for Cmin, if the image is observed by eye or captured on film, is around several percent. For brightly illuminated images, that is, above 10 candela/m2, even a 2% threshold is possible. However, weaker illumination of the resulting image raises the threshold significantly.
In contrast, in digital cameras, which today have become the de facto standard for flow visualization, the response curve of the imaging sensor is close to linear. Here, even if we suppose that the image is recorded with a data density of 8 bits per color channel and the base intensity is at the middle of this scale, technically Cmin = 1/128 < 1% is possible. If raw camera data are used instead of compressed 8-bit images, this value becomes more than an order of magnitude smaller. The practical threshold in this case is mainly set by the intensity noise present in the image.
An expression can be derived for the minimal optical inhomogeneity that can be registered by such means. In Equation 7.16, the minimal contrast Cmin corresponds to the minimal detectable deflection εmin.

If we then substitute ε with the expression from Equation 7.2, Cmin can be directly tied to the gradient of the refractive index in the test section:

\left( \frac{\partial n}{\partial x} \right)_{min} = C_{min} \frac{n_0\, a'}{L\, f_2}    (7.19)

where, again
a′ is the unobstructed height of the light source image in the knife plane
L is the width of the schlieren object along the optical axis
n0 denotes the undisturbed optical density (refractive index) in the test section
f2 is the focal length of the focusing field optical element

So the minimum detectable gradient depends on the medium, the test object itself, and the schlieren setup. Naturally, schlieren objects extended along the optical Z-axis produce a stronger effect and are easier to see. Ideally, a small unobstructed image size (a large cutoff) is desired, along with a large focal length f2, for high sensitivity.
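A minimal numerical sketch of Equation 7.19 is given below, combined with the Gladstone–Dale relation (recalled as Equation 8.16 in Chapter 8) to convert the refractive-index gradient into a density gradient; all setup values are illustrative assumptions.

# Sketch of Equation 7.19 plus the Gladstone-Dale relation (Equation 8.16).
C_min = 0.01      # minimum discernible contrast (~1%, digital sensor, assumed)
n0 = 1.000293     # undisturbed refractive index of air at standard conditions
a_prime = 3e-3    # unobstructed source-image height at the knife, m (assumed)
L = 0.2           # extent of the object along the optical axis, m (assumed)
f2 = 0.7          # focal length of the second field element, m (assumed)

dn_dx_min = C_min * n0 * a_prime / (L * f2)    # Equation 7.19
K, rho_s = 293e-6, 1.2928                      # dry air (Equation 8.16)
drho_dx_min = dn_dx_min * rho_s / K            # minimal detectable density gradient
print(f"dn/dx_min = {dn_dx_min:.2e} 1/m")               # ~2.1e-4 1/m
print(f"drho/dx_min = {drho_dx_min:.2f} kg/m^3 per m")  # ~0.95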
Digital image processing, which has come into widespread use in the last two decades, has given a new quality to the schlieren technique. Digital measurement of the intensity of schlieren images allows one to determine the values of ∂n/∂x and ∂n/∂y quantitatively. Then, by integration, the field of the refractive index itself can be determined, with wide possibilities for interpretation. This technique is known as "calibrated schlieren." Naturally, it requires a reference point in the image where the base value of the refractive index is known (usually an area of undisturbed gas or free flow) and a precise calibration of the schlieren apparatus. The latter can be achieved through careful adjustments and calculation of the sensitivity based on the parameters of the setup. The optical parameters of the second field element, as well as the cutoff, the geometry of the "tail part" of the setup, and specifically the radiometric characteristics of the light source, must be known with good precision. A more elegant and simple way is to place a reference schlieren object in the field, for which the refracting parameters are known beforehand. Commonly, a small lens with well-known parameters is used.
Recovering 3D data from schlieren projections is possible for 2D and, more often, axisymmetric flows. For these purposes, inverse Abel and Radon transforms are used to a great extent [16]; see Chapter 8 for further details.

A number of works are available utilizing different approaches to quantitative schlieren imaging [17–19].
A careful comparison of three quantitative schlieren methods is provided in [15], including calibrated schlieren and background-oriented schlieren (BOS). The former appears to be capable of the best accuracy. Besides, the use of parallel light beams gives schlieren an advantage in visualizing flows near solid objects: the light beam can be directed along the solid surface. BOS, with its diverging-light scheme (described in Section 7.6; see "General principles" and "Light source and illumination" subsections), almost inevitably suffers from blind zones near solid walls.

7.6 Background-oriented schlieren

General principles
The essence of the BOS method is the comparison of two images of the same background, taken with (working image) and without (reference image) the investigated transparent object between the camera and the background [20].
Refraction inside the schlieren object leads to a mismatch between the reference image and
the working one (Figure 7.16). By analyzing the displacement of the background elements,
it is possible to obtain quantitative information on the refractive index of the investigated
medium, averaged along the optical path.
The analysis is commonly performed numerically, utilizing cross-correlation algorithms of
image comparison. The “digital” part of the technique is very close to the processing methods

FIGURE 7.16 Optical scheme of the BOS method. B, background; S, schlieren object; F, lens; C, imager.

of particle image velocimetry (PIV; see Chapter 10). This makes BOS a hybrid of traditional shadowgraph and PIV, with notable differences from each.

One obvious advantage of BOS is the simplicity of its experimental setup compared to other visualization techniques. Basically, one needs only to set up the background image and the image-capturing scheme and provide illumination of the background. All three of these elements have relatively modest requirements in quality and operational parameters. It is relatively easy to obtain quantitative data on flow density with small changes introduced into the general setup of the experiment. Because of this, BOS is often chosen as a secondary visualization technique providing complementary data to high-accuracy data from PIV, laser Doppler velocimetry (see Chapter 10), temperature-sensitive paints (see Chapter 6), point measurements, and other techniques not relying on flow density. However, it should be noted that with serious effort put into setting up the BOS visualization, it is possible to collect important quantitative data describing the flow in a self-consistent manner.

experimental geometry
The geometry of a BOS setup (see Figure 7.16) is defined by two values: the distance LC from the capturing lens F to the investigated object S and the distance LB between the object and the background B. The lens F and the imager C are usually parts of a digital camera assembly, C being a CCD/CMOS sensor. Here, they are shown separated for the sake of description.

The image-capturing optics in BOS are focused on the background itself. If no schlieren object is present, the light rays from the background pass through the test section undeflected and form the reference image of the background on the imager. If the schlieren object is present, it causes the light ray that originates from a certain fragment of the background to be deflected by an angle ε, defined by Equation 7.2. Consequently, on the working image, this fragment of the background is perceived as displaced by the vector (dx, dy), where

d_x \approx \varepsilon_x \left( L_B + \frac{L}{2} \right) \approx \frac{L}{n} \frac{\partial n}{\partial x} \left( L_B + \frac{L}{2} \right)
d_y \approx \varepsilon_y \left( L_B + \frac{L}{2} \right) \approx \frac{L}{n} \frac{\partial n}{\partial y} \left( L_B + \frac{L}{2} \right)    (7.20)

Here, L is the width of the schlieren object along the optical axis, and the light deflection is considered to take place in the middle of this span. Then, if we calculate the size of the displacement in the imager plane, it will be proportional to dx and inversely proportional to the distance (LB + LC + L) between the camera and the background. Altogether, for a given schlieren object (i.e., given ε), the observed effect is proportional to (LB + L/2)/(LB + LC + L) ≈ LB/(LB + LC). The relationship between LB and LC determines the sensitivity of the scheme, that is, the response it will produce to a given gradient of density in the investigated object.
In general, the desired value of the image shift is determined by the cross-correlation algorithm employed. Knowing it, the geometry of the setup can be adjusted to give the scheme the necessary sensitivity. Positioning the background closer to the object lowers the sensitivity of the scheme, while moving it further away increases the sensitivity.
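The sketch below evaluates Equation 7.20 and the geometric sensitivity factor for one assumed configuration; all distances and the deflection angle are illustrative assumptions.

# Sketch of Equation 7.20 and the geometric sensitivity of a BOS setup.
L_B = 1.0     # background-to-object distance, m (assumed)
L_C = 0.5     # object-to-lens distance, m (assumed)
L   = 0.1     # extent of the schlieren object along the axis, m (assumed)
eps_y = 5e-5  # light deflection angle inside the object, rad (assumed)

dy = eps_y * (L_B + L / 2)               # background shift, Equation 7.20
geom = (L_B + L / 2) / (L_B + L_C + L)   # fraction of dy seen by the camera
print(f"dy = {dy*1e6:.1f} um, geometric sensitivity factor = {geom:.2f}")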

Background
The choice of the background image is crucial, since its pattern is the raw data for the image comparison algorithm. In most experiments, the background is a flat black-and-white printed image of dots or lines spaced on a unicolor background. Figure 7.17 shows the most commonly encountered types of background images. A random dotted pattern (Figure 7.17b) is the most commonly used, as its parameters can be adjusted to fit any specific instance of flow and imaging setup. It also has a structure similar to the image of tracer particles in a flow and thus benefits more from the improvements of processing algorithms developed for PIV. A parallel-lines background (Figure 7.17c) can be instrumental in "online" applications, as its distortions are better interpreted visually, "by eye." This is especially useful for capturing fronts of shock waves (see Figure 7.18, from [21]). A pattern that stands out somewhat is the quasi-grayscale pattern generated by the wavelet noise algorithm (Figure 7.17d).

FIGURE 7.17 Types of background images used for BOS: (a) regularly spaced dots, (b) irregularly spaced dots, (c) parallel lines, and (d) image generated by the wavelet noise algorithm.

FIGURE 7.18 Working BOS image of a cone in a supersonic flow. The attached shock is discernible as a distortion of the background line pattern. (From Ota, M. et al., Meas. Sci. Technol., 22, 104011, 2011.)

It includes recognizable features on many scales, which makes it possible to process a single image pair with several largely different sizes of interrogation windows in the cross-correlating numerical scheme [22]. Thus, it can be used in situations when the number of possible experimental shots is limited and the expected characteristics of the disturbances in the flow are unknown.

If a dotted background is used, the parameters of the dot pattern (size and spacing) can be chosen depending on the requirements of the processing algorithm, after determining the geometry of the BOS setup. Here, the advice for PIV measurements (see Chapter 10) can again be applied almost directly. For example, the minimal requirement for the interrogation window (of the cross-correlation algorithm) to contain 10 particle images (dots, in the case of BOS) is applicable. A suitable random-dot pattern can be generated with a few lines of code, as sketched below.
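This minimal generator assumes NumPy and Matplotlib; the dot count and size are illustrative parameters to be tuned against the interrogation-window rule just mentioned.

# Minimal random-dot BOS background generator (illustrative parameters).
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
width_px, height_px = 2000, 1400      # size of the printed image, pixels (assumed)
n_dots = 30000                        # tune so ~10 dots fall in each window

x = rng.uniform(0, width_px, n_dots)  # uniformly scattered dot centers
y = rng.uniform(0, height_px, n_dots)

fig, ax = plt.subplots(figsize=(10, 7), dpi=200)
ax.scatter(x, y, s=4, c="black")      # small black dots on a white field
ax.set_xlim(0, width_px); ax.set_ylim(0, height_px); ax.axis("off")
fig.savefig("bos_background.png", bbox_inches="tight", pad_inches=0)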
An interesting modification of the scheme, specific to BOS imaging, is the colored BOS technique [23,24]. Here, a multicolored background is used, and the images are then split into different color channels, which are processed separately. If executed correctly, this provides a multiple increase of data density over the same area, which increases the accuracy of the technique and helps to fight data noise and even cope with the blurring of the image at high density gradients. In the work mentioned earlier [21], the background consisted of two patterns of parallel lines, perpendicular to each other, printed in red and green. Distortions of these patterns, processed independently, yielded the horizontal and vertical components of the displacement.
Usually, the background image is the least difficult part of the whole setup to produce, so it can easily be redesigned and reproduced between experimental runs. This gives additional flexibility to the whole visualizing system.

A number of works have been presented utilizing "natural backgrounds," such as a distant forest [25,26]. Using natural objects as background images allows for extensive use of BOS as a field-condition technique, also allowing for extra large-scale visualizations. Examples are BOS setups used to visualize live open-air explosive tests [27]; see the experimental images in Figure 7.19.

Light source and illumination
Unlike shadowgraph, BOS does not require setting up a parallel light beam passing through the investigated object. The light captured is strictly that emitted or scattered by the background. This eliminates the need for the standard "schlieren" optical setup engulfing the test section. Still, the illumination of the background image plays an important role, especially when BOS is applied to aerodynamic high-speed flows. Intense and homogeneous illumination is required to achieve a clear image of the background, which, when processed by the image comparison scheme, will produce fewer errors.

FIGURE 7.19 (See color insert.) BOS visualization of an open-air explosion: (a) working image (grayscale); (b) background displacement (colormap, absolute values). FB, explosion fireball; SF, shock front; SS, secondary shock; TU, turbulence. (With kind permission from Springer Science+Business Media: Shock Waves, Background-oriented schlieren with natural background for quantitative visualization of open-air explosions, 24, 2014, 69, Mizukaki, T., Wakabayashi, K., Matsumura, T., and Nakayama, K.)

A trick often employed for this purpose is the use of a semitransparent background with a light source placed behind it, so that less light is wasted to scattering. This scheme is slightly more complicated and requires the investigated object to be transparent all the way through, so it may be inapplicable in wind tunnels where the test section has only one window.

As the Gladstone–Dale constant varies slightly with the wavelength of light, it is sensible to use monochromatic lighting (lasers, etc.) for high-accuracy BOS imaging instead of mixed light. However, if a laser is used to illuminate the background, a speckle pattern forms in the image plane due to self-interference of the laser beam. The speckle pattern can interfere with the BOS processing algorithm, yielding errors and decreasing accuracy. To avoid this, the laser beam can be directed through a diffuser before falling on the background. In fact, a technique exists under the name of speckle photography (less often, speckle interferometry) that intentionally uses the speckle pattern in a way similar to the BOS background [28,29], though it has not gained much attention. An easier way to achieve a monochromatic BOS setup is to use narrowband color filters in the lighting or camera assembly.
It should be noted that BOS does not need to be operated in a darkroom environment, as classic shadowgraph/schlieren techniques sometimes demand. Since BOS is based on measuring image displacement instead of light intensity, the exact amount of light is of secondary importance. Experimental images can be prefiltered to compensate for uneven illumination of the reference and working images.

Image-capturing scheme
The image-capturing scheme, in most cases, consists of a standard digital camera with an appropriate lens. Most consumer-grade digital single-lens reflex (DSLR) cameras meet the conditions necessary for visualizing the flows commonly investigated in aerodynamics. Employing high-resolution cameras allows for higher spatial resolution or sensitivity (see the next paragraph) without decreasing the span of the imaging area. Technically, it is possible to build a BOS imaging system with a compact digital camera, but it will lack the versatility of interchangeable optics and will generally have lower performance. Unlike PIV, BOS does not require the two images to be captured in quick succession. The instantaneous distribution of flow density affects only the "working" image of each pair, and the history of the flow is irrelevant to the result. The reference image, taken without the investigated flow in the frame, can be captured conveniently before the start of the test run or after it. Imaging of high-speed flows with BOS does not require capturing schemes able to produce two images with a small separation time, eliminating the need for complex dedicated equipment, such as the double-frame cameras used for PIV. The question of single-frame exposure, however, is still present. In order to capture the rapidly changing features of the flow, the working image should be taken with an exposure considerably shorter than the characteristic time of these features.
Noteworthy is the placement of the camera focus. Naturally, an absolutely focused image can only be achieved at a certain distance from the lens to the object being photographed, so the background and the schlieren object cannot be brought into precise focus at once. This separation, moving at least one of them out of the main focus, can reduce the image quality by introducing spatial blur. But when the defocusing effect is small enough to be ignored for both of them, the BOS method performs adequately.
The necessary sharpness is expressed in terms of the diameter of the spot that a point object produces on the image; this spot is commonly called the circle of confusion. It is equal to zero for objects situated exactly in the camera focus and increases as the object moves closer to or farther from the camera. The parameter called depth of field (DOF) is defined as the length of the fragment of the optical axis on which objects produce a sufficiently focused image. It depends on the parameters and settings of the lens and camera and also on the necessary sharpness. In general, good BOS imaging is possible when both the background and the investigated object are inside the boundaries of the camera DOF. In contrast to what is reported in some works in the literature, the background focus is accepted to be of first importance. Most often, the camera focus is set strictly on the background, and the setup is then adjusted to bring the object of investigation inside the DOF.

Equation 7.21 presents the expressions commonly used to calculate the near and far borders of the DOF [30]:

R_1 = \frac{R f^2}{f^2 + N_f \left( R - f \right) d_{CoC}}, \quad R_2 = \frac{R f^2}{f^2 - N_f \left( R - f \right) d_{CoC}}    (7.21)

where
R1 and R2 are the closest and farthest distances at which an object will be imaged sufficiently sharply
R is the current precise focusing distance
f is the absolute (not the equivalent) focal length of the lens
Nf is the lens f-number
dCoC is the acceptable size of the circle of confusion in the plane of the camera film or sensor

As for dCoC, it is usually considered safe to pick a value of 0.02–0.03 mm and divide it by the camera's crop factor. The latter is the value showing how small ("cropped") the capturing CMOS/CCD matrix of the camera is compared to a standard 35 mm photographic film, and it can be found in the specifications of the camera model.
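A minimal sketch of Equation 7.21 follows; the lens and camera values are illustrative assumptions.

# Sketch of Equation 7.21: near and far DOF borders for a BOS camera.
R = 2.0                # focusing distance (set on the background), m (assumed)
f = 0.05               # absolute focal length of the lens, m (50 mm, assumed)
N_f = 16               # lens f-number (aperture closed down, assumed)
d_coc = 0.03e-3 / 1.5  # circle of confusion: 0.03 mm over crop factor 1.5, m

R1 = R * f**2 / (f**2 + N_f * (R - f) * d_coc)  # near DOF border
R2 = R * f**2 / (f**2 - N_f * (R - f) * d_coc)  # far DOF border
print(f"DOF from {R1:.2f} m to {R2:.2f} m (depth {R2 - R1:.2f} m)")

With these numbers the DOF spans roughly 1.6 to 2.7 m, so a schlieren object placed a few tens of centimeters in front of the background would still be imaged acceptably sharp.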
As the DOF is closely linked with the focal length and aperture diameter (or f-number) of the lens used in the camera, adjusting the setup sometimes turns into balancing these parameters. For a given framing and camera position, the DOF is controlled by the lens aperture diameter: closing the aperture (increasing the f-number) leads to a bigger DOF in the image. However, by closing the aperture, the frame illumination is decreased, so more intense lighting (or a longer exposure, which is usually less feasible) may be required to achieve the necessary contrast in the background image. The usual practice is to close the lens aperture as tightly as needed and use a powerful source of light for the background. As a last resort, one can use the fact that the DOF changes inversely with the focal length of the camera and, consequently, with the scaling of the object in the final image. Lesser magnification will yield greater DOF, but at the cost of spatial resolution in the raw image and, of course, in the BOS field.

Image processing
BOS image processing shares a lot with PIV image processing. Save for rare exceptions, advanced PIV processing software is the best choice for the comparison of BOS images. The cross-correlation algorithm used in PIV works well with different background patterns, although the specific modifications introduced in such software to improve the quality of processing are sometimes useful only for the "random-dot" pattern that mimics a swarm of tracer particles. As in PIV, BOS data can also benefit from multistep processing and subpixel interpolation. However, due to the small values of the Gladstone–Dale constants for gases, the displacements recorded in BOS experiments with aerodynamic flows are usually smaller than those in PIV. This leads to greater relative errors introduced by noise.
BOS images, compared to PIV, do not suffer from such effects as uneven brightness of particles (dots) or disappearance of a particle image because of tracers leaving the laser sheet between the two frames. The most frequently encountered defects of BOS images are insufficient contrast, too sparse or too dense a background pattern, and blurring of the image because of intense or chaotic refraction inside the test section.
Several reference images can be taken to provide an array of data for ensemble correlation. The single working image, compared with different reference frames, will provide a set of quasi-independent data sets. Averaging them can increase the signal-to-noise ratio of the resulting data. Also, one reference image can be used for multiple working images captured during the test run. If the flow is imaged continuously, to obtain the time-resolved structure of the flow, BOS frames should be compared not in consecutive pairs (1⇔2, 2⇔3, …, 21⇔22, …) as in PIV, but rather in an independent manner (R⇔1, R⇔2, …, R⇔21, …, where R is the reference image).
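To make the cross-correlation step concrete, the sketch below estimates the integer-pixel shift of one interrogation window between a reference and a working image (NumPy and SciPy assumed); production BOS/PIV codes add multipass refinement and subpixel peak fitting on top of this.

# Minimal sketch of interrogation-window cross-correlation for BOS.
import numpy as np
from scipy.signal import correlate

def window_shift(ref_win, work_win):
    """Integer-pixel displacement of work_win relative to ref_win."""
    ref = ref_win - ref_win.mean()      # remove mean intensity
    work = work_win - work_win.mean()
    corr = correlate(work, ref, mode="full", method="fft")
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return np.array(peak) - (np.array(ref_win.shape) - 1)  # (dy, dx), px

# Synthetic check: a 64 x 64 window displaced by (2, 3) pixels.
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
work = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(window_shift(ref, work))  # -> [2 3]

In the ensemble-correlation variant mentioned above, the shift fields obtained against several reference frames are simply averaged.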
BOS, like the classical schlieren method, is an integrating technique: the properties of the medium along the whole path from the background to the camera affect the result. Because of this, the extraction of quantitative density data is connected with the reconstruction of the 3D density field and requires additional processing. Essentially 2D flows, homogeneous along the light path, are seldom encountered. The simplest real case is a free axisymmetric flow, for which the Abel or Fourier transform can be used to reconstruct the density field from single-perspective BOS fields [31]. The filtered backprojection algorithm, utilizing the inverse Radon transform, is a method frequently used for the reconstruction of more complex flows [23,25]. Some works [24,32] state that algebraic reconstruction techniques may be more favorable for applications where opaque bodies (e.g., wind tunnel models) are in the line of sight. Basically, these techniques can also be used for the reconstruction of asymmetric density fields from multiple-perspective BOS data [33,34].

example experiment
To illustrate the description of this technique, a simple case of BOS imaging of an unsteady flow is presented. Here, a flat shock wave propagating inside a rectangular channel reaches its open end and exits the channel into the open section. The flow that develops outside includes the diffracted shock wave, the transonic jet emerging from the channel, and the vortex ring that forms by the up-rolling of the jet's outer layer (see Figure 7.20b).

The BOS setup used to image the flow (Figure 7.20a) includes a DSLR camera, a random-dot background pattern, and a flash based on a gas-discharge lamp. The camera was operated with an open shutter during the experiment in a darkroom, while temporal resolution was provided by the pulsed light source. The flash produced light pulses of duration t ≈ 2 μs in a broad wave spectrum, synchronized with the approach of the shock wave by a set of piezoelectric pressure gauges placed inside the channel. A random-dot background printed by a laser printer was used.
Figure 7.21 presents the results of this imaging. On the raw working image (Figure 7.21a), areas of blurring can be discerned, marking the lower part of the vortex ring and the shock front. But the processing reveals significantly more. The front of the expanding shock wave can be clearly seen, together with the vortex ring. As the system does not provide continuous imaging, only one working image was captured for each test run of the facility. However, multiple reference frames were taken. Because of this, ensemble averaging was possible in the way mentioned earlier. As can be seen from the results of processing (Figure 7.21b), the procedure indeed works to reduce the background noise of the image.

Because of the rectangular symmetry of the flow, 3D reconstruction and quantitative density measurements are impossible with single-angle imaging. However, the results can be used to determine the dynamics of flow structures: the speed of the shock front, the evolution of the vortex ring, etc. CFD simulations of the flow can also provide synthetic images suitable for comparison with BOS data.

FIGURE 7.20 Experiment setup: (a) overall schematics and (b) structure of the flow. 1, channel; 2, open end of the channel; 3, flash; 4, camera; 5, background; SF, shock front; VR, vortex ring; JJ, jet.

FIGURE 7.21 (See color insert.) Results of imaging: (a) raw images (top, reference frame; bottom, working frame) and (b) field of image shift obtained by processing (top, single frame; bottom, averaged over four reference frames).

problems

7.1 Figure 7.22 presents a shadowgraph image of a shock wave front RS. What is the direction of the fluid flow through this front?
1. From left to right
2. From right to left
3. Cannot be identified from the picture
7.2 You have the focused shadowgraph setup shown in the schematic pictured in Figure 7.7. The test section is situated d2 = 40 cm from the second field lens L2. Both field lenses have a focal length of f1 = f2 = 40 cm. The light source of the scheme has a diameter of D = 5 mm. The beam diameter in the test section is dS = 20 cm. The working diameter of the camera lens is dF = 5 cm. The camera is focused on a virtual image at distance lC = 60 cm, and the shadowgram fills the camera frame. Estimate whether a sufficiently strong optical inhomogeneity with a vertical size of 3 mm would be detectable in this setup.

FIGURE 7.22 Shadowgraph image of a shock wave.



7.3 You have a parallel-light schlieren setup. Its condenser field optical element has a focal length of f2 = 70 cm. The image is captured by a camera with an 8-bit grayscale sensor. The background intensity of the schlieren image is set to r0 = 40% of the maximum detectable brightness. The test section spans 20 cm along the optical path and is filled with air at normal conditions. If the unobstructed height of the light source image in the knife plane is equal to a′ = 3 mm, what is the minimal density gradient that can be detected in the test section?
7.4 Figure 7.23 presents the results of BOS processing of a pair of images, working image second. The object is made of solid glass and positioned close to the background. What kind of object is imaged?
1. Diverging lens
2. Converging lens
3. Glass cylinder
4. Circular glass cone
7.5 Imagine that in your BOS setup the camera lens has a focusing dial as shown in Figure 7.24, and test photos show that at f = 16 the background is not bright enough. Given that the lighting and the camera position relative to the background are set, how far from the background can you place your test object?

FIGURE 7.23 BOS image of a solid object.

FIGURE 7.24 Lens focusing dial and adjustment marks.



references

1. Hunter MCW, Schaffer S (1989). Robert Hooke: New Studies, Boydell Press, Suffolk, UK.
2. Maeda S, Sumiya S, Kasahara J, Matsuo A (2013). Initiation and sustaining mechanisms of stabilized oblique detonation waves around projectiles, Proceedings of the Combustion Institute, 34, 1973–1980.
3. Bass M, Optical Society of America, eds. (1995). Handbook of Optics, 2nd edn., McGraw-Hill, New York.
4. Landau LD, Lifshitz EM (2009). The Classical Theory of Fields, Vol. 4, rev. Engl. edn., reprinted, Elsevier, Amsterdam, Netherlands.
5. Merzkirch W (1987). Flow Visualization, 2nd edn., Academic Press, Orlando, FL.
6. Gardiner WC, Hidaka Y, Tanzawa T (1981). Refractivity of combustion gases, Combustion and Flame, 40, 213–219.
7. Settles GS (2001). Schlieren and Shadowgraph Techniques: Visualizing Phenomena in Transparent Media, Springer, Berlin, Germany.
8. Hilton WF, Bairstow L (1952). High-Speed Aerodynamics, Longmans, Harlow, UK.
9. de Izarra G, Cerqueira N, de Izarra C (2011). Quantitative shadowgraphy on a laminar argon plasma jet at atmospheric pressure, Journal of Physics D: Applied Physics, 44, 485202.
10. Minardi S, Gopal A, Couairon A, Tamošauskas G, Piskarskas R, Dubietis A et al. (2009). Accurate retrieval of pulse-splitting dynamics of a femtosecond filament in water by time-resolved shadowgraphy, Optics Letters, 34, 3020–3022.
11. Lewis RW, Teets RE, Sell JA, Seder TA (1987). Temperature measurements in a laser-heated gas by quantitative shadowgraphy, Applied Optics, 26, 3695–3704.
12. Just T (1964). Optics of Flames—Including Methods for the Study of Refractive Index Fields in Combustion and Aerodynamics, von F.J. Weinberg, Butterworths, Oxford, UK.
13. Aeronautical Research Council, Speak GS, Walters DJ (1954). Optical considerations and limitations of the schlieren method, H.M. Stationery Office, London, UK.
14. Holder DW, North RJ (1957). Optical Methods for Examining the Flow in High-Speed Wind Tunnels, North Atlantic Treaty Organization, Advisory Group for Aeronautical Research and Development, Paris, France.
15. Hargather MJ, Settles GS (2012). A comparison of three quantitative schlieren techniques, Optics and Lasers in Engineering, 50, 8–17.
16. Vest CM (1974). Formation of images from projections: Radon and Abel transforms, Journal of the Optical Society of America, 64, 1215.
17. Hannah B (1975). Quantitative schlieren measurements of boundary layer phenomena, in: Rolls PJ, ed., High Speed Photography, Springer US, Boston, MA, pp. 539–545.
18. Schwarz A (1996). Multi-tomographic flame analysis with a schlieren apparatus, Measurement Science and Technology, 7, 406–413.
19. Garg S, Settles GS (1998). Measurements of a supersonic turbulent boundary layer by focusing schlieren deflectometry, Experiments in Fluids, 25, 254–264.
20. Meier G (2002). Computerized background-oriented schlieren, Experiments in Fluids, 33, 181–187.
21. Ota M, Hamada K, Kato H, Maeno K (2011). Computed-tomographic density measurement of supersonic flow field by colored-grid background oriented schlieren (CGBOS) technique, Measurement Science and Technology, 22, 104011.
22. Atcheson B, Heidrich W, Ihrke I (2009). An evaluation of optical flow algorithms for background oriented schlieren imaging, Experiments in Fluids, 46, 467–476.
23. Sourgen F, Leopold F, Klatt D (2012). Reconstruction of the density field using the Colored Background Oriented Schlieren Technique (CBOS), Optics and Lasers in Engineering, 50, 29–38.
24. Leopold F, Ota M, Klatt D, Maeno K (2013). Reconstruction of the unsteady supersonic flow around a spike using the colored background oriented schlieren technique, Journal of Flow Control, Measurement & Visualization, 1, 69–76.
25. Kindler K, Goldhahn E, Leopold F, Raffel M (2007). Recent developments in background oriented schlieren methods for rotor blade tip vortex measurements, Experiments in Fluids, 43, 233–240.
26. Hargather MJ, Settles GS (2010). Natural-background-oriented schlieren imaging, Experiments in Fluids, 48, 59–68.
27. Mizukaki T, Wakabayashi K, Matsumura T, Nakayama K (2014). Background-oriented schlieren with natural background for quantitative visualization of open-air explosions, Shock Waves, 24, 69–78.
28. Merzkirch W (1995). Density-sensitive whole-field flow measurement by optical speckle photography, Experimental Thermal and Fluid Science, 10, 435–443.
29. Fomin NA (1998). Speckle Photography for Fluid Mechanics Measurements, Springer, Berlin, Germany.
30. Ray SF (2002). Applied Photographic Optics: Lenses and Optical Systems for Photography, Film, Video, Electronic and Digital Imaging, 3rd edn., Focal Press, Waltham, MA.
31. Venkatakrishnan L (2005). Density measurements in an axisymmetric underexpanded jet by background-oriented schlieren technique, AIAA Journal, 43, 1574–1579.
32. Atcheson B, Ihrke I, Heidrich W, Tevs A, Bradley D, Magnor M et al. (2008). Time-resolved 3D capture of non-stationary gas flows, ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 27, 132.
33. Goldhahn E, Seume J (2007). The background oriented schlieren technique: Sensitivity, accuracy, resolution and application to a three-dimensional density field, Experiments in Fluids, 43, 241–249.
34. Goldhahn E, Alhaj O, Herbst F, Seume J (2009). Quantitative measurements of three-dimensional density fields using the background oriented schlieren technique, in: Nitsche W, Dobriloff C, eds., Imaging Measurement Methods for Flow Analysis, Springer, Berlin, Germany, pp. 135–144.
CHAPTER EIGHT

From interferometry to
color holography

Jean-Michel Desse

Contents

8.1 Introduction 223
Luminous interference 224
Polarization 224
Reflection of polarized light 225
Diffraction by acousto-optic effect 225
8.2 Generation of luminous interferences 226
8.3 Different types of interferometers 228
Separated beams interferometry 228
Differential interferometry 230
Interferograms analysis 233
8.4 Principles of three-wavelength differential interferometry 238
Advantage of using three wavelength sources 238
Choice of three wavelengths 239
Contribution of color in oil-film thickness measurement 241
8.5 Application to flows 241
Application to 2D unsteady subsonic wake flows 241
Application to hypersonic flows 242
Application to axisymmetric flows 243
Application to gaseous mixture 243
8.6 Principles of color holographic interferometry 245
Principles of holograms by transmission and by reflection 245
Optical setups of real-time holographic interferometry 245
Problem of gelatine contraction 249
Applications 250
Problems 252
References 252

8.1 Introduction

Everyone has observed in nature the effects produced by the deviation of light beams (mirage effect, thermal convection, etc.) or the result of luminous interference (colored fringes caused by a thin oil film on the ground). The visualizing methods of these optical phenomena are mainly based on shadowgraphy, the schlieren technique (Chapter 7), interferometry, and holography. The first one visualizes the second derivative of the refractive index, the second one shows the first derivative of the refractive index, and the last one (and also, under certain conditions, interferometry) allows for measuring the refractive index itself. The principles of these methods and general information on luminous interference are widely detailed in [1–3], with many and varied applications in fluid mechanics [4,5].


This chapter presents interferometry and holography for quantitative measurements of the refractive index and, consequently, density.

Luminous interference
First, interference between luminous waves can be observed if the following conditions are met:
• They come from the same source point.
• They have the same frequency.
• The vibration directions of their luminous vectors are parallel to each other (or, at least, the components that interfere are parallel).
The first two conditions correspond to the notions of spatial coherence (the source must be seen within a very small angle) and temporal coherence (the optical path difference of the interfering waves must be small compared to the length of the wave packets). The last condition is related to polarization. When two or more waves overlap, their amplitudes add to give a new wave whose amplitude depends on the phase between these waves. This phenomenon is referred to as "interference." A wave U of amplitude A, frequency ω, and phase φ can be mathematically represented by the following complex form:

U = A \exp\left[ i \left( \omega t - \varphi \right) \right]    (8.1)

Consider two waves U1 and U2 (amplitudes A1 and A2, phases φ1 and φ2, respectively); the wave U = U1 + U2 resulting from their interference is expressed as

U = A_1 \exp\left[ i \left( \omega t - \varphi_1 \right) \right] + A_2 \exp\left[ i \left( \omega t - \varphi_2 \right) \right]
  = \left[ A_1 + A_2 \exp\left( i \left( \varphi_1 - \varphi_2 \right) \right) \right] \exp\left( i \omega t \right) \exp\left( -i \varphi_1 \right)    (8.2)

The term exp(iωt), which represents the variation of the luminous quantity over time, can be factored out and omitted from all the following calculations without inconvenience. The sensors used in optics are sensitive only to the luminous intensity, that is, the mean value over time of the product of the amplitude U by the complex conjugate quantity U*:

I = U \cdot U^* = \left[ A_1 + A_2 \exp\left( i \left( \varphi_1 - \varphi_2 \right) \right) \right] \left[ A_1 + A_2 \exp\left( -i \left( \varphi_1 - \varphi_2 \right) \right) \right] = A_1^2 + A_2^2 + 2 A_1 A_2 \cos\left( \varphi_1 - \varphi_2 \right)    (8.3)

The term 2A1A2 cos(φ1 − φ2) is the interference term.
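Equation 8.3 can be checked numerically in a few lines; this is a minimal sketch with assumed amplitudes.

# Numeric sketch of Equation 8.3: two-wave interference intensity.
import numpy as np

A1, A2 = 1.0, 0.8                      # assumed wave amplitudes
dphi = np.linspace(0.0, 2 * np.pi, 5)  # sampled phase differences phi1 - phi2
I = A1**2 + A2**2 + 2 * A1 * A2 * np.cos(dphi)
print(I)  # maximum (A1 + A2)**2 = 3.24 at dphi = 0; minimum (A1 - A2)**2 = 0.04 at pi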

polarization
Most light sources emit waves whose luminous vector has a quickly and randomly variable orientation (between two consecutive wave packets). This light then presents a rotational symmetry around its propagation direction and is called natural or nonpolarized light. Some lasers emit light whose luminous vector keeps a fixed orientation in space; in this case, the light is said to be polarized linearly or, more briefly, polarized. Note that it is possible to filter natural light to extract linearly polarized light. It is also possible, again by filtering natural light, to obtain light whose orientation vector varies according to a simple law. Practically, the most used one is elliptical polarization, where the luminous vector describes an ellipse in the wave plane, its particular case being circular polarization, when the ellipse is reduced to a circle. Figure 8.1 gives a representation in space of these three types of polarization, in which the luminous vector describes, respectively, a planar sinusoid, an elliptical spiral, and a circular spiral, whose pitch is equal to the wavelength of the vibration in the medium considered. Finally, note that light can be partially polarized and then considered as a mixture of natural light and polarized light. Its degree of polarization is given by the ratio of the polarized luminous intensity to the total luminous intensity.

FIGURE 8.1 Different states of polarization of a luminous wave: (a) plane polarization, (b) elliptic polarization, and (c) circular polarization.

FIGURE 8.2 Polarization by reflection.

reflection of polarized light
Let a luminous wave reflect on a plane diopter between two dielectric media (Figure 8.2). Each vibration of the incident light can be decomposed along two axes, the first one perpendicular to the plane of incidence and the other one located in this plane, both being perpendicular to the direction of light propagation. The same holds for the reflected and refracted light. From the Maxwell laws, it is possible to determine the component intensities of the reflected light (Fresnel's formulas):
• For vibration in the incidence plane (x, y):

I_{xy} = I_0 \frac{\tan^2 \left( i - r \right)}{\tan^2 \left( i + r \right)}    (8.4)

• For vibration in the z direction perpendicular to this plane:

I_z = I_0 \frac{\sin^2 \left( i - r \right)}{\sin^2 \left( i + r \right)}    (8.5)

where i and r are the incidence and refraction angles, respectively.
One immediately notes that Iz can never vanish, while Ixy vanishes when i + r = π/2. This particular value is called the "Brewster incidence"; at this incidence, the reflected light is completely linearly polarized, with its vibration direction normal to the incidence plane. This method is effective, but it has the disadvantage of being weakly luminous and, additionally, of deflecting the light beam.
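A minimal sketch of Equations 8.4 and 8.5 and of the Brewster condition i + r = π/2 follows; the refractive indices and angles are assumed values.

# Sketch of Fresnel's intensity formulas (Equations 8.4 and 8.5).
import numpy as np

n1, n2 = 1.0, 1.5                   # air-to-glass interface (assumed)
i = np.deg2rad(56.31)               # incidence angle near Brewster's angle
r = np.arcsin(n1 / n2 * np.sin(i))  # refraction angle (Snell's law)

I0 = 1.0
I_xy = I0 * np.tan(i - r)**2 / np.tan(i + r)**2  # in-plane component (8.4)
I_z  = I0 * np.sin(i - r)**2 / np.sin(i + r)**2  # perpendicular component (8.5)
print(f"I_xy = {I_xy:.2e}, I_z = {I_z:.3f}")     # I_xy ~ 0 at Brewster incidence

print(f"Brewster angle = {np.degrees(np.arctan(n2 / n1)):.2f} deg")  # 56.31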

Diffraction by acousto-optic effect
An acousto-optic modulator (AOM), also called a "Bragg cell," uses the acousto-optic effect to diffract and shift the frequency of light using sound waves (usually at a frequency of the order of some MHz). A piezoelectric transducer is attached to a material such as glass (e.g., bismuth telluride). An oscillating electric signal drives the transducer to vibrate, thus creating sound waves in the glass. These can be thought of as moving periodic planes of expansion and compression that change the index of refraction. Incoming light scatters off the resulting periodic index modulation, and interference occurs. A diffracted beam emerges at an angle θ that depends on the wavelength of the light λ relative to the wavelength of the sound Λ through the following relation:

FIGURE 8.3 Principle of the acousto-optic cell.


\sin \left( \theta \right) = \frac{m \lambda}{2 \Lambda}    (8.6)

where m = …, −2, −1, 0, 1, 2, … is the order of diffraction. In thick crystals with weak modulation, as shown in Figure 8.3, only phase-matched orders are diffracted; this is called Bragg diffraction (only the zero and +1 orders are diffracted).

By simply turning the acoustic energy source on and off, the AOM can act as a rapid light deflector. The switching of the incident light beam to the first-order diffracted beam can occur in a very short period of time (<5 μs), depending only on how rapidly the acoustic wave field can be turned on and off in the volume of the flint glass traversed by the laser beam.
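As a numerical illustration of Equation 8.6, the sketch below evaluates the first-order Bragg angle; the laser wavelength, acoustic velocity, and drive frequency are assumed values.

# Sketch of Equation 8.6: first-order Bragg diffraction angle of an AOM.
import numpy as np

lam = 532e-9        # optical wavelength, m (green laser, assumed)
v_sound = 4200.0    # acoustic velocity in the glass, m/s (assumed)
f_acoustic = 80e6   # drive frequency, Hz (assumed)
Lambda = v_sound / f_acoustic            # acoustic wavelength, m

theta = np.arcsin(1 * lam / (2 * Lambda))  # m = 1 in Equation 8.6
print(f"Bragg angle = {np.degrees(theta):.3f} deg, "
      f"beam deflection 2*theta = {2 * np.degrees(theta):.3f} deg")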

8.2 Generation of luminous interferences

Briefly, in order to obtain luminous interference, it is necessary to form two different beams coming originally from the same light source, and the shift between the beams has to be realized by an interferential device. There are two ways to separate the beams:
• By division of the wave front: The initial beam is split into two after passing through two small holes separated by a distance b, leading to the formation of Young's fringes (Figure 8.4a).
• By division of amplitude: The initial beam is split into several beams through successive reflections or transmissions (Fresnel mirror, Figure 8.4b; Mach–Zehnder or Michelson interferometers).
In Figure 8.4a, the holes can be replaced by slits, and the same experiment can be realized without the convergent lens if the light source is placed far enough away. Interference fringes are localized between the two beams diffracted by S1 and S2, which are separated by a distance a. Let d1 = S1P and d2 = S2P; the waves emitted by the sources S1 and S2 can be expressed as

U_1 = \frac{A_1}{d_1} \exp\left( -i \frac{2\pi}{\lambda} d_1 \right), \quad U_2 = \frac{A_2}{d_2} \exp\left( -i \frac{2\pi}{\lambda} d_2 \right)    (8.7)

The resulting vibration U = U1 + U2 at the point P is

U = \left[ \frac{A_1}{d_1} + \frac{A_2}{d_2} \exp\left( -i \frac{2\pi \left( d_2 - d_1 \right)}{\lambda} \right) \right] \exp\left( -i \frac{2\pi d_1}{\lambda} \right)    (8.8)

FIGURE 8.4 Two examples of formation of interference fringes: (a) division of wave front and (b) division of amplitude.

If the distance S1S2 is small compared to d1 or d2, we can assume that d1 ≈ d2 ≈ d, where

U = \frac{1}{d} \left[ A_1 + A_2 \exp\left( -i \frac{2\pi \left( d_2 - d_1 \right)}{\lambda} \right) \right] \exp\left( -i \frac{2\pi d}{\lambda} \right)    (8.9)

In this form, the interference term 2π(d2 − d1)/λ appears as a measurement of the optical path difference between the two waves coming from S1 and S2. Furthermore, the factor exp(−i2πd/λ) cancels out when the luminous intensity is calculated (as we have seen in the "Luminous interference" section). On the other hand, intensity measurements are always relative measurements, unaffected by the constant term 1/d². So these factors can be eliminated from the expression of the amplitude U:

U = A_1 + A_2 \exp\left( -i \frac{2\pi \Delta}{\lambda} \right)    (8.10)

with Δ = d2 − d1 being the optical path difference. At the point P, the light intensity IP is

I = U \cdot U^* = A_1^2 + A_2^2 + 2 A_1 A_2 \cos\left( \frac{2\pi \Delta}{\lambda} \right)    (8.11)

At any point where Δ = k ∙ λ (with integer k), the intensity is maximal:

I_M = \left( A_1 + A_2 \right)^2    (8.12)

The locations of these points are called bright fringes or luminous fringes. At any point where
Δ = (2k + 1)λ/2, the intensity is minimum:

I_m = \left( A_1 - A_2 \right)^2    (8.13)

The locations of these points are called black fringes or dark fringes. In particular, if A1 = A2 = A0, then IM = 4A0² and Im = 0. If the coordinates of the S1 and S2 sources are (a/2, 0) and (−a/2, 0) and the coordinates of any point P are (x, y), the luminous intensity is expressed as

I = A_1^2 + A_2^2 + 2 A_1 A_2 \cos\left( \frac{2\pi a x}{\lambda d} \right) \quad \text{with } \Delta = d_2 - d_1 = \frac{a\, x}{d}    (8.14)

and the fringe spacing fr, that is, the distance between two dark or two bright fringes, is

f_r = \frac{\lambda d}{a}    (8.15)
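A quick numerical sketch of Equation 8.15, with assumed values:

# Sketch of Equation 8.15: Young's fringe spacing.
lam = 632.8e-9   # He-Ne laser wavelength, m (assumed)
d = 1.0          # distance from the sources to the screen, m (assumed)
a = 0.5e-3       # separation of the two sources S1, S2, m (assumed)

f_r = lam * d / a
print(f"Fringe spacing = {f_r * 1e3:.2f} mm")  # -> 1.27 mm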

In Figure 8.4b, the amplitude of the incident light is split by two mirrors forming a very small angle (1 min to a few minutes of arc). Interference fringes are produced by the reflection on the two mirrors. In general, interference fringes are called localized if they are observed at a precise location in space, or unlocalized if they are observed anywhere in the region of space where the beams overlap.
Finally, when the two beams of the interferometer are entirely separated and when one of the two beams crosses the test section, the interferometer is of the conventional type and is called a "separated beams interferometer." With this type of apparatus, the optical thickness E can be directly measured in the test section. The optical thickness determines the phase delay of the light passing through a medium with index of refraction n, and it is also referred to as the optical path length. If the thickness e of the test section is known, E = (n − 1) ∙ e, and the following Gladstone–Dale relationship (reported in Chapter 7 and recalled here) relates ρ to n:

\frac{n - 1}{\rho / \rho_s} = K    (8.16)

where
ρs is a reference value of the gas density
K is the Gladstone–Dale constant depending on the gas

Under standard conditions (0°C, 1 atm), ρs = 1.2928 kg/m3 and K = 293 × 10−6 for dry air.
When the two interfering beams are very weakly separated, they both pass through the test section, and the derivative of the optical thickness is measured along the shift direction of the two beams. This second type of interferometer is called "differential."
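A minimal sketch of the conversion from measured optical thickness to density via Equation 8.16 is given below; the test-section width and the measured value are illustrative assumptions.

# Sketch of the Gladstone-Dale relation (Equation 8.16).
K, rho_s = 293e-6, 1.2928  # dry air at 0 deg C, 1 atm (Equation 8.16)
e = 0.30                   # test-section width along the beam, m (assumed)
E = 9.0e-5                 # measured optical thickness E = (n - 1)*e, m (assumed)

n = 1 + E / e              # refractive index in the test section
rho = (n - 1) * rho_s / K  # density from Equation 8.16, kg/m^3
print(f"n = {n:.6f}, rho = {rho:.3f} kg/m^3")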

8.3 Different types of interferometers

Separated beams interferometry
Michelson interferometer: Among the most used interferometers with separated beams, it is worth mentioning the Michelson interferometer [2]. A beam splitter plate called the "separating plate" divides a beam of parallel rays into two approximately perpendicular beams (Figure 8.5). These beams are reflected by plane mirrors back onto this separating plate and observed beyond it. They have equal luminous intensities, but as one of the beams passes twice through the separating plate while the other one is reflected twice by it, a "compensating plate" is inserted in the optical setup. The thickness of the compensating plate is equal to the thickness of the separating plate in order to compensate for the different optical lengths. The observed interference fringes are similar to those obtained by reflection on the two faces of transparent plates [6]. The source being assumed extended, rings localized at infinity are observed if the two mirrors are rigorously perpendicular. On the screen, a unique black fringe is obtained because the reflections produced by the two beams on the beam splitter plate generate a shift of half a period for one of the beams. If one of the two flat mirrors is rotated by a small angle, straight equidistant interference fringes appear, and they are localized near the flat mirror located on the optical axis of the screen and the objective.

FIGURE 8.5 Michelson interferometer.

and the objective. In the wind tunnel, the flat mirror is located just behind the test section where the interference fringes are localized. Two plates compensating for the test section windows are inserted in the other optical arm. The measurement of the optical thickness of the gas in the test section is obtained by comparing two interferograms: the first one recorded without the flow and the second one with the flow.

Mach–Zehnder interferometer The Mach–Zehnder interferometer is another kind of separated beams interferometer, with the advantage of giving interference fringes that can be controlled by moving the optical pieces. The optical setup presented in Figure 8.6 uses two semitransparent plates and two flat mirrors, nearly parallel and located at the four corners of a rectangle. A beam of parallel light is split by a semitransparent plate into two beams of the same intensity. One is deflected by an angle of 90°, while the other one proceeds in an unchanged direction. The second mirror likewise deflects by 90° the beam that has crossed the beam splitter plate. The second beam splitter recombines the two beams and makes them interfere. It can be shown that the plane in which the interference fringes are located is parallel to the bisector of the two large sides of the rectangle and passes through the intersection point O of this axis. The interference fringes can be observed through an objective focused on the plane normal to the beam and containing the point O.

FIGURE 8.6 Mach–Zehnder interferometer.



FIGURE 8.7 Interaction between a shock wave and the wind tunnel floors. (a) Narrowed fringes interferogram and (b) uniform background interferogram.

The test section is crossed by one of the beams, and the test section windows are compensated by two identical plates located in the path of the other beam. Mirrors and plates are adjusted in order to locate the plane of the fringes in the middle of the test section. By translating a mirror or a plate, the central fringe is brought into the test section. The optical setup can be adjusted with a uniform background fringe or with straight fringes oriented in the most suitable direction. An example is given in Figure 8.7, where the interaction between a shock wave and the boundary layer on the upper and lower floors of the test section is visualized. On interferogram (a), recorded with narrow fringes, one can follow the fringe steps through the shock wave and the analysis can be conducted over the whole interferogram. On interferogram (b), recorded with a uniform background tint, it is not possible to follow the fringes through the shock wave, but a small variation in the optical thickness is observed between two successive reflections [7]. This example shows that the interferometer has to be correctly adjusted before recording in order to analyze strong or weak variations in the refractive index. A quantitative analysis of interferograms requires the identification of the fringes and of their shift across flow discontinuities.

Differential interferometry

When the density gradients are relatively weak, differential interferometry with polarized white light is better suited. It produces colored interferograms, the analysis of which yields the density field after calibration of the whole setup [8,9]. The principles of differential interferometry have been discussed in detail in [10]. A birefringent crystal (e.g., a Wollaston biprism), constituted by two quartz or calcite prisms crossed and bonded with an angle α (Figure 8.8a), decomposes a light vibration polarized in a given direction into two orthogonal coherent vibrations of approximately equal amplitudes. With the type of crystal used, the birefringence is described as uniaxial, meaning that there is a single direction governing optical anisotropy

FIGURE 8.8 Modification of the Wollaston biprism for analyzing a very large field. (a) Standard Wollaston biprism and (b) large-field Wollaston biprism.

whereas all directions perpendicular to it are optically equivalent. This special direction is
known as the optic axis of the material. Light whose polarization is perpendicular to the optic
axis is governed by a refractive index no (for “ordinary”). Light whose polarization is in the
direction of the optic axis sees an optical index ne (for “extraordinary”). For any ray direction,
there will be a polarization direction perpendicular to the optic axis and this is called an “ordi-
nary ray.” However, for most ray directions, the other polarization direction will be partly in
the direction of the optic axis and this is called an “extraordinary ray.” The ordinary ray will
always experience a refractive index of no, whereas the refractive index of the extraordinary
ray will be in between no and ne, depending on the ray direction as described by the index ellip-
soid. Here, one of the prisms has its optical axis perpendicular to its edge. Each incident ray is
polarized at 45° with respect to the quartz axis so that, after crossing the biprism, the emerging
rays are separated by a small birefringence angle ε = ε(λ). If ne and no are, respectively, the
extraordinary and ordinary refractive indices of the crystal, the birefringence angle is given by
\varepsilon = 2 (n_e - n_o) \tan\alpha   (8.17)

As a consequence, the two rays cross the phase object at slightly different locations. After passing through the phase object, the two rays can interfere in an analyzer. The optical path difference revealed in this way is characteristic of the body under observation, since the setup is self-compensated. Equation 8.17 shows that the sensitivity of the measurement depends on the type of birefringent crystal used and also on the bonding angle. For instance, if high sensitivity is required, calcite is preferred to quartz because the birefringence of calcite is about twenty times larger than that of quartz.
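This quartz/calcite comparison can be checked numerically with Equation 8.17 (a minimal sketch; the indices are standard textbook values near 589 nm, and the 2° bonding angle is an arbitrary example):

import math

# Approximate ordinary/extraordinary indices near 589 nm (textbook values)
materials = {
    "quartz":  (1.5443, 1.5534),
    "calcite": (1.6584, 1.4864),
}
alpha = math.radians(2.0)   # assumed bonding angle of the two prisms

for name, (n_o, n_e) in materials.items():
    eps = 2.0 * (n_e - n_o) * math.tan(alpha)   # birefringence angle, Eq. 8.17
    print(f"{name}: |epsilon| = {abs(math.degrees(eps))*3600:.0f} arcsec")

# quartz gives about 131 arcsec, calcite about 2478 arcsec: a ratio close to twenty.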
Very often, large objects have to be analyzed (more than 100 mm in diameter), so that a diverging light beam crosses the Wollaston prism and is then sent toward the object under analysis. If a strictly uniform background color is required (no phase shift), the Wollaston prism should be crossed by a parallel beam. If this is not the case, one can expect continuously varying colors in the field of observation. To reasonably compensate for this defect [11] and to obtain a large-field birefringent compensator (Figure 8.8b), it is appropriate to add λ/2 plates before and after the bonded prisms and two plane crystals on either side of the assembly.
In this way, some authors performed experiments where the setup includes a Wollaston prism that widely separates the interfering beams. This type of setup is equivalent to conventional reference beam interferometry, as the interferograms can be interpreted directly, but it remains differential since one of the beams (the reference one) lies inside the undisturbed upstream flow [12–15].
Several optical setups can be designed to include one or two Wollaston prisms. Here, two
possibilities are considered:
• The first one includes one Wollaston prism. The optical beams cross the test section
twice, thus making the apparatus compact and doubling its sensitivity. Adjustment is
quick and easy as no compensation is needed for the optical thickness of the test sec-
tion windows since both beams pass through them. In the differential interferometer of
Figure 8.9, the prism is placed at a right angle to the axis of the spherical mirror, close
to its center of curvature (the prism is said to be centered). Before passing through the
prism, the beam goes through a polarizer; on the return leg, after leaving the prism, the
beams pass through an analyzer that is either parallel or orthogonal to the polarizer.
As the optical beams pass through the prism twice, the initial path difference can be
compensated by translating the prism. The system is also self-compensating as, with
the prism being fixed, any deviation in the light beams generates variations of the opti-
cal thickness in the air and in the prism that cancel: the observed path difference is thus
directly related to the properties of the observed medium. If a white light is used (e.g.,
high-pressure xenon source), a system of colored fringes is observed, the tints of which
are arranged in series roughly equivalent to the Newton scale [16]. The observation of
the interference fringes yields a measurement of the optical path difference that exists
between the beams after the crossing of the crystal, the ambient air, and the test section:

FIGURE 8.9 Differential interferometer with double crossing and one Wollaston prism.

FIGURE 8.10 In-line differential interferometer with single crossing and two Wollaston prisms.

from that, one infers the component of the gas density gradient along the direction per-
pendicular to the fringes. Then, the gas density itself is obtained by integration.
• For the second one, the optical setup of Figure 8.9 is changed from a double crossing
diverging beam to a single parallel beam setup as shown in Figure 8.10. It requires two
Wollaston prisms and two achromatic “Clairaut” lenses (designed to minimize chro-
matic and spherical aberrations), one on each side of the test section. The two Wollaston
prisms, mounted “upside down,” are located at each focusing point. A linear polarizer
is used to adjust the input polarization and an analyzer is then placed behind the second
prism in order to produce interference between the orthogonally polarized beams [17].
An alternative to this setup can be used when very large fields have to be analyzed. Due to the very high cost of the achromatic “Clairaut” lenses, a “Z” optical setup is preferred, where two spherical mirrors replace the two achromatic lenses. In order to avoid astigmatic aberrations, the opening angle has to be less than 10° (Figure 8.11).

FIGURE 8.11 “Z” optical setup with single crossing and two Wollaston prisms.

Interferograms analysis

Setup calibration for a manual analysis (2D flows) The manual analysis of interferograms can be made if the position ξ of the Wollaston prism has been calibrated beforehand against the colors observed in the interferometer. ξ is the Wollaston prism displacement measured in the plane perpendicular to the optical axis, along x or y. The light deviation θ_x or θ_y is obtained from the following reasoning. Let R be the radius of curvature of the spherical mirror located behind the test section (Figure 8.9), f be the focal length of the achromatic lens or the spherical mirror (Figure 8.10), and L′ be the virtual distance between the middle of the test section and the spherical mirror; the distance d_x or d_y between the two interfering rays in the test section is then

d_x = \varepsilon (R - L') \quad \text{or} \quad d_x = \varepsilon (f - L')   (8.18)

It is easily shown that, under no-flow conditions, the relative displacement ξ − ξ_0 of the prism induces a path difference Δ between the outgoing and returning rays in the prism, which is

\Delta = 2\varepsilon (\xi - \xi_0)   (8.19)

The result of this path difference can be observed in the interferometer as a uniform variation
in the background color (Figure 8.12). Here, as the polarizer and the analyzer are parallel, a
white fringe is obtained for ξ – ξ0 = 0.
When a flow exists in the test section, a light ray crossing the medium under analysis is deviated by an angle θ_x or θ_y equal to

\theta_x = \frac{dE}{dx} \quad \text{or} \quad \theta_y = \frac{dE}{dy}   (8.20)

where dE is the difference in the optical thickness of the medium for the two interfering rays.
As the test section is crossed twice by the light rays, the path difference δ produced by the
observed medium is

\delta = 2\,dE   (8.21)

From the relations (8.18) and (8.20), Equation 8.21 becomes

\delta = 2\varepsilon (R - L')\,\theta_x   (8.22)

FIGURE 8.12 Prism calibration: variation of colors with (ξ − ξ_0) in mm × 10².



When the tints observed under flow and no-flow conditions are identical, the path differences δ and Δ are equal, yielding

\delta = 2\varepsilon (R - L')\,\theta_x = 2\varepsilon (\xi - \xi_0)   (8.23)

The light deviation is then expressed as

\theta_x = \frac{\xi - \xi_0}{R - L'}   (8.24)
The light deviation is thus obtained from the measurement of only two lengths: the prism displacement ξ − ξ_0 and the radius of curvature of the spherical mirror. Progressively moving the prism and noting the colors in the interferometer is sufficient to obtain the calibration curve of the optical setup.
In the analysis of the interferograms, one starts from a position where the background color
is uniform and where the gas density is known, and, by moving normally to the interference
fringes, say x, one notes the position of the observed tints. The calibration curve obtained from
Equation 8.24 yields the light deviation at every point [9]. The optical thickness E is obtained
by integrating the light deviation:

E - E_0 = \int \theta_x \, dx   (8.25)

If the gas density is known at one point in the field, the Gladstone–Dale relationship (Equation 8.16) allows one to obtain the refractive index at this point. For a 2D flow, and if e is the test section width, the optical thickness is given by

E_0 = (n_0 - 1) \cdot e = K \cdot e \cdot \frac{\rho_0}{\rho_s}   (8.26)

As the optical thickness is proportional to the gas density, one obtains

\frac{E - E_0}{E_0} = \frac{\rho - \rho_0}{\rho_0}   (8.27)

from which

\frac{\rho}{\rho_0} = 1 + \frac{E - E_0}{E_0}   (8.28)
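The full manual-analysis chain, from prism displacements to a density profile, can be sketched as follows (Equations 8.24 through 8.28; all numerical values, including the mock calibration readings, are assumed for illustration only):

import numpy as np

# Assumed geometry and gas data (illustration only)
R, Lp = 2.5, 0.5              # mirror radius of curvature and distance L' (m)
e = 0.042                     # test section width (m)
K, rho_s = 293e-6, 1.2928     # Gladstone-Dale constant and reference density
rho_0 = 1.0                   # known density at the starting point (kg/m^3)

x = np.linspace(0.0, 0.05, 51)        # traverse normal to the fringes (m)
xi = 1e-5 * np.sin(2*np.pi*x/0.05)    # mock prism displacements xi - xi0 (m)

theta_x = xi / (R - Lp)               # light deviation, Eq. 8.24
E0 = K * e * rho_0 / rho_s            # reference optical thickness, Eq. 8.26
E = E0 + np.cumsum(theta_x) * (x[1] - x[0])   # integration of Eq. 8.25 (rectangle rule)
rho = rho_0 * (1.0 + (E - E0) / E0)   # density profile, Eq. 8.28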

Case of axisymmetric flow The analysis of an axisymmetric flow implies an additional complexity. If the interferogram is recorded with horizontal fringes, the analysis has to be carried out along lines parallel to Oz, as shown in Figure 8.13.

Between two horizontal lines y_0 and y, the deviation of a luminous ray is given by

\theta_z = \int_{y_0}^{y} \frac{1}{n}\frac{dn}{dz}\, dy = \int_{y_0}^{y} \frac{d\nu}{dz}\, dy   (8.29)

with \nu = \log n. In cylindrical coordinates, the deviation θ_z of a ray crossing an axisymmetric flow at abscissa z is given by [1]
\frac{\theta_z}{2z} = G(z) = \int_{z}^{R} f(r)\,\frac{2r \, dr}{\sqrt{r^2 - z^2}}   (8.30)

where R is the radius of the optically inhomogeneous medium and r = \sqrt{y^2 + z^2}.

FIGURE 8.13 Division of the axisymmetric field into N rings of the same thickness h.

The function

f(r) = \frac{\partial(\log n)}{\partial(r^2)} = \frac{\partial \nu}{\partial(r^2)}   (8.31)

is the unknown to be determined, where n is the refractive index of the gas. Equation 8.30 is solved assuming the flow to be axisymmetric. The radius R of the inhomogeneous medium is divided into N rings of the same thickness h. With r_i = ih, the experimental values G_i = G(z = r_i) are known. Then, the continuous variations of the refractive index are replaced by a sequence of discontinuities separating constant values in each ring: we write f_i = f(r) for r_{i−1} < r < r_i. The discretized Equation 8.30 is then solved using the method suggested in [1]. The system is solved recursively: knowing the values of f_i for k + 1 ≤ i ≤ N, we can deduce the value of f_k. The refractive index n_e of the external flow is determined from the density measurement ρ_e and the Gladstone–Dale equation. The density in each ring is given by

\frac{\rho}{\rho_e} = \frac{n - 1}{n_e - 1}   (8.32)
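The ring-by-ring recursion described above can be sketched numerically as follows (a minimal onion-peeling implementation consistent with the piecewise-constant assumption in the text; function names and the quadrature are illustrative, not taken from [1]):

import numpy as np

def onion_peel(G, r):
    """Recursive solution of the discretized Eq. 8.30.
    G[k] = G(z = r[k]); f is constant in each ring r[i-1] < r < r[i]."""
    N = len(r) - 1                  # number of rings; r[0] = 0, r[N] = R
    f = np.zeros(N + 1)
    for k in range(N - 1, -1, -1):  # from the outer ring inward
        s = sum(f[i] * 2.0 * (np.sqrt(r[i]**2 - r[k]**2) -
                              np.sqrt(r[i-1]**2 - r[k]**2))
                for i in range(k + 2, N + 1))
        f[k + 1] = (G[k] - s) / (2.0 * np.sqrt(r[k+1]**2 - r[k]**2))
    return f[1:]                    # f_i for the rings i = 1..N

def ring_densities(f, r, n_e, rho_e):
    """log n from f (Eq. 8.31), then the density from Eq. 8.32."""
    nu = np.full(len(r), np.log(n_e))
    for i in range(len(r) - 1, 0, -1):
        nu[i-1] = nu[i] - f[i-1] * (r[i]**2 - r[i-1]**2)
    return rho_e * (np.exp(nu) - 1.0) / (n_e - 1.0)

Feeding the measured G_i and the ring radii r_i = ih returns the piecewise-constant f_i and the corresponding ring densities.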

Interference fringes modeling In the case of the recording and automatic processing of interferograms, the approach is quite different because it is based on the spectral characterization of the whole interferometric setup, the aim being to recreate on a computer the scale of experimental colors of the interferometer. For that, we will show that it is enough to analyze the spectrum of the light source and to take into account the transfer functions of the interferometer optical components and of the three red (R), green (G), and blue (B) filters of the camera used to digitize the interferograms. Several configurations may be encountered in the recording or restitution of the interferograms. In the recording, an interferogram can be either directly digitized or recorded on a film and digitized later. In the processing, the digitization procedure of the film requires that the interferogram be illuminated with the same light source as used in the recording, even if, in some cases, the light sources are different, especially when a spark source is used for the recording, while the restitution is always made with a continuous light source.

To build the colors of the interference fringes, it is necessary to know the spectrum of the light source and the transfer functions of the internal optical components and of the three filters of the video camera. An example is shown in Figure 8.14. The spectrum of an XBO 150 W xenon light source has been recorded through the interferometer with the Wollaston prism precisely located at the center of curvature of the spherical mirror and under no-flow conditions. In this manner, there is no optical path difference and only the effect of the internal optical components of the interferometer is taken into account. The spectral analysis of the light

FIGURE 8.14 Spectra of the xenon light source and of the camera’s filters.

source is performed using a monochromator and a photomultiplier. The bell-shaped curves represent the spectral attenuation functions of the three internal filters of the video camera used either to directly record the interferogram from the interferometer or to digitize interferograms recorded on film. The video camera considered here is a SONY 325P with separated RGB outputs, for which the three filters’ transfer functions F_r(λ), F_g(λ), and F_b(λ) have been given by the manufacturer.
In Figure 8.14, the spectral intensity I_s(λ) of the light source is superimposed on the three bell curves, and the spectral intensities I_RS(λ), I_GS(λ), and I_BS(λ) transmitted through the three filters are computed and drawn with the following relations:

I_{RS} = I_s(\lambda) \cdot F_r(\lambda)
I_{GS} = I_s(\lambda) \cdot F_g(\lambda)   (8.33)
I_{BS} = I_s(\lambda) \cdot F_b(\lambda)

This amounts to illuminating the test section with three light sources, the spectra of which
are known and shifted in the wavelength scale. In monochromatic light of wavelength λ0, the
luminous intensity of the interference fringes is expressed by

I(\Delta) = 4 I_0 \cos^2\left(\frac{\pi\Delta}{\lambda_0}\right)   (8.34)

if the interfering vibrations have the same amplitude. In this precise case, the path difference Δ does not depend on the light wavelength λ_0 and, with the relation (8.19), the intensity distribution of the fringes is given by

I(\xi) = 4 I_0 \cos^2\left(\frac{2\pi\varepsilon(\lambda_0)(\xi - \xi_0)}{\lambda_0}\right)   (8.35)

Relation (8.35) can be extended to interferences in white light. The constant luminous intensity I_0 is substituted by I_s(λ) and the birefringence angle becomes a function of the wavelength. The resultant intensity is then computed as the following integral:

I(\xi) = 4 \int_{\lambda = 0.4\,\mu\mathrm{m}}^{\lambda = 0.8\,\mu\mathrm{m}} I_s(\lambda) \cos^2\left(\frac{2\pi\varepsilon(\lambda)(\xi - \xi_0)}{\lambda}\right) d\lambda   (8.36)

ε varies little with wavelength, and to each ξ can be associated a mean path difference Δ using the value of ε at a given reference wavelength λ_0 (λ_0 = 560 nm):

\Delta = 2\varepsilon(\lambda_0)(\xi - \xi_0)   (8.37)

I(\Delta) = 4 \int_{\lambda = 0.4\,\mu\mathrm{m}}^{\lambda = 0.8\,\mu\mathrm{m}} I_s(\lambda) \cos^2\left(\frac{\pi\Delta\,\varepsilon(\lambda)}{\lambda\,\varepsilon(\lambda_0)}\right) d\lambda   (8.38)

To calculate the intensity of the interference fringes on the red, green, and blue channels, the
relation (8.38) is applied using the three sources computed from (8.33):

I_R(\Delta) = 4 \int I_{RS}(\lambda) \cos^2\left(\frac{\pi\Delta\,\varepsilon(\lambda)}{\lambda\,\varepsilon(\lambda_0)}\right) d\lambda

I_G(\Delta) = 4 \int I_{GS}(\lambda) \cos^2\left(\frac{\pi\Delta\,\varepsilon(\lambda)}{\lambda\,\varepsilon(\lambda_0)}\right) d\lambda   (8.39)

I_B(\Delta) = 4 \int I_{BS}(\lambda) \cos^2\left(\frac{\pi\Delta\,\varepsilon(\lambda)}{\lambda\,\varepsilon(\lambda_0)}\right) d\lambda

The results are sent to the three RGB inputs of an image processing board and the image is
obtained by superposition of the three RGB planes. The resulting total intensity will be

I_T(\Delta) = I_R(\Delta) + I_G(\Delta) + I_B(\Delta)   (8.40)

To correctly compare experimental and theoretical fringes, the minimum and maximum limits of the experimental fringes have to be computed in order to have the three sources obtained from Equation 8.39 fit within these limits. The white central fringe intensity gives the maximum values R_max, G_max, and B_max, while the minimum values R_min, G_min, and B_min are computed from the darker first-order tint. Therefore, the signal sent, for example, to the red input is

I_{mr}(\delta) = R_{min} + (R_{max} - R_{min}) \cdot I_R(\delta)/\bar{I}_R
I_{mg}(\delta) = G_{min} + (G_{max} - G_{min}) \cdot I_G(\delta)/\bar{I}_G   (8.41)
I_{mb}(\delta) = B_{min} + (B_{max} - B_{min}) \cdot I_B(\delta)/\bar{I}_B

where \bar{I}_R = \max_\delta I_R(\delta), \bar{I}_G = \max_\delta I_G(\delta), and \bar{I}_B = \max_\delta I_B(\delta).

Figure 8.15 shows the comparison between experimental fringes (at the bottom) and numerical fringes (at the top): practically no differences exist. The luminous intensity I_T(δ) of these fringes is plotted in the same figure. It can be seen that both the luminance (visually perceived brightness) and the chrominance (qualitative color) are astonishingly well reproduced. The good agreement between the experimental and theoretical fringes makes it legitimate to apply the theoretical path difference scale to the experimental colors [9].
Very often, the interferograms are recorded on photographic films and the film spectral response has to be taken into account. The following procedure can be adopted: under no-flow conditions, the Wollaston prism is located at the center of curvature of the spherical mirror and an interferogram is recorded with the white background color (parallel polarizer and

FIGURE 8.15 Comparison between the fringes’ colors and the luminous intensities.

analyzer). After development, the interferogram is illuminated with the same light source in order to analyze the spectrum of the uniform white tint recorded on the film. The spectral shape of the light source is attenuated by the film spectral response. In order to avoid problems induced by the film development, the uniform background interferogram and the measurement interferogram are developed simultaneously in the same baths.
The RGB transfer functions of the video camera come into play at the restitution, during the interferogram digitization, and the fringe comparison process described earlier is applied. In the experiments, one white background interferogram is recorded on the film to determine its spectrum. Then, another interferogram is recorded with narrow fringes to yield the minimum and maximum values and to allow a comparison of the computed and experimental fringes. It is then no longer necessary to calibrate the interferometric system before recording the interferograms.
For example, the measurement sensitivity can be evaluated for a biprism angle of 4°, a spherical mirror with a radius of curvature of 2500 mm, an optical setup with two crossings of the test section, and a 2D flow with a test section width of 42 mm. The abrupt change between two colors perceived by the eye is located in the first-order tints of the Newton scale, and it is induced by a minimum path difference variation of 30 nm. Under these conditions, it is possible to measure a density variation of 5.01 × 10−4 kg/m3.

8.4 Principles of three-wavelength differential interferometry

Advantage of using three-wavelength sources

In monochromatic interferometry (e.g., λ = 647 nm), it is well known that the classical interference pattern is represented by a succession of dark and bright red fringes. Between two successive fringes, the optical path difference changes by the wavelength of the laser source (Figure 8.16b). Unfortunately, the zero order of the interference fringes can never be identified, and this is one of the major difficulties with interference fringes in monochromatic light. Sometimes, it is not possible to follow the displacement of the fringes through a shock wave, for example, or to count the fringe number in a complex flow. When the light source is a continuous source (500 W xenon, see Figure 8.16a), the interference pattern is a colored fringe pattern in a sequence approximately matching Newton’s color scale. This fringe diagram exhibits a unique white fringe visualizing the zero order of interference, and it allows one to measure very small path differences, because six or seven different gray levels define the interval 0–0.8 μm. When the path difference is greater than three or four microns, instead, the gray levels can no longer be separated and the larger path differences cannot be correctly measured [18]. Figure 8.16c

FIGURE 8.16 Spectra and interference fringes given by three different light sources. (a) Xenon light source, (b) monochromatic light source, and (c) trichromatic light source.

shows the fringes obtained with a laser that emits three different wavelengths (one blue line, one green line, and one red line). One can see that the disadvantages of the two previous sources (Figure 8.16a and b) disappear. The zero order is always identifiable and the gray levels always remain distinguishable for small and large path differences. The interference pattern also presents the following peculiarity: even where the white fringe is not visible on the interferogram, the sequence of three successive gray levels in the diagram is unique.
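This uniqueness can be illustrated numerically (a minimal sketch applying Equation 8.34 to each of the three laser lines quoted later in this section and summing; equal line intensities are assumed):

import numpy as np

lines = (476.7e-9, 514.5e-9, 647.1e-9)   # blue, green, and red laser lines (m)
delta = np.linspace(-1e-6, 6e-6, 2000)   # path difference (m)

# Eq. 8.34 for each line; the path difference is the same for all lines
I_B, I_G, I_R = (4.0 * np.cos(np.pi * delta / lam)**2 for lam in lines)
I_T = I_B + I_G + I_R   # periodic for one line, quasi-aperiodic for three

# Over the scanned range, no (B, G, R) triplet repeats, so a local gray-level
# sequence identifies the absolute path difference, including the zero order.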

Choice of three wavelengths

First, we must remember that each wavelength, or each color, can be specified in terms of equivalent stimuli. The tristimulus values (x, y, z) were adopted by the International Commission on Illumination (CIE, Commission Internationale de l’Eclairage [19]) for the various spectrum colors. They indicate the amount of each of the CIE primaries that is required to match the color of 1 W of radiant power at the indicated wavelength. The tristimulus system is based on visually matching a color under standardized conditions against the three primary colors: red, green, and blue; the three results are expressed as X, Y, and Z, respectively, and are called tristimulus values. These values specify not only color but also visually perceived reflectance, since they are calculated in such a way that the Y value equals a sample’s reflectivity when visually compared with a standard white surface by a standard viewer under average daylight. The tristimulus values can also be used to determine the visually perceived dominant spectral wavelength (which is related to the hue) of a given sample. Such data can be graphically represented on a standard chromaticity diagram (Figure 8.17) based on the values x, y, and z, where x = X/(X + Y + Z), y = Y/(X + Y + Z), and z = Z/(X + Y + Z). Note that x + y + z = 1; thus, if two values are known, the third can always be calculated, so the z value is usually omitted.
The x and y values together constitute the chromaticity of a sample. Light and dark colors that have the same chromaticity (and are therefore plotted at the same point on the 2D chromaticity diagram) are distinguished by their different Y values (luminance, or visually perceived brightness).

FIGURE 8.17 Triangle defined by the fundamental colors of the Innova Spectrum laser on the CIE chromaticity diagram.
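The chromaticity coordinates defined above are straightforward to compute (a minimal sketch; the XYZ numbers are arbitrary example values close to a daylight-like white):

def chromaticity(X, Y, Z):
    # x and y as defined in the text; z = 1 - x - y is usually omitted
    s = X + Y + Z
    return X / s, Y / s

x, y = chromaticity(95.0, 100.0, 108.9)
print(f"x = {x:.3f}, y = {y:.3f}")   # close to the achromatic point (0.33, 0.33)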
When their x and y coefficients are plotted on a chromaticity diagram, the spectral colors from 400 to 700 nm follow a horseshoe-shaped curve; the non-spectral violet–red mixtures fall along the straight line joining the 400 nm point to the 700 nm point. All visible colors fall within the resulting closed curve, as shown in the standard chromaticity diagram. Points along the circumference correspond to saturated colors; pale unsaturated colors appear closer to the center of the diagram. The achromatic point is the central point at x = 0.33, y = 0.33 (shown as a small circle in Figure 8.17), where visually perceived white is located (as well as pure grays and black, which vary only in the magnitude of the luminance Y). The straight line that links the 380 nm point to the 780 nm point is called the line of purples. For example, the dotted-line triangle gives the coordinates of the fundamental colors of the RGB system. For television, it is the dashed-line triangle that has been adopted by the National Television System Committee (NTSC) standard.
In the case of three-wavelength differential interferometry, the chosen wavelengths create a three-color base that can reproduce a maximum number of colors, and the laser output power enables the interferograms to be recorded at the high framing rates required by unsteady flows (exposure times of the order of 1 μs). Three-wavelength interferometry requires, in principle, the use of three different lasers. It is possible to avoid using three different lasers, for example, by employing an ionized gas laser (mixed argon and krypton) that produces approximately 10 visible lines. By selecting the wavelengths λ1 = 476.7 nm, λ2 = 514.5 nm, and λ3 = 647.1 nm of this laser, we obtained the RGB triangle whose vertices are represented in Figure 8.17.

All the colors that can be found in the scale of tints of the interferometer are included in this triangle, which is the biggest triangle that can be created with the wavelengths generated by an argon and krypton laser.

FIGURE 8.18 Interference fringes obtained with three different light sources: (1) argon laser line, (2) krypton laser line, and (3) 500 W xenon source.

Contribution of color in oil-film thickness measurement

An example is given to illustrate how color directly determines, from the knowledge of the colors in Newton’s tint scale [20], the sign of the change in the oil-film thickness used for the measurement of wall shear stresses (see Chapter 12). A thicker oil film corresponds to a longer optical path. Figure 8.18 shows an example: three different light sources are used to record the oil-film interference fringes in the vicinity of the boundary layer transition region on a flat plate. The flow is from left to right, and the central part of the interferogram has been isolated in order not to take into account the lateral boundary layers of the test section.
Interferograms 1 and 2 show interference fringes recorded with the green and red lines, respectively, issued from an argon and krypton laser. The simple visualization of dark and bright fringes does not allow the unambiguous determination of the evolution of the oil-film thickness (noted ? in Figure 8.18). Interferogram 3 is obtained with a xenon light source, and the knowledge of Newton’s tint scale allows determining the variation of the oil-film thickness without any doubt. Moreover, since no color is enclosed between two identical tints in Newton’s scale, it is easy to detect the extrema of the oil-film thickness profile. This has the advantage of allowing easy determination of the gradient changes of the oil film. In interferogram 3 of Figure 8.18, one can see that the pale green fringe is enclosed within two fringes having the same red color (first change of the oil-film thickness slope sign) and that the yellowish fringe is enclosed within two identically purplish-red colored fringes (second change of the slope sign). Analysis of the interferogram colors indicates that the oil-film thickness is increasing upstream of the location of the pale green color, then decreasing up to that of the yellowish color, and increasing again downstream of it. Also note that the oil-film thickness experiences a small variation in the downstream region. Better sensitivity is obtained with a white light source because several fringes are visible downstream of the central part of the model. In monochromatic light (argon or krypton lasers), it is only possible to distinguish a variation of one fringe.

8.5 Application to flows

Application to 2D unsteady subsonic wake flows

In this example, the unsteady wake flow of schematic turbine blades at Mach 0.4 is analyzed [17]. The unsteady pressure signals are simultaneously recorded around the trailing edge in order to be able to synchronize the acquisition of images with the pressure variations. Two successive interferograms and the gas density field are given in Figure 8.19. The gas density ρ is referenced to the upstream gas density ρ∞. The present model has a circular trailing edge with D = 15 mm and the boundary layer is quasi-laminar. Due to the shift between the two interfering rays (d_x = 1.57 mm), the interferograms cannot be analyzed down to the model wall. The interferograms’ analysis is conducted using the numerical model described in the “Interference fringes modeling” section.
FIGURE 8.20 Interferogram and gas density distribution at x/L = 0.501: measurements compared with the results of Merlen and Andriamanalina [22] and Jones [23].

For this application, a Wollaston prism that separates the partial beams very widely has been used. The prism angle itself is about 18°, so that the distance between the two partial beams is 14.60 mm at the level of the spherical mirror, and it is therefore greater than the domain to be measured. This is equivalent to an ordinary interferometry arrangement with a separate reference, because the interferogram is directly interpretable in a 2D flow, but the setup is still differential because one of the beams, used as a reference, is placed in the undisturbed flow, either upstream of the model or, as in the present case, on the side of the model, outside of the shock layers. The interference fringes are placed horizontally so that the separation between the two beams is vertical (Figure 8.20). The results are in good agreement with the analytical solutions for slender ogives following [23] and with tabulated data for an inviscid flow of ideal gas around a cone [24].

Application to axisymmetric flows

In the case of axisymmetric wake flows, differential interferometry has been used to analyze the structure of a hot supersonic jet at Mach 1.8 injected into a coaxial supersonic flow at Mach 1.5. The method is sufficiently sensitive for a quantitative analysis to reconstruct the local density field. This operation is possible from a single interferogram provided the flow is 2D or axisymmetric. The flow structure was assumed axisymmetric and the interferograms were recorded with horizontal fringes so that vertical gradients of the refractive index are detected [25]. The radial density distribution was determined by the spectral analysis of the colors in the upper half-plane. Wherever possible, a similar analysis was made for the lower half-plane to check whether the flow was effectively axisymmetric. The axisymmetric flow was reconstructed from the undisturbed flow starting as close as possible to the axis of revolution. Figure 8.21 shows two interferograms recorded for two different pressure ratios of 2.74 and 3.38 at the same temperature ratio of 1.67. In the analysis of the interferograms, the density distributions are computed along vertical lines. Whenever the vertical line along which the analysis is performed completely crosses the jet, a density profile can be determined by integration in the upper half-plane and in the lower half-plane. If the flow is strictly axisymmetric, the two profiles should be identical. In Figure 8.21, for the axisymmetric case, the profiles of the optical thickness (blue line) and of the gas density (red line) obtained near the nozzle exit section are relatively symmetric around the flow axis. In the non-axisymmetric case, the analysis made far downstream provides relatively contrasted density profiles that reflect the turbulent behavior of the flow.

Application to gaseous mixtures

Differential interferometry has also been used to analyze the stability of the interface separating two fluids of highly different densities (such as sulfur hexafluoride [SF6] and air, or xenon and air) when it is impacted by an incoming shock wave [26]. The shock tube is vertical in order to keep the interface stable before the arrival of the shock wave. In this test, differential interferometry is compared with another diagnostic technique based on densitometry, in which the partial gas density profile of one of the two gases can be obtained with careful calibration if the gas pair is air/xenon. In the case of SF6/air, the densitometry technique cannot be used because both gases are transparent to it. Only differential interferometry can yield

FIGURE 8.21 (See color insert.) Interferograms of the jet and radial distribution of the gas density. (a) Axisymmetric case and (b) non-axisymmetric case.

The optical setup is that shown in Figure 8.10. It requires two Wollaston prisms (0.5° bonded angle) installed head to foot and two "Clairaut" achromatic lenses, 800 mm in focal length and 120 mm in diameter. In the case of two-gas mixtures, it is known that the Gladstone–Dale relation can be extended if the Gladstone–Dale constants of each gas are known [1]. The analysis of the interferogram then yields the partial density profile of one of the two gases across the interface. Figure 8.22 shows three interferograms recorded at different times. On interferogram (a), the shock wave has already crossed the interface, has been reflected from the tube end wall, and is about to impact the modified interface a second time. Picture (b) was taken shortly after this second impact, and the wave is seen to have been partly transmitted into the SF6 and partly reflected into the air. On picture (c), the transmitted wave can be seen close to the bottom of the picture, while the reflected part has again been reflected from the end wall and is about to impact the interface.
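The extension of the Gladstone–Dale relation to a binary mixture amounts to n − 1 = K_air ρ_air + K_SF6 ρ_SF6, so that the SF6 partial density follows directly once the air contribution is known. The short Python sketch below illustrates this; the constant values are approximate literature figures and the input numbers are invented for the example.

```python
# Extended Gladstone-Dale relation for a two-gas mixture [1]:
#   n - 1 = K_air * rho_air + K_sf6 * rho_sf6
K_AIR = 2.26e-4   # m^3/kg, approximate value for air at visible wavelengths
K_SF6 = 1.27e-4   # m^3/kg, approximate value for SF6 (check tables)

def sf6_partial_density(n, rho_air):
    """SF6 partial density (kg/m^3) from the local refractive index n and
    the known partial density of air along the same line of sight."""
    return (n - 1.0 - K_AIR * rho_air) / K_SF6

# Invented example values, for illustration only:
print(sf6_partial_density(n=1.000560, rho_air=1.10))   # ~2.45 kg/m^3
```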

FIGURE 8.22 (See color insert.) Gas density profile of SF6 at the SF6–air interface, Ms = 1.45. (a) t = 1080 μs, (b) t = 1340 μs, and (c) t = 1698 μs. [Plot: normalized SF6 partial density versus x (mm), with curves labeled (a), (b), (c) and the air side indicated.]

The SF6 partial density profiles were obtained through the interface by averaging a dozen interferograms. For the xenon/air gas pair, the xenon partial density profiles were compared to those obtained by the densitometry technique. The two techniques yield very similar results.

8.6 principles of color holographic interferometry

The use of differential interferometry implies data integration to get the full gas density, and this integration results in a certain measurement inaccuracy. To obtain absolute data, real-time true color holographic interferometry using a three-color laser source has been developed [27]. Holographic interferometry methods include double-exposure, time-averaged, and real-time holography. Until recent years, experiments in holographic interferometry were performed with a single laser, that is, they were monochromatic. Most experiments found in the literature relate to transmission holograms [3], and few experiments have been performed to date using holographic interferometry with reflected white light [28,29]. It should be said that, in monochromatic mode, experiments in reflected white-light holography have little advantage over holographic interferometry in transmitted light. Some publications mention the use of three-wavelength differential interferometry [18] and holographic interferometry by reflection [30,31], and all show that the essential advantage of color is that the achromatic fringe can be located in the observed field.

principles of holograms by transmission and by reflection

Real-time true color holographic interferometry uses three primary wavelengths (red, green, blue) to record the interference between the three object beams and the three reference beams simultaneously on a single reference hologram. The technique uses a single-layer silver halide panchromatic plate made of a gelatine film with immersed silver grains, sensitive in the range 450–690 nm. The silver grains in the gelatine are about 10 nm in size, and the gelatine thickness is of the order of 10 μm. Under no-flow conditions, the undisturbed red, green, and blue object waves ∑RO, ∑GO, and ∑BO are recorded in the hologram by virtue of their interference with the three reference waves ∑RR, ∑GR, and ∑BR. As can be seen in Figure 8.23 (Step 1), for a plate recorded in transmission, the three reference waves and the three object waves arrive on the same side of the plate, while in reflection, they come from opposite sides of the holographic plate. After treatment of the plate and resetting in the optical bench (Step 2), the three reference waves ∑RR, ∑GR, and ∑BR are diffracted by transmission or by reflection, according to the recording mode used, to form the three diffracted object waves ∑ROD, ∑GOD, and ∑BOD (Step 3, Figure 8.23).
Then the hologram is illuminated simultaneously by the three reference beams and the three object beams, from which one obtains the three object waves ∑ROD, ∑GOD, and ∑BOD reconstructed by the holographic plate, simultaneously with the three transmitted live object waves ∑ROT, ∑GOT, and ∑BOT. The profiles of the ∑ROD and ∑ROT waves, of the ∑GOD and ∑GOT waves, and of the ∑BOD and ∑BOT waves are strictly identical to each other if no change has occurred between the two exposures (no flow in the test section) and if the hologram gelatine has not contracted during development. There will thus be three simultaneous interferences between the object waves reconstructed by the hologram and the live object waves. In this case, a flat uniform color can be observed behind the hologram (Step 4). If a change in optical path is created in the test section of the wind tunnel, the three live waves will deform and adopt the profiles ∑′ROT, ∑′GOT, and ∑′BOT, while the waves reconstructed in the hologram, ∑ROD, ∑GOD, and ∑BOD, remain unchanged. Any color variations, representing optical path variations, will thus be visualized in real time behind the hologram (Step 5, Figure 8.23).

optical setups of real-time holographic interferometry

Real-time color transmission interferometer  An example optical setup is shown in Figure 8.24. The three wavelengths downstream of the acousto-optic cell are split into a reference beam and an object beam by a beam splitter cube. A right-angle prism is used to adjust the reference and object path lengths on the hologram. A spatial filter is used to expand the beam for its passage through the test section. A pair of achromatic lenses converts the beam into parallel light in the test section and then focuses it on the hologram.

FIGURE 8.23 Formation of color interference fringes. [Diagram for transmission and reflection holograms, showing the object waves ∑RO, ∑GO, ∑BO, the reference waves ∑RR, ∑GR, ∑BR, and the diffracted object waves ∑ROD, ∑GOD, ∑BOD: (1) recording (first exposure); (2) development of the hologram and resetting; (3) reconstruction with only the three reference waves; (4) reconstruction with the three reference waves and the three undisturbed object waves; (5) reconstruction with the three reference waves and the three disturbed object waves.]

The reference beam passes over the test section, and then another achromatic lens is used to illuminate the hologram with a parallel light beam. To give a feeling for the capability of the technique and the illumination power involved, an example implementation of this experimental setup is given in [32]. In that application, the object beam diameter is 40 mm at the hologram, and that of the reference beam is 60 mm. At the acousto-optic cell, the power of the three light waves is practically the same (of the order of 70 mW per channel). The beam splitter cube distributes 85% of this power to the reference path and 15% to the measurement path.
At the hologram, the authors of [32] measured 250 μW/cm² in the red and blue lines and 280 μW/cm² in the green line for the reference beam, while the object beam powers are 30 μW/cm² in the red line and 40 μW/cm² in the green and blue lines. These proportions can be used to obtain a perfect balance among the powers of the three waves diffracted by the hologram, when repositioning it, and the three live waves.

FIGURE 8.24 Optical setup implemented around the wind tunnel. [Schematic: three laser lines λ1, λ2, λ3 pass through an acousto-optic cell; a beam splitter cube separates reference and object beams; spatial filters, a right-angle prism, and achromatic lenses route the object beam through the test section onto the holographic plate, which is viewed by a camera.]

For reference, in the proposed setup, the hologram diffracts 70 μW/cm² in the red line, 65 μW/cm² in the green line, and 90 μW/cm² in the blue line. The holograms are then subjected to treatments to harden the gelatine, develop it, and bleach it. When the hologram is put back in place, the light power at the camera entrance is 1.5 × 10−3 W at the focal point, which is sufficient to record interferograms at an ultrahigh speed of 35,000 frames per second with an exposure time of 750 ns per shot.

Real-time color reflection interferometer  The optical setup of Figure 8.25 could be called a "Denisyuk" setup, because it uses a holographic plate in a classical Lippmann–Denisyuk in-line arrangement. To obtain a very simple setup, all the optical pieces are located on the same side of the wind tunnel, except the flat mirror that reflects the light rays back into the test section. These considerations led to the design of the optical setup shown in Figure 8.25, based on real-time color reflection holographic interferometry.
The light source behind the interferometer can consist of three different lasers: a red line (λ1 = 647 nm), a green line (λ2 = 532 nm), and a blue line (λ3 = 457 nm), for example. The flat mirror located just behind the test section returns the three beams onto the holographic plate inserted between the quarter-wave plate and the large achromatic lens.
The hologram is illuminated on both sides by the three collimated reference and measurement waves, which are formed by the convergent and divergent achromatic lenses (not shown in Figure 8.25). This arrangement allows one to easily obtain, before the test, a uniform background color (infinite fringes) or narrowed fringes (finite fringes). In this setup, a polarizing beam splitter cube is inserted between the spatial filter and the quarter-wave plate, which transforms the waves' polarization twice (from P parallel to circular and from circular to S parallel) so that, when the rays return, the beam splitter cube directs them toward the camera.

FIGURE 8.25 Real-time three-color reflection holographic interferometer. [Schematic: laser lines λ1, λ2, λ3 pass through an acousto-optic cell and a spatial filter; a polarizing beam splitter cube, a quarter-wave plate, the holographic plate, and achromatic lenses precede the test section, with a flat mirror behind it; a mask, a diaphragm, and a camera record the interference fringes.]



FIGURE 8.26 Formation of color interference fringes in the reflection optical setup (labels in the diagram: PBC, QWP, Hologram, TS, LFM; 100% and 50% indicate the light fractions in each path): (a) reference hologram recording, (b) hologram resetting with mask, (c) second exposure with undisturbed object waves, and (d) second exposure with disturbed object waves.

A diaphragm is placed in the focal plane just in front of the camera in order to filter out any parasitic interference. The interference fringes produced by the phenomenon under analysis can be directly recorded using a high-speed camera. Figure 8.26 details how the interference fringes are generated in the real-time three-color reflection holographic interferometer.
First, the holographic plate is simultaneously illuminated with the three wavelengths (Figure 8.26a). The panchromatic hologram records simultaneously the three sets of interference fringes produced by the three incident waves and the three waves reflected by the flat mirror (first exposure). Then the hologram is developed and reset in the optical bench at the same location. At the second exposure, if the diffraction efficiency of the holographic plate is near 50% for the three lines, 50% of the light is reflected by the hologram (dashed lines) and 50% crosses the holographic plate (solid lines). If a mask is inserted in front of the test section, one can observe on the screen the three images diffracted by the plate (Figure 8.26b). This operation allows for verifying the quality of the diffracted holograms. When the mask is removed, 50% of the light crosses the test section twice and interferes in real time with the three reference waves (solid lines). The interference fringes are not localized, because they can be observed anywhere from the holographic plate to the camera. If no disturbances exist in the test section, a uniform background color is obtained at the camera (Figure 8.26c). If a variation in refractive index exists in the test section, color fringes will be seen on the screen. As the luminous intensities of the reference and measurement waves are essentially equal, the contrast of the color interference fringes will be maximum (Figure 8.26d).

This optical setup is very simple, but it presents both advantages and drawbacks. The main advantage lies in the small number of optical pieces used: the reference beams and the measurement beams are collinear, and there is just a flat mirror behind the test section. The contrast of the color interference fringes depends on the diffraction efficiency of the holographic plate, and the color saturation depends on the luminous intensity of the three wavelengths, which can be adjusted with the acousto-optic cell. A main drawback resides in the fact that it is not possible to adjust the diffraction efficiency of the holographic plate: it is fixed by the chemical treatment and is a function of the gelatine thickness. The only solution to this problem consists in a specific treatment of the surface of the flat mirror, and this operation implies prior knowledge of the diffraction efficiency of the hologram.
Finally, the three interference fringe patterns will exist and can be recorded only if the coherence length of the three wavelengths is more than twice the distance between the holographic plate and the flat mirror located just behind the test section. Compared to the setup of transmission holographic interferometry, here it is not possible to adjust the path length difference between the reference and measurement rays.

problem of gelatine contraction

The problem of gelatine shrinkage is described in detail in [33]. Figure 8.27 shows how the interference fringes are inscribed into the gelatine when the holographic image is recorded by transmission or reflection. In transmission, the interference fringes are perpendicular to the plate, and a small variation in the gelatine thickness caused by the chemical treatment of the hologram does not modify the three inter-fringe distances. In reflection, on the other hand, the interference fringes are recorded parallel to the plate surface, and the inter-fringe distance is very sensitive to a small variation of the gelatine thickness. Figure 8.27 presents the effects of gelatine contraction when a reflection hologram is recorded with a green wavelength (514 nm). During the reconstruction, a white light source (e.g., a xenon source) illuminates three different holograms at the incidence angle that the reference wave had at recording. One can see that if the gelatine thickness is kept constant (Δe = 0), the hologram only diffracts the recording wavelength, that is, for the green hologram, the green wavelength contained in the xenon spectrum. If the gelatine thickness has decreased by 5% (Δe = −0.5 μm for a gelatine thickness e = 10 μm), the fringe spacing is proportionally reduced and the diffracted wavelength is shifted by a quantity equal to

Δλ = (λ/e) Δe    (8.42)

FIGURE 8.27 Effect of the gelatine contraction on the different waves. [Diagram: in a transmission hologram the inter-fringe spacings i1, i2, i3 are not very sensitive to gelatine contraction, whereas in a reflection hologram they are very sensitive to it; bottom: a reflection hologram recorded at 514 nm and illuminated in white light, for Δe < 0 (−5%), Δe = 0, and Δe > 0 (+10%).]



The hologram will diffract a wavelength equal to 488.3 nm, corresponding to a blue line, for Δe = −0.5 μm. If the gelatine thickness increases by 10% (Δe = +1.0 μm), the hologram illuminated in white light will diffract a wavelength close to yellow (565.4 nm).
On the other hand, it is well known that the chromatic perceptibility of the eye, δλ, varies with the wavelength. The chromatic perceptibility is the smallest variation δλ between two different wavelengths perceived by the eye at constant luminosity. It is about 1 nm in the green and yellow colors and 6 nm in the blue and red colors, which corresponds to relative variations of 0.2% and 1.5%, respectively. For the diffracted color change not to be detected by a human eye, the relative thickness variation Δe/e has to be less than δλ/λ, which implies that the variation in gelatine thickness should be less than 0.2%; for a 10 μm gelatine, changes in thickness of more than 20 nm are not acceptable. As the optical technique is based on the knowledge of the true colors diffracted by the hologram, variations of the gelatine thickness are a cause of large errors in the data analysis. It is for this reason that the gelatine shrinkage problem has to be perfectly mastered. Details of the different holographic plates used (PFG03c from Slavich and Ultimate 08 from Gentet), the solution to control the gelatine contraction, and the measurements of diffraction efficiency can be found in [34].
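A quick numerical check of Equation 8.42, reproducing the 488.3 and 565.4 nm values quoted above as well as the 0.2% eye-perceptibility limit, can be written as follows (values as in the text):

```python
# Diffracted wavelength shift caused by a gelatine thickness change (Eq. 8.42):
#   delta_lambda = (lambda / e) * delta_e, i.e., a relative shift de/e
lam = 514.0   # nm, recording wavelength of the green reflection hologram

for de_rel in (-0.05, +0.10, -0.002):   # -5%, +10%, and the 0.2% eye limit
    print(f"de/e = {de_rel:+.1%} -> diffracted wavelength = "
          f"{lam * (1.0 + de_rel):.1f} nm")
```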

applications

Transmission interferometer  The optical setup of real-time transmission holographic interferometry has been applied to analyze the unsteady flow downstream of a cylinder of diameter D = 20 mm in crossflow. Figure 8.28a gives a sequence of six interferograms of the flow around the cylinder at Mach 0.37. The time interval between each picture is 100 μs. One can see that each vortex is represented by concentric rings of different colors, with each color representing an isochoric line. The vortex formation and dissipation phases can be visualized clearly by the fringes' oscillation between the upper and lower surfaces of the cylinder. Several types of measurements were made by analyzing a sequence of 100 interferograms. First, the vortex center, defined by the center of the concentric rings, was located in space for each interferogram, which made it possible to determine the mean paths of the vortices issued from the upper and lower surfaces. The results are shown in Figure 8.28b.

FIGURE 8.28 (See color insert.) Unsteady wake flow around the cylinder, results and analysis, Mach 0.37. (a) High-speed holographic interferograms recorded by transmission, Δt = 100 μs (frames 1–6). (b) Vortex trajectories (y/D versus x/D). (c) Vortex gas density (ρ/ρ0 versus x/D, ranging from about 0.75 to 0.95).

The "o" symbols represent the positions of the vortex centers from the upper surface, and the "•" symbols represent those from the lower surface. Remarkably, the two paths exhibit a symmetry about the horizontal axis (y = 0) passing through the cylinder center. One may also point out that, even at x/D = 4 downstream of the cylinder, the upper and lower vortex paths do not come together and line up. Then, the colors of each interferogram were analyzed using the "MIDI" software, which models the luminous intensity and the experimental interference fringe colors as a function of the optical path difference [9]. The gas density measured under freestream conditions is the same as that measured in the outer flow of the wake (measured in the vicinity of the wind tunnel's upper and lower walls). The graph in Figure 8.28c shows how ρ/ρ0 varies for the vortices emanating from the upper and lower surfaces. The trend curves plotted show the same variations. For 0.5 < x/D < 1, the vortices are in a formation or agglomeration phase, because the gas density decreases at their center. Then, when x/D > 1, the vortices enter a dissipation phase, because the gas density increases again at their centers. The drop in gas density is large, reaching about 20% of ρ0. A rather large dispersion may nonetheless be noticed in the data. This is due mainly to the uncertainty in determining the vortex center locations, which is not trivial when the vortices are in the dissipation phase.

Reflection interferometer  A real-time color reflection holographic interferometer has been implemented to analyze the same unsteady wake flow around the cylinder. In this experiment, the freestream Mach number was fixed at 0.45, and the high-speed interferograms were recorded with the rotating drum camera, which is equipped with a 400 ASA color film. The time interval between two successive frames is 117 μs, and the exposure time is 750 ns. Several movies have been recorded with a uniform background color (infinite fringes) and with circular and narrowed fringes (finite fringes). As the optical setup is very sensitive to external vibrations, the uniform background color is difficult to adjust when the wind tunnel is running, but the fringe formation can be observed on the hologram surface, so that it is possible to adjust the uniform background color with the wind tunnel operating. Figure 8.29 shows 3 of the 12 interferograms, recorded in infinite fringes, covering about a period of the vortex street.

FIGURE 8.29 Interferogram analysis: instantaneous and average gas density fields. [Maps of y/D versus x/D for three instantaneous interferograms (1–3) and the average field, with a ρ/ρ0 color scale from 0.720 to 0.970.]

The interferogram colors are well saturated and of higher contrast than those obtained in the previous experiments performed with transmission holograms (see the interferograms of Figure 8.28).
When the background color is uniform, it is very easy to follow the vortices emitted from the upper and lower sides. For instance, looking at the colors coming out in the vortex cores, one can easily see that the first vortex emitted from the upper side enters a formation phase, in which the gas density decreases in the vortex center. A second phase, of dissipation, is observed looking at the last vortex leaving the observed field. Finally, as with transmission holographic interferometry, each color represents a value of the gas density. In the post-processing analysis, the gas density field was presented in nondimensional form with respect to ρ0, the stagnation gas density. One can see that the instantaneous gas density varies from 0.70 to 0.98. The average gas density in the field has been calculated from 12 successive interferograms. This number of interferograms is not statistically large, but the obtained field is already fairly symmetrical, and the gas density varies from 0.72 to 0.97. Finally, if the color scale of the interference pattern is very well known to the user, the interferogram images alone are sufficient to correctly evaluate the evolution of the gas density field.

problems

8.1 An optical setup based on differential interferometry with double crossing of the test section and one Wollaston prism is used to analyze a field of 200 mm in diameter. It includes a spherical mirror, 400 mm in diameter, with a 2.5 m radius of curvature. We want to realize an absolute measurement of the optical path difference. Knowing that the quartz birefringence (ne − n0) is equal to 9 × 10−3 and that the calcite birefringence is 20 times higher than that of quartz, determine the bonding angle of the prism if the material is calcite or quartz. Which material should be used? Why?
8.2 A 2D flow is analyzed with the differential interferometer defined in Problem 8.1. The test section, located at 100 mm from the spherical mirror, is 50 mm in width. The flow generates a negative optical path difference of 0.55 μm. What is the decrease in gas density measured in the test section? Without the flow, the differential interferometer is adjusted to obtain a white color on the screen (δ = 0). When the wind tunnel is running, what is the color induced by the flow?
8.3 What is the fringe spacing recorded in a transmission hologram (λ = 660 nm) if the two interfering rays are separated by 30°? How many fringes are recorded in the gelatine thickness (10 μm)? What is the effect of gelatine contraction on the fringe spacing?
8.4 What is the fringe spacing recorded in a reflection hologram (λ = 660 nm) if the two interfering rays are separated by 155°? Before the development, the gelatine thickness is decreased by 5%. What wavelength is diffracted by the hologram if it is illuminated in white light?
8.5 In a hypersonic wind tunnel, the test air density is equal to 2 × 10−2 kg/m3 and the test section width is 15 cm. Considering density variations up to 300%, quantify the refractive index variations for a plane 2D experiment.

references

1. Merzkirch W (1974). Flow Visualization, Academic Press, New York.
2. Vest CM (1978). Holographic Interferometry, Wiley-Interscience, New York.
3. Rastogi PK (1994). Holographic Interferometry, Springer-Verlag, Berlin, Germany.
4. Kreis T (1996). Holographic Interferometry, Akademie Verlag, Berlin, Germany.
5. Mercer C (2003). Optical Metrology for Fluids, Combustion and Solids, Kluwer Academic Publishers, New York.
6. Fleury P, Mathieu JP (1960). Images optiques, Interférences, Editions Eyrolles.
7. Veret C, Philbert M, Surget J, Fertin G (1977). Aerodynamic flow visualization in the ONERA facilities, ISFV1, Asanuma: Hemisphere, Washington, DC.
8. Dewey JM, Heilig W, Reichenbach H, Walker DK (1983). The analysis of coloured interferograms of shock waves, Flow Visualization I, Hemisphere, Washington, DC.
9. Desse JM (1997). Recording and processing of interferograms by spectral characterization of the interferometric setup, Experiments in Fluids, 23, 265–271.
10. Gontier G (1957). Contribution à l'étude de l'interférométrie différentielle à biprisme de Wollaston, Pub. Sci. Tech. Ministère de l'air, 338.
11. Françon M, Sergent B (1955). Compensateur biréfringent à grand champ, Optica Acta, 2, 182–184.
12. Smeets G (1968). Interféromètre différentiel à faisceaux fortement séparés, dépouillement des interférogrammes, ISL TN, 41.
13. Carlomagno GM (1986). A Wollaston prism interferometer used as a reference beam interferometer, Flow Visualization IV, Hemisphere, Washington, DC.
14. Desse JM (1993). Direct measurement of the density field using high speed differential interferometry, Experiments in Fluids, 15, 452–458.
15. Desse JM, Picart P (2013). Digital three-wavelength holographic interferometry using Wollaston prisms, Digital Holography & 3D Imaging, OSA Meeting, Kohala Coast, HI.
16. Desse JM (1990). Instantaneous density measurement in two-dimensional gas flow by high speed differential interferometry, Experiments in Fluids, 12, 1–9.
17. Sieverding CH, Cicatelli G, Desse JM, Meinke M, Zunino P (1999). Experimental and Numerical Investigation of Time Varying Wakes Behind Turbine Blades, Notes on Numerical Fluid Mechanics, Vol. 67, Vieweg, Rosenheim, Germany.
18. Desse JM (1997). Three-color differential interferometry, Applied Optics, 36, 7150–7156.
19. MacAdam DL (1985). Color Measurement, Theme and Variations, Vol. 27, Springer-Verlag, New York.
20. Desse JM (2003). Oil-film interferometry skin-friction measurement under white light, AIAA Journal, 41, 2468–2477.
21. Desse JM, Deron R (2009). Shadow, schlieren and color interferometry, Aerospace Lab, 1, 1–10.
22. Desse JM, Fabre E (1996). Differential interferometry for studying hypersonic flows, Experiments in Fluids, 20, 273–278.
23. Merlen A, Andriamanalina D (1992). Analytical solutions for hypersonic flow past slender power law bodies at small angle of attack, AIAA Journal, 30.
24. Jones DJ (1969). Tables of inviscid supersonic flow about circular cones at incidence, γ = 1.4, Agardograph, 137.
25. Rodriguez O, Desse JM, Pruvost J (1997). Interaction between a supersonic hot jet and a coaxial supersonic flow, Aerospace Science and Technology, 6, 369–379.
26. Galametz I (1994). Visualisation et mesure de masse volumique dans un mélange gazeux en tube à choc, Thèse de doctorat, Université de Lille, Lille, France.
27. Desse JM, Albe F, Tribillon JL (2002). Real-time color holographic interferometry, Applied Optics, 41, 5326–5333.
28. Smigielski P, Fagot H, Albe F (1976). Application de l'holographie ultra rapide à référence arrière à l'étude de déformations dynamiques, Proceedings of the 12th International Congress of High Speed Photography, Toronto, Ontario, Canada.
29. Vikram CS, Witherow WK (1992). Critical needs of fringe order accuracy in two-color holographic interferometry, Experimental Mechanics, 74–77.
30. Harthong J, Sadi J, Torzynski M, Vukicevic D (1997). Speckle phase averaging in high-resolution color holography, Journal of the Optical Society of America A, 14, 405–410.
31. Jeong TH, Bjelkhagen HI, Spoto LM (1997). Holographic interferometry with multiple wavelengths, Applied Optics, 36, 3686–3688.
32. Desse JM, Picart P (2012). Color holographic interferometry (from holographic plates to digital holography), 15th International Symposium on Flow Visualization, Minsk, Belarus, June 25–28, 2012.
33. Desse JM (2006). Recent contribution in color interferometry and applications to high-speed flows, Optics and Lasers in Engineering, 44, 304–320.
34. Naydenova I (2011). Advanced Holography: Metrology and Imaging, InTech Open Access Publisher, Rijeka, Croatia.
SECTION III
Velocity measurements

CHAPTER NINE

Thermal anemometry

Ramis Örlü and Ricardo Vinuesa

Contents

9.1 Introduction 257


Background 257
Reference literature and content 259
9.2 Basic principles 260
Heat transfer characteristics 260
Modes of operation 264
9.3 Probe design, manufacturing, and repair 265
Commercial versus in-house repaired/built probes 265
Hot-wire materials and geometrical constraints 265
Wire treatment: Etching versus plating 266
The prongs-wire connection: Soldering versus welding 267
Preaging, aging, and drift 269
9.4 Calibration and its relations 269
Precautions and presettings 270
Single-wire probes 271
Multiwire probes 275
Temperature calibration 276
Calibrations for low velocities 279
9.5 Measurements 282
9.6 Limitations and corrections 284
Wall/probe interference and wall-position determination 285
Temporal and spatial resolution 288
Corrections for temperature fluctuations and drift 291
Acknowledgments 294
Problems 294
References 296

9.1 Introduction

Background

The main objective of experimental fluid mechanics is the measurement of local flow velocities, and in this respect, hot-wire anemometry (HWA) is without doubt the most versatile and widely used laboratory method. The term "hot-wire anemometer" implies the usage of a hot, that is, heated, wire to measure wind speeds. Although the term has a historical justification, since the early usage was restricted to measurements in air only, the so-called hot-film anemometers have been used in various liquids as well. Nonetheless, due to the emergence and advancement of laser Doppler velocimetry/anemometry (LDV/LDA) and particle image velocimetry (PIV), to be discussed in Chapter 10, HWA has again become more focused on measurements in gases, leaving the area of measurements in liquids primarily to optical measurement techniques. The measurement principle of the hot wire (and of thermal anemometers


in general) is based on the fact that the local fluid velocity is measured by sensing the changes in forced convection from a small, electrically heated sensor exposed to the flow of interest, which makes it an indirect measurement technique. Its small size and good frequency response, as well as its applicability to a wide velocity range with high accuracy and resolution, make it especially suitable for rapidly changing flow velocities, such as in transient and turbulent flows.
HWA was also the first technique that enabled the quantitative study of turbulent fluctuations. In fact, it was the only measurement technique capable of measuring high-frequency and high-amplitude velocity fluctuations prior to the development of LDV in the 1970s.* Although "modern" optical techniques such as LDV and PIV are often claimed to outrival the "very classical" technique of HWA [5], the latter remains the technique of choice when it comes to validation of numerical simulations or scaling laws, for example, in the field of turbulence, where the range of both spatial and temporal scales challenges these more "modern" techniques.† Considering peer-reviewed publications, the occurrence of the aforementioned three common measurement techniques is depicted in Figure 9.1, which indeed confirms that optical measurement techniques are more prominent, but it also underlines that HWA has preserved its importance in the research community for almost half a century. Nonetheless, faced with these facts,‡ one cannot deny that LDV, and in particular PIV, has superseded HWA in many areas (such as liquid and multiphase flows), thereby restricting its usage more and more to areas where the other techniques cannot keep up with its frequency response and spatial resolution. One prominent area in which this is the case is without doubt wall turbulence; in fact, most of the experimental evidence in this field has been obtained through thermal anemometry. At the same time, this is also the research field that inherits most of the challenges for hot-wire measurements. Classical textbooks touch on the problems that might occur when measuring near walls (which comprise low-speed measurements, wall/probe interference effects, spatial and frequency resolution, etc.); however, since most of the advances in this respect are comparably recent, they have not yet been covered to an extent that might assist the potential user. This chapter therefore ends with an overview of current issues, their limitations, and possible corrections, and provides an extensive reference list for those planning an in-depth view of the subject.

FIGURE 9.1 Occurrence of HWA (including thermal and hot-wire/film anemometer/anemometry), LDV (including laser Doppler velocimetry/anemometry), particle image velocimetry (PIV), and direct numerical simulations (DNS) in peer-reviewed publications (in either title, keywords, or abstract). Statistics acquired via Scopus (www.scopus.com). (a) Publication rate, that is, counts per year. Dashed horizontal lines indicate the average number 〈#〉 of publications for HWA and LDV, while # indicates the peak values for PIV and DNS (both occurring in 2012). (b) Cumulative occurrence. For comparison, the estimates by Freymuth [1,2] and Fingerson [3] are shown as filled circles. [Values annotated in panel (a): # = 2200, # = 1350, # = 625, and # = 170.]

* See, for example, the classical textbook from 1972 by Sandborn [4], which states that "the hot-wire anemometer is employed almost exclusively to measure transient low phenomenon. For this area of measurement there has been no rival instrumentation."
† The quotation marks around "modern" and "very classical" express the authors' experience with referees and committee board members, where the latter adjective is used synonymously with "outdated" in regard to HWA.
‡ See also the statistics presented in Westerweel et al. [5] based on Google Books, which have inspired the present enquiry.

Undoubtedly, each of the available measurement techniques has its justification and advantages. It is therefore the responsibility of the experimentalist to select the most appropriate one for the task of interest. In short, HWA is the preferred measurement technique when one wishes to measure rapidly varying velocities with good spatial and temporal resolution, such as in turbulent flows. In particular, in wall turbulence, HWA is, despite the aforementioned limitations, the technique of choice. In general, the main limitation of hot wires is their fragility and sensitivity to contamination. Hence, if high temperatures, three-dimensionality (in the mean sense), or contaminated flows are present, LDV should be considered instead. PIV, on the other hand, is the technique of choice when more than single-point information is of interest, for example, when spatial derivatives or coherent structures need to be assessed.

reference literature and content

Without any doubt, many researchers have made significant contributions to thermal anemometry and should be cited in any review on the subject. Contrary to most other measurement techniques reviewed in this book, however, being a "very classical" technique brings along that a number of excellent textbooks and review articles are already available. Henceforth, references will mainly be made to review articles and books, rather than papers, except where topics have not been covered in the former. The raison d'être of this chapter is therefore not to replace the excellent textbooks and review articles, but to present a practical primer on the required steps, starting from manufacturing a probe (Section 9.3), setting it up in a hot-wire anemometer circuitry and calibrating it (Section 9.4), over to performing measurements (Section 9.5) and assessing the quality, that is, the limitations, of the measured signal (Section 9.6). While the first five sections can be seen as a brief summary of the available literature, Section 9.6 distinguishes itself not only from previous works but also from the rest of this chapter, and provides both an overview of current issues and extensive references for the interested reader. The basics and relevant notations will therefore only briefly be summarized (Section 9.2) to serve as a basis for the practical sections. Figure 9.2 outlines the structure of this chapter and indicates some of the questions that will be discussed in the various sections.

FIGURE 9.2 Flowchart depicting the structure of this chapter as well as the driving questions discussed in it: Introduction (Section 9.1: Why HWA? Reference literature?), Basic principles (Section 9.2: physical and operational principles, assumptions), Select/build probe (Section 9.3: buy versus build, solder versus weld, etch versus plate, material and geometry), HWA system settings, tuning, and calibration (Section 9.4: calibration relation E = f(U, T), velocity versus temperature sensitivity, precautions and calibration for low speeds), Measurement/post-processing (Section 9.5), and Limitations/corrections (Section 9.6: resolution and range, wall interference and position, spatial and temporal resolution, drift and temperature fluctuation corrections). As apparent from the connection lines denoted with "Drift," the crux of HWA lies in avoiding and/or handling drift, that is, the temporal change in the functional relation between (primarily) velocity and the voltage read from the anemometer.

The authors have benefitted in particular from the following reference works and will refer to them throughout this chapter:
• Textbooks entirely dedicated to thermal anemometry: Sandborn [4], Strickert [6], Perry [7], Lomas [8], and Bruun [9], while the textbook by Vukoslavčević and Petrović [10] deals mainly with multiwire probes.
• Extensive book chapters: Corrsin [11], Bradshaw [12], Hinze [13], Blackwelder [14], Smol'yakov and Tkachenko [15], Eckelmann [16], Bernard and Wallace [17], Tavoularis [18], Comte-Bellot [19], Durst [20], and Bailly and Comte-Bellot [21].
• Comprehensive review papers: Fingerson [3], Comte-Bellot [22], Vagt [23], Stainback and Nagabushana [24], Fingerson and Freymuth [25], Lekakis [26], and Lemonis and Dracos [27].
• Specifically dealing with applications in compressible, supersonic flows: Comte-Bellot [19], Kovasznay [28], Morkovin [29], Smits et al. [30], and Smits and Dussauge [31].
• Historical reviews are given in most of the aforementioned references, but Comte-Bellot [22] is particularly often cited in this respect, while Huguenard et al. [32], Dryden and Kuethe [33], and Burgers [34] provide very early accounts of the history of HWA.

9.2 Basic principles

heat transfer characteristics

The principle of thermal anemometers is that the amount of cooling experienced by a heated wire, whose electrical resistance depends on the temperature, can be related to the local flow velocity: hence, HWA is based on a thermoelectric measurement principle. This is accomplished by electrically heating the hot wire and exposing it to the flow of interest, in which case a strong relation between the cooling of the wire and the velocity of its surroundings can be observed. To express the relation between the heat introduced into the sensing element and the velocity of the flow surrounding the wire, consider Figure 9.3a, which depicts the hot wire and its two support needles.

FIGURE 9.3 Photograph of a hot wire (of diameter D = 5 μm and length L = 1.5 mm) soldered to the tip of the prongs, with schematics of (a) the steady-state heat balance between Joule heating (given by sensor current I and resistance Rw, as well as the voltage drop across the sensor E = IRw) and the forced convective heat loss W due to the cooling velocity U, and (b) the heat balance at the sensor including the heat losses neglected in (a), namely, heat conduction from the hot wire to the cold prongs Wc, natural/free/buoyant convection Wfc, and heat radiation Wr. The size of the arrows illustrates their relative importance.

Assume that the hot wire is electrically heated through a current I, in which case the heating power (Joule heating) is given by

P = IE = I²Rw = E²/Rw,    (9.1)

where
E denotes the voltage drop across the hot-wire sensor
Rw is the resistance of the heated wire

Furthermore, the cooling velocity U is assumed to be primarily responsible for the heat lost, that is, through forced convection W:

W = hAw(Tw − T) = hπDL(Tw − T),    (9.2)

where h and Aw denote the convective heat transfer coefficient and the surface area of the hot wire with diameter D and length L, respectively, while Tw and T indicate the temperature of the hot wire and of its surrounding medium. This temperature difference, ΔT = Tw − T, is typically around 200 and 20 K for applications in air and water, respectively. Whenever the subscript w is used, a uniform distribution along the wire length will be assumed, well knowing that this is not exactly fulfilled; hence, it should be seen as a spatial average over the wire length.
Figure 9.3b illustrates the relative importance of the other heat transfer mechanisms through the size of the arrows: (1) Radiation losses, keeping in mind that the hot wire radiates only about 10% as much as a blackbody, are safely negligible; they usually account for less than 0.1% of the convective losses. (2) Free/natural convection or so-called buoyancy effects are safely negligible as well compared to forced convection under common operational conditions, except at very low velocities (cf. "Calibrations for low velocities" section).* (3) Heat conduction to the prongs, which act as heat sinks (since their cross-sectional area is commonly at least 2 orders of magnitude larger than that of the sensing hot wire), on the other hand, is small but generally not negligible and is thought to be accounted for in the calibration (cf. Section 9.4). Its importance will reappear later in conjunction with the study of wall turbulence (cf. "Temporal and spatial resolution" section), but for now we will continue to consider (4) forced convection as the primary source of heat losses.
In order to express the heat transfer in nondimensional quantities, the convective heat transfer coefficient h is replaced with the Nusselt number:

Nu = hD/kf,    (9.3)

with kf being the thermal conductivity of the fluid. The Nusselt number, and hence forced convection, depends on almost every possible parameter of the fluid, material, and flow properties [35]:

Nu = f(Re, Pr, Ma, Gr, Kn, L/D, aT, γ, θ, …),    (9.4)

that is, it depends on

• Flow influences: Reynolds number, Re = UD/ν,
• Fluid properties:† Prandtl number, Pr = ν/a, with a denoting the thermal diffusivity,

* A criterion ensuring that this is justified is Re > Gr^p, with p = 1/3 [35].
† Fluid properties are evaluated at the film temperature, that is, the arithmetic mean of the ambient and wire temperatures.

• Compressibility effects: Mach number, Ma = U/c, with c denoting the speed of sound,
• Buoyancy effects: Grashof number, Gr = gβΔTD³/ν², with g and β denoting the gravitational acceleration and the thermal expansion coefficient,* respectively,
• Molecular effects: Knudsen number, Kn = λ/D, with λ denoting the molecular mean free path,
• The geometry of the hot wire: aspect ratio L/D,
• The temperature overheat ratio: aT,
• The ratio of specific heats of the fluid: γ,
• The angle of the incoming flow: θ.
The last parameter limits the usage of single-wire probes to flows in which the flow is predominantly in one direction. But the angle dependence can also be exploited to measure the flow direction as well as multiple velocity components if slanted and multiwire probes are employed, as, for example, depicted in Figure 9.5b and discussed in the "Multiwire probes" section. The mentioned temperature overheat ratio, on the other hand, is an important parameter governing the operation of the hot wire and is defined in terms of temperatures or, practically more meaningful, in terms of resistances:

aT = (Tw − T0)/T0,  aR = (Rw − R0)/R0,    (9.5)

where the subscript 0 denotes the cold, that is, reference, state. To a good approximation, the resistance of a hot-wire/film sensor (as for most metals) varies linearly with its temperature in the region near room temperature (for Tw − T0 < 250°C) and is given through

Rw = R0[1 + α(Tw − T0)],    (9.6)

where α is the temperature coefficient of electrical resistivity of the wire material, which expresses the relationship between resistance and temperature and is positive for metallic materials. Equation 9.6 is also often the only practicable way to estimate the wire temperature, namely through its resistance, unless a high-resolution thermal camera is available. Utilization of Equation 9.6 also shows that the overheat ratios defined in Equation 9.5 are related through aR = aTαT0. This also indicates that both overheat ratios can easily be confused if not explicitly stated, since, depending on the wire material, αT0 ≈ 1. The overheat ratio is sometimes also defined as Tw/T0 or Rw/R0; hence, caution is advised when the definition is not explicitly stated. Common materials for hot wires are platinum and its alloys as well as tungsten, and their diameters and lengths vary in the ranges of 0.5–10 μm and 0.1–2 mm, respectively.
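As a small worked example of Equations 9.5 and 9.6, the following sketch converts a resistance overheat ratio set on the anemometer into the corresponding wire temperature; α is taken from Table 9.1 for tungsten, while the ambient temperature and cold resistance are assumed values.

```python
# Wire temperature from the resistance overheat ratio (Eqs. 9.5 and 9.6):
#   R_w = R_0 [1 + alpha (T_w - T_0)]  =>  T_w = T_0 + a_R / alpha
alpha = 0.0045   # 1/degC, tungsten (Table 9.1)
T0_C = 20.0      # degC, ambient/reference temperature (assumed)
R0 = 3.5         # ohm, cold resistance of the wire (assumed)

a_R = 0.8                              # resistance overheat ratio
Rw = R0 * (1.0 + a_R)                  # hot (operating) resistance
Tw_C = T0_C + a_R / alpha              # wire temperature in degC
a_T = (Tw_C - T0_C) / (T0_C + 273.15)  # temperature overheat (absolute T0)
print(f"Rw = {Rw:.2f} ohm, Tw = {Tw_C:.0f} degC, a_T = {a_T:.2f}")
```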
Coming back to relation 9.4, it becomes apparent that it is impossible to consider all of these independent variables (as well as those not listed, such as humidity [36,37]) simultaneously. Surely, a number of assumptions need to be made in order to simplify the problem and make it treatable. In particular, the following assumptions can be made without drastically reducing the range of applicability:
• Incompressible flow: Ma < 0.3,
• Free convection ignored: U ≳ 0.2 m/s,
• Wire diameters much larger than the mean free path λ: Kn = λ/D ≪ 1,
• Large length-to-diameter ratios: L/D ≫ 1, making the problem less three dimensional.

* If density variations are due to changes in the temperature only, β can, for ideal gases, be expressed as the inverse of the absolute film temperature.

Further, assuming that γ is constant for a certain application and aligning the wire axis normal to the main stream of the isothermal air flow, the problem reduces to

Nu = f(Re, aT).    (9.7)
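Before proceeding, it is instructive to verify that the assumptions behind Equation 9.7 are indeed satisfied for a typical configuration; the following order-of-magnitude check uses a 5 μm wire in room-temperature air at 10 m/s, with assumed (approximate) fluid properties.

```python
# Order-of-magnitude check of the assumptions leading to Eq. 9.7
# (illustrative values for a 5 um wire in air at U = 10 m/s):
D = 5e-6        # m, wire diameter
U = 10.0        # m/s, cooling velocity
nu = 1.8e-5     # m^2/s, kinematic viscosity of air (approx., film temp.)
c = 343.0       # m/s, speed of sound in air
mfp = 68e-9     # m, molecular mean free path of air (approx.)
g, beta, dT = 9.81, 1.0 / 400.0, 200.0  # buoyancy inputs, T_film ~ 400 K

Re = U * D / nu                     # ~2.8
Ma = U / c                          # ~0.03 < 0.3: incompressible
Kn = mfp / D                        # ~0.014 << 1: continuum regime
Gr = g * beta * dT * D**3 / nu**2   # ~1.9e-6: buoyancy negligible
print(f"Re = {Re:.2f}, Ma = {Ma:.3f}, Kn = {Kn:.3f}, Gr = {Gr:.1e}")
print("Free convection negligible (Re > Gr^(1/3)):", Re > Gr ** (1 / 3))
```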

Although a number of assumptions have been made in order to reduce Equation 9.4 to Equation 9.7, a variety of relations are still available for this reduced problem [35,38], which can be related to

Nu = [A″(aT) + B″(aT) Re^n](1 + aT/2)^m,    (9.8)

where the temperature loading factor m is O(0.1), which reduces the temperature dependence, and A″ and B″ are temperature-dependent parameters. The cooling velocity, expressed through the Reynolds number, is now nonlinearly related to forced convection, which in turn can be coupled to the electrical heating, namely, by utilizing Equations 9.1 through 9.3, thereby yielding

I²Rw = E²/Rw = {πLkf(Tw − T∞)} Nu,    (9.9)

where the term in brackets can be considered constant for a given application if the hot wire is operated at a constant overheat ratio. Under the aforementioned assumptions, the steady-state balance in Equation 9.9 can furthermore be expressed directly in terms of the cooling velocity by substituting Equation 9.8 and the definition of the Reynolds number, and by incorporating all (case-dependent) constants into the calibration constants A′ and B′:

E² = (A′ + B′U^n)(Tw − T).    (9.10)

If temperature effects are also taken care of (i.e., incorporated into the calibration constants), an expression of the form

E² = A + BU^n,    (9.11)

is obtained, which is widely known as King's law in honor of King [39], who derived a solution for the heat transfer from an infinite cylinder. This relation clearly illustrates the thermoelectric measurement principle of HWA, namely, that the cooling of the electrically heated hot wire is (nonlinearly) related to the voltage across the wire, presuming that the fluid temperature, composition, and pressure are kept constant.* However, since the heat transfer also depends on flow properties such as temperature, density, and composition, among others (see, e.g., relation 9.4), these quantities can in principle also be measured and are therefore all part of thermal anemometry. In particular, fluctuating temperature measurements through so-called cold-wire ("cold," because the current is too small to heat it appreciably) anemometry, which acts as a resistance thermometer, are possible besides velocity measurements and will be discussed in the "Temperature calibration" section. Despite the significant number of assumptions made to arrive at this simplified relation and the availability of empirically determined parameters, the small sensor size and its inherent variations make it necessary to calibrate each hot-wire probe individually to determine a calibration valid for that specific probe. Also, physical properties are often not at hand for the final wire but usually relate to the macroscopic/bulk material.

* The expression put forward by King [39] for an infinitely long cylinder reads Nu = 1/π + √((2/π) Re Pr), and although it is derived from potential flow theory, the exponent n for finite hot-wire sensors is not that far from 1/2, but commonly slightly below that value.

The differences from an exact universal equation are even more emphasized when considering that each probe is part of an anemometer circuit including an electronic bridge, amplifier, filter, and signal conditioner. Hence, an individual calibration that takes all these factors into account is required and will be discussed in Section 9.4.
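As an illustration of how Equation 9.11 is used in practice, the sketch below fits King's law to a set of calibration points by nonlinear least squares and then inverts it to convert voltages into velocities. The calibration data are invented, and treating n as a free fit parameter (rather than fixing n ≈ 0.45) is one common choice among several; the use of scipy is likewise an assumption of convenience.

```python
import numpy as np
from scipy.optimize import curve_fit

# King's law (Eq. 9.11): E^2 = A + B U^n, fitted to calibration pairs (U, E).
def kings_law(U, A, B, n):
    return np.sqrt(A + B * U**n)   # model returns E, not E^2

# Hypothetical calibration data from a velocity sweep (m/s, V):
U_cal = np.array([2.0, 4.0, 6.0, 8.0, 12.0, 16.0, 20.0])
E_cal = np.array([1.52, 1.63, 1.71, 1.77, 1.87, 1.94, 2.00])

(A, B, n), _ = curve_fit(kings_law, U_cal, E_cal, p0=(1.0, 0.5, 0.45))
print(f"A = {A:.3f}, B = {B:.3f}, n = {n:.3f}")

# Inverting the calibration to convert measured voltages to velocities:
def voltage_to_velocity(E):
    return ((E**2 - A) / B) ** (1.0 / n)

print(voltage_to_velocity(np.array([1.60, 1.90])))
```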

Modes of operation

Keeping the aforementioned assumptions, the steady-state balance in Equation 9.9 can be extended to the unsteady energy balance

dQ/dt = cm dT/dt = P(I, T) − W(U, T),    (9.12)

in order to introduce two common modes of operation in thermal anemometry. The internal energy (heat capacity) of the wire, Q = cmT, with c and m denoting the specific heat and mass of the sensor, changes depending on the electrical power supplied (P, as a function of current and temperature) and the heat lost (W, as a function of cooling velocity and temperature) through forced convection. The aim in thermal anemometry is to find an explicit relation for Equation 9.12. The system is, however, underdetermined, with I, T, and U being unknowns. If, however, the current or the sensor temperature is kept constant by means of an electric circuit in which the sensor acts as a leg of a Wheatstone bridge, the velocity (or the temperature) can be related to the varying quantity.* These two solutions are the two common modes of operation, namely, constant temperature anemometry (CTA) and constant current anemometry (CCA). For the former, the temperature of the wire, and in turn its resistance, is kept constant via a differential feedback amplifier, thereby electronically circumventing the thermal inertia and relating the cooling velocity to the current fed into the circuit. Furthermore, it is apparent that in CTA the unsteady term on the left-hand side of Equation 9.12 vanishes and the dynamic response equation becomes identical to the steady (static) response equation, which has already implicitly been assumed when deriving relation 9.9. This also explains why a static calibration is often considered to be sufficient when calibrating hot-wire probes. In the case of CCA, the sensor is supplied with a constant current, and any change in cooling velocity creates a change in resistance and hence voltage. The simplicity of the electrical circuitry for CCA explains why it was the predominant mode of operation until the middle of the last century. The remaining unsteady term in the case of CCA also indicates that thermal inertia limits the frequency response of the hot-wire system, which is the reason why CCA is nowadays mainly used for temperature fluctuation measurements. Contrary to the application of HWA for velocity measurements, when it comes to fluctuating temperature measurements, cold-wire anemometry remains unrivaled. For CTA, on the other hand, the frequency response is essentially determined by the feedback amplifier and not by the time constant of the wire.† For a 5 μm diameter wire as shown in Figure 9.3, the frequency response is around 100 Hz when employed for temperature measurements via a CCA circuit, while it is about 2 orders of magnitude higher when operated in CTA mode at a high overheat ratio for velocity measurements.‡ Consequently, CTA is the preferred mode of operation for velocity fluctuation measurements. A third alternative is the so-called constant voltage anemometer (CVA) [43,44], in which case the voltage across the sensor is maintained constant. Considering the age of CCA and CTA circuits, the CVA is comparatively new and not (yet) often employed. The interested reader is therefore referred to Comte-Bellot [19] and Bailly and Comte-Bellot [21], which provide the only reviews of the technique among the reference literature mentioned in the "Reference literature and content" section.

* Detailed schematics of electric circuits and guides to building HWA systems can be found in Strickert [6] and Perry [7].
† The actual frequency response is, however, still dependent on the hot-wire length-to-diameter ratio, Reynolds number, overheat ratio, and the hot-wire material used, as shown by Li [40,41], and much lower than that of the CTA system alone.
‡ Note that the given values are much lower than those found in data sheets and manuals of commercially available systems or in some of the reference literature mentioned in the "Reference literature and content" section. However, they are in accordance with a recent study by Hutchins et al. [42].
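The bandwidth figures quoted above can be connected to Equation 9.12 by treating the wire in CCA mode as a first-order low-pass system; the sketch below assumes a time constant chosen to give roughly the 100 Hz response mentioned for a 5 μm wire, so all numbers are illustrative.

```python
import numpy as np

# Lumped view of Eq. 9.12 in CCA mode: the thermal inertia term cm dT/dt
# makes the wire respond like a first-order low-pass filter.
tau = 1.6e-3   # s, assumed wire time constant (cutoff 1/(2 pi tau) ~ 100 Hz)

def amplitude_ratio(f):
    """Attenuation of a sinusoidal velocity fluctuation at frequency f (Hz)."""
    return 1.0 / np.sqrt(1.0 + (2.0 * np.pi * f * tau) ** 2)

for f in (10, 100, 1000, 10000):
    print(f"f = {f:5d} Hz: amplitude ratio = {amplitude_ratio(f):.3f}")
```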

9.3 probe design, manufacturing, and repair

Commercial versus in-house repaired/built probes

Literature concerning HWA, as indicated in Section 9.1, is overwhelming. However, when it comes to manufacturing probes and repairing them, the experimentalist is often left alone with tacit knowledge, besides a few notable exceptions [4,8,45–48]. Hot-wire probes are usually purchased commercially, and their repair is often performed by sending them back to the manufacturer. This is expensive both economically and in terms of time, but also limiting when it comes to geometrical and material properties. An in-house repair or even manufacturing station is therefore recommended. As Lomas [8] puts it illustratively, "the time required to replace the sensor on a wire probe [...] may be no longer than the time required to type the invoice and shipping label."* And if even Bradshaw [12] admits that a hot-wire probe can sometimes give "erratic readings for no discoverable reason", it might be necessary to abandon a problematic probe, which might be easier if it has not been purchased.†
A hot-wire probe consists mainly of the hot-wire sensing element, which is either soldered or welded to aerodynamically shaped supports, so-called prongs, that act as electrical conductors and mechanical support, as already anticipated from Figure 9.3. The prongs are electrically insulated from each other by, for example, ceramic tubes (commercially available as thermocouple insulators), or they are embedded in an epoxy housing, in order to place them in a robust steel tube as shown in Figure 9.4a. Epoxy or two-component glues are often used to fix the prongs inside the ceramic tubes. Since the same probe might be used in various geometrically different situations, it is common to build the probe body from various interconnectable pieces. As apparent from Figure 9.4a, the resources required to build a probe body are manageable: hence, building and above all repairing hot-wire probes in-house is recommended.

Hot-wire materials and geometrical constraints

Common materials for the hot wire are platinum (Pt) and its alloys as well as tungsten (a.k.a. wolfram, W), while nickel and nickel–titanium alloys can be used as well [4,50]. Wire diameters were in the range of 15–200 μm [34,39] in the early days of HWA, and not much has changed since the middle of the last century, when diameters down to 2.5 μm were already in use [51]. Today, commercially available and in-house probes are mostly equipped with 2.5 and


FIGURE 9.4 (a) Probe components including prongs embedded in an insulating ceramic tube (also used as a robust spacer for the prongs) and probe extension tube. The final probe body is of boundary-layer type (a side view of a similar probe can be seen in Figure 9.18) with a prong spacing that reduces to 0.25 mm at the tip. The final wire is electrically connected to an anemometer via the prongs and cables. (b) Bending device used to produce reproducible bent prongs for boundary layer measurements. (Photo courtesy of Ferro, M., Experimental study on turbulent pipe flow, MSc thesis, Royal Institute of Technology, Stockholm, Sweden, 2012.)

* Although this statement is 30 years old, nothing has changed in this respect.
† However, this might be an “interesting physiological question” as Bradshaw [12] phrases it.

Table 9.1 Physical properties of common hot-wire materials

Material       Resistivity, Ξ    Temp. coeff., α   Density, ρ     Spec. heat, c   Therm. cond., k   Strength
               (Ω m × 10⁻⁸)      (°C⁻¹)            (10³ kg/m³)    (J/(kg °C))     (W/(m °C))        (N/cm²)
W              6                 0.0045            19.3           140             170               250,000
Pt             11                0.0039            21.5           130             70                35,000
Pt–Rh 90/10    19                0.0017            19.9           150             40                70,000
Ni             7                 0.0056            8.9            440             90                65,000

Values are taken from References 4, 9, and 19 and are valid for ambient conditions and macroscopic quantities. Note that the exact value depends on whether the wire is hardened or annealed and on how much "tormenting" it has undergone during the manufacturing of the final product. The values given here should therefore only be used for design choices.

5 μm diameter wires, but diameters down to 2.5 and 0.5 μm have also been used in the case of tungsten and platinum (alloys), respectively [46]. The reason why platinum wires are available in much smaller diameters than tungsten wires is that the former are made by the Wollaston process, that is, the wire of interest is covered by a thick sheath of silver and then drawn to a smaller diameter. The composite Wollaston wire is also shipped in this form, and the protecting silver sheath is only removed once the delicate platinum wire needs to be used. In the case of platinum wires, pure platinum but more often alloys with iridium or rhodium (which increase the tensile strength and have a high oxidation temperature) are used; these alloys also have a lower thermal conductivity (thereby limiting end losses to the prongs), as apparent from Table 9.1. For further details on material properties, the reader is referred to fine wire manufacturers or Sandborn [4]. If even smaller wire diameters are required (the need for such probes will become apparent in the "Temporal and spatial resolution" section), use can be made of microelectromechanical system (MEMS) manufacturing methods. Although not cylindrical, such techniques have been used to build the so-called nanoscale thermal anemometry probes (NSTAP), which recently pushed the thickness (of the frontal area) of the sensing element down to 0.1 μm [52,53].
The sensing length of the hot wire is usually several hundreds of diameters long in order to ensure that most of the heat losses are due to the velocity-dependent convection and not due to conductive losses to the prongs (cf. References 19,50 for quantitative estimations of the importance of the latter effect). However, this design rule competes with the desire to obtain a local velocity measurement. A lower limit of L/D = 200 is often mentioned in the literature, based on the work by Ligrani and Bradshaw [54]. In particular, with respect to wall-bounded turbulence measurements or small-scale turbulence, this design rule is found to be violated more and more often, as will be discussed in the "Temporal and spatial resolution" section. A compromise is hence to decrease the wire diameter and thereby the length of the wire. Fine wires are, however, fragile and, additionally, they appear to be more prone to drift. This problem will be discussed in detail in the "Preaging, aging, and drift" and the "Corrections for temperature fluctuations and drift" sections.

Wire treatment: Etching versus plating

Besides the various wire materials, also the way the wire is attached to the prongs is part of the design criteria, as discussed in the next paragraph. Since the largest aerodynamic disturbances are due to the prongs themselves and not the stem [55], it is also common to have a wide spacing between the prongs and employ so-called stubbed wires, which means that the hot wire has an inactive part close to its ends, namely, the stubs. This can be accomplished either by etching the silver sheath (left from the Wollaston process) only partially away from the platinum wire or by plating tungsten wires at their ends. Plating materials (e.g., copper or gold), and also silver in the case of Wollaston wires, have a high electrical conductivity (i.e., low electrical resistivity), which helps to reduce the end losses as well as to diminish their impact on the sensor response. Hence, if end-conduction effects are to be reduced, the active wire should be as long as possible, and low thermal and high electrical conductivity should be sought in the wire material. Among the materials listed in Table 9.1, platinum–10%

[Figure 9.5 panel annotations: wires with D = 1.3 μm at L/D = 900 and L/D = 600; scale bar 0.5 mm.]

FIGURE 9.5 (See color insert.) (a) Photograph of the soldering process for the boundary-layer probe depicted in Figure 9.4a (0.25 mm probe tip spacing) including the crocodile clamp holding a Wollaston wire (30 μm in diameter) and the tip of a soldering iron. Since the actual hot wire of 1.2 μm diameter is not visible in the picture, the right inset shows a 5 μm wire soldered to a different boundary layer probe with a spacing of 1.5 mm. (Photo courtesy of Ferro, M., Experimental study on turbulent pipe flow, MSc thesis, Royal Institute of Technology, Stockholm, Sweden, 2012.) The left inset shows the result of using a capillary acid bubble supported by an electrical current (fed through the crocodile clamp) to produce a partially etched Wollaston wire. (b) Combined X-wire and cold-wire probe with close-up of probe and wire constellation. The picture was taken several months after the performed measurements, which explains the traces of corrosion on the prongs. All wires are soldered to the tips of the prongs, which is not apparent from the 2D microscopic picture. (With kind permission from Springer Science+Business Media: Flow Turbul. Combust., An experimental study of the near-field mixing characteristics of a swirling jet, 80, 2008, 323, Örlü, R. and Alfredsson, P.H.)

rhodium (Pt–Rh 90/10) seems to be the most suitable for this task. If the entire wire should act as the active sensing element, as in the case of the wires shown in Figure 9.5b, the prongs need to be tapered even more, as in the probe shown in the same figure, since the spacing between the prongs is now equivalent to the active wire length.
In the case of Wollaston wires, the etching is performed with an acid that does not affect the platinum core. Nitric acid is often used for this removal (see, e.g., Reference 56). A few minutes at high concentrations (>33% in the case of nitric acid) are often sufficient to set the platinum wire free. At lower concentrations, a small electrical current can be passed through the wire to accelerate the process. It is generally advisable to use low concentrations of acid, not only for health reasons but also to protect the workplace and tools from corrosion. If instead only the middle part of the wire needs to be etched, a small jet or capillary bubble of etching fluid supported by an electrical current can be used, as also shown in the inset of Figure 9.5a. Sketches and descriptions of such devices can be found in the literature [4,8,47], and the result is also visible in the inset of Figure 9.5. Practical guidelines to plate tungsten wires can be found in Lomas [8].

The prongs–wire connection: Soldering versus welding

Prongs can be fabricated from sewing needles or jeweller's broaches, or even piano/music wires, which are readily available in a variety of diameters. Diameters in the range of 0.2–0.5 mm are common, depending on the spacing of the prongs from each other and the aerodynamic force they need to withstand. These supports may significantly distort the flow field and introduce errors into the measurements. In fact, the largest aerodynamic disturbances are due to the prongs and not the probe body (i.e., the ceramic and/or metal tubes) [55]. The spacing of the prongs should therefore be at least 10 times larger than their diameter [51]. This is, however, not always desirable, and to reduce these effects, the support needles should be long, thin, and tapered. While ultrafine sandpaper can be used to taper the tips of the prongs, repeated dipping into an acid bath (supported by an electrical current) can produce tapered prongs with smooth surfaces in a repeatable manner. The platinum or tungsten wire can be either soft soldered (see Figure 9.5a) or spot welded (see Figure 9.6a), respectively, to the prongs, while for applications at high temperatures hard soldering with silver can be used. Tungsten wires can also be soldered if a good conductor is deposited onto the wire, for example, by copper-plating it. For spot welding, the discharge of a capacitor through a silver or copper electrode can be used.


FIGURE 9.6 (See color insert.) (a) In-house built hot-wire welding station (Fluid Physics Laboratory, KTH Mechanics, Stockholm, Sweden) including manipulators for the electrode, wire dispenser, and the probe; the spot welder is a commercial product. The probe manipulator can be rotated to both rotate the probe and change its incident angle relative to the electrode, which is required for slanted single-wire and multiwire probes. (b) Combined hot-wire/cold-wire probe with incorporated Pitot tubes for in situ calibration embedded in a pipe flow setup. (Reprinted from Flow Meas. Instrum., 26, Laurantzon, F., Tillmark, N., Örlü, R., and Alfredsson, P.H., A flow facility for the characterization of pulsatile flows, 10–17, Copyright 2012, with permission from Elsevier.)

Sketches and guidelines to build such hot-wire spot-welding stations can be found in the literature [6,51,58,59]. Welding is in particular preferred when the probe is to be used in flows with high stagnation temperatures, where the solder may otherwise melt. Generally, welded tungsten probes have a longer "lifetime", explaining why most commercial probes are of this type. On the other hand, drift problems (cf. "Preaging, aging, and drift" and "Corrections for temperature fluctuations and drift" sections) appear to be more common with tungsten wires [7].
Once the prongs are tapered and cleaned with, for example, acetone to remove residue from the acid, the tips of the prongs can be covered with soldering tin, upon which the hot wire is aligned normal to the probe axis. When the wire is aligned via a micromanipulator as shown in Figure 9.5a, a good mechanical and electrical contact can be obtained by touching the soldering iron slightly away from the tip, which will make the melted tin engulf the attached wire. Repairing a broken wire, including cleaning the prongs, covering them with tin, etching and aligning the wire, as well as the final soldering process, can all be mastered within 30 min by a trained user. When wires of smaller diameters are used, the process becomes more cumbersome, since breathing or even the heat of the light (from the microscope) might cause the etched wire to hover in the air. It is therefore recommended to etch a section of wire as short as possible (say, twice the distance of the prong spacing) in order to limit the flexibility of the wire. Alternatively, when a stubbed probe is desired, the Wollaston wire can directly be soldered to the prongs (which then do not have to be as thin as mentioned earlier) and a central part of the silver sheath is then removed as described in the "Wire treatment: Etching versus plating" section. Whenever acid is used, it is important to clean the wire with acetone to avoid quick corrosion, and above all it should be avoided that acid runs along the prongs into the probe body, thereby causing an internal short. When building probe bodies, one should therefore always monitor the resistance of the open circuit, for example, by connecting both probe ends to an ohmmeter and ensuring that the resistance is higher than several MΩ.
For harsh flow environments, such as high-speed flows or where strong pulsations occur, welded tungsten wires are preferred. These wires are directly available in their final form, that is, without a protecting sheath. The advantage compared to soldering platinum wires is that

one can avoid working with acids (requiring safety precautions and ventilation). A hot-wire welding station with its components is shown in Figure 9.6a, depicting the welding device, electrode, wire dispenser, and rotatable probe holder (for multiwire probe alignments). Since the electrode needs to exert some pressure on the prongs when spot welding, the prongs are larger in diameter and exhibit a flat area, as apparent from the hot-/cold-wire probe shown in the inset of Figure 9.6b. Thicker wires can also be employed when higher mechanical strength is required, which, however, reduces the frequency response.*

Preaging, aging, and drift

Having decided on a prong/wire constellation and on how to mount the wire on the prongs, the wire will undergo some aging process before reaching a sufficiently stable condition. This is usually done through preaging, where the wire is operated at a high overheat ratio for several hours. The experience of various researchers differs, but half a day seems to be a healthy balance between documented values [7,8] and the authors' experience. Alternatively, a small current could be passed through the wire for less than a minute directly after the welding or soldering procedure until the wire can be observed to glow under a microscope. In particular, for compressible flow measurements, where several calibrations are needed for velocity and density (or mass-flow rate) as well as temperature, it is crucial to establish that the wire has aged sufficiently prior to performing a number of cumbersome calibrations. This can, for example, easily be done by monitoring the voltage read from the HWA system during the aging process or by checking the cold resistance from time to time.
While most wires break due to improper handling, sensor burning is also not too uncommon; in this case, the sensor looks intact but displays, upon careful inspection, a small missing section in the middle of the sensor. A too high wire temperature, due to improper resistance readings, deposits on the wire, etc., is often the cause. Dust, droplets of water/oil (attracted by static electricity), or any residue from the soldering/welding process such as alcohol and acetone increases the sensor diameter and above all its resistance. This problem might be emphasized for cold wires, where the wire is not able to evaporate the particles. To ensure that the probe is not contaminated/fouled, it is customary to come back to the initial measurement point (e.g., when traversing through a jet or boundary layer) and ensure that no drift (due to whatever reason) has taken place. If the resistance of a wire has changed due to contamination or displays a strange behavior, it might help to clean it in a solvent (such as acetone), for example, with the help of an ultrasonic agitator (thereby avoiding shaking the probe). The lifetime of a probe, and in particular the wire itself, depends on many factors and is difficult to predict. Nonetheless, increased sensor aging might be one indication that the end of its life is near. It is therefore advisable to leave the wire on "standby" when not in use and to keep the overheat ratio not too high (keeping it, however, high enough to have a sufficiently high signal-to-noise ratio, high velocity sensitivity, and low temperature dependence; cf. "Temperature calibration" section). The only cure for drift, which "is the curse of hot-wire [...] anemometry" [12], is hence a "clean environment and repeated calibrations" [7]. Ideally, hot-wire measurements that display signs of drift should be disregarded, the wire should be recalibrated, and the measurement should be repeated until the pre- and postcalibration collapse on each other within the uncertainties of the calibration procedure. This might, however, not always be possible in case the experiments are performed in a campaign with a tight schedule. We will therefore revisit this topic in the "Corrections for temperature fluctuations and drift" section to present some workarounds.

9.4 Calibration and its relations

As stated in Section 9.2, the calibration of a hot wire expresses the thermoelectric measurement
principle in a quantitative manner. Additionally, it is deemed to incorporate a number of effects
that were assumed to be negligible when deriving the calibration relation. To ensure that the

* The time constant in CCA mode at a given velocity is proportional to D^{3/2}, while it is proportional to D^{1/2} when operated in a CTA system [25]; hence, the wire diameter is much more crucial for temperature measurements when it comes to the time constant.

calibration is appropriate, repeatable, and accurate, a number of precautions need to be taken that will be touched upon in the following section, before going over to the actual calibration.

Precautions and presettings

A number of assumptions have been made when deriving the heat balance from which a relation between the voltage signal from the hot wire and the cooling velocity was derived. As illustrated in Figure 9.3b, in particular end-conduction effects due to the finite length of the hot wire can become important; but even when considering forced convection alone, a number of effects listed in Equation 9.4 had been neglected. All of these neglected effects are potential error sources, and a calibration is therefore deemed necessary, instead of employing an empirical relation, to account for these simplifications. Keeping the mentioned simplifications in mind, it also becomes apparent that all environmental conditions that influence the heat transfer (i.e., temperature difference between sensor and fluid, fluid and probe properties, as well as geometry) need to be kept constant between calibration and measurements in order to measure the velocity accurately. If this cannot be ensured, they need to be monitored and accounted for in the post-processing, as will be discussed in the "Temperature calibration" and the "Calibrations for low velocities" sections.
Prior to performing the calibration, the cold resistance of the hot wire needs to be determined in order to set the desired overheat ratio (Equation 9.5) and to tune the dynamic response of the anemometer under conditions that cover the range in which the probe will be operated. The resistance overheat ratio is a user-dependent parameter that sets the velocity/temperature sensitivity (cf. "Temperature calibration" section), and it is recommended to keep it above 50% (up to 100% in air) in order to reduce the sensitivity to temperature variations and enhance the sensitivity to velocity variations. Higher values should be used with caution in order to avoid oxidation in the case of tungsten wires or weakening of the wire in general. Lower values can be necessary in multiwire probes to reduce cross talk due to the interaction of thermal wakes. In case a cold wire is incorporated into a multiwire hot-wire probe, this is even more crucial, since the thermal wake of an upstream hot wire might influence the temperature reading of the cold wire. The design of the hot-/cold-wire probe shown in Figure 9.5b therefore followed the recommendation to place the cold wire at least 150 wire diameters upstream of the hot wires [61,62].
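
As a concrete illustration of these presettings, the following minimal sketch (our own, not tied to any particular anemometer software) converts a measured cold resistance and a chosen resistance overheat ratio into the hot resistance to be set on the bridge and the resulting mean wire temperature, assuming the definitions a_R = (R_w − R_0)/R_0 (Equation 9.5) and R_w = R_0[1 + α(T_w − T_0)] (Equation 9.6) with the tabulated α for tungsten from Table 9.1:

```python
# Minimal sketch: overheat set-point calculation for a CTA bridge.
# Assumes Eq. 9.5, a_R = (R_w - R_0)/R_0, and Eq. 9.6, R_w = R_0*[1 + alpha*(T_w - T_0)].

alpha_W = 0.0045   # temperature coefficient of resistivity for tungsten (1/degC), Table 9.1
R0 = 3.5           # measured cold resistance at T0 (ohm); example value
T0 = 20.0          # fluid temperature during the cold-resistance reading (degC)
a_R = 0.8          # chosen resistance overheat ratio (80%, cf. Figure 9.8)

R_w = R0 * (1.0 + a_R)      # hot resistance to set on the anemometer bridge
T_w = T0 + a_R / alpha_W    # resulting mean wire temperature from Eq. 9.6

print(f"Hot resistance: {R_w:.2f} ohm, wire temperature: {T_w:.0f} degC")
```

Note that the documented α may differ from the actual value of the mounted wire (cf. the "Temperature calibration" section), so the computed wire temperature is only indicative.
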
In principle, the velocity and temperature effects on the dynamics of the hot-wire anemometer system could be investigated if velocity and temperature fluctuations of known amplitude over a wide range of frequencies could be generated. Alternatively, the probe could be moved or oscillated at high frequencies. However, these techniques are of limited use because their amplitudes and frequencies cannot be accurately varied over a wide range and, above all, such tests are often not practical; in fact, only a few have bothered to take this cumbersome path (e.g., Perry [7]). Instead, an electrical test is commonly preferred to simulate a velocity field perturbation, for example, through a square-wave or pulse response, and thereby optimize the system. Although various rules of thumb are given by HWA manufacturers to determine the frequency response, it is important to recall that these are dependent on the test signal and yield merely rough estimates. Hence, it is advised to follow the specific manufacturers' guidelines when tuning the anemometer system. The square-wave or pulse-response test should therefore only be seen as a means to establish a stable anemometer and not as a way to determine the actual frequency response. It is recommended to perform this test at the highest expected velocity once the overheat ratio is set, since the system might otherwise be underdamped. Overdamped systems, on the other hand, tend to introduce nonlinear errors, which might become significant in higher-order statistics [63]. Similarly, it is crucial to tune the system with the cabling that is going to be used in the measurements, since the capacitance or inductance in long cables or other elements between the wire and the anemometry circuit will affect the frequency response, in addition to the operating temperature. Once the system is tuned, an analog low-pass filter should be set prior to acquisition in order to reduce electrical noise in the sampled signal and avoid aliasing of energy that could not be resolved. Although the analog input signal resolution of analog-to-digital (A/D) converters is nowadays 16 bits or higher, it is still a good habit to exploit the full range of the A/D card in order to avoid quantization errors, that is, to set an appropriate gain and DC offset to the signal from the anemometer to match the range of the A/D card.

Inspection of the square-wave or pulse response on an oscilloscope should also be used to check the voltage signal from the anemometer. Under no-flow conditions and in a quiescent environment, it is easy to detect electronic oscillations that might be picked up from nearby instruments, be it the oscilloscope itself or even the light source connected to the flow facility. Compact fluorescent lamps might emit not only electric fields in the low-frequency range (distribution frequency and its harmonics) but also high-frequency electromagnetic fields in the range of 30–60 kHz. These frequencies differ between different types of lamps, and it is easy to check whether electromagnetic radiation pollutes the hot-wire readings by simply switching them off or moving them away while observing the signal on an oscilloscope. Often, the severity of these influences is ignored, and special care should be taken prior to a new experiment. Whenever a new probe body and/or traversing system is installed or one exceeds the usual velocity range, one should also ensure that no mechanical vibrations occur, which might pollute the measured signal. Such vibrations occur at discrete frequencies and can easily be detected through inspection of the spectra of the measured signal. Alternatively, laser distance meters with high spatial and temporal resolution could be employed to directly measure the oscillations induced at the tip of the prongs. Last but not least, one should recall that the sensor should be calibrated just prior to and after the measurement (yielding the pre- and postcalibration for a specific measurement) to ensure that no drift has occurred. It is not uncommon that a measurement run has to be disregarded if the pre- and postcalibration do not match. Alternatively, if such a luxury is not possible, correction schemes might be helpful in this respect as well (cf. "Corrections for temperature fluctuations and drift" section).

Single-wire probes

We have seen in Section 9.2 that the response equation of an electrically heated hot wire connected to an HWA system can, under certain assumptions, simply be related to the velocity that cools the wire (i.e., the effective cooling velocity) through a simple power law of the form (repeating Equation 9.11)

E^2 = A + BU^n,

which is known as King's law. Although A, B, and n are often denoted as constants, it should be recalled (cf. Section 9.2) that they account not only for wire/probe properties but also for anemometer settings, such as the overheat ratio. A similar note of caution is valid for any calibration relation. For the time being, we will assume that the assumptions made are justified and consider the steps required to calibrate a single hot-wire probe.
One distinguishes between in situ and ex situ calibrations, where the former is preferred since the disturbances caused by the probe and its holder will be the same during calibration and the measurements. This is particularly recommended if measurements are going to be performed in a wind tunnel or in a jet emanating from a high-contraction-ratio nozzle, where the probe can be placed in the free stream or in the potential (and thermal) core, respectively. While in the former case the probe is calibrated against a Pitot-static, that is, Prandtl, tube placed close to the probe, in the latter the probe can be calibrated directly via Bernoulli's theorem if the contraction ratio is high enough. On the other hand, this might not always be feasible, since the flow field under investigation might not be homogeneous or of low turbulence, or might even be unknown. In that case, the probe needs to be calibrated ex situ in a calibration facility (Figure 9.7). It might, however, be necessary to perform an in situ calibration even if the flow is not suitable for it, for example, when calibrating large probe arrays. Such a method has been described in Tutkun et al. [64] for their 143-wire rake. Having taken the precautions (e.g., preaging) and presettings (overheat ratio and bridge stability) mentioned in Section 9.3 into account, the hot wire is aligned with the main flow direction. Both in the stagnation chamber of the jet facility and in the stagnation zone of the Pitot-static tube, the velocity stagnates, that is, it is close to 0, which reduces Bernoulli's equation to

U = \sqrt{\frac{2\,\Delta p}{\rho}}, \qquad \rho = \frac{p_{\mathrm{atm}}}{RT}, \qquad (9.13)

FIGURE 9.7 Calibration nozzle with fully automated fan and angular calibration device. (Courtesy of Julie Vernet, Fluid Physics Laboratory, KTH Mechanics, Stockholm, Sweden.)

where Δp is the pressure difference between the total and static pressure readings from the Pitot-static tube, or between the static pressure in the stagnation chamber and the ambient pressure in the case of the calibration jet facility. With the atmospheric pressure p_atm read from a barometer and the knowledge of the temperature, the density is known through the ideal gas law, with the specific gas constant R for dry air being 287 J/(kg K).
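
A minimal sketch of this conversion step, with placeholder numbers, is given below; it turns a differential pressure reading and the ambient conditions into the calibration velocity via Equation 9.13 and the ideal gas law:

```python
import math

R_AIR = 287.0  # specific gas constant for dry air, J/(kg K)

def calibration_velocity(dp, p_atm, T):
    """Velocity from a Pitot-static (Prandtl) tube reading via Eq. 9.13.

    dp    : differential pressure, total minus static (Pa)
    p_atm : barometric pressure (Pa)
    T     : fluid temperature (K)
    """
    rho = p_atm / (R_AIR * T)         # ideal gas law
    return math.sqrt(2.0 * dp / rho)  # incompressible Bernoulli relation

# Example: 2 Pa corresponds to roughly the 1.8 m/s lower limit quoted below.
print(calibration_velocity(dp=2.0, p_atm=101325.0, T=293.15))  # ~1.8 m/s
```
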
Figure 9.8a illustrates the result of two independent calibrations for a 5 μm tungsten wire of 1 mm length operated in CTA mode. Despite the various assumptions made to arrive at Equation 9.11, it appears as if King's law provides a simple and accurate representation of the calibration points. The calibration points go down to 1.8 m/s, which corresponds to a pressure difference of around 2 Pa. Such a lower limit is common in many circumstances, either because the wind tunnel cannot be operated under steady-state conditions (be it velocity-wise or temperature-wise) or because of limitations of the differential pressure transducer at such low differential pressures. Similarly, corrections might become necessary for the employed pressure probes or taps at these low velocities [19,65].
In case lower velocities need to be resolved, accurate calibration points at lower velocities need to be measured. This is, however, often not practically possible, and to circumvent this problem one often employs the voltage the hot-wire anemometer reads at zero velocity, E_0, in order to provide a physical bound. Considering King's law, one might expect that this value might be
[Figure 9.8 plots: (a) E (V) versus U (m/s); (b) (E (V))^2 versus (U (m/s))^n, with the intercept E_0^2, the constants A and B (slope), and the increments ΔE_0^2 and ΔU^n indicated; inset: close-up near U = 0 showing E_0 and A^{1/2}.]

FIGURE 9.8 Single hot-wire calibration for a 5 μm tungsten wire of 1 mm length operated in CTA mode with a resistance overheat ratio of 80%, performed within the velocity range of 1.8–40 m/s with 18 (circles) and 28 (squares) calibration points. The obtained calibration constants according to Equation 9.11 are A = 2.58, B = 1.28 (corresponding to the slope in the E^2–U^n plot), and n = 0.40 (black solid line), while those of the modified King's law [66] are k_1 = 1.04, k_2 = 2.41, n = 0.44, and E_0 = 1.82. Plotted in (a) conventional and (b) alternative representation.

identical to the square root of A; however, as apparent from Figure 9.8b, A is slightly lower than E_0^2, generally close to 0.8 E_0^2 [9]. Buoyancy effects, that is, free convection, which was assumed to be negligible when deriving King's law in Section 9.2, are obviously not negligible at such low velocities. For the probe and the operational conditions employed here, 0.2 m/s seems to indicate the demarcation line below which free-convection effects are not negligible and higher-order calibration relations are necessary to describe the relation between the hot-wire voltage and the cooling velocity. As apparent from the definition of the Grashof number (cf. Section 9.2), the effect of free convection can further be reduced by using wires with smaller diameters and operating the hot wire at lower overheat ratios. A simple extension of King's law to account for free-convection effects is, for example, proposed by Johansson and Alfredsson [66] and reads

U = k_1 \left(E^2 - E_0^2\right)^{1/n} + k_2 \left(E - E_0\right)^{1/2}, \qquad (9.14)

where k_1, k_2, and n are again calibration constants. While the first term of Equation 9.14 is related to the classical King's law, Equation 9.11, the second term accounts for the effect of natural convection, which becomes important at low velocities. This relation is also shown in Figure 9.8. It should be noted that the assumptions made to arrive at relation 9.13, both in the case of the calibration jet and the Pitot-static tube, are not justified toward zero velocity. This problem will be discussed further in the "Calibrations for low velocities" section. In cases
where a wider velocity range and/or low velocities should accurately be represented, a polynomial relation of the form

U = \sum_{n=0}^{N} A_n E^n \quad \mathrm{or} \quad U = \sum_{n=0}^{N} A_n \left(E - E_0\right)^n, \qquad (9.15)

up to third or fourth order is preferred and nowadays very common [67]. In case measurements will be made below the lowest measurable calibration point, it is crucial to include E_0 in the calibration relation, as will be discussed in the "Calibrations for low velocities" section.
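
To make the fitting step concrete, the sketch below fits King's law (Equation 9.11) to a set of velocity–voltage pairs with a standard nonlinear least-squares routine and then inverts it to convert measured voltages into velocities; the calibration "data" here are synthetic stand-ins generated from the constants quoted in Figure 9.8, not measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def kings_law(U, A, B, n):
    """King's law, Eq. 9.11: E^2 = A + B*U^n, returned as E."""
    return np.sqrt(A + B * U**n)

# Synthetic calibration pairs built from the constants of Figure 9.8
# (A = 2.58, B = 1.28, n = 0.40); in practice these come via Eq. 9.13.
U_cal = np.linspace(1.8, 40.0, 18)
E_cal = kings_law(U_cal, 2.58, 1.28, 0.40)

(A, B, n), _ = curve_fit(kings_law, U_cal, E_cal, p0=(2.0, 1.0, 0.5))

# Converting measured voltages to velocities by inverting King's law:
E_meas = np.array([2.1, 2.3, 2.62])
U_meas = ((E_meas**2 - A) / B) ** (1.0 / n)
print(np.round(U_meas, 1))  # ~2.4, 6.5, and 20.5 m/s for this wire
```
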
Nevertheless, there are situations where King's law or its modified version is preferred over high-order polynomial fits. Such situations include complex measurement campaigns where a complete set of calibration curves for velocity, density, temperature, and composition might become necessary, such as in compressible flow. Similarly, in some cases, ex situ calibrations in a well-defined flow are not possible and only very few velocity–voltage pairs can be obtained. It might in such cases be advantageous to utilize low-order relations that preserve the "shape" of the physical calibration relation, while anticipating large errors attached to the velocity readings. Furthermore, they are useful when assessing errors or attempting to correct for various error sources that are difficult to determine experimentally, as will be exemplified in the "Temperature calibration" and the "Corrections for temperature fluctuations and drift" sections, where the effects of temperature drifts and fluctuations, respectively, will be discussed.
For higher velocities, where the assumption of incompressible flow is no longer valid, the velocity U needs to be replaced by the mass-flow rate ρU in the calibration relations, that is, Equations 9.11 and 9.15, since it is the latter that is sensed by the hot wire, as can easily be seen from Equations 9.7 and 9.8 [30,31]. Correspondingly, the temperature in hot-wire relations needs to be replaced by the recovery temperature; this is also the temperature the cold wire would read [49,103]. Finally, Equation 9.13 should be replaced with the compressible Bernoulli equation in order to calibrate the hot-wire readings against the correct velocity read by the Prandtl tube.
So far, a well-defined effective cooling velocity has been implicitly assumed in the preceding sections, that is, it was assumed that the wire axis was normal to the incoming velocity vector. As indicated through Equation 9.4, the hot wire is, however, sensitive to the inclination of the incoming flow. This dependency between the magnitude of the velocity vector and the effective cooling velocity can be described through a cosine law of the form

U_e = \left|\vec{U}\right| \cos\theta, \qquad (9.16)

where θ is the angle between the velocity vector and the normal to the axis of the sensing element. The absolute sign around the velocity vector expresses the inability of the hot wire to detect the flow direction. Hence, any deviation from the direction normal to the wire results in a smaller measured velocity, but it also assures that misalignments of a few degrees will not cause significant deviations. While this relation is valid for an infinite cylinder, it was found to give excellent results for sensor aspect ratios L/D ≳ 600 [68]. However, the velocity tangential to the wire, that is, parallel to the wire axis, also has an effect on the cooling due to the finite length and the prongs; hence, a relation including empirical (i.e., calibration) constants is more suitable. It is common to express the square of the effective cooling velocity U_e in terms of the three velocity components, also known as the Jørgensen [69] relation, that is,

U_e^2 = U_n^2 + h^2 U_b^2 + k^2 U_t^2, \qquad (9.17)

where U_n, U_b, and U_t denote the velocity normal to the sensor and in plane with the prongs, normal to the sensor and to the plane of the prongs (binormal), and tangential to the sensor axis, respectively. The constants h (pitch factor) and k (yaw factor) are experimentally determined weighting factors that depend mostly on the aspect ratio of the sensor but are close to unity and 0.2, respectively, for standard L/D ratios of around 200; the latter decreases to 0 for L/D ≳ 600. The former constant simply states that the hot wire is insensitive to changes of the flow direction in the plane normal to the wire, which is known as the forward–reverse ambiguity of hot wires. The aforementioned constants are useful to estimate the effect of binormal and tangential cooling when utilizing single-wire probes and comparing hot-wire results with numerical simulations or other measurement techniques, for example, when single wires are employed in complex flows, as demonstrated in Figure 9.9, where a straight single-wire probe was employed in a turbulent pipe flow downstream of a 90° pipe bend [70]. The wire was thereby vertically aligned (with reference to Figure 9.9a), and as apparent from the vector field, the binormal velocity component (along the horizontal axis) is not negligible and explains the difference observed when comparing the results with those from stereoscopic PIV measurements, as shown in Figure 9.9b. Utilizing relation 9.17 by Jørgensen [69] with the mentioned constants on the PIV data, the effective cooling velocity corresponds much more closely to what the hot wire is actually measuring. However, to obtain accurate results in flows where the in-plane motion is not negligible, each probe needs to be calibrated against yaw and possibly also pitch angles, as will be discussed in

[Figure 9.9 plots: (a) contour map of W/W_b with in-plane velocity vectors; (b) profiles of W/W_b versus r/R for Re = 24,000 with S = 0 and S = 0.5.]

FIGURE 9.9 Time-averaged velocity field 0.67 pipe diameters downstream of a 90° pipe bend. (a) Contour plot of the streamwise velocity component and vector plot of the in-plane motion (both scaled by the bulk speed). (b) Comparison of the mean streamwise velocity component along the horizontal axis from PIV experiments with hot-wire data without (S = 0) and with an additional swirling motion (S = 0.5). Black lines denote the effective cooling velocity calculated from the PIV data, gray lines the PIV data, and squares the hot-wire data. (Reprinted from Int. J. Heat Fluid Flow, 41, Kalpakli, A. and Örlü, R., Turbulent pipe flow downstream a 90° pipe bend with and without superimposed swirl, 103–111, Copyright 2013, with permission from Elsevier.)

the following. Counterintuitively, it is also interesting to see that a further complication of the flow field by the superposition of a swirling motion does not worsen the reading of the single hot wire, but instead improves it. This is due to the increase in the streamwise velocity. Hence, with respect to the accuracy of single hot-wire probes in complex flows, "complexity" may actually (but does not have to!) improve the results.
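
The comparison underlying Figure 9.9 is easy to reproduce once three-component velocity data (e.g., from PIV or a simulation) are available; the sketch below simply evaluates the Jørgensen relation, Equation 9.17, with the typical factors quoted above (h ≈ 1 and k ≈ 0.2 for L/D ≈ 200), and the sample numbers are arbitrary:

```python
import numpy as np

def effective_cooling_velocity(U_n, U_b, U_t, h=1.0, k=0.2):
    """Jorgensen relation, Eq. 9.17: Ue^2 = Un^2 + h^2*Ub^2 + k^2*Ut^2.

    U_n : component normal to the sensor, in the plane of the prongs
    U_b : binormal component (normal to sensor and prong plane)
    U_t : component tangential to the sensor axis
    h, k: pitch and yaw factors (close to 1 and 0.2 for L/D ~ 200)
    """
    return np.sqrt(U_n**2 + (h * U_b)**2 + (k * U_t)**2)

# Example: a binormal component of 10% of the normal one inflates the
# effective cooling velocity (and hence the single-wire reading) by ~0.5%.
print(effective_cooling_velocity(U_n=10.0, U_b=1.0, U_t=0.0))  # ~10.05
```
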

Multiwire probes

To obtain more than one component of the velocity vector simultaneously, more than one sensor needs to be employed. The most common configuration to measure two components is the so-called X-wire (or cross-wire) configuration shown in Figure 9.5b, which consists of two mirrored slanted wires. In this configuration, the wires are arranged in an X at approximately ±45° to the flow direction, but angles between 30° and 60° are found in the literature. The two planes of the wires are usually separated by the length of the hot wire, but by no less than half of it, in order to avoid thermal cross talk. In the case of near-wall measurements or other flows with steep gradients, this might still be too large (hence there are correction schemes [71–73]), and the X constellation is sometimes replaced with a V formation in which the two slanted wires are arranged parallel to each other. For multiwire (but also slanted single-wire) probes, an angular calibration of each wire with respect to its yaw angle is needed in addition to the velocity calibration described in the previous section. In a calibration facility, such as a jet, this is usually done by placing the probe on a rotating arm while keeping the center of the measurement area or volume unchanged, as shown in Figure 9.7. In wind-tunnel measurements with a multiaxis traversing system, the probe can directly be inclined with respect to the incoming homogeneous free stream.
The result of an X-wire calibration in the potential core of a jet is shown in Figure 9.10a. The measured velocity, the yaw angle, and their corresponding voltages from the two CTA

[Figure 9.10 plots: (a) calibration grid of E_2 (V) versus E_1 (V), spanned by yaw angles from −40° to +40° and velocities from 0.25 to 10.5 m/s; (b) U^{1/2} ((m/s)^{1/2}) versus E_1^2 and E_2^2 (V^2) for yaw angles between −40° and +40°.]

FIGURE 9.10 (a) Example of a calibration plot for an X-wire consisting of 9 yaw angles between ±40° and 13 velocities between 0.25 and 10.5 m/s. (b) Illustrative method for the determination of the velocity vector and yaw angle for a voltage pair E_1 and E_2 using King's law. Note that the voltages are negative, which is merely related to a specific commercial CTA system. Furthermore, n = 1/2 has been chosen for illustrative purposes and does not represent the best fit to the data. (Reprinted from Örlü, R., Experimental study of passive scalar mixing in swirling jet flows, Licentiate (TeknL) thesis, Royal Institute of Technology, Stockholm, Sweden, 2006.)

channels were registered for yaw angles from −40° to 40° in 10° intervals and 13 velocities ranging from 0.25 to 10.5 m/s. To illustrate the computation of the streamwise and azimuthal velocity components, a first-order polynomial based on King's law was utilized. A line corresponding to a measured voltage pair from the two wires (E_1 = −2.1 V, E_2 = −2.2 V) is drawn into the calibration curves for the two hot wires, as illustrated in Figure 9.10b, and the crossing points with the individual calibration curves are determined (see also Reference 75, from which this plot was inspired). Utilizing the "cosine law", one of the wires responds to the sum and the other to the difference of the effective velocities sensed by the respective wires, that is,

U = \frac{U_{e,1} + U_{e,2}}{2}, \qquad V = c_\theta \, \frac{U_{e,2} - U_{e,1}}{2}, \qquad (9.18)

where c_θ denotes the directional coefficient. The effective velocities can be expressed through power laws or polynomial fitting relations as given in Equations 9.11, 9.14, and 9.15. Modifications of such sum-and-difference schemes are available in the literature (see, e.g., Bruun [9]), but Equation 9.18 demonstrates the underlying idea.
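
A sketch of such a sum-and-difference evaluation is given below: each wire's voltage is first converted into an effective velocity through its own King's law fit, after which Equation 9.18 yields the two components. The calibration constants, the directional coefficient, and the voltage pair are placeholders (note also that the signs of real CTA voltages depend on the system, cf. Figure 9.10):

```python
def u_effective(E, A, B, n):
    """Invert King's law, Eq. 9.11, for one wire of the X-probe."""
    return ((E**2 - A) / B) ** (1.0 / n)

def x_wire_decompose(E1, E2, cal1, cal2, c_theta=1.0):
    """Sum-and-difference scheme, Eq. 9.18."""
    Ue1 = u_effective(E1, *cal1)
    Ue2 = u_effective(E2, *cal2)
    U = 0.5 * (Ue1 + Ue2)            # streamwise component
    V = 0.5 * c_theta * (Ue2 - Ue1)  # cross-stream component
    return U, V

# Hypothetical King's law constants (A, B, n) for the two wires:
cal1, cal2 = (2.60, 1.30, 0.45), (2.50, 1.25, 0.45)
print(x_wire_decompose(E1=2.1, E2=2.2, cal1=cal1, cal2=cal2))
```
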
However, the accuracy of such relations becomes critical at low velocities and for small probes with limited length-to-diameter ratios [76]. Furthermore, such methods should be used for small angles (say, θ < 15°) and/or small fluctuation levels. The advantage of such techniques lies in the fact that, once the directional coefficient (which is a geometrical parameter) is determined through a detailed calibration as shown in Figure 9.10a, only the velocity dependence needs to be recalibrated to account for possible drifts.
For high turbulence intensities (e.g., in the near-wall region or the tails of a jet) and/or large flow angles (e.g., complex flows, such as the case shown in Figure 9.9), a look-up table/inversion/matrix method should be used. In this case, the calibration data can be curve fitted using separate high-order polynomials for the sums and differences of E_1 and E_2 related to the velocity vector and the flow angle. If the calibration process is automated as in Figure 9.7, a look-up table method is more reliable and accurate. It can also easily be extended to account for variables such as temperature. For comparisons between various calibration relations for X-wires or more complex wire constellations, the reader is referred to Bruun [9], Vukoslavčević and Petrović [10], Comte-Bellot [19], and van Dijk and Nieuwstadt [77].
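
A minimal look-up-table sketch is given below: the angular calibration grid (voltage pairs at known velocity magnitudes and yaw angles, as in Figure 9.10a) is inverted by interpolating U and α over the (E_1, E_2) plane. The grid here is synthetic, generated with an assumed cosine-law response, so both the response model and all constants are stand-ins:

```python
import numpy as np
from scipy.interpolate import griddata

# Synthetic calibration grid: 9 yaw angles x 13 velocities, as in Figure 9.10a.
alpha = np.deg2rad(np.linspace(-40.0, 40.0, 9))
U_mag = np.linspace(0.25, 10.5, 13)
A_grid, U_grid = np.meshgrid(alpha, U_mag)

# Stand-in response model for the two +-45 deg wires (a real look-up table
# uses the measured voltages instead of this cosine-law surrogate).
E1 = np.sqrt(2.0 + 1.2 * (U_grid * np.cos(A_grid - np.pi / 4)) ** 0.45)
E2 = np.sqrt(2.0 + 1.2 * (U_grid * np.cos(A_grid + np.pi / 4)) ** 0.45)
points = np.column_stack([E1.ravel(), E2.ravel()])

def look_up(e1, e2):
    """Interpolate velocity magnitude and yaw angle for a voltage pair."""
    U = griddata(points, U_grid.ravel(), (e1, e2), method="linear")
    a = griddata(points, A_grid.ravel(), (e1, e2), method="linear")
    return float(U), float(np.rad2deg(a))

# Querying a voltage pair that lies on the grid recovers the grid point:
print(look_up(float(E1[6, 4]), float(E2[6, 4])))  # ~ (5.4 m/s, 0 deg)
```
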

Temperature calibration

Throughout Section 9.2, the importance of temperature effects on the heat balance has been emphasized, as explicitly apparent from Equation 9.9, since the fluid temperature modifies the difference (T_w − T); this effect has either been assumed negligible or been incorporated into the constants, such as into the King's law coefficients in Equation 9.11. This is indeed permissible as long as the temperatures during the calibration of the hot wire and the measurements are kept identical, such as during wind-tunnel measurements, in which case the probe is calibrated in situ and the measurements are performed in the same environment. But as soon as calibrations are performed ex situ or measurements are performed in flows in which the mean temperature changes from location to location or drifts in time (e.g., due to power dissipation), one is faced with the question of how small the temperature drifts or changes between the calibration and measurement have to be in order to ignore them. To answer this question, relation 9.9 can be revisited and considered for the same hot-wire probe operated under the same operational conditions for two instances, namely, when exposed to a fluid medium at the reference temperature T_0 and at another instance where the fluid is at a slightly elevated temperature T. Assuming that T differs only slightly from T_0, in which case fluid properties and temperature-dependent parameters remain unchanged when exposed to the same cooling velocity, and utilizing Equations 9.5 and 9.6, one obtains

E(T_0) = E(T) \left(\frac{T_w - T_0}{T_w - T}\right)^{1/2} = E(T) \left(1 - \frac{T - T_0}{a_R/\alpha}\right)^{-1/2}. \qquad (9.19)

[Figure 9.11 plot: E (V) versus U (m/s) calibration curves; inset: percentage velocity error ΔU (%) versus temperature change ΔT (K).]

FIGURE 9.11 Effect of mean temperature variations on the calibration curve shown in Figure 9.8 for an elevated (gray solid line) and reduced (dashed line) fluid temperature compared to the temperature during calibration, with ΔT = 5 K. The inset shows the percentage error in the velocity reading for a resistance overheat ratio a_R of 0.5 (dashed lines) and 1.0 (solid lines) for 2 m/s (black lines) and 20 m/s (gray lines). The errors for a positive and negative ΔT are nearly identical; hence, only one side is shown representatively.

The same relation can be obtained when keeping track of the temperature dependence of all fluid properties and parameters and assuming a small temperature difference.* Modified [27,78] but also alternative [79] relations are available for larger temperature differences, and a detailed discussion can be found in Bruun [9]. Utilizing this relation, the effect of a ±5 K change in calibration temperature on the velocity–voltage calibration presented in Figure 9.8 is simulated and depicted in Figure 9.11. Considering that the HWA system would read a voltage of 2.62 V when exposed to a 20 m/s fluid stream at the calibration temperature of 20°C, a lower voltage would be read if the probe were now exposed to a 5 K warmer air stream at the same velocity (viz., the intersection point with the gray curve). This is simply because the feedback circuit of the CTA system would require a lower electrical power to keep the wire at T_w if T > T_0. The result would be a seemingly lower velocity when read from the original calibration relation (black solid curve) if temperature changes were not accounted for. The percentage error in the measured velocity due to a temperature drift/change between the calibration and measurements is shown in the same figure for velocities of 2 and 20 m/s and resistance overheat ratios a_R = 0.5 and 1.0. As apparent, even a 1 K change in temperature can give rise to a 3% and 2% error in the velocity for 2 and 20 m/s, respectively, when operated at a_R = 1.0, and this error is doubled when reducing a_R to 0.5. Note that these estimates are for a tungsten wire and can be more than halved when, for example, a Pt–Rh 90/10 wire is used, due to its lower temperature coefficient of resistivity (cf. Table 9.1). High overheat ratios and platinum alloys for the wire material are therefore recommended whenever temperature effects need to be reduced. This simple exercise demonstrates inevitably that the change in temperature between the calibration and the measurements, or the variation of temperature during the measurements, is an (if not the most) important source of error when performing hot-wire velocity measurements.
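
The correction implied by Equation 9.19 is straightforward to apply in post-processing, as the following sketch illustrates: the measured voltages are rescaled to the calibration reference temperature before entering the calibration relation. The numbers are placeholders, and α is the tabulated tungsten value from Table 9.1:

```python
def correct_voltage(E, T, T0, a_R, alpha):
    """Rescale a CTA voltage from fluid temperature T to the calibration
    reference T0 via Eq. 9.19:
        E(T0) = E(T) * (1 - (T - T0) / (a_R / alpha))**(-1/2)
    """
    return E * (1.0 - (T - T0) / (a_R / alpha)) ** -0.5

# Example: tungsten wire (alpha = 0.0045 1/degC) at 80% overheat, reading
# 2.58 V in a stream 5 K warmer than during calibration.
E_at_T0 = correct_voltage(E=2.58, T=25.0, T0=20.0, a_R=0.8, alpha=0.0045)
print(f"{E_at_T0:.3f} V")  # the voltage that would have been read at T0
```
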
A crucial ingredient when utilizing Equation 9.19 is obviously the temperature coefficient of electrical resistivity α, which is an important material property for hot wires (or resistance thermometers). Unfortunately, documented values for α may differ considerably from the actual value, since the former generally correspond to macroscopic quantities and should be taken with caution [19,80]. Hence, if larger temperature changes are expected, say, of the order of 5 K, it is advisable to perform velocity calibrations at 2–3 different temperatures to assess the temperature dependence of the velocity calibration. This can then be used to establish a look-up table or to apply an analytical correction as presented earlier; a detailed

* “Small” is commonly used in this context to justify that temperature differences can be considered as a passive
scalar.

discussion can be found in Bruun [9]. There is also a more direct way to determine α as will
be shown shortly, although for that the hot wire needs to be operated as a cold wire. In either
case, the local mean temperature needs to be measured, for example, through a thermocouple
or if possible with a cold wire.
The strong dependence of the hot-wire readings on temperature changes implies, of course, also that temperature changes and even their fluctuations, as encountered in nonisothermal transient and turbulent flows, can be measured. While there are various alternatives for mean temperature measurements, the utilization of the hot wire as a resistance thermometer offers an unrivaled technique to measure temperature fluctuations. Since these wires are operated at very low currents, which are too small to heat the wire appreciably, they are known as cold wires. To exemplify the sensitivity of a hot wire to velocity and temperature fluctuations, consider Figure 9.12, which, based on Equation 9.10, shows the magnitude of the ratio of the velocity sensitivity

S_U^{CTA} = \frac{\partial E}{\partial U} = \frac{n B U^{n-1}}{2} \left[\frac{R_w \, \Delta T}{A + B U^n}\right]^{1/2}, \qquad (9.20)

and the temperature sensitivity

S_T^{CTA} = \frac{\partial E}{\partial T} = -\frac{1}{2} \left[\frac{R_w \left(A + B U^n\right)}{\Delta T}\right]^{1/2}, \qquad (9.21)

for a hot wire in CTA mode, that is,

\frac{S_U^{CTA}}{S_T^{CTA}} = -\frac{n B \, \Delta T \, U^{n-1}}{A + B U^n}. \qquad (9.22)

As apparent from the figure and Equations 9.20 through 9.22, a high overheat ratio ensures a high velocity sensitivity and a low temperature sensitivity, but the temperature sensitivity of the hot wire becomes larger at higher velocities. This emphasizes the need to keep the temperature constant or to measure it accurately, in particular at higher velocities.
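
The trends in Figure 9.12 follow directly from Equation 9.22; the sketch below evaluates the magnitude of the velocity-to-temperature sensitivity ratio for two overheat ratios, using the King's law constants of Figure 9.8 and taking ΔT = a_R/α for a tungsten wire (our assumption, via Equations 9.5 and 9.6):

```python
import numpy as np

A, B, n = 2.58, 1.28, 0.40   # King's law constants from Figure 9.8
alpha_W = 0.0045             # tungsten, Table 9.1 (1/degC)

U = np.array([2.0, 20.0, 100.0])   # velocities of interest (m/s)
for a_R in (0.5, 1.0):
    dT = a_R / alpha_W             # wire-to-fluid temperature difference
    ratio = n * B * dT * U**(n - 1.0) / (A + B * U**n)  # |S_U/S_T|, Eq. 9.22
    print(a_R, np.round(ratio, 1))

# The ratio drops with velocity: temperature errors weigh more at high speed.
```
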
At extremely low overheat ratios, the sensing element becomes more sensitive to temperature than to velocity fluctuations, and in the limit of a_R → 0, the hot wire becomes a cold wire
[Figure 9.12 plot: magnitude of the sensitivity ratios versus U (m/s) on log–log axes.]

FIGURE 9.12 Variation of the sensitivity ratios for a hot wire operated in constant temperature mode for different resistance overheat ratios (corresponding to 0.5%, 5%, 30%, and 100%, from thin to thick lines). Ratio of velocity to temperature sensitivity, |S_U/S_T| (dashed lines), and vice versa, |S_T/S_U| (solid lines), based on Equation 9.22. The constants A, B, and n were taken from Figure 9.8.

that acts in fact as a resistance temperature sensor, that is, it becomes practically insensitive to velocity fluctuations. Since the feedback loop of a CTA loses its effectiveness at low overheat ratios, temperature measurements are often performed in CCA mode. At low heating currents, the velocity and temperature sensitivities of the CCA can be shown to be proportional to I^3 and I, respectively [9]. Cold wires are usually operated with a constant current of 0.1–1 mA, depending on wire diameter, which ensures that the velocity sensitivity is practically 0, while the temperature sensitivity is constant. The latter result simply implies that the cold-wire reading is linearly related to the change in temperature, which simplifies the calibration and post-processing for temperature measurements. Hence, Equation 9.6 can be utilized by simply exchanging the subscript w with f [9] and can directly be expressed in terms of the CCA voltage

E = I R_f = I R_0 \left[1 + \alpha \left(T_f - T_0\right)\right] = A''' + B''' T_a, \qquad (9.23)

which can also be exploited to deduce the temperature coefficient of electrical resistivity in case temperature corrections need to be applied to velocity measurements by means of the same probe operated in CTA mode at a higher overheat ratio.
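
Since the cold-wire response is linear in temperature (Equation 9.23), its calibration reduces to a first-order polynomial fit, as sketched below with invented voltage–temperature pairs; with the operating current I and the cold resistance R_0 known (both assumed values here), the fitted slope also yields α:

```python
import numpy as np

# Invented cold-wire calibration pairs: CCA voltage vs. fluid temperature.
T_cal = np.array([15.0, 20.0, 25.0, 30.0, 35.0])       # degC
E_cal = np.array([0.520, 0.530, 0.540, 0.550, 0.560])  # V

# Eq. 9.23 is linear, E = A''' + B'''*T, so a first-order fit suffices.
slope, intercept = np.polyfit(T_cal, E_cal, 1)
print(f"E = {intercept:.3f} + {slope:.4f}*T")

# From Eq. 9.23, B''' = I*R0*alpha, so alpha follows once I and R0 are known.
I, R0 = 1.0e-3, 500.0      # 1 mA through a 500 ohm cold wire (assumed)
alpha = slope / (I * R0)
print(f"alpha = {alpha:.4f} 1/degC")  # ~0.004, of the order of Table 9.1
```
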
The mentioned velocity insensitivity of the CCA at low operating currents explains why temperature fluctuation measurements are predominantly performed in this mode of operation. Nonetheless, as shown in Figure 9.12, the temperature sensitivity of the hot wire in CTA mode can, even at high overheat ratios, become significant at higher velocities. This fact is often exploited in compressible flows, where the frequency response of the CCA might be insufficient: once the temperature and velocity sensitivities, that is, their calibration relations, have been determined, operating the sensor at three or more different overheat ratios enables the measurement of time-averaged turbulence statistics [9]. Employing two closely spaced wires, such as in Figure 9.6b, and operating them at two different (but not too low) overheat ratios in CTA mode, the temperature and velocity time series can be obtained. These techniques work particularly well in low-intensity flows with strong temperature signals [81]. Reviews of these so-called multiple (or variable) overheat ratio methods can be found in Bruun [9] and Comte-Bellot [19].

Calibrations for low velocities

The most common way of calibrating hot-wire probes is, as mentioned in the "Single-wire probes" section, to relate the voltage reading from a hot wire to the velocity obtained, for example, through Bernoulli's theorem. However, the inherent inaccuracy of pressure transducers at small differential pressures, corresponding to, say, velocities below 2 m/s, causes problems. Several alternative calibration techniques have therefore been developed: the modified calibration jet [82], the laminar pipe flow method [83], the rotating disk method [84], methods exploiting wall-proximity effects [85,86], a variety of methods utilizing a moving [87–89] or swinging probe in still air [90,91], as well as the vortex-shedding method [86,92–95]. In particular, the vortex-shedding calibration is straightforward to implement and is also described in the classical hot-wire literature [7,9]; due to its inexpensiveness and simple setup, it is described next.

Vortex-shedding calibration

Since the early observation by Strouhal [96] that the frequency of the sound emitted by a wire exposed to wind is linearly related to the wind speed itself, and the proposal by Roshko [97] to exploit this feature to measure the flow velocity, vortex shedding is nowadays widely exploited in vortex flowmeters (see, e.g., Reference 98 and references therein) as well as for the aforementioned calibration of hot-wire probes at low velocities. Following the classical hot-wire literature, it is suggested to employ a circular cylinder and the following relations between the Strouhal number (St = f_{VS} D / U) and the cylinder Reynolds number proposed by Roshko [97]:

St = 0.212 \left(1 - \frac{21.2}{Re}\right), \quad 50 < Re < 150, \qquad (9.24)

St = 0.212 \left(1 - \frac{12.7}{Re}\right), \quad 300 < Re < 2000, \qquad (9.25)

where f_{VS} is the fundamental vortex-shedding frequency. Since the velocity that is to be determined appears in both the Strouhal and Reynolds numbers, it is also common to employ the so-called Roshko number (Ro = St Re = f_{VS} D^2 / ν) [97]. The resulting Ro–Re relation is usually favored over the St–Re relation given in Equations 9.24 and 9.25, due to its linear and explicit nature [9]. The classical literature restricts the method to the laminar, periodic vortex-shedding region, or so-called stable range [86,92–94], described through Equation 9.24.* Since this regime is susceptible to oblique shedding that can alter the St–Re relation [93,100], special attention needs to be paid to the end conditions of the cylinder as well as to the interpretation of the measured frequency spectra [101]. However, as shown in Sattarzadeh et al. [102], a much wider Reynolds-number range can be exploited as well, making the technique more practicable. It is therefore advisable to employ at least two cylinders with different diameters to cover the Reynolds-number range of interest [12].
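
In practice, one measures f_{VS} in the hot-wire signal spectrum and inverts the St–Re relation for the velocity. Since Equation 9.24 is linear in U once rewritten (f_{VS} D = 0.212 U − 0.212 · 21.2 ν / D), the inversion is explicit, as the sketch below shows with example values for the cylinder diameter and the air viscosity:

```python
def velocity_from_shedding(f_vs, D, nu=1.5e-5, st0=0.212, c=21.2):
    """Invert Roshko's relation, Eq. 9.24: St = st0*(1 - c/Re), with
    St = f_vs*D/U and Re = U*D/nu. Rewriting gives
        f_vs*D = st0*U - st0*c*nu/D,
    which is linear in U. Valid for 50 < Re < 150; use c = 12.7 for
    the range 300 < Re < 2000 (Eq. 9.25).
    """
    U = (f_vs * D + st0 * c * nu / D) / st0
    Re = U * D / nu
    return U, Re

# Example: a 1 mm cylinder shedding at 100 Hz in air (nu ~ 1.5e-5 m^2/s).
U, Re = velocity_from_shedding(f_vs=100.0, D=1.0e-3)
print(f"U = {U:.2f} m/s at Re = {Re:.0f}")  # ~0.79 m/s, Re ~ 53 (in range)
```
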
Since the presence of the cylinder will alter the flow field, it is recommended not to place the hot wire directly downstream along its axis, for two reasons: First, harmonics of the fundamental vortex-shedding frequency (i.e., n × f_{VS}, n > 1) might be picked up instead of the fundamental frequency. Second, the velocity deficit (caused by the wake of the cylinder) might still be too significant [102]. Instead, the probe should be positioned slightly off-axis from the cylinder; as studies have shown, the most appropriate location to pick up the fundamental frequency of the vortex shedding and, at the same time, to read the voltage signal that is related to the free-stream velocity is 2–4 cylinder diameters off axis and more than 3 cylinder diameters downstream, as schematically depicted in Figure 9.13a [95,102]. The result of such a vortex-shedding calibration is shown in Figure 9.13, where the premultiplied power-spectral density map for a range of free-stream velocities is shown and compared to a conventional calibration against a Prandtl tube.

Precautions for near-wall measurements
The need for low-velocity calibrations is particularly acute when considering hot-wire
measurements in wall-bounded turbulent flows. The streamwise velocity can instantaneously
reach zero at the edge of the viscous sublayer and can even be negative [103–105]. Due to the
continuous increase in the local turbulence intensity when approaching the wall, measurements
around the near-wall peak of the variance profile (which have attracted considerable interest in
recent years [106,107]) also make it necessary that the calibration not only covers the indicated
mean velocity range measured near the wall (as an example, see, e.g., Reference 108) but also
covers values down to 20% of the local mean value [109] in order to obtain unbiased higher-order
moments,† as can be anticipated when comparing the probability density distribution of the
streamwise velocity component in a turbulent boundary layer obtained through direct numerical
simulations (DNS) and hot-wire measurements at matched conditions, as shown in Figure 9.14.
However, low-velocity calibrations, as described in the "Vortex-shedding calibration" section,
are not always at hand; hence, one often unwillingly extrapolates toward zero velocity
based on the available calibration points. While accurate calibrations are the essence of hot-wire
measurements, the demands have become higher in recent years due to increased scrutiny
and comparison with high-fidelity numerical simulations [111,112]. This restates that "the
main source of uncertainty in the measurements is the calibration of the hot-wire, due to
uncertainties in measuring the calibration velocity and the accuracy of the curve fit" [106].
To demonstrate this, we utilize the calibration data shown in Figure 9.13 and check the influence
of successively omitted low-velocity calibration points. As apparent from Figure 9.15,
the removal of calibration points up to 1.5 m/s yields quite different trends when E0 is not
included in the calibration relation, which is not seldom the case in the available literature.

* For an in-depth discussion on the subject of vortex shedding behind cylinders, the reader is referred to
Zdravkovich [99].
† This concerns mainly third- and higher-order moments as shown in Lenaers et al. [110].
FIGURE 9.13 (a) Schematic of the coordinate system centered around the cylinder of diameter D. The streamwise, vertical,
and spanwise directions are denoted by x, y, and z, respectively. The areas in which the fundamental vortex-shedding
frequency and the undisturbed free-stream velocity are measured are indicated through solid and dashed lines in the x–y plane,
respectively. (b) Premultiplied power-spectral density map for the voltage signal from a hot wire located 3D downstream of
a cylinder (2.5 and 6 cylinder diameters off-axis to obtain fVS and U∞, respectively). The asterisk denotes normalization of the
premultiplied spectral amplitudes to unity in order to visualize the fundamental peaks as well as to ease visualization of
the hot-wire calibration relation. Obtained calibration points for E versus fVS are highlighted by circles, and the solid line is
Equation 9.14 fitted to the data pairs. (c) Calibration plot and magnified view of the low-velocity region. Stars and dashed
line are from a conventional calibration against a Pitot-static tube, while circles and squares are from the vortex-shedding
method with two different cylinder diameters. The lines are fits to the modified King's law, that is, Equation 9.14.

FIGURE 9.14 Contour map of the probability density distribution (PDF) of the inner-scaled
streamwise velocity U+ with contour levels at 0.001, 0.05, 0.35, and 0.85 of the PDF maximum
at Reθ ≈ 2500. Dashed lines represent the outermost, that is, minimum and maximum, velocity
fluctuations, while light and dark lines denote numerical and experimental data, respectively.

FIGURE 9.15 Calibration data from Figure 9.13, where the stars and dashed line refer to calibration
points and the fit through the modified King's law, that is, Equation 9.14, respectively. Gray solid
lines correspond to calibration fits with successively fewer low-velocity calibration points up to
1.5 m/s. Fourth-order polynomial fits (a) excluding E0, (b) including E0, and (c) the modified King's
law. The arrow indicates the direction of increasing minimum velocity included in the calibration.

Being unaware of the aforementioned concerns, namely, that the instantaneous velocity can
fall considerably below the calibration points if only the expected mean velocities are considered,
may yield quite different results for the low-velocity region. Hence, in case accurate low-velocity
calibration points are missing, a formulation such as Equation 9.14, which includes the voltage
at zero velocity (ensuring that temperature effects are taken care of), is preferable, since it
is less flexible and prescribes a physical behavior when approaching E0. Whenever reliable
low-velocity calibration points are at hand, however, a high-order polynomial is recommended.
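
The following sketch contrasts the two fitting strategies. Since Equation 9.14 is not reproduced here, the modified King's law is assumed to take one widely used form, U = k1(E² − E0²)^(1/n) + k2(E − E0)^(1/2); the calibration pairs are invented for illustration only.

```python
# Minimal sketch of the two calibration-fit strategies discussed above, with
# synthetic calibration data and an assumed form of the modified King's law.
import numpy as np
from scipy.optimize import curve_fit

E0 = 1.31                                                   # voltage at U = 0 [V]
E = np.array([1.40, 1.43, 1.48, 1.53, 1.61, 1.68, 1.74])    # calibration voltages
U = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 14.0, 20.0])         # reference velocities

def mod_king(E, k1, k2, n):
    """Assumed modified King's law anchored at E0 (hedged stand-in for Eq. 9.14)."""
    return k1 * (E**2 - E0**2)**(1.0 / n) + k2 * np.sqrt(E - E0)

popt, _ = curve_fit(mod_king, E, U, p0=(11.0, 0.0, 0.45))   # stiff, physical fit
poly = np.polyfit(E, U, 4)                                  # flexible polynomial

# Evaluating both fits at 1.34 V (below the lowest calibration point) shows how
# differently they extrapolate toward E0; the polynomial should only be trusted
# inside [E.min(), E.max()].
print(mod_king(1.34, *popt), np.polyval(poly, 1.34))
```

This is a sketch under stated assumptions, not the fit used for Figure 9.15, but it reproduces the qualitative point: the E0-anchored relation degrades gracefully at low velocity, whereas the polynomial does not.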

9.5 Measurements

Following the flowchart in Figure 9.2, we have, with a planned experiment in mind, selected
or built a hot-wire (or cold-wire) probe (Section 9.3), assured that it has preaged ("Preaging,
aging, and drift" section), connected it to a HWA system, and tuned it with an appropriate
overheat ratio, while keeping in mind the precautions mentioned in the "Precautions and
presettings" section. Exposing the probe to the lowest and highest velocities to be expected and
reading the anemometer (so-called top of the bridge) voltage, we are able to set the DC offset and
gain of the signal-conditioning unit, which is nowadays incorporated in A/D or data acquisition
cards, in order to minimize resolution errors. The in situ or ex situ calibration over the range of
expected velocities can now be performed while keeping track of the relevant ambient conditions.
In the case of slanted single- or multiwire probes, an additional angle calibration is performed as
well, or both angle and velocity dependencies are obtained through a look-up table calibration
("Multiwire probes" section). In case the temperature differs between calibration and
measurements, or is expected to drift during the measurements, the temperature coefficient of
electrical resistance has to be obtained/measured, or the velocity calibration is repeated for
different temperatures in case the temperature difference between calibration and measurements
is expected to exceed a few degrees ("Temperature calibration" section).
If the previous steps were performed in situ, the probe is ready for the actual measurements;
in the case of an ex situ calibration, the probe now needs to be placed into the measurement
position while keeping the same cabling as well as anemometer and bridge settings;
signal conditioner and A/D card settings can be adjusted if needed, as long as track is kept of them.
It goes without saying that extreme caution is advised when dismounting the probe from the
calibration facility and moving it to the actual measurement traverse. It is furthermore crucial

to ensure that the probe is aligned normal to the main flow, which can be achieved either
geometrically or by adjusting the probe angle while checking the anemometer voltage. We are
now ready to set the sampling frequency and sampling time based on the characteristics of the
flow field to be investigated, either through estimates (cf. Chapter 2) or, for example, based on
two measurement points, one where the smallest scales/highest frequencies are to be expected
and one where the largest scales/lowest frequencies are to be expected. Once a long time
series at a high sampling frequency is acquired, a convergence test for the mean and higher-order
statistical moments can be performed to estimate the shortest possible sampling time that still
ensures sufficient convergence up to the statistical moments of interest. From a spectral
analysis, on the other hand, one can further check whether the low-frequency side is sufficiently
resolved (i.e., sufficient sampling time) and furthermore determine where the electrical noise
level defines an appropriate cutoff frequency. An optional analog low-pass filter can then be set
at this frequency, and the sampling frequency is set to at least twice this frequency according
to the Nyquist–Shannon sampling theorem.
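
A minimal sketch of such a convergence test is given below; the surrogate signal is Gaussian noise rather than a real turbulence record, and the sampling parameters are assumptions for illustration.

```python
# Minimal sketch of the convergence test described above: acquire one long time
# series, then monitor the statistical moments over successively longer segments
# to estimate the shortest sampling time that still yields converged statistics.
import numpy as np
from scipy import stats

fs = 20_000                                   # sampling frequency [Hz] (assumed)
rng = np.random.default_rng(1)
u = 10 + rng.standard_normal(60 * fs)         # 60 s surrogate velocity record [m/s]

for T in (1, 5, 10, 30, 60):                  # candidate sampling times [s]
    seg = u[: T * fs]
    print(f"T = {T:3d} s  mean = {seg.mean():.4f}  var = {seg.var():.4f}  "
          f"skew = {stats.skew(seg):+.4f}  kurt = {stats.kurtosis(seg):+.4f}")

# Choose the shortest T beyond which the moments of interest change by less than
# an acceptable tolerance (e.g., 0.1% for the mean, 1% for the variance).
```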
Measurements can now be started, and it is advised to monitor the results "online" by
converting the voltage signal to its corresponding velocity time history through inversion of
the previously selected calibration relation, as schematically depicted in Figure 9.16. In case
of temperature drifts, the mean temperature compensation should also be incorporated at this
stage, in order not to erroneously associate drift of the calibration relation with temperature-related
drifts. Upon completion of the traverse, one should come back to the initial measurement
point and ensure that the mean voltage reading or the velocity statistics (foremost the
mean value) have not changed. The measurements are hereby outlined, and the obtained data
are now ready to be post-processed, as, for example, shown in Figure 9.16 for statistical or
spectral analysis (cf. Chapter 2). As graphically illustrated in that figure, the signal analysis
should not be carried out on the voltage signal e(t), as one might be tempted to do when reading
some of the classical literature, which was written in a time when statistics had to be computed

FIGURE 9.16 Schematic of the data conversion of the voltage time trace from the hot-wire
probe e(t) to a velocity time series u(t) via the nonlinear calibration function u = f(e).

from the analog signal; the probability density function of the voltage signal turns out to be
nearly symmetric, whereas that of the velocity signal is clearly asymmetric, which demonstrates
the kind of errors one would incur by not accounting for the nonlinearity. Hence, one should
always convert the nonlinear signal through the calibration function into the corresponding
velocity time series u(t) and then perform the statistical and/or spectral analysis on it. This
chapter on thermal anemometry could be finished here; however, as will be reasoned in the
next section, there might be a need for further corrections under some special conditions,
mainly with regard to wall-turbulence measurements, when the effect of temperature fluctuations
(nonisothermal flows) has been ignored, or when drift was nonetheless observed upon completion
of the experiments.
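
A minimal sketch of this conversion step follows; the calibration polynomial and the voltage trace are synthetic stand-ins, but the order of operations, inverting first and computing statistics on u(t) afterward, is the point being illustrated.

```python
# Minimal sketch of the conversion in Figure 9.16: invert the calibration
# relation sample by sample, then form statistics from u(t), not from e(t).
import numpy as np

E_cal = np.array([1.40, 1.43, 1.48, 1.53, 1.61, 1.68, 1.74])  # volts (example)
U_cal = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 14.0, 20.0])       # m/s (example)
poly = np.polyfit(E_cal, U_cal, 4)          # U = f(E) as a fourth-order polynomial

rng = np.random.default_rng(0)
e_t = 1.60 + 0.03 * rng.standard_normal(100_000)  # surrogate voltage trace e(t)

u_t = np.polyval(poly, e_t)                 # u(t) = f(e(t)), sample by sample

# The compressive nonlinearity makes the voltage PDF nearly symmetric even when
# the velocity PDF is skewed, so moments of e(t) are biased stand-ins.
mean, var = u_t.mean(), u_t.var()
skew_u = np.mean((u_t - mean) ** 3) / var ** 1.5
skew_e = np.mean((e_t - e_t.mean()) ** 3) / e_t.var() ** 1.5
print(skew_e, skew_u)   # near zero for e(t), clearly nonzero for u(t)
```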
A last note of caution might be justified at this point. Since thermal anemometry measures
in time at a single point in space, it has been common (as apparent from the classical literature,
see, e.g., References 4,9) to convert time information into spatial information through
Taylor's (frozen turbulence) hypothesis to enable comparison with other techniques, be it PIV
or numerical simulations, where spatial information is more common. This is done either to
obtain spatial derivatives/correlations through

∂u/∂x = −(1/Uc) ∂u/∂t, (9.26)

where Uc is the convection velocity (a velocity between the local mean and the bulk/average
velocity, depending on the flow case), or to convert frequency spectra into wavenumber spectra
through

kx = 2πf/Uc, (9.27)

where
kx denotes the streamwise wavenumber
f is the frequency of the fluctuations

Although this is a practical workaround that enables comparison with reasonable success, one
should not forget that it is a hypothesis and that either the convection velocity might be
wrongly selected or the assumption might not apply in the specific flow case.*
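
As a sketch of Equations 9.26 and 9.27, the following converts a one-sided frequency spectrum into a wavenumber spectrum; the Jacobian Uc/(2π) preserves the integrated energy, and the signal and convection velocity are assumed values.

```python
# Minimal sketch of Eqs. 9.26/9.27: map a frequency spectrum phi(f) onto a
# streamwise wavenumber spectrum phi(kx) via Taylor's hypothesis. Energy is
# conserved by requiring phi_k dk = phi_f df, hence phi_k = phi_f * Uc/(2*pi).
import numpy as np
from scipy.signal import welch

fs, Uc = 20_000, 10.0              # sampling rate [Hz], convection velocity [m/s]
rng = np.random.default_rng(2)
u = rng.standard_normal(30 * fs)   # surrogate fluctuating-velocity signal

f, phi_f = welch(u, fs=fs, nperseg=4096)   # frequency spectrum [m^2/s^2/Hz]
kx = 2 * np.pi * f / Uc                    # Equation 9.27 [rad/m]
phi_k = phi_f * Uc / (2 * np.pi)           # wavenumber spectrum [m^3/s^2]
```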

* For a note of caution in this respect, consult References 113–115.

9.6 Limitations and corrections

A number of assumptions have been made throughout Sections 9.2 and 9.4. Surely, there
will be occasions where we have to pay a price for all these simplifications. Many of these
are deemed to be accounted for during the calibration, as reasoned in Section 9.4; however, a
number of measurement situations can and will occur where the calibration cannot account
for them. These ignored effects will return as bias errors and have been discussed in the classical
literature mentioned in the "Reference literature and content" section under subject headers
such as near-wall, turbulence, or low-speed measurements. Here instead, we will present
some of the more recently "discovered" error sources and limitations of HWA, which have not
been dealt with in the aforementioned references. This part of HWA is, in fact, a very active
research field and demonstrates at the same time that this very "classical technique" still
provides enough possibilities for research and room for improvement. Since most of the advances
in this respect are comparably recent, they have not yet been covered in the reference literature
(cf. "Reference literature and content" section) to an extent that might assist the potential user.
This chapter therefore provides an overview of current issues, their limitations, and possible
corrections, as well as an extensive reference list for those planning to deepen their knowledge
of the subject. We will start with problems that are acute when performing near-wall measurements
and successively touch on issues that become more general.

Wall/probe interference and wall-position determination
When a hot-wire probe approaches a wall, different effects start to influence its readings.
Additional heat losses from the hot sensor toward the cooler wall are erroneously read as
an increase in velocity while approaching the wall, as apparent from Figure 9.17a. The wall
material and overheat ratio (coupled with the wire material) also affect the near-wall reading
up to around y+ = 4. Here, y and the superscript + henceforth denote the distance of the
wire from the wall and normalization with the friction velocity uτ = √(τw/ρ) (where τw is the
wall-shear stress and ρ is the fluid density) and the viscous length scale ℓ* = ν/uτ, that is,
classical inner scaling [116,117], respectively.
and free convection, it could be quantiied through studies under no-low conditions. This
has actually been done and exploited to determine the wall position, since for a given hot

10
d = 5 µm, a = 0.70, aluminum
d = 5 µm, a = 0.29, aluminum
8 d = 5 µm, a = 0.70, glass
d = 5 µm, a = 0.29, glass
LDA
6 U+ = y+
U+

0
0.1 1 10
(a) y+

0.12
U+
0.1
10
α
0.08
u΄/U∞

0.06

5 α
—90° 0.04
—12°
—7°
—5° 0.02
U+ = η —1°
η 0
0 0 0.2 0.4 0.6 0.8 1
0
2 4 6 8 10 2 4 6 8 101 2 U/U∞
(b) (c)

FIGUre 9.17 (a) Effect of wall thermal conductivity and overheat ratio on velocity measurements in the near-wall region.
Note that the overheat ratio is most probably the resistance overheat ratio aR and not aT as mentioned in Reference 120
(cf. also with Reference 119). (With kind permission from Springer Science+Business Media: Exp. Fluids, Experimental
investigation of near-wall effects on hot-wire measurements, 33, 2002, 210, Durst, F. and Zanoun, E.-S.) (b) Inluence of
probe/prongs inclination on velocity measurements in the near-wall region. (Reprinted from Lett. Heat Mass Transfer, 5,
Polyakov, A.F. and Shindin, S.A., Peculiarities of hot-wire measurements of mean velocity and temperature in the wall
vicinity, 53–58, Copyright 1978, with permission form Elsevier.) (c) Diagnostic plot for hot wire (symbols) and DNS (solid
line) data from a turbulent boundary layer at matched Reynolds number. Points deviating from the tangent (dash–dotted
line) are diagnosed to be problematic. (With kind permission from Springer Science+Business Media: Exp. Fluids, The
viscous sublayer revisited–exploiting self-similarity to determine the wall position and friction velocity, 51, 2011, 271,
Alfredsson, P.H., Örlü, R., and Schlatter, P.)
286 RAMIS ÖRLÜ AND RICARDO VINUESA

wire, the measured voltage in quiescent air is dependent on the hot-wire probe, its operating
parameters, as well as the distance from the wall. Consequently, a suitable calibration enables
the determination of the wall position, as described and employed in Durst et al. [118,119].
Note, however, that this method works only under no-flow conditions, since the interaction
between wall, sensor, probe support, and fluid velocity complicates the situation [8], indicating
that it is an effect of conjugate heat transfer consisting of heat convection and conduction as
well as the flow conditions. This becomes particularly apparent when changing the inclination
angle of the probe support/prongs relative to the wall, as demonstrated in Figure 9.17b. It is
therefore not recommended to utilize straight wire probes (as shown in Figures 9.5b and 9.7)
inclined to the wall in order to come closer to it. Instead, the plane of the prongs should be
displaced from that of the probe body (cf. Figure 9.18), which brings us to the so-called
boundary-layer probes. Even in measurements of the free-stream turbulence level, that is, far
away from a wall, the probe alignment has been found to affect the measured turbulence levels,
as can be evidenced when utilizing a straight probe aligned parallel or perpendicular to the flow
direction [7]. As discussed in the "Single-wire probes" section, there should be no difference in
the sensed velocity as long as the flow direction is perpendicular to the wire and the wire was
calibrated in that configuration.
Although the results presented in Figure 9.17a are from a laminar boundary-layer flow, the
wall distance up to which additional heat losses, due to wall/probe interference, are present
appears to be limited to y+ = 3.5–4 [122,123]. In practice, however, this information is not that
useful, since neither the friction velocity nor the exact wall position is often known a priori
and/or accurately. In addition, a small error in either quantity can drastically change the
picture [124]; for example, some measured points that seemingly lie above the linear velocity
profile may fall on it by simply shifting the absolute wall position by just one wall unit, that is,
by (ℓ* =) O(10 μm). A useful tool in this respect is the so-called diagnostic plot [125] depicted
in Figure 9.17c, in which the standard deviation of the fluctuations, u′, is plotted against the
mean velocity U, both scaled with the free-stream velocity U∞ or, in the case of internal flows,
the centerline velocity. As seen from the DNS results, the data should follow a straight line
within the viscous sublayer (cf. References 104,126), but as mentioned earlier, the mean
velocity tends to be overestimated, which lets the data fall beneath the tangent. At the same
time, the standard deviation u′ is usually underestimated in the viscous sublayer [110]. Both
effects amplify each other and bring problematic data points beneath the tangent, so that they
can be diagnosed as erroneous before employing them for wall-position (or friction velocity)
determination. Once these points are omitted, the near-wall data can be used in various
ways to correct for the wall position. The most common way is to employ the linear velocity

FIGURE 9.18 (See color insert.) Photograph showing a boundary layer–type probe during wall-position determination
using physical methods, namely, by means of (a) a precision gauge block and a vernier height gauge and (b) the mirrored-image
technique. (Reprinted from Prog. Aerosp. Sci., 46, Örlü, R., Fransson, J.H.M., and Alfredsson, P.H., On near
wall measurements of wall bounded lows—The necessity of an accurate determination of the wall position, 353–387,
Copyright 2010, with permission from Elsevier.)

profile close to the wall, that is, U+(y+) = y+ [127], which is straightforward in laminar flows,
since the linear region in the boundary layer is comparably thick. In turbulent boundary layers,
on the other hand, the linear region is restricted to the viscous sublayer, that is, y+ < 5, which
is usually thinner than a tenth of a millimeter. This leaves the experimentalist with (at best)
only a few data points for the fitting.* It is therefore practically necessary to extend the range
of validity in order to employ a sufficient number of measured data points. Following, for
example, Monin and Yaglom [116] and Townsend [117], the linear velocity profile can be
Taylor-series expanded to fourth or fifth order:

U+(y+) = y+ − y+²/(2Reτ) − (σ1/4) y+⁴ + (σ2/5) y+⁵ + ⋯, (9.28)

where the second-order term is related to the streamwise pressure gradient and disappears,
for example, in zero-pressure-gradient turbulent boundary layers. This term is also practically
negligible even for internal flows once the friction Reynolds number (the so-called Kármán
number) Reτ = huτ/ν (where h is the channel half-height or pipe radius) is above 300 [122].
A way to determine the constants in Equation 9.28 is to use available high-fidelity DNS data
from wall-bounded turbulent flows. This yields the values σ1 = 7.9 × 10⁻⁴ or σ1 = 11.8 × 10⁻⁴
and σ2 = 0.7 × 10⁻⁴ for the extended linear profile up to y+ = 9 or 15, respectively [122]. For the
wall-position determination, the variable y+ is simply replaced by (y+ − y0+), where y0 denotes
the determined offset to the absolute wall position. A comparative study of various available
methods for the accurate determination of y0 can be found in References 128–130. Similarly,
the friction velocity can be determined by rewriting relation 9.28 in dimensional form [131].
For canonical wall-bounded flows, more complex relations can be conceived that describe not
just the viscous and buffer regions but the entire profile [132,133] in order to obtain both the
wall position and the friction velocity among other characteristic boundary-layer quantities
(see, e.g., Örlü et al. [122] for a list of such composite-profile descriptions, and also
Chapter 12).
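
A minimal sketch of such a wall-position fit is given below, assuming a zero-pressure-gradient case so that the Reτ term of Equation 9.28 drops out, and using the σ1 value quoted above for validity up to y+ ≈ 9; the "measured" profile is synthetic, constructed so that the fit can be verified against known values.

```python
# Minimal sketch: fit the wall offset y0 and friction velocity u_tau by replacing
# y+ with (y+ - y0+) in the truncated Equation 9.28 (ZPG, fourth order, y+ <= 9).
import numpy as np
from scipy.optimize import curve_fit

nu = 1.5e-5                # kinematic viscosity of air [m^2/s] (assumed)
sigma1 = 7.9e-4            # Eq. 9.28 constant valid up to y+ ~ 9 [122]

def u_extended(y, y0, u_tau):
    """Dimensional form of Eq. 9.28 truncated at fourth order."""
    yp = (y - y0) * u_tau / nu
    return u_tau * (yp - 0.25 * sigma1 * yp**4)

# Synthetic 'measured' profile: true u_tau = 0.50 m/s, true wall offset 50 um.
yp_true = np.linspace(2, 9, 8)
y_meas = yp_true * nu / 0.50 + 50e-6           # probe positions [m]
U_meas = 0.50 * (yp_true - 0.25 * sigma1 * yp_true**4)

(y0, u_tau), _ = curve_fit(u_extended, y_meas, U_meas, p0=(0.0, 0.4))
# y0 -> 5.0e-05 m and u_tau -> 0.50 m/s recover the assumed values.
```

Only points previously cleared by the diagnostic plot should enter such a fit, since wall-interference-biased data would pull both y0 and uτ off their true values.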
While the aforementioned post-processing techniques are useful once an experiment
has been completed, it is nonetheless advisable to attempt direct measurements of the wall
position (as well as of the wall-shear stress in order to obtain the friction velocity; cf. Chapter 12).
Most techniques to measure the distance between wire and wall are, however, performed under
no-flow conditions. Once aerodynamic forces act on the probe body and traversing system, it
is not guaranteed that the measured distance remains unchanged within the accuracies
required in wall-turbulence studies (for a recent review on wall-distance determination, see
Örlü et al. [122]). Simple and straightforward approaches include mechanical techniques, as
shown in Figure 9.18a, while the mirrored-image technique, illustrated in Figure 9.18b, can
also be used under flow conditions. Already used by Laufer [134] more than half a century ago,
it can nowadays provide quite accurate results when used with high-resolution cameras
equipped with macro or tele objectives, depending on the optical distance to the probe.
When it comes to correction schemes for hot-wire errors close to the wall, there are various
attempts [120,135–138]; however, these are concerned with the mean velocity, which
in fact is well known in the viscous sublayer. Above all, they require the wall position and
friction velocity to be known a priori. Going back to Figure 9.14, an interesting observation
can be made when comparing the hot-wire results with the DNS, which is of interest for the
problem at hand: it appears as if wall-interference effects depend not only on the distance
from the wall and the mean velocity (cf. the aforementioned references as well as Reference 139)
but also on the instantaneous velocity. Interestingly, the high-velocity fluctuations follow the
correct trend closer to the wall than the low-velocity fluctuations at the same wall distance.
Utilization of DNS data reveals that not only within the viscous sublayer but also above y+ = 5
the low-velocity fluctuations follow a log-normal probability density distribution

* Nonetheless, it is not seldom that the linear law is still employed for y+ > 5, which may, however, yield errors of up
to 20% when employed up to y+ = 11 [122].
FIGURE 9.19 (a) Near-wall region of a turbulent channel flow (DNS at Reτ = 590) demonstrating
the self-similarity of the PDF within the viscous sublayer. (b) and (c) Employment of
the self-similarity of the CDFs in the viscous sublayer to extract the wall position by means
of hot-wire data. Linear fits through the closest near-wall points (+) are indicated through
dashed lines, and circles denote the mean streamwise velocity component. The dashed line with
−2 slope indicates the lower limit of near-wall points free from heat transfer to the wall
detected through the diagnostic plot, whereas the vertical dashed lines indicate the upper
limit up to which the linear fit was applied to the CDF contour levels. (With kind permission
from Springer Science+Business Media: Exp. Fluids, The viscous sublayer revisited–exploiting
self-similarity to determine the wall position and friction velocity, 51, 2011, 271, Alfredsson,
P.H., Örlü, R., and Schlatter, P.)

[103,104], which leads to parallel contour lines when plotted in a log–log plot, as shown
in Figure 9.19a. Here, DNS data were exploited to check the log-normal scaling in the limit of
y+ → 0. This can in fact be exploited, since, compared to schemes that employ the very few data
points within the viscous sublayer (which are free of near-wall effects), a large number of
contour lines of the PDF or the cumulative distribution function (CDF) can now be used to
extrapolate the parallel contour lines toward the wall. As shown in Figure 9.19b and c, the
contour lines, when plotted in a lin–lin plot, should intersect where the wall is located, thereby
yielding y0. In cases where near-wall effects are not predominant, such as in the rotating-disk
boundary-layer flow, the experimental data nicely follow the picture given by the DNS data and
foreseen by assuming a log-normal PDF/CDF distribution [140]. The log-normal CDF can
similarly be used to estimate the friction velocity, as also demonstrated in Alfredsson et al. [103].

Temporal and spatial resolution
The advantage of HWA with respect to other measurement techniques is without doubt its
good spatial and temporal resolution, and this has been assumed throughout this chapter so
far. Nonetheless, even if unrivaled in this respect, the temporal and spatial resolution of
common hot-wire probe dimensions might still suffer if employed in moderately large Reynolds-number
flows, in particular near the wall. Ideally, the hot-wire probe would have a length that
is shorter than the smallest scales in the flow, that is, the Kolmogorov scale η. However, this is
often violated in the very near-wall region, where η is a few viscous units [109,141]. Instead,
the wire has a finite length larger than η and therefore responds to an averaged value of
the turbulent fluctuations u(t), which can be expressed through [142]

um(t) = (1/L) ∫₀ᴸ u(s, t) ds, (9.29)

where s is a scalar coordinate along the wire direction and the subscript m denotes the
measured quantity.
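
The filtering of Equation 9.29 is easy to emulate numerically. In the sketch below, a box average over the wire length is applied along the (assumed uniformly sampled) spanwise direction of a surrogate velocity field; this mimics how attenuated profiles like those in Figure 9.20a are generated from DNS data, although white noise exaggerates the attenuation relative to real turbulence.

```python
# Minimal sketch of Equation 9.29: model the wire output as a spanwise box
# average of u(z, t) over the wire length L on a uniform grid (assumed spacing).
import numpy as np
from scipy.ndimage import uniform_filter1d

dz = 1.0e-5                            # spanwise grid spacing [m] (assumed)
L = 5.0e-4                             # wire length [m], i.e., 50 grid points
n = int(round(L / dz))

rng = np.random.default_rng(3)
u = rng.standard_normal((512, 2048))   # surrogate field u(z_j, t_k)

u_m = uniform_filter1d(u, size=n, axis=0, mode="nearest")  # filter along z

# u_m(z, t) is what a wire centered at z would read; its variance is attenuated
# relative to u, which is exactly the effect shown in Figure 9.20a.
print(u.var(), u_m.var())
```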

The problem of spatial resolution is well known and has been covered in the classical literature
[4,7,9] with respect to free-shear flows and, in particular, isotropic turbulence, where analytical
considerations can be conducted. Starting from the early work of Dryden et al. [143] and
Frenkiel [144], there is a rich literature that was reviewed in Comte-Bellot [19] and continues
to be extended [145]. Similarly, the effect of shear on velocity measurements performed with
multiwire probes is well known, as discussed, for example, in Vukoslavčević and Petrović [10]
and more recently in Vukoslavčević and Wallace [146,147]. With respect to wall turbulence,
starting from the early works by Ligrani and Bradshaw [54] and Johansson and Alfredsson
[148], it has been a rule of thumb to keep the viscous-scaled active wire length L+ ≤ 20 in order
not to be significantly affected by spatial-resolution problems, while the length-to-diameter
ratio should be L/D ≥ 200 to minimize attenuation caused by end-conduction effects. While
these rules have been engraved in most textbooks and user manuals, their severity has apparently
been underestimated throughout the years, causing a number of controversies (cf. the review
articles in References 149–151). To demonstrate the effect of insufficient spatial resolution,
consider Figure 9.20a, which depicts the inner-scaled variance profile of the streamwise velocity
component throughout a turbulent boundary layer obtained from a DNS for increasing wire

FIGURE 9.20 (a) Variance profiles for L+ = 22, 33, 49, 65, and 87 simulated by spanwise filtering of the DNS
(dashed lines) at Reτ = 1220. Corrected profiles following the scheme by Segalini et al. [156] by combining various pairs
of attenuated profiles (gray solid lines). The inset depicts the DNS near-wall peak amplitude (dashed line) and the estimates
resulting from pairing profiles. (b) Corrected results based on the correction schemes by Smits et al. [157] (gray
solid lines) and Monkewitz et al. [158] (black dashed lines) applied to the five filtered profiles shown. (With kind permission
from Springer Science+Business Media: Exp. Fluids, A method to estimate turbulence intensity and transverse Taylor
microscale in turbulent flows from spatially averaged hot-wire data, 51, 2011, 693, Segalini, A., Örlü, R., Schlatter, P.,
Alfredsson, P.H., Rüedi, J.-D., and Talamelli, A.)

lengths L+, obtained by filtering the data according to Equation 9.29. Already a wire length of
L+ = 22 (which can still be considered small) causes an apparent reduction in the amplitude,
and with increasing wire length the region farther out from the wall also starts to be affected,
ultimately letting a hump emerge in the outer region. A seminal contribution in this respect
is the work by Hutchins et al. [152], which resolved some of the questions that were vividly
discussed* and reinforced the practical guidelines for resolved measurements in wall turbulence,
which are the following:
• L+ should be as small as possible. Provided that L+ < 20, the error should be less
than 10%.
• L/D ≥ 200: The effect of a too small L/D is similar to that of an insufficient L+, although its
effect is stronger in the region away from the wall.
• t+ < 3 (f+ > 1/3) should be resolved; that is, the wire diameter and HWA system as well as
the low-pass filter should be set accordingly to avoid attenuation due to temporal-resolution
problems.
Although the first two points are those of Ligrani and Bradshaw [54], the latter study was
performed at one low Reynolds number, while Hutchins et al. [152] cover a wide Re range.
Consequently, whenever comparing results across flow cases, facilities, and different Reynolds
numbers, it is crucial to ensure matched conditions for the aforementioned quantities or to
account for them, for example, by estimating the attenuation caused by insufficient spatial and
temporal resolution as well as that from end-conduction effects.
In particular, the effect of spatial resolution in wall turbulence has been extensively studied
following the work by Hutchins et al. [152], not only to assess its effect on the variance
[109,159] and spectra [160], but also to provide correction schemes for the variance [156–158],
spectra [161], and higher-order moments [162] when utilizing single-wire probes. Its effect on
multiwire probes has been studied as well [163,164].† A number of different correction schemes
have been proposed, which relate the filtering effect to either the transverse Taylor microscale,
the Kolmogorov length scale, or the viscous length scale [156–158,169], but they perform
comparably well within their range of applicability, as shown in References 156,170 and
depicted in Figure 9.20. Here, we will report the correction scheme proposed by Smits et al.
[157], since it has been calibrated for a large range in terms of L+ (<150) and Re (viz.,
Reτ < 14,000).‡ Following Smits et al. [157], the corrected variance can be obtained through
can be obtained through

u′c²⁺ = u′m²⁺ [1 + M(L+) f(y+)], (9.30)

where f and M are functions describing the dependency on the wall distance and spatial
resolution, respectively, with

f(y+) = [15 + ln(2)] / [y+ + ln(e^(15−y+) + 1)], (9.31)

* In particular, they established that the near-wall peak of the streamwise variance profile increases with Re and
that the occurrence of a second/outer hump/peak in previous publications was related to spatial-resolution effects
(see, e.g., the discussions in References 153–155).
† Following the pioneering work by Suzuki and Kasagi [142] and Moin and Spalart [165], DNS data have been
exploited in most of the aforementioned activities to simulate the "response" of the hot wire. Although DNS might not
reach practically relevant Re by itself (despite recent efforts; see, e.g., References 111,112,166,167), it nonetheless
contributes to high-Re turbulence research by assessing uncertainties in measurement techniques. Similar exploitation of
DNS data is also starting to emerge to assess limitations and propose correction schemes for LDV [110] and PIV [168]
measurements.
‡ The correction method has also been confirmed in follow-up work up to Reτ = 10⁵ [155,171].

and

M(L+) = [A tanh(σ1 L+) tanh(σ2 L+) − E] / max(u′m²⁺), (9.32)

where A = 6.13, E = −1.26 × 10⁻², σ1 = 5.6 × 10⁻², σ2 = 8.6 × 10⁻³, and max(u′m²⁺) denotes the
amplitude of the measured near-wall peak (indicated through symbols in Figure 9.20) located
at y+ ≈ 15. In case the measurements do not come close enough to the wall to reach this location
(as in high-Re flows), Equation 9.32 can be replaced with M = 0.0091 L+ − 0.069. Application
of this correction scheme brings all profiles, irrespective of L+ (dashed lines), back onto the
fully resolved profile from the DNS, as apparent from Figure 9.20b.
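
A minimal sketch of Equations 9.30 through 9.32 is given below; the array inputs (y+, measured variance, L+) are left to the user, and the constants are those quoted in the text.

```python
# Minimal sketch of the correction scheme of Smits et al. [157],
# Equations 9.30 through 9.32, for an inner-scaled measured variance profile.
import numpy as np

A, E = 6.13, -1.26e-2
s1, s2 = 5.6e-2, 8.6e-3

def corrected_variance(yp, u2m, Lp):
    """Return u'c^2+ from u'm^2+ (Eqs. 9.30-9.32); assumes the near-wall peak
    at y+ ~ 15 is contained in the measured profile u2m(yp), Lp is L+."""
    f = (15 + np.log(2)) / (yp + np.log(np.exp(15 - yp) + 1))       # Eq. 9.31
    M = (A * np.tanh(s1 * Lp) * np.tanh(s2 * Lp) - E) / u2m.max()   # Eq. 9.32
    return u2m * (1 + M * f)                                        # Eq. 9.30

# If the near-wall peak is not reached (high-Re flows), replace M with
# M = 0.0091 * Lp - 0.069, as stated in the text.
```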
It should be noted, however, that the aforementioned correction does not take attenuation
due to increased end-conduction effects into account. Although all commercial and most
in-house-built probes comply with L/D ≥ 200, there is sometimes the need to minimize
spatial-resolution effects even further (i.e., L+ → 0) by violating the L/D criterion.* Hence, if smaller
length-to-diameter ratios are to be used, one could utilize the correction by Monkewitz
et al. [158] (which, however, is limited to y+ > 10 and displays a slightly larger spread around
the near-wall peak) or follow Miller et al. [170], who incorporate a correction for end-conduction
effects (based on results by Hultmark et al. [172]) into Equation 9.30.
With respect to cold-wire probes, little is known about their spatial resolution, although
here the problem might be even more significant, due to the large length-to-diameter ratios
recommended, namely, L/D ≳ 1000 [9]. Since only a small part of the ohmic heating is lost
through forced convection, it is not guaranteed that relations proposed for hot wires will
do justice to cold wires. In particular, the prongs/stubs are found to cause an additional
low-frequency attenuation, on top of the attenuation at high frequencies (which in turn is
related to the wire) [19]. One workaround to improve the frequency response of cold wires
is to employ two wires placed close to each other with different diameters, but with the
same or large enough L/D, and estimate the thermal time constant, from which the ideal sensor
response, for which D → 0, can be estimated [173,174]. Such techniques are common for
resistance thermometers [175] but can be applied to cold-wire measurements as well. Finally,
analytical models can be used to study the prong/stub/wire interaction as a design tool, not
only for optimizing the probe design [7,176,177] but also to correct measured data [177].

Corrections for temperature fluctuations and drift
Going back to Figure 9.20, we have seen that measured velocity (and similarly also temperature)
fluctuations can be significantly attenuated due to insufficient spatial resolution
of the sensing hot-wire element. A limited frequency response (as inherent in temperature or
high-speed velocity measurements) similarly contributes to this attenuation. Finally, end-conduction
effects were identified as an additional source of attenuation in fluctuation
measurements. These three effects were, however, not able to explain the contradicting results
with respect to the near-wall behavior of turbulent pipe flows [106,107,118,170]. Hence, there
might be other effects that occur in experiments and have so far been neglected. One of these
possibilities is, for example, a temperature gradient between the wall and the centerline
(or the free stream in the case of semiconfined flows). Dissipation of kinetic energy into heat or
facility-related factors might contribute to such a temperature gradient. Such effects are commonly
compensated for through a mean-temperature correction, as discussed in the "Temperature
calibration" section. We have seen that such a correction, for example through Equation 9.19,
accurately corrects the mean velocity reading, as extensively discussed in the classical literature;
it is therefore nowadays standard and often incorporated in commercial HWA software
packages, that is, it is barely seen as an explicit correction (as, e.g., the ones discussed in
the "Wall/probe interference and wall-position determination" and the "Temporal and spatial
resolution" sections). However, to ensure correctly measured velocity time series, either the
* Recent studies show that with increasing Re, the L/D criterion requirement could in fact be relaxed [50,172].
However, further studies are needed to exclude other effects.

flow needs to be perfectly isothermal (i.e., no mean temperature changes in space and time),
or the instantaneous (space- and time-resolved) fluctuating velocity and temperature need to
be measured simultaneously, as is common in mixing studies in nonisothermal flows, where,
for example, combined hot-wire and cold-wire probes (as shown in Figure 9.5b) are utilized
[57,178].
As mentioned in the "Modes of operation" and the "Temperature calibration" sections,
it will barely be possible to measure temperature fluctuations as accurately (with respect to
spatial and temporal resolution) as velocity fluctuations when it comes to wall turbulence or
high-speed flows, and it is therefore important to be aware of the errors that can be introduced
by fluctuating temperatures (despite a performed mean-temperature correction) on, for
example, velocity variance measurements. Assuming that the thermal boundary conditions are
analogous to the velocity boundary conditions, that is, higher velocities are related to higher
temperatures, a high-temperature fluctuation will lead to a reduced voltage reading from the
CTA system for velocity measurements; cf. Figure 9.11. This in turn is seemingly interpreted
as a reduced fluctuating-velocity amplitude, and vice versa.* The effect of ignoring temperature
fluctuations can be demonstrated directly by utilizing DNS data and exploiting King's law
(Equation 9.11) in conjunction with Equation 9.19 and considering its effect on the streamwise
variance profile in a turbulent channel flow, as depicted in Figure 9.21a. As apparent,
there is a nonnegligible effect when ignoring temperature fluctuations in velocity measurements,
even if mean-temperature effects are accounted for. These effects are comparably
small (with respect to spatial and frequency resolution) for moderate mean-temperature
gradients of a few degrees, but they might become important when considering flows with
large temperature gradients, such as heat-transfer measurements or high-Mach-number
flows, as well as flows with large-scale pulsations/oscillations [60,179,180].†
Since there are no practically feasible methods to measure the temporally and spatially
resolved temperature fluctuations with respect to wall turbulence or high-speed flows in order

FIGURE 9.21 Variance profile of the streamwise velocity component: circles represent numerical results; solid, dashed,
and dash–dotted lines denote the hot-wire results that would have been measured at ΔT = 2, ΔT = 4, and ΔT = 6 with
temperature compensations utilizing (a) the local mean temperature and (b) the instantaneous temperature estimated through
Equation 9.33. The inset in (b) depicts the percentage error in variance at y+ = 15 if the mean temperature (circle) or the
correction after Equation 9.33 (stars) is used. (With kind permission from Springer Science+Business Media: Exp. Fluids, The
influence of temperature fluctuations on hot-wire measurements in wall-bounded turbulence, 55, 2014, 1781, Örlü, R.,
Malizia, F., Cimarelli, A., Schlatter, P., and Talamelli, A.)

* Note that the opposite effect (amplified velocity fluctuations) is observed if the thermal boundary conditions are
inverted, for example, in the case of a cold air stream over a warm wall, where high velocities are associated with cold air.
† In particular, in the internal-combustion-engine environment, it is common to circumvent the need to measure the
temperature fluctuations by obtaining "an approximate value for the instantaneous temperature [...] by assuming an
isentropic relationship between temperature and pressure" [181]. It should be realized, however, that the recovery
temperature (which is close to the total/stagnation temperature) needs to be employed in the correction procedure and
not the static temperature.

to deduce the correct velocity fluctuations, an experimentally practical correction scheme is
desired that is restricted to easily measurable information such as the mean temperature,
which is in any case required for the mean-temperature compensation. As shown in
Reference 182 and demonstrated in Figure 9.21b, utilization of

Tc(y, t) = T̄(y) + (u(t)/uτ) Tτ (u′²/u′²|max),  Tτ = (dT̄/dy|w)/(Reτ Pr), (9.33)

with Tτ denoting the friction temperature, provides a surrogate temperature-fluctuation signal
that, in conjunction with Equation 9.19, brings all the curves shown in Figure 9.21b on top
of the profile with no temperature gradient. Since the correction acts on the instantaneous
velocity signal, contrary to the corrections discussed in the "Wall/probe interference and
wall-position determination" and the "Temporal and spatial resolution" sections, it also provides
satisfying corrections for spectra [182] and higher-order moments [183].
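
A minimal sketch of this correction follows, based on the reconstruction of Equation 9.33 given above (the exact grouping of terms should be checked against Reference 182); all inputs are assumed to be available from the measurement and the mean temperature profile.

```python
# Minimal sketch of the surrogate-temperature correction (reconstructed form of
# Eq. 9.33; verify against Reference 182). The returned Tc(y, t) then enters the
# temperature compensation of Equation 9.19 in place of the mean temperature.

def surrogate_temperature(T_mean_y, u_fluc_t, u_tau, T_tau, u2_y, u2_max):
    """Tc(y, t) at one wall distance y.

    T_mean_y : local mean temperature T(y)
    u_fluc_t : fluctuating-velocity time series u(t) at that y
    u_tau    : friction velocity
    T_tau    : friction temperature, (dT/dy|w)/(Re_tau*Pr) per Eq. 9.33
    u2_y     : local streamwise variance u'^2(y)
    u2_max   : near-wall peak value of the variance profile
    """
    return T_mean_y + (u_fluc_t / u_tau) * T_tau * (u2_y / u2_max)
```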
Temperature is indeed the largest uncertainty in hot-wire measurements, but there are established
compensations, as mentioned earlier, when it comes to mean-temperature drifts, which
are discussed at length in the reference literature. Another uncertainty, although not so much
seen as an uncertainty but as a "curse" [12], is drift, since, as mentioned in the "Preaging, aging,
and drift" section, affected measurements are usually disregarded rather than corrected for;
cf. Figure 9.2. Such drifts can be caused by "wire degradation [...], electro-migration, dust
particles in the flow and wire fouling" [184]. It might, however, not always be possible to simply
disregard measurements, and drift issues become more prevalent the smaller the wire diameters
are (recalling the need to reduce frequency- and spatial-resolution issues, this is indeed a
current trend).* As shown in Bailey et al. [186], even a drift that is limited to 1% in velocity
was found to cause a change in the mean velocity profile that can propagate, for example,
into the value of the Kármán constant by changing it by up to 6%. This is indeed a considerable
uncertainty when compared to other sources of uncertainty [124,187]. Such problems are
even more acute when using hot films in water, due to contamination by dirt. However, in this
context it was observed that dirty hot films had the same calibration as a clean one at a lower
overheat ratio (cf. Figure 9.11, which illustrates the problem indirectly), thereby making it
possible to recalibrate them through a single calibration point [12]. Building on such experience,
it has been common (but often undocumented) practice also in hot-wire measurements
to interpolate between pre- and postcalibration curves (cf. References 185,188) in case the
data had to be rescued rather than disregarded, similar to the well-established temperature-drift
corrections discussed in the "Temperature calibration" section.
A schematic of the effect of drift is presented in Figure 9.22a, which shows the time line
from a precalibration over the measurements, with intermediate single-point calibrations,
to a postcalibration. Following Talluru et al. [184] and utilizing the pre- and postcalibration
curves (Figure 9.22b) together with the information from the intermediate calibration points,
an intermediate (i.e., interpolated) calibration relation Eint|U,
Eint|U = R|i (Epost|U − Epre|U) + Epre|U,  R|i = (Ei − Epre)|U∞i / (Epost − Epre)|U∞i, (9.34)

with R|i denoting the proportional drift factor, can be reconstructed as illustrated in
Figure 9.22b and c. It should be noted that such a correction assumes that drift occurs
monotonically in time; erratic jumps/steps cannot be accounted for, in which case
the measured data need to be discarded [184]. Given a monotonic drift, and in the absence of

* This is also one of the reasons why it might be advantageous to employ two hot wires simultaneously, one with a
smaller diameter for fine-scale turbulence measurements and one for mean-velocity measurements, thereby providing
an "online" check for the mean velocity, which can be used to correct the reading of the smaller wire, as, for
example, done by Hutchins et al. [185].
FIGURE 9.22 A schematic figure illustrating an example of the intermediate single-point recalibration
(ISPR) method applied, in this case, to a wall-normal traverse in a turbulent boundary
layer. (a) A time line of the experiment. Shaded regions show the times of the pre- and postcalibrations,
respectively, while dashed lines show the start and end times of the boundary-layer
traverse experiment. Dots show the individual traverse measurements of mean voltage; filled
circles show the free-stream recalibration points, which are used to determine Ei and hence R|i.
(b) The (triangles) pre- and (squares) postcalibration curves. (c) Inset showing a detail of the
intermediate calibration curve (dashed line); graphically, R|i is the ratio of the lengths of the
arrows (Ei − Epre)|U∞i and (Epost − Epre)|U∞i. (From Talluru, K.M., Kulandaivelu, V., Hutchins, N.,
and Marusic, I., A calibration technique to correct sensor drift issues in hot-wire anemometry,
Meas. Sci. Technol., 25, 105304, 2014. Copyright of IOP Publishing.)

intermediate calibration points, the proportional drift factor R|i can also be formed with time
rather than voltage information [184].
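
A minimal sketch of Equation 9.34 is given below; the pre- and postcalibration relations are assumed to be available as fitted callables, and the drift factor is formed from a free-stream recalibration point.

```python
# Minimal sketch of the ISPR blending of Talluru et al. [184], Equation 9.34.

def drift_factor(E_i, E_pre_inf, E_post_inf):
    """R|i from the free-stream recalibration point at U_inf,i (Eq. 9.34)."""
    return (E_i - E_pre_inf) / (E_post_inf - E_pre_inf)

def intermediate_calibration(U, E_pre, E_post, R_i):
    """E_int(U) = R|i*(E_post(U) - E_pre(U)) + E_pre(U), where E_pre and E_post
    are callables (e.g., fitted modified King's law or polynomial relations)."""
    return R_i * (E_post(U) - E_pre(U)) + E_pre(U)

# The corrected velocity then follows by inverting E_int(U) for each measured
# voltage, valid only if the drift evolved monotonically between calibrations.
```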

acknowledgments

The first author expresses his gratitude to Prof. P. Henrik Alfredsson for stimulating discussions
during the author's graduate studies and for sharing and discussing his experience on the
topic, which also left a strong imprint on some of the problems at the end of this chapter
[189]. Similarly, he benefitted from collaboration with Professors Alessandro Talamelli and
Philipp Schlatter as well as Dr. Antonio Segalini, which is gratefully acknowledged.

problems

Importance of end-conduction effects
As mentioned in Section 9.2, end-conduction effects are small but not negligible. While a large
length-to-diameter (L/D) ratio is desired to reduce end-conduction effects, one also tries to keep
the wire length L short enough to reduce spatial-resolution effects (cf. the "Temporal and spatial
resolution" section). Contrary to radiation and buoyancy effects, these effects are deemed to be
accounted for during a calibration. The rate of heat transfer to an end support is given by

Wc = −kw Aw ∂Tw/∂z |z=±(L/2), (9.35)

where kw and Aw are the thermal conductivity of the wire material and its cross-sectional area,
respectively, while ±L/2 denotes the locations of the joints between wire and prongs (i.e., z = 0
denotes the center of the hot wire). The temperature profile along the wire can be taken as

(Tw(z) − T0)/(Tw − T0)∞ = 1 − cosh(z/Lc)/cosh(L/(2Lc)), (9.36)

where Lc is the so-called Betchov or cold length, which relates to the portion of the wire along
which the effect of the prongs/stubs is felt and is defined as Lc = D [(1/4)(kw/kf)(1 + aR)/Nu]^(1/2)
[14]. The subscript ∞ indicates conditions for an infinitely long wire, while 0 denotes ambient
conditions.
(a) Based on these relations, derive an expression for the ratio of conduction to forced
convection for a hot-wire sensor.
(b) Let us assume that Nu = 2. Is L/D ≥ 200 still a generally valid rule of thumb? To
demonstrate your answer, consider a standard tungsten wire (D = 5 μm, L/D = 200) at a
resistance overheat ratio of aR = 0.5 and compare it to a Pt–Rh 90/10 wire.
(c) Consider the nondimensional temperature profile for the aforementioned tungsten and
Pt–Rh 90/10 hot-wire probes to support your answer in (b).

Hot-wire voltage versus cooling velocity relation
A single hot-wire probe with a platinum wire of D = 5 μm diameter and 500D length is
positioned normal to the flow direction and operated in constant-temperature mode at a
resistance overheat ratio of 80% in an isothermal flow (T0 = 20°C). Experiments are planned to be
performed in the velocity range of 0.2–35 m/s.
(a) Are buoyancy effects negligible for the lowest velocities to be encountered? Note that
fluid properties are evaluated at the film temperature, that is, the arithmetic mean of the
ambient and wire temperatures. Material properties are given in Table 9.1, while air
properties can be evaluated through the ideal gas law and Sutherland's law or need to be
found from tables of air properties.
(b) Assume that L/D = 500 can be considered sufficiently long to make use of King's
correlation for the Nusselt number and estimate the voltage at zero velocity (to get an idea
of its magnitude and dependencies) as well as obtain the calibration relation (i.e., hot-wire
voltage vs. cooling velocity). Is an A/D converter unit with a voltage range of 1 V
sufficient to cover the velocity range of interest?

Binormal and tangential cooling effects on single-wire measurements in a turbulent flow
Starting from the definition of the effective (cooling) velocity, Equation 9.17,

Ue² = (Un + un)² + h² ub² + k² ut², (9.37)

derive an expression for the measured (effective) mean and variance of a straight single-wire
probe. For this, assume that the fluctuating components are much smaller than the mean velocity
and make use of a series expansion around the mean value. Based on these results, consider
how high the turbulence intensity during a calibration can be in order to keep the errors
in the mean velocity due to fluctuations below 0.5%. Similarly, how justified is the common
assumption that a single hot-wire probe measures the mean and variance of the streamwise
FIGURE 9.23 Photograph of an X-wire (without soldered hot wires) together with the notation
of the wire-fixed (UN, UT) and probe-stem coordinate systems.

component when, for example, exposed to a turbulent boundary layer? In other words, are the
errors due to the effect of binormal and tangential velocity fluctuations negligible?

Hot-wire spatial resolution effects
To get a feeling for the severity of insufficient spatial-resolution effects on the turbulence
intensity, consider hot-wire measurements in the near-wall region of wall-bounded turbulent flows.
Using the correction scheme proposed by Smits et al. [157] (cf. Equations 9.30 through 9.32),
take a measured streamwise variance profile that is publicly available and (a) correct it for
spatial-resolution effects, that is, obtain the variance for an infinitesimally small wire length,
and (b) compute the profiles for various wire lengths up to a point where an artificial outer peak
appears and the near-wall peak diminishes.*

* There are various publicly accessible databases, but for simplicity the authors' data can, for example, be accessed
from the FLOW database via www.flow.kth.se.

Two-component measurements by means of an X-wire
In case two velocity components need to be measured, an X-wire is commonly employed, in
which case two mirrored slanted wires are used. For the special case that the angle between the
two wires is 90° (see Figure 9.23; note, however, that the actual angle is <90°) and the prongs
are aligned parallel to the mean flow direction (ignoring binormal cooling effects), derive a
(series-expanded) expression (keeping terms of first order only) for the two velocity components
that lie in the plane spanned by the prongs. The same assumptions as in the exercise
"Binormal and tangential cooling effects on single-wire measurements in a turbulent flow"
are valid. Start by expressing the normal and tangential velocity components of each wire,
from which the individual effective cooling velocities for each wire can be expressed. By
summing and taking the difference of the effective cooling velocities of the two wires,
expressions for U and V can be formed in terms of U, u, and v.

references

1. P. Freymuth. Review: A bibliography of thermal anemometry. J. Fluids Eng., 102:152–159, 1980.
2. P. Freymuth. A Bibliography of Thermal Anemometry. TSI Incorporated, St. Paul, MN, 1982.
3. L. M. Fingerson. Thermal anemometry, current state, and future directions. Rev. Sci. Instrum., 65:285–300, 1994.
4. V. A. Sandborn. Resistance Temperature Transducers. Metrology Press, Ft Collins, CO, 1972.
5. J. Westerweel, G. E. Elsinga, and R. J. Adrian. Particle image velocimetry for complex and turbulent flows. Annu. Rev. Fluid Mech., 45:409–436, 2013.
6. H. Strickert. Hitzdraht- und Hitzfilmanemometrie. VEB Verlag Technik, Berlin, Germany, 1974.
7. A. E. Perry. Hot-Wire Anemometry. Clarendon Press, Oxford, U.K., 1982.
8. C. G. Lomas. Fundamentals of Hot Wire Anemometry. Cambridge University Press, 1986.

9. H. H. Bruun. Hot-Wire Anemometry: Principles and Signal Analysis. Oxford University Press Inc., New York, 1995.
10. P. V. Vukoslavčević and D. Petrović. Multiple Hot-Wire Probes: Measurements of Turbulent Velocity and Vorticity Vector Fields. Montenegrin Academy of Sciences and Arts, Podgorica, Montenegro, 2000.
11. S. Corrsin. Turbulence: Experimental methods. In: S. Flugge, ed., Handbuch der Physik, Vol. 8. Springer-Verlag, Berlin, Germany, pp. 524–590, 1963.
12. P. Bradshaw. Experimental Fluid Mechanics, 2nd ed. Pergamon Press Ltd., Oxford, U.K., 1970.
13. J. O. Hinze. Turbulence, 2nd ed. McGraw-Hill, New York, 1975.
14. R. F. Blackwelder. Hot-wire and hot-film anemometers. In: E. J. Emrich, ed., Fluid Mechanics Measurements. Academic Press, Cambridge, MA, pp. 259–314, 1981.
15. A. V. Smol'yakov and V. M. Tkachenko. The Measurement of Turbulent Fluctuations: An Introduction to Hot-Wire Anemometry and Related Transducers. Springer-Verlag, Berlin Heidelberg, 1983.
16. H. Eckelmann. Einführung in die Strömungsmeßtechnik. Springer Fachmedien Wiesbaden GmbH, 1997.
17. P. S. Bernard and J. M. Wallace. Turbulent Flow: Analysis, Measurement, and Prediction. John Wiley & Sons Inc., Hoboken, NJ, 2002.
18. S. Tavoularis. Measurement in Fluid Mechanics. Cambridge University Press, Cambridge, U.K., 2005.
19. G. Comte-Bellot. Thermal anemometry. In: C. Tropea, A. L. Yarin, and J. F. Foss, eds., Handbook of Experimental Fluid Mechanics: Section B. Springer-Verlag, Berlin Heidelberg, pp. 5.2.1–5.2.7, 2007.
20. F. Durst. Fluid Mechanics: An Introduction to the Theory of Fluid Flows. Springer-Verlag, Berlin, Germany, 2008.
21. C. Bailly and G. Comte-Bellot. Turbulence. Springer International Publishing, Switzerland, 2015.
22. G. Comte-Bellot. Hot-wire anemometry. Annu. Rev. Fluid Mech., 8:209–231, 1976.
23. J.-D. Vagt. Hot-wire probes in low speed flow. Prog. Aerosp. Sci., 18:271–323, 1979.
24. P. C. Stainback and K. A. Nagabushana. Review of hot-wire anemometry techniques and the range of their applicability for various flows. In: Thermal Anemometry, ASME FED, Vol. 167, pp. 93–133, 1993.
25. L. M. Fingerson and P. Freymuth. Thermal anemometers. In: R. J. Goldstein, ed., Fluid Mechanics Measurements. Taylor & Francis Group, Boca Raton, FL, 1996.
26. I. Lekakis. Calibration and signal interpretation for single and multiple hot-wire/hot-film probes. Meas. Sci. Technol., 7:1313–1333, 1996.
27. G. Lemonis and T. Dracos. Determination of 3-D velocity and vorticity vectors in turbulent flows by multi-hotwire anemometry. In: Th. Dracos, ed., Three-Dimensional Velocity and Vorticity Measuring and Image Analysis Techniques. Springer Science+Business Media, Dordrecht, the Netherlands, pp. 1–42, 1996.
28. L. S. G. Kovasznay. The hot-wire anemometer in supersonic flow. J. Aero. Sci., 17:565–573, 1950.
29. M. V. Morkovin. Fluctuations and hot-wire anemometry in compressible flows. AGARDograph 24, 1956.
30. A. J. Smits, K. Hayakawa, and K. C. Muck. Constant temperature hot-wire anemometer practice in supersonic flows. Exp. Fluids, 1:83–92, 1983.
31. A. J. Smits and J. Dussauge. Turbulent Shear Layers in Supersonic Flow, 2nd ed. Springer, New York, 2006.
32. E. Huguenard, A. Magnan, and A. Planiol. A method for the instantaneous determination of the velocity and direction of the wind. NACA Tech. Memorandum No. 264, 1924.
33. H. L. Dryden and A. M. Kuethe. The measurement of fluctuations of air speed by the hot-wire anemometer. NACA Rep. No. 320, 1929.
34. J. M. Burgers. Hitzdrahtmessungen. In: W. Wien and F. Harms, eds., Handbuch der Experimentalphysik, Band 4. Akademische Verlagsgesellschaft, Leipzig, Germany, pp. 635–667,
1931.
35. D. C. C o l l is and M. J. William s . Two-dimensional convection from heated wires at low
Reynolds numbers. J. Fluid Mech., 6:357–384, 1959.
36. W. Pa e s c h k e . Feuchtigkeitseffekt bei Hitzdrahtmessungen. Phys. Z, 36:564–565, 1935.
37. G. B. S c h u b au e r. Effect of humidity in hot-wire anemometry. J. Res. Nat. Bur. Stand., 15:
575–578, 1935.
38. G. E. A n d r e ws, D. B radley, and G. F. Hundy. Hot wire anemometer calibration for measure-
ments of small gas velocities. Int. J. Heat Mass Transfer, 15:1765–1786, 1972.
39. L. V. King. On the convection of heat from small cylinders in a stream of fluid: Determination of the convection constants of small platinum wires with applications to hot-wire anemometry. Phil. Trans. R. Soc. A, 214:373–432, 1914.
40. J. D. Li. Dynamic response of constant temperature hot-wire system in turbulence velocity measurements. Meas. Sci. Technol., 15:1835–1847, 2004.
41. J. D. Li. The effect of electronic components on the cut-off frequency of the hot-wire system. Meas. Sci. Technol., 16:766–774, 2005.
42. N. Hutchins, J. P. Monty, M. Hultmark, and A. J. Smits. A direct measure of the frequency response of hot-wire anemometers: Temporal resolution issues in wall-bounded turbulence. Exp. Fluids, 56:18, 2015.
43. G. R. Sarma. Transfer function analysis of the constant voltage anemometer. Rev. Sci. Instrum., 69:2385–2390, 1998.
44. A. Berson, G. Poignand, P. Blanc-Benon, and G. Comte-Bellot. Capture of instantaneous temperature in oscillating flows: Use of constant-voltage anemometry to correct the thermal lag of cold wires operated by constant-current anemometry. Rev. Sci. Instrum., 81:015102, 2010.
45. P. M. Ligrani. Subminiature hot-wire sensor construction. Naval Postgraduate School, Monterey, CA, NPS69-84-010, 1984.
46. P. M. Ligrani and P. Bradshaw. Subminiature hot-wire sensors: Development and use. J. Phys. E, 20:323–332, 1987.
47. P. M. Ligrani, R. V. Westphal, and F. R. Lemos. Fabrication and testing of subminiature multi-sensor hot-wire probes. J. Phys. E: Sci. Instrum., 22:262–268, 1989.
48. R. V. Westphal, P. M. Ligrani, and F. R. Lemos. Development of subminiature multi-sensor hot-wire probes. NASA TM 100052, 1988.
49. M. Ferro. Experimental study on turbulent pipe flow. MSc thesis, Royal Institute of Technology, Stockholm, Sweden, 2012.
50. J. D. Li, B. J. McKeon, W. Jiang, J. F. Morrison, and A. J. Smits. The response of hot wires in high Reynolds-number turbulent pipe flow. Meas. Sci. Technol., 15:789–798, 2004.
51. B. G. van der Hegge Zijnen. On the construction of hot-wire anemometers for the investigation of turbulence. Appl. Sci. Res., 2:351–363, 1951.
52. S. C. C. Bailey, G. J. Kunkel, M. Hultmark, M. Vallikivi, J. P. Hill, K. A. Meyer, C. Tsay, C. B. Arnold, and A. J. Smits. Turbulence measurements using a nanoscale thermal anemometry probe. J. Fluid Mech., 663:160–179, 2010.
53. M. Vallikivi, M. Hultmark, S. C. C. Bailey, and A. J. Smits. Turbulence measurements in pipe flow using a nano-scale thermal anemometry probe. Exp. Fluids, 51:1521–1527, 2011.
54. P. M. Ligrani and P. Bradshaw. Spatial resolution and measurement of turbulence in the viscous sublayer using subminiature hot-wire probes. Exp. Fluids, 5:407–417, 1987.
55. G. Comte-Bellot, A. Strohl, and E. Alcaraz. On aerodynamic disturbances caused by single hot-wire probes. J. Appl. Mech., 93:767–774, 1971.
56. P. Walker and W. H. Tarn. Handbook of Metal Etchants. CRC Press, Boca Raton, FL, 1991.
57. R. Örlü and P. H. Alfredsson. An experimental study of the near-field mixing characteristics of a swirling jet. Flow Turbul. Combust., 80:323–350, 2008.
58. W. G. Spangenberg. Heat-loss characteristics of hot-wire anemometers at various densities in transonic and supersonic flow. NACA TN 3381, 1955.
59. DISA Type 55A12/13 Spot welding equipment: Instruction manual—Probe repair manual. DISA Information Department, Denmark, 1977.
60. F. Laurantzon, N. Tillmark, R. Örlü, and P. H. Alfredsson. A flow facility for the characterization of pulsatile flows. Flow Meas. Instrum., 26:10–17, 2012.
61. M. Hishida and Y. Nagano. Simultaneous measurements of velocity and temperature in nonisothermal flows. J. Heat Trans., 100:340–345, 1978.
62. P. V. Vukoslavčević and J. M. Wallace. The simultaneous measurement of velocity and temperature in heated turbulent air flow using thermal anemometry. Meas. Sci. Technol., 13:1615–1624, 2002.
63. P. Freymuth. Further investigation of the nonlinear theory for constant-temperature hot-wire anemometers. J. Phys. E, 10:710–713, 1977.
64. M. Tutkun, W. K. George, J. M. Foucaut, S. Coudert, M. Stanislas, and J. Delville. In situ calibration of hot wire probes in turbulent flows. Exp. Fluids, 46:617–629, 2009.
65. S. Chue. Pressure probes for fluid measurement. Prog. Aerosp. Sci., 16:147–223, 1975.
66. A. V. Johansson and P. H. Alfredsson. On the structure of turbulent channel flow. J. Fluid Mech., 122:295–314, 1982.
67. W. K. George, P. D. Beuther, and A. Shabbir. Polynomial calibrations for hot wires in thermally-varying flows. Exp. Thermal Fluid Sci., 2:230–235, 1989.
68. F. H. Champagne, C. A. Sleicher, and O. H. Wehrmann. Turbulence measurements with inclined hot-wires. Part 1. Heat transfer experiments with inclined hot-wire. J. Fluid Mech., 28:153–175, 1967.
69. F. E. Jørgensen. Directional sensitivity of wire and fiber-film probes. DISA Inform., 11:31–37, 1971.
70. A. Kalpakli and R. Örlü. Turbulent pipe flow downstream a 90° pipe bend with and without superimposed swirl. Int. J. Heat and Fluid Flow, 41:103–111, 2013.
71. A. D. Cutler and P. Bradshaw. A crossed hot-wire technique for complex turbulent flows. Exp. Fluids, 12:17–22, 1991.
72. A. Talamelli, K. J. A. Westin, and P. H. Alfredsson. An experimental investigation of the response of hot-wire X-probes in shear flows. Exp. Fluids, 28:425–435, 2000.
73. P. Burattini. The effect of the X-wire probe resolution in measurements of isotropic turbulence. Meas. Sci. Technol., 19:115405, 2008.
74. R. Örlü. Experimental study of passive scalar mixing in swirling jet flows. Licentiate (TeknL) thesis, Royal Institute of Technology, Stockholm, Sweden, 2006.
75. R. Ovink, A. P. G. G. Lamers, A. A. van Steenhoven, and H. W. M. Hoeijmakers. A method of correction for the binormal velocity fluctuation using the look-up inversion method for hot-wire anemometry. Meas. Sci. Technol., 12:1208, 2001.
76. J. P. Moro, P. V. Vukoslavčević, and V. Blet. A method to calibrate a hot-wire X-probe for applications in low-speed, variable-temperature flow. Meas. Sci. Technol., 14:1054–1062, 2003.
77. A. van Dijk and F. T. M. Nieuwstadt. The calibration of (multi-) hot-wire probes. 2. Velocity-calibration. Exp. Fluids, 36:550–564, 2004.
78. S. F. Benjamin and C. A. Roberts. Measuring flow velocity at elevated temperature with a hot wire anemometer calibrated in cold flow. Int. J. Heat Mass Transfer, 45:703–706, 2002.
79. M. Hultmark and A. J. Smits. Temperature corrections for constant temperature and constant current hot-wire anemometers. Meas. Sci. Technol., 21:105404, 2010.
80. A. van Dijk and F. T. M. Nieuwstadt. The calibration of (multi-) hot-wire probes. 1. Temperature calibration. Exp. Fluids, 36:540–549, 2004.
81. J. H. Lienhard and K. Helland. An experimental analysis of fluctuating temperature measurements using hot-wires at different overheats. Exp. Fluids, 7:265–270, 1989.
82. A. Van Hirtum and X. Pelorson. Hot film/wire calibration for low to moderate flow velocities. Meas. Sci. Technol., 21:115402, 2010.
83. Z. Yue and T. G. Malmström. A simple method for low-speed hot-wire anemometer calibration. Meas. Sci. Technol., 9:1506–1510, 1998.
84. E. Özahi, M. Ö. Carpinlioğlu, and M. Y. Gundoğdu. Simple methods for low speed calibration of hot-wire anemometers. Flow Meas. Instrum., 21:166–170, 2010.
85. G. Janke. Hot wire in wall proximity. In: G. Comte-Bellot and J. Mathieu, eds., Advances in Turbulence, Vol. I. Springer, pp. 488–498, 1987.
86. P. V. Lanspeary. Establishing very low speed, disturbance-free flow for anemometry in turbulent boundary layers. PhD thesis, University of Adelaide, Adelaide, South Australia, Australia, 1998.
87. M. Heikal, A. Antoniou, and T. Cowell. A rig for the static calibration of constant-temperature hot wires at very low velocities. Exp. Thermal Fluid Sci., 1:221–223, 1988.
88. L. P. Chua, H. S. Li, and H. Zhang. Calibration of hot wire for low speed measurements. Int. Commun. Heat Mass Transfer, 27:507–516, 2000.
89. A. M. Al-Garni. Low speed calibration of hot-wire anemometers. Flow Meas. Instrum., 18:95–98, 2007.
90. M. Zabat, F. K. Browand, and D. Plocher. In-situ swinging arm calibration for hot-film anemometers. Exp. Fluids, 12:223–228, 1992.
91. M. S. Guellouz and S. Tavoularis. A simple pendulum technique for the calibration of hot-wire anemometers over low-velocity ranges. Exp. Fluids, 18:199–203, 1995.
92. S. Kohan and W. H. Schwarz. Low speed calibration formula for vortex shedding from cylinders. Phys. Fluids, 16:1528–1529, 1973.
93. T. Lee and R. Budwig. Two improved methods for low-speed hot-wire calibration. Meas. Sci. Technol., 2:643–646, 1991.
94. A. Papangelou. A "robust" vortex-shedding anemometer. Exp. Fluids, 14:208–210, 1993.
95. M. A. Ardekani. Hot-wire calibration using vortex shedding. Meas. Sci., 42:722–729, 2009.
96. V. Strouhal. Ueber eine besondere Art der Tonerregung. Ann. Phys. Chem., 241:216–251, 1878.
97. A. Roshko. On the development of turbulent wakes from vortex streets. NACA Tech. Note 2913, 1953.
98. F. Laurantzon, R. Örlü, A. Segalini, and P. H. Alfredsson. Time-resolved measurements with a vortex flowmeter in a pulsating turbulent flow using wavelet analysis. Meas. Sci. Technol., 21:123001, 2010.
99. M. Zdravkovich. Flow around Circular Cylinders: Fundamentals. Oxford University Press Inc., New York, 2003.
100. T. Lee and R. Budwig. A study of the effect of aspect ratio on vortex shedding behind circular cylinders. Phys. Fluids, 3:309–315, 1999.
101. D. Gerich and H. Eckelmann. Influence of end plates and free ends on the shedding frequency of circular cylinders. J. Fluid Mech., 122:109–121, 1982.
102. S. S. Sattarzadeh, A. Kalpakli, and R. Örlü. Hot-wire calibration at low velocities: Revisiting the vortex shedding method. Adv. Mech. Eng., 2013:241726, 2013.
103. P. H. Alfredsson, R. Örlü, and P. Schlatter. The viscous sublayer revisited-exploiting self-similarity to determine the wall position and friction velocity. Exp. Fluids, 51:271–280, 2011.
104. R. Örlü and P. Schlatter. On the fluctuating wall shear stress in zero pressure-gradient turbulent boundary layer flows. Phys. Fluids, 23:021704, 2011.
105. R. Vinuesa, R. Örlü, and P. Schlatter. Characterization of backflow events over a wing section. J. Turbul. (In Print), 2016. http://dx.doi.org/10.1080/14685248.2016.1259626.
106. M. Hultmark, S. C. C. Bailey, and A. J. Smits. Scaling of near-wall turbulence in pipe flow. J. Fluid Mech., 649:103–113, 2010.
107. R. Örlü and P. H. Alfredsson. Comment on the scaling of the near-wall streamwise variance peak in turbulent pipe flows. Exp. Fluids, 54:1431, 2012.
108. L. V. Krishnamoorthy, D. Wood, and R. A. Antonia. Effect of wire diameter and overheat ratio near a conducting wall. Exp. Fluids, 3:121–127, 1985.
109. R. Örlü and P. H. Alfredsson. On spatial resolution issues related to time-averaged quantities using hot-wire anemometry. Exp. Fluids, 49:101–110, 2010.
110. P. Lenaers, Q. Li, G. Brethouwer, P. Schlatter, and R. Örlü. Rare backflow and extreme wall-normal velocity fluctuations in near-wall turbulence. Phys. Fluids, 24:035110, 2012.
111. P. Schlatter and R. Örlü. Assessment of direct numerical simulation data of turbulent boundary layers. J. Fluid Mech., 659:116–126, 2010.
112. G. Eitel-Amor, R. Örlü, and P. Schlatter. Simulation and validation of a spatially evolving turbulent boundary layer up to Reθ = 8300. Int. J. Heat Fluid Flow, 47:57–69, 2014.
113. J. C. del Álamo and J. Jiménez. Estimation of turbulent convection velocities and corrections to Taylor's approximation. J. Fluid Mech., 640:5–26, 2009.
114. P. Moin. Revisiting Taylor's hypothesis. J. Fluid Mech., 640:1–4, 2009.
115. R. De Kat and B. Ganapathisubramani. Frequency–wavenumber mapping in turbulent shear flows. J. Fluid Mech., 783:166–190, 2015.
116. A. S. Monin and A. M. Yaglom. Statistical Fluid Mechanics: Mechanics of Turbulence, Vol. I. MIT Press, Cambridge, MA, 1971.
117. A. A. Townsend. The Structure of Turbulent Shear Flow, 2nd ed. Cambridge University Press, Cambridge, UK, 1976.
118. F. Durst, J. Jovanovic, and Lj. Kanevce. Probability density distribution in turbulent wall boundary-layer flows. In: F. Durst, B. Launder, J. Lumley, F. W. Schmidt, and J. H. Whitelaw, eds., Turbulent Shear Flows 5, Ithaca, NY, August 7–9, 1985. Springer, Berlin, Germany, pp. 197–220, 1987.
119. F. Durst, E.-S. Zanoun, and M. Pashtrapanska. In situ calibration of hot wires close to highly heat-conducting walls. Exp. Fluids, 31:103–110, 2001.
120. F. Durst and E.-S. Zanoun. Experimental investigation of near-wall effects on hot-wire measurements. Exp. Fluids, 33:210–218, 2002.
121. A. F. Polyakov and S. A. Shindin. Peculiarities of hot-wire measurements of mean velocity and temperature in the wall vicinity. Lett. Heat Mass Transfer, 5:53–58, 1978.
122. R. Örlü, J. H. M. Fransson, and P. H. Alfredsson. On near wall measurements of wall bounded flows—The necessity of an accurate determination of the wall position. Prog. Aerosp. Sci., 46:353–387, 2010.
123. N. Hutchins and K.-S. Choi. Accurate measurements of local skin friction coefficient using hot-wire anemometry. Prog. Aerosp. Sci., 38:421–446, 2002.
124. R. Vinuesa, P. Schlatter, and H. M. Nagib. Role of data uncertainties in identifying the logarithmic region of turbulent boundary layers. Exp. Fluids, 55:1751, 2014.
125. P. H. Alfredsson and R. Örlü. The diagnostic plot—A litmus test for wall bounded turbulence data. Eur. J. Mech. B/Fluids, 29:403–406, 2010.
126. P. H. Alfredsson, A. V. Johansson, J. H. Haritonidis, and H. Eckelmann. The fluctuating wall-shear stress and the velocity field in the viscous sublayer. Phys. Fluids, 31:1026–1033, 1988.
127. L. Prandtl. Bericht über Untersuchungen zur ausgebildeten Turbulenz. ZAMM, 5:136–139, 1925.
128. R. Vinuesa. Synergetic computational and experimental studies of wall-bounded turbulent flows and their two-dimensionality. PhD thesis, Illinois Institute of Technology, Chicago, IL, 3574934, 2013.
129. R. Vinuesa and H. M. Nagib. Enhancing the accuracy of measurement techniques in high Reynolds number turbulent boundary layers for more representative comparison to their canonical representations. Eur. J. Mech. B/Fluids, 55:300–312, 2016.
130. R. Vinuesa, R. D. Duncan, and H. M. Nagib. Alternative interpretation of the Superpipe data and motivation for CICLoPE: The effect of a decreasing viscous length scale. Eur. J. Mech. B/Fluids, 58:109–116, 2016.
131. F. Durst, H. Kikura, I. Lekakis, J. Jovanovic, and Q. Ye. Wall shear stress determination from near-wall mean velocity data in turbulent pipe and channel flows. Exp. Fluids, 20:417–428, 1996.
132. T. B. Nickels. Inner scaling for wall-bounded flows subject to large pressure gradients. J. Fluid Mech., 521:217–239, 2004.
133. K. A. Chauhan, P. A. Monkewitz, and H. M. Nagib. Criteria for assessing experiments in zero pressure gradient boundary layers. Fluid Dyn. Res., 41:021404, 2009.
134. J. Laufer. Investigation of turbulent flow in a two-dimensional channel. PhD thesis, California Institute of Technology, Pasadena, CA, 1948.
135. S. Oka and Z. Kostic. Influence of wall proximity on hot-wire velocity measurements. DISA Inform., 13:29–33, 1972.
136. K. S. Hebbar. Wall proximity corrections for hot-wire readings in turbulent flows. DISA Inform., 25:15–16, 1980.
137. J. Bhatia, F. Durst, and J. Jovanovic. Corrections of hot-wire anemometer measurements near walls. J. Fluid Mech., 123:411–431, 1982.
138. Y. T. Chew, S. X. Shi, and B. C. Khoo. On the numerical near-wall corrections of single hot-wire measurements. Int. J. Heat Fluid Flow, 16:471–476, 1995.
139. J. A. B. Wills. The correction of hot-wire readings for proximity to a solid boundary. J. Fluid Mech., 12:388–396, 1962.
140. P. H. Alfredsson, S. Imayama, R. J. Lingwood, R. Örlü, and A. Segalini. Turbulent boundary layers over flat plates and rotating disks—the legacy of von Kármán: A Stockholm perspective. Eur. J. Mech. B/Fluids, 40:17–29, 2013.
141. V. Yakhot, S. C. C. Bailey, and A. J. Smits. Scaling of global properties of turbulence and skin friction in pipe and channel flows. J. Fluid Mech., 652:65–73, 2010.
142. Y. Suzuki and N. Kasagi. Evaluation of hot-wire measurements in wall shear turbulence using a direct numerical simulation database. Exp. Thermal Fluid Sci., 5:69–77, 1992.
143. H. L. Dryden, G. B. Schubauer, W. C. Mock, and H. K. Skramstad. Measurements of intensity and scale of wind-tunnel turbulence and their relation to the critical Reynolds number of spheres. NACA Tech. Rep. 581, 1937.
144. F. Frenkiel. The influence of the length of a hot wire on the measurements of turbulence. Phys. Rev., 75:1263–1264, 1949.
145. A. Ashok, S. C. C. Bailey, M. Hultmark, and A. J. Smits. Hot-wire spatial resolution effects in measurements of grid-generated turbulence. Exp. Fluids, 53:1713–1722, 2012.
146. P. V. Vukoslavčević and J. M. Wallace. Using direct numerical simulation to analyze and improve hot-wire probe sensor and array configurations for simultaneous measurement of the velocity vector and the velocity gradient tensor. Phys. Fluids, 25:110820, 2013.
147. P. V. Vukoslavčević and J. M. Wallace. The influence of the arrangements of multi-sensor probe arrays on the accuracy of simultaneously measured velocity and velocity gradient-based statistics in turbulent shear flows. Exp. Fluids, 54:1537, 2013.
148. A. V. Johansson and P. H. Alfredsson. Effects of imperfect spatial resolution on measurements of wall-bounded turbulent shear flows. J. Fluid Mech., 137:409–421, 1983.
149. I. Marusic, B. J. McKeon, P. A. Monkewitz, H. M. Nagib, A. J. Smits, and K. R. Sreenivasan. Wall-bounded turbulent flows at high Reynolds numbers: Recent advances and key issues. Phys. Fluids, 22:065103, 2010.
150. J. C. Klewicki. Reynolds number dependence, scaling, and dynamics of turbulent boundary layers. J. Fluid Eng., 132:094001, 2010.
151. A. J. Smits, B. J. McKeon, and I. Marusic. High-Reynolds number wall turbulence. Annu. Rev. Fluid Mech., 43:353–375, 2011.
152. N. Hutchins, T. B. Nickels, I. Marusic, and M. S. Chong. Hot-wire spatial resolution issues in wall-bounded turbulence. J. Fluid Mech., 635:103–136, 2009.
153. P. H. Alfredsson, A. Segalini, and R. Örlü. A new scaling for the streamwise turbulence intensity in wall-bounded turbulent flows and what it tells us about the "outer" peak. Phys. Fluids, 23:041702, 2011.
154. P. H. Alfredsson, R. Örlü, and A. Segalini. A new formulation for the streamwise turbulence intensity distribution in wall-bounded turbulent flows. Eur. J. Mech. B/Fluids, 36:167–175, 2012.
155. M. Hultmark, M. Vallikivi, S. C. C. Bailey, and A. J. Smits. Turbulent pipe flow at extreme Reynolds numbers. Phys. Rev. Lett., 108:094501, 2012.
156. A. Segalini, R. Örlü, P. Schlatter, P. H. Alfredsson, J.-D. Rüedi, and A. Talamelli. A method to estimate turbulence intensity and transverse Taylor microscale in turbulent flows from spatially averaged hot-wire data. Exp. Fluids, 51:693–700, 2011.
157. A. J. Smits, J. P. Monty, M. Hultmark, S. C. C. Bailey, N. Hutchins, and I. Marusic. Spatial resolution correction for wall-bounded turbulence measurements. J. Fluid Mech., 676:41–53, 2011.
158. P. A. Monkewitz, R. D. Duncan, and H. M. Nagib. Correcting hot-wire measurements of stream-wise turbulence intensity in boundary layers. Phys. Fluids, 22:091701, 2010.
159. C. Chin, N. Hutchins, A. S. H. Ooi, and I. Marusic. Use of direct numerical simulation (DNS) data to investigate spatial resolution issues in measurements of wall-bounded turbulence. Meas. Sci. Technol., 20:115401, 2009.
160. C. Chin, N. Hutchins, A. S. H. Ooi, and I. Marusic. Spatial resolution correction for hot-wire anemometry in wall turbulence. Exp. Fluids, 50:1443–1453, 2011.
161. J. Philip, N. Hutchins, J. P. Monty, and I. Marusic. Spatial averaging of velocity measurements in wall-bounded turbulence: Single hot-wires. Meas. Sci. Technol., 24:115301, 2013.
162. A. Talamelli, A. Segalini, R. Örlü, P. Schlatter, and P. H. Alfredsson. Correcting hot-wire spatial resolution effects in third- and fourth-order velocity moments in wall-bounded turbulence. Exp. Fluids, 54:1496, 2013.
163. P. V. Vukoslavčević and J. M. Wallace. On the accuracy of simultaneously measuring velocity component statistics in turbulent wall flows with arrays of three or four hot-wire sensors. Exp. Fluids, 51:1509–1519, 2011.
164. J. Philip, R. Baidya, N. Hutchins, J. P. Monty, and I. Marusic. Spatial averaging of streamwise and spanwise velocity measurements in wall-bounded turbulence using V- and X-probes. Meas. Sci. Technol., 24:115302, 2013.
165. P. Moin and P. R. Spalart. Contributions of numerical simulation data bases to the physics, modeling and measurement of turbulence. NASA TM 100022, 1987.
166. A. Lozano-Durán and J. Jiménez. Effect of the computational domain on direct simulations of turbulent channels up to Reτ = 4200. Phys. Fluids, 26:011702, 2014.
167. M. Lee and R. D. Moser. Direct numerical simulation of turbulent channel flow up to Reτ ≈ 5200. J. Fluid Mech., 774:395–415, 2015.
168. A. Segalini, G. Bellani, G. Sardina, L. Brandt, and E. A. Variano. Corrections for one- and two-point statistics measured with coarse-resolution particle image velocimetry. Exp. Fluids, 55:1739, 2014.
169. A. Segalini, A. Cimarelli, J.-D. Rüedi, E. De Angelis, and A. Talamelli. Effect of the spatial filtering and alignment error of hot-wire probes in a wall-bounded turbulent flow. Meas. Sci. Technol., 22:105408, 2011.
170. M. A. Miller, B. Estejab, and S. C. C. Bailey. Evaluation of hot-wire spatial filtering corrections for wall turbulence and correction for end-conduction effects. Exp. Fluids, 55:1735, 2014.
171. M. Hultmark, M. Vallikivi, S. C. C. Bailey, and A. J. Smits. Logarithmic scaling of turbulence in smooth- and rough-wall pipe flow. J. Fluid Mech., 728:376–395, 2013.
172. M. Hultmark, A. Ashok, and A. J. Smits. A new criterion for end-conduction effects in hot-wire anemometry. Meas. Sci. Technol., 22:055401, 2011.
173. M. Tagawa, K. Kato, and Y. Ohta. Response compensation of fine-wire temperature sensors. Rev. Sci. Instrum., 76:094904, 2005.
174. K. Kato and M. Tagawa. Robust response-compensation scheme for estimating the thermal time-constants of fine-wire temperature sensors. Rev. Sci. Instrum., 77:106103, 2006.
175. M. Nabavi. Invited Review Article: Unsteady and pulsating pressure and temperature: A review of experimental techniques. Rev. Sci. Instrum., 81:031101, 2010.
176. T. Tsuji, Y. Nagano, and M. Tagawa. Frequency response and instantaneous temperature profile of cold-wire sensors for fluid temperature fluctuation measurements. Exp. Fluids, 13:171–178, 1992.
177. G. Arwatz, C. Bahri, A. J. Smits, and M. Hultmark. Dynamic calibration and modeling of a cold wire for temperature measurement. Meas. Sci. Technol., 24:125301, 2013.
178. A. Talamelli, A. Segalini, R. Örlü, and G. Buresti. A note on the effect of the separation wall in the initial mixing of coaxial jets. Exp. Fluids, 54:1483, 2013.
179. A. Berson, P. Blanc-Benon, and G. Comte-Bellot. On the use of hot-wire anemometry in pulsating flows. A comment on 'A critical review on advanced velocity measurement techniques in pulsating flows'. Meas. Sci. Technol., 21:128001, 2010.
180. A. Kalpakli Vester, R. Örlü, and P. H. Alfredsson. Pulsatile turbulent flow in straight and curved pipes—Interpretation and decomposition of hot-wire signals. Flow Turbul. Combust., 94:305–321, 2015.
181. N. Karamanis, R. F. Martinez-Botas, and C. C. Su. Mixed flow turbines: Inlet and exit flow under steady and pulsating conditions. J. Turbomach., 123:359–371, 2001.
182. R. Örlü, F. Malizia, A. Cimarelli, P. Schlatter, and A. Talamelli. The influence of temperature fluctuations on hot-wire measurements in wall-bounded turbulence. Exp. Fluids, 55:1781, 2014.
183. A. Talamelli, F. Malizia, R. Örlü, A. Cimarelli, and P. Schlatter. Temperature effects in hot-wire measurements on higher-order moments in wall turbulence. Progress in Turbulence VI, Proceedings of the iTi Conference in Turbulence, Bertinoro, Italy, pp. 185–189, 2016.
184. K. M. Talluru, V. Kulandaivelu, N. Hutchins, and I. Marusic. A calibration technique to correct sensor drift issues in hot-wire anemometry. Meas. Sci. Technol., 25:105304, 2014.
185. N. Hutchins, J. P. Monty, B. Ganapathisubramani, H. C. H. Ng, and I. Marusic. Three-dimensional conditional structure of a high-Reynolds-number turbulent boundary layer. J. Fluid Mech., 673:255–285, 2011.
186. S. C. C. Bailey, M. Vallikivi, M. Hultmark, and A. J. Smits. Estimating the value of von Kármán's constant in turbulent pipe flow. J. Fluid Mech., 749:79–98, 2014.
187. A. Segalini, R. Örlü, and P. H. Alfredsson. Uncertainty analysis of the von Kármán constant. Exp. Fluids, 54:1460, 2013.
188. F. E. Jørgensen. How to Measure Turbulence with Hot-Wire Anemometers: A Practical Guide. Dantec Dynamics, 2002.
189. A. V. Johansson and P. H. Alfredsson. Experimentella metoder inom Strömningsmekaniken. KTH Mechanics, Stockholm, Sweden, 1988.
190. R. Örlü and P. Schlatter. Comparison of experiments and simulations for zero pressure gradient turbulent boundary layers at moderate Reynolds numbers. Exp. Fluids, 54:1547, 2013.
CHAPTER TEN

Laser velocimetry

John J. Charonko

Contents

10.1 Overview of laser-based methods 306
Laser Doppler velocimetry/anemometry 306
Phase Doppler anemometry 307
Particle image velocimetry 307
10.2 Lasers 309
How do they work? 309
Types of lasers common in velocimetry 310
10.3 Principles of PIV 311
Basics of the imaging system 311
Image cross-correlation 313
Discrete cross-correlation 316
Subpixel estimation 316
Performance of basic PIV algorithms 317
Iterative schemes 324
Fourier-based cross-correlation theory 326
Advanced processing algorithms 327
Particle tracking 328
Stereo PIV 329
10.4 Experimental design 330
Flow tracers 330
Camera selection 330
Setup of laser optics 333
Camera calibration 334
10.5 Post-processing 336
Data validation and replacement 336
Derivative estimation 338
Coherent structure estimation 339
Pressure and force data 343
10.6 Estimation of error and uncertainty 345
Uncertainty due to cross-correlation 348
Uncertainty due to image calibration 349
Uncertainties due to timing errors 350
Problems 351
References 353

Many of the techniques discussed elsewhere in this book alter the flow being studied in some
way, either by inserting a probe into the flow or by modifying the model to provide access
ports for sensors. In contrast, optical-based methods like those discussed in Chapters 7 and 8
can provide noninvasive measurements of scalar properties like density that only minimally
affect the flow. For velocity measurements, laser-based techniques provide similar access to

velocity data, either at a point or over entire fields. In particular, particle image velocimetry
(PIV) has grown popular due to its ability to measure entire velocity fields around aerody-
namic bodies without modifying the flow, enabling a quantitative and detailed study of the
flow structure. There are trade-offs, of course; optical- and laser-based techniques require
complicated and expensive hardware, and more traditional approaches still often provide bet-
ter temporal resolution, or higher accuracy and precision. Therefore, the relative strengths and
weaknesses of each method should be compared to the goals of the experiment, and often a
combination of techniques is found to be best. Nevertheless, the flexibility and performance
of laser-based methods have led them to become workhorses in modern experimental practice.
In this chapter, we will first summarize three of the most popular laser velocimetry tech-
niques and then spend the rest of the chapter detailing the principles of operation of PIV and
its analysis. Here, we will focus on planar measurements; the next chapter details the exten-
sion of PIV to volumetric measurements.

10.1 Overview of laser-based methods

The use of lasers in experimental fluid dynamics has come to include a number of related
techniques, from methods like planar laser-induced fluorescence that can measure scalar
fields like temperature and concentration to a wide variety of methods for sampling velocity
fields. Here, we will briefly introduce three of the most common and compare some of their
strengths and weaknesses.

Laser Doppler velocimetry/anemometry

The first method we will discuss is known as either laser Doppler velocimetry (LDV) or
anemometry (LDA). Early implementations appeared soon after the development of the first
laser [1], with the dual-beam configuration used in modern systems and described in the fol-
lowing section appearing a few years later [2,3]. Like its extension, phase Doppler anemom-
etry (PDA), LDV uses frequency shifts in the laser light scattered from particles in the flow
to extract their velocity, v, in the interrogation region, making the properties of the beam an
integral part of the experimental measurement. Because the expected frequency shift f_D is
very small compared to the frequency of the original source f_b (f_D ≈ 1 MHz vs. f_b ≈ 10^14 Hz),
this signal can be difficult to isolate. Instead, intersecting pairs of beams are used (with direc-
tions e_1 and e_2 and associated scattered frequencies f_1 and f_2), and a single receiver mixes
both frequency-shifted beams into a new signal with a frequency, f_D, equal to the difference
in the two shifts for an incident beam of wavelength λ_b:

f_D = f_1 - f_2 = \frac{\vec{v} \cdot (\vec{e}_1 - \vec{e}_2)}{\lambda_b}    (10.1)

In order to resolve multiple velocity components, two or more beam pairs of different wave-
lengths are required, as shown in Figure 10.1, with each wavelength analyzed separately. For
well-designed hardware and processing algorithms, the mean data rate of such systems is
typically only limited by the rate of particles (\dot{N}) crossing the interrogation volume (V_0)
of the system and should, for particles moving in the x-direction with mean speed \overline{v}_x, be on
average approximately

\dot{N} = C \pi b_0 c_0 \overline{v}_x    (10.2)

though when the average number of particles in the volume, N = C V_0 = (4/3)\pi a_0 b_0 c_0 C,
becomes higher than about 0.1, the probability of having more than a single particle cross the
region increases, causing an increase in the number of failed measurements. Here, C is the
average volume concentration of particles and a_0, b_0, and c_0 are the half-axes of the ellipsoidal
intersection region. Typically, this intersection region has a width on the order of 100 μm and
a length 5×–10× larger, which would yield an average data rate on the order of 10,000 samples/s for a
10 m/s flow with N = 0.1, though most LDV hardware can process at much higher rates, includ-
ing compensating for partially overlapped Doppler bursts. It is important to note, however, that
the spacing of the flow tracers is governed by Poisson statistics and therefore the arrival times
are not uniformly separated. This introduces a number of complications in post-processing but
can also benefit later analysis of the turbulent spectra. Fortunately, most commercial LDV pack-
ages handle such details for the user automatically. Recent extensions of the technique also offer
the ability to extract acceleration information [4] or particle position and gradients within the
measurement volume [5].

FIGURE 10.1 (a) Schematic of a two-component LDV system in the dual-beam configuration with
integrated transmitter and receiver optics. (b) In the dual-beam configuration, the interference
between the Doppler-shifted scattered light from two beams is easier to measure than the frequency
change of a single scattered beam.
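As a rough check on these numbers, the short Python sketch below evaluates Equations 10.1
and 10.2 for an assumed geometry (half-axes b_0 = c_0 = 50 μm with a_0 = 5 b_0, a 532 nm
source, and a 10° beam-crossing angle); all values are illustrative choices, not the properties
of any particular instrument:

import math

# Illustrative check of Equations 10.1 and 10.2 (assumed, not measured, values).
b0 = c0 = 50e-6           # half-axes of the ellipsoidal volume, m (100 um width)
a0 = 5 * b0               # length assumed 5x the width
v_x = 10.0                # mean streamwise particle speed, m/s
N_avg = 0.1               # target mean number of particles in the volume

V0 = (4.0 / 3.0) * math.pi * a0 * b0 * c0   # interrogation volume, m^3
C = N_avg / V0                              # particle concentration, 1/m^3
N_dot = C * math.pi * b0 * c0 * v_x         # Equation 10.2: mean data rate, 1/s

theta = math.radians(10.0)                  # assumed full beam-crossing angle
lam_b = 532e-9                              # laser wavelength, m
f_D = 2.0 * math.sin(theta / 2.0) * v_x / lam_b   # Equation 10.1, in-plane motion

print(f"data rate ~ {N_dot:.0f} samples/s")        # ~3000 samples/s here
print(f"Doppler frequency ~ {f_D / 1e6:.1f} MHz")  # a few MHz

With these choices the estimate lands at roughly 3 × 10^3 samples/s, consistent to within the
order of magnitude quoted above; shorter measurement volumes push the rate toward 10^4
samples/s.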

Phase Doppler anemometry

As previously mentioned, PDA is an extension of LDV. In this technique, multiple receivers
(see Figure 10.2) are used to sample the Doppler-shifted burst that a particle transiting the
interrogation volume creates. This creates a size-dependent phase shift in the frequency signal
between the two receivers. Such systems are of great use when attempting to simultaneously
measure the speed and size of bubbles or droplets in bubbly flow and sprays, or when trying to
understand how turbulence is modified by the presence of particles in the flow.

FIGURE 10.2 Schematic of a PDA system with two receivers. The phase of the scattered light
(Φ1, Φ2) is proportional to the position of the receivers and the size of the particle in the interroga-
tion spot. A third receiver can extend the dynamic range of the system.

Particle image velocimetry

Like PDA and LDV, PIV is a noninvasive optical method for measuring the velocity of a flow
field. However, unlike these two other methods (as implied by its name), PIV uses images
of a tracer-laden flow to reconstruct the velocity of the surrounding fluid elements and does
not directly use the properties of the light beam in making the measurement (Figure 10.3).
Instead, the laser is used essentially as a powerful camera flash, reducing motion blur and
enabling very short exposure times so that high-speed particle motion can be captured over
small timescales. The acquired images are then analyzed to reconstruct the particles' motion
between sequential laser pulses and, knowing the magnification and time delay, their velocity
(as shown schematically in Figure 10.4). In modern systems, the acquired images are divided
into numerous smaller regions of interest, and the particle motion estimation on each area
is typically performed using correlation-based methods, but approaches including intensity
tracking or image flow, minimization of differences, and direct tracking of individual particles
are also used.

FIGURE 10.3 Schematic of a planar PIV setup using a dual-framing camera for small interframe
times.

Over the years, numerous variations of this basic design have been explored as the method
evolved. Early PIV implementations used analog film cameras in conjunction with digital
scanners, and single images were exposed with two or more laser pulses [6,7]. These images
were then processed with autocorrelation algorithms to yield particle displacement estimates.
However, the main weaknesses of this approach were that near-zero displacements could not
be resolved because of the self-correlation peak, and due to symmetry the true direction could
be positive or negative and had to be inferred from context. Researchers quickly realized that

if they could expose two frames with a short interframe delay time, they could instead use a
cross-correlation analysis, eliminating both the directional ambiguity and the contamination
from the autocorrelation peak [8]. Additionally, transitioning to digital cameras allowed for
streamlined acquisition and processing methods, although at first this meant losses in resolu-
tion and sensitivity compared to the best custom-built film camera hardware [9]. However, the
continual improvement of digital cameras as well as the development of hardware targeted
toward the needs of high-speed imaging has recovered much of that loss, as well as made pos-
sible the acquisition of thousands of images at rates exceeding a thousand frames per second,
feats very difficult using analog-only techniques.

FIGURE 10.4 In PIV, the velocity u(t_{1/2}) of a group of particles is estimated from the best match
to their imaged displacements between times t0 and t1. For the best performance, Δt should be
small enough that the trajectories are locally straight and the velocity is nearly constant.

Table 10.1 Typical experimental capabilities of laser-based velocimetry techniques

Method       Spatial resolution (μm)   Max sampling rate      Measurement type
LDA          50–100                    1 to 1000+ kHz (a)     1D, 3C velocity
PDA          50–100                    1 to 1000+ kHz (a)     1D, 3C velocity; particle size
Planar PIV   20–1000                   10 to 10+ kHz          2D, 2 or 3C velocity
Tomo-PIV     20–1000                   10 to 10+ kHz          3D, 3C velocity
Micro-PIV    1–20                      Typically mean only    2D, 2C velocity

2C, two-component; 3C, three-component; 1D, one-dimensional; 2D, two-dimensional;
3D, three-dimensional.
(a) Dependent on flow seeding, interrogation volume, and flow velocity.
Because of PIV's image-based approach, unlike LDV and PDA, it is capable of simultaneous
measurements of entire 2D regions or, using recent extensions to the method, 3D volumes
(see Chapter 11). Due to this ability of PIV to capture spatially resolved instantaneous and
average flow properties at a wide variety of scales, ranging from the microscopic (micro-PIV)
[10] to the very large scale [11], PIV has emerged in the last three decades as one of the
workhorses of modern experimental fluid mechanics [12]. Although it still has trade-offs in
comparison to LDV and PDA in terms of sampling rates and dynamic range, the ability to
investigate the global evolution of the flow yields engineering and physical insights not easily
achievable through the examination of time-averaged statistical properties alone (see Table 10.1
for a comparison of some of the primary differences between the methods). Therefore, the
remainder of this chapter will focus on the principles of designing, performing, and analyzing
a PIV experiment. Readers interested in the details of the use of either of these two other
important methods are encouraged to seek out one of the many references that delve into them
in greater detail [13–15].

10.2 Lasers

Although lasers designed for PIV are readily available and relatively easy to use without need-
ing to first become a laser expert, because their use is so fundamental to the technique, we will
first review some basic details of their construction and operation. More detailed information
specific to PIV can be found in specialized textbooks on the method [16,17].

How do they work?

The word laser is originally derived from the acronym LASER—light amplification by stimu-
lated emission of radiation. As described by the name, lasers work by exciting a material in
such a way that when perturbed by an incoming photon, it emits at least one extra photon
during the collision, leading to the build-up of light within the device until enough energy is
present to allow the release of a beam of coherent light. For this to work, the material, which
can be a gas, liquid, or solid, usually has to be put into some sort of excited state through the
310 JOHN J. CHARONKO

initial application of external energy, although the exact mechanism is dependent on the type
of laser and the material chosen. The choice of material and method of excitation also dictates
the resulting wavelength produced. Due to this amplification process, most lasers produce
only a single or small set of discrete wavelengths, rather than a continuous spectrum of light,
and the emitted light tends to be polarized in a predictable fashion, again dependent on design.
Fortunately, for researchers, a wide variety of reliable turnkey systems are now commercially
available, and most experimenters do not need to modify or design their laser systems during
typical usage. However, routine maintenance is an important part of keeping these expensive
systems at peak performance.

Types of lasers common in velocimetry

For PIV experiments, we are using lasers essentially as precisely controlled and very bright
and fast camera flashes. Therefore, when selecting a system, several considerations that make
devices used for scientific purposes very different from the laser pointers you might be familiar
with must be taken into account. First, since we wish to freeze the apparent motion of the par-
ticles, a pulsed laser is preferable to one designed to operate in continuous mode. Although we
can achieve a digital shuttering effect by controlling the exposure time on our cameras, doing
so means that most of the energy produced by the laser is wasted. For instance, many common
PIV lasers have pulse durations on the order of 5–100 ns. If we switch from firing two 10 ns
pulses at 10 Hz (every 100 ms) to shuttering a continuous laser at 1 μs (assuming our cameras
can even produce exposures that short) at the same repetition rate, we are wasting 99.998% of
the available light. Even for a time-resolved experiment running at 10 kHz, the losses are on
the order of 99%. We can extend the exposure, but we will run into problems of streaking long
before we meaningfully increase the light gathered.
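This light-budget arithmetic is easy to verify; a minimal Python check of the two waste
figures quoted above:

def wasted_fraction(exposure_s, pulses_per_pair, pair_rate_hz):
    """Fraction of a continuous laser's output that falls outside the
    camera exposures and is therefore wasted."""
    captured_duty_cycle = exposure_s * pulses_per_pair * pair_rate_hz
    return 1.0 - captured_duty_cycle

# Two 1 us shuttered exposures per image pair, at 10 Hz and then at 10 kHz.
print(f"{wasted_fraction(1e-6, 2, 10):.3%}")      # 99.998%
print(f"{wasted_fraction(1e-6, 2, 10_000):.0%}")  # 98%, i.e., on the order of 99%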
The second main consideration is available light. Even with pulsed lasers, in a typical
experiment, the energy per pulse and the sensitivity of the camera are the limiting factors
determining how large an interrogation region we can sample, even if the camera otherwise
has enough resolution. This is particularly true for volumetric techniques, where rather than
forming the laser beam into a sheet, a region must be illuminated, drastically decreasing the
energy per unit volume to which a given flow tracer is exposed, and making time-resolved
measurements using tomographic PIV very difficult in anything but small volumes. As a
result, almost every system used for laser velocimetry is designated as a Class 4 laser, which
means that eye damage will occur before the user has time to blink.
To fulfill these needs of high pulse energy in the visible spectrum (to allow the use of
normal camera sensors) at reasonable repetition rates and economical prices, laser users
and developers have gravitated toward two similar types of solid-state lasers: Q-switched
Nd:YAG and Nd:YLF models. Both use a crystal to produce the desired wavelengths
and are often pumped with a flashlamp to initiate lasing. A Q-switch is a device that
allows the laser to build up energy until it is released all at once to form a single pulse
rather than a continuous beam. Both types natively produce near-infrared laser light
(~1060 nm) that is passed through frequency-doubling optics to yield a laser beam in the
green spectrum (~530 nm).
The main difference between the two types of lasers is that Nd:YAG lasers are typically
designed to fire at repetition rates around 10 Hz but with larger energy per pulse, while Nd:YLF
lasers are usually capable of repetition rates up to 10 kHz, but with only a tenth or less of the
energy per pulse. High-speed Nd:YAG lasers also exist, but when run at high frequencies
they show similar performance to Nd:YLF models. As a result, Nd:YLF lasers are typically
only paired with high-speed complementary metal–oxide–semiconductor (CMOS) cameras
used for time-resolved PIV experiments, while for statistical measurements of turbulent flows
Nd:YAG lasers are selected for their greater energy per pulse. For both types, PIV systems are
frequently sold with two laser heads coupled with beam-combining optics to allow indepen-
dent control of two laser pulses, enabling the very short time delays needed for dual-exposure
PIV at high speeds or large magnifications. Other models that can pulse multiple times per
head also exist but are less common. Typical performance characteristics for each of these
laser types are listed in Table 10.2.
LASER VELOCIMETRY 311

Table 10.2 Typical performance characteristics of commonly used particle image velocimetry lasers

Laser type   Wavelengths (nm)   Pulse energy (mJ)   Repetition rate   Pulse duration (ns)
Nd:YAG       1064/532           50–400              5–100 Hz          10
Nd:YLF       1054/527           5–30                1–10 kHz          150

10.3 Principles of PIV

As previously discussed, PIV is a camera-based method that reconstructs the velocities within
the imaged region by tracking the motion of flow-tracing particles (Figure 10.4). While
this can be accomplished through many means, it is most often performed through the use
of image cross-correlation methods to search for matching particle patterns across pairs of
images. In the standard implementation of a PIV system, these images are of a planar region
that has been illuminated by laser light shaped by beam-forming optics into a thin (usually
1 mm or less) but broad sheet. A single camera is then placed perpendicular to this sheet to
record the motion of fluid passing along it (see Figure 10.3). The purpose of using a sheet
instead of a volume is to limit the effects of perspective and motion perpendicular to the field
of view, since we cannot usually resolve these with only a single camera, as well as to increase
the amount of light delivered per unit visible area. Thus, standard PIV can only capture two
components of velocity within the 2D laser sheet, and significant motion in the through-plane
direction will cause the measurement to fail, since a particle seen in the first frame of a pair
can pass completely through the laser sheet before the second image is taken, making this
implementation best suited for flows that are nearly 2D.

Basics of the imaging system

The images acquired by such a system are 2D projections, formed on the camera sensor, of
the volume imaged by a series of lens optics. We represent an arbitrary position within the
imaged volume in world coordinates by \vec{x} = (x, y, z), and the position on the planar sensor
by \vec{X} = (X, Y). Then, these two coordinate systems can be related by the arbitrarily complex
function F,

\vec{X} = F(\vec{x})    (10.3)

which is dictated both by the lens system and by all the material that light must pass through
on its way from the particle field to the sensor. This second point is important to remember for
PIV, since we are often imaging at an angle across multiple media with differing indices of
refraction, such as water flowing through a polycarbonate tunnel. For planar two-component
PIV, it is often sufficient, at least for a first approximation, to only consider magnification and
assume that due to the use of a laser sheet we are only viewing an infinitely thin slice at z = 0.
In this case, we can approximate \vec{X} as

\begin{pmatrix} X \\ Y \end{pmatrix} \approx M \begin{pmatrix} x \\ y \end{pmatrix}    (10.4)

where M is the magnification of the total imaging system. Depending on the experiment, a
more detailed calibration of the function F may be required. For multicamera systems, such
as in stereo and volumetric PIV, an accurate approximation of F for each camera is crucial to
a successful measurement. Image calibration will be discussed in more detail in the "Camera
calibration" section; for now, we will assume a uniform M is sufficient.
Because we are dealing with real lens systems and particles of finite size that are often rela-
tively close in diameter to the wavelength, λ, of the illuminating light source and potentially
only partially in focus, the apparent size on the image plane, d_τ, of flow tracers with a true
diameter, d_p, can be estimated by the formula

d_\tau^2 \approx M^2 d_p^2 + d_s^2 + d_z^2    (10.5)

where the diffraction-limited spot diameter, d_s, is

d_s = 2.44 (1 + M) f_\# \lambda    (10.6)

and the apparent increase in size due to loss of focus, d_z, is (via Olsen and Adrian [18])

d_z = \frac{M z D_a}{z_0 + z}    (10.7)

Additional terms, such as image aberration, can also be included if needed [17,19].
Here, the parameters of the lens system appear in the form of the f-number of the lens, f_\#,
which is defined in terms of the effective focal distance of the system, f, and the lens aperture
diameter, D_a:

f_\# \approx \frac{f}{D_a}    (10.8)

The location of the object in the out-of-plane direction, z, is measured relative to z_0, the object
distance, which is the distance of the in-focus plane from the effective center of the system.
The center of the lens system is furthermore located some distance Z_0 from the imaging plane
(typically the camera sensor), as shown in Figure 10.5. For ideal thin lenses, these two dis-
tances have a fixed relationship governed by the focal distance of the lens and described by
the Gaussian lens law:

\frac{1}{f} = \frac{1}{z_0} + \frac{1}{Z_0}    (10.9)

And the magnification is given by

M = \frac{Z_0}{z_0} = \frac{s_i}{s_o}    (10.10)

in terms of either the object and image distances, or the object and image sizes, s_o and s_i,
respectively.
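Equations 10.5 through 10.10 are straightforward to evaluate; the Python sketch below sizes
the particle images for one assumed setup (a 100 mm lens focused at 0.5 m, f/8, 532 nm light,
1 μm tracers; all values are arbitrary examples, not recommendations):

import math

def magnification(z0, f):
    """Solve the Gaussian lens law (10.9) for Z0, then apply M = Z0/z0 (10.10)."""
    Z0 = 1.0 / (1.0 / f - 1.0 / z0)
    return Z0 / z0

def particle_image_diameter(d_p, M, f_num, lam, z=0.0, z0=1.0, D_a=1.0):
    """Apparent particle image diameter on the sensor, Equations 10.5-10.7."""
    d_s = 2.44 * (1.0 + M) * f_num * lam       # diffraction spot (10.6)
    d_z = M * z * D_a / (z0 + z)               # defocus blur (10.7), 0 in focus
    return math.sqrt((M * d_p) ** 2 + d_s ** 2 + d_z ** 2)   # (10.5)

f, z0, lam, d_p = 0.100, 0.5, 532e-9, 1e-6   # lens, focus distance, light, tracer
D_a = f / 8.0                                # aperture giving f/8 (Equation 10.8)
M = magnification(z0, f)                     # 0.25 for these numbers
d_tau = particle_image_diameter(d_p, M, f / D_a, lam, z=0.0, z0=z0, D_a=D_a)
print(f"M = {M:.2f}, d_tau = {d_tau * 1e6:.1f} um")   # ~13 um on the sensor

Note that for micron-sized tracers the geometric image M d_p is negligible next to the
diffraction spot d_s, which is why recorded particle images are usually several pixels wide
regardless of the physical tracer size.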
FIGURE 10.5 For an ideal thin lens, the image and object distances (Z0, z0) and sizes (si, so) are
geometrically related to the focal length of the lens (f).

Using this imaging system, we can then acquire a set of two images, I_1 and I_2, separated by
a time delay of Δt. These images sample a particle field illuminated by a laser sheet that we
will assume only varies in the z direction, as described by the function J(z), within the field of
view, and that the particle images rarely overlap, such that they can be approximated as

I_1(\vec{X}) = \sum_p \tau_p(\vec{X}) * J(z_p)\, \delta(\vec{X} - \vec{X}_p(t))    (10.11)

and

I_2(\vec{X}) = \sum_p \tau_p(\vec{X}) * J(z_p)\, \delta(\vec{X} - \vec{X}_p(t + \Delta t))    (10.12)

where
δ is the delta function
\vec{X}_p is the list of time-varying particle positions
τ_p is the apparent image shape of the particle p
* is the convolution operator

Under typical PIV conditions, the image of a single particle can be closely approximated by
a Gaussian function [16,19]:

\tau_p = \exp\left( -8 \frac{\vec{X}^2}{d_\tau^2} \right)    (10.13)

In general, the apparent particle positions \vec{X}_p(t) and \vec{X}_p(t + \Delta t) are related by an
arbitrary displacement field and the camera transform function F, but for now we will consider
only the case of uniform motion \Delta\vec{X} such that \vec{X}_p(t + \Delta t) = \vec{X}_p(t) + \Delta\vec{X}.
In this case, I_2 then just becomes a shifted version of I_1:

I_2(\vec{X}) = I_1(\vec{X} - \Delta\vec{X}) = \sum_p \tau_p(\vec{X}) * J(z_p)\, \delta(\vec{X} - \vec{X}_p(t) - \Delta\vec{X})    (10.14)

Additionally, due to the discretization imposed by the camera sensor, the intensity recorded
at each discrete image location ("pixel") on the sensor is actually an integral of the collected
light intensity over a small nearby region, not a point sampling of I_1 or I_2. The fraction of
the geometric pixel spacing covered by this light-sensitive region is called the fill factor. We
will neglect this here, though its effect can be important for the accuracy of PIV algorithms
applied to real images. Real images will also feature various types of image noise, including
noise imposed by the operation of the camera and sources of background illumination, all of
which impair the optimal behavior of PIV algorithms and should be minimized where possible
before the experiment, or filtered out as much as practical before processing if not.
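These image models are easy to realize numerically. The following NumPy sketch renders a
synthetic image pair according to Equations 10.11 through 10.14, assuming a uniform sheet
profile J(z) = 1, no noise, and an arbitrary choice of particle count, diameter, and shift:

import numpy as np

def render(positions, d_tau=3.0, size=64):
    """Sum Gaussian particle images (Equation 10.13) at the given positions."""
    img = np.zeros((size, size))
    rows, cols = np.mgrid[0:size, 0:size]
    for x_p, y_p in positions:
        r2 = (cols - x_p) ** 2 + (rows - y_p) ** 2
        img += np.exp(-8.0 * r2 / d_tau ** 2)
    return img

rng = np.random.default_rng(0)
positions = rng.uniform(8.0, 56.0, size=(20, 2))   # 20 random tracers, (x, y)
shift = np.array([4.0, 2.0])                        # uniform displacement DX, px
I1 = render(positions)                              # Equation 10.11
I2 = render(positions + shift)                      # Equation 10.14: shifted I1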

Image cross-correlation

Now that we have defined our images, we can use their known relationship to search for the
most likely value of \Delta\vec{X} that relates them. In PIV, this will typically be done using some
form of cross-correlation operation, though other methods, such as the minimization of the
sum of squared differences between the images, can be used.
The cross-correlation of the two images, I_1 and I_2, can be defined as follows:

R(\vec{S}) = \int I_1(\vec{X})\, I_2(\vec{X} + \vec{S})\, d\vec{X}    (10.15)

with \vec{S} being the imposed shift between the two images. Now, since the convolution of a
shape function like τ_p with a delta function just shifts τ_p to the location of the delta, if we
substitute Equations 10.11 and 10.12 into (10.15) and add arbitrary background noise fields
N_1 and N_2,
314 JOHN J. CHARONKO

we can obtain the following product of two sums representing the cross-correlation of these
images. Here, for convenience, we deine the peak intensity of each particle at time t or t + Δt
as J1p or J2p, respectively.

æ öæ ö
R (S ) = ò ç
ç
è
åJp
t
1p p ( X - X p ) + N1 ( X ) ÷÷ çç åJ 2q tq ( X - Xq - DX + S ) + N2 ( X ) ÷÷ dX
øè q ø
(10.16)

If we examine the case of a single Gaussian particle (p = q = 1 only) according to the definition of Equation 10.13 and substitute it into Equation 10.16 with no background noise, we can quickly see that the shape of the resulting correlation is another Gaussian function, with its diameter expanded by a factor of √2, a magnitude G determined by the particle diameter, the dimension of the integral (1D, 2D, or 3D), and the intensity of the particle in each image, and its center shifted from the origin by a distance equal to the displacement of the particle.

R(S) = G \exp\!\left( -8 \, \frac{(S - \Delta X)^2}{2 d_\tau^2} \right)    (10.17)

Returning to Equation 10.16, we can see that the product of the Gaussian terms will in each case add to the correlation plane R(S) a Gaussian function at the location of the distance between particles p and q, as shown discretely in Figure 10.6. For the case where the particles are the same (p = q), the location of this correlation peak will be at ΔX (Figure 10.6b), while for p ≠ q (the particles are not the same) a correlation peak will be produced in a random location that is unlikely to repeat or overlap with other false correlations (Figure 10.6c and d). This is the basis for the use of cross-correlation to determine the displacement between two images with a constant shift. To find the displacement, we can just search for the largest peak in the correlation plane and determine its location relative to the origin.

\Delta X = \big( S \;\big|\; R(S) = \max[R] \big)    (10.18)

FIGURE 10.6 Example cross-correlation field between two interrogation regions. (a) Particle image patterns for frames 1 and 2, shifted by ΔX = (4, 2). (b–d) Example particle matches for various shifts S: 5 matches at S = (4, 2), 2 matches at S = (5, −2), and 1 match at S = (1, 7). (e) Final correlation plane, R(S), formed as a summation of the matches at all possible shifts S.

Then, we can define the estimated velocity in physical units, u, for the correlated tracer pattern as follows:

u = \frac{\Delta X}{M \, \Delta t}    (10.19)
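
In code, Equations 10.18 and 10.19 reduce to locating the tallest peak of the correlation plane and rescaling it by the magnification and the time separation. A minimal sketch, assuming the plane R was evaluated on integer shifts from −max_shift to +max_shift (the variable and function names are ours, not the chapter's):

    import numpy as np

    def peak_displacement(R, max_shift):
        # Equation 10.18: the shift S at which R(S) is largest
        iy, ix = np.unravel_index(np.argmax(R), R.shape)
        return np.array([ix - max_shift, iy - max_shift])

    def velocity(dX_pixels, M, dt):
        # Equation 10.19: convert image displacement to physical velocity;
        # here M is taken as the image magnification in pixels per unit
        # object length and dt is the time separation between frames.
        return dX_pixels / (M * dt)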

Additionally, the presence of the background noise also creates a contribution to the final correlation plane, first in its correlation between the two frames and second in its correlation with the fluctuating particle image intensities. The total correlation can therefore be written as the sum of several terms, only some of which contribute useful displacement signal, the rest being noise. This can be represented by the following decomposition [8]:

R = RC + RF + RD (10.20)

where
RC is the cross-correlation of the constant part of the background image intensity
RF is the cross-correlation of the background of one image with the fluctuating part of the other image
RD is the correlation of the fluctuating parts of the images and contains contributions from both any variations in the background intensity and the correlation of the particle image pattern

Since, on average, the correlation peak corresponding to the particle field displacement should, for a given number of particles of fixed size and intensity per interrogation region, always have a constant shape, RD can then be further broken down into a conditional sum ⟨RD|ΔX⟩ representing this average peak shape, and a remainder RD − ⟨RD|ΔX⟩ representing the noise on the ideal peak shape resulting from background fluctuations and the particular random sampling of particles in this specific pair of images. Of these components, only RD, and more specifically its conditional average, represents the signal we are trying to measure. Every other component, including those having to do with the arrangement of particles in the image pair, contributes in some way to the noise, making it more difficult to recover the true displacement field.
Figure 10.7 shows this graphically for a discretized image of two sets of particles related by a constant displacement. As can be seen in Figure 10.7a, the source images have both constant and fluctuating background added to the particle image pattern. The RC component results in a pyramid-shaped background in the correlation plane that in the worst cases can swamp the particle signal and bias the result toward the peak at zero displacement. RF also lowers the signal-to-noise ratio (SNR) of the correlation without contributing any information about the displacements. As such, it is important to minimize the background intensity in the images collected during an experiment, and it is often of great benefit to preprocess the images to further reduce the background level to as close to zero as practical, making the planes more similar to RD. Note how the correlation of matching particles reinforces to create a single large peak (Figure 10.7e), while the peaks from incorrect matches rarely align, leading to a relatively constant background height for the noise peaks (Figure 10.7f). However, in general, it is not possible to separate the two parts of RD using only a single image pair.

FIGURE 10.7 Decomposition of a discrete cross-correlation of particle images with uniform displacement. (a) Cross-correlation of region 1 and region 2. (b–f) Components of the cross-correlation plane corresponding to noise and signal components. Refer to the text for details.

Discrete cross-correlation

Up until now, we have discussed PIV in terms of infinite continuous images. However, in practice, the images we use are of finite size and have been quantized and spatially discretized by the digital camera sensor. Therefore, instead of performing the cross-correlation using a form like Equation 10.15, we use a discrete form to evaluate this sum.

R(S, T) = \sum_n \sum_m I_1(X_m, Y_n) \, I_2(X_m + S, \, Y_n + T)    (10.21)

where
Xm and Yn are the discretely sampled image locations over which cross-correlation is being
computed
S and T are integer shifts

Furthermore, instead of performing the sum over the entire images as the previous discussion implied, Xm and Yn will only span a small subsection of the flow that we will refer to as an interrogation region. The size of this interrogation region is chosen to balance the need to have enough particles in common between both images against trying to achieve the best possible spatial resolution. We will discuss these trade-offs further in the "Performance of basic PIV algorithms" section. Otherwise, all the principles we discussed in the previous section in terms of continuous cross-correlations carry over to discrete calculations.
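
A direct evaluation of Equation 10.21 for one pair of interrogation windows takes only a few lines of Python/NumPy. This sketch loops over integer shifts and sums the products over the overlapping pixels; it is written for clarity rather than speed (practical codes use FFTs, as discussed in the "Fourier-based cross-correlation theory" section):

    import numpy as np

    def discrete_cross_correlation(w1, w2, max_shift):
        # R(S, T) = sum over (m, n) of w1[n, m] * w2[n + T, m + S],
        # restricted to indices where both windows are defined.
        n = 2 * max_shift + 1
        R = np.zeros((n, n))
        rows, cols = w1.shape
        for j, T in enumerate(range(-max_shift, max_shift + 1)):
            for i, S in enumerate(range(-max_shift, max_shift + 1)):
                r1 = slice(max(0, -T), min(rows, rows - T))
                c1 = slice(max(0, -S), min(cols, cols - S))
                r2 = slice(max(0, T), min(rows, rows + T))
                c2 = slice(max(0, S), min(cols, cols + S))
                R[j, i] = np.sum(w1[r1, c1] * w2[r2, c2])
        return R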

Subpixel estimation

Because we have switched to a discrete calculation, our spatial resolution is now greatly limited. In many experiments, interrogation region sizes of 16–32 pixels are common, with particle images of 2–3 pixels in diameter and maximum displacements on the order of 8 pixels. However, our method of evaluating these displacements remains determining the location of the highest peak in the correlation plane. Unfortunately, due to the discretization, we can only find that location to the nearest integer value, or in other words to within ±0.5 pixels. Compared to an assumed maximum displacement of 8 pixels in our experiment, the relative error on such displacements could be as high as 6%. For smaller displacements, for example at a 1 pixel shift, the relative error can be as high as 50%, and practically we would be unable to resolve displacements smaller than that.
From the previous discussion, it would appear that the PIV algorithm we have described so far has no hope of providing useful measurements. However, because we know that the correlation peaks, like the images they were formed from, are discretizations of continuous functions, we are not limited to choosing integer locations. Instead, we can fit a curve to the data points and reconstruct with subpixel accuracy where the peak of that function would lie. A variety of methods have been suggested and tested for this fit, including centroids and polynomials of various orders in both one and two dimensions, but one approach that has been shown to be among the most simple, accurate, and robust against failure is a three-point fit to a Gaussian function. An example of this fitting is shown in Figure 10.8 for a particle image, but the principle is the same for a correlation peak.

FIGURE 10.8 Comparison between the recorded image data and the original Gaussian particle image for a particle with e−2 width of 2.0 pixels and center at 0.25 for a pixel fill factor of 100%. Note the difference between the height of the discrete sample and the curve at the integer location, and the slight error this causes when using a subpixel fitting algorithm to estimate the true peak location.
Theoretically, this is attractive because if the particle images are Gaussian (Equation 10.13), the correlation peak will be as well (Equation 10.17), and the location of the peak is separable in the X and Y directions, meaning that the fit can be performed independently in each direction. Solving for the location of the peak of a Gaussian passing exactly through the peak integer location and its two adjacent image pixels, we obtain the following formula, which can be applied twice, once for each direction in the correlation plane [20,21].

\delta_{subpixel} = \frac{\ln R_{-1} - \ln R_{+1}}{2 \left( \ln R_{-1} + \ln R_{+1} - 2 \ln R_0 \right)}    (10.22)

In this expression, R0, R+1, and R−1 are the correlation plane heights at the integer position of the maximum value and at one step in the positive and negative directions, respectively; δsubpixel is the fractional distance from R0 to the peak of the associated Gaussian function. It is important to note that R0 must be a local maximum, and all three values must be positive. The first condition is automatically true if R0 is correctly selected, and the second is nearly always the case unless significant additional signal processing has been applied prior to this step.
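
As a sketch, the three-point fit of Equation 10.22 can be written directly (applied once per direction around the integer peak; the function name is ours):

    import numpy as np

    def gaussian_subpixel(r_minus, r_0, r_plus):
        # Equation 10.22: fractional offset of the Gaussian peak from the
        # integer location of r_0; requires r_0 to be a local maximum and
        # all three samples to be strictly positive.
        num = np.log(r_minus) - np.log(r_plus)
        den = 2.0 * (np.log(r_minus) + np.log(r_plus) - 2.0 * np.log(r_0))
        return num / den

    # For a peak at integer indices (iy, ix) of a correlation plane R:
    # dx = gaussian_subpixel(R[iy, ix - 1], R[iy, ix], R[iy, ix + 1])
    # dy = gaussian_subpixel(R[iy - 1, ix], R[iy, ix], R[iy + 1, ix])
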
Because this fit neglects the possible effects of image noise and non-Gaussian particle shapes, some researchers simultaneously fit the X and Y locations of the subpixel peak using samples from multiple locations in a least squares approach [22], though this is more computationally expensive. Additionally, since, as previously noted, the image sensor actually acquires an integrated intensity and not a sampled value at each integer location (compare the height of the discretized samples in Figure 10.8 to the original particle image profile), the sampled particle images are not exactly Gaussian even if the imaged light fields originally were, and thus the correlation peak is not either. Appropriate fits have been developed that account for this disparity in the original images (for use in estimating the size of the particle images, e.g., [23]), but it is not clear that such an approach provides much practical benefit when applied to correlation fields. However, it may be more appropriate for particle tracking velocimetry (PTV) algorithms, in which the particle locations must be tracked in the original image fields (see the "Particle tracking" section).

Performance of basic PIV algorithms

With the addition of subpixel displacement estimation to the discrete cross-correlation operation, we now have all the steps necessary for a useful, if basic, PIV algorithm. However, many practical details of how to apply it to experimental images are still unspecified.
• What size of image interrogation windows should we use?
• How big should the particle images be? How many particles do we need?
• What is the best range of apparent displacements to observe?
• How much shear and rotation can we allow before the method (which assumes pure translations) breaks down?

As we will see, the answers to these questions are interrelated and can all be explored through a combination of analytical reasoning and experimental testing, the latter performed with both real data and synthetic images.
Let us start by considering the number of particles in a given correlation window. Since for a fixed concentration of particles, C (in units of particles per volume), this quantity will vary depending on the size of the window and the imaging conditions, we usually discuss this in terms of a seeding density of the source images. This is typically reported for a PIV experiment in one of two ways. The first is the source density, NS, which is a measure of the average number of particle images within a volume formed by the light sheet of depth Δz0 and the projected area of a particle. It can also be thought of as the fractional number of illuminated pixels per image pixel.
N_S = C \, \Delta z_0 \, \frac{\pi d_\tau^2}{4 M^2}    (10.23)

For PIV, this number should be less than 1, or it implies that more than a single particle on average can be seen in each pixel and the particle images will overlap; above NS ≈ 0.4, individual particle images are difficult to distinguish. Besides this difficulty, overlapping particles cause the assumption that the final images are built from the summation of many individual particle images to break down, as the light coming from the tracers will interfere, creating a speckle pattern that is more difficult to correlate. However, this form is not always convenient to work with, and often it is more intuitive to discuss the seeding density in terms of the mean number of particles per interrogation window. This is referred to as the image density, NI, and is defined as

N_I = \frac{C \, \Delta z_0 \, D_I^2}{M^2} = \frac{N_S \, D_I^2}{(\pi/4) \, d_\tau^2}    (10.24)

where the area of a rectangular interrogation window of side lengths Dx and Dy is D_I^2 = D_x D_y.
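
As an illustration of Equations 10.23 and 10.24, the following sketch evaluates both densities for assumed experimental values. The seeding concentration C here is chosen only so that NI roughly matches the NI = 20 used in the worked example later in this section; it is not a value given in the text. M is expressed as pixels per meter in the object plane, so that d_tau/M and D_I/M are physical lengths:

    import numpy as np

    C = 6e10            # seeding concentration, particles/m^3 (assumed)
    dz0 = 1e-3          # light sheet thickness, m (assumed)
    d_tau = 2.4         # apparent particle image diameter, pixels
    D_I = 32.0          # interrogation window side, pixels
    M = 1.0 / 18e-6     # 18 um/pixel resolution -> pixels per meter

    N_S = C * dz0 * np.pi * (d_tau / M) ** 2 / 4.0   # Equation 10.23, ~0.09 < 0.4
    N_I = C * dz0 * (D_I / M) ** 2                   # Equation 10.24, ~20 particles/window
    print(N_S, N_I)
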
To obtain a successful measurement, we not only need to have a nonzero number of particles in each interrogation window, we also need to ensure that some of these particles can be matched between frames. The number of matching particles for a given window pair will decrease as the particles move out of the interrogation window in the in-plane direction, as the particles move out of the illuminated region in the out-of-plane direction, and as the particle image pattern distorts due to shear and rotation. We can quantify each of these effects in terms of fractional loss coefficients: the in-plane loss of pairs, FI; the out-of-plane loss of pairs, FO; and the loss of correlation due to in-plane gradients, FΔ. For a mean displacement field of (ΔX, ΔY, ΔZ) within our field of view, the first two factors are

F_I = \left( 1 - \frac{\Delta X}{D_x} \right) \left( 1 - \frac{\Delta Y}{D_y} \right)    (10.25)

and
F_O = 1 - \frac{\Delta Z}{\Delta z_0}    (10.26)

while for small gradients the final factor is approximately

F_\Delta \approx \exp\!\left( -\frac{2 a^2}{3 d_\tau^2} \right)    (10.27)

with

a = M \, \Delta u \, \Delta t \approx M \left| \frac{\partial u}{\partial x} \right| D_I \, \Delta t    (10.28)

where
a is an estimate of the variation in displacement, Δu ≈ umax − umin, over the interrogation window
∂u/∂x is an estimate of the largest velocity gradient

From the definition of FΔ, we can see that the gradient loss term is governed by the ratio of the in-plane displacement variation a to the apparent size of the particle. This makes sense: as the difference in displacements approaches the diameter of the particle images, the contributions of the matched particle pairs to the correlation peak (Equation 10.17) will no longer overlap, and the resulting sum (Equation 10.16) will first begin to broaden and decrease and then split into individual peaks, invalidating the assumption that the single largest value corresponds to the mean displacement in the interrogation window.
Taken together, these terms form the product NIFIFOFΔ, which describes the average number of correlated particle pairs per interrogation window. Based on this expression, we can now evaluate empirically how many particles we need to have a reasonable probability of obtaining a valid correlation. Using synthetic image simulations of multiple different interrogation window sizes, flow gradients, and displacements, we can obtain plots like Figure 10.9, which shows the percentage of attempts that resulted in selecting the correct correlation peak (a "valid vector") at a given value of NIFIFOFΔ. As can be seen from the plot, this concept does a very good job of collapsing these effects onto a single curve, and we can estimate that to get a reasonable number of successful measurements we probably want, at a minimum, 5–10 correlated particles per interrogation region. Furthermore, we can use Equations 10.25 through 10.27 to help motivate some rules of thumb that PIV researchers have built up through experience and testing over the years.
• Minimum image density, NI > 10: To allow for enough matched particles to remain after accounting for all the loss-of-correlation terms, we need to start with more than 10 particles.
• One-quarter rules for displacement, |ΔX| < DI/4, |ΔZ| < Δz0/4: These conditions ensure that no more than 25% of the particles should be lost to either effect from the volume formed by the interrogation window and the light sheet thickness.
• Two-thirds rule for in-plane gradients, a < (2/3)dτ: Setting the threshold to two-thirds ensures that the particle images will overlap across the entire interrogation window and keeps the approximate loss of correlation to about 25%, similar to the limits we have placed on the displacements.
These are not mandatory conditions, just guidelines that can (and should) be used to plan and evaluate a given PIV experiment. In fact, in the next section, we will explore multistep algorithms that essentially drive both the loss of correlation due to displacement and in-plane gradients to zero after the first iteration, meaning we can choose much more aggressive settings overall. The "knobs" we can adjust to affect these parameters are typically the image magnification, the time separation between images, and the size of our interrogation windows. However, their effects on the correlation peak are interconnected, and often minimizing them has negative effects on the final accuracy of the measurement. For instance, decreasing the window size will reduce the effect of gradients and improve our spatial resolution but will reduce the maximum displacements we can resolve and increase random errors. However, reducing Δt or M to compensate has the effect of bringing the measured displacements closer to the noise floor, increasing the relative velocity error.

FIGURE 10.9 Given some knowledge of the local flow conditions, the product NIFIFOFΔ can be used to estimate the average number of correlated particles in a given interrogation region and can serve as a guide to the probability that a measurement will yield the correct correlation peak.

We can also use the repeatable relationship between NIFIFOFΔ and the percentage of valid vectors to quickly evaluate whether our acquisition and processing settings are too aggressive or too conservative. If we do not have enough valid vectors (perhaps <80%), we should adjust the window size DI and time step Δt to raise it closer to 100%. On the other hand, if we are too close to 100%, we are actually giving up potential information to averaging effects and may wish to consider reducing the window size until we start seeing a small number (perhaps ~1%–5%) of failed measurements. For synthetic data like that used in this section, such failures are easy to identify directly from the known solution. However, for experimental data the true displacements are not known and must be inferred from examination of the local flow coherency. Fortunately, at low percentages, such failures are relatively easy to identify and replace using the techniques that will be discussed in the "Data validation and replacement" section, and these methods are an important step in post-processing real data fields.
Let us apply these rules of thumb to a jet flow experiment similar to that shown in Figure 10.3 [24], which has a jet with a volume flow rate of 6 L/min passing through a nozzle with a diameter of 11 mm, corresponding to a mean exit velocity of ~1.0 m/s. We would like to resolve the flow near the nozzle, and we have previously selected a camera image resolution of 18 μm/pixel in order to capture the relevant spatial structures in the flow. At this resolution, we have NI = 20 particles per 32 × 32-pixel window. What is the Δt required to satisfy the ¼ rule for displacement in our field of view? Using 8 pixels/frame as our goal, and the given parameters as conversion factors, we can find
\Delta t = \frac{(32\ \mathrm{pixels})/4}{\mathrm{frame}} \times \frac{18\ \mu\mathrm{m}}{\mathrm{pixel}} \times \frac{\mathrm{s}}{1.0\ \mathrm{m}} = 144\ \mu\mathrm{s}

Near the nozzle, we can essentially neglect any spreading of the jet and just use the average velocity. Therefore, under these conditions, we should use an interframe time of less than 144 μs, giving FI = 0.75. Now, assuming that any out-of-plane fluctuations are limited to 10% of the mean jet speed and the laser sheet is 1 mm thick, we also need to check the ¼ rule for out-of-plane displacement.

\Delta Z = (1\ \mathrm{m/s}) \times 10\% \times 144\ \mu\mathrm{s} = 14.4\ \mu\mathrm{m} \ll (1\ \mathrm{mm})/4

This is very small, so assuming our laser plane is aligned with the flow, FO ≈ 1. We should also check for gradients within our 32-pixel interrogation window. For this, we need to know the apparent particle diameter. Using Equations 10.5 and 10.6, we can determine that for our imaging conditions the apparent particle diameter will be dτ ≈ 2.4 pixels. If we then assume that the largest gradients we will see correspond to the drop from the mean jet velocity to the zero freestream velocity over twice the nozzle wall thickness of 1 mm, this corresponds to a displacement gradient of 0.072 pixels/pixel, or a = 2.3 pixels over the region of interest (ROI), which is larger than (2/3)dτ and corresponds to a value of FΔ = 0.54. Fortunately, this is a very conservative estimate of the gradients we will actually see; otherwise, we should consider reducing either Δt or the ROI size. Finishing our example, the product NIFIFOFΔ still equals ~8, which should be sufficient for most experiments.
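
The arithmetic of this worked example is easy to script; the following sketch simply reproduces the numbers above (the 0.072 pixels/pixel gradient and the 10% out-of-plane fluctuation are the same assumptions made in the text):

    import numpy as np

    res = 18e-6                      # image resolution, m/pixel
    U = 1.0                          # mean jet exit velocity, m/s
    D_I = 32                         # interrogation window size, pixels
    dt = (D_I / 4) * res / U         # one-quarter rule: 1.44e-4 s = 144 us

    F_I = (1 - (D_I / 4) / D_I) * (1 - 0.0 / D_I)   # 8-pixel in-plane shift -> 0.75
    F_O = 1 - (0.10 * U * dt) / 1e-3                # 14.4 um out-of-plane vs 1 mm sheet
    a = 0.072 * D_I                                 # displacement variation, ~2.3 pixels
    F_D = np.exp(-2 * a ** 2 / (3 * 2.4 ** 2))      # Equation 10.27 -> ~0.54

    print(20 * F_I * F_O * F_D)                     # N_I * F_I * F_O * F_D ~ 8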

Armed with these concepts, we now have a basic understanding of the limits to making a valid measurement for a single correlation pass of a basic PIV system. However, it is also valuable to explore how different experimental conditions affect the expected performance of the system within these bounds. For this, while analytical consideration can provide some insight, researchers have typically relied on the use of synthetic PIV images with randomly generated particle and noise fields (an approach known as Monte Carlo testing) to probe the effect of various parameters on the system accuracy. This can also provide quantitative estimates of the error levels that could be seen in an experiment, though the values measured are typically lower than seen in real data. This is because, even if care is taken to realistically simulate the recording system, the synthetic images are still invariably of higher quality and idealized in some way, so these types of results are often more useful for discerning trends or comparing different algorithms. Unfortunately, in real experiments, it is nearly impossible to have exact knowledge of the true flow at every point, making this type of approach the only way to study many effects in controlled settings.
First, let us examine the effect of particle image size, which we can control both by varying the physical particle size (within limits; see Chapter 4) and by adjusting the magnification of the imaging system. We can even increase the size slightly by defocusing our camera optics a small amount. So far, we have suggested that larger particle images make it easier to overlap the correlation peaks in the presence of gradients, with peak spreading and splitting increasing the random error. However, the larger the particle size grows, the fewer tracer images we can fit into a given interrogation region. We might expect this to increase the random error of the measurement (fewer samples lead to more scatter; see Chapter 2). Additionally, it might be guessed that it is easier to accurately find the center of a small particle than a large one (again primarily affecting the random error). Figure 10.10 shows the error versus apparent particle size for simulated images with Gaussian particle shapes, constant NI = 0.05 particles/pixel², 1% random image noise on a 5-count background, and a small flow gradient (dΔX/dY = 0.01 pixel/pixel). What we see is that, as we predicted, the observed error shows a minimum between 1.5 and 3 pixels, depending on the processing algorithm, and increases for larger and smaller sizes. This is fairly typical for PIV systems under a wide range of conditions, and so we can safely say that we should always try to achieve particle image sizes in this range. However, if we look at the breakdown of the error into bias and random components, we see something we might not expect (Figure 10.11). Above about 2 pixels, the error is dominated by the random component and increases proportionally with diameter, as we predicted. On the other hand, for diameters below 1 pixel, the bias error becomes a major contributor to the measurement uncertainty, even though none of the mechanisms we discussed should lead to a significant systematic error. Where did it come from?

FIGURE 10.10 The total error for a measurement typically varies with particle image diameter and typically reaches a minimum near 2–3 pixels. The exact location of the optimum is dependent on the image quality, flow field, and processing algorithm, as can be seen in the comparison between a standard Fourier-based cross-correlation (CC) and an RPC algorithm (described in the "Fourier-based cross-correlation theory" section). The results for the first and second passes of a DWO processing scheme are shown for both (see the "Discrete window offset" section for details).

FIGURE 10.11 (a) Bias error is only a small contributor to the total error for particle sizes above 2 pixels. (b) Peak locking bias errors for small particle diameters force estimates toward fixed values, often the closest integer value, which can also lead to increased random error about the mean.

We can start to see the answer if we plot a histogram of the measured displacements (Figure 10.12) for several particle diameters. For clarity, only the subpixel portion of the total displacement is shown, and the frequency has been normalized by the expected count for the true flow field. Therefore, we would expect a measurement with only random errors to have a smooth distribution very close to a value of 1.0 for all displacements. Instead, for particle diameters less than 1 pixel, we see obvious peaks in the distribution. This effect has been termed peak locking, and it occurs in part because, for very small discretely sampled particle sizes, there is insufficient information to resolve small differences in displacements. In most experiments, the subpixel peak fitting algorithm produces values that are biased toward whole number shifts, but as can be seen here the exact behavior is determined by the particle size and flow field. Also, since the exact magnitude and direction of the bias error are dependent on the random particle distribution in a given interrogation region, it also has the effect of increasing the random error. This can be seen most easily in the error PDF for very small particles in Figure 10.11b. This condition should always be avoided wherever possible. It might be tempting to attempt to enlarge the particle images by post-processing the images with a smoothing filter; this rarely works well, because the result is still limited by the underlying discrete signal properties. Instead, a slight defocusing of the image is often the best solution if the magnification and particle size must remain fixed.

FIGURE 10.12 When the particle image diameter becomes small, the subpixel displacement can no longer be properly resolved. Ideally, the full range should be equally probable.

It is important to note that this is not the only source of bias errors in a PIV evaluation, and the exact shape and magnitude of error plots such as these are quite sensitive to small changes in the algorithm and source data. In fact, the data in Figure 10.11 were created using the same image conditions as in Figure 10.10, but with two changes to minimize effects that obscure the presence of peak locking errors. First, uniform instead of shear flow was used to avoid the increased bias contribution that velocity gradients create, and second, the PIV interrogation spots were windowed with a Gaussian function to prevent a type of bias error that occurs when particle images are truncated by the edge of the region. This second effect actually tends to increase with larger particle sizes, in contrast to the peak locking error we just discussed.
We can also examine the effect that variations in displacement have on the error level for a PIV measurement. Figure 10.13a shows the error level for a set of images with increasing values of uniform in-plane displacement and a particle image size of 2.0 pixels. From these results, we can see that errors are lowest for displacements less than 1.0 pixel/frame, with both the bias and random errors increasing rapidly up to that point, and more slowly thereafter. The bias error also shows (weak) evidence of peak locking in a cyclic variation of the error with a period of one pixel. For displacements less than a pixel, the particle images are essentially overlapped between correlated frames and the error is small and grows linearly. Above one pixel, the particle image pattern is no longer completely overlapped and the error level becomes more uniform, increasing more slowly.
This change in behavior at displacements above 1 pixel is one of the main motivations behind the development of the window offset and image deformation techniques that will be discussed in the next section. Both techniques attempt to adjust the correlated interrogation regions to achieve apparent displacements as near to zero pixels as possible, to take advantage of the greater accuracy of cross-correlation for very small displacements, as well as improvements due to reduced in-plane and gradient loss of correlation. Although these algorithms can reduce the overall error level, they can in some cases make the periodic behavior of the error worse, as can be seen in Figure 10.13b, since their goal is to shift the interrogation windows so that the residual displacement is less than half a pixel. Thus, for simple flows such as this, the resultant error plot just shows the error for that range repeated at each discrete pixel shift.

FIGURE 10.13 Dependence of error levels on displacement for an FFT-based discrete cross-correlation algorithm using 24 × 24 windows. (a) Breakdown of error components for a single pass of the PIV algorithm. (b) Comparison of total error for a single pass and two simple iterative algorithms.

It is important to note, however, that in real experiments with nonuniform flow within the correlation window and decreased image quality, the actual improvement at small particle shifts is less dramatic; the error does not typically drop all the way to zero at zero displacement, and the error levels are typically higher overall than demonstrated here.

Iterative schemes

As has been shown in the previous section, the performance of this correlation-based region matching tends to degrade with increasing displacements. Part of this loss in performance is due to incremental changes in the particle images with distance (rotation, movement within the beam and field of view, etc.) as well as loss of the tracked particles due to out-of-plane motion (FO), but in a well-designed and implemented experiment, much can be attributed simply to the fact that with increasing displacements a decreasing fraction of the original particle pattern will appear in a second region centered on the same spot in sequential images (FI). Additionally, if shear and rotation are present in the flow (as they almost always are), a pure translation is not sufficient to allow the particle images to line up when shifted, causing the resulting correlation peak to stretch or split [17].
Therefore, based on the results discussed so far, it appears that it would be best to limit displacements to the smallest values possible. The reality is not so simple, since the true performance of these methods on real data tends to have a lower bound for error, and a little thought makes obvious the fact that reducing the displacements too close to zero will make the relative error on the velocity estimate unacceptably high. However, there is nothing that says the correlated regions have to be sampled from the same point in both images, or that the images must be used as-is with no modification based on preexisting knowledge about the state of the flow field. These ideas form the basis of many types of iterative processing schemes that use the results of a previous velocity field evaluation as a predictor to improve the results of later correlation steps. Here, we will examine two of the most common: the discrete window offset (DWO) approach and window deformation.

Discrete window offset

In DWO methods, after first making an estimate of the particle displacement at every point in the flow field, this information is used to shift the location of one or both of the original interrogation regions in an attempt to make the particle patterns line up as closely as possible, pushing the correlation evaluator closer to the optimum performance point of zero displacement, as shown in Figure 10.14a [25]. This is done in a discrete manner, shifting the search regions in steps of whole pixels (i.e., discrete offsets), though continuous shifting methods based on image resampling have also been tried [26,27]. The image deformation methods explored in the next section are a generalization of this image resampling idea.

FIGURE 10.14 Shifting or deforming the interrogation ROI from image 1 to image 2 can improve the alignment for cross-correlation. (a) Example of forward shifting for DWO and deformation methods in a boundary layer flow. (b) Aligning the ROI extracted using DWO from images 1 and 2 shows how the method can leave large residual errors in particle position within the window when shear or rotation is present. (c) Doing the same for image deformation results in smaller residuals between images 1 and 2.

Although it is possible to shift just one window forward or backward in a Lagrangian-
style trajectory tracking, because the resulting velocity fields are typically treated in an Eulerian manner in post-processing and analysis, it has been shown in testing that it is actually better to split the measured velocity from the previous step evenly between both regions, symmetrically shifting both outward from the original measurement location [28]. The disparity between these two approaches parallels the difference between first-order forward or backward differencing and second-order central differencing operations in numerical differentiation. This symmetric shifting results in an improved velocity estimate that remains centered on the original measurement location in space and is most properly treated as occurring midway between the two image acquisitions in time. Once new interrogation regions are chosen for each location, cross-correlation analysis can be performed between them to select the most probable residual shift, and the measured fractional displacements are added as a corrector to the total integer-valued shift used in selecting each new pair of regions. When correctly implemented, this simple approach can dramatically reduce the error levels seen for PIV measurements (Figure 10.13b). This approach can be further extended to multiple iterations, each time using the results of the previous pass to initialize a new predictor and set of shifted interrogation regions, though often only two or three passes are required for convergence, and continuing to iterate can actually worsen errors, especially those related to the peak-locking effect discussed earlier.
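
A single symmetric DWO pass can be sketched as follows. The helper takes any window correlator (for example, the direct-sum sketch shown earlier) that returns the integer peak shift between two windows; the predictor is split evenly between the frames, and the residual measured by the correlator is added back as the corrector. Window extraction near the image borders and subpixel fitting are omitted for brevity, and the names are ours:

    import numpy as np

    def dwo_pass(img1, img2, center, size, predictor, correlate):
        # Symmetric discrete window offset: shift the frame-1 window
        # backward and the frame-2 window forward by half the predicted
        # displacement (rounded to whole pixels), then correlate.
        shift = np.round(np.asarray(predictor) / 2.0).astype(int)
        half = size // 2
        cx, cy = center
        w1 = img1[cy - shift[1] - half:cy - shift[1] + half,
                  cx - shift[0] - half:cx - shift[0] + half]
        w2 = img2[cy + shift[1] - half:cy + shift[1] + half,
                  cx + shift[0] - half:cx + shift[0] + half]
        # Total estimate = imposed offset + measured residual (corrector)
        return 2 * shift + correlate(w1, w2)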

Image deformation

While DWO and other translation-based iterative algorithms can significantly improve the overall performance of a PIV algorithm, most flows also contain appreciable levels of shear, rotation, or dilation. The result of these effects is that a simple shift between two ROIs before correlation still cannot recover an exact match for every particle image, since each particle will be moving a slightly different amount and in slightly different directions, as can be seen in Figure 10.14b. The displacement estimates recovered from the cross-correlation analysis can only be some type of weighted average of these different motions, and, as previously mentioned, the larger the gradients become, the more the displacement peak will spread and drop in magnitude, potentially splitting into separate peaks for different groups of particles, or dropping below the background noise level in the correlation plane, making the correct identification of the bulk motion difficult.
However, provided we can achieve sufficient accuracy in initial estimates of the flow field, enough information exists to estimate through interpolation the displacement of every pixel in either a given ROI or the entire image. Then, assuming that the two particle image patterns are related by a continuous deformation equal to the motion of the underlying flow field, we can reverse the deformation, re-interpolating the images (or ROIs within those images) back to a common point in time. If the algorithms chosen for the interpolation of the vector field and the resampling of the image data are of sufficient quality, and the displacement data are accurate enough, the resulting location of every particle image in the reconstructed pattern should be nearly identical, as shown schematically in Figure 10.14c. These new particle images can then be cross-correlated again to recover a new (hopefully small) displacement estimate that can be used as a corrector for the predictor field used to originally deform the images. The two fields are then summed to create either the final estimate of displacement or a new predictor field that can be fed back into an iterative algorithm until the desired level of convergence is achieved. Of course, in addition to errors in the displacement field, this type of evaluation is also prone to errors introduced by the interpolation and resampling steps, which can lead to biases or instabilities similar to those seen in iterative numerical solvers [29,30].
However, although more complicated to implement and more computationally expensive to evaluate, it has been shown that for a wide variety of flows iterative image deformation approaches are very robust and more accurate than simpler DWO evaluations [31]; see for instance Figure 10.13b. As a result, most researchers consider some sort of image deformation to be the standard for modern PIV implementations and expect its use when processing publication-quality data.
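
The resampling step at the heart of image deformation can be sketched with a standard interpolation routine. This minimal example deforms frame 2 backward along a dense per-pixel predictor field (u, v) so that, if the predictor were exact, the result would align with frame 1; in practice the predictor is interpolated from the PIV grid, and each frame is usually deformed by half the displacement. It assumes SciPy is available:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def deform_back(img2, u, v):
        # Sample frame 2 at x + (u, v): if I2(x) = I1(x - dx) as in
        # Equation 10.14, then I2 sampled at x + dx reproduces I1.
        ny, nx = img2.shape
        Y, X = np.mgrid[0:ny, 0:nx].astype(float)
        return map_coordinates(img2, [Y + v, X + u], order=3, mode='nearest')
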
Interrogation region refinement

In addition to shifting the interrogation regions or deforming the original images to reduce apparent displacements, it is also common to reduce the size of the interrogation regions between iterations. Although on initial iterations quite large regions may be selected, in order to guarantee that the particle images do not shift too far and that there is sufficient overlap to produce a good correlation signal, after using DWO or image deformation the residual displacements for subsequent steps are expected to be quite small. This means that the limiting factor for the size of an interrogation region then becomes the seeding density, not the displacement. The use of smaller regions reduces the spatial averaging effect of the correlation algorithm, leading to improved spatial resolution. This reduction in area can be accomplished either by keeping the original set of interrogation locations and just reducing the region size (requiring no interpolation of the displacement field) or by resampling the previous iteration's displacement field onto a new (typically finer) grid of interrogation points.
In addition to the benefit of decreased spatial averaging that interrogation region refinement brings, the reduced computational cost of smaller regions enables the user to increase the spatial resolution of the final result without increasing the overlap (and thus redundant information) between adjacent windows. Thus, in practice, the initial passes are typically performed on very coarse grids with relatively large interrogation windows and minimal overlap for speed, and then refined in later iterations to use as small a window as possible on very fine grids spaced down as far as single-pixel levels. These extremely fine grids have benefits for resampling in iterative methods and can be beneficial in post-processing using discrete derivative operators, since errors in adjacent measurements will be largely correlated (because the correlations draw from almost identical particle patterns) and can cancel out more than they would for widely spaced data.

Fourier-based cross-correlation theory

Although in the "Discrete cross-correlation" section we discussed implementing PIV using an additive summation approach (Equation 10.21), the cross-correlation algorithm can also be implemented using Fourier transforms (see Chapter 2).

R(S, T) = \mathcal{F}^{-1}\!\left\{ \mathrm{conj}\!\left( \mathcal{F}\{I_1\} \right) \mathcal{F}\{I_2\} \right\}    (10.29)

There are several reasons for using such an approach. The first is that, due to the speed of modern fast Fourier transform (FFT) libraries, the time to compute an FFT-based correlation grows more slowly than direct summation for larger windows. As a result, many PIV implementations use Equation 10.29 instead. Because discrete Fourier transforms assume periodicity, in order to achieve identical results to Equation 10.21, I1 and I2 must be zero padded to at least twice their native sizes to avoid wrap-around aliasing. Alternately, a tapered windowing function can be used to avoid discontinuities at the edges of the region [29,32], which as previously mentioned can also help reduce errors.
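
A minimal FFT implementation of Equation 10.29, zero padded to twice the window size so that its output matches the direct sum of Equation 10.21 (the placement of the conjugate follows the convention used in Equations 10.30 and 10.31):

    import numpy as np

    def fft_cross_correlation(w1, w2):
        ny, nx = w1.shape
        shape = (2 * ny, 2 * nx)                 # zero pad: no wrap-around
        F1 = np.fft.rfft2(w1, s=shape)
        F2 = np.fft.rfft2(w2, s=shape)
        R = np.fft.irfft2(np.conj(F1) * F2, s=shape)
        # After fftshift, zero shift sits at index (ny, nx), so the peak
        # location minus (ny, nx) is the integer displacement estimate.
        return np.fft.fftshift(R)
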
Second, the equivalency of the cross-correlation to a multiplication in the Fourier domain means that we can apply additional signal processing techniques and analysis based on the frequency content of the images. If I2(x) = I1(x − Δx), as in Equation 10.14, then the Fourier shift theorem states that the Fourier transforms of the two images are related as follows:
\mathcal{F}\{I_2\} = \mathcal{F}\{I_1\} \exp(-i \, k \cdot \Delta x)    (10.30)

with k being the frequency vector. If we substitute this into Equation 10.29 and simplify,

\mathcal{F}\{R\} = \left| \mathcal{F}\{I_1\} \right|^2 \exp(-i \, k \cdot \Delta x)    (10.31)

we can then show that, for a pair of images related by a pure translation Δx, in frequency space the cross-correlation is the squared magnitude of the transformed images times the phase shift (the exponential term), or, in spatial coordinates, the autocorrelation convolved with (and thus shifted to the location of) a Dirac delta function at Δx. Here, |⋯| is the complex magnitude operator.

Therefore, we can see that the magnitude of the images' frequency content is solely responsible for the shape of the cross-correlation and remains invariant to translation (though not to shear or rotation), while the displacement we are interested in is carried only in the images' phase information. We could keep only this phase information and get delta functions with very high SNRs at the locations of the candidate displacements, but these peaks are very narrow, and so standard subpixel fitting algorithms work poorly, leading to decreased accuracy and increased peak locking. Instead, we can try to improve our displacement estimates by applying different frequency-based filtering techniques, such as the "symmetric phase-only transform," which divides the cross-correlation in Fourier space by the square root of the autocorrelation (and thus is not strictly phase-only) [33]. Alternately, better performance can sometimes be obtained by discarding the original magnitude information entirely and instead constructing a Gaussian-shaped SNR filter designed to emphasize those frequency components corresponding to the particle images and attenuate the components associated with common noise sources. Such an approach has been called "robust phase correlation" (RPC), and because it results in a true Gaussian function of known diameter shifted to the displacement location, it can often yield better subpixel fits less prone to peak locking [34,35]. Similar approaches may also offer the opportunity for further tailoring the filtering functions to the specific known properties of the acquired images.
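
The following sketch illustrates the flavor of such spectral filtering: it keeps only the phase of the cross-spectrum and then reimposes a Gaussian envelope matched to an expected particle image diameter. The envelope form here is our assumption, derived from the transform of the Gaussian in Equation 10.13; it is not the exact filter of [34,35]:

    import numpy as np

    def phase_correlation_filtered(w1, w2, d_tau):
        ny, nx = w1.shape
        shape = (2 * ny, 2 * nx)
        cross = np.conj(np.fft.fft2(w1, s=shape)) * np.fft.fft2(w2, s=shape)
        phase = cross / (np.abs(cross) + 1e-12)       # discard magnitude
        ky = np.fft.fftfreq(shape[0])[:, None]        # cycles/pixel
        kx = np.fft.fftfreq(shape[1])[None, :]
        # Gaussian spectral envelope for a particle of e^-2 diameter d_tau
        filt = np.exp(-np.pi ** 2 * d_tau ** 2 * (kx ** 2 + ky ** 2) / 8.0)
        return np.fft.fftshift(np.real(np.fft.ifft2(phase * filt)))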

Advanced processing algorithms

In the years since the initial development of PIV, researchers have proposed numerous methods to extend this basic framework in various ways, often with the goal of improving its accuracy or extending what can be measured with it. Here, we will discuss several popular extensions to the standard cross-correlation algorithm.

Ensemble correlation

For very small fields of view, such as those seen using a microscope, the required magnification can become large, leading to additional optical effects not typically seen in larger-scale experiments, such as the depth of the illuminated region being larger than the in-focus depth. This can cause significant background illumination and out-of-focus particles. Additionally, particle images become large and the image density low. For such fields, the velocity fields obtained from traditional cross-correlation of only a single pair of images will often be corrupted by many failed correlations and substantial noise. For statistically stationary fields, it is possible to increase the SNR of the displacement estimate by summing the cross-correlation planes across all image pairs [36,37]. Doing so, the true displacement peak in every pair of images should be reinforced, while the spurious correlations and background noise should vary in position with each new image pair and largely be canceled out (Figure 10.15a). Once this is done for every ROI in every image pair, the result is a single correlation plane at each desired interrogation point in the flow. The peaks in these correlation planes can then be located using standard subpixel algorithms to yield the average flow displacement.
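
In code, ensemble correlation is just an average of per-pair correlation planes before any peak finding; a sketch (correlate may be any window correlator that returns a plane, such as the direct-sum or FFT-based examples earlier):

    import numpy as np

    def ensemble_correlation(window_pairs, correlate):
        # Average the correlation planes from many image pairs of a
        # statistically stationary flow; the true peak reinforces while
        # random correlations and background noise tend to cancel.
        planes = [correlate(w1, w2) for (w1, w2) in window_pairs]
        return np.mean(planes, axis=0)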

Multiframe approaches

Traditionally, PIV has been based on the analysis of a single pair of particle images, with each pair being processed independently from all the others. With the increasing availability of high repetition rate lasers and cameras, the ability to acquire long sequences of time-resolved data has also become more common. Not only does this type of analysis give access to additional physical insight, but taking advantage of the correlated nature of the particle image signal over longer timescales can also be used to increase the accuracy and precision of the measurements themselves. Examples of these techniques that have found recent success include the pyramid correlation technique, which builds upon the idea of using ensemble correlation with mixed frame steps over subsets of the full data set [38], and the fluid trajectory correlation (FTC) approach (Figure 10.15b), which improves accuracy by fitting a polynomial to a Lagrangian particle trajectory computed using first-order forward and backward correlations, instead of the symmetrically shifted central difference approach we discussed earlier [39]. Such approaches are currently still rare in commercial PIV packages but are likely to become more common in the future.

FIGURE 10.15 (a) The ensemble correlation, formed from the mean of a large number of correlation planes with similar displacements, can greatly reduce the magnitude of the noise component. (b) Multiframe methods, such as the FTC method shown here, estimate the motion from a combination of multiple correlations over different time steps.

Single-pixel correlation

Moving in the opposite direction, single-pixel correlation attempts to greatly increase the spatial resolution of the extracted velocity data (down to the level of a single image pixel) by sacrificing the temporal information entirely in favor of time-averaged quantities. In this method, at a single pixel location within the flow, the intensity at each time point in frame A is cross-correlated with its surroundings in frame B of the image pair. When averaged over many such pairs (often 10,000 or more frames are required to offset the reduction from a correlated region to a single pixel), the spatial resolution available on the mean displacement approaches the maximum possible for a correlation-based approach [40]. Additionally, as demonstrated by Scharnowski et al. [41], when properly normalized, the resulting correlation plane also encodes the PDF of the velocity fluctuations, which means that quantities such as the turbulent Reynolds stresses can be directly extracted from the measurement without needing to perform an explicit averaging step. This has the added advantage that it preserves the single-pixel resolution of the measurements, and the authors have also demonstrated that, in contrast to traditional PIV correlations, in which the spatial averaging of the correlation windows serves to suppress the turbulent fluctuations, this method preserves much more accurately the original turbulence information encoded in the image sequence.

Particle tracking

Although today it is more common in the fluid dynamics community to track the motion of groups of particles statistically using PIV, methods that evaluate the velocity field by studying the motion of individual particles have a long history, dating perhaps as far back as Leonardo da Vinci [42]. Today, quantitative versions of these techniques are usually referred to as particle tracking velocimetry, or PTV. Practically, these methods can typically be applied interchangeably with PIV on the same set of images, and the principles of designing and setting up an experiment are essentially identical, though often with a lower seeding density to aid in matching and to prevent overlap of particle images.
The general approach for such algorithms is a multistep process. First, potential particles are identified in each image, and second, their locations and other identifying characteristics, such as size and shape, are determined (methods include many of the same algorithms used for subpixel fitting of the correlation peaks in PIV). Image preprocessing can often be required for best results in this step. Next, the lists of particles are compared to find matching pairs between the two images. In the simplest implementation, this would just be a search for the nearest matching particle within some search radius, as sketched after this paragraph. If no information about the flow except the particle position is known, this can be an expensive and inaccurate process, but the addition of multiple parameters can increase matching efficiency [43], and the use of a predictor field, such as that obtained from a preexisting PIV evaluation of the same images, also reduces the required search domain [44]. After all possible matches are identified, the resulting displacement vectors can be validated against neighboring particles, and new searches can be made to repair the incorrectly matched particles. Unpaired particles are usually discarded from further consideration. Much like in PIV, the particle matching process can then be repeated iteratively from the beginning, using information gained from the previous step to better identify the largest set of correctly matched particles. If time-resolved data are available, particles' displacements can be linked from frame to frame, enabling both more sophisticated pairing and tracking algorithms and the additional ability to directly measure Lagrangian (as opposed to Eulerian) quantities. Finally, since the particles are not placed on a regular grid, the final displacement vectors between the particle locations in each image are often interpolated onto a structured rectangular mesh in order to make post-processing simpler. More details about various algorithms can be found in [17].
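
The simplest matching step described above can be sketched as a brute-force nearest-neighbor search; this version is O(N × M) and does not enforce one-to-one matching, both of which real implementations improve upon (the names and structure are ours):

    import numpy as np

    def match_particles(p1, p2, search_radius, predictor=None):
        # p1, p2: (N, 2) and (M, 2) arrays of particle positions in
        # frames 1 and 2; predictor: optional (N, 2) displacement guesses.
        guesses = p1 if predictor is None else p1 + predictor
        matches = []
        for i, g in enumerate(guesses):
            d = np.linalg.norm(p2 - g, axis=1)
            j = int(np.argmin(d))
            if d[j] <= search_radius:
                matches.append((i, j))      # particle i paired with particle j
        return matches
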
It is widely considered that PTV methods are slightly less precise (more noisy) than PIV, due to the fact that only a single particle is tracked at a time, though this may in part just be because PIV measurements are in some sense a spatial average of the measured motion of several particle images, and averaging inherently reduces random error. However, it has been shown that PTV potentially has much higher spatial resolution than PIV, limited only by the precision of the method used to locate each particle, while the spatial resolution of PIV is limited by the very averaging that tends to reduce its velocity error [40]. Additionally, PIV suffers from certain well-known bias errors near walls, especially if reflections are present, while PTV does not, again because it tracks individual particles instead of groups in a region. PIV also typically needs a minimum number of particles in each ROI to make a measurement, while PTV has no such limitation, making it attractive for cases where it is difficult or impossible to obtain a high seeding density.
These effects mean that the correct choice of a processing algorithm (PIV or PTV) is dependent on the exact experimental conditions being tested. For example, a study of a highly resolved boundary layer might have difficulty obtaining enough particles to get the spatial resolution desired; here PTV might be the best choice [45], especially if time-averaged quantities are of more importance than instantaneous behavior. PTV can also be a good choice in multiphase problems when more than one particle type is present in the flow, each denoting a different fluid type or source (such as in mixing), or in cases where one particle is meant as a flow tracer for the fluid phase, and the others might be bubbles, larger solid particles, or both [43,46]. In these cases, an identifying feature can be used to sort the particles into different groups for further analysis. Similarly, PTV-type methods can be used to study the breakup of sprays and aerosols. Finally, PTV methods are also experiencing renewed interest in the research community due to the increasing development of volumetric imaging approaches, many of which inherently result in a list of identified particle images in reconstructed regions. As a result, since the 3D correlations required for PIV are computationally expensive, it can be more efficient to simply perform PTV instead. Additionally, if image sequences are available, the ability to form particle trajectories also helps reduce one of the major limitations of many volumetric methods, that of "ghost particles" created during the image reconstruction steps. See Chapter 11 for more discussion of these concepts.

Stereo PIV

Over the years, many extensions to the basic single-camera, two-velocity-component planar PIV algorithm discussed earlier have been developed for a wide variety of specialized needs, many requiring additional hardware and additional processing algorithms. One common adaptation is the use of two cameras in a stereoscopic arrangement (see Figure 10.16) to measure three components of velocity within the planar light sheet. In fact, some now consider it to be the default PIV implementation whenever practical, as in addition to the extra velocity information, the use of two cameras allows for more rigorous correction of several types of calibration errors that commonly affect single-camera experiments. For more information on stereoscopic PIV, interested readers are encouraged to consult other references, such as the textbook by Adrian and Westerweel [17] and the guidebook edited by Raffel et al. [16].

FIGURE 10.16 Schematic of a simple stereo PIV experiment. Cameras 1 and 2 are offset from perpendicular relative to the laser sheet, and therefore the particle images show perspective effects that allow a third velocity component to be computed. Scheimpflug mounts are often used to keep the tilted field of view in focus.

10.4 experimental design

Although understanding and proper implementation of the displacement estimation methods used in PIV are critical to obtaining accurate measurements, equally important is the initial experimental design. If the experiment is not set up properly, there is little that can be done later to extract the desired result, and in fact the results can even be misleading. On the other hand, a well-executed experiment designed with the limitations of the algorithm and the available hardware in mind will lead not only to better results but also to easier analysis. The following section outlines some basic guidelines that should be considered when planning your own experiments.

Flow tracers Since optical laser-based velocimetry techniques are built on images of translating particles, proper consideration should be given to how the flow is going to be seeded and with what type of particles. The basic assumption that underlies both PIV and PTV (and, for that matter, LDV and PDA) is that the particles being observed are in fact flow tracers and follow the underlying flow with minimal deviation. As methods for particle seeding and the requirements for a particle to be an accurate flow tracer have been covered previously in Chapter 4, we will not spend any more time on the topic here.

Camera selection In early implementations of PIV, digital cameras lacked the resolution and speed to sample the flow at a reasonable rate. Instead, researchers used a variety of film cameras, which limited the number of frames that could be acquired and delayed the start of analysis. Since then, however, digital image sensors have made enormous improvements, making the use of film essentially obsolete, though high-end film equipment can still produce image sizes and resolutions beyond any but the most expensive custom sensors. Therefore, we will only consider the use of digital video cameras here. For PIV experiments, consumer-level cameras are usually not ideal due to requirements such as higher image bit depth and sensitivity, low sensor noise, precise control of exposure timing and duration, and high data throughput. Additionally, most consumer cameras acquire color images through the use of filters on the sensor (such as a Bayer pattern) that reduce the acquired light and decrease the true spatial resolution. The interpolation necessary to reconstruct the full image also creates artifacts that can reduce the accuracy of PIV cross-correlation [47,48]. Therefore, in almost all cases, cameras designed for scientific or industrial use are employed instead.
These cameras can be broadly classified into two main categories based on the type of sensor: charge-coupled devices (CCDs) and CMOS. Differences in the way the two technologies work have led to slightly different capabilities, and therefore to specialization in the roles in which the two camera types are usually found. CCD cameras are typically produced in larger sensor sizes than CMOS cameras, with the trade-off of lower maximum frame rates, typically 5–50 Hz. For CCD cameras, bit depths of at least 12 (2^12 = 4096 discrete intensity levels) are standard, with some models having 14- or 16-bit modes. Although higher bit depth does not change the accuracy of the cross-correlation much on its own, it does allow a wider dynamic range for image acquisition, making it easier to sample very dim images and reducing image noise. The low image noise in conjunction with the large sensor size means that the dynamic spatial range for such cameras is typically much higher than for CMOS designs, allowing researchers to resolve the multiple decades of length scales needed for acquiring turbulence spectra.
In contrast, most CMOS cameras used for PIV have smaller resolutions, around 1–4 MP, offer either 8 or 12 bits, and are less sensitive to light. In exchange, however, these cameras are capable of much higher frame rates: values of 1–10 kHz are typical at full sensor resolution, and values at or above 500,000 frames/second are possible when using small subsets of a sensor. Because of this capability, they can be paired with high repetition rate lasers operating at similar pulse speeds to achieve time-resolved velocity vector fields.
Both camera types are also available in models that can acquire two images spaced at very small time intervals (values under 1 μs are typical) with every triggered acquisition. In this mode, typically the start and duration of the first frame can be controlled, but the second frame is triggered automatically and its exposure time is set by the readout time of the first frame (Figure 10.17c). Thus, care should be taken to minimize background illumination so that both frames will have similar image characteristics. For sensors without this feature, a similar effect can be achieved by holding each exposure open until the next frame is ready to start and placing the first laser pulse at the end of the exposure in the first frame and the second at the beginning of the second (Figure 10.17b). The limitation here is that the minimum interframe time is usually larger.
When selecting a camera for a new PIV experiment, typically the decision first needs to be made between CCD and CMOS models. If time-resolved data are required, then a high-speed CMOS camera may be the only choice; otherwise, a CCD is typically the better option due to better image quality. To select among different options, start with the desired flow rates, minimum required spatial scales, and desired field of view. The ratio between the smallest and largest scales will dictate the required sensor resolution. That number can be used in conjunction with the camera's physical sensor size to determine the magnification needed. Using that number, the apparent particle image diameter can be checked, an interrogation window size can be selected, and the Δt required to satisfy the one-quarter rule and the other design parameters implied by Equations 10.25 through 10.27 can be calculated. Eventually, a final camera model will be selected that best matches the desired performance and required trade-offs. Alternately, if only specific cameras are available, experimental parameters can be planned backward from the hardware capabilities. For reference, Table 10.3 summarizes the performance characteristics of several representative modern PIV cameras. Note in particular the 5.5 MP CMOS camera, which uses a newer variation on the CMOS sensor type known as "scientific CMOS." Cameras using these sensors are designed to perform similarly to traditional CCD models but with higher frame rates; they are advertised as having better image quality and lower noise, thanks to their CMOS design, but are usually slower than traditional CMOS cameras.

FIGURE 10.17 Some common synchronization patterns for PIV lasers and cameras: (a) evenly pulsed, single exposures; (b) double pulsed, even exposures (exposure t_B = t_A); (c) double pulsed, dual exposures (t_B >> t_A). (a) and (b) are used with cameras that can only expose a single time per trigger, while (c) is useful when the camera is capable of dual exposures. Double-pulse setups are common for high-speed flows, while time-resolved experiments sometimes use evenly pulsed timings.
Let us return to the example in the "Performance of basic PIV algorithms" section and derive some of the experimental parameters we assumed there using the specifications of the real cameras listed here; the real experiment used the 10.7 MP CCD camera in Table 10.3. Our desired field of view in that experiment was approximately 75 × 50 mm so that we could maintain a consistent magnification and still capture the full width of the jet as it spread downstream. Dividing the desired field of view by the pixel count of the sensor, we get approximately 18.7 μm/pixel for the resolution of the resulting image, making 18 μm/pixel a reasonable choice. To achieve this, we will need to use optics giving us a magnification of M = 9 μm/18 μm = 0.5×; here, we can use the desired pixel resolution of the image as the object size, s_o, and the physical size of the pixel on the sensor as the image size, s_i, in Equation 10.10. For this experiment, a Laskin nozzle was used with DEHS to produce particles with a diameter of about 1 μm. Looking only at the resolution, it would seem that this would violate the optimum particle image size, but applying Equations 10.5 and 10.6 and neglecting out-of-focus effects, we see that if we set f# = 11, we can still achieve an apparent size of dτ = 2.4 pixels due to the large diffraction diameter at this magnification.

d_s = 2.44 (1 + 0.5)(11)(0.532 μm) = 21.4 μm ≈ 2.4 pixels

d_τ ≈ √[(0.5 × 1 μm)² + (21.4 μm)²] = 21.4 μm ≈ 2.4 pixels
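As a quick numerical check, the calculation above can be scripted. The following is a minimal sketch in Python, assuming Equations 10.5 and 10.6 take the standard diffraction-limited forms d_s = 2.44(1 + M) f# λ and d_τ = [(M d_p)² + d_s²]^(1/2); all numerical values are those of the worked example:

import math

wavelength = 0.532e-6    # Nd:YAG second harmonic, m
M = 0.5                  # magnification from the example above
f_number = 11.0
d_p = 1.0e-6             # DEHS droplet diameter, m
pixel_pitch = 9.0e-6     # pixel size of the 10.7 MP CCD, m

# Diffraction-limited spot size (assumed form of Equation 10.5)
d_s = 2.44 * (1.0 + M) * f_number * wavelength

# Apparent particle image diameter (assumed form of Equation 10.6)
d_tau = math.hypot(M * d_p, d_s)

print(f"d_s   = {d_s * 1e6:.1f} um = {d_s / pixel_pitch:.1f} pixels")
print(f"d_tau = {d_tau * 1e6:.1f} um = {d_tau / pixel_pitch:.1f} pixels")

Running this reproduces the values above (21.4 μm, about 2.4 pixels) and makes it easy to explore how f# trades off image brightness against particle image size.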

Previously, we stated that we were going to interrogate the image using 32 × 32 pixel windows. Based on calculations of turbulent length scales for this experiment, it was determined that a spatial resolution near 500 μm would be required to resolve features down to at least the Taylor microscale. The choice of ROI size predominantly governs this value, and for 32 pixels the physical size of the window will be 576 μm, which is close to that target. The exact resolution used in processing the data will need to be selected based on the actual measured flow, the image quality, and the seeding density achieved. We now have enough information to finish evaluating N_I F_I F_O F_Δ for the example in the "Performance of basic PIV algorithms" section.

Table 10.3 Performance capabilities of several camera models typically used for particle image velocimetry

Sensor type | Maximum resolution (pixels) | Pixel size (μm) | Maximum frame rate (frames/second) | Minimum interframe time (μs) | Readout noise (RMS e−) | Bit depth
CMOS | 1024 × 1024 (1.0 MP) | 20 | 3,600 | <1.0 | — | 12
CMOS | 1024 × 1024 (1.0 MP) | 20 | 20,000 | <1.0 | — | 12
CMOS | 1280 × 800 (1.0 MP) | 28 | 25,600 | 0.5 | — | 12
CMOS | 2560 × 1600 (4.1 MP) | 10 | 800 | 1.4 | 21 | 12
CMOS | 2560 × 2160 (5.5 MP) | 6.5 | 100 | 0.1 | <3 | 16
CCD | 2048 × 2048 (4.2 MP) | 7.4 | 32 | 0.2 | 8 | 12
CCD | 4008 × 2672 (10.7 MP) | 9.0 | 4.8 | 0.2 | 30 | 12
CCD | 6600 × 4400 (29.0 MP) | 5.5 | 3.6 | 1.0 | 8 | 12
CCD | 6600 × 4400 (29.0 MP) | 5.5 | 2.4 | 0.3 | 13 | 14

Setup of laser optics Once the models have been built, the particles have been selected, and the camera and desired field of view chosen, the next task is to decide how to illuminate the flow. Most lasers used for PIV have a beam profile that is Gaussian in shape and a diameter on the order of 5 mm. This is much thicker than we would usually like for a light sheet, but much too small to illuminate a useful field of view. As a result, we need to expand the beam along one axis to form a sheet or fan and narrow it in the other. Easy-to-use optical assemblies with compact and easily adjusted designs are available from most major PIV vendors to assist in this task, but inevitably no single design can provide for all possible experimental needs. When that occurs, the use of individual optical components can provide much greater flexibility in designing an experiment.
The types of lenses typically used for PIV experiments are usually simple single-element spherical or cylindrical designs. Because we will be working with a single wavelength of light, it is not necessary to use achromatic lenses. Uncoated optics are acceptable as well, though antireflective coatings can be useful in reducing backscatter and losses off of individual elements, and the lenses must be rated for high power densities. Even with laser-rated lenses, make sure to keep components away from any focal points along the beam path. Near the beam waist the power density grows as the cross section shrinks, and it can be easy to move a lens through such a point while trying to adjust the beam shape past the next component.
In this section, a basic three-lens arrangement for expanding the beam, as shown in Figure 10.18, will be discussed, though the number of variations on these principles is essentially unlimited. In the design shown here, an initial cylindrical lens (element A) is used to begin expanding the beam along the plane that will become our laser sheet. Next, a second cylindrical lens (element B) oriented in the same plane as the first can be used to re-collimate the beam at the desired width. To make this work, the focal point of the first lens must be placed exactly at the focal point behind the second lens. With this arrangement, any light rays focused through the first lens will be captured by the second and returned to a parallel orientation, with the final ratio of beam diameters, ϕB/ϕA, given by the ratio of the focal distances, fA and fB, and the distance between the lenses equal to the sum of the focal distances.

ϕ_B/ϕ_A = f_B/f_A;  L = f_A + f_B  (10.32)
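For planning purposes, Equation 10.32 is simple enough to evaluate in a few lines. The following sketch assumes two positive (convex) cylindrical lenses and purely paraxial behavior; the focal lengths are illustrative values, not recommendations:

def beam_expander(f_A, f_B, beam_in):
    """Collimated two-lens expander: returns output width and lens spacing."""
    ratio = f_B / f_A          # phi_B / phi_A = f_B / f_A
    L = f_A + f_B              # separation equals the sum of focal distances
    return ratio * beam_in, L

width, spacing = beam_expander(f_A=0.05, f_B=0.50, beam_in=0.005)  # meters
print(f"sheet width ~ {width * 1e3:.0f} mm, lens separation {spacing * 1e3:.0f} mm")

A 5 mm beam through a 50 mm/500 mm pair would thus expand to roughly 50 mm over a 550 mm lens separation.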

The drawback to such an arrangement is that the second lens must be at least as large as ϕB, and larger optics can be quite expensive (though if big enough, cheaper lenses not specifically designed for laser use are sometimes feasible). If the desired field of view is too large for the available lenses, optic B can be moved slightly away from distance L so that the beam continues to open at a shallow angle, or a third optic can be used farther along to expand the beam beginning closer to the ROI. Alternately, lens B can be omitted altogether, and a very long focal distance lens can be selected for element A to create the desired spread. In any case, the opening angle should be kept as small as possible to limit the final size of the laser sheet; since the total light energy per pulse is finite, the larger the illuminated volume, the dimmer the resulting beam (and thus the particle images) will appear. However, if possible, the laser sheet should be expanded slightly larger than the desired field of view because the energy distribution in a laser beam is typically Gaussian, making the edges of the sheet much dimmer than the center [16,17]. It is often desirable to use just the central portion of the beam to achieve a flatter intensity profile, which can make the later PIV processing easier and avoid the need to preprocess the images to correct this.

FIGURE 10.18 One possible arrangement of three cylindrical lenses (A, B, and C) to expand a laser into a collimated light sheet.
With the beam expanded to an appropriate size, it is usually advantageous to narrow the beam as much as possible in the out-of-plane direction. This is the job of element C, which is usually a convex cylindrical lens with a very long focal distance oriented perpendicular to A and B. This focal distance should be selected so that the narrowest portion of the beam is near the center of the field of view, and it should be as long as possible so that the waist remains small for the maximum possible distance. In Figure 10.18, element C is shown later in the beam path than A and B, but it can be placed before or between them if more convenient. It is also common to use spherical lenses instead of cylindrical ones for elements A and B.
In addition to shaping the beam to the desired size and thickness, it is usually necessary to route the beam around one or more corners. As with selecting lenses, a simple household mirror is rarely sufficient; instead, special coatings tuned to the particular wavelength of the laser must be used. These coatings are often designed for optimum reflectivity at particular angles; 0° and 45° designs are most common, and reflectivity is often dramatically worse away from the design angle. Additionally, with coated optics the treated side should typically face toward the incident beam. Finally, some coated mirrors are also more efficient with a particular polarization of light and in this case should be selected and oriented to match the polarization of your laser.
Finally, a word about proper safety and procedure when working with laser optics: alignment of the laser optics is the most common time for an accident to occur when working with high-power lasers. Make sure you follow all guidelines that have been explained to you during training. Assuming that your laser is Class 3 or 4 like most PIV lasers, never work without wearing goggles rated for the wavelength and power of your device, and make sure that everyone else who could be struck by a beam (especially if they are doing something else) is also wearing them. It is very tempting to remove the goggles "just for a moment," but this is a dangerous habit. Instead, use proper beam visualization aids such as beam cards to help locate and monitor the beam. Additionally, when possible work with the lowest energy setting your laser supports, align the beam path one component at a time, and always place beam stops rated for your laser at the end of any beam path. This includes reflected beams; components such as beam splitters and mirrors typically allow some percentage of light to be transmitted as well as reflected, and there are almost always reflected beams shining backward from every lens. These additional beams should be blocked or sent directly back toward the laser. Finally, when everything is correctly positioned, consider using beam tubes or other shrouds to cover the optical path. Depending on how they are constructed, they will not only protect you from inadvertently placing objects in the beam path, but also protect the optics from dust and scratches and help prevent them from being moved accidentally. If you are ever in doubt about how to work with a laser safely, contact a knowledgeable supervisor or your local laser safety officer to discuss the safety rules and regulations particular to your institution and facility, and follow their recommendations.

Camera calibration Once you have established the desired camera field of view and laser position, the next step before recording flow data is typically to acquire a series of images that can be used to determine the physical location and size of the camera field of view. This process is referred to as camera calibration and is most often performed using a calibration target. These targets are usually gridded with evenly spaced lines or symbols printed or engraved in high contrast to make locating them accurately in the resulting images easy, and often include reference marks to establish the position and orientation of the overall object. Such a target should be manufactured to high precision because any error in construction will translate directly into systematic uncertainty in the resulting physical magnitudes of the estimated velocity fields (see the "Uncertainty due to image calibration" section for further detail). However, due to geometric constraints, it is not always possible to place and remove such a target without disturbing the rest of the experiment. In these cases, other options can be used, including the inclusion of fiducials (objects with known size and position) in the experimental domain that will remain in place during testing, or the measurement of an experimental model or other feature that will be present in the field of view during the experiment; sometimes several of these approaches are combined for redundancy.
When using a calibration target, the target should be placed within the camera field of view, filling as much of the ROI as is practical, and it must be aligned with the center of the laser plane. Recall that the apparent magnification of the image is dependent on the distance of the object from the lens system (Equation 10.10), so if the target is shifted or twisted relative to the laser sheet the apparent magnification as determined from the calibration will be different from the true magnification achieved in the particle images. The camera should then be set to the desired focus that will be used during the experiment, and all settings locked in place if possible. Any changes between the acquisition of the calibration images and the experiment will invalidate the calibration data. Measurements of the target position can also be used to establish the relationship between the camera's field of view and the geometry of the rest of the experiment, for example, the distance downstream relative to an airfoil.
For 2D planar PIV with a camera arranged perpendicular to the laser plane, it may be sufficient to use the resulting images to simply measure a uniform magnification, M, of the imaging system as in Equation 10.4. However, the images should also be checked for any distortion that may cause M to vary across the image. Common sources of such effects include the camera optics (barrel and pincushion distortion) and windows or tunnel walls in the test facility. If the camera is oriented off-axis with respect to the laser sheet, it may be necessary to perform a more general calibration of the relationship between the recorded image domain, X, and the physical world coordinates, x, as in Equation 10.3. This transform between coordinate systems can take several forms. One of the simplest is a pinhole camera model of the imaging system that is based on the camera angle and lens properties [49]:

[X, Y]ᵀ = (f_effective/z′) [x′, y′]ᵀ,  x′ = Rx + T  (10.33)

where R and T are a rotation matrix and translation vector that convert world coordinates x to a camera-oriented system x′, followed by a pinhole transformation to image coordinates using a lens with effective focal length f_effective. When T = 0, this is equivalent to a perspective transform [50]:

X = (a₁₁x + a₁₂y + a₁₃)/(a₃₁x + a₃₂y + 1),  Y = (a₂₁x + a₂₂y + a₂₃)/(a₃₁x + a₃₂y + 1)  (10.34)

with the a_ij being coefficients to determine.
For more complicated setups where there is also distortion in the camera images or where precise alignment of the image domain is required, a more complex model can be used. This often takes the form of a polynomial fit between the coordinate systems [51], with X being the image coordinates and x describing the world space, as previously defined:

X = F(x) = Σ a_ijk xⁱ yʲ zᵏ,  i, j ∈ [0, 3], k ∈ [0, 2], i + j + k ≤ 3  (10.35)

This approach is usually the default for stereo- or tomo-PIV experiments because, in order to successfully reconstruct out-of-plane velocities and volumetric particle data, the images must be matched to each other to within less than a particle diameter of error, and for optimum results usually to within less than a pixel.
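In practice, fits like Equations 10.34 and 10.35 are obtained by least squares from the detected target marks. The following is a minimal sketch for the planar (single-z) form of Equation 10.35, fitting one image coordinate at a time; the arrays world_xy and image_X are placeholders for marker positions measured from a calibration image:

import numpy as np

def poly_terms(x, y):
    # All monomials x^i y^j with i + j <= 3 (10 terms)
    return np.stack([x**i * y**j
                     for i in range(4) for j in range(4 - i)], axis=-1)

def fit_mapping(world_xy, image_X):
    # Solve for the coefficients a_ij in a least-squares sense
    A = poly_terms(world_xy[:, 0], world_xy[:, 1])
    coeffs, *_ = np.linalg.lstsq(A, image_X, rcond=None)
    return coeffs

One such fit is performed for each image coordinate (X and Y) and, for stereo or tomographic systems, for each camera.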
In multiple-camera systems, further calibration steps can be taken by leveraging the additional information provided by repeated views of the same image domain. One such refinement for stereo and tomographic setups is to compare the images of particle fields from multiple cameras. Using triangulation-based algorithms, so-called "self-calibration" methods [52,53] have been developed that allow any discrepancies between the target location and the actual laser plane to be corrected; for tomographic PIV in particular, this step is essential to successful reconstruction of the imaged volume. Such methods have also been adapted to two-component measurements, with the addition of a second reference camera during calibration, for use in cases where the accuracy is critical [54].
For planar two-component PIV, if a single uniform magnification has been determined to be sufficient for the desired level of precision in the physical velocities, the acquired flow images can be analyzed as is, and the resultant velocity will simply be a scalar multiple of the measured displacements. On the other hand, if there is distortion and a more complex fitting function has been deemed necessary, then the data must be transformed using the calibration function. This can be applied either as a dewarping function to the images, recasting them to appear as if they were taken in an ideal rectilinear coordinate system from an orthogonal camera orientation, or as a post-processing step applied to the resultant vectors after cross-correlation. Typically, the first approach is preferred because the transforms are simpler; to transform the vector field, not only does the data need to be dewarped to account for shifts in position, but the vectors must also be reoriented to account for the change from the original curvilinear coordinate system. The math for arbitrary transforms in the latter case is not straightforward. However, as previously discussed in the context of iterative image deformation algorithms, the PIV algorithm has also been shown to be sensitive to the manner in which image dewarping is performed, and so higher-order transforms such as B-splines are often preferred to simple bilinear or bicubic resampling [30], increasing the computational cost of the processing.

10.5 post-processing

Data validation and replacement As noted previously in the "Performance of basic PIV algorithms" section, after all the interrogation regions have been processed for a given pair of images, a certain percentage of these displacement vectors will be obviously incorrect to casual inspection. Typically, these failures are due either to an insufficient number of particles in a given window, or to the shear and rotation being too large for a simple translation to capture the average motion. Although tuning the experimental design can help reduce the number of failures, due to the trade-offs required to extract the maximum amount of information from a data set, a certain level of failed correlations is inevitable. Once discovered, we want to remove as many of these failed measurements as possible so that they do not contaminate the following data analysis steps, while removing as few correct measurements as possible. This filtering, or vector validation as it is commonly called, is typically performed not only at the end of a sequence of iterative correlation steps, but also in between, so that the predictor field derived from the previous step is as clean as possible, reducing the chance for error growth and unstable convergence.
However, although such incorrect vectors are usually obvious to casual inspection using our physical intuition, even in a sparsely sampled vector field it is impractical to remove them manually from even a single image pair, much less from the thousands of data fields in a typical test case that might be one of dozens or more runs in a complete experimental study. For example, a series of three thousand 1 MP images with a vector spacing of 16 pixels yields a total of 12 million vectors that must be examined. If 1% of these are failed measurements and we can manually mark them at a rate of one per second, evaluating this data set will take over 34 hours! Add to that a common processing strategy of iterative image deformation, validating each field after every pass, and it is obvious that the task is impractical to perform by hand for all but the smallest or most crucial experiments.

However, several methods have emerged over the years for the automatic validation of PIV data fields. Most of them are based on the principle that the measured displacement at each location should be statistically similar to the vectors nearby, either spatially or, in the case of stationary flows, temporally. One of the most popular methods for making this comparison is known as universal outlier detection, or UOD [55]. It is so named because the method attempts to normalize the test criterion (the local median absolute deviation) by the local standard deviation and an estimate of the error level so that it can be used across an entire flow field with the same threshold. Isolated failures can often be detected and removed with only a single iteration of the method, while clusters of failed points can be handled with multiple passes and tightened thresholds. The method works very well in most circumstances and is computationally efficient, though it can struggle near walls where a sharp boundary layer creates a strong and increasing gradient, especially in laminar flows where there is little longitudinal variation. In addition to the originally described implementation for regular grids, extensions to irregularly sampled data such as those found in PTV experiments have also been devised [56].
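The core of the UOD test is compact enough to sketch directly. The version below applies the normalized median test to one velocity component on a regular grid, ignoring the image border for brevity; the values eps ≈ 0.1 pixel and a threshold of 2 follow common published practice [55] but should be tuned to the data:

import numpy as np

def normalized_median_flags(u, eps=0.1, thresh=2.0):
    """Flag vectors whose normalized deviation from the local median is large."""
    ny, nx = u.shape
    flags = np.zeros_like(u, dtype=bool)
    for i in range(1, ny - 1):
        for j in range(1, nx - 1):
            nbrs = np.delete(u[i-1:i+2, j-1:j+2].ravel(), 4)  # 8 neighbors
            med = np.median(nbrs)
            r = np.median(np.abs(nbrs - med))   # median absolute deviation
            flags[i, j] = abs(u[i, j] - med) / (r + eps) > thresh
    return flags

In a full implementation, the test is applied to each velocity component and the resulting flags are combined.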
Such statistical approaches can also be applied in the temporal dimension of the data. For the case where the data have been oversampled in time (time-resolved PIV, for instance), the principles of UOD can be applied in the time direction as well, either alone or simultaneously with a spatial search region. Alternately, if the signal is statistically stationary in time, every measurement in a data set can be compiled at a given location and the group statistics calculated. Each measurement is then compared to the temporal statistics to see if it falls within some expected interval: ±3 standard deviations of the mean, for instance, or within the central 95% of all samples. Such analysis can be conducted independently for each spatial direction, but performing it simultaneously in both u and v is often a better choice. In that case, the bounds typically become an inclined ellipse, allowing potential covariance between the vector components to be preserved.
It is important to note that the previously mentioned statistical methods only detect failed measurements that are outliers compared to adjacent values; measurements that are incorrect but similar to their surrounding measurements will go completely undetected. Alternatively, individual measurements can be evaluated based on characteristics of the image cross-correlation procedure that generated them. Such a procedure should capture both statistical outliers and those incorrect measurements that remain close to the local mean. This is often done based on the peak ratio, or detectability, of the correlation plane. The peak ratio is typically defined as the ratio of the heights of the primary and secondary peaks. Because the peak height grows relative to background noise as the number of correlated particles increases, a larger value should indicate a more reliable measurement. It is typically assumed for standard cross-correlation that peak ratios above 1.2 can be considered reliably valid [8,57], whereas if failed measurements are to be avoided, vectors with peak ratios below 2.0 could be discarded [58]. The exact value can be left as a tuning parameter for the user. Although attractive conceptually, this approach is not universally successful, since sometimes correct measurements can still have a small peak ratio, whereas in certain circumstances (such as very low seeding densities) the peak ratio can be quite high even for incorrect displacements. Instead, it is probably better to apply it in combination with one or more statistical approaches.
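A peak-ratio test is equally simple to prototype. This sketch finds the primary peak of a correlation plane, masks a small neighborhood around it, and takes the next-highest value as the secondary peak; real implementations are usually more careful about peak overlap and the noise floor:

import numpy as np

def peak_ratio(corr):
    i, j = np.unravel_index(np.argmax(corr), corr.shape)
    primary = corr[i, j]
    masked = corr.copy()
    masked[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2] = -np.inf  # hide primary
    return primary / masked.max()

Vectors would then be accepted only where peak_ratio(corr) exceeds the chosen threshold (e.g., 1.2 or 2.0, as discussed above).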
An example of these techniques in action can be seen in Figure 10.19 for a particle image taken from the experiment shown in Figure 10.3 that we have been discussing. In this image, only the jet was seeded (which may bias later post-processing), leaving gaps where the seeding density is too low for a successful measurement; see [24] for additional details on how the images were acquired. We can exclude these points using a correlation peak height test; here, a value of about 50% of the maximum peak height worked well (Figure 10.19b). However, some correlations still failed and need to be identified by other means (the bold vectors in Figure 10.19c). While a velocity-based threshold can identify gross outliers, others, like that shown in Figure 10.19d, can best be found using statistical methods; here, a UOD approach was used.
FIGURE 10.19 (a) An example particle image field from a jet flow experiment, with contrast enhanced for better visibility. (From Gerashchenko, S. and Prestridge, K., J. Turbulence, 16, 1011, 2015.) (b) Height of correlation peaks at each vector location. (c) Vector field computed from the marked region in the particle field. (d) Close-up of the neighboring points around one of the remaining outliers in the processed region.

Besides the previously mentioned vector validation routines, many other approaches have been proposed. They include statistical methods such as a bootstrapping histogram analysis method [59] or comparison of the measured field to low-order proper orthogonal decomposition (POD) models [60] (see Chapter 2 for more details on POD), both of which also provide candidate replacement values. The results using these more advanced methods have been reasonably good, but typically their much larger computational cost has limited their routine use as compared to more traditional methods.
Once failed measurements have been identified, we are left with gaps in our data fields. Depending on the application, it may be sufficient to simply mark the measurement as incorrect so that it can be excluded from further processing, such as in the calculation of turbulence statistics. On the other hand, when preparing plots for presentation, calculating derivatives, or using the field in an iterative correlation step, we may need to replace the measurement with a reconstructed value based on the surrounding information. Often, especially in intermediate steps of a larger correlation algorithm, it is sufficient to use simple methods, but for post-processing final velocity fields, more advanced methods are sometimes valuable even though they are often more computationally expensive. Kriging, for example, can produce very accurate reconstructions of missing data points in many circumstances [61]. Other approaches that have been explored are the so-called "Gappy POD" reconstruction methods [62,63], which have accuracy similar to Kriging but can compute reconstructed values that are more closely tied to the physical modes observed in flow fields. In the end, researchers need to choose a method that best balances what they know about the physical properties of the flow, the expected accuracy and computational costs of a given method, and their goals for the eventual use of the results.
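As an illustration of the simple end of this spectrum, the following sketch replaces each flagged vector with the mean of its valid neighbors, which is often adequate inside intermediate iterative correlation passes:

import numpy as np

def fill_with_neighbor_mean(u, bad):
    """Replace flagged entries of u with the mean of unflagged 3x3 neighbors."""
    filled = u.copy()
    ny, nx = u.shape
    for i, j in zip(*np.nonzero(bad)):
        i0, i1 = max(i - 1, 0), min(i + 2, ny)
        j0, j1 = max(j - 1, 0), min(j + 2, nx)
        nbrs = u[i0:i1, j0:j1][~bad[i0:i1, j0:j1]]
        if nbrs.size:
            filled[i, j] = nbrs.mean()
    return filled

Kriging or Gappy POD would replace the neighbor mean with a statistically or physically informed estimate, at a correspondingly higher cost.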

Derivative estimation Because many features of fluid dynamics are best modeled by differential equations, accurate extraction of the derivatives of the measured displacement or velocity fields is required in almost all PIV experiments. In fact, it is for this very reason that one of the main strengths of the PIV method is the spatially resolved data that it provides, as it is one of the only experimental approaches that can do so. However, since we are dealing with discretely sampled data contaminated with an unknown level and distribution of noise, care must still be taken to ensure that the derivative fields are computed in a way that balances the formal order of accuracy of the chosen method with its noise-amplification properties [64,65]. Many higher-order methods that are attractive for use with smooth computational data fail badly when used with noisy experimental data.
One of the most common robust methods for computing derivatives of a PIV field is a simple second-order, central difference operator:

du_i/dx = (u_{i+1} − u_{i−1})/(2Δx) + O(Δx²)  (10.36)

where
u_i is the value of the field at the ith grid location
Δx is the spacing between the samples

This method has the advantage of being very straightforward and fast to compute, as well as being a reasonable match in its noise-filtering properties to the frequency response of the PIV cross-correlation algorithm itself [65,66]. Despite this caution, several researchers have shown success applying advanced methods with higher-order accuracy that can still maintain good performance even in the presence of noise, including the noise-optimized fourth-order hybrid compact Richardson scheme [65] and surface fits that can be analytically differentiated, such as radial basis functions [67].
Regardless of the choice of differentiation method, it is often still beneficial to pre-smooth the velocity fields. This can be done with a number of different techniques, such as a simple Gaussian smoothing with a strength and radius chosen to reflect the limits of the underlying PIV window size and grid spacing, or with more complex techniques such as filtering the data using a low-order POD model [67]. Often several methods will need to be tried with a particular data set to evaluate how much of the small-scale structure that is amplified by differentiation is due to physical processes in the measured flow that should be preserved, and how much is due to noise and must be suppressed by the use of lower-order methods and smoothing operations.
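Both steps, smoothing and central differencing, fit in a few lines. The sketch below computes out-of-plane vorticity from a planar field using a light Gaussian pre-smooth followed by the second-order scheme of Equation 10.36; the smoothing width sigma is a tuning choice, not a recommendation:

import numpy as np
from scipy.ndimage import gaussian_filter

def vorticity(u, v, dx, dy, sigma=1.0):
    # Pre-smooth to suppress noise that differentiation would amplify
    us = gaussian_filter(u, sigma)
    vs = gaussian_filter(v, sigma)
    # np.gradient applies central differences (Eq. 10.36) in the interior
    dvdx = np.gradient(vs, dx, axis=1)
    dudy = np.gradient(us, dy, axis=0)
    return dvdx - dudy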

Coherent structure estimation In aerodynamics applications, as in most branches of fluid mechanics, coherent structures are often of great importance, carrying parcels of fluid great distances without separating, or alternately increasing mixing, transferring momentum to and away from bodies in the flow, and influencing the generation of aeroacoustic noise. Even in turbulent flows where traditional analysis has treated the motion as random, with properties evaluated in a statistical sense, research shows that the shape and behavior of local fluid parcels often take on predictable and consistent forms (such as the hairpin vortices in boundary layer flow). Because PIV is a spatially resolved method, it opens up the opportunity to study these structures, which tend to maintain their properties over time (hence the term "coherent"), more comprehensively than with point-based measurements and more quantitatively than with flow visualization techniques. Methods for the analysis and detection of coherent structures can be divided into two main approaches depending on the viewpoint used: either Eulerian or Lagrangian.

Eulerian approaches In an Eulerian viewpoint, flows are studied in terms of their velocity fields, and the coherent structures we are interested in most often take the form of vortices. While the obvious approach might be to look for regions of high vorticity, this definition has some immediate problems. First, the choice of a threshold above which a parcel of fluid should be considered part of a vortex is arbitrary and can vary from location to location throughout a flow. More seriously, what we usually mean when we talk about vortices is a rotating structure with correlated motion that remains connected over time. However, vorticity measures the rotation of individual fluid elements and is also present in shearing flows in which no bulk rotation of structures occurs; Poiseuille and Couette flows are good examples of this.

Instead, researchers have turned toward topological analysis of the flow using the velocity gradient tensor. Essentially, we seek to find fluid trajectories that are either closed or spiralling around the candidate location. Unfortunately, such a definition is dependent on the frame of reference of the observer, and the paths we observe depend on the rate of translation of the observer relative to the flow. Instead, we would like definitions that are Galilean invariant and give the same result independent of the choice of nonrotating reference frame (obviously, if we change our rotation rate to match a vortex, it will no longer appear to spin!). Depending on our definition of a vortex, additional criteria may also apply. Based on these basic considerations, four metrics for the detection of a vortex have become popular and will be described in the following section: λci, Δ, Q, and λ2.
The first three quantities are all very closely related to each other and are derived from critical point theory and the study of the motion around a point with zero velocity relative to the observer, or in other words in a reference frame traveling with a fluid element. The motion of the flow around that point is then expanded using the velocity gradient tensor, ∂u_i/∂x_j, in a linear first-order Taylor expansion around the origin:

ẋ_i = (∂u_i/∂x_j) x_j  (10.37)

where ẋ_i are the trajectories of fluid elements at positions x_j with instantaneous velocities u_i. It is important to note that this expansion assumes that we are not on a no-slip boundary; the analysis for that case is similar but slightly different.
Analyzing this system, it can be shown that the eigenvectors of the velocity gradient tensor determine the principal directions of the flow around the point, and the eigenvalues govern the rate and type of motion in those directions. If two of the eigenvalues (λcr ± iλci) are complex, the real part determines whether the flow spirals in (λcr < 0) or out (λcr > 0), and the imaginary part determines the rate of rotation, or swirling strength [68]. The third, real eigenvalue determines the rate of stretching in the third principal direction. Otherwise, all the eigenvalues are real and the flow topology does not contain any swirling motion. For a complete catalog and discussion of the possible eigenvalues of this system, see Chong et al. [69].
This calculation of the eigenvalues of the velocity gradient tensor gives rise to the first method of determining the location of vortex cores in a flow: a flow element has rotating trajectories if its eigenvalues include a complex conjugate pair, and vortices are defined as connected regions where the swirling strength λci is larger than some threshold greater than or equal to zero [68]. This can be supplemented with an additional constraint on the ratio of the in-plane straining to the in-plane swirling strength, λcr/λci [70], to exclude cases where the swirling is not the dominant behavior. Chakraborty et al. have suggested that a value of order 1 (i.e., −1 < λcr/λci < 1) works well.
This can also be determined without computing the eigenvalues, through analysis of the discriminant of the associated eigenvalue problem, Δ. This is referred to as the Δ-criterion; similarly to λci, when a positive value is found (implying spiral motion), connected regions making up a vortex can be determined by isosurfaces of Δ. Although thresholds of 0 for the two criteria are identical, nonzero thresholds are not, with Δ usually being slightly more restrictive.
Alternatively, the second invariant of the velocity gradient tensor, Q, measures the balance between the magnitude of the vorticity, ‖Ω‖², and the magnitude of the strain rate, ‖S‖², and therefore Hunt et al. [71] suggested using a positive value of Q in conjunction with a local minimum in pressure. In particular, when the flow is incompressible, this criterion guarantees spiralling or circular streamlines, though this is not necessarily true for compressible flows. Of the criteria mentioned so far, this tends to be the most restrictive, especially when the flow is incompressible, since the region where Q > 0 is always a subset of the larger space where the eigenvalues are complex.
Finally, Jeong and Hussain [72] have proposed an alternative approach that, for incompressible flows, attempts to compensate for some of the deficiencies of the other metrics. Since large unsteady straining and viscous forces can obscure pressure minima created by the presence of a vortex, their criterion seeks to find a local pressure minimum due only to the contribution of swirling motions. It is defined using the pressure Hessian for incompressible flows, which, when expanded and with the terms due to unsteady straining and viscous effects neglected, yields the following expression:

Ω_ik Ω_kj + S_ik S_kj = −(1/ρ) p,ij  (10.38)

Jeong and Hussain showed that, given those assumptions, local pressure minima exist where at least two of the eigenvalues of Ω² + S² are negative, or in other words where λ2 < 0 for λ1 > λ2 > λ3. Here, the eigenvalues will always be real. This λ2 method has been demonstrated to typically generate more compact vortex core regions than the other approaches already discussed, though sometimes using a threshold slightly less than zero can help reduce the effect of noise. The biggest drawbacks are that, due to the assumptions made in the derivation, the relationship of the modified pressure minima located by the method is not clear, and that the method is not applicable to compressible flows.
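For planar PIV data, where only the in-plane (2 × 2) velocity gradient tensor is available, the three pointwise criteria can be evaluated together, as in the following sketch; out-of-plane gradients are neglected, and for λ2 the smaller in-plane eigenvalue of S² + Ω² is used as a common planar proxy:

import numpy as np

def vortex_criteria(u, v, dx, dy):
    dudx = np.gradient(u, dx, axis=1); dudy = np.gradient(u, dy, axis=0)
    dvdx = np.gradient(v, dx, axis=1); dvdy = np.gradient(v, dy, axis=0)
    # In-plane velocity gradient tensor J at every grid point, shape (..., 2, 2)
    J = np.stack([np.stack([dudx, dudy], -1),
                  np.stack([dvdx, dvdy], -1)], -2)
    S = 0.5 * (J + np.swapaxes(J, -1, -2))   # strain-rate tensor
    W = 0.5 * (J - np.swapaxes(J, -1, -2))   # rotation tensor (Omega)
    # Swirling strength: imaginary part of the eigenvalues of J
    lam_ci = np.abs(np.linalg.eigvals(J).imag).max(axis=-1)
    # Q-criterion: balance of rotation and strain magnitudes
    Q = 0.5 * ((W**2).sum((-2, -1)) - (S**2).sum((-2, -1)))
    # lambda_2: smaller eigenvalue of the symmetric tensor S^2 + Omega^2
    lam2 = np.linalg.eigvalsh(S @ S + W @ W)[..., 0]
    return lam_ci, Q, lam2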
So which approach is best for PIV? The application of the λci and λ2 methods to a real experiment is shown in Figure 10.20 for another snapshot of the jet flow we have discussed in previous examples. The thresholds for the two eigenvalue-based methods were set at nonzero values in order to reduce the effect of experimental noise, and for the λ2 method the threshold is squared since the gradient tensors were squared in its derivation. It can be seen that with similar thresholds the two approaches yield nearly identical results, while attempting to use the vorticity results in the false inclusion of larger regions that are primarily shear layers rather than coherent vortices, even though the threshold has been doubled compared to λci (the units and magnitude are the same as those of vorticity); if the same threshold had been used, the results would be even worse. In particular, the large shear layer in the center of the frame would be detected as a single object instead of having three large distinct objects embedded within it. Clearly, as previously discussed, thresholding vorticity is inappropriate for such use.
FIGURE 10.20 (See color insert.) Two Eulerian coherent structure methods compared to thresholding the vorticity field for the case of jet flow (contour levels: λci ≥ 200 s⁻¹, λ2 ≤ −(200 s⁻¹)², |ω| ≥ 400 s⁻¹; vorticity color scale ±400 s⁻¹). Every other vector is skipped for clarity, and only the vectors within the jet fluid are plotted.

More rigorously, several authors have undertaken detailed comparisons of the different approaches, and in many circumstances, especially using high-quality computational data, similar results can be achieved with any of the methods [70,72]. Of particular note is that for 2D planar incompressible conditions, all the discussed methods yield identical results when used with the most conservative threshold. This means that for many of the flows measured with 2D planar PIV, all approaches should give very similar, though not identical, results. In the end, though, the choice is best left to the researcher based on the data quality and flow characteristics seen in a particular experiment.

Lagrangian approaches In contrast to methods for finding Eulerian coherent structures, which operate on velocity fields as their primary objects and attempt to identify connected regions of the flow having coherent velocity patterns, Lagrangian approaches instead attempt to identify parcels of fluid that remain connected and coherent over time, and operate on the Lagrangian displacement field as the primary variable. As such, they are well suited for studies of mixing and the tracking of fluid regions over time. However, compared to Eulerian approaches, which typically need only a single instantaneous snapshot for computation, Lagrangian methods typically require expensive numerical integration steps over time-resolved data and are more mathematically intensive to derive and understand. As such, we will only offer a brief summary of them here.
Analysis of Lagrangian coherent structures (LCSs) typically begins with the calculation of the "flow map," x₁ = F_{t0}^{t1}(x₀), for the fluid domain, which is the function relating the Lagrangian positions of the fluid elements x₁ at time t₁ to their initial positions x₀ at time t₀. For an experimentally measured flow field such as from PIV, this requires that the velocity field from every intermediate time step between t₀ and t₁ be interpolated to arbitrary locations with high precision and that the initial position of every fluid element, sampled on a very fine grid that is typically finer than the original resolution of the PIV data, be integrated through time to determine its final location. This procedure is often computationally very expensive, and limitations in spatial and temporal resolution limit the final resolution and detail that can be extracted from noisy experimental fields.
After finding the flow map, the right Cauchy–Green strain tensor is computed at every point in the domain from the gradients of F_{t0}^{t1}(x₀):

C(x₀) = [∇F_{t0}^{t1}(x₀)]ᵀ ∇F_{t0}^{t1}(x₀)  (10.39)

Similar to Eulerian coherent structure analysis, the eigenvalues of C(x₀) can be related to the deformation of an infinitesimal fluid element (rather than a rate, as was the case in the previous section), and the eigenvectors give the principal directions of the strain. Depending on whether the integration was forward or backward in time, further mathematical analysis can reveal regions of maximum shearing, divergence, or convergence of the flow over finite times. Unlike in Eulerian methods, the LCSs derived from such approaches are typically considered to be ridges or surfaces in the flow that divide regions or mark maximum straining, rather than the enclosed regions themselves. Additionally, these LCSs can be tracked through time to discover regions of the flow that convect with minimal mixing.
A common simplification for the determination of LCSs with maximal or minimal stretching is the computation of the finite time Lyapunov exponents (FTLEs) of the flow map. The FTLE values, Λ_{t0}^{t1}(x₀), are calculated from the eigenvalues λ_n of the Cauchy–Green strain tensor according to the following formula:

Λ_{t0}^{t1}(x₀) = (1/(t₁ − t₀)) log λ_n(x₀)  (10.40)

Ridges of the largest FTLE at every point can then be used as proxies for the exact LCS surfaces. However, the two are not necessarily identical.
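Given a flow map sampled on a grid, the FTLE field follows directly from Equations 10.39 and 10.40. In this sketch, Fx and Fy hold the final x and y positions of tracers released from each grid node, assumed to have been integrated beforehand from time-resolved velocity fields:

import numpy as np

def ftle(Fx, Fy, dx, dy, t0, t1):
    # Gradient of the flow map with respect to the initial positions
    dFxdx = np.gradient(Fx, dx, axis=1); dFxdy = np.gradient(Fx, dy, axis=0)
    dFydx = np.gradient(Fy, dx, axis=1); dFydy = np.gradient(Fy, dy, axis=0)
    G = np.stack([np.stack([dFxdx, dFxdy], -1),
                  np.stack([dFydx, dFydy], -1)], -2)
    # Right Cauchy-Green strain tensor, Equation 10.39
    C = np.swapaxes(G, -1, -2) @ G
    lam_max = np.linalg.eigvalsh(C)[..., -1]   # largest eigenvalue
    # Equation 10.40, evaluated with the largest eigenvalue
    return np.log(lam_max) / (t1 - t0)

Ridges of the resulting field then serve as the FTLE proxies for the LCS surfaces discussed above.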
One significant strength that LCSs have over Eulerian methods is that their identification is not only Galilean invariant to translation of the coordinate system but also "objective," or invariant to rotations as well. However, despite the mathematical elegance of LCS analysis, there are a number of limitations that make their use less straightforward. In addition to the previously discussed computational cost, much of the theory has only been worked out for 2D incompressible flows. While extensions to higher dimensions exist, discontinuities such as shocks are typically not handled at all, and for compressible flows many theoretical treatments were not derived for nonzero velocity divergence. Furthermore, even though it would seem at first glance that planar PIV data would be a good match for 2D LCS theory, several limitations in real flows actually increase the difficulty. Because the calculation requires that the flow be integrated forward in time, out-of-plane velocities carry the tracked fluid parcels out of the planar domain, making the computed flow maps only approximately correct. Additionally, the methods require time-resolved data, and analysis can only be conducted over periods for which all the fluid of interest remains within the measured field of view. Despite these limitations, several researchers have demonstrated successful application of LCS methods to PIV velocity fields. One such approach attempts to address some of these concerns by using the flow tracers in a PTV analysis to directly sample the flow map without needing to explicitly interpolate and then integrate virtual particles through time, reducing the amplification of experimental error [73]. Instead, particle trajectories are linked together over time, and only a much simpler interpolation step is required. This method also brings with it the advantage that it is easy to determine the period over which an LCS can be computed without loss due to out-of-plane motion, since this can be determined from the point at which particle trajectories are lost from view. It is likely that as the methods evolve such optimizations will be found for use with PIV data and that the theory for volumetric and compressible flows will mature as well.

Pressure and force data In many aerodynamics experiments, a major goal of the analysis is not only to understand the flow around objects and structures but also to measure the forces the fluid exerts upon them. Very often, these forces are broken into horizontal and vertical components, as was discussed in Chapter 1 for the computation of lift and drag coefficients. These net forces are exerted through the action of the fluid pressure upon the surface of the body. Traditionally, as was discussed in Chapter 5, these pressures have been measured through the use of pressure ports on the surface of models, but it is not always possible to place ports in every desired location, and their presence can affect the performance of the test object. Net forces can also be measured by the use of various types of load sensors (see Chapter 13), but these give limited information on how the forces are distributed. Alternately, it is sometimes desired to measure these pressure forces directly in the middle of the fluid flow, and not on any particular surface. These types of measurements can be useful for studies of acoustics, for instance, or for better understanding of the correlated pressure–velocity fluctuations for comparison with computational fluid dynamics (CFD) modeling. In this case, load gauges are of no use, and although pressure probes can be used, they can have large effects on the flow's behavior.
Instead, a method that could noninvasively sample the pressure field the same way PIV samples the velocity field would be ideal. In fact, examination of the momentum equation suggests that this should be possible once the velocity field is known, and similar techniques play an essential role in the derivation of computational solvers. However, the use of incomplete velocity data, corrupted by experimental noise and filtered spatially and temporally (if time-resolved data are even available), presents special challenges that make the direct application of CFD methods difficult. Despite this, researchers have had good success in many cases dealing with these challenges using two-component planar PIV data, and volumetric methods hold the promise of improving on the shortcomings of these efforts. Review articles such as that by Van Oudheusden [74] provide a detailed overview of many of these efforts; in the remainder of this section, we will summarize the basic principles they employ for 2D velocity fields.
Regardless of the method, determination of the pressure field begins with the realization that, via the momentum equation, the gradient of the pressure field, p, can be written as dependent only on the velocity field and the fluid properties:

∇p = −D(ρu)/Dt + ν∇²(ρu) = −∂(ρu)/∂t − ∇·(ρuu) + ν∇²(ρu)  (10.41)

The two forms on the right-hand side reflect either a Lagrangian or an Eulerian treatment of the material derivative. The Eulerian form can be computed directly from finite differences of the recorded PIV fields, while the Lagrangian form can be calculated by methods such as PTV or by estimating the acceleration of a fluid element through interpolation of the measured Eulerian velocities. In either approach, cross-correlating special sequences of three or more laser pulses over multiple exposures can also improve the estimates of this term for time-varying data [75]. For incompressible flows, the density, ρ, and viscosity, ν, can be assumed to be constants, but this is not necessarily true for compressible flow; one possible method for dealing with this difficulty will be discussed later. The viscous terms in many flows of aerodynamic interest are frequently discarded, as they are often much smaller than the material derivatives.
In order to recover the pressure field, two main approaches have typically been considered. The most obvious is simply to integrate the gradient fields spatially from one or more points of known or fixed pressure (the freestream, for instance, or a pressure port). While this seems an attractive option, since the pressure at any point can be found by integrating along any arbitrary path, the presence of 3D effects and experimental inaccuracies means that errors quickly accumulate. Instead, researchers have typically seen better results from schemes that average data from multiple paths, such as the field erosion technique of Van Oudheusden [76] or the omni-directional approach of Liu and Katz [75]. The second main approach is to take the divergence of the pressure gradient fields in order to form a Poisson equation for the pressure. In contrast to integral methods, the solution of the Poisson equation requires boundary conditions around the entire processing domain. These boundary conditions can be either explicit pressure or pressure gradient values, and they can be obtained either from additional knowledge and measurements about the flow or by computing them from the velocity field data.
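As a sketch of the second approach, the following hypothetical routine (assuming a uniform grid and a single Dirichlet value, such as the freestream pressure, applied on all boundaries) forms the Poisson right-hand side from the measured gradient fields and relaxes it by Jacobi iteration; a production solver would use measured or velocity-derived boundary conditions and a faster scheme:

import numpy as np

def solve_pressure_poisson(dpdx, dpdy, dx, dy, p_edge=0.0, n_iter=20000, tol=1e-8):
    # Solve laplacian(p) = d(dpdx)/dx + d(dpdy)/dy with Dirichlet boundaries
    rhs = np.gradient(dpdx, dx, axis=1) + np.gradient(dpdy, dy, axis=0)
    p = np.full(rhs.shape, p_edge)
    for _ in range(n_iter):
        p_new = p.copy()
        p_new[1:-1, 1:-1] = (
            dy**2 * (p[1:-1, 2:] + p[1:-1, :-2])
            + dx**2 * (p[2:, 1:-1] + p[:-2, 1:-1])
            - dx**2 * dy**2 * rhs[1:-1, 1:-1]
        ) / (2.0 * (dx**2 + dy**2))
        if np.max(np.abs(p_new - p)) < tol:   # stop once the update stagnates
            return p_new
        p = p_new
    return p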
Comparison of these different approaches has shown that for time-averaged data, either integration or a Poisson equation works fairly well: the random error in the velocity field is damped to extremely low levels, leaving only errors due to inconsistencies in the physical assumptions used (2D flow, incompressibility, etc.) and the truncation error from the numerical scheme. However, for calculations of fluctuating pressures with time-resolved data, error propagation plays a much greater role, making the choice of a scheme more important. Charonko et al. [77] showed that for at least some types of flows the omni-directional line integration technique was the most robust, but given sufficient spatial sampling a good implementation of a Poisson solver could also yield acceptable results. Interestingly, over-sampling the data also causes problems. It can be shown that the finite difference operators needed to evaluate the derivative terms in Equation 10.41 produce two kinds of errors: truncation errors from the numerical scheme and amplified error from the original noisy data. These two effects compete, but with opposite dependence on the sampling rate; while the truncation error drops as the step size in space or time gets smaller, the noise amplification rapidly increases as the step size drops below the fluctuation length. For spatial sampling, a good rule of thumb is not to overlap PIV interrogation windows by more than about 50%, assuming that the window size is of the same order of magnitude as the typical feature size in the flow.
However, regardless of solver, when the relative error level of the velocity climbs above 1%, the error level in the derived fields quickly grows to unmanageable levels of hundreds or thousands of percent for all the methods tested. This is a fairly stringent requirement: if the accuracy of the algorithm typically yields uncertainties on the order of 0.1 pixel/frame, it means that the mean flow displacement needs to be at least 10 pixels/frame. Even though such values are achievable with careful experimental design, higher uncertainties are not uncommon, especially in regions with large shear and rotation. Furthermore, the errors produced in such cases are often hard to diagnose, since they can manifest themselves not as unphysical pressure distributions, but rather as incorrect magnitudes for the result. Fortunately, it has been shown that pre-filtering the velocity fields can improve the results considerably, with the use of a POD low-order model offering much better performance than approaches like low-pass Gaussian filtering that change the spatial velocity gradients [77], and thus the pressure fields as well. Taken together, in a careful experiment with low velocity error and sufficient spatial and temporal resolution, it is not unreasonable to expect random pressure errors on the solution on the order of 5%–10%.
For compressible flows, slightly more care is required, since the density field can no longer be assumed to be constant. However, one approach that has seen reasonable success in overcoming this difficulty is the use of the ideal gas law to compute the local density from the temperature, with the temperature calculated under the assumption of adiabatic flow:

$$\rho = \frac{p}{RT}; \qquad \frac{T}{T_\infty} = 1 + \frac{\gamma - 1}{2} M_\infty^2 \left( 1 - \frac{V^2}{V_\infty^2} \right) \quad (10.42)$$

where
γ is the specific heat ratio
T and V are the local temperature and velocity magnitude
T∞, V∞, and M∞ are the freestream temperature, velocity, and Mach number
R is the specific gas constant
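A minimal sketch of Equation 10.42 for planar data (the in-plane components are used to approximate the velocity magnitude, and the pressure argument can be the freestream value used to start an iterative pressure–density solution; the function name and defaults are illustrative):

import numpy as np

def density_from_velocity(p, u, v, T_inf, V_inf, M_inf, gamma=1.4, R=287.05):
    # Adiabatic-flow temperature, then the ideal gas law (Equation 10.42)
    V2 = u**2 + v**2                   # in-plane approximation of V^2
    T = T_inf * (1.0 + 0.5 * (gamma - 1.0) * M_inf**2 * (1.0 - V2 / V_inf**2))
    return p / (R * T)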

This assumption should be valid in regions of inviscid flow as well as across shocks, and it should be reasonable for regions of steady viscous flow with limited heat transfer. Substituting these relations into time-averaged versions of the pressure gradient equations, Van Oudheusden demonstrated good agreement using this approach when starting with a conservative form of the governing equations to yield the following expression [76]:

$$\left( \delta_{ij} + \frac{u_i u_j}{RT} \right) \frac{\partial \ln (p/p_\infty)}{\partial x_j} = -\frac{1}{RT} \left( \frac{\partial u_i u_j}{\partial x_j} - \frac{u_i u_j}{T} \frac{\partial T}{\partial x_j} \right) \quad (10.43)$$

where δij is the Kronecker delta. This equation can then be solved for the logarithm of the pressure ratio using the same techniques discussed earlier.
If, instead of complete pressure fields, the calculation of net forces (including lift and drag components) is our goal, we would instead typically begin with a control volume analysis of the fluid surrounding the body of interest and integration of the momentum equations. In this case, it will quickly be seen that the resulting expressions depend on the surrounding pressure field as well as the velocities. Although it is possible to rewrite the pressure terms using only velocity fields and their derivatives (see, for instance, the work of Noca et al. [78]), in general, it is more useful and straightforward to calculate the full pressure field using the techniques discussed earlier and use it instead. Such approaches can yield very good agreement with simultaneously acquired secondary measurements; see Figures 10.21 and 10.22 for a comparison with force gauge data in the calculation of lift and drag around an airfoil mimicking the body shape of a flying snake (Chrysopelea paradisi) [79], or the work of Ragni et al. [80], who calculated lift and drag coefficients over a transonic airfoil at M = 0.6 from both PIV and surface pressure port data.

10.6 Estimation of error and uncertainty

In almost every experiment of engineering or scientific utility, knowledge of the expected level of error in the measurements that have been performed is of interest to one degree or another. This is especially true when making comparisons between multiple tests, for example, between two wing or engine designs, or between an experimental test and a computational simulation of the same system. Without some knowledge of how much error the experimenter believes was encountered during the measurement, it is impossible to correctly evaluate whether or not the observed quantities are meaningfully different. As was discussed in Chapter 2, the range of expected errors is typically referred to as the uncertainty of a measurement, and it is most useful when accompanied by a description of the percent coverage for the confidence interval (if the measurement were repeated, how many times would the result fall within the stated range?) and the predicted distribution shape for the method's errors. Often, however, these details are not known, and values must be assumed to proceed with further analysis (such as error propagation to derived quantities like acceleration or drag coefficients).
[Figure 10.21 schematics. (a) Water tunnel test section with load cells, mounts, support arms, snake model, and acrylic sidewalls. (b) Water tunnel PIV arrangement with a concave mirror, acrylic boat, and snake model; two camera regions of interest (1280 × 512 and 1280 × 1024 pixels) with in-plane extents marked in chord lengths (4.2, 2.1, 1.25, and 3.75 chords).]

FIGURE 10.21 Experimental setup used by Holden et al. for the measurement of lift and drag on an airfoil mimicking the body shape of a flying snake using (a) direct load cell measurements and (b) time-resolved PIV-based pressure estimates. (Adapted from Holden, D.P., Flying snakes: Aerodynamics of body cross-sectional shape, in: Mechanical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA, 2011, p. 72. With permission.)

In practice, many researchers work toward 68% or 95% confidence intervals, which correspond to one or two standard deviations of a normal error distribution. The assumption of normal distributions is often a reasonable one due to the central limit theorem, which states (loosely and given certain assumptions) that the sum of independent random variables tends toward a normal distribution. Products, on the other hand, tend toward log-normal distributions for the same reasons.
For many traditional experimental techniques, measurements are based upon the use of a sensor and measurement equipment (A/D converters, amplifiers, etc.) that all have predictable response curves between their inputs and outputs, with certain assumptions (such as linearity in their calibration curves) and predictable levels of random fluctuations based on operating conditions. Taken together, it is usually possible during a careful experiment to create not only a calibration of the average response of each instrument in use but also the expected uncertainty of each measurement under calibrated conditions. For PIV, however, the prediction of uncertainty for a given measurement has proven to be much more challenging. PIV errors depend on a wide variety of error sources that interact in nonlinear ways, making straightforward calibration a daunting task.
[Figure 10.22 plots: (a) lift coefficient CL and (b) drag coefficient CD versus angle of attack from −10° to 60°, comparing the direct force measurement (with uncertainty band UF) to the PIV-based estimate (with uncertainty band UDPIV), the latter decomposed into pressure and parallel/perpendicular momentum contributions.]

FIGURE 10.22 Comparison of (a) lift and (b) drag coefficients for an airfoil modeled on the body of a flying snake, C. paradisi, at a Reynolds number of 13,000. Note the close agreement between the two methods and how the PIV-based measurement allows decomposition of the forces into pressure and momentum contributions. (Adapted from Holden, D.P., Flying snakes: Aerodynamics of body cross-sectional shape, in: Mechanical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA, 2011, p. 72. With permission.)

In the "Image cross-correlation" section, we defined the velocity at a point as the measured displacement divided by the pulse separation. However, this is a simplification of the true particle transport equations, which actually show that the fluid motion is given by the following fourth-order accurate formula in terms of the measured particle displacements [17]:

$$\mathbf{u}\left(\mathbf{x}_p, t^*\right) = \frac{\Delta \mathbf{x}_p}{\Delta t} + \left[ \overline{\mathbf{x}}_p - \mathbf{x}_p(t^*) \right] \cdot \nabla \mathbf{u} \big|_{\mathbf{x}_p} - \frac{1}{24} \ddot{\mathbf{v}}_p(t^*) \, \Delta t^2 + \left[ \dot{\mathbf{v}}_p(t^*) - \mathbf{b} \right] \tau_p + O\left(\Delta t^4\right) \quad (10.44)$$

The first term is the particle displacement over time; the second is the effect of the flow curvature, in terms of the difference between the particle average position, x̄p, and the true position at the midpoint time, t*; the third term is a finite differencing correction; and the fourth term is the combined effect of particle accelerations and body forces, b, over the particle timescale τp. The result is a bias error on our computed velocity that we could correct in theory, but typically we do not have enough information to do anything but estimate the resulting systematic uncertainty.
Fortunately, for a well-designed experiment, these effects are in many cases much smaller than our experimental errors. Therefore, we will constrain our analysis here to the uncertainties that affect our evaluation of the first term in the previous relation. Starting with Equation 10.19, we can perform a first-order Taylor series expansion of the estimated particle velocity. The general form for the propagated uncertainty sf on some function f = f(y1, …, yN) is

$$s_f^2 = \sum_{i=1}^{N} \left( \frac{\partial f}{\partial y_i} \right)^2 s_{y_i}^2 + \sum_{i \neq j} \left( \frac{\partial f}{\partial y_i} \right) \left( \frac{\partial f}{\partial y_j} \right) a_{ij} \, s_{b_i} s_{b_j} \quad (10.45)$$

where
syi are the elemental uncertainties on each of the variables yi
aij is a correlation coefficient from −1 to +1 between the errors on yi and yj
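For concreteness, a small numerical sketch of Equation 10.45, neglecting the covariance terms (all aij = 0), with the partial derivatives approximated by central differences; the routine is hypothetical, not a standard library function:

import numpy as np

def propagate_uncertainty(f, y, s_y, rel_step=1e-6):
    # First-order Taylor series propagation, Equation 10.45 with a_ij = 0
    y = np.asarray(y, dtype=float)
    var = 0.0
    for i in range(y.size):
        h = rel_step * (abs(y[i]) if y[i] != 0.0 else 1.0)
        yp, ym = y.copy(), y.copy()
        yp[i] += h
        ym[i] -= h
        var += (((f(yp) - f(ym)) / (2.0 * h)) * s_y[i]) ** 2
    return var**0.5

# Example: u proportional to dX/(M*dt); constant factors cancel in s_u/u
u_of = lambda q: q[0] / (q[1] * q[2])
s_u = propagate_uncertainty(u_of, [10.0, 0.25, 10e-6], [0.1, 1e-4, 50e-9])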
In most cases, assuming independent error sources, the covariance terms involving aij can be assumed to be zero, thus leading to the relation for the uncertainty estimate illustrated in Chapter 2. Applying this relation to Equation 10.19, we find the following:

$$\left( \frac{s_u}{u} \right)^2 = \left( \frac{s_{\Delta X}}{\Delta X} \right)^2 + \left( \frac{s_M}{M} \right)^2 + \left( \frac{s_{\Delta t}}{\Delta t} \right)^2 \quad (10.46)$$

As can be seen from this expression, uncertainty sources include errors in the calibration of the camera lens system (perspective, magnification, distortion), timing errors in the recording of the images or the laser pulse sequences, and errors in the determination of displacement and velocity fields from particle fields. Additionally, errors that cause the measurement made to differ from the one that was planned (movement of the test fixtures during a run, errors in the placement of the measurement region, incorrect or changing experimental conditions) can also introduce additional uncertainties into the interpretation and post-processing of the velocity results.
Using some typical values for a PIV experiment with an Nd:YAG laser and a 12 MP CCD camera (ΔX = 10 pixels, M = 0.25, Δt = 10 μs) and some conservative typical values for the standard uncertainties of each (sΔX = 0.1 pixel, sM = 0.0001, sΔt = 50 ns), we can estimate the contribution of each to the total relative variance on the velocity. Substituting these values into Equation 10.46, we see that the displacement uncertainty contributes almost 80% of the variance and the timing errors just under 20%, while the magnification uncertainty is fairly unimportant, with only about a 0.13% contribution. This is typical of most experiments, but it is important to verify which factors are most influential for your setup. While many of these error sources can be addressed and estimated by traditional approaches, errors in the displacement measurement come from the cross-correlation process, which does not behave so simply. However, there has been a concerted effort to address this error source more systematically in the last several years, with promising results, and since all other derived results ultimately depend on a successful displacement estimate, we will address it first, with a discussion of the other sources to follow.
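The budget just quoted can be reproduced directly from Equation 10.46:

dX, M, dt = 10.0, 0.25, 10e-6          # displacement [px], magnification, pulse separation [s]
s_dX, s_M, s_dt = 0.1, 1e-4, 50e-9     # standard uncertainties
terms = {
    "displacement": (s_dX / dX) ** 2,
    "magnification": (s_M / M) ** 2,
    "timing": (s_dt / dt) ** 2,
}
total = sum(terms.values())
for name, var in terms.items():
    print(f"{name:>13s}: {100.0 * var / total:5.2f}% of the relative variance")
print(f"s_u/u = {total**0.5:.4f}")     # about 0.011, i.e., ~1.1%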

Uncertainty due to cross-correlation

Fundamentally, the process of making a PIV measurement comes down to estimating the motion of a set of particles that we have imaged, in order to approximate the velocity or displacement of the fluid that is carrying them. Along the way, we have assumed that the particles faithfully follow the flow (see Equation 10.44), but we will ignore that here. In such a system, the particles themselves are then our probe, and the most common method for obtaining their motion is through cross-correlation of the images. Assuming that the images are faithfully recorded and that we know the correspondence between an image location and its lab position, the estimated displacement then enables us to calculate the velocity within that interrogation window, which we typically treat as either the velocity at the center of the window or an average within it.
However, what we are really evaluating is a signal based on the motion of a discrete, random set of tracer particles. These particles are our probe, but even if the flow were to stay the same for every measurement, our probe changes every time. Also, the flow is not the same in all interrogation windows, and as we saw in the "Performance of basic PIV algorithms" section, the error is very sensitive to many parameters, including the number of particles, the amount of out-of-plane motion, and shear rates. It is tempting to use error analysis techniques like those we used in that section to bound the error levels and state a single uncertainty level for a PIV measurement (such as 0.1 pixels), but this is not a good approach due to the complex interplay of image and flow parameters that go into arriving at the final displacement estimate. In fact, even something as simple as the apparent intensity profile of the particle images turns out to influence the error level of the resulting measurements [82], and factors such as out-of-plane motion that cannot even be directly measured using planar 2D PIV have a large effect as well [83].
Instead, researchers working on this problem have turned to a number of different approaches that attempt to handle it in more detail. They can broadly be classified into two main types: calibration-based methods that indirectly attempt to infer the uncertainty level for a given displacement estimate from secondary measurements, such as the number of correlated particles or the heights of the cross-correlation peaks, and direct methods that attempt to predict the uncertainty level directly from an analysis of how the PIV cross-correlation algorithm is related to the measured displacements. Examples of the first class are the uncertainty surface method [84] and peak ratio [85] or SNR-based methods [86]. Techniques that fall in the second category are the particle image matching method [87] and the correlation statistics method [88].
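To give the flavor of the calibration-based class, the following sketch computes the ratio of the primary to the second-highest cross-correlation peak, the signal-to-noise surrogate on which peak ratio methods [85] build their uncertainty model; the 3 × 3 exclusion region and the mean subtraction are illustrative choices, not the published calibration:

import numpy as np
from scipy.signal import fftconvolve

def correlation_peak_ratio(win_a, win_b):
    # Cross-correlate two interrogation windows and return peak1/peak2
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = fftconvolve(a, b[::-1, ::-1], mode="full")
    i, j = np.unravel_index(np.argmax(corr), corr.shape)
    peak1 = corr[i, j]
    masked = corr.copy()
    masked[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2] = -np.inf  # hide primary peak
    peak2 = masked.max()
    return peak1 / peak2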
The issue of correlation uncertainty remains an active area of research, and as of yet there is no clear consensus as to which method is best, or whether some might be better in certain circumstances. In particular, synthetic data sets are not always representative of the challenges seen in real experimental data, so they are not necessarily the best choice for a comparison. However, to quantify uncertainty it is necessary to know the true error, and thus the true displacement field, which is often very difficult to obtain accurately from experimental data for any but the simplest flows. There have been initial efforts to compare several of these methods using the concept of a high-dynamic-range measurement, taken at increased magnification and optimal image conditions and processed with advanced PIV algorithms, to be used as a close approximation of the true solution in a complicated flow [89]. These measurements could then be compared to other PIV data taken simultaneously with less favorable image conditions and processing, so that estimates of uncertainty for this second set could be evaluated against the approximate "true" errors. Comparisons of such measurements to hot-wire experiments (which should have lower error levels than the PIV) showed error levels low enough at the measured point to give reasonable confidence in extending the comparison to the entire PIV displacement field. Using this technique, a database of different image and flow conditions was built and used to compare the uncertainty surface, peak ratio, image matching, and correlation statistics methods. So far, the analysis of the database seems to show that all the methods perform reasonably well under most conditions, with some methods doing better in certain circumstances. The peak ratio method was probably the weakest performer, following the correct trend for the uncertainty distribution but consistently overestimating its magnitude. The image matching and correlation statistics methods, on the other hand, performed better under most circumstances, with the correlation statistics method perhaps the most consistent of all [90].
As of yet, these methods have mostly been developed for the uncertainty of single-camera, 2D, two-component velocity measurements. However, work is ongoing among several groups to adapt and extend these methods to stereo and volumetric techniques. So far, it appears that the main challenge is how to appropriately propagate the uncertainty (usually in a Taylor series expansion sense) through the image calibration and registration steps. This will obviously also require understanding of the uncertainty in those values, which for stereo and volumetric data is much more complicated than for single-camera experiments. Interested readers are encouraged to search the recent literature, especially conference proceedings, as the topic is still new and very much under development.

Uncertainty due to image calibration

Although in many experiments the error on the magnification can be quite small, in other cases it can be substantial. For planar PIV, if the magnification in an experiment has been determined by sizing an object of known dimension in an image (perhaps the chord or thickness of a wing, or a calibration plate), a crude estimate can be obtained by simply assuming a level of uncertainty in the measurement of the number of pixels the object spans. For example, if you are able to estimate the length of a 6.0 cm object to be 3000 pixels long with an uncertainty of 1 pixel, and the camera pixel size on the sensor is 5 μm, then the magnification will be M = 0.25 and the uncertainty will be roughly sM = 0.0001, as in our earlier example (actually ~8.3 × 10−5). If instead we are using a calibration plate with multiple markers, we can estimate the uncertainty on the sizing more quantitatively by taking the standard deviation of all the different intervals we measure on the image of the plate. This can give a better estimate of the uncertainty across the image, especially when distortions from the camera lens or test section walls are present, since in these cases the magnification can vary across the image. If possible, the images should be dewarped to remove this bias before processing.
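A minimal sketch of both estimates (names are illustrative):

import numpy as np

def magnification_from_plate(intervals_px, spacing, pixel_size):
    # One magnification sample per measured marker interval; the spread
    # of the samples estimates the calibration uncertainty across the image
    m = np.asarray(intervals_px, dtype=float) * pixel_size / spacing
    return m.mean(), m.std(ddof=1)

# Single-object example from the text: a 6.0 cm object spanning 3000 +/- 1 px,
# imaged on a sensor with 5 um pixels
M = 3000 * 5e-6 / 0.06        # = 0.25
s_M = M * (1.0 / 3000)        # ~8.3e-5, the value quoted above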
For stereo PIV, assuming self-calibration has been performed [52], the residual correction vectors in the disparity map can be used similarly to the differences between calibration spots for planar PIV measurements. This is superior to using only the variance of the sizing of the original calibration plate images, since it takes into account differences between where the laser plane actually lies and where we thought it would be when we defined our coordinate system. Similar information can be recovered from self-calibration in volumetric experiments. Here, rather than examining a single magnification value, the disparity map information can be used to estimate the uncertainties on the calibration coefficients used to transform the data from the image plane to the laboratory coordinate system, and the uncertainties propagated by Taylor series methods to the final velocity fields [91].
However, this raises another concern for standard planar PIV: if we have used a calibration plate to find our magnification, rather than a fiducial object visible in our PIV images, planar measurements can be subject to the same types of positional errors as stereo systems, but we will not have any way of detecting them. One way to estimate the size of this effect is to use Equation 10.10 to estimate the effect a change in the object distance, z0, has on the magnification for a fixed image distance, Z0. For example, if we were using an f = 50 mm lens in the previous example, we can show using Equations 10.9 and 10.10 that Z0 = 62.5 mm and z0 = 250 mm. Then, if we estimate that our uncertainty on the laser plane position relative to where we took the calibration image is 1 mm (about 1 laser sheet thickness), a Taylor series uncertainty analysis of Equation 10.10 gives an uncertainty on M of sM = 0.016. Notice that this is two full orders of magnitude larger than our previous guess at the uncertainty, and that it makes the contribution of the calibration error on the velocity uncertainty larger than that of the timing error. Acting alone, it would contribute a systematic uncertainty of 6.4% on pixel displacement (0.64 pixels on a 10-pixel displacement), easily surpassing the displacement error's effect. Clearly, this is not an effect that can be neglected in general, and it has been suggested that when bias errors (rather than random errors) are of concern, a second camera can be used even for standard planar PIV in order to perform stereo self-calibration. This can then be used to quantify and correct for the uncertainty that would be present in the magnification term [54].
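The object and image distances quoted above follow from the ideal thin-lens relations, 1/z0 + 1/Z0 = 1/f with M = Z0/z0; a small sketch (illustrative function name):

def thin_lens_distances(f, M):
    # Distances at which an ideal thin lens produces magnification M
    z0 = f * (1.0 + 1.0 / M)   # object distance
    Z0 = f * (1.0 + M)         # image distance
    return z0, Z0

z0, Z0 = thin_lens_distances(50.0, 0.25)   # -> (250.0, 62.5) mm, as in the text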

Uncertainties due to timing errors

Given the high precision available from most modern digital timing and synchronization hardware, it might be expected that this component should be essentially zero. In fact, the contribution of the random jitter of both the timing hardware and the PIV lasers' beam production is small enough that it can be neglected in almost all cases. Typically, the systematic uncertainty of the synchronization hardware is also quite low and within the specified tolerances, though for high-speed flows you should still consult the manufacturer's documentation for verification. On the other hand, systematic uncertainties on the timing of the laser emission have been shown to be large enough that they can affect the final velocity estimate. One recent survey of several PIV lasers of varying design, age, and manufacturer showed that differences between the two laser heads in a dual-laser configuration could be as high as 50 ns for a low-speed Q-switched Nd:YAG laser, which was the value quoted in the initial example. On the other hand, high-speed Nd:YLF lasers yielded biases of up to 1 μs, which, if we were to keep the 10 μs pulse separation, would lead to a velocity error of 10%, much larger than the correlation uncertainty! Fortunately, these values were found to be repeatable at a given energy setting and repetition rate, offering the opportunity to simply calibrate the true time delay before the start of the experiment using a good oscilloscope and a high-speed photodiode. However, exhaustively cataloging all possible configurations ahead of time would be tedious and may not guard against unanticipated drift in performance. Therefore, the best practice should probably be to always measure the true pulse timing for the final experimental settings at the time the measurements are being taken, and to use that value instead of the input settings for the synchronization timers. This done, since the jitter is negligible, the contribution of timing errors can essentially be eliminated.
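Once the true pulse separation has been measured, the correction is a simple rescaling, since the recorded displacements themselves are unaffected; a minimal sketch:

def correct_for_timing_bias(u_nominal, dt_nominal, dt_measured):
    # u_true = dx / dt_measured = u_nominal * dt_nominal / dt_measured
    return u_nominal * dt_nominal / dt_measured

# Example: a 1 us laser-head bias on a nominal 10 us separation
u_true = correct_for_timing_bias(100.0, 10e-6, 11e-6)  # ~90.9; the uncorrected
                                                       # value is ~10% too high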
Problems

10.1 (a) Generate a 1D discrete signal with enough length to hold 5–10 particle "images" consisting of delta functions, and then displace it to generate a second particle image pattern, allowing the particles to leave the original signal domain. Implement a 1D cross-correlation algorithm for measuring the displacement of these signals using a direct summation approach as in Equation 10.21. Apply your algorithm to these signals for a range of displacements between 0 and 10 pixels. Show at least one of the resulting correlation planes and discuss the effect of increasing displacement.
(b) Repeat part (a), but introduce a constant background offset and random noise into your signal. Discuss their effect on your results.
(c) Re-implement your correlation algorithm using discrete Fourier transforms (see Equation 10.29), and apply it to the signals generated in part (b), both as originally generated and with a top-hat windowing function. Discuss the differences in the output between the three approaches, and show that with the proper application of a windowing function the output of an FFT-based cross-correlation is numerically identical to a direct summation approach.
10.2 (a) In Figure 10.3 and the "Performance of basic PIV algorithms" and "Camera selection" sections, we discussed the design of a simple jet flow experiment in air using low-speed PIV cameras and lasers. Suppose you now wish to examine the same flow using time-resolved planar PIV, but keeping as nearly as possible the same field of view and a final spatial resolution of 1 mm. Assume that you have a laser with sufficient repetition rate and energy per pulse, and select an appropriate camera from Table 10.3. Show the design calculations you would make to ensure reasonable results, and describe the hardware and experiment settings you would use, such as frame rate, interframe times, and seeding densities, and whether you will capture the flow with evenly spaced snapshots or in pairs of closely spaced frames. Do not forget that using these dual-frame modes typically leaves the maximum frame rate unchanged, meaning that the rate for acquiring correlated pairs drops by 50%. Describe any potential limitations and design trade-offs of your selected setup.
(b) How might your answer change if you found out that you only had the 4.1 MP CMOS camera, but that it could also achieve up to 1380 frames/second at a reduced resolution of 1920 × 1200 pixels, or 2780 frames/second at 1024 × 1024 pixels?
10.3 The PIV challenge is a series of four events that were held to compare various state-of-the-art approaches to PIV processing, each focused on different elements of the method and areas of concern. Case B of the third PIV challenge was a set of synthetic images generated from the DNS of a laminar separation bubble. Download the images and exact solutions from the project website (http://www.pivchallenge.org/), and process them using any available PIV software according to the included directions, except that you may choose to correlate either single pairs of images or use multiframe methods, and you may use a different final window size if you wish. If you do not have commercial software available, one option is to download the current release of Prana, an open-source PIV implementation for MATLAB® (https://github.com/aether-lab/prana). Other free options are available online if you do not have MATLAB. After processing the vector fields, duplicate Figures 26 through 29 of the results paper [92] for your results, and provide vector plots of each processed time step. These correspond to the following plots:
(a) Figure 26: Contour plot of the error in the x-displacement, Δx, compared to the exact solution for time step t = 10.
(b) Figure 27: A line plot of the RMS error on Δx versus the time step, t = 10, 30, 50, 70, 90, 110.
(c) Figure 28: A line plot of the valid vector rate versus the time step, where valid vectors are those where the errors on Δx and Δy are less than 0.5 pixels.
(d) Figure 29: A line plot of the PDF of the error on Δx for t = 30 and t = 90, and a line plot of the RMS error on Δx versus t only for the valid measurements as found in (c).
Explain any choices you made while processing the particle images and comment on the results.
10.4 (a) You have just finished processing the data for your latest PIV experiment, and you would like to obtain an estimate of how much uncertainty you should attribute to the resulting vector field due to error sources in your experiment and data reduction. You know the following facts about the experiment:
• Measured displacements range from 1 to 12 pixels per frame, and you estimate that the standard uncertainty due to the correlation algorithm is up to 0.2 pixels through most of the flow.
• Your final interrogation window size is 22 pixels.
• The particle size is estimated at 3.0 pixels on average.
• The separation between laser pulses is nominally 10 μs, but due to testing of the laser you suspect there is an unknown standard systematic uncertainty of up to 100 ns.
• The physical pixel size on your camera sensor is 20 μm.
• Using a grid target, you previously determined that the average spacing between the markers, which are set 5 mm apart, is 100 pixels with a standard deviation of 0.7 pixels.
• You estimate that the standard uncertainty on the final laser plane location might be as much as 0.5 mm away from the location at which your grid calibration image was acquired, which was 35 cm in front of your 100 mm focal length camera lens. Assume that your lens system behaves as an ideal thin lens.
Use these facts and Equation 10.46 to estimate the range of final uncertainty you can expect on the velocity measurement. You may need to use the Taylor series uncertainty propagation relationship, Equation 10.45; if so, you may ignore the covariance terms. Discuss how much uncertainty is attributable to each source and how much the uncertainty might be reduced if you could eliminate some of the systematic errors.
(b) For this experiment, you now want to calculate velocity gradients with the following second-order accurate central difference scheme:

$$\frac{\partial u}{\partial x} \bigg|_{x_i} \approx \frac{u(x_{i+1}) - u(x_{i-1})}{2 \Delta x}$$

Using a first-order Taylor series approach, show analytically how the uncertainties on the measured velocity propagate into estimates of the local vorticity. Assume that the velocity uncertainties are the same for the x and y directions, but that they are everywhere independent.
How does the answer change if the displacement errors in a given direction are covariant due to overlap of the PIV interrogation windows? Assume a simple model such that the covariance between displacements in the same direction at adjacent sampling points increases linearly with the amount of overlap of their interrogation windows, but the errors are still uncorrelated between u and v. In other words, the correlation coefficient for the error is equal to the fractional overlap. You do not need to use your answer to part (a) to calculate a numerical result; instead give an analytical expression for the standard uncertainty on the out-of-plane vorticity component.
References

1. Yeh, Y. and H.Z. Cummins, Localized fluid flow measurements with an He–Ne laser spectrometer. Applied Physics Letters, 1964. 4: 176–178.
2. Lehmann, B., Geschwindigkeitsmessung mit Laser-Doppler-Anemometer Verfahren. Wissenschaftliche Berichte AEG-Telefunken, 1968. 41: 141–145.
3. vom Stein, H.D. and H.J. Pfeifer, A Doppler difference method for velocity measurements. Metrologia, 1969. 5: 59.
4. Lehmann, B., H. Nobach, and C. Tropea, Measurement of acceleration using the laser Doppler technique. Measurement Science and Technology, 2002. 13: 1367.
5. Lowe, K.T. and R.L. Simpson, Turbulence structural measurements using a comprehensive laser–Doppler velocimeter in two- and three-dimensional turbulent boundary layers. International Journal of Heat and Fluid Flow, 2008. 29: 820–829.
6. Sutton, M. et al., Determination of displacements using an improved digital correlation method. Image and Vision Computing, 1983. 1: 133–139.
7. Adrian, R.J. and C.-S. Yao, Development of pulsed laser velocimetry (PLV) for measurement of turbulent flow, in X.B. Reed, Jr., J.L. Kakin, and G.K. Patterson, eds., Proceedings of the Eighth Biennial Symposium on Turbulence, University of Missouri, Rolla, MO, 1983. 1984.
8. Keane, R.D. and R.J. Adrian, Theory of cross-correlation analysis of PIV images. Applied Scientific Research, 1992. 49(3): 191–215.
9. Adrian, R.J., Dynamic ranges of velocity and spatial resolution of particle image velocimetry. Measurement Science and Technology, 1997. 8: 1393–1398.
10. Santiago, J.G. et al., A particle image velocimetry system for microfluidics. Experiments in Fluids, 1998. 25: 316–319.
11. Fujita, I., M. Muste, and A. Kruger, Large-scale particle image velocimetry for flow analysis in hydraulic engineering applications. Journal of Hydraulic Research, 1998. 36: 397–414.
12. Adrian, R.J., Twenty years of particle image velocimetry. Experiments in Fluids, 2005. 39(2): 159–169.
13. Goldstein, R., Fluid Mechanics Measurements, 2nd edn. 1996, Boca Raton, FL: CRC Press. p. 746.
14. Albrecht, H.-E. et al., Laser Doppler and Phase Doppler Measurement Techniques. 2003, New York: Springer. p. 756.
15. Tropea, C., Yarin, A.L., and Foss, J.F., eds., Springer Handbook of Experimental Fluid Mechanics. 2007, Würzburg, Germany: Springer. p. 1570.
16. Raffel, M. et al., Particle Image Velocimetry: A Practical Guide, 2nd edn. 2007, Berlin, Germany: Springer.
17. Adrian, R.J. and J. Westerweel, Particle Image Velocimetry. 2010, New York: Cambridge University Press. p. 585.
18. Olsen, M.G. and R.J. Adrian, Out-of-focus effects on particle image visibility and correlation in microscopic particle image velocimetry. Experiments in Fluids, 2000. 29: S166–S174.
19. Hecht, E., Optics, 4th edn. 2001, Reading, MA: Addison-Wesley. p. 680.
20. Willert, C.E. and M. Gharib, Digital particle image velocimetry. Experiments in Fluids, 1991. 10(4): 181–193.
21. Westerweel, J., Fundamentals of digital particle image velocimetry. Measurement Science and Technology, 1997. 8(12): 1379–1392.
22. Ronneberger, O., M. Raffel, and J. Kompenhans, Advanced evaluation algorithms for standard and dual plane particle image velocimetry, in Ninth International Symposium on Applied Laser Techniques to Fluid Mechanics, Lisbon, Portugal, 1998.
23. Brady, M.R., S.G. Raben, and P.P. Vlachos, Methods for digital particle image sizing (DPIS): Comparisons and improvements. Flow Measurement and Instrumentation, 2009. 20: 207–219.
24. Gerashchenko, S. and K. Prestridge, Density and velocity statistics in variable density turbulent mixing. Journal of Turbulence, 2015. 16: 1011–1035.
25. Westerweel, J., D. Dabiri, and M. Gharib, The effect of a discrete window offset on the accuracy of cross-correlation analysis of digital PIV recordings. Experiments in Fluids, 1997. 23(1): 20–28.
26. Lecordier, B. et al., Estimation of the accuracy of PIV treatments for turbulent flow studies by direct numerical simulation of multi-phase flow. Measurement Science and Technology, 2001. 12: 1382.
27. Gui, L. and S.T. Wereley, A correlation-based continuous window-shift technique to reduce the peak-locking effect in digital PIV image evaluation. Experiments in Fluids, 2002. 32: 506–517.
28. Wereley, S.T. and C.D. Meinhart, Second-order accurate particle image velocimetry. Experiments in Fluids, 2001. 31(3): 258–268.
29. Nogueira, J., A. Lecuona, and P.A. Rodriguez, Local field correction PIV: On the increase of accuracy of digital PIV systems. Experiments in Fluids, 1999. 27: 107–116.
30. Astarita, T. and G. Cardone, Analysis of interpolation schemes for image deformation methods in PIV. Experiments in Fluids, 2005. 38: 233–243.
31. Scarano, F., Iterative image deformation methods in PIV. Measurement Science and Technology, 2002. 13: R1.
32. Eckstein, A. and P.P. Vlachos, Assessment of advanced windowing techniques for digital particle image velocimetry (DPIV). Measurement Science and Technology, 2009. 20(7): 075402.
33. Wernet, M.P., Symmetric phase only filtering: A new paradigm for DPIV data processing. Measurement Science and Technology, 2005. 16(3): 601–618.
34. Eckstein, A., J. Charonko, and P. Vlachos, Phase correlation processing for DPIV measurements. Experiments in Fluids, 2008. 45(3): 485–500.
35. Eckstein, A. and P.P. Vlachos, Digital particle image velocimetry (DPIV) robust phase correlation. Measurement Science and Technology, 2009. 20(5): 055401.
36. Delnoij, E. et al., Ensemble correlation PIV applied to bubble plumes rising in a bubble column. Chemical Engineering Science, 1999. 54: 5159–5171.
37. Meinhart, C.D., S.T. Wereley, and J.G. Santiago, A PIV algorithm for estimating time-averaged velocity fields. Journal of Fluids Engineering, 2000. 122: 285–289.
38. Sciacchitano, A., F. Scarano, and B. Wieneke, Multi-frame pyramid correlation for time-resolved PIV. Experiments in Fluids, 2012. 53: 1087–1105.
39. Lynch, K. and F. Scarano, A high-order time-accurate interrogation method for time-resolved PIV. Measurement Science and Technology, 2013. 24: 035305.
40. Kähler, C., S. Scharnowski, and C. Cierpka, On the resolution limit of digital particle image velocimetry. Experiments in Fluids, 2012. 52: 1629–1639.
41. Scharnowski, S., R. Hain, and C.J. Kähler, Reynolds stress estimation up to single-pixel resolution using PIV-measurements. Experiments in Fluids, 2011. 52: 985–1002.
42. Gharib, M. et al., Leonardo's vision of flow visualization. Experiments in Fluids, 2002. 33: 219–223.
43. Cardwell, N.D., P.P. Vlachos, and K.A. Thole, A multi-parametric particle-pairing algorithm for particle tracking in single and multiphase flows. Measurement Science and Technology, 2011. 22: 105406.
44. Keane, R.D., R.J. Adrian, and Y. Zhang, Super-resolution particle imaging velocimetry. Measurement Science and Technology, 1995. 6(6): 754–768.
45. Kähler, C., S. Scharnowski, and C. Cierpka, On the uncertainty of digital PIV and PTV near walls. Experiments in Fluids, 2012. 52: 1641–1656.
46. Khalitov, D.A. and E.K. Longmire, Simultaneous two-phase PIV by two-parameter phase discrimination. Experiments in Fluids, 2002. 32: 252–268.
47. Charonko, J.J., E. Antoine, and P.P. Vlachos, Multispectral processing for color particle image velocimetry. Microfluidics and Nanofluidics, 2014. 17: 729–743.
48. McPhail, M.J. et al., Correcting for color crosstalk and chromatic aberration in multicolor particle shadow velocimetry. Measurement Science and Technology, 2015. 26: 025302.
49. Tsai, R., A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal on Robotics and Automation, 1987. 3: 323–344.
50. Willert, C., Stereoscopic digital particle image velocimetry for application in wind tunnel flows. Measurement Science and Technology, 1997. 8: 1465.
51. Soloff, S.M., R.J. Adrian, and Z.-C. Liu, Distortion compensation for generalized stereoscopic particle image velocimetry. Measurement Science and Technology, 1997. 8: 1441.
52. Wieneke, B., Stereo-PIV using self-calibration on particle images. Experiments in Fluids, 2005. 39(2): 267–280.
53. Wieneke, B., Volume self-calibration for 3D particle image velocimetry. Experiments in Fluids, 2008. 45: 549–556.
54. Discetti, S. and R.J. Adrian, High accuracy measurement of magnification for monocular PIV. Measurement Science and Technology, 2012. 23: 117001.
55. Westerweel, J. and F. Scarano, Universal outlier detection for PIV data. Experiments in Fluids, 2005. 39(6): 1096–1100.
56. Duncan, J. et al., Universal outlier detection for particle image velocimetry (PIV) and particle tracking velocimetry (PTV) data. Measurement Science and Technology, 2010. 21(5): 057002.
57. Keane, R.D. and R.J. Adrian, Optimization of particle image velocimeters. I. Double pulsed systems. Measurement Science and Technology, 1990. 1: 1202.
58. Hain, R. and C. Kähler, Fundamentals of multiframe particle image velocimetry (PIV). Experiments in Fluids, 2007. 42(4): 575–587.
59. Pun, C.-S., A. Susanto, and D. Dabiri, Mode-ratio bootstrapping method for PIV outlier correction. Measurement Science and Technology, 2007. 18(11): 3511–3522.
60. Wang, H. et al., Proper orthogonal decomposition based outlier correction for PIV data. Experiments in Fluids, 2015. 56: 1–15.
61. Stein, M.L., Interpolation of Spatial Data: Some Theory for Kriging. Springer Series in Statistics. 1999, New York: Springer. p. 263.
62. Everson, R. and L. Sirovich, Karhunen–Loève procedure for gappy data. Journal of the Optical Society of America A, 1995. 12: 1657.
63. Raben, S.G., J.J. Charonko, and P.P. Vlachos, Adaptive gappy proper orthogonal decomposition for particle image velocimetry data reconstruction. Measurement Science and Technology, 2012. 23(2): 025303.
64. Fouras, A. and J. Soria, Accuracy of out-of-plane vorticity measurements derived from in-plane velocity field data. Experiments in Fluids, 1998. 25: 409–430.
65. Etebari, A. and P. Vlachos, Improvements on the accuracy of derivative estimation from DPIV velocity measurements. Experiments in Fluids, 2005. 39: 1040–1050.
66. Foucaut, J.M. and M. Stanislas, Some considerations on the accuracy and frequency response of some derivative filters applied to particle image velocimetry vector fields. Measurement Science and Technology, 2002. 13(7): 1058–1071.
67. Karri, S., J. Charonko, and P.P. Vlachos, Robust wall gradient estimation using radial basis functions and proper orthogonal decomposition (POD) for particle image velocimetry (PIV) measured fields. Measurement Science and Technology, 2009. 20(4): 045401.
68. Zhou, J. et al., Mechanisms for generating coherent packets of hairpin vortices in channel flow. Journal of Fluid Mechanics, 1999. 387: 353–396.
69. Chong, M.S., A.E. Perry, and B.J. Cantwell, A general classification of three-dimensional flow fields. Physics of Fluids A: Fluid Dynamics, 1990. 2: 765–777.
70. Chakraborty, P., S. Balachandar, and R.J. Adrian, On the relationships between local vortex identification schemes. Journal of Fluid Mechanics, 2005. 535: 189–214.
71. Hunt, J.C.R., A.A. Wray, and P. Moin, Eddies, stream and convergence zones in turbulent flows, Report CTR-S88, in Proceedings of the CTR Summer Program. 1988, Stanford, CA: Center for Turbulence Research, Stanford University. pp. 193–208.
72. Jeong, J. and F. Hussain, On the identification of a vortex. Journal of Fluid Mechanics Digital Archive, 1995. 285: 69–94.
73. Raben, S.G., S.D. Ross, and P.P. Vlachos, Computation of finite-time Lyapunov exponents from time-resolved particle image velocimetry data. Experiments in Fluids, 2014. 55: 1–14.
74. van Oudheusden, B.W., PIV-based pressure measurement. Measurement Science and Technology, 2013. 24: 032001.
75. Liu, X. and J. Katz, Instantaneous pressure and material acceleration measurements using a four-exposure PIV system. Experiments in Fluids, 2006. 41: 227–240.
76. van Oudheusden, B., Principles and application of velocimetry-based planar pressure imaging in compressible flows with shocks. Experiments in Fluids, 2008. 45: 657–674.
77. Charonko, J.J. et al., Assessment of pressure field calculations from particle image velocimetry measurements. Measurement Science and Technology, 2010. 21(10): 105401.
78. Noca, F., D. Shiels, and D. Jeon, A comparison of methods for evaluating time-dependent fluid dynamic forces on bodies, using only velocity fields and their derivatives. Journal of Fluids and Structures, 1999. 13: 551–578.
79. Holden, D. et al., Aerodynamics of the flying snake Chrysopelea paradisi: How a bluff body cross-sectional shape contributes to gliding performance. The Journal of Experimental Biology, 2014. 217: 382–394.
80. Ragni, D. et al., Surface pressure and aerodynamic loads determination of a transonic airfoil based on particle image velocimetry. Measurement Science and Technology, 2009. 20(7): 074005.
81. Holden, D.P., Flying snakes: Aerodynamics of body cross-sectional shape, thesis document for Masters of Science in Mechanical Engineering, 2011, Blacksburg, VA: Virginia Polytechnic Institute and State University. p. 72.
82. Westerweel, J., Theoretical analysis of the measurement precision in particle image velocimetry. Experiments in Fluids, 2000. 29(7): S003–S012.
83. Nobach, H., Influence of individual variations of particle image intensities on high-resolution PIV. Experiments in Fluids, 2010. 50(4): 919–927.
84. Timmins, B. et al., A method for automatic estimation of instantaneous local uncertainty in particle image velocimetry measurements. Experiments in Fluids, 2012. 53(4): 1133–1147.
85. Charonko, J.J. and P.P. Vlachos, Estimation of uncertainty bounds for individual particle image velocimetry measurements from cross-correlation peak ratio. Measurement Science and Technology, 2013. 24: 065301.
86. Xue, Z., J.J. Charonko, and P.P. Vlachos, Particle image velocimetry correlation signal-to-noise ratio metrics and measurement uncertainty quantification. Measurement Science and Technology, 2014. 25: 115301.
87. Sciacchitano, A., B. Wieneke, and F. Scarano, PIV uncertainty quantification by image matching. Measurement Science and Technology, 2013. 24(4): 045302.
88. Wieneke, B., PIV uncertainty quantification from correlation statistics. Measurement Science and Technology, 2015. 26: 074002.
89. Neal, D.R. et al., Collaborative framework for PIV uncertainty quantification: The experimental database. Measurement Science and Technology, 2015. 26: 074003.
90. Sciacchitano, A. et al., Collaborative framework for PIV uncertainty quantification: Comparative assessment of methods. Measurement Science and Technology, 2015. 26: 074004.
91. Bhattacharya, S., J. Charonko, and P. Vlachos, Stereo-particle image velocimetry uncertainty quantification. Measurement Science and Technology, article reference: MST-104187.R1.
92. Stanislas, M. et al., Main results of the third international PIV challenge. Experiments in Fluids, 2008. 45: 27–71.
CHAPTER ELEVEN

Volumetric velocimetry

Filippo Coletti

Contents

11.1 Introduction 357
11.2 Quasi-3D optical techniques 358
  Scanning light sheet PIV 358
  Pseudo-spatial reconstruction using Taylor hypothesis 359
11.3 3D particle tracking velocimetry 360
  Generalities 360
  Imaging, reconstruction, and tracking 361
  Applications in turbulent flows 362
11.4 Defocusing PIV 363
11.5 Holographic particle velocimetry 364
  Working principle 364
  Off-axis systems 366
  In-line systems 366
11.6 Tomographic particle image velocimetry 368
  Working principle 368
  Technical aspects 368
  Applications 375
  Hybrid methods: Toward tomographic 4D PTV 376
11.7 Magnetic resonance velocimetry 377
  Magnetic resonance imaging 377
  Phase-contrast velocimetry 381
  Turbulence statistics 382
  Applications 383
11.8 Comparison between volumetric techniques 385
Problems 385
References 386

11.1 Introduction

Recognizing that most real flows are three-dimensional (3D) is as straightforward as recognizing that they are time varying. Nevertheless, in fluid mechanics, we often assume we are in front of two-dimensional (2D) or even one-dimensional (1D) problems, just as we often assume them to be steady state. The roots of such simplifications are both conceptual and methodological: flows that are homogeneous in one or more spatiotemporal dimensions are easier to understand and theoretically more tractable, their governing equations are less expensive to integrate numerically, and (what is critical here) their velocity fields are much easier to characterize experimentally.
In several cases of technological relevance, however, the third dimension is crucial to describe even basic flow aspects, and the inability to account for it severely limits both the fundamental understanding and the practical applications. A clear example is given by turbulent flows. The transport of vorticity, which is pivotal in the energy cascade from larger to smaller eddies, is critically influenced by the vortex stretching mechanism. The latter is 3D in nature, and indeed 2D turbulence (in which the fluid motion is confined to two spatial dimensions) displays fundamentally different dynamics, including an inverse energy cascade from the smaller to the larger scales [1].
While already in the 1990s the ever-increasing computational power led to the widespread use of CFD software for simulating 3D flows, volumetric velocimetry has not become available until recently. Although there were early proofs of concept, only in the last decade has there been a surge of successful efforts in developing and applying techniques capable of capturing 3D, three-component (3C) velocity fields. In part, this progress has been the consequence of the evolution of technologies that were already used in experimental fluid mechanics, such as laser-based imaging. The advances, however, have also been accelerated by ideas and techniques borrowed from different fields, including computer vision and medical imaging.

The following is a nonexhaustive description of the main volumetric velocimetry techniques that have been successfully applied to laboratory flows. While other promising techniques are emerging, we focus on the ones that are sufficiently established, in order to provide a perspective on the available alternatives.

11.2 Quasi-3D optical techniques

In order to obtain volumetric flow fields, a logical possible strategy is to combine multiple planar velocity measurements obtained by standard 2D particle image velocimetry (PIV). Particularly attractive in this sense is to measure 3C velocity fields with stereoscopic PIV and "stack" them to reconstruct the full 3D-3C information. To achieve this goal, two practically different but conceptually similar strategies are often employed: (1) the use of scanning-plane systems and (2) the pseudo-spatial reconstruction of a flow convected through the measurement plane.

Scanning light sheet PIV

In this technique, a light sheet is moved across the measurement volume, while one or more cameras record particle images along the illuminated planes. For laminar flows, this does not pose particular difficulties, other than the need to traverse the light sheet with great accuracy. In time-periodic flows, the task is also facilitated by the possibility of synchronizing the scanning motion with the period of the fluid motion. On the other hand, for generally unsteady and turbulent flows, the scanning light sheet approach poses considerable technical challenges: for the reconstructed field to be a faithful representation of the instantaneous and time-varying flow, multiple planar measurements need to be acquired within a time smaller than the temporal scales of interest. This is a challenge, because the minimum time required for volume scanning is typically a few tenths of a second, which in turbulent flows is often longer than the fine temporal scales.

For this reason, the technique has been applied mostly in relatively slow flows (typically in liquids), for which the implicit assumption that the flow is "frozen" during the scanning operation is acceptable. Hori and Sakakibara [2] used this method to study the far field of a water jet at Reynolds number Re ≈ 1000, based on the jet diameter of 5 mm and a bulk velocity of 0.2 m/s. The chosen conditions resulted in a relatively large value of the Kolmogorov time scale of 0.45 s at the measurement station, which allowed the relevant scales to be captured by scanning the volume in 0.22 s. In air, however, the same Reynolds number and physical dimensions would result in a Kolmogorov temporal scale νair/νwater times smaller (where ν is the fluid kinematic viscosity), that is, about 0.03 s, which would be too short to be captured by state-of-the-art scanning systems.
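The air–water comparison above reduces to a one-line scaling, since (as the text notes) at fixed Reynolds number and geometry the Kolmogorov time varies inversely with the kinematic viscosity:

nu_water, nu_air = 1.0e-6, 1.5e-5        # kinematic viscosities [m^2/s]
tau_water = 0.45                         # Kolmogorov time in the jet of Ref. [2] [s]
tau_air = tau_water * nu_water / nu_air  # ~0.03 s, as quoted above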
A second limitation is represented by the fact that the focal depth of the imaging system must be sufficient to observe focused particles in all scanned planes (see Chapter 10). This, however, is an issue common to most camera-based volumetric techniques.
[Figure 11.1 schematics. (a) Parallel scanning: a laser beam, cylindrical lenses, and a stepper-motor-driven scanning mirror sweep a light sheet through a Plexiglas octagonal tank observed by a video camera. (b) Angular scanning: an Nd:YLF laser, beam-shaping lenses, and an oscillating mirror sweep a light sheet through a channel observed by cameras L and R, with a calibration plate for reconstruction.]

FIGURE 11.1 Different types of setup used in scanning light sheet PIV. (a) Parallel scanning. (From Brücker, C., Exp. Fluids, 19(4), 255, 1995.) (b) Angular scanning. (From Hori, T. and Sakakibara, J., Meas. Sci. Technol., 15(6), 1067, 2004.)

Based on the direction of advancement of the light sheet during scanning, one can distinguish between parallel and angular scanning. In parallel scanning (Figure 11.1a), the light sheet is deflected by a mirror accurately rotated by a stepper motor. The deflected sheet passes through a cylindrical lens whose focal line coincides with the mirror axis of rotation, so that the light sheet maintains a parallel orientation during the scanning. One of the first examples of this strategy was reported by Brücker [3]. Parallel scanning is limited to illuminated volumes of depth equal to the dimension of the lens. In order to investigate larger volumes, angular scanning systems simply use a rotating mirror to sweep the deflected laser sheet, without the need for an additional lens (Figure 11.1b). This method, however, requires a fairly elaborate procedure for optical calibration and accurate spatial reconstruction of the volumetric velocity field [2].

Pseudo-spatial reconstruction using Taylor hypothesis

The "frozen turbulence" approximation first postulated by Taylor [4] states that "if the velocity of the airstream which carries the eddies is much greater than the turbulent velocity, one may assume that the sequence of changes in the velocity U at the fixed point are simply due to the passage of an unchanging pattern of turbulent motion over the point." In the formulation of Townsend [5], this can be written as
U ( x, t ) = U ( x - U c t, t + t ) (11.1)

where
U is the luid velocity
x and t indicate the spatial and temporal coordinates
τ is the time delay
Uc is the local convection velocity

This assumption, also known as the "Taylor hypothesis," has proved to be a powerful tool to investigate flows dominated by main advective components, such as jets and boundary layers, and has long been invoked to convert single-point temporal measurements into pseudo-spatial measurements. What is assumed is that the flow structures do not change while they are advected past the light sheet with the local convection velocity. This is somewhat analogous to the approach of scanning light sheet PIV, with the flow structures moving across the measurement plane instead of the plane sweeping through the structures.
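As an illustration of how such a reconstruction is assembled in practice, here is a minimal sketch (the frame rate, convection velocity, and grid sizes are assumed placeholders, not values from the cited studies):

```python
import numpy as np

# Minimal sketch of pseudo-spatial reconstruction: a time-resolved sequence of
# planar 3C fields u(t, y, z), measured at a fixed streamwise station, is
# stacked into a volume by mapping time to streamwise distance via the
# convection velocity (Taylor hypothesis). All values below are assumptions.
f_acq = 1000.0                    # acquisition frequency [Hz]
Uc = 10.0                         # local convection velocity [m/s]
n_t, n_y, n_z = 200, 64, 64       # number of snapshots and in-plane grid size

u = np.random.rand(n_t, n_y, n_z, 3)   # placeholder for the measured 3C fields

t = np.arange(n_t) / f_acq
x_pseudo = Uc * t                 # each snapshot becomes one streamwise slice
# The "volume" is simply the stacked sequence u; the effective streamwise
# spacing is dx = Uc / f_acq, which must resolve the structures of interest.
dx = Uc / f_acq
print(f"slice spacing: {dx*1e3:.1f} mm over {x_pseudo[-1]:.2f} m of pseudo-space")
```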
Figure 11.2 (See color insert.) Examples of pseudo-spatial reconstruction of instantaneous 3D velocity fields using Taylor hypothesis. (a) Volumetric vector field in a turbulent jet. (From Ganapathisubramani, B. et al., Exp. Fluids, 42(6), 923, 2007.) (b) Coherent structures in a turbulent boundary layer. (From Dennis, D.J. and Nickels, T.B., J. Fluid Mech., 673, 180, 2011.)

With the advent of time-resolved PIV, several researchers have used Taylor's hypothesis to reconstruct volumetric velocity fields. In particular, time-resolved stereoscopic PIV has been employed, as it can provide the three components of velocity along the measurement plane. As examples of this approach, Figure 11.2 shows pseudo-spatial reconstructions obtained in the far field of a turbulent jet by Ganapathisubramani et al. [6] and in the outer region of a turbulent boundary layer by Dennis and Nickels [7].
The Taylor hypothesis is typically considered acceptable as long as the velocity fluctuations are small with respect to the mean convection velocity. However, a precise limit in terms of turbulence intensity is not unanimously accepted. Moreover, direct comparison between spatial and pseudo-spatial reconstructions indicates that small-scale motions are not accurately retrieved when the approximation is invoked [8]. Most critically, an unambiguous definition of the convection velocity is not always trivial, and it depends on the scales of the considered flow structures [9,10]. These drawbacks limit the applicability and success of the pseudo-spatial reconstruction approach and will likely continue to do so as true volumetric velocimetry techniques become more accurate, better performing, and more widely available.

11.3 3D particle tracking velocimetry

Generalities  In order to gain quantitative information on a 3D flow, the most conceptually straightforward approach is to follow passive tracers along their trajectories within the investigated fluid volume. This returns a Lagrangian description of the flow and is typically achieved by performing the stereoscopic reconstruction of the tracer position using multiple camera views. If the series of images is acquired at sufficient temporal resolution, the outcome is the position of each tracer as a function of time, that is, p(t) = {x(t), y(t), z(t)}, where p is the tracer position vector and x, y, and z are its three spatial components. Lagrangian quantities can then be calculated, including velocity, acceleration, and higher-order temporal derivatives along the tracer particle trajectories. By interpolating the Lagrangian tracks onto a Cartesian grid, Eulerian quantities can be obtained [11].
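A minimal sketch of this last step, assuming a track sampled at a constant frame rate (all values are illustrative placeholders):

```python
import numpy as np

# Lagrangian velocity and acceleration from a tracked position p(t) = {x, y, z}
# sampled at a constant frame rate, via central finite differences.
f_acq = 10_000.0                     # frame rate [Hz] (assumed)
dt = 1.0 / f_acq
p = np.cumsum(np.random.randn(500, 3) * 1e-5, axis=0)  # placeholder track [m]

v = np.gradient(p, dt, axis=0)       # velocity along the trajectory [m/s]
a = np.gradient(v, dt, axis=0)       # acceleration [m/s^2]
# In practice the raw positions are noisy, and tracks are usually smoothed
# (e.g., by convolution with a Gaussian kernel) before differentiation.
```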
This approach represents an extension of planar particle tracking velocimetry (PTV) (see Chapter 10) to three dimensions, and indeed it has been named 3D PTV by the research group that formally introduced it [12]. However, since the knowledge of a particle position over time allows one to determine several other quantities besides its velocity, the expression Lagrangian particle tracking (LPT) is sometimes deemed more appropriate [13]. We also note that if the only goal is to determine the fluid velocity, two recordings are in principle sufficient for
first-order accuracy. However, as we shall see, access to multiple successive recordings allows the use of powerful predictor–corrector strategies for the accurate tracking of the particles in time. On the other hand, this requires a high-speed illumination and imaging system capable of resolving the tracers' motion, which is often a severe constraint in turbulent flows.

Imaging, reconstruction, and tracking  As in planar PTV, the first step is to detect the location of individual tracer particles within the images, which is possible only if the particle image density is sufficiently low. Additional considerations are in order due to the volumetric nature of the technique. The entire volume of interest needs to be in focus, which means that the depth of field δz needs to be at least equal to the thickness of the illuminated region Δz0. According to diffraction optics (as already shown in Chapter 10):

δz ≅ 4 (1 + 1/M0)² f#² λ  (11.2)

where M0 is the magnification, f# is the lens aperture number, and λ is the wavelength. Therefore, in order to obtain δz ≥ Δz0 (with Δz0 typically in the order of several mm or even a few cm), a large aperture number f# is needed. This limits the amount of light that reaches each image sensor. Moreover, the light intensity is inversely proportional to the thickness of the illuminated volume, and thus a large amount of energy is needed for proper particle imaging. Both these requirements dictate the use of powerful lasers as a light source.
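For a feel of the numbers, Equation 11.2 can be evaluated directly; the magnification and wavelength below are assumed representative values:

```python
# Evaluating Equation 11.2 for assumed imaging conditions: green laser light,
# magnification M0 = 0.2, and a range of aperture numbers.
lam = 532e-9                      # wavelength [m]
M0 = 0.2                          # magnification (assumed)

for f_num in (8, 11, 16, 22):
    dz = 4.0 * (1.0 + 1.0 / M0) ** 2 * f_num ** 2 * lam   # Equation 11.2
    print(f"f# = {f_num:2d} -> depth of field ~ {dz*1e3:5.1f} mm")
# f# = 22 gives dz ~ 37 mm here: a several-cm-deep volume indeed calls for a
# large aperture number, at the cost of collected light.
```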
The following step is to establish, for each particle, a stereoscopic correspondence between its images in multiple camera views (stereo-matching). The collinearity condition states that object point, camera projective center, and image point lie on a straight line. In principle, two camera views are sufficient to determine the particle position. Ambiguities can arise, however, if multiple particles lie on the same line of sight, as illustrated in Figure 11.3. In the example, let us consider the problem of stereo-matching particle PA, which is imaged by camera 1 as point p and by camera 2 as point pA. Considering the line of sight of particle PA imaged by camera 1, its projection on camera 2 forms an epipolar line along which the corresponding particle image is sought. However, due to the finite size of the particle image, the epipolar line intersects with pB and pC, which are the projections of particles PB and PC on camera 2. Therefore, a third camera view is needed to resolve the ambiguity. The problem of ambiguous matching may be alleviated using a larger number of cameras and will be exacerbated by higher particle density. The number of ambiguities grows with the square of the number of particles [12]; therefore, in practice the particle image density is limited to 10⁻³–10⁻² ppp (particles per pixel). The presence of such "false" particles, so-called ghost particles, is a major source of error for both 3D PTV and tomographic PIV (Section 11.6). Methods to mitigate this problem will be further discussed in the "Technical aspects" section.


Figure 11.3 Schematic illustrating the problem of ambiguous matching of a particle using only two cameras for stereo-matching.
Next, the 3D physical coordinates of each particle are reconstructed from its 2D positions in the camera views. The reconstruction of the 3D position of each particle requires the determination of a mapping function between its physical coordinates {x, y, z} and image coordinates {Xi, Yi} on the N camera views, with i = 1, …, N. The number of parameters in the mapping function depends on the adopted camera model (typically a pinhole camera model [14]). The parameter values are obtained by imaging a calibration target inserted in the region of interest and spanning the entire observation volume. The target can either be multi-plane, or planar and traversed on a micrometer stage.
Finally, once the particle positions in space have been determined for each multi-camera recording, the particle motion is tracked in time through the temporal sequence. For this purpose, a procedure is needed to establish links between the particle positions in consecutive time steps, that is, to find which particle in every realization corresponds to the same physical particle. While the closest-neighbor principle appears to be the obvious choice, more accurate results have been demonstrated with more elaborate strategies. In particular, Ouellette et al. [13] also tested as criteria the minimum acceleration, the minimum change in acceleration, and a four-frame best estimate. The latter method gave the best results (as demonstrated by comparison with simulated turbulent flow data) and works as follows. Starting from a particle whose trajectory has been reconstructed up to frame n, frame n − 1 is used to evaluate its velocity and hence its estimated position in frame n + 1. The particles within a search volume centered on that position are candidates to continue the trajectory. Their velocity and acceleration are evaluated from frames n and n − 1, and the one that leads to the estimated position closest to a real particle in frame n + 2 is chosen to continue the trajectory.
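The following is a minimal sketch of this four-frame criterion (a simplified illustration, not the authors' implementation; the search radius and time step are assumptions):

```python
import numpy as np

def four_frame_best_estimate(p_nm1, p_n, frame_np1, frame_np2, dt, r_search):
    """Choose, among the particles detected in frame n+1, the one that best
    continues a trajectory known at frames n-1 and n (simplified sketch of
    the four-frame criterion tested by Ouellette et al. [13]).

    p_nm1, p_n : (3,) positions of the tracked particle at frames n-1 and n
    frame_np1, frame_np2 : (M, 3) arrays of all detected particle positions
    """
    v = (p_n - p_nm1) / dt                      # velocity from frames n-1, n
    p_pred = p_n + v * dt                       # predicted position at n+1

    # candidates within the search volume centered on the prediction
    d = np.linalg.norm(frame_np1 - p_pred, axis=1)
    candidates = frame_np1[d < r_search]
    if len(candidates) == 0:
        return None                             # track ends

    best, best_err = None, np.inf
    for c in candidates:
        v_c = (c - p_n) / dt                    # updated velocity estimate
        a_c = (c - 2.0 * p_n + p_nm1) / dt**2   # acceleration estimate
        p_pred2 = c + v_c * dt + 0.5 * a_c * dt**2   # extrapolate to n+2
        # the candidate whose extrapolation lands closest to a real particle
        # in frame n+2 continues the trajectory
        err = np.min(np.linalg.norm(frame_np2 - p_pred2, axis=1))
        if err < best_err:
            best, best_err = c, err
    return best
```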

Applications in turbulent flows  Since its introduction, 3D PTV has been the method of choice to obtain single-particle and multiple-particle Lagrangian statistics in laboratory turbulent flows, both in air and in water. In particular, the possibility of obtaining Lagrangian statistics has allowed researchers to tackle fundamental questions regarding the phenomenology of turbulence [15]. Due to the intermittent nature of turbulence, and to the desire of reaching asymptotically high Reynolds numbers, the constraints on the spatial and, even more, the temporal resolution imposed on the imaging system can be enormous.
As an example, let us consider a fully turbulent flow reproduced in a laboratory experiment. In order to reproduce regimes of technological interest, and to come close to universal scaling behaviors, the turbulent Reynolds number Re = u′L/ν shall approach 10⁵. Here, u′ is the RMS velocity fluctuation, L is the energy injection scale, and ν is the kinematic viscosity. Assuming water as the working fluid (ν ≈ 10⁻⁶ m²/s) and a convenient energy injection scale of L = 0.1 m, it follows that u′ ≈ 1 m/s and ε ≈ 10 m²/s³, where ε is the turbulent kinetic energy dissipation rate per unit mass. Changes in particle trajectories can happen on timescales of the order of the Kolmogorov time τη = (ν/ε)^(1/2). In order to accurately capture changes in tracer velocity and acceleration, the temporal resolution should be at least one order of magnitude finer than τη (0.3 ms in this example), which means an acquisition frequency of 30 kHz or more. This is on the high end of what is attainable by state-of-the-art scientific cameras and lasers. A similar regime in air would require an acquisition frequency νair/νwater times larger.
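The estimate can be reproduced with a few lines (same assumed values as in the text):

```python
# Resolution requirement for Lagrangian tracking in fully developed turbulence,
# following the worked example above.
nu = 1.0e-6            # kinematic viscosity of water [m^2/s]
L = 0.1                # energy injection scale [m]
Re = 1.0e5             # target turbulent Reynolds number

u_rms = Re * nu / L                 # ~1 m/s
eps = u_rms ** 3 / L                # dissipation rate, ~10 m^2/s^3
tau_eta = (nu / eps) ** 0.5         # Kolmogorov time, ~0.32 ms

f_acq = 10.0 / tau_eta              # resolve one tenth of tau_eta
print(f"u' = {u_rms:.1f} m/s, tau_eta = {tau_eta*1e3:.2f} ms, "
      f"f_acq >= {f_acq/1e3:.0f} kHz")
```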
From the earlier example, it is not surprising that most 3D PTV experiments in high Reynolds number turbulence have utilized water as the working fluid. Voth et al. [15] investigated a so-called von Kármán flow generated by two counter-rotating disks (Figure 11.4a). At the achieved regime of Re = 970 (based on the Taylor microscale and the RMS velocity fluctuations), the instantaneous accelerations reached 16,000 m/s² (Figure 11.4b). In order to accurately reconstruct convoluted particle trajectories, a temporal resolution of 1/20 of the Kolmogorov timescale was deemed necessary, that is, about 14 μs. At the time of the study, this frame rate (in excess of 70 kHz) was only achievable using silicon strip detectors developed to measure subatomic particle tracks [16]. However, only single-particle dynamics could be investigated. Nowadays, complementary metal–oxide–semiconductor (CMOS) cameras allow the tracking of many particles at rates comparable to those of silicon strip detectors [17].

Figure 11.4 (See color insert.) (a) Sketch of the experimental setup used by Bourgoin et al. [17] to measure Lagrangian tracer trajectories in fully developed turbulence. The particles were illuminated by two high-power lasers and imaged by three high-speed cameras. (b) Trajectory of a tracer particle in fully developed turbulence reconstructed by 3D PTV, with the acceleration magnitude represented by the color of the trajectory. (From Voth, G.A. et al., J. Fluid Mech., 469, 121, 2002.)

11.4 Defocusing PIV

An alternative approach to locate the particles in three dimensions is represented by defocused particle imaging [18]. In a typical imaging system with a convergent lens and one aperture, a point A located on a reference plane is focused on A′ in the image plane, while a point B placed between the reference plane and the lens system will be projected defocused in B′ (Figure 11.5a). If a mask with two openings is used, each source point B produces two defocused images B′ and B″ (Figure 11.5b), separated by a distance b related to the depth


Figure 11.5 Imaging systems illustrating the principle of defocused particle imaging. (a) Standard single-aperture arrangement. (b) Double-aperture arrangement. (From Gharib, M. et al., Integr. Comp. Biol., 42(5), 964, 2002.)

Figure 11.6 Three-aperture defocusing arrangement. (From Gharib, M. et al., Integr. Comp. Biol., 42(5), 964, 2002.)


Figure 11.7 (See color insert.) Applications of defocusing PIV. (a) 3D vector field and isosurface of vorticity at the outset of a jet issued from an inclined exit. (From Troolin, D.R. and Longmire, E.K., Exp. Fluids, 48(3), 409, 2010.) (b) Development of a tip vortex on an airfoil, colored based on the streamwise vorticity. (From Wolf, E. et al., Exp. Fluids, 50(4), 977, 2011.)

location of B. It can be shown that b is independent of the in-plane coordinates of the source point and of the opening diameter, which allows a straightforward reconstruction of its spatial position [19].
In practical implementations, a three-aperture arrangement has been utilized, which results in each particle being imaged as a small triangle that flips orientation when the source point moves across the reference plane (Figure 11.6). The three-hole-aperture system can in fact be replaced by three separate cameras [19], all having image planes parallel to the common object focal plane of the three lenses.
Like 3D PTV, defocusing PIV relies on the identification of individual particles, which limits the maximum allowable particle density. Once the particle distribution in successive acquisitions is reconstructed, the displacement field is determined either by tracking individual particles or by means of 3D cross-correlation of interrogation volumes containing multiple particles [21]. The technique has been used mostly in liquid flows, especially in vortex dynamics studies, including the generation of vortex rings and tip vortices (Figure 11.7).

11.5 Holographic particle velocimetry

Working principle  Holography is a technique in which the amplitude and phase of the light wave scattered by an object are recorded and used to reconstruct the original light field in 3D. The technique has already been described in Chapter 8, and it is recalled here with more focus on 3D velocimetry applications. An intense volumetric beam of coherent light illuminates the investigated object,
which scatters light (the object wave O); the latter is superimposed onto a reference wave R. The intensity of light on the recording plane is given by

I = (R + O)(R* + O*) = RR* + OO* + RO* + R*O  (11.3)

where the asterisk indicates the complex conjugate. The first term is the intensity of the reference wave, the second term is the intensity of the object wave, and the third and fourth terms represent the interference pattern. The latter is stored on a recording medium to form a hologram.
The hologram includes the 3D information of the shape and position of the illuminated objects, encoded into two major frequencies of the interference pattern (Figure 11.8). For example, for a spherical particle, the high-frequency component includes the depth information of the object, whereas the lower frequency component forms an envelope containing the size and shape information of the object [24]. When the developed hologram is illuminated (physically or virtually) with the complex conjugate of the original reference wave, the wave transmitted through the hologram is the product of the incident wave and the spatial transmittance of the hologram, which has encoded the original object wave. As this transmitted wave propagates in 3D space, the original object wave is reconstructed and produces a 3D image of the particles.
In the context of flow velocimetry, the illuminated object is typically a cloud of tracer particles. Self-interference of reconstructed waves from different particles can produce an appreciable amount of speckle noise that would obstruct particle recognition. Speckle noise problems place an upper limit on the total number of particles that can be recorded by a hologram.
While in classical PIV lasers are preferred because they provide short pulses of high energy, for holography lasers are definitely needed because interference requires coherence: the optical paths of the object beam and the reference beam must be matched within the coherence length, which is only a few centimeters for Nd:YAG lasers. This constraint can lead to very complex optical setups. Although numerous different configurations have been used, holographic systems are typically classified based on the illumination setup. In particular, the reference beam and the object beam can be either inclined at a finite angle (off-axis holography) or parallel (in-line holography) (see Figure 11.9).


Figure 11.8 Schematic of holographic interference pattern for a spherical particle. (From Mostafa Toloui, University of Minnesota, Minneapolis, MN.)


Figure 11.9 Holographic recording configurations: (a) in-line and (b) off-axis. (From Sun, H. et al., Philos. Trans. R. Soc. Lond. A: Math. Phys. Eng. Sci., 366(1871), 1789, 2008.)
Off-axis systems  In off-axis configurations, the fringe spacing between two collimated beams is [26]

df = λ / [2 sin(θ/2)]  (11.4)

where
λ is the wavelength
θ is the angle between the beams

Even for very small θ, a resolution of thousands of lines per mm is required to resolve the fringe spacing for visible light wavelengths, and specialized high-resolution emulsions need to be used as recording media, for example, photorefractive crystals or thermosensitive plates. However, the sensitivity of these supports is low, and high-intensity laser light in forward scattering is needed to image small tracer particles.
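Evaluating Equation 11.4 for a few beam angles shows why photographic emulsions are required (the wavelength and angles are assumed values):

```python
import math

# Fringe spacing (Equation 11.4) and the recording resolution it implies,
# for green light and a few assumed beam angles.
lam = 532e-9                       # wavelength [m]
for theta_deg in (5, 15, 30, 60):
    theta = math.radians(theta_deg)
    d_f = lam / (2.0 * math.sin(theta / 2.0))       # fringe spacing [m]
    lines_per_mm = 1e-3 / d_f
    print(f"theta = {theta_deg:2d} deg -> d_f = {d_f*1e6:.2f} um "
          f"({lines_per_mm:.0f} lines/mm)")
# Already at 30 deg the medium must resolve ~1000 lines/mm, far beyond the
# pixel pitch of digital sensors.
```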
A significant disadvantage of photographic holography is that the recording medium (film or plate) needs to be replaced after each recording. This is especially impractical in particle velocimetry, where at least two acquisitions are needed. To alleviate such limitations, a reusable material can be applied by utilizing polarization multiplexing to record two holograms that can be reconstructed separately [27].
During the optical reconstruction of holograms recorded on emulsions, these are illuminated by a laser beam oriented along the same direction as the original reference beam but propagating in the opposite direction, that is, a conjugate beam. The diffraction of light by the interference patterns on the film generates 3D images of the particles in the original sample volume. The particle field is then scanned in three dimensions, typically by a traversing digital camera (Figure 11.10). The interrogation of the obtained particle field can be either planar or volumetric, and the displacement can be determined either by correlation or by particle tracking.

In-line systems  The sensor resolution of charge-coupled device (CCD) or CMOS cameras is not sufficient for off-axis holography, but it has proved adequate to successfully record digital holograms from in-line configurations. In such systems, the angle between reference and object beams is constrained to be less than the angle subtended by the holographic plate. In many applications, the illuminating beam and the reference beam coincide. In comparison with off-axis holography, this inherently involves a simpler optical system and, due to the small angle between the interfering beams, produces relatively large fringe spacing.


Figure 11.10 Holographic recording (a) and holographic reconstruction (b) of particle images in off-axis systems. (From Meng, H. et al., Meas. Sci. Technol., 15(4), 673, 2004.)

Figure 11.11 Principle of in-line digital holographic PIV. (From Jiarong Hong, University of Minnesota, Minneapolis, MN.)

Furthermore, because of the superior light efficiency of forward scattering, the required laser power is greatly reduced.
Another key advantage of digital holography is that the reconstruction process can be performed numerically, by calculating the light field that would be generated by illuminating the hologram with the conjugate beam. The possibility of recording digital holograms allows for a large number of fields at high temporal resolution, which makes it possible to accurately pair particle images in successive recordings and perform Lagrangian tracking. Figure 11.11 shows a conceptual schematic of in-line digital holography and its application to particle velocimetry.
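As an illustration, one common numerical reconstruction approach (the angular spectrum method, used here as an assumed example; the chapter does not prescribe a specific algorithm) back-propagates the recorded hologram over a range of depths:

```python
import numpy as np

# Minimal sketch of numerical hologram reconstruction by the angular spectrum
# method: the recorded hologram is propagated over a distance z, bringing
# particles located at that depth into focus. All inputs are placeholders.
def angular_spectrum_propagate(hologram, lam, dx, z):
    """Propagate a square hologram sampled at pitch dx by a distance z [m]."""
    n = hologram.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                # spatial frequencies [1/m]
    FX, FY = np.meshgrid(fx, fx)
    # keep propagating components only; evanescent waves are suppressed
    arg = 1.0 - (lam * FX) ** 2 - (lam * FY) ** 2
    kernel = np.exp(1j * 2 * np.pi / lam * z * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(hologram) * kernel)

lam, dx = 532e-9, 5e-6                          # wavelength, pixel pitch (assumed)
holo = np.random.rand(512, 512)                 # placeholder recorded hologram
for z in (5e-3, 10e-3, 20e-3):                  # scan reconstruction depths
    field = angular_spectrum_propagate(holo, lam, dx=dx, z=z)
    intensity = np.abs(field) ** 2              # particles appear as foci here
```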
A critical issue in holography, which is particularly severe for in-line systems of small numerical aperture, is the "depth-of-focus" (DOF) problem. Here, DOF indicates the longitudinal size of a diffraction-limited point-source image. In forward light scattering, this is typically of the order of 1 mm for 10–20 μm particles. Consequently, one can accurately detect a particle's coordinates in the plane normal to the reference beam, but the detection of the axial location is much less accurate [24]. The current generation of digital holographic imaging systems is typically limited to measurements at low particle densities (<1 mm⁻³) in small reconstructed volumes (<1 cm³). Recent advances in the numerical reconstruction of the particle distribution, however, appear promising in overcoming these disadvantages, allowing concentrations of thousands of particles per mm³ [29].
The holographic reconstruction of particle fields benefits from the small DOF that is inherent to microscopic imaging. Indeed, in-line digital configurations have found their most successful applications in holographic microscopy. A microscope objective magnifies the hologram, and the need for a high-resolution recording medium is relaxed. The small investigated volumes require high magnification, and here holography has the advantage, with respect to other volumetric PIV approaches, of not being limited by the small depth of field.
Digital holographic PIV has been successfully used to perform volumetric PIV measurements in situations where small-scale dynamics is critical. Figure 11.12a schematically shows an in-line system for digital holographic microscopy used to image the near-wall layers of the flow in a square duct with roughened walls. Figure 11.12b shows the measured coherent vortical structures in the inner part of the turbulent boundary layer over the rough wall: the visualized volume is about 3.1 × 2.1 × 1.8 mm³, which was interrogated at a resolution of 60 μm in the three directions.
(a) (b)

Figure 11.12 (See color insert.) Application of digital in-line holographic PIV to investigate the near-wall region in a turbulent channel flow with 3D roughness by Talapatra and Katz [30,31]. (a) Optical setup. (b) Isosurfaces of vortex identification criterion (λ2) and vortex lines.

11.6 Tomographic particle image velocimetry

Working principle  Tomographic PIV (also referred to as Tomo-PIV) was introduced as an extension of the planar PIV method to measure 3C volumetric velocity fields [32]. The development of tomographic PIV was motivated by the need for 3D velocity measurements with a robust particle-volume reconstruction procedure, instead of relying upon particle identification. Similar to other optical volumetric approaches, the tracer particles are illuminated within a volume and imaged by multiple cameras. However, unlike methods based on particle identification, what is reconstructed is the 3D light-intensity distribution, and the particle displacement is obtained by robust cross-correlation methods. Since its recent introduction, Tomo-PIV has made a significant impact in the fluid mechanics community, especially for the investigation of turbulent flows, with continuous technical improvements and a steady increase in applications [33].
The working principle of tomographic PIV is schematically represented in Figure 11.13 [32]. Tracer particles immersed in the flow are illuminated by a pulsed light source within a 3D region of space. The scattered light pattern is recorded simultaneously from several viewing directions using digital cameras (typically three to six). The particle images need to be recorded in focus, which requires a relatively long focal depth of the imaging system and therefore a high f#. The Scheimpflug condition (see Chapter 10) between the image plane, the lens plane, and the mid-object plane is fulfilled by means of camera-lens adapters. Pairs of recorded images from each camera are input to the tomographic reconstruction algorithm. This returns a 3D representation of the light field scattered by the particles, in general using iterative methods inspired by optical tomography. The tomographic reconstruction requires an accurate relation between the image and object space, which must be determined through a calibration procedure similar to that used for stereoscopic PIV. The analysis of the particle motion within the object pair is performed by 3D cross-correlation with multigrid iterative window deformation, a direct extension of the approach used in planar PIV.

Technical aspects: Volume illumination  The illumination of the 3D investigated domain is typically achieved with laser light. Given the three-dimensionality of the domain, the constraints on the orientation of the illuminated region with respect to the imaging system are more relaxed than in planar PIV, and particle tracers are imaged even when moving along the viewing direction. Furthermore, the thickness of the illuminated region allows a less complex optical arrangement, since the beam diameter of most lasers (e.g., about 1 cm for Nd:YAG lasers) is sufficient to achieve the optimal illuminated volume size with a single cylindrical lens.

Figure 11.13 Principle of tomographic PIV. (From Elsinga, G.E. et al., Exp. Fluids, 41(6), 933, 2006.)

The thickness of the light sheet is typically about one quarter of the width or length of the field of view (FOV). Larger illuminated volumes are often desirable but are in conflict with the intensity limitations of even the most powerful lasers available, a problem exacerbated by the need for a large numerical aperture. Indeed, as in LPT, tomographic PIV requires a depth of focus δz at least equal to the thickness of the illuminated region lz, implying high f# values. It has been shown, however, that this condition can be somewhat relaxed in order to have a limited out-of-focus effect, which returns particle images larger than one pixel and does not degrade significantly the quality of the reconstruction [33].
Knife-edge slits need to be employed to control the illuminated thickness, as the beam tends to diverge along the optical path. This is done to avoid recording any light originating from regions outside the measurement volume, which would increase the noise in the reconstructed signal. In order to maximize the amount of light from a given laser source, methods such as the double-pass light amplification system have been developed, where the laser light is back-reflected by a mirror placed on the opposite side of the light source; as a consequence, the light intensity is amplified, and cameras placed in backward scattering can also benefit from the reflected forward scattering. The gain factor practically achieved with such a system is approximately 1.5. Larger gains (about a factor of 7) can be obtained with multi-pass light amplification systems [34], where the light beam is sent at a small angle into a region bounded by two mirrors, where it is reflected typically 20 times before exiting (Figure 11.14). Typical illumination volumes achieved in tomographic PIV experiments at a

Figure 11.14 (See color insert.) Multi-pass light amplification system. (From Ghaemi, S. and Scarano, F., Meas. Sci. Technol., 21(12), 127002, 2010.)

low repetition rate are around 200 and 50 cm³ for water and air flows, respectively. For high-speed PIV, the illuminated volume is further reduced (approximately by a factor of 1.5) due to the less intense available laser sources. In conditions where the light requirements are lower, alternative illumination systems to lasers can be used. The use of light-emitting diodes (LEDs) was proposed by Willert et al. [35], and their feasibility for tomographic PIV has been demonstrated by Buchmann et al. [36].

Imaging considerations  The concentration of particle tracers within the measurement volume ultimately determines the spatial resolution of the measurement. However, high seeding density degrades the quality of the reconstruction, and a compromise between accuracy and spatial resolution has to be found. Elsinga et al. [32] showed that a four-camera system accurately reconstructs images with a seeding density of 0.05 particles per pixel (ppp). In a standard 1 MP camera, this corresponds to 50,000 particles (Figure 11.15a). However, the number of particles alone does not govern the tomographic reconstruction accuracy, and the particle image diameter needs to be accounted for. The source density Ns governs more directly the relation between the seeding density and the reconstruction accuracy (Figure 11.15b):

Ns = ppp · π dτ*² / 4  (11.5)

where dτ* is the pixel-normalized particle image diameter. Novara et al. [37] estimated that an accurate reconstruction can be achieved when Ns < 0.3 with a four-camera tomographic system.
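For example, Equation 11.5 can be checked for assumed representative imaging conditions:

```python
import math

# Source density (Equation 11.5) for assumed imaging conditions.
ppp = 0.05          # seeding density [particles per pixel]
d_tau = 2.5         # pixel-normalized particle image diameter (assumed)

N_s = ppp * math.pi * d_tau ** 2 / 4.0
print(f"source density Ns = {N_s:.2f}")   # ~0.25 < 0.3: acceptable per [37]
```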
In terms of camera positioning and orientation, the most typical configurations are the cross-like arrangement and the linear arrangement (Figure 11.16), the choice being often dictated by constraints on optical access or complexity of installation. The accuracy of the particle field reconstruction is maximized for a total aperture β ranging between 40° and 80° [32], and the cross-like arrangement yields a somewhat higher reconstruction quality than the linear arrangement [33]. A condition to be avoided is that two cameras be collinear, that is, that they observe the illuminated domain from opposite directions, which would make the recorded information redundant.

Calibration  The relation between the image coordinates and the physical space is established by a calibration procedure similar to what is used in stereoscopic PIV (Chapter 10) or 3D PTV. Each camera records images of a calibration target, with markers at several positions in depth throughout the volume. Based on the known positions of the markers, a relation can be found between the pixel coordinates and their projection in 3D space; a third-order polynomial is typically used for the mapping function. To successfully reconstruct particle images, the accuracy requirement for the calibration is a fraction of the particle image size.
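A minimal sketch of such a mapping-function fit (an illustrative least-squares implementation, not the one used by any specific software) could look as follows:

```python
import numpy as np

# Fitting a third-order polynomial mapping from physical coordinates (x, y, z)
# of calibration markers to their pixel coordinates (X, Y) on one camera.
# Marker positions and detections below are placeholders.
def poly3_terms(x, y, z):
    """All monomials x^i y^j z^k with i + j + k <= 3 (20 terms)."""
    return np.array([x**i * y**j * z**k
                     for i in range(4) for j in range(4) for k in range(4)
                     if i + j + k <= 3])

xyz = np.random.rand(200, 3)           # marker positions from the target [mm]
XY = np.random.rand(200, 2) * 1024     # detected pixel coordinates

A = np.array([poly3_terms(*m) for m in xyz])     # design matrix (200 x 20)
coeff, *_ = np.linalg.lstsq(A, XY, rcond=None)   # one fit per camera

XY_pred = A @ coeff                    # mapping: physical -> pixel coordinates
rms = np.sqrt(np.mean((XY_pred - XY) ** 2))
# The fit residual must stay a fraction of the particle image size; larger
# disparities call for the self-calibration discussed below.
```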

Figure 11.15 (a) Examples of recordings at different particle densities (ppp indicated). (b) Effect of ppp and Ns on the image quality factor Q, defined as the correlation coefficient between computer-generated particle images and those reconstructed from the projection images. (With kind permission from Springer Science+Business Media: Meas. Sci. Technol., Tomographic PIV: Principles and practice, 24, 2013, 012001, Scarano, F.)

However, under realistic experimental conditions, the mechanical stability of camera holders and lens-tilt adapters, thermal variations within the cameras, and potential vibrations of the experimental setup may lead to typical misalignments in excess of 1 pixel, which would cause the reconstruction to fail. A procedure for misalignment detection and correction is then necessary. Wieneke [38] proposed an a posteriori self-calibration based on the minimization of the disparity between the images of the same particle on the different camera images. An iterative application of this procedure allows the disparity to be reduced to values smaller than 0.1 pixels.

Tomographic reconstruction  The 3D particle distribution is reconstructed by optical tomography over an array of cubic elements (voxels) having approximately the same linear size as

Figure 11.16 Cross-like and linear imaging configurations of tomographic PIV systems based on four cameras (a) and their reconstruction quality factor versus system aperture (b). (From Scarano, F., Meas. Sci. Technol., 24(1), 012001, 2013.)

the image pixels, with physical coordinates (X, Y, Z). The voxel intensity distribution is then defined as E(X, Y, Z), and its projection onto the ith image pixel with image coordinates (xi, yi) returns the pixel intensity I(xi, yi), according to the linear model:

I(xi, yi) = Σj∈Ni wij E(Xj, Yj, Zj)  (11.6)

where Ni is the set of neighboring voxels (typically a cylinder of 3 × 3 voxels cross section) around the line of sight of the ith pixel through the volume. The weighting coefficient wij describes the contribution of the jth voxel to the ith pixel and depends on the distance between the voxel and the line of sight. A graphical representation of the imaging model is presented in Figure 11.17, where a top view of the reconstructed domain is shown, and the camera sensors appear as linear arrays of pixels.
The reconstruction problem is solved iteratively by a tomographic algorithm. The multiplicative algebraic reconstruction technique (MART) algorithm used by Elsinga et al. [32] can still be considered the standard. Starting from a uniform and nonzero initial guess E(X, Y, Z)⁰, the object E(X, Y, Z) is updated at each full iteration in a loop over each pixel i from all cameras and each voxel j. The magnitude of the update is determined by the ratio of the measured pixel intensity I to the projection of the current object:

E(Xj, Yj, Zj)^(k+1) = E(Xj, Yj, Zj)^k [ I(xi, yi) / Σj∈Ni wij E(Xj, Yj, Zj)^k ]^(μwij)  (11.7)

where μ ≤ 1 is a scalar relaxation parameter (usually set to 1 for the fastest convergence). The iterative process converges relatively rapidly, after four to five iterations, without significant loss of accuracy. This is particularly important since the reconstruction procedure is computationally intensive.
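A minimal sketch of the MART update of Equation 11.7, on a toy system with synthetic weights and intensities (all inputs are illustrative placeholders):

```python
import numpy as np

# MART update (Equation 11.7) on a toy problem: a flattened voxel array E,
# a set of pixel "rays" with weights w_ij (rows of W), and recorded pixel
# intensities I synthesized from a sparse ground-truth particle field.
rng = np.random.default_rng(0)
n_rays, n_vox = 200, 400
W = rng.random((n_rays, n_vox)) * (rng.random((n_rays, n_vox)) < 0.02)
E_true = rng.random(n_vox) * (rng.random(n_vox) < 0.05)   # sparse "particles"
I = W @ E_true                                            # recorded intensities

mu = 1.0                     # relaxation parameter
E = np.ones(n_vox)           # uniform, nonzero initial guess
for it in range(5):          # four to five iterations usually suffice
    for i in range(n_rays):
        proj = W[i] @ E                       # projection of current object
        if proj <= 0:
            continue
        # multiplicative update of every voxel seen by ray i; voxels with
        # w_ij = 0 get exponent 0, i.e., they are left untouched
        E *= (I[i] / proj) ** (mu * W[i])
```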

Ghost particles  Among the sources of error in tomographic reconstruction, the most severe is represented by the presence of the so-called "ghost particles," which are also a problem in 3D PTV [12]. Ghost particles are spurious intensity blobs erroneously reconstructed at the intersection of lines of sight corresponding to nonzero intensity on the camera pixels, which appear in the reconstructed objects together with the actual tracers (Figure 11.18). Ghost pairs

Figure 11.17 Representation of the imaging model used for tomographic reconstruction. The image plane is shown as a line of pixel elements, and the measurement volume is a 2D array of voxels. The gray level indicates the value of the weighting coefficient (wij) in each of the voxels with respect to the pixel I(xi, yi). (With kind permission from Springer Science+Business Media: Particle Image Velocimetry, Tomographic 3D-PIV and applications, 2008, 103, Elsinga, G.E., Wieneke, B., Scarano, F., and Schröder, A.)


Figure 11.18 Formation of ghost particles in a two-camera setup arising from the crossing of multiple lines of sight (LOS). (From Elsinga, G.E. et al., Exp. Fluids, 50(4), 825, 2011. Copyright IOP Publishing.)

can lead to spurious contributions to the cross-correlation map and therefore to bias errors in the velocity. Their number mainly depends on the number of simultaneous views N, the seeding density ppp, and the thickness of the illuminated volume lz (in voxel units). The number of ghost particles Ng can be estimated from the fact that, in a two-camera system, the ghost particles will be located along the projection of the line of sight of one camera onto the sensor of the second camera (Figure 11.3). In the case of a pixel-to-voxel ratio of 1 (which is reasonable
since usually lx, ly ≫ lz), this projection has an area ~ dτ* lz. Therefore, for each imaged particle, the number of candidates to be a ghost particle is ppp × dτ* × lz, and the total number of ghost particles in a two-camera system can be estimated as [40]

Ng,N=2 = Np × ppp × dτ* × lz  (11.8)

For an N-camera system, one can model the generation of ghost particles as a random Poisson process and obtain [41]

Ng = Ng,N=2 (1 − e^(−Ns))^(N−2)  (11.9)

Note that in the previous analysis we have not considered the possibility of overlapping particles, which can in fact happen for relatively high source densities (Ns ~ 0.15 and above).
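Equations 11.8 and 11.9 can be evaluated for assumed imaging conditions to appreciate the benefit of additional cameras:

```python
import math

# Ghost-particle estimates (Equations 11.8 and 11.9) for assumed conditions.
ppp = 0.05        # seeding density [particles per pixel]
d_tau = 2.5       # particle image diameter [pixels]
l_z = 400         # illuminated thickness [voxels] (assumed)
N_p = 50_000      # true particles in a 1 MP image
N_s = ppp * math.pi * d_tau ** 2 / 4.0          # source density (Eq. 11.5)

N_g2 = N_p * ppp * d_tau * l_z                  # two-camera count (Eq. 11.8)
for N_cam in (2, 3, 4, 5):
    N_g = N_g2 * (1.0 - math.exp(-N_s)) ** (N_cam - 2)   # Eq. 11.9
    print(f"{N_cam} cameras: ~{N_g:,.0f} ghost particles")
# Each added camera suppresses ghosts by the factor (1 - exp(-Ns)) ~ 0.22 here.
```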
Advanced reconstruction algorithms have been proposed to reduce the number of ghost particles. An example is the motion tracking enhancement (MTE [37]), which uses two or more successive sets of projections in order to enhance the accuracy of the reconstruction. Inspired by computed tomography, this method treats the particle field as a solid object (although slightly deformed by the fluid motion) and the successive recordings as separate projections of the same object. If an approximate, coarse-grained predictor of the motion exists, the particle field reconstructed from a given recording can be deformed back to the time of the previous recording, which will lead real particles to superimpose. Ghost particles, on the other hand, being poorly correlated with the flow motion, will be singled out (Figure 11.19).
Unfortunately, ghost particles often appear in successive exposures of all cameras (ghost pairs; see Figure 11.20), in which case the reconstruction improvement by MTE is limited. Indeed, since such ghost particles are displaced approximately by the average displacement of the real particles from which they are formed, but are located at different physical positions, their effect is to smooth the velocity gradients (especially in the direction of the depth of the imaged volume).

Motion analysis  The velocity field is evaluated by cross-correlation of two reconstructed intensity fields, knowing their time separation Δt. The reconstructed volume is divided into


Figure 11.19 Principle of the MTE algorithm. (a) Two particles (A and B) imaged from two directions (subscripts 1 and 2 indicate first and second exposure, respectively), producing ghost particles (G). (b) Hypothetical displacement field followed by the particles. (c) Actual particles and ghost particles after the second exposure field is tracked back to the first exposure time, considering the known displacement field. (With kind permission from Springer Science+Business Media: Meas. Sci. Technol., Motion tracking-enhanced MART for tomographic PIV, 21, 2010, 035401, Novara, M., Batenburg, K.J., and Scarano, F.)

Figure 11.20 Formation of ghost pairs. Dark particles and solid lines of sight correspond to the first exposure. Bright particles and dashed lines of sight correspond to the second exposure. (From Elsinga, G.E. et al., Exp. Fluids, 50(4), 825, 2011. Copyright IOP Publishing.)

subdomains, typically of cubic geometry, where the particle displacement is evaluated by means of the cross-correlation:

R(m, n, l) = [ Σi=1..I Σj=1..J Σk=1..K Ea(i, j, k) · Eb(i − m, j − n, k − l) ] / (σa σb)  (11.10)

where
Ea and Eb are the reconstructed intensities within the interrogation volumes at times t and t + Δt, respectively
σa and σb are the standard deviations of the intensity distributions

The dimensions in voxels of the interrogation volume are indicated by I, J, and K, while m, n, and l represent the displacements in the 3D shift space. A multigrid volume deformation approach is utilized [43], based on the multigrid window deformation approach [44]. While for planar PIV a minimum of approximately 10 particles inside the interrogation window is needed to obtain a robust correlation peak (see Chapter 10), in Tomo-PIV this number can be lowered to 4–8 particles, since the out-of-plane motion is virtually eliminated by the volumetric illumination [45]. With respect to the 2D case, the computational cost required to perform the interrogation of 3D objects is orders of magnitude higher. Even though new efficient methods have already demonstrated reductions of about two orders of magnitude in computational cost [41], this is still significantly larger than for planar PIV.
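A minimal sketch of the correlation of Equation 11.10, computed here via FFTs (a common implementation choice, used as an assumption; the displacement is recovered from the correlation peak):

```python
import numpy as np

# Normalized 3D cross-correlation of two interrogation volumes Ea, Eb,
# evaluated through the convolution theorem (circular correlation).
def cross_correlate_3d(Ea, Eb):
    Ea_f = Ea - Ea.mean()
    Eb_f = Eb - Eb.mean()
    R = np.fft.ifftn(np.fft.fftn(Ea_f).conj() * np.fft.fftn(Eb_f)).real
    R /= Ea_f.std() * Eb_f.std() * Ea.size        # normalization
    return np.fft.fftshift(R)                     # zero shift at the center

rng = np.random.default_rng(1)
Ea = rng.random((32, 32, 32))                     # interrogation volume at t
Eb = np.roll(Ea, shift=(3, -2, 1), axis=(0, 1, 2))  # displaced copy at t + dt

R = cross_correlate_3d(Ea, Eb)
peak = np.unravel_index(np.argmax(R), R.shape)
displacement = np.array(peak) - np.array(R.shape) // 2
print(displacement)    # -> [ 3 -2  1] voxels
```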

Applications  Most applications to date have been in turbulent shear flows. Indeed, the investigation of turbulent flows benefits the most from the ability of Tomo-PIV to resolve instantaneous 3D fluid motions (Figure 11.21). The wake of a circular cylinder was the first case on which the technique was demonstrated [32]. Many investigations have concentrated on the instantaneous organization of turbulent boundary layers. Time-resolved studies could follow hairpin vortices in a turbulent boundary layer and elucidate aspects of their evolution and characteristic lifetime [46], and highlight the 3D pattern of turbulent jet transition [47]. Moreover, the time-resolved measurement of the velocity field enables the simultaneous determination of the velocity and pressure spatial distributions [48], making it possible to estimate sound sources related to the turbulent flow [49]. As an example of applications in turbulent flows, we summarize in Table 11.1 the main parameters of a time-resolved tomographic PIV experiment in the wake of a trailing edge, where air was used as the working fluid [50].
Beyond studies on single-phase turbulent flows, the variety of applications to date demonstrates the versatility of tomographic PIV. The technique has been used in flow facilities covering velocity ranges from only a few micrometers per second in microfluidic setups [51]

Figure 11.21 (See color insert.) Applications of tomographic PIV to turbulent flows. (a) Wake behind a circular cylinder. (From Scarano, F. and Poelma, C., Exp. Fluids, 47(1), 69, 2009.) (b) Turbulent eddies in a boundary layer. (From Schröder, A. et al., Exp. Fluids, 44(2), 305, 2008.) (c) Coherent structures in a jet flow. (From Violato, D. and Scarano, F., Phys. Fluids, 25(1), 015112, 2013.)

Table 11.1 Main imaging and reconstruction parameters for the time-resolved tomographic PIV experiment of Ghaemi and Scarano [50] of the flow at an airfoil trailing edge

Fluid: Air
Tracer particles: 1 μm fog droplets
Flow velocity: 14 m/s
Reynolds number: 370,000 (based on airfoil chord)
Cameras: Four 12-bit CMOS
Camera image size: 1024 × 1024 pixels
Pixel pitch: 20 μm
Lens focal length: 105 mm
Lens aperture: f# = 16–22
Camera configuration: Linear, 80° total angle
Laser: Nd:YLF
Energy per pulse: 10 mJ (with multi-pass light amplification)
Pulse width: 150 ns
Pulse separation: 42 μs
Measurement volume: 47 mm × 47 mm × 8 mm
Image resolution: 21.8 voxels/mm
Image magnification: 0.44
Seeding concentration: 0.065 ppp, 4 particles/mm³
Interrogation volume: 1.47 mm (32 voxels), isotropic
Overlap: 75%
Vectors per field: 128 × 128 × 22

to supersonic speeds in wind tunnels [52]. Drop coalescence has been investigated by Tomo-PIV by matching the refractive index of the phases [53]. Refractive index matching also allows access to complex internal flows [54] and fluid–structure interactions [55].

Hybrid methods: Toward tomographic 4D PTV  In recent years, the availability of time-resolved particle fields has opened the possibility of adopting hybrid algorithms making use of particle reconstruction and tracking [56] and fluid parcel trajectory reconstruction [57]. Those approaches, however, still rely on the voxel-based representation of 3D objects, which entails a very high computational cost. Recently, a particle-based tracking method named "Shake-the-Box" has been introduced by Schanz et al. [58], which uses the direct knowledge of particle positions in space. The method relies on the principle that a particle whose trajectory is known for a certain number of time steps should not disappear within the measurement volume, and its position should be predictable for the next time step, based on a fit of the previous time steps. Similar to the MART algorithm, it

Figure 11.22 (See color insert.) Particle tracks in a separated flow obtained with the "Shake-the-Box" approach. (From Schanz, D. et al., 'Shake The Box': A highly efficient and accurate Tomographic Particle Tracking Velocimetry [TOMO-PTV] method using prediction of particle positions, PIV13; 10th International Symposium on Particle Image Velocimetry, Delft University of Technology, Delft, the Netherlands, July 1–3, 2013.)

iteratively reconstructs the 3D particle locations by comparing the recorded images with the projections calculated from the particle distribution in the volume. With respect to the voxel-based representation, this method effectively eliminates the ghost particle problem, largely improves the computational time for 3D measurements, and provides accurate particle trajectory reconstruction over long sequences. Moreover, it allows Lagrangian particle tracking at seeding densities comparable to or higher than what is possible in Tomo-PIV (Figure 11.22).

11.7 Magnetic resonance velocimetry

Magnetic resonance velocimetry (MRV) is a noninvasive technique capable of measuring volumetric, 3C mean velocity fields. MRV is based on the phenomenon of nuclear magnetic resonance (NMR) and is an application of magnetic resonance imaging (MRI), which is among the most universally used techniques in clinical imaging. The most commonly utilized velocimetry approach in MRI is the so-called phase contrast, which will be outlined in the following sections. While MRI-based velocity imaging techniques were developed for medical applications, recently the unique advantages of the technique have led to a steadily increasing number of applications in engineering fluid dynamics. A detailed review of fundamentals and applications of MRI velocimetry can be found in Elkins and Alley [59].

Magnetic resonance imaging  MRI generates spatially resolved image reconstructions of physical objects by means of static and gradient magnetic fields and radio frequency (RF) pulses. This is obtained by manipulating the nuclear "spins" exhibited by atoms with an odd number of protons or neutrons. First, the application of a strong constant magnetic field B0 causes the spins to align with it and produces a magnetization M, while precessing at a frequency ω (Figure 11.23a) defined by the Larmor equation:

ω = γB0  (11.11)

where γ is the gyromagnetic ratio, specific to each type of atom. In typical clinical magnets, B0 = 1.5 T, but the need for higher resolution (especially in research studies) has pushed toward 3 and 7 T magnets. When an RF pulse is applied, the spins are tipped away from the direction of B0 (Figure 11.23b), and a time-varying magnetic field B1 and a corresponding rotating transverse magnetization (Mxy) are created. After such excitation, the magnetization returns to the equilibrium state at a rate governed by the time constants T1 (recovery of the longitudinal

Figure 11.23 (a) Conceptual representation of the constant magnetic field producing a net magnetization and precession frequency of the spins. (b) An RF pulse perturbs the magnetization and produces a rotating transverse magnetization.

magnetization Mz) and T2 (decay of the transverse magnetization Mxy). The complex electromagnetic signal s(t) induced by the rotating magnetization vector is then acquired by means of a coil surrounding the object under investigation. Often, the receiving coil is also the source of the RF pulse. A clinical MRI scanner and a view of a flow system inserted in an RF coil are shown in Figure 11.24.
Because of the abundance of hydrogen, 1H protons are typically the excitation targets: in most fluid mechanics applications, the working fluid is indeed water, often doped with contrast agents such as copper sulfate or gadolinium. The gyromagnetic ratio of hydrogen protons is 42.58 MHz/T; therefore, the resonance frequency is approximately 63.9 MHz at 1.5 T and 127.8 MHz at 3 T. The frequency of the RF pulses is typically around 1 kHz. T1 and T2 are in the range of a few seconds for plain water. Doping water with agents such as copper sulfate reduces T1 and T2 to a fraction of a second, which greatly increases the signal intensity. Also, at typical concentrations of 0.1 mol/L, copper sulfate does not appreciably change either the density or the viscosity of water.
The receiver coil detects a signal that contains contributions from all of the precessing spins in the image volume. Taking advantage of the Larmor equation, spatial localization is


Figure 11.24 Views of a 3 T MRI scanner (a) and the test section for flow experiments inserted in the coil before an MRV experiment (b).
achieved by applying magnetic field gradients G (assumed linear) along the three physical axes of the magnet. In this way, spins at different locations contribute to different frequencies of the acquired signal. The field gradients are applied sequentially, and the excitation/acquisition cycle is repeated each time. The time delay between the RF excitation and the signal reception is called "echo time" (TE), and the time between successive acquisitions is called "repetition time" (TR). In pulsed sequences used for MRV of water flow experiments, TE and TR have typical durations of 1–3 and 5–10 ms, respectively.
In MRI, the spatial reconstruction happens in the frequency domain, or k-space, where the spatial frequency k is

k(t) = (γ/2π) ∫₀ᵗ G(t) dt  (11.12)

Using this notation, it can be shown that the spin density distribution ρ and the acquired signal s form a Fourier transform pair, that is,

s(k) = ∫Ω ρ(r) e^(−i2πk·r) dr
ρ(r) = ∫Ω s(k) e^(i2πk·r) dk  (11.13)

where
Ω is the volume domain
r is a position vector in physical space

At each repetition of the sequence, a row of the 3D data matrix s(k) is generated. The data points near the center of k-space contain the low spatial frequency information of the image, while the data points at the periphery of k-space contain the high spatial frequency information (which corresponds to the edge details of the image). An illustrative example is provided in Figure 11.25 for a 2D image. After filling the data matrix in k-space, inverse Fourier transformation returns the volumetric image in physical space on a Cartesian grid.
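A minimal 2D sketch of this Fourier-pair relation, mimicking the filtering experiment of Figure 11.25 (the "object" is a synthetic placeholder):

```python
import numpy as np

# A 2D image (standing in for the spin density) is transformed to k-space,
# the high or low spatial frequencies are masked, and the image is recovered
# by inverse FFT.
rho = np.zeros((128, 128))
rho[40:90, 50:80] = 1.0                      # placeholder "object"

s_k = np.fft.fftshift(np.fft.fft2(rho))      # signal data matrix in k-space

ky, kx = np.indices(s_k.shape)
r = np.hypot(kx - 64, ky - 64)               # distance from the k-space center
low_pass = np.where(r < 10, s_k, 0)          # keep only low frequencies
high_pass = np.where(r >= 10, s_k, 0)        # keep only high frequencies

blurred = np.fft.ifft2(np.fft.ifftshift(low_pass)).real   # smooth interior
edges = np.fft.ifft2(np.fft.ifftshift(high_pass)).real    # contours only
```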
According to the Nyquist criterion, the maximum FOV that can be imaged in each direction is determined by the sample spacing Δk along the corresponding direction of the spatial frequency domain:

Δk = 1/FOV  (11.14)

Similarly, the spatial resolution Δx depends on the highest spatial frequency included in the k-space grid:

kmax = NΔk = 1/Δx  (11.15)
where N is the number of elements taken along a certain spatial direction. For each voxel, the signal is also proportional to the interrogated volume, as it relates directly to the number of spins in each voxel. Over the course of the scan, the signal will increase linearly with the number of repetitions, while the noise will increase as the square root. In summary, this gives a signal-to-noise ratio [59]:

SNR ~ M0 ΔV √(NSA · NX NY NZ Δt)  (11.16)

where
ΔV is the size of the voxel
NSA is the number of times the scan sequence is repeated
NX, NY, and NZ are the numbers of voxels in each spatial direction
Δt is the sampling time of the receiver

Figure 11.25 Example of the k-space representation of a 2D image. (a) The picture on the left is represented by the full spectrum on the right (grayscale levels indicate frequency content). (b) The high-frequency content is removed from k-space, which results in the contours being blurred in the image. (c) The low-frequency content is removed from k-space, leaving only the contours in the image.

Therefore, the SNR increases with the square root of the total time during which the
receiver is turned on. The dependence on the magnetization implies not only dependence
on the target atoms but also dependence on the spin density. In this perspective, liquids
typically exhibit much higher signal than gases. The dependence on both the volume size
and the acquisition time indicates that a trade-off is to be found between signal quality
and spatiotemporal resolution. Depending on the sequence, Δt is typically in the range
of several microseconds (and its inverse, the sampling bandwidth, is therefore in the hun-
dreds of kHz). Consider a typical experiment in which a flow of CuSO4-doped water is imaged in a volume of 5 × 5 × 10 cm3 discretized in isotropic voxels of (0.6 mm)3. For a state-of-the-art 3 T magnet and a high-performance RF coil, the SNR of a single acquisition is about 4–5 but will increase by a factor of 4 if the excitation/acquisition sequence is repeated 16 times. This is typically sufficient to keep the errors in the mean quantities below a few percent.
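The scalings of Equations 11.14 through 11.16 can be checked with a few lines of code. The sketch below is a minimal calculation based on the example quoted above (the receiver sampling time Δt is an assumed value, and only relative SNR is computed, since the proportionality constant of Equation 11.16 is not specified):

```python
import numpy as np

fov = np.array([0.05, 0.05, 0.10])      # field of view in m
dx = 0.6e-3                             # isotropic voxel size in m

dk = 1.0 / fov                          # k-space sampling, Equation 11.14
n_vox = np.round(fov / dx).astype(int)  # voxels per direction, Eq. 11.15

def rel_snr(dv, nsa, nx, ny, nz, dt=10e-6):
    # Relative SNR, Equation 11.16: linear in voxel volume, square root
    # of the number of averages and of the total number of samples.
    return dv * np.sqrt(nsa * nx * ny * nz * dt)

snr_1 = rel_snr(dx**3, 1, *n_vox)
snr_16 = rel_snr(dx**3, 16, *n_vox)
print(n_vox)            # [ 83  83 167]
print(snr_16 / snr_1)   # 4.0: 16 averages quadruple the SNR
```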
From the earlier text, it can be seen how, for a certain SNR, the absolute spatial resolution
will depend on the FOV. For example, if the maximum wavelength of interest is 5 cm, the
spatial resolution (achievable within acceptable limits of accuracy and scan duration) will
be about 0.5 mm. This implies a spatial dynamic range of about 100, comparable to optical
imaging techniques such as PIV. Much higher absolute resolution (down to a few microns) can be obtained in smaller volumes (of order 1 mm3), but those applications pertain mostly to the realm of microfluidics.

Phase-contrast velocimetry

Phase-contrast MRI (PC-MRI [60]) is the standard approach for MRI-based velocimetry. It takes advantage of the sensitivity of the phase of the MR signal to motion. In particular, the movement of an object within a gradient field creates a phase shift that can be measured to provide velocity information. Let us write the position of a moving spin (associated with a parcel of fluid) as a function of time via a Taylor series expansion:

$\mathbf{r}(t) = \mathbf{r}_0 + \mathbf{v}t + \frac{1}{2}\mathbf{a}t^2$  (11.17)

where
r0 is the initial position
v is the velocity
a is the acceleration

Neglecting acceleration and higher-order terms, the phase shift of the signal in the presence of a gradient G is given by
$\varphi(\mathbf{r}, t) = \gamma \int_0^t \mathbf{G}(t')\cdot\mathbf{r}\,\mathrm{d}t' = \mathbf{r}_0 \cdot \gamma\!\int_0^t \mathbf{G}(t')\,\mathrm{d}t' + \mathbf{v} \cdot \gamma\!\int_0^t t'\,\mathbf{G}(t')\,\mathrm{d}t'$  (11.18)

When a bipolar gradient waveform of zero net area is used, the first term in the equation is zero, and the phase shift becomes dependent only on the velocity. The term "phase contrast" comes from subtracting two acquisitions that are identical except for the polarity of the gradient waveform. In order to remove unknown phase shifts at different spatial locations (e.g., due to static magnetic field inhomogeneities), a scan without the bipolar gradient is acquired as a reference. When the two acquisitions are subtracted, the unknown phase shifts cancel out. For sufficient phase accumulation, the measurement of small velocity components requires high magnetic field gradients. Typical values of maximum gradient strength are in the 1–10 G/cm range (1 G = 10−4 T).
As signal amplitudes and phases are resolved as functions of spatial location, the velocity in each voxel can be measured. However, since MRI reconstruction is carried out in Fourier space, neither the shape of the object nor the velocity distribution acquired during a scan is a true "snapshot" of the time-varying fields: depending on the desired resolution, a potentially large number of sequence repetitions are needed to fill k-space and so obtain an image of the flow map in physical space. Moreover, velocity measurements can only be achieved in one direction per sequence execution. Therefore, to obtain a 3C velocity field, four independent measurements are needed: a reference scan and three motion-sensitive scans in the three spatial directions. The acquisition time is proportional to the number of voxels, and for volumetric velocimetry it may last several minutes.
By diminishing the spatial resolution, and even more by reducing the dimensionality of the acquired field (e.g., acquiring a 2D-2C field instead of a 3D-3C field), the temporal resolution can be greatly enhanced. Compressed sensing strategies can also accelerate the process, for example, filling k-space along non-Cartesian trajectories. Following this approach, Tayler et al. [61] recently obtained 2D-1C velocity measurements of the fluid around a rising bubble at 188 frames per second. The most common strategy to obtain pseudo-temporal resolution in periodic flows is the so-called "cine PC-MRI": a trigger or gating signal is sampled along with the velocity data, and the latter is sorted into a number of time bins over the time period. This is similar to phase-locked averaging in classic velocimetry.
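The sorting step behind cine PC-MRI is, at its core, a phase-binning operation. The sketch below is a minimal conceptual illustration (function and variable names are hypothetical; an actual scanner sorts raw k-space lines by the gating signal rather than reconstructed velocities):

```python
import numpy as np

def phase_lock_average(t, v, period, n_bins=20):
    """Sort samples v(t) of a periodic signal into n_bins bins over the
    cycle and return the bin-averaged (phase-locked) waveform."""
    phase = (t % period) / period                        # in [0, 1)
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    return np.array([v[bins == b].mean() for b in range(n_bins)])

# Synthetic test: noisy sinusoidal velocity sampled over ten 3 s cycles.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 30.0, 5000))
v = np.sin(2 * np.pi * t / 3.0) + 0.3 * rng.standard_normal(t.size)
v_phase = phase_lock_average(t, v, period=3.0)  # recovers the sine shape
```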
PC-MRI requires knowledge of the range of expected velocities to be measured. The key parameter in this regard is the velocity encoding value (VENC), defined as the velocity that produces a phase shift of 180°. This value represents the upper limit within which velocities can be measured without aliasing. Since each velocity component is measured independently, different values must be specified for each spatial direction. The gradient waveforms can be adjusted to vary the amount of phase difference resulting from a given velocity and so are used

to set the VENC. The latter is typically chosen to be slightly greater than the largest velocities anticipated in the investigated flow. For unsteady/turbulent flows, the VENC must be chosen accounting for the fluctuations: if the intermittent velocity occasionally exceeded the VENC, the measured phase would include aliased samples, leading to inaccurate flow measurements.
The expected uncertainty in the MRV measurements can be calculated from the formula [62]:

$\delta V = \dfrac{\sqrt{2}\,\mathrm{VENC}}{\pi\,\mathrm{SNR}}$  (11.19)

As the noise increases linearly with the VENC, the latter needs to be kept at a minimum, while still being large enough to avoid aliasing.
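Equation 11.19 makes the VENC trade-off concrete. The sketch below is a minimal helper (it assumes the usual linear phase-to-velocity mapping v = VENC·Δφ/π, with phases wrapped to (−π, π]); halving the VENC halves δV, until aliasing sets in:

```python
import numpy as np

def velocity_from_phase(dphi, venc):
    """Velocity from phase difference (rad); |v| = VENC at dphi = pi.
    Phases wrap into (-pi, pi], so velocities beyond the VENC alias."""
    wrapped = np.angle(np.exp(1j * dphi))
    return venc * wrapped / np.pi

def velocity_uncertainty(venc, snr):
    """One-sigma velocity noise, Equation 11.19."""
    return np.sqrt(2.0) * venc / (np.pi * snr)

venc = 1.0                                    # m/s, just above max expected
print(velocity_uncertainty(venc, snr=20.0))   # ~0.023 m/s
# A 1.2 m/s fluctuation exceeds the VENC and aliases to -0.8 m/s:
print(velocity_from_phase(1.2 * np.pi, venc))
```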
Misregistration errors (or displacement errors) can occur if the spins (i.e., the fluid parcels) move by a nonnegligible amount during TE. For example, assuming TE = 1 ms, a velocity of 0.5 m/s produces a displacement of 0.5 mm, which might be comparable to the voxel size. Unless large spatial velocity gradients are present, the resulting errors on the reconstructed velocity would be modest, but an attentive analysis of this source of error is necessary.
The accuracy tends to decrease in near-wall regions of the flow, due to the possibility that a portion of a voxel contains the solid boundary. In this case, the phase-contrast method will lead to an offset error, which coupled with the local loss of signal can lead to completely spurious results. Therefore, a somewhat conservative threshold for masking the flow volume is advisable.

Turbulence statistics

Because MRV does not provide true snapshots of the flow, in general it is not possible to evaluate turbulence statistics by time-averaging instantaneous realizations of the velocity field. However, properties such as turbulent diffusivity and Reynolds stresses can be estimated thanks to the phenomenon of turbulent dephasing [63,64]. When the flow is turbulent, the randomness of the fluid motion causes phase dispersion and attenuation of the signal magnitude. By assuming simple models for turbulent dispersion (typically consistent with homogeneous isotropic turbulence), a turbulent diffusion coefficient can be calculated by measuring the signal loss between images acquired with and without a bipolar gradient. In particular, it can be shown that if the duration of a lobe of the employed bipolar gradient is short compared to the decay time of the Lagrangian autocorrelation function, then the signal decays exponentially with the turbulence intensity:

$s = s_0\, e^{-g(\tau, G)^2 \sigma_u^2}$  (11.20)

where
s0 is the signal acquired without the bipolar gradient g (which has a duration τ and an
amplitude G)
σu is the RMS of the velocity fluctuation along the x-axis

By applying the bipolar gradient in different directions, this relation can be solved for the Reynolds normal stresses. If bipolar gradients are applied simultaneously along two directions, say, the x- and y-axes, then the acquired signal will decay in a similar manner but as a function of $\sigma_{u+v}^2 = \left\langle (u' + v')^2 \right\rangle / 2$, where ⟨⋯⟩ indicates ensemble averaging. Once the latter is calculated, and if the Reynolds normal stresses have also been evaluated, the Reynolds shear stress in the x–y plane can be obtained from the following relation:

$\dfrac{\left\langle (u' + v')^2 \right\rangle}{2} = \dfrac{\left\langle u'^2 \right\rangle}{2} + \left\langle u'v' \right\rangle + \dfrac{\left\langle v'^2 \right\rangle}{2}$  (11.21)

This approach relies on the Lagrangian integral timescale of the turbulence being longer than the duration of the bipolar gradient. In flows with spatially varying turbulent scales, this assumption is hard to verify at all points in the FOV. This constitutes the main source of inaccuracies in this method of measuring Reynolds stresses.
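Given the attenuation measurements, inverting Equations 11.20 and 11.21 for the stresses is simple algebra. The sketch below is a minimal illustration under the strong assumption that the dephasing factor g(τ, G)² reduces to a single known calibration constant (g2 below is a hypothetical value; for real gradient waveforms it depends on the sequence):

```python
import numpy as np

def variance_from_attenuation(s, s0, g2):
    """Invert Equation 11.20: s = s0 * exp(-g2 * sigma**2)."""
    return -np.log(s / s0) / g2

g2 = 0.05  # (m/s)^-2, hypothetical calibration of the dephasing factor

# Hypothetical signal ratios s/s0 with the bipolar gradient along x,
# along y, and along x and y simultaneously:
uu = variance_from_attenuation(0.90, 1.0, g2)       # <u'u'>
vv = variance_from_attenuation(0.94, 1.0, g2)       # <v'v'>
sig_uv = variance_from_attenuation(0.88, 1.0, g2)   # <(u'+v')^2>/2

# Equation 11.21: <(u'+v')^2>/2 = <u'u'>/2 + <u'v'> + <v'v'>/2
uv = sig_uv - 0.5 * (uu + vv)                       # shear stress <u'v'>
```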

Applications

Although the most common applications of MRV are in the biomedical field (especially in vivo), in the last decade a large number of studies have used MRV to tackle engineering problems. As mentioned earlier, the techniques most often require the use of water as the working fluid. For investigations relevant to aerodynamics, matching the Reynolds number is necessary and sometimes sufficient to mimic all the important features of the flow. This might be hard to achieve, as the diameter of the test section is constrained by the limited size of the MRI scanner. However, Reynolds numbers of order 105 can be obtained (e.g., with velocities of 1 m/s and test-section diameters of 10 cm). An obvious limitation of water flow experiments is the lack of compressibility effects, which play a role at considerable Mach numbers. An additional constraint is the fact that ferromagnetic materials cannot be brought in the vicinity of the magnet, which poses limits on the types of test sections as well as the components allowed in the flow loop. Non-ferromagnetic metals can be used but should be kept outside the region of interest of the test section, as they produce imaging artifacts. The common strategy is to manufacture the parts by stereolithography or 3D printing of resin materials. In recent years, this has become an opportunity rather than a problem: the great progress in additive manufacturing, along with the high data yield of MRV, makes it possible to design and test a large number of configurations in a short time.
Among the recent engineering fluid mechanics applications, configurations relevant to turbine blade cooling have been tackled by Elkins et al. [65], who investigated a serpentine ribbed channel, Benson et al. [66], who studied a slot film cooling configuration for a vane trailing edge, and Coletti et al. [67], who explored the transport of fluid in a discrete-hole film cooling configuration (Figure 11.26). Turbomachinery applications often involve high-speed flow at high Reynolds and Mach numbers, which are not reproducible in water experiments by MRV. However, in the aforementioned blade cooling configurations, the engine Mach number is typically below 0.3, and therefore compressibility effects can be considered negligible. Matching the Reynolds number for film cooling applications is still a challenge, given the limitation on the maximum measurable velocity. However, by scaling up the physical dimensions (and leveraging the higher kinematic viscosity of air with respect to water), fully turbulent regimes can be attained, for which the main features are expected to change weakly with further increase of the Reynolds number. For example, in the film cooling study of Coletti et al. [67], a 6 mm diameter of the film cooling hole and a jet bulk velocity of 0.5 m/s resulted in a jet Reynolds number of ~3000.
The fact that optical access is not needed has stimulated applications in porous media flow, where detailed flow maps within the pores can be obtained (Figure 11.27a), as well as diffusion coefficient distributions [68,69]. Freudenhammer et al. [70] measured the full


FIGURE 11.26 (See color insert.) Applications of MRV to turbomachinery flows. (a) Film cooling flow out of the trailing edge cutback of a turbine airfoil model, showing positive (red) and negative (purple) streamwise velocity isosurfaces. (From Benson, M.J. et al., Exp. Fluids, 51(2), 443, 2011.) (b) Flow downstream of an inclined jet in crossflow relevant to film cooling applications, with streamwise vorticity isosurfaces and velocity contours. (From Coletti, F. et al., Int. J. Heat Fluid Flow, 43, 149, 2013.)


FIGURE 11.27 Application of MRV in porous media and multiphase flows. (a) Detailed flow field within the pores of a metal foam replica. (From Onstad, A.J. et al., Exp. Fluids, 50(6), 1571, 2011.) (b) Liquid flow around a rising bubble. (From Tayler, A.B. et al., Phys. Rev. Lett., 108(26), 264505, 2012.)

velocity field at the intake of an internal combustion engine replica. In the area of multiphase flows, accelerated MRI sequences have made it possible to capture the dynamics of rising bubbles and the turbulent motion they induce in the surrounding water (Figure 11.27b). For the same reason, MRV is suitable for measuring the flow around moving objects, which typically pose challenges in terms of optical access. Recently, Ryan et al. [71] used MRV to study the 3D wake behind a model of a vertical axis wind turbine (VAWT; see Figure 11.28).


FIGURE 11.28 (See color insert.) Application of MRV to the flow around a rotating model of a VAWT [71]. Isosurfaces of positive (red) and negative (blue) streamwise vorticity are shown. Flow is in the positive X-direction.

Table 11.2 Representative values of key parameters of the different volumetric velocimetry techniques considered

Technique          Measurement volume (cm3)   Spatial resolution (mm)   Temporal resolution (ms)
Scanning PIV       1000                       0.5–2                     50–500
Defocusing PIV     300–2000                   2–4                       100–150
3D PTV             5–150                      1–3                       0.1–1
Holographic PIV    0.001–0.1                  0.05–0.1                  0.2–2
Tomographic PIV    10–200                     0.75–2                    0.2–2
MRV                10–4000                    0.3–1                     40–100

The challenge in this case is represented by actuating the moving parts with MRI-compatible materials. The problem was tackled using a geared paddle-wheel system driven by a secondary stream of water, with which they maintained a constant rotational speed of the VAWT model. Although the model was only 50 mm in height (about ~200 times smaller than a full-size turbine), the tip speed ratio (tangential speed of the turbine blade over incoming wind speed) was kept at values relevant to actual wind farms (which typically range between 1 and 2.5).
Finally, even though using gas as the working fluid in MRV measurements presents its challenges due to low signal density, the feasibility of 3D velocity measurements around an airfoil using SF6 has been demonstrated by Newling et al. [72].

11.8 Comparison between volumetric techniques

The sections earlier have hopefully clarified two points: first, that volumetric velocimetry is a relatively recent achievement in experimental fluid dynamics and that continuous advancements are modifying the landscape of what is accessible, and with which accuracy/resolution; and second, that the several available techniques have different merits and limitations, such that no single technique is absolutely preferable or superior to the others. The method of choice should ultimately depend on the flow configuration and the specific quantities of interest.
In this perspective, it is useful to summarize the main features of the approaches described earlier in terms of maximum measurement volume, spatial resolution, and temporal resolution. Table 11.2 lists typical ranges for such quantities. Notice that, for Lagrangian techniques, the values of spatial resolution represent the result of interpolation on a Cartesian grid. These values are only representative of the present state of the art; we do not speculate on the rate at which any of the considered techniques will progress in the future.

Problems

11.1 You set up to investigate the small-scale features in the far field of an axisymmetric turbulent jet. The working fluid is air (kinematic viscosity ν = 1.5 × 10−5 m2/s) and the Reynolds number based on the centerline velocity U0 and the jet half-width r1/2 is Re0 = 6600. The apparatus is designed with an optical access that allows you to perform stereoscopic PIV normal to the jet axis, at a location where the centerline velocity has decayed to 0.77 m/s. You use a double-pulsed high-repetition laser synchronized to two cameras with 2048 × 2048 pixels, which can each store 1000 images during a measurement run. You plan to use a final interrogation window size of 32 × 32 pixels for the PIV measurements. Using the Taylor hypothesis and approximating the convection velocity as the centerline velocity at the measurement station, you want to reconstruct

a pseudo-volumetric flow field with a spatial resolution of 3η in all spatial directions.
Determine the following:
(a) The needed imaging resolution in pixel/mm
(b) The needed acquisition frequency
(c) The size of the reconstructed pseudo-volume
Note: Assume that at the measurement station the mean dissipation of the turbulent kinetic
energy can be calculated as [73]

$\varepsilon = 0.015\, \dfrac{U_0^3}{r_{1/2}}$

11.2 You are planning to study the instantaneous motion of a 3D flow using Tomo-PIV and/or 3D PTV. The illuminated volume of interest has dimensions 80 × 80 × 10 mm3. You can use four cameras, and the combination of optics and sensors allows each of them to image the FOV at a resolution of 20 pixel/mm. The volume is seeded with particles that display an image diameter of 2 pixels. Determine the following:
(a) The percentage of ghost particles expected for ppp = 0.05 (a typical seeding density for Tomo-PIV)
(b) The maximum seeding density required to keep the percentage of ghost particles below 1%
11.3 You want to study the volumetric time-dependent flow features in a model of the human airway tree. You assume standard physiological breathing, that is, a respiration cycle that follows a sinusoidal waveform, with a peak volume flow rate Qmax = 60 L/min and a period T = 3 s. Assume the kinematic viscosity of air is ν = 1.5 × 10−5 m2/s. You are provided with a CAD model of the bronchial tree, which you can manufacture by 3D printing and/or by silicone casting. The diameter of the trachea, which is approximately circular, is DT = 18 mm. Design the pumping system for a volumetric velocimetry experiment in dynamic similarity with the physiological regime, and in particular determine the following:
(a) The oscillation period and maximum flow rate of the pump, assuming you will perform MRV on a life-size model
(b) The oscillation period and maximum flow rate of the pump, assuming you will perform MRV on a model scaled down by a factor of 2
(c) The oscillation period and maximum flow rate of the pump, assuming you will perform PIV or PTV on a life-size model

References

1. Davidson, P. (2015). Turbulence: An Introduction for Scientists and Engineers. Oxford University Press, Oxford, UK.
2. Hori, T. and Sakakibara, J. (2004). High-speed scanning stereoscopic PIV for 3D vorticity measurement in liquids. Measurement Science and Technology, 15(6), 1067–1078.
3. Brücker, C. (1995). Digital-particle-image-velocimetry (DPIV) in a scanning light-sheet: 3D starting flow around a short cylinder. Experiments in Fluids, 19(4), 255–263.
4. Taylor, G. I. (1938). The spectrum of turbulence. Proceedings of the Royal Society of London: Series A, 164, 476–490.
5. Townsend, A. A. (1976). The Structure of Turbulent Shear Flow, 2nd edn. Cambridge University Press, Cambridge, UK.
6. Ganapathisubramani, B., Lakshminarasimhan, K., and Clemens, N. T. (2007). Determination of complete velocity gradient tensor by using cinematographic stereoscopic PIV in a turbulent jet. Experiments in Fluids, 42(6), 923–939.
7. Dennis, D. J. and Nickels, T. B. (2011). Experimental measurement of large-scale three-dimensional structures in a turbulent boundary layer. Part 1. Vortex packets. Journal of Fluid Mechanics, 673, 180–217.
8. Dennis, D. J. and Nickels, T. B. (2008). On the limitations of Taylor's hypothesis in constructing long structures in a turbulent boundary layer. Journal of Fluid Mechanics, 614, 197–206.
9. Del Alamo, J. C. and Jiménez, J. (2009). Estimation of turbulent convection velocities and corrections to Taylor's approximation. Journal of Fluid Mechanics, 640, 5–26.
10. Buxton, O. R. H., de Kat, R., and Ganapathisubramani, B. (2013). The convection of large and intermediate scale fluctuations in a turbulent mixing layer. Physics of Fluids, 25(12), 125105.
11. Kinzel, M., Wolf, M., Holzner, M., Lüthi, B., Tropea, C., and Kinzelbach, W. (2011). Simultaneous two-scale 3D-PTV measurements in turbulence under the influence of system rotation. Experiments in Fluids, 51(1), 75–82.
12. Maas, H. G., Gruen, A., and Papantoniou, D. (1993). Particle tracking velocimetry in three-dimensional flows. Experiments in Fluids, 15(2), 133–146.
13. Ouellette, N. T., Xu, H., and Bodenschatz, E. (2006). A quantitative study of three-dimensional Lagrangian particle tracking algorithms. Experiments in Fluids, 40(2), 301–313.
14. Tsai, R. Y. (1987). A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation, 3(4), 323–344.
15. Voth, G. A., la Porta, A., Crawford, A. M., Alexander, J., and Bodenschatz, E. (2002). Measurement of particle accelerations in fully developed turbulence. Journal of Fluid Mechanics, 469, 121–160.
16. La Porta, A., Voth, G. A., Crawford, A. M., Alexander, J., and Bodenschatz, E. (2001). Fluid particle accelerations in fully developed turbulence. Nature, 409(6823), 1017–1019.
17. Bourgoin, M., Ouellette, N. T., Xu, H., Berg, J., and Bodenschatz, E. (2006). The role of pair dispersion in turbulent flow. Science, 311(5762), 835–838.
18. Willert, C. E. and Gharib, M. (1992). Three-dimensional particle imaging with a single camera. Experiments in Fluids, 12(6), 353–358.
19. Pereira, F., Gharib, M., Dabiri, D., and Modarress, D. (2000). Defocusing digital particle image velocimetry: A 3-component 3-dimensional DPIV measurement technique. Application to bubbly flows. Experiments in Fluids, 29(1), S078–S084.
20. Gharib, M., Pereira, F., Dabiri, D., Hove, J. R., and Modarress, D. (2002). Quantitative flow visualization: Toward a comprehensive flow diagnostic tool. Integrative and Comparative Biology, 42(5), 964–970.
21. Pereira, F., Stüer, H., Graff, E. C., and Gharib, M. (2006). Two-frame 3D particle tracking. Measurement Science and Technology, 17(7), 1680.
22. Troolin, D. R. and Longmire, E. K. (2010). Volumetric velocity measurements of vortex rings from inclined exits. Experiments in Fluids, 48(3), 409–420.
23. Wolf, E., Kähler, C. J., Troolin, D. R., Kykal, C., and Lai, W. (2011). Time-resolved volumetric particle tracking velocimetry of large-scale vortex structures from the reattachment region of a laminar separation bubble to the wake. Experiments in Fluids, 50(4), 977–988.
24. Katz, J. and Sheng, J. (2010). Applications of holography in fluid mechanics and particle dynamics. Annual Review of Fluid Mechanics, 42, 531–555.
25. Sun, H., Benzie, P. W., Burns, N., Hendry, D. C., Player, M. A., and Watson, J. (2008). Underwater digital holography for studies of marine plankton. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, 366(1871), 1789–1806.
26. Hinsch, K. D. (2002). Holographic particle image velocimetry. Measurement Science and Technology, 13(7), R61.
27. Chan, K., Lawrence, B., and Boden, E. (2004). U.S. Patent Application 10/964,092.
28. Meng, H., Pan, G., Pu, Y., and Woodward, S. H. (2004). Holographic particle image velocimetry: From film to digital recording. Measurement Science and Technology, 15(4), 673.
29. Toloui, M. and Hong, J. (2015). High fidelity digital inline holographic method for 3D flow measurements. Optics Express, 23(21), 27159–27173.
30. Talapatra, S. and Katz, J. (2012). Coherent structures in the inner part of a rough-wall channel flow resolved using holographic PIV. Journal of Fluid Mechanics, 711, 161–170.
31. Talapatra, S. and Katz, J. (2013). Three-dimensional velocity measurements in a roughness sublayer using microscopic digital in-line holography and optical index matching. Measurement Science and Technology, 24(2), 024004.
32. Elsinga, G. E., Scarano, F., Wieneke, B., and van Oudheusden, B. W. (2006). Tomographic particle image velocimetry. Experiments in Fluids, 41(6), 933–947.
33. Scarano, F. (2013). Tomographic PIV: Principles and practice. Measurement Science and Technology, 24(1), 012001.
34. Ghaemi, S. and Scarano, F. (2010). Multi-pass light amplification for tomographic particle image velocimetry applications. Measurement Science and Technology, 21(12), 127002.
35. Willert, C., Stasicki, B., Klinner, J., and Moessner, S. (2010). Pulsed operation of high-power light emitting diodes for imaging flow velocimetry. Measurement Science and Technology, 21(7), 075402.
36. Buchmann, N. A., Willert, C. E., and Soria, J. (2012). Pulsed, high-power LED illumination for tomographic particle image velocimetry. Experiments in Fluids, 53(5), 1545–1560.
37. Novara, M., Batenburg, K. J., and Scarano, F. (2010). Motion tracking-enhanced MART for tomographic PIV. Measurement Science and Technology, 21(3), 035401.
38. Wieneke, B. (2008). Volume self-calibration for 3D particle image velocimetry. Experiments in Fluids, 45(4), 549–556.
39. Elsinga, G. E., Wieneke, B., Scarano, F., and Schröder, A. (2008). Tomographic 3D-PIV and applications. In: A. Schroeder and C. E. Willert, eds., Particle Image Velocimetry (pp. 103–125). Springer, Berlin, Germany.
40. Novara, M. (2013). Advances in tomographic PIV. PhD thesis, Delft University of Technology, Delft, the Netherlands.
41. Discetti, S. and Astarita, T. (2014). The detrimental effect of increasing the number of cameras on self-calibration for tomographic PIV. Measurement Science and Technology, 25(8), 084001.
42. Elsinga, G. E., Westerweel, J., Scarano, F., and Novara, M. (2011). On the velocity of ghost particles and the bias errors in tomographic-PIV. Experiments in Fluids, 50(4), 825–838.
43. Scarano, F. and Poelma, C. (2009). Three-dimensional vorticity patterns of cylinder wakes. Experiments in Fluids, 47(1), 69–83.
44. Scarano, F. and Riethmuller, M. L. (2000). Advances in iterative multigrid PIV image processing. Experiments in Fluids, 29(1), S051–S060.
45. Violato, D., Moore, P., and Scarano, F. (2011). Lagrangian and Eulerian pressure field evaluation of rod-airfoil flow from time-resolved tomographic PIV. Experiments in Fluids, 50(4), 1057–1070.
46. Schröder, A., Geisler, R., Elsinga, G. E., Scarano, F., and Dierksheide, U. (2008). Investigation of a turbulent spot and a tripped turbulent boundary layer flow using time-resolved tomographic PIV. Experiments in Fluids, 44(2), 305–316.
47. Violato, D. and Scarano, F. (2013). Three-dimensional vortex analysis and aeroacoustic source characterization of jet core breakdown. Physics of Fluids, 25(1), 015112.
48. De Kat, R. and Van Oudheusden, B. W. (2012). Instantaneous planar pressure determination from PIV in turbulent flow. Experiments in Fluids, 52(5), 1089–1106.
49. Koschatzky, V., Moore, P. D., Westerweel, J., Scarano, F., and Boersma, B. J. (2011). High speed PIV applied to aerodynamic noise investigation. Experiments in Fluids, 50(4), 863–876.
50. Ghaemi, S. and Scarano, F. (2011). Counter-hairpin vortices in the turbulent wake of a sharp trailing edge. Journal of Fluid Mechanics, 689, 317–356.
51. Kim, H., Grosse, S., Elsinga, G. E., and Westerweel, J. (2011). Full 3D-3C velocity measurement inside a liquid immersion droplet. Experiments in Fluids, 51(2), 395–405.
52. Humble, R. A., Elsinga, G. E., Scarano, F., and Van Oudheusden, B. W. (2009). Three-dimensional instantaneous structure of a shock wave/turbulent boundary layer interaction. Journal of Fluid Mechanics, 622, 33–62.
53. Ortiz-Dueñas, C., Kim, J., and Longmire, E. K. (2010). Investigation of liquid–liquid drop coalescence using tomographic PIV. Experiments in Fluids, 49(1), 111–129.
54. Buchmann, N. A., Atkinson, C., Jeremy, M. C., and Soria, J. (2011). Tomographic particle image velocimetry investigation of the flow in a modeled human carotid artery bifurcation. Experiments in Fluids, 50(4), 1131–1151.
55. Jeon, Y. J. and Sung, H. J. (2012). Three-dimensional PIV measurement of flow around an arbitrarily moving body. Experiments in Fluids, 53(4), 1057–1071.
56. Novara, M. and Scarano, F. (2013). A particle-tracking approach for accurate material derivative measurements with tomographic PIV. Experiments in Fluids, 54(8), 1–12.
57. Lynch, K. P. and Scarano, F. (2014). Experimental determination of tomographic PIV accuracy by a 12-camera system. Measurement Science and Technology, 25(8), 084003.
58. Schanz, D., Schröder, A., Gesemann, S., Michaelis, D., and Wieneke, B. (2013). 'Shake The Box': A highly efficient and accurate tomographic particle tracking velocimetry (TOMO-PTV) method using prediction of particle positions. In: PIV13; 10th International Symposium on Particle Image Velocimetry, Delft University of Technology, Delft, the Netherlands, July 1–3, 2013.
59. Elkins, C. J. and Alley, M. T. (2007). Magnetic resonance velocimetry: Applications of magnetic resonance imaging in the measurement of fluid motion. Experiments in Fluids, 43(6), 823–858.
60. Pelc, N. J., Bernstein, M. A., Shimakawa, A., and Glover, G. H. (1991). Encoding strategies for three-direction phase-contrast MR imaging of flow. Journal of Magnetic Resonance Imaging, 1(4), 405–413.
61. Tayler, A. B., Holland, D. J., Sederman, A. J., and Gladden, L. F. (2012). Exploring the origins of turbulence in multiphase flow using compressed sensing MRI. Physical Review Letters, 108(26), 264505.
62. Pelc, N. J., Sommer, F. G., Li, K. C., Brosnan, T. J., Herfkens, R. J., and Enzmann, D. R. (1994). Quantitative magnetic resonance flow imaging. Magnetic Resonance Quarterly, 10(3), 125–147.
63. Gao, J. H. and Gore, J. C. (1991). Turbulent flow effects on NMR imaging: Measurement of turbulent intensity. Medical Physics, 18(5), 1045–1051.
64. Elkins, C. J., Alley, M. T., Saetran, L., and Eaton, J. K. (2009). Three-dimensional magnetic resonance velocimetry measurements of turbulence quantities in complex flow. Experiments in Fluids, 46(2), 285–296.
65. Elkins, C. J., Markl, M., Pelc, N., and Eaton, J. K. (2003). 4D magnetic resonance velocimetry for mean velocity measurements in complex turbulent flows. Experiments in Fluids, 34(4), 494–503.
66. Benson, M. J., Elkins, C. J., and Eaton, J. K. (2011). Measurements of 3D velocity and scalar field for a film-cooled airfoil trailing edge. Experiments in Fluids, 51(2), 443–455.
67. Coletti, F., Benson, M. J., Ling, J., Elkins, C. J., and Eaton, J. K. (2013). Turbulent transport in an inclined jet in crossflow. International Journal of Heat and Fluid Flow, 43, 149–160.
68. Onstad, A. J., Elkins, C. J., Medina, F., Wicker, R. B., and Eaton, J. K. (2011). Full-field measurements of flow through a scaled metal foam replica. Experiments in Fluids, 50(6), 1571–1585.
69. Coletti, F., Muramatsu, K., Schiavazzi, D., Elkins, C. J., and Eaton, J. K. (2014). Fluid flow and scalar transport through porous fins. Physics of Fluids, 26(5), 055104.
70. Freudenhammer, D., Baum, E., Peterson, B., Böhm, B., Jung, B., and Grundmann, S. (2014). Volumetric intake flow measurements of an IC engine using magnetic resonance velocimetry. Experiments in Fluids, 55(5), 1–18.
71. Ryan, K. J., Coletti, F., Elkins, C. J., Dabiri, J. O., and Eaton, J. K. (2016). Three-dimensional flow field around and downstream of a subscale model rotating vertical axis wind turbine. Experiments in Fluids, 57(3), 1–15.
72. Newling, B., Poirier, C. C., Zhi, Y., Rioux, J. A., Coristine, A. J., Roach, D., and Balcom, B. J. (2004). Velocity imaging of highly turbulent gas flow. Physical Review Letters, 93(15), 154503.
73. Panchapakesan, N. R. and Lumley, J. L. (1993). Turbulence measurements in axisymmetric jets of air and helium. Part 1. Air jet. Journal of Fluid Mechanics, 246, 197–223.
SECTION IV
Wall shear and force measurement

CHAPTER TWELVE

Measurement of wall-shear stress

Ricardo Vinuesa and Ramis Örlü

Contents

12.1 Introduction 393
12.2 Floating-element methods 394
12.3 Methods based on velocity profiles 396
    The Clauser chart 397
    Other approaches based on velocity profiles 399
    Momentum thickness gradient 401
    Common problems in velocity profile measurements 401
12.4 Methods based on pressure measurements 401
    Estimation from streamwise pressure gradient 401
    Preston tubes 403
    Stanton tube 405
    Sublayer fence 406
12.5 Heat transfer methods 406
    Hot films 406
    Wall hot wires 407
    Wall pulsed wire 408
12.6 Optical methods 409
    Oil-film interferometry 409
    Laser Doppler technique 420
    Liquid crystal coating techniques 420
    Micro-pillar sensors 421
Acknowledgments 422
Problems 422
References 425

12.1 Introduction

Despite being one of the most relevant quantities to characterize the flow close to a solid boundary, direct and accurate measurements of wall-shear stress have not been available until recently. The extraordinary development of equipment and post-processing techniques over the past two decades has made it possible to measure this quantity accurately. The mean shear stress at the wall τw is defined for Newtonian fluids as

$\tau_w = \mu \left.\dfrac{\partial U}{\partial y}\right|_{y=0},$  (12.1)

where
μ is the fluid dynamic viscosity
y is the wall-normal coordinate
U is the streamwise mean velocity


It is therefore a measure of the tangential force exerted by the incoming flow on the wall, and by integrating over the surface it is possible to determine its impact on the aerodynamic actions on a submerged body. Its interest in the aeronautic industry stems from the fact that the integrated value of τw is the viscous component of the total drag. In commercial airplanes, viscous drag may contribute as much as 50% of the total drag, while this fraction increases up to 90% in the case of submarines, thereby highlighting the importance of accurately measuring this quantity [1]. In addition to this, wall-shear stress (also known as skin friction) plays an important role in fundamental research in the fields of aerodynamics and fluid mechanics. Turbulence quantities and most prominently mean velocity profiles are commonly scaled with the so-called friction velocity $u_\tau = \sqrt{\tau_w/\rho}$ (ρ being the fluid density), and therefore small errors in the determination of the skin friction may lead to wrong conclusions regarding the functional form and asymptotic behavior of the velocity profile at very high Reynolds numbers. It is then possible to draw inaccurate conclusions about the nature of wall-bounded turbulent flows based on unreliable measurements of wall shear. An interesting example of this is discussed in "The Clauser chart" section, which describes the popular Clauser chart method. This technique, which is based on an assumed form of the velocity profile in the so-called overlap region, was widely used for several decades in the wall-turbulence community. In fact, the results obtained with this technique were in some cases used to prove the validity of the initial assumptions [2]. Another case where accurate measurements of skin friction are extremely relevant is complex flows, such as pressure-gradient boundary layers or highly 3D configurations. Future improvements of the Reynolds-averaged Navier–Stokes (RANS) models used in industry to predict complex flows rely on an accurate characterization of these effects.
Throughout this chapter, we will describe the main characteristics of a wide range of experimental techniques for skin-friction determination. Some of them are not widely used in modern industrial and academic studies but are included here for historical reasons. They might also assist the reader in comprehending the classical literature where such techniques have been widely used. The various methods are divided according to the principles on which they are based as follows: floating elements are discussed in Section 12.2; techniques based on velocity profiles are described in Section 12.3; Section 12.4 discusses various methods based on pressure measurements; heat transfer techniques are presented in Section 12.5; and finally, Section 12.6 is devoted to different types of optical methods.
High emphasis is placed on the theoretical description and the operational procedure of the optical technique known as oil-film interferometry (OFI). This method, based on the correlation between the incoming shear and the thinning rate of a thin oil film deposited on the surface to be measured, is nowadays the method of choice for skin-friction measurements. OFI allows us to measure the value and direction of the mean wall-shear stress with a level of accuracy of around ±1%, although it does not provide measurements of the fluctuating component of τw. This fluctuating component needs to be accurately measured for the validation of theoretical developments and turbulence models, among others. A detailed description of a very promising optical technique, known as the micro-pillar shear stress sensor (MPS3), is also provided. Although it is still unable to determine the mean wall shear with a higher level of accuracy than OFI, it is currently the method of choice to measure the fluctuating component.

12.2 Floating-element methods

The floating-element techniques are also called direct methods of skin-friction determination due to the fact that they are based on directly measuring the force exerted by the incoming flow on the surface of a sensor, which is mounted parallel to the wall. This sensor is a force balance placed in a cavity, and the surface is basically a floating element, as shown in Figure 12.1. Knowing the total force measured by the balance and the area of the floating element, it is possible to determine the wall-shear stress. Note that a number of designs have

FIGURE 12.1 Schematic view of a floating-element force balance.

been proposed to determine this force, including correlations with the displacement of the sensing surface and determination of the feedback required to restore the initial position of the sensor. In any case, the cavity must allow some surface displacement, and the necessary gaps around the surface vary with the applied load, which may lead to spurious effects as reported in the review by Winter [3]; see also Reference 4. The position of the floating element is usually measured using a linear variable differential transformer (LVDT), and the force is often measured with a Kelvin current balance or a coil magnet. Some of the first floating-element designs were proposed by Schultz-Grunow [5] in 1940 and Dhawan [6] in 1951 (who performed a comprehensive measurement campaign in laminar, turbulent, subsonic, and supersonic flows), and a more recent device was proposed by Winter and Gaudet [7] in 1970.
As will be discussed throughout this chapter, direct methods of skin-friction determination are preferred to indirect methods, since the latter usually involve a number of simplifying assumptions and additional measurement errors associated with derived quantities, which eventually impact the final measurement. Besides, floating elements have the advantage of being relatively robust and therefore can be used in a wide range of complicated geometries and even during flight tests. However, several drawbacks listed by Winter [3] have limited their applicability over the years: the size of the sensor (which has to be related to the magnitude of the shear under consideration), the effect of the gaps around the element, additional contributions of pressure gradients, heat transfer and accelerations, or the impact on the overall performance of temperature changes, leaks, surface misalignments, and transients are some of the most significant pitfalls. The effect of small misalignments between the sensor surface and the surface of the aerodynamic model may have an important effect on the measurements [3,4,8], especially at higher Reynolds numbers where the near-wall turbulent scales get progressively smaller. This, however, would not be a problem if the experiments are performed in an atmospheric boundary layer, where, for example, measuring plates with diameters of around 2 m can be used [9]. A positive misalignment is defined as the situation where the sensor surface is above the model (and therefore the incoming stream is facing a step), whereas the opposite scenario would be defined as a negative misalignment (where the incoming flow would actually be encountering a backward-facing step). Due to the interaction between the step and the near-wall flow, positive misalignments lead to higher measured skin-friction values, whereas the opposite effect is observed with negative misalignments; the magnitude of the error is similar in both cases. Also note that the size of the gap plays a role in the magnitude of these errors, and for the same misalignment smaller gaps lead to higher deviations in the measurements.
Another important point to consider when using floating elements is the fact that, as pointed out by Haritonidis [8], they have poor frequency response due to their size and therefore cannot be used to measure the fluctuating component of the shear stress. However, more modern versions of the floating-element sensor based on microelectromechanical systems (MEMS) make it possible to effectively measure these fluctuations, as reported by Naughton and Sheplak [10].
The concept of floating elements is also exploited in towing tank experiments, which are exclusively aimed at accurate skin-friction determination. A towing tank is a water basin with a towing carriage that runs on two rails on both sides. The idea is that the towing carriage tows a model over the surface of the tank, and the power required by this operation yields the resistance experienced by the model. Two good examples of towing tank experiments are the study focused on large-eddy breakup devices (LEBUs) by Sahlin et al. [11] and the more recent work on boundary-layer skin friction by Mori et al. [12]. See also the review by Gad-el-Hak [13]. A recent assessment of the accuracies achievable with this technique has been performed by Baars et al. [14].

12.3 Methods based on velocity profiles

As discussed in Section 12.2, the use of floating elements to measure wall-shear stress has a number of pitfalls that make this approach impractical for experimental campaigns requiring highly accurate determination of skin friction. An alternative to this technique is to use knowledge of some of the most relevant features of wall-bounded turbulent flows combined with velocity measurements to indirectly estimate the wall shear. This method originated from the fact that often additional experiments to measure skin friction are not feasible, or from the necessity of determining the wall shear for an experiment that is already completed. Thus, the use of the mean velocity profile started out from a necessity but soon became a standard technique. The advantage of this approach is the fact that velocity measurements are less problematic than direct measurements of skin friction, although we will discuss a number of issues to keep in mind in this respect. Moreover, this approach exhibits an important problem: the friction velocity uτ is a crucial parameter in the scaling of wall-bounded turbulent flows, and therefore it directly influences any conclusions drawn about their dynamics. Thus, one may end up using some assumptions to find the wall-shear stress and then confirm those very hypotheses by using the value of uτ obtained in this way.
Let us start by providing some background on wall-bounded turbulent flows: according to their classical description [15,16], they are divided into two regions: the inner region is located close to the wall and is scaled by the viscous length ℓ* = ν/uτ (where ν is the fluid kinematic viscosity). On the other hand, convective effects are dominant in the outer region, where the characteristic length scale is usually the boundary-layer thickness δ (or equivalently, for internal flows, the pipe radius R or the channel half-height h). Note that throughout the whole boundary-layer profile the mean velocity is scaled with the friction velocity and the wall-normal location with the viscous length. This is what is called "inner scaling," and the nondimensional velocity and wall-normal coordinate are defined in "wall" or "plus" units as U+ = U/uτ and y+ = yuτ/ν. Note that in the asymptotic limits of both regions, that is, when y+ → ∞ and η → 0 (with η = y/δ being the outer-scaled wall-normal coordinate), both descriptions of the profile are valid in the so-called overlap region. This matching approach was first proposed by Millikan [17] in 1938. Despite some controversy regarding the functional form of the overlap region [18], it is now commonly accepted that this part of the boundary layer is described by the so-called logarithmic law [19,20]:

$U^+ = \dfrac{1}{\kappa}\ln\left(y^+\right) + B,$  (12.2)

where velocity and wall-normal location are expressed in inner scaling, κ is the von Kármán coefficient of wall-bounded turbulent flows, and B is the log-law intercept. The value of κ is also a subject of debate, and while some authors claim that it is flow dependent [21], others claim that it takes the same value in boundary layers, pipes, and channels [20]. In any case, the most accepted values for zero-pressure-gradient (ZPG) turbulent boundary layers (TBLs) are κ = 0.38 and B = 4.17, which is also the value found for the highest Reynolds number direct numerical simulation of turbulent channel flow [22], although it remains to be confirmed through experiments [23]. For turbulent pipe flows, on the other hand, the values are closer to the classical value of 0.40 [21,24]. An example of a TBL profile scaled in inner units, at a Reynolds number based on momentum thickness Reθ ≃ 14,000, is shown in Figure 12.2. Note that if more measurements below y+ ≃ 5 were at hand, it could also be observed that the


FIGURE 12.2 Inner-scaled mean velocity profile of a ZPG TBL at Reθ ≃ 14,000. Dashed lines denote the linear and logarithmic profiles, where κ = 0.38 and B = 4.17 are considered as log-law constants. Solid lines indicate the limits between the various regions of the boundary layer. This profile was measured by Österlund [25] using hot-wire anemometry for the velocity measurements and OFI to determine the wall-shear stress.

data would satisfy U+ = y+. This is the viscous sublayer, which is dominated by viscous forces. The logarithmic region, which is represented by a straight line in a semilogarithmic plot as in Figure 12.2, starts at a fixed position in wall units independent of the Reynolds number and ends at a location that scales in outer units and is Re dependent. Although there is no consensus on the actual values of these limits due to geometry and Reynolds number effects [26,27], a widely used bound for ZPG TBLs is $y^+_{\log,\min} = 200$ and $\eta_{\log,\max} = 0.15$. The region between y+ ≃ 5 and 200 is called the buffer layer, and the wake region extends from the outer edge of the logarithmic layer up to the freestream. Note, however, that there are also views that abstain from the classical fixed inner-scaled lower bound of the logarithmic region and support a Re-dependent one [20].

The Clauser chart

The most widely used method of skin friction "measurement" based on velocity profiles is the Clauser chart [28], which is based on the fact that the mean flow is governed by the log law (Equation 12.2) in the overlap region.* Note that although there is consensus regarding the validity of this statement, as mentioned earlier there is no consensus on the values of κ and B or the limits of the log region. When Clauser proposed his method in 1954, the available range of Reynolds numbers was much lower and the measurement techniques were less accurate than those of today. Based on his own measurements and other data sets compiled at the time, he concluded that the value of κ was 0.41 and the value of B was 4.9, and he found the lower limit of the log region at y+ = 50. Following Clauser [28], the log law (Equation 12.2) can be rewritten as

$U^+ = A \log_{10}\left(y^+\right) + B,$  (12.3)

* The trust in the Clauser chart method achieved such a level of confidence that several studies started to describe it as a measurement technique; see, e.g., Willmarth and Lu [29], who state: "The wall shear stress was measured using the Clauser plot."

where the parameter A is obtained as A = ln(10)/0.41 ≃ 5.6. The idea is to express Equation 12.3 in terms of the skin-friction coefficient Cf, defined as

$C_f = \dfrac{\tau_w}{\frac{1}{2}\rho U_\infty^2},$  (12.4)

where U∞ is the freestream velocity. Note that Equation 12.4 is equivalent to $C_f = 2(u_\tau/U_\infty)^2$, and hence

$U^+ = \dfrac{U}{u_\tau} = \dfrac{U}{U_\infty}\sqrt{\dfrac{2}{C_f}},$  (12.5)

$y^+ = \dfrac{y u_\tau}{\nu} = \dfrac{y U_\infty}{\nu}\sqrt{\dfrac{C_f}{2}}.$  (12.6)

This allows us to write Equation 12.3 as

$\dfrac{U}{U_\infty} = \sqrt{\dfrac{C_f}{2}}\left[A \log_{10}\!\left(\dfrac{y U_\infty}{\nu}\right) + A \log_{10}\!\left(\sqrt{\dfrac{C_f}{2}}\right) + B\right],$  (12.7)

where we recall that A = 5.6 and B = 4.9 in the approach proposed by Clauser. Figure 12.3 shows a family of curves representing Equation 12.7 as a function of Cf. The idea of this method is to plot the experimentally measured U/U∞ versus yU∞/ν curve on top of Figure 12.3 and infer the value of the skin-friction coefficient based on the best agreement for y+ > 50. Note that this is an iterative process, since the inner-scaled wall position is not known a priori.
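The chart-matching step can be automated by treating Cf as the single unknown and minimizing the mismatch between Equation 12.7 and the measured points with y+ > 50. The sketch below is a minimal implementation (arrays y and U of measured wall distances and velocities are assumed; Clauser's constants A = 5.6 and B = 4.9 are retained, with all the caveats discussed in this section):

```python
import numpy as np

A, B = 5.6, 4.9  # Clauser's log-law constants

def clauser_cf(y, U, u_inf, nu, cf_grid=np.linspace(5e-4, 5e-3, 2000)):
    """Return the Cf that best fits Equation 12.7 for points with y+ > 50."""
    best_cf, best_err = None, np.inf
    for cf in cf_grid:
        u_tau = u_inf * np.sqrt(cf / 2.0)
        log_region = y * u_tau / nu > 50.0   # candidate log-region points
        if not log_region.any():
            continue
        model = np.sqrt(cf / 2.0) * (
            A * np.log10(y[log_region] * u_inf / nu)
            + A * np.log10(np.sqrt(cf / 2.0)) + B)
        err = np.mean((U[log_region] / u_inf - model) ** 2)
        if err < best_err:
            best_cf, best_err = cf, err
    return best_cf
```

Note that the set of points satisfying y+ > 50 itself depends on the candidate Cf, which is precisely the iterative character mentioned above; scanning a grid of Cf values sidesteps the explicit iteration.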
As pointed out by Tavoularis [30], there is some subjectivity involved in the use of the
chart, and as mentioned earlier, there is no consensus regarding the limits of the log region. To
alleviate this subjectivity, he proposes to rewrite Equation 12.3 as

$\log_{10}\!\left(\dfrac{y U}{\nu}\right) = \dfrac{1}{A}\left(\dfrac{U}{u_\tau} - B\right) + \log_{10}\!\left(\dfrac{U}{u_\tau}\right),$  (12.8)


FIGURE 12.3 Clauser chart representing U/U∞ versus yU∞/ν curves based on Equation 12.7. The Cf values under consideration range from (bottom) 5 × 10−4 to (top) 5 × 10−3, in 5 × 10−4 intervals. The lower limit of the log region y+ = 50 is indicated through the dashed line, and the following constants were considered: A = 5.6 and B = 4.9.

where the properties of logarithms have been used in the derivation. Two values of the pair (y, U) within the log region can be used to determine uτ from Equation 12.8, although it is important to note that this equation is implicit, and therefore either numerical [30] or graphical [31] approaches are required to solve it. Other alternatives are proposed by Tavoularis [30] to ensure the quality and robustness of the fit, in order to determine the wall-shear stress.
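In practice, Equation 12.8 is readily solved for uτ with a standard root finder. The sketch below is one possible numerical approach (the bracketing interval for uτ is an arbitrary assumption and may need adjusting to the flow at hand):

```python
import numpy as np
from scipy.optimize import brentq

A, B = 5.6, 4.9

def u_tau_from_point(y, U, nu):
    """Solve Equation 12.8 for u_tau from one (y, U) pair in the log region."""
    def residual(u_tau):
        return ((U / u_tau - B) / A + np.log10(U / u_tau)
                - np.log10(y * U / nu))
    return brentq(residual, 1e-4, 10.0)  # assumed bracket, in m/s

# As suggested by Tavoularis [30], solving for two (y, U) pairs and
# comparing the resulting u_tau values provides a consistency check.
```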
Note that, in addition to the subjectivity involved in the use of the chart, the most significant pitfall of this approach is the fact that one has to assume the values of κ and B to obtain uτ, and the values of these constants in a number of flow configurations are one of the open questions in fundamental wall-bounded turbulence. This is the reason why direct methods of wall-shear stress determination are preferred [32], as will be discussed in the following. In fact, it is interesting to note that the value of κ is relevant beyond fundamental turbulence research, since the most widely used turbulence models in industrial simulations rely on this parameter. Most of these models assume the value κ = 0.41, which for decades was considered universal, in part due to the extensive use of the Clauser chart for the determination of uτ (which, as discussed earlier, is based on the assumption κ = 0.41!). Recent research shows the potential of improving the performance of these models by adjusting the value of κ according to geometry and pressure gradient [33].

Other approaches based on velocity profiles

Although the Clauser chart has traditionally been the most widespread method to determine uτ from measured velocity profiles, the deduced friction velocity depends not only on the constants used but also on the bounds adopted for the logarithmic region, as demonstrated by Örlü et al. [26]. One way of circumventing this problem is through the employment of functional forms of the mean velocity profile that smoothly join the logarithmic region with the buffer and outer layers. Although most of them also involve the assumption of particular values of the log-law constants, avoiding the fixed bounds may yield better results than using the Clauser chart. In 1956, Van Driest [34] derived the following integral form of the mean velocity profile based on a mixing-length hypothesis:

$U^+ = \displaystyle\int_0^{y^+} \dfrac{2}{1 + \sqrt{1 + 4\kappa^2 y^{+2}\left[1 - \exp\left(-y^+/A\right)\right]^2}}\,\mathrm{d}y^+,$  (12.9)

where κ is the von Kármán coefficient and in this context A and κ determine the log-law intercept B.
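Although Equation 12.9 has no closed form, it is trivial to evaluate numerically. The sketch below is a minimal implementation (κ = 0.41 and a damping constant A = 26 are assumed here as commonly quoted values for the Van Driest profile, not values prescribed by this chapter):

```python
import numpy as np

def van_driest_profile(y_plus_max=1000.0, kappa=0.41, A=26.0, n=200000):
    """Integrate Equation 12.9 with the trapezoidal rule."""
    y = np.linspace(0.0, y_plus_max, n)
    damp = 1.0 - np.exp(-y / A)
    integrand = 2.0 / (1.0 + np.sqrt(1.0 + 4.0 * kappa**2 * y**2 * damp**2))
    u_plus = np.concatenate(([0.0], np.cumsum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(y))))
    return y, u_plus

y_plus, u_plus = van_driest_profile()
# For large y+, u_plus approaches (1/kappa) ln(y+) + B, with B set by
# the chosen kappa and A.
```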
An alternative profile, representing the inner region of the boundary layer (viscous and buffer layers) and asymptoting to the form of the log region, was proposed by Spalding [35] in 1961:

$y^+ = U^+ + \exp(-\kappa B)\left[\exp\left(\kappa U^+\right) - 1 - \kappa U^+ - \dfrac{(\kappa U^+)^2}{2!} - \dfrac{(\kappa U^+)^3}{3!} - \dfrac{(\kappa U^+)^4}{4!}\right],$  (12.10)

where also here the values of κ and B are prescribed. Another profile, in this case valid from the wall to the lower limit of the log region, was proposed by Haritonidis [8]:

$U^+ = \dfrac{1}{\lambda}\arctan\left(\lambda y^+\right) - \dfrac{a}{2\lambda^2}\ln\left(1 + \lambda^2 y^{+2}\right),$  (12.11)

where
a = 1/Reτ
λ is a function of κ, B, and a

The friction Reynolds number Reτ = δuτ/ν is deined in terms of the outer scale δ, which here
denotes either boundary-layer thickness, pipe radius, or channel half-height depending on the
low coniguration. To overcome the limitation of itting within a particular region of the low,
and prescribing particular values of κ and B, Chauhan et al. [36] proposed in 2009 a composite
velocity proile, based on matched asymptotic expansions, which represents the low from
the wall to the boundary-layer edge. The functional form of this composite proile would be
given by

$U^+_{\mathrm{composite}} = U^+_{\mathrm{inner}} + \dfrac{\exp\left[-\ln^2\left(y^+/30\right)\right]}{2.85} + \dfrac{2\Pi}{\kappa}\, W(\eta),$  (12.12)

where
κ is the von Kármán coefficient
Π is the wake parameter (which describes how the wake region of the boundary layer
deviates from the log law and approaches the freestream)
W is the wake function
η = y/δ is the wall-normal coordinate in outer scaling

The inner function $U^+_{\mathrm{inner}}$ was derived for ZPG boundary-layer flows by Musker [37]:

$U^+_{\mathrm{inner}} = \dfrac{1}{\kappa}\ln\left(\dfrac{y^+ - a}{-a}\right) + \dfrac{R^2}{a\,(4\alpha - a)}\left\{ (4\alpha + a)\ln\left(-\dfrac{a}{R}\,\dfrac{\sqrt{(y^+ - \alpha)^2 + \beta^2}}{y^+ - a}\right) + \dfrac{\alpha}{\beta}\,(4\alpha + 5a)\left[\arctan\left(\dfrac{y^+ - \alpha}{\beta}\right) + \arctan\left(\dfrac{\alpha}{\beta}\right)\right] \right\},$  (12.13)

where
α = (−1/κ − a)/2
β = $\sqrt{-2a\alpha - \alpha^2}$
R = $\sqrt{\alpha^2 + \beta^2}$

In this formulation, the parameters κ and a determine the value of B. The wake function W is
given for ZPG boundary layers as

$W_{\exp}(\eta) = \dfrac{1 - \exp\left[-\left(5a_2 + 6a_3 + 7a_4\right)\eta^4/4 + a_2\eta^5 + a_3\eta^6 + a_4\eta^7\right]}{1 - \exp\left[-\left(a_2 + 2a_3 + 3a_4\right)/4\right]} \times \left[1 - \dfrac{1}{2\Pi}\ln(\eta)\right],$  (12.14)

where the empirical constants take the following values: a2 = 132.841, a3 = −166.2041, and a4 = 71.9114. It is important to remark that the equations above are valid for ZPG boundary-layer flows, and extensions to internal flows are described in Reference 21. The composite profile (Equation 12.12) has a total of five fitting parameters: κ, a (both determine B), Π, uτ, and δ. Although in principle one can fit a velocity profile fixing any of them, the complexity of the profile and the fitting process may compromise the accuracy of the shear stress estimation, and therefore it is often convenient to prescribe several parameters. Another use of this formulation is to determine the values of κ and B if an independent measurement of the wall shear is available. An alternative description that takes pressure-gradient effects explicitly into account is described in Reference 38. There are also a number of techniques that utilize only the viscous sublayer (i.e., the linear velocity profile or extended versions of it) [39,40].
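For reference, the inner profile of Equation 12.13 and its auxiliary constants are compact to code. The sketch below is a minimal evaluation (the default values κ = 0.384 and a = −10.3061 are taken as indicative of those associated with the composite profile of Reference 36; with them the function recovers U+ ≈ y+ near the wall and a log law with B ≈ 4.17 far from it):

```python
import numpy as np

def musker_inner(y_plus, kappa=0.384, a=-10.3061):
    """Inner profile of Equation 12.13 (Musker [37])."""
    alpha = (-1.0 / kappa - a) / 2.0
    beta = np.sqrt(-2.0 * a * alpha - alpha**2)
    R = np.sqrt(alpha**2 + beta**2)
    y = np.asarray(y_plus, dtype=float)
    term1 = np.log((y - a) / (-a)) / kappa
    term2 = R**2 / (a * (4.0 * alpha - a)) * (
        (4.0 * alpha + a) * np.log(
            -a / R * np.sqrt((y - alpha)**2 + beta**2) / (y - a))
        + alpha / beta * (4.0 * alpha + 5.0 * a)
        * (np.arctan((y - alpha) / beta) + np.arctan(alpha / beta)))
    return term1 + term2

print(musker_inner(1.0))     # ~1.0: linear viscous sublayer
print(musker_inner(1000.0))  # ~22.2: (1/0.384) ln(1000) + 4.17
```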

Momentum thickness gradient

Another approach, which does not explicitly rely on the measured velocity profiles but is connected to them through integral quantities, is the momentum thickness gradient. The mean wall-shear stress τw can be expressed in terms of the momentum thickness θ by using von Kármán's momentum theorem, which for 2D boundary layers reads

$$\frac{\tau_w}{\rho U_\infty^2} = \frac{d\theta}{dx} + \left(H + 2\right)\frac{\theta}{U_\infty}\frac{dU_\infty}{dx}, \quad (12.15)$$

where
x is the streamwise direction
H is the shape factor, defined as the ratio between the displacement thickness δ* and θ

Note that turbulence terms have been neglected in Equation 12.15, and analyses by Bidwell [41] and Dutton [42] aimed at considering their contributions lead to the following expression:

$$\frac{\tau_w}{\rho U_\infty^2} = \frac{d\theta}{dx} + \left(H + 2\right)\frac{\theta}{U_\infty}\frac{dU_\infty}{dx} - \frac{1}{U_\infty^2}\int_0^{\delta}\frac{\partial}{\partial x}\left(\overline{u^2} - \overline{v^2}\right)dy + \frac{1}{U_\infty^2}\int_0^{\delta}\!\int_0^{y}\frac{\partial^2}{\partial x^2}\left(\overline{uv}\right)dy\,dy - \frac{\delta}{U_\infty^2}\int_0^{\delta}\frac{\partial^2}{\partial x^2}\left(\overline{uv}\right)dy, \quad (12.16)$$

where u and v are the fluctuating components in the streamwise and wall-normal directions, respectively. Note that although it is possible to accurately measure integral quantities and turbulent fluctuations, the terms involving streamwise gradients are very problematic since they require sufficient resolution in the direction of the flow. Therefore, this method is not commonly used nowadays for the determination of skin friction in aerodynamic flows. Two very interesting studies dealing with these methods and their related issues are the ones by Mehdi and White [43] and Mehdi et al. [44].

Common problems in velocity profile measurements

A wide range of techniques are available to measure velocity profiles, and it is important to note that uncertainties in velocity (and probe location) measurements directly impact the estimation of the wall-shear stress. For instance, Pitot tubes have the advantage that they can be placed in contact with the wall for the first measurement point, and therefore probe location is usually not problematic. However, Bailey et al. [45] show that several corrections need to be applied to these measurements to account for the effects of shear (since the incoming flow exhibits a velocity gradient and the front face of the tube is straight), wall proximity (due to the deflection of streamlines caused by the presence of the probe), and turbulence (the pressure readings are affected by the fluctuating component of the velocity, and therefore mean velocity measurements are more problematic in regions of high turbulence intensity, i.e., in the buffer layer). Although hot wires usually give better estimations of the mean velocity, and their good frequency response also allows measurement of the fluctuating components, several effects such as heat transfer to the wall, probe blockage, or wire length require that the first measurement is not taken in contact with the wall. This leads to problems related to the probe position, which can be magnified at high velocities due to probe deflection, highlighting the importance of schemes that correct the probe location [26,46–48].

12.4 Methods based on pressure measurements

Estimation from streamwise pressure gradient

Although wall-shear stress measurements involve several challenges and difficulties, in internal flows of constant cross-sectional area it is possible to exploit the fact that static pressure losses in the streamwise direction are directly related to skin friction. If one considers the flow in a fully developed pipe, a control-volume analysis over the cross-sectional area of diameter D, and a strip of differential thickness dx in the streamwise direction, reveals that
1. The pressure gradient generates a force in the direction of the stream equal to FPG = dP πD²/4.
2. The wall shear acts on the perimeter of the section, in the opposite direction of the stream, with a force equal to FWS = τ̄w πD dx.
Here, we use τ̄w to denote the average wall-shear stress over the perimeter. In the case of pipe flows, the azimuthal symmetry leads to a homogeneous flow throughout the perimeter, but this is not necessarily the case in an arbitrary ducted geometry. By balancing both forces, it is possible to obtain an expression for the wall-shear stress as a function of the streamwise pressure gradient in pipe flows:

$$\bar{\tau}_w = -\frac{D}{4}\frac{dP}{dx}. \quad (12.17)$$
This is in fact the most reliable way of determining the shear stress experimentally, although it is important to remark that the flow must be fully developed and the streamwise pressure gradient dP/dx must be measured accurately; a common approach is to use a number of pressure taps in the streamwise direction and fit a straight line to obtain the slope of this distribution. Doherty et al. [49] report that streamwise distances of around 60D are required to obtain fully developed conditions in turbulent pipe flows. The case of rectangular ducts is different, since in this case the flow is not homogeneous in the spanwise direction z, and as discussed by Monty [50] the relevant scale to nondimensionalize velocity profiles obtained at the centerplane of the duct is the centerplane friction velocity uτ,c, and not the spanwise-averaged value ūτ. Considering a control volume extended to the whole cross-sectional area, as in Figure 12.4a, the wall-shear stress averaged over the perimeter of the duct can be computed as follows:

$$\bar{\tau}_w = -\frac{H}{2}\frac{dP}{dx}\frac{W}{W + H} = -\frac{H}{2}\frac{dP}{dx}\frac{AR}{AR + 1}, \quad (12.18)$$
where
W and H are the duct full width and height, respectively
AR = W/H is the duct aspect ratio

FIGURE 12.4 Control-volume analysis considered to obtain (a) average wall-shear stress τ̄w and (b) centerplane wall-shear stress τw,c in a rectangular duct flow.

Equation 12.18 for ducts is analogous to Equation 12.17 for pipes, with the additional term in the former accounting for perimeter effects in progressively wider ducts, that is, for increasing aspect ratios. As noted earlier, the friction velocity obtained following this procedure is not the appropriate velocity scale for measurements carried out at the duct centerplane, and a method to obtain the local wall-shear stress must be devised. Considering a control volume centered at the duct centerplane of infinitesimally small width WCV, as in Figure 12.4b, one obtains

$$\tau_w = -\frac{H}{2}\frac{dP}{dx}, \quad (12.19)$$

which is equivalent to the expression obtained for pipe flows. Equation 12.19 assumes that there is a region at the centerplane of the duct where the flow is homogeneous in the spanwise direction, a condition that is satisfied only at very high aspect ratios. Recent experiments on duct flows of variable aspect ratio [51] show that the skin friction at the centerplane of the duct should be measured with direct methods like OFI, which is discussed in detail in the "Oil-film interferometry" section, since the pressure-gradient method introduces a dependence on the hydraulic diameter of the section DH = 4A/Ps (where A is the cross-sectional area and Ps is the section perimeter). A good example highlighting the importance of direct methods of skin-friction determination is discussed in Vinuesa et al. [51], who found different skin-friction results when using OFI compared with the ones obtained by means of the streamwise pressure gradient and Equation 12.19. Those experiments were carried out in a variable-aspect-ratio duct-flow facility and show the relation between the error incurred by using Equation 12.19 and the duct aspect ratio. Thus, direct methods of skin-friction determination are preferred, also because a number of deeper theoretical implications are connected to the friction velocity used to scale turbulent velocity profiles.
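As a concrete illustration of Equation 12.17, the following minimal sketch (Python; the function and variable names are hypothetical, not taken from the references) fits a straight line to a set of streamwise pressure-tap readings and converts the slope into wall-shear stress and friction velocity:

```python
import numpy as np

def tau_w_from_taps(x, p, D, rho):
    """Estimate pipe wall-shear stress from static pressure taps.

    x   : streamwise tap positions (m), within the fully developed region
    p   : static pressure readings at the taps (Pa)
    D   : pipe diameter (m)
    rho : fluid density (kg/m^3)
    """
    dPdx = np.polyfit(x, p, 1)[0]      # slope of the linear fit (Pa/m)
    tau_w = -D / 4.0 * dPdx            # Equation 12.17
    u_tau = np.sqrt(tau_w / rho)       # friction velocity
    return tau_w, u_tau

# Example with synthetic readings: dP/dx = -50 Pa/m in a D = 0.1 m pipe
x = np.linspace(0.0, 2.0, 11)
p = 101325.0 - 50.0 * x + np.random.normal(0.0, 0.5, x.size)
print(tau_w_from_taps(x, p, D=0.1, rho=1.2))
```

For a rectangular duct, the same linear fit would be combined with Equation 12.18 (perimeter-averaged value) or Equation 12.19 (centerplane value, subject to the aspect-ratio caveats discussed above).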

Preston tubes

The Preston tube has traditionally been one of the methods of choice for skin-friction measurements, and despite its poor frequency response it provides good estimations of the mean wall-shear stress. As can be observed in Figure 12.5, a Preston tube is basically a Pitot tube of inner and outer diameters di and d placed at the wall, for which a number of correlations exist to relate the wall shear τw with the readings of pressure difference Δp. Preston tubes are simple to use and provide direct measurements of skin friction, although they still rely on prescribed values of κ and B. Following dimensional-analysis arguments, Preston [52] showed that the nondimensional shear stress and pressure difference can be related as

$$\frac{\Delta p\, d^2}{\rho\nu^2} = f\left(\frac{\tau_w d^2}{\rho\nu^2}\right), \quad (12.20)$$

where the function f is determined by means of a calibration process. The most widely used
calibration was proposed by Patel [53] in 1965 and is based on the nondimensional parameters:

$$x^* = \log_{10}\left(\frac{\Delta p\, d^2}{4\rho\nu^2}\right), \quad (12.21)$$

FIGURE 12.5 Schematic representation of a Preston tube.



$$y^* = \log_{10}\left(\frac{\tau_w d^2}{4\rho\nu^2}\right), \quad (12.22)$$

and the functional form of f depends on the values of x* and y* and the inner-scaled Preston-tube diameter d⁺ = duτ/ν. This is due to the fact that, although the Preston-tube diameter is fixed in physical units, the viscous length scale ℓ* = ν/uτ may change with Reynolds number, which means that the same Preston tube may have a d⁺ value within the viscous sublayer at low Re and a much higher inner-scaled diameter extending well beyond the logarithmic region at higher Re. Having said that, Patel [53] proposed three different calibration curves depending on the relative inner-scaled size of the tube with respect to ℓ*:

$$y^* = 0.037 + 0.5x^*, \qquad 0 < x^* < 2.9,\; 0 < y^* < 1.5,\; 0 < d^+ < 11.2, \quad (12.23)$$

$$y^* = 0.8287 - 0.1381x^* + 0.1437x^{*2} - 0.006x^{*3}, \qquad 2.9 < x^* < 5.6,\; 1.5 < y^* < 3.5,\; 11.2 < d^+ < 110, \quad (12.24)$$

$$y^* = -0.9654 + 0.718x^* + 0.0175x^{*2} - 0.0005x^{*3}, \qquad 5.6 < x^* < 7.6,\; 3.5 < y^* < 5.3,\; 110 < d^+ < 1600. \quad (12.25)$$

Also note that the ranges shown in Equations 12.23 through 12.25 include a limitation in x*, which is connected to the range of pressure differences admitted for each interval. As inferred from the respective values of d⁺, relations 12.23 through 12.25 roughly correspond to the viscous, buffer, and log layers, respectively, where reported uncertainties are approximately ±1% in Equation 12.24 and ±1.5% in Equation 12.25 [30]. Zagarola et al. [54] extended the previous calibrations to higher Reynolds numbers in 2001 with the following correlation:

$$y^* = -1.1649 + 0.784x^* + 0.0104x^{*2} - 0.000235x^{*3}, \qquad 6.4 < x^* < 11.3,\; 4.3 < y^* < 8.7,\; 280 < d^+ < 45{,}000. \quad (12.26)$$

Note that Equations 12.23 through 12.26 are valid for wall-bounded turbulent flows. As stated earlier, this technique requires prescribed values of κ and B, which in the case of the original calibration by Preston [52] were 0.42 and 5.8, and in the more recent approach by Patel [53] are 0.42 and 5.45. Keeping in mind that, according to the discussion in Section 12.3, the accepted values for ZPG boundary layers are κ = 0.38 and B = 4.17, the use of the previous calibrations introduces some additional uncertainty in the measured uτ. In addition to this, and due to the fact that the log-law constants change with pressure gradient [21,33,38,55], errors in even very mild pressure-gradient flows rapidly increase beyond 3%. As with Pitot tubes, several corrections aimed at compressibility and turbulence-intensity effects have also been developed [3,56]. Tavoularis [30] provides several practical recommendations on the use of Preston tubes, such as the fact that it is better to introduce the tube through the wall, instead of placing it on the wall as a Pitot tube, to minimize flow disturbance. He also suggests setting up the experiment in such a way that the tube is kept in the inner region of the boundary layer and points out the importance of using static pressure taps of small size, at the same location as the tube tip. On the other hand, he discourages the use of flattened tubes unless a specific calibration is performed. Finally, he suggests an inner–outer diameter ratio of di/d ≃ 0.6, following the configuration used by Patel [53].
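The calibration procedure above lends itself to a direct implementation. The sketch below (Python; the names are illustrative) evaluates Patel's calibration, Equations 12.21 through 12.25, for a single Preston-tube reading and returns the inferred wall-shear stress together with d⁺ for an a posteriori range check:

```python
import numpy as np

def preston_tau_w(dp, d, rho, nu):
    """Wall-shear stress from a Preston-tube reading via Patel's calibration.

    dp : measured pressure difference (Pa); d : outer tube diameter (m)
    rho, nu : fluid density (kg/m^3) and kinematic viscosity (m^2/s)
    """
    x = np.log10(dp * d**2 / (4.0 * rho * nu**2))        # Equation 12.21
    if 0.0 < x < 2.9:                                    # Equation 12.23
        y = 0.037 + 0.5 * x
    elif 2.9 <= x < 5.6:                                 # Equation 12.24
        y = 0.8287 - 0.1381 * x + 0.1437 * x**2 - 0.006 * x**3
    elif 5.6 <= x < 7.6:                                 # Equation 12.25
        y = -0.9654 + 0.718 * x + 0.0175 * x**2 - 0.0005 * x**3
    else:
        raise ValueError("x* outside the range of Patel's calibration")
    tau_w = 4.0 * rho * nu**2 / d**2 * 10.0**y           # invert Eq. 12.22
    d_plus = d * np.sqrt(tau_w / rho) / nu               # consistency check
    return tau_w, d_plus
```

After the call, d⁺ should be verified against the range of the branch that was used; Zagarola's correlation (Equation 12.26) can be added as a fourth branch for higher Reynolds numbers.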

Stanton tube

A Stanton tube is a rectangular tube that is also placed on the wall, although it is smaller than the Preston tube since it is usually submerged in the viscous sublayer of the boundary layer. An important difference between Preston and Stanton tubes is the fact that in the former the pressure drop is the result of fluid deceleration in front of the obstacle, whereas in the latter wall-shear fluctuations also lead to changes in Δp. As a consequence, Stanton tubes have a faster frequency response and are therefore able to measure the fluctuating component of τw (although, as discussed in the "Micro-pillar sensors" section, MPS3 are able to measure this quantity more accurately). A schematic representation of a Stanton tube is shown in Figure 12.6.
The most widely used versions of this device incorporate a razor blade on the windward side instead of a forward-facing step, and it is important to note that the blade should be thinner than the viscous-sublayer height. The nominal dimensions of the Stanton tube are its height and width, H and W, and the height and width of the aperture, h and w. Also note that in Figure 12.6 a recirculation bubble will form upstream of the Stanton tube, and the separated streamline will pass through the top of the tube, at a distance H from the wall. The value of H is really important in the design of the device, and the role of the razor blade is to facilitate this process. Analyses of Stanton-tube performance carried out by Trilling and Häkkinen [57] show three regions of operation depending on the inner-scaled value of H, which is connected to the friction Reynolds number

$$\mathrm{Re}_\tau = \frac{\tau_w \rho H^2}{\mu^2}, \quad (12.27)$$

and it can be shown that Reτ = H⁺². According to Trilling and Häkkinen [57], for Reτ ≪ 1 (which in practice translates to H⁺ values below 0.5), the pressure difference is directly proportional to τw. Although they claim that Reτ values between 10 and 1000 ensure that the Stanton tube lies within the viscous sublayer, with the pressure rising as the shear stress raised to the power of 5/3, Haritonidis [8] shows that in fact this range should be reduced to 10 < Reτ < 100. Finally, for Reτ values larger than 1000, the inner-scaled height is around 30, and the pressure readings are related to velocity measurements within the buffer layer. Two widely used calibrations are the ones by East [58]:

$$y^* = -0.23 + 0.61x^* + 0.0165x^{*2}, \qquad 3 < H^+ < 100, \quad (12.28)$$

FIGURE 12.6 Schematic representation of a Stanton tube. The recirculation bubble is also sketched in the figure. (Adapted from Haritonidis, J.H., The measurement of wall shear stress, in: Gad-el-Hak M, ed., Advances in Fluid Mechanics Measurements, Lecture Notes in Engineering, Springer-Verlag, Berlin, Germany, pp. 229–261, 1989.)

FIGURE 12.7 Schematic representation of a sublayer fence.

and by Bradshaw and Gregory [59]:

$$y^* = -0.0786 + 0.681x^*, \qquad 1.4 < H^+ < 63, \quad (12.29)$$

which relate the nondimensional variables x* and y*, defined as follows:

$$x^* = \log_{10}\left(\frac{\Delta p\, H^{+2}}{\tau_w}\right), \quad (12.30)$$

$$y^* = \log_{10}\left(H^{+2}\right). \quad (12.31)$$

It is interesting to note that even in the ranges where these two calibrations (and others found in the literature) overlap, different values of the ratio Δp/τw are obtained. This is discussed by Haritonidis [8] and is due to the wide range of sizes, geometries, and dimensions defining the operation of Stanton tubes. All the geometrical parameters shown in Figure 12.6 have an impact on the calibration constants of the device, although the parameter with the largest influence is the total height H. Reported uncertainties of calibrated Stanton tubes are around 3%, and although in principle smaller tubes provide more accurate results, they exhibit much longer transient effects that complicate their practical usage. Note that this also applies to measurements of pressure fluctuations, and therefore great care must be taken when choosing the geometry of the Stanton tube in order to obtain consistent measurements of the fluctuating wall shear.
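Since H⁺² = τwρH²/μ², the ratio ΔpH⁺²/τw in Equation 12.30 reduces to ΔpρH²/μ², so x* is known directly from the measurement and Equation 12.31 can be inverted for τw. A minimal sketch applying East's calibration (Equation 12.28), with hypothetical function names, follows:

```python
import numpy as np

def stanton_tau_w(dp, H, rho, mu):
    """Wall-shear stress from a Stanton-tube reading via East's calibration.

    dp : measured pressure difference (Pa); H : total tube height (m)
    rho, mu : fluid density (kg/m^3) and dynamic viscosity (Pa s)
    """
    x = np.log10(dp * rho * H**2 / mu**2)      # Equation 12.30, rewritten
    y = -0.23 + 0.61 * x + 0.0165 * x**2       # Equation 12.28
    H_plus_sq = 10.0**y                        # invert Equation 12.31
    tau_w = H_plus_sq * mu**2 / (rho * H**2)
    if not (3.0 < np.sqrt(H_plus_sq) < 100.0):
        raise ValueError("H+ outside the range of East's calibration")
    return tau_w
```

As noted above, a tube-specific calibration is preferable, since the geometrical parameters of each Stanton tube affect the calibration constants.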

Sublayer fence

As shown in Figure 12.7, the sublayer fence is basically a wall tap with a blade inside it, which splits the flow into two parts, and is specifically designed to remain within the viscous sublayer, therefore exhibiting high accuracy even in strong pressure-gradient flows. Note that the blade must protrude slightly into the flow, and due to its reduced dimensions it is relatively difficult to define the geometry. Therefore, it is recommended to calibrate it against a Preston tube in a well-characterized flow configuration. However, this limited size allows it to remain within the first range of the boundary layer defined by Trilling and Häkkinen [57], and therefore pressure readings and wall-shear stress are related linearly. As reported by Winter [3], this device can also be used with compressible flows, and it is usually more sensitive than Stanton tubes.

12.5 Heat transfer methods

Hot films

FIGURE 12.8 Schematic representation of a hot film and the thermal boundary layer growing over the wall.

Hot films take advantage of the fact that if the growing thermal boundary layer remains within the viscous sublayer (where the inner-scaled velocity profile is linear, i.e., U⁺ = y⁺), and the Prandtl number Pr = ν/α is larger than 1 (note that since α here is the thermal diffusivity of the fluid, Pr > 1 implies that the momentum diffusivity dominates over the thermal one), several correlations hold which relate the heat transfer rate to the fluid and the wall-shear stress. Moreover, under certain conditions even the fluctuating component can be measured. Hot films are basically metallic elements located at the wall, which are heated by means of a constant-temperature anemometer (cf. Chapter 9). A schematic representation of the thermal boundary layer growing over the wall is shown in Figure 12.8; it is also important to note that some of the heat applied to the sensor is dissipated in the substrate and the rest is transferred to the fluid. Hot films are thus only effective with fluids of larger conductivity than the wall material, which are usually water and oil, but not air. Under this assumption, Tavoularis [30] reports that in hot films with thermal boundary layers within the viscous sublayer the heat transfer rate from the sensor to the fluid is roughly proportional to the 1/3 power of the wall-shear stress, and he formulates an empirical relation reminiscent of King's law used in hot-wire anemometry:

$$\frac{E^2}{T_w - T_f} = A + B\,\tau_w^{1/3}, \quad (12.32)$$

where
Tw and Tf are the film and bulk fluid temperatures, respectively
E is the voltage supplied to the sensor
A and B are calibration constants

A common approach to calibrate hot films is to use Preston tubes as a reference, in well-characterized flows. Note that although hot films do not work well near the onset of separation and cannot detect reversed flows, if mounted on a turntable they can identify the orientation of the mean flow in highly 3D flows.
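A hot-film calibration against a reference device thus amounts to a linear fit in τw^(1/3). The following sketch (Python; the helper names are ours, not from Reference 30) fits the constants A and B of Equation 12.32 and then inverts the relation for subsequent readings:

```python
import numpy as np

def fit_hot_film(E, Tw, Tf, tau_ref):
    """Fit the constants A, B of Equation 12.32 against reference tau_w.

    E, Tw, Tf : voltage, film, and fluid temperatures during calibration
    tau_ref   : simultaneous reference wall-shear stress (e.g., Preston tube)
    """
    lhs = E**2 / (Tw - Tf)
    B, A = np.polyfit(tau_ref**(1.0 / 3.0), lhs, 1)   # linear in tau^(1/3)
    return A, B

def hot_film_tau_w(E, Tw, Tf, A, B):
    """Invert Equation 12.32 to obtain tau_w from a voltage reading."""
    return ((E**2 / (Tw - Tf) - A) / B)**3
```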

Wall hot wires

As mentioned in the "Hot films" section, hot films are effective only when the fluid conductivity is larger than that of the wall material and are therefore not recommended for measurements in air. In this situation, it is better to use wall hot wires, which are basically single hot-wire sensors mounted on the wall, also lying within the viscous sublayer. A schematic representation of this sensor is shown in Figure 12.9. Although the hot-wire sensitivity increases with distance from the wall due to the higher velocities, their frequency response remains very good even very close to the wall, and therefore they are a suitable choice for measurements of the fluctuating components of the velocity. Also note that wall hot wires should be calibrated against another device, such as Preston tubes, in well-behaved and properly characterized flow conditions (such as a canonical ZPG boundary layer). Fernholz et al. [60] show that, due to heat-transfer effects to the wall, the mean velocity measured by a hot-wire anemometer calibrated in the freestream Um and the true velocity U in the near-wall region are related as
$$\frac{U_m y}{\nu} = \frac{U y}{\nu} - k_1\left(\frac{U y}{\nu}\right)^{1/2} + k_2, \quad (12.33)$$

where
y is the wall-normal coordinate
k1, k2 are calibration constants

FIGURE 12.9 Schematic representation of a wall hot wire.

Note that Equation 12.33 holds within the viscous sublayer (y+ ≤ 5), and the constants take
the values k1 = 0.55 and k2 = 3.2 when wall hot wires of 5 μm diameter are used with high
thermal conductivity substrates. Following Fernholz et al. [60], it can be shown that for a
wall hot wire located at a distance h from the wall, the actual skin friction τw is related to the
measured value τm as

$$\tau_m = \tau_w - k_1\left(\frac{\rho\nu^2}{h^2}\,\tau_w\right)^{1/2} + k_2\,\frac{\rho\nu^2}{h^2}. \quad (12.34)$$

Also note that the measured value τm in Equation 12.34 is obtained in terms of the voltage
supplied to the sensor E by means of calibration laws similar to Equation 12.32, although with
different exponents depending on the experimental setup.
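Equation 12.34 is implicit in τw, so in practice a root finder is needed. A minimal sketch (Python/SciPy; the function name and the bracketing strategy are our assumptions) could read:

```python
import numpy as np
from scipy.optimize import brentq

def correct_wall_hot_wire(tau_m, h, rho, nu, k1=0.55, k2=3.2):
    """Solve Equation 12.34 for the true wall-shear stress tau_w.

    tau_m : apparent shear stress from the freestream calibration (Pa)
    h     : wire distance from the wall (m); rho, nu : fluid properties
    k1, k2 default to the values quoted for 5-micron wires on
    high-thermal-conductivity substrates.
    """
    c = rho * nu**2 / h**2          # viscous stress scale at the wire
    f = lambda tau_w: tau_w - k1 * np.sqrt(c * tau_w) + k2 * c - tau_m
    # Assumes tau_m > k2*c, which holds for wires at h+ of order one
    # or larger; f then changes sign exactly once in the bracket.
    return brentq(f, 1e-12, 10.0 * (tau_m + c))
```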

Wall pulsed wire

The main difference between wall pulsed wires and the wall hot wires discussed in the "Wall hot wires" section is the fact that here a total of three wires are used, as displayed in Figure 12.10: the central wire is heated with an electric current, and the other two are sensing elements. The measurement procedure is based on applying a very short electric pulse to the wire at the center, which then heats the fluid around it by means of convection. The time that it takes for the other elements to sense this convection (the so-called "time of flight" T) is related to the wall-shear stress. These devices exhibit excellent frequency response and are therefore capable of measuring instantaneous wall-shear stresses in separated and even reversed flows.

FIGUre 12.10 Schematic representation of a wall pulsed wire.



It is also possible to detect flow direction with wall pulsed wires. Castro et al. [61] propose the following correlation between the instantaneous wall-shear stress and the time of flight:

$$\tau_w = A\left(\frac{1}{T}\right) + B\left(\frac{1}{T}\right)^2 + C\left(\frac{1}{T}\right)^3, \quad (12.35)$$

with A, B, and C being calibration constants. Several authors, including Fernholz et al. [60], show that this equation should be time averaged, which also impacts the time of flight registered by the sensor. Combining calibration and time-of-flight measurement errors, this technique shows an overall uncertainty of around 4% in the wall-shear stress determination.
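Calibration of Equation 12.35 reduces to a linear least-squares problem in the powers of 1/T. A minimal sketch under this assumption (Python; hypothetical names):

```python
import numpy as np

def fit_pulsed_wire(T, tau_ref):
    """Fit A, B, C of Equation 12.35 from calibration data (T, tau_ref)."""
    X = np.vstack([1.0 / T, 1.0 / T**2, 1.0 / T**3]).T
    coeffs, *_ = np.linalg.lstsq(X, tau_ref, rcond=None)
    return coeffs                        # A, B, C

def pulsed_wire_tau_w(T, coeffs):
    """Instantaneous tau_w from a measured time of flight T."""
    A, B, C = coeffs
    return A / T + B / T**2 + C / T**3
```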

12.6 Optical methods

Experimental techniques based on nonintrusive optical devices have become the method of choice for skin-friction measurements in a wide range of academic applications. This is due to the fact that they do not rely on assumptions regarding the functional form of the velocity profile or the extent of its various layers, but are instead based on theoretical relationships of a different nature. For instance, the technique known as oil-film interferometry is based on the relation between the thinning rate of an oil droplet deposited on the surface of interest and the shear stress of the incoming flow. Their most significant limitations are the fact that special coatings or additional equipment are usually required and the fact that additional assumptions have to be made regarding the physical phenomena used to indirectly estimate wall shear. Therefore, the price to pay for avoiding assumptions about the flow to be measured is the requirement of modeling another phenomenon instead, which may introduce additional uncertainties.

Oil-film interferometry

OFI is based on the relation between the motion of a thin film of oil and the wall shear of the stream driving it. The initial idea of using thin oil films to measure skin friction is due to Tanner and Blows [62] in 1976, and since the 1990s its use in wall-bounded turbulence research has increased significantly. Tanner and Blows [62] and Squire [63] show that the motion of the oil film is determined by wall-shear stress, surface curvature of the interface between the oil and the driving fluid, surface tension, pressure gradient, and gravity. The shear stress is the dominant effect in the oil-film motion, and although there are studies aimed at incorporating other contributions [64], the most widely used approaches nowadays are based on the method proposed by Tanner and Blows [62] almost 40 years ago.
A schematic representation of the oil film driven by the incoming flow is shown in Figure 12.11, where the film height is denoted by h and is a function of the streamwise location x and time t in this 2D analysis. Wall-shear stress is related to the thinning of the oil,

FIGURE 12.11 Schematic representation of the oil film driven by the incoming flow and the Fizeau interferometry phenomenon.

and the method proposed by Tanner and Blows [62] to measure h(x, t) is based on a phenomenon that can also be observed in Figure 12.11: the formation of Fizeau interferometric fringes. These fringes can be observed if the film is illuminated with a monochromatic source of light, where a very popular approach is the use of sodium lamps (with wavelength λsodium = 589.3 nm). These lamps are robust and inexpensive and solve most of the problems exhibited by the initial approach proposed by Tanner and Blows [62] based on He–Ne lasers (see the review by Naughton and Sheplak [10]). The thickness of the film determines the total path length of the incoming light within the oil film. As shown in Figure 12.11, if the total path length is a multiple of λ (2λ and 3λ for the first and third fringes, respectively), the light ray reflected from the film surface and the one reflected from the bottom wall are in phase, thus producing a constructive interference. In the case of the second fringe, the total path length of the light within the oil film is 2.5λ, so film-reflected and wall-reflected rays are out of phase, leading to a destructive interference and a dark fringe. In fact, a more detailed analysis of the optical relations between film-reflected and wall-reflected rays, as shown in Figure 12.12, reveals that the condition required for a constructive interference to occur is that the difference in path length satisfies the relation ABC − AD = nλ, where n = 1, 2, 3, …. A series of images is obtained with a camera during the experiment, resulting in a stack of images showing the Fizeau interferometric fringes as in Figure 12.13.

FIGURE 12.12 Optical analysis of the interaction between film-reflected and wall-reflected light rays in an OFI experiment.

FIGURE 12.13 (See color insert.) Fizeau interferometric pattern exhibited by an oil film driven by the air flow, where flow is from bottom to top and the time evolution is from left to right.

The motion of an oil film of thickness h(x, t) driven by a fully developed 2D air shear stress τ(x, h, t) is governed by the following equation:

$$\frac{\partial h}{\partial t} + \frac{\tau h}{\mu}\frac{\partial h}{\partial x} = 0, \quad (12.36)$$

where μ is the dynamic viscosity of the oil; this equation neglects the effects of surface tension, pressure gradient, and gravity. Equation 12.36 can be integrated to yield an expression for the oil-film thickness h as a function of the constant shear stress from the air stream τ, the distance from the leading edge of the film x, and the time t as follows:

$$h = \frac{\mu}{\tau}\frac{x}{t - t_0}, \quad (12.37)$$

where t0 is the origin in time. In most applications, t ≫ t0, and thus the time origin can be ignored. Following the optical setup shown in Figure 12.12 and interference-optics relations (see Hecht [65]), the thickness difference between two consecutive fringes Δh can be expressed as

$$\Delta h = \frac{\lambda}{2\sqrt{n_{\mathrm{oil}}^2 - n_{\mathrm{air}}^2\sin^2\alpha}}, \quad (12.38)$$

where
λ is the light wavelength
nair and noil are the respective refractive indices (nair ≃ 1)
α is the angle between the observer and the wall-normal axes

A more detailed discussion of interference optics can be found in Chapter 8. The interferograms produced using this technique can be analyzed in a number of ways to estimate the velocity of the fringes, and thus the wall-shear stress. It is important to note that the various methods available for interferogram analysis require different levels of user interaction (which introduces some subjectivity in the measurement) and rely on different amounts of information from the interferograms.

The XT method  The first of these methods was proposed by Janke [66] and is based on analyzing the position (X) of the fringes as a function of time (T) in order to evaluate the fringe velocity. Let us consider that at the location of a particular fringe k the oil-film thickness is constant and equal to hk, which can be expressed in terms of the film thickness at the location of the first fringe h0 as

$$h_k = h_0 + k\,\Delta h. \quad (12.39)$$

Combining Equations 12.37 and 12.39, it is possible to express the fringe velocity uk = dx/dt as

$$u_k = \frac{\tau h}{\mu} = \frac{\tau}{\mu}\left(h_0 + k\,\Delta h\right). \quad (12.40)$$

The stack of images similar to the one shown in Figure 12.13 is analyzed, and the user needs to manually select an interrogation line normal to the orientation of the fringes. Then it is possible to obtain an X versus T image, which in turn can be used to determine the fringe velocity uk. With this value, and combining Equations 12.38 and 12.40, the wall-shear stress τ can be computed as

$$\tau = \mu u_k\,\frac{2\sqrt{n_{\mathrm{oil}}^2 - n_{\mathrm{air}}^2\sin^2\alpha}}{\lambda\left(k + h_0/\Delta h\right)}. \quad (12.41)$$

Since user interaction is required to determine the interrogation line, the level of error may vary depending on the user's experience. Algorithms for automated peak detection are also available, although in general they do not provide more robust results than those obtained through manual selection. This technique has been successfully used by several research groups, such as Fernholz et al. [60] and Österlund et al. [67].

Local and global wavelength estimation methods  Instead of computing the fringe velocity uk based on the streamwise development of the fringes, it is possible to calculate the wall-shear stress in terms of the mean peak distance of the interferometric pattern. Note that the wavelength λf is calculated using geometrical properties as

$$\lambda_f = \Delta h\left(\frac{\partial h}{\partial x}\right)^{-1}, \quad (12.42)$$

which can be rewritten as follows taking into account the relation 12.37:

$$\lambda_f = \frac{\tau\,\Delta h\, t}{\mu}. \quad (12.43)$$

Rearranging Equation 12.43 and taking the limit for very small t, it can be shown that the local
shear stress τ is computed as

$$\tau = \frac{\mu}{\Delta h}\frac{\partial\lambda_f}{\partial t}, \quad (12.44)$$

and there are several ways of estimating the wavelength λf. Basically, this quantity can be computed locally (by analyzing the spacing between consecutive fringes) or globally (considering several periods). These are the so-called local and global wavelength estimation methods. A degree of subjectivity through user interaction is introduced in the selection of an image strip along the centerline of the interferogram. This strip is averaged in the spanwise direction to reduce noise, resulting in a 1D signal s(x). After this, the signal is high-pass filtered with the aim of removing any trend in the background illumination produced by lighting of the oil film.
The local wavelength estimation method is based on using the derivative of s(x) as an estimation of the peak positions. This estimation is then refined, and the distance between fringes is computed at each peak position. These results are then averaged to yield the wavelength of the signal. Whereas this approach is fast and accurate, it may exhibit false peak detection in particularly noisy signals. An alternative to this is the global wavelength estimation method, which relies on Fourier-transform integrals to maximize the correlation between s(x) and a test function defined as a complex exponential. This approach is also fast and accurate and presents the advantage of being relatively insensitive to noise.
After comparing the results obtained with the various processing techniques, Vinuesa et al. [51] concluded that the global wavelength estimation method is the most reliable and therefore the method of choice. Differences between the local and global methods were found only in moderately noisy signals, and the results were virtually identical when used with signals exhibiting low noise levels. They also claim that the XT method is slightly less reliable since its results are more user dependent. Moreover, Vinuesa et al. [51] also compared the results from the global wavelength estimation method with the ones obtained by means of the Hilbert-transform approach by Chauhan et al. [68] and found any discrepancies to lie within the uncertainty of the method.
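As an illustration of the global approach, the following sketch (Python; a deliberately simplified version of what such a post-processing code might do, not the actual routine of References 51 or 67) estimates λf from the dominant peak of the Fourier spectrum of s(x):

```python
import numpy as np

def fringe_wavelength(s, dx):
    """Global wavelength estimate of a 1D fringe signal s(x).

    Locates the dominant peak of the Fourier spectrum, which maximizes
    the correlation between s(x) and a complex exponential.
    s  : spanwise-averaged, high-pass-filtered intensity signal
    dx : physical spacing between pixels (m)
    """
    s = s - np.mean(s)                       # remove residual mean
    spec = np.abs(np.fft.rfft(s))
    freqs = np.fft.rfftfreq(s.size, d=dx)    # spatial frequencies (1/m)
    k = np.argmax(spec[1:]) + 1              # skip the zero-frequency bin
    return 1.0 / freqs[k]                    # fringe wavelength lambda_f (m)
```

A production implementation would additionally smooth the signal and refine the spectral peak (e.g., by interpolation around the maximum), as discussed in the "Outline of post-processing code" section below.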

Procedure for OFI measurements  After describing the theoretical background of the method, in this section we focus on the more practical aspects of the measurements, since to our knowledge there is a lack of information regarding these practical aspects in the literature. OFI requires the use of a surface with good reflective properties to allow the interferometric

phenomena described in Figures 12.11 through 12.13 to take place. Common materials are glass, nickel, and polished stainless steel, all of them with several advantages and drawbacks. In general, polished stainless steel is a good choice since it exhibits good reflective properties, makes it easy to manufacture models with different geometries, and does not exhibit problems with buildup of static electricity on its surface (unlike glass). A common approach is to use a transparent upper cover of the test section so that the camera is located directly above the oil film. However, in setups where optical access is more difficult it is possible to use mirrors to project the images on a side of the test section, where the camera is used to obtain the images. Also, in flat-plate boundary-layer experiments, it is common to use removable plugs to place the oil drops, which are then assembled flush with the test surface for the measurements. Significant errors in skin-friction determination may be introduced if the plug surface is not completely aligned with the test section: misalignments as small as 0.2 mm may lead to 2% error in the measured inner-scaled freestream velocity U∞⁺ [46].
The necessary steps to carry out OFI measurements are described next.

Oil calibration  A typical oil used in OFI is the Xiameter PMX or Dow Corning silicone fluid, with viscosities ranging from 10 up to 500 cSt (let us recall that 1 cSt = 10⁻⁶ m²/s). The PMX-200 silicone oil (with ν = 200 cSt) provides reliable and repeatable results, although the optimum viscosity to use may change depending on the range of friction velocities. The use of silicone oil is motivated by the fact that its viscosity is much less sensitive to temperature changes than that of other fluids with similar viscosities. It is extremely important to calibrate the oil in order to obtain its viscosity as a function of temperature, since the actual ν may differ by several percent from the one reported by the manufacturer. Note that any error in oil viscosity directly impacts the measured value of wall-shear stress, thus highlighting the importance of this measurement. The oil should be carefully calibrated using a controlled bath and a capillary Ubbelohde-type or other viscosimeter (cf. Viswanath et al. [69] for descriptions of various capillary viscometers), as shown in Figure 12.14. The viscosity curve is usually represented by an exponential:

$$\nu = A\exp\left(\alpha_\nu T\right), \quad (12.45)$$

where
the parameter αν is around −0.02°C−1 for silicone oils at ambient temperature
A is a calibration constant, which depends on the nominal viscosity
T is the temperature

Although in principle it should be possible to obtain accuracies on the order of 0.1% with this kind of viscosimeter, in practice it is difficult to obtain the controlled conditions required for that level of accuracy. Figure 12.15 shows the calibration curve of a 200 cSt oil in two different facilities, which exhibits a repeatability of around ±0.3%. Along the lines of our previous discussion regarding the relevance of an accurate calibration, the relative error in viscosity is ανΔT, which means a 2% difference for each degree of temperature. Note that this directly affects the wall-shear stress measurements, and a 2% error is considerably large, especially as wall-bounded turbulence conclusions are extrapolated to very high Reynolds numbers.
With respect to oil density, its variation with temperature is on the order of −0.06%/°C [51], that is, around 30 times smaller than the change of viscosity. Thus, its impact on wall-shear stress measurements is low, and for practical purposes the oil density is assumed to be constant and equal to the value provided by the manufacturer. Another parameter that does not significantly vary with temperature is the oil refractive index, which also depends only weakly on the viscosity. Oil manufacturers report that the refractive index of a 200 cSt oil may change by approximately 0.14% over a temperature range from 10°C to 50°C, whereas it changes by around 0.2% over the viscosity range from 20 to 1000 cSt.
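Fitting the calibration law of Equation 12.45 is a linear regression in log space. A minimal sketch (Python; the function names are hypothetical):

```python
import numpy as np

def fit_oil_viscosity(T, nu):
    """Fit the exponential viscosity law of Equation 12.45, nu = A*exp(alpha*T).

    T : calibration temperatures (degC); nu : measured viscosities (cSt).
    Returns (A, alpha); alpha is typically around -0.02 per degC for
    silicone oils at ambient temperature.
    """
    alpha, logA = np.polyfit(T, np.log(nu), 1)   # linear fit in log space
    return np.exp(logA), alpha

def oil_viscosity(T, A, alpha):
    """Evaluate the calibrated viscosity (cSt) at temperature T (degC)."""
    return A * np.exp(alpha * T)
```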

Experimental setup  The first step after obtaining a proper characterization of the oil to be used is to calibrate the camera. Before every set of runs, it is necessary to determine

FIGURE 12.14 (a) Ubbelohde viscosimeters and (b) thermal bath used for calibration. Setup used at the Fluid Physics Laboratory, KTH Mechanics, Sweden.

[Plot: oil kinematic viscosity ν (cSt) versus temperature T (°C), comparing calibrations at IIT and EPFL with the curve of Österlund et al. [67] ±1%.]

FIGURE 12.15 Calibration curve of viscosity for silicone oil with nominal ν = 200 cSt at 25°C. (With kind permission from Springer Science+Business Media: Exp. Fluids, New insight into flow development and two dimensionality of turbulent channel flows, 55, 2014, 1759, Vinuesa, R., Bartrons, E., Chiu, D., Dressler, K.M., Rüedi, J.-D., Suzuki, Y., and Nagib, H.M.)

the calibration factor of the camera by means of a plug with a calibration grid, as shown in Figure 12.16. This plug is placed at the location where the oil drops will be tested, and an image is taken with the camera. This image is then magnified to pixel resolution in order to manually select marks of the millimeter grid along the line passing through the symmetry axis of the drop. This calibration process exhibits a repeatability better than 0.1%.

FIGURE 12.16 Plug with grid used to obtain the calibration factor of the camera.

Following the experimental procedure described by Bartrons and Muñoz [70], before the measurements it is necessary to clean the plug (or the surface where the oil drop will be deposited) to prevent the drop from being contaminated by static particles located at the surface. The surface also needs to be deionized in order to avoid electrostatic effects, which eventually lead to distorted fringes in the interferograms. It is also common to use more than one drop per run to increase the accuracy of the measurements, although the ones closer to the center of the plug are usually the most reliable. A total of three or four drops are placed on the upstream side of the plug to allow them to develop under the action of the incoming stream, with sufficient separation between them to avoid mutual interaction. The silicone-oil drops are placed on the plug with a needle. It is important to run the experimental facility for several minutes before the OFI measurements start, for two reasons: first, to allow the flow conditions to stabilize, and second, to let the plate heat up to reach the air temperature. Based on our experience in various facilities of different sizes and velocity ranges, somewhere between 15 and 20 min should be sufficient to achieve both goals. During the OFI runs, images are obtained at constant time intervals that depend on oil viscosity and wall-shear stress and are usually between 1 and 10 s. The total duration of the experiment also depends on the characteristics of the oil and flow case, but runs usually last approximately 15 min. A common configuration includes a digital camera to capture the images, as well as a low-pressure sodium lamp to illuminate the oil film. Vinuesa et al. [51] used a Nikon D5000 SLR camera with an 18–55 mm zoom lens and a Philips SOX-35 lamp. It is also important to monitor the temperature during the experiment in order to ensure stable thermal experimental conditions.

Data processing  After acquiring the images, they are cropped around the area of the oil drops and stored in a computer file. As discussed in the "Local and global wavelength estimation methods" section, the method of choice is the global wavelength estimation technique due to its high accuracy and reliability, and thus the procedure described here is based on that method. The first images of the stack, where the fringes have not yet formed, should be removed. Once the final stack is selected, the images are loaded into a post-processing program, where they are transformed into matrices in which each value represents the light intensity of a pixel in the original image.
Then an adequate interrogation region must be chosen. A good strategy is to superimpose the first image (which is brighter) and the last one, as in Figure 12.17. The combination of both images allows a more objective selection of the interrogation region, valid for the whole run. Note that in this example a total of four oil drops are used on an aluminum plug, which exhibits sufficient reflectivity for proper image processing. The user then selects interrogation regions on the four drops, which are usually located around the symmetry axis of the drop (avoiding its leading edge). If the interferometric pattern is noisy around the symmetry line, it is possible to use another region slightly shifted horizontally from the centerplane of the drop.
FIGURE 12.17 Interferogram corresponding to four oil films and selection of an interrogation region.

But in any case, it is important to select regions perpendicular to the direction of the shear, not affected by the sides of the drops, and with minimum curvature. This also minimizes 3D effects within the drop since, as discussed earlier, the OFI theory assumes 2D oil-film motion. Other aspects to keep in mind when selecting the interrogation region are the avoidance of wiggly regions (affected by electrostatic effects) or areas contaminated by dust particles. The selected interrogation region for each drop is then averaged in the spanwise direction, leading to 1D intensity signals as shown in Figure 12.18. Note that Figure 12.18a shows how the first image, even if it already exhibits interferometric fringes, has not yet developed a sinusoidal pattern as in the later stages of the experiment. Figure 12.18b shows a fully developed state of the oil drop, where the wavelength λf can be clearly determined. It is also useful to allow the flexibility to select the first and last images to be processed in order to obtain λf. An example of this is presented

FIGURE 12.18 Spanwise-averaged intensity signal as a function of streamwise distance for (a) first and (b) last images in the stack.

FIGURE 12.19 Example showing the result of discarding the first 21 images of the stack (a). In this case, the first image to be processed is number 22 (b), and the last image in the stack is shown in panel (c). Spanwise-averaged intensity signals are shown in both cases.

in Figure 12.19, where the first 21 images in the stack are discarded and the corresponding light-intensity signals are shown. These are then used in the wall-shear stress calculation.
After carefully selecting the set of images to be used, the next stage of the process is to compute λf using the global wavelength estimation method. Doing so, a representation of λf as a function of time t can be obtained, as in Figure 12.20. Note that as the film develops the distance between fringes progressively increases, but the relevant parameter here is the thinning rate dλf/dt, which as shown in Figure 12.20 is approximately constant. The idea is to perform a linear fit in the λf versus t plot to obtain the slope dλf/dt, which can then be used together with Equations 12.44 and 12.38 to calculate the wall-shear stress τ. Figure 12.20 also shows the relative error in dλf/dt from the individual images, and the accumulated value of the friction velocity uτ in terms of the number of images considered to determine the oil-film thinning rate. Then the range of images that minimizes the error in dλf/dt is selected, which as a general criterion should be kept below ±1%. Also, the accumulated uτ trend should be smooth and converge to a constant value toward the end of the stack under consideration. This process is repeated for the four drops used in the run, and the average value between them (considering that outliers may have to be discarded, especially for drops exhibiting problematic interferograms) is the measured friction velocity.
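Collecting Equations 12.38 and 12.44, the conversion from the fringe-wavelength history λf(t) to wall-shear stress reduces to a linear fit. The sketch below (Python; the names are illustrative, and nair ≃ 1 is assumed as in the text) summarizes this step:

```python
import numpy as np

def ofi_tau_w(t, lambda_f, mu, n_oil, wavelength, alpha=0.0):
    """Wall-shear stress from the fringe-wavelength history lambda_f(t).

    t          : acquisition times of the images (s)
    lambda_f   : fringe wavelengths from the global estimation method (m)
    mu         : calibrated oil dynamic viscosity (Pa s)
    n_oil      : oil refractive index
    wavelength : light wavelength (m), e.g. 589.3e-9 for a sodium lamp
    alpha      : viewing angle from the wall-normal direction (rad)
    """
    dlf_dt = np.polyfit(t, lambda_f, 1)[0]      # thinning rate dlambda_f/dt
    dh = wavelength / (2.0 * np.sqrt(n_oil**2 - np.sin(alpha)**2))  # Eq. 12.38
    return mu / dh * dlf_dt                     # Equation 12.44
```

In practice the fit range would be restricted to the images that minimize the error in dλf/dt, as described above.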

Outline of post-processing code  In this section, we highlight the most relevant features of a post-processing code to analyze interferograms using the global wavelength estimation method:
1. Calibration factor of the camera.
a. Transform the image to matrix data in the format of the post-processing program.
b. Select a reference distance on the digitized image.
c. Introduce the calibration factor connecting the number of pixels with the actual physical distance Δx.

FIGURE 12.20 Computed fringe wavelength λf as a function of time t, with the range used to obtain the slope dλf/dt highlighted in black (a). Relative error between the dλf/dt obtained from a particular image and the one computed from the whole stack (b). Accumulated value of friction velocity uτ as a function of the number of images considered for its calculation (c). Note that in this particular example the time interval between images was 1 s.

2. Generation of a stack of images to process.
a. Select a data set to process and transform all the images to matrix data in the format of the post-processing program.
b. Use the first and last images of the stack to select a useful region containing the drops to process, and crop all the images.
c. Sort the files and save the cropped stack.
3. Introduce parameters characterizing the OFI run.
a. Optical parameters: camera calibration factor Δx (m/pixel), time step between images Δt (s), light wavelength λ (nm), angle between camera axis and wall-normal direction α (°).
b. Oil properties: oil refraction index noil, oil density ρ (kg/m³), oil calibration parameters A (cSt) and αν (°C⁻¹), oil temperature Toil (°C), and oil kinematic viscosity ν (cSt).
c. Air properties: atmospheric pressure Patm (Pa), air temperature Tair (°C), and freestream velocity U∞ (m/s).
4. Calculate the wall-shear stress using the global wavelength estimation method.
a. Load the configuration file and image stack for the run to post-process.
b. Initialize a graphical user interface (GUI) where the user can select the interrogation region for each of the drops under consideration. A superposition of the first (brighter) and last images of the stack is usually considered in this step.

c. Average the light intensity in the spanwise direction to obtain a 1D intensity signal s(x).
d. Obtain, for each image in the stack, the fringe wavelength λf by maximizing the correlation between s(x) and a complex exponential. A fast Fourier transform (FFT) is combined with signal smoothing to yield the best estimation of λf(t).
e. Show the resulting λf as a function of time t and allow the user to select a different range of images within the stack to minimize the error in the slope dλf/dt.
f. Display the resulting friction velocity uτ as a function of the images selected from the stack for this calculation.

Comparison of OFI with other shear-stress measurement techniques  After having described several widely used methods for skin-friction measurement, here we compare the most relevant features of OFI with those of other techniques. Fernholz et al. [60] performed a comparative study of sublayer fence, wall hot wire, wall pulsed wire, and OFI methods, which is summarized in Table 12.1. An interesting conclusion of this study is the fact that Preston tubes should not be used in reversed flows or downstream of reattachment. In these flows, wall hot wires, wall pulsed wires, and OFI perform well, although sublayer fences show small deviations with respect to the results from the other techniques due to small asymmetries of the probe. The uncertainty of sublayer fences, wall hot wires, and wall pulsed wires was quantified by Fernholz et al. [60] to be around ±4%, whereas they show that OFI exhibits uncertainty values below this level. Interestingly, the uncertainty of OFI has been reduced significantly over the last decades, from the value of around 5% reported by Fernholz and Finley [71] in 1996 to 1%, documented in the recent studies by Rüedi et al. [72] in 2003 and Nagib et al. [32] in 2004. These studies also show that the 1% uncertainty can potentially be reduced to 0.5% with careful experimental procedures and calibration. Vinuesa et al. [51] recently estimated the uncertainty of various OFI quantities in a rectangular duct-flow facility: 0.85% in the wall-shear stress τw, 0.58% in the friction velocity uτ, and 0.87% in the inner-scaled centerline velocity Uc⁺. This highlights the fact that OFI is the most accurate way of measuring the local mean wall-shear stress in turbulent flows.
Some extensions of the standard procedure of the OFI technique described in this section include its application using white light (proposed by Desse [73]), its implementation on aerodynamic models in larger wind tunnels closer to industrial applications (discussed by Driver [74]), and the possibility of using dual- and multi-image analysis, as suggested by Naughton and Hind [75].

Table 12.1 Comparison of features of skin-friction measuring methods

Feature                  Sublayer fence       Wall hot wire       Wall pulsed wire   OFI
Classification           Pressure             Heat transfer       Heat transfer      Optical
Measurement              Pressure difference  Heat transfer rate  Time of flight     Fringe advection
Calibration              Yes                  Yes                 Yes                No
Mean τw                  Yes                  Yes                 Yes                Yes
Temporal resolution      Unclear              10 kHz              20 Hz              No
Spatial resolution Δx    1 mm                 0.5 mm              1.5 mm             1 mm
Spatial resolution Δz    3 mm                 0.5 mm              0.5 mm             1 mm
Direction of τw          Yes                  Yes                 Yes                Yes
APG                      Yes                  Yes                 Yes                Yes
Reverse flow             Yes                  Yes                 Yes                Yes
FPG                      Yes                  Yes                 Yes                Yes
3D flows                 Yes                  Yes                 Yes                Yes

Source: Adapted from Fernholz, H.H. et al., Meas. Sci. Technol., 7, 1396, 1996.

Laser Doppler technique

As sketched in Figure 12.21, in this approach a laser beam is passed through a diffractive lens below the test section, where two narrow gaps are placed. The beam is diffracted by the two gaps, leading to interferometric patterns where the fringes exhibit a spacing d that is proportional to the laser wavelength λ and inversely proportional to the separation between the gaps s. In fact, this technique is an adaptation of the popular laser Doppler velocimetry (LDV) method used for velocity measurements, which is discussed in detail in Chapter 10. The idea is to seed particles into the flow, which will scatter the light from the laser beam at the Doppler frequency fD (see the work by Naqwi and Reynolds [76] and Fourguette et al. [77] for additional reference information). The light is then received with a lens and projected on a photodetector that allows the measurement of fD. In the viscous sublayer the velocity profile is linear, and therefore all the seeded particles scatter light with the same Doppler frequency, a fact that can be used to determine the wall-shear stress as

$$\tau_w = \frac{\mu\lambda}{s}\,f_D, \quad (12.46)$$

where μ is the fluid dynamic viscosity. Although this method exhibits problems associated with seeding and shows limitations due to the behavior of the particles close to the wall, it is able to detect flow reversal, does not require calibration, and shows good frequency response. Thus, it allows accurate measurements of the fluctuating component of the wall-shear stress. Another related optical approach is the use of micro particle image velocimetry (PIV) for velocity and wall-shear stress measurements, as reported by Kähler et al. [78].

Liquid crystal coating techniques

The use of liquid crystal coatings was traditionally aimed at temperature mapping and flow-visualization purposes. However, more recent studies, starting with the work by Reda and Muratore [79] in 1994, have extended their applicability to measurements of skin friction. In this technique, the surface under consideration is covered with a liquid crystal coating and is illuminated with white light in a direction close to the surface normal, forming an angle of around 15° [10]. If no shear stress is present the coating exhibits a red-orange color, whereas an observer positioned upstream of the flow will perceive a tendency toward blue as the wall shear is increased. Interestingly, no color changes will be perceived by an observer located downstream. The shear stress acting on the crystal modifies its physical properties, thus affecting the spectral properties of the light reflected from it. A number of images are acquired with digital cameras from different angles and orientations in order to properly resolve the effect of wall shear on the crystal. Besides, the color change depends not only on the magnitude but also on the orientation of the shear.

FIGURE 12.21 Schematic representation of the experimental setup of a laser Doppler device for wall-shear stress measurements. (Adapted from Tavoularis, S., Measurement in Fluid Mechanics, Cambridge University Press, Cambridge, UK, 2005.)

Although this method still shows a number of challenges in its practical implementation,
there is potential for future developments in terms of accuracy and reliability [80,81].

Micro-pillar sensors

In the previous sections, we discussed several methods for skin-friction measurement, exhibiting different levels of accuracy when determining both the mean and the fluctuating components of the wall-shear stress. We showed that the most accurate way of determining the mean wall-shear stress is through the use of OFI, which combined with the global wavelength estimation method may reach uncertainty levels on the order of ±1%. However, this method is unable to measure the fluctuating wall shear, and the techniques that can (wall hot wire and wall pulsed wire) show much larger error levels, on the order of ±4%. In this section, we present another optical method that is able to accurately measure the fluctuating wall shear: the micro-pillar shear-stress sensor. Introduced by Brücker et al. [82] and Große and Schröder [83], this sensor consists of an array of flexible micro-pillars on the wall of the flow under consideration. The idea behind this technique is to exploit the relation between the deflection of these micro-pillars and the shear stress, a concept inspired by the motion of corn heads under the action of the wind.
A schematic description of this sensor, which is able to measure unsteadiness and the
direction of wall-shear stress in wall-bounded turbulent lows, is shown in Figure 12.22. The
deformation of the pillar, characterized by the displacement of the pillar tip Δt, is connected
with the magnitude of the shear and also with its direction. This is due to the fact that the
cylindrical cross section leads to the same stiffness in all wall-parallel directions, thus leading
to high directional sensitivity. Gnanamanickam et al. [84] showed that assuming the linear
bending theory of a circular beam the tip displacement can be connected with the wall-shear
stress τw as follows:
Δt ≃ (112/9) (τw/E) (Lp/Dp)⁴ Lp,   (12.47)

where
Lp and Dp are the length and diameter of the pillar, respectively
E is its Young’s modulus

Note that the micro-pillar has to be immersed in the viscous sublayer so that it is subjected to a linear velocity profile. This limits the value of Lp to around 60–1000 μm [84]. Although Equation 12.47 gives an indication of the relation between Δt/Lp and τw, micro-pillars have to be calibrated, usually in Couette flows where the velocity profile is linear. Gnanamanickam et al. [84] showed that Δt/Lp is linearly proportional to τw up to Δt/Lp ≃ 0.2, whereas higher tip-displacement values lead to a nonlinear relation. This is why a specific micro-pillar sensor has to be designed and calibrated for the particular flow configuration being tested. Optical systems and high-resolution cameras are used to measure the deflection of the micro-pillars. The tip is usually coated with a distinct color to easily identify its deflection Δt. Whereas shear stresses as low as 10⁻² N/m² can usually be measured with this technique, its spatial resolution is on the order of 5 viscous units.
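To make the static relation concrete, the following Python sketch inverts Equation 12.47 to obtain τw from a measured tip deflection; the pillar dimensions and Young's modulus used here are hypothetical values chosen only for illustration, and the sketch is valid only in the linear regime discussed above.

```python
# Minimal sketch: invert Eq. 12.47 to estimate the wall-shear stress from
# a measured micro-pillar tip deflection (valid only in the linear regime,
# Delta_t/L_p <~ 0.2, with the pillar fully inside the viscous sublayer).
L_p = 300e-6   # pillar length (m) -- hypothetical
D_p = 30e-6    # pillar diameter (m) -- hypothetical (aspect ratio 10)
E   = 1.2e6    # Young's modulus of the pillar material (Pa) -- hypothetical

def tau_w_from_deflection(delta_t):
    """Eq. 12.47 solved for tau_w: Delta_t ~ (112/9)(tau_w/E)(L_p/D_p)^4 L_p."""
    return delta_t * 9.0 * E / (112.0 * (L_p / D_p) ** 4 * L_p)

delta_t = 20e-6  # measured tip deflection (m)
tau_w = tau_w_from_deflection(delta_t)
print(f"tau_w ~ {tau_w:.3f} N/m^2, Delta_t/L_p = {delta_t / L_p:.2f}")
```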


FIGURE 12.22 Schematic representation of an MPS3 (micro-pillar shear-stress sensor).



It is important to note that, due to the multi-scale nature of turbulent flows, the motion of the micro-pillar will be determined by a range of temporal frequencies that may be on the order of kHz for the smallest scales. When used in fluids with high viscosity such as water, the system is overdamped, whereas in air the system exhibits a low-pass filter behavior with a prominent resonance peak. Although Große et al. [85] considered a frequency-dependent added mass to encapsulate the dynamic behavior of the sensor, Gnanamanickam et al. [84] proposed the use of dynamic calibrations. They showed that the dynamic behavior of the micro-pillar can be described by

EI ∂⁴w(y,t)/∂y⁴ + m̃(St) ∂²w(y,t)/∂t² + D̃(St) ∂w(y,t)/∂t = F(y,t),   (12.48)

where
EI is the pillar stiffness
w is the lateral displacement at a particular wall-normal location y at an instant t
St is the Strouhal number
F(y, t) is the excitation, whereas m̃ and D̃ are the reduced mass and damping coefficients, respectively

Note that the Strouhal number is defined as St = f Dp/U∞, where f is the frequency. A dynamic calibration analysis [84] shows that micro-pillars exhibit a roughly constant transfer function at all frequencies below 0.3–0.4 f0, where f0 is the resonance frequency of the pillar. Note that this frequency needs to be determined by considering an aeroelastic problem (due to the added mass associated with the displaced fluid), as confirmed by comparison with experimental boundary-layer data [86].
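As a rough illustration of the dynamic behavior described above, the sketch below approximates the pillar by a single-mode, second-order oscillator (a deliberate simplification of Equation 12.48); the resonance frequency and damping ratio are hypothetical values chosen only to show how the usable flat band sits below the resonance peak.

```python
import numpy as np

# Minimal sketch: amplitude response of a micro-pillar approximated as a
# single-mode, second-order oscillator (a simplification of Eq. 12.48);
# f0 and zeta are hypothetical values for illustration.
f0 = 1.5e3     # resonance frequency (Hz) -- hypothetical
zeta = 0.05    # damping ratio -- hypothetical (underdamped, as in air)

f = np.logspace(1, 4, 400)                       # 10 Hz .. 10 kHz
r = f / f0
H = 1.0 / np.sqrt((1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2)

flat = f[np.abs(H - 1.0) < 0.1]                  # within 10% of the static gain
print(f"~flat response up to {flat.max():.0f} Hz (= {flat.max()/f0:.2f} f0)")
```

For these example values, the response stays within 10% of the static gain up to roughly 0.3 f0, consistent with the 0.3–0.4 f0 band quoted above.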
The MPS3 is a reliable and robust experimental method that provides accurate measurements of the fluctuating component of τw and is the only technique able to measure the spatial correlations of wall-shear stress fluctuations. Although OFI is still the method of choice to accurately determine the mean wall-shear stress, this technique is very promising and will progressively become more widely used as the method and its post-processing techniques develop further in the coming years. For instance, micro-pillars allow the measurement of backflow events, which are difficult to capture with other techniques [87] and are very relevant to the mechanisms of flow separation [88]. Some of the issues to address are the resonance frequency and the need to have the pillar fully submerged in the viscous sublayer, which still limits its applicability at high Reynolds numbers. As discussed by Brücker [89], in such high-Re applications the micro-pillar is too long and can therefore influence the flow near the wall.

Acknowledgments

The authors thank H.M. Nagib, J.-D. Rüedi, E. Bartrons, and M. Muñoz for sharing some of the material related to oil-film interferometry measurements.

Problems

12.1 Derive the thin oil-film equation 12.36 discussed in this chapter. To do so, consider the following steps:
(i) Use a control volume enclosing a thin oil film developing in the streamwise direction. The differential oil film will have lengths dx and dz in the streamwise and spanwise directions and will have an initial thickness h. A convective velocity Uc = (1/h)∫₀ʰ u dy defines the oil-film motion in x. Perform a mass balance in this geometry.

(ii) An oil film with h = 1 μm and ν = 100 cSt has an approximate Reynolds number of 10⁻⁸ [10]. Keeping this in mind, use the x-momentum equation to find an expression for the streamwise velocity u within the oil film.
(iii) Integrate the result obtained in (ii) to compute the convective velocity and find an expression for the thin oil-film equation.
(iv) Note that the equation found in (iii) involves the streamwise pressure gradient, but not surface-tension effects. According to Brown and Naughton [90], the oil-film surface curvature ∂²h/∂x² can be used to calculate the oil-film pressure as

P₀ = P − σ ∂²h/∂x²,   (12.49)

where
P is the aerodynamic pressure
σ is the surface tension

Using this definition, extend the previous thin oil-film equation to incorporate surface-tension effects.
(v) Assess the relative importance of the various terms in the previous equation and the validity of Equation 12.36 discussed in the "Oil-film interferometry" section.
12.2 An MPS3 system is under consideration to accurately determine fluctuating stresses in wind-tunnel experiments on TBLs. The first step is to characterize the static response of the pillar in a well-known flow configuration such as the Couette flow, which exhibits a linear velocity profile. A 200 μm long pillar of aspect ratio Lp/Dp = 10 is calibrated in such a configuration, with the following results:

τw (N/m²)  0.1   0.2   0.3   0.4   0.5   0.6   0.7   0.8
Δt (μm)    8.2   16.4  26.4  35    40.2  46.2  52.4  58.4
τw (N/m²)  0.9   1.0   1.1   1.2   1.3   1.4   1.5   1.6
Δt (μm)    63.4  69.1  75.6  78.4  83.6  87.4  88.4  92.6

(i) Determine Young's modulus of the pillar.
(ii) Obtain the static calibration curve of the sensor.
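As a starting point for this problem, the sketch below loads the calibration data, restricts the fit to the linear regime (Δt/Lp ≲ 0.2, following the discussion in the text), and inverts Equation 12.47 for E; it is only a scaffold, not a complete solution.

```python
import numpy as np

# Starting-point sketch for Problem 12.2: fit the linear regime of the
# static calibration and invert Eq. 12.47 for the Young's modulus E.
L_p, D_p = 200e-6, 20e-6                       # pillar geometry (aspect ratio 10)
tau_w = np.linspace(0.1, 1.6, 16)              # N/m^2
dt = np.array([8.2, 16.4, 26.4, 35.0, 40.2, 46.2, 52.4, 58.4,
               63.4, 69.1, 75.6, 78.4, 83.6, 87.4, 88.4, 92.6]) * 1e-6  # m

lin = dt / L_p <= 0.2                          # keep only the linear regime
slope = np.polyfit(tau_w[lin], dt[lin], 1)[0]  # d(Delta_t)/d(tau_w)

# Eq. 12.47: Delta_t = (112/9)(tau_w/E)(L_p/D_p)^4 L_p  =>  solve for E
E = (112.0 / 9.0) * (L_p / D_p) ** 4 * L_p / slope
print(f"E ~ {E/1e6:.2f} MPa")
```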
12.3 A wind-tunnel experiment is being carried out to measure the inner-scaled velocity profile of a TBL subjected to a zero streamwise pressure gradient. Velocity measurements are performed by means of hot-wire anemometry at a distance x = 9 m from the leading edge of the flat plate where the boundary layer is developing. The freestream velocity is U∞ = 40 m/s and the operating fluid is air. Previous experience with this configuration shows that an approximate Reynolds number based on momentum thickness of Reθ = 30,000 will be obtained. In order to determine the friction velocity, the possibility of using two Preston tubes of different diameters (0.5 and 5 mm, respectively) is considered. With the smaller Preston tube, the measured pressure is 209.13 Pa above the static pressure, whereas with the larger one the reading is 416.98 Pa.
(i) Determine the friction velocity measured by each of the two Preston tubes.
(ii) OFI measurements on the same flow case yield a friction velocity value of 1.3 m/s. Discuss the accuracy of the previous results and the impact of the various factors influencing these particular Preston tube measurements.
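A possible starting point for part (i) is sketched below using Patel's [53] Preston tube calibration. The calibration form quoted in the code is the commonly cited highest range of Patel's fit and should be verified against the original reference; the air properties are assumed values, and whether both readings actually fall in this range should be checked.

```python
import numpy as np
from scipy.optimize import brentq

# Starting-point sketch for Problem 12.3(i) using the commonly quoted
# highest range of Patel's [53] calibration (3.5 < y* < 5.3):
#   x* = y* + 2 log10(1.95 y* + 4.10),
# with x* = log10(dp d^2 / (4 rho nu^2)) and y* = log10(tau_w d^2 / (4 rho nu^2)).
# Verify the coefficients against the original paper; rho, nu are assumed.
rho, nu = 1.2, 1.5e-5   # air density (kg/m^3) and kinematic viscosity (m^2/s)

def u_tau_preston(dp, d):
    xs = np.log10(dp * d**2 / (4.0 * rho * nu**2))
    ys = brentq(lambda y: y + 2.0 * np.log10(1.95 * y + 4.10) - xs, 1.0, 6.0)
    tau_w = 10.0**ys * 4.0 * rho * nu**2 / d**2
    return np.sqrt(tau_w / rho)

for dp, d in [(209.13, 0.5e-3), (416.98, 5e-3)]:
    print(f"d = {d*1e3:.1f} mm: u_tau ~ {u_tau_preston(dp, d):.2f} m/s")
```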
12.4 An experimental setup to perform OFI measurements in a wind tunnel has been devel-
oped in order to accurately determine the wall-shear stress. A Philips SOX-35 low-
pressure sodium lamp is used for illumination, and the angle between the camera axis
and the wall-normal direction is 24°. A Nikon D5000 SLR camera with an 18–55 mm

zoom lens is used to capture images. Its calibration factor is 2.7 × 10⁻⁵ m/pixel, and images are taken every 2 s. A 200 cSt Xiameter PMX silicone fluid is used for the experiments, and a careful characterization of its properties is required to ensure accurate measurements. The measured oil density and refractive index agree very well with the values provided by the manufacturer: 970 kg/m³ and 1.403. The oil viscosity is carefully measured over a wide range of temperatures, leading to the calibration curve ν = A exp(αν T), where A = 0.00032 m²/s and αν = −0.02 °C⁻¹.
During the experiment, a single drop was deposited on the plug, and the air temperature was 25°C. After processing the whole stack of images with an appropriate interrogation region, the global wavelength estimation method is used to obtain the fringe-wavelength distribution with time, λf(t). The following values were extracted from the complete data set:

t (s)         0       12      30      40      50      60      70      80
λf × 10⁴ (m)  1.1     1.8     3.1901  3.6     4.007   4.4094  4.7999  5.1982
t (s)         90      100     110     120     130     140     150     160
λf × 10⁴ (m)  5.6107  6.04    6.3871  6.7922  7.2162  7.6118  8.0158  8.4337
t (s)         170     180     190     200     210     220     230     240
λf × 10⁴ (m)  8.8201  9.2602  9.5782  9.9755  10.446  10.81   11.196  11.617
t (s)         250     260     270     274     282     292     298     300
λf × 10⁴ (m)  12.010  12.412  12.859  12.920  14.3    15.8    14.8    14.0

Given the information described earlier, determine the friction velocity uτ.
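As a starting point (not the chapter's prescribed procedure), the sketch below fits the linear growth of λf(t) and converts the slope to τw using a commonly used thin-film interference relation, τw = 2μ√(n² − sin²θ) (dλf/dt)/λlight; the relation itself, the sodium-lamp wavelength (≈589 nm), and the air density are assumptions to cross-check against the "Oil-film interferometry" section. The apparently anomalous points beyond t = 274 s are discarded here.

```python
import numpy as np

# Starting-point sketch for Problem 12.4: fit the fringe-wavelength growth
# rate and convert it to tau_w; formula and constants are assumptions to
# be checked against the "Oil-film interferometry" section of the chapter.
t = np.array([0, 12, 30, 40, 50, 60, 70, 80, 90, 100, 110, 120, 130, 140,
              150, 160, 170, 180, 190, 200, 210, 220, 230, 240, 250, 260,
              270, 274], dtype=float)                    # s (t >= 282 s discarded)
lam = np.array([1.1, 1.8, 3.1901, 3.6, 4.007, 4.4094, 4.7999, 5.1982,
                5.6107, 6.04, 6.3871, 6.7922, 7.2162, 7.6118, 8.0158,
                8.4337, 8.8201, 9.2602, 9.5782, 9.9755, 10.446, 10.81,
                11.196, 11.617, 12.010, 12.412, 12.859, 12.920]) * 1e-4  # m

slope = np.polyfit(t, lam, 1)[0]            # d(lambda_f)/dt, m/s

nu = 0.00032 * np.exp(-0.02 * 25.0)         # oil viscosity at 25 C, m^2/s
mu = 970.0 * nu                             # dynamic viscosity, Pa s
n, theta = 1.403, np.radians(24.0)          # refractive index, viewing angle
lam_light = 589e-9                          # sodium-lamp wavelength (assumed), m

tau_w = 2.0 * mu * np.sqrt(n**2 - np.sin(theta)**2) * slope / lam_light
u_tau = np.sqrt(tau_w / 1.2)                # air density ~1.2 kg/m^3 (assumed)
print(f"tau_w ~ {tau_w:.2f} N/m^2, u_tau ~ {u_tau:.2f} m/s")
```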
12.5 The ZPG TBL is one of the most widely studied canonical cases due to its relatively simple geometry and the interesting features associated with its streamwise development. The following velocity profile was measured in a wind-tunnel test using hot-wire anemometry [25]:

y (mm)   0.0815   0.1145   0.1542   0.2037   0.2653   0.3402   0.4339   0.5495
U (m/s)  4.526    5.947    7.508    8.873    10.028   10.959   11.702   12.489
y (mm)   0.6895   0.8644   1.0790   1.3457   1.6712   2.0763   2.5729   3.1863
U (m/s)  13.044   13.602   14.049   14.574   15.011   15.495   16.024   16.464
y (mm)   3.9413   4.8690   6.0149   7.4275   9.1689   11.3165  13.9648  17.2277
U (m/s)  16.958   17.500   17.986   18.505   18.969   19.482   20.096   20.622
y (mm)   21.2473  26.2047  32.3163  39.8496  49.1368  60.5865  74.7006  92.0996
U (m/s)  21.278   22.037   22.824   23.740   24.722   25.701   26.407   26.565
y (mm)   113.5452 139.9897
U (m/s)  26.545   26.526

The wind tunnel was operated with a freestream velocity of U∞ = 26.55 m/s, and the resulting Reynolds number based on momentum thickness was Reθ = 14,300.
(i) Use the Clauser chart to estimate the friction velocity.
(ii) Compare the result obtained in (i) with the one obtained during the experiment using OFI: uτ = 0.9175 m/s. What is the relative error? Plot the velocity profile in inner scaling using both friction velocities, and compare the two distributions. Discuss the most relevant mean-flow features, such as the extent of the logarithmic layer, the values of the log-law parameters, etc.
(iii) Is it possible to use any points within the viscous sublayer to determine the friction velocity? Comment on the accuracy of this approach and on its potential application in high Reynolds number flows.
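A starting-point sketch for part (i) is given below: it matches the logarithmic law U = uτ[(1/κ) ln(y uτ/ν) + B] to a tentatively chosen log region. The log-law constants, the air viscosity, and the selection of log-region points are assumptions to be revisited in the discussion.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Starting-point sketch for Problem 12.5(i): Clauser-chart estimate of u_tau.
# kappa, B, nu and the chosen log-region points are assumptions to revisit.
kappa, B, nu = 0.41, 5.0, 1.5e-5          # log-law constants, air viscosity

# Points tentatively taken inside the logarithmic region (from the table)
y = np.array([3.9413, 4.8690, 6.0149, 7.4275, 9.1689, 11.3165]) * 1e-3  # m
U = np.array([16.958, 17.500, 17.986, 18.505, 18.969, 19.482])          # m/s

def cost(u_tau):
    """Sum of squared deviations from the log law for a trial u_tau."""
    U_log = u_tau * (np.log(y * u_tau / nu) / kappa + B)
    return np.sum((U - U_log) ** 2)

res = minimize_scalar(cost, bounds=(0.5, 1.5), method="bounded")
print(f"u_tau ~ {res.x:.4f} m/s  (OFI value for comparison: 0.9175 m/s)")
```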

References

1. Marusic I (2009). Unravelling turbulence near walls, J. Fluid Mech., 630, 1–4.
2. George WK (2006). Recent advancements toward the understanding of turbulent boundary layers, AIAA J., 44, 2435–2449.
3. Winter KG (1977). An outline of the techniques available for the measurement of skin friction in turbulent boundary layers, Prog. Aerosp. Sci., 18, 1–57.
4. Allen JM (1977). Experimental study of error sources in skin-friction balance measurements, J. Fluids Eng., 99, 197–204.
5. Schultz-Grunow F (1940). New frictional resistance law for smooth plates, NACA Technical Memorandum 986.
6. Dhawan S (1951). Direct measurements of skin friction, PhD thesis, California Institute of Technology, Pasadena, CA.
7. Winter KG, Gaudet L (1970). Turbulent boundary-layer studies at high Reynolds numbers at Mach numbers between 0.2 and 2.8, RAE Technical Report No. 70251, Ministry of Aviation Supply, Royal Aircraft Establishment.
8. Haritonidis JH (1989). The measurement of wall shear stress, in: Gad-el-Hak M, ed., Advances in Fluid Mechanics Measurements, Lecture Notes in Engineering, Springer-Verlag, Berlin, Germany, pp. 229–261.
9. Sadr R, Klewicki JC (2000). Surface shear stress measurement system for boundary layer flow over a salt playa, Meas. Sci. Technol., 11, 1403–1413.
10. Naughton JW, Sheplak M (2002). Modern developments in shear-stress measurement, Prog. Aerosp. Sci., 38, 515–570.
11. Sahlin A, Johansson AV, Alfredsson PH (1988). The possibility of drag reduction by outer layer manipulators in turbulent boundary layers, Phys. Fluids, 31, 2814–2820.
12. Mori K, Imanishi H, Tsuji Y, Hattori T, Matsubara M, Mochizuki S, Inada M, Kasiwagi T (2007). Direct total skin-friction measurement of a flat plate in zero-pressure-gradient boundary layers, Fluid Dyn. Res., 41, 021406.
13. Gad-el-Hak M (1987). The water towing tank as an experimental facility, Exp. Fluids, 5, 289–297.
14. Baars WJ, Squire DT, Talluru KM, Abbassi MR, Hutchins N, Marusic I (2016). Wall-drag measurements of smooth- and rough-wall turbulent boundary layers using a floating element, Exp. Fluids, 57, 90.
15. Tennekes H, Lumley JL (1972). A First Course in Turbulence, MIT Press, Cambridge, MA.
16. Pope S (2000). Turbulent Flows, Cambridge University Press, New York.
17. Millikan CB (1938). A critical discussion of turbulent flows in channels and circular tubes, Proceedings of the Fifth International Congress on Applied Mechanics, Cambridge, MA, pp. 386–392.
18. George WK, Castillo L (1997). Zero-pressure-gradient turbulent boundary layer, Appl. Mech. Rev., 50, 689.
19. Monkewitz PA, Chauhan KA, Nagib HM (2008). Comparison of mean flow similarity laws in zero pressure gradient turbulent boundary layers, Phys. Fluids, 20, 105102.
20. Marusic I, Monty JP, Hultmark M, Smits A (2013). On the logarithmic region in wall turbulence, J. Fluid Mech., 716, R3.
21. Nagib HM, Chauhan KA (2008). Variations of von Kármán coefficient in canonical flows, Phys. Fluids, 20, 101518.
22. Lee M, Moser RD (2015). Direct numerical simulation of turbulent channel flow up to Reτ = 5200, J. Fluid Mech., 774, 395–415.
23. Schultz MP, Flack KA (2013). Reynolds-number scaling of turbulent channel flow, Phys. Fluids, 25, 025104.
24. Bailey SCC, Vallikivi M, Hultmark M, Smits AJ (2014). Estimating the value of von Kármán's constant in turbulent pipe flow, J. Fluid Mech., 749, 79–98.
25. Österlund JM (1999). Experimental studies of zero pressure-gradient turbulent boundary-layer flow, PhD thesis, Royal Institute of Technology, Stockholm, Sweden.
26. Örlü R, Fransson JHM, Alfredsson PH (2010). On near wall measurements of wall bounded flows—The necessity of an accurate determination of the wall position, Prog. Aerosp. Sci., 46, 353–387.
27. Vinuesa R, Schlatter P, Nagib HM (2014). Role of data uncertainties in identifying the logarithmic region of turbulent boundary layers, Exp. Fluids, 55, 1751.
28. Clauser FH (1954). Turbulent boundary layers in adverse pressure gradients, J. Aero. Sci., 21, 91–108.
29. Willmarth WW, Lu SS (1972). Structure of the Reynolds stress near the wall, J. Fluid Mech., 55, 65–92.
30. Tavoularis S (2005). Measurement in Fluid Mechanics, Cambridge University Press, Cambridge, UK.
31. Ozarapoglu V (1973). Measurements in incompressible turbulent flows, PhD thesis, Laval University, Quebec City, Quebec, Canada.
32. Nagib HM, Christophorou C, Rüedi J-D, Monkewitz PA, Österlund JM, Gravante S (2004). Can we ever rely on results from wall-bounded turbulent flows without direct measurements of wall shear stress? 24th AIAA Aerodynamic Measurement Technology and Ground Testing Conference, June 28–July 1, 2004, Portland, OR.
33. Vinuesa R, Rozier P, Schlatter P, Nagib HM (2014). Experiments and computations of localized pressure gradients with different history effects, AIAA J., 52, 368–384.
34. Van Driest ER (1956). On turbulent flow near a wall, J. Aero. Sci., 23, 1007–1011.
35. Spalding DB (1961). A single formula for the "law of the wall", J. Appl. Mech., 28, 455–458.
36. Chauhan KA, Monkewitz PA, Nagib HM (2009). Criteria for assessing experiments in zero pressure gradient boundary layers, Fluid Dyn. Res., 41, 021404.
37. Musker AJ (1979). Explicit expression for the smooth wall velocity distribution in a turbulent boundary layer, AIAA J., 17, 655–657.
38. Nickels TB (2004). Inner scaling for wall-bounded flows subject to large pressure gradients, J. Fluid Mech., 521, 217–239.
39. Durst F, Kikura H, Lekakis I, Jovanovic J, Ye Q (1996). Wall shear stress determination from near-wall mean velocity data in turbulent pipe and channel flows, Exp. Fluids, 20, 417–428.
40. Alfredsson PH, Örlü R, Schlatter P (2011). The viscous sublayer revisited—exploiting self-similarity to determine the wall position and friction velocity, Exp. Fluids, 51, 271–280.
41. Bidwell JM (1951). Application of the von Kármán momentum theorem to turbulent boundary layers, NACA Technical Note 2571, Langley Aeronautical Laboratory, Langley Field, VA.
42. Dutton RA (1956). The accuracy of the measurement of turbulent skin friction by means of surface Pitot-tubes and the distribution of skin friction on a flat plate, Aeronautical Research Council Reports and Memoranda, 3058, Ministry of Supply, London, UK.
43. Mehdi F, White CM (2011). Integral form of the skin friction coefficient suitable for experimental data, Exp. Fluids, 50, 43–51.
44. Mehdi F, Johansson TG, White CM, Naughton JW (2013). On determining wall shear stress in spatially developing two-dimensional wall-bounded flows, Exp. Fluids, 55, 1656.
45. Bailey SCC, Hultmark M, Monty JP, Alfredsson PH, Chong MS, Duncan RD et al. (2013). Obtaining accurate mean velocity measurements in high Reynolds number turbulent boundary layers using Pitot tubes, J. Fluid Mech., 715, 642–670.
46. Vinuesa R (2013). Synergetic computational and experimental studies of wall-bounded turbulent flows and their two-dimensionality, PhD thesis, Illinois Institute of Technology, Chicago, IL.
47. Vinuesa R, Nagib HM (2016). Enhancing the accuracy of measurement techniques in high Reynolds number turbulent boundary layers for more representative comparison to their canonical representations, Eur. J. Mech. B/Fluids, 55, 300–312.
48. Vinuesa R, Duncan RD, Nagib HM (2016). Alternative interpretation of the Superpipe data and motivation for CICLoPE: The effect of a decreasing viscous length scale, Eur. J. Mech. B/Fluids, 58, 109–116.
49. Doherty J, Ngan P, Monty JP, Chong M (2007). The development of turbulent pipe flow, Sixteenth Australasian Fluid Mechanics Conference, Crown Plaza, Gold Coast, Queensland, Australia, December 2–7, 2007.
50. Monty JP (2005). Developments in smooth wall turbulent duct flows, PhD thesis, University of Melbourne, Melbourne, Victoria, Australia.
51. Vinuesa R, Bartrons E, Chiu D, Dressler KM, Rüedi J-D, Suzuki Y, Nagib HM (2014). New insight into flow development and two dimensionality of turbulent channel flows, Exp. Fluids, 55, 1759.
52. Preston JH (1954). The determination of turbulent skin friction by means of Pitot tubes, J. R. Aeronaut. Soc., 58, 109–121.
53. Patel VC (1965). Calibration of the Preston tube and limitations on its use in pressure gradients, J. Fluid Mech., 23, 185–208.
54. Zagarola MV, Williams DR, Smits AJ (2001). Calibration of the Preston probe for high Reynolds number flows, Meas. Sci. Technol., 12, 495–501.
55. Hosseini SM, Vinuesa R, Schlatter P, Hanifi A, Henningson DS (2016). Direct numerical simulation of the flow around a wing section at moderate Reynolds number, Int. J. Heat Fluid Flow, doi:10.1016/j.ijheatfluidflow.2016.02.001.
56. Hanratty TJ, Campbell JA (1996). Measurement of wall shear stress, in: Goldstein RJ, ed., Fluid Mechanics Measurements, 2nd edn., Taylor & Francis, Washington, DC, pp. 575–648.
57. Trilling L, Häkkinen RJ (1955). The calibration of the Stanton tube as a skin-friction meter, in: Görtler H, Tollmien W, eds., 50 Jahre Grenzschichtforschung, Friedr. Vieweg und Sohn, Braunschweig, Germany, pp. 201–209.
58. East LF (1967). Measurement of skin friction at low subsonic speeds by the razor blade technique, Technical Report 3525, Aeronautical Research Council, London, UK.
59. Bradshaw P, Gregory N (1959). The determination of local turbulent skin friction from observations in the viscous sub-layer, Technical Report 3202, Aeronautical Research Council, London, UK.
60. Fernholz HH, Janke G, Schober M, Wagner PM, Warnack D (1996). New developments and applications of skin-friction measuring techniques, Meas. Sci. Technol., 7, 1396–1409.
61. Castro IP, Dianat M, Bradbury LJS (1987). The pulsed-wire skin-friction measurement technique, in: Durst F et al., eds., Proceedings of the Fifth Symposium on Turbulent Shear Flows, Vol. 5, Springer-Verlag, Berlin, pp. 278–290.
62. Tanner LH, Blows LG (1976). A study of the motion of oil films on surfaces in air flow, with application to the measurement of skin friction, J. Phys. E, 9, 194–202.
63. Squire LC (1962). The motion of a thin oil sheet under the boundary layer on a body, in: Maltby RL, ed., Flow Visualization in Wind Tunnels Using Indicators, AGARDograph, Vol. 70, North Atlantic Treaty Organization Advisory Group for Aeronautical Research and Development, Bedford, England, pp. 7–23.
64. Segalini A, Rüedi J-D, Monkewitz PA (2015). Systematic errors of skin-friction measurements by oil-film interferometry, J. Fluid Mech., 773, 298–326.
65. Hecht E (1987). Optics, 2nd edn., Addison-Wesley, New York, pp. 270–314, 346–361.
66. Janke G (1993). Über die Grundlagen und einige Anwendungen der Ölfilminterferometrie zur Messung von Wandreibungsfeldern in Luftströmungen, PhD thesis, TU Berlin, Berlin, Germany.
67. Österlund JM, Johansson AV, Nagib HM, Hites MH (2000). A note on the overlap region in turbulent boundary layers, Phys. Fluids, 12, 1–4.
68. Chauhan K, Ng HCH, Marusic I (2010). Empirical mode decomposition and Hilbert transforms for analysis of oil-film interferograms, Meas. Sci. Technol., 21, 105405.
69. Viswanath DS, Ghosh T, Prasad DHL, Dutt NVK, Rani KY (2007). Viscosity of Liquids: Theory, Estimation, Experiment, and Data, Springer, Netherlands.
70. Bartrons E, Muñoz M (2012). Aspect ratio and perimeter effects on turbulent channel flows, Technical Report, Illinois Institute of Technology, Chicago, IL.
71. Fernholz HH, Finley PJ (1996). The incompressible zero-pressure-gradient turbulent boundary layer: An assessment of the data, Prog. Aerosp. Sci., 32, 245–311.
72. Rüedi J-D, Nagib HM, Österlund J, Monkewitz PA (2003). Evaluation of three techniques for wall-shear measurements in three-dimensional flows, Exp. Fluids, 35, 389–396.
73. Desse J-M (2003). Oil-film interferometry skin-friction measurement under white light, AIAA J., 41, 2468–2477.
74. Driver DM (2003). Application of oil-film interferometry skin-friction measurement to large wind tunnels, Exp. Fluids, 34, 717–725.
75. Naughton JW, Hind MD (2013). Multi-image oil-film interferometry skin friction measurements, Meas. Sci. Technol., 24, 124003.
76. Naqwi AA, Reynolds WC (1991). Measurement of turbulent wall velocity gradients using cylindrical waves of laser light, Exp. Fluids, 10, 257–266.
77. Fourguette D, Modarress D, Taugwalder F, Wilson D, Koochesfahani M, Gharib M (2001). Miniature and MOEMS flow sensors, Proceedings of the 31st AIAA Fluid Dynamics Conference and Exhibit, AIAA Paper 2001-2982, American Institute of Aeronautics and Astronautics, Reston, VA.
78. Kähler CJ, Scholz U, Ortmanns J (2006). Wall-shear-stress and near-wall turbulence measurements up to single pixel resolution by means of long-distance micro-PIV, Exp. Fluids, 41, 327–341.
79. Reda DC, Muratore JJ (1994). Measurements of surface shear stress vectors using liquid crystal coatings, AIAA J., 32, 1576–1582.
80. Reda DC, Wilder MC, Mehta R, Zilliac G (1998). Measurement of continuous pressure and shear distributions using coatings and imaging techniques, AIAA J., 36, 895–899.
81. Smits AJ, Lim TT, eds. (2000). Flow Visualization Techniques and Examples, Imperial College Press, London, UK.
82. Brücker Ch, Bauer D, Chaves H (2007). Dynamic response of micro-pillar sensors measuring fluctuating wall-shear-stress, Exp. Fluids, 42, 737–749.
83. Große S, Schröder W (2008). Mean wall-shear stress measurements using the micro-pillar shear-stress sensor MPS3, Meas. Sci. Technol., 19, 015403.
84. Gnanamanickam EP, Nottebrock B, Große S, Sullivan JP, Schröder W (2013). Measurement of turbulent wall shear-stress using micro-pillars, Meas. Sci. Technol., 24, 124002.
85. Große S, Soodt T, Schröder W (2008). Dynamic calibration technique for the micro-pillar shear-stress sensor MPS3, Meas. Sci. Technol., 19, 105201.
86. Nottebrock B, Schröder W (2012). Improvement of the measurement range of the micro-pillar shear-stress sensor MPS3, Proceedings of the 28th AIAA Aerodynamic Measurement Technology, Ground Testing, and Flight Testing Conference, New Orleans, LA, June 25–28, 2012, AIAA Paper 2012-3011.
87. Brücker C (2015). Evidence of rare backflow and skin-friction critical points in near-wall turbulence using micropillar imaging, Phys. Fluids, 27, 031705.
88. Vinuesa R, Örlü R, Schlatter P (2016). Characterisation of backflow events over a wing section, J. Turbul., accepted. Available at: http://dx.doi.org/10.1080/14685248.2016.1259626.
89. Brücker C (2011). Interaction of flexible surface hairs with near-wall turbulence, J. Phys.: Condens. Matter, 23, 184120.
90. Brown JL, Naughton JW (1999). The thin-oil-film equation, Technical Report NASA/TM-1999-208767, NASA Ames Research Center, Moffett Field, CA.
Chapter Thirteen

Force and moments measurements

Marios Kotsonis

Contents

13.1 Introduction 429
  Coordinate systems 429
13.2 Force measurements in wind tunnels 431
  Fixed wing systems 431
  Rotating systems 443
13.3 Force measurements for evaluation of flow control actuators 445
  Force measurements of plasma actuators 446
Problems 447
References 448

13.1 Introduction

Forces and moments are some of the most important quantities measured in wind tunnels. The forces and moments acting on an aerodynamic body under the influence of external or internal flows can be steady or unsteady and are a function of a wide variety of conditions. The most typical approach toward the quantification of these loads is the use of balances. Additionally, techniques based on pressure or velocity measurements can be used for the indirect derivation of aerodynamic forces.

Due to the rich variety of flow velocity regimes, wind tunnel configurations, and model complexities, an equally diverse variety of balances and pressure-based techniques exists. This chapter attempts to provide a brief classification of techniques aimed at the quantification of forces and moments on fixed or rotating wing systems. Additionally, a short section is provided on micro-force measurements in conditions of quiescent flow, especially aimed at the characterization of flow control actuators.

This chapter is structured based on the morphological features of the device or technique used, whether this is a load balance or a pressure-based method. For fixed wing systems, steady and unsteady load measurement approaches are presented. Analogously, techniques developed for rotating wing systems such as propellers and rotors are shown. Finally, the characterization of flow control actuators is visited due to the very specific conditions and requirements that apply in this specialized area.
Coordinate systems

Typical wind tunnel measurements of forces and moments can involve complex geometries, orientations, and degrees of freedom (DOFs). As such, a clear definition of coordinate systems is necessary for the referencing of the measurement and the later post-processing of the results. Although a matter of convention, the definition of coordinate systems has been standardized by national and international standards such as ISO and DIN [1]. Here, we concentrate on the two commonly used systems, namely, the European and the American.

The European system is based on the ISO 1151 directive (Figure 13.1). The system of axes is aligned with the wind tunnel's main flow direction. "Lift" is defined as the force acting on the model normal to the flow direction and vertically, opposing the "weight." "Drag" is defined as the axial force acting parallel to the main flow, opposing "thrust."


[Figure 13.1 shows roll about the longitudinal (x) axis, pitch about the lateral axis, and yaw about the vertical (z) axis.]

FIGURE 13.1 Definition of the model-fixed axes system. (From ISO 1151-1:1988, Flight dynamics—Concepts, quantities and symbols—Part 1: Aircraft motion relative to the air.)

Similarly, "side force" is defined as the force acting normal to the main flow direction and normal to "lift." The positive directions for forces and moments are defined based on the right-hand system. Thrust is positive while drag is negative; weight is positive while lift is negative. The side force is positive in the starboard direction. Positive pitch, yaw, and roll moments are defined based on the right-hand system. The American coordinate system is similar, with the exception that lift and drag are defined as positive while weight and thrust are negative.

The previously mentioned definitions also apply in the case of a model-fixed axis system. This is particularly useful when the balance is attached to the model (internal balance) rather than to the wind tunnel (external balance). In order to transform the measured forces from an internal balance to the wind tunnel-fixed frame, knowledge of the pitch, yaw, and roll angles is required.

Terminology

Here, some commonly used terminology regarding the measurement of aerodynamic forces and moments is given. Several definitions for these terms are available, but standardized sources can be used for common reference, such as the Guide to Uncertainties in Measurement or the American Institute of Aeronautics and Astronautics (AIAA) standards [2].

Load

The term "load" can be used to describe any force or moment acting on an aerodynamic body, model, or component subject to wind tunnel testing. For fully 3D complex systems such as scale models of aircraft, it is necessary to define three forces, namely, lift, drag, and side force, and three moments, namely, pitch, roll, and yaw moments. Loads can also be defined for components such as high-lift devices, where fewer components can usually fully describe the system. In the specialized case of rotating wings such as propellers or rotors, integrated components such as total thrust and torque complement the loads of each individual blade.

Resolution

Resolution is typically defined as the smallest value a balance can measure. This usually includes the step of digitization. In essence, this is the smallest difference between two loads that can be detected. Typical values vary per balance type, but in general resolution values of 0.005% of the full scale or lower are preferred.

Linearity

Linearity is an important property for many types of balances. It is basically defined as the change in resolution through the range. Usually, resolution is defined with reference to the maximum load; if this value is different for lower loads, an additional error might occur. Identical sensitivity throughout the measuring capacity of the balance can be tested by measuring two loads separately, each approximately half of the maximum load capacity. The sum of the two measurements should equal the measurement when both loads are applied together.

Static and dynamic range

Due to the large range of flow regimes, scales, and conditions encountered in wind tunnel testing, a good balance must possess a relatively broad static and dynamic range. The static range is defined as the range of values between the minimum load and the maximum load that can be measured by the balance within the accuracy specifications. Usually, resolution and accuracy issues define the minimum limit, while the maximum limit is defined by strength constraints on the structure of the balance itself.

Analogously, the dynamic range is defined as the band of frequencies a balance is able to accurately register in case of fluctuating loads. A lower limit is usually not an issue, since most balances can measure the direct current (DC) components by default. On the other hand, the upper limit is subject to various constraints such as the mechanical inertia of the system, resonant behavior, and the sampling rate of the acquisition system.

13.2 Force measurements in wind tunnels

In this section, we treat the measurement of aerodynamic loads in the environment of wind tunnels. A distinction is made between fixed and rotating wing systems due to the extra complications involved in the testing of the latter.

Fixed wing systems

Fixed wing systems are by far the most commonly tested configurations in wind tunnels today. These can span a wide range of complexity, from basic 2D airfoil testing to full configurations of aircraft models with integrated propulsion. In any case, a wealth of measurement techniques is available for the evaluation of aerodynamic loads. The most typical arrangement involves the use of load balances, whether internal or external. Indirect techniques based on pressure measurements are also available and will be treated briefly.

Mounting

The mounting configuration is usually a function of whether an internal or external balance will be used. External balances are usually positioned outside of the wind tunnel test section. The model is connected to the balance either directly (side-wall balance) or via supports. In the case of a direct connection, the model is usually in contact with one of the walls of the wind tunnel. Half-span models are typical for this configuration. Side-wall balances are usually simple constructions employing solid metallic pieces furnished with strain gauges or piezoelectric transducers. In the case of connection via supports, the external balance is positioned usually above or below the test section, and individual components are responsible for carrying and measuring specific loads. Although the complexity and space requirements are higher than for side-wall balances, external balances offer flexibility and accuracy due to the segregation of load interference.

Internal balances are placed within the model or at the interface between the model and the sting or support. Due to limited space, they have to be more compact than external balances. The general principle here is the use of single elements designed to stress nonhomogeneously depending on the load. As such, internal balances are constructed to allow different loads to stress different areas of the balance. Due to their small size and direct connection to the model, internal balances have a better dynamic range than their external counterparts. It should be noted that, since internal balances are aligned with the model, the measured loads have to be projected back onto the wind tunnel coordinate system in order to be compared to measurements of external balances [3].

Several options are available regarding the mounting of the model (Figure 13.2). These primarily depend on the type of balance, wind tunnel, and testing requirements. In the case of external balances, the model is required to be mechanically attached to the balance in order to transfer the loads and also to allow for position adjustments such as changes in the angle of attack. In the case of an internal balance, the support is only required to control the position of the model [3].


FIGURE 13.2 (See color insert.) Various mounting configurations for wind tunnel models: (a) three-strut support of a blended wing body, (b) nozzle sting support of the Eurofighter, (c) belly sting support of a commercial airliner, (d) half model for a propulsion interference test, and (e) single sting support for a helicopter. ([a]: Courtesy of TU Delft, Delft, the Netherlands; [b], [d], and [e]: Courtesy of German Dutch Windtunnels [DNW], Marknesse, the Netherlands; [c]: Courtesy of European Transonic Windtunnel [ETW], Cologne, Germany.)

A typical configuration for 2D models is the direct attachment to the balance at one end while the other end is free. Care should be taken in this case to keep a minimum gap between the ends of the model and the side walls of the wind tunnel in order to minimize 3D effects. A similar configuration can also be arranged for 3D models of aircraft. In this case, a half model is used by dividing the aircraft along the longitudinal axis of symmetry (Figure 13.2d). The advantage of this technique is the ability to use a larger model and thus

increase the Reynolds number for a given wind tunnel, compared to full-model testing. A disadvantage is that it becomes impossible to test nonsymmetrical flow conditions such as flight with side slip or yawing.

For full 3D configurations, the model is usually placed on supports. These can have several arrangements. A common technique is a three-strut support where the aircraft is attached to two supports at the quarter chord and to one support at the tail. The two front struts are responsible for carrying the main aerodynamic loads such as lift and drag. The tail strut is responsible for changes in the angle of attack and for the measurement of the pitching moment. An example is shown in Figure 13.2a, where a blended-wing-body aircraft is mounted upside down in the low-turbulence tunnel (LTT) of TU Delft. The model span was approximately 1.5 m, while the produced lift approached 500 N.

A similar method relies on the use of a single strut, usually attached to the belly of the aircraft model (Figure 13.2c and e). Although this setup reduces complexity, it is quite challenging in terms of the structural requirements of the sting. These naturally result in an increase of the size of the sting, which in turn may create issues of blockage and interference.

Less common techniques are based on the suspension of the model with tension cables. This technique has the advantage of minimum interference of the supports, although it suffers from the additional complexity associated with the pretensioning of the cables.

In the case in which an internal balance is used, the support of the model is usually through a single sting. This can be attached either to the body of the model or, if available, through the engine nozzle. An example is shown in Figure 13.2b, where a large-scale model of the Eurofighter Typhoon is suspended in the LLF facility of DNW in the Netherlands.
Independent of the type of support, a major consideration should be the interference of the supports with the surrounding flow. The wind tunnel experiment is intended as a simulation of reality, and naturally external supports do not exist in free flight. As such, it is necessary to account for the presence of the supports during the measurement and to correct for it later. The presence of the supports has two effects. First, the supports are subject to aerodynamic loads, which are additional to the loads measured on the model. Second, the supports locally change the flow field surrounding the model, thus changing also the aerodynamic loads. The first effect is usually compensated by running additional measurements where only the support is present in the wind tunnel. The measured forces can later be subtracted from the actual measurements. The second effect is much more challenging, since the influence of the supports on the flow is also a function of the tested model. The correction is typically performed through empirical techniques.

Steady balance measurements

Steady measurements are by far the most common and straightforward way of using force balances. Although several aerodynamic phenomena are inherently unsteady, a usually valid assumption is that the effect on the time-averaged aerodynamic properties is minimal. For example, lift measurements on an airfoil in the near- or post-stall regime will indeed fluctuate about a mean. Yet, if the measurement is sufficiently long, the reading is a relatively accurate description of the average performance. Due to their mechanical inertia, most balance systems are considered steady.

Cantilever balances

Cantilever balances have been one of the oldest implementations of wind tunnel balances and are still in use today (Figure 13.3). They are based on the simple principle of the weighing balance from which they originated. The balance is composed of a cantilever element able to rotate around a pivot point. One end of the cantilever is attached to the load, while on the other end a weight element is positioned accordingly. The weight element can slide back and forth, thus changing the moment around the pivot. As the load attempts to displace the cantilever, the weight is positioned such that the balance returns to its neutral position. Through careful calibration, the position of the weight for which the neutral position is established is correlated with the load value.

Modern systems make use of accurate stepper motors for the movement of the weight element, resulting in high-accuracy balances. Typically, the system is used in external balances due to space requirements. Apart from robustness, the cantilever balance exhibits high linearity

FIGURE 13.3 The 6-DOF cantilever balance of the LTT at TU Delft, the Netherlands.

and low random error due to its mechanical operation. On the other hand, large masses need to be physically moved, which imposes severe limitations on the dynamic range of such balances [4].

A design example is given here, based on the external 6-DOF cantilever balance of the LTT of TU Delft. Prior to any design effort, it is essential to have a clear overview of the design requirements, especially for the balance maximum loading, accuracy, and dynamic range. These are mostly dependent on the characteristics of the wind tunnel and the measurement objectives. In our design case, the wind tunnel is able to operate in a velocity regime between 0 and 120 m/s. The test section is 1.2 m in height, 1.8 m in width, and 2.5 m in length. It needs to be emphasized that the balance is designed for 2D as well as 3D geometries. For our initial design, we assume the testing of a 2D airfoil. Taking into consideration blockage effects, the maximum model chord (c) can be fixed to 0.5 m and the maximum span (b) to 1.2 m. This gives a maximum Reynolds number at room temperature of Remax = Umax c/ν ≈ 4 × 10⁶.

At this high Reynolds number, the flow can be safely assumed to be inviscid. As such, simple inviscid panel codes can be used to calculate a first estimate of the lift coefficient (CL) of the airfoil in question. Since we are in search of the maximum balance loading, we assume a relatively high lift coefficient of CL,max = 3.0 (this is higher than any practically achievable lift coefficient). Based on the definition of the lift coefficient, we can now compute the maximum lift produced by this airfoil in the LTT as Lmax = CL,max (1/2) ρ U²max c b ≈ 15,500 N ≈ 1,585 kg. Further considering a structural safety factor of 1.2, the balance can be designed for a maximum lift of 1900 kg. A similar approach is taken for the calculation of the other five components of aerodynamic forces and moments.

Regarding accuracy, one needs to turn to the measurement of drag, as it is usually one order of magnitude smaller than lift. A general rule of thumb would be a resolution on the order of half a drag count (ΔCD = 0.5 × 10⁻⁴). This can be calculated to be equal to a force resolution of 0.23 N.
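The design numbers above can be reproduced with the short sketch below; the air properties are assumed standard sea-level values, so the results differ slightly from the figures quoted in the text.

```python
# Sketch of the cantilever-balance design numbers above; the air properties
# (rho, nu) are assumed standard sea-level values, so results are approximate.
rho, nu = 1.225, 1.5e-5          # kg/m^3, m^2/s (assumed)
U_max, c, b = 120.0, 0.5, 1.2    # m/s, m, m
CL_max, safety = 3.0, 1.2

Re_max = U_max * c / nu                  # ~4e6
q = 0.5 * rho * U_max**2                 # dynamic pressure, Pa
L_max = CL_max * q * c * b               # ~15.9 kN
dD = 0.5e-4 * q * c * b                  # half a drag count, N

print(f"Re_max ~ {Re_max:.2e}")
print(f"L_max ~ {L_max:.0f} N (design: {safety * L_max / 9.81:.0f} kg)")
print(f"drag resolution ~ {dD:.2f} N")
```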

Strain gauge balances

One of the most popular types of balances is based on the application of strain gauges. In the elastic regime, strain is proportional to stress and in turn to force. As such, the strain of solid components can be used as a measure of the force exerted on them. The strain gauge is designed to convert strain into electrical signals.

[Figure annotations: a strain-sensitive pattern with terminals; under tension the conductor's cross-sectional area narrows and its resistance increases, whereas under compression the area thickens and the resistance decreases.]

FIGURE 13.4 Typical strain gauges and their operation.

The most common type of strain gauge is formed using very thin metallic wires arranged on a plastic carrier (Figure 13.4). The working principle of the strain gauge is based on the so-called electromechanical effect, first observed and described by Lord Kelvin (William Thomson) in 1856. The first working strain gauges are commonly attributed to the work of Edward Simmons and Arthur Ruge in 1938. The strain gauges they invented consisted of a metallic pattern deposited on a flexible carrier. The advancements of photoetching techniques quickly made possible the mass production of cheap strain gauges in the 1960s [5].
The working principle of the strain gauge relies on the change of the electrical resistance of a conductor with changes in its geometry, as described in Chapter 5.

Strain gauges as components of load balances are applied on metallic components. As such, very small deformations are measured, which result in small resistance changes. This creates the need for measuring extremely small values with the additional requirement of high accuracy. Due to the challenges posed by these requirements, a method to "amplify" the resistance change is the so-called Wheatstone bridge (Figure 13.5), described in Chapter 5. A Wheatstone bridge is configured as an arrangement of four resistors of equal nominal resistance, placed in two parallel branches of two resistors in series. The system is excited with a known DC voltage (Ue) across points 1 and 3, and the output voltage (Uo) is measured across points 2 and 4.

The voltage ratio can be directly related to the strains (5.32) by (Uo/Ue) = (1/4)(ε1 − ε2 + ε3 − ε4). It is important to note that two of the right-hand-side terms in this relation are negative. This means that if all four strain gauges measure the same strain, the output voltage will be zero, leading to a null result. In order to avoid this problem, the strain gauge assembly must be arranged so as to have two gauges in tension and two in compression (Figure 13.5). These are typical situations encountered in force balances.
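As a quick numerical illustration of the bridge relation above, the sketch below evaluates the output for a full bridge with two gauges in tension and two in compression; the strain level and excitation voltage are arbitrary example values.

```python
# Minimal sketch: output of a full Wheatstone bridge using the relation
# Uo/Ue = (1/4)(eps1 - eps2 + eps3 - eps4); example values are arbitrary.
U_e = 5.0                      # excitation voltage (V)
eps = 200e-6                   # strain magnitude (200 microstrain)

# Gauges 1 and 3 in tension (+eps), gauges 2 and 4 in compression (-eps):
U_o = U_e * 0.25 * (eps - (-eps) + eps - (-eps))   # = U_e * eps
print(f"U_o = {U_o*1e3:.2f} mV")                   # 1.00 mV for these values

# If all four gauges saw the same strain, the terms would cancel and U_o = 0.
```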

Full-bridge strain gauge circuit

Strain gauge Strain gauge


(stressed) (stressed)

Strain gauge Strain gauge


(stressed) (stressed)

FIGURE 13.5 A full Wheatstone bridge with four strain gauges. Note that two gauges are under tension and two under compression. (From www.allaboutcircuits.com.)

Corrections for gauge balances

Several corrections need to be applied to the readings of gauge balances in order to compensate for undesired effects.

A force balance under no load should give a reading of zero. This is not practically feasible due to the weight of the balance itself and because the internal nominal resistances of the strain gauges cannot be exactly the same. As such, an unloaded force balance will have a non-zero output. In order to compensate for this, the force balance should be set in its nominal unloaded position and the output voltage should be measured. A correction factor (ΔRcomp) for the nominal resistance (Rnom) of the bridge can then be calculated as

ΔRcomp = Rnom (Uo/Ue)   (13.1)

A second factor to be considered is the zero drift effect. This is in principle a thermal effect: under temperature gradients, the resistance of the strain gauges changes due to material deformations. As such, the bridge zero output will change under conditions of nonuniform temperature. Although every effort should be made to ensure that the temperature gradients experienced by a load balance are minimized, their complete cancellation is usually impractical. As such, a zero drift compensation procedure should be followed. For this, the balance is set to the unloaded nominal position and a sequence of temperature runs is performed through the entire thermal operating envelope of the balance. Each run is made at conditions of uniform, constant temperature. The output of the balance as well as the temperature is logged for each run. Thereafter, a zero drift compensation resistance is calculated using

Rcomp = Rnom (ΔU/Ue) (1/(αcomp ΔT))   (13.2)

where
Rcomp is the compensation resistor to be added to the bridge
ΔU is the change in output voltage between the maximum and minimum applied temperature
ΔT is the difference of maximum and minimum temperature
αcomp is the thermal resistance change coefficient of the material of the compensation resistor
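A small numerical sketch of Equations 13.1 and 13.2 follows, assuming Eq. 13.2 with αcomp in the denominator as reconstructed above; all input values are arbitrary examples.

```python
# Minimal sketch of the zero-offset (Eq. 13.1) and zero-drift (Eq. 13.2)
# compensation resistances; all numbers are arbitrary example values.
R_nom = 350.0        # nominal bridge resistance (ohm)
U_e = 5.0            # excitation voltage (V)

# Zero offset: unloaded bridge output of 2 mV
dR_offset = R_nom * (2e-3 / U_e)                      # Eq. 13.1 -> 0.14 ohm

# Zero drift: 4 mV output change over a 40 K temperature sweep, with a
# compensation-resistor material of alpha_comp = 4e-3 1/K (assumed form of
# Eq. 13.2, with alpha_comp in the denominator)
R_comp = R_nom * (4e-3 / U_e) / (4e-3 * 40.0)         # Eq. 13.2 -> 1.75 ohm
print(f"dR_offset = {dR_offset:.3f} ohm, R_comp = {R_comp:.3f} ohm")
```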

Another important thermal effect is the sensitivity shift effect. The sensitivity of the strain gauge can change with temperature due to thermal expansion or compression, changes in the wire's Young's modulus, and changes in the gauge factor. In order to compensate for these effects, two techniques are mainly applied. First, modulus-compensating strain gauges are offered by several manufacturers in order to correct for sensitivity shifts due to changes in the Young's modulus of the strain gauge wire. Second, the excitation voltage can be adjusted in order to keep the sensitivity shift equal to zero. This is done by placing a temperature-sensitive resistor in series with the excitation voltage line.

Calibration of strain gauge balances

Calibration is an important process in the context of aerodynamic load measurements. The basic principle of calibration is the conversion of the output signal of the balance to the measured load. Due to the complexity of balance design and non-perfect ambient conditions, a purely theoretical relation between the input and output of the system cannot be devised. As such, calibration serves the important role of providing a complete transfer function between the measured load and the balance output.

The typical procedure for calibration is based on the use of precisely known weights. These are applied directly on the balance in a predefined direction. Additionally, it is common practice that the weights are referenced to some international standard in order to ensure accuracy and maintain consistency among different balances.
An important consideration to be taken into account during calibration is the cross-talk phenomenon between load components. This occurs when the balance is loaded in a specific direction (for instance, the direction of lift) and loads are measured in other directions (for instance, the direction of drag). Although undesirable, since a well-designed balance should react independently to different loads, this behavior is not uncommon. A major function of the calibration is to establish this cross-talk relationship and utilize it in order to eliminate its effect on the actual measurement.

As mentioned earlier, a typical calibration procedure establishes a transfer function between the measured loads and the balance output. In the general case of a six-component wind tunnel balance, this relationship can be put in matrix notation as

F = TS   (13.3)

where
F is the load vector containing the three forces and three moments
T is the transfer function matrix (also called evaluation matrix)
S is the balance signal vector

The relationship implies, of course, that the output of the balance is known and the loads are to be calculated. The calibration procedure, on the other hand, establishes the following relationship:

S = CF   (13.4)

where the loads are predetermined calibration weights. C is the so-called calibration
matrix. It is evident that the evaluation matrix can be calculated by inverting the calibra-
tion matrix.

T = C⁻¹   (13.5)

When simple linear relationships exist between the loads and the balance output, the matrices T and C are called linear. Their size is 6 × 6 when three force and three moment components are measured. Such a relationship can only be adequate when a good mechanical isolation of the loads is established and the accuracy requirements for the balance are not a first priority. In the opposite case, higher-order relationships must be devised, with more complicated nonlinear relations assumed as the calibration matrices increase in size. For the accounting of second-order interactions, matrices of size 6 × 21 are used, while for third-order interactions, matrices of size 6 × 33 are used. While not trivial to derive, these are absolutely necessary for modern high-precision wind tunnel balances.
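To make Equations 13.3 through 13.5 concrete, the sketch below estimates a linear 6 × 6 calibration matrix from synthetic calibration loadings by least squares and inverts it to obtain the evaluation matrix; the data are synthetic and the procedure is only a minimal illustration, since real calibrations use many more load cases and the higher-order terms mentioned above.

```python
import numpy as np

# Minimal sketch of Eqs. 13.3-13.5 with a linear 6x6 model: S = C F, T = C^-1.
# Synthetic example only; a real calibration uses many load cases and
# higher-order (6x21 or 6x33) interaction terms.
rng = np.random.default_rng(0)

C_true = np.eye(6) + 0.02 * rng.standard_normal((6, 6))  # small cross-talk

F_cal = rng.uniform(-1.0, 1.0, size=(100, 6))            # applied loads (rows)
S_cal = F_cal @ C_true.T + 1e-4 * rng.standard_normal((100, 6))  # noisy signals

# Least-squares estimate of C from the calibration data: S ~ F C^T
C_est = np.linalg.lstsq(F_cal, S_cal, rcond=None)[0].T
T = np.linalg.inv(C_est)                                  # evaluation matrix

S_meas = rng.uniform(-0.5, 0.5, size=6) @ C_est.T         # a "measured" signal
F_meas = T @ S_meas                                       # recovered loads
print("max |C_est - C_true| =", np.abs(C_est - C_true).max())
```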

Magnetic suspension balances

A very interesting concept of wind tunnel balances relies on the principle of magnetic suspension. Magnetic suspension systems negate all physical support of the model, thus resolving one of the main drawbacks of conventional suspension systems, namely aerodynamic interference.

The principle of operation of a magnetic suspension system is simple yet very challenging to implement. The aerodynamic model is constructed from a nonmagnetic material and furnished with a set of strategically positioned permanent magnets (Figure 13.6). The wind tunnel section is also furnished with surrounding electromagnets of adjustable strength. The model is positioned and maintained in contactless "hover" in the section, subject to the attractive and repulsive forces exerted by the external electromagnets on the internal permanent magnets. Due to the unstable nature of the magnetic interaction, a sophisticated electronic control system is required. The control system adjusts the strength of the electromagnets in real time such that the position of the model remains constant. Of course, reliable measurements of the model position are performed using optical or other means. Even more advanced designs are able to actually maneuver the model during testing within a predefined trajectory [6–11].
Apart from the elimination of any mechanical support structures, magnetic suspension balances present one additional advantage. With careful calibration, the electrical current that must be supplied to the electromagnets in order to keep the model fixed can be correlated with the aerodynamic loads. For example, a repulsive electromagnet requiring a given current to sustain an aircraft model hovering in quiescent flow will require less current when the model produces lift under flow conditions.
Although quite revolutionary, the concept is not new. The first working prototype was demonstrated by ONERA in 1955 [12]. Further work by the same organization improved

Power supply

A/D PC D/A Current control

x, y, θx, θy
sensors Coil
z
N

Model
N
z sensors
θx
x
θy
S
y

x, y, θx, θy
sensors

FIGURE 13.6 Schematic of the magnetic suspension balance at the Fukuoka Institute of Technology, Japan. (Reproduced with permission from Kawamura, Y. and Mizota, T., J. Fluid Eng., Trans. ASME, 135(10), 2013. Copyright of ASME.)

FIGURE 13.7 (See color insert.) A shuttle model is magnetically suspended in the transparent hexagonal test section of the MIT/NASA Langley 6 in. magnetic suspension balance system.

the dynamic response characteristics of the system. Additionally, it was made possible to remotely measure the aerodynamic loads by correlating them with the electric currents that were supplied to the electromagnets in order to constrain the movement of the model.

As already mentioned, the magnetic suspension of wind tunnel models is attractive because it allows a close approach to the free-flight situation. The flow field surrounding the model can be free of the interference effects ordinarily induced by the presence of mechanical supports. In the case of heat-transfer studies, no corrections are necessary for heat conducted through support members. In the case of dynamic stability measurements, magnetic suspension provides elastic restraint in five DOFs. The elasticity of the restraint is adjustable to accommodate the conditions of the test, and either forced-oscillation or free-oscillation modes of testing may be used (Figure 13.7).
An example of magnetic balance design is given here, based on the efforts of the Institute of Aeronautics and Astronautics of the National Cheng Kung University (NCKU) in Taiwan [13]. The balance is designed for a rather small wind tunnel of 10 × 10 cm and serves mainly as a development platform for the technology of magnetic suspension and metrology. The balance is composed of 10 electromagnetic coils arranged in such a way as to levitate and support the model in the test section. The model is constructed from a nonmagnetic body and a permanent-magnet core. The position of the model is monitored by photocells, which communicate in real time with the control computer. The control computer uses the position information to adapt the strength of the coils such that the model is stabilized. Additionally, by measuring the current supplied to the coils, three aerodynamic forces (lift, drag, side) and two moments (pitch, yaw) can be calculated through a calibration matrix. Given the small scale of the wind tunnel, the balance load capacity is rather low (Table 13.1).

Table 13.1 Capacity of the National Cheng Kung University magnetic balance

  Lift (N)             0.6
  Drag (N)             0.5
  Side force (N)       0.5
  Yaw moment (N m)     0.03
  Pitch moment (N m)   0.025
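As a minimal sketch of this data reduction, the following snippet maps the change in coil currents between the wind-off (hovering) and wind-on conditions to the five load components through a calibration matrix; the matrix and the currents are invented placeholders, since a real matrix comes from a calibration campaign with known applied loads:

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.normal(scale=0.05, size=(5, 10))  # calibration matrix, (N or N m) per A, invented
i_off = rng.normal(scale=0.2, size=10)    # coil currents, model hovering, wind off (A)
i_on = i_off + rng.normal(scale=0.05, size=10)  # coil currents with the flow on (A)

loads = K @ (i_on - i_off)   # lift, drag, side force, yaw moment, pitch moment
for name, value in zip(["lift (N)", "drag (N)", "side force (N)",
                        "yaw moment (N m)", "pitch moment (N m)"], loads):
    print(f"{name:18s} {value:+.4f}")
```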

Unsteady balance measurements The requirement for unsteady and time-resolved force measurements in wind tunnels stems from the need to understand complicated fluid–structure interactions. The dynamic range required of the load measurement system must extend beyond the dominant structural vibration frequencies encountered in these cases.
The more conventional strain-gauge-based systems have been very successful in measuring steady loads with considerable accuracy. Due to the static nature of the loads, the elastic compliance of the balance elements is neglected and the entire system is treated as rigid. Although this assumption is valid for steady measurements, accuracy inevitably suffers in time-resolved measurements. For the latter, the load measurement system needs to be as rigid as possible in order to resolve high-frequency unsteady events. An alternative to the strain-gauge technique that fulfills these requirements is the piezoelectric measuring system, which combines high rigidity with compact size and thus offers a viable alternative for unsteady load measurements [3].

Piezoelectric balances The general principle with any load balance system is to have the highest possible rigidity. This property ensures good isolation between individual loads (low cross-talk effects) and a broad dynamic range, since the natural frequencies of the system are relatively high. Strain-gauge balances have an additional constraint that contradicts this: they need relatively elastic elements in order to produce the required strain to be measured. As such, they can be defined as passive measurement systems in which the elastic element (balance body) has a different function from the measurement element (strain gauge). In contrast, a piezoelectric element can be considered active, since it combines both functions in one element. Consequently, the strain or deformation required by piezoelectric elements is one to two orders of magnitude lower than that of strain-gauge balances for comparable loads.
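As a back-of-the-envelope sketch of why rigidity buys dynamic range, the following snippet compares the natural frequency f_n = (1/2π)·√(k/m) of the same model mass mounted on a compliant and on a stiff sensing element; the stiffness and mass values are assumed for illustration only:

```python
import math

m = 0.5               # model plus metric-part mass (kg), assumed
stiffness = {
    "strain-gauge flexure": 2.0e6,   # relatively compliant element (N/m), assumed
    "piezoelectric stack": 2.0e8,    # roughly 100x stiffer element (N/m), assumed
}

for name, k in stiffness.items():
    f_n = math.sqrt(k / m) / (2.0 * math.pi)   # natural frequency (Hz)
    print(f"{name:22s} f_n = {f_n:7.0f} Hz")
```

With these assumed numbers, the stiffer element raises f_n, and hence the usable measurement bandwidth, by a factor of ten.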
The operational principle of piezoelectric force transducers can be explained by considering a crystalline material, for example, quartz. The internal structure of the crystal is composed of ions arranged in a highly organized manner. If mechanical stress is applied on the external surface of the crystal, the resulting deformation causes a redistribution of the positive and negative species and the polarization of the material. This effectively means that the sides of the crystal become charged, with opposite sides along the stress axis attaining opposite signs. The manner in which polarization occurs depends on the way the crystal is cut and the alignment of its crystallographic axes. As such, variants optimized for measuring normal or shear loads are possible.
The piezoelectric force transducer is typically formed in a circular shape, with a ring of quartz (or other piezoelectric material) arranged between steel plates. The steel plates transfer the longitudinal load to the quartz, forming a one-component transducer. The extension to a multicomponent transducer is achieved by using multiple quartz rings of different crystalline orientations. For example, a three-component transducer employs three quartz rings stacked and precompressed between the two steel plates. Two rings are sensitive in shear and aligned with a 90° difference to each other. The third ring is sensitive in normal loading and is usually placed between the two shear rings. The three rings are usually enclosed in a hermetically sealed structure that also serves as a pretensioner.
As already mentioned, piezoelectric balances are defined as active techniques in which the balance elements and the sensors are one and the same. Because the charge generated by the piezoelectric material leaks through the finite insulation resistance of the measuring chain, a time constant T = R_g C_g can be defined for the output of the sensor. This time constant describes an exponential decay of the charge signal. In this relation, C_g is the total capacitance and R_g is the insulation resistance of the entire system composed of transducer, cord, plugs, and amplifier. In other words, piezoelectric transducers cannot in principle measure steady loads, as their output drifts in time. Fortunately, this disadvantage can be overcome by a combination of precautions.
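As a minimal sketch of the decay implied by T = R_g C_g, with assumed, typical-order values for the insulation resistance and total capacitance (not specifications of any particular transducer chain):

```python
import math

R_g = 1.0e13   # insulation resistance of transducer + cable + amplifier (ohm), assumed
C_g = 1.0e-8   # total capacitance of the chain (F), assumed
T = R_g * C_g  # time constant (s); here 1e5 s, roughly 28 h

for t in (1.0, 10.0, 100.0):   # representative measurement durations (s)
    print(f"after {t:5.0f} s the charge signal retains {math.exp(-t / T):.6f} of its value")
```

Over a test lasting seconds the decay is negligible, while over hours it is not, consistent with piezoelectric balances suiting unsteady rather than truly static loads.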
The most straightforward approach is based on the nature of the drift. Fault currents in the input devices of the piezoelectric amplifier, which are nearly constant, dominate

Table 13.2 Main specifications of Cornell Aeronautical Laboratory Inc. balances

  Balance name             H       E       K
  Normal force (N)         533     280     204
  Side force (N)           284     177     204
  Axial force (N)          667     142     756
  Rolling moment (N m)     0.52    0.084   0.18
  Pitching moment (N m)    0.33    0.17    0.084
  Yawing moment (N m)      0.17    0.11    0.084
  Diameter (cm)            7.9     3.32    2.54

  Source: Duryea, G.R. and Martin, J.F., IEEE Trans. Aerospace Electron. Syst., AES-4(3), 351, May 1968.

the signal drift. This causes a linear drift, which is a function of time only; the sign of the drift can be positive or negative. As such, a linear correction can be applied by simply logging a time stamp with each measurement. Additionally, zero-run measurements are performed while the flow is at rest, before and after the test. The zero run after the test shows a signal that has drifted compared to the zero run prior to the test. These two points can be used to define a linear correction function of drift versus time. To correct the actual measurements, one simply evaluates the correction for every point from its time stamp and the correction function.
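The correction described above amounts to a two-point linear fit. A minimal sketch, with invented zero-run readings and time stamps:

```python
import numpy as np

t0, z0 = 0.0, 0.000     # zero run before the test: time (s), output (V)
t1, z1 = 120.0, 0.036   # zero run after the test shows the drifted signal

slope = (z1 - z0) / (t1 - t0)   # linear drift rate (V/s)

t_meas = np.array([15.0, 40.0, 75.0])      # time stamps of the measurements (s)
v_meas = np.array([0.210, 0.455, 0.382])   # raw transducer outputs (V)

v_corrected = v_meas - (z0 + slope * (t_meas - t0))   # drift-free signals (V)
print(v_corrected)
```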
An example of a piezoelectric balance is given here. This balance was developed for measurements of forces on models in the Cornell Aeronautical Laboratory (CAL) Inc. shock tunnel [14] (Table 13.2). Due to the extreme accelerations and forces involved in shock tubes, the choice of piezoelectric elements is necessary in order to retain a high dynamic range. The CAL shock tunnels operate at Mach numbers between 5 and 30 with testing times typically between 2 and 10 ms. The acceleration experienced by models and balances in the shock tube can be up to 1000 g.
Due to the limited space in the shock tube and the need for high natural frequencies, it was necessary to design the balance as compact as possible. On the other hand, at large velocities, the capacity of the balance must be adequate to resolve large loads. To obtain an optimum solution, it was decided to design three balances of different capacity and size that can easily be interchanged depending on the application. The main specifications of the three balances are given in Table 13.2 [14]. Regarding sensitivity, the balances were rated at 0.22 V/N. Given an electrical noise level lower than 100 μV, the measurement resolution can be down to the mN scale (100 μV ÷ 0.22 V/N ≈ 0.45 mN).

Pressure-based techniques In contrast to directly attaching a model to a load-measuring system, indirect techniques make use of flow features in order to estimate the aerodynamic forces acting on the configuration. The majority of these techniques rely on pressure or velocity measurements, making use of the momentum balance in the wake of the model or directly integrating the pressure on the surface of the model. Here, we cover two such techniques, aimed at drag and lift estimation, respectively.

Wake rake measurements When an aerodynamic body is subject to a flow, a wake is formed downstream of the model. The wake is essentially a deficit in velocity, which signifies the extraction of momentum from the flow due to the presence of the model (Figure 13.8). By the principle of force equilibrium, the force responsible for the deceleration of the flow must be equal and opposite to the force acting on the aerodynamic body in the direction of the flow. This force is the aerodynamic drag.

FIGURE 13.8 A wake-survey rake, connected to a manometer, downstream of a model in the Langley ice tunnel. (a) Pressures across the wake at zero lift in turbulent flow, indicating the momentum deficit. (b) When the airflow is laminar, the drag is reduced.

If force equilibrium is assumed in a control volume enclosing the aerodynamic body, the drag can be calculated as the difference between the integral momentum entering the volume and the integral momentum leaving it:
D = \rho \int_{y_1}^{y_2} u(y) \left[ u_{\infty} - u(y) \right] dy \qquad (13.6)

where
u(y) is the velocity distribution across the wake
u∞ is the undisturbed freestream velocity
ρ is the fluid density
y1 to y2 is the range of distances over which the wake deficit is significant

It should be noted that this drag is expressed per unit span (N/m).
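A minimal sketch of evaluating Equation 13.6 numerically, using a synthetic Gaussian wake profile in place of measured rake data:

```python
import numpy as np

rho = 1.225        # air density (kg/m^3)
u_inf = 30.0       # freestream velocity from the upstream pitot (m/s)
y = np.linspace(-0.10, 0.10, 201)             # traverse positions (m)
u = u_inf - 6.0 * np.exp(-(y / 0.02) ** 2)    # wake profile (m/s), synthetic

# Trapezoidal evaluation of D = rho * integral of u (u_inf - u) dy:
f = rho * u * (u_inf - u)
drag_per_span = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))
print(f"sectional drag: {drag_per_span:.3f} N/m")
```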
Wake surveys for the estimation of drag can be performed with a variety of techniques, such as pitot-static tubes, hot wires, or particle image velocimetry (PIV). The most common technique makes use of a pitot-static tube traversing across the wake, registering the dynamic pressure used to derive velocities. In this case, an additional pitot-static tube is required upstream of the model in order to provide u∞. An extension of this technique involves the use of an array of pitot-static tubes, the so-called wake rake (Figure 13.8). Such an array can consist of up to several tens of pitot-static tubes connected to a multichannel manometer or a scanivalve in order to acquire the velocity distribution across the wake simultaneously.

Wall pressure measurements Lift can be measured by direct force balance measurements or by integrating the surface pressure on the model. The latter method, although quite accurate, involves the tedious and costly task of furnishing each model with distributed pressure taps. Althaus [15] proposed an alternative technique that makes use of the pressure distribution on the wall of the wind tunnel, which is indicative of the reaction forces due to the lift experienced by the model (Figure 13.9). The pressure distribution along the walls of the wind tunnel can be measured using multichannel pressure transducers or scanivalve systems.

FIGURE 13.9 Pressure distribution over a 2D airfoil of chord c in a freestream U and the corresponding pressure distribution over the wind tunnel walls. (From Wolken-Möhlmann, G. et al., J. Phys. Conf. Ser., 75(1), 2007. Copyright of IOP.)

The relation between the lift coefficient of the model and the pressure sensed at the walls can be found by considering the effect of the model as a distribution of vorticity. The distribution of pressure at the wall can then be used to estimate the vorticity strength. Due to the finite length of the pressure measurement system, not the entire effect can be captured; the missing pressures can be calculated using a fitting formula.
The lift coefficient is given by

C_L = \frac{p_p - p_s}{q} \, \frac{L}{c} \, \frac{1}{n} \qquad (13.7)

where
p_s is the overall pressure due to the suction side
p_p is the overall pressure due to the pressure side
q is the dynamic pressure in the freestream
L is the length of the tunnel walls over which the pressure is measured
c is the chord of the model

The coefficient n is the so-called Althaus factor, which accounts for the effect of non-infinite walls and can be evaluated experimentally.
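A minimal sketch of Equation 13.7 with synthetic wall-pressure distributions and an assumed Althaus factor; here the "overall pressure" on each side is taken as the average over the instrumented wall length, which is one plausible reading of the formula:

```python
import numpy as np

q = 550.0   # freestream dynamic pressure (Pa), assumed
c = 0.20    # model chord (m), assumed
L = 1.50    # instrumented wall length (m), assumed
n = 1.08    # Althaus factor from calibration, assumed

x = np.linspace(0.0, L, 50)                              # tap positions (m)
p_suction = -40.0 * np.exp(-((x - 0.75) / 0.4) ** 2)     # suction-side wall (Pa), synthetic
p_pressure = 25.0 * np.exp(-((x - 0.75) / 0.4) ** 2)     # pressure-side wall (Pa), synthetic

ps, pp = p_suction.mean(), p_pressure.mean()   # "overall" wall pressures (Pa)
CL = (pp - ps) / q * (L / c) / n
print(f"lift coefficient: {CL:.3f}")
```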

rotating systems

Rotating shaft balances for propeller testing Rotating systems can be defined as aerodynamic components whose primary operation involves rotation about one or more axes. Propellers, helicopter rotors, wind turbine rotors, and turbomachinery are some examples. Several conventional techniques can be applied for the aerodynamic testing of rotating systems. Nevertheless, the complexity of such an endeavor is increased compared to fixed-wing systems. In particular, the measurement of forces and moments on rotors is subject to severe constraints.
A specialized kind of internal balance designed for use with rotating components is the so-called rotating shaft balance (RSB) [17]. These are typically strain-gauge-based flexures designed to measure isolated propeller loads. They are typically constructed as rotating parts, integrated into the propeller or rotor assembly. As such, they rotate with the propeller while measuring torque, thrust, and in-plane loads. A full six-DOF RSB is able to measure all components of the loads acting on a propeller or rotor.
Typically, RSBs are designed as spoked structures with an inner and an outer rim. The outer rim is attached to the propeller, while the inner ring is attached to the power axis. The sensing elements are based on strain gauges and are positioned on the spokes. This form is the product of a design trade-off between the required sensitivity for all the load components and the necessary stiffness of the balance itself. The spoke design allows for large sensitivity to axial loading (such as torque and thrust) while allowing for sufficient stiffness in the plane of rotation. On the other hand, the stiffness in the plane of rotation reduces the sensitivity, and thus the accuracy, for the off-axis or in-plane load components such as side forces and side moments.
As was mentioned, typical RSB designs employ a spoked configuration. An example is the family of RSBs developed by the National Aerospace Laboratory (NLR) in the Netherlands (Figure 13.10). They can be described in terms of four main elements: an inner ring, an outer ring, and two groups of three spokes each. The inner ring is mounted on the power shaft and is the nonmetric part. The propeller and hub are connected to the outer ring, which serves as the metric part. The two groups of spokes carry an array of strain gauges and act as flexures. The number of strain gauges varies per design, but typically a total of about 30 is used per RSB. These are connected as parts of four independent bridges, each bridge responsible for one load component. It should be noted that the three-spoked design was the first generation of RSB developed by NLR. It was able to measure four components (torque, thrust, total side force, and total side moment). In order to gain access to all six components, the total side force and the total side moment had to be decomposed into y and z components by FFT procedures or by using the angular position of the RSB. The second generation of NLR balance was an improved design with four spokes, able to give all six components explicitly.
Since the RSB is an electronic balance, a way of powering it and reading its output is needed without interrupting rotation. The most typical solution is the use of cooled, low-friction slip rings. Other methods, based on wireless transmission or the detection of induction currents, are also possible.
Measurements with an RSB require a considerable amount of correction. First, a high-order calibration matrix is needed due to the large cross talk between components. The calibration needs to be done at specialized facilities, preferably by the manufacturer of the RSB. Another consideration is the centrifugal load, which tends to tension the flexures during rotation. It has been found that only the axial loads need to be corrected for this effect; usually, a calibrated correction based on the number of revolutions per minute is adequate. Finally, an important issue that needs to be accounted for is the existence of strong temperature gradients in the body of the RSB due to limited space and inadequate ventilation. Good practice is to correct all the bridges for zero-shift and sensitivity changes due to thermal effects.
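A sketch of a plausible RSB data-reduction chain, applying a calibration matrix to the four bridge outputs and then an rpm-based centrifugal correction to the thrust; the matrix and coefficients below are invented placeholders, not NLR calibration data:

```python
import numpy as np

# Calibration matrix mapping bridge voltages to loads; the off-diagonal
# terms represent the cross talk that the high-order calibration captures.
C = np.array([[250.0,   5.0,   2.0,  1.0],    # thrust (N per V)
              [  4.0,  30.0,   0.5,  0.2],    # torque (N m per V)
              [  2.0,   0.3, 120.0,  3.0],    # side force (N per V)
              [  0.5,   0.1,   2.5, 40.0]])   # side moment (N m per V)

bridges = np.array([0.85, 0.42, 0.10, 0.05])  # bridge outputs (V), synthetic
loads = C @ bridges

rpm = 6000.0
k_cf = 0.008 / 6000.0**2            # centrifugal thrust coefficient, assumed
loads[0] *= 1.0 - k_cf * rpm**2     # remove an ~0.8% thrust offset at 6000 rpm

for name, value in zip(["thrust (N)", "torque (N m)",
                        "side force (N)", "side moment (N m)"], loads):
    print(f"{name:18s} {value:8.2f}")
```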
An example of the design of a first-generation RSB developed by NLR is given here. The balance is based on a three double-spoke design. This form employs a central shaft connected to the driving motor. Three pairs of spokes arranged symmetrically around the rotation axis connect the central shaft to an outer rim, which carries the propeller. By design, this is a four-component balance measuring two in-plane (side force, side moment) and two off-plane (thrust, torque) loads. The maximum load capacities are given in Table 13.3.

FIGURE 13.10 Rotating shaft balances designed by the NLR, the Netherlands. (Courtesy of German Dutch Windtunnels [DNW], Marknesse, the Netherlands.)

Table 13.3 Maximum loads of the National Aerospace Laboratory (NLR) first-generation rotating shaft balance

                Off-plane                     In-plane
  Thrust (N)    Torque (N m)    Force (N)     Moment (N m)
  1250          150             600           200

Accuracies of 0.5% on the off-plane components and 2.5% on the in-plane components were achieved. Further considerations concern the sensitivity of the measurements. The RSB has an outer diameter of 145 mm and an inner diameter of 30 mm. At a typical rotational speed of 6000 rpm, the spokes flex due to centrifugal forces. Measurements have shown that only the thrust is affected by this effect [17]; a change of 0.8% at 6000 rpm was anticipated.

13.3 Force measurements for evaluation of flow control actuators

Flow control actuators have received overwhelming attention from the aerodynamics community in recent years. Several concepts, such as micro-electro-mechanical systems, piezoelectric elements, fluidic actuators, and synthetic jets, have been rigorously investigated. Such actuators, through different mechanisms, provide momentum to the flow and allow control of the boundary layer and thus of the aerodynamic forces around a body. A type of actuator that has been particularly popular is the AC dielectric barrier discharge (AC-DBD) plasma actuator (Figure 13.11). These actuators are considered here as a test case for presenting a series of techniques for the measurement of forces produced by flow control actuators. Variations of these techniques can be applied to other types of actuators.
AC-DBD actuators are simple devices relying on the creation of weak plasma discharges. They are formed by two metallic electrodes separated by a dielectric layer. Application of an alternating high voltage between the two electrodes results in the development of a strong electric field, which, in turn, ionizes the surrounding air and creates a weak plasma cloud. The plasma, under the influence of the electric field, moves and, through collisional processes, transfers momentum to the surrounding neutral air. This is macroscopically perceived as a body force, and it is essentially the flow control mechanism attributed to plasma actuators.
The inherent features of plasma actuators render them ideal for active flow control. They are relatively easy to manufacture, operate with low power consumption, exhibit high-frequency response, and have no moving parts. On the other hand, their flow control authority is still limited, and efficient scaling to operation at high Reynolds numbers is challenging. The improvement of plasma actuator performance has been one of the major driving factors behind the vast number of characterization studies published.

FIGURE 13.11 Geometrical configuration and operation of a DBD plasma actuator: an exposed active electrode and a grounded electrode are separated by a dielectric layer and driven by a high-voltage supply; the ionization region (plasma) induces a velocity along the wall.



Force measurements of plasma actuators

Steady forces The flow control authority of AC-DBD plasma actuators is attributed to the creation of a volume-distributed body force of Coulombian origin. This force is a product of collisions between heavy charged species (positive and negative ions) and neutral air particles. Several studies have attempted to measure the plasma-produced force. Integral approaches involve either direct reaction force measurement, based on balance and load cell readings, or control-volume momentum estimation techniques based on velocity fields. More advanced techniques use experimental flow field data, typically from PIV, together with Navier–Stokes-based decompositions, in order to derive the full spatial and temporal distribution of the body force. An overview of these efforts is given in this section.
Measurement of the plasma body force using the principle of reaction is the most popular and accessible technique due to the simplicity of the setup. Nevertheless, due to the extremely small forces and the high voltages involved, accurate measurements are challenging. The concept relies on Newton's third law of motion, stating that for every action there is an equal and opposite reaction. The AC-DBD actuator is able to accelerate flow in quiescent conditions, as demonstrated by the formation of the wall-parallel jet. The accelerated flow exerts an equal and opposite force on the actuator and its supporting structure. By supporting the actuator on a load-sensitive device, such as an electronic balance or a load cell, the spatially integrated plasma force can be measured.
Several studies have made use of off-the-shelf electronic balances [18] or load cells (Figure 13.12). The typical accuracy of such devices is of the order of 1 mg. The actuator assembly is either placed directly on the balance or mounted via a leveraged pendulum in order to amplify the measured force. Special care should be taken due to the extremely small forces to be measured. The power and ground connections are usually made using thin copper electrodes or ball chains in order to prevent reaction forces from contaminating the measurement. Ashpis and Laun [19] review several practical methodologies for acquiring consistent force measurements for AC-DBD actuators. As an alternative to electronic balances, techniques based on the deflection of a pendulum can be applied for estimating the AC-DBD force.
An important issue regarding reaction force measurements is the existence of a viscous friction force between the developed wall jet and the plate carrying the actuator. The skin friction force always opposes the plasma actuator body force, thus resulting in an underestimation of the latter. It is then expected that the physical size of the flat plate downstream of the actuator affects the measured force. Indeed, it can be demonstrated that the measured force can increase by as much as 20% when shortening the flat plate from 15 to 2.5 cm. Furthermore, a recent study by Pereira et al. [20] has shown that the body force can be underestimated by as much as 50% due to friction forces.
A different technique to calculate the spatially integrated force of AC-DBD actuators has been momentum balance analysis. This is an implicit technique, which makes use of measured velocity profiles in the vicinity of the actuator. By applying a momentum integral in
FIGURE 13.12 Reaction force balance setup: the exposed and covered electrodes and the plasma generation circuit are mounted on a force balance that registers the thrust of the plasma-induced flow. (Reproduced with permission from Thomas, F.O. et al., AIAA J., 47(9), 2169, 2009. Copyright of AIAA.)

a control volume around the actuator, the space-integrated plasma force can be calculated. Velocity profiles acquired with pitot tubes, laser Doppler velocimetry (LDV), and PIV have all been used successfully with this approach. The choice of the size of the control volume is quite important here [21].
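A minimal sketch of such a control-volume estimate, using a synthetic wall-jet profile in place of measured data and, for simplicity, neglecting the pressure and entrainment terms discussed below:

```python
import numpy as np

rho = 1.225                         # air density (kg/m^3)
y = np.linspace(0.0, 0.02, 400)     # wall-normal coordinate (m)

u_in = np.zeros_like(y)             # quiescent inflow plane
u_out = 4.0 * (y / 0.002) * np.exp(1.0 - y / 0.002)   # wall jet (m/s), synthetic

def momentum_flux(u):
    """Trapezoidal integral of rho * u^2 dy per unit span (N/m)."""
    f = rho * u**2
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))

# Net streamwise force per unit span inside the control volume; with these
# boundaries it lumps the plasma body force and the opposing wall friction.
force = momentum_flux(u_out) - momentum_flux(u_in)
print(f"integrated force: {force * 1e3:.1f} mN/m")
```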
As with reaction force measurements, it is important to take the friction force into account. This force can be directly calculated using the measured velocity data and the shear stress definition (see Chapter 12). Alternatively, the skin friction can be considered part of the measured force. A second note concerns the existence of the pressure gradient term. This cannot be readily resolved using the velocity measurements, due to the existence of the plasma force. On the other hand, if the control volume is chosen such that its boundaries do not cross regions of significant pressure gradients, then its net effect can be incorporated into the value of the measured body force.
Measurements of time- and space-averaged forces are important for extensive parametric studies. On the other hand, the insight into the underlying physics governing the operation of the actuator is often limited by the lack of time-resolved information and of the spatial topological features of the produced body force. Several techniques have been proposed to gain access to the time-resolved forcing behavior of the actuator.
Porter et al. [21] used a calibrated accelerometer attached directly to the actuator in order to access the time-resolved evolution of the force. They carefully used a circular actuator so that the force impulse arrives at the accelerometer simultaneously from the entire actuator. Enloe et al. [22] used the same setup but a different technique for measuring the force: they employed a Michelson interferometer measuring the resonant oscillation of the pendulum carrying the actuator. The Michelson interferometer approach has also been used by Font et al. [23] on a torsional pendulum. It must be noted that these studies relied on calibration of the results against the theoretically predicted resonance based on a vibration analysis of the pendulum system. They verified earlier velocity-based observations regarding the dominance of the forward stroke and of negative ions in the momentum production budget. In fact, they showed that both half cycles produce positive forces.
Debien et al. [24] have made use of momentum balance analysis on time-resolved velocity measurements in order to derive the time-resolved plasma force. Their integral analysis was based on the approach of Noca et al. [25]. They showed the time-resolved evolution of the force for both plate and wire exposed electrodes. In contrast to the previously mentioned direct force measurements, they found a negative force during the backward stroke.

problems

13.1 Determine the capacity of the remaining DOF of the LTT TU Delft cantilever balance (Section 13.2).
13.2 Determine the necessary number of DOFs, the load capacity, and the accuracy of a strain-gauge balance able to measure the relevant forces and moments governing a 2D NACA0012 airfoil. The airfoil is to be placed in a closed-loop subsonic wind tunnel with a 40 × 40 cm test section at up to a chord Reynolds number of 10^5. The chord of the airfoil should be no larger than 20 cm.
13.3 Determine the configuration, capacity, resolution, and dynamic range of an internal piezoelectric balance to be used in a Mach 3 supersonic tunnel. The balance is required for the testing of a triangular delta wing at angles of attack ranging from 0° to 15°. The test section is cylindrical with a diameter of 20 cm.
13.4 Investigate whether a combination of strain gauges and piezoelectric elements in the design of a generic 6-DOF external balance would offer advantages over conventional designs. Elaborate on possible design implementations.
13.5 Design a balance setup for the measurement of a plasma actuator's forces. Give detailed quantification and explicit reasoning for your choices.

references

1. ISO 1151-1:1988, Flight dynamics—Concepts, quantities and symbols—Part 1: Aircraft motion relative to the air.
2. AIAA (1999). Assessment of experimental uncertainty with application to wind tunnel testing, AIAA S-071A-1999, AIAA, Reston, VA.
3. Tropea C, Yarin AL, Foss JF, eds. (2007). Handbook of Experimental Fluid Mechanics, Springer, 1557pp.
4. NACA Report No. 72, Wind Tunnel Balances.
5. Tatnall FG (1966). Tatnall on Testing, American Society for Metals, Metals Park, OH.
6. Beams JW (1950). Magnetic suspension balance, Physical Review, 78(4), 471–472.
7. Beams JW (1950). Magnetic suspension for small rotors, Review of Scientific Instruments, 21(2), 182–184.
8. Covert EE, Finston M, Vlajinac M, Stephens T (1973). Magnetic balance and suspension systems for use with wind tunnels, Progress in Aerospace Sciences, 14, 27–107.
9. Vlajinac M, Stephens T, Gilliam G, Pertsas N (1972). Subsonic and supersonic static aerodynamic characteristics of a family of bulbous base cones measured with a magnetic suspension and balance system, NASA Contractor Reports.
10. Zapata RN (1975). Magnetic suspension techniques for large-scale aerodynamic testing, AGARD Conference Proceedings 174, March 1976.
11. Kawamura Y, Mizota T (2013). Wind tunnel experiment of bluff body aerodynamic models using a new type of magnetic suspension and balance system, Journal of Fluids Engineering, Transactions of the ASME, 135(10).
12. Laurenceau P (1956). La Suspension Magnétique des Maquettes (The Magnetic Suspension of Models), O.N.E.R.A. Discussion Technique OP, Office National d'Etudes et de Recherches Aeronautiques, Chatillon-sous-Bagneux (Seine), Paris, France.
13. Lin CE, Yang CK (1997). Force and moment calibration for the NCKU 10 cm × 10 cm magnetic suspension wind tunnel, IEEE Instrumentation and Measurement Technology Conference, Ottawa, Ontario, Canada.
14. Duryea GR, Martin JF (May 1968). An improved piezoelectric balance for aerodynamic force, IEEE Transactions on Aerospace and Electronic Systems, AES-4(3), 351–359.
15. Althaus D (2009). Measurement of lift and drag in the laminar wind tunnel, University of Stuttgart, Stuttgart, Germany, http://www.iag.unistuttgart.de/laminarwindkanal/pdfdateien/liftdrag2.pdf.
16. Wolken-Möhlmann G, Knebel P, Barth S, Peinke J (2007). Dynamic lift measurements on a FX79W151A airfoil via pressure distribution on the wind tunnel walls, Journal of Physics: Conference Series, 75(1).
17. Custers LGM, Hoeijmakers AHW, Harris AE. Rotating shaft balance for measurement of total propeller force and moment, National Aerospace Laboratory NLR, Amsterdam, the Netherlands.
18. Thomas FO, Corke TC, Iqbal M, Kozlov A, Schatzman D (2009). Optimization of dielectric barrier discharge plasma actuators for active aerodynamic flow control, AIAA Journal, 47(9), 2169–2178.
19. Ashpis DE, Laun MC (2014). Dielectric barrier discharge (DBD) plasma actuators thrust—Measurement methodology incorporating new anti-thrust hypothesis, 52nd AIAA Aerospace Sciences Meeting—AIAA Science and Technology Forum and Exposition, SciTech 2014.
20. Pereira R, Ragni D, Kotsonis M (2014). Effect of external flow velocity on momentum transfer of dielectric barrier discharge plasma actuators, Journal of Applied Physics, 116.
21. Porter CO, Baughn JW, McLaughlin TE, Enloe CL, Font GI (2007). Plasma actuator force measurements, AIAA Journal, 45(7), 1562–1570.
22. Enloe CL, McHarg MG, McLaughlin TE (2008). Time-correlated force production measurements of the dielectric barrier discharge plasma aerodynamic actuator, Journal of Applied Physics, 103(7).
23. Font GI, Enloe CL, Newcomb JY, Teague AL, Vasso AR, McLaughlin TE (2011). Effects of oxygen content on dielectric barrier discharge plasma actuator behavior, AIAA Journal, 49(7), 1366–1373.
24. Debien A, Benard N, David L, Moreau E (2012). Erratum: Unsteady aspect of the electrohydrodynamic force produced by surface dielectric barrier discharge actuators [Applied Physics Letters 100, 013901 (2012)], Applied Physics Letters, 101(22).
25. Noca F, Shiels D, Jeon D (1997). Measuring instantaneous fluid dynamic forces on bodies, using only velocity fields and their derivatives, Journal of Fluids and Structures, 11(3), 345–350.