
Trends in Analytical Chemistry, Vol. 23, No. 7, 2004
Trends in quality in the analytical laboratory. I. Traceability and measurement uncertainty of analytical results

Isabel Taverniers, Erik Van Bockstaele, Marc De Loose

Credibility of analytical data has never caught the public's eye more than today. The key principle for quality and reliability of results is comparability between laboratories and on a wider, international basis. In order to be comparable, analytical results must be reported with a statement of measurement uncertainty (MU) and they must be traceable to common primary references. This work focuses on traceability and uncertainty of results. We discuss different approaches to establishing traceability and evaluating MU. We place both concepts in the broader context of analytical method validation and quality assurance. We give up-to-date information in the framework of new, more exacting European and international standards, such as those from Eurachem/CITAC, IUPAC and ISO.
© 2004 Published by Elsevier B.V.

Keywords: Analytical method validation; Measurement uncertainty; Quality assurance; Reliability of results; Traceability

Abbreviations: AOAC, Association of Official Analytical Chemists; AQA, analytical quality assurance; CCMAS, Codex Committee on Methods of Analysis and Sampling; CITAC, Cooperation on International Traceability in Analytical Chemistry; CRM, certified reference material; EAL, European Cooperation for Accreditation; FAO, Food and Agricultural Organization; IEC, International Electrotechnical Commission; ILAC, International Laboratory Accreditation Cooperation; IQC, internal quality control; ISO, International Standardization Organization; IUPAC, International Union of Pure and Applied Chemistry; LGC, Laboratory of the Government Chemist; MU, measurement uncertainty; QA, quality assurance; QC, quality control; RM, reference material; SI, Système International; U(u), expanded uncertainty (individual uncertainty factor); WHO, World Health Organization; X, concentration or amount of measurand

Isabel Taverniers*, Marc De Loose
Department for Plant Genetics and Breeding (DvP), Centre for Agricultural Research (CLO), Ministry of the Flemish Community, Caritasstraat 21, B-9090 Melle, Belgium

Erik Van Bockstaele
Department for Plant Genetics and Breeding (DvP), Centre for Agricultural Research (CLO), Ministry of the Flemish Community, Caritasstraat 21, B-9090 Melle, Belgium
Department for Plant Production, Ghent University, Coupure Links 653, B-9000 Gent, Belgium

*Corresponding author. Tel.: +32-9-272-2876; Fax: +32-9-272-2901; E-mail: i.taverniers@clo.fgov.be

1. Introduction: quality of analytical results

Innumerable types of analytical methods exist in the fields of analytical and bioanalytical chemistry, biochemistry, biology, clinical biology and pharmacology and related application domains, such as forensic, toxicological, environmental, agricultural and food analyses. Regardless of the type of method, the scope and the application, laboratories must be able to produce reliable data when performing analytical tests for a client or for regulatory purposes.

With the fast development of analytical methodologies, great importance is nowadays attached to the 'quality' of the measurement data. Quality of analytical measurement data encompasses two essential criteria – utility and reliability (Fig. 1) [1]. Utility means that analytical results must allow reliable decision making.

A key aspect of reliability or validity of results is that they are comparable, whatever their origin. Comparability between results in the strict sense is provided by traceability to appropriate standards. Traceability to common reference standards underlies the possibility of making a comparison – i.e., a distinction – between different results. If results are also to be compared in terms of their quantities or levels of analyte, additional information on the analytical result is needed – MU. Uncertainty of results arises from the combination of all uncertainties of the reference values (to which the results are traceable) and all additional uncertainties associated with the measurement procedure. MU and traceability are related concepts, both defining the quality of analytical data (Fig. 1) [2,3].


Figure 1. Relationship between quality, traceability and measurement uncertainty (MU) of results [3]. (Quality comprises utility and reliability (validity), i.e. comparability: traceability, via calibration, answers whether two results are comparable to, i.e. distinguishable from, each other (Y1 = Y2?); MU estimation answers which of the two results contains, with a certain level of confidence, the highest level of analyte (Y1 > Y2 or Y1 < Y2?).)

Quality of results reflects adequacy (or inadequacy) of a method in terms of the extent to which the method fulfils its requirements or is fit for its particular analytical purpose (see Section 2 below). Quality is always a relative notion, referring to the requirements fixed beforehand on the basis of national or international regulations or customer needs [1,4].

The need for reliability of analytical data is stressed by the fact that measurement results will be used and may form the basis for decision making. Unreliable results bring a high risk of incorrect decisions and may lead to higher costs, health risks, and illegal practices. Imagine, for example, the consequences if results are false positives or if the uncertainty is much larger than reported [1,5,6].

2. The role of method validation in traceability and MU

An analysis is a complex multistage investigation of the values of the properties of materials, i.e., the identity and the concentration of a specific component in a specific sample material [2,7]. van Zoonen et al. [1] presented chemical analysis as a cyclic process in which the final objective is the generation of chemical information. This integrated process starts with defining the basic analytical problem (specifying the analytical requirement) and ends with evaluating and reporting the analytical result. Ideally, the last step provides an answer to the initial problem, as stated by a client or based on regulatory requirements.

The process of providing an answer to a particular analytical problem is presented in Fig. 2. The analytical system – which is 'a defined method protocol, applicable to a specified type of test material and to a defined concentration range of the analyte' – must be 'fit for a particular analytical purpose' [4]. This analytical purpose reflects the achievement of analytical results with an acceptable standard of accuracy. Without a statement of uncertainty, a result cannot be interpreted and, as such, has no value [8]. A result must be expressed with its expanded uncertainty, which, in general, represents a 95% confidence interval around the result. The probability that the mean measurement value is included in the expanded uncertainty is 95%, provided that it is an unbiased value that is made traceable to an internationally recognized reference or standard. In this way, the establishment of traceability and the calculation of MU are linked to each other. Before MU is estimated, it must be demonstrated that the result is traceable to a reference or standard which is assumed to represent the truth [9,10].

Figure 2. Role of method validation in quality of analytical measurements. Validation is the process to demonstrate the fitness-for-purpose of the analytical system [4,8,14,15]. (The figure shows the analytical system producing the analytical result, measured value ± uncertainty, for the measurand, which is then passed to interpretation and evaluation; validation links the system to its fitness-for-purpose.)


Traceability and MU both form parts of the purpose of an analytical method. Validation plays an important role here, in the sense that it 'confirms the fitness-for-purpose of a particular analytical method' [4]. The ISO definition of validation is 'confirmation by examination and provision of objective evidence that the particular requirements of a specified intended use are fulfilled' [7]. Validation is the tool used to demonstrate that a specific analytical method measures what it is intended to measure, and thus is suitable for its intended purpose [2,11]. In part II of this review, the classical method-validation approach is described, based on evaluation of a number of method-performance parameters. Summarized, the criteria-based validation process consists of precision and bias studies, a check for specificity/selectivity, a linearity check, robustness studies and, eventually, based on the practical requirements of the method, an assessment of the limits of detection and/or quantification.

The objective of validation is to verify that the measurement conditions and the equation used to calculate the final result include all the influences that will affect the final result. Validation measures the different effects, throughout the whole analytical system, that influence the result, and ensures that there are no other effects that have to be taken into account. A specificity test ensures that the method responds to the specific analyte of interest only, and not to other interferents or contaminants. A linearity check verifies that the supposed relationship between the signal and the units used for the analyte may be used. A bias study is a certified reference material (CRM) check that demonstrates that the method is not significantly biased; and precision and robustness studies cover the effects of variability in conditions, operators, equipment and time.

The role of method validation in the achievement of reliable results is:

(1) to include all possible effects or factors of influence on the final result;
(2) to make them traceable to stated references (reference methods, RMs or SI units);
(3) to know the uncertainties associated with each of these effects and with the references.

Validation is thus a tool to establish traceability to these references [2–4]. In this context, it is important to see the difference between traceability and accuracy. A method that is accurate, in terms of 'true' (i.e., approximating the 'true value'), is always traceable to what is considered to be the true value. However, the opposite is not correct. A method that is traceable to a stated reference is not necessarily true (accurate). Errors can still occur in this method, depending on the reference [12].

Analytical method validation forms the first level of quality assurance (QA) in the laboratory. Analytical QA (AQA) is the complete set of measures a laboratory must undertake to ensure that it is able to achieve high-quality data continuously. Besides the use of validated and/or standardized methods, these measures are effective internal quality control (IQC) procedures (use of RMs, control charts, ...), participation in proficiency-testing schemes and accreditation to an international standard, normally ISO/IEC 17025 [4]. Method validation and the different aspects of QA form the subject of part II of this review (Method validation and AQA).

3. Guidelines on traceability and uncertainty of results

Table 1 shows an overview of prominent institutions offering guidance and their guidelines on traceability, MU and related topics. In Europe, a leading role is played by Eurachem, a working group on analytical chemistry centralized at and originating from the UK's LGC (formerly Laboratory of the Government Chemist). Basic references are the CITAC/Eurachem guides on 'Quality in Analytical Chemistry' [2] and 'Traceability in chemical measurement' [3], and a Eurachem guide on MU [13,14]. Eurachem has also published guides on related topics, such as RMs [7] and method validation [15].

At the international level, relevant standards are available from IUPAC, ISO and AOAC International [4,8,16,17] and from the Codex Alimentarius's working group, CCMAS [18–21]. Other helpful guides have been published by EAL [22] and ILAC [23] (see Table 1 for explanations of abbreviations).

4. The concept of traceability

4.1. Definitions

Traceability is a relatively new term, gaining more and more attention in analytical measurement sciences. Traceability can be assigned to different aspects related to a measurement – such as traceability of a result, method, procedure, laboratory, product, material, and equipment. As such, there is no single definition of traceability.

Before exploring the different concepts of traceability, we can look to a more general, extended meaning. According to Valcarcel and Rios [24], the basic meaning of traceability integrates

(1) the establishment of one or more relationships to well-stated references or standards, and
(2) the documented 'history' of a product or a system.


Table 1. Overview of European and international guiding institutions and regulatory bodies with their guidelines and standards on traceability, measurement uncertainty (MU) and related topics

Body | Full name | Guidance on (references)
Eurachem; CITAC | A focus for analytical chemistry in Europe; Cooperation on International Traceability in Analytical Chemistry | Traceability [2,3]; MU [13,14]; reference materials [7]; validation [15]
IUPAC; ISO; AOAC International | International Union of Pure and Applied Chemistry; International Standardization Organisation; Association of Official Analytical Chemists | MU [4,8,16,17]
FAO/WHO: Codex/CCMAS | Food and Agricultural Organization/World Health Organisation: Codex Committee on Methods of Analysis and Sampling | MU [18–21]
EAL | European Cooperation for Accreditation | MU [22]
ILAC | International Laboratory Accreditation Cooperation | MU [23]

These two parts of the basic meaning can be found again when defining traceability as a property or a characteristic of different analytical facets. The different concepts of traceability are shown in Fig. 3.

The most obvious definition of traceability is a 'property of the result of a measurement or the value of a standard whereby it can be related to stated references, usually national or international standards, through an unbroken chain of comparisons all having stated uncertainties'. The different elements and the practical use of this basic definition will be explained in Section 4.2 below.

Traceability of a result is related to traceability of a method, which in turn is linked to traceability of standards and traceability of the equipment used in the analytical procedure (Fig. 3). A method is called traceable when it produces results (with their uncertainties) that are characterized by a defined traceability to well-stated references [24]. Walsh [25] defines traceable methods as 'validated official or standard methods or validated methods which contain uncertainty statements and which are embedded into a quality system and anchored to a common reference point'. Traceability among standards is considered as the most relevant basis for traceability of results [26], as shown in Fig. 3.

Traceability of equipment is defined as 'the detailed, timely, and customised recording of installation, malfunctioning and repairs, periodic calibration and corrections (if needed), hours of use, samples processed, standard used, etc., in such a way that all questions (what?, how?, who?, etc.) should have a detailed answer in the pertinent documents' [24].

Calibration is the set of operations used to establish the relationship between values shown by a measuring instrument and the values of measurement standards. By calibrating, the results of measurements are related to and thus made traceable to values of standards or references. In practice, calibration is performed by measuring samples with known amounts of analyte, such as CRMs, and monitoring the measurement response [2,3]. These definitions confirm the links between traceability of equipment, standards and results (Fig. 3).
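The calibration operation just described can be sketched in a few lines (an illustrative example, not taken from the paper; the standards, responses and the assumption of a straight-line response are invented):

```python
import numpy as np

# Known amounts of analyte in the calibration standards and the instrument
# responses they produce (illustrative values, e.g. mg/L and signal units).
conc_standards = np.array([0.0, 1.0, 2.0, 5.0, 10.0])
responses = np.array([0.02, 0.98, 2.05, 4.95, 10.10])

# Establish the relationship between measured response and amount of analyte
# (here assumed to be a straight line fitted by least squares).
slope, intercept = np.polyfit(conc_standards, responses, 1)

def concentration(sample_response: float) -> float:
    """Relate an unknown sample back to the calibration standards."""
    return (sample_response - intercept) / slope

# The unknown result is thereby made traceable to the values of the standards.
print(f"estimated concentration: {concentration(3.30):.2f}")
```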

Figure 3. Different concepts and extended meaning of traceability [24]. (Traceability of results rests on traceability of methods, which in turn rests on traceability of standards and of equipment; the extended meaning combines (1) relationship(s) to well-stated references or standards and (2) the documented 'history' of a product or a system.)


4.2. Traceability in practice

The practical establishment of traceability is based on a step-by-step implementation of the definition. The ISO definition of traceability, originating from metrology (see above), can be translated into three basic steps:

(1) to establish one or more links to well-stated references,
(2) through an unbroken chain of comparisons, and
(3) to estimate all uncertainties associated with those comparisons [12,24,27].

This definition is very much in line with the more practical definition described by Eurachem/CITAC [3] in its procedure for traceability, which consists of the following steps:

(1) specifying the measurand, the scope of measurement and the required uncertainty;
(2) choosing the method of measurement;
(3) validating the method of measurement;
(4) identifying/quantifying all influences that will affect the result;
(5) choosing appropriate references;
(6) estimating the uncertainty components associated with all influences and references [3].

The key principle in both approaches is that relationships to stated references are established and that this is done through an unbroken chain of comparisons. Practically, this means that the analytical procedure is first described as a chain or a flow diagram (step (2) in the ISO definition; steps (1) and (2) in the Eurachem/CITAC definition). The word unbroken means that there is no loss of information when considering the different steps in the analytical procedure leading to the measurement result. Each step in the procedure then needs to be linked to either a reference method, a RM or an SI unit (step (1) in the ISO definition; step (5) in the Eurachem/CITAC definition) [12,24,27].

Fig. 4 depicts the successive classes of stated references (materials or methods) in a so-called 'traceability chain'. Establishing traceability through a traceability chain brings a certain level of uncertainty, called 'calibration uncertainty' or 'traceability uncertainty' (see also Section 6 below) [3,7,28].

This then brings us to the third key element when applying definitions of traceability – the stated uncertainties. Each step in the traceability chain, with the uncertainty components of all the stated references, will contribute to the measurement result and thus to the uncertainty associated with it. Uncertainty components must thus be estimated at each step in the analytical process (step (3) in the ISO definition; step (6) in the Eurachem/CITAC definition).

As described above, validation is a tool to identify all possible effects or factors within the analytical procedure that can influence the final result. As such, steps (3) and (4) in the Eurachem/CITAC definition are additional steps that can be very helpful in establishing traceability [3].

Examples of how traceability is established in practice can be found in the literature. Recently, a Special Issue of TrAC was published on 'Challenges for achieving traceability of environmental measurements' (TrAC, Volume 23, 2004). This issue contains a lot of up-to-date information and practical examples in the particular domain of environmental analysis. Many authors reported on the most important and most difficult step in establishing traceability – the selection of stated references or standards. For different stages in the traceability chain shown in Fig. 4, descriptions and examples are given by Quevauviller and Donard [27], Charlet and Marschal [29] and Segura et al. [30]. Pan [28], Förstner [31] and Theocharopoulos et al. [32] applied the ISO definition for establishing traceability in different types of environmental methods of analysis.
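The idea of a chain of comparisons that accumulates uncertainty can be pictured schematically (a sketch assuming independent contributions combined in quadrature; the links and numerical values are invented):

```python
from math import sqrt

# One entry per comparison (link) in the chain, each with the standard
# uncertainty it contributes; the links and values are illustrative only.
chain = [
    ("working RM calibrated against certified RM", 0.80),
    ("certified RM characterised against primary RM", 0.30),
    ("primary RM realised in terms of the SI unit", 0.10),
]

# An unbroken chain means every link is documented with a stated uncertainty.
# Assuming independent contributions, the 'calibration' or 'traceability'
# uncertainty accumulates in quadrature along the chain.
u_traceability = sqrt(sum(u ** 2 for _, u in chain))
print(f"traceability uncertainty of the working reference: {u_traceability:.2f}")
```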

Figure 4. The traceability chain and the relationship between traceability and uncertainty of measurements. The three possibilities for establishing traceability referred to in Fig. 5 are indicated in bold [25,28]. (The chain runs, with increasing level of traceability, from in-house/working RMs and in-house/working methods, via certified RMs and reference methods, to primary RMs and primary methods and, ultimately, SI units; bias assessment and spiking link the working level to the reference level.)


A similar approach was followed by Sabe and Rauret [33] and Drolc et al. [34]; however, they based it upon the Eurachem/CITAC Guide on Traceability [3]. In their examples, all the authors took into account specific steps or influences in the analytical procedure that can lead to a 'broken' chain of comparisons, such as sampling and sample treatment or preparation steps. Some authors reported on uncertainties associated in particular with sampling [35,36].

5. The concept of MU

MU is the most important criterion in both method validation and IQC. It is defined as 'a parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand' [11,14]. The measurand refers to the particular quantity or the concentration of the analyte being measured. The parameter can be a standard deviation or the width of a confidence interval [14,37]. This confidence interval represents the interval on the measurement scale within which the true value lies with a specified probability, given that all sources of error have been taken into account [37]. Within this interval, the result is regarded as being accurate, i.e., precise and true [11].

It cannot be overemphasized that MU is different from error. The error of an individual analytical result, the difference between the result and the true value of the measurand, is always a single value [38]. Part of the value of a known error, the systematic error, can be used to correct a result. This means that, after correction, the result of an analysis may be very close to the true value. However, the uncertainty of the measurement may still be very large, because there is doubt or limited knowledge about how close the result is to the value. Uncertainty is expressed as a range and applies to an analytical procedure and a specific sample type, but to different determinations and thus measurement results. The value of the uncertainty cannot be used to correct a measurement result.

The error of an analytical result is related to the (in)accuracy of an analytical method and consists of a systematic component and a random component (Fig. 5) [14]. Precision and bias studies form the basis for evaluation of the accuracy of an analytical method [18]. The accuracy of results relates only to the fitness-for-purpose of an analytical system, assessed by method validation.
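The contrast between error and uncertainty can be put in symbols (an illustrative sketch; the notation is introduced here and is not used in the paper):

$$ e = x - x_{\mathrm{true}} = \delta + \varepsilon, \qquad x_{\mathrm{corr}} = x - \hat{\delta}, \qquad \text{reported as } x_{\mathrm{corr}} \pm U . $$

Here $\delta$ is the systematic and $\varepsilon$ the random part of the single-valued error; subtracting the known estimate $\hat{\delta}$ corrects the value, but the uncertainty $U$ remains as a range expressing the residual doubt about how close the corrected result is to the true value.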

Figure 5. Composition of the error of an analytical result related to trueness and precision [4,8]. (The error, the difference between the analytical result and the true value, splits into a systematic component, bias, built up from matrix-variation, method, laboratory and run effects and indicating trueness, and a random component, imprecision, covering reproducibility, intermediate precision and repeatability and indicating precision; analysis of CRMs, duplicate analysis and statistical control are the minimum needed in single-laboratory method validation.)


However, reliability of results has to do with more than method validation alone.

MU is more than just a single-figure expression of accuracy. It covers all sources of errors that are relevant for all analyte concentration levels. MU is a key indicator of both fitness-for-purpose and reliability of results, binding together the ideas of fitness-for-purpose and QC and thus covering the whole QA system [4,37].

The MU of an analytical procedure is thus derived from, but differs from, the error of a single analytical result. The deviation of the measurement result from the true value comprises a number of systematic and random errors, as shown in Fig. 5. Each of these error components adds its own uncertainty to the total uncertainty budget of the analytical procedure. The different error components are therefore referred to as 'sources of uncertainty'. Depending on the sources of uncertainty taken into account and thus the conditions of the measurement, the overall MU will be different and another definition of MU will apply. This means that there is no single, straightforward definition of MU. It is rather a concept, the interpretation of which changes according to the measurement conditions and to the reference to which the result is traceable [10]. The different definitions of MU are the subject of the following section.

6. Different operational definitions of MU

As illustrated in Fig. 6, the error of an analytical result for a specified analyte concentration comprises different error components, forming together the 'ladder of errors':

(1) the method bias, a systematic error associated with the method as such;
(2) the laboratory bias, which is either a systematic error – if the laboratory is considered on its own – or a random error – if the laboratory is considered as one of a group, as is the case in interlaboratory studies;
(3) the run error, seen as a systematic error for one run and as a random variation over several runs performed intralaboratory;
(4) the repeatability error, which is a random error from the replicate measurements performed within a single run [10].

As the error being considered applies for only a specified concentration of analyte isolated from a specified type of sample or matrix, sampling errors and matrix variation effects are not included here [4].

The more traditional distinction between error components is between random errors and systematic errors (Fig. 5). In this classical approach, random errors are generally referred to as 'precision' (repeatability, intermediate precision and reproducibility), while systematic errors are typically attributed to the uncertainty on the bias estimate and calibration uncertainty. To this classification, other uncertainty contributions are added, such as sampling effects, matrix effects and uncertainties associated with certain assumptions that underlie the measurement method and/or the calculation equation [2].

Figure 6. Composition of the error of an analytical result related to measurement uncertainty. Different operational definitions of measurement uncertainty [10]. Below on the left are the three possibilities for establishing traceability (see also Fig. 4). (The figure decomposes the result as: result = true value + method bias + laboratory bias + run error + repeatability error, where the method bias itself comprises the traceability uncertainty and the uncertainty on the estimated bias. Intermediate precision gives the within-laboratory uncertainty; reproducibility precision gives the reproducibility uncertainty; adding the bias contribution gives the bias-included and absolute uncertainty. Bias is estimated with (1) a reference method, (2) spiking or (3) a certified reference material (CRM).)


As mentioned above, each of these error components is a potential source of uncertainty. Depending on the conditions under which the analysis is performed, different sources of uncertainty contribute to the overall uncertainty. Hund et al. [10] introduced different operational definitions of uncertainty, according to the number and type of uncertainty sources considered (Fig. 6):

(1) within-laboratory uncertainty, derived from intermediate precision and including only the repeatability error and the run error;
(2) reproducibility uncertainty, derived from reproducibility precision (interlaboratory tests) and accounting for the repeatability error, run and laboratory effects;
(3) bias-included uncertainty and absolute uncertainty, which additionally take into account the method bias, the most important source of uncertainty because it refers to a reference or a standard to which the method is considered to be traceable. If the working method is not a primary method – which is traceable to SI units (see Fig. 4) – the method is always compared to another (reference) method or is applied using appropriate CRMs. This reference or standard needs to be considered when the uncertainty associated with the method bias is estimated. In addition to the uncertainty associated with this reference or standard, there is the uncertainty on the estimated bias (Fig. 6). The different possibilities of bias estimation – and thus of traceability – are depicted in Fig. 5. If the method is compared to a reference method, the uncertainty associated with this reference method is considered negligible and the bias is estimated as the difference between the two methods (case 1 in Fig. 6). If there is no method to compare with, bias can be estimated by spiking samples and assessing the difference between the spiked sample and the measured sample. In this case also, the uncertainty on the spike will approximate to zero (case 2) and the only method bias is the difference between the measured sample and the spiked sample. Absolute uncertainty can be estimated only if CRMs are used (case 3). Only in this case can full traceability to SI units be guaranteed [10].

7. Approaches to establishing MU

In general, to estimate the overall uncertainty on a particular result, it is necessary to know:

(1) all uncertainties arising from the measurement procedure itself; and
(2) all uncertainties associated with the references or standards to which the analytical results are made traceable [3].

Different approaches exist for the estimation of overall MU, as reviewed by several authors [9,10,39] and summarized in Table 2.

The most well-known, traditional approach is based on identifying, quantifying and combining all individual contributions to uncertainty. In this 'bottom-up approach', the overall uncertainty is derived from the uncertainties of the individual components. The component-by-component assessment of MU was originally developed for physical measurements and adopted by Eurachem for chemical measurements [13]. However, because of its complexity, this methodology has significant costs in time and effort and has never found widespread application.

A simplified approach to assessing MU is the 'fitness-for-purpose approach', which defines a single parameter called the fitness function. This fitness function has the form of an algebraic expression u = f(c) and describes the relationship between the MU and the concentration of the analyte. For example, u = 0.05c means that the MU is 5% of the concentration. Calculation of the MU will hereby rely on data obtained by evaluating individual method-performance characteristics, mainly repeatability and reproducibility precision, and preferably also bias [21,40,41]. This approach can more or less be seen as a simplification of the step-by-step protocol for testing the MU, as described by Eurachem [14].

Although MU comprises more than systematic and random errors, it can be estimated from method-validation data. Data from method-performance studies can give all or nearly all the information required to evaluate the uncertainty [2,4,18,37]. This includes the use of data from in-house and collaborative validation studies (typically precision data), proficiency-testing schemes (typically bias data) or QA data relevant for uncertainty. If such data are available and used to estimate the uncertainty, it is not necessary to estimate MU using the component-by-component approach [18,19]. In particular, validation studies and QC measures are considered highly relevant sources for estimating MU [42,43]. Table 2 describes three methodologies for assessing MU based on validation data.

In the Analytical Methods Committee's 'top-down approach' [37], the laboratory is seen from a higher level, as a member of a population of groups. As a consequence, systematic errors within one laboratory become random errors and the estimated uncertainty is the reproducibility uncertainty (Fig. 6). Examples of MU-estimation studies using data from collaborative ring trials are the works by Dehouck et al. [44] and Maroto et al. [45,46].
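A compact sketch of how the operational definitions above differ in the sources they pool, together with the fitness function u = f(c) of the fitness-for-purpose approach (the quadrature combinations and all numerical values are illustrative assumptions, not formulas quoted from the text):

```python
from math import sqrt

# Illustrative standard deviations / standard uncertainties.
s_r, s_run, s_lab = 0.40, 0.30, 0.50   # repeatability, run and laboratory effects
u_bias, u_ref = 0.20, 0.10             # uncertainty on the estimated bias and on the reference

# (1) within-laboratory uncertainty: repeatability and run effects only.
u_within = sqrt(s_r**2 + s_run**2)

# (2) reproducibility uncertainty: the between-laboratory effect is added.
u_reprod = sqrt(s_r**2 + s_run**2 + s_lab**2)

# (3) bias-included / absolute uncertainty: the method-bias contribution
#     (uncertainty on the estimated bias plus that of the reference, spike or
#     CRM against which it was estimated) is added as well.
u_absolute = sqrt(u_reprod**2 + u_bias**2 + u_ref**2)

# Fitness function of the fitness-for-purpose approach, u = f(c); here the
# example from the text, u = 0.05c (MU is 5% of the concentration).
fitness_u = lambda c: 0.05 * c

print(u_within, u_reprod, u_absolute, fitness_u(12.0))
```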


Table 2. Different approaches for estimating measurement uncertainty (MU)

Reference | Name of approach | Basic principle | Strengths | Weaknesses
Eurachem [13] and ISO [16] | Bottom-up, error-budget, error-propagation or component-by-component | Identification, quantification and combination of all sources of uncertainty | Holistic: all important sources of error should be included | Complex, expensive, time-consuming
Codex Alimentarius/CCMAS [18–21] | Fitness-for-purpose | Establishment of a fitness function, u = f(c), based mainly on precision and bias studies | Simple; MU can be assessed for different concentrations | Some sources of uncertainty may be overlooked
Analytical Methods Committee [37] | Top-down | Based on data obtained from inter-laboratory studies (precision) | MU can be assessed for different concentrations | Some sources of uncertainty may be overlooked; only applicable if data from collaborative studies are available
Eurachem [14]; Barwick & Ellison [47] | Validation-based | Based on inter- or intra-laboratory validation studies (precision, trueness, robustness) | Extension of validation work, so no extra work is needed | Some sources of uncertainty may be overlooked
Hund et al. [39] | Robustness-based | Based on robustness tests as intra-laboratory simulations of inter-laboratory studies | Simple, time-efficient | Some sources of uncertainty may be overlooked; the method must first be shown to be robust

The other two approaches mentioned in Table 2 [14,39,47] make use of different method-performance parameters. All three validation-based methodologies can be seen as simpler, and more time- and cost-efficient, extensions of validation.

However, it is important to note that not all sources of uncertainty are covered by method-performance data. Some sources that may need particular consideration in addition to the available data are sampling, pre-treatment, method bias, variation in conditions and changes in the sample matrix [14,18,41].

Many of those principles for estimating MU were applied in a case study for toluene in ground water performed by Armishaw [48]. His idea was that, for a routine method that has been validated previously and is performed in a laboratory where QC measures are in place, it is possible to estimate MU in a working afternoon. The key is to extract all relevant information from already available data: validation studies (bias/recovery and precision data); instrument calibration data (RM uncertainties); and information from regularly performed QC measurements (replicate analyses and control samples) and from the method procedure itself (sampling, homogeneity of samples, ...). After identifying the components contributing to the uncertainty budget and assigning the relevant sources of information, standard uncertainties were quantified and combined to form the combined standard uncertainty.

The results for toluene in water were reported as x ± U, where U was the expanded uncertainty obtained by multiplying the combined standard uncertainty, uc, by a coverage factor of 2. In addition to this bottom-up MU estimation (Approach 1), Armishaw reported the expanded uncertainty as a function of the concentration of toluene (Approach 2). Finally, the authors compared this experimentally assessed MU with calculated MU values based on: (1) a within-laboratory reproducibility estimate; (2) proficiency-test data; and (3) the models of Horwitz [49] and Thompson and Lowthian [50]. These models allowed calculation of %RSD values as a function of the analyte concentration. To obtain expanded uncertainty values, Armishaw multiplied the predicted SD values from the models by 2 [48]. All three MU calculations are variants of validation-based approaches (Approaches 3 and 4, Table 2).
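The combination and expansion steps described for the toluene case study can be sketched as follows (all component values are invented; the Horwitz relation is written in its commonly used form, which the text does not reproduce, so treat it as an assumption):

```python
from math import sqrt, log10

# Relative standard uncertainties assigned to the identified components of the
# budget (precision, bias/recovery, calibration/RM, sample homogeneity);
# the numbers are illustrative, not Armishaw's values.
components = {"precision": 0.040, "bias_recovery": 0.030,
              "calibration": 0.020, "homogeneity": 0.015}

u_c = sqrt(sum(u ** 2 for u in components.values()))  # combined standard uncertainty (relative)
U_rel = 2 * u_c                                       # expanded uncertainty, coverage factor k = 2

x = 12.5  # measured toluene concentration, ug/L (illustrative)
print(f"result: {x} +/- {U_rel * x:.2f} ug/L (k = 2)")

def horwitz_rsd_percent(mass_fraction: float) -> float:
    # Commonly cited form of the Horwitz model for the predicted %RSD as a
    # function of concentration expressed as a dimensionless mass fraction.
    return 2 ** (1 - 0.5 * log10(mass_fraction))

# 12.5 ug/L in water corresponds to a mass fraction of about 1.25e-8.
print(f"Horwitz-predicted RSD: {horwitz_rsd_percent(1.25e-8):.0f} %")
```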


8. Importance of traceability and MU

The underlying motivation to establish traceability and MU is the need to make decisions based on the analytical results obtained or to be in compliance with regulatory limits (for quantitative determinations) [51]. MU is an essential feature of analytical results, for three different reasons:

(1) Customers want to have an idea about the range of results and about the comparability of results between different laboratories [43]. Any result must be accompanied by a statement of MU, so that the user of the result knows the level of confidence associated with it [52]. The concepts of comparability and reliability of results have been discussed briefly in Section 1 and are presented in Fig. 1.
(2) It demonstrates traceability. Before MU can be evaluated, traceability to stated references or standards must be established. Moser et al. [43] claim that traceability is proved by the appropriate use of RMs or standards and by a full uncertainty budget.
(3) There is a requirement to know the method, to understand the underlying principles and mechanisms of the measurement procedure. Lack of profound knowledge of the method itself will lead to certain unknown uncertainty contributions not being taken into account and thus to gaps in the uncertainty budget. MU can be estimated only if the method is well understood [43,53].

MU is increasingly gaining attention, in particular within the framework of accreditation. The new accreditation standard ISO/IEC 17025 [17], which has been in force since December 2002, contains clear requirements about estimating MU and about when and how it should be stated in test reports. ISO/IEC 17025 requires MU to be reported when required by the client and when relevant to the application and the interpretation of the measurement results, within the framework of certain specifications or decision limits. The MU should be readily available and reported together with the result as X ± U, where U is the expanded uncertainty [17,47,51,54].

Eurachem and CCMAS within the Codex Alimentarius deal with MU as a separate issue [14,18–20]. The Analytical Methods Committee even claims that MU will become the main unifying principle of analytical data quality [37].

9. Summary

Rather than focusing on the techniques and methodologies being used, attention is nowadays paid to the quality and the reliability of the final results of analysis. This is influenced by greater demand for regulatory compliance and greater awareness of the customer: the client wants to know the level of confidence of the reported result. In order for results to be comparable, they must be reported with a statement of MU and they must be traceable to common primary references. Methods must be validated to show that they actually measure what they are intended to measure and that they are fit for exacting European and international standards, such as the ISO/IEC 17025 norm for laboratory accreditation. On the quality and reliability of analytical data rests the comparability of results for their specific purpose.

An analytical method is a complex, multi-step process, starting with sampling and ending with the generation of a result. Although every method has its specific scope, application and analytical requirement, the basic principles of QA are the same, regardless of the type of method or the sector of application. The information in this article is taken mainly from analytical chemistry, but it also applies to other sectors. The validation of analytical methods, the establishment of traceability of results and the assessment of MU should be done in a uniform, harmonized way, conforming with internationally recognized standards from institutions such as Eurachem, IUPAC or ISO. This update on analytical quality provides a common understanding of the topics of method validation, traceability and MU of measurements. It has elucidated the interrelationships between method validation and the traceability and MU of results. From all the guidelines and standards, we selected and summarized the most relevant information. We discussed different approaches to establishing traceability and assessing MU of analytical methods in general. We highlighted the importance of both concepts and the link with method validation and analytical QA.

Acknowledgements

We wish to thank Andrew Damant for giving suggestions and Friedle Vanhee for reading and assistance.

References

[1] P. van Zoonen, R. Hoogerbrugge, S.M. Gort, H.J. van de Wiel, H.A. van 't Klooster, Trends Anal. Chem. 18 (1999) 584.
[2] CITAC/Eurachem Guide: Guide to Quality in Analytical Chemistry – An Aid to Accreditation, 2002. Available from: <http://www.Eurachem.bam.de>.
[3] Eurachem/CITAC Guide: Traceability in Chemical Measurement, A Guide to Achieving Comparable Results in Chemical Measurement, Joint Eurachem/CITAC Working Group on Measurement Uncertainty and Traceability, 2003. Available from: <http://www.Eurachem.bam.de>.
[4] M. Thompson, S. Ellison, R. Wood, Pure Appl. Chem. 74 (2002) 835.
[5] R. Battaglia, Accred. Qual. Assur. 1 (1996) 256.
[6] R.J. Mesley, W.D. Pocklington, R.F. Walker, Analyst (Cambridge, UK) 116 (1991) 975.
[7] Eurachem Guide EEE/RM/062rev3, The Selection and Use of Reference Materials, A Basic Guide for Laboratories and Accreditation Bodies, 2002. Available from: <http://www.Eurachem.bam.de>.
[8] M. Thompson, R. Wood, Pure Appl. Chem. 67 (1995) 649.
[9] A. Maroto, R. Boque, J. Riu, F.X. Rius, Trends Anal. Chem. 18 (1999) 577.
[10] E. Hund, D.L. Massart, J. Smeyers-Verbeke, Trends Anal. Chem. 20 (2001) 394.
[11] J. Fleming, H. Albus, B. Neidhart, W. Wegscheider, Accred. Qual. Assur. 1 (1996) 87.
[12] Ph. Quevauviller, Trends Anal. Chem. 23 (2004) 171.
[13] S.L.R. Ellison, M. Rösslein, A. Williams (Eds.), Eurachem/CITAC Guide: Quantifying Uncertainty in Analytical Measurement, first ed., 1995. Available from: <http://www.Eurachem.bam.de>.
[14] S.L.R. Ellison, M. Rösslein, A. Williams (Eds.), Eurachem/CITAC Guide: Quantifying Uncertainty in Analytical Measurement, second ed., 2000. Available from: <http://www.Eurachem.bam.de>.


[15] Eurachem Guide: The Fitness for Purpose of Analytical Methods, A Laboratory Guide to Method Validation and Related Topics, LGC, Teddington, UK, 1998. Available from: <http://www.Eurachem.bam.de>.
[16] International Standards Organization, GUM: Guide to the Expression of Uncertainty in Measurement, ISO, Geneva, Switzerland, 1995.
[17] ISO/IEC 17025 on General Requirements for the Competence of Calibration and Testing Laboratories, ISO, Geneva, Switzerland, 1999.
[18] CX/MAS 01/8, Codex Alimentarius Commission, Codex Committee on Methods of Analysis and Sampling (FAO/WHO), Measurement uncertainty, Relationship between the analytical result, the measurement uncertainty and the specification in Codex standards, Agenda Item 4a of the 23rd Session, Budapest, Hungary, 26 February–2 March 2001.
[19] CX/MAS 02/6, Codex Alimentarius Commission, Codex Committee on Methods of Analysis and Sampling (FAO/WHO), Proposed draft guidelines on measurement uncertainty, Agenda Item 5 of the 24th Session, Budapest, Hungary, 18–22 November 2002.
[20] CX/MAS 02/13, Codex Alimentarius Commission, Codex Committee on Methods of Analysis and Sampling (FAO/WHO), The use of analytical results: sampling, relationship between the analytical results, the measurement uncertainty, recovery factors and the provisions in Codex standards, Agenda Item 9 of the 24th Session, Budapest, Hungary, 18–22 November 2002.
[21] CX/MAS 02/4, Codex Alimentarius Commission, Codex Committee on Methods of Analysis and Sampling (FAO/WHO), Proposed draft guidelines for evaluating acceptable methods of analysis, Agenda Item 4a of the 24th Session, Budapest, Hungary, 18–22 November 2002 + CX/MAS 02/4-Add 2, Dispute situations.
[22] EAL-G23, The Expression of Uncertainty in Quantitative Testing, EAL, 1996, 9 pp.
[23] ILAC-G17:2002, Introducing the Concept of Uncertainty of Measurement in Testing in Association with the Application of the Standard ISO/IEC 17025, ILAC Technical Accreditation Issues Committee, 2002, 7 pp. Available from: <www.ilac.org>.
[24] M. Valcarcel, A. Rios, Trends Anal. Chem. 18 (1999) 570.
[25] M.C. Walsh, Trends Anal. Chem. 18 (1999) 616.
[26] M. Valcarcel, A. Rios, Fresenius' J. Anal. Chem. 359 (1997) 473.
[27] Ph. Quevauviller, O.F.X. Donard, Trends Anal. Chem. 20 (2001) 600.
[28] X.R. Pan, Accred. Qual. Assur. 1 (1996) 181.
[29] P. Charlet, A. Marschal, Trends Anal. Chem. 23 (2004) 178.
[30] M. Segura, C. Camara, Y. Madrid, C. Rebollo, J. Azcarate, G.N. Kramer, B.M. Gawlik, A. Lamberty, Ph. Quevauviller, Trends Anal. Chem. 23 (2004) 194.
[31] U. Förstner, Trends Anal. Chem. 23 (2004) 217.
[32] S.P. Theocharopoulos, I.K. Mitsios, J. Arvanitoyannis, Trends Anal. Chem. 23 (2004) 237.
[33] R. Sabe, G. Rauret, Trends Anal. Chem. 23 (2004) 273.
[34] A. Drolc, M. Ros, M. Cotman, Anal. Bioanal. Chem. 378 (2004) 1243.
[35] M. Thompson, Accred. Qual. Assur. 3 (1998) 117.
[36] S. Roy, A.-M. Fouillac, Trends Anal. Chem. 23 (2004) 185.
[37] Analytical Methods Committee, Analyst (Cambridge, UK) 120 (1995) 2303.
[38] J. Fleming, H. Albus, B. Neidhart, W. Wegscheider, Accred. Qual. Assur. 2 (1997) 160.
[39] E. Hund, D.L. Massart, J. Smeyers-Verbeke, Anal. Chim. Acta 480 (2003) 39.
[40] Eurachem/EA Guide 04/10, Accreditation for Microbiological Laboratories, 2002. Available from: <http://www.Eurachem.bam.de>.
[41] S. Küppers, Accred. Qual. Assur. 3 (1998) 412.
[42] S.L.R. Ellison, V.J. Barwick, Analyst (Cambridge, UK) 123 (1998) 1387.
[43] J. Moser, W. Wegscheider, C. Sperka-Gottlieb, Fresenius' J. Anal. Chem. 370 (2001) 679.
[44] P. Dehouck, Y. Vander Heyden, J. Smeyers-Verbeke, D.L. Massart, P.H. Crommen, R.D. Marini, O.S. Smeets, G. Decristoforo, W. Van de Wauw, J. De Beer, M.G. Quaglia, C. Stella, J.L. Veuthey, O. Estevenon, A. Van Schepdael, E. Roets, J. Hoogmartens, Anal. Chim. Acta 481 (2003) 261.
[45] A. Maroto, J. Riu, R. Boque, F.X. Rius, Anal. Chim. Acta 391 (2003) 173.
[46] A. Maroto, R. Boque, J. Riu, F.X. Rius, Anal. Chim. Acta 446 (2001) 133.
[47] V.J. Barwick, S.L.R. Ellison, VAM Project 3.2.1, Development and Harmonisation of Measurement Uncertainty Principles, Part d, Protocol for Uncertainty Evaluation from Validation Data, Version 5.1, January 2000, p. 9. Available from: <http://www.caeal.ca/VAM%20uncertainty.pdf>.
[48] P. Armishaw, Accred. Qual. Assur. 8 (2003) 218.
[49] W. Horwitz, Anal. Chem. 54 (1982) 67A.
[50] M. Thompson, P.J. Lowthian, J. AOAC Int. 80 (1997) 676.
[51] B. King, Fresenius' J. Anal. Chem. 371 (2001) 714.
[52] I. Mueller-Harvey, Food Agric. Environ. 1 (2003) 9.
[53] M. Rösslein, Accred. Qual. Assur. 5 (2000) 88.
[54] N. Mueller, Accred. Qual. Assur. 7 (2002) 79.

Isabel Taverniers graduated in Agricultural and Applied Biological Sciences from the University of Gent, Belgium, in 1999. Until April 2001, she worked at AgriFing, a joint spin-off laboratory of Gent University, Hogeschool Gent and the Department for Plant Genetics and Breeding (DvP), where she specialized in DNA-fingerprinting technologies. She is now preparing a Ph.D. thesis in the Laboratory of Applied Plant Biotechnology of the Department for Plant Genetics and Breeding (CLO, Flemish Community).

Erik Van Bockstaele is Head of the Department for Plant Genetics and Breeding and Professor at the Faculty of Agricultural and Applied Biological Sciences of the University of Gent.

Marc De Loose is Head of the Section Applied Plant Biotechnology at the Department for Plant Genetics and Breeding.

