
COLLEGE OF ENGINEERING THIRUVANANTHAPURAM

METROLOGY AND
INSTRUMENTATION
KAILAS SREE CHANDRAN
S7 INDUSTRIAL 432
kailassreechandran@yahoo.co.in

S7 N 2010

SUBMISSION DATE: 30-10-2009



PRINCIPLES OF MEASUREMENT
Today, the techniques of measurement are of immense importance in
most facets of human civilization. Present-day applications of measuring
instruments can be classified into three major areas. The first of these is
their use in regulating trade, and includes instruments which measure
physical quantities such as length, volume and mass in terms of standard
units.
The second area for the application of measuring instruments is in
monitoring functions. These provide information that enables human beings
to take some prescribed action accordingly. Whilst there are thus many
uses of instrumentation in our normal domestic lives, the majority of
monitoring functions exist to provide the information necessary to allow a
human being to control some industrial operation or process. In a chemical
process for instance, the progress of chemical reactions is indicated by the
measurement of temperatures and pressures at various points, and such
measurements allow the operator to take correct decisions regarding the
electrical supply to heaters, cooling water flows, valve positions, etc. One
other important use of monitoring instruments is in calibrating the
instruments used in the automatic process control systems.
Use as part of automatic control systems forms the third area for the
application of measurement systems. The characteristics of measuring
instruments in such feedback control systems are of fundamental importance
to the quality of control achieved. The accuracy and resolution with which an
output variable of a process is controlled can never be better than the
accuracy and resolution of the measuring instruments used. This is a very
important principle, but one which is often inadequately discussed in many
texts on automatic control systems. Such texts explore the theoretical
aspects of control system design in considerable depth, but fail to give
sufficient emphasis to the fact that all gain and phase margin performance
calculations, etc., are entirely dependent on the quality of the process
measurements obtained.

Measuring Equipment
A measuring instrument exists to provide information about the
physical value of some variable being measured. In simple cases, an
instrument consists of a single unit which gives an output reading or signal
according to the magnitude of the unknown variable applied to it. However,
in more complex measurement situations, a measuring instrument may
consist of several separate elements. These components might be contained
within one or more boxes, and the boxes holding individual measurement
elements might be either close together or physically separate. Because of
the modular nature of the elements within it, a measuring instrument is
commonly referred to as a measurement system, and this term is used
extensively to emphasize this modular nature.

Common to any measuring instrument is the primary transducer: this gives
an output which is a function of the measurand (the input applied to it). For
most but not all transducers, this function is at least approximately linear.
Some examples of primary transducers are a liquid-in-glass thermometer, a
thermocouple and a strain gauge. In the case of a mercury-in-glass
thermometer, the output reading is given in terms of the level of the
mercury, and so this particular primary transducer is also a complete
measurement system in itself. In general, however, the primary transducer
is only part of a measurement system. Primary transducers are widely
available for measuring a wide range of physical quantities.

The output variable of a primary transducer is often in an inconvenient
form and has to be converted to a more convenient one. For instance, the
displacement-measuring strain gauge has an output in the form of a varying
resistance. This is converted to a change in voltage by a bridge circuit, which
is a typical example of the variable conversion element.
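
As a rough illustration of this conversion step, the short sketch below
computes the output voltage of a quarter Wheatstone bridge when the gauge
resistance changes. The nominal resistance, supply voltage and resistance
change are assumed values chosen only for the example, not figures from
this report.

# Minimal sketch (illustrative values): converting a strain-gauge resistance
# change into a bridge output voltage, assuming a quarter Wheatstone bridge
# whose three fixed resistors equal the gauge's nominal resistance.

def quarter_bridge_output(delta_r, r_nominal=120.0, v_supply=5.0):
    """Bridge output voltage (V) for a gauge resistance change delta_r (ohms)."""
    r_gauge = r_nominal + delta_r
    v_gauge_arm = v_supply * r_gauge / (r_gauge + r_nominal)    # gauge half-bridge
    v_ref_arm = v_supply * r_nominal / (2 * r_nominal)          # reference half-bridge
    return v_gauge_arm - v_ref_arm

print(quarter_bridge_output(0.24))   # about 0.0025 V (2.5 mV) for a 0.24 ohm change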

Signal processing elements exist to improve the quality of the output of a
measurement system in some way. A very common type of signal processing
element is the electronic amplifier, which amplifies the output of the primary
transducer or variable conversion element, thus improving the sensitivity
and resolution of measurement. This element of a measuring system is
particularly important where the primary transducer has a low output. For
example, thermocouples have a typical output of only a few millivolts. Other
types of signal processing element are those which filter out induced noise
and remove mean levels, etc.
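
A minimal sketch of two such signal processing operations, removing a mean
level and smoothing induced noise with a moving average, is given below;
the sample values and window length are arbitrary choices for illustration.

# Remove the mean level of a signal, then smooth it with a simple moving average.

def remove_mean(samples):
    mean = sum(samples) / len(samples)
    return [s - mean for s in samples]

def moving_average(samples, window=3):
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

noisy = [2.01, 1.98, 2.05, 1.97, 2.02, 2.00, 2.04]
print(moving_average(remove_mean(noisy)))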

The observation or application point of the output of a measurement
system is often some physical distance away from the site of the primary
transducer which is measuring a physical quantity, and some mechanism of
transmitting the measured signal between these points is necessary.
Sometimes, this separation is made solely for purposes of convenience, but
more often it follows from the physical inaccessibility or environmental
unsuitability of the site of the primary transducer for mounting the signal
presentation/recording unit. The signal transmission element has
traditionally consisted of single- or multi-cored cable, which is often
screened to minimize signal corruption by induced electrical noise. Now,
optical fiber cables are being used in ever increasing numbers in modern
installations, in part because of their low transmission loss and
imperviousness to the effects of electrical and magnetic fields.

The final element in a measurement system is the point where the
measured signal is utilized. In some cases, this element is omitted
altogether because the measurement is used as part of an automatic control
scheme, and the transmitted signal is fed directly into the control system. In
other cases, this element takes the form of either a signal presentation unit
or a signal recording unit. These take many forms according to the
requirements of the particular measurement application.

PRECISION

Precision is how close the measured values are to each other. The
precision of a measurement is the size of the unit you use to make a
measurement. The smaller the unit, the more precise the measurement.
Precision depends on the unit used to obtain a measure. Consider measures
of time, such as 12 seconds and 12 days. A measurement of 12 seconds
implies a time between 11.5 and 12.5 seconds. This measurement is precise
to the nearest second, with a maximum potential error of 0.5 seconds. A
time of 12 days is far less precise. Twelve days suggests a time between
11.5 and 12.5 days, yielding a potential error of 0.5 days, or 43,200
seconds! Because the potential error is greater, the measure is less precise.
Thus, as the length of the unit increases, the measure becomes less precise.

The number of decimal places in a measurement also affects precision. A
time of 12.1 seconds is more precise than a time of 12 seconds; it implies a
measure precise to the nearest tenth of a second. The potential error in
12.1 seconds is 0.05 seconds, compared with a potential error of 0.5
seconds with a measure of 12 seconds.
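
The idea that the written form of a value implies its potential error can be
sketched in a few lines; the helper below is only an illustration of the
half-unit rule discussed above.

# Potential (absolute) error implied by how a value is written:
# half of the place value of its last digit.

def potential_error(reported: str) -> float:
    if "." in reported:
        decimals = len(reported.split(".")[1])
        return 0.5 * 10 ** (-decimals)
    return 0.5

print(potential_error("12"))     # 0.5 seconds
print(potential_error("12.1"))   # 0.05 seconds
print(potential_error("12.0"))   # 0.05 seconds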

Although students learn that adding zeros after a decimal point is
acceptable, doing so can be misleading. The measures of 12 seconds and
12.0 seconds imply a difference in precision. The first figure is measured to
the nearest second—a potential error of 0.5 seconds. The second figure is
measured to the nearest tenth—a potential error of 0.05 seconds. Therefore,
a measure of 12.0 seconds is more precise than a measure of 12 seconds.

Differing levels of precision can cause problems with arithmetic operations.
Suppose one wishes to add 12.1 seconds and 14.47 seconds. The sum, 26.57
seconds, is misleading. The first time is between 12.05 seconds and 12.15
seconds, whereas the second is between 14.465 and 14.475 seconds.
Consequently, the sum is between 26.515 seconds and 26.625 seconds. A
report of 26.57 seconds suggests more precision than the actual result
possesses.

The generally accepted practice is to report a sum or difference to the same
precision as the least precise measure. Thus, the result in the preceding
example should be reported to the nearest tenth of a second; that is,
rounding the sum to 26.6 seconds. Even so, the result may not be as precise
as is thought. If the total is actually closer to 26.515 seconds, the sum to
the nearest tenth is 26.5 seconds. Nevertheless, this practice usually
provides acceptable results.
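
The addition example above can be checked numerically; the short sketch
below propagates the implied intervals and then rounds the sum to the
precision of the least precise measure.

# Interval bounds and reported sum for 12.1 s + 14.47 s.

a, a_err = 12.1, 0.05     # 12.1 s lies between 12.05 and 12.15 s
b, b_err = 14.47, 0.005   # 14.47 s lies between 14.465 and 14.475 s

low = (a - a_err) + (b - b_err)
high = (a + a_err) + (b + b_err)
print(low, high)          # about 26.515 and 26.625 seconds

print(round(a + b, 1))    # 26.6 s, reported to the nearest tenth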

Measurements in industrial settings such as a rubber manufacturing plant must be both accurate and
precise. Here a technician is measuring tire pressure.

Multiplying and dividing measures can create a different problem. Suppose
one wishes to calculate the area of a rectangle that measures 3.7
centimeters (cm) by 5.6 cm. Multiplication yields an area of 20.72 square
centimeters. However, because the first measure is between 3.65 and 3.75
cm, and the second measure is between 5.55 and 5.65 cm, the area is
somewhere between 20.2575 and 21.1875 square centimeters. Reporting
the result to the nearest hundredth of a square centimeter is misleading.
The accepted practice is to report the result using the fewest number of
significant digits in the original measures. Since both 3.7 and 5.6 have two
significant digits, the result is rounded to two significant digits and an area
of 21 square centimeters is reported. Again, while the result may not even
be this precise, this practice normally produces acceptable results.
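
The same kind of check applies to the area example; the sketch below
reproduces the interval bounds and the two-significant-digit result.

# Interval bounds for the area of a 3.7 cm by 5.6 cm rectangle.

w_low, w_high = 3.65, 3.75   # 3.7 cm lies in this interval
h_low, h_high = 5.55, 5.65   # 5.6 cm lies in this interval
print(w_low * h_low, w_high * h_high)   # about 20.2575 and 21.1875 cm^2

area = 3.7 * 5.6                         # 20.72 cm^2 as calculated
print(float(f"{area:.2g}"))              # rounded to two significant digits: 21 cm^2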

ACCURACY

Accuracy is how close a measured value is to the actual (true) value. The
accuracy of a measurement is the difference between your measurement
and the accepted correct answer. The bigger the difference, the less
accurate your measurement. Rather than the absolute error to which
precision refers, accuracy refers to the relative error in a measure. For
example, if one makes an error of 5 centimeters in measuring two objects
that are actually 100 and 1,000 cm, respectively, the second measure is
more accurate than the first. The first has an error of 5 percent (5 cm out of
100 cm), whereas the second has an error of only 0.5 percent (5 cm out of
1,000 cm).
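
A small sketch makes the relative-error comparison explicit:

# The same 5 cm error is a much larger fraction of a 100 cm object
# than of a 1,000 cm object.

def relative_error_percent(error, true_value):
    return 100.0 * error / true_value

print(relative_error_percent(5, 100))    # 5.0 percent
print(relative_error_percent(5, 1000))   # 0.5 percent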

Difference between Accuracy and Precision

To illustrate the difference between precision and accuracy, suppose that a
tape measure is used to measure the circumference of two circles—one
small and the other large. Suppose a result of 15 cm for the first circle and
201 cm for the second circle are obtained. The two measures are equally
precise; both are measures to the nearest centimeter. However,
their accuracy may be quite different. Suppose the measurements are both
about 0.3 cm too small. The relative errors for these measures are 0.3 cm
out of 15.3 cm (about 1.96 percent) and 0.3 cm out of 201.3 cm (about
0.149 percent). The second measurement is more accurate because the
error is smaller when compared with the actual measurement. Consequently,
for any specific measuring tool, one can be equally precise with the
measures. But accuracy is often greater with larger objects than with smaller
ones.

Confusion can arise when using these terms. The tools one uses affect
both the precision and accuracy of one's measurements. Measuring with a
millimeter tape allows greater precision than measuring with an inch tape.
Because the error using the millimeter tape should be less than the inch
tape, accuracy also improves; the error compared with the actual length is
likely to be smaller. Despite this possible confusion and the similarity of the
ideas, it is important that the distinction between precision and accuracy be
understood.

Examples of Precision and Accuracy:

Figure 1. Low accuracy, high precision.
Figure 2. High accuracy, low precision.
Figure 3. High accuracy, high precision.

In Figure 1, the marksman has achieved uniformity, although it is
inaccurate. This uniformity may have been achieved by using a sighting
scope, or some sort of stabilizing device. With the knowledge gained by
observation of the results, the marksman can apply a systematic adjustment
(aim lower and to the left of his intended target, or have his equipment
adjusted) to achieve more accurate results in addition to the precision that
his methodology and equipment have already attained. In Figure 2, the
marksman has approached the "truth", although without great precision. It
may be that the marksman will need to change the equipment or
methodology used to obtain the result if a greater degree of precision is
required, as he has reached the limitations associated with his equipment
and methodology. Figure 3 represents results indicating both accuracy and
precision. It differs from Figure 1 in that the marksman has probably made
one of the systematic adjustments that was indicated by his attainment of
precision without accuracy. The degree of precision has not changed greatly,
but its conformity with the "truth" has improved over the results obtained
in Figure 1. If the marksman from Figure 2 determines that his results are
not adequate for the task at hand, he has no choice but to change his
methodology or equipment. He has already performed to the limitations of
these.

Degree of Accuracy

Accuracy depends on the instrument you are measuring with. But as a
general rule: the degree of accuracy is half a unit each side of the unit of
measure.

Examples:

If the instrument measures in "1"s then any value between 6½ and 7½ is
measured as "7".

If the instrument measures in "2"s then any value between 7 and 9 is
measured as "8".

SENSITIVITY OF MEASUREMENT
All calibrations and specifications of an instrument are only valid under
controlled conditions of temperature, pressure, etc. These standard ambient
conditions are usually defined in the instrument specification. As variations
occur in the ambient temperature, etc., certain static instrument
characteristics change, and the sensitivity to disturbance is a measure of the
magnitude of this change. Such environmental changes affect instruments in
two main ways, known as zero drift and sensitivity drift.

Zero drift describes the effect where the zero reading of an instrument
is modified by a change in ambient conditions. Typical units by which zero
drift is measured are volts/°C, in the case of a voltmeter affected by ambient
temperature changes. This is often called the zero drift coefficient related to
temperature changes. If the characteristic of an instrument is sensitive to
several environmental parameters, then it will have several zero drift
coefficients, one for each environmental parameter. The effect of zero drift is
to impose a bias in the instrument output readings; this is normally
removable by recalibration in the usual way.

Sensitivity drift (also known as scale factor drift) defines the amount
by which an instrument's sensitivity of measurement varies as ambient
conditions change. It is quantified by sensitivity drift coefficients which
define how much drift there is for a unit change in each environmental
parameter that the instrument characteristics are sensitive to.

Many components within an instrument are affected by environmental
fluctuations, such as temperature changes: for instance, the modulus of
elasticity of a spring is temperature dependent. Sensitivity drift is measured
in units of the form (angular degree/bar)/°C.
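
As a rough sketch of how the two drift coefficients might be applied in
practice, the example below corrects a raw reading for a temperature
change; the coefficient values and reference temperature are assumptions
made up for illustration, not figures from the text.

# Correct a reading for zero drift (bias) and sensitivity drift (scale factor),
# using assumed coefficients characterised against ambient temperature.

ZERO_DRIFT = 0.002   # output units per deg C (assumed)
SENS_DRIFT = 0.001   # fractional sensitivity change per deg C (assumed)
T_REF = 20.0         # calibration temperature, deg C

def corrected_reading(raw, temperature):
    dt = temperature - T_REF
    return (raw - ZERO_DRIFT * dt) / (1.0 + SENS_DRIFT * dt)

print(corrected_reading(5.08, 30.0))   # reading taken 10 deg C above reference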

Sensitivity may be defined as the rate of displacement of the indicating
device of an instrument with respect to the measured quantity. In other
words, the sensitivity of an instrument is the ratio of the scale spacing to
the scale division value. For example, if on a dial indicator the scale spacing
is 1.0 mm and the scale division value is 0.01 mm, then the sensitivity is
100. It is also called the amplification factor or gearing ratio.

CALIBRATION
Understanding instrument calibration and its proper use is an essential
element in an overall laboratory program. Proper calibration will ensure that
equipment remains within validated performance limits to accurately report
patient results. Calibration is the set of operations that establish, under
specified conditions, the relationship between the values of quantities
indicated by a measuring instrument and the corresponding values realized
by standards.

The result of a calibration permits either the assignment of values of
measurements to the indications or the determination of corrections with
respect to indications.

A calibration may also determine other metrological properties such as the
effect of influence quantities.

The result of a calibration may be recorded in a document, sometimes
called a calibration certificate or a calibration report.

A measuring instrument can be calibrated by comparison with a standard.

An adjustment of the instrument is often carried out after calibration in
order that it provides given indications corresponding to given values of the
quantity measured.

When the instrument is made to give a null indication corresponding to a
null value of the quantity to be measured, the set of operations is called
zero adjustment.

Calibration is the process of establishing the relationship between a
measuring device and the units of measure. This is done by comparing a
device or the output of an instrument to a standard having known
measurement characteristics. For example, the length of a stick can be
calibrated by comparing it to a standard that has a known length. Once the
relationship of the stick to the standard is known, the stick is calibrated and
can be used to measure the length of other things.

For many operations the quality of the calibration needs to be known and is
quantified by an uncertainty estimate for the calibration. This is so
important for the scientific community and manufacturing operations that it
has been proposed that an evaluation of the measurement uncertainty be
added as part of the calibration process.

Part of calibration is to zero the measuring device, the process of
establishing that the zero point of the device corresponds to zero on the
relevant scale.

Instrument Calibration

Calibration can be called for:

with a new instrument
when a specified time period has elapsed
when a specified usage (operating hours) has elapsed
when an instrument has had a shock or vibration which potentially may have put it out of calibration
whenever observations appear questionable

In non-specialized use, calibration is often regarded as including the
process of adjusting the output or indication on a measurement instrument
to agree with the value of the applied standard, within a specified accuracy.
For example, a thermometer could be calibrated so the error of indication or
the correction is determined, and adjusted (e.g. via calibration constants) so
that it shows the true temperature in Celsius at specific points on the scale.

Instrument calibration is one of the primary processes used to maintain
instrument accuracy. Calibration is the process of configuring an instrument
to provide a result for a sample within an acceptable range. Eliminating or
minimizing factors that cause inaccurate measurements is a fundamental
aspect of instrumentation design.

Although the exact procedure may vary from product to product, the
calibration process generally involves using the instrument to test samples
of one or more known values called "calibrators". The results are used to
establish a relationship between the measurement technique used by the
instrument and the known values. The process in essence "teaches" the
instrument to produce results that are more accurate than those that would
occur otherwise. The instrument can then provide more accurate results
when samples of unknown values are tested in the normal usage of the
product.

Calibrations are performed using only a few calibrators to establish the
correlation at specific points within the instrument’s operating range. While
it might be desirable to use a large number of calibrators to establish the
calibration relationship, or "curve", the time and labor associated with
preparing and testing a large number of calibrators might outweigh the
resulting level of performance. From a practical standpoint, a tradeoff must
be made between the desired level of product performance and the effort
associated with accomplishing the calibration. The instrument will provide
the best performance when the intermediate points provided in the
manufacturer’s performance specifications are used for calibration; the
specified process essentially eliminates, or "zeroes out", the inherent
instrument error at these points.
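
The two-calibrator idea can be sketched as a straight-line correction fitted
through the instrument's responses to Calibrator 1 and Calibrator 2, which
forces the error to zero at those two points. The calibrator values and raw
responses below are invented for illustration only.

# Two-point (linear) calibration: fit slope and offset from two calibrators.

cal1_true, cal1_raw = 10.0, 10.8   # known value, raw instrument response (assumed)
cal2_true, cal2_raw = 90.0, 88.4

slope = (cal2_true - cal1_true) / (cal2_raw - cal1_raw)
offset = cal1_true - slope * cal1_raw

def calibrated(raw):
    return slope * raw + offset

print(calibrated(cal1_raw), calibrated(cal2_raw))   # about 10.0 and 90.0: errors zeroed
print(calibrated(50.0))                             # corrected mid-range reading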

Importance of Calibration

The following figures depict how a properly performed calibration can
improve product performance.

If a product were perfect from an accuracy standpoint, then the results for
a series of tests would form the curve in Figure 8 labeled "Ideal Results".
But what if the test results form the curve labeled "Actual Results"?
Although the curve has been greatly exaggerated for the purpose of this
discussion, there is an error at any point within the operating range with
respect to the Ideal Curve. In addition, the error is not constant over the
operating range.

Figure 8. Uncalibrated Graph

Calibrating the product can improve this situation significantly. In Figure 9,
the product is "taught", using the known values of Calibrators 1 and 2, what
result it should provide. The process eliminates the errors at these two
points, in effect moving the Actual Results curve closer to the Ideal Results
curve. The Error At Any Point has been reduced to zero at the calibration
points, and the residual error at any other point within the operating range
(exaggerated by the curve) is within the manufacturer’s published linearity
or accuracy specification.

Figure 9. Calibrated Graph

Factors affecting Calibration

Once the benefits of a properly performed calibration are understood, it
becomes evident that care must be taken during the process to prevent
potential error sources from degrading the results. Several factors can occur
during and after a calibration that can affect its result. Among these are:

Using the wrong calibrator values: It is important to closely follow the
instructions for use during the calibration process. Disregarding the
instructions and selecting the wrong calibrator values will "teach" the
instrument incorrectly, and produce significant errors over the entire
operating range. While many instruments have software diagnostics that
alert the operator if the calibrators are tested in the incorrect order (i.e.
Calibrator 2 before Calibrator 1), the instrument may accept one or more
calibrators of the wrong value without detecting the operator error.

Calibrator formulation tolerance: It is important to use calibrators that are
formulated to tight tolerance specifications by a reputable manufacturer.
There is a tolerance associated with formulating a calibrator/control due to
normal variations in the instrumentation and quality control processes. This
tolerance can affect the mean value obtained when using the calibrator.

Frequency of Calibration
The simple answer to this question, although not a very helpful one, is
"when it needs it." From a more practical standpoint, daily or periodically
testing the control solutions of known values can provide a quantitative
indication of instrument performance, which can be used to establish a
history. If the controls data indicate that instrument performance is stable,
or is varying randomly well within the acceptable range of values, then there
is no need to recalibrate the instrument. However, if the historical data
indicates a trend toward, or beyond, the acceptable range limits, or if the
instrument displays a short-term pronounced shift, then recalibration is
warranted. Realize also that specific laboratory standard operating
procedures or regulatory requirements may require instrument recalibration
even when no action is warranted from a results standpoint. These
requirements should always take precedence, and the above guidance used
at times when there is uncertainty as to whether instrument recalibration
should be performed to improve accuracy.

STANDARDS OF MEASUREMENTS
In A.D. 1120 the king of England decreed that the standard of length
in his country would be named the yard and would be precisely equal to the
distance from the tip of his nose to the end of his outstretched arm.
Similarly, the original standard for the foot adopted by the French was the
length of the royal foot of King Louis XIV. This standard prevailed until 1799,
when the legal standard of length in France became the meter, defined as
one ten-millionth the distance from the equator to the North Pole along one
particular longitudinal line that passes through Paris.

Many other systems for measuring length have been developed over
the years, but the advantages of the French system have caused it to prevail
in almost all countries and in scientific circles everywhere. As recently as
1960, the length of the meter was defined as the distance between two lines
on a specific platinum–iridium bar stored under controlled conditions in
France. This standard was abandoned for several reasons, a principal one
being that the limited accuracy with which the separation between the lines
on the bar can be determined does not meet the current requirements of
science and technology. In the 1960s and 1970s, the meter was defined as 1
650 763.73 wavelengths of orange-red light emitted from a krypton-86
lamp. However, in October 1983, the meter (m) was redefined as the
distance traveled by light in vacuum during a time of 1/299 792 458 second.
In effect, this latest definition establishes that the speed of light in vacuum
is precisely 299 792 458 m per second.

The different types of standards of length are

1. Material Standards

(a) Line Standard – When length is measured as the distance between the
centers of two engraved lines.

(b) End Standard – When length is measured as the distance between two
flat parallel faces.

2. Wavelength Standard
The wavelength of a selected orange radiation of the Krypton-86 isotope
was measured and used as the basic unit of length.

Imperial Standard YARD

After the 1 July 1959 deadline, agreed upon in 1958, the US and the
British yard were defined identically, at 0.9144 metres to match the
international yard. Metric equivalents in this document usually assume this
latest official definition. Before this date, the most precise measurement of
the Imperial Standard Yard was 0.914398416 metres.

A YARD (abbreviation: yd) is a unit of length in several different systems,
including English units, Imperial units, and United States customary units.
It is equal to 3 feet or 36 inches, although its length in SI units varied
slightly from system to system. The most commonly used yard today is the
international yard, which is equal to precisely 0.9144 meter.

The yard is used as the standard unit of field-length measurement in
American, Canadian and association football.

There are corresponding units of area and volume, the square yard
and cubic yard respectively, and these are sometimes referred to simply as
"yards" when no ambiguity is possible. For example, an American or
Canadian concrete mixer marked with a capacity of "11 yards" or "1.5
yards" obviously refers to cubic yards.

Figure 10. Standard lengths on the wall of the Royal Observatory, Greenwich, London - 1 yard (3
feet), 2 feet, 1 foot, 6 inches (1/2 foot), and 3 inches. The separation of the inside faces of the
markers is exact at an ambient temperature of 60 °F (16 °C) and a rod of the correct measure,
resting on the pins, will fit snugly between them.

Figure 11. Imperial standards of length 1876 in Trafalgar Square, London.

International Prototype METRE


The meter is the length of the path travelled by light in vacuum during a
time interval of 1/299 792 458 of a second. The International Prototype
Metre was defined as the straight-line distance, at 0 °C, between the
engraved lines of a platinum-iridium alloy bar of 1020 mm total length and
having a Tresca cross-section as shown in the figure. The graduations are
on the upper surface of the web, which coincides with the neutral axis of the
section. The sectional shape gives better rigidity for the amount of metal
involved and is therefore economic in use for an expensive metal.

The metre or meter is the basic unit of length in the International System
of Units (SI). Historically, the metre was defined by the French
Academy of Sciences as the length between two marks on a platinum-
iridium bar, which was designed to represent one ten-millionth of the
distance from the Equator to the North Pole through Paris. In 1983, the
metre was redefined as the distance travelled by light in free space in
1/299,792,458 of a second.

The symbol for metre is m. Decimal multiples and submultiples such as the
kilometre and the centimetre are indicated by adding SI prefixes to metre.

In the 1870s, and in light of modern precision, a series of international
conferences were held to devise new metric standards. The Metre
Convention (Convention du Mètre) of 1875 mandated the establishment of a
permanent International Bureau of Weights and Measures (BIPM: Bureau
International des Poids et Mesures) to be located in Sèvres, France. This new
organisation would preserve the new prototype metre and kilogram
standards when constructed, distribute national metric prototypes, and
maintain comparisons between them and non-metric measurement
standards. The organisation created a new prototype bar in 1889 at the first
General Conference on Weights and Measures (CGPM: Conférence Générale
des Poids et Mesures), establishing the International Prototype Metre as the
distance between two lines on a standard bar composed of an alloy of ninety
percent platinum and ten percent iridium, measured at the melting point of
ice.

The original international prototype of the metre is still kept at the BIPM
under the conditions specified in 1889. A discussion of measurements of a
standard metre bar and the errors encountered in making the
measurements is found in a NIST document.

Figure 12. Historical International Prototype Metre bar, made of an alloy of platinum and
iridium, that was the standard from 1889 to 1960.

Standard wavelength of krypton-86 emission


In 1893, the standard metre was first measured with an
interferometer by Albert A. Michelson, the inventor of the device and an
advocate of using some particular wavelength of light as a standard of
distance. By 1925, interferometry was in regular use at the BIPM. However,
the International Prototype Metre remained the standard until 1960, when
the eleventh CGPM defined the metre in the new SI system as equal to
1,650,763.73 wavelengths of the orange-red emission line in the
electromagnetic spectrum of the krypton-86 atom in a vacuum.

Standard wavelength of helium-neon laser light


To further reduce uncertainty, the seventeenth CGPM in 1983 replaced
the definition of the metre with its current definition, thus fixing the length
of the metre in terms of time and the speed of light:

The metre is the length of the path travelled by light in vacuum during
a time interval of 1⁄299 792 458 of a second.

This definition effectively fixed the speed of light in a vacuum at precisely
299,792,458 metres per second. Although the metre is now defined in
terms of time-of-flight, actual laboratory realizations of the metre are still
delineated by counting the required number of wavelengths of light along
the distance. Three major factors limit the accuracy attainable with laser
interferometers:

Uncertainty in the vacuum wavelength of the source,
Uncertainty in the refractive index of the medium,
Least count resolution of the interferometer.

Use of the interferometer to define the metre is based upon the relation:

λ = c / (n f)

where λ is the determined wavelength; c is the speed of light in ideal
vacuum; n is the refractive index of the medium in which the measurement
is made; and f is the frequency of the source. In this way the length is
related to one of the most accurate measurements available: frequency.

An intended byproduct of the 17th CGPM’s definition was that it enabled
scientists to measure the wavelength of their lasers with one-fifth the
uncertainty. To further facilitate reproducibility from lab to lab, the 17th
CGPM also made the iodine-stabilised helium-neon laser "a recommended
radiation" for realising the metre. For purposes of delineating the metre, the
BIPM currently considers the HeNe laser wavelength to be as follows: λHeNe =
632.99139822 nm with an estimated relative standard uncertainty (U) of
2.5×10^-11. This uncertainty is currently the limiting factor in laboratory
realisations of the metre, as it is several orders of magnitude poorer than
that of the second (U = 5×10^-16). Consequently, a practical realisation of
the metre is usually delineated (not defined) today in labs as
1,579,800.298728(39) wavelengths of helium-neon laser light in a vacuum.
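
The figures quoted above can be checked against each other with the
interferometer relation λ = c/(n f); the short sketch below derives the
corresponding optical frequency and confirms that the stated number of
wavelengths spans very nearly one metre.

# Consistency check of the quoted HeNe wavelength and wavelength count.

c = 299_792_458.0            # speed of light in vacuum, m/s (exact by definition)
lam_hene = 632.99139822e-9   # BIPM value for the iodine-stabilised HeNe line, m
n_vacuum = 1.0

f_hene = c / (n_vacuum * lam_hene)
print(f_hene / 1e12)                  # about 473.6 THz

print(1_579_800.298728 * lam_hene)    # about 1.000000 m, as stated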

Table 1. Line and End Standards and the differences between them.

LINE STANDARDS – When length is measured as the distance between the
centers of two engraved lines, it is called a Line Standard. Both material
standards, the yard and the metre, are line standards.

E.g. Scale, Rulers, Imperial Standard Yard.

Characteristics of Line Standards:

(i) Scales can be accurately engraved, but the engraved lines themselves
possess thickness, so measurements cannot be taken with high accuracy.
(ii) A scale can be used over a wide range.
(iii) Scale markings are not subjected to wear; however, the ends are
subjected to wear and this may lead to undersized measurements.
(iv) A scale does not possess a built-in datum. Therefore it is not possible to
align the scale with the axis of measurement.
(v) Scales are subjected to parallax errors.
(vi) The assistance of a magnifying glass or microscope is required.

END STANDARDS – When length is expressed as the distance between two
flat parallel faces, it is called an End Standard. Examples: slip gauges, end
bars, and the ends of micrometer anvils.

Characteristics of End Standards


(i) Highly accurate and used for measurement of close tolerances in
precision engineering as well as in standards laboratories, tool rooms and
inspection departments.
(ii) They require more time for measurement and measure only one
dimension.
(iii) They wear at their measuring faces
(iv) They are not subjected to parallax error.

Differentiate between Line and End Standards

Sl no | Characteristic  | Line Standard                                          | End Standard
1     | Principle       | Length is expressed as the distance between two lines  | Length is expressed as the distance between two ends
2     | Accuracy        | Limited to ±0.2 mm                                     | Highly accurate; close tolerances to ±0.001 mm
3     | Ease            | Quick and easy to use                                  | Time consuming and requires skill
4     | Effect of wear  | Wear only at the ends                                  | Wear at the measuring surfaces
5     | Alignment       | Cannot be easily aligned                               | Easily aligned
6     | Cost            | Low cost                                               | High cost
7     | Parallax effect | Subjected to parallax effect                           | Not subjected to parallax effect

Wavelength standards and their advantages


A major drawback with material standards is that their length changes with
time. Secondly, considerable difficulty is experienced while comparing the
sizes of gauges using material standards.

Jacques Babinet suggested that the wavelength of monochromatic light can
be used as a natural and invariable unit of length. The 7th General
Conference on Weights and Measures, in 1927, approved the definition of a
standard of length relative to the metre.

Wavelength standards are accurately known wavelengths of spectral
radiation, emitted from specified sources, that are used to measure the
wavelengths of other spectra. In the past, the radiation from the standard
source and the source under study were superimposed on the slit of a
spectrometer (prism or grating)
and then the unknown wavelengths could be determined from the standard
wavelengths by using interpolation. This technique has evolved into the
modern computer-controlled photoelectric recording spectrometer. Accuracy
better by many orders of magnitude can be obtained by the use of
interferometric techniques, of which Fabry-Perot and Michelson
interferometers are two of the most common. See also Interferometry;
Spectroscopy.

The newest definition of the meter is in terms of the second. The
wavelength of radiation from the cesium atomic clock is not used to realize
length because diffraction problems at this wavelength are severe. Instead,
lasers at shorter wavelengths whose frequencies have been measured are
used. Frequency measurements can now be made even into the visible
spectral region with great accuracy. Hence, when the 1983 Conférence
Générale des Poids et Mesures redefined the meter, it also gave a list of very
accurate wavelengths of selected stabilized lasers which may be used as
wavelength standards. Nearly ten times better accuracy can be achieved by
using these wavelengths than by using the radiation from the krypton lamp
which provided the previous standard. See also Frequency measurement;
Hyperfine structure; Laser; Laser spectroscopy; Length; Molecular structure
and spectra; Physical measurement.

The progress in laser frequency measurements since 1974 has established
wavelength standards throughout the infrared spectral region.
This has been accomplished with the accurate frequency measurement of
rotational-vibrational transitions of selected molecules. The OCS molecule is
used in the 5-micrometer spectral region. At 9–10 μm, the carbon dioxide
(CO2) laser itself with over 300 accurately known lines is used. From 10 to
100 μm, rotational transitions of various molecules are used; most are
optically pumped laser transitions. The increased accuracy of frequency
measurements makes this technique mandatory where ultimate accuracy is
needed.

Orange radiation of the isotope Krypton-86 was chosen for the new
definition of length in 1960 by the 11th General Conference of Weights and
Measures. The committee recommended Krypton-86 and that it should be
used in a hot-cathode discharge lamp, maintained at a temperature of 63 K.

According to this standard, the metre was defined as equal to 1,650,763.73
wavelengths of the orange-red radiation of the Krypton-86 isotope.

A standard can now be produced to an accuracy of about 1 part in 10^9.
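
A quick check of this definition: dividing one metre by the stated number of
wavelengths recovers a wavelength of about 605 nm, matching the "605 nm
radiation" quoted in the source conditions below.

# One metre expressed as 1,650,763.73 wavelengths of the krypton-86 line.

wavelengths_per_metre = 1_650_763.73
lam_kr = 1.0 / wavelengths_per_metre
print(lam_kr * 1e9)   # about 605.78 nm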

Source Conditions

Krypton-86. The wavelength of the 605 nm radiation, when emitted by a
lamp conforming to the specification below, has an estimated uncertainty
(99% confidence) of ±4 parts in 10^9. The other radiations, under similar
conditions, have uncertainties of ±2 parts in 10^8.
The source is a hot-cathode discharge lamp containing krypton-86 (purity
99%) in sufficient quantity to assure the presence of solid krypton at 64 K,
the lamp having a capillary portion with the dimensions: internal diameter
2–4 mm, wall thickness 1 mm. The conditions of operation are:
(i) The capillary is observed end-on from the anode side of the lamp;
(ii) the lower part of the lamp, including the capillary, is immersed in a
refrigerant bath maintained within 1 K of the triple point of nitrogen;
(iii) the current density in the capillary is 3 ± 1 mA/mm².

Mercury-198. The uncertainties of the wavelengths are ±5 parts in 10^8
when emitted by a high-frequency electrodeless discharge lamp, operated
at moderate power with the radiation observed through the side of the
capillary. The lamp should be maintained at a temperature below 10 °C and
contain mercury-198 (purity 98%) with argon as carrier gas at a pressure
between 65 and 133 N/m². The internal diameter of the capillary should be
about 5 mm, with the volume of the lamp preferably greater than 20 cm³.

Cadmium-114. The standard wavelengths have an estimated uncertainty of
±7 parts in 10^8 when emitted by an electrodeless discharge lamp source,
maintained at a temperature such that the green line is not reversed and
containing cadmium-114 (purity 95%) with argon as carrier gas (pressure
about 150 N/m² at ambient temperatures). The radiations should be
observed through the side of the capillary part of the tube, having an
internal diameter of about 5 mm.

Advantages:
(a) It is not a material standard, and hence it is not influenced by effects of
variation of environmental conditions like temperature and pressure.
(b) It need not be preserved or stored under security and thus there is no
fear of it being destroyed.
(c) It is not subject to destruction by wear and tear.
(d) It gives a unit of length which can be produced consistently at all times.
(e) The standard facility can be made easily available in all standards
laboratories and industries.
(f) It can be used for making comparative measurements of very high
accuracy.
