
INSTRUMENTATION

PART-2

Prepared By

RAJAN SINGH PAL


FLOW
Flow Measurement Techniques
Differential Pressure (DP) Flow Meter
 Orifice Plate
1. Concentric
2. Eccentric
3. Segmental

Tapping Points for Differential Pressure (DP)


1. Corner Taps (Orifice Face : Orifice Face)
2. Flange Taps (1 ” (25mm) : 1” (25mm))
3. Vena-Contracta Taps (1 D : VENA-CONTRACTA)
4. Radius Taps (1 D : 1/2 D)
5. Pipe Taps (2.5 D : 8 D)

 Venturi Tube

 Flow Nozzle

 Pitot Tube

Vortex Flow Meter

Magnetic Flow Meter

Turbine Flow Meter

Ultrasonic Flow Meter


 Transit Time or Time of Flight
 Doppler Shift

Thermal Flow meters

Coriolis Flow Meter

Variable Area Flow Meter (Rotameter)

Positive Displacement (PD) Flow Meter


 Oval Gear
 Rotating Lobes
 Rotating Impeller
 Nutating Disk
 Rotating Vane
 Helix Type
In 1883, Osborne Reynolds proposed a single dimensionless ratio to describe the velocity profile of a flowing fluid, known as the Reynolds number:
Re = D·V·ρ / μ
Re is the Reynolds number, D is the pipe diameter, V is the fluid velocity, ρ is the fluid density, and μ is the fluid viscosity.
LAMINAR FLOW - At low Reynolds number (below 2000), flow is dominated by viscous forces, velocity profile is (elongated) parabolic.
Fluid flows in smooth layers with the highest velocity at center of pipe and low velocities at pipe wall where viscous forces restrain it.
When the flow is laminar, a linear relationship exists between flow and pressure drop.
TRANSITIONAL FLOW - At Reynolds numbers between 2000 and 20000, the flow is in a transitional phase; it is neither laminar nor turbulent.
It is changing its profile from laminar flow to turbulent flow.
TURBULENT FLOW - At high Reynolds numbers (above 20000), the flow is dominated by inertial forces, resulting in a more uniform
axial velocity across the flowing stream and a flat velocity profile.
In turbulent flow conditions, the flow breaks up into turbulent eddies which all flow through the pipe with the same average velocity.
When flow is fully turbulent (above 20000), the pressure drop through a restriction is usually proportional to the square of flow rate.
Therefore, flow rate can be measured by taking the square root of the differential pressure across the restriction.
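As a worked illustration of the classification above, the minimal Python sketch below computes Re = D·V·ρ/μ in SI units and reports the flow regime using the thresholds quoted in this text (2000 and 20000). The example fluid properties (water at roughly 2 m/s in a 0.1 m pipe) are assumed values, not taken from the original.

```python
def reynolds_number(D_m, V_mps, rho_kgm3, mu_pas):
    """Re = D * V * rho / mu (all SI units)."""
    return D_m * V_mps * rho_kgm3 / mu_pas

def flow_regime(Re):
    """Classify using the thresholds quoted in the text above."""
    if Re < 2_000:
        return "laminar"
    if Re <= 20_000:
        return "transitional"
    return "turbulent"

# Assumed example: water (rho ~ 1000 kg/m3, mu ~ 0.001 Pa.s) at 2 m/s in a 0.1 m pipe
Re = reynolds_number(D_m=0.1, V_mps=2.0, rho_kgm3=1000.0, mu_pas=0.001)
print(f"Re = {Re:.0f}, regime = {flow_regime(Re)}")  # Re = 200000 -> turbulent
```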

DIFFERENTIAL PRESSURE FLOWMETER


Differential Pressure (DP) or head-type flow meters are the most common type of flow meter used to measure flow rates. They measure flow indirectly
by creating and measuring a differential pressure by means of an obstruction to the flow. Using well-established conversion coefficients,
which depend on the type of flow meter used and the diameter of the pipe, a measurement of differential pressure is translated into a volumetric flow rate.

It is based on the equation of continuity together with the equation of energy conservation (Bernoulli), which states that the total energy in the system remains constant.
Q = A1.V1 = A2.V2
If the pipe diameter decreases, the velocity of the fluid increases and the static pressure decreases.
If the pipe diameter increases, the velocity of the fluid decreases and the static pressure increases.
However, the total energy remains constant (Total Mechanical Energy = Kinetic Energy + Internal Energy of Fluid + Potential Energy = Constant).
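A minimal numerical check of the continuity relation Q = A1·V1 = A2·V2: when the pipe diameter is reduced, the velocity rises in inverse proportion to the area. The pipe and throat diameters and upstream velocity below are assumed example values.

```python
import math

def area(d_m):
    """Cross-sectional area of a circular pipe."""
    return math.pi * d_m**2 / 4.0

d1, d2 = 0.10, 0.05          # assumed pipe and throat diameters, m
v1 = 2.0                     # assumed upstream velocity, m/s
Q = area(d1) * v1            # volumetric flow rate, m3/s (constant along the pipe)
v2 = Q / area(d2)            # continuity: velocity in the reduced section
print(f"Q = {Q:.5f} m3/s, V2 = {v2:.1f} m/s")  # halving the diameter quadruples velocity
```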

ORIFICE PLATES
As the fluid approaches the orifice, the pressure increases slightly and then drops suddenly as the fluid passes through the orifice.
It continues to drop until the “vena contracta” is reached and then gradually increases until at approximately 5D to 8D downstream a
maximum pressure point is reached that will be lower than the pressure upstream of the orifice.
The decrease in pressure as the fluid passes through the orifice is a result of the increased velocity of the fluid passing through the reduced area of the orifice.
When the velocity decreases as the fluid leaves the orifice, the pressure increases and tends to return to its original upstream level.
All of the pressure loss is not recovered because of friction and turbulence losses in the stream.
The pressure drop across the orifice increases when the rate of flow increases. When there is no flow there is no differential.
The differential pressure is proportional to the square of the velocity, it therefore follows that if all other factors remain constant,
then the differential pressure is proportional to the square of the flow rate.
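Because the differential pressure varies with the square of the flow rate, flow is recovered by square-root extraction, as sketched below. The full-scale DP and full-scale flow values are assumed for a hypothetical orifice meter, not taken from the original.

```python
import math

DP_FULL_SCALE = 2500.0   # assumed DP at 100% flow, mm H2O
Q_FULL_SCALE = 600.0     # assumed flow at 100% DP, SCFH

def flow_from_dp(dp):
    """Q/Qmax = sqrt(dp/dp_max): square-root extraction of the orifice DP."""
    return Q_FULL_SCALE * math.sqrt(dp / DP_FULL_SCALE)

# A quarter of the full-scale DP corresponds to half of the full-scale flow.
print(flow_from_dp(625.0))   # -> 300.0 SCFH
```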

Critical Flow
The square root flow formula applies to subsonic flow only. Sonic or critical flow occurs when the velocity of the gas or vapor reaches
the speed of sound (approx. 700 miles per hour in air). A gas cannot be made to travel any faster and remain in the same state.
A rule of thumb used in gas flow is that critical flow is reached when downstream pressure is 50% or less than the upstream pressure.
At critical flow in gases only the volumetric flow rate is fixed; the mass flow rate can still be increased by changing the density of the gas.
Mass Flow Rate = Volume flow Rate x Density (Density can be changed by changing the temperature of gas)
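A small sketch of the rule of thumb above: flow is treated as critical (choked) when the downstream pressure is 50% or less of the upstream absolute pressure, and mass flow is volumetric flow times density. The pressures, flow, and gas density used here are assumed example values.

```python
def is_critical(p_up_abs, p_down_abs):
    """Rule of thumb from the text: choked when downstream <= 50% of upstream (absolute)."""
    return p_down_abs <= 0.5 * p_up_abs

def mass_flow(q_volumetric, density):
    """Mass flow rate = volumetric flow rate x density."""
    return q_volumetric * density

print(is_critical(p_up_abs=10.0, p_down_abs=4.0))        # True: 4 bar < 50% of 10 bar
print(mass_flow(q_volumetric=0.5, density=1.8), "kg/s")  # gas density changes with T and P
```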

Rangeability or Turndown Ratio


Rangeability, sometimes called “turndown”, is the ratio of maximum flow to minimum flow throughout which a stated accuracy is maintained.
For example, if an orifice meter installation is accurate to 1% from 200 SCFH to 600 SCFH, the rangeability or turndown is 3 to 1.
It can still measure beyond this range but accuracy of the measurement will not be maintained or the accuracy may get affected.
Concentric Orifice Plates
This type of plate is most commonly used to measure single-phase fluids (it is not recommended for two-phase flow).
Concentric orifice plates are not recommended for multi-phase fluids because the secondary phase can build up around the upstream edge of the plate.
In extreme cases, this can clog the opening, or can change the flow pattern, creating measurement error.

Eccentric Orifice Plates


This type of plate is commonly used to measure fluids which carry two-phase flow: gases containing small amounts of liquids or
liquids containing small amounts of gases.

Segmental Orifice Plates


The opening in a segmental orifice plate is comparable to a partially opened gate valve.
This plate is generally used for measuring liquids / gases which carry non-abrasive impurities, light slurries or exceptionally dirty gases.
The accuracy of both the eccentric and segmental plate is not as good as the concentric plate.

Restriction Orifice
A restriction orifice is specially designed to drop pressure; it has a low pressure recovery factor and usually drops more than 60% of the pressure.
There are 3 types : Single Stage Restriction Orifice (Single Bore), Single Stage Restriction Orifice (Multi Bore), Multi Stage Restriction Orifice.

BETA RATIO
The thickness of the plate used (1/8” – ½”) is a function of line size, process temperature, pressure, and differential pressure.
The beta (or diameter) ratios of orifice plates range from 0.25 to 0.75; accuracy is good in this range.
If the beta ratio drops below 0.25, the fluid will begin to get choked, and if the beta ratio rises above 0.75, the fundamental flow equation
(flow rate is directly proportional to the square root of the pressure drop) loses its accuracy.
Orifice Disadvantages
* They cause high permanent pressure drop 20% - 40% (outlet pressure is 60% - 80% of inlet pressure), & they are subject to erosion,
which will eventually cause inaccuracies in the measured differential pressure.
* Their turndown ratio or rangeability is limited to 3 : 1

TAPPING LOCATION
As per the working principle of the DP meter, the tapping points shall be at the maximum pressure point upstream and the minimum pressure point downstream
(i.e. the point of maximum pressure difference, the vena contracta).

Corner Taps (Orifice Face : Orifice Face)


These taps are located immediately adjacent to the plate faces, upstream and downstream.
Corner taps are widely used in line sizes less than 2 inches.

Flange Taps (1 ” (25mm) : 1” (25mm))


These taps are located one inch from upstream face of orifice plate and one inch from downstream face. Used from 2” to 24” line size.

Vena-Contracta Taps (1 D : VENA-CONTRACTA)


These taps are located one pipe diameter upstream & at point of minimum pressure downstream (this point is called vena-contracta).
This vena-contracta point, however, varies with the beta ratio; these taps are therefore used in measurements where flows are relatively constant and
plates are not changed.

Radius Taps (1 D : 1/2 D)


These taps are located one pipe diameter upstream and half a pipe diameter downstream.
Generally used for larger line sizes, above 24”.

Pipe Taps (2.5 D : 8 D)


These taps are located 2½ pipe diameters upstream and 8 pipe diameters downstream (point of maximum pressure recovery).
VENTURI TUBES

The venturi tube is the most accurate flow-sensing element in differential pressure measurement, when it is properly calibrated.
The venturi tube has a converging conical inlet (21 degrees), cylindrical throat, and a diverging recovery cone (7 degree or 15 degree).
It has no projections into the fluid, no sharp corners, and no sudden changes in contour.
The inlet section of the venturi tube decreases the area of the fluid stream, causing the velocity to increase and the pressure to decrease.
The low pressure is measured downstream at the center of the cylindrical throat, since the pressure will be at its lowest value there, and
neither the pressure nor the velocity is changing. The recovery cone allows for the recovery of pressure such that the total pressure loss is
only 10% to 20%. The high pressure is measured upstream of the entrance cone.
The major disadvantages of the venturi tube are its high initial cost, difficulty of installation, the space required, and inspection.

Venturi tubes can pass 25% to 50% more flow than an orifice with the same pressure drop. The total unrecovered head loss is 10% - 25%.
The initial cost of venturi tubes is high, so they are primarily used on larger flows or on more difficult or demanding flow applications.
Venturis are insensitive to velocity profile effects and therefore require less straight pipe run than an orifice. Their contoured nature,
combined with the self-scouring action of the flow through the tube, makes the venturi immune to corrosion, erosion, and internal scale build-up.
The classical Herschel venturi has a very long flow element characterized by a tapered inlet and a diverging outlet.
The pressure is measured at the upstream entrance and in the throat section. The pressure taps feed into common annular chamber,
providing an average pressure reading over the entire circumference of the element.
The classical venturi is limited in its application to clean, non-corrosive liquids and gases.
In short form venturi, the entrance angle is increased. The short form venturi maintains many of the advantages of classical venturi,
but at a reduced initial cost, shorter length and reduced weight.

Selection
* Less pressure drop (10% - 20%), as compared to orifice plates (20% - 40%)
* Their turndown ratio or rangeability is limited to 5 : 1, higher than that of the orifice plate (3 : 1).
FLOW NOZZLE

The flow nozzle is dimensionally more stable than the orifice plate, particularly in high temperature and high velocity services.
The flow nozzle has a greater flow capacity than the orifice plate and requires a much lower initial investment than a venturi tube,
but it also provides less pressure recovery compared with the venturi.
A major disadvantage of nozzle is that it is more difficult to replace than the orifice unless it can be removed as part of a spool section.
The downstream end of a flow nozzle is a short tube having the same diameter (area) as vena contracta of an equivalent orifice plate.
The Beta ratio range is 0.25 to 0.80. The nozzle should be centered in pipe, and downstream pressure tap should be inside nozzle exit.
The throat taper should always decrease the diameter toward the exit. Flow nozzles are not recommended for slurries or dirty fluids.
The most common flow nozzle is the flange type. Other type include weld-in type etc.
Taps are commonly located one pipe diameter upstream and ½ pipe diameter downstream from the inlet face.
Flow nozzle accuracy is typically 1%. While discharge coefficient data is available for Reynolds numbers as low as 5,000, it is advisable
to use flow nozzles only when Reynolds number exceeds 50,000. They maintain their accuracy for long period, even in difficult service.
Flow nozzles can be a highly accurate way to measure the gas flows. When the gas velocity reaches the speed of sound in the throat,
the velocity cannot increase any more (even if downstream pressure is reduced), and a choked flow condition is reached.
Such critical flow nozzles are very accurate & often used in flow laboratories as standards for calibrating other gas flow meter devices.
Nozzles can be installed in any position, although horizontal orientation is preferred. Vertical down flow is preferred for wet steam,
gases, or liquids containing solids. The straight pipe run requirements are similar to those of orifice plates.

Selection
* Pressure drop 15% - 30%: less than orifice plates (20% - 40%), but more than the venturi tube (10% - 20%).
* Their turndown ratio or rangeability is limited to 4 : 1, more than the orifice plate (3 : 1), but less than the venturi tube (5 : 1).
PITOT TUBE

The pitot tube is another primary flow element used to produce a differential pressure for flow detection.
In its simplest form, it consists of a tube with an opening at the end. The small hole in the end is positioned such that it faces the flowing fluid.
The velocity of the fluid at the opening of the tube decreases to zero. This provides the high pressure input to the differential pressure detector.
A pressure tap upstream provides the low pressure input.

Pitot Tube actually measures fluid velocity instead of fluid flow rate. However, volumetric flow rate can be obtained using Equation.
v = KAV
v = volumetric flow rate
A = area of flow cross-section
V = velocity of flowing fluid
K = flow coefficient (normally about 0.8)
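Using the relation given above (v = K·A·V), the following sketch converts a pitot point-velocity reading into a volumetric flow rate. The duct size, measured velocity, and flow coefficient are assumed example values.

```python
import math

def pitot_volumetric_flow(K, duct_diameter_m, velocity_mps):
    """v = K * A * V, with K the flow coefficient (typically about 0.8)."""
    A = math.pi * duct_diameter_m**2 / 4.0
    return K * A * velocity_mps

# Assumed example: 0.3 m duct, 10 m/s measured point velocity
print(f"{pitot_volumetric_flow(K=0.8, duct_diameter_m=0.3, velocity_mps=10.0):.3f} m3/s")
```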

Pitot tubes must be calibrated for each specific application. This instrument can be used even when fluid is not enclosed in pipe / duct
In industrial applications, pitot tubes are used to measure air flow in pipes, ducts, stacks, liquid flow in pipes, weirs, & open channels.
While accuracy & rangeability are relatively low, pitot tube are reliable, inexpensive, & suited for variety of environmental conditions,
including extremely high temperatures and a wide range of pressures. The pitot tube is an inexpensive alternative to an orifice plate.

Turndown ratio or rangeability is limited to 3:1, which is similar to the capability of an orifice plate.
Accuracy ranges from 1% to 5%, which is comparable to an orifice.
The main difference: while an orifice measures the full flowstream, a pitot tube detects flow velocity at only one point in the flowstream.

Theory of Operation
A pitot tube measures two pressures: Static and Total Impact Pressure.
The static pressure is the operating pressure in the pipe, duct, or the environment, upstream to the pitot tube.
It is measured at right angles to the flow direction, preferably in a low turbulence location.
The total impact pressure (PT) is sum of static and kinetic pressures and is detected as the flowing stream impacts on pitot opening.
To measure impact pressure, the pitot tubes use small, sometimes L-shaped tube, with opening directly facing oncoming flowstream.
The point velocity of approach (VP) is calculated by taking square root of difference between total pressure (PT) & static pressure (P)
and multiplying that by the C/D ratio, where C is a dimensional constant and D is density:
VP = (C/D) √(PT − P)
When the volumetric flowrate of fluid is obtained by multiplying the point velocity (VP) by the cross-sectional area (A) of pipe or duct,
it is critical that the velocity measurement be made at an insertion depth which corresponds to the average velocity of the fluid.
As the flow velocity rises, the velocity profile in the pipe changes from elongated (laminar) to more flat (turbulent).
This changes the point of average velocity and requires an adjustment of the insertion depth. Pitot tubes are recommended only for
highly turbulent flows (Re > 20,000) and under these conditions, velocity profile tends to be flat so that insertion depth is not critical.
In laminar flow the insertion depth is most critical, as the fluid flows in smooth layers with the highest velocity at the center of the pipe and
low velocities at the pipe wall where viscous forces restrain it.
In turbulent flow conditions, the flow breaks up into turbulent eddies which all flow through the pipe with the same average velocity.
VORTEX FLOW METER
Vortex type flow measurement involves placing a bluff body (also called a shedder bar) in the path of the flowing fluid.
As the fluid passes this bluff body, disturbances in the flow called vortices are created.
The vortices trail behind the cylinder, alternatively from each side of the bluff body.
This vortex trail is called the Von Karman vortex street after von Karman's 1912 mathematical description of phenomenon.
The frequency at which these vortices alternate sides is essentially proportional to the flow rate of the flowing fluid.
When the flow rate increases, the number of vortices increases; when the flow rate decreases, the number of vortices decreases. The
spacing (distance) between vortices remains constant; only the number of vortices changes with flow rate.
Inside, top or downstream of the shedder bar is a sensor for measuring the frequency of the vortex shedding.
This sensor is often a piezoelectric crystal, which produces a small, but measurable, voltage pulse every time a vortex is created.
Since frequency of such voltage pulse is proportional to fluid velocity, a volumetric flow rate is calculated by using area of flow meter.
The frequency is measured and the flow rate is calculated by the flow meter electronics using the equation
f = St V / d
V = f d / St
Q = AV
Q = A f d B / St
Q=fK
f is the frequency of the vortices, d is the width of the bluff body, V is the velocity of the fluid, B is the blockage factor, A is the area of the pipe, K is the meter coefficient.
St is the Strouhal number (d/L), which is constant (St = 0.17) for a given body shape within its operating limits; L is the distance between vortices.
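A minimal sketch of the vortex relations listed above: the Strouhal number links shedding frequency to velocity (V = f·d/St), and flow follows from the pipe area. The bluff-body width, pipe diameter, and shedding frequency are assumed example values; the blockage factor B is neglected here.

```python
import math

ST = 0.17          # Strouhal number quoted in the text for the bluff-body shape
d = 0.02           # assumed bluff-body (shedder bar) width, m
D = 0.10           # assumed pipe inside diameter, m

def vortex_flow(shedding_freq_hz):
    """V = f*d/St, then Q = A*V (blockage factor neglected in this sketch)."""
    V = shedding_freq_hz * d / ST
    A = math.pi * D**2 / 4.0
    return A * V

print(f"Q = {vortex_flow(shedding_freq_hz=85.0):.4f} m3/s")
```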

Theodor von Karman discovered that, when a non-streamlined object (called a bluff body) is placed in path of a fast-flowing stream,
fluid will alternately separate from object on its two downstream sides, and as boundary layer become detached & curl back on itself,
the fluid forms vortices (also called whirlpools or eddies).
The distance between the vortices was constant and depended solely on the size of the shedder bar that formed it.
On the side of the bluff body where the vortex is being formed, the fluid velocity is higher and the pressure is lower. As the vortex
moves downstream, it grows in strength and size, and eventually detaches or sheds itself. This is followed by a vortex's being formed
on the other side of the bluff body. The alternating vortices are spaced at equal distances. The vortex-shedding phenomenon can be
observed as wind is shed from a flagpole (which acts as a bluff body); this is what causes the regular rippling one sees in a flag.
Vortices are also shed from bridge piers, pilings, offshore drilling platform supports, and tall buildings. The forces caused by the
vortex-shedding phenomenon must be taken into account when designing these structures. In a closed piping system, the vortex
effect is dissipated within a few pipe diameters downstream of the bluff body and causes no harm.

Applications
Vortex meters are used for relatively clean, non-corrosive liquids or gases , the viscosity of the fluid shall be less than 30 centipoises.
Vortex flow meters cannot be used to measure the flow of viscous or dirty process fluids.
These flow meters are also limited to sizes under 12 in, because the frequency of fluid oscillation drops off as the line size increases.
Vortices do not form at Reynolds numbers below 10000, therefore, vortex meter cannot be used in low Reynolds number application.
Turndown ratio / rangeability is limited to 20 : 1
MAGNETIC FLOW METER

A magnetic field is applied to metering tube, resulting in potential difference, proportional to flow velocity, perpendicular to flux lines.
The principle at work is Faraday's law of electromagnetic induction.
When a conductor is moved through a fixed magnetic field, or when the conductor is stationary and the strength of the magnetic field changes
(i.e. whenever there is relative motion between the conductor and the magnetic field), a voltage is induced across the conductor.
This voltage (emf) E is proportional to the magnetic field strength B, the velocity V of the conductor and the length L of conductor:
E = B.V.L
For the magnetic flow meter, the conductor is the flowing fluid, a field is applied across the pipe, and the induced voltage is therefore
a measure of the velocity with which the conductor is moving. Non-conducting fluids cannot be metered by magnetic flow meter.
For a pipe of diameter D, the emf (E) can be calculated as:
E = B.V.L = B.V.D
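The induced emf E = B·V·D can be inverted to obtain velocity, and hence volumetric flow, from the measured electrode voltage. The field strength, pipe diameter, and electrode voltage below are assumed example values, not vendor data.

```python
import math

B = 0.01     # assumed magnetic flux density, tesla
D = 0.10     # assumed pipe inside diameter, m (also the conductor length L)

def magmeter_flow(emf_volts):
    """Faraday's law: E = B*V*D  ->  V = E/(B*D); then Q = A*V."""
    V = emf_volts / (B * D)
    A = math.pi * D**2 / 4.0
    return A * V

print(f"Q = {magmeter_flow(emf_volts=0.002):.4f} m3/s")   # 2 mV electrode signal
```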

The magnetic flow meters is based on Faraday law of electromagnetic induction. Magmeters can detect flow of conductive fluids only.
Early magmeter designs required a minimum fluid conductivity of 1-5 microsiemens per centimeter for their operation.
Newer designs have reduced that requirement a hundred-fold, to between 0.05 and 0.1 microsiemens per centimeter.
The magnetic flow meter consists of a non-magnetic pipe which is lined with an insulating material. A pair of magnetic coils is placed,
and a pair of electrodes penetrates pipe & its lining. The magnetic coils and the electrodes are relatively perpendicular to each other.
If conductive fluid flows through pipe of diameter (D) through magnetic field density B generated by coils, amount of voltage (E)
developed across the electrodes as predicted by Faradays law will be proportional to the velocity (V) of the flowing fluid.
As magnetic field density and pipe diameter are fixed values, they can be combined into a calibration factor (K) & equation reduces to:
E = KV
The velocity differences at different points of the flow profile are compensated for by a signal-weighing factor.
Compensation is also provided by shaping the magnetic coil such that magnetic flux is greatest where signal weighing factor is lowest,
Magmeters can measure flow in both directions, as reversing direction will change the polarity but not the magnitude of the voltage.
The K value which is obtained by water testing might not be valid for non-Newtonian fluids (with velocity dependent viscosity) or
magnetic slurries (those containing magnetic particles). These types of fluids can affect the density of the magnetic field in the tube.
In-line calibration and special compensating designs should be considered for both of these fluids.

Electrodes
In conventional flow tubes, the electrodes are in contact with the process fluid. They can be removable or permanent if produced by a
droplet of liquid platinum as it sinters through a ceramic liner and fuses with the aluminum oxide to form a perfect seal.
This design is preferred due to its low cost, its resistance to abrasion and wear, its insensitivity to nuclear radiation, and its suitability
for sanitary applications because there are no cavities in which bacteria can grow. On the other hand, the ceramic tube cannot
tolerate bending, tension, or sudden cooling and cannot handle oxidizing acids or hot and concentrated caustic.
In more recent designs, non-contacting electrodes are used. These designs use areas of metal sandwiched between layers of the liner material.
Magmeters with non-contacting electrodes can read fluids having 100 times less conductivity than is required for conventional flow tubes.
Because the electrode is behind the liner, these designs are also better suited for severe coating applications.

Applications
Only measures fluids which are conductive. They are used in clean, multi-phase, dirty, corrosive, erosive, viscous liquids, and slurries.
The conductivity of the fluid shall be at least 1-5 microsiemens per centimeter.
Turndown ratio or rangeability is limited to 50 : 1
They are generally available in sizes under 8 inches.
No or Negligible pressure drop (only the pressure drop due to piping)
TURBINE FLOW METER

Turbine flow meters (also called axial turbine meters) translate the mechanical action of a turbine rotating in the liquid flow around an axis into a
user-readable rate of flow (gpm, lpm, etc.).
The turbine wheel is set in path of a fluid stream. The flowing fluid impinges on turbine blades, imparting a force to blade surface and
setting the rotor in motion. When a steady rotation speed has been reached, the speed is proportional to fluid velocity.
It consists of a multi-bladed rotor mounted at right angles to the flow and suspended in the fluid stream on a free-running bearing.
The diameter of rotor is less than inside diameter of metering chamber, & its speed of rotation is proportional to volumetric flow rate.
A turbine meter consists of a multi-bladed rotor suspended in the fluid stream on a free-running bearing.
The axis of rotation of the rotor is perpendicular to flow direction, and the rotor blades sweep out nearly to the full bore of the meter.
The fluid impinging on the rotor blades causes the rotor to revolve.
Within the linear flow range of the meter, the angular speed of rotation is directly proportional to the volumetric flow rate.

Turbine rotation is detected by a solid-state device (reluctance, inductance, or Hall-effect pick-up) or a mechanical sensor (gear or magnetic drives).
In a reluctance pick-up, the coil is a permanent magnet and the turbine blades are made of a material attracted to the magnet. As each blade passes the coil,
a voltage is generated in the coil. Each pulse represents a discrete volume of liquid. The number of pulses per unit volume is called the meter's K-factor.
In Inductance pick-up, the permanent magnet is embedded in rotor, or blades of rotor are made of permanently magnetized material.
As each blade pass coil, it generates voltage pulse. Sometime only 1 blade is magnetic & pulse represent complete revolution of rotor.
The outputs of reluctance and inductive pick-up coils are continuous sine waves with pulse trains frequency proportional to flow rate.
At low flow, the output (the height of the voltage pulse) may be on the order of 20 mV peak-to-peak. It is not advisable to transport
such weak signal over long distances. Hence, distance between pickup & associated display electronic or preamplifier must be short.
In Hall Effect pick-up coil, the transistors change their state when they are in presence of low strength (order 25 gauss) magnetic field.
In Hall Effect meter, very small magnet are embedded in tips of the rotor blades. Rotor are typically made of a non-magnetic material,
like polypropylene, PVDF. Signal output from a Hall Effect sensor is a square wave pulse train, at a frequency proportional to flowrate.
As Hall-effect sensors have no magnetic drag, they can operate at lower flow velocities (0.2 ft/sec) than magnetic pick-ups (0.5-1.0 ft/sec).
Hall-effect sensors provide a signal of high amplitude, permitting distances up to 3000 ft between sensor and electronics without amplification.

The driving torque of the fluid is resisted by mechanical bearing friction, fluid drag and magnetic drag, all of which affect the measured flow rate.
The relation between flow rate and the rotation speed is:
Q = K.w
(K is the meter factor which must be calibrated for a given meter) (w is the rotational speed or angular velocity)
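In practice the turbine meter's output is a pulse train, and flow is obtained from the pulse frequency and the K-factor (pulses per unit volume) mentioned above. The K-factor and pulse figures here are assumed example values for a hypothetical calibrated meter.

```python
K_FACTOR = 520.0   # assumed K-factor, pulses per litre (from calibration)

def turbine_flow_lpm(pulse_freq_hz):
    """Q = pulse frequency / K-factor; converted from L/s to L/min."""
    return pulse_freq_hz / K_FACTOR * 60.0

def totalized_volume_l(pulse_count):
    """Each pulse represents a discrete volume: total volume = pulses / K-factor."""
    return pulse_count / K_FACTOR

print(f"{turbine_flow_lpm(260.0):.1f} L/min")      # 30.0 L/min
print(f"{totalized_volume_l(15600):.1f} L total")  # 30.0 L
```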

Selection
Turbine meters are generally available for 1"-12" pipe sizes. The turbine meters operating temperature ranges from 200 to 450 °C.
Turbine meters should be sized so that the expected average flow is between 60 - 75 percent of the maximum capacity of the meter.
Flow velocities under 1 ft/sec is insufficient, above 15ft/sec excessive, most turbine meter are designed for max velocities of 30 ft/sec.
Viscosity & temperature affect accuracy of turbines. It is therefore important to calibrate meter for specific fluid it is intended to use.
Generally the Reynolds Number shall be greater than 4000 and less than 20000. Pressure drop is generally not more than 3 - 5 PSIG.
Turndown ratio or rangeability is limited to 20:1

Disadvantages: particles (100 mg/l of +75 micron size) will damage the bearings and rotor wheel, and sensitivity to installation effects and swirl affects the calibration.
Turbine flow meters are sensitive to upstream piping geometry that can cause vortices or swirling flow, which should be avoided.
Under certain conditions, the pressure drop across the turbine can cause flashing or cavitation. The first causes the meter to read high; the
second results in rotor damage. To protect against this, the downstream pressure must be maintained at 1.25 times the vapour pressure plus twice the pressure drop.
ULTRASONIC FLOW METER (TRANSIT TIME PRINCIPLE) (DOPPLER PRINCIPLE)

Ultrasonic flow meters are generally divided into two types : Transit Time (Time of Flight) principle and Doppler Shift principle.
In both the types, a piezoelectric crystal is excited by the electrical energy at its mechanical resonance, thus emitting a sound wave,
which travelling at the speed of sound in the medium, is used to determine the flow rate.
The crystal is placed in contact with fluid (wetted/inserted transducer) or mounted outside piping (clamp-on transducer,Non- Contact)

Transit Time / Time of Flight Measurement


The speed at which sound propagates in a fluid is dependent on the fluid density. If the density is constant, however, one can use the
time of ultrasonic passage (or reflection) to determine the velocity of a flowing fluid.
In this design, the time of flight of the ultrasonic signal is measured between two transducers, one upstream and one downstream.
The difference in elapsed time going with or against the flow determines the fluid velocity.
When the flow is zero, the time for the signal T1 to get to T2 is the same as that required to get from T2 to T1.
When there is flow, the effect is to boost speed of signal in the downstream direction, while decreasing it in the upstream direction.
In the time-of-flight design, the transmitter beams a high frequency (1 MHz) pressure wave (ultrasonic) at a fixed crossing angle with the pipe axis.
The time required by the wave to reach a receiver placed on the opposite pipe wall depends on the velocity of sound in the fluid
and on whether the wave is moving with or against the flow.

The velocity (V) can be determined by the following equation:


V = Kdt/TL
where K is a calibration factor for volume and time units used, dt is time differential between upstream & downstream transit times,
and TL is the zero-flow transit time.
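A minimal sketch of the relation V = K·dt/TL given above: the upstream and downstream transit times are measured, and their difference divided by the zero-flow transit time (scaled by a calibration factor K) gives the fluid velocity. The transit times and the value of K are assumed (hypothetical) example numbers.

```python
def transit_time_velocity(t_upstream_s, t_downstream_s, t_zero_flow_s, K=340.0):
    """V = K * dt / TL, with dt the up/downstream transit-time difference (hypothetical K)."""
    dt = t_upstream_s - t_downstream_s        # the signal is faster going downstream
    return K * dt / t_zero_flow_s

# Assumed example: 100 us zero-flow transit time, 0.6 us time difference
v = transit_time_velocity(t_upstream_s=100.3e-6, t_downstream_s=99.7e-6,
                          t_zero_flow_s=100.0e-6)
print(f"V = {v:.2f} m/s")
```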
The speed of sound in fluid is a function of both density and temperature. Therefore, both parameters have to be compensated for.
In addition, the change in sonic velocity can change the refraction angle, which in turn will affect the distance the signal has to travel.
In extreme cases, the signal might completely miss the downstream receiver. Again, this type of failure is known as walk-away.

Design Variations
Clamp-on ultrasonic meters come in either single sensor versions or dual-sensor versions.
In the single-sensor version, the transmitter and receiver are potted into the same sensor body, which is clamped onto a single point of the pipe surface.
In dual sensor version, the transmit crystal is in one sensor body, while the receive crystal is in another sensor body.
They rival performance of wetted sensor designs, but without the need to break the pipe or stop the process to install the flow meter.
Clamp-on Doppler flow meters are subject to interference from pipe wall itself, as well as from any air space between sensor and wall.
If pipe wall is of SS, it might conduct transmit signal far enough so that returning echo will be shifted enough to interfere with reading.
There are also built-in acoustic discontinuities in concrete-lined, plastic- lined, and fiberglass-reinforced pipes.
These are significant enough to either completely scatter the transmitted signal or attenuate the return signal.
Wetted transducer designs, both Doppler and transit time, are available to overcome many of these signal attenuation limitations.
The full-pipe transit-time meter originally consisted of a flanged spool section with wetted transducers mounted in the pipe wall in
transducer wells opposite one another at a 45-degree angle to the flow. Transit-time flow meters are available in either single-path or multiple-path designs.
Single-path flow meters are provided with single pair of transducers that make single-line velocity measurement. They use a meter
factor that is pre-determined by calibration to compensate for variations in velocity profile & flow section construction irregularities.
In multi-path flow meters, several sets of transducers are placed in different paths across the flow section, thereby attempting to
measure the velocity profile across the entire cross-section of the pipe. Multi-path ultrasonic flow meters are used in large-diameter conduits,
such as utility stacks, and in other applications where non-uniform flow velocity profiles exist.
Doppler Shift Measurement
Doppler Shift effect ultrasonic flow meter is based on the principle that, when a wave beam travels into a non-homogenous fluid,
some energy is scattered back by solid particles or air bubbles entrained in the flow.
The relative motion of the discontinuities produces a frequency shift of the scattered wave, which is received and analyzed by a transducer.
This difference in frequency is known as the Doppler shift; the frequency difference (delta f) is linearly proportional to the fluid velocity.

Because of the velocity profile, accuracy depends on particle concentration and distribution, since the particles influence the depth of
penetration of the acoustic beam. For instance, if there is a high concentration of particles, the
flow meter will read only the slow-moving reflectors near the wall, undervaluing the flow rate.
Generally such drawbacks are overcome by using 2 transducers on the opposite walls of the pipe.

Christian Doppler discovered that wavelength of sound perceived by stationary observer appears shorter when source is approaching
and longer when source is moving away. This shift in frequency is basis upon which all Doppler-shift ultrasonic flowmeters work.
Doppler flow meter transducers operate at 0.640 MHz in clamp-on designs and at 1.2 MHz in wetted sensor designs.
The transducer sends an ultrasonic pulse or beam into the flowing stream.
The sound waves are reflected back by such discontinuities as particles or entrained gas bubbles, or even by turbulence vortices.
The meter detects the velocity of the discontinuities, rather than the velocity of the fluid, in calculating the flow rate.
The flow velocity (V) can be determined by:
V = (f0 − f1) × Ct / (2 f0 cos a)
where Ct is the velocity of sound inside the transducer, f0 is the transmission frequency, f1 is the reflected frequency, and a is the angle of the transmitter and receiver
crystals with respect to the pipe axis.
Because Ct / (2 f0 cos a) is a constant (K), the relationship can be simplified to:
V = (f0 − f1) K
Applications
Time of flight are used in clean liquids applications where ultrasonic beam is not attenuated/continually interrupted by fluid particles.
For Doppler flowmeter to work, it is mandatory that flow stream contains sonically reflective materials, (solid particles or air bubbles).
Solid Particles : minimum 80-100 mg/l of solids with particle size of +200 mesh (+75 micron).
Air Bubbles : minimum 100-200 mg/l of air bubbles with diameters between 75 - 150 microns.

Ultrasonic flow meters have an advantage over Coriolis, vortex, magnetic, and turbine meters in that they do well in large pipe sizes.
Compared to magnetic flow meters, ultrasonic meters have the advantage that they can measure non-conductive fluids.
Ultrasonic meters can be used at low flow rates; vortex meters require a minimum Reynolds number / minimum velocity that might not be available at low flow rates.
This is because, for a vortex meter to work effectively, the fluid has to be flowing fast enough to generate regular vortices.
Compared to differential-pressure (DP) and turbine meters, ultrasonic flow detectors have the advantage of being less intrusive.
Clamp-on types are completely non-intrusive, spool-piece meters are slightly intrusive, and insertion types are more intrusive than spool-piece meters.
Turndown ratio or rangeability is 20:1
CORIOLIS FLOW METER

When a fluid is flowing in a pipe and it is subjected to Coriolis acceleration through mechanical introduction of rotation into the pipe,
the amount of deflecting force generated by the Coriolis inertial effect principle will be a function of the mass flow rate of the fluid.
If a pipe is rotated around a point while liquid is flowing through it (toward or away from center of rotation), that fluid will generate
an inertial force (acting on the pipe) that will be at right angles to the direction of the flow.

In coriolis flow meter, all of the fluid being measured flows through the sensor tube where its mass flow rate is measured directly.
Inside the Coriolis flow meter sensor housing, the flow tube is vibrated at its natural frequency by an electromagnetic drive system.
Its vibration is similar to that of a tuning fork, typically having an amplitude of less than 1 mm and a frequency of about 80 Hz.
Sensor tube frequency varies according to the fluid density. The tube frequency is used to calculate / determine the fluid density.
As the fluid moves through the vibrating tube, it is forced to take on the tube's vertical momentum. During half of the vibration cycle,
when the tube is moving upward, fluid flowing into the sensor pushes downward against the tube, resisting the upward force.
Conversely, fluid flowing out of the sensor, having been forced upwards, now resists having its vertical momentum decreased and
pushes upward against the tube. The combination of resistive forces causes the flow sensor to twist. This is called the Coriolis effect.
During the second half of the vibration cycle, when the tube moves downward, the resultant twist will be in the opposite direction.

The amount that the sensor tube twists is directly proportional to the mass flow rate of the fluid flowing through it.
Electromagnetic sensors are located on each side of flow tube to measure respective velocity of vibration tube at these two points.
Any time difference between these two velocity signals is caused by the twisting of the tube. The electromagnetic sensors send this
information to the transmitter where it is processed and connected to an output signal directly proportional to mass flow rate.
In sensors with double flow tubes, the two tubes vibrate and twist 180° out of phase, and their combined twist determines the mass flow rate.

Tube Designs
The tube may be of curved or straight form. When the design consists of two parallel tubes, flow is divided into two streams by a splitter near the inlet and
is recombined at the exit. Drivers vibrate the tubes; these drivers consist of a coil connected to one tube and a magnet connected to the other.
The transmitter applies an alternating current to the coil, which causes the magnet to be attracted and repelled by turns, thereby forcing
the tubes towards and away from one another. The sensors can detect the position, velocity, or acceleration of the tubes.
If electromagnetic sensors are used, the magnet and coil in the sensor change their relative positions as the tubes vibrate, causing a
change in the magnetic field of the coil. Therefore, the sinusoidal voltage output from the coil represents the motion of the tubes.
When there is no flow, the vibration caused by coil & magnet drive results in identical displacements at two sensing points (B1 & B2).
When flow is present, Coriolis forces act to produce a secondary twisting vibration, resulting in a small phase difference in the relative motion.
This is detected at the sensing points. The deflection of the tubes by the Coriolis force exists only when both axial fluid flow and tube vibration are present.
Vibration at zero flow, or flow without vibration, does not produce an output from the meter.
The natural resonance frequency of tube structure is a function of its geometry, materials of construction, and mass of tube assembly
(mass of the tube plus the mass of fluid inside the tube). The mass of the tube is fixed. Since mass of fluid is its density (D) multiplied
by its volume (which is also fixed), the frequency of vibration can be related to the density of the process fluid (D).
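Since the tube behaves like a spring-mass resonator, the statement above that vibration frequency relates to fluid density can be sketched as follows. The stiffness, empty-tube mass, and internal volume are assumed (hypothetical) constants; a real meter obtains the equivalent characterization from factory calibration.

```python
import math

K_STIFFNESS = 1.5e5   # assumed effective tube stiffness, N/m
M_TUBE = 0.40         # assumed empty-tube mass, kg
V_TUBE = 2.0e-4       # assumed internal tube volume, m3

def density_from_frequency(freq_hz):
    """f = (1/2pi)*sqrt(k/(m_tube + rho*V))  ->  solve for the fluid density rho."""
    omega = 2.0 * math.pi * freq_hz
    total_mass = K_STIFFNESS / omega**2
    return (total_mass - M_TUBE) / V_TUBE

print(f"{density_from_frequency(80.0):.0f} kg/m3")  # ~970 kg/m3 (roughly water) with these constants
```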

Selection
Turndown ratio or rangeability is limited to 100:1. The accuracy of the Coriolis flow meter is 0.1%.
Single Continuous tube designs are generally preferred for slurry, viscous, multi-phase fluid applications.
There are no Re No limitations associated with Coriolis meters. They are also insensitive to velocity profile distortion and swirl.
Therefore, there is no requirement for straight runs of relaxation piping upstream or downstream of the meter to condition the flow.
On corrosive, viscous, or abrasive slurry services, downsizing the coriolis flow meter generally is not recommended.
VARIABLE AREA FLOW METER

Variable area flow meter are simple devices that operate at relatively constant pressure drop & measure flow of liquids, gases, steam.
The position of their float, piston or vane is changed as the increasing flow rate opens a larger flow area to pass the flowing fluid.
The position of the float, piston or vane provides a direct visual indication of flow rate.
Design variations include most basic type, a rotameter (a float in a tapered tube), orifice/rotameter combination (by-pass rotameter),
open-channel variable gate, tapered plug, and vane or piston designs.
Either the force of gravity or a force of a spring is used to return the flow element to its resting position when the flow lessens.
Gravity-operated meters (rotameter) must be installed in vertical position, whereas spring operated can be mounted in any position.
All variable area flow meters are available with a local indicator. Most can also be provided with a position sensor and transmitter
(pneumatic, electronic, digital, or fiber optic) for connecting to remote displays or controls.

ROTAMETER
Rotameter is widely used variable area flow meter because of its low cost, simplicity, low pressure drop, rangeability, & linear output.
Its operation: in order to pass through tapered tube, the fluid flow raises the float. The greater the flow, the higher the float is lifted.
In liquid service, the float rises due to a combination of the buoyancy of the liquid and the velocity head of the fluid.
With gases, buoyancy is negligible, and the float responds mostly to the velocity head.
In rotameter, metering tube is mounted vertically, with the small end at bottom. The fluid to be measured enters at bottom of tube,
passes upward around the float, and exits the top. When no flow exists, the float rests at the bottom.
As fluid enter, float begin to rise. The float move up/down in proportion to fluid flow rate & annular area between float and tube wall.
As the float rises, the size of the annular opening increases. As this area increases, the differential pressure across the float decreases.
The float reaches a stable position when the upward force exerted by the flowing fluid equals the weight of the float.
When sized correctly, the flow rate can be determined by matching the float position to calibrated scale on the outside of rotameter.
Early design had slots, which caused float to spin for stabilizing & centering. Because this float rotated, term rotameter was coined.
Rotameters are typically provided with calibration data and a direct reading scale for air or water (or both).
To size a rotameter for other service, one must first convert the actual flow to a standard flow.
For liquids, standard flow is water equivalent in gpm; For gases, standard flow is air flow equivalent in standard cubic feet per minute.

Rotameter Types
Glass Tube Rotameters : A wide choice of materials are available for the float, packing, O-rings, and end fittings to handle the widest
selection of fluids. The only fluids that cannot be handled are those that attack the glass metering tube. The meters also are limited to
the pressure and temperature extremes of the glass metering tube, and by safety considerations.
Metal tube rotameters are used when the general-purpose meters cannot be applied. They can be used for hot and strong alkalies,
fluorine, hydrofluoric acid, hot water, steam, slurries, or molten metals where glass cannot be used.
This meter is used where operating temperature/pressure exceed ratings of glass tube or where electronic transmission is required.
By-pass & Pitot Rotameters: The cost of a rotameter installation is reduced if, instead of a full-pipe-size rotameter, an orifice or pitot tube
is used in the main pipeline to develop a pressure drop. This causes a related small flow that is directed through a bypass rotameter.
These units are designed for clean process streams as water & air and are provided with easily accessible filters for periodic cleaning.
The bypass rotameters are also available with isolation valves to allow for their removal & maintenance while process is in operation.

Selection
Turndown ratio or rangeability is limited to 10:1.
The accuracy of these rotameters is 2%.
POSITIVE DISPLACEMENT FLOW METER

Positive Displacement (PD) flow meters measure the flow by dividing the fluid into “packets”, each of precisely known volume.
The number of such packets is counted over a known time, which then gives a precise measure of the volumetric flow rate.
Under suitable conditions, they offer the highest performance of any mechanical meter, achieved by careful manufacture to high tolerances.

Positive Displacement (PD) flow meters measure the volume flow rate directly by repeatedly trapping a sample of the fluid.
The total volume of liquid passing through meter in a given period of time is product of the volume of sample and number of samples.
Positive displacement flow meters frequently totalize the flow directly on an integral counter, but they can also generate a pulsed output
which may be read on a local display counter or transmitted to a control room.
As each pulse represents a discrete volume of fluid, they are ideally suited for automatic batching and accounting.
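Since each pulse from a PD meter represents a fixed trapped volume (“packet”), totalization and flow-rate calculation reduce to simple counting, as sketched below with an assumed packet volume.

```python
PACKET_VOLUME_L = 0.05   # assumed volume trapped per pulse, litres

def totalized_volume_l(pulse_count):
    """Total volume = number of packets x packet volume."""
    return pulse_count * PACKET_VOLUME_L

def flow_rate_lpm(pulses, seconds):
    """Average flow rate over the counting interval."""
    return totalized_volume_l(pulses) / seconds * 60.0

print(totalized_volume_l(12_000), "L")        # 600.0 L delivered
print(flow_rate_lpm(12_000, 600), "L/min")    # 60.0 L/min average
```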

PD flow meters do not require straight upstream and downstream pipe runs for their installation.
PD meters are available in sizes from 1 - 12 in and can operate with turndowns as high as 100:1, although ranges of 20:1 are common.
The process fluid must be clean. Particles greater than 100 microns in size must be removed by filtering. PD meters operate with small clearances
between their precision-machined parts. For this reason, PD meters are not recommended for measuring slurries or abrasive fluids.

At low flow rates, the friction of the moving parts may become significant and lead to the meter running too slowly.
Leakage is also likely to be significant at lowest flows, leading to under-reading. PD meter rely on moving parts with close tolerances,
so sustained operation over long periods is not their strength. Due to close tolerances they may be sensitive to temperature changes,
and almost certainly will not operate well if the flow is not clean, i.e. has particulate matter entrained at all.
Although slippage through PD meter decrease (accuracy increases) as fluid viscosity increases, pressure drop through meter also rises.
Consequently, maximum & minimum flow capacity of the flow meter is decreased as viscosity increases. The higher the viscosity, the
less slippage and lower the measurable flow rate becomes. As viscosity decreases, the low flow performance of meter deteriorates.
The maximum allowable pressure drop across the meter constrains the maximum operating flow in high viscosity services.

Oval Gear
Oval gear meter is a PD meter that use two or more oblong gears configured to rotate at right angles to one another, forming T shape.
Such a meter has two sides, which are called A and B. No fluid passes through center of meter, where teeth of two gears always mesh.
On one side of meter A, teeth of gears close off fluid flow because elongated gear on side A is protruding into measurement chamber,
while on the other side of the meter B, a cavity holds a fixed volume of fluid in a measurement chamber.
As the fluid pushes the gear, it rotates them, allowing the fluid in the measurement chamber on side B to be released into outlet port.
Meanwhile, fluid entering the inlet port will be driven into the measurement chamber of side A, which is now open.
The teeth of Side B will close off fluid from entering B. The cycle continues as gears rotate & fluid is metered in alternating chambers.
Permanent magnets in the rotating gears can transmit a signal to an electric reed switch or current transducer for flow measurement.

Rotating Lobe
In the rotating lobe design, the two lobes rotate in opposite directions. They are geared together so as to maintain a fixed relative position,
so a fixed volume of liquid is displaced by each revolution. These meters are normally built for 2 – 24 inches pipe sizes.
The advantages of this lobe design include good repeatability at high flows, the availability of a range of materials of construction, and
high operating pressures upto 80 barg and temperatures upto 200 °C.
The disadvantages of the rotating lobe design include loss of accuracy at low flows, as well as large size, heavy weight, and high cost.

Rotating Impeller
The rotating impeller design has 2 moving parts: two impellers, which are made of wear, abrasion, corrosion resistant thermoplastics.
In operation, a proximity switch senses the passage of magnets implanted in the impeller lobes and transmits the resultant pulses to a counter.
Units are available from ½” to 4” line sizes. High operating pressures upto 200 barg and high operating temperatures upto 200 °C.
The design is suited for high-viscosity operation and the precision and rangeabilities are also high.
Nutating Disk
In the nutating disk design, the moving assembly, which separates the fluid into volume increments, consists of a radially slotted disk
with an integral ball bearing and an axial pin. This mechanism fits into the meter and divides the metering chamber into four volumes:
two above the disk on the inlet side and two below the disk on the outlet side.
As the liquid flows through the meter, the pressure drop from inlet to outlet causes the disk to wobble, or nutate.
For each cycle, the meter displaces a volume of liquid equal to the volume of metering chamber minus the volume of disk assembly.
The end of an axial pin, which moves in a circular motion, drives a cam that is connected to a gear train and to flow totalizing register.
This flow meter measures the liquids with an error range of about 2% of actual flow. It is built only for smaller pipe sizes.
Its temperature range is from –150 to 120°C, and its maximum working pressure rating is 15 barg.

Rotating Vane
This flow meter has spring-loaded vanes that entrap increments of liquid between the eccentrically mounted rotor and the casing and
transport them from the inlet to the outlet, where they are discharged as a result of the decreasing volume.
The rotation of the vanes moves each flow increment from inlet to outlet and discharge.
This type of meter is widely used in the petroleum industry and is used for such varied services as gasoline and crude oil metering.
Accuracy is ±0.1%. The meter is built from variety of materials, can be used at temperatures & pressures upto 200 °C and 70 barg.

Helix (for viscous fluids)


The helix type design utilizes two uniquely nested, radially pitched helical rotors as the measuring elements.
The flow forces the helical gears to rotate in the plane of the pipeline. An optical or magnetic sensor is used to encode a pulse train proportional
to the rotational speed of the helical gears. The forces required to make the helices rotate are relatively small.
Close machining tolerances ensure minimal slippage and thus high accuracy.
The design of the sealing surfaces provides a ratio of longitudinal to lateral sealing that minimizes pressure drop with high-viscosity liquids.
The large inlet size of the progressive cavity allows for passage of gels, fines, and even undissolved or hydraulically conveyed solids.
The helix type design meter can measure flow rates from 1 to 4000 GPM. This flow sensor is available in sizes from 1.5 - 10 inches and
can operate at temperatures up to 300 °C and at pressures up to 200 barg.
It is a high-pressure drop device requiring a minimum of 10 PSID for its operation. Its turndown can reach 100:1, the accuracy is 0.5%.
Available design variations include versions that are heated to maintain line temperatures for metering melted solids or polymers.
This meter is suited for high-viscosity (above 1000 cps) and for slurry services. It is recommended that the process fluids be filtered by
mesh size 30 filters before they enter helix flow meter.

Advantages
Good Accuracy 0.5%
Accuracy is unaffected by upstream pipe conditions
Turndown or Rangeability is 100:1
Very good repeatability
Suitable for high viscosity fluids

Disadvantages
Regular maintenance and service required
Moving parts subject to wear
High pressure loss
Not suitable for dirty, non-lubricating or abrasive liquids
Expensive particularly in large diameters
LEVEL
Level Measurement Techniques
Differential Pressure (DP) Level
 Dry Leg
 Wet Leg
 Balanced System (Remote Seals)
 Tuned System (Remote Seals)

Displacer
 Torque Tube
 Spring Operated

Radar
 Contact Radar (Guided Wave Radar)
 Non-Contact Radar

 Transit Time or Time of Flight (Pulsed)


 Continuous (FMCW, Frequency Modulated Continuous Wave)

 Low Frequency Radar


 High frequency Radar

Ultrasonic

Conductivity

Capacitance

Magnetic

Float / Tilt Float / Tape Float

Vibration
 Vibrating Reed
 Vibrating Probe
 Vibrating Fork

Reflex

Tubular or Transparent
DIFFERENTIAL PRESSURE (DP)
Open Tanks
Differential Pressure (DP) method of liquid level measurement uses a DP detector connected to the bottom of tank being monitored.
The higher pressure, caused by fluid in tank, is compared to lower reference pressure (usually atmospheric).
The tank is open to the atmosphere; therefore, it is necessary to use only the high pressure (HP) connection on the DP transmitter.
The low pressure (LP) side is vented to atmosphere; the pressure differential is hydrostatic head, or weight of the liquid in the tank.
The maximum level that can be measured by the DP transmitter is determined by the maximum height of liquid above transmitter.
The minimum level that can be measured is determined by the point where the transmitter is connected to the tank.
P = ρ g h
(P is the pressure, ρ is the density of the liquid, g is the acceleration due to gravity, h is the height of the liquid level)
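For the open-tank case, the hydrostatic relation P = ρ·g·h can be inverted to compute level directly from the measured gauge pressure, as in this short sketch. The liquid density and measured pressure are assumed example values.

```python
G = 9.80665  # standard gravity, m/s2

def level_from_dp(dp_pa, density_kgm3):
    """h = P / (rho * g): level above the HP tap of an open (vented) tank."""
    return dp_pa / (density_kgm3 * G)

# Assumed example: 14.7 kPa measured with water (1000 kg/m3) -> ~1.5 m of level
print(f"{level_from_dp(dp_pa=14_700, density_kgm3=1000.0):.2f} m")
```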

Closed Tanks or Pressurized Tanks


Not all tanks or vessels are open to the atmosphere. Many are totally enclosed to prevent vapors or steam from escaping or to allow
pressurizing the contents of the tank. When measuring the level in a tank that is pressurized, or the level that can become pressurized
by vapor pressure from the liquid, both the high pressure and low pressure sides of the DP transmitter must be connected.
Dry Leg Design
When measuring level in pressurized tanks, same d/p design (motion balance, force balance, or electronic) are used as on open tanks.
It is assumed that the weight of the vapor column above the liquid is negligible.
On the other hand, the pressure in the vapor space cannot be neglected, but must be relayed to the low pressure side of the d/p cell.
Such a connection to the vapor space is called a dry leg design, and is used when process vapors are non-corrosive, non-plugging, and
when their condensation rates, at normal operating temperatures, are very low.
A dry leg design enables the d/p cell to compensate for the pressure pushing down on the liquid surface in a closed pressurized tank,
in the same way as the effect of barometric pressure is canceled out in open tanks.
It is important to keep reference leg dry because accumulation of condensate / other liquids would cause error in level measurement.
Wet Leg Design
When the process vapors condense at normal ambient temperatures, the reference leg is filled to form a wet leg design.
If condensate is used to fill the reference leg, a condensate pot is mounted at the high level connection of the tank, at the top of the vapor space.
The condensate pot must be mounted slightly higher than the high level connection (tap) so that it maintains a constant condensate level.
Excess liquid will drain back into the tank. It is desirable either to install a level gauge on the condensate pot or to use a sight flow indicator in
place of the pot, so that the level in the pot can conveniently be inspected.
If the process condensate is unstable, or undesirable to be used to fill the wet leg, then reference leg can be filled with an inert liquid.
In this case, two factors must be considered. First, the specific gravity (SG) of the inert fluid and the height (h) of the reference column must be determined,
and the d/p cell reading must be offset by the equivalent of the hydrostatic head of that column [(SG)(h)].
Second, it is desirable to provide a sight flow indicator at top of wet leg so that height of that reference leg can be visually checked.
Any changes in leg fill level (due to leakage or vaporization) introduce error into level measurement. If specific gravity of filling fluid
for the wet leg is greater than that of the process fluid, the high pressure side should be connected to reference leg and low to tank.
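A hedged worked example of how the wet-leg head shifts the calibration range when the reference leg is connected to the LP side (the specific gravities and heights below are assumed):

# DP transmitter range for a wet-leg installation, in mmH2O (assumed example values)
SG_PROCESS = 0.9      # specific gravity of the process liquid (assumed)
SG_FILL    = 1.1      # specific gravity of the wet-leg fill fluid (assumed)
SPAN_MM    = 2000     # level span above the lower (HP) tap, mm (assumed)
WET_LEG_MM = 2000     # height of the wet leg above the HP tap, mm (assumed)

def dp_mmh2o(level_mm):
    # DP = process head on the HP side minus the constant wet-leg head on the LP side
    return level_mm * SG_PROCESS - WET_LEG_MM * SG_FILL

# An elevated-zero range: roughly -2200 mmH2O at 0% level to -400 mmH2O at 100% level
print(round(dp_mmh2o(0)), round(dp_mmh2o(SPAN_MM)))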

Flat Diaphragm, Extended Diaphragm, Chemical Seals (Remote Seals) for Open Tanks & Closed Tanks
When the process fluid is a sludge, viscous, corrosive or is otherwise hard to handle, the goal is to isolate the process from d/p cell.
A flat diaphragm is used on tank nozzle so that d/p cell can be removed for cleaning / replacement without taking tank out of service.
If it is acceptable to take the tank out of service when d/p cell removal is needed, an extended diaphragm design can be considered.
In this case, the diaphragm extension fills the tank nozzle so that the diaphragm is flush with the inside surface of the tank.
This eliminates dead ends or pockets where solids can accumulate and affect the performance of the DP cell.
Flat & Extended diaphragm type d/p cells, and Chemical Seals (Remote Seals) are available to protect d/p cells under these conditions.
Chemical seals or diaphragm pressure seals, are available with fill liquids such as silicone, water, glycol, alcohol, and various oils.
These seals are used when plugging or corrosion can occur on both sides of the cell.
A range of corrosion-resistant diaphragm & lining materials is available. Teflon® lining is used to minimize material build-up/coating.
Capillary tube lengths should be as short as possible and the tubes should be shielded from the sun.
A low thermal expansion filling fluid should be used or ambient temperature compensation provided, as discussed above for wet leg designs.
If seals leak, maintenance of these systems is done at the supplier's factory due to the complex evacuation & backfilling procedures involved.
Selection of Chemical Seals or Remote Seals
Seal systems provide a reliable process measurement and prevent the process medium from contacting the transmitter diaphragm.
Transmitter/diaphragm seal systems should be considered when:
• Process temperature is outside of normal operating ranges of transmitter and cannot be brought into limits with impulse piping.
• The process is corrosive and would require frequent transmitter replacement or specific exotic materials of construction.
• The process contains suspended solids or is viscous and may plug the impulse piping.
• There is need to replace wet/dry leg to reduce maintenance, where reference leg is not stable or often needs to be refilled/drained.
• The process medium may freeze or solidify in the transmitter or impulse piping.

PERFORMANCE CONSIDERATIONS
Attaching a diaphragm seal to a transmitter changes the performance of the transmitter.
The seal system will have additional temperature effects and response time depending on the system configuration.
Temperature Effects
Seal system temperature effects are caused by changes in volume and density of the fill fluid in the seal system.
Changes in volume are known as Seal Temperature Effects and occur when the fill fluid expands/contracts with fluctuations in process or
ambient temperature. This change in fill volume drives a change in the internal pressure of the transmitter/seal system.
The density of the fill fluid also changes with temperature fluctuations. Changes in density are described as Head Temperature Effect
as they represent a change in head zero offset reference. Both effects are combined to get total temperature effect for a seal system.
Seal Temperature Effects
Three primary factors affect the seal temperature effects of a diaphragm seal system:
Diaphragm Stiffness
Diaphragm stiffness is a critical parameter affecting the seal temperature effect. As the fill fluid expands/contracts due to temperature changes,
the seal diaphragm stiffness determines the amount of volume change that is absorbed by the seal diaphragm & the amount exerted as back pressure
on the sensor module. This back pressure acting upon the sensing diaphragm of the transmitter represents output temperature error.
Diaphragm stiffness is affected by the diaphragm surface diameter, material of construction, thickness, and convolution pattern.
Generally, smaller diameter diaphragms are stiffer than larger diameter diaphragms, and have a larger seal temperature effect error
when the fill fluid expands/contracts with temperature changes. The larger diameter diaphragm, which is less stiff, can accommodate more
change in fill volume and has a smaller error than the smaller diameter diaphragm.
Fill Fluid
The expansion characteristics & volume of the fill fluid affect seal temperature effects. All fill fluids expand/contract with changes in temperature.
The coefficient of thermal expansion defines the amount of change & is represented in cubic centimeters of expansion per cubic centimeter
of fluid volume per degree Fahrenheit (cc/cc/°F). Selecting a fill fluid with a smaller coefficient of thermal expansion will minimize these effects.
Seal System Volume
The amount of fluid in a seal system will determine the potential amount of volume expansion.
Choosing appropriate direct mount or capillary type will determine overall volume in system and resulting seal temperature effects.
By selecting the right connection type for a high or low side connection, you can optimize the seal temperature effects for the system.
Head Temperature Effects
Head temperature effects are dependent on the change in ambient temperature, the fill fluid specific gravity (ratio of the density of the fluid to
the reference density of water), and the vertical distance between the process connections. When a transmitter/seal system is installed,
the initial head zero offset is a function of the vertical distance between the two process connections multiplied by the specific gravity of the fill fluid.
This offset value is read as a negative differential pressure value, as the fluid column is pushing on the low side of the transmitter and
pulling away from the high side. Ambient temperature fluctuations cause the density of the fill fluid to change, and thus change the head zero offset.
When the ambient temperature increases, the fill fluid gets lighter, reducing the head zero offset and causing a positive shift in transmitter output.
When the ambient temperature decreases, the fill fluid gets denser, increasing the head zero offset and causing a negative shift in transmitter output.
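A hedged worked example of the head zero offset and its shift with ambient temperature (the vertical distance and fill-fluid specific gravities below are assumed):

# Head zero offset of a remote-seal DP system (assumed example values)
H_VERT_IN      = 100.0    # vertical distance between process connections, inches (assumed)
SG_FILL_AT_CAL = 0.934    # fill fluid specific gravity at the calibration temperature (assumed)
SG_FILL_HOT    = 0.915    # fill fluid specific gravity at a higher ambient temperature (assumed)

offset_cal = -H_VERT_IN * SG_FILL_AT_CAL     # negative: the fill column pushes on the low side
offset_hot = -H_VERT_IN * SG_FILL_HOT

# About -93.4 inH2O initially, -91.5 inH2O when hot: a +1.9 inH2O positive shift as the fluid gets lighter
print(round(offset_cal, 1), round(offset_hot, 1), round(offset_hot - offset_cal, 1))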
Total Temperature Effects
Seal Temperature Effects and Head Temperature Effects are combined to get the Total Temperature Effects for the seal system.
A balanced seal system, consisting of identical seals, capillaries and fill fluid on both the high & low side, will act to cancel the seal temperature effects created
on either side of the transmitter. The resulting total performance will only consist of the head temperature effect, because a balanced system
does not affect the head temperature effect. A preferred approach is a Tuned System, in which the seal system is selected so that its
seal temperature effects partially or completely cancel out the head temperature effects, resulting in improved system performance.

Time Response
Adding diaphragm seals to a transmitter increases the overall response time of a transmitter/seal system.
Time response varies with temperature, pressure, capillary length, capillary inside diameter (ID), fill fluid viscosity, transmitter type.
Applications that change quickly like flow or level on small, narrow tanks require a faster response time.
Direct vs. Capillaries: If time response is important, choose a direct mount connection when possible to minimize connection length.
For capillaries, longer capillary provides greater distance for pressure signal to travel, so specify only length required for installation.
Capillary ID: A smaller diameter capillary (ID) creates more restrictions and slows down the pressure transport.
A large diameter capillary (ID) provides a faster response time.
Fill Fluid Viscosity: The viscosity of the fill fluid is a measure of its resistance to flow and is temperature dependent.
Choosing a less viscous fill fluid reduces the response time, especially when using longer capillaries or in colder conditions.
DISPLACER

Displacer type liquid level measurement is based on Archimedes' principle: the buoyancy force exerted on a body immersed in liquid
is equal to the weight of the liquid displaced. If the cross sectional area of the displacer and the density of the liquid are constant,
then a change in liquid level brings about a corresponding change in the apparent weight of the displacer.
A displacement body, also known as a “displacer”, is immersed in the liquid and is subjected to an upthrust based on Archimedes' principle,
this being proportional to the mass of liquid displaced. Every change in the apparent weight of the displacer corresponds to a certain change in the liquid level.
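A minimal sketch of the Archimedes relation for a displacer (the displacer size, mass and liquid density below are assumed):

# Apparent weight of a displacer versus liquid level (assumed example values)
import math

DIAMETER_M = 0.075     # displacer diameter, m (assumed)
LENGTH_M   = 0.81      # displacer length / measurement span, m (assumed)
MASS_KG    = 6.0       # displacer mass in air, kg (assumed)
RHO_LIQ    = 800.0     # process liquid density, kg/m3 (assumed)
G          = 9.80665

AREA_M2 = math.pi * DIAMETER_M ** 2 / 4.0

def apparent_weight_n(level_m):
    """Apparent weight (N) when the liquid covers level_m of the displacer."""
    submerged = min(max(level_m, 0.0), LENGTH_M)
    buoyancy = RHO_LIQ * G * AREA_M2 * submerged    # weight of the displaced liquid
    return MASS_KG * G - buoyancy

# About 58.8 N uncovered and about 30.8 N fully covered in this example
print(round(apparent_weight_n(0.0), 1), round(apparent_weight_n(LENGTH_M), 1))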

There are two types of displacer transmitters in common use:


 Torque Tube
 Spring Operated
Both types of transmitters have a cylindrical displacer element of a length corresponding to the range of level measurement required and
weighted to sink in the liquid being measured.
The difference between the two types of displacer transmitters centers on the mechanics of transmitting the displacer movement (caused by
the buoyancy force) from the wet side of the instrument to the dry side, where it can be translated into an electronic signal.
Torque Tube Design
The displacer is suspended on a knife edge hanger at the end of a cantilever arm, the other end of which is welded to the torque tube.
The torque-tube is hollow tube welded at one end to instrument flange and is put in torsion by weight of displacer on cantilever arm.
A rod, welded to the torque tube at one end but free at its other end, sits inside the torque tube and is thus caused to rotate axially as
the torque tube rotates.
When displacer rises or falls, the corresponding angular displacement of the torque rod is linearly proportional to displacer movement
and therefore to the liquid level. The knife-edge bearing support minimizes friction and a limit stop on torque arm is used to prevent
accidental over-stressing of the torque tube. However, it is a bulky instrument which can be difficult to install.
Spring Operated Design
The change in the apparent weight of the displacer is transmitted directly, through a spring from which the displacer is usually hung.
When the displacer rises or falls with changing liquid level the spring will relax or extend accordingly as dictated by the formula:
Spring Extension or Contraction = Force / Spring Rate.
A core piece is located on top of a rod attached to the spring and is thus caused to rise or fall inside the pressure tube.
A linear variable differential transformer (LVDT) is situated outside pressure tube, totally isolated from process pressure and vapour.
Movement of the core within the fields of the LVDT causes an imbalance which the instrument electronics detects & is able to convert
into a signal proportional to the liquid level.
The key advantages of the spring operated transmitter are that it has a much smaller mounting envelope than the torque-tube design,
is much lighter and easier to install, and does not have critical welds under stress.

The displacer diameter generally depends on the density of process liquids, process operating conditions & level measurement span.
Internal displacers are normally used in vessels where leakage could go unnoticed and where the vessel does not have any internal obstructions.
It is good practice to install internal displacer in stilling wells to eliminate turbulence. The stilling well is fabricated from piece of pipe
which has a number of vertical slots along its length.
External displacers are used where the vessel cannot be depressurized and drained in order to perform maintenance on the displacer.
An external displacer is installed in an external chamber mounted outside the tank, isolated from process by means of isolating valves.
RADAR

CONTACT RADAR (Guided Wave Radar) (Transit Time/Time of Flight) (Pulsed)


The contact type, guided wave radar, level measurement works on principle of Time Domain Reflectometry (TDR) or Time of Flight.
Radar pulses are guided down probe submerged in process media. When radar pulse reaches media with different dielectric constant,
part of energy is reflected back to transmitter and remaining passes down. The time difference between transmitted (reference) and
the reflected pulse is converted into a distance from which the total level or interface level can be calculated.
The intensity of the reflection depends on the dielectric constant of the process. The higher the dielectric constant value, the stronger the reflection.
The transmitter measures time from when the pulse is transmitted to when it is received:
Half of the time is equivalent to the distance from reference point of the transmitter (the flange facing) to the surface of the product.
The time value is converted into an output current of 4 - 20 mA or digital signal.
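A minimal sketch of this time-of-flight conversion (the tank height below is assumed; propagation along the probe is taken at the speed of light):

# GWR/TDR round-trip time to level (assumed tank geometry)
C = 299_792_458.0        # speed of light, m/s
TANK_HEIGHT_M = 6.0      # transmitter reference (flange) to tank bottom, m (assumed)

def level_from_round_trip(t_seconds):
    """Distance to surface = c * t / 2; level = tank height - distance."""
    distance_to_surface = C * t_seconds / 2.0
    return TANK_HEIGHT_M - distance_to_surface

# A 20 ns round trip puts the surface about 3.0 m below the flange, i.e. ~3.0 m of liquid
print(round(level_from_round_trip(20e-9), 2))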

Guided wave radar technology sends the radar pulse down a probe that extends into the tank contents, either liquid or solid.
The pulse hits the surface and is reflected back up the probe to sensor, where the transit time is translated into a distance.
In TDR for process level measurement micro-pulses are continuously transmitted along a probe or “wave guide” at the speed of light.
As soon as pulses reach the material surface they reflect back to sensor electronics unit. The time-of-flight of the pulses is calculated
and directly related to the distance from the point at which the sensor is mounted on the top of the vessel to material surface (level).
The greater the dielectric difference between air and process medium being measured, the greater the amplitude of the reflection.

The probe concentrates the energy pulse much as speaking through a tube concentrates the sound.
This is particularly useful under certain conditions where a through-air pulse is less reliable:
 Liquids that have low dielectric values (e.g. propane, butane)
 Low specific gravity liquids
 Turbulent liquids
 Internal physical obstructions (e.g., baffles)
 Heavy foam layers
 Problematic solids

Guided-wave probe is smaller, which allows it to be mounted through a smaller opening or where equipment is more congested.
While the probe helps with the signal, it can introduce its own problems:
 In some extreme cases no product contact is permitted, which demands another technology entirely (Non-Contact Type)
 A unit can only measure as far as probe extends. There is no theoretical limit, but practical issues when they get very long.
 Probes can be either rigid or flexible. An application with solids may not be suitable for a flexible probe.
Similarly, liquid with high turbulence may cause a flexible probe to move around.
 Sticky material can build up on probe.

Applications
Dust, foam, vapor, boiling/agitated surfaces, and changes in pressure, temperature or density do not have an effect on device performance.
In GWR technology, several probe styles are available and the application, length, and mounting restrictions influence their choice.
Unless a coax-style probe is used, probes should not be in direct contact with the metallic object, as that will impact the signal.
Twin and coaxial probes are susceptible to clogging & build up. If application involves liquids that tend to be dirty, sticky or can coat,
then only single lead probes should be used. For such applications, devices offering signal quality diagnostics can help user determine
if the probe needs to be cleaned and allows maintenance to be scheduled only when needed.
In general, GWR is not suitable for extremely viscous or corrosive products.
If GWR is used with viscous fluids & is installed in a bypass chamber, then the chamber should be heat traced & insulated to ensure fluidity.
Furthermore, the connections from the tank to the chamber and the chamber's diameter should be sufficiently large to allow good fluid flow.
NON-CONTACT RADAR (Transit Time/Time of Flight - Pulsed) (FMCW Frequency Modulated Continuous Wave - Continuous)

Transit Time/Time of Flight - Pulsed


Non-Contact, Pulsed Radar level measurement is based on the principle of measuring the time required for the microwave pulse and
its reflected echo to make a complete return trip between the non-contacting transducer and the material level.
Then the transceiver converts this signal electrically into a distance/level and presents it as an analogue or digital signal.

Antenna Basics
An important aspect of an antenna is directivity. Directivity is ability of antenna to direct the maximum amount of radiated microwave
energy towards the liquid or solid. Two kinds of antennas are used in process industry: Rod Antenna and Horn Antenna.
The rod or horn antenna is connected to the tail of the instrument to ensure outstanding focusing and to direct maximum amount of
microwave energy toward the level being measured and to capture energy from the return echoes.
The function of a antenna in radar level transmitter is to direct maximum amount of microwave energy towards level being measured
& to capture maximum amount of energy from return echoes for analysis within electronics. No matter how well antenna is designed,
there will be some microwave energy being radiated in every direction. The goal is to maximize the directivity.
A measure of how well the antenna is directing the microwave energy is called the 'antenna gain'.
Antenna gain is a ratio between the power per unit of solid angle radiated by the antenna in a specific direction to the power per unit
of solid angle if the total power were radiated isotropically (equally in all directions).
Antenna gain is dependent on the square of the diameter (D) of the antenna as well as being inversely proportional to the square of
the wavelength. Antenna gain also depends on the aperture efficiency of the antenna. Therefore the beam angle of a small antenna at
a high frequency is not necessarily as efficient as the equivalent beam angle of larger, lower frequency radar.
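This behaviour corresponds to the standard aperture-antenna approximation, gain ≈ η·(π·D/λ)²; the sketch below applies it, with the efficiency value and the 70·λ/D beamwidth rule of thumb taken as assumptions:

# Aperture-antenna gain and approximate beamwidth (illustrative assumptions)
import math

C = 299_792_458.0

def antenna_gain_db(d_m, freq_hz, efficiency=0.6):
    wavelength = C / freq_hz
    gain = efficiency * (math.pi * d_m / wavelength) ** 2
    return 10.0 * math.log10(gain)

def beamwidth_deg(d_m, freq_hz):
    return 70.0 * (C / freq_hz) / d_m       # approximate -3 dB beamwidth

# A 100 mm antenna at 26 GHz gives higher gain and a narrower beam than the same antenna at 6.3 GHz
print(round(antenna_gain_db(0.10, 26e9), 1), round(beamwidth_deg(0.10, 26e9), 1))
print(round(antenna_gain_db(0.10, 6.3e9), 1), round(beamwidth_deg(0.10, 6.3e9), 1))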

Rod Antenna
Dielectric rods are made of polypropylene (PP) or Teflon (PTFE) and can be used in vessel nozzles as small as 40 mm (1½").
The microwaves travel down the inactive parallel section of the rod towards the tapered section. The tapered section of rod focuses
the microwaves toward the liquid/solid being measured. It is imperative that all of the tapered section of the rod be inside the vessel.
If a rod type antenna is coated in viscous, conductive, adhesive product, the antenna efficiency will deteriorate.
Rod antennas are used for liquids & slurries, while horn antennas are recommended for powder & granular applications.

Horn Antenna
Horn antenna or Cone antenna is mechanically robust and generally is virtually unaffected by condensation and product build up.
There are variations in the internal design of horn antennas. The operating principal is that the microwaves that are generated within
the microwave module are transmitted down a high frequency cable for coupling into a waveguide.
The metal waveguide directs microwaves towards low dielectric material (PTFE) machined to pointed cone and then horn of antenna.
The microwaves are emitted from this pointed cone in a controlled way and then are focused towards the target by the metal horn.
After reflection from product surface, the returning echoes are collected within horn antenna for processing within the electronics.

Process Seal Antenna


A type of horn antenna used in the most aggressive or demanding applications, where process fluid/vapours cannot be allowed to contact the antenna.
The dish of the process seal antenna is made of PTFE or PFA and is the only part that comes in contact with the process.
Its unique seal prevents vapours reaching horn antenna and allows removal of radar without breaking the process seal of the vessel.

Types
Low Frequency : 6.3 GHz : C-Band
High Frequency : 26 GHz : K-Band
Continuous (FMCW - Frequency Modulated Continuous Wave)

The level of the product in the tank is measured by radar signals transmitted from the antenna at the tank top.
After the radar signal is reflected by the product surface the echo is picked up by the antenna.
As the signal is varying in frequency the echo has a slightly different frequency compared to the transmitted signal.
The difference in frequency is proportional to the distance to the product surface, and can be accurately calculated.
This method is called FMCW (Frequency Modulated Continuous Wave) and is used in all high performance radar transmitters.

Radar level transmitters make a measurement of the distance between the antenna and material surface by measuring the time taken
for a radio signal to travel to the surface and to be reflected back.
The speed of propagation in ullage space varies very little with temperature, pressure or in presence of vapour, hence the relationship
between distance and time is precisely known and the level in the vessel can be calculated.
Pulsed radar systems transmit a short burst of power, and the time taken for the signal to return to the instrument is measured directly.
The duration of the burst must necessarily be very short, so that the transmission will have finished before the reflected signal returns.
However, microwaves travel at the speed of light & the distance between the antenna & product is relatively small, making the time interval very small
and difficult to measure accurately.
FMCW transmitters operate differently and deliver more microwave energy to the measuring surface, creating a bigger return echo.
Rather than attempting to measure the transit time directly, the electronics measures the frequency difference between the transmit and echo signals.
FMCW radar systems transmit a continuous signal, the frequency of signal is precisely linearly modulated so that received signal has a
different frequency to the transmitted signal. The transmitter electronics continuously monitors the difference in frequency between
transmit and receive signals which is proportional to the time delay.
If the product surface is moving away from the radar antenna, the return echo will arrive later and the frequency difference will be greater.
This method of using continuous microwave signal, then calculating rather than measuring such small time differences (time of flight),
makes FMCW highly reliable and repeatable in most process measurement applications.

The frequency difference is directly proportional to distance. A large frequency difference corresponds to large distance & vice versa.
The frequency difference Δf is transformed by a Fourier transformation (FFT) into a frequency spectrum, and the distance is calculated from the spectrum.
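A hedged sketch of this relation, distance = c·Δf·T/(2·B), with the sweep bandwidth and sweep time assumed:

# FMCW beat frequency to distance (assumed sweep parameters)
C = 299_792_458.0
SWEEP_BW_HZ  = 1.0e9      # frequency sweep bandwidth B (assumed)
SWEEP_TIME_S = 1.0e-3     # sweep duration T (assumed)

def distance_from_beat(beat_hz):
    """Larger beat frequency corresponds to larger distance to the product surface."""
    return C * beat_hz * SWEEP_TIME_S / (2.0 * SWEEP_BW_HZ)

# A 20 kHz beat frequency corresponds to roughly 3.0 m to the product surface
print(round(distance_from_beat(20_000), 2))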

Limitations
Most non-contact type radar level sensors cannot accurately measure distances that are close to the sensor itself.
The transmitter is therefore configured to disregard measurements within this distance. This region is often called the blanking distance.

Dependence on dielectric constant of material is another limitation. This affects the reflectivity of the radar wave.
Lower dielectric value will lessen amplitude or strength of microwave signal reflecting back to antenna mounted at top of the vessel.

The higher the dielectric value, the more reflective the signal will be, thus enhancing the strength & validity of performance for the radar transmitter.
There are also some pressure limitations on radar technology for the antenna seal as well as temperature limitations at flange area
due to the material type of emitter or propagation tip enclosed within the horn or use of the rod style antenna.
ULTRASONIC

Ultrasonic level measurement is based on sound wave emission (transmitter) & reflection of a sound wave pulse (echo) to a receiver.
The distance travelled by the pulse is equal to the travel time multiplied by the speed of sound.
Measurement of the transit time (time of flight) of this sound pulse provides a means for level detection.
Sound travels at 331 m/s in air at 0°C. It varies with air temperature, so a correction factor of about 0.17% per °C rise in temperature shall be applied.

Ultrasonic transmitters use sound waves while radar transmitters use electromagnetic radiation (similar to light) to measure distance.
Radar does not require a medium for transmission of microwaves, but ultrasonic requires a medium for transmission of sound waves.
Hence radar can work in a vacuum, whereas ultrasonic requires a transferring medium like air to propagate the sound waves.
Ultrasonic level measurement method is based on the fact that sound passes through a medium with the known propagation speed,
depending on density and temperature of that medium. The pulse is generated and then travels through the medium (typically air).
When the pulse hits the surface of the material, it is reflected back to the transducer to be measured. The distance to the level surface and the
level height can be calculated from the reflection time and the speed of the sound wave.
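A minimal sketch of this transit-time calculation with a temperature-compensated speed of sound (the tank height is assumed; the 331.3 + 0.606·T approximation is roughly consistent with the 0.17% per °C correction noted above):

# Ultrasonic echo time to level with temperature compensation (assumed tank height)
TANK_HEIGHT_M = 4.0                       # transducer face to tank bottom, m (assumed)

def speed_of_sound(temp_c):
    return 331.3 + 0.606 * temp_c         # m/s, approximately +0.17-0.18% per degC

def level_from_echo(t_seconds, temp_c):
    distance_to_surface = speed_of_sound(temp_c) * t_seconds / 2.0
    return TANK_HEIGHT_M - distance_to_surface

# A 14.5 ms echo at 20 degC puts the surface about 2.5 m down, i.e. ~1.5 m of liquid
print(round(level_from_echo(0.0145, 20.0), 2))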

Selection
Distance to be measured: system developed for very high accuracy & short distances will not be powerful enough for large distances.
Composition & properties of the product surface : If there is a layer of foam present on liquid surface or if bulk material is composed
of fine granules, then less energy is reflected & hence greater transmission power is required to receive same distance measurement.
Pressure: There will be no significant variation in sound travel time up to the maximum sensor pressure of about 3 bar. At high pressures, the membrane
is not able to move to its fullest extent because of the force exerted on it.
The system cannot work at pressures well below 760 mm of Hg (i.e. under vacuum), as the signal transport medium is absent.
Temperature: Each echo sensor is fitted with a temperature transducer which electronically compensates for temperature (0.17% per °C).
In the case of liquids with a high vapour pressure, the vapour will not be homogeneous but will form layers.
In such highly variable temperature conditions, it might be advisable to mount the temperature transducer separately.
Gas composition: The speed of propagation varies with gas composition, but amplifier processes echo signal on basis of sound in air.
So the instrument has to be calibrated considering the speed of propagation in the gas or vapour other than air.

The reflection of sound waves depends mainly on the density of the reflecting surface. The denser the surface structure, the stronger the reflection of
the sound waves will be. A strong reflection is always generated at the interface between liquid & air, which is an ideal reflector for sound waves.
Water has a dielectric constant value of 80, so it has good reflective properties (for radar); nearly the complete strength of the signal is reflected by the surface.
But media with smaller DK values absorb a part of the energy and the reflected signal is correspondingly weaker.
The reflective characteristics are determined not only by the medium itself however, but also disturbing effects on the surface, such
as foam, floating dirt or waves. Both measuring principles radar and ultrasonic experience signal damping due to energy absorption,
more or less according to consistency of foam. The degree of damping is largely dependent on structure and bubble size of the foam.
In principle, considerably stronger damping is caused by fine-grained foam with high water content than by large bubbles.
Because sound waves are reflected strongly by individual bubble surfaces, ultrasonic systems are more affected than radar systems.

While the signal running time of microwaves is unaffected by temperature, sound waves are subject to considerable influence.
The propagation velocity of sound waves changes by 1.6% per 10°C temperature variation. This is a significant measurement error.
Temperature compensation is done. A temperature sensor in electro acoustic transducer measures ambient temperature and the
calculation of the signal running time is corrected accordingly. Since temperature of electro acoustic transducer does not correspond
to the temperature of the entire measuring range however, a possible source of error creeps into the ultrasonic measuring principle.
CONDUCTANCE

Conductance type level measurement is generally used for Point level detection of liquids in the vessels or tanks.

The conductance method of liquid level measurement is based on the property of electrical conductance of the measured material,
which is usually a liquid that can conduct a current with a low-voltage source.
Hence the method is also referred to as a conductivity system. Conductance is a low-cost, simple method to detect level in a vessel.

The presence of a conductive product will cause a change in the resistance between the two conductors.
When the product is not in contact with the probe, the resistance between the probe and the tank wall or the common reference probe
will be very high or even infinite. When the level rises to complete the circuit, the resistance will be relatively low.

One way to set up an electrical circuit is to use a dual-tip probe that will eliminate need for grounding or common from a metal tank.
These are used for point level detection, and detected point also can be the interface between conductive and non-conductive liquid.
Depending on the design, multiple electrodes of different lengths are used with one holder (common) to determine the desired product levels.
Multiple probes can be used to detect maximum, minimum or any desired levels.

Conductive level sensors are ideal for the point level detection of a wide range of conductive liquids such as water, and are especially
well suited for highly corrosive liquids such as caustic soda, hydrochloric acid, nitric acid, ferric chloride, and similar liquids.
For conductive liquids that are corrosive, the sensor electrodes need to be constructed from titanium, Hastelloy-C, or stainless steel
and insulated with spacers, separators or holders of ceramic, polyethylene and Teflon-based materials.

Since corrosive liquids become more aggressive as temperature & pressure increase, these extreme conditions need to be considered
when specifying these sensors.

Conductive level sensors use a low-voltage, current-limited power source applied across separate electrodes.
The power supply is matched to conductivity of liquid, with higher voltage version designed to operate in less conductive mediums.
Conductive sensors are extremely safe because they use low voltages and currents. Since the current and voltage used are inherently small,
for personal safety reasons, the technique can be made “Intrinsically Safe” to meet international standards for hazardous locations.
Conductive probes have the additional benefit of being solid-state devices and are very simple to install and use.

In some liquids and applications, maintenance can be an issue. The probe must continue to be conductive.
If buildup insulates the probe from the medium, it will stop working properly.

The advantages of conductivity switch include low cost, simple design & elimination of moving parts in contact with process material.
The three probe element design can also provide differential level control. (On – Off control) (High and Low liquids level).

The disadvantages include possibility of sparking when the liquid level is close to the tip of the probe.
Such phenomena are eliminated in the solid state designs, which are rated for intrinsic safety operations.
The conductivity switches are also limited to conductive (below 10⁸ Ω resistivity) & non-coating process applications.
One should also consider the possible harmful side effects of electrolytic corrosion of the electrode.
Electrolysis can be reduced, but not eliminated, by using AC currents.
CAPACITANCE

A capacitance probe determines level of liquid in column or receiver by measuring combined capacitance of liquid and gas (vapor).
As liquid level rises in column, total capacitance value increases.(capacitance of vapor is very small compared to capacitance of liquid.)
This increase is measured by the controlling electronic system and an output control signal is created.

A capacitor consists of two conductors separated by an insulator. We call the conductors plates and refer to the insulator as dielectric.
The very basic nature of a capacitor is its ability to accept and store the electric charge.
Capacitance is measured in farad. A capacitor has 1 farad capacitance, if it stores 1 coulomb charge when connected to 1-volt supply.
Because this is a very large unit, we commonly use one millionth of it, noted as a microfarad.
The electric size in farad of capacitor is dependent on physical dimensions & on type of material (dielectric) between capacitor plates.

The capacitance for the basic capacitor arrangement can be computed from the equation:
C = E (K A/d)
where:
C = capacitance in picofarads (pF)
E = a constant known as the absolute permittivity of free space
K = relative dielectric constant of the insulating material
A = effective area of the conductors
d = distance between the conductors
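In practice the transmitter is calibrated against the empty (air) and full (product) capacitance and the level is interpolated between them; a minimal sketch with assumed calibration values:

# Linear level from measured capacitance (assumed calibration values)
C_EMPTY_PF = 100.0     # capacitance with the probe fully uncovered (air, K close to 1.0)
C_FULL_PF  = 400.0     # capacitance with the probe fully covered by the product
SPAN_M     = 3.0       # probe length / measurement span, m (assumed)

def level_from_capacitance(c_pf):
    fraction = (c_pf - C_EMPTY_PF) / (C_FULL_PF - C_EMPTY_PF)
    fraction = min(max(fraction, 0.0), 1.0)   # clamp to the calibrated range
    return fraction * SPAN_M

print(level_from_capacitance(250.0))          # 1.5 m: probe half covered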

The first conductor can be the vessel wall (plate 1), and the second can be a measurement probe or electrode (plate 2).
Between the two conductors there is an insulating medium or dielectric — a non-conducting material involved in level measurement.
The amount of capacitance is determined not only by the spacing and area of the conductors, but also by electrical characteristic
(relative dielectric constant, K) of the insulating material. The value of K affects the charge storage capacity of the system:
The higher the K, the more charge it can build up. Dry air has a K of 1.0. Liquids and solids have considerably higher values.

A bare, conductive, sensing electrode (probe) is inserted down into a tank to act as one conductor of the capacitor.
The metallic wall of the tank acts as the other conductor.
If the tank is non-metallic, a conductive ground reference must be inserted into tank to act as the other capacitor conductor.
With tank is empty, insulating medium between two conductors is air. With tank full, the insulating material is process liquid or solid.
As the level rises in the tank to start covering the probe, some of insulating effect from air changes into that from process material,
producing a change in capacitance between sensing probes. This capacitance is used to provide a linear measurement of tank level.

When the process material is conductive, the sensing probe is covered with an insulating sheath such as Teflon (PTFE/PFA) or Kynar.
The insulated probe acts as one plate of the capacitor and the conductive process material acts as the other.
The process material being conductive, connects electrically to the grounded metallic tank. The insulating medium or dielectric for
this application is probe’s sheath. As the level of conductive process material changes, a proportional change in capacitance occurs.

As the level rises and material begins to cover the sensing element, the capacitance within the circuit between the probe and the media
(conductive applications) or the probe and the vessel wall (insulating applications) increases. This change in capacitance causes a
bridge imbalance, and the signal is demodulated (rectified), amplified, and an output corresponding to the level is generated.

Capacitance techniques are capable of operation at extremes of temperature and pressure.


Capacitance techniques have problems with materials of varying dielectric constant and with media that coat the sensing element.
MAGNETIC

Magnetic Level Gauge (MLG) consists of a chamber and an internal float in non-magnetic material, compatible with the process liquid.
The float containing a magnetic system, rides on liquid level and is coupled to external visual indication, which comes in two designs.
Design 1:
The simpler / economical design consists of a RED magnetic follower capsule, that moves within a glass tube which is filled with water
(to reduce friction) and can be read against a scale. The indicator tube is hermetically sealed.
Some designs use a nitrogen-purged glass or polycarbonate tube & then fit Ryton caps (to avoid corrosion) on each end to ensure air-tightness.
This hermetic seal eliminates possibility of moisture entrapment & build-up that could negatively impact indicator movement & travel
This allows a smooth path for the indicator, which tracks the float movement to represent the exact level against a calibrated scale.
Design 2:
This design is more expensive and consists of a series of bicolour flappers, WHITE on the front side and a contrasting RED colour on the reverse.
These flappers flip over corresponding to float movement, thus changing their colour from WHITE to RED as the float rises in the chamber
and changing back to WHITE when the float falls in the chamber. As such, the liquid level is represented by an external RED column.
The magnet in each flag causes it to flip one way as the float passes while moving upward & flip the other way as the float passes moving downward.
The accuracy of the bicolour flappers is limited by their width; however, the follower capsule type does not have this limitation.

Options (Swiches & Transmitters)


A mechanical / proximity switch and a transmitter output configuration for remote indication can also be provided if required.
The clamp-on switch is attached to the MLG in the same way as an indication rail. High temperature versions and higher contact ratings are available.
The transmitter is mounted on the outside of the gauge tube, at an angle to the capsule or bicolour flapper indication rail.
The transmitter consists of a sensor tube containing a series of reed switches & resistors and an electronics circuit in the transmitter head.
Level changes cause the magnet inside the float to operate the contacts of the reed/resistor chain. As the float rises and falls within the chamber,
the corresponding reed switch closes, altering the circuit resistance; this resistance is converted into an output signal.
The signal is proportional to the change of liquid level, providing a 4-20 mA output for remote indication of the liquid level in the tank or vessel.
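A hedged sketch of how such a reed-switch / resistor chain could be mapped to a 4-20 mA output (the number of steps, step resistance and span below are assumed, not taken from any particular product):

# Reed-switch / resistor-chain resistance to 4-20 mA (assumed example values)
N_STEPS = 100          # number of reed switches along the sensor tube (assumed)
R_STEP  = 10.0         # resistance added per closed step, ohms (assumed)

def resistance_to_current_ma(r_ohm):
    fraction = r_ohm / (N_STEPS * R_STEP)     # 0.0 (empty) to 1.0 (full)
    fraction = min(max(fraction, 0.0), 1.0)
    return 4.0 + 16.0 * fraction              # map to the 4-20 mA range

# Float halfway up the chamber switches in about half the chain: roughly 12 mA
print(resistance_to_current_ma(500.0))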

The Magnetic Level Gauge (MLG) consists of a chamber and an internal float made of stainless steel, titanium or plastic fitted with a
permanent omni-directional magnet, which moves freely inside the chamber and actuates the magnetic wafers within the indicator.
As the float rises or falls with the liquid level in the chamber, each wafer rotates 180° and changes colour.
Those wafers above the float show white, whilst those level with and below it show red – the indicator then presents a clearly defined and
accurate level of the liquid in the chamber.

The Magnetic Level Gauge (MLG) is still a visual indicator of liquid level, but it utilizes magnetic transmission to couple the position of the float
(housed within an external chamber alongside the process fluid vessel) to moving flags (indicator) housed in a closely adjacent but separate tube that is
totally isolated from the fluid. As the fluid level changes in the float chamber, so it is represented in the indicator tube.
Since the visible flags/indicator avoids direct contact with the process liquid, problems with coating, plating, fouling, fugitive emissions
and hazardous material leaks are completely eliminated. This ensures safe level measurement of liquids that are toxic, corrosive, or flammable.

Float
Floats are specifically calibrated to match the conditions of the vessel. The density, pressure and temperature are taken into account,
ensuring that the level indication is accurate and repeatable. The float can be made of stainless steel, Hastelloy, titanium, or plastic.
A sealed guide-free float carrying a single powerful omni-directional magnet system, provides good performance and reliability.
The float may contain up to 15 rod-magnets, which are arranged around the circumference of the float & held in place with flux rings
to evenly distribute the magnetic pull of the float.

Indication Rail
* A RED magnetic follower capsule, that moves within glass tube which is filled with water and can be read against a calibrated scale.
* A series of bicolour flappers, each of coloured flaps contains a small magnet which rotates through 180° when passed by the float.
The bar magnet design does not lose magnetic field strength even at temperatures of 450°C, guaranteeing operation in extreme applications.
The indication rail magnetic field is interlocked by individual magnets in each of the bicolour flaps, which ensures a stable indication.
The indication rails are available in many configurations, the first indicated colour indicates the liquid level :
 Yellow / Blank Aluminium
 Blue / Blank Aluminium
 Purple / Blank Aluminium
 Green / Blank Aluminium
 Black / Yellow
 Red / Green
 Red / White
Selection
When selecting a magnetic level gauge (MLG), it is important to take into account the strength of the magnetic field of the float.
The magnetic field is the heart of the magnetic level gauge – the stronger the field, the more reliable the instrument will function.
Moreover, field strength as you travel around circumference will have high & low spots as you pass between individual bar magnets.
Process compatibility is assured as the range of materials of construction includes: Stainless Steel, PVC, PP, PVDF, Hastelloy.
Other exotic materials are also available and all gauge designs can be lined or coated with PTFE.
FLOAT / TILT FLOAT / TAPE FLOAT

Float
Float devices operate on the buoyancy Principle, as the liquid level changes, a sealed container (Float) will move correspondingly,
providing its density is lower than that of the liquid.

Magnetically or Mechanically actuated float design


With magnetically actuated float sensors, switching occurs when a permanent magnet sealed inside a float rises or falls with level.
As the float rises or falls, it actuates a proximity switch or a mechanical switch placed inside the tube at a pre-defined desired level.
With mechanically actuated float sensors, switching occurs as a result of the movement of a float against a miniature (micro) switch.
The miniature switch is placed inside the float itself and there is a mechanism by which switch activates due to the movement of float.
For both magnetic & mechanical float level sensors, chemical compatibility, temperature, specific gravity(density), buoyancy, viscosity
affect the selection of the stem and the float.

Float-type sensors can be designed so that a shield protects the float itself from turbulence and wave motion.
Float sensors operate well in a wide variety of liquids, including corrosives.
When used for solvents, one will need to verify that the liquids are chemically compatible with materials used to construct the sensor.
Float-style sensors should not be used with high viscosity (thick) liquids, slurries, sludge or liquids that adhere to the stem or floats, or
materials that contain contaminants such as metal chips; other sensing technologies are better suited for these applications.

A special application of float type sensors is the determination of interface level in oil-water separation systems.
Two floats can be used with each float sized to match the specific gravity of the oil on one hand and the water on the other.
Magnetic & Mechanical float switches are popular for simplicity, dependability and low cost.

Tilt Float
It consists of a float integral with an electric cable. A fixed microswitch and a moving steel ball are usually enclosed within the float casing.
A change in liquid level causes float to tilt up or down around a pivot, provided in form of adjustable stopper, support pipe or ballast,
at an angle and in the process, actuating a steel ball to move and operate a micro-switch plunger to close or open an electrical circuit
with potential free contacts.

Advantages
Float sensors work well with clean liquids and are accurate and adaptable to wide variations in fluid densities.
Once commissioned, the process fluid measured must maintain its density if repeatability is required.
Generally used for point level detection (High, low Alarms) (On-Off control of auxiliary pumps etc.)

Disadvantages
In situations of a coating media the moving parts may seize and the unit will no longer function.
Tape Float

In tape float level measurement, a tape is connected to a float at one end and to a counterweight is attached at the other, thereby
keeping the tape under constant tension.

The float moves the counterweight up and down in front of a direct reading gauge board, thereby indicating the level in the tank.
The installation is typically used on storage tanks.

The instrument range is a function of tape length used, which can be up to 30 m.

In this design, a tape or cable connects the float inside tank to a gage board or an indicating take-up reel mounted on outside of tank.
The float is guided up and down the tank by guide wires or travels inside a stilling well.

These level indicators are used in remote, unattended, stand-alone applications, or they can be provided with data transmission
electronics for integration into plant-wide control systems.

To install the tape gage, an opening is needed at the top of the tank and an anchor is required at its bottom.

When properly maintained, tape gages are accurate to ±1/4 inches.

It is important to maintain the guide wires under tension, clean and free of corrosion, and to make sure that the tape never touches
the protective piping in which it travels. If this is not done, the float can get stuck on guide wires or the tape can get stuck to the pipe.
(This can happen if the level does not change for long periods or if the tank farm is located in a humid region.)
VIBRATION

Vibrating level measurement is generally used for point level detection of liquids or solids in tanks or vessels.
Vibrating level switches detect the dampening of vibrations that occurs when a vibrating sensor is submerged in a process medium.

There are three types of vibrating sensors--Reed, Probe, Tuning Fork.

REED
The reed level switch consists of a paddle, a driver and a pickup.
The driver coil induces a (120 Hz) vibration in the paddle that is damped out when the paddle gets covered by a process material.
The switch can detect both rising and falling levels; its actuation depth (the material depth over the paddle) increases as the
density of the process fluid drops. The variation in actuation depth is usually less than an inch.
A reed switch can detect liquid/liquid, liquid/vapor and solid/vapor interfaces, and can also signal density or viscosity variations.
When used on wet powders, the vibrating paddle has a tendency to create a cavity in the granular solids.
If this occurs, false readings will result, because the sensor will confuse the cavity with vapor space.
It is best to use reed switch on non-coating applications or to provide automatic spray washing after each immersion in sludge/slurry.

PROBE
Probe type vibrating sensors are less sensitive to material build-up or coating.
The vibrating probe is a round stainless steel element (resembling a thermowell) that extends into the material.
Both the drive and the sensor are piezoelectric elements: one causes the vibration and the other measures it.
When the probe is buried under the process material, its vibration is dampened and this decrease triggers the switch.
Vibrating probe sensors can be used to monitor powders, bulk solids, granular materials such as grain, plastic pellets, cement, fly ash.
Their vibrating nature tends to minimize the bridging that occurs in solid materials.
Tuning fork sensors are vibrated at about 85 Hz by one piezoelectric crystal, while another piezoelectric crystal detects the vibration.
As the process fluid rises to cover the tuning forks, the vibration frequency changes.

Tuning Fork
It uses a tuning fork as a sensing probe. A piezoelectric crystal vibrates the tuning fork at its resonant frequency (85 Hz approx.).
When material level increases and covers the tines of fork, the vibrations are damped.
This is detected, processed and converted into switching signal i.e. relay contact changeover.
Since amplitude of the vibrations is maximum at the tips and minimum at the base of the vibrating fork, it is more sensitive at the tips
and less sensitive at the base of the tuning fork.
Tuning fork sensors can be constructed with components made of PVDF, polypropylene, stainless steel, carbon steel, and aluminum.
They are available with PFA coatings or in hygienic versions for sanitary applications.
Vibrating sensors can be used to ascertain liquid, solid, and slurry levels.
The vibrating tines have a self-cleaning effect, which prevents excessive build-up formation.
OPTICAL

Using visible, infrared, or laser light, optical sensors rely upon the light transmitting, reflecting, or refracting properties of the process material.
The optical level sensors are classified in two types:
 Contacting
 Non-Contacting

In a non-contacting design, a reflecting optical sensor relies on a beam of light aimed down at the surface of the process material.
When the level of this surface rises to the setpoint of the switch, the reflected light beam is detected by a photocell.
Both the LED light source and photo-detector are housed behind the same lens.
The optical reflective switches can measure the levels of clear as well as translucent, reflective, and opaque liquids.
By using multiple photocells, a sensor can detect several levels.
The laser light also can be used when making difficult level measurements, such as of molten metals, molten glass, glass plate, or any
other kind of solid or liquid material that has a reflecting surface.
If the receiver module is motor driven, it can track the reflected laser beam as the level rises and falls, thereby also acting as a
continuous level transmitter.

In contact design, a refracting sensor relies on principle that infrared or visible light changes direction (refract) when it passes through
the interface between two media.
When the sensor is in the vapor phase, most of the light from the LED is reflected back within a prism.
When prism is submerged, most of light refracts into liquid & amount of reflected light that reaches the receiver drops substantially.
Therefore, a drop in the reflected light signal indicates contact with the process liquid.
A refracting sensor cannot be used with slurries or coating liquids, unless it is spray-washed after each submersion.
Even a few drops of liquid on the prism will refract light and cause error in the readings.
Refracting sensors must be submerged in the liquid; hence a number of sensors can be installed on a vertical pipe to detect different level points.
Transmission optical sensors send a beam of light across the tank.
A sludge level sensor, uses an LED and a photocell at the end of a probe, located at the same elevation and separated by a few inches.
To find sludge level, a mechanism (or an operator, manually) lowers the probe into tank until sensors encounter the sludge layer.

Optical sensors can operate at pressures up to 30 barg and temperatures up to 125°C.


Response time is virtually immediate, and detection accuracy of most designs is within 1 mm.
Optical level switches are also designed for specific or unique applications.
For example, Teflon® optical level switches are available for sensing the level of ultra-pure fluids.
Other unique designs include a level switch that combines an optical with a conductivity-type level sensor to detect the presence of
both water (conductive) and hydrocarbons (nonconductive).
REFLEX

Reflex level gauges working principle is based on the light refraction and reflection laws.
In the gas or steam phase, the light is reflected by the prismatic grooves of the glass, which therefore gives a clear appearance.
In the liquid phase, the light is absorbed or refracted into the chamber, thus providing a dark indication of the level.

The reflex type consists of a metal body, machined to have an internal chamber & one or more front windows (only on one side of the gauge).
On each window a special high resistance reflex glass plate is fitted with a sealing joint and metal cover plates held by bolts and nuts.
The chamber is connected to the vessel with cross fittings and flanged, threaded or welded ends.
Usually, between the instrument and its connecting ends, valves are fitted to permit shut-off of the piping and disassembly of the level gauge
without emptying the vessel. To avoid leakage in case of glass breakage, a safety ball-check device is provided.

Reflex level gauges use glasses having face fitted towards the chamber shaped to have prismatic grooves with section angle of 90°.
When in operation, the chamber is filled with liquid in the lower zone and gases or vapors in upper zone.
Liquid level is distinguished by different brightness of the glass in the liquid and in the gas/vapor zone.

Light rays strike the outer surface of the glass at a 90-degree angle. The light rays travel through the glass, striking the inner side of the glass at a 45° angle.
The presence/absence of liquid in the chamber determines whether the light rays are refracted into the chamber or reflected back to the outer surface of the glass.
When the liquid is at an intermediate level in the gauge glass, the light rays encounter an air-glass interface in one portion of the chamber
and a water-glass interface in the other portion of the chamber.
Where an air-glass interface exists, the light rays are reflected back to the outer surface of the glass, since the critical angle for light to pass
from glass to air is 42 degrees. This causes the gauge glass to appear silvery-white.
In the portion of the chamber with the water-glass interface, the light rays are refracted into the chamber by the prism grooves.
Reflection of light back to the outer surface of the gauge glass does not occur because the critical angle for light to pass from glass to water is 62 degrees.
This results in the glass appearing black, since it is possible to see through the water to the walls of the chamber, which are painted black.

The principle of Reflex Level Gauges is based on the difference in the refractive indices of liquid and vapor.
The sight glass has prismatic right-angled grooves on the side facing the liquid and vapor space.
Light rays entering from outside the gauge are either absorbed or reflected depending upon whether they enter liquid or vapor space.

When a ray of light encounters the surface of one of the grooves in the vapor space, it is reflected to the opposite surface of the groove
and from there totally reflected back in the direction of observation. Thus, the vapor space appears silvery white.

When the light ray encounters the surface of the grooves in the liquid space, it is absorbed or refracted by the liquid surface,
thereby making the liquid behind the glass appear black.
TUBULAR / TRANSPARENT

TUBULAR
A tubular level gauge comprises a transparent glass tube, seals, end blocks, and guard rods to protect the glass.
It is positioned parallel to the vessel along the elevation over which level is to be indicated & mounted with fittings to retain pressure
as well as to seal the ends of sight glass tube. This construction, is not well suited for use with dangerous or hazardous process fluids.

An important consideration in tubular glass gauge selection is that of maintaining the safety of personnel and associated equipment.
If the sight glass tube sustains a fracture of the glass or a leak at the seals, dangerous fluid can escape & create the potential for a hazardous condition.
The tubular glass tube design is not recommended for use with toxic materials, pressures above 1 bar or temperatures above 100°C.

Some tubular gauge designs have extra protection against breakage, an improvement on the simple guard rods of standard designs.
The protection elements may include an outer tube that contains the fluid if the inner tube is fractured, or sheet metal protectors.
The traditional tubular glass level gauge is not recommended for most industrial applications.

TRANSPARENT
A transparent gauge consists of a metal body, machined to have an internal chamber and one or more front windows (on each side of the gauge).
On each window a special high-resistance transparent glass is applied with a sealing joint and a metal cover plate held by bolts and nuts.
In transparent level gauges, the liquid is contained between the two transparent glasses.
This permits direct viewing of the fluid through the glass, thus providing a clear indication of the level.
A backlight illuminator can also be mounted on their rear side, for improving the visibility.

To avoid leakage in case of glass breakage, safety ball-check device can be provided in cross fittings or shut-off valves.
To protect glass surfaces from corrosive action of process fluid, transparent level gauges can be fitted with CAF, Mica or PTFE shields.
The Transparent type level gauges are equipped with a ball check valve (emergency shut-off valve).

Transparent type level gauge consists of body with glass and cover securely tightened by stud bolts. It is hermetically sealed.
Transparent type level gauges are suitable for observing liquid level through the transparent glasses.
For high temperatures, mica shields are placed on the wetted surfaces, not only to protect the glass but also to provide anti-corrosion and heat-resistant protection.

Transparent gauges are normally used on non-pressurized and non-dangerous fluid systems, and are economical.
Tubular / Transparent / Reflex gauges are normally limited to a maximum single gauge length of up to 1.5 m.
If a greater total visible length is required, multiple gauges will have to be installed.
Overlapping is done to allow viewing of levels that would otherwise be blocked by top and bottom edges of covers.
One single standpipe can be mounted on vessel. Using a standpipe reduces the number of vessel connections and increases flexibility.
TEMPERATURE
RTD (Resistance Temperature Detector)
 2 - Wire
 3 - Wire
 4 - Wire

 Thin Film
 Wire Wound
 Coiled

 Class A
 Class B

Thermocouple
 K (chromel–alumel)
 E (chromel–constantan)
 T (copper–constantan)
 J (Iron–constantan)
 N (nicrosil–nisil)
 R (Pt:Rh13% - Pt)
 S (Pt:Rh10% - Pt)
 B (Pt:Rh30% - Pt:Rh6%)

 Sealed and Isolated from Sheath


 Sealed and Grounded to Sheath
 Exposed Bead
 Exposed Fast Response

Bimetallic Temperature Gauge

Filled Temperature Gauge


 Liquid Filled
 Mercury Filled
 Vapor Filled
 Gas Filled
RTD (Resistance Temperature Detectors)
Principle - The resistance of metals changes as temperature changes.
The resistance of RTD varies directly with temperature (positive temperature coefficient of resistance).
RTD has pure metals or alloys that increase in resistance as temperature increases & decrease in resistance as temperature decreases.
The metals best suited for RTD sensors are pure, uniform quality, stable & able to give reproducible resistance-temperature readings.
The RTD elements are constructed from platinum, copper, or nickel. These metals are best suited for RTD applications because of their
linear resistance-temperature characteristics, high coefficient of resistance (α), & ability to withstand repeated temperature cycles.
The coefficient of resistance (α) is the change in resistance per degree change in temperature.

RTD’s are generally constructed using a fine, pure, metallic, spring like wire surrounded by an insulator & enclosed in a metal sheath.
A change in temperature will cause an RTD to heat or cool, producing a proportional change in resistance.
The change in resistance is measured by a precision device that is calibrated to give the proper temperature reading.

Operation
RTD elements are long, spring like wires surrounded by insulator & enclosed in metal sheath. RTD’s are generally made of Platinum.
The platinum element is surrounded by a MgO (magnesium oxide) insulator. The insulator prevents a short circuit between the wire and the metal sheath.
Inconel (nickel-iron-chromium alloy) is normally used in manufacturing the RTD sheath because of its inherent corrosion resistance.
When placed in a liquid or gas medium, the Inconel sheath will quickly reach the temperature of the process medium.
The change in temperature will cause the platinum wire to heat or cool, resulting in a proportional change in resistance.
This change in resistance is then measured by a precision resistance measuring device that is calibrated to give temperature reading.
This device is normally a bridge circuit, typically a Wheatstone bridge.

Standard IEC-751:1983 specifies tolerance and temperature to resistance relationship for the platinum resistance thermometers.
The most widely used type has a nominal resistance of 100 ohms at 0°C and is called Pt-100. The sensitivity of a standard Pt100 sensor is 0.385 ohm/°C.

Working
Resistance Temperature Detectors or RTD’s require a small current to be passed through them in order to determine the resistance.
This can cause resistive heating and manufacturers' limits should always be followed along with heat path considerations in design.
Lead wire resistance should be considered; adopting a 3- or 4-wire RTD eliminates connection lead resistance effects from the measurement.
Industrial practice is almost universally to use the 3-wire connection. 4-wire connections are used for precise applications (laboratories).

Limitations
* RTDs are used up to 660°C. At temperatures above 660°C platinum becomes contaminated by impurities from the metal sheath.
* At very low temperatures (around -270°C) there are very few phonons, so the resistance of the RTD is mainly determined by impurities and
boundary scattering and is basically independent of temperature. RTD sensitivity below about -270°C is therefore essentially zero, making it not useful.

Construction
RTD elements require insulated leads attached. At low temperatures PVC, silicone rubber or PTFE insulators are common, up to 250°C.
Above this temperature, glass fibre or ceramic insulation is used. The measuring point and usually most of the leads require a housing or protection sleeve.
This is often a metal alloy which is inert to the particular process. Often more consideration goes into selecting and designing protection sheaths
than sensors, as this is the layer that must withstand chemical or physical attack and offer convenient process attachment points.

Summary
The connection between the RTD & the instrumentation is made with standard electrical cable with copper conductors in 2-, 3- or 4-core construction.
The cabling introduces electrical resistance which is placed in series with the resistance of RTD (resistance temperature detector).
The resistances are therefore added & could be interpreted as increased temperature because of additional resistance of lead wires.
The longer the cable or the smaller its conductor diameter, the greater the lead resistance will be, and measurement errors can be appreciable.
In the case of a 2 wire connection, little can be done about this problem and some measurement error will result according to cabling.
If it is essential to use only 2 wires, ensure that largest possible diameter of conductors is specified & that length of cable is minimized
to keep cable resistance to as low a values as possible.
The use of 3 wires allows a good level of lead resistance compensation. However, the compensation technique is based on the assumption
that the resistance of all 3 leads is identical and that they all reside at the same ambient temperature, which is not always the case.
Highest accuracy is achieved with a 4-wire RTD configuration. A constant measuring current is passed through the Pt100, and the voltage drop across the
sensing resistor is picked off by the measurement wires. If the measurement circuit has a very high input impedance, then lead resistance and
connection resistances have negligible effect. The voltage drop thus obtained is independent of the connecting wire resistance.
Thin Film RTD’s have a layer of platinum on a ceramic substrate, the layer may be extremely thin, perhaps one micrometer.
The platinum film is coated with epoxy or glass. The coating protects the deposited platinum film and acts as strain relief for the external lead wires.
Advantages of the thin film type are relatively low cost and fast response. Such thin-film devices have improved performance, although
the different expansion rates of the substrate and platinum give "strain gauge" effects and stability problems.

The Wire Wound RTD is the simplest sensor design.


The sensing wire is wrapped around insulating mandrel/core. The winding core can be round/flat, but must be an electrical insulator.
Matching the coefficient of thermal expansion of the sensing wire and the winding core materials will minimize any mechanical strain.
The coil diameter provides a compromise between mechanical stability and allowing expansion of wire to minimize strain and drift.

Coiled RTD’s have largely replaced wire wound elements in the industry.
This design allows the wire coil to expand more freely over temperature while still providing the necessary support for the coil.
The basis of sensing element is a small coil of platinum sensing wire. This coil resembles a filament in an incandescent light bulb.
The mandrel is a hard fired aluminum oxide tube with four equally spaced bores that run transverse to the axes.
The coil is inserted in bores of the mandrel and the bores are packed with a very fine grit ceramic powder.
This permits the sensing wire to move while still remaining in good thermal contact with process being measured.

2-wire configuration
The simplest resistance thermometer detector configuration uses 2 wires.
It is only used when high accuracy is not required as resistance of connecting wires is
always included with that of the sensor leading to errors in the signal.
2-wire construction is least accurate since there is no way of eliminating lead wire
resistance from the sensor measurement.
2-wire RTD’s are used with short lead wires or where high accuracy is not required.
Measured Resistance = RL1 + RRTD + RL2

3-wire configuration
In order to minimize the effects of lead resistances, 3 wire configuration is used.
By using this method the two leads to sensor are on the adjoining arms,
there is lead resistance in each arm of bridge & lead resistance are cancelled out.
All three lead wires should have the same resistance (RL1 = RL2 = RL3).
A 3-wire circuit works by measuring the resistance between wires 1 & 2 and subtracting the
resistance between wires 2 & 3, which leaves just the resistance of the RTD bulb.
Measured Resistance = R(1-2) - R(2-3) = (RL1 + RRTD + RL2) - (RL2 + RL3) = RRTD (since RL1 = RL3)

4-wire configuration
In 4 wire RTD, resistance of lead wires does not contribute to the resistance of sensor.
A 4 wire circuit works by using wires 1 & 4 to power circuit and wires 2 & 3 to read.
This true bridge method will compensate for differences in lead wire resistances.
A constant current (Is) flows through the current-carrying leads and the RTD (RL1, RRTD, RL4).
Total Voltage V = V(RL1) + V(RTD) + V(RL4)
The voltage (VRTD) is measured directly across RRTD by the sense wires (2 & 3), which carry negligible current.
VRTD / Is = exact resistance of RRTD.
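A minimal Python sketch comparing the three hookups above; the lead and RTD resistance values are hypothetical:

```python
# Illustrative comparison of 2-, 3- and 4-wire RTD hookups.
# Lead resistances and RTD resistance below are hypothetical example values.

R_RTD = 119.40                     # ohm, e.g. a Pt100 near 50 degC
R_L = [0.50, 0.52, 0.50, 0.52]     # ohm, lead resistances RL1..RL4

# 2-wire: both leads add directly to the reading
r_2wire = R_L[0] + R_RTD + R_L[1]

# 3-wire: R(1-2) - R(2-3) cancels the leads if RL1 == RL3
r_12 = R_L[0] + R_RTD + R_L[1]
r_23 = R_L[1] + R_L[2]
r_3wire = r_12 - r_23

# 4-wire: current in the outer leads, voltage sensed on the inner leads -> leads drop out
I_s = 1e-3                         # A, measuring current
V_rtd = I_s * R_RTD                # sense wires carry negligible current
r_4wire = V_rtd / I_s

print(f"2-wire reading : {r_2wire:.3f} ohm (error {r_2wire - R_RTD:+.3f})")
print(f"3-wire reading : {r_3wire:.3f} ohm (error {r_3wire - R_RTD:+.3f})")
print(f"4-wire reading : {r_4wire:.3f} ohm (error {r_4wire - R_RTD:+.3f})")
```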
Actual Equation
The relation between temperature and resistance is given by the Callendar-Van Dusen equation:
R(T) = R0 [1 + A·T + B·T² + C·T³·(T − 100)]   (for −200°C < T < 0°C)

R(T) = R0 [1 + A·T + B·T²]   (for 0°C ≤ T < 660°C)

R(T) is the resistance at temperature T, R0 is the resistance at 0°C. For an alpha (α) = 0.00385 platinum RTD the constants are:
A = 3.9083 × 10⁻³ °C⁻¹,  B = −5.775 × 10⁻⁷ °C⁻²,  C = −4.183 × 10⁻¹² °C⁻⁴
Since the B and C coefficients are relatively small, the resistance changes almost linearly with temperature.
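A minimal Python sketch of the Callendar-Van Dusen relation for an α = 0.00385 Pt100 (R0 = 100 Ω), using the IEC 751 constants quoted above, with a quadratic-formula inversion for the 0-660°C branch:

```python
# Callendar-Van Dusen resistance of a Pt100 (alpha = 0.00385), IEC 751 constants.
R0 = 100.0               # ohm at 0 degC
A = 3.9083e-3            # 1/degC
B = -5.775e-7            # 1/degC^2
C = -4.183e-12           # 1/degC^4

def pt100_resistance(t_c: float) -> float:
    """Resistance in ohm at temperature t_c (degC), valid -200..660 degC."""
    if t_c < 0:
        return R0 * (1 + A*t_c + B*t_c**2 + C*(t_c - 100)*t_c**3)
    return R0 * (1 + A*t_c + B*t_c**2)

def pt100_temperature(r_ohm: float) -> float:
    """Invert the 0..660 degC branch: B t^2 + A t + (1 - R/R0) = 0."""
    disc = A**2 - 4*B*(1 - r_ohm/R0)
    return (-A + disc**0.5) / (2*B)

print(pt100_resistance(100.0))    # ~138.5 ohm at 100 degC
print(pt100_temperature(138.5))   # ~100 degC
```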

General Equation
Since the higher-order terms are small, the simplified linear form R(T) ≈ R0 (1 + α·T) can be used for approximate work.

Temperature Coefficient of Resistance (α)
α = (R100 − R0) / (100 · R0) = 0.00385 Ω/Ω/°C for a standard platinum RTD
The coefficient of resistance is the change in resistance per degree change in temperature.
The temperature coefficient is the slope of the platinum RTD curve between 0°C and 100°C.

Accuracy

Class A = ± (0.15 + 0.002 · |t|) °C

Class B = ± (0.30 + 0.005 · |t|) °C

t is the value of the temperature being measured, without considering the sign.
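A short sketch of the two tolerance formulas, evaluated at an example temperature of 200°C:

```python
# IEC 751 tolerance of Class A / Class B Pt100 elements at a given temperature.
def tolerance_class_a(t_c: float) -> float:
    return 0.15 + 0.002 * abs(t_c)   # +/- degC

def tolerance_class_b(t_c: float) -> float:
    return 0.30 + 0.005 * abs(t_c)   # +/- degC

t = 200.0
print(f"Class A at {t} degC: +/-{tolerance_class_a(t):.2f} degC")  # +/-0.55
print(f"Class B at {t} degC: +/-{tolerance_class_b(t):.2f} degC")  # +/-1.30
```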

Range
Class A -200 to 600°C
Class B -200 to 800°C

Self Heating
Since an RTD is a resistor, it will produce heat when current is passed through it. The normal current limit for an industrial RTD is 1 mA.
Thin film RTD’s are more susceptible to self-heating so 1 mA should not be exceeded.
Wire wound RTD’s can dissipate more heat so they can withstand more than 1 mA.
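A rough feel for self-heating can be obtained from the I²R dissipation; the self-heating coefficient used below is a hypothetical datasheet-style figure, not a standard value:

```python
# Self-heating estimate: power dissipated in the RTD and the resulting error,
# assuming a hypothetical self-heating coefficient from a datasheet.
I = 1e-3                 # A, typical industrial measuring current
R = 138.5                # ohm, Pt100 near 100 degC
S = 0.05                 # degC/mW, hypothetical self-heating coefficient

power_mw = (I**2) * R * 1000          # I^2 * R, in milliwatts
error_degc = power_mw * S
print(f"Dissipation: {power_mw:.4f} mW -> self-heating error ~{error_degc:.5f} degC")
```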
THERMOCOUPLE

A thermocouple consists of 2 dissimilar metal wires joined at one end.


When the other end of each wire is connected to a measuring instrument, the thermocouple becomes a sensitive & highly accurate measuring device.
The most important factor to be considered when selecting a pair of materials is the "thermoelectric difference" between the two materials.
A significant difference between the two materials will result in better thermocouple performance.

The construction of a typical thermocouple consists of two dissimilar metal wires joined at one end and encased in rigid metal sheath.
The measuring junction is normally formed at the bottom of the thermocouple housing. MgO surrounds the thermocouple wires to prevent
vibration that could damage the fine wires & to enhance heat transfer between the measuring junction & the medium surrounding the thermocouple.

Operation
The thermocouples will cause an electric current to flow in attached circuit when they are subjected to the changes in temperature.
The amount of current produced is dependent on temperature difference between measurement junction and reference junction;
the characteristics of the two metals used; and the characteristics of the attached circuit.
Heating the measuring junction of the thermocouple produces a voltage which is greater than the voltage across reference junction.
The difference between two voltages is proportional to difference in temperature and can be measured on voltmeter (in millivolts).

Theory
In 1821, the German–Estonian physicist Thomas Johann Seebeck discovered that when any conductor is subjected to thermal gradient
(temperature difference), it will generate a voltage. This is now known as the thermoelectric effect or Seebeck effect.
Any attempt to measure this voltage necessarily involves connecting another conductor to the "hot" end. This additional conductor
will then also experience the temperature gradient and will also develop a voltage of its own which will oppose the original voltage.
Using dissimilar metal to complete circuit creates a circuit in which two legs generate different voltages, leaving a small difference in
voltage available for measurement. Difference increases with temperature & is between 1-70 microvolts per degree Celsius (µV/°C).

A thermocouple circuit has at least two junctions: the measurement junction & the reference junction.
Typically, reference junction is created where two wires connect to measuring device. At reference junction there are two junctions:
one for each of two wires,but because they are assumed to be at same temperature (isothermal) they are considered as one junction.
It is the point where metals change from thermocouple metals to whatever metals are used in measuring device - typically copper.

The output voltage is related to temperature difference between measurement junction & reference junction.
This is called Seebeck effect. The Seebeck effect generates a small voltage along the whole length of a wire, and is greatest where
temperature gradient is greatest. If circuit wires are of identical material, they will generate identical but opposite Seebeck voltages
which will cancel each other. However, if the wire metals are different the Seebeck voltages will be different and will not cancel.

In practice the Seebeck voltage is made up of two components: Peltier Voltage + Thomson Voltage
The Peltier Voltage generated at the junctions, plus the Thomson Voltage generated in the wires by the temperature gradient.
The Peltier voltage is proportional to the temperature of each junction while the Thomson voltage is proportional to square of the
temperature difference between the two junctions. It is Thomson voltage that accounts for most of the observed voltage.

Seebeck Effect
Seebeck effect states that when two different or dissimilar metals are joined together at two junctions, an electromotive force (emf)
is generated at two junctions. The amount of emf generated is different for different combinations of metals.
Peltier Effect
As per the Peltier effect when two dissimilar metals are joined together to form two junctions, the emf is generated within the circuit
due to different temperatures of the two junctions of the circuit.
Thomson Effect
As per Thomson effect, when two dissimilar metals are joined together to form two junctions, the emf is generated within the circuit
due to temperature gradient along the entire length of the conductors within the circuit.

In most of the cases the emf suggested by Peltier effect is very small and it can be neglected by making proper selection of the metals.
The Thomson effect plays a prominent role in the working principle of the thermocouple.
SEEBECK EFFECT = PELTIER EFFECT + THOMSON EFFECT
Considerations
* A third metal may be introduced into a thermocouple circuit and have no impact, provided that both ends are at same temperature.
This means that thermocouple measurement junction can be soldered, brazed, welded without affecting thermocouple's calibration,
as long as there is no net temperature gradient along the third metal.
If measuring circuit metal (copper) is different to that of thermocouple, then if the temperature of two connecting terminals is same,
the reading will not be affected by the presence of this third metal (copper).
* The thermocouple's output is generated by temperature gradient along wires and not at the junctions as is commonly believed.
Therefore it is important that the quality of the wire be maintained where temperature gradients exists.
Wire quality can be compromised by contamination from its operating environment and the insulating material.
For temperatures below 400°C, the contamination of insulated wires is generally not a problem. But at temperatures above 1000°C,
the choice of insulation and sheath materials, as well as wire thickness, become critical to calibration stability of the thermocouple.
* Voltage generated by thermocouple is function of temperature difference between measurement junctions & reference junctions.
Traditionally the reference junction was held at 0°C by an ice bath. The ice bath is now considered impractical and is replaced by a
reference junction compensation arrangement. This is done by measuring reference junction temperature with an alternate sensor
(typically an RTD) and applying a correcting voltage to the measured thermocouple voltage before scaling to temperature.
The correction can be done electrically in hardware or mathematically in software.
* The low-level output from thermocouples (typically 50mV full scale) requires that care be taken to avoid electrical interference from
motors, power cable, transformers. Twisting thermocouple wire pair (say 1 twist per 10 cm) can greatly reduce magnetic field pickup.
Using shielded cable or running wires in metal conduit reduces electric field pickup. A measuring device should provide signal filtering,
either in hardware or by software, with strong rejection of the line frequency (50/60 Hz) and its harmonics.

LAWS OF THERMOCOUPLES
Law of Homogeneous Material
A thermoelectric current cannot be sustained in a circuit of a single homogeneous material by the application of heat alone, regardless of
how the heat varies along the conductor. In other words, temperature changes in the wiring between input & output do not affect the output voltage,
provided the wires are made of the same materials as the thermocouple. No current flows in a circuit made of a single metal by the application of heat.
The thermocouple will respond only to temperature differences between the hot junction and the reference junction.
This result is useful as it mean that emf from a thermocouple is not dependent on intermediate temperatures along its entire length.
The thermal EMF of a thermocouple with junctions at T1 & T2 is totally unaffected by temperature elsewhere in circuit if two metals
used are each homogeneous.
Law of Intermediate Materials
The algebraic sum of the thermoelectric emfs in a circuit composed of any number of dissimilar materials is zero if all of junctions are
at a uniform temperature. So If a third metal is inserted in either wire and if two new junctions are at same temperature (isothermal),
there will be no net voltage generated by the new metal.
If a third homogenous material C is inserted into either thermoelement A or B, then as long as two new thermoelectric junctions are
at same temperature, net emf of circuit is unchanged irrespective of temperature in material C away from thermoelectric junctions.
Provided the thermoelectric junctions between C and A are both at the same temperature, the net emf of the thermocouple is unaffected
by the presence of the inserted material and any local hot spot, as the emf excursion on one side of the inserted material is cancelled by that on the other side.
This law is of great significance stating that, provided there is no thermal gradient across thermoelectric junction it does not matter if
thermoelements are joined by a third material such as solder or if local thermoelectric properties are changed at junction by welding.

Law of Successive or Intermediate Temperatures


If two dissimilar homogeneous materials produce thermal emf E1 when the junctions are at T1 and T2, and produce thermal emf E2
when the junctions are at T2 and T3, then the emf generated when the junctions are at T1 and T3 will be E1 + E2, provided T1 < T2 < T3.
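A tiny numeric illustration of this law; the millivolt values are hypothetical, not taken from thermocouple tables:

```python
# Law of intermediate temperatures: emf(T1->T3) = emf(T1->T2) + emf(T2->T3).
# The millivolt values below are hypothetical, for illustration only.
emf_T1_T2 = 1.20   # mV, junctions at T1 and T2
emf_T2_T3 = 2.85   # mV, junctions at T2 and T3

emf_T1_T3 = emf_T1_T2 + emf_T2_T3
print(f"emf with junctions at T1 and T3 = {emf_T1_T3:.2f} mV")
```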

THERMOCOUPLE COMPENSATION AND LINEARIZATION


It is possible to provide reference junction compensation in hardware or software. (Ideally the reference junction should be at 0°C.)
The principle is to add a correction voltage to the thermocouple output voltage which is proportional to the reference junction temperature.
The connection point of the thermocouple wires to the measuring device (where the thermocouple changes to copper for the circuit electronics) must
be monitored by a sensor. This area is designed to be isothermal, so that the sensor accurately tracks both reference junction temperatures.

In Hardware Compensation, a variable voltage source is inserted into the circuit to cancel the parasitic thermoelectric voltages.
The variable voltage source generates a compensation voltage according to ambient temperature and thus adds the correct voltage
to cancel the unwanted thermoelectric signals. When these parasitic signals are canceled, the only signal the system measures is the
voltage from the thermocouple junction. With hardware compensation, the temperature at the system terminals is irrelevant because
the parasitic thermocouple voltages are canceled. The major disadvantage of hardware compensation is that each thermocouple type
must have a separate compensation circuit that can add the correct compensation voltage, which makes the circuit fairly expensive.
Also, hardware compensation is generally less accurate than software compensation.

In Software Compensation, after a sensor measures the reference-junction temperature, software can add the appropriate voltage value
to the measured voltage to eliminate the parasitic thermocouple effects, using its algorithm/look-up table for each specific type of thermocouple.
The signal that the system measures is the voltage, which is proportional to the difference between the two thermocouple junctions.
Here the correction is done in software to eliminate the parasitic thermocouple effect by adding/subtracting a voltage to the measured voltage.
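A minimal sketch of software compensation for a type K thermocouple, using the approximate 41 µV/°C sensitivity quoted later in this section as a linear stand-in for the real look-up table or polynomial (a real implementation would use the published thermocouple tables):

```python
# Software reference-junction compensation, simplified to a linear type K model.
# A real system replaces SENS_V_PER_C with the thermocouple look-up table/polynomial.
SENS_V_PER_C = 41e-6     # V/degC, approximate type K sensitivity

def temperature_from_tc(v_measured: float, t_reference_c: float) -> float:
    """v_measured: thermocouple voltage in volts,
    t_reference_c: reference junction temperature (e.g. from an RTD) in degC."""
    # Add the voltage the thermocouple would produce between 0 degC and the
    # reference junction, then scale the corrected voltage to temperature.
    v_reference = SENS_V_PER_C * t_reference_c
    v_corrected = v_measured + v_reference
    return v_corrected / SENS_V_PER_C

# Example: 8.2 mV measured with the terminals at 25 degC
print(f"{temperature_from_tc(8.2e-3, 25.0):.1f} degC")   # ~225 degC
```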
Thermocouple Mounting

Sealed and Isolated from Sheath: Good relatively trouble-free arrangement. The principal reason for not using this arrangement for
all applications is its sluggish response time, the typical time constant is 75 seconds
Sealed and Grounded to Sheath: Can cause ground loops and other noise injection, but they provides a reasonable time constant
(40 seconds) and a sealed enclosure.
Exposed Bead: Faster response time constant (typically 15 seconds), but lacks mechanical, chemical, electrical isolation from material
being measured. The porous insulating mineral oxides must be sealed
Exposed Fast Response: Fastest response time constant (typically 2 seconds), but with fine gauge wire the time constant is 10-100 ms.
In addition to problems of exposed bead type, protruding and light construction makes thermocouple more prone to physical damage

THERMOCOUPLE TYPES
A variety of thermocouples are available for different applications. They are selected based on temperature range & sensitivity range.
Thermocouples with low sensitivities (R, S, B types) have correspondingly lower resolutions. Other selection criteria include inertness
of thermocouple material, and whether or not it is magnetic. The thermocouple types are listed below with a positive electrode first,
followed by the negative electrode.

K (chromel : alumel) - The most common general purpose thermocouple. It is inexpensive and available in a wide variety of probes.
They are available in the −200 °C to +1350 °C range. The type K was specified at a time when metallurgy was less advanced than it is
today and consequently characteristics vary. Another potential problem arises in some situations since one of the constituent metals,
nickel, is magnetic. One characteristic of thermocouples made with magnetic material is that they undergo a step change when the
magnetic material reaches its Curie Point. This occurs for this thermocouple at 354°C. Sensitivity is approximately 41 µV/°C.

E (chromel : constantan) - High sensitivity (68 µV/°C) which makes it well suited to cryogenic use. It is non-magnetic.

T (copper : constantan) - Thermocouples are suited for measurements in the −200 to 350 °C range. Used as differential measurement
since only copper wire touches probes. Since both conductors are non-magnetic, no Curie point & no abrupt change in characteristics.
Type T thermocouples have a sensitivity of about 43 µV/°C.

J (Iron : constantan) - It is less popular than type K due to its limited range (−40 to +750 °C). The Curie point of the iron (770 °C) causes
an abrupt change to characteristic & this provides its upper temperature limit. Type J thermocouples have sensitivity of 50 µV/°C.

N (nicrosil : nisil) - Thermocouples are suitable for use at high temperatures, exceeding 1200°C, due to their stability & ability to resist
high temperature oxidation. Sensitivity is about 39 µV/°C at 900°C, lower than type K. Designed to be an improvement over type K.

R, S, B - Thermocouples use platinum or a platinum-rhodium alloy for each conductor. These are among the most stable thermocouples,
but have lower sensitivity (approximately 10 µV/°C) than other types. The high cost of these makes them unsuitable for general use.
Generally, type B, R, S thermocouples are used only for high temperature measurements.

R thermocouples use a platinum–rhodium alloy containing 13% rhodium for one conductor and pure platinum for other conductor.
Type R thermocouples are used up to 1600 °C.

S thermocouples use a platinum–rhodium alloy containing 10% rhodium for one conductor and pure platinum for other conductor.
Like type R, type S thermocouples are used to 1600 °C. Type S is used as standard of calibration for melting point of gold (1064.43 °C).

B thermocouples use platinum–rhodium alloy for each conductor. One conductor has 30% rhodium, other conductor has 6% rhodium
These thermocouples are suited for upto 1800 °C. The Type B produces same output at 0 °C and 42 °C, limiting their use below 50 °C.
Type   Conductor Material (+ / -)                                        Range (°C)       Sensitivity
K      Chromel (+) Cr-Ni  /  Alumel (-) Al-Ni                            -200 to 1200     41 µV/°C
E      Chromel (+) Cr-Ni  /  Constantan (-) Cu-Ni                        -200 to 800      68 µV/°C
T      Copper (+) Cu  /  Constantan (-) Cu-Ni                            -200 to 300      43 µV/°C
J      Iron (+) Fe  /  Constantan (-) Cu-Ni                              0 to 750         50 µV/°C
N      Nicrosil (+) Ni-Cr-Si  /  Nisil (-) Ni-Si                         0 to 1300        39 µV/°C
R      Platinum-Rhodium (13%) (+) Pt-Rh  /  Platinum (-) Pt              0 to 1600        10 µV/°C
S      Platinum-Rhodium (10%) (+) Pt-Rh  /  Platinum (-) Pt              0 to 1600        10 µV/°C
B      Platinum-Rhodium (30%) (+) Pt-Rh  /  Platinum-Rhodium (6%) (-)    800 to 1800      10 µV/°C
BIMETALLIC TEMPERATURE GAUGE

Bimetallic thermometers make use of two fundamental principles:


* Metals change their volume with temperature
* The coefficient of change is not the same for all metals
If two different straight metal strips are bonded together and heated, the resultant strip will bend toward the side of the metal with
lower expansion rate. Deflection is proportional to the square of the strip length and to the temperature change, and inversely proportional to the strip thickness.
A bimetallic spring can be calibrated to produce a predictable deflection at a preset temperature.
The motion produced by a bimetallic strip is small, so to amplify it, the bimetal strip may be wound in the form of a spiral or a helix.
The outside edge of the spiral is fixed to the frame and a pointer is connected to the center.
As the temperature increases, the spiral winds up deflecting the pointer clockwise.
The helix element is surrounded by a protecting tube or thermowell. The device is mounted to measure temperature of gas or liquid.
Bimetallic gauges are used with protective thermowell, which allows removal or replacement of thermometer without opening up the
process tank or piping.
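The proportionality quoted above (deflection ∝ length² × temperature change / thickness) can be used to scale a known deflection to new strip dimensions; the baseline values in this sketch are hypothetical:

```python
# Scaling a bimetallic strip deflection using deflection ~ L^2 * dT / thickness.
# The baseline deflection and dimensions are hypothetical example values.
def scale_deflection(d_ref, L_ref, t_ref, dT_ref, L, t, dT):
    """Scale a reference deflection d_ref to new length L, thickness t, temperature change dT."""
    return d_ref * (L / L_ref)**2 * (dT / dT_ref) * (t_ref / t)

d_ref = 1.0    # mm deflection for the reference strip
print(scale_deflection(d_ref, L_ref=50, t_ref=1.0, dT_ref=100,
                       L=100, t=1.0, dT=100))   # 4.0 : doubling length quadruples deflection
```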

The thermometer is either back or bottom connected, depending on which orientation allows operator better visibility of dial face.
Bimetal thermometers are also made in types that can adjust the dial face to any angle with respect to the axis of the stem (every-angle type).
This requires a bend in the motion transmission from the helix coil to the indicating pointer.
This is done with an edge-wound helical spring, which eliminates the backlash and requires little torque to operate.
A dry gas fills the dial face portion of the assembly, while silicone fluid fills the stem & surrounds the coil to dampen vibration & accelerate heat transfer.
The dials are available from 1 to 6.5 in (25 to 165 mm) in diameter and with stem lengths up to 36 in. (914 mm).
Thermowells made of carbon steel, stainless steel or other materials are available to protect against corrosive environments.

Advantages & Disadvantages


The advantages over glass stem thermometers include that the bimetallic design is less likely to break and is easier to read.
Relative to filled or electronic temperature indicators, the main advantages of bimetallic thermometers are lower cost & simplicity.
Disadvantages include that the calibration of bimetallic thermometers can change due to rough handling and that the overall accuracy
is not as good as that of the glass stem design. The bimetallic thermometers are generally confined to local measurement.

Range
-50 to 600°C
FILLED TEMPERATURE GAUGE

Liquid Filled
These systems are filled with a liquid (other than mercury) & operate on principle of liquid expansion with an increase of temperature
The filling fluid is usually an inert hydrocarbon, such as xylene (C8H10), which has a coefficient of expansion six times that of mercury
and makes smaller bulbs possible. The criterion is that pressure inside the system must be greater than vapor pressure of the liquid to
prevent bubbles of vapor from forming in the spiral.
Also, the liquid should not be allowed to solidify even in storage or the calibration may be affected.
The minimum operating temperature is usually set by the freezing point of the filling liquid.
The maximum operating temperature is set by the point at which the filling liquid is no longer stable.
The maximum temperature to which the bulb can be exposed without damage is defined as the allowable overrange of the system.

Mercury Filled
It is different from liquid filled because of unique characteristics of mercury and its importance as a temperature measuring medium.
Mercury provides rapid response, accuracy and good repeatability.
Pressures within working system are relatively high (80 bar) for higher temperatures, dropping to 25 bars at low-temperature range.
This high pressure cuts down on any head effect error (difference in elevation between bulb and measuring instrument).
Mercury filled systems can detect temperatures between the freezing and the boiling points of mercury, −40 to 650°C.
The speed of response of mercury filled systems is faster than that of liquid filled ones but slower than that of gas or vapor systems.

Vapor Filled
The capillary and bulb system have the filling medium in both the liquid and vapor form.
The interface between the two must occur in the bulb, and this will move slightly with temperature, affecting the pressure.
The pressure within the system is a function of the vapor pressure of the filling fluid at the operating (bulb) temperature.
The filling fluids used include methyl chloride, sulfur dioxide, butane, propane, hexane, methyl ether, ethyl chloride, ethyl ether.
Each has a different vapor pressure–temperature relationship.
The maximum temperature is limited by critical point of the fill, while minimum limit is consequence of the loss of reading sensitivity,
as the vapor pressure changes less per unit temperature change at low temperatures.
The speed of response is 1s to 10s. It is faster than liquid or mercury fills and as fast as gas-filled system in most of its configurations,
The overrange limit of the vapor filled systems is small, because the vapor pressure tends to rise exponentially with temperature.

Gas Filled
The principle for gas filled is that in a perfect gas confined to constant volume the pressure is proportional to absolute temperature.
The gas is not perfect and not all at the same temperature nor is the volume constant.
However, variances are small enough so that measurement of pressure can be used to indicate temperature.
Nitrogen is the favorite fill because it is inert and inexpensive. Helium is another favorite in gas filled systems.
In general, bulbs should be as large as practical to lessen the influence of temperature variations along the capillary.
One way of avoiding long capillaries is to terminate a short capillary at a small diaphragm chamber.
The force due to gas pressure on the diaphragm causes it to compress the spring.
This motion is amplified and used to regulate another pressure that is transmitted to the spiral.
Gas-filled systems approximate Charles’s law (absolute pressure of a confined gas is proportional to absolute temperature) by keeping
the bulb volume relatively large compared to the rest of the system.
On the low side they are limited by the critical temperatures of the filling gas (usually nitrogen or helium), corresponding to −268°C,
and on the high side by the temperature limits on the bulb materials, usually corresponding to 700°C.
The maximum span can be 600°C, and is limited only by non-linearities due to mass flow from the bulb.
The minimum span is limited by the pressure at which the Bourdon tube becomes overstressed.
The speed of response is only 1s to 4 s, because the ratio between bulb mass and surface area tends to be favorable.
The Gas filled system can provide 150%-300% over range protection, as maximum temperature is limited only by permissible pressure
and temperature ratings of the bulb.
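A minimal sketch of the constant-volume relation the gas-filled system relies on (pressure proportional to absolute temperature); the fill pressure and fill temperature are hypothetical:

```python
# Gas-filled thermal system: at constant volume, P is proportional to absolute T.
# Fill conditions below are hypothetical example values.
P_FILL_BAR = 20.0        # bulb fill pressure at the fill temperature
T_FILL_K = 293.15        # fill temperature, 20 degC

def indicated_temperature_c(p_bar: float) -> float:
    """Convert a measured system pressure back to bulb temperature (degC)."""
    t_k = T_FILL_K * (p_bar / P_FILL_BAR)
    return t_k - 273.15

print(f"{indicated_temperature_c(27.3):.1f} degC")   # ~127 degC
```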
PRESSURE
Electronic Pressure Sensors

 Direct Mount
 Remote Mount (Impulse Tubing)

 Direct Sensing
 Chemical Seals / Remote Seals
1. With Capillary
2. Without Capillary

Transducer Types
 Strain Gage
 Capacitance
 Potentiometric
 Piezoelectric
 Inductive/Reluctive
 Optical

Local Pressure Sensors


 Bourdon
 Spiral
 Helical
 Capsule
 Bellows
 Diaphragm

Accessories
 Syphon
 Snubber
 Gauge Saver
Pressure is defined as a force acting evenly over a given area.

TYPES OF PRESSURE
Absolute Pressure
The most definite reference point is absolute zero pressure. This is the pressure of empty space in the universe.
When a pressure is based on this reference point, it is called absolute pressure.
To distinguish it from other types of pressures it is accompanied by the suffix "a" or "abs".
Atmospheric Pressure
The most important pressure for life on earth is atmospheric air pressure pamb (amb = ambiens, surrounding).
It is produced by the weight of the atmosphere surrounding the earth up to an altitude of about 300 miles.
Atmospheric pressure decreases continuously up to this altitude until it practically equals zero (full vacuum).
The normal value considered for atmospheric pressure is approximately 1 bar (1.013 bar at sea level).

The term pressure is used if the measured pressure is higher than atmospheric pressure.
The term vacuum is used if the measured pressure is below atmospheric pressure.

PAbsolute = Pressure Measured + Pressure Atmospheric


Pa = P + Patm

1 bar ≈ 1.02 kg/cm² ≈ 14.5 psi ≈ 750 Torr = 100,000 Pascal ≈ 10,200 mm WC ≈ 750 mm Hg = 1000 mbar
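A small Python sketch of the absolute/gauge relation and the approximate unit equivalences above; the 1 bar atmospheric reference follows the convention used in this document, and the example gauge pressure is hypothetical:

```python
# Gauge <-> absolute pressure and a few common unit conversions (1 bar basis).
P_ATM_BAR = 1.0          # atmospheric reference used in this document

def absolute_from_gauge(p_gauge_bar: float) -> float:
    return p_gauge_bar + P_ATM_BAR

BAR_TO = {
    "kg/cm2": 1.02,       # approximate
    "psi": 14.5,          # approximate
    "Torr (mm Hg)": 750,  # approximate
    "Pa": 100_000,
    "mm WC": 10_200,      # approximate
    "mbar": 1_000,
}

p_abs = absolute_from_gauge(3.5)          # 3.5 bar(g) -> 4.5 bar(a)
print(f"3.5 bar(g) = {p_abs} bar(a)")
for unit, factor in BAR_TO.items():
    print(f"1 bar ~ {factor} {unit}")
```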

Direct Mount / Remote Mount (Impulse Tubing)

The pressure transmitters can be mounted directly on the pipeline, vessel or tank, if the process fluid is clean, non-viscous, and
the temperature is within the operating range of the pressure transmitter.
If operating temperature of process is very high, then pressure transmitter can be remote mount with impulse tubing arrangement.
The impulse tubes will lower down the temperature and bring it to the normal operating range of the pressure transmitter.

Direct Sensing
If the process fluid is clean, non-viscous, non-corrosive, then pressure transmitter can be a direct mount or remote mount depending
on the process fluid temperature and the process fluid can be directly brought in contact/wetted with pressure sensor elements with
no harmful effects on the pressure transmitter.
Chemical Seals / Remote Seals
Seals transmit process pressure from the process fluid to the pressure instrument.
They are used where there must be no contact between the measuring instrument & the process fluid, for example when:
* The process fluid is highly corrosive
* The process fluid is highly viscous.
* The process fluid tends to polymerize.
* The process fluid is a slurry.
* The process fluid is very hot.
* Hygiene regulations must be observed for process fluid.
* Leakage of fluid to atmosphere or to environment is prohibited.

PERFORMANCE CONSIDERATIONS
The seal system will have additional temperature effects and response time depending on the system configuration.
Temperature Effects
Seal system temperature effects are caused by changes in volume and density of the fill fluid in the seal system.
Changes in volume are known as Seal Temperature Effects & occur when the fill fluid expands/contracts with fluctuations in process or
ambient temperature. This change in fill volume drives a change in the internal pressure of the transmitter/seal system.
The density of the fill fluid also changes with temperature fluctuations. Changes in density are described as Head Temperature Effect
as they represent a change in head zero offset reference. Both effects are combined to get total temperature effect for a seal system.
Seal Temperature Effects
Three primary factors affect the seal temperature effects of a diaphragm seal system:
Diaphragm Stiffness
Diaphragm stiffness is critical parameter affecting seal temperature effect. As fill fluid expands/contracts due to temperature changes,
seal diaphragm stiffness determine amount of volume change that is absorbed by seal diaphragm & amount exerted as back pressure
on the sensor module. This back pressure acting upon the sensing diaphragm of the transmitter represents output temperature error.
Diaphragm stiffness is affected by the diaphragm surface diameter, material of construction, thickness, and convolution pattern.
Generally, smaller diameter diaphragms are stiffer than larger diameter diaphragms, and have a larger seal temperature effect error
when the fill fluid expands/contracts with temperature changes. The larger diameter diaphragm, which is less stiff, can accommodate more
change in fill volume and has a smaller error than the smaller diameter diaphragm.
Fill Fluid
The expansion characteristics & volume of the fill fluid affect the seal temperature effects. All fill fluids expand/contract with changes in temperature.
Coefficient of thermal expansion defines amount of change & is represented in cubic centimeters of expansion per cubic centimeter
of fluid volume per degree Fahrenheit (cc/cc/F). Selecting fill fluid with smaller coefficient of thermal expansion will minimize effects.
Seal System Volume
The amount of fluid in a seal system will determine the potential amount of volume expansion.
Choosing appropriate direct mount or capillary type will determine overall volume in system and resulting seal temperature effects.
By selecting the right connection type for a high or low side connection, you can optimize the seal temperature effects for the system.
Head Temperature Effects
Head temperature effects are dependent on the change in ambient temperature, fill fluid specific gravity (ratio of density of fluid to
reference density of water), and the vertical distance between the process connections. When a transmitter/seal system is installed
initial head zero offset is function of vertical distance between two process connections multiplied by specific gravity of the fill fluid on
a transmitter. This offset value is read as negative differential pressure value, as fluid column is pushing on low side of transmitter and
pulling away from high side. Ambient temperature fluctuations cause density of fill fluid to change, and thus change zero offset head.
When the ambient temperature increases, the fill fluid gets lighter, reducing the head zero offset, which causes a positive shift in transmitter output.
When the ambient temperature decreases, the fill fluid gets denser, increasing the head zero offset & causing a negative shift in transmitter output.
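The head zero offset described above can be estimated as vertical distance × fill-fluid specific gravity; the sketch below, with hypothetical distance, specific gravity and thermal-expansion values, shows the offset and its shift when the ambient temperature rises:

```python
# Head zero offset of a remote-seal DP installation and its shift with ambient
# temperature. All numbers are hypothetical example values.
H_M = 2.0                 # m, vertical distance between process connections
SG_FILL = 0.93            # specific gravity of the fill fluid at calibration temperature
ALPHA_V = 0.00095         # 1/degC, volumetric expansion coefficient of the fill fluid

RHO_WATER = 1000.0        # kg/m3
G = 9.80665               # m/s2

def head_offset_mbar(sg: float) -> float:
    return RHO_WATER * sg * G * H_M / 100.0   # Pa -> mbar

offset_cal = head_offset_mbar(SG_FILL)            # initial head zero offset
sg_hot = SG_FILL / (1 + ALPHA_V * 30.0)           # fluid gets lighter after +30 degC
shift = head_offset_mbar(sg_hot) - offset_cal     # change in head offset magnitude

print(f"Initial head offset: {offset_cal:.1f} mbar")
print(f"Change in head offset after +30 degC ambient: {shift:+.2f} mbar")
```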
Total Temperature Effects
Seal Temperature Effects and Head Temperature Effects are combined to get the Total Temperature Effects for the seal system.
A balanced seal system consisting of seal, capillary, fill fluid on both high & low side will act to cancel seal temperature effects created
on either side of transmitter. The resulting total performance will only consist of head temperature effect, because a balanced system
will not affect the head temperature effect. A preferred approach is to select the Tuned System where the seal system that results in
seal temperature effects that partially/completely cancel out head temperature effects, resulting in improved system performance.

Time Response
Adding diaphragm seals to a transmitter increases the overall response time of a transmitter/seal system.
Time response varies with temperature, pressure, capillary length, capillary inside diameter (ID), fill fluid viscosity, transmitter type.
Direct vs. Capillaries: If time response is important, choose a direct mount connection when possible to minimize connection length.
For capillaries, longer capillary provides greater distance for pressure signal to travel, so specify only length required for installation.
Capillary ID: A smaller diameter capillary (ID) creates more restrictions and slows down the pressure transport.
A large diameter capillary (ID) provides a faster response time.
Fill Fluid Viscosity: Viscosity of the fill fluid is a measure of its fluidity and is temperature dependent.
Choosing a less viscous fill fluid reduces time response, especially when using longer capillaries or in colder conditions.
ELECTRONIC PRESSURE SENSORS (Transmitters)
Strain Gage
A strain gage is used to measure the deflection of an elastic diaphragm or Bourdon tube.
Strain gage transducers are used for narrow-span pressure and for differential pressure measurements.
The strain gage is used to measure the displacement of an elastic diaphragm due to a difference in pressure across the diaphragm.
It can detect gauge pressure if low pressure port is open to atmosphere or differential pressure if connected to 2 process pressures.
If the low pressure side is a sealed vacuum reference, the transmitter will act as an absolute pressure transmitter.
Strain gage transducers are available for pressure ranges as low as 0.005 bar to 10000 bar.

When external forces are applied to a stationary object, stress and strain are the result.
Stress is defined as the object internal resisting forces, and strain is defined as the displacement and deformation that occur.
Strain is defined as the amount of deformation per unit length of an object when a load is applied.
Strain is calculated by dividing the total change in length (ΔL) by the original length (L).
A change in capacitance, inductance, or resistance is proportional to the strain experienced by the sensor
If a wire is held under tension, it gets slightly longer & its cross-sectional area is reduced. This changes its resistance (R) in proportion
to the strain sensitivity (S) of the wire's resistance. The strain sensitivity, also called the gage factor (GF), is given by:
GF = (ΔR/R) / (ΔL/L) = (ΔR/R) / Strain
The most widely used characteristic that varies in proportion to strain is electrical resistance.
Although capacitance- and inductance-based strain gages have been constructed, their sensitivity to vibration, their mounting requirements,
and circuit complexity have limited their application.

In order to measure strain, the gage must be connected to an electric circuit capable of measuring changes in resistance corresponding to strain.
Strain gage transducers are usually connected to a Wheatstone Bridge Circuit
Any small change in the resistance of the sensing grid will throw the bridge out of balance, making it suitable for the detection of strain
and correspondingly relating it to the pressure applied, hence giving an indication of the pressure.
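A minimal sketch tying the gage factor to bridge output; the quarter-bridge relation Vout ≈ Vex·GF·strain/4 is a standard approximation added here (not stated in the text), and the numeric values are hypothetical:

```python
# Strain gage: resistance change from strain, and quarter-bridge output voltage.
# Vout ~ Vex * GF * strain / 4 is the usual small-signal quarter-bridge relation
# (an assumption added here); numeric values are hypothetical.
GF = 2.0          # gage factor of a typical metallic foil gage
R0 = 350.0        # ohm, unstrained gage resistance
strain = 500e-6   # 500 microstrain from diaphragm deflection
V_EX = 10.0       # V, bridge excitation

dR = GF * strain * R0                 # GF = (dR/R) / strain  ->  dR = GF * strain * R
v_out = V_EX * GF * strain / 4.0      # quarter-bridge, small-signal approximation

print(f"dR = {dR*1000:.1f} milliohm, bridge output = {v_out*1e3:.2f} mV")
```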

Capacitance
The capacitance change results from the movement of a diaphragm element. The diaphragm is usually metal or metal-coated quartz
and is exposed to the process pressure on one side and to the reference pressure on the other. Depending on the type of pressure,
the capacitive transducer can be either an absolute, gauge, or a differential pressure transducer.
Stainless steel is most common diaphragm material, but for corrosive service, high-nickel steel alloys, such as Inconel or Hastelloy,
give better performance. Tantalum also is used for highly corrosive, high temperature applications.
In a 2-plate capacitor sensor, movement of diaphragm between plates is detected as an indication of the changes in process pressure.
The deflection of the diaphragm causes a change in capacitance that is detected by a bridge circuit.
This circuit can be operated in either a balanced or unbalanced mode.
In balanced mode, the output voltage is fed to a null detector and the capacitor arms are varied to maintain the bridge at null.
Therefore, in the balanced mode, the null setting itself is a measure of process pressure.
When operated in unbalanced mode, process pressure measurement is related to ratio between output voltage & excitation voltage.
Single-plate capacitor designs are also common. In this design, the plate is located on the back side of the diaphragm and the variable
capacitance is a function of deflection of the diaphragm. Therefore, the detected capacitance is an indication of the process pressure.
Capacitance-type sensors are often used as secondary standards, especially in low-differential and low-absolute pressure applications.
They also are quite responsive, because the distance the diaphragm must physically travel is only a few microns.
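As a rough illustration of why a few microns of diaphragm travel is enough, a parallel-plate sketch using the textbook relation C = ε0·εr·A/d (an assumption added here; the geometry values are hypothetical):

```python
# Parallel-plate approximation of a capacitive pressure sensor cell.
# C = eps0 * epsr * A / d is a textbook relation; geometry values are hypothetical.
EPS0 = 8.854e-12        # F/m
EPS_R = 1.0             # relative permittivity of the gap medium (assumed)
AREA = 1e-4             # m^2 (1 cm^2 plate)
GAP0 = 50e-6            # m, rest gap between diaphragm and fixed plate

def capacitance(gap_m: float) -> float:
    return EPS0 * EPS_R * AREA / gap_m

c0 = capacitance(GAP0)
c1 = capacitance(GAP0 - 2e-6)     # diaphragm deflects 2 micrometres toward the plate
print(f"C changes from {c0*1e12:.2f} pF to {c1*1e12:.2f} pF "
      f"({(c1/c0 - 1)*100:.1f} %) for a 2 um deflection")
```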

Potentiometric
The potentiometric pressure sensor provides a simple method for obtaining an electronic output from a mechanical pressure gauge.
The device consists of a precision potentiometer, whose wiper arm is mechanically linked to a Bourdon or bellows element.
The movement of wiper arm across potentiometer converts mechanically detected sensor deflection into a resistance measurement,
using a Wheatstone bridge circuit. The mechanical nature of linkages connecting wiper arm to Bourdon tube, bellows, or diaphragm
element introduces unavoidable errors into this type of measurement.
Temperature effects cause additional errors because of differences in thermal expansion coefficients of metallic components.
Potentiometric transducers can be made extremely small & installed in very tight quarters, as inside housing of a dial pressure gauge.
They provide strong output that can be read without additional amplification. This permits them to be used in low power applications.
Potentiometric transducers detect pressures between 0.5 - 700 bar.
Piezoelectric
When pressure or force is applied to a quartz crystal, a charge is developed across the crystal that is proportional to the force applied.
Piezoelectric sensors are classified according to whether the crystal's electrostatic charge, its resistivity, or its resonant frequency is measured.
Depending on which phenomenon is used, the crystal sensor can be called electrostatic, piezoresistive, or resonant.
When pressure is applied to a crystal, it is elastically deformed. This deformation results in a flow of electric charge (which lasts for a
period of a few seconds). The resulting electric signal can be measured as an indication of pressure which was applied to the crystal.
These sensors cannot detect static pressures and are used to measure dynamic pressures resulting from blast, explosion or vibration.
The desirable features of piezoelectric sensors include their rugged construction, small size, high speed, and self-generated signal.
On the other hand, they are sensitive to temperature variations and require special cabling and amplification.
As quartz is a common and naturally occurring mineral, these transducers are generally inexpensive.
By selecting the crystal properly, we can ensure both good linearity and reduced temperature sensitivity.

Inductive / Reluctive
These type of sensors include the use of inductance, reluctance and eddy currents.
Inductance is that property of an electric circuit that expresses the amount of electromotive force (emf) induced by a given rate of
change of current flow in the circuit.
Reluctance is resistance to magnetic flow, the opposition offered by a magnetic substance to the magnetic flux.
In these sensors, a change in pressure produces a movement, which in turn changes the inductance or reluctance of an electric circuit.
Linear Variable Differential Transformer (LVDT) is used as working element of transmitter. LVDT operates on inductance ratio principle
In this design, 3 coils are wired to an insulating tube containing an iron core, which is positioned within the tube by a pressure sensor.
Alternating current is applied to the primary coil in the center & if the core is also centered, equal voltages will be induced in the two secondary coils.
Because the coils are wired in series, this condition will result in a zero output. As the process pressure changes and the core moves,
the differential in the voltages induced in the secondary coils is proportional to the pressure causing the movement.
LVDT type pressure transducers can detect absolute, gauge or differential pressures.
Their main limitations are susceptibility to mechanical wear and sensitivity to vibration and magnetic interference.

Reluctance is equivalent of resistance in magnetic circuit. If a change in pressure, changes the gap in magnetic flux paths of two cores,
the ratio of inductances L1 / L2 will be related to the change in process pressure. Reluctance-based pressure transducers have a very
high output signal (on the order of 40 mV/volt of excitation), but must be excited by AC voltage.
They are susceptible to stray magnetic fields and to temperature effects.
Because of their very high output signals, they are used in applications where high resolution over a relatively small range is desired.
They can cover pressure ranges from 0.005 bar to 10000 bar.

Optical
Optical transducers detect the effects of minute motions due to changes in pressure and thus generate a corresponding output signal.
A light emitting diode (LED) is used as light source and a vane blocks some of light as it is moved by diaphragm.
As process pressure moves the vane between source diode and the measuring diode, the amount of infrared light received changes.
The optical transducer must compensate for aging of LED light source by means of a reference diode, which is never blocked by vane.
This reference diode also compensates the signal for build-up of dirt or other coating materials on the optical surfaces.
Optical transducer is immune to temperature effects, as source & reference diodes are affected equally by changes in temperature.
The amount of movement required to make measurement is very small (0.5 mm), hysteresis & repeatability errors are nearly zero.
Optical transducers do not require much maintenance.
They have excellent stability & are designed for long duration measurements.
They are available with ranges from 0.5 bar to 5000 bar.
LOCAL PRESSURE SENSORS (Gauges)
Bourdon, Spiral, Helical
The Bourdon tubes are manufactured in C, helical, and spiral forms.
The C-Bourdon element is made by winding a tubular element circularly to form a segment of a circle.
The helical element is made by winding the tube several turns into a helix.
The spiral is formed by winding two or three turns in a spiral around the same axis.

The pressure changes the shape of the measuring element (Bourdon Tube) in proportion to the applied pressure.
A movement is used to amplify the relatively small travel of the tube end and to convert it into a rotary motion.
A pointer moving over a graduated dial indicates the pressure reading.
Basically, a pressure instrument with a flexible element consists of three parts: a flexible measuring element, a movement, a dial.
The measuring element converts the pressure P into a displacement S. The movement amplifies the displacement S and converts it
into an angle of rotation γ. The dial is marked with a graduated scale to convert the pointer position directly into a pressure reading.
Bourdon tubes are made from metal tubing with a circular cross-section.
The cross-section of the tubing is flattened and then the flattened tubing is formed into a circular, helical or Spiral shape.
For pressures up to 70 bar, C-shaped tubes are used. Above 70 bar they are helical. Spiral elements are used only in special cases.

C-Type
The bourdon tube consists of thin-walled tube that is flattened on opposite sides to produce a cross-sectional area elliptical in shape.
The tube is bent lengthwise into an arc of a circle of 270 to 300 degrees.
Pressure applied to the inside of the tube causes distention of the flat sections and tends to restore its original round cross-section.
This change in cross-section causes the tube to straighten slightly.
The tube is permanently fixed at one end; as pressure is applied, the free tip of the tube traces a curve as a result of the change in its angular position with respect to the center.
Within limits, the movement of the tip of the tube can then be used to position a pointer or to develop an equivalent electrical signal
to indicate the value of the applied internal pressure.

Spiral
Sometimes the free end motion of the C-Bourdon tube is insufficient to operate some of the motion balance devices.
The spiral element is essentially a series of C-Bourdon tubes joined end to end.
When pressure is applied, flat spiral tends to uncoil & produces greater movement of free end, requiring no mechanical amplification.
This increases the sensitivity and accuracy of the instrument because no lost motion or friction is introduced through links and levers.

Helical
A helical type sensor produces an even greater motion of free end than spiral element, eliminating need for mechanical amplification.
The range of the helical coil is affected by the diameter, wall thickness, number of coils used, and construction materials.
High-pressure elements might have as many as 20 coils, while low-span sensors can have only 2 or 3 coils.

Its resolution and sensitivity are high while its hysteresis is negligible.
It is used in laboratories and suitable for industrial installations in which precision pressure detection is desired.
The cost is high and the unit is sensitive to and thus should be protected from shocks and vibration.
One of its drawbacks is the slow response speed of 2 minutes for full-scale travel.

Diaphragm
A pressure instrument cannot always be connected directly to a process when the process medium is corrosive, viscous or slurry.
In such cases we introduce a seal system between pressure instrument and process medium, isolating it from the process medium.
A flexible diaphragm is used: one side of the diaphragm is in contact with the process medium and the other side is connected to the sensing element.
The space between the diaphragm and the sensing element is completely filled with a neutral liquid called as the Fill or Transfer fluid.
When pressure is applied to seal arrangement, it is transmitted by flexible diaphragm through Fill fluid to pressure sensing element.
Sometimes capillary tube is provided between seal diaphragm housing & sensing element to facilitate remote mounting of instrument
This type of arrangement is known as Diaphragm Seal, Chemical Seal or remote seals.
Capsules
A capsule consists of two diaphragms that are welded or soldered together around their outer edge.
The center of just one capsule half is supported in the case so that both halves can move freely.
This design doubles the displacement of diaphragm, allowing smaller pressures to be measured without reduction of wall thickness.

The advantages and disadvantages of capsule elements are similar to those of diaphragms.
Closing the inlet when the nominal pressure range is exceeded allows capsule elements to withstand high overloads.
A pin with a gasket is attached to one half of the capsule. In the event of excessive pressure in the chamber, this pin closes the inlet
and at the same time supports the center of the upper diaphragm, increasing the load-bearing capacity significantly.
The materials used are copper alloys and stainless steels.

Bellows
The need for a pressure sensing element that was extremely sensitive to low pressures resulted in development of bellows element.
The metallic bellows is most accurate when measuring pressures from 0.05 to 5 bar.
Bellows are thin walled cylindrical containers with deep corrugations. When pressure is applied, the length of the bellows changes.
Depending on the application, a supporting spring can be installed inside the bellows. Bellows are notable for their good linearity.

The bellows is a one-piece, collapsible, seamless metallic unit that has deep folds formed from very thin-walled tubing.
System pressure is applied to internal volume of bellows. As inlet pressure to instrument varies, the bellows will expand or contract.
The moving end of the bellows is connected to a mechanical linkage assembly.
As the bellows and linkage assembly moves, either an electrical signal is generated or a direct pressure indication is provided.
The flexibility of a metallic bellows is similar in character to that of a helical, coiled compression spring.
Up to the elastic limit of the bellows, the relation between increments of load and deflection is linear.
However, this relationship exists only when the bellows is under compression. It is necessary to construct the bellows such that all of
the travel occurs on the compression side of the point of equilibrium.
Therefore, the bellows must always be opposed by a spring, and the deflection characteristics will be those of the spring and bellows acting together.

Accessories (Syphon, Snubber, Gauge Saver, Alarm Contacts etc)


Syphon
Pigtail and U shaped syphons are used to protect the pressure gauge from the effects of high temperature and high pressure media such as steam, and
also to reduce the effect of rapid pressure surges. The condensate inside the coil of the syphon prevents direct contact.
In order to prevent live steam from entering bourdon tube, a syphon filled with water shall be installed between gauge & process line.
If freezing of condensate in loop of a siphon is a possibility, a diaphragm seal should be used to isolate the gauge from process steam.
Snubber
Snubber is used to suppress the effects of sudden pressure pulses and fluctuations of pressure peaks.
It is also provided with adjustable screw to restrict the flow or fine tuning the flow as per operating conditions.
Snubbers should be used when a pressure gauge is subjected to rapid pressure fluctuations, which make the gauge difficult to read
because of rapid pointer movement. Snubber generally reduces pressure impact, slows the speed and range of pointer movement,
and prolongs gauge life. They are also known as throttle screws, pulsation dampeners or simply snubbers.
Gauge Saver
A gauge saver is intended to protect the pressure gauge against the effect of pressures that exceed the maximum rating of the pressure gauge.
The operation of the unit is based upon a piston assembly that can open or close dependent upon the applied pressure at the inlet.
An internal spring holds the valve open under normal conditions. When the inlet pressure exceeds the setting pressure of the valve
then the spring is compressed and closes the valve. Setting is achieved by selection of appropriate spring to operate in correct range
and adjustment of the spring adjuster until the desired operational setting is achieved
Alarm Contacts
Direct contacts are simple mechanical switches which make or break an electric circuit by a contact arm moving with the pointer.
Indirect contacts require an auxiliary energy source and an amplifier circuit, in addition to the components installed on the instrument.
An example of a direct contact is the Magnetic Snap-Action Contact, and an example of an indirect contact is the Proximity Switch (Inductive).

Vibration Damping by Liquid Filling


Many pressure instruments are exposed to vibrations or shocks, transmitted either by process fluid itself, by instrument mounting or
during operation. As a result, moving parts and pointer start to vibrate & pressure reading becomes very difficult or even impossible.
In extreme cases the measuring element may be broken by resonance and hazards may occur from leaking fluid.
The liquid filling:
• Extends life of gauge when used in high-dynamic operating conditions • Prevents resonance-induced fracturing of Bourdon tubes
• Allows readings even when vibration and pulsations are present • Prevents aggressive ambient air from entering the gauge case
• Prevents condensation and the formation of ice • Improves overall operational reliability even under extreme loads
• Economical solution to pressure measuring problems.
FLAME ARRESTER
EXPLOSIONS can be broadly classified in two types
 Deflagration - Explosions propagating at Sub-Sonic Velocities - Less than 1000 m/s
 Detonation - Explosions propagating at Super-Sonic Velocities - More than 1000 m/s

Deflagrations are thermal processes that proceed radially outward in all directions through available fuel, away from ignition source.
As the volume of the reaction zone expands with every passing moment, a larger surface area contacts more fuel, like the surface of an inflating balloon.
The reaction starts small and gathers energy with time. This process occurs at speeds depending largely on the chemistry of the fuel -
from 1 to 10 meters per second in gasoline vapors air to hundreds of meters per second in black powder or nitrocellulose propellants.
These speeds are less than the speed of sound in the fuel (the speed of sound through a material is not constant, but depends on
the material's elasticity and density, so it differs from one fuel mixture to another).
Deflagrations are thermally initiated reactions propagating at subsonic speeds through materials like: mixtures of natural gas and air,
LP gases and air, or gasoline vapors and air, black powder or nitrocellulose (single-base) propellants, rocket fuel and many other fuels.
The pressures developed in deflagration explosions are dependent on fuels, geometry, strength (failure pressure) of a vessel (if any).
Pressures can range from about 0.005 bar, through approximately 10 bar for gasoline-air mixtures, up to 500 bar for propellants.
Times of development are on the order of thousandths of a second to a half-second or more.
Maximum temperatures are on the order of 1000 – 2000°C.

Detonations are different. While a detonation is still chemically an oxidation reaction, it does not rely on combination with oxygen from the surrounding air.
It involves special, chemically unstable molecules which, when energized, instantaneously split into many small pieces that then recombine
into different chemical products, releasing very large amounts of heat as they do so.
High explosives are defined as materials intended to function by detonation, such as TNT, nitroglycerine, C4, picric acid and dynamite.
The reaction speeds are higher than the speed of sound in material (supersonic). Since most explosives are roughly the same density,
a reaction speed of 1000 m/s is set as the minimum speed that distinguishes detonations from deflagrations.
Due to the supersonic reaction speed, a shock wave develops (e.g. the sonic boom from a supersonic aircraft) that triggers the propagating reaction.
Detonation speeds are of the order of 1000 - 10000 m/s, therefore times of development are on the order of millionths of a second.
Temperatures produced can be 3000 - 5000°C and pressures can be from 500 - 5000 bar.
A few materials can transition from deflagration to detonation depending on geometry (long, straight galleries, pipes), temperature,
and manner of initiation.

The effects of detonation type explosions are very different from those of deflagration explosions.
Deflagrations tend to push, shove, heave, often with very limited shattering & little production of secondary missiles (fragmentation).
The maximum pressures developed by deflagrations are often limited by the failure pressure of the surrounding vessel or structure.
Detonations on other hand, tend to shatter, pulverize, splinter nearby materials with fragments propelled away at a very high speeds.
There is no time to move and relieve pressure so damage tends to be much more localized (seated) in the vicinity of explosive charge
(and its initiator) than a deflagration whose damage is more generalized.
Damage from a deflagration is more severe away from the ignition point, as the reaction energy grows with the expanding reaction (flame) front.
It is for this reason that identification of an ignition source & mechanism for deflagration may be more difficult than for a detonation.

FLAME ARRESTOR
A flame arrester is a device which is fitted to an opening of an enclosure or to the connecting pipework of a system of enclosures and
whose intended function is to allow flow but prevent the transmission of flame.
It quenches and arrests the flame by absorbing and dissipating its heat until the gas temperature falls below its ignition temperature.
A flame arrestor is a device that allows vapors to flow freely through it but will not allow flame to pass through it.
The arrester element consists of a number of small holes. When a flame enters the arrester element it is broken into a number of small flames.
Heat is absorbed and dissipated by the arrester element and the temperature is reduced below the ignition temperature; hence the flame is extinguished.
The number of arrester elements and the gap size depend on operating parameters such as Temperature, Pressure, Explosion Group and Fluid.
The narrower and longer the gap, the greater the extinguishing effectiveness and the greater the pressure loss.
The wider and shorter the gap, the lesser the extinguishing effectiveness and the lesser the pressure loss.

If L/D ratio is Less than 50, then Deflagration Flame Arrestor is suitable.
If L/D ratio is Greater than 50, then Detonation Flame Arrestor is suitable.
where L is the length of pipe from the Flame Arrestor till the probable source of ignition or end of the pipe.
D is diameter of the pipe.
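As a rough illustration of the L/D rule of thumb above, the short Python sketch below simply applies the threshold of 50 to choose between a deflagration and a detonation type arrester. The function name and the metre units are illustrative assumptions, not taken from any standard.

def select_flame_arrester(pipe_length_m, pipe_diameter_m):
    """Apply the L/D rule of thumb described above (illustrative sketch).
    L = pipe length from the arrester to the probable ignition source (or pipe end), D = pipe diameter."""
    if pipe_diameter_m <= 0:
        raise ValueError("Pipe diameter must be positive")
    ld_ratio = pipe_length_m / pipe_diameter_m
    if ld_ratio < 50:
        return f"Deflagration flame arrester (L/D = {ld_ratio:.1f} < 50)"
    return f"Detonation flame arrester (L/D = {ld_ratio:.1f} >= 50)"

# Example: 12 m of 150 mm (0.15 m) pipe gives L/D = 80, so a detonation type arrester is indicated.
print(select_flame_arrester(12.0, 0.15))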

Flame Arrester shall be installed


 On tanks that contain liquids with flash points below 43°C
 On tanks that contain liquids with flash points above 43°C, but where tank may be exposed to combustibles or other tanks
containing liquids with flash points below 43°C
 On tanks where the contents can be heated to their flash points under normal operation
A flame arrester is a safety device installed on top of a tank when the flash point of product is lower than possible tank temperature.
A flame arrester is also used as in-line device where the combustible gases are transported through pipe lines to actual combustion,
as in an incinerator or flare or where combustion fumes are vented through piping to atmosphere where lightning can cause a flame.
Flame arresters should be designed to stop tank farm fires caused by lightning, sparking, or actual flame in the immediate tank area,
and to prevent flashbacks in lines.
In order to accomplish the above, a flame arrester must act as a barrier (stop a flame), a flame holder (contain the flame at barrier),
and dissipate heat in order to prevent auto ignition on the downstream side of the flame arrester. In order to be an effective flame prevention device,
a flame arrester must have a diameter small enough to stop the flame created by the combustible gas. Each combustible gas has a
different required diameter to stop the flame. In addition to stopping the flame, an arrester must be able to absorb & dissipate heat.
Flame element mass ensures that hot gases above auto ignition temperature never reach the downstream side of the flame arrester.
Unless a flame arrester meets or exceeds the above mentioned design criteria, it is not a true flame arrester.

Deflagration Flame Arrester (Explosions propagating at Sub-Sonic Velocities - Less than 1000 m/s)
Deflagration Flame Arrestor is generally designed with an eccentric housing to automatically drain condensate built up in the housing.
It can be installed in pipelines that run close to floors , walls or low points where condensate can collect within the piping system.
When installing a Deflagration Flame Arrestor, the ratio of the distance between the potential ignition source and the installed device (L)
to the pipe diameter (D) should not exceed 50. The L/D ratio shall always be less than 50 for a Deflagration Flame Arrestor.

Detonation Flame Arrester (Explosions propagating at Super-Sonic Velocities - More than 1000 m/s)
In Detonation type explosions, a Shock Wave is produced which travels ahead of the flame front.
Once the Detonation enters the Flame Arrestor, the heat is absorbed and dissipated from the Shock Wave by a Shock Wave Tube and
then it enters the narrow gaps of the arrestor element where it is extinguished.
SHOCK WAVE GUIDE TUBE EFFECT (SWGTE) – It is the component for decoupling of the Shock Wave & Flame Front.
Detonation Flame Arrestor can be mounted anywhere in the pipe. It is independent of the Potential Ignition Source.

The main difference between the deflagration and detonation types is the number of flame arrester elements required to extinguish the flame.
The number of flame arrestor elements is smaller in the deflagration type and larger in the detonation type.

Flame Arresters can be broadly classified as


 Deflagration Flame Arrester
 Detonation Flame Arrester
 In Line Flame Arrester
 End of Line Flame Arrester
RUPTURE DISCS
Overpressure may occur due to thermal expansion, equipment failure, control failure, misoperation, or an external fire.
A rupture disc is a non-reclosing device actuated by inlet static pressure and designed to function by the bursting of a pressure-containing disc.
The rupture disc in its simplest form is a metallic membrane that is held between flanges and is designed and manufactured to burst
at a predetermined pressure and corresponding temperature.
Uses
Mounted directly on equipment
The rupture disc may be used to relieve an inexpensive and inert material to air if the loss of process pressure can be tolerated.
At the other extreme, rupture discs may be used to vent highly toxic, poisonous, or corrosive materials into a flare header system.
Upstream of a Relief Valve
Mounting a rupture disc upstream of a relief valve is a very useful application. Under normal conditions the rupture disc is
sealed tight and protects the relief valve from being contacted by corrosive, plugging, hazardous, freezing, or regulated processes.
If maximum allowable working pressure is exceeded, the disc will break and the relief valve will start to relieve the over pressure.
As the pressure drops, the valve will shut and reclose the process. Thus, the best characteristics of both devices are utilized.
As the rupture disc is a differential pressure (d/p) device it is important to prevent a buildup of pressure between disc and relief valve.
This is accomplished by the use of a pressure switch and/or a pressure gauge and excess flow valve.
Downstream of a Relief Valve
A rupture disc downstream of relief valve may be desired when valve discharges to a vent header that might contain corrosive vapors.
If conventional relief valves are used, then precautions should be taken as to prevent the buildup of pressure between relief valve and
rupture disc due to valve leakage. A better option is to use bellows sealed or pilot-operated valve whose set pressure is unaffected by
the accumulated pressure between the relief valve and rupture disc.

Rupture discs can be classified in four general categories: Forward Acting, Reverse Acting, Solid, Scored.
Forward Acting discs are pressurized on concave side of disc such that material in the dome of the disc is subjected to tensile stresses.
Flat discs are also considered to be forward acting. Forward acting rupture discs can be prebulged, composite, scored, flat, graphite.
Reverse Acting discs are pressurized on convex side of disc such that material in the dome of disc is subjected to compressive stresses
As the burst pressure rating of a reverse acting type disc is reached, the compression loading on the rupture disc causes it to reverse,
snapping through the neutral position and causing it to open by a predetermined knife blade penetration or scoring pattern.
In the solid type design, early rupture discs were prebulged solid metal discs that burst in tension and could release metal fragments into the system.
These fragments could prevent relief valves from reseating or could cause damage.
In the scored type design, newer rupture discs protect against this by using either tension-loaded scored or slotted metal discs,
reverse-buckling discs with knife-edge cutters, or reverse-buckling discs that are preweakened (scored along lines), so that they burst
fully open without any metal fragmentation.

The Forward Acting design provides a satisfactory service life when the operating pressure is up to 70% of the marked burst pressure of the disc.
The Reverse Acting design provides a satisfactory service life when the operating pressure is up to 90% of the marked burst pressure of the disc.
Solid design results in the release of metal fragments which could damage or prevent the relief valve from reseating.
Scored design will not release any metal fragmentation as they burst along the pre weakened or slotted scored lines.
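A minimal Python sketch of the operating-pressure guideline above follows; the 70% and 90% figures come from the text, while the function name and the example burst pressure are purely illustrative.

# Illustrative check of the operating pressure guideline described above.
OPERATING_RATIO = {"forward_acting": 0.70, "reverse_acting": 0.90}

def max_operating_pressure(marked_burst_pressure_barg, disc_type):
    """Return the suggested maximum operating pressure for the given disc type."""
    return marked_burst_pressure_barg * OPERATING_RATIO[disc_type]

# Example: a disc with a marked burst pressure of 10 barg
print(max_operating_pressure(10.0, "forward_acting"))   # 7.0 barg
print(max_operating_pressure(10.0, "reverse_acting"))   # 9.0 barg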
PARTIAL STROKE TESTING (PST)
Processing plants contain many valves that perform safety functions (e.g. emergency shutdown (ESD) and blowdown (BD)).
In normal conditions, the ESD valves are always kept open and the blowdown/depressurization valves are always kept in the closed position.
If the ESD/BD valves are called into use, they have to work reliably, as the consequences of failure will be far more serious than a mere process disruption.
Long experience has shown that, if the valves are not exercised, they can stick in one position.
In fact, the general perception is that sticking is the main failure mode of safety related valves. Sticking may be caused by several factors
(e.g. dirt or corrosion). The movement of the valves can reduce the dirt build-up and can give an indication if corrosion is present (e.g.
because the stroking time is longer than specified).
Earlier, the valves were fully tested at scheduled shutdowns/turnarounds. This may mean an interval of one or two years between valve tests.
Given the trend in the process industry to follow the requirements of IEC 61508 and 61511 to preserve the safety integrity levels (SIL),
these long intervals between tests are often too long to show an adequately low probability of failure on demand (PFD).

PST - Partial stroke testing of the valves can mitigate some of these problems.
PST is a function test used to check that safety valves will operate reliably in an emergency, without non-conformities or sticking of the valve or actuator,
by closing the valve slightly and slowly, to an extent that does not interfere with the process during plant operation.
The main advantage of partial stroke testing is that it will provide a measure of confidence that a valve is not stuck in one position.
The valve movement can dislodge any dirt build-up to help prevent sticking. If the valve is already stuck, the test will detect it and
corrective measures can be taken. The system can either be brought to an orderly shut down to perform repairs, or, if repairs can be
completed quickly, the shut down valve may be temporarily by-passed.

There are 3 types of techniques used for PST


 Mechanical Limiting
 Position Control
 Solenoid valves

Mechanical Limiting involves installation of a mechanical device to limit the degree of valve travel. When mechanical method is used,
valve is not available for process shutdown. Mechanical devices used for partial stroke testing include collars, valve jacks, jammers.
* Valve collars are slotted pipes that are placed around the valve stem of a rising stem valve. The collar prevents valve from traveling
any farther than the top of the collar. Any fabrication shop can build a valve collar, suitable for test use.
* A valve jack is a screw that is turned until it reaches a set position. The valve jack limits actuator movement to screw set position.
The valve jack is ordered from valve manufacturer when valve is purchased. Valve jacks work with both rising stem & rotary valves.
* Jammers are integrated into rotary valve design. They are essentially slotted rods that limit valve rotation when placed in position
using an external key switch. Since jammer is integrated into rotary valve, the jammer must be purchased from a valve manufacturer.
A contact can be provided for the key switch to allow annunciation in the control room whenever the key is used.

Position Control uses positioner to move valve to a pre-determined point. This method can be used on rising stem and rotary valves.
Since most of the ESD & BD valves are not installed with a positioner, this method does require installation of additional hardware.
Consequently, cost is a major drawback for the position control method.
A limit switch or position transmitter can be used to determine and document successful completion of the tests. If a smart positioner
is used for the position control, a HART maintenance station can collect the test information and generate test documentation.
Of course, the use of a smart positioner and maintenance station further increases the capital cost.
A solenoid valve should still be used for valve actuation. This solenoid valve must be installed between the positioner and actuator.
The positioner does contribute to spurious trip rate during normal operation, since positioner can fail and vent the air from the valve.
When a solenoid valve is installed between positioner and the actuator, the safety functionality is never lost during partial-stroke test.
De-energizing the solenoid valve will shut the valve, regardless of the positioner action.

Solenoid Valves can be used to accomplish a partial stroke test.


The solenoid valve can be same as one used for valve actuation, resulting in lower capital and installation costs than other methods.
If the actuation solenoid valve is used for PST, then this method will also test solenoid valve’s capability to execute safe shutdown.
When a simplex solenoid valve is used, the solenoid is de-energized and re-energized. If the solenoid valve does not reset, the test becomes a trip.
Using redundant solenoids valves can eliminate this problem.
One solenoid is used as the primary actuation solenoid and is confirmed on-line using a pressure switch.
A secondary solenoid valve is off-line and confirmed in the vented state (off-line state) by a pressure switch.
The logic is programmed so that if primary solenoid valve goes to vent state without being commanded (detected by pressure switch),
the secondary solenoid valve is energized, preventing the spurious trip. Solenoid valve testing is performed by cycling solenoid and by
verifying that each solenoid valve successfully vents and resets using the pressure switches.
1oo1 can be used for PST by incorporating a PLC timer to pulse the power to solenoids for just long enough to achieve partial stroke.
To verify movement of valve, position transmitter/limit switch is used. The position indication is used to prevent over-stroking of valve
i.e. if the valve moves too far during the timed stroke, the solenoids valves are re-energized. For preventive maintenance activities,
over stroke/under stroke alarms can be configured to let maintenance know if valve is moving too quickly or too slowly during test.
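The sketch below is a much simplified Python illustration of the timed-pulse partial stroke test logic described above. The stroke target, over-stroke limit, scan count and the toy valve model are all assumptions made for the example; a real implementation lives in the safety logic solver and uses the plant's own solenoid outputs and position feedback.

PARTIAL_STROKE_TARGET = 15.0   # % of travel the test should achieve (illustrative)
OVER_STROKE_LIMIT = 25.0       # re-energize immediately beyond this travel (illustrative)
PULSE_SCANS = 10               # PLC timer expressed as logic-solver scans (illustrative)

class SimulatedValve:
    """Toy model: the valve drifts closed while its solenoid is de-energized (stand-in for real I/O)."""
    def __init__(self):
        self.energized = True
        self.travel = 0.0                     # 0% = ESD valve fully open
    def deenergize(self):
        self.energized = False
    def energize(self):
        self.energized = True
    def read_percent(self):
        if not self.energized:
            self.travel = min(self.travel + 2.0, 100.0)   # roughly 2% of travel per scan
        return self.travel

def partial_stroke_test(valve):
    """Pulse the solenoid, watch the valve travel, then restore it and judge the test."""
    valve.deenergize()                        # start the partial stroke
    max_travel = 0.0
    for _ in range(PULSE_SCANS):              # timed pulse
        travel = valve.read_percent()
        max_travel = max(max_travel, travel)
        if travel >= OVER_STROKE_LIMIT:       # over-stroke protection: re-energize immediately
            break
    valve.energize()                          # valve returns to the fully open position
    # Pass if the valve moved enough to prove it is not stuck, without over-stroking
    return PARTIAL_STROKE_TARGET <= max_travel < OVER_STROKE_LIMIT

print("PST passed:", partial_stroke_test(SimulatedValve()))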
MATERIAL SELECTION

STAINLESS STEELS
Stainless Steels are Iron based alloys containing Chromium.
Stainless Steels usually contain 11% - 30% Cr and more than 50% Iron (Fe).
If an alloy contains less than 50% Iron, then it is not called stainless steel. It is then named after the next major element in that alloy.
They attain stainless characteristics, because of formation of invisible & adherent chromium-rich oxide surface film (Chromium Oxide)
Chromium Oxide (Cr2O3) is a Passive film. This oxide establishes on the surface and heals itself in the presence of oxygen.
Other alloying elements such as nickel, molybdenum, copper, titanium, aluminium and nitrogen are added to enhance specific characteristics.

A metal derives its corrosion resistance by forming a protective oxide film on the surface.
Metals may be classified in two categories - Active and Passive, depending on the nature of the oxide film.
In Active Film Metals, the oxide film continuously grows until it reaches a limiting thickness, then sloughs/tears off and grows again,
repeating this process until the metal is completely consumed. Examples of metals with active oxides are iron, copper and zinc.
Passive Film Metals form an extremely thin oxide layer, in the order of 10 atoms -100 atoms thick and then stops growing.
This film remains stable until something upsets equilibrium. Examples of metals with passive films are SS, titanium, gold, platinum.
Chromium imparts special property to iron that makes it corrosion resistant.
When chromium is in excess of 11%, the corrosion barrier of iron changes from active film to passive film.
Active film continues to grow over time in corroding solution until base metal is consumed, while passive film will form and stop growing.
This passive layer is extremely thin, in the order of 10 to 100 atoms thick, and is composed of chromium oxide (Cr2O3) which prevents
further diffusion of oxygen into the base metal.

Stainless Steels
In 1913 Harry Brearley discovered that adding 11% chromium to carbon steel would impart a good level of corrosion and oxidation resistance.
These steels were named “stainless steels” and, for metallurgical reasons, were termed “ferritic steels” due to their crystallographic structure.
These ferritic stainless steels lacked the ductility to undergo extensive fabrication and could not be welded.
Other alloying elements were therefore added to produce a material in which the ferrite was transformed to austenite that was stable at room temperature.
This new group of steels had 18% chromium, nickel was added as second alloying element & were termed “Austenitic Stainless Steels”
The optimum combination for Austenitic Stainless Steels is 18% chromium and 8% nickel – hence the terminology 18 - 8s.
Probably the next major advance in the development of stainless steels was the discovery that relatively small additions of molybdenum
had a pronounced effect on corrosion resistance, greatly enhancing ability to withstand effects of mineral acids and other corrodents.
From these early developments, there is a tremendous growth in production facilities and number of grades of stainless steel available.

AUSTENITIC STAINLESS STEELS


Stainless Steels which contains minimum 18% Cr and 8% Ni, with other alloying elements added to enhance specific characteristics.
Austenitic Stainless Steels have an austenitic crystal structure, which is Face Centered Cubic (FCC). They are non-magnetic.
Austenite structure is formed through the generous use of austenitizing elements such as nickel, manganese and nitrogen.
The major weakness of the Austenitic Steels is their susceptibility to Chloride Stress Corrosion Cracking.
Nickel containing stainless steel or Austenitic Stainless Steels is especially susceptible to chloride induced SCC.
The susceptibility to SCC is in the nickel range of about 5% - 35% and that pure ferritics such as Types 430, 439, 409 are immune.
The point of maximum susceptibility is between 7% - 20% Nickel. This makes types 304/304L, 316/316L, 321 etc very prone to failures.

The base Austenitic Stainless Steel type is SS304 (18% Cr – 8% Ni), also known as “18 – 8”.
Adding 2% Molybdenum to SS304 makes it SS316, Adding more Molybdenum to SS316 makes it SS317
Adding Titanium to SS304 makes it SS321
Lowering Carbon content in SS304 makes it SS304L, Lowering Carbon content in SS316 makes SS316L
FERRITIC STAINLESS STEELS
Stainless Steels which contain minimum 11% Cr and 0% Ni (No Nickel) with other alloying elements to enhance specific characteristics.
After Chromium the next major alloying element in Ferritic Stainless Steels is Molybdenum and some Nitrogen.
Ferritic Stainless Steels contains 11% - 30% Cr and have Molybdenum, some Nitrogen and other alloying elements but NO Nickel.

They have excellent resistance to Chloride Stress Corrosion Cracking


Ferritic Stainless Steels are chromium containing alloys with Ferritic crystal structures that is (Body Centered Cubic , BCC).
They are ferromagnetic with good ductility, but high-temperature mechanical properties are inferior to austenitic stainless steels.
Toughness is limited at low temperatures and in heavy sections.

“SEA-CURE” is one of the most popular Super Ferritic Stainless Steel alloys, widely used in marine and seawater applications,
since its resistance to Chloride Stress Corrosion Cracking in seawater is the same as that of titanium.
The most widely used ferritic stainless steel is Type 409, (10.5% Cr alloy with NO nickel).

Ferritic grades of stainless steel have been developed in order to provide a group of stainless steels that resist corrosion and oxidation
and are highly resistant to chloride stress corrosion cracking. They are magnetic but cannot be hardened or strengthened by heat treatment.
They can be cold worked and softened by annealing. They are more corrosion resistant than martensitic grades, but inferior to austenitic grades.
Like martensitic these are straight chromium steels with no nickel.

MARTENSITIC STAINLESS STEELS


Stainless Steels which contains minimum 11% Cr and 0% Ni (No Nickel) with other alloying elements to enhance specific characteristic.
Martensitic Stainless Steels contains 11% -18% (Cr) and have carbon upto 1.20% C and some other alloying elements but NO Nickel.
Martensitic stainless steels are similar in composition to the ferritic group but contains lower chromium and higher carbon.
Martensitic Stainless Steels contains 11% - 18% Cr, upto 1.20% C and small amounts of Mn , Ni , Mo.
The most widely used martensitic stainless steel is Type 410, (12% Cr alloy with 0.12% C).

Martensitic Stainless Steels are essentially alloys of chromium & carbon that possess martensitic crystal structure in hardened condition
They are magnetic, hardenable by heat treatment and usually less resistant to corrosion than the other grades of stainless steel.
The chromium content usually does not exceed 18%, while the carbon content may be upto 1.20%.
The chromium and carbon contents are adjusted to ensure a martensitic structure after hardening.
Martensitic Stainless steels are similar to ferritic steels in being based on chromium but have higher carbon levels up as high as 1.20%.
This allows them to be hardened and tempered much like carbon steels and low-alloy steels.
They have high strength and moderate corrosion resistance. They are magnetic and have generally low weldability and formability.

Martensitic stainless steels are developed to provide group of stainless alloys that are corrosion resistant & hardenable by heat treating.
The martensitic grades steels have chromium, carbon, containing no nickel. They are magnetic and can be hardened by heat treating.
The martensitic grades are mainly used where hardness, strength, and wear resistance are required.

DUPLEX STAINLESS STEELS


Stainless Steels which contains minimum 11% Cr and 2%-6% Ni with other alloying elements added to enhance specific characteristics
They are characterized by having both austenite and ferrite crystal structures in their microstructure, hence the name Duplex Stainless Steel.
A ferrite matrix with islands of austenite characterizes the lower nickel grades (2% - 4% Ni).
An austenite matrix with islands of ferrite characterizes the higher nickel range (4% - 6% Ni).
When the matrix is ferrite, the alloys are resistant to chloride stress corrosion cracking.
When the matrix is austenitic, the alloys are sensitive to chloride stress corrosion cracking.
Duplex Stainless Steels are a mixture of BCC ferrite crystal structures and FCC austenite crystal structures.
The percentage of each phase is dependent on the composition and heat treatment.
Duplex stainless steels have similar corrosion resistance as austenitic, except they have better stress corrosion cracking resistance.
Duplex stainless steels also generally have greater tensile and yield strengths, but poorer toughness than austenitic stainless steels.
High strength, good corrosion resistance and good ductility characterize them. They are magnetic.

These steels have a microstructure which is approximately 50% ferritic and 50% austenitic.
This gives them a higher strength than either ferritic or austenitic stainless steels, and they are resistant to stress corrosion cracking.
“Lean duplex” steels are formulated to have comparable corrosion resistance to standard austenitic steels but with enhanced strength
and resistance to stress corrosion cracking. They are magnetic but not so much as the ferritic, martensitic.
“Superduplex” steels have enhanced strength and resistance to all forms of corrosion compared to standard austenitic stainless steels.
They are called “Duplex” because they combine the austenitic crystal structure (chromium-nickel) and the ferritic crystal structure (chromium only).
Duplex Stainless Steels offers greater strength, higher resistance to stress-corrosion cracking, than most other types of stainless steels.
Stainless Steel is the name given to a group of corrosion resistant and high temperature steels.
Their remarkable resistance to corrosion is due to a chromium-rich oxide film which forms on the surface due to addition of chromium.
When ordinary carbon steel is exposed to rain water, it corrodes forming a brown iron oxide, commonly called rust, on the surface.
This is not protective and eventually the entire piece of steel will corrode and be converted to rust (Active Film Metal).
But when enough chromium (usually more than 11%) is added to ordinary steel, the oxide film on surface is transformed - it is very thin,
virtually invisible and protective in a wide range of corrosive media (Passive Film Metal). Such steels are known as stainless steels.

Types of Stainless Steel


The basic composition of stainless steel is Iron & Chromium.
This is simplest form of stainless steel known as ferritic stainless steels because their crystal structure is called ferrite.
The ferritic stainless steels are magnetic like ordinary steel. A commonly used grade is Type 430 which is used for automotive trim and
inside dishwashers, clothes dryers. They are often the least expensive stainless steels but are generally more difficult to form and weld.

If we wish to make carbon steel strong and hard, we increase carbon content, heat treat carbon steel by quenching and tempering it.
We can do same with stainless steel - if we increase carbon content of ferritic steels then we produce martensitic stainless steels.
Martensitic grades are strong/hard, but are brittle and difficult to form and weld. Like ferritic stainless steels, martensitic are magnetic.

The majority of stainless steels contain nickel (Ni), which is added for a number of reasons but particularly to change the crystal structure
from ferrite to austenite. These are known as Austenitic Stainless Steels; they are ductile, tough and, most importantly, easy to form and weld.
They are not magnetic in the annealed condition. The most common example is Type 304 or "18-8" - the most widely used stainless steel in the world.
The lower carbon version, Type 304L is always preferred in more corrosive environments where welding is involved.

Molybdenum (Mo) is added in ferritic stainless steels to increase corrosion resistance, particularly in marine and acidic environments.
It increases an alloy's pitting and crevice corrosion resistance. These corrosion forms are caused by highly aggressive chloride ion (Cl¯)
present in sea salts. They are referred to as marine grades of stainless steel, since they are widely used for items such as boat fittings.
When 2%-3% molybdenum is added in austenitic stainless steels Type 304 / 304L, we create Type 316 / 316L stainless steel.
They are also known as acid resistant grades, since they have better corrosion resistance for some acids such as sulphuric acids.
But their range of applications is wide, from building facades in aggressive atmospheres to piping onboard chemical tankers.

Halfway between ferritic and austenitic there is a type known as duplex stainless steel, which is about 50% ferrite and 50% austenite.
Because of this duplex structure, they are resistant to the stress corrosion cracking which can affect austenitic stainless steels in chloride service.

SSC (Sulphide Stress Corrosion Cracking) & SCC (Chloride Stress Corrosion Cracking)

SSC (Sulphide Stress Corrosion Cracking) in Carbon Steels and Martensitic Stainless Steels
SSC is most severe at room temperature for these materials. SSC is a low temperature phenomenon.
In general, temperatures between –45 to +65ºC should be avoided.
The SSC phenomenon is reduced at higher temperatures; a temperature above 150ºC gives very good protection.
The worst case scenario for carbon steel and martensitic steel is a wellhead shutdown, which results in reduced temperature and increased pressure.
Carbon Steels and Martensitic Stainless Steels can be used in services where temperature is continuously in excess of 65ºC.
This high-temperature exemption cannot be applied to wellhead components, which may cool down during events such as wellhead shutdowns.
It can be used for components installed in the well, such as tubing, casing, packers, or other downhole equipment.
Carbon Steels with hardness below 22 HRC are acceptable in sour service. Austenitic Stainless Steels are most recommended
(if there is NO chloride ion present). A high nickel content alloy is favorable for good resistance to sulphide stress corrosion cracking.

SCC (Chloride Stress Corrosion Cracking) in Austenitic Stainless Steels


SCC is a high temperature phenomenon. In general temperatures above +60ºC should be avoided.
Austenitic Stainless Steels have limited resistance to SCC, even at very low chloride contents and temperatures.
The susceptibility to SCC is in the nickel range of about 5% - 35% and that pure ferritics such as Types 430,439,409 are immune.
The point of maximum susceptibility is between 7% - 20% Nickel. This makes types 304/304L,316/316L,321, etc. very prone to failures.
Any alloy whose nickel content is less than 6% or higher than 30% is used for chloride service.
Therefore Carbon Steels, Ferrite Stainless Steels, Duplex Stainless Steels (alloys where nickel content is generally less than 6%) and
Nickel base alloys (alloys where nickel content is generally more than 30%) are used in chloride services.

SSC and SCC : Duplex Stainless Steels


Corrosion resistant alloys in downhole components are normally exposed to water and H2S which are basic conditions for SSC.
This is a low temperature mechanism. The downhole components may also be exposed to chlorides which are basic condition for SCC.
This is normally a high temperature mechanism.
Duplex alloys have a mixed microstructure. That may cause a synergistic effect between SSC and SCC at intermediate temperatures.
The worst temperature for duplex is 90ºC (intermediate temp). They are tested at multiple temperatures to detect synergistic effects.

Why Austenitic Stainless Steel is used for SSC (Sulphide Stress Corrosion)
The (Body-Centred Cubic) BCC crystal structure of ferritic stainless steels has relatively small holes between the metal atoms,
but channels between these holes are wide. Hydrogen has relatively low solubility in ferritic iron, but relatively high diffusion coefficient.
In contrast, holes in FCC crystal structure (Face-Centred Cubic) austenite lattice are larger, but the channels between them are smaller,
Hence materials such as austenitic stainless steel have a higher hydrogen solubility and a lower diffusion coefficient.
Consequently, it takes very much longer (years rather than days) for austenitic materials to become embrittled by hydrogen diffusing in
from the surface than it does for ferritic materials, and austenitic alloys are therefore often regarded as immune to hydrogen embrittlement.
SUMMARY
Carbon Steel contains iron and carbon. For an alloy to be called steel, the concentration of iron shall be more than 50%.
If we add 11% chromium to carbon steel it would impart good level of corrosion & oxidation resistance and is known as Stainless Steel.
The basic composition of stainless steel is Chromium (Cr) minimum 11% and Iron (Fe) more than 50%.
This is the simplest form of stainless steel known as Ferritic Stainless Steels and are magnetic.
After Chromium the next major alloying element in Ferritic Steels is Molybdenum and with some Nitrogen
Ferritic Stainless Steels contains 11% - 30% Cr and have Molybdenum, some Nitrogen and other alloying elements but NO Nickel.
They are resistant to chloride stress corrosion cracking (SCC).

If we wish to make carbon steel strong and hard, we increase carbon content and heat treat carbon steel by quenching & tempering it.
We can do the same with stainless steel - if we increase carbon content of ferritic steels then we produce Martensitic Stainless Steels
Martensitic Stainless Steels contains 11%-18% (Cr) and have carbon upto 1.20% C and some other alloying elements but NO Nickel.
Martensitic Stainless Steels are similar to ferritic group, but contains lower chromium and higher carbon than Ferritic Stainless Steels.
Martensitic Steels contains 11%-18% Cr, upto 1.20% C and small amounts of Mn, Ni, Mo.
Martensitic Stainless Steels are magnetic. Martensitic grades are strong and hard, but are brittle and difficult to form and weld.
The Martensitic grades are mainly used where hardness, strength and wear resistance are required.

Stainless Steels which contains minimum 18% (Cr) and 8% (Ni) with other alloying elements added to enhance specific characteristics
are Austenitic Stainless Steels. They are prone to chloride stress corrosion cracking but immune to sulphide stress corrosion cracking
They are non-magnetic in nature and are the most corrosion resistant group of stainless steels.
The base Austenitic Stainless Steel type is SS304 (18% Cr – 8% Ni) also known as “18-8”
Adding 2% Molybdenum to SS304 makes it SS316, Adding more Molybdenum to SS316 makes it SS317
Adding Titanium to SS304 makes it SS321
Lowering Carbon content in SS304 makes it SS304L, Lowering Carbon content in SS316 makes SS316L

Stainless Steels which contains minimum 11%(Cr) and 2%-6% (Ni) and with other alloying elements to enhance specific characteristics
are known as Duplex Stainless Steels. They are magnetic.
They are characterized by having both austenite and ferrite crystal in their microstructure, hence termed as Duplex Stainless Steel.
A ferrite matrix with islands of austenite characterizes the lower nickel grades (2% - 4% Ni)
An austenite matrix with islands of ferrite characterizes the higher nickel range (4% - 6% Ni).
When the matrix is ferrite, the alloys are resistant to chloride stress corrosion cracking.
When the matrix is austenitic, the alloys are sensitive to chloride stress corrosion cracking.

Sulphide Stress Corrosion Cracking


SSC is at its maximum from ambient temperature up to 65°C; above 65°C sulphide stress corrosion cracking starts to reduce.
For sour services (containing H2S) Austenitic Stainless Steels are used for all temperatures. (service should have no chloride ions Cl-)
Carbon Steels or Ferritic Stainless Steels can also be used at high temperature (generally above 150°C).
Carbon Steels or Ferritic Stainless Steels cannot be used at ambient temperature for sour services.
Chloride Stress Corrosion Cracking
Chloride Stress Corrosion Cracking can occur at all temperatures, but at higher temperatures (>60°C) it increases as the temperature increases.
For Chloride services Austenitic Stainless Steels cannot be used.
Carbon Steels or Ferritic Stainless Steels are used for chloride service.
Nickel containing stainless steel (Austenitic Stainless Steels) is especially susceptible to chloride induced stress corrosion cracking.
The maximum susceptibility is in the range of 7% - 20% Nickel. This makes Austenitic Stainless Steels very prone to failure in chloride service,
while pure ferritic stainless steels are immune to Chloride Stress Corrosion Cracking because they do not contain nickel (NO Nickel).
Any alloy whose nickel content is less than 6% or higher than 30% can be used for chloride services.
Therefore Carbon Steels, Ferrite Stainless Steels, Duplex Stainless Steels (alloys where nickel content is less than 6%) and
Nickel base alloys (alloys where nickel content is more than 30%) are used in chloride services.
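The nickel-content rule of thumb stated above can be captured in a few lines of Python; the 6% and 30% limits come from the text, while the function name and the nominal nickel contents in the example are illustrative assumptions.

def suitable_for_chloride_service(nickel_pct):
    """Rule of thumb from the text: alloys with less than 6% or more than 30% nickel
    resist chloride stress corrosion cracking; roughly 7% - 20% Ni is the worst range."""
    return nickel_pct < 6.0 or nickel_pct > 30.0

# Illustrative nominal nickel contents only
for alloy, ni in [("Carbon steel", 0.0), ("SS316 (austenitic)", 10.0),
                  ("Duplex stainless steel", 5.0), ("Nickel base alloy", 60.0)]:
    verdict = "suitable" if suitable_for_chloride_service(ni) else "avoid"
    print(f"{alloy} ({ni}% Ni): {verdict} for chloride service")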
NICKEL ALLOYS
By definition, in order for a stainless steel to be called stainless steel it must contain a minimum of 50% iron.
The iron content in nickel based alloys is considerably less than 50%.
Within nickel based alloys there are 4 classifications.
Group A is nickel-copper alloys such as Monel 400
Group B is nickel-chromium alloys such as Hastelloy C-22 and C-276
Group C is Nickel-Molybdenum alloys such as Hastelloy B2, B3, B4
Group D is Precipitation-hardening alloys such as Monel K-500 and Inconel 718.

The range of nickel based alloys available are used for their resistance to corrosion and retention of strength at elevated temperatures.
Many severe corrosion problems can be solved through the use of these alloys. However, they are not universally corrosion resistant.
Nickel based alloys are very resistant to corrosion in alkaline environments, neutral chemicals and many other natural environments.
In addition, many nickel based alloys show excellent resistance to pitting, crevice corrosion and chloride stress corrosion cracking.

ALUMINIUM ALLOYS
The main constituents are aluminium and copper, with some other elements added to enhance specific characteristics.
LM25 – A general purpose high strength casting alloy used where good mechanical properties are required, resistant to corrosion.
Typically used in food, chemical, marine, electrical and automobile industry.
LM6 – Suitable for marine ‘on deck’ castings, water-cooled manifolds and jackets, automobile industry, meter cases and switch-boxes.
LM6 offers excellent corrosion resistance and is suitable for castings that are to be welded. Also ideal for pump parts, paint industry.
LM13 – A low expansion piston alloy which is hard wearing with excellent bearing properties.
Commonly used for both diesel and petrol engine pistons.
L99 – General aerospace alloy, offers high strength and corrosion resistance. This standard includes full heat treatment.
LM4 – General engineering applications, suited for instrument cases, tool handles and where moderate mechanical properties are required.
Suited to thin forms and where castings are required to be pressure tight. LM4 castings are suited to relatively high static loadings.
L169 – This is the highest strength alloy, used for the most critical and demanding applications.
99% Pure Aluminium – Pure aluminium is ideal for cookware and catering equipment.
PID CONTROLLER
PID controllers are named after the Proportional, Integral and Derivative control actions.
They are used in the vast majority of automatic process control applications in industry today.
PID controllers are generally responsible for regulating flow, temperature, pressure, level and a host of other industrial process variables.
This section reviews the application of PID controllers, explains the P, I and D control modes and highlights three basic controller structures used in industry.

Manual Control: Without automatic controllers, all regulation tasks would have to be done manually.
Example: To keep the temperature of water discharged from an industrial gas-fired heater constant, an operator has to watch the temperature gauge
and adjust the gas control valve accordingly (Figure 1). If the water temperature becomes too high, the operator has to close the gas control valve
to bring the temperature back to the desired value. If the water becomes too cold, he has to open the valve again.

To relieve our operator from the tedious task of manual control, we automate the controls, that is by installing a PID controller (Figure2).
The controller has a Set Point (SP) that the operator can adjust to the desired temperature. We also have to automate the control valve by
installing an actuator (and perhaps a positioner) so that the Controller's Output (CO), or Manipulated Variable (MV), can change the valve's position.
And finally, we will provide the controller with an indication of temperature or Process Variable (PV) by installing a temperature transmitter.

The PV and CO are mostly transmitted via 4-20 mA signals. So, when everything is up and running, the installed PID controller will compare
the process variable to its desired set point and then calculate the difference between the two signals, which is generally called the Error (E).
Then, based on the error, a few adjustable settings and its internal structure, the controller calculates an output that positions the control valve.
If the temperature is above its set point, the controller will close the valve, and vice-versa.
P,I,D Control Modes : A PID controller has proportional, integral and derivative control modes.
These modes each react differently to the error and also the degree of control action is adjustable for each mode.

Proportional Control : The proportional control mode changes the controller output in proportion to the error (Figure3).
If the error rises, the controller output rises; if the error falls, the controller output falls, and vice-versa. But if the error is constant, the controller output is also constant.
Controller Gain (Kp), also referred to as the PID controller's P-setting or proportional setting, is used to change the response of the controller.
The control action is proportional to controller gain x error. A higher controller gain will increase the amount of output action, and so will a larger error.
Proportional control alone has a large drawback - Offset. Offset is a sustained error that cannot be eliminated by proportional control alone.
For example, let us consider controlling the water level in the tank shown in Figure 4 with a proportional controller only.
As long as the flow out of the tank remains constant, the level (which is our process variable in this case) will also remain at its set point.
But if the operator increases the flow out of the tank, the tank level will begin to decrease due to the imbalance between inflow and outflow.
While the tank level decreases, the error increases and our proportional controller increases the controller output in proportion to this error (Figure 5).
Consequently, the valve controlling the flow into the tank opens wider and more water flows into the tank. As the level continues to decrease,
the valve continues to open until it gets to a point where the inflow matches the outflow. At this point the tank level remains constant and so does the error.
Then, because the error remains constant, our proportional controller will keep its output constant and the control valve will hold its position.
The system now remains in balance with the tank level remaining below its set point. This residual error is called Offset.
If the controller gain is small, the system response is slower and the offset is larger.
If the controller gain is large, the system response is faster and the offset is smaller.
Although controllers use controller gain (Kp) as the proportional setting, some controllers use Proportional Band (PB), which is expressed in %.
Table 1 shows the relationship between Kp and PB.

With a Proportional controller the offset will remain until the operator manually applies a bias to the controller's output to remove the offset.
It is said that the operator has to manually "Reset" the controller. Or…we can add Integral action to our controller.
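The offset behaviour described above can be reproduced with a toy simulation. The Python sketch below models the tank level example with a proportional-only controller; all numbers (set point, bias, tank area, outflow) are made up for illustration, and the result simply shows that a larger gain reduces, but never eliminates, the offset.

def simulate_p_only(kp, outflow, steps=5000, dt=1.0):
    """Toy tank level loop: inflow = bias + Kp*error, level integrates (inflow - outflow)/area."""
    sp, level, bias, area = 50.0, 50.0, 10.0, 100.0   # arbitrary units, level starts at set point
    for _ in range(steps):
        error = sp - level                  # low level -> positive error -> more inflow
        inflow = bias + kp * error          # proportional-only controller output
        level += (inflow - outflow) * dt / area
    return sp - level                       # residual error (the offset)

# The operator increases the outflow from the bias value of 10 to 14 units.
for kp in (1.0, 2.0, 10.0):
    print(f"Kp = {kp:4.1f}  ->  offset = {simulate_p_only(kp, outflow=14.0):.2f}")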
Integral Control
The concept of manual reset as described in proportional controller led to the development of automatic reset or Integral Control.
The integral control mode of controller produces long-term corrective change in controller output, driving the constant error or offset to zero
Integral action appears as a ramp of which the slope is determined by the size of the error, the controller gain and the Integral Time (Ti),
also called the I-setting of the controller (Figure 6).

Most controllers use integral time in minutes as the unit for integral control, but others use integral time in seconds,
or Integral Gain in Repeats/Minute or Repeats/Second. Table 2 compares the different types of integral units.
Integral control mode eliminates the offset or the constant error produced in proportional control mode.
Figure 7 shows the same level control, but this time with a PI controller. A PI controller simply adds together the outputs of the P and I modes of the controller.
The integral action raises the controller output to bring the level back to its set point, thus eliminating the offset associated with the P-controller.

Derivative Control
The third control action in a PID controller is Derivative. Derivative control is rarely used in controllers, even though it gives the fastest system response,
because it is very sensitive to measurement noise and it makes tuning very difficult if trial-and-error methods are applied.
Nevertheless, derivative control can make a control loop respond faster and with less overshoot from the desired set point.
The derivative control mode produces an output based on the rate of change of the error (Figure 8).
Derivative action is sometimes called Rate. Its action is dependent on the rate of change (or slope) of the error.
It has an adjustable setting called Derivative Time (Td), which is the D-setting of the controller.
Two units are used for the derivative setting of a controller: minutes or seconds. Derivative control is said to have predictive or anticipative capabilities.
Technically this is not true, but PID control does provide more control action sooner than is possible with P or PI control.
To see this, compare initial controller response of Figure 9 (PID control) with that in Figure 7 (PI control).
The derivative control reduces the time it takes for the level to return to its set point.
With derivative control the controller output appears noisier. This is due to derivative control mode's sensitivity to measurement noise.
A controller compares its measurement to its set point and based on the difference between them (error), generates a correction signal to
the final control element (e.g. control valve) in order to eliminate the error.
A controller response to an error determines its character, which significantly influences the performance of the closed-loop control system.
There are different types of controller modes. They include on/off, floating, proportional (P), integral (I), differential (D), and many others.
Direct acting controller - output increases when the measurement rises; Reverse acting controller - output decreases when the measurement rises.

The proportional control mode is the simplest linear control algorithm, characterized by a constant relationship between controller input and output.
The adjustable parameter of the proportional mode is called the proportional gain (Kp). It is usually adjusted between 0 and 10.
It is frequently expressed in terms of percent proportional band (PB), which is inversely related to the proportional gain (Kp): Kp = 100/PB
As the name “proportional” suggests, the correction generated by the proportional control mode is proportional to the error.
If the error rises, the controller output rises; if the error falls, the controller output falls, and vice-versa. But if the error is constant, the controller output is also constant.
The proportional controller responds only to the present. It cannot consider the past history of the error or the possible future consequences of an error trend.
The equation below describes the operation of the proportional controller:  m = Kp.e + b  =  (100/PB).e + b
m = the output signal to the manipulated variable (control valve)
Kp = the gain of the controller, e = the deviation from set point or error, PB = the proportional band (100/Kp),
b = the live zero or bias of the output, which in pneumatic systems is usually 0.2 bar and in analog electronic loops is 4 mA.
When a small error results in a large response, the gain (Kp) is said to be large or the proportional band (PB) is said to be narrow.
Inversely, when a large error causes only a small response, the controller is said to have a small gain or a wide proportional setting.
“Wide bands” (high PB) correspond to less sensitive controller settings, “narrow bands” (low PB) correspond to more sensitive controller settings.
The main limitation of proportional control is that it cannot keep the controlled variable on set point. There is always an offset.
It is evident that by increasing the gain, one can reduce (but not eliminate) the offset.
If controller gain is very high, the presence of offset is no longer noticeable, and it seems as if the controller is keeping the process on set point.
Unfortunately, most processes become unstable if their controller is set with such high gain. The only exceptions are very slow processes.
For this reason plain proportional control is limited to slow processes that can tolerate the high controller gain (narrow proportional bands).
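
As a minimal sketch of the proportional equation above, the snippet below computes a P-only output in Python; the gain, bias and 0-100% output range are illustrative assumptions, not values from any particular controller.

    def p_only(sp, pv, kp=2.0, bias=50.0):
        """Proportional-only (reverse-acting) controller: m = Kp*e + b, clamped to 0-100%."""
        error = sp - pv                      # reverse acting: output falls as PV rises above SP
        return max(0.0, min(100.0, kp * error + bias))

    # Proportional-band form of the same gain: Kp = 100 / PB
    pb = 50.0                                # 50% proportional band
    print(p_only(sp=60.0, pv=55.0, kp=100.0 / pb))   # a persistent 5% error gives 60% output, never exactly SP (offset)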

Integral control mode is sometimes called reset mode because after a load change it returns the controlled variable to the original set point and
eliminates the offset or constant error, which a plain proportional controller cannot do. The mathematical expression of an integral-only controller is

m = (1/Ti) ∫e dt + b

while the mathematical expression for a proportional-integral (PI) controller is

m = Kp [ e + (1/Ti) ∫e dt ] + b
The integral mode has been introduced in order to eliminate the offset that plain proportional control cannot remove.
The integral control mode of controller produces long-term corrective change in controller output, driving the constant error or offset to zero
Integral control eliminates the offset or the constant error produced in proportional control mode.
A PI controller simply adds together the outputs of the P & I modes of the controller.
The integral action raises the controller output to bring the level back to its set point, thus eliminating the offset associated with the P-controller.

The meaning of the term “repeats/minute” (or its inverse) can be understood by referring to the Figure above.
Here in middle section of error curve, the error is constant and therefore the proportional correction is also constant (A).
If the length of duration of middle section is one integral time (Ti), the integral mode is going to repeat proportional correction by the end of
first integral time (B = 2A). The integral mode will keep repeating (adding “A” amount of correction) after the passage of each integral time
during which error still exists. The shorter the integral time, the more often proportional correction is repeated (the more repeats/minute),
and thus the more effective is the integral contribution.
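
The "repeat" idea can be seen in a short numerical sketch: with a constant error, the integral contribution grows by one proportional correction (Kp·e) every integral time Ti. The ideal PI form and the numbers below are illustrative assumptions only.

    def pi_scan(sp, pv, integral, kp=2.0, ti=60.0, dt=1.0, bias=50.0):
        """One scan of an ideal (non-interacting) PI controller.
        `integral` carries the accumulated integral contribution between scans."""
        error = sp - pv
        integral += (kp / ti) * error * dt       # adds one proportional correction (Kp*e) per Ti seconds
        return kp * error + integral + bias, integral

    # With a constant 5% error, Kp = 2 and Ti = 60 s:
    integral = 0.0
    for _ in range(120):                          # simulate two integral times
        out, integral = pi_scan(sp=60.0, pv=55.0, integral=integral)
    # after 60 s the integral term equals Kp*e = 10% (one "repeat"); after 120 s it is 20% (two repeats)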

Derivative control mode : The proportional mode considers the present state of error and integral mode looks at the past history of error,
while the derivative mode anticipates the future values of the error and acts on that prediction.
Derivative control mode produces an output based on rate of change of error. Derivative action is sometimes called Rate
The figure on right describes the derivative response to the same error curve that has been used earlier.
In the middle portion of illustration where the error is constant, the derivative contribution to the output signal (to the control valve) is zero.
This is because the derivative contribution is based on the rate at which the error is changing and in this region that rate of change is zero.
As illustrated in right Figure, when the error is rising, the derivative contribution is positive and is a function of the slope of the error curve.
The unit of derivative setting is the derivative time (Td). This is length of time by which the D-mode (derivative mode) “looks into the future.”
In other words, if a derivative mode is set for a time Td, it will generate a corrective action immediately when the error starts changing, and
the size of that correction will equal the size of the correction that the proportional mode would have generated Td time later.
The longer the Td setting, the further into the future the D-mode predicts and the larger its corrective contribution.
When the slope of the error is positive (measurement is moving up relative to set point), the derivative contribution will rise. On the right side of the Figure, the
error is still positive (measurement is above set point), but derivative contribution is already negative, as it is anticipating future occurrence
when the loop might overshoot in the negative direction and is correcting for that future occurrence now.
Because the derivative control mode acts on the rate at which the error signal changes, it can also cause unnecessary upsets.
Ex. it will react to sudden setpoint change made by operator. It will amplify noise & will cause upsets when signal changes occur in steps.
In such situations precautions are required. It is necessary to change control algorithm so that it is insensitive to sudden set point changes.
It is necessary to make sure that the derivative contribution to output signal going to the control valve will not respond to sudden changes,
but only to the rate at which the measurement changes. This is aimed at making the derivative mode act on the measurement only, not on the error.
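
A common way to achieve this is to compute the derivative term on the measurement rather than on the error, as sketched below (ideal form, illustrative variable names); a set-point step then produces no derivative kick.

    def d_on_error(error, prev_error, kp, td, dt):
        """Derivative on error: reacts to sudden set-point changes (derivative kick)."""
        return kp * td * (error - prev_error) / dt

    def d_on_measurement(pv, prev_pv, kp, td, dt):
        """Derivative on measurement: ignores set-point steps and responds only to the
        rate of change of PV (the sign is flipped because e = SP - PV)."""
        return -kp * td * (pv - prev_pv) / dt
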
Controller Structures : Controller manufacturers integrate the P,I,D modes into three different arrangements or controller structures.
These are called Series, Ideal and Parallel controller structures.
Some controller manufacturers allow you to choose between different controller structures as a configuration option in controller software.

Series : This very popular controller structure is also called Classical, Real or Interacting structure.
The original pneumatic and electronic controllers had this structure and we still find it in most PLCs and DCSs today.
Most of the controller tuning rules are based on this controller structure.

Ideal : Also called the Non-Interacting, Standard or ISA structure, this controller structure was popularized with digital control systems.
If no derivative is used (i.e. Td= 0), the series and ideal controller structures become identical.

Parallel : Academic-type textbooks generally use the parallel form of PID controller, but it is also used in some DCSs and PLCs.
This structure is simple to understand, but difficult to tune. The reason is that it has no controller gain, but has a proportional gain instead.
Tuning should be done by adjusting all the settings simultaneously. Try not to use this structure if possible.
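
For reference, a hedged sketch of the ideal (ISA) and parallel forms and how their settings relate; the series (interacting) form is omitted, and the conversion shown is the standard textbook one rather than any vendor's exact algorithm.

    def pid_ideal(e, int_e, de_dt, kc, ti, td):
        """Ideal / ISA / non-interacting form: u = Kc*(e + (1/Ti)*int(e) + Td*de/dt)."""
        return kc * (e + int_e / ti + td * de_dt)

    def pid_parallel(e, int_e, de_dt, kp, ki, kd):
        """Parallel form: u = Kp*e + Ki*int(e) + Kd*de/dt, with three independent gains."""
        return kp * e + ki * int_e + kd * de_dt

    # The two forms give the same output when Kp = Kc, Ki = Kc/Ti and Kd = Kc*Td,
    # which is why retuning Kc in the ideal form scales all three modes at once.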

Conclusion
With its proportional, integral, derivative modes, the PID controller is the most popular method of control by a great margin.
If properly tuned, the three control modes complement each other in the control effort.
It is important to note, however, that there are quite a few different and conflicting options used for P, I, D units and for controller structure.
The method of tuning must be adjusted to suit the controller's functional characteristics.
CASCADE CONTROL
In single-loop control there is one measured signal & controller’s set point is set by an operator and its output drives a final control element.
Ex: a temp controller driving a control valve to keep temp at its setpoint, level controller driving a control valve to keep level at its set point.

Cascade controls make use of multiple control loops that involve multiple signals for one manipulated variable.
In cascade control arrangement, there are two or more controllers of which one controller’s output drives the set point of another controller.
The controller driving the set point (level controller / Temperature controller in the example) is called primary, outer or master controller.
The controller receiving the set point (flow controller in both examples shown below) is called the secondary, inner or slave controller.

To illustrate how cascade control works and why it is used, a typical control system is analyzed.
This control system is one that is used to adjust the amount of steam used to heat up a process fluid stream in a heat exchanger.
In single loop control, the fluid is to be heated up to a certain temperature by steam.
This process is controlled by temperature controller (TC1) which measures the temperature of exiting fluid and then adjusts the valve (V1)
to correct the amount of steam needed by the heat exchanger to maintain specified temperature.
Figure on the right shows the flow of information to and from the temperature controller.
Initially this process seems sufficient.
However, the above control system works on the assumption that a constant supply of steam is available and that the steam flow to the heat exchanger
depends solely on how far the valve is opened.
If the flow rate of the steam supply changes (i.e. pipeline leakage, clogging, drop in boiler power), the controller will not be aware of it.
The controller opens valve to the same degree expecting to get a certain flow rate of steam but will in fact be getting less than expected.
The single loop control system will be unable to effectively maintain the fluid at the required temperature.

Implementing cascade control in the above loop will allow us to correct for fluctuations in flow rate of steam going into the heat exchanger
as an inner part of a grander scheme to control the temperature of the process fluid coming out of the heat exchanger.
A basic cascade control uses two control loops; in the case presented below, one loop (the outer loop, or master loop, or primary loop)
consists of TC1 reading the fluid outlet temperature, comparing it to the TC1 set point (which will not change suddenly) and changing the FC1 set point accordingly.
The other loop (the inner loop, or slave loop, or secondary loop) consists of FC1 reading the steam flow, comparing it to the FC1 setpoint
(which is controlled by the outer loop as explained above) and changing the valve opening as necessary.
The main reason to use cascade control in this system is that the temperature has to be maintained at a specific value.
The valve position does not directly affect temperature (consider an upset in steam input; the flow rate will be lower at same valve setting).
Thus, the steam flow rate is the variable that is required to maintain the process temperature.
The flow loop is chosen as the inner loop because it is prone to higher-frequency variation.
The rationale behind this example is that the steam in flow can fluctuate, and if this happens, the flow measured by FC1 will change faster
than the temperature measured by TC1, since it will take a finite amount of time for heat transfer to occur through the heat exchanger.
Since the steam flow measured by FC1 changes at higher frequency, we chose this to be the inner loop.
FC1 controls fluctuations in flow by opening/closing the valve & TC1 controls fluctuations in temperature by increasing/decreasing the FC1 set point.
Thus, cascade control uses two inputs to control the valve and allows the system to adjust to both variable fluid flow and steam flow rates.

Primary and Secondary Loops


Loop 1 is known as the primary loop, outer loop, or master loop, whereas loop 2 is known as the secondary loop, inner loop, or slave loop.
To identify the primary and secondary loops, one must identify the control variable and the reference variable.
In this case, the control variable is the temperature and the reference variable is the steam flow rate.
Hence, the primary loop (loop 1) involves the control variable and the secondary loop (loop 2) involves the reference variable.
Please note that the user sets the set point for loop 1 while the primary controller sets the set point for loop 2.
The master controller responds to SLOW changes in the system, while the slave controller responds to the FAST changes in the system.
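
The wiring of the heat-exchanger cascade described above can be sketched as follows; the minimal PI class, the tuning numbers and the 0-100% signal ranges are illustrative assumptions only.

    class PI:
        """Minimal positional PI used only to illustrate the cascade wiring."""
        def __init__(self, kp, ti):
            self.kp, self.ti, self.integral = kp, ti, 0.0
        def update(self, sp, pv, dt=1.0):
            e = sp - pv
            self.integral += (self.kp / self.ti) * e * dt
            return max(0.0, min(100.0, self.kp * e + self.integral))

    tc1 = PI(kp=1.0, ti=300.0)   # primary / master / outer loop: slow temperature control
    fc1 = PI(kp=0.5, ti=10.0)    # secondary / slave / inner loop: fast steam-flow control

    def cascade_scan(temp_sp, temp_pv, steam_flow_pv):
        flow_sp = tc1.update(temp_sp, temp_pv)        # master output becomes the slave set point
        return fc1.update(flow_sp, steam_flow_pv)     # slave output drives the steam valve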

When to use cascade control


There must be a clear relationship between the measured variables of the primary and secondary loops.
The secondary loop must have influence over the primary loop.
Response period of the primary loop has to be at least 4 times larger than the response period of the secondary loop.
The major disturbance to the system should act in the primary loop. The primary loop should be able to have a large gain, Kc.
Example, a level controller driving the set point of a flow controller to keep the level at its set point.
The flow controller in turn, drives a control valve to match the flow with the set point the level controller is requesting
There are several advantages of cascade control; the most important is to isolate a slow control loop from non-linearities in the final control element.
The relatively slow level control loop is isolated from any control valve problems by having fast flow control loop deal with these problems.
Imagine that control valve has a stiction problem. Without the flow control loop, the level control loop (driving sticky valve) will continuously
oscillate in a stick-slip cycle with a long (slow) period, which will quite likely affect the downstream process.
With the fast flow control loop in place, the sticky control valve will still cause oscillation, but at a much shorter (faster) period due to the inherently fast
dynamic behavior of the tuned flow loop. It is likely that these fast oscillations will be attenuated by the downstream process without much adverse effect.
Or imagine that the control valve has a non-linear flow characteristic.
This requires that the control loop driving it be detuned to maintain stability throughout the possible range of flow rates.
If the level controller directly drives the valve, it must be detuned to maintain stability - possibly resulting in poor level control.
In cascade control arrangement with flow control loop driving the valve, the flow loop will be detuned to maintain stability.
This results in relatively poor flow control, but as the flow loop is dynamically so much faster than the level loop, the level control loop is hardly affected.

In cascade architecture, inner secondary PV2 serves as early warning process variable.
Given this, essential design characteristics for selecting PV2 include that:
 It be measurable with a sensor.
 The same Final Control Element (e.g., valve) used to manipulate PV1 also manipulates PV2.
 The same disturbances that are of concern for PV1 also disrupt PV2.
 PV2 responds before PV1 to disturbances of concern and to Final Control Element manipulations.
Since PV2 sees the disruption first, it provides "early warning" that a disturbance has occurred and is heading towards PV1.
The inner secondary controller begins corrective action immediately. And since PV2 responds first to final control element manipulations,
disturbance rejection can be well underway even before primary variable PV1 has been substantially impacted by the disturbance.
In cascade, control of the outer primary process variable PV1 benefits from the corrective action applied to the upstream early-warning measurement PV2.

When Should Cascade Control be Used?


Cascade control should always be used if you have a process with relatively slow dynamics (like level, temperature, composition, humidity)
and a liquid or gas flow, or some other relatively-fast process, has to be manipulated to control the slow process.
Ex : changing cooling water flow rate to control condenser pressure or changing steam flow rate to control heat exchanger outlet temp.
In both cases, flow control loops should be used as inner loops in cascade arrangements.

Does Cascade Control Have any Disadvantages?


Cascade control has three disadvantages.
 It requires an additional measurement (usually flow rate) to work.
 There is an additional controller that has to be tuned.
 The control strategy is more complex – for engineers and operators alike.
These disadvantages have to be weighed up against the benefits of the expected improvement in control to decide if cascade control
should be implemented or not.

When Should Cascade Control Not be Used?


Cascade control is beneficial only if the dynamics of inner loop are fast compared to those of the outer loop.
Cascade control should not be used if the inner loop is not at least three times faster than the outer loop.
As a rough estimate, the process dead time of the inner loop must be shorter than one third of dead time in the outer loop.
In addition to diminished benefits of cascade control when inner loop is not significantly faster than outer loop, there is risk of interaction
between the two loops that could result in instability – especially if the inner loop is tuned very aggressively.

How Should Cascade Controls be Tuned?


A cascade arrangement should be tuned starting with the innermost loop.
Once that one is tuned, it is placed in cascade control, or external set point mode, and then the loop driving its set point is tuned.
Make sure that control loop settling time (time it takes to eliminate 95% of error) is at least three times slower with each successive loop.
If this is not the case (without some serious detuning), the benefit of cascade control is minimal and its use should be questioned.

A cascade system needs to be set up properly in order to function. The inner loop should be tuned before the outer loop.
The following are the suggested steps for starting up a cascade system (both controllers start in automatic mode).
Place the secondary controller in manual mode. This will break the cascade and isolate the secondary controller so that it can be tuned.
Tune the secondary controller as if it were the only control loop present.
Return the secondary controller to the remote set point and/or place the primary controller in manual mode.
This will isolate the primary controller so that it can be tuned.
Tune primary control loop by manipulating the set point to secondary controller.
If the system begins to oscillate when primary controller is placed in automatic, reduce the primary controller gain.
FEEDFORWARD CONTROL
Feedforward control is useful to control those systems where there is a possibility for major process disturbances to occur.
In such systems, addition of feedforward control can improve the process performance as compared to the use of only a feedback control.
In feedforward, process disturbances are measured before entering process & a process model is used to predict effect of this disturbance.
The feedforward controller will then implement the process change to offset the predicted disturbance before the process is fully impacted
(i.e. before the deviation from set point caused by the disturbance is fully manifested in the process outputs).
Therefore a feedforward control system scheme constantly predicts what the measured output will be as process disturbances occurs and
works to keep the outputs as close to the setpoint as possible.

However, one implication of process models for feedforward control is that a feedforward system cannot be used without feedback control.
Feedback control is needed with feedforward control because no process model can be perfect, and the perfect process model would be
needed in the theoretical case of controlling a process via only feedforward control.
In other words, the feedback control acts to compensate for any differences between the ideal process model and the real process.
To implement feedforward control, it is necessary to understand the added costs which result from implementing feedforward control in a system.
As all feedforward control systems require feedback control system to both track setpoint changes and suppress unmeasured disturbances
which exist in real processes, the implementation of feedforward control is an added expense over a purely feedback controlled system.
The final decision on whether or not to implement feedforward control, from an industrial standpoint, comes down to the question “is the cost of this system justified?”
If the answer to this question is yes, then the justification should involve greater production capacity and/or higher quality.

Feedback control is the action of moving manipulated variable (m) in response to deviation or error (e) between the controlled variable (c)
and its set point (r) in such a way as to reduce or if possible to eliminate the error.
Feedback control cannot anticipate & prevent errors, because it can only initiate its corrective action after an error has already developed.
Because of dynamic lags & delays in response of controlled variable to manipulation, some time will elapse before correction takes effect.
During this interval, the deviation will tend to grow, before eventually diminishing.
Feedback control cannot therefore achieve perfect results; its effectiveness is limited by the responsiveness of process to manipulation.
By contrast, the feed-forward corrections can be initiated as soon as any change is detected in a load variable as it just enters the process,
if the feedforward model is accurate & load dynamics are favorable, the upset caused by the load change is canceled before it affects the controlled variable.
As sensors are imperfect, feedforward loops are usually corrected by feedback trimming.

Definitions
FF (Feedforward) - Disturbances are measured & then manipulated variable is changed to counter the effect of disturbance on system.
FB (Feedback) – Disturbances go through the system and the controller does not respond until the controlled variable changes.
Load Change – The change in a disturbance that affects the system.
Static Feedforward Control — feedforward that applies a fixed (steady-state) correction and cannot provide dynamic compensation in a process.
Static Feedforward Controller is used when process displays same dynamic response to changes in manipulated variable & disturbance.

FF control measures disturbances and changes to the manipulated variable (MV) to stay on the set point.
FF control will ideally stop a disturbance’s effect on the process, but because no model or system is perfect, most of the time feedback control
is used in conjunction with feedforward control.
Because most of the time FF control is not ideal, it will make a FB controller do less work to fix errors from the set point.
This is because the disturbance is already accounted for and manipulated variable changed before feedback control kicks in.
FB controller only has to correct the ‘error’ in FF control.
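
A minimal sketch of this feedforward-plus-feedback arrangement: the feedforward term computed from the measured disturbance is simply added to the feedback (PI) output, leaving the feedback controller to trim only the residual error. Kff and the signal names are assumptions for illustration.

    def ff_plus_fb(sp, pv, disturbance, integral, kp=1.0, ti=60.0, kff=0.8, dt=1.0):
        """Combined controller output = feedback (PI) + static feedforward correction."""
        e = sp - pv
        integral += (kp / ti) * e * dt            # feedback integral contribution
        fb = kp * e + integral
        ff = kff * disturbance                    # static feedforward from the measured disturbance
        return max(0.0, min(100.0, fb + ff)), integral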

Feedforward Advantages
Compensates for disturbance before effect is seen on process
Does not introduce instability into a closed loop response.
Works well for slow processes or processes with a lot of deadtime.
Feedback Advantages
Can eliminate offset (when used with integral control).
Not reliant on process models.
Does not require measurements of disturbances.
Simple to implement.

Feedforward Disadvantages
Does not completely eliminate offset.
Dependent on process models.
Requires disturbance model and extra sensor.
Does not compensate for unmeasured disturbances.
Feedback Disadvantages
Disturbances significantly impact systems before control action is initiated.
When used improperly, can cause instability due to nonlinearity.
Very slow systems or systems with a lot of dead time do not work well.

Common Applications
Since feedforward control systems rely on process models and no process model is perfect, the use of solely feedforward control to control
a real world process is not feasible. However, as discussed above, feedforward control can be a very effective control method
when augmented with feedback. In real-world applications, feedforward control systems can be useful in processes with two key qualities:
A highly accurate model of the process is available and the impact of the disturbances on the process are well known.
The process disturbances can significantly and negatively impact the process outcome.
Therefore, it is desirable to prevent disturbances from impacting the process as opposed to responding after the process is impacted.
Feedforward & cascade control are often confused because of their similarities: two measured variables, one manipulated variable, one independent set point.
But cascade systems control both measured variables, with the master determining the set-point of the slave.
In contrast, feedforward & feedback corrections independently adjust control valve and there is no control applied to feedforward variable.
To summarize, feedback control is first choice & commonly used type of control, but when controller must operate with low values of gain
and reset for optimum settings, disturbances can cause large upsets and the system may take a long time to recover.
Use feedforward control to accommodate the major disturbances; but it will be successful only if the major disturbance can be sensed and
a correction made quickly in the manipulated variable before it affects the system.
Feedforward control is successful in applications where feedback control loop is slow responding and feedforward path is fast responding.

Feedback Control
Feedback control is typically done with PID controllers (proportional + integral + derivative).
The process variable of interest is measured and the controller’s output is calculated based on the process variable and its set point.
Although external disturbances often affect the process variable, they are not used directly for control.
Instead, if a disturbance affects the process variable, the control action is based on the process variable and not the disturbance.
As an example, the outlet temperature of a heat exchanger can be measured and used for feedback control.
The feedback controller will manipulate the steam flow to heat exchanger and keep the outlet temperature as close to set point as possible.

Feedback Control and Disturbances


Many process control loops are affected by large disturbances. The feedback control system can act only on the results of a disturbance,
which means feedback control cannot do anything until the process variable has been affected by the disturbance.
In the example of the heat exchanger above, changes in process flow rate will be a major source of disturbances to the outlet temperature.
If the process flow rate through the heater is increased, the original steam flow rate will not be enough to heat up the increased amount of process liquid
& outlet temperature will decrease. Feedback control will eventually increase steam flowrate & bring outlet temperature back to its setpoint,
but not until there has been a significant deviation in temperature.

Feedforward Control and Disturbances


In contrast to feedback control, feedforward acts the moment disturbance occurs, without having to wait for a deviation in process variable.
This enables a feedforward controller to quickly and directly cancel out the effect of a disturbance.
To do this, a feedforward controller produces its control action based on a measurement of the disturbance.
When used, feedforward control is almost always implemented as an add-on to feedback control. The feedforward controller takes care of
major disturbance and feedback controller takes care of everything else that might cause the process variable to deviate from its set point.
In our example of the heat exchanger, in which major disturbances come from changes in process flow rate, the latter can be measured &
used for adjusting the steam flow rate proportionally. This is done by the feedforward controller.

Implementing Feedforward Control


Many PID controllers have an external connection for adding an input from a feedforward controller.
Otherwise the output of the feedforward controller can be externally added to the output of the feedback controller.
Review the controller documentation and take special care with scaling the feedforward signal.
Many PID controllers expect the feedforward signal to be scaled between -100% and +100%.
Feedforward control and feedback control are often combined with cascade control, so as to ensure that their control actions manipulate the
physical process linearly, eliminating control valve nonlinearities and mechanical problems.
If several major disturbances exist, a feedforward controller can be implemented for each of them. The outputs of all feedforward controllers
can be added together to produce one final feedforward signal. Only consider those disturbances which meet the criteria given below:
Measurable – if it cannot be measured, it cannot be controlled
Predictable effect on process variable – most disturbances will fall in this class
Occur so rapidly that the feedback control cannot deal with them as they happen.

Feedforward Tuning
A feedforward controller essentially consists of a lead-lag function with an adjustable gain.
A dead-time function (Ttd) can be added if the effect of disturbance has a long time delay while the control action is much more immediate.
The feedforward gain (Kff) is set to obtain the required control action for a given disturbance.
For example, it controls the ratio of steam flow to process flow in the example used previously.
The lead / lag time constants are set to get the right timing for the control action. The feedforward’s lead (Tld) will speed up the control action and
should be set equal to the process lag between the controller output and the process variable.
The feedforward’s lag (Tlg) will slow down control action & should be set equal to process lag between disturbance and process variable.
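
A discrete-time sketch of such a lead-lag feedforward element is given below. The backward-difference discretisation and the optional dead-time buffer are implementation assumptions, not a vendor algorithm; the input u is assumed to be the deviation of the measured disturbance from its normal value, and the output is added to the feedback controller's output.

    from collections import deque

    class LeadLagFF:
        """Discrete feedforward element Kff*(Tld*s + 1)/(Tlg*s + 1) with optional dead time Ttd."""
        def __init__(self, kff, tld, tlg, dt, ttd=0.0):
            self.kff, self.tld, self.tlg, self.dt = kff, tld, tlg, dt
            self.prev_u = self.prev_y = 0.0
            self.buffer = deque([0.0] * int(round(ttd / dt)))    # dead-time shift register

        def update(self, u):
            # backward-difference form of Tlg*dy/dt + y = Kff*(Tld*du/dt + u)
            y = (self.tlg * self.prev_y
                 + self.kff * (self.tld * (u - self.prev_u) + self.dt * u)) / (self.tlg + self.dt)
            self.prev_u, self.prev_y = u, y
            self.buffer.append(y)
            return self.buffer.popleft()          # delayed by Ttd; with Ttd = 0 the value passes straight through
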
There is an alternative design for the feedforward controller that makes tuning easy: using a function generator as the feedforward controller.
Before implementing feedforward, take note of feedback controller’s output and disturbance measurement at various levels of disturbance.
Use this relationship to set up the curve in the function generator.
In heat exchanger, we should tabulate temperature controller’s output and process flow rates under various steady-state production rates.
Then we program a curve in the function generator to produce the desired controller output at each of the process flow rates we measured.
CASCADE CONTROL & FEEDFORWARD CONTROL

Example, one common application of cascade control combined with feedforward control is in level control systems for boiler steam drums.
When boilers operated at low pressure, it was reasonably inexpensive to make the steam drum large.
In a large drum, liquid level moves relatively slowly in response to disturbances (it has a long time constant).
Therefore, manual or automatic adjustment of the feedwater valve in response to liquid level variations was an effective control strategy.
But as boiler operating pressures have increased over the years, the cost of building and installing large steam drums forced the reduction
of the drum size for a given steam production capacity.
The consequence of smaller drum size is reduction in process time constants or speed with which important process variables can change.
Smaller time constants mean upsets must be addressed quickly, which led to development of increasingly sophisticated control strategies.

As shown above, most boilers of medium to high pressure today use a “3-element” boiler control strategy.
The term “3-element control” refers to the number of process variables (PV) that are measured to effect control of the boiler feedwater control valve.
These measured PVs are:
▪ liquid level in the boiler drum,
▪ flow of feedwater to the boiler drum,
▪ flow of steam leaving the boiler drum.
It is critical that liquid level remain low enough to guarantee there is adequate disengaging volume above liquid & high enough to assure
that there is water present in every steam generating tube in boiler.
These requirements typically result in a narrow range in which the liquid level must be maintained.
The feedwater used to maintain liquid level in industrial boilers often comes from multiple sources & is brought up to steam drum pressure
by pumps operating in parallel. With multiple sources and multiple pumps, the supply pressure of the feedwater will change over time.
Every time supply pressure changes, the flow rate through the valve, even if it remains fixed in position, is immediately affected.
For example if the boiler drum liquid level is low, the level controller will call for an increase in feedwater flow so as to maintain liquid level.
But consider that if at this moment, the feedwater supply pressure dropped for any reason. The level controller could be opening the valve,
yet the falling supply pressure could actually cause a decreased flow through the valve and into the drum.
Thus, it is not enough for level controller to directly open/close valve, it must decide whether it needs more or less feed flow to boiler drum.
The level controller transmits its target flow as a set point to a flow controller.
The flow controller then decides how much to open or close the valve as supply pressure swings to meet the set point target.
It is a “2-element” cascade control (boiler liquid level to feedwater flow rate). By placing this feedwater flow rate in a fast flow control loop,
the flow controller will immediately sense any variations in the supply conditions which produce a change in feedwater flow.
The flow controller will adjust boiler feedwater valve position to restore flow to its set point before boiler drum liquid level is even affected.
The level controller is the primary controller (sometimes referred to as the master controller) in this cascade, adjusting the set point of the
flow controller, which is the secondary controller (sometimes identified as the slave controller).
The third element in “3-element control” system is flow of steam leaving the steam drum.
The variation in demand from steam header is most common disturbance to the boiler level control system in an industrial steam system.
By measuring the steam flow, the magnitude of demand changes can be used as a feed forward signal to the level control system.
The feed forward signal can be added into the output of level controller to adjust flow control loop set point, or can be added into the output
of the flow control loop to directly manipulate the boiler feedwater control valve.
Majority of boiler level control systems add feed forward signal into level controller output to secondary (feedwater flow) controller set point.
This approach eliminates the need for characterizing the feed forward signal to match the control valve characteristic.
Actual boiler level control schemes do not feed the steam flow signal forward directly. Instead, the difference between the outlet steam flow
and the inlet water flow is calculated. The difference value is directly added to the set point signal to the feedwater flow controller.
Therefore, if steam flow out of boiler is suddenly increased by start up of turbine, for example, the set point to the feedwater flow controller
is increased by exactly the amount of the measured steam flow increase.
Simple material balance considerations suggest that if two flow meters are exactly accurate, the flow change produced by flow control loop
will make up exactly enough water to maintain the level without producing a significant upset to the level control loop.
Similarly, sudden drop in steam demand caused by trip of a significant turbine load will produce an exactly matching drop in feedwater flow
to the steam drum without producing any significant disturbance to the boiler steam drum level control.
Of course, there are losses from the boiler that are not measured by the steam production meter.
The most common of these are boiler blow down and steam vents (including relief valves) ahead of the steam production meter.
In addition, boiler operating conditions that alter the total volume of water in boiler cannot be corrected by the feed forward control strategy.
For example, forced circulation boilers may have steam generating sections that are placed out of service or in service intermittently.
The level controller itself must correct for these unmeasured disturbances using the normal feedback control algorithm.
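
A simplified sketch of the set-point calculation described above; all flows are assumed to be in the same engineering units, and the level controller output is assumed to already be scaled as a flow demand.

    def feedwater_flow_setpoint(level_controller_out, steam_flow, feedwater_flow):
        """3-element drum level control: the feedwater flow controller set point is the
        level controller output trimmed by the (steam out - water in) imbalance."""
        feedforward = steam_flow - feedwater_flow
        return level_controller_out + feedforward
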
GAP CONTROLLER
The Gap Controller controls a digital contact output or analog output to the field, based upon a target set point & the measured process value from the field.
The output from a gap controller normally controls an on-off valve or motor, but may control any discrete device or modulating control valve.
If required this Function Block may form the secondary controller in a cascade configuration.

There are two types of Gap Controllers

Gap Action Controller : provides standard PID control action when process variable rises above high gap limit or falls below low gap limit.
When the process variable is between gap limits, the controller works with sharply reduced gain or provides no output within the gap limit.

Differential Gap Controller : provides standard on/off action when process variable rises above high gap limit or falls below low gap limit.
When the process variable is between gap limits, the controller output remains at the last value, either low or high.

Gap Action Controller


Some control loops have two conflicting objectives: keeping the process variable under control while also minimizing controller output movement.
Such a loop will have a set point, but it is more important to keep the process variable within predefined bounds than to keep it exactly at set point.
Controller manufacturers have designed a modification of the standard PID control algorithm for use on processes with such conflicting objectives.
This modification is called gap control. It works on the principle of two user-definable control regions, one for each of the two control objectives.

The first region is far from set point (outside gap) and requires strong control action to turn process around and bring it back to set point.
The normal controller settings are used outside the gap.

The second region is close to set point (inside gap) within which controller detunes itself generally for no action or reduced control action.
The detuning helps to minimize controller output movements.

In this type of gap controller, no control action or only reduced action is applied if the process variable (PV) falls inside a defined range.
If process variable (PV) strays outside the range, PID controller action is applied to drive the process variable (PV) back within the range.
The difference between regular PID and a PID with gap control is that for the gap option there is no control action or reduced control action
when the deviation between the setpoint and the process variable is within a certain limit (gap), i.e. a kind of deadband.
Thus there can be two values of gain in the PID with Gap Controller
One value of gain (generally low) applies when the PV is within the gap range & another value of gain (generally high) applies when the PV is above or below the gap range.
A default gap of 5% of measurement span shall be configured; this may be changed during FAT/Commissioning.

Gap Action Controller provides standard PID control action when process variable rises above high gap limit or falls below low gap limit.
When the process variable is between gap limits, the controller works with sharply reduced gain or provides no output within the gap limit.
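
A minimal sketch of the gap-action idea: the normal gain applies outside the gap and a sharply reduced gain (possibly zero) applies inside it. Only the proportional contribution is shown; the 5% gap and the gains are illustrative.

    def gap_action_p_term(pv, sp, gap=5.0, kp_outside=2.0, kp_inside=0.0):
        """Proportional contribution with gap action: reduced (or no) action inside +/- gap of SP."""
        error = sp - pv
        kp = kp_inside if abs(error) <= gap else kp_outside
        return kp * error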

Differential Gap Controller


Differential Gap Controller provides standard on / off action when process variable rises above high gap limit or falls below low gap limit.
When the process variable is between gap limits, the controller output remains at the last value, either low or high.
A two-position (on-off) controller that actuates when the process variable reaches the high or low value of its range (differential gap).

The controller will turn on its output when the prescribed analog high value (nominally 95% of the LAH transmitter alarm set point) is exceeded, and
the controller will turn off its output only when the PV drops below the prescribed analog low value (nominally 95% of the LAL transmitter alarm set point).
The controller output will remain off until the gap higher threshold is again exceeded. Refer self-explanatory figure below.
The digital output configuration can be inverted as required.
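
The differential-gap (on-off with hysteresis) behaviour can be sketched as follows; the limit values and output sense are illustrative and, as noted above, can be inverted as required.

    class DifferentialGap:
        """Two-position controller that switches on above the high limit, off below the low
        limit, and holds its last output while the PV is between the two limits."""
        def __init__(self, low_limit, high_limit, output=False):
            self.low, self.high, self.output = low_limit, high_limit, output

        def update(self, pv):
            if pv >= self.high:
                self.output = True       # e.g. start the pump / open the valve
            elif pv <= self.low:
                self.output = False      # e.g. stop the pump / close the valve
            return self.output           # unchanged while low < pv < high
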
SET-POINT TRACKING
In Auto mode: The controller moves the control valve (or final control element) to keep the process value (PV) at setpoint (SP).
In Manual mode: The human operator moves the control valve directly. The controller may be doing the control calcs in the background,
but the controller outputs are ignored and the operator's manipulations take precedence.

The challenge comes when changing the mode from Auto to Manual and back to Auto.
When going from Auto to Manual, control valve goes under the control of the operator at the last position directed by the controller's calcs.
The operator may then move the control valve to any position.
Since the PV will be changing as valve is moved or the process changes, the old SP from Auto mode will no longer match the current PV.
Thus when the controller is switched back to Auto mode the current PV and old SP may be far apart causing the controller to make quick,
large adjustments to the valve. This can upset certain processes.

Setpoint tracking is optional controller feature that causes Auto mode SP to be constantly updated to match the PV while in manual mode.
Thus when controller is switched from manual to auto, SP equals current PV and there is little or no movement of control valve (Bumpless).
The operator would then be expected to slowly shift the SP to whatever is appropriate to return the process to desired typical conditions.

By using set point tracking in a controller, we make the set point of the controller track its Process Value (or, in a cascade, its downstream block's set point).
This is to enable a bumpless transfer when the controller is switched to Auto from Manual mode.
Say, controller is in manual mode and operator has entered X value to output and driven the process variable PV to Z.
And the set point before switching to Manual Mode was Y.
Now when operator again puts the loop in auto mode, if the setpoint is Y, the loop will try to maintain that & there are chances that process
may experience bump in doing that.
So to avoid this the setpoint is made to follow PV(Z) so when the loop is again put to Auto SP=PV=Z, hence there is no change in output.
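
A schematic sketch of set-point tracking: while the loop is in manual the set point is continuously overwritten with the PV, so the first automatic scan after the switch starts with approximately zero error. The mode strings and the pid_update callback are illustrative assumptions.

    def scan_with_sp_tracking(mode, sp, pv, manual_output, pid_update):
        """One controller scan with set-point tracking enabled."""
        if mode == "MANUAL":
            sp = pv                         # SP tracks PV while the operator drives the output
            output = manual_output
        else:                               # AUTO: on the first scan after switching, SP == PV,
            output = pid_update(sp, pv)     # so the error is ~0 and the transfer is bumpless
        return sp, output
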
Bumpless transfer from MAN to AUTO

The controller output that manipulates the final control element can be adjusted manually by the operator or automatically by the controller,
depending upon whether the operator has selected the manual mode or the automatic mode of the controller.
During switching between modes (Auto to Manual to Auto), condition can arise that cause improper set points and outputs to be generated.
When controller is in manual mode, the operator can directly manipulate the controller output and move the control valve as per his desire.
Although the loop may be stabilized in manual, when the operator switches the controller to automatic, the controller algorithm adjusts the
output in response to difference between set point (set-point which was there when controller was in auto mode) and process variable.
This can cause large upset to process if the operator fails to set the set point equal to the process variable before switching to automatic.
In many applications, it is advantageous to unburden operator from inconvenience of having to set the setpoint equal to process variable.
This can be accomplished through use of a bumpless transfer feature or set point tracking.

During switch from manual to auto, bumpless transfer feature causes setpoint to equal PV before control algorithm manipulates the output.
As a result, the control loop is switched from manual to automatic without a bump in output signal.

In some controllers, the set point may follow the process variable in manual mode (set point tracking), effectively yielding the same result.
It should be noted that while the bump is avoided, the operator should adjust the set point to desired value.
This however is a function with which the operator should be familiar.
INTEGRAL WINDUP
It is often the case that solving of one problem introduces another.
While introduction of integral mode has solved problem of offset, it has introduced another that has to do with very nature of integral mode.
The integral looks at past history of errors & integrates, it is good & useful when loop is operational, but is a problem when the loop is idle.
When the plant is shut down for the night, the controllers do not need to integrate the errors under the error curves.
Other cases where integration of error is undesirable include selective control system, when particular controller is not selected for control,
or in a cascade master, if the operator has switched the loop off from cascade, etc.
Under these conditions mentioned above it is not desirable for the integral mode to stay active because if it does, it will eventually saturate,
and its output will either drop to zero or rise to the maximum value.
Once saturated, controller will not be ready to take control when called upon to do so but will actually upset process by trying to introduce
an equal and opposite area of error, which it has experienced during its idle state.
In all such installations or applications, the controller must be provided with either external reset, which protects it from ever becoming idle
or with an antireset windup feature, which protects it from saturating in its idle state.
In selective loops and cascade control loops, external feedback is the most often applied solution.
Instead of looking at its own output, which can be blocked, integral looks at external feedback (opening of valve), which cannot be blocked.
In heat-up applications the chosen solution usually is to use the slave measurement as external reset signal to prevent integral saturation.
In some control systems, when the windup limit is reached, the integral action (repeats/minute) is increased 8-, 16-, or 32-fold in order to speed the
“unwinding” process and return the algorithm to normal operation. In DCS systems these functions are implemented in software.

WINDUP ACCOMMODATION
The integral is an accumulator that continuously adds deviation from set point to previous sum.
Whenever an error persists for a long time, the integral can / may grow to the largest value the assigned processor memory can hold.
For analog systems, integral can grow to maximum signal value, usually beyond limits of 0 or 100%. This is called integral or reset windup.
Since the controller output is the sum of the three controller terms (P,I,D), the output will be dominated by the wound-up integral mode and
will “max out” with the control valve either fully open or fully closed.
When the error is subsequently removed, or even if the error changes sign, integral windup continues to keep the output saturated for a long time.
The control action persists until the negative error accumulation equals the previously accumulated positive error & permits the integral mode to unwind.
Whenever the controller is not in control, the integral mode can wind up unless some other provisions are made.
Therefore it is desirable for controller containing the integral mode to have antireset windup provisions. There are a variety of offerings.

Conceptually the simplest solution is to specify a limit on the integral contribution to the total output signal.
The maximum and minimum reasonable values for the integral contribution correspond to maximum and minimum controller outputs.
For a standard controller, a closely associated strategy is to inhibit integral accumulation if the output hits a pre-defined limit.
With either of mechanisms, the integral windup will be limited to values corresponding to the fully open or fully closed valve positions.
It is possible for the integral mode to wind up to fully open when the proper value should only be 30% open. In that case, the integral will keep
the output excessively high, and until it comes out of saturation and has time to unwind, it will persist in introducing a process error in the opposite direction.
To accelerate the return of the integral contribution, some controllers make integral accumulation 8, 16, or 32 times faster when coming off a limited value.
If a controller can detect when it is unable to control, then it can go into an “initialization” mode that forces integral mode output to a value
that permits graceful recovery whenever controller is again able to control.
Situations when a controller can detect that it is unable to control include being in the manual mode, having the valve stem reaching a limit
(if valve stem position is detected & transmitted back to the controller) or if controller is a primary controller of cascade mode configuration,
having the secondary controller in some mode other than fully automatic cascade.
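
The two provisions described above (clamping the integral contribution and inhibiting integration while the output is limited) can be sketched together as conditional integration; the PI form and the 0-100% limits are illustrative assumptions.

    def pi_with_antiwindup(sp, pv, integral, kp=2.0, ti=60.0, dt=1.0, out_lo=0.0, out_hi=100.0):
        """PI scan with anti-reset windup: integration is inhibited whenever it would drive
        the output further past a limit, and the integral term itself is clamped."""
        e = sp - pv
        raw = kp * e + integral
        pushing_past_hi = raw >= out_hi and e > 0
        pushing_past_lo = raw <= out_lo and e < 0
        if not (pushing_past_hi or pushing_past_lo):
            integral += (kp / ti) * e * dt
        integral = max(out_lo, min(out_hi, integral))           # limit the integral contribution
        output = max(out_lo, min(out_hi, kp * e + integral))
        return output, integral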

Bumpless Transfer and Controller Windup


Bumpless Transfer is essential so that when control loop is switched from manual to automatic mode, the controlled variable is not upset.
This is achieved by an algorithm that sets appropriate initial conditions for the iterative program scheme, based on the process variable (the controller input).
Reset or Integral windup occurs, for example, if an error exists while the controller is in manual and PID algorithm is active.
As integral mode keeps integrating the area under error curve, when the loop is switched back to automatic mode from the manual mode,
the controller output signal (manipulated variable) can be saturated, meaning that it has reached its maximum limit (constraint).
One should prevent the controller from such windup, because it can cause overshoots followed by cycling or other forms of instability.
Integrators are components that can wind up and antireset windup refers to stopping of integration once control signal has been saturated.
For digital control applications a number of antireset windup algorithms have been developed. A common characteristic of each is the use
of a feedback signal from the manipulated variable (the controller output signal), to keep that signal from reaching saturation values.

Antireset Windup
Windup can occur if a deviation persists longer than it takes for the integral mode to drive the control amplifier to saturation; certain limits can prevent this.
Ex. beginning of batch process, if controller input signal is temporarily lost or if large disturbance occur in either setpoint or measurement.
Only when the deviation changes sign (process variable crosses set point) does the integral action reverse direction.
So the process variable will most likely overshoot the set point by a large margin.
If limit acts in feedback section of control amplifier’s integral circuit, the controller output will immediately begin to drive in opposite direction
as soon as the process variable signal crosses the setpoint. This approach is commonly referred to as antireset windup.
On the other hand, if the limit acts directly on output as discussed earlier, it essentially diverts excess current coming from control amplifier,
and therefore output will remain at a high value for some time after the process crosses the set point.
The output limit will divert less current. This extends time that output remains at a saturated value, aggravating overshoot due to windup.
Antireset windup circuits are usually offered only to eliminate saturated output currents at the high end of the signal range.
A few manufacturers go so far as to provide special batch controller models to minimize reset windup.
Here, special circuit modifications actually begin to drive the output out of saturation before process variable value crosses the set point.
Reset windup can be avoided if operator equalizes PV value and setpoint in manual mode before switching from manual to auto operation.
FLOW COMPENSATION BLOCK
Description
The FLOWCOMP (Flow Compensation) block operates on uncompensated flow measurements of liquids, steam, gases or vapors.
It computes a flow compensation factor based on variations in parameters, temperature, pressure, specific gravity, molecular weight.
The block derives a compensated flow value as its output. It looks like this graphically.

The parameters for a FLOWCOMP block should generally be fetched from another function block, by block wiring or
through a parameter connector.
At every execution cycle the parameter will be fetched to calculate the compensation term and compensated flow.

Function
The FLOWCOMP block uses the following basic equation to calculate a compensated flow value as its output.
Compensated flow = (uncompensated flow) COMPTERM

Where: uncompensated flow = An input


COMPTERM = A calculated compensation term

The FLOWCOMP block offers five different equations for calculating the flow compensation term (COMPTERM).
There is one equation for liquids, one for steam, and three for gases and vapors. Each equation may require different inputs.
For example, depending on which gases and vapors equation you choose, one requires temperature and pressure measurements,
another requires temperature, pressure and specific gravity, and a third requires temperature, pressure and molecular weight.
Configuration Parameters
The following table provides a summary of FLOWCOMP-specific parameters that you can configure through the Main tab of the block's
properties form in Control Builder. You must have an access level of at least Engineer to enter or modify values for these parameters.
The table does not include descriptions of the common parameters such as block name and description.

Title Parameter Name Description

PV Display Format PVFORMAT Lets you define decimal format to be used to display the PV value.
The choices are D0(None), D1(One), D2(Two), D3(Three). Default value is D1.

Overall Scaling Factor CPV Lets you define the overall scaling factor to be applied to the PV value to meet
for PV your process requirements. The default value is 1.

Flow Compensation CF1 Lets you define the compensation factor to use for converting units of
Factor 1 measurement for the uncompensated flow to units for the compensated flow,
or correcting for assumed design conditions. The default value is 1.

Flow Compensation CF2 Lets you define the compensation factor to use for converting units of
Factor 2 measurement for the uncompensated flow to units for the compensated flow,
or correcting for assumed design conditions. The default value is 1.

Compensation Term COMPHILM Lets you define a high limit for flow compensation term, Default value is 1.25.
High Limit

Compensation Term COMPLOLM Lets you define a low limit for flow compensation term, Default value is 0.8.
Low Limit

PV Equation Type PVEQN Lets you select the flow compensation equation type the block is to use.
The default value is EQA (Equation A).

PV Characterization PVCHAR Lets you specify square root as the PV characterization to use.
Option The default value is SQUAREROOT.

Bad Comp Term Alarm BADCOMPTERM.PR Lets you specify the priority level for a bad COMPTERM alarm.
Priority The default value is LOW.

Bad Comp Term Alarm BADCOMPTERM.SV Lets you specify the severity level for a bad COMPTERM alarm.
Severity The default value is 0.

Alarm Filter Cycles MAXCYCLE Lets you specify the number of filter cycles before a bad COMPTERM alarm is generated.
The default value is 0.
If the value is NaN, COMPTERM is frozen at its last good value for an indefinite period.

Zero Ref. for Pressure P0 Lets you specify zero pressure reference value for equations that require it.
The default value is 0.

Zero Ref. for T0 Lets you specify the zero temperature reference value for equations that require it.
Temperature The default value is 0.

Specific Gravity RG Lets you specify specific gravity reference value for equations that require it.
The default value is 1.

Pressure RP Lets you specify the absolute pressure reference value for equations that require it.
The default value is 1.

Steam Quality RQ Lets you specify steam quality reference value for equations that require it.
The default value is 1.

Temperature RT Lets you specify the temperature reference value for equations that require it.
The default value is 1.

Steam Compressibility RX Lets you specify the steam compressibility reference value for equations that require it.
The default value is 1.

Reference Molecular RMW Lets you specify molecular weight ref. value for equations that require it.
Weight The default value is 1.

Input
The PV Equation Type (PVEQN) selection determines number of inputs that the FLOWCOMP block requires
as outlined in the following table. All inputs must be fetched from other function blocks.

If PVEQN is . . .    Then, It Requires These Inputs . . .    And, It Is Used For . . .

Equation A Uncompensated Flow (F) Mass Flow or Volumetric Flow compensation of liquids.
Specific Gravity (G)

Equation B Uncompensated Flow (F) Mass Flow compensation of gases or vapors


Pressure (P)
Temperature (T)

Equation C Uncompensated Flow (F) Mass Flow compensation of gases or vapors.


Pressure (P)
Temperature (T)
Specific Gravity (G)

Equation D Uncompensated Flow (F) Volumetric Flow compensation of gas or vapor.


Pressure (P)
Temperature (T)
Molecular Weight (MW)

Equation E Uncompensated Flow (F) Mass Flow compensation of steam.


Pressure (P)
Temperature (T)
Steam Quality Factor (Q)
Steam Compressibility (X)

If you need characterization or alarming on individual inputs to the FLOWCOMP block, provide the inputs through a DATAACQ block.
If you want alarming for the compensated flow output, send the output to a DATAACQ block.
Output
This block produces the following outputs:
 PV and its status (PVSTS)
You can configure the COMPTERM parameter as an output pin on the FLOWCOMP block for connection to another block.

Equations
The FLOWCOMP block uses the following basic equation.
PV = CPV * (CF1 / CF2) * F * COMPTERM

Where:
CPV = Overall scale factor for PV
CF1 = Compensation factor
CF2 = Compensation factor
F = Uncompensated flow input
COMPTERM = A calculated flow compensation term

• The PVCHAR parameter is the COMPTERM Characterization option. Default value is SQUAREROOT.
Valid options are SQUAREROOT and NONE.
• If COMPTERM is greater than COMPHILM, then COMPTERM is clamped to COMPHILM.
• If COMPTERM is less than COMPLOLM, then COMPTERM is clamped to COMPLOLM.
• The COMPTERM is calculated differently for each equation as noted in the following sections.
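
To make the sequence of operations concrete before looking at the individual equations, the following sketch (Python, purely illustrative and not the actual block implementation) applies the clamping and characterization rules above to the basic equation; the order of clamping versus characterization is an assumption in this sketch.

import math

def flowcomp_pv(f, compterm, cpv=1.0, cf1=1.0, cf2=1.0,
                comphilm=1.25, complolm=0.8, pvchar="SQUAREROOT"):
    # Clamp COMPTERM between its configured low and high limits.
    compterm = min(max(compterm, complolm), comphilm)
    # Apply the COMPTERM characterization option (SQUAREROOT or NONE).
    if pvchar == "SQUAREROOT":
        compterm = math.sqrt(compterm)
    # PV = CPV x (CF1 / CF2) x F x COMPTERM
    return cpv * (cf1 / cf2) * f * compterm

# Example: 100 units of uncompensated flow with a compensation term of 1.1
print(flowcomp_pv(100.0, 1.1))   # approximately 104.9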

Equation A
Used for mass-flow or volumetric flow compensation of liquids.
 If PVCHAR = SQUAREROOT, then:

 Else: If PVCHAR = NONE, then:

Additional considerations for FLOWCOMP Equation A


Consider the following when converting uncompensated, standard volumetric-flow to compensated, standard volumetric-flow.

If the variation in density caused by fluid-composition changes is not significant, then:


 G = Gravity of the actual fluid at flowing conditions
 RG = Gravity at flowing conditions used in design basis

If variations in density caused by fluid-composition changes are significant, C1 and C2 of FLOWCOMP should be manipulated as follows:
 C1 is set to the Gravity at reference conditions used in the design basis.
 If a measured value of specific gravity at flowing conditions is available, the actual specific gravity, referenced to standard conditions,
is calculated from that measurement by another function block (using the flowing temperature and expansion formulas); this value
is pulled by the FLOWCOMP block into the C2 pin.
 If the actual specific gravity is measured by a lab, a numeric block can be used to hold the value, and this value is pulled by the
FLOWCOMP block into the C2 pin. In this case, another function block may use the lab value and the flowing temperature to calculate
the specific gravity at flowing conditions, and the result is used as the G input.

For these cases:


 G = Gravity of the actual fluid at flowing conditions
 RG = Gravity at flowing conditions used in design basis
 C1 = Gravity at reference conditions used in design basis
 C2 = Gravity of the actual fluid at reference conditions
Equation B
Used primarily for mass-flow compensation of gases and vapors.
 If PVCHAR = SQUAREROOT, then:

 Else: If PVCHAR = NONE, then:

Equation C
Used for mass-flow compensation of gases and vapors.
 If PVCHAR = SQUAREROOT, then:

 Else: If PVCHAR = NONE, then:

Equation D
Used typically for volumetric-flow compensation of gases and vapors.
 If PVCHAR = SQUAREROOT, then:

 Else: If PVCHAR = NONE, then:

Equation E
Used for mass-flow compensation of steam.
 If PVCHAR = SQUAREROOT, then:

 Else: If PVCHAR = NONE, then:

Symbol Definitions
G = Specific gravity
MW = Molecular weight
P = Pressure (input)
T = Temperature (input)
Q = Steam quality (input)
X = Steam compressibility (input)
RG = Reference specific gravity (configured)
RP = Reference pressure (configured)
RT = Reference temperature (configured)
RQ = Reference steam quality (configured)
RX = Reference steam compressibility (configured)
RMW = Reference molecular weight (configured)
P0 = Zero pressure reference (configured)
T0 = Zero temperature reference (configured)
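
As a hedged illustration only (not the vendor's exact Equation B, C or D), a typical ideal-gas compensation term for mass-flow compensation with a DP-type meter is the ratio of actual density to design density, built from the symbols above. The sign convention for P0 and T0 (subtracting them to obtain absolute units) is an assumption for this sketch.

def gas_compterm(p, t, rp, rt, p0=0.0, t0=0.0):
    # Illustrative ideal-gas pressure/temperature compensation term.
    # p, t   : measured pressure and temperature inputs (P, T)
    # rp, rt : reference (design) pressure and temperature (RP, RT)
    # p0, t0 : zero references that convert the inputs to absolute units
    #          (assumed convention: absolute = input - zero reference)
    # Returns density_actual / density_design for an ideal gas of fixed composition.
    return ((p - p0) / (rp - p0)) * ((rt - t0) / (t - t0))

With the SQUAREROOT characterization option, the block would then take the square root of such a term before it is multiplied into the PV equation.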
MOTOR CONTROL CIRCUIT
Motor Control Circuits are an effective way to reduce cost by using smaller wire and reduced-amperage devices to control a motor.
Many smaller motors use same size conductors for both control and power circuits, but as horsepower increases it becomes impractical.
Motor control circuits are connected to lower voltage than the motor they control to make it safer for operators and maintenance personnel.

The diagram shows the electrical relationship of a contactor and an overload relay in a typical motor control circuit.
The contactor includes an electromagnetic coil (M), auxiliary contacts (Ma) in control circuit and three main contacts (M) in power circuit.
The overload relay includes three heater contacts (OL) in the power circuit and auxiliary contacts (OL) in the control circuit.

In this circuit, when the Start pushbutton is pressed, power is provided to the coil of contactor (M), which energizes contacts M and Ma and they close.
This provides power to the motor through the OL heater contacts. At the same time the Ma contacts close so that when the Start pushbutton is released,
power is still provided to coil (M). The motor continues to run until the Stop pushbutton is pressed, or until an overload occurs and the OL contact opens.
If an overload occurs, the OL heater contacts open, removing power from the motor, and the OL auxiliary contacts open, removing power from the coil.
Removing the power from the coil is necessary to prevent the motor from automatically restarting after the overload relay cools.
This is an example of a full-voltage or across-the-line starter. This type of circuit starts the motor by applying full line voltage to the motor.

When SW1 is pressed, power is provided to coil of contactor & M contacts close. This provides power to motor through OL heater contacts
The motor continues to run until SW1 is again pressed (opened) or unless an overload occurs.
If an overload occurs, OL heater contacts open removing power from motor and OL auxiliary contacts open removing power from the coil.
After the overload relay cools, the OL auxiliary contacts close again. But SW1 is still latched and the motor will automatically restart.
Removing power from the coil is essential or necessary to prevent the motor from automatically restarting.
Therefore switch must be momentary (contacts of momentary pushbutton change their state, open to close when pushbutton is pressed.
They return to their normal state as soon as the button is released, the change of state that is open to close or vice-versa is momentary).
But if switch is maintained/latched (maintained pushbutton latches itself in place. It must be unlatched to allow to return to normal state)
the motor will automatically restart which is not desired.
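
The difference between the two circuits can be summarized as simple Boolean "rungs". The sketch below is only an illustration of the logic described above, with hypothetical signal names (True means a closed contact or a pressed button).

def three_wire_control(start, stop, ol_ok, m_sealed):
    # Momentary Start/Stop with seal-in contact Ma: coil M holds in through its
    # own auxiliary contact and drops out on Stop or when the OL contact opens.
    return (start or m_sealed) and (not stop) and ol_ok

def two_wire_control(sw1_latched, ol_ok):
    # Maintained switch SW1: the coil is energized whenever SW1 is latched
    # and the OL contact is closed.
    return sw1_latched and ol_ok

After an overload trip, the three-wire circuit loses its seal-in (m_sealed becomes False), so the motor stays off when the overload relay cools; in the two-wire circuit SW1 is still latched, so the motor restarts by itself, which is why a momentary pushbutton is required.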

Manual Motor Starters are simply manual switches designed to control larger current loads typical of motor control.
They may be small / large switches designed for control of high ampere circuits. Motor starters may be Single/ Double/Triple Pole devices
Manual motor starters may also be equipped with matched heaters/overload protectors designed to open when the current load is too high.
These heaters must be properly sized to the motor they are protecting or else they will either open too soon, or will not protect the motor.
The disadvantage to manual motor starters is that they cannot have remotely located On and Off controls.

Magnetic Motor Starters are essentially heavy duty relays, often equipped with heater / thermal overloads matched to the motor they start
They are then controlled using lighter duty circuit, auxiliary relay contacts, a control station or several stations utilizing lighter duty switches
(usually momentary sometimes latching). These switches would not be capable of switching the large loads required by the motors.
Because control circuitry is separate from the Load circuit, On/Off controls can be mounted remotely and can even be duplicated if desired.
This type of motor starter has auxiliary contact switch: a smaller set of contacts that open / close along with the motion of main contactors
These contacts will be used to latch the system in an on condition.
Latching mean that auxiliary contact bypasses the ON button, so contactor remains energized, until a separate OFF button cuts the power.
Additional contacts (NO & NC) may also be provided and may be used for auxiliary circuits or to provide feedback to the rest of the system
that the starter is engaged and the motor has power.
MANUAL MOTOR STARTERS

MAGNETIC MOTOR STARTERS


FORWARD & REVERSE

The above Logic is also true for the “Reverse” pushbutton.


The parallel auxiliary contacts are also referred to as seal-in contacts, the word "seal" meaning essentially the same thing as the word latch.
To stop motor (either in forward or reverse), we require some means for operator to interrupt power to motor contactors (Stop Pushbutton).

If motor needs to be stopped, the latched forward or reverse circuits, can be “unlatched” by momentarily pressing the “Stop” pushbutton,
which will open either forward or reverse circuit, de-energizing the energized contactor, returning seal-in contact to its normal state (open).

Problem:
If a motor running with a large inertia load (for example, a large fan) is suddenly reversed, the motor would struggle to overcome that inertia as it
tried to begin turning in reverse, drawing excessive current and potentially reducing the life of the motor, drive mechanisms and fan.

Solution:
Adding some kind of a time-delay function in this motor control system will prevent such a premature startup from happening.
This is achieved by adding a couple of time-delay relay coils, one in parallel with each motor contactor coil. By using time delay contacts,
that delay returning to their normal state, these relays will provide us a "memory" of which direction the motor was last powered to turn.
Each time-delay contact must open starting-switch leg of the opposite rotation circuit for several seconds, while the fan coasts to a halt.
If the motor has been running in the forward direction, both M1 and TD1 will have been energized.
In this case, normally-closed timed contact of TD1 will have immediately opened, the moment TD1 was energized.
When the stop button is pressed, the TD1 gets de- energized but TD1 contact in reverse line circuit waits for the specified amount of time
before returning to its normally-closed state, thus holding the reverse circuit open for a specific duration of time, so M2 can't be energized.
When TD1 times out, the contact will close and the circuit will allow M2 to be energized, if the reverse pushbutton is pressed.
In same manner, TD2 will prevent "Forward" pushbutton from energizing M1 until prescribed time delay after M2,TD2 is de-energized.

The time-interlocking functions of TD1 and TD2 render the M1 and M2 interlocking contacts redundant. The M1 and M2 auxiliary interlock contacts
can therefore be removed and the TD1 and TD2 contacts used on their own, since they open immediately when their respective relay coils are energized,
thus "locking out" one contactor while the other is energized.

Each time delay relay will serve a dual purpose:


Preventing the other contactor from energizing while the motor is running
Preventing the same contactor from energizing until a prescribed time after motor shutdown.
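
A rough sketch of the timed interlock described above (illustrative only, with hypothetical names and an assumed 10-second delay): the normally-closed timed contact of each time-delay relay opens the instant its coil is energized and re-closes only after the set off-delay has elapsed following de-energization.

def nc_timed_contact_closed(coil_energized, seconds_since_deenergized, delay_s):
    # Normally-closed contact of an off-delay timer relay (TD1 or TD2).
    if coil_energized:
        return False                                 # opens immediately on pick-up
    return seconds_since_deenergized >= delay_s      # re-closes only after the delay

def reverse_permitted(td1_energized, seconds_since_td1_dropped, delay_s=10.0):
    # M2 (reverse) may be energized only while TD1's NC timed contact is closed,
    # i.e. the fan has had delay_s seconds to coast to a halt after forward stopped.
    return nc_timed_contact_closed(td1_energized, seconds_since_td1_dropped, delay_s)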

PERMISSIVE CIRCUITS
A practical application is in control systems where several process conditions have to be met before a piece of equipment is allowed to start.
An example is burner control for large combustion furnaces. In order for the burners in a large furnace to be started safely, the control system
requests "permission" from several process conditions: high and low fuel pressure, fan flow check, exhaust stack damper position, etc.
In a permissive circuit design, each of these process conditions is called a permissive, and each permissive switch contact is wired in series,
so that if any one of them detects an unsafe condition, the circuit will be opened.
If all permissive conditions are met, CR1 will energize and the green lamp will light. In real life, more than just a green lamp would be energized;
usually a control relay or solenoid would be placed in that rung of the circuit to be energized when all permissive contacts were "good" (closed).
If any one of the permissive conditions is not met, the series string of switch contacts is broken, CR1 de-energizes and the red lamp lights.
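
In code form, a permissive string is simply a series AND of all the permissive contacts (a minimal sketch with hypothetical condition names):

def burner_start_permitted(fuel_pressure_ok, fan_flow_ok, damper_position_ok, *other_permissives):
    # Every series permissive contact must be closed for CR1 to energize.
    return all((fuel_pressure_ok, fan_flow_ok, damper_position_ok, *other_permissives))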

INTERLOCK CIRCUITS
Another practical application is in control systems where we want to ensure that two incompatible events cannot occur at the same time.
An example of this type of circuit is in reversible motor control, where two motor contactors are wired to switch polarity to an electric motor,
and we don't want the forward and reverse contactors energized simultaneously.
The normally-closed "OL" contact is the overload contact, activated by "heater" elements wired in series with each phase of the AC motor.
If the heaters get too hot, the contact will change from its normal (closed) state to open, which will prevent either contactor from energizing.
This control system will work fine so long as no one pushes both buttons at the same time. If someone were to do that, phases A and B would
be short-circuited together, because contactor M1 sends phases A and B straight to the motor while contactor M2 reverses them:
phase A would be shorted to phase B and vice versa. Obviously, this is a bad control system design.
To prevent this from happening, we can design the circuit so that the energization of one contactor prevents the energization of the other.
This is called interlocking, and it is accomplished through the use of auxiliary contacts on each contactor.
When M1 is energized, the normally-closed auxiliary contact on the second rung will be open, thus preventing M2 from being energized,
even if the "Reverse" pushbutton is actuated. Likewise, M1 energization is prevented when M2 is energized.
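
The cross-wired normally-closed auxiliary contacts can likewise be written as Boolean rungs (again only a sketch of the logic described, with hypothetical names):

def forward_coil(fwd_pb, stop_pb, ol_ok, m1_sealed, m2_energized):
    # M2's NC auxiliary contact in series blocks M1 while M2 is energized.
    return (fwd_pb or m1_sealed) and (not stop_pb) and ol_ok and (not m2_energized)

def reverse_coil(rev_pb, stop_pb, ol_ok, m2_sealed, m1_energized):
    # M1's NC auxiliary contact in series blocks M2 while M1 is energized.
    return (rev_pb or m2_sealed) and (not stop_pb) and ol_ok and (not m1_energized)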

Permissive Circuits - Switch contacts that are designed to interrupt a circuit if certain predefined physical / process conditions are not met
because the system requires permission from these inputs to activate.

Interlock Circuits - Switch contacts designed to prevent a control system from taking two incompatible actions at once (such as powering
an electric motor forward and backward simultaneously) are called interlocks.
OVERLOAD PROTECTION

The overload protection prevents an electric motor from drawing too much current, which may result in overheating and literally "burning out."
Like a contactor, overload protection is another building block of motor starters.

A motor starter generally consists of


 Controller (Contactor)
 Overload Protection

First, we look at why overload protection is needed. Then we will move to the different types of overload protection.
Part of understanding overload protection is understanding how motors work.
A motor goes through three stages during normal operation: Resting, Starting, and Operating Under Load.

A motor at rest requires no current because the circuit is open.


But once the circuit is closed , motor starts drawing a tremendous Inrush current which is generally 6-8 times its normal running current.
This large inrush current can cause immediate tripping of the circuit breaker.
A fuse or circuit breaker sized to handle only the normal running load of the motor will open the circuit during startup.
Sizing the fuse or circuit breaker for the spike in current (the inrush current) would not solve the problem,
because once the motor is running, only the most extreme overload would open the circuit.
Smaller overloads would not trip the breakers and the motor may burn out.

The Problem with Oversized Fuses

What is an Overload?
The term means that too much load has been placed on motor. A motor is designed to run at certain speed called its synchronous speed.
If the load on the motor increases, the motor draws more current to continue running at its Synchronous Speed.
It is quite possible to put so much load on a motor that it will draw more and more current without being able to reach synchronous speed.
If this happens for a long enough period of time, the motor can melt its insulation and burn out. This condition is usually called an overload.
In fact, the motor could stop turning altogether (Locked Rotor) under a large enough load. This is another example of overload conditions.
Even though the motor shaft is unable to turn or locked, the motor continues to draw current, attempting to reach its synchronous speed.
Although the overloaded motor may not draw enough current to blow fuses or trip circuit breakers, it can produce sufficient heat to burn up the motor.
This heat generated by excessive current in windings, causes insulation to fail and motor to burn out.
We generally use the term Locked Rotor Amps to describe when the motor is in this state and is drawing the maximum amount of current.
So, because of the way a motor works, an overload protection device is required that does not open the circuit while the motor is starting,
but opens the circuit if the motor gets overloaded and the fuses do not blow.

Overload Relay
An Overload Relay is the device used in motor starters for overload protection; it limits the amount of current drawn to protect the motor from overheating.

An overload relay consists of:


 A current sensing unit (connected in line to the motor).
 A mechanism to break the circuit either directly or indirectly.

To meet motor protection needs, the overload relays have a time delay to allow harmless temporary overloads without breaking the circuit.
They have a trip capability to open the Control Circuit if dangerous currents (that could result in motor damage) continue over a time period
All overload relays also have some means of resetting the circuit once the overload is removed.
TYPES

Thermal Overload Relays are designed to protect the motor against overload currents; they must be capable of handling large currents without
tripping for short periods during motor starting (inrush currents). They should, however, trip quickly if the starting currents last too long.
The temperature trip range is the trip point setting range for relays designed to monitor the temperature of the motor stator windings.

Eutectic (melting alloy) Overload Relay


Melting alloy overload relay consists of Heater Coil, Eutectic Alloy & mechanical mechanism for a tripping device when overload occurs.
The relay measures the temperature of motor by monitoring amount of current being drawn. This is done indirectly through a heater coil.
Many different types of heater coils are available, but the operating principle is the same:
A heater coil converts excess current into heat which is used to determine whether the motor is in danger.
The magnitude of the current and the length of time it is present determine the amount of heat registered in the heater coil.
Usually, a eutectic alloy tube is used in combination with a ratchet wheel to activate a tripping device when overload occurs.
A eutectic alloy is a metal that has a fixed temperature at which it changes directly from a solid to a liquid.
When overload occurs, heater coil heats eutectic alloy tube. The heat melts the alloy, freeing ratchet wheel and allowing it to turn.
This action opens the normally closed contacts in the overload relay.
Eutectic Overload Relay: Ratchet Wheel and Eutectic Alloy Combination

Bimetallic Overload Relay


A bimetallic strip is made up of two different metals. The two dissimilar metals are permanently joined.
Heating the Bimetallic Strip causes it to bend because the dissimilar metals expand and contract at different rates.
The bimetallic strip applies tension to spring on a contact. If heat begins to rise, strip bends, spring pulls contacts apart, breaking the circuit

Once the tripping action has taken place, the bimetallic strip cools and reshapes itself, automatically resetting the circuit.
As we mentioned, an overload relay is designed to prevent the motor from overheating.
The heat comes from two sources: heat generated within the motor, and heat present in area where the motor operates (Ambient Heat).
Although ambient heat contributes a relatively small portion of total heat, it has a significant effect on operation of overload relay bimetals.
A properly designed ambient-compensating element reduces the effects of ambient temperature change on the overload relay.

Solid State Overload Relay


The Solid State overload relay does not actually generate heat to facilitate a trip. Instead it measures current, or a change in resistance.
The advantage of this method is that overload relay doesn't waste energy generating heat & doesn't add to cooling requirements of panel.
The current can be measured via current transformers, then converted into a voltage which is used as a reference by the overload relay.
If the relay notices that the current is higher than it should be for too long a period of time, then it trips.
Another type uses sensors to detect heat generated in motor. Heat in excess of preset value for too long period of time trips motor offline.

 It is possible to provide proactive functionality and improved protection against special conditions.
Ex. when a high ambient temperature exists, devices that use sensors can sense the effect the ambient temperature is having on the motor.
 Some solid state overload relays offer programmable trip time. This can be useful when a load takes longer to accelerate than
traditional overload relays will allow, or when a trip time in between traditional Trip Classes is desired.
 Some overload relays have a built in emergency override, to allow motor starting even when it is damaging to the motor to do so.
This can be useful in a situation where the process is more important than saving the motor.
 Some solid-state overload relays can detect the change in current when a motor suddenly becomes unloaded.
In such situation, relay will trip to notify user of a problem. Normally, this indicates a system problem rather than a motor problem.
Momentary Contacts
The contacts change their state, open to close or close to open, when the button is pressed.
They return to their normal state as soon as the button is released.

Maintained Contacts
The contacts change their state, open to close or close to open, and remain in place when pressed (latches itself).
They must be unlatched to allow it to return to its normal state.
MOTOR TYPES
Torque is a twisting or turning force that causes an object to rotate.
For example, a force applied to the end of a lever causes a turning effect or torque at the pivot point.
T = Force x Radius (torque (T) is the product of force and the radius, i.e. the lever distance).
In the English system of measurements, torque is measured in pound-feet (lb-ft) or pound-inches (lb-in).
For example, if 10 lbs of force is applied to a lever 1 foot long, the resulting torque is 10 lb-ft.
An increase in force or radius results in a corresponding increase in torque. Increasing the radius to two feet results in 20 lb-ft of torque.

Angular Speed of a rotating object determines how long it takes for an object to rotate a specified angular distance.
Angular speed is often expressed in revolutions per minute (RPM).
For example, an object that makes ten complete revolutions in one minute has a speed of 10 RPM.

Power is the rate of doing work or the amount of work done in a period of time.
Power can be expressed in foot-pounds per second, but is often expressed in horsepower. This unit was defined by James Watt.

AC motors manufactured in the United States are generally rated in horsepower, but motors in other countries are rated in kilowatts (kW).
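
For reference when comparing horsepower-rated and kilowatt-rated motors: 1 hp = 550 foot-pounds per second, or about 0.746 kW, and shaft power can be computed from torque and speed. The numbers below are a worked example, not values from any particular motor.

def hp_from_torque(torque_lb_ft, speed_rpm):
    # P [hp] = T [lb-ft] x N [RPM] / 5252   (5252 = 33000 / (2 x pi))
    return torque_lb_ft * speed_rpm / 5252.0

def hp_to_kw(hp):
    return hp * 0.7457

print(hp_from_torque(20, 1750))   # about 6.7 hp
print(hp_to_kw(10))               # about 7.5 kW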

AC Motors
* Asynchronous or Induction Motor
* Synchronous Motor

Asynchronous motors or Induction motors : This type of motor has three main parts, Rotor, Stator, Enclosure.
The stator and rotor do the work and the enclosure protects the stator and rotor.
Stator is the stationary part of the motor’s electromagnetic circuit. The stator core is made up of many thin metal sheets called laminations.
Laminations are used to reduce energy losses that would result if a solid core were used.
Stator laminations are stacked together forming a hollow cylinder. Coils of insulated wire are inserted into slots of the stator core.
When the assembled motor is in operation, the stator windings are connected directly to the power source.
Each grouping of coils, together with steel core it surrounds becomes electromagnet when current is applied.
Electromagnetism is basic principle behind motor operation
Rotor is rotating part of motor’s electromagnetic circuit. The most common rotor used in 3-phase induction motor is squirrel cage rotor.
The squirrel cage rotor is so called because its construction is reminiscent of the rotating exercise wheels found in some pet cages.
A squirrel cage rotor core is made by stacking thin steel laminations to form a cylinder. Rather than using coils of wire as conductors,
conductor bars are die cast into the slots evenly spaced around the cylinder.
Most rotors are made of die cast aluminum to form the conductor bars. Manufacturers also make motors with die cast copper rotor conductors.
The rotor conductor bars are mechanically/electrically connected to end rings. The rotor is pressed onto steel shaft to form rotor assembly.
Enclosure consists of a frame (or yoke) and two end brackets (or bearing housings). The stator is mounted inside the frame.
The rotor fits inside stator with slight air gap separating it from stator. There is no direct physical connection between the rotor and stator.
The enclosure protects the internal parts of the motor from water and other environmental elements.

The polarity of an electromagnet connected to an AC source changes at the frequency of the AC source.
In the above example, the coil was directly connected to a power supply.
However, a voltage can be induced across a conductor by merely moving it through a magnetic field.
This same effect is caused when a stationary conductor encounters a changing magnetic field.
This electrical principle is critical to the operation of AC induction motors.

The principles of electromagnetism explain the shaft rotation of an AC motor.


Recall that the stator of an AC motor is a hollow cylinder in which coils of insulated wire are inserted.
The following diagram shows electrical configuration of stator windings. In this example, six windings are used, two for each of 3 phases.
The coils are wound around the soft iron core material of the stator. When a current is applied, each winding becomes an electromagnet,
with the two windings for each phase operating as the opposite ends of one magnet.
The coils for each phase are wound in such a way that when current is flowing, one winding is a north pole and the other is a south pole.
When A1 is a north pole, A2 is a south pole and, when current reverses direction, the polarities of the windings also reverse.
The stator is connected to 3-phase AC power source. The illustration shows windings A1 and A2 connected to phase A of power supply.
When the connections are completed, B1 and B2 will be connected to phase B, and C1 and C2 will be connected to phase C.
As the following illustration shows, coils A1, B1, and C1 are 120° apart. Note that windings A2, B2, and C2 also are 120° apart.
This corresponds to 120° separation between each phase. Because each phase winding has two poles, this is called a two-pole stator.

The speed of the rotating magnetic field is referred to as the synchronous speed (NS) of the motor.
Synchronous speed is equal to 120 times the frequency (F), divided by the number of motor poles (P).
The synchronous speed for a two-pole motor operated at 60 Hz, for example is 3600 RPM.
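
A quick check of the synchronous speed formula, using the values from the text:

def synchronous_speed_rpm(freq_hz, poles):
    # NS = 120 x F / P
    return 120.0 * freq_hz / poles

print(synchronous_speed_rpm(60, 2))   # 3600 RPM, as stated above
print(synchronous_speed_rpm(60, 4))   # 1800 RPM for a four-pole motor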

For explanation, the squirrel cage rotor is substituted with a permanent magnet. When the stator windings are energized, a rotating magnetic field is established.
The magnet has its own magnetic field that interacts with the rotating magnetic field of the stator.
The north pole of rotating magnetic field attracts south pole of magnet, south pole of rotating magnetic field attracts north pole of magnet.
As the magnetic field rotates, it pulls the magnet along.
AC motors that use a permanent magnet for a rotor are referred to as permanent magnet synchronous motors. The term synchronous
means that the rotors rotation is synchronized with the magnetic field, and the rotor’s speed is same as the motor’s synchronous speed.

Instead of a permanent magnet rotor, a squirrel cage induction motor induces a current in its rotor, thus creating an electromagnet.
When current flows in a stator winding, the electromagnetic field created cuts across the nearest rotor bars.
When a conductor, such as a rotor bar, passes through a magnetic field, a voltage (emf) is induced in the conductor.
The induced voltage causes current flow in conductor.
In squirrel cage rotor, current flows through the rotor bars and around the end ring and produces a magnetic field around each rotor bar.
Because the stator windings are connected to an AC source, the current induced in the rotor bars also continuously changes, and
the squirrel cage rotor becomes an electromagnet with alternating north and south poles.
At any given point in time, the magnetic fields for stator windings are exerting forces of attraction and repulsion against various rotor bars.
This causes the rotor to rotate, but not exactly at the motor’s synchronous speed.
For a 3-phase AC induction motor, the rotating magnetic field must rotate faster than the rotor in order to induce current in the rotor.
When power is first applied to motor with rotor stopped, the difference in speed is at maximum & large amount of current is induced in rotor
After motor has been running long enough to get up to operating speed, difference between synchronous speed of rotating magnetic field
and rotor speed is much smaller. This speed difference is called slip. Slip is necessary to produce torque. Slip is also dependent on load.
An increase in load causes the rotor to slow down, increasing slip. A decrease in load causes the rotor to speed up, decreasing the slip.
Slip is expressed as a percentage. This type of motor is known as 3-phase Induction motor or Asynchronous motor.
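
Slip can be computed as a percentage of synchronous speed, as in this small sketch (the 1750 RPM rotor speed is only an illustrative figure):

def slip_percent(sync_rpm, rotor_rpm):
    return (sync_rpm - rotor_rpm) / sync_rpm * 100.0

# e.g. a four-pole, 60 Hz motor (1800 RPM synchronous) running at 1750 RPM
print(slip_percent(1800, 1750))   # about 2.8 % slip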

Synchronous Motor: Another type of 3-phase AC motor is synchronous motor. The synchronous motor is not an induction motor.
One type of synchronous motor is constructed somewhat like a squirrel cage rotor.
In addition to rotor bars, coil windings are also used. The coil windings are connected to external DC power supply by slip rings & brushes.
When the motor is started, AC power is applied to the stator and the synchronous motor starts like a squirrel cage rotor.
DC power is applied to the rotor coils after the motor has accelerated.
This produces a strong constant magnetic field in the rotor which locks the rotor in step with the rotating magnetic field.
The rotor therefore turns at synchronous speed of motor, which is why this is a synchronous motor.
DC Motors
Powering an electric motor with a DC current isn't easy. The direct current is fine for the electromagnet, but because there is nothing that
naturally changes with time, the poles of the electromagnets stay the same forever.
Since the motor needs magnetic poles that change periodically, something must reverse the current flow through the electromagnet at the proper moments.
In most DC electric motors, the rotor is an electromagnet that turns within a shell of stationary permanent magnets.
To make the electromagnet stronger, the rotor's coil contains an iron core that gets magnetized when current flows through the coil.
The rotor will spin as long as this current reverses each time its magnetic poles reach the opposite poles of the stationary magnets.
The most common way to produce these reversals is with a commutator.
In its simplest form, a commutator has two curved plates that are fixed to the rotor and connected to opposite ends of the wire coil.
Current flows into rotor through conducting brush that touches one of plates & leaves rotor through second brush that touches other plate.
As the rotor turns, each brush makes contact first with one plate and then with the other. Each time the rotor turns half of a turn, the plates
the two brushes touch are interchanged and with this swapping of connections comes a reversal in direction of current flow around the coil.
The DC motor will now spin indefinitely. But the DC motor depicted below has problems.

First, there’s nothing to determine which way the motor should turn when it starts, so it starts randomly in either direction.
Second, because there are times when a brush touches either both plates at once or neither of them, motor will sometimes not start at all.
To start reliably, the motor must make sure that its brushes send current through rotor and there are no short circuits through commutator.
In most DC motors, this requirement is met by having several coils in the rotor, each with its own pair of commutator plates.
As the rotor turns, the brushes supply current to one coil after another.
The rotor is constructed so that each coil receives power when it is in the proper orientation to experience a strong torque in the desired direction.
Because the brushes are wide enough to always supply current to at least one coil, but not so wide that they directly connect to one another,
the rotor will always start spinning when it is turned on.

The rotor of a DC motor turns at a speed that is proportional to the voltage drop through its coils.
Those coils have little electric resistance and allow a large current to flow while the rotor is at rest.
But once the rotor is turning, the changing magnetic fields around spinning rotor cause it to experience electric fields.
These electric fields oppose the flow of current through the rotor, extracting energy from that current and lowering its voltage.
Eventually rotor reaches angular velocity at which voltage supplied to coils matches induced voltage drop caused by dynamic electric field.
The rotor then spins stably at this angular velocity.
However, if the rotor is slowed by making it do work, the induced voltage drop will no longer match the supplied voltage, and
the remaining voltage drop will have to occur across the electric resistance, so more current will have to flow through the rotor.
In general, loading the rotor has little effect on its angular velocity but causes it to draw more current from its power source.
To change the rotor angular velocity, we must change the voltage drop through its coils.
While the direction in which the rotor turns depends on the motor's asymmetry, it also depends on the direction in which current flows through the entire motor.
If we reverse that current, rotor will begin turning backward. Its electromagnets will still turn on and off as before, but now magnetic forces
between the rotor and the stationary permanent magnets will be reversed.
Instead of being attractive, they will be repulsive or vice versa. The torque on rotor will be reversed, too, and the motor will spin backward.
So to make a DC motor rotate backward, we simply reverse the direction of current flow through it.
DC electric motors of this sort are used in a wide variety of battery operated devices, from toy cars to electric screwdrivers.
Unfortunately, their brushes experience mechanical wear as their rotors turn and eventually wear out and must be replaced.
Furthermore, the brushes produce mechanical interruptions in the flow of current and these interruptions often create sparks.
Motors with brushes are unsuitable for some environments because their sparking can ignite flammable gases.
Sparking also produces radio waves so that an automobile’s DC motors often interfere with its radio reception.

A commutator on the rotor of a DC motor connects its coil to the source of electric power.
The commutator turns with the rotor and reverses the direction of current flow in the coil once every half turn of the rotor.
Just as the rotor’s north pole reaches the south pole to its left, the current in the coil reverses and so do the rotor poles.
Universal Motors
An intermediate type of motor called a universal motor is a motor that can run on either AC or DC electric power.
A true DC motor can't tolerate AC power because the direction of its torque would reverse every half cycle of the power, and it would simply vibrate in place.
A true AC motor can’t tolerate DC power because, it depends on the power line’s reversing current to keep the rotor moving.

However, if we replace the permanent magnets of a DC motor with electromagnets and connect these electromagnets in the same circuit
as commutator and rotor, we will have a universal motor. This motor will spin properly when powered by either direct or alternating current.

If DC power is connected to a universal motor, the stationary electromagnets will behave as if they were permanent magnets and
the universal motor will operate just like a DC motor.
The only difference is that the universal motor will not reverse directions when we reverse the current passing through it.
It will continue turning in the same direction because reversing the current through the rotor also reverses current through electromagnets.
Since the universal motor contains no permanent magnets, every pole in the entire motor changes from north to south or south to north.
Because all poles change, the motor behavior does not change. It keeps turning in same direction.
If we really want to reverse the motor rotational direction, we must rewire the stationary electromagnets to reverse their poles.

Since universal motor always turns in same direction, regardless of which way current flows through it, it works just fine with AC power.
There are moments during the current reversals when the rotor experiences no torque, but the average torque is still high and rotor spins
as though it were connected to DC electric power. Like a DC motor, the voltage drop through the coils of its rotor governs its speed.

Universal motors are commonly used in kitchen mixers, blenders, and vacuum cleaners. While these motors are cheap and reliable, their
graphite brushes eventually wear out and must be replaced. To repair a motor with a worn brush, simply remove what is left of the old brush
and replace it with a new one from the hardware store. Some appliances even provide access ports through which we can replace brushes
without disassembling the motor.

Because the stationary magnets of a universal motor are actually electromagnets, the motor doesn’t notice changes in the direction
of current flow from the power source—every pole in the entire motor reverses, leaving the forces between those poles unaffected.
The universal motor works equally well on DC or AC electric power.
UPS (Uninterruptible Power Supply)

No company can afford to leave its assets unprotected from the power issues. Here are just a few of the reasons:
• Even short outages can be trouble. Losing power for as little as a quarter second can trigger events that may keep the
equipment unavailable for anywhere from 15 minutes to many hours.
• Utility power isn't clean. By law, electrical power can vary widely enough to cause many problems for the equipment.
According to standards, the voltage level can legally vary from 5.7 percent above to 8.3 percent below nominal under absolute specifications.
That means that what utility services promising nominal 208-volt service actually deliver can range from 191 to 220 volts.
• Utility power isn't 100% reliable. In fact, it's 99.9% reliable, which translates into roughly 9 hours of utility outages
every year (see the short calculation after this list).
• The problems and risks are intensifying. Today’s storage systems, servers and network devices use components so
miniaturized that they falter and fail under power conditions that earlier-generation equipment easily withstood.
• Generators and Surge Suppressors are not enough. Generators can keep systems operational during a utility outage,
but they take time to start (start-up time) and provide no protection from power spikes and other electrical disturbances.
Surge suppressors help with power spikes but not with issues like power loss, under-voltage and brownout conditions.
• Availability is everything these days. When systems are down, business processes quickly come to a standstill.
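
A quick check of the "99.9% reliable" figure quoted in the list above:

hours_per_year = 365 * 24          # 8760 hours
unavailability = 1 - 0.999         # 0.1 %
print(hours_per_year * unavailability)   # about 8.8 hours of outage per year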

UPS (Uninterruptible Power Supply) is a device that:


* Provides backup power when incoming AC power fails, either long enough for critical equipment to shut down so that no
data is lost, or long enough to keep required loads operational until a generator comes online.
* Conditions incoming AC power so that all-too-common sags and surges do not damage the sensitive electronic systems.

An uninterruptible power supply (uninterruptible power source, UPS, battery/flywheel backup) is an electrical apparatus that


provides emergency power to a load when the input power source fails (typically mains power).
A UPS differs from an emergency power system or standby generator in that it provides near-instantaneous protection
from input power interruptions by supplying energy stored in batteries or a flywheel.
The on-battery runtime of an uninterruptible power source is relatively short (from a few minutes to several hours),
but sufficient to start a standby power source or properly shut down the equipment being protected.

The primary role of any UPS is to provide short-term power when the input power source fails.
However, most UPS units are also capable in varying degrees of correcting common utility power problems:
 Voltage spike or sustained Overvoltage
 Momentary or sustained reduction in input voltage.
 Noise, defined as high frequency transient or oscillation, usually injected into the line by nearby equipment.
 Instability of the mains frequency.
 Harmonic distortion: defined as a departure from the ideal sinusoidal waveform expected on the line.

UPS TYPES
Single Conversion Systems : Standby/Offline Type and Line-Interactive Type
Double Conversion Systems : On-Line Type
Multi-Mode Systems : Combination of Line-Interactive and On-Line

Single Conversion UPS are more efficient than Double Conversion UPS, but offer less protection. That makes them a
good fit for loads with a higher tolerance for failure. Standby/Offline UPS (the most basic type of single conversion UPS)
are generally the best option for smaller applications, like desktop and point-of-sale solutions, while line-interactive UPS
are typically preferable for smaller server, network applications located in facilities with relatively trouble-free AC power.
Double Conversion UPSs, which provide the highest levels of protection, are less efficient but are usually the standard
choice for protecting mission-critical systems.
Multi-mode UPS, although they may be more expensive than either single or double conversion systems, are the best
choice for companies looking to achieve an optimal blend of both efficiency and protection.

Battery Banks (Static UPS) or Flywheel Assembly (Rotary UPS) are sources of power if the incoming AC supply fails.
Though lead-acid batteries are a proven technology well suited to the rigors of data center, they’re also bulky and heavy.
Furthermore due to the toxic chemicals they contain, disposing of them is an expensive and tightly-regulated process.
As an alternative, a flywheel is a mechanical device typically built around a large rotating disk. During normal operation,
electrical power spins the disk. When a power outage occurs, disk continues to spin on its own, generating DC power that
a UPS can use as an emergency energy source. As the UPS consumes that power, the disk gradually loses momentum,
producing less and less energy until eventually it stops moving altogether. (It makes use of stored Kinetic Energy.)
On plus side, flywheels are smaller and lighter than lead-acid batteries, easier to maintain and free of harmful substances.
On negative side, they typically deliver only 30 seconds of standby power. (Used for making up the start-up time of EDG)
SINGLE CONVERSION SYSTEMS
In normal operation, these feed incoming AC power to the equipment. If the AC input supply falls out of predefined limits,
UPS utilizes its inverter to draw current from battery & disconnects input supply to prevent backfeed from inverter to utility.
UPS stays on battery, until AC input returns to its normal tolerances or battery runs out of power, whichever happens first.
Two of the most popular single-conversion designs are Standby/Offline Type and Line-Interactive Type
Standby / Offline Type
The system allows equipment to run on utility power until the UPS detects a problem, at which point it switches to battery power.
Some standby/offline designs incorporate transformers or other devices to provide very limited power conditioning as well.
Line Interactive Type
The system regulates the AC input supply voltage up or down as necessary before allowing it to pass through to the equipment.
However, like standby/Offline UPS, they use their battery to guard against frequency abnormalities.

DOUBLE CONVERSION SYSTEMS


As the name suggests, these devices convert power twice.
First, an input rectifier converts AC power into DC power and feeds it to an output inverter.
The output inverter then processes the power back to AC before sending it on to the equipment.
This double-conversion process isolates critical loads from incoming raw utility power completely, ensuring that equipment
always receives only clean, reliable electricity supply.
In normal operation, a double-conversion UPS system processes power twice.
If AC input supply falls out of predefined limits, however, the input rectifier shuts off and the output inverter begins drawing
power from the battery instead.
UPS stays on battery, until AC input returns to its normal tolerances or battery runs out of power, whichever happens first.
In case of severe overload of inverter or a failure of rectifier or inverter, the static switch bypass path is turned on quickly,
to support the output loads.

MULTI-MODE SYSTEMS
Combines features of single and double conversion technologies while providing improvements in efficiency and reliability:
• Under normal conditions, system operates in line-interactive mode, saving energy and money while also keeping voltage
within safe tolerances and resolving common abnormalities found in utility power.
• If the AC input power falls outside of preset tolerances for line-interactive mode, the system automatically switches to a
double-conversion mode, completely isolating the equipment from the incoming AC source.
• If AC input power falls outside the tolerances of double-conversion rectifier or goes out altogether, the UPS uses battery
to keep supported loads up and running. When the generator comes online, the UPS switches to double conversion mode
until input power stabilizes. Then it transitions back to the high-efficiency line-interactive mode.
Multi-mode UPSs are designed to dynamically strike an ideal balance between efficiency and protection.
REDUNDANCY
FIELD SIDE
Redundancy comes in many forms and is inherent at some level in any plant design.
The most basic form of redundancy requires the inclusion of an auto-manual switch for each component.
In the automatic mode, the plant or system controller runs the process.
In the manual mode this step is by-passed, resulting in continued treatment but a loss of efficiency and/or quality.
As an alternative, sometimes a process can be by-passed. This is common in headworks, in which a by-pass channel can be found.
Using this bypass channel can ensure the plant operates, but it may allow particles to accumulate in downstream clarifiers & basins.

Another form of redundancy exists when more equipment is installed than is required.
For instance, three pumps may be provided when only two are needed. This type of redundancy is quite common in process industry.
Typically the third pump has its own dedicated starter, VFD and control components.
Redundancy is also gained with multiple process trains.
Sometimes each train has its own automation system, or trains have been grouped into multiple control panels.
This also represents a level of redundancy.

SYSTEM SIDE
Redundancy in a process control system means that some or all of the system is duplicated or redundant.
The goal is to eliminate, as much as possible, any single point of failure.
When a piece of equipment or a communication link goes down, a similar or identical component is ready to take over.
There are 3 types of redundant systems. Cold standby, Warm standby, Hot standby
Cold Redundancy
Cold redundancy is for those processes where response time is of minimal concern and may require operator intervention.
Cold standby implies that there will be a significant time delay in getting the replacement system up and running.
In the olden days of steam locomotives, the cold standby was the extra engine in the roundhouse that had to be fired up and brought into service.
Cold standby is not usually used for control systems unless the data changes very infrequently.
The replacement components are generally on the shelf of a warehouse, or installed parallel to the running components but not online.
The hardware and software are available, but may have to be booted up and loaded with the appropriate data.
Warm Redundancy
Warm redundancy is used where time is somewhat critical but a momentary outage is still acceptable.
ln this scenario, a momentary bump can be expected.
Warm redundancy systems typically have two processors connected in a primary and standby configuration.
The primary processor controls the system's inputs and outputs (I/O) while the standby processor is powered up and waits for the
primary processor to stop controlling the process.
When this occurs, standby processor assumes control of I/O and takes designation of primary processor, allowing offline processor to
become the secondary processor, and can be maintained without sacrificing process control.
During normal operation, the primary processor provides periodic updates to the standby processor.
These updates usually occur at the end of each program scan and may only involve a portion of the data at anytime.
Therefore, when a changeover occurs, the standby processor may be working from incomplete data, since it may take the standby processor
a few program scans to catch up to where the primary was before the changeover.
This can contribute to a bump in the process during the changeover.
Hot Redundancy
Hot Redundancy is used when the process must not go down for even a brief moment under any circumstance.
As stated above, the hardware layout of a hot redundancy system is almost identical to a warm redundancy system.
However, hot redundancy systems provide bumpless transfer of the I/O during a changeover from primary to standby.
In Hot standby, both primary & secondary system run simultaneously & both are providing identical data stream to downstream client
The underlying physical system is same, but two data systems use separate hardware to ensure that there is no single point of failure.
When primary system fails, the switchover to secondary system is intended to be completely seamless or bumpless, with no data loss.
There are two main approaches to transferring data to the standby processor. The first is to perform the transfer at the end of each program scan; only upon completion of the transfer will the next scan resume.
This approach is called "scan & transfer." However, there are some things that need to be considered when using a scan & transfer system.
First, the true scan time of the program will be a combination of the program scan and the transfer update.
Since scan time can be critical in certain applications, the program should be designed to minimize scanning.
Manufacturers will offer suggestions on how to limit rung executions to only those instances when conditional logic has changed.
If these suggestions are not incorporated properly, a bump is experienced on the outputs; any bump would create a warm-redundancy situation.
A newer method was developed to eliminate this problem with Scan & Transfer. This method is referred to as "asynchronous transfer."
In an asynchronous transfer, the primary processor has two separate microprocessors embedded in its circuitry.
The first microprocessor executes the program. At the end of the execution, all data is passed to the second microprocessor.
This second microprocessor handles all transfer tasks while the first microprocessor executes the next program scan.
Thus, one microprocessor is executing while the other microprocessor is transferring data to the standby processor.
As the transfer of data from primary to secondary processor is asynchronous to program scan, it now becomes possible to transfer
entire data table without affecting program execution. This eliminates any need to design the program for an optimized scan.
There are a number of ways to construct Redundant power systems or Fault-Tolerant power systems.
The common method is to have at least one supply with sufficient output power to fully satisfy the system's power requirements.
Then a second power supply of the exact same ratings is provided as a "back-up" in the event one of the two supplies fails.

(1+1) (N+N) (2N)


This forms a basic redundant power system and fault-tolerant power system, also generally known as a fully redundant system.
"N" equals the number of supplies required to fully power the system; "+1/+N" equals the back-up/redundant supply that takes over for a failed supply.

(N+1) (N+M)
N is the number of power supplies required to operate the system without redundancy.
M is the number of redundant power supplies (spares) specified to improve the system availability.
If N+M supplies are used to power a system, N are needed to carry the load, and the availability of the power system increases as M increases.

For small systems, generally one component is sufficient to power the system and adding another will give us (1+1) (N+N) system.
For larger systems, additional power supplies are added (N is increased); a system may, for example, require one additional supply to carry the load.
In this case, the system would operate from an N+M = 2+1 redundant power system. Systems requiring higher power will increase the N count.
Further improvement in availability may include increasing the number of spares (M) in system.
Most large systems require M = 1 as minimum, but very high availability systems may specify M = 2.
However, not many system today specify M greater than 2, because power supply MTBF has significantly improved & because of cost.
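
To see why adding spares improves availability, the sketch below computes the probability that at least N of the N+M identical, independent supplies are healthy. This is a textbook reliability calculation, not tied to any particular product, and the 99% availability per supply is only an assumed example figure.

from math import comb

def system_availability(n_required, m_spares, unit_availability):
    # Probability that at least n_required of (n_required + m_spares) units are up.
    total = n_required + m_spares
    a = unit_availability
    return sum(comb(total, k) * a**k * (1 - a)**(total - k)
               for k in range(n_required, total + 1))

print(system_availability(1, 0, 0.99))   # single supply:      0.99
print(system_availability(1, 1, 0.99))   # 1+1 (2N) system:    0.9999
print(system_availability(2, 1, 0.99))   # 2+1 (N+1) system:   about 0.9997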

(N+1) (N+M) Redundancy Options and (1+1) (N+N) (2N) Redundancy Options
A simple way to understand (N+1) (N+M) is to think in terms of a birthday party. Say we need ten cupcakes, but just in case, we order eleven.
N represents the exact number of cupcakes we need, and the extra cupcake represents the +1.
Therefore we have N+1 cupcakes for the party. If we order twelve cupcakes, then we have N+2. This is an example of (N+1) (N+M) systems.

If we plan birthday party with (1+1) (N+N) (2N) redundancy, then we would have the ten cupcakes, plus an additional ten cupcakes.
(1+1) (N+N) (2N) is simply double the amount of cupcakes we need.
At a data center, a (1+1) (N+N) (2N) system contains double the amount of equipment, which runs separately with no single point of failure.
(1+1) (N+N) (2N) systems are far more reliable than an (N+1) (N+M) system because they offer full redundancy.
In event of extended power disturbance, a (1+1) (N+N) (2N) system will still keep things running.
Some data centers offer 2N+1, which is actually double the amount needed plus an extra piece of equipment.

(N+1) (N+M) Redundancy Options and (1+1) (N+N) (2N) Redundancy Options
"N" represents the number of UPS required to deliver necessary amount of power for system. "+1/+M" refers to extra UPS module.
This type of configuration (N+1) (N+M) is not a fully redundant system.
On the other hand (1+1) (N+N) (2N) redundancy means system has double the amount of equipment needed.
The equipment runs separately with no single points of failure. This configuration represents a fully redundant system.
(N+1) (N+M) redundancy requires one or more additional elements that serve as a shared backup for a number of other elements.
Ex. if there are 5 elements, one or more additional elements would be required to serve as a backup for any one of the other 5 elements.
On the other hand, (1+1) (N+N) redundancy requires having a duplicate piece of equipment as a backup in case the primary one fails.
For every piece of equipment, duplicate piece of equipment must be available for backup.

Difference between 1:1 and 1+1


Device B is protecting device A, in a 1:1 configuration. If device A fails, device B will become the active unit in the configuration.
When fault at A is cleared, device A once again becomes active after a predetermined time and B returns back to standby (idle) mode.
Reversion takes place from B to A, in 1:1 configuration.

Device B is protecting A, in a 1+1 configuration. If device A fails, device B will become the active unit in the configuration.
When the fault at A is cleared, device B remains as the active device indefinitely (unless a fault occurs at B).
No reversion takes place in 1+1 configuration.

The key advantage of 1+1 systems is that only one disruption takes place.
In 1:1 configurations, there are two disruptions taking place.
One when the fault occurs and a second when the fault is cleared and reversion takes place from the standby unit.
