Completing a power plant’s start-up and commissioning usually means pushing the prime
contractor to wrap up the remaining punch list items and getting the new operators trained.
Staffers are tired of the long hours they’ve put in and are looking forward to settling into a work
routine.
Just when the job site is beginning to look like an operating plant, a group of engineers arrives
with laptops in hand, commandeers the only spare desk in the control room, and begins to
unpack boxes of precision instruments. In a fit of controlled confusion, the engineers install the
instruments, find primary flow elements, and make the required connections. Wires are dragged
back to the control room and terminated at a row of neatly arranged laptops. When the test
begins, the test engineers stare at their monitors as if they were watching the Super Bowl and
trade comments in some sort of techno-geek language. The plant performance test has begun
(Figure 1).
<img alt=""
src="http://www.powermag.com/wp-content/uploads/2006/09/520004dcca100-60-1.jpg" />
1. Trading spaces. This is a typical setup of data acquisition computers used during a plant
performance test. Courtesy: McHale & Associates
Anatomy of a test
The type and extent of plant performance testing activities are typically driven by the project
specifications or the turnkey contract. They also usually are linked to a key progress payment
milestone, although the value of the tests goes well beyond legalese. The typical test is designed
to verify power and heat rate guarantees that are pegged to an agreed-upon set of operating
conditions. Sounds simple, right? But the behind-the-scenes work to prepare for a test on which
perhaps millions of dollars are at stake beyond the contract guarantees almost certainly exceeds
your expectations (see box).
Mid-term exams
There are many reasons to evaluate the performance of a plant beyond meeting contract
guarantees. For example, a performance test might be conducted on an old plant to verify its
output and heat rate prior to an acquisition to conclusively determine its asset value. Other
performance tests might verify capacity and heat rate for the purpose of maintaining a power
purchase agreement, bidding a plant properly into a wholesale market, or confirming the
performance changes produced by major maintenance or component upgrades.
Performance tests are also an integral part of a quality performance monitoring program. If
conducted consistently, periodic performance tests can quantify non-recoverable degradation and
gauge the success of a facility’s maintenance programs. Performance tests also can be run on
individual plant components to inform maintenance planning. If a component is performing
better than expected, the interval between maintenance activities can be extended. If the opposite
is the case, additional inspection or repair items may be added to the next outage checklist.
Whatever the reason for a test, its conduct should be defined by industry-standard specifications
such as the Performance Test Codes (PTCs) published by the American Society of Mechanical
Engineers (ASME), whose web site—www.asme.org—has a complete list of available codes.
Following the PTCs allows you to confidently compare today’s and tomorrow’s results for the
same plant or equipment. Here, repeatability is the name of the game.
The PTCs don’t prescribe how to test every plant configuration but, rather, set general
guidelines. As a result, some interpretation of the codes’ intent is always necessary. In fact, the
PTCs anticipate variations in test conditions and reporting requirements in a code-compliant test.
The test leader must thoroughly understand the codes and the implications of how they are
applied to the plant in question. Variances must be documented, and any test anomalies must
either be identified and corrected before starting the test or be accounted for in the final test
report.
A performance test involves much more than just taking data and writing a report. More time is
spent in planning and in post-test evaluations of the data than on the actual test. Following is a
brief synopsis describing the process of developing and implementing a typical performance test.
Obviously, the details of a particular plant and the requirements of its owner should be taken into
account when developing a specific test agenda.
Each input to the calculation must be analyzed for its impact on the final result. This
impact is identified as the sensitivity of the result to that input. For example, if inlet air
temperature changes by 3 degrees F, and the corrected output changes by 1%, the sensitivity is
1% per 3 degrees F or 0.33%/degree F.
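To make the arithmetic concrete, here is a minimal sketch in Python; the 3 degree F change and
1% output swing are the illustrative values from the example above, not measured data.

```python
# Illustrative sensitivity calculation using the example values above
delta_input = 3.0    # change in inlet air temperature, deg F
delta_result = 1.0   # resulting change in corrected output, percent

sensitivity = delta_result / delta_input  # percent per deg F
print(f"sensitivity = {sensitivity:.2f} %/deg F")  # prints 0.33 %/deg F
```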
The instrumentation information is used to identify the systematic error potential for each input.
For example, a precision 4-wire resistance-temperature detector can measure inlet air
temperature with an accuracy of +/- 0.18F, based on information provided by the manufacturer
and as confirmed during periodic calibrations.
During a test run, multiple recordings are made for any given parameter, and there will be scatter
in the data. The amount of scatter in the data is an indication of the random error potential for
each input. For example, during a 1-hour test run, the inlet air temperature may be recorded as an
average of 75F, with a standard deviation in the measurements of 0.6F.
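A short sketch of how that scatter might be quantified; the readings below are hypothetical,
chosen only to illustrate the calculation. The division by the square root of the number of
readings at the end reflects that the uncertainty of an average shrinks as more readings are taken.

```python
import statistics

# Hypothetical 1-minute inlet air temperature readings (deg F)
readings = [74.3, 75.2, 75.8, 74.6, 75.4, 74.9, 75.5, 74.3]

average = statistics.mean(readings)
scatter = statistics.stdev(readings)          # sample standard deviation
std_of_mean = scatter / len(readings) ** 0.5  # random uncertainty of the average

print(f"average = {average:.1f} F, std dev = {scatter:.2f} F, "
      f"std of mean = {std_of_mean:.2f} F")
```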
If more than one sensor is used to measure a parameter, there also will be variances between
sensors based on location. These variances may be due to the variances either in the
instrumentation or in the actual parameter measured. For example, if air temperature is being
measured by an array of sensors, there may be effects due to ground warming or exhaust vents in
the area, either of which would affect the uncertainty of the bulk average measurement. These
variances will affect the average and standard deviation values for that parameter. Spatial
variances are added into the systematic error potential, based on the deviation of each location
from the average value for all locations.
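One common way to fold spatial variation into the systematic term, sketched below with
hypothetical sensor averages, is to take the standard deviation of the individual location
averages about the bulk average and divide by the square root of the number of locations; ASME
PTC 19.1 discusses treatments along these lines.

```python
import statistics

# Hypothetical run-average readings from four inlet air temperature sensors (deg F)
location_averages = [74.7, 75.5, 74.9, 75.7]

bulk_average = statistics.mean(location_averages)
spatial_stdev = statistics.stdev(location_averages)  # deviation of each location from the bulk average

# Fold the spatial variation into the systematic error potential
b_spatial = spatial_stdev / len(location_averages) ** 0.5
print(f"bulk average = {bulk_average:.1f} F, spatial systematic term = {b_spatial:.2f} F")
```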
Now that we’ve defined the three separate inputs to the uncertainty determination—sensitivity
(A), systematic error potential/uncertainty (B), and random error potential/uncertainty (C)—it’s
time to put on our statistician’s hats.
The "t" value on the right side of the equation is known as the Student-t factor and is based on
the number of degrees of freedom (or number of data points recorded) in the data set. For a 95%
confidence interval and data taken at 1-minute intervals for a 60-minute test run, the value of "t"
is 2.0. If data are taken less frequently (such as at 2-minute intervals), fewer recordings are made
and therefore either the test run must be longer (which is not recommended, because ambient
conditions may change) or the value of "t" will increase.
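If you want to see where the 2.0 comes from, the Student-t factor can be pulled from any
statistics package; here is a sketch using scipy, with reading counts chosen to match the
1-minute and 2-minute recording intervals discussed above.

```python
from scipy.stats import t as student_t

# Two-sided 95% confidence interval -> 97.5th percentile of the t distribution
for n_readings in (60, 30):    # 1-minute vs. 2-minute recording intervals
    dof = n_readings - 1       # degrees of freedom
    t_value = student_t.ppf(0.975, dof)
    print(f"{n_readings} readings: t = {t_value:.3f}")
# 60 readings give t of about 2.0; halving the recording frequency raises it slightly
```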
The example given above is for a single parameter, such as inlet air temperature, and its effect on
corrected output. The same process must be carried out for each correction parameter (such as
barometric pressure and relative humidity) to determine the sensitivity of the corrected result to
that parameter and the parameter’s systematic and random uncertainties.
Once the individual uncertainties have been identified, they can be combined to determine the
overall uncertainty of the corrected result. Combining the individual uncertainties is a three-step
process:
1. Determine the total systematic uncertainty as the square root of the sum of the squares of all
the individual systematic uncertainties.
2. Determine the total random uncertainty as the square root of the sum of the squares of all the
individual random uncertainties.
3. Combine the total systematic uncertainty and total random uncertainty as follows: Total
uncertainty = SQRT[(systematic_total)^2 + (t × random_total)^2].
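As a sketch of the three steps, here is a small Python function; the contribution values are
hypothetical and assume each term has already been multiplied by its sensitivity.

```python
from math import sqrt

def total_uncertainty(systematic_terms, random_terms, t_value=2.0):
    """Combine individual uncertainty contributions per the three steps above."""
    systematic_total = sqrt(sum(b ** 2 for b in systematic_terms))       # step 1
    random_total = sqrt(sum(s ** 2 for s in random_terms))               # step 2
    return sqrt(systematic_total ** 2 + (t_value * random_total) ** 2)   # step 3

# Hypothetical contributions to corrected output (percent) from inlet air
# temperature, barometric pressure, and relative humidity
print(f"{total_uncertainty([0.06, 0.04, 0.02], [0.08, 0.05, 0.03]):.2f}%")  # about 0.21%
```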
The result of the analysis is an expression stated in terms of the uncertainty calculated for an
individual instrument or the overall system. We might normally say, "The inlet air temperature is
75F," but when including an uncertainty analysis of a temperature measurement system, a more
accurate statement would be, "We are 95% certain that the inlet air temperature is between 74.6F
and 75.4F."
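Expressed in code, that statement is just the measured average plus or minus the total
uncertainty; the 0.4F value below is the example figure from the sentence above.

```python
average = 75.0             # measured inlet air temperature, deg F
total_uncertainty_f = 0.4  # total uncertainty at 95% confidence, deg F

low, high = average - total_uncertainty_f, average + total_uncertainty_f
print(f"We are 95% certain that the inlet air temperature is between {low}F and {high}F")
```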
Once again, the value for "t" will depend on the design of the test, including the number of
sensors and the frequency of data recordings. Additional information on the Student-t
factor as well as a discussion of how to determine uncertainty can be found in ASME PTC 19.1
(Test Uncertainty).