

Accuracy and precision


From Wikipedia, the free encyclopedia

In the fields of science, engineering, industry and statistics, the accuracy[1] of a measurement system is the degree of closeness of measurements of a quantity to that quantity's actual (true) value. The precision[1] of a measurement system, also called reproducibility or repeatability, is the degree to which repeated measurements under unchanged conditions show the same results.[2] Although the two words can be synonymous in colloquial use, they are deliberately contrasted in the context of the scientific method.

This glossary serves to introduce the user to the terminology used by Lee Company engineers in describing our products. These descriptions are proposed to serve as a reference point in product discussions to eliminate problems of definition. While these terms are subject to differing interpretations across various fields, it is proposed that these definitions be adhered to, to aid efficient communication.

Accuracy: Accuracy is the degree of error between the intended, specified, or nominal property value and the actual value. Typically used to define the performance envelope of a production lot of parts about the specified nominal. Normally used to relate single-event performance of multiple parts. Compare to PRECISION.

Axial Mixing: See FLUSHABILITY.

Backlash (Mechanical Hysteresis): Backlash is defined as the amount (usually in microliters) of "play" or error in the mechanical drive of the pump assembly. This is only noticeable when the motor armature changes direction. The error is the result of the clearance between the screw and nut portions of the drive. Properly accounted for with drive software, the backlash can be made insignificant to accuracy and precision.

Carry-over Volume: See CROSSOVER VOLUME.

Coefficient of Variation (CV): CV is defined as the standard deviation of a distribution of data divided by the mean value. This value, expressed in percent, reflects the degree of spread of the data and is used to define the consistency of performance, of dispensed volumes, or of another parameter. (A short worked example follows this glossary.)

Crossover Volume (Dead-leg Volume, Carry-over Volume): Crossover Volume is any internal-geometry-dependent volumetric error introduced by the valve internal volume between the valving point and the common flow point. Most commonly used in discussions of three-way valves, it refers to the unflushed slug of material between the flowing passage and the closed port seal.

Crosstalk (Intra-port Flow): Crosstalk is any response-time-dependent flow or pressure variation between any two valves or two ports of a three-way valve. For example, this term refers to the flow that takes place between the Normally Closed and Normally Open ports of a three-way valve in the time between the beginning of actuation and the end of actuation, when both ports are partially open.

Dead-leg Volume: See CROSSOVER VOLUME.

Flushability (Axial Mixing): Flushability is the degree of dispersion or band-broadening introduced by a component into a flowing stream. Sometimes referred to as axial mixing, it defines the stretching of a slug of sample as it passes through a component. Usually discussed in relative or qualitative terms, as the specific definition of this characteristic is somewhat complex.

Response Time: This term defines the lag time between the input of a control signal and the resulting response of the system or component being monitored. Typical use of the response time with a passive component could define the time lag between a pressure pulse input to a check valve and the time to close or open the valve seat in response to that pulse. The more common usage is in reference to active components, such as solenoid valves. The term then typically defines the time from the beginning of a normal voltage step-input drive signal to the pneumatic output from the valve port that is opening or closing as a result of that signal. For further discussion of response times, contact your Lee Sales Engineer.

Dead Volume: The actual non-flushable volumes of any component or system flow passages, where a dead-end passageway or cavity could retain materials that contaminate subsequent samples or flow media. This value is highly subjective, as many factors come into play in determining the actual dead volume, such as miscibility, viscosity, binding energy, etc. The quantity of the former sample still retained inside the component after flushing with some specified volume is defined as the dead volume.

Intra-port Flow: See CROSSTALK.

Repeatability (Precision): The repeatability of any function means consistency of performance, even if the performance is not accurate. Used in reference to valve response times or dispensed volumes. Usually specified in terms of the percent tolerance about the nominal, specified, or mean value. Used to express the total variability of a single component over multiple events.
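The CV definition above reduces to a one-line computation. Here is a minimal sketch in Python; the dispensed volumes are hypothetical, not Lee Company data.

    # Coefficient of variation (CV): standard deviation divided by the mean, in percent.
    # The sample volumes below are hypothetical.
    import statistics

    volumes_ul = [50.1, 49.8, 50.3, 49.9, 50.2]  # dispensed volumes in microliters

    mean = statistics.mean(volumes_ul)
    sd = statistics.stdev(volumes_ul)  # sample standard deviation
    cv_percent = 100 * sd / mean
    print(f"mean = {mean:.2f} uL, SD = {sd:.3f} uL, CV = {cv_percent:.2f}%")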

Accuracy indicates proximity of measurement results to the true value; precision indicates the repeatability or reproducibility of the measurement. A measurement system can be accurate but not precise, precise but not accurate, neither, or both. For example, if an experiment contains a systematic error, then increasing the sample size generally increases precision but does not improve accuracy. The end result would be a consistent yet inaccurate string of results from the flawed experiment. Eliminating the systematic error improves accuracy but does not change precision. A measurement system is designated valid if it is both accurate and precise. Related terms include bias (non-random or directed effects caused by a factor or factors unrelated to the independent variable) and error (random variability).

The terminology is also applied to indirect measurements, that is, values obtained by a computational procedure from observed data. In addition to accuracy and precision, measurements may also have a measurement resolution, which is the smallest change in the underlying physical quantity that produces a response in the measurement. In the case of full reproducibility, such as when rounding a number to a representable floating point number, the word precision has a meaning not related to reproducibility. For example, in the IEEE 754-2008 standard it means the number of bits in the significand, so it is used as a measure for the relative accuracy with which an arbitrary number can be represented.
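The IEEE 754 sense of "precision" is easy to inspect directly. A minimal Python sketch (Python floats are IEEE 754 binary64, so the significand has 53 bits; math.ulp requires Python 3.9+):

    # "Precision" here means significand bits, not repeatability.
    import math
    import sys

    print(sys.float_info.mant_dig)  # 53 bits in the significand of a binary64 float
    print(sys.float_info.epsilon)   # 2**-52: gap between 1.0 and the next larger float
    # Absolute resolution grows with magnitude, so relative accuracy stays roughly 2**-53.
    print(math.ulp(1e6))            # spacing of representable floats near 1,000,000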

Contents

1 Accuracy versus precision: the target analogy
2 Quantifying accuracy and precision
3 Accuracy and precision in binary classification
4 Accuracy and precision in psychometrics and psychophysics
5 Accuracy and precision in logic simulation
6 Accuracy and precision in information systems
7 See also
8 References
9 External links

Accuracy versus precision: the target analogy

[Figure: target diagram showing high accuracy, but low precision]

[Figure: target diagram showing high precision, but low accuracy]

Accuracy is the degree of veracity, while in some contexts precision may mean the degree of reproducibility.[citation needed] The analogy used here to explain the difference between accuracy and precision is the target comparison. In this analogy, repeated measurements are compared to arrows that are shot at a target. Accuracy describes the closeness of arrows to the bullseye at the target center. Arrows that strike closer to the bullseye are considered more accurate. The closer a system's measurements are to the accepted value, the more accurate the system is considered to be. To continue the analogy, if a large number of arrows are shot, precision would be the size of the arrow cluster. (When only one arrow is shot, precision is the size of the cluster one would expect if this were repeated many times under the same conditions.) When all arrows are grouped tightly together, the cluster is considered precise since they all struck close to the same spot, even if not necessarily near the bullseye. The measurements are precise, though not necessarily accurate. However, it is not possible to reliably achieve accuracy in individual measurements without precision: if the arrows are not grouped close to one another, they cannot all be close to the bullseye. (Their average position might be an accurate estimation of the bullseye, but the individual arrows are inaccurate.) See also circular error probable for application of precision to the science of ballistics.
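The target analogy can also be made numerical. In the sketch below (all numbers hypothetical), a biased but tight "archer" and an unbiased but scattered one are compared by mean offset from the bullseye (accuracy) and by spread (precision):

    # Target analogy: bias measures (in)accuracy, spread measures (im)precision.
    import random
    import statistics

    random.seed(1)
    bullseye = 0.0

    precise_inaccurate = [random.gauss(2.0, 0.1) for _ in range(100)]  # tight, off-center
    accurate_imprecise = [random.gauss(0.0, 1.0) for _ in range(100)]  # centered, scattered

    for name, shots in [("precise/inaccurate", precise_inaccurate),
                        ("accurate/imprecise", accurate_imprecise)]:
        offset = statistics.mean(shots) - bullseye  # systematic offset (accuracy)
        spread = statistics.stdev(shots)            # cluster size (precision)
        print(f"{name}: mean offset = {offset:+.2f}, spread = {spread:.2f}")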

Quantifying accuracy and precision


See also: False precision

Ideally a measurement device is both accurate and precise, with measurements all close to and tightly clustered around the known value. The accuracy and precision of a measurement process are usually established by repeatedly measuring some traceable reference standard. Such standards are defined in the International System of Units and maintained by national standards organizations such as the National Institute of Standards and Technology. This also applies when measurements are repeated and averaged. In that case, the term standard error is properly applied: the precision of the average is equal to the known standard deviation of the process divided by the square root of the number of measurements averaged (a short numerical check of this appears after the list below). Further, the central limit theorem shows that the probability distribution of the averaged measurements will be closer to a normal distribution than that of individual measurements. With regard to accuracy we can distinguish:

- the difference between the mean of the measurements and the reference value, the bias. Establishing and correcting for bias is necessary for calibration.
- the combined effect of that and precision.
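The standard error claim above can be checked by simulation. A minimal sketch with a hypothetical measurement process (the true value, SD, and sample sizes are made up):

    # Check: the SD of n-measurement averages approaches sigma / sqrt(n).
    import random
    import statistics

    random.seed(0)
    true_value, sigma, n, trials = 10.0, 0.5, 25, 2000

    averages = [statistics.mean(random.gauss(true_value, sigma) for _ in range(n))
                for _ in range(trials)]
    print(statistics.stdev(averages))  # empirical precision of the average
    print(sigma / n ** 0.5)            # predicted standard error: 0.1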

A common convention in science and engineering is to express accuracy and/or precision implicitly by means of significant figures. Here, when not explicitly stated, the margin of error is understood to be one-half the value of the last significant place. For instance, a recording of 843.6 m, or 843.0 m, or 800.0 m would imply a margin of 0.05 m (the last significant place is the tenths place), while a recording of 8,436 m would imply a margin of error of 0.5 m (the last significant digits are the units). A reading of 8,000 m, with trailing zeroes and no decimal point, is ambiguous; the trailing zeroes may or may not be intended as significant figures. To avoid this ambiguity, the number could be represented in scientific notation: 8.0 × 10^3 m indicates that the first zero is significant (hence a margin of 50 m), while 8.000 × 10^3 m indicates that all three zeroes are significant, giving a margin of 0.5 m. Similarly, it is possible to use a multiple of the basic measurement unit: 8.0 km is equivalent to 8.0 × 10^3 m; it indicates a margin of 0.05 km (50 m). However, reliance on this convention can lead to false precision errors when accepting data from sources that do not obey it. (A small sketch of this convention appears after the list below.)

Looking at this in another way, a value of 8 would mean that the measurement has been made with a precision of 1 (the measuring instrument was able to measure only down to the ones place), whereas a value of 8.0 (though mathematically equal to 8) would mean that the value at the first decimal place was measured and was found to be zero. (The measuring instrument was able to measure the first decimal place.) The second value is more precise. Neither of the measured values may be accurate (the actual value could be 9.5 but measured inaccurately as 8 in both instances). Thus, accuracy can be said to be the 'correctness' of a measurement, while precision could be identified as the ability to resolve smaller differences. Precision is sometimes stratified into:

- Repeatability: the variation arising when all efforts are made to keep conditions constant by using the same instrument and operator, and repeating during a short time period; and
- Reproducibility: the variation arising when using the same measurement process among different instruments and operators, and over longer time periods.
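The half-of-the-last-place convention above is mechanical enough to code. A minimal sketch; implied_margin is a hypothetical helper, not a standard library function:

    # Implied margin of error: half the value of the last significant place.
    from decimal import Decimal

    def implied_margin(recorded: str) -> Decimal:
        # Decimal keeps the written exponent, so "843.6" has exponent -1.
        exponent = Decimal(recorded).as_tuple().exponent
        return Decimal(5) * Decimal(10) ** (exponent - 1)

    print(implied_margin("843.6"))  # 0.05
    print(implied_margin("8436"))   # 0.5
    print(implied_margin("8.0e3"))  # 50, since scientific notation keeps the zero significant
    # Note: a plain "8000" is ambiguous to a human reader; this helper reads it
    # as significant to the ones place and returns 0.5.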

Accuracy and precision in binary classification


Accuracy is also used as a statistical measure of how well a binary classification test correctly identifies or excludes a condition.

                          Condition (as determined by Gold standard)
                          True                False
Test       Positive       True positive       False positive      Positive predictive value
outcome    Negative       False negative      True negative       Negative predictive value
                          Sensitivity         Specificity         Accuracy

That is, the accuracy is the proportion of true results (both true positives and true negatives) in the population. It is a parameter of the test.

On the other hand, precision is defined as the proportion of the true positives against all the positive results (both true positives and false positives).

An accuracy of 100% means that the measured values are exactly the same as the given values. See also Sensitivity and specificity.

Accuracy may be determined from sensitivity and specificity, provided prevalence is known, using the equation:

accuracy = (sensitivity)(prevalence) + (specificity)(1 − prevalence)

The accuracy paradox for predictive analytics states that predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. It may be better to avoid the accuracy metric in favor of other metrics such as precision and recall.[citation needed] In situations where the minority class is more important, the F-measure may be more appropriate, especially in situations with very skewed class imbalance. An alternate performance measure that treats both classes with equal importance is "balanced accuracy", the mean of sensitivity and specificity.
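These definitions translate directly into arithmetic. A minimal sketch with hypothetical confusion-matrix counts (tp, fp, fn, tn are made-up numbers):

    # Accuracy, precision, and related rates from confusion-matrix counts.
    tp, fp, fn, tn = 40, 10, 5, 45
    total = tp + fp + fn + tn

    accuracy    = (tp + tn) / total   # proportion of true results
    precision   = tp / (tp + fp)      # positive predictive value
    sensitivity = tp / (tp + fn)      # true positive rate (recall)
    specificity = tn / (tn + fp)      # true negative rate
    prevalence  = (tp + fn) / total

    # Accuracy recovered from sensitivity, specificity, and prevalence:
    accuracy_check = sensitivity * prevalence + specificity * (1 - prevalence)
    balanced_accuracy = (sensitivity + specificity) / 2

    print(accuracy, accuracy_check)   # both 0.85
    print(precision, balanced_accuracy)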

Accuracy and precision in psychometrics and psychophysics


In psychometrics and psychophysics, the term accuracy is used interchangeably with validity and constant error. Precision is a synonym for reliability and variable error. The validity of a measurement instrument or psychological test is established through experiment or correlation with behavior. Reliability is established with a variety of statistical techniques, classically through an internal consistency test like Cronbach's alpha, to ensure sets of related questions have related responses, and then comparison of those related questions between reference and target populations.[citation needed]
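As an illustration of the internal-consistency idea, here is a minimal Cronbach's alpha computation in Python; the item scores are hypothetical, and the formula used is alpha = k/(k-1) * (1 - sum of item variances / variance of total scores):

    # Cronbach's alpha from a small respondents-by-items score matrix (hypothetical data).
    import statistics

    scores = [  # rows = respondents, columns = items on the same scale
        [4, 5, 4, 4],
        [3, 3, 4, 3],
        [5, 5, 5, 4],
        [2, 3, 2, 3],
        [4, 4, 5, 5],
    ]
    k = len(scores[0])
    item_vars = [statistics.variance(col) for col in zip(*scores)]
    total_var = statistics.variance([sum(row) for row in scores])

    alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
    print(f"Cronbach's alpha = {alpha:.3f}")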

Accuracy and precision in logic simulation


In logic simulation, a common mistake in evaluation of accurate models is to compare a logic simulation model to a transistor circuit simulation model. This is a comparison of differences in precision, not accuracy. Precision is measured with respect to detail and accuracy is measured with respect to reality.[3][4]

Accuracy and precision in information systems


The concepts of accuracy and precision have also been studied in the context of databases, information systems and their sociotechnical context. The necessary extension of these two concepts on the basis of the theory of science suggests that they (as well as data quality and information quality) should be centered on accuracy, defined as the closeness to the true value, seen as the degree of agreement of readings or of calculated values of one and the same conceived entity, measured or calculated by different methods, in the context of maximum possible disagreement.[5]

See also


Accuracy class
ANOVA Gauge R&R
ASTM E177 Standard Practice for Use of the Terms Precision and Bias in ASTM Test Methods
Experimental uncertainty analysis
Failure assessment
Gain (information retrieval)
Precision bias
Precision engineering
Precision (statistics)
Accepted and experimental value

References

1. ^ a b JCGM 200:2008 International Vocabulary of Metrology - Basic and General Concepts and Associated Terms (VIM)
2. ^ John Robert Taylor (1999). An Introduction to Error Analysis: The Study of Uncertainties in Physical Measurements. University Science Books. pp. 128-129. ISBN 0-935702-75-X.
3. ^ John M. Acken, Encyclopedia of Computer Science and Technology, Vol. 36, 1997, pp. 281-306.
4. ^ 1990 Workshop on Logic-Level Modelling for ASICS, Mark Glasser, Rob Mathews, and John M. Acken, SIGDA Newsletter, Vol. 20, No. 1, June 1990.
5. ^ Ivanov, K. (1972). "Quality-control of information: On the concept of accuracy of information in data banks and in management information systems". The University of Stockholm and The Royal Institute of Technology. Doctoral dissertation. Further details are found in Ivanov, K. (1995). "A subsystem in the design of informatics: Recalling an archetypal engineer". In B. Dahlbom (Ed.), The infological equation: Essays in honor of Börje Langefors (pp. 287-301). Gothenburg: Gothenburg University, Dept. of Informatics (ISSN 1101-7422).

External links


Look up accuracy or precision in Wiktionary, the free dictionary.

BIPM - Guides in metrology - Guide to the Expression of Uncertainty in Measurement (GUM) and International Vocabulary of Metrology (VIM)
"Beyond NIST Traceability: What really creates accuracy" - Controlled Environments magazine
Precision and Accuracy with Three Psychophysical Methods
Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results, Appendix D.1: Terminology
Accuracy and Precision


B. Accuracy vs. Precision, and Error vs. Uncertainty


When we discuss measurements or the results of measuring instruments, there are several distinct concepts involved which are often confused with one another. This section describes four important ideas and establishes the differences between them. The first distinction is between accuracy and precision.

Accuracy

Accuracy refers to the agreement between a measurement and the true or correct value. If a clock strikes twelve when the sun is exactly overhead, the clock is said to be accurate. The measurement of the clock (twelve) and the phenomenon it is meant to measure (the sun located at zenith) are in agreement. Accuracy cannot be discussed meaningfully unless the true value is known or is knowable. (Note: the true value of a measurement can never be known.) Accuracy refers to the agreement of the measurement and the true value and does not tell you about the quality of the instrument. The instrument may be of high quality and still disagree with the true value. In the example above it was assumed that the purpose of the clock is to measure the location of the sun as it appears to move across the sky. However, in our system of time zones the sun is directly overhead at twelve o'clock only if you are at the center of the time zone. If you are at the eastern edge of the time zone the sun is directly overhead around 11:30, while at the western edge the sun is directly overhead at around 12:30. So at either edge the twelve o'clock reading does not agree with the phenomenon of the sun being at the local zenith, and we might complain that the clock is not accurate. Here the accuracy of the clock reading is affected by our system of time zones rather than by any defect of the clock.

In the case of time zones, however, clocks measure something slightly more abstract than the location of the sun. We define the clock at the center of the time zone to be correct if it matches the sun; we then define all the other clocks in that time zone to be correct if they match the central clock. Thus a clock at the eastern edge of a time zone that reads 11:30 when the sun is overhead would still be accurate, since it agrees with the central clock. A clock that read 12:00 would not be accurate at that time. The idea to get used to here is that accuracy only refers to the agreement between the measured value and the expected value, and that this may or may not say something about the quality of the measuring instrument. A stopped clock is accurate at least once each day.

Precision

Precision refers to the repeatability of measurement. It does not require us to know the correct or true value. If each day for several years a clock reads exactly 10:17 AM when the sun is at the zenith, this clock is very precise. Since there are more than thirty million seconds in a year, this device is more precise than one part in one million! That is a very fine clock indeed! You should take note here that we do not need to consider the complications of the edges of time zones to decide that this is a good clock. The true meaning of noon is not important, because we only care that the clock is giving a repeatable result.

Error

Error refers to the disagreement between a measurement and the true or accepted value. You may be amazed to discover that error is not that important in the discussion of experimental results. This statement certainly needs some explanation. As with accuracy, you must know the true or correct value to discuss your error. But consider what science is about. The central objective is to discover new things. If they are new, then we do not know what the true value is ahead of time, and thus it is not possible to discuss our error. You might raise the possibility that the experiment has a defective component or an incorrect assumption, so that an error is made. Of course the scientist is concerned about this. Typically there has been much discussion with other scientists and a review of the methods to try to avoid exactly this possibility. However, if an error occurs, we simply will not know it. The true value has not yet been established and there is no other guide. The good scientist assumes the experiment is not in error; it is the only choice available. Later research, attempts by other scientists to repeat the result, will hopefully reveal any problems, but the first time around there is no such guide.

Students in science classes are in an artificial situation. Their experiments are necessarily repetitions of previous work, so the results are known. Because of this, students learn a poor lesson about science. Students often are very conscious of error to the point where they assume it happens in every experiment. This is distracting to the project of becoming a scientist. If you want to benefit most from your laboratory experiences, you will need to do some judicious pretending. After the experiment has been conducted, while you write up the result in your lab report, assume that error is not a consideration. Your team has done the best it can in the lab and you must account for the results on that basis.

Do not write "human error" as any part of your lab report; it is in the first place embarrassing, and in our experience as faculty members it is rarely the source of experimental problems. (Well over half of the problems producing bad laboratory results are due to analysis errors in the report! Look here first.)

Uncertainty

The uncertainty of a measured value is an interval around that value such that any repetition of the measurement will produce a new result that lies within this interval. This uncertainty interval is assigned by the experimenter following established principles of uncertainty estimation. One of the goals of this document is to help you become proficient at assigning and working with uncertainty intervals.

Uncertainty, rather than error, is the important term to the working scientist. In a sort of miraculous way, uncertainty allows the scientist to make completely certain statements. Here is an example to see how this works. Let us say that your classmate has measured the width of a standard piece of notebook paper and states the result as 8.53 ± 0.08 inches. By stating the uncertainty to be 0.08 inches, your classmate is claiming with confidence that every reasonable measurement of this piece of paper by other experimenters will produce a value not less than 8.45 inches and not greater than 8.61 inches.

Suppose you measured the length of your desk with a ruler or tape measure, and the result was one meter and twenty centimeters (L = 1.20 m). Now the true length is not known here, in part because you do not have complete knowledge of the manufacture of the measuring device, and because you cannot see microscopically to confirm that the edge of the table exactly matches the marks on the device. Thus you cannot discuss error in this case. Nevertheless, you would not say with absolute certainty that L = 1.20 m. However, it is quite easy to imagine that you could be certain that the desk was not more than fifteen centimeters (about six inches) different from your measurement. You may have experience with tape measures, and based on that experience you are sure that your tape measure could not be stretched out by six inches compared to its proper length. If you do not have this confidence, perhaps ten inches or a foot would make you confident. After measuring, you might say "This desk is not longer than 1.35 m and not shorter than 0.95 m." You could make this statement with complete confidence. The scientist would write L = 1.20 ± 0.15 m. The format is "value plus or minus uncertainty."

Notice that it is always possible to construct a completely certain sentence. In the worst case we might say the desk is not shorter than zero meters and not longer than four meters (because it would not fit the room). This measurement may be nearly useless, but it is completely certain! By stating a confidence interval for a measurement, the scientist makes statements that any reasonable scientist must agree with. The skill comes in getting the confidence intervals (the uncertainty) to be as small as possible.

This is your task in the laboratory. Every measurement you make should be considered along with a confidence interval, and you should assign this uncertainty to the measurement at the time that you record the data. Having presented the example, here is the definition of uncertainty.

The uncertainty in a stated measurement is the interval of confidence around the measured value such that the measured value is certain not to lie outside this stated interval. Uncertainties may also be stated along with a probability. In this case the measured value has the stated probability to lie within the confidence interval. A particularly common example is one standard deviation (SD) for the average of a random sample. The format "value ± 1 SD" means that if you repeat the measurement, 68% of the time your new measurement will fall within this interval.
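One convenient way to work with the "value plus or minus uncertainty" format is to treat a measurement as an interval, and to call two stated measurements consistent when their intervals overlap. A minimal sketch; Measurement and consistent are hypothetical names, and the second measurement is invented for the example:

    # A measurement as value +/- uncertainty, with an interval-overlap check.
    from dataclasses import dataclass

    @dataclass
    class Measurement:
        value: float
        uncertainty: float  # half-width of the confidence interval

        @property
        def low(self) -> float:
            return self.value - self.uncertainty

        @property
        def high(self) -> float:
            return self.value + self.uncertainty

    def consistent(a: Measurement, b: Measurement) -> bool:
        # Two stated measurements agree if their intervals overlap.
        return a.low <= b.high and b.low <= a.high

    paper = Measurement(8.53, 0.08)  # the classmate's width: 8.45 to 8.61 inches
    yours = Measurement(8.60, 0.05)  # hypothetical second measurement: 8.55 to 8.65
    print(consistent(paper, yours))  # True: the intervals overlap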

True values vs. Accepted values

Check your understanding of these terms by working through the example below.

A metal rod about 4 inches long has been passed around to several groups of students. Each group is asked to measure the length of the rod. Each group has five students and each student independently measures the rod and records his or her result.

Measurements are in centimeters.

             Group A    Group B    Group C    Group D    Group E
Student 1      10.1      10.135     12.14      10.05       10
Student 2      10.4      10.227     12.17      10.82       11
Student 3       9.6      10.201     12.15       8.01       10
Student 4       9.9      10.011     12.14      11.5        10
Student 5      10.8      10.155     12.18      10.77       10

Which group has the most accurate measurement? (A, B, C, D, or E)

Which group has the most precise measurement? (A, B, C, D, or E)

Which group has the greatest error? (A, B, C, D, or E)

Which group has the greatest uncertainty? (A, B, C, D, or E)

We now receive a report from the machine shop where the rod was manufactured. This very reputable firm certifies the rod to be 4 inches long to the nearest thousandth of an inch (4.000 inches = 10.160 cm). Answer the questions below given this new information. Note that the questions are slightly different.

Which group has the least accurate measurement? (A, B, C, D, or E)

Which group has the least precise measurement? (A, B, C, D, or E)

Which group has the smallest error? (A, B, C, D, or E)

Which group has the smallest uncertainty? (A, B, C, D, or E)
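For checking your answers, each group's accuracy and precision can be quantified from the table: the mean's offset from the certified value reflects accuracy (and error), while the standard deviation reflects precision. A minimal sketch using the data above:

    # Mean offset (accuracy/error) and SD (precision) per group; true value 10.160 cm.
    import statistics

    true_value = 10.160  # cm, from the machine shop certification
    groups = {
        "A": [10.1, 10.4, 9.6, 9.9, 10.8],
        "B": [10.135, 10.227, 10.201, 10.011, 10.155],
        "C": [12.14, 12.17, 12.15, 12.14, 12.18],
        "D": [10.05, 10.82, 8.01, 11.5, 10.77],
        "E": [10, 11, 10, 10, 10],
    }
    for name, data in groups.items():
        offset = statistics.mean(data) - true_value
        sd = statistics.stdev(data)
        print(f"Group {name}: mean offset = {offset:+.3f} cm, SD = {sd:.3f} cm")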

The accuracy is the degree of closeness of the measured value to its "true" value. This value is generally defined as a percentage of the capacity of the sensor or instrument, in the unit of measurement. For example, on our Centor Easy force gauge with internal sensor, the accuracy is ±0.1% FS. This means that if you have a sensor with a capacity of 100 lb, the accuracy of the complete gauge is 0.1% of 100 lb = ±0.1 lb. You may find this value written in different ways: ±0.1%, ±0.1% of the full scale, ±0.1% FS (FS: full scale), or ±0.1 lb.

Repeatability or test-retest reliability[1] is the variation in measurements taken by a single person or instrument on the same item and under the same conditions. A measurement may be said to be repeatable when this variation is smaller than some agreed limit. According to the Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results, repeatability conditions include:

- the same measurement procedure
- the same observer
- the same measuring instrument, used under the same conditions
- the same location
- repetition over a short period of time.

Repeatability methods were developed by Bland and Altman (1986).[2] The repeatability coefficient is a precision measure which represents the value below which the absolute difference between two repeated test results may be expected to lie with a probability of 95%. The standard deviation under repeatability conditions is part of precision and accuracy.
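One common formulation, assuming two measurements per subject and a mean difference near zero, takes the repeatability coefficient as 1.96 times the standard deviation of the paired differences. A minimal sketch with hypothetical test-retest data:

    # Repeatability coefficient (RC) from paired test-retest measurements.
    # 95% of absolute differences between repeats are expected to fall below RC.
    import statistics

    first  = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
    second = [10.0, 9.9, 10.1, 10.2, 9.8, 10.3]

    diffs = [a - b for a, b in zip(first, second)]
    rc = 1.96 * statistics.stdev(diffs)
    print(f"repeatability coefficient = {rc:.3f}")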

accuracy: The difference between a measurement reading and the true value of that measurement.
amplification: The movement of a measuring instrument's contact points in relation to the amount of readout on the needle or scale.
bias: The predicted difference on average between the measurement and the true value. Bias is also known as accuracy.
calibration: The comparison of a device with unknown accuracy to a device with a known, accurate standard to eliminate any variation in the device being checked.
caliper: A measuring instrument with two pairs of jaws on one end and a long beam containing a marked scale of unit divisions. One pair of jaws measures external features; the other pair measures internal features.
coordinate measuring machine: A sophisticated measuring instrument with a flat polished table and a suspended probe that measures parts in three-dimensional space.
correction factor: The amount of deviation in a measurement that is accounted for in the calibration process. You can either add the correction factor to the measured value or adjust the measuring instrument.
depth gage: A type of measuring instrument that measures the depth of holes, slots, or recesses.
dial indicator: A measuring instrument with a contact point attached to a spindle and gears that moves a pointer on the dial. Dial indicators have graduations that are available for reading different measurement values.
discrimination: The distance between two lines on a scale, or the fineness of an instrument's divisions of measurement units.
drift: The actual change in the measurement value when the same characteristic is measured under the same conditions, by the same operator, at different points in time. Drift indicates how often a measurement device needs recalibration.
error: The amount of deviation from a standard or specification. Errors should be eliminated in the measuring process.
error of measurement: The actual difference between a measurement value and the known standard value.
gage: A device that determines whether or not a part feature is within specified limits. Most gages do not provide an actual measurement value. However, measuring instruments are also sometimes called gages.
granite: A dense, wear-resistant material that is capable of excellent flatness. Granite is often used for inspection surfaces.
graph: A diagram that represents the variation of one variable compared to another.
height gage: A type of measuring instrument with a precision-finished base, a beam that is at a right angle to the base, and an indicator.
hysteresis: The delay between the action and reaction of a measuring instrument. Hysteresis is the amount of error that results when this action occurs.
linearity: The amount of error change throughout an instrument's measurement range. Linearity is also the amount of deviation from an instrument's ideal straight-line performance.
measuring instrument: A device used to inspect, measure, test, or examine parts in order to determine compliance with required specifications.
micrometer: A U-shaped measuring instrument with a threaded spindle that slowly advances toward a small anvil. Micrometers are available in numerous types for measuring assorted dimensions and features.
plug gage: A hardened, cylindrical gage used to inspect the size of a hole. Plug gages are available in standardized diameters.
precision: The degree to which an instrument will repeat the same measurement over a period of time.
repeatability: The ability to obtain consistent results when measuring the same part with the same measuring instrument.
resolution: The smallest change in a measured value that the instrument can detect. Resolution is also known as sensitivity.
rule of ten: The inspection guideline stating that a measuring instrument must be ten times more precise than the acceptable tolerance of the inspected part feature. (A short check appears after this glossary.)
slope: The angle of a line that appears when comparing two variables on a graph.
specified range of measurement: The limit of measurement values that an instrument is capable of reading. The dimension being measured must fit inside this range.
stability: The ability of a measuring instrument to retain its calibration over a long period of time. Stability determines an instrument's consistency over time.
standard: A recognized true value. Calibration must compare measurement values to a known standard.
systematic error: An error that is not determined by chance but is introduced by an inaccuracy in the system. Systematic errors are predictable and expected.
thermal characteristic: The way a material behaves due to changes in heat. Measuring instruments have thermally stable characteristics so that they are not affected by temperature increases.
tolerance: The unwanted but acceptable deviation from a desired dimension.
variation: A difference between two or more similar things.
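The rule of ten entry above amounts to a simple inequality between part tolerance and instrument resolution. A minimal sketch; satisfies_rule_of_ten is a hypothetical helper:

    # Rule of ten: the instrument's resolution should be at least ten times
    # finer than the tolerance of the inspected feature.
    def satisfies_rule_of_ten(tolerance: float, resolution: float) -> bool:
        return resolution <= tolerance / 10

    print(satisfies_rule_of_ten(tolerance=0.010, resolution=0.001))  # True
    print(satisfies_rule_of_ten(tolerance=0.010, resolution=0.005))  # False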

Equally important to the dollar budget is the error budget, which is often overlooked or determined de facto by cost constraints. For optimum system performance, each individual system element needs an acceptable error allocation. In determining system error budgets, "repeatability" is often confused with "reproducibility". The accepted definition of "repeatability" is the closeness of agreement among a number of consecutive measurements of outputs for the same input, where this input is approached from the same direction after a traversal of the input across the full-scale span. The accepted definition of "reproducibility" is the closeness of agreement among a number of repeated measurements of outputs for the same input, where this input is approached from any direction and these measurements are made over a period of time. The subtle difference between "repeatability" and "reproducibility" is as follows: (a) repeatability includes neither drift errors (since consecutive measurements span too short a period for drift to be a factor) nor hysteresis; (b) reproducibility includes drift (repeated measurements over any length of time), hysteresis, and repeatability. Note that reproducibility includes repeatability. All measurements are made within the test unit's allowed range and operating conditions.

Extensive Definition
Reproducibility is one of the main principles of the scientific method, and refers to the ability of a test or experiment to be accurately reproduced, or replicated, by someone else working independently. Reproducibility is different from repeatability, which measures the success rate in successive experiments, possibly conducted by the same experimenters. Reproducibility relates to the agreement of test results with different operators, test apparatus, and laboratory locations. It is often reported as a standard deviation. While repeatability of scientific experiments is desirable, it is not considered necessary to establish the scientific validity of a theory. For example, the cloning of animals is difficult to repeat, but has been reproduced by various teams working independently, and is a well established research domain. One failed cloning does not mean that the theory is wrong or unscientific. Repeatability is often low in protosciences.

The results of an experiment performed by a particular researcher or group of researchers are generally evaluated by other independent researchers by reproducing the original experiment. They repeat the same experiment themselves, based on the original experimental description, and see if their experiment gives similar results to those reported by the original group. The result values are said to be commensurate if they are obtained (in distinct experimental trials) according to the same reproducible experimental description and procedure. The basic idea can be seen in Aristotle's dictum that there is no scientific knowledge of the individual, where the word used for individual in Greek had the connotation of the idiosyncratic, or wholly isolated occurrence. Thus all knowledge, all science, necessarily involves the formation of general concepts and the invocation of their corresponding symbols in language (cf. Turner).

Famous problems
In March 1989, University of Utah chemists Stanley Pons and Martin Fleischmann reported the production of excess heat that could only be explained by a nuclear process. The report was astounding given the simplicity of the equipment: it was essentially an electrolysis cell containing heavy water and a palladium cathode which rapidly absorbed the deuterium produced during electrolysis. The news media reported on the experiments widely, and it was a front-page item on many newspapers around the world. Over the next several months others tried to replicate the experiment, but were unsuccessful. At the end of May the US Energy Research Advisory Board found the evidence to be unconvincing, and cold fusion was dismissed as pseudoscience. Later on, successful replications by independent teams were reported in peer-reviewed scientific journals, and, although the effect is not considered fully repeatable, the field eventually gained some scientific recognition.

In the 1930s the Austrian scientist Wilhelm Reich claimed to have discovered a physical energy he called "orgone," which he said existed in the atmosphere and in all living matter. He developed instruments to detect and harness this energy that he said could be used to treat illness or control the weather. His views were not accepted by the mainstream scientific community; in fact, he was vilified for his claims. In the early 1940s Reich encouraged Albert Einstein to test an orgone accumulator, which Einstein did, but he disagreed on the interpretation of the results. In 2001, Canadian researchers Paulo Correa and Alexandra Correa claimed to have successfully reproduced the experiment. But Martin Gardner's book, Fads and Fallacies in the Name of Science, debunks orgone energy.

Nikola Tesla claimed as early as 1899 to have used a high-frequency current to light gas-filled lamps from a distance without using wires. In 1904 he built Wardenclyffe Tower on Long Island to demonstrate means to send and receive power without connecting wires. The facility was never fully operational and was not completed, supposedly due to economic problems. Tesla's experiments have never been replicated.
