The Difference Between Limit of Error of Measurement and Measurement Uncertainty

Limit of Error of Measurement

The limit of error of measurement is the difference between the temperature measured by the Data Logger and the temperature measured by the Reference Probe. According to HTM2010 it must be no more than 0.5ºC. It is a quantified measure we can see and therefore verify.
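As a rough sketch of that idea in Python (the temperature readings and the pass/fail check below are invented for illustration, only the 0.5ºC figure comes from HTM2010), the limit of error is simply the difference between the two readings:

# Minimal sketch: limit of error as the difference between a data-logger
# reading and the reference-probe reading, checked against the 0.5 degC
# figure quoted from HTM2010. The readings are illustrative only.

HTM2010_LIMIT_OF_ERROR_C = 0.5  # degC, from HTM2010

def limit_of_error(logger_temp_c: float, reference_temp_c: float) -> float:
    """Return the measured error: data-logger reading minus reference reading."""
    return logger_temp_c - reference_temp_c

logger_reading = 134.3      # degC, illustrative Data Logger value
reference_reading = 134.1   # degC, illustrative Reference Probe value

error = limit_of_error(logger_reading, reference_reading)
print(f"Measured error: {error:+.2f} degC")
print("Within HTM2010 limit" if abs(error) <= HTM2010_LIMIT_OF_ERROR_C else "Outside limit")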


Measurement Uncertainty

Every measurement is subject to Uncertainty, even though HTM2010 does not ask us to determine this.

Measurement Uncertainty is typically expressed as a range of values within which the true value is estimated to lie, at a given statistical confidence. For example, you may see 1.0ºC accuracy with 95% confidence, or 1.5ºC at three-sigma accuracy.
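To see how two such statements compare, the short Python sketch below takes the two example figures above and works out the one-sigma standard deviation each of them implies, assuming normally distributed errors (that assumption, and the comparison itself, are ours rather than anything required by the text):

# Minimal sketch: putting two uncertainty statements on a common footing
# by computing the implied one-sigma standard deviation for each,
# assuming normally distributed errors.

from statistics import NormalDist

def implied_one_sigma(quoted_range_c: float, coverage_factor: float) -> float:
    """One-sigma standard deviation implied by a quoted +/- range and coverage factor k."""
    return quoted_range_c / coverage_factor

# "1.0 degC accuracy with 95% confidence": a two-sided 95% interval corresponds to k ~ 1.96
k_95 = NormalDist().inv_cdf(0.5 + 0.95 / 2)
print(f"1.0 degC at 95% confidence -> sigma ~ {implied_one_sigma(1.0, k_95):.2f} degC")

# "1.5 degC at three sigma": k = 3 by definition
print(f"1.5 degC at 3 sigma        -> sigma ~ {implied_one_sigma(1.5, 3.0):.2f} degC")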

To explain what this means we must produce a distribution curve:

[Figure: distribution ("bell") curve of measured temperatures, with the ideal value at the centre and shaded bands at one, two and three standard deviations]

The x-axis (the horizontal one) shows the value in question, temperature, with the ideal value in the middle. The y-axis (the vertical one) shows the number of data points for each value on the x-axis, i.e. the number of measurements received from the data logger while trying to measure the ideal value. Plotting these measurements produces a distribution curve with the "bell curve" shape shown above.

You may have also seen the term Standard Deviation. This is a statistic that tells you how tightly all the various examples are clustered around the ideal value in a set of data. When the examples are pretty tightly bunched together and the bell-shaped curve is steep, the standard deviation is small. When the examples are spread apart and the bell curve is relatively flat, that tells you there is a relatively large standard deviation.
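A short Python sketch makes this concrete: given a handful of repeated readings of a known reference value (the numbers are invented for the example), the mean and standard deviation quantify the centre and the "spread" of that bell curve:

# Minimal sketch: quantifying the scatter of repeated data-logger readings
# around a reference value via their mean and standard deviation.
# The readings below are illustrative, not real calibration data.

import statistics

reference_c = 134.0  # degC, the ideal value the logger is trying to measure
readings_c = [133.9, 134.1, 134.0, 134.2, 133.8, 134.0, 134.1, 133.9]

mean_c = statistics.mean(readings_c)
stdev_c = statistics.stdev(readings_c)   # sample standard deviation

print(f"Mean reading:          {mean_c:.2f} degC")
print(f"Standard deviation:    {stdev_c:.3f} degC")
print(f"Offset from reference: {mean_c - reference_c:+.3f} degC")

A tightly bunched set of readings gives a small standard deviation (a steep bell curve); widely spread readings give a large one (a flat bell curve).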

However, if a Manufacturer's specification quotes an Uncertainty Budget of, say, ±0.5ºC (more often described simply as "accuracy"), it is worth checking with how much statistical confidence that accuracy is stated.

For example one standard deviation away from the mean in either direction on the horizontal axis (the red area on the above graph) accounts for somewhere around 68 percent of the temperatures measured in this group. Two standard deviations away from the mean (the red and green areas) account for approximately 95 percent of the measured temperatures. And three standard deviations (the red, green and blue areas) account for about 99 percent of the measured temperatures.

However, standard deviations are often expressed in terms of sigma. Therefore, if an uncertainty (accuracy) figure is specified, it should be qualified in terms of sigma. For example, an uncertainty value of 0.5ºC (3 sigma) means that statistically about 99% of readings will be within the 0.5ºC "spread". Similarly, 0.5ºC (2 sigma) would mean that only about 95% of readings would statistically fall within the 0.5ºC spread.
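The small Python sketch below works these percentages out explicitly for a 0.5ºC figure quoted at 2 sigma and at 3 sigma, assuming normally distributed readings (which is why it gives the slightly more precise 95.4% and 99.7% rather than the rounded values above):

# Minimal sketch: the share of readings expected within a 0.5 degC spread
# when that figure is quoted at 2 sigma versus 3 sigma, assuming normally
# distributed readings.

from statistics import NormalDist

spread_c = 0.5  # degC, the quoted uncertainty ("accuracy") figure

for k in (2, 3):
    sigma_c = spread_c / k                       # implied one-sigma standard deviation
    dist = NormalDist(mu=0.0, sigma=sigma_c)
    coverage = dist.cdf(spread_c) - dist.cdf(-spread_c)
    print(f"0.5 degC at {k} sigma: sigma = {sigma_c:.3f} degC, "
          f"~{100 * coverage:.1f}% of readings within +/- 0.5 degC")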

When comparing Manufacturers' Specifications it is therefore very important to establish to what Standard Deviation (sigma level) the accuracy figures relate. Obviously, 3-sigma figures are much more conservative and realistic than single-sigma figures.

Conclusion

This is, as you can see, a complicated issue, and Measurement Uncertainty is often confused with the Limit of Error of Measurement. The main difference is that the Limit of Error is a verified error measurement which we can quantify ourselves, whereas Measurement Uncertainty is a statistical calculation that determines an accuracy figure to a specified Standard Deviation.

HTM0101 Part B, paragraph 2.21(c), states:
“The width of the sterilization temperature band varies from 3ºC (high-temperature steam sterilizers) to 10ºC (dry-heat sterilizers). The recorder must be accurate enough to show clearly whether the measured temperatures are within the band or not. For all the types of sterilizer covered by this HTM, the repeatability of the recorder should be ±0.25ºC or better, and the limit of error of the complete measurement system (including sensors) should be no more than 0.5ºC.”

Now since Systematic Errors can be calibrated out, both the Repeatability of Measurement and the Limit of Error of the complete measurement system can be reduced significantly by careful calibration.
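As a simple illustration of calibrating out a systematic error (the readings and the size of the offset below are invented for the example), the correction is just the average error against the reference, subtracted from every subsequent reading:

# Minimal sketch: removing a systematic (offset) error found during
# calibration. The readings are illustrative only.

reference_c = 134.0
raw_readings_c = [134.3, 134.4, 134.2, 134.3, 134.5]  # logger reads consistently high

# Systematic error estimated during calibration: mean raw reading minus reference.
systematic_error_c = sum(raw_readings_c) / len(raw_readings_c) - reference_c

# Applying the correction removes the systematic component; what remains
# after the offset is removed is the scatter assessed in the next step.
corrected_readings_c = [r - systematic_error_c for r in raw_readings_c]

print(f"Systematic error calibrated out: {systematic_error_c:+.2f} degC")
print("Corrected readings:", [round(r, 2) for r in corrected_readings_c])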

Therefore the Standard Deviation AFTER CALIBRATION has to be better than ±0.25ºC, as this is the Repeatability of Measurement, i.e. the "scatter of measurements" seen in the distribution curve. This can be quantified during the calibration and during the verification of calibration.

The Limit of Error of the complete measurement system AFTER CALIBRATION has to be no more than 0.5ºC; this is the measured error from your Reference value, and it too is quantified during the calibration and verification of calibration.
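Pulling the two requirements together, the illustrative Python check below takes a set of made-up post-calibration readings and tests the standard deviation against ±0.25ºC and the worst-case error against 0.5ºC (the readings themselves are invented; only the two limits come from the HTM quotation above):

# Minimal sketch: checking calibrated-system results against the two figures
# quoted above -- repeatability (standard deviation) better than +/- 0.25 degC
# and limit of error no more than 0.5 degC. Readings are illustrative.

import statistics

REPEATABILITY_LIMIT_C = 0.25   # degC, recorder repeatability requirement
LIMIT_OF_ERROR_C = 0.5         # degC, complete measurement system requirement

reference_c = 134.0
calibrated_readings_c = [133.95, 134.05, 133.90, 134.10, 134.00, 133.95]

stdev_c = statistics.stdev(calibrated_readings_c)
worst_error_c = max(abs(r - reference_c) for r in calibrated_readings_c)

print(f"Repeatability (std dev): {stdev_c:.3f} degC "
      f"({'PASS' if stdev_c < REPEATABILITY_LIMIT_C else 'FAIL'})")
print(f"Limit of error:          {worst_error_c:.3f} degC "
      f"({'PASS' if worst_error_c <= LIMIT_OF_ERROR_C else 'FAIL'})")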