Your QA manager can put you to sleep explaining the difference between these two terms, but you really need to know the difference.

Accuracy describes 'closeness to the true value'; Precision describes 'repeatability.'

Accuracy in measurement describes how closely the measurements from your system match the actual or true value of the thing being measured. Quantitatively, it is the difference (the bias) between the observed average of the measurements and the true value.

Think of accuracy as the “trustworthiness” of a measurement system.
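To make that concrete, here is a minimal Python sketch of accuracy as bias. The reference value and readings are made up for illustration:

```python
# Accuracy as bias: how far the average of your measurements sits from the
# true value. The reference value and readings below are made up.
true_value = 25.000  # e.g., a 25.000 mm gage block used as the reference

readings = [25.003, 25.001, 25.004, 25.002, 25.003]

observed_average = sum(readings) / len(readings)
bias = observed_average - true_value

print(f"Observed average:      {observed_average:.4f}")
print(f"Bias (accuracy error): {bias:+.4f}")
```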

Precision in measurement describes how consistently a measurement system returns the same value when measuring the same thing; this is its Repeatability.
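Repeatability can be sketched the same way: it shows up as the spread of repeated readings of the same part, taken by the same operator with the same gage. Again, the numbers are invented:

```python
import statistics

# Repeatability: one operator measures the same part several times with the
# same gage. These readings are invented for illustration.
repeat_readings = [25.003, 25.001, 25.004, 25.002, 25.003]

repeatability_sd = statistics.stdev(repeat_readings)
print(f"Repeatability (std. dev. of repeat readings): {repeatability_sd:.4f}")
```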

As the target diagrams show, it is important to be both Accurate and Precise if you are to get usable information from your measurement system.

But measurement system variation has two components: that of the measurement device (gage) itself, which is the Repeatability, and that of the operator(s). Variation that results when different operators use the same measurement device is called Reproducibility.

In our shops, we cannot tell if our measurement system has repeatability or reproducibility issues without doing a Long Form Gage R&R study.

Gage repeatability and reproducibility (GR&R) studies use statistical techniques to identify and separate the sources of variation in our measurement system: is it the gage, or is it the operator?
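For illustration only, here is a simplified sketch of the long-form average-and-range calculation, assuming a study in which 2 operators each measure the same 3 parts 3 times. The K1 and K2 constants are the standard AIAG values for that layout, and all readings are invented:

```python
import math

# Simplified long-form (average & range) GR&R sketch.
# Assumed layout: 2 operators each measure the same 3 parts, 3 trials each.
# All readings are invented for illustration.
data = {
    "operator_A": [[2.01, 2.02, 2.01], [1.99, 2.00, 2.00], [2.03, 2.02, 2.03]],
    "operator_B": [[2.02, 2.03, 2.03], [2.00, 2.01, 2.01], [2.04, 2.04, 2.05]],
}
n_parts, n_trials = 3, 3
K1 = 0.5908  # AIAG constant for 3 trials
K2 = 0.7071  # AIAG constant for 2 operators

# Repeatability (equipment variation): based on the average within-cell range.
ranges = [max(cell) - min(cell) for parts in data.values() for cell in parts]
r_bar = sum(ranges) / len(ranges)
ev = r_bar * K1

# Reproducibility (appraiser variation): based on the spread of operator averages.
op_means = [sum(sum(cell) for cell in parts) / (n_parts * n_trials)
            for parts in data.values()]
x_diff = max(op_means) - min(op_means)
av = math.sqrt(max((x_diff * K2) ** 2 - ev ** 2 / (n_parts * n_trials), 0.0))

grr = math.sqrt(ev ** 2 + av ** 2)
print(f"Repeatability (EV):   {ev:.4f}")
print(f"Reproducibility (AV): {av:.4f}")
print(f"Combined GR&R:        {grr:.4f}")
```

If EV dominates, the gage itself is the problem; if AV dominates, look at operator technique and training.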

Gage error, as determined by the GR&R study, is expressed as a percentage of the tolerance you are trying to hold.

Typically, gage error of 10% or less is considered acceptable and anything over 30% is unacceptable; between 10% and 30%, the gage may be acceptable depending on the application.
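As a minimal sketch, assuming the GR&R spread and the tolerance are expressed in the same units (the numbers here are invented), the percent-of-tolerance check looks like this:

```python
def classify_gage_error(grr, tolerance):
    """Express gage error as a percent of tolerance and apply the usual guidelines."""
    pct = 100.0 * grr / tolerance
    if pct <= 10.0:
        verdict = "acceptable"
    elif pct <= 30.0:
        verdict = "may be acceptable, depending on the application"
    else:
        verdict = "unacceptable"
    return pct, verdict

# Invented numbers: a GR&R spread of 0.004 against a total tolerance of 0.030.
pct, verdict = classify_gage_error(grr=0.004, tolerance=0.030)
print(f"Gage error: {pct:.1f}% of tolerance -> {verdict}")
```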

Regardless, any level of gage error is an opportunity for continuous improvement.

[Target graphic: diagrams illustrating accuracy vs. precision]