
Measurement Errors

All measurements have errors, either random errors or systematic errors. These errors have to be represented properly when writing down the value of a quantity. We must also be aware of how errors propagate through the system.

Types of errors

The Basic Measurement Theory chapter introduced figure 2 as the basic measurement chain, which is repeated here as figure 1.

Fig. 1: The measurement chain

This figure indicates two major insertion points of noise. However, that does not mean these are the only points where measurement errors occur. We distinguish errors that are caused by the system and errors due to the environment.

  • Systematic errors have a source within the system. For example, a calibration error of one of the measurement devices may give a bias error, which is a systematic error. Another example is the drift of a sensor, resulting in an unexpected offset in the measurement. Systematic errors can be minimized by improving the measurement system. As a rule of thumb, we can say that a calibration with a measurement method that is ten times more accurate is needed.
  • Random errors are also called noise. They cannot be minimized by measuring more accurately, because they usually have an external source and do not reproduce. Random errors are caused by inherently unpredictable fluctuations in the readings of a measurement tool or in the experimenter's interpretation of the reading. Random errors are in many cases normally distributed, so their effect can be reduced by taking more measurements. Although most random errors have an external source, some specific random errors originate from within the system. An example is quantization noise in an analog-to-digital conversion, which gives a uniformly distributed noise. Another example is the noise in the electronics of the measurement tool itself.

The two insertion points of noise in figure 1 both represent environmental noise. The first source is in the measurement domain; an example is a motion artefact. An example of noise entering after the transduction is electronic $50 Hz$ mains noise due to bad shielding.

Besides the classification of errors into random and systematic errors, we can also speak about absolute and relative errors.

  • The absolute error is the difference between the measured value and the real value. For example, if we measure $1002 \Omega$ and we know the measured resistor is actually $989 \Omega$, then the absolute error is $13 \Omega$.
  • The relative error is the absolute error normalized to the real value: $(\mathtt{Measured Value} - \mathtt{Real Value}) / \mathtt{Real Value}$. For example, $(1002 \Omega - 989 \Omega) / 989 \Omega \approx 0.013~(1.3\%)$.

The quantization error mentioned before is also observed as a rounding error when reading a value from a display. The last digits are not represented, so for example $14.3476$ may be shown as $14.3$, introducing an absolute error of $0.0476$.

Some errors are the result of transducers that are non-linear; these are nonlinearity errors, which can be expressed as a non-linearity percentage.

Errors can be reduced or compensated for in some situations. This is partially explained in the chapter about Sensor/Actuator systems, in the section Sensor/actuator network concepts. The most common methods are:

  • Feedback
  • Stimulus-response measurement
  • Differential measurement
  • Compensation (feed-forward)
  • Multivariate analysis
  • Averaging

Accuracy and precision

Consider a multimeter that has a reading of $1.000341 V$. This is a high precision reading, but we do not know whether it is accurate (correct). The words accuracy and precision are sometimes mixed up, but have completely different meanings. The most important mathematical tools we have are the average reading of a set of measurements and the standard deviation of the readings. The question is how they relate to accuracy and precision.

Accuracy is defined by how close our average is to the “real” value. So, after defining the average as

\begin{equation} \mu=\frac{\sum_{i=1}^{n} x_{i}}{n} \label{eq:Average} \end{equation}

the accuracy becomes $\left | x_{0}-\mu \right |$, with $x_{0}$ the true value.

Precision indicates the variation in the measurements and can therefore be expressed in terms of the standard deviation

\begin{equation} \sigma_{n}=\sqrt{\frac{\sum_{i=1}^{n}\left ( x_{i}-\mu \right )^{2}}{n-1}} \label{eq:StandardDeviation} \end{equation}

It can be understood why we use a root-mean-square measure for determining the precision:

  • Noise, tolerances and variances can result in both positive and negative deviations, which may cancel out in a plain average
  • The square relates to electrical power (remember that $P = U \cdot I = U^{2}/R$, so in fact we compare powers)

As shown in figure 2, the accuracy is the proximity of measurement results to the true value (“trueness”). It relates to the systematic error, which can only be reduced if we determine the offset with a method of better accuracy and compensate for it. Precision is the repeatability, or reproducibility, of the measurement. It is determined by the random errors in the measurement (which can be reduced by taking more measurements) and by the resolution of the measurement system.

Fig. 2: Accuracy and precision shown in the frequency of occurrence of measurements

In experimental research we distinguish:

  • Validity is whether an instrument actually measures what you think it measures (what we would call a cross-sensitivity from an engineering perspective). We distinguish:
    1. Criterion validity when you can compare it to a real objective value
    2. Concurrent validity when data is recorded with respect to established criteria or a known dataset
    3. Predictive validity when the data can be used to predict new values at a later stage
    4. Content validity when the data covers the full range of the construct, so no influences are overlooked
  • Reliability is whether an instrument can be interpreted consistently across different situations: whether it reproduces, as in test-retest reliability

So validity maps to accuracy (trueness) and reliability to precision.

Example

As an example, we take eight measurements of the resistance of a single resistor (table 1). What can we say about the resistance $R$?

Measurement | Value found
$1$ | $1002 \Omega$
$2$ | $960 \Omega$
$3$ | $1047 \Omega$
$4$ | $1010 \Omega$
$5$ | $913 \Omega$
$6$ | $986 \Omega$
$7$ | $1037 \Omega$
$8$ | $955 \Omega$
Tab. 1: An example of eight measurements of a single resistor

First of all, the average (mean) value is equal to $(1002 + \ldots + 955) / 8 \approx 989 \Omega$. So the best estimate for $R$ is about $989 \Omega$. But how accurate is this number? Both precision and accuracy determine the error (uncertainty) in the measurement.

The standard deviation for $R$ in the example is \begin{equation} \sigma=\sqrt{\frac{\left ( 1002-989 \right )^{2} + \ldots + \left ( 955-989 \right )^{2}}{8-1}}\approx 44.8 \Omega \end{equation}

This means that about $95\%$ of the measurements lie within two sigma of the average: $989 \Omega \pm 2 \times 44.8 \Omega$, so roughly between $899 \Omega$ and $1079 \Omega$.
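As a minimal sketch (using only Python's standard library; the code itself is not part of the original text), the mean and sample standard deviation of table 1 can be verified as follows:

<code python>
import statistics

# Readings from Tab. 1, in ohm
readings = [1002, 960, 1047, 1010, 913, 986, 1037, 955]

mu = statistics.fmean(readings)     # average mu
sigma = statistics.stdev(readings)  # sample standard deviation (divides by n - 1)

print(f"mean  = {mu:.0f} ohm")      # ~989 ohm
print(f"sigma = {sigma:.1f} ohm")   # ~44.8 ohm
print(f"95% of the readings lie within {mu:.0f} +/- {2 * sigma:.0f} ohm")
</code>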

Tolerance

In the previous example, we took eight measurements of the same resistor. The systematic error (limiting the accuracy) is the result of the measurement tool, which was the same for all eight measurements. There was also a random error (limiting the precision) due to noise in the measurement. A similar experiment could be done with eight different resistors from the same batch. These should have a similar resistance, but there will be variation in the resistor values due to the fabrication process.

This random variation is indicated by the tolerance. Sometimes the $2\sigma$ or $3\sigma$ range is used to define a tolerance. The tolerance is the permissible limit of variation in an object. The production process is optimized until all components are within specification (within the tolerance limits), or sometimes devices outside the specification range are discarded.
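As a hypothetical illustration (the nominal value of $1000 \Omega$ and the $5\%$ tolerance below are assumptions, not taken from the text), checking a batch against a tolerance band could look like this:

<code python>
# Hypothetical batch check: nominal 1000 ohm resistors with a 5% tolerance
nominal = 1000.0
tolerance = 0.05
limit = tolerance * nominal          # permissible deviation: +/- 50 ohm

batch = [1002, 960, 1047, 1010, 913, 986, 1037, 955]  # measured values, ohm

within_spec = [r for r in batch if abs(r - nominal) <= limit]
rejected = [r for r in batch if abs(r - nominal) > limit]
print("within spec:", within_spec)
print("rejected:   ", rejected)      # only 913 ohm falls outside the +/- 50 ohm band
</code>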

The effect of taking more measurements

Most errors have a normal distribution, meaning they follow the Gaussian probability density curve

\begin{equation} f_{n}\left ( x \right )=\frac{1}{\sigma \sqrt{2\pi}}e^{-\frac{1}{2}\left ( \frac{x-\mu }{\sigma } \right )^{2}} \label{eq:NormalDistribution} \end{equation}

with the standard deviation $\sigma$ and the average $\mu$. The Gauss curve was already visible in figure 2. By taking a sufficient number of measurements, say $N$, we can determine the shape of the Gauss curve. The location of its peak corresponds to the average $\mu$ and the width of the curve to the standard deviation $\sigma$. For a reasonable number of measurements ($N>15$), $95\%$ of the measurements lie between $\mu-2\sigma$ and $\mu+2\sigma$. The standard deviation of the average decreases with the square root of $N$, and so the precision of the average increases with the square root of $N$. We can now see that, for random errors, the precision can be increased by taking more measurements. For systematic errors, this averaging does not help: we still have the same offset in the value of $\mu$.

When the systematic error is zero, we can say that the random error is equal to $\pm 2\sigma$. With endlessly repeated measurements, the average $\mu$ becomes equal to the real value $x_{0}$. When measuring $N$ times, the real value $x_{0}$ lies, with a probability of $95\%$, within \begin{equation} x_{0} = \mu \pm \frac{2 \sigma}{\sqrt{N}}. \label{eq:NinetyFiveConfidence} \end{equation} The systematic error can be approximated by $\left | x_{0}-\mu \right |$ for sufficiently high $N$. However, because we do not know the real value $x_{0}$, we have to use an independent reference (calibration) measurement that has a ten times higher accuracy.
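A small simulation sketch (the true value and noise level are assumed, matching the resistor example only for illustration) shows how the spread of the average shrinks with $\sqrt{N}$:

<code python>
import random
import statistics

random.seed(1)

def average_of_n(n, true_value=989.0, sigma=45.0):
    """Average of n simulated readings with Gaussian noise (assumed values)."""
    return statistics.fmean(random.gauss(true_value, sigma) for _ in range(n))

# Repeat each experiment many times and look at the spread of the averages
for n in (1, 4, 16, 64):
    averages = [average_of_n(n) for _ in range(2000)]
    spread = statistics.stdev(averages)
    print(f"N = {n:2d}: spread of the average ~ {spread:4.1f} ohm "
          f"(expected sigma/sqrt(N) = {45.0 / n ** 0.5:4.1f} ohm)")
</code>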

Significant digits

The accuracy of a measured value is represented by the number of significant digits (‘meaningful digits’). So from the number of digits we can recognize the accuracy of the number. The number of significant digits is the total number of digits regardless of the position of the decimal point, where leading zeros do not count.

For example, $6.34$ has three significant digits, which means that the real value lies between $6.335$ and $6.345$. The value $0.2$ has one significant digit. Note that $0.02$ also has only one significant digit, because the leading zeros are not significant! A short counting sketch follows the examples below.

  • The value of $3000 m$ lies between $2999.5$ and $3000.5 m$.
  • The value of $3 km$ lies between $2.5$ and $3.5 km$.
  • When a value is measured with a certain instrument, the accuracy can be denoted explicitly, e.g. a force can be measured as $23.4 N \pm 0.3 N$.
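As a minimal sketch (the helper name is our own, not from the text), counting significant digits as defined above could be done like this:

<code python>
def significant_digits(value: str) -> int:
    """Count significant digits: all digits except leading zeros.

    Trailing zeros of integers (e.g. '3000') are counted as significant here,
    matching the '$3000 m$ lies between 2999.5 and 3000.5 m' example above.
    """
    digits = value.lstrip("+-").replace(".", "")
    return len(digits.lstrip("0"))

for v in ("6.34", "0.2", "0.02", "3000"):
    print(v, "->", significant_digits(v), "significant digit(s)")
</code>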

Once the standard deviation $\sigma$ of a measurement is known, we can use it for the representation of the number (a code sketch follows this list):

  • Take the highest power of ten smaller than $\sigma/2$:
    • For example, when $\sigma=0.03 \rightarrow \sigma/2=0.015 \rightarrow accuracy= 0.01$
    • For example, when $\sigma=0.01 \rightarrow \sigma/2=0.005 \rightarrow accuracy= 0.001$
  • Round to a multiple of this:
    • For example, when $\sigma=0.03$ and $y_{m}=8.314$, then $accuracy = 0.01$ and $y_{m}$ must be written as $y_{m}=8.31$
  • When the digit to be dropped is a 5, round so that the last retained digit is even (to avoid bias)
    • For example: $\sigma = 0.03$ and $y_{m} = 8.315$, then $accuracy = 0.01$ and $y_{m} = 8.32$
    • For example: $\sigma = 0.03$ and $y_{m} = 8.345$, then $accuracy = 0.01$ and $y_{m} = 8.34$
  • When more than one decimal is dropped, round in a single step.
    • For example: $\sigma = 0.3$ and $y_{m} = 8.345$, then $accuracy = 0.1$ and $y_{m} = 8.3$
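A minimal sketch of these rounding rules (the function name is our own; the measured value is passed as a string so that $8.315$ is not distorted by binary floating point):

<code python>
from decimal import Decimal, ROUND_HALF_EVEN
import math

def round_to_sigma(y_m: str, sigma: float) -> Decimal:
    """Round a measured value to the 'accuracy' implied by sigma.

    The accuracy is the highest power of ten smaller than sigma/2, and the
    value is rounded to a multiple of it in one step, half-to-even.
    """
    accuracy = Decimal(10) ** math.floor(math.log10(sigma / 2))
    return Decimal(y_m).quantize(accuracy, rounding=ROUND_HALF_EVEN)

print(round_to_sigma("8.314", 0.03))  # 8.31
print(round_to_sigma("8.315", 0.03))  # 8.32 (last 5 rounds to even)
print(round_to_sigma("8.345", 0.03))  # 8.34 (last 5 rounds to even)
print(round_to_sigma("8.345", 0.3))   # 8.3  (rounded in one step)
</code>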

In the case of a calculation, do not round the intermediate results; otherwise, you are accumulating rounding errors.

Errors can also be represented as relative errors (as a percentage). Take care of the exact meaning (a short numeric sketch follows the list):

  • Absolute error: $d = 5.19 \pm 0.06 mm$, which is equivalent to the
  • Relative error with respect to the measured value: $d = 5.19 mm \pm 1.2 \%$, but also to the
  • Relative error with respect to a full scale (for example of $200 mm$): $d = 5.19 mm \pm 0.03 \%$ of full scale
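As a small numeric sketch of these three representations (the values are those of the example above; the full scale of $200 mm$ is the stated assumption):

<code python>
d, abs_err = 5.19, 0.06                    # measured length and absolute error, mm
full_scale = 200.0                         # full scale of the instrument, mm

rel_to_reading = abs_err / d               # ~0.012
rel_to_full_scale = abs_err / full_scale   # 0.0003

print(f"d = {d} +/- {abs_err} mm")
print(f"d = {d} mm +/- {rel_to_reading:.1%} of the reading")
print(f"d = {d} mm +/- {rel_to_full_scale:.2%} of full scale")
</code>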

Error propagation

In the measurement chain (or in our model), the reading may be the result of a mathematical operation on two input variables. For example, the length of a bar may be the sum of a first part plus a second part. Or, as another example, the output of a sensor is the product of the quantity to be measured times the sensitivity of the sensor. The question is what happens to the error of the output if both values (length 1 and length 2, or sensitivity and quantity) have noise and uncertainty. There are some basic rules to determine the error propagation under mathematical operations for a worst-case estimation (a short sketch follows the rules below):

  • If two quantities are added or subtracted, the individual absolute uncertainties are added in the result
  • If two quantities are multiplied or divided, the percentages of uncertainty are added to get the percentage of uncertainty in the result
  • When finding the square root of a quantity, we divide the percentage of uncertainty by two. For squaring, the percentage uncertainty is multiplied by two.
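A short sketch of the worst-case rules (the helper names and the numbers are our own, hypothetical examples):

<code python>
def worst_case_sum(dx, dy):
    """Worst-case absolute error of x + y or x - y: absolute errors add."""
    return dx + dy

def worst_case_product(x, dx, y, dy):
    """Worst-case relative error of x * y or x / y: relative errors add."""
    return dx / abs(x) + dy / abs(y)

# Length of a bar made of two parts: (100.0 +/- 0.5) mm and (50.0 +/- 0.2) mm
print(f"sum = {100.0 + 50.0} mm +/- {worst_case_sum(0.5, 0.2)} mm")   # +/- 0.7 mm

# Sensor output = sensitivity * quantity, each with 1% uncertainty
rel = worst_case_product(2.0, 0.02, 10.0, 0.1)
print(f"product = {2.0 * 10.0} with a relative error of {rel:.1%}")   # 2.0%
</code>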

Note that when dealing with error propagation, random errors and systematic errors have to be handled strictly separately. In the case of systematic errors, the sign has to be taken into account: for a difference or quotient of quantities the errors (absolute and relative, respectively) subtract, whereas for a sum or product they add with their signs.

In the case of a calculation, for example on a calculator, we normally take a simple approach for determining the number of digits (a short sketch follows the examples):

  • With a product or quotient, the number of significant digits of the result is equal to the smallest number of significant digits of the original numbers.
    • For example: $R = U/I = 21.3/0.2061= 103.3478893740902 \Omega \rightarrow R = 103 \Omega$
  • With an addition or subtraction, the number of digits after the decimal point is equal to the smallest number of digits after the decimal point of the original numbers.
    • For example: $I = 2.5 + 0.357 = 2.9 A$
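A minimal sketch of these two counting rules (the round_sig helper is our own and assumes a non-zero result):

<code python>
from math import floor, log10

def round_sig(x: float, n: int) -> float:
    """Round x to n significant digits."""
    return round(x, -int(floor(log10(abs(x)))) + (n - 1))

# Product or quotient: keep the smallest number of significant digits (3, from 21.3)
R = 21.3 / 0.2061
print(round_sig(R, 3))   # 103.0 -> reported as 103 ohm

# Addition or subtraction: keep the smallest number of decimals (1, from 2.5)
I = 2.5 + 0.357
print(round(I, 1))       # 2.9 A
</code>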

This is summarized in table 2.

                 | Add or subtract           | Multiply or divide
Error            | Absolute errors add up    | Relative errors add up
Number of digits | Lowest number of decimals | Lowest number of digits
Tab. 2: Propagation of errors in a worst-case approach

Sensor Technology TOC
