
*A measurement is an action to verify an assumption. The outcome is evidence, in combination with the appropriate interpretation. Therefore, it is essential to understand all ins and outs of a measurement. What is measured? What may have influenced the measurement? This chapter gives some background on the common knowledge behind measurements.*

The direct purpose of a measurement is to gain knowledge about a system or phenomenon. Roughly speaking, a measurement can be part of two processes:

- The first one is a research process where we are creating knowledge about our world. We try to understand the world, and to do this, we are building models. Measurements are needed to verify these models. This is the scientific approach.
- The second process in which we need measurements is the product creation process. The design of a product is done based on specifications. In that case, we must know whether our initial products, or prototypes, meet these specifications. In some cases, we have to check if we meet a certain standard, for example an emission standard. We need the measurements to convince the customer that the product does what we promise. So the measurements help us to guide our design process: this is the engineering approach.

This is summarized in figure 1. It is the process where measurements, simulations and models form a sequence to gain knowledge in a scientific, documented and reproducible way. In fact, with a *measurement*, we compare *reality* with our understanding or interpretation of reality. The interpretation of reality, in a simplified representation, is called a *model*. The measurement compares reality to a model.

The measurement always contains an intended dataset (the responses that come from the phenomenon we are looking for), but is normally disturbed by *noise*. Noise is the unintended content of the data. It may disturb our conclusions if we are not aware of the noise.

The outcome of the measurement, contaminated with noise, is our input for *interpretation*. Based on the outcome we may validate the model (science) or validate the reality (product design). Note that in product design too, we may first need to update the model before it is accurate enough to validate the product: engineering needs the scientific approach as well. In product engineering, there may be three conclusions:

- The product meets the specs and can be finalized
- The product does not meet the specs, so we have to change the design
- The product does not meet the specs, but because of budget and time constraints, we decide to change the specifications

*Measuring* can have two meanings:

- In a first definition, we can define measuring as the verification of the structure and values of a model (so *qualitative and quantitative model verification*). This verification process is described in the chapter about modelling as a scientific research process.
- In a more narrow scope, we can define measuring as *the quantitative determination of a value*. This is the step needed to collect and verify data as part of both the engineering and scientific process.

In measurement theory, we are more interested in the second definition of measuring.

With a measurement, we always measure a *quantity* (Dutch: *grootheid*) by comparing the quantity with a *unit* (Dutch: *eenheid*).

\begin{equation} \mathtt{Quantity}=\mathtt{Number} \cdot \mathtt{Unit} \label{eq:QuantityUnit} \end{equation}

For example, if we say that the length of an object is $5 m$, then the length is the quantity, and the value is five times the unit *meter*. More background on quantities and some classifications can be found in the chapter on sensor theory.

All existing quantities can be expressed in a set of basic quantities. The Bureau international des poids et mesures (BIPM) has defined such a set as the *International System of Units* (Système International d'Unités), with the international abbreviation SI. The seven base units are summarized in table 1.

In electrical engineering, we are using some of the SI derived units that are defined by the BIPM as well. These are shown in table 2.

A consequence of using standard units is that with some phenomena we get huge or extremely small numbers. Therefore, we can scale the quantities by using a prefix for the unit symbol. The prefixes are in table 3. To get an idea of how big the range of numbers is, the documentary "Powers of 10" from 1977 is very illustrative. A more modern flash-based tool is also available.
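
Picking the right prefix can also be automated. A minimal sketch (the function and prefix table are our own, covering only part of the range of table 3):

```python
import math

# Subset of the SI prefixes of table 3, keyed by power of ten.
SI_PREFIXES = {-12: "p", -9: "n", -6: "u", -3: "m",
               0: "", 3: "k", 6: "M", 9: "G", 12: "T"}

def si_scale(value):
    """Return (scaled value, prefix) such that 1 <= |scaled| < 1000."""
    if value == 0:
        return 0.0, ""
    # Round the power of ten down to the nearest multiple of three.
    exponent = 3 * int(math.floor(math.log10(abs(value)) / 3))
    exponent = max(-12, min(12, exponent))   # clamp to the table above
    return value / 10**exponent, SI_PREFIXES[exponent]

scaled, prefix = si_scale(4.7e-3)
print(f"{scaled:g} {prefix}F")   # prints a 4.7e-3 F capacitor with prefix m
```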

To convert from a certain unit system to another (like the SI system), there is a structured method that uses the fact that we can multiply everything by one (dimensionless) without affecting the quantity.
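
As a worked example of this method (the speed value is chosen for illustration), we can convert $90~km/h$ to SI units by multiplying with fractions that are each equal to one:

\begin{equation} 90~\frac{\mathrm{km}}{\mathrm{h}} = 90~\frac{\mathrm{km}}{\mathrm{h}} \cdot \frac{1000~\mathrm{m}}{1~\mathrm{km}} \cdot \frac{1~\mathrm{h}}{3600~\mathrm{s}} = 25~\frac{\mathrm{m}}{\mathrm{s}} \end{equation}

Because $1000~m$ equals $1~km$ and $1~h$ equals $3600~s$, both fractions are dimensionless and equal to one, so the quantity itself is unchanged.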

While doing calculations, it is strongly advised to use units as a second check for the correctness of the used equation. For example, we may remember the equation \begin{equation} s = \frac{1}{2} a t^{2} \end{equation} which gives the distance s as a function of time t for a constant acceleration a, where the start time and place are both zero. When we fill in an acceleration, let's say $9.8 m/s^{2}$, and a time of $10 s$, we find \begin{equation} s = \frac{1}{2} 9.8 \frac{m}{s^{2}} \left ( 10 s \right )^{2} = 490 m. \end{equation} Because the unit of the result is m (the $s^{2}$ units cancel out), we have a first check that the formula is probably correct.
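
This unit check can even be carried out in software by tracking the base-unit exponents along with every number. A minimal sketch (the `Quantity` class is our own and tracks only the metre and the second):

```python
class Quantity:
    """A number with exponents of the base units metre (m) and second (s)."""

    def __init__(self, value, m=0, s=0):
        self.value = value
        self.dim = (m, s)

    def __mul__(self, other):
        if isinstance(other, Quantity):
            # Multiplying quantities adds the unit exponents.
            return Quantity(self.value * other.value,
                            self.dim[0] + other.dim[0],
                            self.dim[1] + other.dim[1])
        return Quantity(self.value * other, *self.dim)

    __rmul__ = __mul__

    def __add__(self, other):
        # Adding quantities with different units is an error in the model.
        if self.dim != other.dim:
            raise ValueError(f"unit mismatch: {self.dim} vs {other.dim}")
        return Quantity(self.value + other.value, *self.dim)

# s = 1/2 a t^2 with a = 9.8 m/s^2 and t = 10 s
a = Quantity(9.8, m=1, s=-2)
t = Quantity(10.0, s=1)
dist = 0.5 * a * t * t
print(dist.value, dist.dim)   # about 490.0 with dimension (1, 0): metres
```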

Measurements are done on a certain scale. This means that the result of a measurement can be an expression like “$a$ is not $b$”, “$a$ is bigger than $b$”, or “$a$ is $10.3$ times the unit $b$”. The *scale* is how numbers are arranged along a line. We distinguish the scales as indicated in table 4. The different measurement scales are referred to as the *levels of measurement*^{1)}.

When doing electronic measurements of currents and voltages, we normally speak of ratio scales.

The hardware needed to measure a quantity is represented in the block schematic of figure 2.

First of all, the physical quantity to be measured is not coupled one-to-one to the sensor itself. For example, to measure temperature, there is the packaging of the sensor and some air or liquid shield between the object with the temperature of interest and the sensor material. Another example is the glue or screw with which a strain gauge is attached to an object: these connecting materials may affect the measurement. In sound recordings, there is the influence of the room, which modulates the transfer from the sound source to the microphone. These interfering structures are called the *coupling network* and they are between the physical quantity parameter and the sensor.

The sensor head is the *transducer* that converts information from a physical domain to the electrical domain.

The transducer is read out by an electronic circuit. This circuit normally has three functions:

- *Biasing* of the sensor element, which is the creation of a setting point. For example, a resistive element has to be biased with a current in order to convert a change in resistance to a change in voltage. This is needed because a resistance as such can not be processed electronically, but a voltage can be treated as a signal.
- *Signal leveling*: bringing the output signal to a level that is optimized for post-processing, like the analog to digital converter input stage.
- *Filtering*: after bringing the sensor output to an appropriate voltage level, we may discover some filtering is needed. For example, it is wise to remove 50 Hz noise from the signal before sampling with an analog to digital converter.

We call this stage the analog *signal conditioning*.
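
As a numeric illustration of the biasing function (the component values are assumed, not from the text): a Pt100-like resistive temperature sensor biased with a constant current turns a change in resistance into a change in voltage.

```python
# Illustrative biasing example (component values assumed, not from the text).
I_BIAS = 1.0e-3      # constant bias current in A
R_0 = 100.0          # sensor resistance in ohm at the setting point
dR = 0.39            # resistance change in ohm (roughly 1 degC for a Pt100)

V_0 = I_BIAS * R_0   # setting point voltage: V = I * R
dV = I_BIAS * dR     # the change in R now appears as a change in V

print(f"bias point: {V_0 * 1e3:.1f} mV, signal: {dV * 1e6:.0f} uV")
```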

Once the signal is pre-conditioned, it can be sampled by an analog to digital converter (ADC) and fed to a microprocessor. The microprocessor can be in the same smart sensor housing. In the digital domain we can do some additional signal processing and the conversion to a bus protocol like USB, SPI or I^{2}C.

Because the intention of the measurement is to do something with the data, there must be an output. This can be a dedicated display, a data analysis system, or a feedback circuit back into the system.

As indicated in figure 1, there are two major insertion points of noise. However, that does not mean these are the only points where *measurement errors* occur. We distinguish errors that are caused by the system and errors due to the environment.

- *Systematic errors* have a source within the system. For example, a calibration error of one of the measurement devices may give a bias error, which is a systematic error. Another example is the drift of a sensor resulting in an unexpected offset in the measurement. Systematic errors can be minimized by improving the measurement system. As a rule of thumb, we can say that a calibration with a ten times more accurate measurement method is needed.
- *Random errors* are also called *noise*. They can not be minimized by measuring more accurately because they have an external source: they *don't reproduce*. Random errors are caused by inherently unpredictable fluctuations in the readings of a measurement tool or in the experimenter's interpretation of the reading. Random errors are in many cases normally distributed, so the size of the error can be reduced by taking more measurements. Although most random errors have an external source, some specific random errors originate from within the system. An example is quantization noise in an analog to digital conversion, which gives a uniformly distributed noise. Another example is the noise in the electronics of the measurement tool itself.

The two insertion points of noise in figure 1 both represent environmental noise. The first source is in the measurement domain; this can for example be a motion artefact. An example of noise originating after the transduction is electronic $50 Hz$ interference due to bad shielding.

Besides the classification of errors into random and systematic errors, we can also speak about absolute and relative errors.

- The *absolute error* is the difference between the measured value and the real value. For example, if we measure $1002 \Omega$ and we know the measured resistor is actually $989 \Omega$, then the absolute error is $13 \Omega$.
- In a *relative error*, the absolute error is normalized as $(\mathtt{Measured Value} - \mathtt{Real Value}) / \mathtt{Real Value}$. For example, $(1002 \Omega - 989 \Omega) / 989 \Omega \approx 0.013~(1.3\%)$.
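
Both definitions translate directly into code; this short sketch repeats the resistor numbers from the example above:

```python
def absolute_error(measured, real):
    """Difference between the measured value and the real value."""
    return measured - real

def relative_error(measured, real):
    """Absolute error normalized to the real value."""
    return (measured - real) / real

# The resistor example: measured 1002 ohm, real value 989 ohm.
print(absolute_error(1002.0, 989.0))   # 13.0 ohm
print(relative_error(1002.0, 989.0))   # about 0.013, i.e. 1.3 %
```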

The quantization error, as mentioned before, is also observed as a *rounding error* when reading a value from a display. The last digits are not represented, so for example $14.3476$ can be displayed as $14.3$, introducing an absolute error of $0.0476$.
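
A one-line sketch of this display rounding, repeating the numbers from the text:

```python
value = 14.3476
displayed = round(value, 1)     # a display with one decimal shows 14.3
abs_error = value - displayed   # the rounding introduces about 0.0476

print(displayed, abs_error)
```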

Some errors are the result of transducers that are non-linear: these are *nonlinearity errors*. They can be expressed as a non-linearity number in percent.

Errors can be reduced or compensated in some situations. This is partially explained in the chapter about Sensor/Actuator systems in the section Sensor/actuator network concepts. The most common methods are:

- Feedback
- Stimulus-response measurement
- Differential measurement
- Compensation (feed-forward)
- Multivariate analysis
- Averaging

Consider a multimeter that has a reading of $1.000341 V$. This is a high precision reading, but we do not know whether it is accurate (correct). The words *accuracy* and *precision* are sometimes mixed up, but have completely different meanings. The most important mathematical tools we have are the *average* reading of a set of measurements and the *standard deviation* of the readings. The question is how they relate to accuracy and precision.

In an example we take eight measurements of the resistance of a single resistor. What can we say about the resistance $R$?

First of all, the *average value*, or *mean value*, is equal to $(1002 + \ldots + 955) \div 8 \approx 989 \Omega$. So the best estimate for $R$ is about $989 \Omega$. But how accurate is this number? Both *precision* and *accuracy* express the error (uncertainty) in the measurement.

*Accuracy* of this estimate of $R$ is defined by how close our average $R$ is to the “real” value of $R$, a value we don't know in this case. If we knew the real value of $R$, we could express our accuracy in terms of the standard deviation. There is, however, a more important role for the standard deviation, because it defines how close the measurements are to each other.

*Precision* indicates the variation in the measurements and can therefore be expressed in terms of the standard deviation

\begin{equation} \sigma_{n}=\sqrt{\frac{\sum_{i=1}^{n}\left ( x_{i}-\bar{x} \right )^{2}}{n-1}} \label{eq:StandardDeviation} \end{equation}

It can be understood why we use a root-mean-square for determining the precision:

- Noise, tolerances and variances can result in positive and negative numbers, which may cancel out in an average
- Relates to electrical power (remember that $P = U \cdot I = U^{2}/R$, so in fact we compare powers)

The standard deviation for $R$ in the example is \begin{equation} \sigma=\sqrt{\frac{\left ( 1002-989 \right )^{2} + \ldots + \left ( 955-989 \right )^{2}}{8-1}}\approx 16.9 \Omega \end{equation}

This means that, assuming a normal distribution, about 95% of the measurements lie within plus or minus two sigma of the average: $989 \Omega \pm 2 \times 16.9 \Omega$.
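
The average and standard deviation can be reproduced with Python's standard library. A minimal sketch; the eight readings are assumed for illustration (only $1002 \Omega$ and $955 \Omega$ appear in the text) and are chosen so that their mean is the $989 \Omega$ of the example, with a standard deviation close to the $16.9 \Omega$ above:

```python
import statistics

# Eight resistance readings in ohm (assumed values; only 1002 and 955
# are given in the text, the rest is chosen to give a mean of 989 ohm).
readings = [1002, 955, 1012, 997, 983, 984, 991, 988]

mean = statistics.mean(readings)    # best estimate of R
sigma = statistics.stdev(readings)  # sample standard deviation (n - 1)

print(f"R = {mean} ohm, sigma = {sigma:.1f} ohm")
print(f"about 95% of readings in {mean - 2*sigma:.0f} .. {mean + 2*sigma:.0f} ohm")
```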

As shown in figure 3, the accuracy is the proximity of measurement results to the true value (“trueness”). It relates to the systematic error, which can only be reduced if we determine the offset by a method with better accuracy and compensate for it. Precision is the repeatability, or reproducibility, of the measurement. It is determined by the random errors in the measurement (which can be reduced by taking more measurements) and by the resolution of the measurement system.

In this example, we took eight measurements of the same resistor. The systematic error (limiting the accuracy) is the result of the measurement tool, which was the same for all eight measurements. There was also a random error (limiting the precision) due to noise in the measurement.

A similar experiment could be done with eight different resistors from the same batch. These should have a similar resistance. Again, the systematic error (accuracy) is the result of the measurement system, but there could also be an offset due to a temperature effect which is common for all resistors. The random error may have a completely new component: the spread due to the production of the resistors. This random variation is indicated by the *tolerance*; sometimes two or even three $\sigma$ is used to define it. The tolerance is the permissible limit of variation in an object. The production process is optimized until all components are within specification (within the tolerance limits), or sometimes devices outside the specification range are discarded.

Scientists, especially social scientists, may look at a problem slightly differently than engineers do. Although there is fundamentally no big difference between an engineering physical model and a (statistical) model for a real-world problem, the paradigms of scientists and engineers are different^{2)}:

- *Scientists* try to understand and model the world without affecting it.
- *Engineers* try to change the world by making new solutions.

As a result, there are a few concepts that you will encounter in experimental research which are not described on this highly “desktop measurement” focused page^{3)}:

- With respect to the research methods, we distinguish:
  - *Correlational research* or *cross-sectional research*, when we only observe relations without affecting them. In this case we study the natural world. This can be done in several ways, for example by^{4)}:
    - Taking a snapshot of many variables at a single time, or
    - Measuring variables in time (*longitudinal research*).
  - *Experimental research*, where we manipulate a variable to see how it affects a system. This can also be done in two ways:
    - Different groups take part in each experimental condition (*between groups*, *between subjects* or *independent design*), or
    - A single group or person is used to try several inputs (*within-subject* or *repeated-measures design*).

- In experimental and correlational research, we speak about *variables* as the observed quantities.
  - A variable that is the cause of a reaction is the *independent variable* (or *predictor variable*), and
  - A variable that is assumed to be the reaction is the *dependent variable* (or *outcome variable*). The words between brackets are more appropriate for correlational research, the first words for experimental research where the input is manipulated deliberately.

- *Validity* is whether an instrument actually measures what you think it measures (related to what we would call a *cross sensitivity* from an engineering perspective). We distinguish:
  - *Criterion validity*, when you can compare it to a real objective value
  - *Concurrent validity*, when data is recorded with respect to established criteria or a known dataset
  - *Predictive validity*, when the data can be used to predict new values at a later stage
  - *Content validity*, when the data covers the full range of the construct, so no influences are overlooked

- *Reliability* is whether an instrument can be interpreted consistently across different situations: whether it reproduces in a test-retest setting (*test-retest reliability*).

The following terminology is important in measurement devices.

Be careful: *calibration* is comparing with the standard, and does not include *adjustment*. The Dutch word *ijken* refers to calibration with respect to the law for commercial use of a tool.

In figure 4 the sequence of measuring a quantity is represented by three steps. These are basically the same as the measurement chain introduced in figure 2.

First of all, there is the sensor or transducer. We will see in the page about sensor theory that the sensor

- Converts a physical parameter to modulation of an electronic component
- May be non-linear
- Will have an offset that may drift, so we have to calibrate
- May be frequency dependent, and so has a certain bandwidth

Next, there is a biasing circuit. A biasing circuit makes the step from the sensor to a voltage that can be sampled. This will be discussed in more detail on the page about signal conditioning and sensor read-out. We will see there are two purposes:

- To make a voltage output out of the modulated electronic device (sensor)
- To filter the signal to prepare it for long cables and A to D conversion

Finally, there is an analog to digital conversion, as will be explained on the page about ADC and DAC. The analog to digital conversion must satisfy:

- A good capture of the amplitude of the signal
- An appropriate sampling frequency according to the Nyquist rate
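
These two requirements can be written down as a tiny check (the frequencies are assumed for illustration):

```python
# Sketch of the Nyquist-rate check (frequency values assumed for illustration).
f_max = 400.0              # highest signal frequency of interest, in Hz
nyquist_rate = 2 * f_max   # minimum sampling frequency, in Hz

f_sample = 1000.0          # chosen ADC sampling frequency, in Hz
ok = f_sample > nyquist_rate
print(f"sampling at {f_sample:.0f} Hz, need more than {nyquist_rate:.0f} Hz: {ok}")
```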

In fact, there is a fourth, final step. The measurement information serves a certain goal: it has a communicative value for a designer or researcher. We must represent the information in an unambiguous way that underpins the conclusion of the measurement. This communicative purpose of measurements, as evidence in a design process, is so important that a special page on data representation is devoted to it.

- Chapter 1: Measurement Theory
- Chapter 2: Measurement Errors
- Chapter 3: Measurement Domains
- Chapter 4: Circuits, Graphs, Tables, Pictures and Code
- Chapter 5: Basic Sensor Theory
- Chapter 6: Sensor-Actuator Systems
- Chapter 7: Modelling
- Chapter 8: Modelling: The Accelerometer - example of a second order system
- Chapter 9: Modelling: Scaling - why small things appear to be stiffer
- Chapter 10: Modelling: Lumped Element Models
- Chapter 11: Modelling: Finite Element Models
- Chapter 13: Modelling: Systems Theory
- Chapter 14: Modelling: Numerical Integration
- Chapter 15: Signal Conditioning and Sensor Read-out
- Chapter 16: Resistive Sensors
- Chapter 17: Capacitive Sensors
- Chapter 18: Magnetic Sensors
- Chapter 19: Optical Sensors
- Chapter 20: Actuators - an example of an electrodynamic motor
- Chapter 21: Actuator principles for small speakers
- Chapter 22: ADC and DAC
- Chapter 23: Bus Interfaces - SPI, I^{2}C, IO-Link, Ethernet based
- Appendix A: Systematic unit conversion
- Appendix B: Common Mode Rejection Ratio (CMRR)
- Appendix C: A Schmitt Trigger for sensor level detection

Field, A. (2013). Discovering Statistics Using IBM SPSS Statistics. Sage.

Bartneck, C., & Rauterberg, M. (2007). HCI Reality - An 'Unreal Tournament'? International Journal of Human Computer Studies, 65(8), 737-743.

Martin, P., & Bateson, P. (1993). Measuring Behaviour: An Introductory Guide. Cambridge University Press.

theory/sensor_technology/st1_measurement_theory.txt · Last modified: 2017/10/10 18:37 by glangereis