Sensor Systems


Measurement Theory

A measurement is an action to verify an assumption. The outcome is evidence, in combination with the appropriate interpretation. Therefore, it is essential to understand all ins and outs of a measurement. What is measured? What may have influenced the measurement? This chapter gives some background on the common concepts behind measurements.

Why do we measure?

The direct purpose of a measurement is to get knowledge about a system or phenomenon. Roughly speaking, a measurement can be part of two processes:

• The first one is a research process where we are creating knowledge about our world. We try to understand the world, and to do this, we are building models. Measurements are needed to verify these models. This is the scientific approach.
• The second process in which we need measurements is the product creation process. The design of a product is done based on specifications. In that case, we must know if our initial products, or prototypes, meet these specifications. In some cases, we have to check if we meet a certain standard, for example an emission standard. We need the measurements to convince the customer that the product does what we promise. So the measurements help us to guide our design process: this is the engineering approach.

This is summarized in figure 1. It is the process where measurements, simulations and models form a sequence to gain knowledge in a scientific, documented and reproducible way. In fact, with a measurement, we compare reality with our understanding or interpretation of reality. The interpretation of reality, in a simplified representation, is called a model. The measurement compares reality to a model.

Fig. 1: The relation between measurements, models and simulations

The measurement always contains an intended dataset (the responses that come from the phenomenon we are looking for), but is normally disturbed by noise. Noise is the unintended content of the data. It may disturb our conclusions if we are not aware of the noise.

The outcome of the measurement, contaminated with noise, is our input for interpretation. Based on the outcome we may validate the model (science) or validate the reality (product design). Note that also in product design, we may first need to update the model before it is accurate enough to validate the product: engineering also needs the scientific approach. In product engineering, there may be three conclusions:

• The product meets the specs and can be finalized
• The product does not meet the specs, so we have to change the design
• The product does not meet the specs, but because of budget and time constraints, we decide to change the specifications

Measuring can have two meanings:

• In a first definition, we can define measuring as the verification of the structure and values of a model (so qualitative and quantitative model verification). This verification process is described in the chapter about modelling as a scientific research process.
• In a more narrow scope, we can define measuring as the quantitative determination of a value. This is the step needed to collect and verify data as part of both the engineering and scientific process.

In measurement theory, we are more interested in the second definition of measuring.

Quantities and units

With a measurement, we always measure a quantity (Dutch: grootheid) by comparing the quantity with a unit (Dutch: eenheid).

$$\mathtt{Quantity}=\mathtt{Number} \cdot \mathtt{Unit} \label{eq:QuantityUnit}$$

For example, if we say that the length of an object is $5 m$, then the length is the quantity, and the value is five times the unit meter. More background on quantities and some classifications can be found in the chapter on sensor theory.

All existing quantities can be expressed in a set of base quantities. The Bureau international des poids et mesures (BIPM) has defined such a set as the International System of Units (Système International d'Unités, with the international abbreviation SI). The seven base units are summarized in table 1.

Quantity Symbol Unit Definition
Length $l$ $m$ The metre is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second
Time $t$ $s$ The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom
Mass $m$ $kg$ The kilogram is equal to the mass of the international prototype of the kilogram
Electric Current $I$ $A$ The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed 1 metre apart in vacuum, would produce between these conductors a force equal to $2 \times 10^{-7}$ newton per metre of length
Temperature $T$ $K$ The kelvin is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water
Luminosity $I_{v}$ $cd$ The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency $540 \times 10^{12}$ Hz and that has a radiant intensity in that direction of 1/683 watt per steradian
Amount of Substance $n$ $mol$ The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon 12
Help units:
Angle $\alpha$ $rad~(deg)$
Solid angle $w$ $sr$
Tab. 1: SI Quantities and units

In electrical engineering, we are using some of the SI derived units that are defined by the BIPM as well. These are shown in table 2.

Quantity Symbol Unit SI-Units
Current $I$ $A$ $A$
Potential difference (= Voltage) $U$ $V$ $kg \cdot m^{2} s^{-3} A^{-1}$
Resistance $R$ $\Omega$ $kg \cdot m^{2} s^{-3} A^{-2}$
Capacitance $C$ $F$ $kg^{-1} m^{-2} s^{4} A^{2}$
Frequency $f$ $Hz$ $s^{-1}$
Tab. 2: Derived SI quantities and units as used for electrical engineering

A consequence of using standard units is that with some phenomena we get huge or extremely small numbers. Therefore, we can scale the quantities by using a prefix for the unit. The prefixes are in table 3. To get an idea of how big the range of numbers is, the documentary "Powers of 10" from 1977 is very illustrative. A more modern flash-based tool is also available.

Prefix $p$ $n$ $\mu$ $m$ (unit) $k$ $M$ $G$ $T$
Factor $10^{-12}$ $10^{-9}$ $10^{-6}$ $10^{-3}$ $1$ $10^{3}$ $10^{6}$ $10^{9}$ $10^{12}$
Tab. 3: Prefixes used to scale the units

To convert from a certain unit system to another (like the SI system), there is a structured method that uses the fact that we can multiply everything by one (dimensionless) without affecting the quantity.
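
As a sketch of this "multiply by one" method (the factor-label method): the speed value below is an illustrative example, not from the text.

```python
# Factor-label unit conversion: multiply by ratios that are equal to one.
# Illustrative example: convert a speed of 72 km/h to m/s.

speed_kmh = 72.0

# (1000 m / 1 km) and (1 h / 3600 s) both equal one, so multiplying by
# them changes the number but not the physical quantity.
speed_ms = speed_kmh * (1000.0 / 1.0) * (1.0 / 3600.0)

print(speed_ms)  # ~20 m/s
```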

While doing calculations, it is strongly advised to use units as a second check for the correctness of the used equation. For example, we may remember the equation $$s = \frac{1}{2} a t^{2}$$ which gives the distance $s$ as a function of time $t$ for a constant acceleration $a$, where the start time and place are both zero. When we fill in an acceleration, let's say $9.8~m/s^{2}$, and a time of $10~s$, we find $$s = \frac{1}{2} \cdot 9.8 \frac{m}{s^{2}} \cdot \left ( 10~s \right )^{2} = 490~m.$$ Because the unit of the result is in $m$ (the $s^{2}$ units cancel out), we have a first check that the formula is probably correct.
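
This unit bookkeeping can be automated in a small sketch. The representation of a quantity as a value plus a dictionary of unit exponents is an assumption for illustration, not a standard library.

```python
# Minimal unit bookkeeping to check an equation's dimensions.
# A quantity is (value, {unit: exponent}); multiplying quantities
# adds the exponents, so s = 1/2 * a * t^2 should end up in metres.

def mul(q1, q2):
    value = q1[0] * q2[0]
    units = dict(q1[1])
    for u, p in q2[1].items():
        units[u] = units.get(u, 0) + p
    # drop units whose exponent cancelled to zero
    return value, {u: p for u, p in units.items() if p != 0}

a = (9.8, {"m": 1, "s": -2})   # acceleration in m/s^2
t = (10.0, {"s": 1})           # time in s

s = mul((0.5, {}), mul(a, mul(t, t)))
print(s)  # value ~490, units {'m': 1}: the s^2 cancels, leaving metres
```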

Measurement scales

Measurements are done on a certain scale. This means that the result of a measurement can be an expression like “$a$ is not $b$”, “$a$ is bigger than $b$”, or “$a$ is $10.3$ times the unit $b$”. The scale is how numbers are arranged along a line. We distinguish the scales as indicated in table 4. The different measurement scales are referred to as the levels of measurement1).

Categorial levels of measurement (distinct categories)
Binary scale: a nominal scale with only two points. Example: dead or alive. SPSS calls variables on this scale Nominal
Nominal scale: the outcomes are categorical and can not be meaningfully put into numbers. Examples: red and blue marbles in statistics, male and female labels in statistics, boolean operators. SPSS calls variables on this scale Nominal
Ordinal scale: the scale indicates whether things are equal, larger or smaller than each other; ordinal variables are naturally ordered. Examples: sorting objects from large to small without being interested in the absolute size, Likert scales in statistics (“Strongly agree”, “Agree”, etc.). SPSS calls variables on this scale Ordinal
Continuous levels of measurement (distinct scores)
Interval scale: an expansion of the ordinal scale: the sizes of the differences are now known, but there is no absolute reference, so we can not say that $20^\circ C$ is two times as warm as $10^\circ C$. Example: the Celsius scale. SPSS calls these variables Scale
Ratio scale: as the interval scale, but with an absolute reference level, so we can work with ratios: $2 m$ is two times as long as $1 m$. Examples: electric current and length measurements. SPSS also calls these variables Scale
Cardinal scale: a ratio scale where the reference is a generally accepted standard. Examples: the ampere, the metre
Derived scale: the number relates to a ratio of standard units. Example: capacitance in $A \cdot s/V$
Tab. 4: The levels of measurement represent several measurement scales

When doing electronic measurements of currents and voltages, we normally speak of ratio scales.

The measurement chain

The hardware needed to measure a quantity is represented in the block schematic of figure 2.

Fig. 2: The measurement chain

First of all, the physical quantity to be measured is not coupled one-to-one to the sensor selector part. For example, to measure temperature, there is the packaging of the sensor and some air or liquid shield between the object with the temperature of interest and the sensor material. Another example is the glue or screws with which a strain gauge is attached to an object: these connecting materials may affect the measurement. In sound recordings, there is the influence of the room, which modulates the transfer from the sound source to the microphone. These interfering structures are called the coupling network, and they sit between the physical quantity and the sensor.

The sensor head is the transducer that converts information from a physical domain to the electrical domain.

The transducer has to be read out by an electronic circuit. This circuit normally has three functions:

• Biasing of the sensor element, which is the creation of an operating point. For example, a resistive element has to be biased with a current in order to convert a change in resistance to a change in voltage. This is needed because a resistance as such can not be processed electronically, but a voltage can be treated as a signal
• Bringing the output signal to a level that is optimized for post-processing, like the analog to digital converter input stage. This is called signal leveling
• After bringing the sensor output to an appropriate voltage level, we may discover some filtering is needed. For example, it is wise to remove 50 Hz noise from the signal before sampling with an analog to digital converter.

We call this stage the analog signal conditioning.

Once the signal is pre-conditioned, it can be sampled by an analog to digital converter (ADC) and fed to a microprocessor. The microprocessor can be in the same smart sensor housing. In the digital domain we can do some additional signal processing and the conversion to a bus protocol like USB, SPI or I2C.

Because the intention of the measurement is to do something with the data, there must be an output. This can be a dedicated display, a data analysis system, or a feedback circuit back into the system.

Types of errors

As indicated in figure 1, there are two major insertion points of noise. However, that does not mean these are the only points where measurement errors occur. We distinguish errors that are caused by the system and errors due to the environment.

• Systematic errors have a source within the system. For example, a calibration error of one of the measurement devices may give a bias error, which is a systematic error. Another example is the drift of a sensor resulting in an unexpected offset in the measurement. Systematic errors can be minimised by improving the measurement system. As a rule of thumb, we can say that a calibration with a ten times more accurate measurement method is needed.
• Random errors are also called noise. They do not reproduce, so they can not be removed by improving the measurement system alone. Random errors are caused by inherently unpredictable fluctuations in the readings of a measurement tool or in the experimenter's interpretation of the reading. Random errors are in many cases normally distributed, so the size of the error can be minimized by taking more measurements. Although most random errors have an external source, some specific random errors originate from within the system. An example is quantization noise in an analog to digital conversion, which gives a uniformly distributed noise. Another example is the noise in the electronics of the measurement tool itself.

The two insertion points of noise in figure 1 both represent environmental noise. The first source is in the measurement domain; this can for example be a motion artefact. An example of noise originating from after the transduction is electronic $50~Hz$ noise due to bad shielding.

Besides the classification of errors into random and systematic errors, we can also speak about absolute and relative errors.

• The absolute error is the difference between the measured value and the real value. For example, if we measure $1002 \Omega$ and we know the measured resistor is actually $989 \Omega$, then the absolute error is $13 \Omega$
• In a relative error, the absolute error is normalized as $(\mathtt{Measured~Value} - \mathtt{Real~Value}) / \mathtt{Real~Value}$. For example, $(1002~\Omega - 989~\Omega) / 989~\Omega \approx 0.013~(1.3\%)$
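
A minimal sketch of the two error definitions, using the resistor numbers from the text:

```python
# Absolute and relative error for the resistor example from the text.
measured = 1002.0   # ohm
real = 989.0        # ohm

absolute_error = measured - real            # 13 ohm
relative_error = absolute_error / real      # ~0.013, i.e. 1.3 %

print(absolute_error, round(relative_error, 3))
```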

The quantization error, as mentioned before, is also observed as a rounding error when reading a value from a display. The last digits are not represented, so for example $14.3476$ can be written as $14.3$, introducing an absolute error of $0.0476$.
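
The rounding example can be reproduced directly:

```python
# Rounding a displayed value introduces a quantization-style error.
reading = 14.3476
displayed = 14.3          # only three digits shown on the display

rounding_error = reading - displayed
print(rounding_error)     # ~0.0476
```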

Some errors are the result of transducers that are non-linear; these are nonlinearity errors. They can be expressed as a nonlinearity number in percent.

Errors can be reduced or compensated in some situations. This is partially explained in the chapter about Sensor/Actuator systems in the section Sensor/actuator network concepts. The most common methods are:

• Feedback
• Stimulus-response measurement
• Differential measurement
• Compensation (feed-forward)
• Multivariate analysis
• Averaging
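
As an illustration of the last method, averaging: a short simulation (the noise level and sample count are assumptions for illustration) shows that the mean of many noisy readings lands much closer to the true value than a single reading typically does.

```python
import random

# Sketch: averaging reduces random (zero-mean) noise.
# We simulate noisy readings of a known true value.

random.seed(42)
true_value = 989.0   # ohm, borrowed from the resistor example in the text

def reading():
    # one noisy measurement with ~45 ohm of Gaussian noise (assumed level)
    return true_value + random.gauss(0.0, 45.0)

n = 100
average = sum(reading() for _ in range(n)) / n

# The standard deviation of the mean shrinks by a factor sqrt(n),
# so the averaged estimate is typically within a few ohm of 989.
print(average)
```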

Accuracy, precision and tolerance

Consider a multimeter that has a reading of $1.000341 V$. This is a high precision reading, but we do not know whether it is accurate (correct). The words accuracy and precision are sometimes mixed up, but have completely different meanings. The most important mathematical tools we have are the average reading of a set of measurements and the standard deviation of the readings. The question is how they relate to accuracy and precision.

As an example, we take eight measurements of the resistance of a single resistor. What can we say about the resistance $R$?

Measurement value found
$1$ $1002 \Omega$
$2$ $960 \Omega$
$3$ $1047 \Omega$
$4$ $1010 \Omega$
$5$ $913 \Omega$
$6$ $986 \Omega$
$7$ $1037 \Omega$
$8$ $955 \Omega$
Tab. 5: An example of eight measurements of a single resistor

First of all, the average value, or mean value, is equal to $(1002 + \ldots + 955) \div 8 \approx 989 \Omega$. So the best estimate for $R$ is about $989 \Omega$. But how accurate is this number? Both precision and accuracy characterize the error (uncertainty) in the measurement.

The accuracy of this estimate of $R$ is defined by how close our average is to the “real” value of $R$, a value we do not know in this case. If we knew the real value of $R$, we could express our accuracy in terms of the standard deviation. There is however a more important role for the standard deviation, because it defines how close the measurements are to each other.

Precision indicates the variation in the measurements and can therefore be expressed in terms of the standard deviation

$$\sigma_{n}=\sqrt{\frac{\sum_{i=1}^{n}\left ( x_{i}-\mu \right )^{2}}{n-1}} \label{eq:StandardDeviation}$$

It can be understood why we use a root-mean-square for determining the precision:

• Noise, tolerances and variances can result in positive and negative numbers, which may cancel out in an average
• It relates to electrical power (remember that $P = U \cdot I = U^{2}/R$, so in fact we compare powers)

The standard deviation for the $R$ in the example is $$\sigma=\sqrt{\frac{\left ( 1002-989 \right )^{2} + \ldots + \left ( 955-989 \right )^{2}}{8-1}}\approx 44.8 \Omega$$

This means that about 95% of the measurements lie between the average minus two sigma and the average plus two sigma: $989 \Omega \pm 2 \times 44.8 \Omega$.
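
The mean and the sample standard deviation of Table 5 can be verified with a few lines of Python:

```python
import statistics

# Recompute the mean and sample standard deviation of the eight
# resistance measurements from Table 5.
readings = [1002, 960, 1047, 1010, 913, 986, 1037, 955]  # ohm

mean = statistics.fmean(readings)    # 988.75, about 989 ohm
sigma = statistics.stdev(readings)   # sample std dev (n - 1 in the denominator)

print(round(mean, 2), round(sigma, 1))  # 988.75 44.8
```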

As shown in figure 3, the accuracy is the proximity of measurement results to the true value (“trueness”). It relates to the systematic error, which can only be reduced if we determine the offset by a method with better accuracy and compensate for it. Precision is the repeatability, or reproducibility, of the measurement. It is determined by the random errors in the measurement (which can be reduced by taking more measurements) and by the resolution of the measurement system.

Fig. 3: Accuracy and precision shown in the frequency of occurrence of measurements

With this example, we took eight measurements of the same resistor. The systematic error (accuracy) is the result of the measurement tool which was the same with all eight measurements. There was also a random error (precision limitation) due to noise in the measurement.

A similar experiment could be done with eight different resistors from the same batch. These should have a similar resistance. Again, the systematic error (accuracy) is the result of the measurement system. But there could be an offset due to a temperature effect which is common to all resistors. The random error may have a completely new component: the spread due to the production of the resistors. This random variation is indicated by the tolerance, the permissible limit of variation in an object. Sometimes two or even three $\sigma$ are used to define a tolerance. The production process is optimized until all components are within specification (within the tolerance limits), or sometimes devices outside the specification range are discarded.
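
As a sketch of such a tolerance check (the nominal value of $1000 \Omega$ and the $\pm 5\%$ tolerance are assumptions for illustration), the readings of Table 5 could be screened like this:

```python
# Sketch: screening a batch against a tolerance specification.
# Assumed spec for illustration: nominal 1000 ohm, tolerance +/- 5 %.
readings = [1002, 960, 1047, 1010, 913, 986, 1037, 955]  # ohm, from Table 5

nominal = 1000.0
tolerance = 0.05
low, high = nominal * (1 - tolerance), nominal * (1 + tolerance)

within_spec = [r for r in readings if low <= r <= high]
rejected = [r for r in readings if not low <= r <= high]

print(len(within_spec), "within spec; discarded:", rejected)
```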

Terminology in experimental research methods

Scientists, especially social scientists, may look at a problem slightly differently than an engineer does. Although there is fundamentally no big difference between an engineering physical model and a (statistical) model for a real world problem, the paradigms of scientists and engineers are different2):

• Scientists try to understand and model the world without affecting it
• Engineers try to change the world by making new solutions.

As a result, there are a few concepts that you will encounter in experimental research which are not described on this highly “desktop measurement” focused page3):

• With respect to the research methods, we distinguish
1. Correlational research or cross-sectional research, when we only observe relations without affecting them. In this case we study the natural world. This can be done in several ways, for example by4)
1. Taking a snapshot of many variables at a single time or
2. Measuring variables in time (longitudinal research)
2. Experimental research where we manipulate a variable to see how it affects a system. Also this can be done in two ways:
1. Different groups take part in each experimental condition - between groups, between subjects or independent design or
2. A single group or person is used to try several inputs - within-subject or repeated-measures design.
• In experimental and correlational research, we speak about variables as the observed quantities.
1. A variable that is the cause of a reaction is the independent variable (or predictor variable) and
2. A variable that is assumed to be the reaction is the dependent variable (or outcome variable). The terms between brackets are more appropriate for correlational research; the first terms fit experimental research, where the input is manipulated deliberately.
• Validity is whether an instrument actually measures what you think it measures (failing to do so is what we would call cross sensitivity from an engineering perspective). We distinguish
1. Criterion validity when you can compare it to a real objective value
2. Concurrent validity when data is recorded with respect to established criteria or a known dataset
3. Predictive validity when the data can be used to predict new values at a later stage
4. Content validity when the data covers the full range of the construct, so no influences are overlooked
• Reliability is whether an instrument can be interpreted consistently across different situations: whether it reproduces in a test-retest reliability

Measurement device terminology

The following terminology is important in measurement devices.

Term Word in Dutch Explanation
Range Bereik Minimum / Maximum value that can be measured
Resolution Resolutie Minimum difference that can be measured. This can be because of digitization, but also because there is a noise floor or physical phenomenon that makes it impossible to measure smaller quantities
Accuracy Nauwkeurigheid How close a measured value is to the actual (true) value. Can be expressed as percentage of the full scale
Precision Precisie How close the measured values are to each other. Means good reproducibility
Offset Afwijking Systematic difference between the measured and the real value. This can be confusing: in the sensor response we can speak of an offset (the y-axis zero-crossing), but when talking about measurement equipment we can also say there is an offset when there is a bias between the measured value and the true value
Calibration Kalibratie A set of operations that establish, under specified conditions, the relationship between the values of quantities indicated by a measuring instrument or measuring system and the corresponding values realised by standards
Tab. 6: Measurement device terminology

Be careful: calibration is comparing with the standard, and does not include adjustment. The Dutch word ijken means calibration with respect to the law for commercial use of a tool.

Summary, and what is next?

In figure 4 the sequence of measuring a quantity is represented by three steps. These are basically the same as the measurement chain introduced in figure 2.

Fig. 4: The complete chain from a sensor via biasing to AD conversion

First of all, there is the sensor or transducer. We will see in the page about sensor theory that the sensor

• Converts a physical parameter to modulation of an electronic component
• May be non-linear
• Will have an offset that may drift, so we have to calibrate
• May be frequency dependent, and so has a certain bandwidth

Next, there is a biasing circuit. A biasing circuit makes the step from the sensor to a voltage that can be sampled. This will be discussed in more detail on the page about signal conditioning and sensor read-out. We will see there are two purposes:

• To make a voltage output out of the modulated electronic device (sensor)
• To filter the signal to prepare it for long cables and A to D conversion

Finally, there is the analog to digital conversion, as will be explained on the page about ADC and DAC. The analog to digital conversion must satisfy:

• A good capture of the amplitude of the signal
• An appropriate sampling frequency according to the Nyquist rate
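
A minimal sketch of the second requirement, the Nyquist criterion (the signal bandwidth and the safety margin are assumed values for illustration):

```python
# The Nyquist criterion: the sampling frequency must exceed twice the
# highest frequency present in the signal. Example values are assumed.

f_max = 2_000.0           # highest frequency of interest in the signal (Hz)
nyquist_rate = 2 * f_max  # theoretical minimum sampling frequency (Hz)

# In practice a margin above the Nyquist rate is used; 2.5x f_max is an
# assumed, illustrative choice here.
f_sample = 2.5 * f_max

print(nyquist_rate, f_sample)  # 4000.0 5000.0
```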

In fact, there is a fourth, final step. The measurement information serves a certain goal. It has a communicative value for a designer or researcher. We must represent the information in an unambiguous way that underpins the conclusion of the measurement. This communicative purpose of measurements, as evidence in a design process, is so important that a special page on data representation is devoted to it.


1) , 3)
Andy Field, Discovering statistics using IBM SPSS statistics, Sage, 2013
2)
Bartneck, C., & Rauterberg, M. (2007). HCI Reality - An 'Unreal Tournament'? International Journal of Human Computer Studies, 65(8), 737-743
4)
Paul Martin, Patrick Bateson (1993), Measuring behaviour: an introductory guide. Cambridge University Press