Measuring instrument

A device used to measure a physical quantity is called a measuring instrument. Instruments indicate the value of such quantities, and on the basis of these indications we interpret a process and take appropriate actions and decisions.

There are two main types of measuring instruments: analog and digital. Analog instruments indicate the magnitude of the quantity through the movement of a pointer over a scale. They usually indicate values in whole numbers, though readings can be estimated to one or two decimal places. Readings estimated to decimal places are not always entirely reliable, since some human error is inevitably involved in reading a scale.

Digital measuring instruments display the value of the physical quantity in numerical form, which can be read easily and to one or more decimal places. Since no human reading error is involved, they are more accurate than analog measuring instruments.

Types of measurement instruments

• Linear measurement instruments
• Angular measurement instruments
• Comparators
• Optical measurement instruments
• Interferometry

Functions of measuring instruments

Indicating the value of the physical quantity: instruments are calibrated against standard values of the physical quantity. The movement of the pointer directly indicates the magnitude of the quantity, in whole numbers or fractions. Digital instruments, which are becoming increasingly popular, indicate the value directly in numerical form, including decimals, making them easier to read and more accurate.

Acting as controllers: many instruments can also be used as controllers. For instance, when a specified pressure is reached, the measuring instrument interrupts an electrical circuit, which stops the compressor. Similarly, a thermostat starts or stops the compressor of a refrigeration system depending on the temperature reached in the evaporator.

Recording data: some measuring instruments can also record and store data for real-time or later processing.

Transmitting data: measuring instruments can also transfer data to distant places. Instruments placed in unsafe locations, such as high-temperature zones, can be wired so that their output is read at a distant location that is safe for human beings; the signal obtained from these instruments can also be used to operate controls.

Performing calculations: some measuring instruments can also carry out calculations such as addition, subtraction, multiplication, and division; some can even be used to solve highly complex equations.

Measuring range of an instrument

The measuring range of an instrument is the interval between the maximum and minimum values of the physical quantity that it can detect. It is the instrument's principal specification, because it indicates whether the instrument is suitable for measuring a given quantity and underlies the safety specifications declared by the manufacturer.

All other metrological characteristics of an instrument refer to its measuring range: they are valid only for values of the quantity under examination that lie within that range.

The measuring range can be narrower than the range of the graduation (i.e., of the graduated scale): in this case the graduation contains the lower and upper range limits of the instrument; or only the upper range limit, when the lower range limit is zero; or, conversely, only the lower range limit, when the upper range limit coincides with the upper end of the graduation.

The spatial distribution of the divisions that make up the instrument scale (the scale graduation) reflects the physical law on which the operating principle of the instrument is based.

The graduation may be linear (the scale divisions are equally spaced), quadratic (the distance between successive divisions varies according to a quadratic law), logarithmic, and so on. In this regard, it should be noted that all instruments with nonlinear graduations require particular care in reading the measurement: the human eye interpolates poorly between two consecutive divisions when the spatial distribution of the divisions is not linear.

Knowing the physical law on which the operating principle of a measuring instrument is based makes it possible to establish whether the instrument is suitable for measuring a given quantity. For example, a galvanometer with a linear law is suitable only for the measurement of direct currents; a galvanometer with a quadratic law responds to a thermal effect and is therefore suitable for the measurement of both direct and alternating currents, of any waveform.

The fundamental concept of measuring range is completed, for the practical use of the instrument, by the following further definitions:

• extension of the graduation: the set of all divisions drawn on the scale of the instrument; the measuring range can, at most, equal the extension of the graduation;
• lower range limit: the value of the quantity to be measured below which the instrument provides indications with a precision lower than the declared one;
• upper range limit: the corresponding definition at the upper end: the value above which the instrument provides indications of the quantity to be measured with a precision lower than the declared one;
• full-scale value: the upper range limit of an instrument whose lower range limit is close to zero;
• nominal overload: the maximum value allowed for the quantity to be measured, beyond which the instrument suffers irreversible damage; the nominal overload is typically about 3 to 4 times the upper range limit of the instrument.
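Taken together, these definitions amount to a simple range check. The sketch below illustrates them; the pressure-gauge limits and the overload factor of 3 are illustrative assumptions, not values from any specific instrument:

```python
def check_reading(value, lower_limit, upper_limit, overload_factor=3.0):
    """Classify a reading against an instrument's range limits.

    Outside [lower_limit, upper_limit] the declared precision no longer
    applies; beyond the nominal overload (assumed here to be
    overload_factor times the upper range limit) the instrument risks
    irreversible damage.
    """
    if value > overload_factor * upper_limit:
        return "overload"
    if value < lower_limit or value > upper_limit:
        return "outside measuring range"
    return "within measuring range"

# Hypothetical pressure gauge with a 0.5-10 bar measuring range
print(check_reading(5.0, 0.5, 10.0))   # within measuring range
print(check_reading(12.0, 0.5, 10.0))  # outside measuring range
print(check_reading(35.0, 0.5, 10.0))  # overload
```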

Performance characteristics of measurement instruments

The treatment of instrument performance characteristics generally has been broken down into the subareas of static characteristics and dynamic characteristics.

The reasons for such a classification are several. First of all, some applications involve the measurement of quantities that are constant or vary only quite slowly. Under these conditions, it is possible to define a set of performance criteria that give a meaningful description of the quality of measurement without becoming concerned with dynamic descriptions involving differential equations. These criteria are called static characteristics.

Static characteristics are the set of metrological properties that exhaustively describe the operation of a transducer operating under specified environmental conditions, when slow variations of the measurand are imposed at the input, in the absence of shocks, vibrations, and accelerations (unless, of course, these physical quantities are themselves the object of measurement).

Many other measurement problems involve rapidly varying quantities. Here the dynamic relations between the instrument input and output must be examined, generally by the use of differential equations. Performance criteria based on these dynamic relations constitute the dynamic characteristics.

Static characteristics also influence the quality of measurement under dynamic conditions, but the static characteristics generally show up as nonlinear or statistical effects in the otherwise linear differential equations giving the dynamic characteristics. These effects would make the differential equations analytically unmanageable, and so the conventional approach is to treat the two aspects of the problem separately.

Thus the differential equations of dynamic performance generally neglect the effects of dry friction, backlash, hysteresis, statistical scatter, etc., even though these effects influence the dynamic behavior. These phenomena are more conveniently studied as static characteristics, and the overall performance of an instrument is then judged by a semiquantitative superposition of the static and dynamic characteristics.

This approach is, of course, approximate but a necessary expedient for convenient mathematical study. Once experimental designs and numerical values are available, we can use simulation to investigate the nonlinear and statistical effects.

Static characteristics

• Accuracy
• Precision
• Sensitivity
• Linearity
• Reproducibility
• Repeatability
• Hysteresis
• Resolution
• Threshold
• Drift
• Stability
• Tolerance
• Range or span

Accuracy

Accuracy is the degree of agreement of the measured dimension with its true magnitude. It can also be defined as the maximum amount (error) by which the result differs from the true value or as the nearness of the measured value to its true value, often expressed as a percentage. It also represents a static characteristic of an instrument.

The concept of the accuracy of a measurement is a qualitative one. An appropriate approach to stating this closeness of agreement is to identify the measurement errors and to quantify them by the value of their associated uncertainties, where an uncertainty is the estimated range of value of an error.

Accuracy depends on the inherent limitations of the instrument and on shortcomings in the measurement process.

Often an estimate for the value of the error is based on a reference value used during the instrument’s calibration as a surrogate for the true value. A relative error based on this reference value is estimated by:

$\textrm{Accuracy (A)}=\dfrac{|\textrm{measured value}-\textrm{true value}|}{\textrm{reference value}}\times 100$

Thus, if the accuracy of a temperature indicator with a full-scale range of 0÷500 °C is specified as ±0.5%, the measured value will always be within ±2.5 °C of the true value when checked against a standard instrument during calibration. But if it indicates a reading of 250 °C, the error of ±2.5 °C amounts to ±1% of the reading. Thus it is always better to choose a scale of measurement where the input is near the full-scale value. The true value, however, is always difficult to obtain; in the laboratory, we use standard calibrated instruments to measure the true value of the variable.
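The arithmetic of the full-scale example above can be sketched in a few lines (the function names are illustrative, not from any standard library):

```python
def full_scale_error(fs_min, fs_max, accuracy_pct):
    """Absolute error band implied by a full-scale accuracy specification."""
    return (fs_max - fs_min) * accuracy_pct / 100.0

def error_pct_of_reading(abs_error, reading):
    """The same absolute error expressed as a percentage of the reading."""
    return abs_error / reading * 100.0

# +/-0.5% of full scale on a 0-500 degC indicator
e = full_scale_error(0.0, 500.0, 0.5)
print(e)                               # 2.5 (degC)
print(error_pct_of_reading(e, 250.0))  # 1.0 (% of a 250 degC reading)
```

This shows why a reading near full scale is preferable: the same ±2.5 °C band is a smaller fraction of a larger reading.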

Accuracy and costs

As the demand for accuracy increases, the costs increase exponentially.

If the tolerance of a component is to be measured, then the accuracy requirement will typically be 10% of the tolerance value.

Demanding higher accuracy than is required is not viable, as it increases the cost of the measuring equipment and hence the inspection cost. Besides, it can make the measuring equipment less reliable, because higher accuracy requires higher sensitivity.

Therefore, in practice, when designing measuring equipment, the required accuracy is weighed against cost, depending on the quality and reliability of the component or product and on the inspection cost.

Precision

In metrology, precision indicates the repeatability or reproducibility of an instrument (it does not indicate accuracy). In other words, it is the degree of agreement among repeated measurements of a quantity made by the same method, under similar conditions.

The ability of a measuring instrument to repeat the same result when measuring the same quantity is known as repeatability. Repeatability is random in nature and, by itself, does not assure accuracy, though it is a desirable characteristic. Precision refers to the consistent reproducibility of a measurement.

Reproducibility is normally specified in terms of a scale reading over a given period of time. An instrument that is not precise gives different results for the same dimension on repeated readings. In most measurements, precision assumes more significance than accuracy. It is important to note that the scale used for the measurement must be appropriate and conform to an internationally accepted standard.

If an instrument is used to measure the same input, but at different instants, spread over the whole day, successive measurements may vary randomly. It also represents a static characteristic of an instrument.

The random fluctuation of readings (mostly with a Gaussian distribution) is often due to random variations of several other factors that have not been taken into account while measuring the variable. In a precise instrument, successive readings are very close; in other words, the standard deviation $$\sigma_e$$ of the set of measurements is very small. Quantitatively, precision can be expressed as:

$\textrm{Precision}=\dfrac{\textrm{measured range}}{\sigma_e}$
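A minimal sketch of this formula, interpreting "measured range" as the spread of the repeated readings (if the instrument's full measuring range is meant instead, substitute that span; the readings below are made up for illustration):

```python
import statistics

def precision_ratio(readings):
    """Precision per the formula above: measured range divided by the
    sample standard deviation sigma_e of repeated readings
    (a larger ratio means the readings cluster more tightly)."""
    sigma_e = statistics.stdev(readings)
    return (max(readings) - min(readings)) / sigma_e

# Five repeated readings of the same nominal 10.2 mm dimension
print(round(precision_ratio([10.1, 10.2, 10.2, 10.2, 10.3]), 2))  # 2.83
```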

Accuracy vs Precision

The difference between precision and accuracy needs to be understood carefully. Precision means repetition of successive readings, but it does not guarantee accuracy: successive readings may be close to each other yet far from the true value. On the other hand, an accurate instrument must also be precise, since successive readings must all be close to the true value (which is unique).

Accuracy gives information regarding how far the measured value is with respect to the true value, whereas precision indicates the quality of measurement, without giving any assurance that the measurement is correct. These concepts are directly related to random and systematic measurement errors.

It can clearly be seen from the figure that precision is not a single measurement but is associated with a process or a set of measurements. Normally, in any set of measurements performed by the same instrument on the same component, individual measurements are distributed around the mean value and precision is the agreement of these values with each other.

The difference between the true value and the mean value of a set of readings on the same component is termed the error. An error can also be defined as the difference between the indicated value and the true value of the quantity measured.

$E=V_m-V_t$

where $$E$$ is the error, $$V_m$$ the measured value, and $$V_t$$ the true value.

The value of $$E$$ is also known as the absolute error. For example, when the weight being measured is of the order of 1 kg, an error of ±2 g can be neglected, but the same error of ±2 g becomes very significant while measuring a weight of 10 g. Thus, for the same value of the error, its effect becomes significant when the quantity being measured is small.

Hence, the error expressed as a percentage is known as relative error: the ratio of the error to the true value of the quantity to be measured. The accuracy of an instrument can also be expressed as a % error. If an instrument measures $$V_m$$ instead of $$V_t$$, then:

$\%_{error}=\dfrac{\textrm{error}}{\textrm{true value}}\times 100$

$\%_{error}=\dfrac{V_m-V_t}{V_t}\times 100$
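The error formulas above can be sketched directly; the weights are the 1 kg and 10 g examples mentioned earlier, with a hypothetical +2 g error in each case:

```python
def absolute_error(v_m, v_t):
    """E = Vm - Vt."""
    return v_m - v_t

def percent_error(v_m, v_t):
    """Relative error as a percentage of the true value."""
    return (v_m - v_t) / v_t * 100.0

# The same +2 g error on a 1 kg weight versus a 10 g weight (grams)
print(percent_error(1002.0, 1000.0))  # 0.2
print(percent_error(12.0, 10.0))      # 20.0
```

The identical absolute error is a hundred times more significant on the small weight, which is the point of the example.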

The accuracy of an instrument is always assessed in terms of error. An instrument is more accurate if the magnitude of error is low. It is essential to evaluate the magnitude of error by other means as the true value of the quantity being measured is seldom known, because of the uncertainty associated with the measuring process. In order to estimate the uncertainty of the measuring process, one needs to consider the systematic and constant errors along with other factors that contribute to the uncertainty due to the scattering of results about the mean.

Consequently, when precision is an important criterion, mating components are manufactured in a single plant, where measurements are obtained with the same standards and the same internal measuring precision, to accomplish interchangeability of manufacture. If mating components are manufactured at different plants and assembled elsewhere, the agreement of each plant's measurements with the true standard value becomes significant.

To maintain the quality of manufactured components, the accuracy of measurement is an important characteristic. Therefore, it becomes essential to know the different factors that affect accuracy. The human sense factor, whether of feel or of sight, affects the accuracy of measurement. In instruments having a scale and a pointer, the accuracy of the measurement depends upon the threshold effect, that is, whether the pointer is just moving or just not moving. Since the accuracy of measurement is always associated with some error, it is essential to design the measuring equipment and the measurement methods so that the error of measurement is minimized.

Two terms are associated with accuracy, especially when one strives for higher accuracy in measuring equipment: sensitivity and consistency. The ratio of the change in instrument indication to the change in the quantity being measured is termed sensitivity. In other words, it is the ability of the measuring equipment to detect small variations in the quantity being measured. When efforts are made to incorporate higher accuracy in measuring equipment, its sensitivity increases. The permitted degree of sensitivity determines the accuracy of the instrument: an instrument cannot be more accurate than its permitted degree of sensitivity. It is worth noting that using a more sensitive instrument than the measurement requires is a disadvantage.

When successive readings of the measured quantity obtained from the measuring instrument are the same every time, the equipment is said to be consistent. A highly accurate instrument possesses both sensitivity and consistency. A highly sensitive instrument need not be consistent, and the degree of consistency determines the accuracy of the instrument. An instrument that is both consistent and sensitive need not be accurate, because its scale may have been calibrated against a wrong standard.

Errors of measurement will be constant in such instruments and can be taken care of by calibration. It is also important to note that as the magnification increases, the range of measurement decreases while the sensitivity increases; temperature variations affect the instrument more, and more skill is required to handle it. The range is defined by the lowest and highest values that an instrument is able to measure: if an instrument has a scale reading of 0.01÷100 mm, its range is 0.01÷100 mm, and its span is the difference between the maximum and the minimum value, that is, 99.99 mm.

Sensitivity

In metrology, the sensitivity of a measuring instrument is the metrological characteristic that describes the instrument's ability to detect small variations in the input quantity; in other words, the increment of the output signal (or response) relative to the increment of the measured input signal.

It can also be defined as the ratio of the incremental output to the incremental input. In defining sensitivity, we assume that the input-output characteristic of the instrument is approximately linear in that range. It also represents a static characteristic of an instrument.

The sensitivity of an instrument may also vary with temperature or other external factors; this is known as sensitivity drift. To avoid sensitivity drift, sophisticated instruments are either kept at a controlled temperature or provided with built-in temperature-compensation schemes.

Linearity

In metrology, linearity is actually a measure of nonlinearity of the measurement instrument.

When we talk about sensitivity, we assume that the input/output characteristic of the instrument is approximately linear. In practice, however, it is normally nonlinear, as shown in the figure below.

The linearity is defined as the maximum deviation from the linear characteristics as a percentage of the full-scale output. Thus:

$\textrm{Linearity}=\dfrac{\Delta O}{O_{max}-O_{min}}$

$\Delta O=max(\Delta O_1,\Delta O_2)$
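A sketch of this definition over a set of calibration points, assuming an end-point reference line through the first and last points (the source does not specify which reference line is used; a least-squares fit is another common choice):

```python
def linearity(inputs, outputs):
    """Maximum deviation from a straight reference line, expressed as a
    fraction of the full-scale output span (Delta O / (Omax - Omin)).
    An end-point line through the first and last calibration points is
    assumed as the reference."""
    slope = (outputs[-1] - outputs[0]) / (inputs[-1] - inputs[0])
    line = [outputs[0] + slope * (x - inputs[0]) for x in inputs]
    delta_o = max(abs(o - l) for o, l in zip(outputs, line))
    return delta_o / (max(outputs) - min(outputs))

# Calibration points with a slight bow in the middle of the characteristic
print(round(linearity([0.0, 1.0, 2.0], [0.0, 1.1, 2.0]), 6))  # 0.05
```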

Reproducibility

In metrology, the term “reproducibility,” when reported in instrument specifications, refers to the closeness of agreement in results obtained from duplicate tests carried out under similar conditions of measurement. It is specified in terms of scale readings over a given period of time. It also represents a static characteristic of an instrument.

As with repeatability, the uncertainty is based on statistical measures. Manufacturer claims of instrument reproducibility must be based on multiple tests (replication) performed in different labs on a single unit or model of the instrument.

Repeatability

In metrology, repeatability is a static characteristic of an instrument, defined as the ability of the instrument to reproduce a group of measurements of the same measured quantity, made by the same observer, using the same instrument, under the same conditions.

Specific claims of repeatability are based on multiple calibration tests (replication) performed within a given lab on the particular unit.

The instrument repeatability reflects only the variations found under controlled calibration conditions.

Hysteresis

Hysteresis is a delay of the effect when the forces acting upon a body are changed (as if from viscosity or internal friction), or lagging in the values of resulting magnetization in a magnetic material (as iron) due to a changing magnetizing force.

Resolution

In metrology, the resolution of a measuring instrument is the smallest change in the value of a physical property that the instrument can sense. It also represents a static characteristic of an instrument.

Resolution of an instrument can also be defined as the minimum incremental value of the input signal that is required to cause a detectable change in the output. Resolution is also defined in terms of percentage as:

$\textrm{Resolution}=\dfrac{\Delta I}{I_{max}-I_{min}}\times 100$

The quotient of the measuring range and the resolution is often expressed as the dynamic range, defined as:

$\textrm{Dynamic range}=\dfrac{\textrm{measurement range}}{\textrm{resolution}}$

It is expressed in dB. The dynamic range of an n-bit ADC comes out to be approximately $$6n$$ dB.
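A short sketch of the dB conversion, assuming the usual 20·log10 convention for amplitude-like quantities:

```python
import math

def dynamic_range_db(measuring_range, resolution):
    """Dynamic range in dB, using the 20*log10 convention for
    amplitude-like quantities."""
    return 20.0 * math.log10(measuring_range / resolution)

# An n-bit ADC resolves its range into 2**n steps, so for n = 12:
print(round(dynamic_range_db(2**12, 1), 1))  # 72.2, close to 6n = 72 dB
```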

Threshold

In metrology, dead space (or dead band) is a static characteristic of an instrument, defined as the range of input values over which there is no change in the output value.

If the instrument input is increased very gradually from zero there will be some minimum value below which no output change can be detected. This minimum value defines the threshold of the instrument.

The numerical value of the input to cause a change in the output is called the threshold value of the instrument.

Drift

In metrology drift can be defined as the variation caused in the output of an instrument, which is not caused by any change in the input.

Drift in a measuring instrument is mainly caused by internal temperature variations and lack of component stability. It also represents a static characteristic of an instrument.

A change in the zero output of a measuring instrument caused by a change in the ambient temperature is known as thermal zero shift.

Thermal sensitivity is defined as the change in the sensitivity of a measuring instrument because of temperature variations.

These errors can be minimized by maintaining a constant ambient temperature during the course of a measurement and/or by frequently calibrating the measuring instrument as the ambient temperature changes.

Drift may be classified into three categories:

1. Zero drift: if the whole calibration gradually shifts due to slippage, permanent set, or undue warming up of electronic tube circuits, zero drift sets in.
2. Span drift or sensitivity drift: if there is a proportional change in the indication all along the upward scale, the drift is called span drift or sensitivity drift.
3. Zonal drift: in case the drift occurs over only a portion of the span of an instrument, it is called zonal drift.

Stability

In metrology, stability represents a static characteristic of an instrument: the ability of an instrument to retain its performance throughout its specified operating life.

Zero stability is defined as the ability of an instrument to return to the zero reading after the input signal or measurand comes back to the zero value and other variations due to temperature, pressure, vibrations, magnetic effect, etc., have been eliminated.

Tolerance

In metrology, tolerance means the limit, or acceptable limits, of variation of a physical dimension, of a physical property of a manufactured object or system, or of other measured values such as temperature, humidity, or time. In other words, it is the maximum allowable error in the measurement, specified in terms of some value.

Tolerance allows the operator to establish a measurement with a confidence interval, even in the presence of imperfections and variables due to influence quantities, without the measurement being compromised.

Range of interval (or span)

In Metrology the range of the interval [a, b] is the difference (b – a) and is denoted by r[a, b]. It also represents a static characteristic of an instrument.

It defines the maximum and minimum values of the inputs or the outputs for which the instrument is recommended for use. For example, for a temperature-measuring instrument the input range may be 100÷500 °C and the output range 4÷20 mA.
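The span r[a, b] and the input/output ranges of the example can be sketched as a linear mapping (a linear input-output characteristic is assumed here, as is usual for 4÷20 mA transmitters):

```python
def span(a, b):
    """r[a, b] = b - a."""
    return b - a

def map_to_output(x, in_range=(100.0, 500.0), out_range=(4.0, 20.0)):
    """Linearly map an input reading onto the output span, using the
    100-500 degC / 4-20 mA example ranges from the text."""
    (a, b), (c, d) = in_range, out_range
    return c + (x - a) * span(c, d) / span(a, b)

print(span(4.0, 20.0))       # 16.0 (mA output span)
print(map_to_output(300.0))  # 12.0 (mA at mid-range)
```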

Dynamic metrological characteristics (functions of time)

When the transducer must follow rapid variations of the quantity to be measured, always operating in specified environmental conditions, it is necessary to integrate the static metrological characteristics with the dynamic ones.

• Speed of response and response time
• Measuring lag
• Fidelity
• Dynamic error

Speed of response and response time

Speed of response is defined as the rapidity with which an instrument or measurement system responds to changes in measured quantity.

Response time is the time required by an instrument or system to settle to its final steady position after the application of the input. For a step input, the response time may be defined as the time taken by the instrument to settle within a specified percentage of the quantity being measured; this percentage may be 90 to 99 percent, depending upon the instrument.
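For the common special case of a first-order instrument (an assumption; many thermal sensors approximate this behaviour), the settling time to a specified percentage follows directly from the step response:

```python
import math

def settling_time(tau, settle_pct):
    """Time for a first-order instrument with time constant tau to reach
    settle_pct of a step input: solves 1 - exp(-t/tau) = settle_pct/100.
    First-order behaviour is an assumption, not a general rule."""
    return -tau * math.log(1.0 - settle_pct / 100.0)

# A thermometer with an assumed 2 s time constant, settling to 99%:
print(round(settling_time(2.0, 99.0), 2))  # 9.21 seconds
```

Note how tightening the settling criterion from 90% to 99% roughly doubles the required time (2.3τ versus 4.6τ).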

Measuring lag

The delay in the response of an instrument to a change in the measured quantity is known as measuring lag: the retardation of the response of a measurement system to changes in the measured quantity.

This lag is usually quite small, but it becomes highly important when high-speed measurements are required. In high-speed measurement systems, as in dynamic measurements, it becomes essential that the time lag be reduced to a minimum.

Measuring lag is of two types:

1. Retardation type: in this type of measuring lag the response begins immediately after a change in the measured quantity has occurred.
2. Time-delay type: in this type of measuring lag the response of the measurement system begins after a dead time following the application of the input.

Fidelity

Fidelity is defined as the degree to which a measurement system indicates changes in the measurand without dynamic error.

Fidelity error

In metrology, a measuring instrument is said to be all the more "faithful" the less its indications disagree with one another over the course of several measurements of a constant physical quantity. The fidelity error is evaluated by performing a certain number of measurements of the same quantity, assumed constant: the error is then represented by the semi-difference between the maximum and minimum of the corresponding measured values.

The fidelity error is mainly due to external influences: temperature, magnetic fields, pressure, angular or linear acceleration, etc. These quantities act simultaneously and with different intensity at each moment, so that the instrument provides different indications of the same quantity over time; an instrument is therefore all the more faithful the more it has been constructed to be insensitive to the influence quantities.

Dynamic error

Dynamic error is the difference between the true value of the quantity changing with time and the value indicated by the measurement system if no static error is assumed.

This error may have an amplitude and usually a frequency related to the environmental influences and the parameters of the system itself.

In metrology, dynamic errors are caused by dynamic influences acting on the system such as vibration, roll, pitch or linear acceleration; they are:

• Insertion error
• Rapidity error
• Error band