Measurement

In metrology, the term measurement is closely associated with activities in the scientific, industrial, commercial, and everyday spheres. It is defined as the assignment of a number to a characteristic of an object or event, which can then be compared with other objects or events.

Our knowledge of the reality that surrounds us is based on the measurement of physical quantities; indeed, we can say that to know means to measure.

Measuring a physical quantity

Carrying out a measurement requires, in principle, a comparison between the unknown quantity to be measured and a known quantity taken as the reference sample.

The concept of measurement derives from the possibility of forming the ratio between two homogeneous physical quantities, one of which is taken as the sample or unit of measurement; comparing two quantities \(A\) and \(B\), there always exists a real number (rational or irrational) \(m\) such that:

\[m=\dfrac{A}{B}\]

The numerical value of \(m\) is called the measure of \(A\) with respect to \(B\).
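
For example (with values chosen purely for illustration), measuring a length \(A = 2.4\ \text{m}\) against the sample \(B = 1\ \text{m}\) gives

\[m=\dfrac{A}{B}=\dfrac{2.4\ \text{m}}{1\ \text{m}}=2.4\]

so the measure of \(A\) with respect to \(B\) is 2.4.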

Definition of physical quantity

A physical quantity is defined as a physical property of a body or entity by which phenomena can be described and measured (quantified by measurement). A physical quantity can be expressed as the combination of a magnitude, expressed by a number (usually a real number), and a unit of measurement. Physical quantities can be of two types: scalar or vector.

A scalar quantity is a quantity that, from a mathematical point of view, is described solely by a “scalar,” that is, by a real number associated with a unit of measurement (examples are mass, energy, temperature, etc.). The term “scalar” derives from the possibility of reading the value on the graduated scale of a measuring instrument, since no other element is needed to identify it.

It is more complex, on the other hand, to define a physical quantity (such as velocity, acceleration, force, etc.) whose value must be associated with additional information such as a direction, a sense (orientation), or both; in this case we are dealing with a vector quantity, described by a vector. Unlike vector quantities, scalar quantities are therefore not sensitive to the dimension of the space, nor to the particular reference or coordinate system used.

Furthermore, each physical quantity corresponds to a unit of measurement that can be “fundamental” (base), if the physical quantity is one of the fundamental quantities of the International System, or “derived,” if it is formed from the fundamental ones. Physical quantities can therefore be classified into two types: base and derived.

Base physical quantities (SI base units)

By convention, the base physical quantities used in the SI are seven, organized in a system of dimensions and assumed to be mutually independent. Each of the seven base quantities used in the SI is regarded as having its own dimension, which is symbolically represented by a single sans-serif roman capital letter. The symbols used for the base quantities, and the symbols used to denote their dimensions, are given in the table below.

The dimension of a physical quantity does not include magnitude or units.

Base quantity | Symbol for quantity | Symbol for dimension | SI unit | SI unit symbol
length | l | L | metre | m
mass | m | M | kilogram | kg
time | t | T | second | s
electric current | I | I | ampere | A
thermodynamic temperature | T | Θ | kelvin | K
amount of substance | n | N | mole | mol
luminous intensity | Iv | J | candela | cd

The value of a quantity is generally expressed as the product of a number and a unit. The unit is a particular example of the quantity concerned which is used as a reference. Units should be chosen so that they are readily available to all, are constant throughout time and space, and are easy to realize with high accuracy. The number is the ratio of the value of the quantity to the unit. For a particular quantity, many different units may be used.

All other quantities are called derived quantities, which may be written in terms of the base quantities by the equations of physics.

Derived physical quantities (SI derived units)

Derived units are products of powers of base units. They are either dimensionless or can be expressed as a product of one or more of the base units, each raised to an appropriate power. Coherent derived units are products of powers of base units that include no numerical factor other than 1. The base and coherent derived units of the SI form a coherent set, designated the set of coherent SI units.

The International System of Units assigns special names to 22 derived units, which include two dimensionless derived units, the radian (rad) and the steradian (sr). They are listed in the table below.

Name | Symbol | Quantity | Equivalents | SI base unit equivalents
hertz | Hz | frequency | 1/s | s⁻¹
radian | rad | angle | m/m | 1
steradian | sr | solid angle | m²/m² | 1
newton | N | force, weight | kg·m/s² | kg·m·s⁻²
pascal | Pa | pressure, stress | N/m² | kg·m⁻¹·s⁻²
joule | J | energy, work, heat | N·m; C·V; W·s | kg·m²·s⁻²
watt | W | power, radiant flux | J/s; V·A | kg·m²·s⁻³
coulomb | C | electric charge or quantity of electricity | s·A; F·V | s·A
volt | V | voltage, electrical potential difference, electromotive force | W/A; J/C | kg·m²·s⁻³·A⁻¹
farad | F | electrical capacitance | C/V; s/Ω | kg⁻¹·m⁻²·s⁴·A²
ohm | Ω | electrical resistance, impedance, reactance | 1/S; V/A | kg·m²·s⁻³·A⁻²
siemens | S | electrical conductance | 1/Ω; A/V | kg⁻¹·m⁻²·s³·A²
weber | Wb | magnetic flux | J/A; T·m² | kg·m²·s⁻²·A⁻¹
tesla | T | magnetic flux density | V·s/m²; Wb/m²; N/(A·m) | kg·s⁻²·A⁻¹
henry | H | electrical inductance | V·s/A; Ω·s; Wb/A | kg·m²·s⁻²·A⁻²
degree Celsius | °C | temperature relative to 273.15 K | K | K
lumen | lm | luminous flux | cd·sr | cd
lux | lx | illuminance | lm/m² | m⁻²·cd
becquerel | Bq | radioactivity (decays per unit time) | 1/s | s⁻¹
gray | Gy | absorbed dose (of ionizing radiation) | J/kg | m²·s⁻²
sievert | Sv | equivalent dose (of ionizing radiation) | J/kg | m²·s⁻²
katal | kat | catalytic activity | mol/s | s⁻¹·mol
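
As a rough illustration of the statement that coherent derived units are products of powers of base units, here is a minimal sketch in Python (not a units library; the dictionary representation and the helper functions multiply and power are invented for illustration, while the exponents match the newton and pascal rows of the table above):

```python
# Minimal sketch: a coherent derived unit represented by the exponents of the
# SI base units it is built from; products of units simply add exponents.

def multiply(u, v):
    """Product of two units: exponents add; zero exponents are dropped."""
    out = dict(u)
    for base, exp in v.items():
        out[base] = out.get(base, 0) + exp
        if out[base] == 0:
            del out[base]
    return out

def power(u, n):
    """Raise a unit to an integer power: exponents scale by n."""
    return {base: exp * n for base, exp in u.items()}

# Base units as exponent dictionaries.
m, kg, s = {"m": 1}, {"kg": 1}, {"s": 1}

# newton = kg * m * s^-2 (force); pascal = newton * m^-2 (pressure).
newton = multiply(multiply(kg, m), power(s, -2))
pascal = multiply(newton, power(m, -2))

assert newton == {"kg": 1, "m": 1, "s": -2}
assert pascal == {"kg": 1, "m": -1, "s": -2}
print(newton, pascal)
```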

Measurement applications

Types of measurement applications can be classified into only three major categories:

  1. Monitoring of processes and operations: refers to situations where the measuring device is being used to keep track of some physical quantity (without any control functions).
  2. Control of processes and operations: is one of the most important classes of measurement application. This usually refers to an automatic feedback control system.
  3. Experimental engineering analysis: is that part of engineering design, development, and research that relies on laboratory testing of one kind or another to answer questions.

Every application of measurement, including those not yet “invented,” can be put into one of the three groups just listed or some combination of them.

The primary objective of measurement in industrial inspection is to determine the quality of the manufactured component. Different quality requirements, such as permissible tolerance limits, form, surface finish, size, and flatness, have to be considered to check the conformity of the component to the quality specifications. In order to realize this, quantitative information about a physical object or process has to be acquired by comparison with a reference.

The three basic elements of measurement, each of which is significant, are the following:

  1. Measurand, a physical quantity to be measured (such as length, weight, and angle);
  2. Comparator, to compare the measurand (physical quantity) with a known standard (reference) for evaluation;
  3. Reference, the physical quantity or property to which quantitative comparisons are to be made, which is internationally accepted.

The specification of a measurand requires:

  • the knowledge of the kind of physical quantity;
  • the description of the state of the phenomenon, of the body or of the substance of which the physical quantity constitutes a property (including all the relevant components);
  • the chemical entities involved.

It is essential to underline that the term “measurand” does not refer to the object or phenomenon on which a measurement is being performed, but to a specific physical quantity that characterizes it. For example, when we measure the temperature of a liquid, the measurand is not the liquid, but its temperature.

All three elements come into play in a direct measurement using a calibrated fixed reference: to determine the length of a component, for example, the measurement is carried out by comparing it with a steel scale (a known standard).

Methods of measurements

When precision measurements are made to determine the value of a physical variable, different methods of measurement are employed. A measurement method is defined as the logical sequence of operations employed in measuring the physical quantity under observation.

The better the measurement method and the better the instruments and their technology, the closer the measure comes to the actual state of the measured physical quantity. In principle, therefore, a measure represents physical reality with a certain approximation, or with a certain error; this error can be made very small but never null.

The choice of the method of measurement depends on the required accuracy and the amount of permissible error. Irrespective of the method used, the primary objective is to minimize the uncertainty associated with the measurement. The common methods employed for making measurements are as follows:

Direct method

In this method, the quantity to be measured is directly compared with the primary or secondary standard. Scales, vernier callipers, micrometers, bevel protractors, etc., are used in the direct method. This method is widely employed in the production field. In the direct method, a very slight difference exists between the actual and the measured values of the quantity. This difference occurs because of the limitation of the human being performing the measurement.

The main advantage of direct measurements is that gross errors are harder to make, since the instrument needed for the comparison is generally simple and therefore not subject to hidden faults.

Indirect method

In this method, the value of a quantity is obtained by measuring other quantities that are functionally related to the required value. The related quantities are measured directly, and the required value is then determined through a mathematical relationship.

Most measurements are obtained indirectly, almost always for reasons of cost. For example, the density of a given substance could be measured directly with a device called a densimeter, but it is usually more convenient to measure the mass and volume of the substance directly and then compute their ratio.

Indirect measurements, on the other hand, are more subject to approximations, since error propagation occurs through the formula that represents the physical law. It is therefore necessary to pay particular attention to the approximations made in the underlying direct measurements.
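
A minimal sketch, in Python, of such an indirect measurement and of a simple first-order propagation of relative uncertainties (all numerical values and uncertainty figures are hypothetical and chosen only for illustration):

```python
# Minimal sketch of an indirect measurement: density from directly measured
# mass and volume, with first-order propagation of relative uncertainties.

mass_kg = 0.250          # directly measured mass (hypothetical)
u_mass_kg = 0.001        # standard uncertainty of the mass measurement
volume_m3 = 1.00e-4      # directly measured volume (hypothetical)
u_volume_m3 = 0.02e-4    # standard uncertainty of the volume measurement

# Indirect measurement: the physical law relating the direct measurements.
density = mass_kg / volume_m3

# For a quotient of uncorrelated inputs, relative uncertainties add in quadrature.
rel_u = ((u_mass_kg / mass_kg) ** 2 + (u_volume_m3 / volume_m3) ** 2) ** 0.5
u_density = density * rel_u

print(f"density = {density:.0f} kg/m^3 +/- {u_density:.0f} kg/m^3")
```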

Fundamental or absolute method

In this case, the measurement is based on the measurements of base quantities used to define the quantity. The quantity under consideration is directly measured and is then linked with the definition of that quantity.

Comparative method

In this method, as the name suggests, the quantity to be measured is compared with the known value of the same quantity or any other quantity practically related to it. The quantity is compared with the master gauge and only the deviations from the master gauge are recorded after comparison. The most common examples are comparators, dial indicators, etc.

Transposition method

This method involves making the measurement by direct comparison, wherein the quantity to be measured, V, is initially balanced by a known value X of the same quantity; next, X is replaced by the quantity to be measured, which is balanced again by another known value Y. The quantity to be measured is then given by:

\[V=\sqrt{XY}\]

An example of this method is the determination of mass by balancing methods and known weights.
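
For instance, with hypothetical balance readings \(X = 100.4\ \text{g}\) and \(Y = 99.6\ \text{g}\), the transposition method gives

\[V=\sqrt{XY}=\sqrt{100.4\times 99.6}\ \text{g}\approx 100.0\ \text{g}\]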

Coincidence method

This is a differential method of measurement wherein a very minute difference between the quantity to be measured and the reference is determined by careful observation of the coincidence of certain lines and signals. Measurements made with vernier callipers and micrometers are examples of this method.

Deflection method

This method involves the indication of the value of the quantity to be measured directly by deflection of a pointer on a calibrated scale. Pressure measurement is an example of this method.

Complementary method

The value of the quantity to be measured is combined with a known value of the same quantity. The combination is so adjusted that the sum of these two values is equal to the predetermined comparison value. An example of this method is the determination of the volume of a solid by liquid displacement.

Null measurement method

In this method, the difference between the value of the quantity to be measured and the known value of the same quantity with which comparison is to be made is brought to zero.

Substitution method

It is a direct comparison method. This method involves the replacement of the value of the quantity to be measured with a known value of the same quantity, so selected that the effects produced in the indicating device by these two values are the same. The Borda method of determining mass is an example of this method.

Contact method

In this method, the surface to be measured is touched by the sensor or measuring tip of the instrument. Care needs to be taken to apply a constant contact pressure in order to avoid errors due to excessive contact pressure. Examples of this method include measurements using a micrometer, vernier calliper, and dial indicator.

Contactless method

As the name indicates, there is no direct contact with the surface to be measured. Examples of this method include the use of optical instruments, tool maker’s microscope, and profile projector.

Composite method

The actual contour of a component to be checked is compared with its maximum and minimum tolerance limits. Cumulative errors of the interconnected elements of the component, which are controlled through a combined tolerance, can be checked by this method. This method is very reliable to ensure interchangeability and is usually effected through the use of composite GO gauges. The use of a GO screw plug gauge to check the thread of a nut is an example of this method.

Measurement chain

The term measurement chain refers to the set of stages of a measuring instrument that process the information obtained from the physical quantity under study and then present a result: the measurement.

A measurement chain has three main stages:

  1. the first stage consists of a sensor and/or a transducer in contact with the physical quantity to be detected (also called the primary sensing element). In measurement chains that include more than one transducer, crosstalk effects can occur; the cause of this effect lies in the capacitive and inductive couplings that can arise in the transducers themselves, in the connection cables, and in the signal-processing block;
  2. the second stage consists of an intermediate signal-processing system, or signal conditioner, which converts the information coming from the previous stage into a form suited to the acquisition system. Typical operations performed by the conditioning circuit are noise filtering, linearization of the transfer function, and conversion and amplification of the signal generated by the transducer. The output signal of the measurement chain can be analog or digital. A power supply provides the electrical power necessary for the operation of the various electronic devices used in the measurement chain; the sensor needs no supply when it draws the power indispensable for carrying the information directly from the outside world, as happens with thermocouples and piezoelectric sensors;
  3. the third stage is represented by the terminal instrument, which indicates the result of the operations carried out by the previous stages, that is, it provides the operator with the value of the measurement (a minimal code sketch of such a chain follows this list).
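
A minimal sketch of the three stages as a software pipeline, assuming purely illustrative class names (Sensor, SignalConditioner, Display) and numbers; a real measurement chain is of course made of hardware stages, not Python objects:

```python
# Minimal sketch: the three stages of a measurement chain as a pipeline.
# Names and numbers are illustrative only.

class Sensor:
    """First stage: converts the physical quantity into a raw signal (volts)."""
    def __init__(self, sensitivity_v_per_unit):
        self.sensitivity = sensitivity_v_per_unit

    def acquire(self, physical_value):
        return physical_value * self.sensitivity  # raw output in volts

class SignalConditioner:
    """Second stage: filtering/linearization/amplification of the raw signal."""
    def __init__(self, gain, offset_v=0.0):
        self.gain = gain
        self.offset = offset_v

    def condition(self, raw_v):
        return (raw_v - self.offset) * self.gain

class Display:
    """Third stage: presents the result of the measurement to the operator."""
    def __init__(self, scale_units_per_v):
        self.scale = scale_units_per_v

    def indicate(self, conditioned_v):
        return conditioned_v * self.scale

# Chain the stages: physical quantity -> sensor -> conditioner -> display.
sensor = Sensor(sensitivity_v_per_unit=0.01)   # 10 mV per unit of measurand
conditioner = SignalConditioner(gain=100.0)    # amplify to a usable level
display = Display(scale_units_per_v=1.0)       # calibrated read-out

measurand = 23.7                               # hypothetical true value
reading = display.indicate(conditioner.condition(sensor.acquire(measurand)))
print(reading)                                 # 23.7 (ideal, error-free chain)
```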

Measurement errors

Measurement error is the difference between a measured value of a quantity and its true value. The term measurement uncertainty is often used as a synonym for measurement error.

In metrology, the analysis of errors includes the study of uncertainties in measurements, since no measurement, however carefully it is carried out, is entirely free from uncertainty.

The term error does not necessarily imply an incorrect measurement procedure on the part of the operator; it also covers the uncertainty introduced by the instrumentation, namely the fact that the value presented by the measuring instrument reflects the measured quantity only with a certain approximation.

Measurement errors are caused by:

  • human factors (inaccuracies in the design of the measurement chain, distractions or poor operator accuracy);
  • technological factors (static and dynamic constructive and metrological qualities of the instruments);
  • environmental factors (external influence quantities present in the environment in which the measurement is made).

In statistics, an error is not a “mistake”. Variability is an inherent part of the results of measurements and of the measurement process.

The measurement operation is always invasive: it introduces a perturbation in the system that we want to investigate; therefore, the variables involved are always altered when the measurement is performed.

Classification of measurement errors

The measurement error can depend on both the instrument and the observer. There are two main types of errors:

  1. random errors (or accidental errors, which may vary from one observation to another);
  2. systematic errors (which always occur, with the same value, when we use the instrument in the same way and in the same conditions).

Random error

Random error is always present in a measurement. It is caused by inherently unpredictable fluctuations in the readings of a measurement instrument, in the operating and environmental conditions, or in the experimenter’s interpretation of the instrumental reading.

Random errors can be analyzed statistically, since it is empirically observed that they are generally distributed according to simple laws. In particular, it is often assumed that the causes of these errors act in a completely random manner, producing deviations from the average value that are both negative and positive. This allows us to expect that the effects vanish on average, that is, that the average value of the accidental errors is zero.

The smaller the random errors, the more precise the measurement is said to be.

Random (or accidental) errors have less impact than systematic errors because, by repeating the measurement several times and calculating the average of the values found (a reliable measurement), their contribution is generally reduced for probabilistic reasons.

This observation has a fundamental consequence: if we can correct all the gross errors and the systematic ones, so that only accidental errors remain, we just need to take repeated measurements and then average the results; the more measurements we consider, the less the final result (the average of the individual results) will be affected by accidental errors.
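
A minimal simulation sketch of this effect, with a hypothetical true value and noise level, showing how the average of repeated readings tends to settle near the true value as the number of readings grows:

```python
# Minimal sketch: repeated measurements affected only by random error.
# Averaging n readings reduces the random contribution roughly as 1/sqrt(n).
import random
import statistics

random.seed(0)
true_value = 10.0
sigma = 0.05          # spread of the random error (hypothetical)

for n in (1, 10, 100, 1000):
    readings = [true_value + random.gauss(0.0, sigma) for _ in range(n)]
    mean = statistics.mean(readings)
    print(f"n={n:4d}  mean={mean:.4f}  deviation from true value={mean - true_value:+.4f}")
```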

Systematic error

Systematic errors are predictable and typically constant or proportional to the true value. If the cause of the systematic error can be identified, then it usually can be eliminated. An error is called systematic if the functional relationship between the magnitude of the error and the intensity of the physical quantity that causes it is known.

Systematic errors always occur with the same sign (+ or −) and the same amplitude when the measurement of a physical quantity is repeated several times with the same instrumentation and under the same operating and environmental conditions.

Systematic errors are caused by imperfect calibration of measurement instruments or imperfect methods of observation (an error, voluntary or involuntary, committed by the observer), or interference of the environment with the measurement process, and always affect the results of an experiment in a predictable direction.

Incorrect zeroing of an instrument leading to a zero error is an example of systematic error in instrumentation.
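
A minimal sketch of how a known zero offset (a systematic error) can be removed by subtraction, with hypothetical readings:

```python
# Minimal sketch: a systematic zero error adds the same offset to every reading.
# Once identified (e.g. by reading the instrument with no input applied),
# it can be removed by subtraction; values are hypothetical.

zero_offset = 0.12                      # instrument reads 0.12 with no input
raw_readings = [5.32, 5.30, 5.35]       # indicated values

corrected = [r - zero_offset for r in raw_readings]
print(corrected)                        # approximately [5.20, 5.18, 5.23]
```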

Other types of errors are gross errors, static errors, and dynamic errors.

Gross error

Gross errors are those attributable to inexperience or distraction of the operator performing the measurement; they may, for example, result from a wrong reading, improper use of measuring instruments, incorrect transcription of experimental data, or erroneous processing of such data. These errors do not occur when measurements are taken with care and attention and, in any case, can be eliminated by repeating the measurement.

Static error

Static errors are those errors evaluated in static conditions, that is, by performing the measurement of a constant physical quantity; they are:

  • Reading error
  • Mobility error
  • Hysteresis error
  • Fidelity error
  • Zero error
  • Calibration error

Reading error

In metrology, the reading error is the error that occurs when evaluating the relative position of the index of the measuring instrument with respect to the scale; this error is generally due to four causes:

  1. resolving power of the human eye: it is defined as the minimum angular separation between two points that the eye is able to discern as two separate and distinct objects (at normal reading distance this corresponds to about 0.1 mm, that is 100 μm, although with many physiological variables);
  2. parallax error: due to the fact that the index and the scale of the measuring instrument are located on different planes (the operator’s gaze should always be perpendicular to the scale for a correct measurement). Parallax error is primarily caused by viewing the object at an oblique angle with respect to the scale, which makes the object appear to be at a different position on the scale. For example, if measuring the distance between two ticks on a line with a ruler marked on its top surface, the thickness of the ruler will separate its markings from the ticks. If viewed from a position not exactly perpendicular to the ruler, the apparent position will shift, and the reading will be less accurate than the ruler is capable of. In the context of reading a piece of volumetric glassware, such as a measuring cylinder, burette, or volumetric flask, the meniscus should be at eye level otherwise there will be an error in the reading. If the meniscus is above eye level an increased volume measurement will be made, conversely if the eye is above the meniscus then a lower volume reading will be made. A similar error occurs when reading the position of a pointer against a scale in an instrument such as an analog multimeter. To help the user avoid this problem, the scale is sometimes printed above a narrow strip of mirror, and the user’s eye is positioned so that the pointer obscures its own reflection, guaranteeing that the user’s line of sight is perpendicular to the mirror and therefore to the scale. The same effect alters the speed read on a car’s speedometer by a driver in front of it and a passenger off to the side, values read from a graticule not in actual contact with the display on an oscilloscope, etc.
  3. interpolation uncertainty: when the scale of the measuring instrument is linear, it is of the order of ±10% of the distance between two successive divisions; if the scale is not linear, this uncertainty can increase considerably. To reduce the interpolation error, reading systems with verniers (nonii), micrometric screws, and silicone scales can be used (a small worked example follows this list);
  4. background noise: this is the set of all the causes that impose on the index movements superimposed on the displacement produced by the measurand. As for estimating the error produced by background noise, when the average value of the observed or recorded signal is to be appreciated, it is conventionally taken as ±10% of the peak-to-peak (double) amplitude of the oscillation.
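
For instance, on a linear scale with divisions spaced 0.5 mm apart (a value chosen only for illustration), the interpolation uncertainty is of the order of

\[\pm 10\%\times 0.5\ \text{mm}=\pm 0.05\ \text{mm}\]
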
Mobility error

Mobility error is mainly due to the friction that develops between the mobile components of the instrument and the inevitable spaces between them.

Hysteresis error

The hysteresis error of a measuring instrument is defined as the maximum difference between the value detected by the transducer when a specific value of the input quantity is applied, by imposing increasing inputs, and the same value obtained by imposing decreasing inputs.

In other words, the hysteresis error is given by the maximum difference between the value measured in ascending direction and the respective value measured in a decreasing direction.
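
A minimal sketch of this definition, using hypothetical calibration data taken once with increasing and once with decreasing inputs:

```python
# Minimal sketch: hysteresis error as the maximum difference between readings
# taken with increasing inputs and readings taken with decreasing inputs at
# the same input values. Calibration data are hypothetical.

inputs     = [0, 25, 50, 75, 100]              # applied values of the measurand
ascending  = [0.0, 24.8, 49.7, 74.9, 100.0]    # readings, increasing inputs
descending = [0.0, 25.3, 50.4, 75.3, 100.0]    # readings, decreasing inputs

hysteresis_error = max(abs(a - d) for a, d in zip(ascending, descending))
print(f"hysteresis error = {hysteresis_error:.1f} over the span 0..{max(inputs)}")
```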

Hysteresis represents the history dependence of physical systems. If you push on something, it will yield: when you release, does it spring back completely? If it doesn’t, it is exhibiting hysteresis, in some broad sense.

The term is most commonly applied to magnetic materials: as the external field (for example, the signal from the microphone when recording to tape) is turned off, the little magnetic domains in the tape do not return to their original configuration (by design, otherwise your record of the music would disappear!).

Hysteresis happens in lots of other systems: if you place a large force on your fork while cutting a tough piece of meat, it doesn’t always return to its original shape: the shape of the fork depends on its history.

Fidelity error

In metrology, a measuring instrument is said to be all the more “faithful” the less discordant the indications it provides in the course of several measurements of a constant physical quantity. The fidelity error is evaluated by performing a certain number of measurements of the same quantity, assumed constant: the error is then represented by the semi-difference between the maximum and minimum values of the corresponding measures.
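
In symbols, denoting by \(x_{max}\) and \(x_{min}\) the largest and smallest of the repeated readings of the constant measurand, the fidelity error is

\[e_{fidelity}=\dfrac{x_{max}-x_{min}}{2}\]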

The fidelity error is mainly due to external influences: temperature, magnetic field, pressure, angular or linear acceleration, etc. These quantities act simultaneously and with different intensities at each moment, so that the instrument provides different indications of the same quantity over time; therefore, an instrument is all the more faithful, the more it has been constructed to be insensitive to the influence quantities.

Zero error

Zero error is the error that occurs when long-term measurements are made and the zero of the measuring instrument undergoes a drift phenomenon, called zero drift.

The zero drift is the deviation of the index from the zero position, that is from the origin of the graduation curve. The zero error is evaluated in units of the quantity to be measured.

Calibration error

The limiting factor of the calibration process is repeatability, because it is the only characteristic error that cannot be calibrated out of the measuring system; hence the overall measurement accuracy is curtailed. Thus, repeatability can also be regarded as the minimum uncertainty that exists between a measurand and a standard.

Conditions that exist during calibration of the instrument should be similar to the conditions under which actual measurements are made. The standard that is used for calibration purpose should normally be one order of magnitude more accurate than the instrument to be calibrated. When it is intended to achieve greater accuracy, it becomes imperative to know all the sources of errors so that they can be evaluated and controlled.

Dynamic error

Dynamic error is the difference between the true value of the quantity changing with time and the value indicated by the measurement system if no static error is assumed.

This error may have an amplitude and usually a frequency related to the environmental influences and the parameters of the system itself.

In metrology, dynamic errors are caused by dynamic influences acting on the system such as vibration, roll, pitch or linear acceleration; they are:

  • Insertion error
  • Rapidity error
  • Error band

Insertion error

The insertion error is caused by the presence of the measuring instrument itself within the environment in which the measurement is carried out; in other words, the measuring instrument changes the measurement conditions and consequently also changes the final value of the measurand.

A measuring instrument is therefore said to be better, the less it disturbs the phenomenon to be measured, that is, the smaller the disturbance caused by its presence. This interference can be evaluated if the characteristics of the instrument, and in particular of its sensor (or transducer), are known.

Rapidity error

The rapidity error relates to the metrological quality of a measuring instrument that expresses its ability to follow the (dynamic) variations in time of the measurand; it is essential because it allows the limits to be evaluated within which a measuring instrument is suitable for measuring quantities that vary over time (dynamic quantities).

Another practical definition of the rapidity error states that it is smaller, the faster the index of the measuring instrument changes its position on the graduated scale of the instrument. The rapidity, limited by the inertia of the moving parts of the instrument and by the damping to which they are subjected, is characterized differently depending on how the quantity varies with time.

  1. In the case in which the quantity to be measured is constant, the rapidity of the instrument is characterized by the response time. This is defined as the time required for the index to reach its final position once the instrument is put in contact with the measurand.
  2. In the case in which the quantity to be measured varies slowly over time, the rapidity is characterized by the delay with which the index of the instrument follows it. This delay is constant if the variation of the quantity under examination is constant, and increases in proportion to that variation. If, instead, the variation of the quantity is periodic, the index of the instrument provides a measurement whose maximum value is less than the maximum value of the quantity; the delay depends on the frequency of this variation.
  3. Finally, in the case of quantities that vary rapidly over time, the rapidity is defined by the behavior of the mobile parts of the instrument when the quantity varies sinusoidally. In general, the ratio between the indication provided by the instrument and the value of the input quantity decreases as the frequency increases.

Error band

The error band is defined as the range of maximum deviation of the transducer output from a reference curve; this deviation (generally expressed as a percentage of full scale) can be caused by non-linearity, non-repeatability, hysteresis, etc. It is determined over several consecutive calibration cycles so as to include repeatability.

Error band is a measure of worst-case error. Compared with linearity alone, it is the better specification for determining gauge suitability for an application.

Error band of measurement

It can also happen that the transducer must operate only in a range of variation of the input quantity that is contained within the measurement range; it follows that, by varying the value of the static error considered acceptable, different fields of use are possible.

The error band specification describes a bipolar band (e.g., ±0.2%) around the ideal line. The “ideal line” is the line plotted where all dimensional changes produce perfect sensor output voltages. All measurements must fall within the error band for the instrument to be within specification. The magnitude of the band is equal to the worst-case error throughout the gauge’s measurement range. Using the worst-case error ensures that every measurement made by the gauge performs within the error band specification.
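
A minimal sketch of how the error band could be evaluated as the worst-case deviation from the ideal line, expressed as a percentage of full scale (the calibration data, the 0..10 range, and the assumption that the ideal line is output = input are all hypothetical):

```python
# Minimal sketch: error band as the worst-case deviation of the sensor output
# from the ideal line, expressed as a percentage of full scale.

full_scale = 10.0                                  # measurement range, 0..10
applied    = [0.0, 2.5, 5.0, 7.5, 10.0]            # reference inputs
measured   = [0.02, 2.47, 5.05, 7.52, 9.98]        # outputs over several cycles

# Ideal line assumed here: output equals input (perfect sensor).
worst_case = max(abs(m - a) for a, m in zip(applied, measured))
error_band_pct_fs = 100.0 * worst_case / full_scale

print(f"error band = ±{error_band_pct_fs:.2f}% of full scale")
```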

Measurement uncertainty

Measurement uncertainty is the degree of uncertainty with which the value of a physical quantity or property is obtained through its direct or indirect measurement.

The result of a measurement is therefore not a single value, but a set of values derived from the measurement (direct or indirect) of the physical quantity or property itself.

Influence quantities in measurements

In metrology, in cases where the environmental conditions of actual use of the transducer deviate significantly from the environmental conditions of calibration, the effects due to the influence quantities must be taken into account. In such cases, specific tests must be conducted on a population of transducers or, at least, on a single transducer.

It is necessary to highlight that attention must be paid to environmental conditions not only during sensor operation but also during preceding phases such as storage and transport; if these environmental conditions are not checked and verified, they can significantly, and above all unpredictably, alter the metrological performance of the transducer.

Some of the main influence quantities that occur in mechanical and thermal measurements are summarized below.

Effects due to temperature

For each transducer, the working temperature range within which it can be used without damage is indicated.

Within this range of use, the trends of both the zero drift and the sensitivity drift are generally provided by the manufacturer. For example, for measurements carried out with resistance strain gauges, both the trend of the apparent deformation as a function of temperature (zero drift) and the sensitivity coefficient of the calibration factor as a function of temperature (sensitivity drift) are given.

A further way of expressing the effect due to temperature is to identify a range of variation of the error it causes, expressed for example as a percentage of full scale.

It is also necessary to know the maximum and minimum temperatures to which the transducer can be exposed without permanent damage, that is, without its metrological characteristics changing. Changes in ambient temperature affect not only the static metrological characteristics but also the dynamic ones. The values supplied by the manufacturer must refer to a specific temperature variation range.

The temperature can, however, also produce significant effects when it undergoes step variations.

Effects due to acceleration

Errors caused by acceleration can occur either directly on the sensitive element or on the connection or support elements, and can be of such magnitude as to induce deformations that render the measurements meaningless.

In general, transducers show a more pronounced sensitivity to acceleration along certain axes; it is therefore necessary to indicate the triad of selected reference axes and to express the error due to acceleration with respect to them.

The acceleration error is defined as the maximum difference between the output of the sensor in the absence and in the presence of a specified constant acceleration applied along a specific axis.

Finally, it should be noted that some sensors are sensitive to the acceleration of gravity, so that the orientation of the transducer with respect to the gravitational field constitutes an essential constraint.

Effects due to vibrations

The variation of the frequency of the vibrations applied along a specific reference axis can produce (for example, because of resonance phenomena) significant effects in the output signal provided by the transducer.

To express the effect due to vibrations, it is necessary to define the maximum variation in the output, for each value of the physical input quantity, when a vibration of specified amplitude, over a given frequency range, is applied along an axis of the transducer.

Effects due to environmental pressure

Sometimes the transducer must operate under conditions in which the pressure is significantly different from the pressure at which the calibration was carried out, which in general is the ambient pressure.

Pressures appreciably different from those at which the calibration tests were conducted may cause variations in the internal geometry of the transducer, thereby altering the metrological characteristics provided by the manufacturer.

A deviation from the calibration conditions is much more insidious than damage to the transducer, which, on the other hand, is easily detected by the experimenter.

The error due to pressure is defined as the maximum variation of the transducer output, for each value of the input quantity within the measurement range, when the pressure at which the transducer operates is varied within specified intervals.

Effects due to commissioning of the transducer

If the commissioning of a transducer is not carried out with care, damage can occur (deformation of the structure, for example) such as to alter the operating conditions of the transducer.

No data relating to this cause of error are available from the manufacturer; the user must therefore ensure the proper and correct installation of the device.