
Temperature is a physical property of a material that gives a measure of the average kinetic energy of the molecular motion in an object or a system. Temperature can also be defined as the condition of a body by virtue of which heat is transferred from one system to another. It is pertinent to note that temperature and heat are different quantities.

Temperature is a measure of the internal energy of a system, whereas heat is a measure of the transfer of energy from one system to another. Heat transfer takes place from a body at a higher temperature to one at a lower temperature. The two bodies are said to be in thermal equilibrium when both of them are at the same temperature and no heat transfer takes place between them. The rise in temperature of a body is due to greater absorption of heat, which increases the movement of the molecules within the body.

Temperature is an intensive physical quantity, which means that it must be defined conceptually in terms of the effects its variations produce on the behavior of materials. In practice, temperature is evaluated as a function of the average kinetic energy of the atoms constituting the matter, and it describes the thermodynamic state of a system in equilibrium.

Furthermore, once a unit of measurement has been defined, multiples and sub-multiples cannot be obtained simply by combining bodies: because temperature is an intensive quantity, putting two bodies at the unit temperature in contact yields a combined system that is still at the unit temperature. It is therefore necessary to define not only the unit quantity but also each multiple and sub-multiple of it, that is, to define a thermometric scale.

How to measure temperature

Temperature measurement is an operation that presents numerous difficulties, which affect the determination of the actual body temperature. Temperature cannot be measured against fundamental units (i.e., there are no material temperature standards to compare with); it can only be determined through the use of suitably calibrated measuring instruments.

Generally, these problems arise from the technological imperfections of the temperature sensor. For example, thermometers that measure temperature by contact or conduction indicate the temperature that they themselves assume, not the true temperature of the body or environment being measured.

The first thermometer was developed by Galileo Galilei in the 17th century; the instrument has undergone significant improvement with the advancement of science and technology, and present-day thermometers measure temperature with much greater accuracy and precision. In 1724, D.G. Fahrenheit, a German physicist, contributed significantly to the development of thermometry. He proposed his own scale, in which 32° and 212° were taken as the freezing point and boiling point of water, respectively.

The Swedish astronomer Anders Celsius, in 1742, developed a mercury-in-glass thermometer carrying a new scale. He identified two fixed points, namely the melting point of ice and the boiling point of water, assigned 0° and 100° to them, respectively, and made 100 divisions between these two points. In 1859, William John Macquorn Rankine, a Scottish physicist, proposed an absolute, or thermodynamic, scale, now known as the Rankine scale. After investigating how thermal energy changes with temperature, he concluded that the theoretical temperature of every substance is the same at zero thermal energy; according to him, this temperature is approximately equal to −460 °F.

William Thomson, first Baron Kelvin, popularly known as Lord Kelvin, a British physicist, introduced the Kelvin scale in the mid-1800s. On this scale, 0 K corresponds to absolute zero and approximately 273 K to the freezing point of water. Although human beings generally perceive temperature only as hot, warm (neutral), or cold, from an engineering perspective a precise and accurate measurement of temperature is essential.

The scales used to measure temperature can be divided into relative scales [Fahrenheit (°F) and Celsius (°C)] and absolute scales [Rankine (°R) and Kelvin (K)]. The various temperature scales are related as follows (a short conversion sketch is given after the list):

  • °F = 1.8 °C + 32
  • °C = (°F − 32)/1.8
  • °R = °F + 460
  • K = °C + 273
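
These relations are easy to check programmatically. The following Python sketch is only illustrative (the function names are invented here); it uses the exact absolute-scale offsets 459.67 and 273.15 where the list above quotes the rounded values 460 and 273:

```python
def celsius_to_fahrenheit(t_c):
    """Relative-scale conversion: °F = 1.8 °C + 32."""
    return 1.8 * t_c + 32.0


def fahrenheit_to_celsius(t_f):
    """Inverse relation: °C = (°F - 32) / 1.8."""
    return (t_f - 32.0) / 1.8


def fahrenheit_to_rankine(t_f):
    """Absolute Rankine scale; exact offset 459.67 (rounded to 460 above)."""
    return t_f + 459.67


def celsius_to_kelvin(t_c):
    """Absolute Kelvin scale; exact offset 273.15 (rounded to 273 above)."""
    return t_c + 273.15


# Boiling point of water at standard atmospheric pressure on each scale.
print(celsius_to_fahrenheit(100.0))   # 212.0 (°F)
print(celsius_to_kelvin(100.0))       # 373.15 (K)
print(fahrenheit_to_rankine(212.0))   # 671.67 (°R)
```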

Thermodynamic temperature (kelvin)

The definition of the unit of thermodynamic temperature was given in substance by the 10th CGPM (1954) which selected the triple point of water as the fundamental fixed point and assigned to it the temperature 273.16 K, so defining the unit. The 13th CGPM (1967/68) adopted the name kelvin, symbol K, instead of “degree Kelvin,” symbol °K, and defined the unit of thermodynamic temperature as follows (1967/68):

The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water.

It follows that the thermodynamic temperature of the triple point of water is exactly 273.16 kelvins, \(T_{\textrm{TPW}}\) = 273.16 K.

The symbol \(T_{\textrm{TPW}}\) is used to denote the thermodynamic temperature of the triple point of water.

At its 2005 meeting, the CIPM affirmed that: This definition refers to water having the isotopic composition defined exactly by the following amount-of-substance ratios: 0.00015576 moles of ²H per mole of ¹H, 0.0003799 moles of ¹⁷O per mole of ¹⁶O, and 0.0020052 moles of ¹⁸O per mole of ¹⁶O.

Because of the manner in which temperature scales used to be defined, it remains common practice to express a thermodynamic temperature, symbol T, in terms of its difference from the reference temperature \(T_0\) = 273.15 K, the ice point. This difference is called the Celsius temperature, symbol \(t\), which is defined by the quantity equation:

\[t = T - T_0\]

The unit of Celsius temperature is the degree Celsius, symbol °C, which is by definition equal in magnitude to the kelvin. A difference or interval of temperature may be expressed in kelvins or in degrees Celsius (13th CGPM, 1967/68); the numerical value of the temperature difference is the same in either case. However, the numerical value of a Celsius temperature expressed in degrees Celsius is related to the numerical value of the thermodynamic temperature expressed in kelvins by the relation:

\[\dfrac{t}{^{\circ}C} = \dfrac{T}{K} - 273.15\]
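
For example, applying this relation to the triple point of water gives its Celsius temperature:

\[\dfrac{t_{\textrm{TPW}}}{^{\circ}C} = \dfrac{273.16\ \textrm{K}}{\textrm{K}} - 273.15 = 0.01\]

so that \(t_{\textrm{TPW}}\) = 0.01 °C, consistent with the ice point lying 0.01 K below the triple point.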

The kelvin and the degree Celsius are also units of the International Temperature Scale of 1990 (ITS-90) adopted by the CIPM in 1989 in its Recommendation 5.

Methods of measuring temperature

Measurement of temperature cannot be accomplished by direct comparison with basic standards such as those for length and mass. A standardized, calibrated device or system is necessary to determine temperature. To measure temperature, various primary effects produced by changes in temperature can be exploited.

A change in temperature may be detected through changes in physical or chemical state, electrical properties, radiating ability, or physical dimensions. The response of a temperature-sensing device is influenced by any of the following factors (a first-order response sketch tying several of them together follows the list):

  • Thermal conductivity and heat capacity of an element.
  • Surface area per unit mass of the element.
  • Film coefficient of heat transfer.
  • Mass velocity of a fluid surrounding the element.
  • Thermal conductivity and heat capacity of the fluid surrounding the element.
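
Several of these factors can be related through the standard lumped-capacitance (first-order) model of a contact sensor, in which the element's heat capacity \(mc\) and the film coefficient \(h\) acting over its surface area \(A\) set a time constant \(\tau = mc/(hA)\). The Python sketch below is purely illustrative; the probe mass, specific heat, film coefficient, and surface area are assumed values, not data from the text:

```python
import math


def time_constant(mass_kg, specific_heat, film_coeff, area_m2):
    """Lumped-capacitance time constant: tau = m*c / (h*A)."""
    return (mass_kg * specific_heat) / (film_coeff * area_m2)


def indicated_temperature(t_true, t_initial, tau_s, elapsed_s):
    """First-order response: the sensor indicates the temperature it has
    itself reached, approaching the true value exponentially."""
    return t_true + (t_initial - t_true) * math.exp(-elapsed_s / tau_s)


# Assumed probe: 2 g metal tip, c ~ 500 J/(kg*K), h ~ 50 W/(m^2*K) in
# moving air, wetted surface area ~ 3 cm^2 (all illustrative values).
tau = time_constant(0.002, 500.0, 50.0, 3e-4)
print(f"time constant ~ {tau:.0f} s")
for t in (0, 30, 60, 120):
    print(t, "s:", round(indicated_temperature(100.0, 20.0, tau, t), 1), "°C")
```

Larger heat capacity or smaller surface area lengthens the time constant, which is why a contact thermometer read too early reports its own temperature rather than that of the body being measured.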

Temperature can be sensed using many devices, which can broadly be classified into two categories: contact-type and non-contact-type sensors. In the case of contact-type sensors, the object whose temperature is to be measured remains in contact with the sensor. The temperature is then inferred by knowing, or by assuming, that the object and the sensor are in thermal equilibrium. Contact-type sensors are classified as follows:

In the case of non-contact-type sensors, the radiant power of the infrared or optical radiation emitted by the object or system is measured. The temperature is determined using instruments such as radiation or optical pyrometers. Non-contact-type sensors are categorized as follows:

  • Radiation pyrometers
  • Optical pyrometers
  • Fiber-optic thermometers

Temperature problems in metrology

One of the main attributes of an ideal measuring system is that it responds only to the signal it is designed to measure and ignores all other signals. Temperature variations adversely affect the operation of measuring systems, and hence this ideal has never been completely achieved. It is extremely difficult to maintain a constant-temperature environment for a general-purpose measuring system. The only practical option is to accept the effects of temperature variations and to devise methods to compensate for them.

Temperature variations cause changes in dimensions and in physical properties, both elastic and electrical, which result in deviations known as zero shift and scale error.

Whenever a change occurs in the output at the no-input condition, it is referred to as zero shift. Zero shift is chiefly caused by temperature variations: expansion and contraction with changing temperature produce linear dimensional changes. For most applications, the zero indication on the output scale is made to correspond to the no-input condition.

A very common example is setting a spring scale to zero at the no-input condition. Consider the empty pan of a weighing scale. If the temperature changes after the scale has been adjusted to zero, the no-load reading will be altered. This change, which is due to the differential dimensional change between the spring and the scale, is termed a zero shift.
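
A minimal sketch of this effect, assuming a single hypothetical coefficient for the differential spring/scale expansion (the numbers are illustrative, not measured values):

```python
# Illustrative zero-shift model for a spring scale.  The no-load reading
# drifts because spring and scale expand by different amounts when the
# temperature changes after the zero adjustment.

DIFFERENTIAL_EXPANSION = 0.02   # mm of apparent reading per °C (assumed)


def zero_shift(delta_temp_c):
    """Apparent change in the no-load reading caused by the differential
    dimensional change between spring and scale."""
    return DIFFERENTIAL_EXPANSION * delta_temp_c


reading_at_zeroing = 0.0                  # scale zeroed with an empty pan
shift = zero_shift(delta_temp_c=10.0)     # 10 °C rise after zeroing
print(f"no-load reading after warming: {reading_at_zeroing + shift:.2f} mm")
```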

Temperature also affects scale calibration, especially when resilient load-carrying members are involved. Temperature variations alter the coil and wire diameters of the spring, as well as the modulus of elasticity of the spring material, so the spring constant changes. This results in a changed load–deflection calibration, an effect referred to as scale error. Various methods can be employed to limit temperature errors:

  1. Minimize temperature errors by proper and careful selection of materials and of the range of operating temperatures. The main cause of temperature errors is thermal expansion. When simple motion-transmitting elements are considered, thermal expansion alone causes temperature errors. When calibrated resilient transducer elements are considered, temperature errors arise from thermal expansion combined with the change in modulus. In the case of electric resistance transducers, thermal expansion combines with resistivity change to cause temperature errors. In each of these cases, temperature errors can be minimized by choosing materials with low temperature coefficients. While selecting such materials, one must keep in mind that other requisite characteristics, such as high strength, low cost, and resistance to corrosion, will not always be found together with minimum temperature coefficients; hence, a compromise needs to be made.
  2. Provide compensation by balancing inversely reacting elements or effects. The approach depends on the type of measurement system employed. In mechanical systems, a composite construction can provide adequate compensation; a typical example is the composite balance wheel of a watch or clock. With a rise in temperature, the modulus of the spring material decreases and the moment of inertia of the wheel increases because of thermal expansion, which slows the watch down. A bimetal element with appropriate characteristics can be incorporated into the rim of the wheel to counter these effects: its moment of inertia decreases with temperature by just enough to compensate for both the expansion of the wheel spokes and the change in modulus of the spring. When electrical systems are employed, compensation may be provided in the circuitry itself; thermistors and resistance-type strain gauges are examples of this approach (a small software-side analogue is sketched after this list).
  3. Control the temperature of the environment so that the temperature problem is eliminated.
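
Where the measuring chain includes digital processing, a simple software correction can complement the hardware compensation described above. The sketch below assumes a linear error model with known coefficients for zero shift and sensitivity (scale) drift; the coefficient names and values are illustrative, not taken from the text:

```python
# Illustrative software compensation for a temperature-sensitive transducer.
# Assumed linear error model: raw = true * (1 + BETA*dT) + ALPHA*dT,
# where dT = T - T_REF, ALPHA models zero shift and BETA models scale error.

T_REF = 20.0    # calibration temperature in °C (assumed)
ALPHA = 0.05    # zero-shift coefficient, output units per °C (assumed)
BETA = 0.002    # sensitivity drift, fraction per °C (assumed)


def compensate(raw_output, temp_c):
    """Invert the assumed error model to recover the corrected output."""
    d_t = temp_c - T_REF
    return (raw_output - ALPHA * d_t) / (1.0 + BETA * d_t)


# A raw reading of 51.5 taken at 30 °C corrects back to 50.0.
print(round(compensate(51.5, 30.0), 2))   # 50.0
```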