Experimental methods that realize units by using the equations of physics are known as primary methods. The essential characteristic of a primary method is that it allows a quantity to be measured in a particular unit directly from its definition, using only quantities and constants that do not themselves contain that unit.
Traditionally, a unit for a given quantity was taken to be a particular example of that quantity, chosen to provide numerical values of common measurements of a convenient size. Before the rise of modern science, units were necessarily defined in terms of material artifacts, notably the meter and kilogram for length and mass, or in terms of the property of a particular object, namely the rotation of the Earth for the second. Even at the origin of the metric system at the end of the 18th century, it was recognized that a more desirable definition of a unit of length, for example, would be one based on a universal property of nature, such as the length of a pendulum beating seconds. Such a definition would be independent of time and place and would in principle be accessible all over the world. At the time, practical considerations led to the simpler artifact definitions for the meter and the kilogram, and the second remained linked to the rotation of the Earth. It was only in 1960 that the first non-material definition was adopted, namely the wavelength of a specified optical radiation for the meter.
Since then, definitions of the ampere, kelvin, mole and candela have been adopted that do not refer to material artifacts. In the case of the ampere, the definition refers to a specified electric current required to produce a given electromagnetic force and, in the case of the kelvin, to a particular thermodynamic state, namely the triple point of water. Even the atomic definition of the second was in terms of a specified transition of the cesium atom. The kilogram always stood out as the one unit that resisted the transformation from an artifact. The definition that opened the way to real universality was that of the meter in 1983. This definition implied, although it did not state, a fixed numerical value for the speed of light. The definition was worded, however, in the traditional form and stated essentially that the meter was the distance travelled by light in a specified time. In this way it reflected the other definitions of the base units of the SI, each of which has the same form, for example “the ampere is the current which…” and “the kelvin is a fraction of a specified temperature”. Such definitions can be called explicit unit definitions.
Although these definitions meet many of the requirements for universality and accessibility, and a variety of realizations are often possible, they nevertheless constrain practical realizations to experiments that are directly or indirectly linked to the particular conditions or states specified in each definition. In consequence, the accuracy of realization of such definitions can never be better than the accuracy of realization of the particular conditions or states specified in the definitions.
This is a particular problem with the present definition of the second, which is based on a microwave transition of an atom of cesium. Frequencies of optical transitions of different atoms or ions are now demonstrably more reproducible, by some orders of magnitude, than the defined frequency of cesium.
In the present definition of the SI, based on the set of defining constants, each definition no longer specifies a particular condition or state that sets a fundamental limit to the accuracy of realization. Instead, any convenient equation of physics that links the particular constant or constants to the quantity we want to measure may be used. This is a much more general way of defining the basic units of measurement. It is not limited by today’s science or technology, as future developments may lead to as yet unknown equations that could provide different ways of realizing units to a much higher accuracy. When defined in this way, there is, in principle, no limit to the accuracy with which a unit can be realized. The exception remains the definition of the second, in which the original microwave transition of cesium remains, for the time being, the basis of the definition.
The difference between an explicit unit definition and an explicit constant definition can be clearly illustrated using the two most recent definitions of the meter, both of which depend upon a fixed numerical value of the speed of light, and secondly the two definitions of the kelvin. The original 1983 definition of the meter states, in effect, that “the meter is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second”. The new definition simply states that the meter is defined by taking the constant that defines the second, the specified cesium frequency, and the fixed numerical value of the speed of light expressed in units m·s⁻¹. We can thus use any equation of physics, including, of course, the one indicated by the former definition, namely the time taken by light to travel a given distance, which is used for astronomical distances, but also the simple equation relating frequency and wavelength to the speed of light. The former definition of the kelvin, based on a fixed numerical value for the temperature of the triple point of water, ultimately requires a measurement at the triple point of water. The new definition, based on the fixed numerical value of the Boltzmann constant, is much more general in that any thermodynamic equation in which k appears can in principle be used to determine a thermodynamic temperature at any point on the temperature scale. For example, by determining the total radiant exitance of a black body at temperature T, equal to (2π⁵k⁴/15c²h³)T⁴, in W m⁻², we can determine T directly.
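For illustration, with ν a measured optical frequency, λ the corresponding wavelength and M the measured radiant exitance of the black body (symbols introduced here only for convenience), the two realizations just mentioned amount to
$$\lambda = \frac{c}{\nu}\qquad\text{and}\qquad M = \frac{2\pi^{5}k^{4}}{15\,c^{2}h^{3}}\,T^{4}\;\;\Rightarrow\;\; T = \left(\frac{15\,c^{2}h^{3}}{2\pi^{5}k^{4}}\,M\right)^{1/4}$$
Since c, h and k are all fixed by the definitions, a measurement of M yields T directly.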
For the kilogram, the unit whose definition has undergone the most fundamental change, realization can be through any equation of physics that links mass, the Planck constant, the speed of light and the cesium frequency. One such equation is the one that describes the operation of an electro-mechanical balance, previously known as a watt balance and more recently as a Kibble balance. With this apparatus, a mechanical power, expressed in terms of a mass m, the local acceleration due to gravity g, and a velocity v, is balanced against an electrical power expressed in terms of an electric current and a voltage measured using the quantum Hall and Josephson effects, respectively. The resulting equation is mgv = Ch, where C is a calibration constant that includes measured frequencies and h is the Planck constant.
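Schematically, and with the integers n1, n2, i of the quantum standards and the measured Josephson frequencies f1, f2 introduced here purely for illustration, the voltage is measured against the Josephson effect and the current through a resistance calibrated against the quantum Hall effect, so that the elementary charge cancels and only h and measured quantities remain:
$$U_{1} = n_{1} f_{1}\,\frac{h}{2e},\qquad I = \frac{U_{2}}{R},\qquad U_{2} = n_{2} f_{2}\,\frac{h}{2e},\qquad R = \frac{h}{i\,e^{2}}$$
$$mgv = U_{1} I = \frac{n_{1} n_{2}\, i}{4}\, f_{1} f_{2}\, h = C\,h \qquad\Rightarrow\qquad m = \frac{C\,h}{g\,v}$$
The mass is thus obtained from the fixed value of h together with measured frequencies, g and v.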
Another method that can be used for a primary realization of the kilogram is the determination of the number of atoms in a silicon sphere, using the equation:
$$m\,=\,\frac{8V}{a^3_0}\,\frac{2 R_\infty h}{c \alpha^2}\,\frac{m_{Si}}{m_e}$$
with the mass m and volume V of the sphere, the lattice parameter a0, the Rydberg constant R∞, the fine structure constant α, and the masses of a silicon atom (averaged over the three isotopes used for the sphere), mSi, and of the electron, me. The first fraction corresponds to the number of atoms in the sphere, the second to the electron mass, and the third to the ratio of the mass of the (isotopically averaged) silicon atom to the electron mass.
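For illustration only, the equation can be read as the number of atoms multiplied by the mass of one atom: the cubic unit cell of crystalline silicon, of volume a0³, contains 8 atoms, and the second fraction is the electron mass expressed through the Rydberg relation, so that
$$N = \frac{8V}{a_{0}^{3}},\qquad m_{\mathrm{e}} = \frac{2R_{\infty}h}{c\,\alpha^{2}},\qquad m = N\,m_{\mathrm{e}}\,\frac{m_{\mathrm{Si}}}{m_{\mathrm{e}}} = N\,m_{\mathrm{Si}}$$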
Another possibility for measuring mass through the new definition, but this time at the microscopic level, is through measurements of atomic recoil, using a relation that includes the quotient h/m.
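As an illustrative form of this relation, with ν the frequency of the absorbed photon and vr the recoil velocity (symbols introduced here), an atom of mass m that absorbs a photon acquires a recoil velocity
$$v_{\mathrm{r}} = \frac{h\nu}{m c}\qquad\text{so that}\qquad \frac{h}{m} = \frac{v_{\mathrm{r}}\,c}{\nu}$$
Measuring vr and ν, for example by atom interferometry, therefore determines h/m for that atom, which, together with known mass ratios, provides a route from the fixed value of h to a mass.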
All these provide a striking illustration of the generality of the new way of defining units. Detailed information on the current realization of the base and other units is given on the BIPM website.
The 9th CGPM (1948, Resolution 6; CR 64) instructed the CIPM: to study the establishment of a complete set of rules for units of measurement; to find out for this purpose, by official enquiry, the opinion prevailing in scientific, technical and educational circles in all countries; and to make recommendations concerning the establishment of a practical system of units of measurement suitable for adoption by all countries adhering to the Metre Convention.
The same CGPM also laid down, in Resolution 7 (CR 70), ‘general principles for the writing of unit symbols’ and listed some coherent derived units that were assigned special names.
The 10th CGPM (1954, Resolution 6; CR 80) adopted as base quantities and units for this practical system the following six quantities: length, mass, time, electric current, thermodynamic temperature and luminous intensity, as well as the six corresponding base units: meter, kilogram, second, ampere, kelvin and candela. After a lengthy discussion between physicists and chemists, the 14th CGPM (1971, Resolution 3, CR 78 and Metrologia 1972, 8, 36) added amount of substance, unit mole, as the seventh base quantity and unit.
The 11th CGPM (1960, Resolution 12; CR 87) adopted the name Système international d’unités, with the international abbreviation SI, for this practical system of units and laid down rules for prefixes, derived units and the former supplementary units, as well as other matters; it thus established a comprehensive specification for units of measurement. Subsequent meetings of the CGPM and the CIPM have added to and modified the original structure of the SI to take account of advances in science and of the changing needs of users.
The historical sequence that led to these important decisions may be summarized as follows.
The definitions of the seven base units can be related unambiguously to the numerical values of the seven defining constants. However, there is not a one-to-one relationship between the seven defining constants and the seven base units as many of the base units call upon more than one of the defining constants.
Unit of time, second
Before 1960, the unit of time, the second, was defined as the fraction 1/86 400 of the mean solar day. The exact definition of “mean solar day” was left to astronomers. However, measurements showed that irregularities in the rotation of the Earth made this an unsatisfactory definition. In order to define the unit of time more precisely, the 11th CGPM (1960, Resolution 9, CR, 86) adopted a definition given by the International Astronomical Union based on the tropical year 1900. Experimental work, however, had already shown that an atomic standard of time, based on a transition between two energy levels of an atom or a molecule, could be realized and reproduced much more accurately. Considering that a very precise definition of the unit of time is indispensable for science and technology, the 13th CGPM (1967-1968, Resolution 1, CR, 103 and Metrologia, 1968, 4, 43) chose a new definition of the second referenced to the frequency of the ground-state hyperfine transition in the cesium-133 atom. A revised, more precise wording of this same definition, now in terms of a fixed numerical value of the unperturbed ground-state hyperfine transition frequency of the cesium-133 atom, ΔνCs, was adopted in Resolution 1 of the 26th CGPM (2018).
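For illustration, the fixed numerical value adopted is
$$\Delta\nu_{\mathrm{Cs}} = 9\,192\,631\,770\ \mathrm{Hz}\qquad\text{so that}\qquad 1\ \mathrm{s} = \frac{9\,192\,631\,770}{\Delta\nu_{\mathrm{Cs}}}$$
i.e. the second is the duration of 9 192 631 770 periods of this radiation.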
Unit of length, meter
The 1889 definition of the meter, namely the length of the international prototype of platinum-iridium, was replaced by the 11th CGPM (1960) with a definition based on the wavelength of the radiation corresponding to a particular transition in krypton-86. This change was adopted in order to improve the accuracy with which the definition of the meter could be realized, this being achieved using an interferometer with a travelling microscope to measure the optical path difference as the fringes were counted. In turn, this was replaced in 1983 by the 17th CGPM (Resolution 1, CR, 97, and Metrologia, 1984, 20, 25) with a definition referenced to the distance that light travels in vacuum in a specified interval of time, as presented in 2.3.1. The original international prototype of the meter, which was sanctioned by the 1st CGPM in 1889 (CR, 34-38), is still kept at the BIPM under the conditions specified in 1889. In order to make clear its dependence on the fixed numerical value of the speed of light, c, the wording of the definition was changed in Resolution 1 of the 26th CGPM (2018).
Unit of mass, kilogram
The 1889 definition of the kilogram was simply the mass of the international prototype of the kilogram, an artifact made of platinum-iridium. This was, and still is, kept at the BIPM under the conditions specified by the 1st CGPM (1889, CR, 34-38) when it sanctioned the prototype and declared that “this prototype shall henceforth be considered to be the unit of mass”. Forty similar prototypes were made at about the same time and these were all machined and polished to have closely the same mass as the international prototype. At the 1st CGPM (1889), after calibration against the international prototype, most of these “national prototypes” were individually assigned to Member States, and some also to the BIPM. The 3rd CGPM (1901, CR, 70), in a declaration intended to end the ambiguity in common usage concerning the use of the word “weight”, confirmed that “the kilogram is the unit of mass; it is equal to the mass of the international prototype of the kilogram”. The complete version of these declarations appears on p. 70 of the above-mentioned CGPM proceedings.
By the time of the second verification of national prototypes in 1946 it was found that on average the masses of these prototypes were diverging from that of the international prototype. This was confirmed by the third verification carried out from 1989 to 1991, the median difference being about 25 micrograms for the set of original prototypes sanctioned by the 1st CGPM (1889). In order to assure the long-term stability of the unit of mass, to take full advantage of quantum electrical standards and to be of more utility to modern science, a new definition for the kilogram based on the value of a fundamental constant, for which purpose the Planck constant h was chosen, was adopted by Resolution 1 of the 26th CGPM (2018).
Unit of electric current, ampere
Electric units, called “international units,” for current and resistance were introduced by the International Electrical Congress held in Chicago in 1893 and definitions of the “international ampere” and “international ohm” were confirmed by the International Conference in London in 1908.
By the time of the 8th CGPM (1933) there was a unanimous desire to replace the “international units” by so-called “absolute units”. However, because some laboratories had not yet completed the experiments needed to determine the ratios between the international and absolute units, the CGPM gave authority to the CIPM to decide at an appropriate time both these ratios and the date at which the new absolute units would come into effect. The CIPM did so in 1946 (Resolution 2, PV, 20, 129-137), when it decided that the new units would come into force on 1 January 1948. In October 1948 the 9th CGPM approved the decisions taken by the CIPM. The definition of the ampere chosen by the CIPM was referenced to the force between parallel wires carrying an electric current, and it had the effect of fixing the numerical value of the vacuum magnetic permeability μ0 (also called the magnetic constant). The numerical value of the vacuum electric permittivity ε0 (also called the electric constant) then became fixed as a consequence of the new definition of the meter adopted in 1983.
However, the 1948 definition of the ampere proved difficult to realize, and practical quantum standards (based on the Josephson and quantum Hall effects), which link both the volt and the ohm to particular combinations of the Planck constant h and the elementary charge e, became almost universally used as a practical realization of the ampere through Ohm’s law (18th CGPM (1987), Resolution 6, CR 100). As a consequence, it became natural not only to fix the numerical value of h to redefine the kilogram, but also to fix the numerical value of e to redefine the ampere, in order to bring the practical quantum electrical standards into exact agreement with the SI. The present definition, based on a fixed numerical value for the elementary charge, e, was adopted in Resolution 1 of the 26th CGPM (2018).
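The particular combinations referred to are, for illustration, the Josephson constant KJ and the von Klitzing constant RK:
$$K_{\mathrm{J}} = \frac{2e}{h},\qquad R_{\mathrm{K}} = \frac{h}{e^{2}}$$
With the numerical values of h and e fixed, both constants are exactly known, so the quantum standards for the volt and the ohm, and hence the ampere obtained from them through Ohm’s law, are realized directly in the SI.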
Unit of thermodynamic temperature, kelvin
The definition of the unit of thermodynamic temperature was given by the 10th CGPM (1954, Resolution 3; CR 79), which selected the triple point of water, TTPW, as a fundamental fixed point and assigned to it the temperature 273.16 K, thereby defining the kelvin. The 13th CGPM (1967-1968, Resolution 3; CR, 104 and Metrologia, 1968, 4, 43) adopted the name kelvin, symbol K, instead of “degree kelvin”, symbol °K, for the unit defined in this way. However, the practical difficulties in realizing this definition, which required a sample of pure water of well-defined isotopic composition, together with the development of new primary methods of thermometry, led to the adoption of a new definition of the kelvin based on a fixed numerical value of the Boltzmann constant k. The present definition, which removed both of these constraints, was adopted in Resolution 1 of the 26th CGPM (2018).
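With the Boltzmann constant fixed, the kelvin is related directly to the unit of energy; for illustration,
$$k = 1.380\,649\times10^{-23}\ \mathrm{J\,K^{-1}}\qquad\text{so that}\qquad kT = 1.380\,649\times10^{-23}\ \mathrm{J}\ \text{ at }\ T = 1\ \mathrm{K}$$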
Unit of amount of substance, mole
Following the discovery of the fundamental laws of chemistry, units called, for example, “gram-atom” and “gram-molecule” were used to specify amounts of chemical elements or compounds. These units had a direct connection with “atomic weights” and “molecular weights”, which are in fact relative atomic and molecular masses. The first compilations of atomic weights were originally linked to the atomic weight of oxygen, which was, by general agreement, taken as being 16. Whereas physicists separated the isotopes in a mass spectrometer and attributed the value 16 to one of the isotopes of oxygen, chemists attributed the same value to the (slightly variable) mixture of isotopes 16, 17 and 18, which for them constituted the naturally occurring element oxygen. An agreement between the International Union of Pure and Applied Physics (IUPAP) and the International Union of Pure and Applied Chemistry (IUPAC) brought this duality to an end in 1959-1960. Physicists and chemists agreed to assign the value 12, exactly, to the so-called atomic weight, correctly referred to as the relative atomic mass Ar, of the isotope of carbon with mass number 12 (carbon-12, 12C). The unified scale thus obtained gives the relative atomic and molecular masses, also known as the atomic and molecular weights, respectively. This agreement is unaffected by the redefinition of the mole.
The quantity used by chemists to specify the amount of chemical elements or compounds is called “amount of substance”. Amount of substance, symbol n, is defined to be proportional to the number of specified elementary entities N in a sample, the proportionality constant being a universal constant which is the same for all entities. The proportionality constant is the reciprocal of the Avogadro constant NA, so that n = N/NA. The unit of amount of substance is called the mole, symbol mol. Following proposals by the IUPAP, IUPAC and ISO, the CIPM developed a definition of the mole in 1967 and confirmed it in 1969, by specifying that the molar mass of carbon-12 should be exactly 0.012 kg/mol. This allowed the amount of substance nS(X) of any pure sample S of entity X to be determined directly from the mass of the sample mS and the molar mass M(X) of entity X, the molar mass being determined from its relative atomic mass Ar (atomic or molecular weight), without the need for a precise knowledge of the Avogadro constant, by using the relations
$$n_{\mathrm{S}}(X) = m_{\mathrm{S}}/M(X) \quad\text{and}\quad M(X) = A_{\mathrm{r}}(X)\ \mathrm{g/mol}$$
Thus, this definition of the mole was dependent on the artifact definition of the kilogram.
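As a purely illustrative example of these relations, a sample of carbon-12 with mass 6 g corresponded, under that definition, to an amount of substance
$$n_{\mathrm{S}}({}^{12}\mathrm{C}) = \frac{6\ \mathrm{g}}{12\ \mathrm{g/mol}} = 0.5\ \mathrm{mol}$$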
The numerical value of the Avogadro constant defined in this way was equal to the number of atoms in 12 grams of carbon-12. However, because of recent technological advances, this number is now known with such precision that a simpler and more universal definition of the mole has become possible, namely, by specifying exactly the number of entities in one mole of any substance, thus fixing the numerical value of the Avogadro constant. This has the effect that the new definition of the mole and the value of the Avogadro constant are no longer dependent on the definition of the kilogram. The distinction between the fundamentally different quantities ‘amount of substance’ and ‘mass’ is thereby emphasized. The present definition of the mole based on a fixed numerical value for the Avogadro constant, NA, was adopted in Resolution 1 of the 26th CGPM (2018).
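The exact value fixed by that resolution is, for reference,
$$N_{\mathrm{A}} = 6.022\,140\,76\times10^{23}\ \mathrm{mol^{-1}},\qquad n = \frac{N}{N_{\mathrm{A}}}$$
so that one mole of any substance contains exactly this number of specified elementary entities.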
Unit of luminous intensity, candela
The units of luminous intensity, which were based on flame or incandescent filament standards in use in various countries before 1948, were replaced initially by the “new candle”, based on the luminance of a Planckian radiator (a black body) at the temperature of freezing platinum. This modification had been prepared by the International Commission on Illumination (CIE) and by the CIPM before 1937, and the decision was promulgated by the CIPM in 1946. It was then ratified in 1948 by the 9th CGPM, which adopted a new international name for this unit, the candela, symbol cd; in 1954 the 10th CGPM established the candela as a base unit; in 1967 the 13th CGPM (Resolution 5, CR, 104 and Metrologia, 1968, 4, 43-44) amended this definition.
In 1979, because of the difficulties in realizing a Planckian radiator at high temperatures, and the new possibilities offered by radiometry, i.e. the measurement of optical radiation power, the 16th CGPM (1979, Resolution 3, CR, 100 and Metrologia, 1980, 16, 56) adopted a new definition of the candela.
The present definition of the candela uses a fixed numerical value for the luminous efficacy of monochromatic radiation of frequency 540 × 10¹² Hz, Kcd, adopted in Resolution 1 of the 26th CGPM (2018).
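The fixed numerical value is 683 when Kcd is expressed in lm/W. As a purely illustrative consequence, with Iv and Ie denoting luminous and radiant intensity (symbols introduced here), a source emitting monochromatic radiation of frequency 540 × 10¹² Hz with a radiant intensity of (1/683) W/sr in a given direction has, in that direction, a luminous intensity of
$$I_{\mathrm{v}} = K_{\mathrm{cd}}\,I_{\mathrm{e}} = 683\ \mathrm{lm\,W^{-1}}\times\frac{1}{683}\ \mathrm{W\,sr^{-1}} = 1\ \mathrm{cd}$$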