By Tom Roberts (look me up in the FNAL Phonebook)
and Siegmar Schleif (email), 2007.
HTML/CSS coding and copyediting: John Dlugosz, 2007.
There has been a renaissance in tests of special relativity (SR), in part because considerations of quantum gravity imply that SR may well be violated at appropriate scales (very small distance, very high energy).
Physics is an experimental science, and as such the experimental basis for any physical theory is extremely important. The relationship between theory and experiments in modern science is a multi-edged sword:
At present, special relativity (SR) meets all of these requirements and expectations. Literally hundreds of experiments have tested SR, with an enormous range and diversity, and the agreement between theory and experiment is excellent. There is a lot of redundancy in these experimental tests. Also, many indirect tests of SR are not included here. This list of experiments is by no means complete!
Other than their sheer numbers, the most striking thing about these experimental tests of SR is their remarkable breadth and diversity. An important aspect of SR is its universality—it applies to all known physical phenomena and not just to the electromagnetic phenomena it was originally invented to explain. In these experiments you will find tests using electromagnetic and nuclear measurements (including both strong and weak interactions). Gravitational tests are the province of general relativity, and are not considered here.
There are several useful surveys of the experimental basis of SR:
Zhang's book is especially comprehensive. The LivingReviews article goes into considerable detail relating current theoretical ideas to experimental tests.
Textbooks with good summaries of the experimental basis of relativity are:
Albert Einstein introduced the world to special relativity in his seminal 1905 paper: A. Einstein, "Zur Elektrodynamik bewegter Körper", Ann. d. Physik, 17, 1905 ("On the Electrodynamics of Moving Bodies"). It is available in several forms:
Note, however, that SR is not perfect (in agreement with every experiment), and there are some experiments that are in disagreement with its predictions. See Experiments that Apparently are not Consistent with SR where some of these experiments are referenced and discussed. It is clear that most, if not all, of these experiments have difficulties that are unrelated to SR. Note also that few if any standard references or textbooks even mention the possibility that some experiments might be inconsistent with SR, and there are also aspects of publication bias in the literature. That being said, as of this writing there are no reproducible and generally accepted experiments that are inconsistent with SR, within its domain of applicability.
Technically, the basis of SR is Lorentz invariance, and many recent articles phrase it that way. This is closely related to the CPT theorem, and many of the recent experiments apply to both Lorentz invariance and CPT. Recently there have been conferences on Lorentz invariance and CPT violations:
Much of the renewed interest in testing SR has come from considerations of quantum gravity, which imply that at a suitable scale (very small distance, very high energy) SR might well be violated. Here are some review articles (ordered less to more technical):
The domain of applicability of a physical theory is the set of physical situations in which the theory is valid. For SR, this is measurements of distance, time, momentum, energy, etc. in inertial frames. Calculus can be used to apply SR in accelerated systems. A more technical definition is that SR is valid only in flat Lorentz manifolds topologically equivalent to R^4.
In particular, any experiment in which the effects of gravitation are important is outside the domain of SR. Because SR is the local limit of general relativity, it is possible to compute how large an error is made when one applies SR to a situation that is approximately but not exactly inertial, such as the common case of experimental apparatus supported against gravity on Earth's surface. In many cases (e.g. most optical and elementary-particle experiments on the rotating Earth's surface) these errors are vastly smaller than the experimental resolution, and SR can be accurately applied.
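The statement that calculus extends SR to accelerated systems can be made concrete: the proper time of a moving clock is the integral of sqrt(1 − v(t)²/c²) over coordinate time. A minimal numerical sketch (the midpoint integrator and the constant-velocity example are illustrative, not drawn from any experiment described here):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def proper_time(v_of_t, t_total, steps=100_000):
    """Numerically integrate d(tau) = sqrt(1 - v^2/c^2) dt over [0, t_total].

    v_of_t: function giving the clock's speed (m/s) at coordinate time t.
    """
    dt = t_total / steps
    tau = 0.0
    for i in range(steps):
        t = (i + 0.5) * dt          # midpoint rule
        v = v_of_t(t)
        tau += math.sqrt(1.0 - (v / C) ** 2) * dt
    return tau

# Sanity check: a clock moving at a constant 0.6 c for 10 s of coordinate time.
# For constant speed this reduces to t/gamma = 10 * sqrt(1 - 0.36) = 8 s.
tau = proper_time(lambda t: 0.6 * C, 10.0)
```

For a genuinely accelerated trajectory one simply supplies a non-constant `v_of_t`; no machinery beyond ordinary calculus is needed, which is the point made above.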
A test theory of SR is a generalization of the Lorentz transforms of SR using additional parameters. One can then analyze experiments using the test theory (rather than SR itself) and fit the parameters of the test theory to the experimental results. If the fitted parameter values differ significantly from the values corresponding to SR, then the experiment is inconsistent with SR. But more normally, such fits can show how well a given experiment confirms or disagrees with SR, and what the experimental accuracy is for doing so. This gives a general and tractable method of analysis which can be common to multiple experiments.
Different test theories differ in their assumptions about what form the transform equations could reasonably take. There are at present four test theories of SR:
Zhang discusses their interrelationships and presents a unified test theory encompassing the other three, but with a better and more interpretable parametrisation. His discussion implies that there will be no more test theories of SR that are not reducible to one of the first three.
Robertson showed that one can unambiguously deduce the Lorentz transform of SR to an accuracy of ~0.1% from the following three experiments: Michelson and Morley, Kennedy and Thorndike, Ives and Stilwell. Zhang showed that modern experiments determine the Lorentz transforms to within a few parts per million.
These test theories can also be used to examine potential alternative theories to SR; such alternative theories predict particular values of the parameters of the test theory, which can easily be compared to values determined by experiments analyzed with the test theory. The existing experiments put rather strong experimental constraints on any alternative theory.
In particular, Zhang showed that these experimental limits essentially require that any theory based upon the existence of an aether be experimentally indistinguishable from SR, and have an aether frame which is unobservable (the only alternative is for a theory to "live in the error bars" of the experiments, which is quite difficult given the high accuracies achieved by many of these experiments). Note also that some of the parameters in these test theories are not determined at all by SR (or by experiments)—this means that many different theories, characterized by different values of such parameters, are equivalent to SR in that they are experimentally indistinguishable from SR (though they differ from SR in other aspects).
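To illustrate how a test-theory analysis works in practice, here is a toy fit on entirely synthetic data. The one-parameter model (fractional transverse-Doppler shift ≈ α β², with α = −1/2 for SR) is a deliberately simplified stand-in for the full multi-parameter test theories discussed above:

```python
import math
import random

random.seed(1)

# Synthetic "Ives-Stilwell-like" data: fractional frequency shift measured
# at several speeds beta = v/c.  In this toy model the shift is alpha*beta^2,
# with alpha = -1/2 the value corresponding to SR.
ALPHA_SR = -0.5
betas = [0.01 * k for k in range(1, 11)]
shifts = [ALPHA_SR * b**2 + random.gauss(0.0, 1e-7) for b in betas]  # fake noise

# One-parameter least-squares fit of shift = alpha * beta^2:
num = sum(s * b**2 for s, b in zip(shifts, betas))
den = sum(b**4 for b in betas)
alpha_fit = num / den

# If alpha_fit differed from -1/2 by more than the fit uncertainty, the
# (synthetic) experiment would be inconsistent with SR; here it agrees.
```

The real analyses work the same way, just with more parameters and with real error propagation: fit the test-theory parameters to data, then compare the fitted values and their uncertainties to the values SR predicts.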
In addition there is the "standard model extension (SME)" of Colladay and Kostelecký that extends the standard model of particle physics with various plausible Lorentz-violating terms. This is in the context of quantum field theory, and is well beyond the scope of this article (which is limited to SR and its immediate consequences). The goal of many of the recent tests is to put limits on the many parameters of this test theory. Colladay and Kostelecký, Phys. Rev. D55 (1997) pg 6760 (arxiv:hep-ph/9703464), and Phys. Rev. D 58, 116002 (1998) (arxiv:hep-ph/9809521).
Many measurements of the speed of light involve the passage of the light through some material medium. This can invalidate some conclusions of the measurement due to the extinction theorem of Ewald and Oseen. This theorem states that the speed of light will approach the speed c/n relative to the medium (n is its index of refraction), and it also determines how long a path length is required for that approach. The distance required depends strongly on the index of refraction of the medium and the wavelength: for visible light and optical glass it is less than a micron, for air it is about a millimetre, and for the intergalactic medium it is several light years. So even astronomical observations over vast distances in the "vacuum" of outer space are not immune from the effects of this theorem. Note this theorem is based purely on classical electrodynamics, and for gamma rays detected as individual particles it does not apply; it is also not clear how it would apply to theories other than SR and classical electrodynamics. See for instance: J.G. Fox, Am. J. Phys. 30, pg 297 (1962), JOSA 57, pg 967 (1967), and AJP 33, pg 1 (1964). An elementary discussion is given in Ballenegger and Weber, AJP 67, pg 599 (1999). The standard reference for this is Born and Wolf, Principles of Optics, and the original paper is Oseen, Ann. der Physik 48, pg 1, 1915.
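The extinction lengths quoted above can be checked against a commonly quoted order-of-magnitude estimate for dilute media, ℓ ≈ λ/(2π|n − 1|). This is only a rough formula, and the numerical inputs below are illustrative:

```python
import math

def extinction_length(wavelength_m, n):
    """Rough extinction length, ell ~ lambda / (2*pi*|n - 1|).

    An order-of-magnitude estimate only; the exact value depends on the
    detailed optics of the medium (see Born and Wolf).
    """
    return wavelength_m / (2.0 * math.pi * abs(n - 1.0))

lam = 500e-9                               # green light, 500 nm
l_glass = extinction_length(lam, 1.5)      # optical glass: well under a micron
l_air = extinction_length(lam, 1.000293)   # air at STP: a fraction of a mm
```

These reproduce the orders of magnitude in the text: sub-micron for glass, sub-millimetre for air, and for the extremely dilute intergalactic medium (|n − 1| smaller by many orders of magnitude) the length grows to light years.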
The special theory of relativity (SR) was invented in 1905 by Einstein to explain several experimental results. Since then it has been found to explain a wide range of experimental results. SR is not a mathematical game or just a hypothesis. SR is a physical theory that has been well tested many times.
A detailed account of the early history of SRT is given in: Arthur L. Miller, Albert Einstein's Special Theory of Relativity: Emergence and early interpretation, Addison Wesley, 1981, ISBN 0-201-04680-6.
When A. Einstein wrote his famous paper: "The Electrodynamics of Moving Bodies" in 1905, he already had experimental support for his new theory:
...Examples of this sort, together with the unsuccessful attempts to discover any motion of Earth relative to the "light medium", suggest that the phenomena of electrodynamics as well as of mechanics possess no properties corresponding to the idea of absolute rest. They suggest rather that, as has already been shown to the first order of small quantities, the same laws of electrodynamics and optics will be valid for all frames of reference for which the equations of mechanics hold good...
What was the experimental support for this claim? There were several experiments concerning the electrodynamics of moving bodies that are not very well known today; but Einstein knew of numerous examples. Many of these experiments were reviewed in H.A. Lorentz's important paper "On the influence of the earth's motion on luminiferous phenomena", Versl. Kon. Akad. Wetensch. Amsterdam, 297 (1886). Lorentz showed that Stokes' theory of light, which assumed complete dragging of the aether at Earth's surface and decreasing to zero dragging far away, had severe problems with aberration and the results of Arago and Airy.
Bradley (1727) discovered that the images of stars move in small ellipses. This is explained as aberration due to Earth's motion around the Sun. This is inconsistent with a simple model of light as waves in an aether that is dragged along by Earth; it is consistent with SR.
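The size of Bradley's effect follows from simple kinematics: the telescope must be tilted by an angle with tan α = v/c. A quick check using Earth's approximate orbital speed (the ~30 km/s figure is a round number, not a precise ephemeris value):

```python
import math

C = 299_792_458.0      # speed of light, m/s
V_ORBIT = 29_800.0     # Earth's mean orbital speed, m/s (approximate)

# First-order aberration angle for a star near the pole of Earth's orbit:
alpha_rad = math.atan(V_ORBIT / C)
alpha_arcsec = math.degrees(alpha_rad) * 3600.0
# roughly 20.5 arcseconds, the semi-major axis of the small ellipses
# Bradley observed as Earth's velocity direction changes over a year
```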
Examined the expected change in focus of a refracting telescope due to Earth's motion around the Sun. This is first order in v/c if one assumes light is fully dragged by the lens. The null result is consistent with SR. Arago's results caused Fresnel to develop his theory of the partial dragging of aether, which was then confirmed by:
Measured the speed of light in moving materials. The Fresnel drag coefficient is solidly established by experiments, and is consistent with SR to within experimental resolutions.
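Fresnel's drag coefficient, u ≈ c/n + v(1 − 1/n²), is exactly what SR's velocity-addition formula gives to first order in v/c, which is why Fizeau-type results are consistent with SR. A sketch comparing the two (the index and flow speed below are illustrative, not Fizeau's actual apparatus parameters):

```python
C = 299_792_458.0  # speed of light, m/s

def fresnel_drag(n, v):
    """Fresnel's 19th-century partial-drag prediction for light in a
    medium of index n moving at speed v along the light path."""
    return C / n + v * (1.0 - 1.0 / n**2)

def sr_velocity_addition(n, v):
    """SR: relativistic addition of c/n (speed in the medium's rest frame)
    and v (speed of the medium)."""
    u = C / n
    return (u + v) / (1.0 + u * v / C**2)

# Water (n ~ 1.33) flowing at 10 m/s:
f = fresnel_drag(1.33, 10.0)
s = sr_velocity_addition(1.33, 10.0)
# The two agree to first order in v/c; their difference is far below
# the resolution of any Fizeau-type experiment.
```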
Much more accurate version of the basic concept of Arago's experiment, using a terrestrial source and a square (ring) interferometer with one side in water and three in air. The null result is consistent with Arago's result and with Fresnel's drag coefficient, and with SR.
Airy tested whether stellar aberration remained unchanged if the telescope was filled with water. It did, in agreement with the prediction of SR.
Changes of polarization in the direction of Earth's motion not observed.
Note that Roentgen describes in this paper an "unsuccessful" experiment, where he tried to measure the velocity of Earth through the aether (a "primitive" version of the Trouton–Noble experiment).
Experiments concerning the so-called Roentgen convection, with a magnetic field (See Sommerfeld Vol. 3, Chapter 4, and von Laue Ch. 1). Blondlot's experiment was an early experiment with μ, ε = 1.
Experiments concerning the effect of the motion of Earth on double refraction.
Experiments concerning the so-called Roentgen convection, with an electric field (See Sommerfeld Vol. 3, Chapter 4).
- The laws by which the states of physical systems undergo change are not affected, whether these changes of state be referred to the one or the other of two systems of coordinates in uniform translatory motion.
- Any ray of light moves in the "stationary" system of coordinates with determined speed c, whether the ray be emitted by a stationary or by a moving body.
—Einstein, Ann. d. Physik 17 (1905); translated by Perrett and Jeffery; reprinted in: Einstein, Lorentz, Weyl, Minkowski, The Principle of Relativity, Dover 1952.
"Stationary" was defined in the first paragraph of this section:
Let us take a system of coordinates in which the equations of newtonian mechanics hold good. In order to render our presentation more precise and to distinguish this system of coordinates verbally from others that will be introduced hereafter, we call it the "stationary system".
It is clear that the word "stationary" is used merely as a label, and implies no "absolute" aspects at all.
[Editor's note: The phrase "stationary system" here denotes an inertial frame. By a "system of coordinates", Einstein ultimately means an inertial frame. A distinction should be made between frames (which embody physics) and coordinates (which embody maths). Because Einstein allocated distinct coordinates to distinct frames, he used the terms interchangeably, and they are still used interchangeably by most physicists. But it's important to realise that they are different entities. This is also true when doing calculations in, say, projectile motion, where we must include several frames and coordinate systems in one calculation. There, it's well known that interchanging "frame" and "coordinates" leads to great confusion.]
The speed of light is said to be isotropic if it has the same value when measured in any/every direction.
The Michelson–Morley experiment (MMX) was intended to measure Earth's velocity relative to the "luminiferous aether" which was at the time presumed to carry electromagnetic phenomena. The failure of this and the other early experiments to actually observe Earth's motion through the aether became significant in promoting the acceptance of Einstein's theory of special relativity, as it was appreciated from early on that Einstein's approach (via symmetry) was more elegant and parsimonious of assumptions than were other approaches (e.g. those of Maxwell, Hertz, Stokes, Fresnel, Lorentz, Ritz, and Abraham).
The following table comes from R.S. Shankland et al., Rev. Mod. Phys. 27 no. 2, pg 167–178 (1955), which includes references to each experiment (resolution and the limit on v_aether are from the original sources). The expected fringe shift is what would be expected for a rigid aether at rest with respect to the Sun and Earth's orbital speed (~30 km/s).
| Experiment | Year | Arm length (m) | Expected fringe shift | Observed fringe shift | Limit on v_aether |
|---|---|---|---|---|---|
| Michelson + Morley | 1887 | 11.0 | 0.4 | < 0.01 | 8 km/s |
| Morley + Morley | 1902–04 | 32.2 | 1.13 | 0.015 | |
| Miller (re-analysis in 2006, see note) | 1925–29 | 32.0 | 1.12 | 0.000 ± 0.015 | 6 km/s |
| Kennedy (Mt Wilson) | 1926 | 2.0 | 0.07 | 0.002 | |
| Piccard + Stahel (Mt Rigi) | 1927 | 2.8 | 0.13 | 0.006 | |
| Michelson et al. | 1929 | 25.9 | 0.9 | 0.01 | |
Note: before about 1950 it was common to not perform a detailed error analysis, and to not report error bars or resolutions.
Note: the re-analysis of Miller's 1925–29 results is: T.J. Roberts, "An Explanation of Dayton Miller's Anomalous ‘Ether Drift' Results", arXiv:physics/0608238. There is more discussion of this below.
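The "expected" fringe shifts in the table follow from the classical aether analysis of the Michelson interferometer: on rotating the apparatus through 90° the fringe pattern should shift by ΔN = (2L/λ)(v/c)². A quick check (the 500 nm wavelength is an assumption for illustration):

```python
def expected_fringe_shift(arm_length_m, v_ms, wavelength_m=500e-9):
    """Classical-aether prediction for the fringe shift on rotating a
    Michelson interferometer through 90 degrees:
        dN = (2 L / lambda) * (v/c)^2
    """
    c = 299_792_458.0
    return (2.0 * arm_length_m / wavelength_m) * (v_ms / c) ** 2

# Michelson-Morley 1887: 11 m effective arm length, Earth's orbital
# speed ~30 km/s; gives ~0.44 fringe, matching the 'expected' column.
dn = expected_fringe_shift(11.0, 30_000.0)
```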
This is the classic paper describing this famous experiment. Contrary to popular myth, their result is not actually "null"—in their words "the relative velocity of the Earth and the aether is probably less than one sixth the Earth's orbital velocity, and certainly less than one-fourth". While some people claim to see a "signal" in their plots, an elementary error analysis shows it is not statistically significant (see Appendix I of arXiv:physics/0608238). So this experiment is certainly consistent with SR.
This is a general review article.
An excellent repetition of the MMX, in vacuum.
Used a clever technique with a tiny step in one mirror to obtain significantly improved resolution.
A repetition of the MMX with the optical paths in perspex (n = 1.49), and laser-based optics sensitive to ~0.00003 fringe. They report a null result with an upper limit on v_aether of 6.64 km/s.
See also: Brillet and Hall.
This uses an interferometer similar to Michelson's, except that its arms are of different length, and are not at right angles to each other. They used a spectacular technique to keep the apparatus temperature constant to 0.001°C, which gave them sufficient stability to permit observations during several seasons. They also used photographs of their fringes (rather than observing them in real time as in most other interferometer experiments). Their apparatus was fixed to Earth and could only rotate with it. Their null result is consistent with SR.
Using lunar laser ranging data they put a limit on Lorentz violating terms in the test theory of Mansouri and Sexl only a factor of two or so worse than modern laser techniques in the laboratory.
See also: Hils and Hall.
They used two ammonia-beam masers back to back to put a limit of 30 m/s on any "aether drift".
They mounted two He–Ne microwave masers perpendicularly on a shock-mounted table and observed the beat frequency between them as the table was rotated. They put a limit of 30 m/s on the anisotropy.
This is one of the most accurate limits on any anisotropy in the round-trip speed of light in a laboratory. They measured the beat frequency between a single-mode laser on a rotating table and a single-mode laser fixed to Earth to put a limit on such an anisotropy of 3 parts in 10^15. Due to the construction of their rotating laser, this can also be interpreted as a limit on any anisotropy of space. This is a round-trip experiment because of their use of a Fabry–Perot etalon to determine the frequency of the rotating laser. Note that their limit on the round-trip anisotropy corresponds to a round-trip speed of less than 10−6 m/s (!); in terms of the more usual one-way anisotropy it is 30 m/s.
Their residual 17 Hz signal (out of ~10^15 Hz) was described as "unknown"; it was fixed with respect to their laboratory and therefore could not be of cosmic origin. A. Brillet has indicated privately that this is most likely due to the rotation axis being slightly off vertical by a few microradians.
This is similar to Brillet and Hall (above), but the lasers are fixed to Earth for better stability. No variations were found at the level of 2 × 10−13. As they made observations over a year, this is not merely a limit on anisotropy, but also a limit on variations in different inertial frames. Brillet and Hall corresponds roughly to the Michelson–Morley experiment (no variations of the round-trip speed of light in different directions, with a time scale of minutes or seconds); Hils and Hall corresponds roughly to the Kennedy–Thorndike experiment (no variations of the round-trip speed of light in different directions or for the different inertial frames occupied by Earth during a year or so).
Commented on by Tobar et al., Phys. Rev. A72, 066101 (2005). Reply by the authors Phys. Rev. A72, 066102 (2005).
Improved limits at the level of a few parts in 10^16.
Anisotropy of c < (2.6 ±1.7) × 10−15.
An experiment similar to Brillet and Hall, with a limit of 1 × 10−18 in the anisotropy of c.
A triangle interferometer with one leg in glass. They set an upper limit on the anisotropy of 0.025 m/s. This is about one millionth of Earth's orbital speed and about 1/10,000 of its rotational speed.
This novel experiment uses a two-photon transition in a neon atomic beam to set an upper limit on any anisotropy of 0.3 m/s.
Using GPS satellites to test for anisotropy in the speed of light, they find δc/c < 5 × 10−9.
This innovative experiment uses an interferometer with frequency-doubling crystals, so the fundamental's fringes are due to signals going all the way around, but the doubled-frequency fringes are due to signals going only half the way around (converging from opposite directions onto the detector). Its null result is consistent with SR.
Note that while these experiments clearly use a one-way light path and find isotropy, they are inherently unable to rule out a large class of theories in which the one-way speed of light is anisotropic. These theories share the property that the round-trip speed of light is isotropic in any inertial frame, but the one-way speed is isotropic only in an aether frame. In all of these theories the effects of slow clock transport exactly offset the effects of the anisotropic one-way speed of light (in any inertial frame), and all are experimentally indistinguishable from SR. All of these theories predict null results for these experiments. See Test Theories above, especially Zhang (in which these theories are called "Edwards frames").
Uses two multi-mode lasers mounted on a rotating table to look for variations in their interference pattern as the table is rotated. Places an upper limit on any one-way anisotropy of 0.9 m/s.
Uses two hydrogen masers fixed to Earth and separated by a 21-km fiber-optic link to look for variations in the phase between them. They put an upper limit on the one-way linear anisotropy of 100 m/s.
Uses a rotating Mössbauer absorber and fixed detector to place an upper limit on any one-way anisotropy of 3 m/s.
Uses a rotating source and fixed Mössbauer detector to place an upper limit on any one-way anisotropy of 10 m/s.
A guided-wave test of isotropy. Their null result is consistent with SR.
Several VLBI tests sensitive to first-order effects of an aether are described. No aether is detected, with a sensitivity of 70 m/s.
A "one-way" test that is bidirectional with the outgoing ray in glass and the return ray in air. The interferometer is by design particularly robust against mechanical perturbations, and temperature controlled. The limit on the anisotropy of c is 0.13 m/s.
If the light emitted from a source moving with velocity v toward the observer has a speed c+kv in the observer's frame, then these experiments place a limit on k. Many but not all of these experiments are subject to criticism due to Optical Extinction.
Observations of binary stars. k < 10−6. These are all subject to criticism due to Optical Extinction.
Uses observations of binary pulsars to put a limit on the source-velocity dependence of the speed of light. k < 2 × 10−9. Optical Extinction is not a problem here, because the high-energy X-rays used have an extinction length considerably longer than the distance to the sources.
Differential aberration, galaxies versus stars. This experiment is subject to criticism due to Optical Extinction.
A supernova explosion sends debris out in all directions with speeds of 10,000 km/s or more (known from Doppler broadening of spectral lines). If the speed of light depended on the source velocity, its arrival at Earth would be spread out in time due to the spread of source velocities. Such a time spread is not observed, and observations of distant supernovae give k < 5 × 10−9. These observations could be subject to criticism due to Optical Extinction, but some observations are for supernovas considerably closer than the extinction length of the X-ray wavelengths used.
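The size of this effect is easy to estimate: if light travelled at c + kv, arrival times from debris with a velocity spread Δv would be smeared by roughly Δt ≈ (d/c)·k·(Δv/c). A sketch with illustrative numbers (the 100,000 light-year distance is hypothetical, not a specific supernova):

```python
C = 299_792_458.0
YEAR_S = 3.156e7            # seconds per year (approximate)
LY_M = C * YEAR_S           # one light year in metres

def arrival_spread(distance_ly, delta_v_ms, k):
    """Spread in arrival times if emitted light moved at c + k*v and the
    source velocities span delta_v.  First order in k and v/c."""
    d = distance_ly * LY_M
    return (d / C) * k * (delta_v_ms / C)

# Hypothetical supernova 100,000 light years away, debris speeds
# spanning 10,000 km/s, with k at the quoted limit of 5e-9:
dt = arrival_spread(1e5, 1.0e7, 5e-9)
# roughly ten minutes of spread; a substantially larger k would smear
# the light curve over days or years, which is not observed
```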
Measured the speed of gamma rays from the decay of fast π0 (~0.99975 c) to be c with a resolution of 400 parts per million. Optical extinction is not a problem for such high-energy gamma rays. The speed of the π0 is not measured, but is assumed to be similar to that measured for π+ and π−.
Measured the speed of the gammas emitted from e+e− annihilation (with center of mass v/c ~0.5) to be c within 10%.
This experiment was criticized in Lo Savio, Phys. Lett. A, 1988, Vol 133, pg 176. It is certainly true that at the instant of annihilation the e+ need not be traveling in the same direction it had initially, or have the same speed (most annihilations occur at very low energy as the positrons stop). This experiment is inconclusive at best.
This repeat of Kantor's experiment in vacuum shows no significant variation in the speed of light affected by moving glass plates. Optical Extinction is not a problem. k < 0.02.
Measured the speed of gamma rays from the decay of fast π0 (~0.2 c) in an experiment specifically designed to avoid extinction effects. Their results are in complete disagreement with the assumption c+v, and are consistent with SR. k < 0.5 with a confidence level of 99.9%.
A direct experiment with coherent light reflected from a moving mirror was performed in a vacuum better than 10−6 torr. Its result is consistent with the constant speed of light. This experiment is notable because Beckmann was a perennial critic of SR. Optical Extinction is not a problem.
A free-electron laser generates highly collimated X-rays parallel to the relativistic electron beam that is their source. If the region that generates the X-rays is L metres long, and the speed of light emitted from the moving electrons is c+kv (here v is essentially c), then at the downstream end of that region the minimum pulse width is k(L/c)/(1+k), because light emitted at the beginning arrives before light emitted at the downstream end. For FLASH, L=30 metres, v=0.9999997 c (700 MeV), and the observed X-ray pulse width is as short as 25 fs. This puts an upper limit on k of 2.5 × 10−7. Optical extinction is not present, as the entire process occurs in very high vacuum.
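The quoted limit follows directly from the pulse-width argument: for small k the minimum width k(L/c)/(1+k) reduces to kL/c, so the observed pulse width Δt implies k < cΔt/L:

```python
C = 299_792_458.0  # speed of light, m/s

def k_limit(region_length_m, pulse_width_s):
    """Upper limit on k from an observed FEL pulse width.

    The minimum pulse width is k*(L/c)/(1+k); for small k this is
    just k*L/c, so k < c * dt / L.
    """
    return C * pulse_width_s / region_length_m

# FLASH: L = 30 m generation region, observed pulses as short as 25 fs
k_max = k_limit(30.0, 25e-15)
# ~2.5e-7, the limit quoted in the text
```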
In 1983 the international standard for the metre was redefined in terms of the definition of the second and a defined value for the speed of light. The defined value was chosen to be as consistent as possible with the earlier metrological definitions of the metre and the second. Since then it is not possible to measure the speed of light using the current metrological standards, but one can still measure any anisotropy in its speed, or use an earlier definition of the metre if necessary.
A report on measurements by the NBS.
A review article on the set of precision frequency and wavelength measurements that became the basis for the 1973 value of c. This is the best single reference for this.
Measured c = 299,792,458.8 ± 0.2 metre/s, with 1.2 metre uncertainty due to realization of the Kr metre standard used. The fact that the Kr standard for the metre became the limit on accuracy was a major reason for the 1983 redefinition of the metre in terms of the definition of c and the definition of the second.
Discussion of three proposals for a new definition of the metre (pre 1983).
Review of methods to relate c to the metre, and results for further measurements checking the 1973 determination of c leading to the 1983 adoption of the new metre standard in terms of the definition of c and the definition of the second.
An overview of past definitions of the metre with emphasis on the guidelines that governed the choice of the new definition in 1983 in terms of the definition of the second and the definition of the speed of light.
A review article discussing the reasons for the re-definition of the metre in 1983 in terms of the definition of the second and the definition of the speed of light.
A summary of measurements of c. The second paper describes measuring c by measuring frequency and wavelength and describes a college-level lab experiment.
For frequencies between 10^8 and 10^15 Hz the speed of light is constant within 1 part in 10^5.
For visible light and 7 GeV gammas the speed of light differs by at most 6 parts in 10^6. The speed of 11 GeV electrons is within 3 parts in 10^6 of the speed of visible light.
For photons of 30 keV and 200 keV the speed of light is the same within a few parts in 10^21.
A limit of 2.3 × 10−15 eV/c^2.
A review article discussion about various experimental limits.
A limit of 6 × 10−16 eV/c^2.
An experimental approach using a toroid Cavendish balance.
A limit of 1.2 × 10−51 g (6 × 10−19 eV/c^2).
See also the Particle Data Group's summary on "Gauge and Higgs Bosons". As of July 2007, their reported limit on the photon mass is 6 × 10−17 eV/c^2.
Einstein's first postulate, the principle of relativity (PoR), essentially states that the laws of physics do not vary for different inertial frames. Most if not all of the tests of his second postulate (the isotropy experiments above) could also be placed in this section, as could those in the following section on the isotropy of space.
This classic experiment looked for a torque induced on a charged capacitor due to its motion through the aether. Its null result is consistent with SR.
Measurements of the resistance of a coil fixed in the lab, for various orientations relative to Earth's motion. Its null result is consistent with SR.
Set an upper limit on aether drift of 4 km/s.
Tomaschek performed the Trouton–Noble experiment at three different altitudes; all results are consistent with the SR predictions.
Simple observations of the existence of cosmic rays lead to extremely tight limits on Lorentz non-invariance. These are model dependent, and depending on choice of model and other assumptions limits as good as 5 × 10−23 are obtained.
A more general perturbative framework is developed.
This extremely accurate experiment looked for any anisotropy in nuclear magnetic resonance. Hughes placed an upper limit on such anisotropy of 10−20.
Variations on the Hughes–Drever experiment.
A test using a cryogenic torsion pendulum carrying a transversely polarized magnet. No significant anisotropy was observed.
This uses a very clever spin-aligned torsion pendulum with a net spin but zero magnetization.
See also Brillet and Hall.
A general review.
Superconducting cylindrical cavities oriented vertically and East–West. No anisotropy to 1 part in 10^13.
The Doppler effect is the observed variation in frequency of a source when it is observed by a detector that is moving relative to the source. This effect is most pronounced when the source is moving directly toward or away from the detector, and in pre-relativity physics its value was zero for transverse motion (motion perpendicular to the source–detector line). In SR there is a non-zero Doppler effect for transverse motion, due to the relative time dilation of the source as seen by the detector. Measurements of Doppler shifts for sources moving with velocities approaching c can test the validity of SR's prediction for such observations, which differs significantly from classical predictions; the experiments support SR and are in complete disagreement with non-relativistic predictions.
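The relativistic and classical predictions can be compared directly. In the convention used below (an assumption for this sketch; sign and angle conventions vary between references), θ is the angle between the source's velocity and the line of sight, measured in the observer's frame:

```python
import math

def doppler_relativistic(f_src, beta, theta):
    """SR Doppler shift: observed frequency of a source moving at beta = v/c,
    with theta the angle (observer's frame) between its velocity and the
    line of sight.  theta = 0 means motion directly toward the observer."""
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return f_src / (gamma * (1.0 - beta * math.cos(theta)))

def doppler_classical(f_src, beta, theta):
    """Pre-relativity moving-source prediction: no transverse effect."""
    return f_src / (1.0 - beta * math.cos(theta))

# Transverse case (theta = 90 degrees) at beta = 0.1:
f0 = 1.0e9
f_sr = doppler_relativistic(f0, 0.1, math.pi / 2)   # redshifted by 1/gamma
f_cl = doppler_classical(f0, 0.1, math.pi / 2)      # no shift at all
# The fractional transverse shift is ~ -beta^2/2 (here -0.5%): this is the
# "quadratic" Doppler shift whose exponent the experiments below measure.
```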
A general review article.
This classic experiment measured the transverse Doppler effect for moving atoms.
A measurement that is truly at 90° in the lab. Agreement with SR to an accuracy of a few percent.
See also Mandelberg and Witten.
Various measurements of the lifetimes of muons.
See also: Bailey et al.
Measurements of the lifetimes of pions. An interpretation was given by: Terrell, Nuovo Cimento 16 (1960) pg 457.
More accurate measurement of pion lifetimes.
Measurements of pion lifetimes, comparison of positive and negative pions, etc.
Measurements of Kaon lifetimes.
They compared the frequency of two lasers, one locked to fast-beam neon and one locked to the same transition in thermal neon. Kaivola found agreement with SR's Doppler formula to within 4 × 10−5; McGowan to within 2.3 × 10−6.
A Mössbauer absorber on a rotor.
A Mössbauer absorber on a rotor was used to verify the transverse Doppler effect of SR to 1.1%.
A nuclear measurement at 0.05 c, in very good agreement with the prediction of SR.
Measured the exponent of the quadratic Doppler shift to be 0.498±0.025, in agreement with SR's value of ½.
The "twin paradox" occurs when two clocks are synchronized, separated, and rejoined. If one clock remains in an inertial frame, then the other must be accelerated sometime during its journey, and it displays less elapsed proper time than the inertial clock. This is a paradox only in that it appears to be inconsistent but is not.
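The asymmetry in elapsed time is just time dilation applied over the journey; a minimal sketch, idealizing the trip as constant-speed travel and ignoring the turnaround acceleration:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def traveler_proper_time(inertial_time, v):
    """Proper time elapsed on a clock moving at constant speed v,
    given the elapsed time in the inertial (stay-at-home) frame."""
    return inertial_time * math.sqrt(1.0 - (v / C) ** 2)

# Idealized round trip: out and back at 0.8 c, 10 years of inertial-frame time.
print(traveler_proper_time(10.0, 0.8 * C))  # 6.0 years on the traveling clock
```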
They flew atomic clocks on commercial airliners around the world in both directions, and compared the time elapsed on the airborne clocks with the time elapsed on an earthbound clock (USNO). Their eastbound clock lost 59 ns on the USNO clock; their westbound clock gained 273 ns; these agree with GR predictions to well within their experimental resolution and uncertainties (which total about 25 ns). By using four cesium-beam atomic clocks they greatly reduced their systematic errors due to clock drift.
Criticised in: A.G. Kelly, "Reliability of Relativistic Effect Tests on Airborne Clocks", Inst. Engineers Ireland Monograph No. 3 (February 1996), http://www.cartesio-episteme.net/H&KPaper.htm. His criticism does not stand up, as he does not understand the properties of the atomic clocks and the way the four clocks were reduced to a single "paper" clock. The simple averages he advocates are not nearly as accurate as the paper clock used in the final paper—that was the whole point of flying four clocks (they call this "correlated rate change"; this technique is used by all standards organizations today to minimize the deficiencies of atomic clocks).
Also commented on in Schlegel, AJP 42, pg 183 (1974). He identifies the East–West time difference as the Sagnac effect, notes that this is independent of the clock's speed relative to the (rotating) Earth, and proposes a coordinate system in which it is treated just like the international date line (for use in highly accurate time transfer around the world). Whether or not this idea has any substance, it has been superseded by the ECI coordinate system of the GPS.
Here is a brief description of a repetition in the UK (2005): http://www.npl.co.uk/upload/pdf/metromnia_issue18.pdf (Page 2)
They flew a hydrogen maser in a Scout rocket up into space and back (not recovered). Gravitational effects are important, as are speed effects. This experiment is also known as "Gravity Probe A".
They flew atomic clocks in airplanes that remained localized over Chesapeake Bay, and also which flew to Greenland and back.
They stored muons in a storage ring and measured their lifetime. When combined with measurements of the muon lifetime at rest this becomes a highly relativistic twin scenario (v ~0.9994 c), for which the stored muons are the traveling twin and return to a given point in the lab every few microseconds. Muon lifetime at rest: Meyer et al., Physical Review 132, pg 2693; Balandin et al., JETP 40, pg 811 (1974); Bardin et al., Physics Letters 137B, pg 135 (1984). Also a test of the clock hypothesis (below).
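A quick sketch of the numbers: at the quoted speed the stored muons' lab-frame lifetime is stretched by a factor of nearly 29 (the rest lifetime below is the standard value from the references above):

```python
import math

def gamma(beta):
    """Lorentz factor from beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

TAU_REST = 2.197e-6   # s, muon lifetime at rest
beta = 0.9994         # stored-muon speed quoted above

g = gamma(beta)
print(f"gamma = {g:.1f}")                                   # ~ 28.9
print(f"lab-frame lifetime = {g * TAU_REST * 1e6:.1f} us")  # ~ 63 us, vs 2.2 us at rest
```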
The clock postulate states that the tick rate of a clock when measured in an inertial frame depends only upon its speed in that frame, and is independent of its acceleration or higher derivatives. The experiment of Bailey et al. referenced above stored muons in a magnetic storage ring and measured their lifetime. While being stored in the ring they were subject to a proper acceleration of approximately 10¹⁸ g (1 g = 9.8 m/s²). The observed agreement between the lifetime of the stored muons and that of constant-velocity muons with the same energy partly confirms the clock postulate for accelerations of that magnitude. We must say "partly" here, because these accelerations were centripetal: that is, they were perpendicular to the muons' velocity, and contained no component parallel to that velocity. Thus, using this restricted type of acceleration only partly tested the clock postulate.
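The quoted ~10¹⁸ g can be checked on the back of an envelope. The ring radius (about 7 m for the CERN muon storage ring) and the Lorentz factor below are assumed illustrative values:

```python
import math

C = 299_792_458.0   # m/s
G_ACCEL = 9.8       # m/s^2

gamma = 29.3        # assumed Lorentz factor of the stored muons
radius = 7.0        # m, assumed ring radius

# Proper (felt) centripetal acceleration of an ultrarelativistic particle
# on a circle of radius r: a_proper = gamma^2 * v^2 / r, with v ~ c.
a_proper = gamma ** 2 * C ** 2 / radius
print(f"proper acceleration ~ {a_proper / G_ACCEL:.1e} g")  # ~ 1e18 g
```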
He discusses some Mössbauer experiments that support the thesis that the tick rate of a clock is independent of its acceleration (~10¹⁶ g), and depends only on its speed.
Dynamics is the study of how energy and momentum conservation laws constrain and affect physical interactions. The two predictions of SR in this regard are that massive objects will have a limiting speed of c (the speed of light), and that their "relativistic mass" will increase with their speed. This latter property implies that the newtonian equations for conservation of energy and momentum will be violated by enormous factors for objects with velocities approaching c, and that the corresponding formulas of SR must be used. This has become so obvious in particle experiments that few experiments test the SR equations, and virtually all particle experiments rely upon SR in their analysis. The exceptions are primarily early experiments measuring energy as a function of speed for electrons and protons.
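The size of the deviation from the newtonian formulas can be illustrated with a short sketch (the electron mass and speeds used are illustrative):

```python
import math

C = 299_792_458.0  # m/s

def gamma(v):
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def ke_sr(m, v):
    """Relativistic kinetic energy: (gamma - 1) m c^2."""
    return (gamma(v) - 1.0) * m * C ** 2

def ke_newton(m, v):
    """Newtonian kinetic energy, valid only for v << c."""
    return 0.5 * m * v ** 2

M_E = 9.109e-31  # kg, electron mass
for beta in (0.1, 0.9, 0.99):
    v = beta * C
    ratio = ke_sr(M_E, v) / ke_newton(M_E, v)
    print(f"beta = {beta}: SR/newtonian kinetic-energy ratio = {ratio:.2f}")
# The discrepancy grows without bound as v -> c.
```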
Note that the nomenclature has changed over the past century, and current literature focusses more on rest mass than relativistic mass because rest mass is an invariant property of an object. In this article, use of the word "mass" means rest mass. See also this FAQ page.
Electron–electron elastic scattering
The dispersion relation expresses conservation of probability, and its validity at different energies is related to relativistic kinematics.
In newtonian mechanics, when two equal-mass objects scatter elastically, in the rest frame of one initial particle the two outgoing particles always travel at right angles to each other. In SR, that angle can be much less than a right angle, and in this experiment it is strikingly less than 90° (see their Fig. 3).
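In the symmetric case the two outgoing particles each make angle θ with the beam, where tan²θ = 2/(γ+1) and γ is the incident particle's lab-frame Lorentz factor. A minimal sketch (γ = 10 is illustrative):

```python
import math

def symmetric_opening_angle_deg(gamma_lab):
    """Opening angle (degrees) between the two outgoing particles for
    symmetric equal-mass elastic scattering off a target at rest,
    using tan(theta1) * tan(theta2) = 2 / (gamma + 1)."""
    theta = math.atan(math.sqrt(2.0 / (gamma_lab + 1.0)))
    return math.degrees(2.0 * theta)

print(symmetric_opening_angle_deg(1.000001))  # ~ 90 deg: the newtonian limit
print(symmetric_opening_angle_deg(10.0))      # ~ 46 deg: strikingly less than 90
```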
A comparison of neutrino and muon velocities, at Fermilab.
A comparison of muon, neutrino, and antineutrino velocities over a range of energies, at Fermilab.
Relative velocity measurements of 15 GeV electrons and gammas. No significant difference was observed within ~2 parts in 10⁷. See also Brown et al.
An analysis combining the results of several experiments gives the result that the Lorentz limiting speed must be equal to the speed of light to within 12 parts per million.
A comparison of neutrino and photon speeds from supernova SN1987A puts a limit of about 1 part in 10⁸ on their speed difference.
In the early 20th century there was an alternative theory by Abraham that is now little known, because these experiments rejected it in favor of SR. A critical review of the experimental evidence concerning the Lorentz model compared to the Abraham model was given in: Farago and Jánossy, Il Nuovo Cim. Vol. 5, no. 6, pg 1411 (1957).
There were several discussions about the conclusions from Kaufmann's experiments and his data analysis. See for instance: M. Planck, "Die Kaufmannschen Messungen der Ablenkbarkeit der beta-Strahlen in ihrer Bedeutung für die Dynamik der Electron", Verhandlungen der Deutschen Physikalischen Gesellschaft, 8, 1906; and M. Planck, "Nachtrag zu der Besprechung der Kaufmannschen Ablenkungsmessungen", Verhandlungen der Deutschen Physikalischen Gesellschaft, 9, 1907.
Measurement of m/e and v for three beta particles (electrons) from Radium. Supports the Lorentz model over the Abraham model by > 10 σ.
Measurements of speed vs. energy for 0.5–15 MeV electrons.
The beam power at SLAC is measured using temperature rise in a calorimeter, for electrons of ~17 and 20 GeV and beam currents up to ~15 microamperes. Their results confirm SR with a resolution of about 30%, and are "many orders of magnitude larger than predicted by the theory of autodynamics", of which Carezani is the author (and also member of this experimental group).
At this time there are no direct tests of length contraction, as measuring the length of a moving object to the precision required has not been feasible. There is, however, a demonstration that it occurs:
A current-carrying wire is observed to be electrically neutral in its rest frame, and a nearby charged particle at rest in that frame is unaffected by the current. A nearby charged particle that is moving parallel to the wire, however, is subject to a magnetic force that is related to its speed relative to the wire. If one considers the situation in the rest frame of a charge moving with the drift velocity of the electrons in the wire, the force is purely electrostatic due to the different length contractions of the positive and negative charges in the wire (the former are fixed relative to the wire, while the latter are mobile with drift velocities of a few mm per second). This approach gives the correct quantitative value of the magnetic force in the wire frame. This is discussed in more detail in: Purcell, Electricity and Magnetism. It is rather remarkable that relativistic effects for such a tiny speed explain the enormous magnetic forces we observe.
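This argument can be checked numerically. The current, distance, and drift speed below are assumed illustrative values; to first order in v/c the lab-frame magnetic force and the drift-frame electrostatic force come out equal:

```python
import math

MU0 = 4e-7 * math.pi    # vacuum permeability
EPS0 = 8.854e-12        # vacuum permittivity
C = 299_792_458.0       # m/s
Q_E = 1.602e-19         # C, elementary charge

I = 10.0    # A, assumed current in the wire
r = 0.01    # m, assumed distance of the test charge from the wire
v = 1e-3    # m/s, assumed drift speed; the test charge moves with the electrons

# Lab (wire) frame: purely magnetic force on the moving test charge.
F_magnetic = Q_E * v * MU0 * I / (2.0 * math.pi * r)

# Test charge's rest frame: to first order, the different length contractions
# leave the wire with net line charge lambda = I * v / c^2 -- a purely electric force.
lam = I * v / C ** 2
F_electric = Q_E * lam / (2.0 * math.pi * EPS0 * r)

print(F_magnetic, F_electric)   # equal to first order in v/c
print((v / C) ** 2 / 2.0)       # gamma - 1 ~ 5.6e-24, yet the force is measurable
```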
The CPT theorem is a general property of quantum field theories that states (loosely) that any system must behave the same if one applies the CPT transform to it: invert all charges (C, charge conjugation), invert all spatial axes (P, parity inversion), and invert the direction of time (T, time inversion). While one cannot actually do any of that in the real world, one can perform experiments in which particles are replaced by antiparticles (C), one looks at situations in which left and right are interchanged (P), and the particles travel along similar paths but in opposite directions and have opposite spin polarizations (T).
Lorentz Invariance is the technical term for the statement that SR is valid. Any violation of CPT invariance implies a violation of Lorentz invariance; theories without Lorentz invariance need not have CPT invariance.
A review of various limits, terrestrial and astrophysical.
A review article.
By combining results from two interferometers made of different materials, located in different hemispheres, rotating on tables, they are able to put limits on more parameters of the SME than otherwise. They have also improved both the statistics and systematic errors of the individual interferometers.
This is an incredibly clever experiment using ⁷Li⁺ ions in a storage ring, synchronizing a single laser to a 2-level transition via Doppler shifts in both directions. The fractional accuracy in frequency is 10⁻⁹, and the limit on deviation from the relativistic formula is 2.2 × 10⁻⁷ for speeds a substantial fraction of c.
These neutrino oscillations display no significant sidereal variation.
Note, however, that the LSND results have been a puzzle for several years, as they appear to be inconsistent with other experiments. Just recently they were directly contradicted by the Mini-BooNE results from Fermilab (May 2007, no reference yet).
Using the published results of the Liquid Scintillator Neutrino Detector (LSND) experiment, an estimated nonzero value (3 ± 1) × 10⁻¹⁹ GeV for a combination of coefficients for Lorentz violation is obtained. This lies in the range expected for effects originating from the Planck scale in an underlying unified theory.
Search for sidereal variation in the frequency difference between co-located ¹²⁹Xe and ³He Zeeman masers sets the most stringent limits to date on leading order Lorentz and CPT violation. By locating the two masers in the same enclosure they eliminate many systematic errors, and are looking at variations at the level of 100 nHz (10⁻⁷ Hz!).
If the speed of light has an energy dependence c(E) ~ c₀(1 − E/M), a limit on M is obtained: M > 0.9 × 10¹⁶ GeV/c².
Certain coefficients for Lorentz violation are bounded to less than 3 × 10⁻³².
The original GZK papers.
Fizeau measured the speed of light in moving media, most notably moving water. Fresnel proposed a "drag coefficient" that putatively described how strongly a moving material medium "dragged" the aether. SR predicts no aether but does predict that the speed of light in a moving medium differs from the speed in the medium at rest, by an amount consistent (to within experimental resolutions) with these experiments and with the Fresnel drag coefficient.
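The agreement is easy to check numerically: SR's exact velocity-addition formula reproduces Fresnel's drag coefficient to first order in v/c. The water speed below is illustrative, roughly the scale Fizeau used:

```python
C = 299_792_458.0  # m/s

def light_speed_sr(n, v):
    """Exact SR velocity addition for light in a medium (index n) moving at
    speed v along the light path: (c/n + v) / (1 + v/(n*c))."""
    u = C / n
    return (u + v) / (1.0 + u * v / C ** 2)

def light_speed_fresnel(n, v):
    """Fresnel's drag formula: c/n + v * (1 - 1/n^2)."""
    return C / n + v * (1.0 - 1.0 / n ** 2)

n, v = 1.33, 7.0   # water, moving at 7 m/s
drag_sr = light_speed_sr(n, v) - C / n
drag_fresnel = light_speed_fresnel(n, v) - C / n
print(drag_sr, drag_fresnel)   # ~ 3.04 m/s each: partial "drag", not the full 7 m/s
```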
This is a repetition of Fizeau's experiment, not the original MMX experiment!
A critical review of Zeeman's experiments is in: Lerche, American Journal of Physics Vol. 45, pg 1154 (1977).
A more accurate, modern repetition. Includes a moving solid, liquid, and gas.
Measurements with a glass plate moving perpendicular to the light path. The experiment measures no significant effect, but is not sensitive enough to detect the small effect predicted by SR.
Sagnac constructed a ring interferometer and measured its fringe shifts as it is rotated. Contrary to some uninformed claims, this experiment can be fully analyzed using SR, and the results are consistent with SR.
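The expected effect follows from the standard Sagnac formula: the fringe shift is 4AΩ/(λc), where A is the enclosed area and Ω the rotation rate projected onto the loop normal. A sketch using Michelson–Gale-like parameters (the area, latitude, and wavelength below are assumed round numbers):

```python
import math

C = 299_792_458.0  # m/s

def sagnac_fringe_shift(area, omega, wavelength):
    """Fringe shift of a ring interferometer: 4 * A * omega / (lambda * c)."""
    return 4.0 * area * omega / (wavelength * C)

area = 320.0 * 640.0                    # m^2, ground loop comparable to Michelson-Gale
omega_earth = 2.0 * math.pi / 86164.0   # rad/s, one rotation per sidereal day
omega_eff = omega_earth * math.sin(math.radians(42.0))  # projected at ~42 deg latitude
shift = sagnac_fringe_shift(area, omega_eff, 570e-9)
print(f"{shift:.2f} fringes")  # ~ 0.23: small but detectable, as Michelson-Gale found
```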
The classic papers by Sagnac.
A review article. This is probably the most useful reference on ring interferometers and the Sagnac effect.
A more recent review, and description of a much more accurate ring interferometer.
The Sagnac effect using electrons.
They observed the Sagnac effect using GPS satellite signals observed simultaneously at multiple locations around the world. See GPS.
Various additional papers on the analysis of rotating systems.
A discussion using GR.
A detailed and varied series of modern measurements using a highly sensitive ring laser. A review paper is: Stedman, Rep. Prog. Phys. 60: pg 615–688 (1997), http://www.phys.canterbury.ac.nz/research/laser/files/ringlaserrpp.pdf.
This is essentially the Sagnac experiment, but on a much larger scale. They constructed a ring interferometer fixed on the ground with a size of 0.2 mile by 0.4 mile (about 320 m by 640 m). They did indeed detect Earth's rotation.
A modern large ring laser.
The value g is the gyromagnetic ratio of a particle; the Dirac equation predicts exactly g = 2 for a pointlike spin-½ particle with charge. So g−2 measures the anomalous magnetic moment of the particle, and can be used (via QED) as a test of SR.
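A sketch of how the anomaly appears experimentally: in a storage ring the spin precesses relative to the momentum at ω_a = a·eB/m, with a = (g−2)/2. The anomaly and field values below are assumed round numbers:

```python
import math

E_CHARGE = 1.602e-19   # C, elementary charge
M_MUON = 1.883e-28     # kg, muon mass

a_mu = 1.166e-3        # muon anomaly (g - 2) / 2, approximate
B = 1.45               # T, assumed storage-ring field

omega_a = a_mu * E_CHARGE * B / M_MUON   # rad/s, anomalous precession rate
print(f"~ {omega_a / (2.0 * math.pi) / 1e3:.0f} kHz")  # the measurable "wiggle" frequency
```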
A discussion of the basic technique of using measurements of the anomalous magnetic moment of electrons and muons as a test of SR, and an analysis of some low-energy electron data.
Electron and muon measurements.
Electron measurements up to 12 GeV.
Measurements of the anomalous magnetic moment of muons.
The Brookhaven experiment to measure g−2 for muons, http://www.g-2.bnl.gov/.
While not really an experiment, and not really any sort of test of SR, the GPS is an interesting and useful system in which relativity plays an important part. In particular it has become the best and most economical method of highly accurate time transfer around the globe.
US Naval Observatory (USNO) GPS Operations. Includes an overview of the GPS and current details of its operation.
A tutorial and general overview of the GPS.
A large collection of links to GPS resources, tutorials, and references. More up to date than most other references in this section.
They discuss in detail how time and frequency comparisons among the various standards organizations of the world can be performed with an accuracy of about 1 part in 1014, using GPS satellites.
They discuss how the GPS coordinate system is used on and near Earth. They also describe two different comparisons between USNO and the Paris Observatory.
The "discrepancy" they mention is merely the Sagnac effect, and observations agree with predictions.
The corner reflectors placed on the moon by the Apollo astronauts are used to verify GR with a net accuracy of 15 cm in the telescope-to-reflector distance.
The CMBR is a diffuse and almost isotropic microwave radiation that apparently suffuses all of space. It is generally thought to be a relic of the big bang. While not really a test of SR, CMBR measurements may be of interest to some readers—there is a unique locally inertial frame near Earth in which its dipole moment is zero; this frame moves with speed ~370 km/s relative to the Sun.
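To first order the dipole amplitude is just ΔT = (v/c)·T; plugging in the quoted speed reproduces the measured ~3.35 mK dipole:

```python
C = 299_792_458.0   # m/s
T_CMB = 2.725       # K, CMBR monopole temperature
v = 370e3           # m/s, speed of the Sun relative to the zero-dipole frame

dipole_mK = (v / C) * T_CMB * 1e3
print(f"predicted dipole amplitude ~ {dipole_mK:.2f} mK")  # ~ 3.36 mK
```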
Detected an anisotropy in the CMBR, and determined it is primarily a dipole anisotropy which would be zero in a frame moving at 390 ± 60 km/s with respect to Earth.
Measurement of the CMBR by the COBE satellite's FIRAS instrument.
Microkelvin variations in the CMBR are described. Note that these are after the dipole is subtracted out (i.e. these variations are measured in the "zero-dipole frame" of the CMBR moving about 370 km/s with respect to Earth).
They present a measurement of the CMBR for a distant object with z = 1.776 (z is the redshift, often used as a measure of distance from Earth).
They present a measurement of the CMBR for a distant object with z = 1.9731.
WMAP is a more recent set of satellite measurements of the CMBR. It has considerably better resolution than previous measurements.
Uniformity to 1 part in 10⁴ is shown, subsequent to an epoch corresponding to less than 5% of the current age of the universe.
Quasar spectra with redshifts z ~0.2–3.7 are used to put a limit on the rate of change of alpha of about 4 × 10⁻¹⁴ per year.
New limits from measurements in atomic hydrogen.
The charge on sulfur hexafluoride is less than 2 × 10⁻¹⁹ times the charge on an electron.
It is clear that most if not all of these experiments have difficulties that are unrelated to SR. In some cases the anomalous experiment has been carefully repeated and been shown to be in error (e.g. Miller, Kantor, Munera); in others the experimental result is so outrageous that any serious attempt to reproduce it is unlikely (e.g. Esclangon); in still other cases there are great uncertainties and/or unknowns involved (e.g. Marinov, Silvertooth, Munera, Cahill, Mirabel), and some are based on major conceptual errors (e.g. Marinov, Thimm, Silvertooth). In any case, at present no reproducible and generally accepted experiment is inconsistent with SR, within its domain of applicability. In the case of some anomalous experiments there is an aspect of this being a self-fulfilling prophecy (being inconsistent with SR may be considered to be an indication that the experiment is not acceptable). Note also that few if any standard references or textbooks even mention the possibility that some experiments might be inconsistent with SR, and there are also aspects of publication bias in the literature—many of these papers appear in obscure journals. Many of these papers exhibit various levels of incompetence, which explains their authors' difficulty in being published in mainstream peer-reviewed physics journals; the presence of major peer-reviewed journals here shows it is not impossible for a competently performed anomalous experiment to get published in them.
There is a common thread among most of these experiments: the experimenters make no attempt to measure and quantify the systematic effects that could affect or mimic the signal they claim to observe. And none of them perform a comprehensive error analysis, which is necessary for any experiment to be believable today—especially ones that purport to overturn the foundations of modern physics. For Esclangon and Miller this is understandable, as during their lifetimes the use of error bars and quantitative error analyses was not the norm; the modern authors have no such excuse. In several cases (Esclangon, Miller, Marinov, Torr and Kolen, Cahill) it is possible to perform an error analysis which shows that the experiment is not inconsistent with SR after all.
Another common thread among many of these experiments is the claim of "agreement with Miller's result" (Kantor, Marinov, Silvertooth, Torr and Kolen, Munera, Cahill). Miller was the first to claim to have measured the "absolute motion of the Earth", and his result has achieved a sort of "cult status" among people who doubt the validity of SR. The paper referenced below in the discussion of Miller's results shows conclusively that his result is wrong, and explains why in detail. So claims of "agreement with Miller" generate doubts about the validity of experiments making such claims (how likely is it that a valid result would "agree" with a demonstrably bogus result?).
A key point is: if one is performing an experiment and claiming that it completely overthrows the foundations of modern physics, one must make it bulletproof or it will not be believed or accepted. At a minimum this means that a comprehensive error analysis must be included, direct measurements of important systematic errors must be performed, and whatever "signal" is found must be statistically significant. None of these experiments come anywhere close to making a convincing case that they are valid and refute SR. This is based on a basic and elementary analysis of the experimenters' technique, not on the mere fact that they disagree with the predictions of SR. Most of these experiments are shown to be invalid (or at least not inconsistent with SR) by a simple application of the elementary error analysis or other arguments relating to error bars, showing how important that is to the believability of a result—the authors merely found patterns:
Amateurs look for patterns, professionals look at error bars.
All that being said, I repeat: as of this writing there are no reproducible and generally accepted experiments that are inconsistent with SR, within its domain of applicability.
Some people claim to see a "signal" in this iconic experiment. Indeed there does appear to be a sinusoidal variation in their plots, with period ½ turn, just as any real signal would be. But an elementary error analysis shows these variations are not statistically significant. Appendix I of arXiv:physics/0608238 discusses this experiment, including the error analysis, and Section III of that paper shows why their noise appears to be a sinusoid with period ½ turn. There is no justification for claims of a real "signal" here.
He observed a systematic variation in the position of an optical image, correlated with sidereal angle (also called, incorrectly, "sidereal time", and nowadays called Earth rotation angle). This result is inconsistent not only with SR, but is also inconsistent with the hypothesis that space is Euclidean and light travels in straight lines.
His "signal" is actually composed of points that are an average of several thousand measurements each, and the magnitude of the signal is about 25 times smaller than the resolution of the individual measurements; an elementary error analysis shows that his result is not significantly different from no variation (the prediction of both SR and Euclidean geometry). So there is no reason to believe this result is inconsistent with SR. Also see Experimenter's Bias below—this is a clear example of over-averaging.
This is a laborious repetition of the Michelson–Morley experiment (MMX), with observations taken over a decade. He reports a net aether drift of about 10 km/s, and describes the variation in speed and direction in terms of the motions of the Sun and Earth combined with a net aether drift.
This experiment was re-analyzed in: R.S. Shankland, S.W. McCuskey, F.C. Leone and G. Kuerti, "New Analysis of the Interferometric Observations of Dayton C. Miller", Rev. Mod. Phys. 27 pg 167–178 (1955). They re-examined Miller's original data logs and explained his non-null result as partly due to statistical fluctuations and partly due to local temperature conditions. Their re-analysis is consistent with a null result at all epochs during a year. They gave no justification for any correlation with sidereal angle such as Miller reported.
Remarkably, the raw data of this experiment have survived (copies can be ordered from the C.W.R.U. archives). They were also re-analyzed in: T.J. Roberts, "An Explanation of Dayton Miller's Anomalous 'Ether Drift' Result", arXiv:physics/0608238. This paper explains in detail how and why Miller was fooled (using digital signal processing techniques), and performs an error analysis showing his results are not statistically significant. It also presents a new analysis that models his systematic drift and obtains a zero result with an upper bound on "aether drift" of 6 km/s (90% confidence). In short, this is every experimenter's nightmare: Miller was unknowingly looking at statistically insignificant patterns in his systematic drift that precisely mimicked the appearance of a real signal. While Miller himself could not have known this, there is no reason to believe or accept his anomalous result today.
Dozens of other papers discuss and/or attempt to "re-analyze" Miller's data. They all claim to find some real signal in his data. They are all worthless as they do not perform the elementary error analysis of his raw data (see Section II of arXiv:physics/0608238). Miller's anomalous result comes from averaging data—the elementary error analysis is indisputable and shows that his result is not statistically significant. Some modern authors even perform a complicated statistical analysis on plots of his run results versus sidereal angle, proclaiming there is a "significant signal"—they forgot to look at the raw data and compute the statistical significance of each run's result: those are not significant, which destroys their house of cards.
There is also an aspect of experimenter's bias in Miller's original result (and in the modern "re-interpretations" that find a "signal"). He clearly over-averaged his data, and the "signal" he (and others) found is an order of magnitude smaller than the resolution with which his raw data points were recorded. It is a fact of arithmetic that when averaging data one will obtain an answer, but an error analysis is required to determine whether or not it is statistically significant. People unfamiliar with modern experimental physics can impose their personal desires onto Miller's plots and find a "signal" by ignoring the huge scatter of the individual runs and just looking at the averages. The quantitative error analysis shows this approach is woefully inadequate and the "signal" found this way is not significant.
So there is no reason to believe or accept Miller's anomalous result today.
Criticized in: Burcev, Phys. Lett. 5 no. 1 (1963), pg 44.
Repeated by: Babcock and Bergman, J.O.S.A. 54 (1964), pg 147.
Repeated by: Rotz, Phys. Lett. 7 no. 4 (1963), pg 252.
Repeated by: Waddoups et al., JOSA 55, pg 142 (1965).
The consensus is now that Kantor's non-null result was due to his rotating mirrors dragging the air; repetitions in vacuum yield a null result consistent with SR.
This is a series of experiments using mechanically rotating mirrors and apertures that claim to measure a local anisotropy in the one-way speed of light.
Marinov thinks his rotating mirrors and apertures provide an "absolute synchronization" which can be used to measure the one-way speed of light; this is not so, and is a major conceptual error in his design: they merely provide synchronization in the rest frame of his lab. He is also conspicuously bad about ignoring errors and resolutions, to the point of being ridiculous. Simple estimates based on his apertures and rotation rates show that his apparatus is incapable of measuring what he claims, by a factor of 1,000 or more. His apparatus inherently averages over several microseconds (or more), and he completely ignores this basic fact and claims to be measuring the speed of light over a distance of 1.4 metres (!). And he does not bother to monitor various environmental factors (temperature, humidity, barometric pressure) that could easily induce the variations he observes. There is no reason to believe his experiments have any value at all.
This is a series of experiments using variations on a novel interferometer in which Silvertooth claims to have observed the aether. The first paper is simply a description of the special phototube and its usage in measuring Wiener fringes. The second and third present different variations of Silvertooth's basic double interferometer; both claim to observe the aether.
The experiments are marred by two clear instrumentation effects: there is feedback into the laser, and the multi-mode lasers employed could mimic the effect seen due to the interrelationships among the different modes. And the apparatus is excessively finicky—an attempt to repeat the measurement using his apparatus failed to see any effect at all (unpublished, see Publication Bias). In addition, the analysis presented is downright wrong—the anisotropy in the speed of light postulated in the 2nd and 3rd papers is completely unable to account for the observations (two different erroneous analyses are presented in the last two papers, making the same elementary mistake both times: not considering the entire light path). Indeed, their postulated transforms belong to the class of theories that are experimentally indistinguishable from SR (see Test Theories above).
This is an experiment using two atomic clocks separated by 500 metres connected by an underground coaxial cable, which looks for sidereal variations in the phase between them. Variations in that phase are interpreted as variations in the one-way speed of propagation in the cable. This experiment is quite similar to those of Krisher et al. and Cialdea referenced above (both of whom reported null results).
It is not clear why some people interpret this result as inconsistent with SR—certainly the authors themselves do not do so. Their monitoring of systematic effects such as temperature and barometric pressure (both of which affect the propagation speed of their cable) is woefully inadequate, and such uncontrolled and unmonitored environmental effects could easily explain the tiny variations they observe. Those variations are about a factor of 100 times smaller than the relative drift of the clocks for zero separation (which they tried to subtract in their analysis by assuming it is linear, an assumption that is unlikely to hold to better than 1% as they require). Their data analysis method is also inadequate, as it involves averaging over 23 days (averaging data like this is almost never justified). Moreover, they omitted an error analysis related to that averaging, implying they do not understand the terrible implications of such averaging (see arXiv:physics/0608238 for an example of how disastrous it was for Miller to perform such averaging). They also have some days during which no variation was detected—that is consistent with an unmonitored environmental effect, and is inconsistent with any sort of cosmic effect. Because of the great variability in their diurnal variations, and the inadequacy of their monitoring and analysis, there is no reason to believe this result is inconsistent with SR; the experimenters themselves considered their result "preliminary".
The simple observation of a visibly superluminal expansion or motion of a distant object does not necessarily imply that anything actually exceeds c locally. See for instance: Gabuzda, Am. J. Phys. 55 no. 3 (1987), pg 214. If a high-gamma (subluminal) object is moving at a small angle to our line of sight, it can appear to be going faster than light, but is not. This is different from any uncertainties in distance scales.
This experiment uses antennas mounted on a rotating disk inside a pair of metal enclosures to attempt to measure the transverse Doppler effect. Unfortunately the author fails to realize that he has merely constructed two closed RF cavities with a rotating coupler between them. That is, the "antennas" he mounted on the rotating disk are not free, and reflections from the surrounding walls completely dominate the RF pattern inside his apparatus, setting up wave patterns in what amounts to a coupled pair of untuned RF cavities. As the input and output of the enclosure have no relative motion, no frequency shift is predicted, in agreement with his measurement. This experiment does not actually test transverse Doppler at all, and it is fully consistent with SR.
This experiment is a repeat of the Kennedy–Thorndike experiment, but with equal-length arms at 90°. The interferometer is fixed to Earth at a latitude quite close to the equator.
When Kennedy and Thorndike performed a similar experiment more than 70 years ago, they realized they would be unable to distinguish between temperature effects and orientation effects, so they took great pains to keep the apparatus temperature constant to 0.001°C. In contrast, Múnera et al. did not make any attempt to control temperature or humidity effects (both are quite large in their room). They measured the temperature with a resolution of just 0.2°C, and attempted to correct for the large temperature and humidity changes; a simple estimate shows that an unmeasurable drift of 0.2°C between the two arms can cause a fringe shift comparable to their "signal". Humidity differences can generate similarly large fringe shifts. Because they did not insulate each arm's light path from the room or from each other, it is clear that such variations did occur between the two arms (variations in the room itself were much larger). Because of inadequate environmental monitoring and control, there is no reason to believe this measurement is inconsistent with SR.
This experiment measures the round-trip delay of RF signals that go out via an optical fiber and come back via coaxial cable, minus the delay from signals that go out via coaxial cable and come back via optical fiber. The apparatus has 5-metre cables, and is fixed to Earth with the cables aligned north–south at a latitude of 38° S. He claims that optical fiber is insensitive to "absolute motion" but coax cable can observe it, and this combination maximizes the "signal".
Cahill made a reasonable effort to minimize the effects of temperature variations on his apparatus, but then assumed that no systematic error remained, and did not monitor the environmental factors (temperature, barometric pressure) that can affect his apparatus. By temporarily arranging the cables to form a circle, he used a test setup that eliminates any real signal and permits him to directly measure his systematic errors, which is an important thing to do. But then he inexplicably ignores that measurement (the last 4 hours of his Fig. 14). The apparatus drifts up and down by 8 ps during this signal-canceling configuration, which is roughly half the magnitude of his "signal". The presence of such a large, unmonitored, and unknown drift completely invalidates his conclusions. For instance, there are several 24-hour periods in his data plots during which the variations are less than the 8 ps during that short measurement of the systematic drift—that is consistent with the entire "signal" being this unknown systematic drift.
Cahill has a novel way of dealing with the clear and obvious variations in his data at a given orientation (i.e. data points 24 hours apart): he calls this "gravitational waves" and claims they are an additional part of his "signal" (these "gravitational waves" are from his theory, not GR). The presence of comparable variations when the cables were configured in a circle invalidates this claim, as that cancels any real signal. For this apparatus any real signal corresponding to "absolute motion" must have a period of 24 hours, and it is clear from his plots that there is no significant signal present (in his Fig. 15, the variance of the differences between points separated by 24 hours is comparable to the variance of the entire plot and of the signal-canceling configuration; this is just an application of the elementary error analysis). Calling the variations at a given orientation "gravitational waves" does not change the fact that these variations are comparable to the orientation dependence, which is therefore not statistically significant. In order to separate an orientation-dependent signal from the "gravitational waves" he claims, it is necessary to perform an experiment that can actually separate them, and this one cannot possibly do so. That requires an apparatus capable of separating systematic environmental effects from the data, and also capable of varying its orientation on a time scale smaller than that of the "gravitational waves" (remember these are not the gravitational waves of GR).
Cahill emphasizes that his experiment agrees with Miller's results and with an unpublished experiment by de Witte. But his comparisons are without error bars and are therefore worthless. Error bars for Miller's data can be computed, and error bars for de Witte's and Cahill's data can be estimated, and each of the three results is not significantly different from null (for "absolute motion"; one must ignore his "gravitational waves" for this comparison). So they actually are consistent with each other, in a way completely unanticipated by Cahill: all three are consistent with a null "absolute motion" result! Do not be deluded by his Fig. 18, as Cahill does not display any error bars, and NONE of those variations are statistically significant; like Miller he is unknowingly looking at insignificant patterns in systematic drifts.
In short, there is no reason to believe this result because: a) the systematic effects cannot be separated from the "signal" while taking data, b) the brief measurement of systematic drift (in a signal-canceling configuration) is comparable to the "signal", c) the data for "absolute motion" are not significantly different from zero, and d) the "agreement" with other experiments is not at all what he thinks it is.
This experiment looks for a variation in the transverse position of the laser light diffracted by a grating as the orientation of the apparatus is changed.
The authors apparently think their laser came out of a textbook and provides a perfectly coherent monochromatic light source. Real lasers are not so perfect, and the difference is important. As they did not describe their laser other than saying it is He–Ne, I'll use generic values for a typical classroom or laboratory He–Ne laser: such a laser has a line width of about 1.5 parts per million, with 3–5 longitudinal modes among which the power is shared, and with the fraction in each mode varying widely during operation. Such a laser also has a beam divergence of about 2 milliradians, and a pointing accuracy of about 0.1 milliradians. These effects generate systematic uncertainties in the diffraction peak position comparable to their "signal"; they were neither controlled nor monitored by the experimenters.
The authors provided no error analysis, despite the fact that averaging is central to their analysis. From their plot of raw data (their Fig. 3), it appears to me that the data are quantized quite coarsely, and are then overaveraged to obtain their "result". The averaged data have unexplained variations (noise), unrelated to orientation, that are about half the size of the variations in their "signal". One can obtain an estimate of the variance of their data from their Fig. 7, and their data plots have variations only about double that variance. So their "signal" is of marginal significance at best.
Without a careful measurement of their obvious systematic errors, and a comprehensive error analysis, there is no reason to believe the variations they observe are significant.
When multiple measurements of a single quantity are made, their mean provides the best estimate for the actual value of the quantity being measured. But this value is not perfect, and there is still uncertainty in the estimate. A histogram of the original measurements can provide an error estimate for the mean: the best estimate for the error bar on the mean comes from the r.m.s. deviation of the histogram (i.e. its standard deviation σ). If the original values are all statistically independent, and there are N of them, then the best estimate of the error bar on the mean is σ/√N (this comes from the central limit theorem of statistics). That is a lower bound for the error bar on the mean. But if the original measurements are not truly independent, such as when some systematic effect is present, then the error bar on the mean will be larger. For a purely systematic error, the error bar on the mean is σ (independent of N), because one does not know which of the original measurements is correct. This is not necessarily an upper bound on the error bar because additional errors could be present, such as calibration errors of the instrumentation.
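This σ/√N estimate is easy to verify numerically. Here is a minimal sketch (the true value, noise level, and sample size are all made up for illustration):

```python
import random
import statistics

random.seed(42)

# Simulate N independent measurements of a quantity whose true value is 10.0,
# each smeared by Gaussian measurement noise with sigma = 1.0.
true_value = 10.0
N = 400
data = [random.gauss(true_value, 1.0) for _ in range(N)]

mean = statistics.fmean(data)
sigma = statistics.stdev(data)      # r.m.s. spread of the histogram of measurements
error_on_mean = sigma / N ** 0.5    # lower bound from the central limit theorem

print(f"mean of {N} measurements = {mean:.3f}")
print(f"sigma of histogram       = {sigma:.3f}")
print(f"error on the mean        = {error_on_mean:.3f}")
```

With these numbers the mean lands within a few hundredths of the true value, while any single measurement is only good to about ±1; this improvement is exactly what independence buys, and what a shared systematic error takes away.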
It is a fact of arithmetic that when averaging data one will obtain an answer, but the above error analysis is required to know whether or not it is significant. As a rule of thumb, a signal that is 5σ (or more) from zero is difficult or impossible to ignore; a "signal" that is less than 3σ from zero is unconvincing at best. The challenge is usually in determining what the σ actually is; but for an average, σ/√N gives a lower bound that is indisputable.
Usually the averaging of data is unwarranted; in most cases one can analyze the entire data sequence directly, normally by fitting a parameterized theoretical expression to it. So if an experiment measures a series of fringe positions as the apparatus is rotated, the theoretical fringe position should be parameterized as a function of orientation, and the parameters fit to the entire measurement sequence. A parameterization of backgrounds and/or systematic errors should be included. Such a fit inherently provides error bars on the resulting parameter values. This is vastly better than averaging the data taken at each orientation and looking for patterns in the averages: such averaging introduces artifacts, it cannot distinguish between orientation dependence and systematic drifts, and the fit inherently accounts for correlations among the parameters that averaging ignores. See arXiv:physics/0608238 for examples of both the artifacts introduced by averaging (Section III), and an analysis performed without such averaging (Section IV).
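As a sketch of this fitting approach, with simulated data (not data from any experiment discussed here): the model below contains an orientation-dependent cos 2θ/sin 2θ pair plus a linear drift term, fit by least squares to the whole sequence at once; the model, noise level, and drift are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated measurement sequence: fringe position vs. orientation angle,
# containing only a linear systematic drift plus noise (no real signal).
n = 200
theta = np.linspace(0.0, 8 * np.pi, n)     # apparatus rotated through many turns
t = np.linspace(0.0, 1.0, n)               # time, for the drift term
y = 0.3 * t + rng.normal(0.0, 0.05, n)     # drift + noise, zero true signal

# Model: y = a*cos(2*theta) + b*sin(2*theta) + c*t + d
# (the cos/sin pair parameterizes an orientation-dependent signal of unknown phase)
A = np.column_stack([np.cos(2 * theta), np.sin(2 * theta), t, np.ones(n)])
params, *_ = np.linalg.lstsq(A, y, rcond=None)

# Parameter error bars from the residuals and the normal-equations covariance.
resid = y - A @ params
sigma2 = resid @ resid / (n - A.shape[1])
errs = np.sqrt(np.diag(sigma2 * np.linalg.inv(A.T @ A)))

amp = np.hypot(params[0], params[1])       # fitted orientation-signal amplitude
print(f"signal amplitude = {amp:.4f} (parameter errors ~ {errs[0]:.4f})")
print(f"fitted drift     = {params[2]:.3f}")
```

The fit cleanly separates the drift from any orientation dependence, and the fitted amplitude comes with an error bar showing it is consistent with zero; averaging the data at each orientation would instead fold the drift into the "signal".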
Error bars have become such an important part of modern experimental physics that it is not uncommon to make multiple measurements of a quantity, or to split one sequence of measurements into multiple smaller sequences, specifically so the error bar on the result can be estimated.
Note that the word "error" here is standard terminology, and is used in the sense of "uncertainty" rather than "mistake". For well-designed experiments, care is taken to minimize the backgrounds and systematic errors, and major systematic errors are measured; then a comprehensive error analysis is performed and used to quantify the resolutions and significance of the results. For most experiments in this section the authors simply did not do this. Prior to 1950 or so that was common and accepted practice; today it is not acceptable at all.
Experimenter's bias is a phenomenon caused by the inability of human participants in an experiment to remain completely objective, in which the human experimenter directly influences the experiment's outcome based upon his or her personal desires or expectations. It is most commonly a concern in medical and sociological experiments, in which "single-blind" and "double-blind" protocols are usually required. But some physical experiments in which a human observer is required to round measurements off can also be subject to it. In the experiments here, the conditions for this are the combination of a signal smaller than the actual measurement resolution, and an over-averaging of the data used to extract the "signal" from the measurements.
In principle, if a measurement has a resolution of R, and the experimenter averages N independent measurements, the average will have a resolution of R/√N (this is just an application of the error analysis above). This is an important experimental technique used to reduce the impact of randomness on an experiment's outcome. But note that this requires that the measurements be statistically independent, and there are several reasons why that may not be true—if so, the average may not actually be a better measurement, but may merely reflect the correlations among the individual measurements and their non-independent nature.
The most common cause of non-independence is systematic errors (errors affecting all measurements equally, causing the different measurements to be highly correlated, so the average is no better than any single measurement). But another cause can be due to the inability of a human observer to round off measurements in a truly random manner. If an experiment is searching for a sidereal variation of some measurement, and if the measurement is rounded off by a human who knows the sidereal angle of the measurement, and if hundreds of measurements are averaged to extract a "signal" that is smaller than the apparatus's actual resolution, then it should be clear that this "signal" can come from the non-random round-off, and not from the apparatus itself. In such cases a single-blind experimental protocol is required; if the human observer does not know the sidereal angle of the measurements, then even though the round-off is not random, it cannot introduce a spurious sidereal variation.
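The failure of averaging under a shared systematic error is easy to see in a short simulation (all numbers here are arbitrary, chosen only for illustration):

```python
import random
import statistics

random.seed(1)

true_value = 0.0
trials = 2000
N = 100                     # measurements averaged per simulated experiment

def run_trial(systematic_sigma):
    # One experiment: N measurements sharing a single systematic offset,
    # each with independent random noise of sigma = 1.0.
    offset = random.gauss(0.0, systematic_sigma)
    data = [true_value + offset + random.gauss(0.0, 1.0) for _ in range(N)]
    return statistics.fmean(data)

# Spread of the averaged result over many repeated experiments:
spread_indep = statistics.stdev(run_trial(0.0) for _ in range(trials))  # no systematic
spread_syst = statistics.stdev(run_trial(1.0) for _ in range(trials))   # shared offset

print(f"no systematic:   spread of mean ~ {spread_indep:.3f} (expect 1/sqrt(N) = {1/N**0.5:.3f})")
print(f"with systematic: spread of mean ~ {spread_syst:.3f} (stuck near the systematic sigma)")
```

Without the shared offset the averaged result improves as 1/√N; with it, the spread of the average stays near the systematic σ no matter how many measurements are averaged.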
Note that modern electronic and/or computerized data acquisition techniques have greatly reduced the likelihood of such bias, but it can still be introduced by a poorly designed analysis technique. Experimenter's bias was not well recognized until the 1950s and 1960s, and then it was primarily in medical experiments and studies. Its effects on experiments in the physical sciences have not always been fully recognized. Several experiments referenced above were clearly affected by it.
There are two very different aspects of publication bias: the reluctance of journals to publish papers claiming to contradict well-established physics, and the reluctance of experimenters to publish results that merely confirm existing theories (and are therefore considered uninteresting).
In both cases the experimental record in the literature does not fully and accurately reflect the actual experiments that have been performed. Both of these effects clearly affect the literature on experimental tests of SR. This second aspect is one reason why this list of experiments is incomplete; there have probably been many hundreds of unpublished experiments that agree with SR.
Note that this does not include papers that are rejected for other reasons, such as: inappropriate subject or style, major internal inconsistencies, or downright incompetence on the part of authors or experimenters. Such rejections are not bias, they are the proper functioning of a peer-reviewed journal.
My interest in the experimental basis of SR has been piqued by many discussions in the newsgroup sci.physics.relativity about how well SR has or has not been confirmed or refuted. One effect of this is that I have assembled a rather large collection of papers on experimental tests of SR; this FAQ page is in some sense an index to this collection. Most of the descriptions above are my summaries direct from primary sources. In some cases the original paper was unavailable to me and I have relied on secondary sources (primarily the previous version of this FAQ page, and the books by Zhang, by Born, and by von Laue). — Tom Roberts