Discussion:
E = 3/4 mc² or E = mc²? The forgotten Hasenöhrl 1905 work.
Add Reply
rhertz
2024-12-01 00:28:14 UTC
Reply
Permalink
In March 1905, six months before Einstein, the Austrian physicist Fritz
Hasenöhrl published his third and final paper on the relationship
between mass and radiant energy in the same journal, Annalen der Physik,
that later received and published Einstein's relativity papers.
His final paper, a revision of his two earlier ones from 1904, was an
elaborate thought experiment to determine whether the mass of the
radiation in a perfect black-body cavity increased, starting from rest,
while the cavity was slowly accelerated (the same hypothesis Einstein
used in his SR paper to deal with electrons). The final result was the
relationship:

m = 4/3 E/c² , which can be expressed as E = 3/4 mc²

which he found to be independent of the velocity of the cavity.
His work received much attention from the physics community, and won the
Haitinger Prize of the Austrian Academy of Sciences. In 1907 he
succeeded Boltzmann as professor of theoretical physics at the
University of Vienna.

Here is the translation of his first paper, from 1904, in which he derived
m = 8/3 E/c². In the next two papers he corrected some mistakes,
publishing the last one in March 1905, six months before Einstein's
paper deriving E = mc².

https://en.wikisource.org/wiki/Translation:On_the_Theory_of_Radiation_in_Moving_Bodies#cite_note-21

Prior to Hasenöhrl, and starting with J.J. Thomson's 1881 paper, various
works were published correcting Thomson and relating the mass increase to
changes in the electrostatic energy of a moving charged sphere (later
the electron): by FitzGerald, Heaviside, Larmor, Wien and (finally)
Abraham in 1903. Hasenöhrl's work was based on Abraham's, but with
the fundamental change of using the radiant energy inside a moving
perfect black body. This alone was considered a breakthrough in physics,
and Einstein took note of it and replaced Hasenöhrl's thought experiment
(a closed system) with another in an open system, which has
theoretical deficiencies that Einstein was never able to resolve, giving
up in 1942 (his 7th attempt).

Hasenöhrl's remarkable work showed, beyond doubt, that any energy
(electrostatic or radiant) is related to a mass increase, when moving, by
the relationship m = 4/3 E/c².

This fact, known since the days of FitzGerald, could not be explained
correctly until 1922, when Enrico Fermi turned to the problem.

All these works are considered today to be pre-relativistic, even though
the ether is barely mentioned in them.

Hasenöhrl himself used two references (Einstein's jargon didn't exist
yet): a fixed reference frame and a frame co-moving with the
cavity. The popularization of relativity and the ease of having the
relationship E = mc² (even with its restricted range of velocities) made
it much more appealing to the scientific community than having to deal
with E = 0.75 mc².

Moreover, in the following decades the use of c = 1 became popular, and so
did the direct use of E = m, as shown in the calculations done by Chadwick
(1932) to justify that he had proven the existence of the neutron. A
different world would exist if E = 0.75 mc² had been adopted, which
supports what I've maintained for years: that such a simple equation
was adopted for convenience and colluded consensus (like many other
constants and formulae. GR?).

Hasenöhrl's work proved that his equation is independent of the
velocity, and that mass is an invariant property of matter. By
contrast, E = mc² has a limited range of applicability, forcing its use
to ratios v/c << 1. This is because its derivation keeps only the
first term (the quadratic one) of the infinite Maclaurin series
expansion of the gamma factor minus one:

γ - 1 = 1/√(1 - v²/c²) - 1 = 1/√(1 - β²) - 1 = 1/2 β² + 3/8 β⁴ + 5/16 β⁶
+ 35/128 β⁸ + ...

Einstein used L(γ - 1) ≈ (L/2) β² = 1/2 (L/c²) v², from which he
extracted m = L/c² as the mass in the kinetic-energy equation. Neither he
nor von Laue (1911) nor Klein (1919) could get past this very limited
approximation for use in closed systems. Yet the equation stayed (by
consensus, due to its convenience).
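
A quick way to check that expansion and the size of the truncation, as a
minimal sketch in Python (sympy assumed available; β follows the notation
above):

from sympy import symbols, series, sqrt

beta = symbols('beta', positive=True)

# gamma - 1 expanded in powers of beta = v/c
gamma_minus_1 = 1/sqrt(1 - beta**2) - 1
print(series(gamma_minus_1, beta, 0, 10))
# prints: beta**2/2 + 3*beta**4/8 + 5*beta**6/16 + 35*beta**8/128 + O(beta**10)

# Size of the truncation when only the quadratic term is kept,
# as in L*(gamma - 1) ~ (L/2)*beta**2:
for b in (0.01, 0.1, 0.5):
    exact = float(gamma_minus_1.subs(beta, b))
    approx = 0.5 * b**2
    print(f"beta = {b}: relative error = {(approx - exact) / exact:.3%}")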

Hasenöhrl's work, based on his thought experiment, is very detailed, much
more so than the loose arrangement of Einstein's paper. He took care to
present his closed system with severe restrictions:

- A perfect black-body cylindrical cavity, with the walls covered with a
perfectly reflective mirror, an exterior temperature of 0 °C, and two
perfect black-body caps on the ends, tightly fixed and under zero
stress from the forces of radiation and motion.

- A very small acceleration, in order to cause smooth changes in
velocity of the cavity.

- The black-body radiation is treated through its intensity i (he never
mentioned Planck), which he described as a "pencil of energy" forming an
angle θ with the velocity vector.
In modern terms this is the spectral radiance (specific intensity):
radiant power of a surface per unit area, per unit frequency or
wavelength, per unit solid angle.

- This directional quantity differs from Planck's spectral energy density
by a factor of (c/4π), which he recovered when integrating over the
volume of the cavity, giving Planck's original radiation energy density u
(a quick check of this factor follows below).
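
A minimal check of that c/4π factor, assuming an isotropic field of
radiance i inside the cavity (this is the standard isotropic relation
u = 4π i / c, sketched here with sympy, not Hasenöhrl's own derivation):

from sympy import symbols, integrate, sin, pi

i, c = symbols('i c', positive=True)   # i: radiance, c: speed of light
theta, phi = symbols('theta phi')

# Each "pencil" of radiance i contributes (i/c) dOmega to the energy
# density; integrating over the full sphere of directions:
u = integrate(integrate((i/c) * sin(theta), (theta, 0, pi)), (phi, 0, 2*pi))
print(u)                 # -> 4*pi*i/c
print((c / (4*pi)) * u)  # -> i, recovering the factor c/(4*pi)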

With the above considerations, and many others, Hasenöhrl wrote his
final paper, for which he gained recognition and a prize. But the
problem for him, and for physics, is that it was a pre-relativistic work
in which an absolute rest frame was used (as in all the other works
from legions of physicists over the centuries). Relativity
cannibalized all of classical physics, except when it's not convenient to
do so: a blatant hypocrisy (take the merging of reference frames in
particle physics, or just the Sagnac effect).

The problem that Hasenöhrl's work poses for physics is its enormous
complexity, which has consumed a great deal of manpower from 1905 up to
the present day just to be understood.

This paper
Fritz Hasenöhrl and E = mc²
Stephen Boughn
Haverford College, Haverford PA 19041
March 29, 2013
https://arxiv.org/abs/1303.7162

is one of many modern papers that try to understand Hasenöhrl's work by
using relativity and Planck, which simplifies the Austrian physicist's
complex work. Even this paper raises some doubts about the validity
(or not) of Hasenöhrl's work these days, when the notion of an absolute
reference frame is gaining momentum within physics. The paper tries to
explain (but fails) what Hasenöhrl's mistakes were (under the light of
relativity, of course), but it serves as a guide for analyzing
Hasenöhrl's work.

However, the author is highly biased, because he focused on the first
1904 paper and not on the final publication in Annalen der Physik, where
Hasenöhrl had substantially changed his first proposal, for instance by
introducing the idea of a slowly accelerated cavity (which is essential
to prove that the gain in mass is independent of the velocity).

I'm sorry I was not able to get the March 1905 paper to cite it here. It
seems that the efforts to erase Hasenöhrl's work (or Abraham's work on
electrons) from history have been successful. You have to resort to
books from the '50s to get some information, like the one cited by Stephen
Boughn.

Now, E = 3/4 mc² or E = mc²? Which one would the physics community
adopt?

Hmmm....
The Starmaker
2024-12-01 02:53:02 UTC
Reply
Permalink
Post by rhertz
<snip>
You mean...in those days they never heard of...footnotes?
--
The Starmaker -- To question the unquestionable, ask the unaskable,
to think the unthinkable, mention the unmentionable, say the unsayable,
and challenge the unchallengeable.
The Starmaker
2024-12-01 20:44:00 UTC
Reply
Permalink
Post by The Starmaker
Post by rhertz
<snip>
You mean...in those days they never heard of...footnotes?
I mean, in those days 'they' didn't use footnotes to reference their sources because it wasn't required then...


So, why would Fritz Hasenohrl and Einstein (the guy with autism looking everywhich way than everyone else) be in the same
room with Fritz if Einstein didn't get permission to steal from Fritz in 1911????

--
The Starmaker -- To question the unquestionable, ask the unaskable,
to think the unthinkable, mention the unmentionable, say the unsayable,
and challenge the unchallengeable.
rhertz
2024-12-02 00:36:16 UTC
Reply
Permalink
In what is considered the first experimental proof of Einstein's 1905
E = mc² paper, 27 years later (1932), the English physicist John
Cockcroft and the Irish physicist Ernest Walton produced a nuclear
disintegration by bombarding lithium with artificially accelerated
protons.

They used beams of protons accelerated through 600,000 volts to strike
lithium-7 atoms, which resulted in the creation of two alpha particles.
The experiment was celebrated as a proof of E = mc², even though the
results were closer to E = 3/4 mc², BUT NOBODY WANTED TO NOTICE THIS!

For this work, Cockcroft and Walton won the 1951 Nobel Prize in Physics
for the FIRST artificial transmutation of atomic nuclei,
not for proving E = mc², a FALSE CLAIM still repeated by relativists.

Cockcroft and Walton NEVER HAD IN MIND proving E = mc², as can be
seen in their 1932 publication; they never mentioned Einstein even once:

https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.1932.0133

Yet relativists hurried to celebrate the experiment as a triumph of
Einstein's theories, because they needed such an accomplishment to
proclaim the veracity of their pseudoscience.

The equation for their experiment was the following:


⁷₃Li + ¹₁H ---> ⁴₂He + ⁴₂He + energy

From their paper, this is the balance (as published in 1932):


Lithium7 amu 7.0104
Hydrogen amu 1.0072
8.0176

Helium amu 4.0011
Helium amu 4.0011
8.0022

Difference 0.0154 ± 0.003 amu = 14.3 ± 2.7 MeV

The difference in energy using E = mc², with 2024 NIST values, varies
from -2.1% to -49.7%, AVERAGING almost -25%.
CURIOUSLY, the average error over hundreds of measurements is EXACTLY the
factor 0.75 of Hasenöhrl's formula E = 3/4 mc².

What happened with the history of this experiment? Was it rewritten
after THIS single experiment, NEVER EVER REPEATED, to hype Einstein?

---------------------------------------------------

These are the values with NIST 2024:


Lithium7 amu 7.0160034366
Hydrogen amu 1.00782503223
8.02382846883

Helium amu 4.00260325413
Helium amu 4.00260325413
8.00520650826

Difference 0.01862196057 amu
≈ 17.35 MeV

************************************************************

INTERESTING: 92 years after the 1932 experiment, NIST managed to correct
the amu of the elements, so the difference FITS with E = mc².
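
For reference, a short script that redoes this arithmetic (a sketch only:
the first set of amu values is the one published in 1932 and quoted above,
the second is the modern set above, and 931.494 MeV/amu is the usual
mass-energy conversion factor):

AMU_TO_MEV = 931.494  # energy equivalent of 1 amu, in MeV

def q_value(m_li7, m_h1, m_he4):
    # Mass defect of 7Li + 1H -> 2 4He, converted to energy via E = m*c^2
    return (m_li7 + m_h1 - 2 * m_he4) * AMU_TO_MEV

print(q_value(7.0104, 1.0072, 4.0011))                      # ~14.3 MeV (1932 values)
print(q_value(7.0160034366, 1.00782503223, 4.00260325413))  # ~17.35 MeV (modern values)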

WORSE YET: In the Manhattan Project booklet "The Los Alamos Primer",
written by Serber & Oppenheimer in 1943 to instruct scientists recruited
for the project, the calculations WRITTEN THERE were based on the
electrostatic repulsion of the split nuclei, which ALSO DIFFERS BY A
SIMILAR AMOUNT from the infamous 200 MeV computed by Meitner and her
nephew in 1939.

Serber, in his 1992 book, affirmed that nuclear fission WAS UNRELATED to
E = mc², and that the fission process was NON-RELATIVISTIC.

Yet, just after WWII finished, the infamous Time Magazine cover had the
figure of Einstein and the nuclear cloud with E = mc² written on it.
Time Magazine was widely known as an outlet of Jewish propaganda, and
still is (what was left of it).


So, Hassenohrl was the real deal and Einstein the Jewish icon to be
hyper-hyped as the most important physicist since Babylon times?


From 1932 to 1943, the brightest minds involved in EXPERIMENTAL nuclear
fission DIDN'T SUPPORT E = mc².

The above FACT has to count, and should open the eyes of most. The drive
to reinstate the genius of Einstein and relativity restarted in the early
'50s and never stopped (cosmology, particle physics, etc.).

We live in a world of lies and INFAMOUS reconstruction of history, and I
mean ALL THE HISTORY.
rhertz
2024-12-02 17:44:33 UTC
Reply
Permalink
SOME CORRECTIONS TO THE PREVIOUS POST:

On Mon, 2 Dec 2024 0:36:16 +0000, rhertz wrote:

<snip>
Post by rhertz
Lithium7 amu 7.0104
Hydrogen amu 1.0072
8.0176
Helium amu 4.0011
Helium amu 4.0011
8.0022
Difference Δamu = 0.0154 ± 0.003 amu = 14.3 ± 2.8 MeV. To this, an
extra energy of 2.7 MeV has to be added.

Δamu varies between 0.0124 amu and 0.0187 amu.

The total change in energy ΔE varies between 14.3 MeV and 19.8 MeV.


These correspond to the equations:

ΔE = 2/3 Δmc² for 14.3 MeV

ΔE = 7/5 Δmc² for 19.8 MeV

The authors claimed that momentum was accounted for and conserved.



These values were obtained, after hundreds of experiments, for the
reaction:

⁷₃Li + ¹₁H ---> ⁴₂He + ⁴₂He + energy

https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.1932.0133

Considering that momentum is conserved and that the energy of the proton
is 0.6 MeV, the final values are closer to Hasenöhrl than to Einstein.
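
As an illustration of that bookkeeping (a sketch with assumed round
numbers, not the paper's own analysis): with a slow incoming proton the
two alphas come out nearly back to back, so momentum is balanced and the
released energy is simply the kinetic energy out minus the kinetic energy
in.

def q_from_kinematics(t_alpha_mev, t_proton_mev=0.6):
    # Q = (total kinetic energy of the two alphas) - (proton kinetic energy)
    return 2 * t_alpha_mev - t_proton_mev

# With alphas of roughly 8.6 MeV each (a commonly quoted figure for the
# 1932 experiment, used here only as an illustration):
print(q_from_kinematics(8.6))   # ~16.6 MeV, to be compared with Δm·c²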


Hardly an experimental verification of ΔE = Δmc², even though relativists
have claimed this to be the first experimental proof of that equation.

<snip>
ProkaryoticCaspaseHomolog
2024-12-02 18:07:14 UTC
Reply
Permalink
Post by rhertz
<snip>
Modern measurements are much more accurate. For example,
https://www.nature.com/articles/4381096a
The full paper is also available online.

Many more measurements are consistent with E=mc^2, even if the
measurements are not from experiments specifically designed to
test the prediction. For instance, electron-positron annihilation
is routinely observed to result in 0.511 MeV gamma rays.
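
For the annihilation figure, a one-line check suffices (a sketch assuming
scipy is available for the CODATA constants):

from scipy.constants import m_e, c, e  # electron mass, speed of light, elementary charge

# Rest energy of the electron, E = m*c^2, expressed in MeV
print(m_e * c**2 / e / 1e6)   # ~0.511 MeV, the energy of each annihilation photon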

Do you honestly believe that some sort of conspiracy exists to
suppress measurements inconsistent with relativistic predictions?
Ross Finlayson
2024-12-02 19:20:59 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
Post by rhertz
<snip>
Modern measurements are much more accurate. For example,
https://www.nature.com/articles/4381096a
The full paper is also available online.
Many more measurements are consistent with E=mc^2, even if the
measurements are not from experiments specifically designed to
test the prediction. For instance, electron-positron annihilation
is routinely observed to result in 0.511 MeV gamma rays.
Do you honestly believe that some sort of conspiracy exists to
suppress measurements inconsistent with relativistic predictions?
One imagines not, necessarily, since for example that
having "dark energy and dark matter" has long ago
falsified theories without such non-scientific non-explanations.

In "Electron Physics", O.W. Richardson decribes that the "electron's"
"relativistic mass" as it may be as almost entirely relativistic
in explanation, vis-a-vis its "mass", per se, while for example
long since Lienard-Wiechert's original experiments or this was
about a century ago, these days it's "electron-holes" which sort
of only represent a bit of back-and-forth what propagates, current.


How about the 1995 if not the 2024 "atomic weight" values?

Of course it's well known that NIST CODATA Particle Data Group
releases new values for constants every few years.

"Modern measurements" _decrease_.

Of course everybody knows that the usual e=mc^2 is only
derived from K.E. Taylor series first term, the rest
infinitely-many truncated (not even having matching
units), and that SR's arrival at e = mc^2 is circular.


Then there's also "Einstein's second mass-energy equivalency
derivation", yet it's sort of, formally un-linear.


Anyways if you leave out "dark matter" then "relativity"
or the "classical in the limit, Newtonian", in the limit
it attains to is long ago falsified.

Then, about things like electron self-energy and various
quite a few things about the stack of derivations and
what the "linearisations" have made so great and lost,
have that it's the mathematics that needs more than
a "partial" account.

Anyways the actual laws of mechanics even indeed
have things to fix and repair since before Galileo.
ProkaryoticCaspaseHomolog
2024-12-02 21:54:42 UTC
Reply
Permalink
Yes, I do.
In particular since mid '50s.
Too much money, credibility on science due to the sustained hype over
relativity, prestige and many other important issues are involved.
Considering that E = mc² has been publicly hyped as the most important
equation in the history of science
Most famous, NOT the most important.
Hardly ANYBODY would claim that it is the most important.
(See Max Planck Institute), imagine
THE DISASTER for science, millions of publications, academy, etc., that
BREAKING NEWS: E = mc² HAS BEEN PROVEN FALSE. TOTAL COLLAPSE OF THE
SCIENTIFIC ESTABLISHMENT!
What would follow to this news, even if actually E = 0.99 mc²?
We know that E = mc² to about the 10^-7 level.

If the equation is found to be off at some level of significance,
that would be an extremely important result, not the end of science.
Worse than proving that the speed of light in vacuum, across large
distance, IS NOT A CONSTANT.
Personally, I hope that the next space-borne equivalence principle
test, whatever technology it uses (STEP never got the funding that
it deserved), finds that the equivalence principle breaks down at
some level of accuracy. As I have written elsewhere:

| "Currently envisioned tests of the weak equivalence principle are
| approaching a degree of sensitivity such that non-discovery of a
| violation would be just as profound a result as discovery of a
| violation. Non-discovery of equivalence principle violation in this
| range would suggest that gravity is so fundamentally different from
| other forces as to require a major reevaluation of current attempts
| to unify gravity with the other forces of nature. A positive
| detection, on the other hand, would provide a major guidepost
| towards unification."
Ross Finlayson
2024-12-03 00:39:36 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
Personally, I hope that the next space-borne equivalence principle
test, whatever technology it uses (STEP never got the funding that
it deserved), finds that the equivalence principle breaks down at
| "Currently envisioned tests of the weak equivalence principle are
| approaching a degree of sensitivity such that non-discovery of a
| violation would be just as profound a result as discovery of a
| violation. Non-discovery of equivalence principle violation in this
| range would suggest that gravity is so fundamentally different from
| other forces as to require a major reevaluation of current attempts
| to unify gravity with the other forces of nature. A positive
| detection, on the other hand, would provide a major guidepost
| towards unification."
Oh, which way is it going to be?
ProkaryoticCaspaseHomolog
2024-12-03 02:35:20 UTC
Reply
Permalink
Post by Ross Finlayson
Post by ProkaryoticCaspaseHomolog
Personally, I hope that the next space-borne equivalence principle
test, whatever technology it uses (STEP never got the funding that
it deserved), finds that the equivalence principle breaks down at
| "Currently envisioned tests of the weak equivalence principle are
| approaching a degree of sensitivity such that non-discovery of a
| violation would be just as profound a result as discovery of a
| violation. Non-discovery of equivalence principle violation in this
| range would suggest that gravity is so fundamentally different from
| other forces as to require a major reevaluation of current attempts
| to unify gravity with the other forces of nature. A positive
| detection, on the other hand, would provide a major guidepost
| towards unification."
Oh, which way is it going to be?
Either way is a win.

A negative result, which would render highly implausible most of the
alternative gravitational theories (which mostly predict breakdown of
the equivalence principle by the 10^-18 level) would be comparable in
importance to the MMX negative result, which rendered highly
implausible most variants of luminiferous ether theories.

A positive result would serve to validate decades of effort to find
a viable theory of gravitation beyond GR.
Ross Finlayson
2024-12-03 03:07:09 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
Post by Ross Finlayson
Post by ProkaryoticCaspaseHomolog
<snip>
Oh, which way is it going to be?
Either way is a win.
A negative result, which would render highly implausible most of the
alternative gravitational theories (which mostly predict breakdown of
the equivalence principle by the 10^-18 level) would be comparable in
importance to the MMX negative result, which rendered highly
implausible most variants of luminiferous ether theories.
A positive result would serve to validate decades of effort to find
a viable theory of gravitation beyond GR.
So, a pat on the head or a kick in the ass?

I suppose their theories of gravitation fixed perpetual motion
too, or the constant violation of conservation of energy.

Don't get me wrong, the equivalence principle wasn't always
a thing, and the ether/aether theories have come in and out
of fashion.

One time I read in a magazine "the difference between fashion
and style, is that fashion goes in and out of style,
yet style is never out of fashion."


Since Lense Thirring and Pioneer Anomaly yet really after
classical mechanics what's different "gravity's force"
and "g-forces", some sites claim things like "equivalence
principle is violated all the time".

https://www.npl.washington.edu/eotwash/equivalence-principle

Once entirely forbidden and vigorously castigated,
now top in results "don't be what you think is right wrongly".

Because it really ruins argument from authority
when it's not anymore. Or if it was ever wrong.


The magnetopause or about where Earth's gravity well
is 50/50, to decay or not, you can read Einstein about
it, it's like "Einstein, did you really say there _is_
an ether?" and he's like "yeah, uh huh", then it's like
"Einstein, what does that mean for the equivalence principle"
and he might be like "well, you see, it's just a _principle_,
it's a nice way of looking at things that totally simplifies
some thing, _principles_ are not the same thing as _cause_,
see".

Now, the L-principle for light's speed's constancy is held
up a little more than that, strength-wise, yet "the locality
of SR" has that it's according to the space and that the
space is according to GR, much like the equivalence principle,
the L-principle.

About _mass_ and _inertia_ and _momentum_ and _heft_ and
whether _heft_ is _inertial_ and whether _momentum_ in
kinematics oscillates real/virtual or classical/potential,
that it's an _inertial system_ and with regards to
whether the terrestrial frame is moving along with
the solar moving along with the pole star frame,
and so on, the orbital and the ecliptic and the zodiacal,
has that according to Einstein it's an _inertial_ system
for avoiding circular insoluble mathematical singularities,
and he has that as a _law_.


Anyways if "theories of gravitation" don't solve "conservation
of energy" then they deserve the great round-file.

Now, Eotvos was a pretty big deal, if the precession of
the ball pair is to considered for its "rest for its
spinning out at the LaGrange point", vis-a-vis, Michelson-Morley
and "the mirror-pond of the mercury bath, after it all
wound down and we could watch it spin following Foucault",
to be sure, in the middle, there's a null.

"Round and round and round it goes,
then it sort of goes according to Foucault".

Or Coriolis, ....


About "fifth force" or whatever that was just supposed
to be "gravity straight down", that in a fall-gravity
is quite different than a pull gravity, where it's
figured that fall-gravity just makes time and gravity
their gradients balance each other.

So, this way then fall-gravity is same as nuclear,
that any theory of gravity must satisfy being a
model of a fall-gravity.

If laws are the same everywhere, ....

In principle, ....
Ross Finlayson
2024-12-03 21:50:52 UTC
Reply
Permalink
Post by Ross Finlayson
<snip>
O.W. Richardson's "The Electron Theory of Matter" is
really pretty great, from the outset he details why
there is the aether yet also that the medium makes
for the usual analysis as these days is, and then
also things like charge and "real and fictitious",
helping explain why matters of potential are real
and "real and fictitious" merely differentiate perspectives
and they're both real, contra usual un-qualified usage
where of course fictitious means un-so.

So, of course he's a big fan of Faraday.

It's after Lorentz and Zeeman, yet also after Rutherford
and Geiger, works up a usual Laplace, Gauss, Green, Poisson,
and gets into rays and refraction, which affects light,
and Roentgen Rays.

"One is tempted to ask what can be the use of
a conception of the electric intensity which is
so much at variance with what we believe to be
the reality. The answer is, of course, that most
of our methods of experimenting are so coarse,
compared with the atomic scale, that they do not
detect these enormous differences which occur within
distances of the order of atomic magnitude. Our
experimental arrangements for the most part measure
only the average values over spaces containing a
large number of atoms. The reason why our average
values possess validity is not because they are
the true values but because, so far as such experimental
arrangements enable us to detect, everything happens
as if the average values were the true values."
-- O.W. Richardson


Mentions Rowland, 1876, Drude, Lehrbuch der Optik,
reminds me to look into Droste, then about Leroux
and Kundt, with regards to Richardson's optical
theories of transmission vis-a-vis the dielectric.

Since they're not the same, optical and electrical
intensity, ....


Yeah a fall gravity courtesy a gradient, with
mechanics and inertia and heft flow and flux,
and then the electrical, flow and flux, and
optical and radionuclear, flow and flux,
courtesy time in space, makes for a usual
theory where "energy is conserved" is not
ignored or violated or made un-scientific or
non-scientific or otherwise what's considered wrong.
Ross Finlayson
2024-12-04 17:13:21 UTC
Reply
Permalink
Post by Ross Finlayson
<snip>
Richardson helps advise that light's "c", and
electromagnetic radiation's "c", and the ratio
electrostatic/electromagnetic, "c", are
three different things.

About the same, ..., in deep space in a vacuum.
rhertz
2024-12-03 02:22:27 UTC
Reply
Permalink
On Mon, 2 Dec 2024 21:54:42 +0000, ProkaryoticCaspaseHomolog wrote:

<snip>
Post by ProkaryoticCaspaseHomolog
We know that E = mc² to about the 10^-7 level.
<snip>
These are the values of the 1932 experiment with NIST 2024 data:

Lithium7 amu 7.0160034366
Hydrogen amu 1.00782503223
8.02382846883

Helium amu 4.00260325413
Helium amu 4.00260325413
8.00520650826

Difference (amu) 0.01862196057
Difference (eV) 17.3462464706347E+06
Difference (J) 2.7791750783E-12

Is this the level of precision that you claim to exist with E = mc²?

Check this out:
HISTORY OF THE RECOMMENDED ATOMIC-WEIGHT VALUES FROM 1882 TO 1997:
A COMPARISON OF DIFFERENCES FROM CURRENT VALUES TO THE ESTIMATED
UNCERTAINTIES OF EARLIER VALUES

https://www.ciaaw.org/hydrogen.htm

For Hydrogen, they don't go further than 5 decimals, not to mention Li-7,
which seems to have posed problems all along, even with the best mass
spectrometry instrumentation.
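
For scale, a quick conversion of those decimal places into energy (a
sketch; 931.494 MeV/amu is the usual conversion factor):

AMU_TO_MEV = 931.494
# One unit in the 5th decimal place of an atomic mass, as energy:
print(1e-5 * AMU_TO_MEV)    # ~0.0093 MeV, i.e. about 9.3 keV
# One unit in the 10th decimal place (the precision quoted by NIST):
print(1e-10 * AMU_TO_MEV)   # ~9.3e-8 MeV, i.e. about 0.1 eV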


----------------------------------------------------------------
QUOTE:

Atomic mass units (AMU) are a unit of mass used to measure atomic
masses, while atomic weight is the average weight of an element's
isotopes:

Atomic mass units: A unit of mass used to measure atomic masses. One AMU
is equal to 1/12 the mass of a carbon-12 atom in its ground state. AMU
is also known as a Dalton.

Atomic weight: The average weight of an element's isotopes, taking into
account their relative abundances. Atomic weight is measured in AMU.
---------------------------------------------------------------


So, how can NIST publish up to 10 decimal digits, if those who ACTUALLY
measure the atomic weights and amu of the elements use 5 decimal digits?

These guys, from CIAAW, collect and distribute the data. NIST uses it.

https://www.ciaaw.org/members.htm

CIAAW is part of the International Union of Pure and Applied Chemistry
(IUPAC), which publishes revised tables of RECOMMENDED atomic-weight
values.


Collision, collusion. Which is the difference?
Paul.B.Andersen
2024-12-03 13:15:01 UTC
Reply
Permalink
BREAKING NEWS: E = mc² HAS BEEN PROVEN FALSE. TOTAL COLLAPSE OF THE
SCIENTIFIC ESTABLISHMENT!
What would follow from this news, even if actually E = 0.99 mc²?
If you believe that E = const mc²
where const is any number, and the constant c is the speed
of light in the inertial frame where m is stationary,
then you believe that the speed of light is invariant.

Is this what you believe?
--
Paul

https://paulba.no/
ProkaryoticCaspaseHomolog
2024-12-03 14:01:10 UTC
Reply
Permalink
Post by rhertz
Collision, collusion. Which is the difference?
So, the fact that more recent measurements are significantly different
from older results in that they better support E=mc^2 is de facto
evidence of selective tweaking of results, fakery, and collusion?
rhertz
2024-12-03 18:27:00 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
Post by rhertz
Collision, collusion. Which is the difference?
So, the fact that more recent measurements are significantly different
from older results in that they better support E=mc^2 is de facto
evidence of selective tweaking of results, fakery, and collusion?
ALL OF THEM. IN PARTICULAR COLLUSION TO HAVE EVERY PHYSICAL CONSTANT
RELATED TO OTHERS, SO ALL THE FORMULAE GIVE COHERENT RESULTS (COHERENT
WITH WHAT? WITH THE BUILDING OF PHYSICS). CALL IT AS YOU WANT.

************************************************************

SOME PHYSICAL CONSTANTS FIXED BY COLLUSION LED BY CIPM AND ADOPTED BY
DIFFERENT CGPM (GENERAL CONFERENCE ON WEIGHTS AND MEASURES) SINCE 1970:

________________________________________
1. Speed of Light in Vacuum (c)
Value: 299,792,458 m/s
Fixed: 1983, fixed by the 17th CGPM

DERIVED: 1 meter is the distance light travels in vacuum in
1/299,792,458 seconds.
________________________________________
2. Second (s)
Definition based on: Transition between two hyperfine levels of the
ground state of the cesium-133 atom.
Fixed: 1967, by the 13th CGPM

Second: the duration of 9,192,631,770 periods of the radiation
corresponding to the transition between the two levels.
________________________________________
3. Planck Constant (h)
Value: 6.626 070 15 × 10⁻³⁴ J•s
Fixed: 2019, by the 26th CGPM

DERIVED: The kilogram is now defined in terms of the Planck constant and
the meter and second (related through quantum mechanics).
________________________________________
4. Elementary Charge (e)
Value: 1.602 176 634 × 10⁻¹⁹ C
Fixed: 2019
The same redefinition of the SI system in 2019 fixed the value of the
elementary charge, effectively redefining the ampere.
________________________________________
5. Boltzmann Constant (k)
Value: 1.380 649 × 10⁻²³ J/K
Fixed: 2019
This fixed value redefined the kelvin as a unit of thermodynamic
temperature, relating it directly to energy.
________________________________________
6. Avogadro Constant (Nₐ)
Value: 6.02214076 × 10²³ mol⁻¹
Fixed: 2019
The Avogadro constant was set to an exact value in 2019, redefining the
mole.
1 mole now contains exactly 6.02214076 × 10²³ entities (atoms,
molecules, etc.).
________________________________________
7. Magnetic Constant (μ₀)
Old fixed value: 4π × 10⁻⁷ N/A² (exact); since 2019 no longer fixed

Before 2019, μ₀ was defined EXACTLY, but now it is A DERIVED CONSTANT,
based on the fine-structure constant (α), Planck constant, and the
elementary charge.
________________________________________
8. Electric Constant (ε₀)
Similar to μ₀, the electric constant is no longer fixed but DERIVED from
other constants like the speed of light and magnetic permeability.
________________________________________
9. Fine-Structure Constant (α)
Not fixed but DERIVED from other constants: α = e²/(4πε₀ℏc) ≈ 1/137
________________________________________
10. Rydberg Constant (R∞)
Value: 10,973,731.568160 m⁻¹

The Rydberg constant is DERIVED from fixed constants like the Planck
constant (ℎ), speed of light (𝑐), and electron mass.
________________________________________
11. Gravitational Constant (G)
Unlike many other constants, G (≈ 6.67430 × 10⁻¹¹ m³ kg⁻¹ s⁻²) is not
fixed and remains one of the least precisely known constants.

This is due to experimental difficulties in measuring it accurately.
________________________________________
12. Permeability of Free Space (μ₀)

Old fixed value: 4π × 10⁻⁷ N•A⁻² (EXACT VALUE)

Since 2019, DERIVED based on the speed of light (c), Planck constant
(h), and elementary charge (e), and no longer fixed.

It is related to the fine-structure constant: μ₀=2hα/(e²c)
________________________________________
13. Permittivity of Free Space (ε₀)
Old FIXED value: 8.854187817 × 10⁻¹² F•m⁻¹ (exact value)

After 2019, DERIVED using the relation: ε₀ = 1/(μ₀c²)
________________________________________
14. Molar Gas Constant (R)
Value: 8.314462618 J•mol⁻¹•K⁻¹

DERIVED from fixed constants like the Boltzmann constant (k) and
Avogadro number (Nₐ): R = Nₐ × k
________________________________________
15. Stefan-Boltzmann Constant (σ)
Value: 5.670374419 × 10⁻⁸ W•m⁻²•K⁻⁴

DERIVED from fixed constants such as the Boltzmann constant (k), speed
of light (c), and Planck constant (h): σ = 2π⁵k⁴/(15h³c²)
________________________________________
16. Wien’s Displacement Constant (b)
Value: 2.897771955 × 10⁻³ m•K

DERIVED from fixed constants such as the Boltzmann constant (k) and
Planck constant (h):

b = hc/(k×4.9651) (numerical factor from Planck’s law)
________________________________________
17. von Klitzing Constant (R_K)
Value: 25,812.8074593045 Ω

DERIVED from the Planck constant (h) and elementary charge (e):
R_K = h/e²

Used in the quantum Hall effect to define the ohm.
________________________________________
18. Faraday Constant (F)
Value: 96,485.332 123 310 C•mol⁻¹

Relates to the charge of one mole of electrons and is DERIVED from

F = Nₐ × e
________________________________________
19. Electron Volt (eV)
Value: 1 eV = 1.602176634 × 10⁻¹⁹ J

DEFINED in 2019, based on the exact value of the elementary charge (e).
________________________________________
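
As a sanity check on the FIXED vs. DERIVED distinction above, here is a
minimal sketch (Python) that recomputes a few of the DERIVED entries directly
from the FIXED 2019 values; agreement with the quoted numbers is expected by
construction:

  import math

  h  = 6.62607015e-34    # Planck constant, J*s (fixed)
  e  = 1.602176634e-19   # elementary charge, C (fixed)
  k  = 1.380649e-23      # Boltzmann constant, J/K (fixed)
  Na = 6.02214076e23     # Avogadro constant, 1/mol (fixed)
  c  = 299792458         # speed of light, m/s (fixed)

  R     = Na * k                                      # molar gas constant, ~8.314462618 J/(mol*K)
  F     = Na * e                                      # Faraday constant, ~96485.33212 C/mol
  R_K   = h / e**2                                    # von Klitzing constant, ~25812.8074593 ohm
  sigma = 2 * math.pi**5 * k**4 / (15 * h**3 * c**2)  # Stefan-Boltzmann, ~5.670374419e-8 W/(m^2*K^4)

  print(R, F, R_K, sigma)
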
rhertz
2024-12-03 19:02:45 UTC
Reply
Permalink
And I forgot:

The settlement of constants BY COLLUSION requires that ALL THE
INSTRUMENTATION THAT EXISTS (used in any science) BE RE-CALIBRATED, to
obey.


Do you get this?

If you manufacture mass spectrometers, voltmeters, timers, WHATEVER,
better that you RE-ADJUST the values that come from measurements.

Example: Your voltmeter measures 1 Volt as 0.9995743 OLD Volts? Then
RECALIBRATE THAT MF or you will sell NONE. Is that clear?

CALIBRATION is an essential part in the design and manufacturing OF ANY
INSTRUMENT!. But you require MASTER REFERENCES (OR GUIDELINES LIKE THOSE
FROM BIPM).

Your laser-based distance meter measures 1 meter as 1.00493 meters?
RECALIBRATE THE INSTRUMENT RIGHT IN THE PRODUCTION LINE.

Not to talk about instrumentation used to compute Atomic Weight or
a.m.u.

ADJUST, COMPLY AND OBEY OR YOU'RE OUT OF THE BUSINESS.

Did you manufacture a single instrument in an university lab? ADJUST,
COMPLY AND OBEY or you are OUTCASTED.

How do you dare to measure c = 299,793,294 m/s? ARE YOU CRAZY? Adjust
the readings to c = 299,792,458 m/s, OR ELSE.

And this has been happening since late XIX Century. Read the history
behind the definition of 1 Ohm, mainly commanded by British
institutions, with Cavendish lab behind it.
ProkaryoticCaspaseHomolog
2024-12-04 10:10:39 UTC
Reply
Permalink
Post by rhertz
The settlement of constants BY COLLUSION requires that ALL THE
INSTRUMENTATION THAT EXIST (used in any science) BE RE-CALIBRATED, to
obey.
Do you get this?
If you manufacture mass spectrometers, voltmeters, timers, WHATEVER,
better that you RE-ADJUST the values that come from measurements.
Example: Your voltmeter measures 1 Volt as 0.9995743 OLD Volts? Then
RECALIBRATE THAT MF or you will sell NONE. Is that clear?
CALIBRATION is an essential part in the design and manufacturing OF ANY
INSTRUMENT!. But you require MASTER REFERENCES (OR GUIDELINES LIKE THOSE
FROM BIPM).
Your laser based distance meter measure 1 meter as 1.00493 meters?
RECALIBRATE THE INSTRUMENT RIGHT IN THE PRODUCTION LINE.
Not to talk about instrumentation used to compute Atomic Weight or
a.m.u.
ADJUST, COMPLY AND OBEY OR YOU'RE OUT OF THE BUSINESS.
Did you manufacture a single instrument in an university lab? ADJUST,
COMPLY AND OBEY or you are OUTCASTED.
How do you dare to measure c = 299,793,294 m/s? ARE YOU CRAZY? Adjust
the readings to c = 299,792,458 m/s, OR ELSE.
And this has been happening since late XIX Century. Read the history
behind the definition of 1 Ohm, mainly commanded by British
institutions, with Cavendish lab behind it.
E ≈ 1.0000000 mc^2 is not a calibration adjustment. It is a
measurement made with calibrated instrumentation whose consistency
with other instrumentation has been carefully verified by procedures
such as you cast aspersion upon above.

Do you want to go back to three barleycorns per inch? Or the
historical chaos that resulted in the Troy pound, Tower pound,
London pound, Wool pound, Jersey pound, Trone pound, libra, livre
and so forth? Or a second equals 1/86400 part of a day?
J. J. Lodder
2024-12-04 11:40:04 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
Post by rhertz
The settlement of constants BY COLLUSION requires that ALL THE
INSTRUMENTATION THAT EXIST (used in any science) BE RE-CALIBRATED, to
obey.
Do you get this?
If you manufacture mass spectrometers, voltmeters, timers, WHATEVER,
better that you RE-ADJUST the values that come from measurements.
Example: Your voltmeter measures 1 Volt as 0.9995743 OLD Volts? Then
RECALIBRATE THAT MF or you will sell NONE. Is that clear?
CALIBRATION is an essential part in the design and manufacturing OF ANY
INSTRUMENT!. But you require MASTER REFERENCES (OR GUIDELINES LIKE THOSE
FROM BIPM).
Your laser based distance meter measure 1 meter as 1.00493 meters?
RECALIBRATE THE INSTRUMENT RIGHT IN THE PRODUCTION LINE.
Not to talk about instrumentation used to compute Atomic Weight or
a.m.u.
ADJUST, COMPLY AND OBEY OR YOU'RE OUT OF THE BUSINESS.
Did you manufacture a single instrument in an university lab? ADJUST,
COMPLY AND OBEY or you are OUTCASTED.
How do you dare to measure c = 299,793,294 m/s? ARE YOU CRAZY? Adjust
the readings to c = 299,792,458 m/s, OR ELSE.
And this has been happening since late XIX Century. Read the history
behind the definition of 1 Ohm, mainly commanded by British
institutions, with Cavendish lab behind it.
E ≈ 1.0000000 mc^2 is not a calibration adjustment. It is a
measurement made with calibrated instrumentation whose consistency
with other instrumentation has been carefully verified by procedures
such as you cast aspersion upon above.
Was, was, was. There is nothing to 'cast upon' anymore.
With the redefinition of the kilogram in 2018
those measurements have become irrelevant.

E = m c^2 now holds exactly,
by the definition of the kilogram.
(and the Joule)

Jan
ProkaryoticCaspaseHomolog
2024-12-04 12:41:04 UTC
Reply
Permalink
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
E ≈ 1.0000000 mc^2 is not a calibration adjustment. It is a
measurement made with calibrated instrumentation whose consistency
with other instrumentation has been carefully verified by procedures
such as you cast aspersion upon above.
Was, was, was. There is nothing to 'cast upon' anymore.
With the redefinition of the kilogram in 2018
those measurements have become irrelevant.
E = m c^2 now holds exactly,
by the definition of the kilogram.
(and the Joule)
Specious argument.

When the kilogram was defined in terms of a metal artifact held in
vaults in Paris, it was a legitimate question whether the mass of said
artifact varied over time, even though by definition it was _the_
kilogram. As a matter of fact, that mass was found to vary despite its
being the basis of the definition of the kilogram.

The mere fact that E = mc^2 holds exactly according to our present
definitions of the kilogram and the Joule does not make irrelevant
experiments intended to check whether the assumptions that have led to
the adoption of our current set of standards are correct.

The mere fact that theory and over a century of experimental
validation have led to the speed of light being adopted as a constant
does not invalidate experiments intended to verify to increasing
levels of precision the correctness of the assumptions that led to
its adoption as a constant.
J. J. Lodder
2024-12-04 20:17:25 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
E ≈ 1.0000000 mc^2 is not a calibration adjustment. It is a
measurement made with calibrated instrumentation whose consistency
with other instrumentation has been carefully verified by procedures
such as you cast aspersion upon above.
Was, was, was. There is nothing to 'cast upon' anymore.
With the redefinition of the kilogram in 2018
those measurements have become irrelevant.
E = m c^2 now holds exactly,
by the definition of the kilogram.
(and the Joule)
Specious argument.
When the kilogram was defined in terms of a metal artifact held in
vaults in Paris, it was a legitimate question whether the mass of said
artifact varied over time, even though by definition it was _the_
kilogram. As a matter of fact, that mass was found to vary despite its
being the basis as the definition of kilogram.
The mere fact that E = mc^2 holds exactly according to our present
definitions of the kilogram and the Joule does not make irrelevant
experiments intended to check whether the assumptions that have led to
the adoption of our current set of standards are correct.
The mere fact that theory and over a century of experimental
validation have led to the speed of light being adopted as a constant
does not invalidate experiments intended to verify to increasing
levels of precision the correctness of the assumptions that led to
it adoption as a constant.
So you haven't understood what it is all about.
I rest my case,

Jan
Ross Finlayson
2024-12-04 21:29:53 UTC
Reply
Permalink
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
E ≈ 1.0000000 mc^2 is not a calibration adjustment. It is a
measurement made with calibrated instrumentation whose consistency
with other instrumentation has been carefully verified by procedures
such as you cast aspersion upon above.
Was, was, was. There is nothing to 'cast upon' anymore.
With the redefinition of the kilogram in 2018
those measurements have become irrelevant.
E = m c^2 now holds exactly,
by the definition of the kilogram.
(and the Joule)
Specious argument.
When the kilogram was defined in terms of a metal artifact held in
vaults in Paris, it was a legitimate question whether the mass of said
artifact varied over time, even though by definition it was _the_
kilogram. As a matter of fact, that mass was found to vary despite its
being the basis as the definition of kilogram.
The mere fact that E = mc^2 holds exactly according to our present
definitions of the kilogram and the Joule does not make irrelevant
experiments intended to check whether the assumptions that have led to
the adoption of our current set of standards are correct.
The mere fact that theory and over a century of experimental
validation have led to the speed of light being adopted as a constant
does not invalidate experiments intended to verify to increasing
levels of precision the correctness of the assumptions that led to
it adoption as a constant.
So you haven't understood what it is all about.
I rest my case,
Jan
What's that in dynes?

The "dynamics" and all, ....


The current latest greatest SI units make it dirt simple to
formalize electronic circuitry (in deep space, in a vacuum,
alone, at operating temperature), and leave out as "dimensionless"
all the "dynamics" what go into the "derivations" all these
matters of very definitions of units themselves.

So, it only serves a particularly simple subset of SR-ians.

Because layout is hard, ....

"The latest SI re-definition."

Of course, then there's that that "E = mc^2, exactly"
is a _circular_ argument and makes, though a positivist
theory, demanding what it stipulates is so, that
it's not really attached to the "physical interpretation",
insofar as with regards to electrostatic, electromagnetism,
electrical and magnetic and optical intensities, and such
matters of radionuclear radiation, and also "gravitational
waves" and other forms of radiation, like nutation.

The very latest here makes a very brief packet,
"here's physics, SR-ians, you're on your own".


(I forgot who here originally coined the term
"SR-ians", with regards to things like "Einstein
already said SR was local and its space was spacial,
75 years ago", with regards to SR-IANS, and, "KISS-SR-ians",
as it were.)
ProkaryoticCaspaseHomolog
2024-12-04 22:20:18 UTC
Reply
Permalink
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
The mere fact that theory and over a century of experimental
validation have led to the speed of light being adopted as a constant
does not invalidate experiments intended to verify to increasing
levels of precision the correctness of the assumptions that led to
it adoption as a constant.
So you haven't understood what it is all about.
I rest my case,
You prematurely rest your case.

Since 1983, the speed of light in vacuum has been defined as exactly
equal to 299,792,458 meters per second.

Given this definition, is there any point to conducting experiments
to test whether there are anisotropies in the speed of light due to
Earth's motions in space? Such as these: https://tinyurl.com/8hkry7k3

The definition of the speed of light is such that there can't be.

Right?
Ross Finlayson
2024-12-04 23:56:08 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
The mere fact that theory and over a century of experimental
validation have led to the speed of light being adopted as a constant
does not invalidate experiments intended to verify to increasing
levels of precision the correctness of the assumptions that led to
it adoption as a constant.
So you haven't understood what it is all about.
I rest my case,
You prematurely rest your case.
Since 1983, the speed of light in vacuum has been defined as exactly
equal to 299,792,458 meters per second.
Given this definition, is there any point to conducting experiments
to test whether there are anisotropies in the speed of light due to
Earth's motions in space? Such as these: https://tinyurl.com/8hkry7k3
The definition of the speed of light is such that there can't be.
Right?
The definition of the speed of light in what theory?

You mean you have a theory that says nothing at all
about it except that it's a finite constant?

Then other theories say what that entails,
if it isn't up into indeterminate-quantities
then besides like Einstein says that the
L-principle is a local thing and that the
"spacial SR" and "spatial GR" are at least
two different things and that it depends
on what's "classical gravity" and depends
on whether motion has constant velocity,
these kinds of things?

What you don't include boost addition, the Riemann
tensor, Ricci tensor and Regge map, Hermann,
Baecklund, Bianchi, these kinds of things?

So, the L-principle of SR indeed has that
light's speed is a constant, and finite.

Then though sometimes "wave-length" is
"inverse frequency" and in other considerations
"wave velocity", so, they kind of line up
at one end, yet, there's multiplicities,
that's what-all singularity theory, usually
theories about 2/3 of the hypergeometric
with principal branches of multiplicity theories.

Don't get me wrong, "multiple-worlds" has no
physical interpretation, or, at least scientifically.

It's a great realm of many theories, though.


I sort of enjoy this since foundations of mathematics
and foundations of physics have a lot in common.

Mathematical physics, ....
The Starmaker
2024-12-05 01:15:36 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
The mere fact that theory and over a century of experimental
validation have led to the speed of light being adopted as a constant
does not invalidate experiments intended to verify to increasing
levels of precision the correctness of the assumptions that led to
it adoption as a constant.
So you haven't understood what it is all about.
I rest my case,
You prematurely rest your case.
Since 1983, the speed of light in vacuum has been defined as exactly
equal to 299,792,458 meters per second.
Given this definition, is there any point to conducting experiments
to test whether there are anisotropies in the speed of light due to
Earth's motions in space? Such as these: https://tinyurl.com/8hkry7k3
The definition of the speed of light is such that there can't be.
Right?
There is 'no such thing' as a vacuum that exists anywhere...

the definition is a fraud.

There is no 'vacuum' that exists in which an "experiment" can be
performed.
--
The Starmaker -- To question the unquestionable, ask the unaskable,
to think the unthinkable, mention the unmentionable, say the unsayable,
and challenge the unchallengeable.
J. J. Lodder
2024-12-05 10:57:06 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
The mere fact that theory and over a century of experimental
validation have led to the speed of light being adopted as a constant
does not invalidate experiments intended to verify to increasing
levels of precision the correctness of the assumptions that led to
it adoption as a constant.
So you haven't understood what it is all about.
I rest my case,
You prematurely rest your case.
OK. Maybe I gave up on you too soon.
Post by ProkaryoticCaspaseHomolog
Since 1983, the speed of light in vacuum has been defined as exactly
equal to 299,792,458 meters per second.
Correct, almost.
Conceptually better: the meter is defined as....
The CGPM is concerned with how measurements are to be done,
not with theoretical proclamations.
Post by ProkaryoticCaspaseHomolog
Given this definition, is there any point to conducting experiments
to test whether there are anisotropies in the speed of light due to
Earth's motions in space? Such as these: https://tinyurl.com/8hkry7k3
The definition of the speed of light is such that there can't be.
Right?
That's where you go wrong.
The agreement to give c a defined value
is irrelevant to any experiment.

It is a convention that tells us how to represent
the outcomes of experiments.
So the results of an anisotropy of space experiment
must be presented (under the SI) as the length of meter rods
depending on their orientation in space.
(even if it may loosely be called differently)
It has no bearing at all on the possibility of doing such experiments.

Jan

PS Given unexpected outcomes of such experiments
those in the know may of course rethink the SI.
No need or use to pre-think such hypotheticalities.
ProkaryoticCaspaseHomolog
2024-12-05 12:42:18 UTC
Reply
Permalink
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
The mere fact that theory and over a century of experimental
validation have led to the speed of light being adopted as a constant
does not invalidate experiments intended to verify to increasing
levels of precision the correctness of the assumptions that led to
it adoption as a constant.
So you haven't understood what it is all about.
I rest my case,
You prematurely rest your case.
OK. Maybe I gave up on you too soon.
Post by ProkaryoticCaspaseHomolog
Since 1983, the speed of light in vacuum has been defined as exactly
equal to 299,792,458 meters per second.
Correct, almost.
Conceptually better: the meter is defined as....
The CGPM is concerned with how measurements are to be done,
not with theoretical proclamations.
Post by ProkaryoticCaspaseHomolog
Given this definition, is there any point to conducting experiments
to test whether there are anisotropies in the speed of light due to
Earth's motions in space? Such as these: https://tinyurl.com/8hkry7k3
The definition of the speed of light is such that there can't be.
Right?
That's where you go wrong.
The agreement to give c a defined value
is irrelevant to any experiment.
That was PRECISELY my point!
Post by J. J. Lodder
It is a convention that tells us how to represent
the outcomes of experiments.
So the results of an anisotropy of space experiment
must be presented (under the SI) as the length of meter rods
depending on their orientation in space.
(even if it may loosely be called differently)
It has no bearing at all on the possibility of doing such experiments.
EXACTLY!
Post by J. J. Lodder
Jan
PS Given unexpected outcomes of such experiments
those in the know may of course rethink the SI.
No need or use to pre-think such hypotheticalities.
Given that you obviously understand this point, please think back
on your comments on experiments intended to verify E=mc^2.
Even post-2018, properly framed questions on the validity of
E=mc^2 may still be posed.
Paul B. Andersen
2024-12-05 14:26:08 UTC
Reply
Permalink
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
The mere fact that theory and over a century of experimental
validation have led to the speed of light being adopted as a constant
does not invalidate experiments intended to verify to increasing
levels of precision the correctness of the assumptions that led to
it adoption as a constant.
So you haven't understood what it is all about.
I rest my case,
Jan
The meter is defined as:

1 metre = (1 sec/⁠299792458⁠ m/s)

1 second = 9192631770 Δν_Cs

Note that neither the definition of second nor the definition
of metre depend on the speed of light.

The constant ⁠299792458⁠ m/s is equal to the defined speed of light,
but in the definition of the metre it is a constant.

That means that it is possible to measure the speed of light
even if it is different from the defined value.

So if the speed of light, measured with instruments with better
precision than they had in 1983 is found to be 299792458⁠.000001 m/s,
then that only means that the real speed of light (measured with
SI metre and SI second) is different from the defined one.

The point is that the metre isn't defined by the speed of light,
but by the constant 299792458 m/s.
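
To put a number on that, here is a small illustrative sketch (Python) with a
purely hypothetical reading; it shows how a reported value slightly off the
defined constant translates into a fractional offset of the metre/second
realization, and it is the same arithmetic behind the figure
9192631770.0000306 that appears later in the thread:

  c_defined   = 299792458          # m/s, exact by definition
  c_reported  = 299792458.000001   # m/s, hypothetical reading (illustration only)
  delta_nu_cs = 9192631770         # Hz, exact by definition

  offset = c_reported / c_defined - 1  # ~3.34e-15 fractional offset
  print(offset)
  print(offset * delta_nu_cs)          # ~3.07e-5 Hz equivalent shift on the Cs frequency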
--
Paul

https://paulba.no/
rhertz
2024-12-05 15:21:52 UTC
Reply
Permalink
SOME PHYSICAL CONSTANTS FIXED BY COLLUSION LED BY CIPM AND ADOPTED BY
DIFFERENT CGPM (GENERAL CONFERENCE ON WEIGHTS AND MEASURES) SINCE 1970:

________________________________________
1. Speed of Light in Vacuum (c)
Value: 299,792,458 m/s
Fixed: 1983, fixed by the 17th CGPM

DERIVED: 1 meter is the distance light travels in vacuum in
1/299,792,458 seconds.
________________________________________
2. Second (s)
Definition based on: Transition between two hyperfine levels of the
ground state of the cesium-133 atom.
Fixed: 1967, by the 13th CGPM

Second: the duration of 9,192,631,770 periods of the radiation
corresponding to the transition between the two levels.
________________________________________
3. Planck Constant (h)
Value: 6.626 070 15 × 10⁻³⁴ J•s
Fixed: 2019, by the 26th CGPM

DERIVED: The kilogram is now defined in terms of the Planck constant and
the meter and second (related through quantum mechanics).
________________________________________
4. Elementary Charge (e)
Value: 1.602 176 634 × 10⁻¹⁹ C
Fixed: 2019
The same redefinition of the SI system in 2019 fixed the value of the
elementary charge, effectively redefining the ampere.
________________________________________
5. Boltzmann Constant (k)
Value: 1.380 649 × 10⁻²³ J/K
Fixed: 2019
This fixed value redefined the kelvin as a unit of thermodynamic
temperature, relating it directly to energy.
________________________________________
6. Avogadro Constant (Nₐ)
Value: 6.02214076 × 10²³ mol⁻¹
Fixed: 2019
The Avogadro constant was set to an exact value in 2019, redefining the
mole.
1 mole now contains exactly 6.02214076 × 10²³ entities (atoms,
molecules, etc.).
________________________________________
7. Magnetic Constant (μ₀)
Old fixed value: 4π × 10⁻⁷ N/A² (exact); since 2019 no longer fixed

Before 2019, μ₀ was defined EXACTLY, but now it is A DERIVED CONSTANT,
based on the fine-structure constant (α), Planck constant, and the
elementary charge.
________________________________________
8. Electric Constant (ε₀)
Similar to μ₀, the electric constant is no longer fixed but DERIVED from
other constants like the speed of light and magnetic permeability.
________________________________________
9. Fine-Structure Constant (α)
Not fixed but DERIVED from other constants: α = e²/(4πε₀ℏc) ≈ 1/137
________________________________________
12. Permeability of Free Space (μ₀)

Old fixed value: 4π × 10⁻⁷ N•A⁻² (EXACT VALUE)

Since 2019, DERIVED based on the speed of light (c), Planck constant
(h), and elementary charge (e), and no longer fixed.

It is related to the fine-structure constant: μ₀ = 2hα/(e²c)
________________________________________
13. Permittivity of Free Space (ε₀)
Old FIXED value: 8.854187817 × 10⁻¹² F•m⁻¹ (exact value)

After 2019, DERIVED using the relation: ε₀ = 1/(μ₀c²)

***************************************************

Coherence is obtained by FIXING some "constants" and deriving others
from the FIXED ONES.

Like the speed of light in vacuum (FIXED), used to DERIVE Permittivity
of Free Space (ε₀) (CALCULATED) and Permeability of Free Space (μ₀).

As Maxwell stated (and used values of that epoch, 160 years ago):


c₀ = 1/√(ε₀μ₀)


Now, c₀ is related to

- Planck's constant h,

Fine-Structure Constant α = e²/(4πε₀ℏc)

- Elementary charge e

- Permittivity and Permeability of Free Space, ε₀ and μ₀


Does anybody see a RECURSION here?
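
For what it is worth, the chain can be written out numerically. A minimal
sketch (Python) using the relations quoted above, with a CODATA value of α
(which is measured, not fixed) as the only input besides the fixed constants:

  import math

  h = 6.62607015e-34       # Planck constant, J*s (fixed)
  e = 1.602176634e-19      # elementary charge, C (fixed)
  c = 299792458            # speed of light, m/s (fixed)
  alpha = 7.2973525693e-3  # fine-structure constant, CODATA 2018 (measured)

  mu0  = 2 * h * alpha / (e**2 * c)  # ~1.25663706e-6 N/A^2, close to the old 4*pi*1e-7
  eps0 = 1 / (mu0 * c**2)            # ~8.8541878e-12 F/m

  # Maxwell's relation then closes the loop by construction:
  print(1 / math.sqrt(eps0 * mu0))   # ~299792458 m/s, i.e. the fixed c again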

The buildings of physics and chemistry, based on the values of
"universal constants", are becoming more and more cohesive, but at the cost
of extraordinary levels of entanglement between these constants.

In chemistry, the struggle to refine the values of atomic weights and
atomic masses has been extraordinary since WWII, in particular due to efforts
to refine the mass of the neutron (INDIRECTLY) and the nuclear binding
energy (using very complex formulae and difficult bombardment of atomic
nuclei, to obtain the energy lost by nuclei).

Watch the attached figure:





__________________
J. J. Lodder
2024-12-05 18:42:24 UTC
Reply
Permalink
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
The mere fact that theory and over a century of experimental
validation have led to the speed of light being adopted as a constant
does not invalidate experiments intended to verify to increasing
levels of precision the correctness of the assumptions that led to
it adoption as a constant.
So you haven't understood what it is all about.
I rest my case,
Jan
1 metre = (1 sec/299792458 m/s)
1 second = 9192631770 Δν_Cs
Note that neither the definition of second nor the definition
of metre depend on the speed of light.
The constant 299792458 m/s is equal to the defined speed of light,
but in the definition of the metre it is a constant.
That means that it is possible to measure the speed of light
even if it is different from the defined value.
The point is that the metre isn't defined by the speed of light,
but by the constant 299792458 m/s.
So you didn't get the point either.
(also suffering from a naive empiricist bias, I guess)

The point is not about pottering around with lasers and all that,
it is about correctly interpreting what you are doing.
To do that you need to understand the physics of it.

In fact, the kind of experiments that used to be called
'speed of light measurements' (so before 1983)
are still being done routinely today, at places like NIST, or BIPM.
The difference is that nowadays, precisely the same kind of measurements
are called 'calibration of a (secondary) meter standard',
or 'calibration of a frequency standard'. [1]
So if the speed of light, measured with instruments with better
precision than they had in 1983 is found to be 299792458.000001 m/s,
then that only means that the real speed of light (measured with
SI metre and SI second) is different from the defined one.
So this is completely, absolutely, and totally wrong.
Such a result does not mean that the speed of light
is off its defined value,
it means that your meter standard is off,
and that you must use your measurement result to recalibrate it.
(so that the speed of light comes out to its defined value)

In other words, it means that you can nowadays
calibrate a frequency standard, aka secondary meter standard
to better accuracy than was possible in 1983.
This is no doubt true,
but it cannot possibly change the (defined!) speed of light.

In still other words, there is no such thing as an independent SI meter.
The SI meter is that meter, and only that meter,
that makes the speed of light equal to 299792458 m/s (exactly)

Jan
--
But that is wrong! Totally wrong, even!! (Wolfgang Pauli)


[1] They publish 'preferred values' for the frequencies
of a number of standard laser lines.
Ross Finlayson
2024-12-06 02:29:15 UTC
Reply
Permalink
Post by J. J. Lodder
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
The mere fact that theory and over a century of experimental
validation have led to the speed of light being adopted as a constant
does not invalidate experiments intended to verify to increasing
levels of precision the correctness of the assumptions that led to
it adoption as a constant.
So you haven't understood what it is all about.
I rest my case,
Jan
1 metre = (1 sec/299792458 m/s)
1 second = 9192631770 Δν_Cs
Note that neither the definition of second nor the definition
of metre depend on the speed of light.
The constant 299792458 m/s is equal to the defined speed of light,
but in the definition of the metre it is a constant.
That means that it is possible to measure the speed of light
even if it is different from the defined value.
The point is that the metre isn't defined by the speed of light,
but by the constant 299792458 m/s.
So you didn't get the point either.
(also suffering from a naive empirist bias, I guess)
The point is not about pottering around with lasers and all that,
it is about correctly interpreting what you are doing.
To do that you need to understand the physics of it.
In fact, the kind of experiments that used to be called
'speed of light measurements' (so before 1983)
are still being done routinely today, at places like NIST, or BIPM.
The difference is that nowadays, precisely the same kind of measurements
are called 'calibration of a (secudary) meter standard',
or 'calibration of a frequency standard'. [1]
So if the speed of light, measured with instruments with better
precision than they had in 1983 is found to be 299792458.000001 m/s,
then that only means that the real speed of light (measured with
SI metre and SI second) is different from the defined one.
So this is completely, absolutely, and totally wrong.
Such a result does not mean that the speed of light
is off its defined value,
it means that your meter standard is off,
and that you must use your measurement result to recalibrate it.
(so that the speed of light comes out to its defined value)
In other words, it means that you can nowadays
calibrate a frequency standard, aka secundary meter standard
to better accuracy than was possible 1n 1983.
This is no doubt true,
but it cannot possibly change the (defined!) speed of light.
In still other words, there is no such thing as an independent SI meter.
The SI meter is that meter, and only that meter,
that makes the speed of light equal to 299792458 m/s (exactly)
Jan
Not only "deep space in a vacuum, alone, at constant velocity",
yet, what is the "radius of gyration"?
rhertz
2024-12-06 03:55:45 UTC
Reply
Permalink
Permittivity and permeability at the center of each galaxy are different
from the values of ε₀ and μ₀ on the outer limits of each one.

So, the value of c₀ = 1/√(ε₀μ₀) applies only locally.

c, out there, can be higher or lower than the fixed 299,792,458 m/s that
arrogant assholes here want to project and use for the entire infinite
universe.

Even Alan Guth's Big Bang model considers this a fact in the
theory, in particular from the first 10E-32 seconds up to 300,000 years
after the inflation. The speed of light is considered, in the BBT, as
almost infinite at the beginning of light, when it appeared after the
BB.

The VELOCITY of light was anything but isotropic, and has been slowing
down since the dawn of time.

Now, who can be so imbecilic as to believe that c₀ = 299,792,458 m/s applies
to the entire universe of now, 3,000,000,000 ly away from here?

Only retarded scientists in the last 100 years, so they can keep
pretending that they can MODEL the entire universe and its behavior,
living on a remote speck of dust called Earth.
J. J. Lodder
2024-12-06 10:48:39 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
The mere fact that theory and over a century of experimental
validation have led to the speed of light being adopted as a constant
does not invalidate experiments intended to verify to increasing
levels of precision the correctness of the assumptions that led to
it adoption as a constant.
So you haven't understood what it is all about.
I rest my case,
Jan
1 metre = (1 sec/299792458 m/s)
1 second = 9192631770 Δν_Cs
Note that neither the definition of second nor the definition
of metre depend on the speed of light.
The constant 299792458 m/s is equal to the defined speed of light,
but in the definition of the metre it is a constant.
That means that it is possible to measure the speed of light
even if it is different from the defined value.
The point is that the metre isn't defined by the speed of light,
but by the constant 299792458 m/s.
So you didn't get the point either.
(also suffering from a naive empirist bias, I guess)
The point is not about pottering around with lasers and all that,
it is about correctly interpreting what you are doing.
To do that you need to understand the physics of it.
In fact, the kind of experiments that used to be called
'speed of light measurements' (so before 1983)
are still being done routinely today, at places like NIST, or BIPM.
The difference is that nowadays, precisely the same kind of measurements
are called 'calibration of a (secudary) meter standard',
or 'calibration of a frequency standard'. [1]
So if the speed of light, measured with instruments with better
precision than they had in 1983 is found to be 299792458.000001 m/s,
then that only means that the real speed of light (measured with
SI metre and SI second) is different from the defined one.
So this is completely, absolutely, and totally wrong.
Such a result does not mean that the speed of light
is off its defined value,
it means that your meter standard is off,
and that you must use your measurement result to recalibrate it.
(so that the speed of light comes out to its defined value)
In other words, it means that you can nowadays
calibrate a frequency standard, aka secundary meter standard
to better accuracy than was possible 1n 1983.
This is no doubt true,
but it cannot possibly change the (defined!) speed of light.
In still other words, there is no such thing as an independent SI meter.
The SI meter is that meter, and only that meter,
that makes the speed of light equal to 299792458 m/s (exactly)
Not only "deep space in a vacuum, alone, at constant velocity",
yet, what is the "radius of gyration"?

There is no SI meter in "deep space in a vacuum, alone, at constant
velocity"
SI meters exist only in SI standards laboratories.
Elsewhere there are only less accurate copies.

Jan
Ross Finlayson
2024-12-06 18:48:24 UTC
Reply
Permalink
Post by Ross Finlayson
Post by ProkaryoticCaspaseHomolog
The mere fact that theory and over a century of experimental
validation have led to the speed of light being adopted as a constant
does not invalidate experiments intended to verify to increasing
levels of precision the correctness of the assumptions that led to
it adoption as a constant.
So you haven't understood what it is all about.
I rest my case,
Jan
1 metre = (1 sec/299792458 m/s)
1 second = 9192631770 Δν_Cs
Note that neither the definition of second nor the definition
of metre depend on the speed of light.
The constant 299792458 m/s is equal to the defined speed of light,
but in the definition of the metre it is a constant.
That means that it is possible to measure the speed of light
even if it is different from the defined value.
The point is that the metre isn't defined by the speed of light,
but by the constant 299792458 m/s.
So you didn't get the point either.
(also suffering from a naive empirist bias, I guess)
The point is not about pottering around with lasers and all that,
it is about correctly interpreting what you are doing.
To do that you need to understand the physics of it.
In fact, the kind of experiments that used to be called
'speed of light measurements' (so before 1983)
are still being done routinely today, at places like NIST, or BIPM.
The difference is that nowadays, precisely the same kind of measurements
are called 'calibration of a (secudary) meter standard',
or 'calibration of a frequency standard'. [1]
So if the speed of light, measured with instruments with better
precision than they had in 1983 is found to be 299792458.000001 m/s,
then that only means that the real speed of light (measured with
SI metre and SI second) is different from the defined one.
So this is completely, absolutely, and totally wrong.
Such a result does not mean that the speed of light
is off its defined value,
it means that your meter standard is off,
and that you must use your measurement result to recalibrate it.
(so that the speed of light comes out to its defined value)
In other words, it means that you can nowadays
calibrate a frequency standard, aka secundary meter standard
to better accuracy than was possible 1n 1983.
This is no doubt true,
but it cannot possibly change the (defined!) speed of light.
In still other words, there is no such thing as an independent SI meter.
The SI meter is that meter, and only that meter,
that makes the speed of light equal to 299792458 m/s (exactly)
Not only "deep space in a vacuum, alone, at constant velocity",
yet, what is the "radius of gyration"?
There is no SI meter in "deep space in a vacuum, alone, at constant
velocity"
SI meters exist only in SI standards laboratories.
Elsewhere there are only less accurate copies.
Jan
Oh, here there's also a mathematical universe hypothesis,
of a continuum mechanics and about infinity in nature,
and also there's a "real realism", as with regards to
the _ideal_, the _ideal_, of length in space and
the meter of space thusly also the meter of time.


It's like, I'm reading Tinkham's group theory book, it's
pretty great and a lot of algebra then at the end he
details a bunch of groups then these with these just
arbitrary seeming "well it's less than a dozen" giving
reasons why the usual "metrics" and "amplitudes" are
quite very tattered at the edges.
J. J. Lodder
2024-12-06 10:48:39 UTC
Reply
Permalink
Post by rhertz
Permittivity and permeability at the center of each galaxy are different
from the values of ε₀ and μ₀ on the outer limits of each one.
So, the value of c₀ = 1/√(ε₀μ₀) applies only locally.
There we go again.
Is there really no part of physics that you don't misunderstand?
FYI, eps_0 and mu_0 are not physical quantities.
They are artifacts of an ill-conceived unit system. (the SI)

In any half-way decent unit system they are both equal to 1,
with c appearing explicitly in Maxwell's equations in the right places.

Even saying that they have been put equal to one is too kind to them.
They just have no physical existence at all,

Jan
--
They are as physical as the 'tractability of free space' tau_0.
You know, the dimensionless constant with value one
that appears in Newton's force law:

F = tau_0 m a
ProkaryoticCaspaseHomolog
2024-12-07 04:51:50 UTC
Reply
Permalink
Post by rhertz
Permittivity and permeability at the center of each galaxy are different
from the values of ε₀ and μ₀ on the outer limits of each one.
So, the value of c₀ = 1/√(ε₀μ₀) applies only locally.
Please provide references for this assertion.
Post by rhertz
c, out there, can be higher or lower than the fixed 299,792,458 m/s that
arrogant assholes here want to project and use for the entire infinite
universe.
Even the Alan Guth's Big Bang model consider this as a fact in the
theory, in particular since the first 10E-32 seconds up to 300,000 years
after the inflation. The speed of light is considered, in the BBT, as
almost infinite at the beginning of light, when it appeared after the
BB.
The VELOCITY of light was anything but isotropic, and has been slowing
down since the dawn of time.
Now, who can be so imbecile to believe that c₀ = 299,792,458 m/s apply
for the entire universe of now, 3,000,000,000 ly far from here?
The question as to whether accepted physical "constants" are indeed
constant over all space and all time is an important one. Various
theories suggest that they may not be.

Experiments and careful measurements, both astronomical and in
terrestrial laboratories, have been conducted to test whether such
variations in fact exist. Here are some references on searches for
variation in the fine structure constant, the gravitational constant
and so forth. It's easy to find LOTS more. To date, all measured
variations have been consistent with zero.

https://www.nature.com/articles/s41586-020-2964-7
https://www.science.org/doi/full/10.1126/sciadv.aay9672
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.124.081101
https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.126.051301
https://link.springer.com/article/10.1140/epjc/s10052-020-7727-y
Post by rhertz
Only retarded scientists in the last 100 years, so they can keep
pretending that they can MODEL the entire universe and its behavior,
living in a remote dust calling Earth.
Paul B. Andersen
2024-12-06 13:46:56 UTC
Reply
Permalink
Post by J. J. Lodder
Post by Paul B. Andersen
So if the speed of light, measured with instruments with better
precision than they had in 1983 is found to be 299792458.000001 m/s,
then that only means that the real speed of light (measured with
SI metre and SI second) is different from the defined one.
Note: measured with SI metre and SI second.
Post by J. J. Lodder
So this is completely, absolutely, and totally wrong.
Such a result does not mean that the speed of light
is off its defined value,
it means that your meter standard is off,
and that you must use your measurement result to recalibrate it.
(so that the speed of light comes out to its defined value)
The 1983 definition of the speed of light is:
c = 299792458 m/s

The 1983 definition of second is:
1 second = 9192631770 ΔνCs

The 1983 definition of meter is:
1 metre = 1 second/299792458 m/s

The 2019 definition of meter is:
1 metre = 9192631770 ΔνCs/299792458 m/s

If the speed of light is measured _with the meter and second
defined above_ it is obviously possible to get a result slightly
different from the defined speed of light.

So I was not "completely, absolutely, and totally wrong".

Are you saying that if we got the result 299792458.000001 m/s
then the metre would have to be recalibrated to:
1 metre = 9192631770 ΔνCs/299792458.000001 m/s ?
Post by J. J. Lodder
In other words, it means that you can nowadays
calibrate a frequency standard, aka secundary meter standard
to better accuracy than was possible 1n 1983.
Or are you saying that we would have to recalibrate the meter to:
1 metre = 9192631770.0000306 ΔνCs/299792458 m/s ?
Post by J. J. Lodder
This is no doubt true,
but it cannot possibly change the (defined!) speed of light.
In still other words, there is no such thing as an independent SI meter.
The SI meter is that meter, and only that meter,
that makes the speed of light equal to 299792458 m/s (exactly)
Jan
In fact, the kind of experiments that used to be called
'speed of light measurements' (so before 1983)
are still being done routinely today, at places like NIST, or BIPM.
The difference is that nowadays, precisely the same kind of measurements
are called 'calibration of a (secudary) meter standard',
or 'calibration of a frequency standard'.
Is any such recalibration of the meter ever done?
And which "frequency standard" are you referring to?
The definition of a second?
--
Paul

https://paulba.no/
J. J. Lodder
2024-12-06 20:00:10 UTC
Reply
Permalink
Post by Paul B. Andersen
Post by J. J. Lodder
Post by Paul B. Andersen
So if the speed of light, measured with instruments with better
precision than they had in 1983 is found to be 299792458.000001 m/s,
then that only means that the real speed of light (measured with
SI metre and SI second) is different from the defined one.
Note: measured with SI metre and SI second.
Post by J. J. Lodder
So this is completely, absolutely, and totally wrong.
Such a result does not mean that the speed of light
is off its defined value,
it means that your meter standard is off,
and that you must use your measurement result to recalibrate it.
(so that the speed of light comes out to its defined value)
c = 299792458 m/s
1 second = 9192631770 ΔνCs
1 metre = 1 second/299792458 m/s
1 metre = 9192631770 ΔνCs/299792458 m/s
If the speed of light is measured _with the meter and second
defined above_ it is obviously possible to get a result slightly
different from the defined speed of light.
So I was not "completely, absolutely, and totally wrong".
You were, and it would seem that you still are.
You cannot measure the speed of light because it has a defined value.
If you think that what you are doing is a speed-of-light
measurement, you don't understand what you are doing.
Post by Paul B. Andersen
Are you saying that if we got the result 299792458.000001 m/s
1 metre = 9192631770 ΔνCs/299792458.000001 m/s ?
Of course not.
All it would mean is that you have made some systematic error
with your particular implementation of the SI meter.
Post by Paul B. Andersen
Post by J. J. Lodder
In other words, it means that you can nowadays
calibrate a frequency standard, aka secondary meter standard
to better accuracy than was possible in 1983.
1 metre = 9192631770.0000306 ΔνCs/299792458 m/s ?
Neither. The SI meter is a secondary standard that must be calibrated
such that the speed of light comes to 299792458 m/s.
Post by Paul B. Andersen
Post by J. J. Lodder
This is no doubt true,
but it cannot possibly change the (defined!) speed of light.
In still other words, there is no such thing as an independent SI meter.
The SI meter is that meter, and only that meter,
that makes the speed of light equal to 299792458 m/s (exactly)
Jan
In fact, the kind of experiments that used to be called
'speed of light measurements' (so before 1983)
are still being done routinely today, at places like NIST, or BIPM.
The difference is that nowadays, precisely the same kind of measurements
are called 'calibration of a (secudary) meter standard',
or 'calibration of a frequency standard'.
Is any such recalibration of the meter ever done?
Of course, routinely, on a day to day basis.
Guess there are whole departments devoted to it.
(it is a subtle art)
The results are published nowadays as a list of frequencies
of preferred optical frequency standards.
(measuring the frequency of an optical frequency standard
and calibrating a secondary meter standard are just two different ways
of saying the same thing)
And remember, there is no longer such a thing as -the- meter.
It is a secondary unit, and any convenient secondary standard will do.
Post by Paul B. Andersen
And which "frequency standard" are you referring to?
Any optical frequency standard of known frequency
defines a secondary meter standard.
(because given the frequency, you know the wavelength,
so you can measure lengths by interferometry)

A commonly used one is a certain stabilised He-Ne laser.
(of specified construction)
Post by Paul B. Andersen
The definition of a second?
Of course not, that is fixed. (for the time being)
It is the frequency that all other frequencies must relate to.
It will be replaced in the not too far future
by an optical frequency standard. (yet to be chosen)

Finally, you really need to get yourself out of the conceptual knot
that you have tied yourself in.
Something is either defined, or it can be measured.
It can't possibly be both,

Jan
Ross Finlayson
2024-12-06 21:27:48 UTC
Reply
Permalink
Post by J. J. Lodder
Post by Paul B. Andersen
Post by J. J. Lodder
Post by Paul B. Andersen
So if the speed of light, measured with instruments with better
precision than they had in 1983 is found to be 299792458.000001 m/s,
then that only means that the real speed of light (measured with
SI metre and SI second) is different from the defined one.
Note: measured with SI metre and SI second.
Post by J. J. Lodder
So this is completely, absolutely, and totally wrong.
Such a result does not mean that the speed of light
is off its defined value,
it means that your meter standard is off,
and that you must use your measurement result to recalibrate it.
(so that the speed of light comes out to its defined value)
c = 299792458 m/s
1 second = 9192631770 ΔνCs
1 metre = 1 second/299792458 m/s
1 metre = 9192631770 ΔνCs/299792458 m/s
If the speed of light is measured _with the meter and second
defined above_ it is obviously possible to get a result slightly
different from the defined speed of light.
So I was not "completely, absolutely, and totally wrong".
You were, and it would seem that you still are.
You cannot measure the speed of light because it has a defined value.
If you would think that what you are doing is a speed of light
measurement you don't understand what you are doing.
Post by Paul B. Andersen
Are you saying that if we got the result 299792458.000001 m/s
1 metre = 9192631770 ΔνCs/299792458.000001 m/s ?
Of course not.
All it would mean is that you have made some systematic error
with your particular implementattion of the SI meter.
Post by Paul B. Andersen
Post by J. J. Lodder
In other words, it means that you can nowadays
calibrate a frequency standard, aka secondary meter standard
to better accuracy than was possible in 1983.
1 metre = 9192631770.0000306 ΔνCs/299792458 m/s ?
Neither. The SI meter is a secondary standard that must be calibrated
such that the speed of light comes to 299792458 m/s.
Post by Paul B. Andersen
Post by J. J. Lodder
This is no doubt true,
but it cannot possibly change the (defined!) speed of light.
In still other words, there is no such thing as an independent SI meter.
The SI meter is that meter, and only that meter,
that makes the speed of light equal to 299792458 m/s (exactly)
Jan
In fact, the kind of experiments that used to be called
'speed of light measurements' (so before 1983)
are still being done routinely today, at places like NIST, or BIPM.
The difference is that nowadays, precisely the same kind of measurements
are called 'calibration of a (secudary) meter standard',
or 'calibration of a frequency standard'.
Is any such recalibration of the meter ever done?
Of course, routinely, on a day to day basis.
Guess there are whole departments devoted to it.
(it is a subtle art)
The results are published nowadays as a list of frequencies
of prefered optical frequency standards.
(measuring the frequency of an optical frequency standard
and calibrating a secondary meter standard are just two different ways
of saying the same thing)
And remember, there is no longer such a thing as -the- meter.
It is a secondary unit, and any convenient secondary standard will do.
Post by Paul B. Andersen
And which "frequency standard" are you referring to?
Any optical frequency standard of known frequency
defines a secondary meter standard.
(because given the frequency, you know the wavelength,
so you can measure lengths by interferometry)
A commonly used one is a certain stabilised He-Ne laser.
(of specified construction)
Post by Paul B. Andersen
The definition of a second?
Of course not, that is fixed. (for the time being)
It is the frequency that all other frequencies must relate to.
It will be replaced in the not to far future
by an optical frequency standard. (yet to be chosen)
Finaly, you really need to get yourself out of the conceptual knot
that you have tied yourself in.
Something is either defined, or it can be measured.
It can't possibly be both,
Jan
Oh, what then of quasi-invariant measure theory,
with regards to Jordan measure, Lebesgue measure,
with regards to Shannon and Nyquist, about three
different continuous domains their models in mathematics,
and with regards to what yesterday was _defined_
yet today may be _derived_?

Seems you should add "in deep space, in a vacuum,
at constant velocity, at an instant in time", ....

Things are usually defined by what's measured.
ProkaryoticCaspaseHomolog
2024-12-07 01:21:04 UTC
Reply
Permalink
Post by J. J. Lodder
Finaly, you really need to get yourself out of the conceptual knot
that you have tied yourself in.
Something is either defined, or it can be measured.
It can't possibly be both,
Sure it can, provided that you use a different measurement standard
than the one used in the definition.

It would not make sense to quantify hypothetical variations in the
speed of light in terms of the post-1983 meter. But they would make
sense in terms of pre-1983 meters. Or (assuming some incredible ramp-up
in technology, perhaps introduced by Larry Niven-ish Outsiders) in
terms of a meter defined as the distance massless gluons travel in
1/299,792,458 of a second. Or gravitons... :-)
Ross Finlayson
2024-12-07 02:30:38 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
Finaly, you really need to get yourself out of the conceptual knot
that you have tied yourself in.
Something is either defined, or it can be measured.
It can't possibly be both,
Sure it can, provided that you use a different measurement standard
than the one used in the definition.
It would not make sense to quantify hypothetical variations in the
speed of light in terms of the post-1983 meter. But they would make
sense in terms pre-1983 meters. Or (assuming some incredible ramp-up
in technology, perhaps introduced by Larry Niven-ish Outsiders) in
terms of a meter defined as the distance massless gluons travel in
1/299,792,458 of a second. Or gravitons... :-)
What's relevant is standards and systems of units,
when for example different units are simply defined
inverses of each other yet either way vary,
and the average versus the instantaneous,
matters of projection and perspective,
configurations and energies of experiment,
mostly though about the algebraic derivations,
and "quantities" when formulaic, and about
when "quantities" in the mathematical are
actually yet "systems of derivations", then
there's also the whole "infinitely-many higher-order
derivatives of displacement" with respect to motion
itself, and these kinds of things.


The constants are just kind of last, ....

In a theory of sum-of-histories/sum-of-potentials,
it's the potential fields that are the real fields,
and quantities are just the last sort of quantities,
also, and yet histories/potentials themselves.

It's usually enough a field theory a gauge theory,
where particle physics lives, then with regards to
relativity it's coordinate-free thanks to tensor spaces,
so, fields, vis-a-vis material points and bodies.

So, the entire complementary dual of point-and-space
itself, geometry, is a many-splendored thing, or,
a "continuous manifold" as it's usually relayed.


The usual notion of "applied mathematics" of course
is quite thoroughly rational, ..., physics isn't
necessarily, because it's a "continuum mechanics".
J. J. Lodder
2024-12-07 11:03:24 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
Finaly, you really need to get yourself out of the conceptual knot
that you have tied yourself in.
Something is either defined, or it can be measured.
It can't possibly be both,
Sure it can, provided that you use a different measurement standard
than the one used in the definition.
Sure, you can be inconsistent, if you choose to be.
Don't expect meaningful results.
Post by ProkaryoticCaspaseHomolog
It would not make sense to quantify hypothetical variations in the
speed of light in terms of the post-1983 meter. But they would make
sense in terms pre-1983 meters. Or (assuming some incredible ramp-up
in technology, perhaps introduced by Larry Niven-ish Outsiders) in
terms of a meter defined as the distance massless gluons travel in
1/299,792,458 of a second. Or gravitons... :-)
Completely irrelevant,
and it does not get you out of your conceptual error as stated above.

Summary: There must be:
1) a length standard, 2) a frequency standard [1], and 3) c

Two of the three must be defined, the third must be measured.
Pre-1983 1) and 2) were defined, and 3), c was measured.
Post-1983 2) and c are defined, 1) must be measured.
So in 1983 we have collectively decided that any future refinement
in measurement techniques will result in more accurate meter standards,
not in a 'better' value for c. [2]
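To make that pre/post-1983 bookkeeping concrete, here is a small Python
sketch of the same relation c = f * wavelength read in the two directions
(all numbers illustrative, not metrological data; the post-1983 direction
is the same computation as the He-Ne example earlier in the thread):

# Pre-1983: the metre was defined via the Kr-86 line
# (1 m = 1 650 763.73 wavelengths), the second via Cs; c was the measured result.
lam_Kr = 1 / 1_650_763.73          # m, defined wavelength of the Kr-86 line
f_Kr   = 4.94886516e14             # Hz, hypothetical measured frequency of that line
print(f"pre-1983: measured c = {f_Kr * lam_Kr:,.0f} m/s")

# Post-1983: c is defined; measuring a laser frequency against the Cs standard
# *is* the calibration of a secondary metre standard (its wavelength).
c_def   = 299_792_458              # m/s, defined
f_laser = 4.73612354e14            # Hz, hypothetical calibration result
print(f"post-1983: realised metre standard (wavelength) = {c_def / f_laser * 1e9:.3f} nm")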

Finally, an exercise for you personally.
You quoted a pre-2018 experiment that verified that E=mc^2
to some high accuracy. (using the measured value of Planck's constant)
Post-2018, Planck's constant has a defined value,
and E=mc^2 is true by definition. (of the Joule and the kilogram)

So E=mc^2 can no longer be verified by any possible experiment.
Now:
Ex1) Does this make the experiment you quoted worthless?
Ex2) If not, what does that experiment demonstrate?

Jan


[1] Or a time standard, which amounts to the same in other words.
But defining it as a frequency standard is more 'natural'.

[2] Note that all this has nothing whatsoever to do with physics.
(like c being 'really' constant in some sense or something like that)
It is all about metrology, so about the ways -we agree upon-
to have standards in the most stable, accurate, and reproducible way.
ProkaryoticCaspaseHomolog
2024-12-07 16:03:31 UTC
Reply
Permalink
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
Finaly, you really need to get yourself out of the conceptual knot
that you have tied yourself in.
Something is either defined, or it can be measured.
It can't possibly be both,
Sure it can, provided that you use a different measurement standard
than the one used in the definition.
Sure, you can be inconsistent, if you choose to be.
Don't expect meaningful results.
Post by ProkaryoticCaspaseHomolog
It would not make sense to quantify hypothetical variations in the
speed of light in terms of the post-1983 meter. But they would make
sense in terms pre-1983 meters. Or (assuming some incredible ramp-up
in technology, perhaps introduced by Larry Niven-ish Outsiders) in
terms of a meter defined as the distance massless gluons travel in
1/299,792,458 of a second. Or gravitons... :-)
Completely irrelevant,
and it does not get you out of your conceptual error as stated above.
1) a length standard, 2) a frequency standard [1], and 3) c
Two of the three must be defined, the third must be measured.
Pre-1983 1) and 2) were defined, and 3), c was measured.
Post-1983 2) and c are defined, 1) must be measured.
So in 1983 we have collectively decided that any future refinement
in measurement techniques will result in more accurate meter standards,
not in a 'better' value for c. [2]
You don't "get" the point that I was trying to make. Let us review

| Resolution 1 of the 17th CGPM (1983)
| Definition of the metre
| The 17th Conférence Générale des Poids et Mesures (CGPM),
| considering
[Skip over the first several considerations]
| - that a new definition of the metre has been envisaged in various
| forms all of which have the effect of giving the speed of light an
| exact value, equal to the recommended value, and that this
| introduces no appreciable discontinuity into the unit of length,
| taking into account the relative uncertainty of ± 4 × 10^-9 of the
| best realizations of the present definition of the metre,
[Skip over the last two considerations]
| decides
| - The metre is the length of the path travelled by light in vacuum
| during a time interval of 1/299 792 458 of a second.
| - The definition of the metre in force since 1960, based upon the
| transition between the levels 2p10 and 5d5 of the atom of
| krypton 86, is abrogated.
https://www.bipm.org/en/committees/cg/cgpm/17-1983/resolution-1

Gamma ray burst observations have constrained the arrival times
between the visible light and gamma ray components of the burst to
be equal to within 10^-15 of the total travel time of the burst.
Current theory holds that the gamma rays are part of the "prompt
emission", while the visible light results from the "afterglow".

======================================================================
Let us presume that I want to experimentally explore various
alternative hypotheses to account for the difference in arrival times.

The shortest visible light pulses have a duration of about 10^-15 s,
this shortest duration being roughly equal to Δt = λ/c

Let us presume that some future technology enables us to generate, at
will, short pulses of gamma radiation of duration comparable to or
shorter than that of the visible light pulses mentioned above.

I set up a 10000 meter vacuum chamber. At various points along the
chamber, I set up visible light and gamma ray detectors.

At 10 meters from the source, I can't tell which pulse arrives first.

At 100 meters from the source, I'm getting the notion that the gamma
rays are maybe (???) arriving ahead of the light pulse by 3e-22
seconds, although given that the width of the light pulse is
10^-15 s, detecting whether the offset is real is a challenge.

At 1000 meters from the source, I'm starting to get reproducible
results indicating that the gamma rays are arriving ahead of the light
pulse by 3e-21 seconds. I have to analyze zillions of pulses to get
statistically significant results, and of course I worry a lot about
systematic errors and all that.

At 10000 meters from the source, I'm now reasonably sure that the
gamma rays are arriving ahead of the light pulse by 3e-20 seconds.
I still have to analyze zillions of pulses, but after a year of
running the experiment, I'm at the five-sigma level of significance.

Note that the 17th CGPM (1983) definition does not specify the
wavelength of light used in its definition.

Should I conclude that under the conditions of my experiment, gamma
rays travel faster than the defined c by about 1 part in 10^15 ?

Or should I go the other way around, and conclude that under the
conditions of my experiment, visible light travels slower than the
defined c by about 1 part in 10^15 ?
======================================================================
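As a quick check of the numbers above, a minimal Python sketch; the
fractional speed difference of 1 part in 10^15 is the hypothesis under
test here, not a measured value:

# Expected arrival-time lead of the gamma pulse over the visible pulse,
# if gamma rays were faster than visible light by a fraction eps.
c = 299_792_458        # m/s
eps = 1e-15            # hypothetical fractional speed difference
for L in (10, 100, 1_000, 10_000):      # baseline in metres
    dt = (L / c) * eps                  # lead time in seconds
    print(f"L = {L:>6} m  ->  dt = {dt:.1e} s")
# ~3e-22 s at 100 m, ~3e-21 s at 1 km, ~3e-20 s at 10 km,
# all far below the ~1e-15 s width of the shortest visible pulses.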

Definitions are BASED ON state-of-the-art known physics. They do not
DETERMINE physical law.
Post by J. J. Lodder
Finally, an excercise for you personally.
You quoted a pre-2018 experiment that verified that E=mc^2
to some high accuracy. (using the measured value of Planck's constant)
Post-2018, Planck's constant has a defined value,
and E=mc^2 is true by definition. (of the Joule and the kilogram)
So E=mc^2 can no longer be verified by any possible experiment.
Ex1) Does this make the experiment you quoted worthless?
Not at all.
Post by J. J. Lodder
Ex2) If not, what does that experiment demonstrate?
It would demonstrate an inadequacy in the definitions that must be
addressed in some future conference when the discrepancies have been
better characterized.
Ross Finlayson
2024-12-07 18:49:39 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
Finaly, you really need to get yourself out of the conceptual knot
that you have tied yourself in.
Something is either defined, or it can be measured.
It can't possibly be both,
Sure it can, provided that you use a different measurement standard
than the one used in the definition.
Sure, you can be inconsistent, if you choose to be.
Don't expect meaningful results.
Post by ProkaryoticCaspaseHomolog
It would not make sense to quantify hypothetical variations in the
speed of light in terms of the post-1983 meter. But they would make
sense in terms pre-1983 meters. Or (assuming some incredible ramp-up
in technology, perhaps introduced by Larry Niven-ish Outsiders) in
terms of a meter defined as the distance massless gluons travel in
1/299,792,458 of a second. Or gravitons... :-)
Completely irrelevant,
and it does not get you out of your conceptual error as stated above.
1) a length standard, 2) a frequency standard [1], and 3) c
Two of the three must be defined, the third must be measured.
Pre-1983 1) and 2) were defined, and 3), c was measured.
Post-1983 2) and c are defined, 1) must be measured.
So in 1983 we have collectively decided that any future refinement
in measurement techniques will result in more accurate meter standards,
not in a 'better' value for c. [2]
You don't "get" the point that I was trying to make. Let us review
| Resolution 1 of the 17th CGPM (1983)
| Definition of the metre
| The 17th Conférence Générale des Poids et Mesures (CGPM),
| considering
[Skip over the first several considerations]
| - that a new definition of the metre has been envisaged in various
| forms all of which have the effect of giving the speed of light an
| exact value, equal to the recommended value, and that this
| introduces no appreciable discontinuity into the unit of length,
| taking into account the relative uncertainty of ± 4 × 10^-9 of the
| best realizations of the present definition of the metre,
[Skip over the last two considerations]
| decides
| - The metre is the length of the path travelled by light in vacuum
| during a time interval of 1/299 792 458 of a second.
| - The definition of the metre in force since 1960, based upon the
| transition between the levels 2p10 and 5d5 of the atom of
| krypton 86, is abrogated.
https://www.bipm.org/en/committees/cg/cgpm/17-1983/resolution-1
Gamma ray burst observations have constrained the arrival times
between the visible light and gamma ray components of the burst to
be equal to within 10^-15 of the total travel time of the burst.
Current theory holds that the gamma rays are part of the "prompt
emission", while the visible light results from the "afterglow".
======================================================================
Let us presume that I want to experimentally explore various
alternative hypotheses to account for the difference in arrival times.
The shortest visible light pulses have a duration of about 10^-15 s,
this shortest duration roughly equal to Δt = λ/c
Let us presume that some future technology enables us to generate, at
will, short pulses of gamma radiation of duration comparable to or
shorter than that of the visible light pulses mentioned above.
I set up a 10000 meter vacuum chamber. At various points along the
chamber, I set up visible light and gamma ray detectors.
At 10 meters from the source, I can't tell which pulse arrives first.
At 100 meters from the source, I'm getting the notion that the gamma
rays are maybe (???) arriving ahead of the light pulse by 3e-22
seconds, although given that the width of the light pulse is
10^-15 s, detecting whether the offset is real is a challenge.
At 1000 meters from the source, I'm starting to get reproducible
results indicating that the gamma rays are arriving ahead of the light
pulse by 3e-21 seconds. I have to analyze zillions of pulses to get
statistically significant results, and of course I worry a lot about
systematic errors and all that.
At 10000 meters from the source, I'm now reasonably sure that the
gamma rays are arriving ahead of the light pulse by 3e-20 seconds.
I still have to analyze zillions of pulses, but after a year of
running the experiment, I'm at the five-sigma level of significance.
Note that the 17th CGPM (1983) definition does not specify the
wavelength of light used in its definition.
Should I conclude that under the conditions of my experiment, gamma
rays travel faster than the defined c by about 1 part in 10^15 ?
Or should I go the other way around, and conclude that under the
conditions of my experiment, visible light travels slower than the
defined c by about 1 part in 10^15 ?
======================================================================
Definitions are BASED ON state-of-the-art known physics. They do not
DETERMINE physical law.
Post by J. J. Lodder
Finally, an excercise for you personally.
You quoted a pre-2018 experiment that verified that E=mc^2
to some high accuracy. (using the measured value of Planck's constant)
Post-2018, Planck's constant has a defined value,
and E=mc^2 is true by definition. (of the Joule and the kilogram)
So E=mc^2 can no longer be verified by any possible experiment.
Ex1) Does this make the experiment you quoted worthless?
Not at all.
Post by J. J. Lodder
Ex2) If not, what does that experiment demonstrate?
It would demonstrate an inadequacy in the definitions that must be
addressed in some future conference when the discrepancies have been
better characterized.
O.W. Richardson's "The Electron Theory ..." is really pretty
great, he spends a lot of time explaining all sorts of
issues in systems of units and algebraic quantities and
derivations and the inner and outer and these things,
it's 100 years old yet I'm glad to be reading it now.

Like the difference between displacement and real current,
and "third current", that's great, and all these things
he points out "of course this holds only for zero,
the electrical and optical intensities are qualitatively
different and in these quantitative ways", and so on.

The "complementary duals" of course have a lot
going on, and confound usual partial linear accounts
or the great stacks of Laplacians, with the highly
non-linear, or rather, "the formally un-linear".
J. J. Lodder
2024-12-07 21:35:57 UTC
Reply
Permalink
On 2024-12-07 16:03:31 +0000, ProkaryoticCaspaseHomolog said:
[missing article on my server, sorry about mixed up quote levels]
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
Finaly, you really need to get yourself out of the conceptual knot
that you have tied yourself in.
Something is either defined, or it can be measured.
It can't possibly be both,
Sure it can, provided that you use a different measurement standard
than the one used in the definition.
Sure, you can be inconsistent, if you choose to be.
Don't expect meaningful results.
It would not make sense to quantify hypothetical variations in the
speed of light in terms of the post-1983 meter. But they would make
sense in terms pre-1983 meters. Or (assuming some incredible ramp-up
in technology, perhaps introduced by Larry Niven-ish Outsiders) in
terms of a meter defined as the distance massless gluons travel in
1/299,792,458 of a second. Or gravitons... :-)
Completely irrelevant,
and it does not get you out of your conceptual error as stated above.
1) a length standard, 2) a frequency standard [1], and 3) c
Two of the three must be defined, the third must be measured.
Pre-1983 1) and 2) were defined, and 3), c was measured.
Post-1983 2) and c are defined, 1) must be measured.
So in 1983 we have collectively decided that any future refinement
in measurement techniques will result in more accurate meter standards,
not in a 'better' value for c. [2]
You don't "get" the point that I was trying to make. Let us review
I do get it, and it is wrong.
Post by J. J. Lodder
| Resolution 1 of the 17th CGPM (1983)
[snip boilerplate material]
Post by J. J. Lodder
Gamma ray burst observations have constrained the arrival times
between the visible light and gamma ray components of the burst to
be equal to within 10^-15 of the total travel time of the burst.
[snip more irrelevancies]

This is irrelevant for the issue of E=mc^2.
Differential travel times are a test for a non-zero photon mass, if any.
Post by J. J. Lodder
Definitions are BASED ON state-of-the-art known physics. They do not
DETERMINE physical law.
Are you really incapable of understanding
that all this is about metrology, not physical law?
No definition of units can ever determine or change any physical law.
Post by J. J. Lodder
Finally, an excercise for you personally.
You quoted a pre-2018 experiment that verified that E=mc^2
to some high accuracy. (using the measured value of Planck's constant)
Post-2018, Planck's constant has a defined value,
and E=mc^2 is true by definition. (of the Joule and the kilogram)
So E=mc^2 can no longer be verified by any possible experiment.
Ex1) Does this make the experiment you quoted worthless?
Not at all.
Correct.
Post by J. J. Lodder
Ex2) If not, what does that experiment demonstrate?
It would demonstrate an inadequacy in the definitions that must be
addressed in some future conference when the discrepancies have been
better characterized.
I'm sorry, but this is not the right answer,

Jan
ProkaryoticCaspaseHomolog
2024-12-08 05:42:07 UTC
Reply
Permalink
Post by J. J. Lodder
[missing article on my server, sorry about mixed up quote levels]
Post by J. J. Lodder
Sure, you can be inconsistent, if you choose to be.
Don't expect meaningful results.
It would not make sense to quantify hypothetical variations in the
speed of light in terms of the post-1983 meter. But they would make
sense in terms pre-1983 meters. Or (assuming some incredible ramp-up
in technology, perhaps introduced by Larry Niven-ish Outsiders) in
terms of a meter defined as the distance massless gluons travel in
1/299,792,458 of a second. Or gravitons... :-)
Completely irrelevant,
and it does not get you out of your conceptual error as stated above.
1) a length standard, 2) a frequency standard [1], and 3) c
Two of the three must be defined, the third must be measured.
Pre-1983 1) and 2) were defined, and 3), c was measured.
Post-1983 2) and c are defined, 1) must be measured.
So in 1983 we have collectively decided that any future refinement
in measurement techniques will result in more accurate meter standards,
not in a 'better' value for c. [2]
You don't "get" the point that I was trying to make. Let us review
I do get it, and it is wrong.
Post by J. J. Lodder
| Resolution 1 of the 17th CGPM (1983)
[snip boilerplate material]
Post by J. J. Lodder
Gamma ray burst observations have constrained the arrival times
between the visible light and gamma ray components of the burst to
be equal to within 10^-15 of the total travel time of the burst.
[snip more irrelevancies]
This is irrelevant for the issue of E=mc^2.
Differential travel times are a test for a non-zero photon mass, if any.
Post by J. J. Lodder
Definitions are BASED ON state-of-the-art known physics. They do not
DETERMINE physical law.
Are you really incapabable of understanding
that all this is about metrology, not physical law?
No definition of units can ever determine or change any physical law.
Post by J. J. Lodder
Finally, an excercise for you personally.
You quoted a pre-2018 experiment that verified that E=mc^2
to some high accuracy. (using the measured value of Planck's constant)
Post-2018, Planck's constant has a defined value,
and E=mc^2 is true by definition. (of the Joule and the kilogram)
So E=mc^2 can no longer be verified by any possible experiment.
Ex1) Does this make the experiment you quoted worthless?
Not at all.
Correct.
Post by J. J. Lodder
Ex2) If not, what does that experiment demonstrate?
It would demonstrate an inadequacy in the definitions that must be
addressed in some future conference when the discrepancies have been
better characterized.
I'm sorry, but this is not the right answer,
So what are you saying, then? Are you saying that, because of the
definition of E=mc^2, it is totally required that 1 gram of electrons
annihilating 1 gram of positrons completely to electromagnetic
radiation must NECESSARILY yield the same amount of energy as 1 gram
of protons annihilating 1 gram of antiprotons completely to electro-
magnetic radiation? That the equality of these two values is a matter
of definition, not something to be established by experiment?

Are you saying that because the current definition of c is
299,792,458 meters per second regardless of wavelength, that questions
as to whether gamma rays travel faster than visible light rays are
totally nonsensical?
ProkaryoticCaspaseHomolog
2024-12-08 06:46:04 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
I'm sorry, but this is not the right answer,
So what are you saying, then? Are you saying that, because of the
definition of E=mc^2, it is totally required that 1 gram of electrons
annihilating 1 gram of positrons completely to electromagnetic
radiation must NECESSICARILY yield the same amount of energy as 1 gram
of protons annihilating 1 gram of antiprotons completely to electro-
magnetic radiation? That the equality of these two values is a matter
of definition, not something to be established by experiment?
Are you saying that because the current definition of c is
299,792,458 meters per second regardless of wavelength, that questions
as to whether gamma rays travel faster than visible light rays are
totally nonsensical?
In fewer words:

No experiment can measure a difference between the amount of energy
released by the complete annihilation of 1 g of (electrons + positrons)
versus the complete annihilation of 1 g of (protons + antiprotons).
True or false?

No experiment can measure a difference between the speed of visible
light photons versus the speed of gamma rays. True or false?
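For scale only, a back-of-the-envelope sketch in Python: if E = mc^2
holds with the same c for both systems, one gram annihilated to
radiation corresponds to roughly 9e13 J regardless of the species.
Whether the two yields really are equal is the experimental question
being posed; the sketch only shows the common scale.

c = 299_792_458          # m/s (exact by definition)
m = 1e-3                 # kg, one gram of particle + antiparticle in total
E = m * c**2
print(f"E = {E:.3e} J")  # ≈ 8.988e13 J, roughly 21 kilotons of TNT equivalent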
J. J. Lodder
2024-12-08 20:35:14 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
I'm sorry, but this is not the right answer,
So what are you saying, then? Are you saying that, because of the
definition of E=mc^2, it is totally required that 1 gram of electrons
annihilating 1 gram of positrons completely to electromagnetic
radiation must NECESSICARILY yield the same amount of energy as 1 gram
of protons annihilating 1 gram of antiprotons completely to electro-
magnetic radiation? That the equality of these two values is a matter
of definition, not something to be established by experiment?
Are you saying that because the current definition of c is
299,792,458 meters per second regardless of wavelength, that questions
as to whether gamma rays travel faster than visible light rays are
totally nonsensical?
No experiment can measure a difference between the amount of energy
released by the complete annihilation of 1 g of (electrons + positrons)
versus the complete annihilation of 1 g of (protons + antiprotons).
True or false?
False, see previous.
Post by ProkaryoticCaspaseHomolog
No experiment can measure a difference between the speed of visible
light photons versus the speed of gamma rays. True or false?
False, already answered several postings back.
A class of experiments relevant to this question
are experiments that set an upper limit on the photon mass,
(the most plausible mechanism for such an effect)

Why for heaven's sake would you even get such an idea?

Jan
Ross Finlayson
2024-12-10 17:15:25 UTC
Reply
Permalink
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
I'm sorry, but this is not the right answer,
So what are you saying, then? Are you saying that, because of the
definition of E=mc^2, it is totally required that 1 gram of electrons
annihilating 1 gram of positrons completely to electromagnetic
radiation must NECESSICARILY yield the same amount of energy as 1 gram
of protons annihilating 1 gram of antiprotons completely to electro-
magnetic radiation? That the equality of these two values is a matter
of definition, not something to be established by experiment?
Are you saying that because the current definition of c is
299,792,458 meters per second regardless of wavelength, that questions
as to whether gamma rays travel faster than visible light rays are
totally nonsensical?
No experiment can measure a difference between the amount of energy
released by the complete annihilation of 1 g of (electrons + positrons)
versus the complete annihilation of 1 g of (protons + antiprotons).
True or false?
False, see previous.
Post by ProkaryoticCaspaseHomolog
No experiment can measure a difference between the speed of visible
light photons versus the speed of gamma rays. True or false?
False, already answered several postings back.
A class of experiments relevant to this question
are experiments that set an upper limit on the photon mass,
(the most plausible mechanism for such an effect)
Why for heavens sake would you even get such an idea?
Jan
O.W. Richardson's "The Electron Theory of Matter" has
really a great account of various considerations of
what "c" is with regards to electromagnetic radiation
as opposed to the optical range of not-electromagnetic
radiation and as with regards to wavelength versus
wave velocity.

Sort of like "photons" are overloaded and diluted these
days, so are waves, and so is "c".

The wave model is great and all and the energy equivalency
is great and all, yet it's overloaded and diluted (i.e.,
tenuous and weak).

The popular public deserves quite an apology from the
too-simple accounts that have arrived at having nothing
at all to say and no way to say it about the wider milieu
and the real-er parts of the theory.

So, for a pretty great example when these differences
were not just ignored and furthermore pasted over,
wall-papered as it were, "The Electron Theory of Matter"
is a bit antique yet it's perfectly cool and furthermore
greatly expands a usual discourse on radiation that travels
through space, _and_, the space-contraction (FitzGeraldian).
J. J. Lodder
2024-12-10 22:16:53 UTC
Reply
Permalink
Post by Ross Finlayson
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
I'm sorry, but this is not the right answer,
So what are you saying, then? Are you saying that, because of the
definition of E=mc^2, it is totally required that 1 gram of electrons
annihilating 1 gram of positrons completely to electromagnetic
radiation must NECESSICARILY yield the same amount of energy as 1 gram
of protons annihilating 1 gram of antiprotons completely to electro-
magnetic radiation? That the equality of these two values is a matter
of definition, not something to be established by experiment?
Are you saying that because the current definition of c is
299,792,458 meters per second regardless of wavelength, that questions
as to whether gamma rays travel faster than visible light rays are
totally nonsensical?
No experiment can measure a difference between the amount of energy
released by the complete annihilation of 1 g of (electrons + positrons)
versus the complete annihilation of 1 g of (protons + antiprotons).
True or false?
False, see previous.
Post by ProkaryoticCaspaseHomolog
No experiment can measure a difference between the speed of visible
light photons versus the speed of gamma rays. True or false?
False, already answered several postings back.
A class of experiments relevant to this question
are experiments that set an upper limit on the photon mass,
(the most plausible mechanism for such an effect)
Why for heavens sake would you even get such an idea?
Jan
O.W. Richardson's "The Electron Theory of Matter" has
really a great account of various considerations of
what "c" is with regards to electromagnetic radiation
as opposed to the optical range of not-electromagnetic
radiation and as with regards to wavelength versus
wave velocity.
Sort of like "photons" are overloaded and diluted these
days, so are waves, and so is "c".
The wave model is great and all and the energy equivalency
is great and all, yet it's overloaded and diluted (i.e.,
tenuous and weak).
The popular public deserves quite an apology from the
too-simple accounts that have arrived at having nothing
at all to say and no way to say it about the wider milieu
and the real-er parts of the theory.
So, for a pretty great example when these differences
were not just ignored and furthermore pasted over,
wall-papered as it were, "The Electron Theory of Matter"
is a bit antique yet it's perfectly cool and furthermore
greatly expands a usual discourse on radiation that travels
through space, _and_, the space-contraction (FitzGeraldian).
Not at hand, but this 1914(!) book, while perhaps a classic,
is no doubt completely obsolete.
From the available reviews it would seem
that it is mostly a rehash of Lorentz' 'Theory of Electrons',

Jan
Ross Finlayson
2024-12-11 02:37:06 UTC
Reply
Permalink
Post by J. J. Lodder
Post by Ross Finlayson
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
I'm sorry, but this is not the right answer,
So what are you saying, then? Are you saying that, because of the
definition of E=mc^2, it is totally required that 1 gram of electrons
annihilating 1 gram of positrons completely to electromagnetic
radiation must NECESSICARILY yield the same amount of energy as 1 gram
of protons annihilating 1 gram of antiprotons completely to electro-
magnetic radiation? That the equality of these two values is a matter
of definition, not something to be established by experiment?
Are you saying that because the current definition of c is
299,792,458 meters per second regardless of wavelength, that questions
as to whether gamma rays travel faster than visible light rays are
totally nonsensical?
No experiment can measure a difference between the amount of energy
released by the complete annihilation of 1 g of (electrons + positrons)
versus the complete annihilation of 1 g of (protons + antiprotons).
True or false?
False, see previous.
Post by ProkaryoticCaspaseHomolog
No experiment can measure a difference between the speed of visible
light photons versus the speed of gamma rays. True or false?
False, already answered several postings back.
A class of experiments relevant to this question
are experiments that set an upper limit on the photon mass,
(the most plausible mechanism for such an effect)
Why for heavens sake would you even get such an idea?
Jan
O.W. Richardson's "The Electron Theory of Matter" has
really a great account of various considerations of
what "c" is with regards to electromagnetic radiation
as opposed to the optical range of not-electromagnetic
radiation and as with regards to wavelength versus
wave velocity.
Sort of like "photons" are overloaded and diluted these
days, so are waves, and so is "c".
The wave model is great and all and the energy equivalency
is great and all, yet it's overloaded and diluted (i.e.,
tenuous and weak).
The popular public deserves quite an apology from the
too-simple accounts that have arrived at having nothing
at all to say and no way to say it about the wider milieu
and the real-er parts of the theory.
So, for a pretty great example when these differences
were not just ignored and furthermore pasted over,
wall-papered as it were, "The Electron Theory of Matter"
is a bit antique yet it's perfectly cool and furthermore
greatly expands a usual discourse on radiation that travels
through space, _and_, the space-contraction (FitzGeraldian).
Not at hand, but this 1914! book, while perhaps a classic
is no doubt completely obsolete.
From the available reviews it would seem
that it is mostly a rehash of Lorentz' 'Theory of Electons',
Jan
Richardson's a Nobel-prize winner,
I thought you'd be fawning all over it.

I don't know anything wrong with it,
and it's quite modern, if you use
the modern language for the contemporary language,
it's no different than for the usual Lienard-Wiechert,
at all, in the main, and in fact thoroughly underpins it.

This one I think is a, later edition.

It's the greatest exposition of the theory
of the day, which is "electron physics", today.


Anyways he makes clearly delineated a bunch of
stuff that these days was so long ago wall-papered
that somebody came along with circular SR definitions
and totally forgot how they could ever remember.

In the interest of neatening things, as we recall
"keep it simple, stupid", with the usual idea that
"I'm stupid and better keep things simple as possible
or as Einstein put it, no, even simpler", that at
some point somebody came along and was like "well,
Einstein was a genius so this must be it".

That is to say, for "keep it simple, stupid", has
that there's "it's obtuse at every angle".

Pointy-end flat-earthers.
J. J. Lodder
2024-12-08 20:35:14 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
Post by J. J. Lodder
[missing article on my server, sorry about mixed up quote levels]
Post by J. J. Lodder
Sure, you can be inconsistent, if you choose to be.
Don't expect meaningful results.
It would not make sense to quantify hypothetical variations in the
speed of light in terms of the post-1983 meter. But they would make
sense in terms pre-1983 meters. Or (assuming some incredible ramp-up
in technology, perhaps introduced by Larry Niven-ish Outsiders) in
terms of a meter defined as the distance massless gluons travel in
1/299,792,458 of a second. Or gravitons... :-)
Completely irrelevant,
and it does not get you out of your conceptual error as stated above.
1) a length standard, 2) a frequency standard [1], and 3) c
Two of the three must be defined, the third must be measured.
Pre-1983 1) and 2) were defined, and 3), c was measured.
Post-1983 2) and c are defined, 1) must be measured.
So in 1983 we have collectively decided that any future refinement
in measurement techniques will result in more accurate meter standards,
not in a 'better' value for c. [2]
You don't "get" the point that I was trying to make. Let us review
I do get it, and it is wrong.
Post by J. J. Lodder
| Resolution 1 of the 17th CGPM (1983)
[snip boilerplate material]
Post by J. J. Lodder
Gamma ray burst observations have constrained the arrival times
between the visible light and gamma ray components of the burst to
be equal to within 10^-15 of the total travel time of the burst.
[snip more irrelevancies]
This is irrelevant for the issue of E=mc^2.
Differential travel times are a test for a non-zero photon mass, if any.
Post by J. J. Lodder
Definitions are BASED ON state-of-the-art known physics. They do not
DETERMINE physical law.
Are you really incapabable of understanding
that all this is about metrology, not physical law?
No definition of units can ever determine or change any physical law.
Post by J. J. Lodder
Finally, an excercise for you personally.
You quoted a pre-2018 experiment that verified that E=mc^2
to some high accuracy. (using the measured value of Planck's constant)
Post-2018, Planck's constant has a defined value,
and E=mc^2 is true by definition. (of the Joule and the kilogram)
So E=mc^2 can no longer be verified by any possible experiment.
Ex1) Does this make the experiment you quoted worthless?
Not at all.
Correct.
Post by J. J. Lodder
Ex2) If not, what does that experiment demonstrate?
It would demonstrate an inadequacy in the definitions that must be
addressed in some future conference when the discrepancies have been
better characterized.
I'm sorry, but this is not the right answer,
So what are you saying, then?
That you are confusing matters of units, (true by definition)
such as for example E=mc^2, (nowadays)
with physical results, (which need experimental verification)
such as for example energy-momentum conservation.
Post by ProkaryoticCaspaseHomolog
Are you saying that, because of the
definition of E=mc^2, it is totally required that 1 gram of electrons
annihilating 1 gram of positrons completely to electromagnetic
radiation must NECESSICARILY yield the same amount of energy as 1 gram
of protons annihilating 1 gram of antiprotons completely to electro-
magnetic radiation?
No. That is a matter of energy-momentum conservation,
which is a physical result.
No one doubts it, but it can be verified.
Post by ProkaryoticCaspaseHomolog
That the equality of these two values is a matter
of definition, not something to be established by experiment?
No. Equalities of measured quantities can never be a matter of
definition.
Post by ProkaryoticCaspaseHomolog
Are you saying that because the current definition of c is
299,792,458 meters per second regardless of wavelength, that questions
as to whether gamma rays travel faster than visible light rays are
totally nonsensical?
That's your red herring.
The current definition of the meter says no such thing,
that is merely your mistaken reading of it.
Or non-reading even, for the definition of the meter
doesn't mention the wavelength of the light to be used at all. [1]

If the vacuum would turn out to be dispersive after all,
the definition of the meter would have to be refined.

Jan

[1] It doesn't even use the fact
that there are states of the radiation field to which
a more or less well-defined wavelength can be assigned.
Those things are merely practicalities of the application
of the definition.
J. J. Lodder
2024-12-07 21:35:57 UTC
Reply
Permalink
Post by Ross Finlayson
O.W. Richardson's "The Electron Theory ..." is really pretty
great, he spends a lot of time explaining all sorts of
issues in systems of units and algebraic quantities and
derivations and the inner and outer and these things,
it's a 100 years old yet I'm glad to be reading it now.
Yes, in those long past times every competent physicist
understood about systems of units and dimensions.
This has been lost as a consequence of general 'SI-only' education.

A more readily accessible (and excellent) source for the subject
is in the appendices of Jackson, Classical Electrodynamics.
Unfortunately the subject is not covered adequately in Wikipedia,
(afaics)

Jan
Ross Finlayson
2024-12-07 22:50:34 UTC
Reply
Permalink
Post by J. J. Lodder
Post by Ross Finlayson
O.W. Richardson's "The Electron Theory ..." is really pretty
great, he spends a lot of time explaining all sorts of
issues in systems of units and algebraic quantities and
derivations and the inner and outer and these things,
it's a 100 years old yet I'm glad to be reading it now.
Yes, in those long past times every competent physicist
understood about systems of units and dimensions.
This has been lost as a consequence of general 'SI-only' education.
A more readily accessible (and excellent) source for the subject
is in the appendices of Jackson, Classical Electrodynamics.
Unfortunately the subject is not covered adequately in Wikipedia,
(afaics)
Jan
Then I suppose you should walk back what you said
about "Buckingham Pi: dimensionless analysis", also.

Thanks though I've heard of that.

How about Baylis, I've enjoyed reading Baylis,
and in the decades since I first started reading it,
I imagine there's much more to make of it.
Athel Cornish-Bowden
2024-12-08 09:19:52 UTC
Reply
Permalink
Post by J. J. Lodder
Post by Ross Finlayson
O.W. Richardson's "The Electron Theory ..." is really pretty
great, he spends a lot of time explaining all sorts of
issues in systems of units and algebraic quantities and
derivations and the inner and outer and these things,
it's a 100 years old yet I'm glad to be reading it now.
Yes, in those long past times every competent physicist
understood about systems of units and dimensions.
This has been lost as a consequence of general 'SI-only' education.
A more readily accessible (and excellent) source for the subject
is in the appendices of Jackson, Classical Electrodynamics.
Unfortunately the subject is not covered adequately in Wikipedia,
(afaics)
Why don't you fix it, then? Anyone can edit Wikipedia.
--
Athel -- French and British, living in Marseilles for 37 years; mainly
in England until 1987.
J. J. Lodder
2024-12-08 11:56:15 UTC
Reply
Permalink
Post by Athel Cornish-Bowden
Post by J. J. Lodder
Post by Ross Finlayson
O.W. Richardson's "The Electron Theory ..." is really pretty
great, he spends a lot of time explaining all sorts of
issues in systems of units and algebraic quantities and
derivations and the inner and outer and these things,
it's a 100 years old yet I'm glad to be reading it now.
Yes, in those long past times every competent physicist
understood about systems of units and dimensions.
This has been lost as a consequence of general 'SI-only' education.
A more readily accessible (and excellent) source for the subject
is in the appendices of Jackson, Classical Electrodynamics.
Unfortunately the subject is not covered adequately in Wikipedia,
(afaics)
Why don't you fix it, then? Anyone can edit Wikipedia.
Because I have experience with the matters.
I have had to argue units and dimensions with electrical engineers.
In consequence my jaws are stronger than those of Father William.
I'm not in the mood for going into it again,

Jan
Maciej Wozniak
2024-12-08 14:01:05 UTC
Reply
Permalink
Post by J. J. Lodder
Post by Athel Cornish-Bowden
Post by J. J. Lodder
Post by Ross Finlayson
O.W. Richardson's "The Electron Theory ..." is really pretty
great, he spends a lot of time explaining all sorts of
issues in systems of units and algebraic quantities and
derivations and the inner and outer and these things,
it's a 100 years old yet I'm glad to be reading it now.
Yes, in those long past times every competent physicist
understood about systems of units and dimensions.
This has been lost as a consequence of general 'SI-only' education.
A more readily accessible (and excellent) source for the subject
is in the appendices of Jackson, Classical Electrodynamics.
Unfortunately the subject is not covered adequately in Wikipedia,
(afaics)
Why don't you fix it, then? Anyone can edit Wikipedia.
Because I have experience with the matters.
I have had to argue units and dimensions with electrical engineers.
And, of course, you were unable to listen to
competent men, just like any other Shit's
fanatic.
Paul B. Andersen
2024-12-07 21:19:50 UTC
Reply
Permalink
Post by J. J. Lodder
Post by Paul B. Andersen
Post by J. J. Lodder
Post by Paul B. Andersen
So if the speed of light, measured with instruments with better
precision than they had in 1983 is found to be 299792458.000001 m/s,
then that only means that the real speed of light (measured with
SI metre and SI second) is different from the defined one.
Note: measured with SI metre and SI second.
Post by J. J. Lodder
So this is completely, absolutely, and totally wrong.
Such a result does not mean that the speed of light
is off its defined value,
it means that your meter standard is off,
and that you must use your measurement result to recalibrate it.
(so that the speed of light comes out to its defined value)
According to:
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
(2019)
The SI definitions are:

The relevant defining constants:
Δν_Cs = 9192631770 Hz (hyperfine transition frequency of Cs133)
c = 299 792 458 m/s (speed of light in vacuum)

The relevant base units:
Second:
1 s = 9192631770/Δν_Cs, 1 Hz = Δν_Cs/9192631770

Metre:
1 metre = (c/299792458)s = (9192631770/299792458)⋅(c/Δν_Cs)

The home page of BIPM:
https://www.bipm.org/en/measurement-units

Give the exact same definitions, so I assume
that the definitions above are valid now.
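A small arithmetic check of those defining relations in Python (no new
data, just the quoted constants): the "wavelength" c/Δν_Cs of the Cs
hyperfine transition is about 3.26 cm, and the metre comes out as
9192631770/299792458 ≈ 30.66 of those.

dv_Cs = 9_192_631_770        # Hz, defined hyperfine frequency of Cs-133
c = 299_792_458              # m/s, defined speed of light

lambda_Cs = c / dv_Cs                                  # metres per Cs hyperfine "wavelength"
metre = (9_192_631_770 / 299_792_458) * lambda_Cs      # the quoted definition of the metre

print(f"c/Δν_Cs ≈ {lambda_Cs * 100:.4f} cm")           # ≈ 3.2612 cm
print(f"metre / λ_Cs ≈ {metre / lambda_Cs:.6f}")       # ≈ 30.663319 such wavelengths
print(f"reconstructed metre ≈ {metre:.15f} m")         # 1.000... up to float rounding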


https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
Post by J. J. Lodder
Post by Paul B. Andersen
If the speed of light is measured _with the meter and second
defined above_ it is obviously possible to get a result slightly
different from the defined speed of light.
So I was not "completely, absolutely, and totally wrong".
You were, and it would seem that you still are.
You cannot measure the speed of light because it has a defined value.
If you would think that what you are doing is a speed of light
measurement you don't understand what you are doing.
When you have a definition of second and a definition of metre,
it is _obviously_ possible to measure the speed of light.

If you measure the speed of light in air, you would probably
find that v_air ≈ 2.99705e8 m/s.
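That air value is just c divided by the refractive index of air; a tiny
Python sketch with a nominal sea-level index, n_air ≈ 1.000293 (assumed;
it varies with pressure, temperature and wavelength):

c = 299_792_458          # m/s, defined vacuum value
n_air = 1.000293         # nominal refractive index of air at sea level (assumed)
v_air = c / n_air
print(f"v_air ≈ {v_air:,.0f} m/s")   # ≈ 299,704,645 m/s, i.e. ≈ 2.99705e8 m/s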

If you measure it in vacuum on the ground, you would probably
get a value slightly less than 299792458 m/s because the vacuum
isn't perfect.

If you measure it in perfect vacuum (in a space-vehicle?) you
would probably get the value 299792458 m/s.
But it isn't impossible, if you had extremely precise instruments,
that you would measure a value slightly different from 299792458 m/s,
e.g. 299792458.000001 m/s.

However, such precise instruments hardly exist, and probably never will.
So I don't think this ever will be a real problem needing a fix.

But my point is:
It is possible to measure the speed of light even if there exists
a defined constant c = 299792458 m/s

If you are claiming otherwise, you are simply wrong.
Post by J. J. Lodder
Post by Paul B. Andersen
Post by J. J. Lodder
In fact, the kind of experiments that used to be called
'speed of light measurements' (so before 1983)
are still being done routinely today, at places like NIST, or BIPM.
The difference is that nowadays, precisely the same kind of measurements
are called 'calibration of a (secudary) meter standard',
or 'calibration of a frequency standard'.
Calibration of a frequency standard is just that, and not
a 'speed of light measurement'.
Post by J. J. Lodder
Post by Paul B. Andersen
Is any such recalibration of the meter ever done?
Of course, routinely, on a day to day basis.
Guess there are whole departments devoted to it.
(it is a subtle art)
The results are published nowadays as a list of frequencies
of prefered optical frequency standards.
(measuring the frequency of an optical frequency standard
and calibrating a secondary meter standard are just two different ways
of saying the same thing)
And remember, there is no longer such a thing as -the- meter.
It is a secondary unit, and any convenient secondary standard will do.
In:
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf

I read:
https://www.bipm.org/en/cipm-mra

"The CIPM has adopted various secondary representations of
the second, based on a selected number of spectral lines of atoms,
ions or molecules. The unperturbed frequencies of these lines can
be determined with a relative uncertainty not lower than that of
the realization of the second based on the 133Cs hyperfine transition
frequency, but some can be reproduced with superior stability."

This is how I interpret this:
The second is still defined by "the unperturbed ground state
hyperfine transition frequency of the caesium 133 atom"
Δν_Cs = 9192631770 Hz by definition.

But practical realisations of this frequency standard,
that is, an atomic frequency standard based on Cs133,
are not immune to perturbation; a magnetic field may affect it.

So there exist more stable frequency standards than Cs,
and some are extremely more stable.
But the frequencies of these standards are still defined
by Δν_Cs. 1 Hz = Δν_Cs/9192631770
This is "Calibration of a frequency standard".

The "secondary representations of second"
don't change the duration of a second
and the "secondary representations of metre"
don't change the length of a metre.
--
Paul

https://paulba.no/
ProkaryoticCaspaseHomolog
2024-12-07 22:14:02 UTC
Reply
Permalink
Post by Paul B. Andersen
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
(2019)
Δν_Cs = 9192631770 Hz (hyperfine transition frequency of Cs133)
c = 299 792 458 m/s (speed of light in vacuum)
1 s = 9192631770/Δν_Cs 1 Hz = Δν_Cs/9192631770
1 metre = (c/299792458)s = (9192631770/299792458)⋅(c/Δν_Cs)
https://www.bipm.org/en/measurement-units
Give the exact same definitions, so I assume
that the definitions above are valid now.
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
Post by J. J. Lodder
Post by Paul B. Andersen
If the speed of light is measured _with the meter and second
defined above_ it is obviously possible to get a result slightly
different from the defined speed of light.
So I was not "completely, absolutely, and totally wrong".
You were, and it would seem that you still are.
You cannot measure the speed of light because it has a defined value.
If you would think that what you are doing is a speed of light
measurement you don't understand what you are doing.
When you have a definition of second and a definition of metre,
it is _obviously_ possible to measure the speed of light.
If you measure the speed of light in air, you would probably
find that v_air ≈ 2.99705e8 m/s.
If you measure it in vacuum on the ground, you would probably
get a value slightly less than 299792458 m/s because the vacuum
isn't perfect.
If you measure it in perfect vacuum (in a space-vehicle?) you
would probably get the value 299792458 m/s.
But it isn't impossible, if you had extremely precise instruments,
that you would measure a value slightly different from 299792458 m/s,
e.g. 299792458.000001 m/s.
However, so precise instruments hardly exists, and probably never will.
So I don't think this ever will be a real problem needing a fix.
It is possible to measure the speed of light even if it exists
a defined constant c = 299792458 m/s
If you are claiming otherwise, you are simply wrong.
In any measurement of the speed of light, one uses a local "best
representation" (i.e. secondary standard) of the meter and a local
best representation of the second to standardize the measurement
instrumentation. These practical realizations of the units of length
and time may in fact be superior in reproducibility to the materials
and methods cited in the primary definitions of the units of length
and time.

Using such calibrated instrumentation, it is obviously possible to
perform a speed of light measurement, and as Paul stated, we are not
doing anything stupid if we get a result differing from 299792458 m/s.

======================================================================

Δv measurements of the speed of light (for example, MMX measurements
testing for anisotropies in v) are fundamentally different in
principle from direct measurements of v, and can
be conducted without especially precise local representations of
length and time.
Paul B. Andersen
2024-12-08 08:19:33 UTC
Reply
Permalink
Post by Paul B. Andersen
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
(2019)
 Δν_Cs = 9192631770 Hz  (hyperfine transition frequency of Cs133)
 c = 299 792 458 m/s (speed of light in vacuum)
 1 s = 9192631770/Δν_Cs  1 Hz = Δν_Cs/9192631770
 1 metre = (c/299792458)s = (9192631770/299792458)⋅(c/Δν_Cs)
https://www.bipm.org/en/measurement-units
Give the exact same definitions, so I assume
that the definitions above are valid now.
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
Post by J. J. Lodder
Post by Paul B. Andersen
If the speed of light is measured _with the meter and second
defined above_ it is obviously possible to get a result slightly
different from the defined speed of light.
So I was not "completely, absolutely, and totally wrong".
You were, and it would seem that you still are.
You cannot measure the speed of light because it has a defined value.
If you would think that what you are doing is a speed of light
measurement you don't understand what you are doing.
Yes, I was indeed "absolutely, and totally wrong",
but not completely wrong.
Post by Paul B. Andersen
When you have a definition of second and a definition of metre,
it is _obviously_ possible to measure the speed of light.
If you measure the speed of light in air, you would probably
find that v_air ≈ 2.99705e8 m/s.
If you measure it in vacuum on the ground, you would probably
get a value slightly less than 299792458 m/s because the vacuum
isn't perfect.
OK so far.
Post by Paul B. Andersen
If you measure it in perfect vacuum (in a space-vehicle?) you
would probably get the value 299792458 m/s.
You would certainly measure the value 299792458 m/s.

It is possible to measure the speed of light in vacuum, but not much
point in doing so since the result is given by definition.
Post by Paul B. Andersen
But it isn't impossible, if you had extremely precise instruments,
that you would measure a value slightly different from 299792458 m/s,
e.g. 299792458.000001 m/s.
This is indeed "completely, absolutely, and totally wrong".

I somehow thought that the "real speed" of light in vacuum
measured before 1985 was different from 299792458 m/s.
(Which it probably was, but the difference hidden in the error bar)
And since the definition of the metre only contains the defined constant c,
I thought "the real speed" of light could be different from c.

But this is utter nonsense!
Now I can't understand how I could think so.
My brain seems to be slower than it used to be. :-(

The real speed of light in vacuum is exactly c = 299792458 m/s,
and 1 metre = (1 second/299792458)⋅c is derived from c,
which means that the measured speed of light in vacuum will
always be c.
Post by Paul B. Andersen
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
https://www.bipm.org/en/cipm-mra
"The CIPM has adopted various secondary representations of
 the second, based on a selected number of spectral lines of atoms,
 ions or molecules. The unperturbed frequencies of these lines can
 be determined with a relative uncertainty not lower than that of
 the realization of the second based on the 133Cs hyperfine transition
 frequency, but some can be reproduced with superior stability."
The second is still defined by "the unperturbed ground state
hyperfine transition frequency of the caesium 133 atom"
  Δν_Cs = 9192631770 Hz by definition.
But practical realisations of this frequency standard,
that is an atomic frequency standard based on Cs133 is
not immune to perturbation, a magnetic field may affect it.
So there exist more stable frequency standards than Cs,
and some are far more stable.
But the frequencies of these standards are still defined
by Δν_Cs: 1 Hz = Δν_Cs/9192631770
This is "Calibration of a frequency standard".
The "secondary representations of second"
don't change the duration of a second
and the "secondary representations of metre"
don't change the length of a metre.
--
Paul

https://paulba.no/
J. J. Lodder
2024-12-08 11:30:36 UTC
Reply
Permalink
Post by Paul B. Andersen
Post by Paul B. Andersen
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
(2019)
Δν_Cs = 9192631770 Hz (hyperfine transition frequency of Cs133)
c = 299 792 458 m/s (speed of light in vacuum)
1 s = 9192631770/Δν_Cs  1 Hz = Δν_Cs/9192631770
1 metre = (c/299792458)s = (9192631770/299792458)⋅(c/Δν_Cs)
https://www.bipm.org/en/measurement-units
Give the exact same definitions, so I assume
that the definitions above are valid now.
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
Post by J. J. Lodder
Post by Paul B. Andersen
If the speed of light is measured _with the meter and second
defined above_ it is obviously possible to get a result slightly
different from the defined speed of light.
So I was not "completely, absolutely, and totally wrong".
You were, and it would seem that you still are.
You cannot measure the speed of light because it has a defined value.
If you would think that what you are doing is a speed of light
measurement you don't understand what you are doing.
Yes, I was indeed "absolutely, and totally wrong",
but not completely wrong.
Post by Paul B. Andersen
When you have a definition of second and a definition of metre,
it is _obviously_ possible to measure the speed of light.
If you measure the speed of light in air, you would probably
find that v_air ≈ 2.99705e8 m/s.
If you measure it in vacuum on the ground, you would probably
get a value slightly less than 299792458 m/s because the vacuum
isn't perfect.
OK so far.
Post by Paul B. Andersen
If you measure it in perfect vacuum (in a space-vehicle?) you
would probably get the value 299792458 m/s.
You would certainly measure the value 299792458 m/s.
It is possible to measure the speed of light in vacuum, but there is not
much point in doing so, since the result is given by definition.
Post by Paul B. Andersen
But it isn't impossible, if you had extremely precise instruments,
that you would measure a value slightly different from 299792458 m/s,
e.g. 299792458.000001 m/s.
This is indeed "completely, absolutely, and totally wrong".
I somehow thought that the "real speed" of light in vacuum
measured before 1985 was different from 299792458 m/s.
Of course it was. The adopted value was a compromise
between the results of different teams.
BTW, you are also falling into the 'das ding an sich' trap.
Post by Paul B. Andersen
(Which it probably was, but the difference hidden in the error bar)
And since the definition of metre only contain the defined constant c,
i thought "the real speed" of light could be different from c.
Yes, that is where you go wrong.
Post by Paul B. Andersen
But this is utter nonsense!
Beginning to see the light?
Post by Paul B. Andersen
Now I can't understand how I could think so.
My brain seems to be slower than it used to be. :-(
The real speed of light in vacuum is exactly c = 299792458 m/s,
and 1 metre = (1 second/299792458)c, is derived from c,
which means that the measured speed of light in vacuum will
always be c.
Correct.
Perhaps I can explain the practicalities behind it in another way.
If you measure the speed of light accurately
you must of course do an error analysis.
The result of this is that almost all of the error results from
the necessary realisation of the meter standard. (in your laboratory)
So the paradoxical result is that you cannot measure the speed of light
even when there is a meter standard of some kind.

You may call whatever it is that you are doing
'a speed of light measurement',
but if you are a competent experimentalist you will understand
that what you are really doing is a meter calibration experiment.
Hence the speed of light must be given a defined value,
for practical experimental reasons. [1]

Jan

[1] Which have not changed.
(and will not change in the foreseeable future)
Meter standards are orders of magnitude less accurate
than time standards. (see why this must be?)
Paul B. Andersen
2024-12-09 14:21:01 UTC
Reply
Permalink
Post by J. J. Lodder
Post by Paul B. Andersen
Post by Paul B. Andersen
But it isn't impossible, if you had extremely precise instruments,
that you would measure a value slightly different from 299792458 m/s,
e.g. 299792458.000001 m/s.
This is indeed "completely, absolutely, and totally wrong".
I somehow thought that the "real speed" of light in vacuum
measured before 1985 was different from 299792458 m/s.
Of course it was. The adopted value was a compromise
between the results of different teams.
BTW, you are also falling into the 'das ding an sich' trap.
Post by Paul B. Andersen
(Which it probably was, but the difference hidden in the error bar)
And since the definition of metre only contain the defined constant c,
i thought "the real speed" of light could be different from c.
Yes, that is where you go wrong.
Post by Paul B. Andersen
But this is utter nonsense!
Beginning to see the light?
Post by Paul B. Andersen
Now I can't understand how I could think so.
My brain seems to be slower than it used to be. :-(
The real speed of light in vacuum is exactly c = 299792458 m/s,
and 1 metre = (1 second/299792458)c, is derived from c,
which means that the measured speed of light in vacuum will
always be c.
Correct.
Perhaps I can explain the practicalities behind it in another way.
If you measure the speed of light accurately
you must of course do an error analysis.
The result of this is that almost all of the error results from
the necessary realisation of the meter standard. (in your laboratory)
So the paradoxical result is that you cannot measure the speed of light
even when there is a meter standard of some kind.
You may call whatever it is that you are doing
'a speed of light measurement',
but if you are a competent experimentalist you will understand
that what you are really doing is a meter calibration experiment.
Hence the speed of light must be given a defined value,
for practical experimental reasons. [1]
Jan
This is my way of thinking which made me realise that I was wrong:
How do we measure the speed of light?
We measure the time it takes for the light to travel a known distance.
So we bounce the light off a mirror and measure the round trip time.
How do we calibrate the distance to the mirror?
We measure the time it takes for the light to go back and forth
to the mirror.
L = (c/299792458)⋅t/2 where t is round trip time in seconds
AHA!!!
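
To make the circularity concrete, here is a minimal Python sketch (the
round-trip time below is a made-up number): if the mirror distance is itself
realised from a light round-trip time and the defined c, then any later
"measurement" of c over that distance returns the defined value by
construction.

# Minimal sketch (hypothetical numbers) of the circularity described above:
# the mirror distance is realised from a round-trip time and the *defined* c,
# so "measuring" c with that distance gives back the defined value.

C_DEFINED = 299_792_458  # m/s, exact by definition since 1983

def calibrate_distance(round_trip_time_s):
    """Realise the mirror distance in SI metres from a round-trip time."""
    return C_DEFINED * round_trip_time_s / 2

def measure_speed_of_light(round_trip_time_s, distance_m):
    """'Measure' c as distance travelled over elapsed time."""
    return 2 * distance_m / round_trip_time_s

t = 6.671e-9                         # hypothetical round-trip time, mirror ~1 m away
L = calibrate_distance(t)            # distance realised from that same time
print(measure_speed_of_light(t, L))  # prints 299792458.0, always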
Post by J. J. Lodder
[1] Which have not changed.
(and will not change in the foreseeable future)
Meter standards are orders of magnitude less accurate
than time standards. (see why this must be?)
No, I don't understand.
The definition of metre only depends on the two constants
Δν_Cs and c and both have an exact value.
Is it because the time standard only depends on one constant?

I can however understand that practical calibration of the meter
is less precise than the calibration of a frequency standard.

------------------

I would like your reaction to the following;

In:
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
I read:
https://www.bipm.org/en/cipm-mra

"The CIPM has adopted various secondary representations of
the second, based on a selected number of spectral lines of atoms,
ions or molecules. The unperturbed frequencies of these lines can
be determined with a relative uncertainty not lower than that of
the realization of the second based on the 133Cs hyperfine transition
frequency, but some can be reproduced with superior stability."

This is how I interpret this:
The second is still defined by "the unperturbed ground state
hyperfine transition frequency of the caesium 133 atom"
Δν_Cs = 9192631770 Hz by definition.

But practical realisations of this frequency standard,
that is an atomic frequency standard based on Cs133 is
not immune to perturbation, a magnetic field may affect it.

So there exist more stable frequency standards than Cs,
and some are far more stable.
But the frequencies of these standards are still defined
by Δν_Cs: 1 Hz = Δν_Cs/9192631770
This is "Calibration of a frequency standard".

The "secondary representations of second"
don't change the duration of a second
and the "secondary representations of metre"
don't change the length of a metre.
--
Paul

https://paulba.no/
rhertz
2024-12-09 17:37:53 UTC
Reply
Permalink
Talking about idiocies in SI definitions of units by BIPM, what about
the Kg of mass?


The old definition of the kilogram was based on the mass of the
International Prototype of the Kilogram (IPK), a platinum-iridium
cylinder kept at the BIPM. The IPK was sanctioned in 1889 by the 1st
General Conference on Weights and Measures (CGPM).

After 1960, the kilogram was the only SI unit still defined in terms of
a single manufactured object. So to ensure the accuracy of mass and
weight measurements, all the standard masses used in all the
measurements around the globe were, in theory, to be directly compared
to the IPK — which was kept by the International Bureau of Weights and
Measures (BIPM) in Sèvres, France.

Since May 20, 2019, the kilogram is now an ABSTRACT IDEA based on light
and energy, rather than a physical object. The new definition doesn't
need a physical reference block.

The kilogram, symbol kg, is the SI unit of mass. It is defined by taking
the fixed numerical value of the Planck constant h to be 6.62607015E−34
J⋅s, which is equal to kg⋅m^2/s, where the meter and the second are
defined in terms of c and ΔνCs.

It was the last physical object used to define the seven standard units
of the SI system, but it was susceptible to different perturbations like
changes in g, temperature, etc. Now, this reference is gone, after
a difference of 54 micrograms (by 2005) was found from what was previously
thought to be a true 1.00000000 Kg of mass.

Now, none of the seven SI units has any physical
representation. All of them are FIXED as theoretical values, even when
PHYSICS REALITY shows that they all ARE INCORRECT!

Yet, they have been ADOPTED, and any other secondary unit is derived
from THEORETICAL VALUES.

This is what science has become: a FUCKING JOKE (ON YOU).
rhertz
2024-12-09 22:44:08 UTC
Reply
Permalink
Post by rhertz
Talking about idiocies in SI definitions of units by BIPM, what about
the Kg of mass?
The old definition of the kilogram was based on the mass of the
International Prototype of the Kilogram (IPK), a platinum-iridium
cylinder kept at the BIPM. The IPK was sanctioned in 1889 by the 1st
General Conference on Weights and Measures (CGPM).
After 1960, the kilogram was the only SI unit still defined in terms of
a single manufactured object. So to ensure the accuracy of mass and
weight measurements, all the standard masses used in all the
measurements around the globe were, in theory, to be directly compared
to the IPK — which was kept by the International Bureau of Weights and
Measures (BIPM) in Sèvres, France.
Since May 20, 2019, the kilogram is now an ABSTRACT IDEA based on light
and energy, rather than a physical object. The new definition doesn't
need a physical reference block.
The kilogram, symbol kg, is the SI unit of mass. It is defined by taking
the fixed numerical value of the Planck constant h to be 6.62607015E−34
J⋅s, which is equal to kg⋅m^2/s, where the meter and the second are
defined in terms of c and ΔνCs.
It was the last physical object used to define the seven standard units
of the SI system, but it was susceptible to different perturbations like
changes in g, temperature, etc. Now, this reference is gone, after
a difference of 54 micrograms (by 2005) was found from what was previously
thought to be a true 1.00000000 Kg of mass.
Now, none of the seven SI units has any physical
representation. All of them are FIXED as theoretical values, even when
PHYSICS REALITY shows that they all ARE INCORRECT!
Yet, they have been ADOPTED, and any other secondary unit is derived
from THEORETICAL VALUES.
This is what science has become: a FUCKING JOKE (ON YOU).
In case you didn't follow this:


Since May 20, 2019, the kilogram is now an ABSTRACT IDEA based on light
and energy, rather than a physical object. The new definition doesn't
need a physical reference block.


SI base unit: kilogram (kg)
https://www.bipm.org/en/si-base-units/kilogram


Being h = 6.62607015E–34 kg m^2/s


1 Kg = h/6.62607015E–34 s/m^2

replacing h, it gives that 1 Kg = 1 Kg.

I'd swear that this is A CIRCULAR REFERENCE without any value at all!

Science has become a fucking joke, as I wrote.


From the same page, another variant of the same joke:


1 Kg = 299,792,458²/(9,192,631,770 x 6.62607015E–34) h ΔνCs/c^2

then

1 Kg = 1 Kg
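
For what it is worth, a quick numerical check in Python (a sketch using only
the fixed SI numbers quoted above) shows that substituting the defining
constants into that expression indeed reduces to an identity:

# Sketch: plug the fixed SI numbers into
#   1 kg = c^2/(6.62607015E-34 x 9192631770) * h * ΔνCs / c^2
# with h in J·s, ΔνCs in Hz and c in m/s.

h_num  = 6.62607015e-34      # numerical value of h (exact by definition)
dv_num = 9_192_631_770       # numerical value of ΔνCs (exact by definition)
c_num  = 299_792_458         # numerical value of c (exact by definition)

coeff  = c_num**2 / (h_num * dv_num)
one_kg = coeff * h_num * dv_num / c_num**2

print(one_kg)   # 1.0 (up to floating-point rounding): the numbers cancel exactly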

Science at its highest!

Fucking retarded people, taking the rest as fucking retarded people!

Now, what idiocies were you writing about FIXING the speed of light?
rhertz
2024-12-09 23:26:37 UTC
Reply
Permalink
Volume 106, Number 1, January–February 2001
Journal of Research of the National Institute of Standards and
Technology
The Kilogram and Measurements of Mass and Force

https://www.nist.gov/system/files/documents/calibrations/j61jab.pdf


For almost 140 years, this was the standard of 1 Kg of mass:

QUOTE:

" In 1878, three 1 kg cylinders, KI, KII, and KIII, made of
90 % platinum—10 % iridium alloy were ordered from Johnson
Matthey in England; they were delivered in 1879. They were
polished, adjusted, and compared with the Kilogram of
the Archives by four observers in 1880 at the Observatory
of Paris. The mass of KIII was found to be the closest to
that of the Kilogram of the Archives. KIII was placed in
a safe at the BIPM in 1882, was chosen by the CIPM to be
the International Prototype Kilogram, and was ratified
as such by the 1st “Conference Generale des Poids et Mesures”
(CGPM) in 1889. In 1901, the 3rd CGPM in Paris established
the definition of the unit of mass: “The Kilogram is the
unit of mass; it is equal to the mass of the International
Prototype of the Kilogram.”
.........

VERIFICATIONS: "The latest one, the third periodic verification,
took place between 1988 and 1992. For it, the IPK was used with
the NBS-2 balance. The results of the third periodic verification
demonstrated a long-term instability of the unit of mass
on the order of approximately 30 ug/kg over the last century [4];
this instability is attributed to surface effects that are not
yet fully understood."


Some calculations:

1 Pt atom = 3.24827372640E-25 Kg
1 Ir atom = 3.23153838601E-25

0.9 Kg of Pt contains 2.77070245862E+24 atoms
0.1 Kg of Ir contains 3.09450138154E+23 atoms

For unknown reasons, about 0.000000030 Kg (30 micrograms) vanished in the
last century, as measured in the period 1988-1992. This represents roughly
9.2E+16 atoms of Platinum that vanished into thin air (or any mix of Pt-Ir that
you may use).
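
A rough check of that figure, as a Python sketch using the per-atom mass
quoted above and the ~30 microgram drift reported by NIST:

# Sketch: how many Pt atoms correspond to the ~30 ug/kg drift quoted above?
m_pt_atom = 3.24827372640e-25   # kg per Pt atom (value used above)
m_lost    = 30e-9               # kg, i.e. ~30 micrograms over a century

atoms_lost = m_lost / m_pt_atom
print(f"{atoms_lost:.2e} atoms")   # ~9.2e16 platinum atoms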

But the abstract values of h, c and ΔνCs are considered INVARIANT
because they are FIXED NUMBERS (by choice of a bunch of retarded
scientists at the CGPM). They all are unaccountable for their actions,
which have ruined physics since WWII.

Is this degradation happening by design, coming from an obscure agenda?
J. J. Lodder
2024-12-09 19:28:42 UTC
Reply
Permalink
Post by Paul B. Andersen
Post by J. J. Lodder
Post by Paul B. Andersen
Post by Paul B. Andersen
But it isn't impossible, if you had extremely precise instruments,
that you would measure a value slightly different from 299792458 m/s,
e.g. 299792458.000001 m/s.
This is indeed "completely, absolutely, and totally wrong".
I somehow thought that the "real speed" of light in vacuum
measured before 1985 was different from 299792458 m/s.
Of course it was. The adopted value was a compromise
between the results of different teams.
BTW, you are also falling into the 'das ding an sich' trap.
Post by Paul B. Andersen
(Which it probably was, but the difference hidden in the error bar)
And since the definition of metre only contain the defined constant c,
i thought "the real speed" of light could be different from c.
Yes, that is where you go wrong.
Post by Paul B. Andersen
But this is utter nonsense!
Beginning to see the light?
Post by Paul B. Andersen
Now I can't understand how I could think so.
My brain seems to be slower than it used to be. :-(
The real speed of light in vacuum is exactly c = 299792458 m/s,
and 1 metre = (1 second/299792458)c, is derived from c,
which means that the measured speed of light in vacuum will
always be c.
Correct.
Perhaps I can explain the practicalities behind it in another way.
If you measure the speed of light accurately
you must of course do an error analysis.
The result of this is that almost all of the error results from
the necessary realisation of the meter standard. (in your laboratory)
So the paradoxical result is that you cannot measure the speed of light
even when there is a meter standard of some kind.
You may call whatever it is that you are doing
'a speed of light measurement',
but if you are a competent experimentalist you will understand
that what you are really doing is a meter calibration experiment.
Hence the speed of light must be given a defined value,
for practical experimental reasons. [1]
Jan
How do we measure the speed of light?
We measure the time it takes for the light to travel a known distance.
So we bounce the light off a mirror and measure the round trip time.
How do we calibrate the distance to the mirror?
We measure the time it takes for the light to go back and forth
to the mirror.
L = (c/299792458)⋅t/2 where t is round trip time in seconds
AHA!!!
Post by J. J. Lodder
[1] Which have not changed.
(and will not change in the foreseeable future)
Meter standards are orders of magnitude less accurate
than time standards. (see why this must be?)
No, I don't understand.
The definition of metre only depends on the two constants
Δν_Cs and c and both have an exact value.
Is it because the time standard only depends on one constant?
I can however understand that practical calibration of the meter
is less precise than the calibration of a frequency standard.
------------------
I would like your reaction to the following;
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
https://www.bipm.org/en/cipm-mra
"The CIPM has adopted various secondary representations of
the second, based on a selected number of spectral lines of atoms,
ions or molecules. The unperturbed frequencies of these lines can
be determined with a relative uncertainty not lower than that of
the realization of the second based on the 133Cs hyperfine transition
frequency, but some can be reproduced with superior stability."
The second is still defined by "the unperturbed ground state
hyperfine transition frequency of the caesium 133 atom"
Δν_Cs = 9192631770 Hz by definition.
But practical realisations of this frequency standard,
that is an atomic frequency standard based on Cs133 is
not immune to perturbation, a magnetic field may affect it.
So there exist more stable frequency standards than Cs,
and some are extremely more stable.
But the frequencies of these standards are still defined
by Δν_Cs. 1 Hz = Δν_Cs/9192631770
This is "Calibration of a frequency standard".
The "secondary representations of second"
don't change the duration of a second
and the "secondary representations of metre"
don't change the length of a metre.
Instead of replying point by point I'll sum up the whole situation.
(as I understand it, and perhaps repeating what I wrote earlier)

For understanding all this you must realise
that there are two kinds of frequency standards:
microwave ones, typically in the (perhaps many) GHz range,
and
optical ones, typically in the hundreds of THz range.
The GHz ones may serve as absolute frequency standards and as clocks.
The optical ones (like the standard stabilised HeNe laser)
may also serve as (secondary) meter standards.
Standards labs supply lists of 'recommended' optical frequencies.
The optical frequency sources are of course also 'floating' frequency
standards on their own.

The GHz ones can be calibrated against each other by direct counting.
So their accuracy may equal that of the Cs standard. (by the definition)
The stability of frequency standards can in general be established
by comparing ensembles of them against each other. (so independently of Cs)
Which kind of standard to use depends on what you need:
relative or absolute accuracy.
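
As an illustration of how ensembles of standards are compared against each
other independently of Cs, here is a minimal Python sketch of the
non-overlapping Allan deviation computed from fractional-frequency comparison
data; the noise level and sample interval below are purely hypothetical.

import numpy as np

def allan_deviation(y, tau0, m):
    # Non-overlapping Allan deviation of fractional-frequency samples y,
    # each sample averaged over tau0 seconds; result refers to tau = m*tau0.
    n_blocks = len(y) // m
    y_bar = np.asarray(y)[: n_blocks * m].reshape(n_blocks, m).mean(axis=1)
    avar = 0.5 * np.mean(np.diff(y_bar) ** 2)
    return np.sqrt(avar), m * tau0

# Hypothetical comparison of two standards: white frequency noise at 1e-13
rng = np.random.default_rng(0)
y = 1e-13 * rng.standard_normal(100_000)
for m in (1, 10, 100, 1000):
    adev, tau = allan_deviation(y, tau0=1.0, m=m)
    print(f"tau = {tau:6.0f} s   sigma_y(tau) = {adev:.2e}")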

AFAIK about those matters, the idea among metrologists at present
is to leave things as they are,
until a really big step forward can be made.
(hopefully already at the next CGPM)

Some of the optical frequency standards are far more stable indeed.
(nowadays pushing 10^18, last time I looked)
But their frequencies (in terms of the Cs standard!)
are known to a much lesser accuracy.
(pushing 10^12, again last time I looked)
The use of frequency combs caused a revolution here. (see 2005 Nobel)

Summary: optical frequency standards can be far more stable,
but their frequencies are (relatively speaking!) poorly known.

Once you have a calibrated optical frequency standard, [1]
for which you know the frequency in terms of the Cs standard,
you know its wavelength, by the definition of c,
so you can start measuring distances and sizes
in terms of its wavelength, hence in meters.
It has become a secondary meter standard.
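
To make the last step explicit, a one-line Python sketch: given an optical
standard whose frequency is known in Cs-defined hertz, its vacuum wavelength
follows from the defined c. The frequency below is roughly that of the
iodine-stabilised HeNe laser and is quoted here only as an illustration.

C = 299_792_458        # m/s, exact by definition
f_opt = 473.612e12     # Hz, roughly the iodine-stabilised HeNe line (illustrative)

wavelength = C / f_opt
print(f"{wavelength * 1e9:.2f} nm")   # ~633 nm: a ready-made secondary metre standard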

So measuring distances/lengths is inherently much less accurate
than measuring time/frequency.
And, circle closed, this was precisely the reason
for giving c a defined value.
So c really cannot be measured anymore,
not because some crazed guru-followers decreed so,
but because of hard experimental realities and necessities.

Hope this clears up the questions you had,

Jan

[1] This is the ongoing, never-ending, program I mentioned earlier:
finding optical frequency standards, aka secondary meter standards,
to ever greater accuracy and reproducibility.
The original <1983 series of measurements, then called 'measuring c',
was just good enough to base the defined value of c on.
Those decades of added precision had to go into better frequency/meter
standards, not into a 'better' value of c.

PS There are first indications that it may be possible
to harness a gamma ray line from a nuclear transition
in the not too far future, for again greatly increased stability.
Very low frequency, as gammas go, but still in the very far UV.
Challenges, challenges.
Paul B. Andersen
2024-12-10 10:20:27 UTC
Reply
Permalink
Post by J. J. Lodder
Post by Paul B. Andersen
I would like your reaction to the following;
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
https://www.bipm.org/en/cipm-mra
"The CIPM has adopted various secondary representations of
the second, based on a selected number of spectral lines of atoms,
ions or molecules. The unperturbed frequencies of these lines can
be determined with a relative uncertainty not lower than that of
the realization of the second based on the 133Cs hyperfine transition
frequency, but some can be reproduced with superior stability."
The second is still defined by "the unperturbed ground state
hyperfine transition frequency of the caesium 133 atom"
Δν_Cs = 9192631770 Hz by definition.
But practical realisations of this frequency standard,
that is an atomic frequency standard based on Cs133 is
not immune to perturbation, a magnetic field may affect it.
So there exist more stable frequency standards than Cs,
and some are extremely more stable.
But the frequencies of these standards are still defined
by Δν_Cs. 1 Hz = Δν_Cs/9192631770
This is "Calibration of a frequency standard".
The "secondary representations of second"
don't change the duration of a second
and the "secondary representations of metre"
don't change the length of a metre.
Instead of replying point by point I'll sum up the whole situation.
(as I understand it, and perhaps repeating what I wrote earlier)
For understanding all this you must realise
that there are two kinds of frequency standards:
microwave ones, typically in the (perhaps many) GHz range,
and
optical ones, typically in the hundreds of THz range.
The GHz ones may serve as absolute frequency standards and as clocks.
The optical ones (like the standard stabilised HeNe laser)
may also serve as (secondary) meter standards.
Standards labs supply lists of 'recommended' optical frequencies.
The optical frequency sources are of course also 'floating' frequency
standards on their own.
The GHz ones can be calibrated against each other by direct counting.
So their accuracy may equal that of the Cs standard. (by the definition)
The stability of frequency standards can in general be established
by comparing ensembles of them against each other. (so independently of Cs)
Which kind of standard to use depends on what you need:
relative or absolute accuracy.
AFAIK about those matters, the idea among metrologists at present
is to leave things as they are,
until a really big step forward can be made.
(hopefully already at the next CGPM)
Some of the optical frequency standards are far more stable indeed.
(nowadays pushing 10^18, last time I looked)
But their frequencies (in terms of the Cs standard!)
are known to a much lesser accuracy.
(pushing 10^12, again last time I looked)
The use of frequency combs caused a revolution here. (see 2005 Nobel)
Summary: optical frequency standards can be far more stable,
but their frequencies are (relatively speaking!) poorly known.
Once you have a calibrated optical frequency standard, [1]
for which you know the frequency in terms of the Cs standard,
you know its wavelength, by the definition of c,
so you can start measuring distances and sizes
in terms of its wavelength, hence in meters.
It has become a secondary meter standard.
So measuring distances/lengths is inherently much less accurate
than measuring time/frequency.
And, circle closed, this was precisely the reason
for giving c a defined value.
So c really cannot be measured anymore,
not because some crazed guru-followers decreed so,
but because of hard experimental realities and necessities.
Hope this clears up the questions you had,
Yes. Thank you!
Post by J. J. Lodder
Jan
[1] This is the ongoing, never-ending, program I mentioned earlier:
finding optical frequency standards, aka secondary meter standards,
to ever greater accuracy and reproducibility.
The original <1983 series of measurements, then called 'measuring c',
was just good enough to base the defined value of c on.
Those decades of added precision had to go into better frequency/meter
standards, not into a 'better' value of c.
PS There are first indications that it may be possible
to harness a gamma ray line from a nuclear transition
in the not too far future, for again greatly increased stability.
Very low frequency, as gammas go, but still in the very far UV.
Challenges, challenges.
--
Paul

https://paulba.no/
ProkaryoticCaspaseHomolog
2024-12-08 21:17:32 UTC
Reply
Permalink
Post by Paul B. Andersen
Post by Paul B. Andersen
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
(2019)
 Δν_Cs = 9192631770 Hz  (hyperfine transition frequency of Cs133)
 c = 299 792 458 m/s (speed of light in vacuum)
 1 s = 9192631770/Δν_Cs  1 Hz = Δν_Cs/9192631770
 1 metre = (c/299792458)s = (9192631770/299792458)⋅(c/Δν_Cs)
https://www.bipm.org/en/measurement-units
Give the exact same definitions, so I assume
that the definitions above are valid now.
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
Post by J. J. Lodder
Post by Paul B. Andersen
If the speed of light is measured _with the meter and second
defined above_ it is obviously possible to get a result slightly
different from the defined speed of light.
So I was not "completely, absolutely, and totally wrong".
You were, and it would seem that you still are.
You cannot measure the speed of light because it has a defined value.
If you would think that what you are doing is a speed of light
measurement you don't understand what you are doing.
Yes, I was indeed "absolutely, and totally wrong",
but not completely wrong.
I disagree that you were wrong at all.
1) The expression "c" has multiple meanings. On the one hand, it is,
according to a widely accepted geometric model of spacetime, a
constant that expresses the relationship between units of space and
units of time. This "c" is given a defined value of 299792458 m/s,
and because it has that value by definition, it cannot be measured.
2) Another meaning of "c" is the speed of photons in vacuum. Photons
are, to the best of our knowledge, massless, and according to the
above geometric model of spacetime, all unimpeded massless
particles travel at the speed "c" given in definition (1).

Does the above-mentioned geometric model of spacetime, this "theory",
correspond to reality? All tests of predictions made by that model
(in the absence of gravity) have thus far validated its predictions.

But although well-validated, this theory (SR) is not proven and can
never be. Other theories of physics exist that are far beyond my
competency to discuss, which would predict different results of
physical measurements.

So it is legitimate to ask questions like, "Is the speed of light
in vacuum completely independent of wavelength"? since alternative
theories of physics envision scenarios where the SR prediction breaks
down.
J. J. Lodder
2024-12-08 22:32:09 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
Post by Paul B. Andersen
Post by Paul B. Andersen
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
(2019)
Δν_Cs = 9192631770 Hz (hyperfine transition frequency of Cs133)
c = 299 792 458 m/s (speed of light in vacuum)
1 s = 9192631770/Δν_Cs  1 Hz = Δν_Cs/9192631770
1 metre = (c/299792458)s = (9192631770/299792458)⋅(c/Δν_Cs)
https://www.bipm.org/en/measurement-units
Give the exact same definitions, so I assume
that the definitions above are valid now.
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
Post by J. J. Lodder
Post by Paul B. Andersen
If the speed of light is measured _with the meter and second
defined above_ it is obviously possible to get a result slightly
different from the defined speed of light.
So I was not "completely, absolutely, and totally wrong".
You were, and it would seem that you still are.
You cannot measure the speed of light because it has a defined value.
If you would think that what you are doing is a speed of light
measurement you don't understand what you are doing.
Yes, I was indeed "absolutely, and totally wrong",
but not completely wrong.
I disagree that you were wrong at all.
So you are not there yet.

Remember that nothing you say, and no definitions you make
can have any effect on reality as it is.
It can only change your way of looking at it,
and your interpretations of what you see.
Post by ProkaryoticCaspaseHomolog
1) The expression "c" has multiple meanings. On the one hand, it is,
according to a widely accepted geometric model of spacetime, a
constant that expresses the relationship between units of space and
units of time. This "c" is given a defined value of 299792458 m/s,
and because it has that value by definition, it cannot be measured.
2) Another meaning of "c" is the speed of photons in vacuum. Photons
are, to the best of our knowledge, massless, and according to the
above geometric model of spacetime, all unimpeded massless
particles travel at the speed "c" given in definition (1).
All very true, but completely irrelevant
from the point of view of metrology.

Metrology is about how to realise units, and nothing else.
Deep thoughts about the nature of things,
or what words might mean, do not come into it at all.
In particular, the whole theory of relativity is irrelevant
as far the definition of the meter is concerned.
[snip more irrelevancies]

Jan
J. J. Lodder
2024-12-08 11:30:36 UTC
Reply
Permalink
Post by Paul B. Andersen
Post by J. J. Lodder
Post by Paul B. Andersen
Post by J. J. Lodder
Post by Paul B. Andersen
So if the speed of light, measured with instruments with better
precision than they had in 1983 is found to be 299792458.000001 m/s,
then that only means that the real speed of light (measured with
SI metre and SI second) is different from the defined one.
Note: measured with SI metre and SI second.
Post by J. J. Lodder
So this is completely, absolutely, and totally wrong.
Such a result does not mean that the speed of light
is off its defined value,
it means that your meter standard is off,
and that you must use your measurement result to recalibrate it.
(so that the speed of light comes out to its defined value)
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
(2019)
Δν_Cs = 9192631770 Hz (hyperfine transition frequency of Cs133)
c = 299 792 458 m/s (speed of light in vacuum)
1 s = 9192631770/Δν_Cs  1 Hz = Δν_Cs/9192631770
1 metre = (c/299792458)s = (9192631770/299792458)⋅(c/Δν_Cs)
https://www.bipm.org/en/measurement-units
Give the exact same definitions, so I assume
that the definitions above are valid now.
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
Certainly. Letters are merely letters, you should know how to read.
You should have read on to the section on the realisations of those
units.
===
With such a system of units, there is in principle no limit to the
accuracy with which a unit can be realised. The exception remains the
second, for which the caesium microwave transition must, for the time
being, be retained as the basis of the definition.
===

In other words: all units may be developed in the future
to ever increasing accuracy, except for the second,
which is the defining basis of the system.
Post by Paul B. Andersen
Post by J. J. Lodder
Post by Paul B. Andersen
If the speed of light is measured _with the meter and second
defined above_ it is obviously possible to get a result slightly
different from the defined speed of light.
So I was not "completely, absolutely, and totally wrong".
You were, and it would seem that you still are.
You cannot measure the speed of light because it has a defined value.
If you would think that what you are doing is a speed of light
measurement you don't understand what you are doing.
When you have a definition of second and a definition of metre,
it is _obviously_ possible to measure the speed of light.
So you persist in being utterly wrong about it.
All meter standards are based on the defined speed of light.
So they cannot be used to measure the speed of light.
Post by Paul B. Andersen
If you measure the speed of light in air, you would probably
find that v_air ≈ 2.99705e8 m/s.
True, but irrelevant.
Post by Paul B. Andersen
If you measure it in vacuum on the ground, you would probably
get a value slightly less than 299792458 m/s because the vacuum
isn't perfect.
Again, irrelevant.
Of course you can measure -other- speeds.
Post by Paul B. Andersen
If you measure it in perfect vacuum (in a space-vehicle?) you
would probably get the value 299792458 m/s.
In that case you would be an incompetent idiot
who doesn't know what he is doing.
(which is calibrating a local meter standard)
Post by Paul B. Andersen
But it isn't impossible, if you had extremely precise instruments,
that you would measure a value slightly different from 299792458 m/s,
e.g. 299792458.000001 m/s.
However, such precise instruments hardly exist, and probably never will.
So I don't think this will ever be a real problem needing a fix.
Definitions can never be fixed by experiments.
Only people can do that, by agreeing on another one.
Post by Paul B. Andersen
It is possible to measure the speed of light even if there exists
a defined constant c = 299792458 m/s
OK, I give up on you. It would seem that you will never get it.
Post by Paul B. Andersen
If you are claiming otherwise, you are simply wrong.
Post by J. J. Lodder
Post by Paul B. Andersen
Post by J. J. Lodder
In fact, the kind of experiments that used to be called
'speed of light measurements' (so before 1983)
are still being done routinely today, at places like NIST, or BIPM.
The difference is that nowadays, precisely the same kind of measurements
are called 'calibration of a (secondary) meter standard',
or 'calibration of a frequency standard'.
Calibration of a frequency standard is just that, and not
a 'speed of light measurement'.
Do you have any idea of how these things were done?
(and still are)
Post by Paul B. Andersen
Post by J. J. Lodder
Post by Paul B. Andersen
Is any such recalibration of the meter ever done?
Of course, routinely, on a day to day basis.
Guess there are whole departments devoted to it.
(it is a subtle art)
The results are published nowadays as a list of frequencies
of prefered optical frequency standards.
(measuring the frequency of an optical frequency standard
and calibrating a secondary meter standard are just two different ways
of saying the same thing)
And remember, there is no longer such a thing as -the- meter.
It is a secondary unit, and any convenient secondary standard will do.
https://www.bipm.org/utils/common/pdf/si-brochure/SI-Brochure-9.pdf
https://www.bipm.org/en/cipm-mra
"The CIPM has adopted various secondary representations of
the second, based on a selected number of spectral lines of atoms,
ions or molecules. The unperturbed frequencies of these lines can
be determined with a relative uncertainty not lower than that of
the realization of the second based on the 133Cs hyperfine transition
frequency, but some can be reproduced with superior stability."
Yes, of course, many of them, such as rubidium clocks,
hydrogen masers, etc.
The difference is that the Cesium standard is exact, by definition.
All others have error bars on them.
Post by Paul B. Andersen
The second is still defined by "the unperturbed ground state
hyperfine transition frequency of the caesium 133 atom"
Δν_Cs = 9192631770 Hz by definition.
But practical realisations of this frequency standard,
that is an atomic frequency standard based on Cs133 is
not immune to perturbation, a magnetic field may affect it.
Certainly, it takes a very competent experimentalist to get it right.
Well, that is what standards labs are for.
Post by Paul B. Andersen
So there exist more stable frequency standards than Cs,
and some are extremely more stable.
But the frequencies of these standards are still defined
by Δν_Cs. 1 Hz = Δν_Cs/9192631770
This is "Calibration of a frequency standard".
They are more stable only in terms of themselves.
Their true frequency is not known to comparable accuracy.
(to far less than the stability of the Cesium standard, actually)
Post by Paul B. Andersen
The "secondary representations of second"
don't change the duration of a second
and the "secondary representations of metre"
don't change the length of a metre.
The secondary representations of the metre -are- the metre. [1]
There is no primary meter, so no 'the length of the meter',
('ding an sich' error again)

Jan

[1] Practicalities: secondary meter standards are less accurate than the
second by several orders of magnitude.
ProkaryoticCaspaseHomolog
2024-12-07 00:34:35 UTC
Reply
Permalink
Post by J. J. Lodder
Post by Paul B. Andersen
So if the speed of light, measured with instruments with better
precision than they had in 1983 is found to be 299792458.000001 m/s,
then that only means that the real speed of light (measured with
SI metre and SI second) is different from the defined one.
So this is completely, absolutely, and totally wrong.
Such a result does not mean that the speed of light
is off its defined value,
it means that your meter standard is off,
and that you must use your measurement result to recalibrate it.
(so that the speed of light comes out to its defined value)
Not necessarily.

It is still possible that despite there having been 1 1/2 centuries
of experiment supporting the constancy of the speed of light, at some
level of precision of measurement, some variation may be discovered.
To quantify such variation, it would obviously be necessary to perform
such analysis using an EARLIER definition of the meter.
Post by J. J. Lodder
In other words, it means that you can nowadays
calibrate a frequency standard, aka secundary meter standard
to better accuracy than was possible in 1983.
This is no doubt true,
but it cannot possibly change the (defined!) speed of light.
In still other words, there is no such thing as an independent SI meter.
The SI meter is that meter, and only that meter,
that makes the speed of light equal to 299792458 m/s (exactly)
Yes.

Going back to the OP thread topic, the fact that since 2018, E=mc^2
is true BY DEFINITION does not render irrelevant experiments designed
to test its validity.

Here is a thought experiment:
1) Take a gram of electrons and a gram of positrons, converting them
completely to electromagnetic energy.
2) Take a gram of protons and a gram of antiprotons, converting them
completely to electromagnetic energy.

Do we know absolutely for sure that all masses, whatever their form,
are equivalent when converted to electromagnetic energy? I believe it
to be true, but I don't KNOW it to be true.
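
For scale, here is the trivial arithmetic behind the thought experiment,
taking E = mc^2 at face value (a sketch; it says nothing about whether the
equivalence holds identically for leptons and baryons, which is exactly the
question being asked):

C = 299_792_458  # m/s

def rest_energy_joules(mass_kg):
    return mass_kg * C**2

# 1 g of particles + 1 g of antiparticles in either case:
m_total = 2e-3   # kg
E = rest_energy_joules(m_total)
print(f"{E:.3e} J")   # ~1.8e14 J, roughly 43 kilotons of TNT equivalent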

Certainly in terms of behavior in a gravitational field, most
alternative theories of gravitation generically predict violations of
the weak equivalence principle in the 10^-16 to 10^-18 range. The
Galileo Galilei mission will be the first experiment to explore this
range:
http://eotvos.dm.unipi.it/nobili/

Could violations of the "mass-energy equivalence principle" exist?
Maciej Wozniak
2024-12-08 03:52:19 UTC
Reply
Permalink
Post by Paul B. Andersen
Post by J. J. Lodder
Post by ProkaryoticCaspaseHomolog
The mere fact that theory and over a century of experimental
validation have led to the speed of light being adopted as a constant
does not invalidate experiments intended to verify to increasing
levels of precision the correctness of the assumptions that led to
it adoption as a constant.
So you haven't understood what it is all about.
I rest my case,
Jan
1 metre = (1 s/299792458)⋅c
1 second = 9192631770/Δν_Cs
Anyone can check GPS, nobody serious cares
about the moronic wishes of your moronic cult.
But feel free to keep enchanting the reality,
poor halfbrain.
rhertz
2024-12-04 15:44:02 UTC
Reply
Permalink
Post by ProkaryoticCaspaseHomolog
Post by rhertz
The settlement of constants BY COLLUSION requires that ALL THE
INSTRUMENTATION THAT EXIST (used in any science) BE RE-CALIBRATED, to
obey.
Do you get this?
If you manufacture mass spectrometers, voltmeters, timers, WHATEVER,
better that you RE-ADJUST the values that come from measurements.
Example: Your voltmeter measures 1 Volt as 0.9995743 OLD Volts? Then
RECALIBRATE THAT MF or you will sell NONE. Is that clear?
CALIBRATION is an essential part in the design and manufacturing OF ANY
INSTRUMENT!. But you require MASTER REFERENCES (OR GUIDELINES LIKE THOSE
FROM BIPM).
Your laser-based distance meter measures 1 meter as 1.00493 meters?
RECALIBRATE THE INSTRUMENT RIGHT IN THE PRODUCTION LINE.
Not to talk about instrumentation used to compute Atomic Weight or
a.m.u.
ADJUST, COMPLY AND OBEY OR YOU'RE OUT OF THE BUSINESS.
Did you manufacture a single instrument in an university lab? ADJUST,
COMPLY AND OBEY or you are OUTCASTED.
How do you dare to measure c = 299,793,294 m/s? ARE YOU CRAZY? Adjust
the readings to c = 299,792,458 m/s, OR ELSE.
And this has been happening since late XIX Century. Read the history
behind the definition of 1 Ohm, mainly commanded by British
institutions, with Cavendish lab behind it.
E ≈ 1.0000000 mc^2 is not a calibration adjustment. It is a
measurement made with calibrated instrumentation whose consistency
with other instrumentation has been carefully verified by procedures
such as you cast aspersion upon above.
Do you want to go back to three barleycorns per inch? Or the
historical chaos that resulted in the Troy pound, Tower pound,
London pound, Wool pound, Jersey pound, Trone pound, libra, livre
and so forth? Or a second equals 1/86400 part of a day?
************************************************************************
Atomic mass unit is measured by determining the mass of atoms relative
to carbon-12. Measured by mass spectrometer (mass-to-charge ratio).
Units amu (or u, Dalton)

Atomic weight is a calculated average based on the naturally occurring
isotopes and their abundance. Measured by mass spectrometer + isotopic
abundance. Dimensionless (relative value).

Example Carbon-12 mass = 12 amu. Carbon atomic weight ≈ 12.01.

The formula used in a mass spectrometer relates the mass-to-charge ratio
(m/z) of ions to their behavior in an electric or magnetic field. The
mass-to-charge ratio m/z is proportional to the square of the radius of the
ion’s path in the magnetic field.

m/z = r² B² e/2V

m = z (r² B² e/2V)

m = mass of the ion (in kilograms).
z = charge number of the ion (its charge is z times the elementary
charge e).
V = accelerating voltage (in volts).
B = magnetic field strength (in tesla, T).
r = radius of the circular path of the ion in the magnetic field (in
meters).

Measurement Process: The mass of atoms is measured using highly
sensitive instruments like a mass spectrometer. The steps are:

1) Ionization: Atoms are ionized (charged).
2) Acceleration: Ions are accelerated through an electric field V.
3) Deflection: Ions are deflected by a magnetic field B, based on their
mass and charge.
4) Detection: The mass-to-charge ratio is detected and used to calculate
the atomic mass of the ion.

*****************************************************************

NIST 2024 value for amu: 1.660 539 068 92 x 10^-27 kg

Some values of Relative Atomic Mass (C = 12.0000000)
H 1 1.00782503223(9)
He 4 4.00260325413(6)
Li 7 7.0160034366(45)
Be 9 9.012183065(82)
C 12 12.0000000(00)

****************************************************************

QUESTION 1: IN THE FORMULA

m = z (r² B² e/2V)

WITH HOW MANY DIGITS EACH VARIABLE HAS TO BE MEASURED IN ORDER TO OBTAIN
amu WITH 9 DECIMAL DIGITS? IN PARTICULAR, THE MAGNETIC FIELD AND THE
RADIUS.

QUESTION 2: WHAT IS THE IMPACT OF THE CALIBRATION OF THE MANY
INSTRUMENTS USED IN MASS SPECTROGRAPHY? CAN THEY BE FIXED BY COLLUSION
OF MANUFACTURERS AND REGULATORY BODIES?
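
A rough answer to QUESTION 1 can be sketched with simple error propagation:
since m is proportional to B²r²/V, the relative error is, to first order,
2·δB/B + 2·δr/r + δV/V, so nine significant decimal digits in the mass would
demand B and r at the level of a few parts in 10^10 each. The sketch below is
purely illustrative; in practice such precision is not obtained from absolute
B, r and V at all, but from frequency-ratio measurements, e.g. in Penning
traps.

# Sketch: error budget for m = z e B^2 r^2 / (2V), purely illustrative.
target_rel_err = 1e-9          # ~9 decimal digits in the relative atomic mass

# dm/m ≈ 2 dB/B + 2 dr/r + dV/V (worst case, first order);
# split the budget equally over the three contributions:
budget = target_rel_err / 3
print(f"dB/B <= {budget / 2:.1e}")   # ~1.7e-10
print(f"dr/r <= {budget / 2:.1e}")   # ~1.7e-10
print(f"dV/V <= {budget:.1e}")       # ~3.3e-10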
LaurenceClarkCrossen
2024-12-11 05:59:03 UTC
Reply
Permalink
Post by rhertz
In what is considered as the first experimental proof of Einstein's 1905
E = mc² paper, 27 years after (1932), the English physicist John
Cockroft and the Irish physicist Ernest Walton produced a nuclear
disintegration by bombarding Lithium with artificially accelerated
protons.
They used beams of protons accelerated with 600,000 Volts to strike
Lithium7 atoms, which resulted in the creation of two alpha particles.
The experiment was celebrated as a proof of E = mc², even when the
results were closer to E = 3/4 mc², BUT NOBODY WANTED TO NOTICE THIS!
For this paper, Cockcroft and Walton won the 1951 Nobel Prize in Physics
for their work on the FIRST artificial transmutation of atomic nuclei,
not for proving E = mc², a FALSE CLAIM still used by relativists.
Cockcroft and Walton NEVER HAD IN MIND to prove E = mc², as it can be
https://royalsocietypublishing.org/doi/pdf/10.1098/rspa.1932.0133
Yet, relativists hurried to celebrate the experiment as a triumph of
Einstein's theories, because they needed such accomplishment to
celebrate the veracity of their pseudoscience.
7:3 Li + 1:1 H ---> 4:2 He + 4:2 He + energy
Lithium7 amu 7.0104
Hydrogen amu 1.0072
8.0176
Helium amu 4.0011
Helium amu 4.0011
8.0022
Difference 0.0154 ± 0.003 amu = 14.3 ± 2.7 MeV
The difference in energy using E = mc², with 2024 NIST values, varies
from -2.1% to -49.7%, AVERAGING almost -25%.
CURIOUSLY, the average error over hundreds of measurements is EXACTLY the
factor 0.75 of Hassenohrl's formula E = 3/4 mc².
What happened with the history of this experiment? Was it re-written
since THIS single experiment, NEVER EVER REPEATED, to hype Einstein?
---------------------------------------------------
Lithium7 amu 7.0160034366
Hydrogen amu 1.00782503223
8.02382846883
Helium amu 4.00260325413
Helium amu 4.00260325413
8.00520650826
Difference 0.01862196057 amu
≈ 17.35 MeV
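As a check of the arithmetic above, a short Python sketch of the Q-value with
the quoted modern masses (conversion factor of roughly 931.494 MeV per amu):

# Q-value of Li-7 + p -> 2 alpha with the relative atomic masses quoted above.
AMU_TO_MEV = 931.494   # MeV per atomic mass unit (approximate)

m_li7 = 7.0160034366
m_h1  = 1.00782503223
m_he4 = 4.00260325413

delta_m = (m_li7 + m_h1) - 2 * m_he4
print(f"mass defect = {delta_m:.11f} amu")             # 0.01862196057 amu
print(f"Q           = {delta_m * AMU_TO_MEV:.2f} MeV")  # ~17.35 MeV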
************************************************************
INTERESTING: 92 years after the 1932 experiment, NIST managed to correct
the amu of the elements, so the difference FITS with E = mc².
WORSE YET: In the Manhattan booklet "Los Alamos Primer", written by
Serber & Oppenheimer in 1943, to instruct scientists recruited for the
project, the calculations WRITTEN THERE were based on electrostatic
repulsion of split atoms, which ALSO DIFFERS BY A SIMILAR AMOUNT from the
infamous 200 MeV computed by Meitner and her nephew in 1939.
Serber, on his 1992 book, affirmed that nuclear fission WAS UNRELATED to
E = mc², and that the fission process was NON-RELATIVISTIC.
Yet, just after WWII finished, the infamous Time Magazine cover had the
figure of Einstein and the nuclear cloud with E = mc² written on it.
Time Magazine was widely known as an outlet of Jewish propaganda, and
still is (what was left of it).
So, Hassenohrl was the real deal and Einstein the Jewish icon to be
hyper-hyped as the most important physicist since Babylon times?
From 1932 to 1943, the brightest minds involved in EXPERIMENTAL nuclear
fission DIDN'T SUPPORT E = mc².
The above FACT has to count, and open the eyes of most. The drive to
reinstall the genius of Einstein and relativity re-started in the early
'50s, and never did stop (cosmology, particle physics, etc.).
We live in a world of lies and INFAMOUS reconstruction of history, and I
mean ALL THE HISTORY.
How can E=mc^2 when the exact same mass of one substance is converted
into a different amount of energy than another substance? "In his
memoirs Count Harry Kessler records some conversations with Einstein,
including one where he asked the point-blank question : do your theories
relate to the atomic components? And receive the equally blunt answer
‘no’. Einstein gave his opinion that objects on such a small scale would
not be covered by his theory (See ‘Diaries of a Cosmopolitan’ by
Kessler, entry for Monday 14th Feb 1921)” [Newton, Zak. WAS EINSTEIN
WRONG? . The Electronic Book Company. Kindle Edition.] “And as for the
claim that his E = mc² prefigures the huge amounts of energy that can be
released by breaking up an atom, his equation deals with the supposed
conversion of mass into energy but the mass in an atom is not destroyed
in a nuclear explosion. It is simply broken into smaller particles.
Furthermore, his equation implies that the amount of energy held in a
body depends only on the amount of mass, not what it is a mass of. All
substances are considered to be the same, mass for mass, as generators
of energy. Except we know they aren’t. If you split an atom of uranium
it releases more than two and a half million more units of energy than
an atom of carbon while the fusion of deuterium into helium delivers 400
times the ‘oomph’ of uranium.” [Newton, Zak. WAS EINSTEIN WRONG? . The
Electronic Book Company. Kindle Edition.]

Ross Finlayson
2024-12-01 03:10:42 UTC
Reply
Permalink
Post by rhertz
In March 1905, six months before Einstein, the Austrian physicist Fritz
Hasenohrl published his third and final paper about the relationship
between mass and radiant energy in the same journal Annalen der Physik
that received and published his papers about relativity.
His final paper, a review of his former two since 1904, was an
elaborated thought experiment to determine if the mass of a perfect
black body radiation increased, from rest, while it was slowly
accelerated (the same hypothesis used by Einstein in his SR paper, to
m = 4/3 E/c² , which can be expressed as E = 3/4 mc²
which he found to be independent of the velocity of the cavity.
His work received much attention from the physics community, and won the
Haitinger Prize of the Austrian Academy of Sciences. In 1907 he
succeeded Boltzmann as professor of theoretical physics at the
University of Vienna.
This is the translation of his first paper, in 1904, where he derived m
= 8/3 E/c². In the next two papers, he corrected some mistakes,
publishing the last one in March 1905, six months before Einstein's
paper deriving E = mc².
https://en.wikisource.org/wiki/Translation:On_the_Theory_of_Radiation_in_Moving_Bodies#cite_note-21
Prior to Hassenohrl, and since 1881 paper from J.J. Thomson, different
works were published correcting Thomson and relating mass increase and
changes of the electrostatic energy of a moving charged sphere (later
the electron) by Fitzgerald, Heaviside, Larmor, Wien and (finally) by
Abraham in 1903. The work of Hassenohrl was based on Abraham, but with
the fundamental change of using radiant energy from inside a perfect
black body moving. This alone was considered a breakthrough in physics,
and Einstein took note of it and simplified the thought experiment of
Hassenohrl (a closed system) for other in an open system, which has
theoretical deficiencies, which Einstein was never able to solve, giving
up in 1942 (7th. attempt).
The remarkable work of Hassenohrl showed, beyond doubts, that any energy
(electrostatic or radiant) is related to mass increase, when moving, by
the relationship m = 4/3 E/c².
This fact, known for almost a decade since FitzGerald, couldn't be
explained correctly until 1922, when Enrico Fermi focused on the
problem.
All these works are considered today as pre-relativistic, even when
ether is barely mentioned.
Hassenohrl himself used two references (Einstein jargon didn't exist
yet): A fixed reference frame and a co-moving reference (along with the
cavity). The popularization of relativity and the easiness of having a
relationship E = mc² (even with restricted use of velocities) made it
much more appealing to the scientific community than having to deal with
E = 0.75 mc².
Even more, in the next decades, using c = 1 became popular, and so the
direct use E = m, as it was shown in the calculations done by Chadwick
(1932) to justify that he had proven the existence of the neutron. A
different world would exist if E = 0.75 mc² had been adopted, which is a
proof of what I've sustained for years about that such a simple equation
was adopted for convenience and colluded consensus (like many other
constants and formulae. GR?).
Hassenohrl's work proved that his equation is independent of the
velocity, and that mass is an invariant property of matter. On the
contrary, E = mc² has a limited range of applicability, forcing its use
to ratios v/c << 1. This is because its derivation comes from using the
first term (the quadratic one) of an infinite Maclaurin series for
γ - 1 = 1/√(1 - v²/c²) - 1 = 1/√(1 - β²) - 1 = 1/2 β² + 3/8 β⁴ + 5/16 β⁶
+ 35/128 β⁸ + ...
Einstein used L(γ - 1) ≈ (L/2) β² = 1/2 (L/c²) v², from which he
extracted m = L/c² as the mass in the kinetic energy equation. Neither he
nor von Laue (1911) nor Klein (1919) could resolve this very limited
approximation for use on closed systems. Yet, the equation stayed (by
consensus, due to its convenience).
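A quick sympy check of the expansion used above (a sketch):

import sympy as sp

beta = sp.symbols('beta', positive=True)
gamma = 1 / sp.sqrt(1 - beta**2)

# Maclaurin expansion of gamma - 1 in beta:
print(sp.series(gamma - 1, beta, 0, 10))
# -> beta**2/2 + 3*beta**4/8 + 5*beta**6/16 + 35*beta**8/128 + O(beta**10)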
The work of Hassenohrl, based on his thought experiment, is very
detailed. Much more than the loose arrangement of Einstein's paper. He
considered:
- A perfect black body cylindrical cavity, with the walls covered with a
perfectly reflective mirror, exterior temperature of 0°C, and two
perfect black body caps on the ends, tightly fixed and having zero
stress from the forces of radiation and motion.
- A very small acceleration, in order to cause smooth changes in
velocity of the cavity.
- The black body radiation is taken from its intensity i (he never
mentioned Planck), which he described as a "pencil of energy", which
formed an angle θ with the vector of velocity.
In modern terms, it's the Monochromatic Irradiance or Spectral Flux
Density: Radiance of a surface per unit frequency or wavelength per unit
solid angle.
- This directional quantity differs from Planck's Spectral Radiant
Energy formula by (c/4𝜋), which he accepted when integrating along the
volume of the cavity, giving original Planck's density u of radiation
energy.
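The geometric factor mentioned in the last item can be checked with a short
sympy sketch: integrating an isotropic intensity i over the full solid angle
and dividing by c gives the energy density u = 4π i/c, i.e. i = (c/4π) u.

import sympy as sp

theta, phi, i, c = sp.symbols('theta phi i c', positive=True)

# du = (i/c) dΩ for an isotropic intensity i, integrated over the sphere:
u = sp.integrate(sp.integrate((i / c) * sp.sin(theta), (theta, 0, sp.pi)),
                 (phi, 0, 2 * sp.pi))
print(sp.simplify(u))   # 4*pi*i/c, so i = (c/(4*pi)) * u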
With the above considerations, and many others, Hassenohrl wrote his
final paper, for which he gained recognition and a prize. But the
problem for him, and for physics, is that it was a pre-relativistic work
where absolute reference at rest was used (as in all the other works
from legions of physicists during the centuries). Relativity
cannibalized all the classic physics, except when it's not convenient to
do so: a blatant hypocrisy (take the merging of reference frames in
particle physics, or just the Sagnac effect).
The problem that Hassenohrl's work poses for physics is its enormous
complexity, which has consumed a lot of manpower from 1905 up to these
days in order to be understood.
This paper
Fritz Hasenohrl and E = mc²
Stephen Boughn
Haverford College, Haverford PA 19041
March 29, 2013
https://arxiv.org/abs/1303.7162
is one of many modern papers that try to understand Hassenohrl's work by
using relativity and Planck, which simplifies the complex work of the
Austrian physicist. Even this paper raises some doubts about the validity
(or not) of Hassenohrl's work in these days, when a notion of an absolute
reference frame is gaining momentum within physics. The paper tries to
explain (but fails) what Hassenohrl's mistakes were (of course under
the light of relativity), but it serves as a guide for analyzing
Hassenohrl's work.
However, the author is highly biased, because he focused on the first
1904 paper and not on the final publication in Annalen der Physik, where
Hassenohrl had substantially changed his first proposal, for instance by
introducing the idea of a slowly accelerated cavity (which is essential
to prove the independence of the gain in mass with respect to the
velocity).
I'm sorry I wasn't able to get the March 1905 paper to cite it here. It
seems that efforts to erase Hassenohrl's work (or Abraham's work with
electrons) from history have been successful. You have to resort to
books from the '50s to get some information, like the one cited by
Stephen Boughn.
Now, E = 3/4 mc² or E = mc²? Which one would the physics community
adopt?
Hmmm....
Fermi's always been pretty famous,
now those others are getting more of their due.


About mc^2 or 3/4 mc^2, perhaps first you should acknowledge
that mc^2 is the first term in the infinite series and maybe
the rest of them add up to 1/4.

Then there's also (m-m') = e/c^2, though that's rotational.

The idea of the geometric pencil should get you into
algebraic geometry, and more into the integral than the
differential analysis.


Then you got there the non-adiabatic course or with
regards to "radiation in a shell" or the cavity.

I don't know much yet about this "4/3 problem in
electron physics", yet, again it's probably about
the linear and rotational and the "nominal un-linearity"
of things, with regards to the quadrature and triangle
rule, and what's been neglected.


The aether theory is "post-" relativistic again.
I know it really got a bad name and that's too bad
because now that was dumb.

Whether "rest" mass or "resistant" mass becomes a
thing since _inertia_ contra momentum or energy,
is still a thing, and indeed, some, like Einstein,
have that it _is_ the thing.

Physics these days doesn't even yet have
a concept of "heft", inertial. That then
gets all into the equivalence principle,
and, you know, not the equivalence principle.


Kind of like Fresnel and "ether theory
might not be a total drag", make for
things like f = ma that f(t) = ma(t),
vis-a-vis mv and mv(t), whether "inertia"
is resistance to acceleration, "momentum" is
resistance to acceleration, or there's also "heft",
classical mechanics.


I'll be looking to read into Jammer.

Well then, thanks for these mentions.

"The net work done ..." is an integral equation,
_after_, "Retaining terms _to first order_ in β ...".
(2.8, 2.9)


"One might worry that that we have ignored
questions of simultaneity that, afterall,
are first order in v/c."



I imagine we should look to Max Jammer.

"From conservation of energy/momentum we know that ...."


"This assumption is the requirement that the change in
velocity of the cavity in one light crossing time is
much less than the speed of light, the small acceleration
condition."



Yet, there are infinitely-many nominally non-zero
higher-orders of acceleration in any change.


"In addition, we did not address what constitutes
a constant acceleration of the cavity."


"Hasenöhrl was certainly familiar
with Lorentz-Fitzgerald contraction
and, in fact, invoked it in H2 and H3 ...."


"Because a Born rigid object has constant
dimensions in instantaneously co-moving frames,
its length in the lab frame is Lorentz contracted.
This is only approximately so."


"The problem is that the results from energy conservation
imply an effective mass that is different from that implied by
conservation of momentum and both of these are different
from the m_eff = E/c^2 that we are led to expect from
special relativity."




In my podcasts I'd kind of arrived at that
momentum wasn't a conserved quantity, that
though it's "conserved in the open" after
kinematics, kinematics being different than kinetics.
(Rotational, linear, "nominally un-linear".)


The author of the paper you cited does make a
point like 'don't go assuming e = mc^2 if
you don't know'.


"It is often claimed that Einstein’s derivation of E = mc^2
was the first generic proof of the equivalence of mass and
energy (see Ohanian[2009] for arguments to the contrary)."
Ross Finlayson
2024-12-01 03:37:27 UTC
Reply
Permalink
Post by Ross Finlayson
Post by rhertz
[...]
Consider for example McLaughlin's 2008 "Nature and Inertia",
a dry and rambling recounting of how Newton made an
un-tenable theory with pull-gravity and now it's out.

Yeah, everybody knows that "e=mc^2" in SR is circular
and where it's arrived at is the GR's K.E.'s Taylor series.
(... If only the first term of an infinite series.)

"Vis-viva: kind of like super-symmetry: not dead again."

"Resistance, for Leibniz, presumes a prior positive inclination
to persevere. Newton, however, reverses the order of perseverance
and resistance." -- McLaughlin

"Are these inertial forces real or not?" -- A.P. French


The zero meters/second is infinity seconds/meter,
so, in kind of classical accounts, any change is
as from rest, for example accelerating the course
of turns of otherwise rotating bodies or as they
"radiate", besides as above for they "deform".

So, "heft" is sort of the idea for inertial resistance
and resistance as inertial, moment of inertia.


"Mach's Principle originated as a philosophical view
that was only later developed into a theory of physics.
Einstein hoped that General Relativity would instantiate
Mach's principle and ''suggested that the inertial forces
are not fictitious but are gravitational in origin.''" -- McLaughlin


Yeah, sure Einstein, go uniting gravity and strong nuclear
force with some fall-gravity then expect to arrive at
that the kinetic and kinematic have that too, in that
"General Relativity" per se is detachable from these
and re-attachable to whatever arrives as "you know,
whatever, classical in the limit".

Or, he's not far from right is what I'm saying,
Einstein, and you should give him a little more
credit for his later works, being that he knew what
was required and now everybody's missing it.


"Inertia is not accurately described
as brute resistant matter."

There are plenty of pre-relativistic theories
that are potential theories, so there are lots
of times that it's "pre-potentialistic" theories
which are more relevant.

(I think I just made up that word, "potentialistic".)


"In this respect, inertia is unlike the Aristotelian
notions of gravitas and levitas, which are more easily
misconstrued as movers than is inertia."

"Although a developed account of natural and compulsory
motion will not be given here, the basis for distinguishing
between natural and compulsory motion is present in
inertial physics, even if physicists think of it in
different, perhaps confusing, terms."



_Zero-eth_ law(s) of motion
Mikko
2024-12-01 10:27:54 UTC
Reply
Permalink
Post by rhertz
Now, E = 3/4 mc² or E = mc²? Which one would the physics community
adopt?
The latter because the former was refuted by later experiments, in
particular observations and analysis of radioactive decays.
--
Mikko
J. J. Lodder
2024-12-06 10:48:39 UTC
Reply
Permalink
Post by Mikko
Now, E = 3/4 mc² or E = mc²? Which one would the physics community
adopt?
The latter because the former was refuted by later experiments, in
particular observations and analysis of radioactive decays.
Yes, it was an elementary error that was easily fixed.
Nevertheless, some people never give up.

There are still papers being written
about the so-called '4/3-problem',
as if that really is a problem.

Jan
Paul.B.Andersen
2024-12-01 11:38:44 UTC
Reply
Permalink
Now, E = 3/4 mc² or E = mc²?  Which one would the physics community
adopt?
Hmmm....
You know the answer.
--
Paul

https://paulba.no/
Richard Hachel
2024-12-01 12:19:39 UTC
Reply
Permalink
Post by rhertz
Now, E = 3/4 mc² or E = mc²?
E=mc².sqrt(1+Vr²/c²)

If Vr~0 then E=mc².

If a mass is at rest in a system, it has no real speed in this system.

Nor any observable speed since Vo=Vr/sqrt(1+Vr²/c²)

Its only displacement is in time, and only the energy of displacement in
time is counted.

E=mc² is the energy of a particle by its passage in time.

If, in addition, the particle moves in space, it also takes on an energy
of movement (not to be confused with kinetic energy).

This energy is E=mVr².

It is extremely simple.

Since this does not add longitudinally, because the axis of time and the
axis of movement are perpendicular, we must call upon Pythagoras.

E=sqrt[(mc²)²+(mVr²)²]

E=mc².sqrt(1+Vr²/c²) and, if Vr=Vo/sqrt(1-Vo²/c²)

So :
E=mc²/sqrt(1-Vo²/c²)

It's that simple.
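As a purely numerical check of that last algebraic step (taking the
definitions above as given, and saying nothing about their physical
interpretation), here is a short Python sketch showing that with
Vr = Vo/sqrt(1 - Vo²/c²) the expression mc².sqrt(1 + Vr²/c²) coincides
with mc²/sqrt(1 - Vo²/c²):

import math

c = 2.99792458e8
m = 9.10938e-31   # an arbitrary electron-scale test mass, kg (illustrative only)

for Vo in (0.1 * c, 0.5 * c, 0.9 * c):
    Vr = Vo / math.sqrt(1.0 - Vo**2 / c**2)
    lhs = m * c**2 * math.sqrt(1.0 + Vr**2 / c**2)   # mc^2 * sqrt(1 + Vr^2/c^2)
    rhs = m * c**2 / math.sqrt(1.0 - Vo**2 / c**2)   # mc^2 / sqrt(1 - Vo^2/c^2)
    print(f"Vo/c = {Vo / c:.1f}: lhs = {lhs:.6e} J, rhs = {rhs:.6e} J")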

R.H.
Ross Finlayson
2024-12-01 18:30:32 UTC
Reply
Permalink
Post by Richard Hachel
Post by rhertz
Now, E = 3/4 mc² or E = mc²?
E=mc².sqrt(1+Vr²/c²)
If Vr~0 then E=mc².
If a mass is at rest in a system, it has no real speed in this system.
Nor any observable speed since Vo=Vr/sqrt(1+Vr²/c²)
Its only displacement is in time, and only the energy of displacement in
time is counted.
E=mc² is the energy of a particle by its passage in time.
If, in addition, the particle moves in space, it also takes on an energy
of movement (not to be confused with kinetic energy).
This energy is E=mVr².
It is extremely simple.
Since this does not add longitudinally, because the axis of time and the
axis of movement are perpendicular, we must call upon Pythagoras.
E=sqrt[(mc²)²+(mVr²)²]
E=mc².sqrt(1+Vr²/c²) and, if Vr=Vo/sqrt(1-Vo²/c²)
E=mc²/sqrt(1-Vo²/c²)
It's that simple.
R.H.
When linear and rotational are different,
then the rotational just makes for both
space-contraction-rotational and mass-energy equivalency,
while space-contraction-linear just hauls along,
it's kind of like light speed:
"in deep space in a vacuum",
that mass-energy equivalency with light's speed being infinite,
that v -> c => 1/(v-c) = +-infinity,
simply is not in accords with "apparent super-luminal motion",
as is evident plainly in the sky-survey.

c = infinity is _not_ a natural unit. Furthermore,
it doesn't necessarily "reflect", usual derivations
that have the perfectly, linearly equal-and-opposite,
of usual linear and one-dimensional and symmetric
and one-dimensional accounts, since that entire
one-dimensionality, is contrived, in real space.

Then, this 4/3 bit about the self-energy of the electron,
is plainly arrived at, about continuum mechanics, of
which particle mechanics is a conceit, a concession,
to that physics is just lacking some necessary mathematics.


The zero meters per second is infinity seconds per meter.

About "making the corner", with regards to ideas like
the Dirac delta, the radial basis function, and otherwise
kernels as they are about the origin, whose area equals one,
and the whole endeavor after deMoivre of Eulerian/Gaussian,
and the quadrature, makes not only for the yenri, has that
there are many, many assumptions in the stack of derivations,
that the "three regularities" of "increment, dispersion,
_and dimension_", of a continuum analysis, has that the
entire notion of path travel, makes for a greater dimensional
analysis, here with regards to mathematical notions like
the "dimensional, dimensionless, dimensioned resonator/alternator"
with regards to a "walk integral".

Now, that's a bit under-defined, but so is mechanics, and
so is the mathematics about mechanics, so, it fits underneath,
and makes for "around". There is quite a bit involved in
this sort of "original analysis", making a real kind of,
so-to-say, "coordinate-free", that though is sort of having
set aside as un-used, all what follows deMoivre.

So, deMoivre is great, it's like l'Hospital (an approximation)
and Rodrigues' formula (an approximation) when not
"reducible to the first-order", or neglecting the
infinitude of remaining terms.

So, a usual "severe abstraction of a mechanical reduction",
where "whatever Lagrange says, his theory of potentials
always wins", has that mechanics has that any change at
all in velocity and thus arbitrarily displacement about
arbitrarily an origin or frame, has that the walk integral's
origin is where it's at.


Mechanics: and all above it, is underdefined, right beneath.


So, infinity seconds per meter, may be zero meters per second,
then that c = infinity is _not_ a natural unit.
Richard Hachel
2024-12-02 00:56:09 UTC
Reply
Permalink
Post by Richard Hachel
E=mc².sqrt(1+Vr²/c²)
If Vr~0 then E=mc².
I never said that.

When I talk about c, I'm talking about the observable speed of the photon.

I said that it is only an observable transverse value.

I said that this value was invariant by change of inertial reference
frame.

I said that this value was c = 3×10^8 m/s

I said that this value was a decoy due to the nature of the local and
singular space-time of the experimenter who was making the measurement.

I said that in fact, it was only a truly instantaneous interaction in the
photon reference frame (zero proper time) AND in the receiver reference
frame (usually my receiver or the receiver of the equipment used).

I said that this is very logical in the end, because these two (the photon
and the receiver) are the reciprocal of the same effect and that the laws
of physics being reciprocal, the time taken for the photon to cross
a space for my retina is the same as the time taken for my retina in the
frame of reference (if we could give it one) of the photon to reach it.

So I reaffirm the speed of light is finally "relative" by local positional
change while it is transversally invariant by change of inertial frame of
reference.

It is worth c, if I measure it transversally, and this in all frames of
reference.

But it is an instantaneous transfer of energy for all receivers in all
frames of reference, and in all frames of reference, its escape speed is
c/2.

In summary: it is the receiver that tears the photon from its source,
instantly when a source is "hot" enough to be able to emit. This
phenomenon is instantaneous for any receiver.

In the reference frame (a reference frame is specific to any entity and is
not translatable) of the source, something strange happens. The source
sees a "photon" snatched from it, that is to say a quantity of energy,
without "understanding" why, since for it, if it knows the cause (it has
heated up), it does not know which receiver has just snatched its photon
or where it is (it is in the future for it).

All this happens as if I put this pencil on this table and suddenly,
without understanding why, my pencil disappeared.

I would be absolutely incapable of understanding why.

The reason would be that an intelligence placed centuries in my future
managed to go back in time, to grab my pencil instantly (for it) without
it being possible for me to know where my pencil went.

It is as simple and as logical as that.

This is also what the experiments of my compatriot, Mr. Alain Aspect
(Nobel Prize in Physics) show. He explains that there can be instantaneous
transfer of information for paired photons.

Why?

Because the receiver of the first photon, like this photon, perceives the
universe in the same instantaneity, at the moment of the tearing off of
the first photon. The other photon still conjoined is instantly polarized,
and when it is going to be torn off by another receiver, it will be torn
off as already polarized.

It's as simple as that.

R.H.
--
This message was posted with Nemo : <https://www.nemoweb.net/?DataID=***@jntp>