• Quick note - the problem with Youtube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Cont: Why James Webb Telescope rewrites/doesn't the laws of Physics/Redshifts (2)

I have a couple of questions about that paper.

In section 4, the paragraph bracketed by equations (10) and (11) makes three references to proper distance. Consulting a Wikipedia article for definitions of the various distance measures, it looks to me as though all three of that paragraph's references to proper distances should have referred to comoving distances. It seems to me that is a matter of some importance, because (in an expanding universe, in the context of that paragraph) proper distances differ from comoving distances by a factor of 1+z. Can you comment on this?

In a simply expanding space, comoving distance is:

d_C = z c / H_0

Proper distance is:

d_p = d_C / (1 + z)

And luminosity distance is:

d_L = d_C (1 + z) = d_p (1 + z)^2

The author seems to have that straight, unless I'm missing something.
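For concreteness, those three relations can be checked with a quick script (a sketch only; H0 = 70 km/s/Mpc is an assumed illustrative value, and d_C = z c / H_0 is the simple linear relation above, not the full FLRW integral):

```python
import math

H0 = 70.0        # km/s/Mpc (assumed illustrative value)
c = 299792.458   # km/s

def distances(z):
    """Distance measures in the simple linear model quoted above."""
    d_C = z * c / H0       # comoving distance
    d_p = d_C / (1 + z)    # proper distance (at emission)
    d_L = d_C * (1 + z)    # luminosity distance
    return d_C, d_p, d_L

d_C, d_p, d_L = distances(1.0)
# Consistency check: d_L = d_C (1+z) = d_p (1+z)^2
assert math.isclose(d_L, d_C * 2) and math.isclose(d_L, d_p * 4)
```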

ETA, read it a couple more times. When he says "the proper distance has expanded by a factor of (1+z)", he's referring to the proper distance now, aka comoving distance.

I'd also be interested to hear your comments with regard to Figure 3 and its caption, whose concluding sentence says "The predictions of the expanding and nonexpanding universes are presented as the solid and dashed lines, respectively." From the figure, it is rather obvious that the solid line (which is said to be the prediction for an expanding universe) fits the data better than the dashed line (which is said to be the prediction for a nonexpanding universe).

Not sure. It looks mislabeled.

This is one of the big muddy spots I've brought up before.

In the expanding model, luminosity distance is:

d_L = d_C (1 + z) = d_p (1 + z)^2

In my non-expanding model it's based on the light travel time distance:

d_L = d_t (1 + z)

What the author's non-expanding model is, I'm still figuring out.

It all depends on the "d" used in the flux-luminosity-distance relationship.

The question I have for my model is: what effect, if any, does time dilation have on angular size? I'm not sure about that right now.
 
Last edited:
I have a couple of questions about that paper.

In section 4, the paragraph bracketed by equations (10) and (11) makes three references to proper distance. Consulting a Wikipedia article for definitions of the various distance measures, it looks to me as though all three of that paragraph's references to proper distances should have referred to comoving distances. It seems to me that is a matter of some importance, because (in an expanding universe, in the context of that paragraph) proper distances differ from comoving distances by a factor of 1+z. Can you comment on this?

In a simply expanding space, comoving distance is:

d_C = z c / H_0

Proper distance is:

d_p = d_C / (1 + z)

And luminosity distance is:

d_L = d_C (1 + z) = d_p (1 + z)^2

The author seems to have that straight, unless I'm missing something.
No, the author doesn't have that straight.

From the last equation you wrote above, the luminosity distance is greater than the proper distance by a factor of (1 + z)^2. But the author says:
Redshift and time dilation cause a luminosity distance greater than the proper distance by (1 + z).
Now that might just be another example of careless/sloppy wording, as in Figure 3 and its caption. But the author goes on to say
The expanding metric makes the angular diameter distance smaller than the proper distance by (1 + z).
According to Wikipedia, however:
d_A = d_M / (1 + z)
where d_M is not the proper distance, but is instead the transverse comoving distance, which is equal to the comoving distance d_C if space is flat. From your second equation, and assuming flat space, the proper distance is
d_P = d_M / (1 + z)
so
d_A = d_P
In other words, the angular distance should be the same as the proper distance (for flat space).
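Spelling out that chain of substitutions (a minimal sketch; the 1000 Mpc comoving distance is an arbitrary stand-in):

```python
def flat_space_distances(z, d_C=1000.0):
    """The chain of definitions above, assuming flat space (d_M = d_C).
    The 1000 Mpc comoving distance is an arbitrary stand-in."""
    d_M = d_C              # transverse comoving = comoving in flat space
    d_A = d_M / (1 + z)    # Wikipedia's angular diameter distance relation
    d_P = d_C / (1 + z)    # proper distance, from the second equation above
    return d_A, d_P

d_A, d_P = flat_space_distances(z=2.0)
assert d_A == d_P   # angular distance equals proper distance (flat space)
```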

The author's equation (11) is derived from the author's belief (which I quoted above) that (for an expanding universe) the angular distance is "smaller than the proper distance by (1 + z)." As I have just shown, however, using your equations and those of Wikipedia, the angular distance is actually equal to the proper distance (assuming flat space).

If that is the error it appears to be, then the entire paper is worthless.

And it does appear to be some kind of error. After writing the above, I consulted Weinberg's Cosmology. Weinberg's equation (1.4.11) says the angular distance is
d_A = a(t_1) r_1
For flat space, the right hand side of that equation is the proper distance (as given by Weinberg's equation (1.1.15)).

At the very least, it seems the author of that paper is writing "proper distance" when he should be writing "comoving distance" or "transverse comoving distance". But if he really believes the angular distance differs from the proper distance by a factor of (1 + z), and derives his equation (11) by removing that factor, as appears to be the case, then I think that's a fatal error.

ETA, read it a couple more times. When he says "the proper distance has expanded by a factor of (1+z)", he's referring to the proper distance now, aka comoving distance.
If he's referring to the comoving distance, he should say so. By writing "proper distance" when, on your reading, he means the proper distance today, i.e. the comoving distance, he confuses readers such as myself who know just enough to check what he wrote but do not know enough to intuit what he really meant.

At the very least, it's a sloppy paper. That much is obvious from Figure 3 and its caption.

The author completed his PhD (in astronomy) at Case Western Reserve University in 2020, and his BS (in Engineering) at Huazhong University of Science and Technology in 2012. He is currently a postdoc at Case Western Reserve University. A young researcher who is probably not a native speaker of English can be forgiven some sloppy prose, but there may be a real question here as to whether his sloppiness carried over into his calculations.
 
ETA: When sketching a proof that metric form 1 describes the same metric as metric form 2, via a simple coordinate transformation, I omitted the highlighted factor I am now adding to this equation:
du^2 = (cos^2(r/R)) dr^2 = (1 − u^2/α^2) dr^2
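The corrected identity is easy to verify numerically, assuming the substitution behind it is u = R sin(r/R) with α = R (my reading of the sketch, not necessarily the poster's exact setup):

```python
import math

R = 2.0        # assumed curvature radius; the identity holds for any R > 0
alpha = R      # alpha in the second metric form, taken equal to R here

for r in (0.1, 0.5, 1.0):
    u = alpha * math.sin(r / R)      # assumed substitution u = R sin(r/R)
    lhs = math.cos(r / R) ** 2       # (du/dr)^2
    rhs = 1 - u ** 2 / alpha ** 2    # the factor in the second form
    assert math.isclose(lhs, rhs)
```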


The one that I mentioned when I first cited the de Sitter Sightseeing Tour is:

http://www.bourbaphy.fr/moschella.pdf

Figure 8:

[Image: desitterflat.png]

Here time is defined by the relation:

x_0 + x_4 = R e^{t/R}

From this you get the static time coefficient and the dynamic space coefficients that satisfy the equation for the hyperboloid.
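A quick way to see that this works is to plug the standard flat-slicing embedding (the explicit formulas below are the standard parametrization, as given in Moschella's paper, not something derived in this thread) into the hyperboloid constraint:

```python
import math

R = 1.0   # de Sitter radius

def embedding(t, y):
    """Standard flat-slicing embedding of de Sitter space (assumed here,
    following Moschella's paper): t is conformal-flat time, y the flat
    spatial coordinates."""
    r2 = sum(yi * yi for yi in y)
    e = math.exp(t / R)
    x0 = R * math.sinh(t / R) + (r2 / (2 * R)) * e
    xs = [e * yi for yi in y]
    x4 = R * math.cosh(t / R) - (r2 / (2 * R)) * e
    return x0, xs, x4

for t, y in [(0.0, (0.3, 0.1, 0.2)), (1.5, (0.7, 0.0, 0.4)), (-2.0, (1.0, 2.0, 0.5))]:
    x0, xs, x4 = embedding(t, y)
    # Hyperboloid constraint: -x0^2 + sum(x_i^2) + x4^2 = R^2
    assert math.isclose(-x0 ** 2 + sum(x * x for x in xs) + x4 ** 2, R ** 2)
    # The slicing condition: x0 + x4 = R e^{t/R}, always positive
    assert math.isclose(x0 + x4, R * math.exp(t / R))
```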

As the caption says, x_0 + x_4 > 0 covers half the manifold. So I'm thinking, how about this:

....wrong guesses snipped....
The wrong guesses I snipped are wrong guesses.

That probably might not make a ton of sense.
Yes.
 
If he's referring to the comoving distance, he should say so. By writing "proper distance" when, on your reading, he means the proper distance today, i.e. the comoving distance, he confuses readers such as myself who know just enough to check what he wrote but do not know enough to intuit what he really meant.

The comoving distance is the proper distance now.

The angular distance is the proper distance then.

In my "paper", I say comoving distance, and proper distance when the light was emitted. Adding "when the light was emitted" to every single instance of "proper distance" is kind of annoying, but necessary to avoid the nitpicks.

It seems his convention is to use angular distance (proper distance when light was emitted) and proper distance (now, aka, comoving distance).

It's a bit more concise. But otherwise yeah, it confused me too for a bit.

ETA: he discusses it in a tweet thread:

https://twitter.com/PengfeiLi0606/status/1673323586709274624

I mean, the dashed line shows the turnaround, so that has to be the expanding model. I asked him on twitter. I'll let you know.
 
It seems his convention is to use angular distance (proper distance when light was emitted) and proper distance (now, aka, comoving distance).


That wouldn't be right either, because the angular distance coincides with proper distance only for flat space, and he claims to make no assumptions about the model.

ETA: This seems like too obvious a mistake for the author to make, but Mike Helland might not be entirely right about what the author is thinking.
 
The wrong guesses I snipped are wrong guesses.

At least this part should be correct:

x_0 + x_4 = R e^{t/R} − R

Insofar as this slicing covers "more" of the manifold (x_0 + x_4 > −R vs. x_0 + x_4 > 0).

Maybe better written as:

x_0 + x_4 = R (e^{t/R} − 1)
 
That wouldn't be right either, because the angular distance coincides with proper distance

Proper distance at the time when light was emitted.

Its proper distance was smaller before the light was emitted, is larger today, and will be larger still tomorrow.
 
That wouldn't be right either, because the angular distance coincides with proper distance (when the light was emitted) only for flat space, and he claims to make no assumptions about the model.

That seems to be a valid criticism to me (I added the bit in parentheses).

ETA: FWIW, this means his conclusion applies to all flat models, but not open or closed models. From that image we looked at showing levels of confidence for SNe 1a models for different cosmological parameters, we can see that an open FLRW is a better fit to the data. My own SSE calculations show that the luminosity distance d_L = -ct(1+z), which is the same as Melia's, is a better fit to that data than all flat models. But I also calculated SSE values for open FLRW models, after seeing that image, and universes with ΩΛ > 1 show an improvement over d_L = -ct(1+z).

So it appears to be model independent in the sense that as long as ΩΛ + ΩM=1, the model is covered. This avoids drawing a conclusion specific to ΩΛ=0.68 or 0.72, for example.
 
https://theconversation.com/cosmolo...-observations-demand-a-radical-rethink-204190

Our ideas about the Universe are based on a century-old simplification known as the cosmological principle. It suggests that when averaged on large scales, the Cosmos is homogeneous and matter is distributed evenly throughout.

This allows a mathematical description of space-time that simplifies the application of Einstein’s general theory of relativity to the Universe as a whole.

Our cosmological models are based on this assumption. But as new telescopes, both on Earth and in space, deliver ever more precise images, and astronomers discover massive objects such as the giant arc of quasars, this foundation is increasingly challenged.

In our recent review, we discuss how these new discoveries force us to radically re-examine our assumptions and change our understanding of the Universe.
 
Here's one idea for a geometrical foundation. You have a hyperboloid:

−x_0^2 + x_1^2 + x_2^2 = α^2

Place stationary clocks around its "neck". The worldlines of the clocks are hyperbolas that go up and down.

As time passes, these clocks don't actually move along their worldlines. We usually think of them as moving up. We'll say instead that the events representing clock ticks move down the worldline instead, into the past.

It's an equivalence principle for time, of sorts. Who says we're moving forward in time? Maybe we're standing still and time is moving against us?

This means the distances between the clocks never change, but the space and time between their events do as they move into the past.
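For what it's worth, one standard parametrization of such "stationary clock" worldlines can be checked to stay on the hyperboloid (this parametrization is my assumption; the post doesn't give explicit coordinates):

```python
import math

alpha = 1.0   # hyperboloid radius

def worldline(t, theta):
    """One standard parametrization of a 'stationary clock' worldline on the
    hyperboloid -x0^2 + x1^2 + x2^2 = alpha^2 (my assumption; the post does
    not give explicit coordinates). theta fixes where on the neck the clock
    sits; t runs along its hyperbolic worldline."""
    x0 = alpha * math.sinh(t / alpha)
    x1 = alpha * math.cosh(t / alpha) * math.cos(theta)
    x2 = alpha * math.cosh(t / alpha) * math.sin(theta)
    return x0, x1, x2

for t in (-2.0, 0.0, 3.0):
    for theta in (0.0, 1.0, 2.5):
        x0, x1, x2 = worldline(t, theta)
        assert math.isclose(-x0 ** 2 + x1 ** 2 + x2 ** 2, alpha ** 2)
```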

[Animation: hyperboloid.gif]
 
This means the distances between the clocks never change, but the space and time between their events do as they move into the past.

There's a saying, "The future is certain, it is only the past that is unpredictable." But it's supposed to be a joke about the Soviet Union, not a blueprint for a model of physics.
 
Modern galaxies are really very different to high redshift ones.
Probably the most powerful test is chemical abundances; unlike sizes or masses, they don't depend on the assumed cosmology. It has been known for decades that higher-redshift galaxies have fewer heavy elements, and that has been further confirmed to higher redshift by JWST spectroscopy. Most z=6-10 galaxies studied by JWST are around a tenth solar, but some go much lower. This is not compatible with no evolution. The difference is so marked that some emission lines which can barely be detected in local galaxies are booming in high-z ones.
https://ui.adsabs.harvard.edu/abs/2012ApJ...755...89R/abstract
https://ui.adsabs.harvard.edu/abs/2023MNRAS.518..425C/abstract
https://ui.adsabs.harvard.edu/abs/2022arXiv221108255M/abstract

Under standard cosmology galaxies are also smaller and less massive at fixed galaxy abundance. They show bluer UV spectra which likely indicates less dust. The highest redshift objects appear to have basically zero dust.
https://ui.adsabs.harvard.edu/abs/2022arXiv220813582O/abstract
https://ui.adsabs.harvard.edu/abs/2023ApJ...949L..18P/abstract
https://iopscience.iop.org/article/10.3847/2041-8213/acbfb9

There's also a lot which falls in line with predictions, such as the decline in metallicity and more detailed predictions. There were some loud nonsense claims at the beginning, but those people did not formalise any analysis and very much cherry-picked which results they looked at.
https://ui.adsabs.harvard.edu/abs/2023arXiv230413755M/abstract
https://ui.adsabs.harvard.edu/abs/2023arXiv230413755M/abstract

Another confirmed prediction is time dilation: distant sources show exactly the (1+z) time dilation predicted in an expanding universe. That result and others are incompatible with tired light models.
 
Modern galaxies are really very different to high redshift ones.
Probably the most powerful test is chemical abundances; unlike sizes or masses, they don't depend on the assumed cosmology. It has been known for decades that higher-redshift galaxies have fewer heavy elements, and that has been further confirmed to higher redshift by JWST spectroscopy. Most z=6-10 galaxies studied by JWST are around a tenth solar, but some go much lower. This is not compatible with no evolution. The difference is so marked that some emission lines which can barely be detected in local galaxies are booming in high-z ones.
https://ui.adsabs.harvard.edu/abs/2012ApJ...755...89R/abstract
https://ui.adsabs.harvard.edu/abs/2023MNRAS.518..425C/abstract
https://ui.adsabs.harvard.edu/abs/2022arXiv221108255M/abstract

Under standard cosmology galaxies are also smaller and less massive at fixed galaxy abundance. They show bluer UV spectra which likely indicates less dust. The highest redshift objects appear to have basically zero dust.
https://ui.adsabs.harvard.edu/abs/2022arXiv220813582O/abstract
https://ui.adsabs.harvard.edu/abs/2023ApJ...949L..18P/abstract
https://iopscience.iop.org/article/10.3847/2041-8213/acbfb9

There's also a lot which falls in line with predictions, such as the decline in metallicity and more detailed predictions. There were some loud nonsense claims at the beginning, but those people did not formalise any analysis and very much cherry-picked which results they looked at.
https://ui.adsabs.harvard.edu/abs/2023arXiv230413755M/abstract
https://ui.adsabs.harvard.edu/abs/2023arXiv230413755M/abstract

Another confirmed prediction is time dilation: distant sources show exactly the (1+z) time dilation predicted in an expanding universe. That result and others are incompatible with tired light models.


"But... but... but...what about...?"

(Wait for it)
 
Modern galaxies are really very different to high redshift ones.
Probably the most powerful test is chemical abundances; unlike sizes or masses, they don't depend on the assumed cosmology. It has been known for decades that higher-redshift galaxies have fewer heavy elements, and that has been further confirmed to higher redshift by JWST spectroscopy. Most z=6-10 galaxies studied by JWST are around a tenth solar, but some go much lower. This is not compatible with no evolution. The difference is so marked that some emission lines which can barely be detected in local galaxies are booming in high-z ones.
https://ui.adsabs.harvard.edu/abs/2012ApJ...755...89R/abstract
https://ui.adsabs.harvard.edu/abs/2023MNRAS.518..425C/abstract
https://ui.adsabs.harvard.edu/abs/2022arXiv221108255M/abstract

Under standard cosmology galaxies are also smaller and less massive at fixed galaxy abundance. They show bluer UV spectra which likely indicates less dust. The highest redshift objects appear to have basically zero dust.
https://ui.adsabs.harvard.edu/abs/2022arXiv220813582O/abstract
https://ui.adsabs.harvard.edu/abs/2023ApJ...949L..18P/abstract
https://iopscience.iop.org/article/10.3847/2041-8213/acbfb9

There's also a lot which falls in line with predictions, such as the decline in metallicity and more detailed predictions. There were some loud nonsense claims at the beginning, but those people did not formalise any analysis and very much cherry-picked which results they looked at.
https://ui.adsabs.harvard.edu/abs/2023arXiv230413755M/abstract
https://ui.adsabs.harvard.edu/abs/2023arXiv230413755M/abstract

Thanks for that.

Doesn't that follow a pretty obvious general trend though?

https://www.nature.com/articles/nature14164

Candidates for the modest galaxies that formed most of the stars in the early Universe, at redshifts z > 7, have been found in large numbers with extremely deep restframe-ultraviolet imaging. But it has proved difficult for existing spectrographs to characterize them using their ultraviolet light. The detailed properties of these galaxies could be measured from dust and cool gas emission at far-infrared wavelengths if the galaxies have become sufficiently enriched in dust and metals. So far, however, the most distant galaxy discovered via its ultraviolet emission and subsequently detected in dust emission is only at z = 3.2, and recent results have cast doubt on whether dust and molecules can be found in typical galaxies at z ≥ 7. Here we report thermal dust emission from an archetypal early Universe star-forming galaxy, A1689-zD1. We detect its stellar continuum in spectroscopy and determine its redshift to be z = 7.5 ± 0.2 from a spectroscopic detection of the Lyman-α break. A1689-zD1 is representative of the star-forming population during the epoch of reionization, with a total star-formation rate of about 12 solar masses per year. The galaxy is highly evolved: it has a large stellar mass and is heavily enriched in dust, with a dust-to-gas ratio close to that of the Milky Way. Dusty, evolved galaxies are thus present among the fainter star-forming population at z > 7.

The first galaxies we noticed at z=1 were the less dusty ones. We look longer and closer and see that they don't represent the entire population. Same at z=4 and z=7 (the existence of any z>7 galaxy was thought to be impossible not all that long ago) and now the same is happening at z>10.

Don't we always notice the easiest ones to notice first? Isn't that to be expected?

Sure we see some metal poor galaxies out there. But we also see exceptions, which shouldn't be there if the early universe was actually different.

JWST was supposed to show Pop III stars. Maybe it will. Some might even say it has. We'll have to let the dust settle on that.

But we notice metal poor stars right here in our own galaxy too:

https://en.wikipedia.org/wiki/HD_140283

So it's not like those conditions are unique to the early universe.

If visible light is, just say, between 800 nm and 400 nm, then at z=9, those wavelengths would be between 80 nm and 40 nm.

Doesn't that make absorption lines harder to see?

And, if the theory were right, shouldn't we notice at some distance elements like Oxygen completely disappearing?


Another confirmed prediction is time dilation: distant sources show exactly the (1+z) time dilation predicted in an expanding universe. That result and others are incompatible with tired light models.

Indeed.

If phenomena are time dilated because of their speed away from us, and if the expansion of the universe also stretches the wavelengths of light from those distances as it travels, it seems the oscillations of EM waves should be redshifted by two factors of (1 + z): one for the time dilation at the beginning, one for the stretching during the trip.

So if photons are redshifted by (1 + z), shouldn't supernovae be time dilated by (1 + z)^(1/2)?
 
Thanks for that.

Doesn't that follow a pretty obvious general trend though?
No.
Sure we see some metal poor galaxies out there. But we also see exceptions, which shouldn't be there if the early universe was actually different.
Reference?
JWST was supposed to show Pop III stars.
Indeed. What is your argument?

But we notice metal poor stars right here in our own galaxy too:

https://en.wikipedia.org/wiki/HD_140283
So what?

So it's not like those conditions are unique to the early universe.
What conditions? If you were able to read and understand scientific papers, which you clearly are not, you would realise that all the spectroscopically confirmed z>10 galaxies are different from low-z galaxies.

If visible light is, just say, between 800 nm and 400 nm, then at z=9, those wavelengths would be between 80 nm and 40 nm.

Doesn't that make absorption lines harder to see?
Wut?
And, if the theory were right, shouldn't we notice at some distance elements like Oxygen completely disappearing?
The first pop III stars are massive and short lived and die in pair instability supernovae spewing out nucleosynthesised elements well past oxygen. It doesn't take long for that to happen.
 



We theorize what the early universe should look like, and then we go looking for it, and we find it.

Later, we find things we didn't initially calibrate our telescopes specifically for.

"ALMA reveals a stable rotating gas disk in a paradoxical low-mass, ultra-dusty galaxy at z = 4.274"

https://arxiv.org/abs/2306.10450

Happens every time.

ETA:

https://arxiv.org/abs/2305.14418

Extremely red galaxies at z=5−9 with MIRI and NIRSpec: dusty galaxies or obscured AGNs?
 
Don't we always notice the easiest ones to notice first? Isn't that to be expected?
The evolution in the beta slope is done using the same galaxy selection at lower redshift. It is a like-for-like comparison. The galaxy in the paper you cited is a Lyman break galaxy and would meet the criteria used to select these galaxies. Some very dusty objects were missed by HST, but JWST's coverage extends beyond the rest-frame UV. These objects are rare; they are not normal galaxies.

Sure we see some metal poor galaxies out there. But we also see exceptions, which shouldn't be there if the early universe was actually different.
That is completely backwards. It isn't some galaxies being metal poor, it is the bulk population. All the NIRSpec galaxies at z=6-10 are metal poor. There is no law in the standard model which forbids some massive galaxies enriching early; that is set by galaxy evolution, not cosmology. What cannot happen in a tired light or static model is any global evolution. And yet there is.


But we notice metal poor stars right here in our own galaxy too:

https://en.wikipedia.org/wiki/HD_140283

So it's not like those conditions are unique to the early universe.
A star which formed in the early universe, 12 Gyr ago.


If visible light is, just say, between 800 nm and 400 nm, then at z=9, those wavelengths would be between 80 nm and 40 nm.

Doesn't that make absorption lines harder to see?

And, if the theory were right, shouldn't we notice at some distance elements like Oxygen completely disappearing?
They're not measuring absorption lines. JWST isn't working in visible light. The models predict much earlier times for the first enrichment.


If phenomena are time dilated because of their speed away from us, and if the expansion of the universe also stretches the wavelengths of light from those distances as it travels, it seems the oscillations of EM waves should be redshifted by two factors of (1 + z): one for the time dilation at the beginning, one for the stretching during the trip.

So if photons are redshifted by (1 + z), shouldn't supernovae be time dilated by (1 + z)^(1/2)?

No. Time dilation and redshift are the same thing, one cannot be bigger than the other. Both can be derived from the FLRW metric to show they scale together, and not as you suggest.
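That claim is easy to verify numerically. In an FLRW model, a photon emitted at t_emit reaches a comoving observer when the conformal time elapsed equals the source's comoving distance; sending two photons a small interval apart shows the received interval is stretched by exactly a(t_obs)/a(t_emit) = 1 + z. A sketch with a toy matter-dominated scale factor (a ∝ t^(2/3) is my assumed example):

```python
import math

def a(t):
    """Toy matter-dominated scale factor, a ∝ t^(2/3) (an assumed example)."""
    return t ** (2.0 / 3.0)

def conformal(t1, t2, n=20000):
    """∫ dt / a(t) from t1 to t2 (conformal time), trapezoidal rule."""
    h = (t2 - t1) / n
    s = 0.5 * (1 / a(t1) + 1 / a(t2))
    for i in range(1, n):
        s += 1 / a(t1 + i * h)
    return s * h

def arrival_time(t_emit, chi, lo, hi):
    """Solve conformal(t_emit, t) = chi for t by bisection."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if conformal(t_emit, mid) < chi:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_e, delta = 1.0, 1e-4
chi = conformal(t_e, 8.0)                 # comoving distance: light emitted
                                          # at t = 1 reaches the observer at t = 8
t_o = arrival_time(t_e, chi, t_e, 20.0)
t_o2 = arrival_time(t_e + delta, chi, t_e, 20.0)

one_plus_z = a(t_o) / a(t_e)              # 1 + z = a(observed) / a(emitted)
dilation = (t_o2 - t_o) / delta           # stretching of the emitted interval
assert abs(dilation - one_plus_z) < 1e-2 * one_plus_z   # they scale together
```

Here 1 + z comes out at about 4 and the received interval is stretched by the same single factor, with no extra (1 + z)^(1/2) or (1 + z)^2 appearing.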
 
The evolution in the beta slope is done using the same galaxy selection at lower redshift. It is a like-for-like comparison. The galaxy in the paper you cited is a Lyman break galaxy and would meet the criteria used to select these galaxies. Some very dusty objects were missed by HST, but JWST's coverage extends beyond the rest-frame UV. These objects are rare; they are not normal galaxies.

Ok. Well, they seem to be becoming less rare, but I'm just an outside observer.

That is completely backwards. It isn't some galaxies being metal poor, it is the bulk population. All the NIRSpec galaxies at z=6-10 are metal poor.

I mean, are they?

https://arxiv.org/abs/2305.14418

Is there a handy image for what the global evolution might look like?

Say at z=0, we call the dust-to-gas ratio X. Presumably at z=infinity, it should be zero.

I would think there should be curve that shows the evolution of that percentage from X to 0.

And I'm curious how that has changed over the last few years.

Are we now saying the early universe is z>10?

Because it used to be like z>2.

Can we just change where the early universe ends every time we observe a dusty galaxy?

Again, outside observer. But that seems to be what's happened since the early 2000's.


No. Time dilation and redshift are the same thing, one cannot be bigger than the other. Both can be derived from the FLRW metric to show they scale together, and not as you suggest.

The time dilation of a supernova comes from the photons emitted at the end of the event having more distance to travel than the photons emitted at the beginning of the event, due to the supernova moving away from us.

If the rest-frame duration of the event were 2 weeks, the light from the end of the event would have 1 light-fortnight farther to travel than light at the beginning of the event. (ETA, assuming z=1 and a simply expanding universe.)

So all the time dilation happens because of that.

Which means whatever was emitting light back then should be time dilated too. So a photon emitted at that distance should be time dilated (and thus redshifted).

But the photon should also be redshifted by the stretching of its wavelength due to the expansion that occurs between emission and detection.

Doesn't it seem logical that the photon would be redshifted by the time dilation at the beginning and also by the stretching during its travel?

I think that's what this paper is getting at:

https://www.frontiersin.org/articles/10.3389/fphy.2022.826188/full

Although I've pointed to it here before, and it's been dismissed (without direct criticism, FWIW).
 
No. Time dilation and redshift are the same thing, one cannot be bigger than the other. Both can be derived from the FLRW metric to show they scale together, and not as you suggest.

The time dilation of a supernova comes from the photons emitted at the end of the event having more distance to travel than the photons emitted at the beginning of the event, due to the supernova moving away from us.

If the rest-frame duration of the event were 2 weeks, the light from the end of the event would have 1 light-fortnight farther to travel than light at the beginning of the event. (ETA, assuming z=1 and a simply expanding universe.)

So all the time dilation happens because of that.

Which means whatever was emitting light back then should be time dilated too. So a photon emitted at that distance should be time dilated (and thus redshifted).

But the photon should also be redshifted by the stretching of its wavelength due to the expansion that occurs between emission and detection.

Doesn't it seem logical that the photon would be redshifted by the time dilation at the beginning and also by the stretching during its travel?
Wow.

That is not just wrong. It is staggeringly, stupendously, spectacularly, extraordinarily, amazingly, astonishingly, mind-bendingly wrong.

ETA:
I think that's what this paper is getting at:

https://www.frontiersin.org/articles...22.826188/full

Although I've pointed to it here before, and it's been dismissed (without direct criticism, FWIW).
Here's a direct criticism of that paper: Its equation (B6) is incorrect.

The incorrectness of equation (B6) should be obvious to anyone with minimal understanding of calculus. As written, that equation assumes the integral doesn't change during a translation by t0, as would be the case if a(t) were a constant function. In FLRW models of an expanding universe, however, a(t) is a monotonically increasing function, so translation by t0 changes the value of the integral.
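The point is easy to demonstrate with any increasing a(t): translating the limits of ∫ dt/a(t) changes its value (the exponential toy scale factor below is an arbitrary assumption, chosen only to make the effect visible):

```python
import math

def a(t):
    return math.exp(0.07 * t)   # toy increasing scale factor (an assumption)

def integral(lo, hi, n=10000):
    """∫ dt / a(t) from lo to hi, trapezoidal rule (constant c dropped)."""
    h = (hi - lo) / n
    s = 0.5 * (1 / a(lo) + 1 / a(hi))
    for i in range(1, n):
        s += 1 / a(lo + i * h)
    return s * h

T, t0 = 10.0, -10.0
shifted = integral(t0, t0 + T)     # ∫ over [t0, t0 + T]
unshifted = integral(0.0, T)       # ∫ over [0, T]
# Equation (B6) would need these to be equal; here they differ by a factor ~2.
assert shifted > 1.9 * unshifted
```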
 
Wow.

That is not just wrong. It is staggeringly, stupendously, spectacularly, extraordinarily, amazingly, astonishingly, mind-bendingly wrong.

Well, say a supernova happens at z=1, and let's just use a simply expanding universe, so the galaxy's recessional velocity is v=c.

[Image: sntimedilation.png]


The light paths are shown in yellow. The green path is the supernova's motion away from us at v=c.

The red line is the duration of the supernova in the rest frame, and the blue line is the time measured by an observer at the origin.

The blue line is twice as long as the red line, which is what you'd expect at z=1, since 1+z = 2.

You obviously can't just use relativistic time dilation, since the speeds are greater than or equal to c.

Were we to be talking about the oscillation of an EM wave instead of a supernova, that wave would be redshifted by z=1 right out of the gate.

Since the universe expands between the emission time and the detection time, wouldn't it redshift further?
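The arithmetic behind that diagram can be sketched as a toy kinematic calculation (a literal transcription of the described picture, not an FLRW computation):

```python
# Toy transcription of the diagram described above: a source recedes at
# v = c in the "simply expanding" picture, and each photon is emitted from
# wherever the source is at that moment. Units with c = 1; x0 is an
# arbitrary starting distance.
c = 1.0
v = 1.0       # recession speed corresponding to z = 1 in this toy picture
x0 = 5.0      # source position when the event starts

def arrival(t_emit):
    """Observer at the origin receives the photon at t_emit + distance / c."""
    position = x0 + v * t_emit    # where the source is when it emits
    return t_emit + position / c

rest_duration = 2.0               # e.g. two weeks, in arbitrary units
observed = arrival(rest_duration) - arrival(0.0)
assert observed == (1 + v / c) * rest_duration   # stretched by 2 = 1 + z
```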
 
Doesn't it seem logical that the photon would be redshifted by the time dilation at the beginning and also by the stretching during its travel?

I think that's what this paper is getting at:

https://www.frontiersin.org/articles/10.3389/fphy.2022.826188/full

Although I've pointed to it here before, and it's been dismissed (without direct criticism, FWIW).

Wow.

Here's a direct criticism of that paper: Its equation (B6) is incorrect.

The incorrectness of equation (B6) should be obvious to anyone with minimal understanding of calculus. As written, that equation assumes the integral doesn't change during a translation by t0, as would be the case if a(t) were a constant function. In FLRW models of an expanding universe, however, a(t) is a monotonically increasing function, so translation by t0 changes the value of the integral.

B6?

eta, I see, waaay down there

That's assuming the photons are seconds apart, rather than billions of years, so changes in the scale factor over the duration aren't significant.
 
Mike Helland's struggles with calculus have been apparent throughout this thread and its predecessor, but he continues to remind us of his incompetence.

Wow.

Here's a direct criticism of that paper: Its equation (B6) is incorrect.

The incorrectness of equation (B6) should be obvious to anyone with minimal understanding of calculus. As written, that equation assumes the integral doesn't change during a translation by t0, as would be the case if a(t) were a constant function. In FLRW models of an expanding universe, however, a(t) is a monotonically increasing function, so translation by t0 changes the value of the integral.

B6?

eta, I see, waaay down there

That's assuming the photons are seconds apart, rather than billions of years, so changes in the scale factor over the duration aren't significant.
Nonsense. The scale factor doesn't change much during the interval Δt, but t0 is completely arbitrary and T is the time between emission and detection, which can be billions of years.

Here's a concrete counterexample to both Vavryčuk's equation (B6) and Mike Helland's attempt to justify it. Take the "initial time" t0 (i.e. the time at which the first photon is emitted) to be 10 billion years in the past, so (for example) t0 = -10 Gy. Take its travel time T to be the 10 billion years from its emission to its detection in the here and now (t = 0). The scale factor a(t) increases over time, so a(t0) might (for example) be 1/3 of a(t0+T), which might (for example) be 1/4 of a(T). (Note that the time t=T is ten billion years from now.)

Then
∫[t0+T, t0+T+Δt] (c dt)/a(t) ≈ 4 ∫[T, T+Δt] (c dt)/a(t)​
In that example, Mike Helland and Vavryčuk's equation (B6) are off by the factor of 4 on the right-hand side.

Vavryčuk's equation (B6) is just plain wrong. That error invalidates his equation (B7) and the whole point of his Appendix B. His entire paper depends on the false conclusion he draws from Appendix B, so his entire paper is worthless.

Worse than worthless, actually, because the existence of that worthless paper does positive harm by misleading the unskeptical and uninformed.
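The factor-of-4 counterexample is easy to check numerically. A sketch (my own, with an illustrative exponential scale factor chosen only so that a(T)/a(t0+T) = 4, not matching every ratio in the example above; times in Gy, c = 1):

```python
import math

# Scale factor chosen so a(T)/a(0) = 4 over T = 10 Gy
H = math.log(4.0) / 10.0
def a(t):
    return math.exp(H * t)

def integral(lo, hi, n=10_000):
    # midpoint rule for the integral of dt / a(t), with c = 1
    h = (hi - lo) / n
    return sum(h / a(lo + (i + 0.5) * h) for i in range(n))

t0, T, dt = -10.0, 10.0, 0.01        # emission 10 Gy ago, detection now
lhs = integral(t0 + T, t0 + T + dt)  # integral over [0, Δt]
rhs = integral(T, T + dt)            # integral over [T, T+Δt]
print(lhs / rhs)  # ≈ 4: the two sides of (B6) differ by a(T)/a(0)
```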
 
Mike Helland's struggles with calculus have been apparent throughout this thread and its predecessor, but he continues to remind us of his incompetence.


Nonsense. The scale factor doesn't change much during the interval Δt, but t0 is completely arbitrary and T is the time between emission and detection, which can be billions of years.

Here's a concrete counterexample to both Vavryčuk's equation (B6) and Mike Helland's attempt to justify it. Take the "initial time" t0 (i.e. the time at which the first photon is emitted) to be 10 billion years in the past, so (for example) t0 = -10 Gy. Take its travel time T to be the 10 billion years from its emission to its detection in the here and now (t = 0). The scale factor a(t) increases over time, so a(t0) might (for example) be 1/3 of a(t0+T), which might (for example) be 1/4 of a(T). (Note that the time t=T is ten billion years from now.)

Then
∫[t0+T, t0+T+Δt] (c dt)/a(t) ≈ 4 ∫[T, T+Δt] (c dt)/a(t)​
In that example, Mike Helland and Vavryčuk's equation (B6) are off by the factor of 4 on the right-hand side.

Vavryčuk's equation (B6) is just plain wrong. That error invalidates his equation (B7) and the whole point of his Appendix B. His entire paper depends on the false conclusion he draws from Appendix B, so his entire paper is worthless.

Worse than worthless, actually, because the existence of that worthless paper does positive harm by misleading the unskeptical and uninformed.

Wouldn't t0+T = -20 billion using your numbers? (ETA, ha, nope. Should equal 0, so t0+T+Δt = Δt. If t0=0 it makes sense.)

His conclusion in that appendix is:

"Comparing Eqs B5, B7, we see that the proper distance between two successive photons is constant and independent of the scale factor a(t). Consequently, the wavelength of photons cannot change with the scale factor a(t) in the standard FLRW metric."

Which obviously seems wrong.

My point, that photons redshifted by the expansion of space while they travel would carry a "double dipped" redshift compared to the time dilation of a supernova at an equal distance, just seemed similar to one of his arguments. I'm not endorsing that paper by any means.
 

They do not measure metallicities. One of the objects is in this paper, which is metal poor.

https://arxiv.org/abs/2301.09482

Is there a handy image for what the global evolution might look like?
Perhaps someone posted some papers on that already.

But the photon should also be redshifted by the stretching of its wavelength due to the expansion that occurs between emission and detection.
No, these are the same thing. They are imprecise ways of describing metric expansion.

https://www.frontiersin.org/articles...22.826188/full

Although, I've pointed to it here before, and it's been dismissed (without direct criticism, fwiw).
The author doesn't actually demonstrate that there is a contraction in FLRW. The two equations he compares are not describing the same time variable: one is a differential, the other is an integral relating different observers. Nor does he prove that there is a real mathematical contradiction. The explanation for where he thinks the derivation fails is also absurd: that it is due to the microscopic expansion between wave crests of light. His alternative derivation fudges it so the traveling photons have the same light travel time; this is entirely circular, as there is no redshift because that's his definition. The paper skims over the contradiction point.
Most bizarrely, if you follow the source for the "conformal metric", it is completely different in the citation. The time component is replaced by the "parametric time", which by definition depends on the scale factor. So this paper has rejected the FLRW metric because "time depends on the scale factor" and adopted a new one where time depends on the scale factor. But the paper here has just dropped that and replaced it with the normal time coordinate, which is wrong. In a later section the author comes clean that he has redefined everything in a conformal time, which indeed depends on the scale factor. His metric is still wrong, though, as he does not use this conformal time. In the end his equation for redshift is identical to the standard one and hence changes nothing. I'd say it's pure sophistry, but it's not even clever.
It's literally just rearranging the same equations and expecting a different result; if you do it correctly, there won't be one. The bit about the Friedmann equations and supernovae is just wrong: he uses the standard Friedmann equation derived with the standard metric. But I see these aren't the only errors.

https://arxiv.org/pdf/1103.4743.pdf
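On the "parametric/conformal time depends on the scale factor" point: conformal time is η = ∫ dt/a(t), so it inherits whatever form a(t) has. A quick numerical sketch (my own, with two illustrative scale factors):

```python
def conformal_time(a, t0, t1, n=100_000):
    # eta = integral of dt / a(t), the conformal-time interval
    h = (t1 - t0) / n
    return sum(h / a(t0 + (i + 0.5) * h) for i in range(n))

matter = lambda t: t ** (2.0 / 3.0)  # illustrative matter-era a(t)
static = lambda t: 1.0               # no expansion

eta_m = conformal_time(matter, 1.0, 8.0)
eta_s = conformal_time(static, 1.0, 8.0)
print(eta_m, eta_s)  # ≈ 3 and 7: same coordinate interval, different η
```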
 
They do not measure metallicities.

Ok. I took it to mean "dusty" means not metal poor.

I suppose it would be useful to define when the "early" universe was and what constitutes "metal poor". A curve from now to the big bang showing what the predicted dust-to-gas ratio is would be the best way to quantify that. Maybe I can find one.

No, these are the same thing. They are imprecise ways of describing metric expansion.

It seems common to say that the expansion of space stretches the wavelength of EM waves as they travel. Is that imprecise? Or do you just mean that's not a mathematical formulation of expanding space?

Can we take this to mean that if we describe the metric as time dilating EM waves at their source, that it would be "wrong", in that context, to describe them as being stretched by the expansion of space as they travel?

They author doesn't actually demonstrate that there is a contraction in FLRW. The two equations he compares are not describing the same time variable, one is a differential the other is an integral relating different observers. Nor does he prove that there is real mathematical contradiction. The explanation for where he thinks the derivation fails is also absurd, that it is due to the microscopic expansion between wave crests of light. His alternative derivation fudges it so the traveling photons have the same light travel time, this is entirely circular, there is no redshift because that's his definition. The paper skims over the contradiction point.
Most bizarrely if you follow the source for the "conformal metric", it is completely different in the citation. The time component is replaced by the "parametric time", which by definition depends on the scale factor. So this paper has rejected the FLRW metric because "time depends on scale factor" and adopted a new one where time depends on the scale factor. But the paper here has just dropped that and replaced it with the normal time coordinate, which is wrong. In a later section the author comes clean that he has redefined everything in a conformal time, which indeed depends on the scale factor. His metric is still wrong though as he does not use this conformal time. In the end his equation for redshift is identical to the standard one and hence changes nothing. I'd say it's pure sophistry, but it's not even clever.
It's literally just rearranging the same equations and expecting a different result, if you do it correctly there won't. The bit a the Friedman equations and supernovae is just wrong, he uses the standard Friedmann equation derived with the standard metric. But I see these aren't the only errors.

https://arxiv.org/pdf/1103.4743.pdf

Thanks. I wasn't advocating that paper, by the way. It just appeared to me to be making a similar argument to redshift and time dilation.
 
But we notice metal poor stars right here in our own galaxy too:

https://en.wikipedia.org/wiki/HD_140283

So it's not like those conditions are unique to the early universe.
That star, and the others like it, are from the early universe. They don't just disappear when the early galaxies merge to become modern galaxies. We use these stars to trace the early history of our own galaxy.
 
Thanks. I wasn't advocating that paper, by the way. It just appeared to me to be making a similar argument to redshift and time dilation.
So you just threw in a paper you were not advocating, in order to waste the time of your critics?
 
Meanwhile, back in the real world;

Spectroscopic verification of very luminous galaxy candidates in the early universe
Arrabal Haro, P. et al
https://arxiv.org/abs/2303.15431

Basically, as we all expected, they have now done some spectroscopic measurements to confirm, or otherwise, the early claims of very distant galaxies. And, as expected, not all of them were as distant as the first claims. For instance, a galaxy at a claimed z ~ 16 was, in fact, only at z ~ 4.9!
To cut a long story short, the most distant spectroscopically confirmed galaxy is now at z ~ 13.2.
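For a sense of scale, the redshift alone fixes the relative size of the universe at emission, a_emit/a_now = 1/(1+z), independent of the model details. A quick sketch (my own) for the numbers above:

```python
# Relative scale factor at emission: a_emit / a_now = 1 / (1 + z)
for z in (16.0, 4.9, 13.2):
    print(f"z = {z:5.1f}  ->  a_emit/a_now = {1 / (1 + z):.3f}")
# z = 16 would mean the universe was ~1/17 its present size;
# z = 4.9 means ~1/6; the confirmed z = 13.2 means ~1/14.
```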
 
So you just threw in a paper you were not advocating, in order to waste the time of your critics?

Of course not.

I think it's a curious situation.

Take this situation:

[Attached image: sntimedilation.png]


Now say that at the time when the last photon is emitted (SN end, the top of the green line), expansion stops and everything is stuck exactly where it is.

In this situation, the SN would be observed as fully time dilated, as if expansion never stopped. It's only the galaxy's motion between the start and end that determines the time dilation. The light could travel for 1 year, or 1 trillion years, in static or expanding space, and nothing would change.

If expansion stopped and everything was fixed in position, the time dilation would still exist. But the last photon never travels through expanding space. The first photon would travel through expanding space only until the second is emitted.

Are the photons redshifted? One of them never travels through expanding space, and the other does so only for a short time.

The author of the paper seems to think so. Though I'm not sure. This is part of the Discussion section:

The re-examination of light propagation in space defined by the standard FLRW metric reveals another severe contradiction with observations: this metric actually does not predict the cosmological redshift. This is surprising and against the common opinion that the standard FLRW metric produces the cosmological redshift. However, it is shown that the mathematical derivation originally proposed by Lemaitre [2] and repeated in textbooks is not correct. Lemaitre [2] analysed the change of the wavelength of photons propagating in expanding space and he came to a wrong conclusion that the wavelength of photons must increase, similarly as the proper distance between objects in rest. An increasing wavelength of photons is then transformed into the change of their frequency under the assumption of the constant speed of light. Since this derivation gave intuitively acceptable results, there was no reason to critically check its correctness by other cosmologists.

A correct analysis shows, however, that the wavelength of photons does not increase and the frequency of photons is constant during the space expansion defined by the standard FLRW metric. The change in the frequency of photons is always connected with time dilation and with a variation of the time metric g00 in GR, similarly as for the gravitational redshift. Therefore, the standard FLRW metric must be substituted by the conformal FLRW metric that predicts the cosmic time dilation and the cosmological redshift properly. Consequently, the cosmic time should be identified with the conformal time and the space-time evolution of the Universe should be described by the conformal FLRW metric only.

Obviously, we can ask a question: why atoms radiate photons with the same (rest-frame) frequency at all redshifts and why this frequency is not affected by time dilation? The answer is straightforward: the frequency of emitted photons is independent of redshift, because it depends on quantized energy levels of electrons in atoms and these energy levels are redshift independent. Once the photon is emitted, its frequency decreases due to time dilation when photon propagates along the ray path from the emitter to the receiver. Since the comoving speed of light is constant, the proper speed of light must be variable. In this way, the emitted photons with frequency ν have shorter proper wavelengths at high redshift than the photons with the same frequency ν but emitted at the present epoch.

So... if expansion stops at the end of the SN, it will be observed as time dilated. I don't think anyone will dispute that.

But what about the photons? Do they redshift or not?
 