• Quick note - the problem with YouTube videos not embedding on the forum appears to have been fixed, thanks to ZiprHead. If you do still see problems let me know.

Evidence against concordance cosmology

I can do a bit of Eric's job for him again. I know a bit about the CMB alignments. There's a survey of them here (including things I've heard of and things new to me.)

http://arxiv.org/abs/1510.07929

(Note that the paper includes an overview, which I encourage Lerner not to skip, of the many, many CMB statistics that have *tested and confirmed* LCDM predictions.) But it's true: among the hundreds of aspects of the CMB that precisely match the basic LCDM hypothesis, there are a few that appear somewhat unlikely (at the 2% to 0.1% level) to have arisen in a random sampling of LCDM primordial fluctuations. The paper overviews ideas for new hypotheses where the initial conditions are different.

Note that non-LCDM cosmologies (plasma, steady-state, etc.) have not yet come up with a proposal in which a roughly uniform blackbody background exists at all, much less one where this background has isentropic fluctuations at the 10^-5 level, much less one where the fluctuation angular power shows a damped acoustic-wave-like spectrum, much less etc. etc. etc..

Some of the authors of the above are also on this paper:

http://arxiv.org/abs/1512.05356

which is like the bizarro-world version of the Crisis In Cosmology Conference---this is what it sounds like when people who actually understand LCDM look for opportunities to break it. Interesting reading.
 
One thing I am struggling to understand, with Lerner's "tired light" cosmology:

We know that light from distant, redshifted supernovae is spread out over time. A supernova at z=0 with a monthlong outburst? If you see a spectrally-identical event at z=1, it has a two-month-long outburst.

(In fact, they're observed to spread out over time by exactly the same time stretch that appears in individual photon frequencies. In LCDM that is an obvious predicted effect.)

If I want to write down a tired-light model, I understand the tired-light energy loss on an individual photon level: "The source emitted a 6 eV photon but it only had 3 eV left when it arrived, so the detected energy is E0/(1+z)". (ETA: what I mean is, I understand that this is what I'm supposed to be modeling. I do not understand it as the plausible outcome of some new microphysics.) But if I'm looking at a whole supernova---well, let's compare.

In LCDM, identical supernovae have identical emitted-photon counts. If there's an event emitting N photons, each at energy E0 at a distance D and redshift z, I detect N/D^2 of those photons but they're each at E0/(1+z).

In tired-light, I'm not sure. It is an observational fact that high-z supernova photons are detected for a longer period of time.

a) Do you think that's a real effect ("those really were longer-lasting supernovae by a factor of (1+z)", or more generally "longer by some factor we can parameterize in z, for which 1+z is the best fit")? In that case, the total emitted photon count was N*(1+z), the photon energies are E0/(1+z). The supernova's bolometric luminosity is then N/D^2 E0/(1+z), but its bolometric fluence is N/D^2. (Note that a non-bursty standard candle just has luminosity N/D^2 E0/(1+z) in this case, magically matching the supernova.)

b) Do you think that "new tired-light physics" puts variable time-delays on the photons? So the photons are emitted over time T, but have their arrivals spread out to T*(1+z)? (Or, sorry, "have their arrivals spread out by some unknown z-dependent factor we'll have to fit to the data") In that case the bolometric luminosity is N/D^2 E0/(1+z)^2 while the bolometric fluence is N/D^2 E0/(1+z). Note that a time-independent candle isn't dimmed by mere timeshifting, so its bolometric luminosity is N/D^2 E0/(1+z) with just the photon-energy-loss correction.

c) If we're putting in a time-shift effect in empty space (as in answer b) surely this is experienced independently photon-by-photon. An individual photon has no way of knowing whether it's from the early edge of a supernova or the late edge. An actual "dT --> dT(1+z)" factor sounds highly implausible in this sense; surely any real photon-delays-in-empty-space would be a time-smearing (with *sigma* proportional to z, maybe) rather than a coherent stretch. Please show how a tired-light model's time smearing is parameterized, and show whether you've found any parameter choices that agree with the data.

(Maybe you don't want to do this yet, but we *have* to decide how the photon-arrival-time stretching works, because until you know that you don't know how to treat supernovae as standard candles.)

d) If we're allowed to invent new physics allowing perfectly-achromatic, perfectly-collinear "redshifting" at all frequencies, surely we have lost some confidence in any of our other priors about photon propagation through free space. Please document the set of different assumptions, hypotheses, and constraints that you've considered for the "tired light" behavior itself.
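To make the flux/fluence bookkeeping in options (a) and (b) above explicit, here's a throwaway sketch (mine; schematic units, 4*pi's and absolute normalizations dropped). It just restates the scalings, it isn't anyone's published model:

Code:
# Detected bolometric flux and fluence of a "standard" burst of N photons, each emitted
# at energy E0 over rest-frame duration T, seen at Euclidean distance D and redshift z,
# under the two tired-light options above. Units are schematic.

def option_a(N, E0, T, D, z):
    """(a) The burst really did last longer: N*(1+z) photons emitted over T*(1+z)."""
    n_photons = N * (1 + z)
    t_obs = T * (1 + z)
    e_photon = E0 / (1 + z)                        # each photon arrives redshifted
    flux = n_photons * e_photon / (t_obs * D**2)   # energy per unit time per unit area
    fluence = n_photons * e_photon / D**2          # time-integrated energy per unit area
    return flux, fluence

def option_b(N, E0, T, D, z):
    """(b) N photons emitted over T, arrivals stretched to T*(1+z) in transit."""
    t_obs = T * (1 + z)
    e_photon = E0 / (1 + z)
    flux = N * e_photon / (t_obs * D**2)
    fluence = N * e_photon / D**2
    return flux, fluence

N, E0, T, D = 1.0, 1.0, 1.0, 1.0   # normalize everything; only the z-scaling matters
for z in (0.0, 0.5, 1.0, 2.0):
    fa, Fa = option_a(N, E0, T, D, z)
    fb, Fb = option_b(N, E0, T, D, z)
    print(f"z={z}: (a) flux {fa:.3f}, fluence {Fa:.3f}   (b) flux {fb:.3f}, fluence {Fb:.3f}")
# (a): flux ~ 1/(1+z), fluence constant.  (b): flux ~ 1/(1+z)^2, fluence ~ 1/(1+z).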

:popcorn1
 
So, tired light..........anybody think that this hasn't been debunked years ago? I am really struggling to see the point of this. Nobody takes it seriously. Why should we start again now?
Is there some new evidence that I am not aware of? Or are we just going around in the same unevidenced PC circles again?
I linked to Brian Koberlein's rather dismissive articles on this nonsense, way back in the thread. Any chance that those objections have actually been overcome by the tired light brigade? Any chance they could post some evidence?
Frankly, it's just nonsense. Hence why nobody takes it seriously. Very silly.
 
ben m said:
I can do a bit of Eric's job for him again. I know a bit about the CMB alignments. There's a survey of them here (including things I've heard of and things new to me.)

http://arxiv.org/abs/1510.07929

(Note that the paper includes an overview, which I encourage Lerner not to skip, of the many, many CMB statistics that have *tested and confirmed* LCDM predictions.) But it's true: among the hundreds of aspects of the CMB that precisely match the basic LCDM hypothesis, there are a few that appear somewhat unlikely (at the 2% to 0.1% level) to have arisen in a random sampling of LCDM primordial fluctuations. The paper overviews ideas for new hypotheses where the initial conditions are different.

Note that non-LCDM cosmologies (plasma, steady-state, etc.) have not yet come up with a proposal in which a roughly uniform blackbody background exists at all, much less one where this background has isentropic fluctuations at the 10^-5 level, much less one where the fluctuation angular power shows a damped acoustic-wave-like spectrum, much less etc. etc. etc..

Some of the authors of the above are also on this paper:

http://arxiv.org/abs/1512.05356

which is like the bizarro-world version of the Crisis In Cosmology Conference---this is what it sounds like when people who actually understand LCDM look for opportunities to break it. Interesting reading.
These are great, thanks for posting them ben m! :thumbsup:

The second one, Bull+ (2015) "Beyond ΛCDM: Problems, solutions, and the road ahead" is particularly interesting. Readers be warned though, it's 97 pages long, has 517 references (!), and assumes the reader is pretty familiar with quite a lot of physics and astronomy.
 
Here's what I found:

<snip>

JeanTate said:
CBR alignments (etc): (no refs)
<snip>

Nothing for the former, obviously.

<snip>
ben m said:
I can do a bit of Eric's job for him again. I know a bit about the CMB alignments. There's a survey of them here (including things I've heard of and things new to me.)

http://arxiv.org/abs/1510.07929

(Note that the paper includes an overview, which I encourage Lerner not to skip, of the many, many CMB statistics that have *tested and confirmed* LCDM predictions.) But it's true: among the hundreds of aspects of the CMB that precisely match the basic LCDM hypothesis, there are a few that appear somewhat unlikely (at the 2% to 0.1% level) to have arisen in a random sampling of LCDM primordial fluctuations. The paper overviews ideas for new hypotheses where the initial conditions are different.

Note that non-LCDM cosmologies (plasma, steady-state, etc.) have not yet come up with a proposal in which a roughly uniform blackbody background exists at all, much less one where this background has isentropic fluctuations at the 10^-5 level, much less one where the fluctuation angular power shows a damped acoustic-wave-like spectrum, much less etc. etc. etc..
The paper ben m cites is Schwarz+ (2015) "CMB Anomalies after Planck". Figure 4 in this seems to be ~the same as that in Eric L's presentation.

The caption for Figure 4 is:
The combined quadrupole-octopole map from the Planck 2013 release [33]. The multipole vectors (v) of the quadrupole (red) and for the octopole (black), as well as their corresponding area vectors (a) are shown. The effect of the correction for the kinetic quadrupole is shown as well, but just for the angular momentum vector n̂_2, which moves towards the corresponding octopole angular momentum vector after correction for the understood kinematic effects.

Have I correctly identified the reference, Eric L?
 
So, Eric, a summary of your talk looks like:

a) You took a tired-light theory that you haven't actually written down in any way. One of its predictions is vaguely consistent with your implementation of the Tolman surface brightness test. The error bars are large.

b) You don't know enough about the 20th-century hypothesis-testing machinery to quantify that "consistency" claim. You don't know enough about mainstream structure-formation theory to make a convincing claim of consistency/inconsistency with LCDM, so the best you have is an (invalid) parameter-counting/Occam's Razor argument. You don't know enough about your own tired light hypothesis to perform cross-checks with non-Tolman observables.

c) Your Occam's Razor argument is itself sort of hermetically sealed---you can only state it with a straight face if you pretend that the Hubble curve is the only evidence for LCDM's otherwise-unconstrained new physics, and if you pretend that tired light is not otherwise-unconstrained new physics. Your talk apparently gestures towards the existence of a parameter-counting argument against LCDM, but there is no such argument because LCDM is very highly overconstrained.

d) You also did some ArXiV mining for plots that you thought discredited LCDM, and in several cases you deluded yourself severely about BBN and large-scale-structure issues. This perhaps costs you credibility, don't you think? When someone has cried wolf over and over, saying "X is disproven! X is disproven!", it suggests that they have some sort of kneejerk dislike of the hypothesis, rather than that they're a good objective judge of its merits. Such a person has no credibility to make claims like "X isn't as parsimonious a hypothesis as we'd prefer!"

e) The above is apparently the best anti-LCDM argument available after 30 years of trying to construct a plasma-cosmology alternative.

f) Also, some mainstream cosmologists are worried about 7Li abundances and about the possibility that we're seeing an unlikely realization of an LCDM CMB. You're right! We are already on the case on those, Eric. Maybe it's new physics. Maybe it's radical new physics. We'll learn more by constructing hypotheses and testing them against the best and largest datasets---just like we've always done.
 
Sure.

Note that I have not tried to match figures (etc) to what's in the first part of your presentation; I assume it's all in your "UV Tolman" paper (my shorthand).

<snip>

I've now tried; here's what I found:

Lerner+ (2014) "UV surface brightness of galaxies from the local universe to z ~ 5" is behind a paywall. However, there's an arXiv preprint; here's the abstract:

Lerner+ (2014) said:
The Tolman test for surface brightness dimming was originally proposed as a test for the expansion of the Universe. The test, which is independent of the details of the assumed cosmology, is based on comparisons of the surface brightness (SB) of identical objects at different cosmological distances. Claims have been made that the Tolman test provides compelling evidence against a static model for the Universe. In this paper we reconsider this subject by adopting a static Euclidean Universe with a linear Hubble relation at all z (which is not the standard Einstein-de Sitter model), resulting in a relation between flux and luminosity that is virtually indistinguishable from the one used for LCDM models. Based on the analysis of the UV surface brightness of luminous disk galaxies from HUDF and GALEX datasets, reaching from the local Universe to z ~ 5, we show that the surface brightness remains constant as expected in a SEU.
A re-analysis of previously-published data used for the Tolman test at lower redshift, when treated within the same framework, confirms the results of the present analysis by extending our claim to elliptical galaxies. We conclude that available observations of galactic SB are consistent with a static Euclidean model of the Universe.
We do not claim that the consistency of the adopted model with SB data is sufficient by itself to confirm what would be a radical transformation in our understanding of the cosmos. However, we believe this result is more than sufficient reason to examine further this combination of hypotheses.

In Eric L's presentation, "Static vs Expanding SNIa - Both Fit No Difference - amazing coincidence?" seems to be Figure 2 in the paper; here's the caption:

Lerner+ (2014) said:
Figure 2
Superposed to the models are data for supernovae type Ia from the gold sample as defined in Riess et al. [8] (pluses), and the supernovae legacy survey [9] (crosses). The assumed absolute magnitude of the supernovae is M = -19.25. The two lines (SEU model with solid line and ΛCDM concordance cosmology as dashed line) are nearly identical over the entire redshift range, differing at no point by more than 0.15 mag and in most of the region by less than 0.05 mag.

"Mean SB is Constant" seems to be Figure 4 (though it is B&W in the paper, with squares for the blue circles); here's the caption:

Figure 4. The difference in mean SB (Δμ = μ_HUDF - μ_GALEX) between the HUDF and GALEX members of each pair of matched samples plotted against the mean redshift of the HUDF samples (filled circles: NUV dataset, filled squares: FUV dataset). Results are consistent with no change in SB with z. Error bars are 1-sigma statistical errors.

"If we include unresolved galaxies, Median SB is constant" seems to be Figure 5; here's the caption:

Figure 5. The difference in median SB (taking into account unresolved galaxies) between the HUDF and GALEX members of each pair of matched samples is plotted against the mean z of the HUDF sample (filled circles NUV dataset, filled squares: FUV dataset). As with the mean SB, results are consistent with no change in SB with z. Error bars are one-sigma statistical errors

"Observational limits do not bias results" seems to be Figure 6; here's the caption:

Figure 6. Log relative frequency of galaxies are plotted against SB for the selected HUDF sample with -17.5<M<-19 (squares) and with -16<M-17 (triangles). The dimmer galaxies on the right show the effect of the limits of SB visibility is significant only for galaxies dimmer than 28.5 mag/arcsec2 which does not affect the distribution of the galaxies in the sample. The sample galaxies on the left show a similar cutoff due to the smallest, highest SB galaxies being unresolved. In both cases the curves are Gaussian fits to the non-cutoff sections of the distributions.

"-16<M-17" seems to be a typo; I think it's meant to be "-16<M<-17".

There are ~five figures in the presentation with the same title, "New: Size - expanding vs non-expanding". None seem to be in the paper.

8 is "Riess A. G., Strolger L. G., Torny J. et al., ApJ 607, (2004) 665" (per ADS, Riess+ (2004) "Type Ia Supernova Discoveries at z > 1 from the Hubble Space Telescope: Evidence for Past Deceleration and Constraints on Dark Energy Evolution")
9 is "Astier P., Guy J, Regnault N, et al., A&A 447, (2006) 31" (per ADS, Astier+ (2006) "The Supernova Legacy Survey: measurement of ΩM, ΩΛ and w from the first year data set")

I am a bit puzzled that no SNIa data after 2006 is included in Figure 2, and that no statistical tests seem to have been done on the two fits (I have not yet checked the text in the paper, re this Figure).
 
JeanTate said:
<SNIP>
I am a bit puzzled that no SNIa data after 2006 is included in Figure 2, and that no statistical tests seem to have been done on the two fits (I have not yet checked the text in the paper, re this Figure).

I assume you may be referring to this (possibly amongst others):
TIME DILATION IN TYPE Ia SUPERNOVA SPECTRA AT HIGH REDSHIFT
Blondin et al, (2008): http://arxiv.org/pdf/0804.3595v1.pdf
Abstract (bolding mine):
"We present multiepoch spectra of 13 high-redshift Type Ia supernovae (SNeIa) drawn from the literature, the ESSENCE and SNLS projects, and our own separate dedicated program on the ESO Very Large Telescope. We use the Supernova Identification (SNID) code of Blondin & Tonry to determine the spectral ages in the supernova rest frame. Comparison with the observed elapsed time yields an apparent aging rate consistent with the 1/(1 +z) factor (where z is the redshift) expected in a homogeneous, isotropic, expanding universe. These measurements thus confirm the expansion hypothesis, while unambiguously excluding models that predict no time dilation, such as Zwicky’s “tired light” hypothesis. We also test for power-law dependencies of the aging rate on redshift. The best-fit exponent for these models is consistent with the expected 1/(1 +z) factor."
 
JeanTate said:
I am a bit puzzled that no SNIa data after 2006 is included in Figure 2, and that no statistical tests seem to have been done on the two fits (I have not yet checked the text in the paper, re this Figure).

I was initially puzzled by that too. (I mean, the inclusion of this figure in the paper (with almost no accompanying discussion in the text) is bizarre to begin with, but why wouldn't a 2014 paper use, e.g., the Union2.1 dataset?) In terms of redshift coverage, this is not too big a deal, as a large fraction of our highest-redshift supernova catalogue is already in Lerner's catalogue. (On the other hand, our understanding of metallicity/spectra keeps improving, so the later Union catalogue would be expected to have better magnitude measurements.)
 
<snip>
Li declines with Fe, < 0.03 BBN prediction: Sbordone+ (2012), Hansen+ (2015)

Sbordone+ (2012): "Lithium abundances in extremely metal-poor turn-off stars". The figure in Eric L's presentation seems to be ~the same as Figure 4 in this paper.

<snip>
Here is the caption for Figure 4:
Sbordone+ (2012) said:
Fig. 4. Like in Fig. 1 but now including the two ultra metal poor stars HE 1327-2326 and SDSS J102915+172927 (open and filled large circles, respectively). Symbols for the other stars are the same as in Fig 1.

And Figure 1:

Fig. 1. A(Li) vs. [Fe/H] for various samples. Filled black circles, Bonifacio et al. (2012); open black circlessbordone10; green filled triangles Aoki et al. (2009); red open triangles Hosford et al. (2009); magenta stars, the two components of the binary star CS 22876-032 (González Hernández et al. (2008), blue open squares Asplund et al. (2006). Points with a downward arrow indicate upper limits. The grey horizontal line indicates the current estimated primordial Li abundance based on WMAP results.

"circlessbordone10" is a typo; it should be (I think) "circles Sbordone et al. (20100"

Bonifacio+ (2012) "Chemical abundances of distant extremely metal-poor unevolved stars"

Sbordone+ (2010) "The metal-poor end of the Spite plateau. I. Stellar parameters, metallicities, and lithium abundances"

Aoki+ (2009) "Lithium Abundances of Extremely Metal-Poor Turnoff Stars"

Hosford+ (2009) "Lithium abundances of halo dwarfs based on excitation temperature. I. Local thermodynamic equilibrium"

González Hernández+ (2008) "First stars XI. Chemical composition of the extremely metal-poor dwarfs in the binary CS 22876-032"

Asplund+ (2006) "Lithium Isotopic Abundances in Metal-poor Halo Stars"
 
The figure in Eric L's presentation seems to be Portinari, Casagrande, Flynn (2010)'s Figure 5.

This refers to "He also far too low in local stars", Portinari, Casagrande, Flynn (2010): "Revisiting ΔY/ΔZ from multiple main sequences in globular clusters: insight from nearby stars"

The caption for Figure 5 reads:

Portinari said:
Figure 5. Metal versus helium mass fraction for nearby field dwarfs. Lines with two values of helium-to-metal enrichment ratio (break at Z = 0.015) are plotted to guide the eye. Left-hand panel: using the isochrone fitting procedure described in Casagrande et al. (2007). Right-hand panel: using numerical homology relations (see Section 4).

Casagrande+ (2007) "The helium abundance and ΔY/ΔZ in lower main-sequence stars"

The paper itself is behind a paywall; however, there is an arXiv preprint. The figure in Eric L's presentation is ~the same as Karachentsev (2012)'s Figure 4.


This refers to "LCDM predicts 3x too much DM" Karachentsev (2012) "Missing dark matter in the local universe"

The caption for Figure 4 reads:

Karachentsev (2012) said:
Figure 4.
The average density of matter in the spheres of different radii (the stepped line). The squares and triangles mark the contribution of pairs, triplets, and groups of galaxies.

(to be continued)

The figure in Eric L's presentation seems to be Figure 2 in Clowes+ (2013), not anything in Clowes+ (2012).

This refers to ">200 Mpc LSS takes far too long to form for BB" Clowes+ (2013) "A structure in the early Universe at z ˜1.3 that exceeds the homogeneity scale of the R-W concordance cosmology"

The caption for Figure 2 reads:

Clowes+ (2013) said:
Figure 2. Snapshot from a visualization of both the new, Huge-LQG, and the CCLQG. The scales shown on the cuboid are proper sizes (Mpc) at the present epoch. The tick marks represent intervals of 200 Mpc. The Huge-LQG appears as the upper LQG. For comparison, the members of both are shown as spheres of radius 33.0 Mpc (half of the mean linkage for the Huge-LQG; the value for the CCLQG is 38.8 Mpc). For the Huge-LQG, note the dense, clumpy part followed by a change in orientation and a more filamentary part. The Huge-LQG and the CCLQG appear to be distinct entities.

I will add relevant details to "Evidence indicates scattering/abs of RF radiation in local universe: (Atrophys & SS, 1993)" when Eric L provides the reference. Ditto "Disney? (voiceover, not slide)".
 
ben m said:
So, Eric, a summary of your talk looks like:

a) You took a tired-light theory that you haven't actually written down in any way. One of its predictions is vaguely consistent with your implementation of the Tolman surface brightness test. The error bars are large.

b) You don't know enough about the 20th-century hypothesis-testing machinery to quantify that "consistency" claim. You don't know enough about mainstream structure-formation theory to make a convincing claim of consistency/inconsistency with LCDM, so the best you have is an (invalid) parameter-counting/Occam's Razor argument. You don't know enough about your own tired light hypothesis to perform cross-checks with non-Tolman observables.

c) Your Occam's Razor argument is itself sort of hermetically sealed---you can only state it with a straight face if you pretend that the Hubble curve is the only evidence for LCDM's otherwise-unconstrained new physics, and if you pretend that tired light is not otherwise-unconstrained new physics. Your talk apparently gestures towards the existence of a parameter-counting argument against LCDM, but there is no such argument because LCDM is very highly overconstrained.

d) You also did some ArXiV mining for plots that you thought discredited LCDM, and in several cases you deluded yourself severely about BBN and large-scale-structure issues. This perhaps costs you credibility, don't you think? When someone has cried wolf over and over, saying "X is disproven! X is disproven!", it suggests that they have some sort of kneejerk dislike of the hypothesis, rather than that they're a good objective judge of its merits. Such a person has no credibility to make claims like "X isn't as parsimonious a hypothesis as we'd prefer!"

e) The above is apparently the best anti-LCDM argument available after 30 years of trying to construct a plasma-cosmology alternative.

f) Also, some mainstream cosmologists are worried about 7Li abundances and about the possibility that we're seeing an unlikely realization of an LCDM CMB. You're right! We are already on the case on those, Eric. Maybe it's new physics. Maybe it's radical new physics. We'll learn more by constructing hypotheses and testing them against the best and largest datasets---just like we've always done.

I'm not yet at the point ben m was, a few days ago.

However, the "He also far too low in local stars", "LCDM predicts 3x too much DM", and ">200 Mpc LSS takes far too long to form for BB" seem very poorly researched, with highly relevant papers not mentioned.

"CBR alignments (etc)" is not so much poorly researched as cherry-picking; it is far from settled that the anomalies are more than just statistical flukes.

"Li declines with Fe, < 0.03 BBN prediction" refers to a well-known apparent mismatch between LCDM models (the BBN part, in this case) and observation. However, Eric L's presentation fails to mention the rich history on this.

Next: I'll take a deeper look at the first ~half of the presentation, which covers Eric L's recent "Tolman UV" paper (my shorthand). And then will likely look at the old Lerner paper cited in the second part of the presentation.
 
JeanTate said:
<SNIP>
I am a bit puzzled that no SNIa data after 2006 is included in Figure 2, and that no statistical tests seem to have been done on the two fits (I have not yet checked the text in the paper, re this Figure).
I assume you may be referring to this (possibly amongst others):
TIME DILATION IN TYPE Ia SUPERNOVA SPECTRA AT HIGH REDSHIFT
Blondin et al, (2008): http://arxiv.org/pdf/0804.3595v1.pdf
Abstract (bolding mine):
"We present multiepoch spectra of 13 high-redshift Type Ia supernovae (SNeIa) drawn from the literature, the ESSENCE and SNLS projects, and our own separate dedicated program on the ESO Very Large Telescope. We use the Supernova Identification (SNID) code of Blondin & Tonry to determine the spectral ages in the supernova rest frame. Comparison with the observed elapsed time yields an apparent aging rate consistent with the 1/(1 +z) factor (where z is the redshift) expected in a homogeneous, isotropic, expanding universe. These measurements thus confirm the expansion hypothesis, while unambiguously excluding models that predict no time dilation, such as Zwicky’s “tired light” hypothesis. We also test for power-law dependencies of the aging rate on redshift. The best-fit exponent for these models is consistent with the expected 1/(1 +z) factor."
JeanTate said:
I am a bit puzzled that no SNIa data after 2006 is included in Figure 2, and that no statistical tests seem to have been done on the two fits (I have not yet checked the text in the paper, re this Figure).
I was initially puzzled by that too. (I mean, the inclusion of this figure in the paper (with almost no accompanying discussion in the text) is bizarre to begin with, but why wouldn't a 2014 paper use, e.g., the Union2.1 dataset?) In terms of redshift coverage, this is not too big a deal, as a large fraction of our highest-redshift supernova catalogue is already in Lerner's catalogue. (On the other hand, our understanding of metallicity/spectra keeps improving, so the later Union catalogue would be expected to have better magnitude measurements.)

I did not have any particular paper or SNIa dataset in mind.

Take "Astier P., Guy J, Regnault N, et al., A&A 447, (2006) 31" (per ADS, Astier+ (2006) "The Supernova Legacy Survey: measurement of ΩM, ΩΛ and w from the first year data set") for example. In ADS it has been cited 1856 times! Sure, many of those cites are not papers, and many are after Lerner et al. were writing their 2014 paper. And many do not report new SNIa observations or analyses (or any kind of supernovae ones).

However, there are many papers reporting new (SNIa) observations and important new analyses. Why did Lerner et al. not include at least the bigger and most recent of the SNIa datasets in their Figure 2?

Some examples (an entirely random selection!): Gal-Yam+ (2013) "Supernova Discoveries 2010--2011: Statistics and Trends"; Feindt+ (2013) "Measuring cosmic bulk flows with Type Ia supernovae from the Nearby Supernova Factory"; "The Fundamental Metallicity Relation Reduces Type Ia SN Hubble Residuals More than Host Mass Alone".
 
I did not have any particular paper or SNIa dataset in mind.

No, I didn't in particular, but that one was quoted in Ned Wright's debunking of tired light:
http://www.astro.ucla.edu/~wright/tiredlit.htm

I am far from an expert in this particular field (my background is planetary science, with a bit of archaeology and palaeoanthropology thrown in), but the little I have followed of this over the years says that this whole 'tired light' concept is ruled out at very high confidence levels.
So from an outsider looking in, it would seem that it would take an absolutely momentous discovery of how they are right, or everybody else is wrong, to overturn the existing data.
From what I can tell, the SN1a time dilation data kills it stone dead.
I suppose what I'm saying is; is the whole of current cosmology about to be turned on its head, or is this just more of the same from the PC brigade? i.e. not worth losing any sleep over?
 
So I've started reading Lerner+ (2014), and have some questions. Yes, some are pretty straight-forward, and yes, I can probably find the answers myself, with enough time and effort ... however, if any reader has the answers at their finger-tips ...

From the Introduction:

Lerner+ (2014) said:
In fact, in any expanding cosmology, the SB is expected to decrease very rapidly, being proportional to (1+z)^-4, where z is the redshift and where SB is measured in the bolometric units (VEGA-magnitudes/arcsec^2 or erg sec^-1 cm^-2 arcsec^-2). One factor of (1+z) is due to time-dilation (decrease in photons per unit time), one factor is from the decrease in energy carried by photons, and the other two factors are due to the object being closer to us by a factor of (1+z) at the time the light was emitted and thus having a larger apparent angular size. (If AB magnitudes or flux densities are used, the dimming is by a factor of (1+z)^3, while for space telescope magnitudes or flux per wavelength units, the dimming is by a factor of (z+1)^5). By contrast, in a static (non expanding) Universe, where the redshift is due to some physical process other than expansion (e.g., light-aging), the SB is expected to dim only by a factor (1+z), or be strictly constant when AB magnitudes are used.

Is there an easily accessible, 'for dummies' explanation of all these relationships?
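Partly answering my own first question, here is how I understand the bookkeeping (a sketch only, my own accounting rather than anything from the paper; corrections welcome):

Code:
# Relative to a static case: the photon arrival rate drops by (1+z) (time dilation),
# each photon's energy drops by (1+z) (redshift), and the object subtends a larger
# solid angle by (1+z)^2, so bolometric SB scales as (1+z)^-4. Per unit *frequency*
# (AB-like) the stretched bandwidth gives back one factor, (1+z)^-3; per unit
# *wavelength* (ST-like) it costs one more, (1+z)^-5.

def sb_dimming_factor(z, convention="bolometric"):
    base = (1 + z) ** -4                  # rate x energy x solid angle^2
    if convention == "bolometric":
        return base
    if convention == "per_frequency":     # AB-style f_nu surface brightness
        return base * (1 + z)
    if convention == "per_wavelength":    # ST-style f_lambda surface brightness
        return base / (1 + z)
    raise ValueError(convention)

for z in (0.5, 1.0, 3.0, 5.0):
    print(z, [round(sb_dimming_factor(z, c), 5)
              for c in ("bolometric", "per_frequency", "per_wavelength")])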

Also, are the first lot true "in any expanding cosmology" (my bold)? Or are at least some only true for GR-based (or equivalent) expanding cosmologies?

Too, in all SnEUs (static, non-expanding universes ETA: a.k.a. SEU, static Euclidean universe), "where the redshift is due to some physical process other than expansion" are the last two relationships true, no matter what that "physical process" is?
 
This is a "best practice" question.

Lerner+ (2014) said:
In the last few decades the use of modern ground-based and space-based facilities have provided a huge amount of high quality data for the high-z Universe. The picture emerging from these data indicates that galaxies evolve over cosmic time.
Lerner+ (2014) have no references for this.

On the one hand, I guess many readers would be at least somewhat familiar with the literature on this; on the other hand, this paper was published in "International Journal of Modern Physics D", and I expect only a few readers would be familiar with the relevant literature.

From my reading of astronomy papers, I would expect to see a list of relevant references, if only "e.g. X, Y, and Z, and references therein".

What do others think?
 
<snip>

Too, in all SnEUs (static, non-expanding universes ETA: a.k.a. SEU, static Euclidean universe), "where the redshift is due to some physical process other than expansion" are the last two relationships true, no matter what that "physical process" is?

Answering my own question, No.

For example, in a process/mechanism that is hot among EU acolytes ("plasma redshift"), the scattering will produce blurring, which will certainly affect the apparent SB! Though by how much, and with what functional form (i.e. dependence on z), it is, I think, impossible to say (needless to say, no EU groupie has ever published any estimates of this effect).

And any redshift due to some physical process/mechanism which involves scattering will also produce some effect on SB, right?
 
Once again I urge Lerner to clarify (ideally with a published reference) what his tired-light model actually says.

I am trying to construct the Hubble curve for such a model and---well, let's see. Putting in an exponential scale length for photon energy loss, and assuming that the "magnitude" measured by astronomers (which is a flux, not a fluence) is the standard-candle property---as far as I can tell the redshift vs. distance-modulus relation needs to be of the form:

D = (c/H) sqrt(1+z) ln(1+z)

OK, so the question cosmologists ask about the Hubble curve is "what is the shape of the curve?" A polynomial approximation that nicely captures the cosmology is:

D = c/H (z + (1-q)/2 z^2 + ...)

where, in standard cosmology, "q" is a simple sum over the energy content of the Universe, accounting for different equations-of-state.
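(Concretely, as a textbook gloss: q = (1/2) Sum_i Omega_i (1 + 3 w_i), so for example a flat universe with Omega_m = 0.3 and Omega_lambda = 0.7 has q = 0.15 - 0.7 = -0.55, while a matter-only Omega_m = 1 universe has q = +0.5.)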

Tired light theory looks like it corresponds to basically q=+1. That's closest, it turns out, to the prediction of standard cosmology when Omega_lambda = 1 and Omega_m = 0 ... i.e. a flat universe containing dark energy and no matter at all. So, yes, standard cosmology *has* done that fit, and quantified it using standard statistical hypothesis testing. You can see the Omega_lambda = 1 Omega_m = 0 point on this two-axis plot:

http://supernova.lbl.gov/Union/figures/Union2.1_Om-Ol_slide.pdf

and it's ruled out; it's nowhere near the 68% confidence region, nor the 95%, nor the 99.7% confidence regions.

I repeat, if you think that I've done the tired-light prediction incorrectly (this is, after all, just a forum post) you are welcome to point me to a reliable non-YouTube source that does the math.
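Here's one way to check the q=+1 statement numerically (a throwaway sanity check, nothing rigorous, using the exponential-loss distance relation written above):

Code:
import numpy as np

# Expand the tired-light relation D(z) = sqrt(1+z) * ln(1+z) (in units of c/H) at small z
# and read off the quadratic coefficient; in the parameterization D = z + (1-q)/2 z^2 + ...
# that coefficient gives the effective deceleration parameter q.

def d_tired(z):
    return np.sqrt(1 + z) * np.log(1 + z)

z = np.linspace(1e-4, 0.05, 200)          # small-z regime where the expansion applies
c3, c2, c1, c0 = np.polyfit(z, d_tired(z), 3)
q_eff = 1 - 2 * c2
print(f"quadratic coefficient ~ {c2:.5f}, so q_eff ~ {q_eff:.3f}")   # expect ~0 and ~+1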
 
magnitude systems used by astronomers

In the Introduction section, Lerner+ (2014) mentions both Vega ("VEGA-magnitudes/arcsec^2 or erg sec^-1 cm^-2 arcsec^-2") and AB ("If AB magnitudes or flux densities are used") magnitude systems.

Some readers may be unfamiliar with these; this webpage has an explanation (there are plenty of others, of course). Note that the Vega system is called the "Johnson System" on that page.

A key aspect of both systems is the colors (my bold):
  • Vega: "This system is defined such that the star Alpha Lyr (Vega) has V=0.03 and all colors equal to zero."
  • AB: "This magnitude system is defined such that, when monochromatic flux f_nu is measured in erg sec^-1 cm^-2 Hz^-1,
    m(AB) = -2.5 log(f_nu) - 48.60​
    where the value of the constant is selected to define m(AB)=V for a flat-spectrum source. In this system, an object with constant flux per unit frequency interval has zero color."
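As a concrete example of the AB zero point quoted above (my own check, not from that page):

Code:
import math

# A flat-spectrum source with f_nu = 3.631e-20 erg s^-1 cm^-2 Hz^-1 (i.e. 3631 Jy)
# should come out at m(AB) = 0 by the definition quoted above.
f_nu = 3.631e-20
m_AB = -2.5 * math.log10(f_nu) - 48.60
print(m_AB)   # ~0.00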

How clearly does Lerner+ (2014) make the distinctions?

"where SB is measured in the bolometric units (VEGA-magnitudes/arcsec−2 or erg sec−1 cm−2 arcsec−2": this is at least a little bit misleading ... "bolometric units" are indeed power per unit area (erg sec−1 cm−2 in this case; the arcsec−2 converts this to surface brightness), but the Vega system is no different from the AB one in this regard ... "bolometric" refers an integration over all wavelengths/frequencies, to get the total electromagnetic fluxnote1
"If AB magnitudes or flux densities are used, the dimming is by a factor of (1+z)3, while for space telescope magnitudes or flux per wavelength units, the dimming is by a factor of (z+1)5"

(to be continued)

Aside from the possible confusion re "flux" and "flux density" (see note 1), there's the lack of reference for "space telescope magnitudes or flux per wavelength units". Presumably "space telescope" refers to the Hubble Space Telescope. But what camera, or cameras, is it referring to?

The default for WFPC2 is "Johnson Visual magnitudes", which is indeed flux per wavelength unit (source). However, for ACS three different systems are used, ST, AB, and VEGA (source).

Does any of this possible confusion matter, in terms of what's in the guts of the paper? I don't know, but I'll certainly be looking out for any such.

Note 1: You have to be extra careful with the terms "flux" and "flux density"; they are not defined consistently across all branches of physics, and even within astronomy you may come across different definitions. WP has a good discussion.
 
In Section 2 ("The adopted cosmology"):

Lerner+ (2014) said:
Since the SB of galaxies is strongly correlated with the intrinsic luminosity, for a correct implementation of the Tolman test it is necessary to select samples of galaxies at different redshifts from populations that have on average the same intrinsic luminosity.

No reference given, and I for one would really like to see evidence for "the SB of galaxies is strongly correlated with the intrinsic luminosity"! Presumably, the authors are referring to intrinsic luminosity in the UV/optical/NIR, but they don't say so. Ditto SB.

There's also a potential problem re "average" ... it depends on which average is used, and what the SB-intrinsic luminosity relationship is.

Lerner+ (2014) said:
It should be noted that this cosmological model is not the Einstein-De Sitter static Universe often used in literature.

I guess one has to take the authors' word for it; again, it would be nice if there were some references.

Lerner+ (2014) said:
The choice of a linear relation is motivated by the fact that the flux-luminosity relation derived from this assumption is remarkably similar numerically to the one found in the concordance cosmology, the distance modulus being virtually the same in both cosmologies for all relevant redshifts. This is shown in Fig. 1 where the two relations are compared to each other [...]

Here's the caption for Figure 1:

Comparison of the distance modulus for Vega magnitudes for the adopted Euclidean non-expanding universe with linear Hubble relation cosmology and the concordance cosmology. Upper panel: The distance modulus (m-M) = 25 + 5Log(cz/H0) + 2.5Log(1+z), where H0 = 70 in km s^-1 Mpc^-1, as a function of the redshift z for an Euclidean Universe with d = cz/H0 (black line) compared to the one obtained from the concordance cosmology with Ωm = 0.26 and ΩΛ = 0.76 (red line). Middle panel: Ratio of the two distances (concordance/Euclidean). Lower panel: Distance modulus difference in magnitudes (concordance-Euclidean). This graph shows clearly the similarity of the two, making galaxy selection in luminosity model-independent.

The last phrase, "making galaxy selection in luminosity model-independent", is clearly not true, as the figure itself (and the text) shows.

and, in Fig. 2, to supernovae type Ia data. Up to redshift 7, the apparent magnitude predicted by the simple linear Hubble relation in a Static Euclidean Universe (SEU) is within 0.3 magnitude of the concordance cosmology prediction with ΩM = 0.26 and ΩΛ = 0.74. The fit to the actual supernovae data is statistically indistinguishable between the two formulae.

I find the inconsistency in terms - Ωm (Fig 1) vs ΩM (ref to Fig 2) to be not only annoying, but possibly indicative of sloppiness in the research itself. Combined with the - to me - amazing lack of references, it strongly suggests both lax standards re the research, and poor peer-review. The latter is perhaps not surprising, as the reviewers chosen by the Editor of "International Journal of Modern Physics D" were likely not experienced astronomers.

However, the last sentence - "The fit to the actual supernovae data is statistically indistinguishable between the two formulae" - is just too much ... if it's true, then a 'goodness of fit' statistic should have been quoted. Myself, I find it very hard to accept that it's true (more later).
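To see how big the difference actually is, here's a quick numerical sketch (mine, not from the paper) comparing the SEU distance modulus from the Figure 1 caption with a straightforward flat ΛCDM integration (I've used Ωm = 0.27 for the ΛCDM curve, close to the paper's 0.26):

Code:
import numpy as np

c_km_s, H0, Om = 299792.458, 70.0, 0.27     # km/s, km/s/Mpc, matter density

def mu_seu(z):
    # The paper's static-Euclidean distance modulus: 25 + 5log10(cz/H0) + 2.5log10(1+z)
    return 25 + 5 * np.log10(c_km_s * z / H0) + 2.5 * np.log10(1 + z)

def mu_lcdm(z, n=20001):
    # Flat LCDM: D_L = (1+z) * (c/H0) * integral of dz'/E(z'), done by the trapezoid rule
    zz = np.linspace(0.0, z, n)
    inv_E = 1.0 / np.sqrt(Om * (1 + zz) ** 3 + (1 - Om))
    Dc = (c_km_s / H0) * np.sum(0.5 * (inv_E[1:] + inv_E[:-1]) * np.diff(zz))  # Mpc
    return 25 + 5 * np.log10((1 + z) * Dc)

for z in (0.1, 0.5, 1.0, 2.0, 5.0, 7.0):
    print(f"z={z}: SEU {mu_seu(z):.2f}  LCDM {mu_lcdm(z):.2f}  diff {mu_seu(z) - mu_lcdm(z):+.2f} mag")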
 
So, I guess I was right in thinking that you are having a hard time putting this down. Is that a fair summation?
 
So I've started reading Lerner+ (2014), and have some questions. ...
The problem with discussing Lerner+ (2014) in this thread is that the paper contains no evidence against concordance cosmology.
It is an invalid paper supporting a toy model of the universe (a static Euclidean universe) by assuming a tired light explanation for cosmological redshift. So there is no point in extensive analysis of the paper. We should wait for Eric L to provide valid evidence against concordance cosmology.
 
So, I guess I was right in thinking that you are having a hard time putting this down. Is that a fair summation?

Having read all 91 (!) pages of the Plasma Cosmology - Woo or not thread (started by RC!), I'm hoping that, with Eric L's participation, we will at last have a deep discussion of the topic (plasma cosmology, or PC for short), here in ISF.

As far as I can tell, there has been no such discussion, anywhere on the internet; at least, not one where there is a proponent of PC that is head and shoulders above all the others I've seen (of course, it's entirely possible that I may not have come across a discussion of PC where there is a proponent as competent, in physics at least, as Eric L).

As Lerner+ (2014) is peer-reviewed, I think I can learn a lot about the process of getting scientific papers published, not least to learn what gets through such a review, in terms of the sorts of things I'd expect to be picked up on (but weren't). For example, as I've already mentioned, I'm quite surprised that whoever reviewed Lerner+ (2014) did not insist on more references, to back up the bald statement. Too, I find it amazing that there was no call for the authors to quote a 'goodness of fit' statistic.
 
One final thing, since this is fun, then I'm checking out for a while.

I tested the best tired-light model I could come up with. I now realize why that's so different than Eric's model. He insists on:


d= cz/H0


And calls it a static-universe tired-light theory. On top of the (unjustified) idea of tired light to begin with, Eric has chosen a totally unphysical implementation of it.

Consider sources at d = c/H and d=2c/H. Lerner tells us that these will be detected with redshifts of z=1 and z=2 respectively. A photon from a z=1 source has only half of its original energy (or has doubled in wavelength). A photon from a z=2 source has only 1/3rd of its original energy left (or has tripled in wavelength). Tired Light theory attributes this to some property of space that saps energy from light passing through.

Here's the especially broken thing about Lerner's version. I will set c/H = 1 for simpler typing.

Emit an E=8 eV photon at d=1, which Lerner says is z=1. When it gets to d=0 it has E= 4 eV.

Emit an E= 12 eV photon at d=2, which Lerner says is z=2. When it gets to d=0 it has E=4 eV.

But a photon from d=2 has to pass by d=1 on the way to d=0. The two halves of its journey are, according to Eric's curve, very different.


Are we supposed to conclude that the z=2 photon went from 12 eV to 8 eV (a factor of 0.66) in the *first* half of its journey, then went from 8 eV to 4 eV (a factor of 0.5) in the second half? Because that's what Lerner's equation says. Surely the path between d=1 and d=0 has the same effect on (a) a photon that travels that path only, vs (b) photons arriving from further away. (If not, Lerner's theory is even weirder.) Lerner's linear relation says that the tired-light-ness of distant space is less effective than the tired-light-ness of nearby space. He wrote something that looks simple on paper (d = cz/H) but which implies ridiculous physical complication---including putting the Earth at the geometric center of a set of spherical shells of different tired-light effects, all stacked and fine-tuned in some weird way to make Lerner's equation look linear in conventional notation.

If you insist on attributing tired-light properties to space, the only remotely parsimonious thing to write is, say,

1/(z+1) = e^(-d/d0)

with some scale factor d0. That's a well-behaved, one-parameter, "local" theory of energy-loss-while-traversing-a-medium. (And that's the theory I tested. It disagrees horribly with the supernova data. Oh well.)
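To make the composition point concrete, here's a tiny check with toy numbers (mine; d0 = 1, two equal path segments):

Code:
import math

d0 = 1.0
d1, d2 = 1.0, 1.0    # two successive segments of the same path

def z_exp(d):
    # exponential ("local") tired light: 1 + z = exp(d/d0)
    return math.exp(d / d0) - 1

def z_lin(d):
    # the linear relation: z = d/d0 (i.e. d = cz/H with d0 = c/H)
    return d / d0

# A per-unit-length loss process must compose multiplicatively across segments:
print("exponential:", (1 + z_exp(d1)) * (1 + z_exp(d2)), "vs direct:", 1 + z_exp(d1 + d2))  # equal
print("linear:     ", (1 + z_lin(d1)) * (1 + z_lin(d2)), "vs direct:", 1 + z_lin(d1 + d2))  # 4 vs 3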

Lerner's theory is a tired-light, static, heliocentric theory, with an unknown number of free parameters tossed in (and their values chosen) solely to yield a linear d-z relation. Contra Lerner, a linear d-z is not some simple minimal theory that we should default to except under strong compulsion---a linear d-z is a bizarre trainwreck of two variables that really don't want to be related linearly.
 
Yes. I was also hoping for an answer to this question ....

How does a linear relationship make sense? As I understand it, a redshift z corresponds to a frequency ratio of 1/(1+z). So if at distance d, z=1 and at distance 2d, z=2 (in a static model, so this relationship is presumably not supposed to change with time), what happens to a photon emitted from a galaxy at distance 2d on its way here? If it starts with frequency f, wouldn't it have frequency f/(1+1)=f/2 after distance d (half way) and so f/4 when it arrives, corresponding to z=3 rather than z=2?

Maybe I'm missing something but it seems that the concordance model is internally consistent and yours isn't.
 
Yes, thank you to Ben M for the explanation (debunking). Now I can sleep. Also seems that someone I've never heard of within scientific literature, who is parroting this thread on a Saturnist website (when he's not using christian ones), has gone suddenly quiet.
 
Eric, any word? If you've dropped out of the thread, I have no reason to keep nitpicking.

I was thinking about the exercise you do with the Tolman test---you apparently thought about "what actual galaxy size evolution path would explain this", and you thought the resulting formula was too complicated, and you claim to have scored a point against LCDM.

If you're interested, I can try the same exercise with *your* model. Your model does not contain a single constant tired-light attenuation length scale d0; to get your desired d=cz/H linear behavior, you have inserted a tired-light theory with a space- or time-variation (d0(t) or d0(d)) in the already-mysterious attenuation process. If you're interested, I might try to construct a multiparameter fit to d0(d) and see how many parameters I need to describe the data. Of course this is not a watertight argument, but it *is* an argument you rely on heavily yourself so I think it's fair play.

Can you tell me, though, would you prefer that I model a time-varying or a space-varying tired-light-function? In principle, since they're both utterly-unknown phenomena that we're trying to use cosmological data to discover, the standard approach would be to do BOTH and see which parts of either parameter-space are ruled out. But that's an approach you *don't* seem to like, so I thought I would check before wasting time.
 
Hi all, got a little time today. I will answer one basic point on the linear Hubble relation and then move on to the second point relating to Lithium and Helium abundances.

Ben m, you really need to brush up a bit on math. Of course it is possible to rewrite a linear Hubble relation in a single-parameter differential form. Here it is:

(dE/dt)/E = frequency ((initial wavelength) /Hubble length)

where E is the energy of the photon at any time, “frequency” is its frequency at any time, “initial wavelength” is the photon’s wavelength when it was emitted and “Hubble length”, the sole parameter, is c/H where c is the speed of light and H is the Hubble parameter.

To put this into words, in the time it takes to travel one wavelength, a photon loses a fraction of its energy equal to the ratio of its initial wavelength divided by the Hubble length.

If we change the hypothesis to say “current wavelength” rather than “initial wavelength” we then get a logarithmic relation between z and d rather than a linear one.

This is NOT a physical mechanism, this is just rewriting the relationship mathematically.

Of course the linear relation is NOT a logarithmic relation, so the photon traveling twice the distance does not lose twice the energy. As its own frequency decreases, the proportional rate of energy loss slows down. But it is still all described by a simple one-parameter equation—the one written here.
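(A quick numerical check of the integration, for anyone following along: a sketch only, not from any paper. It writes the verbal rule above as an ODE in distance, using dx = c dt, frequency = E/h and initial wavelength = hc/E0.)

Code:
import numpy as np

# "Initial wavelength" rule:  dE/dx = -E^2 / (E0 * L)  ->  E = E0/(1 + x/L),   z = x/L         (linear)
# "Current wavelength" rule:  dE/dx = -E / L           ->  E = E0*exp(-x/L),   z = exp(x/L)-1  (logarithmic)

L, E0 = 1.0, 1.0                          # Hubble length and initial photon energy (arbitrary units)
x_grid = np.linspace(0.0, 3.0 * L, 30001)
dx = x_grid[1] - x_grid[0]

E_init_rule, E_curr_rule = E0, E0
for _ in x_grid[1:]:                      # simple forward-Euler integration
    E_init_rule -= E_init_rule**2 / (E0 * L) * dx
    E_curr_rule -= E_curr_rule / L * dx

x = x_grid[-1]
print("initial-wavelength rule: z =", E0 / E_init_rule - 1, " (expect x/L =", x / L, ")")
print("current-wavelength rule: z =", E0 / E_curr_rule - 1, " (expect e^(x/L)-1 =", np.exp(x / L) - 1, ")")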
 
On the light elements data, a couple of points. The data I cite on helium abundances is derived from observations of nearby stars in our immediate neighborhood. The data is very high quality as a result. To get helium abundances, one has to use models of stellar structure, but these models have been very well verified and, unlike cosmology theory, are based on physical theories that are extremely well tested in the laboratory. We know experimentally how fusion reactions work in detail, we know about thermodynamics, radiation transfer etc. Someone wrote we don’t have stars in the laboratory. But we have all the physics that goes into understanding stars. No dark matter needed. When we learn new physics from stars—like fusion itself 80 years ago and about neutrinos more recently—we can test that in the lab as well.

So the fact that helium abundance trends down towards zero with decreasing abundance of heavy elements is a flat contradiction to Big Bang nuclear synthesis theory, to any Big Bang theory that assumes a hot dense phase for the expansion of the universe.

How does this cohere with observations of interstellar gas in other galaxies? We are seeing galaxies after they have completed their initial formation process. When galaxies are forming they modestly cloak themselves in lots of dust that absorbs most visible and UV light and allows us to see them mainly in IR. So, at the moment, we can’t directly see very-low-He clouds. But we can see stars in our own galaxy that are relics of that early period. We know that because they have so little iron and other heavy elements produced by earlier stars. They are “pristine” and they don’t have the helium BBN predicts.

If, on the other hand, there was no BB and all the elements, including helium, that we observe are built up by thermonuclear processes in stars, then we expect that as we look at older and older stars in our own galaxy we see less and less helium and lithium. That is what I predicted in published results 30 years ago and that is what is observed.
 
references

People have asked for specific older references of mine, so here’s a bunch. Nearly all of them are available on either bigbangneverhappened.org or lppfusion.com.

Do Local Analogs of Lyman Break Galaxies Exist? R. Scarpa, R. Falomo, and E. Lerner
The Astrophysical Journal. Volume 668, Issue 1, Page 74–80, Oct 2007 http://arxiv.org/abs/0706.2948

Evidence for a Non-Expanding Universe: Surface Brightness Data From HUDF
Proceedings of the First Crisis in Cosmology Conference, AIP proceedings series 822, p.60-74. 2006 http://arxiv.org/abs/astro-ph/0509611


Two World Systems Revisited: A Comparison of Plasma Cosmology and the Big Bang, IEEE Trans. On Plasma Sci. 31, p.1268-1275, 2003

"Intergalactic Radio Absorption and the COBE Data", Astrophysics and Space Science, Vol.227, May, 1995, p.61-81

"On the Problem of Big Bang Nucleosynthesis", Astrophysics and Space Science, Vol.227, May, 1995 p.145-149

"The Case Against the Big Bang" in Progress in New Cosmologies, Halton C. Arp et al, eds., Plenum Press (New York), 1993

"Confirmation of Radio Absorption by the Intergalactic Medium", Astrophysics and Space Science, Vol 207,1993 p.17-26.

"Force-Free Magnetic Filaments and the Cosmic Background Radiation", IEEE Transactions on Plasma Science, Vol.20, no. 6, Dec. 1992, pp. 935-938.

"Radio Absorption by the Intergalactic Medium," The Astrophysical Journal, Vol. 361, Sept. 20, 1990, pp. 63 68.

"Galactic Model of Element Formation," IEEE Transactions on Plasma Science, Vol. 17, No. 3, April 1989, pp. 259 263.

"Plasma Model of the Microwave Background," Laser and Particle Beams, Vol. 6, (1988), pp. 456 469.

"Magnetic Vortex Filaments, Universal Invariants and the Fundamental Constants," IEEE Transactions on Plasma Science, Special Issue on Cosmic Plasma, Vol. PS 14, No. 6, Dec. 1986, pp. 690 702.

"Magnetic Self Compression in Laboratory Plasma, Quasars and Radio Galaxies," Laser and Particle Beams, Vol. 4, Pt. 2, (1986), pp. 193 222.
 
Eric L said:
Hi all, got a little time today. I will answer one basic point on the linear Hubble relation and then move on to the second point relating to Lithium and Helium abundances.

Ben m, you really need to brush up a bit on math. Of course it is possible to rewrite a linear Hubble relation in a single-parameter differential form. Here it is:

(dE/dt)/E = frequency ((initial wavelength) /Hubble length)

where E is the energy of the photon at any time, “frequency” is its frequency at any time, “initial wavelength” is the photon’s wavelength when it was emitted and “Hubble length”, the sole parameter, is c/H where c is the speed of light and H is the Hubble parameter.

To put this into words, in the time it takes to travel one wavelength, a photon loses a fraction of its energy equal to the ratio of its initial wavelength divided by the Hubble length.

If we change the hypothesis to say “current wavelength” rather than “initial wavelength” we then get a logarithmic relation between z and d rather than a linear one.

If photons with a higher initial frequency lose proportionally more energy than lower-initial-frequency photons, then the spectra of distant galaxies (supernovae, etc) should be compressed. Is this consistent with observation?
 
One more brief comment in reply to Jean Tate and others. The referees did not ask for specific statistical results on the fit to the SN Ia data because it was obvious by eye that the mathematical difference between the two lines was small compared with the scatter in the data. My colleagues have shown this around a lot and no one asks what the fit is. They are all just surprised at the “coincidence” that the LCDM prediction is so close to a straight line. Once you see how close the predictions are, it is obvious they will both fit the same data.

And yes, Jean, astronomical terminology is a mess. Magnitudes, Vega, AB, ST: all these systems are historical relics (despite the reference to the HST) and it is too bad astronomers are so fond of tradition. IMHO as a physicist it would be better to use physical units (SI and cgs) only and throw out magnitudes altogether but that will not happen. The US still uses British imperial units, so go figure.
 
Eric L: Why are all of the published calculations of He abundance wrong

On the light elements data, a couple of points. The data I cite on helium abundances is derived from observations of nearby stars in a our immediate neighborhood.
Which is the point you are missing, Eric L.
The paper you cite states in the introduction that BBN produces a universal primordial He mass fraction of ~0.24.
Fig 5 is the metal versus He mass fraction observed from the surface of local dwarf stars fitted in 2 ways.

Ignorance about how primordial He abundance is calculated seems to be repeated, Eric L.
More clearly: Lerner provides no citations to literature saying that calculations of primordial He abundance in the Big Bang are wrong or that observations contradict the calculations. All we have is his unsupported assertion that the abundance of He in local stars (as in PCF10) can be used to predict the primordial abundance and is "far too low".

Lerner needs to learn how primordial He abundance is actually derived from observations. For example, they look at hot HII regions in dwarf galaxies to eliminate the need to model He abundance in stars.
Helium-4 and Dwarf Galaxies

The BB model prediction is 25%.
Eric L: Why are all of the published calculations of primordial He abundance wrong?
 
People have asked for specific older references of mine, so here’s a bunch.
Try to read your OP and title, Eric L: This is the Evidence against concordance cosmology thread.
Plasma cosmology is not concordance cosmology :eek:!
Plasma cosmology does not really exist. It is a collection of sometimes invalid, sometimes contradictory theories made by people with the unscientific assumption that the BB must be wrong.
Conference proceedings are not scientific literature, especially when they are an irrelevant early version of a non-physical toy model being tested.
 
If photons with a higher initial frequency lose proportionally more energy than lower-initial-frequency photons, then the spectra of distant galaxies (supernovae, etc) should be compressed. Is this consistent with observation?

No, the product of wavelength and frequency for an emitted photon is a constant--c.
 
This is NOT a physical mechanism, this is just rewriting the relationship mathematically.

Thanks Eric.

  • Are we *ever* allowed to try to write a physical model for this idea? And to propose methods for testing it other than sort-of-fitting straight lines through Hubble diagrams? Because right now I can think of lots of problems with your suggested model. (I mean, seriously, it's a godawful theory from the point of view of E&M, particle physics, and relativity, and is trivially disproven by the existence of emitting and absorbing systems at different redshifts.) If I try to post such criticism, I suspect you'd say "no, no, you're not supposed to criticize the model itself yet, let's start by showing that it's a good fit and counting the parameters". To which I say, preemptively:
    • Your ability to invent the model is the only thing that allows us to count the parameters at all. (Nobody cares how many parameters there are in the best-vague-polynomial-drawn-through-something.)
    • The existence of multiple models---"one tired light model gives d=z, another gives d = log(1+z), another gives ..."---is, by definition, a free parameter in modeling; you chose to emphasize the d=z version over the d=log(1+z) version or the d=z+z^2/2 version because you think it agrees with the data, not for any other reason. But you forget (or pretend) that you made this choice when it comes time to talk about the number of parameters in your theory. (Recall that if the data had come in looking like d = log(1+z), you'd be talking about the one-parameter nature of your fit to THAT, and poking fun at any cosmologist who ran hypothesis-tests on a dark-matter-inspired d=z model.)
    • You're basically saying "we should describe the data simply and accurately first, and worry about the physical nature of the best-fit model later" here, which is exactly the behavior that invites your scorn. We have a great, predictive cosmological model, Eric, which describes the data simply and accurately. It's called "LCDM". It includes three ingredients (something that looks like dark matter, something that looks like dark energy, and an inflaton mechanism that kicks in at high energy) whose detailed physical nature we don't know ... and don't need to know for further model computations, although it'd be nice to find out. The resulting fits are superb, and theorists tell us that the new-physics-inferred is not implausible or otherwise ruled out.
 
Analysis that shows how Scarpa et al. (2007) was wrong

Do Local Analogs of Lyman Break Galaxies Exist? R. Scarpa, R. Falomo, and E. Lerner
The Astrophysical Journal. Volume 668, Issue 1, Page 74–80, Oct 2007 http://arxiv.org/abs/0706.2948
Eric L: You should not cite a paper while ignoring the paper that debunks it:
HST morphologies of local Lyman break galaxy analogs I: Evidence for starbursts triggered by merging
Roderik A. Overzier, Timothy M. Heckman, Guinevere Kauffmann, Mark Seibert, R. Michael Rich, Antara Basu-Zych, Jennifer Lotz, Alessandra Aloisi, Stephane Charlot, Charles Hoopes, D. Christopher Martin, David Schiminovich, Barry Madore
See the Appendix for a detailed analysis that demonstrates that the interpretation and conclusions presented in the recent paper by Scarpa et al. (2007) are in fact in error.
This happens to be the only credible paper that cites yours (the only others are one preprint and a single-author paper).
There are 88 papers about local analogs of Lyman break galaxies published after yours.
 
"Intergalactic Radio Absorption and the COBE Data", Astrophysics and Space Science, Vol.227, May, 1995, p.61-81
Sorry, Eric L, but this is a really bad citation for many reasons.
  1. The COBE data did describe a "perfect" black body spectrum.
    The FIRAS measurements have error bars so small that they are typically hidden by the width of a plotted Planck's Law curve.
    Mather et al. 1994, Astrophysical Journal, 420, 439, "Measurement of the Cosmic Microwave Background Spectrum by the COBE FIRAS Instrument," Wright et al. 1994, Astrophysical Journal, 420, 450, "Interpretation of the COBE FIRAS CMBR Spectrum," and Fixsen et al. 1996, Astrophysical Journal, 473, 576, "The Cosmic Microwave Background Spectrum from the Full COBE FIRAS Data Sets"
  2. Figure 2 is dubious. You are showing that your model cannot fit the COBE data (crosses) because you have not bothered to add error bars!
  3. Stars are not perfect black bodies. Galaxies are not perfect black body emitters. The IGM is not a perfect black body. An idea about invisible, never detected "electrons trapped in dense magnetically pinched filaments in the IGM" is a fantasy.
  4. An obscure single author paper (cited 4 times!).
  5. Where are the follow up papers for the WMAP and Planck data?
    Citing an author who seems to have abandoned his own idea is bad.
  6. Where are the calculations of the power spectrum produced by these filaments?
  7. Where are the calculations of the temperature of the CMB with distance from us?
 
