Evidence against concordance cosmology

The presentation is 25 minutes-- not too long for anyone with a serious interest and non-impaired attention span.

It looks like you spend 15 minutes narrating the Tolman test paper. Why didn't you say so? Are there a few dozen or a hundred words in the video that I won't have learned ('learned') from the paper? If so, just summarize them here.

See how easy that was?
 
Next is "Free Parameters Exceed Measurements" ignorance. The Lambda-CDM parameters are well established by WMAP and Planck measurements. The lie that the concordance model has no predictive value.
There is a hint of the craziness that models should not be updated to include new observations. I call it is crazy because it is similar to "ignore the observations that Newtonian gravitation is wrong and do not look for a better theory of gravity"!
GR has solutions that describe an expanding or contracting or static universe.
There is overwhelming evidence for an expanding universe so we have the Big Bang model.
There is overwhelming evidence for dark matter so we add dark matter to the Bang model.
There is strong evidence for dark energy so we add dark energy to the Bang model.
There is good evidence for inflation so we add inflation to the Bang model.

ETA: The conclusion is ridiculous. It is never time to switch to a cosmological model that does not exist :eek:! It is never time to switch to a bunch of "plasma cosmology" conjectures (note the small p) that cannot match fundamental observations about the universe, e.g. the temperature, black-body spectrum and power spectrum of the CMB.
 
That is one paper covered. The presentation points out that there are observational contradictions to all the basic predictions of concordance cosmology, not just one or two. In today's science, conference presentations tend to precede peer-reviewed papers. The paper that will cover all the points in the presentation is still being written. However, the presentation cites data in papers that are published. The presentation is 25 minutes-- not too long for anyone with a serious interest and non-impaired attention span.

The presentation is tedious, long-winded and pointless (I gave up after getting about halfway through).
People write papers because readers can fact-check, review important points and equations, and thoughtfully and patiently read the material to grasp its purpose and significance. None of that can be done with a stupid video presentation.
Come back and let us know when "the paper" is available.
 
So, everybody must accept a theory in schools and universities and study it, even when it's been proven wrong, just because there is no other theory to explain the phenomenon? I don't agree. I would not accept such a disproven theory and I will call it 'pseudoscience'.

Edited by zooterkin: 
<SNIP>
Edited for rule 12.
 
Still trying to figure out which theory is disproven and how. ...
Nothing yet. There's no expectation that such will happen.


...Anybody got a clue what Maartenn100 is talking about?
Maartenn100 has no scientific background/education/talents/insights/etc. He thinks that is not fair, and he thinks that his education-resistant opinion is entirely valid and that he has a right to have it treated as such.
 
I'll ask a supplementary question: what are the alternate hypotheses? And what is the evidence FOR them?
He's a Plasma Cosmology nut proponent.

So, everybody must accept a theory in schools and universities and study it, even when it's been proven wrong, just because there is no other theory to explain the phenomenon? I don't agree. I would not accept such a disproven theory and I will call it 'pseudoscience'.
:rolleyes:

Why not present it on the Cosmoquest 'Against the Mainstream' Forum? It will be looked at by people in the right field with enough expertise to ask you the right questions.

http://cosmoquest.org/forum/
Probably because...
Rules are tougher than here though and you only get 30 days to make your case.
There is a lot of helpful advice in the FAQ to help posters though
Exactly.
 
You still learn Newtonian Physics in school, even though we now have General Relativity.

I don't think that analogy works, because Newtonian physics is still applicable and does a great job of describing reality; it can make testable and verifiable predictions that can be independently and objectively observed.

As such, general relativity didn't overturn Newtonian physics. It refined it.
 
Oh jeez...........here we go again. Are we a black hole? We seem to be attracting an inordinate number of "Einstein-was-wrong" types over our event horizon at the moment, all armed with pseudo-intellectual word soup as the primary weapon, and a noticeable shortage of maths.

I have two explanations for the phenomenon.

  1. Einstein has risen from the dead and been going around kicking people's dogs.
  2. Biblical literalism is rearing its ugly head at general relativity.
 
Still trying to figure out which theory is disproven and how. Anybody got a clue what Maartenn100 is talking about?

I know what he thinks he is talking about. He believes that since, in his mind, general relativity has been discredited - and there are posts about it in one of these threads; something about scientists "forcing" dark matter/dark energy into the model invalidating GR - that it should no longer be taught. Maartenn, I wish to make an observation, and I've brought this up before. Even if GR is incorrect about some detail, it still has excellent agreement with a large number of other details that make it a useful theory. Why should we throw the baby out with the bathwater? If GR is wrong about something, we will investigate that wrongness and refine (or even overturn) the model; but until then, we'll use it for what we know it's good for. That's how scientific theories work.
 
To ask a somewhat serious question Eric, why not compare your model to the best geometric data available, Baryon Acoustic Oscillations? The problem with the Tolman test is that it is degenerate with galaxy evolution, as you point out, but the BAO scale is not. Plenty of data exists from BOSS, WiggleZ and 6dFGRS at several redshifts. A linear, static model like yours should be very simple to test, as there are no weird geometric effects and the geometry should simply be Euclidean. Using only the angular correlation function, you can run the test. That would be interesting.

I also think people could do with being a bit more polite on here.

I'd also take serious issue with the claim that the CMB is local. The Sunyaev–Zel'dovich effect is observed, and hundreds of clusters have now been detected with it up to z=1.47. It simply cannot be local.
 
Rules are tougher than here though and you only get 30 days to make your case.
There is a lot of helpful advice in the FAQ to help posters though

Also you have to answer questions not dance around them.
 
This should be an interesting thread!
This thread is to discuss evidence against “concordance cosmology”. Concordance cosmology is the name used for the dominant model for cosmology.

The basic hypotheses of concordance cosmology are

1) The universe is expanding. This means the space between galaxies is expanding, not space within gravitationally (or electromagnetically) bound objects. This expansion accounts for the Hubble relationship between redshift and apparent luminosity.
2) The universe originated in a Big Bang.
I know it is often expressed like this, but I don't think it's a basic assumption. If only because it isn't, as of today, testable.

It went through a state of extremely high density and temperature. During this period, the light isotopes He-4, He-3, Li-7 and deuterium were formed.
3) The expansion was initially driven by an “inflation” force field, which expanded the universe exponentially, accounting for the smoothness of the cosmic microwave background (CMB). This expansion also determines that the total energy density of the universe is the critical density. (The critical density, ρ_c = 3H_0^2/(8πG), is the matter-energy density needed to exactly balance gravitational energy.)
4) The subsequent expansion, after inflation, was accelerated by a repulsive energy field, “dark energy”, whose energy density at present is 70% of the critical density of the universe.
I don't think it's accurate to call DE an 'energy field', in terms of it being a basic assumption.

5) Five-sixths of the matter density in the universe is “dark matter” or non-baryonic matter, an unknown type of matter not consisting of nucleons and electrons. The remaining one-sixth, or 5% of the critical density, consists of ordinary matter, or baryonic matter.

There are also a number of subordinate assumptions, and quite a few additional adjustable parameters in the model, but these are the core hypotheses.
This too is not accurate, I think. For example, there is the core assumption that 'the laws of physics' apply throughout the observable universe. Without this, every astronomical observation would be nigh on impossible to interpret.

One thing to note right off is that neither the inflation field, dark energy nor dark matter have been observed in any experiments on earth or experiments conducted by spacecraft.
True, but so what? The 21 cm hyperfine transition in H has not been so observed, but a large part of radio astronomy depends on it. Nor has anyone observed, in such experiments, ordinary matter in the state found in a neutron star.

While in the past contradictions between this model and observations have been addressed by modification of the model, rather than questioning its underlying hypotheses, I would argue that those hypotheses are in fact testable and falsifiable. The universe is either expanding at a rate that explains the Hubble relation or it is not. It either went through a hot dense epoch, a Big Bang, or it did not.
Putting this as an either/or is too restrictive, I feel. It too arbitrarily dismisses the possibility of a partial 'explanation', for example. And that's bad science.

Dark energy exists or it does not. Dark matter exists or it does not.
These are simply incorrect. If only because they are both merely placeholders.

I summarized the evidence against concordance cosmology in an invited presentation last June to a workshop at EWASS, a large astronomy conference. It is here. People should watch this presentation before posting to this thread.
I haven't yet had a chance to watch it, but I do have a question about the sources you certainly used in preparing this!

Can you please list the primary sources you relied upon (other than your own, recent, paper)?

I realize people tend to get excited about this subject. But please, no name-calling. Personally I’ll only respond to posts that raise actual scientific arguments or questions.
I hope mine falls into the latter category; I look forward to your response.
 
To ask a somewhat serious question Eric, why not compare your model to the best geometric data available, Baryon Acoustic Oscillations?

I think Eric wants to do two totally separate things:

a) Talk about the things (there are indeed a few) that LCDM cosmology mispredicts. Argue that LCDM has some epistemological problem with free parameters and fudge factors (it doesn't). He wants you to end this discussion by affirmatively rejecting LCDM. "Oh, no, it's all wrong and can't be salvaged by more incremental work! I wonder where we will find a totally new idea to replace everything?"

b) At this point Eric will want to talk about his alternative cosmologies, which would look terrible side by side with LCDM. But now that LCDM is dead (see point a), Eric will try to present his as an interesting, incomplete theory which merely needs incremental improvements. ("Improvements" in plasma cosmology never count as fudge factors or free parameters because <mumble mumble> they just don't, OK?)
 
This thread is to discuss evidence against “concordance cosmology”. Concordance cosmology is the name used for the dominant model for cosmology.

This ought to be good.

1) The universe is expanding. This means the space between galaxies is expanding, not space within gravitationally (or electromagnetically) bound objects. This expansion accounts for the Hubble relationship between redshift and apparent luminosity.

Slight correction: space also expands within galaxies and planets and atoms, but the other forces keep the stuff together.

One thing to note right off is that neither the inflation field, dark energy nor dark matter have been observed in any experiments on earth or experiments conducted by spacecraft.

Isn't it frustrating that we don't just have all the answers right now?

The universe is either expanding at a rate that explains the Hubble relation or it is not. It either went through a hot dense epoch, a Big Bang, or it did not. Dark energy exists or it does not. Dark matter exists or it does not.

It is. It did. It may. It does.

I realize people tend to get excited about this subject.

Meh.
 
I think all of this nonsense has been dealt with by Brian Koberlein in the links I posted earlier. Unless anybody else has some new evidence, it's kind of dead in the water.
 
Let’s start by looking at just the first point I make in the presentation. I do hope people see the presentation, and it does have references to published papers. But I’ll summarize here for those who need it.

The hypothesis that the universe is expanding, taken by itself--that is, taking this hypothesis alone--makes very few testable predictions. One very well-known one is that the surface brightness of objects drops as (1+z)^3. Equivalently, it makes quantitative predictions about the apparent size of objects of a given luminosity.
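For reference, here is the standard algebra behind those exponents (textbook material, not specific to either side of this argument):

$$\Sigma \;\propto\; \frac{F}{\theta^{2}} \;\propto\; \frac{d_A^{2}}{d_L^{2}} \;=\; (1+z)^{-4}, \qquad d_L = (1+z)^{2}\,d_A ,$$

for bolometric surface brightness in an expanding universe; measuring per unit frequency (AB magnitudes) removes one factor of bandwidth compression, leaving the (1+z)^3 quoted above.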

The alternative hypothesis-- that the universe is not expanding and the Hubble relation is due to energy loss that happens to the light as it travels-- makes the prediction that surface brightness of objects (as measured in AB magnitude—in other words per unit frequency) is constant with distance. To test that hypothesis for objects of the same intrinsic luminosity, however, you need to assume an actual relation between redshift and distance. My colleagues and I assumed z, redshift, is linearly proportional to distance at all distances (as we know it is at small z).

This relationship fits the data set of apparent magnitudes vs redshift of the supernova Ia data just as well as the LCDM model does, and it is almost mathematically indistinguishable from those predictions for that data set. It however has the Occam’s razor advantage that it fits the data set using only one adjustable parameter—the Hubble constant—while LCDM requires 3 adjustable parameters—H, the density of matter (including dark matter) and the energy density of “dark energy”. If you accept the LCDM model, the fact that the non-expanding model with linear Hubble relation fits just as well has to be considered a big coincidence.
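To make the comparison concrete, here is a minimal numerical sketch of the two distance-modulus relations being compared (an illustration only, not our actual fitting code; it assumes the single (1+z) energy-loss dimming commonly used for tired-light models, and round values H0 = 70, Ωm = 0.3, ΩΛ = 0.7):

```python
import numpy as np
from scipy.integrate import quad

C = 299792.458   # speed of light, km/s
H0 = 70.0        # Hubble constant, km/s/Mpc (assumed round value)

def mu_lcdm(z, om=0.3, ol=0.7):
    """Distance modulus in flat LCDM (om + ol = 1)."""
    E = lambda zp: np.sqrt(om * (1.0 + zp)**3 + ol)
    d_c = (C / H0) * quad(lambda zp: 1.0 / E(zp), 0.0, z)[0]  # comoving distance, Mpc
    d_l = (1.0 + z) * d_c                                     # luminosity distance, Mpc
    return 5.0 * np.log10(d_l) + 25.0  # mu = 5 log10(d_L / 10 pc)

def mu_static_linear(z):
    """Toy static model: d = cz/H0, flux dimmed by one (1+z) energy-loss factor."""
    d = (C / H0) * z
    return 5.0 * np.log10(d) + 25.0 + 2.5 * np.log10(1.0 + z)

for z in (0.05, 0.2, 0.5, 1.0):
    print(f"z={z}: LCDM mu={mu_lcdm(z):.3f}, static mu={mu_static_linear(z):.3f}")
```

Running something like this shows the two curves tracking each other to within a couple of tenths of a magnitude over the SN Ia redshift range, which is the near-indistinguishability at issue.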

The data set of disk galaxies, discussed in our published paper, and the data set of elliptical galaxies, taken from others' work and used in this presentation, both show no change in surface brightness with distance. So the simple, no-parameter prediction of the non-expanding hypothesis is confirmed with these two data sets.

In order to fit the data with the expanding-universe hypothesis, you need four additional ad-hoc parameters to describe the size evolution of elliptical galaxies and of disk galaxies. The actual physical theories of size expansion proposed in published papers prior to this data release do NOT fit the data. Again, you have to consider that the zero-parameter fits to these two data sets by the non-expanding hypothesis are two more big coincidences.

So, to fit these two data sets, the non-expanding hypothesis takes no free parameters, the expanding universe requires at least four free parameters. Occam’s razor is very cutting here.

The expanding universe hypothesis also requires one to explain why disks and elliptical galaxy sizes evolve in exactly the same way. Disks that are high-luminosity in the UV—the ones we studied—are all young galaxies—babies. To be bigger with decreasing z they have to be born bigger. Ellipticals are old galaxies. We are looking at the same evolving population of ellipticals at high and low z.

The expanding-universe hypothesis here is like saying that our measurements with our special variable yardsticks indicate that human babies born in 1966 are 18 inches long, while babies born in 2016 are 36 inches long. At the same time, adults that were average 6 feet tall when they were 20 in 1966 are now 12 feet tall in 2016 at age 70. It is just a coincidence that when measured with non-varying yardsticks babies and adults are the same size as they were 50 years ago.
 
I think all of this nonsense has been dealt with by Brian Koberlein in the links I posted earlier
Brian Koberlein points out a flaw in the Lerner et al. paper that I had been considering, but in a different sense: selection bias.
If you’re going to test an alternative model that requires the introduction of some unknown mechanism for redshift, you should probably compare your results to an expanding universe model to see if yours works better. In particular, you should probably compare your data to the ΛCDM model (the standard dark energy/dark matter/expanding universe). Do they do this? No. In their own words, “In this paper, we do not compare data to the ΛCDM model. We only remark that any effort to fit such data to ΛCDM requires hypothesizing a size evolution of galaxies with z.” Apparently hypothesizing a size evolution for galaxies is bad, but introducing an unknown tired light mechanism to preserve a static universe is okay.
A main point of tests is not whether they work for a model; it is whether they work better for some models than for others. Not testing against an expanding universe makes the paper fairly useless. Their selection of a model to test is biased.

The other sense of selection bias: when you select a subset from a set of data, it is usual to run tests to see whether that selection introduces biases. The paper has "5.2 Is there a bias for size or surface brightness?". We would expect this to be followed by tests for bias. What we get are assertions.
 
The hypothesis that the universe is expanding, taken by itself--that is, taking this hypothesis alone--makes very few testable predictions.
That is not right, Eric L. An expanding universe is not a hypothesis that was just made up. It is a hypothesis that is backed up by many observations and testable predictions that it passes: Frequently Asked Questions in Cosmology: What is the evidence for the Big Bang?
The evidence for the Big Bang comes from many pieces of observational data that are consistent with the Big Bang. None of these prove the Big Bang, since scientific theories are not proven. Many of these facts are consistent with the Big Bang and some other cosmological models, but taken together these observations show that the Big Bang is the best current model for the Universe. These observations include:
•The darkness of the night sky - Olbers' paradox.
•The Hubble Law - the linear distance vs redshift law. The data are now very good.
•Homogeneity - fair data showing that our location in the Universe is not special.
•Isotropy - very strong data showing that the sky looks the same in all directions to 1 part in 100,000.
•Time dilation in supernova light curves.
The observations listed above are consistent with the Big Bang or with the Steady State model, but many observations support the Big Bang over the Steady State:
•Radio source and quasar counts vs. flux. These show that the Universe has evolved.
•Existence of the blackbody CMB. This shows that the Universe has evolved from a dense, isothermal state.
•Variation of T_CMB with redshift. This is a direct observation of the evolution of the Universe.
•Deuterium, 3He, 4He, and 7Li abundances. These light isotopes are all well fit by predicted reactions occurring in the First Three Minutes.
Finally, the angular power spectrum of the CMB anisotropy that does exist at the several parts per million level is consistent with a dark matter dominated Big Bang model that went through the inflationary scenario.

The Tolman surface brightness test is described in the Wikipedia article, but the article does not emphasize the factors which prevented the test from being performed for about 70 years.

Your published paper on the Tolman test is irrelevant since it is not evidence against concordance cosmology, Eric L
We conclude that available observations of galactic SB are consistent with a static Euclidean model of the Universe.
Your paper does not test concordance cosmology.

The expanding universe hypothesis also requires one to explain why disks and elliptical galaxy sizes evolve in exactly the same way.
Whoops - an expanding universe is not a model of galaxy evolution, Eric L :jaw-dropp!
Nice of you to point out the ignorance of ignoring galaxy evolution, as your paper does.

ETA: Regardless of whether the universe is expanding or not, there is good evidence that the observable universe did not always contain galaxies. The increasing amount of neutral H with z tells us that galaxies have been ionizing H for a finite time. The ages of globular clusters are less than 13.7 billion years. Thus galaxies formed and then evolved over those 13.7 billion years or less.
 
Oh jeez...........here we go again. Are we a black hole? We seem to be attracting an inordinate number of "Einstein-was-wrong" types over our event horizon at the moment, all armed with pseudo-intellectual word soup as the primary weapon, and a noticeable shortage of maths.

At least 6. All completely ignorant of the math and functionally ignorant of the meaning of the words in their proper combinations.
 
Disks that are high-luminosity in the UV—the ones we studied—are all young galaxies—babies.
Citation to the scientific literature please, Eric L.
What approximate age are you saying all of the galaxies in your sample are? 1 million years? 1 billion years? 100 billion years?
After all in a static Euclidean universe any age should be "young" compared to infinity :D!
 
Since Eric L mentions the size of galaxies: The Shape of Things by Brian Koberlein.
So is there a way to test whether the ΛCDM model is accurate for our universe, or if some other model might work as well? It turns out that there is, and it’s known as the Alcock-Paczynski cosmological test. The basic idea of this test is to measure two things: the redshift of an object (for which we typically use the ΛCDM model to determine its distance) and the apparent size of the object. Since any model for the structure of the universe will predict a relation between these two quantities, you can use these quantities to test the accuracy of your model.

A recent paper in the Astrophysical Journal (arxiv version:http://goo.gl/xe4vwL) has done just that. Using redshift and apparent size data for distant galaxies, the authors test six models that represent a wide range of possibilities: ΛCDM, Einstein-de Sitter (a model with no dark matter), Friedman model (no dark energy), quasi-steady state cosmology (no big bang), a static universe model, and static universe with “tired light.”

What the authors found was that only two models agreed with the Alcock-Paczynski test, the ΛCDM model and the tired light model.
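For concreteness, the AP observable reduces to one dimensionless number per redshift, F(z) = (1+z)·d_A(z)·H(z)/c: the ratio of an object's line-of-sight (redshift) extent to its transverse (angular) extent, independent of its true size. A minimal sketch of how you would tabulate it for flat ΛCDM (my own illustration, not Koberlein's or the paper's code; round parameter values assumed):

```python
import numpy as np
from scipy.integrate import quad

C, H0 = 299792.458, 70.0  # speed of light (km/s), Hubble constant (km/s/Mpc)

def hubble(z, om=0.3, ol=0.7):
    """Hubble rate H(z) in flat LCDM."""
    return H0 * np.sqrt(om * (1.0 + z)**3 + ol)

def f_ap(z, om=0.3, ol=0.7):
    """Alcock-Paczynski observable F(z) = (1+z) * d_A(z) * H(z) / c."""
    d_c = C * quad(lambda zp: 1.0 / hubble(zp, om, ol), 0.0, z)[0]  # comoving, Mpc
    d_a = d_c / (1.0 + z)  # angular diameter distance, flat geometry
    return (1.0 + z) * d_a * hubble(z, om, ol) / C

print({z: round(f_ap(z), 3) for z in (0.2, 0.6, 1.0, 1.5)})
```

Each candidate cosmology predicts its own F(z) curve, so comparing those predicted curves against the measured redshift/angular-size data is what sorts the six models.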
And other tests rule out tired light models: I’m Tired…
In addition there are the Lubin and Sandage papers
  1. The Tolman Surface Brightness Test for the Reality of the Expansion. I. Calibration of the Necessary Local Parameters
  2. The Tolman Surface Brightness Test for the Reality of the Expansion. II. The Effect of the Point-Spread Function and Galaxy Ellipticity on the Derived Photometric Parameters
  3. The Tolman Surface Brightness Test for the Reality of the Expansion. III. Hubble Space Telescope Profile and Surface Brightness Data for Early-Type Galaxies in Three High-Redshift Clusters
  4. The Tolman Surface Brightness Test for the Reality of the Expansion. IV. A Measurement of the Tolman Signal and the Luminosity Evolution of Early-Type Galaxies
  5. The Tolman Surface Brightness Test for the Reality of the Expansion. V. Provenance of the Test and a New Representation of the Data for Three Remote Hubble Space Telescope Galaxy Clusters (a 2010 follow up by Sandage)
 
Maartenn100: science does not do belief, for that is an article of faith requiring zero evidence. But what it does do is provide testable hypotheses to determine their validity with regard to the behaviour of observable phenomena. So new evidence will be added to the body of knowledge to gain a more accurate understanding.
And this applies to string theory how?
 
Can one of you explain to everyone how a model (expanding-universe) that requires four ad-hoc parameters to fit these data sets is superior to a model (non-expanding universe) that requires no free parameters to fit the same data? Science is about prediction. Any set of data can be fit by any model post-hoc, with enough free variables. To be useful, a scientific theory must be able to predict data ahead of time, not just fit the data afterwards with an ever-expanding list of free parameters.

Sandage’s papers, which are addressed in our paper, set up a tired light theory with a wrong relationship of redshift to distance, not a linear one. As we show, with the right (linear) relationship, his data are a good fit to our non-expanding model—with no free parameters.
 
Can one of you explain to everyone how a model (expanding-universe) that requires four ad-hoc parameters to fit these data sets is superior to a model (non-expanding universe) that requires no free parameters to fit the same data?
Can you explain why you are derailing your own thread about the evidence against concordance cosmology into the irrelevant topic of alternative cosmologies, Eric L :jaw-dropp?

But if you want an answer: The question is based on ignorance or maybe even a lie.
Frequently Asked Questions in Cosmology: What is the evidence for the Big Bang?
Static models like the Steady State Model do not "fit the same data". They fail to fit most of the data.

ETA: Your irrelevant paper is also a tired light model. You invoke an unspecified mechanism to replicate Hubble's Law. That is what tired light does.
 
<snip>

I haven't yet had a chance to watch it, but I do have a question about the sources you certainly used in preparing this!

Can you please list the primary sources you relied upon (other than your own, recent, paper)?

<snip>
I've now watched the presentation, and I must say that I wasn't very impressed (as several others have reported too).

Here are the references I noted, not including the SB test (for which the main reference is your recent paper, Eric L):

Li declines with Fe, < 0.03 BBN prediction: Sbordone+ (2012), Hansen+ (2015)

He also far too low in local stars: Portinari, Casagrande, Flynn (2010)

LCDM predicts 3x too much DM: I.D. Karachentsev, Astrophys. Bull. 67, 123-134

>200 Mpc LSS takes far too long to form for BB: Clowes+ (2012)

CBR alignments (etc): (no refs)

Evidence indicates scattering/abs of RF radiation in local universe: Atrophys & SS, 1993

Free Parameters exceed measurements: Disney? (voiceover, not slide)

These are the refs which seem to relate to the topic of this thread, evidence against concordance cosmology.

There are also several mentions of an alternative, plasma cosmology. And there's at least one ref given for that. From what I understood, there's little difference in the alternative in this presentation from what's in the (91-page long!) Plasma Cosmology thread, here in ISF (other than the recent SB paper).

Have I copied the references correctly, Eric L?

I will try to find the actual papers to which the refs in the presentation seem to refer.
 
Sandage’s papers, which are addressed in our paper, ....
That is section "6.2. Lubin and Sandage 2001", where you do not address anything real in the papers :eye-poppi!
You raise two strawmen:
  1. There is the inanity of saying that when they used the correct relation for "the Einstein-de Sitter static case" (your words) in 2001, they should have tested your 2012 model instead.
  2. Comments about LS01 in terms of your model again.
It ends with an evidence-less assertion about Lubin and Sandage using Sandage and Perelmuter data.
To be charitable, this emphasizes that your paper has nothing to do with any evidence for or against concordance cosmology, because you do not show the Lubin and Sandage papers are wrong.
 
Can one of you explain to everyone how a model (expanding-universe) that requires four ad-hoc parameters to fit these data sets is superior to a model (non-expanding universe) that requires no free parameters to fit the same data?
IIRC, this (or something like it) was discussed at some length, in the Plasma Cosmology thread.

A quick answer is that this is an entirely artificial, ad hoc, comparison.

For example, the CMB fit to a 2.73K blackbody, its dipole, and the angular power spectrum: the data are unambiguous, I think (refs: a key COBE paper, the main WMAP papers, and the main Planck papers; I'll provide a list if anyone asks). AFAIK, no one has published a paper showing that "a model (non-expanding universe)" fits the same data. IIRC, there is one paper - by Lerner? - published before WMAP, which shows a weak fit to some of the COBE data, certainly one that's demonstrably worse than a concordance cosmology model.

No "fit the same data" here.

Science is about prediction. Any set of data can be fit by any model post-hoc, with enough free variables. To be useful, a scientific theory must be able to predict data ahead of time, not just fit the data afterwards with an ever-expanding list of free parameters.

<snip>
Hmm ... I was under the impression that concordance cosmology models have a remarkably good track record, in terms of predictions. The CMB angular power spectrum, for example, and the rich clusters of galaxies 'discovered' in the Planck and SPT data (via the Sunyaev-Zel'dovich effect; confirmed by optical observations).
 
The hypothesis that the universe is expanding, taken by itself--that is, taking this hypothesis alone--makes very few testable predictions.

"that hypothesis alone" is sort of odd. You can imagine hypothesizing a sort of clockwork universe. "The creator has glued all of the galaxies to mysterious mounting-pegs, and then arranged some unseen clockworks to move the pegs apart according to some formula." Sure, in that case there are very few predictions. But nobody (I hope not you) seems to hypothesize that.

The more sensible hypothesis is "things are moving apart governed by some regular laws of motion". And here your statement is wrong. If you hypothesize that the law of motion is "the usual one", i.e. GR, which seems parsimonious, you get a very tightly constrained world in which to make predictions---indeed, under this assumption, any initial-condition hypothesis you wish to make can be easily turned into a suite of predictions.

One very well-known one is that the surface brightness of objects drops as (1+z)^3. Equivalently, it makes quantitative predictions about the apparent size of objects of a given luminosity.

That is not a generic expansion hypothesis. That is a very specific expansion hypothesis---it corresponds to the hypothesis that things are flying apart (to use Newtonian language) without being decelerated by (e.g.) their mutual gravitational attractions. That sounds like the sort of thing we should be testing rather than assuming.

My colleagues and I assumed z, redshift, is linearly proportional to distance at all distances (as we know it is at small z).

What an odd assumption. What actual physics does this correspond to? Do you suppose that gravity is just "turned off" and unable to affect large-scale structure?

This relationship fits the data set of apparent magnitudes vs redshift of the supernova Ia data just as well as the LCDM model does, and it is almost mathematically indistinguishable from those predictions for that data set.

"almost" is doing a lot of work in that sentence. Your expansion history is the "empty universe" one, and yes it's been tested. It's known to be close to the data but it is NOT a match. It corresponds precisely to the "omegaM = omegaL = 0" hypothesis in mainstream cosmology, which is ruled out at high confidence on the supernova data alone. (Note: these contours include systematic errors. If you think it's "almost" a match based on statistical errors alone, you're even more wrong.)

http://supernova.lbl.gov/Union/figures/Union2.1_Om-Ol_systematics_slide.pdf
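For reference, that Omega_M = Omega_L = 0 ("empty", or Milne) expansion history has a closed form, which makes the comparison easy to state:

$$d_L^{\mathrm{empty}}(z) \;=\; \frac{c}{H_0}\, z\left(1+\frac{z}{2}\right),$$

and it stays within roughly a couple of tenths of a magnitude of flat LCDM over the SN Ia redshift range. Close enough to look like a "coincidence", but the modern compilations resolve differences of that size, which is why the contours exclude it.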

It however has the Occam’s razor advantage that it fits the data set using only one adjustable parameter—the Hubble constant—while LCDM requires 3 adjustable parameters—H, the density of matter (including dark matter) and the energy density of “dark energy”. If you accept the LCDM model, the fact that the non-expanding model with linear Hubble relation fits just as well has to be considered a big coincidence.

Part of this is the well-known "cosmic coincidence"---why are omega_M and omega_L anywhere in the same ballpark? 0.3 and 0.7? Why aren't they, say, 0.01 and 0.99? Or 10^-9 and 0.9999999? Nobody knows and this is an active debate. But for your purposes, "hey the supernovae don't rule me out too badly" is precisely this one-parameter coincidence. Any cosmology for which "the blue supernova blob is nearer the middle than the corners" will prompt your claim of a coincidence. (The actual "it's a coincidence" parameter moves the blue blob back and forth along the black "flat-space" line. You mention that LCDM has an extra fit parameter---and indeed it does, constraining the blob to lie near the line rather than far from it---apparently confirming the predictions of inflation.)

Also, I repeat, given that we know that the Universe isn't devoid of matter, what makes you think a zero-deceleration, Omega_M=0 model is parsimonious? Do you have a hypothesis telling us to turn gravitational attraction off? We know galaxies are massive, right?

The data set of disk galaxies, discussed in our published paper, and the data set of elliptical galaxies, taken from others work and used in this presentation, both show no change in surface brightness with distance. So the simple, no-parameter prediction of the non-expanding hypothesis is confirmed with these two data sets.

Discussed in your published paper which arbitrarily assumes it can treat galaxies as standard candles. Serious file-drawer effect here, Eric: if your analysis had concluded that there was surface brightness evolution, you'd have said "oops, I guess those weren't good candles". For all we know you did that with a bunch of different datasets.

So, to fit these two data sets, the non-expanding hypothesis takes no free parameters,

a) Your decision to turn off gravity (or set Omega-M=0) is a parameter choice, Eric.

b) These are not "two data sets", they are two different candles measuring a single expansion history. They are degenerate in the Bayesian sense.

c) There are dozens of astrophysical systems which in principle can test cosmological theories. You chose one, i.e. the late-time redshift-distance relation. In stats this raises the issue of "p-hacking". If you have dozens of tests to choose from, it's easy to find one that happens to sort-of-match. Saying "I found a test where my theory is only ruled out at 99%, could be worse, so nyah nyah" is not particularly surprising, and does not make me excited about your theory.
 
Regarding the "anomalous" large-scale structures, the Clowes 2013 discovery was in the news at the time and is not terribly convincing. Accidental structure of this type is indeed created by statistically-homogenous data all the time. http://arxiv.org/abs/1306.1700 does the analysis:

I show that the algorithm used to identify the Huge-LQG regularly finds even larger clusters of points, extending over Gpc scales, in explicitly homogeneous simulations of a Poisson point process with the same density as the quasar catalogue.

The 7Li abundance deficit is very, very well known and is in my mind the "most serious" problem with LCDM cosmology. CMB alignments are still very much up in the air. I read the Portinari paper and, wow, that is an incredibly roundabout way of estimating helium abundances, and exactly no one seems to think it's telling us anything about the cosmological He abundance (which is measured well elsewhere)---where did you fish that up? Don't tell me you just pulled out Figure 5 and called it a "helium abundance measurement" or something?
 
Impressive amount of work there ben m. Thanks for doing that for the education of all of us without an agenda. It is important that we (OK, you, in this case) don't let pseudo-science and innumeracy turn the world of science into some sort of idiocracy by letting the crackpots have free rein on the internet. You'll have no effect on the current batch of time-wasters, of course, but there must be hope that you'll deter some of the waverers from joining their ranks.
 
Can one of you explain to everyone how a model (expanding-universe) that requires four ad-hoc parameters to fit these data sets is superior to a model (non-expanding universe) that requires no free parameters to fit the same data? Science is about prediction. Any set of data can be fit by any model post-hoc, with enough free variables. To be useful, a scientific theory must be able to predict data ahead of time, not just fit the data afterwards with an ever-expanding list of free parameters.

Sandage’s papers, which are addressed in our paper, set up a tired light theory with a wrong relationship of redshift to distance, not a linear one. As we show, with the right (linear) relationship, his data are a good fit to our non-expanding model—with no free parameters.

How does a linear relationship make sense? As I understand it, a redshift z corresponds to a frequency ratio of 1/(1+z). So if at distance d, z=1 and at distance 2d, z=2 (in a static model, so this relationship is presumably not supposed to change with time), what happens to a photon emitted from a galaxy at distance 2d on its way here? If it starts with frequency f, wouldn't it have frequency f/(1+1)=f/2 after distance d (half way) and so f/4 when it arrives, corresponding to z=3 rather than z=2?
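In symbols: any local, per-distance energy-loss mechanism composes multiplicatively, so

$$1+z_{\mathrm{tot}} = (1+z_1)(1+z_2) \;\Rightarrow\; z(2d) = \bigl(1+z(d)\bigr)^{2} - 1 = 3 \quad \text{for } z(d)=1,$$

and a constant fractional loss per unit distance gives $1+z = e^{H_0 d/c}$, i.e. z grows exponentially with distance, not linearly.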

Maybe I'm missing something but it seems that the concordance model is internally consistent and yours isn't.
 
Also you have to answer questions not dance around them.
Exactly. I suggested a similar policy here.

I think all of this nonsense has been dealt with by Brian Koberlein in the links I posted earlier. Unless anybody else has some new evidence, it's kind of dead in the water.
That won't stop people putting a few volts through it and causing a few twitches.
 
JeanTate said:
I will try to find the actual papers to which the refs in the presentation seem to refer.
Here's what I found:

<snip>

Here are the references I noted, not including the SB test (for which the main reference is your recent paper, Eric L):

Li declines with Fe, < 0.03 BBN prediction: Sbordone+ (2012), Hansen+ (2015)
Sbordone+ (2012): "Lithium abundances in extremely metal-poor turn-off stars" The figure in Eric L's presentation seems to be ~the same as Figure 4 in this paper.

Hansen+ (2015): "An Elemental Assay of Very, Extremely, and Ultra-metal-poor Stars"

He also far too low in local stars: Portinari, Casagrande, Flynn (2010)
Portinari, Casagrande, Flynn (2010): "Revisiting ΔY/ΔZ from multiple main sequences in globular clusters: insight from nearby stars"

LCDM predicts 3x too much DM: I.D. Karachentsev, Astrophys. Bull. 67, 123-134
Karachentsev (2012): "Missing dark matter in the local universe"

>200 Mpc LSS takes far too long to form for BB: Clowes+ (2012)
Clowes+ (2012): "Two close large quasar groups of size ~350 Mpc at z ~1.2"

There's also Clowes+ (2013), which ben m cited: "A structure in the early Universe at z ~1.3 that exceeds the homogeneity scale of the R-W concordance cosmology"

CBR alignments (etc): (no refs)

Evidence indicates scattering/abs of RF radiation in local universe: Atrophys & SS, 1993
Nothing for the former, obviously.

"Atrophys & SS" seems to be a typo; perhaps Eric L meant "Astrophysics and Space Science", the journal?

If so, then there were 12 volumes published in 1993, from 199 to 210. I do not intend to find out which paper, or papers, Eric L is referring to.

Free Parameters exceed measurements: Disney? (voiceover, not slide)
I did not try to track this down.

<snip>

Have I copied the references correctly, Eric L?
Have I correctly identified the references, Eric L?
 
Can one of you explain to everyone how a model (expanding-universe) that requires four ad-hoc parameters to fit these data sets is superior to a model (non-expanding universe) that requires no free parameters to fit the same data? Science is about prediction. Any set of data can be fit by any model post-hoc, with enough free variables. To be useful, a scientific theory must be able to predict data ahead of time, not just fit the data afterwards with an ever-expanding list of free parameters.

This exhibits an incredibly poor understanding of physics and deduction. If the actual stars/galaxies in the Universe are observed to be doing X right now, we should be able to say why. What initial conditions Y, evolving under what laws of physics Z, can lead to X being observed?

If it so happens, by good luck, that your first guess was right---"Oh, hey, if I plug in a boring and obvious Y and the simplest possible approximation of Z, I predict X exactly"---great. If Y+Z-->X doesn't work on the first try, your next job is necessarily parameterized hypothesizing. That is how we discover new things---by following up on theory/experiment disagreements and hypothesizing that something you hadn't previously thought of or known about might be at work.

Please note that the Hubble curve was not fit with free shape-determining parameters. We did not take a curve that "should have been straight", notice that straight line was a poor fit, and throw in unmotivated quadratic and quartic terms. (That is the only case where your parameter-counting argument is meaningful.) The Supernova Cosmology Project specifically set out to measure Omega_L and Omega_M, which we knew were physically-meaningful quantities; and whose value we didn't have some magic way of guessing.

Anyone who claims to "know" Omega_M without trying to measure it is delusional. Of course we try to measure it! Of course we hypothesize that it could take any value, and compare these hypotheses with the data! How the heck else are we supposed to know it?

Please note that this parameter-counting argument only sounds sensible for a millisecond because of your choice to argue about the Hubble curve as an isolated mystery-function, fit only with mystery-parameters invented out of nowhere, to which we later ascribed dark names and mystery meanings. That's just not true. They're physically motivated quantities and we've overconstrained the heck out of them via different effects on different gravitating systems at different redshifts.

(An analogy: The Earth's curvature can be measured by pointing out how you can stand on the ground, watch the sun set, then quickly ascend a tall building and see it set again. You draw a diagram and show how the sunset-time-vs-height curve looks for different possible Earth radii. "I don't like how you invented a whole new "radius" parameter to explain these tiny time differences. Of course adding arbitrary parameters improves the fit. Flat-Earth theory has zero parameters.")

So, no, I am absolutely not impressed by the small number of parameters in your fit. Your simple theory describes a straight line, the data are not quite a straight line, so we're forced to look for other physical phenomena (either different initial conditions or different laws to govern the evolution) that explain why not.
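For what it's worth, statisticians formalized "parameter counting" long ago: you compare models with a penalized likelihood, not a raw parameter tally. One standard choice is the Bayesian information criterion,

$$\mathrm{BIC} = k \ln n \;-\; 2 \ln \hat{L},$$

where k is the number of fitted parameters, n the number of data points, and $\hat{L}$ the maximized likelihood. A three-parameter model that fits the data far better than a one-parameter model wins that comparison; extra parameters only "cheat" when they don't buy a correspondingly better fit.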
 
I did not try to track this down.

Oh, wait, I recognize that one. It's got to be Mike Disney, "The Case Against Cosmology", http://arxiv.org/abs/astro-ph/0009020

Disney, speaking in 2000 very shortly after the supernova data releases (it's a very informal paper; I think it's a talk writeup), was frustrated with people treating cosmology as known and fully constrained. Here is his intro: (bolding mine)

As an example of this triumphalist approach consider the following conclusion from Hu et al. [1] to a preview of the results they expect from spacecraft such as MAP and PLANCK designed to map the Cosmic Background Radiations: “. . . we will establish the cosmological model as securely as the Standard Model of elementary particles. We will then know as much, or even more, about the early Universe and its contents as we do about the fundamental constituents of matter”.
We believe the most charitable thing that can be said of such statements is that they are naive in the extreme and betray a complete lack of understanding of history, of the huge difference between an observational and an experimental science, and of the peculiar limitations of cosmology as a scientific discipline. By building up expectations that cannot be realised, such statements do a disservice not only to astronomy and to particle physics but they could ultimately do harm to the wider respect in which the whole scientific approach is held. As such, they must not go unchallenged.

You know what MAP (later called WMAP) and Planck saw in their data? Spectacularly-detailed agreement with LCDM. The expectation was precisely realized; as it turned out, cosmologists using the LCDM initial conditions had made extraordinarily precise predictions of CMB features. Disney has not updated his criticisms in any way, and neither has anyone else. That was 15 years ago, in a rapidly changing field. He was advocating for caution, and people were basically being cautious already, and none of his concerns were borne out by the data.

There is no parameter-counting argument against LCDM cosmology. It's terrifically overconstrained.
 
ETA: Your irrelevant paper is also a tired light model. You invoke an unspecified mechanism to replicate Hubble's Law. That is what tired light does.
Another point about your paper containing no evidence against concordance cosmology, Eric L: It contains no evidence for a realistic cosmology :eye-poppi!
This is because of the selection of a static Euclidean model. One thing we know about the universe that we live in is that it is not Euclidean. So what the paper tests is a toy cosmological model, maybe selected to make calculations easier.
We might argue that a Euclidean model is physical because analysis of the WMAP and Planck data indicates that the universe is probably spatially flat. But that analysis uses the concordance model.
 
Just curious RC. How do you know that?
I'm not RC, but it's an easy question to answer.

First, some key assumptions:
  • the 'laws of physics' are universal
  • General Relativity is one such 'law of physics'
  • the universe contains mass

With these three, it follows - inevitably - that the universe that we live is not Euclidean. Would you like me (or someone) to walk you through how the 'non-Euclidean' conclusion follows from the three assumptions, hecd2?

Now there's not much we can do about the first assumption (the 'laws of physics' are universal): short of sending fully-equipped physics labs to every point in the universe, to confirm, empirically, that this assumption is valid, how could it be robustly tested?

The second (General Relativity is one such 'law of physics') need not be rigorously true; it only needs to be as good a description of what we 'see' as any alternative (well, it's a bit more complicated than that, but that'll do as a shorthand).

The third (the universe contains mass) is obviously true.
 
