JEROME DA GNOME
David, there is no good way to address the sampling issue. There is a huge dataset full of objects. Almost all of them obey Hubble's law, but there are a few anomalies. If you take one of those anomalies and ask "what's the probability this happened by chance?", the answer will be very, very small (that is what's called a posteriori statistics, and it's wrong and misleading). But if you instead ask "what's the probability there will be some anomalies?", the answer is basically 1.
Somewhere in between those two questions is the correct one to ask. The second question isn't satisfactory because it would lead you to ignore real interesting anomalies, but neither is the first, because it lends false significance to chance events.
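The gap between those two questions is easy to put in numbers. Here is a toy sketch (the catalogue size and per-object probability are made-up illustrative values, not figures from any real survey):

```python
import math

# Toy numbers (assumptions for illustration only): a catalogue of
# N objects, each with a small chance p of showing a large deviation
# from the Hubble law purely by chance.
N = 1_000_000_000   # one billion objects
p = 1e-8            # per-object chance of a "large anomaly"

# A posteriori question: "what's the chance THIS object deviated by chance?"
p_single = p

# Better question: "what's the chance SOME object in the catalogue deviates?"
# P(at least one) = 1 - (1 - p)^N, well approximated by 1 - exp(-N*p).
p_any = 1 - math.exp(-N * p)

print(f"P(this particular object): {p_single:.1e}")   # tiny
print(f"P(at least one anomaly):   {p_any:.4f}")      # essentially 1
```

With these numbers the single-object probability is 10^-8, while the probability of seeing at least one anomaly somewhere is about 0.99995 — which is why picking out an anomaly after the fact and quoting the first number is misleading.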
In this case, the probability that the big bang model is wrong is ridiculously small. Nearly every object in the universe obeys a Hubble law, and anomalies are both expected and predicted by big bang theory. No object has precisely its Hubble velocity; the differences are called peculiar velocities. Out of the billions of objects we see, a few will have large peculiar velocities. So that's one possible explanation. Another is that we are wrong about how far away these objects are. In astronomy, measuring distance is extremely difficult, but without it you can't determine whether there's an anomaly (because Hubble's law relates distance to velocity, and hence to redshift).
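The peculiar-velocity point can be sketched with rough numbers (the distance and peculiar velocities below are assumptions chosen for illustration; only the value of the Hubble constant is a commonly quoted measurement):

```python
# Observed recession velocity = Hubble flow + peculiar velocity:
#   v_obs = H0 * d + v_pec
H0 = 70.0           # km/s/Mpc, roughly the measured Hubble constant
d = 10.0            # Mpc, an assumed distance to a nearby object
v_hubble = H0 * d   # 700 km/s: the pure Hubble-law prediction

# Typical peculiar velocities are a few hundred km/s; a rare object
# can reach ~1000 km/s, enough to swamp the Hubble flow at small d.
for v_pec in (0.0, 300.0, 1000.0):
    v_obs = v_hubble + v_pec
    print(f"v_pec={v_pec:6.0f} km/s -> v_obs={v_obs:6.0f} km/s "
          f"({100 * v_pec / v_hubble:.0f}% deviation)")
```

Note that the same peculiar velocity matters less and less at larger distances, since the Hubble term H0*d grows while v_pec stays a few hundred km/s — which is why anomalies of this kind show up mostly for nearby objects.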
Furthermore, the theory does a superb job explaining other observations too, such as the cosmic microwave background; it's consistent with particle physics; and it's predicted by general relativity (which we know independently to be correct). There is no alternative theory that can explain those things.
How do you know that the redshift is correctly measuring time and distance?