Arp objects, QSOs, Statistics

David, there is no good way to address the sampling issue. There is a huge dataset full of objects. Almost all of them obey Hubble's law, but there are a few anomalies. If you take one of those anomalies and ask, what's the probability this happened by chance, it will be very very small (that is what's called a posteriori statistics, and it's wrong and misleading). But if you only ask, what's the probability there will be some anomalies, it's basically 1.

Somewhere in between those two questions is the correct one to ask. The second question isn't satisfactory because it would lead you to ignore real interesting anomalies, but neither is the first, because it lends false significance to chance events.

In this case, the probability that the big bang model is wrong is ridiculously small. Nearly every object in the universe obeys a Hubble law, and anomalies are both expected and predicted from big bang theory. No object has precisely its Hubble velocity, and the differences are called peculiar velocities. A few of the billions of objects we see will have large peculiar velocities. So that's one possible explanation. Another is that they are wrong about how far away these things are. In astro measuring distance is extremely difficult, but without it you can't determine whether there's an anomaly (because Hubble relates distance to velocity, and hence redshift).

Furthermore the theory does a superb job explaining other observations too, such as the cosmic microwave background, it's consistent with particle physics, and it's predicted by general relativity (which we know independently is correct). There is no alternative theory that can explain those things.
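To make the point about a posteriori statistics concrete, here is a minimal sketch (purely hypothetical numbers, not real survey data) of why an individually very unlikely anomaly is nonetheless almost guaranteed to turn up somewhere in a large enough catalogue:

```python
# Sketch: individually rare events are collectively near-certain in a big catalogue.
# Hypothetical numbers only -- not real survey data.

p_single = 1e-6      # chance that any one pre-selected object shows a "large anomaly"
n_objects = 10**9    # size of the catalogue

# Probability that one particular, pre-selected object is anomalous: tiny.
print(f"P(this particular object is anomalous) = {p_single:.1e}")

# Probability that at least one object in the whole catalogue is anomalous:
# 1 - (1 - p)^N, which is essentially 1 whenever N*p >> 1.
p_at_least_one = 1 - (1 - p_single) ** n_objects
print(f"P(some anomaly exists in the catalogue) = {p_at_least_one:.6f}")

# Expected number of anomalies: N*p, so "a few anomalies" is guaranteed here.
print(f"expected number of anomalies = {p_single * n_objects:.0f}")
```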



How do you know that the redshift is correctly measuring time and distance?
 
sol invictus said:
David, there is no good way to address the sampling issue. There is a huge dataset full of objects. Almost all of them obey Hubble's law, but there are a few anomalies. If you take one of those anomalies and ask, what's the probability this happened by chance, it will be very very small (that is what's called a posteriori statistics, and it's wrong and misleading). But if you only ask, what's the probability there will be some anomalies, it's basically 1.

Somewhere in between those two questions is the correct one to ask. The second question isn't satisfactory because it would lead you to ignore real interesting anomalies, but neither is the first, because it lends false significance to chance events.

In this case, the probability that the big bang model is wrong is ridiculously small. Nearly every object in the universe obeys a Hubble law, and anomalies are both expected and predicted from big bang theory. No object has precisely its Hubble velocity, and the differences are called peculiar velocities. A few of the billions of objects we see will have large peculiar velocities. So that's one possible explanation. Another is that they are wrong about how far away these things are. In astro measuring distance is extremely difficult, but without it you can't determine whether there's an anomaly (because Hubble relates distance to velocity, and hence redshift).

Furthermore the theory does a superb job explaining other observations too, such as the cosmic microwave background, it's consistent with particle physics, and it's predicted by general relativity (which we know independently is correct). There is no alternative theory that can explain those things.
How do you know that the redshift is correctly measuring time and distance?
Welcome to this thread, JEROME! :)

I'm puzzled though, what does your question have to do with the focus of this thread?

And on the question itself, "redshift" does not "measure time and distance"! :p

Would you like a brief introduction to the relevant physics and astronomy? If so, I'll start a new thread on just that topic. :D
 
Boy, are you gonna regret that ...
I hope not!

As BeAChooser has apparently left us, we have no one who can tell us how to do the calculations concerning the 'probability' of finding one configuration or another, and no one who can say if the results I got by applying what I think is BAC's approach (to calculating probabilities) is correct or not. I trust that JEROME can step up to the plate and provide some statistical meat to the poor bones we have to work with at the moment ...
 
Welcome to this thread, JEROME! :)
:)

I'm puzzled though, what does your question have to do with the focus of this thread?

Did you read what I quoted? It was referencing Hubble's law.

And on the question itself, "redshift" does not "measure time and distance"! :p

You may want to read up on Hubble's law.

Would you like a brief introduction to the relevant physics and astronomy? If so, I'll start a new thread on just that topic. :D

You have it backwards.
 
... snip ...
DeiRenDopa said:
I'm puzzled though, what does your question have to do with the focus of this thread?
Did you read what I quoted? It was referencing Hubble's law.
Indeed.

You did not answer my question - what has this got to do with the focus of this thread?

Please read the OP to get a good idea of what it's about - the focus is quite narrow (Arp, use of statistics, QSO/quasar-galaxy associations, that sort of thing).
And on the question itself, "redshift" does not "measure time and distance"!
You may want to read up on Hubble's law.
Would you like a brief introduction to the relevant physics and astronomy? If so, I'll start a new thread on just that topic.
You have it backwards.
Ah yes, I see now ...

OK, I'll start a new thread on the topic later today - I hope you can join! :D

ETA: done - it's called What is the observational evidence for the Hubble relationship?
 
Set FOUR: 0.217353, 0.252693, 0.342904, 0.362706, 0.537829, 1.05378, 1.1014, 1.16468, 1.45072, 1.54642, 1.56929.
Here are the 'BAC probabilities' I calculate:

"Amaik peaks": 5.7x10-7
"Karlsson peaks": 3.6x10-5
"regular peaks": 5.7x10-7
"DRDS peaks": 8.2x10-8
Set FIVE: 0.365631, 0.7015, 0.746949, 0.937404, 0.963945, 1.03822, 1.1356, 1.2532, 1.37193, 1.73417, 1.8172, 1.86347, 2.1246, 2.3712.
Here are the 'BAC probabilities' I calculate:

"Amaik peaks": 1.9x10-5
"Karlsson peaks": 2.3x10-6
"regular peaks": 2.9x10-9
"DRDS peaks": 8.0x10-6
Now that BeAChooser has returned to posting in the JREF forum, I look forward to his confirming (or not) the correctness of these calculations ...
 
Hi Jerome!

Well, it looks as though the luminosity of Cepheid variables is related to the redshift.

But Arp says he has objects where it would not be, based upon a wishful association without any statistical meaning.

Bye Jerome, please return when you want to address how Arp's association has any more meaning than my finger pointing at the moon. They could run the representative samples any time they want now.

Then DRD raises a great question in relation to this topic: what is a QSO?
 
Hi Jerome!

Well, it looks as though the luminosity of Cepheid variables is related to the redshift.

The variables have a period-luminosity relation (though I can't remember which way round it goes). We can measure the flux at Earth and look at how it varies over time. We can then deduce the luminosity. If we know the luminosity and (mean, I guess) flux then it's just simple geometry to work out the distance. This can then be plotted against redshift and Robert's your father's brother.
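As a minimal sketch of the geometry just described (the luminosity, flux, and H0 values below are made up for illustration, not taken from any real Cepheid):

```python
import math

# Sketch of the Cepheid distance chain, with made-up numbers.
# Assumed: luminosity L from a period-luminosity calibration, measured mean flux F.

L = 1e30          # hypothetical luminosity in watts (from the P-L relation)
F = 2.1e-19       # hypothetical mean flux at Earth in W/m^2

# Inverse-square law: F = L / (4*pi*d^2)  =>  d = sqrt(L / (4*pi*F))
d_m = math.sqrt(L / (4 * math.pi * F))
d_mpc = d_m / 3.086e22           # metres per megaparsec
print(f"distance ~ {d_mpc:.1f} Mpc")

# Hubble's law (v = H0 * d) then predicts a recession velocity, hence a redshift.
H0 = 70.0                        # km/s/Mpc, a round illustrative value
v = H0 * d_mpc                   # km/s
z = v / 3.0e5                    # small-z approximation: z ~ v/c
print(f"predicted v ~ {v:.0f} km/s, z ~ {z:.4f}")
```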
 
Yeah, I was just yanking on Jerome's chain.

And as DRD has pointed out there are other ways that converge on the ratio as well. The Cepheids are very cool; mass is related to period and luminosity, if I remember correctly.
 
Set SIX: 0.267251, 0.725625, 0.934802, 1.08559, 1.19983, 1.29071, 1.45628, 1.69317, 1.78238, 2.03665, 2.10012.
Here are the 'BAC probabilities' I calculate:

"Amaik peaks": 0.00020

"Karlsson peaks": 0.0031

"regular peaks": 0.00011

"DRDS peaks": 0.00014
Set SEVEN: 0.071838, 0.198392, 0.487895, 0.718565, 0.962793, 1.10984, 1.2344, 1.37592, 1.39097, 1.45664, 1.47775, 1.67076, 1.78836, 1.81664, 1.85583, 1.93462, 2.13762, 2.50297, 4.43636.
Here are the 'BAC probabilities' I calculate (note that I have dropped the last object, the one with a redshift of 4.43636, from the calculation, per BAC):

"Amaik peaks": 1.8x10-5
"Karlsson peaks": 1.1x10-9
"regular peaks": 1.8x10-8
"DRDS peaks": 5.5x10-10
Out of curiosity, I'll try calculating the 'BAC probabilities' for some of the 'Arp et al.' cases BeAChooser introduced in this thread, both as per the data in the papers and with contemporary data on quasars/QSOs within 30' of the bright, low-z spiral galaxies.
 
Sophistry without answering a direct question.

Evidence of obfuscation noted.

:gnome:

Lack of defense of a sampling bias in statistics, lack of critical argument, lack of anything in terms of critical thought. Please continue to show that you can't defend Arp's use of statistics and that all you can do is name-call.

The :gnome: is yours Jerome.

:gnome::gnome::gnome::gnome::gnome::gnome::gnome: x 10^23, a whole mole of gnomes.

This is weak argumentation at its worst, Jerome; there are pages after pages of critical discussion and you can't manage a little finger of critical thought.

Why not try reading the first five pages and offering a critical defense of Arp's use of statistics?

Can't or won't.
 
My attempt at doing 'the BAC calculations' with the 'quasars of NGC 5985'.

Recall BAC's post#329, on NGC 5985 (note that the calculation he details there is wrong):
In this case, observed z = 0.69, 0.81, 1.90, 1.97, 2.13 according to [source] ... Of the above 5 quasars, 4 are aligned.
He didn't say which wasn't aligned, but it's the one with z = 1.90.

Here are the 'BAC probabilities' I calculate for these four:

"Amaik peaks": 0.13

"Karlsson peaks": 0.0047

"regular peaks": 0.0096

"DRDS peaks": 0.068

Note that I used n_k = 6, not 7.

NED gives a whole lot of other "QSOs" within 60' of NGC 5985; in addition to the 5 in BAC's post, and omitting likely duplicates, the rest have redshifts as follows: 0.159846, 0.308804, 0.44584, 0.853491, 1.51803, 1.53915, 1.71782, 1.76805, 1.78098, 1.968, 2.132, 2.18131, 2.53132, 2.59017, 3.059, 3.878.

The following are 'predominantly aligned' with NGC 5985's major axis: 1.51803, 1.53915, 2.59017, and 3.059.

Here are the 'BAC probabilities' I calculate for these four:

"Amaik peaks": 0.023

"Karlsson peaks": 0.075

"regular peaks": 0.0031

"DRDS peaks": 0.017

Note that I used n_k = 6, not 7.

Next, a look at NGC 2639's quasars.
 
My attempt at doing 'the BAC calculations' with the 'quasars of NGC 2639'.

Recall BAC's post#329, on NGC 2639 (note that the calculation he details there is wrong):
In this case, [source] identifies observed quasar z = 0.305, 0.323, 0.337, 0.352, 1.304, 2.63 [...] Of the above 6 quasars, 5 are aligned. The paper also mentions some 3 other x-ray sources lying along the axis but I shall ignore them.
Here things are a bit tricky.

First, like BAC, I ignored the x-ray sources - you need measured redshifts to do 'the BAC calculations'.

Second, the object with a z of 0.337 is listed in NED as a galaxy, not a QSO.

Third, NED seems to have no object, within 60' of NGC 2639, with a z of 2.63.

Fourth, the 'alignment' seems rather odd - of the 4 remaining quasars, only one is predominantly along the minor axis, and one of the other 3 isn't really 'predominantly along' the major axis!

Nevertheless, here are the 'BAC probabilities' I calculate for objects with these five z's (0.305, 0.323, 0.352, 1.304, 2.63):

"Amaik peaks": 0.037

"Karlsson peaks": 6.5x10-6
"regular peaks": 0 (yep, a probability of zero!)

"DRDS peaks": 0.0040

Note that I used n_k = 6, not 7.

Now NED lists 33 "QSOs" within 60' of NGC 2639; of these, seven are 'predominantly along' the minor axis; they have redshifts of 0.219364, 0.354017 (this is the 0.352 one in BAC's list), 0.9392, 1.12783, 1.55048, 1.57886, and 2.

Here are the 'BAC probabilities' I calculate for these seven objects:

"Amaik peaks": 0.00018

"Karlsson peaks": 0.00024

"regular peaks": 1.3x10-5
"DRDS peaks": 2.8x10-5
Note that this time I used n_k = 7.

There are also 7 QSOs listed in NED as being predominantly along the major axis; they have redshifts of 0.305077, 1.30402 (these two are in BAC's list), 1.54253, 1.90763, 2.03032, 2.07549, and 2.89004.

Here are the 'BAC probabilities' I calculate for these seven objects:

"Amaik peaks": 0.074

"Karlsson peaks": 0.00012

"regular peaks": 0.0017

"DRDS peaks": 0.0022

Note that this time I again used n_k = 7.

Next, a comment on NGC 1068.
 
Re NGC 1068.

Recall BAC's post#329, on NGC 1068:
Recall that [source1] and [source2] and [source3] collectively list 12 quasars with z =
[...] Since I don't really know the alignment of these quasars relative to the minor axis, ...
Oh, what a difference a few years of astronomy surveys make! :p

First, I could match only 10 of the 12 to what's in NED, within 60' of NGC 1068; a read of Burbidge's paper gave me a tentative match to one more.

So, I could have done what BAC couldn't: get the alignments.

But it doesn't really matter much ... NED lists 178 QSOs within 60' of NGC 1068! Perhaps as many as 10 are duplicates or possible mis-identifications (there's a quality flag field that's sometimes populated).

And where are these quasars? Aligned 'predominantly along' the minor axis? Or perhaps along the major axis? Nah ... they seem to be distributed entirely randomly throughout the field (if anyone is interested, I could give you some counts, by bins of position angle).

So I'll leave NGC 1068 there for now.

Similarly, I'll leave NGC 3516 alone for now.

Next, what can we conclude from these calculations? Assuming, of course, that I've done them right ... but perhaps I'll never know; BAC seems to have deserted this thread, as has Wrangler ...
 
Next, what can we conclude from these calculations? Assuming, of course, that I've done them right ... but perhaps I'll never know; BAC seems to have deserted this thread, as has Wrangler ...

I feel bad that I haven't completed my tasks here.

After all, I have been posting in other threads.

I will attempt to at least get my initial calculations done.

I hope we can bring a conclusion to this thread via further discussion, though.

It seems to me that the 'probabilities' that have been calculated are not statistically significant.

In other words, they can't be used to demonstrate some relationship between these QSO's and these galaxies.
 
It seems to me that the 'probabilities' that have been calculated are not statistically significant.

In other words, they can't be used to demonstrate some relationship between these QSO's and these galaxies.
I think that is the real point about the use (misuse?) of statistics in the various papers and by BAC in this thread:
How can we tell whether a probability is significant without something to compare it to?


As an example:


I have a die with a million sides, each with a different number. It may be weighted toward a set of numbers. Let us test it using BAC's methodology (based on Halton Arp, et al):
  1. Throw the die a number of times.
  2. Note that for each throw the probability of that number is low (a million to 1).
  3. Conclude that the die is weighted to that set of numbers because the probabilities are low.
The proper methodology is to throw the die enough times to get a statistically significant sample and compare the statistics to what we would expect from an unweighted die.
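A minimal sketch of that 'compare to an unweighted die' step, using a toy 1000-sided die in place of the million-sided one (all numbers hypothetical):

```python
import random

# Toy version of the weighted-die test (all numbers hypothetical): throw the die
# many times and compare the count of "special" faces with what a fair die predicts.

random.seed(1)
n_faces = 1000               # stand-in for the million-sided die
special = set(range(10))     # the set of faces the die is suspected of favouring
n_throws = 100_000

throws = [random.randrange(n_faces) for _ in range(n_throws)]
observed = sum(1 for t in throws if t in special)
expected = n_throws * len(special) / n_faces

# For a fair die the count of special faces is binomial; what matters is how far
# the observed count sits from `expected` relative to the binomial spread --
# not how improbable any individual throw's face value was.
spread = (expected * (1 - len(special) / n_faces)) ** 0.5
print(f"observed {observed}, expected {expected:.0f} +/- {spread:.0f}")
```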
 
I have a die with a million sides, each with a different number. It may be weighted toward a set of numbers. Let us test it using BAC's methodology (based on Halton Arp, et al):
1. Throw the die a number of times.
2. Note that for each throw the probability of that number is low (a million to 1).
3. Conclude that the die is weighted to that set of numbers because the probabilities are low.
The proper methodology is to throw the die enough times to get a statistically significant sample and compare the statistics to what we would expect from an unweighted die.

RC, it amazes me that after as many posts as I made describing EXACTLY what I was doing, you folks continue to totally misrepresent the methodology and what it means. That's why I've decided to stop wasting my time with you folks.

A better analogy would be to say that there are a million dice scattered about with one number facing up. Each die has a million sides, each with a different number.

Now your side believes those dice give no preference to any given number on the dice. On the other hand, Karlsson, Arp and their associates suggest that the dice are loaded and give preference to certain sides of the dice.

Now suppose you randomly pick a group of dice from that field of dice and observe the number that is facing up on each die. You compute the likelihood of that set of dice having numbers as close as those are to Karlsson's turning up if the dice were not weighted towards specific sides.

Then you multiply that likelihood by the ratio of the total number of dice in the field over that sample to get a final total likelihood, assuming one could look at the entire population of dice.

And then you ask yourself if you feel comfortable getting that specific total likelihood. If the probability is very small, you shouldn't. If it is small, you must ask yourself whether you are extraordinarily lucky or could there perhaps be some justification to the suggestion that the dice are loaded?

And you then sample the field of dice repeatedly computing an expected likelihood in each case. And as the number of cases climbs where you must assume you were VERY lucky to get the result you got, your confidence that the dice aren't loaded should fall further ... if you are being rational.

There is nothing difficult about this logic, RC. I don't understand why you have so much trouble understanding and restating it. :D
 
RC, it amazes me that after as many posts as I made describing EXACTLY what I was doing, you folks continue to totally misrepresent the methodology and what it means. That's why I've decided to stop wasting my time with you folks.

A better analogy would be to say that there are a million dice scattered about with one number facing up. Each die has a million sides, each with a different number.

Now your side believes those dice give no preference to any given number on the dice. On the other hand, Karlsson, Arp and their associates suggest that the dice are loaded and give preference to certain sides of the dice.

Now suppose you randomly pick a group of dice from that field of dice and observe the number that is facing up on each die. You compute the likelihood of that set of dice having numbers as close as those are to Karlsson's turning up if the dice were not weighted towards specific sides.

Then you multiply that likelihood by the ratio of the total number of dice in the field over that sample to get a final total likelihood, assuming one could look at the entire population of dice.

And then you ask yourself if you feel comfortable getting that specific total likelihood. If the probability is very small, you shouldn't. If it is small, you must ask yourself whether you are extraordinarily lucky or could there perhaps be some justification to the suggestion that the dice are loaded?

And you then sample the field of dice repeatedly computing an expected likelihood in each case. And as the number of cases climbs where you must assume you were VERY lucky to get the result you got, your confidence that the dice aren't loaded should fall further ... if you are being rational.

There is nothing difficult about this logic, RC. I don't understand why you have so much trouble understanding and restating it. :D

That is a better analogy.
The problem is that I can only recall papers with examples of low probability samples (usually with a sample size of 1).

Can you give a paper listing the random choices that Arp, etc. made from the observations?

Otherwise their methodology looks like:
  1. Look through the various surveys for interesting (Arp) objects.
  2. See if there are associated QSOs.
  3. If the probability of an association is low then publish a paper.
  4. Go to step 1.
There is no random sampling in this.
 
RC, it amazes me that after as many posts as I made describing EXACTLY what I was doing, you folks continue to totally misrepresent the methodology and what it means. That's why I've decided to stop wasting my time with you folks.

BAC, couldn't you at least give us your opinion on the calculations that DRD (mostly him) and I have made?

Are we doing the calculations correctly?

How would you interpret the results?

If you could respond to the posts presenting our take on your method, it might go a long way to enabling us to reach a profitable conclusion.
 
RC, it amazes me that after as many posts as I made describing EXACTLY what I was doing, you folks continue to totally misrepresent the methodology and what it means. That's why I've decided to stop wasting my time with you folks.

A better analogy would be to say that there are a million dice scattered about with one number facing up. Each die has a million sides, each with a different number.

Now your side believes those dice give no preference to any given number on the dice.
That is not what I have said at all; it just shows that you can't answer the issue of what sample bias or sample error might be, so you throw up a smoke screen.

Even though sample bias was listed in my first and second posts. Here we are at page ten and you still haven't addressed it.

I wonder why that is? :D
On the other hand, Karlsson, Arp and their associates suggest that the dice are loaded and give preference to certain sides of the dice.

Now suppose you randomly pick a group of dice from that field of dice and observe the number that is facing up on each die.
Which is NOT what Arp did, he did NOT choose his galaxies randomly.

False analogy!
You compute the likelihood of that set of dice having numbers as close as those are to Karlsson's turning up if the dice were not weighted towards specific sides.

Then you multiply that likelihood by the ratio of the total number of dice in the field over that sample to get a final total likelihood, assuming one could look at the entire population of dice.

And then you ask yourself if you feel comfortable getting that specific total likelihood. If the probability is very small, you shouldn't. If it is small, you must ask yourself whether you are extraordinarily lucky or could there perhaps be some justification to the suggestion that the dice are loaded?
Or you could ask what significance it has, since it can't determine anything. But the fact that you ignore what I have asked you is not surprising; you know it is a gaping hole. Nowhere have you addressed it.

Kind of funny. :)

Your method would have left modern medicine in the lurch. I wonder how much it could tell us about obesity and type II diabetes (zero), I wonder what it could tell us about family history of cancer (zero), and I wonder how it would help find the causes of Alzheimer's (it can't).

You are using methods that are discredited in population samples.

And you are a terrible fibber for saying that Arp randomly looked at his galaxies. Want some porridge, Goldilocks? :D
And you then sample the field of dice repeatedly computing an expected likelihood in each case. And as the number of cases climbs where you must assume you were VERY lucky to get the result you got, your confidence that the dice aren't loaded should fall further ... if you are being rational.
EXCEPT YOU DIDN"T SAMPLE THE FIELD OD DICE. So you have no control group, all you have is a possibility of sample error. You did not choose your sample randomly, you don't have a control group. Might as well be Ganzfeld.
There is nothing difficult about this logic, RC. I don't understand why you have so much trouble understanding and restating it. :D

I wonder why you refuse to address the issue of sampling error?

Hmmmm.

See you later; glad you stopped by; too bad you don't engage in a dialogue.
 
BAC, couldn't you at least give us your opinion on the calculations that DRD (mostly him) and I have made?

Are we doing the calculations correctly?

How would you interpret the results?

If you could respond to the posts presenting our take on your method, it might go a long way to enabling us to reach a profitable conclusion.
Me too.

BeAChooser, you put a lot of time and effort into working out how to calculate these probabilities.

You have said, I think, that you think the approach is correct, in that IF you do the calculations correctly, and IF you interpret the results correctly, then you have a very strong case for saying that at least some quasars are close, in physical 3D space, to some galaxies, AND (or is it OR?) that the quasars which are close (in 3D space) have redshifts close to the Karlsson peaks.

Further, I think, 'you' in the above paragraph can be replaced by 'one', as in 'anyone', and the results will always be the same. In other words, the method is objective, repeatable, and independently verifiable.

Now as I have said, repeatedly, I do not understand the method (and I still don't understand it). However, I think I can do the calculations, but I'm not sure. Wrangler had a go at doing some calculations, and I was able to reproduce the results (including discovering a typo in one of his inputs). That gave me confidence that I was doing the calculations correctly - either BOTH Wrangler and I are doing them right, OR both of us are doing them wrong (there is a third alternative, namely that the results are not repeatable; however I think that we can rule that out, can't we?).

So, before moving to trying to understand how your method works, by trying to interpret the calculated probabilities, would you be so kind as to take a look at what's in the last few posts, and comment on whether the calculations are correct or not?

Thank you in advance.
 
There is no random sampling in this.

You're wrong. I'm willing to bet you that Arp et al. cited every single case they could find where there were large numbers of quasars near galaxies and especially where those quasars seemed to align with some features of the galaxies. That's a random sample of all such cases out there. And I took every one of those cases (that I could find) and calculated what the probability of finding redshifts that close to the Karlsson values would be, assuming that one looked at the entire population of such objects in the sky. And in almost all those cases the probability of seeing those cases was << 1. So you are mistaken. That is a random sample and the probability results should give you pause.
 
You're wrong. I'm willing to bet you that Arp et al. cited every single case they could find where there were large numbers of quasars near galaxies and especially where those quasars seemed to align with some features of the galaxies. That's a random sample of all such cases out there. And I took every one of those cases (that I could find) and calculated what the probability of finding redshifts that close to the Karlsson values would be, assuming that one looked at the entire population of such objects in the sky. And in almost all those cases the probability of seeing those cases was << 1. So you are mistaken. That is a random sample and the probability results should give you pause.


That is not a random sample. A random sample would be to select a number of galaxies at random. In that random sample there would be cases where there are "large numbers of quasars near galaxies and especially where those quasars seemed to align with some features of the galaxies". There would also be cases where that is not true.

Your second sentence confirms that the sample of Arp et al. is a biased sample. They have picked out the cases that confirm their hypothesis.

Look at your previous posting with the analogy of a million thrown dice (each with a million sides). The task is to find out whether the dice are biased toward certain numbers. As you state,
Now suppose you randomly pick a group of dice from that field of dice and observe the number that is facing up on each die.
You compute the likelihood of that set of dice having numbers as close as those are to Karlsson's turning up if the dice were not weighted towards specific sides.
Then you multiply that likelihood by the ratio of the total number of dice in the field over that sample to get a final total likelihood, assuming one could look at the entire population of dice.
And then you ask yourself if you feel comfortable getting that specific total likelihood. If the probability is very small, you shouldn't. If it is small, you must ask yourself whether you are extraordinarily lucky or could there perhaps be some justification to the suggestion that the dice are loaded?
This is the proper method that any statistician would use.

The incorrect method used by Arp et al. is to assume that a specific set of numbers is the bias, pick a group of dice from that field of dice that show those numbers, and calculate the probabilities for the number showing for each die individually. They then get a low probability which they state shows that the dice are biased. This is not surprising since they are only looking at the dice showing the numbers that they are assuming to be the bias.
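To illustrate why that 'incorrect method' gives small numbers even when nothing is going on, here is a minimal sketch with dice that are fair by construction (all numbers hypothetical): selecting only the dice that already show numbers near the assumed peaks guarantees a tiny naive probability.

```python
import math
import random

# Sketch: selection bias with perfectly fair dice (hypothetical numbers).
# Keep only the dice whose up-face happens to sit near an assumed "special" set,
# then compute the naive probability of that selected configuration.

random.seed(2)
n_faces = 1_000_000
special = [100, 5_000, 250_000, 900_000]     # the assumed "peaks"
tolerance = 500

field = [random.randrange(n_faces) for _ in range(1_000_000)]   # a million fair dice

# Step 1 of the flawed recipe: keep only dice near the special numbers.
selected = [f for f in field
            if any(abs(f - s) <= tolerance for s in special)]

# Step 2: per-die probability of landing that close to a special number.
p_per_die = len(special) * (2 * tolerance + 1) / n_faces

# Step 3: multiply the per-die probabilities. The result is astronomically small
# for any decent-sized selection, even though every die is fair by construction --
# so a tiny number here says nothing about whether the dice are weighted.
log10_p = len(selected) * math.log10(p_per_die)
print(f"{len(selected)} dice selected, naive probability = 10^{log10_p:.0f}")
```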
 
You're wrong. I'm willing to bet you that Arp et al. cited every single case they could find where there were large numbers of quasars near galaxies and especially where those quasars seemed to align with some features of the galaxies.
Wow, now we can add to the random-or-not-random debate.

Sure, whatever, BAC; you can't refute the possibility of sample error or sample bias, so now you will spin a yarn about whether the sample is random or not.

You don't know the meaning of the term 'random sample'.

Just keep making stuff up.

:D
That's a random sample of all such cases out there. And I took every one of those cases (that I could find) and calculated what the probability of finding redshifts that close to the Karlsson values would be, assuming that one looked at the entire population of such objects in the sky. And in almost all those cases the probability of seeing those cases was << 1. So you are mistaken. That is a random sample and the probability results should give you pause.

No, what gives me pause is your willingness to just spout whatever nonsense you wish and never address the issue of sample bias.

That is not a random sample; that is a 'selected sample'.

Go ahead tell us another fairy tale. Another of your gnomes BAC. :gnome:
 
I have not really followed this thread, admittedly, so this may have already been discussed, or I may have missed it, but I have a quick question. Sol said "If you take one of those anomalies and ask, what's the probability this happened by chance, it will be very very small (that is what's called a posteriori statistics, and it's wrong and misleading). But if you only ask, what's the probability there will be some anomalies, it's basically 1." I can see what he's getting at, sort of, but surely there is a way to calculate or interpret an "a posteriori probability"?

Now, statistics isn't my strong point, but surely there has to be a way. From a quick look around online I have seen that some people think that you can solve these sorts of problems using Bayes' theorem. If so, has this been used so far in this thread? And how can it be applied to a posteriori statistics? I just want to get a firm ground to work with when I have the time...

http://en.wikipedia.org/wiki/Bayes'_theorem
Bayesian Epistemology

And this quote from Bayes seems fitting (from my and BAC's viewpoint anyway!): "Statistics don't lie… but statisticians sometimes do." :)

When I see the selection of quasars that are seemingly connected and aligned along the filaments of galaxies, it just jumps out at me as being far too much of a coincidence to be chance. Subjective, yes, but that's what I think. The empty space around them is huge, and to me the probability of them being directly aligned with the galaxies by mere chance seems vanishingly small.

And one more thing, surely you can calculate the average density of these objects (the number of them for every square degree) over the entire sky, and compare the probabilities of finding them behind a galaxy to finding them in an open space. Surely, from this, a rough probability of the number of correlations you would expect to see from a random distribution can be worked out?
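On the Bayes' theorem question, here is a minimal sketch (with entirely hypothetical probabilities) of how the theorem connects a small 'chance alignment' probability to the probability that an association is real; the answer depends on the assumed prior and on how likely the data are under the association hypothesis:

```python
# Bayes' theorem sketch with entirely hypothetical numbers.
# H = "this quasar is physically associated with the galaxy"
# D = "we observe this particular alignment/redshift configuration"

prior_H = 0.01            # assumed prior probability of a real association
p_D_given_H = 0.5         # assumed chance of the configuration if H is true
p_D_given_notH = 1e-4     # assumed chance of the configuration by chance projection

# P(H|D) = P(D|H) P(H) / [ P(D|H) P(H) + P(D|~H) P(~H) ]
numerator = p_D_given_H * prior_H
evidence = numerator + p_D_given_notH * (1 - prior_H)
posterior_H = numerator / evidence

print(f"posterior P(association | data) = {posterior_H:.3f}")
# With these inputs the posterior is about 0.98, but halving the prior or raising
# the chance-projection probability changes it sharply -- a small p-value under
# the null does not by itself fix the probability that the hypothesis is true.
```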
 
Um, maybe you should read the first two pages of the thread.

Bayes' theorem is used as an ancillary to frequentist statistics, but this would be a misapplication. I made some cogent arguments that BAC has ignored.

That is the point of this thread: they are not appropriate in this case.

They cannot be used to determine relationships.
 
I have not really followed this thread, admittedly,

... snip ...
Indeed.

So maybe when you have, and can resist the urge to drop yet another load of drive-by spam woo, you might consider actually writing something relevant (in light of what has already been discussed)?
 
Indeed.

So maybe when you have, and can resist the urge to drop yet another load of drive-by spam woo, you might consider actually writing something relevant (in light of what has already been discussed)?


Why thank you for being so polite. If you hadn't noticed, my post is relevant to this topic, giving my thoughts on the issue we're talking about. Sorry it offended you.


And one more thing, surely you can calculate the average density of these objects (the number of them for every square degree) over the entire sky, and compare the probabilities of finding them behind a galaxy to finding them in an open space. Surely, from this, a rough probability of the number of correlations you would expect to see from a random distribution can be worked out?


?
 
... snip ...
And one more thing, surely you can calculate the average density of these objects (the number of them for every square degree) over the entire sky, and compare the probabilities of finding them behind a galaxy to finding them in an open space. Surely, from this, a rough probability of the number of correlations you would expect to see from a random distribution can be worked out?

?
Indeed.

Here you go:
DeiRenDopa said:
So maybe when you have [actually read the material in this thread, like what is written in the posts in it], and can resist the urge to drop yet another load of drive-by spam woo, you might consider actually writing something relevant (in light of what has already been discussed)?
[parts in square brackets added]

HINT: that's what most of this thread is about ...
 
Indeed.

Here you go: [parts in square brackets added]

HINT: that's what most of this thread is about ...


So, from what I've seen (I'm sorry, I don't fancy reading all 512 posts in this thread just because you can't answer a very simple question), you are saying that Arp's observation of this is an example of an a posteriori probability?

I'm not so sure...

Surely, an a posteriori probability quantifies how much you can trust the results presented by some kind of test. Arp was not performing any kind of test. He was simply comparing the observed density of a certain type of object in one place on the sky contrasted with any other location - he did not need to perform any kind of test in order to verify those observations.


[......] An astronomer obtains an image of a highly redshifted object (QSO) that appears to be in front of a low redshifted galaxy. Other astronomers are unconvinced and demand that he should evaluate the a posteriori probability that the QSO is indeed closer to us than the galaxy. In this case, examining data is not a matter of "probabilities" (neither a priori nor a posteriori). It is simply a question of do you believe the evidence or not. If not, then you must be prepared to say why not. Are you accusing the presenter of the evidence of counterfeiting it? Are you saying the QSO is an "artifact" and not really there? To raise probabilistic arguments in cases where the evidence is 'in your face' is simply an evasion. It is dissembling. It is dishonest. When you have prima facie evidence of something, you do not need to initiate a 'test' to determine a posteriori probabilities. It is therefore incorrect to refer to 'a posteriori probabilities' when no test, as such, has been performed.

For example, astronomer Halton Arp has presented a long series of images of unusual concentrations of BL Lac objects relatively near Seyfert galaxies. In order to quantify his observation, Arp calculated the average density of these objects (the number of them per square degree) over the entire sky. He then compared that BL Lac density measurement to their observed densities in small areas centered on Seyfert galaxies. He determined the ratio of those densities, to be > ~10,000. That is to say, the probability of finding a BL Lac near to a Seyfert galaxy is at least 10,000 times greater than the probability of finding one alone in an equal sized area chosen randomly on the open sky.

Throwing around the descriptor 'a posteriori' in a pejorative attempt to belittle Arp's work, clearly demonstrates that the critic either does not understand probability theory – or hopes that we don't.
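For what it's worth, here is a minimal sketch of the kind of density comparison described in that passage, with made-up numbers (the surface density, search radius, and number of searched galaxies are all assumptions for illustration):

```python
import math

# Sketch of the surface-density comparison, with made-up numbers.
# Assumed: an all-sky mean surface density of the objects and a small search area.

density_per_sq_deg = 0.01       # hypothetical all-sky density of the objects
search_radius_deg = 0.5         # hypothetical search radius around each galaxy
area = math.pi * search_radius_deg ** 2

mu = density_per_sq_deg * area  # expected count in one search area

# Poisson probability of finding at least one object near a *pre-selected* galaxy.
p_one_field = 1 - math.exp(-mu)
print(f"expected count per field = {mu:.4f}, P(>=1 in one field) = {p_one_field:.4f}")

# But if many galaxies are searched and only the hits get reported, the chance of
# *some* field containing an object is much larger -- the same a posteriori issue.
n_galaxies_searched = 1000
p_some_field = 1 - math.exp(-mu * n_galaxies_searched)
print(f"P(>=1 hit among {n_galaxies_searched} searched fields) = {p_some_field:.4f}")
```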
 
'Correcting' for the putative ejector, I get the following 'BAC probabilities':

a) Amaik: 0.011; Karlsson: 0.0034; 'regular': 0.0094

b) Amaik: 0.16; Karlsson: 9.2x10^-5; 'regular': 0.0017

Well, my recalculation using the correct Karlsson peaks matches your group a) 'BAC probabilities', but I am having problems with group b).

What is the point, however, if BAC won't comment?

Why don't we just state our unanimous conclusion: the 'BAC probabilities' are just not giving us good, convincing statistical information.
 
... snip ...

I'm not so sure...

Surely, an a posteriori probability quantifies how much you can trust the results presented by some kind of test. Arp was not performing any kind of test. He was simply comparing the observed density of a certain type of object in one place on the sky contrasted with any other location - he did not need to perform any kind of test in order to verify those observations.
Evidence of obfuscation noted (to quote a certain JREF member).

I'm sure BAC will be happy to see that you have joined this thread, ... I'll leave it to him to comment on the extent to which you have just pulled the rug from under his dozens (if not hundreds) of hours of painstaking work ...
 
Evidence of obfuscation noted (to quote a certain JREF member).

I'm sure BAC will be happy to see that you have joined this thread, ... I'll leave it to him to comment on the extent to which you have just pulled the rug from under his dozens (if not hundreds) of hours of painstaking work ...


My obfuscation?

Just posting what I find. And care to comment on it, or are you going to ignore it again?

And I'm quite sure BAC would be happy to point out any errors he feels are in my posts, just as I would with him. We're not the same mind, you know; we don't have to agree on everything!
 
Zeuzzz said:

And one more thing, surely you can calculate the average density of these objects (the number of them for every square degree) over the entire sky, and compare the probabilities of finding them behind a galaxy to finding them in an open space. Surely, from this, a rough probability of the number of correlations you would expect to see from a random distribution can be worked out?
Um, no. That is exactly what this thread is about, but please educate yourself by reading at least the first five pages, or just the first page. I laid out a very simple argument; care to counter?

This kind of probability is based upon the erroneous assumption that QSOs are evenly spaced across the sky.

I think you are pretending innocence, Zeuzzz; surely you know more about the history of modern statistics than to be taken in by such a simple error.

Why isn't this kind of mathematics used in other population-sampling research (like epidemiology)?

Hmmmm?

best wishes and all that stuff.
 
So, from what I've seen (I'm sorry, I don't fancy reading all 512 posts in this thread just because you can't answer a very simple question), you are saying that Arp's observation of this is an example of an a posteriori probability?

I'm not so sure...

Surely, an a posteriori probability quantifies how much you can trust the results presented by some kind of test. Arp was not performing any kind of test. He was simply comparing the observed density of a certain type of object in one place on the sky contrasted with any other location - he did not need to perform any kind of test in order to verify those observations.


You're a sly one, Zeuzzz, but your innocence of mathematics is feigned.

I shall give you the short version, since you want to pretend that you can't even read the first two posts in the thread; really poor form, that. I laid out my argument; you haven't addressed it and are trying to set up your own goalposts.


1. Arp claims there is an association between the Arp galaxies and QSOs.

2. He did not compare his sample to any sort of control sample. (Say a random sample of 'normal' galaxies or random points on the sky)

3. Therefore, without a baseline, you have no idea if the association rises above the noise level of the normative distribution.

I think you debase yourself by pretending that you can't read the first page of the thread and find out what 'sample bias' and 'sample error' are. You are smart enough to grasp the principles of census sampling in less than five minutes.
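A minimal sketch of the control-sample comparison in point 2 of the argument above, using a toy simulated sky (all numbers and positions are hypothetical):

```python
import random

# Sketch of a control-sample comparison on a toy simulated sky (hypothetical numbers):
# count quasars near a set of "target" galaxies and near an equal number of random
# control positions, then compare the two counts.

random.seed(3)

def count_near(centers, objects, radius):
    """Count objects within `radius` of any center (flat 2-D toy sky)."""
    total = 0
    for cx, cy in centers:
        total += sum(1 for ox, oy in objects
                     if (ox - cx) ** 2 + (oy - cy) ** 2 <= radius ** 2)
    return total

# Toy sky: quasars scattered uniformly at random over a 100 x 100 degree patch.
quasars = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(20_000)]

targets = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]
controls = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]

radius = 0.5  # degrees
print("near targets :", count_near(targets, quasars, radius))
print("near controls:", count_near(controls, quasars, radius))
# With no real association, the two counts agree to within Poisson noise;
# a genuine excess around the targets would show up as a significant difference.
```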
 
So, from what I've seen (I'm sorry, I don't fancy reading all 512 posts in this thread just because you can't answer a very simple question), you are saying that Arp's observation of this is an example of an a posteriori probability?

I'm not so sure...

Surely, an a posteriori probability quantifies how much you can trust the results presented by some kind of test. Arp was not performing any kind of test. He was simply comparing the observed density of a certain type of object in one place on the sky contrasted with any other location - he did not need to perform any kind of test in order to verify those observations.


Except for one thing, Zeuzzz: how did he determine the density 'contrasted with any other location'?

Did he sample those locations? If he did, please tell me what they are.

And don't pretend that you don't know the problems of averages when it comes to arrangement of a population.

I detailed this somewhere around page three.

An average density of 6 objects per space?

Well, say you have one hundred such spaces; you could bunch them all (600 objects) in one space out of the hundred, and what do you get?

An average of 6 per space.

You know exactly what I am talking about.

So please refrain from your pissing contest with DRD, and I ask DRD to do the same. I would rather not ask that this thread be moderated.
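A minimal sketch of the clumping point above (100 cells, 600 objects, hypothetical numbers): the mean density is identical whether the objects are spread evenly or bunched into one cell, and only a clustering statistic such as the variance-to-mean ratio distinguishes the two.

```python
from statistics import mean, pvariance

# Sketch: same average density, very different arrangements (hypothetical numbers).

n_cells = 100
uniform = [6] * n_cells                       # 600 objects spread evenly
clumped = [600] + [0] * (n_cells - 1)         # all 600 objects in one cell

for name, counts in [("uniform", uniform), ("clumped", clumped)]:
    m = mean(counts)
    # For a Poisson (random) field the variance-to-mean ratio is ~1;
    # strong clumping drives it far above 1, even though the mean is identical.
    vmr = pvariance(counts) / m
    print(f"{name:8s}: mean = {m:.1f} per cell, variance/mean = {vmr:.1f}")
```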
 
Evidence of obfuscation noted (to quote a certain JREF member).

I'm sure BAC will be happy to see that you have joined this thread, ... I'll leave it to him to comment on the extent to which you have just pulled the rug from under his dozens (if not hundreds) of hours of painstaking work ...


Please avoid the pithy comments.

Attack the argument and say that Zeuzzz has avoided the issue of the thread instead.

Please avoid returning fire.


Cease fire! Both of you.

:)
 
