Now, I want to explain the error I made and correct my methodology accordingly. Let's start by recalling that I claimed the probability of finding r specific values drawn without regard to order from a uniform distribution of n different values is 1/(n!/((n-r)!r!)) = r!/(n!/(n-r)!). But that's wrong.
To find the probability of a set of r specific values picked randomly from a distribution of n different values, we actually need to take the ratio of the number of ways one can pick those r values from the distribution to the number of ways one can pick any r values from the distribution. Right?
For example, if we have a distribution with 5 possible values (call them a,b,c,d,e) and we want the probability of seeing c and d show up in a random draw of 2 values from that pool of 5 possibilities, we first need to find the number of ways we can draw c and d. That turns out to be r! (the two orderings: c then d, or d then c), so the answer is 2 in this case.
Next, we need to divide by the number of ways one can draw ANY 2 values from the 5 possibilities. Note that drawing a value does not eliminate it from the pool. The formula to use here is n^r. So there are 5^2 = 25 ways of drawing 2 values from a pool containing 5 different values.
So the probability of seeing c and d in a single observation in the above example is 2/25 = 0.08 = 8 percent.
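For anyone who wants to check that little example, here is a minimal sketch (mine, not part of the original argument) that brute-forces it in Python, assuming ordered draws with replacement, which is where the n^r count comes from:
[code]
# Brute-force the 5-value example: draw 2 ordered values with replacement
# from {a,b,c,d,e} and count how many of the 5^2 = 25 outcomes contain both c and d.
from itertools import product

pool = ["a", "b", "c", "d", "e"]
outcomes = list(product(pool, repeat=2))              # 25 ordered draws with replacement
hits = [o for o in outcomes if {"c", "d"} <= set(o)]  # (c,d) and (d,c)

print(len(hits), "/", len(outcomes))                  # 2 / 25 = 0.08
[/code]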
So the formula I should have used in my calculation for the probability of seeing r specific values of z picked randomly from a distribution of n different values of z is
P = r!/n^r .
Since n^r > n!/(n-r)!, we know that this probability will be somewhat smaller than what I previously calculated.
Now, instead of the probability of r specific numbers, we want the probability of a set of r observed values falling within a certain distance of those r specific numbers. It is this miss distance that determines the value of n we use.
If all the observations fell the same miss-distance from the r specific values, then the value of n would simply be found as follows:
n = possible range of values / increment ; where increment = 2 * miss distance .
But in reality, each of the r observations will likely fall a different miss-distance from the specific number closest to it. So to handle this, instead of n^r in the denominator of the probability equation, we substitute n_i1 * n_i2 * ... * n_ir, where the subscript i_j indicates that n is derived from the increment that corresponds to the miss-distance for the j-th data point.
Thus, P = r! * (1/n_i1) * (1/n_i2) * ... * (1/n_ir) .
All would be well at this point, except the distribution of quasar redshift, z, between 0 and 3.0 is not really uniform. Mainstream literature indicates that the frequency of z actually rises from a small value (about zero) near z=0 to a maximum at about z=1, then stays roughly constant until it reaches z=3, where it then rather rapidly drops back to very near zero. A uniform assumption about the distribution will overestimate the effect on probability of data points with z<1 and underestimate the effect on probability of data points with z>1 within the range z=0-3.
I treated the distribution as uniform in the previous calculations and demonstrated, when challenged about this, that at least for a few of the cases in question that assumption was likely conservative because of other factors in the calculation that also affected the relative weighting of the individual data points.
But this new form of the probability equation raises an interesting possibility ... that of directly accounting for the non-uniform nature of the mainstream's distribution of z. Suppose we weight the terms associated with specific data points (i.e., (1/n_ij), where j is the data point)? Since these terms are multiplicative, the weights should be applied as exponents (a power law).
The weights should be based on the frequency of each data point relative to the average frequency they would have were they from a uniform distribution instead.
Now in our problem, the frequency rises from zero at z=0 to a maximum at z=1.0 and then stays constant to z=3.0. The area under that frequency distribution should sum to 1 over the entire range giving this equation:
1 = (1/2)*maximum_m + 2*maximum_m ; where the subscript m denotes that this is the non-uniform mainstream distribution of z.
The maximum value of the frequency can then be found:
maximum_m = 1/2.5 = 0.400
Now we find a uniform distribution from 0 to 3 that has the same area.
1 = 3*maximum_u
maximum_u = 0.333
The weights assigned to given z in the non-uniform mainstream distribution depend on where they lie between 0 and 3. A uniform distribution assumption underweights the importance of any z over 1. To correct this, any z over 1 will get a weight of 0.4/0.333 = 1.201 in the analysis. Any z under 1 is overweighted if a uniform distribution is assumed. To correct this, any z under one will get a weight less than 1. At z=0.3 the weighting factor is 0.36 while at z=0.6 the weighting factor is 0.72. The weighting equation can be written thus:
w = z*1.201; w<=1.201; 0<z<=3.0 .
The effect of this when dealing with terms that are all less than one will be to make the final probability smaller if the weight is > 1 and make it larger if the weight is < 1. As it should. This may not be the exact weighting that should apply, but I do think it will serve as a first approximation in dealing with the particular concern.
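Here is a small sketch (in Python; the function name is my own) of that weighting rule, taking w as the ratio of the assumed mainstream frequency at z (0.4*z below z=1, 0.4 above) to the uniform frequency of 0.333:
[code]
# Proposed weight: w = min(z * 1.201, 1.201)
def weight(z: float) -> float:
    return min(z * 1.201, 1.201)

for z in (0.3, 0.6, 1.0, 2.5):
    print(z, round(weight(z), 3))   # 0.36, 0.721, 1.201, 1.201
[/code]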
So, to summarize, my new approach to calculating the probability of seeing r observed z values at any galaxy under study, given the mainstream's assumptions about quasars, will be to find an appropriate increment for each z value, determine an n from each of those increments, determine a weight to apply to each n, then multiply them all together as follows:
P_1G = r! * (1/n_i1)^w1 * (1/n_i2)^w2 * ... * (1/n_ir)^wr
where
n_ik = 3.0/(2 * distance to the nearest Karlsson value) for the k-th z value,
w_k = z_k * 1.201 ; w_k <= 1.201 ; for the k-th z value,
and
r = number of z values.
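To make the recipe concrete, here is a sketch of that per-galaxy probability in Python, under the assumptions just stated (z range of 3.0, n = range/increment, weight applied as an exponent). The function and constant names are mine, not established notation:
[code]
from math import factorial

KARLSSON = [0.06, 0.30, 0.60, 0.96, 1.41, 1.96, 2.64]

def p_one_galaxy(z_values, karlsson=KARLSSON, z_range=3.0):
    """P_1G as defined above: r! times the product of (1/n_ik)^wk."""
    p = float(factorial(len(z_values)))
    for z in z_values:
        miss = min(abs(z - k) for k in karlsson)    # miss-distance to nearest Karlsson value
        n = z_range / (2.0 * max(miss, 1e-9))       # guard against an exact Karlsson match
        w = min(1.201 * z, 1.201)                   # weight from the non-uniform z distribution
        p *= (1.0 / n) ** w
    return p
[/code]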
Any problems with that folks?
Now the probability of finding a particular observation, given the mainstream's assumptions, obviously goes up with the number of quasar/galaxy associations that are studied. So what is the probability of seeing a given observation if we were to look at all the quasar/galaxy associations that are possible in the sky? I think this is a useful test of whether finding half a dozen (or so) very low probability cases after examining only a fraction of the possible quasar/galaxy associations is indicative of a problem. If the probability of finding such a case is still very low even if we looked at all the possible associations, then it's highly likely there is a problem in the mainstream's theory regarding the cause of redshift (for quasars at least). Whether the mainstream proponents will admit this or not is another matter.
So the next question to answer is what is the maximum possible number of galaxies in the sky with r associated quasars? I shall call that quantity N_maxGwithr.
To find this, I will be assuming a number of things (some revised from previous calculations as well, based on better information). Those I'm debating are always free to offer specific alternative values for these parameters. If they don't, I can only assume they agree with them and that any complaints are merely hand-waving in stubborn defense of the mainstream theory.
First, the SDSS survey is a relatively complete sampling of all observable quasars. The SDSS website indicates it accounts for over 90% in the areas that have been surveyed. But I'm going to conservatively assume that they only found 75% of the quasars that exist and could be observed in that survey. So I will increase the number of quasars SDSS found in the portion of the sky they surveyed by 33% and then use that to compute the number of observable quasars across the entire sky.
Now the SDSS surveyors say (according to http://cas.sdss.org/dr6/en/sdss/release/ ) that their DR6 effort (the latest) found 104,140 quasars over a 6860 deg^2 area. That works out to about 15 quasars deg^-2. Following what I stated above, I'm going to increase that to 20 quasars deg^-2. Now the surface of a sphere covers about 4*PI*(180/PI)^2 = 41,250 square degrees, so if there are 20 quasars per square degree (over the range of magnitudes we can observe) then there are a possible 825,000 observable quasars in the sky. Anyone want to claim that's not a conservative estimate for the total number of observable quasars in the sky?
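A quick arithmetic check of that sky-coverage estimate (same assumed densities as above):
[code]
from math import pi

sky_sq_deg = 4 * pi * (180 / pi) ** 2       # ~41,253 square degrees over the whole sky
quasars_per_sq_deg = 20                     # ~15/deg^2 from SDSS DR6, bumped up by ~33%
total_quasars = sky_sq_deg * quasars_per_sq_deg
print(round(sky_sq_deg), round(total_quasars))   # 41253, 825059 (rounded to 825,000 above)
[/code]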
Next, there is the question of how those quasars are distributed with respect to low redshift galaxies and to each other. For now, I will conservatively assume that only half of them are near low redshift galaxies. I think that's a VERY conservative assumption and would be very interested in any data that would further refine that parameter. After all, there aren't that many low redshift galaxies. In fact, I showed earlier in the thread (post #223) that perhaps no more than 1 percent of galaxies lie within z = 0.01, which equates to 1-2 galaxies per square degree of sky. I found a source that indicated most of the galaxies in near field surveys are smaller than 30-40 arcsec in diameter ... meaning they occupy only a fraction of any given square degree field (because 1 arcsec is 1/3600th of a degree). I noted a source that indicates even in galaxy groups, the distance between galaxy members is typically 3-5 times the galaxy diameter. I noted that even the large, naked eye, Andromeda galaxy ... our nearest neighbor ... only has an apparent diameter of 180 arcmin (about 3 degrees). And I noted that NGC 3516, one of the cases I calculate, has an apparent diameter of only a few arcmin. It may be typical of the specific galaxies I am looking at. So with only 20 quasars per square degree of sky on average, does everyone agree I'm very conservative in assuming only half of all quasars lie near low redshift galaxies like those in each of the cases of interest? If so, then that brings us down to 413,000 quasars in the population of interest. And I suspect the number really should be smaller.
And how are those quasars distributed amongst the low redshift galaxies? In other words, are they spread out evenly over 413,000 different galaxies or do they all lie near one galaxy? Now previously, I assumed that half the quasars are in groups of r. That would have meant that the maximum possible number of quasar/galaxy associations I would use in the following calculations would be 207,000/r. But I think that is far too conservative an assumption ... that high-r associations are much rarer than I assumed. And I think the proof of that is how few high-r cases can be specifically identified by anyone. And certainly the Arp et al. community would like to list as many as possible.
So, instead, I'm going to assume that at each r level, one third of the remaining quasars are distributed at that r level (a short sketch of this allocation follows the list below). I think this will be much more consistent with the mainstream assumption that quasars have no connection to each other or to low redshift galaxies. Thus, at r=1 (in other words, where there is only 1 quasar near a low redshift galaxy), 137,000 of the quasars will be distributed. That leaves 275,000 quasars. And after distributing the r=2 quasars, there are 184,000 left. And then one-third of those will be in r=3 associations. That means there are 61,333 quasars in r=3 associations, for a total of 20,444 possible r=3 quasar/galaxy associations in the sky. That leaves 122,000 quasars still to distribute. One third of those in r=4 associations means there are 10,166 possible r=4 cases. And if one continues this logic, one arrives at the following number of possible quasar/galaxy associations for each r of interest:
N_maxGwithr =
20444 at r=3
10166 at r=4
5433 at r=5
3018 at r=6
1724 at r=7
1000 at r=8
596 at r=9
358 at r=10
216 at r=11
132 at r=12
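As promised above, here is a short sketch of that allocation (in Python; the counts differ slightly from my table because I rounded at each step when doing it by hand):
[code]
# "One third of the remaining quasars at each r level", starting from the
# ~413,000 (825,000/2) quasars assumed to lie near low redshift galaxies.
def associations_by_r(total=412_500, max_r=12):
    remaining = total
    table = {}
    for r in range(1, max_r + 1):
        at_this_r = remaining / 3.0       # quasars assigned to r-quasar associations
        table[r] = round(at_this_r / r)   # number of galaxies with r quasars each
        remaining -= at_this_r
    return table

for r, count in associations_by_r().items():
    if r >= 3:
        print(f"r={r}: {count}")   # ~20370, 10185, 5432, 3018, 1724, 1006, 596, 358, 217, 132
[/code]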
Do any of my opponents on this thread want to disagree with this assumption? If so, please provide a specific distribution that I should use and tell us why you think that's more reasonable. If you don't, I can only assume you think this one is reasonable, or at least that it puts the best face on things from the standpoint of the results you would like to see.
So we arrive at the final probability of seeing the specific values of z around each of the cases in question were we to examine all the possible quasar/galaxy associations in the sky assuming the mainstream's theory is correct:
P_zTotal = P_1G * N_maxGwithr
I believe that if this probability is small (<1), then this is a good indication that the mainstream model regarding the origin of quasar z in these specific observations (and perhaps more generally) is badly flawed. And the smaller the probability, the more likely it is that quasar z values are instead quantized.
But I have one last factor to add: the likelihood of seeing that observation's particular alignment of quasars relative to the minor axis. Since quasar positions are independent of z under the mainstream model, this probability should be multiplicative with P_zTotal. Now that I think about it some more, I don't like the way I calculated this probability earlier.
I want to find the probability of a specific observation being encountered given the number of quasars observed to be within a 15 degree zone centered about the minor axis. I showed earlier there is a probability of only 0.083 (based on area) that any given quasar will lie in that zone, assuming quasars are positioned randomly like darts (which is the mainstream assumption). And since quasar locations are assumed to be independently located in the mainstream theory, those probabilities are multiplicative.
But if there are quasars in the observation outside the 15 degree zone, the probability of encountering that observation should be increased. In fact, for every quasar outside the zone, I believe the probability of encountering that observation should be increased by a factor of 1/0.917 = 1.091, up to a maximum probability of 1. Thus, I will calculate the probability of seeing the alignment in a single observation as
P_afor1G = 0.083^(number of aligned quasars) * 1.091^(number of non-aligned quasars) ; P_afor1G <= 1 .
In calculating this probability, I will assume (I think conservatively) that any quasars near a galaxy and whose location with respect to the minor axis is unknown are non-aligned.
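Here is a sketch of that single-observation alignment probability (Python; the function name is mine):
[code]
# 0.083 per quasar inside the 15-degree minor-axis zone, a 1/0.917 ~ 1.091
# boost per quasar outside it, capped at a probability of 1.
def p_alignment_one_galaxy(n_aligned: int, n_not_aligned: int) -> float:
    return min(1.0, 0.083 ** n_aligned * 1.091 ** n_not_aligned)

print(p_alignment_one_galaxy(5, 0))   # ~3.9e-06 (the NGC 3516 case below)
print(p_alignment_one_galaxy(4, 3))   # ~6.2e-05 (the NGC 5985 case below)
[/code]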
Now again, the probability of seeing that alignment observation if we examined every possible quasar/galaxy alignment with r quasars might be a good indicator of whether there is a problem with the mainstream model for generating quasars. So I will calculate
P_aTotal = P_afor1G * N_maxGwithr .
If the probability is < 1, then that is an indication the mainstream model is wrong. In such a case, seeing multiple highly unlikely examples of quasars that are aligned with the minor axis would be an indication that there is some underlying physics producing that phenomenon which the mainstream is ignoring. And the Narlikar/Arp et al. model might describe that physics and therefore deserve a much closer look by the mainstream.
And since the alignment and z phenomena are independent according to the mainstream model,
P_Total = P_zTotal * P_aTotal
And, by the way, with regard to both P_zTotal and P_aTotal, keep in mind that Arp et al. have not examined a large fraction of the total number of possible quasar/galaxy associations with r quasars. In fact, the probability of encountering the group of observations studied here will decrease in direct proportion to the percentage of the total possible quasar/galaxy associations that has actually been studied. This even further strengthens my argument.
Now can we all agree on this approach, or not? Does anyone have any suggestions to improve it ... to make it more accurate? Would anyone like to adjust one of the parameters or equations? Speak up and don't be coy. Or I'll rightly assume you agree with all aspects of this method.
Now, with the above as the basis, I will demonstrate that correcting the errors in my method had no significant impact on the overall conclusion I drew from my original calculations. I will redo the full calculation for each of the observations I've previously analyzed. This should serve as a nice summary of the overall method and my current results to this point, and hopefully act as a basis for further debate on this subject.
Let's start with NGC 3516.
In this case, observed z = 0.33, 0.69, 0.93, 1.40, 2.10 . The Karlsson z = 0.3, 0.6, 0.96, 1.41, 1.96, 2.64 . Therefore, the spacings are +0.03, +0.09, -0.03, -0.01, +0.14 . Doubling the spacings gives the increment width for each data point = 0.06, 0.18, 0.06, 0.02, 0.28 . The n corresponding to each increment is (3/0.06), (3/0.18), (3/0.06), (3/0.02), (3/0.28) = 50, 16, 50, 150, 10 . The weighting factor for each data point is 0.40, 0.83, 1.12, 1.201, 1.201 .
Thus, P_1G = 5! * (1/50)^0.40 * (1/16)^0.83 * (1/50)^1.12 * (1/150)^1.201 * (1/10)^1.201 = 120 * 0.209 * 0.100 * 0.0125 * 0.0024 * 0.063 = 4.7 x 10^-6 (compared to 2 x 10^-6 without the weighting factors).
Now with N_maxGwith5 = 5433, P_zTotalNGC3516 = 4.7 x 10^-6 * 5433 = 0.026 .
Now to factor in the alignment probability. In this case, I haven't found anything to suggest there are other quasars besides the 5 identified by Arp et al. near this galaxy. If anyone out there can show there are additional quasars within 50 arcmin (given that the galaxy is only a few arcmin across), I will add them to the calculation as non-aligned if we don't know their location, or if we do know it and they aren't aligned. Until then, it looks like
P_aTotalGNGC3516 = 0.083^5 * 5433 = 0.021 .
Thus,
P_TotalGNGC3516 = 0.026 * 0.021 = 0.00055 .
Wow! That's very small considering it assumes all the possible quasar/galaxy associations in the sky have been examined.
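For anyone following along with the sketch functions defined earlier (p_one_galaxy, associations_by_r, p_alignment_one_galaxy), here is the NGC 3516 case end to end. The exact-arithmetic results differ slightly from my hand calculation above, which rounds the n values and weights at each step:
[code]
ngc3516_z = [0.33, 0.69, 0.93, 1.40, 2.10]

p_1g = p_one_galaxy(ngc3516_z)                       # ~4.4e-06 (hand calculation: 4.7e-06)
n_max = 5433                                         # N_maxGwithr at r=5, from the list above
p_z_total = p_1g * n_max                             # ~0.024 (hand calculation: 0.026)
p_a_total = p_alignment_one_galaxy(5, 0) * n_max     # ~0.021
print(p_z_total, p_a_total, p_z_total * p_a_total)   # overall ~0.0005 (hand: 0.00055)
[/code]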
Now consider NGC 5985.
In this case, observed z = 0.69, 0.81, 1.90, 1.97, 2.13 according to http://articles.adsabs.harvard.edu//full/1999A&A...341L...5A/L000006.000.html . With Karlsson z = 0.06, 0.3, 0.6, 0.96, 1.41, 1.96, 2.64 , the spacings to the nearest Karlsson values are +0.09, -0.15, -0.06, +0.01 and +0.17 . The increments are 0.18, 0.30, 0.12, 0.02, 0.34 , so the n values are 16, 10, 25, 150 and 8 . The weighting factors are 0.83, 0.97, 1.201, 1.201, 1.201 .
Thus, P_1G = 4! * (1/16)^0.83 * (1/10)^0.97 * (1/25)^1.201 * (1/150)^1.201 * (1/8)^1.201 = 24 * 0.100 * 0.107 * 0.021 * 0.0024 * 0.082 = 6.8 x 10^-7
P_zTotalNGC5985 = 6.8 x 10^-7 * 10166 = 0.007 .
Of the above 5 quasars, 4 are aligned. In addition, DRD provided a source that suggested the existence of two more quasars; with no information about them besides their z, they will be assumed non-aligned. Using the number of possible cases with 8 quasars:
P_aTotalGNGC5985 = 0.083^4 * 1.091^3 * 1000 = 0.062 .
Thus,
P_TotalGNGC5985 = 0.007 * 0.062 = 0.00043 .
Even smaller! So now it's a little hard to believe we just got lucky in seeing these observations in the first place.
Now consider NGC 2639.
In this case, http://www.journals.uchicago.edu/doi/abs/10.1086/421465 identifies observed quasar z = 0.305, 0.323, 0.337, 0.352, 1.304, 2.63 . With Karlsson z = 0.06, 0.3, 0.6, 0.96, 1.41, 1.96, 2.64 , the spacings to the nearest Karlsson values are +0.005, +0.023, +0.037, +0.052, -0.106, -0.01 . The increments are 0.01, 0.046, 0.074, 0.104, 0.212, 0.02 , so the n values are 300, 65, 40, 28, 14, 150 . The weighting factors are 0.366, 0.388, 0.405, 0.422, 1.201, 1.201 .
Thus, P_1G = 6! * (1/300)^0.366 * (1/65)^0.388 * (1/40)^0.405 * (1/28)^0.422 * (1/14)^1.201 * (1/150)^1.201 = 720 * 0.123 * 0.198 * 0.224 * 0.245 * 0.042 * 0.0024 = 9.7 x 10^-5 (compared to the unweighted probability of 1.6 x 10^-8!)
P_zTotalNGC2639 = 9.7 x 10^-5 * 3018 = 0.29 (and it may be much smaller than that if my weighting method is faulty).
Of the above 6 quasars, 5 are aligned. The paper also mentions some 3 other x-ray sources lying along the axis but I shall ignore them.
P_aTotalGNGC2639 = 0.083^5 * 1.091^1 * 3018 = 0.013 .
Thus,
P_TotalGNGC2639 = 0.29 * 0.013 = 0.0038 .
Again, a probability that is very small considering that it assumes that all the possible quasar/galaxy associations have been examined. Now are you starting to get the picture, folks?
Now what about NGC 1068?
Recall that http://www.journals.uchicago.edu/doi/abs/10.1086/311832 and http://arxiv.org/abs/astro-ph/0111123 and http://www.sciencedirect.com/scienc...serid=10&md5=596c8badf26d1a60f6786ae0bfcae1d6 collectively list 12 quasars with z = 0.261, 0.385, 0.468, 0.63, 0.649, 0.655, 0.684, 0.726, 1.074, 1.112, 1.552 and 2.018 . With Karlsson z = 0.06, 0.3, 0.6, 0.96, 1.41, 1.96, 2.64 ... the distance to the nearest Karlsson value for each quasar is -0.039, +0.085, +0.132, +0.03, +0.049, +0.055, +0.084, +0.126, +0.104, +0.152, +0.142, +0.058 . That makes the increments 0.078, 0.170, 0.264, 0.06, 0.098, 0.11, 0.168, 0.252, 0.208, 0.304, 0.284, 0.116 and the n values 38, 17, 11, 50, 30, 27, 17, 11, 14, 9, 10, 25 . The weights for each are 0.313, 0.462, 0.562, 0.76, 0.78, 0.79, 0.82, 0.87, 1.201, 1.201, 1.201, 1.201 .
Thus, P_1G = 12! * (1/38)^0.313 * (1/17)^0.462 * (1/11)^0.562 * (1/50)^0.76 * (1/30)^0.78 * (1/27)^0.79 * (1/17)^0.82 * (1/11)^0.87 * (1/14)^1.201 * (1/9)^1.201 * (1/10)^1.201 * (1/25)^1.201 = 479,001,600 * 0.320 * 0.270 * 0.260 * 0.051 * 0.070 * 0.074 * 0.100 * 0.124 * 0.042 * 0.005 * 0.063 * 0.021 = 9.7 x 10^-6 (compared to the unweighted probability of 2.8 x 10^-7!)
P_zTotalNGC1068 = 9.7 x 10^-6 * 132 = 0.0012 .
Since I don't really know the alignment of these quasars relative to the minor axis,
P_aTotalGNGC1068 = 1.0
Although as I pointed out previously, there are curious and improbable alignments to be found in this case:
http://www.journals.uchicago.edu/cgi-bin/resolve?ApJ54273PDF "On Quasar Distances and Lifetimes in a Local Model, M. B. Bell, 2001 ... snip ... It was shown previously from the redshifts and positions of the compact, high-redshift objects near the Seyfert galaxy NGC 1068 that they appear to have been ejected from the center of the galaxy in four similarly structured triplets."
But in any case, we currently have
P_TotalGNGC1068 = 0.0012 .
So yet again, an incredibly small probability given that this calculation assumes all possible quasar/galaxy associations have been examined.
Dare we draw any conclusions here, folks? Or will the mainstream community be allowed to simply sweep this under the rug?